
Commit

Update speakers.yml
samvelyan authored Oct 8, 2024
1 parent 7dad529 commit b915abf
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion sitedata/speakers.yml
@@ -12,7 +12,7 @@
date: Mar 23, 2023
abstract: "Everyone is now talking about Language Models (LMs), and arguing about whether they can reason or produce original content. Is there value in comparing the behavior of language models and humans? And if so, how should we make such comparisons? In this talk, I will draw inspiration from comparative psychology to suggest that careful methods are needed to ensure that comparisons between humans and models are made fairly, and to highlight an important distinction between LMs and cognitive models that can lead to unfair comparisons. But I will also argue that careful comparisons of LMs to humans offer an opportunity to reconsider our assumptions about the origin and nature of human capabilities. I will illustrate these arguments by focusing on two of our recent papers comparing language models to humans: we find that LMs can process recursive grammar structures more reliably than prior work has suggested (https://arxiv.org/abs/2210.15303), and that LMs show human-like content effects on logical reasoning tasks (https://arxiv.org/abs/2207.07051)."
bio: 'Andrew Lampinen is a Senior Research Scientist at DeepMind. He completed his PhD in Cognitive Psychology at Stanford University. Prior to that, his background was in mathematics, physics, and machine learning.'
- video: https://www.youtube.com/watch?v=ff-ip0A40ks
+ video: https://www.youtube.com/embed/ff-ip0A40ks
UID: "27"
- title: "Reinforcement Learning from Human Feedback"
speaker: Nathan Lambert
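The diff replaces the YouTube watch URL with its /embed/ equivalent, the form YouTube expects in an iframe `src`. A minimal sketch of that conversion (an illustrative helper, not code from this repository):

```python
from urllib.parse import urlparse, parse_qs

def to_embed_url(watch_url: str) -> str:
    """Convert a youtube.com/watch?v=... URL to the /embed/ form used in iframes."""
    parsed = urlparse(watch_url)
    video_id = parse_qs(parsed.query)["v"][0]  # extract the video ID from the query string
    return f"https://www.youtube.com/embed/{video_id}"

print(to_embed_url("https://www.youtube.com/watch?v=ff-ip0A40ks"))
# https://www.youtube.com/embed/ff-ip0A40ks
```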
