In-Person
Hosted by: Brain Research Institute

Location: Neuroscience Research Building Auditorium (Room 132)

Abstract: Large language models (LLMs) are the first generation of artificial neural networks to master the structure of human language. In this talk, I develop a computational framework for using LLMs to study the neural basis of real-world human language. I showcase this framework with two examples from recent work: (1) modeling the linguistic features that drive brain-to-brain coupling in dyadic conversations; and (2) modeling the linguistic features that drive functional connectivity between regions of the language network. Ultimately, I argue that the unique success of LLMs demands that they be taken seriously as models for a neuroscientific theory of natural language.