The Edinburgh Lectures in Language Evolution 2023

The Edinburgh Lectures in Language Evolution are an annual series of talks hosted by the Centre for Language Evolution. Each year, we welcome distinguished visiting speakers to deliver lectures summarising their research and sharing their thoughts on where the field is headed and what exciting breakthroughs are yet to come.

This year’s talk series is presented in partnership with the UK Research and Innovation Centre for Doctoral Training in Natural Language Processing and the University of Edinburgh’s School of Informatics.


Click here to watch on Zoom.

Click here to watch on YouTube.


You can register for the Lectures by providing us with your email address. Links to the Zoom webinars will only be shared with attendees who have given us their email address. The talks will also be live-streamed to YouTube for public viewing and will be available to watch at a later date.

Click here to register.

Dates and details

This year, we will be welcoming four speakers to join us virtually on the first four Mondays of November. All talks take place at 4pm GMT (5pm CET, 11am EST, 8am PST).

  • Steve Piantadosi (UC Berkeley) on November 6th
  • Isabelle Dautriche (Aix-Marseille University) on November 13th
  • Balthasar Bickel (University of Zurich) on November 20th
  • Adele Goldberg (Princeton University) on November 27th

In 2023, the Lectures will take place virtually via Zoom Webinar. Each event will begin with a lecture by the invited speaker, followed by a panel discussion, after which the audience will be invited to join the conversation during a Q&A session.

November 6th: Steve Piantadosi

Steve Piantadosi did his undergraduate work in Mathematics and Linguistics at UNC Chapel Hill before going to graduate school in the Department of Brain and Cognitive Sciences at MIT. There, he worked on psychological models of language and learning, with a focus on early mathematics and conceptual development. He was then a postdoctoral researcher at the University of Rochester, exploring the origins of compositional thinking in infancy. In 2018 he moved to the University of California, Berkeley, where he holds appointments in the departments of psychology and neuroscience. His research now includes field experiments on mathematics with an indigenous community in Bolivia, work on early mathematics learning in children and primates, and modelling work to develop computational accounts of cognition.

Talk details

Syntax and Semantics in the Age of Large Language Models

The recent rise of large language models profoundly changes the landscape for theories of human language. I’ll discuss how these models should cause us to rethink many popular ideas about grammar. I’ll also discuss the ways in which these models implement theories of language and grammar, as well as the links and gaps between these models and child language learning. Despite important differences, I’ll argue that people who care about learning should take LLMs seriously. Modern language models also provide a compelling way to think about semantics and conceptual representation: I’ll argue that the sense in which they possess “meanings” is likely analogous to how human words and concepts achieve meaning, by implementing “conceptual role” theories that are at least partially accessible in the statistics present in text, even without embodied contact with the world.

November 13th: Isabelle Dautriche

Isabelle Dautriche is a cognitive scientist at the Centre National de la Recherche Scientifique (CNRS), based at Aix-Marseille University (France). Her research focuses on the cognitive foundations of language, looking at how children learn language but also at how different species might represent the world in the absence of language.

Talk details

The primitives of semantic computations in human infants and baboons (Papio papio)

Human languages share a number of universal features, from the properties of lexical meanings to the way these meanings are composed and realised on the surface (i.e. word order). In this talk, I will present the results of a research program experimentally investigating the cognitive origins of these shared features of language in non-human primates and in human infants, to determine whether these properties can be found outside language and outside humans. I will report three sets of studies suggesting that baboons (Papio papio) and infants (i) like to manipulate ‘concepts’ of the same shape as ours, (ii) can compose mental representations, and (iii) display attentional preferences consistent with some of the word orders found in language.

November 20th: Balthasar Bickel

Balthasar Bickel got his graduate training in the Cognitive Anthropology group at the Max Planck Institute for Psycholinguistics in Nijmegen and received his PhD in 1997 from the University of Zurich. After postdocs in Mainz and Berkeley and an assistant professorship in Zurich, he became a professor of linguistic typology at the University of Leipzig in 2002. In 2011 he took over the chair in general linguistics at the University of Zurich, where he founded the Department of Comparative Language Science in 2014. Since 2020 he has been the Director of a National Research Center in Switzerland, the NCCR Evolving Language.

Talk details

Biases in Language Evolution

One of the most specific properties of the language phenotype is its relentless change over time and space. Capturing this property is a prerequisite for tracing language’s origins. However, while historical data, cross-linguistic surveys, and phylogenetic models have revealed many intriguing patterns (e.g. the dominance of certain word orders over others), these might fail to characterise the phenotype at the species level, because much of linguistic diversity was lost or deeply transformed in human history, notably in the wake of large Neolithic spreads. In my presentation, I combine phylogenetic modelling of trends in language change with experimental work across both human populations and primate species in order to identify biologically determined biases that shape the language phenotype independently of population history. I will chiefly focus on one bias that evolved prior to language (a hominid preference for animate agents), one bias that was arguably boosted by language (a human preference for self-similar embedding), and one bias that evolved later than language (an agriculturalist preference for labiodentals). These diverse biases illustrate some of the likely transitions in the long and complex trajectory of language evolution.

November 27th: Adele Goldberg

Talk details

Accessibility and Attractor Dynamics over Historical Time

There are times when a curiously odd relic of language presents us with a thread which, when pulled, reveals deep and general facts about human language. This presentation unspools such a case. Prior to 1930, English speakers uniformly preferred male-before-female word order in conjoined nouns such as uncles and aunts; nephews and nieces; men and women. Since then, at least half a dozen items have systematically reversed their preferred order (e.g., aunts and uncles; nieces and nephews), while others have not (men and women). I will suggest that three simple aspects of cognitive accessibility predict the word order of both familiar and novel A&B noun combinations in general, as well as the historical change focused on here: 1) the relative accessibility of the A and B terms individually, 2) competition from B&A order, and, critically, 3) attractor dynamics (i.e., similarity to related A’&B’ cases). The conclusions illustrate important aspects of human memory: a) it is vast; b) many factors influence accessibility, the most important being meaning; and c) similar item-specific memories cluster together and attract new similar cases.