Scientists studying how people understand speech have uncovered evidence that challenges long-standing ideas about language in the human brain. New research suggests comprehension may unfold gradually, in a way that closely mirrors how artificial intelligence systems process words and meaning.
The findings point to unexpected similarities between biological and machine intelligence.
Listening and meaning
According to research published in Nature Communications and reported by ScienceDaily, the study tracked brain activity as participants listened to a 30-minute spoken podcast. The work was led by Dr. Ariel Goldstein of the Hebrew University of Jerusalem, alongside researchers from Google Research and Princeton University.
Using electrocorticography, the team measured how different regions of the brain responded over time as language was processed. They found that understanding did not happen all at once, but emerged step by step.
Later stages of brain activity aligned closely with deeper processing layers in advanced AI language models such as GPT-2 and Llama 2, particularly in established language regions including Broca’s area.
A layered process
The researchers observed that early neural signals corresponded to simpler stages of processing, while later signals reflected more complex interpretation based on broader context. This pattern closely resembled how modern AI systems build meaning through multiple internal layers.
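Comparisons like this are typically made with an encoding model: features taken from each layer of the AI system are regressed against the recorded neural response, and the quality of the fit shows how well that layer explains the brain signal. The sketch below is illustrative only, using synthetic data and a simple ridge regression rather than the study's actual pipeline; the idea is that a "deeper" layer whose features drive the response will fit it better than a shallow one.

```python
import numpy as np

rng = np.random.default_rng(0)

n_words, n_feat = 200, 16
# Synthetic stand-ins for per-word features from three model layers
# (illustrative: real studies extract these from an actual language model).
layers = [rng.normal(size=(n_words, n_feat)) for _ in range(3)]

# Synthetic "neural response" driven by the deepest layer's features, plus noise.
weights = rng.normal(size=n_feat)
neural = layers[2] @ weights + 0.5 * rng.normal(size=n_words)

def ridge_fit_corr(X, y, alpha=1.0):
    """Fit ridge regression and return the correlation between
    predicted and observed responses (in-sample, for illustration)."""
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    return np.corrcoef(X @ w, y)[0, 1]

for i, X in enumerate(layers):
    print(f"layer {i}: r = {ridge_fit_corr(X, neural):.2f}")
```

In this toy setup the deepest layer yields the highest correlation, mirroring the paper's finding that later brain responses align best with deeper model representations.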
Goldstein said: “What surprised us most was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models. Even though these systems are built very differently, both seem to converge on a similar step-by-step buildup toward understanding.”
The similarity was strongest in higher-level language regions, where brain responses peaked later and were most closely associated with deeper AI representations.
Challenging old theories
The findings question traditional rule-based theories of language, which emphasise fixed structures such as phonemes and grammatical hierarchies. The researchers found that these classic linguistic features did not explain real-time brain activity as well as context-driven representations derived from AI models.
Instead, the study supports a view of language comprehension as a flexible and statistical process, where meaning gradually emerges from context rather than rigid rules.
Researchers said the results suggest AI systems may serve not only as tools for generating language, but also as models for understanding how the human brain creates meaning.
Opening new doors
To support further research, the team has released a public dataset containing neural recordings and language features from the study. The resource is intended to allow scientists worldwide to test competing theories of language and develop computational models that more closely reflect human cognition.
The researchers said this approach could help bridge neuroscience and artificial intelligence, offering new insight into both fields.
Sources: ScienceDaily, Nature Communications, The Hebrew University of Jerusalem