Neuroimaging captures words ‘making pictures’ in the brain


For the first time scientists have recorded evidence of which part of the brain reactivates when people hear a newly-learned word associated with a visual image.

A tiger - one example of an object-related word

Using neuroimaging, researchers at Goldsmiths, University of London and Freie Universität Berlin were able to show that when we hear a visual word (for example, ‘tiger’), the part of the brain which normally receives the visual information is reactivated with something like an image of the associated animal.

While this process may sound obvious, a study published in the journal Frontiers in Human Neuroscience by computational cognitive neuroscientist Dr Max Garagnani and colleagues earlier this year provides the first piece of experimental evidence to demonstrate it.

The research team recruited 24 healthy right-handed monolingual native speakers of German (15 women and nine men), aged between 18 and 35, to participate in an experiment, which took place over four consecutive days. Participants underwent training during days one to three and functional magnetic resonance imaging (fMRI) scanning on day four.

Through the training sessions they were asked to learn 64 entirely new, meaningless word-forms: each of these pseudo-words (such as ‘Shruba’ or ‘Flipe’) was randomly paired with a basic object (such as ‘dog’) or action (such as ‘grasping’) category. In each of the three training sessions (each lasting about an hour), every spoken word to be learnt was presented together with the corresponding picture of an object or action, sixteen times, in random order.
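The training design described above can be sketched in a few lines of code. The snippet below is an illustrative reconstruction, not the researchers’ actual stimulus software: the word-forms and categories are placeholder examples, and the function name is hypothetical.

```python
import random

def build_training_session(pseudo_words, categories, presentations=16, seed=0):
    """Randomly pair each pseudo-word with one category, then build a single
    training session in which every word-picture pair is presented
    `presentations` times, in random order (mirroring the study's design)."""
    rng = random.Random(seed)
    shuffled = categories[:]
    rng.shuffle(shuffled)
    # one-to-one random pairing of word-forms with object/action categories
    pairing = dict(zip(pseudo_words, shuffled))
    # each pair occurs `presentations` times within the session
    trials = [(word, cat) for word, cat in pairing.items()
              for _ in range(presentations)]
    rng.shuffle(trials)
    return pairing, trials

# illustrative inputs only -- the actual stimulus set had 64 word-forms
words = ["Shruba", "Flipe", "Kanzo", "Mipple"]
cats = ["dog", "tiger", "grasping", "throwing"]
pairing, trials = build_training_session(words, cats)
print(len(trials))  # 4 words x 16 presentations = 64 trials
```

With the full 64-item stimulus set, the same schedule would yield 1,024 trials per session, repeated across the three training days.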

After training, researchers used fMRI to analyse the participants’ brain responses when they heard one of the new speech items.

Contrary to their prediction, hearing a newly-learnt action-related word did not produce stronger activity in the motor-related cortical areas. However, they found that hearing the newly-learnt object-related word sounds selectively triggered activity in V1, primary visual cortex, as well as secondary and higher visual areas of the brain.

This localised V1 activation was only observed with the pseudo-words associated with the object category, such as a familiar animal. Hearing these spoken items elicited a significantly stronger visual response than when the participants heard the pseudo-words that had been paired with a type of action.

The experiment models features of early stages of language learning, where words are semantically grounded in objects and actions. By documenting the formation of associative semantic links between a novel spoken word form and a basic conceptual category, the study provides experimental support for how word-meaning is acquired in the brain, helping inform our understanding of language learning and development.

Prior neuroimaging studies typically have not required participants to learn entirely new words – they used words which already exist (in English, German or Italian, for example). As a result, their findings were potentially confounded by the memories or information a person may already have associated with a specific word before the experiment.

Dr Max Garagnani, Senior Lecturer in Computer Science at Goldsmiths, said: “The fact that primary sensory cortices kick in when processing aspects of semantics is of utmost importance for the current debate in cognitive neuroscience on the role of semantic grounding.

“While various forms of learning (e.g. combinatorial, inferential, trial and error) might play a role, grounding the meaning of an initial set of words via the correlation between objects in the world and symbol occurrences is one important and necessary stage of language acquisition.

“Our study indicates there is no other way to provide semantic grounding of an initial, basic vocabulary. For the first time, we have shown that it is a link between language and meaning information in the primary visual cortex that emerges as a result of the co-occurrence of words and objects in the world.” 

‘Semantic Grounding of Novel Spoken Words in the Primary Visual Cortex’, by Max Garagnani, Evgeniya Kirilina and Friedemann Pulvermüller, was published in the journal Frontiers in Human Neuroscience. DOI:

Dr Max Garagnani (Department of Computing) and Dr Maria Herrojo-Ruiz (Department of Psychology) lead the MSc Computational Cognitive Neuroscience at Goldsmiths.