California, US (BBN) – Direct recordings have revealed what is happening in our brains as we make sense of speech in a noisy room.
Focusing on one conversation in a loud, distracting environment is known as “the cocktail party effect”, the BBC reports.
It is a common festive phenomenon and of interest to researchers seeking to improve speech recognition technology.
Neuroscientists recorded from people’s brains during a test that recreated the moment when unintelligible speech suddenly makes sense.
The team measured subjects’ brain activity as the words of a previously unintelligible sentence suddenly became clear, once a subject had heard an intact version of the “garbled speech”.
The findings are published in the journal Nature Communications.
Lead researcher Christopher Holdgraf from the University of California, Berkeley, and his colleagues were able to work with epilepsy patients, who had had a portion of their skull removed and electrodes placed on the brain surface to track their seizures. Seven of these subjects took part in the scientists’ auditory test.
FILTERING THE NOISE
First, the researchers played a very distorted, garbled sentence to each subject, which almost no-one was able to understand.
They then played a normal, easy-to-understand version of the same sentence and immediately repeated the garbled version.
“After hearing the intact sentence,” the researchers explained in their paper, all the subjects understood the subsequent “noisy version”.
The brain recordings showed this moment of recognition as brain activity patterns in the areas of the brain that are known to be associated with processing sound and understanding speech.
When the subjects heard the very garbled sentence, the scientists reported that they saw little activity in those parts of the brain.
Hearing the clearly understandable sentence then triggered patterns of activity in those brain areas.
The scientific revelation was how this exposure then altered the brain’s response when the subject heard the distorted, garbled phrase again.
Auditory and speech processing areas then “lit up” and changed their pattern of activity over time, apparently tuning in to the words among the distortion.
“The brain actually changes the way it focuses on different parts of the sound,” explained the researchers. “When patients heard the clear sentences first, the auditory cortex (the part of the brain associated with processing sound) enhanced the speech signal.”
Mr Holdgraf said: “We’re starting to look for more subtle or complex relationships between the brain activity and the sound.
“Rather than just looking at ‘up or down’, it’s looking at the details of how the brain activity changes across time, and how that activity relates to features in the sound.”
This, he added, gets closer to the mechanisms behind perception.
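The kind of analysis Mr Holdgraf describes, relating brain activity over time to features in the sound, can be illustrated with a toy calculation. This is only a sketch, not the team’s actual method: the slow amplitude “envelope” used as the sound feature, the sample rate, and the noise level are all invented for illustration, and a simple correlation stands in for the much richer encoding models used in such studies.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100                      # sample rate in Hz (hypothetical)
t = np.arange(0, 2, 1 / fs)   # two seconds of "recording"

# Hypothetical sound feature: a slowly varying amplitude envelope,
# the sort of feature speech-tracking analyses often use
envelope = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))

# Simulated neural response: the same envelope plus noise,
# standing in for activity that "tunes in" to the speech signal
response = envelope + 0.3 * rng.standard_normal(t.size)

# Pearson correlation between the sound feature and the response
# quantifies how strongly the activity tracks the sound over time
r = np.corrcoef(envelope, response)[0, 1]
print(round(r, 2))
```

A stronger correlation after hearing the clear sentence, in this framing, would correspond to the auditory cortex “enhancing the speech signal” as the researchers describe.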
“By understanding the ways in which our brains filter out noise in the world,” the researcher concluded, “we hope to be able to create devices that help people with speech and hearing impediments accomplish the same thing.”
“It is unbelievable how fast and plastic the brain is,” added co-author Prof Robert Knight. “(And) this is the first time we have any evidence on how it actually works in humans.”
Prof Knight and his colleagues are aiming to use the findings to develop a speech decoder, a brain implant to interpret people’s imagined speech, which could help those with certain neurodegenerative diseases that affect their ability to speak.
BBN/MS/ANS