Deciphering Words from Brain Activity
This approach could help people who are unable to speak
For individuals with congenital or acquired speech impairments, communicating with those around them can be very difficult. Now, a team of experts believes it is on the right track toward developing a solution that would enable these people to talk to others as they once did.

A new system being developed at the University of California, Berkeley (UCB) shows great promise in deciphering and interpreting neural activity in the brain's temporal lobe, where much of the auditory system's signal processing takes place.
Even when a person is unable to speak, the words still form inside their head, and UCB neuroscientists believe they may be able to use a brain imaging technique to pick up those words as they form.
If that turns out to be possible, investigators will have overcome one of the major communication barriers associated with conditions such as stroke or paralysis. Decoding the electrical activity of the temporal lobe was a critical step in this study.
In a series of experiments, participants listened to normal conversation while the team correlated the brain activity it recorded with the sounds being spoken. Eventually, this allowed the researchers to predict the words the test subjects had heard based only on their neural activity patterns.
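The decoding idea described above can be sketched, in a deliberately simplified form, as a linear "stimulus reconstruction" model: learn a mapping from neural recordings to sound features during listening, then apply it to new brain activity alone. Everything below (electrode and frequency-bin counts, ridge regression, synthetic data) is an illustrative assumption, not the study's actual method or data.

```python
import numpy as np

# Illustrative sketch only: a linear map from neural activity
# (electrode recordings) to sound features (e.g. spectrogram bins),
# fit while "listening", then used to predict sound features from
# brain activity alone. All data here are synthetic.

rng = np.random.default_rng(0)

n_samples, n_electrodes, n_freq_bins = 500, 16, 8

# A hypothetical (unknown) linear relationship between brain and sound
true_weights = rng.normal(size=(n_electrodes, n_freq_bins))
neural = rng.normal(size=(n_samples, n_electrodes))        # brain recordings
sound = neural @ true_weights + 0.1 * rng.normal(size=(n_samples, n_freq_bins))

# "Correlate" the two during listening: fit weights by ridge regression
lam = 1.0
W = np.linalg.solve(neural.T @ neural + lam * np.eye(n_electrodes),
                    neural.T @ sound)

# Predict sound features from new neural activity alone
new_neural = rng.normal(size=(10, n_electrodes))
predicted_sound = new_neural @ W

# With enough training data, W closely approximates true_weights,
# so predicted_sound tracks what the sound features "should" be.
```

In the real study the features and models are far richer, but the design choice is the same: a decoder trained on heard speech could, in principle, later be applied to internally generated activity.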
“This research is based on sounds a person actually hears, but to use it for reconstructing imagined conversations, these principles would have to apply to someone’s internal verbalizations,” explains the first author of the new study, Brian N. Pasley.
The expert, a postdoctoral researcher at UCB, adds that recent studies have revealed an interesting correlation between the way sound is heard and the way sound is imagined in the human brain. Apparently, the two processes activate similar regions of the cortex.
“If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device,” Pasley says.
The UCB team worked closely with colleagues at UC San Francisco (UCSF), the University of Maryland, and Johns Hopkins University on this research. The work is published in the January 31 issue of the peer-reviewed, open-access journal PLoS Biology.