Research may give insight into behaviors such as speech

Mar 8, 2012 10:06 GMT  ·  By

A duo of California Institute of Technology (Caltech) biologists say that they've managed to figure out how the brain processes various types of sound signals, including speech and the information it carries.

One thing that has always puzzled investigators about speech is that the vibrations of the vocal cords do not produce a single type of sound, but rather a vast array of frequencies. How the brain sweeps through all of them has never been understood in detail.

A good example of this is the way someone's voice changes pitch as they speak. To produce this effect, the vocal cords undergo an extremely complex series of changes in the frequencies they generate, which the listener's brain must then track.

The brain needs to be capable of detecting these so-called frequency-modulated (FM) sweeps. Such sweeps are especially prominent in tone languages such as Mandarin Chinese, where the pitch contour of a sound can change its meaning.

In a paper published in the March 8 issue of the journal Neuron, the investigators argue that FM sweep decoding is essential for all languages. Listeners, for example, need to determine whether these frequency changes are ascending or descending.

What the Caltech team did was determine which area of the brain is responsible for sorting FM sweeps into categories. The experiments were carried out in rats.

“This type of processing is very important for understanding language and speech in humans. There are some people who have deficits in processing this kind of changing frequency; they experience difficulty in reading and learning language, and in perceiving the emotional states of speakers,” Guangying Wu explains.

“Our research might help us understand these types of disorders, and may give some clues for future therapeutic designs or designs for prostheses like hearing implants,” adds Wu, a Caltech Broad Senior Research Fellow in Brain Circuitry, and the principal investigator on the new study.

An area located just underneath the cerebral cortex – near the center of the brain – was found to be the point of origin for FM sweep analysis. This area is called the midbrain. “Some people thought this type of sorting happened in a different region, for example in the auditory nerve or in the brain stem,” Wu explains.

Additionally, the research team determined that the direction of an FM sweep strongly shapes how the sound is processed. Some neurons in the midbrain responded preferentially to upward sweeps, while others responded mainly to downward ones.
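The idea of a direction-selective unit can be illustrated in code. The sketch below is not the study's actual analysis; it is a hypothetical toy model in Python that generates a linear FM sweep and classifies it as "up" or "down" by tracking the dominant frequency across successive time windows, much as a direction-selective neuron distinguishes rising from falling pitch. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def fm_sweep(f_start, f_end, duration=0.1, fs=16000):
    """Generate a linear FM sweep (chirp) from f_start to f_end Hz.

    The instantaneous frequency rises (or falls) linearly, so the
    phase is its integral over time.
    """
    t = np.arange(int(duration * fs)) / fs
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * duration))
    return np.sin(phase)

def sweep_direction(signal, fs=16000, n_win=4):
    """Classify a sweep as 'up' or 'down' -- a crude stand-in for a
    direction-selective neuron.

    The signal is split into windows; in each window the dominant
    frequency is found via an FFT, and the slope of the peak-frequency
    trajectory gives the sweep direction.
    """
    windows = np.array_split(signal, n_win)
    peaks = []
    for w in windows:
        spectrum = np.abs(np.fft.rfft(w * np.hanning(len(w))))
        freqs = np.fft.rfftfreq(len(w), 1 / fs)
        peaks.append(freqs[np.argmax(spectrum)])
    slope = np.polyfit(range(n_win), peaks, 1)[0]
    return "up" if slope > 0 else "down"

print(sweep_direction(fm_sweep(500, 2000)))   # rising pitch -> "up"
print(sweep_direction(fm_sweep(2000, 500)))   # falling pitch -> "down"
```

A real midbrain neuron, of course, achieves this selectivity through the timing of excitatory and inhibitory inputs rather than an explicit FFT; the sketch only captures the input-output behavior.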

“Our findings suggest that neural networks in the midbrain can convert from non-selective neurons that process all sounds to direction-selective neurons that help us give meanings to words based on how they are spoken. That's a very fundamental process,” Wu concludes.