May 6, 2011 15:06 GMT
Image showing brain areas more active in controls than in schizophrenia patients during a working memory task

A collaboration of researchers in the United States recently developed a neural network, which they then used to gain a deeper understanding of the causes underlying the development of schizophrenia.

The team behind the new work says they used the network to simulate what happens in the human brain during an excessively high release of the neurotransmitter dopamine. What they saw puzzled them.

Apparently, when the human brain is flooded with dopamine, it tends to start recalling memories in a manner resembling that seen in schizophrenia patients. The scientists say these conclusions provide an interesting insight into how the mental condition develops.

The new work was carried out by researchers at the University of Texas at Austin (UTA) and Yale University, PsychCentral reports. Details of the investigation appear in the latest issue of the esteemed medical journal Biological Psychiatry.

“The hypothesis is that dopamine encodes the importance, the salience, of experience. When there’s too much dopamine, it leads to exaggerated salience, and the brain ends up learning from things that it shouldn’t be learning from,” explains Uli Grasemann.

The expert, who is a UTA Department of Computer Science graduate student, adds that schizophrenia also involves a hyper-learning component, in which people with the disease lose the ability to forget or ignore as much information as they should.

It's unnatural for the human brain to remember everything it sees and all the stimuli it's exposed to. In schizophrenia patients, this may be happening on a large scale, and it's understandable how such an overload could contribute to their symptoms.
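One way to picture the exaggerated-salience hypothesis is to treat dopamine as a multiplier on how strongly each experience updates the network's memory. The sketch below is only an illustration of that idea, not the study's actual model: the `dopamine` factor, the linear update rule, and all names here are assumptions made for the example.

```python
# Hedged sketch: dopamine as a salience multiplier on learning strength.
# Everything here (names, update rule, values) is illustrative, not the
# model used in the Biological Psychiatry study.
import numpy as np

def update_memory(weights, inputs, target, base_lr, dopamine):
    """One learning step whose strength is scaled by a dopamine-like signal.

    dopamine ~ 1.0  -> normal learning
    dopamine >> 1.0 -> hyper-learning: even trivial experiences
                       leave strong memory traces.
    """
    prediction = weights @ inputs
    error = target - prediction
    effective_lr = base_lr * dopamine  # exaggerated salience inflates the step
    return weights + effective_lr * error * inputs

rng = np.random.default_rng(1)
w_normal = rng.normal(size=3)
w_flooded = w_normal.copy()

x, t = np.array([0.2, -0.1, 0.05]), 0.0  # a weak, "ignorable" stimulus
w_normal = update_memory(w_normal, x, t, base_lr=0.1, dopamine=1.0)
w_flooded = update_memory(w_flooded, x, t, base_lr=0.1, dopamine=10.0)
# The "flooded" weights shift far more in response to the same trivial
# input, learning from something the network should have ignored.
```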

“With neural networks, you basically train them by showing them examples, over and over and over again. Every time you show it an example, you say, if this is the input, then this should be your output, and if this is the input, then that should be your output,” Grasemann explains.

“You do it again and again thousands of times, and every time it adjusts a little bit more towards doing what you want. In the end, if you do it enough, the network has learned,” he goes on to say.
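Grasemann's description is the standard supervised training loop. As a concrete illustration, here is a minimal sketch of such a loop for a toy two-layer network in NumPy; the task (learning XOR pairs), the network size, and the hyperparameters are illustrative choices, not details from the study.

```python
# Minimal supervised training loop in the spirit of Grasemann's description:
# show input/output examples again and again, adjusting the weights a little
# each time. The XOR task and all settings here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Example pairs: "if this is the input, then this should be your output".
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Tiny two-layer network with random initial weights.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # how much the network "adjusts a little bit" on each pass

# Show the examples thousands of times.
for epoch in range(10_000):
    # Forward pass: the network's current output for every example.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # The gap between actual and desired output drives the adjustment.
    err = out - y

    # Backpropagate and nudge the weights toward the desired outputs.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    W1 -= lr * X.T @ grad_h

print(np.round(out, 2))  # after enough passes, outputs approach [0, 1, 1, 0]
```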

The neural network the team used in its research was developed by UTA professor Risto Miikkulainen, PhD, who is also Grasemann's supervisor.