Amplifying them yields wrong results

Apr 28, 2009 11:01 GMT
Correctly reading fMRI data is a tricky business, on account of the large volume of information that this type of scan generates

Experts at the National Institute of Mental Health (NIMH) in Bethesda, Maryland, have recently made a serious claim that could have far-reaching implications for the scientific community. They argue that neuroimaging studies published in 2008, in some of the world's most prestigious science journals, may have been biased, although not intentionally. It would seem that methodological errors in the way scientists conduct this type of research have not only infiltrated their work, but have also passed through the peer-review filters that important publications employ to ensure the accuracy of the information they publish.

Apparently, the famous journals Nature, Science, Nature Neuroscience, Neuron and The Journal of Neuroscience have all published neuroimaging papers whose analyses may not fully warrant their conclusions, an NIMH team led by Nikolaus Kriegeskorte and Chris Baker has found. The investigation analyzed some 134 functional Magnetic Resonance Imaging (fMRI) studies published in 2008, and found that 57 of them included a “non-independent selective analysis,” while in another 20 cases the authors provided too little information for a clear determination.

The phrase “non-independent selective analysis” refers to a flawed methodology in which a research team uses a set of data to select what to analyze, for instance, which brain voxels appear to respond to a stimulus, and then uses the same data to test and verify the working hypothesis. Naturally, the two data sets need to be independent for the conclusion to be above suspicion, but a large number of the published studies did not take this into account. While the results of the investigation do not necessarily mean that the conclusions of the scientific papers are wrong, they do raise serious questions about the peer-review system that prestigious journals employ.

“We are not saying that the papers draw wrong conclusions, because in some cases the error will not have been critical. But in other cases we don't know, and this creates an ambiguity. It is crucial to analyze your results with a set of data that are independent of that used in the earlier selection process. It is even OK to split your total data and use one half to select the voxels, and the other to further analyze the response in these voxels,” Chris Baker says, as quoted by Nature. “It is a poor reflection on the quality of peer review of prestige journals – they really need to up their game in terms of rigour,” adds Karl Friston, scientific director of the Wellcome Trust Centre for Neuroimaging at University College London (UCL).
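Baker's split-half remedy is straightforward to demonstrate. The Python sketch below is purely illustrative, not taken from the NIMH study; every variable name and parameter is an assumption. It simulates pure-noise “voxel” data in which the two experimental conditions are truly identical, then shows that selecting the most responsive-looking voxels and measuring the condition difference on the same trials produces a sizable spurious effect, while selecting on one half of the trials and testing on the other, as Baker suggests, does not.

```python
# Illustrative sketch of "double dipping" bias (not the NIMH team's analysis).
# The data are pure noise: conditions A and B have identical true means, so
# any measured difference on selected voxels should be zero.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 5000, 40   # hypothetical voxel count and trials per condition
top_k = 50                      # number of voxels we "select" as responsive

cond_a = rng.standard_normal((n_voxels, n_trials))
cond_b = rng.standard_normal((n_voxels, n_trials))

def mean_effect(a, b, voxels):
    # Average A-minus-B difference across the selected voxels and their trials.
    return a[voxels].mean() - b[voxels].mean()

# Circular analysis: select voxels and test the effect on the SAME trials.
diff_all = cond_a.mean(axis=1) - cond_b.mean(axis=1)
top = np.argsort(diff_all)[-top_k:]   # voxels with the largest apparent effect
print(f"circular estimate:   {mean_effect(cond_a, cond_b, top):.3f}")

# Independent analysis: select on the first half of trials, test on the second.
half = n_trials // 2
diff_sel = cond_a[:, :half].mean(axis=1) - cond_b[:, :half].mean(axis=1)
top = np.argsort(diff_sel)[-top_k:]
print(f"split-half estimate: {mean_effect(cond_a[:, half:], cond_b[:, half:], top):.3f}")
```

On a typical run, the circular estimate lands well above zero even though no real effect exists, while the split-half estimate hovers near zero, which is precisely the noise-amplifying selection bias the NIMH researchers warn about.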

The error “applies equally to single-unit electrophysiology, electroencephalography, gene microarray studies or even behavioral data,” Baker explains. “For those of us with a few years of fMRI experience, the issue is entirely passé, but there will always be a substantial minority on a steep learning curve. What surprised me is how frequent the errors are,” Friston concludes.