Software bug calls MRI scans into question

A recent research paper demonstrates that common software packages for fMRI analysis can produce false-positive rates of up to 70%, calling into question the validity of some 40,000 fMRI studies and their interpretations.

Functional magnetic resonance imaging (fMRI) is a functional neuroimaging technique, built on MRI technology, that measures brain activity by detecting changes associated with blood flow.

Statistical software not validated

The report, ‘Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates’ was published in scientific journal Proceedings of the National Academy of Sciences of the United States of America (PNAS).

“Functional MRI (fMRI) is 25 years old, yet surprisingly its most common statistical methods have not been validated using real data,” the report states. Scientists from Swedish and English universities took resting-state fMRI data from 499 healthy controls to conduct 3 million task group analyses.

The researchers expected to find a false-positive rate of around 5%, the nominal significance level, but instead reported false-positive rates of up to 70% in common software packages.
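To see why 5% is the expected baseline, consider a rough illustration (not taken from the paper, and far simpler than a real fMRI cluster analysis): if many group analyses are run on pure noise with a standard two-sided 5% threshold, roughly 5% of them should come back "significant" by chance. The function name and parameters below are illustrative.

```python
import random
import statistics

random.seed(42)

def false_positive_rate(n_tests=2000, group_size=20, z_threshold=1.96):
    """Run many mock 'group analyses' on pure noise and count how often
    the group mean crosses the nominal two-sided 5% threshold."""
    hits = 0
    for _ in range(n_tests):
        # Pure-noise data: no real effect exists in any of these groups.
        sample = [random.gauss(0, 1) for _ in range(group_size)]
        # z-statistic for the group mean (population variance known = 1 here)
        z = statistics.mean(sample) * group_size ** 0.5
        if abs(z) > z_threshold:
            hits += 1
    return hits / n_tests

rate = false_positive_rate()
print(f"empirical false-positive rate: {rate:.3f}")  # close to the nominal 0.05
```

A well-calibrated method should stay near this 5% baseline; the paper's finding was that common cluster-level inference methods strayed far above it on real resting-state data.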

In addition, a 15-year-old software bug was discovered while the research was being conducted.

“Second, a 15-year-old bug was found in 3dClustSim while testing the three software packages (the bug was fixed by the AFNI group as of May 2015, during preparation of this manuscript). The bug essentially reduced the size of the image searched for clusters, underestimating the severity of the multiplicity correction and overestimating significance,” the report stated.

Moving forward

The researchers conclude their paper by acknowledging that it is not possible to redo 40,000 fMRI studies, and “lamentable archiving and data-sharing practices mean most could not be reanalyzed either.”

The researchers instead urge the fMRI community to focus on validating existing methods and to promote open data-sharing practices.

“As no analysis method is perfect, and new problems and limitations will be certainly found in the future, we commend all authors to at least share their statistical results [e.g., via] and ideally the full data [e.g.,]. Such shared data provide enormous opportunities for methodologists, but also the ability to revisit results when methods improve years later,” the report concludes.


Edited from source by Cecilia Rehn.