Commentary

Why are psychiatric imaging methods clinically unreliable? Conclusions and practical guidelines for authors, editors and reviewers

Stefan Borgwardt1*, Joaquim Radua2, Andrea Mechelli2 and Paolo Fusar-Poli2

Author Affiliations

1 Department of Psychiatry, University of Basel, Petersgraben 4, Basel, CH 4031, Switzerland

2 Department of Psychosis Studies, Institute of Psychiatry, King’s College London, De Crespigny Park, London, SE5 8AF, UK


Behavioral and Brain Functions 2012, 8:46  doi:10.1186/1744-9081-8-46

The electronic version of this article is the complete one and can be found online at: http://www.behavioralandbrainfunctions.com/content/8/1/46


Received: 8 May 2012
Accepted: 24 August 2012
Published: 1 September 2012

© 2012 Borgwardt et al.; licensee BioMed Central Ltd.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

No anatomical or functional alterations have yet been reliably confirmed in psychiatric neuroimaging. However, the field can become reliable, with a translational impact on clinical practice, if crucial methodological issues are addressed. We provide guidelines for authors, editors and reviewers on the implementation and evaluation of neuroimaging studies, to help psychiatric neuroimaging become something more than basic neuroscience.

Keywords:
Guidelines; Neuroimaging; Magnetic resonance imaging; Region-of-interest; Voxel-based morphometry; Meta-analyses

Background

More than three decades after Johnstone’s first computerised axial tomography of the brain of individuals with schizophrenia [1], no consistent or reliable anatomical or functional alterations have been unequivocally associated with any mental disorder, and no neurobiological alterations have been conclusively confirmed in psychiatric neuroimaging.

A number of methodological problems may underlie the inconsistencies across studies and the difficulty of identifying reliable results. Heterogeneity in psychiatric neuroimaging originates from multiple differences across studies: in the conceptual issues underlying psychiatric diagnoses and psychopathology [2,3]; in the inclusion criteria for, and the clinical characteristics of, psychiatric samples [4]; in the use of different paradigms and designs [4]; and in the use of different forms of image acquisition and image analysis [5].

Discussion

The latter point is critically addressed by the recent study of Ioannidis [6]. He stated that “the excess significance may be due to unpublished negative results, or it may be due to negative results having been turned into positive results through selective exploratory analyses” [6]. Because of multiple comparisons across different brain regions, reporting of regions of interest (ROIs) can be guided by post-hoc significance of the results, with the whole-brain results remaining unpublished [6]. Additionally, when many ROI analyses can be performed, only the one with the best results may be presented [6]. These practices limit the correct localization of the potential brain abnormalities, which should be based on a whole-brain analysis of the differences between patients and controls. To make an analogy, it is as if an attorney decided to investigate only an arbitrary subgroup of the suspects in a crime, and to withhold any evidence implicating individuals whom he wants to keep untarnished.
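The inflation of false positives produced by selective ROI reporting can be illustrated with a small simulation. This is a minimal sketch: the number of ROIs, sample sizes and t-threshold are illustrative assumptions, not figures taken from Ioannidis or any study discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_rois = 20          # candidate regions of interest (assumed)
n_subjects = 30      # per group (assumed)
n_simulations = 2000
false_positive_studies = 0

for _ in range(n_simulations):
    # No true group difference in any ROI: both groups are drawn
    # from the same distribution.
    patients = rng.normal(size=(n_subjects, n_rois))
    controls = rng.normal(size=(n_subjects, n_rois))
    # Two-sample t statistic per ROI.
    diff = patients.mean(axis=0) - controls.mean(axis=0)
    se = np.sqrt(patients.var(axis=0, ddof=1) / n_subjects
                 + controls.var(axis=0, ddof=1) / n_subjects)
    t = diff / se
    # Selective reporting: publish only the "best" ROI.
    # |t| > 2.0 approximates p < 0.05 (two-tailed, df ~ 58).
    if np.abs(t).max() > 2.0:
        false_positive_studies += 1

rate = false_positive_studies / n_simulations
print(f"Null studies able to report a 'significant' ROI: {rate:.0%}")
```

Even though every per-ROI test holds its nominal 5% error rate, reporting only the most significant of 20 independent ROIs lets roughly 1 − 0.95²⁰ ≈ 64% of null studies present a "positive" finding.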

As Ioannidis acknowledged, these concerns mainly refer to morphometry studies and do not directly extend to automated whole-brain voxel-based studies or functional imaging studies. In particular, voxel-based meta-analyses have the potential to overcome the limited sample size of individual studies, revealing structural differences at specific brain coordinates rather than differences in volumes of pre-specified ROIs. A recently developed meta-analytic method, Signed Differential Mapping [7,8], considers null findings as well and thus attenuates the disproportionate influence of single-study data sets. However, even meta-analyses of voxel-based studies are grounded on the available published results, which often omit null findings. In this regard, it must be noted that no meta-analytic method can detect an abnormality if it is deliberately not reported in the individual studies, e.g. by repeating the analysis with different parameters until the finding disappears. This may be the case for abnormalities in regions not thought to be related to the disorder, which the authors of the studies and the peer-reviewers may “feel” to be false positives or artifacts [8].

Conclusions and practical guidelines for authors, editors and reviewers

Only by overcoming these biases can the results of psychiatric neuroimaging become more reliable and have a translational impact on clinical practice. The study by Ioannidis represents a milestone in psychiatric imaging, pointing to crucial methodological issues at the level of imaging analysis. Whereas Ioannidis makes general recommendations, this manuscript formulates a checklist of practical guidelines for authors, editors and reviewers that are easy to implement and follow. This may ultimately help psychiatric neuroimaging become something more than basic neuroscience:

i. With an increasing number of preprocessing options becoming available, authors should describe the preprocessing of the data in enough detail to allow exact replication;

ii. ROI studies (employing preselected masks or adopting small volume corrections) should first report standard whole-brain results, and acknowledge if no significant clusters were detected at the whole-brain level, before presenting the ROI findings;

iii. Both ROI and whole-brain studies should first report the results significant at p < 0.05 corrected for multiple comparisons (e.g. FWE, FDR, Monte Carlo) before employing more liberal thresholds;

iv. When several ROIs are used, correction for multiple comparisons should be based on a mask which includes all of them rather than considering each ROI separately;

v. Authors should be encouraged to conduct the statistical analyses of the imaging datasets blind, to avoid ROI analyses being constructed post hoc on the basis of the results;

vi. All studies should report a statistical analysis modelling an agreed set of possible confounding variables; these could include, for instance, gender, age and handedness. In addition, studies would have the option of reporting further statistical analyses modelling additional study-specific confounding variables;

vii. All studies should acknowledge the number of analyses or brain correlations performed, giving a clear rationale for each, to avoid the conduct of multiple exploratory analyses with only the most significant result reported;

viii. The potential overlapping of the patient and control group with previously published studies should be clearly acknowledged, and the spatial coordinates always reported, to assist future voxel-based meta-analyses in the field;

ix. Peer reviewers should be as strict when assessing the methods of a study reporting abnormalities in expected brain regions as when assessing the methods of a study reporting no expected findings;

x. Acceptance or rejection of a manuscript should not depend on whether abnormalities are detected or not, nor on the specific brain regions found to be abnormal.
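Guideline iv can be sketched in a few lines: correcting within each ROI separately is more lenient than correcting once over a single mask combining all the ROIs. The p-values and region names below are purely hypothetical, and the Benjamini-Hochberg procedure stands in for whichever correction (FWE, FDR, Monte Carlo) a given study employs.

```python
import numpy as np

def fdr_bh(p_values, alpha=0.05):
    """Benjamini-Hochberg FDR: boolean mask of rejected tests."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    m = len(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest rank passing
        rejected[order[:k + 1]] = True      # reject the k+1 smallest p
    return rejected

# Hypothetical voxel-wise p-values for three ROI masks.
rois = {
    "hippocampus": np.array([0.001, 0.02, 0.04]),
    "amygdala":    np.array([0.03, 0.045, 0.20]),
    "acc":         np.array([0.01, 0.30, 0.50]),
}

# Lenient: correcting within each ROI separately ignores the
# total number of tests across the set of regions examined.
per_roi = {name: fdr_bh(p).sum() for name, p in rois.items()}

# Guideline iv: correct once over one mask combining all ROIs.
combined = np.concatenate(list(rois.values()))
joint = fdr_bh(combined).sum()

print("significant voxels, per-ROI correction:", sum(per_roi.values()))
print("significant voxels, combined-mask correction:", joint)
```

With these toy numbers the per-ROI approach declares more voxels significant than the combined-mask approach, because each separate correction only accounts for the tests inside its own region.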

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

SB and PFP conceived these guidelines and drafted the manuscript. JR and AM helped discuss the limitations of imaging analysis. All authors read and approved the final manuscript.

References

  1. Johnstone EC, Crow TJ, Frith CD, Husband J, Kreel L: Cerebral ventricular size and cognitive impairment in chronic schizophrenia. Lancet 1976, 7992:924-926.

  2. Fusar-Poli P, Broome MR: Conceptual issues in psychiatric neuroimaging. Curr Opin Psychiatry 2006, 19:608-612.

  3. Fusar-Poli P, Broome M, Barale F, Stanghellini G: Why is psychiatric imaging clinically unreliable? Epistemological perspectives in clinical neuroscience. Psychother Psychosom 2009, 78:320-321.

  4. Fusar-Poli P, Allen P, McGuire P: Neuroimaging studies of the early stages of psychosis: a critical review. Eur Psychiatry 2008, 23:237-244.

  5. Fusar-Poli P, Bhattacharyya S, Allen P, Crippa JA, Borgwardt S, Martin-Santos R, Seal M, O’Carroll C, Atakan Z, Zuardi AW, McGuire P: Effect of image analysis software on neurofunctional activation during processing of emotional human faces. J Clin Neurosci 2010, 17:311-314.

  6. Ioannidis JP: Excess significance bias in the literature on brain volume abnormalities. Arch Gen Psychiatry 2011, 68:773-780.

  7. Radua J, van den Heuvel OA, Surguladze S, Mataix-Cols D: Meta-analytical comparison of voxel-based morphometry studies in obsessive-compulsive disorder vs. other anxiety disorders. Arch Gen Psychiatry 2010, 67:701-711.

  8. Radua J, Mataix-Cols D, Phillips ML, El-Hage W, Kronhaus DM, Cardoner N, Surguladze S: A new meta-analytic method for neuroimaging studies that combines reported peak coordinates and statistical parametric maps. Eur Psychiatry 2011, published online June 7.