Journal ArticleDOI

False alarm? A comprehensive reanalysis of "Evidence that psychopathology symptom networks have limited replicability" by Forbes, Wright, Markon, and Krueger (2017).

01 Oct 2017-Journal of Abnormal Psychology (American Psychological Association)-Vol. 126, Iss: 7, pp 989-999
TL;DR: A comprehensive reanalysis assessing the replicability of four different network models for symptoms of major depression and generalized anxiety across two samples led to results directly opposed to those of Forbes et al.
Abstract: Forbes, Wright, Markon, and Krueger (2017) stated that "psychopathology networks have limited replicability" (p. 1011) and that "popular network analysis methods produce unreliable results" (p. 1011). These conclusions are based on an assessment of the replicability of four different network models for symptoms of major depression and generalized anxiety across two samples; in addition, Forbes et al. analyzed the stability of the network models within the samples using split-halves. Our reanalysis of the same data with the same methods led to results directly opposed to theirs: All network models replicated very well across the two data sets and across the split-halves. We trace the differences between Forbes et al.'s results and our own to the fact that they did not appear to accurately implement all network models and used debatable metrics to assess replicability. In particular, they deviated from existing estimation routines for relative importance networks, did not acknowledge the fact that the skip structure used in the interviews strongly distorted correlations between symptoms, and incorrectly assumed that network structures and metrics should be the same not only across the different samples but also across the different network models used. In addition to a comprehensive reanalysis of the data, we end with a discussion of best practices concerning future research into the replicability of psychometric networks. (PsycINFO Database Record)

Summary (1 min read)

Introduction

  • A second issue that the authors consider to qualify as a statistical inaccuracy concerns FWMK’s use of a distorted tetrachoric correlation matrix, which underlies both their factor analyses and their association networks.
  • FWMK’s abstract presents, as a main result, that “only 13-21% of the edges were consistently estimated across these networks”.
  • Analysis code should always be available, as it is needed to replicate and verify reported analyses; the current report illustrates how important this is, and the authors commend FWMK for sharing their code.
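The skip-structure distortion mentioned above is easy to demonstrate with a toy simulation (an illustrative Python sketch, not the authors' R code; the screener logic and numbers are made up): when one symptom is only assessed after another is endorsed, and skipped responses are scored as absent, two truly independent symptoms acquire a strong spurious correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two binary symptoms that are truly independent in the population.
a = rng.binomial(1, 0.5, n)
b = rng.binomial(1, 0.5, n)
true_r = np.corrcoef(a, b)[0, 1]  # close to zero

# Skip structure: symptom B is only assessed when screener symptom A
# is endorsed; skipped responses are imputed as 0 ("symptom absent").
b_skipped = np.where(a == 1, b, 0)
distorted_r = np.corrcoef(a, b_skipped)[0, 1]  # strongly positive
```

The imputed zeros make B appear to depend on A, which is exactly the kind of artifact that propagates into tetrachoric correlation matrices and any networks or factor models estimated from them.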

Estimating network structures

  • E.g., the authors often obtained 35 edges in the NCS DAG network.
  • Co-morbid obsessive-compulsive disorder and depression: a Bayesian network approach.

Replicating errors in Forbes

  • The shape of curves and the placement of nodes might differ across qgraph versions.
  • Error 2: Implausible correlation matrix due to imputation method.
  • Establish correlation matrices using different methods for handling missing data.
  • This shows that the main problem lies in the imputation of zeroes and not in the fact that the nearest positive definite matrix is used.
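To see why smoothing to the nearest positive definite matrix is not the culprit, consider a sketch (illustrative Python, assuming a simple eigenvalue-clipping variant rather than the exact routine used in the paper): repairing an indefinite correlation matrix of the kind pairwise tetrachoric estimation can produce changes its entries only modestly, whereas zero-imputation changes the correlations themselves.

```python
import numpy as np

def nearest_pd_corr(R, eps=1e-6):
    """Clip negative eigenvalues and rescale to a unit diagonal
    (a simplified, one-step stand-in for Higham-style algorithms)."""
    w, V = np.linalg.eigh(R)
    R_pd = V @ np.diag(np.clip(w, eps, None)) @ V.T
    d = np.sqrt(np.diag(R_pd))
    return R_pd / np.outer(d, d)

# A slightly indefinite "correlation" matrix (values are made up)
# of the kind that pairwise tetrachoric estimation can yield.
R = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.9],
              [0.1, 0.9, 1.0]])

R_fixed = nearest_pd_corr(R)
max_change = np.abs(R_fixed - R).max()  # modest entrywise change
```

The repaired matrix is positive definite while its entries stay close to the originals; by contrast, the zero-imputation artifact shown earlier alters the correlations themselves, sometimes drastically.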

Stability Analyses

  • This code uses bootnet to establish stability assessments of the Ising model (default = "IsingFit"); the bootstraps were run on a 24-core supercomputer.
  • Table B1. Summary of split-half comparisons for the NCS-R data.
  • This table matches the analysis reported in Table 3 of Forbes et al. (2017).
  • In addition to the metrics discussed by FWMK (see their Table 2 for detailed explanations), the table reports Pearson correlations between network parameters in the two samples (all > .9), and replication statistics for censored and uncensored relative importance networks as implemented in accordance with Robinaugh et al. (2014).
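The split-half logic behind these comparisons can be mimicked in a few lines (an illustrative Python sketch on simulated data; the paper's actual analyses used bootnet in R): estimate a network in each random half of the sample and correlate the resulting edge weights.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5000, 8

# Simulate binary symptom data from a single latent liability with
# varying loadings, so that edge weights differ across symptom pairs.
liability = rng.normal(size=n)
loadings = rng.uniform(0.8, 1.5, size=p)
X = (liability[:, None] * loadings + rng.normal(size=(n, p)) > 0.5).astype(int)

# Split the sample into two random halves and estimate a simple
# association network (correlation matrix) in each half.
half = rng.permutation(n)
R1 = np.corrcoef(X[half[: n // 2]].T)
R2 = np.corrcoef(X[half[n // 2:]].T)

# Replication metric: Pearson correlation between the edge weights
# (upper-triangle entries) of the two half-sample networks.
iu = np.triu_indices(p, k=1)
edge_corr = np.corrcoef(R1[iu], R2[iu])[0, 1]
```

With adequate sample sizes the edge weights of the two halves correlate very highly, which is the pattern the reanalysis reports (all > .9) for the real data.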


  • Ising model estimated on the NSMHWB data (right panel).
  • Black boxes represent significant differences and gray boxes represent non-significant differences.
  • Given the similarity of the datasets and networks, this is surprising.
  • In NCS, the edge 6—14 is slightly stronger than 4—18, so shortest paths between the two clusters (on which betweenness centrality is based) more consistently go through nodes 6 and 14, irrespective of the particular participants included in the sample.
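The sensitivity of shortest paths, and hence betweenness, to a marginally stronger bridge edge can be reproduced in miniature (an illustrative Python sketch with a made-up four-node graph, not the NCS network; a qgraph-style edge length of 1/weight is assumed):

```python
import heapq

def dijkstra_path(graph, src, dst):
    """Shortest path by Dijkstra; edge length = 1 / weight, so
    stronger edges yield shorter distances (as in qgraph)."""
    dist, prev = {src: 0.0}, {}
    pq, seen = [(0.0, src)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            nd = d + 1.0 / w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def make_graph(bridge_a, bridge_b):
    # Two clusters {1, 2} and {3, 4}, joined by two bridge edges:
    # 1—3 with weight bridge_a and 2—4 with weight bridge_b.
    edges = {(1, 2): 0.5, (3, 4): 0.5, (1, 3): bridge_a, (2, 4): bridge_b}
    g = {node: {} for node in (1, 2, 3, 4)}
    for (u, v), w in edges.items():
        g[u][v] = w
        g[v][u] = w
    return g

# With bridge 1—3 marginally stronger, the cross-cluster path 2 -> 3
# routes through node 1; a tiny flip in the two bridge weights
# reroutes it entirely through node 4.
path_strong_a = dijkstra_path(make_graph(0.31, 0.30), 2, 3)
path_strong_b = dijkstra_path(make_graph(0.30, 0.31), 2, 3)
```

Swapping which bridge is marginally stronger reroutes every cross-cluster shortest path, which is why betweenness estimates for bridge nodes can look unstable across samples even when the underlying networks are nearly identical.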


Figures (5)


UvA-DARE is a service provided by the library of the University of Amsterdam (https://dare.uva.nl)
UvA-DARE (Digital Academic Repository)
False alarm? A comprehensive reanalysis of "evidence that psychopathology
symptom networks have limited replicability" by Forbes, Wright, Markon, and
Krueger (2017)
Borsboom, D.; Fried, E.I.; Epskamp, S.; Waldorp, L.J.; van Borkulo, C.D.; van der Maas,
H.L.J.; Cramer, A.O.J.
DOI
10.1037/abn0000306
Publication date
2017
Document Version
Submitted manuscript
Published in
Journal of Abnormal Psychology
Link to publication
Citation for published version (APA):
Borsboom, D., Fried, E. I., Epskamp, S., Waldorp, L. J., van Borkulo, C. D., van der Maas, H.
L. J., & Cramer, A. O. J. (2017). False alarm? A comprehensive reanalysis of "evidence that
psychopathology symptom networks have limited replicability" by Forbes, Wright, Markon,
and Krueger (2017).
Journal of Abnormal Psychology, 126(7), 989-999. https://doi.org/10.1037/abn0000306
General rights
It is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s)
and/or copyright holder(s), other than for strictly personal, individual use, unless the work is under an open
content license (like Creative Commons).
Disclaimer/Complaints regulations
If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please
let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material
inaccessible and/or remove it from the website. Please Ask the Library: https://uba.uva.nl/en/contact, or a letter
to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You
will be contacted as soon as possible.
Download date: 10 Aug 2022

Seediscussions,stats,andauthorprofilesforthispublicationat:https://www.researchgate.net/publication/320885559
Falsealarm?Acomprehensivereanalysisof
"Evidencethatpsychopathologysymptom
networkshavelimitedre....
ArticleinJournalofAbnormalPsychology·October2017
DOI:10.1037/abn0000306
CITATIONS
8
READS
361
7authors,including:
Someoftheauthorsofthispublicationarealsoworkingontheserelatedprojects:
PersonalizedNetworkModelinginPsychopathology:TheImportanceofContemporaneousand
TemporalConnectionsViewproject
Psychosis:TowardsaDynamicalSystemsApproachViewproject
DennyBorsboom
UniversityofAmsterdam
195PUBLICATIONS8,138CITATIONS
SEEPROFILE
EikoFried
UniversityofAmsterdam
70PUBLICATIONS804CITATIONS
SEEPROFILE
SachaEpskamp
UniversityofAmsterdam
49PUBLICATIONS2,183CITATIONS
SEEPROFILE
LourensWaldorp
UniversityofAmsterdam
97PUBLICATIONS2,169CITATIONS
SEEPROFILE
AllcontentfollowingthispagewasuploadedbyEikoFriedon12November2017.
Theuserhasrequestedenhancementofthedownloadedfile.

1
False alarm?
A comprehensive reanalysis of “Evidence that psychopathology symptom networks have
limited replicability” by Forbes, Wright, Markon, and Krueger.
Denny Borsboom
Eiko I. Fried
Sacha Epskamp
Lourens J. Waldorp
Claudia D. van Borkulo
Han L. J. Van der Maas
University of Amsterdam
Angélique O. J. Cramer
Tilburg University
Word count:
Main text: 7061
Abstract: 223

2
Author note
We would like to thank Richard McNally, Jeroen Vermunt, Claudi Bockting, and Helma van
den Berg for their comments on an earlier draft of this paper, and Ria Hoekstra for her help in
gathering and processing data used in this manuscript. Denny Borsboom, Eiko Fried, Claudia
van Borkulo, and Lourens Waldorp are supported by European Research Council
Consolidator Grant no. 647209. Angélique Cramer is supported by Veni grant no. 451-14-
002 awarded by the Netherlands Organisation for Scientific Research (NWO).
Correspondence concerning this article should be addressed to Denny Borsboom, Department
of Psychology, University of Amsterdam, Nieuwe Achtergracht 129-B, 1018 WT
Amsterdam, The Netherlands, email: dennyborsboom@gmail.com.

3
Abstract
Forbes, Wright, Markon, and Krueger (2017) state that “psychopathology networks have
limited replicability” and that “popular network analysis methods produce unreliable results”.
These conclusions are based on an assessment of the replicability of four different network
models for symptoms of major depression and generalized anxiety across two samples; in
addition, Forbes et al. (2017) analyze the stability of the network models within the samples
using split-halves. Our re-analysis of the same data with the same methods led to results
directly opposed to those of Forbes et al. (2017): All network models replicate very well
across the two datasets and across the split-halves. We trace the differences between Forbes
et al.’s (2017) results and our own to the fact that they did not appear to accurately implement
all network models, and used debatable metrics to assess replicability. In particular, Forbes et
al. (2017) deviate from existing estimation routines for relative importance networks, do not
acknowledge the fact that the skip-structure used in the interviews strongly distorted
correlations between symptoms, and incorrectly assume that network structures and metrics
should not only be expected to be the same across the different samples, but also across the
different network models used. In addition to a comprehensive re-analysis of the data, we end
with a discussion of best practices concerning future research into the replicability of
psychometric networks.

Citations
More filters
Journal ArticleDOI
TL;DR: The authors summarize the history of the unidimensional idea, review modern research into p, demystify statistical models, articulate some implications of p for prevention and clinical practice, and outline a transdiagnostic research agenda.
Abstract: In both child and adult psychiatry, empirical evidence has now accrued to suggest that a single dimension is able to measure a person’s liability to mental disorder, comorbidity among disorders, persistence of disorders over time, and severity of symptoms. This single dimension of general psychopathology has been termed “p,” because it conceptually parallels a dimension already familiar to behavioral scientists and clinicians: the “g” factor of general intelligence. As the g dimension reflects low to high mental ability, the p dimension represents low to high psychopathology severity, with thought disorder at the extreme. The dimension of p unites all disorders. It influences present/absent status on hundreds of psychiatric symptoms, which modern nosological systems typically aggregate into dozens of distinct diagnoses, which in turn aggregate into three overarching domains, namely, the externalizing, internalizing, and psychotic experience domains, which finally aggregate into one dimension of psychopath...

542 citations

Journal ArticleDOI
TL;DR: Critically examines several issues with the use of the most popular centrality indices in psychological networks (degree, betweenness, and closeness centrality) and concludes that betweenness and closeness centrality seem especially unsuitable as measures of node importance.
Abstract: Centrality indices are a popular tool to analyze structural aspects of psychological networks. As centrality indices were originally developed in the context of social networks, it is unclear to what extent these indices are suitable in a psychological network context. In this article we critically examine several issues with the use of the most popular centrality indices in psychological networks: degree, betweenness, and closeness centrality. We show that problems with centrality indices discussed in the social network literature also apply to the psychological networks. Assumptions underlying centrality indices, such as presence of a flow and shortest paths, may not correspond with a general theory of how psychological variables relate to one another. Furthermore, the assumptions of node distinctiveness and node exchangeability may not hold in psychological networks. We conclude that, for psychological networks, betweenness and closeness centrality seem especially unsuitable as measures of node importance. We therefore suggest three ways forward: (a) using centrality measures that are tailored to the psychological network context, (b) reconsidering existing measures of importance used in statistical models underlying psychological networks, and (c) discarding the concept of node centrality entirely. Foremost, we argue that one has to make explicit what one means when one states that a node is central, and what assumptions the centrality measure of choice entails, to make sure that there is a match between the process under study and the centrality measure that is used. (PsycINFO Database Record (c) 2019 APA, all rights reserved).

438 citations


Cites background or result from "False alarm? A comprehensive reanal..."

  • ...Further evidence that psychopathology networks have limited replicability and utility: Response to Borsboom et al. (2017) and Steinley et al. (2017)....

    [...]

  • ...…et al., 2013), low stability in cross-sectional data (Epskamp et al., 2017), or inconsistency in findings regarding the most central node across datasets of similar psychological variables (Bringmann et al., 2016; Forbes, Wright, Markon, & Krueger, 2017, however, see also Borsboom et al., 2017)....

    [...]

Journal ArticleDOI
TL;DR: The importance of future replicability efforts to improve clinical psychological science is discussed and code, model output, and correlation matrices are provided to make the results of this article fully reproducible.
Abstract: The growing literature conceptualizing mental disorders like posttraumatic stress disorder (PTSD) as networks of interacting symptoms faces three key challenges. Prior studies predominantly used (a) small samples with low power for precise estimation, (b) nonclinical samples, and (c) single samples. This renders network structures in clinical data, and the extent to which networks replicate across data sets, unknown. To overcome these limitations, the present cross-cultural multisite study estimated regularized partial correlation networks of 16 PTSD symptoms across four data sets of traumatized patients receiving treatment for PTSD (total N = 2,782). Despite differences in culture, trauma type, and severity of the samples, considerable similarities emerged, with moderate to high correlations between symptom profiles (0.43-0.82), network structures (0.62-0.74), and centrality estimates (0.63-0.75). We discuss the importance of future replicability efforts to improve clinical psychological science and provide code, model output, and correlation matrices to make the results of this article fully reproducible.

283 citations


Cites background from "False alarm? A comprehensive reanal..."

  • ...…matrix, and given that network and factor models are mathematically equivalent under a set of conditions (Epskamp, Maris, Waldorp, & Borsboom, 2016; Kruis & Maris, 2016), generalizability problems for one type of model imply generalizability problems for the other (Borsboom et  al., 2017)....

    [...]

Journal ArticleDOI
TL;DR: An overview and critical analysis of 363 articles produced in the first decade of the network approach to psychopathology is provided, with a focus on key theoretical, methodological, and empirical contributions.
Abstract: The network approach to psychopathology posits that mental disorders can be conceptualized and studied as causal systems of mutually reinforcing symptoms. This approach, first posited in 2008, has grown substantially over the past decade and is now a full-fledged area of psychiatric research. In this article, we provide an overview and critical analysis of 363 articles produced in the first decade of this research program, with a focus on key theoretical, methodological, and empirical contributions. In addition, we turn our attention to the next decade of the network approach and propose critical avenues for future research in each of these domains. We argue that this program of research will be best served by working toward two overarching aims: (a) the identification of robust empirical phenomena and (b) the development of formal theories that can explain those phenomena. We recommend specific steps forward within this broad framework and argue that these steps are necessary if the network approach is to develop into a progressive program of research capable of producing a cumulative body of knowledge about how specific mental disorders operate as causal systems.

272 citations


Cites background from "False alarm? A comprehensive reanal..."

  • ...…2017; Fried, Epskamp, Nesse, Tuerlinckx, & Borsboom, 2016), with some arguing that these methods are inherently unstable (for an extended discussion, see Borsboom, Robinaugh, The Psychosystems Group, Rhemtulla, & Cramer, 2018; Borsboom et al., 2017; Forbes, Wright, Markon, & Krueger, 2017a, 2017b)....

    [...]

  • ...First, researchers have expressed concerns about their replicability (Fried & Cramer, 2017; Fried, Epskamp, Nesse, Tuerlinckx, & Borsboom, 2016), with some arguing that these methods are inherently unstable (for an extended discussion, see Borsboom, Robinaugh, The Psychosystems Group, Rhemtulla, & Cramer, 2018; Borsboom et al., 2017; Forbes, Wright, Markon, & Krueger, 2017a, 2017b)....

    [...]

Journal ArticleDOI
TL;DR: An overview of networks, how they can be visualised and analysed, and a simple example of how to conduct network analysis in R using data on the Theory of Planned Behaviour (TPB).
Abstract: Objective: The present paper presents a brief overview on network analysis as a statistical approach for health psychology researchers. Networks comprise graphical representations of the relationsh...

261 citations


Cites background from "False alarm? A comprehensive reanal..."

  • ...A general concern for networks concerns their replicability (e.g. see Forbes, Wright, Markon, & Krueger, 2017; and responses by Borsboom et al., 2017; Steinley, Hoffman, Brusco, & Sher, 2017) and research needs to address this issue by estimating the stability of the networks and examining…...

    [...]

References
More filters
MonographDOI
TL;DR: A monograph on causal inference covering the theory of inferred causation, causal diagrams, and the identification of causal effects.
Abstract: 1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5. Causality and structural models in the social sciences 6. Simpson's paradox, confounding, and collapsibility 7. Structural and counterfactual models 8. Imperfect experiments: bounds and counterfactuals 9. Probability of causation: interpretation and identification Epilogue: the art and science of cause and effect.

12,606 citations

Journal ArticleDOI
TL;DR: Presents an overview of the World Mental Health Survey Initiative version of the WHO Composite International Diagnostic Interview (CIDI) and discusses the methodological research on which the development of the instrument was based.
Abstract: This paper presents an overview of the World Mental Health (WMH) Survey Initiative version of the World Health Organization (WHO) Composite International Diagnostic Interview (CIDI) and a discussion of the methodological research on which the development of the instrument was based. The WMH-CIDI includes a screening module and 40 sections that focus on diagnoses (22 sections), functioning (four sections), treatment (two sections), risk factors (four sections), socio-demographic correlates (seven sections), and methodological factors (two sections). Innovations compared to earlier versions of the CIDI include expansion of the diagnostic sections, a focus on 12-month as well as lifetime disorders in the same interview, detailed assessment of clinical severity, and inclusion of information on treatment, risk factors, and consequences. A computer-assisted version of the interview is available along with a direct data entry software system that can be used to keypunch responses to the paper-and-pencil version of the interview. Computer programs that generate diagnoses are also available based on both ICD-10 and DSM-IV criteria. Elaborate CD-ROM-based training materials are available to teach interviewers how to administer the interview as well as to teach supervisors how to monitor the quality of data collection.

4,232 citations

Journal ArticleDOI
TL;DR: Shows that neighborhood selection with the Lasso, which estimates conditional independence restrictions separately for each node and is hence equivalent to variable selection for Gaussian linear models, is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs.
Abstract: The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs. Neighborhood selection estimates the conditional independence restrictions separately for each node in the graph and is hence equivalent to variable selection for Gaussian linear models. We show that the proposed neighborhood selection scheme is consistent for sparse high-dimensional graphs. Consistency hinges on the choice of the penalty parameter. The oracle value for optimal prediction does not lead to a consistent neighborhood estimate. Controlling instead the probability of falsely joining some distinct connectivity components of the graph, consistent estimation for sparse graphs is achieved (with exponential rates), even when the number of variables grows as the number of observations raised to an arbitrary power.

3,793 citations


"False alarm? A comprehensive reanal..." refers background in this paper

  • ...…structure and this can only be done by (a) establishing mathematical proof that the method converges on the true structure in the long run (as, e.g., Meinshausen and Bühlmann, 2006, have done for the Gaussian graphical model and Ravikumar et al., 2010 for the Ising model) or (b) simulating such…...

    [...]

  • ...More generally, one can prove that every latent variable structure implies a specific network structure, as Molenaar (2003) already suspected and as Maris and his coworkers have been recently able to formally prove (Epskamp et al....

    [...]

Journal ArticleDOI
TL;DR: The review examines methodologies suited to identify symptom networks and discusses network analysis techniques that may be used to extract clinically and scientifically useful information from such networks (e.g., which symptom is most central in a person's network).
Abstract: In network approaches to psychopathology, disorders result from the causal interplay between symptoms (e.g., worry → insomnia → fatigue), possibly involving feedback loops (e.g., a person may engage in substance abuse to forget the problems that arose due to substance abuse). The present review examines methodologies suited to identify such symptom networks and discusses network analysis techniques that may be used to extract clinically and scientifically useful information from such networks (e.g., which symptom is most central in a person's network). The authors also show how network analysis techniques may be used to construct simulation models that mimic symptom dynamics. Network approaches naturally explain the limited success of traditional research strategies, which are typically based on the idea that symptoms are manifestations of some common underlying factor, while offering promising methodological alternatives. In addition, these techniques may offer possibilities to guide and evaluate therape...

1,824 citations


"False alarm? A comprehensive reanal..." refers background or methods in this paper

  • ...These sequences accurately reflect the actual order of the symptoms in the interview, and thus the DAGs correctly pick up the skip structure, which we know is a true causal structure in the data (see also Borsboom & Cramer, 2013, Figure 7)....

    [...]

  • ...We also realize that we are guilty as charged in this respect since we, too, used NCS-R data, albeit it for illustration or hypothesis-generating purposes (Borsboom & Cramer, 2013; Cramer et al., 2010)....

    [...]

Journal ArticleDOI
TL;DR: A method to visualize comorbidity networks is proposed and it is argued that this approach generates realistic hypotheses about pathways to comorbidity, overlapping symptoms, and diagnostic boundaries that are not naturally accommodated by latent variable models.
Abstract: The pivotal problem of comorbidity research lies in the psychometric foundation it rests on, that is, latent variable theory, in which a mental disorder is viewed as a latent variable that causes a constellation of symptoms. From this perspective, comorbidity is a (bi)directional relationship between multiple latent variables. We argue that such a latent variable perspective encounters serious problems in the study of comorbidity, and offer a radically different conceptualization in terms of a network approach, where comorbidity is hypothesized to arise from direct relations between symptoms of multiple disorders. We propose a method to visualize comorbidity networks and, based on an empirical network for major depression and generalized anxiety, we argue that this approach generates realistic hypotheses about pathways to comorbidity, overlapping symptoms, and diagnostic boundaries, that are not naturally accommodated by latent variable models: Some pathways to comorbidity through the symptom space are more likely than others; those pathways generally have the same direction (i.e., from symptoms of one disorder to symptoms of the other); overlapping symptoms play an important role in comorbidity; and boundaries between diagnostic categories are necessarily fuzzy.

918 citations


"False alarm? A comprehensive reanal..." refers methods in this paper

  • ...We also realize that we are guilty as charged in this respect since we, too, used NCS-R data, albeit it for illustration or hypothesis-generating purposes (Borsboom & Cramer, 2013; Cramer et al., 2010)....

    [...]

Frequently Asked Questions (4)
Q1. What contributions have the authors mentioned in the paper "False alarm? a comprehensive reanalysis of "Evidence that psychopathology symptom networks have limited re..." ?

Borsboom, D., Fried, E. I., Epskamp, S., Waldorp, L. J., van Borkulo, C. D., van der Maas, H. L. J., and Cramer, A. O. J. (2017). False alarm? A comprehensive reanalysis of "Evidence that psychopathology symptom networks have limited replicability" by Forbes, Wright, Markon, and Krueger.

Fragment of Table B1 (split-half comparisons of network characteristics; columns pair first and second halves for each of four network models, the last two being uncensored relative importance networks and DAGs). Connectivity (% of possible): 46.7% (45.1-48.4) vs. 47.1% (44.4-49.7); 38.6% (37.9-39.2) vs. 38.6% (37.9-38.9); 100% (100-100) vs. 100% (100-100); 17% (16.3-19) vs. 17.3% (15.7-18.3). Density (as in Forbes et al.): 1.14 (1.11-1.17) vs. 1.12 (1.08-1.19); 0.13 (0.13-0.13) vs. 0.13 (0.13-0.13); 0.06 (0.06-0.06) vs. 0.06 (0.06-0.06).

bnlearnRes_NCS <- boot.strength(DataNCScat, R = 1000, algorithm = "hc", algorithm.args = list(restart = 5, perturb = 10), debug = TRUE) # Edges with strength > 0.85: DAG_NCS <- amat(averaged.network(bnlearnRes_NCS, threshold = 0.85)) Step 4: Run relative importance networks using Robinaugh et al.'s (2014) procedure, using normalized lmg.
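For readers unfamiliar with the lmg metric mentioned in Step 4, here is a compact illustration (in Python rather than the R tooling the procedure presumably relies on; the data and variable names are made up): lmg averages each predictor's incremental R² over all orderings of the predictors, i.e., a Shapley decomposition of the model R², and "normalized lmg" rescales the contributions to sum to 1.

```python
import itertools
import math
import numpy as np

def r_squared(X, y, cols):
    """R^2 of regressing y (with intercept) on the columns in `cols`."""
    if not cols:
        return 0.0
    Z = np.column_stack([np.ones(len(y)), X[:, cols]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return 1.0 - resid.var() / y.var()

def lmg(X, y):
    """lmg relative importance: each predictor's incremental R^2,
    averaged over all orderings (a Shapley decomposition of R^2)."""
    p = X.shape[1]
    contrib = np.zeros(p)
    for order in itertools.permutations(range(p)):
        used = []
        for j in order:
            contrib[j] += r_squared(X, y, used + [j]) - r_squared(X, y, used)
            used.append(j)
    return contrib / math.factorial(p)

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=500)

imp = lmg(X, y)
norm_imp = imp / imp.sum()  # "normalized lmg", summing to 1
```

Because the increments telescope within each ordering, the lmg values sum exactly to the full-model R²; normalizing them to sum to 1 yields the relative importance weights used as edge inputs.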
