False alarm? A comprehensive reanalysis of "Evidence that psychopathology symptom networks have limited replicability" by Forbes, Wright, Markon, and Krueger (2017).
Summary
Introduction
- A second issue that the authors consider a statistical inaccuracy is FWMK’s use of a distorted tetrachoric correlation matrix, which underlies both their factor analyses and their association networks.
- FWMK’s abstract presents, as a main result, that “only 13-21% of the edges were consistently estimated across these networks”.
- Analysis code should always be available, since it is needed to replicate and verify reported analyses – the current report illustrates how important this is – and the authors commend FWMK for sharing theirs.
Estimating network structures
- For example, the authors often obtained 35 edges in the NCS DAG network.
- Co-morbid obsessive-compulsive disorder and depression: a Bayesian network approach.
Replicating errors in Forbes
- Shape of the curve and placement of nodes might differ across qgraph versions.
- Error 2: Implausible correlation matrix due to imputation method.
- Establish correlation matrices using different methods for handling missing data.
- This shows that the main problem lies in the imputation of zeroes and not in the fact that the nearest positive definite matrix is used.
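The distortion caused by imputing zeroes can be illustrated with a small, purely hypothetical simulation (written in Python rather than the original R, and not using the NCS-R data). When two symptoms are only assessed after a positive screener and the skipped responses are recoded as zeroes, two truly independent symptoms acquire a spurious positive correlation:

```python
import math
import random

def phi(pairs):
    """Phi coefficient (Pearson correlation of two binary variables)."""
    n = len(pairs)
    a = sum(1 for x, y in pairs if x == 1 and y == 1)
    b = sum(1 for x, y in pairs if x == 1 and y == 0)
    c = sum(1 for x, y in pairs if x == 0 and y == 1)
    d = n - a - b - c
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

random.seed(1)
records = []
for _ in range(20000):
    if random.random() < 0.5:
        # Positive screener: symptoms x and y are assessed.
        # They are generated independently, so their true phi is 0.
        records.append((int(random.random() < 0.5), int(random.random() < 0.5)))
    else:
        # Skip structure: x and y are never asked when the screener is negative.
        records.append((None, None))

complete_cases = [r for r in records if r[0] is not None]
zero_imputed = [(x if x is not None else 0, y if y is not None else 0)
                for x, y in records]

print(round(phi(complete_cases), 2))  # close to 0
print(round(phi(zero_imputed), 2))    # spuriously positive, close to 1/3
```

The analogue in the actual analyses is that tetrachoric correlations computed on zero-imputed data become implausibly high, which is what produces a non-positive-definite matrix in the first place; the projection to the nearest positive definite matrix is then only a secondary consequence.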
Stability Analyses
- This code uses bootnet to assess the stability of the Ising model (estimated with default = "IsingFit"); the bootstraps were run on a 24-core supercomputer.
- Table B1. Summary of split-half comparisons for the NCS-R data.
- This table matches the analysis reported in Table 3 of Forbes et al. (2017).
- In addition to the metrics discussed by FMWK (see their Table 2 for detailed explanations), the table reports Pearson correlations between network parameters in the two samples (all > .9), and replication statistics for censored and uncensored relative importance networks as implemented in accordance with Robinaugh et al. (2014).
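As a sketch of what such a replication statistic amounts to, the Pearson correlation between the edge-weight vectors of two split-half networks can be computed directly. The weights below are hypothetical toy values, not numbers from Table B1:

```python
import math

# Hypothetical edge weights for the same six edges estimated in two split halves.
half1 = [0.00, 0.31, 0.12, 0.45, 0.00, 0.27]
half2 = [0.02, 0.28, 0.15, 0.41, 0.00, 0.30]

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(pearson(half1, half2))  # > .9 for networks this similar
```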
- Ising model estimated on the NSMHWB data (shown in the right panel).
- Black boxes represent significant differences and gray boxes represent non-significant differences.
- Given the similarity of the datasets and networks, this is surprising.
- In NCS, the edge 6-14 is slightly stronger than 4-18, so shortest paths between the two clusters (on which betweenness centrality is based) run more consistently through nodes 6 and 14, irrespective of the particular participants included in the sample.
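The sensitivity of betweenness to a small difference between two bridge edges can be reproduced in a toy four-node network (node names and weights are hypothetical; edge length is taken as 1/weight for shortest paths, as qgraph does). Two clusters {A, a} and {B, b} are connected by two bridges; whichever bridge is slightly stronger carries all cross-cluster shortest paths, so its endpoints get all the betweenness:

```python
import heapq
from itertools import combinations

def dijkstra(graph, src):
    """Shortest distances and predecessors, with edge length = 1/weight
    (stronger edges mean shorter distances, as in qgraph)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + 1.0 / w
            if nd < dist.get(v, float("inf")) - 1e-12:
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def betweenness(graph):
    """For each node, count the shortest paths between other pairs that pass
    through it (shortest paths are unique in this toy example)."""
    score = {v: 0 for v in graph}
    for s, t in combinations(sorted(graph), 2):
        _, prev = dijkstra(graph, s)
        node = prev.get(t)
        while node is not None and node != s:
            score[node] += 1
            node = prev.get(node)
    return score

def make_graph(bridge_AB, bridge_ab):
    """Two two-node clusters joined by two bridge edges of given weights."""
    edges = [("A", "a", 0.9), ("B", "b", 0.9),
             ("A", "B", bridge_AB), ("a", "b", bridge_ab)]
    g = {v: {} for v in "ABab"}
    for u, v, w in edges:
        g[u][v] = g[v][u] = w
    return g

# With bridge A-B slightly stronger, A and B carry the cross-cluster paths...
print(betweenness(make_graph(0.30, 0.28)))
# ...and swapping the two bridge weights flips the betweenness ranking.
print(betweenness(make_graph(0.28, 0.30)))
```

This is the mechanism described above: a tiny, substantively meaningless difference in two bridge-edge weights determines which nodes collect the betweenness, which is why betweenness rankings can differ between otherwise near-identical networks.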
Frequently Asked Questions (4)
Q2. What is the censored value of a network?
Network characteristics, first half / second half (ranges in parentheses):
- Ising models: connectivity (% of possible) 46.7% (45.1-48.4) / 47.1% (44.4-49.7); density (as in Forbes et al.) 1.14 (1.11-1.17) / 1.12 (1.08-1.19)
- Relative importance networks (censored): connectivity 38.6% (37.9-39.2) / 38.6% (37.9-38.9); density 0.13 (0.13-0.13) / 0.13 (0.13-0.13)
- Relative importance networks (uncensored): connectivity 100% (100-100) / 100% (100-100); density 0.06 (0.06-0.06) / 0.06 (0.06-0.06)
- DAGs: connectivity 17% (16.3-19) / 17.3% (15.7-18.3); density not recoverable from the extracted text
Q3. How is the DAG analysis run?
bnlearnRes_NCS <- boot.strength(DataNCScat, R = 1000, algorithm = "hc", algorithm.args = list(restart = 5, perturb = 10), debug = TRUE)
# Retain edges with bootstrap strength > 0.85:
DAG_NCS <- amat(averaged.network(bnlearnRes_NCS, threshold = 0.85))
Step 4: Run relative importance networks using Robinaugh et al.'s (2014) procedure, using normalized lmg.
Q4. What is the problem with the relimp_NCS_uncensored?
This shows that the main problem lies in the imputation of zeroes and not in the fact that the nearest positive definite matrix is used.