Journal ArticleDOI

Benchmarking of participant-level confound regression strategies for the control of motion artifact in studies of functional connectivity.

TL;DR: A systematic evaluation of 14 participant-level confound regression methods for functional connectivity highlights the heterogeneous efficacy of existing methods, and suggests that different confound regression strategies may be appropriate in the context of specific scientific goals.
About: This article was published in NeuroImage on 2017-07-01 and is currently open access. It has received 790 citations to date.
Citations
Journal ArticleDOI
16 Aug 2017 - Neuron
TL;DR: A novel MRI dataset containing 5 hr of RSFC data, 6 hr of task fMRI, multiple structural MRIs, and neuropsychological tests from each of ten adults was used to generate ten high-fidelity, individual-specific functional connectomes, revealing several new types of spatial and organizational variability in brain networks.

869 citations


Cites methods from "Benchmarking of participant-level c..."

  • ...RSFC preprocessing—Additional preprocessing steps to reduce spurious variance unlikely to reflect neuronal activity were executed as recommended in (Ciric et al., 2017; Power et al., 2014)....


Journal ArticleDOI
TL;DR: In this article, the authors used three of the largest neuroimaging datasets currently available, with a total sample size of around 50,000 individuals, to quantify the effect sizes and reproducibility of brain-wide association studies (BWAS) as a function of sample size.
Abstract: Magnetic resonance imaging (MRI) has transformed our understanding of the human brain through well-replicated mapping of abilities to specific structures (for example, lesion studies) and functions (for example, task functional MRI (fMRI)). Mental health research and care have yet to realize similar advances from MRI. A primary challenge has been replicating associations between inter-individual differences in brain structure or function and complex cognitive or mental health phenotypes (brain-wide association studies (BWAS)). Such BWAS have typically relied on sample sizes appropriate for classical brain mapping (the median neuroimaging study sample size is about 25), but potentially too small for capturing reproducible brain-behavioural phenotype associations. Here we used three of the largest neuroimaging datasets currently available, with a total sample size of around 50,000 individuals, to quantify BWAS effect sizes and reproducibility as a function of sample size. BWAS associations were smaller than previously thought, resulting in statistically underpowered studies, inflated effect sizes and replication failures at typical sample sizes. As sample sizes grew into the thousands, replication rates began to improve and effect size inflation decreased. More robust BWAS effects were detected for functional MRI (versus structural), cognitive tests (versus mental health questionnaires) and multivariate methods (versus univariate). Smaller than expected brain-phenotype associations and variability across population subsamples can explain widespread BWAS replication failures. In contrast to non-BWAS approaches with larger effects (for example, lesions, interventions and within-person), BWAS reproducibility requires samples with thousands of individuals.
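
The power argument here can be made concrete with a small simulation. Below is a minimal sketch in R, assuming a true brain-behaviour correlation of r = 0.10 (the order of magnitude the paper reports) and using the large-sample Fisher approximation for the significance threshold; it is an illustration of the statistical point, not the paper's analysis code.

    # Simulate BWAS sampling variability: a small true correlation (r = .10)
    # estimated at a typical sample size (n = 25) versus a large one (n = 2000).
    set.seed(1)
    true_r <- 0.10
    sim_r <- function(n, reps = 5000) {
      replicate(reps, {
        x <- rnorm(n)                                    # brain measure
        y <- true_r * x + sqrt(1 - true_r^2) * rnorm(n)  # behaviour, true r = .10
        cor(x, y)
      })
    }
    r_small <- sim_r(25)
    r_large <- sim_r(2000)

    # Approximate threshold on |r| for p < .05 (Fisher z approximation)
    thresh <- tanh(qnorm(0.975) / sqrt(25 - 3))
    sig <- abs(r_small) > thresh

    mean(sig)                    # power is very low at n = 25
    median(abs(r_small[sig]))    # "significant" estimates are badly inflated (~0.4)
    c(sd(r_small), sd(r_large))  # sampling variability shrinks as n grows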

611 citations

Journal ArticleDOI
TL;DR: These results indicate that simple linear regression of regional fMRI time series against head motion parameters and WM/CSF signals (with or without expansion terms) is not sufficient to remove head motion artefacts, and that group comparisons in functional connectivity between healthy controls and schizophrenia patients are highly dependent on preprocessing strategy.
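
The confound regression being referred to is straightforward to sketch. A minimal illustration in R (all names are illustrative, not any paper's actual code): each regional time series is residualised against the six realignment parameters plus mean WM/CSF signals, optionally adding temporal derivatives and quadratic terms (the "expansion terms").

    # ts:      T x R matrix of regional fMRI time series (T volumes, R regions)
    # motion:  T x 6 matrix of realignment parameters
    # wm, csf: length-T mean white-matter and CSF signals
    confound_regress <- function(ts, motion, wm, csf, expand = TRUE) {
      X <- cbind(motion, wm, csf)
      if (expand) {
        d <- rbind(0, diff(X))      # temporal derivatives (first row zero-padded)
        X <- cbind(X, d, X^2, d^2)  # derivatives plus quadratic expansion terms
      }
      # Residualise every region against the confound model in one call
      resid(lm(ts ~ X))
    }

    # Toy example
    n_tp <- 200; n_reg <- 10
    denoised <- confound_regress(matrix(rnorm(n_tp * n_reg), n_tp, n_reg),
                                 matrix(rnorm(n_tp * 6), n_tp, 6),
                                 rnorm(n_tp), rnorm(n_tp))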

564 citations


Journal ArticleDOI
TL;DR: Initial evidence regarding the usefulness of early imaging biomarkers for predicting cognitive outcomes and risk of neuropsychiatric disorders is discussed.
Abstract: In humans, the period from term birth to ∼2 years of age is characterized by rapid and dynamic brain development and plays an important role in cognitive development and risk of disorders such as autism and schizophrenia. Recent imaging studies have begun to delineate the growth trajectories of brain structure and function in the first years after birth and their relationship to cognition and risk of neuropsychiatric disorders. This Review discusses the development of grey and white matter and structural and functional networks, as well as genetic and environmental influences on early-childhood brain development. We also discuss initial evidence regarding the usefulness of early imaging biomarkers for predicting cognitive outcomes and risk of neuropsychiatric disorders.

469 citations

References
Journal ArticleDOI
TL;DR: In this paper, a different approach to problems of multiple significance testing is presented: controlling the expected proportion of falsely rejected hypotheses, the false discovery rate, which is equivalent to the FWER when all hypotheses are true but is smaller otherwise.
Abstract: The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to problems of multiple significance testing is presented. It calls for controlling the expected proportion of falsely rejected hypotheses, the false discovery rate. This error rate is equivalent to the FWER when all hypotheses are true but is smaller otherwise. Therefore, in problems where the control of the false discovery rate rather than that of the FWER is desired, there is potential for a gain in power. A simple sequential Bonferroni-type procedure is proved to control the false discovery rate for independent test statistics, and a simulation study shows that the gain in power is substantial. The use of the new procedure and the appropriateness of the criterion are illustrated with examples.
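
The step-up procedure the abstract describes is short enough to state in code. A sketch in R; note that base R already ships this as p.adjust(p, method = "BH"), which is what a real analysis should use.

    # Benjamini-Hochberg step-up procedure at FDR level q:
    # sort the p-values, find the largest k with p_(k) <= (k/m) * q,
    # and reject the hypotheses corresponding to the k smallest p-values.
    bh_reject <- function(p, q = 0.05) {
      m <- length(p)
      o <- order(p)
      k <- max(c(0, which(p[o] <= (seq_len(m) / m) * q)))
      rejected <- rep(FALSE, m)
      if (k > 0) rejected[o[seq_len(k)]] <- TRUE
      rejected
    }

    # Equivalent using base R:
    # rejected <- p.adjust(p, method = "BH") <= 0.05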

83,420 citations


"Benchmarking of participant-level c..." refers methods in this paper

  • ...This distribution was used to obtain two measures of the pipeline's ability to mitigate motion artifact, including: 1) the number of edges significantly related to motion, which was computed after using the false discovery rate (FDR; Benjamini and Hochberg, 1995) to account for multiple comparisons; and 2) the median absolute value of all QC-FC correlations....

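A sketch of the two QC-FC measures described in the excerpt above, in R. Here fc (a subjects x edges connectivity matrix) and motion (one motion summary per subject, e.g. mean framewise displacement) are illustrative names, not the paper's code.

    # fc:     N x E matrix (N subjects, E unique connectome edges)
    # motion: length-N vector of subject motion (e.g. mean framewise displacement)
    qcfc  <- apply(fc, 2, function(edge) cor(edge, motion))
    pvals <- apply(fc, 2, function(edge) cor.test(edge, motion)$p.value)

    # 1) number of edges significantly related to motion after FDR correction
    n_sig <- sum(p.adjust(pvals, method = "BH") < 0.05)

    # 2) median absolute QC-FC correlation
    med_qcfc <- median(abs(qcfc))
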

Book
13 Aug 2009
TL;DR: This book describes ggplot2, a new data visualization package for R that uses the insights from Leland Wilkinson's Grammar of Graphics to create a powerful and flexible system for creating data graphics.
Abstract: This book describes ggplot2, a new data visualization package for R that uses the insights from Leland Wilkinson's Grammar of Graphics to create a powerful and flexible system for creating data graphics. With ggplot2, it's easy to:

  • produce handsome, publication-quality plots, with automatic legends created from the plot specification
  • superpose multiple layers (points, lines, maps, tiles, box plots, to name a few) from different data sources, with automatically adjusted common scales
  • add customisable smoothers that use the powerful modelling capabilities of R, such as loess, linear models, generalised additive models and robust regression
  • save any ggplot2 plot (or part thereof) for later modification or reuse
  • create custom themes that capture in-house or journal style requirements, and that can easily be applied to multiple plots
  • approach your graph from a visual perspective, thinking about how each component of the data is represented on the final plot

This book will be useful to everyone who has struggled with displaying their data in an informative and attractive way. You will need some basic knowledge of R (i.e. you should be able to get your data into R), but ggplot2 is a mini-language specifically tailored for producing graphics, and you'll learn everything you need in the book. After reading this book you'll be able to produce graphics customized precisely for your problems, and you'll find it easy to get graphics out of your head and on to the screen or page.
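
In practice, the layered grammar the blurb describes looks like this (a minimal sketch using R's built-in mtcars data):

    library(ggplot2)

    # Layers, scales, smoothers and themes compose additively:
    ggplot(mtcars, aes(x = wt, y = mpg)) +
      geom_point(aes(colour = factor(cyl))) +   # points, coloured by cylinder count
      geom_smooth(method = "loess") +           # customisable smoother layer
      labs(x = "Weight (1000 lbs)", y = "Miles per gallon",
           colour = "Cylinders") +
      theme_minimal()                           # reusable theme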

29,504 citations


"Benchmarking of participant-level c..." refers methods in this paper

  • ...All graphs were generated using ggplot2 in R version 3.2.3 (Wickham, 2009); brain renderings were prepared in BrainNet Viewer (Xia et al., 2013)....


Journal ArticleDOI
TL;DR: This article proposes a method for detecting communities, built around the idea of using centrality indices to find community boundaries, and tests it on computer-generated and real-world graphs whose community structure is already known and finds that the method detects this known structure with high sensitivity and reliability.
Abstract: A number of recent studies have focused on the statistical properties of networked systems such as social networks and the World Wide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known, a collaboration network and a food web, and find that it detects significant and informative community divisions in both cases.
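
This edge-betweenness algorithm is available in the igraph R package as cluster_edge_betweenness(); a minimal sketch on Zachary's karate club, a small graph with known community structure:

    library(igraph)

    # Zachary's karate club: a classic graph with known community structure
    g <- make_graph("Zachary")

    # Girvan-Newman: iteratively remove the edge with the highest betweenness,
    # splitting the graph along community boundaries
    communities <- cluster_edge_betweenness(g)
    membership(communities)   # community assignment per node
    modularity(communities)   # quality of the detected partition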

14,429 citations


"Benchmarking of participant-level c..." refers methods in this paper

  • ...Finally, the modularity quality of the resultant consensus partition was estimated according to an established null model (Girvan and Newman, 2002); the mean of Q values across subjects provided an estimate of the sub-network definition still present in the connectome after de-noising....

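For reference, the modularity Q being estimated is, in the Newman-Girvan formulation, the fraction of edge weight falling within communities minus the fraction expected under a degree-preserving null model:

    Q = \frac{1}{2m} \sum_{ij} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)

where A_{ij} is the (weighted) adjacency matrix, k_i is the degree (or strength) of node i, m is the total edge weight, and \delta(c_i, c_j) equals 1 when nodes i and j are assigned to the same community.
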

Journal ArticleDOI
TL;DR: This work proposes a heuristic method that is shown to outperform all other known community detection methods in terms of computation time and the quality of the communities detected is very good, as measured by the so-called modularity.
Abstract: We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2.6 million customers and by analyzing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad-hoc modular networks.

13,519 citations


"Benchmarking of participant-level c..." refers methods in this paper

  • ...To determine Q, community detection was performed on each subject's de-noised network using the Louvain heuristic (Blondel et al., 2008), which partitions the connectome into sub-networks in a manner that maximizes the value of Q....

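A sketch of this step in R using igraph, whose cluster_louvain() implements the Blondel et al. heuristic. Here adj_list (one weighted adjacency matrix per subject, with negative edges assumed already thresholded out) is an illustrative name, not the paper's code.

    library(igraph)

    # adj_list: list of subjects' de-noised, weighted adjacency matrices
    # (negative edges assumed already removed/thresholded)
    q_per_subject <- sapply(adj_list, function(adj) {
      g <- graph_from_adjacency_matrix(adj, mode = "undirected", weighted = TRUE)
      part <- cluster_louvain(g)   # partitions the connectome to maximise Q
      modularity(part)             # Q of the resulting partition
    })

    # Mean Q across subjects: how much sub-network structure survives de-noising
    mean(q_per_subject)
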
