Journal ArticleDOI

A Bayesian model of shape and appearance for subcortical brain segmentation

TL;DR: A fully-automated segmentation method that uses manually labelled image data to provide anatomical training information; it is assessed both quantitatively, using leave-one-out testing on the 336 training images, and qualitatively, using an independent clinical dataset involving Alzheimer's disease.

About: This article was published in NeuroImage on 2011-06-01 and is currently open access. It has received 2,047 citations to date. The article focuses on the topics: active appearance model and brain segmentation.
Citations
Journal ArticleDOI
TL;DR: This paper introduces a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI across arbitrarily many time points; the framework successfully reduces variability and avoids over-regularization.

1,949 citations

Journal ArticleDOI
TL;DR: UK Biobank brain imaging is described and results derived from the first 5,000 participants' data release are presented; these data have already yielded a rich range of associations between brain imaging and other measures collected by UK Biobank.
Abstract: Medical imaging has enormous potential for early disease prediction, but is impeded by the difficulty and expense of acquiring data sets before symptom onset. UK Biobank aims to address this problem directly by acquiring high-quality, consistently acquired imaging data from 100,000 predominantly healthy participants, with health outcomes being tracked over the coming decades. The brain imaging includes structural, diffusion and functional modalities. Along with body and cardiac imaging, genetics, lifestyle measures, biological phenotyping and health records, this imaging is expected to enable discovery of imaging markers of a broad range of diseases at their earliest stages, as well as provide unique insight into disease mechanisms. We describe UK Biobank brain imaging and present results derived from the first 5,000 participants' data release. Although this covers just 5% of the ultimate cohort, it has already yielded a rich range of associations between brain imaging and other measures collected by UK Biobank.

1,343 citations

Journal ArticleDOI
TL;DR: The aim of this study is to incorporate support for multi-shell data into the CSD approach as well as to exploit the unique b-value dependencies of the different tissue types to estimate a multi-tissue ODF.

1,015 citations

Journal ArticleDOI
TL;DR: The pipeline is described in detail, following a brief overview of UK Biobank brain imaging and the acquisition protocol, along with several quantitative investigations carried out as part of the development of both the imaging protocol and the processing pipeline.

927 citations

Journal ArticleDOI
TL;DR: Results indicate that some of the known false positives associated with tractography algorithms are prevented, such that the biological accuracy of the reconstructions should be improved, provided that state-of-the-art streamlines tractography methods are used.

876 citations


Cites methods from "A Bayesian model of shape and appea..."

  • ...In this context, FIRST was configured to automatically segment the nucleus accumbens, caudate, globus pallidus, putamen and thalamus (all bilaterally)....


  • ...Although FIRST provides mask images of each segmented structure, these masks over-estimate the volumetric extent of the relevant structure, as a voxel must only overlap with a small fraction of the mesh to be included in the mask....


  • ...We employed the FMRIB Integrated Registration and Segmentation Tool (FIRST) (Patenaude et al., 2011), which calculates closed triangular meshes to estimate the positions and shapes of various sub-cortical structures....

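The over-estimation noted in the second excerpt above, where a voxel is included in the mask if it overlaps any fraction of the mesh, can be illustrated with a self-contained 2-D sketch (pure NumPy on a synthetic circle; this is not FIRST's actual rasterisation code). An "any overlap" rule yields a larger area than a 50% partial-volume threshold, which stays close to the true analytic area:

```python
import numpy as np

# Synthetic 2-D stand-in for a mesh-derived structure: a circle of radius r
# centred in a 64x64 grid. Each pixel's overlap fraction with the disk is
# estimated by 4x4 supersampling.
n, r = 64, 20.0
ss = 4
coords = (np.arange(n * ss) + 0.5) / ss - n / 2   # subpixel centres
xx, yy = np.meshgrid(coords, coords)
inside = (xx**2 + yy**2) <= r**2
# Average the subpixel hits back down to per-pixel overlap fractions.
frac = inside.reshape(n, ss, n, ss).mean(axis=(1, 3))

mask_any = frac > 0      # "any overlap" rule: over-estimates the area
mask_half = frac >= 0.5  # 50% partial-volume threshold: closer to truth

true_area = np.pi * r**2
print(mask_any.sum(), mask_half.sum(), round(true_area))
```

This is why downstream pipelines, including the one quoted above, commonly apply a partial-volume or boundary correction to mesh-derived masks before computing volumes.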

References
Journal ArticleDOI
TL;DR: In this paper, a different approach to problems of multiple significance testing is presented, which calls for controlling the expected proportion of falsely rejected hypotheses, the false discovery rate; this error rate is equivalent to the FWER when all hypotheses are true but is smaller otherwise.
Abstract: The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to problems of multiple significance testing is presented. It calls for controlling the expected proportion of falsely rejected hypotheses, the false discovery rate. This error rate is equivalent to the FWER when all hypotheses are true but is smaller otherwise. Therefore, in problems where control of the false discovery rate rather than that of the FWER is desired, there is potential for a gain in power. A simple sequential Bonferroni-type procedure is proved to control the false discovery rate for independent test statistics, and a simulation study shows that the gain in power is substantial. The use of the new procedure and the appropriateness of the criterion are illustrated with examples.

83,420 citations


"A Bayesian model of shape and appea..." refers methods in this paper

  • ...In (A) the figure shows the colour-coding of the surface and vectors reflecting the uncorrected F-statistics associated with the change, while in (B) it shows the p-values, corrected for multiple-comparisons using False Discovery Rate (Benjamini and Hochberg, 1995; Genovese et al., 2002)....

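The step-up procedure described in this abstract can be sketched in a few lines (NumPy; a minimal illustration under the paper's independence assumption, not an authoritative implementation): sort the m p-values, find the largest k with p_(k) ≤ (k/m)·q, and reject the k smallest.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean rejection mask controlling the false discovery
    rate at level q (valid for independent test statistics)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m       # step-up thresholds k*q/m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest k with p_(k) <= k*q/m
        reject[order[: k + 1]] = True          # reject the k smallest p-values
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]
print(benjamini_hochberg(pvals, q=0.05))  # rejects only the two smallest
```

Note the step-up character: 0.039 is not rejected here even though it is below q, because it exceeds its own rank-scaled threshold 3·q/10 = 0.015.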

Book
28 Jul 2013
TL;DR: This book describes the important ideas in these areas in a common conceptual framework; the emphasis is on concepts rather than mathematics, with a liberal use of color graphics.
Abstract: During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting, the first comprehensive treatment of this topic in any book. This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression and path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for "wide" data (p bigger than n), including multiple testing and false discovery rates. Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit and gradient boosting.

19,261 citations

Journal ArticleDOI
TL;DR: This work uses snakes for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest, and uses scale-space continuation to enlarge the capture region surrounding a feature.
Abstract: A snake is an energy-minimizing spline guided by external constraint forces and influenced by image forces that pull it toward features such as lines and edges. Snakes are active contour models: they lock onto nearby edges, localizing them accurately. Scale-space continuation can be used to enlarge the capture region surrounding a feature. Snakes provide a unified account of a number of visual problems, including detection of edges, lines, and subjective contours; motion tracking; and stereo matching. We have used snakes successfully for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest.

18,095 citations


"A Bayesian model of shape and appea..." refers methods in this paper

  • ...The desired cross-subject vertex correspondence is optimised by within-surface motion constraints and minimal smoothing within the 3D deformable model (Kass et al., 1988; Lobregt and Viergever, 1995; Smith, 2002)....

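The energy-minimising contour described in this abstract can be sketched with explicit gradient descent (NumPy; a toy illustration, not the authors' implementation). To keep it self-contained, the external potential is an analytic distance-to-circle term standing in for an image's edge map; alpha and beta weight the discrete tension (first-derivative) and bending (second-derivative) internal energies:

```python
import numpy as np

# Minimal active-contour (snake) sketch: a closed curve evolves under
# internal smoothness forces plus an external force pulling it toward a
# "feature", here a circle of radius r_target.
n, r0, r_target = 100, 30.0, 20.0
alpha, beta, tau = 0.5, 0.2, 0.1   # tension, bending, step size

t = np.linspace(0, 2 * np.pi, n, endpoint=False)
v = np.stack([r0 * np.cos(t), r0 * np.sin(t)], axis=1)  # initial contour

def external_force(v):
    # Negative gradient of P(x) = 0.5 * (|x| - r_target)^2, which pulls
    # each point radially toward the target circle (the stand-in feature).
    rad = np.linalg.norm(v, axis=1, keepdims=True)
    return -(rad - r_target) * v / rad

for _ in range(200):
    # Internal forces: discrete 2nd and 4th derivatives along the contour
    # (periodic, since the snake is closed).
    d2 = np.roll(v, -1, 0) - 2 * v + np.roll(v, 1, 0)
    d4 = np.roll(d2, -1, 0) - 2 * d2 + np.roll(d2, 1, 0)
    v = v + tau * (alpha * d2 - beta * d4 + external_force(v))

mean_radius = np.linalg.norm(v, axis=1).mean()
print(round(mean_radius, 2))  # the contour has locked onto the feature
```

In a real image the external force would come from the (possibly scale-space smoothed) gradient of an edge map, sampled at the contour points, rather than from a closed-form potential.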

Journal ArticleDOI

17,845 citations

Journal ArticleDOI
TL;DR: An automated method for segmenting magnetic resonance head images into brain and non-brain is developed and described, with examples of results and extensive quantitative testing against "gold-standard" hand segmentations and two other popular automated methods.
Abstract: An automated method for segmenting magnetic resonance head images into brain and non-brain has been developed. It is very robust and accurate and has been tested on thousands of data sets from a wide variety of scanners and taken with a wide variety of MR sequences. The method, Brain Extraction Tool (BET), uses a deformable model that evolves to fit the brain's surface by the application of a set of locally adaptive model forces. The method is very fast and requires no preregistration or other pre-processing before being applied. We describe the new method and give examples of results and the results of extensive quantitative testing against "gold-standard" hand segmentations, and two other popular automated methods.

9,887 citations


"A Bayesian model of shape and appea..." refers methods in this paper

  • ...This regularisation displacement is identical to that used by BET (Smith, 2002), except that its weighting is initially set to zero and adaptively updated, in order to try to retain better vertex correspondence across subjects....


  • ...The desired cross-subject vertex correspondence is optimised by within-surface motion constraints and minimal smoothing within the 3D deformable model (Kass et al., 1988; Lobregt and Viergever, 1995; Smith, 2002)....
