Author

Christian Barillot

Bio: Christian Barillot is an academic researcher from AmeriCorps VISTA. The author has contributed to research on topics including Diffusion MRI and the corticospinal tract, has an h-index of 8, and has co-authored 44 publications receiving 498 citations.

Papers
01 Jan 2007
TL;DR: The results show that the optimized NL-means filter outperforms the classical implementation of the NL-means filter, as well as two other classical denoising methods (Anisotropic Diffusion and Total Variation minimization), in terms of accuracy with low computation time.
Abstract: A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image quality and to improve the performance of all the tasks needed for quantitative imaging analysis. The method proposed in this paper is based on a 3D optimized blockwise version of the Non Local (NL) means filter [1]. The NL-means filter uses the redundancy of information in the image under study to remove the noise. The performance of the NL-means filter has already been demonstrated for 2D images, but reducing the computational burden is a critical aspect of extending the method to 3D images. To overcome this problem, we propose improvements that reduce the computational complexity. These improvements drastically reduce the computation time while preserving the performance of the NL-means filter. A fully automated and optimized version of the NL-means filter is then presented. Our contributions to the NL-means filter are: (a) an automatic tuning of the smoothing parameter, (b) a selection of the most relevant voxels, (c) a blockwise implementation and (d) a parallelized computation. Quantitative validation was carried out on synthetic datasets generated with BrainWeb [2]. The results show that our optimized NL-means filter outperforms the classical implementation of the NL-means filter, as well as two other classical denoising methods (Anisotropic Diffusion [3] and Total Variation minimization [4]), in terms of accuracy (measured by the Peak Signal to Noise Ratio) with low computation time. Finally, qualitative results on real data are presented.
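
As a rough illustration of the NL-means principle described in this abstract, the sketch below implements a naive (voxel-wise, non-blockwise, non-parallel) 3D NL-means filter in Python/NumPy. The function and parameter names (nlmeans_3d, patch_radius, search_radius, h) are hypothetical and do not correspond to the authors' implementation; in particular, the paper's contributions (automatic tuning of the smoothing parameter, voxel selection, blockwise processing and parallelization) are not reproduced here.

import numpy as np

def nlmeans_3d(volume, patch_radius=1, search_radius=2, h=0.1):
    # Each voxel is replaced by a weighted average of voxels in a local
    # search window; weights decay with the squared distance between the
    # patches centred on the two voxels. h is the smoothing parameter
    # that the paper proposes to tune automatically.
    pr, sr = patch_radius, search_radius
    padded = np.pad(volume.astype(float), pr + sr, mode="reflect")
    out = np.zeros(volume.shape, dtype=float)

    def patch(z, y, x):
        return padded[z - pr:z + pr + 1, y - pr:y + pr + 1, x - pr:x + pr + 1]

    for z in range(volume.shape[0]):
        for y in range(volume.shape[1]):
            for x in range(volume.shape[2]):
                cz, cy, cx = z + pr + sr, y + pr + sr, x + pr + sr
                ref = patch(cz, cy, cx)
                weights, values = [], []
                for dz in range(-sr, sr + 1):
                    for dy in range(-sr, sr + 1):
                        for dx in range(-sr, sr + 1):
                            cand = patch(cz + dz, cy + dy, cx + dx)
                            d2 = np.mean((ref - cand) ** 2)   # patch similarity
                            weights.append(np.exp(-d2 / (h ** 2)))
                            values.append(padded[cz + dz, cy + dy, cx + dx])
                weights = np.asarray(weights)
                out[z, y, x] = np.dot(weights, values) / weights.sum()
    return out

For practical use, an optimized blockwise NL-means is available, for example, as skimage.restoration.denoise_nl_means in scikit-image; the toy loop above is only meant to show the weighting scheme.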

122 citations

Book ChapterDOI
26 Oct 2005
TL;DR: Promising preliminary results on longitudinal multi-sequence clinical data are shown, and an original rejection scheme for outliers is proposed.
Abstract: We propose to segment Multiple Sclerosis (MS) lesions over time in multidimensional Magnetic Resonance (MR) sequences. We use a robust algorithm that allows the segmentation of the abnormalities using the whole time series simultaneously, and we propose an original rejection scheme for outliers. We validate our method using the BrainWeb simulator. To conclude, promising preliminary results on longitudinal multi-sequence clinical data are shown.
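
To make the idea of an outlier rejection scheme concrete, here is a minimal sketch, assuming co-registered longitudinal volumes stacked along a time axis; it flags voxel intensities that deviate strongly from a robust (median/MAD) voxel-wise estimate over the time series. This is an illustrative stand-in, not the rejection scheme proposed in the paper, and all names are hypothetical.

import numpy as np

def flag_outlier_voxels(series, k=3.5):
    # series: array of shape (T, Z, Y, X), co-registered time points.
    # Returns a boolean array marking intensities that deviate from the
    # voxel-wise robust centre by more than k robust standard deviations.
    med = np.median(series, axis=0)                # robust per-voxel centre
    mad = np.median(np.abs(series - med), axis=0)  # robust per-voxel spread
    sigma = 1.4826 * mad + 1e-6                    # MAD -> std under Gaussianity, avoid /0
    z = np.abs(series - med) / sigma               # standardised residuals
    return z > k                                   # True where rejected as an outlier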

75 citations

Journal ArticleDOI
TL;DR: It is shown, using simulations, that this statistical test applied to modal distances can detect a possible genetic encoding of the shape of the central sulcus, and the test is applied to real data to highlight evidence of genetic encoding of the shape of neuroanatomical structures.

72 citations

Book ChapterDOI
25 Sep 2002
TL;DR: The SPM spatial normalization method, which is widely used by the neuroscience community, is evaluated and the evaluation framework is extended to functional MEG data, showing that inter-subject functional variability can be reduced with inter-subject non-rigid registration methods.
Abstract: This paper is concerned with inter-subject registration of anatomical and functional brain data, and extends our previous work [7] on evaluation of inter-subject registration methods. The paper evaluates the SPM spatial normalization method [1], which is widely used by the neuroscience community. This paper also extends the previous evaluation framework to functional MEG data. The impact of three different registration methods on the registration of somatosensory MEG data is studied. We show that the inter-subject functional variability can be reduced with inter-subject non-rigid registration methods, which is in accordance with the hypothesis that part of the inter-subject functional variability is encoded in the inter-subject anatomical variability.

47 citations

Journal ArticleDOI
TL;DR: A coarse-to-fine focusing strategy is introduced at the finest level of a multiscale framework for automatic 3D non-rigid registration, leading to an improvement in the accuracy of the registration process.
Abstract: In this paper, we embed the minimization scheme of an automatic 3D non-rigid registration method in a multiscale framework. The initial model formulation was expressed as a robust multiresolution and multigrid minimization scheme. At the finest level of the multiresolution pyramid, we introduce a focusing strategy from coarse-to-fine scales which leads to an improvement in the accuracy of the registration process. A focusing strategy has been tested for a linear and a non-linear scale-space. Results on real 3D ultrasound images are discussed.
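
The sketch below illustrates the general coarse-to-fine multiresolution idea referred to in this abstract: an estimate obtained on a downsampled volume initialises the search at the next finer level. For brevity it estimates only a global translation with a sum-of-squared-differences criterion; the paper's method is robust, non-rigid and multigrid, none of which is reproduced here, and all names are hypothetical.

import numpy as np
from scipy import ndimage

def pyramid(img, levels=3, sigma=1.0):
    # Gaussian pyramid, returned coarse-to-fine.
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        smoothed = ndimage.gaussian_filter(pyr[-1], sigma)
        pyr.append(smoothed[::2, ::2, ::2])
    return pyr[::-1]

def register_translation(fixed, moving, init=(0.0, 0.0, 0.0), step=1.0, iters=20):
    # Toy translation-only registration by local search on the SSD criterion.
    t = np.array(init, dtype=float)

    def ssd(offset):
        shifted = ndimage.shift(moving, offset, order=1, mode="nearest")
        return np.mean((fixed - shifted) ** 2)

    best = ssd(t)
    for _ in range(iters):
        improved = False
        for axis in range(3):
            for delta in (-step, step):
                cand = t.copy()
                cand[axis] += delta
                cost = ssd(cand)
                if cost < best:
                    best, t, improved = cost, cand, True
        if not improved:
            step /= 2.0   # focus the search on a finer scale
    return t

def coarse_to_fine(fixed, moving, levels=3):
    # Propagate the estimate from the coarsest to the finest level.
    t = np.zeros(3)
    for i, (f, m) in enumerate(zip(pyramid(fixed, levels), pyramid(moving, levels))):
        if i > 0:
            t = t * 2.0   # a coarse-grid displacement doubles on the next finer grid
        t = register_translation(f, m, init=t)
    return t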

37 citations


Cited by
Journal ArticleDOI
TL;DR: DARTEL has been applied to intersubject registration of 471 whole brain images, and the resulting deformations were evaluated in terms of how well they encode the shape information necessary to separate male and female subjects and to predict the ages of the subjects.

6,999 citations

Journal ArticleDOI
TL;DR: The ability to quantify the variance of the human brain as a function of age in a large population of subjects for whom data is also available about their genetic composition and behaviour will allow for the first assessment of cerebral genotype-phenotype-behavioural correlations in humans to take place in a population this large.
Abstract: Motivated by the vast amount of information that is rapidly accumulating about the human brain in digital form, we embarked upon a program in 1992 to develop a four-dimensional probabilistic atlas and reference system for the human brain. Through an International Consortium for Brain Mapping (ICBM) a dataset is being collected that includes 7000 subjects between the ages of eighteen and ninety years and including 342 mono- and dizygotic twins. Data on each subject includes detailed demographic, clinical, behavioural and imaging information. DNA has been collected for genotyping from 5800 subjects. A component of the programme uses post-mortem tissue to determine the probabilistic distribution of microscopic cyto- and chemoarchitectural regions in the human brain. This, combined with macroscopic information about structure and function derived from subjects in vivo, provides the first large scale opportunity to gain meaningful insights into the concordance or discordance in micro- and macroscopic structure and function. The philosophy, strategy, algorithm development, data acquisition techniques and validation methods are described in this report along with database structures. Examples of results are described for the normal adult human brain as well as examples in patients with Alzheimer's disease and multiple sclerosis. The ability to quantify the variance of the human brain as a function of age in a large population of subjects for whom data is also available about their genetic composition and behaviour will allow for the first assessment of cerebral genotype-phenotype-behavioural correlations in humans to take place in a population this large. This approach and its application should provide new insights and opportunities for investigators interested in basic neuroscience, clinical diagnostics and the evaluation of neuropsychiatric disorders in patients.

2,094 citations

Journal ArticleDOI
TL;DR: Non-invasive mapping of brain structure and function with magnetic resonance imaging (MRI) has opened up unprecedented opportunities for studying the neural substrates underlying cognitive development, with an emerging consensus of a continuous increase throughout adolescence in the volume of white matter.

1,225 citations