Author

Bertrand Thirion

Bio: Bertrand Thirion is an academic researcher from Université Paris-Saclay. The author has contributed to research in topics: Cluster analysis & Cognition. The author has an h-index of 51 and has co-authored 311 publications receiving 73,839 citations. Previous affiliations of Bertrand Thirion include the French Institute for Research in Computer Science and Automation and the French Institute of Health and Medical Research.


Papers
Journal ArticleDOI
TL;DR: The findings of this study reframe the role of the DMN core and its relation to canonical networks in schizophrenia, underlining the importance of large-scale neural interactions as effective biomarkers and as indicators of how to tailor psychiatric care to individual patients.
Abstract: Schizophrenia is a devastating mental disease with an apparent disruption in the highly associative default mode network (DMN). Interplay between this canonical network and others probably contributes to goal-directed behavior so its disturbance is a candidate neural fingerprint underlying schizophrenia psychopathology. Previous research has reported both hyperconnectivity and hypoconnectivity within the DMN, and both increased and decreased DMN coupling with the multimodal saliency network (SN) and dorsal attention network (DAN). This study systematically revisited network disruption in patients with schizophrenia using data-derived network atlases and multivariate pattern-learning algorithms in a multisite dataset (n = 325). Resting-state fluctuations in unconstrained brain states were used to estimate functional connectivity, and local volume differences between individuals were used to estimate structural co-occurrence within and between the DMN, SN, and DAN. In brain structure and function, sparse inverse covariance estimates of network coupling were used to characterize healthy participants and patients with schizophrenia, and to identify statistically significant group differences. Evidence did not confirm that the backbone of the DMN was the primary driver of brain dysfunction in schizophrenia. Instead, functional and structural aberrations were frequently located outside of the DMN core, such as in the anterior temporoparietal junction and precuneus. Additionally, functional covariation analyses highlighted dysfunctional DMN-DAN coupling, while structural covariation results highlighted aberrant DMN-SN coupling. Our findings reframe the role of the DMN core and its relation to canonical networks in schizophrenia. We thus underline the importance of large-scale neural interactions as effective biomarkers and indicators of how to tailor psychiatric care to single patients.
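The sparse inverse covariance estimation used above to characterize network coupling can be illustrated with scikit-learn's GraphicalLassoCV. This is a minimal sketch on synthetic stand-in data, not the study's pipeline or dataset; the number of regions and the induced coupling are arbitrary.

```python
# Minimal sketch: sparse inverse covariance ("partial correlation") between
# network time series via graphical lasso. Synthetic data, illustrative only.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
n_timepoints, n_regions = 200, 5          # e.g. mean signals of 5 network nodes
X = rng.standard_normal((n_timepoints, n_regions))
X[:, 1] += 0.6 * X[:, 0]                  # induce one true coupling

model = GraphicalLassoCV().fit(X)
precision = model.precision_              # sparse inverse covariance matrix
# Nonzero off-diagonal entries indicate direct (conditional) coupling;
# group differences in such matrices are what the study compares.
```

The sparsity penalty is what makes the estimated couplings interpretable: absent edges correspond to conditional independence given all other regions.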

45 citations

Journal ArticleDOI
TL;DR: This paper exploits the idea that each individual brain region has a specific connection profile to create parcellations of the cortical mantle from MR diffusion imaging, clustering connectivity profiles gyrus by gyrus with a spatially regularized K-means approach.
Abstract: This paper exploits the idea that each individual brain region has a specific connection profile to create parcellations of the cortical mantle using MR diffusion imaging. The parcellation is performed in two steps. First, the cortical mantle is split at a macroscopic level into 36 large gyri using a sulcus recognition system. Then, for each voxel of the cortex, a connection profile is computed using a probabilistic tractography framework. The tractography is performed from q-ball fields using regularized particle trajectories. Fiber ODFs are inferred from the q-balls using a sharpening process that concentrates the weight around the q-ball local maxima. A sophisticated propagation mask computed from a T1-weighted image aligned with the diffusion data prevents the particles from crossing the cortical folds. During propagation, the particles spawn child particles in order to improve the sampling of long fascicles. For each voxel, the intersection of the particle trajectories with the gyri leads to a connectivity profile made up of only 36 connection strengths. These profiles are clustered on a gyrus-by-gyrus basis using a K-means approach that includes spatial regularization. The reproducibility of the results is studied for three subjects using spatial normalization.
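One simple way to realize the spatially regularized K-means step is to append scaled voxel coordinates to the 36-dimensional connection profiles, so that the clustering also favors spatially contiguous parcels. The sketch below illustrates that idea on random data; it is not the paper's exact regularization scheme, and the weight, cluster count, and data are arbitrary.

```python
# Illustrative sketch: spatially regularized K-means by concatenating
# connection profiles with weighted voxel coordinates. Synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_voxels = 500
profiles = rng.random((n_voxels, 36))   # connection strengths to the 36 gyri
coords = rng.random((n_voxels, 3))      # voxel positions (normalized)

spatial_weight = 0.5                    # trade-off: profile fit vs. contiguity
features = np.hstack([profiles, spatial_weight * coords])

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)
# labels[i] is the parcel assigned to voxel i within one gyrus
```

Raising `spatial_weight` yields smoother, more contiguous parcels at the cost of fidelity to the connectivity profiles.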

43 citations

Journal ArticleDOI
TL;DR: A method for automatically and accurately extracting surface meshes of several tissues of the head from anatomical magnetic resonance (MR) images and a new theory of random shapes that may prove useful in building anatomical and functional atlases are discussed.

42 citations

Book ChapterDOI
14 Sep 2014
TL;DR: On real data, this approach provides significantly higher classification accuracy than directly using Pearson's correlation; a non-parametric scheme for identifying significantly discriminative connections from classifier weights is also proposed.
Abstract: We present a Riemannian approach for classifying fMRI connectivity patterns before and after intervention in longitudinal studies. A fundamental difficulty with using connectivity as features is that covariance matrices live on the positive semi-definite cone, which renders their elements inter-related. The implicit independent feature assumption in most classifier learning algorithms is thus violated. In this paper, we propose a matrix whitening transport for projecting the covariance estimates onto a common tangent space to reduce the statistical dependencies between their elements. We show on real data that our approach provides significantly higher classification accuracy than directly using Pearson’s correlation. We further propose a non-parametric scheme for identifying significantly discriminative connections from classifier weights. Using this scheme, a number of neuroanatomically meaningful connections are found, whereas no significant connections are detected with pure permutation testing.
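The core operation — whitening each covariance matrix by a reference and mapping it through the matrix logarithm onto a common tangent space — can be sketched as follows. This is an illustration of the general tangent-space projection; it uses the arithmetic mean as the reference point for simplicity, which may differ in detail from the paper's whitening transport.

```python
# Sketch: project covariance matrices onto a common tangent space.
# Each C is whitened by the group mean M (C -> M^{-1/2} C M^{-1/2}) and
# passed through the matrix log, giving approximately Euclidean features.
import numpy as np
from scipy.linalg import sqrtm, logm

def tangent_features(covs):
    mean = np.mean(covs, axis=0)                  # arithmetic-mean reference
    w = np.linalg.inv(sqrtm(mean))                # whitening matrix M^{-1/2}
    feats = []
    for c in covs:
        s = logm(w @ c @ w)                       # log-map to tangent space
        feats.append(s[np.triu_indices_from(s)])  # vectorize upper triangle
    return np.real(np.array(feats))

rng = np.random.default_rng(0)
covs = np.array([np.cov(rng.standard_normal((4, 50))) for _ in range(6)])
X = tangent_features(covs)   # 6 samples, 4*(4+1)/2 = 10 features each
```

After this projection the vectorized entries behave much more like independent features, so standard linear classifiers can be applied to `X`.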

42 citations

Journal ArticleDOI
TL;DR: This work presents a matrix-factorization algorithm that scales to input matrices with huge numbers of both rows and columns, and that comes with convergence guarantees to reach a stationary point of the matrix-factorization problem.
Abstract: We present a matrix-factorization algorithm that scales to input matrices with both huge number of rows and columns. Learned factors may be sparse or dense and/or nonnegative, which makes our algorithm suitable for dictionary learning, sparse component analysis, and nonnegative matrix factorization. Our algorithm streams matrix columns while subsampling them to iteratively learn the matrix factors. At each iteration, the row dimension of a new sample is reduced by subsampling, resulting in lower time complexity compared to a simple streaming algorithm. Our method comes with convergence guarantees to reach a stationary point of the matrix-factorization problem. We demonstrate its efficiency on massive functional magnetic resonance imaging data (2 TB), and on patches extracted from hyperspectral images (103 GB). For both problems, which involve different penalties on rows and columns, we obtain significant speed-ups compared to state-of-the-art algorithms.
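A closely related streaming approach is available in scikit-learn as MiniBatchDictionaryLearning, which learns factors from mini-batches of samples; note this is the online dictionary-learning family the paper builds on, not the paper's subsampled algorithm itself, and all sizes below are illustrative.

```python
# Sketch of online/minibatch matrix factorization in the same spirit:
# rows of X stream in as mini-batches, and a sparse code plus a dense
# dictionary are learned incrementally. Synthetic data, illustrative only.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 40))     # 300 samples, 40 features

dico = MiniBatchDictionaryLearning(
    n_components=10, batch_size=20, alpha=1.0, random_state=0
)
codes = dico.fit_transform(X)          # sparse codes, shape (300, 10)
D = dico.components_                   # learned dictionary, shape (10, 40)
```

The paper's contribution adds subsampling along the other (row) dimension on top of this streaming scheme, which is what makes matrices that are huge in both dimensions tractable.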

42 citations


Cited by
Journal Article
TL;DR: Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems, focusing on bringing machine learning to non-specialists using a general-purpose high-level language.
Abstract: Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net.
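The uniform estimator API the abstract emphasizes boils down to `fit`/`predict`/`score` on any model. A minimal example (dataset and model choice are arbitrary):

```python
# Minimal scikit-learn usage: the same fit/score pattern works for
# essentially every estimator in the library.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)            # mean accuracy on held-out data
```

Swapping `LogisticRegression` for any other classifier leaves the rest of the code unchanged, which is the API-consistency point the abstract makes.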

47,974 citations

Posted Content
TL;DR: Scikit-learn as mentioned in this paper is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems.
Abstract: Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from this http URL.

28,898 citations

28 Jul 2005
TL;DR: PfPMP1 interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.
Abstract: Antigenic variation enables many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte surface protein 1 (PfPMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion. The var gene family encodes about 60 members per haploid genome, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Proceedings ArticleDOI
13 Aug 2016
TL;DR: XGBoost introduces a sparsity-aware algorithm for sparse data and a weighted quantile sketch for approximate tree learning, achieving state-of-the-art results on many machine learning challenges.
Abstract: Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems.
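The weighted-quantile idea behind XGBoost's approximate split finding can be illustrated in a few lines: candidate split thresholds are chosen so that each bucket holds roughly equal total weight (in XGBoost, the weights are second-order gradients). The sketch below is a plain exact computation of such candidates, not the paper's mergeable sketch data structure, and the function name is ours.

```python
# Illustrative weighted-quantile candidate selection: pick thresholds at
# equally spaced weighted ranks, so buckets carry roughly equal total weight.
import numpy as np

def weighted_quantile_candidates(values, weights, n_buckets):
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w) / w.sum()               # weighted rank in [0, 1]
    targets = np.arange(1, n_buckets) / n_buckets
    idx = np.searchsorted(cum, targets)
    return v[np.minimum(idx, len(v) - 1)]      # candidate split thresholds

rng = np.random.default_rng(0)
vals = rng.random(1000)
wts = rng.random(1000)
splits = weighted_quantile_candidates(vals, wts, n_buckets=4)  # 3 thresholds
```

The paper's contribution is doing this approximately and mergeably over distributed data with provable error bounds, rather than the exact single-pass version shown here.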

14,872 citations

Proceedings ArticleDOI
TL;DR: This paper proposes a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning and provides insights on cache access patterns, data compression and sharding to build a scalable tree boosting system called XGBoost.
Abstract: Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems.

13,333 citations