Open Access Journal Article

Comprehensive decoding mental processes from Web repositories of functional brain images

TLDR
In this article, the authors trained neural networks to predict cognitive labels on tens of thousands of brain images and successfully decoded more than 50 classes of mental processes on a large test set, demonstrating that image-based meta-analyses can be undertaken at scale and with minimal manual data curation.
Abstract
Associating brain systems with mental processes requires statistical analysis of brain activity across many cognitive processes. These analyses typically face a difficult compromise between scope (from domain-specific to system-level analysis) and accuracy. Using all the functional Magnetic Resonance Imaging (fMRI) statistical maps of the largest data repository available, we trained machine-learning models that decode the cognitive concepts probed in unseen studies. For this, we leveraged two comprehensive resources: NeuroVault, an open repository of fMRI statistical maps with unconstrained annotations, and Cognitive Atlas, an ontology of cognition. We labeled NeuroVault images with the Cognitive Atlas concepts occurring in their associated metadata. We trained neural networks to predict these cognitive labels on tens of thousands of brain images. Overcoming the heterogeneity, imbalance and noise in the training data, we successfully decoded more than 50 classes of mental processes on a large test set. This success demonstrates that image-based meta-analyses can be undertaken at scale and with minimal manual data curation. It enables broad reverse inferences, that is, inferring mental processes from observed brain activity.
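The decoding setup the abstract describes is, at its core, large-scale multi-label classification over vectorized statistical maps. Below is a minimal sketch of that setup in PyTorch; the dimensions, architecture, and random stand-in data are illustrative assumptions, not the authors' published pipeline.

    import torch
    import torch.nn as nn

    # Illustrative dimensions (assumed, not from the paper): each brain
    # map is flattened to a voxel vector; a map can carry several
    # Cognitive Atlas concept labels at once, hence multi-label targets.
    N_VOXELS = 10_000
    N_CONCEPTS = 50

    # Random stand-ins for NeuroVault maps and their binary label vectors.
    maps = torch.randn(256, N_VOXELS)
    labels = (torch.rand(256, N_CONCEPTS) > 0.9).float()

    # A small feed-forward decoder with one logit per concept.
    model = nn.Sequential(
        nn.Linear(N_VOXELS, 256),
        nn.ReLU(),
        nn.Linear(256, N_CONCEPTS),
    )

    # Binary cross-entropy with logits treats each concept as an
    # independent yes/no prediction, which suits multi-label decoding.
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(5):
        optimizer.zero_grad()
        loss = criterion(model(maps), labels)
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")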


Citations
Journal Article

On the benefits of self-taught learning for brain decoding

TL;DR: In this article, a self-taught learning framework was proposed for improving brain decoding on new tasks, but the magnitude of the benefits strongly depends on the number of samples available both for pretraining and for fine-tuning the models, and on the complexity of the targeted downstream task.
Journal Article

On the benefits of self-taught learning for brain decoding

TL;DR: The results suggest that pre-training can be beneficial when studying difficult classification problems, such as those with few training samples or complex classification tasks.
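Both versions of this citing paper evaluate a pretrain-then-fine-tune workflow. A minimal sketch of that workflow follows; the frozen encoder, layer sizes, and random data are illustrative assumptions, not the paper's actual architecture.

    import torch
    import torch.nn as nn

    # Encoder standing in for a network pretrained on a large source
    # task (weights here are random placeholders, not pretrained ones).
    encoder = nn.Sequential(nn.Linear(10_000, 256), nn.ReLU())

    # Freeze the encoder so fine-tuning only adapts the new head.
    for param in encoder.parameters():
        param.requires_grad = False

    # Task-specific head for a small downstream decoding problem.
    head = nn.Linear(256, 10)
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    x = torch.randn(32, 10_000)        # few downstream samples
    y = torch.randint(0, 10, (32,))    # downstream class labels

    for _ in range(3):
        optimizer.zero_grad()
        loss = criterion(head(encoder(x)), y)
        loss.backward()
        optimizer.step()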
References
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article

Scikit-learn: Machine Learning in Python

TL;DR: Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems, focusing on bringing machine learning to non-specialists using a general-purpose high-level language.
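As a minimal illustration of the uniform estimator API (fit, predict, score) that the library is known for, with synthetic data standing in for any real problem:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for any feature matrix and label vector.
    X, y = make_classification(n_samples=200, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Every scikit-learn estimator exposes the same fit/score interface.
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))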
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
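In PyTorch terms, dropout is a single layer that randomly zeroes activations during training and becomes the identity at evaluation time; a minimal sketch with an arbitrary rate and layer sizes:

    import torch
    import torch.nn as nn

    # Dropout zeroes each activation with probability p during training,
    # discouraging co-adaptation of hidden units (the paper's rationale).
    model = nn.Sequential(
        nn.Linear(100, 64),
        nn.ReLU(),
        nn.Dropout(p=0.5),   # active only in train mode
        nn.Linear(64, 10),
    )

    x = torch.randn(8, 100)
    model.train()   # dropout randomly zeroes ~half the hidden units
    train_out = model(x)
    model.eval()    # dropout is disabled; outputs are deterministic
    eval_out = model(x)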
Journal Article

Regularization and variable selection via the elastic net

TL;DR: It is shown that the elastic net often outperforms the lasso while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much like algorithm LARS does for the lasso.
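The elastic net penalizes a linear model with a weighted mix of the lasso's L1 term and ridge's L2 term. scikit-learn's ElasticNet is used below as an illustration (it is a coordinate-descent solver, not the LARS-EN algorithm proposed in the paper) and exposes that mix through l1_ratio:

    import numpy as np
    from sklearn.linear_model import ElasticNet

    # Synthetic sparse regression problem: only 5 informative features.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 50))
    true_coef = np.zeros(50)
    true_coef[:5] = 1.0
    y = X @ true_coef + 0.1 * rng.standard_normal(100)

    # l1_ratio blends the penalties: 1.0 is pure lasso, 0.0 pure ridge.
    model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
    print("nonzero coefficients:", np.sum(model.coef_ != 0))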

Automatic differentiation in PyTorch

TL;DR: An automatic differentiation module of PyTorch is described: a library designed to enable rapid research on machine learning models, built around differentiation of purely imperative programs with an emphasis on extensibility and low overhead.
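A minimal sketch of the imperative differentiation style the summary refers to: operations on tensors marked requires_grad are recorded as ordinary Python executes, including control flow, and gradients are recovered with backward().

    import torch

    # Mark a tensor as requiring gradients; ops on it are recorded.
    x = torch.tensor([2.0, 3.0], requires_grad=True)

    # An ordinary imperative computation, with Python control flow.
    y = (x ** 2).sum()
    if y > 1:
        y = y * 2

    # Reverse-mode differentiation through the recorded operations.
    y.backward()
    print(x.grad)   # d(2 * sum(x^2))/dx = 4 * x -> tensor([8., 12.])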