Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience
TLDR
A new experimental and data-analytical framework called representational similarity analysis (RSA) is proposed, in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing representational dissimilarity matrices (RDMs).

Abstract
A fundamental challenge for systems neuroscience is to quantitatively relate its three major branches of research: brain-activity measurement, behavioral measurement, and computational modeling. Using measured brain-activity patterns to evaluate computational network models is complicated by the need to define the correspondence between the units of the model and the channels of the brain-activity data, e.g., single-cell recordings or voxels from functional magnetic resonance imaging (fMRI). Similar correspondence problems complicate relating activity patterns between different modalities of brain-activity measurement, and between subjects and species. In order to bridge these divides, we suggest abstracting from the activity patterns themselves and computing representational dissimilarity matrices, which characterize the information carried by a given representation in a brain or model. We propose a new experimental and data-analytical framework called representational similarity analysis (RSA), in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing representational dissimilarity matrices. We demonstrate RSA by relating representations of visual objects as measured with fMRI to computational models spanning a wide range of complexities. We argue that these ideas, which have deep roots in psychology and neuroscience, will allow the integrated quantitative analysis of data from all three branches, thus contributing to a more unified systems neuroscience.
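The core computation the abstract describes can be sketched in a few lines: build an RDM for each representation (brain or model) and compare the RDMs directly, sidestepping the channel-to-unit correspondence problem. A minimal sketch with synthetic data; the pattern matrices, the correlation-distance measure, and the Spearman comparison are illustrative choices, not the paper's exact pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical data: activity patterns for 20 stimuli measured over
# 100 brain channels (e.g. voxels) and, separately, 50 model units.
rng = np.random.default_rng(0)
brain_patterns = rng.standard_normal((20, 100))
model_patterns = rng.standard_normal((20, 50))

def rdm(patterns):
    # Representational dissimilarity matrix in condensed form:
    # correlation distance (1 - Pearson r) for each stimulus pair.
    return pdist(patterns, metric="correlation")

# The two representations have different dimensionality, but their
# RDMs share a common shape (one entry per stimulus pair), so they
# can be compared directly, here via Spearman rank correlation.
rho, _ = spearmanr(rdm(brain_patterns), rdm(model_patterns))
```

Because each RDM is indexed only by stimulus pairs, the same comparison works across subjects, species, measurement modalities, and models of any internal dimensionality.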
Citations
Posted Content
Comparing representational geometries using whitened unbiased-distance-matrix similarity
Jörn Diedrichsen, Eva Berlot, Marieke Mur, Heiko H. Schütt, Mahdiyar Shahbazi, Nikolaus Kriegeskorte +5 more
TL;DR: The whitened unbiased RDM cosine similarity (WUC) is proposed as a new criterion for RDM similarity, allowing near-optimal model selection combined with robustness to correlated measurement noise.
Posted Content DOI
Automated EEG mega-analysis I: Spectral and amplitude characteristics across studies
Nima Bigdely-Shamlo, Jonathan Touryan, Alejandro Ojeda, Christian Kothe, Tim Mullen, Kay A. Robbins +5 more
TL;DR: It is demonstrated that when meta-data are consistent across studies, both channel-level and source-level EEG mega-analysis are possible and can provide insights unavailable in single studies.
Posted Content
If deep learning is the answer, then what is the question?
TL;DR: A roadmap for systems neuroscience research in the age of deep learning is offered and the conceptual and methodological challenges of comparing behaviour, learning dynamics, and neural representation in artificial and biological systems are discussed.
Journal Article DOI
Decoding individual identity from brain activity elicited in imagining common experiences
Andrew J. Anderson, Kelsey McDermott, Brian Rooks, Kathi L. Heffner, David Dodell-Feder, Feng Lin +6 more
TL;DR: It is demonstrated that participants' neural representations are better predicted by their own models than by other people's, showcasing how neuroimaging and personalized models can quantify individual differences in imagined experiences.
Journal Article DOI
Neurocomputational mechanisms underlying immoral decisions benefiting self or others.
Chen Qu, Yang Hu, Zixuan Tang, Edmund Derrington, Jean-Claude Dreher +8 more
TL;DR: Model-based functional magnetic resonance imaging is used to investigate how immoral behaviors change when benefiting oneself or someone else, providing insights into the neurobiological basis of moral flexibility.
References
Journal Article DOI
Nonlinear dimensionality reduction by locally linear embedding.
Sam T. Roweis, Lawrence K. Saul +1 more
TL;DR: Locally linear embedding (LLE), an unsupervised learning algorithm, is introduced; it computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and thereby learns the global structure of nonlinear manifolds.
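A minimal sketch of LLE on a toy nonlinear manifold, using scikit-learn's `LocallyLinearEmbedding`; the swiss-roll dataset and the hyperparameters are illustrative assumptions:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# 500 points sampled from a 2-D manifold embedded in 3-D space.
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# LLE reconstructs each point from its neighbors and finds a
# low-dimensional embedding preserving those local weights.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                             random_state=0)
X_2d = lle.fit_transform(X)  # shape (500, 2)
```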
Journal Article DOI
A global geometric framework for nonlinear dimensionality reduction.
TL;DR: An approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set and efficiently computes a globally optimal solution, and is guaranteed to converge asymptotically to the true structure.
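This global geometric framework is the algorithm known as Isomap, which uses neighborhood graphs and geodesic distances to recover the manifold's global structure. A minimal sketch with scikit-learn's `Isomap`; the toy dataset and parameters are illustrative assumptions:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Same toy manifold: 500 points in 3-D lying on a rolled 2-D sheet.
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# Isomap estimates geodesic distances via shortest paths on a
# k-nearest-neighbor graph, then embeds them with classical MDS.
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
```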
Journal Article DOI
Statistical parametric maps in functional imaging: A general linear approach
Karl J. Friston, Andrew P. Holmes, Keith J. Worsley, J-B. Poline, Chris D. Frith, Richard S. J. Frackowiak +5 more
TL;DR: The authors present a general approach that accommodates most forms of experimental layout and ensuing analysis (designed experiments with fixed effects for factors, covariates, and interactions of factors).
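The general linear model underlying statistical parametric mapping fits each channel's time series as a linear combination of design regressors, then tests contrasts of the fitted parameters. A minimal sketch with NumPy least squares; the design matrix, noise level, and effect sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_voxels = 120, 4

# Hypothetical design matrix: a task on/off boxcar plus an intercept.
boxcar = np.tile([1.0] * 10 + [0.0] * 10, 6)
X = np.column_stack([boxcar, np.ones(n_scans)])

# Simulated data: voxel 0 responds to the task, voxel 1 does not.
beta_true = np.array([[2.0, 0.0, 1.0, -1.0],   # task effects
                      [5.0, 5.0, 5.0, 5.0]])   # baselines
Y = X @ beta_true + 0.1 * rng.standard_normal((n_scans, n_voxels))

# Ordinary least-squares parameter estimates, one column per voxel.
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# t-statistic for the task contrast at each voxel.
c = np.array([1.0, 0.0])
residuals = Y - X @ beta_hat
dof = n_scans - np.linalg.matrix_rank(X)
sigma2 = (residuals ** 2).sum(axis=0) / dof
var_c = c @ np.linalg.inv(X.T @ X) @ c
t = (c @ beta_hat) / np.sqrt(sigma2 * var_c)
```

A statistical parametric map is simply this per-channel statistic assembled back into image space and thresholded for significance.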