Open Access Journal Article

Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience

TL;DR
A new experimental and data-analytical framework called representational similarity analysis (RSA) is proposed, in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing representational dissimilarity matrices (RDMs).
Abstract
A fundamental challenge for systems neuroscience is to quantitatively relate its three major branches of research: brain-activity measurement, behavioral measurement, and computational modeling. Using measured brain-activity patterns to evaluate computational network models is complicated by the need to define the correspondence between the units of the model and the channels of the brain-activity data, e.g., single-cell recordings or voxels from functional magnetic resonance imaging (fMRI). Similar correspondence problems complicate relating activity patterns between different modalities of brain-activity measurement, and between subjects and species. In order to bridge these divides, we suggest abstracting from the activity patterns themselves and computing representational dissimilarity matrices (RDMs), which characterize the information carried by a given representation in a brain or model. We propose a new experimental and data-analytical framework called representational similarity analysis (RSA), in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing RDMs. We demonstrate RSA by relating representations of visual objects as measured with fMRI to computational models spanning a wide range of complexities. We argue that these ideas, which have deep roots in psychology and neuroscience, will allow the integrated quantitative analysis of data from all three branches, thus contributing to a more unified systems neuroscience.
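The two steps of the framework can be made concrete in a few lines. The sketch below is a minimal NumPy/SciPy illustration, not the authors' implementation: it computes an RDM for each representation (using 1 - Pearson correlation between condition patterns, a common but not prescribed choice) and relates two representations by rank-correlating their RDMs. The stimulus counts, channel counts, and random data are placeholders.

```python
# Minimal RSA sketch (illustrative only; all data here are random placeholders).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed representational dissimilarity matrix.

    patterns: (n_conditions, n_channels) activity patterns, e.g. voxel
    responses or model-unit activations per stimulus. Dissimilarity is
    1 - Pearson correlation between condition patterns.
    """
    return pdist(patterns, metric="correlation")

def compare_rdms(rdm_a, rdm_b):
    """Relate two representations by rank-correlating their RDMs.

    Only the RDMs are compared, so the two measurement spaces (voxels,
    cells, model units) never need to be brought into correspondence.
    """
    return spearmanr(rdm_a, rdm_b)

# Hypothetical example: an fMRI region vs. a model layer, 12 stimuli each.
rng = np.random.default_rng(0)
brain = rng.standard_normal((12, 500))    # 12 stimuli x 500 voxels
model = rng.standard_normal((12, 4096))   # 12 stimuli x 4096 model units
rho, p = compare_rdms(rdm(brain), rdm(model))
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```

Because the comparison happens at the level of dissimilarity structure, the same function relates brain data to model activations, one imaging modality to another, or one subject to another.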


Citations
Posted Content

Understanding Synthetic Gradients and Decoupled Neural Interfaces

TL;DR: In this article, the authors investigate the mechanism by which synthetic gradient estimators approximate the true loss and how that leads to drastically different layer-wise representations, expose the relationship between synthetic gradients and other error-approximation techniques, and find a unifying language for their discussion and comparison.
Journal Article

Faces and voices in the brain: A modality-general person-identity representation in superior temporal sulcus.

TL;DR: The results showed that pattern discriminants trained to discriminate pairs of identities from their faces could also discriminate the respective voices in the right posterior superior temporal sulcus (rpSTS), suggesting that the rpSTS is a person-selective multimodal region that carries a modality-general person-identity representation and integrates face and voice identity information.
Journal Article

An Integrated Neural Decoder of Linguistic and Experiential Meaning.

TL;DR: The authors present initial evidence that modeling nonlinguistic "experiential" knowledge contributes to decoding neural representations of sentence meaning, together with a model-based approach indicating that both experiential and linguistically acquired knowledge can be detected in brain activity elicited by reading natural sentences.
Posted Content

Conjunctive Representations that Integrate Stimuli, Responses, and Rules are Critical for Action Selection

TL;DR: The strength of conjunctive representations was the most important predictor of trial-by-trial variability in response times (RTs) and was closely, and selectively, related to the partial-overlap priming pattern, an important behavioral indicator of event files.
Journal Article

Representational similarity analysis reveals task-dependent semantic influence of the visual word form area.

TL;DR: The results provide positive evidence for the presence of both orthographic and task-relevant semantic information in the visual word form area (VWFA) and have significant implications for the neurobiological basis of reading.
References
Journal Article

Nonlinear dimensionality reduction by locally linear embedding.

TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and thereby learns the global structure of nonlinear manifolds.
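As a hedged illustration of how LLE is typically applied (scikit-learn's implementation rather than the paper's original code; the Swiss-roll data are a standard synthetic manifold, not data from the paper):

```python
# Illustrative LLE usage via scikit-learn (assumed available).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)  # points on a curled 2-D sheet in 3-D
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
X_2d = lle.fit_transform(X)                             # neighborhood-preserving unrolling
print(X_2d.shape)                                       # (1000, 2)
```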
Journal Article

A global geometric framework for nonlinear dimensionality reduction.

TL;DR: An approach to nonlinear dimensionality reduction is presented that uses easily measured local metric information to learn the underlying global geometry of a data set; it efficiently computes a globally optimal solution and is guaranteed to converge asymptotically to the true structure.
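A comparable hedged sketch for this approach (Isomap), again via scikit-learn rather than the original code: it estimates geodesic distances on a neighborhood graph and embeds them with classical MDS. The S-curve data are synthetic and serve only to make the example runnable.

```python
# Illustrative Isomap usage via scikit-learn (assumed available).
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

X, _ = make_s_curve(n_samples=1000, random_state=0)
iso = Isomap(n_neighbors=10, n_components=2)  # graph geodesics + classical MDS
X_2d = iso.fit_transform(X)                   # globally consistent 2-D embedding
print(X_2d.shape)                             # (1000, 2)
```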
Journal Article

Statistical parametric maps in functional imaging: A general linear approach

TL;DR: In this paper, the authors present a general linear-model approach that accommodates most forms of experimental layout and the ensuing analysis (designed experiments with fixed effects for factors, covariates, and interactions of factors).
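The general linear approach can be sketched in plain NumPy: fit Y = XB + E simultaneously for every voxel, then form a t-statistic per voxel for a contrast of the parameter estimates. This is a simplified illustration (no hemodynamic modeling, temporal autocorrelation, or multiple-comparisons correction), with a random design matrix and data as placeholders.

```python
# Hedged sketch of the mass-univariate GLM behind statistical parametric maps.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels, n_regressors = 200, 1000, 3
X = rng.standard_normal((n_scans, n_regressors))  # design matrix (conditions, covariates)
X = np.column_stack([X, np.ones(n_scans)])        # add intercept column
Y = rng.standard_normal((n_scans, n_voxels))      # one time series per voxel

# Fit Y = X @ B + E for all voxels at once (ordinary least squares).
B, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)

# t-statistic per voxel for one contrast of parameter estimates.
c = np.array([1.0, 0.0, 0.0, 0.0])                # contrast: effect of first regressor
resid = Y - X @ B
dof = n_scans - np.linalg.matrix_rank(X)
sigma2 = (resid ** 2).sum(axis=0) / dof           # residual variance per voxel
var_c = c @ np.linalg.pinv(X.T @ X) @ c
t_map = (c @ B) / np.sqrt(sigma2 * var_c)         # the "statistical parametric map"
print(t_map.shape)                                # (1000,)
```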