Open Access Journal Article

Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience

TL;DR
A new experimental and data-analytical framework called representational similarity analysis (RSA) is proposed, in which multi-channel measures of neural activity are quantitatively related to each other, and to computational theory and behavior, by comparing representational dissimilarity matrices (RDMs).
Abstract
A fundamental challenge for systems neuroscience is to quantitatively relate its three major branches of research: brain-activity measurement, behavioral measurement, and computational modeling. Using measured brain-activity patterns to evaluate computational network models is complicated by the need to define the correspondence between the units of the model and the channels of the brain-activity data, e.g., single-cell recordings or voxels from functional magnetic resonance imaging (fMRI). Similar correspondence problems complicate relating activity patterns between different modalities of brain-activity measurement, and between subjects and species. In order to bridge these divides, we suggest abstracting from the activity patterns themselves and computing representational dissimilarity matrices, which characterize the information carried by a given representation in a brain or model. We propose a new experimental and data-analytical framework called representational similarity analysis (RSA), in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing representational dissimilarity matrices. We demonstrate RSA by relating representations of visual objects as measured with fMRI to computational models spanning a wide range of complexities. We argue that these ideas, which have deep roots in psychology and neuroscience, will allow the integrated quantitative analysis of data from all three branches, thus contributing to a more unified systems neuroscience.
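The core computation the abstract describes (build a representational dissimilarity matrix per representation, then compare the matrices) can be sketched as follows. The data here are synthetic, and the specific choices of correlation distance and Spearman rank correlation follow common RSA practice rather than any exact pipeline from this paper:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Simulated activity patterns: 8 stimuli x 50 measurement channels
# (e.g. voxels for a brain region, or units for a model layer).
brain_patterns = rng.normal(size=(8, 50))
model_patterns = brain_patterns + rng.normal(scale=0.5, size=(8, 50))

def rdm(patterns):
    """Representational dissimilarity matrix: pairwise correlation distance
    between the activity patterns evoked by each pair of stimuli."""
    return squareform(pdist(patterns, metric="correlation"))

brain_rdm = rdm(brain_patterns)
model_rdm = rdm(model_patterns)

# The RDMs abstract away from the channels themselves, so brain and model
# become directly comparable: rank-correlate their upper triangles.
iu = np.triu_indices_from(brain_rdm, k=1)
rho, _ = spearmanr(brain_rdm[iu], model_rdm[iu])
print(f"RDM similarity (Spearman rho): {rho:.2f}")
```

Because only the dissimilarity structure is compared, the same recipe works across models, measurement modalities, subjects, and species without defining any channel-to-unit correspondence.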


Citations
Journal Article

fMRI Analysis-by-Synthesis Reveals a Dorsal Hierarchy That Extracts Surface Slant

TL;DR: Demonstrates a hierarchical refinement of visual representations, moving from the representation of edges and figure–ground segmentation (V1, V2) to spatially extensive disparity gradients in V3A, revealing a relatively short computational hierarchy that captures key information about the 3-D structure of nearby surfaces.
Journal Article

The Hippocampus Generalizes across Memories that Share Item and Context Information

TL;DR: Multivariate analyses of fMRI activity patterns characterize how the hippocampus distinguishes between memories based on their similarity at the level of items and/or context, lending new insight into how the hippocampus may balance multiple mnemonic operations to adaptively guide behavior.
Journal Article

Automaticity and control in prospective memory: a computational model.

TL;DR: Presents a computational model of the latter account in a parallel distributed processing (interactive activation) framework, suggesting that prospective memory (PM) results from the interplay between bottom-up triggering of PM responses by perceptual input and top-down monitoring for appropriate cues.
Journal Article

Shared neural correlates for building phrases in signed and spoken language.

TL;DR: The neurobiological similarity of sign and speech goes beyond gross measures such as lateralization: the same fronto-temporal network achieves the planning of structured linguistic expressions.
Journal Article

Redefining the resolution of semantic knowledge in the brain: Advances made by the introduction of models of semantics in neuroimaging.

TL;DR: Using multivariate analysis, predictions based on patient lesion data have been confirmed during semantic processing in healthy controls, and it is argued that models of semantics are possibly the most valuable addition to the neuroimaging study of semantics in recent years.
References
Journal Article

Nonlinear dimensionality reduction by locally linear embedding.

TL;DR: Introduces locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and thereby learns the global structure of nonlinear manifolds.
Journal Article

A global geometric framework for nonlinear dimensionality reduction.

TL;DR: An approach to dimensionality reduction that uses easily measured local metric information to learn the underlying global geometry of a data set; it efficiently computes a globally optimal solution and is guaranteed to converge asymptotically to the true structure.
Journal Article

Statistical parametric maps in functional imaging: A general linear approach

TL;DR: In this paper, the authors present a general approach that accommodates most forms of experimental layout and ensuing analysis (designed experiments with fixed effects for factors, covariates and interaction of factors).
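The general linear approach behind statistical parametric maps reduces, per voxel, to ordinary least squares on a shared design matrix followed by a contrast test. A minimal sketch on synthetic data (the block regressors and contrast are illustrative assumptions, not the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_voxels = 100, 6

# Design matrix: an intercept plus two boxcar task regressors.
scans = np.arange(n_scans)
X = np.column_stack([
    np.ones(n_scans),
    (scans // 10) % 2,        # condition A: alternating 10-scan blocks
    ((scans + 5) // 10) % 2,  # condition B: the same blocks, shifted
])
true_beta = rng.normal(size=(3, n_voxels))
Y = X @ true_beta + rng.normal(scale=0.1, size=(n_scans, n_voxels))

# OLS estimates for all voxels at once: Y = X @ beta + error.
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# t-statistic for the contrast "condition A minus condition B" per voxel.
c = np.array([0.0, 1.0, -1.0])
resid = Y - X @ beta_hat
dof = n_scans - np.linalg.matrix_rank(X)
sigma2 = (resid ** 2).sum(axis=0) / dof
var_c = c @ np.linalg.pinv(X.T @ X) @ c
t_map = (c @ beta_hat) / np.sqrt(sigma2 * var_c)
print(t_map.shape)
```

Thresholding such a voxelwise statistic map is what yields a statistical parametric map; factors, covariates, and interactions enter simply as extra columns of X.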