Open Access Journal Article (DOI)

Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience

TL;DR
A new experimental and data-analytical framework called representational similarity analysis (RSA) is proposed, in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing representational dissimilarity matrices (RDMs).
Abstract
A fundamental challenge for systems neuroscience is to quantitatively relate its three major branches of research: brain-activity measurement, behavioral measurement, and computational modeling. Using measured brain-activity patterns to evaluate computational network models is complicated by the need to define the correspondence between the units of the model and the channels of the brain-activity data, e.g. single-cell recordings or voxels from functional magnetic resonance imaging (fMRI). Similar correspondence problems complicate relating activity patterns between different modalities of brain-activity measurement, and between subjects and species. In order to bridge these divides, we suggest abstracting from the activity patterns themselves and computing representational dissimilarity matrices, which characterize the information carried by a given representation in a brain or model. We propose a new experimental and data-analytical framework called representational similarity analysis (RSA), in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing representational dissimilarity matrices. We demonstrate RSA by relating representations of visual objects as measured with fMRI to computational models spanning a wide range of complexities. We argue that these ideas, which have deep roots in psychology and neuroscience, will allow the integrated quantitative analysis of data from all three branches, thus contributing to a more unified systems neuroscience.
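The RDM comparison at the heart of RSA is easy to sketch in code. Below is a minimal, self-contained illustration with simulated data; the pattern matrices, the correlation-distance metric, and the rank-correlation comparison are illustrative choices, not the authors' implementation.

```python
# Minimal RSA sketch on simulated data (not the authors' code).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Activity patterns: rows = experimental conditions (e.g. visual objects),
# columns = measurement channels (voxels, neurons, or model units).
brain_patterns = rng.standard_normal((20, 100))  # e.g. 20 objects x 100 voxels
model_patterns = rng.standard_normal((20, 50))   # e.g. 20 objects x 50 model units

# Representational dissimilarity matrix (RDM): pairwise correlation distance
# between condition patterns. Its size depends only on the number of
# conditions, so RDMs from different modalities (or models) are comparable
# without defining any unit-to-channel correspondence.
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_patterns, metric="correlation")

# Compare representations by rank-correlating the RDMs' upper triangles.
rho, p = spearmanr(brain_rdm, model_rdm)
print(f"brain-model RDM correlation: rho={rho:.3f}, p={p:.3f}")
```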


Citations
Posted Content (DOI)

Entorhinal and ventromedial prefrontal cortices abstract and generalise the structure of reinforcement learning problems

TL;DR: This work introduces a task-remapping paradigm in which subjects solve multiple reinforcement learning (RL) problems that differ in structural or sensory properties, and shows that, as with space, entorhinal representations are preserved across RL problems only if task structure is preserved.
Journal Article (DOI)

Age-related differences in the neural correlates of vivid remembering.

TL;DR: Results showed that highly vivid memories were associated with greater precuneus activity in young adults than in older adults, providing new evidence that aging is associated with reduced reinstatement of activity in the brain regions that processed complex stimuli during encoding, even though older individuals judge these impoverished memory representations as subjectively vivid.
Journal Article (DOI)

Relating the Past with the Present: Information Integration and Segregation during Ongoing Narrative Processing.

TL;DR: The authors examined how the brain dynamically updates event representations by integrating new information over multiple minutes while segregating irrelevant input, and found that storyline-specific neural patterns were reinstated (i.e., became more active) during storyline transitions.
Posted Content (DOI)

Connecting Concepts in the Brain: Mapping Cortical Representations of Semantic Relations

TL;DR: It is concluded that the human brain uses distributed networks to encode not only concepts but also relationships between concepts, and that the default mode network plays a central role in semantic processing and the abstraction of concepts.
Journal Article (DOI)

Do domain-general executive resources play a role in linguistic prediction? Re-evaluation of the evidence and a path forward

TL;DR: The most compelling evidence is an apparent reduction in predictive behavior during language comprehension in populations with lower executive resources, such as children, older adults, and second-language (L2) learners; the authors propose that these between-population differences can be explained without invoking executive resources.
References
Journal Article (DOI)

Nonlinear dimensionality reduction by locally linear embedding.

TL;DR: Locally linear embedding (LLE) is introduced: an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and thereby learns the global structure of nonlinear manifolds.
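For readers who want to try LLE, a minimal sketch using scikit-learn's implementation; the swiss-roll data and parameter choices are illustrative assumptions, not taken from the paper.

```python
# Hypothetical LLE usage via scikit-learn (not the paper's original code).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# A nonlinear manifold: a 3-D swiss roll that is intrinsically 2-D.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# LLE reconstructs each point from its nearest neighbors, then finds a
# low-dimensional embedding that preserves those local reconstruction weights.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
X_embedded = lle.fit_transform(X)
print(X_embedded.shape)  # (1000, 2)
```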
Journal Article (DOI)

A global geometric framework for nonlinear dimensionality reduction.

TL;DR: An approach to nonlinear dimensionality reduction is described that uses easily measured local metric information to learn the underlying global geometry of a data set; it efficiently computes a globally optimal solution and is guaranteed to converge asymptotically to the true structure.
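This describes Isomap; a minimal sketch using scikit-learn's implementation follows, with an illustrative S-curve dataset and parameter values that are our assumptions, not the paper's.

```python
# Hypothetical Isomap usage via scikit-learn (not the paper's original code).
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

# Another intrinsically 2-D manifold embedded in 3-D space.
X, _ = make_s_curve(n_samples=1000, random_state=0)

# Isomap estimates geodesic distances along the manifold from a
# k-nearest-neighbor graph, then applies classical MDS to those distances.
iso = Isomap(n_neighbors=10, n_components=2)
X_embedded = iso.fit_transform(X)
print(X_embedded.shape)  # (1000, 2)
```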
Journal Article (DOI)

Statistical parametric maps in functional imaging: A general linear approach

TL;DR: In this paper, the authors present a general approach that accommodates most forms of experimental layout and ensuing analysis (designed experiments with fixed effects for factors, covariates, and interactions of factors).
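As a rough illustration of the general linear approach (a sketch of ordinary least squares on one simulated voxel, not SPM's actual implementation), one can fit a design matrix to a time series and test a contrast:

```python
# Minimal general-linear-model sketch for one voxel's time series
# (simulated design and data; illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_scans = 120

# Design matrix: a task regressor (boxcar), a linear drift, and an intercept.
task = np.tile([0.0] * 10 + [1.0] * 10, n_scans // 20)
drift = np.linspace(-1.0, 1.0, n_scans)
X = np.column_stack([task, drift, np.ones(n_scans)])

# Simulated voxel time series: true task effect of 2.0 plus noise.
y = X @ np.array([2.0, 0.5, 10.0]) + rng.standard_normal(n_scans)

# Ordinary least squares: beta_hat = (X'X)^-1 X'y.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# t-statistic for the task effect (contrast c = [1, 0, 0]).
c = np.array([1.0, 0.0, 0.0])
resid = y - X @ beta
dof = n_scans - X.shape[1]
sigma2 = resid @ resid / dof
t = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.inv(X.T @ X) @ c))
print(f"task beta={beta[0]:.2f}, t={t:.2f}")
```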