
Efficient coding hypothesis

About: The efficient coding hypothesis is a research topic. Over its lifetime, 139 publications have been published within this topic, receiving 33,256 citations.


Papers
Journal Article
13 Jun 1996 · Nature
TL;DR: It is shown that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex.
Abstract: The receptive fields of simple cells in mammalian primary visual cortex can be characterized as being spatially localized, oriented and bandpass (selective to structure at different spatial scales), comparable to the basis functions of wavelet transforms. One approach to understanding such response properties of visual neurons has been to consider their relationship to the statistical structure of natural images in terms of efficient coding. Along these lines, a number of studies have attempted to train unsupervised learning algorithms on natural images in the hope of developing receptive fields with similar properties, but none has succeeded in producing a full set that spans the image space and contains all three of the above properties. Here we investigate the proposal that a coding strategy that maximizes sparseness is sufficient to account for these properties. We show that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex. The resulting sparse image code provides a more efficient representation for later stages of processing because it possesses a higher degree of statistical independence among its outputs.

5,947 citations
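
The learning procedure summarized in this abstract alternates between inferring sparse coefficients for each image patch and nudging the basis functions toward the remaining reconstruction error. The sketch below is a minimal, illustrative version of that loop, not the authors' implementation: the random stand-in patches, the log-based sparsity penalty, the step sizes, and the batch size are assumptions made here for brevity (real experiments use whitened patches cut from natural scenes, after which the learned columns of Phi become localized, oriented, bandpass filters).

```python
# Minimal sparse-coding sketch (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: random vectors play the role of whitened natural image
# patches (the real experiment extracts 8x8 patches from natural scenes).
n_patches, patch_dim, n_basis = 2000, 64, 64
X = rng.standard_normal((n_patches, patch_dim))

# Dictionary of basis functions, one per column, kept at unit norm.
Phi = rng.standard_normal((patch_dim, n_basis))
Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)

lam, lr_a, lr_phi = 0.1, 0.01, 0.02   # assumed penalty weight and step sizes

def infer_coefficients(x, Phi, n_steps=100):
    """Gradient descent on 0.5*||x - Phi a||^2 + lam * sum(log(1 + a^2))."""
    a = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        residual = x - Phi @ a
        grad = -Phi.T @ residual + lam * 2.0 * a / (1.0 + a**2)
        a -= lr_a * grad
    return a

for _ in range(50):                        # alternate inference and learning
    batch = X[rng.choice(n_patches, size=100, replace=False)]
    A = np.array([infer_coefficients(x, Phi) for x in batch])
    residual = batch - A @ Phi.T           # reconstruction error per patch
    Phi += lr_phi * residual.T @ A / len(batch)   # Hebbian-style update
    Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)
```

The alternation mirrors the paper's account: coefficient inference enforces sparseness, while the dictionary update is driven purely by the residual that the current basis functions fail to explain.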

Journal Article
TL;DR: These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli.

3,840 citations

Journal Article
TL;DR: The results obtained with six natural images suggest that the orientation and the spatial-frequency tuning of mammalian simple cells are well suited for coding the information in such images if the goal of the code is to convert higher-order redundancy into first-order redundancy.
Abstract: The relative efficiency of any particular image-coding scheme should be defined only in relation to the class of images that the code is likely to encounter. To understand the representation of images by the mammalian visual system, it might therefore be useful to consider the statistics of images from the natural environment (i.e., images with trees, rocks, bushes, etc.). In this study, various coding schemes are compared in relation to how they represent the information in such natural images. The coefficients of such codes are represented by arrays of mechanisms that respond to local regions of space, spatial frequency, and orientation (Gabor-like transforms). For many classes of image, such codes will not be an efficient means of representing information. However, the results obtained with six natural images suggest that the orientation and the spatial-frequency tuning of mammalian simple cells are well suited for coding the information in such images if the goal of the code is to convert higher-order redundancy (e.g., correlation between the intensities of neighboring pixels) into first-order redundancy (i.e., the response distribution of the coefficients). Such coding produces a relatively high signal-to-noise ratio and permits information to be transmitted with only a subset of the total number of cells. These results support Barlow's theory that the goal of natural vision is to represent the information in the natural environment with minimal redundancy.

3,077 citations
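
The claim about converting higher-order redundancy into first-order redundancy can be made concrete by comparing the response distribution of a Gabor-like filter with the distribution of raw pixel values: a heavy-tailed (high-kurtosis) response distribution means most coefficients sit near zero and the information is carried by a small subset of active cells. The sketch below shows one way to run that comparison; the synthetic smoothed-noise image is a placeholder assumption, and for it both numbers come out near zero, whereas a real natural image typically yields a far larger kurtosis for the filter responses than for the pixels.

```python
# Comparing pixel and Gabor-response kurtosis (synthetic stand-in image).
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)

def gabor(size=16, wavelength=6.0, theta=0.0, sigma=4.0):
    """Even-symmetric Gabor patch: localized, oriented, bandpass."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()

def excess_kurtosis(v):
    """0 for Gaussian data; large positive values indicate a sparse code."""
    v = (v - v.mean()) / v.std()
    return (v**4).mean() - 3.0

# Stand-in "image": smoothed Gaussian noise. Replace with a real natural
# image (loaded as a 2-D float array) to see the effect described above.
noise = rng.standard_normal((256, 256))
box = np.ones((5, 5)) / 25.0
image = (sliding_window_view(noise, box.shape) * box).sum(axis=(-1, -2))

g = gabor()
responses = (sliding_window_view(image, g.shape) * g).sum(axis=(-1, -2))

# For this Gaussian stand-in both numbers are near zero; for natural images
# the Gabor-response kurtosis is typically much larger than the pixel kurtosis.
print("pixel excess kurtosis:          %.2f" % excess_kurtosis(image.ravel()))
print("Gabor response excess kurtosis: %.2f" % excess_kurtosis(responses.ravel()))
```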

Book
15 Nov 1996
TL;DR: Spikes provides a self-contained review of relevant concepts in information theory and statistical decision theory for the representation of sensory signals in neural spike trains, and uses a quantitative framework to pose precise questions about the structure of the neural code.
Abstract: Our perception of the world is driven by input from the sensory nerves. This input arrives encoded as sequences of identical spikes. Much of neural computation involves processing these spike trains. What does it mean to say that a certain set of spikes is the right answer to a computational problem? In what sense does a spike train convey information about the sensory world? Spikes begins by providing precise formulations of these and related questions about the representation of sensory signals in neural spike trains. The answers to these questions are then pursued in experiments on sensory neurons. The authors invite the reader to play the role of a hypothetical observer inside the brain who makes decisions based on the incoming spike trains. Rather than asking how a neuron responds to a given stimulus, the authors ask how the brain could make inferences about an unknown stimulus from a given neural response. The flavor of some problems faced by the organism is captured by analyzing the way in which the observer can make a running reconstruction of the sensory stimulus as it evolves in time. These ideas are illustrated by examples from experiments on several biological systems. Intended for neurobiologists with an interest in mathematical analysis of neural data as well as the growing number of physicists and mathematicians interested in information processing by "real" nervous systems, Spikes provides a self-contained review of relevant concepts in information theory and statistical decision theory. A quantitative framework is used to pose precise questions about the structure of the neural code. These questions in turn influence both the design and analysis of experiments on sensory neurons.

2,811 citations
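
One exercise the book describes, playing the downstream observer who reconstructs the stimulus from the incoming spike train, can be sketched with a linear decoding filter fit by least squares. Everything below is an illustrative toy: the linear-nonlinear-Poisson style model neuron, the filter length, the ridge penalty, and the time resolution are assumptions made here, not the book's experimental systems.

```python
# Toy linear reconstruction of a stimulus from a simulated spike train.
import numpy as np

rng = np.random.default_rng(1)

T, dt, filt_len = 5000, 0.001, 50          # time bins, bin width (s), decoder taps
stimulus = rng.standard_normal(T)

# Simulated encoder: spikes are more likely when a smoothed copy of the
# stimulus is high (a crude linear-nonlinear-Poisson stand-in).
encoding_filter = np.exp(-np.arange(20) / 5.0)
drive = np.convolve(stimulus, encoding_filter, mode="full")[:T]
rate = 20.0 * np.exp(drive - drive.mean() - 1.0)   # instantaneous rate, spikes/s
spikes = rng.poisson(rate * dt)                    # spike counts per bin

# Decoder: estimate stimulus at time t from the spike counts that follow it,
# s_hat(t) = sum_k w_k * spikes(t + k), fit by ridge-regularized least squares.
S = np.array([spikes[t:t + filt_len] for t in range(T - filt_len)])
target = stimulus[:T - filt_len]
ridge = 1e-2 * np.eye(filt_len)
w = np.linalg.solve(S.T @ S + ridge, S.T @ target)

reconstruction = S @ w
print("decoding correlation: %.2f" % np.corrcoef(reconstruction, target)[0, 1])
```

Because spikes are driven by the recent past of the stimulus, the decoder reads the spike counts that follow each stimulus sample; the printed correlation measures how well the running reconstruction tracks the true signal.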

Journal Article
TL;DR: It is shown that a new unsupervised learning algorithm based on information maximization, a nonlinear "infomax" network, when applied to an ensemble of natural scenes, produces sets of visual filters that are localized and oriented.

2,354 citations
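
For readers who want to see the shape of an infomax update, the sketch below applies the logistic, natural-gradient form of the rule to a two-source blind-separation toy rather than to natural scenes. The Laplace sources, mixing matrix, learning rate, and batch size are assumptions chosen for brevity; this is a generic illustration of the information-maximization idea, not a reproduction of the paper's experiments.

```python
# Toy infomax-style unmixing of two super-Gaussian sources.
import numpy as np

rng = np.random.default_rng(2)

# Two super-Gaussian sources mixed linearly (stand-ins for image data).
n_samples = 20000
sources = rng.laplace(size=(2, n_samples))
mixing = np.array([[1.0, 0.6],
                   [0.4, 1.0]])
X = mixing @ sources

W = np.eye(2)                      # unmixing matrix to be learned
lr, batch_size = 0.01, 200

for epoch in range(20):
    for start in range(0, n_samples, batch_size):
        x = X[:, start:start + batch_size]
        u = W @ x                              # candidate source estimates
        y = 1.0 / (1.0 + np.exp(-u))           # logistic nonlinearity
        # Natural-gradient infomax update: dW = (I + (1 - 2y) u^T / N) W
        dW = (np.eye(2) + (1.0 - 2.0 * y) @ u.T / batch_size) @ W
        W += lr * dW

# Rows of W @ mixing should approach a scaled permutation of the identity,
# i.e. each output recovers one source up to scale and order.
print(np.round(W @ mixing, 2))
```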


Network Information
Related Topics (5)
Visual cortex: 18.8K papers, 1.2M citations, 72% related
Sensory system: 20K papers, 1M citations, 69% related
Neuron: 22.5K papers, 1.3M citations, 69% related
Neurogenesis: 20.1K papers, 1.2M citations, 68% related
Axon: 18.9K papers, 1M citations, 68% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2021    6
2020    6
2019    5
2018    2
2017    5
2016    6