Open Access Proceedings Article

Neural Networks for Efficient Bayesian Decoding of Natural Images from Retinal Neurons

TLDR
In this article, an approximate Bayesian method for decoding natural images from the spiking activity of populations of retinal ganglion cells (RGCs) was developed, which sidesteps known computational challenges with Bayesian inference by exploiting artificial neural networks developed for computer vision.
Abstract
Decoding sensory stimuli from neural signals can be used to reveal how we sense our physical environment, and is valuable for the design of brain-machine interfaces. However, existing linear techniques for neural decoding may not fully reveal or exploit the fidelity of the neural signal. Here we develop a new approximate Bayesian method for decoding natural images from the spiking activity of populations of retinal ganglion cells (RGCs). We sidestep known computational challenges with Bayesian inference by exploiting artificial neural networks developed for computer vision, enabling fast nonlinear decoding that incorporates natural scene statistics implicitly. We use a decoder architecture that first linearly reconstructs an image from RGC spikes, then applies a convolutional autoencoder to enhance the image. The resulting decoder, trained on natural images and simulated neural responses, significantly outperforms linear decoding, as well as simple point-wise nonlinear decoding. These results provide a tool for the assessment and optimization of retinal prosthesis technologies, and reveal that the retina may provide a more accurate representation of the visual scene than previously appreciated.
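The decoder described above has two stages: a linear reconstruction of the image from RGC spikes, followed by a convolutional autoencoder that enhances the linear estimate. The PyTorch sketch below illustrates that structure only; the layer sizes, the 80x80 image size, and the class and parameter names are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a two-stage decoder: linear reconstruction from RGC spike
# counts, then a convolutional network that enhances the linear image.
# All sizes and names are assumptions for illustration.
import torch
import torch.nn as nn

class TwoStageDecoder(nn.Module):
    def __init__(self, n_cells: int, img_size: int = 80):
        super().__init__()
        self.img_size = img_size
        # Stage 1: linear decoder mapping spike counts to pixel intensities.
        self.linear = nn.Linear(n_cells, img_size * img_size)
        # Stage 2: convolutional enhancer that cleans up the linear
        # reconstruction, implicitly using natural-image statistics
        # learned from training data.
        self.enhance = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        # spikes: (batch, n_cells) spike counts from the RGC population.
        x = self.linear(spikes).view(-1, 1, self.img_size, self.img_size)
        return self.enhance(x)

# Training would minimize a pixel-wise reconstruction loss (e.g. nn.MSELoss())
# between the decoder output and the true stimulus images, using simulated
# RGC responses as input.
```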



Citations
Journal Article

Generative adversarial networks for reconstructing natural images from brain activity

TL;DR: A method is explored for reconstructing visual stimuli from brain activity using a deep convolutional generative adversarial network capable of generating grayscale photos similar to the stimuli presented during two functional magnetic resonance imaging experiments.
Journal Article

Interpreting encoding and decoding models.

TL;DR: Encoding and decoding models are widely used in systems, cognitive, and computational neuroscience to make sense of brain-activity data, but the interpretation of their results requires care: many models must be tested and inferentially compared for analyses to drive theoretical progress.
Journal Article

Neural data science: accelerating the experiment-analysis-theory cycle in large-scale neuroscience.

TL;DR: This review focuses on recent advances in methods for analyzing neural time-series data with single-neuronal precision.
Proceedings Article

BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos

TL;DR: This work introduces a probabilistic framework for the analysis of behavioral video and neural activity, providing tools for compression, segmentation, generation, and decoding of behavioral videos, and develops a novel Bayesian decoding approach.
Posted Content

Interpreting Encoding and Decoding Models

TL;DR: In this article, the authors evaluate decoding and encoding models in terms of their generalization performance, which depends on the level of generalization a model achieves (e.g. to new response measurements for the same stimuli, to new stimuli from the same population, or to stimuli from a different population).
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
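The Adam update maintains exponentially decaying averages of past gradients (first moment) and of their squares (second moment), with bias correction for the early steps. A minimal NumPy sketch of one update step, using the commonly quoted default hyperparameters:

```python
# One Adam update step in NumPy; m and v are the running first/second
# moment estimates, t is the (1-based) step counter.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction for warm-up
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```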
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network achieving state-of-the-art performance on ImageNet classification is described; it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
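As a rough illustration of the architecture summarized above, the PyTorch sketch below stacks five convolutional layers (three followed by max-pooling) and three fully connected layers ending in 1000 class logits, with the softmax folded into the loss. Filter counts and strides follow the commonly cited configuration and should be treated as assumptions here.

```python
# AlexNet-style network for 3x224x224 inputs; sizes are assumptions.
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # logits; softmax applied inside CrossEntropyLoss
)

# Example forward pass on a dummy batch:
logits = alexnet_like(torch.randn(1, 3, 224, 224))
```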
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of convolutional network depth on accuracy in the large-scale image recognition setting and showed that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than existing image datasets.
Journal Article

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity index is proposed for image quality assessment, based on the degradation of structural information, and is validated against subjective ratings and other objective methods on a database of images compressed with JPEG and JPEG2000.
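A short example of computing the structural similarity index between a reference image and a distorted copy, using scikit-image's implementation; the images here are synthetic placeholders.

```python
# Per-window SSIM(x, y) = ((2*mu_x*mu_y + C1) * (2*sigma_xy + C2)) /
#                         ((mu_x^2 + mu_y^2 + C1) * (sigma_x^2 + sigma_y^2 + C2)),
# averaged over the image by the library.
import numpy as np
from skimage.metrics import structural_similarity as ssim

ref = np.random.rand(80, 80)                    # reference image (placeholder data)
recon = ref + 0.05 * np.random.randn(80, 80)    # noisy reconstruction of it
score = ssim(ref, recon, data_range=ref.max() - ref.min())
print(f"SSIM = {score:.3f}")                    # 1.0 means structurally identical
```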