
Showing papers on "Autoencoder published in 2003"


Proceedings ArticleDOI
20 Jul 2003
TL;DR: Using a contraction-mapping argument, this paper provides insight into the convergence of several iterative autoencoder-based sensor-restoration methods to a unique answer for a given operating point (i.e., the known sensor values), regardless of how the missing sensor values are initialized.
Abstract: The neural network autoencoder is a useful tool for restoring missing sensors when enough known sensors with some relation to the missing ones are available. Through the idea of a contraction mapping, this paper provides insight into the convergence of several iterative autoencoder-based sensor-restoration methods to a unique answer for a given operating point (i.e., the known sensor values), regardless of how the missing sensor values are initialized.

28 citations
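A minimal sketch (not from the paper) of the iterative restoration scheme the abstract describes: the known sensor values are held fixed while the missing entries are repeatedly replaced by the autoencoder's reconstruction until a fixed point is reached. The function names and convergence tolerance are illustrative assumptions.

```python
import numpy as np

def restore_missing(autoencoder, x, missing_mask, n_iters=100, tol=1e-6):
    """Iteratively restore missing sensor values with a trained autoencoder.

    autoencoder  : any callable mapping a full sensor vector to its reconstruction
    x            : sensor vector with arbitrary initial guesses in the missing slots
    missing_mask : boolean array, True where the sensor value is missing

    On the paper's contraction-mapping view, this iteration converges to a
    unique fixed point determined by the known sensor values, regardless of
    how the missing entries are initialized.
    """
    x = x.copy()
    for _ in range(n_iters):
        x_hat = autoencoder(x)                 # full reconstruction
        delta = np.max(np.abs(x_hat[missing_mask] - x[missing_mask]))
        x[missing_mask] = x_hat[missing_mask]  # update only the missing slots
        if delta < tol:                        # approximate fixed point reached
            break
    return x
```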


Journal ArticleDOI
TL;DR: A viewpoint-invariant face recognition method in which several viewpoint-dependent classifiers are combined by a gating network, so that one of the classifiers is selectively activated depending on the viewpoint of a given face image.

12 citations
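A rough sketch (assumptions, not the authors' code) of the gating scheme the TL;DR describes: a gating network scores the likely viewpoint of the face image and selectively activates the corresponding viewpoint-dependent classifier. `gating_net` and `classifiers` are hypothetical placeholders.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def gated_prediction(gating_net, classifiers, face_image):
    """Select one viewpoint-dependent classifier via a gating network.

    gating_net(face_image) returns one score per viewpoint; the classifier
    for the most probable viewpoint is activated (hard selection), matching
    the 'selectively activated' behavior described above.
    """
    weights = softmax(gating_net(face_image))  # viewpoint posterior
    k = int(np.argmax(weights))                # pick the winning expert
    return classifiers[k](face_image)
```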


Journal ArticleDOI
TL;DR: A cost function for the autoencoder is defined, from which a learning rule is derived by gradient descent within a mean-field approximation; the resulting response properties of the MST neurons are similar to those obtained in neurophysiological experiments.
Abstract: We propose a model for a system with middle temporal neurons and medial superior temporal (MST) neurons by using a three-layered autoencoder. Noise effects are taken into account using the framework of statistical physics. We define a cost function for the autoencoder, from which a learning rule is derived by a gradient descent method within a mean-field approximation. We find a pair of values of the two noise levels at which the cost function attains its minimum. We investigate the response properties of the MST neurons to optical flows for various types of motion at this optimal pair of noise levels, and find that they are similar to those obtained in neurophysiological experiments.

6 citations
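As a loose illustration only: the sketch below trains a plain three-layer (linear) autoencoder by gradient descent on a reconstruction cost, with Gaussian noise injected at the input and hidden layers to stand in for the paper's two noise levels. It omits the statistical-physics framework and the mean-field approximation entirely; all names and the linear units are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_noisy_autoencoder(X, n_hidden, sigma_in, sigma_hid, lr=0.01, epochs=200):
    """Gradient descent on a mean squared reconstruction cost for a
    three-layer linear autoencoder, with noise of level sigma_in at the
    input layer and sigma_hid at the hidden layer."""
    n = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (n_hidden, n))  # encoder weights
    W2 = rng.normal(0.0, 0.1, (n, n_hidden))  # decoder weights
    for _ in range(epochs):
        Xn = X + sigma_in * rng.normal(size=X.shape)      # noisy input
        H = Xn @ W1.T + sigma_hid * rng.normal(size=(len(X), n_hidden))
        X_hat = H @ W2.T
        err = X_hat - X           # cost = mean squared reconstruction error
        W2 -= lr * err.T @ H / len(X)          # dC/dW2
        W1 -= lr * (err @ W2).T @ Xn / len(X)  # dC/dW1 (chain rule through H)
    return W1, W2
```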


01 Jan 2003
TL;DR: Given an ensemble of input patterns and tradeoff requirements, the system learns to encode its inputs optimally; the modulation of neural activity learned by this model qualitatively matches that measured in animals during covert visual attention tasks.
Abstract: When a sensory stimulus is encoded in a lossy fashion for efficient transmission, there are necessarily tradeoffs between the represented fidelity of various aspects of the input pattern. In the model of attention presented here, a top-down signal informs the encoder of these tradeoffs. Given an ensemble of input patterns and tradeoff requirements, our system can learn to encode its inputs optimally. This general model is instantiated in a simple network: an autoencoder with a bottleneck, innervated by a top-down attentional signal, trained using backpropagation. The only information the encoder receives concerning the semantics of the top-down attentional signal is from the optimization criterion, which penalizes the system more heavily for errors made near a simple attentional spotlight. The modulation of neural activity learned by this model qualitatively matches that measured in animals during covert visual attention tasks.
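A minimal sketch (an assumption-laden reading of the abstract, not the authors' code) of the optimization criterion: reconstruction errors are weighted more heavily near an attentional spotlight, and the spotlight position doubles as the top-down signal fed to the encoder. The Gaussian spotlight shape and all names are illustrative.

```python
import numpy as np

def spotlight_weights(n_pixels, center, width):
    """Per-pixel loss weights: errors near `center` are penalized more
    heavily. The paper says only 'a simple attentional spotlight'; the
    Gaussian profile here is an assumption."""
    idx = np.arange(n_pixels)
    return 1.0 + np.exp(-0.5 * ((idx - center) / width) ** 2)

def attended_loss(x, x_hat, center, width=3.0):
    """Spotlight-weighted reconstruction error, used as the training
    criterion for the bottleneck autoencoder; `center` also plays the role
    of the top-down attentional signal given to the encoder."""
    w = spotlight_weights(len(x), center, width)
    return float(np.mean(w * (x - x_hat) ** 2))
```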