
Showing papers on "Autoencoder published in 1996"


Proceedings ArticleDOI
03 Jun 1996
TL;DR: A multi-sensor fusion model for 3D object recognition that uses an autoencoder neural network to fuse multiple sensory data into an internal object representation; the learned representation generalizes to viewpoints of the target that were not in the training set.
Abstract: In this paper, we propose a multi-sensor fusion model using an autoencoder neural network for 3D object recognition, which fuses multiple sensory data into an integrated internal object representation. The model was evaluated using camera images taken from many viewpoints on a hemisphere around the target. Three images were generated from each camera image by clustering hue and saturation values. After the autoencoder neural network learned the target's images from many viewpoints, continuous internal representations corresponding to the viewpoints were constructed in the compressed layer of the network. We found that this internal representation generalizes to viewpoints of the target that were not in the training set. The average squared error of the autoencoder neural network is about three times higher when the presented object is unknown than when the object has already been learned as the target but is viewed from a viewpoint not used in training. Experimental results demonstrate the effectiveness of the proposed model for 3D object recognition.

9 citations
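
The abstract describes recognition by reconstruction error: an autoencoder is trained on many views of the target, its compressed layer forms the internal representation, and a large reconstruction error signals an unknown object. The sketch below is not the authors' code; it is a minimal illustration of that idea in Python/NumPy, with the network sizes, learning rate, and synthetic "view" vectors chosen purely for demonstration.

```python
# Minimal sketch (not the authors' implementation): an autoencoder whose mean
# squared reconstruction error separates views of a learned target object from
# an unknown object. All sizes, hyperparameters, and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Autoencoder:
    def __init__(self, n_input, n_hidden, lr=0.5):
        # The hidden bottleneck of size n_hidden plays the role of the
        # "compressed layer" holding the internal object representation.
        self.W1 = rng.normal(0, 0.1, (n_input, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_input))
        self.b2 = np.zeros(n_input)
        self.lr = lr

    def forward(self, x):
        h = sigmoid(x @ self.W1 + self.b1)   # internal representation
        y = sigmoid(h @ self.W2 + self.b2)   # reconstruction of the input
        return h, y

    def train_step(self, x):
        h, y = self.forward(x)
        # Squared-error loss backpropagated through both sigmoid layers.
        d_y = (y - x) * y * (1 - y)
        d_h = (d_y @ self.W2.T) * h * (1 - h)
        self.W2 -= self.lr * np.outer(h, d_y)
        self.b2 -= self.lr * d_y
        self.W1 -= self.lr * np.outer(x, d_h)
        self.b1 -= self.lr * d_h
        return np.mean((y - x) ** 2)

    def reconstruction_error(self, x):
        _, y = self.forward(x)
        return np.mean((y - x) ** 2)

# Synthetic stand-ins for preprocessed view images of the target object:
# smooth functions of the viewpoint angle, flattened into feature vectors.
def target_view(angle, n_input=64):
    idx = np.arange(n_input)
    return 0.5 + 0.4 * np.sin(2 * np.pi * idx / n_input + angle)

ae = Autoencoder(n_input=64, n_hidden=8)
train_angles = np.linspace(0, 2 * np.pi, 20, endpoint=False)
for epoch in range(2000):
    for a in train_angles:
        ae.train_step(target_view(a))

# Known target from an untrained viewpoint vs. an unknown object:
# the reconstruction error should be clearly larger for the latter.
novel_view = target_view(0.13)       # viewpoint not in the training set
unknown = rng.uniform(0, 1, 64)      # stand-in for an unknown object
print("error, target (novel viewpoint):", ae.reconstruction_error(novel_view))
print("error, unknown object:          ", ae.reconstruction_error(unknown))
```

Running the sketch, the error on the untrained viewpoint of the target stays low while the error on the random "unknown" vector is much larger, mirroring the roughly threefold error gap the abstract reports for unknown objects versus the learned target.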