
Showing papers on "Autoencoder published in 1995"


Proceedings Article
20 Aug 1995
TL;DR: A particular Novelty Detection approach to classification is presented that uses a Redundancy Compression and Non-Redundancy Differentiation technique based on a model of the hippocampus, a part of the brain critically involved in learning and memory.
Abstract: Novelty Detection techniques are concept-learning methods that proceed by recognizing positive instances of a concept rather than differentiating between its positive and negative instances. Novelty Detection approaches consequently require very few, if any, negative training instances. This paper presents a particular Novelty Detection approach to classification that uses a Redundancy Compression and Non-Redundancy Differentiation technique based on the [Gluck & Myers, 1993] model of the hippocampus, a part of the brain critically involved in learning and memory. In particular, this approach consists of training an autoencoder to reconstruct positive input instances at the output layer and then using this autoencoder to recognize novel instances. Classification is possible, after training, because positive instances are expected to be reconstructed accurately while negative instances are not. The purpose of this paper is to compare HIPPO, the system that implements this technique, to C4.5 and feedforward neural network classification on several applications.
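The core idea, training an autoencoder on positive instances only and flagging inputs that reconstruct poorly, can be sketched as follows. This is an illustrative toy with made-up data and layer sizes, not the actual HIPPO configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Positive instances cluster near a low-dimensional subspace (toy data).
positives = rng.normal(0.0, 0.1, size=(200, 4)) + np.array([1.0, 1.0, 0.0, 0.0])

n_hidden = 2
W_enc = rng.normal(0.0, 0.1, size=(4, n_hidden))
W_dec = rng.normal(0.0, 0.1, size=(n_hidden, 4))

lr = 0.05
for _ in range(500):
    h = positives @ W_enc            # compress (redundancy compression)
    x_hat = h @ W_dec                # reconstruct at the output layer
    err = x_hat - positives
    # Gradient descent on the squared reconstruction error.
    W_dec -= lr * h.T @ err / len(positives)
    W_enc -= lr * positives.T @ (err @ W_dec.T) / len(positives)

def reconstruction_error(x):
    x_hat = (x @ W_enc) @ W_dec
    return float(np.sum((x - x_hat) ** 2))

# Positive-like instances reconstruct accurately; novel ones do not.
pos_err = reconstruction_error(np.array([1.0, 1.0, 0.0, 0.0]))
neg_err = reconstruction_error(np.array([0.0, 0.0, 1.0, 1.0]))
```

Classification then reduces to thresholding the reconstruction error, which is why few or no negative training instances are needed.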

417 citations


Journal ArticleDOI
TL;DR: The method was applied to two problems, an autoencoder producing six alphabet letters and assimilation in the formation of plurals and nasalization in an artificial language, and confirmed that the features of input patterns could be detected by maximizing the hidden information.
Abstract: In this paper, we propose a method to maximize the hidden information stored in hidden units. The hidden information is defined by the decrease in uncertainty of hidden units with respect to input patterns. By maximizing the hidden information, hidden units can detect features and extract rules behind input patterns. Our method was applied to two problems: an autoencoder producing six alphabet letters, and assimilation in the formation of plurals and nasalization in an artificial language. In the first problem, the results explicitly confirmed that the features of input patterns could be detected by maximizing the hidden information. In the second experiment, we could clearly see that the rules of assimilation were extracted by maximizing the hidden information, even when the rules were obscured by other factors.
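One way such an information measure can be computed is sketched below, assuming each pattern's hidden activations are normalized into a distribution over hidden units; the paper's exact formulation may differ:

```python
import numpy as np

def hidden_information(activations):
    """Decrease in uncertainty of hidden units over input patterns.

    activations: (n_patterns, n_hidden) array of hidden-unit outputs.
    Each pattern's activations are normalized into a distribution p(j|s);
    information = maximum entropy (log M) minus mean conditional entropy.
    """
    eps = 1e-12
    p = activations / (activations.sum(axis=1, keepdims=True) + eps)
    cond_entropy = -np.sum(p * np.log(p + eps), axis=1).mean()
    max_entropy = np.log(activations.shape[1])
    return max_entropy - cond_entropy

# Selective units (one hidden unit dominating per pattern) carry more
# information than uniformly active units, whose information is near zero.
selective = np.array([[0.99, 0.01], [0.01, 0.99]])
uniform = np.array([[0.5, 0.5], [0.5, 0.5]])
```

Maximizing this quantity during training pushes hidden units toward the selective, feature-detecting regime described in the abstract.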

62 citations


Journal ArticleDOI
TL;DR: It is found that the linear perceptron model can classify sex from facial images with 81% accuracy, compared to 92% accuracy with compression coding on the same data set (Golomb et al. 1991).
Abstract: Recognizing the sex of conspecifics is important. Humans rely primarily on visual pattern recognition for this task. A wide variety of linear and nonlinear models have been developed to understand this task of sex recognition from human faces. These models have used both pixel-based and feature-based representations of the face for input. Fleming and Cottrell (1990) and Golomb et al. (1991) utilized first an autoencoder compression network on a pixel-based representation, and then a classification network. Brunelli and Poggio (1993) used a type of radial basis function network with geometrical face measurements as input. O'Toole and colleagues (1991, 1993) represented faces as principal components. When the hidden units of an autoencoder have a linear output function, the N hidden units in the network span the first N principal components of the input (Baldi and Hornik 1989). Bruce et al. (1993) constructed a discriminant function for sex with 2-D and 3-D facial measures. In this note we compare the performance of a simple perceptron and a standard multilayer perceptron (MLP) on the sex classification task. We used a range of spatial resolutions of the face to determine how the reliability of sex discrimination is related to resolution. A normalized pixel-based representation was used for the faces because it explicitly retained texture and shape information while also maintaining geometric relationships. We found that the linear perceptron model can classify sex from facial images with 81% accuracy, compared to 92% accuracy with compression coding on the same data set (Golomb et al. 1991). The advantage of using a simple linear perceptron with normalized pixel-based inputs is that it allows us to see explicitly those regions of the face
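The Baldi and Hornik (1989) result cited above can be demonstrated numerically: the optimal linear autoencoder with N hidden units implements projection onto the span of the first N principal components, so its reconstruction residual is orthogonal to that subspace. A small sketch with toy data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy zero-mean data with a dominant direction in each coordinate scale.
X = rng.normal(size=(500, 5)) * np.array([3.0, 1.0, 0.5, 0.2, 0.1])
X -= X.mean(axis=0)

n_hidden = 2
# The optimal combined encode/decode map of a linear autoencoder with
# n_hidden units is the projector onto the top-n_hidden PC subspace.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:n_hidden].T @ Vt[:n_hidden]   # projector onto the top-2 PC subspace

X_hat = X @ P                          # reconstruction through the bottleneck
residual = X - X_hat
# The residual has no component along the principal subspace.
orth = np.abs(residual @ Vt[:n_hidden].T).max()
```

This equivalence is why the compression coding used by Golomb et al. (1991) is closely related to the principal-component representation used by O'Toole and colleagues.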

40 citations


Journal ArticleDOI
TL;DR: The use of the autoencoder to process clinical trial data and adverse experience reports is described, and experience with autoencoding is discussed.
Abstract: A computer-driven terminology processing system, referred to as an “autoencoder,” is being developed at Merck Research Laboratories to aid in the management of clinical and regulatory data. The components of the autoencoder are: a large dictionary of clinical and therapy terms, a learning database to provide feedback, a thesaurus or word-substitution file, a modified Soundex algorithm, and an outcome report with a level of confidence indicator. The autoencoder operates in conjunction with the dictionary system to conduct comparisons of selected terms taken from input documents against dictionary terms. Algorithms evaluate these comparisons and assist users of the system in selecting dictionary terms to encode input documents. The use of the autoencoder to process clinical trial data and adverse experience reports is described and experience in autoencoding is discussed.
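The Soundex component mentioned above matches input terms to dictionary terms by phonetic code rather than exact spelling. The Merck system uses a modified variant; for illustration, a simplified version of the classic algorithm (which, among other details, handles the h/w separator rule differently) looks like this:

```python
def soundex(word):
    """Simplified classic Soundex code for a single word.

    Consonants map to digit classes; adjacent duplicate digits collapse;
    vowels (and letters with no code) are dropped; the result keeps the
    first letter and is padded/truncated to 4 characters.
    """
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    first = word[0].upper()
    encoded = [codes.get(c, "") for c in word]
    out = []
    prev = encoded[0]
    for d in encoded[1:]:
        if d and d != prev:           # skip vowels and collapsed duplicates
            out.append(d)
        prev = d
    return (first + "".join(out) + "000")[:4]
```

Because misspelled or variantly spelled terms often share a code (e.g. "Robert" and "Rupert" both map to R163), such codes let the system propose candidate dictionary terms for the user to confirm.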

6 citations


Journal ArticleDOI
TL;DR: A neural-network-based labeled object identification system, used for product classification at the final inspection stage of an IBM personal computer manufacturing line, is presented and shown to satisfy the primary requirements of a typical industrial classification system.

3 citations


Proceedings ArticleDOI
17 Mar 1995
TL;DR: A nonlinear five-layer artificial neural autoencoder network for image data compression is constructed and trained using the backpropagation algorithm and medical CT images, yielding a compression/decompression tool that provides maximum flexibility and can be used independently of the training environment.
Abstract: A nonlinear five-layer artificial neural autoencoder network for image data compression is constructed and trained using the backpropagation algorithm and medical CT images. The influence of linear and nonlinear pre/postprocessing operations is studied, as well as an alternative compression scheme. Important implementation issues of neural networks, as well as autoencoder-specific issues, are addressed. One result of this work is a compression/decompression tool that provides maximum flexibility and can be used independently of the training environment.
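A five-layer autoencoder of this kind compresses images block by block: the narrow middle layer holds the code that would be stored or transmitted. The sketch below uses illustrative layer sizes (8x8 pixel blocks, a 16-unit bottleneck), not the paper's actual configuration, and untrained random weights, just to show the data flow and nominal compression ratio:

```python
import numpy as np

# Example layer sizes: input, hidden, bottleneck, hidden, output.
layer_sizes = [64, 32, 16, 32, 64]

rng = np.random.default_rng(2)
weights = [rng.normal(0, 0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(block):
    """Run one 8x8 pixel block through the network (tanh nonlinearity)."""
    a = block.reshape(-1)
    for W in weights[:2]:
        a = np.tanh(a @ W)
    code = a                       # 16 values stored per 64-pixel block
    for W in weights[2:]:
        a = np.tanh(a @ W)
    return code, a.reshape(8, 8)

block = rng.random((8, 8))
code, recon = forward(block)
ratio = block.size / code.size     # nominal 4:1 compression
```

After training with backpropagation to minimize reconstruction error, the encoder half (first two layers) and decoder half (last two layers) can be deployed separately, which is what makes the tool usable independently of the training environment.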

3 citations