
Showing papers on "Autoencoder published in 2007"


Journal ArticleDOI
21 May 2007-Chaos
TL;DR: This work introduces a novel method for identifying the modular structure of a network based on the maximization of an objective function, the ratio association, and develops an efficient optimization algorithm based on the deterministic annealing scheme.
Abstract: We introduce a novel method for identifying the modular structures of a network based on the maximization of an objective function: the ratio association. This cost function arises when the community detection problem is described in the probabilistic autoencoder framework. An analogy with kernel k-means methods allows us to develop an efficient optimization algorithm based on the deterministic annealing scheme. The performance of the proposed method is shown on real data sets and on simulated networks.
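A minimal sketch of the core idea, not the authors' code: for a graph with adjacency matrix A, the ratio association scores a partition by summing each module's internal edge weight divided by its size, and the equivalence with kernel k-means (kernel K = A) that the abstract mentions lets a k-means-style loop optimize it. All names below are illustrative.

```python
import numpy as np

def ratio_association(A, labels):
    """Sum over modules of (intra-module edge weight) / (module size)."""
    score = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        score += A[np.ix_(idx, idx)].sum() / len(idx)
    return score

def kernel_kmeans_partition(A, k, n_iter=50, seed=0):
    """Kernel k-means with the adjacency matrix itself as the kernel;
    maximizing ratio association is equivalent to minimizing this
    kernel k-means distortion."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        new = np.empty(n, dtype=int)
        for i in range(n):
            dists = np.full(k, np.inf)
            for c in range(k):
                idx = np.where(labels == c)[0]
                if len(idx) == 0:
                    continue
                # Kernel-space distance from node i to module c's centroid.
                dists[c] = A[i, i] - 2 * A[i, idx].mean() + A[np.ix_(idx, idx)].mean()
            new[i] = int(dists.argmin())
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```

The paper itself optimizes with deterministic annealing rather than this hard-assignment loop, softening module memberships at high temperature to avoid poor local optima.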

67 citations


Journal ArticleDOI
TL;DR: A novel theoretical approach to human category learning is proposed in which categories are represented as coordinated statistical models of the properties of their members; the resulting model shows good qualitative fits to benchmark human learning data and provides a compelling theoretical alternative to established models.
Abstract: A novel theoretical approach to human category learning is proposed in which categories are represented as coordinated statistical models of the properties of the members. Key elements of the account are learning to recode inputs as task-constrained principal components and evaluating category membership in terms of model fit, that is, the fidelity of the reconstruction after recoding and decoding the stimulus. The approach is implemented as a computational model called DIVA (for DIVergent Autoencoder), an artificial neural network that uses reconstructive learning to solve N-way classification tasks. DIVA shows good qualitative fits to benchmark human learning data and provides a compelling theoretical alternative to established models.
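DIVA's decision rule is compact enough to sketch. The following illustrative NumPy implementation (not the published model; all parameter names are assumptions) uses a shared nonlinear encoder and one linear decoding channel per category: training updates only the correct category's channel, and classification picks the channel with the best reconstruction fit:

```python
import numpy as np

class DivergentAutoencoder:
    """Shared encoder, one decoding channel per category."""
    def __init__(self, n_in, n_hidden, n_classes, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.We = rng.normal(0.0, 0.1, (n_in, n_hidden))             # shared encoder
        self.Wd = rng.normal(0.0, 0.1, (n_classes, n_hidden, n_in))  # per-class decoders
        self.lr = lr

    def _forward(self, x, c):
        h = np.tanh(x @ self.We)   # recode the stimulus
        return h, h @ self.Wd[c]   # decode through category channel c

    def train_step(self, x, y):
        # Reconstructive learning: push channel y (and the shared
        # encoder) toward reproducing x; other channels are untouched.
        h, xhat = self._forward(x, y)
        err = xhat - x                     # gradient of 0.5 * ||xhat - x||^2
        grad_h = self.Wd[y] @ err
        self.Wd[y] -= self.lr * np.outer(h, err)
        self.We -= self.lr * np.outer(x, grad_h * (1.0 - h ** 2))

    def predict(self, x):
        # Category membership as model fit: lowest reconstruction error wins.
        errs = [np.sum((self._forward(x, c)[1] - x) ** 2)
                for c in range(self.Wd.shape[0])]
        return int(np.argmin(errs))
```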

60 citations


Posted Content
TL;DR: This investigation compares three data imputation models and identifies their merits using accuracy measures; PCA improves the overall performance of the autoencoder network, and support vector regression shows promising potential for future investigation.
Abstract: Data collection often results in records that have missing values or variables. This investigation compares three data imputation models and identifies their merits using accuracy measures. Autoencoder neural networks, principal components, and support vector regression are used for prediction and combined with a genetic algorithm to impute the missing variables. The use of PCA improves the overall performance of the autoencoder network, while the use of support vector regression shows promising potential for future investigation. Accuracies of up to 97.4% were achieved on the imputation of some of the variables.
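A hedged sketch of the imputation step, assuming an already-trained autoencoder `ae` that maps a complete record to its reconstruction; a simple random-mutation search stands in here for the paper's genetic algorithm, and all names are illustrative:

```python
import numpy as np

def impute(ae, x, missing_idx, n_gen=200, pop=50, seed=0):
    """Search for values of the missing entries that make the completed
    record closest to a fixed point of the autoencoder."""
    rng = np.random.default_rng(seed)

    def fitness(vals):
        z = x.copy()
        z[missing_idx] = vals
        return np.sum((ae(z) - z) ** 2)   # reconstruction error of completed record

    best = rng.uniform(0.0, 1.0, size=len(missing_idx))
    best_f = fitness(best)
    for _ in range(n_gen):
        cand = best + rng.normal(0.0, 0.05, size=(pop, len(missing_idx)))
        f = np.array([fitness(c) for c in cand])
        if f.min() < best_f:
            best, best_f = cand[f.argmin()], f.min()

    out = x.copy()
    out[missing_idx] = best
    return out
```

The underlying principle matches the abstract: a record completed with plausible values should be reconstructed almost perfectly by a network trained on complete data, so the search minimizes reconstruction error over the missing entries only.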

18 citations


01 Jan 2007
TL;DR: Deep autoencoders showed significant improvement when pretrained compared with those trained without pretraining, and the fine tuning that followed pretraining reduced the data dimensionality very efficiently.
Abstract: Dimensionality reduction is a method of obtaining the information from a high dimensional feature space using fewer intrinsic dimensions. Reducing the dimensionality of high dimensional data is good for better classification, regression, presentation and visualization of data. By representing data in the lower dimensional space, most of the time we don't lose much information that matters to the application.

Recently Hinton et al. [5] used a deep autoencoder for dimensionality reduction of multiple datasets. The autoencoders they used are multi-layer neural networks trained to approximate the identity mapping f(x) ≈ x, where x is a multidimensional input vector to the network. They argue that deep autoencoders can be trained easily using a gradient descent method provided the initial weights are near good solutions. To choose the best set of initial weights they suggested a pretraining method. In their formulation of the pretraining method they used a two-layer network called a Restricted Boltzmann Machine (RBM) to initialize the weights prior to using a fine-tuning gradient descent method. They trained two layers at a time, and the output of the first RBM is used as input to the next RBM. Layer by layer, training trickles down the whole network. They claimed that by pretraining they were able to obtain a good set of initial weights. Their second claim was that the fine tuning which followed the pretraining was able to reduce the data dimensionality very efficiently. The results presented in their paper support their arguments; however, there still remain some areas which require additional study.

Firstly, they reported that deep autoencoders showed significant improvement when pretrained over the ones without pretraining.
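The pretraining recipe summarized above can be sketched compactly. This is an illustrative, bias-free binary RBM trained with one-step contrastive divergence (CD-1), a simplification of the procedure in Hinton et al., not their code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden, epochs=10, lr=0.05, seed=0):
    """CD-1 training of a binary RBM (biases omitted for brevity)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.01, (data.shape[1], n_hidden))
    for _ in range(epochs):
        for v0 in data:
            h0 = sigmoid(v0 @ W)                            # positive phase
            hs = (rng.random(n_hidden) < h0).astype(float)  # sample hidden units
            v1 = sigmoid(W @ hs)                            # reconstruct visibles
            h1 = sigmoid(v1 @ W)                            # negative phase
            W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
    return W

def pretrain_stack(data, layer_sizes):
    """Greedy layer-wise pretraining: each RBM's hidden activities
    become the training data for the next RBM."""
    weights, x = [], data
    for n_hidden in layer_sizes:
        W = train_rbm(x, n_hidden)
        weights.append(W)
        x = sigmoid(x @ W)   # output of this RBM is input to the next
    return weights           # initial weights for the encoder layers
```

The learned matrices initialize the encoder half of the deep autoencoder (and their transposes the mirrored decoder half), after which gradient-descent fine-tuning trains the whole network.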

5 citations


Posted Content
TL;DR: An autoencoder, which is a specialized type of neural network, was used to detect anomalies in normal routing behavior, and was able to detect both global instability caused by worms and localized routing instability.
Abstract: Internet worms cause billions of dollars in damage yearly, affecting millions of users worldwide. For countermeasures to be deployed timeously, it is necessary to use an automated system to detect the spread of a worm. This paper discusses a method of determining the presence of a worm based on routing information currently available from Internet routers. An autoencoder, which is a specialized type of neural network, was used to detect anomalies in normal routing behavior. The autoencoder was trained using information from a single router and was able to detect both global instability caused by worms and localized routing instability.
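The detection rule amounts to thresholding reconstruction error. A minimal sketch, assuming a trained autoencoder `ae` over feature vectors derived from the router's update stream (feature construction is not shown, and the mean-plus-k-sigma threshold is an assumption):

```python
import numpy as np

def fit_threshold(ae, normal_windows, k=3.0):
    """Set an anomaly threshold at mean + k*std of the reconstruction
    error over known-normal routing windows."""
    errs = np.array([np.mean((ae(w) - w) ** 2) for w in normal_windows])
    return errs.mean() + k * errs.std()

def is_anomalous(ae, window, threshold):
    """Flag a window the autoencoder cannot reconstruct well."""
    return np.mean((ae(window) - window) ** 2) > threshold
```

Because the autoencoder is trained only on normal routing behavior, worm-driven global instability and localized instability show up the same way: as inputs that reconstruct poorly.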

4 citations


Proceedings ArticleDOI
29 Oct 2007
TL;DR: This paper proposes an auto-associative neural network system (AANNS) based on multiple Autoencoders that has the functions of auto-association, incremental learning and local update, which are foundations of cognitive science.
Abstract: Recently, a nonlinear dimension reduction technique called the Autoencoder has been proposed. It can efficiently carry out mappings in both directions between the original data and a low-dimensional code space. However, a single Autoencoder commonly maps all data into a single subspace. If the original data set has remarkably different categories (for example, characters and handwritten digits), then a single Autoencoder will not be efficient. To deal with data of remarkably different categories, this paper proposes an auto-associative neural network system (AANNS) based on multiple Autoencoders. The novel technique has the functions of auto-association, incremental learning and local update; notably, these functions are foundations of cognitive science. Experimental results on the benchmark MNIST digit dataset and a handwritten character-digit dataset show the advantages of the proposed model.
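A sketch of the dispatch-and-grow logic, under an assumed minimal autoencoder interface (`reconstruct`, `train_step`) and factory `make_ae`; none of these names come from the paper:

```python
import numpy as np

class MultiAutoencoder:
    """Maintain one Autoencoder per data category; dispatch each input
    to the best-reconstructing one, spawning a new Autoencoder when
    none fits (incremental learning)."""
    def __init__(self, make_ae, err_threshold):
        self.make_ae = make_ae              # factory for a fresh trainable AE
        self.err_threshold = err_threshold
        self.aes = []

    def _error(self, ae, x):
        return float(np.mean((ae.reconstruct(x) - x) ** 2))

    def observe(self, x):
        errs = [self._error(ae, x) for ae in self.aes]
        if errs and min(errs) < self.err_threshold:
            # Local update: only the winning Autoencoder changes.
            self.aes[int(np.argmin(errs))].train_step(x)
        else:
            # Incremental learning: open a new subspace for a new category.
            ae = self.make_ae()
            ae.train_step(x)
            self.aes.append(ae)
```

Routing inputs to the best-reconstructing Autoencoder yields auto-association and local update; growing the pool when nothing reconstructs well yields incremental learning.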

4 citations


Proceedings ArticleDOI
07 Dec 2007
TL;DR: A novel method for identifying the modular structures of a network is introduced, described as a process of compression of information by means of the autoencoder framework, and the best partition into modules is found to be the maximizer of an objective function: the ratio association.
Abstract: We introduce a novel method for identifying the modular structures of a network: the problem is described as a process of compression of information, by means of the autoencoder framework. As a result, the best partition into modules is found to be the maximizer of an objective function: the ratio association. The performance of the proposed method is shown on a real data set and on simulated networks. The optimization algorithm we use is based on the deterministic annealing scheme.
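A schematic deterministic-annealing loop for this objective, a sketch under simplifying assumptions rather than the authors' algorithm: soft module memberships are updated at decreasing temperature, hardening toward a partition with high ratio association (A is the adjacency matrix):

```python
import numpy as np

def anneal_partition(A, k, T0=1.0, Tmin=1e-3, cooling=0.9, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    P = rng.dirichlet(np.ones(k), size=n)   # soft memberships p(module | node)
    T = T0
    while T > Tmin:
        for _ in range(20):                 # fixed-point updates at temperature T
            size = np.maximum(P.sum(axis=0), 1e-12)   # soft module sizes
            gain = (A @ P) / size           # each node's affinity to each module
            g = (gain - gain.max(axis=1, keepdims=True)) / T  # stabilized logits
            P = np.exp(g)                   # softer at high T, sharper at low T
            P /= P.sum(axis=1, keepdims=True)
        T *= cooling                        # cool down
    return P.argmax(axis=1)                 # harden into the final partition
```

At high temperature the memberships stay nearly uniform, which is what lets the scheme slide past poor local optima before committing to a partition.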