Journal ArticleDOI

Learning representations by back-propagating errors

TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network to minimize a measure of the difference between the actual output vector of the net and the desired output vector; in doing so, internal 'hidden' units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure¹.
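
To make the procedure concrete, here is a minimal sketch, assuming a tiny two-layer sigmoid network trained on XOR with squared error in NumPy; the task, layer sizes, and learning rate are illustrative choices of ours, not taken from the paper.

```python
# Minimal back-propagation sketch: a two-layer sigmoid network minimizing
# squared error. Toy task and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: unsolvable by a perceptron alone, solvable once hidden units
# can learn useful internal features.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)  # input -> hidden
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)  # hidden -> output
lr = 0.5

for _ in range(20000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: propagate the output error through each layer.
    dY = (Y - T) * Y * (1 - Y)       # dE/d(net input) at the output units
    dH = (dY @ W2.T) * H * (1 - H)   # dE/d(net input) at the hidden units
    # Adjust each weight opposite its error gradient.
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

print(Y.round(2))  # approaches [[0], [1], [1], [0]]
```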


Citations
Journal ArticleDOI

Improved Computation for Levenberg–Marquardt Training

TL;DR: The improved computation presented in this paper optimizes the neural-network learning process using the Levenberg–Marquardt (LM) algorithm; the gains in memory and time efficiency are especially pronounced when training on large patterns.
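
For context, a generic sketch of the Levenberg–Marquardt update the paper builds on, Δw = −(JᵀJ + μI)⁻¹Jᵀe, applied to a small least-squares fit; the model, damping schedule, and all names here are our illustrative assumptions, and the paper's memory and time optimizations are not shown.

```python
# Illustrative Levenberg-Marquardt loop fitting y = a * exp(b * x).
# Generic sketch of the LM update; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 2, 50)
y = 2.0 * np.exp(1.5 * x) + rng.normal(scale=0.05, size=x.size)

w = np.array([1.0, 1.0])  # initial guess for (a, b)
mu = 1e-2                 # damping: large -> gradient descent, small -> Gauss-Newton

for _ in range(50):
    a, b = w
    f = a * np.exp(b * x)
    e = f - y                                     # residuals
    J = np.column_stack([np.exp(b * x),           # df/da
                         a * x * np.exp(b * x)])  # df/db
    step = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ e)
    w_new = w - step
    # Accept the step and relax damping if the squared error decreased.
    if np.sum((w_new[0] * np.exp(w_new[1] * x) - y) ** 2) < np.sum(e ** 2):
        w, mu = w_new, mu * 0.5
    else:
        mu *= 2.0

print(w.round(3))  # ≈ [2.0, 1.5]
```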
Book

Graph Representation Learning

TL;DR: This work has shown that graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry, and building relational inductive biases into deep learning …
Journal ArticleDOI

Advanced Spectral Classifiers for Hyperspectral Images: A review

TL;DR: The classification of hyperspectral images is a challenging task for a number of reasons, such as the presence of redundant features, the imbalance among the limited number of available training samples, and the high dimensionality of the data.
Journal ArticleDOI

Machine Learning of Molecular Electronic Properties in Chemical Compound Space

TL;DR: In this article, a deep multi-task artificial neural network is used to predict multiple electronic ground and excited-state properties, such as atomization energy, polarizability, frontier orbital eigenvalues, ionization potential, electron affinity and excitation energies.
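
As a rough illustration of the multi-task idea, the following sketch shares one learned representation across separate linear output heads, one per property; the shapes, property names, and untrained weights are hypothetical, not the paper's architecture or data.

```python
# Sketch of a multi-task network: one shared representation, one linear
# output head per predicted property. Shapes and names are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_features, n_hidden = 32, 64  # hypothetical molecular-descriptor size
properties = ["atomization_energy", "polarizability", "homo", "lumo"]

W_shared = rng.normal(scale=0.1, size=(n_features, n_hidden))
heads = {p: rng.normal(scale=0.1, size=(n_hidden, 1)) for p in properties}

def predict(x):
    h = np.tanh(x @ W_shared)  # shared hidden representation
    return {p: (h @ W_p).item() for p, W_p in heads.items()}

print(predict(rng.normal(size=n_features)))
```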
Proceedings ArticleDOI

Lossy Image Compression with Compressive Autoencoders

TL;DR: In this article, the authors proposed a new approach to optimizing autoencoders for lossy image compression, and showed that minimal changes to the loss are sufficient to train deep autoencoders that are competitive with JPEG 2000 and outperform recently proposed approaches based on RNNs.
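
A toy sketch of the compressive-autoencoder idea: encode, quantize by rounding, decode, and back-propagate through the quantizer with an identity ("straight-through") gradient. This linear codec on random data is our illustration of the general trick, not necessarily the paper's exact loss or training scheme.

```python
# Sketch of a compressive autoencoder: encode, quantize (round), decode.
# Rounding is non-differentiable, so its gradient is replaced by the
# identity ("straight-through") to keep ordinary back-propagation working.
# Tiny linear codec on random data; illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(256, 16))            # stand-in "images"
We = rng.normal(scale=0.1, size=(16, 4))  # encoder weights
Wd = rng.normal(scale=0.1, size=(4, 16))  # decoder weights
lr = 1e-2

for _ in range(2000):
    Z = X @ We                  # latent code
    Zq = np.round(Z)            # quantization (what would be entropy-coded)
    Y = Zq @ Wd                 # reconstruction
    dY = 2 * (Y - X) / len(X)   # d(MSE)/dY
    dZ = dY @ Wd.T              # straight-through: treat round() as identity
    Wd -= lr * Zq.T @ dY
    We -= lr * X.T @ dZ

print(np.mean((X - np.round(X @ We) @ Wd) ** 2))  # distortion after training
```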