Journal Article

Learning representations by back-propagating errors

TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; in doing so, internal 'hidden' units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
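To make the procedure concrete, here is a minimal sketch of back-propagation in Python: a two-layer sigmoid network trained by gradient descent on the squared difference between the actual and desired output vectors. The XOR task, network size, learning rate, and initialization are illustrative assumptions, not details taken from the paper.

```python
# Minimal back-propagation sketch (illustrative assumptions: XOR task,
# 4 hidden units, sigmoid activations, learning rate 0.5).
import numpy as np

rng = np.random.default_rng(0)

# XOR: a task the perceptron-convergence procedure cannot solve,
# but which hidden units can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass: hidden units come to represent task features.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)              # actual output vector

    # Error measure being minimized: half the summed squared difference
    # between actual and desired outputs.
    E = 0.5 * np.sum((Y - T) ** 2)

    # Backward pass: propagate error derivatives through each layer.
    dY = (Y - T) * Y * (1 - Y)            # at the output units
    dH = (dY @ W2.T) * H * (1 - H)        # propagated to the hidden units

    # Weight adjustments proportional to the negative gradient.
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

print(f"error after training: {E:.4f}")
print("outputs:", Y.ravel().round(2))
```

After training, the hidden activations re-encode the inputs so that XOR becomes linearly separable at the output layer; these learned encodings are the 'useful new features' the abstract refers to.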


Citations
Book

Adaptive Wireless Transceivers: Turbo-Coded, Turbo-Equalized and Space-Time Coded TDMA, CDMA and OFDM Systems

Lajos Hanzo, et al.
TL;DR: Adaptive Wireless Transceivers provides the reader with a broad overview of near-instantaneously adaptive transceivers in the context of TDMA, CDMA and OFDM systems.
Proceedings Article

Error back propagation for sequence training of Context-Dependent Deep Networks for conversational speech transcription

TL;DR: This work investigates back-propagation-based sequence training of Context-Dependent Deep-Neural-Network HMMs, or CD-DNN-HMMs, for conversational speech transcription, and finds that heuristics are needed to obtain reasonable results, which points to a problem with lattice sparseness.
Journal Article

Automated adaptive inference of phenomenological dynamical models.

TL;DR: In this paper, a coarse-grained model of network dynamics is proposed that automatically adapts its complexity to the available data and produces accurate predictions even when microscopic details are unknown.
Journal Article

Artificial neural networks enabled by nanophotonics.

TL;DR: This work reviews emerging ANNs enabled by nanophotonics, which harness photons' capacity to carry vast amounts of information, and discusses how they may help researchers develop artificial neural networks for uses including brain-disease research and machine learning.
Proceedings Article

An algorithm for fast convergence in training neural networks

TL;DR: The modified Levenberg-Marquardt algorithm for feedforward neural networks gives a better convergence rate than the standard LM method while being less computationally intensive and requiring less memory.
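The summary above does not spell out the paper's particular modification; for reference, the standard Levenberg-Marquardt weight update that such methods build on is

$$\Delta \mathbf{w} = -\left(\mathbf{J}^{\top}\mathbf{J} + \mu\,\mathbf{I}\right)^{-1}\mathbf{J}^{\top}\mathbf{e},$$

where $\mathbf{J}$ is the Jacobian of the output errors $\mathbf{e}$ with respect to the weights and $\mu$ is a damping factor that interpolates between Gauss-Newton behaviour ($\mu \to 0$) and gradient descent (large $\mu$). Forming and inverting $\mathbf{J}^{\top}\mathbf{J}$ dominates the memory and compute cost, which is what modified variants typically aim to reduce.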