Journal Article
Learning representations by back-propagating errors
TL;DR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which leads the internal 'hidden' units to represent important features of the task domain.
Abstract:
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
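The procedure the abstract describes is gradient descent on the error measure. Below is a minimal sketch in Python/NumPy of a single-hidden-layer network trained by back-propagation; the XOR task, layer sizes, learning rate, and sigmoid units are illustrative assumptions, not specifics from the paper.

```python
import numpy as np

# Minimal back-propagation sketch: gradient descent on the squared-error
# measure E = 1/2 * sum((y - d)^2) between actual and desired outputs.
# The XOR task, layer sizes, and learning rate are illustrative assumptions.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input vectors
D = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)     # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)     # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: hidden units form internal feature representations.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Backward pass: propagate the error derivative toward the input.
    dY = (Y - D) * Y * (1 - Y)        # delta at the output layer
    dH = (dY @ W2.T) * H * (1 - H)    # delta at the hidden layer

    # Adjust every weight in proportion to the negative error gradient.
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

print(np.round(Y.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```

The hidden deltas are obtained from the output deltas through the transposed weights; this backward flow of error derivatives is what lets the hidden units discover useful features without being told what to represent.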
Citations
Journal Article
How Learning Can Guide Evolution.
TL;DR: The assumption that acquired characteristics are not inherited is often taken to imply that the adaptations an organism learns during its lifetime cannot guide the course of evolution, as discussed by the authors.
Journal Article
A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction
TL;DR: A framework for reconstructing dynamic sequences of 2-D cardiac magnetic resonance images from undersampled data using a deep cascade of convolutional neural networks (CNNs) to accelerate the data acquisition process is proposed and it is demonstrated that CNNs can learn spatio-temporal correlations efficiently by combining convolution and data sharing approaches.
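As a rough illustration of the cascade idea (not the paper's architecture), the sketch below alternates a denoising step with a k-space data-consistency step; the smoothing filter stands in for a trained CNN, and the image, mask, and function names are invented for illustration.

```python
import numpy as np

def cnn_stage(x):
    """Placeholder for a trained CNN stage: a crude smoothing filter here."""
    return (x + np.roll(x, 1, 0) + np.roll(x, -1, 0)
              + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 5.0

def data_consistency(x, k_measured, mask):
    """Keep the network's k-space estimates only where no data were acquired;
    elsewhere restore the measured samples (the 'data sharing' idea)."""
    k = np.fft.fft2(x)
    return np.real(np.fft.ifft2(np.where(mask, k_measured, k)))

rng = np.random.default_rng(0)
frame = rng.random((64, 64))             # stand-in for one cardiac frame
mask = rng.random((64, 64)) < 0.33       # random k-space undersampling pattern
k_measured = np.fft.fft2(frame) * mask   # simulated undersampled acquisition

# Zero-filled starting point, then a cascade of CNN / data-consistency stages.
x = np.real(np.fft.ifft2(k_measured))
for _ in range(5):
    x = data_consistency(cnn_stage(x), k_measured, mask)
```

Interleaving learned stages with data consistency is what keeps the cascade's output faithful to the actually acquired measurements while the network fills in the missing k-space.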
Journal Article
An integrated theory of language production and comprehension
Martin J. Pickering, Simon Garrod, et al.
TL;DR: It is asserted that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other.
Proceedings Article
Look Closer to See Better: Recurrent Attention Convolutional Neural Network for Fine-Grained Image Recognition
TL;DR: A recurrent attention convolutional neural network (RA-CNN) is proposed that recursively learns discriminative region attention and region-based feature representation at multiple scales in a mutually reinforced way.
Journal Article
Illuminating the “black box”: a randomization approach for understanding variable contributions in artificial neural networks
TL;DR: By extending randomization approaches to ANNs, the “black box” mechanics of ANNs can be greatly illuminated, and by coupling this new explanatory power with their strong predictive abilities, ANNs promise to be a valuable quantitative tool to evaluate, understand, and predict ecological phenomena.
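The paper's exact randomization tests are not reproduced here; as a sketch in the same spirit, the snippet below estimates each input variable's contribution by permuting it and measuring the increase in a fitted network's error (scikit-learn's MLPRegressor and the synthetic data are illustrative assumptions).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Permutation-importance sketch: shuffle one input variable at a time and
# measure how much the fitted network's error grows. Variables that matter
# degrade performance most when randomized. Data and model are synthetic.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)  # var 2 irrelevant

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X, y)
baseline = np.mean((net.predict(X) - y) ** 2)

for j in range(X.shape[1]):
    Xp = X.copy()
    rng.shuffle(Xp[:, j])  # destroy this variable's relationship to y
    mse = np.mean((net.predict(Xp) - y) ** 2)
    print(f"variable {j}: error increase = {mse - baseline:.3f}")
```

Variables 0 and 1 should show large error increases, while the irrelevant variable 2 should barely change the error, revealing which inputs the network actually relies on.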