Proceedings ArticleDOI
A direct adaptive method for faster backpropagation learning: the RPROP algorithm
Martin Riedmiller, Heinrich Braun
- Vol. 1, pp 586-591
TLDR
A learning algorithm for multilayer feedforward networks, RPROP (resilient propagation), is proposed that performs a local adaptation of the weight-updates according to the behavior of the error function to overcome the inherent disadvantages of pure gradient-descent.

Abstract:
A learning algorithm for multilayer feedforward networks, RPROP (resilient propagation), is proposed. To overcome the inherent disadvantages of pure gradient-descent, RPROP performs a local adaptation of the weight-updates according to the behavior of the error function. Contrary to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforeseeable influence of the size of the derivative, but only dependent on the temporal behavior of its sign. This leads to an efficient and transparent adaptation process. The capabilities of RPROP are shown in comparison to other adaptive techniques.
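The sign-based adaptation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's reference implementation: each weight carries its own step size, which grows while the gradient's sign stays constant and shrinks when the sign flips (indicating a minimum was overstepped). The variant below zeroes the stored gradient on a sign flip instead of reverting the previous weight update; the function name and the constants `delta0`, `eta_plus`, `eta_minus` are illustrative, though the factors 1.2 and 0.5 are the values commonly associated with RPROP.

```python
import numpy as np

def rprop_minimize(grad_fn, w0, n_steps=100,
                   delta0=0.1, eta_plus=1.2, eta_minus=0.5,
                   delta_min=1e-6, delta_max=50.0):
    """Minimize an error function via sign-based RPROP-style updates.

    grad_fn(w) returns the gradient of the error at w. Only the SIGN of
    the gradient determines the update direction; its magnitude never
    enters the weight change, so the adaptation is not blurred by the
    unforeseeable size of the derivative.
    """
    w = np.asarray(w0, dtype=float).copy()
    delta = np.full_like(w, delta0)      # per-weight update-value
    prev_grad = np.zeros_like(w)
    for _ in range(n_steps):
        g = grad_fn(w)
        same_sign = prev_grad * g
        # gradient kept its sign: accelerate (bounded by delta_max)
        delta = np.where(same_sign > 0,
                         np.minimum(delta * eta_plus, delta_max), delta)
        # gradient changed sign: a minimum was overstepped, so shrink
        delta = np.where(same_sign < 0,
                         np.maximum(delta * eta_minus, delta_min), delta)
        # on a sign flip, suppress this step's update (sign(0) == 0)
        g = np.where(same_sign < 0, 0.0, g)
        w -= np.sign(g) * delta
        prev_grad = g
    return w

# usage: minimize the quadratic bowl E(w) = sum(w**2), gradient 2*w
w_min = rprop_minimize(lambda w: 2.0 * w, w0=[3.0, -4.0])
```

On this convex toy problem the iterates oscillate around the minimum with geometrically shrinking step sizes, which is the transparent behavior the abstract refers to: the trajectory depends only on the temporal behavior of the gradient's sign.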
Citations
Journal ArticleDOI
Deep learning in neural networks
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.
Journal ArticleDOI
Bidirectional recurrent neural networks
Mike Schuster, Kuldip K. Paliwal
TL;DR: It is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution.
Book
Tabu Search
Fred Glover, Manuel Laguna
TL;DR: This book explores the meta-heuristics approach called tabu search, which is dramatically changing the authors' ability to solve a host of problems that stretch over the realms of resource planning, telecommunications, VLSI design, financial analysis, scheduling, space planning, energy distribution, molecular engineering, logistics, pattern classification, flexible manufacturing, waste management, mineral exploration, biomedical analysis, environmental conservation and scores of other problems.
Journal ArticleDOI
The Graph Neural Network Model
TL;DR: A new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains, and implements a function τ(G, n) that maps a graph G and one of its nodes n into an m-dimensional Euclidean space R^m.
Book
Deep Learning: Methods and Applications
TL;DR: This monograph provides an overview of general deep learning methodology and its applications to a variety of signal and information processing tasks, including natural language and text processing, information retrieval, and multimodal information processing empowered by multi-task deep learning.
References
Journal ArticleDOI
Increased Rates of Convergence Through Learning Rate Adaptation
TL;DR: A study of steepest descent analyzes why it can be slow to converge, and four heuristics for achieving faster rates of convergence are proposed.
An empirical study of learning speed in back-propagation networks
TL;DR: A new learning algorithm is developed that is faster than standard backprop by an order of magnitude or more and that appears to scale up very well as the problem size increases.
Learning to tell two spirals apart
TL;DR: A network architecture is exhibited that facilitates the learning of the spiral task, and the learning speed of several variants of the back-propagation algorithm is compared.
Journal ArticleDOI
SuperSAB: fast adaptive back propagation with good scaling properties
TL;DR: It is shown that SuperSAB may converge orders of magnitude faster than the original back propagation algorithm and is only slightly unstable, while the algorithm is very insensitive to the choice of parameter values and has excellent scaling properties.
Optimization of the Backpropagation Algorithm for Training Multilayer Perceptrons
W. Schiffmann, M. Joost, R. Werner
TL;DR: A comparison of backpropagation speed-up techniques, including a fixed learning rate, global learning rate adaptation, learning rate adaptation for each training pattern, evolutionarily adapted learning rates, nearly optimal learning rate adjustment using line search, and the Polak–Ribiere method with line search.