
Recurrent neural network

About: Recurrent neural network is a research topic. Over its lifetime, 29,231 publications have been published within this topic, receiving 890,011 citations. The topic is also known as: RNN.


Papers

Long Short-Term Memory
Journal Article, DOI: 10.1162/NECO.1997.9.8.1735
Sepp Hochreiter, Jürgen Schmidhuber
01 Nov 1997, Neural Computation
Abstract: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
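
The gating scheme the abstract describes can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the paper's code: it shows the now-standard cell with a forget gate (the original 1997 cell fixed the carousel's self-connection at 1), and all parameter names are placeholders.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        # W (4n x d), U (4n x n), b (4n,): input, recurrent, and bias
        # parameters for the input/forget/output gates and the cell
        # candidate, stacked in that order (names assumed).
        n = h_prev.shape[0]
        z = W @ x + U @ h_prev + b
        i = sigmoid(z[0*n:1*n])      # input gate: admit new information
        f = sigmoid(z[1*n:2*n])      # forget gate (fixed at 1 in the 1997 cell)
        o = sigmoid(z[2*n:3*n])      # output gate: expose the cell state
        g = np.tanh(z[3*n:4*n])      # candidate update
        c = f * c_prev + i * g       # the "constant error carousel"
        h = o * np.tanh(c)           # gated read-out
        return h, c

Each step touches every weight once, which is the O(1) cost per time step and weight claimed in the abstract.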


49,735 Citations


Neural Networks for Pattern Recognition
Book (open access)
Christopher M. Bishop
01 Jan 1995
Abstract: From the Publisher: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.
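
As a small illustration of one of the two model families the blurb mentions, here is a minimal radial basis function network forward pass. This is a generic sketch, not code from the book; the Gaussian basis and all names are assumptions.

    import numpy as np

    def rbf_forward(x, centers, widths, weights, bias):
        # A weighted sum of Gaussian bumps placed at the basis-function
        # centres; `weights` is typically fitted by least squares.
        d2 = np.sum((centers - x) ** 2, axis=1)      # squared distances to centres
        phi = np.exp(-d2 / (2.0 * widths ** 2))      # basis-function activations
        return phi @ weights + bias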


19,046 Citations


Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
Proceedings Article (open access), DOI: 10.3115/V1/D14-1179
01 Jan 2014
Abstract: In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNNs). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.
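
The two-network structure the abstract describes can be sketched as follows. This is a schematic reconstruction: the actual model uses a gated unit (the GRU introduced in the same paper) and is trained end-to-end on the conditional log-likelihood, whereas this sketch uses a plain tanh cell and made-up parameter names.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def encode(source, p):
        # Fold the whole source sequence into one fixed-length vector c.
        h = np.zeros(p["n_hidden"])
        for x in source:                       # x: embedded source symbol
            h = np.tanh(p["Wx"] @ x + p["Wh"] @ h + p["b"])
        return h                               # the context vector c

    def decode_step(y_prev, h, c, p):
        # Each decoder state is conditioned on the previously emitted
        # symbol and on the fixed-length context vector c.
        h = np.tanh(p["Wy"] @ y_prev + p["Wd"] @ h + p["Wc"] @ c + p["bd"])
        return h, softmax(p["Wo"] @ h)         # distribution over target symbols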


Topics: Encoder (54%), Recurrent neural network (52%), Artificial neural network (52%)

14,140 Citations


Neural Networks for Pattern Recognition
Book Chapter, DOI: 10.1016/S0065-2458(08)60404-0
Suresh Kothari, Heekuck Oh
Abstract: This chapter provides an account of different neural network architectures for pattern recognition. A neural network consists of several simple processing elements called neurons. Each neuron is connected to some other neurons and possibly to the input nodes. Neural networks provide a simple computing paradigm to perform complex recognition tasks in real time. The chapter categorizes neural networks into three types: single-layer networks, multilayer feedforward networks, and feedback networks. It discusses gradient descent and the relaxation method as the two underlying mathematical themes for deriving learning algorithms. Much research activity is centered on learning algorithms because of their fundamental importance in neural networks. The chapter discusses two important directions of research to improve learning algorithms: dynamic node generation, which is used by the cascade correlation algorithm, and the design of learning algorithms in which the choice of parameters is not an issue. It closes with a discussion of performance and implementation issues.
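
The gradient descent theme the chapter uses to derive learning rules is easiest to see in the simplest case, a single-layer sigmoid network trained on squared error. A minimal sketch, with assumed names and hyperparameters:

    import numpy as np

    def train_single_layer(X, t, lr=0.1, epochs=200):
        # X: (N, d) inputs, t: (N,) targets in [0, 1].
        rng = np.random.default_rng(0)
        w, b = rng.normal(scale=0.1, size=X.shape[1]), 0.0
        for _ in range(epochs):
            y = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # forward pass
            grad_z = (y - t) * y * (1.0 - y)         # chain rule through the sigmoid
            w -= lr * (X.T @ grad_z) / len(t)        # descend the error surface
            b -= lr * grad_z.mean()
        return w, b

Dynamic node generation, as in cascade correlation, then amounts to adding hidden units one at a time when this kind of descent stalls.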


Topics: Deep learning (67%), Recurrent neural network (67%), Time delay neural network (66%)

12,585 Citations


Approximation by superpositions of a sigmoidal function
Journal Article, DOI: 10.1007/BF02551274
George Cybenko
Abstract: In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks.
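
The form the theorem covers, G(x) = sum_j a_j * sigma(w_j x + b_j), can be tested numerically. Below is a small sketch that fits such a sum to a target function on [0, 1] using random hidden parameters and a least-squares outer layer; the fitting procedure is an assumption of this sketch, not the paper's (its result is purely existential).

    import numpy as np

    def sigmoidal_approx(f, n_hidden=50, n_grid=400, seed=0):
        # Build G(x) = sum_j a_j * sigmoid(w_j * x + b_j) on [0, 1].
        rng = np.random.default_rng(seed)
        x = np.linspace(0.0, 1.0, n_grid)
        w = rng.uniform(-20.0, 20.0, n_hidden)   # hidden slopes (assumed range)
        b = rng.uniform(-20.0, 20.0, n_hidden)   # hidden offsets
        phi = 1.0 / (1.0 + np.exp(-(np.outer(x, w) + b)))
        a, *_ = np.linalg.lstsq(phi, f(x), rcond=None)  # fit the outer layer
        def G(q):
            q = np.atleast_1d(q)
            return (1.0 / (1.0 + np.exp(-(np.outer(q, w) + b)))) @ a
        return G

    G = sigmoidal_approx(lambda x: np.sin(2.0 * np.pi * x))
    print(G(0.25)[0])   # close to sin(pi/2) = 1.0

Widening the hidden layer drives the approximation error down, which is exactly the density property the paper proves.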


10,615 Citations


Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022    67
2021    3,032
2020    3,648
2019    3,677
2018    3,012
2017    2,181

Top Attributes

Topic's top 5 most impactful authors:

Jun Wang: 198 papers, 12.3K citations
Yoshua Bengio: 92 papers, 49.1K citations
Jun Tani: 79 papers, 1.3K citations
Jürgen Schmidhuber: 75 papers, 71.4K citations
Björn Schuller: 61 papers, 3.4K citations

Network Information

Related Topics (5):

Supervised learning: 20.8K papers, 710.5K citations (95% related)
Unsupervised learning: 22.7K papers, 1M citations (95% related)
Artificial neural network: 207K papers, 4.5M citations (94% related)
Semi-supervised learning: 12.1K papers, 611.2K citations (93% related)
Feature vector: 48.8K papers, 954.4K citations (93% related)