Author

David E. Rumelhart

Bio: David E. Rumelhart is an academic researcher from Stanford University. The author has contributed to research in topics: Artificial neural network & Context (language use). The author has an h-index of 39, having co-authored 113 publications receiving 101,117 citations. Previous affiliations of David E. Rumelhart include University of California, San Diego and PARC.


Papers

Journal ArticleDOI: 10.1038/323533A0
Learning representations by back-propagating errors
09 Oct 1986-Nature
Abstract: We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure.
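The procedure sketched in this abstract is now the standard way to train multilayer networks. As a concrete illustration, here is a minimal NumPy version of the weight-adjustment loop on the XOR task; the task, layer sizes, learning rate, and iteration count are illustrative assumptions of this summary, not details from the paper.

```python
# Minimal back-propagation sketch: repeatedly adjust weights to shrink the
# squared difference between the actual and desired output vectors.
# All sizes and hyperparameters below are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, with one layer of 'hidden' units between input and output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: compute the actual output vector.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error toward the input layer.
    delta_out = (y - Y) * y * (1 - y)             # error signal at output units
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # error signal at hidden units

    # Adjust connection weights by gradient descent.
    W2 -= lr * h.T @ delta_out; b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid; b1 -= lr * delta_hid.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # should approach 0, 1, 1, 0
```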


19,542 Citations


Book ChapterDOI: 10.1016/B978-1-4832-1446-7.50035-2
Learning Internal Representations by Error Propagation
01 Jan 1988
Abstract: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion
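The "generalized delta rule" named in the section list is the back-propagation weight update. Paraphrased in the standard textbook notation (not quoted from the chapter), with learning rate η, target t_j, output o_j, net input net_j, and activation function f, the error signal δ is computed directly at output units and propagated backward to hidden units:

```latex
\Delta w_{ji} = \eta\, \delta_j\, o_i,
\qquad
\delta_j =
\begin{cases}
  (t_j - o_j)\, f'(\mathrm{net}_j), & j \text{ an output unit},\\[4pt]
  \bigl(\textstyle\sum_k \delta_k w_{kj}\bigr) f'(\mathrm{net}_j), & j \text{ a hidden unit}.
\end{cases}
```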


Topics: Delta rule (53%), Semi-supervised learning (50%)

16,807 Citations



Open access Book
03 Jan 1986
Abstract: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion


Topics: Delta rule (53%)

13,245 Citations



Cited by

Open access Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton
03 Dec 2012
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
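For orientation, the architecture the abstract describes can be written down compactly. The sketch below follows the published layer sizes but is an after-the-fact reconstruction in PyTorch (my choice of library), not the authors' two-GPU implementation; local response normalization is omitted.

```python
# Five convolutional layers (some followed by max-pooling), three
# fully-connected layers, non-saturating ReLU units, dropout in the
# fully-connected part, and a final 1000-way classifier.
# Expects 3x227x227 inputs. Illustrative reconstruction, not the original code.
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # logits over the 1000 classes; softmax applied in the loss
)
```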


Topics: Convolutional neural network (61%), Deep learning (59%), Dropout (neural networks) (54%)

73,871 Citations


Open access Journal ArticleDOI: 10.1023/A:1022627411411
Support-Vector Networks
Corinna Cortes, Vladimir Vapnik
15 Sep 1995-Machine Learning
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
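The abstract's idea, mapping inputs non-linearly into a high-dimensional feature space and fitting a linear decision surface there while tolerating some training errors, is what a soft-margin SVM with a kernel computes. A minimal sketch using scikit-learn (my choice of library; the two-moons data stands in for the paper's OCR benchmarks):

```python
# Soft-margin support-vector classifier with a polynomial kernel, echoing
# the "polynomial input transformations" mentioned in the abstract.
# Data set and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)  # toy two-group problem
clf = SVC(kernel="poly", degree=3, C=1.0)  # C trades margin width against training errors
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```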


Topics: Feature learning (63%), Active learning (machine learning) (62%), Feature vector (62%)

35,157 Citations


Journal ArticleDOI: 10.1109/5.726791
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
01 Jan 1998-Proceedings of the IEEE
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
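The convolutional part of the system can be summarized in a few lines. The sketch below uses the commonly cited LeNet-5 layer sizes in PyTorch (my choice of library); it covers only the character recognizer, not the graph transformer network machinery or the cheque-reading pipeline.

```python
# LeNet-style convolutional network: small shared-weight convolutions and
# subsampling absorb the variability of 2D shapes before classification.
# Expects 1x32x32 digit images. Approximate layer sizes, illustrative only.
import torch.nn as nn

lenet_like = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),
    nn.AvgPool2d(kernel_size=2),              # subsampling
    nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),
    nn.AvgPool2d(kernel_size=2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),                        # one output per digit class
)
```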


Topics: Neocognitron (64%), Intelligent character recognition (64%), Artificial neural network (60%)

34,930 Citations


Journal ArticleDOI: 10.1038/NATURE14539
Deep learning
Yann LeCun, Yoshua Bengio, Geoffrey E. Hinton
28 May 2015-Nature
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
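To make the closing contrast concrete: a recurrent net computes each new internal representation from the current input and the previous representation. A single Elman-style recurrence in NumPy (sizes and weights are arbitrary illustrations, not from the review):

```python
# One recurrent layer stepped over a sequence: the hidden state at each
# time step is computed from the input and the previous hidden state.
import numpy as np

rng = np.random.default_rng(1)
W_xh = rng.normal(scale=0.1, size=(8, 16))   # input -> hidden weights
W_hh = rng.normal(scale=0.1, size=(16, 16))  # hidden -> hidden (the recurrence)

h = np.zeros(16)                             # initial hidden state
for x in rng.normal(size=(5, 8)):            # a length-5 toy input sequence
    h = np.tanh(x @ W_xh + h @ W_hh)         # new representation from old one
print(h.round(3))
```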


33,931 Citations


Open access Book
Reinforcement Learning: An Introduction
Richard S. Sutton, Andrew G. Barto
01 Jan 1998
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
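As a taste of the temporal-difference methods covered in Part II, here is tabular Q-learning on a made-up five-state chain with a reward only at the right end; the environment and hyperparameters are illustrative assumptions, not examples from the book.

```python
# Tabular Q-learning: the agent adjusts action values toward a bootstrapped
# target so as to maximize total reward. Toy chain environment, states 0..4,
# rightmost state terminal and rewarding.
import random

random.seed(0)
n_states, actions = 5, (-1, +1)           # step left or right along the chain
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.1         # step size, discount, exploration rate

for _ in range(500):
    s = 0
    while s != n_states - 1:              # episode ends at the rightmost state
        a = random.choice(actions) if random.random() < eps \
            else max(actions, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Temporal-difference update toward r + gamma * max_b Q(s', b).
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# Learned greedy policy: should be +1 (move right) in every non-terminal state.
print({s: max(actions, key=lambda b: Q[(s, b)]) for s in range(n_states - 1)})
```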


Topics: Learning classifier system (69%), Reinforcement learning (69%), Apprenticeship learning (65%)

32,257 Citations


Performance Metrics

Author's h-index: 39
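For reference, an h-index of 39 means the author has 39 papers with at least 39 citations each. The definition in code (the citation counts below are made up for illustration):

```python
# h-index: the largest h such that h papers have at least h citations each.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```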

No. of papers from the author in previous years:

Year  Papers
2017  2
2013  1
1998  3
1995  5
1994  6
1993  5

Top Attributes


Author's most impactful journals

Cognitive Science: 4 papers, 2.8K citations
Psychological Review: 3 papers, 5.6K citations
Communications of the ACM: 2 papers, 965 citations
Nature: 2 papers, 20.4K citations

Network Information
Related Authors (5)
Andreas S. Weigend: 10 papers, 2.3K citations (82% related)
James L. McClelland: 323 papers, 80.2K citations (71% related)
Donald A. Norman: 292 papers, 71.2K citations (66% related)
Victor Abrash: 28 papers, 945 citations (64% related)
Michael M. Cohen: 53 papers, 3.5K citations (62% related)