Book Chapter

Gradient-Based Learning Applied to Document Recognition

TLDR
Various methods applied to handwritten character recognition are reviewed and compared, and Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
Abstract
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules, including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
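To make the gradient-based learning recipe concrete, here is a minimal sketch of a LeNet-style convolutional classifier for 32x32 grayscale digit images. PyTorch, the ReLU activations, and max-pooling are assumptions of this sketch; the paper's original LeNet-5 uses tanh-like units and trainable subsampling layers, but the layer sizes below follow its C1-S2-C3-S4-C5-F6 layout.

```python
# Minimal LeNet-style sketch (assumed PyTorch; ReLU/max-pool are modern
# substitutes for the paper's tanh units and trainable subsampling).
import torch
import torch.nn as nn

class LeNetSketch(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # C1: 6 feature maps, 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                  # S2: subsample to 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # C3: 16 feature maps, 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                  # S4: subsample to 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),       # C5
            nn.ReLU(),
            nn.Linear(120, 84),               # F6
            nn.ReLU(),
            nn.Linear(84, num_classes),       # output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One gradient-based training step on a dummy batch: backpropagation
# pushes gradients of a single overall loss through every layer.
model = LeNetSketch()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(8, 1, 32, 32)
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 10, (8,)))
opt.zero_grad()
loss.backward()
opt.step()
```

The single optimizer step at the end illustrates the abstract's central point: because every module is differentiable, one overall performance measure can drive the parameters of the whole pipeline, which is also the idea GTNs extend to multi-module document recognition systems.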


Citations
Proceedings Article

Blockout: Dynamic Model Selection for Hierarchical Deep Networks

TL;DR: Blockout proposes a method for regularization and model selection that simultaneously learns both the model architecture and its parameters, enabling structure learning via back-propagation in hierarchical deep networks.
Posted Content

The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study

TL;DR: The authors investigate how the final parameters found by stochastic gradient descent are influenced by over-parameterization, finding that the optimal SGD hyper-parameters are determined by a "normalized noise scale" that is a function of the batch size, learning rate, and initialization conditions.
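For reference, related work (Smith & Le, 2018) defines the base SGD noise scale as g ≈ εN/B for learning rate ε, training-set size N, and batch size B; the "normalized" variant in the cited paper additionally folds in width and initialization factors, whose exact form is not reproduced here. A hedged sketch of the base quantity:

```python
# Hedged sketch: the un-normalized SGD noise scale g = eps * N / B from
# related work (Smith & Le, 2018). The cited paper's "normalized" noise
# scale further accounts for network width and initialization; that
# normalization is omitted here as an assumption about its exact form.
def sgd_noise_scale(learning_rate: float, train_set_size: int, batch_size: int) -> float:
    return learning_rate * train_set_size / batch_size

# Example with typical CIFAR-10-style settings.
print(sgd_noise_scale(0.1, 50_000, 128))  # 39.0625
```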
Posted Content

Variational Autoencoders Pursue PCA Directions (by Accident)

TL;DR: The authors explain how the diagonal approximation in the encoder, together with the inherent stochasticity, forces local orthogonality of the decoder in VAEs.
Proceedings Article

Etalumis: bringing probabilistic programming to scientific simulators at scale

TL;DR: A novel probabilistic programming language (PPL) framework is presented that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol and provides Markov chain Monte Carlo (MCMC) and deep-learning-based inference compilation (IC) engines for tractable inference.
Posted Content

Multi-Modality Fusion based on Consensus-Voting and 3D Convolution for Isolated Gesture Recognition

TL;DR: A convolutional two-stream consensus voting network (2SCVN) is proposed that explicitly models both the short-term and long-term structure of RGB sequences and significantly improves recognition accuracy.