Book Chapter

Gradient-Based Learning Applied to Document Recognition

TLDR
Various methods applied to handwritten character recognition are reviewed and compared, and Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
Abstract
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules, including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
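To make the abstract's core recipe concrete, the sketch below is a minimal, hypothetical PyTorch rendition of a LeNet-style convolutional network together with one gradient-based (backpropagation) update on a batch of digit images. The layer sizes loosely echo the LeNet-5 layout, but the activation functions, max pooling, optimizer settings, and dummy data are assumptions for illustration, not the paper's exact architecture or training procedure.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNetStyleCNN(nn.Module):
    """A small convolutional network for 32x32 grayscale digit images (illustrative only)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)    # 32x32 -> 28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)   # 14x14 -> 10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = F.max_pool2d(torch.tanh(self.conv1(x)), 2)  # subsample to 14x14
        x = F.max_pool2d(torch.tanh(self.conv2(x)), 2)  # subsample to 5x5
        x = x.flatten(1)
        x = torch.tanh(self.fc1(x))
        x = torch.tanh(self.fc2(x))
        return self.fc3(x)

# One gradient-based update step with backpropagation (assumed toy training loop).
model = LeNetStyleCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
images = torch.randn(8, 1, 32, 32)            # placeholder batch of digit images
labels = torch.randint(0, 10, (8,))           # placeholder class labels
loss = F.cross_entropy(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In the Graph Transformer Network setting described in the abstract, the same kind of gradient-based update is propagated through all modules of the document recognition pipeline rather than through a single classifier.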


Citations
Proceedings Article

Boosting Domain Adaptation by Discovering Latent Domains

TL;DR: In this article, the authors propose a novel convolutional neural network (CNN) architecture which automatically discovers latent domains in visual datasets and exploits this information to learn robust target classifiers.
Proceedings Article

Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent

TL;DR: The DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, is introduced and shown to provide a way both to classify these algorithms and to model their performance.
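As a side note on the entry above: the DMGC model itself is not described here, but the basic ingredient of low-precision SGD, keeping parameters on a coarse fixed-point grid while applying gradient updates, can be sketched in a few lines. Everything in this toy snippet (the grid step, the stochastic-rounding helper, the quadratic objective) is an assumption for illustration, and the asynchronous, multi-worker aspect the paper analyzes is omitted.

import torch

def stochastic_round(x, scale=2**-6):
    # Round to a fixed-point grid with step `scale`, rounding up or down
    # at random with probability proportional to the remainder.
    scaled = x / scale
    floor = torch.floor(scaled)
    prob_up = scaled - floor
    return (floor + torch.bernoulli(prob_up)) * scale

w = torch.zeros(4)                       # low-precision model weights
data = torch.randn(256, 4)               # toy inputs
target = data @ torch.tensor([1.0, -2.0, 0.5, 3.0])  # toy regression targets

lr = 0.05
for i in range(200):
    x, y = data[i % 256], target[i % 256]
    grad = 2 * (w @ x - y) * x           # gradient of the squared error (w @ x - y) ** 2
    w = stochastic_round(w - lr * grad)  # keep weights on the low-precision grid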
Proceedings Article

Generative Adversarial Minority Oversampling

TL;DR: In this article, a three-player adversarial game between a convex generator, a multi-class classifier network, and a real/fake discriminator is proposed to perform oversampling in deep learning systems.
Posted Content

Minimal Gated Unit for Recurrent Neural Networks

TL;DR: In this article, the Minimal Gated Unit (MGU) is proposed for RNNs; it contains only one gate, making it a minimal design among all gated hidden units.
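For context on what a single-gate recurrent unit looks like, the sketch below is a hypothetical PyTorch cell in the spirit of MGU, assuming the commonly cited GRU-like formulation in which one forget gate drives both the candidate state and the state update. The class name, layer shapes, and usage loop are made up for the example and are not taken from the paper.

import torch
import torch.nn as nn

class MinimalGatedCell(nn.Module):
    """Single-gate recurrent cell: one forget gate controls both the
    candidate state and the state update (illustrative sketch)."""
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.cand = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h):
        f = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))         # forget gate
        h_tilde = torch.tanh(self.cand(torch.cat([x, f * h], dim=-1)))  # candidate state
        return (1 - f) * h + f * h_tilde                                 # gated update

# Usage: step a random toy sequence through the cell.
cell = MinimalGatedCell(input_size=4, hidden_size=8)
h = torch.zeros(1, 8)
for x in torch.randn(5, 1, 4):
    h = cell(x, h)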
Posted Content

Events-to-Video: Bringing Modern Computer Vision to Event Cameras

TL;DR: This work proposes a novel recurrent neural network that reconstructs videos from a stream of events and trains it on a large amount of simulated event data; the resulting network surpasses state-of-the-art reconstruction methods by a large margin and opens the door to bringing the outstanding properties of event cameras to an entirely new range of tasks.