Book Chapter

Gradient-Based Learning Applied to Document Recognition

TLDR
Various methods applied to handwritten character recognition are reviewed and compared, and Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
Abstract
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
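
The abstract's two technical ingredients are a convolutional network trained by backpropagation and globally trained multi-module systems. As a rough sketch of the first ingredient only, the PyTorch snippet below is this summary's own illustration, not code from the paper; the pooling choice and layer sizes are in the spirit of LeNet-5 rather than its exact architecture. It shows a small convolutional digit classifier and one gradient-based training step.

```python
# Illustrative LeNet-style convolutional network for 28x28 digit images,
# trained end-to-end with gradient descent (backpropagation).
# Layer sizes and pooling are illustrative, not the paper's exact LeNet-5.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers share weights across spatial positions,
        # which gives the network tolerance to small shifts and
        # distortions of 2D shapes.
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)   # 28x28 -> 28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)              # 14x14 -> 10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.max_pool2d(torch.tanh(self.conv1(x)), 2)  # -> 6 x 14 x 14
        x = F.max_pool2d(torch.tanh(self.conv2(x)), 2)  # -> 16 x 5 x 5
        x = x.flatten(1)
        x = torch.tanh(self.fc1(x))
        x = torch.tanh(self.fc2(x))
        return self.fc3(x)                                # class scores

def train_step(model, optimizer, images, labels):
    # One gradient-based update on a batch of images and integer labels.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)
    loss.backward()    # backpropagate the error gradient
    optimizer.step()   # gradient descent update
    return loss.item()

model = SmallConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```

Weight sharing across spatial positions in the convolutional layers is what lets the model deal with the variability of 2D shapes that the abstract highlights.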

Citations
Proceedings Article

Deep Neural Network for Image Recognition Based on the Caffe Framework

TL;DR: The development of a deep neural network model for image recognition using the Caffe framework and a corresponding experimental study on the MNIST data set are described.
Proceedings Article

A Further Step to Perfect Accuracy by Training CNN with Larger Data

TL;DR: The effects of combining the datasets prior to training and of transfer learning during training are revealed, and an almost perfect accuracy is shown, suggesting the ability of the network to generalize across all forms of text.
Proceedings Article

Combining Multi-scale Character Recognition and Linguistic Knowledge for Natural Scene Text OCR

TL;DR: A novel method to recognize scene text that avoids the conventional character segmentation step is proposed, relying on a neural classification approach applied to every window in order to recognize valid characters and identify invalid ones.
Posted Content

Truly shift-invariant convolutional neural networks

TL;DR: Adaptive polyphase sampling (APS) is proposed, a simple sub-sampling scheme that allows convolutional neural networks to achieve 100% consistency in classification performance under shifts, without any loss in accuracy.
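
The consistency claim rests on how the stride-2 downsampling grid is chosen. The snippet below is a minimal PyTorch sketch of the idea as summarized here, not the authors' released implementation; the function name and the l_p norm choice are this summary's assumptions. The feature map is split into its four stride-2 polyphase components and the component with the largest norm is kept, so a one-pixel shift of the input selects the same samples.

```python
# Sketch of adaptive polyphase sampling (APS) for stride-2 downsampling.
# Instead of always keeping pixels at even rows/columns, pick the polyphase
# component with the largest norm, so a shifted input selects the same
# samples and the downsampled output stays consistent under shifts.
# Illustrative only; the paper's released code may differ in detail.
import torch

def adaptive_polyphase_downsample(x: torch.Tensor, p: float = 2.0) -> torch.Tensor:
    """x: feature map of shape (N, C, H, W) with even H and W."""
    # The four stride-2 polyphase components of the feature map.
    components = [
        x[:, :, 0::2, 0::2],
        x[:, :, 0::2, 1::2],
        x[:, :, 1::2, 0::2],
        x[:, :, 1::2, 1::2],
    ]
    # Per-sample l_p norm of each component over channels and space.
    norms = torch.stack(
        [c.abs().pow(p).sum(dim=(1, 2, 3)) for c in components], dim=1
    )  # shape (N, 4)
    best = norms.argmax(dim=1)                 # dominant component per sample
    stacked = torch.stack(components, dim=1)   # (N, 4, C, H/2, W/2)
    return stacked[torch.arange(x.size(0)), best]

# Usage: substitute this selection for a fixed stride-2 downsampling step.
y = adaptive_polyphase_downsample(torch.randn(8, 16, 32, 32))
print(y.shape)  # torch.Size([8, 16, 16, 16])
```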
Journal Article

Distributional Generalization: A New Kind of Generalization

TL;DR: A new notion of generalization is introduced -- Distributional Generalization -- which roughly states that outputs of a classifier at train and test time are close *as distributions*, as opposed to close in just their average error.