Book Chapter

Gradient-Based Learning Applied to Document Recognition

TL;DR: Various methods applied to handwritten character recognition are reviewed and compared, and Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
Abstract
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules, including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
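The gradient-based learning the abstract describes can be illustrated with a minimal sketch (this is an illustrative toy, not the paper's LeNet architecture or GTN): a single 2×2 convolution filter is fit by gradient descent so that its output matches the output of a known target filter. The `conv2d` helper, the target kernel, and the learning rate are all assumptions chosen for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a convolutional layer."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

# Toy setup: a random "image" and a target filter the learner should recover.
rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
target_kernel = np.array([[1.0, 0.0], [0.0, -1.0]])  # hypothetical target
target = conv2d(image, target_kernel)

# Gradient-based learning: minimize L = 0.5 * sum((pred - target)^2)
kernel = np.zeros((2, 2))
lr = 0.005
for _ in range(500):
    pred = conv2d(image, kernel)
    err = pred - target  # dL/dpred
    # dL/dkernel[i, j] = sum over output positions of err * shifted image patch
    grad = np.zeros_like(kernel)
    for i in range(2):
        for j in range(2):
            grad[i, j] = np.sum(err * image[i:i + err.shape[0], j:j + err.shape[1]])
    kernel -= lr * grad  # gradient descent step

print(np.round(kernel, 3))  # learned filter approximates target_kernel
```

Because the loss is quadratic in the kernel weights, plain gradient descent converges to the target filter; in the paper's setting the same gradient machinery is applied through many stacked, nonlinear layers via backpropagation.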


Citations
Proceedings Article

Integrated Optic Disc and Cup Segmentation with Deep Learning

TL;DR: A comprehensive solution based on applying convolutional neural networks to feature exaggerated inputs emphasizing disc pallor without blood vessel obstruction, as well as the degree of vessel kinking is described, which can be used to direct the most challenging cases for manual inspection in glaucoma.
Proceedings Article

RATM: Recurrent Attentive Tracking Model

TL;DR: In this article, an attention-based modular neural framework for computer vision is proposed, which consists of three modules: a recurrent attention module controlling where to look in an image or video frame, a feature-extraction module providing a representation of what is seen, and an objective module formalizing why the model learns its attentive behavior.
Posted Content

The Lottery Ticket Hypothesis at Scale

TL;DR: It is found that later resetting produces more stable winning tickets and that improved stability correlates with higher winning-ticket accuracy.
Posted Content

Learning the Structure of Deep Sparse Graphical Models

TL;DR: The cascading Indian buffet process (CIBP) is introduced, which provides a prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference.
Posted Content

Rademacher Complexity for Adversarially Robust Generalization.

TL;DR: In this paper, Wang et al. study the adversarially robust generalization problem through the lens of Rademacher complexity and prove tight bounds on the adversarial Rademacher complexity for binary linear classifiers.