Book Chapter · DOI

Gradient-Based Learning Applied to Document Recognition

TLDR
Various methods applied to handwritten character recognition are reviewed and compared, and Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
Abstract
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
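The architecture family the paper champions alternates convolutional feature extraction with subsampling before a small fully connected classifier. Below is a minimal PyTorch sketch of that pattern; the layer sizes mirror the well-known LeNet-5 layout, but the activation and pooling choices here are simplifying assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LeNetSketch(nn.Module):
    """Minimal LeNet-style CNN for 32x32 grayscale digit images.

    A simplified sketch of the convolution -> subsampling -> dense
    pattern described in the paper, not the exact LeNet-5 layers.
    """
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # subsampling: 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Gradient-based learning: backpropagation minimizes a classification loss.
model = LeNetSketch()
x = torch.randn(8, 1, 32, 32)          # batch of 8 fake digit images
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 10, (8,)))
loss.backward()                         # gradients for every parameter
```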


Citations
Proceedings Article · DOI

Improving Code Search with Co-Attentive Representation Learning

TL;DR: Experimental results show that the proposed co-attentive representation learning model, CARLCS-CNN, significantly outperforms DeepCS by 26.72% in terms of MRR (mean reciprocal rank), and is five times faster than DeepCS in model training and four times faster in testing.
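Co-attention here means that code and query token sequences attend to each other through a shared affinity matrix before being pooled into comparable vectors. A rough sketch of that mechanism follows; the bilinear affinity, max-pooling, and dimensions are illustrative assumptions, not CARLCS-CNN's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    """Sketch of co-attentive pooling over code and query embeddings.

    Builds an affinity matrix between the two token sequences, then
    pools each sequence with attention weights derived from it.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) * 0.01)

    def forward(self, code: torch.Tensor, query: torch.Tensor):
        # code: (n, dim) token embeddings, query: (m, dim)
        affinity = torch.tanh(code @ self.W @ query.T)             # (n, m)
        code_attn = F.softmax(affinity.max(dim=1).values, dim=0)   # (n,)
        query_attn = F.softmax(affinity.max(dim=0).values, dim=0)  # (m,)
        code_vec = code_attn @ code      # attention-weighted code summary
        query_vec = query_attn @ query   # attention-weighted query summary
        return code_vec, query_vec

# Similarity between a code snippet and a query, e.g. for ranking by MRR.
attn = CoAttention(dim=64)
c, q = attn(torch.randn(50, 64), torch.randn(10, 64))
score = F.cosine_similarity(c, q, dim=0)
```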
Posted Content

Feature Selection using Stochastic Gates

TL;DR: This study proposes a method for feature selection in high-dimensional non-linear function estimation problems, based on minimizing the $\ell_0$ norm of the vector of indicator variables that represents whether each feature is selected; the method relies on a continuous relaxation of Bernoulli distributions.
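The continuous relaxation makes the expected $\ell_0$ penalty differentiable: each Bernoulli indicator is replaced by a clipped Gaussian gate whose probability of being open can be penalized directly. A hedged sketch of that idea; the gate parameterization follows the general stochastic-gate recipe, but the constants here are illustrative.

```python
import torch
import torch.nn as nn

class StochasticGates(nn.Module):
    """Continuous relaxation of Bernoulli feature-selection gates.

    Each gate is a clipped Gaussian: z = clamp(mu + sigma * eps, 0, 1).
    The probability that a gate is open, Phi(mu / sigma), serves as a
    differentiable surrogate for the l0 norm of the selection vector.
    """
    def __init__(self, num_features: int, sigma: float = 0.5):
        super().__init__()
        self.mu = nn.Parameter(torch.full((num_features,), 0.5))
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        eps = torch.randn_like(self.mu) if self.training else 0.0
        z = torch.clamp(self.mu + self.sigma * eps, 0.0, 1.0)
        return x * z    # mask the input features with the gates

    def l0_penalty(self) -> torch.Tensor:
        # Expected number of open gates under the relaxed distribution.
        normal = torch.distributions.Normal(0.0, 1.0)
        return normal.cdf(self.mu / self.sigma).sum()

# Train by minimizing task loss + lambda * gates.l0_penalty().
gates = StochasticGates(num_features=100)
masked = gates(torch.randn(32, 100))
penalty = gates.l0_penalty()
```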
Posted Content

Detail-Preserving Pooling in Deep Networks

TL;DR: In this article, an adaptive pooling method that magnifies spatial changes and preserves important structural detail is proposed; it can be learned jointly with the rest of the network and consistently outperforms previous pooling approaches.
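The core idea is to replace a pooling window's uniform average with weights that grow for pixels deviating from the local mean, so fine structure survives downsampling. A simplified sketch follows; the paper's inverse-bilateral weighting and learned parameters are reduced here to a single learnable exponent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DetailPreservingPool2d(nn.Module):
    """Simplified detail-preserving 2x2 pooling.

    Pixels that deviate from the window mean get larger weights
    (controlled by a learnable exponent), so high-contrast detail
    contributes more to the pooled value than a plain average.
    """
    def __init__(self, alpha_init: float = 1.0, eps: float = 1e-3):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha_init))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) with H, W divisible by 2
        mean = F.avg_pool2d(x, 2)                         # window averages
        up = F.interpolate(mean, scale_factor=2, mode="nearest")
        w = (torch.abs(x - up) + self.eps) ** self.alpha  # deviation weights
        num = F.avg_pool2d(w * x, 2)                      # weighted sum / 4
        den = F.avg_pool2d(w, 2)                          # weight sum / 4
        return num / den                                  # weighted average

pool = DetailPreservingPool2d()
y = pool(torch.randn(1, 3, 32, 32))   # -> (1, 3, 16, 16)
```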
Proceedings Article

Learning the structure of sum-product networks via an SVD-based algorithm

TL;DR: Two new structure learning algorithms for sum-product networks, in the generative and discriminative settings, are presented; both are based on recursively extracting rank-one submatrices from data.
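The rank-one test underlying such an algorithm is standard: a submatrix is approximately rank one when its second singular value is negligible next to its first. A small numpy sketch of that test and a crude candidate-submatrix proposal; the threshold and selection rule are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def is_rank_one(M: np.ndarray, tol: float = 0.05) -> bool:
    """True if M is approximately rank one (sigma_2 << sigma_1)."""
    s = np.linalg.svd(M, compute_uv=False)
    return len(s) < 2 or s[1] <= tol * s[0]

def dominant_block(M: np.ndarray, frac: float = 0.5):
    """Rows/columns most aligned with the top singular vectors.

    A crude way to propose a candidate rank-one submatrix that a
    recursive structure learner could extract before recursing on
    the remainder.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k_r = max(1, int(frac * M.shape[0]))
    k_c = max(1, int(frac * M.shape[1]))
    rows = np.argsort(-np.abs(U[:, 0]))[:k_r]
    cols = np.argsort(-np.abs(Vt[0]))[:k_c]
    return rows, cols

# A noisy rank-one matrix passes the test; random data does not.
a, b = np.random.randn(20, 1), np.random.randn(1, 30)
print(is_rank_one(a @ b + 0.01 * np.random.randn(20, 30)))  # True
print(is_rank_one(np.random.randn(20, 30)))                 # False
```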
Posted Content

Differentially Private Data Generative Models

TL;DR: It is demonstrated that both DP-AuGM and DP-VaeGM can be easily integrated with real-world machine learning applications, such as machine learning as a service and federated learning, which are otherwise threatened by the membership inference attack and the GAN-based attack, respectively.
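Generative models of this kind are typically trained with differentially private SGD: per-example gradients are clipped and Gaussian noise is added before each update. A minimal sketch of that step; the clip norm and noise multiplier are illustrative assumptions, and the papers' privacy accounting is omitted.

```python
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y,
                lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One differentially private SGD step (sketch).

    Clips each example's gradient to bound its influence, then adds
    Gaussian noise scaled to the clip norm before averaging.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(batch_x, batch_y):       # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for s, p in zip(summed, params):
            s += p.grad * scale              # clipped gradient
    n = len(batch_x)
    with torch.no_grad():
        for s, p in zip(summed, params):
            noise = torch.randn_like(s) * noise_mult * clip_norm
            p -= lr * (s + noise) / n        # noisy average update

# Usage on a toy classifier.
model = torch.nn.Linear(10, 2)
x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
dp_sgd_step(model, torch.nn.CrossEntropyLoss(), x, y)
```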