Book Chapter DOI

Gradient-Based Learning Applied to Document Recognition

TLDR
Various methods applied to handwritten character recognition are reviewed and compared, and Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
Abstract
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules, including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
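As an illustration of the kind of convolutional architecture the abstract describes, below is a minimal sketch of a LeNet-style network for 10-class handwritten digit recognition trained with gradient-based learning, assuming PyTorch. The layer sizes follow the general LeNet-5 pattern, but the exact layers, activations, and training hyperparameters are placeholders rather than the configuration used in the paper.

```python
# Minimal sketch of a LeNet-style convolutional network for 10-class digit
# recognition on 32x32 grayscale inputs, assuming PyTorch. Layer sizes follow
# the general LeNet-5 pattern; the original subsampling and RBF output layers
# are simplified to average pooling and a linear classifier.
import torch
import torch.nn as nn

class LeNetStyle(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28, 6 feature maps
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10, 16 feature maps
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),       # class scores for digits 0-9
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Gradient-based learning: one optimization step with backpropagation.
model = LeNetStyle()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 32, 32)            # placeholder batch of digit images
labels = torch.randint(0, 10, (8,))           # placeholder labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()                               # backpropagate gradients
optimizer.step()                              # update all weights jointly
```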


Citations
Posted Content

Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation

TL;DR: In this paper, the authors proposed two approaches for generating a backdoor that is hardly perceptible yet effective in poisoning the model, and carried out extensive experimental evaluations under various assumptions on the adversary model.
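As a generic illustration of the data-poisoning setup behind such backdoor attacks (not the two perturbation-generation approaches proposed in this paper), the sketch below adds a low-amplitude trigger to a small fraction of training images and relabels them to an attacker-chosen target class, assuming NumPy; the trigger pattern, poison rate, and target label are placeholders.

```python
# Generic backdoor data poisoning with a faint additive trigger, assuming
# NumPy. The trigger, poison rate, and target label are illustrative only.
import numpy as np

def poison_dataset(images, labels, trigger, target_label, poison_rate=0.05, seed=0):
    """Add a faint trigger to a small fraction of images and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx] = np.clip(images[idx] + trigger, 0.0, 1.0)  # keep pixels in [0, 1]
    labels[idx] = target_label                              # attacker-chosen class
    return images, labels

# Example: a barely visible additive pattern on 28x28 grayscale images.
trigger = 0.02 * np.random.default_rng(1).standard_normal((28, 28))
clean_x = np.random.default_rng(2).random((1000, 28, 28))
clean_y = np.random.default_rng(3).integers(0, 10, size=1000)
poisoned_x, poisoned_y = poison_dataset(clean_x, clean_y, trigger, target_label=7)
```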
Proceedings Article

Self-Supervised Generalisation with Meta Auxiliary Learning

TL;DR: The proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets, without requiring any additional data, and is even competitive when compared with human-defined auxiliary labels.
Posted Content

Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty

TL;DR: This work describes how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems; it outlines methods for displaying uncertainty to stakeholders and recommends how to collect the information required for incorporating uncertainty into existing ML pipelines.
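As a small, generic example of measuring per-prediction uncertainty (not a method from this work), the snippet below computes softmax entropy, one common scalar signal that could be surfaced to stakeholders; assumes NumPy.

```python
# Softmax entropy as a simple per-prediction uncertainty measure (illustrative
# only, not a technique from the cited paper). Assumes NumPy.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Entropy of each row of class probabilities; higher means less certain."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log(probs)).sum(axis=-1)

print(predictive_entropy(np.array([[0.98, 0.01, 0.01],    # confident prediction
                                   [0.34, 0.33, 0.33]])))  # uncertain prediction
```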
Proceedings Article DOI

User-Guided Deep Anime Line Art Colorization with Conditional Adversarial Networks

TL;DR: This work proposes a novel deep conditional adversarial architecture for scribble-based anime line art colorization that combines the conditional framework with the WGAN-GP criterion and a perceptual loss, enabling robust training of a deep network that makes the synthesized images more natural and realistic.
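As a hedged sketch of how a WGAN-GP criterion and a perceptual loss might be combined in such a setup, the snippet below shows a generator objective and the WGAN-GP gradient penalty, assuming PyTorch; the critic, feature extractor, and loss weight are placeholders, not the paper's architecture or hyperparameters.

```python
# Hedged sketch of a WGAN-GP adversarial term plus a perceptual (feature-space)
# loss; networks and the weight lambda_p are placeholders. Assumes PyTorch.
import torch
import torch.nn.functional as F

def generator_loss(critic, feature_extractor, fake_img, real_img, lambda_p=10.0):
    # Adversarial term: the generator minimizes the negative critic score.
    adv = -critic(fake_img).mean()
    # Perceptual term: L1 distance between fixed deep features of fake and real.
    with torch.no_grad():
        real_feat = feature_extractor(real_img)
    perc = F.l1_loss(feature_extractor(fake_img), real_feat)
    return adv + lambda_p * perc

def gradient_penalty(critic, real_img, fake_img):
    # WGAN-GP: penalize deviation of the critic's gradient norm from 1
    # at random interpolates between real and fake images.
    alpha = torch.rand(real_img.size(0), 1, 1, 1, device=real_img.device)
    inter = (alpha * real_img + (1 - alpha) * fake_img).requires_grad_(True)
    grad = torch.autograd.grad(critic(inter).sum(), inter, create_graph=True)[0]
    return ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```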
Journal Article DOI

CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks

TL;DR: This paper proposes CNN-Cert, a general and efficient framework that can handle various architectures, including convolutional layers, max-pooling layers, batch normalization layers, residual blocks, and general activation functions.
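CNN-Cert derives layer-wise linear bounds; as a much simpler stand-in that only illustrates the general idea of propagating certified output bounds, the sketch below pushes interval bounds through a single convolutional layer under an L-infinity input perturbation, assuming PyTorch.

```python
# Not CNN-Cert itself: simple interval-bound propagation through one conv layer,
# shown only to illustrate certified bound propagation. Assumes PyTorch.
import torch
import torch.nn.functional as F

def conv_interval_bounds(x, eps, weight, bias, stride=1, padding=0):
    """Bounds of conv2d(x') for all x' with ||x' - x||_inf <= eps."""
    center = F.conv2d(x, weight, bias, stride=stride, padding=padding)
    # Worst-case deviation: eps times the absolute weights summed over each
    # receptive field, computed by convolving an all-ones input with |weight|.
    radius = eps * F.conv2d(torch.ones_like(x), weight.abs(), None,
                            stride=stride, padding=padding)
    return center - radius, center + radius

# Example: certified pre-activation bounds for a random 3x3 convolution.
x = torch.randn(1, 1, 8, 8)
w = torch.randn(4, 1, 3, 3)
b = torch.zeros(4)
lower, upper = conv_interval_bounds(x, eps=0.1, weight=w, bias=b, padding=1)
```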