Book Chapter DOI

Gradient-Based Learning Applied to Document Recognition

TL;DR
Various methods applied to handwritten character recognition are reviewed and compared, and Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
Abstract
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
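To give a concrete picture of the kind of architecture the abstract refers to, the following is a minimal sketch, assuming PyTorch. The layer sizes loosely follow the LeNet-5 family described in the paper, but max pooling and a cross-entropy loss stand in for the original subsampling layers and loss function, so this is an illustrative example rather than a reproduction of the authors' exact model.

```python
# Minimal sketch (assuming PyTorch) of a LeNet-style convolutional network
# trained end to end with gradient-based learning (backpropagation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNetStyleNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers extract local features with shared weights,
        # giving some invariance to shifts and distortions of 2D shapes.
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)    # 32x32 -> 28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)   # 14x14 -> 10x10
        # Fully connected layers form the final decision surface.
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = F.max_pool2d(torch.tanh(self.conv1(x)), 2)  # pooling as subsampling
        x = F.max_pool2d(torch.tanh(self.conv2(x)), 2)
        x = x.flatten(1)
        x = torch.tanh(self.fc1(x))
        x = torch.tanh(self.fc2(x))
        return self.fc3(x)

# One gradient-based update on a placeholder batch of 32x32 digit images.
model = LeNetStyleNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
images = torch.randn(8, 1, 32, 32)           # dummy data, not MNIST
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(images), labels)
loss.backward()                               # backpropagation
optimizer.step()
```

The same gradient-based update extends to the multi-module systems the paper describes: as long as every module is differentiable, an overall performance measure can be backpropagated through the whole pipeline, which is the idea behind global training with Graph Transformer Networks.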


Citations
Posted Content

Deep Neural Networks for Anatomical Brain Segmentation

TL;DR: To our knowledge, this technique is the first to tackle the anatomical segmentation of the whole brain using deep neural networks, and it does not require any non-linear registration of the MR images.
Proceedings Article DOI

On deep generative models with applications to recognition

TL;DR: This work uses one of the best pixel-level generative models of natural images, a gated MRF, as the lowest level of a deep belief network with several hidden layers, and shows that the resulting DBN is very good at coping with occlusion when predicting expression categories from face images.
Proceedings Article DOI

From Source to Target and Back: Symmetric Bi-Directional Adaptive GAN

TL;DR: In this paper, a symmetric mapping among domains is proposed to preserve the class identity of an image passing through both domain mappings, and a new class consistency loss is defined to align the generators in the two directions.
Proceedings Article DOI

Deep TextSpotter: An End-to-End Trainable Scene Text Localization and Recognition Framework

TL;DR: The proposed method achieves state-of-the-art accuracy in end-to-end text recognition on two standard datasets, ICDAR 2013 and ICDAR 2015, whilst being an order of magnitude faster than competing methods.
Posted Content

Learning From Noisy Large-Scale Datasets With Minimal Supervision

TL;DR: An approach that effectively uses millions of images with noisy annotations in conjunction with a small subset of cleanly annotated images to learn powerful image representations, and is particularly effective for a large number of classes with a wide range of annotation noise.