Book Chapter

Gradient-Based Learning Applied to Document Recognition

TLDR
Various methods applied to handwritten character recognition are reviewed and compared, and Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
Abstract
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
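
As a rough illustration of the gradient-based learning setup the abstract describes, here is a minimal PyTorch sketch of a LeNet-style convolutional network trained with backpropagation on digit images. The layer sizes, activations, and optimizer settings are illustrative assumptions, not the paper's exact LeNet-5 architecture or training procedure.

```python
import torch
import torch.nn as nn

class LeNetStyle(nn.Module):
    """Minimal LeNet-5-style CNN for 32x32 grayscale digit images.
    Layer sizes are illustrative assumptions, not the paper's exact spec."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14 (subsampling)
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One gradient-based training step (backpropagation via autograd).
model = LeNetStyle()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 32, 32)       # dummy batch of digit images
labels = torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()                           # backpropagate the loss gradient
optimizer.step()                          # gradient-based parameter update
```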


Citations
Proceedings Article

Neural Network Classifiers Using Stochastic Computing with a Hardware-Oriented Approximate Activation Function

TL;DR: The experimental results indicate that the proposed architecture achieves more than 25%, 60%, and 3x reductions in area, power, and energy, respectively, compared with previous stochastic neural networks, and more than 30x, 30x, and 52% reductions compared with conventional binary neural networks, while maintaining error rates similar to those of conventional neural networks.
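
For context on the stochastic-computing idea this paper builds on (not its specific hardware design or approximate activation function), a small Python sketch of the basic unipolar stochastic-computing primitive: a value in [0, 1] is encoded as a random bitstream whose fraction of 1s equals that value, and a bitwise AND of two independent streams approximates their product.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(p: float, length: int = 4096) -> np.ndarray:
    """Encode a probability p as a random bitstream with P(bit = 1) = p."""
    return (rng.random(length) < p).astype(np.uint8)

a, b = 0.6, 0.75
product_stream = encode(a) & encode(b)   # AND gate acts as a multiplier
print(product_stream.mean())             # approximately a * b = 0.45
```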
Journal Article

Classification of chaotic time series with deep learning

TL;DR: In this article, the authors use standard deep neural networks to classify univariate time series generated by discrete and continuous dynamical systems based on their chaotic or non-chaotic behaviour.
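
As an assumption-level sketch of how such a labeled dataset can be constructed (the paper's actual systems and pipeline may differ), the logistic map yields univariate series that are periodic for some parameter values and chaotic for others; the resulting arrays can be fed to any standard deep classifier.

```python
import numpy as np

def logistic_series(r: float, n: int = 200, x0: float = 0.5) -> np.ndarray:
    """Generate a univariate series from the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = r * x[t - 1] * (1.0 - x[t - 1])
    return x

# Label series by regime: r = 3.5 is periodic (non-chaotic), r = 3.9 is chaotic.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(500):
    r, label = (3.5, 0) if rng.random() < 0.5 else (3.9, 1)
    X.append(logistic_series(r, x0=rng.uniform(0.1, 0.9)))
    y.append(label)
X, y = np.array(X), np.array(y)   # inputs and chaotic/non-chaotic labels
```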
Journal Article

Towards fully automated third molar development staging in panoramic radiographs

TL;DR: A fully automated staging process is presented that exploits deep learning, using convolutional neural networks (CNNs) in every step of the procedure, and shows promising results compared with manual staging.
Journal Article

Classification of Computed Tomography Images in Different Slice Positions Using Deep Learning.

TL;DR: This study enrolled 1539 patients who underwent contrast-enhanced or non-contrast CT imaging, divided the CT dataset into 10 classes (brain, neck, chest, abdomen, and pelvis, each with contrast-enhanced and plain imaging) to build classification models, and compared two convolutional neural network (CNN) architectures, AlexNet and GoogLeNet.
Posted Content

Blurring the Line Between Structure and Learning to Optimize and Adapt Receptive Fields.

TL;DR: This semi-structured composition is strictly more expressive than free-form filtering, and changes in its structured parameters would require changes in a free-form architecture; in effect, it optimizes over receptive field size and shape, tuning locality to the data and task.
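
A hedged sketch of the general idea of composing a structured, parameterized operation with a free-form convolution (an illustration of the concept only, not the paper's implementation): here the structured parameter is a learnable Gaussian sigma that effectively tunes receptive field size, while a free-form 3x3 filter follows.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComposedConv(nn.Module):
    """Compose a structured Gaussian blur (learnable sigma) with a free-form
    3x3 convolution. Illustrative sketch only, not the paper's implementation."""
    def __init__(self, channels: int, ksize: int = 9):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.zeros(1))             # structured parameter
        self.free = nn.Conv2d(channels, channels, 3, padding=1)   # free-form filter
        self.channels = channels
        self.ksize = ksize

    def forward(self, x):
        sigma = self.log_sigma.exp()
        r = torch.arange(self.ksize, dtype=x.dtype, device=x.device) - self.ksize // 2
        g = torch.exp(-0.5 * (r / sigma) ** 2)
        g = g / g.sum()
        # Depthwise Gaussian blur whose width depends on the learnable sigma.
        kernel = (g[:, None] * g[None, :]).expand(self.channels, 1, -1, -1)
        blurred = F.conv2d(x, kernel, padding=self.ksize // 2, groups=self.channels)
        return self.free(blurred)

y = ComposedConv(channels=3)(torch.randn(1, 3, 32, 32))  # output shape: (1, 3, 32, 32)
```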