Book Chapter · DOI

Gradient-Based Learning Applied to Document Recognition

TLDR
Various methods applied to handwritten character recognition are reviewed and compared, and Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
Abstract
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
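The architecture family the abstract refers to is LeNet. As a point of reference, here is a minimal sketch of a LeNet-5-style network, assuming PyTorch; the C1–F6 layer sizes follow the paper's outline, but the activation and pooling choices here are modern simplifications rather than the originals.

```python
import torch
import torch.nn as nn

class LeNetStyle(nn.Module):
    """Minimal LeNet-5-style convolutional network for 32x32 digit images."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # C1: 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # S2: 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # C3: 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # S4: 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),       # C5
            nn.Tanh(),
            nn.Linear(120, 84),               # F6
            nn.Tanh(),
            nn.Linear(84, num_classes),       # output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = LeNetStyle()
    logits = model(torch.randn(1, 1, 32, 32))  # one fake grayscale digit
    print(logits.shape)  # torch.Size([1, 10])
```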


Citations
Proceedings Article · DOI

Full-stack optimization for accelerating CNNs using powers-of-two weights with FPGA validation

TL;DR: A highlight of this full-stack optimization framework is an efficient Selector-Accumulator (SAC) architecture for implementing CNNs with powers-of-two weights, which achieves 9x higher energy efficiency than other implementations while maintaining comparable latency.
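Powers-of-two weights let hardware replace each multiply with a bit-shift, which is what a shift-and-accumulate (SAC-style) datapath exploits. Below is a minimal NumPy sketch of powers-of-two weight quantization; the function name and exponent range are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def quantize_powers_of_two(w: np.ndarray, min_exp: int = -7, max_exp: int = 0) -> np.ndarray:
    """Round each weight to the nearest signed power of two (or zero).

    Hardware can then replace each multiply by a bit-shift of the
    activation, the basis of shift-and-accumulate arithmetic.
    """
    sign = np.sign(w)
    mag = np.abs(w)
    # Nearest exponent in log2 space, clipped to the representable range.
    exp = np.clip(np.round(np.log2(np.maximum(mag, 1e-12))), min_exp, max_exp)
    q = sign * np.exp2(exp)
    q[mag < 2.0 ** (min_exp - 1)] = 0.0  # underflow -> prune to zero
    return q

w = np.array([0.30, -0.06, 0.9, 0.001])
print(quantize_powers_of_two(w))  # [ 0.25  -0.0625  1.  0. ]
```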
Proceedings Article · DOI

clCaffe: OpenCL Accelerated Caffe for Convolutional Neural Networks

TL;DR: This work presents OpenCL acceleration of Caffe, a well-known deep learning framework, focusing on the convolution layer, which is optimized with three different approaches: GEMM, spatial domain, and frequency domain. This greatly enhances the ability to leverage deep learning use cases on all types of OpenCL devices.
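Of the three approaches named, the GEMM formulation is the most widely used: the input is unrolled with im2col so that convolution becomes a single matrix multiply. A minimal NumPy sketch follows (stride 1, no padding, written for clarity rather than mirroring the paper's OpenCL kernels):

```python
import numpy as np

def conv2d_gemm(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Convolution as GEMM: im2col the input, then one matrix multiply.

    x: (C, H, W) input feature map; w: (K, C, R, S) filters.
    Returns (K, H-R+1, W-S+1). Stride 1, no padding, for brevity.
    """
    C, H, W = x.shape
    K, _, R, S = w.shape
    out_h, out_w = H - R + 1, W - S + 1
    # im2col: each output position becomes one column of patch values.
    cols = np.empty((C * R * S, out_h * out_w), dtype=x.dtype)
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[:, i:i + R, j:j + S].ravel()
            idx += 1
    # One GEMM: (K, C*R*S) @ (C*R*S, out_h*out_w)
    y = w.reshape(K, -1) @ cols
    return y.reshape(K, out_h, out_w)

x = np.random.rand(3, 8, 8).astype(np.float32)
w = np.random.rand(4, 3, 3, 3).astype(np.float32)
print(conv2d_gemm(x, w).shape)  # (4, 6, 6)
```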
Proceedings Article · DOI

Supervised Image Classification Using Deep Convolutional Wavelets Network

TL;DR: A new approach to supervised image classification is proposed that combines two learning techniques, the wavelet network and deep learning, and proves remarkably efficient for image classification compared to a known classifier.
Proceedings Article · DOI

DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression

TL;DR: DeepSZ, as discussed by the authors, is an accuracy-loss-bounded neural network compression framework involving four key steps: network pruning, error bound assessment, optimization of the error bound configuration, and compressed model generation.
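DeepSZ builds on the SZ error-bounded lossy compressor. As an illustrative stand-in only (not SZ, and not DeepSZ's actual algorithm), the sketch below prunes small weights and compresses the survivors with a uniform quantizer whose reconstruction error is bounded by a user-set absolute error:

```python
import numpy as np

def prune_and_quantize(w: np.ndarray, prune_thresh: float, err_bound: float):
    """Illustrative stand-in for DeepSZ's pipeline: prune, then compress
    surviving weights with an absolute-error-bounded uniform quantizer.
    (DeepSZ itself uses the SZ compressor; this is not its algorithm.)
    """
    mask = np.abs(w) >= prune_thresh                 # network pruning
    kept = w[mask]
    step = 2.0 * err_bound                           # quantization bin width
    codes = np.round(kept / step).astype(np.int32)   # compressed representation
    recon = np.zeros_like(w)
    recon[mask] = codes * step                       # decompression
    # Per-weight error is at most step/2 == err_bound by construction.
    assert np.all(np.abs(recon[mask] - kept) <= err_bound + 1e-12)
    return mask, codes, recon

w = np.random.randn(1000) * 0.1
mask, codes, recon = prune_and_quantize(w, prune_thresh=0.02, err_bound=0.005)
print(mask.mean(), np.abs(recon[mask] - w[mask]).max())
```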
Proceedings Article

Learning Autoencoders with Relational Regularization

TL;DR: The relational regularized autoencoder (RAE) outperforms existing methods and facilitates co-training of multiple autoencoders even when they have heterogeneous architectures and incomparable latent spaces.