Book Chapter DOI
Gradient-Based Learning Applied to Document Recognition
Simon Haykin, Bart Kosko +1 more
pp. 306–351
TL;DR: Various methods applied to handwritten character recognition are reviewed and compared, and Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.
Abstract:
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules, including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
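The convolution-plus-subsampling building block that gives Convolutional Neural Networks their tolerance to 2D shape variability can be illustrated with a minimal NumPy sketch. The 28×28 input, 5×5 kernel, tanh nonlinearity, and 2×2 average pooling follow the LeNet-style setup described in the paper; the random image and kernel values are placeholders, not trained weights.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def subsample(x, s=2):
    """Average-pool by factor s (the 'subsampling' layer of LeNet-style nets)."""
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).mean(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.standard_normal((28, 28))   # MNIST-sized input (placeholder values)
kernel = rng.standard_normal((5, 5))  # one 5x5 filter (would be learned)
fmap = np.tanh(conv2d(img, kernel))   # 24x24 feature map
pooled = subsample(fmap)              # 12x12 after 2x2 average pooling
print(pooled.shape)                   # (12, 12)
```

Stacking several such stages, followed by fully connected layers, yields the architecture whose weights the paper trains end to end by backpropagation.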
Citations
Proceedings Article DOI
Transfer learning using convolutional neural networks for object classification within X-ray baggage security imagery
TL;DR: A transfer learning paradigm is employed such that a pre-trained CNN, primarily trained for generalized image classification tasks where sufficient training data exists, can be subsequently optimized as a secondary process that targets this specific application domain.
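The transfer-learning recipe summarized above (reuse a frozen pre-trained feature extractor, retrain only a small task-specific head) can be sketched in NumPy. A fixed random ReLU projection stands in for the pre-trained CNN's convolutional layers, and the toy labels are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen, pre-trained feature extractor (hypothetical): a fixed
# random ReLU projection plays the role of the pre-trained CNN's conv layers.
W_frozen = rng.standard_normal((64, 256)) * 0.1

def extract_features(x):                     # x: (n, 256) flattened inputs
    return np.maximum(0.0, x @ W_frozen.T)   # (n, 64) frozen features

def log_loss(p, y):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

X = rng.standard_normal((200, 256))
y = (X[:, 0] > 0).astype(float)              # toy binary labels (placeholder task)
F = extract_features(X)                      # frozen: no gradient reaches W_frozen

# Only the small task-specific head is trained -- the transfer-learning step.
w, b, lr = np.zeros(64), 0.0, 0.1
initial_loss = log_loss(1 / (1 + np.exp(-(F @ w + b))), y)
for _ in range(300):                         # logistic-regression head
    p = 1 / (1 + np.exp(-(F @ w + b)))
    g = p - y
    w -= lr * F.T @ g / len(y)
    b -= lr * g.mean()
final_loss = log_loss(1 / (1 + np.exp(-(F @ w + b))), y)
print(initial_loss, final_loss)              # head training lowers the loss
```

In the paper's actual setting the frozen part would be a CNN trained on a large generic dataset, with only the final layers fine-tuned on the X-ray baggage imagery.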
Proceedings Article DOI
Latent Space Sparse Subspace Clustering
TL;DR: A method that learns the projection of data and finds the sparse coefficients in the low-dimensional latent space and applies spectral clustering to a similarity matrix built from these sparse coefficients.
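The pipeline sketched above (write each point as a combination of the others, build a similarity matrix from the coefficients, then cluster spectrally) can be illustrated on toy data. This sketch substitutes ridge-regularized least squares for the paper's true sparse (L1) solver and omits the learned projection, so it is a simplified stand-in, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: two 1-D subspaces (lines) embedded in 3-D, 5 points each.
line1 = np.outer(rng.uniform(1, 2, 5), np.array([1.0, 0.0, 0.0]))
line2 = np.outer(rng.uniform(1, 2, 5), np.array([0.0, 1.0, 0.0]))
X = np.vstack([line1, line2]).T               # columns are data points, (3, 10)
X += 0.01 * rng.standard_normal(X.shape)      # small noise

# Self-expressive coefficients: each point written as a combination of the
# others (ridge least squares here instead of a true L1 sparse solver).
n = X.shape[1]
C = np.zeros((n, n))
for i in range(n):
    others = np.delete(np.arange(n), i)
    A = X[:, others]
    c = np.linalg.solve(A.T @ A + 1e-3 * np.eye(n - 1), A.T @ X[:, i])
    C[others, i] = c
W = np.abs(C) + np.abs(C).T                   # symmetric similarity matrix

# Spectral clustering: sign of the Fiedler vector of the graph Laplacian.
L = np.diag(W.sum(1)) - W
vals, vecs = np.linalg.eigh(L)                # eigenvalues in ascending order
labels = (vecs[:, 1] > 0).astype(int)
print(labels)                                 # points split by subspace
```

Because points mainly "explain" other points from their own subspace, the similarity matrix is nearly block diagonal, which is what makes the spectral step recover the subspaces.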
Journal Article DOI
Hierarchical committee of deep convolutional neural networks for robust facial expression recognition
TL;DR: This paper builds a hierarchical committee of deep CNNs with exponentially-weighted decision fusion for robust facial expression recognition in the third Emotion Recognition in the Wild (EmotiW2015) challenge.
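Exponentially-weighted decision fusion can be sketched as follows: each committee member's class posteriors are averaged with a weight that grows exponentially with its validation performance. The accuracies, the sharpness parameter `beta`, and the probabilities below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical validation accuracies of three committee members.
val_acc = np.array([0.60, 0.70, 0.80])

# Exponentially-weighted fusion: weights grow exponentially with accuracy.
beta = 10.0                      # assumed sharpness hyper-parameter
w = np.exp(beta * val_acc)
w /= w.sum()                     # normalize to a convex combination

# Per-member class-probability predictions for one test sample (3 classes).
probs = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.2, 0.7],
])
fused = w @ probs                # weighted average of the posteriors
print(fused.argmax())            # class chosen by the committee -> 2
```

The exponential weighting makes the committee lean heavily toward its strongest members while still letting weaker ones break ties.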
Proceedings Article DOI
Performance and Scalability of GPU-Based Convolutional Neural Networks
TL;DR: This paper presents the implementation of a framework for accelerating training and classification of arbitrary Convolutional Neural Networks (CNNs) on the GPU and describes the basic parts of a CNN and demonstrates the performance and scalability improvement that can be achieved by shifting the computation-intensive tasks of a CNN to the GPU.
Proceedings Article
Composite Quantization for Approximate Nearest Neighbor Search
Ting Zhang, Chao Du, Jingdong Wang +2 more
TL;DR: This paper presents a novel compact coding approach, composite quantization, for approximate nearest neighbor search: a vector is accurately approximated by composing several elements, one selected from each dictionary, and is represented by a short code composed of the indices of the selected elements.
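The encoding idea above can be sketched with two tiny dictionaries: a vector is approximated by the sum of one element from each, and only the pair of indices is stored. The dictionary sizes and exhaustive search are simplifications for illustration (real composite quantization learns the dictionaries and uses larger codebooks).

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical dictionaries of 4 elements each, in 2-D (placeholders for
# the learned dictionaries of composite quantization).
D1 = rng.standard_normal((4, 2))
D2 = rng.standard_normal((4, 2))

def encode(x):
    """Pick one element per dictionary whose sum best approximates x."""
    best, code = None, None
    for i in range(4):
        for j in range(4):
            err = np.linalg.norm(x - (D1[i] + D2[j]))
            if best is None or err < best:
                best, code = err, (i, j)
    return code, best

def decode(code):
    i, j = code
    return D1[i] + D2[j]          # the short code stores only (i, j)

x = rng.standard_normal(2)
code, err = encode(x)             # code: 2 small indices instead of 2 floats
print(code, err)
```

With M dictionaries of K elements each, a vector is stored in M·log2(K) bits, and distances to a query can be computed from the codes alone.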