Proceedings Article
Combining multiple representations and classifiers for pen-based handwritten digit recognition
F. Alimoglu, Ethem Alpaydin, et al.
Vol. 2, pp. 637–640
Abstract:
We investigate techniques to combine multiple representations of a handwritten digit to increase classification accuracy without significantly increasing system complexity or recognition time. We compare multi-expert and multi-stage combination techniques and discuss in detail, in a comparative manner, methods for combining multiple learners: voting, mixture of experts, stacking, boosting, and cascading. In pen-based handwritten character recognition, the input is the dynamic movement of the pen tip over the pressure-sensitive tablet; there is also the image formed as a result of this movement. On a real-world database, we notice that the two multilayer perceptron (MLP) neural network classifiers using these representations separately make errors on different patterns, implying that a suitable combination of the two would lead to higher accuracy. Thus we implement and compare voting, mixture of experts, stacking, and cascading. Combined classifiers have a lower error percentage than individual ones. The final combined system of two MLPs has lower complexity and memory requirements than a single k-nearest-neighbor classifier using one of the representations.
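Voting is the simplest of the multi-expert combination schemes named in the abstract. A minimal sketch of plurality voting over per-pattern class labels, not the paper's implementation; the prediction lists below are hypothetical:

```python
from collections import Counter

def plurality_vote(predictions):
    """Combine the class labels predicted by several classifiers for one
    pattern by plurality voting; ties go to the first label encountered."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-pattern predictions from three base classifiers
# (e.g. one per representation) that disagree on some patterns.
preds_a = [3, 8, 5, 1]
preds_b = [3, 0, 5, 7]
preds_c = [3, 8, 6, 7]

combined = [plurality_vote(p) for p in zip(preds_a, preds_b, preds_c)]
# combined == [3, 8, 5, 7]
```

`Counter.most_common` orders equal counts by first occurrence, so ties are broken in favor of the classifier listed first; a real system would break ties by posterior probability instead.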
Citations
Proceedings Article
Power Iteration Clustering
Frank Lin, William W. Cohen, et al.
TL;DR: PIC finds a very low-dimensional embedding of a dataset using truncated power iteration on a normalized pair-wise similarity matrix of the data and turns out to be an effective cluster indicator, consistently outperforming widely used spectral methods such as NCut on real datasets.
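The PIC idea summarized above, truncated power iteration on a row-normalized pairwise similarity matrix whose not-yet-converged vector acts as a cluster indicator, can be illustrated in a few lines. A toy sketch, not the authors' code; the similarity matrix and starting vector are illustrative assumptions:

```python
def pic_indicator(A, steps=5):
    """Truncated power iteration on the row-normalized similarity matrix
    W = D^-1 A: the intermediate, not-yet-converged vector serves as a
    low-dimensional cluster indicator (the Power Iteration Clustering idea)."""
    n = len(A)
    W = [[a / sum(row) for a in row] for row in A]  # row-normalize
    # Start from the normalized degree vector rather than a constant
    # vector (a constant vector is already stationary for W).
    deg = [sum(row) for row in A]
    v = [d / sum(deg) for d in deg]
    for _ in range(steps):
        v = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(abs(x) for x in v)
        v = [x / s for x in v]  # L1-normalize after each step
    return v

# Made-up affinities: two loose clusters {0, 1} and {2, 3} with weak cross-links.
A = [[1.0, 0.8, 0.1, 0.0],
     [0.8, 1.0, 0.1, 0.1],
     [0.1, 0.1, 1.0, 0.9],
     [0.0, 0.1, 0.9, 1.0]]
v = pic_indicator(A)
```

Stopping the iteration early is the point: run to convergence, the vector flattens out; truncated, entries within a cluster lie close together while the gap between clusters decays much more slowly.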
Robust speech recognition using articulatory information
TL;DR: It is argued and demonstrated empirically that the articulatory feature approach can improve the robustness of speech recognition systems in adverse acoustic environments by enhancing the accuracy of the bottom-up acoustic modeling component.
Journal Article
Robust continuous clustering
Sohil Shah, Vladlen Koltun, et al.
TL;DR: This work presents a clustering algorithm that reliably achieves high accuracy across domains, handles high data dimensionality, and scales to large datasets, and optimizes a smooth global objective, using efficient numerical methods.
Journal Article
Multiple classifier decision combination strategies for character recognition: A review
A. F. Rahman, Michael Fairhurst, et al.
TL;DR: This paper reviews the field of multiple classifier decision combination strategies for character recognition, from some of its early roots to the present day, and illustrates how the principles underlying multi-classifier approaches to character recognition generalise to a wide variety of other task domains.
Journal Article
A Unified Framework for Representation-Based Subspace Clustering of Out-of-Sample and Large-Scale Data
TL;DR: A unified framework that makes the representation-based subspace clustering algorithms feasible to cluster both the out-of-sample and the large-scale data, and gives an estimation for the error bounds by treating each subspace as a point in a hyperspace.
References
Journal Article
Original Contribution: Stacked generalization
TL;DR: The conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate.
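Stacked generalization trains a level-1 learner on the outputs of the level-0 classifiers rather than on the raw inputs. A toy sketch of that two-level structure, assuming a trivial lookup-table meta-learner; all data and names here are illustrative:

```python
from collections import Counter, defaultdict

def fit_stacker(level0_outputs, true_labels):
    """Toy level-1 generalizer: for each combination of level-0 outputs
    observed on held-out data, memorize the most common true label."""
    table = defaultdict(Counter)
    for outputs, y in zip(level0_outputs, true_labels):
        table[tuple(outputs)][y] += 1
    return {k: c.most_common(1)[0][0] for k, c in table.items()}

def predict_stacked(stacker, outputs):
    # Fall back to the first base classifier when this combination of
    # level-0 outputs was never seen during level-1 training.
    return stacker.get(tuple(outputs), outputs[0])

# Held-out (level-0 predictions from two base classifiers, true label) pairs.
holdout = [([7, 1], 7), ([1, 1], 1), ([7, 1], 7), ([1, 7], 7)]
stacker = fit_stacker([o for o, _ in holdout], [y for _, y in holdout])
```

A real stacker would use a trainable model (e.g. a linear model or MLP) over the base classifiers' posterior probabilities, but the division of labor is the same: the level-1 learner corrects systematic disagreements among the level-0 learners.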
Journal Article
Adaptive mixtures of local experts
TL;DR: A new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases; the experts and a gating network are trained together so that each expert specialises on the cases the gate allocates to it.
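In the adaptive-mixtures scheme, a gating network produces softmax weights and the combined output is the gate-weighted sum of the expert outputs. A minimal numeric sketch of that combination rule; the gate scores here are assumed inputs rather than a trained network:

```python
import math

def mixture_output(expert_outputs, gate_scores):
    """Combine scalar expert outputs y_i with softmax gating weights:
    y = sum_i g_i * y_i, where g = softmax(gate_scores)."""
    m = max(gate_scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in gate_scores]
    z = sum(exps)
    gates = [e / z for e in exps]
    return sum(g * y for g, y in zip(gates, expert_outputs))

# Equal gate scores reduce the mixture to a plain average of the experts.
y = mixture_output([0.0, 1.0], [0.0, 0.0])  # 0.5
```

During training, the gate scores would themselves be the outputs of a network of x, so different experts win in different regions of the input space.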
Journal Article
Neural network ensembles
Lars Kai Hansen, Peter Salamon, et al.
TL;DR: It is shown that the residual generalization error of a trained network can be reduced by invoking ensembles of similar networks, improving the performance of neural networks for classification.
Journal Article
Methods of combining multiple classifiers and their applications to handwriting recognition
TL;DR: On applying these methods to combine several classifiers for recognizing totally unconstrained handwritten numerals, the experimental results show that the performance of individual classifiers can be improved significantly.
Journal Article
The state of the art in online handwriting recognition
TL;DR: The state of the art of online handwriting recognition during a period of renewed activity in the field is described, based on an extensive review of the literature, including journal articles, conference proceedings, and patents.