Book ChapterDOI

A fast parallel optimization for training support vector machine

TLDR
A fast SVM training algorithm for multi-class problems, consisting of parallel and sequential optimization steps, is presented; it is shown that, without sacrificing generalization performance, the proposed algorithm achieves a speed-up factor of 110 compared with Keerthi et al.'s modified SMO.
Abstract
A fast SVM training algorithm for multi-class problems, consisting of parallel and sequential optimization steps, is presented. The main advantage of the parallel optimization step is that it quickly removes most non-support vectors, which dramatically reduces the training time of the subsequent sequential optimization. In addition, strategies such as kernel caching, shrinking, and calling BLAS functions are effectively integrated into the algorithm to speed up training. Experiments on the MNIST handwritten digit database show that, without sacrificing generalization performance, the proposed algorithm achieves a speed-up factor of 110 compared with Keerthi et al.'s modified SMO. Moreover, for the first time, we investigate the training performance of SVMs on the handwritten Chinese character database ETL9B, with more than 3,000 categories and about 500,000 training samples. The total training time is just 5.1 hours, and a raw error rate of 1.1% on ETL9B is achieved.
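The two-stage idea lends itself to a short illustration: partition the training set, train sub-SVMs on the partitions in parallel, keep only each partition's support vectors, and then run the sequential optimizer on the much smaller surviving set. Below is a minimal sketch assuming scikit-learn and joblib; the function names and hyper-parameters are illustrative, not the authors' implementation.

import numpy as np
from joblib import Parallel, delayed
from sklearn.svm import SVC

def prune_partition(X_part, y_part, C=10.0, gamma=0.02):
    # Train a sub-SVM on one partition and keep only its support
    # vectors; most non-support vectors are discarded at this stage.
    svm = SVC(C=C, kernel="rbf", gamma=gamma).fit(X_part, y_part)
    return X_part[svm.support_], y_part[svm.support_]

def two_stage_train(X, y, n_parts=8):
    # Parallel stage: prune each partition independently.
    parts = np.array_split(np.random.permutation(len(X)), n_parts)
    kept = Parallel(n_jobs=-1)(
        delayed(prune_partition)(X[idx], y[idx]) for idx in parts)
    X_sv = np.concatenate([xs for xs, _ in kept])
    y_sv = np.concatenate([ys for _, ys in kept])
    # Sequential stage: solve the reduced problem on the survivors.
    return SVC(C=10.0, kernel="rbf", gamma=0.02).fit(X_sv, y_sv)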


Citations
Proceedings Article

Parallel Support Vector Machines: The Cascade SVM

TL;DR: An algorithm for support vector machines (SVMs) that can be parallelized efficiently and scales to very large problems with hundreds of thousands of training vectors; the work can be spread over multiple processors with minimal communication overhead and far lower memory requirements.
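The cascade architecture can be sketched compactly: leaf SVMs are trained on disjoint chunks, and the surviving support vectors are merged pairwise, layer by layer, until one optimization remains. The sketch below, assuming scikit-learn, shows a single cascade pass; the full method feeds the final support vectors back through the layers until the global solution stabilizes.

import numpy as np
from sklearn.svm import SVC

def keep_support_vectors(X, y):
    # Train an SVM on one chunk and return only its support vectors.
    svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
    return X[svm.support_], y[svm.support_]

def cascade_pass(X, y, n_leaves=8):
    # Layer 0: independent SVMs on disjoint chunks of the data.
    chunks = [keep_support_vectors(Xc, yc)
              for Xc, yc in zip(np.array_split(X, n_leaves),
                                np.array_split(y, n_leaves))]
    # Later layers: merge surviving support vectors pairwise.
    while len(chunks) > 1:
        merged = []
        for i in range(0, len(chunks), 2):
            Xm = np.concatenate([c[0] for c in chunks[i:i + 2]])
            ym = np.concatenate([c[1] for c in chunks[i:i + 2]])
            merged.append(keep_support_vectors(Xm, ym))
        chunks = merged
    # Final optimization on the surviving support vectors.
    return SVC(kernel="rbf", C=1.0, gamma="scale").fit(*chunks[0])
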
Journal ArticleDOI

Parallel sequential minimal optimization for the training of support vector machines

TL;DR: A parallel SMO is developed using the Message Passing Interface (MPI); it shows a large speedup on the Adult data set and the Modified National Institute of Standards and Technology (MNIST) data set when many processors are used.
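In parallel SMO the expensive step is refreshing the error cache (the gradient of the dual) over all training points after each two-multiplier update, and this is what gets distributed. Below is a simplified sketch of that distributed update, assuming mpi4py and an RBF kernel; it illustrates the idea, not the paper's code.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def rbf(X, x, gamma=0.05):
    # Kernel values between a local block of points X and one point x.
    return np.exp(-gamma * ((X - x) ** 2).sum(axis=1))

def update_errors(F_local, X_local, x1, x2, da1, da2, y1, y2):
    # Each rank refreshes only its slice of the error cache after the
    # two multipliers have moved by da1 and da2.
    F_local += y1 * da1 * rbf(X_local, x1) + y2 * da2 * rbf(X_local, x2)
    return F_local

def most_violating(F_local, offset):
    # Gather each rank's local best (value, global index), take the max.
    i_loc = int(np.argmax(F_local))
    return max(comm.allgather((float(F_local[i_loc]), offset + i_loc)))
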
Journal ArticleDOI

Incorporating prior knowledge in support vector machines for classification: A review

TL;DR: A review of the current state of research on incorporating two general types of prior knowledge into SVMs for classification, with a discussion that regroups sample-based and optimization-based methods under a regularization framework.
Journal Article

Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems

TL;DR: Parallel software for solving the quadratic program arising in training support vector machines for classification problems implements an iterative decomposition technique and exploits both the storage and the computing resources available on multiprocessor systems.
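A generic working-set decomposition loop has the following shape; this single-threaded sketch uses a projected-gradient step as a stand-in for the exact inner QP solver, and its selection and stopping rules are deliberately simplified. Q is assumed to be the precomputed matrix with Q[i, j] = y_i * y_j * K(x_i, x_j).

import numpy as np

def decompose(Q, C=1.0, q=50, tol=1e-3, max_iter=200):
    # Dual objective: 0.5 * a' Q a - sum(a), subject to 0 <= a <= C.
    n = Q.shape[0]
    alpha = np.zeros(n)
    grad = -np.ones(n)                    # gradient Q @ alpha - 1 at alpha = 0
    for _ in range(max_iter):
        ws = np.argsort(np.abs(grad))[-q:]  # working set: largest gradients
        if np.abs(grad[ws]).max() < tol:
            break
        # Inner solver stand-in: one projected-gradient step on the box.
        step = np.clip(alpha[ws] - grad[ws] / np.diag(Q)[ws], 0.0, C) - alpha[ws]
        alpha[ws] += step
        grad += Q[:, ws] @ step           # dense rank-q update (BLAS-friendly)
    return alpha
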
Journal ArticleDOI

An improved handwritten Chinese character recognition system using support vector machine

TL;DR: The enhanced nonlinear normalization method not only solves the aliasing problem in Yamada et al.'s original nonlinear normalization method but also avoids undue stroke distortion in the peripheral region of the normalized image.
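Methods in this family equalize stroke density: project ink density onto each axis, turn the cumulative densities into monotone coordinate maps, and resample the image through them. The sketch below is a generic forward-mapping illustration of density-based nonlinear normalization (it can leave holes in the output), not the enhanced method of this paper.

import numpy as np

def nonlinear_normalize(img, out=64):
    # img: 2-D binary array with ink pixels equal to 1.
    eps = 1e-6
    dx = img.sum(axis=0) + eps        # per-column ink density
    dy = img.sum(axis=1) + eps        # per-row ink density
    fx = np.cumsum(dx) / dx.sum()     # monotone coordinate maps in (0, 1]
    fy = np.cumsum(dy) / dy.sum()
    norm = np.zeros((out, out), dtype=img.dtype)
    ys, xs = np.nonzero(img)
    u = np.minimum((fy[ys] * out).astype(int), out - 1)
    v = np.minimum((fx[xs] * out).astype(int), out - 1)
    norm[u, v] = 1                    # forward-map each ink pixel
    return norm
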
References
Book

Pattern classification and scene analysis

TL;DR: In this article, a unified, comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Book

Nonlinear Programming

Book ChapterDOI

Text Categorization with Support Vector Machines: Learning with Many Relevant Features

TL;DR: This paper explores the use of Support Vector Machines for learning text classifiers from examples, analyzes the particular properties of learning with text data, and identifies why SVMs are appropriate for this task.
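A minimal modern equivalent of this setup is a sparse TF-IDF representation feeding a linear SVM. The example below assumes scikit-learn; the dataset and parameters are illustrative, not those of the paper.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

# High-dimensional sparse features + linear SVM: the regime in which
# SVMs cope well with many relevant features.
clf = make_pipeline(TfidfVectorizer(sublinear_tf=True), LinearSVC(C=1.0))
clf.fit(train.data, train.target)
print("test accuracy:", clf.score(test.data, test.target))
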

Book ChapterDOI

Fast training of support vector machines using sequential minimal optimization (in Advances in Kernel Methods)

J. C. Platt
TL;DR: SMO breaks the large quadratic programming (QP) problem that arises in SVM training into a series of the smallest possible QP sub-problems, which are solved analytically; this avoids a time-consuming numerical QP optimization as an inner loop, and hence SMO is fastest for linear SVMs and sparse data sets.
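The analytic two-multiplier update at the heart of SMO can be written down directly. This simplified sketch assumes a linear kernel and labels in {-1, +1}, and omits Platt's working-set heuristics and the bias update.

import numpy as np

def smo_step(X, y, alpha, b, i, j, C=1.0):
    # One SMO step: jointly optimize alpha[i], alpha[j] analytically.
    Kii, Kjj, Kij = X[i] @ X[i], X[j] @ X[j], X[i] @ X[j]
    f = lambda k: (alpha * y) @ (X @ X[k]) + b   # decision value
    Ei, Ej = f(i) - y[i], f(j) - y[j]            # prediction errors
    eta = Kii + Kjj - 2.0 * Kij                  # curvature along the line
    if eta <= 0:
        return alpha, b
    # Box bounds that keep sum(alpha * y) constant.
    if y[i] == y[j]:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    else:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    a_j = float(np.clip(alpha[j] + y[j] * (Ei - Ej) / eta, L, H))
    a_i = alpha[i] + y[i] * y[j] * (alpha[j] - a_j)
    alpha[i], alpha[j] = a_i, a_j
    return alpha, b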