Open Access Journal Article

Support-Vector Networks

Corinna Cortes, Vladimir Vapnik
Machine Learning, 15 Sep 1995, Vol. 20, Iss. 3, pp. 273-297
TLDR
High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Abstract
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. Here we extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
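As a concrete illustration of the two ideas in the abstract (an implicit polynomial feature map and a soft margin that tolerates non-separable data), here is a minimal sketch using scikit-learn's SVC. The library, synthetic data, and hyperparameters are illustrative assumptions, not the authors' original setup.

    # Minimal soft-margin SVM sketch with a polynomial kernel (scikit-learn).
    # Assumption: synthetic data stands in for the OCR benchmark; C and degree
    # are illustrative, not values from the paper.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=16, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # kernel="poly" realizes the polynomial input transformation implicitly;
    # C controls the soft margin, i.e. the penalty for margin violations.
    clf = SVC(kernel="poly", degree=3, C=1.0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))
    print("support vectors:", clf.support_vectors_.shape[0])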



Citations
Journal Article

A Comparative Study on Machine Learning Algorithms for Smart Manufacturing: Tool Wear Prediction Using Random Forests

TL;DR: Experimental results show that random forests (RFs) generate more accurate tool-wear predictions than feed-forward back-propagation artificial neural networks (FFBP ANNs) with a single hidden layer, and more accurate predictions than support vector regression (SVR).
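A hedged sketch of this kind of model comparison, with generic synthetic data standing in for the study's tool-wear measurements; the models' settings are assumptions, not the paper's configuration:

    # Compare random forest regression against SVR on a generic regression task.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=400, n_features=10, noise=5.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, model in [("RF", RandomForestRegressor(n_estimators=100, random_state=0)),
                        ("SVR", SVR(kernel="rbf", C=10.0))]:
        model.fit(X_tr, y_tr)
        print(name, "MSE:", mean_squared_error(y_te, model.predict(X_te)))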
Posted Content

End-to-End Incremental Learning

TL;DR: This work proposes an approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes, based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes.
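A minimal sketch of such a combined loss in PyTorch, assuming standard knowledge distillation with temperature-softened probabilities; the temperature, weighting, and class layout are assumptions rather than the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def incremental_loss(new_logits, old_logits, targets, n_old, T=2.0, alpha=0.5):
        # Cross-entropy over all (old + new) classes for the current labels.
        ce = F.cross_entropy(new_logits, targets)
        # Distillation: keep softened outputs on the old classes close to
        # those of the previous model (assumed temperature T and weight alpha).
        p_old = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
        q_old = F.softmax(old_logits[:, :n_old] / T, dim=1)
        distill = F.kl_div(p_old, q_old, reduction="batchmean") * (T * T)
        return alpha * distill + (1 - alpha) * ce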
Journal Article

Using tri-axial acceleration data to identify behavioral modes of free-ranging animals: general concepts and tools illustrated for griffon vultures

TL;DR: This work focuses on the use of tri-axial acceleration (ACC) data to identify behavioral modes of GPS-tracked, free-ranging wild animals. It illustrates how ACC-identified behavioral modes make it possible to examine how vulture flight is affected by environmental factors, facilitating the integration of behavioral, biomechanical, and ecological data.
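A rough sketch of the general workflow, reducing each ACC burst to summary features (including overall dynamic body acceleration, ODBA) for a supervised classifier; the feature set and classifier are illustrative, not the paper's exact pipeline:

    import numpy as np
    from sklearn.svm import SVC

    def acc_features(burst):
        """burst: array of shape (n_samples, 3) for the x, y, z axes."""
        static = burst.mean(axis=0)              # gravity component per axis
        dynamic = np.abs(burst - static)         # movement component
        odba = dynamic.sum(axis=1).mean()        # overall dynamic body acceleration
        return np.concatenate([static, burst.std(axis=0), [odba]])

    # With labeled bursts (e.g., "flapping", "soaring", "feeding") one would train:
    # X = np.array([acc_features(b) for b in bursts]); clf = SVC().fit(X, labels)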
Journal Article

Sampling methods for the Nyström method

TL;DR: This work reports results of extensive experiments that provide a detailed comparison of various fixed and adaptive sampling techniques, and demonstrates the performance improvement associated with the ensemble Nyström method when used in conjunction with either fixed or adaptive sampling schemes.
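For orientation, a minimal NumPy sketch of the Nyström approximation with uniform (fixed) sampling, using the standard K ≈ C W⁺ Cᵀ formulation; the example data are assumptions:

    import numpy as np

    def nystrom(K, m, rng=np.random.default_rng(0)):
        n = K.shape[0]
        idx = rng.choice(n, size=m, replace=False)   # uniform column sampling
        C = K[:, idx]                                # n x m sampled columns
        W = K[np.ix_(idx, idx)]                      # m x m intersection block
        return C @ np.linalg.pinv(W) @ C.T           # rank-m approximation of K

    # Example: RBF kernel matrix on random points.
    X = np.random.default_rng(1).normal(size=(200, 5))
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-sq)
    K_hat = nystrom(K, m=40)
    print("relative error:", np.linalg.norm(K - K_hat) / np.linalg.norm(K))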
Book Chapter

Component-based face recognition with 3D morphable models

TL;DR: A 3D morphable model is used to compute 3D face models from three input images of each subject in the training database, and the system achieves a recognition rate significantly better than that of a comparable global face recognition system.
References
Journal Article

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, internal hidden units come to represent important features of the task domain.
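In standard notation (not reproduced from the paper), the rule minimizes a squared-error measure E by gradient descent on each weight w_ij with learning rate η:

    \[
    E = \tfrac{1}{2}\sum_{j} (y_j - d_j)^2,
    \qquad
    \Delta w_{ij} = -\eta\, \frac{\partial E}{\partial w_{ij}}
    \]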
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Proceedings Article

A training algorithm for optimal margin classifiers

TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions.
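In the standard formulation (conventional notation, not quoted from the paper), maximizing the margin amounts to the constrained problem below; the margin equals 2/‖w‖, and replacing the dot products with a kernel K(x_i, x_j) yields the polynomial and radial-basis-function variants:

    \[
    \min_{w,\,b}\ \tfrac{1}{2}\lVert w\rVert^2
    \quad \text{subject to} \quad
    y_i\,(w \cdot x_i + b) \ge 1, \qquad i = 1, \dots, \ell
    \]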
Book

Methods of Mathematical Physics

TL;DR: This classic text develops the algebra of linear transformations and quadratic forms and applies the calculus of variations to eigenvalue problems.
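One representative result from that tradition, stated in standard notation as an assumption about the material the entry refers to: the smallest eigenvalue of a symmetric matrix A is the minimum of its quadratic form over nonzero vectors (the Rayleigh quotient characterization):

    \[
    \lambda_{\min}(A) = \min_{x \neq 0} \frac{x^{\top} A x}{x^{\top} x}
    \]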