Support-Vector Networks
Corinna Cortes, Vladimir Vapnik
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Abstract:
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data.
High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
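The idea in the abstract, mapping inputs through a polynomial kernel and fitting a linear decision surface with a soft margin that tolerates errors on non-separable data, can be sketched with a modern implementation. This is an illustrative example, not the paper's original experiments: scikit-learn, the `make_moons` dataset, and all parameter values are assumptions.

```python
# Sketch of the support-vector idea: a polynomial kernel implicitly maps
# inputs to a high-dimensional feature space, where a linear decision
# surface is fit. The parameter C gives the soft margin, i.e. the
# extension to non-separable training data. All choices are illustrative.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# A non-linearly-separable two-group problem.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Degree-3 polynomial kernel; coef0=1 includes lower-order terms.
clf = SVC(kernel="poly", degree=3, coef0=1.0, C=1.0)
clf.fit(X, y)

print("training accuracy:", clf.score(X, y))
print("number of support vectors:", clf.n_support_.sum())
```

Only the support vectors (the training points on or inside the margin) determine the decision surface, which is what gives the machine its generalization properties.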
Citations
Proceedings Article
Support Vector Machines Under Adversarial Label Noise
TL;DR: This paper assumes that an adversary controls part of the training data and aims to subvert the SVM learning process, and proposes a strategy to improve the robustness of SVMs to training-data manipulation based on a simple kernel matrix correction.
Proceedings Article
Prior Knowledge in Support Vector Kernels
TL;DR: Methods for incorporating prior knowledge in Support Vector learning machines are explored; it is shown that both invariance under group transformations and prior knowledge about locality in images can be incorporated by constructing appropriate kernel functions.
Proceedings Article
Object detection using a max-margin Hough transform
Subhransu Maji,Jitendra Malik +1 more
TL;DR: A discriminative Hough-transform-based object detector is presented in which each local part casts a weighted vote for the possible locations of the object center; it is shown that the weights can be learned in a max-margin framework that directly optimizes classification performance.
Journal Article
PreBIND and Textomy - mining the biomedical literature for protein-protein interactions using a support vector machine
Ian Donaldson, Joel Martin, Berry de Bruijn, Cheryl Wolting, Vicki Lay, Brigitte Tuekam, Shudong Zhang, Berivan Baskin, Gary D. Bader, Katerina Michalickova, Tony Pawson, Christopher W. V. Hogue
TL;DR: This work presents an information extraction system that was designed to locate protein-protein interaction data in the literature and present these data to curators and the public for review and entry into BIND.
Journal Article
Assessment of the effects of training data selection on the landslide susceptibility mapping: a comparison between support vector machine (SVM), logistic regression (LR) and artificial neural networks (ANN)
TL;DR: The results show that random selection of the landslide training data affected the parameter estimates of the SVM, LR and ANN algorithms and influenced the accuracy of the susceptibility model, because landslide conditioning factors vary with geographic location across the study area.
References
Journal Article
Learning representations by back-propagating errors
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, the network's internal units come to represent important features of the task domain.
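The weight-adjustment loop this TL;DR describes can be sketched in a few lines of NumPy. This is a minimal illustration of back-propagation on a tiny two-layer network, not the reference's original setup; the XOR data, network size, learning rate and iteration count are all assumed for the example.

```python
# Minimal back-propagation sketch: repeatedly adjust connection weights
# to reduce the squared difference between actual and desired outputs.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([[0.], [1.], [1.], [0.]])            # desired outputs (XOR)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input  -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    y = sigmoid(h @ W2 + b2)
    err = y - t                                   # dE/dy for E = 0.5*sum((y-t)^2)
    d2 = err * y * (1 - y)                        # error at the output layer
    d1 = (d2 @ W2.T) * h * (1 - h)                # error propagated back to hidden
    W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(0)   # gradient-descent updates
    W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(0)

loss = 0.5 * ((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - t) ** 2).sum()
print("final loss:", loss)
```

After training, the hidden-layer activations encode the intermediate features (here, something like AND/OR of the inputs) needed to solve a problem no single-layer network can.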
Book ChapterDOI
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Proceedings Article
A training algorithm for optimal margin classifiers
TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions.
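The optimal-margin idea this reference describes can be illustrated on a small separable dataset: the maximum-margin hyperplane is determined entirely by the few training points closest to it. This sketch uses scikit-learn's linear SVM as a stand-in solver (not the paper's own algorithm); the data points and the large-C hard-margin approximation are assumptions for the example.

```python
# Illustration of an optimal-margin classifier: on separable data, the
# decision boundary maximizes its distance to the nearest points of each
# class, and only those "support vectors" determine the solution.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0., 0.], [1., 0.], [0., 1.],   # class -1
              [3., 3.], [4., 3.], [3., 4.]])  # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6)             # very large C ~ hard margin
clf.fit(X, y)

w = clf.coef_[0]
margin = 2.0 / np.linalg.norm(w)              # geometric margin width
print("support vectors:\n", clf.support_vectors_)
print("margin width:", margin)
```

Here only three of the six points end up as support vectors; moving any of the other points (without crossing the margin) leaves the boundary unchanged.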
Book
Methods of Mathematical Physics
Richard Courant, David Hilbert
TL;DR: In this book, the authors present the algebra of linear transformations and quadratic forms, and apply it to eigenvalue problems.