New Support Vector Algorithms
TLDR
A new class of support vector algorithms for regression and classification in which a parameter ν eliminates one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case.

Abstract:
We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results.
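As an illustration of the ν-parameterization, scikit-learn's NuSVC and NuSVR classes expose exactly this trade-off: ν replaces C in classification and ε in regression. A minimal sketch (the synthetic datasets and the ν values are arbitrary choices for demonstration):

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.svm import NuSVC, NuSVR

# Classification: nu replaces C. It is an upper bound on the fraction
# of margin errors and a lower bound on the fraction of support vectors.
Xc, yc = make_classification(n_samples=200, random_state=0)
clf = NuSVC(nu=0.2, kernel="rbf").fit(Xc, yc)
frac_sv = clf.support_.size / Xc.shape[0]  # fraction of support vectors

# Regression: nu replaces the accuracy parameter epsilon; the width of
# the epsilon-insensitive tube is adapted to the data automatically.
Xr, yr = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)
reg = NuSVR(nu=0.5, C=1.0).fit(Xr, yr)
preds = reg.predict(Xr)
```

Choosing ν directly, rather than C or ε, is convenient because ν has an interpretable range (0, 1] tied to the fraction of support vectors, whereas useful values of C and ε depend on the scale of the data.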
Citations
Journal ArticleDOI
Security Evaluation of Pattern Classifiers under Attack
TL;DR: A framework for the empirical evaluation of classifier security is proposed that formalizes and generalizes the main ideas in the literature; examples of its use in three real applications show that security evaluation can provide a more complete understanding of a classifier's behavior in adversarial environments and lead to better design choices.
Proceedings Article
Ranking with Large Margin Principle: Two Approaches
Amnon Shashua, Anat Levin +1 more
TL;DR: Two main approaches to the problem of ranking k instances using a "large margin" principle are introduced: a "fixed margin" policy, in which the margin of the closest neighboring classes is maximized, and a direct generalization of SVM to ranking learning.
Journal ArticleDOI
Learning the Kernel with Hyperkernels
TL;DR: The equivalent representer theorem for the choice of kernels is stated, and a semidefinite programming formulation of the resulting optimization problem is presented; this leads to a statistical estimation problem similar to minimizing a regularized risk functional.
Proceedings ArticleDOI
Most likely heteroscedastic Gaussian process regression
TL;DR: This paper follows Goldberg et al.'s approach and models the noise variance using a second GP, in addition to the GP governing the noise-free output value, using a Markov chain Monte Carlo method to approximate the posterior noise variance.
Journal ArticleDOI
Machine Recognition of Music Emotion: A Review
Yi-Hsuan Yang, Homer H. Chen +1 more
TL;DR: This article provides a comprehensive review of the methods that have been proposed for music emotion recognition and concludes with suggestions for further research.
References
Book
The Nature of Statistical Learning Theory
TL;DR: Topics include the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Journal ArticleDOI
Support-Vector Networks
Corinna Cortes, Vladimir Vapnik +1 more
TL;DR: The high generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of optical character recognition.
Book
Matrix Analysis
Roger A. Horn, Charles R. Johnson +1 more
TL;DR: The authors present results of both classic and recent matrix analysis, using canonical forms as a unifying theme, and demonstrate their importance in a variety of applications of linear algebra and matrix theory.
Journal ArticleDOI
A Tutorial on Support Vector Machines for Pattern Recognition
TL;DR: Several arguments supporting the observed high accuracy of SVMs are reviewed, and numerous examples and proofs of most of the key theorems are given.