New Support Vector Algorithms
TLDR
A new class of support vector algorithms for regression and classification that eliminates one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case.

Abstract
We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results.
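The ν-parameterization described in the abstract is available (via a LIBSVM-based solver) in scikit-learn's NuSVC. The following sketch, using synthetic data (an assumption for illustration, not part of the paper), demonstrates the paper's central property: ν is a lower bound on the fraction of training points that become support vectors.

```python
# Sketch: illustrate that nu lower-bounds the fraction of support vectors,
# using scikit-learn's NuSVC on synthetic data (illustrative setup only).
from sklearn.svm import NuSVC
from sklearn.datasets import make_classification

# Synthetic binary classification problem (hypothetical data, for demonstration)
X, y = make_classification(n_samples=200, random_state=0)

for nu in (0.2, 0.5):
    clf = NuSVC(nu=nu, kernel="rbf").fit(X, y)
    frac_sv = len(clf.support_) / len(X)
    # By the paper's theory, frac_sv should be at least nu
    print(f"nu={nu}: fraction of support vectors = {frac_sv:.2f}")
```

Larger ν forces more points into the support set; conversely, ν upper-bounds the fraction of margin errors, which is what makes it a more interpretable knob than C.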
Citations
Journal ArticleDOI
LIBSVM: A library for support vector machines
Chih-Chung Chang, Chih-Jen Lin
TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Journal ArticleDOI
A tutorial on support vector regression
TL;DR: This tutorial gives an overview of the basic ideas underlying Support Vector (SV) machines for function estimation, and includes a summary of currently used algorithms for training SV machines, covering both the quadratic programming part and advanced methods for dealing with large datasets.
Pattern Recognition and Machine Learning
TL;DR: Probability distributions and linear models for regression and classification are presented, along with a discussion of combining models in the context of machine learning.
Journal ArticleDOI
Robust enumeration of cell subsets from tissue expression profiles
Aaron M. Newman, Chih Long Liu, Michael R. Green, Andrew J. Gentles, Weiguo Feng, Yue Xu, Chuong D. Hoang, Maximilian Diehn, Ash A. Alizadeh, et al.
TL;DR: CIBERSORT outperformed other methods with respect to noise, unknown mixture content and closely related cell types when applied to enumeration of hematopoietic subsets in RNA mixtures from fresh, frozen and fixed tissues, including solid tumors.
Book
Kernel Methods for Pattern Analysis
TL;DR: This book provides an easy introduction for students and researchers to the growing field of kernel-based pattern analysis, demonstrating with examples how to handcraft an algorithm or a kernel for a new specific application, and covering all the necessary conceptual and mathematical tools to do so.
References
Book
Neural Network Learning: Theoretical Foundations
Martin Anthony, Peter L. Bartlett
TL;DR: The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification and in real prediction, and discuss the computational complexity of neural network learning.
Journal Article
Theoretical Foundations of the Potential Function Method in Pattern Recognition Learning
Journal ArticleDOI
Comparing support vector machines with Gaussian kernels to radial basis function classifiers
Bernhard Schölkopf, Kah-Kay Sung, C.J.C. Burges, Federico Girosi, Partha Niyogi, Tomaso Poggio, Vladimir Vapnik
TL;DR: The results show that on the United States Postal Service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system; the SV approach is thus not only theoretically well-founded but also superior in a practical application.
Journal ArticleDOI
The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
TL;DR: Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights.
Book ChapterDOI
Predicting Time Series with Support Vector Machines
Klaus-Robert Müller, Alexander J. Smola, Gunnar Rätsch, Bernhard Schölkopf, Jens Kohlmorgen, Vladimir Vapnik
TL;DR: Two different cost functions for Support Vector machines are used: an ε-insensitive loss and Huber's robust loss function; how to choose the regularization parameters in these models is also discussed.
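The two cost functions named in this summary are standard and easy to state; the following sketch (an illustration with hypothetical parameter values, not code from the cited paper) computes both on sample residuals.

```python
import numpy as np

def eps_insensitive(residual, eps=0.1):
    # Vapnik's epsilon-insensitive loss: zero inside the eps-tube,
    # linear in |residual| outside it
    return np.maximum(0.0, np.abs(residual) - eps)

def huber(residual, delta=1.0):
    # Huber's robust loss: quadratic for small residuals,
    # linear in the tails (less sensitive to outliers than squared error)
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

# Hypothetical residuals, including one inside the tube and one large outlier
r = np.array([-2.0, -0.05, 0.0, 0.5, 3.0])
print(eps_insensitive(r))
print(huber(r))
```

The ε-insensitive loss ignores residuals smaller than ε, which is what produces sparse support vector expansions; Huber's loss instead trades off quadratic and linear behavior for robustness.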