Choosing Multiple Parameters for Support Vector Machines
TL;DR
The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is considered; it is addressed by minimizing estimates of the generalization error of SVMs with a gradient descent algorithm over the set of parameters.
Abstract
The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is considered. This is done by minimizing some estimates of the generalization error of SVMs using a gradient descent algorithm over the set of parameters. Usual methods for choosing parameters, based on exhaustive search, become intractable as soon as the number of parameters exceeds two. Some experimental results assess the feasibility of our approach for a large number of parameters (more than 100) and demonstrate an improvement of generalization performance.
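As a rough illustration of the approach, the sketch below tunes an SVM's C and RBF width by gradient descent on a smoothed hold-out error. It uses scikit-learn, a synthetic dataset, and finite-difference gradients in place of the analytic gradients of the error bounds derived in the paper, so it is a toy analogue rather than the authors' algorithm.

```python
# Toy sketch: gradient descent on a smoothed hold-out error over (log C, log gamma).
# The paper differentiates error bounds (span bound, radius-margin bound) analytically;
# finite differences stand in for those gradients here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

def smooth_error(theta):
    """Sigmoid-smoothed hold-out error for theta = (log C, log gamma)."""
    C, gamma = np.exp(theta)
    clf = SVC(C=C, gamma=gamma).fit(X_tr, y_tr)
    margins = clf.decision_function(X_val) * (2 * y_val - 1)  # labels {0,1} -> {-1,+1}
    return np.mean(1.0 / (1.0 + np.exp(5.0 * margins)))       # smooth error estimate

theta = np.array([0.0, np.log(1.0 / X.shape[1])])  # start at C=1, gamma=1/n_features
lr, eps = 0.5, 1e-2
for _ in range(30):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):                    # finite-difference gradient
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (smooth_error(theta + e) - smooth_error(theta - e)) / (2 * eps)
    theta -= lr * grad                             # gradient step in log-parameter space
print("tuned C, gamma:", np.exp(theta), "smoothed error:", smooth_error(theta))
```

Working in log-parameter space keeps C and gamma positive, which is also how such gradient-based tuning is usually parameterized.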
Citations
Posted Content
Self-Tuning Networks: Bilevel Optimization of Hyperparameters using Structured Best-Response Functions
TL;DR: This work aims to adapt regularization hyperparameters for neural networks by fitting compact approximations to the best-response function, which maps hyperparameters to optimal weights and biases; the approach outperforms competing hyperparameter optimization methods on large-scale deep learning problems.
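A toy analogue of the bilevel structure behind this idea, assuming a ridge-regression inner problem whose best response is available in closed form; Self-Tuning Networks instead learn an approximate best-response function for network weights, which this sketch does not reproduce.

```python
# Bilevel toy problem: the inner problem (ridge regression) has a closed-form
# best response w*(lambda); the outer loop adjusts log(lambda) to reduce
# validation loss via a finite-difference gradient.
import numpy as np

rng = np.random.default_rng(0)
X_tr, X_val = rng.normal(size=(60, 10)), rng.normal(size=(40, 10))
w_true = rng.normal(size=10)
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=60)
y_val = X_val @ w_true + 0.5 * rng.normal(size=40)

def best_response(lam):
    """Inner solution: ridge weights for regularization strength lam."""
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def val_loss(log_lam):
    w = best_response(np.exp(log_lam))
    return np.mean((X_val @ w - y_val) ** 2)

log_lam, lr, eps = 0.0, 0.5, 1e-3
for _ in range(50):   # outer loop: gradient descent on the hyperparameter
    g = (val_loss(log_lam + eps) - val_loss(log_lam - eps)) / (2 * eps)
    log_lam -= lr * g
print("tuned lambda:", np.exp(log_lam), "validation loss:", val_loss(log_lam))
```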
Journal ArticleDOI
Choosing Parameters of Kernel Subspace LDA for Recognition of Face Images Under Pose and Illumination Variations
TL;DR: An eigenvalue-stability-bounded margin maximization (ESBMM) algorithm is proposed to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which builds on the previously developed subspace LDA method.
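For context, the sketch below shows the kind of multi-parameter Gaussian kernel such methods tune: one width per input feature, passed to an SVM as a precomputed Gram matrix. The widths are fixed by hand here; the ESBMM algorithm itself and kernel subspace LDA are not reproduced.

```python
# Anisotropic Gaussian kernel with one tunable width per feature, used via a
# precomputed Gram matrix. The tuning of the widths (ESBMM, or the gradient
# approach of the parent paper) is intentionally left out.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def anisotropic_rbf(A, B, gammas):
    """K(a, b) = exp(-sum_k gammas[k] * (a_k - b_k)^2)."""
    diff = A[:, None, :] - B[None, :, :]
    return np.exp(-np.einsum('ijk,k->ij', diff ** 2, gammas))

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
gammas = np.full(X.shape[1], 0.1)          # one width parameter per feature
K = anisotropic_rbf(X, X, gammas)
clf = SVC(kernel='precomputed').fit(K, y)
print("training accuracy:", clf.score(K, y))
```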
Posted Content
Encoding High Dimensional Local Features by Sparse Coding Based Fisher Vectors
TL;DR: A sparse coding based Fisher vector coding (SCFVC) method is proposed for high-dimensional local features, in which each local feature is drawn from a Gaussian distribution whose mean vector is sampled from a subspace.
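A minimal sketch of the sparse-coding stage only, assuming scikit-learn's DictionaryLearning and random descriptors in place of real local features; the derivation of the resulting Fisher vector is omitted.

```python
# Sparse-coding step that SCFVC substitutes for the GMM of conventional Fisher
# vector coding: learn a dictionary over local descriptors and represent each
# descriptor by its sparse codes.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 64))   # stand-in for high-dimensional local features

coder = DictionaryLearning(n_components=32, transform_algorithm='lasso_lars',
                           transform_alpha=0.5, max_iter=20, random_state=0)
codes = coder.fit_transform(descriptors)   # sparse codes, shape (500, 32)
print("mean nonzeros per descriptor:", np.mean(np.count_nonzero(codes, axis=1)))
```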
Proceedings ArticleDOI
Effectiveness of Random Search in SVM hyper-parameter tuning
Rafael Gomes Mantovani, André L. Rossi, Joaquin Vanschoren, Bernd Bischl, André C. P. L. F. de Carvalho
TL;DR: The experimental results show that the predictive performance of models tuned with Random Search is equivalent to that obtained with meta-heuristics and Grid Search, but at a lower computational cost.
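A small reproduction of the comparison using scikit-learn's built-in search utilities, assuming a toy dataset and an illustrative budget of 16 evaluations for both searches:

```python
# Random search vs. exhaustive grid search over SVM hyperparameters, with the
# same evaluation budget (16 candidate settings each).
from scipy.stats import loguniform
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

grid = GridSearchCV(SVC(), {'C': [0.1, 1, 10, 100],
                            'gamma': [1e-4, 1e-3, 1e-2, 1e-1]}, cv=5)
rand = RandomizedSearchCV(SVC(), {'C': loguniform(1e-1, 1e2),
                                  'gamma': loguniform(1e-4, 1e-1)},
                          n_iter=16, cv=5, random_state=0)  # same budget as the 4x4 grid
print("grid search:  ", grid.fit(X, y).best_score_)
print("random search:", rand.fit(X, y).best_score_)
```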
Journal ArticleDOI
Parameters optimization of support vector machines for imbalanced data using social ski driver algorithm
Alaa Tharwat, Thomas Gabel
TL;DR: This paper proposes a social ski-driver (SSD) optimization algorithm, inspired by several evolutionary optimization algorithms, for optimizing the parameters of SVMs with the aim of improving classification performance.
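A generic population-based search over (C, gamma) for a class-weighted SVM on imbalanced data, sketched below; the specific SSD update rules are not reproduced, so the perturbation step is a simplified stand-in.

```python
# Generic population-based tuning of (log C, log gamma) for an imbalanced task:
# evaluate a population, keep the best individual, resample around it. The SSD
# velocity/position updates are replaced by a simple Gaussian perturbation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, weights=[0.9, 0.1], random_state=0)
rng = np.random.default_rng(0)

def fitness(log_params):
    C, gamma = np.exp(log_params)
    clf = SVC(C=C, gamma=gamma, class_weight='balanced')
    pred = cross_val_predict(clf, X, y, cv=3)
    return balanced_accuracy_score(y, pred)     # metric robust to class imbalance

pop = rng.normal(size=(10, 2))                  # population of (log C, log gamma)
for _ in range(15):
    scores = np.array([fitness(p) for p in pop])
    best = pop[scores.argmax()]
    pop = best + 0.5 * rng.normal(size=pop.shape)  # perturb around the best
    pop[0] = best                                  # keep the elite unchanged
print("best (C, gamma):", np.exp(best), "balanced accuracy:", scores.max())
```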
References
Book
The Nature of Statistical Learning Theory
TL;DR: Topics covered include the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Journal ArticleDOI
Support-Vector Networks
Corinna Cortes, Vladimir Vapnik
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
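A minimal example of the polynomial-kernel SVM described above, with scikit-learn's small digits dataset standing in for the original OCR benchmarks:

```python
# Polynomial-kernel SVM on a small digit-recognition task.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel='poly', degree=3, coef0=1.0, C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```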
Book
Statistical learning theory
TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of learning processes, the author covers function estimates from small data pools, applying these estimations to real-life problems, and much more.
Book
An Introduction to Support Vector Machines and Other Kernel-based Learning Methods
TL;DR: This is the first comprehensive introduction to Support Vector Machines (SVMs), a new generation learning system based on recent advances in statistical learning theory, and will guide practitioners to updated literature, new applications, and on-line software.
Journal ArticleDOI
Molecular classification of cancer: class discovery and class prediction by gene expression monitoring.
Todd R. Golub, Donna K. Slonim, Pablo Tamayo, Christine Huard, Michelle Gaasenbeek, Jill P. Mesirov, Hilary A. Coller, Mignon L. Loh, James R. Downing, Michael A. Caligiuri, Clara D. Bloomfield, Eric S. Lander
TL;DR: A generic approach to cancer classification based on gene expression monitoring by DNA microarrays is described and applied to human acute leukemias as a test case and suggests a general strategy for discovering and predicting cancer classes for other types of cancer, independent of previous biological knowledge.
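A rough sketch of the class-prediction setting, assuming synthetic data in place of the leukemia microarrays and a linear SVM plus univariate gene selection standing in for the weighted-voting predictor of the original study:

```python
# High-dimensional class prediction: select informative "genes", then fit a
# linear classifier on expression-like profiles with far more features than samples.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# ~7000 synthetic "genes", 72 "samples", only a few informative
X, y = make_classification(n_samples=72, n_features=7000, n_informative=50,
                           random_state=0)
model = make_pipeline(SelectKBest(f_classif, k=50),
                      LinearSVC(C=1.0, max_iter=5000))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```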