Open Access Journal Article (DOI)

Choosing Multiple Parameters for Support Vector Machines

TL;DR
The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is addressed by minimizing estimates of the generalization error of SVMs with a gradient descent algorithm over the set of parameters.
Abstract
The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is considered. This is done by minimizing some estimates of the generalization error of SVMs using a gradient descent algorithm over the set of parameters. Usual methods for choosing parameters, based on exhaustive search, become intractable as soon as the number of parameters exceeds two. Some experimental results assess the feasibility of our approach for a large number of parameters (more than 100) and demonstrate an improvement in generalization performance.
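For concreteness, here is a minimal sketch of the procedure the abstract describes: adjust the SVM hyperparameters by gradient descent on an estimate of the generalization error, rather than by exhaustive grid search. This is an illustration, not the paper's algorithm; the paper derives analytic gradients of smooth error estimates (such as the radius/margin and span bounds), whereas the sketch below substitutes a held-out hinge loss for the error estimate and a central finite-difference approximation for its gradient. The dataset, parameter names, step size, and iteration count are all illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy data; labels mapped to {-1, +1} so the hinge loss below is well defined.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
y = 2 * y - 1
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

def error_estimate(log_params):
    """Smooth stand-in for the generalization error: hinge loss on a held-out set."""
    C, gamma = np.exp(log_params)
    clf = SVC(C=C, gamma=gamma).fit(X_tr, y_tr)
    margins = y_val * clf.decision_function(X_val)
    return np.maximum(0.0, 1.0 - margins).mean()

theta = np.log([1.0, 0.1])            # optimize log(C) and log(gamma)
lr, eps = 0.2, 1e-3
for _ in range(25):
    grad = np.zeros_like(theta)
    for i in range(theta.size):       # central finite-difference gradient
        d = np.zeros_like(theta)
        d[i] = eps
        grad[i] = (error_estimate(theta + d) - error_estimate(theta - d)) / (2 * eps)
    theta -= lr * grad                # gradient step in log-parameter space

print("tuned (C, gamma):", np.exp(theta))
```

Working in log-parameter space keeps C and gamma positive without explicit constraints, and the same loop extends to many kernel parameters (e.g. one scaling factor per input feature), which is exactly the regime where exhaustive search breaks down.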



Citations
Journal Article (DOI)

Optimizing the data-dependent kernel under a unified kernel optimization framework

TL;DR: A unified kernel optimization framework based on a data-dependent kernel function is proposed; it can use any discriminant criterion formulated in a pairwise manner as its objective function.

From simple classification methods to machine learning for the binary discrimination of beers using electronic nose data

TL;DR: This paper deals with the development and implementation of an electronic nose based on Metal Oxide Semiconductor (MOS) sensors as an innovative instrument for beer aroma recognition, and shows that when simple methods perform well, they may be preferred.
Journal Article (DOI)

Group-Sensitive Multiple Kernel Learning for Object Recognition

TL;DR: A group-sensitive multiple kernel learning (GS-MKL) method is proposed for object recognition that accommodates intraclass diversity and interclass correlation; it achieves encouraging performance comparable to the state of the art and outperforms several existing MKL methods.
Posted Content

Non-Sparse Regularization and Efficient Training with Multiple Kernels

TL;DR: Recent improvements have finally made MKL fit for deployment in practical applications: MKL now has a good chance of improving accuracy (over a plain sum kernel) at an affordable computational cost.
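To make the "plain sum kernel" comparison concrete, the sketch below contrasts a uniform sum of base kernels with a non-uniform convex combination. It is not the non-sparse MKL solver discussed in the cited work; the weights come from a simple kernel-target alignment heuristic, and the base kernels, bandwidths, and data are purely illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=1)
y_pm = 2 * y - 1  # {-1, +1} labels for the alignment computation

# A small family of base kernels: RBF kernels at several bandwidths.
gammas = [0.001, 0.01, 0.1, 1.0]
kernels = [rbf_kernel(X, gamma=g) for g in gammas]

def alignment(K, labels):
    """Alignment between the centered kernel and the label matrix y y^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    yyT = np.outer(labels, labels)
    return (Kc * yyT).sum() / (np.linalg.norm(Kc) * np.linalg.norm(yyT))

# Plain sum kernel (uniform weights) versus an alignment-weighted combination.
K_sum = sum(kernels) / len(kernels)
w = np.array([max(alignment(K, y_pm), 0.0) for K in kernels])
w = w / w.sum()
K_weighted = sum(wi * Ki for wi, Ki in zip(w, kernels))

for name, K in [("uniform sum kernel", K_sum), ("alignment-weighted kernel", K_weighted)]:
    acc = cross_val_score(SVC(kernel="precomputed"), K, y, cv=5).mean()
    print(f"{name}: 5-fold CV accuracy = {acc:.3f}")
print("kernel weights:", np.round(w, 3))
```

cross_val_score slices both rows and columns of a precomputed kernel for each fold, so the two combined kernels can be compared directly with the same SVM.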
Journal Article (DOI)

Wasserstein Discriminant Analysis

TL;DR: Wasserstein Discriminant Analysis (WDA) is a supervised method that can improve classification of high-dimensional data by computing a suitable linear map onto a lower-dimensional subspace.
References
Book

The Nature of Statistical Learning Theory

TL;DR: Covers the setting of the learning problem, the consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Journal Article (DOI)

Support-Vector Networks

TL;DR: High generalization ability of support-vector networks using polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.

Statistical learning theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Book

An Introduction to Support Vector Machines and Other Kernel-based Learning Methods

TL;DR: This is the first comprehensive introduction to Support Vector Machines (SVMs), a new-generation learning system based on recent advances in statistical learning theory; it will guide practitioners to updated literature, new applications, and on-line software.
Journal Article (DOI)

Molecular classification of cancer: class discovery and class prediction by gene expression monitoring.

TL;DR: A generic approach to cancer classification based on gene expression monitoring by DNA microarrays is described and applied to human acute leukemias as a test case, suggesting a general strategy for discovering and predicting cancer classes for other types of cancer, independent of previous biological knowledge.