Open Access Journal Article (DOI)

New Support Vector Algorithms

TL;DR
A new class of support vector algorithms for regression and classification is proposed in which a parameter ν controls the number of support vectors, eliminating one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case.
Abstract
We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results.
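To make the role of ν concrete, below is a minimal sketch (an illustration, not the authors' original implementation) using scikit-learn's NuSVC, which implements the ν-SVM classification formulation: ν upper-bounds the fraction of margin errors and lower-bounds the fraction of support vectors, so varying it directly changes how many support vectors the fitted model retains. The synthetic dataset and the particular ν values are assumptions made for illustration; for regression, sklearn.svm.NuSVR plays the analogous role, with ν taking over the job of the ε tube width.

# Minimal sketch: how nu bounds the fraction of support vectors in nu-SVM.
# Assumes scikit-learn is available; the dataset and nu values are illustrative.
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

for nu in (0.1, 0.3, 0.5):
    clf = NuSVC(nu=nu, kernel="rbf", gamma="scale").fit(X, y)
    frac_sv = len(clf.support_) / len(X)
    # nu is a lower bound on the fraction of support vectors and an
    # upper bound on the fraction of margin errors.
    print(f"nu={nu:.1f}  fraction of support vectors: {frac_sv:.2f}")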


Citations
Journal Article (DOI)

Weighted-support vector machines for predicting membrane protein types based on pseudo-amino acid composition.

TL;DR: A spectral analysis technique is introduced to represent the statistical sample of a protein, and a weighted support vector machine (SVM) algorithm is applied; the approach shows remarkable power in handling the bias that arises when one class in the training dataset contains many more samples than the other (a rough class-weighting sketch follows this citations list).
Journal Article (DOI)

MicroRNA predictors of longevity in Caenorhabditis elegans.

TL;DR: This work quantitatively examined Caenorhabditis elegans reared individually in a novel apparatus and observed throughout their lives, and identified three microRNAs whose early-adulthood expression patterns individually predict up to 47% of lifespan differences, suggesting that these microRNAs not only report on but also likely determine longevity.
Journal Article (DOI)

Composite Binary Losses

TL;DR: This work characterises when margin losses can be proper composite losses, explicitly shows how to determine a symmetric loss in full from half of one of its partial losses, introduces an intrinsic parametrisation of composite binary losses, and gives a complete characterisation of the relationship between proper losses and "classification calibrated" losses.
Patent

Data classification method and data classification device

TL;DR: In this article, a separation-surface-set storage part stores information defining a plurality of separation surfaces that separate a feature space into at least one known class region and an unknown class region.
Journal Article (DOI)

Classification of colonic tissues using near-infrared Raman spectroscopy and support vector machines.

TL;DR: NIR Raman spectroscopy in combination with a powerful SVM technique has great potential for providing an effective and accurate schema for cancer diagnosis in the colon.
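The weighted-SVM idea in the first citation above can be illustrated with a class-weighted SVM. The following is a rough sketch on a synthetic imbalanced dataset using scikit-learn's class_weight option, not the cited paper's specific weighted-SVM formulation or its pseudo-amino-acid features; the dataset and imbalance ratio are assumptions.

# Rough sketch: class-weighted SVM on an imbalanced synthetic dataset.
# class_weight="balanced" stands in for the cited paper's weighting scheme.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Roughly 9:1 class imbalance, standing in for a dataset where one
# membrane-protein type has far more training samples than another.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# "balanced" scales each class's error penalty inversely to its frequency,
# so the minority class is not swamped during margin optimization.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))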
References
Book

The Nature of Statistical Learning Theory

TL;DR: Topics include the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Journal Article (DOI)

Support-Vector Networks

TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Book

Matrix Analysis

TL;DR: In this article, the authors present results of both classic and recent matrix analysis, using canonical forms as a unifying theme, and demonstrate their importance across a variety of applications in linear algebra and matrix theory.
Journal Article (DOI)

A Tutorial on Support Vector Machines for Pattern Recognition

TL;DR: Several arguments that support the observed high accuracy of SVMs are reviewed, and numerous examples and proofs of most of the key theorems are given.
Book

Nonlinear Programming