Open Access Proceedings Article

Transduction with Confidence and Credibility

TLDR
A new transductive learning algorithm based on Support Vector Machines is described. It provides confidence values for its predicted classifications of new examples, together with a measure of "credibility" that indicates the reliability of the data on which each prediction is made.
Abstract
In this paper we follow the same general ideology as in [Gammerman et al., 1998], and describe a new transductive learning algorithm using Support Vector Machines. The algorithm presented provides confidence values for its predicted classifications of new examples. We also obtain a measure of "credibility" which serves as an indicator of the reliability of the data upon which we make our prediction. Experiments compare the new algorithm to a standard Support Vector Machine and other transductive methods which use Support Vector Machines, such as Vapnik's margin transduction. Empirical results show that the new algorithm not only produces confidence and credibility measures, but is comparable to, and sometimes exceeds, the performance of the other algorithms.
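The mechanics can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes that the Lagrange multipliers of an SVM retrained with the new example under each candidate label serve as strangeness values, and it uses scikit-learn's SVC (an assumption of this sketch) to obtain them.

```python
import numpy as np
from sklearn.svm import SVC

def lagrange_multipliers(model, n_samples):
    """Recover alpha_i for every training example (zero for non-support vectors)."""
    alpha = np.zeros(n_samples)
    alpha[model.support_] = np.abs(model.dual_coef_.ravel())
    return alpha

def transductive_predict(X_train, y_train, x_new, labels=(-1, 1), C=1.0):
    """Return (prediction, confidence, credibility) for a single new example."""
    p_values = {}
    for y in labels:
        # Tentatively assign label y to the new example and retrain on the augmented set.
        X_aug = np.vstack([X_train, x_new])
        y_aug = np.append(y_train, y)
        model = SVC(kernel="linear", C=C).fit(X_aug, y_aug)
        alpha = lagrange_multipliers(model, len(y_aug))
        # p-value: fraction of examples at least as "strange" as the new one.
        p_values[y] = np.mean(alpha >= alpha[-1])
    ranked = sorted(p_values.items(), key=lambda kv: kv[1], reverse=True)
    prediction, largest = ranked[0]
    second_largest = ranked[1][1]
    return prediction, 1.0 - second_largest, largest  # label, confidence, credibility
```

Credibility is the largest p-value (how typical the chosen labelling is), and confidence is one minus the second-largest p-value (how strongly the alternative labellings are rejected).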



Citations
Book

Semi-Supervised Learning

TL;DR: Semi-supervised learning (SSL) as discussed by the authors is the middle ground between supervised learning (in which all training examples are labeled) and unsupervised learning (in which no labeled data are given).
Book

Adaptive computation and machine learning

TL;DR: This book attempts to give an overview of the different recent efforts to deal with covariate shift, a challenging situation in which the distribution of the inputs differs between the training and test stages even though the conditional distribution of outputs given inputs remains unchanged.
Posted Content

Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning

TL;DR: The DkNN algorithm is evaluated on several datasets; it is shown that the confidence estimates accurately identify inputs outside the model's training distribution, and that the explanations provided by nearest neighbors are intuitive and useful in understanding model failures.
Book Chapter

Inductive Confidence Machines for Regression

TL;DR: The inductive approach described in this paper may be the only option available when dealing with large data sets and is much faster than the existing transductive techniques.
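For concreteness, here is a minimal sketch of the inductive (split) idea for regression, using absolute residuals on a held-out calibration set as nonconformity scores; the Ridge regressor and the clamping of the quantile index are assumptions of this sketch, not details taken from the chapter.

```python
import numpy as np
from sklearn.linear_model import Ridge

def split_conformal_interval(X_train, y_train, X_cal, y_cal, x_new, delta=0.05):
    """Prediction interval with coverage >= 1 - delta under exchangeability."""
    model = Ridge().fit(X_train, y_train)             # trained once, reused for every test point
    residuals = np.abs(y_cal - model.predict(X_cal))  # nonconformity scores
    q = len(residuals)
    k = int(np.ceil((1.0 - delta) * (q + 1)))         # rank of the residual that sets the width
    half_width = np.sort(residuals)[min(k, q) - 1]    # clamp: if k > q the exact interval is unbounded
    center = model.predict(x_new)
    return center - half_width, center + half_width
```

Because the model is fitted once rather than retrained for every test example (as transduction requires), this is the source of the speed advantage mentioned above.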
Book Chapter

Inductive Conformal Prediction: Theory and Application to Neural Networks

TL;DR: The Bayesian framework and PAC theory can be used to produce upper bounds on the probability of error of a given algorithm with respect to some confidence level 1 − δ; both of these approaches, however, have their drawbacks.
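As a sketch of the conformal alternative alluded to here (notation assumed for illustration, not quoted from the chapter): with nonconformity scores $\alpha_1, \dots, \alpha_q$ computed on a calibration set and a score $\alpha^{y}$ for the test example under candidate label $y$, the inductive conformal p-value is

$$
p(y) \;=\; \frac{\#\{\, i \in \{1, \dots, q\} : \alpha_i \ge \alpha^{y} \,\} + 1}{q + 1},
$$

and outputting every label with $p(y) > \delta$ keeps the long-run error probability at most $\delta$ under exchangeability alone, without the prior assumptions of the Bayesian framework or the often loose capacity terms of PAC bounds.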
References
Book

The Nature of Statistical Learning Theory

TL;DR: Topics covered include the setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; and what is important in learning theory.
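A representative result from the "bounds on the rate of convergence" material, stated here in its commonly quoted simplified form as a sketch rather than a quotation from the book: for binary classification with a hypothesis class of VC dimension $h$ and $l$ training examples, with probability at least $1 - \eta$,

$$
R(f) \;\le\; R_{\mathrm{emp}}(f) \;+\; \sqrt{\frac{h\bigl(\ln\tfrac{2l}{h} + 1\bigr) - \ln\tfrac{\eta}{4}}{l}}.
$$

The gap between true risk $R$ and empirical risk $R_{\mathrm{emp}}$ shrinks as the sample grows and widens with the capacity $h$, which is the trade-off that the chapters on controlling generalization ability address.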

Statistical learning theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Proceedings Article

Handwritten Digit Recognition with a Back-Propagation Network

TL;DR: Minimal preprocessing of the data was required, but the architecture of the network was highly constrained and specifically designed for the task; the network achieves a 1% error rate and about a 9% reject rate on ZIP code digits provided by the U.S. Postal Service.
Journal Article

Sequential Minimal Optimization : A Fast Algorithm for Training Support Vector Machines

TL;DR: The sequential minimal optimization (SMO) algorithm, as mentioned in this paper, breaks a large QP problem into a series of the smallest possible QP sub-problems, which are solved analytically, thereby avoiding a time-consuming numerical QP optimization as an inner loop.
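To make "the smallest possible QP sub-problems" concrete: each SMO step optimizes exactly two multipliers in closed form. In the standard presentation (notation assumed here, not quoted from this entry), with prediction errors $E_i = f(x_i) - y_i$ and curvature $\eta = K(x_1, x_1) + K(x_2, x_2) - 2K(x_1, x_2)$,

$$
\alpha_2^{\mathrm{new}} = \alpha_2 + \frac{y_2 (E_1 - E_2)}{\eta},
\qquad
\alpha_2^{\mathrm{clipped}} = \min\bigl(H, \max(L, \alpha_2^{\mathrm{new}})\bigr),
\qquad
\alpha_1^{\mathrm{new}} = \alpha_1 + y_1 y_2 \bigl(\alpha_2 - \alpha_2^{\mathrm{clipped}}\bigr),
$$

where $[L, H]$ is the box determined by $0 \le \alpha_i \le C$ and the equality constraint $\sum_i \alpha_i y_i = 0$.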
Proceedings Article

Semi-Supervised Support Vector Machines

TL;DR: A general S3VM model is proposed that minimizes both the misclassification error and the function capacity based on all the available data; the model can be converted to a mixed-integer program and then solved exactly using integer programming.
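A sketch of the kind of mixed-integer formulation this describes; the norm, penalty constants, and big-$M$ constraint style below are illustrative assumptions rather than the exact model of the paper. For labeled points $(x_i, y_i)$, $i = 1, \dots, l$, unlabeled points $x_j$, $j = l+1, \dots, l+k$, binary assignments $d_j \in \{0, 1\}$, and a large constant $M$:

$$
\begin{aligned}
\min_{w,\, b,\, \eta,\, \xi,\, z,\, d} \quad & \|w\| + C \sum_{i=1}^{l} \eta_i + C^{*} \sum_{j=l+1}^{l+k} (\xi_j + z_j) \\
\text{s.t.} \quad & y_i (w \cdot x_i + b) \ge 1 - \eta_i, \qquad \eta_i \ge 0, \\
& w \cdot x_j + b \ge 1 - \xi_j - M(1 - d_j), \qquad \xi_j \ge 0, \\
& -(w \cdot x_j + b) \ge 1 - z_j - M d_j, \qquad z_j \ge 0, \; d_j \in \{0, 1\}.
\end{aligned}
$$

The binary variable $d_j$ selects which side of the margin each unlabeled point is penalized against, which is what turns the problem into a mixed-integer program.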