Open Access Proceedings Article

Algorithmic Luckiness

TLDR
In contrast to standard statistical learning theory, which studies uniform bounds on the expected error, the authors present a framework that exploits the specific learning algorithm used. The main difference from previous approaches lies in the complexity measure: rather than covering all hypotheses in a given hypothesis space, it is only necessary to cover the functions which could have been learned by the fixed learning algorithm.
Abstract
In contrast to standard statistical learning theory, which studies uniform bounds on the expected error, we present a framework that exploits the specific learning algorithm used. Motivated by the luckiness framework [8], we are also able to exploit the serendipity of the training sample. The main difference from previous approaches lies in the complexity measure: rather than covering all hypotheses in a given hypothesis space, it is only necessary to cover the functions which could have been learned using the fixed learning algorithm. We show how the resulting framework relates to the VC, luckiness and compression frameworks. Finally, we present an application of this framework to the maximum margin algorithm for linear classifiers, which results in a bound that exploits both the margin and the distribution of the data in feature space.
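As a rough illustration of the quantity such a margin-based bound exploits, the following minimal sketch (toy data and function names are hypothetical, and this is not the paper's algorithm or bound) computes the normalized margin of a linear classifier on a labelled training sample:

```python
# Illustrative sketch only: the normalized (geometric) margin of a linear
# classifier w on a sample, i.e. min_i y_i * <w, x_i> / ||w||.
import numpy as np

def normalized_margin(w, X, y):
    """Smallest geometric margin over the sample (assumes w separates the data)."""
    return np.min(y * (X @ w)) / np.linalg.norm(w)

# Hypothetical toy data for illustration only.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = np.array([1.0, 1.0])
print(normalized_margin(w, X, y))
```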



Citations
Journal Article

On the generalization ability of on-line learning algorithms

TL;DR: This paper proves tight data-dependent bounds for the risk of the resulting hypothesis in terms of an easily computable statistic M_n associated with the on-line performance of the ensemble, and obtains risk tail bounds for kernel perceptron algorithms in terms of the spectrum of the empirical kernel matrix.
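The "spectrum of the empirical kernel matrix" referred to above is simply the set of eigenvalues of the n x n Gram matrix K[i, j] = k(x_i, x_j) on the training sample. A minimal sketch, with an RBF kernel and random data chosen only for illustration:

```python
# Illustrative sketch only: compute the empirical kernel (Gram) matrix on a
# sample and its eigenvalue spectrum.
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    return np.exp(-gamma * sq_dists)

X = np.random.default_rng(0).normal(size=(10, 3))  # hypothetical sample
K = rbf_kernel_matrix(X)
eigvals = np.linalg.eigvalsh(K)        # empirical kernel spectrum
print(np.sort(eigvals)[::-1])          # eigenvalues in decreasing order
```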
Journal Article

Theory of classification : a survey of some recent advances

TL;DR: The last few years have witnessed important new developments in the theory and practice of pattern classification; this survey reviews the main new ideas that have led to these important recent developments.
Journal Article

Why Does Deep and Cheap Learning Work So Well?

TL;DR: It is argued that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine learning, a deep neural network can be more efficient than a shallow one.
Book Chapter

An introduction to boosting and leveraging

TL;DR: An introduction to theoretical and practical aspects of boosting and ensemble learning is provided, offering a useful reference for researchers in the field of boosting as well as for those seeking to enter this fascinating area of research.
Journal Article

Learning the Kernel with Hyperkernels

TL;DR: The equivalent representer theorem for the choice of kernels is stated and a semidefinite programming formulation of the resulting optimization problem is presented, which leads to a statistical estimation problem similar to the problem of minimizing a regularized risk functional.
References
Book

The Nature of Statistical Learning Theory

TL;DR: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?

Statistical learning theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Proceedings Article

A training algorithm for optimal margin classifiers

TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented; the technique is applicable to a wide variety of classification functions, including perceptrons, polynomials, and radial basis functions.
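A minimal usage sketch in the same spirit as this reference, using scikit-learn's SVC as a stand-in rather than the algorithm from the paper itself; a large C value approximates the hard-margin formulation, and the toy data are hypothetical:

```python
# Hedged illustration: a maximum-margin linear classifier via a standard
# library SVM, not the exact training algorithm from the cited paper.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])

clf = SVC(kernel="linear", C=1e6)   # large C ~ hard margin
clf.fit(X, y)

w = clf.coef_[0]
margin_width = 2.0 / np.linalg.norm(w)  # geometric width of the margin
print(w, margin_width)
```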
Book Chapter

Probability Inequalities for Sums of Bounded Random Variables

TL;DR: In this article, upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt; bounds are also obtained for certain sums of dependent random variables, such as U-statistics.
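For reference, the classical bound from this paper (Hoeffding's inequality) states that for independent random variables X_i with a_i <= X_i <= b_i and S = X_1 + ... + X_n,

\[
\Pr\bigl(S - \mathbb{E}S \ge nt\bigr) \;\le\; \exp\!\left(-\frac{2 n^2 t^2}{\sum_{i=1}^{n} (b_i - a_i)^2}\right), \qquad t > 0.
\]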