Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published within this topic, receiving 158,033 citations.


Papers
Book
15 Aug 2002
TL;DR: Self-optimizing and Pareto-optimal policies in general environments based on Bayes-mixtures are proposed.
Abstract: Statistical Learning Theory.- Agnostic Learning Nonconvex Function Classes.- Entropy, Combinatorial Dimensions and Random Averages.- Geometric Parameters of Kernel Machines.- Localized Rademacher Complexities.- Some Local Measures of Complexity of Convex Hulls and Generalization Bounds.- Online Learning.- Path Kernels and Multiplicative Updates.- Predictive Complexity and Information.- Mixability and the Existence of Weak Complexities.- A Second-Order Perceptron Algorithm.- Tracking Linear-Threshold Concepts with Winnow.- Inductive Inference.- Learning Tree Languages from Text.- Polynomial Time Inductive Inference of Ordered Tree Patterns with Internal Structured Variables from Positive Data.- Inferring Deterministic Linear Languages.- Merging Uniform Inductive Learners.- The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions.- PAC Learning.- New Lower Bounds for Statistical Query Learning.- Exploring Learnability between Exact and PAC.- PAC Bounds for Multi-armed Bandit and Markov Decision Processes.- Bounds for the Minimum Disagreement Problem with Applications to Learning Theory.- On the Proper Learning of Axis Parallel Concepts.- Boosting.- A Consistent Strategy for Boosting Algorithms.- The Consistency of Greedy Algorithms for Classification.- Maximizing the Margin with Boosting.- Other Learning Paradigms.- Performance Guarantees for Hierarchical Clustering.- Self-Optimizing and Pareto-Optimal Policies in General Environments Based on Bayes-Mixtures.- Prediction and Dimension.- Invited Talk.- Learning the Internet.

2 citations

Proceedings ArticleDOI
01 Jan 2013
TL;DR: The consistency of the empirical risk minimization principle when samples are corrupted by noise is established on credibility space, and bounds on the rate of uniform convergence of learning processes with noise-corrupted samples are proposed and proven on the non-additive measure space.
Abstract: Bounds on the rate of convergence of learning processes play an important role in statistical learning theory. However, research on them has so far focused only on probability-measure (additive-measure) spaces, and the samples are assumed to be noise-free. This paper explores statistical learning theory on credibility space: the consistency of the empirical risk minimization principle when samples are corrupted by noise is established on credibility space, and bounds on the rate of uniform convergence of learning processes with noise-corrupted samples are proposed and proven on the non-additive measure space.
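
For context, the classical result that this paper generalizes is Vapnik's uniform-convergence bound on an ordinary (additive) probability space. A standard statement is sketched below; this is the textbook bound, not the paper's credibility-space version.

```latex
% Classical VC-type generalization bound on an additive probability space;
% the paper above extends this setting to credibility (non-additive) spaces
% with noise-corrupted samples.
% With probability at least 1 - \eta, simultaneously for all functions
% \alpha in a class of VC dimension h, given \ell i.i.d. samples:
\[
R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
  + \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}}{\ell}}
\]
% R is the expected risk and R_emp the empirical risk; the rate of
% uniform convergence is governed by the square-root term.
```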

2 citations

Journal Article
TL;DR: The classification and regression methods of SVM are introduced, the characteristics of the methods are analyzed, and the prospects for applying SVM to earthquake prediction are discussed.
Abstract: Statistical learning theory (SLT) is a statistical theory for small samples. Support vector machine (SVM) is a machine learning method based on statistical learning theory that can handle highly nonlinear classification and regression problems. SVM not only copes with difficulties such as small samples, over-fitting, high dimensionality, and local minima, but also has higher generalization (forecasting) ability than artificial neural networks. In this paper, the classification and regression methods of SVM are introduced, the characteristics of the methods are analyzed, and the prospects for applying SVM to earthquake prediction are discussed.
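
For reference, the maximum-margin idea behind SVM classification can be written as the textbook soft-margin primal problem (a standard formulation, not notation taken from this paper):

```latex
% Soft-margin SVM primal: maximize the margin (minimize \lVert w \rVert)
% while penalizing slack variables \xi_i for margin violations.
\begin{aligned}
\min_{w,\,b,\,\xi}\quad & \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{n}\xi_i \\
\text{s.t.}\quad & y_i\,(w^{\top}x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0,
\quad i = 1,\dots,n.
\end{aligned}
% C > 0 trades margin width against training error; replacing x_i by a
% feature map \phi(x_i) (the kernel trick) handles nonlinear problems,
% and an \epsilon-insensitive loss yields the regression variant (SVR).
```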

2 citations

Proceedings ArticleDOI
07 Oct 2001
TL;DR: It is shown that the regularized solution can be derived from the Fourier transformation operator in the transformation domain and, in an equivalent form, from the linear differential operator in the spatial domain.
Abstract: The paper provides a new viewpoint on regularization theory from several perspectives. It is shown that the regularized solution can be derived from the Fourier transformation operator in the transformation domain and, in an equivalent form, from the linear differential operator in the spatial domain. The state of the art in regularization research is briefly reviewed, with extended discussion of Occam's razor, minimum description length, the Bayesian framework, pruning algorithms, statistical learning theory, and equivalent regularization.
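
As background for the equivalence the abstract describes, regularization theory is usually formulated as a Tikhonov functional; the sketch below uses the standard regularization-networks form (generic textbook notation, assumed rather than taken from the paper):

```latex
% Tikhonov regularization: fit the data while penalizing roughness,
% measured through a stabilizer built from a linear differential operator P.
\[
H[f] \;=\; \sum_{i=1}^{n}\bigl(y_i - f(x_i)\bigr)^{2}
  \;+\; \lambda\,\lVert P f\rVert^{2}
\]
% In the Fourier (transformation) domain the same penalty weights
% high-frequency content of f:
\[
\lVert P f\rVert^{2} \;=\; \int \frac{|\tilde{f}(s)|^{2}}{\tilde{G}(s)}\,ds
\]
% \tilde{f} is the Fourier transform of f and \tilde{G} that of the
% Green's function of the operator P^{\dagger}P, which is why the
% spatial (differential-operator) and Fourier views give the same solution.
```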

2 citations

01 Jan 2012
TL;DR: A brief description of the Support Vector Machine method is given first, after which some important applications in petroleum engineering are briefly discussed.
Abstract: Support Vector Machine is a supervised machine learning algorithm that originated from Statistical Learning Theory and is used for both classification and regression tasks in a wide variety of engineering problems. SVM implementations show that it yields more accurate results than neural networks and classical statistical methods in most applications. Furthermore, Support Vector Machine is better suited to situations where the populations are small and nonlinear. The basic ideas behind the Support Vector Machine algorithm, however, can be explained without ever reading an equation. In this paper, a brief description of the Support Vector Machine method is therefore given first, after which some important applications in petroleum engineering are briefly discussed.
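
A minimal sketch of the two task types the abstract mentions, using scikit-learn's SVC and SVR on synthetic data (an illustration of the general technique only, not the paper's code, data, or petroleum-engineering application):

```python
# Minimal SVM classification and regression sketch (scikit-learn).
# Synthetic data only; illustrates the two task types discussed above.
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)

# Classification: two Gaussian blobs separated with an RBF kernel.
X_cls = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),
                   rng.normal(1.0, 0.5, (50, 2))])
y_cls = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="rbf", C=1.0).fit(X_cls, y_cls)
print("classification accuracy:", clf.score(X_cls, y_cls))

# Regression: noisy sine curve fitted with epsilon-insensitive loss.
X_reg = np.linspace(0.0, 2.0 * np.pi, 80).reshape(-1, 1)
y_reg = np.sin(X_reg).ravel() + rng.normal(0.0, 0.1, 80)
reg = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_reg, y_reg)
print("regression R^2:", reg.score(X_reg, y_reg))
```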

2 citations


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47