
Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published within this topic, receiving 158,033 citations.


Papers
Proceedings ArticleDOI
Partha Niyogi
28 May 2000
TL;DR: An analysis of real-valued function learning using neural networks shows how the generalization ability of a learner is bounded both by finite data and by limited representational capacity, and shifts attention away from asymptotics to learning with finite resources.
Abstract: We discuss two seemingly disparate problems of learning from examples within the framework of statistical learning theory. The first involves real-valued function learning using neural networks, and an analysis of this has two interesting aspects: (1) it shows how the generalization ability of a learner is bounded both by finite data and by limited representational capacity; (2) it shifts attention away from asymptotics to learning with finite resources. The perspective that this yields is then brought to bear on the second problem, that of learning natural language grammars, to articulate some issues that computational linguistics needs to deal with.
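As a rough illustration of the finite-resource perspective above (a standard decomposition, not the paper's exact statement), the excess risk of a learner $\hat{f}_n$ chosen from a hypothesis class $\mathcal{H}$ splits into an estimation term driven by finite data and an approximation term driven by limited representational capacity:

$$
R(\hat{f}_n) - R^{*} \;=\; \underbrace{\Big(R(\hat{f}_n) - \inf_{f \in \mathcal{H}} R(f)\Big)}_{\text{estimation error: finite data}} \;+\; \underbrace{\Big(\inf_{f \in \mathcal{H}} R(f) - R^{*}\Big)}_{\text{approximation error: limited capacity}}
$$

where $R$ is the true risk and $R^{*}$ the Bayes risk; the estimation term typically shrinks at a rate governed by the capacity of $\mathcal{H}$ relative to the sample size.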
Posted Content
TL;DR: In this article, the authors introduce the generalizations appropriate for the missing case, that of models with values in R^Q, providing a new guaranteed risk for M-SVMs which appears superior to the existing one.
Abstract: Bounds on the risk play a crucial role in statistical learning theory. They usually involve, as the capacity measure of the model studied, the VC dimension or one of its extensions. In classification, such "VC dimensions" exist for models taking values in {0, 1}, {1, ..., Q} and R. We introduce the generalizations appropriate for the missing case, that of models with values in R^Q. This provides us with a new guaranteed risk for M-SVMs which appears superior to the existing one.
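The abstract refers to guaranteed risks built from a capacity measure; one standard example (for models with values in {0, 1}, not the R^Q extension the paper introduces) is Vapnik's bound, which holds with probability at least $1 - \eta$ over a sample of size $n$:

$$
R(f) \;\le\; R_{\mathrm{emp}}(f) \;+\; \sqrt{\frac{h\left(\ln\frac{2n}{h} + 1\right) - \ln\frac{\eta}{4}}{n}}
$$

where $R_{\mathrm{emp}}$ is the empirical risk and $h$ is the VC dimension of the model class.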
Proceedings ArticleDOI
01 Nov 2007
TL;DR: Results indicate that the proposed material erosion rate model based on principal component least squares SVM (PCLS-SVM) features high learning speed and good generalization ability.
Abstract: Support vector machine (SVM) is a novel machine learning method based on statistical learning theory. A material erosion rate model based on principal component least squares SVM (PCLS-SVM) is proposed. PCA calculates principal components in a high-dimensional feature space and reduces the dimensionality of the samples. Cross-validation is used to select the parameters of the PCLS-SVM model. PCLS-SVM is applied to the prediction of material erosion rate. Results indicate that this method features high learning speed and good generalization ability.
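A minimal sketch of the kind of pipeline the abstract describes: PCA for dimensionality reduction followed by an SVM regressor whose parameters are chosen by cross-validation. The synthetic data, component count, and parameter grid are illustrative assumptions, and scikit-learn's SVR stands in for a dedicated least-squares SVM implementation.

```python
# Hypothetical PCA + SVM regression pipeline with cross-validated parameters,
# standing in for the PCLS-SVM model described in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for erosion-rate data: 200 samples, 20 raw features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)

model = Pipeline([
    ("scale", StandardScaler()),    # put features on a common scale
    ("pca", PCA(n_components=5)),   # keep the leading principal components
    ("svr", SVR(kernel="rbf")),     # kernel SVM regressor (LS-SVM stand-in)
])

# Cross-validation over regularization strength and kernel width.
param_grid = {"svr__C": [1, 10, 100], "svr__gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(model, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```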
Journal ArticleDOI
TL;DR: In this paper, the authors study how Lavrentiev regularization can be used in the context of learning theory, especially in regularization networks, which are closely related to support vector machines.
Abstract: In this paper we study how Lavrentiev regularization can be used in the context of learning theory, especially in regularization networks that are closely related to support vector machines. We briefly discuss formulations of learning from examples in the context of ill-posed inverse problems and regularization. We then study the interplay between the Lavrentiev regularization of the concerned continuous and discretized ill-posed inverse problems. As the main result of this paper, we give an improved probabilistic bound for regularization networks or least-squares algorithms, in which the regularization parameter can be chosen from a larger interval.
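As background for the regularization discussed above (a standard formulation, not the paper's exact setting): for an ill-posed operator equation $Az = y$ with $A$ positive semi-definite, Tikhonov regularization solves the normal-equation form while Lavrentiev regularization regularizes the operator directly,

$$
z_{\lambda}^{\mathrm{Tik}} = (A^{*}A + \lambda I)^{-1} A^{*} y,
\qquad
z_{\lambda}^{\mathrm{Lav}} = (A + \lambda I)^{-1} y .
$$

In regularization networks the Lavrentiev-type form is familiar from the kernel least-squares solution $c = (K + n\lambda I)^{-1} y$, where $K$ is the (positive semi-definite) kernel matrix on the sample.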
01 Jan 2011
TL;DR: Solomonoff’s formal, general, complete, and essentially unique theory of universal induction and prediction, rooted in algorithmic information theory and based on the philosophical and technical ideas of Ockham, Epicurus, Bayes, Turing, and Kolmogorov, essentially constitutes a conceptual solution to the induction problem.
Abstract: Humans and many other intelligent systems (have to) learn from experience, build models of the environment from the acquired knowledge, and use these models for prediction. In philosophy this is called inductive inference, in statistics it is called estimation and prediction, and in computer science it is addressed by machine learning. I will first review unsuccessful attempts and unsuitable approaches towards a general theory of induction, including Popper’s falsificationism and denial of (the necessity of) confirmation, frequentist statistics and much of statistical learning theory, subjective Bayesianism, Carnap’s confirmation theory, the data paradigm, eliminative induction, and deductive approaches. I will also debunk some other misguided views, such as the no-free-lunch myth and pluralism. I will then turn to Solomonoff’s formal, general, complete, and essentially unique theory of universal induction and prediction, rooted in algorithmic information theory and based on the philosophical and technical ideas of Ockham, Epicurus, Bayes, Turing, and Kolmogorov. This theory provably addresses most issues that have plagued other inductive approaches, and essentially constitutes a conceptual solution to the induction problem. Some theoretical guarantees, extensions to (re)active learning, practical approximations, applications, and experimental results are mentioned, but they are not the focus of this talk. I will conclude with some general advice to philosophers and scientists interested in the foundations of induction.
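For reference, the universal prior at the core of Solomonoff's theory (summarized here in its standard form, not quoted from the abstract) assigns to a finite string $x$ the probability

$$
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\lvert p \rvert},
$$

where $U$ is a universal prefix Turing machine and the sum ranges over (minimal) programs $p$ whose output begins with $x$; prediction then proceeds by conditioning, $M(x_{t+1} \mid x_{1:t}) = M(x_{1:t} x_{t+1}) / M(x_{1:t})$.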

Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47