scispace - formally typeset
Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over the lifetime, 1618 publications have been published within this topic receiving 158033 citations.


Papers
Proceedings ArticleDOI
01 Oct 2012
TL;DR: A fuzzy SVM multi-class classifier system based on the “one-against-all” strategy is designed and established; it improves SVM performance and classification precision by using fuzzy theory to reduce the blind area, and is applied to engine fault diagnosis.
Abstract: Support Vector Machines (SVMs) are a new-generation machine learning technique based on statistical learning theory. By using Structural Risk Minimization in place of Empirical Risk Minimization, they handle small-sample learning problems well and avoid over-fitting when the sample size is limited. A fuzzy SVM multi-class classifier system based on the “one-against-all” strategy is designed and established; it improves the performance and classification precision of SVM by using fuzzy theory to reduce the blind area. Experiments comparing RBF-NN, common SVM, and fuzzy SVM show that the model has good learning ability and generalization performance. Finally, the model is applied to engine fault diagnosis, where it improves classification accuracy and meets the requirements of fault diagnosis for the engine.
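The “one-against-all” decision rule and the fuzzy fix for its blind area can be sketched in a few lines. This is a minimal illustration with hypothetical decision scores and a simple logistic membership function, not the paper's exact formulation:

```python
import math

def one_vs_all_fuzzy(decision_values):
    """decision_values: dict mapping class -> signed distance from that
    class's binary SVM hyperplane.  A crisp one-against-all rule is
    ambiguous when zero or several values are positive (the 'blind
    area'); a fuzzy rule instead picks the class with the largest
    membership, so every sample gets a decision."""
    # Simple membership: squash each score into (0, 1) with a logistic map.
    memberships = {c: 1.0 / (1.0 + math.exp(-v))
                   for c, v in decision_values.items()}
    return max(memberships, key=memberships.get)

# Example: two fault classifiers both fire; the fuzzy rule resolves the
# ambiguity by magnitude instead of rejecting the sample.
print(one_vs_all_fuzzy({"normal": -0.4, "misfire": 0.3, "knock": 0.9}))
# -> knock
```

The class names here ("normal", "misfire", "knock") are illustrative stand-ins for the engine fault categories the paper diagnoses.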
Posted Content
TL;DR: In this paper, a MapReduce-based distributed parallel SVM training algorithm for binary classification problems is presented; the authors use statistical learning theory to find the predictive hypothesis that minimizes the empirical risk over hypothesis spaces created by the reduce function of MapReduce.
Abstract: Although the Support Vector Machine (SVM) algorithm generalizes well to unseen examples after the training phase and has a small loss value, it is not suitable for many real-life classification and regression problems: standard SVM solvers cannot handle training datasets with hundreds of thousands of examples. In previous studies on distributed machine learning algorithms, SVMs were trained on costly, preconfigured computing environments. In this research, we present a MapReduce-based distributed parallel SVM training algorithm for binary classification problems. This work shows how to distribute the optimization problem over cloud computing systems with the MapReduce technique. In the second step of this work, we use statistical learning theory to find the predictive hypothesis that minimizes the empirical risk over hypothesis spaces created by the reduce function of MapReduce. The results of this research are important for training SVM-based classifiers on big datasets. We prove that with iterative training of the split dataset using the MapReduce technique, the accuracy of the classifier function converges to the accuracy of the globally optimal classifier function in a finite number of iterations. The algorithm's performance was measured on samples from the letter recognition and pen-based recognition of handwritten digits datasets.
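The iterate-map-then-reduce structure can be sketched as follows. This is a deliberate simplification, not the paper's algorithm: each "map" task runs hinge-loss subgradient descent on its data split, and the "reduce" step averages the per-split weight vectors before the next iteration:

```python
def hinge_sgd(data, w, lam=0.1, lr=0.05, epochs=20):
    """One map task: subgradient descent on the regularized hinge loss
    for a single data split.  data is a list of (features, label) pairs
    with labels in {+1, -1}."""
    w = list(w)
    for _ in range(epochs):
        for x, y in data:
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            for i, xi in enumerate(x):
                # Subgradient: regularizer always, hinge term only inside margin.
                grad = lam * w[i] - (y * xi if margin < 1 else 0.0)
                w[i] -= lr * grad
    return w

def reduce_avg(weight_vectors):
    """Reduce step: merge per-split models by averaging."""
    n = len(weight_vectors)
    return [sum(ws) / n for ws in zip(*weight_vectors)]

# Two splits of a linearly separable toy set; iterate map -> reduce.
splits = [
    [([2.0, 1.0], 1), ([-1.5, -1.0], -1)],
    [([1.5, 2.0], 1), ([-2.0, -0.5], -1)],
]
w = [0.0, 0.0]
for _ in range(5):  # outer MapReduce iterations
    w = reduce_avg([hinge_sgd(s, w) for s in splits])
print(all(y * sum(wi * xi for wi, xi in zip(w, x)) > 0
          for s in splits for x, y in s))
```

In a real MapReduce deployment the list comprehension over `splits` would run as parallel map tasks; model averaging is one of several merge strategies (the paper's reduce function differs in detail).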
30 Mar 2009
TL;DR: Framed as “A Patient Pessimist's Guide to Induction,” this review discusses the application of statistical learning theory to the philosophy of induction; the purpose of its remarks is to say something more than the book under review does.
Abstract: Reliable Reasoning is a simple, accessible, beautifully explained introduction to Vapnik and Chervonenkis's statistical learning theory. It includes a modest discussion of the application of the theory to the philosophy of induction; the purpose of these remarks is to say something more. 1. A Patient Pessimist's Guide to Induction. Philosophical Learning Theory. Vapnik and Chervonenkis's statistical learning theory may be compared to formal learning theory, familiar to philosophers from the work of Putnam (1963) and Kelly (1996). There are significant technical differences between the two theories, but considered as philosophical frameworks for thinking about inductive reasoning, they have much in common. I will say that they are both—in their epistemological incarnations—species of philosophical learning theory. The programmatic goal of formal learning theory is to investigate methods for learning from experience that are guaranteed to converge on the truth (or at least guaranteed to come as close as possible) under some given set of circumstances. If you have a method that is sure to converge, the thought goes, then provided that the particular circumstances within which the guarantee is offered actually hold, you are sure to find the truth—eventually. The problem of induction is in that case solved. Or at least, a problem of induction is solved, since different methods may be recommended in different circumstances. Statistical learning theory takes a similar approach; the most important difference for philosophical purposes is that, assuming as it does that we live in an inherently stochastic world, it does not pursue convergence per se but a kind (or several kinds) of probabilistic convergence. Rather than providing a guarantee of convergence on the truth, it
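The "probabilistic convergence" at issue can be made concrete with a standard Vapnik–Chervonenkis generalization bound (a textbook result from the theory, not taken from this review's text): for a hypothesis class of VC dimension $h$ and $n$ i.i.d. samples, with probability at least $1 - \delta$,

```latex
R(f) \;\le\; R_{\mathrm{emp}}(f)
  + \sqrt{\frac{h\left(\ln\frac{2n}{h} + 1\right) + \ln\frac{4}{\delta}}{n}}
```

where $R(f)$ is the true risk and $R_{\mathrm{emp}}(f)$ the empirical risk of hypothesis $f$. The guarantee holds only with high probability, never with certainty, which is precisely the sense in which statistical learning theory pursues probabilistic rather than outright convergence.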
Book ChapterDOI
01 Jan 2023
TL;DR: In this article, a practical guide to using SVMs is provided together with a basic understanding of the theory behind them. Several examples from the field of water resource engineering illustrate the effect of SVM parameters on the resulting regression, how to select good values for those parameters, and the use of different kernel functions.
Abstract: The Support Vector Machine (SVM) is currently a hot topic in the statistical learning area and is widely used in data classification and regression modeling. However, obtaining the best results with SVMs requires an understanding of their workings and the various ways a user can influence their accuracy. In this chapter, the use of SVMs in practice is presented together with a basic understanding of the theory behind them. Several examples from the field of water resource engineering are provided, and the effect of the SVM parameters on the resulting regression, how to select good values for those parameters, and the use of different kernel functions are described.
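The kernel-parameter effect the chapter describes can be seen directly from the kernel function itself. A minimal sketch of the RBF (Gaussian) kernel, the most common choice for SVM regression: larger `gamma` makes similarity decay faster between nearby points, yielding a more local, wiggly fit, while smaller `gamma` gives a smoother one (scalar inputs for simplicity; the specific values are illustrative, not from the chapter):

```python
import math

def rbf_kernel(x, z, gamma):
    """RBF kernel k(x, z) = exp(-gamma * ||x - z||^2), scalar inputs."""
    return math.exp(-gamma * (x - z) ** 2)

# Two points one unit apart: similarity under different gamma values.
# As gamma grows, the points look increasingly dissimilar to the model.
for gamma in (0.1, 1.0, 10.0):
    print(gamma, round(rbf_kernel(0.0, 1.0, gamma), 4))
```

In practice, `gamma` (together with the regularization parameter `C` and, for regression, the tube width `epsilon`) is typically chosen by cross-validated grid search rather than by hand.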

Network Information
Related Topics (5)
Artificial neural network
207K papers, 4.5M citations
86% related
Cluster analysis
146.5K papers, 2.9M citations
82% related
Feature extraction
111.8K papers, 2.1M citations
81% related
Optimization problem
96.4K papers, 2.1M citations
80% related
Fuzzy logic
151.2K papers, 2.3M citations
79% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2023	9
2022	19
2021	59
2020	69
2019	72
2018	47