
Statistical learning theory

About: Statistical learning theory is a research topic. Over the lifetime, 1618 publications have been published within this topic receiving 158033 citations.


Papers
01 Jan 2015
TL;DR: These notes are a work in progress, adapted from lecture notes for a course the author taught at Columbia University, and based on various materials, in particular notes developed during a reading group at the University of Wisconsin–Madison.
Abstract: These notes are a work in progress, adapted from lecture notes for a course the author taught at Columbia University. They are based on various materials, in particular notes developed during a reading group at the University of Wisconsin–Madison (coordinated by Robert Nowak). Any comments and remarks are most welcome. The author thanks Ervin Tánczos and Sándor Kolumbán for helping revise the notes and correct typos. Naturally, the author of the current version is solely responsible for any errors it contains.

2 citations

01 Jan 2005
TL;DR: A support vector machine was used to predict the time sequence of strong earthquakes and to forecast the maximum earthquake magnitude in China's mainland for the following year; the results show that the method has good forecasting performance.
Abstract: Statistical learning theory is a statistical theory for small samples. The support vector machine is a machine learning method based on statistical learning theory. It not only helps to solve problems such as small samples, over-learning (overfitting), high dimensionality, and local minima, but also has strong generalization (forecasting) ability. The support vector machine was used to predict the time sequence of strong earthquakes and to forecast the maximum earthquake magnitude in China's mainland for the following year. The results show that the method has good forecasting performance. The results also indicate a relation between worldwide strong-earthquake activity and sunspots; though the relation is still not clear, the nonlinear relation is well captured by the support vector machine.
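To make the abstract's approach concrete, here is a minimal sketch of support vector regression applied to a time sequence: a linear SVR with the epsilon-insensitive loss, trained by subgradient descent, forecasting the next value from a sliding window. The magnitude series, window length, and all hyperparameters are hypothetical illustrations, not taken from the paper (which would use a full kernel SVM on real catalog data).

```python
def train_linear_svr(X, y, epsilon=0.1, C=1.0, lr=0.01, epochs=200):
    """Fit w, b by subgradient descent on the epsilon-insensitive loss,
    the core objective of support vector regression (linear for brevity)."""
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = pred - yi
            if abs(err) <= epsilon:
                g = 0.0  # inside the epsilon-tube: no loss gradient
            else:
                g = C * (1.0 if err > 0 else -1.0)
            # Subgradient step: L2 regularizer plus epsilon-insensitive loss.
            w = [wj - lr * (wj + g * xj) for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return sum(wj * xj for wj, xj in zip(w, x)) + b

# Sliding-window forecasting on a hypothetical magnitude series.
series = [5.1, 5.3, 5.0, 5.4, 5.2, 5.5, 5.3, 5.6, 5.4, 5.7]
window = 3
X = [series[i:i + window] for i in range(len(series) - window)]
y = [series[i + window] for i in range(len(series) - window)]
w, b = train_linear_svr(X, y)
next_value = predict(w, b, series[-window:])
```

Each training example is a window of three past values with the next value as the target; `next_value` is the one-step-ahead forecast from the last window.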

2 citations

Proceedings ArticleDOI
25 Jul 2015
TL;DR: An improved PSO was applied to optimize the parameters of an SVM to enhance its classification ability; the simulation results showed that the proposed method gave better diagnosis results.
Abstract: To address the disadvantages of the PSO (Particle Swarm Optimization) algorithm, such as premature convergence and easily getting trapped in local extrema, an improved PSO algorithm is presented in this paper. On the one hand, the population with worse performance moves near the global optimum of the other population with a certain probability; on the other hand, when the two populations are continuously trapped in the same local extremum, one population is randomly chosen to mutate, stimulating the particles to jump out of it. The simulation results showed that the improved PSO had better optimization performance. An SVM (Support Vector Machine) trained with the improved PSO was applied to fault diagnosis of a diesel engine valve and acquired higher accuracy. Keywords: particle swarm optimization; multi-population; support vector machine; fault diagnosis. I. INTRODUCTION: Fault diagnosis can be viewed as a pattern classification problem, and SVM, a machine learning algorithm based on statistical learning theory, has good generalization ability [5-6]. However, SVM performance is largely determined by the model parameters, so the improved PSO was applied to optimize the parameters of the SVM to enhance its classification ability. The simulation results showed that the proposed method gave better diagnosis results.
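The parameter search described above can be sketched with a standard single-population PSO (not the paper's improved two-population variant). The objective below is a hypothetical smooth validation-error surface over (log C, log gamma); a real run would evaluate SVM cross-validation error at each particle position instead.

```python
import random

random.seed(0)

def validation_error(c_log, g_log):
    # Stand-in for SVM cross-validation error over (log C, log gamma);
    # the minimum of 0.1 sits at (1.0, -2.0). Purely illustrative.
    return (c_log - 1.0) ** 2 + (g_log + 2.0) ** 2 + 0.1

def pso(objective, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO: inertia w, cognitive pull c1 toward each particle's
    personal best, social pull c2 toward the global best."""
    pos = [[random.uniform(lo, hi), random.uniform(lo, hi)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp positions to the search box.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(*pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, err = pso(validation_error)
```

The swarm converges toward the minimum of the surrogate surface; `best` would then be decoded into the (C, gamma) pair used to train the final SVM.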

2 citations

Journal ArticleDOI
TL;DR: This work establishes a new upper bound on the number of samples sufficient for PAC learning in the realizable case that matches known lower bounds up to numerical constant factors.
Abstract: This work establishes a new upper bound on the number of samples sufficient for PAC learning in the realizable case. The bound matches known lower bounds up to numerical constant factors. This solv...
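For context, the matching bounds referred to here take the standard form from the PAC literature, where d is the VC dimension of the hypothesis class, epsilon the target error, and delta the failure probability (standard notation, not quoted from the paper):

```latex
% Realizable-case PAC sample complexity: the upper bound matches
% the known lower bound up to numerical constant factors.
m(\epsilon,\delta) \;=\; O\!\left(\frac{1}{\epsilon}\left(d + \log\frac{1}{\delta}\right)\right),
\qquad
m(\epsilon,\delta) \;=\; \Omega\!\left(\frac{1}{\epsilon}\left(d + \log\frac{1}{\delta}\right)\right).
```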

2 citations

Proceedings ArticleDOI
28 Jun 2011
TL;DR: It is found that the overfit can decay to zero with increasing network size even if only a vanishing fraction of the total network is observed, but the amount of observed data on a per-node basis should increase with the size of the graph.
Abstract: The latent position model is a well known model for social network analysis which has also found application in other fields, such as analysis of marketing and e-commerce data. In such applications, the data sets are increasingly massive and only partially observed, giving rise to the possibility of overfitting by the model. Using tools from statistical learning theory, we bound the VC dimension of the latent position model, leading to bounds on the overfit of the model. We find that the overfit can decay to zero with increasing network size even if only a vanishing fraction of the total network is observed. However, the amount of observed data on a per-node basis should increase with the size of the graph.
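As background for the VC-dimension argument above, a standard VC-type generalization bound (Vapnik's classical form, not the paper's specific result for the latent position model) states that with probability at least 1 − delta over n observed samples:

```latex
% R(h): true risk; \widehat{R}_n(h): empirical risk on n samples;
% d: VC dimension of the hypothesis class.
R(h) \;\le\; \widehat{R}_n(h)
  + \sqrt{\frac{d\left(\ln\frac{2n}{d} + 1\right) + \ln\frac{4}{\delta}}{n}}.
```

The overfit term shrinks as n grows relative to d, which is the mechanism behind the paper's observation that per-node observations must grow with the graph.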

2 citations


Network Information
Related Topics (5)
Artificial neural network
207K papers, 4.5M citations
86% related
Cluster analysis
146.5K papers, 2.9M citations
82% related
Feature extraction
111.8K papers, 2.1M citations
81% related
Optimization problem
96.4K papers, 2.1M citations
80% related
Fuzzy logic
151.2K papers, 2.3M citations
79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47