
Statistical learning theory

About: Statistical learning theory is a research topic. Over the lifetime, 1618 publications have been published within this topic receiving 158033 citations.


Papers
01 Jan 2005
TL;DR: The result of air-water two-phase flow regime identification in a horizontal pipe using SVM is compared with that using a BP neural network, showing that the SVM has higher identification accuracy.
Abstract: The support vector machine (SVM) is a machine-learning algorithm based on statistical learning theory (SLT) that retains good classification ability even with few samples. SVM thus offers a new way to develop intelligent flow regime identification. This paper proposes a novel flow regime identification method based on the support vector machine and wavelet packet decomposition. The energies of the frequency bands obtained by wavelet packet decomposition form the feature vectors used as input to the support vector machine. The result of air-water two-phase flow regime identification in a horizontal pipe using SVM is compared with that using a BP neural network, showing that the SVM achieves higher identification accuracy. The results demonstrate that the method is efficient and feasible.
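As a concrete sketch of the feature-extraction step described above (the Haar filter pair and two-level depth are illustrative assumptions; the abstract does not state which wavelet or depth was used), the sub-band energies from a wavelet packet decomposition can be computed as:

```python
# Sketch: wavelet packet band energies as an SVM feature vector.
# The Haar filter and 2-level depth are assumptions for illustration.
S2 = 2 ** -0.5  # 1/sqrt(2), makes the Haar transform orthonormal

def haar_split(x):
    """One Haar analysis step: return (approximation, detail) half-bands."""
    approx = [(x[i] + x[i + 1]) * S2 for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) * S2 for i in range(0, len(x) - 1, 2)]
    return approx, detail

def wavelet_packet_energies(signal, levels=2):
    """Fully decompose `signal` for `levels` steps and return the
    energy (sum of squares) of each terminal sub-band."""
    bands = [list(signal)]
    for _ in range(levels):
        bands = [half for band in bands for half in haar_split(band)]
    return [sum(c * c for c in band) for band in bands]

feats = wavelet_packet_energies([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0])
# 2 levels -> 4 sub-band energies; these form one SVM input vector.
```

Because the Haar transform is orthonormal, the band energies sum to the total signal energy (Parseval), so the features partition the signal's energy across frequency bands.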

5 citations

Proceedings ArticleDOI
29 Jul 2010
TL;DR: It is shown by simulation that the CPSO algorithm can derive a set of optimal parameters for the WSVM, and that the WSVM model possesses advantages such as a simple structure and fast convergence with high generalization ability.
Abstract: Statistical learning theory addresses machine learning with small samples. The support vector machine (SVM) is a new method based on statistical learning theory. Many kinds of functions can serve as an SVM kernel. Wavelet functions form a set of bases that can approximate arbitrary functions to arbitrary precision, so the Marr wavelet was used to construct a wavelet kernel. On the other hand, parameter selection must be carried out before training the wavelet SVM (WSVM); a modified chaotic particle swarm optimization (CPSO) was adopted to select the SVM parameters. Simulation shows that the CPSO algorithm can derive a set of optimal parameters for the WSVM, and that the WSVM model possesses advantages such as a simple structure and fast convergence with high generalization ability.
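A minimal sketch of the Marr ("Mexican hat") wavelet kernel in the coordinate-wise product form commonly used for wavelet SVMs; the dilation parameter `a` (which an optimizer such as CPSO would tune) and its default value are assumptions:

```python
import math

def marr_wavelet(t):
    """Marr ("Mexican hat") mother wavelet: (1 - t^2) * exp(-t^2 / 2)."""
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

def marr_kernel(x, z, a=1.0):
    """Wavelet kernel in the usual product form
    K(x, z) = prod_i psi((x_i - z_i) / a).
    `a` is the dilation parameter a tuner like CPSO would select;
    a = 1.0 here is an illustrative assumption."""
    return math.prod(marr_wavelet((xi - zi) / a) for xi, zi in zip(x, z))

# K(x, x) = 1 because psi(0) = 1 in every coordinate, and K is symmetric.
```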

5 citations

Journal ArticleDOI
TL;DR: The results show that the SVM-based model for predicting WLAN traffic is reasonable and feasible and has the best performance among the above-mentioned prediction models.
Abstract: The predictability of network traffic is an important and widely studied topic because it can lead to more efficient dynamic bandwidth allocation, admission control, and congestion control, and to better-performing wireless networks. The support vector machine (SVM), a novel type of learning machine based on statistical learning theory, can solve small-sample learning problems. The work presented in this paper examines the feasibility of applying SVM to predict actual WLAN traffic. We study one-step-ahead and multi-step-ahead prediction without any assumption on the statistical properties of the traffic. We also evaluate the performance of other prediction models, including ARIMA, FARIMA, an artificial neural network, and a wavelet-based model, on three actual WLAN traffic traces. The results show that the SVM-based model for predicting WLAN traffic is reasonable and feasible and performs best among the above-mentioned prediction models.
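The one-step-ahead and multi-step-ahead setups can be sketched as follows; the embedding order and the toy stand-in predictor are assumptions, since the paper's actual trained SVM regressor is not reproduced here:

```python
def make_windows(series, order):
    """Embed a traffic series into (lag-vector, next-value) training pairs
    for one-step-ahead prediction; `order` is the embedding length."""
    X = [series[i:i + order] for i in range(len(series) - order)]
    y = [series[i + order] for i in range(len(series) - order)]
    return X, y

def multi_step(predict, history, order, steps):
    """Iterated multi-step prediction: feed each one-step forecast back
    into the lag window. `predict` stands in for a trained SVM regressor."""
    window = list(history[-order:])
    out = []
    for _ in range(steps):
        yhat = predict(window)
        out.append(yhat)
        window = window[1:] + [yhat]
    return out

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
X, y = make_windows(series, order=3)
# X[0] == [1.0, 2.0, 3.0], y[0] == 4.0
forecast = multi_step(lambda w: w[-1] + 1.0, series, order=3, steps=2)
# with the toy "next = last + 1" predictor: [7.0, 8.0]
```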

5 citations

Journal Article
TL;DR: Calculation instances show that the optimization algorithm is effective and improves the precision and efficiency of classification; the optimal parameters are obtained through the genetic algorithm's random-search character.
Abstract: The support vector machine (SVM), which is based on statistical learning theory and the structural risk minimization principle, guarantees the largest generalization ability of a model. However, parameter selection for the support vector machine still lacks theoretical support and is difficult in practice. A genetic support vector machine algorithm is therefore proposed, combining a genetic algorithm with tenfold cross-validation; the optimal parameters are found through the genetic algorithm's random-search character. Finally, calculation instances show that the optimization algorithm is effective and improves the precision and efficiency of classification.
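A hedged sketch of the parameter-search scaffolding: tenfold index partitioning plus a minimal genetic loop. The population size, mutation scale, and toy fitness are assumptions; in the paper the fitness would be the tenfold cross-validation accuracy of an SVM trained with the candidate parameters:

```python
import random

def tenfold_splits(n, seed=0):
    """Shuffle indices 0..n-1 and partition them into 10 validation folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[k::10] for k in range(10)]

def ga_search(fitness, bounds, pop=12, gens=15, seed=0):
    """Minimal genetic search over real-valued parameters (e.g. C, gamma):
    keep the fitter half, refill by Gaussian mutation of elite members.
    `fitness` stands in for mean cross-validation accuracy."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=fitness, reverse=True)[: pop // 2]
        P = elite + [
            [min(max(g + rng.gauss(0, 0.3), lo), hi)
             for g, (lo, hi) in zip(rng.choice(elite), bounds)]
            for _ in range(pop - len(elite))
        ]
    return max(P, key=fitness)

# Toy fitness peaked at (C, gamma) = (10, 0.5) stands in for CV accuracy.
best = ga_search(lambda p: -((p[0] - 10) ** 2 + (p[1] - 0.5) ** 2),
                 bounds=[(0.1, 100.0), (0.01, 10.0)])
```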

5 citations

01 Jan 2006
TL;DR: In this article, the authors introduced a way of thinking about (one kind of) enumerative induction, which chooses a hypothesis from a given class C with minimal error on the data, and described a fundamental result (due to Vapnik and Chervonenkis) that enumerative induction uniformly converges to the best rule in C if and only if the VC dimension of C is finite.
Abstract: The following is a draft of Chapter Three of Reliable Reasoning: Induction and Statistical Learning Theory, to be published by MIT Press. Basic statistical learning theory is concerned with learning from data—either learning how to classify items or learning how to estimate the value of an unknown function. The basic framework assumes that there is a fixed background probability distribution relating observable features of an item to its classification or to the value of the unknown function, where the same distribution determines the probability that a given item will turn up, either as a datum or as a new case to be classified or evaluated. Apart from assuming that the data are independent and identically distributed, no other assumptions are made about the background probability distribution. (Questions about epistemic reliability appear to require some such assumption about background probability.) In the previous chapter, we introduced a way of thinking about (one kind of) enumerative induction, which chooses a hypothesis from a given class C with minimal error on the data. We described a fundamental result (due to Vapnik and Chervonenkis): enumerative induction uniformly converges to the best rule in C if and only if the "VC dimension" of C is finite. (We there explained uniform convergence and VC dimension.) The present Chapter Three discusses a somewhat different method of inductive inference.
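The "shattering" notion behind VC dimension can be made concrete with a textbook example not taken from the chapter itself: one-dimensional threshold rules h_a(x) = 1 if x >= a, else 0, shatter any single point but no pair of distinct points, so their VC dimension is 1:

```python
def threshold_labels(points, a):
    """Apply the threshold rule h_a(x) = 1 if x >= a else 0 to each point."""
    return tuple(1 if x >= a else 0 for x in points)

def shattered(points):
    """True if threshold rules realize all 2^n labelings of `points`.
    Only thresholds at each point, plus -inf and +inf, can produce
    distinct labelings, so checking those candidates is exhaustive."""
    candidates = [float("-inf"), float("inf")] + list(points)
    realizable = {threshold_labels(points, a) for a in candidates}
    return len(realizable) == 2 ** len(points)

# A single point is shattered; two distinct points are not, because the
# monotone rule can never label the smaller point 1 and the larger 0.
```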

5 citations


Network Information
Related Topics (5)
Artificial neural network
207K papers, 4.5M citations
86% related
Cluster analysis
146.5K papers, 2.9M citations
82% related
Feature extraction
111.8K papers, 2.1M citations
81% related
Optimization problem
96.4K papers, 2.1M citations
80% related
Fuzzy logic
151.2K papers, 2.3M citations
79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47