Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published on this topic, receiving 158,033 citations.


Papers
Journal Article
TL;DR: Experimental results indicate that SVM achieved a nearly 100% recognition rate and has certain advantages over the BP algorithm in approximate classification of traffic signs; in fine classification, SVM also shows its superiority to the BP algorithm.
Abstract: Support vector machine (SVM) is a novel machine learning method based on statistical learning theory, which can avoid over-learning and provides good generalization performance. In this research, multi-category SVM (M-SVM) is applied to traffic sign recognition and compared with the BP algorithm, which has been commonly used in neural networks. 116 Chinese ideal signs and 23 Japanese signs are first chosen for training the M-SVM and BP classifiers. Next, noisy and twisted signs taken from real Chinese and Japanese traffic signs are selected as a test set for testing the two networks. Experimental results indicate that SVM achieved a nearly 100% recognition rate and has certain advantages over the BP algorithm in approximate classification of traffic signs. In fine classification, SVM shows its superiority to the BP algorithm. Based on the analysis of the results, one may conclude that the SVM algorithm is well worth the research effort and is very promising in the area of traffic sign recognition.

1 citation
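
The paper's M-SVM implementation is not given here. As a rough illustration of the same idea, the sketch below trains a multi-class RBF-kernel SVM with scikit-learn on synthetic feature vectors standing in for traffic-sign descriptors; all class counts, feature sizes, and parameters are assumptions, not values from the paper.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 10, 50, 64   # assumed sizes, not from the paper
# Synthetic feature vectors standing in for traffic-sign descriptors.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3, random_state=0)

# RBF-kernel SVM; scikit-learn handles the multi-class case via one-vs-one internally.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))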

Book Chapter
19 Aug 2004
TL;DR: By using SVM, the authors classify and identify some probability distributions that appear in queuing systems, and they solve the density function regression problem using support vector regression (SVR).
Abstract: Solving for the performance of a queuing system depends on knowing the distributions of customer arrival or service times. Support vector machine (SVM), based on statistical learning theory, has been widely used in machine learning because of its good generalization ability. Using SVM, we can classify and identify some probability distributions that appear in queuing systems and solve the density function regression problem with support vector regression (SVR). Some remaining problems to be solved are formulated at the end.

1 citation
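
The chapter's formulation is not reproduced here. As a minimal sketch of density-function regression with SVR (not the authors' method), one can fit scikit-learn's SVR to a crude histogram estimate of an exponential service-time density, a distribution common in queuing models; the scale, bin count, and SVR parameters below are assumptions.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
samples = rng.exponential(scale=2.0, size=2000)   # assumed exponential service times

# Crude empirical density via a histogram, then regress density(x) with SVR.
counts, edges = np.histogram(samples, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

svr = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma="scale")
svr.fit(centers.reshape(-1, 1), counts)

grid = np.linspace(0, samples.max(), 200).reshape(-1, 1)
density_estimate = svr.predict(grid)   # smoothed density curve on the grid
print(density_estimate[:5])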

Journal Article
TL;DR: In this paper, a high-efficiency energy management control strategy for a hybrid fuel cell vehicle using neural networks and statistical learning theory was proposed, in which the weights of a neural network were designed to minimize fuel consumption over a given path.

1 citation
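
Neither the paper's vehicle model nor its training procedure is reproduced here. The sketch below only illustrates the general idea of choosing neural-network controller weights to minimize a fuel-consumption objective, using a toy power-demand profile, a made-up cost model, and plain random search; every quantity in it is an assumption.

import numpy as np

rng = np.random.default_rng(2)

# Toy drive-cycle power demand (kW) over a path -- purely illustrative.
demand = 20 + 10 * np.sin(np.linspace(0, 6 * np.pi, 300))

def controller(weights, p_demand):
    # Tiny one-hidden-layer network mapping power demand to a fuel-cell share in (0, 1).
    w1, b1, w2, b2 = weights[:4], weights[4:8], weights[8:12], weights[12]
    h = np.tanh(np.outer(p_demand, w1) + b1)          # shape (T, 4)
    return 1 / (1 + np.exp(-(h @ w2 + b2)))            # fuel-cell share per time step

def fuel_cost(weights):
    # Made-up cost: fuel-cell power has a nonlinear cost, battery use a small linear penalty.
    share = controller(weights, demand)
    fc_power = share * demand
    batt_power = (1 - share) * demand
    return np.sum(0.02 * fc_power ** 1.5 + 0.01 * batt_power)

# Plain random search over the weights, standing in for the paper's design procedure.
best_w, best_c = None, np.inf
for _ in range(2000):
    w = rng.normal(size=13)
    c = fuel_cost(w)
    if c < best_c:
        best_w, best_c = w, c
print("best toy fuel cost:", round(best_c, 2))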

Journal Article
TL;DR: Experimental results indicate that this method significantly improves the classification accuracy of SVM for unbalanced samples, and that classification is much faster than with conventional SVM while the correct rate does not decline.
Abstract: Support vector machine (SVM) is a quite efficient classification technique developed on statistical learning theory. However, when the samples of a two-class problem are very unbalanced, SVM performs poorly. To significantly improve the classification performance on imbalanced datasets, the characteristics of sparse least squares SVM are analyzed and an algorithm for unbalanced samples is proposed in this paper. Experiments with this algorithm are carried out on the UCI database. Experimental results indicate that this method significantly improves the classification accuracy of SVM for unbalanced samples. Classification is also much faster than with conventional SVM while the correct rate does not decline, especially when the number of support vectors is large.

1 citation
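
The sparse least squares SVM from the paper is not reproduced here. As a hedged illustration of the same imbalance problem, the sketch below uses scikit-learn's SVC with class_weight="balanced", which rescales the penalty C inversely to class frequency; the dataset sizes and class ratio are assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(3)
# Synthetic imbalanced two-class data: 950 negatives vs. 50 positives (assumed ratio).
X = np.vstack([rng.normal(loc=0.0, size=(950, 5)),
               rng.normal(loc=1.5, size=(50, 5))])
y = np.array([0] * 950 + [1] * 50)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3, random_state=0)

# class_weight="balanced" gives the minority class a proportionally larger penalty.
clf = SVC(kernel="rbf", C=1.0, gamma="scale", class_weight="balanced")
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))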

Proceedings Article
12 Jul 2009
TL;DR: This paper deals with the bounds on the rate of uniform convergence of learning processes when samples are corrupted by equality-expect noise on a quasi-probability space.
Abstract: The bounds on the rate of uniform convergence of learning processes play an important role in statistical learning theory. They provide a theoretical basis for the application of support vector machines and reflect the generalization ability of learning machines. This paper mainly deals with the bounds on the rate of uniform convergence of learning processes when samples are corrupted by equality-expect noise on a quasi-probability space.

1 citation
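
The paper's quasi-probability-space bounds are not stated in the abstract. For orientation, the classical uniform-convergence (VC) bound on an ordinary probability space, which the paper's setting generalizes to noisy samples on quasi-probability spaces, can be written as follows; this is Vapnik's classical result, not the paper's.

% Classical uniform-convergence (VC) bound on an ordinary probability space.
% With probability at least 1 - \eta, simultaneously for all functions \alpha in a
% class of VC dimension h, given \ell i.i.d. samples:
\[
  R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
  + \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}}{\ell}}
\]
% R(\alpha): expected risk;  R_{\mathrm{emp}}(\alpha): empirical risk on the \ell samples.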


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47