Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over the lifetime, 1618 publications have been published within this topic receiving 158033 citations.


Papers
Proceedings ArticleDOI
24 Sep 2015
TL;DR: The experimental results show that classifying induction-motor bearing faults using wavelet packet decomposition, time-domain features, and support vector machine pattern recognition provides a new approach to intelligent bearing fault diagnosis.
Abstract: In this paper, an intelligent condition-monitoring scheme for induction motors based on wavelet packet decomposition and time-domain features is presented. Classification is performed with a support vector machine (SVM), which rests on statistical learning theory. Data were collected in the lab from a 10 HP induction motor with different bearing defects, using a piezoelectric accelerometer. The signal is then processed to extract time-domain and wavelet features; wavelet packet decomposition extracts features from the time-frequency domain, and decomposition is carried to the 3rd level in this work. The experimental results show that classifying the bearing faults of the induction motor by combining wavelet packet decomposition, time-domain features, and SVM-based pattern recognition provides a new approach to intelligent bearing fault diagnosis. A MATLAB GUI was developed to make the tool more user-friendly.

9 citations
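As an illustration of the pipeline this abstract describes, here is a minimal Python sketch that combines time-domain statistics with 3rd-level wavelet packet node energies and feeds them to an SVM classifier. The synthetic signals, the 'db4' wavelet, and the exact feature set are assumptions for illustration, not details taken from the paper (whose implementation is a MATLAB GUI).

```python
# Hedged sketch: wavelet-packet + time-domain features with an SVM classifier.
# Placeholder random "vibration" data stands in for the paper's motor signals.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def extract_features(signal):
    """Time-domain statistics plus 3rd-level wavelet packet node energies."""
    time_feats = [signal.mean(), signal.std(),
                  np.sqrt(np.mean(signal ** 2)),   # RMS
                  np.max(np.abs(signal))]          # peak
    wp = pywt.WaveletPacket(signal, wavelet='db4', maxlevel=3)
    energies = [np.sum(node.data ** 2) for node in wp.get_level(3)]  # 8 nodes
    return np.array(time_feats + energies)

rng = np.random.default_rng(0)
signals = rng.standard_normal((200, 1024))   # placeholder vibration records
labels = rng.integers(0, 3, size=200)        # placeholder fault classes

X = np.array([extract_features(s) for s in signals])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```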

Journal ArticleDOI
19 Sep 2014
TL;DR: This paper reviews recent progress in swarm-intelligence-based parameter optimisation for SVMs and points out the research and development prospects of this class of methods.
Abstract: Support vector machine (SVM) is a machine learning method based on statistical learning theory that has become a hot research topic in machine learning because of its excellent performance. However, the performance of an SVM is very sensitive to its parameters, and swarm intelligence is currently the most common approach to optimising them. In this paper, research on parameter optimisation of SVMs based on swarm intelligence algorithms is reviewed. Firstly, we briefly introduce the theoretical basis of SVMs. Secondly, we describe recent progress in swarm-intelligence-based parameter optimisation of SVMs. Finally, we point out the research and development prospects of this class of methods.

9 citations
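The review covers swarm-intelligence optimisers in general; as one concrete instance, below is a hedged Python sketch of particle swarm optimisation (PSO) tuning an SVM's C and gamma by cross-validation. The swarm size, inertia and acceleration coefficients, log-scale search bounds, and synthetic data are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: PSO over (log10 C, log10 gamma) maximising CV accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(pos):
    C, gamma = 10.0 ** pos                 # search in log10 space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(0)
n_particles, dims, iters = 12, 2, 20
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # log10 bounds
pos = rng.uniform(lo, hi, (n_particles, dims))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best (C, gamma):", 10.0 ** gbest, "CV accuracy:", pbest_f.max())
```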

Journal Article
Yu Zhao, Bing Li, Xiu Li, Wenhuang Liu, Shouju Ren
TL;DR: This work introduces an improved one-class SVM, which is shown to perform very well compared with traditional methods: ANN, Decision Tree, and Naive Bayes.
Abstract: Customer churn prediction is an increasingly pressing issue in today's ever-competitive commercial arena. Although there is prior research on churn prediction, the accuracy rates achieved, which matter greatly to business, are not high enough. Recently, Support Vector Machines (SVMs), based on statistical learning theory, have been gaining applications in data mining, machine learning, computer vision, and pattern recognition because of their high accuracy and good generalization capability, but there has been no report of applying SVMs to customer churn prediction. Because churn data sets characteristically contain very few negative examples, we introduce an improved one-class SVM. We have tested our method on a wireless-industry customer churn data set, and it has been shown to perform very well compared with traditional methods: ANN, Decision Tree, and Naive Bayes.

9 citations
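Since the paper's improved one-class SVM and its wireless churn data are not reproduced here, the following Python sketch shows the standard one-class SVM idea the abstract builds on: fit the model on the majority (non-churn) class and flag outliers as churners. The synthetic imbalanced data and the nu setting are assumptions for illustration.

```python
# Hedged sketch: standard one-class SVM for churn-style class imbalance.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in: ~5% churners (label 1), ~95% stayers (label 0).
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Fit only on the majority class; nu roughly caps the inlier-side error rate.
oc = OneClassSVM(kernel='rbf', nu=0.05, gamma='scale')
oc.fit(X_tr[y_tr == 0])

# OneClassSVM predicts +1 for inliers and -1 for outliers; map -1 to churn.
pred = (oc.predict(X_te) == -1).astype(int)
print(classification_report(y_te, pred, target_names=['stay', 'churn']))
```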

Posted Content
TL;DR: In this article, the authors consider batch reinforcement learning with general value function approximation, study the minimal assumptions needed to reliably estimate/minimize Bellman error, and characterize the generalization performance by (local) Rademacher complexities of general function classes.
Abstract: This paper considers batch Reinforcement Learning (RL) with general value function approximation. Our study investigates the minimal assumptions needed to reliably estimate/minimize Bellman error, and characterizes the generalization performance by (local) Rademacher complexities of general function classes, which takes initial steps toward bridging the gap between statistical learning theory and batch RL. Concretely, we view the Bellman error as a surrogate loss for the optimality gap, and prove the following: (1) In the double sampling regime, the excess risk of the Empirical Risk Minimizer (ERM) is bounded by the Rademacher complexity of the function class. (2) In the single sampling regime, sample-efficient risk minimization is not possible without further assumptions, regardless of the algorithm. However, with completeness assumptions, the excess risk of FQI and a minimax-style algorithm can again be bounded by the Rademacher complexity of the corresponding function classes. (3) Fast statistical rates can be achieved by using tools of local Rademacher complexity. Our analysis covers a wide range of function classes, including finite classes, linear spaces, kernel spaces, sparse linear features, etc.

9 citations
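To make the central quantity concrete (this is not the paper's algorithm, only an illustration), the Python sketch below Monte Carlo estimates the empirical Rademacher complexity of a finite function class, the measure the paper uses to bound the ERM excess risk. Representing the class as a matrix of function values on the sample points is an assumption for illustration.

```python
# Hedged sketch: Monte Carlo estimate of empirical Rademacher complexity
#   R_hat = E_sigma[ sup_{f in F} (1/n) * sum_i sigma_i * f(x_i) ]
# for a finite class F given as rows of values f(x_i) on n sample points.
import numpy as np

def empirical_rademacher(F_values, n_draws=2000, seed=0):
    """F_values: (n_functions, n_points) array of function values."""
    rng = np.random.default_rng(seed)
    n = F_values.shape[1]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)   # Rademacher signs
        total += np.max(F_values @ sigma) / n     # sup over the finite class
    return total / n_draws

# Example: 50 random [-1, 1]-bounded functions evaluated on 100 points.
rng = np.random.default_rng(1)
F = rng.uniform(-1, 1, size=(50, 100))
print("empirical Rademacher complexity ~", empirical_rademacher(F))
```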

Journal Article
TL;DR: A soft sensor model based on the SVM is presented that features high learning speed, good approximation, good generalization ability, and little dependence on the sample set.
Abstract: Support vector machine (SVM) is a new learning machine based on statistical learning theory. This paper presents a soft sensor model based on the SVM. Theoretical and simulation analysis indicates that this method features high learning speed, good approximation, good generalization ability, and little dependence on the sample set, and that it performs better than soft-sensor modeling based on an RBF neural network.

9 citations
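In this setting a soft sensor is a regression model that predicts a hard-to-measure process variable from easily measured ones; the Python sketch below uses support vector regression (SVR) in that role. The synthetic process data and hyperparameters are illustrative assumptions; the paper's actual process and its RBF-network baseline are not reproduced here.

```python
# Hedged sketch: SVR as a soft sensor for a hard-to-measure variable.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 4))     # secondary (easily measured) variables
y = (np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2
     + 0.05 * rng.standard_normal(400))  # primary variable with sensor noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
sensor = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0, epsilon=0.01))
sensor.fit(X_tr, y_tr)
print("test MSE:", mean_squared_error(y_te, sensor.predict(X_te)))
```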


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47