Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over the lifetime, 1618 publications have been published within this topic receiving 158033 citations.


Papers
Proceedings ArticleDOI
05 Nov 2007
TL;DR: A systematic approach based on LS-SVM and wavelet decomposition is proposed for fault diagnosis of hydroturbine generating units (HGU); simulation results show that fault types can be identified and diagnosed by this method.
Abstract: The support vector machine (SVM), a novel machine learning method based on statistical learning theory, has been successfully used in pattern recognition and function estimation. The least squares support vector machine (LS-SVM) is a least squares version of the standard SVM, which uses equality instead of inequality constraints and works with a least squares objective function. This paper proposes a systematic approach based on LS-SVM and wavelet decomposition for fault diagnosis of hydroturbine generating units (HGU). Vibration signals under abnormal conditions are collected and preprocessed with wavelet decomposition, and feature information extracted from the signals serves as the feature vectors for training and testing the LS-SVM. To classify multiple fault modes of the HGU, a multiclass classifier based on LS-SVM with minimum output codes (MOC) is constructed and used for HGU fault diagnosis. Simulation results show that fault types can be identified and diagnosed by this method. Compared with an RBF neural network, the higher identification accuracy indicates the feasibility and effectiveness of LS-SVM in HGU fault diagnosis.

3 citations
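The abstract above notes that LS-SVM training replaces the SVM's inequality constraints with equality constraints and a least squares objective, so the dual problem reduces to a single linear system. The following is a minimal sketch of that idea (not the paper's code; the helper names, kernel width, and toy data are invented for illustration):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM dual system  [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)

# Toy check: two well-separated clusters stand in for fault / no-fault feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])
b, alpha = lssvm_fit(X, y)
acc = (lssvm_predict(X, b, alpha, X) == y).mean()
```

Because the equality constraints make every training point a support vector, training is just one dense solve; this is why LS-SVM is attractive for small-sample diagnosis problems like the one described.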

Journal Article
TL;DR: The four-layer SVMs overcome the neural network's shortcomings, such as requiring a large amount of training samples and easily falling into local minima during training, and offer both high accuracy and high speed.
Abstract: The support vector machine (SVM) is a novel machine-learning algorithm based on statistical learning theory (SLT). SVM is a powerful tool for systems with small samples, nonlinearity, and high dynamic dimensionality. This paper adopts multi-layer decision-making least squares support vector machines (LS-SVMs) for oil-filled transformer fault diagnosis, together with a multi-layer dynamic adaptive algorithm for optimizing the constants δ and c. The method uses a minimal set of characteristic vectors, the four gas components, as the fault-diagnosis features; the four-layer SVMs are then trained to identify the states of the transformers. The four-layer SVMs overcome the neural network's shortcomings, such as requiring a large amount of training samples and easily falling into local minima during training. Compared with the neural network, the four-layer SVMs offer both high accuracy and high speed.

3 citations
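The multi-layer decision-making scheme above can be pictured as a cascade: each layer's binary SVM peels off one state, and the remaining samples pass to the next layer. A hedged sketch of that structure (class names, cluster data, and the use of scikit-learn's `SVC` in place of LS-SVM are all illustrative assumptions, not the paper's implementation):

```python
import numpy as np
from sklearn.svm import SVC

class LayeredSVM:
    """Cascade of binary SVMs: each layer separates one class from the rest."""
    def __init__(self, class_order):
        self.class_order = class_order  # decision order is a design choice
        self.models = []

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        for cls in self.class_order[:-1]:  # the last class needs no model
            target = (y == cls).astype(int)
            self.models.append(SVC(kernel="rbf", gamma="scale").fit(X, target))
            keep = y != cls  # only the remaining samples feed the next layer
            X, y = X[keep], y[keep]
        return self

    def predict_one(self, x):
        for cls, m in zip(self.class_order, self.models):
            if m.predict(x.reshape(1, -1))[0] == 1:
                return cls
        return self.class_order[-1]

# Toy stand-in for transformer states; features mimic gas-component readings.
rng = np.random.default_rng(2)
centers = {"normal": (0, 0), "overheat": (6, 0), "discharge": (0, 6)}
X = np.vstack([rng.normal(c, 0.4, (30, 2)) for c in centers.values()])
y = np.array([k for k in centers for _ in range(30)])

model = LayeredSVM(["normal", "overheat", "discharge"]).fit(X, y)
acc = (np.array([model.predict_one(x) for x in X]) == y).mean()
```

Each layer sees a progressively smaller and simpler problem, which is one reason layered schemes suit small-sample diagnosis tasks.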

Journal Article
TL;DR: Support vector machines solve small-sample learning problems well by using structural risk minimization in place of empirical risk minimization, and convert a nonlinear learning problem into a linear one to reduce algorithmic complexity.
Abstract: Support vector machines (SVM) were developed by Vapnik et al. from the machine learning theory of small samples based on statistical learning theory (SLT), and were originally designed for binary classification problems. SVM solves small-sample learning problems well by using structural risk minimization in place of empirical risk minimization. Moreover, SVM can convert a nonlinear learning problem into a linear one, reducing algorithmic complexity, by using the kernel function concept. A multi-class SVM classification method is applied to lake water quality assessment. A case study shows that the method is reliable for the classification and evaluation of lake water quality.

3 citations
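The kernel idea in the abstract, turning a nonlinear problem into a linear one in feature space, is easy to demonstrate. A minimal sketch with synthetic stand-ins for water-quality grades (the ring data, grades, and parameter values are invented for illustration, not from the paper):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def ring(r, n):
    """n noisy points on a circle of radius r: linearly inseparable classes."""
    t = rng.uniform(0, 2 * np.pi, n)
    return np.c_[r * np.cos(t), r * np.sin(t)] + rng.normal(0, 0.1, (n, 2))

# Three concentric rings play the role of three "water quality grades".
X = np.vstack([ring(1, 50), ring(3, 50), ring(5, 50)])
y = np.repeat([0, 1, 2], 50)

# A linear SVM cannot separate concentric rings; an RBF kernel separates them
# without ever computing the feature map explicitly (the kernel trick).
linear_acc = SVC(kernel="linear", C=1.0).fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf", C=10.0, gamma=0.5).fit(X, y).score(X, y)
```

scikit-learn's `SVC` handles the multi-class case internally via one-vs-one voting, which matches the multi-class classification setting the abstract describes.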

Posted Content
TL;DR: This paper shows that for certain large hypothesis classes, some interpolating ERMs enjoy very good statistical guarantees while others fail in the worst sense, and that the same phenomenon occurs for DNNs with zero training error and sufficiently large architectures.
Abstract: A common strategy for training deep neural networks (DNNs) is to use very large architectures and to train them until they (almost) achieve zero training error. Empirically observed good generalization performance on test data, even in the presence of substantial label noise, corroborates such a procedure. On the other hand, statistical learning theory tells us that over-fitted models may generalize poorly, e.g. in empirical risk minimization (ERM) over too-large hypothesis classes. Inspired by this contradictory behavior, so-called interpolation methods have recently received much attention, leading to consistent and optimally learning methods for some local averaging schemes with zero training error. However, there has been no theoretical analysis of interpolating ERM-like methods so far. We take a step in this direction by showing that for certain large hypothesis classes, some interpolating ERMs enjoy very good statistical guarantees while others fail in the worst sense. Moreover, we show that the same phenomenon occurs for DNNs with zero training error and sufficiently large architectures.

3 citations

Proceedings ArticleDOI
13 Mar 2010
TL;DR: This paper applies the support vector machine to building a time-series forecasting model, studies the parameters that affect the model's prediction accuracy, and proposes an adaptive parameter-optimization algorithm for the SVM prediction model based on a genetic algorithm.
Abstract: Support vector machines, which are based on statistical learning theory and the structural risk minimization principle, ensure, in theory, the maximum generalization ability of the model. Compared with neural network models built on the empirical risk minimization principle, they are therefore more theoretically sound. This paper applies the support vector machine to building a time-series forecasting model and studies the parameters that affect the model's prediction accuracy. Based on an analysis of the influence of these parameters on time-series forecasting accuracy, it proposes an adaptive parameter-optimization algorithm for the SVM prediction model built on a genetic algorithm.

3 citations
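The forecasting setup described above, embedding a series with a sliding window, fitting a support vector regressor, and tuning its parameters, can be sketched as follows. A plain grid search stands in here for the paper's genetic algorithm, and the window width, parameter grids, and sine-wave data are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVR

def make_windows(series, w):
    """Turn a 1-D series into (lagged window -> next value) training pairs."""
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    return X, series[w:]

# A noiseless sine wave as a toy series to forecast.
t = np.linspace(0, 8 * np.pi, 400)
X, y = make_windows(np.sin(t), w=10)
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

# Search (C, gamma); a GA would explore this space adaptively instead.
best = None
for C in (1.0, 10.0, 100.0):
    for gamma in (0.01, 0.1, 1.0):
        score = SVR(C=C, gamma=gamma).fit(X_tr, y_tr).score(X_te, y_te)  # R^2
        if best is None or score > best[0]:
            best = (score, C, gamma)
```

The key design point the paper studies is exactly this sensitivity: the same SVR with a poorly chosen (C, gamma) pair can forecast far worse than the best pair in the grid.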


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47