scispace - formally typeset
Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published within this topic, receiving 158,033 citations.


Papers
Proceedings Article
01 Jan 2003
TL;DR: This research found SVM to perform well at discrimination compared to several other existing popular classifiers.
Abstract: Appropriate training data always play an important role in constructing an efficient classifier to solve a data mining classification problem. The Support Vector Machine (SVM) is a comparatively new approach to constructing a model/classifier for data analysis, based on Statistical Learning Theory (SLT). SVM transforms the basic constrained optimization problem into a quadratic programming problem, which can be solved efficiently through standard methods. Our research focuses on using SVM to classify data sets of a number of different sizes. We found SVM to perform well at discrimination compared to several other existing popular classifiers.
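As a hedged illustration of the kind of classifier this abstract describes, the sketch below trains an RBF-kernel SVM on a synthetic dataset with scikit-learn. The dataset, kernel choice, and C value are illustrative assumptions, not the paper's own setup or data.

```python
# Minimal SVM classification sketch (illustrative, not the paper's experiment).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class data; size and seed are arbitrary choices.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0)   # kernel and C are illustrative, not tuned
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)  # held-out classification accuracy
```

Under the hood, fitting an SVC solves the convex quadratic program that the abstract alludes to, so the solution found is the global optimum for the given kernel and C.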

1 citation

Proceedings ArticleDOI
21 Mar 2015
TL;DR: A neural network classification method based on statistical learning theory and information entropy is put forward, which differs from the traditional neural network method.
Abstract: Time series are an important kind of data, and applying data mining technology to the prevalent time series data allows effective access to information and knowledge discovery. In time series data mining, feature representation and similarity measurement are important foundational tasks; their smooth implementation provides good data processing methods and technical support for other data mining tasks. Based on information entropy, this paper puts forward a neural network classification method grounded in statistical learning theory, which differs from the traditional neural network method. The proposed methodology is verified through numerical analysis and experimental simulation. Further research interests are also discussed.
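The abstract leans on information entropy as a feature of a time series without spelling it out. A minimal sketch of that quantity, assuming a simple histogram discretization (the bin count, signals, and seed here are illustrative assumptions, not the paper's method):

```python
# Shannon entropy of a discretized time series (illustrative sketch).
import numpy as np

def series_entropy(x, bins=8):
    """Shannon entropy (in bits) of the histogram of x."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty bins: 0 * log(0) := 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
uniform_like = rng.uniform(size=1000)   # spread-out series -> high entropy
constant_like = np.zeros(1000)          # degenerate series -> zero entropy
h_uniform = series_entropy(uniform_like)
h_constant = series_entropy(constant_like)
```

With 8 bins the entropy is bounded by log2(8) = 3 bits, reached only when all bins are equally populated; a constant series collapses into one bin and scores zero.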

1 citation

Journal Article
TL;DR: This paper uses a linear regression function to fit the signal, including the harmonics to be measured, and derives an iterative reweighted least squares procedure for signal spectrum analysis.
Abstract: Power system frequency has an important impact on the operation, control and protection of the power system, and it is an important index for weighing power quality, so it is very necessary to measure it accurately. Each method in use today has advantages and disadvantages in measurement speed and precision, computational load, the ability to suppress harmonics, and ease of implementation, and it is difficult to satisfy all of these at once. The support vector method (SVM) is the newest and most effective part of statistical learning theory; its core was put forward between 1992 and 1995 and is still in a stage of constant development. The theory, based on the principle of structural risk minimization, provides a new perspective on machine learning and has been widely applied to pattern recognition and function fitting. The basic idea of SVM for recognition is to map the input space into a high-dimensional feature space via a nonlinear mapping and to do linear regression in this space in order to approximate the primitive function. This algorithm is a convex quadratic optimization problem and can ensure that the obtained solution is the global optimum. First, this paper uses a linear regression function to fit the signal, including the harmonics to be measured, and derives the iterative reweighted least squares procedure for signal spectrum analysis. Then, a new method to measure power system frequency is presented. In the case of few samples, this method uses a single trial frequency to sound out whether it can fit the original signal curve well, and can measure power system frequency accurately by comparing different amplitudes. Through discussion of three examples and comparison with the FFT, it is shown that the new method has the characteristics of simple computation and high precision.
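The "try a single frequency and see how well it fits the curve" idea can be sketched with an ordinary least-squares fit of sine/cosine regressors over a grid of candidate frequencies. This is a hedged sketch, not the paper's algorithm: the sampling rate, signal, grid, and use of plain least squares (instead of the paper's iterative reweighted procedure) are all illustrative assumptions.

```python
# Frequency estimation by least-squares fit at trial frequencies (sketch).
import numpy as np

fs = 1000.0                              # sampling rate in Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)            # 200 samples over 0.2 s
true_f = 50.3                            # "unknown" off-nominal frequency
x = np.sin(2 * np.pi * true_f * t)       # clean single-tone test signal

def fit_residual(f):
    """RMS residual of the best sin/cos fit at trial frequency f."""
    A = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.sqrt(np.mean((x - A @ coef) ** 2))

# Scan candidate frequencies; the best fit marks the estimated frequency.
grid = np.arange(49.0, 51.5, 0.01)
f_hat = grid[np.argmin([fit_residual(f) for f in grid])]
```

At the true frequency the two regressors span the signal exactly, so the residual collapses to (numerically) zero and the grid search recovers 50.3 Hz to within the grid step.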

1 citation

Proceedings ArticleDOI
21 Dec 2008
TL;DR: An improved multi-objective optimization algorithm based on simulated annealing is proposed and applied to hyperparameter optimization of the support vector machine with an RBF kernel.
Abstract: Based on statistical learning theory, the support vector machine focuses on machine learning strategies under small samples and achieves better generalization ability than tools based on the empirical risk minimization principle. Its classification or regression performance is affected by the related hyperparameters. An improved multi-objective optimization algorithm based on simulated annealing is proposed and applied to hyperparameter optimization of the support vector machine with an RBF kernel. The selection of a proper search space, an initial feasible solution, and an initial temperature, as well as the design of an optimization objective function, are then discussed in detail. Validation on some standard datasets is carried out, and the method's feasibility and effectiveness are confirmed.
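A hedged sketch of the idea: simulated annealing over (log C, log gamma) for an RBF-kernel SVM, scored by cross-validation accuracy. Note the assumptions: the paper's algorithm is multi-objective and improved, while this simplified single-objective version is not the authors' method; the dataset, search box, cooling schedule, and step size are all illustrative choices.

```python
# Single-objective simulated annealing for SVM hyperparameters (sketch).
import math, random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def score(log_c, log_g):
    """Cross-validated accuracy of an RBF SVM at the given log-parameters."""
    clf = SVC(C=10 ** log_c, gamma=10 ** log_g, kernel="rbf")
    return cross_val_score(clf, X, y, cv=3).mean()

random.seed(0)
state = (0.0, 0.0)                       # start at C = 1, gamma = 1
best, best_score = state, score(*state)
T = 1.0                                  # initial temperature (assumed)
for step in range(60):
    # Propose a neighbor, clipped to the search box [-3, 3]^2 (assumed bounds).
    cand = tuple(min(3.0, max(-3.0, s + random.gauss(0, 0.5))) for s in state)
    s_new, s_old = score(*cand), score(*state)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if s_new >= s_old or random.random() < math.exp((s_new - s_old) / T):
        state = cand
    if s_new > best_score:
        best, best_score = cand, s_new
    T *= 0.95                            # geometric cooling schedule
```

The temperature lets early iterations escape poor regions of the hyperparameter space; as T decays, the search settles into a good basin.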

1 citation

Jing Peng1, Peng Zhang1
01 Jan 2005
TL;DR: A new approach to discriminant analysis, called DLA, is established; it is well-posed and behaves well in the SSS situation. The classification performance of the nearest-neighbor rule with each subspace representation is evaluated.
Abstract: Linear discriminant analysis (LDA) suffers from the small sample size (SSS) problem. Researchers have proposed several modified versions of LDA to deal with this problem, but a solid theoretical analysis has been missing. We analyze LDA and the SSS problem on the basis of learning theory. Originally derived from Fisher's criterion, LDA can also be formulated as a least-squares (LS) approximation problem. In this way it can be clearly shown that LDA is an ill-posed problem and thus inherently unstable. To transform the ill-posed problem into a well-posed one, a regularization term is necessary. We establish a new approach to discriminant analysis, which we call discriminant learning analysis (DLA). DLA is well-posed and behaves well in the SSS situation. Parzen Windows, a nonparametric method, has been applied to a variety of density estimation and classification tasks. While it converges to the unknown probability density in the asymptotic limit, theoretical analysis of its performance with finite samples has been lacking. We establish a finite sample error bound for Parzen Windows and discuss its properties. This analysis provides interesting insight into Parzen Windows, as well as the nearest neighbor method, from the point of view of learning theory. Texture is an important property of surfaces that enables us to distinguish objects. There are several approaches to computing texture features; of particular interest is multi-channel filtering because of its simplicity. However, the main difficulty associated with such an approach is the resolution of the decomposition. Most proposed techniques are optimized with respect to image representation, giving no direct guarantee of good feature separation. This dissertation proposes a systematic method for learning optimal filters for texture classification. Since the filter training in the proposed technique is naturally tied to classifier training, the resulting filters are optimized with respect to classification. This dissertation also investigates the use of subspace analysis methods for learning low-dimensional representations for classification. We propose a kernel-pooled local discriminant subspace method and compare it against competing techniques, kernel principal component analysis (KPCA) and generalized discriminant analysis (GDA), on classification problems. We evaluate the classification performance of the nearest-neighbor rule with each subspace representation.
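The Parzen-window estimator the abstract analyzes is easy to state concretely. A minimal sketch with a Gaussian kernel follows; the bandwidth, sample size, and test distribution are illustrative assumptions, unrelated to the dissertation's finite-sample bound.

```python
# Parzen-window (kernel) density estimation with a Gaussian kernel (sketch).
import numpy as np

def parzen_density(x_query, samples, h=0.3):
    """Average of Gaussian kernels of width h centered at each sample."""
    u = (x_query - samples[:, None]) / h          # shape: (n_samples, n_query)
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return k.mean(axis=0) / h                     # density estimate per query

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=2000)         # draws from N(0, 1)
xs = np.array([0.0, 1.0])
est = parzen_density(xs, samples)                 # should approach N(0,1) pdf
```

With 2000 samples the estimate lands close to the true standard-normal density (about 0.399 at x = 0 and 0.242 at x = 1), illustrating the asymptotic convergence the abstract mentions; the finite-sample gap is exactly what the dissertation's error bound quantifies.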

1 citation


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations (86% related)
Cluster analysis: 146.5K papers, 2.9M citations (82% related)
Feature extraction: 111.8K papers, 2.1M citations (81% related)
Optimization problem: 96.4K papers, 2.1M citations (80% related)
Fuzzy logic: 151.2K papers, 2.3M citations (79% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47