Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over the lifetime, 1,618 publications have been published within this topic, receiving 158,033 citations.


Papers
Book ChapterDOI
09 Jul 2012
TL;DR: This paper addresses the question of what the VC dimension of the (discrete) Choquet integral is when it is used as a binary classifier, and provides a first result in the form of relatively tight lower and upper bounds.
Abstract: The idea of using the Choquet integral as an aggregation operator in machine learning has gained increasing attention in recent years, and a number of corresponding methods have already been proposed. Complementing these contributions from a more theoretical perspective, this paper addresses the following question: What is the VC dimension of the (discrete) Choquet integral when it is used as a binary classifier? The VC dimension is a key notion in statistical learning theory and plays an important role in estimating the generalization performance of a learning method. Although we cannot answer the above question exactly, we provide a first interesting result in the form of (relatively tight) lower and upper bounds.

9 citations
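
To make the object under study concrete, here is a minimal Python sketch of the discrete Choquet integral used as a thresholded binary classifier. It is my illustration with a hypothetical additive capacity, not the paper's construction or its VC-dimension bounds.

from itertools import combinations

import numpy as np

def choquet_integral(x, capacity):
    # Discrete Choquet integral: sort the (nonnegative) inputs ascending and
    # accumulate each increment x_(k) - x_(k-1), weighted by the capacity of
    # the coalition of features whose value is at least x_(k).
    order = np.argsort(x)
    total, prev = 0.0, 0.0
    for k in range(len(order)):
        coalition = frozenset(int(i) for i in order[k:])
        total += (x[order[k]] - prev) * capacity[coalition]
        prev = x[order[k]]
    return total

def choquet_classifier(x, capacity, threshold=0.5):
    # The kind of binary classifier whose VC dimension the paper studies:
    # threshold the Choquet integral of the feature vector.
    return 1 if choquet_integral(x, capacity) >= threshold else 0

# Hypothetical additive capacity over 3 features (weights 0.2, 0.3, 0.5).
# With an additive capacity the integral reduces to a weighted mean; the
# extra expressiveness, and hence the VC-dimension question, comes from
# non-additive capacities, which can model feature interactions.
weights = [0.2, 0.3, 0.5]
capacity = {frozenset(s): sum(weights[i] for i in s)
            for r in range(4) for s in combinations(range(3), r)}
print(choquet_classifier(np.array([0.9, 0.1, 0.4]), capacity))  # prints 0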

Journal ArticleDOI
TL;DR: This paper first reviews the formulations of SILF-SVR, in which a soft insensitive loss function is used, and of ordinary Kriging, and then proves the equivalence between the two techniques under the assumption that the kernel function is taken to be the covariance function.
Abstract: Support vector regression (SVR) is a powerful learning technique in the framework of statistical learning theory, while Kriging is a well-entrenched prediction method traditionally used in the field of spatial statistics. However, the two techniques share the same framework of reproducing kernel Hilbert spaces. In this paper, we first review the formulations of SILF-SVR, in which a soft insensitive loss function is used, and of ordinary Kriging, and then prove the equivalence between the two techniques under the assumption that the kernel function is taken to be the covariance function.

9 citations
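
The RKHS connection behind this equivalence is easiest to see in a simpler special case: with squared loss instead of the paper's soft insensitive loss, kernel ridge regression and the simple-Kriging / Gaussian-process posterior mean define the same predictor once the RKHS kernel is taken to be the covariance function and the ridge penalty matches the noise variance. A minimal numpy sketch (my illustration, not the paper's derivation):

import numpy as np

def rbf(A, B, length_scale=1.0):
    # Squared-exponential kernel, doubling as a stationary covariance function.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / length_scale ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (30, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(30)
Xnew = np.linspace(-3, 3, 7)[:, None]
K, Knew = rbf(X, X), rbf(Xnew, X)

lam = 0.01      # ridge penalty in the RKHS least-squares objective
f_krr = Knew @ np.linalg.solve(K + lam * np.eye(30), y)

sigma2 = 0.01   # nugget / noise variance in the Kriging (GP) model
f_kriging = Knew @ np.linalg.solve(K + sigma2 * np.eye(30), y)

print(np.allclose(f_krr, f_kriging))  # True: identical when lam == sigma2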

Book ChapterDOI
01 Jan 2013
TL;DR: In this article, the operator equation for the estimator is viewed as a perturbed version of the operator equation for the ideal estimator, a view that suggests multi-parameter regularization methods, namely dual regularized total least squares (DRTLS) and multi-penalty regularization (MPR), for constructing better extrapolating estimators than standard Tikhonov regularization.
Abstract: One-parameter regularization methods, such as Tikhonov regularization, are used to solve the operator equation for the estimator in statistical learning theory. Recently, there has been a lot of interest in the construction of so-called extrapolating estimators, which approximate the input–output relationship beyond the scope of the empirical data. Standard Tikhonov regularization produces rather poor extrapolating estimators. In this paper, we propose a novel view of the operator equation for the estimator, in which this equation is seen as a perturbed version of the operator equation for the ideal estimator. This view suggests the dual regularized total least squares (DRTLS) and multi-penalty regularization (MPR), which are multi-parameter regularization methods, as methods of choice for constructing better extrapolating estimators. We propose and test several realizations of DRTLS and MPR for constructing extrapolating estimators. It will be seen that, among the considered realizations, a realization of MPR gives the best extrapolating estimators. For this realization, we propose a rule for the choice of the regularization parameters that allows an automatic selection of a suitable extrapolating estimator.

9 citations
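
For orientation, here is a minimal numpy sketch contrasting one-parameter Tikhonov regularization with a two-penalty regularizer for a linear operator equation A f = g. This is a generic illustration with a made-up operator, not the paper's DRTLS or MPR algorithms or its parameter-choice rule.

import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)     # made-up forward operator
f_true = np.sin(np.linspace(0, 3, n))
g = A @ f_true + 0.01 * rng.standard_normal(n)   # noisy right-hand side

# One-parameter Tikhonov: minimize ||A f - g||^2 + alpha ||f||^2.
alpha = 1e-2
f_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g)

# Two-penalty regularization: ||A f - g||^2 + alpha ||f||^2 + beta ||D f||^2,
# with D a first-difference operator penalizing roughness; two parameters let
# the norm and smoothness trade-offs be tuned independently.
D = np.eye(n) - np.eye(n, k=1)
beta = 1e-2
f_mp = np.linalg.solve(A.T @ A + alpha * np.eye(n) + beta * D.T @ D, A.T @ g)

for name, f in (("Tikhonov", f_tik), ("multi-penalty", f_mp)):
    print(name, "relative error:",
          np.linalg.norm(f - f_true) / np.linalg.norm(f_true))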

Journal ArticleDOI
TL;DR: A novel and efficient pairing support vector algorithm for regression, called PSVR, is proposed; it embodies the essence of statistical learning theory by adopting the principle of structural risk minimization, resulting in better generalization capability than TSVR.

9 citations
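
The structural-risk-minimization trade-off mentioned in the summary is easiest to see in standard epsilon-SVR, where the parameter C balances empirical risk against model complexity. A scikit-learn sketch for illustration only; the paper's PSVR and the TSVR baseline are not implemented here.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(0, 5, (100, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(100)

# C trades off empirical risk against the flatness (complexity) of the model:
# small C prefers a simple function, large C chases the training data.
for C in (0.01, 1.0, 100.0):
    model = SVR(kernel="rbf", C=C, epsilon=0.1).fit(X, y)
    print(f"C={C:7.2f}  support vectors: {len(model.support_)}")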

Proceedings ArticleDOI
20 Jun 2005
TL;DR: An integrated method is applied for process monitoring and fault diagnosis, combining PCA for fault feature extraction with multiple SVMs (MSVMs) for identification of different fault sources; results show that the proposed PCA-MSVMs method achieves good diagnostic capability and a high overall diagnosis correctness rate.
Abstract: On-line monitoring and fault diagnosis of chemical processes are extremely important for operation safety and product quality. Principal component analysis (PCA) has been successfully used as a multivariate statistical process control tool for detecting faults in processes with highly correlated variables. PCA and other statistical techniques, however, have difficulties in differentiating faults correctly in complex chemical processes. The support vector machine (SVM) is a novel approach based on statistical learning theory that has emerged for feature identification and classification. In this paper, an integrated method is applied for process monitoring and fault diagnosis, which combines PCA for fault feature extraction and multiple SVMs (MSVMs) for identification of different fault sources. This approach is verified and illustrated on the Tennessee Eastman benchmark process as a case study. Results show that the proposed PCA-MSVMs method achieves good diagnostic capability and a high overall diagnosis correctness rate.

9 citations
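
The PCA-plus-SVM pattern described in the abstract can be sketched in a few lines of scikit-learn. Synthetic correlated data stands in for the Tennessee Eastman measurements, and the paper's multiple-SVM arrangement is reduced to sklearn's built-in multi-class SVC here.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Three fault classes; 20 correlated measurement channels driven by 4 latent
# factors, mimicking the highly correlated variables PCA is meant to compress.
shifts = np.repeat([[0.0] * 4, [2.0] * 4, [-2.0] * 4], 100, axis=0)
latents = rng.standard_normal((300, 4)) + shifts
X = latents @ rng.standard_normal((4, 20)) + 0.1 * rng.standard_normal((300, 20))
y = np.repeat([0, 1, 2], 100)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
print("fault-classification accuracy:", clf.fit(Xtr, ytr).score(Xte, yte))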


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023         9
2022        19
2021        59
2020        69
2019        72
2018        47