Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published on this topic, receiving 158,033 citations.


Papers
Proceedings ArticleDOI
24 Jul 2000
TL;DR: A new learning machine model for classification is presented, based on decomposing multiclass classification problems into sets of two-class subproblems assigned to nonlinear dichotomizers that learn their tasks independently of each other.
Abstract: We present a new learning machine model for classification problems, based on decomposing multiclass classification problems into sets of two-class subproblems assigned to nonlinear dichotomizers that learn their tasks independently of each other. Experiments on classical data sets show that this learning machine model achieves significant performance improvements over MLPs and over previous classifier models based on decomposing polychotomies into dichotomies. The theoretical reasons for the good generalization properties of the proposed model are explained within the framework of statistical learning theory.
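As a rough illustration of the decomposition idea, the sketch below trains one independent nonlinear binary classifier per class and then recombines their outputs. The one-vs-rest scheme, the small MLP dichotomizers, and the iris data are assumptions for illustration only, not the paper's exact decomposition or dichotomizer choice.

```python
# Minimal sketch: decompose a multiclass problem into independent two-class
# subproblems and train a nonlinear dichotomizer for each, then combine outputs.
# (One-vs-rest shown here; the paper's decomposition scheme may differ.)
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# One independent nonlinear dichotomizer per class, trained in isolation.
dichotomizers = {}
for c in classes:
    binary_target = (y == c).astype(int)
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    clf.fit(X, binary_target)
    dichotomizers[c] = clf

# Reconstruct the multiclass decision by picking the most confident dichotomizer.
scores = np.column_stack([dichotomizers[c].predict_proba(X)[:, 1] for c in classes])
y_pred = classes[scores.argmax(axis=1)]
print("training accuracy:", (y_pred == y).mean())
```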

12 citations

Journal ArticleDOI
TL;DR: The considered recursive algorithm is robust to uncertainty in the statistical characteristics of the disturbances and converges faster in the initial iterations; a theorem is formulated for convergence of the estimated parameters with probability one.
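The TL;DR does not specify the algorithm, so the following is only a generic sketch of a recursive estimator with decreasing gains, the setting in which convergence with probability one is classically proved; the heavy-tailed disturbance, the linear regression model, and the specific update rule are all assumptions, not the paper's method.

```python
# Generic recursive (stochastic-approximation) estimator with decreasing gains.
# Gains satisfying sum a_k = inf and sum a_k^2 < inf are the classical condition
# under which convergence with probability one is established.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])
theta = np.zeros(2)                      # recursive estimate

for k in range(1, 10001):
    x = rng.normal(size=2)               # regressor
    noise = rng.standard_t(df=3)         # heavy-tailed disturbance, unknown statistics
    y = x @ theta_true + noise
    gain = 1.0 / k                       # decreasing step size a_k = 1/k
    theta += gain * x * (y - x @ theta)  # stochastic-gradient-style correction

print("estimate:", theta, "true:", theta_true)
```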

12 citations

Proceedings ArticleDOI
19 Jul 2018
TL;DR: This work presents MiSoSouP, a suite of algorithms for extracting high-quality approximations of the most interesting subgroups, according to different interestingness measures, from a random sample of a transactional dataset, and describes a new formulation of these measures that makes it possible to approximate them using sampling.
Abstract: We present MiSoSouP, a suite of algorithms for extracting high-quality approximations of the most interesting subgroups, according to different interestingness measures, from a random sample of a transactional dataset. We describe a new formulation of these measures that makes it possible to approximate them using sampling. We then discuss how pseudodimension, a key concept from statistical learning theory, relates to the sample size needed to obtain a high-quality approximation of the most interesting subgroups. We prove an upper bound on the pseudodimension of the problem at hand, which results in small sample sizes. Our evaluation on real datasets shows that MiSoSouP outperforms state-of-the-art algorithms offering the same guarantees, and it vastly speeds up the discovery of subgroups compared to analyzing the whole dataset.
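For orientation, a classical uniform-convergence bound shows how the pseudodimension d controls the sample size; the constant c and the exact bound used by MiSoSouP may differ from this illustrative form.

```latex
% Illustrative uniform-convergence bound via pseudodimension; the constant $c$
% and the exact bound used by MiSoSouP may differ.
For a family $\mathcal{F}$ of functions into $[0,1]$ with pseudodimension $d$,
a sample $x_1,\dots,x_m$ of size
\[
  m \;\ge\; \frac{c}{\varepsilon^{2}}\left(d + \ln\frac{1}{\delta}\right)
\]
guarantees, with probability at least $1-\delta$,
\[
  \sup_{f \in \mathcal{F}}
  \left|\frac{1}{m}\sum_{i=1}^{m} f(x_i) - \mathbb{E}\!\left[f(x)\right]\right|
  \;\le\; \varepsilon .
\]
```

Since d enters the bound only additively, proving a small upper bound on the pseudodimension of the problem translates directly into a small required sample size, which is the role it plays in the paper's analysis.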

12 citations

Proceedings ArticleDOI
04 May 1998
TL;DR: This work suggests using the framework of statistical learning theory to explain the effect of weight initialization on complexity control in multilayer perceptron (MLP) networks trained via backpropagation.
Abstract: Complexity control of a learning method is critical for obtaining good generalization with finite training data. We discuss complexity control in multilayer perceptron (MLP) networks trained via backpropagation. For such networks, the number of hidden units and/or network weights is usually used as a complexity parameter. However, application of backpropagation training introduces additional mechanisms for complexity control. These mechanisms are implicit in the implementation of an optimization procedure, and they cannot be easily quantified (in contrast to the number of weights or the number of hidden units). We suggest using the framework of statistical learning theory to explain the effect of weight initialization. Using this framework, we demonstrate the effect of weight initialization on complexity control in MLP networks.
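A hedged, self-contained illustration of that effect (not the paper's experiments): with a fixed budget of backpropagation steps, changing only the scale of the random initial weights changes which functions the trained MLP can reach, acting as an implicit complexity control. The toy regression data, network size, learning rate, and training budget below are all assumptions.

```python
# Toy demonstration: train the same MLP architecture with the same backprop
# budget but different initial-weight scales and compare test error. Small
# initial weights start the network near a smooth, nearly-linear function;
# large ones start it in a richer region of the hypothesis space.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + 0.3 * rng.normal(size=40)   # small noisy training set

def train_mlp(init_scale, hidden=50, steps=2000, lr=0.05):
    W1 = init_scale * rng.normal(size=(1, hidden)); b1 = np.zeros(hidden)
    W2 = init_scale * rng.normal(size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)                       # forward pass
        pred = (h @ W2 + b2).ravel()
        err = pred - y                                 # squared-error residual
        gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
        dh = (err[:, None] * W2.T) * (1 - h**2)        # backprop through tanh
        gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel()

X_test = np.linspace(-1, 1, 200)[:, None]
for scale in (0.01, 3.0):
    f = train_mlp(scale)
    mse = np.mean((f(X_test) - np.sin(3 * X_test[:, 0])) ** 2)
    print(f"init scale {scale}: test MSE {mse:.3f}")
```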

12 citations

Proceedings ArticleDOI
18 Nov 2008
TL;DR: The research results show that the prediction accuracy of RS-SVM is better than that of the standard SVM; the theory of Rough Sets is introduced for its good performance in attribute reduction.
Abstract: Evaluation of construction projects is an important management task. An accurate forecast is required to support the investment decision and to ensure the project is feasible at minimal cost, so controlling and rationally determining the project cost plays the most important role in the budget management of a construction project. Ways and means have been explored to satisfy the requirements for predicting construction project cost. Recently, a novel regression technique called Support Vector Machines (SVM), based on statistical learning theory, is exploited in this paper for the prediction of construction project cost. Nevertheless, the standard SVM still has some difficulties with attribute reduction and prediction precision. This paper introduces the theory of Rough Sets (RS) for its good performance in attribute reduction, considers and extracts the substantive components of a construction project as parameters, and sets up a construction project cost forecasting model based on RS-SVM. The research results show that the prediction accuracy of RS-SVM is better than that of the standard SVM.
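A hedged sketch of the overall RS-SVM pipeline: attribute reduction followed by support vector regression on the reduced attributes. A generic univariate feature selector stands in for the paper's rough-set reduction, and the project data are synthetic; neither reflects the authors' actual implementation.

```python
# Sketch of an attribute-reduction + SVM-regression pipeline for cost prediction.
# SelectKBest is only a stand-in for rough-set attribute reduction, and the data
# are synthetic -- both are assumptions, not the paper's implementation.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # 10 candidate project attributes
cost = 3*X[:, 0] - 2*X[:, 1] + X[:, 2] + 0.5*rng.normal(size=200)  # synthetic cost

X_tr, X_te, y_tr, y_te = train_test_split(X, cost, random_state=0)

model = make_pipeline(
    SelectKBest(f_regression, k=3),       # stand-in for rough-set attribute reduction
    StandardScaler(),
    SVR(kernel="rbf", C=10.0, epsilon=0.1),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out projects:", round(model.score(X_te, y_te), 3))
```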

12 citations


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:
Year  Papers
2023  9
2022  19
2021  59
2020  69
2019  72
2018  47