Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published within this topic, receiving 158,033 citations.


Papers
Journal Article
TL;DR: The aim of this work is the study of an efficient DDC approach for nonlinear dynamic systems that exploits data coming directly from the plant, using local linear regression models chosen for their simplicity of training and their efficiency in incorporating new data generated by the plant.
Abstract: The problem of data-driven control (DDC) is an important topic in automation, due to the availability of large amounts of data generated by the production processes in industrial plants. The aim of this work is the study of an efficient DDC approach for nonlinear dynamic systems that exploits data coming directly from the plant. In this framework, the control problem consists of designing an automatic regulator able to execute a task using the data collected during successful operation of the plant under a reference controller, such as a human operator. The proposed synthetic regulator is based on local linear regression models, chosen for their simplicity of training and their efficiency in incorporating new data generated by the plant. The conditions under which the derived controller converges to the optimal one are analysed in the context of statistical learning theory, which provides an appropriate framework for efficiently addressing this kind of DDC problem. Simulation results involving a dynamical system show the properties of the proposed method in an applicative context.

3 citations
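
As a concrete illustration of the class of model the abstract describes, the following is a minimal sketch of local linear regression: predicting at a query point by fitting a least-squares affine model to its k nearest training samples. The function name, the neighbourhood size k, and the toy data are illustrative assumptions; the paper's actual regressor and controller design are not reproduced here.

import numpy as np

def local_linear_predict(X, y, x_query, k=20):
    # Find the k nearest training points to the query (Euclidean metric assumed).
    dists = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    # Fit an affine model y ~ A x + b on the neighbourhood via least squares.
    Xk = np.hstack([X[idx], np.ones((k, 1))])  # append a bias column
    coef, *_ = np.linalg.lstsq(Xk, y[idx], rcond=None)
    return np.append(x_query, 1.0) @ coef

# Hypothetical usage: X holds observed plant states, y the reference
# controller's actions. New plant data is incorporated by appending rows.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.01 * rng.standard_normal(500)
print(local_linear_predict(X, y, np.array([0.2, -0.3])))

Note that such a model is "lazy": incorporating new plant data amounts to appending rows to X and y, which matches the efficiency argument in the abstract.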

Journal Article
TL;DR: In this article, the performance of robust learning with Huber regression is studied and a new comparison theorem is established, which characterizes the gap between the excess generalization error and the prediction error.

3 citations
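
The robustness in Huber regression comes from its loss function, which is quadratic for small residuals and only linear for large ones, so outliers exert bounded influence. A minimal sketch of that loss (the threshold delta below is a hypothetical tuning choice, not a value from the paper):

import numpy as np

def huber_loss(residual, delta=1.0):
    # Quadratic for |r| <= delta, linear beyond; delta is an assumed value.
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

print(huber_loss(np.array([-3.0, -0.5, 0.0, 0.5, 3.0])))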

Proceedings Article
10 Aug 2015
TL;DR: This tutorial surveys the use of Rademacher Averages and the VC-dimension in sampling-based algorithms for graph analysis and pattern mining, and presents a generic recipe for formulating data mining problems so that these concepts can be used in efficient randomized algorithms for those problems.
Abstract: Rademacher Averages and the Vapnik-Chervonenkis dimension are fundamental concepts from statistical learning theory. They make it possible to study simultaneous deviation bounds of empirical averages from their expectations for classes of functions, by considering properties of the functions, of their domain (the dataset), and of the sampling process. In this tutorial, we survey the use of Rademacher Averages and the VC-dimension in sampling-based algorithms for graph analysis and pattern mining. We start from their theoretical foundations at the core of machine learning, then present a generic recipe for formulating data mining problems so that these concepts can be used in efficient randomized algorithms for those problems. Finally, we show examples of applying the recipe to graph problems (connectivity, shortest paths, betweenness centrality) and to pattern mining. Our goal is to expose the usefulness of these techniques to the data mining researcher, and to encourage research in the area.

3 citations
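
For a finite family of functions evaluated on a fixed sample, the empirical Rademacher average mentioned in the abstract can be estimated directly by Monte Carlo: draw random +/-1 signs and take the supremum of the signed empirical averages. This is a didactic sketch under that finite-family assumption; the tutorial's algorithms bound such quantities analytically rather than by enumeration.

import numpy as np

def empirical_rademacher(values, n_trials=1000, seed=0):
    # values[j, i] = f_j(x_i): function j of the family evaluated on sample point i.
    m, n = values.shape
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=n)   # Rademacher signs
        total += np.max(values @ sigma) / n       # sup over the finite family
    return total / n_trials

# Hypothetical example: 3 binary indicator functions on a 100-point sample.
rng = np.random.default_rng(1)
vals = rng.integers(0, 2, size=(3, 100)).astype(float)
print(empirical_rademacher(vals))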

Journal Article
TL;DR: A recent work settles the sample complexity of learning Myerson's optimal auction by showing matching upper and lower bounds, up to a poly-logarithmic factor, for all families of value distributions that have been considered in the literature.
Abstract: The sample complexity of learning Myerson's optimal auction from i.i.d. samples of bidders' values has received much attention since its introduction by Cole and Roughgarden (STOC 2014). This letter gives a brief introduction to a recent work that settles the sample complexity by showing matching upper and lower bounds, up to a poly-logarithmic factor, for all families of value distributions that have been considered in the literature. The upper bounds are unified under a novel framework, which builds on the strong revenue monotonicity of Devanur, Huang, and Psomas (STOC 2016) and an information-theoretic argument. This is fundamentally different from previous approaches, which rely either on constructing an ε-net of the mechanism space, explicitly or implicitly via statistical learning theory, or on learning an approximately accurate version of the virtual values. To our knowledge, this is the first time information-theoretic arguments have been used to show sample complexity upper bounds rather than lower bounds. The lower bounds are also unified under a meta construction of hard instances.

3 citations
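
The single-bidder case gives the flavour of learning an auction from samples: there, Myerson's optimal auction reduces to a posted reserve price, and a natural learner simply picks the price that maximizes revenue on the sample. The sketch below shows that empirical revenue maximization; it illustrates the problem setting only, not the letter's multi-bidder framework.

import numpy as np

def empirical_reserve(samples):
    # For each candidate price v[i], empirical revenue is v[i] * P_hat[value >= v[i]].
    v = np.sort(samples)
    n = len(v)
    revenues = v * (n - np.arange(n)) / n  # exactly n - i samples are >= v[i]
    return v[np.argmax(revenues)]

# Hypothetical example: uniform[0, 1] values, where the true optimal
# reserve price is 0.5.
rng = np.random.default_rng(2)
print(empirical_reserve(rng.uniform(0.0, 1.0, 10_000)))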

Journal Article
TL;DR: The results show that this method, which forecasts disastrous weather using support vector machines based on statistical learning theory, is simple and accurate, and can be applied to practical disastrous-weather forecasting.
Abstract: A new method is presented that forecasts disastrous weather using support vector machines (SVM) based on statistical learning theory. The method treats disastrous-weather forecasting as a classification problem in pattern recognition, constructs the model and the training samples, and is applied to forecasting abnormally high temperatures and cool summers. The results show that the method is simple, its predictions are accurate, and it can be applied to practical disastrous-weather forecasting.

3 citations
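
Casting disastrous weather as a two-class pattern-recognition problem, as the abstract describes, maps directly onto a standard SVM classifier. Below is a minimal sketch using scikit-learn; the feature set, labels, and synthetic data are illustrative assumptions, not the paper's meteorological inputs.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical setup: each row describes one summer by meteorological
# features; label 1 = abnormally high temperature, 0 = cool summer.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Standardize features, then fit an RBF-kernel SVM on a training split.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X[:150], y[:150])
print("held-out accuracy:", model.score(X[150:], y[150:]))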


Network Information
Related Topics (5)

Artificial neural network: 207K papers, 4.5M citations (86% related)
Cluster analysis: 146.5K papers, 2.9M citations (82% related)
Feature extraction: 111.8K papers, 2.1M citations (81% related)
Optimization problem: 96.4K papers, 2.1M citations (80% related)
Fuzzy logic: 151.2K papers, 2.3M citations (79% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47