Topic
Statistical learning theory
About: Statistical learning theory is a research topic. Over the lifetime, 1618 publications have been published within this topic receiving 158033 citations.
Papers published on a yearly basis
Papers
TL;DR: The most basic assumption in statistical learning theory is that training data and test data are drawn from the same underlying distribution, but in many applications, the "in-domai...
Abstract: The most basic assumption used in statistical learning theory is that training data and test data are drawn from the same underlying distribution. Unfortunately, in many applications, the "in-domai...
16 citations
29 Oct 2012
TL;DR: In this article, the authors tackle the problem of prediction and confidence intervals for time series using a statistical learning approach and quantile loss functions and show that the Gibbs estimator is able to predict as well as the best predictor in a given family for a wide set of loss functions.
Abstract: In this paper, we tackle the problem of prediction and confidence intervals for time series using a statistical learning approach and quantile loss functions. First, we show that the Gibbs estimator is able to predict as well as the best predictor in a given family for a wide set of loss functions. In particular, using the quantile loss function of [1], this allows one to build confidence intervals. We apply these results to the problem of prediction and confidence regions for the French Gross Domestic Product (GDP) growth, with promising results.
16 citations
11 Apr 2019
TL;DR: A novel robust zero-sum game framework for pool-based active learning grounded in advanced statistical learning theory that avoids the issues of many previous algorithms, such as inefficiency, sampling bias, and sensitivity to imbalanced data distributions, is presented.
Abstract: In this paper, we present a novel robust zero-sum game framework for pool-based active learning grounded in advanced statistical learning theory. Pool-based active learning usually consists of two components, namely, learning of a classifier given labeled data and querying of unlabeled data for labeling. Most previous studies on active learning consider these as two separate tasks and propose various heuristics for selecting important unlabeled data for labeling, which may render the selection of unlabeled examples sub-optimal for minimizing the classification error. In contrast, the present work formulates active learning as a unified optimization framework for learning the classifier, i.e., the querying of labels and the learning of models are unified to minimize a common objective for statistical learning. In addition, the proposed method avoids the issues of many previous algorithms, such as inefficiency, sampling bias and sensitivity to imbalanced data distribution. Besides theoretical analysis, we conduct extensive experiments on benchmark datasets and demonstrate the superior performance of the proposed active learning method compared with the state-of-the-art methods.
16 citations
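The two components the abstract names, fitting a classifier on labeled data and querying unlabeled data, can be sketched with the classic heuristic pipeline that the paper argues should instead be unified. This is a toy uncertainty-sampling baseline under assumed names and a 1-D stump model, not the paper's zero-sum game method.

```python
def fit_threshold(labeled):
    """Component 1: fit a 1-D stump, thresholding halfway between class means."""
    xs0 = [x for x, y in labeled if y == 0]
    xs1 = [x for x, y in labeled if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def query_most_uncertain(pool, threshold):
    """Component 2: uncertainty sampling -- query the pool point nearest the boundary."""
    return min(pool, key=lambda x: abs(x - threshold))

labeled = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
pool = [2.0, 4.9, 8.0]
t = fit_threshold(labeled)           # decision boundary from labeled data
x = query_most_uncertain(pool, t)    # next point to send for labeling
```

Because the query rule here is a fixed heuristic decoupled from the training objective, it can pick points that do little to reduce classification error, which is exactly the sub-optimality the unified formulation targets.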
23 Oct 2006
TL;DR: It is shown that if a kernel has a perfect alignment with the classification task, the SVM classifier achieves better performance.
Abstract: This paper studies several key aspects of the support vector machine (SVM) for Web page classification. Developed from statistical learning theory, the SVM is widely investigated and used for text categorization because of its high generalization performance and its ability to handle high-dimensional classification problems. Firstly, some methods for Web page representation are studied. Secondly, Web page classification based on the SVM is implemented on a data set, and an NB classifier is used to study the performance of the SVM classifier in high-dimensional spaces. Finally, the polynomial kernel function and the radial basis function (RBF) kernel function are compared. It is shown that if a kernel has a perfect alignment with the classification task, the SVM classifier achieves better performance.
16 citations
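The conclusion above hinges on kernel-target alignment: a kernel matrix K is well aligned with a task when it is close, in normalized Frobenius inner product, to the ideal kernel y y^T built from the labels. A minimal sketch of that quantity on toy data follows; the data, the gamma value, and the function names are assumptions for illustration, not the paper's Web-page experiments.

```python
import math

def alignment(K, y):
    """Kernel-target alignment A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
    n = len(y)
    num = sum(K[i][j] * y[i] * y[j] for i in range(n) for j in range(n))
    k_norm = math.sqrt(sum(K[i][j] ** 2 for i in range(n) for j in range(n)))
    yy_norm = n  # ||y y^T||_F = n when labels are in {-1, +1}
    return num / (k_norm * yy_norm)

def rbf(x, z, gamma=1.0):
    """RBF kernel on scalars: exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * (x - z) ** 2)

xs = [-2.0, -1.5, 1.5, 2.0]            # two well-separated clusters
y = [-1, -1, 1, 1]                     # labels matching the clusters
K = [[rbf(a, b) for b in xs] for a in xs]
a = alignment(K, y)                    # high when the kernel separates the classes
```

Alignment lies in [-1, 1], and comparing it across candidate kernels (e.g. polynomial vs. RBF) is one way to predict which kernel will give the SVM better generalization before training.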