Topic
Statistical learning theory
About: Statistical learning theory is a research topic. Over its lifetime, 1618 publications have been published within this topic, receiving 158033 citations.
Papers published on a yearly basis
Papers
24 May 2019
TL;DR: The authors address the question of how to learn robustly when training data is collected from multiple, possibly unreliable external sources, and derive a procedure that learns from all available sources while automatically suppressing irrelevant or corrupted data.
Abstract: Modern machine learning methods often require more data for training than a single expert can provide. Therefore, it has become a standard procedure to collect data from external sources, e.g. via crowdsourcing. Unfortunately, the quality of these sources is not always guaranteed. As additional complications, the data might be stored in a distributed way, or might even have to remain private. In this work, we address the question of how to learn robustly in such scenarios. Studying the problem through the lens of statistical learning theory, we derive a procedure that allows for learning from all available sources, yet automatically suppresses irrelevant or corrupted data. We show by extensive experiments that our method provides significant improvements over alternative approaches from robust statistics and distributed optimization.
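The abstract above describes learning from many sources while suppressing corrupted ones. As a toy illustration of that idea (not the paper's actual procedure; the helper name `robust_aggregate` is ours), one can down-weight sources whose per-source estimates deviate strongly from the median:

```python
# Hypothetical sketch: suppress corrupted data sources when aggregating
# per-source estimates, assuming each source reports a scalar statistic.
from statistics import median

def robust_aggregate(source_estimates, threshold=2.0):
    """Zero out sources whose estimate deviates strongly from the median.

    source_estimates: one float per data source.
    threshold: sources farther than threshold * MAD from the median get weight 0.
    """
    med = median(source_estimates)
    abs_dev = [abs(x - med) for x in source_estimates]
    mad = median(abs_dev) or 1e-12  # median absolute deviation; avoid div by zero
    weights = [1.0 if d / mad <= threshold else 0.0 for d in abs_dev]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, source_estimates)) / total

# Three honest sources near 1.0 and one corrupted source at 50.0:
# the corrupted source is suppressed and the aggregate stays near 1.0.
print(robust_aggregate([0.9, 1.0, 1.1, 50.0]))
```

The median/MAD screen plays the role of the paper's automatic suppression step in the simplest possible setting; the real procedure operates on learning objectives, not scalar summaries.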
28 citations
01 Jan 2018
TL;DR: In statistical learning theory (regression, classification, etc.) there are many regression models, such as algebraic polynomials, which support the development of models for classification.
Abstract: In statistical learning theory (regression, classification, etc.) there are many regression models, such as algebraic polynomials, …
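To illustrate the simplest algebraic-polynomial regression model mentioned here, a degree-1 least-squares fit can be written in a few lines (the helper name `fit_line` is ours, not the paper's):

```python
def fit_line(xs, ys):
    """Least-squares fit of a degree-1 algebraic polynomial y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Closed-form normal equations for one feature:
    # slope = cov(x, y) / var(x), intercept from the means.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Data generated by y = 2x + 1, so the fit recovers (2.0, 1.0) exactly.
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))
```

Higher-degree polynomial models follow the same least-squares recipe with more basis functions, at the cost of solving a larger linear system.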
28 citations
23 Jun 2008
TL;DR: A framework formulated under statistical learning theory that facilitates robust learning of a discriminative projection is proposed; experimental results suggest that it outperforms some recent regularized techniques when the number of training samples is small.
Abstract: Learning a robust projection with a small number of training samples is still a challenging problem in face recognition, especially when the unseen faces have extreme variation in pose, illumination, and facial expression. To address this problem, we propose a framework formulated under statistical learning theory that facilitates robust learning of a discriminative projection. Dimensionality reduction using the projection matrix is combined with a linear classifier in the regularized framework of lasso regression. The projection matrix and the classifier parameters are then found by solving an optimization problem over the Stiefel manifold. The experimental results on standard face databases suggest that the proposed method outperforms some recent regularized techniques when the number of training samples is small.
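The lasso-regression framework above relies on an L1 penalty; the core ingredient of lasso-type solvers is the soft-thresholding (proximal) operator. A minimal sketch of that operator alone (the function name `soft_threshold` is ours, and this is not the paper's Stiefel-manifold solver):

```python
def soft_threshold(x, lam):
    """Proximal operator of the L1 penalty used in lasso regression.

    Shrinks x toward zero by lam, and sets it exactly to zero when
    |x| <= lam -- this is what produces sparse coefficients.
    """
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Values inside [-1, 1] are zeroed out; the rest are shrunk by 1.0.
print([soft_threshold(v, 1.0) for v in [-2.5, -0.5, 0.0, 0.3, 3.0]])
```

Coordinate-descent lasso solvers apply this operator to one coefficient at a time; the paper combines such an L1-regularized objective with an orthogonality constraint on the projection.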
28 citations
TL;DR: The research shows that the proposed method offers higher classification performance, better generalization ability, and shorter training time than methods based on artificial neural networks, especially for small samples.
Abstract: A fault diagnosis method based on SVM (support vector machine) is proposed in this paper. The research shows that the proposed method offers higher classification performance, better generalization ability, and shorter training time than methods based on artificial neural networks, especially for small samples.
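As a rough illustration of SVM-based classification on a tiny sample (a toy sketch with our own helper names, not the paper's implementation), a linear SVM can be trained by sub-gradient descent on the regularized hinge loss:

```python
def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Tiny linear SVM via sub-gradient descent on the hinge loss.

    X: list of feature vectors; y: labels in {-1, +1}.
    Minimizes (lam/2)*||w||^2 + average hinge loss, one sample at a time.
    """
    dim = len(X[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # sample violates the margin: hinge sub-gradient
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # only the regularizer contributes
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy "fault diagnosis" data: normal readings near (0, 0), faulty near (2, 2).
X = [[0.0, 0.1], [0.2, 0.0], [2.0, 1.9], [1.8, 2.1]]
y = [-1, -1, 1, 1]
w, b = train_linear_svm(X, y)
print([predict(w, b, x) for x in X])
```

Real fault-diagnosis pipelines would use a kernelized SVM on extracted signal features; the margin-maximization principle that gives SVMs their small-sample advantage is the same.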
28 citations
01 Dec 2008
TL;DR: This paper proposes to address the small sample size (SSS) problem in the framework of statistical learning theory by computing linear discriminants via regularized least squares regression, which resolves the singularity problem.
Abstract: Linear discriminant analysis (LDA) as a dimension reduction method is widely used in classification such as face recognition. However, it suffers from the small sample size (SSS) problem when data dimensionality is greater than the sample size, as in images where features are high dimensional and correlated. In this paper, we propose to address the SSS problem in the framework of statistical learning theory. We compute linear discriminants by regularized least squares regression, where the singularity problem is resolved. The resulting discriminants are complete in that they include both regular and irregular information. We show that our proposal and its nonlinear extension belong to the same framework where powerful classifiers such as support vector machines are formulated. In addition, our approach allows us to establish an error bound for LDA. Finally, our experiments validate our theoretical analysis results.
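The singularity fix described above, computing discriminants by regularized least squares, can be illustrated on a deliberately tiny two-feature problem (helper names are ours, not the paper's): with fewer samples than features, X^T X is singular, but adding a ridge term λI makes the normal equations solvable.

```python
def ridge_discriminant_2d(X, y, lam=0.1):
    """Regularized least squares for two features: w = (X^T X + lam*I)^{-1} X^T y.

    The ridge term lam*I keeps the 2x2 system nonsingular even when there
    are fewer samples than features (the small-sample-size setting).
    """
    # Accumulate the entries of X^T X + lam*I and of X^T y.
    a = sum(x[0] * x[0] for x in X) + lam
    c = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + lam
    g0 = sum(x[0] * yi for x, yi in zip(X, y))
    g1 = sum(x[1] * yi for x, yi in zip(X, y))
    # Explicit 2x2 inverse; det > 0 whenever lam > 0.
    det = a * d - c * c
    w0 = (d * g0 - c * g1) / det
    w1 = (a * g1 - c * g0) / det
    return w0, w1

# One sample, two features: X^T X = [[1, 1], [1, 1]] is singular,
# yet the ridge-regularized discriminant is well defined.
print(ridge_discriminant_2d([[1.0, 1.0]], [1.0], lam=0.1))
```

In image-sized problems the same idea is applied in high dimension (and kernelized for the nonlinear extension); the 2x2 case only shows why the regularizer removes the singularity.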
28 citations