Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published within this topic, receiving 158,033 citations.


Papers
Book Chapter
TL;DR: This work reports on combining several SVMs with different kernel functions and varying credit client data sets, presents classification results produced by various combination strategies, and compares them to results obtained earlier with more traditional single-SVM credit scoring models.
Abstract: Support vector machines (SVMs) from statistical learning theory are powerful classification methods with a wide range of applications, including credit scoring. The urgent need to further boost classification performance in many applications has led the machine learning community to develop SVMs with multiple kernels and many other combined approaches. Owing to the huge size of the credit market, even small improvements in classification accuracy might considerably reduce the effective misclassification costs experienced by banks. Under certain conditions, the combination of different models may reduce, or at least stabilize, the risk of misclassification. We report on combining several SVMs with different kernel functions and varying credit client data sets. We present classification results produced by various combination strategies and compare them to results obtained earlier with more traditional single-SVM credit scoring models.
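
The paper's combination strategies are not spelled out on this page; as a minimal sketch of one such strategy, a majority vote over SVMs with different kernels could look like the following (scikit-learn; the synthetic data and all parameter values are illustrative assumptions, not the paper's setup):

    # Minimal sketch: majority-vote combination of SVMs with different kernels.
    # The synthetic data and parameters are illustrative assumptions only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # One SVM per kernel; a real credit-scoring model would also tune C and gamma.
    ensemble = VotingClassifier(
        estimators=[
            ("linear", SVC(kernel="linear")),
            ("rbf", SVC(kernel="rbf")),
            ("poly", SVC(kernel="poly", degree=3)),
        ],
        voting="hard",  # majority vote across the three kernels
    )
    ensemble.fit(X_train, y_train)
    print("combined accuracy:", ensemble.score(X_test, y_test))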

8 citations

Journal Article
14 Sep 2020
TL;DR: This paper discusses the intersection and fusion of machine learning research methods such as deep learning and the SVM, puts forward new ideas, and anticipates the theories and methods that may emerge in the future.
Abstract: As one of the main methods of machine learning, the support vector machine (SVM) not only has a solid theoretical foundation in statistical learning but also shows excellent generalization performance in many fields, so it has received extensive attention. In recent years, however, compared with the vigorous development of deep learning, research on the SVM has fallen into a trough. This paper starts from the essence of the SVM, discusses the intersection and fusion of machine learning research methods such as deep learning and the SVM, and then puts forward some new ideas. Specifically, it covers three aspects: the large-margin principle and the low-density property of the SVM, the high-dimensional division technique of kernel mapping and its statistical learning theory, and the shallow learning of the SVM with its extension to deep learning and broad learning. At the same time, it highlights the promising properties that can be further explored along these three directions and anticipates the theories and methods that may emerge in the future.
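
As a small illustration of the large-margin principle and kernel mapping discussed above, the following sketch fits a linear and an RBF-kernel SVM on data that is not linearly separable (scikit-learn; the two-moons data set and the gamma value are illustrative assumptions, not from the paper):

    # Illustration of the kernel trick: data a linear SVM cannot separate
    # becomes separable after the implicit high-dimensional RBF mapping.
    # Dataset and parameters are illustrative assumptions only.
    from sklearn.datasets import make_moons
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=400, noise=0.1, random_state=0)

    linear_svm = SVC(kernel="linear").fit(X, y)
    rbf_svm = SVC(kernel="rbf", gamma=2.0).fit(X, y)

    print("linear kernel accuracy:", linear_svm.score(X, y))  # lower: moons are not linearly separable
    print("RBF kernel accuracy:", rbf_svm.score(X, y))        # near-perfect
    # The maximum-margin boundary is determined entirely by the support vectors:
    print("support vectors used by the RBF model:", len(rbf_svm.support_))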

8 citations

Journal Article
TL;DR: A complete SVM-like framework of linear PCA (SVPCA) for deciding the projection direction is constructed, in which a new expected risk and margin are introduced and a new definition of support vectors is established.

8 citations

Journal Article
Ibrahim M. Alabdulmohsin
13 Apr 2020, Entropy
TL;DR: It is proved that under the Axiom of Choice, the existence of an empirical risk minimization (ERM) rule that has a vanishing learning capacity is equivalent to the assertion that the hypothesis space has a finite Vapnik–Chervonenkis (VC) dimension.
Abstract: In this paper, we introduce the notion of “learning capacity” for algorithms that learn from data, which is analogous to the Shannon channel capacity for communication systems. We show how “learning capacity” bridges the gap between statistical learning theory and information theory, and we will use it to derive generalization bounds for finite hypothesis spaces, differential privacy, and countable domains, among others. Moreover, we prove that under the Axiom of Choice, the existence of an empirical risk minimization (ERM) rule that has a vanishing learning capacity is equivalent to the assertion that the hypothesis space has a finite Vapnik–Chervonenkis (VC) dimension, thus establishing an equivalence relation between two of the most fundamental concepts in statistical learning theory and information theory. In addition, we show how the learning capacity of an algorithm provides important qualitative results, such as on the relation between generalization and algorithmic stability, information leakage, and data processing. Finally, we conclude by listing some open problems and suggesting future directions of research.
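
The paper's formal definition of learning capacity is not reproduced on this page. For orientation, the classical generalization bound for a finite hypothesis space, one of the settings the abstract mentions, can be stated as follows (standard statistical learning theory, not the paper's own notation):

    % Hoeffding's inequality plus a union bound over a finite hypothesis
    % space H gives: with probability at least 1 - \delta over an i.i.d.
    % sample of size n,
    \sup_{h \in \mathcal{H}} \left| R(h) - \hat{R}_n(h) \right|
        \le \sqrt{\frac{\ln|\mathcal{H}| + \ln(2/\delta)}{2n}}

Here R(h) is the true risk and \hat{R}_n(h) the empirical risk. Replacing the ln|H| term with a quantity controlled by the VC dimension extends such bounds to infinite hypothesis spaces, which is the regime of the paper's ERM equivalence result.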

8 citations

Proceedings Article
24 Aug 2007
TL;DR: The investigation indicates that, on small data sets and in real-life production, SVM models are capable of maintaining stable predictive accuracy and are more suitable for the noisy and dynamic spinning process.
Abstract: Although much work has been done to construct prediction models of yarn processing quality, the relation between spinning variables and yarn properties has not been established conclusively so far. Support vector machines (SVMs), based on statistical learning theory, are gaining applications in machine learning and pattern recognition because of their high accuracy and good generalization capability. This study briefly introduces SVM regression algorithms and presents an SVM model for predicting yarn properties. Model selection, which amounts to a search in hyper-parameter space, is performed to find suitable parameters with the grid-search method. Experimental results are compared with those of artificial neural network (ANN) models. The investigation indicates that, on small data sets and in real-life production, SVM models are capable of maintaining stable predictive accuracy and are more suitable for the noisy and dynamic spinning process.
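
The study's data and exact search grid are not given on this page; a minimal sketch of SVM regression with grid search over hyper-parameters might look like this (scikit-learn; the synthetic data stands in for spinning variables and yarn properties, and the grid values are illustrative assumptions):

    # Minimal sketch of SVR with grid search over hyper-parameters.
    # Synthetic data stands in for the paper's spinning variables and
    # yarn properties; the grid values are illustrative assumptions.
    from sklearn.datasets import make_regression
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVR

    X, y = make_regression(n_samples=200, n_features=8, noise=10.0,
                           random_state=0)

    param_grid = {
        "C": [0.1, 1.0, 10.0],
        "gamma": ["scale", 0.01, 0.1],
        "epsilon": [0.01, 0.1, 1.0],
    }
    search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)

    print("best hyper-parameters:", search.best_params_)
    print("best cross-validated R^2:", search.best_score_)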

8 citations


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47