Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over the lifetime, 1618 publications have been published within this topic receiving 158033 citations.


Papers
Proceedings ArticleDOI
29 Oct 2007
TL;DR: The definitions of random vectors, their distribution function, and their expectation are given, and the key theorem of learning theory on set-valued probability space is stated and proven, laying the theoretical foundation for establishing statistical learning theory on set-valued probability space.
Abstract: Statistical learning theory based on random samples on probability space is currently regarded as the best theory of statistical learning from small samples and has become a new hot field in machine learning after neural networks. However, the theory cannot handle small-sample statistical learning problems on set-valued probability space, which arise widely in the real world. This paper discusses statistical learning theory on a special kind of set-valued probability space. First, we give the definitions of random vectors, their distribution function, and their expectation; we then define the expected risk functional, the empirical risk functional, and the consistency of the principle (method) of empirical risk minimization (ERM) on set-valued probability space. Finally, we state and prove the key theorem of learning theory on set-valued probability space, which lays the theoretical foundation for establishing statistical learning theory on this space.
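The empirical risk minimization principle discussed above can be illustrated with a minimal sketch: given a labeled sample and a finite hypothesis class, ERM picks the hypothesis with the smallest average loss on the sample. The dataset, threshold classifiers, and 0-1 loss below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of empirical risk minimization (ERM) over a finite
# hypothesis class of threshold classifiers h_t(x) = sign(x - t),
# using 0-1 loss. Data and hypotheses are illustrative assumptions.

# Labeled sample: (x, y) pairs with y in {-1, +1}.
sample = [(-2, -1), (-1, -1), (0.5, 1), (1, 1), (3, 1)]

# A small hypothesis class, indexed by the threshold t.
thresholds = [-3, -1.5, 0, 2]

def empirical_risk(t, data):
    """Fraction of sample points misclassified by h_t."""
    errors = sum(1 for x, y in data if (1 if x > t else -1) != y)
    return errors / len(data)

# ERM selects the hypothesis minimizing the empirical risk functional.
best_t = min(thresholds, key=lambda t: empirical_risk(t, sample))
print(best_t, empirical_risk(best_t, sample))  # → 0 0.0
```

The key theorem the paper refers to concerns when minimizing this empirical risk is consistent, i.e. when the empirical minimizer's risk converges to the minimum of the (unobservable) expected risk as the sample grows.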

1 citations

Proceedings ArticleDOI
01 Dec 2016
TL;DR: This paper detects community structure using an SVM; the algorithm is tested on LFR benchmark datasets and on real-world datasets, including the karate and dolphin networks, where it achieves a higher accuracy rate.
Abstract: Many things in the world can be abstracted as networks, and it is important to detect their community structure. Many classification algorithms are currently used to detect communities, but their accuracy does not meet requirements. The SVM, a well-known machine learning algorithm grounded in statistical learning theory, has a number of advantages in solving non-linear classification problems: it can find categories with higher accuracy and performs well even when the network is small. We therefore detect communities using the SVM and test the algorithm on LFR benchmark datasets and on real-world datasets, including the karate and dolphin networks, where it achieves a higher accuracy rate.

1 citations

Proceedings ArticleDOI
01 Aug 2006
TL;DR: Some new concepts of learning theory are given, and the key theorem of statistical learning theory is stated and proven for samples corrupted by equality-expect noise on quasi-probability space.
Abstract: Based on statistical learning theory on probability space and good properties of quasi-probability, some important inequalities are proven on quasi-probability space in this paper. Furthermore, some new concepts of learning theory are given and the key theorem of statistical learning theory is given and proven when samples are corrupted by equality-expect noise on quasi-probability space.

1 citations

Proceedings ArticleDOI
24 Jun 2022
TL;DR: The Support Vector Machine (SVM), as discussed by the authors, is a binary classifier based on the linear classifier with the optimal margin in feature space; the learning strategy is therefore to maximize the margin, which can be transformed into a convex quadratic programming problem.
Abstract: The support vector method was proposed by V. Vapnik in 1965, when he was trying to solve problems in pattern recognition. In 1971, Kimeldorf proposed a method of constructing kernel space based on support vectors. In the 1990s, V. Vapnik formally introduced the Support Vector Machine (SVM) method within statistical learning. Since then, SVM has been widely applied in pattern recognition, natural language processing, and other areas. Informally, SVM is a binary classifier. The model is based on the linear classifier with the optimal margin in feature space, so the learning strategy is to maximize the margin, which can be transformed into a convex quadratic programming problem. It uses the principle of structural risk minimization instead of empirical risk minimization to fit small data samples. The kernel trick is used to transform a non-linear sample space into a linear one, decreasing the complexity of the algorithm. Even so, SVM still has broad prospects for further development.
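The margin-maximization idea in the abstract can be sketched without a QP solver: the soft-margin primal has an equivalent unconstrained form, regularized hinge loss, which sub-gradient descent can minimize. The toy data, learning rate, and regularization strength below are illustrative assumptions; production SVM solvers work on the convex quadratic program instead.

```python
# Hedged sketch: a linear SVM trained by sub-gradient descent on the
# regularized hinge loss  lam*||w||^2 + mean(max(0, 1 - y*(w.x + b))),
# the unconstrained form of the soft-margin primal. The regularizer
# plays the role of the structural risk term. Toy data is assumed.

# Linearly separable 2D points with labels in {-1, +1}.
X = [(1.0, 2.0), (2.0, 3.0), (2.5, 2.5),
     (-1.0, -1.5), (-2.0, -1.0), (-1.5, -2.5)]
y = [1, 1, 1, -1, -1, -1]

w = [0.0, 0.0]
b = 0.0
lam = 0.01   # regularization strength (controls the margin/fit trade-off)
lr = 0.1     # step size for the sub-gradient updates

for epoch in range(200):
    for (x1, x2), label in zip(X, y):
        margin = label * (w[0] * x1 + w[1] * x2 + b)
        if margin < 1:
            # Point violates the margin: hinge term is active.
            w[0] += lr * (label * x1 - 2 * lam * w[0])
            w[1] += lr * (label * x2 - 2 * lam * w[1])
            b += lr * label
        else:
            # Only the regularizer contributes: shrink w toward a wider margin.
            w[0] -= lr * 2 * lam * w[0]
            w[1] -= lr * 2 * lam * w[1]

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

print([predict(x1, x2) for x1, x2 in X])
```

On this separable toy set the learned hyperplane classifies all training points correctly; a kernelized SVM replaces the dot products with a kernel function to handle non-linear sample spaces, as the abstract notes.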

1 citations

Proceedings ArticleDOI
11 Jul 2007
TL;DR: An automatic mail-head character categorization system is implemented using a Support Vector Machine; training time is substantially reduced because of the low-dimensional feature space.
Abstract: The Support Vector Machine is a machine learning method based on statistical learning theory, and it has been successfully applied to spam filtering systems. This paper presents research on mail-head character categorization using a Support Vector Machine (SVM). A total of 106 features were extracted, and the filtering performance is good. Training time is substantially reduced because of the low-dimensional feature space. Finally, we implement an automatic mail-head character categorization system and give test results.

1 citations


Network Information
Related Topics (5)
- Artificial neural network: 207K papers, 4.5M citations, 86% related
- Cluster analysis: 146.5K papers, 2.9M citations, 82% related
- Feature extraction: 111.8K papers, 2.1M citations, 81% related
- Optimization problem: 96.4K papers, 2.1M citations, 80% related
- Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47