Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published on this topic, receiving 158,033 citations.


Papers
Proceedings ArticleDOI
13 May 2002
TL;DR: This work explores the use of active learning in the annotation engine to minimize the number of training samples that must be labeled for satisfactory performance in statistical modeling for content-based retrieval.
Abstract: Statistical modeling for content-based retrieval is examined in the context of the recent TREC Video benchmark exercise. The TREC Video exercise can be viewed as a test bed for evaluating and comparing a variety of algorithms on a set of high-level queries for multimedia retrieval. We report on the use of techniques adopted from statistical learning theory. Our method, like most statistical methods, depends on training models on large data sets. Many statistical models, such as Gaussian mixture models and support vector machines, can be considered; only a few are exploited in this preliminary report. Training requires a large amount of annotated (labeled) data. Thus, we explore the use of active learning in the annotation engine to minimize the number of training samples that must be labeled for satisfactory performance.
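The paper gives no implementation details for its annotation engine; the sketch below is a generic pool-based uncertainty-sampling loop with an SVM, illustrating how active learning can cut the number of samples an annotator must label. The synthetic dataset, batch size, and number of rounds are illustrative assumptions, not the authors' setup.

```python
# Minimal pool-based active-learning sketch (uncertainty sampling with an SVM).
# Illustrative only: dataset, model, and batch size are assumptions,
# not the configuration used in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = list(range(10))                  # small labeled seed set
pool = [i for i in range(len(X)) if i not in labeled]

for round_ in range(10):
    clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
    # Query the pool samples closest to the decision boundary
    # (smallest absolute decision-function value = most uncertain).
    margins = np.abs(clf.decision_function(X[pool]))
    query = [pool[i] for i in np.argsort(margins)[:5]]
    labeled.extend(query)                  # an annotator would label these
    pool = [i for i in pool if i not in query]

print("labeled samples used:", len(labeled))
print("accuracy on full set:", clf.score(X, y))
```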

7 citations

Posted Content
01 Oct 2018
TL;DR: A novel measure-theoretic theory for machine learning that does not require statistical assumptions is introduced; from it, a new regularization method in deep learning is derived and shown to outperform previous methods on CIFAR-10, CIFAR-100, and SVHN.
Abstract: This paper introduces a novel measure-theoretic theory for machine learning that does not require statistical assumptions. Based on this theory, a new regularization method in deep learning is derived and shown to outperform previous methods on CIFAR-10, CIFAR-100, and SVHN. Moreover, the proposed theory provides a theoretical basis for a family of practically successful regularization methods in deep learning. We discuss several consequences of our results on one-shot learning, representation learning, deep learning, and curriculum learning. Unlike statistical learning theory, the proposed learning theory analyzes each problem instance individually via measure theory, rather than a set of problem instances via statistics. As a result, it provides different types of results and insights when compared to statistical learning theory.

7 citations

Journal Article
TL;DR: Wang et al. proposed an evolutionary statistical learning theory, based on the Vapnik-Chervonenkis dimension, to address local-optima and over-fitting problems.
Abstract: Statistical learning theory, developed by Vapnik, is a learning theory based on the Vapnik-Chervonenkis dimension, and it has served as a good analytical tool in learning models. Learning theories in general suffer from several problems, among them local optima and over-fitting. Statistical learning theory shares these problems, because the kernel type, kernel parameters, and regularization constant C are determined subjectively by the art of the researcher. We therefore propose an evolutionary statistical learning theory to settle the problems of the original statistical learning theory, constructed by combining evolutionary computing with statistical learning theory. We verify the improved performance of the evolutionary statistical learning theory using data sets from the KDD Cup. Keywords: Evolutionary computing, Local optima, Over-fitting, Statistical learning theory
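As a rough illustration of the idea, the sketch below evolves SVM hyperparameters (log10 C and log10 gamma) with truncation selection and Gaussian mutation, scoring fitness by cross-validation. The encoding, operators, dataset, and population sizes are assumptions; the paper's actual evolutionary scheme and its KDD Cup data are not reproduced here.

```python
# Minimal evolutionary search over SVM hyperparameters (C, gamma).
# A hedged sketch of the general idea, not the paper's method.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)

def fitness(log_c, log_gamma):
    # Cross-validated accuracy as the fitness of one individual.
    clf = SVC(C=10.0**log_c, gamma=10.0**log_gamma)
    return cross_val_score(clf, X, y, cv=3).mean()

# Population of (log10 C, log10 gamma) individuals.
pop = rng.uniform(low=[-2, -5], high=[3, 1], size=(8, 2))
for gen in range(10):
    scores = np.array([fitness(c, g) for c, g in pop])
    parents = pop[np.argsort(scores)[-4:]]                    # keep fittest half
    children = parents + rng.normal(0, 0.3, parents.shape)    # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c, g) for c, g in pop])]
print("best log10(C), log10(gamma):", best)
```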

7 citations

Book ChapterDOI
01 Jan 2017
TL;DR: This work proposes a new learning framework, grounded in Statistical Learning Theory, that includes resource constraints inside the learning process itself, making it possible to train advanced resource-sparing ML models and deploy them efficiently on smart mobile devices.
Abstract: Most state-of-the-art machine learning (ML) algorithms do not consider the computational constraints of implementing their learned models on mobile devices. These constraints include, for example, the limited depth of the arithmetic unit, the available memory, and the battery capacity. We propose a new learning framework, grounded in Statistical Learning Theory, that includes these constraints inside the learning process itself. This framework makes it possible to train advanced resource-sparing ML models and to deploy them efficiently on smart mobile devices. The advantages of our proposal are demonstrated on a smartphone-based Human Activity Recognition application and compared against a conventional ML approach.
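The chapter folds resource constraints into the learning process itself; as a much simpler stand-in, the sketch below trains candidate SVMs and keeps the most accurate one whose estimated memory footprint fits a device budget. The budget, footprint proxy, dataset, and filter-after-training loop are invented for illustration and are not the chapter's method.

```python
# Rough sketch of resource-constrained model selection: pick the best
# model that fits a hypothetical on-device memory budget.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

BUDGET_BYTES = 40_000          # hypothetical memory budget, invented number
best_acc, best_model = 0.0, None
for C in [0.1, 1.0, 10.0, 100.0]:
    clf = SVC(kernel="rbf", C=C).fit(X_tr, y_tr)
    # Footprint proxy: stored support vectors plus dual coefficients.
    footprint = clf.support_vectors_.nbytes + clf.dual_coef_.nbytes
    acc = clf.score(X_va, y_va)
    if footprint <= BUDGET_BYTES and acc > best_acc:
        best_acc, best_model = acc, clf

print("selected model:", best_model, "val accuracy:", best_acc)
```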

7 citations

Xiao Jian-hua
01 Jan 2004
TL;DR: Simulation experiments show that SVR has good generalization ability and noise tolerance; its prediction results are compared with those of a BP network and an RBF network.
Abstract: Support vector machines (SVMs) are a novel class of machine learning methods based on statistical learning theory, developed to solve classification and regression problems. This paper applies support vector regression (SVR) to chaotic time series prediction and compares the prediction results with a BP network and an RBF network. Simulation experiments show that SVR has good generalization ability and is capable of tolerating noise.
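As a minimal illustration of the approach, the sketch below fits an SVR to a delay embedding of the logistic map, a standard chaotic series, and reports one-step-ahead error. The map, embedding dimension, and SVR settings are illustrative assumptions; the paper's series and parameters may differ.

```python
# Minimal sketch of SVR for chaotic time-series prediction.
import numpy as np
from sklearn.svm import SVR

# Chaotic series from the logistic map x_{n+1} = 4 x_n (1 - x_n).
x = np.empty(500)
x[0] = 0.3
for n in range(499):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

# Delay embedding: predict x[t] from the previous d values.
d = 3
X = np.array([x[t - d:t] for t in range(d, len(x))])
y = x[d:]

split = 400
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"one-step-ahead RMSE on held-out points: {rmse:.4f}")
```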

7 citations


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47