Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published within this topic, receiving 158,033 citations.


Papers
Posted Content
TL;DR: The main finding from this research is that whereas the 1AA technique is more predisposed to yielding unclassified and mixed pixels, the resulting classification accuracy is not significantly different from that of the 1A1 approach.
Abstract: Support Vector Machines (SVMs) are a relatively new supervised classification technique in the land cover mapping community. They have their roots in statistical learning theory and have gained prominence because they are robust, accurate, and effective even when using a small training sample. By their nature SVMs are essentially binary classifiers; however, they can be adapted to handle the multiple classification tasks common in remote sensing studies. The two approaches commonly used are the One-Against-One (1A1) and One-Against-All (1AA) techniques. In this paper, these approaches are evaluated with respect to their impact on and implications for land cover mapping. The main finding from this research is that whereas the 1AA technique is more predisposed to yielding unclassified and mixed pixels, the resulting classification accuracy is not significantly different from that of the 1A1 approach. It is the authors' conclusion, therefore, that the choice of technique ultimately boils down to personal preference and the uniqueness of the dataset at hand.

87 citations
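Both multi-class strategies from the abstract above correspond to standard library wrappers. As a minimal sketch (using scikit-learn and synthetic data rather than the paper's land cover imagery), the two can be compared like this:

```python
# Minimal sketch: one-against-one (1A1) vs. one-against-all (1AA)
# multi-class SVM strategies, on synthetic data (not the paper's imagery).
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, clf in [("1A1", OneVsOneClassifier(SVC(kernel="rbf"))),
                  ("1AA", OneVsRestClassifier(SVC(kernel="rbf")))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

For k classes, 1A1 trains k(k-1)/2 pairwise classifiers while 1AA trains k one-versus-rest classifiers; on many datasets the two give similar accuracy, consistent with the paper's finding.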

Proceedings ArticleDOI
07 May 2010
TL;DR: This paper overviews pattern recognition techniques and describes the state of the art of SVMs in the field of pattern recognition.
Abstract: It has been more than 30 years since statistical learning theory (SLT) was introduced in the field of machine learning. Its objective is to provide a framework for studying the problem of inference, that is, of gaining knowledge, making predictions, making decisions, or constructing models from a set of data. The Support Vector Machine, a method based on SLT, then emerged and has become a widely accepted method for solving real-world problems. This paper overviews pattern recognition techniques and describes the state of the art of SVMs in the field of pattern recognition.

87 citations
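The inferential framework the abstract alludes to is often summarized by bounds relating expected and empirical risk. As a textbook-style illustration (not taken from this paper), Vapnik's classic generalization bound states that, with probability at least $1-\eta$, a binary classifier $f$ from a class of VC dimension $h$ trained on $\ell$ samples satisfies

$$
R(f) \;\le\; R_{\mathrm{emp}}(f) \;+\; \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}}{\ell}},
$$

where $R$ is the expected risk and $R_{\mathrm{emp}}$ the empirical (training) risk; SVMs are motivated by controlling the capacity term on the right-hand side.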

Journal ArticleDOI
TL;DR: Implementation results show that the performance of the combined kernel approach is better than that of the single kernel approach, and the SVM-based method was found to have better performance on two epidemiological indices, sensitivity and specificity.
Abstract: A support vector machine (SVM) is a novel classifier based on statistical learning theory. To increase classification performance, SVMs are usually used with kernels. In this study, we first investigated the performance of SVMs with kernels. Several kernel functions (polynomial, RBF, and their summation and multiplication) were employed in the SVM, and the feature selection approach of Hermes and Buhmann [Hermes, L., & Buhmann, J. M. (2000). Feature selection for support vector machines. In Proceedings of the international conference on pattern recognition (ICPR'00) (Vol. 2, pp. 716-719)] was utilized to determine the important features. A hypertension diagnosis case was then implemented, and 13 anthropometrical factors related to hypertension were selected. Implementation results show that the performance of the combined kernel approach is better than that of the single kernel approach. Compared with the backpropagation neural network method, the SVM-based method was found to have better performance on two epidemiological indices, sensitivity and specificity.

86 citations
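Combined kernels of the kind this study evaluates can be built by summing or multiplying base kernels, since both operations preserve positive semidefiniteness. A minimal sketch with scikit-learn's callable-kernel interface on synthetic data (the hypertension dataset, selected features, and the paper's kernel parameters are not reproduced here; only the 13-feature count is mirrored):

```python
# Minimal sketch: a summed polynomial + RBF kernel passed to SVC as a
# callable kernel; evaluated with sensitivity and specificity.
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def summed_kernel(A, B, gamma=0.5, degree=3):
    # A sum of valid kernels is itself a valid (positive semidefinite) kernel.
    return polynomial_kernel(A, B, degree=degree) + rbf_kernel(A, B, gamma=gamma)

# Synthetic stand-in with 13 features, echoing the 13 anthropometrical factors.
X, y = make_classification(n_samples=400, n_features=13, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

clf = SVC(kernel=summed_kernel).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```

A multiplication (product) kernel is obtained the same way by multiplying the two Gram matrices elementwise, which also yields a valid kernel.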

Journal ArticleDOI
TL;DR: This paper decomposes the expected reconstruction error of the MahNMF into the estimation error and the approximation error, and shows how the reduced dimensionality affects the estimation and approximation errors.
Abstract: Extracting low-rank and sparse structures from matrices has been extensively studied in machine learning, compressed sensing, and conventional signal processing, and has been widely applied to recommendation systems, image reconstruction, visual analytics, and brain signal processing. Manhattan nonnegative matrix factorization (MahNMF) is an extension of the conventional NMF, which models heavy-tailed Laplacian noise by minimizing the Manhattan distance between a nonnegative matrix $X$ and the product of two nonnegative low-rank factor matrices. Fast algorithms have been developed to restore the low-rank and sparse structures of $X$ in the MahNMF. In this paper, we study the statistical performance of the MahNMF within the framework of statistical learning theory. We decompose the expected reconstruction error of the MahNMF into the estimation error and the approximation error. The estimation error is bounded by the generalization error bounds of the MahNMF, while the approximation error is analyzed using the asymptotic results of the minimum distortion of vector quantization. The generalization error bound is valuable for determining the size of the training sample needed to guarantee a desirable upper bound for the gap between the expected and empirical reconstruction errors. The statistical performance analysis shows how the reduced dimensionality affects the estimation and approximation errors. Our framework can also be used for analyzing the performance of the NMF.

85 citations
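For orientation (a schematic rendering under assumed notation, not the paper's exact equations), the MahNMF objective minimizes the entrywise $\ell_1$ (Manhattan) distance, and the expected reconstruction error $\mathcal{E}$ admits the standard additive split into estimation and approximation terms:

$$
\min_{W \ge 0,\, H \ge 0} \ \lVert X - WH \rVert_1 \;=\; \sum_{i,j} \bigl| X_{ij} - (WH)_{ij} \bigr|,
$$
$$
\mathcal{E}(\hat{W}\hat{H})
\;=\;
\underbrace{\mathcal{E}(\hat{W}\hat{H}) - \inf_{W,H}\mathcal{E}(WH)}_{\text{estimation error}}
\;+\;
\underbrace{\inf_{W,H}\mathcal{E}(WH)}_{\text{approximation error}},
$$

where $\hat{W},\hat{H}$ are the factors learned from the training sample and the infimum ranges over nonnegative factor pairs of the chosen reduced rank.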

Journal ArticleDOI
TL;DR: A general function is derived describing the conditioning of a single stimulus component in a discriminative situation; it generates empirically testable formulas for learning of classical two-alternative discriminations, probabilistic discriminations, and discriminations based on the outcomes of preceding trials in partial reinforcement experiments.
Abstract: A general function is derived describing the conditioning of a single stimulus component in a discriminative situation. This function, together with the combinatorial rules of statistical learning theory [5, 12], generates empirically testable formulas for learning of classical two-alternative discriminations, probabilistic discriminations, and discriminations based on the outcomes of preceding trials in partial reinforcement experiments.

85 citations
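The paper's general function is not reproduced here, but the combinatorial rules of classical (stimulus sampling) statistical learning theory build on linear-operator updates of the following textbook form, where $p_n$ is the response probability on trial $n$, $\theta$ is the learning-rate parameter, and $\lambda$ is the asymptote set by the reinforcement schedule (an illustrative standard model, not the paper's exact equation):

$$
p_{n+1} = (1-\theta)\,p_n + \theta\lambda
\quad\Longrightarrow\quad
p_n = \lambda - (\lambda - p_1)(1-\theta)^{\,n-1}.
$$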


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance Metrics
Number of papers in the topic in previous years:

Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47