Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have appeared within this topic, receiving 158,033 citations.


Papers
01 Jan 2011
TL;DR: This study adopts SVM and RVM as regression techniques for prediction of rainfall in Vellore (India) and shows that the RVM is a more robust model than the SVM.
Abstract: This article adopts the Support Vector Machine (SVM) and the Relevance Vector Machine (RVM) for prediction of rainfall in Vellore (India). SVM is firmly grounded in statistical learning theory; RVM is a model with a probabilistic (Bayesian) basis. Both use air temperature (T), sunshine, humidity, and wind speed (Va) as input variables, and both are applied as regression techniques. Equations have also been developed for prediction of rainfall, and the developed RVM additionally gives the variance of the predicted rainfall. This study shows that the RVM is a more robust model than the SVM.
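
As an illustration of the SVM half of this setup, here is a minimal regression sketch using scikit-learn's SVR (scikit-learn ships no RVM, so only the SVM side is shown). The four input names mirror the abstract, but the synthetic data and hyperparameters are assumptions, not the authors' dataset.

```python
# Minimal SVR regression sketch, assuming synthetic stand-ins for the
# paper's inputs (air temperature, sunshine, humidity, wind speed).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # hypothetical T, sunshine, humidity, Va
y = X @ np.array([0.5, 0.3, 0.8, -0.2]) + rng.normal(scale=0.1, size=200)

# Standardize inputs, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)
print("predictions on synthetic inputs:", model.predict(X[:3]))
```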

18 citations

Posted Content
TL;DR: A new theoretical framework for model compression is developed, and a new pruning method called spectral pruning is proposed based on it; the framework defines "degrees of freedom" that quantify the intrinsic dimensionality of a model via the eigenvalue distribution of the covariance matrix across the internal nodes.
Abstract: Compression techniques for deep neural network models are becoming very important for the efficient execution of high-performance deep learning systems on edge-computing devices. The concept of model compression is also important for analyzing the generalization error of deep learning, known as the compression-based error bound. However, there is still a huge gap between practically effective compression methods and their rigorous grounding in statistical learning theory. To resolve this issue, we develop a new theoretical framework for model compression and propose a new pruning method called spectral pruning based on this framework. We define the "degrees of freedom" to quantify the intrinsic dimensionality of a model by using the eigenvalue distribution of the covariance matrix across the internal nodes, and we show that the compression ability is essentially controlled by this quantity. Moreover, we present a sharp generalization error bound for the compressed model and characterize the bias-variance tradeoff induced by the compression procedure. We apply our method to several datasets to justify our theoretical analyses and show the superiority of the proposed method.
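
A small sketch of the "degrees of freedom" idea: a ridge-style effective dimension computed from the eigenvalues of the covariance matrix of a layer's activations. The specific formula (the sum of mu_i / (mu_i + lam) over eigenvalues mu_i), the regularization value lam, and the random activations are assumptions for illustration, not the paper's exact definition.

```python
# Assumed ridge-style degrees-of-freedom computation on hidden activations.
import numpy as np

def degrees_of_freedom(activations, lam):
    """activations: (n_samples, n_nodes) internal-node outputs."""
    cov = np.cov(activations, rowvar=False)   # covariance across nodes
    eigvals = np.linalg.eigvalsh(cov)         # its eigenvalue spectrum
    return np.sum(eigvals / (eigvals + lam))  # sum_i mu_i / (mu_i + lam)

A = np.random.default_rng(0).normal(size=(1000, 256))  # fake activations
print(degrees_of_freedom(A, lam=1e-2))  # small lam -> close to node count
```

A layer whose activation spectrum decays quickly has far fewer effective degrees of freedom than nodes, which is the intuition behind pruning to the intrinsic dimensionality.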

18 citations

Proceedings ArticleDOI
14 Jun 2009
TL;DR: The experimental results show that SVM classification with FD features performs well on EEG signals, indicating that this classification method is valid and has promising applications.
Abstract: The support vector machine (SVM) is a machine learning technique widely applied to classification problems. SVM is based on Vapnik's statistical learning theory and has been successively extended by a number of researchers. The electroencephalogram (EEG) signal, in turn, captures the electrical activity of the brain and is an important source of information for studying neurological disorders. In order to extract relevant information from the EEG signal, a variety of computerized analysis methods have been developed. Recent studies indicate that methods based on nonlinear dynamics theory can extract valuable information from neuronal dynamics; however, many of these methods need large amounts of data and are computationally expensive. From chaos theory, a global value that is relatively simple to compute is the fractal dimension (FD), which measures the geometrical complexity of a time series. The FD of a waveform is a powerful tool for transient detection, and in EEG analysis this feature can be used to identify and distinguish specific states of physiologic function. A variety of algorithms are available for computing the FD. In this work, we employ SVM to classify EEG signals from healthy subjects and epileptic subjects, using the FD as the feature vector. The experimental results show that SVM classification with FD performs well on EEG signals, indicating that this classification method is valid and has promising applications.
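
The abstract names the fractal dimension as the feature but not a specific estimator. The sketch below uses the Katz FD, one of the standard algorithms, with scikit-learn's SVC on synthetic signals; the choice of estimator and the data are assumptions, not the paper's pipeline.

```python
# FD-feature + SVM classification sketch on synthetic stand-ins for EEG.
import numpy as np
from sklearn.svm import SVC

def katz_fd(x):
    """Katz fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    L = np.abs(np.diff(x)).sum()    # total curve length
    d = np.abs(x - x[0]).max()      # max distance from the first sample
    n = len(x) - 1                  # number of steps along the curve
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
smooth = [np.sin(8 * np.pi * t) + 0.1 * rng.normal(size=512) for _ in range(50)]
rough = [rng.normal(size=512) for _ in range(50)]   # rougher dynamics, higher FD

X = np.array([[katz_fd(s)] for s in smooth + rough])  # one FD feature per signal
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```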

18 citations

Proceedings ArticleDOI
14 Nov 2005
TL;DR: Simulation results show that the proposed LS-SVM estimation algorithm provides a powerful tool for identification and soft-sensor modeling and has promising application in industrial processes.
Abstract: The support vector machine (SVM) is a novel machine learning method based on small-sample statistical learning theory (SLT) and is powerful for problems with small samples, nonlinearity, high dimension, and local minima. SVMs have been very successful in pattern recognition, fault diagnosis, and function estimation problems. The least squares support vector machine (LS-SVM) is an SVM variant that involves equality instead of inequality constraints and works with a least squares cost function. This paper discusses the LS-SVM estimation algorithm and introduces applications of the method to nonlinear control systems. Identification of MIMO models and soft-sensor modeling based on LS-SVM are then proposed. The simulation results show that the proposed method provides a powerful tool for identification and soft-sensor modeling and has promising application in industrial processes.
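
A minimal sketch of the LS-SVM idea the abstract describes: with equality constraints and a least squares cost, training reduces to solving one linear system in the dual variables rather than a quadratic program. The RBF kernel, its width, the regularization gamma, and the toy data are illustrative assumptions.

```python
# LS-SVM regression sketch: training is a single linear solve.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    # Dual system: [[0, 1^T], [1, K + I/gamma]] @ [b, alpha] = [0, y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]            # bias b, dual weights alpha

X = np.linspace(-3, 3, 60)[:, None]
y = np.sin(X).ravel() + 0.05 * np.random.default_rng(0).normal(size=60)
b, alpha = lssvm_fit(X, y)
y_hat = rbf_kernel(X, X) @ alpha + b  # f(x) = sum_i alpha_i K(x, x_i) + b
print("train RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```

The trade-off behind this design: the (n+1)-by-(n+1) solve is simpler and faster than an SVM quadratic program for small samples, at the cost of losing the sparsity of the standard SVM solution.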

18 citations

Proceedings Article
21 Jul 2002
TL;DR: An algorithm for learning stable machines, motivated by recent results in statistical learning theory, that is similar to Breiman's bagging but with some important differences: it computes an ensemble combination of machines trained on small random sub-samples of an initial training set.
Abstract: We present an algorithm for learning stable machines which is motivated by recent results in statistical learning theory. The algorithm is similar to Breiman's bagging, with some important differences: it computes an ensemble combination of machines trained on small random sub-samples of an initial training set. A remarkable property is that it is often possible to use just the empirical error of these combinations of machines for model selection. We report experiments using support vector machines and neural networks that validate the theory.
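
The subsample-ensemble scheme described above can be approximated with scikit-learn's BaggingClassifier by drawing small sub-samples without replacement (sometimes called subagging). The dataset, subsample fraction, and ensemble size below are illustrative assumptions, not the paper's experiments.

```python
# Ensemble of SVMs, each trained on a small random sub-sample.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
ensemble = BaggingClassifier(
    SVC(kernel="rbf"),    # base machine: an SVM
    n_estimators=30,      # number of machines in the combination
    max_samples=0.1,      # each trained on 10% of the training set
    bootstrap=False,      # draw without replacement (unlike bagging proper)
    random_state=0,
).fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```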

18 citations


Network Information
Related Topics (5)

Topic                       Papers    Citations   Related
Artificial neural network   207K      4.5M        86%
Cluster analysis            146.5K    2.9M        82%
Feature extraction          111.8K    2.1M        81%
Optimization problem        96.4K     2.1M        80%
Fuzzy logic                 151.2K    2.3M        79%
Performance Metrics
No. of papers in the topic in previous years

Year   Papers
2023   9
2022   19
2021   59
2020   69
2019   72
2018   47