scispace - formally typeset
Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications on this topic have been published, receiving 158,033 citations.


Papers
Wang, Qing, Qian, Weiqi, He, Kai-Feng 
01 Jan 2015
TL;DR: In this paper, the feasibility of applying SVMs to high-angle-of-attack unsteady aerodynamic modeling is examined, and several issues associated with the use of SVMs are discussed in detail, such as the selection of input variables, the selection of output variables, and the determination of SVM parameters.
Abstract: Accurate aerodynamic models are the basis of flight simulation and control law design. Mathematically modeling unsteady aerodynamics at high angles of attack bears great difficulties in model structure determination and parameter estimation due to little understanding of the flow mechanism. Support vector machines (SVMs) based on statistical learning theory provide a novel tool for nonlinear system modeling. The work presented here examines the feasibility of applying SVMs to high-angle-of-attack unsteady aerodynamic modeling. Mainly, after a review of SVMs, several issues associated with unsteady aerodynamic modeling by use of SVMs are discussed in detail, such as the selection of input variables, the selection of output variables, and the determination of SVM parameters. Least squares SVM (LS-SVM) models are set up from dynamic wind tunnel test data of a delta wing and an aircraft configuration, and then used to predict the aerodynamic responses in other tests. The predictions are in good agreement with the test data, which indicates the satisfactory learning and generalization performance of LS-SVMs.
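The LS-SVM variant named in the abstract replaces the standard SVM quadratic program with a single linear system. A minimal regression sketch in NumPy, assuming an RBF kernel; the kernel width `sigma` and regularization `gamma` below are illustrative choices, not values from the paper:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    # LS-SVM regression: equality constraints turn the usual QP into
    # the linear system  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # dual weights alpha, bias b

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    # f(x) = sum_i alpha_i k(x_i, x) + b
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

Unlike the standard SVM, every training point contributes to the LS-SVM solution (no sparsity); the payoff is that training reduces to one linear solve.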

21 citations

Journal ArticleDOI
TL;DR: A new linear optimization problem for maximizing the margin is formulated, several theoretically motivated learning algorithms are obtained, and an intuitive notion of margin is defined.

21 citations

Journal ArticleDOI
TL;DR: The experimental results show that the proposed PSO-ISVR predictor can improve the computational efficiency and the overall prediction accuracy compared with the results produced by the SVR and other regression methods.
Abstract: A new global nonlinear predictor with a particle swarm-optimized interval support vector regression (PSO-ISVR) is proposed to address three issues (viz., kernel selection, model optimization, and kernel method speed) encountered when applying SVR to large data sets. The prediction model reduces the SVR computing overhead by dividing the input space and adaptively selecting optimized kernel functions, with the optimal SVR parameters obtained by PSO. To quantify the quality of the predictor, its generalization performance and execution speed are investigated based on statistical learning theory. In addition, experiments using synthetic data as well as the stock volume-weighted average price are reported to demonstrate the effectiveness of the developed models. The experimental results show that the proposed PSO-ISVR predictor improves the computational efficiency and the overall prediction accuracy compared with the results produced by SVR and other regression methods. The proposed PSO-ISVR provides an important tool for nonlinear regression analysis of big data.
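The PSO-over-SVR idea in the abstract can be sketched with scikit-learn's SVR as the base learner and a hand-rolled particle swarm searching the (log C, log gamma) plane. The swarm size, inertia and acceleration constants, and the toy sine data are all illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# toy data standing in for the paper's data sets
X = rng.uniform(0.0, 6.0, size=(120, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(120)
X_tr, y_tr, X_va, y_va = X[:80], y[:80], X[80:], y[80:]

def fitness(p):
    # particle p = (log10 C, log10 gamma); minimize validation MSE
    model = SVR(C=10.0 ** p[0], gamma=10.0 ** p[1]).fit(X_tr, y_tr)
    return float(np.mean((model.predict(X_va) - y_va) ** 2))

# minimal particle swarm over the 2-D log-hyperparameter space
n_particles, n_iters = 8, 15
pos = rng.uniform(-2.0, 2.0, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # inertia + pull toward personal best + pull toward global best
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3.0, 3.0)
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
```

Searching in log space keeps the swarm's additive velocity updates sensible for hyperparameters that vary over orders of magnitude; the paper's interval-SVR partitioning of the input space is not reproduced here.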

21 citations

Proceedings ArticleDOI
24 Oct 1999
TL;DR: This work implements SVM as receivers in CDMA systems and compares SVM with traditional and adaptive receivers, and shows that a linear SVM converges to the MMSE receiver in the noiseless case.
Abstract: We apply support vector machines (SVMs), or optimal margin classifiers, to multiuser detection problems. SVMs are well suited to multiuser detection because they are based on principles of statistical learning theory, where the goal is to construct a maximum margin classifier. We show that a linear SVM converges to the MMSE receiver in the noiseless case. The SVMs are also modified to construct nonlinear receivers by using kernel functions, and these approximate optimal nonlinear multiuser detection receivers. Using the sequential minimal optimization (SMO) algorithm, we implement SVMs as receivers in CDMA systems and compare them with traditional and adaptive receivers. The simulation performance of the SVMs compares favorably with these receivers.
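A toy noiseless version of the comparison can be sketched as follows. The 8-chip Walsh spreading codes, 3-user setup, and use of scikit-learn's SVC are illustrative assumptions; in this synchronous, noiseless setting both the linear SVM and the (near-zero-noise) MMSE receiver recover the desired user's bits exactly:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 8-chip Walsh (Hadamard) spreading codes for 3 users
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H8 = np.kron(H2, np.kron(H2, H2))
S = H8[:, 1:4]                              # chips x users

# noiseless synchronous CDMA: received chips are a signed sum of codes
B = rng.choice([-1, 1], size=(200, 3))      # BPSK bits, all users
R = B @ S.T

# linear SVM trained to detect user 0's bit from the chip vector
svm = SVC(kernel="linear", C=100.0).fit(R, B[:, 0])

# MMSE receiver for user 0 (approaches zero-forcing as noise -> 0)
sigma2 = 1e-3
w_mmse = np.linalg.solve(S @ S.T + sigma2 * np.eye(8), S[:, 0])

# fresh bits to detect
B_test = rng.choice([-1, 1], size=(100, 3))
R_test = B_test @ S.T
```

With orthogonal codes the received vectors are linearly separable in the desired user's bit, so the trained hyperplane and the MMSE weight vector make identical sign decisions here; the paper's interesting regime is noisy, correlated codes, where the convergence result matters.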

21 citations

Proceedings Article
03 May 2021
TL;DR: In this article, few-shot learning via representation learning is studied, where one uses T source tasks with n1 data points per task to learn a representation in order to reduce the sample complexity of a target task for which there are only n2 (≪ n1) data points.
Abstract: This paper studies few-shot learning via representation learning, where one uses T source tasks with n1 data points per task to learn a representation in order to reduce the sample complexity of a target task for which there are only n2 (≪ n1) data points. Specifically, we focus on the setting where there exists a good common representation between source and target, and our goal is to understand how large a sample-size reduction is possible. First, we study the setting where this common representation is low-dimensional and provide a risk bound of Õ(dk/(n1T) + k/n2) on the target task for the linear representation class; here d is the ambient input dimension and k (≪ d) is the dimension of the representation. This result bypasses the Ω(1/T) barrier under the i.i.d. task assumption, and captures the desired property that all n1T samples from the source tasks can be pooled together for representation learning. We further extend this result to handle a general representation function class and obtain a similar result. Next, we consider the setting where the common representation may be high-dimensional but capacity-constrained (say, in norm); here, we again demonstrate the advantage of representation learning in both high-dimensional linear regression and neural networks, and show that representation learning can fully utilize all n1T samples from the source tasks.
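The risk bound quoted in the abstract can be read term by term; this is an interpretation consistent with the abstract's own description (d the ambient dimension, k the representation dimension), not a derivation:

```latex
\text{target risk} \;=\;
\underbrace{\tilde O\!\left(\tfrac{dk}{n_1 T}\right)}_{\text{learn the } k\text{-dim.\ representation from all } n_1 T \text{ pooled source samples}}
\;+\;
\underbrace{\tilde O\!\left(\tfrac{k}{n_2}\right)}_{\text{fit the } k\text{-dim.\ target predictor from } n_2 \text{ samples}}
```

The first term shrinks with the total pooled source sample count n1T (the point of bypassing the Ω(1/T) barrier), while the second depends only on k rather than d, which is where the few-shot saving on the target task comes from.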

21 citations


Network Information
Related Topics (5)
Artificial neural network
207K papers, 4.5M citations
86% related
Cluster analysis
146.5K papers, 2.9M citations
82% related
Feature extraction
111.8K papers, 2.1M citations
81% related
Optimization problem
96.4K papers, 2.1M citations
80% related
Fuzzy logic
151.2K papers, 2.3M citations
79% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2023	9
2022	19
2021	59
2020	69
2019	72
2018	47