Topic

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published on this topic, receiving 158,033 citations.


Papers
Journal Article
TL;DR: A new algorithm is proposed that takes into account the extreme values within a limited range of parameter values; results show that it achieves a better recognition rate.
Abstract: Finding the optimal SVM parameters so that the trained model achieves a good recognition rate on unknown test samples is a key issue in solving practical problems. Traditional cross-validation searches for the optimal parameters only over the whole range of values. A new algorithm is proposed that also takes into account the extreme values within a limited range. Results show that this algorithm achieves a better recognition rate.
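For context, the sketch below shows the traditional grid-search cross-validation baseline that the abstract contrasts against, using scikit-learn; the parameter grid and dataset are illustrative assumptions and do not come from the paper, whose limited-range variant is not reproduced here.

```python
# A minimal sketch of standard cross-validated grid search for SVM parameters.
# The parameter grid and the digits dataset are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Search C and gamma over a coarse logarithmic grid; the paper's variant would
# additionally probe extreme values within a restricted range.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2, 1e-1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("cross-validated accuracy: %.3f" % search.best_score_)
```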

3 citations

Proceedings ArticleDOI
28 Oct 2002
TL;DR: This paper investigates large-scale regression with SVMs, but instead of solving a quadratic program slowly, it accelerates regression with a linear programming formulation that uses the 1-norm or ∞-norm for the model complexity.
Abstract: The support vector machine (SVM) is a new-generation learning system based on recent advances in statistical learning theory. There is evidence that SVMs can deliver state-of-the-art performance in real-world applications such as text categorisation, handwritten character recognition, image classification, and biosequence analysis. In this paper, we investigate the large-scale regression problem using SVMs, but instead of solving a quadratic program slowly, we accelerate regression with a linear programming formulation that uses the 1-norm or ∞-norm for the model complexity. Computational experiments show that the linear programming formulation gives very good performance without degrading the results.
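As a rough illustration of the idea, the sketch below writes 1-norm-regularised, epsilon-insensitive regression as a linear program solved with scipy's linprog; the data, epsilon and C values are assumptions for illustration, not the authors' setup.

```python
# A minimal sketch of the 1-norm LP formulation of epsilon-insensitive regression.
# Decision vector z = [w_plus (d), w_minus (d), b (1), xi (n)]; the 1-norm of w,
# sum(w_plus + w_minus), replaces the quadratic penalty of standard SVR.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 3))                      # n samples, d features (assumed)
y = X @ np.array([1.5, -2.0, 0.5]) + 0.05 * rng.standard_normal(80)

n, d = X.shape
eps, C = 0.05, 10.0                                       # assumed hyperparameters

c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])

# Epsilon-insensitive constraints:
#   y_i - (w.x_i + b) <= eps + xi_i   and   (w.x_i + b) - y_i <= eps + xi_i
A_upper = np.hstack([-X, X, -np.ones((n, 1)), -np.eye(n)])
A_lower = np.hstack([X, -X, np.ones((n, 1)), -np.eye(n)])
A_ub = np.vstack([A_upper, A_lower])
b_ub = np.concatenate([eps - y, eps + y])

bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

z = res.x
w = z[:d] - z[d:2 * d]
b = z[2 * d]
print("recovered weights:", np.round(w, 3), "bias:", round(float(b), 3))
```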

3 citations

19 Aug 2008
TL;DR: A technique based on a concentration inequality for Hilbert spaces is used to provide new, much simplified proofs for a number of results in spectral approximation, together with several new results on spectral properties of the graph Laplacian operator that extend and strengthen results from [26].
Abstract: A large number of learning algorithms, for example spectral clustering, kernel Principal Components Analysis and many manifold methods, are based on estimating eigenvalues and eigenfunctions of operators defined by a similarity function or a kernel, given empirical data. For the analysis of such algorithms it is therefore important to be able to assess the quality of these approximations. The contribution of our paper is two-fold: 1. We use a technique based on a concentration inequality for Hilbert spaces to provide new, much simplified proofs for a number of results in spectral approximation. 2. Using these methods we provide several new results for estimating spectral properties of the graph Laplacian operator, extending and strengthening results from [26].
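To make the object of study concrete, the sketch below builds a similarity matrix from sampled data with a Gaussian kernel, forms the normalised graph Laplacian, and computes its empirical spectrum; the kernel choice, bandwidth and toy data are assumptions, and the paper's concentration bounds are not reproduced.

```python
# A minimal sketch of empirical spectral estimation for a graph Laplacian
# built from a similarity kernel on sampled data (assumed Gaussian kernel).
import numpy as np

rng = np.random.default_rng(1)
# Two noisy clusters serve as the empirical data.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

# Similarity matrix W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
sigma = 1.0
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
W = np.exp(-sq_dists / (2 * sigma ** 2))

# Symmetric normalised graph Laplacian L = I - D^{-1/2} W D^{-1/2}
deg = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt

# Empirical spectrum; eigh returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(L)
print("smallest Laplacian eigenvalues:", np.round(eigvals[:4], 4))
# The gap after the second eigenvalue reflects the two-cluster structure.
```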

3 citations

Proceedings ArticleDOI
23 Oct 2006
TL;DR: A support vector machine-based approach for calculating steel quenching degree is presented; experiments show that the SVM-based method is effective and superior to an ANN-based method.
Abstract: Calculating the steel quenching degree has an important influence on real applications. The quenching degree is affected by chemical composition and many other factors, which makes it difficult to calculate accurately. The support vector machine (SVM) is a novel machine learning method based on statistical learning theory that is powerful for problems characterised by high dimensionality, small samples and nonlinearity. In this paper, an SVM-based approach to calculating steel quenching degree is presented. Experiments with real data collected from Jiangyin Xingcheng Steel Work CO., LTD show that the SVM-based method is effective and superior to an ANN-based method.
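For illustration only, the sketch below shows the general shape of such an SVM regression setup with scikit-learn's SVR; the synthetic "composition" features and target stand in for the plant data used in the paper, which is not available, and the hyperparameters are assumptions.

```python
# A minimal sketch of RBF-kernel SVM regression on small, nonlinear tabular data.
# Features and target are synthetic placeholders for chemical-composition data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(120, 5))               # placeholder composition fractions
y = 2.0 * X[:, 0] + 1.2 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(120)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBF-kernel SVR suits the small-sample, nonlinear regime the abstract describes.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X_train, y_train)
print("test R^2: %.3f" % model.score(X_test, y_test))
```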

3 citations

Book ChapterDOI
20 Sep 2006
TL;DR: It is demonstrated that, rewritten in a convex update setting and using an appropriate updating vector selection procedure, Rosenblatt’s rule does indeed provide maximum margins for kernel perceptrons, although with a convergence slower than that achieved by other more sophisticated methods, such as the Schlesinger–Kozinec (SK) algorithm.
Abstract: Statistical learning theory makes large margins an important property of linear classifiers, and Support Vector Machines were designed with this target in mind. However, it has been shown that large margins can also be obtained when much simpler kernel perceptrons are used together with ad hoc updating rules, different in principle from Rosenblatt’s rule. In this work we numerically demonstrate that, rewritten in a convex update setting and using an appropriate updating vector selection procedure, Rosenblatt’s rule does indeed provide maximum margins for kernel perceptrons, although with slower convergence than that achieved by other, more sophisticated methods, such as the Schlesinger–Kozinec (SK) algorithm.
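For reference, the sketch below shows the plain Rosenblatt update for a kernel perceptron in its dual form; the convex-update variant analysed in the paper additionally reweights previous updates and is not reproduced here, and the toy data and kernel bandwidth are assumptions.

```python
# A minimal sketch of a kernel perceptron trained with the mistake-driven
# Rosenblatt rule in dual form (alpha_i counts updates on example i).
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d)

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)     # XOR-like labels, not linearly separable

K = rbf_kernel(X, X)
alpha = np.zeros(len(X))                        # dual coefficients (mistake counts)

for epoch in range(20):
    mistakes = 0
    for i in range(len(X)):
        # Dual prediction: f(x_i) = sum_j alpha_j y_j K(x_j, x_i)
        if y[i] * np.dot(alpha * y, K[:, i]) <= 0:
            alpha[i] += 1.0                     # Rosenblatt update in dual form
            mistakes += 1
    if mistakes == 0:
        break

print("training errors:", int(np.sum(np.sign(K @ (alpha * y)) != y)))
```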

3 citations


Network Information
Related Topics (5)
Artificial neural network: 207K papers, 4.5M citations, 86% related
Cluster analysis: 146.5K papers, 2.9M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Optimization problem: 96.4K papers, 2.1M citations, 80% related
Fuzzy logic: 151.2K papers, 2.3M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    9
2022    19
2021    59
2020    69
2019    72
2018    47