scispace - formally typeset

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published within this topic, receiving 158,033 citations.


Papers
Dissertation
01 Jan 2010
TL;DR: A new margin-based classifier with decision boundary surface area regularization and optimization via variational level set methods is developed and a new distortion is proposed for the quantization or clustering of prior probabilities appearing in the thresholds of likelihood ratio tests.
Abstract: The design and analysis of decision rules using detection theory and statistical learning theory is important because decision making under uncertainty is pervasive. Three perspectives on limiting the complexity of decision rules are considered in this thesis: geometric regularization, dimensionality reduction, and quantization or clustering. Controlling complexity often reduces resource usage in decision making and improves generalization when learning decision rules from noisy samples. A new margin-based classifier with decision boundary surface area regularization and optimization via variational level set methods is developed. This novel classifier is termed the geometric level set (GLS) classifier. A method for joint dimensionality reduction and margin-based classification with optimization on the Stiefel manifold is developed. This dimensionality reduction approach is extended for information fusion in sensor networks. A new distortion is proposed for the quantization or clustering of prior probabilities appearing in the thresholds of likelihood ratio tests. This distortion is given the name mean Bayes risk error (MBRE). The quantization framework is extended to model human decision making and discrimination in segregated populations. Thesis Supervisor: Alan S. Willsky (Edwin Sibley Webster Professor of Electrical Engineering)

11 citations
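The thesis's mean Bayes risk error (MBRE) idea, quantizing the prior probability that sets a likelihood ratio test's threshold, can be illustrated with a toy sketch. The Gaussian detection problem, the representation points, and all numbers below are assumptions for illustration, not the thesis's actual setup:

```python
import math

def phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

MU = 2.0  # assumed toy problem: N(0,1) under H0 vs N(MU,1) under H1

def threshold(p0):
    # LRT threshold on x for the Gaussian pair above, uniform costs
    return MU / 2.0 + math.log(p0 / (1.0 - p0)) / MU

def bayes_risk(p0, q):
    # risk when the true prior is p0 but the test is tuned for prior q
    t = threshold(q)
    p_fa = 1.0 - phi(t)    # false alarm: H0 true, decide H1
    p_miss = phi(t - MU)   # miss: H1 true, decide H0
    return p0 * p_fa + (1.0 - p0) * p_miss

def bre(p0, q):
    # Bayes risk error: excess risk from operating at quantized prior q
    return bayes_risk(p0, q) - bayes_risk(p0, p0)

# mean Bayes risk error over a grid of true priors, for a 2-point quantizer
priors = [0.1 + 0.8 * i / 99 for i in range(100)]
reps = [0.3, 0.7]  # hypothetical representation points
mbre = sum(min(bre(p, q) for q in reps) for p in priors) / len(priors)
print(round(mbre, 4))
```

Each true prior is assigned to the representation point that costs it the least excess Bayes risk; the MBRE is that excess averaged over the prior distribution, and a good quantizer chooses the representation points to minimize it.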

Journal ArticleDOI
TL;DR: Using techniques from statistical learning theory, theoretical characteristics of the approximation algorithm are derived under the partitioning schemes, and a splitting rule is designed for vertex partitioning.
Abstract: Ontology is a useful tool with wide applications in many fields and has attracted broad scholarly attention; ontology concept similarity calculation is an essential problem in these applications. An effective way to measure similarity between vertices of an ontology is through a function that maps the ontology graph onto a line, assigning each vertex a real value; similarity is then measured by the difference between the corresponding scores. The multi-dividing method based on the area under the receiver operating characteristic curve (AUC) criterion is well suited to the ontology problem. In this paper, we present a piecewise constant function approximation approach for the AUC criterion multi-dividing ontology algorithm and focus on vertex partitioning schemes. Using techniques from statistical learning theory, we derive theoretical characteristics of the approximation algorithm under these partitioning schemes and design a splitting rule for vertex partitioning.

11 citations
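The scoring idea the abstract describes, mapping each vertex to a real value and measuring similarity by score differences, can be sketched with a hypothetical piecewise constant score function. The bins, scores, and vertex features below are invented for illustration, not taken from the paper:

```python
# assumed toy setup: each ontology vertex carries a 1-D feature, and a
# piecewise constant score function is defined by a partition of the line
bins = [0.0, 0.25, 0.5, 0.75, 1.0]   # partition (the "vertex partitioning")
values = [0.1, 0.4, 0.6, 0.9]        # constant score on each piece

def score(x):
    for i in range(len(values)):
        if bins[i] <= x < bins[i + 1]:
            return values[i]
    return values[-1]  # the right edge belongs to the last piece

def similarity(x_u, x_v):
    # vertices are similar when their scores are close
    return 1.0 - abs(score(x_u) - score(x_v))

print(similarity(0.30, 0.45))  # same piece, so scores coincide
print(similarity(0.10, 0.90))  # distant pieces, so low similarity
```

The learning problem is then choosing the partition and the per-piece scores so that the induced ranking agrees with the AUC criterion; the paper's splitting rule governs how the partition is refined.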

Journal ArticleDOI
TL;DR: This paper proposes an ELM implementation that exploits the Spark distributed in-memory technology and shows how to take advantage of SLT results in order to select ELM hyperparameters able to provide the best generalization performance.
Abstract: Recently, social networks and other forms of media communication have been attracting the interest of both the scientific and the business world, driving the development of opinion mining and sentiment analysis. Coping with the huge amount of information present on the Web is a crucial task and motivates the study and creation of efficient models able to tackle it. To this end, current research proposes an efficient approach to support emotion recognition and polarity detection in natural language text. In this paper, we show how the most recent advances in statistical learning theory (SLT) can support the development of an efficient extreme learning machine (ELM) and the assessment of the resulting model's performance when applied to big social data analysis. The ELM, developed to overcome some issues of back-propagation networks, is a powerful learning tool. However, it must cope with a large number of available samples, and its generalization performance has to be carefully assessed. For this reason, we propose an ELM implementation that exploits the Spark distributed in-memory technology and show how to take advantage of SLT results in order to select ELM hyperparameters able to provide the best generalization performance.

11 citations
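Stripped of the Spark distribution and the SLT-based model selection, the core ELM recipe is small: fix random input-to-hidden weights, then solve for the hidden-to-output weights in closed form. A minimal single-machine sketch with NumPy on invented data, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression data: y = sin(x) on [-3, 3] (assumed, not the paper's data)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])

# ELM: random input weights and biases stay fixed; only the
# output weights are learned, via a single least-squares solve
n_hidden = 50
W = rng.normal(size=(1, n_hidden))   # random input weights
b = rng.normal(size=n_hidden)        # random biases
H = np.tanh(X @ W + b)               # hidden-layer activations

# closed-form output weights: this is the whole "training" step
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ beta
mse = float(np.mean((y - y_hat) ** 2))
print(round(mse, 4))
```

Because only `beta` is learned, training reduces to one least-squares solve over the activation matrix, which is what makes ELM attractive when the sample count is large; the paper's contribution is distributing that computation and choosing `n_hidden`-style hyperparameters via SLT bounds.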

Journal ArticleDOI
TL;DR: Support-vector-based regression (SVR) was employed to map available PVT data to Pb, and a comparison among SVR, a neural network, and three well-known empirical correlations demonstrated the superiority of the SVR model.
Abstract: Accurate determination of oil bubble point pressure (Pb) from laboratory experiments is time-, cost-, and labor-intensive. Therefore, the quest for an accurate, fast, and cheap method of determining Pb is inevitable. Since support-vector-based regression satisfies all components of such a quest through a supervised learning algorithm based on statistical learning theory, it was employed to map available PVT data to Pb. Open-source literature data were used for SVR model construction, and Iranian oil data were employed for model evaluation. A comparison among SVR, a neural network, and three well-known empirical correlations demonstrated the superiority of the SVR model.

11 citations
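A minimal sketch of this workflow using scikit-learn's `SVR` (an assumption; the paper does not name its implementation), with synthetic stand-in data rather than real PVT measurements:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# hypothetical 4-feature inputs standing in for PVT quantities
# (e.g. gas-oil ratio, temperature, gravities), scaled to [0, 1]
X = rng.uniform(0.0, 1.0, size=(300, 4))
# hypothetical smooth relation standing in for Pb, plus measurement noise
y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + 0.5 * X[:, 2] * X[:, 3]
y += rng.normal(scale=0.05, size=300)

# hold out a test split, mimicking the paper's construction/evaluation split
train, test = slice(0, 240), slice(240, 300)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[train], y[train])

pred = model.predict(X[test])
rmse = float(np.sqrt(np.mean((pred - y[test]) ** 2)))
print(round(rmse, 3))
```

The `C` and `epsilon` values here are arbitrary; in practice they would be tuned, and the comparison against a neural network and the empirical correlations would use the same held-out data.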

Journal Article
TL;DR: In this paper, the key theorem of learning theory on quasi-probability spaces is proved, bounds on the rate of uniform convergence of the learning process on quasi-probability spaces are constructed, and the definitions and properties of quasi-random variables and their distribution functions, expected values, and variances are presented.
Abstract: Some properties of quasi-probability are further discussed. The definitions and properties of quasi-random variables and their distribution functions, expected values, and variances are then presented. Markov's inequality, Chebyshev's inequality, and Khinchine's law of large numbers on quasi-probability spaces are also proved. The key theorem of learning theory on quasi-probability spaces is then proved, and bounds on the rate of uniform convergence of the learning process on quasi-probability spaces are constructed. These investigations help lay the theoretical foundations for the systematic and comprehensive development of quasi-statistical learning theory.

11 citations
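Chebyshev's inequality, one of the classical results the paper carries over to quasi-probability spaces, is easy to check empirically in the ordinary probability setting (the quasi-probability generalization replaces the measure, not the shape of the bound):

```python
import random

random.seed(0)

# empirical check of Chebyshev's inequality:
#   P(|X - mu| >= k * sigma) <= 1 / k^2
n = 100_000
samples = [random.uniform(0.0, 1.0) for _ in range(n)]
mu = 0.5
sigma = (1.0 / 12.0) ** 0.5  # standard deviation of Uniform(0, 1)

for k in (1.5, 2.0, 3.0):
    tail = sum(abs(x - mu) >= k * sigma for x in samples) / n
    assert tail <= 1.0 / k**2  # the bound holds, and is usually loose
    print(k, round(tail, 4), round(1.0 / k**2, 4))
```

The looseness of the bound is expected: Chebyshev's inequality only uses the variance, which is also why it survives the passage to more general (quasi-probability) measures.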


Network Information
Related Topics (5)

Artificial neural network: 207K papers, 4.5M citations (86% related)
Cluster analysis: 146.5K papers, 2.9M citations (82% related)
Feature extraction: 111.8K papers, 2.1M citations (81% related)
Optimization problem: 96.4K papers, 2.1M citations (80% related)
Fuzzy logic: 151.2K papers, 2.3M citations (79% related)
Performance Metrics

No. of papers in the topic in previous years:

Year  Papers
2023  9
2022  19
2021  59
2020  69
2019  72
2018  47