scispace - formally typeset

Statistical learning theory

About: Statistical learning theory is a research topic. Over its lifetime, 1,618 publications have been published within this topic, receiving 158,033 citations.


Papers
Journal Article
LI Xiaoming
TL;DR: A neural network reconfiguration model based on the extreme learning machine is proposed, which reflects the nonlinear relationship between the load pattern and the switch state of the distribution grid and has both better generalization performance and faster training speed.
Abstract: To minimize the active power loss of distribution grid reconfiguration, a neural network reconfiguration model based on the extreme learning machine is proposed, which reflects the nonlinear relationship between the load pattern and the switch state of the distribution grid. Having a simple network structure and fast training speed, the model takes the load pattern as its input and outputs the switch states that reconfigure the distribution grid with minimum active power loss. The structural risk minimization rule of statistical learning theory is introduced into the extreme learning machine, which is otherwise based on empirical risk minimization, to minimize both the empirical risk and the confidence interval; the actual risk is thus minimized and the expected error is decreased. Simulative research is carried out for two typical cases of distribution network reconfiguration with different reconfiguration models: support vector machine, BP neural network, and extreme learning machine. Results show that the proposed model has both better generalization performance and faster training speed.

1 citation
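The extreme learning machine described in the abstract trains in one shot: the input weights are fixed at random and only the output weights are solved for. A minimal sketch in Python with NumPy (a toy regression standing in for the load-pattern-to-switch-state mapping, not the paper's model; the ridge term `lam` is an assumed stand-in for the SRM-style regularization):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for load patterns -> switch states.
X = rng.normal(size=(200, 5))
y = np.sin(X @ rng.normal(size=5))

n_hidden = 50
# 1) Input weights and biases are drawn at random and never trained.
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)

# 2) Hidden-layer activations.
H = np.tanh(X @ W + b)

# 3) Output weights solved in closed form (ridge-regularized least
#    squares; lam is an assumed stand-in for the SRM-style term).
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

mse = np.mean((H @ beta - y) ** 2)
print(mse)  # training error
```

Because only `beta` is learned, and in closed form, training is much faster than gradient-based backpropagation, which is the speed advantage the abstract refers to.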

Book ChapterDOI
01 Jan 2019

1 citation

Journal Article
TL;DR: This paper introduced the support vector machine (SVM), based on statistical learning theory, which can solve small-sample learning problems better by using structural risk minimization (SRM) in place of empirical risk minimization (ERM), and which can transform a problem that is non-linear in the input space into a linear one, reducing algorithm complexity by using the kernel function idea.
Abstract: Most of the existing methods are based on traditional statistics, which provides conclusions only for the situation where the sample size tends to infinity. So they may not work well in practical cases with limited samples, and they easily lead to the problem of overfitting. This paper introduced the support vector machine (SVM), based on statistical learning theory. This method can solve small-sample learning problems better by using structural risk minimization (SRM) in place of empirical risk minimization (ERM). Moreover, the theory can transform a problem that is non-linear in the input space into a linear one, reducing algorithm complexity by using the kernel function idea. The paper studies some related content, including the optimization algorithm and the solution to multi-classification. Finally, an example shows that the proposed method is effective and feasible.

1 citation
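The kernel function idea from the abstract can be illustrated without the full SVM machinery. The sketch below uses a kernel perceptron (a deliberately simpler stand-in for the paper's SVM) to show the mechanism: all computation goes through kernel values, so a linear algorithm handles a non-linearly separable problem. The XOR-like data, the RBF kernel, and `gamma` are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR-like labels: not linearly separable in the original input space.
X = rng.normal(size=(80, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)

def rbf(A, B, gamma=1.0):
    # Kernel function: an inner product in an implicit feature space,
    # computed without ever constructing that space explicitly.
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

K = rbf(X, X)
alpha = np.zeros(len(X))

# Kernel perceptron: update a point's dual weight whenever the
# kernel-weighted vote misclassifies it.
for _ in range(20):
    for i in range(len(X)):
        if y[i] * ((alpha * y) @ K[:, i]) <= 0:
            alpha[i] += 1.0

pred = np.sign((alpha * y) @ K)
print((pred == y).mean())  # training accuracy
```

An SVM differs in that it picks the dual weights to maximize the margin (the SRM trade-off), but the kernel substitution step is the same.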

Journal ArticleDOI
TL;DR: In this article, the statistical evidence for negative transfer in part-whole free recall was reviewed, and it was argued that there was insufficient evidence to support a conclusion of negative transfer.
Abstract: A series of studies on part-whole free recall led to the conclusion that learning part of a list before learning the entire list produces negative transfer late in learning. The statistical evidence for this conclusion is shown to depend upon assumptions about (1) the asymptotic level reached and (2) the relative magnitude of the variance between conditions as compared to the variance of Ss within conditions. Evidence concerning these assumptions is reviewed, and it is argued that there was insufficient evidence to support a conclusion of negative transfer in part-whole free recall.

1 citation

Proceedings Article
07 Aug 2011
TL;DR: This research is motivated by how case similarity is assessed in retrospect in law; finding similar mappings of case facts to court decisions, or similar strategies that courts used to decide cases based on evaluating case facts, is an interesting and as-yet unexplored research problem.
Abstract: Strategy mining is a new area of research about discovering strategies in decision-making. In this paper, we formulate the strategy-mining problem as a clustering problem, called the latent-strategy problem. In a latent-strategy problem, a corpus of data instances is given, each of which is represented by a set of features and a decision label. The inherent dependency of the decision label on the features is governed by a latent strategy. The objective is to find clusters, each of which contains data instances governed by the same strategy. Existing clustering algorithms are ill-suited to clustering such dependencies because they either assume feature independence (e.g., K-means) or only consider the co-occurrence of features without explicitly modeling the special dependency of the decision label on the other features (e.g., Latent Dirichlet Allocation (LDA)). In this paper, we present a baseline unsupervised learning algorithm for dependency clustering. Our model-based clustering algorithm iterates between an assignment step and a minimization step to learn a mixture of decision tree models that represent latent strategies. Like the Expectation Maximization algorithm, our algorithm is grounded in statistical learning theory. Unlike other clustering algorithms, our algorithm is resistant to irrelevant features, and its learned clusters (modeled by decision trees) are strongly interpretable and predictive. We systematically evaluate our algorithm using a common law dataset comprised of actual cases. Experimental results show that our algorithm significantly outperforms K-means and LDA on clustering dependency.

1 citation
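The assignment/minimization loop from the abstract can be sketched with per-cluster linear models standing in for the paper's decision trees (an assumption made for brevity; the cluster structure and all parameters below are illustrative). Each iteration fits one model per cluster, then reassigns each instance to the model that explains its label best, and, as in EM, the total error cannot increase:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hidden "strategies": each maps features to the label by a
# different rule (linear rules here; the paper learns decision trees).
X = rng.normal(size=(300, 2))
z = rng.integers(0, 2, size=300)                 # latent strategy id
y = np.where(z == 0, 2.0 * X[:, 0], -3.0 * X[:, 1])

n_clusters = 2
assign = rng.integers(0, n_clusters, size=len(X))  # random initial clusters
coefs = np.zeros((n_clusters, 2))
errors = []

for _ in range(20):
    # Minimization step: fit one model per cluster by least squares.
    for k in range(n_clusters):
        mask = assign == k
        if mask.sum() >= 2:                      # skip near-empty clusters
            coefs[k], *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    # Assignment step: each instance joins the model that explains it best.
    resid = (y[:, None] - X @ coefs.T) ** 2
    assign = resid.argmin(axis=1)
    errors.append(resid[np.arange(len(X)), assign].sum())

# Like EM, each full iteration is non-increasing in total error.
print(errors[0], errors[-1])
```

The dependency being clustered is the label-given-features rule, not feature co-occurrence, which is why K-means or LDA on the raw features would miss it.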


Network Information
Related Topics (5)
- Artificial neural network: 207K papers, 4.5M citations (86% related)
- Cluster analysis: 146.5K papers, 2.9M citations (82% related)
- Feature extraction: 111.8K papers, 2.1M citations (81% related)
- Optimization problem: 96.4K papers, 2.1M citations (80% related)
- Fuzzy logic: 151.2K papers, 2.3M citations (79% related)
Performance Metrics
No. of papers in the topic in previous years:

Year: Papers
2023: 9
2022: 19
2021: 59
2020: 69
2019: 72
2018: 47