
Stability (learning theory)

About: Stability (learning theory), also known as algorithmic stability, is a research topic. Over its lifetime, 17,459 publications have been published within this topic, receiving 549,832 citations.


Open access · Book
Vladimir Vapnik
01 Jan 1995
Abstract: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?

38,164 Citations

Open access
01 Jan 1998-
Abstract: A comprehensive look at learning and generalization theory. The statistical theory of learning and generalization concerns the problem of choosing desired functions on the basis of empirical data. Highly applicable to a variety of computer science and robotics fields, this book offers lucid coverage of the theory as a whole. Presenting a method for determining the necessary and sufficient conditions for the consistency of learning processes, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.

26,121 Citations

Journal Article · DOI: 10.1109/TKDE.2009.191
Sinno Jialin Pan, Qiang Yang
Abstract: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest but sufficient training data only in another domain, where the latter data may lie in a different feature space or follow a different distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding expensive data-labeling effort. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. We discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning, sample selection bias, and covariate shift. We also explore some potential future issues in transfer learning research.

Topics: Semi-supervised learning (69%), Inductive transfer (68%), Multi-task learning (67%)

13,267 Citations
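The survey above covers frameworks rather than a single algorithm, but the simplest flavor of inductive transfer it discusses — reusing source-domain parameters to initialize a target-domain learner — can be sketched in a few lines. This is an illustrative toy, not code from the paper; all names and the synthetic data are invented here. A logistic regression is trained on plentiful source data, and its weights warm-start fine-tuning on scarce data from a shifted target distribution.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(data, w=None, epochs=200, lr=0.1):
    # Batch gradient descent on the logistic loss. Passing a
    # source-trained `w` performs warm-start fine-tuning.
    if w is None:
        w = [0.0, 0.0, 0.0]  # two feature weights + bias
    for _ in range(epochs):
        grad = [0.0, 0.0, 0.0]
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
            for j, xj in enumerate((x1, x2, 1.0)):
                grad[j] += (p - y) * xj
        w = [wj - lr * gj / len(data) for wj, gj in zip(w, grad)]
    return w

def accuracy(w, data):
    return sum((sigmoid(w[0] * x1 + w[1] * x2 + w[2]) > 0.5) == (y == 1)
               for (x1, x2), y in data) / len(data)

def make_domain(n, shift, seed):
    # A linearly separable toy task; `shift` moves both class means to
    # mimic a related but different target distribution.
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        y = rng.randint(0, 1)
        mu = (2.0 + shift) if y else (-2.0 + shift)
        out.append(((rng.gauss(mu, 1.0), rng.gauss(mu, 1.0)), y))
    return out

source = make_domain(200, 0.0, seed=1)   # plentiful source data
target = make_domain(10, 0.5, seed=2)    # scarce target data

w_src = train_logreg(source)                            # source-only model
w_ft = train_logreg(target, w=list(w_src), epochs=50)   # fine-tuned model

print("source-only accuracy on target:", accuracy(w_src, target))
print("fine-tuned accuracy on target:", accuracy(w_ft, target))
```

Warm-starting from `w_src` is one instance of parameter-transfer; the survey's taxonomy also covers instance-based and feature-representation transfer, which this sketch does not show.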

Open access · Journal Article · DOI: 10.1016/J.NEUNET.2014.09.003
Jürgen Schmidhuber
01 Jan 2015 · Neural Networks
Abstract: In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

Topics: Deep learning (65%), Deep belief network (64%), Unsupervised learning (63%)

11,176 Citations

Open access · Proceedings Article · DOI: 10.1145/130385.130401
01 Jul 1992
Abstract: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including perceptrons, polynomials, and radial basis functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns; these are the subset of training patterns closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained compared with other learning algorithms.

Topics: Margin (machine learning) (61%), Decision boundary (59%), Stability (learning theory) (58%)

10,033 Citations
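The 1992 paper trains the optimal-margin classifier by solving a dual quadratic program; a minimal way to see the same max-margin idea in code is stochastic subgradient descent on the L2-regularized hinge loss (a Pegasos-style method, which is a later technique and not the paper's algorithm; the toy data and names below are invented for illustration).

```python
import random

def train_linear_svm(data, lam=0.01, epochs=100, seed=0):
    # Pegasos-style stochastic subgradient descent on the regularized
    # hinge loss: (lam/2)*||w||^2 + mean(max(0, 1 - y*(w.x + b))).
    rng = random.Random(seed)
    w, b = [0.0, 0.0], 0.0
    t = 0
    for _ in range(epochs):
        for (x1, x2), y in rng.sample(data, len(data)):  # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)  # standard decreasing step size
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            # Subgradient step: shrink w (regularization pushes toward a
            # wide margin), and move the boundary away from any point
            # that lies inside the margin.
            w = [wj * (1 - eta * lam) for wj in w]
            if margin < 1:
                w[0] += eta * y * x1
                w[1] += eta * y * x2
                b += eta * y
    return w, b

# Toy linearly separable data with labels in {-1, +1}.
rng = random.Random(1)
data = []
for _ in range(100):
    y = rng.choice([-1, 1])
    data.append(((rng.gauss(2.0 * y, 0.5), rng.gauss(2.0 * y, 0.5)), y))

w, b = train_linear_svm(data)
errors = sum(y * (w[0] * x1 + w[1] * x2 + b) <= 0 for (x1, x2), y in data)
print("training errors:", errors, "of", len(data))
```

The regularizer plays the role of margin maximization here; the training points that end up with margin near 1 correspond to the "supporting patterns" the abstract describes.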

[Chart: number of papers in this topic in previous years]

Top Attributes


Topic's top 5 most impactful authors

Eric Rogers

50 papers, 699 citations

Krzysztof Galkowski

29 papers, 423 citations

Frank L. Lewis

17 papers, 1.7K citations

Ali H. Sayed

13 papers, 864 citations

George W. Evans

11 papers, 568 citations

Network Information
Related Topics (5)
Semi-supervised learning

12.1K papers, 611.2K citations

91% related
Computational learning theory

4.6K papers, 456.3K citations

90% related
Active learning (machine learning)

13.1K papers, 566.6K citations

90% related
Online machine learning

5.4K papers, 364.3K citations

89% related
Unsupervised learning

22.7K papers, 1M citations

89% related