
Computational learning theory

About: Computational learning theory is a research topic. Over its lifetime, 4,613 publications on this topic have been published, receiving 456,386 citations.


Papers

Open access · Book
Vladimir Vapnik
01 Jan 1995
Abstract: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?


38,164 Citations
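The "bounds on the rate of convergence" this book develops are the classic VC-style generalization bounds. As a hedged sketch (paraphrased up to constants, not quoted from the book): for a classifier class of VC dimension h trained on n samples, with probability at least 1 − δ,

```latex
R(f) \;\le\; R_{\mathrm{emp}}(f) \;+\; \sqrt{\frac{h\left(\ln\frac{2n}{h} + 1\right) + \ln\frac{4}{\delta}}{n}}
```

so the gap between the true risk R and the empirical risk R_emp shrinks as the sample size n grows and widens with the capacity h of the class.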


Open access · Journal Article · DOI: 10.1023/A:1022627411411
Corinna Cortes, Vladimir Vapnik
15 Sep 1995 · Machine Learning
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space, in which a linear decision surface is constructed. Special properties of the decision surface ensure the high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors; here we extend the result to non-separable training data. The high generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.


Topics: Feature learning (63%), Active learning (machine learning) (62%), Feature vector (62%)

35,157 Citations
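The non-linear mapping the abstract describes is what the kernel trick makes cheap: an inner product in the high-dimension feature space can be computed directly on the inputs. A minimal pure-Python sketch (the map `phi` and the toy vectors are illustrative assumptions, not from the paper):

```python
import math

# Explicit degree-2 polynomial feature map for 2-D inputs:
# phi(x) = (x1^2, x2^2, sqrt(2)*x1*x2).
# Its inner product equals the polynomial kernel (x . z)^2, so the
# high-dimension feature space never has to be built explicitly.
def phi(x):
    x1, x2 = x
    return [x1 * x1, x2 * x2, math.sqrt(2) * x1 * x2]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, z = (1.0, 2.0), (3.0, 0.5)
explicit = dot(phi(x), phi(z))   # inner product after the explicit mapping
kernel = dot(x, z) ** 2          # polynomial kernel computed directly
# both equal 16.0 for these vectors
```

A linear decision surface constructed in the mapped space is non-linear in the original input space, which is the core idea of the support-vector network.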


Open accessBook
Richard S. Sutton1, Andrew G. BartoInstitutions (1)
01 Jan 1988-
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.


Topics: Learning classifier system (69%), Reinforcement learning (69%), Apprenticeship learning (65%)

32,257 Citations
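The temporal-difference learning of Part II can be sketched in a few lines of tabular TD(0). The toy random-walk MDP and the constants below are illustrative assumptions, not an example taken from the book:

```python
import random

random.seed(0)

# 5-state random walk: states 1..5, terminals at 0 and 6.
# The agent moves left or right uniformly at random; reward 1.0 is
# received only on entering the right terminal, so V[s] estimates the
# probability of finishing on the right from state s.
N = 5
V = [0.0] * (N + 2)          # V[0] and V[N+1] stay 0 (terminal states)
alpha, gamma = 0.1, 1.0      # step size and discount factor

for _ in range(5000):
    s = (N + 1) // 2         # every episode starts in the middle state
    while 0 < s < N + 1:
        s_next = s + random.choice([-1, 1])
        reward = 1.0 if s_next == N + 1 else 0.0
        # TD(0): move V[s] toward the bootstrapped target r + gamma*V[s']
        V[s] += alpha * (reward + gamma * V[s_next] - V[s])
        s = s_next
```

For this walk the true values are V[i] = i/6, so after training the estimates increase from left to right with V[3] near 0.5; the update uses only the observed transition, not a model of the environment, which is what separates TD methods from the dynamic-programming methods of Part II.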


Open access
01 Jan 1998
Abstract: A comprehensive look at learning and generalization theory. The statistical theory of learning and generalization concerns the problem of choosing desired functions on the basis of empirical data. Highly applicable to a variety of computer science and robotics fields, this book offers lucid coverage of the theory as a whole. Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.


26,121 Citations


Journal Article · DOI: 10.1109/TKDE.2009.191
Sinno Jialin Pan, Qiang Yang
Abstract: A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest but sufficient training data only in another domain, where the data may lie in a different feature space or follow a different distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding expensive data-labeling effort. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. We discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning, and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.


Topics: Semi-supervised learning (69%), Inductive transfer (68%), Multi-task learning (67%)

13,267 Citations
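One related technique the survey discusses, covariate shift, can be sketched with importance weighting: reweight source-domain samples by the density ratio between target and source input distributions. The Gaussian source and target densities below are illustrative assumptions, not data from the survey:

```python
import math
import random

random.seed(1)

def normal_pdf(x, mu):
    # unit-variance Gaussian density; mu is the only thing that shifts here
    return math.exp(-(x - mu) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)

# Source domain draws x ~ N(0, 1); target domain has x ~ N(1, 1).
# Same labeling function, shifted input distribution: covariate shift.
source = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Weight each source point by p_target(x) / p_source(x), then estimate the
# target-domain mean of f(x) = x using source samples only.
weights = [normal_pdf(x, 1.0) / normal_pdf(x, 0.0) for x in source]
estimate = sum(w * x for w, x in zip(weights, source)) / sum(weights)
# the unweighted source mean is near 0; the weighted estimate is near the
# target mean of 1, with no target-domain labels collected
```

This is the simplest instance of the "avoid expensive data labeling" idea: statistics about the target domain are recovered from already-labeled source data by correcting for the distribution mismatch.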


Performance Metrics

No. of papers in the topic in previous years:

Year  Papers
2022  1
2021  18
2020  8
2019  18
2018  42
2017  270

Top Attributes


Topic's top 5 most impactful authors:

Rocco A. Servedio: 20 papers, 978 citations
Sanjay Jain: 14 papers, 158 citations
John Case: 12 papers, 140 citations
Shai Ben-David: 10 papers, 498 citations
Thomas Zeugmann: 9 papers, 170 citations

Network Information
Related Topics (5)

Stability (learning theory): 17.4K papers, 549.8K citations (90% related)
Semi-supervised learning: 12.1K papers, 611.2K citations (89% related)
Active learning (machine learning): 13.1K papers, 566.6K citations (89% related)
Instance-based learning: 5.7K papers, 489.4K citations (89% related)
Supervised learning: 20.8K papers, 710.5K citations (88% related)