Open Access Journal Article

Learning from examples

Z. Pawlak
01 Jan 1986
Vol. 34, pp. 573-587
About
This article was published in the Bulletin of the Polish Academy of Sciences: Technical Sciences on 1986-01-01 and is currently open access. It has received 32 citations to date. The article focuses on the topics of algorithmic learning theory and inference.



Citations
Dissertation

Attribute-Oriented Induction in Relational Databases.

Yandong Cai
TL;DR: An attribute-oriented induction method extracts characteristic rules and classification rules from relational databases. It adopts the artificial intelligence "learning from examples" paradigm and applies an attribute-oriented concept-tree-ascending technique in the learning process, providing a simple, efficient way of learning from large databases.
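The core idea is easy to sketch: each attribute value is repeatedly replaced by its parent in a concept tree, and tuples that become identical are merged, until few enough distinct tuples remain to read off as generalized rules. Below is a minimal sketch under assumed toy data; the hierarchy, relation, and threshold are hypothetical and not taken from the cited dissertation.

```python
# Minimal sketch of attribute-oriented induction: repeatedly replace
# attribute values by their parents in a concept tree and merge tuples
# that become identical, until few enough distinct tuples remain.
# The hierarchy, relation, and threshold below are hypothetical.

HIERARCHY = {
    "major": {"physics": "science", "biology": "science",
              "music": "art", "painting": "art"},
    "gpa": {"3.9": "excellent", "3.7": "excellent",
            "2.8": "average", "2.5": "average"},
}

def generalize(tuples, attr):
    """Ascend one level of the concept tree for `attr`."""
    tree = HIERARCHY[attr]
    return [{**t, attr: tree.get(t[attr], t[attr])} for t in tuples]

def distinct(tuples):
    """Merge duplicate tuples via a hashable canonical form."""
    return {tuple(sorted(t.items())) for t in tuples}

def induce(tuples, attrs, threshold=2):
    """Generalize attribute by attribute until at most `threshold`
    distinct tuples survive; the survivors read off as rules."""
    for attr in attrs:
        if len(distinct(tuples)) <= threshold:
            break
        tuples = generalize(tuples, attr)
    return [dict(t) for t in distinct(tuples)]

students = [
    {"major": "physics", "gpa": "3.9"},
    {"major": "biology", "gpa": "3.7"},
    {"major": "music", "gpa": "2.8"},
    {"major": "painting", "gpa": "2.5"},
]
for rule in induce(students, ["major", "gpa"]):
    print(rule)  # e.g. {'major': 'science', 'gpa': 'excellent'}
```

The ascent trades specificity for compactness: each level merges more tuples, so the stopping threshold controls how general the extracted rules become.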
Journal Article

Influence of prior knowledge on concept acquisition: Experimental and computational results.

TL;DR: It is demonstrated that prior knowledge can influence the rate of concept learning and that the influence of prior causal knowledge can dominate the influence of the logical form.
Journal Article

Statistical theory of learning curves under entropic loss criterion

TL;DR: A universal property of learning curves is elucidated, showing how the generalization error, the training error, and the complexity of the underlying stochastic machine are related, and how the machine's behavior improves as the number of training examples increases.
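The universal relation in question can be stated roughly as follows. This is a hedged paraphrase, not the paper's exact statement: the notation m for the number of training examples, d for the number of free parameters, and H_0 for the best attainable entropic loss is assumed here, and the precise conditions are given in the paper.

```latex
% Asymptotic learning curves under entropic loss (hedged paraphrase):
% generalization and training errors straddle the optimal loss H_0,
% and their gap closes at rate d/m, independent of the machine's details.
\[
\langle \varepsilon_{\text{gen}}(m) \rangle \simeq H_0 + \frac{d}{2m},
\qquad
\langle \varepsilon_{\text{train}}(m) \rangle \simeq H_0 - \frac{d}{2m}.
\]
```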
Proceedings Article

Characterizing research in computing education: a preliminary analysis of the literature

TL;DR: It is found that this subset of computing education research has more in common with research in information systems than with research in computer science or software engineering, and that the papers published at ICER generally appear to conform to the specified ICER requirements.
Journal Article

How tight are the Vapnik-Chervonenkis bounds?

TL;DR: It is found that, in some cases, the average generalization performance of neural networks trained on a variety of simple functions is significantly better than the VC bound predicts: the approach to perfect performance is exponential in the number of examples m, rather than the 1/m behavior of the bound.
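For context, one common form of the Vapnik-Chervonenkis generalization bound for a learner consistent with its training set is sketched below; constants vary across formulations, and the specific form here is an assumption rather than the one used in the paper. Here d is the VC dimension of the hypothesis class and m the number of examples.

```latex
% One common form of the VC bound for a hypothesis h consistent with
% m examples: the true error decays only like (d/m) log m, whereas the
% experiments in this paper observe an exponential approach to perfect
% performance.
\[
\varepsilon(h) \;\le\; \frac{4}{m}\left( d \ln\frac{2m}{d}
  + \ln\frac{2}{\delta} \right)
\quad \text{with probability at least } 1 - \delta .
\]
```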