
Showing papers by "Stephen Muggleton published in 1999"


Journal ArticleDOI
TL;DR: Advances in ILP theory and implementation prompted by the challenges of LLL are already benefiting other sequence-oriented applications of ILP, and LLL is starting to develop its own character as a sub-discipline of AI involving the confluence of computational linguistics, machine learning and logic programming.

117 citations


Journal ArticleDOI
TL;DR: The results reviewed have been published in some of the top general science journals, and are among the strongest examples of semi-automated scientific discovery in the Artificial Intelligence literature.
Abstract: This paper is an overview of scientific knowledge discovery tasks carried out using Inductive Logic Programming (ILP). The results reviewed have been published in some of the top general science journals, and as such are among the strongest examples of semi-automated scientific discovery in the Artificial Intelligence literature. Space restrictions do not permit this paper to cover other discovery areas of ILP. These include the discovery of linguistic features in natural language data and the discovery of patterns in traffic data.

83 citations



Proceedings Article
24 Jun 1999
TL;DR: Experiments in the paper show that on English past tense data AP has significantly higher predictive accuracy than both previously reported results and CProgol in inductive mode; however, on KRK illegal AP does not outperform CProgol in inductive mode.
Abstract: Inductive Logic Programming (ILP) involves constructing an hypothesis H on the basis of background knowledge B and training examples E. An independent test set is used to evaluate the accuracy of H. This paper concerns an alternative approach called Analogical Prediction (AP). AP takes B, E and then for each test example 〈x, y〉 forms an hypothesis Hx from B, E, x. Evaluation of AP is based on estimating the probability that Hx(x) = y for a randomly chosen 〈x, y〉. AP has been implemented within CProgol4.4. Experiments in the paper show that on English past tense data AP has significantly higher predictive accuracy on this data than both previously reported results and CProgol in inductive mode. However, on KRK illegal AP does not outperform CProgol in inductive mode. We conjecture that AP has advantages for domains in which a large proportion of the examples must be treated as exceptions with respect to the hypothesis vocabulary. The relationship of AP to analogy and instance-based learning is discussed. Limitations of the given implementation of AP are discussed and improvements suggested.
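The contrast between standard inductive mode and AP can be sketched as a toy evaluation loop. This is a hypothetical illustration only: `toy_induce` stands in for an ILP learner such as CProgol and here uses majority-label and nearest-neighbour rules, whereas real AP constructs logical hypotheses from B, E and the test example x.

```python
def toy_induce(B, E, focus=None):
    """Return a hypothesis (a function from x to y).

    Without a focus example, mimic inductive mode: learn one global rule
    (here, the majority label). With a focus x, mimic AP: build a
    hypothesis tailored to x (here, the label of the nearest training x)."""
    if focus is None:
        labels = [y for _, y in E]
        majority = max(set(labels), key=labels.count)
        return lambda x: majority
    nearest = min(E, key=lambda ex: abs(ex[0] - focus))
    return lambda x: nearest[1]

def accuracy_inductive(B, E, test_set):
    H = toy_induce(B, E)                 # one hypothesis H for all test cases
    return sum(H(x) == y for x, y in test_set) / len(test_set)

def accuracy_ap(B, E, test_set):
    correct = 0
    for x, y in test_set:
        H_x = toy_induce(B, E, focus=x)  # fresh hypothesis H_x per test example
        correct += H_x(x) == y
    return correct / len(test_set)
```

On data where a large fraction of examples are exceptions to any single global rule, the per-example hypotheses of AP can score higher than the one-hypothesis inductive mode, which matches the conjecture in the abstract.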

21 citations


Proceedings Article
01 Jan 1999
TL;DR: The U-learnability model is used to analyze a top-down decision tree induction algorithm and proves that an idealized variant of the well-known decision tree learning algorithm CART is a U-learner under a natural set of assumptions regarding target hypotheses.
Abstract: Automated inductive learning is a vital part of machine intelligence and the design of intelligent agents. A useful formalization of inductive learning is the model of PAC-learnability. Nevertheless, the ability to learn every target concept expressible in a given representation, as required in the PAC-learnability model, is highly demanding and leads to many negative results for interesting concept classes. A new model of learnability, called Universal Learnability or U-learnability, recently has been proposed as a less demanding, average-case variant of PAC-learnability. This paper uses the U-learnability model to analyze a top-down decision tree induction algorithm. Specifically, this paper proves that an idealized variant of the well-known decision tree learning algorithm CART (one of the most successful existing machine learning algorithms) is a U-learner under a natural set of assumptions regarding target hypotheses. (The motivation and description of these assumptions are best delayed until the U-learnability model is described.) Equally interestingly, various related PAC-learning algorithms such as those for k-DNF cannot be used to U-learn under the same assumptions. Finally, the paper raises a number of related open questions and general research directions; open questions include not only U-learnability questions, but also several new PAC-learnability questions and one question regarding a general property of propositional logic.
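The top-down, CART-style induction analysed in the paper can be sketched as a greedy recursion over boolean feature tests. This toy is not the idealized variant the proof concerns; it only illustrates the algorithm family: split on the feature minimising weighted Gini impurity, recurse, and stop at pure nodes.

```python
def gini(labels):
    """Gini impurity of a label list, the split criterion CART greedily minimises."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def grow(examples, features):
    """Top-down induction: examples are (feature_dict, label) pairs."""
    labels = [y for _, y in examples]
    majority = max(set(labels), key=labels.count)
    if len(set(labels)) == 1 or not features:   # pure node, or no tests left
        return majority                          # leaf: predict majority label
    def split_cost(f):
        parts = [[y for x, y in examples if x[f] == v] for v in (True, False)]
        return sum(len(p) / len(labels) * gini(p) for p in parts if p)
    f = min(features, key=split_cost)            # greedy choice of the best test
    rest = [g for g in features if g != f]
    branches = {}
    for v in (True, False):
        sub = [(x, y) for x, y in examples if x[f] == v]
        branches[v] = grow(sub, rest) if sub else majority
    return (f, branches[True], branches[False])  # internal node: (test, yes, no)

def classify(tree, x):
    while isinstance(tree, tuple):
        f, yes, no = tree
        tree = yes if x[f] else no
    return tree
```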

19 citations


Book Chapter
01 Jan 1999
TL;DR: ASE-Progol, a Closed Loop Machine Learning system, is proposed that will be the first attempt to use a robot to carry out experiments selected by Active Learning within a real world application.
Abstract: Machine Learning (ML) systems that produce human-comprehensible hypotheses from data are typically open loop, with no direct link between the ML system and the collection of data. This paper describes the alternative, Closed Loop Machine Learning. This is related to the area of Active Learning in which the ML system actively selects experiments to discriminate between contending hypotheses. In Closed Loop Machine Learning the system not only selects but also carries out the experiments in the learning domain. ASE-Progol, a Closed Loop Machine Learning system, is proposed. ASE-Progol will use the ILP system Progol to form the initial hypothesis set. It will then devise experiments to select between competing hypotheses, direct a robot to perform the experiments, and finally analyse the experimental results. ASE-Progol will then revise its hypotheses and repeat the cycle until a unique hypothesis remains. This will be, to our knowledge, the first attempt to use a robot to carry out experiments selected by Active Learning within a real world application.
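The cycle described above can be sketched abstractly. This is a hypothetical sketch, not ASE-Progol itself: hypotheses are modelled as functions from an experiment to a predicted outcome, `run_experiment` stands in for the robot, and the experiment-selection heuristic (maximise disagreement among surviving hypotheses) is an assumption for illustration.

```python
def closed_loop(hypotheses, experiments, run_experiment):
    """Repeat: pick the experiment whose predicted outcomes best discriminate
    the surviving hypotheses, carry it out, and discard refuted hypotheses,
    until at most one hypothesis remains or experiments run out."""
    hypotheses = list(hypotheses)
    experiments = list(experiments)
    while len(hypotheses) > 1 and experiments:
        # Most discriminating experiment: its predictions split the
        # hypothesis set into the largest number of disagreeing groups.
        e = max(experiments, key=lambda e: len({h(e) for h in hypotheses}))
        experiments.remove(e)
        outcome = run_experiment(e)                     # the robot performs it
        hypotheses = [h for h in hypotheses if h(e) == outcome]
    return hypotheses
```

With three candidate hypotheses about a parity-like domain and the true behaviour `e % 2`, two experiments suffice to eliminate all but the correct hypothesis.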

11 citations


Proceedings Article
24 Jun 1999
TL;DR: The problem of repairing incomplete background knowledge using Theory Recovery is examined, using the logical back-propagation ability of Progol 5.0 to perform theory recovery and results are consistent with the derived bound.
Abstract: In this paper we examine the problem of repairing incomplete background knowledge using Theory Recovery. Repeat Learning under ILP considers the problem of updating background knowledge in order to progressively increase the performance of an ILP algorithm as it tackles a sequence of related learning problems. Theory recovery is suggested as a suitable mechanism. A bound is derived for the performance of theory recovery in terms of the information content of the missing predicate definitions. Experiments are described that use the logical back-propagation ability of Progol 5.0 to perform theory recovery. The experimental results are consistent with the derived bound.

11 citations


Proceedings Article
01 Jan 1999
TL;DR: The experiments show that the Inductive Logic Programming algorithm Progol gives overall significant predictive accuracy of user interests. However, the results are highly polarized.
Abstract: WorldWide Web (WWW) usage is increasing rapidly. Users waste time downloading pages that turn out to have no interest to them. Interesting pages are often overlooked. Modern browsers allow the user to specify search strings. This paper experimentally investigates a different approach based on learning users' WWW page preferences from examples. WWW users were drawn from a class of students. The experiments show that the Inductive Logic Programming algorithm Progol gives overall significant predictive accuracy of user interests. However, the results are highly polarized. Some users are very predictable and others not. The polarization was surprisingly found to correlate in all cases with student exam performance. This work was conducted as part of Parson's MSc thesis [7].

4 citations