
Showing papers by "Michael J. Pazzani published in 1989"


Proceedings Article
20 Aug 1989
TL;DR: A technique for detecting errors of omission, assigning blame for the error of omission to an inference rule in the domain theory, and revising the domain theory to accommodate new examples is described.
Abstract: In this paper, we address an issue that arises when the background knowledge used by explanation-based learning is incorrect. In particular, we consider the problems that can be caused by a domain theory that may be overly specific. Under this condition, generalizations formed by explanation-based learning will make errors of omission when they are relied upon to make predictions or explanations. We describe a technique for detecting errors of omission, assigning blame for the error of omission to an inference rule in the domain theory, and revising the domain theory to accommodate new examples.
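As a toy illustration of the technique the abstract describes (all rule and feature names below are invented, not taken from the paper): when an overly specific rule causes an error of omission on a new positive example, the unsatisfied conjuncts can be blamed for the failure and dropped, generalizing the theory to cover the example.

```python
# Toy domain theory: each rule maps a conclusion to a conjunction of
# required features. An overly specific rule causes errors of omission:
# the theory fails to predict a conclusion that actually holds.
theory = {"fragile": {"glass", "thin", "ornate"}}  # "ornate" is too specific

def predict(theory, conclusion, example):
    """A rule fires only when all of its conjuncts hold in the example."""
    return theory[conclusion] <= example

def revise(theory, conclusion, example):
    """Blame the unsatisfied conjuncts for the error of omission and
    generalize the rule by dropping them, so the new example is covered."""
    failed = theory[conclusion] - example
    theory[conclusion] = theory[conclusion] - failed

example = {"glass", "thin", "plain"}         # actually fragile
if not predict(theory, "fragile", example):  # error of omission detected
    revise(theory, "fragile", example)
# After revision the rule is {"glass", "thin"} and the example is covered.
```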

24 citations


Book ChapterDOI
01 Dec 1989
TL;DR: This chapter reviews explanation-based learning with weak domain theories and describes PostHoc, a learning system that uses a weak domain theory to generate plausible explanations for a state change after it has occurred.
Abstract: This chapter reviews explanation-based learning with weak domain theories. One facet of a weak domain theory is that the influence of a number of factors is known. A weak theory does not provide any means of combining these influences. The theory constrains the features that play a part in predictive relationships. Only when an accurate predictive relationship cannot be made by considering combinations of known influences are other factors considered. A learning system called PostHoc has been developed that uses this sort of background knowledge to propose hypotheses that are then tested against further data. PostHoc utilizes a weak domain theory to generate plausible explanations for a state change after it has occurred. This background knowledge is also used to revise hypotheses that fail to make accurate predictions.

11 citations


Book ChapterDOI
01 Dec 1989
TL;DR: This chapter discusses a framework for integrating empirical learning with explanation-based learning and presents an algorithm that does this with both pure conjunctive concepts and k-CNF concepts.
Abstract: Publisher Summary This chapter discusses a framework for integrating empirical learning with explanation-based learning and presents an algorithm that does this with both pure conjunctive concepts and k-CNF concepts. The framework involves using an empirical and an explanation-based method to form separate hypotheses, and then combining the hypotheses from the separate sources to form a composite hypothesis. An additional important complication arises because the system is required to learn the domain theory (via an empirical method) while using the domain theory to support the explanation-based method. The hypotheses produced by explanation-based learning with a domain theory acquired with such a one-sided empirical learning method will also never be more general than the correct hypothesis. As both the empirical and explanation-based hypotheses are not more general than the correct hypothesis, they can be combined by finding the least general hypothesis consistent with both hypotheses. In this manner, the integrated hypothesis would be the least general hypothesis that is consistent with both the observed data and the domain knowledge.
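For pure conjunctive concepts, the combination step described above can be sketched concretely: a conjunction g generalizes a conjunction h exactly when g's literals are a subset of h's, so the least general hypothesis generalizing two hypotheses (each assumed no more general than the correct concept) is the intersection of their literal sets. A minimal sketch under that assumption (the feature names are illustrative, not from the paper):

```python
def least_general_combination(empirical_hyp, ebl_hyp):
    """Combine two conjunctive hypotheses, each assumed to be no more
    general than the correct concept. Since a conjunction with fewer
    literals is more general, the least general hypothesis that
    generalizes both inputs keeps exactly the literals common to both."""
    return empirical_hyp & ebl_hyp

empirical = {"red", "square", "large"}   # induced from observed examples
analytical = {"red", "square", "heavy"}  # produced by explanation-based learning
combined = least_general_combination(empirical, analytical)
# combined keeps only the literals supported by both sources
```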

8 citations


06 Oct 1989
TL;DR: Sarrett and Pazzani as mentioned in this paper presented an approach to model the average case behavior of learning algorithms and applied the model to three learning algorithms: a purely empirical algorithm (Bruner's Wholist), an algorithm which prefers analytical (explanation-based) learning over empirical learning (EBL-FIRST-TM), and an algorithm integrating both analytical and empirical learning (IOSC-TM).
Abstract: Author(s): Sarrett, Wendy E.; Pazzani, Michael J. | Abstract: We present an approach to modeling the average case behavior of learning algorithms. Our motivation is to mathematically model the performance of learning algorithms in order to better understand the nature of their empirical behavior. We are interested in how differences in learning algorithms influence the expected accuracy of the concepts learned. We present the Average Case Learning Model and apply the model to three learning algorithms: a purely empirical algorithm (Bruner's Wholist), an algorithm which prefers analytical (explanation-based) learning over empirical learning (EBL-FIRST-TM), and an algorithm integrating both analytical and empirical learning (IOSC-TM). The Average Case Learning Model is unique in that it is able to accurately predict the expected behavior of learning algorithms. We compare average case analysis to Valiant's Probably Approximately Correct (PAC) learning model.
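As a rough illustration of the kind of quantity such a model predicts, the expected accuracy of Bruner's Wholist strategy can be estimated by simulation: the hypothesis is the intersection of the positive training examples, and a test example is classified positive when it contains every hypothesized feature. The paper's model derives this expectation analytically; the sketch below only estimates it by sampling, and the target concept and parameters are made up:

```python
import random

def random_example(n_features, rng):
    # Each feature is independently present with probability 1/2.
    return {f for f in range(n_features) if rng.random() < 0.5}

def wholist_accuracy(target, n_features, n_train, n_test, trials=500, seed=0):
    """Monte Carlo estimate of the expected accuracy of Bruner's Wholist
    after n_train positive training examples of a conjunctive target."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        hyp = set(range(n_features))             # maximally specific start
        for _ in range(n_train):
            ex = random_example(n_features, rng) | target  # a positive example
            hyp &= ex                            # Wholist: intersect positives
        correct = 0
        for _ in range(n_test):
            ex = random_example(n_features, rng)
            correct += ((hyp <= ex) == (target <= ex))
        total += correct / n_test
    return total / trials

acc1 = wholist_accuracy({0, 1}, 8, n_train=1, n_test=50)
acc10 = wholist_accuracy({0, 1}, 8, n_train=10, n_test=50)
# Expected accuracy grows with the number of training examples, the sort
# of curve an average case model predicts in closed form.
```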

8 citations


Proceedings ArticleDOI
27 Mar 1989
TL;DR: In this paper, the authors compare strategies of learning from failures to learning from successes in the context of a generate-and-test problem solver, and show that failure-driven learning creates rules which distinguish between failures.
Abstract: The author compares strategies of learning from failures to learning from successes in the context of a generate-and-test problem solver. One result is fairly straightforward: failure-driven learning creates rules which distinguish between failures. This is demonstrated by the fact that the number of hypotheses decreases after learning. A more subtle result is that the performance of the system, measured in terms of logical inferences, decreased with failure-driven learning more than it did with two variants of success-driven learning. Diagnosis results are presented for ACES, which is designed to process telemetry data from a satellite and isolate the cause of problems with the attitude control system.

3 citations


Book ChapterDOI
01 Jan 1989
TL;DR: The approach outlined in this paper is a promising means of creating knowledge-based systems to perform useful functions by using similarity-based learning to acquire general schemata which represent plans for achieving goals.
Abstract: In spite of the limitations of OCCAM's language capabilities, I believe that the approach outlined in this paper is a promising means of creating knowledge-based systems to perform useful functions. The approach consists of using similarity-based learning to acquire general schemata which represent plans for achieving goals. These schemata are specialized with explanation-based learning to create a memory which indicates the conditions under which the plans for achieving goals have proved successful. Explanation-based learning in OCCAM makes use of the representation for complex plans which is created by the similarity-based learning process. The schemata formed by explanation-based learning serve as an efficient means of recognizing the class of situations which would have the same explanation as a training example.

3 citations


10 May 1989
TL;DR: Schulenburg and Pazzani as discussed by the authors constructed a system called SALLY that starts with a few, very general principles for understanding the intention of the speaker of an utterance and then creates a specialized rule to understand directly similar utterances in the future.
Abstract: Author(s): Schulenburg, David; Pazzani, Michael J. | Abstract: We describe an approach to deriving efficient rules for interpreting the intended meaning of indirect speech acts. We have constructed a system called SALLY that starts with a few, very general principles for understanding the intention of the speaker of an utterance. After inferring the intended meaning of a particular utterance, SALLY creates a specialized rule to understand directly similar utterances in the future.

1 citation


Proceedings ArticleDOI
27 Mar 1989
TL;DR: In this article, an explanation-based learning method is proposed to abstract general principles from a small number of prior cases, and the authors demonstrate the feasibility of applying this method to economic sanction incidents.
Abstract: Explanation-based learning, a method of abstracting general principles from a small number of prior cases, is discussed. The author demonstrates the feasibility of applying this method to economic sanction incidents. This approach is contrasted with regression analysis, a traditional quantitative method. A method for integrating these two approaches is proposed.

1 citation