Author

Walter Hamscher

Other affiliations: PricewaterhouseCoopers
Bio: Walter Hamscher is an academic researcher from the Massachusetts Institute of Technology. The author has contributed to research in topics including troubleshooting and model-based reasoning, has an h-index of 8, and has co-authored 12 publications receiving 763 citations. Previous affiliations of Walter Hamscher include PricewaterhouseCoopers.

Papers
Book ChapterDOI
01 Oct 1988
TL;DR: Model-based reasoning as mentioned in this paper surveys the state of the art in model-based diagnosis and troubleshooting and concludes that diagnostic reasoning from a model is reasonably well understood, but there is a rich supply of research issues in the modeling process itself.
Abstract: We survey the current state of the art in model-based reasoning, particularly its application to diagnosis and troubleshooting, reviewing areas that are well understood and exploring areas that present challenging research topics. We conclude that diagnostic reasoning from a model is reasonably well understood, but that there is a rich supply of research issues in the modeling process itself. In a sense we know how to do model-based reasoning; we don't know how to model the behavior of complex devices, how to create models, and how to select the "right" model for the task at hand.

265 citations

Book
31 Oct 1995
TL;DR: It is argued for the importance of fault models that are explicit, separated from the troubleshooting mechanism, and retractable in much the same sense that inferences are retracted in current systems.
Abstract: While expert systems have traditionally been built using large collections of rules based on empirical associations, interest has grown recently in the use of systems that reason from representations of structure and function. Our work explores the use of such models in troubleshooting digital electronics. We describe our work to date on (i) a language for describing structure, (ii) a language for describing function, and (iii) a set of principles for troubleshooting that uses the two descriptions to guide its investigation. In discussing troubleshooting we show why the traditional approach --- test generation --- solves a different problem and we discuss a number of its practical shortcomings. We consider next the style of debugging known as violated expectations and demonstrate why it is a fundamental advance over traditional test generation. Further exploration of this approach, however, demonstrates that it is incapable of dealing with commonly known classes of faults. We explain the shortcoming as arising from the use of a fault model that is both implicit and inseparable from the basic troubleshooting methodology. We argue for the importance of fault models that are explicit, separated from the troubleshooting mechanism, and retractable in much the same sense that inferences are retracted in current systems.

151 citations

Proceedings Article
18 Aug 1982
TL;DR: In this article, the authors explore the use of such models in troubleshooting digital electronics and argue for the importance of fault models that are explicit, separated from the troubleshooting mechanism, and retractable in much the same sense that inferences are retracted in current systems.
Abstract: While expert systems have traditionally been built using large collections of rules based on empirical associations, interest has grown recently in the use of systems that reason from representations of structure and function. Our work explores the use of such models in troubleshooting digital electronics. We describe our work to date on (i) a language for describing structure, (ii) a language for describing function, and (iii) a set of principles for troubleshooting that uses the two descriptions to guide its investigation. In discussing troubleshooting we show why the traditional approach --- test generation --- solves a different problem and we discuss a number of its practical shortcomings. We consider next the style of debugging known as violated expectations and demonstrate why it is a fundamental advance over traditional test generation. Further exploration of this approach, however, demonstrates that it is incapable of dealing with commonly known classes of faults. We explain the shortcoming as arising from the use of a fault model that is both implicit and inseparable from the basic troubleshooting methodology. We argue for the importance of fault models that are explicit, separated from the troubleshooting mechanism, and retractable in much the same sense that inferences are retracted in current systems.

145 citations

Proceedings Article
01 Jan 1992
TL;DR: In this paper, the authors describe candidate generation for digital devices with state, a fault localization problem that is intractable when the devices are described at low levels of abstraction and underconstrained when they are described at higher levels of abstraction, and demonstrate that the same candidate generation procedure that works for combinatorial circuits becomes indiscriminate when applied to a state circuit modeled in that extended representation.
Abstract: “Hard problems” can be hard because they are computationally intractable, or because they are underconstrained. Here we describe candidate generation for digital devices with state, a fault localization problem that is intractable when the devices are described at low levels of abstraction, and is underconstrained when described at higher levels of abstraction. Previous work [1] has shown that a fault in a combinatorial digital circuit can be localized using a constraint-based representation of structure and behavior. In this paper we (1) extend this representation to model a circuit with state by choosing a time granularity and vocabulary of signals appropriate to that circuit; (2) demonstrate that the same candidate generation procedure that works for combinatorial circuits becomes indiscriminate when applied to a state circuit modeled in that extended representation; (3) show how the common technique of single-stepping can be viewed as a divide-and-conquer approach to overcoming that lack of constraint; and (4) illustrate how using structural detail can help to make the candidate generator discriminating once again, but only at great cost.
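The constraint-based candidate generation the abstract builds on can be illustrated with a minimal hypothetical sketch (invented circuit and names, not the paper's actual representation): each gate is modeled by its normal behavior, a discrepancy at an output implicates every gate in that output's cone of influence, and under a single-fault assumption the candidates are the intersection of the implicated cones.

```python
# Hypothetical sketch of single-fault candidate generation for a small
# combinatorial circuit: A1: x = a AND b, A2: y = c AND d, O1: out = x OR y.
GATES = {
    "A1": (lambda v: v["a"] and v["b"], "x", ("a", "b")),
    "A2": (lambda v: v["c"] and v["d"], "y", ("c", "d")),
    "O1": (lambda v: v["x"] or v["y"], "out", ("x", "y")),
}
DRIVER = {out: name for name, (_, out, _) in GATES.items()}

def simulate(inputs):
    """Predict every signal value assuming all gates work correctly."""
    v = dict(inputs)
    for fn, out, _ in GATES.values():  # insertion order is topological here
        v[out] = fn(v)
    return v

def cone(signal):
    """All gates that can influence `signal` (its cone of influence)."""
    gates = set()
    if signal in DRIVER:
        name = DRIVER[signal]
        gates.add(name)
        for src in GATES[name][2]:
            gates |= cone(src)
    return gates

def candidates(inputs, observed):
    """Gates that could explain every discrepancy, assuming a single fault."""
    predicted = simulate(inputs)
    cands = set(GATES)
    for sig, val in observed.items():
        if predicted[sig] != val:
            cands &= cone(sig)  # the faulty gate must lie in this cone
    return cands

inputs = {"a": True, "b": True, "c": False, "d": True}
# Predicted out is True; an observed out of False implicates the whole cone.
print(sorted(candidates(inputs, {"out": False})))  # → ['A1', 'A2', 'O1']
```

As the paper notes, this cone-intersection procedure is coarse: it stays discriminating on combinatorial circuits only because extra observations of internal signals shrink the intersection, and it degrades once state is introduced.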

64 citations

Proceedings Article
13 Jul 1987
TL;DR: Joshua, a system which provides syntactically uniform access to heterogeneously implemented knowledge bases, is presented; the authors show how swapping in more efficient data structures sped up one application by a factor of 3, and how a TMS implemented for another system was interfaced to Joshua without modifying it.
Abstract: This paper presents Joshua, a system which provides syntactically uniform access to heterogeneously implemented knowledge bases. Its power comes from the observation that there is a Protocol of Inference consisting of a small set of abstract actions, each of which can be implemented in many ways. We use the object-oriented programming facilities of Flavors to control the choice of implementation. A statement is an instance of a class identified with its predicate. The steps of the protocol are implemented by methods inherited from the classes. Inheritance of protocol methods is a compile-time operation, leading to very fine-grained control with little run-time cost. Joshua has two major advantages: First, a Joshua programmer can easily change his program to use more efficient data structures without changing the rule set or other knowledge-level structures. We show how we thus sped up one application by a factor of 3. Second, it is straightforward to build an interface which incorporates an existing tool into Joshua, without modifying the tool. We show how a different TMS, implemented for another system, was thus interfaced to Joshua.
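The protocol-of-inference idea can be sketched in modern terms (a hypothetical Python analogue with invented names, not Joshua's actual Flavors code): protocol steps such as tell/ask are methods on a predicate class, so the storage implementation can be swapped by subclassing while rule-level code keeps the same uniform syntax.

```python
class Predicate:
    """Default protocol implementation: statements kept in a flat list."""
    def __init__(self):
        self._store = []

    def tell(self, *args):
        """Protocol step: assert a statement into the knowledge base."""
        self._store.append(args)

    def ask(self, *args):
        """Protocol step: query whether a statement has been asserted."""
        return args in self._store

class IndexedPredicate(Predicate):
    """Same tell/ask protocol, backed by a hash set for faster lookup."""
    def __init__(self):
        self._store = set()

    def tell(self, *args):
        self._store.add(args)
    # `ask` is inherited unchanged: `in` works for both list and set.

parent = Predicate()
parent.tell("tom", "bob")
assert parent.ask("tom", "bob") and not parent.ask("bob", "tom")

fast_parent = IndexedPredicate()     # callers are unaware of the swap
fast_parent.tell("tom", "bob")
assert fast_parent.ask("tom", "bob")
```

The point of the design, as in Joshua, is that callers of tell/ask never change when the representation does; only the class bound to the predicate does.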

47 citations


Cited by
Journal ArticleDOI
TL;DR: The diagnostic procedure presented in this paper is model-based, inferring the behavior of the composite device from knowledge of the structure and function of the individual components comprising the device.

2,199 citations

Journal ArticleDOI
Johan de Kleer1, John Seely Brown1
TL;DR: A fairly encompassing account of qualitative physics, which introduces causality as an ontological commitment for explaining how devices behave, and presents algorithms for determining the behavior of a composite device from the generic behavior of its components.

1,550 citations

01 Jun 1984
TL;DR: The authors describe a system that reasons from first principles, i.e., using knowledge of structure and behavior, implemented and tested on several examples in the domain of troubleshooting digital electronic circuits, and present a technique called constraint suspension that provides a powerful tool for troubleshooting.
Abstract: We describe a system that reasons from first principles, i.e., using knowledge of structure and behavior. The system has been implemented and tested on several examples in the domain of troubleshooting digital electronic circuits. We give an example of the system in operation, illustrating that this approach provides several advantages, including a significant degree of device independence, the ability to constrain the hypotheses it considers at the outset yet deal with a progressively wider range of problems, and the ability to deal with situations that are novel in the sense that their outward manifestations may not have been encountered previously. As background we review our basic approach to describing structure and behavior, then explore some of the technologies used previously in troubleshooting. Difficulties encountered there lead us to a number of new contributions, four of which make up the central focus of this paper.
• We describe a technique we call constraint suspension that provides a powerful tool for troubleshooting.
• We point out the importance of making explicit the assumptions underlying reasoning and describe a technique that helps enumerate assumptions methodically.
• The result is an overall strategy for troubleshooting based on the progressive relaxation of underlying assumptions. The system can focus its efforts initially, yet will methodically expand its focus to include a broad range of faults.
• Finally, abstracting from our examples, we find that the concept of adjacency proves to be useful in understanding why some faults are especially difficult to diagnose and why multiple representations are useful.
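Constraint suspension can be sketched on a toy circuit (a minimal hypothetical example with invented names, not the paper's implementation): each component contributes a constraint relating its ports, and a component is a fault candidate exactly when suspending its constraint makes the observations consistent with the remaining constraints.

```python
from itertools import product

# Toy circuit: A1: x = a AND b, A2: y = c AND d, O1: out = x OR y.
COMPONENTS = {
    "A1": (("a", "b"), "x", lambda a, b: a and b),
    "A2": (("c", "d"), "y", lambda c, d: c and d),
    "O1": (("x", "y"), "out", lambda x, y: x or y),
}
INTERNAL = ["x", "y", "out"]

def consistent(values, suspended):
    """Do all non-suspended component constraints hold on `values`?"""
    for name, (ins, out, fn) in COMPONENTS.items():
        if name == suspended:
            continue
        if values[out] != fn(*(values[s] for s in ins)):
            return False
    return True

def suspension_candidates(observed):
    """Components whose suspension restores consistency.

    Unobserved internal signals are searched over both truth values.
    """
    free = [s for s in INTERNAL if s not in observed]
    cands = []
    for name in COMPONENTS:
        for assignment in product([False, True], repeat=len(free)):
            values = dict(observed, **dict(zip(free, assignment)))
            if consistent(values, suspended=name):
                cands.append(name)
                break
    return cands

obs = {"a": True, "b": True, "c": False, "d": True, "out": False}
print(suspension_candidates(obs))  # → ['A1', 'O1']
```

Note that suspension is more discriminating than merely intersecting cones of influence: here A2 is exonerated because no value it could produce makes the observed output consistent with the other components behaving correctly.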

989 citations

Journal ArticleDOI
TL;DR: In this paper, the authors describe a system that reasons from first principles, i.e., using knowledge of structure and behavior, to deal with situations that are novel in the sense that their outward manifestations may not have been encountered previously.

959 citations

Book
05 Jun 2014
TL;DR: This comprehensive collection of articles shows the breadth and depth of DAI research and its relevance to emerging DAI technologies as well as to practical problems in artificial intelligence, distributed computing systems, and human-computer interaction.
Abstract: Most artificial intelligence research investigates intelligent behavior for a single agent--solving problems heuristically, understanding natural language, and so on. Distributed Artificial Intelligence (DAI) is concerned with coordinated intelligent behavior: intelligent agents coordinating their knowledge, skills, and plans to act or solve problems, working toward a single goal, or toward separate, individual goals that interact. DAI provides intellectual insights about organization, interaction, and problem solving among intelligent agents. This comprehensive collection of articles shows the breadth and depth of DAI research. The selected information is relevant to emerging DAI technologies as well as to practical problems in artificial intelligence, distributed computing systems, and human-computer interaction. "Readings in Distributed Artificial Intelligence" proposes a framework for understanding the problems and possibilities of DAI. It divides the study into three realms: the natural systems approach (emulating strategies and representations people use to coordinate their activities), the engineering/science perspective (building automated, coordinated problem solvers for specific applications), and a third, hybrid approach that is useful in analyzing and developing mixed collections of machines and human agents working together. The editors introduce the volume with an important survey of the motivations, research, and results of work in DAI. This historical and conceptual overview combines with chapter introductions to guide the reader through this fascinating field. A unique and extensive bibliography is also provided.

926 citations