
Showing papers on "Domain knowledge published in 1988"


Journal ArticleDOI
TL;DR: A layered behavioral model is used to analyze how three of these problems—the thin spread of application domain knowledge, fluctuating and conflicting requirements, and communication bottlenecks and breakdowns—affected software productivity and quality through their impact on cognitive, social, and organizational processes.
Abstract: The problems of designing large software systems were studied through interviewing personnel from 17 large projects. A layered behavioral model is used to analyze how three of these problems—the thin spread of application domain knowledge, fluctuating and conflicting requirements, and communication bottlenecks and breakdowns—affected software productivity and quality through their impact on cognitive, social, and organizational processes.

2,210 citations


Book ChapterDOI
01 Jun 1988
TL;DR: It is futile to try to acquire knowledge and construct systems unless it is known what they are for; important issues need to be raised before detailed knowledge acquisition can take place.
Abstract: Important issues need to be raised before detailed knowledge acquisition for designing expert systems can take place. It is futile to try to acquire knowledge and construct systems unless it is known what they are for.

437 citations


BookDOI
01 Jan 1988
TL;DR: This book describes the principles that guided the expert systems research group's work, looks in detail at the design and operation of each tool or methodology, and reports some lessons learned from the enterprise.
Abstract: In June of 1983, our expert systems research group at Carnegie Mellon University began to work actively on automating knowledge acquisition for expert systems. In the last five years, we have developed several tools under the pressure and influence of building expert systems for business and industry. These tools include the five described in chapters 2 through 6 - MORE, MOLE, SALT, KNACK and SIZZLE. One experiment, conducted jointly by developers at Digital Equipment Corporation, the Soar research group at Carnegie Mellon, and members of our group, explored automation of knowledge acquisition and code development for XCON (also known as R1), a production-level expert system for configuring DEC computer systems. This work influenced the development of RIME, a programming methodology developed at Digital which is the subject of chapter 7. This book describes the principles that guided our work, looks in detail at the design and operation of each tool or methodology, and reports some lessons learned from the enterprise. A common theme of the work, brought out in the introductory chapter, is that much power can be gained by understanding the roles that domain knowledge plays in problem solving. Each tool can exploit such an understanding because it focuses on a well-defined problem-solving method used by the expert systems it builds. Each tool chapter describes the basic problem-solving method assumed by the tool and the leverage provided by committing to the method.

341 citations


Journal ArticleDOI
Cecile Paris
TL;DR: This research addresses the issue of how the user's domain knowledge can affect an answer by studying texts and proposes two distinct descriptive strategies that can be used in a question answering program, and shows how they can be mixed to include the appropriate information from the knowledge base, given the users' domain knowledge.
Abstract: A question answering program providing access to a large amount of data will be most useful if it can tailor its answers to each individual user. In particular, a user's level of knowledge about the domain of discourse is an important factor in this tailoring if the answer provided is to be both informative and understandable to the user. In this research, we address the issue of how the user's domain knowledge can affect an answer. By studying texts, we found that the user's level of domain knowledge affected the kind of information provided and not just the amount of information, as was previously assumed. Depending on the user's assumed domain knowledge, a description can be either parts-oriented or process-oriented. Thus the user's level of expertise in a domain can guide a system in choosing the appropriate facts from the knowledge base to include in an answer. We propose two distinct descriptive strategies that can be used in a question answering program, and show how they can be mixed to include the appropriate information from the knowledge base, given the user's domain knowledge. We have implemented these strategies in TAILOR, a computer system that generates descriptions of devices. TAILOR uses one of the two discourse strategies identified in texts to construct a description for either a novice or an expert. It can merge the strategies automatically to produce a wide range of different descriptions to users who fall between the extremes of novice or expert, without requiring an a priori set of user stereotypes.
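The choice between the two descriptive strategies, and their mixing for users between the extremes, can be sketched roughly as follows. The device representation, thresholds, and "familiar" flag are invented for illustration; TAILOR itself works over a knowledge base and a richer user model.

```python
# Hypothetical sketch of parts-oriented vs. process-oriented description,
# keyed to the user's level of expertise. All names and thresholds here
# are illustrative, not TAILOR's actual implementation.

def parts_oriented(d):
    return f"{d['name']} has parts: {', '.join(p['name'] for p in d.get('parts', []))}"

def process_oriented(d):
    return f"{d['name']} works by {d['function']}"

def describe(device, expertise):
    """expertise in [0, 1]: 0 = fully naive, 1 = fully expert."""
    if expertise < 0.3:                      # naive users get structure
        return [parts_oriented(device)]
    if expertise > 0.7:                      # experts get function
        return [process_oriented(device)]
    # In between, mix strategies per subpart: process description for
    # parts the user is assumed to know, parts description otherwise.
    return [process_oriented(p) if p.get("familiar") else parts_oriented(p)
            for p in device["parts"]]
```

For a hypothetical telephone, a low expertise value yields a parts list, a high value a functional description, and intermediate values a blend of the two.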

217 citations



Journal ArticleDOI
TL;DR: The Generic Task approach as mentioned in this paper proposes that knowledge systems should be built out of building blocks, each of which is appropriate for a basic type of problem solving, and each generic task uses forms of knowledge and control strategies that are characteristic to it, and are in general conceptually closer to domain knowledge.
Abstract: The level of abstraction of much of the work in knowledge-based systems (the rule, frame, logic level) is too low to provide a rich enough vocabulary for knowledge and control. I provide an overview of a framework called the Generic Task approach that proposes that knowledge systems should be built out of building blocks, each of which is appropriate for a basic type of problem solving. Each generic task uses forms of knowledge and control strategies that are characteristic to it, and are in general conceptually closer to domain knowledge. This facilitates knowledge acquisition and can produce a more perspicuous explanation of problem solving. The relationship of the constructs at the generic task level to the rule-frame level is analogous to that between high-level programming languages and assembly languages in computer science. I describe a set of generic tasks that have been found particularly useful in constructing diagnostic, design and planning systems. In particular, I describe two tools, CSRL and DSPL, that are useful for building classification-based diagnostic systems and skeletal planning systems respectively, and a high level toolbox that is under construction called the Generic Task toolbox.

130 citations


Journal ArticleDOI
TL;DR: With the aid of the first substantial systematic analysis of a sample of expert systems applications developed in the real world, what is actually going on in terms of knowledge acquisition is described.
Abstract: Knowledge acquisition has long been considered to be the major constraint in the development of expert systems. Conventional wisdom also maintains that the major problem encountered in knowledge acquisition is in identifying the varying structures and characteristics of domain knowledge and matching these to suitable acquisition techniques. With the aid of the first substantial systematic analysis of a sample of expert systems applications developed in the real world, the authors describe what is actually going on in terms of knowledge acquisition. In the light of the evidence, it is argued that a reappraisal of the conventional approach to knowledge acquisition is necessary.

127 citations


Book ChapterDOI
01 Jan 1988
TL;DR: SALT is a knowledge-acquisition tool for generating expert systems that use a propose-and-revise problem-solving strategy; that strategy provides the basis for SALT’s knowledge representation.
Abstract: SALT is a knowledge-acquisition tool for generating expert systems that use a propose-and-revise problem-solving strategy. The SALT-assumed method constructs a design incrementally by proposing values for design parameters, identifying constraints on design parameters as the design develops, and revising decisions in response to constraint violations in the proposal. This problem-solving strategy provides the basis for SALT’s knowledge representation. SALT uses its knowledge of the intended problem-solving strategy in identifying relevant domain knowledge, in detecting weaknesses in the knowledge base in order to guide its interrogation of the domain expert, in generating an expert system that performs the task and explains its line of reasoning, and in analyzing test case coverage. The strong commitment to problem-solving strategy that gives SALT its power also defines its scope.
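The propose-and-revise cycle described in the abstract can be sketched as a small loop. The representation below (propose functions, constraint checks, named fixes) and the toy cable-sizing numbers are invented for illustration, not SALT's actual knowledge representation.

```python
# Rough sketch of a propose-and-revise loop in the spirit of the
# SALT-assumed method: propose parameter values, detect constraint
# violations as the design grows, and apply fixes until it is consistent.

def propose_and_revise(parameters, constraints, fixes, max_revisions=50):
    design = {}
    for name, propose in parameters:
        design[name] = propose(design)            # 1. propose a value
        for _ in range(max_revisions):
            violated = [c for c in constraints    # 2. detect violations among
                        if c["applies"](design)   #    constraints that apply
                        and not c["ok"](design)]
            if not violated:
                break
            fixes[violated[0]["name"]](design)    # 3. revise and re-check
    return design

# Toy example: propose a cable diameter, then thicken it until it carries
# the load. The numbers are made up.
parameters = [
    ("load_kg", lambda d: 4000),
    ("cable_mm", lambda d: 10),                   # initial optimistic proposal
]
constraints = [{
    "name": "strength",
    "applies": lambda d: "load_kg" in d and "cable_mm" in d,
    "ok": lambda d: d["cable_mm"] * 300 >= d["load_kg"],
}]
fixes = {"strength": lambda d: d.update(cable_mm=d["cable_mm"] + 2)}
```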

124 citations


Proceedings Article
21 Aug 1988
TL;DR: Modified explanation-based learning techniques allow the use of the incomplete domain theory to justify the actions of a case with respect to the facts known when the case was originally executed.
Abstract: Proper indexing of cases is critically important to the functioning of a case-based reasoner. In real domains such as fault recovery, a body of domain knowledge exists that can be captured and brought to bear on the indexing problem-even though the knowledge is incomplete. Modified explanation-based learning techniques allow the use of the incomplete domain theory to justify the actions of a case with respect to the facts known when the case was originally executed. Demonstrably relevant facts are generalized to form primary indices for the case. Inconsistencies between the domain theory and the actual case can also be used to determine facts that are demonstrably irrelevant to the case. The remaining facts are treated as secondary indices, subject to refinement via similarity based inductive techniques.
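A minimal version of this indexing idea might look like the following. The backward-chaining "explanation" and the fault-recovery facts are invented for illustration; real explanation-based learning would generalize the supporting facts rather than use them verbatim.

```python
# Illustrative sketch: facts that an (incomplete) domain theory uses to
# explain a case's action become primary indices; the remaining facts are
# kept as secondary indices for later similarity-based refinement.
# Rules are (head, body) pairs; the fault-recovery content is made up.

def explain(goal, rules, facts):
    """Backward-chain from `goal`; return the set of case facts that
    support it, or None if the theory cannot explain it."""
    if goal in facts:
        return {goal}
    for head, body in rules:
        if head == goal:
            support = set()
            for subgoal in body:
                sub = explain(subgoal, rules, facts)
                if sub is None:
                    break                 # this rule fails; try the next
                support |= sub
            else:
                return support            # every subgoal was explained
    return None

def index_case(action, rules, facts):
    primary = explain(action, rules, facts) or set()
    secondary = facts - primary           # refined later by induction
    return primary, secondary
```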

120 citations


Journal ArticleDOI
TL;DR: Using Daneman and Carpenter's reading span measure as an index of processing efficiency, the authors find that domain knowledge influences processing at the situational model or mental model level but not at the micro-level or propositional level.

112 citations


Book ChapterDOI
01 Jan 1988
TL;DR: In this paper, the authors present an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain, which is essential for avoiding potential errors and specifying performance boundaries when building intelligent machines.
Abstract: This chapter discusses cognitive systems engineering. To build a cognitive description of a problem solving world, it is necessary to understand how representations of the world interact with different cognitive demands imposed by the application world in question and with characteristics of the cognitive agents, both for existing and prospective changes in the world. Building a cognitive description is part of a problem driven approach to the application of computational power. In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms, that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whoever selects the computational mechanisms or to the domain expert who enters knowledge. The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain. Acquiring and using domain semantics is essential for avoiding potential errors and specifying performance boundaries when building intelligent machines.

Book ChapterDOI
01 Jan 1988
TL;DR: This paper presents a concept acquisition methodology that uses data, domain knowledge, and tentative concept descriptions in an integrated way to produce discriminant and operational concept descriptions, by integrating inductive and deductive learning.
Abstract: In this paper we present a concept acquisition methodology that uses data (concept examples and counterexamples), domain knowledge and tentative concept descriptions in an integrated way. Domain knowledge can be incomplete and/or incorrect with respect to the given data; moreover, the tentative concept descriptions can be expressed in a form which is not operational. The methodology is aimed at producing discriminant and operational concept descriptions, by integrating inductive and deductive learning. In fact, the domain theory is used in a deductive process, that tries to operationalize the tentative concept descriptions, but the obtained results are tested on the whole learning set rather than on a single example. Moreover, deduction is interleaved with the application of data-driven inductive steps. In this way, a search in a constrained space of possible descriptions can help overcome some limitations of the domain theory (e.g. inconsistency). The method has been tested in the framework of the inductive learning system “ML-SMART”, previously developed by the authors, and a simple example is also given.

Journal ArticleDOI
TL;DR: A diagnostic methodology that integrates compiled knowledge with deep-level knowledge, thus achieving diagnostic efficiency without sacrificing flexibility and reliability under novel circumstances is advocated and an agenda-based inference control algorithm that generates malfunction hypotheses by deriving them from structural and functional information of the process is described.

Book
01 Jan 1988
TL;DR: The book Knowledge Engineering for Expert Systems.

Journal ArticleDOI
01 Sep 1988
TL;DR: A conceptual contingency model matching the characteristics of knowledge acquisition methodologies to several decision types is proposed and developed, and implications of the proposed model and future research directions are addressed.
Abstract: A conceptual contingency model matching the characteristics of knowledge acquisition (KA) methodologies to several decision types is proposed. KA methodologies are divided into three categories: knowledge engineer-driven, expert-driven, and machine-driven. To evaluate current KA methodologies, a framework is proposed by addressing the nature of knowledge and problem domains. Different methodologies in each category are described and evaluated for their ability to support various kinds of problem domain and the types of knowledge they are designed to elicit. A contingency model mapping these methodologies to Mintzberg's managerial decision categories is developed. Implications of the proposed model and future research directions are addressed.

DOI
01 Jun 1988
TL;DR: It is shown how TAILOR can use information about a user's level of expertise to combine several discourse strategies in a single text, choosing the most appropriate at each point in the generation process, in order to generate texts for users anywhere along the knowledge spectrum.
Abstract: A question answering program that provides access to a large amount of data will be most useful if it can tailor its answers to each individual user. In particular, a user's level of knowledge about the domain of discourse is an important factor in this tailoring if the answer provided is to be both informative and understandable to the user. In this research, we address the issue of how the user's domain knowledge, or the level of expertise, might affect an answer. By studying texts we found that the user's level of domain knowledge affected the kind of information provided and not just the amount of information, as was previously assumed. Depending on the user's assumed domain knowledge, a description of a complex physical object can be either parts-oriented or process-oriented. Thus the user's level of expertise in a domain can guide a system in choosing the appropriate facts from the knowledge base to include in an answer. We propose two distinct descriptive strategies that can be used to generate texts aimed at naive and expert users. Users are not necessarily truly expert or fully naive however, but can be anywhere along a knowledge spectrum whose extremes are naive and expert. In this work, we show how our generation system, TAILOR, can use information about a user's level of expertise to combine several discourse strategies in a single text, choosing the most appropriate at each point in the generation process, in order to generate texts for users anywhere along the knowledge spectrum. TAILOR's ability to combine discourse strategies based on a user model allows for the generation of a wider variety of texts and the most appropriate one for the user.

Journal ArticleDOI
TL;DR: The knowledge elicitation problem arises from the need to acquire the knowledge of human experts in an explicit form suitable for encoding in a computer program such as an expert system as mentioned in this paper, which is very difficult to perform successfully because of the size and complexity of knowledge structures in the human brain, and because much procedural knowledge is tacit and unavailable to conscious verbal report via interview methods.
Abstract: The knowledge elicitation problem arises from the need to acquire the knowledge of human experts in an explicit form suitable for encoding in a computer program such as an expert system. This is very difficult to perform successfully because of the size and complexity of knowledge structures in the human brain, and because much procedural knowledge is tacit and unavailable to conscious verbal report via interview methods. The present paper draws upon an extensive review of research in the field of cognitive psychology in an attempt to offer a practical approach to this problem. First, a wide range of cognitive theories concerning the nature of knowledge representation in humans is considered, and a synthesis of the current state of theory is provided. Second, attention is drawn to a number of performance factors which may constrain the exhibition of a person's underlying cognitive competence. There then follows a review and discussion of a number of alternative psychological methodologies that mi...

Proceedings ArticleDOI
14 Mar 1988
TL;DR: A model of knowledge-based text condensation is presented which has been implemented as part of the text information system TOPIC and supports variable degrees of abstraction for text summarization as well as content-oriented retrieval of text knowledge.
Abstract: A model of knowledge-based text condensation is presented which has been implemented as part of the text information system TOPIC. Two major processes, text parsing and text condensation, are considered in detail, with emphasis on the latter. Based on principles of semantic parsing, the text parser serves the purpose of augmenting the initial domain knowledge base with the knowledge encoded in a text, thus generating a specific text knowledge base. A condensation process then transforms these text representation structures into a more abstract thematic description of what the text is about, filtering out irrelevant knowledge structures and preserving only the most significant concepts. Finally, a hierarchical representation of the thematic units of the text is generated in terms of a text graph which supports variable degrees of abstraction for text summarization as well as content-oriented retrieval of text knowledge.

Journal ArticleDOI
01 May 1988
TL;DR: The authors trace the evolution of the mechanisms for classification as the computational complexity of the problem increases, from numerical parameter-setting schemes, through those using intermediate abstractions and then relations between symbols, and finally to complex symbolic structures that explicitly incorporate domain knowledge.
Abstract: The general information-processing task of classification is considered and reviewed from the perspectives of the knowledge-based-reasoning, pattern-recognition, and connectionist paradigms in artificial intelligence, paying special attention to knowledge-based classificatory problem solving. The authors trace the evolution of the mechanisms for classification as the computational complexity of the problem increases, from numerical parameter-setting schemes, through those using intermediate abstractions and then relations between symbols, and finally to complex symbolic structures that explicitly incorporate domain knowledge.

Journal ArticleDOI
TL;DR: This paper discusses the use of repertory grid-centred knowledge acquisition tools such as the Expertise Transfer System (ETS), AQUINAS, KITTEN, and KSSO, and Dimensions of use are presented along with specific applications.
Abstract: Repertory grid-centred knowledge acquisition tools are useful as knowledge engineering aids when building many kinds of complex knowledge-based systems. These systems help in rapid prototyping and knowledge base analysis, refinement, testing, and delivery. These tools, however, are also being used as more general knowledge-based decision aids. Such features as the ability to very rapidly prototype knowledge bases for one-shot decisions and quickly combine and weigh various sources of knowledge, make these tools valuable outside of the traditional knowledge engineering process. This paper discusses the use of repertory grid-centred tools such as the Expertise Transfer System (ETS), AQUINAS, KITTEN, and KSSO. Dimensions of use are presented along with specific applications. Many of these dimensions are discussed within the context of ETS and AQUINAS applications at Boeing.
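For readers unfamiliar with the technique, a repertory grid is essentially a small ratings matrix: elements rated on bipolar constructs. The example data and the simple percent-match score below are illustrative only, not the actual analysis performed by ETS or AQUINAS.

```python
# A repertory grid rates elements (here, hypothetical candidate languages)
# on bipolar constructs using a 1..5 scale. Grid-based tools cluster
# constructs whose ratings move together; the percent-match score below
# is one common, simple way to compare two constructs. All data invented.

elements = ["Lisp", "Prolog", "OPS5"]
grid = {
    "interactive--batch":       [1, 2, 4],
    "general--special-purpose": [1, 3, 5],
    "interpreted--compiled":    [2, 2, 4],
}

def construct_match(a, b, scale=5):
    """Percent similarity of two constructs across all elements."""
    diff = sum(abs(x - y) for x, y in zip(grid[a], grid[b]))
    return 100 * (1 - diff / ((scale - 1) * len(elements)))
```

Constructs with a high match score (near 100) are candidates for merging or for probing the expert about the distinction between them.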

Proceedings Article
01 Mar 1988
TL;DR: This paper presents definitions of resource-bounded knowledge, belief, and common knowledge that in a precise sense capture the behavior of resource -bounded processors.
Abstract: Traditional treatments of knowledge in distributed systems have not been able to account for processors' limited computational resources. This paper presents definitions of resource-bounded knowledge, belief, and common knowledge that in a precise sense capture the behavior of resource-bounded processors. Subtle properties of the resulting notions are discussed, and they are successfully applied to two problems in distributed computing.

Journal ArticleDOI
01 Mar 1988
TL;DR: This work presents an overview of the KBMC system and focuses on the knowledge-based specification method used in the system, which automates the model construction phase of the simulation life cycle.
Abstract: A Knowledge-Based Model Construction (KBMC) system has been developed to automate the model construction phase of the simulation life cycle. The system's underlying rule base incorporates several types of knowledge. This includes domain knowledge that facilitates a structured interactive dialog for the acquisition of a complete model specification from a user. An executable discrete simulation model in SIMAN is automatically constructed by the system from this specification, utilizing modeling knowledge and SIMAN knowledge. We present an overview of the KBMC system and focus on the knowledge-based specification method used in the KBMC system.

Journal ArticleDOI
TL;DR: An efficient knowledge-based system approach to malfunction diagnosis in chemical processing plants is discussed in this paper, which involves a hierarchical diagnostic structure in which the nodes represent specific malfunction hypotheses, instead of being a static collection of knowledge, the hierarchy is a collection of small individual specialists coordinated to arrive at an overall diagnosis.

Journal ArticleDOI
J. W. Roach, J. E. Tatem
TL;DR: An experiment in which knowledge of music, a highly structured domain, is applied to extract primitive musical features from handwritten scores shows that if the domain of image processing is well defined, significant improvements in low-level segmentations can be achieved.

Journal ArticleDOI
TL;DR: This tutorial for novice knowledge engineers and managers discusses some considerations involved in using multiple experts, including deciding when multiple experts may be necessary, eliciting knowledge from multiple experts individually or in small groups, and knowledge engineer capabilities and preparation.
Abstract: The already difficult knowledge acquisition process is complicated when the expert system being developed requires interaction with multiple experts. In this tutorial for novice knowledge engineers and managers we discuss some considerations involved in using multiple experts, including (1) deciding when multiple experts may be necessary, (2) eliciting knowledge from multiple experts individually or in small groups, and (3) knowledge engineer capabilities and preparation. Next, we present three specific group-appropriate techniques to elicit knowledge during a knowledge acquisition session: brainstorming, consensus decision making, and the nominal group technique. Finally, we consider the importance and objectives of debriefing following knowledge acquisition from multiple experts

Proceedings ArticleDOI
09 Feb 1988
TL;DR: The SCISOR system is a computer program designed to scan naturally occurring texts in constrained domains, extract information, and answer questions about that information to deal gracefully with gaps in lexical and syntactic knowledge.
Abstract: The SCISOR system is a computer program designed to scan naturally occurring texts in constrained domains, extract information, and answer questions about that information. The system currently reads newspaper stories in the domain of corporate mergers and acquisitions. The language analysis strategy used by SCISOR combines full syntactic (bottom-up) parsing and conceptual expectation-driven (top-down) parsing. Four knowledge sources, including syntactic and semantic information and domain knowledge, interact in a flexible manner. This integration produces a more robust semantic analyzer that deals gracefully with gaps in lexical and syntactic knowledge, transports easily to new domains, and facilitates the extraction of information from texts.

Proceedings Article
21 Aug 1988
TL;DR: AARON demonstrates that, given appropriate interaction between domain knowledge and knowledge of representational strategy, relatively rich representations may result from sparse information.
Abstract: AARON is a program designed to investigate the cognitive principles underlying visual representation. Under continuous development for fifteen years, it is now able autonomously to make "freehand" drawings of people in garden-like settings. This has required a complex interplay between two bodies of knowledge: object-specific knowledge of how people are constructed and how they move, together with morphological knowledge of plant growth; and procedural knowledge of representational strategy. AARON's development through the events leading up to this recently-implemented knowledge-based form is discussed as an example of an "expert's system" as opposed to an "expert system." AARON demonstrates that, given appropriate interaction between domain knowledge and knowledge of representational strategy, relatively rich representations may result from sparse information.

Book ChapterDOI
01 Jul 1988
TL;DR: Different forms of metaknowledge are studied, a model of learning is presented that describes how the knowledge engineer detects problem-solving failures and tracks them back to gaps in domain knowledge, which are then reformulated as questions to ask a teacher.
Abstract: Knowledge engineers are efficient, active learners. They systematically approach domains and acquire knowledge to solve routine, practical problems. By modeling their methods, we may develop a basis for teaching other students how to direct their own learning. In particular, a knowledge engineer is good at detecting gaps in a knowledge base and asking focused questions to improve an expert system's performance. This ability stems from domain-general knowledge about: problem-solving procedures, the categorization of routine problem-solving knowledge, and domain and task differences. This paper studies these different forms of metaknowledge and illustrates their incorporation in an intelligent tutoring system. A model of learning is presented that describes how the knowledge engineer detects problem-solving failures and tracks them back to gaps in domain knowledge, which are then reformulated as questions to ask a teacher. We describe how this model of active learning is being developed and tested in a knowledge acquisition program for an expert system.

Journal ArticleDOI
TL;DR: The Frame-based Object Recognition and Modeling (3-D FORM) System, a practical framework for geometric representation and reasoning that performs both top-down and bottom-up reasoning, depending on the current available knowledge, is developed.
Abstract: The capabilities for representing and reasoning about three-dimensional (3-D) objects are essential for knowledge-based, 3-D photointerpretation systems that combine domain knowledge with image processing, as demonstrated by 3-D Mosaic and ACRONYM. Three-dimensional representation of objects is necessary for many additional applications, such as robot navigation and 3-D change detection. Geometric reasoning is especially important because geometric relationships between object parts are a rich source of domain knowledge. A practical framework for geometric representation and reasoning must incorporate projections between a two-dimensional (2-D) image and a 3-D scene, shape and surface properties of objects, and geometric and topological relationships between objects. In addition, it should allow easy modification and extension of the system's domain knowledge and be flexible enough to organize its reasoning efficiently to take advantage of the current available knowledge. We are developing such a framework -- the Frame-based Object Recognition and Modeling (3-D FORM) System. This system uses frames to represent objects such as buildings and walls, geometric features such as lines and planes, and geometric relationships such as parallel lines. Active procedures attached to the frames dynamically compute values as needed. Because the order of processing is controlled largely by the order of slot access, the system performs both top-down and bottom-up reasoning, depending on the current available knowledge. The FORM system is being implemented with the Carnegie-Mellon University-built Framekit tool in Common Lisp (Carbonell and Joseph 1986). To date, it has been applied to two types of geometric reasoning problems: interpreting 3-D wire frame data and solving sets of geometric constraints.

Journal ArticleDOI
TL;DR: The role in supporting the knowledge engineer in the tasks of knowledge elicitation and domain understanding is discussed, and an example of how KEATS was used to build an electronic fault diagnosis system is presented.
Abstract: The ‘Knowledge Engineer's Assistant’ (KEATS) is a software environment suitable for constructing knowledge-based systems. In this paper, we discuss its role in supporting the knowledge engineer in the tasks of knowledge elicitation and domain understanding. KEATS is based upon our own investigations of the behaviour and needs of knowledge engineers and provides two enhancements over other modern ‘shells’, ‘toolkits’, and ‘environments’ for knowledge engineering: (i) transcript analysis facilities, and (ii) a sketchpad on which the knowledge engineer may draw a freehand representation of the domain, from which code is automatically generated. KEATS uses a hybrid representation formalism that includes a frame-based language and a rule interpreter. We describe the novel components of KEATS in detail, and present an example of how KEATS was used to build an electronic fault diagnosis system.