
Showing papers on "Domain knowledge" published in 1986


01 Jun 1986
TL;DR: The objective of this document is to define what is meant by "blackboard systems," and to show the richness and diversity of blackboard system designs, to bridge the gap between a model and working systems.
Abstract: The first blackboard system was the HEARSAY-II speech understanding system that evolved between 1971 and 1976. Subsequently, many systems have been built that have similar system organizations and run-time behavior. The objectives of this document are: (1) to define what is meant by "blackboard systems," and (2) to show the richness and diversity of blackboard system designs. The article begins with a discussion of the underlying concept behind all blackboard systems, the blackboard model of problem solving. In order to bridge the gap between a model and working systems, the blackboard framework, an extension of the basic blackboard model, is introduced, including a detailed description of the model's components and their behavior. A model does not come into existence on its own and is usually an abstraction of many examples. In Section 2, the history of ideas is traced and the designs of some application systems that helped shape the blackboard model are detailed. We then describe and contrast existing blackboard systems. Blackboard systems can generally be divided into two categories: application and skeletal systems. In application systems the blackboard system components are integrated with the domain knowledge required to solve the problem at hand. Skeletal systems are devoid of domain knowledge and, as the name implies, consist of the essential system components from which application systems can be built by the addition of knowledge and the specification of control (i.e., meta-knowledge). Application systems will be discussed in Section 3, and skeletal systems will be discussed elsewhere. In Section 3.6 we summarize the features of the application systems, and in Section 4 we present the author's perspective on the utility of the blackboard approach to problem solving and knowledge engineering.
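The blackboard model described in this abstract can be illustrated with a toy control loop. This is a hypothetical sketch, not HEARSAY-II itself: the `Blackboard`, `KnowledgeSource`, and `control_loop` names are invented, and the "signal → phoneme → word" levels only gesture at a speech-understanding pipeline.

```python
# Minimal blackboard-system sketch (hypothetical; names are illustrative).
# Knowledge sources watch one level of the blackboard and post hypotheses
# to another; a simple scheduler fires them until no source has new input.

class Blackboard:
    def __init__(self):
        self.entries = {}                      # level name -> hypotheses

    def post(self, level, hypothesis):
        self.entries.setdefault(level, []).append(hypothesis)

class KnowledgeSource:
    def __init__(self, name, trigger_level, output_level, transform):
        self.name = name
        self.trigger_level = trigger_level
        self.output_level = output_level
        self.transform = transform             # hypothesis -> new hypothesis
        self.seen = set()

    def runnable(self, bb):
        return [h for h in bb.entries.get(self.trigger_level, [])
                if h not in self.seen]

    def run(self, bb):
        for h in self.runnable(bb):
            self.seen.add(h)
            bb.post(self.output_level, self.transform(h))

def control_loop(bb, sources):
    # The control component that, per the abstract, skeletal systems leave
    # for the application builder to specify (here: fire anything runnable).
    progress = True
    while progress:
        progress = False
        for ks in sources:
            if ks.runnable(bb):
                ks.run(bb)
                progress = True

bb = Blackboard()
bb.post("signal", "ah")
sources = [
    KnowledgeSource("phones", "signal", "phoneme", lambda h: f"/{h}/"),
    KnowledgeSource("words", "phoneme", "word", lambda h: h.strip("/").upper()),
]
control_loop(bb, sources)
print(bb.entries["word"])   # ['AH']
```

An application system would add real domain knowledge sources; a skeletal system is essentially the two classes and the loop above with the domain-specific parts left empty.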

265 citations


Journal ArticleDOI
TL;DR: This chapter discusses knowledge-based systems (KBS): the idea is not just to construct systems that exhibit knowledge, but to represent that knowledge somehow in the data structures of the program, and to have the system perform whatever it is doing by manipulating that knowledge explicitly.

244 citations


Journal ArticleDOI
TL;DR: In this paper, a psycholinguistic analysis of the development of writing skill and reports a developmental study of knowledge effects in writing is presented, focusing on the interaction of the Content and Discourse components.

221 citations


Book
01 Jan 1986

153 citations


Journal ArticleDOI
TL;DR: A preliminary cognitive model of the process of software design is presented and a model of expert problem‐solving skills for a task in which domain knowledge played an extensive role is developed.
Abstract: In this article we present a preliminary cognitive model of the process of software design. Our goal was to develop a model of expert problem-solving skills for a task in which domain knowledge played an extensive role. In our model the process of design is captured via goals-and-operators interacting with a knowledge base. We have defined the goals and operators as ones which are general to design, rather than specific to the current task. In addition, we have structured the atomic level operators so that they are able to access domain specific knowledge acquired through experience. This structure enables both general processes and domain specific knowledge to play critical roles in producing any particular design artifact. From our protocol analysis we have built a model which unites several recurring behaviors into an interpretable whole. The behaviors we account for include the building of mental models, mental simulation, and balanced development.

101 citations


Journal ArticleDOI
01 Oct 1986
TL;DR: A formal methodology is proposed that employs techniques from the field of cognitive psychology to uncover expert knowledge as well as an appropriate representation of that knowledge.
Abstract: The process of eliciting knowledge from human experts and representing that knowledge in an expert or knowledge-based system suffers from numerous problems. Not only is this process time-consuming and tedious, but the weak knowledge acquisition methods typically used (i.e., interviews and protocol analysis) are inadequate for eliciting tacit knowledge and may, in fact, lead to inaccuracies in the knowledge base. In addition, the intended knowledge representation scheme guides the acquisition of knowledge resulting in a representation-driven knowledge base as opposed to one that is knowledge-driven. In this paper, a formal methodology is proposed that employs techniques from the field of cognitive psychology to uncover expert knowledge as well as an appropriate representation of that knowledge. The advantages of such a methodology are discussed, as well as results from studies concerning the elicitation of concepts from experts and the assignment of labels to links in empirically derived semantic networks.

81 citations


Proceedings Article
11 Aug 1986
TL;DR: A methodology, called ontological analysis, which provides this level of analysis and consists of an analysis tool and its principles of use that result in a formal specification of the knowledge elements in a task domain.
Abstract: Knowledge engineering suffers from a lack of formal tools for understanding domains of interest. Current practice relies on an intuitive, informal approach for collecting expert knowledge and formulating it into a representation scheme adequate for symbolic processing. Implicit in this process, the knowledge engineer formulates a model of the domain, and creates formal data structures (knowledge base) and procedures (inference engine) to solve the task at hand. Newell (1982) has proposed that there should be a knowledge level analysis to aid the development of AI systems in general and knowledge-based expert systems in particular. This paper describes a methodology, called ontological analysis, which provides this level of analysis. The methodology consists of an analysis tool and its principles of use that result in a formal specification of the knowledge elements in a task domain.

75 citations


Journal ArticleDOI
TL;DR: This paper presents a structured framework for the development of an expert system and the five major aspects of expert system development are: problem definition; knowledge acquisition, representation and coordination; inference mechanism; implementation; and learning.
Abstract: An expert system can be defined as 'a tool which has the capability to understand problem specific knowledge and use the domain knowledge intelligently to suggest alternate paths of action'. This paper presents a structured framework for the development of an expert system. The five major aspects of expert system development are: problem definition; knowledge acquisition, representation and coordination; inference mechanism; implementation; and learning. These aspects are illustrated through the help of a modular robot configuration prototype expert system. Several industrial engineering applications in the areas of process planning, facilities planning, and maintenance and fault diagnosis are discussed and a comparative analysis of the different systems is presented.

69 citations


Proceedings ArticleDOI
01 Sep 1986
TL;DR: A method is presented of combining user-specified domain knowledge with efficient retrieval techniques based on probabilistic models that are being implemented as part of the I3R expert intermediary system.
Abstract: The introduction of domain knowledge into a document retrieval system has two important consequences: an increase in the effectiveness of retrieval and a decrease in the efficiency of text processing. In this paper, a method is presented of combining user-specified domain knowledge with efficient retrieval techniques based on probabilistic models. The domain knowledge is represented as a collection of frames that contain rules specifying recognition conditions for domain concepts and relationships between concepts. The inference network represented in these frames is used to infer the concepts that are related to a user's query. This approach is being implemented as part of the I3R expert intermediary system.
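The frame-and-link inference the abstract describes can be sketched in a few lines. This is a hedged illustration only: the frame contents, recognition predicates, and function names below are invented, not the actual I3R data structures.

```python
# Toy sketch of inferring query-related concepts from frame-like rules.
# Each "frame" holds a recognition condition and links to related concepts;
# inference starts from concepts recognized in the query text and follows
# the links transitively (a crude stand-in for the paper's inference network).

frames = {
    "database":  {"related": ["retrieval", "indexing"],
                  "recognize": lambda text: "database" in text},
    "retrieval": {"related": ["ranking"],
                  "recognize": lambda text: "retrieval" in text},
    "ranking":   {"related": [], "recognize": lambda text: "rank" in text},
    "indexing":  {"related": [], "recognize": lambda text: "index" in text},
}

def infer_concepts(query):
    # Seed with directly recognized concepts, then expand over links.
    found = {c for c, f in frames.items() if f["recognize"](query)}
    frontier = list(found)
    while frontier:
        concept = frontier.pop()
        for rel in frames[concept]["related"]:
            if rel not in found:
                found.add(rel)
                frontier.append(rel)
    return sorted(found)

print(infer_concepts("database retrieval methods"))
# ['database', 'indexing', 'ranking', 'retrieval']
```

The expanded concept set could then feed a probabilistic retrieval model, which is the combination the paper proposes.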

60 citations


Journal ArticleDOI
TL;DR: The approach is to develop knowledge acquisition tools that make explicit the knowledge representation implications of various methods, including Salt and Sear, the knowledge acquisition tool described in this article.
Abstract: For several years, Digital Equipment Corporation has used a system called R1 (or sometimes Xcon) to configure the computer systems it manufactures. The most recent account of R1's development notes that as R1 has grown to several thousand rules, maintaining and developing it have become substantially more difficult. As we explored what kind of tool could facilitate knowledge acquisition for R1, we saw that the most valuable tool would (1) help determine the role any new piece of knowledge should play and (2) suggest how to represent the knowledge so it would be applied whenever relevant. Several researchers in recent years have stressed that to maintain and to continue to develop a knowledge base it is critical to identify the various knowledge roles and to represent the knowledge in a way that does not conflate these roles. What is not yet clear is how many interestingly different roles exist and, if there are many, how one identifies the appropriate subset for a particular expert system. We believe we will find the answers to these questions by studying knowledge roles in different problem-solving methods. Our approach is to develop knowledge acquisition tools that make explicit the knowledge representation implications of various methods. Until recently, most of the research in knowledge acquisition tools has concentrated on tools for classification problem-solvers. Because knowledge acquisition tools such as Teiresias, ETS, and More presuppose relatively similar problem-solving methods, the systems built with these tools have similar knowledge roles. However, knowledge acquisition tools for constructive problem-solvers are now being developed (e.g., Salt and Sear, the knowledge acquisition tool we describe in this article) that are

58 citations


Journal ArticleDOI
TL;DR: Escort is a real time expert system which relieves the cognitive load on users of information systems generating large volumes of dynamic data and provides dynamic advice to the operator.
Abstract: Escort is a real time expert system which relieves the cognitive load on users of information systems generating large volumes of dynamic data. Escort is currently configured to help process plant operators in centralised control rooms. The system provides advice to the operators to help them handle and avoid crises. A demonstration system advising on a simulated oil production process has been implemented. Its key features are that it operates in real time, contains four domain knowledge bases, is controlled by a knowledge based scheduler and provides dynamic advice to the operator.

Journal ArticleDOI
TL;DR: A new style of information processing is discussed: requirements for knowledge representation, a knowledge representation satisfying these requirements, a knowledge processing system designed on this basis, and a new style of problem solving using this system.
Abstract: A new generation computer is expected to be the knowledge processing system of the future. However, many aspects are yet unknown regarding this technology, and a number of fundamental concepts, directly concerning knowledge processing system design need investigation, such as knowledge, data, inference, communication, information management, learning, and human interface.



Journal ArticleDOI
TL;DR: The Parametric Interpretation Expert System is a knowledge system for interpreting the parametric test data collected at the end of complex semiconductor fabrication processes, which reflects the way fabrication engineers reason causally about semiconductor failures.
Abstract: The Parametric Interpretation Expert System (PIES) is a knowledge system for interpreting the parametric test data collected at the end of complex semiconductor fabrication processes. The system transforms hundreds of measurements into a concise statement of the overall health of the process and the nature and probable cause of any anomalies. A key feature of PIES is the structure of the knowledge base, which reflects the way fabrication engineers reason causally about semiconductor failures. This structure permits fabrication engineers to do their own knowledge engineering, to build the knowledge base, and then to maintain it to reflect process modifications and operating experience. The approach appears applicable to other process control and diagnosis tasks.

Journal ArticleDOI
Kiyoshi Niwa1
01 May 1986
TL;DR: A new approach to assisting problem solving in ill-structured management domains, i.e., a knowledge-based human and computer cooperative system, which includes a knowledge base that stores experimental knowledge; a computer inference function which uses its knowledge base logically; and human association ability which uses the knowledge base intuitively.
Abstract: A new approach to assisting problem solving in ill-structured management domains, i.e., a knowledge-based human and computer cooperative system, is proposed. The system includes 1) a knowledge base that stores experimental knowledge; 2) a computer inference function which uses its knowledge base logically; and 3) human association ability which uses the knowledge base intuitively. The emphasis is on cooperation of 2) and 3). In order to realize this cooperation, a guide function of human association is devised and incorporated into a computer. An example system is developed for large-scale thermal power construction project risk management. This system enables project managers to make maximum use of an experimental knowledge base so as to help them effectively control their projects. The proposed approach represents a frontier in both knowledge engineering and human-computer interaction.

Proceedings ArticleDOI
01 Jan 1986
TL;DR: It is suggested that true common knowledge of higher levels can be implemented as eager common knowledge on lower levels and that the distinction between these two kinds of common knowledge can be associated with the level of abstraction.
Abstract: Explicit use of knowledge expressions in the design of distributed algorithms is explored. A non-trivial case study is carried through, illustrating the facilities that a design language could have for setting and deleting the knowledge that the processes possess about the global state and about the knowledge of other processes. No implicit capabilities for logical reasoning are assumed. A language basis is used that allows common knowledge not only by an eager protocol but also in the true sense. The observation is made that the distinction between these two kinds of common knowledge can be associated with the level of abstraction: true common knowledge of higher levels can be implemented as eager common knowledge on lower levels. A knowledge-motivated abstraction tool is therefore suggested to be useful in supporting stepwise refinement of distributed algorithms.

03 Mar 1986
TL;DR: The class of formulas in the propositional modal logic of knowledge that are valid in attainable knowledge states are axiomatized, the complexity of the decision problem is determined, and the states of knowledge are characterized.

Journal ArticleDOI
Abstract: Each policy process is in need of knowledge. Policy research is supposed to provide it, but it does so only in part. One of the reasons is that a policy researcher tends to produce new knowledge, although the need is more general. Knowledge management is considered to be a necessary prerequisite to nourished policymaking. Apart from the production of knowledge it includes activities like translation, structuring, interpretation, and so on, of both existing and newly produced knowledge elements. The primacy of the two-cultures approach is rejected. Knowledge types are articulated, and rules for handling knowledge are explored. A framework is developed for harmonizing the requirements of both policymaking and research.

Journal ArticleDOI
01 Jun 1986
TL;DR: The purpose of this paper is to present an application of fuzzy logic as an organizational framework for a knowledge-based interpretation system in the domain of geotechnical engineering.
Abstract: The purpose of this paper is to present an application of fuzzy logic as an organizational framework for a knowledge-based interpretation system in the domain of geotechnical engineering. Knowledge-based systems (KBS) are emerging as a powerful means of dealing with the ill-structured problems encountered in many engineering and medical applications. (It has been stated that expert systems are problem-solving programs that solve substantial problems generally conceded as being difficult and requiring expertise. They are called knowledge-based because their performance depends critically on the use of facts and heuristics (or rules of thumb) used by experts.) In a KBS, the knowledge or rules of judgement pertaining to the domain are encoded in the system in an explicit manner; these rules can be examined and modified, if necessary. This is in contrast to the way traditional algorithmic programs are structured. The motivation for this project was two-fold: generally, to demonstrate that knowledge...
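A fuzzy rule of the kind such a framework organizes can be sketched briefly. The membership functions, thresholds, and rule below are invented examples for illustration, not taken from the geotechnical system the paper describes.

```python
# Minimal fuzzy-rule sketch: graded membership instead of crisp true/false.
# Membership functions map a measurement to a degree in [0, 1]; min() is
# the standard fuzzy AND for combining antecedents.

def mu_soft(spt):
    # Degree to which soil counts as "soft", from an SPT blow count
    # (piecewise-linear membership; breakpoints are invented).
    if spt <= 4:
        return 1.0
    if spt >= 10:
        return 0.0
    return (10 - spt) / 6.0

def mu_shallow_water(depth_m):
    # Degree to which the water table counts as "shallow" (metres).
    if depth_m <= 1:
        return 1.0
    if depth_m >= 5:
        return 0.0
    return (5 - depth_m) / 4.0

def settlement_risk(spt, water_depth_m):
    # Rule: IF soil is soft AND water table is shallow THEN risk is high.
    return min(mu_soft(spt), mu_shallow_water(water_depth_m))

print(round(settlement_risk(6, 2), 3))   # 0.667
```

Because the rule yields a degree rather than a verdict, borderline field data produce graded conclusions, which is the organizational advantage fuzzy logic offers such interpretation systems.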

Proceedings ArticleDOI
25 Aug 1986
TL;DR: This paper presents a recent advance in multi-lingual knowledge-based machine translation (KBMT), which provides for separate syntactic and semantic knowledge sources that are integrated dynamically for parsing and generation.
Abstract: Building on the well-established premise that reliable machine translation requires a significant degree of text comprehension, this paper presents a recent advance in multi-lingual knowledge-based machine translation (KBMT). Unlike previous approaches, the current method provides for separate syntactic and semantic knowledge sources that are integrated dynamically for parsing and generation. Such a separation enables the system to have syntactic grammars, language specific but domain general, and semantic knowledge bases, domain specific but language general. Subsequently, grammars and domain knowledge are precompiled automatically in any desired combination to produce very efficient and very thorough real-time parsers. A pilot implementation of our KBMT architecture using functional grammars and entity-oriented semantics demonstrates the feasibility of the new approach.

Journal ArticleDOI
V. R. Waldron1
TL;DR: The goal of the knowledge acquisition interview is to obtain complete, accurate, and reliable knowledge for the expert system.
Abstract: An expert system is an application of artificial intelligence which requires the transfer of knowledge from human to machine. The acquisition of this knowledge from experienced practitioners (domain experts) is typically inefficient. Problems associated with the subjective reporting of knowledge must be overcome. To do this, a structured rather than unstructured interviewing approach can be used by knowledge engineers. Open, closed, and probing questions as well as direction responses should be used, together with a general-to-specific technique. The goal of the knowledge acquisition interview is to obtain complete, accurate, and reliable knowledge for the expert system.

Journal ArticleDOI
TL;DR: The use of social science knowledge by policymakers has fallen short of what many social scientists would prefer. Research that supports this conclusion may be flawed by a methodological bias that overlooks the variety of knowledge sources used by decision makers, as discussed by the authors.
Abstract: The use of social science knowledge by policymakers has fallen short of what many social scientists would prefer. Research that supports this conclusion may be flawed by a methodological bias that overlooks the variety of knowledge sources used by decision makers. A survey of social workers that measures knowledge use from the perspective of the user, rather than the producer, of information identifies three types of knowledge sources, all of which are integrated in the decision-making process. We argue here for a shift in the direction of knowledge utilization research that will recognize similarities between knowledge use and knowledge creation.

Journal ArticleDOI
01 May 1986
TL;DR: The relational knowledge base architecture the authors propose consists of a number of unification engines, several disk systems, a control processor, and a multiport page-memory to support a variety of knowledge representations.
Abstract: A relational knowledge base model and an architecture which manipulates the model are presented. An item stored in the relational knowledge base is called a term. A unification operation on terms in the relational knowledge base is used as the retrieval mechanism. The relational knowledge base architecture we propose consists of a number of unification engines, several disk systems, a control processor, and a multiport page-memory. The system has a knowledge compiler to support a variety of knowledge representations.
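The retrieval mechanism described above, unification over stored terms, can be sketched in software. The paper's architecture does this with parallel unification engines and a multiport page-memory; the code below is only a sequential toy, and its term encoding (variables as `"?x"` strings, compounds as tuples) is an invented convention.

```python
# Toy term-unification retrieval over a "relational knowledge base" of terms.
# retrieve() unifies a query term against every stored term and returns the
# substitutions that succeed, mimicking the paper's retrieval-by-unification.

def unify(a, b, subst=None):
    """Return a substitution unifying terms a and b, or None.
    Variables are strings starting with '?'; compound terms are tuples."""
    if subst is None:
        subst = {}
    # Dereference previously bound variables.
    a = subst.get(a, a) if isinstance(a, str) else a
    b = subst.get(b, b) if isinstance(b, str) else b
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

kb = [("parent", "alice", "bob"), ("parent", "bob", "carol")]

def retrieve(query):
    return [s for term in kb if (s := unify(query, term)) is not None]

print(retrieve(("parent", "?x", "carol")))   # [{'?x': 'bob'}]
```

Distributing the `unify` calls across many engines, with the knowledge base paged in from disk, is essentially the parallelism the proposed architecture exploits.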

Proceedings ArticleDOI
01 Dec 1986
TL;DR: This work has implemented programs that use artificial intelligence techniques to prepare high-level, intelligent summaries of databases, and that use empirical databases in turn, in combination with statistical and Al methods, to generate new domain knowledge base.
Abstract: The work described here addresses two problems: information overload of database users, and knowledge acquisition for use in AI systems. We have implemented programs that use artificial intelligence techniques to prepare high-level, intelligent summaries of databases, and that use empirical databases in turn, in combination with statistical and AI methods, to generate new domain knowledge. Both programs are examples of the acquisition of knowledge from data: the Summarization Module fuses large amounts of data succinctly, the Discovery Module extracts new knowledge present implicitly in data. We describe the implementation of our programs and outline planned extensions which combine both approaches. This work is distinguished from current knowledge engineering approaches in that we prime the system with expert knowledge, and then use factual data to learn more about the domain.


Journal ArticleDOI
01 Jun 1986
TL;DR: In this paper, a tutorial highlighting the strengths and weaknesses of several popular knowledge representation techniques is presented, and guidelines for the application of artificial intelligence to management-domain problems are proposed.
Abstract: Guidelines for the application of artificial intelligence to management-domain problems are proposed. A tutorial highlighting the strengths and weaknesses of several popular knowledge representation techniques is presented. Matching the strengths of these techniques with the requirements of different management decision-making domains provides a basis for the proposed guidelines. Management areas for which current approaches to knowledge representation provide little support are also discussed.

Book ChapterDOI
08 Aug 1986
TL;DR: A knowledge based system for ship classification that was originally developed using the PROSPECTOR updating method has been reimplemented to use the inference procedure developed by Pearl and Kim, and the comparative performance of the two versions of the system is discussed.
Abstract: One of the most important aspects of current expert systems technology is the ability to make causal inferences about the impact of new evidence. When the domain knowledge and problem knowledge are uncertain and incomplete, Bayesian reasoning has proven to be an effective way of forming such inferences [3,4,8]. While several reasoning schemes have been developed based on Bayes' Rule, there has been very little work examining the comparative effectiveness of these schemes in a real application. This paper describes a knowledge based system for ship classification [1], originally developed using the PROSPECTOR updating method [2], that has been reimplemented to use the inference procedure developed by Pearl and Kim [4,5]. We discuss our reasons for making this change, the implementation of the new inference engine, and the comparative performance of the two versions of the system.
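The PROSPECTOR updating method mentioned above is, at its core, odds-likelihood Bayesian updating: each piece of evidence carries a likelihood ratio that multiplies the hypothesis odds. The sketch below shows that core idea only; the prior and likelihood ratios are hypothetical numbers, not values from the ship-classification system.

```python
# PROSPECTOR-style odds-likelihood updating (core idea only).
# Each observed piece of evidence contributes LS = P(E|H) / P(E|not H);
# posterior odds = prior odds * product of the LS values.

def to_odds(p):
    return p / (1.0 - p)

def to_prob(o):
    return o / (1.0 + o)

def update(prior, likelihood_ratios):
    """Posterior probability of H after observing evidence with the
    given likelihood ratios."""
    odds = to_odds(prior)
    for ls in likelihood_ratios:
        odds *= ls
    return to_prob(odds)

# Hypothetical example: prior 0.1, two confirming cues (LS = 4 and LS = 2).
print(round(update(0.1, [4.0, 2.0]), 3))   # 0.471
```

This scheme assumes the evidence items are conditionally independent given the hypothesis; relaxing that assumption by propagating beliefs through a network is the direction of the Pearl and Kim procedure the paper adopts instead.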

Book ChapterDOI
01 May 1986
TL;DR: An account of some of the issues that arise in studying knowledge, belief, and conjecture in Artificial Intelligence and knowledge Representation is provided, intended primarily for the computer scientist with little exposure to AI and Knowledge Representation, and who is interested in understanding some the issues.
Abstract: It is by now a cliche to claim that knowledge representation is a fundamental research issue in Artificial Intelligence (AI), underlying much of the research, and the progress, of the last fifteen years. And yet, it is difficult to pinpoint exactly what knowledge representation is, does, or promises to do. A thorough survey of the field by Ron Brachman and Brian Smith [Brachman & Smith 80] points out quite clearly the tremendous range in viewpoints and methodologies of researchers in knowledge representation. This paper is a further attempt to look at the field in order to examine the state of the art and provide some insights into the nature of the research methods and results. The distinctive mark of this overview is its viewpoint: that propositions encoded in knowledge bases have a number of important features, and these features serve, or ought to serve, as a basis for guiding current interest and activity in AI. Accordingly, the paper provides an account of some of the issues that arise in studying knowledge, belief, and conjecture, and discusses some of the approaches that have been adopted in formalizing and using some of these features in AI. The account is intended primarily for the computer scientist with little exposure to AI and Knowledge Representation, and who is interested in understanding some of the issues. As such, the paper concentrates on raising issues and sketching possible approaches to solutions. More technical details can be found in the work referenced throughout the paper.

Journal ArticleDOI
TL;DR: The postempiricist deconstruction of epistemology has discredited the philosophical notion of valid scientific knowledge as a "Mirror of Nature" (Rorty, 1979) and the search for transcendental justifications of truth as discussed by the authors.
Abstract: We can distinguish between two different ways of conceptualizing scientific knowledge. Traditional epistemology has been concerned with the epistemic conditions under which science can produce "valid" and "true" knowledge as accurate accounts of "objective reality." For empiricist epistemology, "truth" signifies the intrinsic property of sentences that are not selected by a community of knowledge producers and validators but by reality itself. Scientists select hypotheses; reality selects true or sufficiently corroborated propositions. The epistemological notion of science conceives of "true knowledge" as a relation of correspondence between theory and something else that is not language: reality. The postempiricist deconstruction of epistemology has discredited the philosophical notion of valid scientific knowledge as a "Mirror of Nature" (Rorty, 1979) and the search for transcendental justifications of truth. Postempiricism replaces the philosophy of knowledge by a sociology (or archaeology) of truth. Truth, then, no longer refers to an intrinsic quality of sentences that corresponds to nature. True sentences do not fit into nature; rather, they fit into the cultural possession of particular groups of professional knowledge producers. Like deviance or mental retardation, truth is a contingent label (an epistemological complement) we attach to practices and sentences that are conventionally shared and accepted by particular social groups. Scientific knowledge is not a system of intrinsically accurate and culturally privileged objective representations of reality; it is the cultural property of professional groups that control access to "legitimate" scientific knowledge production: "The 'pure' universe of even the 'purest' science is a social field like any other, with its distributions of power and its monopolies, its struggles and strategies, interests and profits" (Bourdieu, 1975, p. 19).