
Showing papers on "Knowledge extraction published in 1989"


Book
01 Jan 1989
TL;DR: In this modern era, use of the internet should be maximized; one of its benefits is access to the online Building Large Knowledge-Based Systems book, as a window to the world, as many people suggest.
Abstract: In this modern era, use of the internet should be maximized. Indeed, the internet helps us greatly, not only with important matters but also with daily activities. Many people, at any level, can now use the internet, and internet connections can be enjoyed in many places. One of its benefits is access to the online Building Large Knowledge-Based Systems book, as a window to the world, as many people suggest.

680 citations



Journal ArticleDOI
TL;DR: The PUPS theory and its ACT∗ predecessor are computational embodiments of psychology's effort to develop a theory of the origins of knowledge, as discussed by the authors; the theory contains proposals for extraction of knowledge from the environment, a strength-based prioritization of knowledge, knowledge-compilation mechanisms for forming use-specific versions of knowledge, and induction mechanisms for extending knowledge.

104 citations


Book
01 Jan 1989

93 citations


Book ChapterDOI
03 Jan 1989
TL;DR: This work presents, in a unifying framework, the basic notions of IDDL (Integrated Data Description Language) for coding design knowledge in the IIICAD system, an intelligent, integrated, and interactive computer-aided design environment being developed at the Centre for Mathematics and Computer Science.
Abstract: We present in a unifying framework the basic notions of IDDL (Integrated Data Description Language) to code design knowledge in the IIICAD system. IIICAD is an intelligent, integrated, and interactive computer-aided design environment we are currently developing at the Centre for Mathematics and Computer Science.

55 citations


Journal ArticleDOI
TL;DR: A literature review of three aspects of knowledge base design, namely: knowledge integration, knowledge verification, and knowledge base partitioning is presented.

36 citations


Journal ArticleDOI
05 Jul 1989
TL;DR: The Knowledge/Data Model is an instance of a new class of models, called hyper-semantic data models, which facilitate the incorporation of knowledge in the form of heuristics, uncertainty, constraints and other Artificial Intelligence concepts, together with object-oriented concepts found in Semantic Data Models.
Abstract: This paper describes a new area of data modeling, a model in this new area, and the schema specification language for the model. The Knowledge/Data Model captures both knowledge semantics, as specified in Knowledge Based Systems, and data semantics, as represented by Semantic Data Models. The Knowledge/Data Model is an instance of a new class of models, called hyper-semantic data models, which facilitate the incorporation of knowledge in the form of heuristics, uncertainty, constraints and other Artificial Intelligence concepts, together with object-oriented concepts found in Semantic Data Models. The unified knowledge/data modeling features are provided via the constructs of the Knowledge/Data Language.

34 citations


Journal ArticleDOI
TL;DR: In this paper, the Dialog Manager subsystem of the AQUINAS knowledge acquisition workbench provides automated assistance to a knowledge engineer or domain expert in analysing the problem domain, classifying the problem tasks and sub-tasks, identifying problem-solving methods, proposing knowledge acquisition tools, and suggesting the use of specific strategies for knowledge acquisition provided in selected tools.
Abstract: One of the most troublesome and time-consuming activities in constructing a knowledge-based system is the elicitation and modelling of knowledge from the human expert about the problem domain. A major obstacle is that little guidance is available to the domain expert or knowledge engineer to help with (1) classifying the application task and identifying a problem-solving method, and (2) given the application task characteristics, selecting knowledge acquisition tools and strategies to be applied in creating and refining the knowledge base. Our objective is to provide automated assistance to a knowledge engineer or domain expert in analysing the problem domain, classifying the problem tasks and sub-tasks, identifying problem-solving methods, proposing knowledge acquisition tools, and suggesting the use of specific strategies for knowledge acquisition provided in selected tools. We describe such an implementation in the Dialog Manager subsystem of the AQUINAS knowledge acquisition workbench. The Dialog Manager provides advice to potential AQUINAS users as well as continuing guidance to users who select AQUINAS for knowledge base development.

32 citations


Proceedings Article
01 Sep 1989

32 citations


Book ChapterDOI
01 Dec 1989
TL;DR: In this article, a form of domain knowledge called views controls the search to identify non-superficial consequences of new information, such as contradicting existing knowledge or revealing a gap in the knowledge base.
Abstract: Adding new information to an existing knowledge base can have significant consequences. For example, new information might contradict existing knowledge or reveal a “gap”in the knowledge base. Most approaches to knowledge-base refinement either ignore these consequences or compute them exhaustively. Our approach, formalized in a task called knowledge integration, is to partially elaborate the consequences of new information. A form of domain knowledge called views controls the search to identify non-superficial consequences of new information. A prototype knowledge integration program has been implemented and demonstrated with a complex extension to a large knowledge base.

24 citations


Journal ArticleDOI
01 Dec 1989
TL;DR: The prototype presented in this paper aims at extending the capabilities of conventional software approaches by incorporating, in a knowledge-based part, knowledge concerning the suitability of certain data analysis methods; the application area is market research.
Abstract: Scientists from different research areas have developed a variety of models and methods to support data analysis problems in their specific fields of interest. However, the preconditions for proper usage of the corresponding software, which is often provided in the shape of software packages or individual programs, may cause a severe problem. These preconditions primarily demand knowledge both of theoretical and software-specific aspects of the algorithms used and of essentials of the area of application. The prototype presented in this paper aims at extending the capabilities of conventional software approaches by incorporating knowledge concerning the suitability of certain data analysis methods. The knowledge-based part is realized in PROLOG. The application area is market research. The prototype already comprises several data analysis procedures, especially from multidimensional scaling and cluster analysis, and data management facilities which can be invoked by the user based on the recommendations given by the system.

Journal ArticleDOI
TL;DR: A model is developed that distinguishes between data as directly observable facts, information as structured collections of data, and knowledge as methods of using information; the use of the model is developed for a semantic information retrieval system based on the concept of semantic categories.
Abstract: In this article we identify the need for a new theory of data, information, and knowledge. A model is developed that distinguishes between data as directly observable facts, information as structured collections of data, and knowledge as methods of using information. The model is intended to support a wide range of information systems. In the article we develop the use of the model for a semantic information retrieval system using the concept of semantic categories. The likely benefits of this are discussed, though as yet no detailed evaluation has been conducted.

Book ChapterDOI
01 Jan 1989
TL;DR: It is argued that the additional constraints imposed by the addition of an explanation facility can guide the creation of a knowledge base in a manner that reduces the need to subsequently re-structure the knowledge base as the system's functionality increases.
Abstract: In constructing an expert system, there are usually several ways to represent a given piece of knowledge regardless of the knowledge representation formalism used. Initially, all of them may appear to be equivalent, but as the system evolves, it often becomes apparent that some are better than others, leading to the need to revise representations. Such revisions can be very time-consuming and prone to error. In this paper, we argue that the additional constraints imposed by the addition of an explanation facility can guide the creation of a knowledge base in a manner that reduces the need to subsequently re-structure the knowledge base as the system's functionality increases. We describe criteria that may be applied after the knowledge base is constructed to reveal potential weaknesses as well as those that may be employed during knowledge base construction. Finally, we briefly describe an expert system shell we have constructed that embodies these guiding principles, the Explainable Expert Systems framework.

Journal ArticleDOI
TL;DR: The development and use of a Knowledge Dictionary, a tool to facilitate the documentation and maintenance of rule based expert systems, is discussed, which utilizes the relational data model to store the heuristics in a data form rather than in an executable code form.
Abstract: The development and use of a Knowledge Dictionary, a tool to facilitate the documentation and maintenance of rule based expert systems, is discussed. The Knowledge Dictionary may be used to record heuristics and their component parts, facts and rule actions, in such a way that a knowledge engineer, or end user, may determine the usage of any part of the knowledge, may easily add new parts, and may run the expert system to determine the effect of the maintenance. The Knowledge Dictionary utilizes the relational data model to store the heuristics in a data form rather than in an executable code form. Use of the relational model provides the knowledge engineer with all the power of relational calculus to interrogate the stored knowledge.
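The Knowledge Dictionary idea of storing heuristics as relational data rather than executable code can be sketched as follows. This is a minimal illustration, not the paper's actual schema: the table layout, rule names, and facts are all invented for the example.

```python
import sqlite3

# Hypothetical minimal schema: each rule is decomposed into its action and its
# component condition facts, stored as rows so that relational queries can
# answer questions such as "which rules use this fact?"
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE rules(rule_id TEXT PRIMARY KEY, action TEXT);
CREATE TABLE conditions(rule_id TEXT, fact TEXT);
""")
con.executemany("INSERT INTO rules VALUES (?, ?)", [
    ("R1", "diagnose=valve_fault"),
    ("R2", "diagnose=sensor_fault"),
])
con.executemany("INSERT INTO conditions VALUES (?, ?)", [
    ("R1", "pressure=high"), ("R1", "flow=low"),
    ("R2", "pressure=high"), ("R2", "reading=erratic"),
])

def rules_using(fact):
    """Interrogate the stored knowledge relationally: which rules test a fact?"""
    cur = con.execute("SELECT rule_id FROM conditions WHERE fact = ?", (fact,))
    return sorted(row[0] for row in cur)

print(rules_using("pressure=high"))  # → ['R1', 'R2']
```

Because the rules live in tables rather than code, maintenance questions (usage of a fact, impact of deleting a rule) reduce to ordinary relational queries, which is the point the abstract makes about relational calculus.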

Journal ArticleDOI
TL;DR: In this article, the authors give and analyse the structure of knowledge-based systems as it relates to the integration of acquisition, and derive design principles for knowledge based systems integrating acquisition with other aspects of their operation.
Abstract: Tools and techniques for knowledge acquisition for knowledge based systems need to be integrated with the overall system and not treated as separate components. The variety of sources of knowledge, representations and applications within the system, and user roles, makes such integration complex. The complexity raises many issues not generally thought of as part of knowledge acquisition yet fundamentally significant in extending and integrating acquisition tools. This paper gives and analysis of the structure of knowledge-based systems as it relates to the integration of acquisition. It is intended to form a “requirements specification” for knowledge based systems integrating acquisition with other aspects of their operation. The analysis results in a number of design principles derived systematically from consideration of the distinctions involved. The design principles are derived through an approach applicable to any knowledge structure as symmetric pairs, one relating to differenctiation and the other to integration.

Proceedings ArticleDOI
03 Jan 1989
TL;DR: PDM interactively aids users in defining a logic model of their planning problem and uses it to generate problem-specific inferences and as input to a model building component that mechanically constructs the algebraic schema of the appropriate LP model.
Abstract: A description is given of PDM, a knowledge-based tool designed to help nonexpert users construct linear programming (LP) models of production, distribution, and inventory (PDI) planning problems. PDM interactively aids users in defining a logic model of their planning problem and uses it to generate problem-specific inferences and as input to a model building component that mechanically constructs the algebraic schema of the appropriate LP model. Interesting features of PDM include the application of domain knowledge to guide user interaction, the use of syntactic knowledge of the problem representation language to effect model revision, and the use of a small set of primitive modeling rules in model construction.
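The step PDM automates, mechanically deriving an algebraic LP schema from a logical problem description, can be illustrated with a toy generator. This is not PDM itself; the problem structure (plants, markets), the variable naming, and the constraint templates are all invented for the sketch.

```python
# A made-up logical description of a tiny production/distribution problem.
problem = {"plants": ["P1", "P2"], "markets": ["M1"]}

def build_schema(p):
    """Mechanically emit the algebraic schema of a transportation-style LP:
    one shipment variable per plant-market pair, capacity and demand
    constraints, and a cost-minimizing objective (all symbolic strings)."""
    variables = [f"x_{i}_{j}" for i in p["plants"] for j in p["markets"]]
    constraints = [f"sum_j x_{i}_j <= capacity[{i}]" for i in p["plants"]]
    constraints += [f"sum_i x_i_{j} >= demand[{j}]" for j in p["markets"]]
    objective = "minimize sum_ij cost[i][j] * x_i_j"
    return {"variables": variables, "constraints": constraints,
            "objective": objective}

schema = build_schema(problem)
print(schema["variables"])  # → ['x_P1_M1', 'x_P2_M1']
```

The point of the sketch is the division of labour the abstract describes: the user supplies only the logical structure of the planning problem, and a small set of modeling rules expands it into the algebraic form.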

Proceedings ArticleDOI
23 Oct 1989
TL;DR: A technique based on modeling a knowledge base using a predicate/transition (Pr/T) net representation that can detect major types of inconsistencies and incompleteness is proposed.
Abstract: A technique for detecting inconsistencies in, and incompleteness of, a knowledge base is proposed. The technique is based on modeling a knowledge base using a predicate/transition (Pr/T) net representation. Inconsistency and incompleteness patterns in a knowledge base are then defined with respect to the Pr/T net model and are identified by using a syntactic pattern recognition method. This technique can be included as part of a knowledge acquisition process in any rule-based system. It can detect major types of inconsistencies and incompleteness. The use of the technique can be easily automated. An example of its use is presented.
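The kinds of patterns the paper detects via Pr/T nets can be illustrated, in a much-simplified propositional form, with set comparisons over rule bodies and heads. This sketch does not use Petri nets at all; the rule names, facts, and the "not_" negation convention are invented for the example.

```python
# Each rule maps a name to (set of condition facts, conclusion).
rules = {
    "R1": ({"fever", "cough"}, "flu"),
    "R2": ({"fever", "cough"}, "flu"),        # redundant: same body, same head
    "R3": ({"fever", "cough"}, "not_flu"),    # conflict: same body, negated head
    "R4": ({"rash"}, "measles"),
}

def negates(h1, h2):
    """Invented convention: 'not_X' is the negation of 'X'."""
    return h1 == "not_" + h2 or h2 == "not_" + h1

def redundant(rules):
    """Pairs of rules with identical bodies and identical conclusions."""
    names = list(rules)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if rules[a] == rules[b]]

def conflicting(rules):
    """Pairs of rules with identical bodies but contradictory conclusions."""
    names = list(rules)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if rules[a][0] == rules[b][0] and negates(rules[a][1], rules[b][1])]

print(redundant(rules))    # → [('R1', 'R2')]
print(conflicting(rules))  # → [('R1', 'R3'), ('R2', 'R3')]
```

The Pr/T-net formulation in the paper generalizes this to first-order rules with variables, where such patterns are recognized syntactically on the net rather than by direct set comparison.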

Proceedings ArticleDOI
23 Oct 1989
TL;DR: The authors describe structured matching informally and give a formal definition of the task and strategy of structured matching, showing how HYPER corresponds to the formal definition.
Abstract: The authors describe structured matching informally and give a formal definition of the task and strategy of structured matching. Structured matching integrates the knowledge and control for making a decision within a hierarchical structure. Structured matching has several desirable characteristics: it is qualitative and tractable, it facilitates knowledge acquisition and explanation, and it explicitly represents decision-making knowledge. The authors describe how structured matching is implemented in the HYPER tool, showing how HYPER corresponds to the formal definition. HYPER is a tool for building problem-solving modules that measure the fit of a hypothesis to a situation. Structured matching is a generalization of hypothesis matching. The authors present examples of structured matching in several knowledge-based systems.

Journal ArticleDOI
TL;DR: The knowledge acquisition phase of knowledge engineering is related to the phenomenology of domain expertise and the relationship between knowledge extraction and meta-knowledge (knowledge about what the authors know and how they reason) is stressed.
Abstract: The knowledge acquisition phase of knowledge engineering is related to the phenomenology of domain expertise. The relationship between knowledge extraction and meta-knowledge (knowledge about what we know and how we reason) is stressed. Two knowledge extraction techniques for hard-to-trace expert reasoning are introduced. The nature of geographical knowledge and expertise is examined and related to the process of building spatial theory.

Book ChapterDOI
01 Jan 1989
TL;DR: This chapter addresses whether one designs and implements an expert system in the same manner as traditional software or whether other methods are involved, laying the foundation for more complex sample systems to come.
Abstract: The Ticket Information System illustrated a number of different components and functions of an expert system: the knowledge base, containing the existing knowledge in a declarative form; the goal-oriented processing and case-oriented supplementing of this knowledge according to the control structure described; and finally the explanatory element, which explicated responses and queries on the part of the system on demand, based on an internal, running protocol of the dialog being conducted. We are certain that this little system has left the reader with any number of questions. For instance, as to the system architecture: Where or how are the various data structures and functions localized, and how do they interact with one another? Or with respect to the representation and processing of the knowledge built into the system: Are there theoretical or practical “laws” or circumstances dictating the form of knowledge representation or the strategy for knowledge processing selected, or could one have just as well realized these quite differently? And finally, perhaps a question of technique: Does one design and implement an expert system in the same manner as traditional software, or are other methods involved? This chapter will be dealing with these and related questions, laying the foundation for more complex sample systems to come.

Book ChapterDOI
TL;DR: This paper gives a detailed description of two inductive, similarity-based methods for acquiring task knowledge from the dialogue history which are general in that they may be used to acquire procedural task knowledge in different domains.
Abstract: Research in the field of Human-Computer Interaction has brought forth several methods to formally model procedural knowledge of computer users. Such models have been mainly used for analytic purposes. It is shown that similar models can also serve as knowledge bases for intelligent user support systems, particularly plan-recognizing help systems. This is demonstrated by a prototypical application (FINIX) which provides intelligent help for UNIX file handling operations. As for other knowledge-based systems, knowledge acquisition is crucial in order to make this approach practically useful. This paper gives a detailed description of two inductive, similarity-based methods for acquiring task knowledge from the dialogue history. The first approach is semi-automatic and relies on interactions with a human referee, whereas the second is completely automated based on certain heuristics. The methods have been implemented and successfully tested in the FINIX environment. They are general in that they may be used to acquire procedural task knowledge in different domains. Both methods are analyzed according to the underlying machine learning principles.
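One way to picture the fully automated variant, inducing candidate task knowledge from the dialogue history by heuristics, is frequency counting over recurring command sequences. This is only an illustrative sketch, not the FINIX method; the command log and the support threshold are invented.

```python
from collections import Counter

# A made-up UNIX dialogue history: the user repeatedly copies then removes
# files after listing a directory.
history = ["ls", "cp", "rm", "ls", "cp", "rm", "cat", "ls", "cp"]

def frequent_pairs(log, min_support=2):
    """Induce candidate task patterns: command bigrams that recur at least
    min_support times in the history."""
    counts = Counter(zip(log, log[1:]))
    return {pair for pair, n in counts.items() if n >= min_support}

print(sorted(frequent_pairs(history)))  # → [('cp', 'rm'), ('ls', 'cp')]
```

A plan-recognizing help system could treat such recurring bigrams as hypotheses about the user's tasks, which in the semi-automatic variant would then be confirmed or rejected by a human referee.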

Journal ArticleDOI
TL;DR: This paper describes a proposal for building strategic knowledge from this representation using a combination of analysis of the existing knowledge base and acquisition of domain-specific control knowledge for ordering subtasks.

Proceedings ArticleDOI
21 Feb 1989
TL;DR: A knowledge acquisition tool (KNACQ) is described that has sharply decreased the effort in building knowledge bases and is used by both the understanding components and the generation components of Janus.
Abstract: Although natural language technology has achieved a high degree of domain independence through separating domain-independent modules from domain-dependent knowledge bases, portability, as measured by effort to move from one application to another, is still a problem. Here we describe a knowledge acquisition tool (KNACQ) that has sharply decreased our effort in building knowledge bases. The knowledge bases acquired with KNACQ are used by both the understanding components and the generation components of Janus.

Patent
27 Jul 1989
TL;DR: In this article, the history patterns of various measurement information are learnt from a pattern file by a neural processor 70 in a learning process and the operation is supported based on a learnt result in a supporting process.
Abstract: PURPOSE: To support an operation at a normal time or a non-normal/abnormal time by effectively using past history. CONSTITUTION: The history patterns of various measurement information are learnt from a pattern file by a neural processor 70 in a learning process and the operation is supported based on a learnt result in a supporting process. Knowledge and a knowledge candidate are extracted from the learnt result in a knowledge extraction process, and whether the knowledge candidate is appropriate or not is diagnosed in a knowledge diagnosis process. Then, the operation is supported based on a knowledge group obtained in this learning and that which is previously inputted in an operation supporting process. A system processor 42 executes these processes, operates a keyboard 44 and a display 46 as necessary and accesses to a knowledge base 60A and a knowledge candidate base 60B. Thus, a highly precise operation supporting system can comparatively easily be constituted. COPYRIGHT: (C)1991,JPO&Japio

Journal ArticleDOI
TL;DR: This paper reports on the development of a knowledge- based query system, Expert-MCA, developed by coupling the techniques of data base management systems, natural language processing, and knowledge-based expert systems for use in the U.S. Army military construction program (MCA).
Abstract: The objectives of project control systems go beyond documentation to recognition of problems and evaluation of their causes. Through the use of broad information systems, a construction manager today is able to assemble a wide variety of data about his projects or programs. Retrieval, analysis, and interpretation of the meaning of these data, however, usually require the user to have detailed knowledge about the structure and content of the data base and the use of a data retrieval language, and have further programming skills to allow the computer to perform analyses. The work reported in this paper has three goals: to ease retrieval with near-natural language query capabilities; to acquire and accumulate data access and analysis knowledge; and to exercise such knowledge to find patterns in project data. This paper reports on the development of a knowledge-based query system, Expert-MCA, developed by coupling the techniques of data base management systems, natural language processing, and knowledge-based expert systems for use in the U.S. Army military construction program (MCA). The system architecture is presented, followed by a description of how knowledge is represented and processed in answering English-like queries. Finally, examples are used to explain its capabilities in more detail.


Journal ArticleDOI
TL;DR: The application of a neural net to learn, by example, the design of control configurations for distillation columns is shown to avoid the difficulties associated with knowledge acquisition.

Proceedings ArticleDOI
20 Sep 1989
TL;DR: A description is given of WharfRat, a knowledge base of data type implementations which employs case-based reasoning as its primary retrieval mechanism, and the process by which two case descriptions are compared.
Abstract: A description is given of WharfRat, a knowledge base of data type implementations which employs case-based reasoning as its primary retrieval mechanism. Given a description of an abstract data type, it retrieves the most similar data type implementation in the knowledge base. The focus of the study is the process by which two case descriptions are compared. Similarity between data types is modeled by a fuzzy relation. A set of similarity matching rules has been developed and implemented. The system employs a general, graph-based data model in which object types are organized in a specialization network. Abstract data representations are built using the constructs of the general data model. This system is the first step toward developing a complete programming-by-similarity system.
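The retrieval idea, scoring each stored case against a query description and returning the most similar one, can be sketched with a graded similarity measure on feature sets. This is a loose illustration only: WharfRat uses a fuzzy relation over a graph-based data model, whereas here the types are flat feature sets and all feature names are invented.

```python
def similarity(a, b):
    """Jaccard-style degree of match in [0, 1] between two feature sets,
    standing in for the fuzzy similarity relation."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Invented feature-set descriptions of abstract data types.
stack = {"ordered", "lifo", "homogeneous"}
cases = {
    "queue": {"ordered", "fifo", "homogeneous"},
    "bag":   {"homogeneous"},
}

# Case-based retrieval: return the stored case most similar to the query.
best = max(cases, key=lambda name: similarity(stack, cases[name]))
print(best)  # → queue
```

The graded (rather than boolean) score is what lets the system return a nearest implementation even when no stored case matches the query exactly.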

Journal ArticleDOI
TL;DR: The integration of machine learning into knowledge acquisition is still not well achieved; this paper presents the solutions implemented in the BLIP system, currently under development at the Technical University of Berlin.
Abstract: Ever since knowledge acquisition systems have been applied to real-world problems, the issue of integration has become important (Gaines 88). Most often, the following types of integration are considered:
• Systems are to be integrated: the knowledge acquisition system is more closely linked with an expert system shell (Eshelman et al. 87), or a database system is linked to the knowledge acquisition system in order to read data of a domain.
• Various sources of knowledge are to be integrated: text files, data files, statistics, rules and facts all contain knowledge about a domain and should be handled by the same system (Gaines 88).
• The represented knowledge of various experts is to be integrated either into one consistent domain model or into a model which shows the conflicting views of the domain (Shaw 88).
• Diverse knowledge sources with their respective representations are to be integrated, e.g. a taxonomy of domain concepts, possible values of attributes, well-formedness conditions of facts and rules.
• Diverse tasks of knowledge acquisition are to be integrated: declaring epistemic primitives, defining concepts, adding facts and rules, deducing new facts from rules, dealing with inconsistencies, changing the terminology, changing facts and rules, grouping facts or rules together, presenting possible operations, showing views of the represented domain model, and indicating consequences of an operation to the user. A particular topic of task integration is the integration of machine learning into a knowledge acquisition system.
In this paper, we discuss only the last two integration problems and present the solutions we implemented in the BLIP system, currently under development at the Technical University of Berlin. First, we give a short overview of knowledge acquisition tasks and indicate which tasks some prototype systems can handle. As we will see, the integration of machine learning into knowledge acquisition is still not well achieved.
Second, we discuss the integration of tasks. In particular, the integration of machine learning into knowledge acquisition is discussed in some detail. The architecture of BLIP illustrates the paradigm of cooperative balanced modeling of both system and user. Third, we investigate the integration of knowledge sources. The integrity between diverse knowledge sources, the propagation of consequences of an operation to all relevant knowledge sources, and the interpretability of a component's results by other components are the three issues there. In BLIP, the knowledge needed for the learning task is also integrated into the domain knowledge. This gives BLIP the power of closed-loop learning (Michalski 87). Fourth, we describe the integration of BLIP's learning into knowledge revision in more detail.

Journal Article
TL;DR: The knowledge acquisition subsystem used to acquire knowledge in the CHECK system, a two-level diagnostic architecture combining heuristic and causal knowledge (reasoning), is presented, together with NEED, a graphical tool to acquire causal knowledge.
Abstract: We present the knowledge acquisition subsystem used to acquire knowledge in the CHECK system, a two-level diagnostic architecture combining heuristic and causal knowledge (reasoning). We also present NEED, a graphical tool to acquire causal knowledge.