
Showing papers on "Knowledge extraction published in 1986"


Book
01 Jan 1986

153 citations


Proceedings Article
11 Aug 1986
TL;DR: A methodology, called ontological analysis, which provides this level of analysis and consists of an analysis tool and its principles of use that result in a formal specification of the knowledge elements in a task domain.
Abstract: Knowledge engineering suffers from a lack of formal tools for understanding domains of interest. Current practice relies on an intuitive, informal approach for collecting expert knowledge and formulating it into a representation scheme adequate for symbolic processing. Implicit in this process, the knowledge engineer formulates a model of the domain, and creates formal data structures (knowledge base) and procedures (inference engine) to solve the task at hand. Newell (1982) has proposed that there should be a knowledge level analysis to aid the development of AI systems in general and knowledge-based expert systems in particular. This paper describes a methodology, called ontological analysis, which provides this level of analysis. The methodology consists of an analysis tool and its principles of use that result in a formal specification of the knowledge elements in a task domain.

75 citations


Patent
06 Mar 1986
TL;DR: In this patent, a knowledge base processor is called by an application program to access a knowledge base and to govern the execution or interpretation of the knowledge base to find the values of selected objects or expressions defined in the knowledge base.
Abstract: A knowledge base processor is callable by an application program to access a knowledge base and to govern the execution or interpretation of the knowledge base to find the values of selected objects or expressions defined in the knowledge base. The application program is written in a conventional computer language which specifies control by the ordering of program steps. The application program provides a user interface for input/output and provides top-level control for calling the knowledge base processor to find values for goal expressions. During its search for the values of goal expressions, the knowledge base processor calls the application program to determine values of expressions which are not concluded by the knowledge base, and to signal important events during the execution of the knowledge base. Preferably the knowledge base processor and the application program each include a library of subroutines which are linked-loaded to provide a complete knowledge system for a specific application or task. Therefore, the knowledge base processor provides the essential functions for symbolic reasoning, and establishes a framework for building the knowledge system which permits application program developers to exploit the best available conventional data processing capabilities. The application programmer is free to exercise his or her knowledge and skill regarding the use of conventional programming languages and their support facilities such as utility libraries, optimizing compilers and user interfaces.
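The call pattern claimed here is easy to misread, so a minimal sketch may help: the application keeps top-level control and calls the processor for a goal, and the processor calls back into the application for anything the knowledge base cannot conclude. All names below (KnowledgeBaseProcessor, ask_callback, event_callback) are our illustration, not the patent's.

```python
# A minimal sketch (ours, not the patent's) of the call pattern described
# above: the application program calls the knowledge base processor to find
# goal values, and the processor calls back into the application both for
# values the knowledge base cannot conclude and to signal important events.

class KnowledgeBaseProcessor:
    def __init__(self, rules, ask_callback, event_callback):
        self.rules = rules            # {goal: (premises, value)} -- a toy KB
        self.ask = ask_callback       # app supplies values the KB can't conclude
        self.signal = event_callback  # app is notified of important events
        self.known = {}

    def find_value(self, goal):
        if goal in self.known:
            return self.known[goal]
        if goal in self.rules:
            premises, value = self.rules[goal]
            self.signal(f"evaluating rule for {goal}")
            if all(self.find_value(p) for p in premises):
                self.known[goal] = value
                return value
        # Not concluded by the knowledge base: call the application back.
        self.known[goal] = self.ask(goal)
        return self.known[goal]

# The application program keeps top-level control and the user interface.
kbp = KnowledgeBaseProcessor(
    rules={"approve_loan": (("income_ok", "credit_ok"), True)},
    ask_callback=lambda expr: expr in ("income_ok", "credit_ok"),  # stand-in for user I/O
    event_callback=lambda msg: print("event:", msg),
)
print(kbp.find_value("approve_loan"))  # -> True
```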

74 citations


Journal ArticleDOI
TL;DR: The development of ExperTAX℠, an expert system designed as part of the service rendered by the audit and tax practices of the accounting firm of Coopers & Lybrand in the corporate tax accrual and planning function, is described.
Abstract: This paper describes the development of ExperTAX℠, an expert system designed as part of the service rendered by the audit and tax practices of the accounting firm of Coopers & Lybrand in the corporate tax accrual and planning function. The current operating environment is described and the expert-system-based solution is presented. The knowledge engineering process is described in detail and novel techniques for knowledge extraction are presented. The resulting system and its knowledge base are also presented, together with a discussion of detailed knowledge acquisition and ongoing knowledge base maintenance facilities. © 1986 Wiley Periodicals, Inc. (Harry Schatz is a Senior Consultant in the Decision Support Group of Coopers & Lybrand. He has more than twelve years' experience in the design and implementation of decision support systems, including database management systems, modelling and simulation systems, business computer graphics systems and knowledge-based expert systems. He formerly worked for EPS, Inc., a decision support software company, and Comshare, Inc., a computer services firm. He holds a B.S. in Computer Science from the University of Michigan and is a member of the IEEE Computer Society.)

55 citations


Journal ArticleDOI
TL;DR: A new style of information processing, requirements for knowledge representation, a knowledge representation satisfying these requirements, a knowledge processing system designed on this basis, and a new style of problem solving using this system are discussed.
Abstract: A new generation computer is expected to be the knowledge processing system of the future. However, many aspects are yet unknown regarding this technology, and a number of fundamental concepts, directly concerning knowledge processing system design need investigation, such as knowledge, data, inference, communication, information management, learning, and human interface.

49 citations


Book ChapterDOI
22 Sep 1986
TL;DR: This paper gives a short introduction to some of the concepts of the MESON system, including definitions of data types, operations on data and knowledge base consistency, and reports on criticism of this approach from the standpoint of database research.
Abstract: The MESON project aims at a unified view of data- and knowledge-base management systems (DBMS/KBMS). A KL-ONE based knowledge representation system has been implemented and is now evaluated from a terminologically independent viewpoint. The main topic of this evaluation can be summarized under the term "data model", including definitions of data types, operations on data, and knowledge base consistency. This paper gives a short introduction to some of the concepts of the MESON system and reports on criticism of this approach from the standpoint of database research.

36 citations


Journal ArticleDOI
TL;DR: The Parametric Interpretation Expert System is a knowledge system for interpreting the parametric test data collected at the end of complex semiconductor fabrication processes, which reflects the way fabrication engineers reason causally about semiconductor failures.
Abstract: The Parametric Interpretation Expert System (PIES) is a knowledge system for interpreting the parametric test data collected at the end of complex semiconductor fabrication processes. The system transforms hundreds of measurements into a concise statement of the overall health of the process and the nature and probable cause of any anomalies. A key feature of PIES is the structure of the knowledge base, which reflects the way fabrication engineers reason causally about semiconductor failures. This structure permits fabrication engineers to do their own knowledge engineering, to build the knowledge base, and then to maintain it to reflect process modifications and operating experience. The approach appears applicable to other process control and diagnosis tasks.
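As a rough illustration of the causally structured, engineer-maintainable knowledge base the abstract describes (the fault names, parameters, and symptom labels below are invented for the sketch; PIES's knowledge base is far richer):

```python
# A rough sketch of a causally structured knowledge base: each entry links a
# hypothesized fabrication fault to the parametric symptoms it would cause.
# Fault names and parameters are invented for illustration.

CAUSAL_KB = {
    "gate oxide too thin": {"threshold_voltage": "low", "oxide_capacitance": "high"},
    "over-etched poly":    {"channel_length": "short", "drive_current": "high"},
}

def diagnose(observations):
    """Rank candidate faults by the fraction of predicted symptoms observed."""
    scores = {
        fault: sum(observations.get(p) == v for p, v in symptoms.items()) / len(symptoms)
        for fault, symptoms in CAUSAL_KB.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(diagnose({"threshold_voltage": "low", "oxide_capacitance": "high"}))
# -> [('gate oxide too thin', 1.0), ('over-etched poly', 0.0)]
```

Because the knowledge lives in a plain fault-to-symptom table rather than in code, process engineers could edit it directly, which is the maintainability point the abstract makes.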

34 citations



Journal ArticleDOI
Kiyoshi Niwa1
01 May 1986
TL;DR: A new approach to assisting problem solving in ill-structured management domains, i.e., a knowledge-based human and computer cooperative system, which includes a knowledge base that stores experimental knowledge; a computer inference function which uses its knowledge base logically; and human association ability which uses the knowledge base intuitively.
Abstract: A new approach to assisting problem solving in ill-structured management domains, i.e., a knowledge-based human and computer cooperative system, is proposed. The system includes 1) a knowledge base that stores experimental knowledge; 2) a computer inference function which uses the knowledge base logically; and 3) human association ability which uses the knowledge base intuitively. The emphasis is on the cooperation of 2) and 3). To realize this cooperation, a guide function for human association is devised and incorporated into the computer. An example system is developed for risk management in large-scale thermal power construction projects. This system enables project managers to make maximum use of an experimental knowledge base so as to control their projects effectively. The proposed approach represents a frontier in both knowledge engineering and human-computer interaction.

34 citations


Proceedings Article
01 Jan 1986

31 citations


Proceedings ArticleDOI
05 Feb 1986
TL;DR: A simple network model, which allows the representation of types, is-a relationships, and disjointness constraints, is considered, and the concepts of consistency and redundancy are introduced.
Abstract: In the spirit of integrating data base and artificial intelligence techniques, a number of concepts widely used in relational data base theory are introduced in a knowledge representation scheme. A simple network model, which allows the representation of types, is-a relationships and disjointness constraints is considered. The concepts of consistency and redundancy are introduced and characterized by means of implication of constraints and systems of inference rules, and by means of graph theoretic concepts.
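A small sketch may make the two concepts concrete (the type names and constraints below are invented): consistency fails when some type specializes both members of a disjoint pair, and an is-a edge is redundant when transitivity through another edge already implies it.

```python
# A small sketch (type names invented) of the network model described above:
# is-a edges plus disjointness constraints, with consistency meaning no type
# specializes two disjoint types, and redundancy meaning an is-a edge is
# already implied by transitivity through another edge.

ISA = {("student", "person"), ("working_student", "student"),
       ("working_student", "person"),          # redundant: implied via "student"
       ("department", "org_unit")}
DISJOINT = {frozenset({"person", "org_unit"})}

def ancestors(t):
    out, frontier = set(), [t]
    while frontier:
        x = frontier.pop()
        for a, b in ISA:
            if a == x and b not in out:
                out.add(b)
                frontier.append(b)
    return out

def inconsistent_types():
    return [t for t in {a for a, _ in ISA}
            if any(d <= ancestors(t) | {t} for d in DISJOINT)]

def redundant_edges():
    return [(a, b) for a, b in ISA
            if any(b in ancestors(q) for p, q in ISA if p == a and q != b)]

print(inconsistent_types())  # -> [] (no type falls under two disjoint types)
print(redundant_edges())     # -> [('working_student', 'person')]
```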

Proceedings ArticleDOI
01 Jan 1986
TL;DR: It is suggested that true common knowledge of higher levels can be implemented as eager common knowledge on lower levels and that the distinction between these two kinds of common knowledge can be associated with the level of abstraction.
Abstract: Explicit use of knowledge expressions in the design of distributed algorithms is explored. A non-trivial case study is carried through, illustrating the facilities that a design language could have for setting and deleting the knowledge that the processes possess about the global state and about the knowledge of other processes. No implicit capabilities for logical reasoning are assumed. A language basis is used that allows common knowledge not only by an eager protocol but also in the true sense. The observation is made that the distinction between these two kinds of common knowledge can be associated with the level of abstraction: true common knowledge on higher levels can be implemented as eager common knowledge on lower levels. A knowledge-motivated abstraction tool is therefore suggested to be useful in supporting stepwise refinement of distributed algorithms.

Journal ArticleDOI
01 May 1986
TL;DR: The relational knowledge base architecture the authors propose consists of a number of unification engines, several disk systems, a control processor, and a multiport page-memory to support a variety of knowledge representations.
Abstract: A relational knowledge base model and an architecture which manipulates the model are presented. An item stored in the relational knowledge base is called a term. A unification operation on terms in the relational knowledge base is used as the retrieval mechanism. The relational knowledge base architecture we propose consists of a number of unification engines, several disk systems, a control processor, and a multiport page-memory. The system has a knowledge compiler to support a variety of knowledge representations.
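A minimal sketch of unification as the retrieval mechanism, under our own encoding (terms as nested tuples, strings beginning with "?" as variables); the paper performs this with dedicated unification engines and a multiport page-memory, not in software:

```python
# A minimal sketch of retrieval by unification over a term relation, under our
# own encoding: terms are nested tuples, and strings beginning with "?" are
# variables. This toy unifier omits the occurs check for brevity.

def unify(x, y, env):
    if isinstance(x, str) and x.startswith("?"):
        return unify(env[x], y, env) if x in env else {**env, x: y}
    if isinstance(y, str) and y.startswith("?"):
        return unify(y, x, env)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            env = unify(xi, yi, env)
            if env is None:
                return None
        return env
    return env if x == y else None

TERM_RELATION = [
    ("likes", "mary", "wine"),
    ("likes", "john", "?anything"),
]

def retrieve(query):
    """Unification-restriction: keep the tuples that unify with the query."""
    return [(t, e) for t in TERM_RELATION if (e := unify(query, t, {})) is not None]

print(retrieve(("likes", "?who", "wine")))
# -> both tuples, each paired with its own variable bindings
```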

Proceedings ArticleDOI
01 Dec 1986
TL;DR: This work has implemented programs that use artificial intelligence techniques to prepare high-level, intelligent summaries of databases, and that use empirical databases in turn, in combination with statistical and AI methods, to generate new domain knowledge.
Abstract: The work described here addresses two problems: information overload of database users, and knowledge acquisition for use in AI systems. We have implemented programs that use artificial intelligence techniques to prepare high-level, intelligent summaries of databases, and that use empirical databases in turn, in combination with statistical and AI methods, to generate new domain knowledge. Both programs are examples of the acquisition of knowledge from data: the Summarization Module fuses large amounts of data succinctly, and the Discovery Module extracts new knowledge present implicitly in data. We describe the implementation of our programs and outline planned extensions which combine both approaches. This work is distinguished from current knowledge engineering approaches in that we prime the system with expert knowledge, and then use factual data to learn more about the domain.
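One simple statistical route from data to candidate domain rules, in the spirit of the Discovery Module described above (the toy records, attributes, and the 0.9 confidence threshold are our invention):

```python
# A sketch of discovery from an empirical table: scan for attribute-value
# implications that hold with high confidence. The records and threshold are
# invented; the paper's modules are considerably more elaborate.

from itertools import permutations

RECORDS = [
    {"species": "setosa",    "petal": "short", "habitat": "alpine"},
    {"species": "setosa",    "petal": "short", "habitat": "coastal"},
    {"species": "virginica", "petal": "long",  "habitat": "coastal"},
]

def discover(records, min_conf=0.9):
    rules = []
    for a, b in permutations(records[0].keys(), 2):
        for va in {r[a] for r in records}:
            matching = [r for r in records if r[a] == va]
            for vb in {r[b] for r in matching}:
                conf = sum(r[b] == vb for r in matching) / len(matching)
                if conf >= min_conf:
                    rules.append((f"{a}={va}", f"{b}={vb}", conf))
    return rules

for lhs, rhs, conf in discover(RECORDS):
    print(f"if {lhs} then {rhs}  (confidence {conf:.2f})")
```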

Journal ArticleDOI
01 Jun 1986
TL;DR: In this paper, a tutorial highlighting the strengths and weaknesses of several popular knowledge representation techniques is presented, and guidelines for the application of artificial intelligence to management-domain problems are proposed.
Abstract: Guidelines for the application of artificial intelligence to management-domain problems are proposed. A tutorial highlighting the strengths and weaknesses of several popular knowledge representation techniques is presented. Matching the strengths of these techniques with the requirements of different management decision-making domains provides a basis for the proposed guidelines. Management areas for which current approaches to knowledge representation provide little support are also discussed.

Book ChapterDOI
09 Jul 1986
TL;DR: This research note summarizes a technique which was employed in a specific context: knowledge extraction, by the RX system, from a copy of an existing clinical database, and speculates about the generalization of the approach.
Abstract: A variety of types of linkages from knowledge bases to databases have been proposed, and a few have been implemented [MW84]. In this research note, we summarize a technique which was employed in a specific context: knowledge extraction from a copy of an existing clinical database. The knowledge base is also used to drive the extracting process. RX builds causal models in its domain to generate input for statistical hypothesis verification. We distinguish two information types: knowledge and data, and recognize four types of knowledge: categorical, definitional, causal (represented in frames), and operational, represented by rules. Based on our experience, we speculate about the generalization of the approach.

Book ChapterDOI
01 Jun 1986
TL;DR: A large medical knowledge-base, which was a source of more than 5000 training examples, was compressed to 3% of its original size, and most of the extracted rules turned out to be quite meaningful from the medical point of view.
Abstract: We present a method for knowledge-base compression, restricted to rules of a specific form, which uses a learning-from-examples facility. A large medical knowledge base, which was a source of more than 5000 training examples, was compressed to 3% of its original size. Most of the extracted rules turned out to be quite meaningful from the medical point of view.
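A minimal sketch of the compression idea, under our own simplifications (the greedy single-condition covering heuristic and the toy examples are ours; the paper's method is specific to rules of a restricted form): examples generated from a verbose knowledge base are re-covered by a smaller rule set.

```python
# Compression via learning from examples: take (attributes -> conclusion)
# examples and induce a smaller set of covering rules.

EXAMPLES = [
    ({"fever": "high", "rash": "yes"}, "measles"),
    ({"fever": "high", "rash": "no"},  "flu"),
    ({"fever": "low",  "rash": "no"},  "cold"),
    ({"fever": "low",  "rash": "yes"}, "measles"),
]

def induce(examples):
    """Greedily pick single conditions that are pure for one conclusion."""
    rules, remaining = [], list(examples)
    while remaining:
        best = None
        for a in remaining[0][0]:
            for v in {x[a] for x, _ in remaining}:
                covered = [(x, c) for x, c in remaining if x[a] == v]
                classes = {c for _, c in covered}
                if len(classes) == 1 and (best is None or len(covered) > len(best[2])):
                    best = (a, v, covered, classes.pop())
        if best is None:
            break  # no pure single condition; a real learner would specialize
        a, v, covered, conclusion = best
        rules.append((f"{a}={v}", conclusion))
        remaining = [e for e in remaining if e not in covered]
    return rules

print(induce(EXAMPLES))
# e.g. [('rash=yes', 'measles'), ('fever=high', 'flu'), ('fever=low', 'cold')]
```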



Proceedings ArticleDOI
Paul S. Jacobs1
25 Aug 1986
TL;DR: The Ace framework applies knowledge representation fundamentals to the task of encoding knowledge about language, and permits specialized linguistic knowledge to derive partially from more abstract knowledge, facilitating the use of abstractions in generating specialized phrases.
Abstract: The development of natural language interfaces to Artificial Intelligence systems is dependent on the representation of knowledge. A major impediment to building such systems has been the difficulty in adding sufficient linguistic and conceptual knowledge to extend and adapt their capabilities. This difficulty has been apparent in systems which perform the task of language production, i.e., the generation of natural language output to satisfy the communicative requirements of a system. The Ace framework applies knowledge representation fundamentals to the task of encoding knowledge about language. Within this framework, linguistic and conceptual knowledge are organized into hierarchies, and structured associations are used to join knowledge structures that are metaphorically or referentially related. These structured associations permit specialized linguistic knowledge to derive partially from more abstract knowledge, facilitating the use of abstractions in generating specialized phrases. This organization, used by a generator called KING (Knowledge INtensive Generator), promotes the extensibility and adaptability of the generation system.
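A toy sketch of the inheritance idea, not of Ace or KING themselves (the concepts, hierarchy, and templates below are invented): phrasings attach to abstract concepts, and a specialized concept inherits a phrasing unless it supplies its own.

```python
# Specialized linguistic knowledge deriving partially from abstract knowledge:
# a concept inherits the most specific phrasing found above it in the
# hierarchy, while contributing its own lexical choice (the verb).

HIERARCHY = {"commercial-transaction": "event",
             "buying": "commercial-transaction"}
PHRASING = {"event": "{agent} {verb}s",
            "commercial-transaction": "{agent} {verb}s {object} for {price}"}
VERB = {"buying": "buy", "commercial-transaction": "transact"}

def generate(concept, **roles):
    c = concept
    while c not in PHRASING:   # inherit the most specific phrasing available
        c = HIERARCHY[c]
    v = concept
    while v not in VERB:
        v = HIERARCHY[v]
    return PHRASING[c].format(verb=VERB[v], **roles)

print(generate("buying", agent="Kim", object="a book", price="$5"))
# -> "Kim buys a book for $5"
```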

Proceedings Article
25 Aug 1986
TL;DR: This paper describes a method for retrieval-by-unification (RBU) operations, especially unification-join, on a relational knowledge base, and proposes a method which involves ordering terms and, as a result, omitting some pairs from this processing.
Abstract: This paper describes a method for retrieval-by-unification (RBU) operations, especially unification-join, on a relational knowledge base. The relational knowledge base is a conceptual model for a knowledge base. In this model knowledge is represented by term relations. Terms in the term relations are retrieved with operations called RBUs (i.e., unification-join and unification-restriction). To perform unification-join in the simplest manner, all possible pairs of tuples in term relations should be checked to see whether each pair of terms in the tuples is unifiable or not. This would result in an extremely heavy processing load. We propose a method which involves ordering terms and, as a result, omitting some pairs from this processing. The paper also describes a method for implementing the unification engine (UE), that is, hardware dedicated to the RBU operations.
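A sketch of the pruning idea under our own simplification: group the terms of one relation by their leading symbol so that only pairs with compatible heads reach the pairwise test. The paper's actual method orders terms more finely, and a full unification with consistent bindings would normally follow for the surviving pairs.

```python
# Unification-join with a cheap pre-filter: bucket terms by head symbol and
# skip every bucket whose head cannot match. "?x"-style strings are variables.
# The compatibility test below checks structure only; it does not track
# consistent variable bindings, which a real unifier would.

from collections import defaultdict

def head(term):
    return term[0] if isinstance(term, tuple) else term

def compatible(x, y):
    if (isinstance(x, str) and x.startswith("?")) or \
       (isinstance(y, str) and y.startswith("?")):
        return True
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        return all(compatible(a, b) for a, b in zip(x, y))
    return x == y

def unification_join(r1, r2):
    index = defaultdict(list)
    for t in r2:
        index[head(t)].append(t)
    pairs = []
    for t in r1:
        if head(t).startswith("?"):   # a variable head matches any bucket
            candidates = [u for bucket in index.values() for u in bucket]
        else:                         # pruning: only the matching bucket
            candidates = index[head(t)]
        pairs.extend((t, u) for u in candidates if compatible(t, u))
    return pairs

R1 = [("p", "a", "?x"), ("q", "b", "c")]
R2 = [("p", "?y", "d"), ("r", "b", "c")]
print(unification_join(R1, R2))  # only the two "p"-headed terms are compared
```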

Proceedings Article
05 Feb 1986
TL;DR: New tools are introduced, based on a database programming language, which allow the support of rules on the basis of existing relations.
Abstract: Current database systems provide only rather limited tools for advanced applications, for example for the database support of knowledge based systems. Especially, it is not possible to represent recursive rules which are defined on the basis of stored data (facts). Using the terminology of PROLOG, this means that database systems provide means for managing facts, but rules are not supported. The semantic gap, therefore, between conventional database systems on the one hand and knowledge base management systems on the other is too large. This paper presents new tools, based on a database programming language, which allow the support of rules on the basis of existing relations. These rules may be recursive, thus providing similar mechanisms as PROLOG does. The tools are set oriented; they allow, therefore, an efficient implementation. Moreover, some disadvantages of PROLOG are avoided, for example infinite recursion. In addition, data types are defined for the representation of updatable rule bases, thus providing a first step towards integrated fact and rule management using relational technology.
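The termination point is the crux, so a minimal sketch may help (the ancestor/parent rule and relation names are ours, not the paper's): a recursive rule evaluated set-at-a-time over whole relations reaches a fixpoint and stops, where a tuple-at-a-time PROLOG search can recurse forever.

```python
# Set-oriented, bottom-up evaluation of a recursive rule to a fixpoint:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

PARENT = {("ann", "bob"), ("bob", "cal"), ("cal", "dee")}

def ancestor(parent):
    anc = set(parent)
    while True:
        derived = {(x, y) for x, z in parent for z2, y in anc if z == z2} - anc
        if not derived:        # fixpoint: no new tuples, evaluation terminates
            return anc
        anc |= derived

print(sorted(ancestor(PARENT)))
# -> [('ann', 'bob'), ('ann', 'cal'), ('ann', 'dee'), ('bob', 'cal'), ...]
```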

Proceedings Article
11 Aug 1986
TL;DR: This paper explores VLSI synthesis and the role that traditional AI methods can play in solving this problem, and divides design knowledge into three categories: knowledge about modules used to design chips; knowledge used to distinguish and select modules; and knowledge about how to compose new designs from modules.
Abstract: This paper explores VLSI synthesis and the role that traditional AI methods can play in solving this problem. VLSI synthesis is hard because interactions among decisions at different levels of abstraction make design choices difficult to identify and evaluate. Our knowledge engineering strategy tackles this problem by organizing knowledge to encourage reasoning about the design through multiple levels of abstraction. We divide design knowledge into three categories: knowledge about modules used to design chips; knowledge used to distinguish and select modules; and knowledge about how to compose new designs from modules. We discuss the uses of procedural and declarative knowledge in each type of knowledge, the types of knowledge useful in each category, and efficient representations for them.

Journal ArticleDOI
TL;DR: Several ideas and examples are presented illustrating possibilities in designing certain knowledge-based software tools, i.e., software tools with a knowledge base containing necessary and relevant knowledge and human experience represented in a suitable way.

Proceedings ArticleDOI
25 Aug 1986
TL;DR: A method of analyzing texts is described which refers to information contained in the intersentential relations and the headings of texts and then extracts requested knowledge, such as a summary, from texts in an efficient way.
Abstract: The study of text understanding and knowledge extraction has been actively pursued by many researchers. The authors previously studied a method of structured information extraction from texts without a global text analysis; that method is suitable for comparatively short texts such as a patent claim clause or the abstract of a technical paper. This paper outlines a method of knowledge extraction from a longer text which needs a global text analysis. The kinds of texts treated are expository texts or explanation texts. Expository texts, as described here, are those which have various hierarchical headings, such as a title, a heading for each section, and sometimes an abstract. By this definition most texts, including technical papers, reports, and newspapers, are expository. Texts of this kind disclose the main knowledge in a top-down manner and show not only the location of an attribute value in a text but also several key points of the content. This property of expository texts contrasts with that of novels and stories, in which an unexpected development of the plot is preferred. This paper pays attention to these characteristics of expository texts and describes a method of analyzing texts by referring to information contained in the intersentential relations and the headings of texts, and then extracting requested knowledge, such as a summary, in an efficient way.
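A toy sketch of one idea above, under our own simplification (scoring by heading-word overlap; the headings and sentences are invented, and the paper also exploits intersentential relations): because expository texts disclose their main knowledge top-down through headings, heading words are a cheap guide to summary sentences.

```python
# Heading-guided sentence selection: score each sentence by its overlap with
# the words of the hierarchical headings and keep the top k.

HEADINGS = ["Knowledge Extraction", "Global Text Analysis"]
SENTENCES = [
    "Knowledge extraction from long texts needs a global analysis.",
    "The weather was pleasant during the conference.",
    "Headings guide the analysis toward the requested knowledge.",
]

def summarize(headings, sentences, k=2):
    heading_words = {w.lower() for h in headings for w in h.split()}
    def score(s):
        return len(heading_words & {w.strip(".,").lower() for w in s.split()})
    return sorted(sentences, key=score, reverse=True)[:k]

print(summarize(HEADINGS, SENTENCES))  # the two heading-related sentences win
```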





Journal ArticleDOI
01 Sep 1986
TL;DR: This work suggests that the integration of the information derived from both protocol analysis and CAP for knowledge extraction would provide more effective information for the development of expert systems than is feasible with either system alone.
Abstract: A current bottleneck in the automation of software is the lack of available, standardized, reliable, and valid methods for extracting knowledge from expert programmers. This paper discusses the development of Computer Aided Protocol (CAP) to automatically collect the general and specific cognitive task components of a programmer. Results indicate that CAP was able to collect lower-level goals while protocol analysis collected only 56 percent of these lower-level goals. However, protocol analysis was able to obtain significantly more procedural knowledge relating to cognitive states of the subject and more high-level goals than CAP (F(1,8)=11.23; p<.004). This work suggests that the integration of the information derived from both protocol analysis and CAP for knowledge extraction would provide more effective information for the development of expert systems than is feasible with either system alone.