
Showing papers on "Knowledge representation and reasoning" published in 1996


Journal ArticleDOI
TL;DR: A relatively new development—information extraction (IE)—is the subject of this article and can transform the raw material, refining and reducing it to a germ of the original text.
Abstract: There may be more text data in electronic form than ever before, but much of it is ignored. No human can read, understand, and synthesize megabytes of text on an everyday basis. Missed information— and lost opportunities—has spurred researchers to explore various information management strategies to establish order in the text wilderness. The most common strategies are information retrieval (IR) and information filtering [4]. A relatively new development—information extraction (IE)—is the subject of this article. We can view IR systems as combine harvesters that bring back useful material from vast fields of raw material. With large amounts of potentially useful information in hand, an IE system can then transform the raw material, refining and reducing it to a germ of the original text (see Figure 1). Suppose financial analysts are investigating production of semiconductor devices (see Figure 2). They might want to know several things:

962 citations


Journal ArticleDOI
Bart Selman1, Henry Kautz1
TL;DR: It is shown how propositional logical theories can be compiled into Horn theories that approximate the original information, and the approximations bound the original theory from below and above in terms of logical strength.
Abstract: Computational efficiency is a central concern in the design of knowledge representation systems. In order to obtain efficient systems, it has been suggested that one should limit the form of the statements in the knowledge base or use an incomplete inference mechanism. The former approach is often too restrictive for practical applications, whereas the latter leads to uncertainty about exactly what can and cannot be inferred from the knowledge base. We present a third alternative, in which knowledge given in a general representation language is translated (compiled) into a tractable form—allowing for efficient subsequent query answering. We show how propositional logical theories can be compiled into Horn theories that approximate the original information. The approximations bound the original theory from below and above in terms of logical strength. The procedures are extended to other tractable languages (for example, binary clauses) and to the first-order case. Finally, we demonstrate the generality of our approach by compiling concept descriptions in a general frame-based language into a tractable form.
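As an illustration of the upper-bound half of this idea, the sketch below uses the standard fact that a propositional theory has a Horn axiomatization exactly when its model set is closed under intersection, so the least Horn upper bound can be characterized by closing the original model set under intersection. The tiny theory and variable names are invented for illustration, not taken from the paper.

```python
# Brute-force sketch of the least Horn upper bound via model-set closure.
from itertools import product

VARS = ["a", "b", "c"]

def models(theory):
    """Enumerate the models of `theory` as frozensets of true variables;
    `theory` is a function from an assignment dict to bool."""
    out = set()
    for bits in product([False, True], repeat=len(VARS)):
        assignment = dict(zip(VARS, bits))
        if theory(assignment):
            out.add(frozenset(v for v in VARS if assignment[v]))
    return out

def horn_lub_models(ms):
    """Close a model set under intersection: by the standard characterization,
    this is the model set of the least Horn upper bound of the theory."""
    closed = set(ms)
    changed = True
    while changed:
        changed = False
        for m1 in list(closed):
            for m2 in list(closed):
                if (m1 & m2) not in closed:
                    closed.add(m1 & m2)
                    changed = True
    return closed

# Example (non-Horn) theory: (a or b) and (a -> c).
theory = lambda s: (s["a"] or s["b"]) and ((not s["a"]) or s["c"])

orig = models(theory)
lub = horn_lub_models(orig)
assert orig <= lub        # the original theory entails its Horn upper bound
print(sorted(map(sorted, orig)))
print(sorted(map(sorted, lub)))
```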

348 citations


Book ChapterDOI
01 Aug 1996
TL;DR: The conventional approaches to knowledge representation, e.g., semantic networks, frames, predicate calculus and Prolog, are based on bivalent logic as mentioned in this paper, and they cannot come to grips with the issue of uncertainty and imprecision.
Abstract: The conventional approaches to knowledge representation, e.g., semantic networks, frames, predicate calculus and Prolog, are based on bivalent logic. A serious shortcoming of such approaches is their inability to come to grips with the issue of uncertainty and imprecision. As a consequence, the conventional approaches do not provide an adequate model for modes of reasoning which are approximate rather than exact. Most modes of human reasoning and all of commonsense reasoning fall into this category.

335 citations


Journal ArticleDOI
TL;DR: A survey of methods for representing and reasoning with imperfect information can be found in this paper, where a classification of different types of imperfections and sources of such imperfections are discussed.
Abstract: This paper surveys methods for representing and reasoning with imperfect information. It opens with an attempt to classify the different types of imperfection that may pervade data, and a discussion of the sources of such imperfections. The classification is then used as a framework for considering work that explicitly concerns the representation of imperfect information, and related work on how imperfect information may be used as a basis for reasoning. The work that is surveyed is drawn from both the field of databases and the field of artificial intelligence. Both of these areas have long been concerned with the problems caused by imperfect information, and this paper stresses the relationships between the approaches developed in each.

293 citations


Journal ArticleDOI
01 Nov 1996
TL;DR: The main thesis of this paper is that the part-whole relation cannot simply be considered as an ordinary attribute: its specific ontological nature needs to be understood and integrated within data-modelling formalisms and methodologies.
Abstract: Knowledge bases, data bases and object-oriented systems (referred to in the paper as Object-Centered systems) all rely on attributes as the main construct used to associate properties to objects; among these, a fundamental role is played by the so-called part-whole relation. The representation of such structural information usually requires particular semantics together with specialized inference and update mechanisms, but rarely do current modelling formalisms and methodologies give it a specific, ‘first-class’ dignity. The main thesis of this paper is that the part-whole relation cannot simply be considered as an ordinary attribute: its specific ontological nature needs to be understood and integrated within data-modelling formalisms and methodologies. On the basis of such an ontological perspective, we survey the conceptual modelling issues involving part-whole relations, and the various modelling frameworks provided by knowledge representation and object-oriented formalisms.

256 citations


Journal ArticleDOI
TL;DR: The goal here is to provide a brief overview of the key issues in knowledge discovery in an industrial context and outline representative applications.
Abstract: a phenomenal rate. From the financial sector to telecommunications operations, companies increasingly rely on analysis of huge amounts of data to compete. Although ad hoc mixtures of statistical techniques and file management tools once sufficed for digging through mounds of corporate data, the size of modern data warehouses, the mission-critical nature of the data, and the speed with which analyses need to be made now call for a new approach. A new generation of techniques and tools is emerging to intelligently assist humans in analyzing mountains of data and finding critical nuggets of useful knowledge, and in some cases to perform analyses automatically. These techniques and tools are the subject of the growing field of knowledge discovery in databases (KDD) [5]. KDD is an umbrella term describing a variety of activities for making sense of data. We use the term to describe the overall process of finding useful patterns in data, including not only the data mining step of running specific discovery algorithms but also pre- and postprocessing and a host of other important activities. Our goal here is to provide a brief overview of the key issues in knowledge discovery in an industrial context and outline representative applications. The different data mining methods at the core of the KDD process can have different goals. In general, we distinguish two types:
• Verification, in which the system is limited to verifying a user's hypothesis, and
• Discovery, in which the system finds new patterns.
Ad hoc techniques—no longer adequate for sifting through vast collections of data—are giving way to data mining and knowledge discovery for turning corporate data into competitive business advantage.

244 citations


Journal Article
TL;DR: The paper provides an informal example-based introduction to DATR and to techniques for its use, including finite-state transduction, the encoding of DAGs and lexical rules, and the representation of ambiguity and alternation.
Abstract: Much recent research on the design of natural language lexicons has made use of nonmonotonic inheritance networks as originally developed for general knowledge representation purposes in Artificial Intelligence. DATR is a simple, spartan language for defining nonmonotonic inheritance networks with path/value equations, one that has been designed specifically for lexical knowledge representation. In keeping with its intendedly minimalist character, it lacks many of the constructs embodied either in general-purpose knowledge representation languages or in contemporary grammar formalisms. The present paper shows that the language is nonetheless sufficiently expressive to represent concisely the structure of lexical information at a variety of levels of linguistic analysis. The paper provides an informal example-based introduction to DATR and to techniques for its use, including finite-state transduction, the encoding of DAGs and lexical rules, and the representation of ambiguity and alternation. Sample analyses of phenomena such as inflectional syncretism and verbal subcategorization are given that show how the language can be used to squeeze out redundancy from lexical descriptions.
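DATR itself is a dedicated language, but the core mechanism it relies on can be sketched in a few lines: nodes carry path/value equations, missing paths are inherited from a parent node, and more specific nodes override inherited defaults. The lexicon entries below are invented toy data, not DATR syntax.

```python
# Toy default-inheritance lexicon: path/value equations with overriding.
LEXICON = {
    "VERB":      {"parent": None,        "eqs": {("past",): "+ed", ("prespart",): "+ing"}},
    "SING_VERB": {"parent": "VERB",      "eqs": {("past",): "sang"}},  # overrides the default
    "walk":      {"parent": "VERB",      "eqs": {("root",): "walk"}},
    "sing":      {"parent": "SING_VERB", "eqs": {("root",): "sing"}},
}

def lookup(node, path):
    """Return the value of `path` at `node`, climbing to ancestors when the
    node has no equation for the path (default inheritance with overriding)."""
    while node is not None:
        entry = LEXICON[node]
        if path in entry["eqs"]:
            return entry["eqs"][path]
        node = entry["parent"]
    raise KeyError(path)

print(lookup("walk", ("past",)))   # '+ed'  : inherited regular default
print(lookup("sing", ("past",)))   # 'sang' : overridden in SING_VERB
```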

170 citations


Journal ArticleDOI
01 May 1996
TL;DR: It is pointed out that within a numerical framework, two numbers are needed to account for partial ignorance about events, because on top of truth and falsity, the state of total ignorance must be encoded independently of the number of underlying alternatives.
Abstract: This paper advocates the use of nonpurely probabilistic approaches to higher-order uncertainty. One of the major arguments of Bayesian probability proponents is that representing uncertainty is always decision-driven and as a consequence, uncertainty should be represented by probability. Here we argue that representing partial ignorance is not always decision-driven. Other reasoning tasks such as belief revision for instance are more naturally carried out at the purely cognitive level. Conceiving knowledge representation and decision-making as separate concerns opens the way to nonpurely probabilistic representations of incomplete knowledge. It is pointed out that within a numerical framework, two numbers are needed to account for partial ignorance about events, because on top of truth and falsity, the state of total ignorance must be encoded independently of the number of underlying alternatives. The paper also points out that it is consistent to accept a Bayesian view of decision-making and a non-Bayesian view of knowledge representation because it is possible to map nonprobabilistic degrees of belief to betting probabilities when needed. Conditioning rules in non-Bayesian settings are reviewed, and the difference between focusing on a reference class and revising due to the arrival of new information is pointed out. A comparison of Bayesian and non-Bayesian revision modes is discussed on a classical example.
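A small sketch of the "two numbers" point, assuming the standard possibility-theory definitions (possibility as a maximum over a possibility distribution, necessity as one minus the possibility of the complement): under total ignorance every nontrivial event receives the pair (N, Π) = (0, 1) no matter how many alternatives there are, unlike a uniform probability.

```python
# Possibility and necessity of an event from a possibility distribution.
def possibility(event, dist):
    """Pi(A) = max of the possibility degrees of the alternatives in A."""
    return max((dist[w] for w in dist if w in event), default=0.0)

def necessity(event, dist):
    """N(A) = 1 - Pi(complement of A)."""
    complement = set(dist) - set(event)
    return 1.0 - possibility(complement, dist)

# Total ignorance over three alternatives: every alternative fully possible.
ignorance = {"w1": 1.0, "w2": 1.0, "w3": 1.0}
event = {"w1"}
print(necessity(event, ignorance), possibility(event, ignorance))   # 0.0 1.0
# A uniform probability would instead give P(event) = 1/3, a value that
# depends on the number of underlying alternatives.
```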

142 citations


Journal ArticleDOI
TL;DR: Three diagnostic methods for industrial processes based on multilevel flow models are presented; the approach works well with systems that can be described using flows, while it currently lacks the capability of capturing important aspects of other types of systems, for example, electronic circuits.

138 citations


Journal ArticleDOI
TL;DR: It is argued that cardinality restrictions on concepts are of importance in applications such as configuration of technical systems, an application domain of description logic systems that is currently gaining in interest, and it is shown that including such restrictions in the description language leaves important inference problems such as instance testing decidable.

130 citations


Book ChapterDOI
14 Nov 1996
TL;DR: An approach to automating the acquisition of adaptation knowledge, overcoming many of the associated knowledge-engineering costs, is described; it makes use of inductive techniques that learn adaptation knowledge from case comparison.
Abstract: A major challenge for case-based reasoning (CBR) is to overcome the knowledge-engineering problems incurred by developing adaptation knowledge. This paper describes an approach to automating the acquisition of adaptation knowledge overcoming many of the associated knowledge-engineering costs. This approach makes use of inductive techniques, which learn adaptation knowledge from case comparison. We also show how this adaptation knowledge can be usefully applied. The method has been tested in a property-evaluation CBR system and the technique is illustrated by examples taken from this domain. In addition, we examine how any available domain knowledge might be exploited in such an adaptation-rule learning-system.
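The following is a loose sketch, not the authors' algorithm, of learning adaptation knowledge from case comparison: pairs of cases that differ in a single feature yield an estimate of how that feature difference changes the solution, and the estimate is then used to adapt a retrieved case to a query. The property-valuation numbers are invented.

```python
# Learn a simple adaptation rule from case pairs, then apply it to a query.
from itertools import combinations

# (features, solution) -- e.g. (bedrooms, size in m2) -> price
cases = [({"beds": 2, "size": 70}, 100_000),
         ({"beds": 3, "size": 70}, 115_000),
         ({"beds": 2, "size": 90}, 112_000)]

def learn_adaptation(cases, feature):
    """Estimate how a unit change in `feature` changes the solution,
    using only case pairs that differ in that feature alone."""
    deltas = []
    for (f1, s1), (f2, s2) in combinations(cases, 2):
        others_equal = all(f1[k] == f2[k] for k in f1 if k != feature)
        if others_equal and f1[feature] != f2[feature]:
            deltas.append((s2 - s1) / (f2[feature] - f1[feature]))
    return sum(deltas) / len(deltas) if deltas else 0.0

per_bed = learn_adaptation(cases, "beds")
# Adapt the retrieved case (2 beds, 70 m2) to a 4-bedroom query.
retrieved_features, retrieved_solution = cases[0]
query = {"beds": 4, "size": 70}
adapted = retrieved_solution + per_bed * (query["beds"] - retrieved_features["beds"])
print(per_bed, adapted)   # 15000.0 130000.0
```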

Book
01 Jun 1996
TL;DR: This book discusses non-standard theories of uncertainty in plausible reasoning, foundations of logic programming, and the consistency-based approach to automated diagnosis of devices, among other topics.
Abstract: Preface; 1. Non-standard theories of uncertainty in plausible reasoning (Didier Dubois and Henri Prade); 2. Probabilistic foundations of reasoning with conditionals (Judea Pearl and Moises Goldszmidt); 3. Foundations of logic programming (Vladimir Lifschitz); 4. Abductive theories in artificial intelligence (Kurt Konolige); 5. Inductive logic programming (Stefan Wrobel); 6. Reasoning in description logics (Francesco M. Donini, Maurizio Lenzerini, Daniele Nardi and Andrea Schaerf); 7. Artificial Intelligence: A Computational Perspective (Bernhard Nebel); 8. The Consistency-based Approach to Automated Diagnosis of Devices (Oscar Dressler and Peter Struss); Index.

Journal ArticleDOI
TL;DR: The proposed approach is extended to handle fault impacts expressed as event chronologies, allowing a finer representation of the available knowledge through the introduction of an appropriate representation of uncertainty and incompleteness based on Zadeh's possibility theory and fuzzy sets.
Abstract: The fault mode effects and criticality analyses (FMECA) describe the impact of identified faults. They form an important category of knowledge gathered during the design phase of a satellite and are used also for diagnosis activities. This paper proposes their extension, allowing a finer representation of the available knowledge, at approximately the same cost, through the introduction of an appropriate representation of uncertainty and incompleteness based on Zadeh's possibility theory and fuzzy sets. The main benefit of the approach is to provide a qualitative treatment of uncertainty where we can for instance distinguish manifestations which are more or less certainly present (or absent) and manifestations which are more or less possibly present (or absent) when a given fault is present. In a second step, the proposed approach is extended to handle fault impacts expressed as event chronologies. Efficient, real-time compatible discrimination techniques exploiting uncertain observations are introduced, and an example of satellite fault diagnosis illustrates the method. A brief rationale for the choice of possibility theory and fuzzy sets is provided.

DOI
01 Jan 1996
TL;DR: Four major issues in explaining the conclusions of procedurally implemented deductive systems are explored and a meta-language for describing interesting aspects of complicated objects is used to limit the amount of information that should be presented or explained.
Abstract: Explaining Reasoning in Description Logics, by Deborah L. McGuinness. Dissertation Director: Alexander Borgida. Knowledge-based systems, like other software systems, need to be debugged while being developed. In addition, systems providing “expert advice” need to be able to justify their conclusions. Traditionally, developers have been supported during debugging by tools which offer a trace of the operations performed by the system (e.g., a sequence of rule firings in a rule-based expert system) or, more generally, by an explanation facility for the reasoner. Description Logics, formal systems developed to reason with taxonomies or classification hierarchies, form the basis of several recent knowledge-based systems but do not currently offer such facilities. In this thesis, we explore four major issues in explaining the conclusions of procedurally implemented deductive systems, concentrating on a specific solution for a class of description logics. First, we consider how to explain a highly optimized procedural implementation in a declarative manner. We begin with a formal proof-theoretic foundation for explanation and we illustrate our approach using examples from our implementation in the CLASSIC knowledge representation system. Next, we consider the issue of handling long, complicated deduction chains. We introduce methods designed to break up description logic queries and answers into small, manageable pieces, and we show how these are used in our approach and how they support automatically generated explanations of follow-up questions. Next, we consider the problem of explaining negative deductions. We provide a constructive method for explanation based on generating counter-examples. Finally, we address the issue of limiting both object presentation and explanation. We offer a meta-language for describing interesting aspects of complicated objects and use this language to limit the amount of information that should be presented or explained. The work in this thesis has been motivated by design and application work on a description logic-based system, and a significant portion of our work has been implemented for CLASSIC and is in use.

Journal ArticleDOI
TL;DR: In this paper, a model-based approach to reasoning is proposed, in which the knowledge base is represented as a set of models (satisfying assignments) rather than a logical formula, and the set of queries is restricted.
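A minimal sketch of the model-based idea described in the summary, under the assumption that the knowledge base is stored directly as a set of models and that queries are restricted to formulas we can evaluate on each model; the toy models and queries are invented.

```python
# Model-based query answering: the KB entails a query iff it holds in every model.
KB_MODELS = [
    {"bird": True,  "flies": True},
    {"bird": False, "flies": False},
]

def entails(query):
    """`query` is a function from a model (dict) to bool."""
    return all(query(m) for m in KB_MODELS)

print(entails(lambda m: (not m["bird"]) or m["flies"]))   # bird -> flies : True
print(entails(lambda m: m["flies"]))                      # flies         : False
```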

Proceedings ArticleDOI
23 Jun 1996
TL;DR: This paper proposes a computationally based expert system for managing fault propagation in internetworks using the concept of fuzzy cognitive maps (FCM), a graphical archetype which encodes and processes vague causal reasoning numerically.
Abstract: This paper proposes a computationally based expert system for managing fault propagation in internetworks using the concept of fuzzy cognitive maps (FCM), a graphical archetype which encodes and processes vague causal reasoning numerically. The dynamic features of FCM are exploited to characterize the time-varying aspects of network faults, while its graphical features are used as a framework for representing the distributed properties of fault propagation. In this scheme, a network fault due to one or more managed objects induces causal fuzzy relationships on adjacent objects. This causal relationship is captured in a matrix which allows causal inference representations to be viewed as feedback associative memory with computational recall capabilities.
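A compact sketch of the usual fuzzy cognitive map inference loop (not the paper's specific expert system): concept activations are repeatedly propagated through a signed causal weight matrix and squashed until the state stabilizes, with observed fault concepts clamped as inputs. The concepts and weights below are invented.

```python
# Fuzzy cognitive map inference over an invented fault-propagation graph.
import math

concepts = ["link_down", "high_packet_loss", "timeouts"]
# W[i][j]: causal influence of concept i on concept j (invented weights).
W = [[0.0, 0.9, 0.6],
     [0.0, 0.0, 0.8],
     [0.0, 0.0, 0.0]]

def squash(x):
    """Logistic squashing function keeping activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def step(state):
    n = len(state)
    return [squash(sum(state[i] * W[i][j] for i in range(n))) for j in range(n)]

state = [1.0, 0.0, 0.0]      # observed: the link is down
for _ in range(20):          # iterate toward a fixed point
    state = step(state)
    state[0] = 1.0           # clamp the observed fault as an input concept
# The downstream symptom concepts end up with high activation.
print({c: round(x, 2) for c, x in zip(concepts, state)})
```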

Journal ArticleDOI
01 Apr 1996
TL;DR: An extension of the conventional definition of mass functions in Evidence Theory, for use in Data Mining as a means to represent evidence of the existence of rules in the database, is suggested.
Abstract: Data Mining or Knowledge Discovery in Databases is currently one of the most exciting and challenging areas where database techniques are coupled with techniques from Artificial Intelligence and mathematical sub-disciplines to great potential advantage. It has been defined as the non-trivial extraction of implicit, previously unknown and potentially useful information from data. A lot of research effort is being directed towards building tools for discovering interesting patterns which are hidden below the surface in databases. However, most of the work being done in this field has been problem-specific and no general framework has yet been proposed for Data Mining. In this paper we seek to remedy this by proposing EDM — Evidence-based Data Mining — a general framework for Data Mining based on Evidence Theory. Having a general framework for Data Mining offers a number of advantages. It provides a common method for representing knowledge which allows prior knowledge from the user or knowledge discovered by another discovery process to be incorporated into the discovery process. A common knowledge representation also supports the discovery of meta-knowledge from knowledge discovered by different Data Mining techniques. Furthermore, a general framework can provide facilities that are common to most discovery processes, e.g. incorporating domain knowledge and dealing with missing values. The framework presented in this paper has the following additional advantages. The framework is inherently parallel. Thus, algorithms developed within this framework will also be parallel and will therefore be expected to be efficient for large data sets — a necessity as most commercial data sets, relational or otherwise, are very large. This is compounded by the fact that the algorithms are complex. Also, the parallelism within the framework allows its use in parallel, distributed and heterogeneous databases. The framework is easily updated and new discovery methods can be readily incorporated within the framework, making it ‘general’ in the functional sense in addition to the representational sense considered above. The framework provides an intuitive way of dealing with missing data during the discovery process using the concept of Ignorance borrowed from Evidence Theory. The framework consists of a method for representing data and knowledge, and methods for data manipulation or knowledge discovery. We suggest an extension of the conventional definition of mass functions in Evidence Theory for use in Data Mining, as a means to represent evidence of the existence of rules in the database. The discovery process within EDM consists of a series of operations on the mass functions. Each operation is carried out by an EDM operator. We provide a classification for the EDM operators based on the discovery functions performed by them and discuss aspects of the induction, domain and combination operator classes. The application of EDM to two separate Data Mining tasks is also addressed, highlighting the advantages of using a general framework for Data Mining in general and, in particular, using one that is based on Evidence Theory.
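The Evidence Theory machinery that such a framework builds on can be sketched briefly: mass functions assign belief mass to subsets of a frame of discernment (mass on the whole frame encoding ignorance), and Dempster's rule combines evidence from independent sources. The frame and mass values below are illustrative only.

```python
# Mass functions over a frame of discernment and Dempster's rule of combination.
from itertools import product

FRAME = frozenset({"rule_holds", "rule_fails"})

# Evidence from two sources; mass on the whole frame represents ignorance.
m1 = {frozenset({"rule_holds"}): 0.6, FRAME: 0.4}
m2 = {frozenset({"rule_holds"}): 0.3, frozenset({"rule_fails"}): 0.2, FRAME: 0.5}

def combine(m1, m2):
    """Dempster's rule: multiply masses of intersecting focal sets, then
    renormalize by the total non-conflicting mass."""
    raw, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

for focal, mass in combine(m1, m2).items():
    print(set(focal), round(mass, 3))
```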

Book ChapterDOI
17 Sep 1996
TL;DR: Insight is given into the diverse alternatives for the representation of transitive relations such as part-whole relations, family relations or partial orders in general in terminological knowledge representation systems.
Abstract: Motivated by applications that call for the adequate representation of part-whole relations, different possibilities of representing transitive relations in terminological knowledge representation systems are investigated. A well-known concept language, ALC, is extended by three different kinds of transitive roles. It turns out that these extensions differ largely in expressiveness and computational complexity, hence this investigation gives insight into the diverse alternatives for the representation of transitive relations such as part-whole relations, family relations or partial orders in general.
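A small illustration of what is at stake when a role such as part-of is made transitive: with direct assertions only, indirect parthood is invisible until the transitive closure of the relation is computed. The assertions below are invented, and real description logic reasoners handle transitivity inside the concept language rather than by explicit closure.

```python
# Transitive closure of direct part-of assertions (invented example).
PART_OF = {("piston", "engine"), ("engine", "car"), ("wheel", "car")}

def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

print(("piston", "car") in PART_OF)                      # False: only direct parts
print(("piston", "car") in transitive_closure(PART_OF))  # True: via the engine
```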

Journal ArticleDOI
TL;DR: In this paper, the data model is formally defined and a nonredundancy-preserving primitive operator, the merge, is described, and it is proven that nonredundancy is always preserved in the model.
Abstract: This paper fully develops a previous approach by George et al. (1993) to modeling uncertainty in class hierarchies. The model utilizes fuzzy logic to generalize equality to similarity which permitted impreciseness in data to be represented by uncertainty in classification. In this paper, the data model is formally defined and a nonredundancy preserving primitive operator, the merge, is described. It is proven that nonredundancy is always preserved in the model. An object algebra is proposed, and transformations that preserve query equality are discussed.

Journal ArticleDOI
TL;DR: The theory tightly unifies the constraint logic programming scheme of Jaffar and Lassez (1987), the generalized annotated logic programming theory of Kifer and Subrahmanian (1989), and the stable model semantics of Gelfond and Lifschitz (1988).
Abstract: Deductive databases that interact with, and are accessed by, reasoning agents in the real world (such as logic controllers in automated manufacturing, weapons guidance systems, aircraft landing systems, land-vehicle maneuvering systems, and air-traffic control systems) must have the ability to deal with multiple modes of reasoning. Specifically, the types of reasoning we are concerned with include, among others, reasoning about time, reasoning about quantitative relationships that may be expressed in the form of differential equations or optimization problems, and reasoning about numeric modes of uncertainty about the domain which the database seeks to describe. Such databases may need to handle diverse forms of data structures, and frequently they may require use of the assumption-based nonmonotonic representation of knowledge. A hybrid knowledge base is a theoretical framework capturing all the above modes of reasoning. The theory tightly unifies the constraint logic programming scheme of Jaffar and Lassez (1987), the generalized annotated logic programming theory of Kifer and Subrahmanian (1989), and the stable model semantics of Gelfond and Lifschitz (1988). New techniques are introduced which extend both the work on annotated logic programming and the stable model semantics.

Book ChapterDOI
01 Jan 1996
TL;DR: This paper describes how attribute exploration can be modified such that it determines a minimal set of implications that fills the gap between previously given implications (called background implications) and all valid implications.
Abstract: Implications between attributes can represent knowledge about objects in a specified context. This knowledge representation is especially useful when it is not possible to list all specified objects. Attribute exploration is a tool of formal concept analysis that supports the acquisition of this knowledge. For a specified context this interactive procedure determines a minimal list of valid implications between attributes of this context together with a list of objects which are counterexamples for all implications not valid in the context. This paper describes how the exploration can be modified such that it determines a minimal set of implications that fills the gap between previously given implications (called background implications) and all valid implications. The list of implications can be simplified further if exceptions are allowed for the implications.
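The basic check that attribute exploration repeats can be sketched simply, assuming the usual definition: an implication A → B holds in a formal context iff every object possessing all attributes of A also possesses all attributes of B, and any violating object serves as a counterexample. The toy context below is invented.

```python
# Checking an attribute implication against a formal context.
context = {
    "obj1": {"has_wheels", "has_motor"},
    "obj2": {"has_wheels"},
    "obj3": {"has_wheels", "has_motor", "needs_fuel"},
}

def check_implication(premise, conclusion, context):
    """Return (True, None) if the implication holds in the context,
    otherwise (False, counterexample_object)."""
    for obj, attrs in context.items():
        if premise <= attrs and not conclusion <= attrs:
            return False, obj
    return True, None

print(check_implication({"has_motor"}, {"has_wheels"}, context))   # (True, None)
print(check_implication({"has_wheels"}, {"has_motor"}, context))   # (False, 'obj2')
```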

Proceedings ArticleDOI
19 Jun 1996
TL;DR: A conceptualization of the coordination task around the notion of structured "conversation" amongst agents is proposed and a complete multiagent programming language and system for explicitly representing, applying and capturing coordination knowledge is built.
Abstract: The agent view provides a level of abstraction at which we envisage computational systems carrying out cooperative work by interoperating across net worked people, organizations and machines. A major challenge in building such systems is coordinating the behavior of the individual agents to achieve the individual and shared goals of the participants. We propose a conceptualization of the coordination task around the notion of structured "conversation" amongst agents. Based on this notion we build a complete multiagent programming language and system for explicitly representing, applying and capturing coordination knowledge. The language provides KQML-based communication, an agent definition and execution environment, support for describing interactions as multiple structured conversations among agents and rule-based approaches to conversation selection, conversation execution and event handling. The major application of the system is the construction and integration of multiagent supply chain systems for manufacturing enterprises. This application is used throughout the paper to illustrate the introduced concepts and language constructs.
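A bare-bones sketch of the "structured conversation" idea, assuming a conversation can be modelled as a small state machine over KQML-style performatives in which an agent only accepts messages that are legal transitions from its current state; the states, performatives and transition table are invented, not the paper's conversation language.

```python
# A conversation as a finite state machine over performatives.
CONVERSATION = {
    ("start", "ask"):   "asked",
    ("asked", "tell"):  "done",
    ("asked", "sorry"): "done",
}

def handle(state, performative):
    """Advance the conversation, rejecting performatives that are illegal here."""
    key = (state, performative)
    if key not in CONVERSATION:
        raise ValueError(f"performative {performative!r} not allowed in state {state!r}")
    return CONVERSATION[key]

state = "start"
for message in ["ask", "tell"]:
    state = handle(state, message)
print(state)   # 'done'
```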

Journal ArticleDOI
TL;DR: The main goal of this paper is to describe in detail how PROTEGE-II was used to model the elevator-configuration task, and provide a starting point for comparison with other frameworks that use abstract problem-solving methods.
Abstract: This paper describes how we applied the PROTEGE-II architecture to build a knowledge-based system that configures elevators. The elevator-configuration task was solved originally with a system that employed the propose-and-revise problem-solving method (VT). A variant of this task, here named the Sisyphus-2 problem, is used by the knowledge-acquisition community for comparative studies. PROTEGE-II is a knowledge-engineering environment that focuses on the use of reusable ontologies and problem-solving methods to generate task-specific knowledge-acquisition tools and executable problem solvers. The main goal of this paper is to describe in detail how we used PROTEGE-II to model the elevator-configuration task. This description provides a starting point for comparison with other frameworks that use abstract problem-solving methods. Beginning with the textual description of the elevator-configuration task, we analysed the domain knowledge with respect to PROTEGE-II’s main goal: to build domain-specific knowledge-acquisition tools. We used PROTEGE-II’s suite of tools to construct a knowledge-based system, called ELVIS, that includes a reusable domain ontology, a knowledge-acquisition tool, and a propose-and-revise problem-solving method that is optimized to solve the elevator-configuration task. We entered domain-specific knowledge about elevator configuration into the knowledge base with the help of a task-specific knowledge-acquisition tool that PROTEGE-II generated from the ontologies. After we constructed mapping relations to connect the knowledge base with the method’s code, the final executable problem solver solved the test case provided with the Sisyphus-2 material. We have found that the development of ELVIS has afforded a valuable test case for evaluating PROTEGE-II’s suite of system-building tools. Only projects based on reasonably large problems, such as the Sisyphus-2 task, will allow us to improve the design of PROTEGE-II and its ability to produce reusable components.
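A rough sketch of a propose-and-revise control loop of the kind the method implements: propose values for design parameters, test the constraints, and apply fixes associated with violated constraints until a consistent design is found. The parameters, constraint and fix below are invented placeholders, not the Sisyphus-2 elevator knowledge.

```python
# Propose-and-revise: repair violated constraints until the design is consistent.
design = {"motor_hp": 10, "car_weight": 3500}

constraints = [
    # (name, test, fix) -- invented placeholder constraint
    ("enough_power",
     lambda d: d["motor_hp"] * 300 >= d["car_weight"],
     lambda d: d.update(motor_hp=d["motor_hp"] + 5)),
]

def propose_and_revise(design, constraints, max_rounds=20):
    for _ in range(max_rounds):
        violated = [(name, fix) for name, test, fix in constraints if not test(design)]
        if not violated:
            return design                 # every constraint is satisfied
        for _name, fix in violated:       # revise: apply a fix for each violation
            fix(design)
    raise RuntimeError("no consistent design found")

print(propose_and_revise(design, constraints))   # motor_hp is revised upward to 15
```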

Patent
17 Jun 1996
TL;DR: An integrated system and method for providing a flexible expert system development and runtime environment with an integrated natural language processor and set-oriented knowledge base is presented; it includes an input device, a spreading activation module, a reasoning module, a decision module, and a knowledge base.
Abstract: An integrated system and method for providing a flexible expert system development and runtime environment with an integrated natural language processor and set-oriented knowledge base. The system and process include an input device, a spreading activation module, a reasoning module, a decision module, and a knowledge base. The system and method may also include a natural language processing module. The spreading activation module utilizes the knowledge base, which is set-oriented with named relationships between concepts to traverse the knowledge base efficiently. The reasoning module executes related, nested logic statements which manipulate the complex facts in the knowledge base. The decision module selects the value or values from a list which are most relevant at the moment the module is called. The knowledge base represents all data in a nested, set-oriented manner. The system and method, in turn, produce an output in response to the input or command into the system.
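An illustrative sketch (not the patented system) of spreading activation over a concept graph with named relationships: activation starts at input concepts and decays as it fans out across links, producing a ranking of related concepts. The graph, decay factor and threshold are invented.

```python
# Spreading activation over a small concept graph with named relationships.
GRAPH = {
    "printer":      [("part_of", "office"), ("causes", "paper_jam")],
    "paper_jam":    [("fixed_by", "remove_paper")],
    "office":       [],
    "remove_paper": [],
}

def spread(seeds, decay=0.5, threshold=0.05):
    """Propagate activation from seed concepts; each hop multiplies by `decay`."""
    activation = dict(seeds)                 # concept -> activation level
    frontier = list(seeds.items())
    while frontier:
        concept, level = frontier.pop()
        for _relation, neighbor in GRAPH.get(concept, []):
            passed = level * decay
            if passed > threshold and passed > activation.get(neighbor, 0.0):
                activation[neighbor] = passed
                frontier.append((neighbor, passed))
    return activation

print(spread({"printer": 1.0}))
# {'printer': 1.0, 'office': 0.5, 'paper_jam': 0.5, 'remove_paper': 0.25}
```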

Journal ArticleDOI
TL;DR: This paper considers terminological cycles in a very small terminological representation language and finds that the effect of the three types of semantics introduced by B. Nebel can completely be described with the help of finite automata.
Abstract: In most of the implemented terminological knowledge representation systems it is not possible to state recursive concept definitions, so-called terminological cycles. One reason is that it is not clear what kind of semantics to use for such cycles. In addition, the inference algorithms used in such systems may go astray in the presence of terminological cycles. In this paper we consider terminological cycles in a very small terminological representation language. For this language, the effect of the three types of semantics introduced by B. Nebel can completely be described with the help of finite automata. These descriptions provide for a rather intuitive understanding of terminologies with recursive definitions, and they give an insight into the essential features of the respective semantics. In addition, one obtains algorithms and complexity results for the subsumption problem and for related inference tasks. The results of this paper may help to decide what kind of semantics is most appropriate for cyclic definitions, depending on the representation task.

Journal ArticleDOI
TL;DR: The paper presents components of the framework, explicitly identifies interactions between these components, and explains how these interactions are developed into an integrated framework, and presents the rationale for the design decisions made in the framework.
Abstract: This paper presents the design of a software framework for conceptual design. It develops an approach to mapping an evolving symbolic description of design into a geometric description. The distinct elements of the symbol-form mapping are: (a) deriving spatial relationships between objects as a consequence of the functional relationships; (b) instantiating alternative feasible solutions subject to these relationships; and (c) presenting the evolving descriptions of geometry. Computational support for each of these elements is provided within a conceptual design framework. The paper presents components of the framework, explicitly identifies interactions between these components, and explains how these interactions are developed into an integrated framework. It presents the rationale for the design decisions made in the framework. An example is presented to clarify the approach adopted. The applicability of the approach is then discussed.

Proceedings Article
09 Sep 1996
TL;DR: A reference model architecture for intelligent systems is suggested to tie together concepts from all these separate fields into a unified framework that includes both biological and machine embodiments of the components of mind.
Abstract: While the mind remains a mysterious and inaccessible phenomenon, many of the components of mind, such as perception, behavior generation, knowledge representation, value judgment, reason, intention, emotion, memory, imagination, recognition, learning, attention, and intelligence are becoming well defined and amenable to analysis. Progress is rapid in the cognitive and neurosciences as well as in artificial intelligence, control theory, and many other fields related to the engineering of mind. A reference model architecture for intelligent systems is suggested to tie together concepts from all these separate fields into a unified framework that includes both biological and machine embodiments of the components of mind. It is argued that such a reference model architecture will facilitate the development of scientific models of mind.

Book ChapterDOI
13 Aug 1996
TL;DR: The extension of Noos, the knowledge modeling framework designed to integrate learning methods and based on the task/method decomposition principle, is presented and allows communication and cooperation among agents implemented in Noos by means of three basic constructs: alien references, foreign method evaluation, and mobile methods.
Abstract: We are investigating possible modes of cooperation among homogeneous agents with learning capabilities. In this paper we focus on agents that learn and solve problems using Case-based Reasoning (CBR), and we present two modes of cooperation among them: Distributed Case-based Reasoning (DistCBR) and Collective Case-based Reasoning (ColCBR). We illustrate these modes with an application where different CBR agents able to recommend chromatography techniques for protein purification cooperate. The approach taken is to extend Noos, the representation language being used by the CBR agents. Noos is a knowledge modeling framework designed to integrate learning methods and based on the task/method decomposition principle. The extension we present, Plural Noos, allows communication and cooperation among agents implemented in Noos by means of three basic constructs: alien references, foreign method evaluation, and mobile methods.

Journal ArticleDOI
TL;DR: A modularization technique for active rules called stratification is introduced; it presents a theory of stratification and indicates how stratification can be practically applied and is illustrated by several examples.
Abstract: Active database systems can be used to establish and enforce data management policies. A large amount of the semantics that normally needs to be coded in application programs can be abstracted and assigned to active rules. This trend is sometimes called “knowledge independence”; a nice consequence of achieving full knowledge independence is that data management policies can then effectively evolve just by modifying rules instead of application programs. Active rules, however, may be quite complex to understand and manage: rules react to arbitrary event sequences, they trigger each other, and sometimes the outcome of rule processing may depend on the order in which events occur or rules are scheduled. Although reasoning on a large collection of rules is very difficult, the task becomes more manageable when the rules are few. Therefore, we are convinced that modularization, similar to what happens in any software development process, is the key principle for designing active rules; however, this important notion has not been addressed so far. This article introduces a modularization technique for active rules called stratification; it presents a theory of stratification and indicates how stratification can be practically applied. The emphasis of this article is on providing a solution to a very concrete and practical problem; therefore, our approach is illustrated by several examples.
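A much simplified sketch of the stratification idea: rules are grouped into strata, each stratum is run to quiescence before the next one starts, so reasoning about rule interactions can be confined to one stratum at a time. The toy database, rules and strata below are invented, not the article's examples.

```python
# Active rules grouped into strata; each stratum runs to quiescence in order.
db = {"salary": 900, "min_salary": 1000, "budget": 10_000}

strata = [
    [   # stratum 1: enforce the salary policy
        (lambda db: db["salary"] < db["min_salary"],
         lambda db: db.update(salary=db["min_salary"])),
    ],
    [   # stratum 2: keep derived data (budget) consistent with salaries
        (lambda db: db["budget"] != 10 * db["salary"],
         lambda db: db.update(budget=10 * db["salary"])),
    ],
]

def run(strata, db, max_iter=100):
    for rules in strata:                      # process strata in order
        for _ in range(max_iter):             # run one stratum to quiescence
            fired = False
            for condition, action in rules:
                if condition(db):
                    action(db)
                    fired = True
            if not fired:
                break
    return db

print(run(strata, db))   # {'salary': 1000, 'min_salary': 1000, 'budget': 10000}
```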

Book
27 Jun 1996
TL;DR: This book discusses machine learning, natural language understanding, and more.
Abstract: KNOWLEDGE IN AI Overview Introduction Representing Knowledge Metrics for Assessing Knowledge Representation Schemes Logic Representations Procedural Representation Network Representations Structured Representations General Knowledge The Frame Problem Knowledge Elicitation Summary Exercises Recommended Further Reading
REASONING Overview What is Reasoning? Forward and Backward Reasoning Reasoning with Uncertainty Summary Exercises Recommended Further Reading
SEARCH Introduction Exhaustive Search and Simple Pruning Heuristic Search Knowledge-Rich Search Summary Exercises Recommended Further Reading
MACHINE LEARNING Overview Why Do We Want Machine Learning? How Machines Learn Deductive Learning Inductive Learning Explanation-Based Learning Example: Query-by-Browsing Summary Recommended Further Reading
GAME PLAYING Overview Introduction Characteristics of Game Playing Standard Games Non-Zero-Sum Games and Simultaneous Play The Adversary is Life! Probability Summary Exercises Recommended Further Reading
EXPERT SYSTEMS Overview What Are Expert Systems? Uses of Expert Systems Architecture of an Expert System Examples of Four Expert Systems Building an Expert System Limitations of Expert Systems Summary Exercises Recommended Further Reading
NATURAL LANGUAGE UNDERSTANDING Overview What is Natural Language Understanding? Why Do We Need Natural Language Understanding? Why Is Natural Language Understanding Difficult? An Early Attempt at Natural Language Understanding: SHRDLU How Does Natural Language Understanding Work? Syntactic Analysis Semantic Analysis Pragmatic Analysis Summary Exercises Recommended Further Reading Solution to SHRDLU Problem
COMPUTER VISION Overview Introduction Digitization and Signal Processing Edge Detection Region Detection Reconstructing Objects Identifying Objects Multiple Images Summary Exercises Recommended Further Reading
PLANNING AND ROBOTICS Overview Introduction Global Planning Local Planning Limbs, Legs, and Eyes Practical Robotics Summary Exercises Recommended Further Reading
AGENTS Overview Software Agents Co-operating Agents and Distributed AI Summary Exercises Recommended Further Reading
MODELS OF THE MIND Overview Introduction What is the Human Mind? Production System Models Connectionist Models of Cognition Summary Exercises Recommended Further Reading Notes
EPILOGUE: PHILOSOPHICAL AND SOCIOLOGICAL ISSUES Overview Intelligent Machines or Engineering Tools? What Is Intelligence? Computational Argument vs. Searle's Chinese Room Who Is Responsible? Morals and Emotions Social Implications Summary Recommended Further Reading