
Showing papers in "Knowledge Engineering Review in 1995"


Journal ArticleDOI
TL;DR: Agent theory is concerned with the question of what an agent is, and with the use of mathematical formalisms for representing and reasoning about the properties of agents; agent architectures can be thought of as software engineering models of agents; and agent languages are software systems for programming and experimenting with agents.
Abstract: The concept of an agent has become important in both Artificial Intelligence (AI) and mainstream computer science. Our aim in this paper is to point the reader at what we perceive to be the most important theoretical and practical issues associated with the design and construction of intelligent agents. For convenience, we divide these issues into three areas (though as the reader will see, the divisions are at times somewhat arbitrary). Agent theory is concerned with the question of what an agent is, and the use of mathematical formalisms for representing and reasoning about the properties of agents. Agent architectures can be thought of as software engineering models of agents; researchers in this area are primarily concerned with the problem of designing software or hardware systems that will satisfy the properties specified by agent theorists. Finally, agent languages are software systems for programming and experimenting with agents; these languages may embody principles proposed by theorists. The paper is not intended to serve as a tutorial introduction to all the issues mentioned; we hope instead simply to identify the most important issues, and point to work that elaborates on them. The article includes a short review of current and potential applications of agent technology.

6,714 citations


Journal ArticleDOI
TL;DR: Research on how people use and understand linguistic expressions of uncertainty is reviewed, with a view toward the needs of researchers and others interested in artificial intelligence systems.
Abstract: This article reviews research on how people use and understand linguistic expressions of uncertainty, with a view toward the needs of researchers and others interested in artificial intelligence systems. We discuss and present empirical results within an inductively developed theoretical framework consisting of two background assumptions and six principles describing the underlying cognitive processes.

157 citations


Journal ArticleDOI
TL;DR: It seems quite natural to compare formal languages for specifying KBS with formal languages which were developed by the software community for specifying software systems, the subject of this paper.
Abstract: During the last few years, a number of formal specification languages for knowledge-based systems (KBS) have been developed. Characteristics of such systems are a complex knowledge base and an inference engine which uses this knowledge to solve a given problem. Languages for KBS have to cover both these aspects. They have to provide a means to specify a complex and large amount of knowledge and they have to provide a means to specify the dynamic reasoning behaviour of a KBS. Nevertheless, KBS are just a specific type of software system. Therefore, it seems quite natural to compare formal languages for specifying KBS with formal languages which were developed by the software community for specifying software systems. That is the subject of this paper.

55 citations


Journal ArticleDOI
TL;DR: This paper presents a general setting for the other contributions in this issue of the Journal, which each deal with a specific issue in more detail.
Abstract: This paper presents a general discussion of the role of formal methods in Knowledge Engineering. We give an historical account of the development of the field of Knowledge Engineering towards the use of formal methods. Subsequently, we discuss the pros and cons of formal methods. We do this by summarising the proclaimed advantages, and by arguing against some of the commonly heard objections against formal methods. We briefly summarise the current state of the art and discuss the most important directions that future research in this field should take. This paper presents a general setting for the other contributions in this issue of the Journal, which each deal with a specific issue in more detail.

50 citations




Journal ArticleDOI
TL;DR: The various approaches proposed in the literature are reviewed, and related to types of knowledge and problem solving employed in the medical field; the appropriateness of logic for building medical knowledge-based expert systems is further motivated.
Abstract: The safety-critical nature of the application of knowledge-based systems to the field of medicine demands the adoption of reliable engineering principles with a solid foundation for their construction. Logical languages, with their inherent, precise notions of consistency, soundness and completeness, offer such a foundation, thus promoting rigorous engineering of medical knowledge. Moreover, logic techniques provide a powerful means for getting insight into the structure and meaning of medical knowledge used in medical problem solving. Unfortunately, logic is currently only used on a small scale for building practical medical knowledge-based systems. In this paper, the various approaches proposed in the literature are reviewed, and related to different types of knowledge and problem solving employed in the medical field. The appropriateness of logic for building medical knowledge-based expert systems is further motivated.

25 citations


Journal ArticleDOI
TL;DR: This paper examines how formal specification techniques can support the verification and validation (V&V) of knowledge-based systems and notes that there are concerns in using formal specification technique for V&V, not least being the effort involved in creating the specifications.
Abstract: This paper examines how formal specification techniques can support the verification and validation (V&V) of knowledge-based systems. Formal specification techniques provide levels of description which support both verification and validation, and V&V techniques feed back to assist the development of the specifications. Developing a formal specification for a system requires the prior construction of a conceptual model for the intended system. Many elements of this conceptual model can be effectively used to support V&V. Using these elements, the V&V process becomes deeper and more elaborate, and it produces results of a better quality compared with the V&V activities which can be performed on systems developed without conceptual models. However, we note that there are concerns in using formal specification techniques for V&V, not least being the effort involved in creating the specifications.

20 citations


Journal ArticleDOI
TL;DR: The psychological evidence for the claim that human judgement and reasoning are vulnerable to cognitive biases is reviewed in the context of the debate concerning human judgemental competence under uncertainty.
Abstract: The claim is frequently made that human judgement and reasoning are vulnerable to cognitive biases. Such biases are assumed to be inherent in that they are attributed to the nature of the mental processes that produce judgement. In this paper, we review the psychological evidence for this claim in the context of the debate concerning human judgemental competence under uncertainty. We consider recent counter-arguments which suggest that the evidence for cognitive biases may be dependent on observations of performance on inappropriate tasks and by comparisons with inappropriate normative standards. We also consider the practical implications for the design of decision support systems.

20 citations


Journal ArticleDOI
TL;DR: Research in Knowledge Representation and in Database Design has produced languages for describing structured objects, and the rise of object-centred formalisms in the last decade has significantly influenced the convergence of these languages.
Abstract: Structured objects are items with defined properties that are to be represented in a computer system. Research in Knowledge Representation (KR) and in Database Design (DB) has produced languages for describing structured objects. Although different in the particular means for defining properties, both areas share the goal of representing a part of the world in a structured way. Moreover, the rise of object-centred formalisms in the last decade has significantly influenced the convergence of languages.

17 citations


Journal ArticleDOI
TL;DR: This paper is a review of research into cognitive expertise organized in terms of a simple model of the knowledge and cognitive processes that might be expected to be enhanced in experts relative to non-experts.
Abstract: This paper is a review of research into cognitive expertise. The review is organized in terms of a simple model of the knowledge and cognitive processes that might be expected to be enhanced in experts relative to non-experts. This focus on cognitive competence underlying expert performance permits the identification of skills and knowledge that we might wish to capture and model in expert systems. The competence perspective also indicates areas of weakness in human experts. In these areas, we might wish to support or replace the expert with, for example, a normative system rather than attempting to model his or her knowledge.

Journal ArticleDOI
TL;DR: This paper provides a summary of application-oriented work using qualitative reasoning, giving a picture of the application areas in which the techniques have been applied and of who is working in each area.
Abstract: The techniques of qualitative reasoning are now becoming sufficiently mature to be applied to real world problems. In order to better understand which techniques are being used successfully for real world applications, and which application areas can be suitably addressed using qualitative reasoning techniques, it is helpful to have a summary of what application oriented work has been done to date. This helps to provide a picture of the application areas in which the techniques are being applied, and who is working in each application domain. In this paper, we summarize over 40 relevant projects.
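One core idea behind the qualitative reasoning techniques the survey covers can be illustrated with a small sketch: computing over qualitative values (signs) rather than numbers. This is a generic illustration, not code from any surveyed project; all names are invented.

```python
# Qualitative arithmetic over signs {+, 0, -, ?}: we know only whether
# each quantity is positive, zero or negative, never its magnitude.
PLUS, ZERO, MINUS, UNKNOWN = "+", "0", "-", "?"

def q_add(a, b):
    """Sign of x + y given only the signs of x and y."""
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    if a == b:
        return a
    return UNKNOWN  # opposite (or unknown) signs: magnitudes would be needed

def q_mul(a, b):
    """Sign of x * y given only the signs of x and y."""
    if ZERO in (a, b):
        return ZERO
    if UNKNOWN in (a, b):
        return UNKNOWN
    return PLUS if a == b else MINUS

# e.g. a tank with a positive inflow and a negative outflow: the net
# effect on the level is qualitatively ambiguous.
print(q_add(PLUS, MINUS))   # ?
print(q_mul(MINUS, MINUS))  # +
```

The ambiguity returned by `q_add` is exactly what drives the branching behaviour of qualitative simulators: each ambiguous sign spawns alternative possible futures.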

Journal ArticleDOI
TL;DR: The focus of the meeting was on validation techniques for KBS, where validation is defined as the process of determining if a KBS meets its users' requirements; implicitly, validation includes verification, which is the process of determining if a KBS has been constructed to comply with certain formally specified properties, such as consistency and irredundancy.
Abstract: Assuring the reliability of knowledge-based systems has become an important issue in the development of the knowledge engineering discipline. There has been a workshop devoted to these topics at most of the major AI conferences (IJCAI, AAAI, and ECAI) for the last five years, and the 1994 European Conference on Artificial Intelligence (ECAI-94) in Amsterdam was no exception. The focus of the meeting was on validation techniques for KBS, where validation is defined as the process of determining if a KBS meets its users' requirements; implicitly, validation includes verification, which is the process of determining if a KBS has been constructed to comply with certain formallyspecified properties, such as consistency and irredundancy.
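As a sketch of the kind of formally specified properties mentioned (consistency and irredundancy), the toy checker below flags duplicate rules and directly contradictory conclusions in a small rule base. The rule representation and function names are invented for this illustration, not taken from the workshop.

```python
# A rule is a (premises, conclusion) pair; the toy rule base below
# contains one redundant duplicate and one pair of contradictory rules.
rules = [
    ({"fever", "rash"}, "measles"),
    ({"fever", "rash"}, "measles"),   # redundant duplicate
    ({"fever"}, "infection"),
    ({"fever"}, "not infection"),     # contradicts the rule above
]

def find_redundant(rules):
    """Return rules that repeat an earlier (premises, conclusion) pair."""
    seen, redundant = set(), []
    for premises, conclusion in rules:
        key = (frozenset(premises), conclusion)
        if key in seen:
            redundant.append(key)
        seen.add(key)
    return redundant

def find_contradictions(rules):
    """Return (premises, conclusion) pairs where the same premises
    also derive the negated conclusion ("not ...")."""
    by_premises = {}
    for premises, conclusion in rules:
        by_premises.setdefault(frozenset(premises), set()).add(conclusion)
    return [
        (p, c) for p, cs in by_premises.items()
        for c in cs if "not " + c in cs
    ]

print(find_redundant(rules))
print(find_contradictions(rules))
```

Real verification tools check these properties against a formal semantics rather than string-matching negations, but the properties themselves are the same.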

Journal ArticleDOI
TL;DR: The role case-based reasoning has had in the development of computer-aided instruction systems, and some problems to be addressed when case-based planning is applied to lesson planning within a tutoring system, are considered.
Abstract: In this paper we consider the role case-based reasoning has had in development of computer-aided instruction systems. We survey several case-based teaching systems, each of which is representative of a basic pedagogical principle that motivated its development. Firstly, 15 pedagogical principles are presented that were identified from the analysis of case-based teaching systems. We present some background to the principles, and indicate which systems they are incorporated in. Next, the teaching systems themselves are described, with emphasis on how case-based reasoning has been applied. Finally, we discuss some problems to be addressed when case-based planning is applied to lesson planning within a tutoring system.

Journal ArticleDOI
TL;DR: Though subsequent conferences have seen a greater mix of papers, IPMU remains largely non-probabilistic, with the result that the bulk of the participants come from Europe rather than the United States (despite the large amount of work on uncertainty, and especially probability, that is carried out in the US), making IPMU something of a counterpoint to UAI.
Abstract: The First International Conference on Information Processing and the Management of Uncertainty (IPMU) was held in 1986 at a time of great debate about the necessity of modelling uncertainty in intelligent systems (which at that time largely meant rule-based expert systems) and the best way of doing so. Whereas the founders of the Conference on Uncertainty in Artificial Intelligence (UAI) in the United States set out with the aim of promoting the use of probability, the organisers of IPMU chose a diametrically opposed course. Though there were a few papers on probability at IPMU '86, the main focus was on alternative methods, primarily those based upon fuzzy sets. Though subsequent conferences have seen a greater mix of papers, IPMU remains largely non-probabilistic, with the result that the bulk of the participants come from Europe rather than the United States (despite the large amount of work on uncertainty, and especially probability, that is carried out in the US), making IPMU something of a counterpoint to UAI. The difference in participation is exacerbated by the location—whilst the UAI remains in North America, IPMU alternates between Paris and other cities in Europe, including Urbino in 1988 and Palma in 1992.

Journal ArticleDOI
TL;DR: This special issue of the Review presents papers by experimental psychologists who have worked extensively on expertise, decision making and reasoning under uncertainty, all topics that overlap strongly with the interests of expert systems and AI researchers and developers.
Abstract: The knowledge engineering community has been working on the design of schemes for knowledge representation and reasoning for more than two decades. Much of this work, particularly work on the development of expert systems, explicitly or implicitly assumes that artificial knowledge-based systems emulate to some degree the natural knowledge representation and reasoning methods of human problem solvers and decision makers. Experimental psychologists and other cognitive scientists have been studying the properties of natural cognition for even longer, indeed for much of this century. Their findings indicate not only that some of our engineering assumptions about the nature of human expertise may be a little simplistic, but also that one needs to be very careful about those aspects of human knowledge and expertise one should attempt to emulate. This special issue of the Review presents papers by experimental psychologists who have worked extensively on expertise, decision making and reasoning under uncertainty, all topics that overlap strongly with the interests of expert systems and AI researchers and developers. Their reviews of work on these topics are instructive for those of us who are interested in the natural counterparts of the artificial mechanisms and techniques we use. The first paper, "Cognitive expertise research and knowledge engineering" by Fergus Bolger, provides an overview of psychological studies of expertise, drawing attention to weaknesses in our criteria for defining an "expert". He summarises our current understanding of the cognitive processes that underpin expertise and identifies some implications for knowledge engineers. In "Bias in human judgement under uncertainty?", Peter Ayton and Eva Pascoe focus on an aspect of expertise which is a major issue for expert systems designers: uncertain inference methods. They discuss important doubts about the competence of human judgement when compared with the behaviour prescribed by normative mathematical theory, and many subtleties of human understanding which are not well reflected in current knowledge technologies. Finally, in a related paper reviewing "Human linguistic probability processing", Tom Wallsten and David Budescu look at issues concerning the intuitive representation of uncertainty and, in particular, how we use natural language concepts to reason with and communicate uncertainty about our beliefs and inferences. They close with some principles that summarise the cognitive processes that underlie human uncertain reasoning and decision making. It is interesting to consider these principles in the context of the probabilities, certainty factors and non-monotonic logics which are the foci of AI research. Although one frequently comes across remarks in the AI and expert systems literature to the effect that "expert systems emulate human expertise", such claims are rarely examined in depth. The findings of psychologists described in these papers indicate both that the validity of such claims can be seriously questioned and that the desirability of a design strategy based on emulation is open to considerable debate.

Journal ArticleDOI
TL;DR: A surprising consensus about architectures is beginning to emerge within the small community of researchers applying artificial intelligence to robotics; the consensus is that a multi-layer, hierarchical architecture is necessary.
Abstract: James Albus states that “an architecture is a description of how a system is constructed from basic components and how those components fit together to form the whole” (Albus, 1995). A software architecture for physical agents reflects the organising principles that its designers have learned from many prior experiences in building such agents. Architectures that have been proposed for physical agents have differed greatly—from subsumption (Brooks, 1986) to Soar (Laird et al., 1987). However, a surprising consensus about architectures is beginning to emerge within the small community of researchers applying artificial intelligence to robotics. The consensus is that a multi-layer, hierarchical architecture is necessary. In particular, the community is moving towards a three-layered architecture. The lowest layer is a reactive control system inspired by subsumption (Brooks, 1986). The top layer is a traditional symbolic planning and modelling system. The middle layer is the key; it serves as a “differential” between the short-range reaction and long-range reasoning.
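The three-layer organisation described above can be sketched minimally: a reactive bottom layer, a deliberative planner on top, and a middle layer mediating between them. All class names, behaviours and the toy sensor threshold here are illustrative assumptions, not details of any cited architecture.

```python
class ReactiveLayer:
    """Bottom layer: fast sensor-to-action mappings (subsumption-style)."""
    def react(self, sensor_reading):
        # Reflexively back off when an obstacle reads too close.
        return "avoid" if sensor_reading < 0.5 else "forward"

class DeliberativeLayer:
    """Top layer: symbolic planning over a world model (a stub here)."""
    def plan(self, goal):
        if goal == "enter_room":
            return ["navigate_to_door", "open_door", "enter_room"]
        return []

class SequencingLayer:
    """Middle layer: the 'differential' between reaction and reasoning."""
    def __init__(self, reactive, deliberative):
        self.reactive = reactive
        self.deliberative = deliberative
        self.queue = []  # remaining tasks from the current plan

    def pursue(self, goal, sensor_reading):
        if not self.queue:
            self.queue = self.deliberative.plan(goal)
        # The reactive layer may override the plan at any step.
        action = self.reactive.react(sensor_reading)
        if action == "forward" and self.queue:
            return self.queue.pop(0)  # advance the long-range plan
        return action                 # safety behaviour wins

agent = SequencingLayer(ReactiveLayer(), DeliberativeLayer())
print(agent.pursue("enter_room", sensor_reading=0.9))  # first planned task
print(agent.pursue("enter_room", sensor_reading=0.2))  # reactive override
```

The point of the middle layer is visible in `pursue`: plan steps advance only when the reactive layer reports the world is safe, so short-range reaction and long-range reasoning never fight over the actuators.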

Journal ArticleDOI
TL;DR: An evaluation of two commercially available workbenches for supporting the KADS approach, KADS Tool from ILOG and Open KADS Tool from Bull, is reported on.
Abstract: The KADS methodology and its successor, CommonKADS, have gained a reputation for being useful approaches to building knowledge-based systems in a manner which is both systematic and well documented. However, these methods require considerable effort to use them completely. It has been suggested that automated support for KADS or CommonKADS users, in the form of “knowledge engineering workbenches”, could be very useful. These tools would provide computerised assistance to knowledge engineers in organising and representing knowledge, in a similar fashion to the support which CASE tools provide for software engineers. To provide support for KADS or CommonKADS, the workbenches should provide specific support for the modelling techniques recommended by these methods, which are very detailed in the representation and analysis stages of knowledge engineering. A good knowledge engineering workbench should also be easy to use, should be robust and reliable, and should generate output in a presentable format. This paper reports on an evaluation of two commercially available workbenches for supporting the KADS approach: KADS Tool from ILOG and Open KADS Tool from Bull. This evaluation was carried out by AIAI as part of the CATALYST project, funded by the European Community's ESSI programme, which aimed to introduce CommonKADS to two technology-oriented companies. Information is also presented on two other workbenches: the CommonKADS workbench (which will soon become commercially available) and the VITAL workbench. The results show various strengths and weaknesses in each tool.

Journal ArticleDOI
TL;DR: A new generation of very powerful Reduced Instruction Set Computers which, while not exactly matching Turing's Spartan hardware design, are conceptually much nearer to it than the vast majority of the computer architectures that have been designed over the last three decades.
Abstract: The first notable feature of Turing's report is his insistence that the computer should have a hardware system as simple as possible, Turing's philosophy being that the main functionality of the ACE computer would be achieved by programming rather than complex electronic circuitry. The trend in computer architectures since the publication of this report has been towards more and more complex hardware, with the inevitable result that computers have become increasingly baroque and inefficient. The reaction to this has been a new generation of very powerful Reduced Instruction Set Computers which, while not exactly matching Turing's Spartan hardware design, are conceptually much nearer to it than the vast majority of the computer architectures that have been designed over the last three decades.

Journal ArticleDOI
TL;DR: An assessment on how effective logic programming has been and could be in the process of software development and the quality of its products is given.
Abstract: The Workshop on Applications of Logic Programming in Software Engineering was held at S. Margherita Ligure, Italy, on 18 June 1994. This workshop was organized in conjunction with the International Conference on Logic Programming. Over the past decade, several sporadic research efforts have addressed the use of logic programming to improve the process of software development and the quality of its products. The workshop, we believed, would give an assessment of how effective logic programming has been, and could be, in this role.

Journal ArticleDOI
TL;DR: Its up-to-date handling of the complete process of creating real-world AI systems is perhaps most relevant to those new to applied AI from related fields, although it assumes some knowledge of computational terminology.
Abstract: Its up-to-date handling of the complete process of creating real-world AI systems is perhaps most relevant to those new to applied AI from related fields, although it assumes some knowledge of computational terminology. The book is suitable for use as a reference guide for all involved in any aspect of AI, and its wealth of relevant bibliographic references will prove useful. However, its plausibility as a textbook for students is questionable, especially due to its high price. One consolation for those willing to pay this substantial cost is that its outlook on AI will last well into the next century as it is printed on acid-free paper!

Journal ArticleDOI
TL;DR: Although the author provides a very fluid introduction, the rest of the material is very theoretical and difficult to follow, due to the lack of enough motivating examples.
Abstract: The book contains definitions and theorems on almost every page, and many of them are generalizations of results related to Horn clause logic programming. The reinterpretation of some well known results of mathematical logic in this domain is quite interesting, for example, the Church-Rosser property of SOS and the Halting Problem. Although the author provides a very fluid introduction, the rest of the material is very theoretical and difficult to follow, due to the lack of enough motivating examples.


Journal ArticleDOI
TL;DR: The purpose of the workshop was to bring together people who are strongly motivated and experienced about the role and purpose of logic programming, and Prolog in particular, both in secondary school and at university level.
Abstract: Logic programming originated from the discovery that a subset of predicate logic could be given a procedural interpretation, which was first embodied in the programming language Prolog. The unique features of logic programming make it appealing for numerous applications in artificial intelligence, computer-aided design and verification, databases and operations research, as well as for exploring parallel and concurrent programming. In the beginning, educationalists were attracted by logic programming, and Prolog in particular, for a number of reasons. Some hoped that the roots of the language in logic would make it a suitable medium to develop logic-deductive abilities; some believed that Prolog would offer many problem solving opportunities; some saw Prolog as suitable for the building by learners of database models of various school subjects. Also, logic programming was considered as an introductory programming paradigm both in secondary and university courses in computer science, as well as in other disciplines (such as cognitive science, artificial intelligence, and so on). Interest in the educational applications of logic programming is confirmed by the conferences and workshops organised in recent years in this field. Accordingly, we set up a workshop on "Logic Programming and Education" as one of the events of ICLP 94, the eleventh international conference on logic programming, one of the two major annual conferences which report recent research results in logic programming. The purpose of the workshop was to bring together people who are strongly motivated and experienced about the role and purpose of logic programming, and Prolog in particular, both in secondary school and at university level. The workshop was held in Santa Margherita Ligure, Italy in June 1994. It consisted of eleven refereed papers and one invited lecture.
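The procedural interpretation mentioned above reads a Horn clause "head :- body" as a procedure: to solve the head, solve each goal in the body. Below is a propositional toy version of that backward-chaining reading (real Prolog adds unification over terms and backtracking over bindings); the program and predicate names are invented for this sketch.

```python
# A program maps each head to a list of clause bodies; a fact is a
# clause with an empty body. This mirrors "head :- g1, g2." syntax.
program = {
    "grandparent":       [["parent", "parent_of_parent"]],
    "parent":            [[]],   # a fact: empty body always succeeds
    "parent_of_parent":  [[]],   # another fact
}

def solve(goal, program):
    """Procedural reading: to solve `goal`, pick a clause whose head
    matches it and recursively solve every subgoal in its body."""
    for body in program.get(goal, []):
        if all(solve(subgoal, program) for subgoal in body):
            return True
    return False  # no clause for this goal, or every body failed

print(solve("grandparent", program))  # True
print(solve("uncle", program))        # False: no matching clause
```

Note how `all(...)` over an empty body succeeds immediately, which is exactly why facts terminate the recursion.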

Journal ArticleDOI
TL;DR: A more theoretical approach to knowledge acquisition has recently started investigating how ML techniques can be taken into account in model-based knowledge acquisition methodologies such as KADS, or the generic tasks of Chandrasekaran.
Abstract: “Integration of Machine Learning and Knowledge Acquisition” may be a surprising title for an ECAI-94 workshop, since most machine learning (ML) systems are intended for knowledge acquisition (KA). So what seems problematic about integrating ML and KA? The answer lies in the difference between the approaches developed by what is referred to as ML and KA research. Apart from some major exceptions, such as learning apprentice tools (Mitchell et al., 1989), or libraries like the Machine Learning Toolbox (MLT Consortium, 1993), most ML algorithms have been described without any characterization in terms of real application needs, in terms of what they could be effectively useful for. Although ML methods have been applied to “real world” problems, few general and reusable conclusions have been drawn from these knowledge acquisition experiments. As ML techniques become more and more sophisticated and able to produce various forms of knowledge, the number of possible applications grows. ML methods then tend to be more precisely specified in terms of the domain knowledge initially required, the control knowledge to be set and the nature of the system output (MLT Consortium, 1993; Kodratoff et al., 1994).


Journal ArticleDOI
TL;DR: No comparison of competing proposals is available, let alone an empirical determination of the benefits of using an ontology, nor is it clear how ontologies can best be evaluated.
Abstract: A number of groups developing knowledge-based systems have found (or at least posited) that the design and representation of a limitative set of concepts and relations, a so-called ontology, can contribute to sharing and reusing knowledge bases. However, very few descriptions of implemented ontologies have appeared in the literature. No comparison of competing proposals is available, let alone an empirical determination of the benefits of using an ontology. There is no accepted method for designing and building such ontologies, nor is it clear how ontologies can best be evaluated.



Journal ArticleDOI
Simon Parsons
TL;DR: The Engineering of Knowledge-Based Systems started life concentrating on rule-based systems and was expanded rather late in the day by the addition of a couple of chapters which gloss over additional topics; a shame because, with a little more effort, the book could have been very good indeed.
Abstract: from the index, where a third of the nine chapters on theory are largely concerned with rules, while model-based and qualitative reasoning, two of the mainstays of work on "deep knowledge", are relegated to subsections of the chapter on "advanced techniques". The problem is also clear from the fact that alternative methods of knowledge representation, such as frames, are treated rather briefly, and that the chapter on verification and validation only gives examples concerning rules. Thus it looks a lot as though The Engineering of Knowledge-Based Systems started life concentrating on rule-based systems and was expanded rather late in the day by the addition of a couple of chapters which gloss over additional topics. This is a shame because, with a little more effort to flesh out the brief descriptions of areas such as objects and qualitative reasoning, the book could have been very good indeed.