
Showing papers in "Knowledge Engineering Review" in 1996


Journal ArticleDOI
TL;DR: This paper outlines a methodology for developing and evaluating ontologies, first discussing informal techniques concerning such issues as scoping, handling ambiguity, reaching agreement and producing definitions, and then considering a more formal approach.
Abstract: This paper is intended to serve as a comprehensive introduction to the emerging field concerned with the design and use of ontologies. We observe that disparate backgrounds, languages, tools and techniques are a major barrier to effective communication among people, organisations and/or software systems. We show how the development and implementation of an explicit account of a shared understanding (i.e. an "ontology") in a given subject area can improve such communication, which in turn can give rise to greater reuse and sharing, inter-operability, and more reliable software. After motivating their need, we clarify just what ontologies are and what purpose they serve. We outline a methodology for developing and evaluating ontologies, first discussing informal techniques, concerning such issues as scoping, handling ambiguity, reaching agreement and producing definitions. We then consider the benefits of, and describe, a more formal approach. We re-visit the scoping phase, and discuss the role of formal languages and techniques in the specification, implementation and evaluation of ontologies. Finally, we review the state of the art and practice in this emerging field, considering various case studies, software tools for ontology development, key research issues and future prospects.
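As a loose illustration of the kind of explicit, shared account of a domain the paper argues for, consider the sketch below. It is entirely my own: the Concept class, the add_concept helper and the example terms are hypothetical, not taken from the paper. It records agreed definitions and is-a relations as data that can be checked and shared.

# Minimal sketch, assuming an ontology can be approximated by named
# concepts with agreed definitions and is-a links (all names invented).
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    definition: str                               # agreed natural-language definition
    parents: list = field(default_factory=list)   # is-a links to other concepts

ontology = {}

def add_concept(name, definition, parents=()):
    for p in parents:                 # reject dangling is-a references
        if p not in ontology:
            raise ValueError("unknown parent concept: " + p)
    ontology[name] = Concept(name, definition, list(parents))

add_concept("Resource", "Anything an organisation can allocate.")
add_concept("Machine", "A physical resource that performs work.", ["Resource"])
print(ontology["Machine"])

Even this toy version shows the paper's point: once definitions and relations are explicit data rather than tacit assumptions, tools can check them and different parties can share them.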

3,568 citations


Journal ArticleDOI
TL;DR: This overview paper presents a typology of agents, places agents in context, defines them and goes on, inter alia, to overview critically the rationales, hypotheses, goals, challenges and state-of-the-art demonstrators of the various agent types in this typology.
Abstract: Agent software is a rapidly developing area of research. However, the overuse of the word ‘agent’ has tended to mask the fact that, in reality, there is a truly heterogeneous body of research being carried out under this banner. This overview paper presents a typology of agents. Next, it places agents in context, defines them and then goes on, inter alia, to overview critically the rationales, hypotheses, goals, challenges and state-of-the-art demonstrators of the various agent types in our typology. Hence, it attempts to make explicit much of what is usually implicit in the agents literature. It also proceeds to overview some other general issues which pertain to all the types of agents in the typology. This paper largely reviews software agents, and it also contains some strong opinions that are not necessarily widely accepted by the agent community.

1,757 citations


Journal ArticleDOI
TL;DR: A bibliography on possibilistic reasoning, belief functions and default reasoning, including Maung's "Two characterizations of a minimum-information principle in possibilistic reasoning" (Int. J. of Approximate Reasoning 12, 133-156).
Abstract: Benferhat, S, Dubois, D and Prade, H, 1992. "Representing default rules in possibilistic logic" In: Proc. of the 3rd Inter. Conf. on Principles of Knowledge Representation and Reasoning (KR'92), 673-684, Cambridge, MA, October 26-29.
De Finetti, B, 1936. "La logique de la probabilité" Actes du Congrès Inter. de Philosophie Scientifique, Paris (Hermann et Cie Editions, 1936, IV1-IV9).
Driankov, D, Hellendoorn, H and Reinfrank, M, 1995. An Introduction to Fuzzy Control, Springer-Verlag.
Dubois, D and Prade, H, 1988. "An introduction to possibilistic and fuzzy logics" In: Non-Standard Logics for Automated Reasoning (P Smets, A Mamdani, D Dubois and H Prade, editors), 287-315, Academic Press.
Dubois, D and Prade, H, 1994. "Can we enforce full compositionality in uncertainty calculi?" In: Proc. 12th US National Conf. on Artificial Intelligence (AAAI'94), 149-154, Seattle, WA.
Elkan, C, 1994. "The paradoxical success of fuzzy logic" IEEE Expert, August, 3-8.
Lehmann, D and Magidor, M, 1992. "What does a conditional knowledge base entail?" Artificial Intelligence 55 (1) 1-60.
Maung, I, 1995. "Two characterizations of a minimum-information principle in possibilistic reasoning" Int. J. of Approximate Reasoning 12 133-156.
Pearl, J, 1990. "System Z: A natural ordering of defaults with tractable applications to default reasoning" Proc. of the 2nd Conf. on Theoretical Aspects of Reasoning about Knowledge (TARK'90), 121-135, San Francisco, CA, Morgan Kaufmann.
Shoham, Y, 1988. Reasoning about Change, MIT Press.
Smets, P, 1988. "Belief functions" In: Non-Standard Logics for Automated Reasoning (P Smets, A Mamdani, D Dubois and H Prade, editors), 253-286, Academic Press.
Smets, P, 1990a. "The combination of evidence in the transferable belief model" IEEE Trans. on Pattern Anal. Mach. Intell. 12 447-458.
Smets, P, 1990b. "Constructing the pignistic probability function in a context of uncertainty" Uncertainty in Artificial Intelligence 5 (M Henrion et al., editors), 29-40, North-Holland.
Smets, P, 1993. "Quantifying beliefs by belief functions: An axiomatic justification" In: Proc. of the 13th Inter. Joint Conf. on Artificial Intelligence (IJCAI'93), 598-603, Chambéry, France, August 28-September 3.
Smets, P and Kennes, R, 1994. "The transferable belief model" Artificial Intelligence 66 191-234.

301 citations


Journal ArticleDOI
TL;DR: Entry for the book Soft Computing: Fuzzy Logic, Neural Networks and Distributed Artificial Intelligence (no abstract available).

180 citations


Journal ArticleDOI
TL;DR: It appears that the majority of approaches focus on automating model construction entirely, and assessing and debugging a model in cooperation with a modeller is identified as an important topic for future research.
Abstract: In qualitative reasoning research, much effort has been spent on developing representation and reasoning formalisms. Only recently, the process of constructing models in terms of these formalisms has been recognised as an important research topic of its own. Approaches addressing this topic are examined in this review. For this purpose a general model of the task of constructing qualitative models is developed that serves as a frame of reference in considering these approaches. Two categories of approaches are identified: model composition and model induction approaches. The former compose a model from predefined partial models and the latter infer a model from behavioural data. Similarities and differences between the approaches are discussed using the general task model as a reference. It appears that the majority of approaches focus on automating model construction entirely. Assessing and debugging a model in cooperation with a modeller is identified as an important topic for future research.
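To make the two categories concrete, here is a deliberately tiny sketch. It is my own caricature, not any formalism from the review, and every name in it is invented: it contrasts composing a model from predefined partial models with inducing a relation from behavioural data.

# Illustrative sketch only (invented names, not the review's formalisms).
# Model composition: select the predefined partial models (fragments)
# whose applicability conditions hold in the scenario.
def compose_model(scenario, fragment_library):
    return [relations for applies, relations in fragment_library
            if applies(scenario)]

# Model induction: infer a qualitative relation from behavioural data.
def induce_relation(observations):    # (x, y) pairs, assumed sorted by x
    rising = all(y1 <= y2 for (_, y1), (_, y2)
                 in zip(observations, observations[1:]))
    return "M+" if rising else "unknown"

library = [(lambda s: "tank" in s, "flow = M+(level)")]
print(compose_model({"tank"}, library))           # ['flow = M+(level)']
print(induce_relation([(0, 1), (1, 3), (2, 7)]))  # 'M+'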

158 citations


Journal ArticleDOI
TL;DR: An introduction to some of the basic concepts of executable temporal logics, together with an overview of the main approaches being pursued.
Abstract: In recent years a number of programming languages based upon the direct execution of temporal logic formulae have been developed. The use of such logics provides a powerful basis for the representation and implementation of a range of dynamic behaviours. Though many of these languages are still experimental, they are beginning to be applied, not only in computer science and AI, but also in less obvious areas such as user interfaces, process control and social modelling. This article provides an introduction to some of the basic concepts of executable temporal logics, together with an overview of the main approaches being pursued.
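As a rough illustration of direct execution, the sketch below is my own invention, loosely echoing the "declarative past implies imperative future" style of languages such as MetateM: each rule fires on what held in the previous state and asserts what must hold in the current one, so executing the rules generates the model step by step.

# Hedged sketch (rules and loop invented, not from any actual language):
# each rule maps a condition on yesterday's state to an atom made true today.
rules = [
    (lambda prev: "request" in prev, "grant"),  # request yesterday => grant now
    (lambda prev: "grant" in prev, "log"),      # grant yesterday => log now
]

def execute(initial, steps):
    state = set(initial)
    history = [state]
    for _ in range(steps):
        state = {action for cond, action in rules if cond(state)}
        history.append(state)
    return history

print(execute({"request"}, 3))
# [{'request'}, {'grant'}, {'log'}, set()]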

50 citations


Journal ArticleDOI
TL;DR: This paper reviews a number of applications of machine learning to industrial control problems from the point of view of trying to automatically build rule-based reactive systems for tasks that, if performed by humans, would require a high degree of skill, yet are generally performed without thinking.
Abstract: This paper reviews a number of applications of machine learning to industrial control problems. We take the point of view of trying to automatically build rule-based reactive systems for tasks that, if performed by humans, would require a high degree of skill, yet are generally performed without thinking. Such skills are said to be sub-cognitive. While this might seem restrictive, most human skill is executed subconsciously and only becomes conscious when an unfamiliar circumstance is encountered. This kind of skill lends itself well to representation by a reactive system, that is, one that does not create a detailed model of the world, but rather, attempts to match percepts with actions in a very direct manner.
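A reactive system in this sense can be very small. The sketch below is mine, with invented thresholds and arbitrary sign conventions; nothing in it comes from the paper. It maps pole-balancing percepts straight to actions with no model of the world.

# Toy reactive controller (all thresholds and sign conventions invented):
# percepts in, action out; no model of the cart-pole dynamics is kept.
def pole_controller(angle, angular_velocity):
    if angle > 0.05 or (angle > 0 and angular_velocity > 0.5):
        return "push_right"
    if angle < -0.05 or (angle < 0 and angular_velocity < -0.5):
        return "push_left"
    return "no_action"

print(pole_controller(0.1, 0.0))   # push_right

Rule sets of exactly this percept-to-action shape are what the reviewed machine learning methods aim to build automatically from traces of skilled human performance.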

31 citations


Journal ArticleDOI
TL;DR: Modern computational methods and tools are being developed which add further capability to traditional statistical analysis tools; together with increasingly large data sets, they have created a new range of problems and challenges for analysts, as well as new opportunities for intelligent systems in data analysis.
Abstract: Two phenomena have probably affected modern data analysts' lives more than anything else. First, the size of real-world data sets is getting increasingly large, especially during the last decade or so. Second, modern computational methods and tools are being developed which add further capability to traditional statistical analysis tools. These two developments have created a new range of problems and challenges for analysts, as well as new opportunities for intelligent systems in data analysis.

22 citations


Journal ArticleDOI
TL;DR: This paper focuses on the formal specification of dynamic behaviour in TFL, the Task Formal Language, and proposes a different representation of dynamic knowledge, based on Algebraic Data Types, as opposed to dynamic or temporal logic.
Abstract: TFL, the Task Formal Language, has been developed for integrating the static and dynamic aspects of knowledge based systems. This paper focuses on the formal specification of dynamic behaviour. Although fundamental in knowledge based systems, strategic reasoning has until now been rather neglected by existing formal specifications: most languages have focused more on the specification of domain and problem-solving knowledge than on control. The formalisation presented here differs from previous ones in several respects. First, a different representation of dynamic knowledge is proposed: TFL is based on Algebraic Data Types, as opposed to dynamic or temporal logic. Second, dynamic strategic reasoning is emphasised, whereas existing languages allow only algorithmic control to be specified. Third, TFL provides the specification not only of the problem-solving knowledge of the object system, but also of its strategic knowledge. Finally, the dynamic knowledge of the meta-system itself is also specified. Moreover, modularisation is another important feature of the language.
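TFL itself is not reproduced here, but the following sketch (entirely hypothetical; the Task, Seq and Choose constructors are my invention) conveys the general idea of capturing control knowledge as terms of an algebraic data type that an interpreter evaluates, rather than as formulae of a dynamic or temporal logic.

# Hypothetical sketch: control strategies as an algebraic data type.
from dataclasses import dataclass

@dataclass
class Task:          # a primitive problem-solving step
    name: str

@dataclass
class Seq:           # do left, then right
    left: object
    right: object

@dataclass
class Choose:        # strategic choice guarded by a condition on the state
    cond: object
    then: object
    other: object

def run(strategy, state):
    if isinstance(strategy, Task):
        print("executing", strategy.name)
    elif isinstance(strategy, Seq):
        run(strategy.left, state); run(strategy.right, state)
    elif isinstance(strategy, Choose):
        run(strategy.then if strategy.cond(state) else strategy.other, state)

run(Seq(Task("diagnose"),
        Choose(lambda s: s["severe"], Task("repair"), Task("monitor"))),
    {"severe": True})

Because strategies are ordinary terms, they can themselves be inspected and rewritten, which is what makes strategic (rather than merely algorithmic) control specifiable.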

20 citations


Journal ArticleDOI
TL;DR: This article is an essay on directions, evaluation and methodology in computer-science oriented research on scientific discovery, all from the viewpoint of computer science.
Abstract: This article is an essay on directions and methodology in computer-science oriented research on scientific discovery. The essay starts by reviewing briefly some of the history of computing in scientific reasoning, and some of the results and impact that have been achieved. The remainder analyses some of the goals of this field, its relations with sister fields, and the practical applications of this analysis to evaluating research quality, reviewing, and methodology. An earlier review in this journal (Kocabas 1991b) analysed scientific discovery programs in terms of their designs, achievements and shortcomings; the focus here is research directions, evaluation and methodology, all from the viewpoint of computer science.

19 citations


Journal ArticleDOI
TL;DR: This paper reviews the motivation and background behind this new field and its theory and current state of the art, compares existing approaches, and discusses the underlying issues.
Abstract: Automated modelling is a young research field and is attracting increasingly more attention. It is a cross-disciplinary field involving simulation, modelling, qualitative reasoning, bond graphs and systems dynamics. It is an investigation of the modelling process with the purpose of developing computer tools which will automatically follow modelling principles. In addition, these tools will take into account the details of an application and generate the most appropriate model for the application. Its objective is to develop computer modelling tools which will have perception of model correctness, completeness and appropriateness and can perform modelling automatically. One way to achieve this objective is to introduce well-defined models and automate the process of assembling submodels into models to create well-defined models. This paper reviews the motivation and background behind this new field, its theory and current state of the art, compares existing approaches and discusses the underlying issues. It is hoped that more researchers will become aware of this field and be encouraged to work in it.
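The assembly idea can be caricatured as follows. This is a sketch under my own assumptions, not a tool from the paper; the submodel names and the well-definedness check are invented. Submodels are composed, and the composite is rejected unless every variable some submodel consumes is produced by another.

# Invented example: assembling submodels with a simple well-definedness check.
submodels = {
    "pump":  {"produces": {"flow"},  "consumes": {"power"}},
    "tank":  {"produces": {"level"}, "consumes": {"flow"}},
    "mains": {"produces": {"power"}, "consumes": set()},
}

def assemble(names):
    produced = set().union(*(submodels[n]["produces"] for n in names))
    needed = set().union(*(submodels[n]["consumes"] for n in names))
    missing = needed - produced
    if missing:   # the composite would reference undefined quantities
        raise ValueError("model not well defined, unresolved inputs: %s" % missing)
    return names

print(assemble(["mains", "pump", "tank"]))   # a well-defined composite model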

Journal ArticleDOI
TL;DR: This paper presents a review of pattern matching techniques, showing that the techniques and approaches are as diverse and varied as the applications.
Abstract: This paper presents a review of pattern matching techniques. The application areas for pattern matching are extensive, ranging from CAD systems to chemical analysis and from manufacturing to image processing. Published techniques and methods are classified and assessed within the context of three key issues: pattern classes, similarity types and matching methods. It has been shown that the techniques and approaches are as diverse and varied as the applications.

Journal ArticleDOI
TL;DR: The third edition of the Temporal Representation and Reasoning workshop was held on May 19–20 1996 in Key West, FL, with a particular emphasis given to the foundational aspects of temporal representation and reasoning through an investigation of the relationships between different approaches to temporal issues in AI, computer science and logic.
Abstract: Time is one of the most relevant topics in AI. It plays a major role in several of AI research areas, ranging from logical foundations to applications of knowledge-based systems. Despite the ubiquity of time in AI, researchers tend to specialise and focus on time in particular contexts or applications, overlooking meaningful connections between different areas. In an attempt to promote crossfertilisation and reduce isolation, the Temporal Representation and Reasoning (TIME) workshop series was started in 1994. The third edition of the workshop was held on May 19–20 1996 in Key West, FL, with S. D. Goodwin and H. J. Hamilton as General Chairs, and L. Chittaro and A. Montanari as Program Chairs. A particular emphasis was given to the foundational aspects of temporal representation and reasoning through an investigation of the relationships between different approaches to temporal issues in AI, computer science and logic.

Journal ArticleDOI
TL;DR: The technical presentations covered topics ranging from the theoretical to the applied, and at the theoretical end of the spectrum were presentations on deontic logic and the logic of action, defeasible reasoning, the logical basis for decision, and ethical and legal theories.
Abstract: The Third International Workshop on Deontic Logic in Computer Science (ΔEON'96) took place in Sesimbra, Portugal, from 11–13 January 1996. It consisted of 12 refereed technical presentations and four invited talks. The invited speakers were Nuel Belnap (Pittsburgh University, USA), Andrew Jones (Oslo University, Norway), Krister Segerberg (Uppsala University, Sweden) and Marek Sergot (Imperial College, UK).



Journal ArticleDOI
TL;DR: A logic programming perspective on programming patterns, systematic program development, design for provability, and the paradigm of meta-programming is presented.
Abstract: Logic programming is a programming paradigm with potential to contribute to software engineering. This paper is concerned with one dimension of that potential, the impact that experience with developing logic programs can have on software design. We present a logic programming perspective on programming patterns, systematic program development, design for provability, and the paradigm of meta-programming.
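Meta-programming, the last of these, treats programs as data. The sketch below is mine: a propositional caricature written in Python rather than an actual Prolog meta-interpreter, with an invented clause base. It conveys the idea of one program proving goals against another program represented as data.

# Invented sketch of the meta-programming idea: a logic program as data,
# interpreted by another program. Clauses are (head, [body...]) pairs over
# plain string atoms (no variables, to stay within propositional Horn logic).
program = [
    ("ancestor_tom_bob", []),                             # a fact
    ("ancestor_bob_ann", []),
    ("happy", ["ancestor_tom_bob", "ancestor_bob_ann"]),  # a rule
]

def solve(goal):
    # A vanilla meta-interpreter: a goal holds if some clause for it has
    # a body whose subgoals all hold.
    return any(all(solve(sub) for sub in body)
               for head, body in program if head == goal)

print(solve("happy"))   # True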



Journal ArticleDOI
TL;DR: In Chapter 4 the ACT Environment is described, which helps provide a good feel for how one would use the actual system; of particular note is a facility to generate visualisations of algebraic terms defined in the interface.
Abstract: …defined in the interface. Once again the authors back this up with a detailed case study, this time a material requirements planning system. In Chapter 4 the ACT Environment is described. This helps provide a good feel for how one would use the actual system. Of particular note is a facility to generate visualisations of algebraic terms. I feel that this chapter is as effective as it can be; however, there is no real substitute for first-hand experience of the system. Appendix A gives a more detailed treatment of the formal theory underpinning ACT. Realistically, one needs a good grasp of category theory to understand this. Finally, Appendix B gives some brief user instructions for the ACT Environment. I heartily recommend the book to all computer science students at the Technical University of Berlin (the authors' institution)! It should also be very useful to others studying algebraic techniques in general, and to industrialists wanting to gain an understanding of what is possible with algebraic specifications. The balance between theory and examples has been struck, with plenty of case studies to back up the concepts. To summarise, if the area interests you and you are prepared to grapple with a little discrete mathematics, then buy it.

Journal ArticleDOI
TL;DR: Kaelbling's focus on the situatedness of the learning system being embedded in its environment reflects the recent experience gained by much direct experimentation with physical robots.
Abstract: The definitions above, separated by ten years, represent two very different conceptions of learning. For Simon learning depends on an internal change in representation, and for Kaelbling it is instead measured in terms of an external change in behaviour. Furthermore, Kaelbling's focus on the situatedness of the learning system being embedded in its environment reflects the recent experience gained by much direct experimentation with physical robots.

Journal ArticleDOI
TL;DR: This paper attempts to study a more recently developed distance metric and show that this metric is capable of measuring the importance of different attributes.
Abstract: The basic nearest neighbour algorithm works by storing the training instances and classifying a new case by predicting that it has the same class as its nearest stored instance. To measure the distance between instances, some distance metric needs to be used. In situations when all attributes have numeric values, the conventional nearest neighbour method treats examples as points in feature spaces and uses Euclidean distance as the distance metric. In tasks with only nominal attributes, the simple "overlap" metric is usually used. To handle classification tasks that have mixed types of attributes, the two different metrics are simply combined. Work by researchers in the machine learning field has shown that this approach performs poorly. This paper attempts to study a more recently developed distance metric and show that this metric is capable of measuring the importance of different attributes. With the use of discretisation for numeric-valued attributes, this method provides an integrated way of dealing with problem domains containing mixtures of attribute types. Through detailed analyses, this paper tries to provide further insights into the understanding of nearest neighbour classification techniques and promote further use of this type of classification algorithm.
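For reference, the baseline combination the abstract describes, Euclidean distance on numeric attributes plus the overlap metric on nominal ones, looks roughly like this. This is a sketch of that baseline only, not the newer metric the paper studies, and the example tuples are invented.

# Minimal sketch of the naive combined metric for mixed attribute types.
import math

def mixed_distance(a, b, numeric_idx, nominal_idx):
    num = sum((a[i] - b[i]) ** 2 for i in numeric_idx)        # Euclidean part
    nom = sum(0 if a[i] == b[i] else 1 for i in nominal_idx)  # overlap part
    return math.sqrt(num) + nom

x = (1.70, 65.0, "red"); y = (1.80, 72.0, "blue")
print(mixed_distance(x, y, numeric_idx=[0, 1], nominal_idx=[2]))  # ~8.0

The absence of any attribute weighting or scaling here (the weight attribute dominates the height attribute simply because of its units) is part of why, as the abstract notes, this simple combination performs poorly.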

Journal ArticleDOI
TL;DR: It is shown how rule-based coordination languages have been used to build process-centred development environments, which represent a trend toward more complex programming environments.
Abstract: Software process modelling is the activity of formalising the production lifecycle of large software systems. Its aim is to formally describe a software development process, which is then effectively used and possibly enacted by an environment able to support the geographically distributed and coordinated activities involved in the process itself. I show that rule-based languages, especially logic programming languages, are an important technology for the specification, modelling, enactment and coordination of software processes. This is because most routine activities in any development process can be defined by rules. Some initial proposals aimed at simply simulating the software process by a Prolog-like program embedding some development rules. A further step toward the integration of rule-based languages in the software process has been taken using a dynamic knowledge base as a project database, and a number of special primitives have been introduced to support process programs. Currently there is a trend toward more complex programming environments, called process-centred development environments. I show how some rule-based coordination languages have been used to build an environment of this kind.
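As a flavour of routine activities defined by rules over a project database, the sketch below is my own: the facts and rules are invented, and no actual process-centred environment works this way verbatim. It fires development steps forward-chaining style until the process state stabilises.

# Invented sketch: process steps as forward-chaining rules over a fact base.
facts = {"source_changed"}
rules = [
    ({"source_changed"}, "rebuilt"),
    ({"rebuilt"}, "tests_run"),
    ({"tests_run"}, "release_candidate"),
]

changed = True
while changed:
    changed = False
    for conditions, consequence in rules:
        if conditions <= facts and consequence not in facts:
            facts.add(consequence)   # enact the process step
            changed = True

print(sorted(facts))
# ['rebuilt', 'release_candidate', 'source_changed', 'tests_run']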

Journal ArticleDOI
TL;DR: This book attempts to address a wide set of issues in the construction of knowledge acquisition systems and will prove useful to all those modelling such systems.
Abstract: …the area, and all opportunities for reminders of related or earlier work are used, even at the cost of repetition. The insights and connections offered by this book are especially valuable since the system it refers to, MOBAL, is a real working system, not just a theoretical possibility. In addition to detailing the development of a specific system, the book attempts to address a wide set of issues in the construction of knowledge acquisition systems and will prove useful to all those modelling such systems.

Journal ArticleDOI
TL;DR: In his thought-provoking paper, Valdes-Perez (1996, this volume) carefully describes the methodology and future directions for research in Machine Scientific Discovery (MSD), as viewed predominantly from the standpoint of computer science.
Abstract: In his thought-provoking paper, Valdes-Perez (1996, this volume) carefully describes the methodology and future directions for research in Machine Scientific Discovery (MSD), as viewed predominantly from the standpoint of computer science. Here, I shall offer some remarks from the angle of domain sciences (despite the fact that my basic area of expertise is actually general and computational linguistics).