
Showing papers on "Ontology (information science)" published in 1996


Journal ArticleDOI
TL;DR: This paper outlines a methodology for developing and evaluating ontologies, first discussing informal techniques concerning such issues as scoping, handling ambiguity, reaching agreement and producing definitions, and then considering a more formal approach.
Abstract: This paper is intended to serve as a comprehensive introduction to the emerging field concerned with the design and use of ontologies. We observe that disparate backgrounds, languages, tools and techniques are a major barrier to effective communication among people, organisations and/or software systems, and that an explicit account of a shared understanding (i.e. an "ontology") in a given subject area can improve such communication, which in turn can give rise to greater reuse and sharing, inter-operability, and more reliable software. After motivating their need, we clarify just what ontologies are and what purpose they serve. We outline a methodology for developing and evaluating ontologies, first discussing informal techniques concerning such issues as scoping, handling ambiguity, reaching agreement and producing definitions. We then consider the benefits of, and describe, a more formal approach. We re-visit the scoping phase, and discuss the role of formal languages and techniques in the specification, implementation and evaluation of ontologies. Finally, we review the state of the art and practice in this emerging field, considering various case studies, software tools for ontology development, key research issues and future prospects.

3,568 citations


Journal ArticleDOI
12 Dec 1996
TL;DR: This work tries to reconcile the dual (schematic and semantic) perspectives by enumerating possible semantic similarities between objects having schema and data conflicts, and modeling schema correspondences as the projection of semantic proximity with respect to (wrt) context.
Abstract: In a multidatabase system, schematic conflicts between two objects are usually of interest only when the objects have some semantic similarity. We use the concept of semantic proximity, which is essentially an abstraction/mapping between the domains of the two objects associated with the context of comparison. An explicit though partial context representation is proposed and the specificity relationship between contexts is defined. The contexts are organized as a meet semi-lattice and associated operations like the greatest lower bound are defined. The context of comparison and the type of abstractions used to relate the two objects form the basis of a semantic taxonomy. At the semantic level, the intensional description of database objects provided by the context is expressed using description logics. The terms used to construct the contexts are obtained from domain-specific ontologies. Schema correspondences are used to store mappings from the semantic level to the data level and are associated with the respective contexts. Inferences about database content at the federation level are modeled as changes in the context and the associated schema correspondences. We try to reconcile the dual (schematic and semantic) perspectives by enumerating possible semantic similarities between objects having schema and data conflicts, and modeling schema correspondences as the projection of semantic proximity with respect to (wrt) context.

501 citations
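The meet semi-lattice of contexts described above can be sketched concretely. In the toy model below, the set-based encoding is an illustrative assumption rather than the paper's formalism: a context is a frozenset of (property, value) terms ordered by inclusion, and the greatest lower bound of two contexts is their intersection, i.e. the most specific context that both contexts refine.

```python
# Toy context semi-lattice (illustrative encoding, not the paper's):
# a context is a frozenset of (property, value) terms, ordered by
# inclusion; the meet (greatest lower bound) is set intersection.

def glb(c1: frozenset, c2: frozenset) -> frozenset:
    """Greatest lower bound in the inclusion-ordered semi-lattice."""
    return c1 & c2

def refines(c1: frozenset, c2: frozenset) -> bool:
    """c1 is at least as specific as c2 if it carries all of c2's terms."""
    return c1 >= c2

ctx_print = frozenset({("domain", "bibliography"), ("medium", "print")})
ctx_journal = frozenset({("domain", "bibliography"), ("medium", "print"),
                         ("periodicity", "regular")})
ctx_online = frozenset({("domain", "bibliography"), ("medium", "electronic")})

# The meet keeps only the terms on which both contexts agree, giving a
# common context in which two database objects can be compared.
common = glb(ctx_journal, ctx_online)
```

The intersection drops the conflicting "medium" terms, leaving only the shared "domain" term as the context of comparison.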


Journal ArticleDOI
TL;DR: The preliminary exploration into an organisation ontology for the TOVE enterprise model puts forward a number of conceptualizations for modeling organisations: activities, agents, roles, positions, goals, communication, authority, commitment.

217 citations


Journal ArticleDOI
TL;DR: An ontology for representing requirements that supports a generic requirements management process in the engineering design domain is presented, and the axioms capturing the constraints and relationships among the objects are identified.
Abstract: We present an ontology for representing requirements that supports a generic requirements management process in the engineering design domain. The requirement ontology we propose is a part of a more gen...

191 citations


Proceedings Article
01 Jan 1996
TL;DR: A suite of principles, design criteria and verification processes used in the knowledge conceptualization of an agreed-upon domain ontology in the domain of chemicals is presented, together with an approach that integrates several intermediate representation techniques.

Abstract: This paper presents the suite of principles, design criteria and verification processes used in the knowledge conceptualization of an agreed-upon domain ontology in the domain of chemicals. To achieve agreement between different development teams we propose the use of a common and shared conceptual model as a starting point. To capture the knowledge of a given domain and organize it in a shared, agreed-upon conceptual model, we recommend an approach that integrates the following intermediate representation techniques: Data Dictionary, Concepts Classification Trees, Tables of Instance Attributes, Table of Class Attributes, Table of Constants, Tables of Formulas, Attributes Classification Trees, and Tables of Instances. We also provide a set of guidelines to verify the knowledge gathered within each intermediate representation and between intermediate representations.

185 citations
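One of the intermediate representations listed above, the Data Dictionary, lends itself to a small sketch. The table layout and the single cross-check shown here (every concept mentioned in a classification tree must have a dictionary entry) are illustrative assumptions, not the paper's actual guideline set.

```python
# Hypothetical Data Dictionary and Concepts Classification Tree, plus
# one cross-representation verification guideline: flag concepts that
# appear in the tree but lack a dictionary entry.

data_dictionary = {
    "chemical-element": {"description": "a pure substance of one atom type",
                         "synonyms": ["element"]},
    "halogen": {"description": "a group 17 element", "synonyms": []},
}
classification_tree = {"chemical-element": ["halogen", "noble-gas"]}

def undefined_concepts(tree, dictionary):
    """Cross-check between two intermediate representations."""
    mentioned = set(tree) | {c for children in tree.values() for c in children}
    return sorted(mentioned - set(dictionary))

# "noble-gas" appears in the tree but has no dictionary entry.
missing = undefined_concepts(classification_tree, data_dictionary)
```

Checks of this kind are what make a shared conceptual model verifiable before any formal encoding is attempted.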


Journal ArticleDOI
TL;DR: This paper provides an initial framework for verifying Knowledge Sharing Technology (KST): the engineering activity that guarantees the correctness of the definitions in an ontology, its associated software environments and documentation, with respect to a frame of reference, during each phase and between phases of its life cycle.
Abstract: Based on the empirical verification of bibliographic-data and other Ontolingua ontologies, this paper provides an initial framework for verifying Knowledge Sharing Technology (KST). Verification of KST refers to the engineering activity that guarantees the correctness of the definitions in an ontology, its associated software environments and documentation with respect to a frame of reference during each phase and between phases of its life cycle. Verification of the ontologies refers to building the correct ontology, and it verifies that (1) the architecture of the ontology is sound, (2) the lexicon and the syntax of the definitions are correct and (3) the content of the ontologies and their definitions are internally and metaphysically consistent, complete, concise, expandable and sensitive.

171 citations


01 Jan 1996
TL;DR: This paper summarises the top level schema of the GALEN Common Reference model, which includes a taxonomy and an associated set of semantic link types.
Abstract: This paper summarises the top level schema of the GALEN Common Reference model. The top levels of a structure tested in a range of uses are described. This includes a taxonomy and an associated set of semantic link types. The current status of the project and its goals are discussed.

104 citations


01 Jan 1996
TL;DR: Examples are presented showing how the use of SHOE can support a new generation of knowledge-based search and knowledge discovery tools that operate on the World-Wide Web.

Abstract: This paper describes SHOE, a set of Simple HTML Ontology Extensions. SHOE allows World-Wide Web authors to annotate their pages with ontology-based knowledge about page contents. We present examples showing how the use of SHOE can support a new generation of knowledge-based search and knowledge discovery tools that operate on the World-Wide Web.

87 citations
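A sketch of how such annotations might be consumed. The tag and attribute names below are hypothetical stand-ins for SHOE's markup, not its exact 1996 syntax; the point is only that ontology-based claims embedded in HTML can be pulled out mechanically by a knowledge-based crawler.

```python
# Illustrative only: the "category" tag and its attributes approximate
# the spirit of SHOE's ontology annotations, not the real syntax.
from html.parser import HTMLParser

class AnnotationExtractor(HTMLParser):
    """Collects (category, subject) claims embedded in a page's markup."""
    def __init__(self):
        super().__init__()
        self.claims = []

    def handle_starttag(self, tag, attrs):
        if tag == "category":              # hypothetical annotation tag
            d = dict(attrs)
            self.claims.append((d.get("name"), d.get("for")))

page = """
<html><body>
<p>My home page.</p>
<category name="GraduateStudent" for="http://example.org/~smith"></category>
</body></html>
"""

extractor = AnnotationExtractor()
extractor.feed(page)
# extractor.claims now holds the machine-readable assertions a
# knowledge-based search tool could index alongside the page text.
```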


01 Jan 1996
TL;DR: The Mikrokosmos (μK) Machine Translation System is a knowledge-based machine translation (KBMT) system under development at New Mexico State University, and several issues in semantic analysis programming will be briefly probed, including dependency-directed processing, "best-first" search, and knowledgeable treatment of unexpected and ambiguous inputs.

Abstract: The Mikrokosmos (μK) Machine Translation System is a knowledge-based machine translation (KBMT) system under development at New Mexico State University. Unlike previous research in interlingual MT, this project is a large-scale, practical MT system. By the end of the year, a lexicon of approximately 20,000 Spanish words (with over 35,000 word senses) supported by an ontology of 6000 concepts will be in place. High quality semantic analyses of over 400 article-length Spanish texts in the domain of company mergers and acquisitions will have been produced. In the coming year, μK intends to expand into other languages, notably Thai. This paper introduces the central concepts involved in KBMT, including Text-Meaning Representation (TMR), ontology, and the semantic lexicon. The semantic analyzer "engine" will then be described in detail, with examples of how knowledge from the ontology, lexicon and syntactic analysis are combined to create the basic semantic dependency structures found in the TMR outputs. Several issues in semantic analysis programming will be briefly probed, including dependency-directed processing, "best-first" search, and knowledgeable treatment of unexpected and ambiguous inputs.

82 citations


Journal ArticleDOI
TL;DR: The main goal of this paper is to describe in detail how PROTEGE-II was used to model the elevator-configuration task, and provide a starting point for comparison with other frameworks that use abstract problem-solving methods.
Abstract: This paper describes how we applied the PROTEGE-II architecture to build a knowledge-based system that configures elevators. The elevator-configuration task was solved originally with a system that employed the propose-and-revise problem-solving method (VT). A variant of this task, here named the Sisyphus-2 problem, is used by the knowledge-acquisition community for comparative studies. PROTEGE-II is a knowledge-engineering environment that focuses on the use of reusable ontologies and problem-solving methods to generate task-specific knowledge-acquisition tools and executable problem solvers. The main goal of this paper is to describe in detail how we used PROTEGE-II to model the elevator-configuration task. This description provides a starting point for comparison with other frameworks that use abstract problem-solving methods. Beginning with the textual description of the elevator-configuration task, we analysed the domain knowledge with respect to PROTEGE-II’s main goal: to build domain-specific knowledge-acquisition tools. We used PROTEGE-II’s suite of tools to construct a knowledge-based system, called ELVIS, that includes a reusable domain ontology, a knowledge-acquisition tool, and a propose-and-revise problem-solving method that is optimized to solve the elevator-configuration task. We entered domain-specific knowledge about elevator configuration into the knowledge base with the help of a task-specific knowledge-acquisition tool that PROTEGE-II generated from the ontologies. After we constructed mapping relations to connect the knowledge base with the method’s code, the final executable problem solver solved the test case provided with the Sisyphus-2 material. We have found that the development of ELVIS has afforded a valuable test case for evaluating PROTEGE-II’s suite of system-building tools. 
Only projects based on reasonably large problems, such as the Sisyphus-2 task, will allow us to improve the design of PROTEGE-II and its ability to produce reusable components.

78 citations


Book ChapterDOI
15 Apr 1996
TL;DR: This work provides an ontology suitable for describing object properties and the generation and transfer of forces in the scene, and provides a computational procedure to test the feasibility of such interpretations by reducing the problem to a feasibility test in linear programming.
Abstract: Understanding observations of image sequences requires one to reason about qualitative scene dynamics. For example, on observing a hand lifting a cup, we may infer that an 'active' hand is applying an upwards force (by grasping) on a 'passive' cup. In order to perform such reasoning, we require an ontology that describes object properties and the generation and transfer of forces in the scene. Such an ontology should include, for example: the presence of gravity, the presence of a ground plane, whether objects are 'active' or 'passive', whether objects are contacting and/or attached to other objects, and so on. In this work we make these ideas precise by presenting an implemented computational system that derives symbolic force-dynamic descriptions from video sequences. Our approach to scene dynamics is based on an analysis of the Newtonian mechanics of a simplified scene model. The critical requirement is that, given image sequences, we can obtain estimates for the shape and motion of the objects in the scene. To do this, we assume that the objects can be approximated by a two-dimensional 'layered' scene model. The input to our system consists of a set of polygonal outlines along with estimates for their velocities and accelerations, obtained from a view-based tracker. Given such input, we present a system that extracts force-dynamic descriptions for the image sequence. We provide computational examples to demonstrate that our ontology is sufficiently rich to describe a wide variety of image sequences. This work makes three central contributions. First, we provide an ontology suitable for describing object properties and the generation and transfer of forces in the scene. Second, we provide a computational procedure to test the feasibility of such interpretations by reducing the problem to a feasibility test in linear programming. 
Finally, we provide a theory of preference ordering between multiple interpretations along with an efficient computational procedure to determine maximal elements in such orderings.
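The flavour of the feasibility test can be shown with a deliberately tiny stand-in: one object, vertical axis only, and no real LP solver. The `upward_contacts` parameter is an illustrative simplification; the paper's actual procedure reduces full force-balance constraints over polygonal contacts to a linear-programming feasibility problem.

```python
# Toy stand-in for the force-dynamic feasibility test: an
# interpretation assigns each candidate contact an unknown
# non-negative upward force; it is feasible iff some assignment
# balances gravity on the vertical axis.

def feasible(mass: float, g: float, upward_contacts: int) -> bool:
    """Vertical equilibrium needs total upward force == mass * g.
    With at least one upward contact, any positive total is reachable;
    with none, only a weightless object balances."""
    weight = mass * g
    if upward_contacts > 0:
        return True          # choose contact forces summing to `weight`
    return weight == 0.0     # nothing opposes gravity

# A hand grasping a cup: one upward contact -> feasible interpretation.
grasped_ok = feasible(0.3, 9.81, upward_contacts=1)
# A cup hovering with no support -> the interpretation is rejected.
hover_ok = feasible(0.3, 9.81, upward_contacts=0)
```

Interpretations that fail the test (the hovering cup) are pruned; those that pass are then ranked by the paper's preference ordering.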

Book ChapterDOI
02 Jan 1996
TL;DR: This work presents a three-level architecture comprising the ontology, metadata and data levels for enabling correlation of information at a semantic level across multiple forms and representations for answering a user query.

Abstract: Huge amounts of data available in a variety of digital forms have been collected and stored in thousands of repositories. However, the information relevant to a user or application need may be stored in multiple forms in different repositories. Answering a user query may require correlation of information at a semantic level across multiple forms and representations. We present a three-level architecture comprising the ontology, metadata and data levels for enabling this correlation. Components of this architecture are explained by using an example from a GIS application.

Book
01 Sep 1996
TL;DR: The standardisation of flexible EDI messages and temporal reasoning for automated workflow in health care enterprises and the information marketplace are highlighted.
Abstract: Overview- Electronic commerce: An overview- The standardisation of flexible EDI messages- Machine-negotiated, ontology-based EDI (Electronic Data Interchange)- Changes in the interchange agreements- Advanced electronic commerce security in a workflow environment- Temporal reasoning for automated workflow in health care enterprises- The information marketplace: Achieving success in commercial applications

Proceedings ArticleDOI
28 Jan 1996
TL;DR: An ontology to represent the electrical network from the point of view of diagnosis, i.e., an ontology of the elements of the network that are necessary for fault diagnosis is presented.
Abstract: This paper presents an ontology to represent the electrical network from the point of view of diagnosis, i.e., an ontology of the elements of the network that are necessary for fault diagnosis. The notion of 'ontology', rooted in philosophy, is used in knowledge engineering to describe explicit specifications of conceptualizations, where a conceptualization is a set of definitions of elements in a domain. It is a useful notion in knowledge based systems development because it provides the means for describing explicitly the conceptualization behind the knowledge represented in a knowledge base. An ontology, in this case for fault diagnosis, is a combination of small-scale ontologies. After a detailed analysis of the network and its behaviour, five viewpoints were identified as relevant for diagnosis. We generated an ontology for each to characterize more accurately and clearly the elements involved in the problem. Each of them focuses on different relevant aspects of the network and has a compact body of knowledge and a clear meaning in the problem. They are identified as transport, control, events, alarms, and trips. Each is discussed by the authors in relation to fault diagnosis.

Book ChapterDOI
01 Jan 1996
TL;DR: This paper proposes knowledge intensive engineering, a new way of flexibly conducting engineering activities in various product life cycle stages with more knowledge to create more added value.
Abstract: This paper proposes knowledge intensive engineering, a new way of flexibly conducting engineering activities in various product life cycle stages with more knowledge to create more added value. Knowledge representation and modeling issues are discussed and a cooperative multiple intelligent agent architecture based on multiple ontologies is proposed for building a Knowledge Intensive Engineering Framework (KIEF). KIEF can be used as a knowledge intensive CAD for knowledge intensive design of knowledge intensive machines. This demonstrates the power and usefulness of knowledge intensive engineering. It is also argued that, to achieve knowledge intensive engineering, systematization of knowledge is an essential process to allow intelligent agents to share accumulated knowledge.


Journal ArticleDOI
TL;DR: This article includes an overview of the conceptualization, excerpts from the machine-readable Ontolingua source files, and pointers to the complete ontology library available on the Internet.
Abstract: In the VT/Sisyphus experiment, a set of problem solving systems was built against a common specification of a problem. An important hypothesis was that the specification could be given, in large part, as a common ontology. This article is that ontology. This ontology differs from normal software specification documents in two fundamental ways. First, it is formal and machine readable (i.e. in the KIF/Ontolingua syntax). Second, the descriptions of the input and output of the task to be performed include domain knowledge (i.e. about elevator configuration) that characterizes semantic constraints on possible solutions, rather than describing the form (data structure) of the answer. The article includes an overview of the conceptualization, excerpts from the machine-readable Ontolingua source files, and pointers to the complete ontology library available on the Internet.

Book ChapterDOI
01 Jan 1996
TL;DR: This paper describes CRYSTAL, a fully automated tool that induces such a dictionary of text extraction rules, and discusses issues involved with creating training data, defining a domain ontology, and allowing a flexible and expressive representation while designing a search control mechanism that avoids intractability.
Abstract: Domain-specific text analysis requires a dictionary of linguistic patterns that identify references to relevant information in a text. This paper describes CRYSTAL, a fully automated tool that induces such a dictionary of text extraction rules. We discuss some key issues in developing an automatic dictionary induction system, using CRYSTAL as a concrete example. CRYSTAL derives text extraction rules from training instances and generalizes each rule as far as possible, testing the accuracy of each proposed rule on the training corpus. An error tolerance parameter allows CRYSTAL to manipulate a trade-off between recall and precision. We discuss issues involved with creating training data, defining a domain ontology, and allowing a flexible and expressive representation while designing a search control mechanism that avoids intractability.
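The recall/precision trade-off controlled by the error tolerance can be sketched as follows. The feature-set rule representation and the tiny corpus below are hypothetical, not CRYSTAL's actual rule language: a rule is generalized by dropping constraints for as long as its error rate on the training corpus stays within the tolerance.

```python
# Hypothetical data model (not CRYSTAL's): a rule is a frozenset of
# constraints; it "fires" on a training instance whose feature set
# includes all of them. Errors are firings on negative instances.

def error_rate(rule: frozenset, corpus) -> float:
    fired = [label for feats, label in corpus if rule <= feats]
    return 0.0 if not fired else fired.count(False) / len(fired)

def generalize(rule: frozenset, corpus, tolerance: float) -> frozenset:
    """Drop constraints while training-set error stays within tolerance."""
    changed = True
    while changed:
        changed = False
        for constraint in sorted(rule):
            candidate = rule - {constraint}
            if candidate and error_rate(candidate, corpus) <= tolerance:
                rule, changed = candidate, True
                break
    return rule

corpus = [
    (frozenset({"verb:admitted", "subj:hospital", "obj:patient"}), True),
    (frozenset({"verb:admitted", "subj:patient"}), True),
    (frozenset({"verb:admitted", "subj:school"}), False),
]
rule = frozenset({"verb:admitted", "subj:hospital", "obj:patient"})

# A higher tolerance yields a broader rule (more recall, less precision).
loose = generalize(rule, corpus, tolerance=0.5)
strict = generalize(rule, corpus, tolerance=0.0)
```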

Journal ArticleDOI
TL;DR: A solution to the Sisyphus II elevator design problem developed using the VITAL approach to structured knowledge-based system development is discussed and how to use machine learning techniques to uncover additional strategic knowledge not present in the VT domain is shown.
Abstract: In this paper we discuss a solution to the Sisyphus II elevator design problem developed using the VITAL approach to structured knowledge-based system development. In particular we illustrate in detail the process by which an initial model of Propose & Revise problem solving was constructed using a generative grammar of model fragments and then refined and operationalized in the VITAL operational conceptual modelling language (OCML). In the paper we also discuss in detail the properties of a particular Propose & Revise architecture, called “Complete-Model-then-Revise”, and we show that it compares favourably in terms of competence with alternative Propose & Revise models. Moreover, using as an example the VT domain ontology provided as part of the Sisyphus II task, we critically examine the issues affecting the development of reusable ontologies. Finally, we discuss the performance of our problem solver and we show how we can use machine learning techniques to uncover additional strategic knowledge not present in the VT domain.

01 Jan 1996
TL;DR: An approach based on the KIF ontology-sharing language for allowing developers to share knowledge-acquisition editors and problem-solving methods is proposed, arguing that sharable ontologies are a fundamental precondition for reusing knowledge.
Abstract: Previous approaches to the reuse of problem-solving methods have relied on the existence of a global data model to serve as the mediator among the individual methods. This hard-coded approach limits the reusability of methods and introduces implicit assumptions into the system architecture that make it difficult to combine reasoning methods in new ways. To overcome these limitations, the PROTÉGÉ-II system associates each method with an ontology that defines the context of that method. All external interaction between a method and the world can be viewed as the mapping of knowledge between the method's context ontology and the ontologies of the methods with which it is interacting. In this paper, we describe a context-definition language called MODEL, and its role in the PROTÉGÉ-II system, a metatool for constructing task-specific expert-system shells. We outline the requirements that gave rise to such a language and argue that sharable ontologies are a fundamental precondition for reusing knowledge, serving as a means for integrating problem-solving, domain-representation, and knowledge-acquisition modules. We propose an approach based on the KIF ontology-sharing language for allowing developers to share knowledge-acquisition editors and problem-solving methods. 1 Reuse of Knowledge. Over the past two decades, researchers have been looking for ways to increase the productivity of knowledge engineers. Early rule-based shells were realized to be the "assembly language" of knowledge engineering, providing increased flexibility at the price of reduced understandability, maintainability, and reusability (Soloway et al., 1987). Instead, in an attempt to increase the knowledge bandwidth between the shell and the domain expert, researchers have turned to role-limiting architectures, replacing the generic rule-based architecture with task-specific reasoning strategies and custom-tailored knowledge-acquisition editors (McDermott, 1988).
Applications based on this architecture contain abstract, but inflexible, data models that guide knowledge acquisition and inference, enabling experts to manipulate knowledge at a high level of abstraction. We refer to these architectures as being driven by explicit, or strong, data models. Although such role-limiting architectures have been demonstrated to increase the productivity of system builders (Musen, 1989a), they are difficult to construct and, due to their commitment to a particular data model and problem-solving method, limited in the range of applications about which they can reason. The limitations of task-specific architectures have led to the recent development of metatools (Eriksson and Musen, 1992). These tools perform knowledge acquisition at the meta


Proceedings Article
25 Jul 1996
TL;DR: In this paper, the authors illustrate how pedagogical knowledge is represented by various instructional theories and ITS systems, and discuss a number of key issues involved in authoring knowledge, and show how these issues are addressed in the representational framework of the Eon ITS authoring tools.
Abstract: In intelligent learning/tutoring environments which deal with several types of knowledge or with knowledge in complex domains, some form of pedagogical and curriculum knowledge must be represented in order for the ITS to offer guidance and structure in the learning process. In this paper we illustrate how pedagogical knowledge is represented by various instructional theories and ITS systems, discuss a number of key issues involved in authoring pedagogical knowledge, and show how these issues are addressed in the representational framework of the Eon ITS authoring tools. Unique among the methods we use are Ontology objects, which allow for the creation of representational frameworks tailored to classes of domains or tasks, and "topic levels," which provide an additional level of sophistication beyond network representations, while increasing cognitive clarity.

Proceedings ArticleDOI
19 Jun 1996
TL;DR: This work introduces the concept and mechanism of the Dynamic Classificational Ontology (DCO), which is a mediator to help participants identify and resolve ontological similarities and differences in the CFDBS context.
Abstract: A Cooperative Federated Database System (CFDBS) is an information sharing environment in which units of information to be shared may be substantially structured, and participants are actively involved in sharing activities. We focus on the problem of shared ontology for the purpose of discovery in the CFDBS context. We introduce the concept and mechanism of the Dynamic Classificational Ontology (DCO), which is a mediator to help participants identify and resolve ontological similarities and differences. A DCO contains top level knowledge about information units exported by information providers, along with classificational knowledge. By contrast with fixed hierarchical classifications, the DCO builds domain specific, dynamically changing classification schemes; it specifically contains knowledge about overlap among information units. Information providers contribute to the DCO when information units are exported, and the current knowledge in the DCO is in turn utilized to guide export and discovery of information. At the cost of information providers' cooperative efforts, this approach supports much more systematic discovery than that provided by keyword based search, with substantially greater precision and recall. An experimental prototype of the DCO has been developed, and applied and tested to improve the precision and recall of Medline document searches for biomedical information sharing.

Journal ArticleDOI
TL;DR: This paper shows how an interoperable system, based on the Context Interchange Architecture, may be specified in terms of first-order logic, an ideal specification language at the knowledge level; the specification represents a theoretical ideal which serves to formalize and communicate the key ideas behind the Context Interchange Approach.
Abstract: Currently, there is a proliferation of database integration approaches in response to the need to achieve semantic interoperability in heterogeneous, distributed and autonomous environments. To date, however, we lack abstract, formal descriptions of the task of semantic interoperation, independent of idiosyncratic implementation details. Such abstract descriptions are needed to aid our understanding of these various complex systems that are being constructed. Therefore, we argue that a knowledge level perspective of interoperable systems is desirable. The knowledge level serves as an abstract specification of what a system should do and facilitates the design and analysis of complex systems. In this paper, we show how an interoperable system, based on the Context Interchange Architecture, may be specified in terms of first-order logic which is an ideal specification language at the knowledge level. This specification represents a theoretical ideal which serves to formalize and communicate the key ideas behind the Context Interchange Approach, unencumbered by the details, limitations and compromises of specific implementations. As such, it provides a rigorous basis for implementing such systems. Similar specifications may also be developed for other models of interoperable systems. As a result, we have a rigorous means of understanding, comparing and analyzing these complex systems.

01 Jan 1996
TL;DR: This paper proposes a framework for communication and cooperation among heterogeneous real-world agents based on shared ontologies, and shows a mediation mechanism to generate action sequences for agents from given tasks.
Abstract: In this paper, we propose a framework for communication and cooperation among heterogeneous real-world agents based on shared ontologies. Ontologies give a background of knowledge to share among agents; that is, systems of concepts are defined which are used to communicate with each other. We present three different ontologies, namely, ontologies for object, space, and action. Based on these ontologies, we then show a mediation mechanism to generate action sequences for agents from given tasks. Our framework is appropriate for cooperation among heterogeneous real-world agents because of (1) adaptiveness for environments and agents, (2) implicit representation of cooperation, and (3) integration of information and real-world agents. We realized and verified our mediation mechanism by applying it to task planning for mobile robots and computer-controlled instruments.

Journal ArticleDOI
TL;DR: This article presents the underlying view and the basic approach being taken, the main components of the framework and accompanying methodology, examples of studies recently done and how they relate to the framework, and an explicit ontology of basic CBR task types, domain characterisations, and types of problem solving and learning methods.
Abstract: A particular strength of case-based reasoning (CBR) over most other methods is its inherent combination of problem solving with sustained learning through problem solving experience. This is therefore a particularly important topic of study, and an issue that has now become mature enough to be addressed in a more systematic way. To enable such an analysis of problem solving and learning, we have initiated work towards the development of an analytic framework for studying CBR methods. It provides an explicit ontology of basic CBR task types, domain characterisations, and types of problem solving and learning methods. Further, it incorporates within this framework a methodology for combining a knowledge-level, top-down analysis with a bottom-up, case-driven one. In this article, we present the underlying view and the basic approach being taken, the main components of the framework and accompanying methodology, examples of studies recently done and how they relate to the framework.
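The problem-solving-plus-learning combination that motivates the framework can be sketched as a minimal retrieve/reuse/retain loop. The case structure and the overlap-counting similarity measure below are illustrative assumptions, not part of the article's ontology of CBR task types.

```python
# Minimal case-based reasoning loop: retrieve a similar past case,
# reuse its solution, retain the new experience in the case base.

def retrieve(case_base, problem):
    """Nearest neighbour over shared feature/value pairs."""
    def sim(case):
        return len(set(case["problem"].items()) & set(problem.items()))
    return max(case_base, key=sim)

def reuse(retrieved, problem):
    """Naive adaptation: copy the retrieved solution unchanged."""
    return retrieved["solution"]

def retain(case_base, problem, solution):
    """Sustained learning: store the solved problem as a new case."""
    case_base.append({"problem": problem, "solution": solution})

case_base = [
    {"problem": {"symptom": "no-power", "device": "printer"},
     "solution": "check fuse"},
    {"problem": {"symptom": "paper-jam", "device": "printer"},
     "solution": "open tray 2"},
]
new_problem = {"symptom": "no-power", "device": "scanner"}
best = retrieve(case_base, new_problem)
solution = reuse(best, new_problem)
retain(case_base, new_problem, solution)  # the case base grows with use
```

Each pass through the loop both solves a problem and enlarges the case base, which is exactly the inherent coupling of problem solving and learning the article sets out to analyse.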

Proceedings ArticleDOI
09 Sep 1996
TL;DR: The authors describe a case study which supports the claim that ontologies are reusable components in the design of knowledge systems and illustrate this by discussing how a single legal ontology has been used for the construction of both a planning and an assessment system.
Abstract: The authors describe a case study which supports the claim that ontologies are reusable components in the design of knowledge systems. An ontology documents important domain assumptions which would otherwise remain implicit. Whereas a conceptual (or formal) system specification differs between different knowledge systems (even in the same domain), they show the underlying ontology to be invariant. This makes ontologies reusable for knowledge-system design. They illustrate this by discussing how a single legal ontology has been used for the construction of both a planning and an assessment system and argue that the same ontology can be reused for other knowledge systems as well.

Book ChapterDOI
13 Aug 1996
TL;DR: The need for a common ontology to exchange learning-related information is shown and a solution that is motivated by the human ability to understand each other even in the absence of a common language by using alternative communication channels, such as gestures is proposed.
Abstract: This paper discusses the significance of communication between individual agents that are embedded into learning Multi-Agent Systems. For several learning tasks occurring within a Multi-Agent System, communication activities are investigated and the need for a mutual understanding of agents participating in the learning process is made explicit. Thus, the need for a common ontology to exchange learning-related information is shown. Building this ontology is an additional learning task that is not only extremely important, but also extremely difficult. We propose a solution that is motivated by the human ability to understand each other even in the absence of a common language by using alternative communication channels, such as gestures. We show some results for the task of cooperative material handling by several manipulators.

Book ChapterDOI
19 Aug 1996
TL;DR: The resulting classification provides an ontology of concept and relation types that can be used for representing both the semantics of verbs and the axioms for reasoning about the corresponding processes.
Abstract: Processes may be analyzed temporally into subprocesses or spatially into participants. The theoretical analysis of processes and participants is intimately connected with the linguistic analysis of verbs that express the processes and thematic roles that relate the processes to the participants. The resulting classification provides an ontology of concept and relation types that can be used for representing both the semantics of verbs and the axioms for reasoning about the corresponding processes.

05 Feb 1996
TL;DR: It is shown how an ontology can be incrementally constructed with this framework for the domain of physical systems, and that mapping ontologies, ontologies that define interrelationships between other ontologies, play an important role in this construction process.

Abstract: An important recent idea to facilitate knowledge sharing is to provide libraries of reusable components (models, ontologies) to end users. However, when libraries become large, finding the right library components is a knowledge demanding task in itself. Our suggestion therefore is that methods will be needed that help the user to gradually construct such knowledge. This paper describes a framework for doing this for reasoning in technical domains. We then show how an ontology can be incrementally constructed with our framework, for the domain of physical systems. We will see that mapping ontologies, ontologies that define interrelationships between other ontologies, play an important role in this construction process.