
Showing papers on "Suggested Upper Merged Ontology" published in 2000


Proceedings Article
01 Jan 2000
TL;DR: PROMPT, an algorithm providing a semi-automatic approach to ontology merging and alignment, is presented; it performs some tasks automatically and guides the user through others in a process that is usually handled manually and often constitutes a large and tedious portion of the sharing process.
Abstract: Researchers in the ontology-design field have developed the content for ontologies in many domain areas. Recently, ontologies have become increasingly common on the World Wide Web, where they provide semantics for annotations in Web pages. This distributed nature of ontology development has led to a large number of ontologies covering overlapping domains. In order for these ontologies to be reused, they first need to be merged or aligned to one another. The processes of ontology alignment and merging are usually handled manually and often constitute a large and tedious portion of the sharing process. We have developed and implemented PROMPT, an algorithm that provides a semi-automatic approach to ontology merging and alignment. PROMPT performs some tasks automatically and guides the user in performing other tasks for which his intervention is required. PROMPT also determines possible inconsistencies in the state of the ontology, which result from the user's actions, and suggests ways to remedy these inconsistencies. PROMPT is based on an extremely general knowledge model and therefore can be applied across various platforms. Our formative evaluation showed that a human expert followed 90% of the suggestions that PROMPT generated and that 74% of the total knowledge-base operations invoked by the user were suggested by PROMPT.
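As a rough illustration of the kind of suggestion loop a semi-automatic merger runs, the sketch below proposes candidate class merges purely by lexical name similarity. This is a hypothetical simplification, not PROMPT's actual algorithm, which also uses structural cues and interactive user feedback; the function and class names are invented.

```python
from difflib import SequenceMatcher

def suggest_merges(classes_a, classes_b, threshold=0.8):
    """Propose (class_a, class_b, score) merge candidates whose names
    are lexically similar; a human expert confirms or rejects each."""
    suggestions = []
    for a in classes_a:
        for b in classes_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                suggestions.append((a, b, round(score, 2)))
    # Present the highest-confidence suggestions first.
    return sorted(suggestions, key=lambda s: -s[2])

print(suggest_merges({"Vehicle", "Truck"}, {"vehicle", "Lorry"}))
```

In PROMPT's setting, each confirmed merge would also trigger consistency checks (e.g. for dangling references) before the next round of suggestions.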

1,119 citations




Journal ArticleDOI
TL;DR: The paper will describe the process of building an ontology, introducing the reader to the techniques and methods currently in use and the open research questions in ontology development.
Abstract: Much of biology works by applying prior knowledge ('what is known') to an unknown entity, rather than the application of a set of axioms that will elicit knowledge. In addition, the complex biological data stored in bioinformatics databases often require the addition of knowledge to specify and constrain the values held in that database. One way of capturing knowledge within bioinformatics applications and databases is the use of ontologies. An ontology is the concrete form of a conceptualisation of a community's knowledge of a domain. This paper aims to introduce the reader to the use of ontologies within bioinformatics. A description of the type of knowledge held in an ontology will be given. The paper will be illustrated throughout with examples taken from bioinformatics and molecular biology, and a survey of current biological ontologies will be presented. From this it will be seen that the use to which the ontology is put largely determines the content of the ontology. Finally, the paper will describe the process of building an ontology, introducing the reader to the techniques and methods currently in use and the open research questions in ontology development.

399 citations


Book ChapterDOI
02 Oct 2000
TL;DR: This paper establishes a common framework to compare the expressiveness and reasoning capabilities of "traditional" ontology languages (Ontolingua, OKBC, OCML, FLogic, LOOM) and "web-based" ontology languages, and concludes with the results of applying this framework to the selected languages.
Abstract: The interchange of ontologies across the World Wide Web (WWW) and the cooperation among heterogeneous agents placed on it are the main reasons for the development of a new set of ontology specification languages, based on new web standards such as XML or RDF. These languages (SHOE, XOL, RDF, OIL, etc.) aim to represent the knowledge contained in an ontology in a simple and human-readable way, as well as allow for the interchange of ontologies across the web. In this paper, we establish a common framework to compare the expressiveness and reasoning capabilities of "traditional" ontology languages (Ontolingua, OKBC, OCML, FLogic, LOOM) and "web-based" ontology languages, and conclude with the results of applying this framework to the selected languages.

176 citations


01 Jan 2000
TL;DR: A new approach for modeling large-scale ontologies that extends well-established methods for modeling concepts and relations with transportable methods for modeling ontological axioms, allowing versatile access to and manipulation of axioms via graphical user interfaces.
Abstract: This paper presents a new approach for modeling large-scale ontologies. We extend well-established methods for modeling concepts and relations by transportable methods for modeling ontological axioms. The gist of our approach lies in the way we treat the majority of axioms. They are categorized into different types and specified as complex objects that refer to concepts and relations. Considering language and system particularities, this first layer of representation is then translated into the target representation language. This two-layer approach benefits engineering, because the intended meaning of axioms is captured by the categorization of axioms. Classified object representations allow for versatile access to and manipulations of axioms via graphical user interfaces.
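The two-layer idea (classify an axiom as a typed object first, then translate that object into a target language) can be sketched as follows. The axiom category, renderer names, and output syntaxes are illustrative assumptions, not the paper's actual categorization:

```python
from dataclasses import dataclass

@dataclass
class DisjointAxiom:
    """Layer 1: an axiom classified by type, referring only to concepts."""
    concept_a: str
    concept_b: str

# Layer 2: per-language renderers that account for target particularities.
RENDERERS = {
    "kif": lambda ax: f"(disjoint {ax.concept_a} {ax.concept_b})",
    "owl-like": lambda ax: f"DisjointClasses({ax.concept_a} {ax.concept_b})",
}

def render(axiom, language):
    return RENDERERS[language](axiom)

print(render(DisjointAxiom("Employee", "Customer"), "kif"))
# prints: (disjoint Employee Customer)
```

Because layer 1 captures the intended meaning independently of syntax, a GUI can list, filter, and edit axioms by type without knowing the target language.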

147 citations


Patent
06 Oct 2000
TL;DR: In this paper, an ontology-based approach is proposed to generate Java-based object-oriented and relational application program interfaces (APIs) from a given ontology, providing application developers with an API that exactly reflects the entity types and relations (classes and methods) that are represented by the database.
Abstract: A system and method lets a user create or import ontologies and create databases and related application software. These databases can be specially tuned to suit a particular need, and each comes with the same error-detection rules to keep the data clean. Such databases may be searched based on meaning, rather than on words-that-begin-with-something. And multiple databases, if generated from the same basic ontology can communicate with each other without any additional effort. Ontology management and generation tools enable enterprises to create databases that use ontologies to improve data integration, maintainability, quality, and flexibility. Only the relevant aspects of the ontology are targeted, extracting out a sub-model that has the power of the full ontology restricted to objects of interest for the application domain. To increase performance and add desired database characteristics, this sub-model is translated into a database system. Java-based object-oriented and relational application program interfaces (APIs) are then generated from this translation, providing application developers with an API that exactly reflects the entity types and relations (classes and methods) that are represented by the database. This generation approach essentially turns the ontology into a set of integrated and efficient databases.
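A minimal sketch of the translation step, turning an ontology sub-model into relational DDL, might look like the following. The schema conventions (an `id` primary key per class) are assumptions for illustration, not the patented system's actual output, which also includes a generated Java API:

```python
def ontology_to_ddl(submodel):
    """Translate a sub-model (class name -> {attribute: SQL type})
    into CREATE TABLE statements, one table per entity type."""
    statements = []
    for cls, attrs in submodel.items():
        cols = ", ".join(f"{name} {sqltype}" for name, sqltype in attrs.items())
        statements.append(f"CREATE TABLE {cls} (id INTEGER PRIMARY KEY, {cols});")
    return statements

for stmt in ontology_to_ddl({"Person": {"name": "TEXT", "age": "INTEGER"}}):
    print(stmt)
```

Generating the schema from the ontology is what keeps the resulting databases mutually intelligible: two databases derived from the same sub-model share table and column semantics by construction.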

145 citations


01 Jan 2000
TL;DR: This work presents the TEXT-TO-ONTO Ontology Learning Environment, which is based on a general architecture for discovering conceptual structures and engineering ontologies from text, and supports both the acquisition of conceptual structures and the mapping of linguistic resources to the acquired structures.
Abstract: Ontologies have become an important means for structuring information and information systems and, hence, important in knowledge as well as in software engineering. However, there remains the problem of engineering large and adequate ontologies within short time frames in order to keep costs low. For this purpose, we present the TEXT-TO-ONTO Ontology Learning Environment, which is based on a general architecture for discovering conceptual structures and engineering ontologies from text. Our Ontology Learning Environment supports both the acquisition of conceptual structures and the mapping of linguistic resources to the acquired structures.

99 citations


Journal ArticleDOI
01 Oct 2000-Mind
TL;DR: The semantics and ontology of dispositions are looked at in the light of recent work on the subject and it is concluded that fragility is not a real property and that, while both temperature and its bases are, this does not generate any problem of overdetermination.
Abstract: The paper looks at the semantics and ontology of dispositions in the light of recent work on the subject. Objections to the simple conditionals apparently entailed by disposition statements are met by replacing them with so-called "reduction sentences" and some implications of this are explored. The usual distinction between categorical and dispositional properties is criticised and the relation between dispositions and their bases examined. Applying this discussion to two typical cases leads to the conclusion that fragility is not a real property and that, while both temperature and its bases are, this does not generate any problem of overdetermination.

73 citations


Book ChapterDOI
02 Oct 2000
TL;DR: An activity of ontology construction and its deployment in an interface system for oil-refinery plant operation, carried out under the umbrella of the Human-Media Project for four years, is presented.
Abstract: Although the necessity of ontology and ontological engineering is well understood, there have been few success stories about ontology construction and its deployment to date. This paper presents an activity of ontology construction and its deployment in an interface system for oil-refinery plant operation, carried out under the umbrella of the Human-Media Project for four years. It also describes why we need an ontology, what ontology we built, what environment we used for building the ontology, and how the ontology is used in the system. The interface was developed to establish a sophisticated technology for advanced interfaces for plant operators and consists of several agents. The system has been implemented, and a preliminary evaluation has been completed successfully.

69 citations


01 Aug 2000
TL;DR: It is concluded that different needs in KR and reasoning may exist in the building of an ontology-based application, and these needs must be evaluated in order to choose the most suitable ontology language(s).
Abstract: The interchange of ontologies across the World Wide Web (WWW) and the cooperation among heterogeneous agents placed on it are the main reasons for the development of a new set of ontology specification languages, based on new web standards such as XML or RDF. These languages (SHOE, XOL, RDF, OIL, etc.) aim to represent the knowledge contained in an ontology in a simple and human-readable way, as well as to allow for the interchange of ontologies across the web. In this paper, we establish a common framework to compare the expressiveness of "traditional" ontology languages (Ontolingua, OKBC, OCML, FLogic, LOOM) and "web-based" ontology languages. As a result of this study, we conclude that different needs in KR and reasoning may exist in the building of an ontology-based application, and these needs must be evaluated in order to choose the most suitable ontology language(s).

66 citations


Proceedings ArticleDOI
13 Sep 2000
TL;DR: This paper presents a method for acquiring an application-tailored domain ontology from given heterogeneous intranet sources, together with a comprehensive architecture and a system for semi-automatic ontology acquisition.
Abstract: This paper describes our actual and ongoing work in supporting semi-automatic ontology acquisition from the corporate intranet of an insurance company. A comprehensive architecture and a system for semi-automatic ontology acquisition support the processing of semi-structured information (e.g. contained in dictionaries) and natural-language documents, as well as the inclusion of existing core ontologies (e.g. GermaNet, WordNet). We present a method for acquiring an application-tailored domain ontology from the given heterogeneous intranet sources.

Proceedings ArticleDOI
04 Jan 2000
TL;DR: DOME is developing techniques for ontology-based information content description and a suite of tools for domain ontology management, which makes it possible to dynamically find relevant data sources based on content and to integrate them as needed.
Abstract: Service-oriented business-to-business e-commerce requires dynamic and open interoperable information systems. Although most large organisations have information regarding products, services and customers stored in databases, and XML/DTD allows these to be published over the Internet, sharing information among these systems has been prevented by semantic heterogeneity. True electronic commerce will not happen until the semantics of the terms used to model these information systems can be captured and processed by computers. Developing a machine processable ontology (vocabulary) is intrinsically hard. The semantics of a term varies from one context to another. We believe ontology engineering will be a major effort of any future application development. In this paper we describe our work on building a Domain Ontology Management Environment (DOME). DOME is developing techniques for ontology-based information content description and a suite of tools for domain ontology management. Information content description extends traditional meta data to an ontology. This makes it possible to dynamically find relevant data sources based on content and to integrate them as needed.


01 Mar 2000
TL;DR: An example of how to resolve semantic ambiguities is provided for the manufacturing concept ‘resource’ and some ideas on how to use PSL for inter-operability in agent-based systems are presented.
Abstract: The problems of inter-operability are acute for manufacturing applications, as applications using process specifications do not necessarily share syntax and definitions of concepts. The Process Specification Language developed at the National Institute of Standards and Technology proposes a formal ontology and translation mechanisms representing manufacturing concepts. When an application becomes ‘PSL compliant,’ its concepts are expressed using PSL, with a direct one-to-one mapping or with a mapping under certain conditions. An example of how to resolve semantic ambiguities is provided for the manufacturing concept ‘resource’. Finally some ideas on how to use PSL for inter-operability in agent-based systems are presented.
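The compliance idea, where each application term maps to a PSL concept either one-to-one or only under a stated condition, can be sketched as below. The term names and the `psl:` identifiers are invented for illustration and are not actual PSL vocabulary:

```python
# Each application term maps to a (PSL concept, optional condition) pair.
MAPPING = {
    "machine": ("psl:resource", None),               # direct one-to-one mapping
    "operator": ("psl:resource", "while on shift"),  # mapping under a condition
}

def to_psl(term):
    """Return the PSL rendering of an application term."""
    target, condition = MAPPING[term]
    return target if condition is None else f"{target} [if {condition}]"

print(to_psl("machine"))   # prints: psl:resource
```

Two applications that each maintain such a mapping can exchange process specifications through the shared PSL vocabulary instead of agreeing on each other's native terms.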

Journal ArticleDOI
TL;DR: A real environment for integrating ontologies supplied by a predetermined set of (expert) users, who might be distributed through a communication network and work cooperatively in the integration process, is introduced.
Abstract: Nowadays, we can find systems and environments supporting processes of ontology building. However, these processes have not yet been specified in enough detail. In this work, a real environment for integrating ontologies supplied by a predetermined set of (expert) users, who might be distributed through a communication network and work cooperatively in the integration process, is introduced. In this environment, the (expert) user can inspect the ontology that is being produced, and so is able to refine his/her private ontology. Furthermore, the experts who take part in the ontology construction process are allowed to use their own terminology, even for requesting information about the global derived ontology, until a specific instant after the integration.


Journal ArticleDOI
TL;DR: This paper discusses a concept which can be expected to be of great importance to formal ontology management, and which is well‐known in traditional software development: refinement.
Abstract: Ontologies have emerged as one of the key issues in information integration and interoperability and in their application to knowledge management and electronic commerce. A trend towards formal methods for ontology management is obvious. This paper discusses a concept which can be expected to be of great importance to formal ontology management, and which is well-known in traditional software development: refinement. We define and discuss ontology refinement, give illustrating examples, and highlight its advantages as compared to other forms of ontology revision. © 2000 John Wiley & Sons, Inc.


Book ChapterDOI
07 Jul 2000
TL;DR: A software gathering service that is mainly supported by an ontology, SoftOnt, and several agents is presented, showing how the SoftOnt ontology is built from distributed and heterogeneous software repositories.
Abstract: Ontologies and agents are two topics that attract particular attention these days, from the theoretical as well as the application point of view. In this paper we present a software-gathering service that is mainly supported by an ontology, SoftOnt, and several agents. The main goal of the paper is to show how the SoftOnt ontology is built from distributed and heterogeneous software repositories. In the particular domain considered, software repositories, we advocate the automatic creation of a single global ontology rather than manual creation and the use of multiple ontologies.


Proceedings ArticleDOI
14 May 2000
TL;DR: The authors use ontology as a foundation to realize knowledge sharing and reuse, and discuss the method for building ontologies, its principles, and its implementation.
Abstract: After many years of research work, many knowledge-based intelligent systems have been created. But differences in construction methods and application contexts make it difficult to share and reuse knowledge, and this in turn makes building knowledge systems difficult. In order to solve this problem, we use ontology as a foundation to realize knowledge sharing and reuse. As an important research area in AI, the ontology-building method has not yet reached a common view. The authors mainly discuss the method for building ontologies, its principles, and its implementation.

Journal ArticleDOI
TL;DR: A multi-phase process of automatic construction of a domain-specific ontology from text databases, in which various text mining and natural-language understanding methods are used.
Abstract: The paper describes a multi-phase process of automatic construction of a domain-specific ontology from text databases, in which various text mining and natural-language understanding methods are used. The ontology we wish to develop describes a well-defined technical domain. Hence it is called a domain-specific ontology, as opposed to universal ontologies. We discuss the major techniques used in the process and show some preliminary results.
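An early phase of such a process, extracting candidate domain terms from raw text by frequency, can be sketched as follows. The stopword list and tokenization here are crude placeholders for the paper's text-mining and natural-language understanding methods:

```python
import re
from collections import Counter

def extract_candidate_terms(corpus, stopwords=frozenset({"the", "a", "of", "is"}), top=3):
    """Return the most frequent content words as candidate concepts
    for the domain-specific ontology."""
    words = re.findall(r"[a-z]+", corpus.lower())
    counts = Counter(w for w in words if w not in stopwords)
    return [word for word, _ in counts.most_common(top)]

print(extract_candidate_terms("The pump drives the pump motor. A motor is a device."))
```

Later phases would then relate the extracted terms to one another (e.g. by co-occurrence) to propose a concept hierarchy for expert review.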

Journal Article
TL;DR: This work proposes a translation approach that employs a single translator no matter what source and target languages are involved; it relies on a meta-language called M_Kif, which extends the Kif language with the concept of meta-relation to describe representation ontologies.
Abstract: Translating ontologies from a source language towards a target one is a required process both for the conception of a new ontology from existing ones (reuse) and for the common use of an ontology by different knowledge-based systems. The usual approach to translation is based upon the existence of a translator for each pair of languages from and to which the ontology must be translated. Therefore, several translating tools are necessary. We propose a translation approach that employs a single translator no matter what source and target languages are involved. It relies on two principal components. The first one is a meta-language called M_Kif, which extends the Kif language with the concept of meta-relation to describe representation ontologies. The second one is a pivot representation which unifies different styles of representations. The translation tool is an interpretation program of the meta-relation definitions specified in the source and target representation ontologies.
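The single-translator idea rests on a pivot: parse any source syntax into one neutral structure, then emit any target syntax from it, so n languages need n parsers and n emitters rather than a translator per language pair. The mini-grammars below are invented for illustration; the paper's actual pivot is defined via M_Kif meta-relations:

```python
def parse_frame(statement):
    """Parse a toy frame-style statement such as 'Person isa Agent'
    into the neutral pivot structure."""
    child, _, parent = statement.split()
    return {"child": child, "parent": parent}

def emit_lisp(pivot):
    """Emit the pivot structure in a toy Lisp-style target syntax."""
    return f"(subclass-of {pivot['child']} {pivot['parent']})"

def translate(statement, emit=emit_lisp):
    # One translator: source parser -> pivot -> target emitter.
    return emit(parse_frame(statement))

print(translate("Person isa Agent"))   # prints: (subclass-of Person Agent)
```

Supporting a new target language then means writing one new emitter over the pivot, leaving every existing parser untouched.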



Book ChapterDOI
13 Dec 2000
TL;DR: An Ontology Server that provides ontologies adapted to electronic commerce (EC) is constructed and applied to a comparison-shopping system.
Abstract: Ontology is an essential element of agent systems: with it, agents can share knowledge and communicate with one another. As agent systems are more widely applied, the importance of ontology is increasing. Though there have been some approaches to constructing ontologies, they fell far short of satisfying practical needs. In this paper we construct an Ontology Server, which provides ontologies adapted to electronic commerce (EC), and apply it to a comparison-shopping system.


Book ChapterDOI
28 Aug 2000
TL;DR: In this article, the authors propose three ontology models suitable for modeling large-scale domains, apply them to the problem domain of petroleum waste management, and discuss knowledge representation at different levels of ontologies.
Abstract: The purpose of this paper is to suggest a process for ontology design and its application. Current knowledge modeling methodologies tend to focus on a particular task, which makes them difficult to reuse. In this paper, we propose three different ontology models suitable for modeling large-scale domains. We apply the models to the problem domain of petroleum waste management, and discuss knowledge representation at the different levels of ontologies.

Journal Article
TL;DR: An ontology-based English-Chinese MT system, in which an ontology was developed to model world knowledge, is introduced; the ontology supports disambiguation in source-language analysis and word choice in target-language generation.
Abstract: There is now a common consensus in the field of Machine Translation (MT) that a well-qualified system should integrate both language-specific linguistic knowledge and language-independent world knowledge. Recently, ontologies have been widely used to model world knowledge in knowledge engineering. This paper introduces an ontology-based English-Chinese MT system in which an ontology was developed to model world knowledge. The ontology has been built partly by organizing concepts into a hierarchy and connecting their internal structures into a network. By mapping words in a specific language to concepts in the ontology, the system can support disambiguation in source-language analysis and word choice in target-language generation. Concepts in the ontology can also act as concepts in interlinguas, which are represented by Conceptual Graphs in our system.

Journal ArticleDOI
Janet Folina
TL;DR: The Philosophy of Mathematics Today is a substantial anthology consisting of twenty papers from many of the leading philosophers in the field, and is divided into five sections: Ontology, Models, and Indeterminacy; Mathematics, Science, and Method; Finitism and Intuitionism; Frege and the Foundations of Arithmetic; and Sets, Structure, and Abstraction.
Abstract: The Philosophy of Mathematics Today is a substantial anthology consisting of twenty papers from many of the leading philosophers in the field. It is, for the most part, a proceedings volume from a conference organized by the editor in 1993. The intended audience is working philosophers of mathematics, as the articles generally presuppose familiarity with the literature. The contributions represent original work by the authors with two exceptions that are modified versions of previously published papers. The volume has an introduction by the editor, and is divided into five sections: Ontology, Models, and Indeterminacy; Mathematics, Science, and Method; Finitism and Intuitionism; Frege and the Foundations of Arithmetic; and Sets, Structure, and Abstraction. I begin with brief summaries of the papers. Unfortunately, summaries do no justice to the substantial content of these contributions; but perhaps this will whet the reader’s appetite. I then comment in more detail on several of the papers that I found most interesting, original and/or thought-provoking. I conclude with some general comments.