
Showing papers on "Upper ontology published in 2000"


Proceedings Article
01 Jan 2000
TL;DR: This paper presents PROMPT, a semi-automatic approach to ontology merging and alignment: it performs some tasks automatically and guides the user through those that require intervention, since manual alignment and merging usually constitute a large and tedious portion of the sharing process.
Abstract: Researchers in the ontology-design field have developed the content for ontologies in many domain areas. Recently, ontologies have become increasingly common on the WorldWide Web where they provide semantics for annotations in Web pages. This distributed nature of ontology development has led to a large number of ontologies covering overlapping domains. In order for these ontologies to be reused, they first need to be merged or aligned to one another. The processes of ontology alignment and merging are usually handled manually and often constitute a large and tedious portion of the sharing process. We have developed and implemented PROMPT, an algorithm that provides a semi-automatic approach to ontology merging and alignment. PROMPT performs some tasks automatically and guides the user in performing other tasks for which his intervention is required. PROMPT also determines possible inconsistencies in the state of the ontology, which result from the user’s actions, and suggests ways to remedy these inconsistencies. PROMPT is based on an extremely general knowledge model and therefore can be applied across various platforms. Our formative evaluation showed that a human expert followed 90% of the suggestions that PROMPT generated and that 74% of the total knowledge-base operations invoked by the user were suggested by PROMPT.

1,119 citations
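PROMPT's suggest-then-confirm loop described in the abstract above can be sketched as a ranking of candidate concept pairs for the user to accept or reject. The sketch below is illustrative only: the concept names and the lexical similarity measure are invented, and the real algorithm also exploits structural cues (superclasses, slots) and revises its suggestions after each user action.

```python
from difflib import SequenceMatcher

def suggest_merges(concepts_a, concepts_b, threshold=0.8):
    """Rank candidate concept pairs to merge by lexical name similarity.

    Toy sketch of PROMPT-style suggestion generation, not the published
    algorithm: PROMPT also uses ontology structure and flags
    inconsistencies that result from the user's actions.
    """
    suggestions = []
    for a in concepts_a:
        for b in concepts_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                suggestions.append((a, b, round(score, 2)))
    # Highest-similarity candidates first, for the user to confirm or reject.
    return sorted(suggestions, key=lambda s: -s[2])

print(suggest_merges(["Article", "Author"], ["article", "Writer"]))
```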


Proceedings Article
30 Jul 2000
TL;DR: This paper presents PROMPT, a semi-automatic approach to ontology merging and alignment: it performs some tasks automatically and guides the user through those that require intervention, since manual alignment and merging usually constitute a large and tedious portion of the sharing process.
Abstract: Researchers in the ontology-design field have developed the content for ontologies in many domain areas. Recently, ontologies have become increasingly common on the WorldWide Web where they provide semantics for annotations in Web pages. This distributed nature of ontology development has led to a large number of ontologies covering overlapping domains. In order for these ontologies to be reused, they first need to be merged or aligned to one another. The processes of ontology alignment and merging are usually handled manually and often constitute a large and tedious portion of the sharing process. We have developed and implemented PROMPT, an algorithm that provides a semi-automatic approach to ontology merging and alignment. PROMPT performs some tasks automatically and guides the user in performing other tasks for which his intervention is required. PROMPT also determines possible inconsistencies in the state of the ontology, which result from the user’s actions, and suggests ways to remedy these inconsistencies. PROMPT is based on an extremely general knowledge model and therefore can be applied across various platforms. Our formative evaluation showed that a human expert followed 90% of the suggestions that PROMPT generated and that 74% of the total knowledge-base operations invoked by the user were suggested by PROMPT.

1,002 citations


Journal ArticleDOI
TL;DR: The paper will describe the process of building an ontology, introducing the reader to the techniques and methods currently in use and the open research questions in ontology development.
Abstract: Much of biology works by applying prior knowledge ('what is known') to an unknown entity, rather than the application of a set of axioms that will elicit knowledge. In addition, the complex biological data stored in bioinformatics databases often require the addition of knowledge to specify and constrain the values held in that database. One way of capturing knowledge within bioinformatics applications and databases is the use of ontologies. An ontology is the concrete form of a conceptualisation of a community's knowledge of a domain. This paper aims to introduce the reader to the use of ontologies within bioinformatics. A description of the type of knowledge held in an ontology will be given. The paper will be illustrated throughout with examples taken from bioinformatics and molecular biology, and a survey of current biological ontologies will be presented. From this it will be seen that the use to which the ontology is put largely determines the content of the ontology. Finally, the paper will describe the process of building an ontology, introducing the reader to the techniques and methods currently in use and the open research questions in ontology development.

399 citations


Proceedings Article
20 Aug 2000
TL;DR: A new approach to discovering non-taxonomic conceptual relations from text, building on shallow text processing techniques, is described; it uses a generalized association rule algorithm that not only detects relations between concepts but also determines the appropriate level of abstraction at which to define them.
Abstract: Non-taxonomic relations between concepts appear as a major building block in common ontology definitions. In fact, their definition consumes much of the time needed for engineering an ontology. We here describe a new approach to discover non-taxonomic conceptual relations from text building on shallow text processing techniques. We use a generalized association rule algorithm that not only detects relations between concepts, but also determines the appropriate level of abstraction at which to define relations. This is crucial for an appropriate ontology definition in order that it be succinct and conceptually adequate and, hence, easy to understand, maintain, and extend. We also perform an empirical evaluation of our approach with regard to a manually engineered ontology. For this purpose, we present a new paradigm suited to evaluate the degree to which relations that are learned match relations in a manually engineered ontology.

369 citations
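The generalized association rule idea in the abstract above — counting concept co-occurrences at every level of a taxonomy so that a relation can surface at the right abstraction — can be sketched as follows. The taxonomy, example sentences and support threshold are invented for illustration; the actual system works over shallow-parsed text and also computes confidence to choose the abstraction level.

```python
from collections import Counter
from itertools import combinations

# Invented toy taxonomy: each concept maps to its parent (None = root).
PARENT = {"trout": "fish", "salmon": "fish", "fish": "food",
          "chardonnay": "wine", "merlot": "wine", "wine": "drink",
          "food": None, "drink": None}

def ancestors(concept):
    """A concept together with all of its taxonomic generalizations."""
    out = []
    while concept is not None:
        out.append(concept)
        concept = PARENT[concept]
    return out

def relation_candidates(sentences, min_support=2):
    """Count concept pairs co-occurring in sentences at every level of
    abstraction and keep those above a support threshold.

    Illustrative sketch of the generalized-association-rule idea only."""
    counts = Counter()
    for concepts in sentences:
        for a, b in combinations(concepts, 2):
            for ga in ancestors(a):
                for gb in ancestors(b):
                    counts[(ga, gb)] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# No single specific pair recurs, but the generalized pair (fish, wine) does.
print(relation_candidates([["trout", "chardonnay"], ["salmon", "merlot"]]))
```

The point of generalizing during counting is visible in the example: neither (trout, chardonnay) nor (salmon, merlot) reaches the threshold on its own, yet their common generalization (fish, wine) does.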


Book ChapterDOI
02 Oct 2000
TL;DR: This paper establishes a common framework to compare the expressiveness and reasoning capabilities of "traditional" ontology languages (Ontolingua, OKBC, OCML, FLogic, LOOM) and "web-based" ontology languages, and concludes with the results of applying this framework to the selected languages.
Abstract: The interchange of ontologies across the World Wide Web (WWW) and the cooperation among heterogeneous agents placed on it is the main reason for the development of a new set of ontology specification languages, based on new web standards such as XML or RDF. These languages (SHOE, XOL, RDF, OIL, etc) aim to represent the knowledge contained in an ontology in a simple and human-readable way, as well as allow for the interchange of ontologies across the web. In this paper, we establish a common framework to compare the expressiveness and reasoning capabilities of "traditional" ontology languages (Ontolingua, OKBC, OCML, FLogic, LOOM) and "web-based" ontology languages, and conclude with the results of applying this framework to the selected languages.

176 citations


01 Jan 2000
TL;DR: A new approach for modeling large-scale ontologies that extends well-established methods with transportable methods for modeling ontological axioms, allowing versatile access to and manipulation of axioms via graphical user interfaces.
Abstract: This paper presents a new approach for modeling large-scale ontologies. We extend well-established methods for modeling concepts and relations by transportable methods for modeling ontological axioms. The gist of our approach lies in the way we treat the majority of axioms. They are categorized into different types and specified as complex objects that refer to concepts and relations. Considering language and system particularities, this first layer of representation is then translated into the target representation language. This two-layer approach benefits engineering, because the intended meaning of axioms is captured by the categorization of axioms. Classified object representations allow for versatile access to and manipulations of axioms via graphical user interfaces.

147 citations
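The two-layer treatment of axioms described above — categorize each axiom as a typed object over concepts and relations, then translate it per target language — might look like this in outline. The axiom types and translation rules here are invented for illustration, not the paper's actual catalogue.

```python
from dataclasses import dataclass

@dataclass
class Axiom:
    """First layer: a language-neutral axiom object."""
    kind: str    # category, e.g. "disjoint" or "inverse" (invented examples)
    args: tuple  # the concepts/relations the axiom refers to

def to_flogic(ax):
    """Second layer: translate a categorized axiom into one target
    representation (an FLogic-flavoured rendering, invented here)."""
    if ax.kind == "disjoint":
        a, b = ax.args
        return f"FORALL X: NOT ({a}(X) AND {b}(X))."
    if ax.kind == "inverse":
        r, s = ax.args
        return f"FORALL X,Y: {r}(X,Y) <-> {s}(Y,X)."
    raise ValueError(f"no translation rule for axiom type {ax.kind!r}")

print(to_flogic(Axiom("inverse", ("hasPart", "partOf"))))
```

A second translator targeting a different language (or a different inference engine) would reuse the same axiom objects, which is the portability the two-layer design is after.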


Patent
06 Oct 2000
TL;DR: In this patent, an ontology-based approach is proposed to generate Java-based object-oriented and relational application program interfaces (APIs) from a given ontology, providing application developers with an API that exactly reflects the entity types and relations (classes and methods) that are represented by the database.
Abstract: A system and method lets a user create or import ontologies and create databases and related application software. These databases can be specially tuned to suit a particular need, and each comes with the same error-detection rules to keep the data clean. Such databases may be searched based on meaning, rather than on words-that-begin-with-something. And multiple databases, if generated from the same basic ontology can communicate with each other without any additional effort. Ontology management and generation tools enable enterprises to create databases that use ontologies to improve data integration, maintainability, quality, and flexibility. Only the relevant aspects of the ontology are targeted, extracting out a sub-model that has the power of the full ontology restricted to objects of interest for the application domain. To increase performance and add desired database characteristics, this sub-model is translated into a database system. Java-based object-oriented and relational application program interfaces (APIs) are then generated from this translation, providing application developers with an API that exactly reflects the entity types and relations (classes and methods) that are represented by the database. This generation approach essentially turns the ontology into a set of integrated and efficient databases.

145 citations
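The patent's central move — deriving an object-oriented API directly from an ontology so that generated classes and accessors mirror entity types and relations — can be imitated in a few lines. The patent targets Java and relational back ends; this Python sketch with invented concept names only illustrates the generation idea.

```python
def generate_api(ontology):
    """Build one class per ontology concept; each class records its
    relations and accepts them as constructor keywords (a toy analogue
    of the patent's generated object-oriented API)."""
    classes = {}
    for concept, relations in ontology.items():
        def _init(self, **kw):
            # Store each supplied relation value as an attribute.
            for rel, value in kw.items():
                setattr(self, rel, value)
        classes[concept] = type(concept, (object,), {
            "__init__": _init,
            "relations": tuple(relations),
        })
    return classes

# Hypothetical mini-ontology: two concepts with their relations.
api = generate_api({"Book": ["title", "author"], "Author": ["name"]})
b = api["Book"](title="Ontologies 101", author="A. Smith")
print(b.title)
```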


01 Jan 2000
TL;DR: This work presents the TEXT-TO-ONTO Ontology Learning Environment, which is based on a general architecture for discovering conceptual structures and engineering ontologies from text, and supports both the acquisition of conceptual structures and the mapping of linguistic resources to the acquired structures.
Abstract: Ontologies have become an important means for structuring information and information systems and, hence, important in knowledge as well as in software engineering. However, there remains the problem of engineering large and adequate ontologies within short time frames in order to keep costs low. For this purpose, we present the TEXT-TO-ONTO Ontology Learning Environment, which is based on a general architecture for discovering conceptual structures and engineering ontologies from text. Our Ontology Learning Environment supports both the acquisition of conceptual structures and the mapping of linguistic resources to the acquired structures.

99 citations


Proceedings ArticleDOI
30 May 2000
TL;DR: In this article, the authors present the principles of ontology-supported and ontology-driven conceptual navigation, which realizes the independence between resources and links to facilitate interoperability and reusability.
Abstract: This paper presents the principles of ontology-supported and ontology-driven conceptual navigation. Conceptual navigation realizes the independence between resources and links to facilitate interoperability and reusability. An engine builds dynamic links, assembles resources under an argumentative scheme and allows optimization with a possible constraint, such as the user's available time. Among several strategies, two are discussed in detail with examples of applications. On the one hand, conceptual specifications for linking and assembling are embedded in the resource meta-description with the support of the ontology of the domain to facilitate meta-communication. Resources are like agents looking for conceptual acquaintances with intention. On the other hand, the domain ontology and an argumentative ontology drive the linking and assembling strategies.

89 citations


Book ChapterDOI
28 Jun 2000
TL;DR: A general architecture for discovering conceptual structures and engineering ontologies from text and a new approach for discovering non-taxonomic conceptual relations from text are presented.
Abstract: Ontologies have shown their usefulness in application areas such as information integration, natural language processing, and metadata for the world wide web, to name but a few. However, there remains the problem of engineering large and adequate ontologies within short time frames in order to keep costs low. We here present a general architecture for discovering conceptual structures and engineering ontologies from text, and a new approach for discovering non-taxonomic conceptual relations from text.

73 citations


Journal ArticleDOI
01 Oct 2000 - Mind
TL;DR: The semantics and ontology of dispositions are looked at in the light of recent work on the subject and it is concluded that fragility is not a real property and that, while both temperature and its bases are, this does not generate any problem of overdetermination.
Abstract: The paper looks at the semantics and ontology of dispositions in the light of recent work on the subject. Objections to the simple conditionals apparently entailed by disposition statements are met by replacing them with so-called "reduction sentences" and some implications of this are explored. The usual distinction between categorical and dispositional properties is criticised and the relation between dispositions and their bases examined. Applying this discussion to two typical cases leads to the conclusion that fragility is not a real property and that, while both temperature and its bases are, this does not generate any problem of overdetermination.

Book ChapterDOI
02 Oct 2000
TL;DR: An activity of ontology construction and its deployment in an interface system for oil-refinery plant operation, carried out under the umbrella of the Human-Media Project for four years, is presented.
Abstract: Although the necessity of an ontology and ontological engineering is well understood, there have been few success stories about ontology construction and its deployment to date. This paper presents an activity of ontology construction and its deployment in an interface system for oil-refinery plant operation, which has been carried out under the umbrella of the Human-Media Project for four years. It also describes the reasons why we need an ontology, what ontology we built, what environment we used for building the ontology, and how the ontology is used in the system. The interface has been developed with the intention of establishing a sophisticated technology for advanced interfaces for plant operators, and consists of several agents. The system has been implemented and preliminary evaluation has been completed successfully.

01 Aug 2000
TL;DR: It is concluded that different needs in KR and reasoning may exist in the building of an ontology-based application, and these needs must be evaluated in order to choose the most suitable ontology language(s).
Abstract: The interchange of ontologies across the World Wide Web (WWW) and the cooperation among heterogeneous agents placed on it is the main reason for the development of a new set of ontology specification languages, based on new web standards such as XML or RDF. These languages (SHOE, XOL, RDF, OIL, etc.) aim to represent the knowledge contained in an ontology in a simple and human-readable way, as well as allow for the interchange of ontologies across the web. In this paper, we establish a common framework to compare the expressiveness of "traditional" ontology languages (Ontolingua, OKBC, OCML, FLogic, LOOM) and "web-based" ontology languages. As a result of this study, we conclude that different needs in KR and reasoning may exist in the building of an ontology-based application, and these needs must be evaluated in order to choose the most suitable ontology language(s).

Proceedings ArticleDOI
13 Sep 2000
TL;DR: This paper presents a method for acquiring an application-tailored domain ontology from given heterogeneous intranet sources, together with a comprehensive architecture and a system for semi-automatic ontology acquisition.
Abstract: This paper describes our current and ongoing work in supporting semi-automatic ontology acquisition from the corporate intranet of an insurance company. A comprehensive architecture and a system for semi-automatic ontology acquisition support the processing of semi-structured information (e.g. contained in dictionaries) and natural language documents, and the inclusion of existing core ontologies (e.g. GermaNet, WordNet). We present a method for acquiring an application-tailored domain ontology from given heterogeneous intranet sources.

Proceedings ArticleDOI
04 Jan 2000
TL;DR: DOME is developing techniques for ontology-based information content description and a suite of tools for domain ontology management, which makes it possible to dynamically find relevant data sources based on content and to integrate them as needed.
Abstract: Service-oriented business-to-business e-commerce requires dynamic and open interoperable information systems. Although most large organisations have information regarding products, services and customers stored in databases, and XML/DTD allows these to be published over the Internet, sharing information among these systems has been prevented by semantic heterogeneity. True electronic commerce will not happen until the semantics of the terms used to model these information systems can be captured and processed by computers. Developing a machine processable ontology (vocabulary) is intrinsically hard. The semantics of a term varies from one context to another. We believe ontology engineering will be a major effort of any future application development. In this paper we describe our work on building a Domain Ontology Management Environment (DOME). DOME is developing techniques for ontology-based information content description and a suite of tools for domain ontology management. Information content description extends traditional meta data to an ontology. This makes it possible to dynamically find relevant data sources based on content and to integrate them as needed.

Proceedings Article
01 Jan 2000
TL;DR: In this paper, the majority of axioms are categorized into different types and specified as complex objects that refer to concepts and relations, and this first layer of representation is then translated into the target representation language.
Abstract: This paper presents a new approach for modeling large-scale ontologies. We extend well-established methods for modeling concepts and relations by transportable methods for modeling ontological axioms. The gist of our approach lies in the way we treat the majority of axioms. They are categorized into different types and specified as complex objects that refer to concepts and relations. Considering language and system particularities, this first layer of representation is then translated into the target representation language. This two-layer approach benefits engineering, because the intended meaning of axioms is captured by the categorization of axioms. Classified object representations allow for versatile access to and manipulations of axioms via graphical user interfaces.

Proceedings Article
19 Aug 2000
TL;DR: This paper reports on an effort among the authors to evaluate alternative ontology-exchange languages, and to recommend one or more languages for use within the larger bioinformatics community.
Abstract: Ontologies are specifications of the concepts in a given field, and of the relationships among those concepts. The development of ontologies for molecular-biology information and the sharing of those ontologies within the bioinformatics community are central problems in bioinformatics. If the bioinformatics community is to share ontologies effectively, ontologies must be exchanged in a form that uses standardized syntax and semantics. This paper reports on an effort among the authors to evaluate alternative ontology-exchange languages, and to recommend one or more languages for use within the larger bioinformatics community. The study selected a set of candidate languages, and defined a set of capabilities that the ideal ontology-exchange language should satisfy. The study scored the languages according to the degree to which they satisfied each capability. In addition, the authors performed several ontology-exchange experiments with the two languages that received the highest scores: OML and Ontolingua. The result of those experiments, and the main conclusion of this study, was that the frame-based semantic model of Ontolingua is preferable to the conceptual graph model of OML, but that the XML-based syntax of OML is preferable to the Lisp-based syntax of Ontolingua.
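The study's method — define the capabilities an ideal exchange language should satisfy, score each candidate against them, and compare — reduces to a small table computation. The capability names and scores below are invented placeholders, not the study's data; they are arranged only to echo its stated conclusion (Ontolingua's frame semantics preferable, OML's XML syntax preferable).

```python
CAPABILITIES = ["frame_semantics", "axiom_support", "xml_syntax", "tool_support"]

# Invented illustrative scores on a 0-2 scale; NOT the study's actual scoring.
SCORES = {
    "Ontolingua": {"frame_semantics": 2, "axiom_support": 2,
                   "xml_syntax": 0, "tool_support": 1},
    "OML":        {"frame_semantics": 1, "axiom_support": 1,
                   "xml_syntax": 2, "tool_support": 1},
}

def best_per_capability(scores):
    """For each capability, report the highest-scoring language."""
    langs = list(scores)
    return {cap: max(langs, key=lambda lang: scores[lang][cap])
            for cap in CAPABILITIES}

print(best_per_capability(SCORES))
```

Reporting a winner per capability, rather than a single total, matches the shape of the study's conclusion: no one language dominated on every axis.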

01 Mar 2000
TL;DR: An example of how to resolve semantic ambiguities is provided for the manufacturing concept ‘resource’ and some ideas on how to use PSL for inter-operability in agent-based systems are presented.
Abstract: The problems of inter-operability are acute for manufacturing applications, as applications using process specifications do not necessarily share syntax and definitions of concepts. The Process Specification Language developed at the National Institute of Standards and Technology proposes a formal ontology and translation mechanisms representing manufacturing concepts. When an application becomes ‘PSL compliant,’ its concepts are expressed using PSL, with a direct one-to-one mapping or with a mapping under certain conditions. An example of how to resolve semantic ambiguities is provided for the manufacturing concept ‘resource’. Finally some ideas on how to use PSL for inter-operability in agent-based systems are presented.

Journal ArticleDOI
TL;DR: A living set of features is presented that allows ontologies to be characterized from the user's point of view with the same logical organization.
Abstract: Knowledge reuse by means of ontologies faces three important problems at present: (1) there are no standardized identifying features that characterize ontologies from the user point of view; (2) there are no web sites using the same logical organization, presenting relevant information about ontologies; and (3) the search for appropriate ontologies is hard, time-consuming and usually fruitless. To solve the above problems, we present: (1) a living set of features that allow us to characterize ontologies from the user point of view and have the same logical organization; (2) a living domain ontology about ontologies (called Reference Ontology) that gathers, describes and has links to existing ontologies; and (3) (ONTO)2 Agent, the ontology-based WWW broker about ontologies that uses Reference Ontology as a source of its knowledge and retrieves descriptions of ontologies that satisfy a given set of constraints.

Journal ArticleDOI
TL;DR: A real environment for integrating ontologies supplied by a predetermined set of (experts) users, who might be distributed through a communication network and working cooperatively in the integration process, is introduced.
Abstract: Nowadays, we can find systems and environments supporting processes of ontology building. However, these processes have not yet been specified in enough detail. In this work, a real environment for integrating ontologies supplied by a predetermined set of (expert) users, who might be distributed across a communication network and working cooperatively in the integration process, is introduced. In this environment, the (expert) user can inspect the ontology that is being produced, so he/she is able to refine his/her private ontology. Furthermore, the experts who take part in the ontology construction process are allowed to use their own terminology, even for requesting information about the global-derived ontology, until a specific instant after the integration.

Book ChapterDOI
02 Oct 2000
TL;DR: This paper presents the idea that the life cycle of an ontology is highly impacted by the process of reusing it for building another ontology, and identifies new intra-dependencies and interdependencies between activities carried out in the same and in different ontologies.
Abstract: This paper presents the idea that the life cycle of an ontology is highly impacted as a result of the process of reusing it for building another ontology. One of the more important results of the experiment presented is how the different activities to be carried out during the development of a specific ontology may involve performing other types of activities on other ontologies already built or under construction. We identify in this paper new intra-dependencies between activities carried out inside the same ontology and interdependencies between activities carried out in different ontologies. The interrelation between the life cycles of several ontologies means that integration has to be approached globally rather than as a mere integration of implementations.


01 Jan 2000
TL;DR: A new approach for language-neutral modeling of large-scale ontologies by categorizing axioms into different types and specifying them as complex objects that refer to concepts and relations.
Abstract: In this paper we present a new approach for language-neutral modeling of large-scale ontologies. The gist of our approach lies in the way we treat the majority of axioms. Instead of capturing axiom semantics in some specific representation language, we categorize axioms into different types and specify them as complex objects that refer to concepts and relations. A separate layer that is language-specific, in fact it may even vary for different inference engines working on the same language, describes how these objects are translated into a target representation. In addition to its far reaching independence with regard to specific representation languages, this approach benefits engineering since the semantics of important types of axioms may be much more elucidated in our ontology engineering tool, OntoEdit, than in comparable tools. Furthermore, our approach is principled in a way that allows for comparably easy adaptation of our tool to requirements for modeling axioms in specific domains.

Journal ArticleDOI
TL;DR: This paper discusses a concept which can be expected to be of great importance to formal ontology management, and which is well‐known in traditional software development: refinement.
Abstract: Ontologies have emerged as one of the key issues in information integration and interoperability and in their application to knowledge management and electronic commerce. A trend towards formal methods for ontology management is obvious. This paper discusses a concept which can be expected to be of great importance to formal ontology management, and which is well-known in traditional software development: refinement. We define and discuss ontology refinement, give illustrating examples, and highlight its advantages as compared to other forms of ontology revision. © 2000 John Wiley & Sons, Inc.

Book ChapterDOI
07 Jul 2000
TL;DR: A software gathering service that is mainly supported by an ontology, SoftOnt, and several agents is presented, showing how the SoftOnt ontology is built from distributed and heterogeneous software repositories.
Abstract: Ontologies and agents are two topics that attract particular attention these days, from the theoretical as well as the application point of view. In this paper we present a software gathering service that is mainly supported by an ontology, SoftOnt, and several agents. The main goal of the paper is to show how the SoftOnt ontology is built from distributed and heterogeneous software repositories. In the particular domain considered, software repositories, we advocate the automatic creation of a single global ontology rather than manual creation and the use of multiple ontologies.

Proceedings Article
22 May 2000
TL;DR: This work argues that knowledge representation and ontologies may offer solutions to basic problems in web-based educational systems, and proposes an approach to how ontologies and KNOWLEDGE SPACE may be combined to improve both user modeling and intelligent problem-solving support.
Abstract: In the first section we give a very short survey of current research on web-based educational systems and related problems. In the second section we argue that knowledge representation and ontologies may offer solutions to basic problems in this area, since a well-founded system of concepts, i.e. an ontology, will significantly advance knowledge sharing and interoperability. This will enable both the design of reusable functional components and the design of authoring tools. In the third part we introduce and discuss our approach of organising system knowledge in an ontology. Finally, we propose an approach to how ontologies and KNOWLEDGE SPACE may be combined to improve both user modeling and intelligent problem-solving support.

Henry M. Kim1
01 Jan 2000
TL;DR: A methodology for knowledge management, called the BPD/D Ontological Engineering Methodology, is described that combines and builds on both process- and data-driven approaches to ontology development.
Abstract: A knowledge management system must support the integration of information from disparate sources, wherein a decision maker manipulates information that someone else conceptualised and represented. So the system must minimise ambiguity and imprecision in interpreting shared information. This can be achieved by representing the shared information using ontologies. There are typically two approaches to developing ontologies to support decision making. In one approach, ontologies are developed to support new business processes or decisions, but often are not built from existing data repositories. In the other approach, ontologies are developed from existing data repositories, but often may not support new business processes and decisions. In this paper, a methodology for knowledge management that combines both process and data driven approaches to ontology development and builds on them is described. In this methodology, called the BPD/D Ontological Engineering Methodology, competency questions that state the capability of an ontology to support business processes and decisions are specified. Concurrently, architectural requirements that specify aspects of existing systems that constrain ontology design choices are also stated, so that existing data repositories are explicitly considered and built upon when developing ontologies. Information systems tools to support both ontology-based knowledge management system construction and use can then be designed to support the steps of this methodology.

Book
01 Jan 2000

Proceedings ArticleDOI
04 Jan 2000
TL;DR: An ontology development framework rooted at the core business processes of electronic publishing that can be used to define semantic metadata structures for electronic content is introduced.
Abstract: Converging media industry calls for tighter integration of creativity, business processes, and technologies. Media companies need flexible methods to manage electronic content production and delivery, and metadata is a key enabler in making this goal a reality. However, metadata is useful only if its nature is understood clearly and its structure and usage are well-defined. For this purpose, an ontology, consisting of conceptual models that map the content domain into a limited set of meaningful concepts, is needed. This paper introduces an ontology development framework rooted at the core business processes of electronic publishing that can be used to define semantic metadata structures for electronic content. The framework underlines the different nature of ontology development and metadata publishing, and how these two processes influence each other. This paper discusses also the application of the ontology development framework in practice. The framework has been created in the SmartPush project, where media companies explore new business opportunities for electronic publishing and delivery.

Proceedings ArticleDOI
14 May 2000
TL;DR: The authors use ontology as a foundation to realize knowledge sharing and reuse and discuss the method for building ontology, its principles and implementation.
Abstract: After many years of research work, many knowledge-based intelligent systems have been created. But differences in construction methods and application contexts make it difficult to share and reuse knowledge. This situation leads to the difficulty of building knowledge systems. In order to solve this problem, we use ontology as a foundation to realize knowledge sharing and reuse. As an important research area in AI, the ontology building method has not yet acquired a common view. The authors mainly discuss the method for building an ontology, its principles and implementation.