
Showing papers on "Ontology (information science)" published in 2004


01 Jan 2004
TL;DR: This document provides an introduction to OWL, the Web Ontology Language, by informally describing the features of each of its sublanguages.
Abstract: The OWL Web Ontology Language is designed for use by applications that need to process the content of information instead of just presenting information to humans. OWL facilitates greater machine interpretability of Web content than that supported by XML, RDF, and RDF Schema (RDF-S) by providing additional vocabulary along with a formal semantics. OWL has three increasingly expressive sublanguages: OWL Lite, OWL DL, and OWL Full. This document is written for readers who want a first impression of the capabilities of OWL. It provides an introduction to OWL by informally describing the features of each of the sublanguages of OWL. Some knowledge of RDF Schema is useful for understanding this document, but not essential. After this document, interested readers may turn to the OWL Guide for more detailed descriptions and extensive examples on the features of OWL. The normative formal definition of OWL can be found in the OWL Semantics and Abstract Syntax. Status of this document: This document has been reviewed by W3C Members and other interested parties, and it has been endorsed by the Director as a W3C Recommendation. W3C's role in making the Recommendation is to draw attention to the specification and to promote its widespread deployment. This enhances the functionality and interoperability of the Web. This is one of six parts of the W3C Recommendation for OWL, the Web Ontology Language. It has been developed by the Web Ontology Working Group as part of the W3C Semantic Web Activity (Activity Statement, Group Charter) for publication on 10 February 2004. The design of OWL expressed in earlier versions of these documents has been widely reviewed and satisfies the Working Group's technical requirements. The Working Group has addressed all comments received, making changes as necessary.
Changes to this document since the Proposed Recommendation version are detailed in the change log. Comments are welcome at public-webont-comments@w3.org (archive) and general discussion of related technology is welcome at www-rdf-logic@w3.org (archive). A list of implementations is available. The W3C maintains a list of any patent disclosures related to this work. This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
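The layering the abstract describes, with OWL adding vocabulary and a formal semantics on top of RDF and RDF Schema, can be illustrated with a minimal, stdlib-only Python sketch. The triples and the two entailment rules below are a drastic simplification for illustration, not a real OWL reasoner:

```python
# A minimal sketch (not a real OWL reasoner): statements are
# (subject, predicate, object) triples, and we implement two entailment
# rules that OWL inherits from RDF Schema -- transitivity of subClassOf
# and propagation of rdf:type along it.
triples = {
    ("ex:Lion", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
    ("ex:simba", "rdf:type", "ex:Lion"),
}

def entail(triples):
    """Saturate the triple set under the two RDFS rules."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, p1, b) in facts:
            for (c, p2, d) in facts:
                if p1 == "rdfs:subClassOf" and p2 == "rdfs:subClassOf" and b == c:
                    new.add((a, "rdfs:subClassOf", d))   # subclass transitivity
                if p1 == "rdf:type" and p2 == "rdfs:subClassOf" and b == c:
                    new.add((a, "rdf:type", d))          # type propagation
        if not new <= facts:
            facts |= new
            changed = True
    return facts

facts = entail(triples)
print(("ex:simba", "rdf:type", "ex:Animal") in facts)  # True: derived, not asserted
```

The point of a formal semantics is exactly this: the derived triple is licensed by the rules, so any conformant consumer of the data will agree on it.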

4,147 citations


01 Jan 2004
TL;DR: The goal of this dissertation is to find and provide the basis for a managerial tool that allows a firm to easily express its business logic and provide a software prototype to capture a company's business model in an information system.
Abstract: The goal of this dissertation is to find and provide the basis for a managerial tool that allows a firm to easily express its business logic. The methodological basis for this work is design science, where the researcher builds an artifact to solve a specific problem. In this case the aim is to provide an ontology that makes it possible to make a firm's business model explicit. In other words, the proposed artifact helps a firm to formally describe its value proposition, its customers, the relationship with them, the necessary intra- and inter-firm infrastructure and its profit model. Such an ontology is relevant because until now there has been no model that expresses a company's global business logic from a pure business point of view. Previous models essentially take an organizational or process perspective or cover only parts of a firm's business logic. The four main pillars of the ontology, which are inspired by management science and enterprise and process modeling, are product, customer interface, infrastructure and finance. The ontology is validated by case studies and a panel of experts and managers. The dissertation also provides a software prototype to capture a company's business model in an information system. The last part of the thesis consists of a demonstration of the value of the ontology in business strategy and Information Systems (IS) alignment. Structure of this thesis: The dissertation is structured in nine parts: Chapter 1 presents the motivations of this research, the research methodology with which the goals shall be achieved and why this dissertation presents a contribution to research. Chapter 2 investigates the origins, the term and the concept of business models. It defines what is meant by business models in this dissertation and how they are situated in the context of the firm. In addition, this chapter outlines the possible uses of the business model concept.
Chapter 3 gives an overview of the research done in the field of business models and enterprise ontologies. Chapter 4 introduces the major contribution of this dissertation: the business model ontology. In this part of the thesis the elements, attributes and relationships of the ontology are explained and described in detail. Chapter 5 presents a case study of the Montreux Jazz Festival, whose business model was captured by applying the structure and concepts of the ontology. In fact, it gives an impression of what a business model description based on the ontology looks like. Chapter 6 shows an instantiation of the ontology in a prototype tool: the Business Model Modelling Language (BM2L). This is an XML-based description language that allows one to capture and describe the business model of a firm and has a large potential for further applications. Chapter 7 is about the evaluation of the business model ontology. The evaluation builds on a literature review, a set of interviews with practitioners and case studies. Chapter 8 gives an outlook on possible future research and applications of the business model ontology. The main areas of interest are the alignment of business and information technology (IT)/information systems (IS) and business model comparison. Finally, chapter 9 presents some conclusions.

1,913 citations


Proceedings ArticleDOI
14 Mar 2004
TL;DR: An OWL-encoded context ontology (CONON) is proposed for modeling context in pervasive computing environments and for supporting logic-based context reasoning; it provides extensibility for adding domain-specific ontologies in a hierarchical manner.
Abstract: Here we propose an OWL-encoded context ontology (CONON) for modeling context in pervasive computing environments, and for supporting logic-based context reasoning. CONON provides an upper context ontology that captures general concepts about basic context, and also provides extensibility for adding domain-specific ontologies in a hierarchical manner. Based on this context ontology, we have studied the use of logic reasoning to check the consistency of context information, and to reason over low-level, explicit context to derive high-level, implicit context. Through a performance study of our prototype, we quantitatively evaluate the feasibility of logic-based context reasoning for non-time-critical applications in pervasive computing environments, where we always have to deal carefully with the limitation of computational resources.
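The kind of reasoning described above, deriving high-level, implicit context from low-level, explicit context, can be sketched as follows. All predicate names and the rule are invented for illustration and are not CONON's actual vocabulary:

```python
# Illustrative sketch only: low-level sensed context as triples, plus one
# forward rule deriving high-level, implicit context from it.
context = {
    ("alice", "located_in", "bedroom"),
    ("bedroom", "light_level", "dark"),
    ("alice", "posture", "lying"),
}

def derive(ctx):
    derived = set(ctx)
    # Rule: a person lying down in a dark room is (probably) sleeping.
    for (person, pred, room) in ctx:
        if pred == "located_in":
            if (room, "light_level", "dark") in ctx and (person, "posture", "lying") in ctx:
                derived.add((person, "status", "sleeping"))
    return derived

ctx = derive(context)
print(("alice", "status", "sleeping") in ctx)  # True: implicit context, derived
```

A real system would express such rules in a logic formalism over the OWL ontology rather than in ad hoc code, which is what makes consistency checking possible.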

1,236 citations



Journal ArticleDOI
01 Dec 2004
TL;DR: The goal of the paper is to provide a reader who may not be very familiar with ontology research with an introduction to the major themes in this research and with pointers to various research projects.
Abstract: Semantic integration is an active area of research in several disciplines, such as databases, information integration, and ontologies. This paper provides a brief survey of the approaches to semantic integration developed by researchers in the ontology community. We focus on the approaches that differentiate ontology research from other related areas. The goal of the paper is to provide a reader who may not be very familiar with ontology research with an introduction to the major themes in this research and with pointers to various research projects. We discuss techniques for finding correspondences between ontologies, declarative ways of representing these correspondences, and the use of these correspondences in various semantic-integration tasks.

1,142 citations


Book ChapterDOI
07 Nov 2004
TL;DR: The architecture of the OWL Plugin is described, the most important features are walked through, and some of the design decisions are discussed.
Abstract: We introduce the OWL Plugin, a Semantic Web extension of the Protege ontology development platform. The OWL Plugin can be used to edit ontologies in the Web Ontology Language (OWL), to access description logic reasoners, and to acquire instances for semantic markup. In many of these features, the OWL Plugin has created and facilitated new practices for building Semantic Web contents, often driven by the needs of and feedback from our users. Furthermore, Protege's flexible open-source platform means that it is easy to integrate custom-tailored components to build real-world applications. This document describes the architecture of the OWL Plugin, walks through its most important features, and discusses some of our design decisions.

1,023 citations


01 May 2004
TL;DR: An inference engine for reasoning with information expressed using the COBRA-ONT ontology and the ongoing research in using the DAML-Time ontology for context reasoning are described.
Abstract: This document describes COBRA-ONT, an ontology for supporting pervasive context-aware systems. COBRA-ONT, expressed in the Web Ontology Language OWL, is a collection of ontologies for describing places, agents and events and their associated properties in an intelligent meeting-room domain. This ontology is developed as a part of the Context Broker Architecture (CoBrA), a broker-centric agent architecture that provides knowledge sharing, context reasoning and privacy protection support for pervasive context-aware systems. We also describe an inference engine for reasoning with information expressed using the COBRA-ONT ontology and the ongoing research in using the DAML-Time ontology for context reasoning.

958 citations


Book
01 Jan 2004
TL;DR: This book covers the theoretical foundations of ontologies, the most outstanding ontologies, methodologies and methods for building ontologies, languages for building ontologies, and ontology tools.
Abstract: Theoretical Foundations of Ontologies.- The Most Outstanding Ontologies.- Methodologies and Methods for Building Ontologies.- Languages for Building Ontologies.- Ontology Tools.

919 citations


Journal ArticleDOI
TL;DR: Extraction of particular biological facts can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators to identify sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency.
Abstract: We have developed Textpresso, a new text-mining system for scientific literature whose capabilities go far beyond those of a simple keyword search engine. Textpresso's two major elements are a collection of the full text of scientific articles split into individual sentences, and the implementation of categories of terms for which a database of articles and individual sentences can be searched. The categories are classes of biological concepts (e.g., gene, allele, cell or cell group, phenotype, etc.) and classes that relate two objects (e.g., association, regulation, etc.) or describe one (e.g., biological process, etc.). Together they form a catalog of types of objects and concepts called an ontology. After this ontology is populated with terms, the whole corpus of articles and abstracts is marked up to identify terms of these categories. The current ontology comprises 33 categories of terms. A search engine enables the user to search for one or a combination of these tags and/or keywords within a sentence or document, and as the ontology allows word meaning to be queried, it is possible to formulate semantic queries. Full text access increases recall of biological data types from 45% to 95%. Extraction of particular biological facts, such as gene-gene interactions, can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators to identify sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency. Textpresso currently focuses on Caenorhabditis elegans literature, with 3,800 full text articles and 16,000 abstracts. The lexicon of the ontology contains 14,500 entries, each of which includes all versions of a specific word or phrase, and it includes all categories of the Gene Ontology database. Textpresso is a useful curation tool, as well as a search engine for researchers, and can readily be extended to other organism-specific corpora of text. Textpresso can be accessed at http://www.textpresso.org or via WormBase at http://www.wormbase.org.
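The core mechanism, sentences marked up with ontology categories and queried by a mix of category tags and keywords, can be sketched roughly like this (the two-category lexicon and sentences are invented; Textpresso's real ontology has 33 categories):

```python
# Toy sketch of category-based sentence search: each sentence is marked up
# with the ontology categories whose terms it contains, and a query may mix
# category tags and plain keywords, matched within a single sentence.
lexicon = {
    "gene":       {"lin-12", "daf-16"},
    "regulation": {"activates", "represses", "regulates"},
}

def markup(sentence):
    words = sentence.lower().split()
    return {cat for cat, terms in lexicon.items() if any(w in terms for w in words)}

def search(sentences, categories=(), keywords=()):
    hits = []
    for s in sentences:
        cats = markup(s)
        if all(c in cats for c in categories) and all(k in s.lower() for k in keywords):
            hits.append(s)
    return hits

corpus = [
    "lin-12 activates daf-16 in the vulva",
    "the vulva develops normally",
]
print(search(corpus, categories=["gene", "regulation"]))
```

Only the first sentence matches, because the query asks for a gene term and a regulation term co-occurring in one sentence; this is the semantic-query step that plain keyword search cannot express.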

699 citations


Proceedings ArticleDOI
03 Sep 2004
TL;DR: How SOUPA can be extended and used to support the applications of CoBrA, a broker-centric agent architecture for building smart meeting rooms, and MoGATU, a peer-to-peer data management system for pervasive environments, is discussed.
Abstract: We describe a shared ontology called SOUPA - standard ontology for ubiquitous and pervasive applications. SOUPA is designed to model and support pervasive computing applications. This ontology is expressed using the Web ontology language OWL and includes modular component vocabularies to represent intelligent agents with associated beliefs, desires, and intentions, time, space, events, user profiles, actions, and policies for security and privacy. We discuss how SOUPA can be extended and used to support the applications of CoBrA, a broker-centric agent architecture for building smart meeting rooms, and MoGATU, a peer-to-peer data management system for pervasive environments.

660 citations


Proceedings Article
22 Aug 2004
TL;DR: A wholly intrinsic measure of Information Content that relies on hierarchical structure alone is presented, which is consequently easier to calculate, yet when used as the basis of a similarity mechanism it yields judgments that correlate more closely with human assessments than other, extrinsic measures of IC that additionally employ corpus analysis.
Abstract: Information Content (IC) is an important dimension of word knowledge when assessing the similarity of two terms or word senses. The conventional way of measuring the IC of word senses is to combine knowledge of their hierarchical structure from an ontology like WordNet with statistics on their actual usage in text as derived from a large corpus. In this paper we present a wholly intrinsic measure of IC that relies on hierarchical structure alone. We report that this measure is consequently easier to calculate, yet when used as the basis of a similarity mechanism it yields judgments that correlate more closely with human assessments than other, extrinsic measures of IC that additionally employ corpus analysis.
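One way to realize a wholly intrinsic IC of this kind is to score each concept by its number of hyponyms: concepts with many descendants are general and thus carry little information, while leaves are maximally informative. The five-concept taxonomy below is invented, and the exact formula is an assumption for illustration, not necessarily the paper's:

```python
import math

# Intrinsic IC from taxonomy structure alone: no corpus statistics needed.
children = {
    "entity": ["animal", "artifact"],
    "animal": ["cat", "dog"],
    "artifact": [],
    "cat": [], "dog": [],
}

def hyponym_count(c):
    """Number of descendants of concept c in the taxonomy."""
    return sum(1 + hyponym_count(ch) for ch in children[c])

def intrinsic_ic(c):
    max_nodes = 1 + hyponym_count("entity")   # total concepts in the taxonomy
    return 1.0 - math.log(hyponym_count(c) + 1) / math.log(max_nodes)

print(round(intrinsic_ic("entity"), 3))  # 0.0: the root is maximally general
print(intrinsic_ic("cat"))               # 1.0: leaves are maximally informative
```

Such a measure can then be plugged into an IC-based similarity metric in place of a corpus-derived IC, which is what makes the approach "wholly intrinsic".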

Proceedings ArticleDOI
17 May 2004
TL;DR: This paper presents MWSAF (METEOR-S Web Service Annotation Framework), a framework for semi-automatically marking up Web service descriptions with ontologies; algorithms have been developed to match and annotate WSDL files with relevant ontologies.
Abstract: The World Wide Web is emerging not only as an infrastructure for data, but also for a broader variety of resources that are increasingly being made available as Web services. Relevant current standards like UDDI, WSDL, and SOAP are in their fledgling years and form the basis of making Web services a workable and broadly adopted technology. However, realizing the fuller scope of the promise of Web services and associated service oriented architecture will require further technological advances in the areas of service interoperation, service discovery, service composition, and process orchestration. Semantics, especially as supported by the use of ontologies, and related Semantic Web technologies, are likely to provide better qualitative and scalable solutions to these requirements. Just as semantic annotation of data in the Semantic Web is the first critical step to better search, integration and analytics over heterogeneous data, semantic annotation of Web services is an equally critical first step to achieving the above promise. Our approach is to work with existing Web services technologies and combine them with ideas from the Semantic Web to create a better framework for Web service discovery and composition. In this paper we present MWSAF (METEOR-S Web Service Annotation Framework), a framework for semi-automatically marking up Web service descriptions with ontologies. We have developed algorithms to match and annotate WSDL files with relevant ontologies. We use domain ontologies to categorize Web services into domains. An empirical study of our approach is presented to help evaluate its performance.

Journal ArticleDOI
TL;DR: The many similarities between database-schema evolution and ontology evolution allow us to build on the extensive research in schema evolution, but there are also important differences between database schemas and ontologies.
Abstract: As ontology development becomes a more ubiquitous and collaborative process, ontology versioning and evolution becomes an important area of ontology research. The many similarities between database-schema evolution and ontology evolution will allow us to build on the extensive research in schema evolution. However, there are also important differences between database schemas and ontologies. The differences stem from different usage paradigms, the presence of explicit semantics and different knowledge models. Many problems that existed only in theory in database research come to the forefront as practical problems in ontology evolution. These differences have important implications for the development of ontology-evolution frameworks: The traditional distinction between versioning and evolution is not applicable to ontologies. There are several dimensions along which compatibility between versions must be considered. The set of change operations for ontologies is different. We must develop automatic techniques for finding similarities and differences between versions.

Book ChapterDOI
01 Jan 2004
TL;DR: This chapter studies ontology matching: the problem of finding the semantic mappings between two given ontologies, which lies at the heart of numerous information processing applications.
Abstract: This chapter studies ontology matching: the problem of finding the semantic mappings between two given ontologies. This problem lies at the heart of numerous information processing applications. Virtually any application that involves multiple ontologies must establish semantic mappings among them, to ensure interoperability. Examples of such applications arise in myriad domains, including e-commerce, knowledge management, e-learning, information extraction, bio-informatics, web services, and tourism (see Part D of this book on ontology applications).
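A deliberately naive baseline makes the problem statement concrete: map each concept of one ontology to its most lexically similar concept in the other. Real matchers also exploit structure, instances, and background knowledge; the concept lists and threshold here are invented:

```python
from difflib import SequenceMatcher

# Naive label-based ontology matching: for each concept in ontology A,
# pick the most lexically similar concept in ontology B, if it clears
# a similarity threshold.
def match(onto_a, onto_b, threshold=0.6):
    mappings = {}
    for a in onto_a:
        best, score = None, threshold
        for b in onto_b:
            s = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if s > score:
                best, score = b, s
        if best is not None:
            mappings[a] = best
    return mappings

print(match(["Author", "PaperTitle"], ["author_name", "title", "venue"]))
```

The example already shows why label similarity alone is insufficient: "venue" finds no counterpart, and nothing prevents two concepts of A from mapping to the same concept of B, which is exactly where structural and instance-based evidence comes in.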

Journal ArticleDOI
TL;DR: It is shown how to reduce ontology entailment for the OWL DL and OWL Lite ontology languages to knowledge base satisfiability in (respectively) the SHOIN(D) and SHIF(D) description logics.

Book ChapterDOI
07 Nov 2004
TL;DR: In this paper, Quick Ontology Mapping (QOM) is proposed as a way to trade off between effectiveness and efficiency of the mapping generation algorithm, which has lower run-time complexity than existing prominent approaches.
Abstract: (Semi-)automatic mapping - also called (semi-)automatic alignment - of ontologies is a core task to achieve interoperability when two agents or services use different ontologies. In the existing literature, the focus has so far been on improving the quality of mapping results. We here consider QOM, Quick Ontology Mapping, as a way to trade off between effectiveness (i.e., quality) and efficiency of the mapping generation algorithms. We show that QOM has lower run-time complexity than existing prominent approaches. Then, we show in experiments that this theoretical investigation translates into practical benefits. While QOM gives up some of the possibilities for producing high-quality results in favor of efficiency, our experiments show that this loss of quality is marginal.

Book ChapterDOI
TL;DR: An approach to integrating various similarity methods is presented: similarity is determined through rules encoded by ontology experts, whose results are then combined into one overall result.
Abstract: Ontology mapping is important when working with more than one ontology. Typically, similarity considerations are the basis for this. In this paper an approach to integrate various similarity methods is presented. In brief, we determine similarity through rules which have been encoded by ontology experts. These rules are then combined for one overall result. Several small boosting actions are added. All this is thoroughly evaluated, with very promising results.
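The combination scheme can be sketched as follows (the two rules and their weights are invented for illustration; the paper's actual rules come from ontology experts):

```python
# Each "rule" encodes one expert heuristic as a similarity in [0, 1];
# the rules are evaluated independently and combined by weighted sum.
def label_rule(a, b):
    return 1.0 if a["label"].lower() == b["label"].lower() else 0.0

def parent_rule(a, b):
    return 1.0 if a["parent"] == b["parent"] else 0.0

RULES = [(label_rule, 0.5), (parent_rule, 0.5)]  # weights sum to 1

def combined_similarity(a, b):
    return sum(w * rule(a, b) for rule, w in RULES)

x = {"label": "Car", "parent": "Vehicle"}
y = {"label": "car", "parent": "Vehicle"}
z = {"label": "Automobile", "parent": "Vehicle"}
print(combined_similarity(x, y))  # 1.0
print(combined_similarity(x, z))  # 0.5: labels differ, parents agree
```

Keeping the rules independent is what makes the approach extensible: a new expert heuristic is just one more (rule, weight) pair.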

Journal ArticleDOI
TL;DR: An ontology of time is being developed for describing the temporal content of Web pages and the temporal properties of Web services, which covers topological properties of instants and intervals, measures of duration, and the meanings of clock and calendar terms.
Abstract: In connection with the DAML project for bringing about the Semantic Web, an ontology of time is being developed for describing the temporal content of Web pages and the temporal properties of Web services. This ontology covers topological properties of instants and intervals, measures of duration, and the meanings of clock and calendar terms.
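A fragment of what such an ontology covers, instants, proper intervals, duration, and a couple of topological relations, can be sketched in code, with time points simplified to plain numbers:

```python
from dataclasses import dataclass

# Toy rendition of a time ontology's core notions: a proper interval has a
# begin and an end instant, a duration, and topological relations to other
# instants and intervals.
@dataclass
class Interval:
    begin: float
    end: float

    def duration(self):
        return self.end - self.begin

    def before(self, other):
        """This interval ends strictly before the other begins."""
        return self.end < other.begin

    def inside(self, instant):
        """The instant falls strictly inside this interval."""
        return self.begin < instant < self.end

meeting = Interval(9.0, 10.5)
lunch = Interval(12.0, 13.0)
print(meeting.duration())     # 1.5
print(meeting.before(lunch))  # True
print(meeting.inside(9.25))   # True
```

The real ontology, of course, grounds instants in clock and calendar terms rather than bare numbers, which is precisely the part that makes it useful for annotating Web content.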

Journal ArticleDOI
TL;DR: A method and a tool, OntoLearn, aimed at the extraction of domain ontologies from Web sites, and more generally from documents shared among the members of virtual organizations, are presented; the approach is based on a new word sense disambiguation algorithm called structural semantic interconnections.
Abstract: We present a method and a tool, OntoLearn, aimed at the extraction of domain ontologies from Web sites, and more generally from documents shared among the members of virtual organizations. OntoLearn first extracts a domain terminology from available documents. Then, complex domain terms are semantically interpreted and arranged in a hierarchical fashion. Finally, a general-purpose ontology, WordNet, is trimmed and enriched with the detected domain concepts. The major novel aspect of this approach is semantic interpretation, that is, the association of a complex concept with a complex term. This involves finding the appropriate WordNet concept for each word of a terminological string and the appropriate conceptual relations that hold among the concept components. Semantic interpretation is based on a new word sense disambiguation algorithm, called structural semantic interconnections.

Proceedings Article
22 Aug 2004
TL;DR: A universal measure for comparing the entities of two ontologies is defined, based on a simple and homogeneous comparison principle; one-to-many relationships and circularity in entity descriptions constitute the key difficulties.
Abstract: Interoperability of heterogeneous systems on the Web will be admittedly achieved through an agreement between the underlying ontologies. However, the richer the ontology description language, the more complex the agreement process, and hence the more sophisticated the required tools. Among current ontology alignment paradigms, similarity-based approaches are both powerful and flexible enough for aligning ontologies expressed in languages like OWL. We define a universal measure for comparing the entities of two ontologies that is based on a simple and homogeneous comparison principle: Similarity depends on the type of entity and involves all the features that make its definition (such as superclasses, properties, instances, etc.). One-to-many relationships and circularity in entity descriptions constitute the key difficulties in this context: These are dealt with through local matching of entity sets and iterative computation of recursively dependent similarities, respectively.
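The iterative computation of recursively dependent similarities can be sketched on two invented two-concept ontologies: each pair's similarity mixes a label score with the similarity of the pair's parents, and the circular definition is resolved by iterating toward a fixed point (roots, having no parents, fall back to a structural score of 0 in this toy version):

```python
# Toy fixed-point similarity: entity similarity depends recursively on the
# similarity of related entities (here, just the parents).
onto_a = {"Car": "Vehicle", "Vehicle": None}       # concept -> parent
onto_b = {"Auto": "Vehicles", "Vehicles": None}

def label_sim(a, b):
    """Crude Jaccard similarity on the sets of letters in the labels."""
    sa, sb = set(a.lower()), set(b.lower())
    return len(sa & sb) / len(sa | sb)

def iterate(onto_a, onto_b, alpha=0.5, steps=10):
    sim = {(a, b): label_sim(a, b) for a in onto_a for b in onto_b}
    for _ in range(steps):
        new = {}
        for (a, b) in sim:
            pa, pb = onto_a[a], onto_b[b]
            structural = sim[(pa, pb)] if pa and pb else 0.0
            new[(a, b)] = alpha * label_sim(a, b) + (1 - alpha) * structural
        sim = new
    return sim

sim = iterate(onto_a, onto_b)
print(max(sim, key=sim.get))  # the best-matching pair
```

Despite the weak label overlap between "Car" and "Auto", the structural term lifts that pair well above "Car"/"Vehicles", because their parents match; this is the circularity the abstract refers to, handled by iteration.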

Journal ArticleDOI
01 Dec 2004
TL;DR: It is argued that ontologies in particular and semantics-based technologies in general will play a key role in achieving seamless connectivity.
Abstract: The goal of having networks of seamlessly connected people, software agents and IT systems remains elusive. Early integration efforts focused on connectivity at the physical and syntactic layers. Great strides were made; there are many commercial tools available, for example to assist with enterprise application integration. It is now recognized that physical and syntactic connectivity is not adequate. A variety of research systems have been developed addressing some of the semantic issues. In this paper, we argue that ontologies in particular and semantics-based technologies in general will play a key role in achieving seamless connectivity. We give a detailed introduction to ontologies, summarize the current state of the art for applying ontologies to achieve semantic connectivity and highlight some key challenges.

Journal ArticleDOI
TL;DR: The Mammalian Phenotype Ontology enables robust annotation of mammalian phenotypes in the context of mutations, quantitative trait loci and strains that are used as models of human biology and disease.
Abstract: The Mammalian Phenotype (MP) Ontology enables robust annotation of mammalian phenotypes in the context of mutations, quantitative trait loci and strains that are used as models of human biology and disease. The MP Ontology supports different levels and richness of phenotypic knowledge and flexible annotations to individual genotypes. It continues to develop dynamically via collaborative input from research groups, mutagenesis consortia, and biological domain experts. The MP Ontology is currently used by the Mouse Genome Database and Rat Genome Database to represent phenotypic data.

Proceedings ArticleDOI
17 May 2004
TL;DR: This work presents a service-oriented context-aware middleware (SOCAM) architecture for the building and rapid prototyping of context-aware mobile services, and proposes an ontology-based approach to model various contexts.
Abstract: Computing becomes increasingly mobile and pervasive today; these changes imply that applications and services must be aware and adapt to highly dynamic environments. Today, building context-aware mobile services is a complex and time-consuming task. We present a service-oriented context-aware middleware (SOCAM) architecture for the building and rapid prototyping of context-aware mobile services. We propose an ontology-based approach to model various contexts. Our context model supports semantic representation, context reasoning and context knowledge sharing. We take a service-oriented approach to build our middleware which supports tasks including acquiring, discovering, interpreting, accessing various contexts and interoperability between different context-aware systems.

Book ChapterDOI
TL;DR: Match is an operator that takes two graph-like structures (e.g., conceptual hierarchies or ontologies) and produces a mapping between those nodes of the two graphs that correspond semantically to each other; an algorithm implementing semantic matching is presented and evaluated within the S-Match system.
Abstract: We think of Match as an operator which takes two graph-like structures (e.g., conceptual hierarchies or ontologies) and produces a mapping between those nodes of the two graphs that correspond semantically to each other. Semantic matching is a novel approach where semantic correspondences are discovered by computing, and returning as a result, the semantic information implicitly or explicitly codified in the labels of nodes and arcs. In this paper we present an algorithm implementing semantic matching, and we discuss its implementation within the S-Match system. We also test S-Match against three state of the art matching systems. The results, though preliminary, look promising, in particular for what concerns precision and recall.

Proceedings Article
01 May 2004
TL;DR: It is proposed in this paper that one approach to ontology evaluation should be corpus or data driven, because a corpus is the most accessible form of knowledge and its use allows a measure of the ‘fit’ between an ontology and a domain of knowledge to be derived.
Abstract: The evaluation of ontologies is vital for the growth of the Semantic Web. We consider a number of problems in evaluating a knowledge artifact like an ontology. We propose in this paper that one approach to ontology evaluation should be corpus or data driven. A corpus is the most accessible form of knowledge and its use allows a measure to be derived of the ‘fit’ between an ontology and a domain of knowledge. We consider a number of methods for measuring this ‘fit’ and propose a measure to evaluate structural fit, and a probabilistic approach to identifying the best ontology.
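A toy version of a data-driven 'fit' measure, with an invented corpus and ontologies, scores each ontology by how much of the corpus's vocabulary its concept labels cover, weighted by term frequency:

```python
from collections import Counter

# Corpus-driven "fit": the fraction of corpus tokens covered by an
# ontology's concept labels, so frequent domain terms count more.
corpus = "the patient received a drug dose the drug reduced tumor growth".split()

def fit(ontology_labels, corpus):
    freq = Counter(corpus)
    covered = sum(n for term, n in freq.items() if term in ontology_labels)
    return covered / sum(freq.values())

medical = {"patient", "drug", "dose", "tumor"}
finance = {"asset", "bond", "yield"}
print(fit(medical, corpus) > fit(finance, corpus))  # True
```

Even this crude measure ranks the medical ontology above the financial one for a medical corpus; the paper's structural and probabilistic measures refine the same intuition beyond raw term overlap.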

Book ChapterDOI
07 Nov 2004
TL;DR: A format for expressing alignments in RDF is presented, together with an implementation of this format as an Alignment API, which can be seen as an extension of the OWL API and shares some design goals with it; it is shown how this API can be used for effectively aligning ontologies and completing partial alignments, thresholding alignments, or generating axioms and transformations.
Abstract: Ontologies are seen as the solution to data heterogeneity on the web. However, the available ontologies are themselves source of heterogeneity. This can be overcome by aligning ontologies, or finding the correspondence between their components. These alignments deserve to be treated as objects: they can be referenced on the web as such, be completed by an algorithm that improves a particular alignment, be compared with other alignments and be transformed into a set of axioms or a translation program. We present here a format for expressing alignments in RDF, so that they can be published on the web. Then we propose an implementation of this format as an Alignment API, which can be seen as an extension of the OWL API and shares some design goals with it. We show how this API can be used for effectively aligning ontologies and completing partial alignments, thresholding alignments or generating axioms and transformations.

Journal ArticleDOI
TL;DR: This work proposes a method, ONTOMETRIC, which allows users to measure the suitability of existing ontologies with regard to the requirements of their systems.
Abstract: In recent years, the development of ontology-based applications has increased considerably, mainly in connection with the Semantic Web. Users currently looking for ontologies to incorporate into their systems just use their experience and intuition. This makes it difficult for them to justify their choices. Mainly, this is due to the lack of methods that help the user determine which ontologies are the most appropriate for the new system. To address this deficiency, the present work proposes a method, ONTOMETRIC, which allows users to measure the suitability of existing ontologies with regard to the requirements of their systems.

Journal ArticleDOI
TL;DR: Methods and tools allowing the identification of statistically over- or under-represented terms in a gene dataset; the clustering of functionally related genes within a set; and the retrieval of genes sharing annotations with a query gene are developed.
Abstract: We have developed methods and tools based on the Gene Ontology (GO) resource allowing the identification of statistically over- or under-represented terms in a gene dataset; the clustering of functionally related genes within a set; and the retrieval of genes sharing annotations with a query gene. GO annotations can also be constrained to a slim hierarchy or a given level of the ontology. The source codes are available upon request, and distributed under the GPL license.
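The statistics behind such term-enrichment analyses are typically hypergeometric: how surprising is it that k of the n genes in a study set carry a GO term carried by K of the N genes overall? Below is a self-contained sketch with invented numbers; the paper's exact test may differ:

```python
from math import comb

# Hypergeometric test for GO-term over-representation.
def hypergeom_pmf(k, N, K, n):
    """P(exactly k annotated genes in a sample of n, from N genes of which K are annotated)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def enrichment_pvalue(k, N, K, n):
    """P(X >= k): probability of seeing at least k annotated genes by chance."""
    return sum(hypergeom_pmf(i, N, K, n) for i in range(k, min(K, n) + 1))

# 8 of 10 selected genes carry a term that only 50 of 1000 genes carry overall.
p = enrichment_pvalue(8, 1000, 50, 10)
print(p < 1e-6)  # True: the term is heavily over-represented
```

Under-representation is the mirror image (a left tail, P(X <= k)), and in practice the p-values are corrected for the many GO terms tested simultaneously.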

Proceedings ArticleDOI
17 May 2004
TL;DR: PANKOW (Pattern-based Annotation through Knowledge on the Web), a method which employs an unsupervised, pattern-based approach to categorize instances with regard to an ontology, is proposed.
Abstract: The success of the Semantic Web depends on the availability of ontologies as well as on the proliferation of web pages annotated with metadata conforming to these ontologies. Thus, a crucial question is where to acquire these metadata from. In this paper we propose PANKOW (Pattern-based Annotation through Knowledge on the Web), a method which employs an unsupervised, pattern-based approach to categorize instances with regard to an ontology. The approach is evaluated against the manual annotations of two human subjects. The approach is implemented in OntoMat, an annotation tool for the Semantic Web, and shows very promising results.
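A toy rendition of the pattern-based idea: assign an instance the concept whose Hearst-style patterns co-occur with it most often. PANKOW obtained its counts from web search hits; here we count occurrences in a local string instead, and the patterns, concepts, and text are all invented:

```python
# Unsupervised, pattern-based instance categorization: instantiate each
# lexical pattern for every (instance, concept) pair and count how often
# the resulting phrase occurs; the highest-scoring concept wins.
text = (
    "Paris is a town. towns such as Paris attract tourists. "
    "rivers such as the Seine are long."
)

CONCEPTS = ["town", "river", "hotel"]
PATTERNS = ["{instance} is a {concept}", "{concept}s such as {instance}"]

def categorize(instance, text):
    scores = {
        c: sum(text.count(p.format(instance=instance, concept=c)) for p in PATTERNS)
        for c in CONCEPTS
    }
    return max(scores, key=scores.get)

print(categorize("Paris", text))  # town
```

The appeal of the approach is that it needs no training data: the "supervision" comes entirely from how often the pattern instantiations occur in a large body of text.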

Journal ArticleDOI
TL;DR: The most representative methodologies for building ontologies from scratch are presented, and the proposed techniques, guidelines and methods to help in the construction task are described.
Abstract: Ontologies are an important component in many areas, such as knowledge management and organization, electronic commerce and information retrieval and extraction. Several methodologies for ontology building have been proposed. In this article, we provide an overview of ontology building. We start by characterizing the ontology building process and its life cycle. We present the most representative methodologies for building ontologies from scratch, and the proposed techniques, guidelines and methods to help in the construction task. We analyze and compare these methodologies. We describe current research issues in ontology reuse. Finally, we discuss the current trends in ontology building and its future challenges, namely, the new issues for building ontologies for the Semantic Web.