
Showing papers on "Upper ontology published in 2005"


Proceedings Article
01 Jan 2005
TL;DR: This article comprehensively reviews works on ontology mapping, provides insights on the pragmatics of ontology mapping, and elaborates on a theoretical approach for defining ontology mapping.
Abstract: Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in a semantically sound manner. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.

748 citations


01 Jan 2005
TL;DR: A survey of the state of the art in ontology evaluation is presented; ontology evaluation assesses a given ontology against a particular criterion, typically in order to determine which of several ontologies would best suit a particular purpose.
Abstract: An ontology is an explicit formal conceptualization of some domain of interest. Ontologies are increasingly used in various fields such as knowledge management, information extraction, and the Semantic Web. Ontology evaluation is the problem of assessing a given ontology from the point of view of a particular criterion of application, typically in order to determine which of several ontologies would best suit a particular purpose. This paper presents a survey of the state of the art in ontology evaluation.

641 citations


Book ChapterDOI
15 Jun 2005
TL;DR: Text2Onto as discussed by the authors is a framework for ontology learning from textual resources, where the learned knowledge is represented at a meta-level in the form of instantiated modeling primitives within a so-called Probabilistic Ontology Model (POM).
Abstract: In this paper we present Text2Onto, a framework for ontology learning from textual resources. Three main features distinguish Text2Onto from our earlier framework TextToOnto as well as other state-of-the-art ontology learning frameworks. First, by representing the learned knowledge at a meta-level in the form of instantiated modeling primitives within a so-called Probabilistic Ontology Model (POM), we remain independent of a concrete target language while being able to translate the instantiated primitives into any (reasonably expressive) knowledge representation formalism. Second, user interaction is a core aspect of Text2Onto, and the fact that the system calculates a confidence for each learned object makes it possible to design sophisticated visualizations of the POM. Third, by incorporating strategies for data-driven change discovery, we avoid processing the whole corpus from scratch each time it changes, only selectively updating the POM according to the corpus changes instead. Besides increasing efficiency in this way, it also allows a user to trace the evolution of the ontology with respect to the changes in the underlying corpus.

597 citations


Book ChapterDOI
06 Nov 2005
TL;DR: In this article, the authors present a framework for introducing design patterns that facilitate or improve the techniques used during ontology lifecycle, and some distinctions are drawn between kinds of ontology design patterns.
Abstract: The paper presents a framework for introducing design patterns that facilitate or improve the techniques used during ontology lifecycle. Some distinctions are drawn between kinds of ontology design patterns. Some content-oriented patterns are presented in order to illustrate their utility at different degrees of abstraction, and how they can be specialized or composed. The proposed framework and the initial set of patterns are designed in order to function as a pipeline connecting domain modelling, user requirements, and ontology-driven tasks/queries to be executed.

502 citations


Book
01 Jul 2005
TL;DR: This volume presents current research in ontology learning from three perspectives: methodologies that automatically extract information from texts and give it a structured organization (including approaches based on machine learning techniques), evaluation methods, and application scenarios.
Abstract: This volume brings together ontology learning, knowledge acquisition and other related topics. It presents current research in ontology learning, addressing three perspectives. The first perspective looks at methodologies that have been proposed to automatically extract information from texts and to give a structured organization to such knowledge, including approaches based on machine learning techniques. Then there are evaluation methods for ontology learning, aiming at defining procedures and metrics for a quantitative evaluation of the ontology learning task; and finally application scenarios that make ontology learning a challenging area in the context of real applications such as bio-informatics. According to the three perspectives mentioned above, the book is divided into three sections, each including a selection of papers addressing respectively the methods, the applications and the evaluation of ontology learning approaches.

488 citations


Book ChapterDOI
06 Nov 2005
TL;DR: A new string metric for the comparison of names is presented, which performs better in ontology alignment as well as in many other field-matching problems.
Abstract: Ontologies are today a key part of every knowledge-based system. They provide a source of shared and precisely defined terms, resulting in system interoperability by knowledge sharing and reuse. Unfortunately, the variety of ways that a domain can be conceptualized results in the creation of different ontologies with contradicting or overlapping parts. For this reason ontologies need to be brought into mutual agreement (aligned). One important method for ontology alignment is the comparison of class and property names of ontologies using string-distance metrics. Quite a lot of such metrics exist in the literature today, but all of them were initially developed for different applications and fields, resulting in poor performance when applied in this new domain. In the current paper we present a new string metric for the comparison of names which performs better in ontology alignment as well as in many other field-matching problems.

465 citations
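The name-comparison step described in this entry can be illustrated with a simple normalized edit-distance similarity. This is a generic sketch of the technique, not the specific metric proposed in the paper; the underscore-stripping normalization is an assumption about how naming conventions like `hasAuthor` vs. `has_author` might be reconciled.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n]

def name_similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1] for class/property names.
    Lowercases and drops underscores so that names differing only in
    naming convention compare as equal (an illustrative assumption)."""
    a, b = a.lower().replace("_", ""), b.lower().replace("_", "")
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))
```

A matcher would typically pair names whose similarity exceeds a chosen threshold, e.g. `name_similarity("hasAuthor", "has_author")` yields 1.0.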


Journal ArticleDOI
01 Oct 2005
TL;DR: The experimental results show that the news agent based on the fuzzy ontology can effectively operate for news summarization and an experimental website is constructed to test the approach.
Abstract: In this paper, a fuzzy ontology and its application to news summarization are presented. The fuzzy ontology with fuzzy concepts is an extension of the domain ontology with crisp concepts. It is more suitable than the domain ontology for describing domain knowledge when solving uncertainty reasoning problems. First, the domain ontology with various events of news is predefined by domain experts. The document preprocessing mechanism will generate the meaningful terms based on the news corpus and the Chinese news dictionary defined by the domain expert. Then, the meaningful terms will be classified according to the events of the news by the term classifier. The fuzzy inference mechanism will generate the membership degrees for each fuzzy concept of the fuzzy ontology. Every fuzzy concept has a set of membership degrees associated with various events of the domain ontology. In addition, a news agent based on the fuzzy ontology is also developed for news summarization. The news agent contains five modules, including a retrieval agent, a document preprocessing mechanism, a sentence path extractor, a sentence generator, and a sentence filter to perform news summarization. Furthermore, we construct an experimental website to test the proposed approach. The experimental results show that the news agent based on the fuzzy ontology can effectively operate for news summarization.

377 citations


Journal ArticleDOI
01 Oct 2005
TL;DR: An initial validation of the Ontology Auditor on the DARPA Agent Markup Language (DAML) library of domain ontologies indicates that the metrics are feasible and highlights the wide variation in quality among ontologies in the library.
Abstract: A suite of metrics is proposed to assess the quality of an ontology. Drawing upon semiotic theory, the metrics assess the syntactic, semantic, pragmatic, and social aspects of ontology quality. We operationalize the metrics and implement them in a prototype tool called the Ontology Auditor. An initial validation of the Ontology Auditor on the DARPA Agent Markup Language (DAML) library of domain ontologies indicates that the metrics are feasible and highlights the wide variation in quality among ontologies in the library. The contribution of the research is to provide a theory-based framework that developers can use to develop high quality ontologies and that applications can use to choose appropriate ontologies for a given task.

330 citations


Book Chapter
01 Jan 2005
TL;DR: This volume brings together a collection of extended versions of selected papers from two workshops on ontology learning, knowledge acquisition and related topics that were organized in the context of the European Conference on Artificial Intelligence (ECAI) 2004 and the International Conference on Knowledge Engineering and Management (EKAW) 2004.
Abstract: This volume brings together a collection of extended versions of selected papers from two workshops on ontology learning, knowledge acquisition and related topics that were organized in the context of the European Conference on Artificial Intelligence (ECAI) 2004 and the International Conference on Knowledge Engineering and Management (EKAW) 2004. The volume presents current research in ontology learning, addressing three perspectives: methodologies that have been proposed to automatically extract information from texts and to give a structured organization to such knowledge, including approaches based on machine learning techniques; evaluation methods for ontology learning, aiming at defining procedures and metrics for a quantitative evaluation of the ontology learning task; and finally application scenarios that make ontology learning a challenging area in the context of real applications such as bio-informatics. According to the three perspectives mentioned above, the book is divided into three sections, each including a selection of papers addressing respectively the methods, the applications and the evaluation of ontology learning approaches. However, all selected papers pay considerable attention to the evaluation perspective, as this was a central topic of the ECAI 2004 workshop out of which most of the papers in this volume originate.

292 citations


Book ChapterDOI
29 May 2005
TL;DR: This work proposes a model for the exploitation of ontology-based KBs to improve search over large document repositories, which includes an ontological-based scheme for the semi-automatic annotation of documents, and a retrieval system.
Abstract: Semantic search has been one of the motivations of the Semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based KBs to improve search over large document repositories. Our approach includes an ontology-based scheme for the semi-automatic annotation of documents, and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm, and a ranking algorithm. Semantic search is combined with keyword-based search to achieve tolerance to KB incompleteness. Our proposal is illustrated with sample experiments showing improvements with respect to keyword-based search, and providing ground for further research and discussion.

270 citations
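The entry above combines semantic search with keyword-based search to tolerate KB incompleteness. A minimal sketch of that combination step, assuming a simple linear interpolation with an illustrative weight `alpha` (the paper's actual weighting and ranking algorithms are more elaborate):

```python
from typing import Optional

def combined_score(semantic: Optional[float], keyword: float,
                   alpha: float = 0.7) -> float:
    """Blend an ontology-based relevance score with a keyword score.
    When the knowledge base has no annotation for a document
    (semantic is None), fall back to the keyword score alone --
    this is the tolerance to KB incompleteness."""
    if semantic is None:
        return keyword
    return alpha * semantic + (1 - alpha) * keyword
```

With `alpha = 0.7`, a document with semantic score 1.0 and keyword score 0.0 would rank at 0.7, while an unannotated document keeps its plain keyword score.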


Proceedings ArticleDOI
10 May 2005
TL;DR: A number of simple debugging cues generated from the description logic reasoner, Pellet, are integrated in the hypertextual ontology development environment, Swoop, to significantly improve the OWL debugging experience, and point the way to more general improvements in the presentation of an ontology to new users.
Abstract: As an increasingly large number of OWL ontologies become available on the Semantic Web and the descriptions in the ontologies become more complicated, finding the cause of errors becomes an extremely hard task even for experts. Existing ontology development environments provide some limited support, in conjunction with a reasoner, for detecting and diagnosing errors in OWL ontologies. Typically these are restricted to the mere detection of, for example, unsatisfiable concepts. We have integrated a number of simple debugging cues generated from our description logic reasoner, Pellet, in our hypertextual ontology development environment, Swoop. These cues, in conjunction with extensive undo/redo and Annotea based collaboration support in Swoop, significantly improve the OWL debugging experience, and point the way to more general improvements in the presentation of an ontology to new users.

Book ChapterDOI
29 May 2005
TL;DR: A model for the semantics of change for OWL ontologies, considering structural, logical, and user-defined consistency is presented, and resolution strategies to ensure that consistency is maintained as the ontology evolves are introduced.
Abstract: Support for ontology evolution is extremely important in ontology engineering and application of ontologies in dynamic environments. A core aspect in the evolution process is to guarantee the consistency of the ontology when changes occur. In this paper we discuss the consistent evolution of OWL ontologies. We present a model for the semantics of change for OWL ontologies, considering structural, logical, and user-defined consistency. We introduce resolution strategies to ensure that consistency is maintained as the ontology evolves.

Journal ArticleDOI
TL;DR: A number of debugging cues generated from the authors' reasoner, Pellet, are integrated in their hypertextual ontology development environment, Swoop, and it is demonstrated that these debugging cues significantly improve the OWL debugging experience, and point the way to more general improvements in the presentation of an ontology to users.

01 Jan 2005
TL;DR: The conceptual model summarized in this paper represents the foundation for Semantic Web Services from the viewpoint of the Web Service Modeling Ontology (WSMO) Working Group.
Abstract: This paper outlines some of the main issues related to the semantic modeling of Web Services and provides an overview of the Web Service Modeling Ontology (WSMO) - an ontology for Semantic Web Services. The design principles of this ontology are highlighted and a short description of the top-level elements is given. The conceptual model summarized in this paper represents the foundation for Semantic Web Services from the viewpoint of the Web Service Modeling Ontology (WSMO) Working Group.

Book ChapterDOI
01 Jan 2005
TL;DR: The Semantic Web needs languages to represent its semantic information; the goal is to help developers find the most suitable language for their representation needs.
Abstract: Ontologies are being used in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web—known as the Semantic Web—which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way."1 This definition does not differ too much from the one used for defining an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization."2 In fact, new ontology-based applications and knowledge architectures are being developed for this new Web. A common claim for all of these approaches is the need for languages to represent the semantic information that this Web requires—solving the heterogeneous data exchange in this heterogeneous environment. Here, we don't decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs.

Book ChapterDOI
TL;DR: OntoMerge, an online system for ontology merging and automated reasoning, can implement ontology translation with inputs and outputs in OWL or other web languages.
Abstract: Ontologies are a crucial tool for formally specifying the vocabulary and relationship of concepts used on the Semantic Web. In order to share information, agents that use different vocabularies must be able to translate data from one ontological framework to another. Ontology translation is required when translating datasets, generating ontology extensions, and querying through different ontologies. OntoMerge, an online system for ontology merging and automated reasoning, can implement ontology translation with inputs and outputs in OWL or other web languages. Ontology translation can be thought of in terms of formal inference in a merged ontology. The merge of two related ontologies is obtained by taking the union of the concepts and the axioms defining them, and then adding bridging axioms that relate their concepts. The resulting merged ontology then serves as an inferential medium within which translation can occur. Our internal representation, Web-PDDL, is a strongly typed first-order logic language for web applications. Using a uniform notation for all problems allows us to factor out syntactic and semantic translation problems, and focus on the latter. Syntactic translation is done by an automatic translator between Web-PDDL and OWL or other web languages. Semantic translation is implemented using an inference engine (OntoEngine) which processes assertions and queries in Web-PDDL syntax, running in either a data-driven (forward chaining) or demand-driven (backward chaining) way.

Book ChapterDOI
01 Jan 2005
TL;DR: This paper presents how to build an ontology in the legal domain following the ontology development methodology METHONTOLOGY and using the ontology engineering workbench WebODE.
Abstract: This paper presents how to build an ontology in the legal domain following the ontology development methodology METHONTOLOGY and using the ontology engineering workbench WebODE. Both of them have been widely used to develop ontologies in many other domains. The ontology used to illustrate this paper has been extracted from an existing class taxonomy proposed by Breuker, and adapted to the Spanish legal domain.

Journal ArticleDOI
TL;DR: A framework for analyzing the existing methodologies that compares them to a set of general criteria is provided and a classification is obtained based upon the direction of ontology construction.
Abstract: In the current literature of knowledge management and artificial intelligence, several different approaches have been proposed to the problem of developing domain ontologies from scratch. All these approaches deal fundamentally with three problems: (1) providing a collection of general terms describing classes and relations to be employed in the description of the domain itself; (2) organizing the terms into a taxonomy of the classes by the ISA relation; and (3) expressing in an explicit way the constraints that make the ISA pairs meaningful. Though a number of such approaches can be found, no systematic analysis of them exists which can be used to understand the inspiring motivation, the applicability context, and the structure of the approaches. In this paper, we provide a framework for analyzing the existing methodologies that compares them to a set of general criteria. In particular, we obtain a classification based upon the direction of ontology construction; bottom-up are those methodologies that start with some descriptions of the domain and obtain a classification, while top-down ones start with an abstract view of the domain itself, which is given a priori. The resulting classification is useful not only for theoretical purposes but also in the practice of deployment of ontologies in Information Systems, since it provides a framework for choosing the right methodology to be applied in the specific context, depending also on the needs of the application itself.

Book ChapterDOI
31 Oct 2005
TL;DR: The NRL Security Ontology is more comprehensive and better organized than existing security ontologies, capable of representing more types of security statements and can be applied to any electronic resource.
Abstract: Annotation with security-related metadata enables discovery of resources that meet security requirements. This paper presents the NRL Security Ontology, which complements existing ontologies in other domains that focus on annotation of functional aspects of resources. Types of security information that could be described include mechanisms, protocols, objectives, algorithms, and credentials in various levels of detail and specificity. The NRL Security Ontology is more comprehensive and better organized than existing security ontologies. It is capable of representing more types of security statements and can be applied to any electronic resource. The class hierarchy of the ontology makes it both easy to use and intuitive to extend. We applied this ontology to a Service Oriented Architecture to annotate security aspects of Web service descriptions and queries. A refined matching algorithm was developed to perform requirement-capability matchmaking that takes into account not only the ontology concepts, but also the properties of the concepts.

Journal ArticleDOI
TL;DR: An ontology-based framework to enable semantic interoperability of product information is proposed and a procedure to semi-automatically determine mappings between exactly equivalent concepts across representations of the interacting applications is described.
Abstract: An increasing trend toward product development in a collaborative environment has resulted in the use of various software tools to enhance the product design. This requires a meaningful representation and exchange of product data semantics across different application domains. This paper proposes an ontology-based framework to enable such semantic interoperability. A standards-based approach is used to develop a Product Semantic Representation Language (PSRL). Formal description logic (DAML+OIL) is used to encode the PSRL. Mathematical logic and corresponding reasoning is used to determine semantic equivalences between an application ontology and the PSRL. The semantic equivalence matrix enables resolution of ambiguities created due to differences in syntaxes and meanings associated with terminologies in different application domains. Successful semantic interoperability will form the basis of seamless communication and thereby enable better integration of product development systems. Note to Practitioners: Semantic interoperability of product information refers to automating the exchange of meaning associated with the data, among information resources throughout the product development. This research is motivated by the problems in enabling such semantic interoperability. First, product information is formalized into an explicit, extensible, and comprehensive product semantics representation language (PSRL). The PSRL is open and based on standard W3C constructs. Next, in order to enable semantic translation, the paper describes a procedure to semi-automatically determine mappings between exactly equivalent concepts across representations of the interacting applications. The paper demonstrates that this approach to translation is feasible, but it has not yet been implemented commercially. Current limitations and the directions for further research are discussed. Future research addresses the determination of semantic similarities (not exact equivalences) between the interacting information resources.

Journal Article
TL;DR: The intention of this essay is to give an overview of different methods that learn ontologies or ontology-like structures from unstructured text.
Abstract: After the vision of the Semantic Web was broadcasted at the turn of the millennium, ontology became a synonym for the solution to many problems concerning the fact that computers do not understand human language: if there were an ontology and every document were marked up with it and we had agents that would understand the markup, then computers would finally be able to process our queries in a really sophisticated way. Some years later, the success of Google shows us that the vision has not come true, being hampered by the incredible amount of extra work required for the intellectual encoding of semantic mark-up – as compared to simply uploading an HTML page. To alleviate this acquisition bottleneck, the field of ontology learning has since emerged as an important sub-field of ontology engineering. It is widely accepted that ontologies can facilitate text understanding and automatic processing of textual resources. Moving from words to concepts not only mitigates data sparseness issues, but also promises appealing solutions to polysemy and homonymy by finding non-ambiguous concepts that may map to various realizations in – possibly ambiguous – words. Numerous applications using lexical-semantic databases like WordNet (Miller, 1990) and its non-English counterparts, e.g. EuroWordNet (Vossen, 1997) or CoreNet (Choi and Bae, 2004) demonstrate the utility of semantic resources for natural language processing. Learning semantic resources from text instead of manually creating them might be dangerous in terms of correctness, but has undeniable advantages: Creating resources for text processing from the texts to be processed will fit the semantic component neatly and directly to them, which will never be possible with general-purpose resources. Further, the cost per entry is greatly reduced, giving rise to much larger resources than an advocate of a manual approach could ever afford. 
On the other hand, none of the methods used today are good enough for creating semantic resources of any kind in a completely unsupervised fashion, although automatic methods can facilitate manual construction to a large extent. The term ontology is understood in a variety of ways and has been used in philosophy for many centuries. In contrast, the notion of ontology in the field of computer science is younger, but used almost as inconsistently when it comes to the details of the definition. The intention of this essay is to give an overview of different methods that learn ontologies or ontology-like structures from unstructured text. Ontology learning from other sources, issues in description languages, ontology editors, ontology merging and ontology evolving transcend the scope of this article. Surveys on ontology learning from text and other sources can be found in Ding and Foo (2002) and Gomez-Perez

Book ChapterDOI
06 Nov 2005
TL;DR: Omen, an Ontology Mapping ENhancer, is based on a set of meta-rules that capture the influence of the ontology structure and of existing matches in order to match nodes that are neighbours of matched nodes in the two ontologies.
Abstract: Most existing ontology mapping tools are inexact. Inexact ontology mapping rules, if not rectified, result in imprecision in the applications that use them. We describe a framework to probabilistically improve existing ontology mappings using a Bayesian Network. Omen, an Ontology Mapping ENhancer, is based on a set of meta-rules that capture the influence of the ontology structure and the existing matches to match nodes that are neighbours of matched nodes in the two ontologies. We have implemented a prototype ontology matcher that can either map concepts across two input ontologies or enhance existing matches between ontology concepts. Preliminary experiments demonstrate that Omen enhances existing ontology mappings in our test cases.
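The intuition behind such meta-rules — a confident match between two nodes lends support to matches between their neighbours — can be sketched as a single update pass. This is a deliberately simplified illustration, not Omen's actual Bayesian-network inference; the 0.9 confidence threshold and 0.2 boost are illustrative assumptions.

```python
def boost_neighbour_matches(match_prob, edges_a, edges_b, boost=0.2):
    """One pass of a neighbour meta-rule: for each confident match (a, b),
    raise the probability of candidate matches between a's neighbours in
    ontology A and b's neighbours in ontology B.
    match_prob: {(node_a, node_b): probability}
    edges_a/edges_b: adjacency {node: [neighbours]} for each ontology."""
    updated = dict(match_prob)
    for (a, b), p in match_prob.items():
        if p < 0.9:          # only confident matches propagate influence
            continue
        for a2 in edges_a.get(a, []):
            for b2 in edges_b.get(b, []):
                if (a2, b2) in match_prob:
                    updated[(a2, b2)] = min(1.0, match_prob[(a2, b2)] + boost)
    return updated
```

For example, a 0.95 match between `Person` and `Human` would lift a tentative 0.6 match between their neighbouring properties `name` and `label` to 0.8.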

Journal ArticleDOI
TL;DR: The cohesion metrics examine the fundamental quality of cohesion as it relates to ontologies in order to effectively make use of domain specific ontology development.
Abstract: Recently, domain specific ontology development has been driven by research on the Semantic Web. Ontologies have been suggested for use in many application areas targeted by the Semantic Web, such as dynamic web service composition and general web service matching. Fundamental characteristics of these ontologies must be determined in order to effectively make use of them: for example, Sirin, Hendler and Parsia have suggested that determining fundamental characteristics of ontologies is important for dynamic web service composition. Our research examines cohesion metrics for ontologies. The cohesion metrics examine the fundamental quality of cohesion as it relates to ontologies.
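A structural metric in the spirit of the cohesion metrics mentioned above can be sketched by counting connected components in the class graph of an ontology. This is a generic illustration, not one of the paper's actual metrics; fewer components plausibly indicates a more cohesive class hierarchy.

```python
def num_components(classes, subclass_of):
    """Count connected components in the undirected graph whose vertices
    are ontology classes and whose edges are subclass relations.
    classes: iterable of class names
    subclass_of: iterable of (child, parent) pairs."""
    adj = {c: set() for c in classes}
    for child, parent in subclass_of:
        adj[child].add(parent)
        adj[parent].add(child)
    seen, components = set(), 0
    for c in classes:
        if c in seen:
            continue
        components += 1
        stack = [c]          # iterative depth-first search
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(adj[node] - seen)
    return components
```

An ontology with classes A, B, C linked by subclass edges plus an isolated class D would score 2 components, flagging the disconnected fragment.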

Book ChapterDOI
05 Dec 2005
TL;DR: This paper describes the publicly available ‘Semantic Web for Research Communities’ (SWRC) ontology, in which research communities and relevant related concepts are modelled, and describes the design decisions that underlie the ontology.
Abstract: Representing knowledge about researchers and research communities is a prime use case for distributed, locally maintained, interlinked and highly structured information in the spirit of the Semantic Web. In this paper we describe the publicly available ‘Semantic Web for Research Communities’ (SWRC) ontology, in which research communities and relevant related concepts are modelled. We describe the design decisions that underlie the ontology and report on both experiences with and known usages of the SWRC Ontology. We believe that for making the Semantic Web reality the re-usage of ontologies and their continuous improvement by user communities is crucial. Our contribution aims to provide a description and usage guidelines to make the value of the SWRC explicit and to facilitate its re-use.

Journal ArticleDOI
TL;DR: This semantic analysis approach can be used in semantic annotation and transcoding systems, which take into consideration the user's environment, including preferences, devices used, available network bandwidth and content identity.
Abstract: An approach to knowledge-assisted semantic video object detection based on a multimedia ontology infrastructure is presented. Semantic concepts in the context of the examined domain are defined in an ontology, enriched with qualitative attributes (e.g., color homogeneity), low-level features (e.g., color model components distribution), object spatial relations, and multimedia processing methods (e.g., color clustering). Semantic Web technologies are used for knowledge representation in the RDF(S) metadata standard. Rules in F-logic are defined to describe how tools for multimedia analysis should be applied, depending on concept attributes and low-level features, for the detection of video objects corresponding to the semantic concepts defined in the ontology. This supports flexible and managed execution of various application and domain independent multimedia analysis tasks. Furthermore, this semantic analysis approach can be used in semantic annotation and transcoding systems, which take into consideration the user's environment, including preferences, devices used, available network bandwidth and content identity. The proposed approach was tested for the detection of semantic objects on video data of three different domains.

Book ChapterDOI
06 Nov 2005
TL;DR: In this paper, the source and target ontologies are first translated into Bayesian networks (BN) and the concept mapping between the two ontologies is treated as evidential reasoning between the translated BNs.
Abstract: This paper presents our ongoing effort on developing a principled methodology for automatic ontology mapping based on BayesOWL, a probabilistic framework we developed for modeling uncertainty on the Semantic Web. In this approach, the source and target ontologies are first translated into Bayesian networks (BN); the concept mapping between the two ontologies is then treated as evidential reasoning between the two translated BNs. Probabilities needed for constructing conditional probability tables (CPT) during translation and for measuring semantic similarity during mapping are learned using text classification techniques, where each concept in an ontology is associated with a set of semantically relevant text documents obtained by ontology-guided web mining. The basic ideas of this approach are validated by positive results from computer experiments on two small real-world ontologies.
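The similarity step can be sketched in miniature. The paper learns probabilities with text classifiers over documents mined for each concept; the toy below substitutes a simple Jaccard term overlap over hypothetical concept documents, just to show the shape of "compare concepts via their associated text":

```python
# Toy concept-similarity sketch: each ontology concept is associated
# with text; similarity is term overlap (Jaccard). The real approach
# uses text classifiers; the concepts and documents here are invented.
docs = {
    "ont1:Vehicle": "car truck engine wheel road transport",
    "ont2:Automobile": "car engine wheel driver road",
    "ont2:Animal": "dog cat fur paw leash",
}

def jaccard(a, b):
    """Jaccard overlap of the term sets of two concepts' documents."""
    ta, tb = set(docs[a].split()), set(docs[b].split())
    return len(ta & tb) / len(ta | tb)

# Map ont1:Vehicle to the most textually similar target concept.
best = max(("ont2:Automobile", "ont2:Animal"),
           key=lambda c: jaccard("ont1:Vehicle", c))
print(best)
```

In BayesOWL these similarities feed evidential reasoning between the translated Bayesian networks rather than being used directly as the mapping score.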

Proceedings ArticleDOI
02 Oct 2005
TL;DR: A number of metrics are applied in an attempt to investigate their appropriateness for ranking ontologies; the results show that AKTiveRank has great utility, although there is potential for improvement.
Abstract: In view of the need to provide tools to facilitate the re-use of existing knowledge structures such as ontologies, we present in this paper a system, AKTiveRank, for the ranking of ontologies. AKTiveRank uses as input the search terms provided by a knowledge engineer and, using the output of an ontology search engine, ranks the ontologies. We apply a number of metrics in an attempt to investigate their appropriateness for ranking ontologies, and compare the results with a questionnaire-based human study. Our results show that AKTiveRank will have great utility, although there is potential for improvement.
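One family of metrics AKTiveRank draws on measures how well an ontology's class labels match the query. A hedged sketch of such a class-match measure follows; the weighting scheme, labels and queries are illustrative, not the paper's actual parameters:

```python
# Illustrative class-match metric: score an ontology by how many query
# terms appear in its class labels, weighting exact label matches above
# partial (substring) ones. Weights and data are made up for the sketch.
def class_match(class_labels, query_terms, w_exact=1.0, w_partial=0.4):
    score = 0.0
    for term in query_terms:
        for label in class_labels:
            if term == label.lower():
                score += w_exact
            elif term in label.lower():
                score += w_partial
    return score

onto_a = ["Student", "University", "PhDStudent"]
onto_b = ["Person", "Organisation"]
query = ["student", "university"]

ranked = sorted([("A", onto_a), ("B", onto_b)],
                key=lambda o: class_match(o[1], query), reverse=True)
print([name for name, _ in ranked])
```

AKTiveRank combines several such structural metrics (e.g. also considering the density and centrality of matched classes) before producing a final ranking.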

Proceedings ArticleDOI
19 Sep 2005
TL;DR: A software requirements analysis method based on a domain ontology technique, in which a mapping is established between a software requirements specification and the domain ontology that represents semantic components, allowing requirements engineers to analyze a requirements specification with respect to the semantics of the application domain.
Abstract: We propose a software requirements analysis method based on a domain ontology technique, in which we establish a mapping between a software requirements specification and the domain ontology that represents semantic components. Our ontology system consists of a thesaurus and inference rules; the thesaurus part comprises domain-specific concepts and relationships suitable for semantic processing. It allows requirements engineers to analyze a requirements specification with respect to the semantics of the application domain. More concretely, we demonstrate the following three kinds of semantic processing through a case study: (1) detecting incompleteness and inconsistency in a requirements specification, (2) measuring the quality of a specification with respect to its meaning, and (3) predicting requirements changes based on semantic analysis of a change history.
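The incompleteness check in item (1) can be sketched as follows. A thesaurus records that certain domain concepts require companion concepts, and a specification mentioning one without the other is flagged. The banking-style concepts and the `requires` relationship below are hypothetical, not taken from the paper's case study:

```python
# Toy incompleteness detector: a thesaurus-style map of domain concepts
# to the concepts they require. Concepts and relationships are invented.
thesaurus = {
    "withdraw": {"requires": ["authenticate", "account"]},
    "deposit": {"requires": ["account"]},
}

def missing_concepts(spec_concepts):
    """Flag (mentioned concept, required-but-absent concept) pairs."""
    missing = []
    for c in sorted(spec_concepts):
        for req in thesaurus.get(c, {}).get("requires", []):
            if req not in spec_concepts:
                missing.append((c, req))
    return missing

# A spec that mentions withdrawals but never authentication.
spec = {"withdraw", "account"}
print(missing_concepts(spec))
```

The paper's method works over a richer thesaurus (multiple relationship types) plus inference rules, but the pattern of checking a specification's concept set against domain-mandated relationships is the same.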

Journal ArticleDOI
TL;DR: This paper introduces an approach to generating ontologies based on table analysis called TANGO (Table ANalysis for Generating Ontologies), a formalized method of processing the format and content of tables that can serve to incrementally build a relevant reusable conceptual ontology.
Abstract: At the heart of today's information-explosion problems are issues involving semantics, mutual understanding, concept matching, and interoperability. Ontologies and the Semantic Web are offered as a potential solution, but creating ontologies for real-world knowledge is nontrivial. If we could automate the process, we could significantly improve our chances of making the Semantic Web a reality. While understanding natural language is difficult, tables and other structured information make it easier to interpret new items and relations. In this paper we introduce an approach to generating ontologies based on table analysis. We thus call our approach TANGO (Table ANalysis for Generating Ontologies). Based on conceptual modeling extraction techniques, TANGO attempts to (i) understand a table's structure and conceptual content; (ii) discover the constraints that hold between concepts extracted from the table; (iii) match the recognized concepts with ones from a more general specification of related concepts; and (iv) merge the resulting structure with other similar knowledge representations. TANGO is thus a formalized method of processing the format and content of tables that can serve to incrementally build a relevant reusable conceptual ontology.
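Step (i) of TANGO, understanding a table's structure and conceptual content, can be illustrated with a deliberately simple first pass: treat the header row as candidate concepts and each data row as instances related to the row's key. The table and the relation encoding are invented for this sketch; TANGO's actual extraction handles far messier layouts:

```python
# Toy table-to-concepts pass: header cells become candidate concepts,
# and each non-key cell yields a binary relation anchored on the row key.
table = [
    ["Country", "Capital", "Population"],
    ["France", "Paris", "67000000"],
    ["Japan", "Tokyo", "125000000"],
]

header, *rows = table
concepts = header
# Relations as (key concept, attribute concept, key value, attribute value).
relations = [(header[0], attr, row[0], val)
             for row in rows
             for attr, val in zip(header[1:], row[1:])]

print(concepts)
print(relations[0])
```

TANGO's later steps would then discover constraints over these candidate relations (e.g. Capital is functional on Country), match the concepts against a more general specification, and merge the result with existing structures.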

Proceedings ArticleDOI
07 Nov 2005
TL;DR: Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model and can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them.
Abstract: Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.