
Showing papers on "Ontology (information science) published in 2010"


Journal ArticleDOI
TL;DR: The authors review some of the dominant ways that management scholars have addressed technology over the past five decades and demonstrate that while materiality is an integral aspect of organisational activity, it has either been ignored by management research or investigated through an ontology of separateness that cannot account for the multiple and dynamic ways in which the social and the material are constitutively entangled in everyday life.
Abstract: Drawing on a specific scenario from a contemporary workplace, I review some of the dominant ways that management scholars have addressed technology over the past five decades. I will demonstrate that while materiality is an integral aspect of organisational activity, it has either been ignored by management research or investigated through an ontology of separateness that cannot account for the multiple and dynamic ways in which the social and the material are constitutively entangled in everyday life. I will end by pointing to some possible alternative perspectives that may have the potential to help management scholars take seriously the distributed and complex sociomaterial configurations that form and perform contemporary organisations.

887 citations


Journal ArticleDOI
TL;DR: The application of reference ontologies to data is a key problem, and this work presents guidelines on how community ontologies can be presented in an application ontology in a data-driven way.
Abstract: Motivation: Describing biological sample variables with ontologies is complex due to the cross-domain nature of experiments. Ontologies provide annotation solutions; however, for cross-domain investigations, multiple ontologies are needed to represent the data. These are subject to rapid change, are often not interoperable and present complexities that are a barrier to biological resource users. Results: We present the Experimental Factor Ontology, designed to meet cross-domain, application focused use cases for gene expression data. We describe our methodology and open source tools used to create the ontology. These include tools for creating ontology mappings, ontology views, detecting ontology changes and using ontologies in interfaces to enhance querying. The application of reference ontologies to data is a key problem, and this work presents guidelines on how community ontologies can be presented in an application ontology in a data-driven way. Availability: http://www.ebi.ac.uk/efo Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.

468 citations


Book ChapterDOI
01 Jan 2010

447 citations


Tanya Z. Berardini, Dong Li, Eva Huala, Susan M. Bridges, Shane C. Burgess, Fiona M. McCarthy, Seth Carbon, Suzanna E. Lewis, Christopher J. Mungall, A Abdulla, Wood, Erika Feltrin, Giorgio Valle, Rex L. Chisholm, Petra Fey, P Gaudet, Warren A. Kibbe, S. Basu, Y Bushmanova, Karen Eilbeck, Deborah A. Siegele, B. K. McIntosh, Daniel P. Renfro, Adrienne E. Zweifel, James C. Hu, Michael Ashburner, Susan Tweedie, Yasmin Alam-Faruque, Rolf Apweiler, A. Auchinchloss, A Bairoch, Daniel Barrell, David Binns, M. C. Blatter, Lydie Bougueleret, Emmanuel Boutet, Lionel Breuza, Alan Bridge, Paul Browne, W. M. Chan, Elisabeth Coudert, L Daugherty, E. Dimmer, Ruth Y. Eberhardt, Anne Estreicher, L Famiglietti, S. Ferro-Rojas, M Feuermann, Rebecca E. Foulger, Nadine Gruaz-Gumowski, Ursula Hinz, Rachael P. Huntley, S. Jimenez, Florence Jungo, Guillaume Keller, Kati Laiho, Duncan Legge, P Lemercier, Damien Lieberherr, Michele Magrane, Claire O'Donovan, Ivo Pedruzzi, Sylvain Poux, Catherine Rivoire, Bernd Roechert, Tony Sawford, Maria Victoria Schneider, Eleanor J Stanley, Andre Stutz, Shyamala Sundaram, Michael Tognolli, Ioannis Xenarios, Midori A. Harris, Jennifer I. Deegan, Amelia Ireland, Jane Lomax, Pankaj Jaiswal, Marcus C. Chibucos, Michelle G. Giglio, Jennifer R. Wortman, Linda Hannick, R Madupu, David Botstein, Kara Dolinski, Livstone, Rose Oughtred, Judith A. Blake, Carol J. Bult, Alexander D. Diehl, Mary E. Dolan, H. Drabkin, Janan T. Eppig, David P. Hill, L. Ni, Martin Ringwald, D. Sitnikov, C Collmer, T Torto-Alalibo, Stan Laulederkind, Mary Shimoyama, Simon N. Twigger, P D'Eustachio, Lisa Matthews, Rama Balakrishnan, Gail Binkley, J. M. Cherry, Karen R. Christie, Maria C. Costanzo, Engel, Dianna G. Fisk, Jodi E. Hirschman, Benjamin C. Hitz, El Hong, Cynthia J. Krieger, Miyasato, Robert S. Nash, Julie Park, Skrzypek, Sa Weng, Edith D. Wong, Martin Aslett, Juancarlos Chan, Ranjana Kishore, Paul W. Sternberg, K. Van Auken, Varsha K. Khodiyar, Ruth C. Lovering, P.J. Talmud, Doug Howe, Monte Westerfield 
01 Jan 2010
TL;DR: The Gene Ontology (GO) Consortium continues to develop, maintain and use a set of structured, controlled vocabularies for the annotation of genes, gene products and sequences and several new relationship types have been introduced and used to create links between and within the GO domains.
Abstract: The Gene Ontology (GO) Consortium (http://www.geneontology.org) (GOC) continues to develop, maintain and use a set of structured, controlled vocabularies for the annotation of genes, gene products and sequences. The GO ontologies are expanding both in content and in structure. Several new relationship types have been introduced and used, along with existing relationships, to create links between and within the GO domains. These improve the representation of biology, facilitate querying, and allow GO developers to systematically check for and correct inconsistencies within the GO. Gene product annotation using GO continues to increase both in the number of total annotations and in species coverage. GO tools, such as OBO-Edit, an ontology-editing tool, and AmiGO, the GOC ontology browser, have seen major improvements in functionality, speed and ease of use.

416 citations


Journal ArticleDOI
TL;DR: This paper provides an introduction to ontology-based information extraction and reviews the details of different OBIE systems developed so far to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding of their operation.
Abstract: Information extraction (IE) aims to retrieve certain types of information from natural language text by processing it automatically. For example, an IE system might retrieve information about geopolitical indicators of countries from a set of web pages while ignoring other types of information. Ontology-based information extraction (OBIE) has recently emerged as a subfield of information extraction. Here, ontologies - which provide formal and explicit specifications of conceptualizations - play a crucial role in the IE process. Because of the use of ontologies, this field is related to knowledge representation and has the potential to assist the development of the Semantic Web. In this paper, we provide an introduction to ontology-based information extraction and review the details of different OBIE systems developed so far. We attempt to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding of their operation. We also discuss the implementation details of these systems, including the tools used by them and the metrics used to measure their performance. In addition, we attempt to identify possible future directions for this field.
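To make the OBIE idea above more concrete, here is a minimal, hypothetical Python sketch (not any of the surveyed systems): a tiny hand-written ontology whose classes carry lexical patterns guides the extraction of typed instances from free text. Class names and patterns are invented for illustration.

```python
import re

# Toy "ontology": class name -> lexical patterns that signal an instance of it.
# Both the classes and the patterns are hypothetical, for illustration only.
ONTOLOGY = {
    "Country":   [r"\b(France|Germany|Japan)\b"],
    "Indicator": [r"\b(GDP|inflation rate|unemployment rate)\b"],
}

def extract(text):
    """Return (class, mention) pairs found in the text, guided by the ontology."""
    instances = []
    for cls, patterns in ONTOLOGY.items():
        for pat in patterns:
            for match in re.finditer(pat, text):
                instances.append((cls, match.group(0)))
    return instances

if __name__ == "__main__":
    sample = "The GDP of France grew faster than the inflation rate in Germany."
    for cls, mention in extract(sample):
        print(f"{mention} -> {cls}")
```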

409 citations


Journal ArticleDOI
TL;DR: The Gene Ontology (GO) Consortium (http://www.geneontology.org) (GOC) continues to develop, maintain and use a set of structured, controlled vocabularies for the annotation of genes, gene products and sequences.
Abstract: The Gene Ontology (GO) Consortium (http://www.geneontology.org) (GOC) continues to develop, maintain and use a set of structured, controlled vocabularies for the annotation of genes, gene products and sequences. The GO ontologies are expanding both in content and in structure. Several new relationship types have been introduced and used, along with existing relationships, to create links between and within the GO domains. These improve the representation of biology, facilitate querying, and allow GO developers to systematically check for and correct inconsistencies within the GO. Gene product annotation using GO continues to increase both in the number of total annotations and in species coverage. GO tools, such as OBO-Edit, an ontology-editing tool, and AmiGO, the GOC ontology browser, have seen major improvements in functionality, speed and ease of use.

401 citations


Patent
15 Mar 2010
TL;DR: In this article, a system, method and/or computer program product is presented for automatically generating questions and answers based on any corpus of data: given a collection of textual documents, it automatically generates collections of questions about the documents together with answers to those questions.
Abstract: A system, method and/or computer program product for automatically generating questions and answers based on any corpus of data. The computer system, given a collection of textual documents, automatically generates collections of questions about the documents together with answers to those questions. In particular, such a process can be applied to the so-called ‘open’ domain, where the type of the corpus is not given in advance, and neither is the ontology of the corpus. The system improves the exploration of large bodies of textual information. Applications implementing the system and method include new types of tutoring systems, educational question-answering games, national security and business analysis systems, etc.
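Purely as a toy illustration of generating question/answer pairs from a corpus (the patent's actual pipeline is not detailed here), the sketch below turns simple 'X is Y.' statements into questions with their answers; the sentence pattern and example corpus are hypothetical.

```python
import re

def generate_qa(corpus):
    """Turn simple 'X is Y.' statements into (question, answer) pairs."""
    pairs = []
    for sentence in re.split(r"(?<=[.!?])\s+", corpus.strip()):
        match = re.match(r"(?P<subject>[A-Z][\w\s]+?) is (?P<rest>.+?)\.?$", sentence)
        if match:
            pairs.append((f"What is {match.group('subject')}?", match.group("rest")))
    return pairs

if __name__ == "__main__":
    text = ("An ontology is a formal specification of a conceptualization. "
            "Paris is the capital of France.")
    for question, answer in generate_qa(text):
        print(question, "->", answer)
```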

371 citations


Journal ArticleDOI
29 Sep 2010-PLOS ONE
TL;DR: A practical strategy for organizing, mining, and correlating global collections of large-scale genomic data to explore normal and disease biology, demonstrating how a data-driven analysis across very large collections of genomic data can reveal novel discoveries and evidence to support existing hypotheses.
Abstract: Background The investigation of the interconnections between the molecular and genetic events that govern biological systems is essential if we are to understand the development of disease and design effective novel treatments. Microarray and next-generation sequencing technologies have the potential to provide this information. However, taking full advantage of these approaches requires that biological connections be made across large quantities of highly heterogeneous genomic datasets. Leveraging the increasingly huge quantities of genomic data in the public domain is fast becoming one of the key challenges in the research community today.

352 citations


Journal ArticleDOI
TL;DR: The Human Phenotype Ontology (HPO) provides a standardized, controlled vocabulary for describing phenotypic abnormalities, supporting semantic-similarity searches for clinical diagnostics and large-scale computational analysis of the human phenome.
Abstract: A standardized, controlled vocabulary allows phenotypic information to be described in an unambiguous fashion in medical publications and databases. The Human Phenotype Ontology (HPO) is being developed in an effort to provide such a vocabulary. The use of an ontology to capture phenotypic information allows the use of computational algorithms that exploit semantic similarity between related phenotypic abnormalities to define phenotypic similarity metrics, which can be used to perform database searches for clinical diagnostics or as a basis for incorporating the human phenome into large-scale computational analysis of gene expression patterns and other cellular phenomena associated with human disease. The HPO is freely available at http://www.human-phenotype-ontology.org.
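The abstract mentions semantic-similarity metrics over phenotype terms. The sketch below shows one common flavor of such a metric (a Resnik-style score based on the information content of the most informative common ancestor) over a tiny invented hierarchy; it uses made-up terms and frequencies, not the real HPO.

```python
import math

# Tiny made-up is_a hierarchy (child -> parents); real HPO terms are not used here.
PARENTS = {
    "Phenotypic abnormality": [],
    "Abnormal heart morphology": ["Phenotypic abnormality"],
    "Ventricular septal defect": ["Abnormal heart morphology"],
    "Atrial septal defect": ["Abnormal heart morphology"],
}
# Hypothetical term frequencies from an annotation corpus.
FREQ = {"Phenotypic abnormality": 100, "Abnormal heart morphology": 20,
        "Ventricular septal defect": 5, "Atrial septal defect": 4}

def ancestors(term):
    result = {term}
    for parent in PARENTS[term]:
        result |= ancestors(parent)
    return result

def information_content(term):
    # Annotations propagate upward, so an ancestor's count includes its descendants'.
    count = sum(f for t, f in FREQ.items() if term in ancestors(t))
    total = sum(FREQ.values())
    return -math.log(count / total)

def resnik_similarity(a, b):
    """Information content of the most informative common ancestor."""
    common = ancestors(a) & ancestors(b)
    return max(information_content(t) for t in common)

if __name__ == "__main__":
    print(resnik_similarity("Ventricular septal defect", "Atrial septal defect"))
```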

348 citations



Journal ArticleDOI
TL;DR: A methodology for ontology development that is now being used by multiple groups of researchers in different life science domains is tested and refined, and some of its principles are being applied, especially within the framework of the OBO (Open Biomedical Ontologies) Foundry initiative.
Abstract: Since 2002 we have been testing and refining a methodology for ontology development that is now being used by multiple groups of researchers in different life science domains. Gary Merrill, in a recent paper in this journal, describes some of the reasons why this methodology has been found attractive by researchers in the biological and biomedical sciences. At the same time he assails the methodology on philosophical grounds, focusing specifically on our recommendation that ontologies developed for scientific purposes should be constructed in such a way that their terms are seen as referring to what we call universals or types in reality. As we show, Merrill's critique is of little relevance to the success of our realist project, since it not only reveals no actual errors in our work but also criticizes views on universals that we do not in fact hold. However, it nonetheless provides us with a valuable opportunity to clarify the realist methodology, and to show how some of its principles are being applied, especially within the framework of the OBO (Open Biomedical Ontologies) Foundry initiative.

BookDOI
17 Sep 2010
TL;DR: Theory and Applications of Ontology: Computer Applications presents ontology in ways that philosophers are not likely to find elsewhere, and introduces the reader to current research on frameworks and applications in information technology in ways that are sure to invite reflection and constructive responses from ontologists in philosophy.
Abstract: Ontology was once understood to be the philosophical inquiry into the structure of reality: the analysis and categorization of what there is. Recently, however, a field called ontology has become part of the rapidly growing research industry in information technology. The two fields have more in common than just their name. Theory and Applications of Ontology is a two-volume anthology that aims to further an informed discussion about the relationship between ontology in philosophy and ontology in information technology. It fills an important lacuna in cutting-edge research on ontology in both fields, supplying stage-setting overview articles on history and method, presenting directions of current research in either field, and highlighting areas of productive interdisciplinary contact. Theory and Applications of Ontology: Computer Applications presents ontology in ways that philosophers are not likely to find elsewhere. The volume offers an overview of current research in ontology, distinguishing basic conceptual issues, domain applications, general frameworks, and mathematical formalisms. It introduces the reader to current research on frameworks and applications in information technology in ways that are sure to invite reflection and constructive responses from ontologists in philosophy.

Book ChapterDOI
07 Nov 2010
TL;DR: This paper presents a system for finding schema-level links between LOD datasets in the sense of ontology alignment, based on the idea of bootstrapping information already present on the LOD cloud, and presents a comprehensive evaluation which shows that BLOOMS outperforms state-of-the-art ontology alignment systems on LOD datasets.
Abstract: The Web of Data currently coming into existence through the Linked Open Data (LOD) effort is a major milestone in realizing the Semantic Web vision. However, the development of applications based on LOD faces difficulties due to the fact that the different LOD datasets are rather loosely connected pieces of information. In particular, links between LOD datasets are almost exclusively on the level of instances, and schema-level information is being ignored. In this paper, we therefore present a system for finding schema-level links between LOD datasets in the sense of ontology alignment. Our system, called BLOOMS, is based on the idea of bootstrapping information already present on the LOD cloud. We also present a comprehensive evaluation which shows that BLOOMS outperforms state-of-the-art ontology alignment systems on LOD datasets. At the same time, BLOOMS is also competitive compared with these other systems on the Ontology Alignment Evaluation Initiative Benchmark datasets.

Journal ArticleDOI
29 Dec 2010-PLOS ONE
TL;DR: The Hymenoptera Anatomy Ontology provides a foundation through which connections between genomic, evolutionary developmental biology, phylogenetic, taxonomic, and morphological research can be actualized and is available through the OBO Foundry ontology repository and BioPortal.
Abstract: Hymenoptera is an extraordinarily diverse lineage, both in terms of species numbers and morphotypes, that includes sawflies, bees, wasps, and ants. These organisms serve critical roles as herbivores, predators, parasitoids, and pollinators, with several species functioning as models for agricultural, behavioral, and genomic research. The collective anatomical knowledge of these insects, however, has been described or referred to by labels derived from numerous, partially overlapping lexicons. The resulting corpus of information—millions of statements about hymenopteran phenotypes—remains inaccessible due to language discrepancies. The Hymenoptera Anatomy Ontology (HAO) was developed to surmount this challenge and to aid future communication related to hymenopteran anatomy. The HAO was built using newly developed interfaces within mx, a Web-based, open source software package, that enables collaborators to simultaneously contribute to an ontology. Over twenty people contributed to the development of this ontology by adding terms, genus differentia, references, images, relationships, and annotations. The database interface returns an Open Biomedical Ontology (OBO) formatted version of the ontology and includes mechanisms for extracting candidate data and for publishing a searchable ontology to the Web. The application tools are subject-agnostic and may be used by others initiating and developing ontologies. The present core HAO data constitute 2,111 concepts, 6,977 terms (labels for concepts), 3,152 relations, 4,361 sensus (links between terms, concepts, and references) and over 6,000 text and graphical annotations. The HAO is rooted with the Common Anatomy Reference Ontology (CARO), in order to facilitate interoperability with and future alignment to other anatomy ontologies, and is available through the OBO Foundry ontology repository and BioPortal. The HAO provides a foundation through which connections between genomic, evolutionary developmental biology, phylogenetic, taxonomic, and morphological research can be actualized. Inherent mechanisms for feedback and content delivery demonstrate the effectiveness of remote, collaborative ontology development and facilitate future refinement of the HAO.

Book ChapterDOI
07 Nov 2010
TL;DR: This paper describes an ontology-based streaming data access service in which sources link their data content to ontologies through S2O mappings and users can query the ontology using SPARQLStream, an extension of SPARQL for streaming data.
Abstract: The availability of streaming data sources is progressively increasing thanks to the development of ubiquitous data capturing technologies such as sensor networks. The heterogeneity of these sources introduces the requirement of providing data access in a unified and coherent manner, whilst allowing the user to express their needs at an ontological level. In this paper we describe an ontology-based streaming data access service. Sources link their data content to ontologies through S2O mappings. Users can query the ontology using SPARQLStream, an extension of SPARQL for streaming data. A preliminary implementation of the approach is also presented. With this proposal we expect to set the basis for future efforts in ontology-based streaming data integration.
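As a rough, assumption-laden sketch of the general idea (raw sensor tuples lifted to ontology-level statements via a declarative mapping, then queried over a time window), consider the toy code below; the mapping, the ex: predicate names, and the windowed-average query stand in for the paper's S2O mappings and SPARQLStream queries.

```python
from collections import namedtuple

Reading = namedtuple("Reading", "sensor_id value timestamp")

# Hypothetical mapping from raw tuple fields to ontology-level predicates.
MAPPING = {"sensor_id": "ex:observedBy", "value": "ex:hasValue"}

def lift(reading):
    """Turn a raw tuple into subject-predicate-object triples at the ontology level."""
    subject = f"obs-{reading.timestamp}"
    return [(subject, MAPPING["sensor_id"], reading.sensor_id),
            (subject, MAPPING["value"], reading.value)]

def windowed_average(stream, now, window_seconds):
    """Answer an 'average value over the last N seconds' query over the stream."""
    values = [r.value for r in stream if now - r.timestamp <= window_seconds]
    return sum(values) / len(values) if values else None

if __name__ == "__main__":
    stream = [Reading("temp-1", 20.5, 100),
              Reading("temp-1", 21.0, 160),
              Reading("temp-1", 23.5, 205)]
    for reading in stream:
        print(lift(reading))
    print("avg over last 60 s:", windowed_average(stream, now=210, window_seconds=60))
```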


Proceedings ArticleDOI
22 Jun 2010
TL;DR: The concept, architecture and key design decisions of Smart-M3 interoperability platform, based on the ideas of space-based information sharing and semantic web ideas about information representation and ontologies, are described.
Abstract: We describe the concept, architecture and key design decisions of Smart-M3 interoperability platform. The platform is based on the ideas of space-based information sharing and semantic web ideas about information representation and ontologies. The interoperability platform has been used as the basis for multiple case studies.

Journal ArticleDOI
TL;DR: The experimental results show that the proposed approach can work effectively and that the menu can be provided as a reference for the involved diabetics after diet validation by domain experts.
Abstract: It has been widely pointed out that classical ontology is not sufficient to deal with imprecise and vague knowledge for some real-world applications like personal diabetic-diet recommendation. On the other hand, fuzzy ontology can effectively help to handle and process uncertain data and knowledge. This paper proposes a novel ontology model, which is based on interval type-2 fuzzy sets (T2FSs), called type-2 fuzzy ontology (T2FO), with applications to knowledge representation in the field of personal diabetic-diet recommendation. The T2FO is composed of 1) a type-2 fuzzy personal profile ontology (type-2 FPPO); 2) a type-2 fuzzy food ontology (type-2 FFO); and 3) a type-2 fuzzy-personal food ontology (type-2 FPFO). In addition, the paper also presents a T2FS-based intelligent diet-recommendation agent (IDRA), including 1) T2FS construction; 2) a T2FS-based personal ontology filter; 3) a T2FS-based fuzzy inference mechanism; 4) a T2FS-based diet-planning mechanism; 5) a T2FS-based menu-recommendation mechanism; and 6) a T2FS-based semantic-description mechanism. In the proposed approach, first, the domain experts plan the diet goal for the involved diabetics and create the nutrition facts of common Taiwanese food. Second, the involved diabetics are requested to routinely input eaten items. Third, the ontology-creating mechanism constructs a T2FO, including a type-2 FPPO, a type-2 FFO, and a set of type-2 FPFOs. Finally, the T2FS-based IDRA retrieves the built T2FO to recommend a personal diabetic meal plan. The experimental results show that the proposed approach can work effectively and that the menu can be provided as a reference for the involved diabetics after diet validation by domain experts.
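To illustrate the interval type-2 notion the paper builds on (not its T2FO or IDRA implementation), the sketch below represents a fuzzy term by a lower and an upper membership function, so each input maps to an interval of membership degrees; the 'high sugar content' term and its parameters are invented.

```python
def triangular(x, a, b, c):
    """Standard triangular type-1 membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def interval_type2_membership(x):
    """Membership of x (grams of sugar per serving) in a hypothetical term 'high sugar content'.

    An interval type-2 fuzzy set bounds the uncertainty about the membership
    degree with a lower and an upper membership function.
    """
    lower = triangular(x, a=15.0, b=30.0, c=45.0)
    upper = triangular(x, a=10.0, b=30.0, c=50.0)
    return lower, upper

if __name__ == "__main__":
    for grams in (12.0, 25.0, 40.0):
        lo, hi = interval_type2_membership(grams)
        print(f"{grams} g -> membership in [{lo:.2f}, {hi:.2f}]")
```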

Journal ArticleDOI
TL;DR: A new approach for automatic learning of terminological ontologies from a text corpus based on probabilistic topic models; experiments show that the method outperforms other methods in terms of recall and precision measures.
Abstract: Probabilistic topic models were originally developed and utilized for document modeling and topic extraction in Information Retrieval. In this paper, we describe a new approach for automatic learning of terminological ontologies from a text corpus based on such models. In our approach, topic models are used as efficient dimension reduction techniques, which are able to capture semantic relationships between word-topic and topic-document interpreted in terms of probability distributions. We propose two algorithms for learning terminological ontologies using the principle of topic relationship and exploiting information theory with the probabilistic topic models learned. Experiments with different model parameters were conducted and learned ontology statements were evaluated by the domain experts. We have also compared the results of our method with two existing concept hierarchy learning methods on the same data set. The study shows that our method outperforms other methods in terms of recall and precision measures. The precision level of the learned ontology is sufficient for it to be deployed for the purpose of browsing, navigation, and information search and retrieval in digital libraries.
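As a loose illustration of the information-theoretic ingredient (not the paper's actual algorithms), the sketch below takes invented word distributions for two learned topics and uses the asymmetry of KL divergence as a cue for which topic is the broader candidate parent in a terminological hierarchy.

```python
import math

def kl_divergence(p, q, vocabulary):
    """KL(p || q) over a shared vocabulary, with a small floor to avoid log(0)."""
    eps = 1e-9
    return sum(p.get(w, eps) * math.log(p.get(w, eps) / q.get(w, eps)) for w in vocabulary)

if __name__ == "__main__":
    # Hypothetical word distributions for a broad topic and a narrow one.
    broad = {"biology": 0.25, "gene": 0.25, "cell": 0.25, "protein": 0.25}
    narrow = {"gene": 0.60, "protein": 0.35, "cell": 0.04, "biology": 0.01}
    vocab = set(broad) | set(narrow)
    # A broad topic tends to "cover" a narrow one better than vice versa, so
    # KL(narrow || broad) tends to be smaller than KL(broad || narrow);
    # that asymmetry is used here as a simple parent-candidate cue.
    if kl_divergence(narrow, broad, vocab) < kl_divergence(broad, narrow, vocab):
        print("candidate relation: broad_topic subsumes narrow_topic")
    else:
        print("candidate relation: narrow_topic subsumes broad_topic")
```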

Book ChapterDOI
01 Jan 2010
TL;DR: The General Formal Ontology (GFO) as discussed by the authors is a foundational ontology integrating objects and processes: it includes categories of objects (3D objects) as well as of processes (4D entities), and both are integrated into one coherent framework.
Abstract: The current chapter presents an overview of the current stage of the foundational ontology GFO (General Formal Ontology). GFO is a foundational ontology integrating objects and processes. It is being developed by the research group Onto-Med (Ontologies in Medicine) at the University of Leipzig. Unique selling properties of GFO are the following: it includes categories of objects (3D objects) as well as of processes (4D entities), and both are integrated into one coherent framework. GFO presents a multi-categorial approach by admitting universals, concepts, and symbol structures and their interrelations. GFO adopts categories pertaining to levels of reality, and it is designed to support interoperability by principles of ontological mapping and reduction. GFO contains several novel ontological modules, in particular, a module for functions and a module for roles. GFO is designed for applications, firstly in medical, biological, and biomedical areas, but also in the fields of economics and sociology.

Journal ArticleDOI
TL;DR: In this article, an improved inconsistency reasoner is proposed, which selects consistent subsets using minimal inconsistent sets and a resolution method, to improve the run-time performance of the reasoning process.
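A minimal sketch of the 'consistent subset via minimal inconsistent sets' idea mentioned in the summary, not the paper's reasoner: one axiom is greedily dropped from each minimal inconsistent set of a toy knowledge base, leaving a consistent remainder. The axioms and the precomputed conflict sets are hypothetical.

```python
def consistent_subset(axioms, minimal_inconsistent_sets):
    """Greedily drop one axiom from every minimal inconsistent set (a simple hitting-set repair)."""
    removed = set()
    for mis in minimal_inconsistent_sets:
        if not (mis & removed):          # this conflict is not yet resolved
            removed.add(sorted(mis)[0])  # drop an arbitrary (here: lexicographically first) culprit
    return [a for a in axioms if a not in removed]

if __name__ == "__main__":
    axioms = ["Bird subClassOf Flies", "Penguin subClassOf Bird",
              "Penguin subClassOf NotFlies", "Tweety type Penguin"]
    # Hypothetical minimal inconsistent sets computed by some debugging procedure.
    mis_list = [frozenset({"Bird subClassOf Flies", "Penguin subClassOf Bird",
                           "Penguin subClassOf NotFlies", "Tweety type Penguin"})]
    print(consistent_subset(axioms, mis_list))
```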

Book
19 Apr 2010
TL;DR: The book proceeds through four settlements (philosophy of science, analytic philosophy, Foucault, and feminism) and then turns from construction to disclosure, developing an ontology of the social.
Abstract: Acknowledgments Introduction 1. The First Settlement: Philosophy of Science 2. The Second Settlement: Analytic Philosophy 3. The Third Settlement: Foucault--We Have Never Been Postmodern 4. The Fourth Settlement: Feminism--From Epistemology to Ontology 5. From Construction to Disclosure: Ontology and the Social Notes References Index

Book ChapterDOI
30 May 2010
TL;DR: This work presents FREyA, which combines syntactic parsing with the knowledge encoded in ontologies in order to reduce the customisation effort, and is evaluated using the Mooney Geoquery dataset with very high precision and recall.
Abstract: With large datasets such as Linked Open Data available, there is a need for more user-friendly interfaces which will bring the advantages of these data closer to casual users. Several recent studies have shown user preference for Natural Language Interfaces (NLIs) in comparison to others. Although many NLIs to ontologies have been developed, those that have reasonable performance are domain-specific and tend to require customisation for each new domain which, from a developer's perspective, makes them expensive to maintain. We present our system FREyA, which combines syntactic parsing with the knowledge encoded in ontologies in order to reduce the customisation effort. If the system fails to automatically derive an answer, it will generate clarification dialogs for the user. The user's selections are saved and used for training the system in order to improve its performance over time. FREyA is evaluated using the Mooney Geoquery dataset with very high precision and recall.
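A small sketch of the clarification-dialog idea under stated assumptions (it is not FREyA's implementation): query tokens are matched against ontology labels, and when a token is ambiguous the chosen reading is stored so later queries need no dialog. The lexicon entries are invented.

```python
# Hypothetical ontology vocabulary: surface token -> candidate ontology entities.
LEXICON = {
    "capital": ["geo:capitalCity", "fin:capitalAmount"],
    "texas": ["geo:Texas"],
}
LEARNED = {}  # token -> entity chosen by users in earlier clarification dialogs

def interpret(token):
    """Map a query token to an ontology entity, learning from past clarifications."""
    candidates = LEXICON.get(token.lower(), [])
    if len(candidates) == 1:
        return candidates[0]
    if token.lower() in LEARNED:
        return LEARNED[token.lower()]
    # Simulate a clarification dialog: here we just pick the first candidate
    # and remember the choice so future queries need no dialog.
    choice = candidates[0] if candidates else None
    if choice:
        LEARNED[token.lower()] = choice
    return choice

if __name__ == "__main__":
    for word in "What is the capital of Texas".split():
        entity = interpret(word)
        if entity:
            print(word, "->", entity)
```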

Book ChapterDOI
01 Jan 2010
TL;DR: This paper shows that denominal verbs in English, of both the location/locatum variety and the unergative variety, are measured out by the incorporated nominal Root, which strongly supports Hale and Keyser's (1993 et seq.) l-syntactic approach, since it shows parallel semantic effects of identical structures in overt syntax and l-syntax, and suggests that English Roots of denominal verbs have inherent semantic properties, in particular 'boundedness', which determine the effects they produce when they are Incremental Themes.
Abstract: Evidence is presented showing that denominal verbs in English, of both the location/locatum variety and the unergative variety, are ‘measured-out’ by the incorporated nominal Root. This strongly supports Hale and Keyser’s (1993 et seq.) l-syntactic approach, since it shows parallel semantic effects of identical structures in overt syntax and l-syntax, and suggests that English Roots of denominal verbs have inherent semantic properties, in particular, ‘boundedness’, which determine the effects they produce when they are Incremental Themes.

Journal ArticleDOI
TL;DR: OntoFox provides a timely, publicly available service with different options for users to collect terms from external ontologies, making them available for reuse by import into client OWL ontologies.
Abstract: Background Ontology development is a rapidly growing area of research, especially in the life sciences domain. To promote collaboration and interoperability between different projects, the OBO Foundry principles require that these ontologies be open and non-redundant, avoiding duplication of terms through the re-use of existing resources. As current options to do so present various difficulties, a new approach, MIREOT, allows specifying import of single terms. Initial implementations allow for controlled import of selected annotations and certain classes of related terms.
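As a toy illustration of single-term import in the spirit of MIREOT (copying only the minimal information needed to reference an external term), the sketch below imports one term's identifier, label, and asserted superclass from an invented external ontology into a client ontology.

```python
# Hypothetical external ontology: term id -> (label, superclass id).
EXTERNAL = {
    "EXT:0001": ("quality", None),
    "EXT:0042": ("mass", "EXT:0001"),
}

def mireot_import(term_id, external, client):
    """Copy only the minimal information about one external term into the client ontology."""
    label, superclass = external[term_id]
    client[term_id] = {"label": label, "subClassOf": superclass, "source": "external"}
    return client

if __name__ == "__main__":
    client_ontology = {}
    mireot_import("EXT:0042", EXTERNAL, client_ontology)
    print(client_ontology)
```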

Dissertation
25 Jun 2010
TL;DR: The NeOn Glossary of Processes and Activities, which identifies and defines the processes and activities potentially involved when ontology networks are collaboratively built, is proposed, together with a set of two ontology network life cycle models.
Abstract: A new ontology development paradigm has started; its emphasis lies on the reuse and possible subsequent reengineering of knowledge resources, on collaborative and argumentative ontology development, and on the building of ontology networks; this new trend is the opposite of building new ontologies from scratch. To help ontology developers in this new paradigm, it is important to provide strong methodological support. This thesis presents the following contributions to the methodological area of the Ontology Engineering field, which we are sure will improve the development and building of ontology networks:
- It proposes the NeOn Glossary of Processes and Activities, which identifies and defines the processes and activities potentially involved when ontology networks are collaboratively built.
- It defines a set of two ontology network life cycle models.
- It identifies and describes a collection of nine scenarios for building ontology networks.
- It provides some methodological guidelines for performing the ontology requirements specification activity, to obtain the requirements that the ontology should fulfil.
- It offers some methodological guidelines for obtaining the ontology network life cycle for a concrete ontology network, as part of scheduling ontology projects. Additionally, the thesis provides the technological support to these guidelines: a tool called gOntt.
- It also proposes some methodological guidelines for the reuse of ontological resources at two different levels of granularity: as a whole (general ontologies and domain ontologies) and using ontology statements.

Journal ArticleDOI
TL;DR: An ontology model of a Product Data and Knowledge Management Semantic Object Model for PLM has been developed, with the aim of building the advantages and features of ontologies into the model.

Proceedings Article
01 May 2010
TL;DR: The notion of a shared mental model is defined by investigating which concepts are relevant for shared mental models, and modeling how they are related by means of UML to obtain a mental model ontology.
Abstract: The notion of a shared mental model is well known in the literature regarding team work among humans. It has been used to explain team functioning. The idea is that team performance improves if team members have a shared understanding of the task that is to be performed and of the involved team work. We maintain that the notion of shared mental model is not only highly relevant in the context of human teams, but also for teams of agents and for human-agent teams. However, before we can start investigating how to engineer agents on the basis of the notion of shared mental model, we first have to get a better understanding of the notion, which is the aim of this paper. We do this by investigating which concepts are relevant for shared mental models, and modeling how they are related by means of UML. Through this, we obtain a mental model ontology. Then, we formally define the notion of shared mental model and related notions. We illustrate our definitions by means of an example.
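To make the notion slightly more concrete, here is a small illustrative sketch, not the paper's UML ontology or formal definition: each agent's mental model is a set of task and team beliefs, and sharedness is scored as the overlap between two models. The belief strings are invented.

```python
def sharedness(model_a, model_b):
    """Jaccard-style overlap between two agents' mental models (sets of beliefs)."""
    union = model_a | model_b
    return len(model_a & model_b) / len(union) if union else 1.0

if __name__ == "__main__":
    alice = {"goal: deliver package", "role(alice): driver", "role(bob): navigator"}
    bob = {"goal: deliver package", "role(bob): navigator", "role(alice): loader"}
    print(f"shared mental model score: {sharedness(alice, bob):.2f}")
```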

Journal ArticleDOI
TL;DR: A set of algorithms that exploit upper ontologies as semantic bridges in the ontology matching process is described and a systematic analysis of the relationships among features of matched ontologies is presented.
Abstract: "Ontology matching" is the process of finding correspondences between entities belonging to different ontologies. This paper describes a set of algorithms that exploit upper ontologies as semantic bridges in the ontology matching process and presents a systematic analysis of the relationships among features of matched ontologies (number of simple and composite concepts, stems, concepts at the top level, common English suffixes and prefixes, and ontology depth), matching algorithms, used upper ontologies, and experiment results. This analysis allowed us to state under which circumstances the exploitation of upper ontologies gives significant advantages with respect to traditional approaches that do not use them. We run experiments with SUMO-OWL (a restricted version of SUMO), OpenCyc, and DOLCE. The experiments demonstrate that when our "structural matching method via upper ontology" uses an upper ontology large enough (OpenCyc, SUMO-OWL), the recall is significantly improved while preserving the precision obtained without upper ontologies. Instead, our "nonstructural matching method" via OpenCyc and SUMO-OWL improves the precision and maintains the recall. The "mixed method" that combines the results of structural alignment without using upper ontologies and structural alignment via upper ontologies improves the recall and maintains the F-measure independently of the used upper ontology.
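A toy sketch of the 'upper ontology as semantic bridge' idea, not the paper's algorithms: each domain concept is anchored to an upper-ontology concept by simple label matching, and two domain concepts that share an anchor are proposed as a correspondence. All ontologies and labels are invented.

```python
UPPER = {"Person", "Organization", "Event"}  # hypothetical upper-ontology concepts

# Hypothetical domain ontologies: concept -> descriptive tokens from its label/definition.
ONTOLOGY_A = {"Employee": {"person", "works"}, "Company": {"organization", "business"}}
ONTOLOGY_B = {"Staff": {"person", "employed"}, "Firm": {"organization", "commercial"}}

def anchor(concept_tokens, upper):
    """Anchor a domain concept to the upper concept whose name appears in its tokens."""
    for upper_concept in upper:
        if upper_concept.lower() in concept_tokens:
            return upper_concept
    return None

def match(onto_a, onto_b, upper):
    """Propose correspondences between concepts that anchor to the same upper concept."""
    anchors_b = {anchor(tokens, upper): name for name, tokens in onto_b.items()}
    correspondences = []
    for name_a, tokens_a in onto_a.items():
        bridge = anchor(tokens_a, upper)
        if bridge and bridge in anchors_b:
            correspondences.append((name_a, anchors_b[bridge], bridge))
    return correspondences

if __name__ == "__main__":
    for a, b, via in match(ONTOLOGY_A, ONTOLOGY_B, UPPER):
        print(f"{a} <-> {b}  (via upper concept {via})")
```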

Book ChapterDOI
07 Nov 2010
TL;DR: This paper presents a framework that maps the feature-based model of similarity into the information-theoretic domain and enables existing similarity measures to be rewritten and augmented to compute semantic relatedness; upon this framework, a new measure called FaITH (Feature and Information THeoretic) has been devised.
Abstract: Semantic similarity and relatedness measures between ontology concepts are useful in many research areas. While similarity only considers subsumption relations to assess how two objects are alike, relatedness takes into account a broader range of relations (e.g., part-of). In this paper, we present a framework, which maps the feature-based model of similarity into the information theoretic domain. A new way of computing IC values directly from an ontology structure is also introduced. This new model, called Extended Information Content (eIC) takes into account the whole set of semantic relations defined in an ontology. The proposed framework enables to rewrite existing similarity measures that can be augmented to compute semantic relatedness. Upon this framework, a new measure called FaITH (Feature and Information THeoretic) has been devised. Extensive experimental evaluations confirmed the suitability of the framework.
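The following sketch only conveys the flavor of the framework under invented data: information content is derived intrinsically from the ontology structure (here, from descendant counts), and relatedness is an IC ratio over the most informative common ancestor. It is not the published eIC or FaITH definition.

```python
import math

# Tiny made-up ontology: concept -> parents (subsumption only, for simplicity).
PARENTS = {"entity": [], "vehicle": ["entity"], "car": ["vehicle"],
           "bicycle": ["vehicle"], "wheel": ["entity"]}

def ancestors(concept):
    result = {concept}
    for parent in PARENTS[concept]:
        result |= ancestors(parent)
    return result

def descendants(concept):
    return {c for c in PARENTS if concept in ancestors(c)}

def intrinsic_ic(concept):
    """Structure-derived IC: fewer descendants means a more specific, more informative concept."""
    return -math.log(len(descendants(concept)) / len(PARENTS))

def relatedness(a, b):
    """IC-ratio score over the most informative common ancestor."""
    common = ancestors(a) & ancestors(b)
    ic_msca = max(intrinsic_ic(c) for c in common)
    return ic_msca / (intrinsic_ic(a) + intrinsic_ic(b) - ic_msca)

if __name__ == "__main__":
    print(f"relatedness(car, bicycle) = {relatedness('car', 'bicycle'):.2f}")
```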