Showing papers on "Ontology (information science)" published in 2011


Proceedings ArticleDOI
07 Sep 2011
TL;DR: DBpedia Spotlight, a system for automatically annotating text documents with DBpedia URIs, is developed, and results are evaluated in light of three baselines and six publicly available annotation systems, demonstrating the competitiveness of the system.
Abstract: Interlinking text documents with Linked Open Data enables the Web of Data to be used as background knowledge within document-oriented applications such as search and faceted browsing. As a step towards interconnecting the Web of Documents with the Web of Data, we developed DBpedia Spotlight, a system for automatically annotating text documents with DBpedia URIs. DBpedia Spotlight allows users to configure the annotations to their specific needs through the DBpedia Ontology and quality measures such as prominence, topical pertinence, contextual ambiguity and disambiguation confidence. We compare our approach with the state of the art in disambiguation, and evaluate our results in light of three baselines and six publicly available annotation systems, demonstrating the competitiveness of our system. DBpedia Spotlight is shared as open source and deployed as a Web Service freely available for public use.

1,228 citations
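
As an illustration, annotating a snippet through the DBpedia Spotlight web service can be as simple as one HTTP request. Below is a minimal Python sketch using the requests library; the endpoint URL reflects the current public deployment rather than the 2011-era service, so treat it as an assumption and point it at whichever instance you use.

```python
import requests

# Assumed endpoint of the public Spotlight deployment; adjust as needed.
SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"

def annotate(text, confidence=0.5):
    """Return DBpedia resources spotted in `text` as (surface form, URI) pairs."""
    resp = requests.get(
        SPOTLIGHT_URL,
        params={"text": text, "confidence": confidence},  # disambiguation confidence
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return [(r["@surfaceForm"], r["@URI"]) for r in resp.json().get("Resources", [])]

print(annotate("Berlin is the capital of Germany."))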


Book
01 Jan 2011
TL;DR: This book's later chapters cover semantic information and the network theory of account, consciousness, agents and the knowledge game, a case against digital ontology, and a defence of informational structural realism.
Abstract: Preface 1. What is the Philosophy of Information? 2. Open Problems in the Philosophy of Information 3. The Method of Levels of Abstraction 4. Semantic Information and the Veridicality Thesis 5. Outline of a Theory of Strongly Semantic Information 6. The Symbol Grounding Problem 7. Action-Based Semantics 8. Semantic Information and the Correctness Theory of Truth 9. The Logical Unsolvability of the Gettier Problem 10. The Logic of Being Informed 11. Understanding Epistemic Relevance 12. Semantic Information and the Network Theory of Account 13. Consciousness, Agents and the Knowledge Game 14. Against Digital Ontology 15. A Defence of Informational Structural Realism References

862 citations


Journal ArticleDOI
TL;DR: The National Center for Biomedical Ontology (NCBO) has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies via the NCBO Web services.
Abstract: The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.

692 citations
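
As an illustration of programmatic access, a term search against the NCBO REST services takes one HTTP call. A minimal sketch follows, assuming the current data.bioontology.org endpoint (the abstract cites the older services URL) and a free NCBO API key; both are assumptions to verify against the live documentation.

```python
import requests

API_ROOT = "http://data.bioontology.org"
API_KEY = "YOUR-NCBO-API-KEY"  # placeholder: obtain a free key via a BioPortal account

def search_terms(query, ontologies=None):
    """Search BioPortal for terms matching `query`, optionally within given ontologies."""
    params = {"q": query, "apikey": API_KEY}
    if ontologies:                      # e.g. "GO,SNOMEDCT"
        params["ontologies"] = ontologies
    resp = requests.get(f"{API_ROOT}/search", params=params, timeout=30)
    resp.raise_for_status()
    # Each result carries the term IRI and preferred label, among other fields.
    return [(t["@id"], t.get("prefLabel")) for t in resp.json()["collection"]]

for iri, label in search_terms("melanoma", ontologies="NCIT")[:5]:
    print(label, iri)
```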


Book ChapterDOI
23 Oct 2011
TL;DR: This paper presents LogMap, a highly scalable ontology matching system with 'built-in' reasoning and diagnosis capabilities; to the authors' knowledge, it is the only matching system that can deal with semantically rich ontologies containing tens (and even hundreds) of thousands of classes.
Abstract: In this paper, we present LogMap--a highly scalable ontology matching system with 'built-in' reasoning and diagnosis capabilities. To the best of our knowledge, LogMap is the only matching system that can deal with semantically rich ontologies containing tens (and even hundreds) of thousands of classes. In contrast to most existing tools, LogMap also implements algorithms for 'on the fly' unsatisfiability detection and repair. Our experiments with the ontologies NCI, FMA and SNOMED CT confirm that our system can efficiently match even the largest existing bio-medical ontologies. Furthermore, LogMap is able to produce a 'clean' set of output mappings in many cases, in the sense that the ontology obtained by integrating LogMap's output mappings with the input ontologies is consistent and does not contain unsatisfiable classes.

473 citations


Book
06 May 2011
TL;DR: This book presents the theory and practice of OPM with examples from various industry segments and engineering disciplines, as well as daily life, and includes a CD-ROM demo version of the award-winning OPM-supporting Object-Process CASE Tool (OPCAT).
Abstract: From the Publisher: Object-Process Methodology (OPM) is a comprehensive novel approach to systems engineering. Integrating function, structure and behavior in a single, unifying model, OPM significantly extends the system modeling capabilities of current object-oriented methods. Founded on a precise generic ontology and combining graphics with natural language, OPM is applicable to virtually any domain of business, engineering and science. Relieved of technical issues, system architects can use OPM to engage in the creative design of complex systems. The book presents the theory and practice of OPM with examples from various industry segments and engineering disciplines, as well as daily life. It includes a CD-ROM demo version of the award-winning OPM-supporting Object-Process CASE Tool (OPCAT). Using the numerous examples and exercises (with answers) in the book, this software enables the reader to gain hands-on experience in developing complex systems.

460 citations


Journal ArticleDOI
TL;DR: A new definition of the notion of Intelligent Product is introduced, inspired by what happens in nature with human beings and the way we develop intelligence and knowledge.
Abstract: With the advent of information and related emerging technologies, such as RFID, small-size sensors and sensor networks or, more generally, product embedded information devices (PEID), a new generation of products called smart or intelligent products is available on the market. Although various definitions of intelligent products have been proposed, we introduce a new definition of the notion of Intelligent Product inspired by what happens in nature with us as human beings and the way we develop intelligence and knowledge. We see an intelligent product as a product system which contains sensing, memory, data processing, reasoning and communication capabilities at four intelligence levels. These future generations of Intelligent Products will need new Product Data Technologies allowing the seamless interoperability of systems and the exchange of not only static but also dynamic product data. Current standards for PDT cover only the lowest intelligence levels of today's products. In this context, we try to shape the current state and a possible future of the Product Data Technologies from a Closed-Loop Product Lifecycle Management (C-L PLM) perspective. Our approach is founded on recent findings of the FP6 IP 507100 project PROMISE and follow-up research work. Standards of the STEP family, covering the product lifecycle to a certain extent (PLCS), as well as MIMOSA and ISO 15926, are discussed together with more recent technologies for the management of ID and sensor data such as EPCglobal, OGC-SWE and relevant PROMISE propositions for standards. Finally, the first efforts towards ontology-based semantic standards for product lifecycle management and associated knowledge management and sharing are presented and discussed.

448 citations


Proceedings ArticleDOI
28 Mar 2011
TL;DR: YAGO2, an extension of the YAGO knowledge base with focus on temporal and spatial knowledge, is presented, automatically built from Wikipedia, GeoNames, and WordNet, and contains nearly 10 million entities and events, as well as 80 million facts representing general world knowledge.
Abstract: We present YAGO2, an extension of the YAGO knowledge base with focus on temporal and spatial knowledge. It is automatically built from Wikipedia, GeoNames, and WordNet, and contains nearly 10 million entities and events, as well as 80 million facts representing general world knowledge. An enhanced data representation introduces time and location as first-class citizens. The wealth of spatio-temporal information in YAGO can be explored either graphically or through a special time- and space-aware query language.

332 citations
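
YAGO2 is distributed as RDF, so its facts can also be queried with plain SPARQL. Below is a minimal sketch using the SPARQLWrapper library, assuming a local endpoint loaded with the YAGO2 dumps; the endpoint URL is a placeholder, and the two relation names are YAGO relations used here purely for illustration.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint: assumed to be a triple store loaded with YAGO2 data.
endpoint = SPARQLWrapper("http://localhost:8890/sparql")
endpoint.setQuery("""
PREFIX yago: <http://yago-knowledge.org/resource/>
SELECT ?person ?date WHERE {
  ?person yago:wasBornIn yago:Berlin .
  ?person yago:wasBornOnDate ?date .
} LIMIT 10
""")
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["person"]["value"], row["date"]["value"])
```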


Journal ArticleDOI
01 Nov 2011
TL;DR: This work presents PARIS, an approach for the automatic alignment of ontologies, which aligns not only instances but also relations and classes, providing a truly holistic solution to the problem of ontology alignment.
Abstract: One of the main challenges that the Semantic Web faces is the integration of a growing number of independently designed ontologies. In this work, we present PARIS, an approach for the automatic alignment of ontologies. PARIS aligns not only instances, but also relations and classes. Alignments at the instance level cross-fertilize with alignments at the schema level. Thereby, our system provides a truly holistic solution to the problem of ontology alignment. The heart of the approach is probabilistic, i.e., we measure degrees of matchings based on probability estimates. This allows PARIS to run without any parameter tuning. We demonstrate the efficiency of the algorithm and its precision through extensive experiments. In particular, we obtain a precision of around 90% in experiments with some of the world's largest ontologies.

322 citations
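
The probabilistic heart of the approach can be made concrete with a toy sketch. The data, names and the crude functionality estimate below are invented simplifications, not the actual PARIS fixed-point algorithm; the idea illustrated is only that shared values of highly inverse-functional relations are strong evidence that two instances denote the same entity.

```python
from collections import defaultdict

# Invented toy facts from two ontologies describing the same city.
facts1 = {("paris_fr", "capitalOf", "france"), ("paris_fr", "label", "Paris")}
facts2 = {("Q90", "capitalOf", "france"), ("Q90", "label", "Paris")}

def inv_functionality(facts, relation):
    """Crude estimate: 1.0 means each object determines a unique subject."""
    subjects_per_object = defaultdict(set)
    for s, r, o in facts:
        if r == relation:
            subjects_per_object[o].add(s)
    if not subjects_per_object:
        return 0.0
    return len(subjects_per_object) / sum(len(v) for v in subjects_per_object.values())

def match_score(x, y, f1, f2):
    """Combine per-fact evidence with a noisy-or: 1 - prod(1 - invfun(r))."""
    disbelief = 1.0
    for s, r, o in f1:
        if s == x and (y, r, o) in f2:
            disbelief *= 1.0 - inv_functionality(f1 | f2, r)
    return 1.0 - disbelief

print(match_score("paris_fr", "Q90", facts1, facts2))  # 0.75 on this toy data
```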


Journal ArticleDOI
TL;DR: Three ontologies created specifically to address the needs of the systems biology community are described, including the Systems Biology Ontology, which provides semantic information about the model components, and the Kinetic Simulation Algorithm Ontology and the Terminology for the Description of Dynamics, which categorizes dynamical features of the simulation results and general systems behavior.
Abstract: The use of computational modeling to describe and analyze biological systems is at the heart of systems biology. Model structures, simulation descriptions and numerical results can be encoded in structured formats, but there is an increasing need to provide an additional semantic layer. Semantic information adds meaning to components of structured descriptions to help identify and interpret them unambiguously. Ontologies are one of the tools frequently used for this purpose. We describe here three ontologies created specifically to address the needs of the systems biology community. The Systems Biology Ontology (SBO) provides semantic information about the model components. The Kinetic Simulation Algorithm Ontology (KiSAO) supplies information about existing algorithms available for the simulation of systems biology models, their characterization and interrelationships. The Terminology for the Description of Dynamics (TEDDY) categorizes dynamical features of the simulation results and general systems behavior. The provision of semantic information extends a model's longevity and facilitates its reuse. It provides useful insight into the biology of modeled processes, and may be used to make informed decisions on subsequent simulation experiments.

298 citations


Book ChapterDOI
TL;DR: This paper reports results and lessons learned from the Ontology Alignment Evaluation Initiative (OAEI), a benchmarking initiative for ontology matching, and describes the evaluation design used in the OAEI campaigns in terms of datasets, evaluation criteria and workflows.
Abstract: In the area of semantic technologies, benchmarking and systematic evaluation are not yet as established as in other areas of computer science, e.g., information retrieval. In spite of successful attempts, more effort and experience are required in order to achieve such a level of maturity. In this paper, we report results and lessons learned from the Ontology Alignment Evaluation Initiative (OAEI), a benchmarking initiative for ontology matching. The goal of this work is twofold: on the one hand, we document the state of the art in evaluating ontology matching methods and provide potential participants of the initiative with a better understanding of the design and the underlying principles of the OAEI campaigns. On the other hand, we report experiences gained in this particular area of semantic technologies to potential developers of benchmarking for other kinds of systems. For this purpose, we describe the evaluation design used in the OAEI campaigns in terms of datasets, evaluation criteria and workflows, provide a global view on the results of the campaigns carried out from 2005 to 2010 and discuss upcoming trends, both specific to ontology matching and generally relevant for the evaluation of semantic technologies. Finally, we argue that there is a need for further automation of benchmarking to shorten the feedback cycle for tool developers.

290 citations
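
For most tracks, the evaluation criteria mentioned above boil down to comparing a produced alignment against a reference alignment. A minimal sketch of that computation, with invented entity names:

```python
# Standard OAEI-style scoring: precision, recall and F-measure of a system
# alignment against a reference alignment, both given as sets of correspondences.

def evaluate(system, reference):
    """Each correspondence is a tuple, e.g. (entity1, entity2, relation)."""
    correct = len(system & reference)
    precision = correct / len(system) if system else 0.0
    recall = correct / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if correct else 0.0
    return precision, recall, f1

system = {("o1#Car", "o2#Automobile", "="), ("o1#Bike", "o2#Plane", "=")}
reference = {("o1#Car", "o2#Automobile", "="), ("o1#Bike", "o2#Bicycle", "=")}
print(evaluate(system, reference))  # (0.5, 0.5, 0.5)
```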


Journal ArticleDOI
01 Mar 2011
TL;DR: This paper proposes a solution based on the use of ontologies and ontological reasoning combined with statistical inferencing to recognize complex activities that cannot be derived by statistical methods alone.
Abstract: Human activity recognition is a challenging problem for context-aware systems and applications. Research in this field has mainly adopted techniques based on supervised learning algorithms, but these systems suffer from scalability issues with respect to the number of considered activities and contextual data. In this paper, we propose a solution based on the use of ontologies and ontological reasoning combined with statistical inferencing. Structured symbolic knowledge about the environment surrounding the user allows the recognition system to infer which activities among the candidates identified by statistical methods are more likely to be the actual activity that the user is performing. Ontological reasoning is also integrated with statistical methods to recognize complex activities that cannot be derived by statistical methods alone. The effectiveness of the proposed technique is supported by experiments with a complete implementation of the system using commercially available sensors and an Android-based handheld device as the host for the main activity recognition module.
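
A minimal sketch of the hybrid idea follows, with invented activity names and a simple location constraint standing in for full ontological reasoning: statistical candidates are filtered for consistency with the symbolic context and then renormalized.

```python
# Statistical classification proposes candidate activities with probabilities.
candidates = {"cooking": 0.45, "showering": 0.35, "jogging": 0.20}

# Toy stand-in for ontological knowledge: where each activity can occur.
allowed_locations = {
    "cooking": {"kitchen"},
    "showering": {"bathroom"},
    "jogging": {"outdoors"},
}

def refine(candidates, current_location):
    """Drop candidates inconsistent with the context, renormalize the rest."""
    consistent = {a: p for a, p in candidates.items()
                  if current_location in allowed_locations.get(a, set())}
    total = sum(consistent.values())
    return {a: p / total for a, p in consistent.items()} if total else candidates

print(refine(candidates, "kitchen"))  # {'cooking': 1.0}
```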

Journal ArticleDOI
TL;DR: MASTRO is a Java tool for ontology-based data access (OBDA) developed at Sapienza Università di Roma and at the Free University of Bozen-Bolzano that provides optimized algorithms for answering expressive queries, as well as features for intensional reasoning and consistency checking.
Abstract: In this paper we present MASTRO, a Java tool for ontology-based data access (OBDA) developed at Sapienza Università di Roma and at the Free University of Bozen-Bolzano. MASTRO manages OBDA systems in which the ontology is specified in DL-Lite_{A,id}, a logic of the DL-Lite family of tractable Description Logics specifically tailored to ontology-based data access, and is connected to external JDBC-enabled data management systems through semantic mappings that associate SQL queries over the external data to the elements of the ontology. Advanced forms of integrity constraints, which turned out to be very useful in practical applications, are also enabled over the ontologies. Optimized algorithms for answering expressive queries are provided, as well as features for intensional reasoning and consistency checking. MASTRO provides a proprietary API, an OWL API-compatible interface, and a plugin for the Protégé 4 ontology editor. It has been successfully used in several projects carried out in collaboration with important organizations, on which we briefly comment in this paper.
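
Schematically, an OBDA mapping of the kind MASTRO manages pairs an SQL query over the sources with templates over ontology elements. The sketch below is purely illustrative (invented table, class and property names) and does not use MASTRO's actual mapping syntax.

```python
# Illustrative OBDA mapping: one SQL source query feeding two triple templates.
mapping = {
    # SQL query over the external JDBC source...
    "source": "SELECT emp_id, dept FROM employees WHERE role = 'PROF'",
    # ...populating ontology elements with templated IRIs.
    "target": [
        ":person/{emp_id} rdf:type :Professor",
        ":person/{emp_id} :worksFor :dept/{dept}",
    ],
}
```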

Journal ArticleDOI
TL;DR: This paper analyzes ontology-based approaches for IC computation and proposes several improvements aimed at better capturing the semantic evidence modelled in the ontology for a particular concept.
Abstract: The information content (IC) of a concept provides an estimation of its degree of generality/concreteness, a dimension which enables a better understanding of a concept's semantics. As a result, IC has been successfully applied to the automatic assessment of the semantic similarity between concepts. In the past, IC has been estimated as the probability of appearance of concepts in corpora. However, the applicability and scalability of this method are hampered by corpora dependency and data sparseness. More recently, some authors proposed IC-based measures using taxonomical features extracted from an ontology for a particular concept, obtaining promising results. In this paper, we analyse these ontology-based approaches for IC computation and propose several improvements aimed at better capturing the semantic evidence modelled in the ontology for the particular concept. Our approach has been evaluated and compared with related works (both corpora- and ontology-based ones) when applied to the task of semantic similarity estimation. Results obtained for a widely used benchmark show that our method enables similarity estimations which are better correlated with human judgements than related works.
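
For reference, the contrast this line of work builds on can be stated compactly: the classic corpus-based estimate versus a well-known intrinsic (taxonomy-based) estimate due to Seco et al., which needs no corpus frequencies.

```latex
\[
  IC_{\mathrm{corpus}}(c) = -\log p(c)
  \qquad
  IC_{\mathrm{intrinsic}}(c) = 1 - \frac{\log\bigl(\mathrm{hypo}(c) + 1\bigr)}{\log(\mathit{max\_nodes})}
\]
% hypo(c): number of hyponyms (descendants) of concept c in the taxonomy;
% max_nodes: total number of concepts in the taxonomy.
```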

Journal ArticleDOI
01 Feb 2011
TL;DR: The proposed fuzzy expert system works effectively for diabetes decision support; its semantic fuzzy decision making mechanism simulates the semantic descriptions of medical staff for diabetes-related applications.
Abstract: An increasing number of decision support systems based on domain knowledge are adopted to diagnose medical conditions such as diabetes and heart disease. It is widely pointed out that classical ontologies cannot sufficiently handle imprecise and vague knowledge for some real-world applications, but fuzzy ontologies can effectively resolve data and knowledge problems with uncertainty. This paper presents a novel fuzzy expert system for diabetes decision support. A five-layer fuzzy ontology, including a fuzzy knowledge layer, fuzzy group relation layer, fuzzy group domain layer, fuzzy personal relation layer, and fuzzy personal domain layer, is developed in the fuzzy expert system to describe knowledge with uncertainty. By applying the novel fuzzy ontology to the diabetes domain, the structure of the fuzzy diabetes ontology (FDO) is defined to model diabetes knowledge. Additionally, a semantic decision support agent (SDSA), including a knowledge construction mechanism, fuzzy ontology generating mechanism, and semantic fuzzy decision making mechanism, is also developed. The knowledge construction mechanism constructs the fuzzy concepts and relations based on the structure of the FDO. The instances of the FDO are generated by the fuzzy ontology generating mechanism. Finally, based on the FDO and the fuzzy ontology, the semantic fuzzy decision making mechanism simulates the semantic descriptions of medical staff for diabetes-related applications. Importantly, the proposed fuzzy expert system works effectively for diabetes decision support.
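
To make the notion of fuzzy knowledge concrete, here is a toy membership function of the kind a fuzzy ontology layer might encode. The variable, breakpoints and concept name are invented for illustration and carry no clinical meaning.

```python
# Trapezoidal membership function mapping a numeric reading to a degree of
# membership in a fuzzy concept (values below are illustrative only).

def trapezoid(x, a, b, c, d):
    """Membership rises from a to b, is 1 between b and c, falls from c to d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

high_glucose = lambda mg_dl: trapezoid(mg_dl, 100, 126, 300, 400)
print(high_glucose(110), high_glucose(140))  # partial vs. full membership
```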

Journal ArticleDOI
TL;DR: The major contribution of this work is an innovative, comprehensive semantic search model, which extends the classic IR model, addresses the challenges of the massive and heterogeneous Web environment, and integrates the benefits of both keyword and semantic-based search.

Journal ArticleDOI
TL;DR: A new measure based on the exploitation of the taxonomical structure of a biomedical ontology is proposed; evaluated with SNOMED CT as the input ontology, it outperforms most of the previous measures while avoiding some of their limitations.

Journal ArticleDOI
TL;DR: This work recalls the building blocks of the Alignment API and presents version 4 of the API through some of its new features: ontology proxies, the expressive alignment language EDOAL and evaluation primitives.
Abstract: Alignments represent correspondences between entities of two ontologies. They are produced from the ontologies by ontology matchers. In order for matchers to exchange alignments and for applications to manipulate matchers and alignments, a minimal agreement is necessary. The Alignment API provides abstractions for the notions of network of ontologies, alignments and correspondences, as well as building blocks for manipulating them such as matchers, evaluators, renderers and parsers. We recall the building blocks of this API and present version 4 of the Alignment API through some of its new features: ontology proxies, the expressive alignment language EDOAL and evaluation primitives.
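
The Alignment API itself is a Java library; as a rough illustration of its central abstraction, here is a minimal Python mock-up of a correspondence, the unit an alignment is made of. The field names are paraphrased for clarity and are not the API's actual signatures.

```python
from dataclasses import dataclass

@dataclass
class Correspondence:
    entity1: str       # IRI in the first ontology
    entity2: str       # IRI in the second ontology
    relation: str      # e.g. "=", "<", ">"
    confidence: float  # in [0, 1]

# An alignment is essentially a set of such correspondences plus metadata.
alignment = [
    Correspondence("http://o1#Person", "http://o2#Human", "=", 0.92),
]
print(alignment[0])
```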

Journal ArticleDOI
TL;DR: This work reviews the application of literature mining and ontology modeling and traversal to the area of drug repurposing (DR), which has emerged as a noteworthy alternative to the traditional drug development process in response to the decreased productivity of the biopharmaceutical industry.
Abstract: The immense growth of MEDLINE coupled with the realization that a vast amount of biomedical knowledge is recorded in free-text format, has led to the appearance of a large number of literature mining techniques aiming to extract biomedical terms and their inter-relations from the scientific literature. Ontologies have been extensively utilized in the biomedical domain either as controlled vocabularies or to provide the framework for mapping relations between concepts in biology and medicine. Literature-based approaches and ontologies have been used in the past for the purpose of hypothesis generation in connection with drug discovery. Here, we review the application of literature mining and ontology modeling and traversal to the area of drug repurposing (DR). In recent years, DR has emerged as a noteworthy alternative to the traditional drug development process, in response to the decreased productivity of the biopharmaceutical industry. Thus, systematic approaches to DR have been developed, involving a variety of in silico, genomic and high-throughput screening technologies. Attempts to integrate literature mining with other types of data arising from the use of these technologies as well as visualization tools assisting in the discovery of novel associations between existing drugs and new indications will also be presented.

Journal ArticleDOI
TL;DR: The main conclusion from this study is that reasoners vary significantly with regard to all included characteristics, and therefore a critical assessment and evaluation of requirements is needed before selecting a reasoner for a real-life application.
Abstract: This paper provides a survey of and comparison between state-of-the-art Semantic Web reasoners that succeed in classifying large ontologies expressed in the tractable OWL 2 EL profile. Reasoners are characterized along several dimensions: The first dimension comprises underlying reasoning characteristics, such as the employed reasoning method and its correctness, as well as the expressivity and worst-case computational complexity of its supported language, and whether the reasoner supports incremental classification, rules, justifications for inconsistent concepts and ABox reasoning tasks. The second dimension is practical usability: whether the reasoner implements the OWL API and can be used via OWLlink, whether it is available as a Protégé plugin, on which platforms it runs, whether its source is open or closed and which license it comes with. The last dimension contains performance indicators that can be evaluated empirically, such as classification, concept satisfiability, subsumption checking and consistency checking performance, as well as required heap space and practical correctness, which is determined by comparing the computed concept hierarchies with each other. For the very large ontology SNOMED CT, which is released both in stated and inferred form, we test whether the computed concept hierarchies are correct by comparing them to the inferred form of the official distribution. The reasoners are categorized along the defined characteristics and benchmarked against well-known biomedical ontologies. The main conclusion from this study is that reasoners vary significantly with regard to all included characteristics, and therefore a critical assessment and evaluation of requirements is needed before selecting a reasoner for a real-life application.
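
As an illustration of the classification task being benchmarked, the sketch below times a classification run from Python using Owlready2, which drives the HermiT reasoner. The filename is a placeholder, and the surveyed systems are typically invoked through the Java OWL API instead, so treat this purely as an illustration of the task, not of the paper's setup.

```python
import time
from owlready2 import get_ontology, sync_reasoner

# Placeholder filename: any OWL ontology reachable on disk.
onto = get_ontology("file://snomed_subset.owl").load()

start = time.perf_counter()
with onto:
    sync_reasoner()  # computes the inferred class hierarchy
print(f"classified in {time.perf_counter() - start:.1f}s")
```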

Proceedings ArticleDOI
11 Apr 2011
TL;DR: A new rewriting algorithm for rather general types of ontological constraints (description logics) is presented, along with an effective new optimization method that works for Linear Datalog±, an ontology language that encompasses well-known description logics of the DL-Lite family.
Abstract: Ontological queries are evaluated against an enterprise ontology rather than directly on a database. The evaluation and optimization of such queries is an intriguing new problem for database research. In this paper we discuss two important aspects of this problem: query rewriting and query optimization. Query rewriting consists of the compilation of an ontological query into an equivalent query against the underlying relational database. The focus here is on soundness and completeness. We review previous results and present a new rewriting algorithm for rather general types of ontological constraints (description logics). In particular, we show how a conjunctive query (CQ) against an enterprise ontology can be compiled into a union of conjunctive queries (UCQ) against the underlying database. Ontological query optimization, in this context, attempts to improve this process so as to produce a small and cost-effective output UCQ. We review existing optimization methods, and propose an effective new method that works for Linear Datalog±, an ontology language that encompasses well-known description logics of the DL-Lite family.
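
A toy instance of the CQ-to-UCQ compilation described here, restricted to class-subsumption axioms (table and class names invented): the atom Teacher(x) is expanded into a union over every class the ontology declares to be subsumed by Teacher.

```python
# Invented toy TBox: subclass -> superclass.
subclass_of = {"Professor": "Teacher", "Lecturer": "Teacher"}

def rewrite(query_class):
    """Collect all classes whose instances answer the query, then emit a UCQ."""
    classes = {query_class}
    changed = True
    while changed:
        changed = False
        for sub, sup in subclass_of.items():
            if sup in classes and sub not in classes:
                classes.add(sub)
                changed = True
    return [f"SELECT id FROM {c.lower()}" for c in sorted(classes)]

# q(x) :- Teacher(x) becomes a UCQ touching every subclass's table:
print(" UNION ".join(rewrite("Teacher")))
```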

Journal ArticleDOI
TL;DR: The project started in 2003 to create a connection between the enzyme data collection of the BRENDA enzyme database and a structured network of source tissues and cell types and is widely used by lab scientists, curators of genomic and biochemical databases and bioinformaticians.
Abstract: BTO, the BRENDA Tissue Ontology (http://www.BTO.brenda-enzymes.org) represents a comprehensive structured encyclopedia of tissue terms. The project started in 2003 to create a connection between the enzyme data collection of the BRENDA enzyme database and a structured network of source tissues and cell types. Currently, BTO contains more than 4600 different anatomical structures, tissues, cell types and cell lines, classified under generic categories corresponding to the rules and formats of the Gene Ontology Consortium and organized as a directed acyclic graph (DAG). Most of the terms are endowed with comments on their derivation or definitions. The content of the ontology is constantly curated with ∼1000 new terms each year. Four different types of relationships between the terms are implemented. A versatile web interface with several search and navigation functionalities allows convenient online access to the BTO and to the enzymes isolated from the tissues. Important areas of applications of the BTO terms are the detection of enzymes in tissues and the provision of a solid basis for text-mining approaches in this field. It is widely used by lab scientists, curators of genomic and biochemical databases and bioinformaticians. The BTO is freely available at http://www.obofoundry.org.

Proceedings Article
15 Nov 2011
TL;DR: The main problem in defining the mOSAIC ontology lies in the heterogeneity of the terms used by Cloud vendors and in the number of standards that refer to Cloud systems with different terminology.
Abstract: The ease of managing and configuring resources and the low cost of setting up and maintaining Cloud services have made Cloud Computing widespread. Several commercial vendors now offer solutions based on Cloud architectures. More and more providers offer new services every month, following their customers' needs. However, it is very hard to find a single provider which offers all the services needed by end users. Furthermore, different vendors propose different architectures for their Cloud systems, and usually these are not compatible. Very few efforts have been made to propose a unified standard for Cloud Computing. This is a problem, since different Cloud systems and vendors have different ways to describe and invoke their services, to specify requirements and to communicate. Hence a way to provide common access to Cloud services and to discover and use required services in Cloud federations is appealing. The mOSAIC project addresses these problems by defining a common ontology, and it aims at developing an open-source platform that enables applications to negotiate Cloud services as requested by users. The main problem in defining the mOSAIC ontology lies in the heterogeneity of the terms used by Cloud vendors and in the number of standards that refer to Cloud systems with different terminology. In this work the mOSAIC Cloud Ontology is described. It has been built by analysing Cloud standards and proposals, and then refined by introducing individuals from real Cloud systems.

Journal ArticleDOI
TL;DR: A survey on ontology-based Question Answering (QA), which has emerged in recent years to exploit the opportunities offered by structured semantic information on the Web, and the potential of this technology to go beyond the current state of the art to support end-users in reusing and querying the SW content.
Abstract: With the recent rapid growth of the Semantic Web (SW), the processes of searching and querying content that is both massive in scale and heterogeneous have become increasingly challenging. User-friendly interfaces, which can support end users in querying and exploring this novel and diverse, structured information space, are needed to make the vision of the SW a reality. We present a survey on ontology-based Question Answering (QA), which has emerged in recent years to exploit the opportunities offered by structured semantic information on the Web. First, we provide a comprehensive perspective by analyzing the general background and history of the QA research field, from influential works from the artificial intelligence and database communities developed in the 70s and later decades, through open-domain QA stimulated by the QA track in TREC since 1999, to the latest commercial semantic QA solutions, before tackling the current state of the art in open user-friendly interfaces for the SW. Second, we examine the potential of this technology to go beyond the current state of the art to support end users in reusing and querying the SW content. We conclude our review with an outlook for this novel research area, focusing in particular on the R&D directions that need to be pursued to realize the goal of efficient and competent retrieval and integration of answers from large-scale, heterogeneous, and continuously evolving semantic sources.

Journal ArticleDOI
01 May 2011
TL;DR: An ontology-based unified robot knowledge framework that integrates low-level data with high-level knowledge for robot intelligence is presented, along with experimental results that demonstrate the advantages of using the proposed framework.
Abstract: A significant obstacle for service robots is the execution of complex tasks in real environments. For example, it is not easy for service robots to find objects that are partially observable and are located near, but not exactly at, the place where the robots saw them previously. To overcome this challenge effectively, robot knowledge represented as a semantic network can be extremely useful. This paper presents an ontology-based unified robot knowledge framework that integrates low-level data with high-level knowledge for robot intelligence. This framework consists of two sections: knowledge description and knowledge association. Knowledge description includes comprehensively integrated robot knowledge derived from low-level knowledge regarding perceptual features, part objects, metric maps, and primitive behaviors, as well as high-level knowledge about perceptual concepts, objects, semantic maps, tasks, and contexts. Knowledge association uses logical inference with both unidirectional and bidirectional rules. This characteristic enables reasoning to be performed even when only partial information is available. Experimental results that demonstrate the advantages of using the proposed knowledge framework are also presented.

Proceedings ArticleDOI
16 Jul 2011
TL;DR: The combined approach is described, which incorporates the information given by the ontology into the data and employs query rewriting to eliminate spurious answers in ontology-based data access.
Abstract: The use of ontologies for accessing data is one of the most exciting new applications of description logics in databases and other information systems. A realistic way of realising sufficiently scalable ontology-based data access in practice is by reduction to querying relational databases. In this paper, we describe the combined approach, which incorporates the information given by the ontology into the data and employs query rewriting to eliminate spurious answers. We illustrate this approach for ontologies given in the DL-Lite family of description logics and briefly discuss the results obtained for the EL family.
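
A toy sketch of the combined approach, reduced to class atoms with invented data: the ontology is first compiled into the data (step 1), and the query is then evaluated directly over the expanded data (step 2). In the full approach, existential axioms introduce anonymous individuals during this expansion, and it is the rewritten query's job to filter out the spurious answers they could generate; this tiny example has none.

```python
# Invented toy ontology and data.
subclass_of = {"Professor": "Teacher"}
data = {("mary", "Professor")}

# Step 1: saturate the data with facts entailed by the ontology.
expanded = set(data)
for ind, cls in data:
    if cls in subclass_of:
        expanded.add((ind, subclass_of[cls]))

# Step 2: evaluate q(x) :- Teacher(x) directly over the expanded data.
answers = {ind for ind, cls in expanded if cls == "Teacher"}
print(answers)  # {'mary'}
```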

Journal ArticleDOI
TL;DR: This paper proposes a set of guidelines for importing required terms from an external resource into a target ontology, describes the methodology and its implementation, presents some examples of its application, and outlines future work and extensions.
Abstract: While the Web Ontology Language OWL provides a mechanism to import ontologies, this mechanism is not always suitable. Current editing tools present challenges for working with large ontologies and direct OWL imports can prove impractical for day-to-day development. Furthermore, external ontologies often undergo continuous change which can introduce conflicts when integrated with multiple efforts. Finally, importing heterogeneous ontologies in their entirety may lead to inconsistencies or unintended inferences. In this paper we propose a set of guidelines for importing required terms from an external resource into a target ontology. We describe the methodology, its implementation, present some examples of this application, and outline future work and extensions.
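
As a toy illustration of the import strategy argued for here, the sketch below copies a single required term together with its ancestor path out of a source ontology, instead of importing the whole resource via owl:imports. The mini-ontology is an invented fragment loosely modelled on tissue terms, not the paper's actual procedure.

```python
# Invented source ontology fragment: term id -> label and parent.
source = {
    "T:0001": {"label": "blood", "parent": "T:0100"},
    "T:0100": {"label": "whole body", "parent": None},
    "T:0002": {"label": "brain", "parent": "T:0100"},
}

def extract_with_ancestors(term_id, ontology):
    """Copy only the requested term and its ancestor chain into a slim module."""
    slim = {}
    while term_id is not None:
        slim[term_id] = dict(ontology[term_id])
        term_id = ontology[term_id]["parent"]
    return slim

print(extract_with_ancestors("T:0001", source))  # blood plus its ancestors only
```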

Book ChapterDOI
29 May 2011
TL;DR: The suitability of FREyA for querying Linked Open Data is discussed, and its performance in terms of precision and recall using the MusicBrainz and DBpedia datasets is reported.
Abstract: Natural Language Interfaces are increasingly relevant for information systems fronting rich structured data stores such as RDF and OWL repositories, mainly because they are perceived as intuitive for humans. In previous work, we developed FREyA, an interactive Natural Language Interface for querying ontologies. It uses syntactic parsing in combination with ontology-based lookup in order to interpret the question, and involves the user if necessary. The user's choices are used to train the system in order to improve its performance over time. In this paper, we discuss the suitability of FREyA for querying Linked Open Data and report its performance in terms of precision and recall using the MusicBrainz and DBpedia datasets.

Book ChapterDOI
01 Jan 2011
TL;DR: This chapter presents a survey of the most relevant methods, techniques and tools used for the task of ontology learning, explaining how BOEMIE addresses problems observed in existing systems and contributes to issues that are not frequently considered by existing approaches.
Abstract: Ontology learning is the process of acquiring (constructing or integrating) an ontology (semi-)automatically. Being a knowledge acquisition task, it is a complex activity, which becomes even more complex in the context of the BOEMIE project, due to the management of multimedia resources and the multi-modal semantic interpretation that they require. The purpose of this chapter is to present a survey of the most relevant methods, techniques and tools used for the task of ontology learning. Adopting a practical perspective, an overview of the main activities involved in ontology learning is presented. This breakdown of the learning process is used as a basis for the comparative analysis of existing tools and approaches. The comparison is done along dimensions that emphasize the particular interests of the BOEMIE project. In this context, ontology learning in BOEMIE is treated and compared to the state of the art, explaining how BOEMIE addresses problems observed in existing systems and contributes to issues that are not frequently considered by existing approaches.

Journal ArticleDOI
TL;DR: It is found that an information-theoretic redefinition of well-known semantic measures and similarity coefficients, together with an intrinsic estimation of concept IC, yields noticeable improvements in accuracy, resulting in new semantic similarity measures expressed in terms of concept Information Content.
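
For context, the classic IC-based similarity measures that such redefinitions start from are Resnik's and Lin's, where LCS(c1, c2) denotes the least common subsumer of the two concepts in the taxonomy:

```latex
\[
  \mathrm{sim}_{\mathrm{Resnik}}(c_1, c_2) = IC\bigl(\mathrm{LCS}(c_1, c_2)\bigr)
  \qquad
  \mathrm{sim}_{\mathrm{Lin}}(c_1, c_2) =
    \frac{2 \cdot IC\bigl(\mathrm{LCS}(c_1, c_2)\bigr)}{IC(c_1) + IC(c_2)}
\]
```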