Book

Information Sharing on the Semantic Web

TL;DR: Addressing problems like missing conceptual models, unclear system boundaries, and heterogeneous representations, the authors design a framework for ontology-based information sharing in weakly structured environments like the Semantic Web.
Abstract: The large-scale and almost ubiquitous availability of information has become as much of a curse as it is a blessing. The more information is available, the harder it is to locate any particular piece of it. And even when it has been successfully found, it is harder still to combine it usefully with other information we may already possess. This problem occurs at many different levels, ranging from the overcrowded disks of our own PCs to the mass of unstructured information on the World Wide Web. It is commonly understood that this problem of information sharing can only be solved by giving computers better access to the semantics of the information. While it has been recognized that ontologies play a crucial role in solving the open problems, most approaches rely on the existence of well-established data structures. To overcome these shortcomings, Stuckenschmidt and van Harmelen describe ontology-based approaches for resolving semantic heterogeneity in weakly structured environments, in particular the World Wide Web. Addressing problems like missing conceptual models, unclear system boundaries, and heterogeneous representations, they design a framework for ontology-based information sharing in weakly structured environments like the Semantic Web. For researchers and students in areas related to the Semantic Web, the authors not only provide a comprehensive overview of the state of the art, but also present in detail recent research in areas like ontology design for information integration, metadata generation and management, and representation and management of distributed ontologies. For professionals in areas such as e-commerce (e.g., the exchange of product knowledge) and knowledge management (e.g., in large and distributed organizations), the book provides decision support on the use of novel technologies, information about potential problems, and guidelines for the successful application of existing technologies.
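
To make "ontology-based information sharing" concrete, here is a minimal sketch (not taken from the book) of how a single shared axiom can bridge two heterogeneous vocabularies. It uses the Python rdflib library; the namespaces, class names, instances and the owl:equivalentClass mapping are illustrative assumptions, and the query follows the mapping explicitly rather than relying on an OWL reasoner.

# A minimal sketch of ontology-based integration of two hypothetical product
# vocabularies; all names below are illustrative, not taken from the book.
from rdflib import Graph

DATA = """
@prefix owl:   <http://www.w3.org/2002/07/owl#> .
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix shopA: <http://example.org/shopA#> .
@prefix shopB: <http://example.org/shopB#> .

# Two sources describe the same kind of product with different class names.
shopA:item1 rdf:type shopA:Notebook .
shopB:item2 rdf:type shopB:LaptopComputer .

# A shared ontology records that the two classes mean the same thing.
shopA:Notebook owl:equivalentClass shopB:LaptopComputer .
"""

g = Graph()
g.parse(data=DATA, format="turtle")

# One query retrieves instances described in either vocabulary by following
# the equivalence axiom in both directions.
QUERY = """
PREFIX owl:   <http://www.w3.org/2002/07/owl#>
PREFIX shopA: <http://example.org/shopA#>
SELECT ?item WHERE {
  { ?item a shopA:Notebook . }
  UNION
  { ?cls (owl:equivalentClass|^owl:equivalentClass) shopA:Notebook .
    ?item a ?cls . }
}
"""

for row in g.query(QUERY):
    print(row.item)   # shopA:item1 and shopB:item2, in either order

The point of the sketch is only that one explicit mapping axiom lets a single query span both representations; the book treats this kind of integration, and the harder cases around it, in far more depth.
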
Citations
More filters
Book
01 Jan 2015
TL;DR: The Psychology of Physical Activity as discussed by the authors is a comprehensive account of our psychological knowledge about physical activity covering: motivation and the psychological factors associated with activity or inactivity; the feel-good factor: the psychological outcomes of exercising, including mental health, illness and clinical populations; interventions and applied practice in the psychology of physical activity; current trends and future directions in research and practice.
Abstract: Psychology of Physical Activity is a comprehensive account of our psychological knowledge about physical activity covering: motivation and the psychological factors associated with activity or inactivity; the feel-good factor: the psychological outcomes of exercising, including mental health, illness and clinical populations; interventions and applied practice in the psychology of physical activity; current trends and future directions in research and practice. This textbook is essential for students of sport and exercise science, exercise physiology, health psychology, occupational therapy and physical education.

477 citations

Book Chapter • DOI
31 May 2009
TL;DR: Current projects, ongoing development, and further research are described in a joint collaboration between the BBC, Freie Universität Berlin and Rattle Research in order to use DBpedia as the controlled vocabulary and semantic backbone for the whole BBC.
Abstract: In this paper, we describe how the BBC is working to integrate data and link documents across BBC domains by using Semantic Web technology, in particular Linked Data, MusicBrainz and DBpedia. We cover the work of BBC Programmes and BBC Music in building Linked Data sites for all music- and programme-related brands, and we describe existing projects, ongoing development, and further research we are doing in a joint collaboration between the BBC, Freie Universität Berlin and Rattle Research in order to use DBpedia as the controlled vocabulary and semantic backbone for the whole BBC.

297 citations
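
As a rough illustration of what using DBpedia as a "controlled vocabulary and semantic backbone" can look like in practice, the sketch below (not the BBC's actual pipeline) uses the Python SPARQLWrapper library to pull the English label and abstract of a DBpedia resource that local music or programme pages could be annotated with; the choice of resource and the exact query are illustrative assumptions.

# A minimal sketch of looking up a DBpedia resource to enrich locally held
# pages that reference it; this is not the BBC's implementation.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX dbo:  <http://dbpedia.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label ?abstract WHERE {
      <http://dbpedia.org/resource/Radiohead> rdfs:label ?label ;
                                              dbo:abstract ?abstract .
      FILTER (lang(?label) = "en" && lang(?abstract) = "en")
    }
    LIMIT 1
""")

for row in endpoint.query().convert()["results"]["bindings"]:
    # The DBpedia URI is the shared identifier; the label and abstract are the
    # kind of background data a local page could display or index.
    print(row["label"]["value"])
    print(row["abstract"]["value"][:200], "...")
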

Book
01 Jan 2006

152 citations

Journal Article • DOI
TL;DR: This paper proposes a new generic and adaptive ontology mapping approach, called PRIOR+, based on propagation theory, information retrieval techniques and artificial intelligence; the results show that harmony is a good estimator of f-measure and that the harmony-based adaptive aggregation outperforms other aggregation methods.

114 citations
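
The harmony measure mentioned above can be sketched as follows (a NumPy toy, not the authors' implementation): a matcher's harmony is taken here to be the fraction of entity pairs whose similarity is the maximum of both its row and its column, and each matcher's similarity matrix is then weighted by its harmony before aggregation. The normalisation by the smaller ontology size and the toy matrices are assumptions made for illustration.

# A minimal sketch of harmony-based aggregation of similarity matrices.
import numpy as np

def harmony(sim: np.ndarray) -> float:
    """Fraction of cells that are the maximum of both their row and column."""
    row_max = sim == sim.max(axis=1, keepdims=True)
    col_max = sim == sim.max(axis=0, keepdims=True)
    return float((row_max & col_max).sum()) / min(sim.shape)

def aggregate(sims: list) -> np.ndarray:
    """Weight each matcher's similarity matrix by its harmony, then combine."""
    weights = np.array([harmony(s) for s in sims])
    weights = weights / weights.sum()
    return sum(w * s for w, s in zip(weights, sims))

# Two toy matchers comparing three entities of ontology A with three of B.
name_sim   = np.array([[0.9, 0.1, 0.2],
                       [0.2, 0.8, 0.1],
                       [0.1, 0.3, 0.7]])
struct_sim = np.array([[0.5, 0.6, 0.4],
                       [0.3, 0.5, 0.2],
                       [0.2, 0.4, 0.6]])

print(harmony(name_sim), harmony(struct_sim))  # 1.0 vs. about 0.67
combined = aggregate([name_sim, struct_sim])   # name matcher gets more weight
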


Cites background from "Information Sharing on the Semantic..."

  • ...Though synonym sets, term networks, concept lattices, features and constraints have been proposed as solutions for solving semantic heterogeneity among different information systems, those approaches are not sufficient to solve the problem of semantic heterogeneity in the WWW environment [38]....


  • ..., syntax, structure and semantics [38]....


Journal Article
TL;DR: The main goal of this thesis is to present methodological principles for ontology engineering to guide ontology builders towards building ontologies that are both highly reusable and usable, easier to build, and smoother to maintain.
Abstract: The Internet and other open connectivity environments create a strong demand for the sharing of data semantics. Emerging ontologies are increasingly becoming essential for computer science applications. Organizations are looking towards them as vital machine-processable semantics for many application areas. An ontology, in general, is an agreed understanding (i.e. semantics) of a certain domain, axiomatized and represented formally as a logical theory in a computer resource. By sharing an ontology, autonomous and distributed applications can meaningfully communicate to exchange data and make transactions interoperate independently of their internal technologies. The main goal of this thesis is to present methodological principles for ontology engineering to guide ontology builders towards building ontologies that are both highly reusable and usable, easier to build, and smoother to maintain.

First, we investigate three foundational challenges in ontology engineering (namely, ontology reusability, ontology application-independence, and ontology evolution). Based on these challenges, we derive six ontology-engineering requirements. Fulfilling these requirements is the goal and motivation of our methodological principles.

Second, we present two methodological principles for ontology engineering: 1) ontology double articulation, and 2) ontology modularization. The double articulation principle suggests that an ontology be built as separate domain axiomatizations and application axiomatizations. While a domain axiomatization focuses on the characterization of the intended meaning (i.e. intended models) of a vocabulary at the domain level, application axiomatizations mainly focus on the usability of this vocabulary according to certain application/usability perspectives. An application axiomatization is intended to specify the legal models (a subset of the intended models) of the application(s)' interest. The modularization principle suggests that application axiomatizations be built in a modular manner. Axiomatizations should be developed as a set of small modules and later composed to form, and be used as, one modular axiomatization. We define a composition operator for automatic module composition. It combines all axioms introduced in the composed modules.

Third, to illustrate the implementation of our methodological principles, we develop a conceptual markup language called ORM-ML, an ontology engineering tool prototype called DogmaModeler, and a customer complaint ontology that serves as a real-life case study.

This research is a contribution to the DOGMA research project, which is a research framework for modeling, engineering, and deploying ontologies. In addition, we have benefited enormously from our participation in several European projects. It was through the CCFORM project (discussed extensively in chapter 7) that we were able to test and debug many ideas that resulted in this thesis. The Network of Excellence KnowledgeWeb has also proved to be a fruitful brainstorming environment that has undoubtedly improved the quality of the analyses performed and the results obtained.

92 citations
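
The modularization principle above ends with a composition operator that "combines all axioms introduced in the composed modules". A small sketch of that idea (not DogmaModeler itself) follows; the Module class, the plain-string axioms and the example modules are illustrative assumptions.

# A minimal sketch of module composition as the union of the modules' axioms.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Module:
    name: str
    axioms: frozenset = field(default_factory=frozenset)

def compose(*modules: Module) -> Module:
    """Composition operator: collect every axiom of every composed module."""
    combined = frozenset().union(*(m.axioms for m in modules))
    return Module(name="+".join(m.name for m in modules), axioms=combined)

# Two small application modules built over a shared domain vocabulary.
complaints = Module("complaints", frozenset({
    "Complaint issuedBy Customer",
    "Complaint concerns Product",
}))
orders = Module("orders", frozenset({
    "Order placedBy Customer",
    "Order contains Product",
}))

modular_axiomatization = compose(complaints, orders)
print(modular_axiomatization.name)           # complaints+orders
print(len(modular_axiomatization.axioms))    # 4
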