
Showing papers on "Ontology-based data integration published in 2009"


Journal ArticleDOI
TL;DR: This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion.
Abstract: The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation. This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing if and how data fusion is performed in each.
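The relational fusion operators the article surveys can be illustrated with a minimal sketch (the record fields and conflict-resolution rules below are invented for illustration, not the article's actual operators): duplicate records are grouped by a key, and each attribute's conflicting values are resolved by a per-attribute strategy such as "prefer the longest value" or "take the maximum".

```python
# Minimal data-fusion sketch (illustrative only): records describing the
# same real-world object are grouped by a key, then fused into one
# consistent row using per-attribute conflict-resolution functions.

def fuse(records, key, resolvers):
    """Group records by `key` and fuse each group attribute by attribute."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[key], []).append(rec)
    fused = []
    for k, group in groups.items():
        row = {key: k}
        attrs = {a for rec in group for a in rec if a != key}
        for attr in attrs:
            # Drop nulls first (completeness), then resolve conflicts
            # (consistency); default resolver takes the first value.
            values = [rec[attr] for rec in group if rec.get(attr) is not None]
            row[attr] = resolvers.get(attr, lambda vs: vs[0])(values) if values else None
        fused.append(row)
    return fused

records = [
    {"id": 1, "name": "J. Smith", "age": None},
    {"id": 1, "name": "John Smith", "age": 42},
]
# Resolvers: prefer the most complete (longest) name, the maximum age.
result = fuse(records, "id", {"name": lambda vs: max(vs, key=len), "age": max})
```

The per-attribute resolver functions correspond loosely to the "fusion functions" used with grouping/aggregation in the SQL-based techniques the article classifies.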

1,797 citations


Book ChapterDOI
01 Jan 2009
TL;DR: This paper revisits previous attempts to clarify and formalize the original definition of (computational) ontologies as “explicit specifications of conceptualizations”, providing a detailed account of the notions of conceptualization and explicit specification, and discussing the importance of shared explicit specifications.
Abstract: The word “ontology” is used with different senses in different communities. The most radical difference is perhaps between the philosophical sense, which has of course a well-established tradition, and the computational sense, which emerged in recent years in the knowledge engineering community, starting from an early informal definition of (computational) ontologies as “explicit specifications of conceptualizations”. In this paper we shall revisit the previous attempts to clarify and formalize such original definition, providing a detailed account of the notions of conceptualization and explicit specification, while discussing at the same time the importance of shared explicit specifications.

1,253 citations


Book ChapterDOI
01 Jan 2009
TL;DR: In this chapter, components called Ontology Design Patterns (OP) are described, together with methods that support pattern-based ontology design and the explicit documentation of design rationales.
Abstract: Computational ontologies in the context of information systems are artifacts that encode a description of some world, for some purpose. Under the assumption that there exist classes of problems that can be solved by applying common solutions (as it has been experienced in software engineering), we envision small, task-oriented ontologies with explicit documentation of design rationales. In this chapter, we describe components called Ontology Design Patterns (OP), and methods that support pattern-based ontology design.

484 citations


Journal ArticleDOI
TL;DR: This paper presents a dynamic multistrategy ontology alignment framework, named RiMOM, and proposes a systematic approach to quantitatively estimate the similarity characteristics for each alignment task and a strategy selection method to automatically combine the matching strategies based on two estimated factors.
Abstract: Ontology alignment identifies semantically matching entities in different ontologies. Various ontology alignment strategies have been proposed; however, few systems have explored how to automatically combine multiple strategies to improve the matching effectiveness. This paper presents a dynamic multistrategy ontology alignment framework, named RiMOM. The key insight in this framework is that similarity characteristics between ontologies may vary widely. We propose a systematic approach to quantitatively estimate the similarity characteristics for each alignment task and propose a strategy selection method to automatically combine the matching strategies based on two estimated factors. In the approach, we consider both textual and structural characteristics of ontologies. With RiMOM, we participated in the 2006 and 2007 campaigns of the Ontology Alignment Evaluation Initiative (OAEI). Our system is among the top three performers in benchmark data sets.
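The core idea of combining textual and structural matching strategies can be sketched with a toy example (the strategy implementations and fixed weights below are ours for illustration; RiMOM estimates its combination weights per alignment task from the ontologies' characteristics):

```python
from difflib import SequenceMatcher

def label_sim(a, b):
    """Textual strategy: normalized edit-based similarity of entity labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def structural_sim(a, b, parents):
    """Structural strategy: compare the labels of the concepts' parents."""
    pa, pb = parents.get(a), parents.get(b)
    if pa is None or pb is None:
        return 0.0
    return label_sim(pa, pb)

def combined_sim(a, b, parents, w_text=0.7, w_struct=0.3):
    # A dynamic framework would estimate these weights per task from the
    # textual/structural characteristics of the ontologies; fixed here.
    return w_text * label_sim(a, b) + w_struct * structural_sim(a, b, parents)

# Toy hierarchies: both "Author" and "Writer" are children of "Person".
parents = {"Author": "Person", "Writer": "Person"}
score = combined_sim("Author", "Writer", parents)
```

Here the structural strategy rescues a pair whose labels differ lexically but whose positions in the two hierarchies agree, which is the kind of complementarity a multistrategy framework exploits.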

444 citations


Journal ArticleDOI
TL;DR: This paper proposes UP for ONtology building (UPON), a methodology for ontology building derived from the Unified Process (UP); a comparative evaluation with other methodologies and the results of its adoption in the context of the Athena EU Integrated Project are also discussed.

300 citations


Book ChapterDOI
01 Jan 2009
TL;DR: The methodology serves as a scaffold for Part B “Ontology Engineering” of the handbook and shows where more specific concerns of ontology engineering find their place and how they are related in the overall process.
Abstract: In this chapter we present a methodology for introducing and maintaining ontology-based knowledge management applications into enterprises, with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set-up. We illustrate our methodology by an example from a case study on skills management. The methodology serves as a scaffold for Part B “Ontology Engineering” of the handbook. It shows where more specific concerns of ontology engineering find their place and how they are related in the overall process.

134 citations


Book ChapterDOI
31 May 2009
TL;DR: A general method and novel algorithmic techniques are proposed to facilitate the integration of independently developed ontologies using mappings; a preliminary evaluation suggests that this approach is both useful and feasible in practice.
Abstract: We propose a general method and novel algorithmic techniques to facilitate the integration of independently developed ontologies using mappings. Our method and techniques aim at helping users understand and evaluate the semantic consequences of the integration, as well as to detect and fix potential errors. We also present ContentMap, a system that implements our approach, and a preliminary evaluation which suggests that our approach is both useful and feasible in practice.

130 citations


Journal ArticleDOI
TL;DR: The proposed user ontology model with the spreading activation based inferencing procedure has been incorporated into a semantic search engine, called OntoSearch, to provide personalized document retrieval services.

128 citations


Book ChapterDOI
TL;DR: The paper presents the ORSD that resulted from the ontology requirements specification activity within the SEEMP project, and how this document facilitated not only the reuse of existing knowledge-aware resources but also the verification of the SEEMP ontologies.
Abstract: The goal of the ontology requirements specification activity is to state why the ontology is being built, what its intended uses are, who the end-users are, and which requirements the ontology should fulfill. The novelty of this paper lies in the systematization of the ontology requirements specification activity, since the paper proposes detailed methodological guidelines for specifying ontology requirements efficiently. These guidelines will help ontology engineers to capture ontology requirements and produce the ontology requirements specification document (ORSD). The ORSD will play a key role during the ontology development process because it facilitates, among other activities, (1) the search and reuse of existing knowledge-aware resources with the aim of re-engineering them into ontologies, (2) the search and reuse of existing ontological resources (ontologies, ontology modules, ontology statements, as well as ontology design patterns), and (3) the verification of the ontology throughout the ontology development process. In parallel to the guidelines, we present the ORSD that resulted from the ontology requirements specification activity within the SEEMP project, and how this document facilitated not only the reuse of existing knowledge-aware resources but also the verification of the SEEMP ontologies. Moreover, we present some use cases in which the methodological guidelines proposed here were applied.

126 citations


Journal ArticleDOI
TL;DR: It is argued that a useful ontology must simultaneously strive for usability and reusability and explain how these goals are achieved by OntoCAPE.

121 citations


Journal ArticleDOI
S. Izza
TL;DR: This paper constitutes a general survey on integration of industrial information systems, and aims to overview the main approaches that can be used in the context of industrial enterprises either for syntactic or semantic integration issues.
Abstract: Nowadays, integration of enterprise information systems constitutes a real and growing need for most enterprises, especially for large and dynamic industrial ones. It constitutes the main approach to dealing with heterogeneity that concerns the multiple software applications that make up information systems. This paper constitutes a general survey on integration of industrial information systems, and aims to overview the main approaches that can be used in the context of industrial enterprises either for syntactic or semantic integration issues. In particular, this paper focuses on some semantics-based approaches that promote the use of ontologies, and especially the use of OWL-S service ontology.

Journal ArticleDOI
TL;DR: A simplified vocabulary list from the DO is constructed, called Disease Ontology Lite (DOLite), which yields more interpretable results than DO for gene-disease association tests and has been used in the Functional Disease Ontology (FunDO) Web application at http://www.projects.bioinformatics.northwestern.edu/fundo.
Abstract: Subjective methods have been reported to adapt a general-purpose ontology for a specific application. For example, Gene Ontology (GO) Slim was created from GO to generate a highly aggregated report of the human-genome annotation. We propose statistical methods to adapt the general purpose, OBO Foundry Disease Ontology (DO) for the identification of gene-disease associations. Thus, we need a simplified definition of disease categories derived from implicated genes. On the basis of the assumption that the DO terms having similar associated genes are closely related, we group the DO terms based on the similarity of gene-to-DO mapping profiles. Two types of binary distance metrics are defined to measure the overall and subset similarity between DO terms. A compactness-scalable fuzzy clustering method is then applied to group similar DO terms. To reduce false clustering, the semantic similarities between DO terms are also used to constrain clustering results. As such, the DO terms are aggregated and the redundant DO terms are largely removed. Using these methods, we constructed a simplified vocabulary list from the DO called Disease Ontology Lite (DOLite). We demonstrated that DOLite results in more interpretable results than DO for gene-disease association tests. The resultant DOLite has been used in the Functional Disease Ontology (FunDO) Web application at http://www.projects.bioinformatics.northwestern.edu/fundo. Contact: s-lin2@northwestern.edu
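The two binary distance metrics between DO terms can be sketched as follows (the disease terms and gene sets are toy data chosen for illustration; the paper's metrics are defined over the full gene-to-DO mapping profiles):

```python
# Sketch of two binary distances between disease terms, computed from the
# sets of genes mapped to each term (toy data, not from the paper).

def overall_distance(genes_a, genes_b):
    """1 - Jaccard similarity: overall overlap of the two gene profiles."""
    union = genes_a | genes_b
    return (1.0 - len(genes_a & genes_b) / len(union)) if union else 0.0

def subset_distance(genes_a, genes_b):
    """0 when one gene profile is contained in the other (subset similarity)."""
    smaller = min(len(genes_a), len(genes_b))
    return (1.0 - len(genes_a & genes_b) / smaller) if smaller else 0.0

narrow = {"MYH7", "TNNT2"}           # genes mapped to a specific DO term
broad = {"MYH7", "TNNT2", "SCN5A"}   # genes mapped to a more general term

d_overall = overall_distance(narrow, broad)  # 1 - 2/3
d_subset = subset_distance(narrow, broad)    # 0.0: subset relation detected
```

A zero subset distance flags the narrower term as a candidate for aggregation under the broader one, which is the kind of redundancy the fuzzy clustering step removes when building DOLite.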

25 Oct 2009
TL;DR: eXtreme Design with Content Ontology Design Patterns (XD) is presented: a collaborative, incremental, iterative method for pattern-based ontology design.
Abstract: In this paper, we present eXtreme Design with Content Ontology Design Patterns (XD): a collaborative, incremental, iterative method for pattern-based ontology design. We also describe the first version of a supporting tool that has been implemented and is available as a plugin for the NeOn Toolkit. XD is defined in the context of a general approach to ontology design based on patterns, which is also briefly introduced in this work.

Journal ArticleDOI
12 May 2009
TL;DR: The proposed FMEA ontology is evaluated by means of use cases that measure the performance in finding relevant information used and produced during the safety analyses, using JTP, an object-oriented modular reasoning system, to query the ontology.
Abstract: FMEA (Failure Modes and Effects Analysis) is a method for analyzing potential reliability problems early in the development cycle, making it easier to take actions to overcome such issues and thus enhancing reliability through design. FMEA is used to identify actions that mitigate the analyzed potential failure modes and their effect on operations. Anticipating these failure modes, the central step in the analysis, needs to be carried out extensively in order to prepare as complete a list of potential failure modes as possible. However, the information stored in risk assessment tools takes the form of textual natural-language descriptions, which limits computer-based extraction of knowledge for reusing the FMEA analyses in other designs or during plant operation. To overcome the limitations of text-based descriptions, an FMEA ontology has been proposed that provides a basic set of standard concepts and terms. The development of the ontology uses an upper ontology based on ISO 15926, which defines general-purpose terms and acts as a foundation for more specific domains. The ontology is developed so that engineers can build new concepts from the basic set of concepts. This paper evaluates the proposed ontology by means of use cases that measure the performance in finding relevant information used and produced during the safety analyses. In particular, the extraction of knowledge is performed using JTP, an object-oriented modular reasoning system, to query the ontology.

Journal ArticleDOI
01 Jul 2009
TL;DR: A new ontology-structure-based technique for measuring semantic similarity in single ontology and across multiple ontologies in the biomedical domain within the framework of unified medical language system (UMLS).
Abstract: Most of the intelligent knowledge-based applications contain components for measuring semantic similarity between terms. Many of the existing semantic similarity measures that use ontology structure as their primary source cannot measure semantic similarity between terms and concepts using multiple ontologies. This research explores a new way to measure semantic similarity between biomedical concepts using multiple ontologies. We propose a new ontology-structure-based technique for measuring semantic similarity in single ontology and across multiple ontologies in the biomedical domain within the framework of unified medical language system (UMLS). The proposed measure is based on three features: 1) cross-modified path length between two concepts; 2) a new feature of common specificity of concepts in the ontology; and 3) local granularity of ontology clusters. The proposed technique was evaluated relative to human similarity scores and compared with other existing measures using two terminologies within UMLS framework: medical subject headings and systemized nomenclature of medicine clinical term. The experimental results validate the efficiency of the proposed technique in single and multiple ontologies, and demonstrate that our proposed measure achieves the best results of correlation with human scores in all experiments.
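The general shape of such ontology-structure-based measures can be sketched on a toy is-a hierarchy (the hierarchy and the combining formula below are illustrative only; the paper's actual measure combines cross-modified path length, common specificity, and cluster granularity in a different way):

```python
def similarity(a, b, parent):
    """Toy ontology-structure-based similarity: decreases with the path
    length between two concepts, increases with the depth (specificity)
    of their closest common ancestor."""
    # Collect a's ancestors with their distance from a.
    ancestors_a, node, d = {}, a, 0
    while True:
        ancestors_a[node] = d
        if node not in parent:
            break
        node, d = parent[node], d + 1
    # Walk up from b until we reach a's ancestor chain.
    node, d = b, 0
    while node not in ancestors_a:
        node, d = parent[node], d + 1
    path = ancestors_a[node] + d      # path length through the common ancestor
    spec, anc = 0, node               # depth of that ancestor (specificity)
    while anc in parent:
        anc, spec = parent[anc], spec + 1
    return (spec + 1) / (spec + 1 + path)

# Toy medical is-a hierarchy (invented, not from UMLS).
parent = {"fever": "sign", "sign": "finding",
          "pain": "symptom", "symptom": "finding"}
```

Two concepts whose common ancestor sits deep in the hierarchy score higher than a pair meeting only at a shallow, generic concept, which is the intuition behind using common specificity alongside path length.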

Journal ArticleDOI
TL;DR: Experimental results show that the genetic algorithm in conjunction with the ontology strategy, i.e. the combination of the transformed LSI-based measure with the thesaurus-based measure, clearly outperforms the same algorithm with traditional similarity measures.
Abstract: This paper proposes a self-organized genetic algorithm for text clustering based on an ontology method. A common problem in text clustering is that the document is represented as a bag of words, while conceptual similarity is ignored. We take advantage of thesaurus-based and corpus-based ontologies to overcome this problem. However, the traditional corpus-based method is rather difficult to apply. A transformed latent semantic indexing (LSI) model, which can appropriately capture the associated semantic similarity, is proposed and demonstrated as the corpus-based ontology in this article. To investigate how ontology methods can be used effectively in text clustering, two hybrid strategies using various similarity measures are implemented. Experimental results show that our genetic algorithm in conjunction with the ontology strategy, i.e. the combination of the transformed LSI-based measure with the thesaurus-based measure, clearly outperforms the same algorithm with traditional similarity measures. Our clustering algorithm also efficiently enhances the performance in comparison with standard GA and k-means in the same similarity environments.
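The hybrid strategy of combining a concept-level measure with a plain lexical one can be illustrated with a toy sketch; for brevity the corpus-based LSI transform is replaced here by ordinary term-vector cosine, and the thesaurus is a two-entry stand-in:

```python
import math
from collections import Counter

THESAURUS = {"car": {"automobile"}, "automobile": {"car"}}  # toy synonym sets

def cosine(tokens_a, tokens_b):
    """Cosine similarity of bag-of-words term vectors."""
    va, vb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def canonical(tokens):
    """Thesaurus step: map each term and its synonyms to one deterministic
    representative, so conceptually equal documents get identical vectors."""
    return [min(THESAURUS.get(t, set()) | {t}) for t in tokens]

def hybrid_sim(tokens_a, tokens_b, alpha=0.5):
    # Combine the raw lexical measure with the concept-level measure.
    return (alpha * cosine(tokens_a, tokens_b)
            + (1 - alpha) * cosine(canonical(tokens_a), canonical(tokens_b)))

s = hybrid_sim(["car", "engine"], ["automobile", "engine"])
```

The bag-of-words measure alone scores these two documents at 0.5 because "car" and "automobile" do not match lexically; the thesaurus step recovers their conceptual identity, lifting the hybrid score.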


DOI
08 Jul 2009
TL;DR: The theory and methodology underlying the LKIF core ontology is described, compared with other ontologies, the concepts it defines are introduced, and its use in the formalisation of an EU directive is discussed.
Abstract: In this paper we describe a legal core ontology that is part of the Legal Knowledge Interchange Format: a knowledge representation formalism that enables the translation of legal knowledge bases written in different representation formats and formalisms. A legal (core) ontology can play an important role in the translation of existing legal knowledge bases to other representation formats, in particular as the basis for articulate knowledge serving. This requires that the ontology has a firm grounding in commonsense and is developed in a principled manner. We describe the theory and methodology underlying the LKIF core ontology, compare it with other ontologies, introduce the concepts it defines, and discuss its use in the formalisation of an EU directive.

Book ChapterDOI
17 Nov 2009
TL;DR: A fine-grained approach to opinion mining is introduced, which uses the ontology structure as an essential part of the feature extraction process by taking into account the relations between concepts.
Abstract: An ontology is an explicitly defined reference model of an application domain, with the purpose of improving information consistency and knowledge sharing. It describes the semantics of a domain in a way that is both human-understandable and computer-processable. Motivated by its success in the area of Information Extraction (IE), we propose an ontology-based approach for opinion mining. In general, opinion mining is quite context-sensitive and, at a coarser granularity, quite domain-dependent. This paper introduces a fine-grained approach to opinion mining, which uses the ontology structure as an essential part of the feature extraction process by taking into account the relations between concepts. The experimental results show the benefits of exploiting the ontology structure for opinion mining.
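The use of ontology relations during feature extraction can be sketched as follows (the product-feature hierarchy, sentiment lexicon, and sentences are invented for illustration; the paper's approach is considerably more elaborate):

```python
# Sketch of ontology-guided opinion mining (toy hierarchy and lexicon):
# sentiment expressed on a sub-concept ("battery") also counts toward its
# parent features ("hardware", "product") via the ontology relations.

PARENT = {"battery": "hardware", "screen": "hardware", "hardware": "product"}
LEXICON = {"great": 1, "poor": -1}

def concept_sentiment(sentences):
    """Sum lexicon polarity per mentioned concept, propagated upward."""
    concepts = set(PARENT) | set(PARENT.values())
    scores = {}
    for sentence in sentences:
        tokens = sentence.lower().split()
        polarity = sum(LEXICON.get(t, 0) for t in tokens)
        for t in tokens:
            if t in concepts:
                node = t
                while node is not None:   # walk up the feature hierarchy
                    scores[node] = scores.get(node, 0) + polarity
                    node = PARENT.get(node)
    return scores

scores = concept_sentiment(["The battery is poor", "great screen"])
```

Without the concept relations, the mixed opinions on "battery" and "screen" would never be aggregated at the "hardware" level; with them, feature-level sentiment rolls up the hierarchy.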

Proceedings ArticleDOI
01 Sep 2009
TL;DR: The main positive conclusions when using Content ODPs include: ontology developers perceived them as useful, ontology quality is improved, coverage of the task increases, usability is improved, and common modelling mistakes can be avoided.
Abstract: This paper addresses the evaluation of pattern-based ontology design through experiments. An initial method for reuse of content ontology design patterns (Content ODPs) was used by the participants during the experiments. Hypotheses considered include the usefulness of Content ODPs for ontology developers, and we additionally study in what respects they are useful and what open issues remain. The main positive conclusions when using Content ODPs include: ontology developers perceived them as useful, ontology quality is improved, coverage of the task increases, usability is improved, and common modelling mistakes can be avoided.

Journal ArticleDOI
01 Aug 2009
TL;DR: This paper investigates state-of-the-art approaches in modular ontologies focusing on techniques that are based on rigorous logical formalisms as well as well-studied graph theories and compares how such approaches can be leveraged in developing tools and applications in the biomedical domain.
Abstract: In the past several years, various ontologies and terminologies such as the Gene Ontology have been developed to enable interoperability across multiple diverse medical information systems. They provide a standard way of representing terms and concepts thereby supporting easy transmission and interpretation of data for various applications. However, with their growing utilization, not only has the number of available ontologies increased considerably, but they are also becoming larger and more complex to manage. Toward this end, a growing body of work is emerging in the area of modular ontologies where the emphasis is on either extracting and managing "modules" of an ontology relevant to a particular application scenario (ontology decomposition) or developing them independently and integrating into a larger ontology (ontology composition). In this paper, we investigate state-of-the-art approaches in modular ontologies focusing on techniques that are based on rigorous logical formalisms as well as well-studied graph theories. We analyze and compare how such approaches can be leveraged in developing tools and applications in the biomedical domain. We conclude by highlighting some of the limitations of the modular ontology formalisms and put forward additional requirements to steer their future development.

Journal ArticleDOI
TL;DR: ROMEO, a requirements-oriented methodology for evaluating ontologies, is presented and applied to the task of evaluating the suitability of some general ontologies (variants of sub-domains of the Wikipedia category structure) for supporting browsing in Wikipedia.

Book ChapterDOI
TL;DR: The Semantic Data Warehouse is proposed as a repository of ontologies and semantically annotated data resources, together with an ontology-driven framework to design multidimensional analysis models for Semantic Data Warehouses.
Abstract: The Semantic Web enables organizations to attach semantic annotations taken from domain and application ontologies to the information they generate. The concepts in these ontologies could describe the facts, dimensions and categories implied in the analysis subjects of a data warehouse. In this paper we propose the Semantic Data Warehouse as a repository of ontologies and semantically annotated data resources. We also propose an ontology-driven framework to design multidimensional analysis models for Semantic Data Warehouses. This framework provides means for building a Multidimensional Integrated Ontology (MIO) including the classes, relationships and instances that represent interesting analysis dimensions, and it can also be used to check the properties required by current multidimensional databases (e.g., dimension orthogonality, category satisfiability, etc.). In this paper we also sketch how the instance data of a MIO can be translated into OLAP cubes for analysis purposes. Finally, some implementation issues of the overall framework are discussed.

Journal ArticleDOI
TL;DR: The proposed product ontology architecture reflects the evolving nature of product ontologies to guarantee semantic interoperability, and it facilitates building product ontologies that are referenced by all related participants inside and outside the enterprise for collaboration.
Abstract: As enterprises must cope with a frequently changing business environment, they should integrate value chains such as the supply chain and the design chain. Sharing product information is a prerequisite for this integration. However, because most of the participants have different business experience and business domains, the interoperability of product information among enterprises must be guaranteed for collaboration. To achieve interoperability, we suggest a product ontology architecture through an investigation of generic ontology architectures. We first suggest a 4-layered ontology architecture for an integrated value chain. Extending this ontology architecture, we develop a product ontology architecture that facilitates building product ontologies referenced by all related participants inside and outside the enterprise for collaboration. Using a product ontology, enterprises achieve semantic interoperability with each other for collaborative work. Because product ontologies evolve through the product lifecycle, the proposed product ontology architecture reflects this evolving feature to guarantee semantic interoperability.

01 Jan 2009
TL;DR: Results show that it is possible to improve the results of typical existing ontology learning methods by selecting and reusing patterns; the thesis introduces a typology of patterns, a general framework for pattern-based semi-automatic ontology construction called OntoCase, and specific methods to solve particular tasks within this framework.
Abstract: This thesis aims to improve the ontology engineering process by providing better semi-automatic support for constructing ontologies and by introducing knowledge reuse through ontology patterns. The thesis introduces a typology of patterns and a general framework for pattern-based semi-automatic ontology construction called OntoCase, and provides a set of methods to solve specific tasks within this framework. Experimental results indicate benefits and drawbacks both of ontology patterns in general and of semi-automatic ontology engineering using patterns (the OntoCase framework) in particular. The general setting of this thesis is the field of information logistics, which focuses on how to provide the right information at the right moment in time to the right person or organisation, sent through the right medium. The thesis focuses on constructing enterprise ontologies to be used for structuring and retrieving information related to a certain enterprise. This means that the ontologies are quite lightweight in terms of logical complexity and expressiveness. Applying ontology content design patterns within semi-automatic ontology construction, i.e. ontology learning, is a novel approach. The main contributions of this thesis are a typology of patterns together with a pattern catalogue, an overall framework for semi-automatic pattern-based ontology construction, specific methods for solving partial problems within this framework, and evaluation results showing the characteristics of ontologies constructed semi-automatically based on patterns. Results show that it is possible to improve the results of typical existing ontology learning methods by selecting and reusing patterns. OntoCase is able to introduce a general top structure to the ontologies, and by exploiting background knowledge, the ontology is given a richer structure than when patterns are not applied.

Journal ArticleDOI
TL;DR: An engineering ontology (EO)-based computational framework is developed to structure unstructured engineering documents and achieve more effective information retrieval; the main contributions include a new, systematic, and more structured ontology development method assisted by a semiautomatic acquisition tool.
Abstract: When engineering content is created and applied during the product life cycle, it is often stored and forgotten. Current information retrieval approaches based on statistical methods and keyword matching are not effective in understanding the context of engineering content, and they are not designed to be directly applicable to the engineering domain. Therefore, engineers have very limited means to harness and reuse past designs. The overall objective of our research is to develop an engineering ontology (EO)-based computational framework to structure unstructured engineering documents and achieve more effective information retrieval. This paper focuses on the method and process to acquire and validate the EO. The main contributions include: a new, systematic, and more structured ontology development method assisted by a semiautomatic acquisition tool integrated with the Protege ontology editing environment; an engineering lexicon (EL) that represents the associated lexical knowledge of the EO, bridging the gap between the concept space of the ontology and the word space of engineering documents and queries; the first large-scale EO and EL acquired from established knowledge resources for engineering information retrieval; and a comprehensive validation strategy and its implementation to justify the quality of the acquired EO. A search system based on the EO and EL has been developed and tested. The retrieval performance test further justifies the effectiveness of the EO and EL as well as of the ontology development method.

Journal ArticleDOI
TL;DR: The generation procedure followed by TEXCOMON, the knowledge puzzle ontology learning tool, to extract concept maps from texts is described and how these concept maps are exported into a domain ontology is explained.
Abstract: One of the goals of the knowledge puzzle project is to automatically generate a domain ontology from plain text documents and use this ontology as the domain model in computer-based education. This paper describes the generation procedure followed by TEXCOMON, the knowledge puzzle ontology learning tool, to extract concept maps from texts. It also explains how these concept maps are exported into a domain ontology. Data sources and techniques deployed by TEXCOMON for ontology learning from texts are briefly described herein. Then, the paper focuses on evaluating the generated domain ontology and advocates the use of a three-dimensional evaluation: structural, semantic, and comparative. Based on a set of metrics, structural evaluations consider ontologies as graphs. Semantic evaluations rely on human expert judgment, and finally, comparative evaluations are based on comparisons between the outputs of state-of-the-art tools and those of new tools such as TEXCOMON, using the very same set of documents in order to highlight the improvements of new techniques. Comparative evaluations performed in this study use the same corpus to contrast results from TEXCOMON with those of one of the most advanced tools for ontology generation from text. Results generated by such experiments show that TEXCOMON yields superior performance, especially regarding conceptual relation learning.

Journal ArticleDOI
Yan Ye, Zhibin Jiang, Xiaodi Diao, Dong Yang, Gang Du
TL;DR: An ontology-based approach of modeling clinical pathway workflows at the semantic level for facilitating computerized clinical pathway implementation and efficient delivery of high-quality healthcare services is proposed.

01 Jan 2009
TL;DR: The work describes a sensor data ontology created on the basis of the Sensor Web Enablement (SWE) and SensorML data component models, and describes how the semantic relationships and operational constraints are deployed in a uniform structure to describe the heterogeneous sensor data.
Abstract: Sensor networks are used in various applications in several domains for measuring and determining physical phenomena and natural events. Sensors enable machines to capture and observe characteristics of physical objects and features of natural incidents. Sensor networks generate immense amounts of data, which require advanced analytical processing and interpretation by machines. Most of the current efforts on sensor networks are focused on network technologies and service development for various applications, and less on processing the emerging data. Sensor data in a real-world application will be an integration of various data obtained from different sensors, such as temperature, pressure, and humidity. Processing and interpreting huge amounts of heterogeneous sensor data, as well as interoperability, are important issues in designing a scalable sensor network architecture. This paper describes a semantic model for heterogeneous sensor data representation. We use common standards and logical description frameworks proposed by the Semantic Web community to create a sensor data description model. The work describes a sensor data ontology created on the basis of the Sensor Web Enablement (SWE) and SensorML data component models. We describe how the semantic relationships and operational constraints are deployed in a uniform structure to describe the heterogeneous sensor data.

Book ChapterDOI
15 Dec 2009
TL;DR: It is shown with evidence that appropriate translations of conceptual labels in ontologies are of crucial importance when applying monolingual matching techniques in cross-lingual ontology mapping.
Abstract: Ontologies are at the heart of knowledge management and make use of information that is not only written in English but also in many other natural languages. In order to enable knowledge discovery, sharing and reuse of these multilingual ontologies, it is necessary to support ontology mapping despite natural language barriers. This paper examines the soundness of a generic approach that involves machine translation tools and monolingual ontology matching techniques in cross-lingual ontology mapping scenarios. In particular, experimental results collected from case studies which engage mappings of independent ontologies that are labeled in English and Chinese are presented. Based on findings derived from these studies, limitations of this generic approach are discussed. It is shown with evidence that appropriate translations of conceptual labels in ontologies are of crucial importance when applying monolingual matching techniques in cross-lingual ontology mapping. Finally, to address the identified challenges, a semantic-oriented cross-lingual ontology mapping (SOCOM) framework is proposed and discussed.