scispace - formally typeset

Showing papers on "Ontology-based data integration published in 2018"


Journal ArticleDOI
TL;DR: This study shows that use of ontology for knowledge representation in e-learning recommender systems can improve the quality of recommendations, and that hybridization of knowledge-based recommendation with other recommendation techniques can enhance the effectiveness of e-learning recommenders.
Abstract: Recommender systems in e-learning domain play an important role in assisting the learners to find useful and relevant learning materials that meet their learning needs. Personalized intelligent agents and recommender systems have been widely accepted as solutions towards overcoming information retrieval challenges by learners arising from information overload. Use of ontology for knowledge representation in knowledge-based recommender systems for e-learning has become an interesting research area. In knowledge-based recommendation for e-learning resources, ontology is used to represent knowledge about the learner and learning resources. Although a number of review studies have been carried out in the area of recommender systems, there are still gaps and deficiencies in the comprehensive literature review and survey in the specific area of ontology-based recommendation for e-learning. In this paper, we present a review of literature on ontology-based recommenders for e-learning. First, we analyze and classify the journal papers that were published from 2005 to 2014 in the field of ontology-based recommendation for e-learning. Secondly, we categorize the different recommendation techniques used by ontology-based e-learning recommenders. Thirdly, we categorize the knowledge representation technique, ontology type and ontology representation language used by ontology-based recommender systems, as well as types of learning resources recommended by e-learning recommenders. Lastly, we discuss the future trends of this recommendation approach in the context of e-learning. This study shows that use of ontology for knowledge representation in e-learning recommender systems can improve the quality of recommendations. It was also evident that hybridization of knowledge-based recommendation with other recommendation techniques can enhance the effectiveness of e-learning recommenders.
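The core idea of knowledge-based recommendation over an ontology — matching the concepts annotating a resource against a learner's interests while exploiting the is-a hierarchy — can be sketched as follows. The toy concept hierarchy, resources, and scoring weights are all invented for illustration:

```python
# Minimal sketch of ontology-backed, knowledge-based recommendation.
IS_A = {  # child -> parent in a toy subject ontology
    "linear_regression": "machine_learning",
    "neural_networks": "machine_learning",
    "machine_learning": "computer_science",
    "sql": "databases",
    "databases": "computer_science",
}

def ancestors(concept):
    """All concepts reachable by following is-a links upward."""
    out = set()
    while concept in IS_A:
        concept = IS_A[concept]
        out.add(concept)
    return out

def score(resource_concepts, learner_interests):
    """Exact concept matches weigh double; hierarchical (ancestor) matches count once."""
    s = 0.0
    for c in resource_concepts:
        if c in learner_interests:
            s += 2.0
        elif ancestors(c) & learner_interests:
            s += 1.0
    return s

def recommend(resources, learner_interests, top_n=2):
    ranked = sorted(resources.items(),
                    key=lambda kv: score(kv[1], learner_interests),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

resources = {
    "Intro to Neural Nets": {"neural_networks"},
    "SQL Basics": {"sql"},
    "ML Overview": {"machine_learning", "linear_regression"},
}
print(recommend(resources, {"machine_learning"}))
```

The hierarchy lets a resource tagged only with `neural_networks` still match a learner interested in `machine_learning` — the generalization step that a flat keyword matcher would miss.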

260 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss ontology-based information retrieval approaches and techniques by taking into consideration the aspects of ontology modelling, processing and the translation of ontological knowledge into database search requests.

143 citations


Book ChapterDOI
01 Jan 2018
TL;DR: The goal of this paper is to provide an overview of OBDA, pointing out both the techniques that are at the basis of the paradigm, and the main challenges that remain to be addressed.
Abstract: While big data analytics is considered one of the most important paths to competitive advantage for today's enterprises, data scientists spend a comparatively large amount of time in the data preparation and data integration phase of a big data project. This shows that data integration is still a major challenge in IT applications. Over the past two decades, the idea of using semantics for data integration has become increasingly crucial, and has received much attention in the AI, database, web, and data mining communities. Here, we focus on a specific paradigm for semantic data integration, called Ontology-Based Data Access (OBDA). The goal of this paper is to provide an overview of OBDA, pointing out both the techniques that are at the basis of the paradigm, and the main challenges that remain to be addressed.
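A minimal illustration of the OBDA idea — ontology terms mapped to SQL views over the sources, with queries over the ontology unfolded through those mappings — might look like this. The mappings, table names, and the simplified join strategy are invented; real OBDA systems derive join conditions from shared query variables and optimize the resulting SQL:

```python
# Each ontology class/property is mapped to an SQL view over the sources.
MAPPINGS = {
    ":Employee": "SELECT id AS x FROM person WHERE role = 'employee'",
    ":worksFor": "SELECT person_id AS x, dept_id AS y FROM employment",
}

def unfold(atoms):
    """Unfold a conjunction of ontology atoms into SQL: every atom becomes
    a subquery built from its mapping; atoms are joined on the shared
    variable x (a deliberately simplified join strategy)."""
    subs = [f"({MAPPINGS[a]}) AS t{i}" for i, a in enumerate(atoms)]
    sql = "SELECT * FROM " + subs[0]
    for i in range(1, len(subs)):
        sql += f" JOIN {subs[i]} ON t0.x = t{i}.x"
    return sql

print(unfold([":Employee", ":worksFor"]))
```

The point is that the user queries the ontology vocabulary only; all knowledge of the underlying schemas lives in the mapping layer.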

71 citations


Journal ArticleDOI
TL;DR: A novel Compact Co-Evolutionary Algorithm (CCEA) is proposed to improve the ontology alignment’s quality and reduce the runtime consumption and the experimental results show that CCEA-based ontology matching approach is both effective and efficient when matching ontologies with various scales and under different heterogeneous situations.
Abstract: With the proliferation of sensors, semantic web technologies are becoming closely related to sensor networks. The linking of elements from semantic web technologies with sensor networks is called the semantic sensor web, whose main feature is the use of sensor ontologies. However, due to the subjectivity of different sensor ontology designers, different sensor ontologies may define the same entities with different names or in different ways, raising the so-called sensor ontology heterogeneity problem. There are many application scenarios where solving the problem of semantic heterogeneity may have a big impact, and it is urgent to provide techniques to enable the processing, interpretation and sharing of data from the sensor web, whose information is organized into different ontological schemes. Although the sensor ontology heterogeneity problem can be effectively solved by Evolutionary Algorithm (EA)-based ontology meta-matching technologies, the drawbacks of traditional EAs, such as premature convergence and long runtimes, seriously hamper them from being applied in practical dynamic applications. To solve this problem, we propose a novel Compact Co-Evolutionary Algorithm (CCEA) to improve the ontology alignment's quality and reduce runtime consumption. In particular, CCEA works with one better probability vector (PV), PV_better, and one worse PV, PV_worse, where PV_better mainly focuses on exploitation, which serves to increase the speed of convergence, and PV_worse pays more attention to exploration, which aims at preventing premature convergence. In the experiment, we use Ontology Alignment Evaluation Initiative (OAEI) test cases and two pairs of real sensor ontologies to test the performance of our approach.
The experimental results show that CCEA-based ontology matching approach is both effective and efficient when matching ontologies with various scales and under different heterogeneous situations, and compared with the state-of-the-art sensor ontology matching systems, CCEA-based ontology matching approach can significantly improve the ontology alignment’s quality.
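The compact-EA machinery behind CCEA can be illustrated with the classic single-vector compact GA on a OneMax toy problem: a probability vector replaces the population and is nudged toward the winner of each pairwise contest. CCEA, per the abstract, additionally co-evolves a second, worse probability vector for exploration; that part is omitted in this sketch, and the parameters are invented:

```python
import random

def compact_ga(n_bits=20, step=0.05, iters=2000, seed=1):
    """Compact GA: a probability vector (PV) stands in for a population.
    Two candidates are sampled from the PV; the PV is then nudged toward
    the contest winner on every bit where the two candidates differ."""
    rng = random.Random(seed)
    pv = [0.5] * n_bits

    def sample():
        return [1 if rng.random() < p else 0 for p in pv]

    for _ in range(iters):
        a, b = sample(), sample()
        winner, loser = (a, b) if sum(a) >= sum(b) else (b, a)  # OneMax fitness
        for i in range(n_bits):
            if winner[i] != loser[i]:
                delta = step if winner[i] == 1 else -step
                pv[i] = min(1.0, max(0.0, pv[i] + delta))
    return pv

pv = compact_ga()
print(sum(pv) / len(pv))  # mean probability drifts toward 1 on OneMax
```

In the meta-matching setting, each bit (or gene) would encode an alignment decision and the fitness would be an alignment-quality measure rather than OneMax.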

54 citations


Journal ArticleDOI
01 Jan 2018
TL;DR: QAPD, an ontology-based QA system for the physics domain that integrates natural language processing, ontologies and information retrieval technologies to provide informative answers to users, is presented, together with an inferring schema mapping method that uses a combination of semantic and syntactic information and attribute-based inference to transform users' questions into ontological knowledge base queries.
Abstract: The tremendous development in information technology led to an explosion of data and motivated the need for powerful yet efficient strategies for knowledge discovery. Question answering (QA) systems made it possible to ask questions and retrieve answers using natural language queries. In an ontology-based QA system, the knowledge-based data, where the answers are sought, have a structured organization. The question-answer retrieval of an ontology knowledge base provides a convenient way to obtain knowledge. In this paper, we present QAPD, an ontology-based QA system applied to the physics domain, which integrates natural language processing, ontologies and information retrieval technologies to provide informative answers to users. This system allows users to retrieve information from formal ontologies using input queries formulated in natural language. We propose an inferring schema mapping method, which uses a combination of semantic and syntactic information, together with attribute-based inference, to transform users' questions into ontological knowledge base queries. In addition, a novel domain ontology for the physics domain, called EAEONT, is presented. Relevant standards and regulations have been utilized extensively during the ontology building process. The original characteristic of the system is the strategy used to fill the gap between users' expressiveness and formal knowledge representation. The system has been developed and tested on the English language using an ontology modeling the physics domain. The performance level achieved enables the use of the system in real environments.
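A heavily simplified sketch of the question-to-ontology-query step: a lexicon maps question phrases to ontology terms, and the recognized entity/property pair is assembled into a SPARQL-like triple pattern. Every name below (the `physics:` terms, the lexicon entries) is invented for illustration; the paper's actual method also uses syntactic analysis and attribute-based inference:

```python
# Toy lexicon: surface phrases -> (ontology term, kind).
LEXICON = {
    "speed of light": ("physics:SpeedOfLight", "entity"),
    "value": ("physics:hasValue", "property"),
    "unit": ("physics:hasUnit", "property"),
}

def to_query(question):
    """Match lexicon phrases in the question and build a triple-pattern
    query from the recognized entity and property; None if no match."""
    q = question.lower()
    entity = property_ = None
    for phrase, (term, kind) in LEXICON.items():
        if phrase in q:
            if kind == "entity":
                entity = term
            else:
                property_ = term
    if entity and property_:
        return f"SELECT ?x WHERE {{ {entity} {property_} ?x }}"
    return None

print(to_query("What is the value of the speed of light?"))
```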

41 citations


Journal ArticleDOI
TL;DR: The problem of determining whether a given data integration system discloses a source query to an attacker is formalised and studied, and a number of techniques for analysing logical privacy issues in ontology-based data integration are introduced.

40 citations


Book
17 Mar 2018
TL;DR: The objective of the work described in this paper is to move closer to the ultimate goal of seamless system integration using the principle behind ontological engineering to unambiguously define domain-specific concepts.
Abstract: The objective of the work described in this paper is to move closer to the ultimate goal of seamless system integration using the principle behind ontological engineering to unambiguously define domain-specific concepts. A major challenge facing industry today is the lack of interoperability between heterogeneous systems. This challenge is apparent in many sectors, including both healthcare and manufacturing. Current integration efforts are usually based solely on how information is represented (the syntax) without a description of what the information means (the semantics). With the growing complexity of information and the increasing need to completely and correctly exchange information among different systems, the need for precise and unambiguous capture of the meaning of concepts within a given system is becoming apparent.

30 citations


Journal ArticleDOI
Andrew Iliadis1
TL;DR: In this commentary, the author highlights several useful directions in critical data studies and ways to utilize data for social progress in media and communication research.
Abstract: Recently, media and communication researchers have shown an increasing interest in critical data studies and ways to utilize data for social progress. In this commentary, I highlight several useful...

29 citations


Book ChapterDOI
01 Jan 2018
TL;DR: An automatic topic ontology construction process for better topic classification is developed and a corpus based novel approach to enrich the set of categories in the ODP by automatically identifying concepts and their associated semantic relationships based on external knowledge from Wikipedia and WordNet is presented.
Abstract: The rapid growth of web technologies has created a huge amount of information that is available as web resources on the Internet. The authors develop an automatic topic ontology construction process for better topic classification and present a corpus-based novel approach to enrich the set of categories in the ODP by automatically identifying concepts and their associated semantic relationships based on external knowledge from Wikipedia and WordNet. The topic ontology construction process relies on concept acquisition and semantic relation extraction. Initially, a topic mapping algorithm is developed to acquire the concepts from Wikipedia based on semantic relations. A semantic similarity clustering algorithm is used to compute similarity and group the set of similar concepts. The semantic relation extraction algorithm derives associated semantic relations between the set of extracted topics from the lexical patterns in WordNet. The performance of the proposed topic ontology is evaluated for the classification of web documents, and the obtained results depict improved performance over the ODP.
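The semantic-similarity clustering step can be approximated with a purely lexical stand-in. The paper uses Wikipedia/WordNet-based similarity; here Jaccard overlap of name tokens and an invented threshold are used instead, just to show the greedy grouping mechanics:

```python
def jaccard(a, b):
    """Token-overlap similarity between two concept names."""
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def cluster(concepts, threshold=0.3):
    """Greedy single-link clustering: a concept joins the first cluster
    that already holds a sufficiently similar member."""
    clusters = []
    for c in concepts:
        for cl in clusters:
            if any(jaccard(c, m) >= threshold for m in cl):
                cl.append(c)
                break
        else:
            clusters.append([c])
    return clusters

print(cluster(["machine_learning", "deep_learning", "web_mining", "data_mining"]))
```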

26 citations


Journal ArticleDOI
TL;DR: A delay analysis ontology is proposed that may facilitate development of databases, information sharing as well as retrieval for delay analysis within construction companies, and enable companies to create their own databases, corporate memories and develop decision support systems for better analysis of delays.
Abstract: Delay is a common problem of the construction sector and it is one of the major reasons of claims between project participants. Systematic and reliable delay analysis is critical for successful management of claims. In this study, a delay analysis ontology is proposed that may facilitate development of databases, information sharing as well as retrieval for delay analysis within construction companies. A detailed literature review on construction delays has been carried out during the development of the ontology and it is evaluated by using five case studies. The delay analysis ontology may be used for different purposes especially to support decision-making during risk and claim management processes. It may enable companies to create their own databases, corporate memories and develop decision support systems for better analysis of delays.

24 citations


Journal ArticleDOI
TL;DR: This paper illustrates an approach to build a modular ontology for Big Data integration that considers the characteristics of big volume, high-speed generation and wide variety of the data.
Abstract: Big Data are collections of data sets too large and complex to process using classical database management tools. Their main characteristics are volume, variety and velocity. Although these characteristics accentuate heterogeneity problems, users are always looking for a unified view of the data. Consequently, Big Data integration is a new research area that faces new challenges due to the aforementioned characteristics. Ontologies are widely used in data integration since they represent knowledge as a formal description of a domain of interest. With the advent of Big Data, their implementation faces new challenges due to the volume, variety and velocity dimensions of these data. This paper illustrates an approach to build a modular ontology for Big Data integration that considers the characteristics of big volume, high-speed generation and wide variety of the data. Our approach exploits a NoSQL database, namely MongoDB, and takes advantage of modular ontologies. It follows three main steps: wrapping data sources to MongoDB databases, generating local ontologies and finally composing the local ontologies to get a global one. We focus in particular on the implementation of the last two steps.
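The "generate local ontologies, then compose" steps can be sketched without MongoDB by deriving a toy local ontology from one sample document per collection and unioning the results. All names are illustrative, and real composition would also align equivalent classes and properties rather than blindly union them:

```python
def local_ontology(collection_name, sample_doc):
    """Derive a tiny local ontology from one sample document:
    the collection becomes a class and each field a datatype property."""
    cls = collection_name.capitalize()
    return {
        "classes": {cls},
        "properties": {(cls, field, type(v).__name__) for field, v in sample_doc.items()},
    }

def compose(*locals_):
    """Compose local ontologies into a global one by set union
    (a real approach would also align equivalent classes/properties)."""
    return {
        "classes": set().union(*(o["classes"] for o in locals_)),
        "properties": set().union(*(o["properties"] for o in locals_)),
    }

o1 = local_ontology("patient", {"name": "Ada", "age": 36})
o2 = local_ontology("visit", {"date": "2018-01-01", "patient_id": 7})
g = compose(o1, o2)
print(sorted(g["classes"]))
```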

Proceedings ArticleDOI
01 Aug 2018
TL;DR: A unified set of fog-based access control policies are presented with the aim of reducing administrative burdens and processing overheads and a unified data ontology is introduced together with its reasoning capability by realizing the formal approach.
Abstract: With the proliferation of cloud-based data and services, accessing data from distributed cloud environments and consequently providing integrated results to users has become a key challenge, often involving large processing overheads and administrative costs. Traditional, spatial, temporal and other context-sensitive access control models have been applied in different environments in order to access such data and information. Recently, fog-based access control models have also been introduced to overcome latency and processing issues by moving the execution of application logic from the cloud level to an intermediary level through the addition of computational nodes at the edges of the network. These existing access control models have mostly been used to access data from centralized sources. However, computing technologies have changed rapidly over the last few years, and many organizations need to dynamically control context-sensitive access to cloud data resources from distributed environments. In this article, we propose a new-generation fog-based access control approach, combining the benefits of fog computing and context-sensitive access control solutions. We first formally introduce a general data model and its associated policy and mapping models, in order to access data from distributed cloud sources and to provide integrated results to the users. In particular, we present a unified set of fog-based access control policies with the aim of reducing administrative burdens and processing overheads. We then introduce a unified data ontology together with its reasoning capability by realizing our formal approach. We demonstrate the applicability of our proposal through prototype testing and several case studies. Experimental results demonstrate the good performance of our approach with respect to our earlier context-sensitive access control approach.
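The unified context-sensitive policy idea can be sketched as follows. The roles, resources, and context attributes are invented; in a real fog deployment such policies would be evaluated at edge nodes against an ontology-backed context model:

```python
# Each policy names a role, a resource, and the context attributes it requires.
POLICIES = [
    {"role": "doctor", "resource": "ehr", "context": {"location": "hospital"}},
    {"role": "nurse", "resource": "ehr", "context": {"location": "hospital", "shift": "on"}},
]

def allowed(role, resource, context):
    """Grant access iff some policy matches the role/resource and all of
    its required context attributes hold in the request context."""
    return any(
        p["role"] == role and p["resource"] == resource
        and all(context.get(k) == v for k, v in p["context"].items())
        for p in POLICIES
    )

print(allowed("doctor", "ehr", {"location": "hospital"}))
print(allowed("nurse", "ehr", {"location": "home", "shift": "on"}))
```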

Book ChapterDOI
03 Jun 2018
TL;DR: This paper addresses the scenario where records with different identifiers in different databases can represent the same entity, and presents an alternative approach, which is based on assigning canonical IRIs to entities in order to avoid redundancy.
Abstract: In this paper, we study how to efficiently integrate multiple relational databases using an ontology-based approach. In ontology-based data integration (OBDI) an ontology provides a coherent view of multiple databases, and SPARQL queries over the ontology are rewritten into (federated) SQL queries over the underlying databases. Specifically, we address the scenario where records with different identifiers in different databases can represent the same entity. The standard approach in this case is to use sameAs to model the equivalence between entities. However, the standard semantics of sameAs may cause an exponential blow up of query results, since all possible combinations of equivalent identifiers have to be included in the answers. The large number of answers is not only detrimental to the performance of query evaluation, but also makes the answers difficult to understand due to the redundancy they introduce. This motivates us to propose an alternative approach, which is based on assigning canonical IRIs to entities in order to avoid redundancy. Formally, we present our approach as a new SPARQL entailment regime and compare it with the sameAs approach. We provide a prototype implementation and evaluate it in two experiments: in a real-world data integration scenario in Statoil and in an experiment extending the Wisconsin benchmark. The experimental results show that the canonical IRI approach is significantly more scalable.
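The contrast between sameAs semantics and canonical IRIs can be sketched with a union-find that elects one canonical IRI per equivalence class. The IRIs are invented; the 9-versus-1 count illustrates the combinatorial blow-up the paper describes for a two-variable answer over a three-member equivalence clique:

```python
import itertools

SAME_AS = [("db1:alice", "db2:a01"), ("db2:a01", "db3:u7")]

def canonical_map(pairs):
    """Union-find over sameAs pairs; the lexicographically smallest
    member of each equivalence class becomes its canonical IRI."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)
    groups = {}
    for x in list(parent):
        groups.setdefault(find(x), []).append(x)
    return {x: min(g) for g in groups.values() for x in g}

cm = canonical_map(SAME_AS)
members = sorted(cm)
same_as_answers = len(list(itertools.product(members, members)))
canonical_answers = len({(cm[a], cm[b]) for a, b in itertools.product(members, members)})
print(same_as_answers, canonical_answers)
```

Under sameAs semantics every combination of equivalent identifiers is a distinct answer; normalizing to canonical IRIs collapses them to one, which is the redundancy reduction the paper exploits.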

Journal ArticleDOI
TL;DR: This work explores the use of a predictive statistical model to establish an alignment between two input ontologies and demonstrates how to integrate ontology partitioning and parallelism in the ontology matching process in order to make the statistical predictive model scalable to large ontological matching tasks.
Abstract: Ontologies have become a popular means of knowledge sharing and reuse. This has motivated development of large independent ontologies within the same or different domains with some overlapping information among them. In order to match such large ontologies, automatic matchers become an inevitable solution. This work explores the use of a predictive statistical model to establish an alignment between two input ontologies. We demonstrate how to integrate ontology partitioning and parallelism in the ontology matching process in order to make the statistical predictive model scalable to large ontology matching tasks. Unlike most ontology matching tools which establish 1:1 cardinality mappings, our statistical model generates one-to-many cardinality mappings.
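Partitioning plus parallelism for large-scale matching can be sketched with thread-pooled blocks and an edit-distance-style similarity. The entity names and the 0.8 threshold are invented, and the one-to-many cardinality the paper mentions falls out of keeping every target above the threshold rather than only the best one:

```python
from concurrent.futures import ThreadPoolExecutor
from difflib import SequenceMatcher

def sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_block(args):
    """Match one partition of source entities against all targets,
    keeping every pair above the threshold (one-to-many mappings)."""
    block, targets, threshold = args
    return [(s, t) for s in block for t in targets if sim(s, t) >= threshold]

def parallel_match(source, target, threshold=0.8, n_blocks=2):
    blocks = [source[i::n_blocks] for i in range(n_blocks)]  # partition
    with ThreadPoolExecutor() as ex:
        results = ex.map(match_block, [(b, target, threshold) for b in blocks])
    return sorted(p for r in results for p in r)

matches = parallel_match(["Author", "Paper", "Organisation"],
                         ["author", "article", "organization", "organisations"])
print(matches)
```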

Journal ArticleDOI
TL;DR: A combined three-part, five-stage framework of data mining, process improvement, and process ontology that can be exploited to support process improvement methodologies in organizations is presented.

Journal ArticleDOI
TL;DR: A system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology, which will enhance existing business analysis methods in the domain of IT benchmarking.
Abstract: In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system al...
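The semi-automatic mapping recommender can be sketched as ranking candidate database columns for an ontology concept by token overlap. The concept and column names are invented, and a production recommender would combine several similarity signals rather than one:

```python
def tokens(name):
    """Normalize a concept or column name into a set of lowercase tokens."""
    return set(name.lower().replace("_", " ").split())

def recommend(concept, columns, top_n=2):
    """Rank columns by Jaccard overlap with the concept name."""
    scored = sorted(
        columns,
        key=lambda c: len(tokens(concept) & tokens(c)) / len(tokens(concept) | tokens(c)),
        reverse=True,
    )
    return scored[:top_n]

print(recommend("employee cost", ["emp_salary", "employee_cost_total", "cost", "dept_name"]))
```

The top candidates would then be visualized for a human to confirm, as in the system described above.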

Journal ArticleDOI
TL;DR: This paper analyzes the integration of XML-based standards with ontology to describe the meaning of the information and knowledge interchanged between trading partners to jointly execute business processes and defines the main components of an ontology development environment to support the entire ontology lifecycle.
Abstract: A collaborative B2B relationship implies jointly executing business processes. This relationship demands a complete access to available information and knowledge to support decision-making activities between trading partners. To support information interchange between enterprises in collaborative B2B ecommerce there are some XML-based standards technologies, like RosettaNet, ebXML and OAGIS. However, XML does not express semantics by itself. So, these standards only provide an infrastructure to support the information interchange. They are suitable to integrate information but not to support decision-making activities where a common understanding of the information is needed. In this paper we analyze the integration of these standards with ontology to describe the meaning of the information and knowledge interchanged between trading partners to jointly execute business processes. Furthermore, we define the main components of an ontology development environment to support the entire ontology lifecycle.

Journal ArticleDOI
TL;DR: This work proposes an ontology for the domain of IT benchmarking, based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years.
Abstract: A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.

Journal ArticleDOI
TL;DR: This work proposes an ontology, DEMLOnto, based on six basic emotions to help users share existing information; building such an ontology is a useful first step in providing and formalizing the semantics of information representation.
Abstract: With the explosive growth of various social media applications, individuals and organizations are increasingly using their contents (e.g. reviews, forum discussions, blogs, micro-blogs, comments, and postings in social network sites) for decision-making. These contents are typical big data. Opinion mining or sentiment analysis focuses on how to extract emotional semantics from these big data to help users make better decisions. That is not an easy task, because it faces many problems: for example, different contexts may change the meaning of the same word, and a multilingual environment restricts the full use of the analysis results. Ontology provides knowledge about specific domains that is understandable by both computers and developers. Building an ontology is a useful first step in providing and formalizing the semantics of information representation. We propose an ontology, DEMLOnto, based on six basic emotions to help users share existing information. The ont...


Journal ArticleDOI
13 Jun 2018
TL;DR: Information management during the construction phase of a built asset involves multiple stakeholders using multiple software applications to generate and store data, which makes consistent information management problematic.
Abstract: Information management during the construction phase of a built asset involves multiple stakeholders using multiple software applications to generate and store data. This is problematic as data com...

Journal ArticleDOI
TL;DR: In this article, a computational ontology for the Union of Concerned Scientists (UCS) Satellite Database (UCSSD) called the UCS Satellite Ontology (or UCSSO) is presented.
Abstract: This paper demonstrates the development of an ontology for satellite databases. First, I create a computational ontology for the Union of Concerned Scientists (UCS) Satellite Database (UCSSD for short), called the UCS Satellite Ontology (or UCSSO). Second, in developing UCSSO I show that the Space Situational Awareness Ontology (SSAO) (Rovetto and Kelso 2016), an existing space domain reference ontology, and related ontology work by the author (Rovetto 2015, 2016) can be used either (i) with a database-specific local ontology such as UCSSO, or (ii) in its stead. In case (i), local ontologies such as UCSSO can reuse SSAO terms, perform term mappings, or extend it. In case (ii), the author's orbital space ontology work, such as the SSAO, is usable by the UCSSD and organizations with other space object catalogs, as a reference ontology suite providing a common semantically-rich domain model. The SSAO, UCSSO, and the broader Orbital Space Environment Domain Ontology project are online at this http URL and GitHub. This ontology effort aims, in part, to provide accurate formal representations of the domain for various applications. Ontology engineering has the potential to facilitate the sharing and integration of satellite data from federated databases and sensors for safer spaceflight.

Book ChapterDOI
01 Jan 2018
TL;DR: The authors present OntoMaven, a design artifact that adopts the Maven-based development methodology and adapts its concepts to knowledge engineering for Maven-based ontology development and management of ontology artifacts in distributed ontology repositories.
Abstract: In collaborative agile ontology development projects support for modular reuse of ontologies from large existing remote repositories, ontology project life cycle management, and transitive dependency management are important needs. The Apache Maven approach has proven its success in distributed collaborative Software Engineering by its widespread adoption. The contribution of this paper is a new design artifact called OntoMaven. OntoMaven adopts the Maven-based development methodology and adapts its concepts to knowledge engineering for Maven-based ontology development and management of ontology artifacts in distributed ontology repositories.
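The transitive dependency management OntoMaven borrows from Maven can be sketched as a depth-first walk that collects each imported ontology exactly once. The artifact names and dependency graph below are invented:

```python
# Toy dependency graph: ontology artifact -> list of imported artifacts.
DEPENDS = {
    "myproject": ["foaf", "time"],
    "foaf": ["rdfs"],
    "time": ["rdfs"],
    "rdfs": [],
}

def resolve(artifact, seen=None):
    """Return the artifact plus all transitive imports, each listed once
    (the shared 'rdfs' dependency is not duplicated)."""
    seen = seen if seen is not None else []
    if artifact in seen:
        return seen
    seen.append(artifact)
    for dep in DEPENDS.get(artifact, []):
        resolve(dep, seen)
    return seen

print(resolve("myproject"))
```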

Proceedings Article
02 Sep 2018
TL;DR: This paper analyses existing modeling approaches and classifies them according to some relevant characteristics of knowledge modeling, and presents transformation rules between ontology models in order to allow a powerful usage of ontologies in data and knowledge management.
Abstract: Ontologies are seen as the most relevant way to solve the data understanding problem and to allow programs to perform meaningful operations on data in various domains. However, it appears that none of the proposed models is complete enough by itself to cover all aspects of knowledge applications. In this paper, we analyse existing modeling approaches and classify them according to some relevant characteristics of knowledge modeling we have identified. Finally, this paper presents transformation rules between ontology models in order to benefit from their respective strengths and to allow a powerful usage of ontologies in data and knowledge management.

Journal ArticleDOI
Ahmad Hawalah1
TL;DR: This paper first extracts and builds an Arabic ontology from a publicly available directory; the ontology is then enhanced with rich data from the Internet, and an Arabic online directory is used to construct a multi-disciplinary ontology that provides a hierarchical representation of topics in a conceptual way.
Abstract: Over recent years, the Internet has become people’s main source of information, with many databases and web pages being added and accessed every day. This continued growth in the amount of information available has led to frustration and difficulty for those attempting to find a specific piece of information. As such, many techniques are widely used to retrieve useful information and to mine valuable data; indeed, these techniques make it possible to discover hidden relations and patterns. Most of the above-mentioned techniques have been used primarily to process and analyse English text, but not Arabic text. Limited Arabic resources (e.g. datasets, databases, and ontologies), also make analysing and processing Arabic text a difficult task. As such, in this paper, we propose a framework for building an Arabic ontology from multiple resources. Thus, we will first extract and build an Arabic ontology from a publicly available directory, following which, we will enhance this ontology with rich data from the Internet. We will then use an Arabic online directory to construct a multi-disciplinary ontology that provides a hierarchical representation of topics in a conceptual way. Following this, we introduce an enhanced technique to enrich these ontologies with sufficient information and proper annotation for each concept. Finally, by using common information retrieval evaluation techniques, we confirm the viability of the proposed approach.


Journal ArticleDOI
TL;DR: This work describes a visual system for managing ontologies in the RDF formalism, providing a number of features for creating, updating and deleting elements and instances via a user-friendly graphical interface, along with a set of advanced operators that can be applied upon them.
Abstract: This work describes SEMANTO, a visual system for managing ontologies in the RDF formalism, providing a number of features for creating, updating and deleting elements and instances via a user-friendly graphical interface, along with a set of advanced operators that can be applied upon them. These operators implement mechanisms for ontology instance matching and integration, ontology enrichment with semantically-related concepts, as well as question answering in natural language, with the purpose of discovering knowledge from the underlying ontologies. SEMANTO can display and manage RDF ontologies via SPARQL endpoints, including user-defined ontologies and subsets of Linked Open Data. SEMANTO has been evaluated against ontological schemas and instances derived from a knowledge model for learning management systems and from a learning application for online dispute resolution.

Proceedings ArticleDOI
01 Jan 2018
TL;DR: An ontology-based platform that supports data owners and model developers to share and harmonize their data for model development respecting data privacy is presented, driven by the demand for predictive instruments in allogeneic stem cell transplantation.
Abstract: Predictive models can support physicians to tailor interventions and treatments to their individual patients based on their predicted response and risk of disease and help in this way to put personalized medicine into practice. In allogeneic stem cell transplantation risk assessment is to be enhanced in order to respond to emerging viral infections and transplantation reactions. However, to develop predictive models it is necessary to harmonize and integrate high amounts of heterogeneous medical data that is stored in different health information systems. Driven by the demand for predictive instruments in allogeneic stem cell transplantation we present in this paper an ontology-based platform that supports data owners and model developers to share and harmonize their data for model development respecting data privacy.

Proceedings Article
24 Mar 2018
TL;DR: This work investigates the use of quality metrics for Content ODP evaluation in terms of metrics applicability and validity, and discusses the general applicability of each metric considering its definition, ODP characteristics, and the defined goals of ODPs.
Abstract: Ontology Design Patterns (ODPs) provide best-practice solutions for common or recurring ontology design problems. This work focuses on Content ODPs. These form small ontologies themselves and thus can be subject to ontology quality metrics in general. We investigate the use of such metrics for Content ODP evaluation in terms of metric applicability and validity. The quality metrics used for this investigation are taken from existing work in the area of ontology quality evaluation. We discuss the general applicability of each metric to Content ODPs, considering its definition, ODP characteristics, and the defined goals of ODPs. Metrics found to be applicable are calculated for a random set of 10 Content ODPs from the ODP wiki-portal that was initiated by the NeOn project. Interviews have been conducted for an explorative view into the correlation between quality metrics and evaluation by users.
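A flavor of such quality metrics can be given on a toy pattern represented as subclass edges plus other relations. The metric names follow common ontology-metric suites, but the formulas here are deliberately simplified and the toy ontology is invented:

```python
# Toy Content ODP: subclass edges (child -> parent) and other relations.
SUBCLASS = {"Dog": "Animal", "Cat": "Animal", "Animal": "Thing"}
RELATIONS = [("Animal", "eats", "Food")]

def depth(cls):
    """Length of the subclass chain from cls up to the root."""
    d = 0
    while cls in SUBCLASS:
        cls = SUBCLASS[cls]
        d += 1
    return d

classes = (set(SUBCLASS) | set(SUBCLASS.values())
           | {s for s, _, _ in RELATIONS} | {o for _, _, o in RELATIONS})
max_depth = max(depth(c) for c in classes)
# Relationship richness: share of non-subclass relations among all relations.
rr = len(RELATIONS) / (len(RELATIONS) + len(SUBCLASS))
print(len(classes), max_depth, round(rr, 2))
```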

Journal ArticleDOI
TL;DR: This paper presents a knowledge-based approach to improve process integration in ODE, an ontology-based SEE, which aims to allow tool integration in Software Engineering Environments.
Abstract: Process integration in Software Engineering Environments (SEE) is very important to allow tool integration. In this paper, we present a knowledge-based approach to improve process integration in ODE, an ontology-based SEE.