Showing papers on "Ontology-based data integration published in 1994"


Book ChapterDOI
24 May 1994
TL;DR: This paper describes an ontology for mathematical modeling in engineering that includes conceptual foundations for scalar, vector, and tensor quantities, physical dimensions, units of measure, functions of quantities, and dimensionless quantities.
Abstract: We describe an ontology for mathematical modeling in engineering. The ontology includes conceptual foundations for scalar, vector, and tensor quantities, physical dimensions, units of measure, functions of quantities, and dimensionless quantities. The conceptualization builds on abstract algebra and measurement theory, but is designed explicitly for knowledge sharing purposes. The ontology is being used as a communication language among cooperating engineering agents, and as a foundation for other engineering ontologies. In this paper we describe the conceptualization of the ontology, and show selected axioms from definitions. We describe the design of the ontology and justify the important representation choices. We offer evaluation criteria for such ontologies and demonstrate design techniques for achieving them.
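A minimal Python sketch of the kind of conceptualization the abstract describes (invented for illustration, not code from the paper): quantities carry physical dimensions, and the algebra of quantities constrains which operations are meaningful.

```python
# Hypothetical sketch, not the paper's axiomatization: a quantity is a
# magnitude paired with a physical dimension; addition requires commensurable
# dimensions, while multiplication composes them, echoing the ontology's
# algebra of quantities, dimensions, and units of measure.
from dataclasses import dataclass

@dataclass(frozen=True)
class Dimension:
    # Exponents over base dimensions, e.g. length=1, time=-1 for velocity.
    length: int = 0
    mass: int = 0
    time: int = 0

    def __mul__(self, other: "Dimension") -> "Dimension":
        return Dimension(self.length + other.length,
                         self.mass + other.mass,
                         self.time + other.time)

@dataclass(frozen=True)
class Quantity:
    magnitude: float
    dimension: Dimension

    def __add__(self, other: "Quantity") -> "Quantity":
        # Addition is defined only for quantities of the same dimension.
        if self.dimension != other.dimension:
            raise TypeError("cannot add quantities of different dimensions")
        return Quantity(self.magnitude + other.magnitude, self.dimension)

    def __mul__(self, other: "Quantity") -> "Quantity":
        # Multiplication composes dimensions.
        return Quantity(self.magnitude * other.magnitude,
                        self.dimension * other.dimension)

velocity = Quantity(3.0, Dimension(length=1, time=-1))
duration = Quantity(2.0, Dimension(time=1))
distance = velocity * duration  # dimension: length=1
```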

549 citations


Proceedings Article
12 Sep 1994
TL;DR: This work presents a procedure that uses a classifier to categorize attributes according to their field specifications and data values, then trains a neural network to recognize similar attributes, yielding a technique for matching equivalent data elements.
Abstract: One important step in integrating heterogeneous databases is matching equivalent attributes: determining which fields in two databases refer to the same data. The meaning of information may be embodied within a database model, a conceptual schema, application programs, or data contents. Integration involves extracting semantics, expressing them as metadata, and matching semantically equivalent data elements. We present a procedure using a classifier to categorize attributes according to their field specifications and data values; we then train a neural network to recognize similar attributes. In our technique, the knowledge of how to match equivalent data elements is "discovered" from metadata, not "pre-programmed".
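A toy illustration of the attribute-matching idea (the paper's actual system trains a classifier and a neural network; the metadata features and similarity measure below are invented stand-ins):

```python
# Illustrative sketch only: approximate the idea of matching attributes by
# comparing feature vectors derived from their field specifications.
import math

def features(field):
    # Hypothetical metadata features: field length, numeric flag, nullable flag.
    return [field["length"],
            1.0 if field["numeric"] else 0.0,
            1.0 if field["nullable"] else 0.0]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

db1_salary = {"length": 8, "numeric": True, "nullable": False}
db2_wage   = {"length": 8, "numeric": True, "nullable": True}
db2_name   = {"length": 40, "numeric": False, "nullable": False}

# Attributes with the most similar metadata are proposed as matches.
print(cosine(features(db1_salary), features(db2_wage)))  # high similarity
print(cosine(features(db1_salary), features(db2_name)))  # lower similarity
```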

233 citations


Proceedings ArticleDOI
17 Apr 1994
TL;DR: The ontology described in this paper is one of many being created by the TOVE project in the Enterprise Integration Laboratory at the University of Toronto.
Abstract: The complexity of planning and scheduling is determined by the degree to which activities contend for resources. Accordingly, any such application must be able to reason about the nature of a resource and its availability. This paper presents a generic enterprise resource ontology. The ontology described here is one of many being created by the TOVE project in the Enterprise Integration Laboratory at the University of Toronto.
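An illustrative sketch, not the TOVE axioms themselves, of the kind of resource-availability query a planner or scheduler would pose against such an ontology:

```python
# Invented for illustration: a resource with a capacity and a set of committed
# time intervals, answering whether it is available over a requested interval.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Resource:
    name: str
    capacity: int
    commitments: List[Tuple[int, int]] = field(default_factory=list)  # (start, end)

    def available(self, start: int, end: int) -> bool:
        # Available if overlapping commitments stay below capacity.
        overlapping = sum(1 for (s, e) in self.commitments
                          if s < end and start < e)
        return overlapping < self.capacity

machine = Resource("milling-machine", capacity=1)
machine.commitments.append((0, 10))
print(machine.available(5, 15))   # False: contended during [5, 10)
print(machine.available(10, 20))  # True
```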

90 citations


01 Jan 1994
TL;DR: A logical framework for representing activities, states, time, resources, and cost in an enterprise integration architecture; ontologies for these concepts are defined in first-order logic and characterized with competency questions.
Abstract: We present a logical framework for representing activities, states, time, resources, and cost in an enterprise integration architecture. We define ontologies for these concepts in first-order logic and consider the problems of temporal projection and reasoning about the occurrence of actions. We characterize the ontology with the use of competency questions. The ontology must contain a necessary and sufficient set of axioms to represent and solve these questions. As such, they serve as a methodology for evaluating ontologies for the various tasks in enterprise engineering.
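An illustrative example (not an axiom quoted from the paper) of how a competency question such as "can activity a occur at time t?" is grounded in first-order axioms:

```latex
% Illustrative axiom, invented for this summary: an activity can occur
% only when its enabling state holds.
\forall a \, \forall t \; \bigl( \mathit{occurs}(a, t) \rightarrow
  \mathit{holds}(\mathit{enabled}(a), t) \bigr)
```

Under this methodology, the ontology is adequate when its axioms are necessary and sufficient to entail answers to all of its competency questions.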

49 citations


Proceedings ArticleDOI
08 Mar 1994
TL;DR: This paper describes a semi-automatic method for associating a Japanese lexicon with a semantic concept taxonomy called an ontology, using a Japanese-English bilingual dictionary as a "bridge".
Abstract: This paper describes a semi-automatic method for associating a Japanese lexicon with a semantic concept taxonomy called an ontology, using a Japanese-English bilingual dictionary as a "bridge". The ontology supports semantic processing in a knowledge-based machine translation system by providing a set of language-neutral symbols and semantic information. To put the ontology to practical use, lexical items of each language of interest must be linked to appropriate ontology items. Associating ontology items with lexical items of various languages is a process fraught with difficulty: since much of this work depends on the subjective decisions of human workers, large MT dictionaries tend to suffer from dispersion and inconsistency. The problem we focus on here is how to associate concepts in the ontology with Japanese lexical entries by automatic methods, since there are too many concepts to define adequately by hand. We have designed three algorithms to associate a Japanese lexicon with the concepts of the ontology automatically: the equivalent-word match, the argument match, and the example match. As preliminary experiments, we ran these algorithms on 980 nouns, 860 verbs, and 520 adjectives. The algorithms proved effective for more than 80% of the words.
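A rough sketch of the equivalent-word match alone (the argument match and example match are not shown); the dictionary and ontology contents here are invented for illustration:

```python
# Invented example: link a Japanese word to the ontology concept whose English
# names overlap most with the word's bilingual-dictionary translations.
bilingual = {"inu": ["dog", "hound"]}               # Japanese -> English glosses
ontology  = {"CANINE": ["dog", "wolf"],             # concept -> English names
             "TOOL":   ["hammer", "saw"]}

def equivalent_word_match(japanese_word):
    glosses = set(bilingual.get(japanese_word, []))
    # Score concepts by shared English equivalents; pick the best nonzero match.
    scores = {c: len(glosses & set(words)) for c, words in ontology.items()}
    return max(scores, key=scores.get) if any(scores.values()) else None

print(equivalent_word_match("inu"))  # CANINE
```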

33 citations


ReportDOI
01 Oct 1994
TL;DR: The IDEF5 Ontology Capture Method as mentioned in this paper relies on iterative knowledge extraction through various steps: organizing/scoping; data collection; data analysis; initial development; ontology refinement/validation.
Abstract: In order to exploit relevant information about a specific domain, the domain vocabulary must be captured. In addition, rigorous definitions of the basic terms in the vocabulary and the logical connections between those terms must be identified. Ontologies are used to capture the concepts and objects in a specific domain, along with associated relationships and meanings. In addition, ontology capture helps coordinate projects by standardizing terminology and creates opportunities for information reuse. The IDEF5 Ontology Capture Method has been developed to reliably construct ontologies in a way that closely reflects human understanding of the specific domain. IDEF5 relies on iterative knowledge extraction through various steps: organizing/scoping; data collection; data analysis; initial development; and ontology refinement/validation. IDEF5 allows users to validate the vocabulary and axioms of a given domain and store that knowledge in a usable representational medium.

23 citations


Journal ArticleDOI
TL;DR: It is shown that using an object-oriented data model to build a "uniform" view of several databases can greatly simplify this task and actually extends the scope of integration in two directions.
Abstract: The object-oriented paradigm has several features that facilitate the integration of heterogeneous data management systems. One of the main problems in integration is providing users with the same data model and language to access very different systems. This problem exists in all kinds of distributed heterogeneous data management systems, independently of their integration architecture (classical distributed databases, federated databases, multidatabases). This paper shows that using an object-oriented data model to build a "uniform" view of several databases can greatly simplify this task, and actually extends the scope of integration in two directions. The first concerns the integration of data management systems to which traditional integration techniques, based on mappings among data models, cannot be applied. The second moves the goal of integration from reusing data alone to reusing both data and the application software that uses these data. We also briefly discuss some requirements for an object-oriented integrated platform.
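A minimal sketch of the "uniform view" idea under assumed interfaces (the classes below are invented, not the paper's model): each heterogeneous source is wrapped in an object that answers the same access protocol, so applications program against one model regardless of the backend.

```python
# Invented illustration: one abstract interface over very different systems.
from abc import ABC, abstractmethod

class DataSource(ABC):
    @abstractmethod
    def get(self, key: str):
        ...

class RelationalSource(DataSource):
    def __init__(self, rows):            # rows: dict of key -> tuple
        self.rows = rows
    def get(self, key: str):
        return self.rows.get(key)

class FileSource(DataSource):
    def __init__(self, lines):           # lines: "key,value" strings
        self.index = dict(line.split(",", 1) for line in lines)
    def get(self, key: str):
        return self.index.get(key)

sources = [RelationalSource({"42": ("Ada", "Eng")}),
           FileSource(["7,Grace"])]
# The same call works against each wrapped system.
print([s.get("42") for s in sources])
```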

18 citations


Book ChapterDOI
28 Feb 1994
TL;DR: This work shows how process traceability models and process guidance models can be developed and related in a standard repository framework; the approach is demonstrated with a prototype requirements engineering environment developed in the ESPRIT project NATURE.
Abstract: Evolution is a fact of life in information systems. Not only do systems evolve but so do their development processes. IS environments must therefore be designed to accommodate and manage change. The management of process meta models in repositories is one important step; we show how process traceability models and process guidance models can be developed and related in a standard repository framework. In addition, the tool integration currently available along the presentation, data, and control perspectives has to be augmented for process integration. In our process-adaptable and interoperable tool concept, tool behavior is directly influenced by the process guidance model and automatically traced according to the traceability model. The approach is demonstrated with a prototype requirements engineering environment developed in the ESPRIT project NATURE.
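A hedged sketch of the process-integration idea (all names are invented, not the NATURE prototype's API): tool actions are constrained by a guidance model and automatically recorded against a traceability model.

```python
# Invented illustration: a guidance model limits which process steps a tool
# may perform, and every performed step is traced automatically.
allowed_steps = {"elicit", "specify", "validate"}   # toy guidance model
trace = []                                          # toy traceability store

def run_tool_action(step: str, payload: str) -> str:
    if step not in allowed_steps:
        raise ValueError(f"step '{step}' not permitted by guidance model")
    trace.append({"step": step, "payload": payload})  # automatic tracing
    return f"executed {step}"

run_tool_action("elicit", "interview notes")
run_tool_action("specify", "requirement R1")
print(trace)
```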

13 citations



Journal ArticleDOI
TL;DR: This work presents a business-process-oriented strategy for data integration that allows the order and degree of integration to be determined, and reduces complexity by schema clustering during the pre-integration phase.
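One plausible reading of schema clustering during pre-integration, sketched with invented details (this is not the paper's algorithm): group the most similar schemas first, so integration proceeds cluster by cluster rather than over all pairs at once.

```python
# Invented illustration: rank schema pairs by attribute overlap (Jaccard).
schemas = {
    "orders_a":  {"order_id", "customer", "total"},
    "orders_b":  {"order_id", "customer", "amount"},
    "inventory": {"sku", "quantity", "warehouse"},
}

def jaccard(s: set, t: set) -> float:
    return len(s & t) / len(s | t)

names = list(schemas)
pairs = [(jaccard(schemas[a], schemas[b]), a, b)
         for i, a in enumerate(names) for b in names[i + 1:]]
# The most similar pair seeds the first cluster to integrate.
print(max(pairs))  # orders_a and orders_b cluster first
```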

6 citations



01 Oct 1994
TL;DR: This report describes the activities an organization must undertake to integrate CASE tools in order to ensure the interoperation of message-passing integration products, and includes a set of lessons learned from the experiments carried out.
Abstract: In an on-going set of commercial off-the-shelf (COTS) tool integration experiments being conducted by the CASE Environments Project, we have integrated a set of CASE tools using a combination of data integration mechanisms (the PCTE Object Management System (OMS) and the UNIX file system) and control integration mechanisms (the Broadcast Message Server (BMS) of HP SoftBench). One of the key issues addressed in our work is the extent to which the integration of CASE tools can be independent of particular integration framework technology products. This report describes a task to examine interoperability aspects of the control integration component of the integration framework. The major conclusion from our work is that it is possible to integrate CASE tools using a message-passing approach that is independent of the integration framework product used. This report describes the activities an organization must undertake to integrate CASE tools in order to ensure the interoperation of message-passing integration products. The report also includes a set of lessons learned from the experiments we carried out.
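A toy sketch of framework-independent message-passing integration (not the actual HP SoftBench BMS API): tools publish and subscribe to named messages through a broker interface, so the broker implementation can be swapped without changing the tools.

```python
# Invented illustration: tools depend only on this broker interface, never on
# each other or on a particular integration framework product.
from collections import defaultdict

class MessageBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, message_type: str, handler) -> None:
        self.subscribers[message_type].append(handler)

    def broadcast(self, message_type: str, data) -> None:
        for handler in self.subscribers[message_type]:
            handler(data)

broker = MessageBroker()
# An editor tool announces file saves; a compiler tool reacts to them.
broker.subscribe("FILE_SAVED", lambda path: print(f"compiling {path}"))
broker.broadcast("FILE_SAVED", "main.c")
```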