
Showing papers on "Tuple" published in 1992


Journal ArticleDOI
TL;DR: The different kinds of joins and the various implementation techniques are surveyed and they are classified based on how they partition tuples from different relations.
Abstract: The join operation is one of the fundamental relational database query operations. It facilitates the retrieval of information from two different relations based on a Cartesian product of the two relations. The join is one of the most difficult operations to implement efficiently, as no predefined links between relations are required to exist (as they are with network and hierarchical systems). The join is the only relational algebra operation that allows the combining of related tuples from relations on different attribute schemes. Since it is executed frequently and is expensive, much research effort has been applied to the optimization of join processing. In this paper, the different kinds of joins and the various implementation techniques are surveyed. These different methods are classified based on how they partition tuples from different relations. Some require that all tuples from one relation be compared to all tuples from the other; other algorithms compare only some tuples from each. In addition, some techniques perform an explicit partitioning, whereas others are implicit.
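A minimal Python sketch (illustrative only; the relations and attribute names are invented) contrasting the survey's two classes: a nested-loop join, which compares every tuple of one relation with every tuple of the other, and a hash join, which explicitly partitions tuples on the join attribute first.

    # Sketch, not code from the paper: two join strategies over lists of dicts.

    def nested_loop_join(r, s, attr):
        # Every tuple of r is compared with every tuple of s.
        return [{**t, **u} for t in r for u in s if t[attr] == u[attr]]

    def hash_join(r, s, attr):
        # Explicit partitioning: s is hashed on the join attribute, so each
        # r-tuple is compared only against the matching partition.
        buckets = {}
        for u in s:
            buckets.setdefault(u[attr], []).append(u)
        return [{**t, **u} for t in r for u in buckets.get(t[attr], [])]

    r = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
    s = [{"id": 1, "dept": "x"}]
    assert nested_loop_join(r, s, "id") == hash_join(r, s, "id")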

489 citations


Patent
30 Apr 1992
TL;DR: In this paper, the semantics of the outer join operator are extended to permit the application of different predicates to the join tuples and the anti-join tuples, such that the predicate applied to the anti-join tuples is evaluated assuming a count value of zero.
Abstract: The semantics of the outer join operator are extended to permit the application of different predicates to the join tuples and the anti-join tuples. For un-nesting of nested query blocks, the anti-join tuples, for example, are associated with a count value of zero instead of a count value of null. An inner query block is un-nested from an outer query block by converting the inner query to a first un-nested query generating a temporary relation and converting the outer query block to a second un-nested query receiving the precomputed temporary relation. When the nested inner query has an equi-join predicate joining a relation of the inner query to an outer query and a count aggregate, the query blocks are un-nested by removing the equi-join predicate from the inner query and placing a corresponding conjunctive (left) outer join predicate term in the predicate of the outer query, performing the count aggregate for each distinct value of the joining attribute of the relation of the inner query, and in the outer query applying different predicates to the join and anti-join tuples, such that the predicate applied to the anti-join tuples is evaluated assuming a count value of zero.
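A small Python simulation of the idea (illustrative only; the tables and the threshold 5 are invented, and this is not the patent's code): a nested COUNT subquery is un-nested into a left-outer-join shape, and anti-join tuples receive count 0 rather than null, so the outer predicate still sees them.

    # Sketch of un-nesting, roughly:
    #   SELECT * FROM outer o WHERE 5 > (SELECT COUNT(*) FROM inner i WHERE i.k = o.k)

    from collections import Counter

    outer = [{"k": 1}, {"k": 2}, {"k": 3}]
    inner = [{"k": 1}, {"k": 1}, {"k": 3}]

    counts = Counter(t["k"] for t in inner)   # COUNT(*) per join value

    result = []
    for o in outer:
        if o["k"] in counts:                  # join tuple: real count
            c = counts[o["k"]]
        else:                                 # anti-join tuple: count zero
            c = 0                             # (a null count here would
                                              # wrongly drop the tuple)
        if 5 > c:                             # same predicate for both kinds
            result.append({**o, "cnt": c})

    print(result)   # k=2 survives with cnt 0, as the nested form requires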

315 citations


Proceedings Article
23 Aug 1992
TL;DR: A modular declarative query language/programming language that supports general Horn clauses with complex terms, set-grouping, aggregation, negation, and relations with tuples that contain (universally quantified) variables.
Abstract: CORAL is a modular declarative query language/programming language that supports general Horn clauses with complex terms, set-grouping, aggregation, negation, and relations with tuples that contain (universally quantified) variables. Support for persistent relations is provided by using the EXODUS storage manager. A unique feature of CORAL is that it provides a wide range of evaluation strategies and allows users to optionally tailor execution of a program through high-level annotations. A CORAL program is organized as a collection of modules, and this structure is used as the basis for expressing control choices. CORAL has an interface to C++, and uses the class structure of C++ to provide extensibility. Finally, CORAL supports a command sublanguage, in which statements are evaluated in a user-specified order. The statements can be queries, updates, production-system style rules, or any command that can be typed in at the CORAL prompt.

181 citations


Proceedings ArticleDOI
S.K. Lee
03 Feb 1992
TL;DR: A novel approach for representing imprecise and uncertain data and evaluating queries in the framework of an extended relational database model based on the Dempster-Shafer theory of evidence is proposed.
Abstract: A novel approach for representing imprecise and uncertain data and evaluating queries in the framework of an extended relational database model based on the Dempster-Shafer theory of evidence is proposed. Because of the ability to combine evidence from different sources, the semantics of the update operation of imprecise or uncertain data is reconsidered. By including an undefined value in a domain, three different cases of a null value are presented: unknown, inapplicable, and unknown or inapplicable. In this model, two levels of uncertainty in the database are supported: one is for the attribute value level and the other is for the tuple level.

89 citations


Proceedings ArticleDOI
01 Jun 1992
TL;DR: This paper compares the performance of the Multi-Attribute GrId deClustering (MAGIC) strategy and Bubba's Extended Range Declustering (BERD) strategy with one another and with the range partitioning strategy; the results indicate that MAGIC outperforms both range and BERD in all experiments conducted in this study.
Abstract: During the past decade, parallel database systems have gained increased popularity due to their high performance, scalability and availability characteristics. With the predicted future database sizes and the complexity of queries, the scalability of these systems to hundreds and thousands of processors is essential for satisfying the projected demand. Several studies have repeatedly demonstrated that both the performance and scalability of a parallel database system are contingent on the physical layout of data across the processors of the system. If the data is not declustered properly, the execution of an operator might waste resources, reducing the overall processing capability of the system. With earlier, single-attribute declustering strategies, such as those found in the Tandem, Teradata, Gamma, and Bubba parallel database systems, a selection query including a range predicate on any attribute other than the partitioning attribute must be sent to all processors containing tuples of the relation. By directing a query with minimal resource requirements to processors that contain no relevant tuples, the system wastes CPU cycles, communication bandwidth, and I/O bandwidth, reducing its overall processing capability. As a solution, several multi-attribute declustering strategies have been proposed. However, the performance of these declustering techniques had not previously been compared to one another, nor with a single-attribute partitioning strategy. This paper compares the performance of the Multi-Attribute GrId deClustering (MAGIC) strategy and Bubba's Extended Range Declustering (BERD) strategy with one another and with the range partitioning strategy. Our results indicate that MAGIC outperforms both range and BERD in all experiments conducted in this study.
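A toy Python sketch of the grid idea (the attribute ranges, processor count, and cell-to-processor mapping below are all invented, not MAGIC's actual assignment): partition boundaries on two attributes form a grid of cells, each assigned to a processor, so a range predicate on either attribute touches only some rows or columns of the grid rather than every processor.

    import bisect

    age_bounds = [20, 40, 60]        # splits attribute 1 into 4 ranges
    sal_bounds = [30_000, 60_000]    # splits attribute 2 into 3 ranges
    n_procs = 6

    def processor(t):
        i = bisect.bisect(age_bounds, t["age"])
        j = bisect.bisect(sal_bounds, t["salary"])
        return (i * (len(sal_bounds) + 1) + j) % n_procs   # cell -> processor

    def procs_for_age_range(lo, hi):
        # Only cells whose age-range intersects [lo, hi] are relevant.
        rows = range(bisect.bisect(age_bounds, lo), bisect.bisect(age_bounds, hi) + 1)
        return {(i * (len(sal_bounds) + 1) + j) % n_procs
                for i in rows for j in range(len(sal_bounds) + 1)}

    print(processor({"age": 35, "salary": 50_000}))
    print(procs_for_age_range(25, 35))   # a strict subset of the 6 processors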

83 citations


Journal ArticleDOI
TL;DR: An efficient and interactive two-stage heuristic for the generation of block layouts is presented; it generates a hexagonal, maximum-weight planar adjacency subgraph that incorporates relationships with the outside of the layout in a consistent manner.

82 citations


Proceedings ArticleDOI
01 Jul 1992
TL;DR: Extensions to the semantics, restrictions on the input, and other supplementary requirements proposed in earlier studies appear to be unnecessary for the purpose of attaching a meaning to a program that involves recursion through aggregation.
Abstract: Common aggregation predicates have natural definitions in logic, either as first order sentences (min, max, etc.), or with elementary induction over a data structure that represents the relation (sum, count, etc.). The well-founded semantics for logic programs provides an interpretation of such definitions. The interpretation of first-order aggregates seems to be quite natural and intuitively satisfying, even in the presence of recursion through aggregation. Care is needed to get useful results on inductive aggregates, however. A basic building block is the “subset” predicate, which states that a data structure represents a subset of an IDB predicate, and which is definable in the well-founded semantics. The analogous “superset” is also definable, and their combination yields a “generic” form of findall. Surprisingly, findall must be used negatively to obtain useful approximations when the exact relation is not yet known. Extensions to the semantics, restrictions on the input, and other supplementary requirements proposed in earlier studies appear to be unnecessary for the purpose of attaching a meaning to a program that involves recursion through aggregation. For example, any reasonable definition of “shortest paths” tolerates negative weight edges, correctly computes shortest paths that exist, and leaves tuples undefined where negative-weight cycles cause the shortest path not to exist. Other examples exhibit similarly robust behavior, when defined carefully. Connections with the generic model of computation are discussed briefly.
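The shortest-paths behavior can be mirrored procedurally. The sketch below is standard Bellman-Ford in Python, not the paper's logic program: it tolerates negative edges, computes shortest paths that exist, and marks distances reachable from a negative cycle as UNDEFINED, echoing the three-valued well-founded outcome.

    import math

    def shortest_paths(n, edges, src):
        dist = [math.inf] * n
        dist[src] = 0
        for _ in range(n - 1):                    # usual relaxation passes
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        undefined = set()
        for _ in range(n):                        # anything still improvable
            for u, v, w in edges:                 # sits on/after a neg. cycle
                if dist[u] + w < dist[v] or u in undefined:
                    undefined.add(v)
                    dist[v] = -math.inf
        return [("UNDEFINED" if i in undefined else d) for i, d in enumerate(dist)]

    edges = [(0, 1, 4), (1, 2, -2), (2, 1, -2), (1, 3, 1)]
    print(shortest_paths(4, edges, 0))  # node 0 defined; 1, 2, 3 undefined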

77 citations


Proceedings Article
23 Aug 1992
TL;DR: This paper introduces a belief-based semantics for multilevel secure databases that supports the description of semantic multilevel secure entities, and argues for the generality of this semantics.
Abstract: Previous proposals for a multilevel secure relational model have utilized syntactic integrity properties to control problems such as polyinstantiation, pervasive ambiguity, and proliferation of tuples due to updates. Although successive versions of these models have shown steady improvement, most thorny problems have been mitigated but not resolved. We believe that the major roadblock to progress has been that no effort to date has shown what a multilevel secure database means semantically; instead the focus has been on making syntactic adjustments to avoid problems. In this paper, we introduce a belief-based semantics for multilevel secure databases that supports the description of semantic multilevel secure entities, and argue for the generality of this semantics. We also present our syntax for multilevel secure databases, and show its relationship to the semantics. Our syntax is free of most problems of previous models, and is also simpler without sacrificing security or expressiveness.

70 citations


Journal ArticleDOI
TL;DR: This work presents several alternative extensions and decompositions of access support relations for a given path expression, the best of which has to be determined according to the application-specific database usage profile.

61 citations


Patent
Arun N. Swami, Honesty C. Young
27 Oct 1992
TL;DR: In this article, a system and method for distributed relational databases is presented for parallel sorting of a relation, where the relation is a set of tuples to be sorted on multiple sort sites; the method completely decouples the return phase from the sort phase in order to eliminate the merge phase.
Abstract: A system and method are provided for distributed relational databases for parallel sorting of a relation, wherein the relation is a set of tuples to be sorted on multiple sort sites, which completely decouples the return phase from the sort phase in order to eliminate the merge phase. The method involves selecting one coordinator site from any of the available logical sites, then generating and sorting a local sample on each of the available storage sites before sending the local random sample from each storage site to the designated coordinator site, wherein the local random samples are merged to provide a single global sample. The coordinator site determines the global interval key values based on the global sample. The interval key values are determined such that each interval fits in a single sort site's main memory, wherein the tuples between two interval key values define the interval. The interval key values are sent to the various storage sites, wherein each storage site scans its portion of the relation in order to determine for each tuple the assigned interval and its corresponding sort site before sending each tuple to the assigned sort site. At each sort site the tuples are stored in temporary files, using a single temporary file for each interval; thereafter, for each interval on each sort site, the steps of reading an interval, performing an in-memory sort of the interval read, and sending the tuples of the sorted interval to the sink site are repeated.
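A single-process Python simulation of the scheme (sizes, sample counts, and data are invented; real sites would communicate over a network): local samples are merged into a global sample, interval key values are chosen from it, tuples are routed to the sort site owning their interval, and per-interval sorts concatenate into a total order with no merge phase.

    import bisect, random

    storage_sites = [[random.randrange(10_000) for _ in range(1_000)]
                     for _ in range(4)]
    n_sort_sites = 4

    # Coordinator: merge local samples, pick n_sort_sites - 1 interval keys.
    global_sample = sorted(x for site in storage_sites
                           for x in random.sample(site, 32))
    step = len(global_sample) // n_sort_sites
    splitters = [global_sample[step * i] for i in range(1, n_sort_sites)]

    # Storage sites: route each tuple to its interval's sort site.
    sort_sites = [[] for _ in range(n_sort_sites)]
    for site in storage_sites:
        for x in site:
            sort_sites[bisect.bisect(splitters, x)].append(x)

    # Sort sites: in-memory sort per interval; concatenation is already
    # globally sorted, so no merge phase is needed.
    ordered = [x for interval in sort_sites for x in sorted(interval)]
    assert ordered == sorted(x for site in storage_sites for x in site)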

44 citations



Proceedings ArticleDOI
02 Feb 1992
TL;DR: The authors develop a full set of extended relational operators for manipulating relations containing probabilistic partial values, so that the uncertain answer tuples of a query are associated with degrees of uncertainty, providing a comparison among maybe tuples and a better understanding of the query results.
Abstract: In heterogeneous database systems, partial values can be used to resolve interoperability problems, including domain mismatch, inconsistent data, and missing data. Performing operations on partial values may produce maybe tuples in the query result, which cannot be compared. Thus, users have no way to distinguish which maybe tuple is the most possible answer. The concept of partial values is generalized to probabilistic partial values. The authors develop a full set of extended relational operators for manipulating relations containing probabilistic partial values. With this approach, the uncertain answer tuples of a query are associated with degrees of uncertainty. That provides users a comparison among maybe tuples and a better understanding of the query results. In addition, extended selection and join are generalized to alpha-selection and alpha-join, respectively, which can be used to filter out maybe tuples with low possibilities, that is, those with possibilities smaller than alpha.
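A minimal Python sketch of alpha-selection (an illustrative encoding, not the authors'): a probabilistic partial value is a dict of candidate values and probabilities, and a tuple survives only if its probability of satisfying the predicate is at least alpha.

    def alpha_select(relation, attr, pred, alpha):
        out = []
        for t in relation:
            p = sum(prob for value, prob in t[attr].items() if pred(value))
            if p >= alpha:
                out.append((t, p))        # answer tuple with its degree
        return out

    emp = [
        {"name": "lee",  "dept": {"cs": 0.8, "ee": 0.2}},
        {"name": "chen", "dept": {"cs": 0.1, "me": 0.9}},
    ]
    print(alpha_select(emp, "dept", lambda d: d == "cs", alpha=0.5))
    # only lee survives; chen's possibility (0.1) is below alpha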

Proceedings ArticleDOI
01 Aug 1992
TL;DR: This paper proposes to use the quite general framework of Cousot's abstract interpretation for the particular analysis of multi-dimensional array indexes, and describes, on a complete example, how to use it in order to optimize array storage.
Abstract: With the growing use of vector supercomputers, efficient and accurate data structure analyses are needed. What we propose in this paper is to use the quite general framework of Cousot's abstract interpretation for the particular analysis of multi-dimensional array indexes. Since such indexes are integer tuples, a relational integer analysis is required first. This analysis results from a combination of existing ones, based on intervals and congruences. Two orthogonal problems directly depend on the results of such an analysis: parallelization/vectorization via dependence analysis, and the data locality problem of array storage management. After introducing the analysis algorithm, this paper describes, on a complete example, how to use it in order to optimize array storage.
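A toy Python sketch of the combined domains (a much-reduced product, not the paper's analyzer): an abstract index carries an interval [lo, hi] and a congruence r mod m, so an analyzer can conclude, for example, that a[4*i + 2] with 0 <= i < 10 touches only cells congruent to 2 mod 4 between 2 and 38.

    from dataclasses import dataclass

    @dataclass
    class AbsIndex:
        lo: int
        hi: int
        m: int      # modulus (0 means "exactly r")
        r: int      # remainder

        def scale(self, k):          # abstract multiplication by k > 0
            return AbsIndex(self.lo * k, self.hi * k, self.m * k, self.r * k)

        def shift(self, c):          # abstract addition of constant c
            m = self.m
            r = (self.r + c) % m if m else self.r + c
            return AbsIndex(self.lo + c, self.hi + c, m, r)

    i = AbsIndex(0, 9, 1, 0)          # i in [0, 9], any integer (0 mod 1)
    idx = i.scale(4).shift(2)         # abstract value of 4*i + 2
    print(idx)                        # AbsIndex(lo=2, hi=38, m=4, r=2)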

Proceedings ArticleDOI
15 Jun 1992
TL;DR: The authors present an extended SQL system called ESQL which facilitates IS-A relation hierarchies in an RDBMS and uses constraints to resolve data redundancy and updating abnormality problems that exist in current OO languages and OODBMSs such as GemStone, PostGres, O2, Iris, and Orion.
Abstract: The IS-A relationship (the class-subclass relationship) is one of the most fundamental and important properties in an object-oriented (OO) language and OO database management system (OODBMS). Due to the popularity and domination of relational database management systems (RDBMSs) and the fundamental importance of inheritance, supporting IS-A relationships in an RDBMS becomes highly desirable, and is essential for adapting an RDBMS to more advanced applications. The authors present an extended SQL system called ESQL which facilitates IS-A relation hierarchies in an RDBMS. The proposed ESQL uses constraints to resolve data redundancy and updating abnormality problems that exist in current OO languages such as C++ and Smalltalk, and OODBMSs such as GemStone, PostGres, O2, Iris, and Orion. Features such as inheritance constraints, subrelation assertions, mappings, and automatic tuple placement into its most specific subrelation are distinct in ESQL. These features are missing in current RDBMSs, OODBMSs and OO languages.

Journal ArticleDOI
TL;DR: The tuples method is proposed, which combines assumption-based approaches with pattern recognition techniques, using neural networks to perform real-time diagnosis of process malfunctions, and it is suggested that the generalization characteristics of a neural network can be improved by using a fully-connected network.

Journal ArticleDOI
TL;DR: It is shown that the results of Demetrovics about the maximal number of minimal keys on unbounded domains do not hold for finite domains, and lower bounds for the size of minimum-sized Armstrong relations are derived.

Journal ArticleDOI
TL;DR: A method for evaluating the well-founded semantics of Datalog queries is proposed that allows answers to be constructed without computing the whole greatest unfounded set, only the Greatest Useful Unfounded Set (GUUS), from which the name GUUS method is derived.

Proceedings ArticleDOI
03 Feb 1992
TL;DR: The author presents a mechanism for representing exclusive disjunctive information in database tables using various tuple types and a range for the count of the number of tuples in the unknown relation denoted by a table.
Abstract: The author presents a mechanism for representing exclusive disjunctive information in database tables using various tuple types and a range for the count of the number of tuples in the unknown relation denoted by a table. The relational algebra operators are extended to take the new tables as operands. Query evaluation in the extended model is sound and complete for relational algebra expressions consisting of projection, difference, Cartesian product, or selection operators. Possible storage structures for storing the base tables and algorithms for inserting tuples into a table are described.

01 Jan 1992
TL;DR: This thesis shows that the most basic definition of a random sample, called simple random sampling, could lead to excessive load imbalance in a parallel database system, and overcomes this problem by showing that stratified random sampling guarantees perfect load balancing without sacrificing the quality of the estimate.
Abstract: In this thesis, we apply probabilistic techniques to analyze problems related to query optimization and query processing in database management systems. We consider three such problem areas: sampling based query size estimation, sampling based percentile estimation, and analytically deriving sizes of answers to recursive datalog queries. Sampling based query size estimation deals with the problem of estimating the number of tuples in the answers to relational queries. We compare the theoretical performance of five sampling methods and prove that the accuracy of these schemes as a function of the number of I/O's forms a partial order. We show that the most basic definition of a random sample, called simple random sampling, could lead to excessive load imbalance in a parallel database system. We overcome this problem by showing that stratified random sampling guarantees perfect load balancing without sacrificing the quality of the estimate. Sampling based percentile estimation is useful to partition the work in a parallel system. This partitioning can in turn be used to extract parallelism. Parallel sorting, non-equi join computation and parallel joins in the presence of data skew are some examples of where estimates of percentiles can be used to partition the work. In each case, the efficiency of the resulting algorithm depends on the quality of the estimates of the percentiles. We derive a new bound on the probability that the estimates differ from the actual value by more than a certain amount for a given number of samples. Finally, we derive analytically the sizes of the fixpoints of recursively defined relations in datalog programs and the rewritten programs generated by the Magic Sets and Factoring rewriting algorithms in response to selection queries. Our results show that the recursively defined relations are within a small constant factor of their worst-case size bounds, and that the Magic Sets rewriting algorithm on the average produces relations within a small constant factor of the corresponding bounds for the recursion without rewriting. The expected size of relations produced by the Factoring algorithm, when it applies, is significantly smaller than the expected size of relations produced by Magic Sets.
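A short Python sketch of the load-balance point (illustrative numbers, not the thesis's proofs): under simple random sampling the number of sampled tuples per processor is itself random and can be skewed, while stratified sampling fixes an equal quota per processor.

    import random
    from collections import Counter

    n_procs, tuples_per_proc, n_samples = 8, 10_000, 800
    population = [(p, i) for p in range(n_procs) for i in range(tuples_per_proc)]

    # Simple random sampling: per-processor sample counts vary by chance.
    simple = Counter(p for p, _ in random.sample(population, n_samples))
    print("simple    :", sorted(simple.values()))       # typically uneven

    # Stratified sampling: the same quota is drawn at every processor.
    stratified = {p: n_samples // n_procs for p in range(n_procs)}
    print("stratified:", sorted(stratified.values()))   # all exactly 100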

Book ChapterDOI
06 Jul 1992
TL;DR: The explicit integrity constraints of the database intension are used in two essential ways to elicit the semantics of those views of the DB addressed in the user's query; firstly to ascertain the relevance of a user query at a particular site and thus to advise the user in case of any constraint violations, suggesting a modification for the query in the process.
Abstract: A significant number of database (DB) users today lack complete knowledge of the semantics of the DB(s) they desire to query. This is a very common phenomenon in the Multidatabase System (MBS) type of distributed database systems (DDBSs). In MBSs, a number of DBs are loosely linked without creating a global schema in order to enable occasional sharing of their information contents. As some cost is normally associated with querying any particular site in this system, a lack of complete knowledge of the DB semantics can often result in fruitless but costly searches. The aim of the work described in this paper is to provide a tool which can assist Multidatabase users in gaining an understanding of the semantics of DBs accessible to them. Specifically, we use the explicit integrity constraints (ICs) of the database intension in two essential ways to elicit the semantics of those views of the DB addressed in the user's query; firstly to ascertain the relevance of a user query at a particular site and thus to advise the user in case of any constraint violations, suggesting a modification for the query in the process, and secondly to provide abstract or intensional answers to a user request. The first goal aims to provide a system free of the ambiguities associated with an empty response to some retrieval request, while the second goal aims to improve the user's understanding of the semantics associated with the data values generated as answers by providing, along with the tuples of the answer, the general rules that they obey.

Journal ArticleDOI
TL;DR: The proposed algorithms and the associated data structures are conceptually simple and simple to implement, and in a multiprocessor environment the time complexities for insertion and deletion under the authors' schemes are reduced.
Abstract: A data structure is used to store the materialized generalized transitive closure so that the evaluation of generalized transitive closure queries, deletions, and insertions of tuples can be performed efficiently in centralized and parallel environments. Some techniques to manage the materialized transitive closure are presented and generalized to more general recursions. The proposed algorithms and the associated data structures are conceptually simple and simple to implement. In a multiprocessor environment, the time complexities for insertion and deletion under the authors' schemes are reduced. Only two rounds of communication are needed.
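For insertion, the textbook incremental rule (shown below as a Python sketch; the authors' actual data structures may differ) avoids recomputing the closure from scratch: inserting edge (u, v) adds the pair (x, y) for every x that reaches u and every y reachable from v.

    def insert_edge(tc, u, v):
        # tc is the materialized transitive closure, a set of (x, y) pairs.
        into_u = {x for (x, y) in tc if y == u} | {u}
        from_v = {y for (x, y) in tc if x == v} | {v}
        tc |= {(x, y) for x in into_u for y in from_v}

    tc = set()
    for edge in [(1, 2), (2, 3)]:
        insert_edge(tc, *edge)
    print(sorted(tc))   # [(1, 2), (1, 3), (2, 3)]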

Proceedings ArticleDOI
03 Feb 1992
TL;DR: The authors discuss the keying methods that are proposed in the literature and introduce the external keying method which aims to restore the structure of tuples that is lost by unnesting a relation valued attribute in a nested relation.
Abstract: The authors discuss the keying methods that are proposed in the literature and introduce the external keying method, which aims to restore the structure of tuples that is lost by unnesting a relation valued attribute in a nested relation. As opposed to the previous keying methods, it does not store the keying information within the relation instance, where it can be manipulated by relational algebra. Instead, the keying information is kept separately while it is generated, utilized, and manipulated.

Journal ArticleDOI
TL;DR: A general framework for the study of the conflict resolution problem is proposed, and a variety of resolution criteria are suggested, which collectively subsume all previously known solutions.
Abstract: When a set of rules generates (conflicting) values for a virtual attribute of some tuple, the system must resolve the inconsistency and decide on a unique value that is assigned to that attribute. In most current systems, the conflict is resolved based on criteria that choose one of the rules in the conflicting set and use the value that it generated. There are several applications, however, where inconsistencies of the above form arise, whose semantics demand a different form of resolution. We propose a general framework for the study of the conflict resolution problem, and suggest a variety of resolution criteria, which collectively subsume all previously known solutions. With several new criteria being introduced, the semantics of several applications are captured more accurately than in the past. We discuss how conflict resolution criteria can be specified at the schema or the rule-module level. Finally, we suggest some implementation techniques based on rule indexing, which allow conflicts to be resolved efficiently at compile time, so that at run time only a single rule is processed.
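One plausible reading of the framework as a Python sketch (the criteria names, rule set, and data are invented, not the paper's system): each virtual attribute names a resolution criterion mapping the set of (rule, value) candidates to one value, with "pick the highest-priority rule" as just one criterion among several.

    RESOLVERS = {
        "priority": lambda cands: max(cands, key=lambda c: c["priority"])["value"],
        "max":      lambda cands: max(c["value"] for c in cands),
        "average":  lambda cands: sum(c["value"] for c in cands) / len(cands),
    }

    def resolve(criterion, candidates):
        # Collapse conflicting rule-generated values to a single one.
        return RESOLVERS[criterion](candidates)

    cands = [
        {"rule": "r1", "priority": 2, "value": 70},
        {"rule": "r2", "priority": 5, "value": 40},
    ]
    print(resolve("priority", cands))   # 40: r2 wins on priority
    print(resolve("max", cands))        # 70: the semantics demand a maximum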

01 Dec 1992
TL;DR: This paper proposes a basic shift in the nature of match algorithms: from tuple-oriented to collection-oriented, which shows great promise for efficiently matching expressive productions against large amounts of data.
Abstract: Match algorithms that are capable of handling large amounts of data without giving up expressiveness are a key requirement for successful integration of relational database systems and powerful rule-based systems. Algorithms that have been used for database rule systems have usually been unable to support large and complex rule sets, while the algorithms that have been used for rule-based expert systems do not scale well with increasing amounts of data. Furthermore, these algorithms do not provide support for collection (or set) oriented production languages. This paper proposes a basic shift in the nature of match algorithms: from tuple-oriented to collection-oriented. A collection-oriented match algorithm matches each condition in a production with a collection of tuples and generates collection-oriented instantiations, i.e., instantiations that have a collection of tuples corresponding to each condition in the production. This approach shows great promise for efficiently matching expressive productions against large amounts of data. In addition, it provides direct support for collection-oriented production languages.
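A Python sketch of the contrast only (far simpler than a real production matcher; the relations and production are invented): tuple-oriented matching yields one instantiation per combination of tuples, while collection-oriented matching yields one instantiation whose condition bindings are whole collections.

    orders = [{"cust": "a", "qty": 5}, {"cust": "a", "qty": 7}]
    alerts = [{"cust": "a", "level": 1}]

    def tuple_oriented():
        # One instantiation per (order, alert) combination.
        return [(o, al) for o in orders for al in alerts
                if o["cust"] == al["cust"]]

    def collection_oriented():
        # One instantiation binding each condition to a collection.
        matched = [o for o in orders
                   if any(o["cust"] == al["cust"] for al in alerts)]
        return [{"orders": matched, "alerts": alerts}]

    print(len(tuple_oriented()))       # 2 tuple-oriented instantiations
    print(len(collection_oriented()))  # 1 collection-oriented instantiation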

Book ChapterDOI
14 Oct 1992
TL;DR: It is proved that first order logic with comparison of the cardinalities of relations has a 0/1 law and it is established that the arity of the tuples which are counted induces a strict expressivity hierarchy.
Abstract: We investigate the expressive power of query languages with counting ability. We define a LOGSPACE extension of first order logic and a PTIME extension of fixpoint logic with counters. We develop specific techniques, such as games, for dealing with languages with counters and therefore integers. We prove in particular that the arity of the tuples which are counted induces a strict expressivity hierarchy. We also establish results about the asymptotic probabilities of sentences with counters. In particular we show that first order logic with comparison of the cardinalities of relations has a 0/1 law.

Journal ArticleDOI
TL;DR: The problem of updating databases through interfaces based on the weak instance model is studied, thus extending previous proposals that considered them only from the query point of view.
Abstract: The problem of updating databases through interfaces based on the weak instance model is studied, thus extending previous proposals that considered them only from the query point of view. Insertions and deletions of tuples are considered. As a preliminary tool, a lattice on states is defined, based on the information content of the various states. Potential results of an insertion are states that contain at least the information in the original state and that in the new tuple. Sometimes there is no potential result, and in the other cases there may be many of them. We argue that the insertion is deterministic if the state that contains the information common to all the potential results (the greatest lower bound, in the lattice framework) is a potential result itself. Effective characterizations for the various cases exist. A symmetric approach is followed for deletions, with fewer cases, since there are always potential results; determinism is characterized as a consequence.

Patent
18 Feb 1992
TL;DR: In this article, a compiler framework uses a generic "shell" or control and sequencing mechanism, and a generic back end (where the code generator is target-specific) where the generic shell includes the functions of optimization, register and memory allocation, and code generation.
Abstract: A compiler framework uses a generic 'shell' or control and sequencing mechanism, and a generic back end (where the code generator is target-specific). The generic back end includes the functions of optimization, register and memory allocation, and code generation. The shell may be executed on various host computers, and the code generation function of the back end may be targeted for any of a number of computer architectures. A front end is tailored for each different source language, such as Cobol, Fortran, Pascal, C, C++, Ada, etc. The front end scans and parses the source code modules, and generates from them an intermediate language ('IL') representation of the programs expressed in the source code. This IL is constructed to represent any of the source code languages in a universal manner, so the interface between the front end and back end is of a standard format, and need not be rewritten for each language-specific front end. The IL representation generated by the front end is based upon a tuple as the elemental unit, where each tuple represents a single operation to be performed, such as a load, a store, an add, a label, a branch, etc. A data structure is created by the front end for each tuple, with fields for various necessary information. One feature of the invention is a mechanism for representing effects and dependencies in the interface between front end and back end; a tuple has an effect if it writes to memory, and has a dependency if it reads from a location which some other node may write to. A mechanism independent of source language is provided for describing the effects of program execution. Another feature is the use in the optimization part of the compiler of a method for analyzing induction variables, where the improvement is to use the side effects sets used to construct IDEF sets. Another feature is a mechanism for 'folding constants' (referred to as K-folding or a KFOLD routine), included as one of the optimizations. A further feature is the type definition mechanism, referred to as the TD module, which provides mechanisms used by the front end and the compiler of the back end in constructing program type information to be incorporated in an object module for use by a linker or debugger. Another feature is a method for doing code generation using code templates in a multipass manner.
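A hypothetical Python sketch of the tuple idea (the field names and helper are invented for illustration, not the patent's actual structures): each IL tuple is a single operation with fields the back end can inspect, where effects record what the tuple may write and dependencies record what it reads that another tuple may write.

    from dataclasses import dataclass

    @dataclass
    class ILTuple:
        opcode: str                            # e.g. "load", "add", "store"
        operands: tuple = ()
        effects: frozenset = frozenset()       # locations possibly written
        dependencies: frozenset = frozenset()  # locations read

    t1 = ILTuple("load", ("x",), dependencies=frozenset({"x"}))
    t2 = ILTuple("add", (t1, 1))
    t3 = ILTuple("store", ("y", t2), effects=frozenset({"y"}))

    # A later tuple must follow an earlier one if it reads what that one writes.
    def must_order(before, after):
        return bool(before.effects & after.dependencies)

    print(must_order(t3, ILTuple("load", ("y",), dependencies=frozenset({"y"}))))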

Journal ArticleDOI
TL;DR: A practically useful algorithm is presented that solves the maintenance problem of all ctm database schemes within a "not too large" bound, and it is shown that non-ctm database schemes are not maintainable in less than time linear in the state size.
Abstract: The maintenance problem of a database scheme is the following decision problem: Given a consistent database state ρ and a new tuple u over some relation scheme of ρ, is the modified state ρ ∪ {u} still consistent? A database scheme is said to be constant-time-maintainable (ctm) if there exists an algorithm that solves its maintenance problem by making a fixed number of tuple retrievals. We present a practically useful algorithm, called the canonical maintenance algorithm, that solves the maintenance problem of all ctm database schemes within a "not too large" bound. A number of interesting properties are shown for ctm database schemes, among them that non-ctm database schemes are not maintainable in less than time linear in the state size. A test method is given when only cover embedded functional dependencies (fds) appear. When the given dependencies consist of fds and the join dependency (jd) ⋈ R of the database scheme, testing whether a database scheme is ctm is reduced to the case of cover embedded fds. When dependency-preserving database schemes with only equality-generating dependencies (egds) are considered, it is shown that every ctm database scheme has a set of dependencies that is equivalent to a set of embedded fds, and thus, our test method for the case of embedded fds can be applied. In particular, this includes the important case of lossless database schemes with only egds.

Journal ArticleDOI
TL;DR: This paper considers how the operation-oriented language can be implemented on the basis of the rule-oriented approach, and proposes a rule-based prototype implementation defined in Prolog, which can be utilized in deductive databases based on both the heterogeneous and homogeneous approaches.

01 Jan 1992
TL;DR: This dissertation presents an object-based framework for uniformly modeling and communicating spatial and functional information about constructed facilities, using the concept of data abstraction for dealing with the spatial and functional attributes and the various organizations of facility components.
Abstract: Constructed facilities, such as buildings, ships and off-shore platforms, are complex and contain many components which exhibit various behaviors to different disciplines, such as architecture and mechanical systems, throughout the facility lifecycle, e.g., design, analysis, fabrication, and operation. Due to the increasing number of users and developers of computer-aided tools in the domain of constructed facilities worldwide, integration of computer systems supporting these tools has become an important challenge. Although the existing engineering data models provide various means of representing, organizing and linking facility information in a computer-integrated environment, they often fail to recognize the fundamental and important differences between the spatial and functional attributes of facility components. Spatial information, i.e., mixed-dimensional geometric data and topologic relations, is modeled by specialized geometric representation schemes and often shared across disciplines, while functional information, such as material properties, is captured as attribute-value pairs in taxonomies of attribute classes that are discipline-specific. This dissertation presents an object-based framework for uniformly modeling and communicating spatial and functional information about constructed facilities. This framework uses the concept of data abstraction for dealing with the spatial and functional attributes and the various organizations of facility components. A referential scheme for representing and identifying the mixed-dimensional spatial extent of facility components is developed on top of a non-manifold geometric modeling paradigm. Based on this scheme, a set of algebraic operations is defined for creating and manipulating spatial configurations which represent various spatial arrangements of facility components. The topological relations and geometric attributes of facility components thus are not explicitly represented in the information model but computed by the underlying non-manifold modeler when requested. Furthermore, functional attribute classes are translated into tables in the underlying relational database and tuples of these tables are subsequently associated with the facility components. The spatial and functional information about a component is in turn linked via the data object representing that component in the information model. This information and various component relationships are defined and retrieved by external programs from the underlying information base through a high-level, object-based interface, comprised of the operations corresponding to the data abstractions defined in the information model. An interactive information management system has been developed as the prototype implementation of this framework using this object-based interface, encapsulating the underlying geometric modeler and database management system.