
Showing papers on "Tuple published in 1988"


Journal ArticleDOI
Guy M. Lohman1
01 Jun 1988
TL;DR: This work presents a constructive, “building blocks” approach to defining alternative plans, in which the rules defining alternatives are an extension of the productions of a grammar to resemble the definition of a function in mathematics.
Abstract: Extensible query optimization requires that the “repertoire” of alternative strategies for executing queries be represented as data, not embedded in the optimizer code. Recognizing that query optimizers are essentially expert systems, several researchers have suggested using strategy rules to transform query execution plans into alternative or better plans. Though extremely flexible, these systems can be very inefficient: at any step in the processing, many rules may be eligible for application, and complicated conditions must be tested to determine that eligibility during unification. We present a constructive, “building blocks” approach to defining alternative plans, in which the rules defining alternatives are an extension of the productions of a grammar to resemble the definition of a function in mathematics. The extensions permit each token of the grammar to be parametrized and each of its alternative definitions to have a complex condition. The terminals of the grammar are base-level database operations on tables that are interpreted at run-time. The non-terminals are defined declaratively by production rules that combine those operations into meaningful plans for execution. Each production produces a set of alternative plans, each having a vector of properties, including the estimated cost of producing that plan. Productions can require certain properties of their inputs, such as tuple order and location, and we describe a “glue” mechanism for augmenting plans to achieve the required properties. We give detailed examples to illustrate the power and robustness of our rules and to contrast them with related ideas.

181 citations
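
To make the grammar analogy concrete, here is a toy Python sketch of the idea (all names, operators, and costs are invented for illustration, not taken from the paper): parametrized non-terminals return sets of alternative plans, each plan carries a property vector, and a "glue" step augments plans that lack a required property such as tuple order.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Plan:
    ops: tuple             # base-level operations, interpreted at run time
    cost: float            # estimated-cost property
    order: Optional[str]   # tuple-order property

def table_scan(table):     # terminal: a base-level database operation
    return Plan((("SCAN", table),), cost=100.0, order=None)

def index_scan(table, column):   # terminal: produces tuples in column order
    return Plan((("ISCAN", table, column),), cost=120.0, order=column)

def sort_glue(plan, column):     # "glue": augment a plan to achieve a property
    if plan.order == column:
        return plan
    return Plan(plan.ops + (("SORT", column),), plan.cost + 50.0, column)

def access(table, required_order=None):   # non-terminal: a set of alternatives
    alts = [table_scan(table), index_scan(table, required_order or "key")]
    if required_order:                    # condition attached to the production
        alts = [sort_glue(p, required_order) for p in alts]
    return alts

best = min(access("EMP", required_order="dept"), key=lambda p: p.cost)
print(best.ops, best.cost, best.order)   # the cheapest plan with the right order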


Journal ArticleDOI
Joseph Y. Halpern1, Ronald Fagin1
TL;DR: A formal model that captures the subtle interaction between knowledge and action in distributed systems and extends the standard notion of a protocol by defining knowledge-based protocols, ones in which a process' actions may depend explicitly on its knowledge.
Abstract: We present a formal model that captures the subtle interaction between knowledge and action in distributed systems. We view a distributed system as a set of runs, where a run is a function from time to global states and a global state is a tuple consisting of an environment state and a local state for each process in the system. This model is a generalization of those used in many previous papers. Actions in this model are associated with functions from global states to global states. A protocol is a function from local states to actions. We extend the standard notion of a protocol by defining knowledge-based protocols, ones in which a process' actions may depend explicitly on its knowledge. Knowledge-based protocols provide a natural way of describing how actions should take place in a distributed system. Finally, we show how the notion of one protocol implementing another can be captured in our model.

156 citations
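
The runs model above can be rendered compactly; the following Python toy (all names are assumed, not the paper's notation) treats a run as a function from time to global states, a global state as a tuple of an environment state and per-process local states, and a protocol as a map from local states to actions on global states.

def deliver(msg, pid):                  # an action: a global-state transform
    def act(state):
        env, locals_ = state
        locals_ = list(locals_)
        locals_[pid] = locals_[pid] + (msg,)   # process pid's local state grows
        return (env, tuple(locals_))
    return act

def protocol(local_state):              # protocol: local state -> action
    last = local_state[-1] if local_state else "init"
    return deliver(last, 1)             # e.g. forward last message to process 1

def run(t):                             # a run: time -> global state
    state = ("env-ok", (("m0",), ()))   # (environment, one local state per process)
    for _ in range(t):                  # at each step, process 0 acts
        state = protocol(state[1][0])(state)
    return state

print(run(3))   # ('env-ok', (('m0',), ('m0', 'm0', 'm0')))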


Book ChapterDOI
14 Mar 1988
TL;DR: A logic-based language for manipulating complex objects constructed using set and tuple constructors is introduced and applications of the language to procedural data, semantic database models, heterogeneous databases integration, and datalog query evaluation are presented.
Abstract: A logic-based language for manipulating complex objects constructed using set and tuple constructors is introduced. A key feature of the language is the use of base and derived data functions. Under some stratification restrictions, the semantics of programs is given by a canonical minimal and causal model that can be computed using a finite sequence of fixpoints. Applications of the language to procedural data, semantic database models, heterogeneous databases integration, and datalog query evaluation are presented.

154 citations


Proceedings ArticleDOI
01 Mar 1988
TL;DR: This paper designs a sampling plan based on the cluster sampling method to improve the utilization of sampled data and to reduce the cost of sampling, and proposes consistent and unbiased estimators for arbitrary COUNT(E) type queries.
Abstract: Present database systems process all the data related to a query before giving out responses. As a result, the size of the data to be processed becomes excessive for real-time/time-constrained environments. A new methodology is needed to cut down systematically the time to process the data involved in processing the query. To this end, we propose to use data samples and construct an approximate synthetic response to a given query. In this paper, we consider only COUNT(E) type queries, where E is an arbitrary relational algebra expression. We make no assumptions about the distribution of attribute values and ordering of tuples in the input relations, and propose consistent and unbiased estimators for arbitrary COUNT(E) type queries. We design a sampling plan based on the cluster sampling method to improve the utilization of sampled data and to reduce the cost of sampling. We also evaluate the performance of the proposed estimators.

150 citations
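
A hedged sketch of the cluster-sampling idea for COUNT(E): pages of a relation act as clusters, every tuple on a sampled page is inspected, and per-page counts are scaled up. The uniform page-sampling design and all names below are illustrative assumptions, not the paper's exact plan.

import random

def count_estimate(pages, predicate, sample_size, seed=0):
    """Unbiased estimator: B * (mean per-page count over sampled pages)."""
    B = len(pages)
    rng = random.Random(seed)
    sampled = rng.sample(pages, sample_size)   # clusters, without replacement
    per_page = [sum(1 for t in page if predicate(t)) for page in sampled]
    return B * sum(per_page) / sample_size

# toy relation: 100 pages of 50 tuples each; estimate COUNT of even keys
pages = [[(p, i) for i in range(50)] for p in range(100)]
print(count_estimate(pages, lambda t: t[1] % 2 == 0, sample_size=10))
# 2500.0 (here exact, since every page holds 25 qualifying tuples)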


Journal ArticleDOI
TL;DR: An extension of the well-known language SQL is presented, along with some considerations about how such fuzzy queries can be processed to achieve acceptable performance.

143 citations
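
As an illustration of how a fuzzy SQL predicate might be evaluated (the membership function and the min connective below are standard fuzzy-set choices, not necessarily the paper's), each fuzzy condition yields a degree in [0, 1] and tuples whose degree passes a threshold qualify.

def young(age):                  # fuzzy predicate as a membership function
    return max(0.0, min(1.0, (35 - age) / 10))   # 1 below age 25, 0 above 35

def fuzzy_and(*degrees):         # standard min interpretation of AND
    return min(degrees)

employees = [("ann", 24, 900), ("bob", 31, 700), ("eve", 40, 800)]

# e.g. SELECT name FROM emp WHERE age IS young AND salary > 750 WITH 0.3
result = [(name, fuzzy_and(young(age), 1.0 if sal > 750 else 0.0))
          for name, age, sal in employees]
print([(n, d) for n, d in result if d >= 0.3])   # [('ann', 1.0)]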


Proceedings ArticleDOI
01 Mar 1988
TL;DR: A new definition of complex objects is introduced which provides a denotation for incomplete tuples as well as partially described sets, and the use of rules in defining queries over such objects is examined.
Abstract: A new definition of complex objects is introduced which provides a denotation for incomplete tuples as well as partially described sets. Set values are “sandwiched” between “complete” and “consistent” descriptions (representing the Smyth and Hoare powerdomains respectively), allowing the maximal values to be arbitrary subsets of maximal elements in the domain of the set. We also examine the use of rules in defining queries over such objects.

54 citations


Proceedings Article
29 Aug 1988
TL;DR: This dissertation discusses the Indiana University prototype for nested relational databases, ANDA (Architecture for Nested Database Applications), which provides a mechanism for storing and manipulating nested relations and complex objects; a graphical query language which allows direct manipulation of nested relational schemes has been designed to provide a user-friendly interface.
Abstract: Relational databases organize data in the First Normal Form, i.e., all values in a database are necessarily atomic (non-decomposable). Nested Relational Databases, on the other hand, are not constrained by this assumption; values of a nested relation can be either atomic values or nested relations themselves. This dissertation discusses the Indiana University prototype for nested relational databases, ANDA (Architecture for Nested Database Applications). ANDA provides a mechanism for storing and manipulating nested relations and complex objects. All stages of query processing in ANDA are discussed. A query can be expressed in a graphical query language which is first translated to an access language and then optimized. Finally, the access language interacts with the data structures of the system to evaluate the query. In ANDA, all tuples and objects are identified by a hierarchical, structured tuple-identifier (tuple-id). As tuple-ids have enough information to determine mutual relationships, extensive query processing is done by manipulating tuple-ids in main memory. Query evaluation in ANDA is done by repeating the value $\to$ tuple-id $\to$ value cycle. The first phase involves extracting the required tuple-ids from the value-based indexing structure, VALTREE. In the second phase these tuple-ids are manipulated in the main-memory based CACHE. In the third phase, the tuple-ids from the CACHE are materialized using the RECLIST, which provides fast access to a value for a given tuple-id. An access language is designed to specify access plans. The access language abstracts the operations of the VALTREE, the RECLIST, and the CACHE. Access plans for ANDA are specified in the access language. A cost model has been proposed to provide a mechanism to compare alternative access plans. Other strategies for optimizing access plans are also discussed. A graphical query language which allows direct manipulation of nested relational schemes has been designed to provide a user-friendly interface to ANDA.

44 citations
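
A toy rendering of the value $\to$ tuple-id $\to$ value cycle with hierarchical tuple-ids (the dictionaries below stand in for the VALTREE and RECLIST; the encoding is an assumption for illustration): hierarchical ids make containment recoverable by truncation.

VALTREE = {"db": [(1, 2, 1)], "os": [(1, 1, 1), (2, 1, 1)]}   # value -> tuple-ids
RECLIST = {(1,): "dept=CS", (1, 1): "course=os", (1, 1, 1): "term=fall",
           (1, 2): "course=db", (1, 2, 1): "term=spring",
           (2,): "dept=EE", (2, 1): "course=os", (2, 1, 1): "term=fall"}

def parent(tid):                 # hierarchical ids encode nesting
    return tid[:-1]

def lookup(value):               # phase 1: value -> tuple-ids (VALTREE)
    return VALTREE.get(value, [])

def materialize(tids):           # phase 3: tuple-ids -> values (RECLIST)
    return [RECLIST[t] for t in tids]

# find the departments offering "db": climb two levels in the id hierarchy
tids = [parent(parent(t)) for t in lookup("db")]    # phase 2: id manipulation (CACHE)
print(materialize(tids))                            # ['dept=CS']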


Proceedings Article
29 Aug 1988
TL;DR: The results show that caching wins when updates do not occur with a high frequency, and that separate caching is, in general, better than caching in tuples; and that when the composition of the objects in the procedural field is predictable and parameterizable, flattening is a good option.
Abstract: POSTGRES is an extension of the relational model. POSTGRES allows fields of a relation to have procedural (executable) objects. POSTQUEL is the query language supporting access to these fields, and in this paper we consider the optimizing process for such queries. The simplest algorithm for optimization assumes that the procedural objects are executed in full, whenever needed. As a refinement to this basic process, we propose an algorithm wherein cost savings are achieved by modifying the procedural queries before executing them. In another direction of refinement, we consider the caching of the materialized results. Two caching strategies (caching in tuples, and separate caching) are considered. The fifth algorithm is flattening, where a POSTQUEL query is modified into an equivalent flat query, and then optimized through a traditional optimizer. We study the relative performance of these algorithms under varying conditions and parameters. Our results show that caching wins when updates do not occur with a high frequency, and that separate caching is, in general, better than caching in tuples. We further show that when the composition of the objects in the procedural field is predictable and parameterizable, flattening is a good option. A number of recent proposals which enhance Codd’s [CODD70] model require the modification of the existing algorithms to optimize the new set of queries that were not possible before. In this paper we study the query optimization problem in one such extended relational model, namely POSTGRES. We present a number of algorithms for optimization of queries in such an environment, and do a performance study of each. The rest of the paper is organized as follows. In Section 2 we present the extensions in POSTGRES relevant to our study. We also discuss the previous work on optimization of queries on procedural objects. The optimizing paradigm and the details of the algorithms under consideration are then discussed in Section 3. In Section 4 we present the framework in which we compare the various algorithms. Section 5 presents the results of our study. Finally, the paper ends with conclusions on the viability of each algorithm.

36 citations
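
A minimal sketch contrasting the two caching strategies (the structures and names below are assumptions, not POSTGRES internals): caching in tuples stores the materialized result inside the tuple itself, separate caching keeps results in a shared table keyed by tuple and field, and updates invalidate both.

def run_query(q):                       # stand-in for executing a procedural field
    return f"result-of({q})"

tuples = {1: {"hobbies": "retrieve (H.all) ...", "_cache": None}}
separate_cache = {}                     # (tuple id, field) -> materialized result

def eval_in_tuple(tid, field):          # strategy 1: caching in tuples
    t = tuples[tid]
    if t["_cache"] is None:
        t["_cache"] = run_query(t[field])
    return t["_cache"]

def eval_separate(tid, field):          # strategy 2: separate caching
    key = (tid, field)
    if key not in separate_cache:
        separate_cache[key] = run_query(tuples[tid][field])
    return separate_cache[key]

def on_update(tid, field, new_query):   # updates invalidate cached results
    tuples[tid][field] = new_query
    tuples[tid]["_cache"] = None
    separate_cache.pop((tid, field), None)

print(eval_in_tuple(1, "hobbies"), eval_separate(1, "hobbies"))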


Book ChapterDOI
31 Aug 1988
TL;DR: It is shown, in particular, that augmenting Horn-clause logic with hypothetical addition increases its data-complexity from PTIME to PSPACE, and that the logic of hypothetical additions then expresses all database queries which are computable in PSPACE.
Abstract: We present an extension of Horn-clause logic which can hypothetically add and delete tuples from a database. Such logics have been discussed in the literature, but their complexities and expressibilities have remained an open question. This paper examines two such logics in the function-free, predicate case. It is shown, in particular, that augmenting Horn-clause logic with hypothetical addition increases its data-complexity from PTIME to PSPACE. When deletions are added as well, complexity increases again, to EXPTIME. To establish expressibility, we augment the logic with negation-by-failure and view it as a query language for relational databases. The logic of hypothetical additions then expresses all database queries which are computable in PSPACE. When deletions are included, the logic expresses all database queries computable in EXPTIME.

22 citations


Book ChapterDOI
01 Jun 1988
TL;DR: Networks of constraints are a simple knowledge representation model, useful for describing large classes of problems in picture recognition and scene analysis, in the representation of physical systems and in the specification of software systems.
Abstract: Networks of constraints are a simple knowledge representation model, useful for describing large classes of problems in picture recognition and scene analysis, in the representation of physical systems and in the specification of software systems. Nodes in the network represent variables to be assigned, while arcs are constraints to be satisfied by the adjacent variables; constraints are simply seen as relations specifying the acceptable tuples of values of the variables. Solutions of the network are variable assignments which simultaneously satisfy all constraints.

20 citations
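
A minimal constraint network in this sense (toy domains and constraints, chosen for illustration): constraints are given extensionally as relations, i.e. sets of acceptable value pairs on arcs, and a solution assigns every variable so that all constraints hold simultaneously.

from itertools import product

variables = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
constraints = {                      # arc -> relation of acceptable value pairs
    ("x", "y"): {(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if a < b},
    ("y", "z"): {(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if a < b},
}

def solutions():
    names = list(variables)
    for values in product(*variables.values()):
        asg = dict(zip(names, values))
        if all((asg[u], asg[v]) in rel for (u, v), rel in constraints.items()):
            yield asg

print(list(solutions()))   # [{'x': 1, 'y': 2, 'z': 3}]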


Journal ArticleDOI
S. W. Drury1
01 Nov 1988
TL;DR: In this article, the authors make $L^p$ estimates for the $n$-linear form defined on $n$-tuples of functions $(o_1, \dots, o_n)$ on the sphere $S^{n-1}$ in $\mathbb{R}^n$.
Abstract: The object of this paper is to make $L^p$ estimates for the $n$-linear form defined on $n$-tuples of functions $(o_1, \dots, o_n)$ on the sphere $S^{n-1}$ in $\mathbb{R}^n$.

Journal ArticleDOI
TL;DR: It is shown by topological arguments that by ACTs one cannot decide certain classes of languages, examples of which are $\mathbb{Q}^n$ and the set of tuples $(x_1, \dots, x_n) \in \mathbb{R}^n$ that have components which are $\mathbb{Z}$-linearly or algebraically dependent.
Abstract: Up to now, few models of computation with the power of evaluating discontinuous functions have been analyzed, and few of their lower bounds or results on the decidability of languages are known. In this paper, we present a model of an “analytic computation tree” (ACT). These trees operate on real numbers and are able to compare real numbers, to evaluate functions on real numbers, and to evaluate certain discontinuous functions like the “floor function.” This model generalizes the model of “algebraic computation trees” introduced by Ben-Or. We show by topological arguments that by ACTs one cannot decide certain classes of languages, examples of which are $\mathbb{Q}^n$ and the set of tuples $(x_1, \dots, x_n) \in \mathbb{R}^n$ that have components which are $\mathbb{Z}$-linearly or algebraically dependent.

01 Jul 1988
TL;DR: This manual describes Rex, a programming language for specifying machines by declaratively describing their behavior; the Rex language consists of a set of LISP functions that define primitive Rex machines and provides methods for building complex machines out of simpler components.
Abstract: This manual describes Rex, a programming language for specifying machines by declaratively describing their behavior. The Rex language consists of a set of LISP functions that define primitive Rex machines and provides methods for building complex machines out of simpler components. A Rex machine is a synchronous abstract device that has inputs, local state, and outputs, all of which are storage locations. Storage locations may be thought of as wires that can be set to certain values and whose values can be read by Rex machines. The value of a storage location is determined by its constraint, some function of the values of a set (possibly empty) of storage locations. A Rex machine operates by repeatedly computing a mapping from its inputs and current state into its outputs and next state. By hierarchically dividing a large state into small components and specifying their state transitions, we can "make the combinatorial explosion work for us" [3]. The size of the smallest component may vary from implementation to implementation; it could be a bit, an integer, or a small enumerated type. The state transitions are described by functions that map tuples of elements of the primitive data types into other tuples. The new value of any given component could, in principle, depend on all of the inputs and the entire current state of the machine, but, in practice, the dependencies are usually local.
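
A small Python rendering of the synchronous model (the original is a set of LISP functions; this translation and its names are assumptions): a component maps (input, state) tuples to (next state, output) tuples, and a machine applies that mapping once per step.

def counter(inp, state):
    """One component: a state transition as a pure tuple-to-tuple function."""
    nxt = state + 1 if inp else state     # next state depends locally on the input
    return nxt, nxt                       # (next state, output)

def machine(step, state, inputs):
    """Run a component synchronously over a stream of inputs."""
    outputs = []
    for inp in inputs:
        state, out = step(inp, state)
        outputs.append(out)
    return outputs

print(machine(counter, 0, [True, False, True, True]))   # [1, 1, 2, 3]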

Book ChapterDOI
Jim Austin1
28 Mar 1988
TL;DR: In this paper, a generalisation of the binary N tuple technique originally described by Bledsoe and Browning is described, allowing grey level images to be classified with the N tuple method without first converting them to an intermediate binary representation.
Abstract: This paper describes a generalisation of the binary N tuple technique originally described by Bledsoe and Browning (1). The binary N tuple technique has commonly been used for the classification (2) and pre-processing (3) of binary images. The extension to the method described here allows grey level images of objects to be classified using the N tuple method without first having to convert the image to an intermediate binary representation. The paper illustrates the method's use in image pre-processing.

Journal ArticleDOI
TL;DR: In this paper, the main varieties of such indeterminacies, together with the special conditions, if any, under which they shrink to unique determinations, are studied and compared.
Abstract: The much-discussed prevailing failure of a moment decomposition $M_{ZZ} = AM_0A'$ to identify just one factor tuple $F$ such that $Z = AF$ and $M_{FF} = M_0$ is only one of many ways in which a selected fragment of a complete factor solution generally specifies the solution's remainder only imperfectly. Precise ranges are worked out here for the main varieties of such indeterminacies, together with the special conditions, if any, under which they shrink to unique determinations.
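
Restated in clean notation, with one familiar variety of the indeterminacy worked as a sketch (a standard transformation argument, not the paper's full analysis): the moment structure is preserved by any transformation $T$ that leaves $M_0$ invariant.

% the given decomposition, plus the transformation indeterminacy it admits
\[
  M_{ZZ} = A M_0 A', \qquad Z = AF, \qquad M_{FF} = M_0 .
\]
\[
  F^{*} = TF,\quad A^{*} = AT^{-1},\quad T M_0 T' = M_0
  \;\Longrightarrow\; Z = A^{*}F^{*} \text{ and } M_{F^{*}F^{*}} = M_0 .
\]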

Book ChapterDOI
15 Jun 1988
TL;DR: Eight basic strategies to generate and to refine transitive closure algorithms are identified: algebraic manipulation, implementation of the join operator, reusage of newly generated tuples, enforcement of some ordering of tuples, blocking of adjacency lists, tuning and preprocessing, taking advantage of topological order, and selection of an access structure for adjacencies.
Abstract: Nontraditional applications of database systems require the efficient evaluation of recursive queries. The transitive closure of a binary relation has been identified as an important and frequently occurring special case. Traditional algorithms for computing the transitive closure, as developed in the field of algorithmic graph theory, hold both the operand relation and the result relation within directly addressable main memory. The newly anticipated applications, however, deal with very large relations that do not fit into main memory and therefore must be blockwise paged to and from secondary storage. Thus we have to design algorithms and optimization methods for computing the transitive closure of very large relations. We survey and compare various such algorithms and methods in a unifying manner. In particular we identify eight basic strategies to generate and to refine transitive closure algorithms: algebraic manipulation, implementation of the join operator, reusage of newly generated tuples, enforcement of some ordering of tuples, blocking of adjacency lists, tuning and preprocessing, taking advantage of topological order, and selection of an access structure for adjacency lists. The analysis demonstrates the great variety of options on the different description levels and how they are compatible. Based on experiments some specific algorithms are recommended.
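
One of the surveyed strategies, reusing newly generated tuples, corresponds to semi-naive evaluation. A minimal in-memory sketch follows (the survey's actual algorithms are block-oriented, for relations that exceed main memory): each pass joins only the delta, the tuples new in the last pass, with the base relation.

def transitive_closure(edges):
    closure = set(edges)
    delta = set(edges)
    while delta:
        # join only the newly generated tuples with the base relation
        new = {(a, c) for (a, b) in delta for (b2, c) in edges if b == b2}
        delta = new - closure
        closure |= delta
    return closure

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]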

Proceedings ArticleDOI
05 Oct 1988
TL;DR: An approach to the handling of time in query languages of historical database management systems by extending Boolean and comparison operators by allowing their operands to be sets of time intervals is outlined.
Abstract: An approach to the handling of time in query languages of historical database management systems is outlined. The approach is based on extending Boolean and comparison operators by allowing their operands to be sets of time intervals. The proposed temporal logic is shown to satisfy the properties of the normal Boolean logic. A relational-calculus query language using the extended logic is proposed. A new syntax for retrieval statements is defined in order to separate the process of selecting entities (tuples) and the process of selecting required values of temporal attributes from the chosen entities. The extension presented in this paper offers a good degree of flexibility in expressing different temporal requirements.
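
A sketch of the extended operators (the exists/forall lifting below is an assumed reading of the proposal, not its exact semantics): a comparison such as "before" is applied to operands that are sets of time intervals rather than single time points.

def before(i, j):
    """Interval i = (start, end) ends before interval j starts."""
    return i[1] < j[0]

def set_before(xs, ys, quantifier=any):
    """Lift 'before' to sets of intervals with an exists/forall reading."""
    return quantifier(before(i, j) for i in xs for j in ys)

employment = {(1980, 1984), (1986, 1990)}     # sets of (start, end) intervals
degree     = {(1985, 1985)}

print(set_before(employment, degree, any))    # True: some job ended before it
print(set_before(employment, degree, all))    # False: not every job did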

Book ChapterDOI
04 Jul 1988
TL;DR: The purpose of this study is the answer generation when querying a rule base in the context of deductive databases, using the formalism of the first-order predicate calculus without equality, restricted to Horn clauses.
Abstract: The purpose of this study is answer generation when querying a rule base in the context of deductive databases. We suppose the rule base to be a non-recursive set of rules, represented by closed range-restricted formulae. The query is an open range-restricted formula, in which bound variables are existentially quantified. The answer to a query is a formula (and not a set of tuples). We use the formalism of the first-order predicate calculus without equality, restricted to Horn clauses. The generation method is based on a complete and sound strategy: we construct a linear input resolution tree by a depth-first method. The corresponding algorithm terminates under our hypothesis.

01 Jun 1988
TL;DR: This thesis gives sufficient and necessary conditions for detecting when an update of a conceptual relation cannot affect a derived relation, an irrelevant update, and for detecting when a derived relation can be correctly updated using no data other than the derived relation itself and the given update operation, a (serially) autonomously computable update.
Abstract: Consider a relational database where the stored relations are not necessarily conceptual relations but may also include derived relations. The derived relations may be used to structure the internal level, as has been proposed as a means of improving query response time. More traditionally, derived relations may be thought of as materialized views or database fragments. When a user updates a conceptual relation, it will be necessary to update some of the derived relations (complete re-evaluation may be quite expensive). This thesis gives sufficient and necessary conditions for detecting when an update of a conceptual relation cannot affect a derived relation, an irrelevant update, and for detecting when a derived relation can be correctly updated using no data other than the derived relation itself and the given update operation, a (serially) autonomously computable update. If a given update is neither irrelevant nor (serially) autonomously computable on some derived relation then we present an algorithm which determines a smallest sufficient attribute set for that derived relation. If the attribute set is augmented by this set of attributes then there will be enough information to update this derived relation. The class of derived relations considered is restricted to those defined by PSJ-expressions, that is, any relational algebra expression constructed from an arbitrary number of project, select and join operations (but containing no self-joins). The class of update operations consists of insertions, deletions, and modifications, where the set of tuples to be deleted or modified is specified by a PSJ-expression. We also present an update algorithm and discuss a preliminary prototype implementation of this algorithm.
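
A toy illustration of the two notions for a select-project view, with invented schema and predicate (the intuition only; the thesis derives exact sufficient and necessary conditions for general PSJ-expressions): an insertion that cannot satisfy the view's selection predicate is an irrelevant update, while one that does can here be applied using nothing beyond the view and the new tuple.

def view_select(t):                    # derived relation: select dept = 'CS',
    return t["dept"] == "CS"           # project on (name, dept)

def view_project(t):
    return {"name": t["name"], "dept": t["dept"]}

derived = [{"name": "ann", "dept": "CS"}]

def insert(tuple_):
    if not view_select(tuple_):        # irrelevant update: view untouched
        return "irrelevant to view"
    derived.append(view_project(tuple_))   # autonomously computable here: no
    return "view updated"                  # base data beyond the new tuple needed

print(insert({"name": "bob", "dept": "EE", "sal": 1}))  # irrelevant to view
print(insert({"name": "eve", "dept": "CS", "sal": 2}))  # view updated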

Journal ArticleDOI
01 Jun 1988
TL;DR: This report describes a working prototype of a Prolog-INGRES interface based on external semantic query simplification, which employs a graph theoretic approach to simplify arbitrary conjunctive queries with inequalities.
Abstract: This report describes a working prototype of a Prolog-INGRES interface based on external semantic query simplification. Semantic query simplification employs integrity constraints enforced in a database system to reduce the number of tuple variables and terms in a relational query. This type of query simplifier is useful in providing very high level user interfaces to existing database systems. The system employs a graph-theoretic approach to simplify arbitrary conjunctive queries with inequalities. One very interesting feature of the system is that it provides meaningful error messages when a query result is empty because of a contradiction. In addition to data, rules are stored in the database as well and are retrieved automatically if the Prolog program references them but they are not defined in the Prolog rulebase.

Book ChapterDOI
21 Jun 1988
TL;DR: The proposed temporal logic is shown to satisfy the properties of the normal Boolean logic and to offer a good degree of flexibility in expressing different temporal requirements.
Abstract: In this paper, an approach to handle time in relational query languages is outlined. The approach is based on extending Boolean and comparison operators by allowing their operands to be sets of intervals [BASS87]. The proposed temporal logic is shown to satisfy the properties of the normal Boolean logic. New syntax for retrieval statements is defined in order to separate the process of selecting entities (tuples) and the process of selecting required values of temporal attributes from the chosen entities. The extensions presented in this paper offer a good degree of flexibility in expressing different temporal requirements.

Proceedings ArticleDOI
01 Mar 1988
TL;DR: In this paper, a generalized approach to the decomposition of relational schemata is developed in which the component views may be defined using both restriction and projection operators, thus admitting both horizontal and vertical decompositions.
Abstract: A generalized approach to the decomposition of relational schemata is developed in which the component views may be defined using both restriction and projection operators, thus admitting both horizontal and vertical decompositions. The realization of restrictions is enabled through the use of a Boolean algebra of types, while true independence of projections is modelled by permitting null values in the base schema. The flavor of the approach is algebraic, with the collection of all candidate views of a decomposition modelled within a lattice-like framework, and the actual decompositions arising as Boolean subalgebras. Central to the framework is the notion of bidimensional join dependency, which generalizes the classical notion of join dependency by allowing the components of the join to be selected horizontally as well as vertically. Several properties of such dependencies are presented, including a generalization of many of the classical results known to be equivalent to schema acyclicity. Finally, a characterization of the nature of dependencies which participate in decompositions is presented. It is shown that there are two major types, the bidimensional join dependencies, which are tuple-generating and allow tuple removal by implicit encoding of knowledge, and splitting dependencies, which simply partition the database into two components.
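
A toy decomposition mixing horizontal and vertical components in the spirit described (a simplification with invented data, not the paper's lattice framework): the relation is restricted by a predicate, each fragment is projected, and the original is recovered as a union of joins.

rows = [{"id": 1, "kind": "a", "x": 10}, {"id": 2, "kind": "b", "x": 20}]

def restrict(rel, pred):
    return [t for t in rel if pred(t)]

def project(rel, attrs):
    return [{a: t[a] for a in attrs} for t in rel]

def join(r, s, on):
    return [{**t, **u} for t in r for u in s if t[on] == u[on]]

def fragments(pred):                     # one horizontal slice, split vertically
    part = restrict(rows, pred)
    return project(part, ["id", "kind"]), project(part, ["id", "x"])

a_k, a_v = fragments(lambda t: t["kind"] == "a")
b_k, b_v = fragments(lambda t: t["kind"] != "a")
recon = join(a_k, a_v, "id") + join(b_k, b_v, "id")   # union of joins
print(recon == rows)                                  # True: lossless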

Book ChapterDOI
Kurt Rothermel1
01 Jan 1988
TL;DR: A method for storing and retrieving arbitrarily complex PROLOG clauses from a relational database is presented; it can be used to implement a filter module that sits on top of the database management system (DBMS) and offers clause retrieval functions to the PROLOG interpreter.
Abstract: A method for storing and retrieving arbitrarily complex PROLOG clauses from a relational database is presented. The presented method can be used to implement a filter module that sits on top of the database management system (DBMS) and offers clause retrieval functions to the PROLOG interpreter. This filter accepts arbitrarily complex goal literals and returns all clauses (potentially) matching the goal. The method is flexible enough to support a wide range of filter selectivity, i.e. it can be used to implement highly selective as well as arbitrarily rough filters. The structures needed to represent clauses in the DBMS are simple: each clause is entirely stored in a single tuple, which also contains all information needed for selection purposes. To retrieve the clauses matching a given goal, one DBMS query operating on a single table is executed, independent of the complexity of the stored clauses and the degree of selectivity provided by the filter.
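
A sketch of the clause-per-tuple encoding (the schema and selection fields below are assumptions for illustration, not the paper's exact layout): each clause is one row carrying its head functor, arity, and a first-argument key for filtering, and one single-table query returns all potentially matching clauses.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE clauses (functor TEXT, arity INT, arg1 TEXT, clause TEXT)")
db.executemany("INSERT INTO clauses VALUES (?,?,?,?)", [
    ("parent", 2, "tom", "parent(tom, bob)."),
    ("parent", 2, "bob", "parent(bob, ann)."),
    ("parent", 2, None,  "parent(X, Y) :- step_parent(X, Y)."),  # variable arg
])

def candidates(functor, arity, arg1=None):
    """Return clauses potentially matching the goal (the filter may be rough)."""
    sql = "SELECT clause FROM clauses WHERE functor=? AND arity=?"
    if arg1 is not None:   # clauses with a variable first argument always match
        sql += " AND (arg1=? OR arg1 IS NULL)"
        return [r[0] for r in db.execute(sql, (functor, arity, arg1))]
    return [r[0] for r in db.execute(sql, (functor, arity))]

print(candidates("parent", 2, "tom"))   # matching + potentially matching clauses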

01 Jun 1988
TL;DR: The same idea is developed here, for other recursion schemes, neither iterative nor primitive recursive, like the simultaneous (mutual) iteration schemes, which are algorithms or functions more or less familiar in computer science, e.g. functions defined by some limited while schemes.
Abstract: (i) The family \(\mathbb{D}\) of data systems, considered as heterogeneous term- (anarchic-) algebras with a finite number of supports and constructors, has the property that the family \(\mathbb{I}\) of iterative functions mapping algebras into algebras does not change if, inside its definition, the primitive recursion scheme replaces the iteration scheme [Böhm 1986]. The same idea is developed here, for other recursion schemes, neither iterative nor primitive recursive, like the simultaneous (mutual) iteration schemes. Illustrations of such schemes are algorithms or functions more or less familiar in computer science, e.g. functions defined by some limited while schemes.


Book ChapterDOI
Joseph Y. Halpern1, Ronald Fagin1
01 Oct 1988
TL;DR: A formal model that captures the subtle interaction between knowledge and action in distributed systems and extends the standard notion of a protocol by defining knowledge-based protocols, ones in which a process' actions may depend explicitly on its knowledge.
Abstract: We present a formal model that captures the subtle interaction between knowledge and action in distributed systems. We view a distributed system as a set of runs, where a run is a function from time to global states and a global state is a tuple consisting of an environment state and a local state for each process in the system. This model is a generalization of those used in many previous papers. Actions in this model are associated with functions from global states to global states. A protocol is a function from local states to actions. We extend the standard notion of a protocol by defining knowledge-based protocols, ones in which a process' actions may depend explicitly on its knowledge. Knowledge-based protocols provide a natural way of describing how actions should take place in a distributed system. Finally, we show how the notion of one protocol implementing another can be captured in our model.

Patent
12 Nov 1988
TL;DR: In this paper, the authors propose to shorten the time required to build indexes on relations by producing, for a 3rd relation obtained by joining the attributes of two relations, an index table whose entries combine the codes of the two underlying index values.
Abstract: PURPOSE: To shorten the time required to produce an index on relations by producing, for a 3rd relation obtained by joining the attributes of two relations, an index table consisting of codes that combine the two underlying index values. CONSTITUTION: The relation tuples are retrieved via the 1st and 2nd index tables 1 and 2 of the two relations, respectively, to obtain a 3rd index table 3 consisting of index records that link the two relations. The relation joining the two relations can therefore also be retrieved at high speed via the table 3. In this way, the existing index table and hash table are reused for the relations involved in the join operation. Thus the time required for the natural join operation carried out between the relations can be shortened. Furthermore, the relation tuples can be retrieved at high speed, and at the same time an index is obtained on the result of the natural join operation carried out between indexed relations. COPYRIGHT: (C)1990,JPO&Japio
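
A sketch of the join-index idea in the abstract (the Python encoding is an illustrative assumption): given the index tables of two relations, a third index for their join pairs the tuple-ids from both sides, so joined tuples can be fetched by id without rescanning either relation.

r_index = {"cs": [10, 11], "ee": [12]}        # dept value -> tuple-ids in R
s_index = {"cs": [20], "ee": [21, 22]}        # dept value -> tuple-ids in S

# 3rd index table: join value -> combined (R-id, S-id) pairs
join_index = {v: [(r, s) for r in r_index[v] for s in s_index[v]]
              for v in r_index.keys() & s_index.keys()}

print(join_index["cs"])    # [(10, 20), (11, 20)]: fetch joined tuples by id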

Proceedings ArticleDOI
01 Feb 1988
TL;DR: The authors demonstrate that by using the same approach for a data manipulation language and a clustering strategy, few modifications of the DBMS program are required and the assertional power of the DBMS is upgraded while respecting performance considerations.
Abstract: The authors present a clustering method for complex domains. The method is original in that tuples can be clustered using functions applied to complex domain values. Thus, tuples are organized according to a function result. Those functions most often applied to complex values and used in the restriction part of queries can be used as clustering predicates. Hence, they optimize the retrieval of tuples that would otherwise require processing the whole relation. In SABRINA, complex domain processing is made possible by a Lisp language processor designed as an integrated database management system processor. Clustering is determined by a set of predicates defining a recursive partitioning of the relation. These predicates are the Lisp functions, taken from the set of functions applicable to a given domain. The authors demonstrate that by using the same approach for a data manipulation language and a clustering strategy, few modifications of the DBMS program are required and the assertional power of the DBMS is upgraded while respecting performance considerations.

Book ChapterDOI
01 Jan 1988
TL;DR: RAPID is a highly parallel processor, using wired algorithms, and built with several copies of a full custom VLSI component, and displays several original features, such as a Sequential Partitioned Structure mixing sequential and parallel evaluation, and a full query resolution.
Abstract: In this paper we present RAPID, a co-processor for database operations. RAPID is a highly parallel processor, using wired algorithms, and built with several copies of a full custom VLSI component. RAPID displays several original features, such as a Sequential Partitioned Structure mixing sequential and parallel evaluation, and a full query resolution. The interfaces of RAPID with the DBMS and the host machine are quite simple. The main component of RAPID contains 16 to 32 processing elements with sophisticated functionalities. It evaluates a 1000-tuple x 1000-tuple join in about 3 milliseconds. The join duration is linear in the size of the source relations, even with source relations of more than one million tuples. RAPID is presently being implemented in HCMOS3 technology.

Journal ArticleDOI
TL;DR: This paper presents two join algorithms which preprocess the Partial-Relations first and then join the selected tuples of the relations, and shows that for a wide range of selectivity factors and/or join factors the proposed algorithms perform better than the sort-merge and hash-based join algorithms.