
Showing papers on "Tuple published in 1991"


Patent
30 Apr 1991
TL;DR: In this paper, a system and method of logically and physically clustering data (tuples) in a database is presented, in which data objects are assigned to a particular domain by a locality-of-reference algorithm: a tuple of data is placed in a domain if and only if all objects referenced by the tuple are contained in that domain.
Abstract: A system and method of logically and physically clustering data (tuples) in a database. The database management system of the invention partitions (declusters) a set of relations into smaller so-called local relations and reclusters the local relations into constructs called domains. The domains are self-contained in that a domain contains the information for properly accessing and otherwise manipulating the data it contains. In other words, the data objects stored in the domains may be stored in a particular domain based upon a locality-of-reference algorithm in which a tuple of data is placed in a domain if and only if all objects referenced by the tuple are contained in the domain. On the other hand, the data objects stored in a domain may be clustered so that a tuple of data is placed in a domain based on the domain of the object referenced by a particular element of the tuple. By clustering the related object data in this manner, the database management system may more efficiently cache data to a user application program requesting data related to a particular data object. The system may also more efficiently lock and check-in and check-out data from the database so as to improve concurrency. Moreover, versioning may be more readily supported by copying tuples of a particular domain into a new domain which can then be updated as desired.
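As a rough illustration of the locality-of-reference placement rule described in this abstract, the following Python sketch checks whether a single domain contains every object a tuple references; all names and the dictionary layout are hypothetical, not the patented design.

# Hypothetical sketch of the locality-of-reference placement rule: a tuple is
# placed in a domain only if every object it references is stored in that domain.

def place_tuple(tuple_refs, domains):
    """Return the name of a domain containing *all* referenced objects,
    or None if no single domain satisfies the locality-of-reference rule."""
    for name, objects in domains.items():
        if all(ref in objects for ref in tuple_refs):
            return name
    return None

domains = {
    "dom_parts":  {"part:1", "part:2", "part:3"},
    "dom_orders": {"order:7", "part:9"},
}

print(place_tuple({"part:1", "part:3"}, domains))   # -> dom_parts
print(place_tuple({"part:1", "order:7"}, domains))  # -> None (references span domains)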

164 citations


Book ChapterDOI
01 Jan 1991
TL;DR: This work introduces "abstract maps," an analogical representation that inherently reflects the structure of the represented domain, and demonstrates their use in spatial reasoning; the scheme also facilitates "coarse" reasoning and the hierarchical organization of knowledge.
Abstract: There have been some straightforward efforts to extend Allen’s interval-based temporal logic to spatial dimensions by using Cartesian tuples of relations (Guesgen, 1989). We take a different approach based on a study of the kind of information that best relates two entities in 2-dimensional space qualitatively. The relevant spatial categories turn out to be “projection” and “orientation.” We define a small set of spatial relations and stress the importance of making their reference frames explicit. Furthermore, we introduce “abstract maps,” an analogical representation that inherently reflects the structure of the represented domain, and demonstrate their use in spatial reasoning. This scheme also facilitates “coarse” reasoning and the hierarchical organization of knowledge. These representational issues form the basis for an experimental system to develop “cognitive maps” from 2-D scanned layout plans of buildings.

106 citations


Journal ArticleDOI
TL;DR: A logic-based language for manipulating complex objects constructed using set and tuple constructors is introduced; it uses base and derived data functions and is extended with external functions and predicates.
Abstract: A logic-based language for manipulating complex objects constructed using set and tuple constructors is introduced. A key feature of the COL language is the use of base and derived data functions. Under some stratification restrictions, the semantics of programs is given by a minimal and justified model that can be computed using a finite sequence of fixpoints. The language is extended using external functions and predicates. An implementation of COL in a functional language is briefly discussed.
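The following Python sketch is not COL syntax; it only mimics the idea of a derived data function computed by iterating to a fixpoint over a base data function, with illustrative names.

# Minimal analogue of a derived data function: `descendants` is derived from the
# base function `children` and iterated until no set grows any further.

children = {            # base data function: person -> set of children
    "ann": {"bob", "carol"},
    "bob": {"dave"},
    "carol": set(),
    "dave": set(),
}

def derive_descendants(children):
    desc = {x: set(kids) for x, kids in children.items()}
    changed = True
    while changed:                      # finite sequence of fixpoint steps
        changed = False
        for x in desc:
            new = set(desc[x])
            for y in desc[x]:
                new |= desc.get(y, set())
            if new != desc[x]:
                desc[x] = new
                changed = True
    return desc

print(sorted(derive_descendants(children)["ann"]))   # ['bob', 'carol', 'dave']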

101 citations


Patent
06 Aug 1991
TL;DR: In this article, an expanded virtual register (EVR) data structure is provided comprising an infinite, linearly ordered set of virtual register elements with a remap() function defined upon the EVR.
Abstract: A process for optimizing compiler intermediate representation (IR) code, and data structures for implementing the process; the process is preferably embodied in a compiler computer program operating on an electronic computer or data processor with access to a memory storage means such as a random access memory and access to a program mass storage means such as an electronic magnetic disk storage device. The compiler program reads an input source program stored in the program mass storage means and creates a dynamic single assignment intermediate representation of the source program in the memory using pseudo-machine instructions. To create the dynamic single assignment intermediate representation, during compilation, the compiler creates a plurality of virtual registers in the memory for storage of variables defined in the source program. Means are provided to ensure that the same virtual register is never assigned to more than once on any dynamic execution path. An expanded virtual register (EVR) data structure is provided comprising an infinite, linearly ordered set of virtual register elements with a remap() function defined upon the EVR. Calling the remap() function with an EVR parameter causes an EVR element which was accessible as [n] prior to the remap operation to be accessible as [n+1] after the remap operation. A subscripted reference map comprising a dynamic plurality of map tuples is used. Each map tuple associates the real memory location accessible under a textual name with an EVR element. A compiler can use the map tuple to substitute EVR elements for textual names, eliminating unnecessary load operations from the output intermediate representation.
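The index-shifting behaviour of remap() described above can be modelled compactly; the class below is only an illustrative Python model of an expanded virtual register, not the patented compiler data structure.

# Illustrative EVR model: an unbounded, linearly ordered set of elements plus a
# remap() that shifts the index space, so the value reachable as evr[n] before
# remap() is reachable as evr[n+1] afterwards.

class EVR:
    def __init__(self):
        self._store = {}   # absolute position -> value
        self._base = 0     # number of remap() calls so far

    def __setitem__(self, n, value):
        self._store[n - self._base] = value

    def __getitem__(self, n):
        return self._store[n - self._base]

    def remap(self):
        self._base += 1    # shift every visible index by +1

evr = EVR()
evr[0] = "t1 = load a"
evr.remap()
print(evr[1])   # -> "t1 = load a": what was [0] is now visible as [1]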

79 citations


Journal ArticleDOI
TL;DR: The author shows how relational algebra operations can be extended, and implemented using information source vectors, to calculate the vector corresponding to each tuple in the answer to a query and, hence, to identify the information source(s) contributing to each tuple in the answer.
Abstract: The author studies the problem of determining the reliability of answers to queries in a relational database system, where the information in the database comes from various sources with varying degrees of reliability. An extended relational model is proposed in which each tuple in a relation is associated with an information source vector which identifies the information source(s) that contributed to that tuple. The author shows how relational algebra operations can be extended, and implemented using information source vectors, to calculate the vector corresponding to each tuple in the answer to a query, and hence, to identify information source(s) contributing to each tuple in the answer. This also enables the database system to calculate the reliability of each tuple in the answer to a query as a function of the reliability of information sources.
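A hedged sketch of the propagation idea: each tuple carries a source annotation, selection preserves it, and a join combines the annotations of the two contributing tuples. The paper uses bit vectors; frozensets of source ids are used here purely for readability, and all relation and source names are made up.

def select(relation, pred):
    # selection keeps each surviving tuple's own source annotation
    return [(t, src) for (t, src) in relation if pred(t)]

def join(r, s, r_col, s_col):
    out = []
    for (t1, src1) in r:
        for (t2, src2) in s:
            if t1[r_col] == t2[s_col]:
                out.append((t1 + t2, src1 | src2))  # union of contributing sources
    return out

emp  = [(("alice", 10), frozenset({"HR_feed"}))]
dept = [((10, "sales"), frozenset({"ERP_dump"}))]

for tup, sources in join(emp, dept, 1, 0):
    print(tup, "from", sorted(sources))
# ('alice', 10, 10, 'sales') from ['ERP_dump', 'HR_feed']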

52 citations


Book ChapterDOI
01 Nov 1991
TL;DR: A fuzzy query language (FQL) for relational databases is proposed; constructed as an enhancement of the relational domain calculus, it has sufficient capability to represent all four types of fuzzy statements distinguished by the work of L.A. Zadeh (1978).
Abstract: A fuzzy query language (FQL) for relational databases is proposed. FQL, constructed as an enhancement of the relational domain calculus, has sufficient capability to represent all four types of the fuzzy statements distinguished by the work of L.A. Zadeh (1978). The idea for constructing FQL is in the formulation of the fuzzy matching degrees that assign the appropriate values in the interval [0, 1] to any combination of fuzzy queries and tuples in relational databases. FQL helps to provide a human-oriented interface to relational databases that store a vast amount of information. Furthermore, fuzzy expert systems are expected to be provided with the facility to make use of fact data in relational databases through FQL. In addition, FQL is a theoretical basis for systematically developing a higher human-oriented interface with relational databases.
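To make the notion of a matching degree concrete, here is a small Python sketch: each fuzzy term is a membership function into [0, 1] and a conjunctive query is scored with min. The membership functions and the use of min are common illustrative choices, not necessarily the paper's exact formulation.

def young(age):                 # illustrative membership function
    return max(0.0, min(1.0, (40 - age) / 20))

def well_paid(salary):          # illustrative membership function
    return max(0.0, min(1.0, (salary - 30000) / 40000))

def matching_degree(tuple_, conditions):
    # conjunctive fuzzy query: take the minimum membership over all conditions
    return min(mu(tuple_[attr]) for attr, mu in conditions)

employees = [
    {"name": "kim", "age": 28, "salary": 62000},
    {"name": "lee", "age": 45, "salary": 80000},
]
query = [("age", young), ("salary", well_paid)]

for t in employees:
    print(t["name"], round(matching_degree(t, query), 2))
# kim gets 0.6; lee gets 0.0 because young(45) = 0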

47 citations


Journal ArticleDOI
TL;DR: A new definition of complex objects is introduced which provides a denotation for incomplete tuples as well as partially described sets and can be used constructively to infer consistent instances of conclusions and to refine complete instances of the hypothesis.

38 citations


Book ChapterDOI
26 Aug 1991
TL;DR: This paper shows how to achieve the dynamic detection of determinism in implementations of functional logic languages with a nonambiguity condition and what can be gained by this optimization.
Abstract: Programs in functional logic languages usually have to satisfy a nonambiguity condition, that semantically ensures completeness of conditional narrowing and pragmatically ensures that the defined (non-boolean) functions are deterministic and do not yield different result values for the same argument tuples. The nonambiguity condition allows the dynamic detection of determinism in implementations of functional logic languages. In this paper we show how to achieve this and what can be gained by this optimization.

36 citations


Journal ArticleDOI
TL;DR: It is shown that the retrieve statement of TQUEL is weaker than its counterpart in TCAL, and it is argued that TQUEL is not as user-friendly as TCAL.

33 citations


Journal ArticleDOI
TL;DR: An attempt is made in this paper to cater for this situation by an extension of the Shepard-Kruskal approach to non-metric multidimensional scaling which deals with dissimilarities defined for three or more objects.
Abstract: Wide use has been made of multidimensional scaling (MDS) techniques since the pioneering papers of Shepard and Kruskal. In the main, dissimilarities used in the various MDS techniques are derived for pairs of objects or stimuli. This is termed 2-way, 1-mode data, meaning pairs of objects within a single set are considered. Some MDS techniques are designed for 3-way or even higher, and for 2-mode, 3-mode or more. One such example is the CANDECOMP model which can deal with n-way, m-mode data where 3≤n≤7 and 2≤m≤7. This model considers n-tuples of objects at a time, selecting these from m different sets. To date there are no models which consider n-way, 1-mode data, where n≥3. An attempt is made in this paper to cater for this situation by an extension of the Shepard-Kruskal approach to non-metric multidimensional scaling which deals with dissimilarities defined for three or more objects. A computer program has been written using the new model to produce a configuration in Euclidean space to represent the objects. Some historical voting data and some artificial data are then analysed.

28 citations


Journal ArticleDOI
TL;DR: Positive solutions are given to the decision problem for a class of quantified formulae of the first-order set-theoretic language based on ∅, ∈, =, involving particular occurrences of restricted universal quantifiers, and for the unquantified formulae obtained by adding the operators of binary union, intersection and difference and the relation of inclusion.
Abstract: Positive solutions to the decision problem for a class of quantified formulae of the first order set theoretic language based on ∅, ∈, =, involving particular occurrences of restricted universal quantifiers, and for the unquantified formulae of ∅, ∈, =, {...}, η, where {...} is the tuple operator and η is a general choice operator, are obtained. To that end a method is developed which also provides strong reflection principles over the hereditarily finite sets. As far as finite satisfiability is concerned such results apply also to the unquantified extension of ∅, ∈, =, {...}, η, obtained by adding the operators of binary union, intersection and difference and the relation of inclusion, provided no nested term involving η is allowed.

Book ChapterDOI
21 Oct 1991
TL;DR: The paper presents an attempt to develop a totally correct shared-state parallel program in the style of VDM.
Abstract: The paper presents an attempt to develop a totally correct shared-state parallel program in the style of VDM. Programs are specified by tuples of five assertions (P,R,W,G,E). The pre-condition P, the rely-condition R and the wait-condition W describe assumptions about the environment, while the guar-condition G and the eff-condition E characterise commitments to the implementation.

Book ChapterDOI
10 Jun 1991
TL;DR: A distributed data structure is an object which permits many producers to augment or modify its contents, and many consumers simultaneously to access its component elements.
Abstract: A distributed data structure is an object which permits many producers to augment or modify its contents, and many consumers simultaneously to access its component elements. Synchronization is implicit in data structure access: a process that requests an element which has not yet been generated blocks until a producer creates it.
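A single-process Python sketch of the access discipline described here: a consumer asking for an element that has not yet been generated blocks until some producer creates it. A real distributed implementation would replace the in-memory dictionary with network communication; the class and field names are illustrative.

import threading

class DistributedVector:
    def __init__(self):
        self._data = {}
        self._cond = threading.Condition()

    def write(self, index, value):          # producer side
        with self._cond:
            self._data[index] = value
            self._cond.notify_all()

    def read(self, index):                  # consumer side, blocks until present
        with self._cond:
            while index not in self._data:
                self._cond.wait()
            return self._data[index]

vec = DistributedVector()
t = threading.Thread(target=lambda: vec.write(3, 42))
t.start()
print(vec.read(3))   # blocks briefly, then prints 42
t.join()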

Book ChapterDOI
16 Dec 1991
TL;DR: Two algorithms for generating execution plans for queries expressed in an object algebra are presented and the interface to an object manager whose operations are the executable elements of query execution plans is defined.
Abstract: We address the generation of execution plans for object-oriented database queries. This is a challenging area of study because, unlike the relational algebra, a uniformly accepted set of object algebra operators has not been defined. Additionally, a standardized object manager interface analogous to storage manager interfaces of relational systems does not exist. We define the interface to an object manager whose operations are the executable elements of query execution plans. Parameters to the object manager interface are streams of tuples of object identifiers. The object manager can apply methods and simple predicates to the objects identified in a tuple. Two algorithms for generating execution plans for queries expressed in an object algebra are presented. The first algorithm runs quickly but may produce inefficient plans. The second algorithm enumerates all possible execution plans and presents them in an efficient, compact representation.
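A hedged sketch of the kind of object-manager interface described above: operators consume and produce streams (here, Python generators) of tuples of object identifiers, and the manager can apply a simple predicate to the objects an OID tuple identifies. All names and the object-store layout are illustrative assumptions.

objects = {                       # OID -> object ("the object store")
    1: {"type": "Emp", "salary": 50000},
    2: {"type": "Emp", "salary": 90000},
}

def scan(oids):
    for oid in oids:
        yield (oid,)              # stream of 1-tuples of OIDs

def apply_predicate(stream, position, pred):
    # apply a simple predicate to the object identified at the given position
    for oid_tuple in stream:
        if pred(objects[oid_tuple[position]]):
            yield oid_tuple

plan = apply_predicate(scan(objects), 0, lambda o: o["salary"] > 60000)
print(list(plan))                 # [(2,)]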

Journal ArticleDOI
TL;DR: It is shown that the eager method, the counting method, and the magic-set method can be expressed as algorithms for finding particular paths in a directed graph associated to the query.
Abstract: A logic query Q is a triple ⟨G, LP, D⟩, where G is the query goal, LP is a logic program without function symbols, and D is a set of facts, possibly stored as tuples of a relational database. The answers of Q are all facts that can be inferred from LP ∪ D and unify with G. A logic query is bound if some argument of the query goal is a constant; it is canonical strongly linear (a CSL query) if LP contains exactly one recursive rule and this rule is linear, i.e., only one recursive predicate occurs in its body. In this paper, the problem of finding the answers of a bound CSL query is studied with the aim of comparing for efficiency some well-known methods for implementing logic queries: the eager method, the counting method, and the magic-set method. It is shown that the above methods can be expressed as algorithms for finding particular paths in a directed graph associated to the query. Within this graphical formalism, a worst-case complexity analysis of the three methods is performed. It turns out that the counting method has the best upper bound for noncyclic queries. On the other hand, since the counting method is not safe if queries are cyclic, the method is extended to safely implement this kind of queries as well.
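A minimal illustration of the "paths in a directed graph" view: a bound linear recursive query such as anc(ann, Y) over anc(X,Y) <- par(X,Y) and anc(X,Y) <- par(X,Z), anc(Z,Y) can be answered by reachability from the bound constant. This Python sketch only illustrates the graph view; it is not the eager, counting, or magic-set algorithm from the paper, and the facts are made up.

from collections import deque

par = {("ann", "bob"), ("bob", "carol"), ("carol", "dave")}   # EDB facts

def bound_query(edges, start):
    succ = {}
    for x, y in edges:
        succ.setdefault(x, set()).add(y)
    seen, frontier = set(), deque([start])
    while frontier:                      # breadth-first search from the constant
        node = frontier.popleft()
        for y in succ.get(node, ()):
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen                          # all Y with anc(start, Y)

print(sorted(bound_query(par, "ann")))   # ['bob', 'carol', 'dave']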

Proceedings ArticleDOI
01 Dec 1991
TL;DR: Presents a new approach to parallel computation of transitive closure queries using a semantic data fragmentation which produces a partitioning of the base relation into several fragments such that any fragment corresponds to a subgraph.
Abstract: Presents a new approach to parallel computation of transitive closure queries using a semantic data fragmentation. Tuples of a large base relation denote edges in a graph, which models a transportation network. A fragmentation algorithm is proposed which produces a partitioning of the base relation into several fragments such that any fragment corresponds to a subgraph. One fragment, called the high-speed fragment, collects all edges which guarantee maximum speed. Thus, the fragmentation algorithm induces a hierarchical relationship between the high-speed fragment and all other fragments. With this fragmentation, any query about paths connecting two nodes can be answered by using just the fragments in which the nodes are located and the high-speed fragment. In general, if each fragment is managed by a distinguished processor, then the query can be answered by three processors working in parallel. This schema can be applied recursively to generate an arbitrary number of hierarchical levels.
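A toy Python illustration of the fragmentation idea (not the paper's algorithm): edges are split by a node-to-region assignment, maximum-speed links go into a shared high-speed fragment, and a path query between two nodes consults only the two regional fragments plus the high-speed fragment. The network, regions, and edge labels are invented.

from collections import deque

edges = [  # (src, dst, high_speed?)
    ("a1", "a2", False), ("a2", "hubA", False),
    ("b1", "b2", False), ("hubB", "b1", False),
    ("hubA", "hubB", True),                      # motorway between regions
]
region = {"a1": "A", "a2": "A", "hubA": "A", "b1": "B", "b2": "B", "hubB": "B"}

fragments = {"high_speed": []}
for s, d, fast in edges:
    if fast:
        fragments["high_speed"].append((s, d))
    else:
        fragments.setdefault(region[s], []).append((s, d))

def reachable(src, dst):
    # use only the fragments of the two endpoints and the high-speed fragment
    used = fragments[region[src]] + fragments[region[dst]] + fragments["high_speed"]
    succ = {}
    for s, d in used:
        succ.setdefault(s, set()).add(d)
    seen, q = {src}, deque([src])
    while q:
        n = q.popleft()
        if n == dst:
            return True
        for m in succ.get(n, ()):
            if m not in seen:
                seen.add(m)
                q.append(m)
    return False

print(reachable("a1", "b2"))   # True: a1 -> a2 -> hubA -> hubB -> b1 -> b2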

Proceedings ArticleDOI
01 Apr 1991
TL;DR: The results show that even over relatively sparse base relations, the fixpoints of the recursively defined relations are within a small constant factor of their worst-case size bounds, and that reducing the arity of the recursive predicate is probably more important than restricting the recursion to relevant tuples.
Abstract: We present asymptotically exact expressions for the expected sizes of relations defined by three well-studied Datalog recursions, namely the "transitive closure," "same generation," and "canonical factorable recursion". We consider the size of the fixpoints of the recursively defined relations in the above programs, as well as the size of the fixpoints of the relations defined by the rewritten programs generated by the Magic Sets and Factoring rewriting algorithms in response to selection queries. Our results show that even over relatively sparse base relations, the fixpoints of the recursively defined relations are within a small constant factor of their worst-case size bounds, and that the Magic Sets rewriting algorithm on the average produces relations whose fixpoints are within a small constant factor of the corresponding bounds for the recursion without rewriting. The expected size of the fixpoint of the relations produced by the Factoring algorithm, when it applies, is significantly smaller than the expected size of the fixpoints of the relations produced by Magic Sets. This lends credence to the belief that reducing the arity of the recursive predicate is probably more important than restricting the recursion to relevant tuples.
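As a quick empirical companion to the claim about fixpoint sizes, the small Python experiment below compares the transitive-closure fixpoint of a sparse random base relation with its n*n worst-case bound. It is only a simulation under arbitrary parameters, not the paper's asymptotic analysis.

import random

def transitive_closure(edges):
    succ = {}
    for x, y in edges:
        succ.setdefault(x, set()).add(y)
    closure = set()
    for start in succ:                 # depth-first reachability from each source
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            for nxt in succ.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        closure |= {(start, y) for y in seen}
    return closure

random.seed(1)
n, m = 60, 120                         # 60 nodes, 120 random edges (sparse)
base = {(random.randrange(n), random.randrange(n)) for _ in range(m)}

tc = transitive_closure(base)
print(f"base: {len(base)}, fixpoint: {len(tc)}, worst case: {n * n}")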

Journal ArticleDOI
TL;DR: The authors characterize a multirelation M(n, L) by its cardinality n and the number of distinct elements L it contains, and show that the existence of duplicate values in the join attribute columns can be exploited to reduce the computational complexity of the natural join operation.
Abstract: It is shown that the existence of duplicate values in some attribute columns has a significant impact on the computational complexity of the sorting and joining operations. This is especially true when the number of distinct tuple values is a small fraction of the total number of tuples. The authors characterize a multirelation M(n, L) by its cardinality n and the number of distinct elements L it contains. Under this characterization, the worst-case time complexity of sorting such a multirelation with binary comparisons as basic operations is investigated. Upper and lower bounds on the number of three-branch comparisons needed to sort such a multirelation are established. Thereafter, the methodology used to study the complexity of sorting is applied to the natural join operation. It is shown that the existence of duplicate values in the join attribute columns can be exploited to reduce the computational complexity of the natural join operation.
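A hedged Python sketch of why few distinct values help: when a multirelation of cardinality n contains only L distinct values (L much smaller than n), equal tuples can be grouped first so that only the L representatives need to be compared, instead of comparison-sorting all n elements. This is an illustration of the intuition, not the paper's comparison-counting analysis.

def sort_multirelation(tuples):
    groups = {}                      # distinct value -> multiplicity
    for t in tuples:
        groups[t] = groups.get(t, 0) + 1
    out = []
    for value in sorted(groups):     # only the L representatives are compared
        out.extend([value] * groups[value])
    return out

data = [("b", 2), ("a", 1), ("b", 2), ("a", 1), ("a", 1)]   # n = 5, L = 2
print(sort_multirelation(data))
# [('a', 1), ('a', 1), ('a', 1), ('b', 2), ('b', 2)]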

Proceedings ArticleDOI
01 Dec 1991
TL;DR: Three approaches to this problem are presented, each exploiting more knowledge about the application domain and hence becoming progressively less memory-intensive; the two most promising of these approaches are examined in depth.
Abstract: The monitoring of distributed systems is made difficult both by their nondeterministic nature and by their tendency to generate a large amount of state information during a monitored execution. It is therefore desirable for a monitor of distributed systems to be able to perform, by itself, a significant portion of the analysis of that system's state information. Two of the core problems intrinsic to such a monitor are developed. It is critical to maintain knowledge about the temporal organization of the changes in state (events) of the system being monitored. An important use of this knowledge is determining which events occur "between" two bounding events A and B in quasi-ordered logical time. Three approaches to this problem are presented, each exploiting more knowledge about the application domain and hence becoming progressively less memory-intensive. The two most promising of these approaches are examined in depth. Events received by the monitor are stored as database tuples. Many of the queries posed against this database relate to time; the results of these queries thus impose a quasi order on the tuples in the database. A class of such queries is defined and a set of algorithms for their efficient resolution is presented.
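One standard way to realize the "between two bounding events" test in quasi-ordered logical time is to timestamp events with vector clocks: A happened-before e when A's vector is componentwise less than or equal to e's (and differs somewhere), and e is between A and B when A happened-before e and e happened-before B. The Python sketch below shows that direct comparison; the paper's own algorithms are more memory-conscious and may not use vector clocks at all.

def happened_before(u, v):
    # componentwise <= and not equal: the usual vector-clock partial order
    return all(a <= b for a, b in zip(u, v)) and u != v

def between(event, lower, upper):
    return happened_before(lower, event) and happened_before(event, upper)

A = (1, 0, 0)            # bounding event A
B = (3, 2, 1)            # bounding event B
events = {"e1": (2, 1, 0), "e2": (0, 0, 1), "e3": (1, 2, 2)}

print({name: between(v, A, B) for name, v in events.items()})
# e1 lies between A and B; e2 is not after A, and e3 is concurrent with B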

01 Jan 1991
TL;DR: The Linda model of a shared distributed tuple space is implemented in a functional programming language, Standard ML, using ML's flexible type system and pattern matching facilities to provide ML programmers with the basic Linda operations on tuples.
Abstract: We have implemented the Linda model of shared distributed tuple space in a functional programming language, Standard ML. We use ML's flexible type system and pattern matching facilities to provide ML programmers with the basic Linda operations on tuples. No preprocessor is used, and no compiler changes are required. We use separate ML modules to implement the Linda interface, operations on tuple space, communication of tuples over the network, and replication of tuple spaces. Our approach allows different compositions of these modules to be used to configure a system with either local or remote access to tuple space, and with either a centralized or distributed implementation of tuple space. The resulting implementation of Linda in Standard ML offers an attractive way to separate the functional and the imperative portions of a distributed system. Individual processes can be written in ML in a pure functional style and the Linda shared tuple space can be used to interconnect the processes and maintain the state of the system. This research was sponsored by the Defense Advanced Research Projects Agency and monitored by the Air Force Systems Command under Contract F19628-91-C-0128. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.
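The implementation described above is written in Standard ML; the following is only a rough, single-process Python analogue of the basic Linda operations, using None as a formal (wildcard) field in templates.

import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):                       # add a tuple to the space
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            f is None or f == v for f, v in zip(template, tup))

    def in_(self, template):                  # remove and return a match, blocking
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

ts = TupleSpace()
ts.out(("job", 1, "encode frame 1"))
print(ts.in_(("job", None, None)))            # ('job', 1, 'encode frame 1')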

Book
01 Jan 1991
TL;DR: A book outlining standard and expert deductive database systems, covering the discarding of irrelevant tuples and rules, explicit termination of recursion, and the preference of useful rules and tuples.
Abstract: Contents: A standard deductive database system; An expert deductive database system; Discarding irrelevant tuples; Disregarding irrelevant rules; Explicit termination of recursion; Preferring useful rules; Preferring useful tuples; Summary and outlook.

Book ChapterDOI
17 Jun 1991
TL;DR: The Swarm model is overviewed, the synchronic group concept is introduced, and its use in the expression of dynamically structured programs is illustrated.
Abstract: Swarm is a computational model which extends the UNITY model in three important ways: (1) UNITY's fixed set of variables is replaced by an unbounded set of tuples which are addressed by content rather than by name; (2) UNITY's static set of statements is replaced by a dynamic set of transactions; and (3) UNITY's static ∥-composition is augmented by dynamic coupling of transactions into synchronic groups. This last feature, unique to Swarm, facilitates formal specification of the mode of execution (synchronous or asynchronous) associated with portions of a concurrent program and enables computations to restructure themselves so as to accommodate the nature of the data being processed and to respond to changes in processing objectives. This paper overviews the Swarm model, introduces the synchronic group concept, and illustrates its use in the expression of dynamically structured programs. A UNITY-style programming logic is given for Swarm, the first axiomatic proof system for a shared dataspace language.

01 May 1991
TL;DR: This thesis presents two alternative methods which retrieve a query result in less redundant structures than a single flat relation, and demonstrates that these two methods incur far less cost than the method of retrieving a single flat relation.
Abstract: The approach of instantiating objects from relational databases through views provides an effective mechanism for building object-oriented applications on top of relational databases. However, a system built in such a framework has the overhead of interfacing between two different models--an object-oriented model and the relational model--in terms of both functionality and performance. In this thesis, we address two important problems: the outer join problem and the instantiation efficiency problem. In instantiating objects, tuples that should be retrieved from databases may be lost if we allow only inner joins. Hence it becomes necessary to evaluate certain join operations of the query by outer joins, left outer joins in particular. On the other hand, we sometimes retrieve unwanted nulls from nulls stored in databases, even if there is no null inserted during query processing. In this case, it is necessary to filter some relations with selection conditions which eliminate the tuples containing null attributes in order to prevent the retrieval of unwanted nulls. We develop a mechanism for making the system generate those left outer joins and filters as needed rather than requiring that a programmer specify it manually as part of the query for every view definition. Since the advent of relational databases, it has been universally accepted that a query result is retrieved as a single flat relation (a table). This single table concept is not useful in our framework because a client wants to retrieve object instances. Rather, a single flat relation contains data redundantly inserted just to make the query result 'flat'. These redundant data convey no extra information but only degrade the performance of the system. This fact motivated us to look into different methods which reduce the amount of data that the system must handle to instantiate objects, without diminishing the amount of information to be retrieved. In this thesis, we present two alternative methods which retrieve a query result in less redundant structures than a single flat relation. Our result demonstrates that these two methods incur far less cost than the method of retrieving a single flat relation.
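The Python snippet below only illustrates the redundancy the thesis targets: a flat join result repeats each department row once per employee, while a nested structure keeps it once. The thesis's actual retrieval structures may differ; the schema and values are invented.

flat = [  # dept_name, budget, emp_name (dept and budget repeated per employee)
    ("sales", 100000, "alice"),
    ("sales", 100000, "bob"),
    ("sales", 100000, "carol"),
]

def nest(rows):
    nested = {}
    for dept, budget, emp in rows:
        nested.setdefault((dept, budget), []).append(emp)
    return [{"dept": d, "budget": b, "emps": emps}
            for (d, b), emps in nested.items()]

print(nest(flat))
# [{'dept': 'sales', 'budget': 100000, 'emps': ['alice', 'bob', 'carol']}]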

Journal ArticleDOI
TL;DR: The RAPID-1 (relational access processor for intelligent data), an associative accelerator that recognizes tuples and logical formulas, is presented; it speeds up the database by a significant factor.
Abstract: The RAPID-1 (relational access processor for intelligent data), an associative accelerator that recognizes tuples and logical formulas, is presented. It evaluates logical formulas instantiated by the current tuple, or record, and operates on whole relations or on hashing buckets. RAPID-1 uses a reduced instruction set and hardwired control and executes all comparisons in a bit-parallel mode. It speeds up the database by a significant factor and will adapt to future generations of microprocessors. The principal design issues, data structures, instruction set, architecture, environments and performance are discussed.

Proceedings ArticleDOI
08 Apr 1991
TL;DR: To efficiently process recursive queries in a DBMS (database management system), a parallel, direct transitive closure algorithm, obtained by reorganizing the computation order of Warren's algorithm, is proposed.
Abstract: To efficiently process recursive queries in a DBMS (database management system), a parallel, direct transitive closure algorithm is proposed. Efficiency is obtained by reorganizing the computation order of Warren's algorithm. The number of transfers among processors depends only on the number of processors and does not depend on the depth of the longest path. The evaluation shows an improvement due to the parallelism and the superiority of the proposed algorithm over recent proposals. The speed of the production of new tuples is very high and the volume of transfers between the sites is reduced.
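For reference, Warren's algorithm is usually presented as a two-pass row-OR computation over a boolean adjacency matrix; the Python sketch below shows that sequential form as commonly described. The paper's contribution, the reorganization of this computation order for parallel execution, is not reproduced here.

def warren_closure(adj):
    n = len(adj)
    a = [row[:] for row in adj]
    # Pass 1: for each row i, OR in rows of predecessors j < i
    for i in range(n):
        for j in range(i):
            if a[i][j]:
                a[i] = [x or y for x, y in zip(a[i], a[j])]
    # Pass 2: for each row i, OR in rows of successors j > i
    for i in range(n):
        for j in range(i + 1, n):
            if a[i][j]:
                a[i] = [x or y for x, y in zip(a[i], a[j])]
    return a

adj = [[0, 1, 0],    # 0 -> 1
       [0, 0, 1],    # 1 -> 2
       [0, 0, 0]]
for row in warren_closure(adj):
    print(row)       # row 0 becomes [0, 1, 1]: node 0 reaches 1 and 2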

Proceedings ArticleDOI
01 Dec 1991
TL;DR: A third approach is proposed which combines the two methods in a single framework: rules are decomposed into segments, data is partitioned among the segments, and data can be partitioned and balanced dynamically at different levels.
Abstract: There are two approaches to processing Datalog programs in parallel. One is to decompose the rules of a program into concurrent modules, and then assign them to processors. The other is to partition data between processors, so that each processor evaluates the same program, but with less data. The authors propose a third approach which combines the two methods in a single framework. In this approach, rules are decomposed into segments and data is partitioned among the segments. There are a number of advantages of this approach. Most importantly, it provides good focus on processing the tuples that are relevant to queries, and allows data to be partitioned and balanced dynamically at different levels. An analytic performance study is also presented to illustrate the usefulness of the proposed approach.

Journal ArticleDOI
M. S. Verrall
TL;DR: As there is inheritance in the Service Abstraction Description Language, Services can be built upon each other, rather than from the ground up, by allowing the Abstractions to inherit one from another.
Abstract: ion Description. A description of a Service independent of the software form of its provision or request. Interaction Description. A description of how a set of Services described by different near miss (§2.2) Abstraction Descriptions can interact. Representation Description. A description of how a Service described by an Abstraction Description is provided or required in software. There are two parts of the Software Bus of particular interest: Abstraction Converter. A piece of software which converts Service Element Requests from those of one Abstraction Description to those of another, according to an Interaction Description. Representation Converter. A piece of software which converts Service Element Requests, from the rep-ion Converter. A piece of software which converts Service Element Requests from those of one Abstraction Description to those of another, according to an Interaction Description. Representation Converter. A piece of software which converts Service Element Requests, from the rep526 THE COMPUTER JOURNAL, VOL. 34, NO. 6, 1991 UNITY DOESN'T IMPLY UNIFICATION resentation given by one Representation Description to that given by another Representation Description. These are explained and exemplified in the following sections: §5.1 Abstraction Description; §5.2 Representation Description, and its relationship to Component Skeleton and Representation Converter; §5.3 Interaction Description, and its relationship to Abstraction Converter; §5.4 Component Type, and the relationships between Component Body, Installant and Executant; §5.5 Composite Component; §5.6 Tool. 5.1 Service Abstraction Description A Service Abstraction is described in a fashion that generalises over all the ways in which a Service could be requested in programming languages, as discussed in §3.2. At the abstract level the Services are described within the Abstract Data Type paradigm and thus each is encapsulated in a set of operations the Service Elements through which they are accessed by the Service Element Requests. They are expressed in the most abstract representation as a tuple of and thus appear like an ordinary local procedure call. The arguments can be values, but they cannot be items of information (i.e. the actual data entities holding the values) or Components. The values are of data types defined or constructed in the language for describing Services in the abstract; if a value is interpreted by one or more Components as a reference, this is the business of the Components and their platforms, not of the Software Bus. As there is inheritance in the Service Abstraction Description Language, Services can be built upon each other, rather than from the ground up, by allowing the Abstractions to inherit one from another. However, the verification of subtyping between Services is more complex than that for data-type definitions in third generation languages; amongst other things the verification rules consider argument direction, type and default value existence; thus they are similar to, but more extensive than, the rules for type conformance in the Emerald language. 5.1.1 Example Service Abstraction Description The following example (Fig. 3) shows selected fragments of a Service Abstraction Description, which has been chosen from the domain of project and team management. The example is given in one particular concrete syntax for a Service Abstraction Description Language in which keywords are in upper case. 
In the example domain, a project has a name and is made up of a number of teams, and a team has a name and a number of staff members who are identified by their surnames. The fragments show these descriptions: • the declarations of data types for the name of a person, project and team. The name of a team can be seen to be built upon the name of a project. • a Class called 'Teaml' which expresses a team and has only one method, which is a query. A Class is two things: it is a data type of the Service Abstraction Description Language (it is an encapsulated data type), and it is a Service. As it is an encapsulated data type, each method contains an argument of type of that Class; in this case the argument On. As it is a Service it may be offered and required by Components. The method WhatStaf f returns the staff on the team as a bag as a surname can occur more than once, this is the correct level of abstraction. • a Class called 'Team'. This inherits from 'Teaml', as shown by the braces, and adds methods to create a team and manipulate its staffing. • two Services for TeamLeading. These Services express the operations of some functionality, rather than the encapsulation of data. The duplication of Services and difference in return arguments are used in the development of this example in §5.3. The return argument in the 'B' Service express the fact that the data structure being passed in the argument is a tree a level of abstraction that a software engineer often wants to express, but which cannot be expressed in common programming languages. • similarly, a Class, ' P r o j e c t ' and a Service ' Pro jectManagement'. • a Service which expresses the functionality which can take charge of managing a project. As the example develops in §5.5, the actor ultimately behind this will be seen to be a human being rather than software. The sub-division of Services into those which express encapsulated data and those which express functionality can be compared to the data type and functional modules of some principles of software architecture, and more generally to the classification of a vast number of software engineering methods according to whether they place greater emphasis on information or process. 5.2 Service Representation Description A Service Representation is a set of rules for mapping the requests associated with one or more Service Abstractions onto the virtual machine that the Component Body operates on, this consists of some or all of the following parts • the programming language's paradigm; • the language's interpreter or run-time system; • intra-process schedulers; • operating system process execution control; • levels built on top of these. Also, the programming language in which the Component Body is written may have representation control primitives in it, e.g. the representation clause in Ada, or may be subject to some manually enforced coding rules, e.g. 'even in Fortran always encapsulate'; the descriptive power of Service Representations is being developed to cope with these. There are two uses for Service Representations: (1) Component Skeleton Generation. To assist builders of new Components, a Service Representation Description may be combined with a Service Abstraction Description to generate a Component Skeleton. (2) Representation Converter Generation. For both new and existing Components, it will be necessary to THE COMPUTER JOURNAL, VOL. 34, NO. 6, 1991 527

Journal ArticleDOI
TL;DR: It is proved that it is undecidable whether an arbitrary definition has an equivalent one-sided definition, but it is shown that equivalence to a one-sided recursion is decidable for a large subset of recursions.

Journal ArticleDOI
TL;DR: The design and simulation of a bit-sliced processor for relational database aggregation functions, which takes two tuples as input and returns two bits as output every clock cycle, are discussed.
Abstract: The design and simulation of a bit-sliced processor for relational database aggregation functions are discussed. The processor, which addresses an important, computationally expensive problem in database computers, takes two tuples as input (one bit at a time) and returns two bits as output every clock cycle. A larger aggregation unit uses a number of identical slice processors, connected according to an odd-even network topology, to achieve improved performance on a parallel pipelined processor. The data processing time is completely overlapped with the input and output of data to and from the unit. The design is independent of the tuple size, and since a bit-serial computation is used, the system requires limited interconnection.

Proceedings Article
02 Apr 1991
TL;DR: Fast join methods implemented in a relational database processor, RINDA, are described, which accelerates join operations about ten times compared with conventional software systems.
Abstract: Fast join methods implemented in a relational database processor, RINDA, are described. RINDA performs complex queries including sorts and joins with specialized hardware. Join operations by RINDA are executed in three phases: filtering phase, sorting phase and merge-join phase. In the filtering phase, unjoinable tuples are removed with hashed-bit-arrays. Remaining tuples are sorted in the sorting phase. Sorted tuples are merged and connected together in the merge-join phase. Iterating operations in the filtering and sorting phases are rapidly executed by RINDA’s specialized hardware. Especially in the filtering phase, a new multiplication-folding method is used as a hashing function to set and refer hashed-bitarrays. It strongly reduces collisions for any type and length of keys. Three kinds of join algorithms, nestedloop, single-table filtering and dual-table filtering algorithms, are dynamically selected according to the number of tuples to be joined. Performance evaluation shows RINDA accelerates join operations about ten times compared with conventional software systems.