Showing papers in "IEEE Transactions on Knowledge and Data Engineering in 1995"


Journal Article•DOI•
TL;DR: Using an analogy between product manufacturing and data manufacturing, this paper develops a framework for analyzing data quality research, and uses it as the basis for organizing the data quality literature.
Abstract: Organizational databases are pervaded with data of poor quality. However, there has not been an analysis of the data quality literature that provides an overall understanding of the state-of-the-art research in this area. Using an analogy between product manufacturing and data manufacturing, this paper develops a framework for analyzing data quality research, and uses it as the basis for organizing the data quality literature. This framework consists of seven elements: management responsibilities, operation and assurance costs, research and development, production, distribution, personnel management, and legal function. The analysis reveals that most research efforts focus on operation and assurance costs, research and development, and production of data products. Unexplored research topics and unresolved issues are identified, and directions for future research are provided.

694 citations


Journal Article•DOI•
TL;DR: In this article, the authors present a survey of the state-of-the-art work in temporal and real-time data models, and evaluate temporal query languages along several dimensions.
Abstract: A temporal database contains time-varying data. In a real-time database, transactions have deadlines or timing constraints. In this paper we review the substantial research in these two previously separate areas. First we characterize the time domain; then we investigate temporal and real-time data models. We evaluate temporal and real-time query languages along several dimensions. We examine temporal and real-time DBMS implementation. Finally, we summarize major research accomplishments to date and list several unanswered research questions.

516 citations


Journal Article•DOI•
Alexander Borgida
TL;DR: This work indicates how one can achieve enhanced access to data and knowledge by using descriptions in languages for schema design and integration, queries, answers, updates, rules, and constraints.
Abstract: Description logics and reasoners, which are descendants of the KL-ONE language, have been studied in depth in artificial intelligence. After a brief introduction, we survey their application to the problems of information management, using the framework of an abstract information server equipped with several operations, each involving one or more languages. Specifically, we indicate how one can achieve enhanced access to data and knowledge by using descriptions in languages for schema design and integration, queries, answers, updates, rules, and constraints.

321 citations


Journal Article•DOI•
TL;DR: This work formally expresses the intertransaction conflicts that are recognized by ESR and through them defines ESR, analogous to the manner in which conflict-based serializability is defined.
Abstract: Epsilon serializability (ESR) is a generalization of classic serializability (SR). In this paper, we provide a precise characterization of ESR when queries that may view inconsistent data run concurrently with consistent update transactions. Our first goal is to understand the behavior of queries in the presence of conflicts and to show how ESR is in fact a generalization of SR. To this end, using the ACTA framework, we formally express the intertransaction conflicts that are recognized by ESR and through them define ESR, analogous to the manner in which conflict-based serializability is defined. Second, expressions are derived for the amount of inconsistency (in a data item) viewed by a query and its effects on the results of the query. These inconsistencies arise from concurrent updates allowed by ESR. Third, in order to keep the inconsistencies within the bounds associated with each query, the expressions are used to determine the preconditions that operations have to satisfy. The results of a query, and the errors in it, depend on what the query does with the (possibly inconsistent) data it views. One of the important byproducts of this work is the identification of different types of queries which lend themselves to an analysis of the effects of data inconsistency on the results of the query.
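As a rough sketch of the bounded-inconsistency idea (notation ours, not the paper's): if a query Q with inconsistency bound $\epsilon_Q$ reads a data item x while concurrent updates $u_1, \ldots, u_k$ modify it, a precondition of the form

$$\sum_{i=1}^{k} \lvert \Delta_{u_i}(x) \rvert \;\le\; \epsilon_Q$$

must hold before the reads are permitted, where $\Delta_{u_i}(x)$ is the change that update $u_i$ applies to x. Setting $\epsilon_Q = 0$ disallows all concurrent conflicting updates and recovers classic serializability for Q.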

140 citations


Journal Article•DOI•
TL;DR: The PALKA (Parallel Automatic Linguistic Knowledge Acquisition) system is presented, which acquires linguistic patterns from a set of domain-specific training texts and their desired outputs; a specialized representation of patterns called FP structures has been defined.
Abstract: The paper presents an automatic acquisition of linguistic patterns that can be used for knowledge-based information extraction from texts. In knowledge-based information extraction, linguistic patterns play a central role in the recognition and classification of input texts. Although the knowledge-based approach has proven effective for information extraction in limited domains, there are difficulties in constructing the large number of domain-specific linguistic patterns required. Manual creation of patterns is time consuming and error prone, even for a small application domain. To solve the scalability and portability problems, an automatic acquisition of patterns must be provided. We present the PALKA (Parallel Automatic Linguistic Knowledge Acquisition) system that acquires linguistic patterns from a set of domain-specific training texts and their desired outputs. A specialized representation of patterns called FP structures has been defined. Patterns are constructed in the form of FP structures from training texts, and the acquired patterns are tuned further through the generalization of semantic constraints. An inductive learning mechanism is applied in the generalization step. The PALKA system has been used to generate patterns for our information extraction system developed for the fourth Message Understanding Conference (MUC-4).

139 citations


Journal Article•DOI•
TL;DR: In this paper, an original language for the symbolic representation of the contents of image sequences is presented, referred to as spatio-temporal logic, which allows for treatment and operation of content structures at a higher level than pixels or image features.
Abstract: The emergence of advanced multimedia applications is emphasizing the relevance of retrieval by content within databases of images and image sequences. Matching the inherent visuality of the information stored in such databases, visual specification by example provides an effective and natural way to express content-oriented queries. To support this querying approach, the system must be able to interpret example scenes reproducing the contents of images and sequences to be retrieved, and to match them against the actual contents of the database. To accomplish this task without direct access to raw image data, the system must be provided with an appropriate description language supporting the representation of the contents of pictorial data. An original language for the symbolic representation of the contents of image sequences is presented. This language, referred to as spatio-temporal logic, comprises a framework for the qualitative representation of the contents of image sequences, which allows content structures to be treated and operated on at a higher level than pixels or image features. Organization and operation principles of a prototype system exploiting spatio-temporal logic to support querying by example through visual iconic interaction are expounded.

139 citations


Journal Article•DOI•
TL;DR: An algorithm for answering queries using yes-no dialogs is developed, and it is proved that secure query processing using yes-no dialogs is NP-complete; the algorithm is maximally cooperative to the user in the sense that lying is resorted to only when absolutely necessary.
Abstract: We develop a formal logical foundation for secure deductive databases. This logical foundation is based on an extended logic involving several modal operators. We develop two models of interaction between the user and the database called "yes-no" dialogs and "yes-no-don't know" dialogs. Both dialog frameworks allow the database to lie to the user. We develop an algorithm for answering queries using yes-no dialogs and prove that secure query processing using yes-no dialogs is NP-complete. Consequently, the degree of computational intractability of query processing with yes-no dialogs is no worse than for ordinary databases. Furthermore, the algorithm is maximally cooperative to the user in the sense that lying is resorted to only when absolutely necessary. For Horn databases, we show that secure query processing can be achieved in linear time; hence, it is no more intractable than in ordinary databases. Finally, we identify necessary and sufficient conditions for the database to be able to preserve security. Similar results are also obtained for yes-no-don't know dialogs.

119 citations


Journal Article•DOI•
TL;DR: This work evaluates the performance of two well-known classes of concurrency control algorithms that handle multiversion data, the two-phase locking and the optimistic algorithms, as well as the rate-monotonic and earliest-deadline-first scheduling algorithms.
Abstract: We study the performance of concurrency control algorithms in maintaining temporal consistency of shared data in hard real-time systems. In our model, a hard real-time system consists of periodic tasks which are either write-only, read-only, or update transactions. Transactions may share data. Data objects are temporally inconsistent when their ages and dispersions are greater than the absolute and relative thresholds allowed by the application. Real-time transactions must read temporally consistent data in order to deliver correct results. Based on this model, we have evaluated the performance of two well-known classes of concurrency control algorithms that handle multiversion data, the two-phase locking and the optimistic algorithms, as well as the rate-monotonic and earliest-deadline-first scheduling algorithms. The effects of using the priority inheritance and stack-based protocols with lock-based concurrency control are also studied.
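The temporal consistency conditions in the abstract can be stated compactly (our notation): a data object x with timestamp $ts(x)$ is absolutely consistent at time t if

$$t - ts(x) \;\le\; A_x,$$

and a set R of data objects read together is relatively consistent if

$$\max_{x,\,y \in R} \lvert ts(x) - ts(y) \rvert \;\le\; D_R,$$

where $A_x$ is the age threshold and $D_R$ the dispersion threshold allowed by the application.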

114 citations


Journal Article•DOI•
TL;DR: G-Log is introduced, a declarative query language based on graphs, which combines the expressive power of logic, the modeling power of complex objects with identity and the representation power of graphs, and is found to be the only nondeterministic and computationally complete language that does not suffer from the copy-elimination problem.
Abstract: We introduce G-Log, a declarative query language based on graphs, which combines the expressive power of logic, the modeling power of complex objects with identity, and the representation power of graphs. G-Log is a nondeterministic complete query language, and thus allows the expression of a large variety of queries. We compare G-Log to well-known deductive database languages, and find that it is the only nondeterministic and computationally complete language that does not suffer from the copy-elimination problem. G-Log may be used in a totally declarative way, as well as in a "more procedural" way. Thus, it provides an intuitive, flexible graph-based formalism for nonexpert database users.

113 citations


Journal Article•DOI•
TL;DR: A new conceptual clustering method is introduced which addresses the problem of clustering large numbers of structured objects, and the conditions under which the method is applicable are discussed.
Abstract: An important structuring mechanism for knowledge bases is building an inheritance hierarchy of classes based on the content of their knowledge objects. This hierarchy facilitates group-related processing tasks such as answering set queries, discriminating between objects, finding similarities among objects, etc. Building this hierarchy is a difficult task for the knowledge engineer. Conceptual clustering may be used to automate or assist the engineer in the creation of such a classification structure. This article introduces a new conceptual clustering method which addresses the problem of clustering large numbers of structured objects. The conditions under which the method is applicable are discussed.

105 citations


Journal Article•DOI•
TL;DR: A comprehensive mathematical modeling approach for distributed database design that considers network communication, local processing, and data storage costs is developed and a genetic algorithm is developed to solve this mathematical formulation.
Abstract: The allocation of data and operations to nodes in a computer communications network is a critical issue in distributed database design. An efficient distributed database design must trade off performance and cost among retrieval and update activities at the various nodes. It must consider the concurrency control mechanism used as well as capacity constraints at nodes and on links in the network. It must determine where data will be allocated, the degree of data replication, which copy of the data will be used for each retrieval activity, and where operations such as select, project, join, and union will be performed. We develop a comprehensive mathematical modeling approach for this problem. The approach first generates units of data (file fragments) to be allocated from a logical data model representation and a characterization of retrieval and update activities. Retrieval and update activities are then decomposed into relational operations on these fragments. Both fragments and operations on them are then allocated to nodes using a mathematical modeling approach. The mathematical model considers network communication, local processing, and data storage costs. A genetic algorithm is developed to solve this mathematical formulation.
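To give a flavor of the solution technique, the following is a minimal genetic-algorithm skeleton for the fragment-allocation subproblem. The cost model, parameters, and names are illustrative assumptions; the paper's formulation is far richer, covering replication, operation allocation, and capacity constraints.

```python
import random

N_FRAGMENTS, N_NODES = 8, 3
random.seed(0)

# Toy inputs (assumed for illustration): storage[f][n] is the cost of
# storing fragment f at node n; traffic holds (fragment, accessing node,
# volume) triples for retrieval/update activities.
storage = [[random.randint(1, 5) for _ in range(N_NODES)]
           for _ in range(N_FRAGMENTS)]
traffic = [(f, random.randrange(N_NODES), random.randint(1, 10))
           for f in range(N_FRAGMENTS)]

def cost(assign):
    """Toy objective: storage cost plus remote-access communication."""
    c = sum(storage[f][assign[f]] for f in range(N_FRAGMENTS))
    c += sum(t for (f, node, t) in traffic if assign[f] != node)
    return c

def evolve(pop_size=30, gens=200):
    """Chromosome = node assignment per fragment; elitist GA search."""
    pop = [[random.randrange(N_NODES) for _ in range(N_FRAGMENTS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                 # cheapest allocations first
        pop = pop[:pop_size // 2]          # keep the fitter half
        while len(pop) < pop_size:
            p, q = random.sample(pop[:10], 2)
            cut = random.randrange(1, N_FRAGMENTS)
            child = p[:cut] + q[cut:]      # one-point crossover
            if random.random() < 0.1:      # occasional point mutation
                child[random.randrange(N_FRAGMENTS)] = random.randrange(N_NODES)
            pop.append(child)
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```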

Journal Article•DOI•
TL;DR: This paper proposes two languages, called Future Temporal Logic (FTL) and Past Temporal Logic (PTL), for specifying temporal triggers, and presents algorithms for processing the trigger conditions specified in these languages, namely, procedures for determining when the trigger conditions are satisfied.
Abstract: In this paper we propose two languages, called Future Temporal Logic (FTL) and Past Temporal Logic (PTL), for specifying temporal triggers. Some examples of trigger conditions that can be specified in our language are the following: "The value of a certain attribute increases by more than 10% in 10 minutes," "A tuple that satisfies a certain predicate is added to the database at least 10 minutes before another tuple, satisfying a different condition, is added to the database." Such triggers are important for monitoring and control applications. In addition to the languages, we present algorithms for processing the trigger conditions specified in these languages, namely, procedures for determining when the trigger conditions are satisfied. These methods can be added as a "temporal" component to existing database management systems. A preliminary prototype of the temporal component that uses the FTL language has been built on top of Sybase running on SUN workstations.
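For intuition, the first example condition might be rendered in a bounded future temporal logic roughly as follows; the syntax is hypothetical, not necessarily the paper's FTL:

$$\exists v \, \big( \mathit{val}(a) = v \;\wedge\; \Diamond_{\le 10\,\mathrm{min}} \, \mathit{val}(a) > 1.1\, v \big)$$

read as: attribute a currently has some value v, and at some point within the next 10 minutes its value exceeds v by more than 10%.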

Journal Article•DOI•
TL;DR: A novel unified approach for integrating explicit knowledge and learning by example in recurrent networks is proposed, which is accomplished by using a technique based on linear programming, instead of learning from random initial weights.
Abstract: We propose a novel unified approach for integrating explicit knowledge and learning by example in recurrent networks. The explicit knowledge is represented by automaton rules, which are directly injected into the connections of a network. This can be accomplished by using a technique based on linear programming, instead of learning from random initial weights. Learning is conceived as a refinement process and is mainly responsible for uncertain information management. We present preliminary results for problems of automatic speech recognition.

Journal Article•DOI•
TL;DR: A new algorithm called Priority Adaptation Query Resource Scheduling (PAQRS) is introduced and evaluated for handling both single-class and multiclass query workloads; a series of experiments confirms that PAQRS is very effective for real-time query scheduling.
Abstract: In recent years, a demand for real-time systems that can manipulate large amounts of shared data has led to the emergence of real-time database systems (RTDBS) as a research area. This paper focuses on the problem of scheduling queries in RTDBSs. We introduce and evaluate a new algorithm called Priority Adaptation Query Resource Scheduling (PAQRS) for handling both single-class and multiclass query workloads. The performance objective of the algorithm is to minimize the number of missed deadlines, while at the same time ensuring that any deadline misses are scattered across the different classes according to an administratively defined miss distribution. This objective is achieved by dynamically adapting the system's admission, memory allocation, and priority assignment policies according to its current resource configuration and workload characteristics. A series of experiments confirms that PAQRS is very effective for real-time query scheduling.

Journal Article•DOI•
TL;DR: A prototype implementation of a technique to compute the well-founded model of a logic program is described, and it is shown that this technique is more efficient than the standard alternating fixpoint computation.
Abstract: Though the semantics of nonmonotonic logic programming has been studied extensively, relatively little work has been done on operational aspects of these semantics. In this paper, we develop techniques to compute the well-founded model of a logic program. We describe a prototype implementation and show, based on experimental results, that our technique is more efficient than the standard alternating fixpoint computation. Subsequently, we develop techniques to compute the set of all stable models of a deductive database. These techniques first compute the well-founded semantics and then use an intelligent branch and bound strategy to compute the stable models. We report on our implementation, as well as on experiments that we have conducted on the efficiency of our approach.
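For reference, the standard alternating fixpoint computation, the baseline the authors report improving on, can be sketched for ground normal logic programs as follows. This is a minimal Python sketch using our own representation (rules as (head, positive body, negative body) triples), not the paper's implementation.

```python
def least_model(definite_rules):
    """Least model of a negation-free program by naive iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def gl_reduct(program, interp):
    """Gelfond-Lifschitz reduct: drop rules whose negative body meets
    interp, then strip the remaining negative literals."""
    return [(h, pos) for (h, pos, neg) in program if not (neg & interp)]

def well_founded(program):
    """Alternate under- and over-estimates until both stabilize."""
    atoms = {h for (h, _, _) in program} | \
            {a for (_, p, n) in program for a in p | n}
    true_atoms, possible = set(), atoms
    while True:
        new_true = least_model(gl_reduct(program, possible))
        new_possible = least_model(gl_reduct(program, new_true))
        if (new_true, new_possible) == (true_atoms, possible):
            return true_atoms, atoms - possible   # (true, false); rest undefined
        true_atoms, possible = new_true, new_possible

# p :- not q.   q :- not p.   r :- not r.   s.
prog = [("p", frozenset(), frozenset({"q"})),
        ("q", frozenset(), frozenset({"p"})),
        ("r", frozenset(), frozenset({"r"})),
        ("s", frozenset(), frozenset())]
print(well_founded(prog))   # s is true; p, q, r are undefined
```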

Journal Article•DOI•
Elisa Bertino, P. Foscoli
TL;DR: An indexing technique providing support for queries involving complex, nested objects and inheritance hierarchies is presented and is compared with two techniques obtained from more traditional organizations.
Abstract: We present an indexing technique providing support for queries involving complex, nested objects and inheritance hierarchies. This technique is compared with two techniques obtained from more traditional organizations. The three techniques are evaluated using an analytical cost model. The discussion is cast in the framework of object-oriented databases. However, results are applicable to data management systems characterized by features such as complex objects and inheritance hierarchies.

Journal Article•DOI•
TL;DR: Atlas is a nested relational database system that has been designed for text-based applications; its query language is supported by signature file text indexing techniques and by a parser that can be configured for different text formats and even some foreign languages.
Abstract: Advanced database applications require facilities such as text indexing, image storage, and the ability to store data with a complex structure. However, these facilities are not usually included in traditional database systems. In this paper we describe Atlas, a nested relational database system that has been designed for text-based applications. The Atlas query language is TQL, an SQL-like query language with text operators. The query language is supported by signature file text indexing techniques, and by a parser that can be configured for different text formats and even some foreign languages. Atlas can also be used to store images and audio.
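To make the indexing side concrete, here is a minimal sketch of signature-file text indexing via superimposed coding, the general technique named in the abstract. The signature width, hash choice, and function names are illustrative assumptions, not Atlas's actual code.

```python
import hashlib

SIG_BITS = 64       # width of each signature (assumed)
BITS_PER_WORD = 3   # bits set per indexed word (assumed)

def word_signature(word):
    """Superimposed coding: set a few pseudo-random bits per word."""
    sig = 0
    for i in range(BITS_PER_WORD):
        digest = hashlib.md5(f"{word}:{i}".encode()).digest()
        sig |= 1 << (int.from_bytes(digest[:4], "big") % SIG_BITS)
    return sig

def doc_signature(text):
    """OR together the signatures of every word in the document."""
    sig = 0
    for word in text.lower().split():
        sig |= word_signature(word)
    return sig

def may_contain(doc_sig, word):
    """Signature test: never a false negative, occasionally a false
    positive, so matches must still be verified against the text."""
    w = word_signature(word)
    return doc_sig & w == w

docs = ["nested relational database system",
        "signature file text indexing"]
sigs = [doc_signature(d) for d in docs]
print([may_contain(s, "text") for s in sigs])   # likely [False, True]
```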

Journal Article•DOI•
TL;DR: The aim of this paper is to show that ISP, although NP-hard, can in practice be solved effectively through well-designed algorithms, including an exact branch-and-bound algorithm based on the linear programming relaxation of the model.
Abstract: The index selection problem (ISP) is an important optimization problem in the physical design of databases. The aim of this paper is to show that ISP, although NP-hard, can in practice be solved effectively through well-designed algorithms. We formulate ISP as a 0-1 integer linear program and describe an exact branch-and-bound algorithm based on the linear programming relaxation of the model. The performance of the algorithm is enhanced by means of procedures to reduce the size of the candidate index set. We also describe heuristic algorithms based on the solution of a suitably defined knapsack subproblem and on Lagrangian decomposition. Computational results on several classes of test problems are then given. We report the exact solution of large-scale ISP instances involving several hundred indexes and queries. We also evaluate one of the proposed heuristic algorithms on very large-scale instances involving several thousand indexes and queries, and show that it consistently produces very tight approximate (and sometimes provably optimal) solutions. Finally, we discuss possible extensions and future directions of research.
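A 0-1 formulation of the general shape described (our rendering; the paper's exact model may differ) uses binary variables $x_j$ (index j is built) and $y_{ij}$ (query i uses index j):

$$\min \;\sum_{j} f_j\, x_j \;-\; \sum_{i}\sum_{j} g_{ij}\, y_{ij}$$

subject to

$$\sum_{j} y_{ij} \le 1 \;\; \forall i, \qquad y_{ij} \le x_j \;\; \forall i, j, \qquad x_j,\, y_{ij} \in \{0, 1\},$$

where $f_j$ is the maintenance and storage cost of index j and $g_{ij}$ is the saving obtained when query i is answered using index j. A branch-and-bound algorithm of the kind described bounds this objective by relaxing the integrality constraints to $0 \le x_j, y_{ij} \le 1$.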

Journal Article•DOI•
TL;DR: This work demonstrates that the hypernode model is a natural candidate for formalising hypertext, and shows how to bridge the gap between graph based and set based data models, and at what computational cost this can be done.
Abstract: Currently, database researchers are investigating new data models in order to remedy the deficiencies of the flat relational model when applied to nonbusiness applications. Herein we concentrate on a recent graph-based data model called the hypernode model. The single underlying data structure of this model is the hypernode, which is a digraph with a unique defining label. We present in detail the three components of the model, namely its data structure, the hypernode; its query and update language, called HNQL; and its provision for enforcing integrity constraints. We first demonstrate that this data model is a natural candidate for formalising hypertext. We then compare it with other graph-based data models and with set-based data models. We also investigate the expressive power of HNQL. Finally, using the hypernode model as a paradigm for graph-based data modelling, we show how to bridge the gap between graph-based and set-based data models, and at what computational cost this can be done.
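As a toy illustration of the data structure (an assumed encoding, not HNQL syntax): each hypernode is a uniquely labeled digraph whose nodes are either primitive values or labels of other hypernodes, which is what lets hypernodes nest.

```python
# label -> (set of nodes, set of directed edges over those nodes);
# nodes that are themselves labels reference other hypernodes.
db = {
    "PERSON_1": ({"name", "alice", "address", "ADDR_1"},
                 {("name", "alice"), ("address", "ADDR_1")}),
    "ADDR_1": ({"city", "London"},
               {("city", "London")}),
}

def nested_labels(db, label, seen=None):
    """Collect the hypernodes reachable from a label by following
    node references (cycles are allowed, hence the seen set)."""
    seen = seen if seen is not None else set()
    if label in seen or label not in db:
        return seen
    seen.add(label)
    nodes, _edges = db[label]
    for n in nodes:
        nested_labels(db, n, seen)
    return seen

print(nested_labels(db, "PERSON_1"))   # {'PERSON_1', 'ADDR_1'}
```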

Journal Article•DOI•
TL;DR: This work describes techniques for processing security constraints in a distributed environment during query, update, and database design operations.
Abstract: In a multilevel secure distributed database management system, users cleared at different security levels access and share a distributed database consisting of data at different sensitivity levels. One approach to assigning sensitivity levels, also called security levels, to data utilizes constraints or classification rules. Security constraints provide an effective classification policy. They can be used to assign security levels to the data based on content, context, and time. We extend our previous work on security constraint processing in a centralized multilevel secure database management system by describing techniques for processing security constraints in a distributed environment during query, update, and database design operations.

Journal Article•DOI•
TL;DR: The paper is concerned with the optimal compression of propositional Horn production rule bases, one of the most important kinds of knowledge base used in practice, and develops a procedure for recognizing in quadratic time the quasi-acyclicity of a function given by a Horn CNF.
Abstract: Horn knowledge bases are widely used in many applications. The paper is concerned with the optimal compression of propositional Horn production rule bases, one of the most important kinds of knowledge base used in practice. The problem of knowledge compression is interpreted as a problem of Boolean function minimization. It was proved by P.L. Hammer and A. Kogan (1993) that the minimization of Horn functions, i.e., Boolean functions associated with Horn knowledge bases, is NP-complete. The paper deals with the minimization of quasi-acyclic Horn functions, the class of which properly includes the two practically significant classes of quadratic and of acyclic functions. A procedure is developed for recognizing in quadratic time the quasi-acyclicity of a function given by a Horn CNF, and a graph-based algorithm is proposed for the quadratic-time minimization of quasi-acyclic Horn functions.
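For orientation (our example, not the paper's): a Horn CNF has at most one positive literal per clause, so each clause reads as a rule, and minimization seeks an equivalent Horn expression with fewer clauses or literals. For instance,

$$(\bar a \vee \bar b \vee c)\,(\bar c \vee d)\,(\bar a \vee \bar b \vee d)$$

encodes the rules $a \wedge b \to c$, $c \to d$, and $a \wedge b \to d$; the third clause is redundant because forward chaining through c already yields it, so a minimal equivalent Horn CNF drops it.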

Journal Article•DOI•
TL;DR: The utility of a methodology developed for evaluating the reliability of real-time systems incorporating AI planning programs is demonstrated by applying it to the reliability evaluation of two AI planning algorithms embedded in a real- time multicriteria route finding system.
Abstract: We define the reliability of a real-time system incorporating AI planning programs as the probability that, for each problem-solving request issued from the environment, the embedded system can successfully plan and execute a response within a specified real-time deadline. A methodology is developed for evaluating the reliability of such systems, taking into consideration the fact that, beyond program bugs, the intrinsic characteristics of AI planning programs may also cause the embedded system to fail even after all software bugs are removed from the program. The utility of the methodology is demonstrated by applying it to the reliability evaluation of two AI planning algorithms embedded in a real-time multicriteria route finding system.

Journal Article•DOI•
TL;DR: The performance study indicates that a naive approach to parallel hash join is not able to provide tangible savings; however, carefully designed strategies can offer substantial improvement over conventional techniques for a wide range of skew conditions.
Abstract: The shared-nothing multiprocessor architecture is known to be more scalable in supporting very large databases. Compared to other join strategies, a hash-based join algorithm is particularly efficient and easily parallelized for this computation model. However, this hardware structure is very sensitive to skew in the tuple distribution. Unless the parallel hash join algorithm includes some dynamic load balancing mechanism, the skew effect can severely degrade system performance. In this paper, we investigate this issue. In particular, three parallel hash join algorithms are presented. We implement a simulator to study the effectiveness of these schemes. The simulation model is validated by comparing the simulation results to those produced by an actual implementation of the algorithms running on a multiprocessor system. Our performance study indicates that a naive approach is not able to provide tangible savings. However, carefully designed strategies can offer substantial improvement over conventional techniques for a wide range of skew conditions.
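The skew sensitivity is easy to see in a toy model (illustrative Python, not the paper's algorithms): tuples are routed to nodes by hashing the join key, so a heavily repeated key value sends all of its tuples to a single node no matter how many nodes are available.

```python
def hash_partition(tuples, key, n_nodes):
    """Route each tuple to a node by hashing its join-key attribute."""
    buckets = [[] for _ in range(n_nodes)]
    for t in tuples:
        buckets[hash(t[key]) % n_nodes].append(t)
    return buckets

# Skewed relation: 90% of the tuples share one join-key value.
R = ([{"k": 1, "v": i} for i in range(900)] +
     [{"k": 100 + i, "v": i} for i in range(100)])
loads = [len(b) for b in hash_partition(R, "k", 4)]
print(loads)   # one bucket holds 900+ tuples; that node is the bottleneck
```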

Journal Article•DOI•
TL;DR: Rules are developed that guarantee correctness of the target queries, where correctness means that the target query is equivalent to the source query, together with rules that guarantee a minimum number of target queries in cases when one source query needs to be translated to multiple target queries.
Abstract: In a heterogeneous database system, a query for one type of database system (i.e., a source query) may have to be translated to an equivalent query (or queries) for execution in a different type of database system (i.e., a target query). Usually, for a given source query, there is more than one possible target query translation. Some of them can be executed more efficiently than others by the receiving database system. Developing a translation procedure for each type of database system is time-consuming and expensive. We abstract a generic hierarchical database system (GHDBS) which has properties common to database systems whose schemas contain hierarchical structures (e.g., System 2000, IMS, and some object-oriented database systems). We develop principles of query translation with GHDBS as the receiving database system. Translation into any specific system can be accomplished by a translation into the general system with refinements to reflect the characteristics of the specific system. We develop rules that guarantee correctness of the target queries, where correctness means that the target query is equivalent to the source query. We also provide rules that can guarantee a minimum number of target queries in cases when one source query needs to be translated to multiple target queries. Since the minimum number of target queries implies the minimum number of times the underlying system is invoked, efficiency is taken into consideration.

Journal Article•DOI•
TL;DR: This paper shows how reasoning by cases and reasoning with the law of the excluded middle may be captured, and develops a declarative and operational semantics for knowledge bases that are possibly inconsistent.
Abstract: Databases and knowledge bases can be inconsistent in many ways. For example, during the construction of an expert system, we may consult many different experts. Each expert may provide us with a group of rules and facts which are self-consistent. However, when we coalesce the facts and rules provided by these different experts, inconsistency may arise. Alternatively, knowledge bases may be inconsistent due to the presence of erroneous information. Thus, a framework for reasoning about knowledge bases that contain inconsistent information is necessary. However, existing frameworks for reasoning with inconsistency do not support reasoning by cases or reasoning with the law of the excluded middle ("everything is either true or false"). In this paper, we show how reasoning by cases and reasoning with the law of the excluded middle may be captured. We develop a declarative and operational semantics for knowledge bases that are possibly inconsistent. We compare and contrast our work with work on explicit and nonmonotonic modes of negation in logic programs, and suggest under what circumstances one framework may be preferred over another.

Journal Article•DOI•
TL;DR: In this paper, the authors propose a general architecture for implementing temporal integrity constraints by compiling them into a set of active DBMS rules, which are optimized to reduce the space overhead introduced by the integrity checking mechanism.
Abstract: The paper proposes a general architecture for implementing temporal integrity constraints by compiling them into a set of active DBMS rules. The modularity of the design allows easy adaptation to different environments. Both differences in the specification languages and in the target rule systems can be easily accommodated. The advantages of this architecture are demonstrated on a particular temporal constraint compiler. This compiler allows automatic translation of integrity constraints formulated in Past Temporal Logic into rules of an active DBMS (in the current version of the compiler, two active DBMSs are supported: Starburst and INGRES). During the compilation, the set of constraints is checked for the safe evaluation property. The result is a set of SQL statements that includes all the rules needed for enforcing the original constraints. The rules are optimized to reduce the space overhead introduced by the integrity checking mechanism. There is no need for an additional runtime constraint monitor. When the rules are activated, all updates to the database that violate any of the constraints are automatically rejected (i.e., the corresponding transaction is aborted). In addition to its straightforward implementation, this approach offers a clean separation of application programs and the integrity checking code.

Journal Article•DOI•
TL;DR: This paper proposes a different strategy for general Datalog programs that is based on the partitioning of data rather than of rule instantiations; experimental results show that the strategy has many promising features.
Abstract: Parallel bottom-up evaluation provides an alternative for the efficient evaluation of logic programs. Existing parallel evaluation strategies are neither effective nor efficient in determining the data to be transmitted among processors. In this paper, we propose a different strategy, for general Datalog programs, that is based on the partitioning of data rather than that of rule instantiations. The partition and processing schemes defined in this paper are more general than those in existing strategies. A parallel evaluation algorithm is given based on semi-naive bottom-up evaluation. A notion of potential usefulness is recognized as a data transmission criterion to reduce, both effectively and efficiently, the amount of data transmitted. Heuristics and algorithms are proposed for designing the partition and processing schemes for a given program. Results from an experiment show that the strategy proposed in this paper has many promising features.
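As background for the evaluation strategy being parallelized, here is a compact sketch of sequential semi-naive bottom-up evaluation on the classic transitive-closure program (illustrative Python; the paper's contribution is how to partition this work across processors).

```python
def semi_naive_tc(edges):
    """path(X,Y) :- edge(X,Y).   path(X,Y) :- path(X,Z), edge(Z,Y)."""
    path = set(edges)
    delta = set(edges)            # facts derived in the latest round
    while delta:
        # Join only the *new* facts with edge, avoiding rederivation.
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - path        # keep only genuinely new facts
        path |= delta
    return path

print(sorted(semi_naive_tc({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```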

Journal Article•DOI•
TL;DR: It is proved that the semantics is probabilistic and reduces to the usual fixpoint semantics of stratified Datalog if all information is certain.
Abstract: We define a new fixpoint semantics for rule-based reasoning in the presence of weighted information. The semantics is illustrated on a real world application requiring such reasoning. Optimizations and approximations of the semantics are shown so as to make the semantics amenable to very large scale real world applications. We finally prove that the semantics is probabilistic and reduces to the usual fixpoint semantics of stratified Datalog if all information is certain. We implemented various knowledge discovery systems which automatically generate such probabilistic decision rules. In collaboration with a bank in Hong Kong we use one such system to forecast currency exchange rates.

Journal Article•DOI•
D.D. Straube, M.T. Ozsu
TL;DR: This work defines the interface to an object manager whose operations are the executable elements of query execution plans, and presents two algorithms for generating such plans, the second of which enumerates all possible execution plans and presents them in an efficient, compact representation.
Abstract: The generation of execution plans for object-oriented database queries is a new and challenging area of study. Unlike the relational case, a common set of object algebra operators has not been defined. Similarly, a standardized object manager interface analogous to the storage manager interface of relational subsystems does not exist. We define the interface to an object manager whose operations are the executable elements of query execution plans. Parameters to the object manager interface are streams of tuples of object identifiers. The object manager can apply methods and simple predicates to the objects identified in a tuple. Two algorithms for generating such execution plans for queries expressed in an object algebra are presented. The first algorithm runs quickly but may produce inefficient plans. The second algorithm enumerates all possible execution plans and presents them in an efficient, compact representation.

Journal Article•DOI•
TL;DR: CASE-DB is a real-time, single-user, relational prototype DBMS that permits the specification of strict time constraints for relational algebra queries and controls the risk of overspending the time quota at each step using a risk control technique.
Abstract: CASE-DB is a real-time, single-user, relational prototype DBMS that permits the specification of strict time constraints for relational algebra queries. Given a time-constrained nonaggregate relational algebra query and a "fragment chain" for each relation involved in the query, CASE-DB initially obtains a response to a modified version of the query and then uses an "iterative query evaluation" technique to successively improve and evaluate the modified version of the query. CASE-DB controls the risk of overspending the time quota at each step using a "risk control technique".