
Showing papers on "Consistency (database systems) published in 1991"


Journal ArticleDOI
TL;DR: It is shown that the STP, which subsumes the major part of Vilain and Kautz's point algebra, can be solved in polynomial time; the applicability of path consistency algorithms as preprocessing of temporal problems is also studied, demonstrating their termination and bounding their complexities.

1,989 citations


Proceedings Article
14 Jul 1991
TL;DR: In this article, a general model for temporal reasoning, capable of handling both qualitative and quantitative information, is presented, which allows the representation and processing of all types of constraints considered in the literature so far, including metric constraints and qualitative, disjunctive constraints (specifying the relative position between temporal objects).
Abstract: This paper presents a general model for temporal reasoning, capable of handling both qualitative and quantitative information. This model allows the representation and processing of all types of constraints considered in the literature so far, including metric constraints (restricting the distance between time points) and qualitative, disjunctive constraints (specifying the relative position between temporal objects). Reasoning tasks in this unified framework are formulated as constraint satisfaction problems and are solved by traditional constraint satisfaction techniques, such as backtracking and path consistency. A new class of tractable problems is characterized, involving qualitative networks augmented by quantitative domain constraints, some of which can be solved in polynomial time using arc and path consistency.

312 citations
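The path consistency machinery mentioned in the paper above can be illustrated on its metric (STP-like) fragment. Below is a minimal sketch, assuming constraints are distance bounds lo <= x_j - x_i <= hi, where composition adds bounds and intersection tightens them; the three-point network is a made-up example, not one from the paper.

```python
# Minimal sketch of path consistency on a Simple Temporal Problem (STP):
# bounds[i][j] = (lo, hi) means lo <= x_j - x_i <= hi. The three-point
# network below is a made-up example, not one taken from the paper.

def compose(c1, c2):
    # Composition of two interval constraints: add the bounds.
    return (c1[0] + c2[0], c1[1] + c2[1])

def intersect(c1, c2):
    # Intersection of two interval constraints: tighten both ends.
    return (max(c1[0], c2[0]), min(c1[1], c2[1]))

def path_consistency(n, bounds):
    """Tighten bounds[i][j] to a fixpoint; return False on inconsistency."""
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for k in range(n):
                for j in range(n):
                    tighter = intersect(bounds[i][j],
                                        compose(bounds[i][k], bounds[k][j]))
                    if tighter[0] > tighter[1]:
                        return False          # empty interval: inconsistent network
                    if tighter != bounds[i][j]:
                        bounds[i][j] = tighter
                        changed = True
    return True

INF = float("inf")
bounds = [[(0, 0), (10, 20), (-INF, INF)],
          [(-20, -10), (0, 0), (30, 40)],
          [(-INF, INF), (-40, -30), (0, 0)]]
print(path_consistency(3, bounds))   # True; bounds[0][2] is tightened to (40, 60)
```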


Book
01 Jan 1991
TL;DR: Contents (excerpt): indexing principles and practice; natural language in information retrieval; consistency of indexing; on the indexing and abstracting of imaginative works; enhancing indexing; quality of indexing; abstracts.
Abstract: Contents: indexing principles and practice; natural language in information retrieval; consistency of indexing; on the indexing and abstracting of imaginative works; enhancing indexing; quality of indexing; abstracts - types and functions; approaches used in indexing and abstracting services; evaluation aspects; automatic indexing, abstracting and related procedures; writing the abstract; the future of indexing and abstracting services; pre-coordinate indexes; indexing exercises and abstracting exercises; part contents.

263 citations


Journal ArticleDOI
TL;DR: It is shown how an important proof method, which is a variant of Park's (1969) principle of fixpoint induction, can be used to prove the consistency of the static and the dynamic relational semantics of a small functional programming language with recursive functions.

145 citations


Journal ArticleDOI
01 Nov 1991
TL;DR: An algorithm for truth maintenance is provided that guarantees local consistency for each agent and global consistency for data shared by the agents and is shown to be complete, in the sense that if a consistent state exists, the algorithm will either find it or report failure.
Abstract: The concept of logical consistency of belief among a group of computational agents that are able to reason nonmonotonically is defined. An algorithm for truth maintenance is then provided that guarantees local consistency for each agent and global consistency for data shared by the agents. The algorithm is shown to be complete, in the sense that if a consistent state exists, the algorithm will either find it or report failure. The implications and limitations of this algorithm for cooperating agents are discussed, and several extensions are described. The algorithm has been implemented in the RAD distributed expert system shell.

123 citations


Proceedings Article
24 Aug 1991
TL;DR: It is proved that AC-5, in conjunction with node consistency, provides a decision procedure for these constraints running in time O(ed), which has an important application in Constraint Logic Programming over Finite Domains.
Abstract: Consistency techniques have been studied extensively in the past as a way of tackling Constraint Satisfaction Problems (CSP). In particular, various arc consistency algorithms have been proposed, originating from Waltz's filtering algorithm [20] and culminating in the optimal algorithm AC-4 of Mohr and Henderson [13]. AC-4 runs in O(ed^2) in the worst case, where e is the number of arcs (or constraints) and d is the size of the largest domain. Being applicable to the whole class of (binary) CSP, these algorithms do not take into account the semantics of constraints. In this paper, we present a new generic arc consistency algorithm AC-5. The algorithm is parametrised on two specified procedures and can be instantiated to reduce to AC-3 and AC-4. More importantly, AC-5 can be instantiated to produce an O(ed) algorithm for two important classes of constraints: functional and monotonic constraints. We also show that AC-5 has an important application in Constraint Logic Programming over Finite Domains [18]. The kernel of the constraint solver for such a programming language is an arc consistency algorithm for a set of basic constraints. We prove that AC-5, in conjunction with node consistency, provides a decision procedure for these constraints running in time O(ed).

89 citations
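AC-5's generic procedures are specific to the paper above, so the sketch below instead shows the classic AC-3 algorithm it generalizes, applied to a made-up "x < y" constraint over small integer domains; the domain and constraint names are illustrative assumptions, not the paper's formulation.

```python
# Illustrative sketch of AC-3, the simpler relative of AC-4/AC-5 discussed above.
# The "x < y" constraint and the integer domains are made-up examples.
from collections import deque

def revise(domains, constraint, x, y):
    """Remove values of x with no support in y; return True if anything was pruned."""
    pruned = False
    for vx in list(domains[x]):
        if not any(constraint(x, vx, y, vy) for vy in domains[y]):
            domains[x].remove(vx)
            pruned = True
    return pruned

def ac3(domains, arcs, constraint):
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraint, x, y):
            if not domains[x]:
                return False                      # a domain was wiped out: no solution
            queue.extend((z, x) for (z, w) in arcs if w == x and z != y)
    return True

domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
arcs = [("x", "y"), ("y", "x")]
less_than = lambda a, va, b, vb: va < vb if (a, b) == ("x", "y") else va > vb
print(ac3(domains, arcs, less_than), domains)    # True {'x': {1, 2}, 'y': {2, 3}}
```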


Proceedings ArticleDOI
01 Aug 1991
TL;DR: In this article, the authors introduce several implementations of delayed consistency for cache-based systems in the framework of a weakly ordered consistency model, and a performance comparison of the delayed protocols with the corre sponding On-the-Fly (non-delayed) consistency protocol is made.
Abstract: In cache-based multiprocessors, a protocol must maintain coherence among replicated copies of shared writable data. In delayed consistency protocols, the effects of outgoing and incoming invalidations or updates are delayed. Delayed coherence can reduce processor blocking time as well as the effects of false sharing. In this paper, we introduce several implementations of delayed consistency for cache-based systems in the framework of a weakly ordered consistency model. A performance comparison of the delayed protocols with the corresponding On-the-Fly (non-delayed) consistency protocol is made, through execution-driven simulations of four parallel algorithms. The results show that, for parallel programs in which false sharing is a problem, significant reductions in the data miss rate can be obtained with just a small increase in the cost and complexity of the cache system.

80 citations
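As a rough illustration of the "delayed" idea in the paper above (not one of its specific protocols), the sketch below buffers incoming invalidations and applies them only at a synchronization point, as a weakly ordered model permits; the class and method names are invented for this example.

```python
# Toy illustration of "delayed" invalidation under a weakly ordered model:
# incoming invalidations are buffered and applied only at a synchronization
# point. The Cache class and its methods are invented for this example and
# are not one of the protocols introduced in the paper.

class Cache:
    def __init__(self):
        self.lines = {}                    # address -> cached value
        self.pending_invalidations = set()

    def read(self, addr, memory):
        if addr not in self.lines:
            self.lines[addr] = memory[addr]     # miss: fetch from memory
        return self.lines[addr]

    def buffer_invalidation(self, addr):
        self.pending_invalidations.add(addr)    # delay: do not evict yet

    def synchronize(self):
        for addr in self.pending_invalidations: # acquire/release point:
            self.lines.pop(addr, None)          # apply all delayed invalidations
        self.pending_invalidations.clear()

memory = {"x": 0}
cache = Cache()
print(cache.read("x", memory))    # 0, now cached
memory["x"] = 1                   # another processor writes x ...
cache.buffer_invalidation("x")    # ... and its invalidation is buffered
print(cache.read("x", memory))    # still 0: staleness tolerated until sync
cache.synchronize()
print(cache.read("x", memory))    # 1: invalidation applied at the sync point
```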


Journal ArticleDOI
TL;DR: A norm of consistency is proposed for a mixed set of defeasible and strict sentences which, guided by a probabilistic interpretation of these sentences, establishes a clear distinction between exceptions, ambiguities and outright contradictions.

70 citations


Proceedings ArticleDOI
01 Apr 1991
TL;DR: New conditions for sequential consistency are presented, showing that sequential consistency can be maintained if all accesses in a multiprocessor can be ordered in an acyclic graph; what is required to maintain processor consistency in race-free networks is also investigated.
Abstract: Modern shared-memory multiprocessors require complex interconnection networks to provide sufficient communication bandwidth between processors. They also rely on advanced memory systems that allow multiple memory operations to be made in parallel. It is expensive to maintain a high consistency level in a machine based on a general network, but for special interconnection topologies, some of these costs can be reduced. We define and study one class of interconnection networks, race-free networks. New conditions for sequential consistency are presented which show that sequential consistency can be maintained if all accesses in a multiprocessor can be ordered in an acyclic graph. We show that this can be done in race-free networks without the need for a transaction to be globally performed before the next transaction can be issued. We also investigate what is required to maintain processor consistency in race-free networks. In a race-free network which maintains processor consistency, writes may be pipelined, and reads may bypass writes. The proposed methods reduce the latencies associated with processor write misses to shared data.

68 citations
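The condition in the paper above reduces to checking that an ordering graph over memory accesses is acyclic. A minimal sketch of that check follows, with an invented two-processor access graph standing in for the real program-order and conflict edges.

```python
# Sketch of the acyclicity test behind the sequential-consistency condition:
# if the ordering graph over all accesses has no cycle, a legal sequential
# order exists. The access graph below is a made-up two-processor example.

def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def visit(v):
        color[v] = GREY
        for w in graph.get(v, []):
            if color[w] == GREY:                # back edge found: cycle
                return True
            if color[w] == WHITE and visit(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and visit(v) for v in graph)

# Program-order and conflict edges between four accesses of two processors.
accesses = {"P1.write(x)": ["P1.read(y)", "P2.read(x)"],
            "P2.write(y)": ["P2.read(x)", "P1.read(y)"],
            "P1.read(y)": [],
            "P2.read(x)": []}
print("sequentially consistent order exists:", not has_cycle(accesses))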


Book ChapterDOI
01 Jul 1991
TL;DR: A considerable need for such methods has appeared over the last ten years in domains such as the design of asynchronous circuits, communication protocols and distributed software in general, and many different theories have been suggested for the automated analysis of distributed systems.
Abstract: Program verification is a branch of computer science whose business is "to prove program correctness". It has been studied in theoretical computer science departments for a long time but it is rarely and laboriously applied to real-world problems. As a matter of fact, we must pay much more attention to practical problems like the amount of space and time needed to perform verification. Let us recall that proofs of correctness are proofs of the relative consistency between two formal specifications: those of the program, and of the properties that the program is supposed to satisfy. Such a formal proof tries to increase the confidence that a computer system will behave correctly when executing the program under consideration. A considerable need for such methods has appeared over the last ten years in different domains, such as the design of asynchronous circuits, communication protocols and distributed software in general. Many of us accepted the challenge to design automated verification tools, and many different theories have been suggested for the automated analysis of distributed systems. There now exist elaborate methods that can verify quite subtle behaviors. A simple method for performing automated verification is symbolic execution, which is the core of most existing and planned verification systems. The practical limits of this method are the size of the state space and the time it may take to inspect all reachable states in this state space. Those quantities can rise dramatically with the problem size.

67 citations
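The state-space inspection mentioned above is, at its core, a reachability computation. The sketch below enumerates all states reachable from an initial state and checks a property over them, using a toy two-process mutual-exclusion model that is purely illustrative and not taken from the paper.

```python
# Minimal sketch of the reachability analysis mentioned above: breadth-first
# exploration of every state reachable from an initial state, then a property
# check over the whole state space. The two-process mutual-exclusion model is
# a toy example, not taken from the paper.
from collections import deque

def reachable_states(initial, successors):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def successors(state):
    p1, p2 = state                     # each in {"idle", "trying", "critical"}
    out = []
    if p1 == "idle":
        out.append(("trying", p2))
    elif p1 == "trying" and p2 != "critical":
        out.append(("critical", p2))
    elif p1 == "critical":
        out.append(("idle", p2))
    if p2 == "idle":
        out.append((p1, "trying"))
    elif p2 == "trying" and p1 != "critical":
        out.append((p1, "critical"))
    elif p2 == "critical":
        out.append((p1, "idle"))
    return out

states = reachable_states(("idle", "idle"), successors)
# 8 reachable states, and mutual exclusion holds in all of them (prints: 8 True).
print(len(states), all(s != ("critical", "critical") for s in states))
```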


Proceedings Article
03 Sep 1991
TL;DR: This paper presents the formalism underlying ACTA that is necessary to prove the visibility, consistency, recovery, and permanence properties of transactions in extended models, and shows how it can be applied to the traditional, nested, and split transaction models.
Abstract: Several extensions to the transaction model adopted in traditional database systems have been proposed in order to support the functional and performance requirements of emerging advanced applications such as design environments. In [6], we introduced a comprehensive transaction framework, called ACTA, to specify the effects of extended transactions on each other and on objects in the database, and to reason about the properties of extended transactions. This paper presents the formalism underlying ACTA that is necessary to prove the visibility, consistency, recovery, and permanence properties of transactions in the extended models. In this paper we show how the formalism can be used to specify and reason about the properties of traditional, nested, and split transaction models.

Patent
23 May 1991
TL;DR: In this article, a method and apparatus for performing reconfiguration of a cellular network is provided, where cell parameters of affected mobile switching centers in the network are copied to a database, and the copied parameters are stored.
Abstract: A method and apparatus for performing reconfiguration of a cellular network is provided. Cell parameters of affected mobile switching centers in the network are copied to a database, and the copied parameters are stored. A set of proposed changes to the stored parameters is prepared, and the consistency of the prepared set of proposed changes is verified. Any necessary alterations to the set of proposed changes are made in response to the verification, and the verified set of proposed changes is copied to the affected mobile switching centers. The verified set of proposed changes is then introduced into the network. Additionally, at all times, an up-to-date image of all the cell parameters in all the mobile switching centers in the network is maintained in a system parameter database.
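A rough sketch of the staged verify-before-apply cycle the patent describes: snapshot the parameters, stage the proposed changes, verify their consistency, and only then introduce them. The consistency rule shown (no duplicate channel per cell) and all names are assumptions made for the example, not the patent's actual checks.

```python
# Rough sketch of the staged verify-before-apply cycle described in the patent:
# snapshot the live parameters, stage the proposed changes, verify consistency,
# and only then introduce the changes. The consistency rule (no duplicate
# channel per cell) and all names are assumptions made for the example.

def reconfigure(live_parameters, proposed_changes, check):
    staged = dict(live_parameters)              # copy of the affected parameters
    staged.update(proposed_changes)             # apply the proposal to the copy
    problems = check(staged)
    if problems:
        return False, problems                  # caller must alter the proposal
    live_parameters.update(proposed_changes)    # introduce changes into the network
    return True, []

def no_duplicate_channels(params):
    channels = list(params.values())
    return [] if len(channels) == len(set(channels)) else ["duplicate channel assignment"]

live = {"cell_A": 11, "cell_B": 12}
ok, errors = reconfigure(live, {"cell_B": 11}, no_duplicate_channels)
print(ok, errors, live)   # False ['duplicate channel assignment'] {'cell_A': 11, 'cell_B': 12}
```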

Journal ArticleDOI
TL;DR: A method of transforming the production rules into a numerical Petri net (NPN) model is proposed that allows the verification of the correctness, consistency, and completeness of the knowledge base.
Abstract: A major difficulty that occurs in the construction of large production rule-based expert systems is maintaining the correctness, consistency, and completeness of the knowledge base. A method of transforming the production rules into a numerical Petri net (NPN) model is proposed. These NPNs are high-level nets that are necessary to effectively model production rules. The net model is then analysed by using a computer-aided tool to perform reachability analysis. An algorithm is given to generate the reachability set of the nets. This allows the verification of the correctness, consistency, and completeness of the knowledge base. Examples showing the use of this approach are given.
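For illustration, the sketch below generates the reachability set of a small ordinary place/transition net in which two production rules are encoded as transitions; it is a simplification under the assumption of an ordinary net, not the numerical Petri nets used in the paper, and the rules are made up.

```python
# Illustrative reachability-set generation for a small ordinary place/transition
# net in which two rules are encoded as transitions (r1: A -> B, r2: B -> C).
# This is a simplification of the numerical Petri nets used in the paper.

def enabled(marking, transition):
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

def reachability_set(initial, transitions):
    seen, frontier = {frozenset(initial.items())}, [initial]
    while frontier:
        marking = frontier.pop()
        for t in transitions:
            if enabled(marking, t):
                nxt = fire(marking, t)
                key = frozenset(nxt.items())
                if key not in seen:
                    seen.add(key)
                    frontier.append(nxt)
    return seen

transitions = [{"in": {"A": 1}, "out": {"B": 1}},   # rule r1: A -> B
               {"in": {"B": 1}, "out": {"C": 1}}]   # rule r2: B -> C
print(len(reachability_set({"A": 1}, transitions)))  # 3 reachable markings
```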

ReportDOI
01 Nov 1991
TL;DR: This paper provides a protocol for maintaining the consistency of replicated data in a replicated multilevel secure database architecture; the protocol is secure, since it is free of covert channels, ensures one-copy serializability of executing transactions, and can be implemented with untrusted processes for both concurrency and recovery.
Abstract: Replicated architecture has been proposed as a way to obtain acceptable performance in a multilevel secure database system. This architecture contains a separate database for each security level such that each contains replicated data from lower security classes. The consistency of the values of replicated data items must be maintained without unnecessarily interfering with concurrency of database operations. This paper provides a protocol to do this that is secure, since it is free of covert channels, and also ensures one-copy serializability of executing transactions. The protocol can be implemented with untrusted processes for both concurrency and recovery.

Book ChapterDOI
15 Jul 1991
TL;DR: This work presents a tool called ICC that ensures structural consistency when an object-oriented database system is updated, which is important for schema evolution and updates.
Abstract: Schema evolution is an important facility in object-oriented databases. However, updates should not result in inconsistencies either in the schema or in the database. We show a tool called ICC, which ensures the structural consistency when updating an object-oriented database system.

01 Dec 1991
TL;DR: This paper develops a taxonomy of various correctness criteria that focus on database consistency requirements and transaction correctness properties, from the viewpoint of the different dimensions of these two, and applies a uniform specification technique based on ACTA to express the various criteria.
Abstract: Whereas serializability captures database consistency requirements and transaction correctness properties via a single notion, recent research has attempted to come up with correctness criteria that view these two types of requirements independently. The search for more flexible correctness criteria is partly motivated by the introduction of new transaction models that extend the traditional atomic transaction model. These extensions came about because the atomic transaction model in conjunction with serializability is found to be very constraining when applied in advanced applications, such as design databases, that function in distributed, cooperative, and heterogeneous environments. In this paper, we develop a taxonomy of various correctness criteria that focus on database consistency requirements and transaction correctness properties from the viewpoint of what the different dimensions of these two are. This taxonomy allows us to categorize correctness criteria that have been proposed in the literature. To help in this categorization, we have applied a uniform specification technique, based on ACTA, to express the various criteria. Such a categorization helps shed light on the similarities and differences between different criteria and place them in perspective.

Journal ArticleDOI
TL;DR: The objective of this article is to provide a formal basis for the DFD (data flow diagram); a number of completeness criteria are discussed and formalized.

Proceedings ArticleDOI
07 Apr 1991
TL;DR: The author introduces a multidatabase recoverability requirement and describes a recovery mechanism that takes advantage of the local recovery in the participating database systems by minimizing the replication of recovery tasks.
Abstract: To support global transactions in a multidatabase environment, one must coordinate the activities of multiple database management systems that were designed for independent, stand-alone operation. The autonomy and heterogeneity of these systems present a major impediment to the direct adaptation of transaction management mechanisms developed for distributed database systems. This paper addresses the problems in multidatabase recovery. Most solutions proposed to provide multidatabase recovery either allow incorrect results or place severe restrictions on global and local transactions. To assure that multidatabase recovery preserves the consistency of a multidatabase system, the author introduces a multidatabase recoverability requirement. He also describes a recovery mechanism that takes advantage of the local recovery in the participating database systems by minimizing the replication of recovery tasks.

01 Jan 1991
TL;DR: The paper proposes the mapping of the externally (user-oriented) representation of business rules onto an Object-Oriented executable specification, which has the effect that validating the captured rules can be facilitated through the use of three levels of abstraction, namely model meta-knowledge, application knowledge and extensional knowledge.
Abstract: This paper argues that substantial benefits can be accrued from the explicit modelling of business rules and the alignment of the business knowledge to an information system. To this end, the paper introduces a conceptual modelling language for the capturing and representation of business rules, incorporating aspects such as time modelling and complex objects. Alongside the need for expressive power in conceptual modelling formalisms, this paper argues that one of the major challenges in the task of explicitly representing business rules is the ability of the chosen paradigm to provide facilities for clarification and consistency checking of the captured knowledge. The paper proposes the mapping of the externally (user-oriented) representation of business rules onto an Object-Oriented executable specification. This has the effect that validating the captured rules can be facilitated through the use of three levels of abstraction, namely model meta-knowledge, application knowledge and extensional knowledge.

Journal ArticleDOI
TL;DR: A new method is proposed for enabling information sharing in loosely-coupled, socially-organized systems, typically involving personal rather than institutional computers and lacking the network infrastructure that is generally taken for granted in distributed computing.
Abstract: While most schemes that support information sharing on computers rely on formal protocols, in practice much cooperative work takes place using informal means of communication, even chance encounters. This paper proposes a new method of enabling information sharing in loosely-coupled socially-organized systems, typically involving personal rather than institutional computers and lacking the network infrastructure that is generally taken for granted in distributed computing. It is based on the idea of arranging for information transmission to take place as an unobtrusive side-effect of interpersonal communication. Update conflicts are avoided by an information ownership scheme. Under mild assumptions, we show how the distributed database satisfies the property of observational consistency. The new idea, called “Liveware”, is not so much a specific piece of technology as a fresh perspective on information sharing that stimulates new ways of solving old problems. Being general, it transcends particular distribution technologies. A prototype database, implemented in HyperCard and taking the form of an electronic directory, utilizes the medium of floppy disk to spread information in a (benign!) virus-like manner.
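The ownership-based conflict avoidance described above can be sketched as follows: each record has a single owner, only the owner may update it, and a merge between two copies of the directory keeps the newest version of each record. The field names and version counter below are assumptions for the example, not Liveware's actual data layout.

```python
# Sketch of the ownership rule behind conflict avoidance here: each record has
# a single owner, only the owner may update it, and a merge between two copies
# keeps the newest version of each record. Field names and the version counter
# are assumptions for this example, not Liveware's actual data layout.

def update(db, owner, key, value):
    record = db.get(key)
    if record and record["owner"] != owner:
        raise PermissionError("only the owner may modify this record")
    version = record["version"] + 1 if record else 1
    db[key] = {"owner": owner, "version": version, "value": value}

def merge(db_a, db_b):
    """Exchange records; conflicts cannot arise because each key has one writer."""
    for key in set(db_a) | set(db_b):
        candidates = [r for r in (db_a.get(key), db_b.get(key)) if r]
        newest = max(candidates, key=lambda r: r["version"])
        db_a[key] = db_b[key] = newest

alice, bob = {}, {}
update(alice, "alice", "alice/phone", "555-0101")
update(bob, "bob", "bob/phone", "555-0202")
merge(alice, bob)                            # a chance encounter spreads the data
update(alice, "alice", "alice/phone", "555-0303")
merge(alice, bob)
print(bob["alice/phone"]["value"])           # 555-0303
```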

01 May 1991
TL;DR: A multidatabase recoverability condition is introduced as the minimal requirement that can assure that multidatabase recovery preserves the consistency of a multidatabase system.
Abstract: The concept of Multidatabase Systems (MDBS) was introduced to support applications that access data stored in multiple databases, controlled by autonomous and possibly heterogeneous Local Database Systems (LDBSs). The autonomy and the heterogeneity of the LDBSs cause several new problems in multidatabase transaction management that do not exist in other distributed database systems. The primary difficulty in enforcing global serializability in a multidatabase environment is due to the fact that transaction execution and serialization orders can be different. To determine the serialization order of global transactions the MDBS must take into account conflicts caused by local transactions. However, due to the autonomy of the LDBSs such information is not available. A serious problem in multidatabase recovery is that MDBS recovery actions constitute new transactions. This complication redefines the recoverability requirements in an MDBS. Most solutions proposed to deal with the above problems either allow incorrect results or place severe restrictions on global and local transactions. In the global serializability area the thesis contributes in several aspects. We first define a subclass of LDBSs which produce rigorous schedules, where the execution order of transactions determines their serialization order. We also propose a family of concurrency control methods and prove that they guarantee global serializability without violating the autonomy of the local systems. The ticketing method assumes only local serializability, while its refinements take advantage of the possible rigorousness of the LDBSs. In the area of multidatabase recovery, we introduce a multidatabase recoverability condition as the minimal requirement that can assure that multidatabase recovery preserves the consistency of a multidatabase system. We also describe a recovery mechanism that takes advantage of the local recovery in the LDBSs by minimizing the replication of recovery tasks. Another contribution of this thesis is the introduction of a new correctness criterion that is based on time. We show that chronological correctness captures temporal transaction dependencies in addition to other time-independent conflicts among transactions. We also propose a chronological scheduler which relaxes transaction isolation, takes into account transaction duration and allows implicit commitment of transactions.

01 Jan 1991
TL;DR: This work shows how a weaker mutual consistency requirement, called eventual consistency, can be satisfied; the approach requires writing only the local copy of each interdependent data item and hence preserves autonomy by not requiring synchronization of the local concurrency controllers.
Abstract: Interdependent data are characterized by dependency constraints and mutual consistency requirements. Maintaining consistency of interdependent data that are managed by heterogeneous and autonomous DBMSs is a real problem faced in many practical computing environments. Supporting a mutual consistency criterion that is weaker than one-copy serializability is often acceptable if better performance can be achieved and the autonomy of DBMSs is not sacrificed. Updates to interdependent data have to be controlled in order to maintain their consistency. We propose a solution where at least one of the copies of each interdependent data item, called the current copy, is kept up-to-date in the system. Using the concept of update through the current copy, we show how a weaker mutual consistency requirement, called eventual consistency, can be satisfied. The proposed approach requires writing only the local copy (as opposed to writing many or all copies) and hence preserves autonomy by not requiring synchronization of the local concurrency controllers. This approach exploits the semantics of both the interdependent data and transactions; thus, it is non-intrusive, flexible and efficient.
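Below is a minimal sketch of update-through-the-current-copy, under the assumption of one designated current copy per item and lazy propagation to the other copies; the class and method names are invented for illustration, not taken from the paper.

```python
# Minimal sketch of "update through the current copy": one designated current
# copy is written synchronously, the other copies catch up lazily (eventual
# consistency). Class and method names are invented for illustration.

class ReplicatedItem:
    def __init__(self, sites, current_site):
        self.copies = {s: None for s in sites}
        self.current_site = current_site   # the copy kept up to date
        self.pending = []                  # asynchronous propagation queue

    def write(self, site, value):
        if site != self.current_site:
            raise ValueError("updates must go through the current copy")
        self.copies[site] = value          # only the local copy is written
        self.pending.append(value)         # other copies are refreshed lazily

    def propagate(self):
        # Runs later, without synchronizing the local concurrency controllers.
        if self.pending:
            latest = self.pending[-1]
            for s in self.copies:
                self.copies[s] = latest
            self.pending.clear()

item = ReplicatedItem(sites=["db1", "db2"], current_site="db1")
item.write("db1", "v1")
print(item.copies)    # {'db1': 'v1', 'db2': None}  -- temporarily divergent
item.propagate()
print(item.copies)    # {'db1': 'v1', 'db2': 'v1'}  -- eventually consistent
```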

Journal ArticleDOI
01 Sep 1991
TL;DR: An extension of the method that was used for building and using the network-based knowledge system SUPER is proposed to fully utilize the benefits of this approach in the domain of diagnosing distributed, dynamically evolving processes.
Abstract: An extension of the method that was used for building and using the network-based knowledge system SUPER is proposed to fully utilize the benefits of this approach in the domain of diagnosing distributed, dynamically evolving processes. In the scope of distributed sensor networks, several issues are addressed concerning problems dealing with software architectures, strategies, and properties for efficient sensor data management: (1) how to build links efficiently between the elements of each network-based knowledge base, (2) how to maintain consistency of the whole structure and manage the constraints of domain dependent variables corresponding to sensor data, (3) how to manage and update efficiently the database at run time in order to maintain its consistency and to satisfy a high level of response time performance, and (4) how to propagate the solutions given by a qualitative knowledge base into a knowledge base utilizing sensor data whenever the sensors are out of order. The answers given are based on extending and generalizing the principles that have been defined for SUPER.

Journal ArticleDOI
TL;DR: A different reference theory is proposed, based on a program transformation that, given any program, transforms it into a strict one, together with the usual notion of program completion; it is a reasonable reference theory for discussing program semantics and completeness results.
Abstract: The paper presents a new approach to the problem of completeness of the SLDNF-resolution. We propose a different reference theory that we call strict completion. This new concept of completion (comp*(P)) is based on a program transformation that, given any program, transforms it into a strict one (with the same computational behaviour), and the usual notion of program completion. We consider it a reasonable reference theory to discuss program semantics and completeness results. The standard 2-valued logic is used. The new comp*(P) is always consistent, and the completeness of all allowed programs and goals w.r.t. comp*(P) is proved.

Journal ArticleDOI
TL;DR: The architecture and design of a collaborative computer-aided software engineering (CASE) environment, called C-CASE, are presented; C-CASE can be used to assist users in defining the requirements of their organization and information systems, as well as to analyze the consistency and completeness of the requirements.
Abstract: Defining systems requirements and specifications is a collaborative effort among managers, users, and systems developers. The difficulty of systems definition is caused by humans' limited cognitive capabilities, which is compounded by the complexity of group communication and coordination processes. Current system analysis methodologies are first evaluated with regard to the level of support they provide to users. Since systems definition is a knowledge-intensive activity, the knowledge contents and structures employed in systems definition are discussed. For any large-scale system, no one person possesses all the knowledge that is needed; therefore, the authors propose a collaborative approach to systems definition. The use of a group decision support system (GDSS) for systems definition is first described and limitations of the current GDSS are identified. The architecture and design of a collaborative computer-aided software engineering (CASE) environment, called C-CASE, is then discussed. C-CASE can be used to assist users in defining the requirements of their organization and information systems as well as to analyze the consistency and completeness of the requirements. C-CASE integrates GDSS and CASE such that users can actively participate in the requirements elicitation process. Users can use the metasystem capability of C-CASE to define domain-specific systems definition languages, which are adaptable to different systems development settings. An example of using C-CASE in a collaborative environment is given. The implications of C-CASE and the authors' ongoing research are also discussed.

Journal ArticleDOI
TL;DR: A general portable simulation tool called FLEXSIM, designed to evaluate certain classes of flexible manufacturing systems, is presented; it achieves independence between the data model, which represents the whole FMS, and the simulation model itself by using a relational database management system.

Book ChapterDOI
16 Dec 1991
TL;DR: It turns out that using functional dependencies it is possible to resolve potential ambiguities in several practical cases, and precomputations can be performed at definition time to execute update requests more efficiently.
Abstract: We study the problem of updating intensional relations in the framework of deductive databases on which integrity constraints (specifically functional dependencies) are defined. First, a formalization of a model-theoretic semantics of updates is provided: the notions of representability, consistency and determinism are introduced to characterize the various cases. Then, a proof-theoretic approach, based on a variant of resolution integrated with the chase procedure, is defined, showing that the method exactly captures the above notions. It turns out that using functional dependencies it is possible to resolve potential ambiguities in several practical cases. Also, precomputations can be performed at definition time to execute update requests more efficiently.
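A tiny illustration of how a functional dependency can disambiguate an update: among the candidate ways of realizing an insertion, only those whose resulting relation still satisfies the dependency survive. The relation, the dependency, and the data below are made up, and this sketch does not use the paper's chase-based proof procedure.

```python
# Tiny illustration of how a functional dependency can disambiguate an update:
# among the candidate ways of realizing an insertion, only those whose resulting
# relation still satisfies the dependency survive. The relation, the dependency
# dept -> mgr, and the data are made up; this is not the paper's chase procedure.

def satisfies_fd(relation, lhs, rhs):
    """Check that the attributes in `lhs` functionally determine attribute `rhs`."""
    seen = {}
    for row in relation:
        key = tuple(row[a] for a in lhs)
        if key in seen and seen[key] != row[rhs]:
            return False
        seen[key] = row[rhs]
    return True

base = [{"emp": "ann", "dept": "db", "mgr": "joe"}]
fd = (["dept"], "mgr")   # dept -> mgr

# Two candidate ways to make "bob works in dept db" true in the updated relation:
candidates = [base + [{"emp": "bob", "dept": "db", "mgr": "joe"}],
              base + [{"emp": "bob", "dept": "db", "mgr": "sue"}]]
valid = [r for r in candidates if satisfies_fd(r, *fd)]
print(len(valid))   # 1: the dependency rules out the ambiguous second alternative
```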

Book ChapterDOI
15 Jul 1991
TL;DR: The design of an exception handling mechanism for Guide, an object-oriented language based on a distributed system, is described, and a specific tool to maintain the consistency of objects in the face of exceptions is provided.
Abstract: This paper describes the design of an exception handling mechanism for Guide, an object-oriented language based on a distributed system. We confront the usual exception-handling techniques with the object formalism, and we propose conformance rules and an original association scheme. A specific tool to maintain the consistency of objects in the face of exceptions is provided. System and hardware exceptions are integrated into the mechanism, and parallelism is handled in an original manner. Some details of the implementation are given.
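The idea of keeping objects consistent in the face of exceptions can be sketched generically (this is not the Guide mechanism itself): snapshot an object's state before a method runs and restore it if the method raises, so the object is never left half-updated.

```python
# Generic sketch of keeping an object consistent in the face of exceptions
# (not the Guide mechanism itself): snapshot the state before a method runs
# and restore it if the method raises, so the object is never left half-updated.
import copy

def consistent(method):
    def wrapper(self, *args, **kwargs):
        snapshot = copy.deepcopy(self.__dict__)
        try:
            return method(self, *args, **kwargs)
        except Exception:
            self.__dict__ = snapshot     # roll back to the last consistent state
            raise
    return wrapper

class Account:
    def __init__(self, balance):
        self.balance = balance

    @consistent
    def withdraw(self, amount):
        self.balance -= amount
        if self.balance < 0:
            raise ValueError("overdraft")   # exception raised mid-update

account = Account(100)
try:
    account.withdraw(150)
except ValueError:
    pass
print(account.balance)   # 100: the partial update was undone
```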

Journal ArticleDOI
TL;DR: The application of the transformational paradigm to the specification and design phases is proposed, and requirements are expressed using the ADISSA method, a transaction-oriented refinement of structured systems analysis.
Abstract: In conventional information systems development, consistency between requirements specifications and design is achieved by manual checking. The application of the transformational paradigm to the specification and design phases is proposed. Requirements are expressed in the ADISSA notation, using the ADISSA method, a transaction-oriented refinement of structured systems analysis. The control part of a transaction is transformed into a formal specification, the FSM (finite state machine) transaction, by applying a set of rules. The design stage is realized by an algorithm which decomposes the FSM transaction into simpler transactions and implements them with a hierarchical set of finite-state machines. Consistency between the formal specification and the result of the design is achieved by proving that the latter has the same behavior as the former.

Patent
08 Aug 1991
TL;DR: In this paper, a cache lock, a pending lock, and an out-of-date lock are added to a two-lock concurrency control system to maintain the consistency of cached data in a clientserver database system.
Abstract: A method of maintaining the consistency of cached data in a client-server database system. Three new locks--a cache lock, a pending lock and an out-of-date lock--are added to a two-lock concurrency control system. A new long-running envelope transaction (69) holds a cache lock (45) on each object cached by a given client. A working transaction of the client works only with the cached object until commit time. If a second client's working transaction acquires an "X" lock on the object (167) the cache lock is changed to a pending lock (51); if the transaction thereafter commits (171) the pending lock is changed to an out-of-date lock (47). If the first client's working transaction thereafter attempts to commit, it waits for a pending lock to change (67); it aborts if it encounters an out-of-date lock (49); and otherwise it commits (61).