
Showing papers on "Consistency (database systems) published in 1988"


Journal ArticleDOI
TL;DR: Several randomized algorithms for distributing updates and driving the replicas toward consistency are described, solving long-standing problems of high traffic and database inconsistency.
Abstract: When a database is replicated at many sites, maintaining mutual consistency among the sites in the face of updates is a significant problem. This paper describes several randomized algorithms for distributing updates and driving the replicas toward consistency. The algorithms are very simple and require few guarantees from the underlying communication system, yet they ensure that the effect of every update is eventually reflected in all replicas. The cost and performance of the algorithms are tuned by choosing appropriate distributions in the randomization step. The algorithms are closely analogous to epidemics, and the epidemiology literature aids in understanding their behavior. One of the algorithms has been implemented in the Clearinghouse servers of the Xerox Corporate Internet, solving long-standing problems of high traffic and database inconsistency.
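To make the epidemic idea concrete, here is a minimal sketch of one randomized propagation round over in-memory replicas. It only illustrates the anti-entropy style of exchange the abstract alludes to, not the paper's tuned algorithms; the function name, the round structure, and the set-of-update-identifiers representation are assumptions.

```python
import random

def anti_entropy_round(replicas):
    """One randomized round in the spirit of the epidemic algorithms described
    above: every site picks a random partner and the pair exchange updates so
    that both end up with the union of what they knew (a simplified
    anti-entropy step; rumor mongering and tuned partner-selection
    distributions are omitted)."""
    for site in replicas:
        partner = random.choice([r for r in replicas if r is not site])
        merged = site | partner          # union of the two update sets
        site.clear(); site.update(merged)
        partner.clear(); partner.update(merged)

# Three replicas holding sets of update identifiers.
replicas = [{"u1"}, {"u2"}, set()]
for _ in range(3):
    anti_entropy_round(replicas)
print(replicas)   # with high probability all replicas now hold {"u1", "u2"}
```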

721 citations


Journal ArticleDOI
TL;DR: It is shown how complex update programs can be built from primitive update operators and how view update programs are translated into database update programs; it is also shown that consistent views have a number of interesting properties with respect to the concurrency of (high-level) update transactions.
Abstract: The problem of translating view updates to database updates is considered. Both databases and views are modeled as data abstractions. A data abstraction consists of a set of states and of a set of primitive update operators representing state transition functions. It is shown how complex update programs can be built from primitive update operators and how view update programs are translated into database update programs. Special attention is paid to a class of views that we call “consistent.” Loosely speaking, a consistent view is a view with the following property: If the effect of a view update program on a view state is determined, then the effect of the corresponding database update is unambiguously determined. Thus, in order to know how to translate a given view update into a database update, it is sufficient to be aware of a functional specification of such a program. We show that consistent views have a number of interesting properties with respect to the concurrency of (high-level) update transactions. Moreover we show that the class of consistent views includes as a subset the class of views that translate updates under maintenance of a constant complement. However, we show that there exist consistent views that do not translate under constant complement. The results of Bancilhon and Spyratos [6] are generalized in order to capture the update semantics of the entire class of consistent views. In particular we show that the class of consistent views is obtained if we relax the requirement of a constant complement by allowing the complement to decrease according to a suitable partial order.

163 citations


Proceedings Article
Johan de Kleer1
21 Aug 1988
TL;DR: This paper presents an alternative approach based on negated assumptions which integrates simply and cleanly into existing ATMS algorithms and which does not require the use of a hyperresolution rule to ensure label consistency.
Abstract: Assumption-based truth maintenance systems have become a powerful and widely used tool in Artificial Intelligence problem solvers. The basic ATMS is restricted to accepting only horn clause justifications. Although various generalizations have been made and proposed to allow an ATMS to handle more general clauses, they have all involved the addition of complex and difficult to integrate hyperresolution rules. This paper presents an alternative approach based on negated assumptions which integrates simply and cleanly into existing ATMS algorithms and which does not require the use of a hyperresolution rule to ensure label consistency.

75 citations


Journal ArticleDOI
TL;DR: This paper illustrates the use of Ada's abstraction facilities—notably, operator overloading and type parameterization—to define an oft-requested feature: a way to attribute units of measure to variables and values.
Abstract: This paper illustrates the use of Ada's abstraction facilities—notably, operator overloading and type parameterization—to define an oft-requested feature: a way to attribute units of measure to variables and values. The definition given allows the programmer to specify units of measure for variables, constants, and parameters; checks uses of these entities for dimensional consistency; allows arithmetic between them, where legal; and provides scale conversions between commensurate units. It is not constrained to a particular system of measurement (such as the metric or English systems). Although the definition is in standard Ada and requires nothing special of the compiler, certain reasonable design choices in the compiler, discussed here at some length, can make its implementation particularly efficient.
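The same idea can be illustrated outside Ada. The sketch below is a rough Python analogue, not the Ada package the paper defines: dimension exponents are carried with each value and checked on arithmetic, though only at run time rather than through Ada's overloading and type parameterization; the class and parameter names are assumptions.

```python
class Quantity:
    """Value tagged with dimension exponents (metres, seconds, kilograms).
    A run-time illustration of dimensional consistency checking, not the
    statically checked Ada definition described in the abstract."""
    def __init__(self, value, m=0, s=0, kg=0):
        self.value, self.dim = value, (m, s, kg)

    def __add__(self, other):
        if self.dim != other.dim:
            raise TypeError("dimension mismatch")   # inconsistent units
        return Quantity(self.value + other.value, *self.dim)

    def __mul__(self, other):
        dim = tuple(a + b for a, b in zip(self.dim, other.dim))
        return Quantity(self.value * other.value, *dim)

    def __repr__(self):
        return f"Quantity({self.value}, dim={self.dim})"

length = Quantity(3.0, m=1)
duration = Quantity(2.0, s=1)
speed = length * Quantity(0.5, s=-1)     # fine: metres per second
# length + duration                      # would raise TypeError: dimension mismatch
```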

58 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present analytical methods for consistent modeling of control structures for multivariable processes, using the continuous distillation process to illustrate the methods; the basic ideas, and more specifically the transformations and consistency relations derived, are valid in general.
Abstract: Rigorous analytical methods for consistent modeling of control structures for multivariable processes are presented. The continuous distillation process is used to illustrate the methods, but the basic ideas, and more specifically the transformations and consistency relations derived, are valid in general. If steady-state operating data and the process gains of an arbitrary control structure are known, it is possible to calculate the process gains of any feasible control structure. A general expression relating the process gains of different control structures is derived. In general, the process gains must also satisfy certain consistency relationships which can be derived from first principles, e.g., steady-state material balances. The usefulness of the results is illustrated by control structure transformations and reconciliation of process gains, by an application to process dynamics, by synthesis of noninteracting control loops, and by derivation of analytical relationships useful in relative gain analysis.

50 citations


Journal ArticleDOI
TL;DR: This paper shows how to embed the incomplete database and the incoming information in the language of mathematical logic, explains the semantics of the update operators, and discusses the algorithms that implement these operators.
Abstract: Suppose one wishes to construct, use, and maintain a database of facts about the real world, even though the state of that world is only partially known. In the artificial intelligence domain, this problem arises when an agent has a base set of beliefs that reflect partial knowledge about the world, and then tries to incorporate new, possibly contradictory knowledge into this set of beliefs. In the database domain, one facet of this situation is the well-known null values problem. We choose to represent such a database as a logical theory, and view the models of the theory as representing possible states of the world that are consistent with all known information. How can new information be incorporated into the database? For example, given the new information that “b or c is true,” how can one get rid of all outdated information about b and c, add the new information, and yet in the process not disturb any other information in the database? In current-day database management systems, the difficult and tedious burden of determining exactly what to add and remove from the database is placed on the user. The goal of our research was to relieve users of that burden, by equipping the database management system with update algorithms that can automatically determine what to add and remove from the database. Under our approach, new information about the state of the world is input to the database management system as a well-formed formula that the state of the world is now known to satisfy. We have constructed database update algorithms to interpret this update formula and incorporate the new information represented by the formula into the database without further assistance from the user. In this paper we show how to embed the incomplete database and the incoming information in the language of mathematical logic, explain the semantics of our update operators, and discuss the algorithms that implement these operators.
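One way to picture the update semantics described above is to treat the incomplete database as a set of possible worlds and apply the update to each world. The propositional sketch below only illustrates the "forget b and c, then impose b or c" behavior of the example; the authors' algorithms work on first order theories, and the function names and encoding here are assumptions.

```python
from itertools import product

def update(worlds, atoms_mentioned, satisfies):
    """Update a set of possible worlds (dicts atom -> bool) with a formula:
    forget what was known about the atoms the formula mentions, then keep
    exactly those re-assignments that satisfy it, leaving all other atoms
    untouched. `satisfies` is a predicate over a single world."""
    updated = []
    for world in worlds:
        for values in product([True, False], repeat=len(atoms_mentioned)):
            candidate = dict(world)
            candidate.update(zip(atoms_mentioned, values))
            if satisfies(candidate) and candidate not in updated:
                updated.append(candidate)
    return updated

# Incomplete database represented by one possible world: a true, b and c false.
db = [{"a": True, "b": False, "c": False}]

# New information: "b or c is true".
new_db = update(db, ["b", "c"], lambda w: w["b"] or w["c"])
for w in new_db:
    print(w)   # a stays True in every resulting world; b and c range over b ∨ c
```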

48 citations


Book ChapterDOI
Bernd Owsnicki-Klewe1
19 Sep 1988
TL;DR: This paper describes how a configuration task may be considered a problem of maintaining global consistency within a knowledge base and the mechanism embedded in MESON, a KL-ONE descendant, developed at PHILIPS Research Laboratories, Hamburg.
Abstract: This paper describes how a configuration task may be considered a problem of maintaining global consistency within a knowledge base. A consistency maintenance process should a) be able to deduce the consequences of new knowledge and b) detect logical contradictions on the basis of these inferences. Usually, this is done best by “read-time inferences”, i.e. inferences drawn immediately on arrival of new information. This leads to a configuration system that does not contain heuristic rules but just one inference mechanism responsible for processing the user’s input. The mechanism is embedded in MESON, a KL-ONE descendant, developed at PHILIPS Research Laboratories, Hamburg.

40 citations


Journal ArticleDOI
01 Mar 1988
TL;DR: This work proposes techniques to increase the availability in a partitioned real-time database and suggests that a transaction may execute even when the most up-to-date information is not available or when a serializable execution cannot be guaranteed.
Abstract: One of the issues in distributed databases is to maintain the data consistency when a database is replicated for higher availability. In a real-time database system, availability may be more important than consistency since a result must be produced before a deadline. We propose techniques to increase the availability in a partitioned real-time database. We also suggest that a transaction may execute even when the most up-to-date information is not available or when a serializable execution cannot be guaranteed. As long as data integrity is maintained, serializable execution may not be necessary.

35 citations


Journal ArticleDOI
TL;DR: A framework is proposed for the structured specification and verification of database dynamics that is a many sorted first order linear tense theory whose proper axioms specify the update and the triggering behaviour of the database.
Abstract: A framework is proposed for the structured specification and verification of database dynamics. In this framework, the conceptual model of a database is a many sorted first order linear tense theory whose proper axioms specify the update and the triggering behaviour of the database. The use of conceptual modelling approaches for structuring such a theory is analysed. Semantic primitives based on the notions of event and process are adopted for modelling the dynamic aspects. Events are used to model both atomic database operations and communication actions (input/output). Nonatomic operations to be performed on the database (transactions) are modelled by processes in terms of trigger/reaction patterns of behaviour. The correctness of the specification is verified by proving that the desired requirements on the evolution of the database are theorems of the conceptual model. Besides the traditional data integrity constraints, requirements of the form “Under condition W, it is guaranteed that the database operation Z will be successfully performed” are also considered. Such liveness requirements have been ignored in the database literature, although they are essential to a complete definition of the database dynamics.

33 citations


Proceedings ArticleDOI
03 Nov 1988
TL;DR: A new model which unifies version control, configuration control and transactions in a distributed SDE is presented, and the application of the model to the Cosmos Distributed SDE is discussed.
Abstract: The increase in size and complexity of software projects over recent years has led to the need for Software Development Environments (SDEs). SDEs are intended to provide assistance in the development of large software systems involving teams of people. It is generally agreed that SDEs should be built on a distributed base. However, the distribution of computer systems introduces several problems which make it very difficult to maintain the consistency of data. To ensure that changes to data are made consistently, the concept of atomic transactions is usually adopted. However, existing transaction mechanisms are unsuitable for use in a distributed SDE. Furthermore, transactions are not the only mechanism concerned with controlling changes to the SDE database. The control of change is also a task for version and configuration control mechanisms. Traditionally, the functions of version control, configuration control and transactions have been treated as separate, unrelated issues. This paper presents a new model which unifies all three concepts in a distributed SDE. Mechanisms for supporting the new model are presented and the application of the model to the Cosmos Distributed SDE is discussed.

25 citations


Proceedings ArticleDOI
10 Oct 1988
TL;DR: The authors have proposed an imprecise result mechanism for producing partial results, which is used to implement timing error recovery in real-time database systems, and present a model of real-time systems that distinguishes the external data consistency from the internal data consistency maintained by non-real-time systems.
Abstract: In real-time database systems, a transaction may not have enough time to complete. In such cases, partial, or imprecise, results can still be produced. The authors have proposed an imprecise result mechanism for producing partial results, which is used to implement timing error recovery in real-time database systems. They also present a model of real-time systems that distinguishes the external data consistency from the internal data consistency maintained by non-real-time systems. Providing a timely response may require sacrificing internal consistency. The authors discuss three examples that have different requirements of data consistency and present algorithms for implementing them.

Journal ArticleDOI
TL;DR: A conceptual model of a distributed real time system is developed and a set of consistency constraints, concerning the time validity of real time information in distributed real time systems, is presented.

Journal ArticleDOI
TL;DR: This paper presents a novel class of special purpose processors referred to as ASOCS (adaptive self-organizing concurrent systems); intended applications include adaptive logic devices, robotics, process control, system malfunction management, and, in general, applications of logic reasoning.

Proceedings Article
01 Jan 1988
TL;DR: Presents a concept that permits long activities to be broken into several steps using short transactions, permitting an application designer to specify those assumptions needed to produce the actions that make up the activity.
Abstract: Presents a concept that permits long activities to be broken into several steps using short transactions. The scheme maintains the consistency of the data, and the correct termination of the transaction set is guaranteed. Invariant data or program structures can be passed to subsequent decision points, permitting an application designer to specify those assumptions needed to produce the actions that make up the activity. The control of the activity is not fixed on a certain node, making the execution of the activity independent of individual nodes.

Book Chapter
01 May 1988
TL;DR: This paper describes an approach to the provision of such support for three particular aspects: method support by active guidance, validation by transaction animation, and reuse of specification fragments.
Abstract: Requirements analysis is one of the most critical and difficult tasks in software engineering. The need for tool support is easily justified. This paper describes an approach to the provision of such support for three particular aspects: method support by active guidance, validation by transaction animation, and reuse of specification fragments. Method guidance is supported by a method model used to describe the sequence of method steps that should be followed. This model is directly interpreted by the tools to provide advice and reasoning. It is used in conjunction with rules used for consistency checking to provide remedial advice. The animator provides facilities for the selection and execution of a transaction to reflect the specified behavior given a particular scenario. Actions are described in terms of input-output relations. Simple rules can be specified to control the execution of actions. Facilities are provided to replay and interact with transactions. Reuse is supported by facilities for identifying candidate transactions from a reuse database. The search strategies provided include browsing in an inheritance structure, different levels of pattern matching, causal chain matching (matching of the underlying control structures), and purpose matching. Support is then provided for the allocation of the selected fragment to the target environment. The approach has been tested by implementing a prototype set of tools for the CORE method and the Analyst workstation. A major case study, the ASE (Advanced Sensor Exploitation) test environment, has been analyzed and specified using CORE, the Analyst, and the tools described above. The results of that work are described and evaluated.

Proceedings ArticleDOI
01 Mar 1988
TL;DR: In this article, the author considers the problem of how a database is queried and how its integrity is enforced, and proposes a way of answering database queries using a modal logic (called KFOPCE).
Abstract: The by now conventional perspective on databases, especially deductive databases, is that they are sets of first order sentences. As such, they can be said to be claims about the truths of some external world; the database is a symbolic representation of that world. While agreeing with this account of what a database is, I disagree with how, both in theory and practice, a database is used, specifically how it is queried and how its integrity is enforced. Virtually all approaches to database query evaluation treat queries as first order formulas, usually with free variables whose bindings resulting from the evaluation phase define the answers to the query. The sole exception to this is the work of Levesque (1981, 1984), who argues that queries should be formulas in an epistemic modal logic. Queries, in other words, should be permitted to address aspects of the external world as represented in the database, as well as aspects of the database itself, i.e. aspects of what the database knows. To take a simple example, suppose DB = p ∨ q.

Query: p (i.e. is p true in the external world?) Answer: unknown.
Query: Kp (i.e. do you know whether p is true in the external world?) Answer: no.

Levesque's modal logic (called KFOPCE) distinguishes between known and unknown individuals in the database and thus accounts for “regular” database values as well as null values. For example, if KB is {Teach(John, Math100), (∃x) Teach(x, CS100), Teach(Mary, Psych100) ∨ Teach(Sue, Psych100)}, then:

Query: (∃x) K Teach(John, x) (i.e. is there a known course which John teaches?) Answer: yes, Math100.
Query: (∃x) K Teach(x, CS100) (i.e. is there a known teacher for CS100?) Answer: no.
Query: (∃x) Teach(x, Psych100) (i.e. does anyone teach Psych100?) Answer: yes, Mary or Sue.
Query: (∃x) K Teach(x, Psych100) (i.e. is there a known teacher of Psych100?) Answer: no.

Levesque (1981, 1984) provides a semantics for his language KFOPCE; FOPCE is the first order language KFOPCE without the modal K. Levesque proposes that a database is best viewed as a set of FOPCE sentences, and that it be queried by sentences of KFOPCE. He further provides a (noneffective) way of answering database queries. Recently I have considered the concept of a static integrity constraint in the context of Levesque's KFOPCE (Reiter 1988). The conventional view of integrity constraints is that, like the database itself, they too are first order formulas (e.g. Lloyd & Topor (1985), Nicolas & Yazdanian (1978), Reiter (1984)). There are two definitions in the literature of a deductive database KB satisfying an integrity constraint IC. Definition 1, Consistency (e.g. Kowalski (1978), Sadri and Kowalski (1987)): KB satisfies IC iff KB + IC is satisfiable. Definition 2, Entailment (e.g. Lloyd and Topor (1985), Reiter (1984)): KB satisfies IC iff KB ⊨ IC.
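A rough way to see the difference between the two query forms is to treat the database as a set of possible worlds and to read K as "true in every world". The sketch below is only a toy approximation of the KFOPCE examples above (Levesque's semantics and Reiter's treatment are first order and noneffective); the world encodings, the stand-in constants T1/T2, and the helper names are assumptions.

```python
# Possible worlds consistent with the example KB: Teach(John, Math100),
# (∃x) Teach(x, CS100) with the teacher unknown, and
# Teach(Mary, Psych100) ∨ Teach(Sue, Psych100).
# Each world is a set of (teacher, course) facts.
worlds = [
    {("John", "Math100"), ("T1", "CS100"), ("Mary", "Psych100")},
    {("John", "Math100"), ("T2", "CS100"), ("Sue", "Psych100")},
]

def someone_teaches(course):
    """(∃x) Teach(x, course): answered yes only if it holds in every world."""
    return all(any(c == course for _, c in w) for w in worlds)

def known_teachers(course):
    """(∃x) K Teach(x, course): the named teachers common to all worlds."""
    return set.intersection(*[{t for t, c in w if c == course} for w in worlds])

print(someone_teaches("Psych100"))   # True: "Mary or Sue" teaches it
print(known_teachers("Psych100"))    # set(): no known teacher
print(known_teachers("Math100"))     # {'John'}: a known course John teaches
print(known_teachers("CS100"))       # set(): the CS100 teacher is a null value
```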

Proceedings ArticleDOI
23 May 1988
TL;DR: A prototype implementation of a data management system primarily designed for technical applications is discussed, to enrich the concept of consistency supported in a classical database system by a more flexible data distribution mechanism.
Abstract: A prototype implementation of a data management system primarily designed for technical applications is discussed. The system may be regarded as a superset of well-known data administration concepts like file systems and database systems, both centralized and distributed. The basic idea is to enrich the concept of consistency supported in a classical database system by a more flexible data distribution mechanism. The prototype makes it possible to validate the basic concepts; it is then extended to a more powerful version.

Journal ArticleDOI
TL;DR: An algorithm for maintaining consistency and improving the performance of databases with replicated data in distributed realtime systems is presented; the semantic information of read-only transactions is exploited for improved efficiency, and a multiversion technique is used to increase the degree of concurrency.
Abstract: Considerable research effort has been devoted to the problem of developing techniques for achieving high availability of critical data in distributed realtime systems. One approach is to use replication. Replicated data is stored redundantly at multiple sites so that it can be used even if some of the copies are not available due to failures. This paper presents an algorithm for maintaining consistency and improving the performance of databases with replicated data in distributed realtime systems. The semantic information of read-only transactions is exploited for improved efficiency, and a multiversion technique is used to increase the degree of concurrency. Related issues including version management and consistency of the states seen by transactions are discussed.
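As background for the multiversion technique the abstract mentions, here is a minimal single-site sketch of multiversion reads: a read-only transaction reads from a timestamped snapshot, so it sees a consistent state without blocking writers. It illustrates only the general multiversion idea, not the paper's replication algorithm or its use of read-only transaction semantics; the class and method names are assumptions.

```python
import itertools

class MVStore:
    """Toy multiversion store: writers install new versions stamped with a
    commit timestamp; a read-only transaction reads the latest version no
    newer than its start timestamp, so it sees a consistent snapshot."""
    def __init__(self):
        self.versions = {}                 # key -> list of (timestamp, value)
        self.clock = itertools.count(1)

    def write(self, key, value):
        ts = next(self.clock)
        self.versions.setdefault(key, []).append((ts, value))
        return ts

    def snapshot_read(self, key, start_ts):
        usable = [(ts, v) for ts, v in self.versions.get(key, []) if ts <= start_ts]
        return max(usable)[1] if usable else None

store = MVStore()
store.write("x", 1)
start_ts = 1                    # a read-only transaction starts here
store.write("x", 2)             # a concurrent writer installs a newer version
print(store.snapshot_read("x", start_ts))   # 1: the snapshot is unaffected
```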

Book ChapterDOI
31 Aug 1988
TL;DR: This method is based on focussing on the relevant parts of the database by reasoning forwards from the updates of a transaction, and using this knowledge about real or just possible implicit updates for simplifying the consistency constraints in question.
Abstract: In this paper a theoretical framework for efficiently checking the consistency of deductive databases is provided and proven to be correct. Our method is based on focussing on the relevant parts of the database by reasoning forwards from the updates of a transaction, and using this knowledge about real or merely possible implicit updates to simplify the consistency constraints in question. In contrast to the algorithms of Kowalski/Sadri and Lloyd/Topor, we are neither committed to determining the exact set of implicit updates nor to determining a fairly large superset of it by considering only the head literals of deductive rule clauses. Rather, our algorithm unifies these two approaches by allowing any of the above, or even intermediate, strategies to be chosen for any step of reasoning forwards. This flexibility makes it possible to integrate statistical data and knowledge about access paths into the checking process. Second, deductive rules are organized into a graph to avoid searching for applicable rules in the proof procedure. This graph resembles a connection graph; however, a new method of interpreting it avoids the introduction of new clauses and links.
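The coarse end of the strategy spectrum described above can be sketched in a few lines: reason forwards only at the predicate level to over-approximate the possible implicit updates, then check just the constraints that mention an affected predicate. This is an illustrative simplification, not the authors' algorithm (which works at the literal level and can mix strategies); the data representation and names are assumptions.

```python
def relevant_constraints(rules, constraints, inserted):
    """Over-approximate the implicit updates by closing the inserted facts'
    predicates under the rules, then keep only the constraints that mention a
    possibly affected predicate. Rules are (body_predicates, head_predicate);
    constraints are (name, predicates_mentioned)."""
    affected = {pred for pred, _args in inserted}
    changed = True
    while changed:                          # transitive closure over the rules
        changed = False
        for body, head in rules:
            if head not in affected and any(p in affected for p in body):
                affected.add(head)
                changed = True
    return [name for name, preds in constraints if affected & set(preds)]

rules = [({"parent"}, "ancestor"), ({"ancestor"}, "ancestor")]
constraints = [("no_self_ancestor", ["ancestor"]), ("unique_ssn", ["person"])]
print(relevant_constraints(rules, constraints, [("parent", ("ann", "bob"))]))
# ['no_self_ancestor'] -- unique_ssn need not be rechecked for this transaction
```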


Journal ArticleDOI
TL;DR: This paper presents an efficient algorithm for computing the probability of conflicts in databases where the data access distribution is arbitrary.

Proceedings ArticleDOI
03 Jan 1988
TL;DR: The availabilities of replicated data managed by these three protocols are compared using a simulation model with realistic parameters, and Dynamic Voting is found to perform better than Majority Consensus Voting for all files having more than three copies, while Lexicographic Dynamic Voting performs much better than the two other protocols for all eleven configurations under study.
Abstract: Data are often replicated in distributed systems to protect them against site failures and network malfunctions. When this is the case, an access policy must be chosen to ensure that a consistent view of the data is always presented. Voting protocols guarantee consistency of replicated data in the presence of any scenario involving non-Byzantine site failures and network partitions. While Static Majority Consensus Voting protocols use static quorums, Dynamic Voting protocols, like Dynamic Voting and Lexicographic Dynamic Voting, dynamically adjust quorums to changes in the status of the network of sites holding the copies. The availabilities of replicated data managed by these three protocols are compared using a simulation model with realistic parameters. Dynamic Voting is found to perform better than Majority Consensus Voting for all files having more than three copies while Lexicographic Dynamic Voting performs much better than the two other protocols for all eleven configurations under study.
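The contrast between the static and dynamic quorum rules can be shown in a few lines. The sketch below is a simplified illustration under assumed data structures: it omits version numbers, the bookkeeping that updates the majority partition after each operation, and the lexicographic tie-breaking that distinguishes the third protocol.

```python
def majority_quorum(up_copies, total_copies):
    """Static Majority Consensus Voting: an operation may proceed only if a
    majority of all copies is reachable."""
    return len(up_copies) > total_copies // 2

def dynamic_quorum(up_copies, last_majority_partition):
    """Dynamic Voting (simplified): quorums are taken with respect to the most
    recent majority partition rather than the full set of copies, so the
    protocol adapts as sites fail and recover."""
    return len(up_copies & last_majority_partition) > len(last_majority_partition) // 2

copies = {"A", "B", "C", "D", "E"}
up = {"A", "B"}
print(majority_quorum(up, len(copies)))      # False: only 2 of 5 copies reachable
print(dynamic_quorum(up, {"A", "B", "C"}))   # True: 2 of the last majority of 3
```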

Proceedings ArticleDOI
11 Apr 1988
TL;DR: The authors illustrate this concept with the design step positioned between the logical level of simulation for VLSI and the physical implementation of the design, where, if constraints are satisfied in the layout phase, a timing-error-free design is obtained.
Abstract: A major problem in hierarchical design is to achieve consistency of the design steps that will not require iterations and will converge to a 'reasonably good' solution. To achieve this goal, additional effort needs to be made at each level of the hierarchical top-down process to derive constraints on variables of the lower level of the hierarchy and to use these additional constraints to solve the problems of the lower levels. The authors illustrate this concept with the design step positioned between the logical level of simulation for VLSI and the physical implementation of the design. This step performs the timing analysis of the logic and provides constraints for the physical implementation of the design. If these constraints are satisfied in the layout phase, a timing-error-free design is obtained.

Book ChapterDOI
14 Mar 1988
TL;DR: This paper describes an intelligent information dictionary which serves as a knowledge-based interface between a database user and the query language of a relational database management system, details specific features of its capability categories, and presents examples of their use.
Abstract: This paper describes an intelligent information dictionary (IID) which serves as a knowledge-based interface between a database user and the query language of a relational database management system. IID extends the traditional roles of a data dictionary by enabling a user to view, manipulate, and verify semantic aspects of relational data. Our use of IID focuses on the interactive creation of simulation-specific databases from large "public" databases in the domain of military simulation and modeling. We have identified classes of database-related activities performed by a simulation developer when preparing databases as input to simulation models. Three categories of IID capabilities supporting these activities are explanation and browsing, customized data manipulation, and interactive consistency checking. In this paper we detail specific features of these categories and present examples of their use.

Journal ArticleDOI
TL;DR: This paper gives a formal methodology for the generation of bounded database schemes using a new technique called extensibility, which can also be used to generate constant-time-maintainable database schemes.
Abstract: Under the weak instance model, determining whether a class of database schemes is bounded (with respect to dependencies or with respect to consistency) is fundamental for the analysis of the behavior of the class of database schemes with respect to query processing and updates. However, proving that a class of database schemes is bounded seems to be very difficult even for restricted cases. To resolve this problem, we need to develop techniques or to explore other ideas for characterizing bounded database schemes. In particular, the idea of generating bounded database schemes is completely unexplored. In this paper, we give a formal methodology for the generation of bounded database schemes using a new technique called extensibility. This methodology can also be used to generate constant-time-maintainable database schemes.

Proceedings ArticleDOI
14 Sep 1988
TL;DR: This model is characterized by a decentralized control model and a data model based on ideas from functional programming that significantly reduces the severity of the consistency problems normally encountered in distributed systems.
Abstract: An alternative to inherently centralized control models and time-variant data models for distributed computing systems is proposed. This model is characterized by a decentralized control model and a data model based on ideas from functional programming. The control model aids availability and extensibility of the system management. The data model aids availability of particular objects. It is shown that the data model significantly reduces the severity of the consistency problems normally encountered in distributed systems.

01 Jan 1988
TL;DR: The EARA/G model is defined formally and is discussed in the context of a metasystem, a system for automatically generating specification environments, to provide a unified modeling approach that addresses the shortcomings of current database models.
Abstract: Computer-aided software specification environments store information about software systems in databases so that various forms of automatic analysis can be applied to that information. The characteristics of software specification information are such that existing database models are less than ideal for supporting these environments. The major drawbacks of existing models include an inability to deal explicitly with complex objects, poor support for automatic analysis of the consistency and completeness of specification information, and a lack of support for diagrammatical representations. The Entity Aggregate Relationship Attribute model with Graphical extensions (EARA/G) proposed in this thesis provides a unified modeling approach that addresses the shortcomings of current database models. The EARA/G model is defined formally and is discussed in the context of a metasystem, a system for automatically generating specification environments. Extensive examples are provided to show how the EARA/G model can be used to support two prominent software development methods: Structured Systems Analysis and Higher Order Software.

Book ChapterDOI
01 Jan 1988
TL;DR: This paper is concerned with the transaction control mechanisms needed to control concurrent transactions that access the public database system.
Abstract: R²D² is an object-oriented database system intended for engineering applications that is based on the nested relational data model. R²D² provides a two layer architecture for engineering application programming: a public database to globally store the engineering objects and a private database consisting of a local database and an object cache that are local to the application program. This paper is concerned with the transaction control mechanisms needed to control concurrent transactions that access the public database system. Objects are requested by a transaction to be transferred from the public database into the transaction's local database connected with an object cache, which together form the private data repository. Upon release of the objects from the cache they are checked back into the public database. The central implementation idea is a modified intention locking scheme that provides for a high level of concurrency in accessing objects represented in nested relational structure, the underlying data model of R²D². This scheme facilitates the (exclusive or shared) locking of subobjects within an abstraction hierarchy while the remainder of the hierarchical object is still accessible by other transactions.
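For readers unfamiliar with intention locking, the sketch below shows the classical IS/IX/S/X discipline on a two-level object hierarchy: locking a subobject first places intention locks on its ancestors, so an exclusive lock on one subobject still lets other transactions browse the rest of the hierarchy. This is the standard scheme the paper's modified protocol builds on, not R²D²'s protocol itself; the class, the compatibility table, and the example names are a generic illustration.

```python
# Classical lock-mode compatibility: intention-shared (IS), intention-exclusive
# (IX), shared (S), exclusive (X).
COMPATIBLE = {
    ("IS", "IS"): True, ("IS", "IX"): True, ("IS", "S"): True,  ("IS", "X"): False,
    ("IX", "IS"): True, ("IX", "IX"): True, ("IX", "S"): False, ("IX", "X"): False,
    ("S", "IS"): True,  ("S", "IX"): False, ("S", "S"): True,   ("S", "X"): False,
    ("X", "IS"): False, ("X", "IX"): False, ("X", "S"): False,  ("X", "X"): False,
}

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.locks = name, parent, []   # (txn, mode)

    def lock(self, txn, mode):
        """Acquire `mode` here after placing intention locks on all ancestors."""
        if self.parent is not None:
            self.parent.lock(txn, "IS" if mode in ("IS", "S") else "IX")
        for other_txn, other_mode in self.locks:
            if other_txn != txn and not COMPATIBLE[(other_mode, mode)]:
                raise RuntimeError(f"{txn} blocked on {self.name}")
        self.locks.append((txn, mode))

design = Node("design")
part = Node("part_17", parent=design)
part.lock("T1", "X")     # T1 gets X on the subobject and IX on its ancestor
design.lock("T2", "IS")  # T2 can still browse other subobjects under the root
```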

Proceedings ArticleDOI
01 Apr 1988
TL;DR: The focus of this workshop is on “executable or interpretable (`enactable`) models of the software process, and their prescriptive application to directly controlling software project activities”; research and development efforts that focus on relating such models to the full software lifecycle are premature, with insufficient reward to risk ratios.
Abstract: The focus of this workshop is on “executable or interpretable (`enactable`) models of the software process, and their prescriptive application to directly controlling software project activities.” Research and development efforts that focus on relating such models to the full software lifecycle are premature, with insufficient reward to risk ratios. Alternative (and not particularly novel) research approaches, each of which has good reward to risk ratios, are likely to lead us more effectively to the ultimate objective of producing, at reasonable cost, high-quality full lifecycle software development environments.

Process programming [3] has been developed to support the construction of a family of environments, each with a distinct and possibly evolving view of the appropriate lifecycle for a specific domain and project. In particular, the intent is to produce a software development environment kernel that can be parameterized by a process program. Although process programming is not strictly linked to full lifecycle environments, the connection is strong: “We believe that the essence of software engineering is the study of effective ways of developing process programs and of maintaining their effectiveness in the face of the need to make changes.” [3, p. 12] Since software engineering addresses the full lifecycle, process programming must do so as well.

Why is applying process programming to the full lifecycle premature? Because computer science history tells us so. Consider both compilers and operating systems.

At first, compilers were thought to be extraordinarily difficult to build. Some, such as the initial Fortran compilers, were built using a variety of ad hoc techniques. As the task became somewhat better understood, formal notations (such as BNF) were developed, along with associated implementations (such as Earley's parsing algorithm), to ease the process. Over time, given lots of attention by many researchers, the notions of lexing, parsing, tree attribution, flow analysis, and such became well-known techniques. The technical results demanded significant insights by both theoretical and experimental researchers.

The cost of developing individual compilers, even given these powerful techniques, was still significant. Tools to generate pieces of compilers—such as parser generators—were then developed. These tools, based on formal descriptions, have greatly decreased the cost of constructing compilers. But even current compiler generation technology is not perfect. Front-ends are relatively easy to generate, but there is not yet any truly effective approach to generating back-ends that produce high-quality code.

Now consider operating systems, which are in many ways indistinguishable from environments [1]. There is no operating system generating system; indeed, virtually every piece of each operating system is still constructed from scratch. Even though many operating systems have been constructed, we still do not have a handle on how to reduce the cost of their development through parameterization. One reason may be that there is less similarity among different operating systems than among different programming languages. However, this is not the entire problem. Rather, the problem is largely due to our inability to identify and formalize the key aspects of operating systems, as we have so successfully done in compilers.

The key lesson from these examples is that experience in building many complete instances is necessary before you can hope to generate instances. And even that is not sufficient if enough formal notations, useful for the actual parameterization, have not been developed.

What about environments? The biggest problem is that we simply do not have sufficient instances of full lifecycle environments. In fact, there are no commercially successful instances at all. Without appropriate instances, how can one expect to construct useful environments through parameterization using process programs? How can one determine the key pieces that can be parameterized? How can one hope to combine these pieces effectively? Without a large number of such instances, research in parameterizing full lifecycle environments seems too difficult. Even with such instances, the operating system example indicates that we might ultimately be disappointed anyway.

Two not-so-surprising alternatives seem appropriate. First, we need to develop full lifecycle environments, such as those under development in the ISTAR [2] and the Arcadia [4] efforts. At the very least, we need experience in environments that address more than a small range of lifecycle activities. Second, we need to focus on narrow ranges of lifecycle activities, with the intention of producing parameterizable efforts in these areas.

Work in the first category is of a scope that is beyond the resources available in most academic environments. Douglas Wiebe, one of my Ph.D. students, is working on a dissertation that fits into this second category [5]. He has identified a small, but important, area in which parameterization and generation is promising: the verification of semantic properties of software configurations.

Wiebe's research is motivated by the observation that existing systems each have fixed definitions of valid configurations. For instance, Xerox's DF subsystem includes tools that check for completeness and consistency (for precise definitions of these terms), while the UNIX make program places simple temporal constraints on configurations. Wiebe is developing notations and mechanisms that will support the construction of a parameterizable configuration verifier. The foundation of the approach is the development of an interconnection algebra to describe system models, combined with first order predicate calculus as a constraint language to describe the restrictions on the interconnections.

Even in this small area, progress is challenging. Similar investigations on other aspects of the software lifecycle, along with aggressive efforts to construct full lifecycle environments, are more appropriate research approaches than is process programming as applied to the full lifecycle.
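To make the final idea a little more concrete, a parameterizable configuration verifier can be pictured as a configuration description plus a pluggable list of constraint predicates. The sketch below is purely illustrative and is not drawn from Wiebe's dissertation; the configuration fields, predicate names, and the make-like constraint are all assumptions.

```python
# A configuration as a set of modules, typed interconnections, and build state.
config = {
    "modules": {"parser", "lexer", "codegen"},
    "imports": {("parser", "lexer"), ("codegen", "parser")},
    "compiled_after_edit": {"parser", "lexer"},
}

def complete(cfg):
    """Every imported module is itself part of the configuration."""
    return all(dst in cfg["modules"] for _, dst in cfg["imports"])

def consistent(cfg):
    """Every module has been recompiled since its last edit (a make-like
    temporal constraint collapsed to a boolean for brevity)."""
    return cfg["modules"] <= cfg["compiled_after_edit"]

# The "policy" parameter: the verifier is just the conjunction of its predicates.
policy = [complete, consistent]
print([rule.__name__ for rule in policy if not rule(config)])  # ['consistent']
```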

Patent
22 Sep 1988
TL;DR: In this article, a method and an apparatus for constraint-oriented inference with an improved knowledge base formation efficiency and ability to maintain consistency among slots is disclosed, in which constraints indicating relationships among slot values of frames are stored collectively, pointers which make the constraints accessible by relevant slots are attached automatically, and the constraints are simplified by inserting the slot values determined in processes of inference.
Abstract: A method and an apparatus for constraint-oriented inference with an improved knowledge base formation efficiency and ability to maintain consistency among slots is disclosed, in which constraints indicating relationships among slot values of frames are stored collectively, pointers which make the constraints accessible by relevant slots are attached automatically, and the constraints are simplified by inserting the slot values determined in processes of inference.
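Read as a data-structure recipe, the scheme in this abstract amounts to: store each constraint once, attach back-pointers from every slot it mentions, and simplify the constraint as slot values become known. The toy sketch below illustrates that flow under assumed names and a simplified "simplify" step (solve for the single remaining unknown); it is not the patented mechanism itself.

```python
class Frame:
    def __init__(self, slots):
        self.slots = dict.fromkeys(slots)          # slot -> value or None
        self.watchers = {s: [] for s in slots}     # slot -> constraints mentioning it

    def add_constraint(self, slots, solve):
        constraint = (slots, solve)
        for s in slots:                            # pointers attached automatically
            self.watchers[s].append(constraint)

    def assign(self, slot, value):
        self.slots[slot] = value
        for slots, solve in self.watchers[slot]:
            known = {s: self.slots[s] for s in slots if self.slots[s] is not None}
            if len(known) == len(slots) - 1:       # one unknown left: simplify
                missing = next(s for s in slots if s not in known)
                self.assign(missing, solve(missing, known))

box = Frame(["width", "height", "area"])
box.add_constraint(["width", "height", "area"],
                   lambda m, k: k["width"] * k["height"] if m == "area"
                   else k["area"] / (k["width"] if m == "height" else k["height"]))
box.assign("width", 4)
box.assign("height", 3)
print(box.slots["area"])   # 12: derived by the constraint, keeping slots consistent
```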