
Showing papers in "Journal of Automated Reasoning in 1999"


Journal ArticleDOI
TL;DR: A substantial body of knowledge about lambda calculus and Pure Type Systems, formally developed in a constructive type theory using the LEGO proof system, is surveyed; the meta theory of Pure Type Systems is outlined, leading to the strengthening lemma.
Abstract: We survey a substantial body of knowledge about lambda calculus and Pure Type Systems, formally developed in a constructive type theory using the LEGO proof system. On lambda calculus, we work up to an abstract, simplified proof of standardization for beta reduction that does not mention redex positions or residuals. Then we outline the meta theory of Pure Type Systems, leading to the strengthening lemma. One novelty is our use of named variables for the formalization. Along the way we point out what we feel has been learned about general issues of formalizing mathematics, emphasizing the search for formal definitions that are convenient for formal proof and convincingly represent the intended informal concepts.

112 citations


Journal ArticleDOI
TL;DR: This paper considers the problem of dealing automatically with arbitrary geometric statements and presents a rather successful but noncomplete method for automatic discovery that proceeds by adding the given conjectural thesis to the collection of hypotheses and then deriving some special consequences from this new set of conditions.
Abstract: We present here a further development of the well-known approach to automatic theorem proving in elementary geometry via algorithmic commutative algebra and algebraic geometry. Rather than confirming/refuting geometric statements (automatic proving) or finding geometric formulae holding among prescribed geometric magnitudes (automatic derivation), in this paper we consider (following Kapur and Mundy) the problem of dealing automatically with arbitrary geometric statements (i.e., theses that do not follow, in general, from the given hypotheses) aiming to find complementary hypotheses for the statements to become true. First we introduce some standard algebraic geometry notions in automatic proving, both for self-containment and in order to focus our own contribution. Then we present a rather successful but noncomplete method for automatic discovery that, roughly, proceeds by adding the given conjectural thesis to the collection of hypotheses and then deriving some special consequences from this new set of conditions. Several examples are discussed in detail.
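The core of the elimination idea can be shown with a toy computation (a sketch using SymPy's Gröbner bases, not the authors' implementation; the configuration and the variable names are invented for illustration): the conjectural thesis is added to the hypotheses, the dependent coordinates are eliminated, and the basis elements that survive are candidate complementary hypotheses on the free coordinates.

    from sympy import symbols, groebner, Rational

    u, v, x, y = symbols('u v x y')

    # Configuration: A = (0,0), B = (1,0), C = (x,y); F = (u,v) is the foot of
    # the altitude from C to the line AB.  Hypotheses as polynomial equations:
    hyps = [v,        # F lies on the line AB
            u - x]    # CF is perpendicular to AB

    # Conjectural thesis (false in general): F is the midpoint of AB.
    thesis = u - Rational(1, 2)

    # Add the thesis to the hypotheses and eliminate the dependent coordinates
    # u, v with a lexicographic order that ranks them highest.
    gb = groebner(hyps + [thesis], u, v, x, y, order='lex')

    # Basis elements free of the dependent variables are the extra conditions.
    print([p for p in gb.exprs if not p.has(u) and not p.has(v)])
    # [x - 1/2]: the thesis holds exactly when C lies on the perpendicular
    # bisector of AB, i.e. when the triangle is isosceles.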

100 citations


Journal ArticleDOI
TL;DR: Four statements equivalent to well-foundedness have been proved in Mizar and the proofs mechanically checked for correctness; the paper stresses the importance of a systematic development of a mechanized data base for mathematics in the spirit of the QED Project.
Abstract: Four statements equivalent to well-foundedness (well-founded induction, existence of recursively defined functions, uniqueness of recursively defined functions, and absence of descending ω-chains) have been proved in Mizar and the proofs mechanically checked for correctness. It seems not to be widely known that the existence (without the uniqueness assumption) of recursively defined functions implies well-foundedness. In the proof we used regular cardinals, a fairly advanced notion of set theory. The theory of cardinals in Mizar was developed earlier by G. Bancerek. With the current state of the Mizar system, the proofs turned out to be an exercise with only minor additions at the fundamental level. We would like to stress the importance of a systematic development of a mechanized data base for mathematics in the spirit of the QED Project. (ENOD – Experience, Not Only Doctrine. G. Kreisel)
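For orientation, two of the four statements can be written out explicitly (standard textbook formulations, not excerpts from the Mizar text), for a binary relation R on a set A:

    % Well-founded induction:
    \forall X \subseteq A.\;
      \bigl(\forall a \in A.\ (\forall b \in A.\ b \mathrel{R} a \Rightarrow b \in X)
        \Rightarrow a \in X\bigr) \;\Longrightarrow\; X = A

    % Absence of descending omega-chains:
    \neg\, \exists f : \mathbb{N} \to A.\; \forall n \in \mathbb{N}.\ f(n+1) \mathrel{R} f(n)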

50 citations


Journal ArticleDOI
TL;DR: The paper shows that satisfiability in a range of popular propositional modal systems can be decided by ordinary resolution procedures, providing an alternative method of proving decidability for modal logics, as well as closely related systems of artificial intelligence.
Abstract: The paper shows that satisfiability in a range of popular propositional modal systems can be decided by ordinary resolution procedures. This follows from a general result that resolution combined with condensing, and possibly some additional form of normalization, is a decision procedure for the satisfiability problem in certain so-called path logics. Path logics arise from normal propositional modal logics by the optimized functional translation method. The decision result provides an alternative method of proving decidability for modal logics, as well as closely related systems of artificial intelligence. This alone is not interesting. A more far-reaching consequence of the result has practical value, namely, many standard first-order theorem provers that are based on resolution are suitable for facilitating modal reasoning.

44 citations


Journal ArticleDOI
TL;DR: This paper presents the first machine-checked verification of Milner's type inference algorithm W for computing the most general type of an untyped λ-term enriched with let-expressions, the core of most typed functional programming languages and also known as Mini-ML.
Abstract: This paper presents the first machine-checked verification of Milner's type inference algorithm W for computing the most general type of an untyped λ-term enriched with let-expressions. This term language is the core of most typed functional programming languages and is also known as Mini-ML. We show how to model all the concepts involved, in particular types and type schemes, substitutions, and the thorny issue of “new” variables. Only a few key proofs are discussed in detail. The theories and proofs are developed in Isabelle/HOL, the HOL instantiation of the generic theorem prover Isabelle.
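To make the objects concrete, here is a compact, unverified Python sketch of a W-style inference loop for this term language (representation and naming are my own; there are no base types or error messages, and this is not the paper's Isabelle/HOL formalization): fresh type variables, unification producing substitutions, and generalization to a type scheme at let-bindings.

    import itertools

    # Terms:  ('var',x)  ('lam',x,e)  ('app',e1,e2)  ('let',x,e1,e2)
    # Types:  ('tv',a) or ('fun',t1,t2);  type schemes: (quantified_vars, type)
    fresh = itertools.count()
    def tv(): return ('tv', f'a{next(fresh)}')

    def subst(s, t):                      # apply a substitution to a type
        if t[0] == 'tv':
            return subst(s, s[t[1]]) if t[1] in s else t
        return ('fun', subst(s, t[1]), subst(s, t[2]))

    def compose(s2, s1):                  # apply s1 first, then s2
        out = {a: subst(s2, t) for a, t in s1.items()}
        out.update({a: t for a, t in s2.items() if a not in out})
        return out

    def ftv(t):
        return {t[1]} if t[0] == 'tv' else ftv(t[1]) | ftv(t[2])

    def ftv_env(env):
        out = set()
        for qs, t in env.values():
            out |= ftv(t) - set(qs)
        return out

    def unify(t1, t2):                    # most general unifier of two types
        if t1[0] == 'fun' and t2[0] == 'fun':
            s1 = unify(t1[1], t2[1])
            return compose(unify(subst(s1, t1[2]), subst(s1, t2[2])), s1)
        if t1[0] == 'tv':
            if t1 == t2: return {}
            if t1[1] in ftv(t2): raise TypeError('occurs check')
            return {t1[1]: t2}
        return unify(t2, t1)

    def instantiate(scheme):
        qs, t = scheme
        return subst({a: tv() for a in qs}, t)

    def subst_env(s, env):
        return {x: (qs, subst(s, t)) for x, (qs, t) in env.items()}

    def W(env, e):                        # returns (substitution, type)
        if e[0] == 'var':
            return {}, instantiate(env[e[1]])
        if e[0] == 'lam':
            a = tv()
            s, t = W({**env, e[1]: ([], a)}, e[2])
            return s, ('fun', subst(s, a), t)
        if e[0] == 'app':
            s1, t1 = W(env, e[1])
            s2, t2 = W(subst_env(s1, env), e[2])
            a = tv()
            s3 = unify(subst(s2, t1), ('fun', t2, a))
            return compose(s3, compose(s2, s1)), subst(s3, a)
        if e[0] == 'let':                 # generalization happens only here
            s1, t1 = W(env, e[2])
            env1 = subst_env(s1, env)
            scheme = (sorted(ftv(t1) - ftv_env(env1)), t1)
            s2, t2 = W({**env1, e[1]: scheme}, e[3])
            return compose(s2, s1), t2

    # let id = \x. x in id id   :   'a -> 'a for a fresh type variable 'a
    term = ('let', 'id', ('lam', 'x', ('var', 'x')),
            ('app', ('var', 'id'), ('var', 'id')))
    print(W({}, term)[1])                 # e.g. ('fun', ('tv', 'a2'), ('tv', 'a2'))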

43 citations


Journal ArticleDOI
TL;DR: This paper identifies second-order mappings from the source to the target that preserve induction-specific, proof-relevant abstractions dictating whether the source plan can be replayed, and reformulations that are invoked to add, delete, or modify planning steps.
Abstract: In this paper, we investigate analogy-driven proof plan construction in inductive theorem proving. The intention is to produce a plan for a target theorem that is similar to a given source theorem. We identify second-order mappings from the source to the target that preserve induction-specific proof-relevant abstractions dictating whether the source plan can be replayed. We replay the planning decisions taken in the source if the reasons or justifications for these decisions still hold in the target. If the source and target plan differ significantly at some isolated point, additional reformulations are invoked to add, delete, or modify planning steps. These reformulations are not ad hoc but are triggered by peculiarities of the mappings and by failed justifications. Employing analogy on top of the proof planner CLAM has extended the problem-solving horizon of CLAM: With analogy, some theorems could be proved automatically that neither CLAM nor NQTHM could prove automatically.

40 citations


Journal ArticleDOI
TL;DR: The results of the CADE-15 ATP System Competition (CASC-15) are presented.
Abstract: The results of the CADE-15 ATP System Competition (CASC-15) are presented.

28 citations


Journal ArticleDOI
TL;DR: This paper presents the Coq formalization of the typing system and its inference algorithm, and establishes formally the correctness and the completeness of the type inference algorithm with respect to the typing rules of the language.
Abstract: We develop a formal proof of the ML type inference algorithm, within the Coq proof assistant. We are much concerned with methodology and reusability of such a mechanization. This proof is an essential step toward the certification of a complete ML compiler. In this paper we present the Coq formalization of the typing system and its inference algorithm. We establish formally the correctness and the completeness of the type inference algorithm with respect to the typing rules of the language. We describe and comment on the mechanized proofs.

28 citations


Journal ArticleDOI
Ching-Tsun Chou, Doron Peled
TL;DR: This paper uses the mechanical theorem prover HOL to verify the correctness of a partial-order reduction technique for cutting down the amount of state search performed by model checkers; the formalization is intended to become the basis of a formal meta-theory of other model-checking algorithms and techniques.
Abstract: Mechanical theorem proving and model checking are the two main methods of formal verification, each with its own strengths and weaknesses. While mechanical theorem proving is more general, it requires intensive human guidance. Model checking is automatic, but is applicable to a more restricted class of problems. It is appealing to combine these two methods in order to take advantage of their different strengths. Prior research in this direction has focused on how to decompose a verification problem into parts each of which is manageable by one of the two methods. In this paper we explore another possibility: we use mechanical theorem proving to formally verify a meta-theory of model checking. As a case study, we use the mechanical theorem prover HOL to verify the correctness of a partial-order reduction technique for cutting down the amount of state search performed by model checkers. We choose this example for two reasons. First, this reduction technique has been implemented in the protocol analysis tool SPIN to significantly speed up the analysis of many practical protocols; hence its correctness has important practical consequences. Second, the correctness arguments involve nontrivial mathematics, the formalization of which we hope will become the basis of a formal meta-theory of other model-checking algorithms and techniques. Interestingly, our formalization led to a nontrivial generalization of the original informal theory. We discuss the lessons, both encouraging and discouraging, learned from this exercise. In the appendix we highlight the important definitions and theorems from each of our HOL theories. The complete listing of our HOL proof is given in a separate document because of space limitations.

27 citations


Journal ArticleDOI
TL;DR: A method to reduce search redundancy in goal-sensitive resolution methods is introduced, and an algorithm called Modoc is shown, both analytically and experimentally, to be faster than Model Elimination by an exponential factor.
Abstract: Goal-sensitive resolution methods, such as Model Elimination, have been observed to have a higher degree of search redundancy than model-search methods. Therefore, resolution methods have not been seen in high-performance propositional satisfiability testers. A method to reduce search redundancy in goal-sensitive resolution methods is introduced. The idea at the heart of the method is to attempt to construct a refutation and a model simultaneously and incrementally, based on subsearch outcomes. The method exploits the concept of 'autarky', which can be informally described as a 'self-sufficient' model for some clauses, but which does not affect the remaining clauses of the formula. Incorporating this method into Model Elimination leads to an algorithm called Modoc. Modoc is shown, both analytically and experimentally, to be faster than Model Elimination by an exponential factor. Modoc, unlike Model Elimination, is able to find a model if it fails to find a refutation, essentially by combining autarkies. Unlike the pruning strategies of most refinements of resolution, autarky-related pruning does not prune any successful refutation; it only prunes attempts that ultimately will be unsuccessful; consequently, it will not force the underlying Modoc search to find an unnecessarily long refutation. To prove correctness and other properties, a game characterization of refutation search is introduced, which demonstrates some symmetries in the search for a refutation and the search for a model. Experimental data is presented on a variety of formula classes, comparing Modoc with Model Elimination and model-search algorithms. On random formulas, model-search methods are faster than Modoc, whereas Modoc is faster on structured formulas, including those derived from a circuit-testing application. Considerations for first-order refutation methods are discussed briefly.
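The autarky notion itself is easy to state operationally (a minimal sketch over DIMACS-style integer literals; this is only the definition, not Modoc's search procedure): a partial assignment is an autarky if every clause it touches is also satisfied by it, so those clauses can be discarded without affecting the satisfiability of the rest.

    def is_autarky(assignment, clauses):
        # assignment: partial map variable -> bool; clauses: lists of nonzero
        # ints (positive literal = variable, negative = its negation).
        for clause in clauses:
            touched = any(abs(lit) in assignment for lit in clause)
            satisfied = any(abs(lit) in assignment
                            and assignment[abs(lit)] == (lit > 0)
                            for lit in clause)
            if touched and not satisfied:
                return False
        return True

    # x1 = True is an autarky for (x1 v x2)(x1 v -x3)(x2 v x3): both clauses it
    # touches are satisfied, and the third clause is untouched.
    print(is_autarky({1: True}, [[1, 2], [1, -3], [2, 3]]))   # True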

26 citations


Journal ArticleDOI
TL;DR: The hot list strategy, formulated and featured in this article, enables an automated reasoning program to briefly consider a newly retained conclusion whose complexity would otherwise prevent its use for perhaps many CPU-hours.
Abstract: Experimentation strongly suggests that, for attacking deep questions and hard problems with the assistance of an automated reasoning program, the more effective paradigms rely on the retention of deduced information. A significant obstacle ordinarily presented by such a paradigm is the deduction and retention of one or more needed conclusions whose complexity sharply delays their consideration. To mitigate the severity of the cited obstacle, I formulated and feature in this article the hot list strategy. The hot list strategy asks the researcher to choose, usually from among the input statements characterizing the problem under study, one or more statements that are conjectured to play a key role for assignment completion. The chosen statements – conjectured to merit revisiting, again and again – are placed in an input list of statements, called the hot list. When an automated reasoning program has decided to retain a new conclusion C – before any other statement is chosen to initiate conclusion drawing – the presence of a nonempty hot list (with an appropriate assignment of the input parameter known as heat) causes each inference rule in use to be applied to C together with the appropriate number of members of the hot list. Members of the hot list are used to complete applications of inference rules and not to initiate applications. The use of the hot list strategy thus enables an automated reasoning program to briefly consider a newly retained conclusion whose complexity would otherwise prevent its use for perhaps many CPU-hours. To give evidence of the value of the strategy, I focus on four contexts: (1) dramatically reducing the CPU time required to reach a desired goal, (2) finding a proof of a theorem that had previously resisted all but the more inventive automated attempts, (3) discovering a proof that is more elegant than previously known, and (4) answering a question that had steadfastly eluded researchers relying on an automated reasoning program. I also discuss a related strategy, the dynamic hot list strategy (formulated by my colleague W. McCune), that enables the program during a run to augment the contents of the hot list. In the Appendix, I give useful input files and interesting proofs. Because of frequent requests to do so, I include challenge problems to consider, commentary on my approach to experimentation and research, and suggestions to guide one in the use of McCune’s automated reasoning program OTTER.
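As a rough illustration (a propositional toy with invented names, not OTTER's implementation), the sketch below runs a naive given-clause resolution loop in which every newly retained clause is immediately resolved against the hot-list members, up to a nesting depth given by the heat parameter, before the next given clause is selected.

    def resolve(c1, c2):
        # All binary resolvents of two propositional clauses (frozensets of
        # integer literals: positive = variable, negative = its negation).
        return [(c1 - {lit}) | (c2 - {-lit}) for lit in c1 if -lit in c2]

    def refute(clauses, hot_list=(), heat=1, max_kept=2000):
        # Naive given-clause loop; returns True when the empty clause is derived.
        kept, queue = [], [frozenset(c) for c in clauses]
        hot = [frozenset(c) for c in hot_list]

        def retain(c, level):
            # Retain a new conclusion and, before anything else happens, give it
            # a brief look together with the hot-list members (hot list strategy).
            if c in kept:
                return False
            kept.append(c)
            if not c:
                return True                   # empty clause: refutation complete
            if level < heat:
                for h in hot:
                    for r in resolve(c, h):
                        if retain(r, level + 1):
                            return True
            return False

        while queue and len(kept) < max_kept:
            given = queue.pop(0)
            if retain(given, 0):
                return True
            for other in list(kept):
                for r in resolve(given, other):
                    if r not in kept and r not in queue:
                        queue.append(r)
        return False

    # {p}, {-p, q}, {-q} is unsatisfiable; putting {-q} on the hot list lets the
    # refutation close as soon as the clause {q} is retained.
    print(refute([[1], [-1, 2], [-2]], hot_list=[[-2]]))   # True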

Journal ArticleDOI
TL;DR: Some basic theorems about ordinal numbers were proved using McCune's computer program OTTER, building on Quaife's modification of Gödel’s class theory.
Abstract: Some basic theorems about ordinal numbers were proved using McCune’s computer program OTTER, building on Quaife’s modification of Gödel’s class theory. Our theorems are based on Isbell’s elegant definition of ordinals. Neither the axiom of regularity nor the axiom of choice is used.

Journal ArticleDOI
TL;DR: Some basic theorems about composition and other key constructs of set theory were proved using McCune's computer program OTTER, building on Quaife's modification of Gödel’s class theory.
Abstract: Some basic theorems about composition and other key constructs of set theory were proved using McCune’s computer program OTTER, building on Quaife’s modification of Gödel’s class theory. Our proofs use equational definitions in terms of Gödel’s flip and rotate functors. A new way to prove the composition of homomorphisms theorem is also presented.

Journal ArticleDOI
TL;DR: A completion procedure (called MKB) that works for multiple reduction orderings, based on the observation that some inferences made in different processes are often closely related, so that inference rules can be designed that simulate these inferences all in a single operation.
Abstract: We present a completion procedure (called MKB) that works for multiple reduction orderings. Given equations and a set of reduction orderings, the procedure simulates a computation performed by parallel processes, each of which executes the standard completion procedure (KB) with one of the given orderings. To gain efficiency, however, we develop new inference rules working on objects called nodes, which are data structures consisting of a pair s : t of terms associated with the information to show which processes contain the rule s → t (or t → s) and which processes contain the equation s ↔ t. The idea is based on the observation that some inferences made in different processes are often closely related, so we can design inference rules that simulate these inferences all in a single operation. Our experiments show that MKB is significantly more efficient than the naive simulation of parallel execution of KB procedures, when the number of reduction orderings is large enough. We also present an extension of this technique to the unfailing completion for multiple reduction orderings, which is useful in various areas of automated reasoning, including equational theorem proving.
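A schematic reading of the node data structure (the field names are invented; the paper defines the actual inference rules on nodes): one shared term pair carries, per simulated process, whether that process currently holds the pair as an oriented rule or as an equation, so a single operation can update many processes at once.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        s: object                                   # left-hand term
        t: object                                   # right-hand term
        rule_lr: set = field(default_factory=set)   # processes holding rule s -> t
        rule_rl: set = field(default_factory=set)   # processes holding rule t -> s
        eqn: set = field(default_factory=set)       # processes holding s <-> t

    def orient(node, orderings):
        # One orientation step performed simultaneously for all processes:
        # every process whose reduction ordering can orient the equation is
        # moved from the equation label to the corresponding rule label.
        # orderings[i](a, b) is assumed to return True when a > b in ordering i.
        for i in list(node.eqn):
            if orderings[i](node.s, node.t):
                node.eqn.discard(i); node.rule_lr.add(i)
            elif orderings[i](node.t, node.s):
                node.eqn.discard(i); node.rule_rl.add(i)

    # Two processes with opposite toy 'orderings' orient the same node at once.
    n = Node('f(x)', 'g(x)', eqn={0, 1})
    orient(n, [lambda a, b: a < b, lambda a, b: a > b])
    print(n.rule_lr, n.rule_rl, n.eqn)              # {0} {1} set()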

Journal ArticleDOI
TL;DR: The theorem of Sylow is proved in Isabelle HOL with a proof by Wielandt that is more general than the original and uses a nontrivial combinatorial identity.
Abstract: The theorem of Sylow is proved in Isabelle HOL. We follow the proof by Wielandt that is more general than the original and uses a nontrivial combinatorial identity. The mathematical proof is explained in some detail, leading on to the mechanization of group theory and the necessary combinatorics in Isabelle. We present the mechanization of the proof in detail, giving reference to theorems contained in an appendix. Some weak points of the experiment with respect to a natural treatment of abstract algebraic reasoning give rise to a discussion of the use of module systems to represent abstract algebra in theorem provers. Drawing from that, we present tentative ideas for further research into a section concept for Isabelle.
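For reference, the statement mechanized and the combinatorial identity at the heart of Wielandt's argument are, in standard form (quoted from mathematical folklore rather than from the Isabelle text):

    % Sylow's first theorem: a finite group G with |G| = p^a * m, p prime,
    % p \nmid m, has a subgroup of order p^a:
    \exists H \leq G.\; |H| = p^{a}

    % The nontrivial combinatorial identity used in Wielandt's proof:
    \binom{p^{a} m}{p^{a}} \equiv m \pmod{p}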

Journal ArticleDOI
TL;DR: This work proposes means for drawing conclusions from systems that are based on classical logic, although the information might be inconsistent, by detecting those parts of the knowledge base that 'cause' the inconsistency and isolating the parts that are 'recoverable'.
Abstract: One of the most significant drawbacks of classical logic is its being useless in the presence of an inconsistency. Nevertheless, the classical calculus is a very convenient framework to work with. In this work we propose means for drawing conclusions from systems that are based on classical logic, although the information might be inconsistent. The idea is to detect those parts of the knowledge base that 'cause' the inconsistency, and to isolate the parts that are 'recoverable'. We do this by temporarily switching into the Ginsberg/Fitting multivalued framework of bilattices (which is a common framework for logic programming and nonmonotonic reasoning). Our method is conservative in the sense that it considers the contradictory data as useless and regards all the remaining information as unaffected. The resulting logic is nonmonotonic, paraconsistent, and a plausibility logic in the sense of Lehmann.

Journal ArticleDOI
TL;DR: This work presents an application in spatial reasoning that uses the embedding of constraint satisfaction on the domain of discourse into a rule-based programming paradigm like logic programming to produce a clear, concise, yet very expressive system through its ability to manipulate partial information.
Abstract: The embedding of constraint satisfaction on the domain of discourse into a rule-based programming paradigm like logic programming provides a powerful reasoning tool. We present an application in spatial reasoning that uses this combination to produce a clear, concise, yet very expressive system through its ability to manipulate partial information. Three-dimensional solid objects in constructive solid geometry representation are manipulated, and their spatial relationship with one another, points, or regions is reasoned about. The language used to develop this application is QUAD-CLP(ℜ), an experimental constraint logic programming language of our own design, which is equipped with a solver for quadratic and linear arithmetic constraints over the reals.

Journal Article
TL;DR: It is claimed that one can use (propositional) logic for encoding the low-level properties of state-of-the-art cryptographic algorithms and then use automated theorem proving for reasoning about them; this approach is called logical cryptanalysis.
Abstract: Providing formal assurance is a key issue in computer security. Yet, automated reasoning tools have only been used for the verification of security protocols, and never for the verification and cryptanalysis of the cryptographic algorithms on which those protocols rely. We claim that one can use (propositional) logic for encoding the low-level properties of state-of-the-art cryptographic algorithms and then use automated theorem proving for reasoning about them. We call this approach logical cryptanalysis. In this framework, finding a model for a formula encoding an algorithm is equivalent to finding a key with a cryptanalytic attack. Other important properties can also be captured. Moreover, SAT benchmarks based on the encoding of cryptographic algorithms optimally share features of "real world" and random problems. Here we present a case study on the U.S. Data Encryption Standard (DES) and discuss how to obtain a manageable encoding of its properties. We have also tested three SAT provers, among them TABLEAU, on the encoding of DES, to see whether they are up to the task, and we discuss the reasons for their different performance. A discussion of open problems and future research concludes the paper.
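The flavor of the encoding can be conveyed with a toy example (a sketch of the general idea only; the four-bit 'cipher', the variable numbering, and the brute-force model search are all invented for illustration and bear no relation to the actual DES encoding): each primitive operation becomes a handful of clauses, known plaintext/ciphertext bits become unit clauses, and any model of the resulting formula yields the key.

    from itertools import product

    def xor_clauses(a, b, c):
        # CNF clauses forcing c = a XOR b over propositional variables a, b, c
        # (positive int = variable, negative = its negation).
        return [[-a, -b, -c], [a, b, -c], [a, -b, c], [-a, b, c]]

    # Toy 'cipher': ciphertext_i = plaintext_i XOR key_i for a 4-bit block.
    # Variables 1..4: plaintext bits, 5..8: key bits, 9..12: ciphertext bits.
    clauses = []
    for i in range(4):
        clauses += xor_clauses(1 + i, 5 + i, 9 + i)

    # A known plaintext/ciphertext pair is added as unit clauses; finding a
    # model of the resulting formula recovers the key bits.
    plaintext, ciphertext = [1, 0, 1, 1], [0, 0, 1, 0]
    for i, bit in enumerate(plaintext):
        clauses.append([1 + i] if bit else [-(1 + i)])
    for i, bit in enumerate(ciphertext):
        clauses.append([9 + i] if bit else [-(9 + i)])

    def models(clauses, n_vars):
        # Brute-force 'SAT solver' for the toy example; a real attack would run
        # a SAT prover such as TABLEAU on a much larger encoding.
        for bits in product([False, True], repeat=n_vars):
            assign = {v + 1: bits[v] for v in range(n_vars)}
            if all(any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses):
                yield assign

    key = next(models(clauses, 12))
    print([int(key[5 + i]) for i in range(4)])   # recovered key bits: [1, 0, 0, 1]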

Journal ArticleDOI
TL;DR: This document argues that the use of tail-recursive abstract machines incurs only a small increase in theorem-proving burden when compared with what is required when using ordinary abstract machines.
Abstract: One method for producing verified implementations of programming languages is to formally derive them from abstract machines. Tail-recursive abstract machines provide efficient support for iterative processes via the ordinary procedure call mechanism. This document argues that the use of tail-recursive abstract machines incurs only a small increase in theorem-proving burden when compared with what is required when using ordinary abstract machines. The position is supported by comparing correctness proofs performed using the Boyer–Moore theorem prover. A by-product of this effort is a syntactic criterion based on tail contexts for identifying which procedure calls must be implemented as tail calls. The concept of tail contexts was used in the latest Scheme Report, the only language specification known to the author that defines the requirement that its implementations must be tail recursive.
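The syntactic criterion can be pictured with a small AST walk (a sketch over a miniature Scheme-like abstract syntax of my own choosing, covering only lambda, if, begin, and application; the Report's full grammar has more tail-context rules): the body of a lambda is a tail context, the branches of an if inherit the context while the test does not, and only the last expression of a begin does.

    def tail_calls(expr, tail=True):
        # Collect the procedure calls that occur in tail position.
        if not isinstance(expr, list) or not expr:
            return []
        head = expr[0]
        if head == 'lambda':                  # (lambda (args) body)
            return tail_calls(expr[2], True)
        if head == 'if':                      # (if test then else)
            return (tail_calls(expr[1], False)
                    + tail_calls(expr[2], tail) + tail_calls(expr[3], tail))
        if head == 'begin':                   # (begin e1 ... en)
            out = []
            for i, sub in enumerate(expr[1:], 1):
                out += tail_calls(sub, tail and i == len(expr) - 1)
            return out
        # Application: the call itself is a tail call iff we are in a tail
        # context; its operator and operands are evaluated in non-tail contexts.
        out = [expr] if tail else []
        for sub in expr:
            out += tail_calls(sub, False)
        return out

    # (lambda (n acc) (if (= n 0) acc (loop (- n 1) (* n acc))))
    ast = ['lambda', ['n', 'acc'],
           ['if', ['=', 'n', 0], 'acc', ['loop', ['-', 'n', 1], ['*', 'n', 'acc']]]]
    print(tail_calls(ast))   # [['loop', ['-', 'n', 1], ['*', 'n', 'acc']]]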

Journal ArticleDOI
TL;DR: Using combinatorial optimization techniques, it is shown that if each variable is restricted to having at most two occurrences, then several cases of simultaneous elementary AC- matching and ACU-matching can be solved in polynomial time.
Abstract: The simultaneous elementary E-matching problem for an equational theory E is to decide whether there is an E-matcher for a given system of equations in which the only nonconstant function symbols occurring in the terms to be matched are the ones constrained by the equational axioms of E. We study the computational complexity of simultaneous elementary matching problems for the equational theories A of semigroups, AC of commutative semigroups, and ACU of commutative monoids. In each case, we delineate the boundary between NP-completeness and solvability in polynomial time by considering two parameters, the number of equations in the systems and the number of constant symbols in the signature. Moreover, we analyze further the intractable cases of simultaneous elementary AC-matching and ACU-matching by also taking into account the maximum number of occurrences of each variable. Using combinatorial optimization techniques, we show that if each variable is restricted to having at most two occurrences, then several cases of simultaneous elementary AC-matching and ACU-matching can be solved in polynomial time.

Journal ArticleDOI
TL;DR: In this paper, an algorithm for set unification, which is a restricted case of the associative-commutative-idempotent (ACI) unification, is presented.
Abstract: In this paper, an algorithm for set unification – which is a restricted case of the associative-commutative-idempotent (ACI) unification – is presented. The algorithm is able to unify finite sets containing arbitrary terms. It is nondeterministic and can easily be implemented in Prolog. Because of the simplicity of the algorithm, the computation of a single solution is quite fast, and the exact complexity of the algorithm and of the set unification problem itself can be analyzed easily. The algorithm is compared with some other set unification algorithms. All algorithms have single exponential complexity, because the set unification problem is NP-complete, but our exact complexity analysis provides more details. It is shown how the algorithm presented here can be used to solve a generalized set unification problem where sets with tails are admissible. The algorithm can be used in any logic programming language embedding (finite) sets, or in other contexts where set unification is needed, for example, in some unification-based grammar formalisms.

Journal ArticleDOI
TL;DR: This paper shows how to safely include user-defined decision procedures in theorem provers and shows that using a rich underlying logic permits an abstract account of the approach so that the results carry over to different implementations and other logics.
Abstract: Proving theorems is a creative act demanding new combinations of ideas and on occasion new methods of argument. For this reason, theorem proving systems need to be extensible. The provers should also remain correct under extension, so there must be a secure mechanism for doing this. The tactic-style provers pioneered by Edinburgh LCF provide a very effective way to achieve secure extensions, but in such systems, all new methods must be reduced to tactics. This is a drawback because there are other useful proof generating tools such as decision procedures; these include, for example, algorithms which reduce a deduction problem, such as arithmetic provability, to a computation on graphs. The Nuprl system pioneered the combination of fixed decision procedures with tactics, but the issue of securely adding new ones was not solved. In this paper we show how to safely include user-defined decision procedures in theorem provers. The idea is to prove properties of the procedure inside the prover’s logic and then invoke a reflection rule to connect the procedure to the system. We also show that using a rich underlying logic permits an abstract account of the approach so that the results carry over to different implementations and other logics.

Journal ArticleDOI
TL;DR: A logical and axiomatic version of SDT capturing the essence of Domain Theory à la Scott is presented, based on a sufficiently expressive version of constructive type theory and fully implemented in the proof checker Lego.
Abstract: Synthetic Domain Theory (SDT) is a constructive variant of Domain Theory where all functions are continuous, following Dana Scott's idea of “domains as sets”. Recently there have been suggested more abstract axiomatizations encompassing alternative notions of domain theory as, for example, stable domain theory. In this article a logical and axiomatic version of SDT capturing the essence of Domain Theory à la Scott is presented. It is based on a sufficiently expressive version of constructive type theory and fully implemented in the proof checker Lego. On top of this “core SDT” denotational semantics and program verification can be – and in fact has been – developed in a purely formal machine-checked way. The version of SDT we have chosen for this purpose is based on work by Reus and Streicher and can be regarded as an axiomatization of complete extensional PERs. This approach is a modification of Phoa's complete Σ-spaces and uses axioms introduced by Taylor.

Journal ArticleDOI
TL;DR: The power and generality of T-resolution are emphasized by introducing suitable linear and ordered refinements, uniformly and in strict analogy with the standard resolution approach.
Abstract: T-resolution is a binary rule, proposed by Policriti and Schwartz in 1995 for theorem proving in first-order theories (T-theorem proving) that can be seen – at least at the ground level – as a variant of Stickel’s theory resolution. In this paper we consider refinements of this rule as well as the model elimination variant of it. After a general discussion concerning our viewpoint on theorem proving in first-order theories and a brief comparison with theory resolution, the power and generality of T-resolution are emphasized by introducing suitable linear and ordered refinements, uniformly and in strict analogy with the standard resolution approach. Then a model elimination variant of T-resolution is introduced and proved to be sound and complete; some experimental results are also reported. In the last part of the paper we present two applications of T-resolution: to constraint logic programming and to modal logic.

Journal ArticleDOI
TL;DR: A full formalization of the semantics of definite programs in the calculus of inductive constructions, including switching and lifting lemmas and soundness and completeness theorems.
Abstract: This paper presents a full formalization of the semantics of definite programs, in the calculus of inductive constructions. First, we describe a formalization of the proof of first-order term unification: this proof is obtained from a similar proof dealing with quasi-terms, thus showing how to relate an inductive set with a subset defined by a predicate. Then, SLD-resolution is explicitly defined: the renaming process used in SLD-derivations is made explicit, thus introducing complications, usually overlooked, during the proofs of classical results. Last, switching and lifting lemmas and soundness and completeness theorems are formalized. For this, we present two lemmas, usually omitted, which are needed. This development also contains a formalization of basic results on operators and their fixpoints in a general setting. All the proofs of the results presented here have been checked with the proof assistant Coq.
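For readers who want the bare shape of SLD-resolution, here is a propositional toy (a sketch only; the first-order case formalized in the paper additionally needs unification of atoms and renaming of clause variables at every resolution step, which is exactly where the usually overlooked complications arise):

    def sld(program, goals):
        # program: list of definite clauses (head, [body atoms]);
        # goals: list of atoms to prove.  Returns True if an SLD-refutation
        # of the goal exists (for non-looping programs).
        if not goals:
            return True                       # empty goal: success
        first, rest = goals[0], goals[1:]
        for head, body in program:
            if head == first and sld(program, body + rest):
                return True
        return False

    # p :- q, r.   q.   r :- s.   s.
    prog = [('p', ['q', 'r']), ('q', []), ('r', ['s']), ('s', [])]
    print(sld(prog, ['p']))                   # True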

Journal ArticleDOI
TL;DR: An efficient recursive algorithm is presented to compute the set of prime implicants of a propositional formula in conjunctive normal form (CNF), and it is shown that the number of subsumption operations is reduced in the proposed algorithm.
Abstract: In this paper, an efficient recursive algorithm is presented to compute the set of prime implicants of a propositional formula in conjunctive normal form (CNF). The propositional formula is represented as a (0,1)-matrix, and a set of 1’s across its columns is termed a path. The algorithm finds the prime implicants as the prime paths in the matrix using the divide-and-conquer technique. The algorithm is based on the principle that a prime implicant of a formula is the concatenation of prime implicants of two of its subformulae. The sets of prime paths containing a specific literal and devoid of a literal are characterized. Based on this characterization, the formula is recursively divided into subformulae to employ the divide-and-conquer paradigm. The prime paths of the subformulae are then concatenated to obtain the prime paths of the formula. In this process, the number of subsumption operations is reduced. It is also shown that the earlier algorithm based on prime paths has some avoidable computations that the proposed algorithm avoids. Besides being more efficient, the proposed algorithm has the additional advantage of being suitable for the incremental method, without recomputing prime paths for the updated formula. The subsumption operation is one of the crucial operations for any such algorithm, and it is shown that the number of subsumption operations is reduced in the proposed algorithm. Experimental results are presented to substantiate that the proposed algorithm is more efficient than the existing algorithms.
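The objects involved can be pinned down with a brute-force baseline (a sketch that only defines paths and prime paths over a DIMACS-style clause list; the paper's contribution is the recursive divide-and-conquer computation and the reduction in subsumption checks, which this naive version does not reproduce):

    from itertools import product

    def prime_implicants(clauses):
        # A 'path' picks one literal from every clause of the CNF formula;
        # consistent paths (no complementary pair) are implicants, and the
        # subsumption-minimal ones are exactly the prime implicants.
        paths = set()
        for choice in product(*clauses):
            path = frozenset(choice)
            if all(-lit not in path for lit in path):
                paths.add(path)
        return [p for p in paths if not any(q < p for q in paths)]

    # (p or q) and (not p or r), with p=1, q=2, r=3:
    print(prime_implicants([[1, 2], [-1, 3]]))
    # three prime implicants: {p, r}, {-p, q}, {q, r}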

Journal ArticleDOI
TL;DR: In this article, terms containing triple dots are defined in the framework of an ad hoc small formal language of mathematical logic, and some constructions that make this task possible are described.
Abstract: Terms containing triple dots are defined in the framework of an ad hoc small formal language of mathematical logic. It contains some constructions that make this task possible. The constructions include variables whose subscripts are arbitrary terms of the language, quantifiers over an infinite number of variables, and shorter forms of triple-dot terms, called star terms. Introducing these terms and constructions into a practice-oriented language may make it more convenient to use.

Journal ArticleDOI
TL;DR: This paper presents in detail how the Unity logic for reasoning about concurrent programs was formalized within the mechanized theorem prover PC-NQTHM-92, including several natural extensions to Unity such as nondeterministic statements.
Abstract: This paper presents in detail how the Unity logic for reasoning about concurrent programs was formalized within the mechanized theorem prover PC-NQTHM-92. Most of Unity's proof rules were formalized in the unquantified logic of NQTHM, and the proof system has been used to mechanically verify several concurrent programs. The mechanized proof system is sound by construction, since Unity's proof rules were proved about an operational semantics of concurrency, also presented here. Skolem functions are used instead of quantifiers, and the paper describes how proof rules containing Skolem functions are used instead of Unity's quantified proof rules when verifying concurrent programs. This formalization includes several natural extensions to Unity, including nondeterministic statements. The paper concludes with a discussion of the cost and value of mechanization.

Journal ArticleDOI
TL;DR: A working proof transformation system that, by exploiting the duality between mathematical induction and recursion, employs the novel strategy of optimizing recursive programs by transforming inductive proofs.
Abstract: The research described in this paper involved developing transformation techniques that increase the efficiency of the original program, the source, by transforming its synthesis proof into one, the target, which yields a computationally more efficient algorithm. We describe a working proof transformation system that, by exploiting the duality between mathematical induction and recursion, employs the novel strategy of optimizing recursive programs by transforming inductive proofs. We compare and contrast this approach with the more traditional approaches to program transformation and highlight the benefits of proof transformation with regard to search, correctness, automatability, and generality.

Journal ArticleDOI
TL;DR: SPTHEO is described, a parallelization of the sequential first-order theorem prover SETHEO, based on the SPS-model (Static Partitioning with Slackness) for parallel search, an approach that minimizes the processor-to-processor communication.
Abstract: This paper describes the parallel automated theorem prover SPTHEO, a parallelization of the sequential first-order theorem prover SETHEO. The parallelization is based on the SPS-model (Static Partitioning with Slackness) for parallel search, an approach that minimizes the processor-to-processor communication. This model allows efficient computations on hardware with weak communication performance, such as workstation networks. SPTHEO offers the utilization of both OR- and independent-AND parallelism. In this paper, a detailed description and evaluation of the OR-parallel part are given.