
Showing papers in "Information & Computation in 1998"


Journal ArticleDOI
Carlo Blundo, Alfredo De Santis, Ugo Vaccaro, Amir Herzberg, Shay Kutten, Moti Yung
TL;DR: This paper considers the model where interaction is allowed in the common key computation phase and shows a gap between the models by exhibiting a one-round interactive scheme in which the user's information is only k + t − 1 times the size of the common key.
Abstract: In this paper we analyze perfectly secure key distribution schemes for dynamic conferences. In this setting, any member of a group of t users can compute a common key using only his private initial piece of information and the identities of the other t − 1 users in the group. Keys are secure against coalitions of up to k users; that is, even if k users pool together their pieces they cannot compute anything about a key of any conference comprised of t other users. First we consider a noninteractive model where users compute the common key without any interaction. We prove a tight bound of C(k+t−1, t−1) times the size of the common key on the size of each user's piece of information. Then, we consider the model where interaction is allowed in the common key computation phase and show a gap between the models by exhibiting a one-round interactive scheme in which the user's information is only k + t − 1 times the size of the common key. Finally, we present its adaptation to network topologies with neighbourhood constraints and to asymmetric (e.g., client-server) communication models.
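
For intuition, here is a minimal, illustrative sketch (not code from the paper) of the two-party case (t = 2) of the symmetric-polynomial construction behind such noninteractive schemes; the field size, user identities, and function names are assumptions for the demo. Note that for t = 2 each user stores k + 1 field elements, i.e., C(k+2−1, 2−1) = k + 1 times the key size, matching the bound.

```python
# Sketch of symmetric-polynomial key predistribution, specialized to t = 2
# (Blom-style); the general scheme uses a symmetric polynomial in t
# variables of degree k in each. Field size and IDs are illustrative only.
import random

P = 2**13 - 1  # a small prime field, for the demo

def setup(k):
    """Trusted server picks a symmetric bivariate polynomial f(x, y) of degree k."""
    c = [[0] * (k + 1) for _ in range(k + 1)]
    for a in range(k + 1):
        for b in range(a, k + 1):
            c[a][b] = c[b][a] = random.randrange(P)  # symmetry: f(x,y) = f(y,x)
    return c

def user_share(c, i):
    """User i stores the k+1 coefficients of g_i(y) = f(i, y)."""
    k = len(c) - 1
    return [sum(c[a][b] * pow(i, a, P) for a in range(k + 1)) % P
            for b in range(k + 1)]

def common_key(share, j):
    """Evaluate g_i(j); by symmetry g_i(j) == g_j(i)."""
    return sum(s * pow(j, b, P) for b, s in enumerate(share)) % P

k = 3  # secure against coalitions of up to k users
c = setup(k)
alice, bob = user_share(c, 17), user_share(c, 42)
assert common_key(alice, 42) == common_key(bob, 17)
```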

473 citations


Journal ArticleDOI
TL;DR: The authors seek a "logic of polytime": not one more axiomatization, but an intrinsically polytime system. The resulting system, Light Linear Logic (LLL), admits full induction on data types, which shows that, within LLL, induction is compatible with low complexity.
Abstract: We are seeking a "logic of polytime", not yet one more axiomatization, but an intrinsically polytime system. Our methodological bias will be to consider that the expressive power of a system is the complexity of its cut-elimination procedure, and we therefore seek a system with a polytime complexity for cut-elimination (to be precise: besides the size of the proof, there will be an auxiliary parameter, the depth, controlling the degree of the polynomial). This cannot be achieved within classical or intuitionistic logics because of structural rules, especially contraction: this is why the complexity of cut-elimination in all extant logical systems (including the standard version of linear logic, which controls structural rules without forbidding them) is catastrophic, elementary (towers of exponentials) or worse. Light Linear Logic (LLL) is a purely logical system with a more careful handling of structural rules: this system is strong enough to represent all polytime functions, but cut-elimination is (locally) polytime. With LLL, our control over the complexity of cut-elimination improves greatly. But this is not the only potentiality of LLL: why not transform it into a system of mathematics and try to formalize "polytime mathematics" in the same way as Heyting arithmetic formalizes constructive mathematics? The possibility is clearly open, since LLL admits extensions into a naive set-theory, with full comprehension, still with polytime cut-elimination. This system admits full induction on data types, which shows that, within LLL, induction is compatible with low complexity.

356 citations


Journal ArticleDOI
TL;DR: A Kleene theorem for 1-unambiguous languages yields the decidability of whether a given regular expression denotes a 1-unambiguous language; if it does, an equivalent 1-unambiguous regular expression can be constructed in worst-case optimal time.
Abstract: The ISO standard for the Standard Generalized Markup Language (SGML) provides a syntactic meta-language for the definition of textual markup systems. In the standard, the right-hand sides of productions are based on regular expressions, although only regular expressions that denote words unambiguously, in the sense of the ISO standard, are allowed. In general, a word that is denoted by a regular expression is witnessed by a sequence of occurrences of symbols in the regular expression that match the word. In an unambiguous regular expression as defined by Book et al. (1971, IEEE Trans. Comput. C-20 (2), 149–153), each word has at most one witness. But the SGML standard also requires that a witness be computed incrementally from the word with a one-symbol lookahead; we call such regular expressions 1-unambiguous. A regular language is a 1-unambiguous language if it is denoted by some 1-unambiguous regular expression. We give a Kleene theorem for 1-unambiguous languages and characterize 1-unambiguous regular languages in terms of structural properties of the minimal deterministic automata that recognize them. As a result we are able to prove the decidability of whether a given regular expression denotes a 1-unambiguous language; if it does, then we can construct an equivalent 1-unambiguous regular expression in worst-case optimal time.
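
As an illustration, 1-unambiguity can be tested via the Glushkov position automaton: the classic example (a|b)*a is not 1-unambiguous, while the equivalent b*a(b*a)* is. The sketch below (the AST encoding and function names are mine, not the paper's) computes first/follow sets of symbol positions and rejects an expression whenever some state has two successors carrying the same symbol.

```python
# Minimal 1-unambiguity ("determinism") check via first/follow positions.
def analyze(node, syms, follow):
    """node: ('sym', a) | ('alt', l, r) | ('cat', l, r) | ('star', e).
    Returns (nullable, first, last); positions index into syms."""
    kind = node[0]
    if kind == 'sym':
        p = len(syms)
        syms.append(node[1])
        follow.append(set())
        return False, {p}, {p}
    if kind == 'alt':
        n1, f1, l1 = analyze(node[1], syms, follow)
        n2, f2, l2 = analyze(node[2], syms, follow)
        return n1 or n2, f1 | f2, l1 | l2
    if kind == 'cat':
        n1, f1, l1 = analyze(node[1], syms, follow)
        n2, f2, l2 = analyze(node[2], syms, follow)
        for p in l1:
            follow[p] |= f2
        return n1 and n2, f1 | (f2 if n1 else set()), l2 | (l1 if n2 else set())
    if kind == 'star':
        _, f, l = analyze(node[1], syms, follow)
        for p in l:
            follow[p] |= f
        return True, f, l

def one_unambiguous(expr):
    syms, follow = [], []
    _, first, _ = analyze(expr, syms, follow)
    for succ in [first] + follow:  # no two same-symbol successors anywhere
        seen = set()
        for p in succ:
            if syms[p] in seen:
                return False
            seen.add(syms[p])
    return True

a, b = ('sym', 'a'), ('sym', 'b')
e1 = ('cat', ('star', ('alt', a, b)), a)                       # (a|b)*a
e2 = ('cat', ('cat', ('star', b), a),
      ('star', ('cat', ('star', b), a)))                       # b*a(b*a)*
print(one_unambiguous(e1), one_unambiguous(e2))                # False True
```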

263 citations


Journal ArticleDOI
TL;DR: The main results provide sufficient criteria for the validity of induction and coinduction principles; these principles are strengthened to deal with contexts, and the strengthening is shown to be valid when the (abstract) logic considered is contextually/functionally complete.
Abstract: We present a categorical logic formulation of induction and coinduction principles for reasoning about inductively and coinductively defined types. Our main results provide sufficient criteria for the validity of such principles: in the presence of comprehension, the induction principle for initial algebras is admissible, and dually, in the presence of quotient types, the coinduction principle for terminal coalgebras is admissible. After giving an alternative formulation of induction in terms of binary relations, we combine both principles and obtain a mixed induction/coinduction principle which allows us to reason about minimal solutions X ≅ Φ(X) where X may occur both positively and negatively in the type constructor Φ. We further strengthen these logical principles to deal with contexts and prove that such strengthening is valid when the (abstract) logic we consider is contextually/functionally complete. All the main results follow from a basic result about adjunctions between "categories of algebras" (inserters).

211 citations


Journal ArticleDOI
TL;DR: The register allocation problem for an imperative program is often modelled as the coloring problem of the interference graph of the control-flow graph of the program; this graph cannot in general be colored within a factor O(n^ε) from optimality unless NP = P.
Abstract: The register allocation problem for an imperative program is often modelled as the coloring problem of the interference graph of the control-flow graph of the program. The interference graph of a flow graph G is the intersection graph of some connected subgraphs of G. These connected subgraphs represent the lives, or life times, of variables, so the coloring problem models that two variables with overlapping life times should be in different registers. For general programs with unrestricted gotos, the interference graph can be any graph, and hence we cannot in general color within a factor O(n^ε) from optimality unless NP = P.
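
To make the modelling concrete, here is a small illustrative sketch (the live ranges and names are invented, not from the paper): build the interference graph from overlapping live ranges and color it greedily, with colors standing for registers.

```python
# Illustrative: interference graph from live ranges + greedy coloring.
from itertools import combinations

# live range of each variable as (first_def, last_use) line numbers; made up
lives = {'a': (0, 4), 'b': (1, 3), 'c': (2, 6), 'd': (5, 7)}

# two variables interfere when their live ranges overlap
interfere = {v: set() for v in lives}
for u, v in combinations(lives, 2):
    (s1, e1), (s2, e2) = lives[u], lives[v]
    if s1 <= e2 and s2 <= e1:
        interfere[u].add(v)
        interfere[v].add(u)

# greedy coloring, highest-degree first: give each variable the smallest
# register not used by an already-colored neighbor
reg = {}
for v in sorted(lives, key=lambda v: len(interfere[v]), reverse=True):
    used = {reg[u] for u in interfere[v] if u in reg}
    reg[v] = min(r for r in range(len(lives)) if r not in used)
print(reg)  # {'c': 0, 'a': 1, 'b': 2, 'd': 1}
```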

195 citations


Journal ArticleDOI
TL;DR: The Minimum Edge Color Sum (MECS) problem, which is shown to be NP-hard, is introduced, and it is shown that an n^{1−ε}-approximation is NP-hard, for some ε > 0.
Abstract: This paper studies an optimization problem that arises in the context of distributed resource allocation: Given a conflict graph that represents the competition of processors over resources, we seek an allocation under which no two jobs with conflicting requirements are executed simultaneously. Our objective is to minimize the average response time of the system. In an alternative formulation this is known as the Minimum Color Sum (MCS) problem (E. Kubicka and A. J. Schwenk, 1989. An introduction to chromatic sums, in "Proceedings of the ACM Computer Science Conference," pp. 39–45). We show that the algorithm based on finding iteratively a maximum independent set (MaxIS) is a 4-approximation to the MCS. This bound is tight to within a factor of 2. We give improved ratios for the classes of bipartite, bounded-degree, and line graphs. The bound generalizes to a 4ρ-approximation of MCS for classes of graphs for which the maximum independent set problem can be approximated within a factor of ρ. On the other hand, we show that an n^{1−ε}-approximation is NP-hard, for some ε > 0. For some instances of the resource allocation problem, such as the Dining Philosophers, an efficient solution requires edge coloring of the conflict graph. We introduce the Minimum Edge Color Sum (MECS) problem, which is shown to be NP-hard. We show that a 2-approximation to MECS(G) can be obtained distributively using compact coloring within O(log^2 n) communication rounds.
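
A minimal sketch of the MaxIS-iteration algorithm analyzed in the paper (the example graph and the brute-force MaxIS are mine; replacing the exact MaxIS by a ρ-approximation gives the 4ρ bound discussed above): peel off an independent set per round and charge each vertex its round number.

```python
# Illustrative: color sum via iterated maximum independent sets.
from itertools import combinations

def max_independent_set(vertices, adj):
    """Exact MaxIS by brute force, exponential; for the demo only."""
    for size in range(len(vertices), 0, -1):
        for cand in combinations(sorted(vertices), size):
            if all(v not in adj[u] for u, v in combinations(cand, 2)):
                return set(cand)
    return set()

def mcs_color(adj):
    remaining, rnd, color = set(adj), 1, {}
    while remaining:
        indep = max_independent_set(remaining, adj)
        for v in indep:
            color[v] = rnd      # round number = color of the vertex
        remaining -= indep
        rnd += 1
    return color, sum(color.values())

adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}  # a 5-cycle
print(mcs_color(adj))  # per-vertex colors and the resulting color sum
```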

170 citations


Journal Article
TL;DR: A hybrid genetic algorithm for the job shop scheduling problem, combining a genetic algorithm with a neural network, is described; the method is shown to perform well on complex production scheduling in both computation time and solution quality.
Abstract: A neural network model of the job shop scheduling problem is built, and the characteristics and properties of its solutions are studied. A hybrid genetic algorithm for the job shop scheduling problem, combining a genetic algorithm with a neural network, is described. The corresponding simulation shows that the method performs well on complex production scheduling, in both computation time and solution quality.

149 citations


Journal ArticleDOI
TL;DR: It is shown that factoring and the discrete logarithm are implicitly definable in any extension of S^1_2 admitting an NP-definition of primes about which it can prove that no number satisfying the definition is composite.
Abstract: We show that there is a pair of disjoint NP sets, whose disjointness is provable in S^1_2 and which cannot be separated by a set in P/poly, if the cryptosystem RSA is secure. Further we show that factoring and the discrete logarithm are implicitly definable in any extension of S^1_2 admitting an NP-definition of primes about which it can prove that no number satisfying the definition is composite.

130 citations


Journal ArticleDOI
TL;DR: Motivated by computer science challenges, it is suggested that the approach and methods of finite model theory be extended beyond finite structures.
Abstract: Motivated by computer science challenges, we suggest extending the approach and methods of finite model theory beyond finite structures.

105 citations


Journal ArticleDOI
TL;DR: The class of (generalized) Church-Rosser languages and the class of context-free languages are incomparable under set inclusion, which verifies a conjecture of McNaughton et al. [MNO88].
Abstract: The growing context-sensitive languages (GCSL) are characterized by a nondeterministic machine model, the so-called shrinking two-pushdown automaton (sTPDA). Then the deterministic version of this automaton (sDTPDA) is shown to characterize the class of generalized Church-Rosser languages (GCRL). Finally, we prove that each growing context-sensitive language is accepted in polynomial time by some one-way auxiliary pushdown automaton with a logarithmic space bound (OW-auxPDA[log, poly]). As a consequence the class of (generalized) Church-Rosser languages and the class of context-free languages are incomparable under set inclusion, which verifies a conjecture of McNaughton et al. [MNO88].

99 citations


Journal ArticleDOI
TL;DR: Study of the model checking problem for knowledge formulae in the S5_n Kripke structures generated by finite-state environments, in which states determine an observation for each agent, shows that in this setting model checking of common knowledge formulae is intractable, but efficient incremental algorithms are developed for formulae containing only knowledge operators.
Abstract: Logics of knowledge have been shown to provide a useful approach to the high level specification and analysis of distributed systems. It has been proposed that such systems can be developed using knowledge-based protocols, in which agents' actions have preconditions that test their state of knowledge. Both computer-assisted analysis of the knowledge properties of systems and automated compilation of knowledge-based protocols require the development of algorithms for the computation of states of knowledge. This paper studies one of the computational problems of interest, the model checking problem for knowledge formulae in the S5_n Kripke structures generated by finite state environments in which states determine an observation for each agent. Agents are assumed to have perfect recall and may operate synchronously or asynchronously. It is shown that, in this setting, model checking of common knowledge formulae is intractable, but efficient incremental algorithms are developed for formulae containing only knowledge operators. Connections to knowledge updates and compilation of knowledge-based protocols are discussed.

Journal ArticleDOI
TL;DR: This paper addresses a fundamental problem related to the induction of Boolean logic: establishing a Boolean function (or an extension) f that is true (resp., false) in every given true (resp., false) vector.
Abstract: In this paper, we address a fundamental problem related to the induction of Boolean logic: Given a set of data, represented as a set of binary "true n-vectors" (or "positive examples") and a set of "false n-vectors" (or "negative examples"), we establish a Boolean function (or an extension) f, so that f is true (resp., false) in every given true (resp., false) vector. We shall further require that such an extension belongs to a certain specified class of functions, e.g., class of positive functions, class of Horn functions, and so on. The class of functions represents our a priori knowledge or hypothesis about the extension f, which may be obtained from experience or from the analysis of mechanisms that may or may not cause the phenomena under consideration. The real-world data may contain errors, e.g., measurement and classification errors might come in when obtaining data, or there may be some other influential factors not represented as variables in the vectors. In such situations, we have to give up the goal of establishing an extension that is perfectly consistent with the given data, and we are satisfied with an extension f having the minimum number of misclassifications. Both problems, i.e., the problem of finding an extension within a specified class of Boolean functions and the problem of finding a minimum error extension in that class, will be extensively studied in this paper. For certain classes we shall provide polynomial algorithms, and for other cases we prove their NP-hardness.
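
As a small worked instance of the positive-function case (the encoding and example vectors are mine, not from the paper): a positive, i.e., monotone, extension exists iff no true vector is componentwise dominated by a false vector, and when it exists the minimal such extension can be written down directly.

```python
# Existence and construction of a positive (monotone) Boolean extension.
def leq(u, v):
    """Componentwise u <= v on 0/1 vectors."""
    return all(a <= b for a, b in zip(u, v))

def has_positive_extension(true_vecs, false_vecs):
    # a monotone f with f(t)=1, f(g)=0 exists iff no t is dominated by a g
    return not any(leq(t, g) for t in true_vecs for g in false_vecs)

def positive_extension(true_vecs):
    """Minimal positive extension: true exactly on points dominating
    some true vector."""
    return lambda x: any(leq(t, x) for t in true_vecs)

T = [(1, 0, 1), (0, 1, 1)]   # positive examples (made up)
F = [(1, 0, 0), (0, 0, 1)]   # negative examples (made up)
assert has_positive_extension(T, F)
f = positive_extension(T)
assert all(f(t) for t in T) and not any(f(g) for g in F)
```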

Journal ArticleDOI
TL;DR: A formal framework is set up to describe transition system specifications in the style of Plotkin; the framework has the power to express many-sortedness, general binding mechanisms, and substitutions, among other notions such as negative hypotheses and unary predicates on terms.
Abstract: We set up a formal framework to describe transition system specifications in the style of Plotkin. This framework has the power to express many-sortedness, general binding mechanisms, and substitutions, among other notions such as negative hypotheses and unary predicates on terms. The framework is used to present a conservativity format in operational semantics, which states sufficient criteria to ensure that the extension of a transition system specification with new transition rules does not affect the semantics of the original terms.

Journal ArticleDOI
Anuj Dawar1
TL;DR: In this paper, a restricted version of second order logic SO^ω is introduced, in which the second order quantifiers range over relations that are closed under the equivalence relation ≡^k of k-variable equivalence, for some k.
Abstract: We introduce a restricted version of second order logic SO^ω in which the second order quantifiers range over relations that are closed under the equivalence relation ≡^k of k-variable equivalence, for some k. This restricted second order logic is an effective fragment of the infinitary logic L^ω_{∞ω}, which differs from other such fragments in that it is not based on a fixpoint logic. We explore the relationship of SO^ω with fixpoint logics, showing that its inclusion relations with these logics are equivalent to problems in complexity theory. We also look at the expressibility of NP-complete problems in this logic.

Journal ArticleDOI
TL;DR: A new I/O automaton model and a new timed version are developed that permit the verification of general liveness properties on the basis of existing verification techniques; the models include a notion of receptiveness which extends the idea of receptiveness of other existing formalisms and enables the use of compositional verification techniques.
Abstract: When proving the correctness of algorithms in distributed systems, one generally considers safety conditions and liveness conditions. The Input/Output (I/O) automaton model and its timed version have been used successfully, but have focused on safety conditions and on a restricted form of liveness called fairness. In this paper we develop a new I/O automaton model, and a new timed I/O automaton model, that permit the verification of general liveness properties on the basis of existing verification techniques. Our models include a notion of receptiveness which extends the idea of receptiveness of other existing formalisms, and enables the use of compositional verification techniques. The presentation includes an embedding of the untimed model into the timed model which preserves all the interesting attributes of the untimed model. Thus, our models constitute a coordinated framework for the description of concurrent and distributed systems satisfying general liveness properties.


Journal ArticleDOI
TL;DR: This paper finds ways to keep the exponentially many weights of Winnow implicitly so that the time for the algorithm to compute a prediction and update its “virtual” weights is polynomial; other online algorithms with multiplicative weight updates whose loss bounds grow logarithmically with the dimension may be amenable to these methods.
Abstract: We reduce learning simple geometric concept classes to learning disjunctions over exponentially many variables. We then apply an online algorithm called Winnow whose number of prediction mistakes grows only logarithmically with the number of variables. The hypotheses of Winnow are linear threshold functions with one weight per variable. We find ways to keep the exponentially many weights of Winnow implicitly so that the time for the algorithm to compute a prediction and update its “virtual” weights is polynomial. Our method can be used to learn d-dimensional axis-parallel boxes when d is variable and unions of d-dimensional axis-parallel boxes when d is constant. The worst-case number of mistakes of our algorithms for the above classes is optimal to within a constant factor, and our algorithms inherit the noise robustness of Winnow. We think that other online algorithms with multiplicative weight updates whose loss bounds grow logarithmically with the dimension are amenable to our methods.
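
For reference, here is a minimal sketch of the explicit (non-virtual) Winnow update on which the paper's implicit-weight constructions build; the parameter choices and the toy target disjunction are assumptions for the demo, not from the paper.

```python
# Winnow with multiplicative updates for a monotone disjunction over n vars.
import random

def winnow(examples, n, alpha=2.0):
    theta = float(n)                 # threshold (a standard choice)
    w = [1.0] * n                    # one weight per variable
    mistakes = 0
    for x, label in examples:        # x is a 0/1 vector, label is 0/1
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != label:
            mistakes += 1
            if label == 1:           # promotion: scale up active variables
                w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
            else:                    # demotion: scale down active variables
                w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes

random.seed(0)
data = []
for _ in range(200):
    x = [random.randint(0, 1) for _ in range(8)]
    data.append((x, 1 if (x[0] or x[3]) else 0))   # toy target: x0 OR x3
w, m = winnow(data, 8)
print(m, [round(v, 2) for v in w])   # mistakes grow as O(k log n)
```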

Journal ArticleDOI
TL;DR: In this paper, an interpretation of Abadi and Cardelli's first-order functional object calculus into a typed π-calculus is presented, which validates the subtyping relation and the typing judgements of the object calculus and is computationally adequate.
Abstract: An interpretation of Abadi and Cardelli's first-order functional object calculus into a typed π-calculus is presented. The interpretation validates the subtyping relation and the typing judgements of the object calculus and is computationally adequate. This is the first interpretation of a typed object-oriented language into a process calculus. The study intends to offer a contribution to understanding, on the one hand, the relationship between π-calculus types and conventional types of programming languages and, on the other hand, the usefulness of the π-calculus as a metalanguage for the semantics of typed object-oriented languages. The type language for the π-calculus has Pierce and Sangiorgi's I/O annotations, to separate the capabilities of reading and writing on a channel, and variant types. Technical contributions of the paper are the presentation of variant types for the π-calculus and their typing and subtyping properties, and an analysis of behavioural equivalences in a π-calculus with variant types.

Journal ArticleDOI
TL;DR: A formal approach for modeling and analyzing concurrent systems is proposed which integrates performance characteristics in the early stages of the design process; the approach relies on both stochastically timed process algebras and stochastically timed Petri nets in order to exploit their complementary advantages.
Abstract: A formal approach for modeling and analyzing concurrent systems is proposed which integrates performance characteristics in the early stages of the design process. The approach relies on both stochastically timed process algebras and stochastically timed Petri nets in order to exploit their complementary advantages. The approach is instantiated to the case of EMPA (extended Markovian process algebra), introduced together with the collection of its four semantics and the notion of equivalence that are required in order to implement the approach. Finally, the case study of the alternating bit protocol is presented to illustrate the adequacy of the approach.

Journal ArticleDOI
TL;DR: A hierarchy of languages based on the number of strata allowed in programs is studied, showing that the class of stratified programs made of two strata has the expressive power of the whole family, thus expressing the computable queries.
Abstract: The expressive power of the family wILOG(¬) of relational query languages is investigated. The languages are rule based, with value invention and stratified negation. The semantics for value invention is based on Skolem functor terms. We study a hierarchy of languages based on the number of strata allowed in programs. We first show that, in the presence of value invention, the class of stratified programs made of two strata has the expressive power of the whole family, thus expressing the computable queries. We then show that the language wILOG≠ of programs with nonequality and without negation expresses the monotone computable queries, and that the language wILOG1/2,≠ of semipositive programs expresses the semimonotone computable queries.

Journal ArticleDOI
TL;DR: This thesis presents Chinese remaindering, combined with the explicit use of AND-OR gates, as a general tool for computing functions with integer values, and uses it to obtain depth-four threshold circuits of majority-depth two for arithmetic problems such as the logarithm and power series approximation.
Abstract: We investigate the complexity of computations with constant-depth threshold circuits. Such circuits are composed of gates that determine if the sum of their inputs is greater than a certain threshold. When restricted to polynomial size, these circuits compute exactly the functions in the class TC^0. These circuits are usually studied by measuring their efficiency in terms of their total depth. Using this point of view, the best division and iterated multiplication circuits have depth three and four, respectively. In this thesis, we propose a different approach. Since threshold gates are much more powerful than AND-OR gates, we allow the explicit use of AND-OR gates and consider the main measure of complexity to be the majority-depth of the circuit, i.e. the maximum number of threshold gates on any path in the circuit. Using this approach, we obtain division and iterated multiplication circuits of total depth four and five, but of majority-depth two and three. The technique used is called Chinese remaindering. We present this technique as a general tool for computing functions with integer values and use it to obtain depth-four threshold circuits of majority-depth two for other arithmetic problems such as the logarithm and power series approximation. We also consider the iterated multiplication problem for integers modulo q and for finite fields. The notion of majority-depth naturally leads to a hierarchy of subclasses of TC^0. We investigate this hierarchy and show that it is closely related to the usual depth hierarchy.
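
The Chinese remaindering idea can be seen in a few lines (the moduli and inputs below are made up, and this sequential code only mirrors the arithmetic, not the circuit): reduce the iterated product modulo several small primes independently, then recombine by CRT, which in the circuit becomes a weighted sum handled by threshold gates.

```python
# Iterated multiplication via Chinese remaindering (requires Python 3.8+
# for math.prod and the modular inverse form of pow).
from math import prod

primes = [101, 103, 107, 109, 113]   # pairwise coprime moduli, for the demo
xs = [7, 12, 5, 9]                   # numbers to multiply

# "parallel" low-depth part: one independent product per small modulus
residues = [prod(x % p for x in xs) % p for p in primes]

# reconstruction by CRT: a fixed weighted combination of the residues
M = prod(primes)
result = sum(r * (M // p) * pow(M // p, -1, p)
             for r, p in zip(residues, primes)) % M
assert result == prod(xs)            # valid because the product is below M
print(result)
```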

Journal ArticleDOI
TL;DR: The finiteness of ranges of tree transductions is shown to be decidable for TBY+, the composition closure of macro tree transductions; the class of string languages definable by TBY+ transductions via the yield mapping is a large class which is proved to form a substitution-closed full AFL.
Abstract: The finiteness of ranges of tree transductions is shown to be decidable for TBY+, the composition closure of macro tree transductions. Furthermore, TBY+ definable sets and TBY+ computable relations are considered, which are obtained by viewing a tree as an expression that denotes an element of a given algebra. A sufficient condition on the considered algebra is formulated under which the finiteness problem is decidable for TBY+ definable sets and for the ranges of TBY+ computable relations. The obtained result applies in particular to the class of string languages that can be defined by TBY+ transductions via the yield mapping. This is a large class which is proved to form a substitution-closed full AFL.

Journal ArticleDOI
TL;DR: For all syntactic complexity classes there exist complete problems under monotone projection reductions, and this positively answers a question by Stewart for a large number of complexity classes.
Abstract: In this article, the following results are shown: 1. For succinctly encoded problems s(A), completeness under polynomial time reductions is equivalent to completeness under projection reductions, an extremely weak reduction defined by a quantifier-free projective formula. 2. The succinct version s(A) of a computational problem A is complete under projection reductions for the class of problems characterizable with leaf language A, but not complete under monotone projections. 3. A strong conversion lemma: If A is reducible to B in polylogarithmic time, then the succinct version of A is monotone projection reducible to the succinct version of B. This result strengthens previous results by Papadimitriou and Yannakakis, and Balcazar and Lozano. It allows iterated application for multiple succinct problems. 4. For all syntactic complexity classes there exist complete problems under monotone projection reductions. This positively answers a question by Stewart for a large number of complexity classes.

Journal ArticleDOI
TL;DR: This work derives general bounds on the complexity of learning in the statistical query (SQ) model and in the PAC model with classification noise by considering the problem of boosting the accuracy of weak learning algorithms which fall within the SQ model.
Abstract: We derive general bounds on the complexity of learning in the statistical query (SQ) model and in the PAC model with classification noise. We do so by considering the problem of boosting the accuracy of weak learning algorithms which fall within the SQ model. This new model was introduced by Kearns to provide a general framework for efficient PAC learning in the presence of classification noise. We first show a general scheme for boosting the accuracy of weak SQ learning algorithms, proving that weak SQ learning is equivalent to strong SQ learning. The boosting is efficient and is used to show our main result of the first general upper bounds on the complexity of strong SQ learning. Since all SQ algorithms can be simulated in the PAC model with classification noise, we also obtain general upper bounds on learning in the presence of classification noise for classes which can be learned in the SQ model.

Journal ArticleDOI
TL;DR: A bisimulation-based characterization of the coarsest congruence contained within performance equivalence is given and it is shown that it is a natural extension of those standard in the untimed setting.
Abstract: Based on the hypothesis of durational actions and process synchronization with “busy waiting” mechanism, performance equivalence has been proposed to introduce a simple form of performance evaluation in process algebras. This equivalence enjoys many of the pleasant properties of those in the untimed setting but it is not a congruence for parallel composition with synchronization. In this paper we give a bisimulation-based characterization of the coarsest congruence contained within performance equivalence (and discuss alternative formulations). This problem was left open in several papers. We study how the new equivalence, called performance congruence, relates with other closed equivalences in the literature and show that, unlike other proposals, it is a natural extension of those standard in the untimed setting. The weak version of performance congruence, which abstracts from internal details, is also studied. A number of examples of processes related or taken apart by performance congruence, and its weak version, are provided. A nontrivial one is also presented to illustrate the utility of the new congruences. The paper concludes with further observations concerning performance congruence and with a discussion on related and further interesting work.

Journal ArticleDOI
TL;DR: This paper starts studies of set constraints in the environment given by equational specifications and shows that in the case of associativity and commutativity the problem of consistency of systems of set constraints is undecidable; in linear nonerasing shallow theories the consistency of systems of positive set constraints is NEXPTIME-complete, and in linear shallow theories the problem for positive and negative set constraints is decidable.
Abstract: Set constraints are relations between sets of ground terms over a given alphabet. They give a natural formalism for many problems in program analysis, type inference, order-sorted unification, and constraint logic programming. In this paper we start studies of set constraints in the environment given by equational specifications. We show that in the case of associativity (i.e., in free monoids) as well as in the case of associativity and commutativity (i.e., in commutative monoids) the problem of consistency of systems of set constraints is undecidable; in linear nonerasing shallow theories the consistency of systems of positive set constraints is NEXPTIME-complete and in linear shallow theories the problem for positive and negative set constraints is decidable.

Journal ArticleDOI
TL;DR: A logical language is defined that is in PTIME but strictly more expressive than fixed-point logic with counting, based on a mechanism of restricted non-determinism that uses a so-called symmetry-based choice operator whose application is restricted to symmetric elements.
Abstract: We propose a mechanism of restricted non-determinism in logical languages that uses a so-called symmetry-based choice operator whose application is restricted to symmetric elements. Based on this mechanism, we define a logical language that is in PTIME but strictly more expressive than fixed-point logic with counting. This language is based, on the one hand, on an extension of the inflationary fixed-point logic with a choice operator, called specified symmetry choice, and, on the other hand, on the introduction of a so-called logical reduction operator which, when added to the above extension of fixpoint logic, allows us to increase the expressive power.

Journal ArticleDOI
TL;DR: The class of robustly verifiable transactions over first-order logic is exactly the class of transactions that admit the local form of verifiability, and the implications of these results for the design of verifiable transaction languages are discussed.
Abstract: It is often necessary to ensure that database transactions preserve integrity constraints that specify valid database states. While it is possible to monitor for violations of constraints at run-time, rolling back transactions when violations are detected, it is preferable to verify correctness statically, before transactions are executed. This can be accomplished if we can verify transaction safety with respect to a set of constraints by means of calculating weakest preconditions. We study properties of weakest preconditions for a number of transaction and specification languages. We show that some simple transactions do not admit weakest preconditions over first-order logic and some of its extensions such as first-order logic with counting and monadic Σ^1_1. We also show that the class of transactions that admit weakest preconditions over first-order logic cannot be captured by any transaction language. We consider a strong local form of verifiability, and show that it is different from the general form. We define robustly verifiable transactions as those that can be statically analyzed regardless of extensions to the signature of the specification language, and we show that the class of robustly verifiable transactions over first-order logic is exactly the class of transactions that admit the local form of verifiability. We discuss the implications of these results for the design of verifiable transaction languages.
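
A toy sketch of the weakest-precondition calculation underlying this kind of static verification (the transaction, the constraint, and the use of sympy are my assumptions, not the paper's language): for assignments, wp(v := e, Q) = Q[e/v], composed right to left over a sequence.

```python
# Weakest precondition of a sequence of assignments by backwards substitution.
import sympy as sp

x, y = sp.symbols('x y')

constraint = x + y > 0                 # integrity constraint on valid states
transaction = [(x, x - 5), (y, 2 * y)]  # sequential assignments x:=x-5; y:=2y

# wp(v := e, Q) = Q[e/v]; apply the assignments from last to first
wp = constraint
for var, expr in reversed(transaction):
    wp = wp.subs(var, expr)

print(sp.simplify(wp))  # states from which the transaction preserves x+y>0
```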

Journal ArticleDOI
TL;DR: It is shown that for sets of Horn clauses saturated under basic paramodulation the word and unifiability problems are in NP and the number of minimal unifiers is simply exponential; it follows that shallow unifiability is in NP, which is optimal since unifiability in ground theories is already NP-hard.
Abstract: It is shown that for sets of Horn clauses saturated under basic paramodulation the word and unifiability problems are in NP, and the number of minimal unifiers is simply exponential (i). For Horn sets saturated wrt a special ordering under the more restrictive inference rule of basic superposition, the word and unifiability problems are still decidable and unification is finitary (ii). These two results are applied to the following languages. For shallow presentations (equations with variables at depth at most one) we show that the closure under paramodulation can be computed in polynomial time. Applying result (i), it follows that shallow unifiability is in NP, which is optimal since unifiability in ground theories is already NP-hard. The shallow word problem is even shown to be polynomial. Generalizing shallow theories to the Horn case, we obtain (two versions of) a language we call Catalog, a natural extension of Datalog to include functions and equality. The closure under paramodulation is finite for Catalog sets, hence (i) still applies. For Catalog sets S the decidability of the full first-order theory of T(F)/=_S is shown as well. Finally we define standard theories, which include and significantly extend shallow theories. Standard presentations can be finitely closed under superposition and result (ii) applies, thus obtaining a new fundamental class with decidable word and unifiability problems and where unification is finitary.

Journal ArticleDOI
TL;DR: A model for representing search in theorem proving is presented that captures the notion of contraction, which has been central in some of the recent developments in theorem proving; an approach to measuring the complexity of search is outlined that can be applied to analyze and evaluate the behaviour of theorem-proving strategies.
Abstract: We present a model for representing search in theorem proving. This model captures the notion of contraction , which has been central in some of the recent developments in theorem proving. We outline an approach to measuring the complexity of search which can be applied to analyze and evaluate the behaviour of theorem-proving strategies. Using our framework, we compare contraction-based strategies of different contraction power and show how they affect the evolution of the respective search spaces during the derivation.