
Showing papers in "BRICS Report Series in 1998"


Journal ArticleDOI
TL;DR: A new group signature scheme that is well suited for large groups, i.e., the lengths of the group's public key and of signatures do not depend on the size of the group, based on a variation of the RSA problem called the strong RSA assumption.
Abstract: The concept of group signatures allows a group member to sign messages anonymously on behalf of the group. However, in the case of a dispute, the identity of a signature's originator can be revealed by a designated entity. In this paper we propose a new group signature scheme that is well suited for large groups, i.e., the lengths of the group's public key and of signatures do not depend on the size of the group. Our scheme is based on a variation of the RSA problem called the strong RSA assumption. It is also more efficient than previous ones satisfying these requirements.
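For reference, the strong RSA assumption invoked above has the following standard formulation (a textbook statement, not quoted from the paper):

    \[
    \text{Given an RSA modulus } n \text{ and a random } z \in \mathbb{Z}_n^{*},
    \text{ it is infeasible to find } u \in \mathbb{Z}_n^{*} \text{ and } e > 1
    \text{ such that } u^{e} \equiv z \pmod{n}.
    \]

Ordinary RSA fixes the exponent e in advance; the strong variant allows the adversary to choose e, making the assumption formally stronger.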

112 citations


Journal ArticleDOI
TL;DR: Clock Difference Diagrams are presented: a new BDD-like data structure for effective representation and manipulation of certain non-convex subsets of the Euclidean space, notably those encountered in verification of timed automata.
Abstract: We sketch a BDD-like structure for representing unions of simple convex polyhedra, describing the legal values of a set of clocks given bounds on the values of clocks and clock differences.

83 citations


Journal ArticleDOI
TL;DR: Clock Difference Diagrams, CDD's, a BDD-like data-structure for representing and effectively manipulating certain non-convex subsets of the Euclidean space, notably those encountered during verification of timed automata, is presented.
Abstract: One of the major problems in applying automatic verification tools to industrial-size systems is the excessive amount of memory required during the state-space exploration of a model. In the setting of real-time, this problem of state-explosion requires extra attention as information must be kept not only on the discrete control structure but also on the values of continuous clock variables. In this paper, we present Clock Difference Diagrams, CDD's, a BDD-like data-structure for representing and effectively manipulating certain non-convex subsets of the Euclidean space, notably those encountered during verification of timed automata. A version of the real-time verification tool Uppaal using CDD's as a compact data structure for storing explored symbolic states has been implemented. Our experimental results demonstrate significant space-savings: for 8 industrial examples, the savings are between 46% and 99% with moderate increase in runtime. We further report on how the symbolic state-space exploration itself may be carried out using CDD's.
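To make the data structure concrete, here is a toy Python sketch of a CDD-style node (the names and representation are invented here for illustration; a real implementation such as Uppaal's additionally needs canonical forms and set operations):

    # Toy sketch of a Clock Difference Diagram (CDD) node: each internal node
    # tests one clock difference x_i - x_j and branches on disjoint intervals.
    # This illustrates membership testing only, not reduction rules or
    # union/intersection.

    TRUE, FALSE = True, False  # terminal nodes

    class CddNode:
        def __init__(self, i, j, arcs):
            self.i, self.j = i, j   # node tests the clock difference x_i - x_j
            self.arcs = arcs        # list of ((lo, hi), child), intervals disjoint

    def contains(node, valuation):
        """Check whether a clock valuation (dict: clock -> value) is in the set."""
        if node is TRUE or node is FALSE:
            return node is TRUE
        d = valuation[node.i] - valuation[node.j]
        for (lo, hi), child in node.arcs:
            if lo <= d <= hi:
                return contains(child, valuation)
        return False

    # The (non-convex) union of the zones 1 <= x - y <= 3 and 5 <= x - y <= 7:
    dd = CddNode("x", "y", [((1, 3), TRUE), ((5, 7), TRUE)])
    assert contains(dd, {"x": 6, "y": 4})      # x - y = 2
    assert not contains(dd, {"x": 8, "y": 4})  # x - y = 4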

45 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show tight upper and lower bounds for the marked ancestor problem; the lower bounds are proved in the cell probe model, the algorithms run on a unit-cost RAM, and (often optimal) lower bounds follow for a number of other problems.
Abstract: Consider a rooted tree whose nodes can be in two states: marked or unmarked. The marked ancestor problem is to maintain a data structure with the following operations: mark(v) marks node v; unmark(v) removes any marks from node v; firstmarked(v) returns the first marked node on the path from v to the root. We show tight upper and lower bounds for the marked ancestor problem. The lower bounds are proved in the cell probe model; the algorithms run on a unit-cost RAM. As easy corollaries we prove (often optimal) lower bounds on a number of problems. These include planar range searching, including the existential or emptiness problem, priority search trees, static tree union-find, and several problems from dynamic computational geometry, including segment intersection, interval maintenance, and ray shooting in the plane. Our upper bounds improve algorithms from various fields, including coloured ancestor problems and maintenance of balanced parentheses.
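As a baseline, the three operations can be made concrete with a naive linear-time Python sketch (purely illustrative; the paper's contribution is matching upper and lower bounds far below this):

    # Naive baseline for the marked ancestor problem: queries walk to the root,
    # so firstmarked costs O(depth). The paper shows what is achievable with a
    # proper data structure on a unit-cost RAM.

    class Tree:
        def __init__(self, parent):
            self.parent = parent      # parent[v] for each node; the root maps to None
            self.marked = set()

        def mark(self, v):
            self.marked.add(v)

        def unmark(self, v):
            self.marked.discard(v)

        def firstmarked(self, v):
            """First marked node on the path from v to the root, or None."""
            while v is not None:
                if v in self.marked:
                    return v
                v = self.parent[v]
            return None

    # Path 0 - 1 - 2 - 3, rooted at 0:
    t = Tree({0: None, 1: 0, 2: 1, 3: 2})
    t.mark(1)
    assert t.firstmarked(3) == 1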

45 citations


Journal ArticleDOI
TL;DR: This work shows that for n-element subsets, constant worst-case query time can be obtained using B + O(log log |U|) + o(n) bits of storage, where B = ⌈log₂ (|U| choose n)⌉ is the minimum number of bits needed to represent all such subsets.
Abstract: A static dictionary is a data structure for storing subsets of a finite universe U, so that membership queries can be answered efficiently. We study this problem in a unit-cost RAM model with word size Omega(log |U|), and show that for n-element subsets, constant worst-case query time can be obtained using B + O(log log |U|) + o(n) bits of storage, where B = ⌈log₂ (|U| choose n)⌉ is the minimum number of bits needed to represent all such subsets. The solution for dense subsets uses B + O(|U| log log |U| / log |U|) bits of storage, and supports constant-time rank queries. In a dynamic setting, allowing insertions and deletions, our techniques give an O(B)-bit space usage.
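The information-theoretic bound B is simple to compute exactly; a small Python sketch (the function name is ours):

    from math import comb

    def info_theoretic_bits(U, n):
        """B = ceil(log2(C(U, n))): the minimum number of bits needed to
        represent an arbitrary n-element subset of a universe of size U.
        (ceil(log2 m) is computed exactly as the bit length of m - 1.)"""
        return (comb(U, n) - 1).bit_length()

    # e.g. 1000 elements out of a 32-bit universe:
    print(info_theoretic_bits(2**32, 1000))  # about 23.5 bits per element, not 32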

28 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the secure channel model and proposed protocols for WSS, VSS, and MPC with a non-zero error probability and showed that weak secret sharing is not secure against an adaptive adversary.
Abstract: We consider verifiable secret sharing (VSS) and multiparty computation (MPC) in the secure channels model, where a broadcast channel is given and a non-zero error probability is allowed. In this model Rabin and Ben-Or proposed VSS and MPC protocols, secure against an adversary that can corrupt any minority of the players. In this paper, we first observe that a subprotocol of theirs, known as weak secret sharing (WSS), is not secure against an adaptive adversary, contrary to what was believed earlier. We then propose new and adaptively secure protocols for WSS, VSS and MPC that are substantially more efficient than the original ones. Our protocols generalize easily to provide security against general Q2 adversaries.
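For orientation, WSS and VSS are strengthenings of plain polynomial secret sharing. The following minimal Python sketch of Shamir's scheme (our illustration of the underlying primitive, not the paper's protocol, which adds broadcast rounds and verification on top) shows the object being strengthened:

    # Minimal Shamir secret sharing over GF(P): a degree-t polynomial with
    # constant term `secret`; any t+1 shares reconstruct by interpolation.
    import random

    P = 2**127 - 1  # a prime field modulus (illustrative choice)

    def share(secret, t, n):
        """Split `secret` into n shares, any t+1 of which reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at 0 recovers the constant term."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(42, t=2, n=5)
    assert reconstruct(shares[:3]) == 42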

26 citations


Journal ArticleDOI
TL;DR: In this article, the authors generalize and improve the security and efficiency of the verifiable encryption scheme of Asokan et al. such that it can rely on more general assumptions, and can be proven secure without relying on random oracles.
Abstract: We generalise and improve the security and efficiency of the verifiable encryption scheme of Asokan et al., such that it can rely on more general assumptions and can be proven secure without relying on random oracles. We show a new application of verifiable encryption to group signatures with separability; these schemes do not need special-purpose keys but can work with a wide range of signature and encryption schemes already in use. Finally, we extend our basic primitive to verifiable threshold and group encryption. By encrypting digital signatures this way, one gets new solutions to the verifiable signature sharing problem.

26 citations


Journal ArticleDOI
TL;DR: In this paper, the authors determine precisely the arithmetical and computational strength of weaker, function-parameter-free schematic versions S⁻ of principles S of elementary analysis, and show a sharp borderline between fragments of analysis which are still conservative over PRA and extensions which just go beyond the strength of PRA.
Abstract: It is well-known by now that large parts of (non-constructive) mathematical reasoning can be carried out in systems T which are conservative over primitive recursive arithmetic PRA (and even much weaker systems). On the other hand there are principles S of elementary analysis (like the Bolzano-Weierstrass principle, the existence of a limit superior for bounded sequences etc.) which are known to be equivalent to arithmetical comprehension (relative to T ) and therefore go far beyond the strength of PRA (when added to T ). In this paper we determine precisely the arithmetical and computational strength (in terms of optimal conservation results and subrecursive characterizations of provably recursive functions) of weaker function parameter-free schematic versions S− of S, thereby exhibiting different levels of strength between these principles as well as a sharp borderline between fragments of analysis which are still conservative over PRA and extensions which just go beyond the strength of PRA.

21 citations


Journal ArticleDOI
TL;DR: A rich theory of LTL is now available with which one can effectively verify whether a finite state system meets its specification, and the verification task can be automated to handle large systems of practical interest.
Abstract: Linear time Temporal Logic (LTL) as proposed by Pnueli [37] has become a well established tool for specifying the dynamic behaviour of distributed systems. A basic feature of LTL is that its formulas are interpreted over sequences. Typically, such a sequence will model a computation of a system; a sequence of states visited by the system or a sequence of actions executed by the system during the course of the computation. A system is said to satisfy a specification expressed as an LTL formula in case every computation of the system is a model of the formula. A rich theory of LTL is now available using which one can effectively verify whether a finite state system meets its specification [51]. Indeed, the verification task can be automated (for instance using the software packages SPIN [21] and FormalCheck [2]) to handle large systems of practical interest.
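To make the sequence-based interpretation concrete, the textbook satisfaction clauses for the two most common temporal operators, over an infinite sequence σ = s₀s₁s₂… with suffixes σ^i = s_i s_{i+1}…, read:

    \[
    \sigma \models \mathsf{G}\,\varphi \iff \forall i \geq 0.\ \sigma^{i} \models \varphi,
    \qquad
    \sigma \models \mathsf{F}\,\varphi \iff \exists i \geq 0.\ \sigma^{i} \models \varphi.
    \]

For instance, G(request → F grant) holds of a computation exactly when every request is eventually followed by a grant.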

20 citations


Journal ArticleDOI
Olivier Danvy1
TL;DR: John Hughes has proposed a new paradigm for partial evaluation, "Type Specialization," based on type inference instead of symbolic interpretation; this paper suggests a very simple type-directed solution to the problem.
Abstract: Partial evaluation specializes terms, but traditionally this specialization does not apply to the type of these terms. As a result, specializing, e.g., an interpreter written in a typed language, which requires a "universal" type to encode expressible values, yields residual programs with type tags all over. Neil Jones has stated that getting rid of these type tags was an open problem, despite possible solutions such as Torben Mogensen's "constructor specialization." To solve this problem, John Hughes has proposed a new paradigm for partial evaluation, "Type Specialization," based on type inference instead of being based on symbolic interpretation. Type Specialization is very elegant in principle but it also appears non-trivial in practice. Stating the problem in terms of types instead of in terms of type encodings suggests a very simple type-directed solution, namely, to use a projection from the universal type to the specific type of the residual program. Standard partial evaluation then yields a residual program without type tags, simply and efficiently.
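A rough illustration of the tagging problem and the projection idea in Python (hypothetical names; the actual development concerns typed languages, where the tags show up in the type of the residual program):

    # Illustrative sketch of the "type tag" problem: an interpreter for a
    # typed object language must box all expressible values in one universal
    # type, and naive specialization leaves those tags in residual programs.
    from dataclasses import dataclass

    @dataclass
    class Univ:            # the "universal" type the interpreter computes with
        tag: str           # e.g. "int" or "fun"
        val: object

    def project_int(u):
        """Projection from the universal type to the specific type int.
        Post-composing the specialized program with such a projection strips
        the residual tags, which is the essence of the solution sketched above."""
        assert u.tag == "int"
        return u.val

    # A naively specialized interpreter returns Univ("int", 42);
    # projecting yields the tag-free value 42.
    print(project_int(Univ("int", 42)))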

20 citations


Journal ArticleDOI
TL;DR: In this paper, the collective token interpretation of Petri nets in terms of theories and theory morphisms in partial membership equational logic has been studied, and the notion of adjunction has been used to express each connection.
Abstract: In recent years, several semantics for place/transition Petri nets have been proposed that adopt the collective token philosophy. We investigate distinctions and similarities between three such models, namely configuration structures, concurrent transition systems, and (strictly) symmetric (strict) monoidal categories. We use the notion of adjunction to express each connection. We also present a purely logical description of the collective token interpretation of net behaviours in terms of theories and theory morphisms in partial membership equational logic.

Journal ArticleDOI
TL;DR: This work revisits Bondorf and Palsberg's compilation of actions using Similix and compares it in detail with using an online type-directed partial evaluator, which appears to consume about 7 times less space and to be about 28 times faster.
Abstract: We revisit Bondorf and Palsberg's compilation of actions using the offline syntax-directed partial evaluator Similix (FPCA'93, JFP'96), and we compare it in detail with using an online type-directed partial evaluator. In contrast to Similix, our type-directed partial evaluator is idempotent and requires no "binding-time improvements." It also appears to consume about 7 times less space and to be about 28 times faster than Similix, and to yield residual programs that are perceptibly more efficient than those generated by Similix.

Journal ArticleDOI
TL;DR: In this article, the authors propose to transform source programs into continuation-passing style (CPS), replacing handle and raise expressions by continuation-catching and throwing expressions, respectively.
Abstract: ML's exception handling makes it possible to describe exceptional execution flows conveniently, but it also forms a performance bottleneck. Our goal is to reduce this overhead by source-level transformation. To this end, we transform source programs into continuation-passing style (CPS), replacing handle and raise expressions by continuation-catching and throwing expressions, respectively. CPS-transforming every expression, however, introduces a new cost. We therefore use an exception analysis to transform expressions selectively: if an expression is statically determined to involve exceptions then it is CPS-transformed; otherwise, it is left in direct style. In this article, we formalize this selective CPS transformation, prove its correctness, and present early experimental data indicating its effect on ML programs.
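The transformation can be illustrated with a small Python sketch (our own example; the paper works on ML source): expressions that may raise exceptions are rewritten to take a success continuation k and a handler continuation h, while statically exception-free code stays in direct style:

    # Sketch of selective CPS: only the possibly-raising expression is
    # transformed; pure arguments remain in direct style.

    def div_cps(x, y, k, h):
        """CPS version of division: k is the normal continuation,
        h the handler continuation (replacing `raise`)."""
        if y == 0:
            return h("Division")   # raise  ->  throw to the handler continuation
        return k(x // y)           # normal result -> success continuation

    def mean_cps(total, count, k, h):
        # `total` and `count` are statically exception-free, so they are left
        # in direct style; only the division is CPS-transformed.
        return div_cps(total, count, k, h)

    # handle ... with  ->  supplying the two continuations:
    print(mean_cps(10, 2, lambda v: v, lambda exn: f"handled {exn}"))  # 5
    print(mean_cps(10, 0, lambda v: v, lambda exn: f"handled {exn}"))  # handled Division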

Journal ArticleDOI
TL;DR: This paper describes ongoing work on the automatic construction of formal models from real-time implementations, based on measurements of the timed behavior of the threads of an implementation, their causal interaction patterns, and externally visible events.
Abstract: This paper describes ongoing work on the automatic construction of formal models from real-time implementations. The model construction is based on measurements of the timed behavior of the threads of an implementation, their causal interaction patterns and externally visible events. A specification of the timed behavior is modelled in timed automata and checked against the generated model in order to validate the implementation's timed behavior.

Journal ArticleDOI
TL;DR: In this paper, it is shown that every finitely presented Heyting algebra is in fact co-Heyting, that every element of such an algebra can be expressed as a finite join of join-irreducible elements, and that the minimal and maximal join-irreducible elements can be constructed explicitly in terms of a given presentation.
Abstract: In this paper we study the structure of finitely presented Heyting algebras. Using algebraic techniques (as opposed to techniques from proof-theory) we show that every such Heyting algebra is in fact co-Heyting, improving on a result of Ghilardi who showed that Heyting algebras free on a finite set of generators are co-Heyting. Along the way we give a new and simple proof of the finite model property. Our main technical tool is a representation of finitely presented Heyting algebras in terms of a colimit of finite distributive lattices. As applications we construct explicitly the minimal join-irreducible elements (the atoms) and the maximal join-irreducible elements of a finitely presented Heyting algebra in terms of a given presentation. This gives as well a new proof of the disjunction property for intuitionistic propositional logic. Unfortunately not very much is known about the structure of Heyting algebras, although it is understood that implication causes the complex structure of Heyting algebras. Just to name an example, the free Boolean algebra on one generator has four elements, whereas the free Heyting algebra on one generator is infinite. Our research was motivated by a simple application of Pitts' uniform interpolation theorem [11]. Combining it with the old analysis of Heyting algebras free on a finite set of generators by Urquhart [13] we get a faithful functor J : HA^op_f.p. → PoSet, sending a finitely presented Heyting algebra to the partially ordered set of its join-irreducible elements, and a map between Heyting algebras to its left adjoint restricted to join-irreducible elements. We will explore the induced duality in more detail in [5]. Let us briefly browse through the contents of this paper: The first section recapitulates the basic notions, mainly that of the implicational degree of an element in a Heyting algebra. This is a notion relative to a given set of generators. In the next section we study finite Heyting algebras. Our contribution is a simple proof of the finite model property which names in particular a canonical family of finite Heyting algebras into which we can embed a given finitely presented one. In Section 3 we recapitulate the standard duality between finite distributive lattices and finite posets. The 'new' feature here is a strict categorical formulation which helps simplify some proofs and avoid calculations. In the following section we recapitulate the description given by Ghilardi [8] of how to adjoin implications to a finite distributive lattice, thereby not destroying a given set of implications. This construction is our major technical ingredient in Section 5, where we show that every finitely presented Heyting algebra is co-Heyting, i.e., that the operation (−) \ (−) dual to implication is defined. This result improves on Ghilardi's [8] that this is true for Heyting algebras free on a finite set of generators. Then we go on analysing the structure of finitely presented Heyting algebras in Section 6. We show that every element can be expressed as a finite join of join-irreducibles, and calculate explicitly the maximal join-irreducible elements in such a Heyting algebra (in terms of a given presentation). As a consequence we give a new proof of the disjunction property for propositional intuitionistic logic. As well, we calculate the minimal join-irreducible elements, which are nothing but the atoms of the Heyting algebra. Finally, we show how all this material can be used to express the category of finitely presented Heyting algebras as a category of fractions of a certain category whose objects are morphisms between finite distributive lattices.
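For orientation, the defining adjunctions (standard definitions): Heyting implication is characterized by the first equivalence below, and the co-Heyting subtraction shown here to exist is its order dual:

    \[
    a \wedge b \le c \iff a \le b \to c,
    \qquad
    a \setminus b \le c \iff a \le b \vee c.
    \]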

Journal ArticleDOI
TL;DR: The goal of this work is to demonstrate how to program with type-indexed values within a Hindley-Milner type system; two solutions are proposed and compared, one of which requires first-class and higher-order polymorphism and is thus not implementable in the core language of ML.
Abstract: A Hindley-Milner type system such as ML's seems to prohibit type-indexed values, i.e., functions that map a family of types to a family of values. Such functions generally perform case analysis on the input types and return values of possibly different types. The goal of our work is to demonstrate how to program with type-indexed values within a Hindley-Milner type system. Our first approach is to interpret an input type as its corresponding value, recursively. This solution is type-safe, in the sense that the ML type system statically prevents any mismatch between the input type and function arguments that depend on this type. Such specific type interpretations, however, prevent us from combining different type-indexed values that share the same type. To meet this objection, we focus on finding a value-independent type encoding that can be shared by different functions. We propose and compare two solutions. One requires first-class and higher-order polymorphism, and, thus, is not implementable in the core language of ML, but it can be programmed using higher-order functors in Standard ML of New Jersey. Its usage, however, is clumsy. The other approach uses embedding/projection functions. It appears to be more practical. We demonstrate the usefulness of type-indexed values through examples including type-directed partial evaluation, C printf-like formatting, and subtype coercions. Finally, we discuss the tradeoffs between our approach and some other solutions based on more expressive typing disciplines.
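The printf example can be sketched in Python in a combinator style (our transliteration; in Python the type-indexing is merely dynamic, whereas the paper's point is making such code typecheck under Hindley-Milner):

    # printf-style formatting as composable directives: each directive takes a
    # string continuation, and the arity of the final formatter depends on the
    # directives used; this dependence is exactly what "type-indexed" means.

    def lit(s):
        return lambda k: lambda acc: k(acc + s)

    def num(k):
        return lambda acc: lambda n: k(acc + str(n))

    def seq(d1, d2):
        return lambda k: d1(d2(k))

    def sprintf(d):
        return d(lambda acc: acc)("")

    # The directive for "x = %d!":
    spec = seq(lit("x = "), seq(num, lit("!")))
    print(sprintf(spec)(42))   # prints "x = 42!"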

Journal ArticleDOI
TL;DR: A quadratic-time algorithm is presented that computes an optimal alignment of two coding DNA sequences in the model under the assumption of affine gap cost; the constant factor of the running time is believed to be small enough to make the algorithm feasible in practice.
Abstract: We discuss a model for the evolutionary distance between two coding DNA sequences which specializes to the DNA/protein model proposed in Hein [3]. We discuss the DNA/protein model in detail and present a quadratic-time algorithm that computes an optimal alignment of two coding DNA sequences in the model under the assumption of affine gap cost. The algorithm solves a conjecture in [3], and we believe that the constant factor of the running time is sufficiently small to make the algorithm feasible in practice.
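For orientation, these are the classical quadratic-time affine-gap recurrences (Gotoh's algorithm) that the paper's richer DNA/protein cost model refines; this sketch computes only the optimal cost, with gap cost g(k) = open + k * ext for a gap of length k:

    def align(a, b, sub, open_, ext):
        """Global alignment cost with affine gaps, O(len(a) * len(b)) time."""
        INF = float("inf")
        n, m = len(a), len(b)
        D = [[0.0] * (m + 1) for _ in range(n + 1)]   # best cost, any ending
        E = [[INF] * (m + 1) for _ in range(n + 1)]   # ending with a gap in a
        F = [[INF] * (m + 1) for _ in range(n + 1)]   # ending with a gap in b
        for j in range(1, m + 1):
            E[0][j] = D[0][j] = open_ + j * ext
        for i in range(1, n + 1):
            F[i][0] = D[i][0] = open_ + i * ext
            for j in range(1, m + 1):
                E[i][j] = min(E[i][j - 1] + ext, D[i][j - 1] + open_ + ext)
                F[i][j] = min(F[i - 1][j] + ext, D[i - 1][j] + open_ + ext)
                D[i][j] = min(D[i - 1][j - 1] + sub(a[i - 1], b[j - 1]),
                              E[i][j], F[i][j])
        return D[n][m]

    cost = align("ACGT", "AGT", lambda x, y: 0 if x == y else 1, open_=2, ext=1)
    print(cost)  # one length-1 deletion: open + ext = 3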

Journal ArticleDOI
TL;DR: A comparison-based sorting algorithm is presented that achieves a time-space product of O(n²), matching Beame's lower bound for the full range of space bounds between log n and n/log n; the algorithm is optimal for comparison-based models as well as for the very powerful general models considered by Beame.
Abstract: We study the fundamental problem of sorting in a sequential model of computation and in particular consider the time-space trade-off (product of time and space) for this problem. P. Beame (1991) has shown a lower bound of Ω(n²) for this product, leaving a gap of a logarithmic factor up to the previously best known upper bound of O(n² log n) due to G.N. Frederickson (1987). Since then, no progress has been made towards tightening this gap. The main contribution of this paper is a comparison-based sorting algorithm which closes the gap by meeting the lower bound of Beame. The time-space product O(n²) upper bound holds for the full range of space bounds between log n and n/log n. Hence in this range our algorithm is optimal for comparison-based models as well as for the very powerful general models considered by Beame.

Journal ArticleDOI
TL;DR: An elementary proof of the fixpoint alternation hierarchy in arithmetic is provided, which in turn allows us to simplify the proof of the modal mu-calculus alternation hierarchy.
Abstract: We provide an elementary proof of the fixpoint alternation hierarchy in arithmetic, which in turn allows us to simplify the proof of the modal mu-calculus alternation hierarchy. We further show that the alternation hierarchy on the binary tree is strict, resolving a problem of Niwinski.
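A standard example of the alternation being measured (not taken from the paper): expressing "there is a path along which p holds infinitely often" requires a genuine ν/μ alternation,

    \[
    \nu X.\, \mu Y.\, \big( (p \wedge \Diamond X) \vee \Diamond Y \big),
    \]

and strictness of the hierarchy means such nestings cannot in general be flattened into formulas with fewer alternations.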

Journal ArticleDOI
TL;DR: This work relates an operational notion of continuation semantics to the traditional CPS transformation and uses it to account for the control operator shift and the control delimiter reset operationally, thus obtaining a native and modular implementation of the entire CPS hierarchy.
Abstract: We explore the hierarchy of control induced by successive transformations into continuation-passing style (CPS) in the presence of "control delimiters" and "composable continuations". Specifically, we investigate the structural operational semantics associated with the CPS hierarchy. To this end, we characterize an operational notion of continuation semantics. We relate it to the traditional CPS transformation and we use it to account for the control operator shift and the control delimiter reset operationally. We then transcribe the resulting continuation semantics in ML, thus obtaining a native and modular implementation of the entire hierarchy. We illustrate it with a few examples, the most significant of which is layered monads.
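The first level of the hierarchy is easy to transcribe; here is a minimal CPS encoding of shift and reset in Python (an illustration under our own naming, not the paper's ML implementation):

    # A computation is a function from a continuation to an answer. `shift`
    # hands its body the delimited continuation, reified as a function from
    # values to computations; `reset` delimits the captured context.

    def unit(v):
        return lambda k: k(v)

    def bind(m, f):
        return lambda k: m(lambda v: f(v)(k))

    def reset(m):
        """Run m with the empty (identity) continuation, delimiting control."""
        return m(lambda v: v)

    def shift(f):
        # f receives the captured continuation as value -> computation
        return lambda k: reset(f(lambda v: unit(k(v))))

    # reset (1 + shift (fn k => k (k 2)))  evaluates to  1 + 1 + 2 = 4
    expr = bind(shift(lambda k: bind(k(2), k)), lambda v: unit(1 + v))
    print(reset(expr))  # 4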

Journal ArticleDOI
TL;DR: This paper presents the first efficient statistical zero-knowledge protocols to prove statements such as: A committed number is a pseudo-prime, a committed (or revealed) number is the product of two safe primes, and any multivariate polynomial equation modulo a certain modulus is satisfied.
Abstract: This paper presents the first efficient statistical zero-knowledge protocols to prove statements such as: a committed number is a pseudo-prime; a committed (or revealed) number is the product of two safe primes, i.e., primes p and q such that (p − 1)/2 and (q − 1)/2 are primes as well; a given value is of large order modulo a composite number that consists of two safe prime factors. So far, no methods other than inefficient circuit-based proofs are known for proving such properties. Proving the second property is for instance necessary in many recent cryptographic schemes that rely both on the hardness of computing discrete logarithms and on the difficulty of computing roots modulo a composite. The main building blocks of our protocols are statistical zero-knowledge proofs that are of independent interest. Mainly, we show how to prove the correct computation of a modular addition, a modular multiplication, or a modular exponentiation, where all values including the modulus are committed but not publicly known. Apart from the validity of the computation, no other information about the modulus (e.g., a generator whose order equals the modulus) or any other operand is given. Our technique can be generalized to prove in zero-knowledge that any multivariate polynomial equation modulo a certain modulus is satisfied, where only commitments to the variables of the polynomial and a commitment to the modulus must be known. This improves previous results, where the modulus is publicly known. We show how a prover can use these building blocks to convince a verifier that a committed number is prime. This finally leads to efficient protocols for proving that a committed (or revealed) number is the product of two safe primes. As a consequence, it can be shown that a given value is of large order modulo a given number that is a product of two safe primes. Keywords: RSA-based protocols, zero-knowledge proofs of knowledge, primality tests.
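The safe-prime notion from the abstract, made executable as a naive Python check (illustrative only; the point of the paper is proving such statements in zero-knowledge about committed values, without revealing them):

    # p is a "safe prime" if both p and (p - 1) / 2 are prime.
    # Brute-force trial division, fine for small illustrations only.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def is_safe_prime(p):
        return is_prime(p) and is_prime((p - 1) // 2)

    print([p for p in range(2, 200) if is_safe_prime(p)])  # 5, 7, 11, 23, 47, ...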

Journal ArticleDOI
TL;DR: This work proposes a model which facilitates separate and modular specification of real-time constraints, and shows how separation of real-time and synchronization constraints from functional behavior is possible.
Abstract: Large and complex real-time systems can benefit significantly from a component-based development approach where new systems are constructed by composing reusable, documented and previously tested concurrent objects. However, reusing objects which execute under real-time constraints is problematic because application specific time and synchronization constraints are often embedded in the internals of these objects. The tight coupling of functionality and real-time constraints makes objects interdependent, and as a result difficult to reuse in another system. We propose a model which facilitates separate and modular specification of real-time constraints, and show how separation of real-time constraints and functional behavior is possible. We present our ideas using the Actor model to represent untimed objects, and the Real-time Synchronizers language to express real-time and synchronization constraints. We discuss specific mechanisms by which Real-time Synchronizers can govern the interaction and execution of untimed objects. We treat our model formally, and succinctly define what effect real-time constraints have on a set of concurrent objects. We briefly discuss how a middleware scheduling and event-dispatching service can use the synchronizers to execute the system.

Journal ArticleDOI
TL;DR: It is shown that it is not possible to speed up the Knapsack problem in the parallel algebraic decision tree model; the result extends to the PRAM model without bit operations, consistent with Mulmuley's recent result on the separation of the strongly-polynomial class and the corresponding NC class in the arithmetic PRAM model.
Abstract: We show that it is not possible to speed up the Knapsack problem efficiently in the parallel algebraic decision tree model. More specifically, we prove that any parallel algorithm in the fixed-degree algebraic decision tree model that solves the decision version of the Knapsack problem requires Ω(√n) rounds even when using 2^√n processors. We extend the result to the PRAM model without bit-operations. These results are consistent with Mulmuley's [6] recent result on the separation of the strongly-polynomial class and the corresponding NC class in the arithmetic PRAM model. Keywords: lower bounds, parallel algorithms, algebraic decision tree. The primary objective of designing parallel algorithms is to obtain faster algorithms. Nonetheless, the pursuit of higher speed has to be weighed against the concerns of efficiency, namely, whether we are getting our money's (processor's) worth. It has been an open theoretical problem whether all the problems in the class P can be made to run in polylogarithmic running time.

Journal ArticleDOI
Glynn Winskel1
TL;DR: In this paper, a metalanguage for concurrent process languages is introduced, which can be interpreted as a variant of linear logic, and a range of process languages can be defined, including higher-order process languages where processes are passed and received as arguments.
Abstract: A metalanguage for concurrent process languages is introduced. Within it a range of process languages can be defined, including higher-order process languages where processes are passed and received as arguments. (The process language has, however, to be linear, in the sense that a process received as an argument can be run at most once, and not include name generation as in the Pi-Calculus.) The metalanguage is provided with two interpretations both of which can be understood as categorical models of a variant of linear logic. One interpretation is in a simple category of nondeterministic domains; here a process will denote its set of traces. The other interpretation, obtained by direct analogy with the nondeterministic domains, is in a category of presheaf categories; the nondeterministic branching behaviour of a process is captured in its denotation as a presheaf. Every presheaf category possesses a notion of (open-map) bisimulation, preserved by terms of the metalanguage. The conclusion summarises open problems and lines of future work.

Journal ArticleDOI
Kim Sunesen1
TL;DR: This paper investigates the role played by TCSP-style renaming and hiding combinators with respect to decidability, and shows that location equivalence becomes undecidable when either renaming or hiding is added to this class of processes.
Abstract: In [26], we investigated decidability issues for standard language equivalence for process description languages with two generalisations based on traditional approaches for capturing non-interleaving behaviour: pomset equivalence reflecting global causal dependency, and location equivalence reflecting spatial distribution of events. In this paper, we continue by investigating the role played by TCSP-style renaming and hiding combinators with respect to decidability. One result of [26] was that in contrast to pomset equivalence, location equivalence remained decidable for a class of processes consisting of finite sets of BPP processes communicating in a TCSP manner. Here, we show that location equivalence becomes undecidable when either renaming or hiding is added to this class of processes. Furthermore, we investigate the weak versions of location and pomset equivalences. We show that for BPP with τ prefixing, both weak pomset and weak location equivalence are decidable. Moreover, we show that weak location equivalence is undecidable for BPP semantically extended with CCS communication.

Journal ArticleDOI
TL;DR: The weak König's lemma WKL is generalised to trees given by formulas in a class Φ, and the resulting schemata Φ-WKL and Φ-b-AC^0,1 are shown to contribute neither to the provably recursive functionals nor to the proof-theoretic strength of various fragments of arithmetic in all finite types.
Abstract: The weak König's lemma WKL is of crucial significance in the study of fragments of mathematics which on the one hand are mathematically strong but on the other hand have a low proof-theoretic and computational strength. In addition to the restriction to binary trees (or equivalently bounded trees), WKL is also 'weak' in that the tree predicate is quantifier-free. Whereas in general the computational and proof-theoretic strength increases when logically more complex trees are allowed, we show that this is not the case for trees which are given by formulas in a class Phi where we allow an arbitrary function quantifier prefix over bounded functions in front of a Pi^0_1-formula. This results in a schema Phi-WKL. Another way of looking at WKL is via its equivalence to the principle ∀x ∃y ∀z A₀(x, y, z) → ∃f ∀x ∀z A₀(x, f(x), z), where A₀ is a quantifier-free formula (x, y, z are natural number variables). We generalize this to Phi-formulas as well and allow function quantifiers ∃g instead of ∃y. In the absence of functional parameters (so in particular in a second-order context), the corresponding versions of Phi-WKL and Phi-b-AC^0,1 turn out to be equivalent to WKL. This changes completely in the presence of functional variables of type 2, where we get proper hierarchies of principles Phi_n-WKL and Phi_n-b-AC^0,1. Variables of type 2 however are necessary for a direct representation of analytical objects and - sometimes - for a faithful representation of such objects at all, as we will show in a subsequent paper. By a reduction of Phi-WKL and Phi-b-AC^0,1 to a non-standard axiom F (introduced in a previous paper) and a new elimination result for F relative to various fragments of arithmetic in all finite types, we prove that Phi-WKL and Phi-b-AC^0,1 contribute neither to the provably recursive functionals of these fragments nor to their proof-theoretic strength. In a subsequent paper we will illustrate the greater mathematical strength of these principles (compared to WKL).
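For reference, WKL itself asserts that every infinite binary tree has an infinite path; in a standard rendering (not quoted from the paper):

    \[
    \forall T\, \big( T \text{ an infinite subtree of } \{0,1\}^{<\omega}
    \;\rightarrow\; \exists f \in \{0,1\}^{\omega}\ \forall n\ \bar{f}(n) \in T \big),
    \]

where \bar{f}(n) denotes the length-n initial segment of f.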

Journal ArticleDOI
TL;DR: This is the first study of correctness of an object-oriented abstract machine and of operational equivalence for the imperative object calculus; in addition, an algorithm used in the prototype compiler for statically resolving method offsets is proved correct.
Abstract: We adopt the untyped imperative object calculus of Abadi and Cardelli as a minimal setting in which to study problems of compilation and program equivalence that arise when compiling object oriented languages. We present both a big-step and a small-step substitution-based operational semantics for the calculus. Our first two results are theorems asserting the equivalence of our substitution based semantics with a closure-based semantics like that given by Abadi and Cardelli. Our third result is a direct proof of the correctness of compilation to a stack-based abstract machine via a small-step decompilation algorithm. Our fourth result is that contextual equivalence of objects coincides with a form of Mason and Talcott's CIU equivalence; the latter provides a tractable means of establishing operational equivalences. Finally, we prove correct an algorithm, used in our prototype compiler, for statically resolving method offsets. This is the first study of correctness of an object-oriented abstract machine, and of operational equivalence for the imperative object calculus.

Journal ArticleDOI
TL;DR: In this paper, a nicely packaged form of Talagrand's inequality is developed that can be applied to prove concentration of measure for functions defined by hereditary properties, together with an extension valid in spaces satisfying a certain negative dependence property.
Abstract: We develop a nicely packaged form of Talagrand's inequality that can be applied to prove concentration of measure for functions defined by hereditary properties. We illustrate the framework with several applications from combinatorics and algorithms. We also give an extension of the inequality valid in spaces satisfying a certain negative dependence property and give some applications.
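A representative packaging in this spirit is the textbook form of Talagrand's inequality (as in Alon and Spencer; the paper's own statement may differ in constants and hypotheses): if X is determined by n independent coordinates, is 1-Lipschitz, and is h-certifiable, then for every b and t,

    \[
    \Pr\big[X \le b - t\sqrt{h(b)}\big] \cdot \Pr\big[X \ge b\big] \le e^{-t^{2}/4}.
    \]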

Journal ArticleDOI
TL;DR: A characterization is given of the properties expressible in Hennessy-Milner logic with recursion that can be tested using finite LTSs; in addition to actions used to probe the behaviour of the tested system, the tests may signal dissatisfaction with a distinguished action.
Abstract: This study offers a characterization of the collection of properties expressible in Hennessy-Milner Logic (HML) with recursion that can be tested using finite LTSs. In addition to actions used to probe the behaviour of the tested system, the LTSs that we use as tests will be able to perform a distinguished action nok to signal their dissatisfaction during the interaction with the tested process. A process s passes the test T iff T does not perform the action nok when it interacts with s. A test T tests for a property phi in HML with recursion iff it is passed by exactly the states that satisfy phi. The paper gives an expressive completeness result offering a characterization of the collection of properties in HML with recursion that are testable in the above sense.

Journal ArticleDOI
TL;DR: Tight upper and lower bounds are shown for the nearest marked ancestor problem on a rooted tree whose nodes can be marked or unmarked; the upper bounds improve a number of algorithms from various fields, including dynamic dictionary matching and coloured ancestor problems.
Abstract: Consider a rooted tree whose nodes can be marked or unmarked. Given a node, we want to find its nearest marked ancestor. This generalises the well-known predecessor problem, where the tree is a path. We show tight upper and lower bounds for this problem. The lower bounds are proved in the cell probe model, the upper bounds run on a unit-cost RAM. As easy corollaries we prove (often optimal) lower bounds on a number of problems. These include planar range searching, including the existential or emptiness problem, priority search trees, static tree union-find, and several problems from dynamic computational geometry, including intersection problems, proximity problems, and ray shooting. Our upper bounds improve a number of algorithms from various fields, including dynamic dictionary matching and coloured ancestor problems.