
Showing papers in "BRICS Report Series in 2004"


Journal ArticleDOI
TL;DR: The construction of a refocus function shows how to mechanically obtain an abstract machine out of a reduction semantics, which was done previously on a case-by-case basis.
Abstract: The evaluation function of a reduction semantics (i.e., a small-step operational semantics with an explicit representation of the reduction context) is canonically defined as the transitive closure of (1) decomposing a term into a reduction context and a redex, (2) contracting this redex, and (3) plugging the contractum in the context. Directly implementing this evaluation function therefore yields an interpreter with a worst-case overhead, for each step, that is linear in the size of the input term. We present sufficient conditions over the constituents of a reduction semantics to circumvent this overhead, by replacing the composition of (3) plugging and (1) decomposing by a single ``refocus'' function mapping a contractum and a context into a new context and a new redex, if any. We also show how to construct such a refocus function, we prove the correctness of this construction, and we analyze the complexity of the resulting refocus function. The refocused evaluation function of a reduction semantics implements the transitive closure of the refocus function, i.e., a ``pre-abstract machine.'' Fusing the refocus function with the trampoline function computing the transitive closure gives a state-transition function, i.e., an abstract machine. This abstract machine clearly separates the transitions implementing the congruence rules of the reduction semantics from the transitions implementing its reduction rules. The construction of a refocus function therefore shows how to mechanically obtain an abstract machine out of a reduction semantics, which was done previously on a case-by-case basis. We illustrate refocusing by mechanically constructing Felleisen et al.'s CK machine from a call-by-value reduction semantics of the lambda-calculus, and by constructing a substitution-based version of Krivine's machine from a call-by-name reduction semantics of the lambda-calculus. We also mechanically construct three one-pass CPS transformers from three quadratic context-based CPS transformers for the lambda-calculus.

124 citations
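
As a concrete illustration of refocusing (a minimal OCaml sketch of our own, not code from the paper), consider a toy language of integer literals and addition, evaluated left to right; plug is shown only to mark the recomposition step that refocus renders unnecessary:

(* Toy reduction semantics: terms and reduction contexts. *)
type term = Lit of int | Add of term * term
type ctx = Hole | AddL of ctx * term | AddR of int * ctx

(* Plugging a contractum back into its context: the costly step that a
   naive decompose/contract/plug loop repeats at every reduction. *)
let rec plug (t : term) (c : ctx) : term =
  match c with
  | Hole -> t
  | AddL (c, t') -> plug (Add (t, t')) c
  | AddR (n, c) -> plug (Add (Lit n, t)) c

(* Refocusing goes from a contractum and its context straight to the next
   redex, never rebuilding the intermediate term; iterated, it is already
   the state-transition function of an abstract machine. *)
let rec refocus (t : term) (c : ctx) : int =
  match t, c with
  | Lit n, Hole -> n                                (* value reached *)
  | Lit n, AddL (c, t') -> refocus t' (AddR (n, c)) (* congruence *)
  | Lit n, AddR (m, c) -> refocus (Lit (m + n)) c   (* reduction *)
  | Add (t1, t2), c -> refocus t1 (AddL (c, t2))    (* congruence *)

let eval (t : term) : int = refocus t Hole
(* eval (Add (Lit 1, Add (Lit 2, Lit 3))) = 6 *)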


Journal ArticleDOI
TL;DR: Deciding the outcome of Maker-Maker and Maker-Breaker games played on arbitrary hypergraphs is shown to be PSPACE-complete.
Abstract: We show that the problems of deciding the outcome of Maker-Maker and Maker-Breaker games played on arbitrary hypergraphs are PSPACE-complete. Maker-Breaker games have earlier been shown PSPACE-complete by Schaefer (1978); we give a simpler proof and show a reduction from Maker-Maker games to Maker-Breaker games.

30 citations
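
To make the decision problem concrete, here is a brute-force OCaml sketch of our own (exponential time, so it only illustrates the problem statement, not the PSPACE-completeness result): a Maker-Breaker position is a hypergraph given as lists of vertices, and Maker wins iff she eventually claims every vertex of some hyperedge.

(* Maker has won iff some hyperedge is entirely claimed by Maker. *)
let maker_wins (edges : int list list) (maker : int list) : bool =
  List.exists (fun e -> List.for_all (fun v -> List.mem v maker) e) edges

(* Vertices not yet claimed by either player. *)
let free vs maker breaker =
  List.filter (fun v -> not (List.mem v maker || List.mem v breaker)) vs

(* Can Maker force a win, with [maker_to_move] telling whose turn it is?
   Plain minimax over all continuations. *)
let rec maker_can_win vs edges maker breaker maker_to_move =
  maker_wins edges maker
  || match free vs maker breaker with
     | [] -> false   (* board exhausted: Breaker wins *)
     | fs ->
       if maker_to_move then
         List.exists (fun v -> maker_can_win vs edges (v :: maker) breaker false) fs
       else
         List.for_all (fun v -> maker_can_win vs edges maker (v :: breaker) true) fs

(* maker_can_win [1;2;3] [[1;2];[2;3]] [] [] true = true:
   Maker takes 2, then completes whichever edge Breaker leaves open. *)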


Journal ArticleDOI
TL;DR: It is shown that reachability analysis for a replicative variant of the protocol becomes decidable, and the extended calculus is capable of an implicit description of the active intruder.
Abstract: We use some recent techniques from process algebra to draw several conclusions about the well-studied class of ping-pong protocols introduced by Dolev and Yao. In particular we show that all nontrivial properties, including reachability and equivalence checking with respect to the whole of van Glabbeek's spectrum, become undecidable for a very simple recursive extension of the protocol. The result holds even if no nondeterministic choice operator is allowed. We also show that the extended calculus is capable of an implicit description of the active intruder, including full analysis and synthesis of messages in the sense of Amadio, Lugiez and Vanackere. We conclude by showing that reachability analysis for a replicative variant of the protocol becomes decidable.

27 citations


Journal ArticleDOI
TL;DR: The cache-oblivious SSSP-algorithm takes nearly full advantage of block transfers for dense graphs, and the number of I/Os for sparse graphs is reduced by a factor of nearly sqrt{B}, where B is the cache-block size.
Abstract: We present improved cache-oblivious data structures and algorithms for breadth-first search (BFS) on undirected graphs and the single-source shortest path (SSSP) problem on undirected graphs with non-negative edge weights. For the SSSP problem, our result closes the performance gap between the currently best cache-aware algorithm and the cache-oblivious counterpart. Our cache-oblivious SSSP-algorithm takes nearly full advantage of block transfers for dense graphs. The algorithm relies on a new data structure, called bucket heap, which is the first cache-oblivious priority queue to efficiently support a weak DecreaseKey operation. For the BFS problem, we reduce the number of I/Os for sparse graphs by a factor of nearly sqrt{B}, where B is the cache-block size, nearly closing the performance gap between the currently best cache-aware and cache-oblivious algorithms.

24 citations
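
For background (a textbook, non-cache-oblivious sketch of our own): the reason a priority queue with DecreaseKey matters for SSSP is the relaxation step of Dijkstra's algorithm, which lowers tentative distances; in a heap-based implementation each such lowering is a DecreaseKey. The array-based version below makes that step explicit.

(* Dijkstra with linear-scan extract-min; adj.(u) lists (v, weight) pairs. *)
let dijkstra (adj : (int * int) list array) (src : int) : int array =
  let n = Array.length adj in
  let dist = Array.make n max_int in
  let settled = Array.make n false in
  dist.(src) <- 0;
  for _ = 1 to n do
    (* extract-min by linear scan over unsettled vertices *)
    let u = ref (-1) in
    for v = 0 to n - 1 do
      if not settled.(v) && (!u < 0 || dist.(v) < dist.(!u)) then u := v
    done;
    if !u >= 0 && dist.(!u) < max_int then begin
      settled.(!u) <- true;
      List.iter
        (fun (v, w) ->
          (* relaxation: with a heap, this update is a DecreaseKey *)
          if dist.(!u) + w < dist.(v) then dist.(v) <- dist.(!u) + w)
        adj.(!u)
    end
  done;
  dist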


Journal ArticleDOI
TL;DR: In this article, the authors study how to adjoin probability to event structures, leading to the model of probabilistic event structures, and show that continuous valuations on the domain of a confusion-free event structure correspond to the probabilistic event structures it supports.
Abstract: This paper studies how to adjoin probability to event structures, leading to the model of probabilistic event structures. In their simplest form, probabilistic choice is localised to cells, where conflict arises; in which case probabilistic independence coincides with causal independence. An event structure is associated with a domain--that of its configurations ordered by inclusion. In domain theory, probabilistic processes are denoted by continuous valuations on a domain. A key result of this paper is a representation theorem showing how continuous valuations on the domain of a confusion-free event structure correspond to the probabilistic event structures it supports. We explore how to extend probability to event structures which are not confusion-free via two notions of probabilistic runs of a general event structure. Finally, we show how probabilistic correlation and probabilistic event structures with confusion can arise from event structures which are originally confusion-free by using morphisms to rename and hide events.

23 citations


Book ChapterDOI
TL;DR: This work shows Sigma^1_1-completeness of weak bisimilarity for PA (process algebra), and of weak simulation preorder/equivalence for PDA (pushdown automata), PA and PN (Petri nets).
Abstract: We show Sigma^1_1-completeness of weak bisimilarity for PA (process algebra), and of weak simulation preorder/equivalence for PDA (pushdown automata), PA and PN (Petri nets). We also show Pi^1_1-hardness of weak omega-trace equivalence for the (sub)classes BPA (basic process algebra) and BPP (basic parallel processes).

18 citations


Journal ArticleDOI
TL;DR: In this article, using techniques from proof mining, a variant of the notion of asymptotic contractions is developed and a quantitative version of the corresponding fixed point theorem is proved.
Abstract: In [J. Math. Anal. Appl. 277 (2003) 645-650], W. A. Kirk introduced the notion of asymptotic contractions and proved a fixed point theorem for such mappings. Using techniques from proof mining, we develop a variant of the notion of asymptotic contractions and prove a quantitative version of the corresponding fixed point theorem.

17 citations
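
For orientation (our own sketch, not from the paper): a quantitative fixed point theorem bounds how fast the Picard iteration x_{n+1} = f(x_n) approaches the fixed point. The iteration itself is a few lines of OCaml; the example function cos and the tolerance are arbitrary choices.

(* Iterate f from x until successive values differ by less than eps. *)
let rec iterate ?(eps = 1e-12) (f : float -> float) (x : float) : float =
  let x' = f x in
  if abs_float (x' -. x) < eps then x' else iterate ~eps f x'

(* The unique fixed point of cos, approximately 0.739085. *)
let dottie = iterate cos 1.0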


Journal ArticleDOI
TL;DR: It is shown that the set of fixed-point combinators forms a recursively enumerable subset of a larger set of terms that is not recursively enumerable, and whose terms are observationally equivalent to fixed-point combinators in any computable context.
Abstract: We show that the set of fixed-point combinators forms a recursively-enumerable subset of a larger set of terms that is (A) not recursively enumerable, and (B) the terms of which are observationally equivalent to fixed-point combinators in any computable context.

17 citations
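
To fix ideas (our own OCaml rendering, not from the paper): the defining property of a fixed-point combinator Y is that Y f behaves as f (Y f). In a typed call-by-value language, expressing one concrete such combinator requires a recursive type and an eta-expansion, as below; the set studied in the paper consists of untyped lambda-terms with this behaviour.

(* A recursive type lets us type self-application. *)
type 'a mu = Mu of ('a mu -> 'a)

(* A call-by-value fixed-point combinator for functions 'b -> 'c. *)
let y (f : ('b -> 'c) -> 'b -> 'c) : 'b -> 'c =
  let g (Mu x) = f (fun a -> x (Mu x) a) in
  g (Mu g)

(* Recursion without rec: y f is observationally f (y f). *)
let fact = y (fun self n -> if n = 0 then 1 else n * self (n - 1))
(* fact 5 = 120 *)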


Journal ArticleDOI
TL;DR: A comprehensive operational semantic theory of graph rewriting is introduced by recasting rewriting frameworks as Leifer and Milner's reactive systems; the central technical contribution is the construction of groupoidal relative pushouts in suitable cospan categories over arbitrary adhesive categories.
Abstract: We introduce a comprehensive operational semantic theory of graph rewriting. The central idea is recasting rewriting frameworks as Leifer and Milner's reactive systems. Consequently, graph rewriting systems are associated with canonical labelled transition systems, on which bisimulation equivalence is a congruence with respect to arbitrary graph contexts (cospans of graphs). This construction is derived from a more general theorem of much wider applicability. Expressed in abstract categorical terms, the central technical contribution of the paper is the construction of groupoidal relative pushouts, introduced and developed by the authors in recent work, in suitable cospan categories over arbitrary adhesive categories. As a consequence, we both generalise and shed light on rewriting via borrowed contexts due to Ehrig and König.

16 citations


Journal ArticleDOI
TL;DR: An exact algorithm is presented for Maximum Exact Satisfiability where each clause contains at most two literals, with time complexity O(poly(L) · 2^{m/4}), where m is the number of clauses and L is the length of the formula.
Abstract: Inspired by the Maximum Satisfiability and Exact Satisfiability problems we present two Maximum Exact Satisfiability problems. The first problem called Maximum Exact Satisfiability is: given a formula in conjunctive normal form and an integer k, is there an assignment to all variables in the formula such that at least k clauses have exactly one true literal. The second problem called Restricted Maximum Exact Satisfiability has the further restriction that no clause is allowed to have more than one true literal. Both problems are proved NP-complete restricted to the versions where each clause contains at most two literals. In fact Maximum Exact Satisfiability is a generalisation of the well-known NP-complete problem MaxCut. We present an exact algorithm for Maximum Exact Satisfiability where each clause contains at most two literals with time complexity O(poly(L) · 2^{m/4}), where m is the number of clauses and L is the length of the formula. For the second version we give an algorithm with time complexity O(poly(L) · 1.324718^n), where n is the number of variables. We note that when restricted to the versions where each clause contains exactly two literals and there are no negations both problems are fixed parameter tractable. It is an open question if this is also the case for the general problems.

16 citations
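
To pin down the problem (a brute-force OCaml sketch of our own, enumerating all 2^n assignments; the paper's algorithms are of course far better): clauses are lists of nonzero integers in the usual DIMACS style, v for a positive literal and -v for a negated one.

(* Does the clause have exactly one true literal under the assignment? *)
let exactly_one (a : bool array) (clause : int list) : bool =
  let lit_true l = if l > 0 then a.(l - 1) else not a.(-l - 1) in
  List.length (List.filter lit_true clause) = 1

(* Maximum number of clauses with exactly one true literal, over all
   assignments to the n variables. *)
let max_exact_sat (n : int) (clauses : int list list) : int =
  let best = ref 0 in
  for bits = 0 to (1 lsl n) - 1 do
    let a = Array.init n (fun i -> bits land (1 lsl i) <> 0) in
    let sat = List.length (List.filter (exactly_one a) clauses) in
    if sat > !best then best := sat
  done;
  !best

(* max_exact_sat 2 [[1; 2]] = 1: the clause wants x1 xor x2. *)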


Journal ArticleDOI
TL;DR: A new notion of QZK, non-oblivious verifier QZK, is proposed, which is strictly stronger than honest-verifier QZK but weaker than full QZK, and it is shown that this notion can be achieved by means of efficient (quantum) protocols.
Abstract: The concept of zero-knowledge (ZK) has become of fundamental importance in cryptography. However, in a setting where entities are modeled by quantum computers, classical arguments for proving ZK fail to hold since, in the quantum setting, the concept of rewinding is not generally applicable. Moreover, known classical techniques that avoid rewinding have various shortcomings in the quantum setting. We propose new techniques for building quantum zero-knowledge (QZK) protocols, which remain secure even under (active) quantum attacks. We obtain computational QZK proofs and perfect QZK arguments for any NP language in the common reference string model. This is based on a general method converting an important class of classical honest-verifier ZK (HVZK) proofs into QZK proofs. This leads to quite practical protocols if the underlying HVZK proof is efficient. These are the first proof protocols enjoying these properties, in particular the first to achieve perfect QZK. As part of our construction, we propose a general framework for building unconditionally hiding (trapdoor) string commitment schemes, secure against quantum attacks, as well as concrete instantiations based on specific (believed to be) hard problems. This is of independent interest, as these are the first unconditionally hiding string commitment schemes withstanding quantum attacks. Finally, we give a partial answer to the question whether QZK is possible in the plain model. We propose a new notion of QZK, non-oblivious verifier QZK, which is strictly stronger than honest-verifier QZK but weaker than full QZK, and we show that this notion can be achieved by means of efficient (quantum) protocols.

Journal ArticleDOI
TL;DR: This work characterizes and evaluates existing tools in this design space, including a recent result of the authors providing practical type checking of full unannotated XSLT 1.0 stylesheets given general DTDs that describe the input and output languages.
Abstract: We survey work on statically type checking XML transformations, covering a wide range of notations and ambitions. The concept of type may vary from idealizations of DTD to full-blown XML Schema or even more expressive formalisms. The notion of transformation may vary from clean and simple transductions to domain-specific languages or integration of XML in general-purpose programming languages. Type annotations can be either explicit or implicit, and type checking ranges from exact decidability to pragmatic approximations. We characterize and evaluate existing tools in this design space, including a recent result of the authors providing practical type checking of full unannotated XSLT 1.0 stylesheets given general DTDs that describe the input and output languages.

Journal ArticleDOI
Olivier Danvy
TL;DR: A systematic construction of a reduction-free normalization function that builds on previous work on refocusing and on a functional correspondence between evaluators and abstract machines is presented.
Abstract: We present a systematic construction of a reduction-free normalization function. Starting from a reduction-based normalization function, i.e., the transitive closure of a one-step reduction function, we successively subject it to refocusing (i.e., deforestation of the intermediate reduced terms), simplification (i.e., fusing auxiliary functions), refunctionalization (i.e., Church encoding), and direct-style transformation (i.e., the converse of the CPS transformation). We consider two simple examples and treat them in detail: for the first one, arithmetic expressions, we construct an evaluation function; for the second one, terms in the free monoid, we construct an accumulator-based flatten function. The resulting two functions are traditional reduction-free normalization functions. The construction builds on previous work on refocusing and on a functional correspondence between evaluators and abstract machines. It is also reversible.
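
The paper's second example can be previewed in a few lines of OCaml (our own transcription of the idea, with our names): terms of the free monoid as binary trees, a naive flatten that corresponds to reduction-based normalization, and the accumulator-based flatten that the derivation produces.

type 'a term = Unit | Leaf of 'a | Mul of 'a term * 'a term

(* Reduction-based in spirit: rebuilds intermediate lists with append. *)
let rec flatten_naive : 'a term -> 'a list = function
  | Unit -> []
  | Leaf a -> [a]
  | Mul (s, t) -> flatten_naive s @ flatten_naive t

(* Reduction-free: a single pass threading an accumulator. *)
let flatten (t : 'a term) : 'a list =
  let rec go t acc =
    match t with
    | Unit -> acc
    | Leaf a -> a :: acc
    | Mul (s, t) -> go s (go t acc)
  in
  go t []

(* flatten (Mul (Leaf 1, Mul (Unit, Leaf 2))) = [1; 2] *)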

Journal ArticleDOI
TL;DR: In this article, a voting scheme based on homomorphic encryption is proposed, where the voter has access to a secondary communication channel through which he can receive information inaccessible to the adversary.
Abstract: We give suggestions for protection against adversaries with access to the voter's equipment in voting schemes based on homomorphic encryption. Assuming an adversary has complete knowledge of the contents and computations taking place on the client machine we protect the voter's privacy in a way so that the adversary has no knowledge about the voter's choice. Furthermore, an active adversary trying to change a voter's ballot may do so, but will end up voting for a random candidate. To accomplish the goal we assume that the voter has access to a secondary communication channel through which he can receive information inaccessible to the adversary. An example of such a secondary communication channel is ordinary mail. Additionally, we assume the existence of a trusted party that will assist in the protocol. To some extent, the actions of this trusted party are verifiable.

Journal ArticleDOI
TL;DR: It is proved that randomized Quicksort performs expected O(n(1 + log(1 + Inv/n))) element swaps, where Inv denotes the number of inversions in the input sequence; this provides a theoretical explanation for the observed behavior and gives new insights on the behavior of the Quicksort algorithm.
Abstract: Quicksort was first introduced in 1961 by Hoare. Many variants have been developed, the best of which are among the fastest generic sorting algorithms available, as testified by the choice of Quicksort as the default sorting algorithm in most programming libraries. Some sorting algorithms are adaptive, i.e. they have a complexity analysis which is better for inputs which are nearly sorted, according to some specified measure of presortedness. Quicksort is not among these, as it uses Omega(n log n) comparisons even when the input is already sorted. However, in this paper we demonstrate empirically that the actual running time of Quicksort is adaptive with respect to the presortedness measure Inv. Differences close to a factor of two are observed between instances with low and high Inv value. We then show that for the randomized version of Quicksort, the number of element swaps performed is provably adaptive with respect to the measure Inv. More precisely, we prove that randomized Quicksort performs expected O(n (1 + log (1 + Inv/n))) element swaps, where Inv denotes the number of inversions in the input sequence. This result provides a theoretical explanation for the observed behavior, and gives new insights on the behavior of the Quicksort algorithm. We also give some empirical results on the adaptive behavior of Heapsort and Mergesort.
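
The claim is easy to probe experimentally; below is an instrumented randomized Quicksort in OCaml (our own test harness, not the paper's code): it counts element swaps, and a naive quadratic routine computes Inv for comparison against the O(n(1 + log(1 + Inv/n))) bound. Call Random.self_init () first for genuinely random pivots.

let swaps = ref 0

let swap a i j =
  if i <> j then begin
    incr swaps;
    let t = a.(i) in a.(i) <- a.(j); a.(j) <- t
  end

(* Randomized in-place Quicksort (Lomuto partition, random pivot). *)
let rec quicksort a lo hi =
  if lo < hi then begin
    swap a (lo + Random.int (hi - lo + 1)) hi;
    let p = a.(hi) in
    let i = ref lo in
    for j = lo to hi - 1 do
      if a.(j) < p then begin swap a j !i; incr i end
    done;
    swap a !i hi;
    quicksort a lo (!i - 1);
    quicksort a (!i + 1) hi
  end

(* Inv: the number of inversions, computed naively in O(n^2). *)
let inversions a =
  let n = Array.length a in
  let c = ref 0 in
  for i = 0 to n - 2 do
    for j = i + 1 to n - 1 do
      if a.(i) > a.(j) then incr c
    done
  done;
  !c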

Book ChapterDOI
TL;DR: new-HOPLA is introduced, a concise but powerful typed language for higher-order nondeterministic processes with name generation, in which the type of a process describes the shape of the computation paths it can perform.
Abstract: This paper introduces new-HOPLA, a concise but powerful language for higher-order nondeterministic processes with name generation. Its origins as a metalanguage for domain theory are sketched but for the most part the paper concentrates on its operational semantics. The language is typed, the type of a process describing the shape of the computation paths it can perform. Its transition semantics, bisimulation, congruence properties and expressive power are explored. Encodings are given of well-known process algebras, including π-calculus, Higher-Order π-calculus and Mobile Ambients.

Journal ArticleDOI
TL;DR: In this paper, it was shown that a colimit preserving functor between presheaf categories (corresponding to a profunctor) preserves open maps and open map bisimulation.
Abstract: This paper studies fundamental connections between profunctors (i.e., distributors, or bimodules), open maps and bisimulation. In particular, it proves that a colimit preserving functor between presheaf categories (corresponding to a profunctor) preserves open maps and open map bisimulation. Consequently, the composition of profunctors preserves open maps as 2-cells. A guiding idea is the view that profunctors, and colimit preserving functors, are linear maps in a model of classical linear logic. But profunctors, and colimit preserving functors, as linear maps, are too restrictive for many applications. This leads to a study of a range of pseudo-comonads and how non-linear maps in their co-Kleisli bicategories preserve open maps and bisimulation. The pseudo-comonads considered are based on finite colimit completion, ``lifting'', and indexed families. The paper includes an appendix summarising the key results on coends, left Kan extensions and the preservation of colimits. One motivation for this work is that it provides a mathematical framework for extending domain theory and denotational semantics of programming languages to the more intricate models, languages and equivalences found in concurrent computation. But the results are likely to have more general applicability because of the ubiquitous nature of profunctors.

Journal ArticleDOI
TL;DR: This work describes how to construct correct abstract machines from the class of L-attributed natural semantics, formalizes the construction as an extraction algorithm, and proves that the algorithm produces abstract machines equivalent to the original natural semantics.
Abstract: We describe how to construct correct abstract machines from the class of L-attributed natural semantics introduced by Ibraheem and Schmidt at HOOTS 1997. The construction produces stack-based abstract machines where the stack contains evaluation contexts. It is defined directly on the natural semantics rules. We formalize it as an extraction algorithm and we prove that the algorithm produces abstract machines that are equivalent to the original natural semantics. We illustrate the algorithm by extracting abstract machines from natural semantics for call-by-value, call-by-name, and call-by-need evaluation of lambda terms.

Journal ArticleDOI
TL;DR: An algorithm is presented for performing beta reduction on lambda terms represented as uplinked DAGs; it is particularly suited to applications such as compilers, theorem provers, and type-manipulation systems that may need to examine terms in-between reductions.
Abstract: Terms of the lambda-calculus are one of the most important data structures we have in computer science. Among their uses are representing program terms, advanced type systems, and proofs in theorem provers. Unfortunately, heavy use of this data structure can become intractable in time and space; the typical culprit is the fundamental operation of beta reduction. If we represent a lambda-calculus term as a DAG rather than a tree, we can efficiently represent the sharing that arises from beta reduction, thus avoiding combinatorial explosion in space. By adding uplinks from a child to its parents, we can efficiently implement beta reduction in a bottom-up manner, thus avoiding combinatorial explosion in time required to search the term in a top-down fashion. We present an algorithm for performing beta reduction on lambda terms represented as uplinked DAGs; describe its proof of correctness; discuss its relation to alternate techniques such as Lamping graphs, the suspension lambda-calculus (SLC) and director strings; and present some timings of an implementation. Besides being both fast and parsimonious of space, the algorithm is particularly suited to applications such as compilers, theorem provers, and type-manipulation systems that may need to examine terms in-between reductions - i.e., the ``readback'' problem for our representation is trivial. Like Lamping graphs, and unlike director strings or the suspension lambda-calculus, the algorithm functions by side-effecting the term containing the redex; the representation is not a ``persistent'' one. The algorithm additionally has the charm of being quite simple; a complete implementation of the core data structures and algorithms is 180 lines of fairly straightforward SML.
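
The data-structure side of the representation is easy to convey in OCaml types (a sketch of our own, with our names; the bottom-up reduction algorithm itself is the substance of the paper and is much more involved): every node records its parents, and which child slot of each parent it occupies.

(* Lambda terms as a mutable DAG with uplinks from children to parents. *)
type node = {
  mutable shape : shape;
  mutable uplinks : (node * slot) list;  (* parent, and the slot we fill *)
}
and shape =
  | Var of string
  | Lam of node          (* body *)
  | App of node * node   (* operator, operand *)
and slot = LamBody | AppFun | AppArg

(* On beta reduction, the uplinks let the algorithm walk from the redex up
   through exactly the parents that must be copied or redirected, instead
   of searching the whole term top-down. *)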

Journal ArticleDOI
TL;DR: A type system is presented for a modular style of the action semantic framework that, given signatures of all the semantic functions used in a semantic equation defining a semantic function, performs a soft type check on the action in the semantic equation.
Abstract: When writing semantic descriptions of programming languages, it is convenient to have tools for checking the descriptions. With frameworks that use inductively defined semantic functions to map programs to their denotations, we would like to check that the semantic functions result in denotations with certain properties. In this paper we present a type system for a modular style of the action semantic framework that, given signatures of all the semantic functions used in a semantic equation defining a semantic function, performs a soft type check on the action in the semantic equation. We introduce types for actions that describe different properties of the actions, like the type of data they expect and produce, whether they can fail or have side effects, etc. A type system for actions which uses these new action types is presented. Using the new action types in the signatures of semantic functions, the language describer can assert properties of semantic functions and have the assertions checked by an implementation of the type system. The type system has been implemented for use in connection with the recently developed formalism ASDF. The formalism supports writing language definitions by combining modules that describe single language constructs. This is possible due to the inherent modularity in ASDF. We show how we manage to preserve the modularity and still perform specialised type checks for each module.

Journal ArticleDOI
TL;DR: It is shown how the least fixed-point can be computed using a simple, totally-asynchronous distributed algorithm, enabling sound reasoning about the global trust-state without computing the exact fixed-point.
Abstract: Recently, Carbone, Nielsen and Sassone introduced the trust-structure framework, a semantic model for trust-management in global-scale distributed systems. The framework is based on the notion of trust structures: a set of ``trust-levels'' ordered by two distinct partial orderings. In the model, a unique global trust-state is defined as the least fixed-point of a collection of local policies assigning trust-levels to the entities of the system. However, the framework is a purely denotational model: it gives precise meaning to the global trust-state of a system, but without specifying a way to compute this abstract mathematical object. This paper complements the denotational model of trust structures with operational techniques. It is shown how the least fixed-point can be computed using a simple, totally-asynchronous distributed algorithm. Two efficient protocols for approximating the least fixed-point are provided, enabling sound reasoning about the global trust-state without computing the exact fixed-point. Finally, dynamic algorithms are presented for safe reuse of information between computations, in the face of dynamic trust-policy updates.
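
A centralised version of the fixed-point computation fits in a few lines of OCaml (our own simplification: finitely many entities, one toy ordered set of trust levels with Unknown as bottom, and each policy a monotone function of the whole state; the paper's algorithm computes the same object distributedly and asynchronously).

type level = Unknown | Distrust | Trust  (* toy ordering: Unknown is bottom *)

(* Kleene iteration from the bottom state; terminates for monotone
   policies over a finite set of levels. *)
let least_fixed_point (policies : (level array -> level) array) : level array =
  let n = Array.length policies in
  let state = Array.make n Unknown in
  let changed = ref true in
  while !changed do
    changed := false;
    for i = 0 to n - 1 do
      let l = policies.(i) state in
      if l <> state.(i) then begin state.(i) <- l; changed := true end
    done
  done;
  state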

Journal ArticleDOI
TL;DR: This work presents a case study in constructive semantic description: a description of Core ML, consisting of a mapping from it to BAS and action semantic descriptions of the individual BAS constructs, written in ASDF (Action Semantics Definition Formalism), a formalism specially designed for writing action semantic descriptions of single language constructs.
Abstract: Usually, the majority of language constructs found in a programming language can also be found in many other languages, because language design is based on reuse. This should be reflected in the way we give semantics to programming languages. It can be achieved by making a language description consist of a collection of modules, each defining a single language construct. The description of a single language construct should be language independent, so that it can be reused in other descriptions without any changes. We call a language description framework ``constructive'' when it supports independent description of individual constructs. We present a case study in constructive semantic description. The case study is a description of Core ML, consisting of a mapping from it to BAS (Basic Abstract Syntax) and action semantic descriptions of the individual BAS constructs. The latter are written in ASDF (Action Semantics Definition Formalism), a formalism specially designed for writing action semantic descriptions of single language constructs. Tool support is provided by the ASF+SDF Meta-Environment and by the Action Environment, which is a new extension of the ASF+SDF Meta-Environment.

Journal ArticleDOI
TL;DR: An example proposed by Patrick Greussay in his doctoral thesis, namely how to verify in sublinear time whether a Calder mobile is well balanced, is revisited, and a spectrum of solutions is derived, starting from the original specification of the problem.
Abstract: This note was written on the occasion of the retirement of Jean-François Perrot at the Université Pierre et Marie Curie (Paris VI). In an attempt to emulate his academic spirit, we revisit an example proposed by Patrick Greussay in his doctoral thesis: how to verify in sublinear time whether a Calder mobile is well balanced. Rather than divining one solution or another, we derive a spectrum of solutions, starting from the original specification of the problem. We also prove their correctness.
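
In OCaml, the two endpoints of such a spectrum might look as follows (our own reconstruction of the flavour of the problem, not Greussay's or the authors' code): a naive solution that reweighs subtrees repeatedly, and a one-pass solution that stops as soon as an unbalanced beam is found, which is what makes early-exit, sublinear behaviour possible on unbalanced mobiles.

type mobile = Obj of int | Beam of mobile * mobile

let rec weight = function
  | Obj n -> n
  | Beam (l, r) -> weight l + weight r

(* Naive: quadratic in the worst case, reweighing subtrees at every beam. *)
let rec balanced_naive = function
  | Obj _ -> true
  | Beam (l, r) ->
      weight l = weight r && balanced_naive l && balanced_naive r

(* One traversal computing weights and checking balance together,
   escaping early on the first unbalanced beam. *)
exception Unbalanced

let balanced (m : mobile) : bool =
  let rec bw = function
    | Obj n -> n
    | Beam (l, r) ->
        let wl = bw l in
        let wr = bw r in
        if wl = wr then wl + wr else raise Unbalanced
  in
  try ignore (bw m); true with Unbalanced -> false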

Journal ArticleDOI
TL;DR: In this paper, the second author obtained metatheorems for the extraction of effective (uniform) bounds from classical, prima facie non-constructive proofs in functional analysis.
Abstract: In 2003, the second author obtained metatheorems for the extraction of effective (uniform) bounds from classical, prima facie non-constructive proofs in functional analysis. These metatheorems for the first time cover general classes of structures like arbitrary metric, hyperbolic, CAT(0) and normed linear spaces and guarantee the independence of the bounds from parameters ranging over metrically bounded (not necessarily compact!) spaces. The use of classical logic imposes some severe restrictions on the formulas and proofs for which the extraction can be carried out. In this paper we consider similar metatheorems for semi-intuitionistic proofs, i.e. proofs in an intuitionistic setting enriched with certain non-constructive principles. Contrary to the classical case, there are practically no restrictions on the logical complexity of theorems for which bounds can be extracted. Again, our metatheorems guarantee very general uniformities, even in cases where the existence of uniform bounds is not obtainable by (ineffective) straightforward functional analytic means. Already in the purely intuitionistic case, where the existence of effective bounds is implicit, the metatheorems allow one to derive uniformities that may not be obvious at all from a given constructive proof. Finally, we illustrate our main metatheorem by an example from metric fixed point theory.

Journal ArticleDOI
TL;DR: This note shows that split-2 bisimulation equivalence affords a finite equational axiomatization over the process algebra obtained by adding an auxiliary operation proposed by Hennessy in 1981 to the recursion free fragment of Milner's Calculus of Communicating Systems.
Abstract: This note shows that split-2 bisimulation equivalence (also known as timed equivalence) affords a finite equational axiomatization over the process algebra obtained by adding an auxiliary operation proposed by Hennessy in 1981 to the recursion free fragment of Milner's Calculus of Communicating Systems. Thus the addition of a single binary operation, viz. Hennessy's merge, is sufficient for the finite equational axiomatization of parallel composition modulo this non-interleaving equivalence. This result is in sharp contrast to a theorem previously obtained by the same authors to the effect that the same language is not finitely based modulo bisimulation equivalence.

Journal ArticleDOI
TL;DR: A new formalism, ASDF, is introduced, designed specifically for giving reusable action semantic descriptions of individual language constructs, together with the Action Environment, a supporting environment implemented on top of the ASF+SDF Meta-Environment, exploiting recent advances in techniques for integration of different formalisms.
Abstract: Some basic programming constructs (e.g., conditional statements) are found in many different programming languages, and can often be included without change when a new language is designed. When writing a semantic description of a language, however, it is usually not possible to reuse parts of previous descriptions without change. This paper introduces a new formalism, ASDF, which has been designed specifically for giving reusable action semantic descriptions of individual language constructs. An initial case study in the use of ASDF has already provided reusable descriptions of all the basic constructs underlying Core ML. The paper also describes the Action Environment, a new environment supporting use and validation of ASDF descriptions. The Action Environment has been implemented on top of the ASF+SDF Meta-Environment, exploiting recent advances in techniques for integration of different formalisms, and inheriting all the main features of the Meta-Environment.

Journal ArticleDOI
TL;DR: In this paper, for a system of polynomial equations over Q_p, an efficient construction is presented of a single polynomial of quite small degree whose zero set over Q_p coincides with the zero set of the original system, and which moreover has low additive and straight-line complexity.
Abstract: For a system of polynomial equations over Q_p we present an efficient construction of a single polynomial of quite small degree whose zero set over Q_p coincides with the zero set over Q_p of the original system. We also show that the polynomial has some other attractive features such as low additive and straight-line complexity. The proof is based on a link established here between the above problem and a recent number-theoretic result about zeros of p-adic forms.

Journal ArticleDOI
TL;DR: An algorithm is given for Exact Satisfiability with polynomial space usage and a time bound of poly(L) · m!, where m is the number of clauses and L is the length of the formula.
Abstract: We give an algorithm for Exact Satisfiability with polynomial space usage and a time bound of poly(L) · m!, where m is the number of clauses and L is the length of the formula. Skjernaa has given an algorithm for Exact Satisfiability with time bound poly(L) · 2^m but using exponential space. We leave the following problem open: Is there an algorithm for Exact Satisfiability using only polynomial space with a time bound of c^m, where c is a constant and m is the number of clauses?

Journal ArticleDOI
TL;DR: In this article, security in the Universal Composability framework (UC) is shown to be equivalent to security in the probabilistic polynomial time calculus ppc, where security is defined under active and adaptive adversaries with synchronous and authenticated communication.
Abstract: Two different approaches for general protocol security are proved equivalent. Concretely, we prove that security in the Universal Composability framework (UC) is equivalent to security in the probabilistic polynomial time calculus ppc. Security is defined under active and adaptive adversaries with synchronous and authenticated communication. In detail, we define an encoding from machines in UC to processes in ppc and show UC is fully abstract in ppc, i.e., we show the soundness and completeness of security in ppc with respect to UC. However, we restrict security in ppc to be quantified not over all possible contexts, but over those induced by UC-environments under encoding. This restriction does not unduly weaken security in ppc, since the threat and communication models we assume are meaningful in both practice and theory.

Journal ArticleDOI
TL;DR: This work presents an algorithm for computing logarithms of positive real numbers that bears structural resemblance to the elementary school algorithm of long division, and makes no use of Taylor series or calculus, but rather exploits properties of the radix-d representation of a logarithm in base d.
Abstract: In this work, we present an algorithm for computing logarithms of positive real numbers that bears structural resemblance to the elementary school algorithm of long division. Using this algorithm, we can compute successive digits of a logarithm using a 4-operation pocket calculator. The algorithm makes no use of Taylor series or calculus, but rather exploits properties of the radix-d representation of a logarithm in base d. As such, the algorithm is accessible to anyone familiar with the elementary properties of exponents and logarithms.
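
The scheme is easy to reconstruct for the fractional digits (our own OCaml sketch under the stated assumptions: d >= 2 and 1 <= x < d, so the integer part of log_d x is 0). Raising x to the d-th power shifts the radix-d expansion of the logarithm one digit to the left, and dividing out the factors of d reads that digit off, using only the four basic operations.

(* First n radix-d digits of the fractional part of log_d x,
   using multiplication and division only. *)
let log_digits (d : int) (x : float) (n : int) : int list =
  let fd = float_of_int d in
  let rec power y k = if k = 0 then 1.0 else y *. power y (k - 1) in
  let rec strip y k = if y >= fd then strip (y /. fd) (k + 1) else (y, k) in
  let rec go y n acc =
    if n = 0 then List.rev acc
    else
      let y, digit = strip (power y d) 0 in
      go y (n - 1) (digit :: acc)
  in
  go x n []

(* log_digits 10 2.0 5 = [3; 0; 1; 0; 2], matching log10 2 = 0.30102...
   (floating-point rounding limits how many digits stay exact). *)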