Showing papers in "Information & Computation in 2001"


Journal ArticleDOI
TL;DR: This work presents threshold DSS (Digital Signature Standard) signatures in which the power to sign is shared by n players, so that for a given threshold parameter t sufficiently large subsets of players can jointly produce valid signatures while smaller coalitions cannot forge them.
Abstract: We present threshold DSS (digital signature standard) signatures where the power to sign is shared by n players such that for a given parameter t
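As background for how a threshold of signers can be enforced at all, the sketch below shows plain Shamir t-out-of-n secret sharing over a prime field, the standard primitive by which a signing key can be split so that only sufficiently large coalitions can use it. This is only an illustrative building block, not the paper's DSS protocol (which signs without ever reconstructing the key); the prime modulus and the parameters in the demo are arbitrary choices for the example.

```python
# Sketch: Shamir t-out-of-n secret sharing over GF(p).
# Illustrative only; a real threshold-DSS scheme shares the DSS secret key
# and signs without ever reconstructing it.
import random

P = 2**127 - 1  # arbitrary prime modulus for the example

def share(secret, t, n, p=P):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t)]
    def f(x):
        y = 0
        for c in reversed(coeffs):          # Horner evaluation mod p
            y = (y * x + c) % p
        return y
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares, p=P):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % p
                den = (den * (xi - xj)) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

if __name__ == "__main__":
    shares = share(123456789, t=2, n=5)
    print(reconstruct(random.sample(shares, 3)))   # any 3 shares suffice -> 123456789
```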

376 citations


Journal ArticleDOI
Rosario Gennaro, Pankaj Rohatgi
TL;DR: In this paper, a new efficient paradigm for signing digital streams is presented; unlike traditional message-oriented signature schemes, it does not require the receiver to process the entire stream before being able to authenticate its signature.
Abstract: We present a new efficient paradigm for signing digital streams. The problem of signing digital streams to prove their authenticity is substantially different from the problem of signing regular messages. Traditional signature schemes are message oriented and require the receiver to process the entire message before being able to authenticate its signature. However, a stream is a potentially very long (or infinite) sequence of bits that the sender sends to the receiver and the receiver is required to consume the received bits at more or less the input rate and without excessive delay. Therefore it is infeasible for the receiver to obtain the entire stream before authenticating and consuming it. Examples of streams include digitized video and audio files, data feeds, and applets. We present two solutions to the problem of authenticating digital streams. The first one is for the case of a finite stream which is entirely known to the sender (say a movie). We use this constraint to devise an extremely efficient solution. The second case is for a (potentially infinite) stream which is not known in advance to the sender (for example a live broadcast). We present proofs of security of our constructions. Our techniques also have applications in other areas, for example, efficient authentication of long files when communication is at a cost and signature-based filtering at a proxy server.
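One well-known way to realize the finite, fully-known-stream case is backward hash chaining: augment each block with the hash of its successor, working from the last block to the first, and sign only the first block; the receiver then checks one signature and afterwards one hash per block. The sketch below illustrates this chaining idea only; `sign`/`verify` are placeholders standing in for any standard signature scheme, and the block encoding (a 32-byte SHA-256 digest appended to each block) is an assumption made for the example rather than the paper's exact format.

```python
# Sketch: authenticating a finite, fully known stream by backward hash chaining.
# Only the first block is signed; every later block is authenticated by the
# hash embedded in its (already verified) predecessor.
import hashlib

def sign(data):            # placeholder for a real signature scheme
    return b"SIG(" + hashlib.sha256(data).digest() + b")"

def verify(data, sig):     # placeholder verification
    return sig == sign(data)

def sender(blocks):
    """Return (signature, augmented blocks): block_i || H(augmented block_{i+1})."""
    augmented, nxt = [], b""
    for blk in reversed(blocks):
        aug = blk + nxt
        augmented.append(aug)
        nxt = hashlib.sha256(aug).digest()
    augmented.reverse()
    return sign(augmented[0]), augmented

def receiver(signature, augmented):
    """Consume blocks one by one, authenticating each before use."""
    expected = None
    for i, aug in enumerate(augmented):
        if i == 0:
            assert verify(aug, signature)                      # one signature check
        else:
            assert hashlib.sha256(aug).digest() == expected    # one hash per block
        expected = aug[-32:]          # hash of the next augmented block
        yield aug[:-32] if i + 1 < len(augmented) else aug     # strip trailing hash

if __name__ == "__main__":
    sig, stream = sender([b"frame-%d" % i for i in range(5)])
    print([b for b in receiver(sig, stream)])
```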

302 citations


Journal ArticleDOI
TL;DR: In this article, an EXPTIME procedure for finding a winner in a pushdown game is presented, which is then used to solve the model-checking problem for the pushdown processes and the propositional μ-calculus.
Abstract: A pushdown game is a two-player perfect information infinite game on a transition graph of a pushdown automaton. A winning condition in such a game is defined in terms of states appearing infinitely often in the play. It is shown that if there is a winning strategy in a pushdown game then there is a winning strategy realized by a pushdown automaton. An EXPTIME procedure for finding a winner in a pushdown game is presented. The procedure is then used to solve the model-checking problem for the pushdown processes and the propositional μ-calculus. The problem is shown to be DEXPTIME-complete.

211 citations


Journal ArticleDOI
TL;DR: This paper introduces and examines the problem of model checking of open systems (module checking), and shows that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm.
Abstract: In computer system design, we distinguish between closed and open systems. A closed system is a system whose behavior is completely determined by the state of the system. An open system is a system that interacts with its environment and whose behavior depends on this interaction. The ability of temporal logics to describe an ongoing interaction of a reactive program with its environment makes them particularly appropriate for the specification of open systems. Nevertheless, model-checking algorithms used for the verification of closed systems are not appropriate for the verification of open systems. Correct model checking of open systems should check the system with respect to arbitrary environments and should take into account uncertainty regarding the environment. This is not the case with current model-checking algorithms and tools. In this paper we introduce and examine the problem of model checking of open systems (module checking, for short). We show that while module checking and model checking coincide for the linear-time paradigm, module checking is much harder than model checking for the branching-time paradigm. We prove that the problem of module checking is EXPTIME-complete for specifications in CTL and 2EXPTIME-complete for specifications in CTL*. This bad news is also carried over when we consider the program-complexity of module checking. As good news, we show that for the commonly-used fragment of CTL (universal, possibly, and always possibly properties), current model-checking tools do work correctly, or can be easily adjusted to work correctly, with respect to both closed and open systems.

182 citations


Journal ArticleDOI
TL;DR: It is shown that this technique offers effective load reduction on servers and high availability, and bounds on the server load that can be achieved with these techniques are proved.
Abstract: We initiate the study of probabilistic quorum systems, a technique for providing consistency of replicated data with high levels of assurance despite the failure of data servers. We show that this technique offers effective load reduction on servers and high availability. We explore probabilistic quorum systems both for services tolerant of benign server failures and for services tolerant of arbitrary (Byzantine) ones. We also prove bounds on the server load that can be achieved with these techniques.
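To make the trade-off concrete: if every operation contacts a uniformly random quorum of l·sqrt(n) servers, two quorums fail to intersect with probability at most e^(-l^2), while each server sees only an O(l/sqrt(n)) fraction of the accesses. This is the flavour of construction the abstract alludes to; the Monte Carlo check below merely illustrates the intersection bound, with n, l, and the trial count chosen arbitrarily.

```python
# Sketch: estimate how often two random quorums of size l*sqrt(n) fail to intersect.
import math, random

def miss_rate(n=400, l=2, trials=20000):
    q = int(l * math.sqrt(n))            # quorum size
    servers = range(n)
    misses = 0
    for _ in range(trials):
        a = set(random.sample(servers, q))
        b = set(random.sample(servers, q))
        misses += not (a & b)             # count disjoint quorum pairs
    return misses / trials

if __name__ == "__main__":
    print("empirical miss rate:", miss_rate())
    print("e^(-l^2) bound     :", math.exp(-4))   # l = 2
```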

142 citations


Journal ArticleDOI
TL;DR: A new way to measure the space needed in resolution refutations of CNF formulas in propositional logic is introduced and it is shown that Tseitin formulas associated to a certain kind of expander graphs of n nodes need resolution space n-c for some constant c.
Abstract: We introduce a new way to measure the space needed in resolution refutations of CNF formulas in propositional logic. With the former definition (1994, H. Kleine Büning and T. Lettmann, "Aussagenlogik: Deduktion und Algorithmen," Teubner, Stuttgart) the space required for the resolution of any unsatisfiable formula in CNF is linear in the number of clauses. The new definition allows a much finer analysis of the space in the refutation, ranging from constant to linear space. Moreover, the new definition allows us to relate the space needed in a resolution proof of a formula to other well-studied complexity measures. It coincides with the complexity of a pebble game in the resolution graphs of a formula and, as we show, has relationships to the size of the refutation. We also give upper and lower bounds on the space needed for the resolution of unsatisfiable formulas. We show that Tseitin formulas associated with a certain kind of expander graph on n nodes need resolution space n - c for some constant c. Measured in the number of clauses, this result is the best possible. We also show that the formulas expressing the general pigeonhole principle with n holes and more than n pigeons need space n + 1 independent of the number of pigeons. Since a matching space upper bound of n + 1 for these formulas exists, the obtained bound is exact. We also point to a possible connection between resolution space and resolution width, another measure for the complexity of resolution refutations.
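Under the refined definition, a refutation is replayed as a sequence of memory configurations: one may load an initial clause, resolve two clauses currently in memory, or erase a clause, and the space of the refutation is the largest number of clauses held at any moment. The sketch below replays such a derivation and reports its space; the tiny clause set and the derivation steps are made up for the example, and the code is only a reading of the definition, not anything from the paper's proofs.

```python
# Sketch: replay a resolution refutation while tracking the space measure
# (max number of clauses simultaneously in memory).  Clauses are frozensets
# of literals; a literal is (variable, polarity).
def resolve(c1, c2, var):
    """Resolvent of c1, c2 on variable `var` (appearing with opposite signs)."""
    assert (var, True) in c1 and (var, False) in c2
    return frozenset(c1 - {(var, True)}) | frozenset(c2 - {(var, False)})

def replay(steps):
    memory, space = set(), 0
    for op in steps:
        if op[0] == "load":                       # bring an initial clause into memory
            memory.add(op[1])
        elif op[0] == "resolve":                  # both premises must be in memory
            c1, c2, var = op[1], op[2], op[3]
            assert c1 in memory and c2 in memory
            memory.add(resolve(c1, c2, var))
        elif op[0] == "erase":
            memory.remove(op[1])
        space = max(space, len(memory))
    assert frozenset() in memory                  # the empty clause: a refutation
    return space

if __name__ == "__main__":
    X, nX = ("x", True), ("x", False)
    Y, nY = ("y", True), ("y", False)
    c1, c2, c3 = frozenset({X}), frozenset({nX, Y}), frozenset({nY})
    steps = [("load", c1), ("load", c2), ("resolve", c1, c2, "x"),
             ("erase", c1), ("erase", c2), ("load", c3),
             ("resolve", frozenset({Y}), c3, "y")]
    print("space of this refutation:", replay(steps))   # -> 3
```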

115 citations


Journal ArticleDOI
TL;DR: An event structure semantics for contextual nets is presented, an extension of P/T Petri nets where transitions can check for the presence of tokens without consuming them (read-only operations), and the relation between the proposed unfolding semantics and several deterministic process semantics for contextual nets in the literature is investigated.
Abstract: We present an event structure semantics for contextual nets, an extension of P/T Petri nets where transitions can check for the presence of tokens without consuming them (read-only operations). A basic role is played by asymmetric event structures, a generalization of Winskel's prime event structures where symmetric conflict is replaced by a relation modelling asymmetric conflict or weak causality, used to represent a new kind of dependency between events arising in contextual nets. Extending Winskel's seminal work on safe nets, the truly concurrent event-based semantics of contextual nets is given at categorical level via a chain of coreflections leading from the category SW-CN of semi-weighted contextual nets to the category Dom of finitary prime algebraic domains. First an unfolding construction generates from a contextual net a corresponding occurrence contextual net, from where an asymmetric event structure is extracted. Then the configurations of the asymmetric event structure, endowed with a suitable order, are shown to form a finitary prime algebraic domain. We also investigate the relation between the proposed unfolding semantics and several deterministic process semantics for contextual nets in the literature. In particular, the domain obtained via the unfolding is characterized as the collection of the deterministic processes of the net endowed with a kind of prefix ordering.

111 citations


Journal ArticleDOI
TL;DR: An analysis for the π-calculus is presented that shows how names will be bound to actual channels at run time and establishes a super-set of the set of channels to which a given name may be bound and of the set of channels that may be sent along a given channel.
Abstract: Control Flow Analysis is a static technique for predicting safe and computable approximations to the set of values that the objects of a program may assume during its execution. We present an analysis for the π-calculus that shows how names will be bound to actual channels at run time. The result of our analysis establishes a super-set of the set of channels to which a given name may be bound and of the set of channels that may be sent along a given channel. Besides a set of rules that permits one to validate a given solution, we also offer a constructive procedure that builds solutions in low polynomial time. Applications of our analysis include establishing two simple security properties of processes. One example is that P has no leaks: P offers communication to the external environment through public channels only and confines its secret channels within itself. The other example is connected to the no read-up/no write-down property of Bell and LaPadula: once processes are given levels of security clearance, we check that a process at a high level never sends channels to processes at a lower level.

93 citations


Journal ArticleDOI
TL;DR: It is shown that the reduction method can be extended to solve the construction variants of many decision problems on graphs of bounded treewidth, including all problems definable in monadic second order logic.
Abstract: This paper presents a number of new ideas and results on graph reduction applied to graphs of bounded treewidth. S. Arnborg, B. Courcelle, A. Proskurowski, and D. Seese (J. Assoc. Comput. Mach. 40, 1134-1164 (1993)) have shown that many decision problems on graphs can be solved in linear time on graphs of bounded treewidth, using a finite set of reduction rules. These algorithms can be used to solve problems on graphs of bounded treewidth without the need to obtain a tree decomposition of the input graph first. We show that the reduction method can be extended to solve the construction variants of many decision problems on graphs of bounded treewidth, including all problems definable in monadic second order logic. We also show that a variant of these reduction algorithms can be used to solve (constructive) optimization problems in O(n) time. For example, optimization and construction variants of INDEPENDENT SET and HAMILTONIAN COMPLETION NUMBER can be solved in this way on graphs of small treewidth. Additionally, we show that the results of H. L. Bodlaender and T. Hagerup (SIAM J. Comput. 27, 1725-1746 (1998)) can be applied to our reduction algorithms, which results in parallel reduction algorithms that use O(n) operations and O(log n log* n) time on an EREW PRAM, or O(log n) time on a CRCW PRAM.
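For a feel of the reduction-rule style on a much simpler problem than those treated in the paper: a graph has treewidth at most 2 exactly when repeatedly removing a vertex of degree at most 2 (reconnecting the two neighbours of a removed degree-2 vertex) reduces it to the empty graph. The sketch below implements only this textbook special case; it says nothing about the construction variants, MSO-definable problems, or the parallel algorithms that are the paper's actual contribution.

```python
# Sketch: decide treewidth <= 2 by graph reduction (repeatedly remove a vertex
# of degree <= 2, connecting the two neighbours of a removed degree-2 vertex).
def treewidth_at_most_2(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    while adj:
        v = next((x for x in adj if len(adj[x]) <= 2), None)
        if v is None:
            return False                      # stuck: every vertex has degree >= 3
        nbrs = list(adj.pop(v))
        for u in nbrs:
            adj[u].discard(v)
        if len(nbrs) == 2:                    # series reduction: reconnect the neighbours
            a, b = nbrs
            adj[a].add(b)
            adj[b].add(a)
    return True

if __name__ == "__main__":
    cycle = [(i, (i + 1) % 5) for i in range(5)]
    k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
    print(treewidth_at_most_2(cycle))   # True  (a cycle has treewidth 2)
    print(treewidth_at_most_2(k4))      # False (K4 has treewidth 3)
```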

91 citations


Journal ArticleDOI
TL;DR: Reduction to the approximation of polynomial zeros yields new insight into the GCD problem and effective solution algorithms; in particular, it enables a certified correct solution for a large class of input polynomials.
Abstract: Computation of approximate polynomial greatest common divisors (GCDs) is important both theoretically and due to its applications to control linear systems, network theory, and computer-aided design. We study two approaches to the solution so far omitted by researchers, despite intensive recent work in this area. Correlation to numerical Padé approximation enabled us to improve computations for both problems (GCDs and Padé). Reduction to the approximation of polynomial zeros enabled us to obtain a new insight into the GCD problem and to devise effective solution algorithms. In particular, unlike the known algorithms, we estimate the degree of approximate GCDs at a low computational cost, and this enables us to obtain a certified correct solution for a large class of input polynomials. We also restate the problem in terms of the norm of the perturbation of the zeros (rather than the coefficients) of the input polynomials, which leads us to a fast certified solution for any pair of input polynomials via the computation of their roots and the maximum matchings or connected components in the associated bipartite graph.
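The root-based viewpoint mentioned at the end can be illustrated naively: compute the zeros of both polynomials numerically, pair up zeros lying within a tolerance of each other, and return the monic polynomial with the paired zeros as an approximate-GCD candidate. The sketch below is only meant to convey that idea; it uses numpy's generic root finder, a greedy pairing instead of a maximum bipartite matching, and an arbitrary tolerance, and makes no attempt at the certified guarantees the paper is about.

```python
# Sketch: a naive epsilon-GCD of two polynomials via their numerically computed zeros.
import numpy as np

def approx_gcd(p, q, eps=1e-3):
    """p, q: coefficient lists, highest degree first.  Returns a monic candidate GCD."""
    zp, zq = list(np.roots(p)), list(np.roots(q))
    common = []
    for z in zp:
        # greedily pair z with the closest still-unused zero of q (a maximum
        # matching in the bipartite "closeness" graph would be more robust)
        j = min(range(len(zq)), key=lambda k: abs(z - zq[k]), default=None)
        if j is not None and abs(z - zq[j]) < eps:
            common.append((z + zq.pop(j)) / 2)    # average the paired zeros
    return np.real_if_close(np.poly(common)) if common else np.array([1.0])

if __name__ == "__main__":
    p = np.poly([1.0, 2.0, 3.0])          # zeros 1, 2, 3
    q = np.poly([2.0 + 1e-5, 3.0, -4.0])  # zeros ~2, 3, -4
    print(approx_gcd(p, q))               # ~ coefficients of (x - 2)(x - 3)
```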

Journal ArticleDOI
TL;DR: In this article, the authors give a logical semantics for the class CC of concurrent constraint programming languages and for its extension LCC based on linear constraint systems, and illustrate the usefulness of these results by showing with examples how the phase semantics of linear logic can be used to give simple “semantical” proofs of safety properties of LCC programs.
Abstract: In this paper we give a logical semantics for the class CC of concurrent constraint programming languages and for its extension LCC based on linear constraint systems. Besides the characterization in intuitionistic logic of the stores of CC computations, we show that both the stores and the successes of LCC computations can be characterized in intuitionistic linear logic. We illustrate the usefulness of these results by showing with examples how the phase semantics of linear logic can be used to give simple “semantical” proofs of safety properties of LCC programs.

Journal ArticleDOI
TL;DR: In this paper, a theoretical study of constraint simplification for a type inference system with subtyping is presented, where constraints are interpreted in a non-structural lattice of regular terms.
Abstract: This paper offers a theoretical study of constraint simplification, a fundamental issue for the designer of a practical type inference system with subtyping. In the simpler case where constraints are equations, a simple isomorphism between constrained type schemes and finite state automata yields a complete constraint simplification method. Using it as a guide for the intuition, we move on to the case of subtyping, and describe several simplification algorithms. Although no longer complete, they are conceptually simple, efficient, and very effective in practice. Overall, this paper gives a concise theoretical account of the techniques found at the core of our type inference system. Our study is restricted to the case where constraints are interpreted in a non-structural lattice of regular terms. Nevertheless, we highlight a small number of general ideas, which explain our algorithms at a high level and may be applicable to a variety of other systems.

Journal ArticleDOI
TL;DR: The main goal of this paper is the comparison of the power of Las Vegas computation with that of deterministic and nondeterministic computation for the complexity measures of one-way communication, ordered binary decision diagrams, and finite automata.
Abstract: The study of the computational power of randomized computations is one of the central tasks of complexity theory. The main goal of this paper is the comparison of the power of Las Vegas computation with that of deterministic and nondeterministic computation. We investigate the power of Las Vegas computation for the complexity measures of one-way communication, ordered binary decision diagrams, and finite automata. (i) For the one-way communication complexity of two-party protocols we show that Las Vegas communication can save at most one half of the deterministic one-way communication complexity. We also present a language for which this gap is tight. (ii) The result (i) is applied to show an at most polynomial gap between determinism and Las Vegas for ordered binary decision diagrams. (iii) For the size (i.e., the number of states) of finite automata we show that the size of Las Vegas finite automata recognizing a language L is at least the square root of the size of the minimal deterministic finite automaton recognizing L. Using a specific language we verify the optimality of this bound.

Journal ArticleDOI
TL;DR: The approximability properties of several weighted problems are investigated, by comparing them with the respective unweighted problems, and the new notion of “mixing” set is introduced and it is shown that these reductions give new non-approximability results for these problems.
Abstract: We investigate the approximability properties of several weighted problems, by comparing them with the respective unweighted problems. For an appropriate (and very general) definition of niceness, we show that if a nice weighted problem is hard to approximate within r, then its polynomially bounded weighted version is hard to approximate within r - o(1). Then we turn our attention to specific problems, and we show that the unweighted versions of MIN VERTEX COVER, MIN SAT, MAX CUT, MAX DICUT, MAX 2SAT, and MAX EXACTkSAT are exactly as hard to approximate as their weighted versions. We note in passing that MIN VERTEX COVER is exactly as hard to approximate as MIN SAT. In order to prove the reductions for MAX 2SAT, MAX CUT, MAX DICUT, and MAX E3SAT we introduce the new notion of "mixing" set and we give an explicit construction of such sets. These reductions give new non-approximability results for these problems.

Journal ArticleDOI
TL;DR: It is proved that the class of languages recognized by quantum automata with isolated cut point is the class of reversible regular languages; more generally, a bound on the inverse error implies the regularity of the language accepted by a quantum automaton.
Abstract: In this paper we analyze some features of the behaviour of quantum automata. In particular we prove that the class of languages recognized by quantum automata with isolated cut point is the class of reversible regular languages. As a more general result, we give a bound on the inverse error that implies the regularity of the language accepted by a quantum automaton.
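As a reminder of the model: a measure-once quantum finite automaton assigns a unitary to each input symbol, applies them to an initial state vector, and accepts with the squared norm of the final state's projection onto the accepting subspace; a cut point is isolated when every word's acceptance probability stays boundedly away from it. The simulation below is a generic illustration of that model (a one-symbol automaton realized as a rotation by an arbitrarily chosen irrational angle), not an automaton taken from the paper.

```python
# Sketch: simulate a measure-once quantum finite automaton (MO-QFA).
# One symbol 'a' acting as a rotation; accepting subspace = span{|0>}.
import numpy as np

theta = np.pi * np.sqrt(2) / 10           # irrational rotation angle (example choice)
U = {"a": np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])}
initial = np.array([1.0, 0.0])             # start in |0>
accept_proj = np.diag([1.0, 0.0])          # projection onto the accepting subspace

def accept_probability(word):
    state = initial
    for symbol in word:
        state = U[symbol] @ state          # unitary evolution per input symbol
    return float(np.linalg.norm(accept_proj @ state) ** 2)

if __name__ == "__main__":
    for n in range(6):
        print(n, round(accept_probability("a" * n), 4))
    # With an irrational rotation angle the acceptance probabilities cos^2(n*theta)
    # come arbitrarily close to 1/2, so the cut point 1/2 is NOT isolated here.
```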

Journal ArticleDOI
TL;DR: A semantic framework is defined to reason about properties of abstractions of SLD-derivations; using abstract interpretation techniques to model abstraction allows very simple conditions on the observables to guarantee the validity of several general theorems.
Abstract: We define a semantic framework to reason about properties of abstractions of SLD-derivations. The framework allows us to address problems such as the relation between the (top-down) operational semantics and the (bottom-up) denotational semantics, the existence of a denotation for a set of definite clauses and their properties (compositionality w.r.t. various syntactic operators, correctness, minimality, and precision). Using abstract interpretation techniques to model abstraction allows us to state very simple conditions on the observables which guarantee the validity of several general theorems.

Journal ArticleDOI
TL;DR: The notion of pre-nets is introduced, obtaining a fully satisfactory categorical treatment, where the operational semantics of nets yields an adjunction, and since the universal property of adjunctions guarantees that colimit constructions on nets are preserved in the authors' algebraic models, the resulting semantic framework has good compositional properties.
Abstract: We show that although the algebraic semantics of place/transition Petri nets under the collective token philosophy can be fully explained in terms of strictly symmetric monoidal categories, the analogous construction under the individual token philosophy is not completely satisfactory, because it lacks universality and also functoriality. We introduce the notion of pre-nets to overcome this, obtaining a fully satisfactory categorical treatment, where the operational semantics of nets yields an adjunction. This allows us to present a uniform logical description of net behaviors under both the collective and the individual token philosophies in terms of theories and theory morphisms in partial membership equational logic. Moreover, since the universal property of adjunctions guarantees that colimit constructions on nets are preserved in our algebraic models, the resulting semantic framework has good compositional properties.

Journal ArticleDOI
TL;DR: This paper presents a parallel algorithm for computing the edit distance for the class of languages accepted by one-way nondeterministic auxiliary pushdown automata working in polynomial time, a class that strictly contains context-free languages.
Abstract: The notion of edit distance arises in very different fields such as self-correcting codes, parsing theory, speech recognition, and molecular biology. The edit distance between an input string and a language L is the minimum cost of a sequence of edit operations (substitution of a symbol in another incorrect symbol, insertion of an extraneous symbol, deletion of a symbol) needed to change the input string into a sentence of L. In this paper we study the complexity of computing the edit distance, discovering sharp boundaries between classes of languages for which this function can be efficiently evaluated and classes of languages for which it seems to be difficult to compute. Our main result is a parallel algorithm for computing the edit distance for the class of languages accepted by one-way nondeterministic auxiliary pushdown automata working in polynomial time, a class that strictly contains context-free languages. Moreover, we show that this algorithm can be extended in order to find a sentence of the language from which the input string has minimum distance.
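For the regular-language special case the computation is easy to picture: search for a cheapest path over pairs (number of input symbols consumed, DFA state), where matching or substituting a symbol advances both components, deleting an input symbol advances only the position, and inserting a symbol advances only the automaton. The sketch below does exactly this with Dijkstra's algorithm on a toy DFA; the paper's contribution, a parallel algorithm for languages of one-way nondeterministic auxiliary pushdown automata, is well beyond this illustration.

```python
# Sketch: edit distance from a string w to a regular language, by Dijkstra's
# algorithm on the graph of pairs (chars of w consumed, DFA state).
import heapq

def edit_distance_to_dfa(w, delta, start, accepting, alphabet):
    dist = {(0, start): 0}
    heap = [(0, 0, start)]
    while heap:
        d, i, q = heapq.heappop(heap)
        if d > dist.get((i, q), float("inf")):
            continue
        if i == len(w) and q in accepting:
            return d                                   # cheapest way to reach an accepting state
        moves = []
        if i < len(w):
            moves.append((d + 1, i + 1, q))            # delete w[i]
        for a in alphabet:
            nq = delta[q][a]
            moves.append((d + 1, i, nq))               # insert symbol a
            if i < len(w):
                moves.append((d + (a != w[i]), i + 1, nq))   # match or substitute
        for nd, ni, nq in moves:
            if nd < dist.get((ni, nq), float("inf")):
                dist[(ni, nq)] = nd
                heapq.heappush(heap, (nd, ni, nq))
    return float("inf")

if __name__ == "__main__":
    # Toy DFA for the language (ab)* over {a, b}; state 2 is a dead state.
    delta = {0: {"a": 1, "b": 2}, 1: {"a": 2, "b": 0}, 2: {"a": 2, "b": 2}}
    print(edit_distance_to_dfa("aab", delta, 0, {0}, "ab"))   # -> 1 (delete one 'a')
```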

Journal ArticleDOI
TL;DR: The notion of RDT-compliance is introduced, a family of communication-induced checkpointing protocols that ensure on-the-fly RDT properties is considered, and a new communication-induced checkpointing protocol P is presented, which tracks a minimal set of Z-paths and breaks those not perceived as being doubled.
Abstract: Considering a checkpoint and communication pattern, the rollback-dependency trackability (RDT) property stipulates that there is no hidden dependency between local checkpoints. In other words, if there is a dependency between two checkpoints due to a noncausal sequence of messages (Z-path), then there exists a causal sequence of messages (C-path) that doubles the noncausal one and that establishes the same dependency. This paper introduces the notion of RDT-compliance. A property defined on Z-paths is RDT-compliant if the causal doubling of Z-paths having this property is sufficient to ensure RDT. Based on this notion, the paper provides examples of such properties. Moreover, these properties are visible, i.e., they can be tested on the fly. One of these properties is shown to be minimal with respect to visible and RDT-compliant properties. In other words, this property defines a minimal visible set of Z-paths that have to be doubled for the RDT property to be satisfied. Then, a family of communication-induced checkpointing protocols that ensure on-the-fly RDT properties is considered. Assuming processes take local checkpoints independently (called basic checkpoints), protocols of this family direct them to take on-the-fly additional local checkpoints (called forced checkpoints) in order that the resulting checkpoint and communication pattern satisfies the RDT property. The second contribution of this paper is a new communication-induced checkpointing protocol P. This protocol, based on a condition derived from the previous characterization, tracks a minimal set of Z-paths and breaks those not perceived as being doubled. Finally, a set of communication-induced checkpointing protocols are derived from P. Each of these derivations considers a particular weakening of the general condition used by P. It is interesting to note that some of these derivations produce communication-induced checkpointing protocols that have already been proposed in the literature.

Journal ArticleDOI
TL;DR: A partition refinement algorithm for the π-calculus, a development of CCS where channel names can be communicated, is presented and can be used to check bisimilarity and to compute minimal realisations of finite control processes.
Abstract: The partition refinement algorithm is the basis for most of the tools for checking bisimulation equivalences and for computing minimal realisations of CCS-like finite state processes. In this paper, we present a partition refinement algorithm for the π-calculus, a development of CCS where channel names can be communicated. It can be used to check bisimilarity and to compute minimal realisations of finite control processes—the π-calculus counterpart of CCS finite state processes. The algorithm is developed for strong open bisimulation and can be adapted to late and early bisimulations, as well as to weak bisimulations. To arrive at the algorithm, a few laws, proof techniques, and four characterizations of open bisimulation are proved.
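The underlying procedure is easiest to see on a plain finite-state labelled transition system: start with all states in a single block and keep splitting blocks whenever two of their states can reach different blocks via some action; when the partition stabilizes, its blocks are the strong bisimilarity classes. The sketch below shows only this naive version; handling name passing, open bisimulation, and the late/early/weak variants is exactly what the paper adds.

```python
# Sketch: naive partition refinement computing strong bisimilarity classes
# of a finite labelled transition system.
def bisimulation_classes(states, transitions):
    """transitions: set of (source, action, target) triples."""
    partition = [set(states)]
    while True:
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        def signature(s):
            # which blocks are reachable from s, and by which actions
            return frozenset((a, block_of[t]) for (p, a, t) in transitions if p == s)
        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)   # split by signature
            new_partition.extend(groups.values())
        if len(new_partition) == len(partition):   # no block was split: stable
            return new_partition
        partition = new_partition

if __name__ == "__main__":
    trans = {("p", "a", "p1"), ("p", "a", "p2"), ("q", "a", "q1"),
             ("p1", "b", "p"), ("p2", "b", "p"), ("q1", "b", "q")}
    print(bisimulation_classes({"p", "p1", "p2", "q", "q1"}, trans))
    # p and q end up in the same class: they are strongly bisimilar.
```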

Journal ArticleDOI
TL;DR: Context-free (CF) series on trees with coefficients in a semiring are investigated; they are obtained as components of the least solutions of systems of equations having polynomials on their right-hand sides.
Abstract: We investigate context-free (CF) series on trees with coefficients on a semiring; they are obtained as components of the least solutions of systems of equations having polynomials on their right-hand sides. The relationship between CF series on trees and CF tree-grammars and recursive program schemes is also examined. Polypodes, a new algebraic structure, are introduced in order to study in common series on trees and words and applications are given.

Journal ArticleDOI
TL;DR: The computational paradigms of superposition of values and of higher-order sharing are identified, appealing to compelling analogies with quantum mechanics and SIMD-parallelism.
Abstract: We analyze the inherent complexity of implementing Lévy's notion of optimal evaluation for the λ-calculus, where similar redexes are contracted in one step via so-called parallel β-reduction. Optimal evaluation was finally realized by Lamping, who introduced a beautiful graph reduction technology for sharing evaluation contexts dual to the sharing of values. His pioneering insights have been modified and improved in subsequent implementations of optimal reduction. We prove that the cost of parallel β-reduction is not bounded by any Kalmár-elementary recursive function. Not only do we establish that the parallel β-step cannot be a unit-cost operation, we demonstrate that the time complexity of implementing a sequence of n parallel β-steps is not bounded as O(2^n), O(2^(2^n)), O(2^(2^(2^n))), or in general, O(K_l(n)), where K_l(n) is a fixed stack of l 2's with an n on top. A key insight, essential to the establishment of this non-elementary lower bound, is that any simply typed λ-term can be reduced to normal form in a number of parallel β-steps that is only polynomial in the length of the explicitly typed term. The result follows from Statman's theorem that deciding equivalence of typed λ-terms is not elementary recursive. The main theorem gives a lower bound on the work that must be done by any technology that implements Lévy's notion of optimal reduction. However, in the significant case of Lamping's solution, we make some important remarks addressing how work done by β-reduction is translated into equivalent work carried out by his bookkeeping nodes. In particular, we identify the computational paradigms of superposition of values and of higher-order sharing, appealing to compelling analogies with quantum mechanics and SIMD-parallelism.

Journal ArticleDOI
TL;DR: This work addresses several problematic issues, such as the use of higher-order abstract syntax in inductive sets in the presence of recursive constructors, and the formalization of modal (sequent-style) rules and of context sensitive grammars in the calculus of inductive constructions.
Abstract: We present a natural deduction proof system for the propositional modal μ-calculus and its formalization in the calculus of inductive constructions. We address several problematic issues, such as the use of higher-order abstract syntax in inductive sets in the presence of recursive constructors, and the formalization of modal (sequent-style) rules and of context sensitive grammars. The formalization can be used in the system Coq, providing an experimental computer-aided proof environment for the interactive development of error-free proofs in the modal μ-calculus. The techniques we adopt can be readily ported to other languages and proof systems featuring similar problematic issues.

Journal ArticleDOI
TL;DR: This work investigates inductive theorem proving techniques for first-order functions whose meaning and domains can be specified by Horn clauses built up from the equality and finitely many unary membership predicates.
Abstract: This work investigates inductive theorem proving techniques for first-order functions whose meaning and domains can be specified by Horn clauses built up from the equality and finitely many unary membership predicates. In contrast with other works in the area, constructors are not assumed to be free. Techniques originating from tree automata are used to describe ground constructor terms in normal form, on which the induction proofs are built up. Validity of (free) constructor clauses is checked by an original technique relying on the recent discovery of a complete axiomatization of finite trees and their rational subsets. Validity of clauses with defined symbols or nonfree constructor terms is reduced to the latter case by appropriate inference rules using a notion of ground reducibility for these symbols. We show how to check this property by generating proof obligations which can be passed over to the inductive prover.

Journal ArticleDOI
TL;DR: The equivalence of weak and strong normalization (uniform normalization) is established for various restricted λ-calculi, using new perpetuality criteria for orthogonal rewrite systems.
Abstract: We study perpetuality of reduction steps, as well as perpetuality of redexes, in orthogonal rewrite systems. A perpetual step is a reduction step which retains the possibility of infinite reductions. A perpetual redex is a redex which, when put into an arbitrary context, yields a perpetual step. We generalize and refine existing criteria for the perpetuality of reduction steps and redexes in orthogonal term rewriting systems and the λ-calculus due to Bergstra and Klop and others. We first introduce context-sensitive conditional expression reduction systems (CCERSs) and define a concept of orthogonality (which implies confluence) for them. In particular, several important λ-calculi and their extensions and restrictions can naturally be embedded into orthogonal CCERSs. We then define a perpetual reduction strategy which enables one to construct minimal (w.r.t. Lévy's permutation ordering on reductions) infinite reductions in orthogonal fully-extended CCERSs. Using the properties of the minimal perpetual strategy, we prove (1) perpetuality of any reduction step that does not erase potentially infinite arguments, which are arguments that may become, via substitution, infinite after a number of outside steps, and (2) perpetuality (in every context) of any safe redex, which is a redex whose substitution instances may discard infinite arguments only when the corresponding contracta remain infinite. We prove both these perpetuality criteria for orthogonal fully-extended CCERSs and then specialize and apply them to restricted λ-calculi, demonstrating their usefulness. In particular, we prove the equivalence of weak and strong normalization (whose equivalence is here called uniform normalization) for various restricted λ-calculi, most of which cannot be derived from previously known perpetuality criteria.

Journal ArticleDOI
TL;DR: A fast deterministic algorithm for integer sorting in linear space is presented, sorting n integers in O(n log log n log log log n) time.
Abstract: We present a fast deterministic algorithm for integer sorting in linear space. Our algorithm sorts n integers in the range {0, 1, 2, ..., m-1} in linear space in O(n log log n log log log n) time. When log m ≥ log^(2+ε) n, ε > 0, we can further achieve O(n log log n) time. This improves the O(n (log log n)^2) time bound given in M. Thorup (1998, in "Proc. 1998 ACM-SIAM Symp. on Discrete Algorithms (SODA'98)," pp. 550-555). This result is obtained by combining our new technique with that of Thorup's. Signature sorting (A. Andersson, T. Hagerup, S. Nilsson, and R. Raman, 1995, in "Proc. 1995 Symposium on Theory of Computing," pp. 427-436), A. Andersson's result (1996, in "Proc. 1996 IEEE Symp. on Foundations of Computer Science," pp. 135-141), R. Raman's result (1996, Lecture Notes in Computer Science, Vol. 1136, pp. 121-137, Springer-Verlag, Berlin/New York), and our previous result (Y. Han and X. Shen, 1999, in "Proc. 1999 Tenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'99)," Baltimore, MD, January, pp. 419-428) are also used for the design of our algorithms. We provide an approach and techniques which are totally different from previous approaches and techniques for the problem. As a consequence our technique can be extended to apply to nonconservative sorting and parallel sorting. Our nonconservative sorting algorithm sorts n integers in {0, 1, ..., m-1} in time O(n (log log n)^2 / (log k + log log log n)) using word length k log(m+n), where k ≤ log n. Our EREW parallel algorithm sorts n integers in {0, 1, ..., m-1} in O((log n)^2) time and O(n (log log n)^2 / log log log n) operations provided log m = Ω((log n)^2).

Journal ArticleDOI
Howard Straubing
TL;DR: It is proved that a regular language defined by a boolean combination of generalized Σ1-sentences built using modular counting quantifiers can be defined by a boolean combination of Σ1-sentences in which only regular numerical predicates appear.
Abstract: We prove that a regular language defined by a boolean combination of generalized Σ1-sentences built using modular counting quantifiers can be defined by a boolean combination of Σ1-sentences in which only regular numerical predicates appear. The same statement, with "Σ1" replaced by "first-order," is equivalent to the conjecture that the nonuniform circuit complexity class ACC is strictly contained in NC^1. The argument introduces some new techniques, based on a combination of semigroup theory and Ramsey theory, which may shed some light on the general case.

Journal ArticleDOI
TL;DR: This paper defines an important class of BEE channels, the SID channels, which include channels that permit a bounded number of scattered errors and, possibly at the same time, a bounded burst of errors in any segment of predefined length of a message.
Abstract: Recently, the author introduced a nonprobabilistic mathematical model of discrete channels, the BEE channels, that involve the error-types substitution, insertion, and deletion. This paper defines an important class of BEE channels, the SID channels, which include channels that permit a bounded number of scattered errors and, possibly at the same time, a bounded burst of errors in any segment of predefined length of a message. A formal syntax is defined for generating channel expressions, and appropriate semantics is provided for interpreting a given channel expression as a communication channel (SID channel) that permits combinations of substitutions, insertions, and deletions of symbols. Our framework permits one to generalize notions such as error correction and unique decodability, and express statements of the form “The code K can correct all errors of type ?” and “it is decidable whether the code K is uniquely decodable for the channel described by ?”, where ? is any SID channel expression.

Journal ArticleDOI
TL;DR: Here, it is shown how to generalize the notion of entropy (of a language) in order to obtain new formulas to determine the Hausdorff dimension of fractal sets (also in Euclidean spaces), especially defined via regular (ω-)languages.
Abstract: Valuations (morphisms from (Σ*, ·, e) to ((0, ∞), ·, 1)) are a generalization of Bernoulli morphisms introduced by Eilenberg ["Automata, Languages, and Machines," Academic Press, New York, 1974]. Here, we show how to generalize the notion of entropy (of a language) in order to obtain new formulas to determine the Hausdorff dimension of fractal sets (also in Euclidean spaces), especially defined via regular (ω-)languages. By doing this, we can sharpen and generalize earlier results in two ways: first, we treat the case where the underlying basic iterated function system contains noncontractive mappings and, second, we obtain results valid for nonregular languages as well.
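In the classical regular and uniformly contractive setting the recipe reads: take the automaton of allowed digit sequences, let λ be the spectral radius of its transition-count matrix, so the entropy of the language is log λ, and divide by the logarithm of the base to obtain the Hausdorff dimension. The sketch below redoes two standard examples this way (the middle-third Cantor set and the golden-mean shift); the paper's point is precisely that the entropy notion can be generalized beyond this regular, contractive case, which the sketch does not attempt.

```python
# Sketch: Hausdorff dimension of a regular digit-expansion fractal as
# entropy(language) / log(base), illustrated on two standard examples.
import numpy as np

def dimension(adjacency, base):
    lam = max(abs(np.linalg.eigvals(adjacency)))   # spectral radius
    entropy = np.log(lam)                          # entropy of the regular language
    return entropy / np.log(base)

if __name__ == "__main__":
    # One-state automaton over base-3 digits allowing only digits 0 and 2:
    # the matrix entry counts the allowed digits leading from state to state.
    cantor = np.array([[2.0]])
    print(dimension(cantor, 3), "vs", np.log(2) / np.log(3))   # both ~0.6309

    # Base-2 digits with no two consecutive 1's (golden-mean shift):
    golden = np.array([[1.0, 1.0],
                       [1.0, 0.0]])
    print(dimension(golden, 2))    # log(golden ratio)/log 2 ~ 0.694
```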