
Showing papers in "Information & Computation in 1991"


Journal ArticleDOI
TL;DR: Calculi are introduced, based on a categorical semantics for computations, that provide a correct basis for proving equivalence of programs for a wide range of notions of computation.
Abstract: The λ-calculus is considered a useful mathematical tool in the study of programming languages, since programs can be identified with λ-terms. However, if one goes further and uses βη-conversion to prove equivalence of programs, then a gross simplification is introduced (programs are identified with total functions from values to values) that may jeopardise the applicability of theoretical results. In this paper we introduce calculi, based on a categorical semantics for computations, that provide a correct basis for proving equivalence of programs for a wide range of notions of computation.
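The categorical semantics in question models each notion of computation as a type constructor with operations for including values and sequencing. As a hedged illustration (not from the paper, names are mine), here is a minimal Python sketch of one such notion, partiality, showing how an effect blocks an equivalence that β-reasoning would license:

```python
# Hedged sketch (not from the paper): a notion of computation as a type
# constructor T with unit and bind, here T = partiality ("Maybe").

class Maybe:
    """A computation of a value that may fail to return one."""
    def __init__(self, value=None, defined=True):
        self.value, self.defined = value, defined

    @staticmethod
    def unit(v):
        # Include a value as a trivial (effect-free) computation.
        return Maybe(v)

    def bind(self, f):
        # Sequencing: run self; if it produced a value, feed it to f.
        return f(self.value) if self.defined else self

FAIL = Maybe(defined=False)  # the undefined computation

# Beta-reasoning would identify (lambda x: 0)(e) with 0 for every e, i.e.
# treat programs as total functions; with effects the two sides differ:
discard = lambda c: c.bind(lambda x: Maybe.unit(0))
print(discard(Maybe.unit(7)).defined)  # True:  argument converges
print(discard(FAIL).defined)           # False: divergence is not discarded
```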

1,825 citations


Journal ArticleDOI
TL;DR: By using probabilistic transition systems as the underlying semantic model, it is shown how a testing algorithm can distinguish, with a probability arbitrarily close to one, between processes that are not bisimulation equivalent.
Abstract: We propose a language for testing concurrent processes and examine its strength in terms of the processes that are distinguished by a test. By using probabilistic transition systems as the underlying semantic model, we show how a testing algorithm can distinguish, with a probability arbitrarily close to one, between processes that are not bisimulation equivalent. We also show a similar result (in a slightly stronger form) for a new process relation called 2/3-bisimulation, which lies strictly between simulation and bisimulation. Finally, the ultimate strength of the testing language is shown to identify a new process relation called probabilistic bisimulation, which is strictly stronger than bisimulation.

1,297 citations


Journal ArticleDOI
TL;DR: Comparisons and equivalences are given between Valiant's model and the prediction learning models of Haussler, Littlestone, and Warmuth and show that several simplifying assumptions on polynomial learning algorithms can be made without loss of generality.
Abstract: In this paper we consider several variants of Valiant's learnability model that have appeared in the literature. We give conditions under which these models are equivalent in terms of the polynomially learnable concept classes they define. These equivalences allow comparisons of most of the existing theorems in Valiant-style learnability and show that several simplifying assumptions on polynomial learning algorithms can be made without loss of generality. We also give a useful reduction of learning problems to the problem of finding consistent hypotheses, and give comparisons and equivalences between Valiant's model and the prediction learning models of Haussler, Littlestone, and Warmuth (in “29th Annual IEEE Symposium on Foundations of Computer Science,” 1988).

208 citations


Journal ArticleDOI
TL;DR: A method for providing semantic interpretations for languages with a type system featuring inheritance polymorphism, illustrated on an extension of the language Fun of Cardelli and Wegner, which is interpreted via a translation into an extended polymorphic lambda calculus.
Abstract: We present a method for providing semantic interpretations for languages with a type system featuring inheritance polymorphism. Our approach is illustrated on an extension of the language Fun of Cardelli and Wegner, which we interpret via a translation into an extended polymorphic lambda calculus. Our goal is to interpret inheritances in Fun via coercion functions which are definable in the target of the translation. Existing techniques in the theory of semantic domains can then be used to interpret the extended polymorphic lambda calculus, thus providing many models for the original language. This technique makes it possible to model a rich type discipline which includes parametric polymorphism and recursive types as well as inheritance. A central difficulty in providing interpretations for explicit type disciplines featuring inheritance in the sense discussed in this paper arises from the fact that programs can type-check in more than one way. Since interpretations follow the type-checking derivations, coherence theorems are required: that is, one must prove that the meaning of a program does not depend on the way it was type-checked. Proofs of such theorems for our proposed interpretation are the basic technical results of this paper. Interestingly, proving coherence in the presence of recursive types, variants, and abstract types forced us to reexamine fundamental equational properties that arise in proof theory (in the form of commutative reductions) and domain theory (in the form of strict vs. non-strict functions).

202 citations


Journal ArticleDOI
TL;DR: A denotational semantics for SCCS based on the domain of synchronization trees is given, and proved fully abstract with respect to bisimulation.
Abstract: Some basic topics in the theory of concurrency are studied from the point of view of denotational semantics, and particularly the “domain theory in logical form” developed by the author. A domain of synchronization trees is defined by means of a recursive domain equation involving the Plotkin powerdomain. The logical counterpart of this domain is described, and shown to be related to it by Stone duality. The relationship of this domain logic to the standard Hennessy-Milner logic for transition systems is studied; the domain logic can be seen as a rational reconstruction of Hennessy-Milner logic from the standpoint of a very general and systematic theory. Finally, a denotational semantics for SCCS based on the domain of synchronization trees is given, and proved fully abstract with respect to bisimulation.

188 citations


Journal ArticleDOI
TL;DR: A refinement of Kripke modal logic, and in particular of PDL (propositional dynamic logic), is proposed.
Abstract: We propose a refinement of Kripke modal logic, and in particular of PDL (propositional dynamic logic).

167 citations


Journal ArticleDOI
TL;DR: It is shown that this calculus does not have principal types, but does have finite complete sets of types, and it is concluded that type inference is decidable for object-oriented programs, even with multiple inheritance and classes as first-class values.
Abstract: We show that the type inference problem for a lambda calculus with records, including a record concatenation operator, is decidable. We show that this calculus does not have principal types, but does have finite complete sets of types: that is, for any term M in the calculus, there exists an effectively generable finite set of type schemes such that every typing for M is an instance of one of the schemes in the set. We show how a simple model of object-oriented programming, including hidden instance variables and multiple inheritance, may be coded in this calculus. We conclude that type inference is decidable for object-oriented programs, even with multiple inheritance and classes as first-class values.

154 citations


Journal ArticleDOI
Arnon Avron1
TL;DR: In this article, the notion of a simple consequence relation is taken to be fundamental and a general investigation of logic is provided, where the notion is more general than the usual one since we give up monotonicity and use multisets rather than sets.
Abstract: We provide a general investigation of logic in which the notion of a simple consequence relation is taken to be fundamental. Our notion is more general than the usual one since we give up monotonicity and use multisets rather than sets. We use our notion to characterize several known logics (including linear logic and non-monotonic logics) and for a general, semantics-independent classification of standard connectives via equations on consequence relations (these include Girard's “multiplicatives” and “additives”). We next investigate the standard methods for uniformly representing consequence relations: Hilbert type, Natural Deduction, and Gentzen type. The advantages and disadvantages of using each system and what should be taken as good representations in each case (especially from the implementation point of view) are explained. We end by briefly outlining (with examples) some methods for developing non-uniform, but still efficient, representations of consequence relations.

154 citations


Journal ArticleDOI
TL;DR: It is shown that polynomial time truth-table reducibility via Boolean circuits to SAT is the same as logspace truth-table reducibility via Boolean formulas to SAT, and the same as logspace Turing reducibility to SAT.
Abstract: We show that polynomial time truth-table reducibility via Boolean circuits to SAT is the same as logspace truth-table reducibility via Boolean formulas to SAT and the same as logspace Turing reducibility to SAT. In addition, we prove that a constant number of rounds of parallel queries to SAT is equivalent to one round of parallel queries. We give an oracle relative to which Δ₂^p is not equal to the class of predicates polynomial time truth-table reducible to SAT.

138 citations


Journal ArticleDOI
TL;DR: This paper offers an axiomatic characterization of the probabilistic relation X is independent of Y (written (X, Y)), where X and Y are two disjoint sets of variables, and a polynomial membership algorithm is developed to decide whether any given independence statement (X, Y) logically follows from a set Σ of such statements.
Abstract: This paper offers an axiomatic characterization of the probabilistic relation “X is independent of Y (written (X, Y))”, where X and Y are two disjoint sets of variables. Four axioms for (X, Y) are presented and shown to be complete. Based on these axioms, a polynomial membership algorithm is developed to decide whether any given independence statement (X, Y) logically follows from a set Σ of such statements, i.e., whether (X, Y) holds in every probability distribution that satisfies Σ. The complexity of the algorithm is O(|Σ| · k² + |Σ| · n), where |Σ| is the number of given statements, n is the number of variables in Σ ∪ {(X, Y)}, and k is the number of variables in (X, Y).
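The membership algorithm itself is not reproduced here, but the semantics of the statements it reasons about can be illustrated directly. A hedged Python sketch (the names `indep` and `marg` are mine, not the paper's) checks independence statements numerically on a toy distribution, exhibiting the symmetry and decomposition properties:

```python
from itertools import product

# Hedged illustration (not the paper's membership algorithm): checking
# independence statements (X, Y) numerically on a toy joint distribution.

def indep(p, A, B):
    """True iff the variable sets A and B are independent under p.
    p maps full assignments (tuples) to probabilities; A, B are index lists."""
    def marg(idx):
        m = {}
        for assign, pr in p.items():
            key = tuple(assign[i] for i in idx)
            m[key] = m.get(key, 0.0) + pr
        return m
    mA, mB, mAB = marg(A), marg(B), marg(A + B)
    return all(abs(mAB.get(a + b, 0.0) - mA[a] * mB[b]) < 1e-12
               for a in mA for b in mB)

# Construct p(x, y, z) = p(x) * p(y, z), so X is independent of {Y, Z}.
px = {0: 0.3, 1: 0.7}
pyz = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}
p = {(x, y, z): px[x] * pyz[(y, z)] for x, y, z in product([0, 1], repeat=3)}

assert indep(p, [0], [1, 2])  # (X, {Y, Z}) holds by construction
assert indep(p, [0], [1])     # decomposition: (X, Y) follows
assert indep(p, [1, 2], [0])  # symmetry
```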

114 citations


Journal ArticleDOI
TL;DR: The problem of scheduling a set of chains on m > 1 identical processors with the objectives of minimizing the makespan and the mean flow time is considered, answering the open question of whether this problem is strongly NP-hard for trees.
Abstract: We consider the problem of scheduling a set of chains on m > 1 identical processors with the objectives of minimizing the makespan and the mean flow time. We show that finding a nonpreemptive schedule with the minimum makespan is strongly NP-hard for each fixed m > 1, answering the open question of whether this problem is strongly NP-hard for trees. We also show that finding a nonpreemptive schedule with the minimum mean flow time is strongly NP-hard for each fixed m > 1, improving the known strong NP-hardness results for in-trees and out-trees. Finally, we generalize the result of McNaughton, showing that preemption cannot reduce the mean weighted flow time for a set of chains. The last two results together imply that finding a preemptive schedule with the minimum mean flow time is also strongly NP-hard for each fixed m > 1, answering another open question on the complexity of this problem for trees.

Journal ArticleDOI
TL;DR: The class of stratifiable databases and programs is extensively studied in this framework and the default logic approach to the declarative semantics of logical databases and Programs is compared with the other major approaches.
Abstract: Default logic is introduced as a well-suited formalism for defining the declarative semantics of deductive databases and logic programs. After presenting, in general, how to use default logic in order to define the meaning of logical databases and logic programs, the class of stratifiable databases and programs is extensively studied in this framework. Finally, the default logic approach to the declarative semantics of logical databases and programs is compared with the other major approaches. This comparison leads to showing some advantages of the default logic approach.

Journal ArticleDOI
TL;DR: It is proved that n integers drawn from a set {0, …, m−1} can be sorted on an Arbitrary CRCW PRAM in time O(log n/log log n + log log m) with a time-processor product of O(n log log m).
Abstract: We consider the problem of deterministic sorting of integers on a parallel RAM (PRAM). The best previous result (T. Hagerup, 1987, Inform. and Comput. 75, 39–51) states that n integers of size polynomial in n can be sorted in time O(log n) on a Priority CRCW PRAM with O(n log log n/log n) processors. We prove that n integers drawn from a set {0, …, m−1} can be sorted on an Arbitrary CRCW PRAM in time O(log n/log log n + log log m) with a time-processor product of O(n log log m). In particular, if m = n(log n)^O(1), the time and number of processors used are O(log n/log log n) and O(n(log log n)²/log n), respectively. This improves the previous result in several respects: the new algorithm is faster, it works on a weaker PRAM model, and it is closer to optimality for input numbers of superpolynomial size. If log log m = O(log n/log log n), the new algorithm is optimally fast, for any polynomial number of processors, and if log log m = (1 + Ω(1)) log log n and log log m = o(log n), it has optimal speedup relative to the fastest known sequential algorithm. The space needed is O(nm^ε), for arbitrary but fixed ε > 0. The sorting algorithm derives its speed from a fast solution to a special list ranking problem of possible independent interest, the monotonic list ranking problem. In monotonic list ranking, each list element has an associated key, and the keys are known to increase monotonically along the list. We show that monotonic list ranking problems of size n can be solved optimally in time O(log n/log log n). We also discuss and attempt to solve some of the problems arising in the precise description and implementation of parallel recursive algorithms. As part of this effort, we introduce a new PRAM variant, the allocated PRAM.
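For context on the sequential side of these bounds: sorting n integers from {0, …, m−1} is classically done by least-significant-digit radix sort. The sketch below is an illustration only (not the paper's PRAM algorithm); using base-n digits, m = n^O(1) needs only O(1) passes:

```python
# Illustration only (not the paper's PRAM algorithm): the classic sequential
# baseline for sorting n integers from {0, ..., m-1} is LSD radix sort.

def radix_sort(a, m):
    """Stable sort of integers in [0, m) via bucketing on base-n digits."""
    n = max(len(a), 2)                 # digit base (at least 2)
    exp = 1
    while exp < m:                     # one stable bucketing pass per digit
        buckets = [[] for _ in range(n)]
        for x in a:
            buckets[(x // exp) % n].append(x)
        a = [x for b in buckets for x in b]
        exp *= n
    return a

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66], 1000))
# [2, 24, 45, 66, 75, 90, 170, 802]
```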

Journal ArticleDOI
TL;DR: Stratified logic programs became the choice for the treatment of negation in the NAIL! system developed at Stanford University by Ullman and his co-workers.
Abstract: The study of negation in logic programming has been the topic of substantial research activity during the past several years, starting with the negation as failure semantics in Clark (1978), and Apt and van Emden (1982). More recently, a major direction of research has focused on the class of stratified logic programs, in which no predicate is defined recursively in terms of its own negation and which can be given natural semantics in terms of iterated fixpoints. Stratified logic programs were introduced and studied first by Chandra and Harel (1985), but soon attracted the interest of researchers from both database theory and artificial intelligence. Recent research work on stratified logic programs and their generalizations includes the papers by Apt, Blair, and Walker (1988), Van Gelder (1986), Lifschitz (1988), Przymusinski (1988), Apt and Pugin (1987), and others. At the same time, stratified logic programs became the choice for the treatment of negation in the NAIL! system developed at Stanford University by Ullman and his co-workers (cf. Morris

Journal ArticleDOI
TL;DR: It is shown that the degree structure of NPO allows intermediate degrees, that is, if P≠NP, there are problems which are neither complete nor belong to a lower class, and natural approximation preserving reductions are defined.
Abstract: We introduce a formal framework for studying approximation properties of NP optimization (NPO) problems. The classes we consider are those appearing in the literature, namely the class of problems approximable within a constant ε (APX), the class of problems having a Polynomial-time Approximation Scheme (PAS), and the class of problems having a Fully Polynomial-time Approximation Scheme (FPAS). We define natural approximation preserving reductions and obtain completeness results for these classes. A complete problem in a class cannot have stronger approximation properties unless P=NP. We also show that the degree structure of NPO allows intermediate degrees, that is, if P≠NP, there are problems which are neither complete nor belong to a lower class.

Journal ArticleDOI
TL;DR: This paper develops notation for strings of abstractors in typed lambda calculus, and shows how to treat them more or less as single abstractors.
Abstract: The paper develops notation for strings of abstractors in typed lambda calculus, and shows how to treat them more or less as single abstractors.

Journal ArticleDOI
TL;DR: A realizability model for a language including Girard's system F and an operator of recursion on types is given and some of its local properties are studied.
Abstract: Realizability structures play a major role in the metamathematics of intuitionistic systems and are a basic tool in the extraction of the computational content of constructive proofs. Moreover, their rich categorical structure and effectiveness properties provide a privileged mathematical setting for the semantics of data types of programming languages. In this paper we emphasize the modelling of recursive definitions of programs and types. A realizability model for a language including Girard's system F and an operator of recursion on types is given, and some of its local properties are studied.

Journal ArticleDOI
TL;DR: The main syntactical properties, notably the existence of principal type schemes, are proved to hold when recursive types are viewed as finite notations for infinite (regular) type expressions representing their infinite unfoldings.
Abstract: In this paper we study type inference systems for λ-calculus with a recursion operator over types. The main syntactical properties, notably the existence of principal type schemes, are proved to hold when recursive types are viewed as finite notations for infinite (regular) type expressions representing their infinite unfoldings. Exploiting the approximation structure of a model for the untyped language of terms, types are interpreted as limits of sequences of their approximations. We show that the interpretation is essentially unique and that two types have equal interpretation if and only if their infinite unfoldings are identical. Finally, a completeness theorem is proved to hold w.r.t. the specific model we consider for a natural (infinitary) extension of the type inference system.

Journal ArticleDOI
TL;DR: 1-way iterated pushdown automata form a proper hierarchy with respect to the number of iterations, and their emptiness problem is complete in deterministic iterated exponential time.
Abstract: An iterated pushdown is a pushdown of pushdowns of … of pushdowns. An iterated exponential function is 2 to the 2 to the … to the 2 to some polynomial. The main result presented here is that the nondeterministic 2-way and multi-head iterated pushdown automata characterize the deterministic iterated exponential time complexity classes. This is proved by investigating both nondeterministic and alternating auxiliary iterated pushdown automata, for which similar characterization results are given. In particular it is shown that alternation corresponds to one more iteration of pushdowns. These results are applied to the 1-way iterated pushdown automata: (1) they form a proper hierarchy with respect to the number of iterations, and (2) their emptiness problem is complete in deterministic iterated exponential time. Similar results are given for iterated stack (checking stack, nonerasing stack, nested stack, checking stack-pushdown) automata.

Journal ArticleDOI
TL;DR: A new logarithmic time parallel (PRAM) algorithm for computing the connected components of undirected graphs which uses a novel technique for approximate parallel scheduling, and implies logarithmic time optimal parallel algorithms for a number of other graph problems, including biconnectivity, Euler tours, strong orientation, and st-numbering.
Abstract: Part I of this paper presented a novel technique for approximate parallel scheduling and a new logarithmic time optimal parallel algorithm for the list ranking problem. In this part, we give a new logarithmic time parallel (PRAM) algorithm for computing the connected components of undirected graphs which uses this scheduling technique. The connectivity algorithm is optimal unless m = o(n log∗ n) in graphs of n vertices and m edges. (log^(k) denotes the kth iterate of the log function, and log∗ n denotes the least i such that log^(i) n ≤ 2.) Using known results, this new algorithm implies logarithmic time optimal parallel algorithms for a number of other graph problems, including biconnectivity, Euler tours, strong orientation, and st-numbering. Another contribution of the present paper is a parallel union/find algorithm.
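The paper's parallel union/find is PRAM-specific and beyond a short sketch, but its sequential counterpart for connected components is compact. The following is a hedged illustration using union by size with path compression:

```python
# Hedged sequential counterpart of the connected-components computation
# (not the paper's parallel algorithm): union-find with union by size
# and path compression.

def connected_components(n, edges):
    """Number of connected components of a graph on vertices 0..n-1."""
    parent = list(range(n))
    size = [1] * n

    def find(v):
        while parent[v] != v:          # walk to the root, halving the path
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if size[ra] < size[rb]:        # attach smaller tree under larger
            ra, rb = rb, ra
        parent[rb] = ra
        size[ra] += size[rb]

    for a, b in edges:
        union(a, b)
    return len({find(v) for v in range(n)})

print(connected_components(6, [(0, 1), (1, 2), (3, 4)]))  # 3
```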

Journal ArticleDOI
TL;DR: To show how to use the concrete nature of Scott's information systems to advantage in solving recursive domain equations, the method is based on the substructure relation between information systems, which essentially makes a complete partial order of information systems.
Abstract: This paper aims to make the following main contribution: to show how to use the concrete nature of Scott's information systems to advantage in solving recursive domain equations. The method is based on the substructure relation between information systems. This essentially makes a complete partial order (cpo) of information systems. Standard domain constructions like function space can be made continous on this cpo so the solution of recursive domain equations reduces to the more familiar construction of forming the least fixed-point of a continuous function.

Journal ArticleDOI
TL;DR: This paper shows that for fixed k, this probability of success is at least (k choose 2)/p + O(p^(−3/2)) as p → ∞, which leads to an Ω((log p)²/p) estimate of the success probability.
Abstract: Pollard's “rho” method for integer factorization iterates a simple polynomial map and produces a nontrivial divisor of n when two such iterates agree modulo this divisor. Experience and heuristic arguments suggest that a prime divisor p should be detected in O(√p) steps, but this has never been proved. Indeed, nothing seems to have been rigorously proved about the probability of success that would improve the obvious lower bound of 1/p. This paper shows that for fixed k, this probability is at least (k choose 2)/p + O(p^(−3/2)) as p → ∞. This leads to an Ω((log p)²/p) estimate of the success probability.
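The method analyzed above is easy to state as code. A minimal Python sketch of Pollard's rho with Floyd cycle detection (the starting value x0 and increment c are arbitrary choices, not from the paper):

```python
from math import gcd

# Minimal sketch of Pollard's rho: iterate x -> x^2 + c mod n and detect
# two iterates agreeing modulo a divisor via Floyd's tortoise-and-hare.

def pollard_rho(n, x0=2, c=1):
    x = y = x0
    d = 1
    while d == 1:
        x = (x * x + c) % n            # tortoise: one step
        y = (y * y + c) % n
        y = (y * y + c) % n            # hare: two steps
        d = gcd(abs(x - y), n)
    return d if d != n else None       # None: retry with different x0, c

print(pollard_rho(8051))  # 97 (8051 = 83 * 97)
```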

Journal ArticleDOI
TL;DR: The main technical result is that the functions representable in the finitely stratified polymorphic λ-calculus are precisely the super-elementary functions, i.e., the class e4 in Grzegorczyk's subrecursive hierarchy.
Abstract: We consider predicative type-abstraction disciplines based on type quantification with finitely stratified levels. These lie in the vast middle ground between quantifier-free parametric abstraction and full impredicative abstraction. Stratified polymorphism has an unproblematic set-theoretic semantics, and may lend itself to new approaches to type inference, without sacrificing useful expressive power. Our main technical result is that the functions representable in the finitely stratified polymorphic λ-calculus are precisely the super-elementary functions, i.e., the class e4 in Grzegorczyk's subrecursive hierarchy. This implies that there is no super-elementary bound on the length of optimal normalization sequences, and that the equality problem for finitely stratified polymorphic λ-expressions is not super-elementary. We also observe that finitely stratified polymorphism augmented with type recursion admits functional algorithms that are not typable in the full second order λ-calculus.

Journal ArticleDOI
TL;DR: A few-line pattern-matching algorithm is obtained, by three refinement steps from a brute-force algorithm, using the correctness proof of programs as a tool in the design of efficient algorithms.
Abstract: A few-line pattern-matching algorithm is obtained by using the correctness proof of programs as a tool in the design of efficient algorithms. The new algorithm is obtained from a brute force algorithm by three refinement steps. The first step leads to the algorithm of Knuth, Morris, and Pratt that performs 2n character comparisons in the worst case and (1 + α)n comparisons in the average case (0
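The first refinement step mentioned above arrives at the Knuth-Morris-Pratt algorithm. Here is a compact standard rendering (not the paper's own derivation) that performs at most 2n character comparisons over the text:

```python
# Standard formulation of Knuth-Morris-Pratt string matching (assumes a
# nonempty pattern); at most 2n character comparisons over the text.

def kmp_search(pattern, text):
    """Return all starting positions of pattern in text."""
    # failure[i]: length of the longest proper border of pattern[:i+1]
    failure = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = failure[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k

    hits, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = failure[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):          # full match ending at position i
            hits.append(i - k + 1)
            k = failure[k - 1]
    return hits

print(kmp_search("aba", "ababa"))  # [0, 2]
```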

Journal ArticleDOI
TL;DR: A new method for proving lower bounds on the complexity of branching programs and considers k-times-only branching programs, which have the additional restriction that the input bits are read k times, yet blockwise and in each block in the same order.
Abstract: We present a new method for proving lower bounds on the complexity of branching programs and consider k-times-only branching programs. While exponential and nearly exponential lower bounds on the complexity of one-time-only branching programs were proved for many problems, methods of proving lower bounds for k-times-only programs (k > 1) are still missing. We prove exponential lower bounds for k-times-only branching programs which have the additional restriction that the input bits are read k times, yet blockwise and in each block in the same order. This is done both for the algebraic decision problem POLY∗_{n,d} (n ∈ N prime, d ≤ n) of whether a given mapping g: F_n → F_n is a polynomial over F_n of degree at most d, and for the corresponding monotone problem over quadratic Boolean matrices. As a consequence we obtain a sharp bound of order Θ(n · log n) on the communication complexity of POLY∗_{n,δn} (δ ∈ (0, 1/2)).

Journal ArticleDOI
TL;DR: An ω-Set (realizability) model of ECC is described to show how its essential properties can be captured set-theoretically, and some hints are given on how to adequately formalize abstract mathematics.
Abstract: We present a higher-order calculus ECC which naturally combines Coquand-Huet's calculus of constructions and Martin-Löf's type theory with universes. ECC is very expressive, both for structured abstract reasoning and for program specification and construction. In particular, the strong sum types together with the type universes provide a useful module mechanism for abstract description of mathematical theories and adequate formalization of abstract mathematics. This allows comprehensive structuring of interactive development of specifications, programs and proofs. After a summary of the meta-theoretic properties of the calculus, an ω-Set (realizability) model of ECC is described to show how its essential properties can be captured set-theoretically. The model construction entails the logical consistency of the calculus and gives some hints on how to adequately formalize abstract mathematics. Theory abstraction in ECC is discussed as a pragmatic application.

Journal ArticleDOI
TL;DR: This synopsis compares the two computational models of Boolean circuits and arithmetic circuits in cases where they both apply, namely the computation of polynomials over the rational numbers or over finite fields.
Abstract: We compare the two computational models of Boolean circuits and arithmetic circuits in cases where they both apply, namely the computation of polynomials over the rational numbers or over finite fields. Over Q and finite fields, Boolean circuits can simulate arithmetic circuits efficiently with respect to size. Over finite fields of small characteristic, the two models are equally powerful when size is considered, but Boolean circuits are exponentially more powerful than arithmetic circuits with respect to depth. Most of the technical results given in this synopsis are taken from the literature.

Journal ArticleDOI
TL;DR: Every algorithm to compute f_T requires time T′ on almost every input if T′ is almost everywhere significantly smaller than T (T′ = o(T), typically).
Abstract: For each time bound T: {input strings} → {natural numbers} that is some machine's exact running time, there is a {0, 1}-valued function f_T that can be computed within time proportional to T, but that cannot be computed within any time bound T′ that is infinitely often significantly smaller than T (T′ ≠ Ω(T), typically). Equivalently, every algorithm to compute f_T requires time T′ on almost every input if T′ is almost everywhere significantly smaller than T (T′ = o(T), typically).

Journal ArticleDOI
TL;DR: This paper addresses the relationship between the Lyndon decomposition of a word x and a canonical rotation of x, i.e., a rotation w of x that is lexicographically smallest among all rotations of x.
Abstract: Any word can be decomposed uniquely into lexicographically nonincreasing factors, each one of which is a Lyndon word. This paper addresses the relationship between the Lyndon decomposition of a word x and a canonical rotation of x, i.e., a rotation w of x that is lexicographically smallest among all rotations of x. The main combinatorial result is a characterization of the Lyndon factor of x with which w must start. As an application, faster on-line algorithms for finding the canonical rotation(s) of x are developed by nontrivial extension of known Lyndon factorization strategies. Unlike their predecessors, the new algorithms lend themselves to incremental variants that compute, in linear time, the canonical rotations of all prefixes of x. The fastest such variant represents the main algorithmic contribution of the paper. It performs within the same 3∥x∥ character-comparisons bound as that of the fastest previous on-line algorithms for the canonization of a single string. This leads to the canonization of all substrings of a string in optimal quadratic time, within less than 3∥x∥² character comparisons and using linear auxiliary space.
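A simple baseline for the canonization problem above is the standard Duval/Booth-style least-rotation scan over x·x, which runs in linear time. The sketch below is that standard algorithm, not the paper's incremental 3∥x∥ variant:

```python
# Standard Lyndon-based least-rotation scan over s + s (linear time);
# a baseline for canonization, not the paper's incremental algorithm.

def least_rotation(s):
    """Index at which the lexicographically least rotation of s starts."""
    t = s + s
    i, ans = 0, 0
    while i < len(s):
        ans = i
        j, k = i + 1, i
        # Extend the comparison window while the candidate stays minimal.
        while j < len(t) and t[k] <= t[j]:
            k = i if t[k] < t[j] else k + 1
            j += 1
        while i <= k:                  # skip candidates proven non-minimal
            i += j - k
    return ans

s = "bba"
r = least_rotation(s)
print(r, s[r:] + s[:r])  # 2 abb
```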

Journal ArticleDOI
TL;DR: The notion of dynamic sampling is introduced, wherein the number of examples examined may increase with the complexity of the target concept, and this method is used to establish the learnability of various concept classes with an infinite Vapnik-Chervonenkis dimension.
Abstract: We consider the problem of learning a concept from examples in the distribution-free model by Valiant. (An essentially equivalent model, if one ignores issues of computational difficulty, was studied by Vapnik and Chervonenkis.) We introduce the notion of dynamic sampling, wherein the number of examples examined may increase with the complexity of the target concept. This method is used to establish the learnability of various concept classes with an infinite Vapnik-Chervonenkis dimension. We also discuss an important variation on the problem of learning from examples, called approximating from examples. Here we do not assume that the target concept T is a member of the concept class C from which approximations are chosen. This problem takes on particular interest when the VC dimension of C is infinite. Finally, we discuss the problem of computing the VC dimension of a finite concept set defined on a finite domain and consider the structure of classes of a fixed small dimension.
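The closing problem above — computing the VC dimension of a finite concept class on a finite domain — has an obvious brute-force solution. A hedged Python sketch (exponential in the domain size, for illustration only; the example class is mine):

```python
from itertools import combinations

# Hedged brute-force sketch (exponential in |domain|, illustration only) of
# computing the VC dimension of a finite concept class on a finite domain.

def vc_dimension(domain, concepts):
    """Largest d such that some d-subset of domain is shattered by concepts."""
    concepts = [frozenset(c) for c in concepts]
    dim = 0
    for d in range(1, len(domain) + 1):
        shattered = False
        for S in combinations(domain, d):
            traces = {frozenset(c & set(S)) for c in concepts}
            if len(traces) == 2 ** d:  # every subset of S is a trace
                shattered = True
                break
        if not shattered:
            return dim
        dim = d
    return dim

# Threshold concepts {0, ..., b-1} over points {1, 2, 3}: they shatter
# singletons but no pair, so the VC dimension is 1.
thresholds = [set(range(b)) for b in range(5)]
print(vc_dimension([1, 2, 3], thresholds))  # 1
```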