
Showing papers in "Information & Computation in 1982"


Journal ArticleDOI
TL;DR: A framework allowing a unified and rigorous definition of the semantics of concurrency is proposed, which introduces processes as elements of process domains which are obtained as solutions of domain equations in the sense of Scott and Plotkin.
Abstract: A framework allowing a unified and rigorous definition of the semantics of concurrency is proposed. The mathematical model introduces processes as elements of process domains which are obtained as solutions of domain equations in the sense of Scott and Plotkin. Techniques of metric topology as proposed, e.g., by Nivat are used to solve such equations. Processes are then used as meanings of statements in languages with concurrency. Three main concepts are treated, viz. parallelism (arbitrary interleaving of sequences of elementary actions), synchronization, and communication. These notions are embedded in languages which also feature classical sequential concepts such as assignment, tests, iteration or recursion, and guarded commands. In the definitions, a sequence of process domains of increasing complexity is used. The languages discussed include Milner's calculus for communicating systems and Hoare's communicating sequential processes. The paper concludes with a section with brief remarks on miscellaneous notions in concurrency, and two appendices with mathematical details.

323 citations


Journal ArticleDOI
TL;DR: It is demonstrated that such “write-once memories” can be “rewritten to a surprising degree” and an n-wit WOM is shown to have a “capacity” of up to n · log(n) bits.
Abstract: Storage media such as digital optical disks, PROMs, or paper tape consist of a number of “write-once” bit positions (wits); each wit initially contains a “0” that may later be irreversibly overwritten with a “1.” It is demonstrated that such “write-once memories” (WOMs) can be “rewritten” to a surprising degree. For example, only 3 wits suffice to represent any 2-bit value in a way that can later be updated to represent any other 2-bit value. For large k, 1.29… · k wits suffice to represent a k-bit value in a way that can be similarly updated. Most surprising, allowing t writes of a k-bit value requires only t + o(t) wits, for any fixed k. For fixed t, approximately k · t/log(t) wits are required as k → ∞. An n-wit WOM is shown to have a “capacity” (i.e., k · t when writing a k-bit value t times) of up to n · log(n) bits.
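The abstract's introductory example (3 wits holding a 2-bit value through one rewrite) can be made concrete. The codeword tables below are one standard choice for this construction; the function names and decoding-by-weight trick are our own framing, offered as an illustrative sketch.

```python
# Two generations of codewords for a 2-bit value in 3 write-once bits (wits).
# Wits flip only 0 -> 1; every second-generation codeword for value w lies
# componentwise above every first-generation codeword for any v != w, so the
# second write never needs a forbidden 1 -> 0 flip.
FIRST  = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}
SECOND = {0: (1, 1, 1), 1: (0, 1, 1), 2: (1, 0, 1), 3: (1, 1, 0)}

def decode(mem):
    # The generations are distinguishable by weight: 0-1 ones vs. 2-3 ones.
    table = FIRST if sum(mem) <= 1 else SECOND
    return next(v for v, word in table.items() if word == mem)

def first_write(value):
    return FIRST[value]

def second_write(mem, value):
    if decode(mem) == value:          # same value: nothing to flip
        return mem
    new = SECOND[value]
    assert all(a <= b for a, b in zip(mem, new)), "would need a 1 -> 0 flip"
    return new
```

Storing 2 bits twice in only 3 wits already beats the naive 4 wits; the paper generalizes this to 1.29… · k wits per k-bit value.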

316 citations


Journal ArticleDOI
TL;DR: It is proved that the satisfiability problem for propositional dynamic logic of looping and converse is elementarily decidable, and deterministic two-way automata on infinite trees are defined and it is shown how they can be simulated by nondeterministic one-way automata.
Abstract: Propositional dynamic logic is a formal system for reasoning about the before—after behavior of regular program schemes. An extension of propositional dynamic logic which includes both an infinite looping construct and a converse or backtracking construct is considered and it is proved that the satisfiability problem for this logic is elementarily decidable. In order to establish this result, deterministic two-way automata on infinite trees are defined, and it is shown how they can be simulated by nondeterministic one-way automata. The satisfiability problem for propositional dynamic logic of looping and converse is then reduced to the emptiness problem for these two-way automata.

221 citations


Journal ArticleDOI
TL;DR: It is shown first that for each nonnegative integer k there is a language L_k in NP that does not have O(n^k)-size uniform circuits, and it is noted that existence of “small circuits” is in suitable contexts equivalent to being reducible to sparse sets.
Abstract: As remarked in Cook (“Towards a Complexity Theory of Synchronous Parallel Computation,” Univ. of Toronto, 1980), a nonlinear lower bound on the circuit-size of a language in P or even in NP is not known. The best known published lower bound seems to be due to Paul (“Proceedings, 7th ACM Symposium on Theory of Computing,” 1975). In this paper it is shown first that for each nonnegative integer k there is a language L_k in Σ₂ ∩ Π₂ (of the Meyer and Stockmeyer (“Proceedings, 13th IEEE Symposium on Switching and Automata Theory,” 1972) hierarchy) which does not have O(n^k)-size circuits. Using the same techniques, one is able to prove several similar results. For example, it is shown that for each nonnegative integer k, there is a language L_k in NP that does not have O(n^k)-size uniform circuits. This follows as a corollary of a stronger result shown in the paper. This result, like the others to follow, is not provable by direct diagonalization. It thus points to the most interesting feature of the techniques used here: by using the polynomial-time hierarchy, one is able to prove results about NP that seemingly cannot be proved by direct diagonalization. Finally, it is noted that existence of “small circuits” is in suitable contexts equivalent to being reducible to sparse sets. Using this, one is able to prove, for example, that for any time-constructible superpolynomial function f(n), NTIME(f(n)) contains a language which is not many-one p-time reducible to any sparse set.

189 citations


Journal ArticleDOI
TL;DR: An elementary, purely algebraic definition of model for the untyped lambda calculus is given and shown to be equivalent to the natural semantic definition based on environments, yielding a completeness theorem for the standard axioms for lambda convertibility.
Abstract: An elementary, purely algebraic definition of model for the untyped lambda calculus is given. This definition is shown to be equivalent to the natural semantic definition based on environments. These definitions of model are consistent with, and yield a completeness theorem for, the standard axioms for lambda convertibility. A simple construction of models for lambda calculus is reviewed. The algebraic formulation clarifies the relation between combinators and lambda terms.

184 citations


Journal ArticleDOI
TL;DR: An oracle is constructed relative to which UNIQUE SAT is not complete for DIF^p, and another oracle relative to which UNIQUE SAT is complete for DIF^p whereas NP ≠ co-NP.
Abstract: UNIQUE SAT is the problem of deciding whether a given Boolean formula has exactly one satisfying truth assignment. This problem is a typical (moreover, complete) representative of a natural class of problems about unique solutions. All these problems belong to the class DIF^p = {L1 − L2 : L1, L2 ∈ NP} studied by Papadimitriou and Yannakakis. We consider the relationship between these two classes, particularly whether UNIQUE SAT is DIF^p-complete: it is if NP = co-NP. We construct an oracle relative to which UNIQUE SAT is not complete for DIF^p, and another oracle relative to which UNIQUE SAT is complete for DIF^p whereas NP ≠ co-NP.
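For readers unfamiliar with the problem, a brute-force sketch of the UNIQUE SAT decision (exponential in the number of variables, so for illustration only; the CNF encoding with signed integer literals is our convention, not the paper's):

```python
from itertools import product

def unique_sat(clauses, n_vars):
    """True iff exactly one of the 2**n_vars assignments satisfies every clause.

    A clause is a list of literals; literal +i means variable i is true,
    literal -i means variable i is false (variables are numbered from 1).
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
            if count > 1:          # more than one model: not unique
                return False
    return count == 1

# (x1) AND (x1 OR x2) AND (NOT x2): only x1=True, x2=False satisfies it.
print(unique_sat([[1], [1, 2], [-2]], 2))   # True
print(unique_sat([[1, 2]], 2))              # False: three satisfying assignments
```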

174 citations


Journal ArticleDOI
TL;DR: The temporal propositional logic of linear time is generalized to an uncertain world, in which random events may occur, and three different axiomatic systems are proposed and shown complete for general models, finite models, and models with bounded transition probabilities, respectively.
Abstract: The temporal propositional logic of linear time is generalized to an uncertain world, in which random events may occur. The formulas do not mention probabilities explicitly, i.e., the only probability appearing explicitly in formulas is probability one. This logic is claimed to be useful for stating and proving properties of probabilistic programs. It is convenient for proving those properties that do not depend on the specific distribution of probabilities used in the program's random draws. The formulas describe properties of execution sequences. The models are stochastic systems, with state transition probabilities. Three different axiomatic systems are proposed and shown complete for general models, finite models, and models with bounded transition probabilities, respectively. All three systems are decidable, by the results of Rabin (Trans. Amer. Math. Soc. 141 (1969), 1–35).

145 citations


Journal ArticleDOI
TL;DR: An explicit solution not using authentication for n = 3t + 1 processes is given, using 2t + 3 rounds and O(t^3 log t) message bits.
Abstract: Byzantine Agreement involves a system of n processes, of which some t may be faulty. The problem is for the correct processes to agree on a binary value sent by a transmitter that may itself be one of the n processes. If the transmitter sends the same value to each process, then all correct processes must agree on that value, but in any case, they must agree on some value. An explicit solution not using authentication for n = 3t + 1 processes is given, using 2t + 3 rounds and O(t^3 log t) message bits. This solution is easily extended to the general case of n ⩾ 3t + 1 to give a solution using 2t + 3 rounds and O(nt + t^3 log t) message bits.

130 citations


Journal ArticleDOI
TL;DR: Gold (1967) has provided specifications of (a)–(c) that have played major roles in subsequent evaluation of theories of natural language.
Abstract: A language is called natural just in case it can be internalized by human infants on the basis of the kind of casual linguistic exposure typically afforded the young. A theory of natural language will specify (a) the kind of linguistic input available to children, (b) the process by which children convert that experience into successive hypotheses about the input language, and (c) the criteria for "internalization of a language" to which children ultimately conform. From (a)–(c) it should be possible to deduce (d) the class of languages that can be internalized in the sense of (c) by the learning mechanism specified in (b) operating on linguistic input of the kind characterized in (a). Such a theory is correct only if (d) contains exactly the natural languages. Wexler and his associates (Hamburger and Wexler, 1973; Wexler and Culicover, 1980, Chap. 1) provide detailed discussion of theories of natural language in the present sense. Gold (1967) has provided specifications of (a)–(c) that have played major roles in subsequent evaluation of theories of natural language. In Gold's model, linguistic input is construed as an enumeration of the sentences of the target language, arranged in arbitrary order; the process embodied by the human language learner is assumed to be "mechanical" in the sense of realizing a computable function of some sort; and the learner is credited with the capacity to acquire a language L just in case for every order of presen-

121 citations


Journal ArticleDOI
TL;DR: Consider the class of protocols, for two participants, in which the initiator applies a sequence of operators to a message M and sends it to the other participant; in each step, one of the participants applies a sequence of operators to the message received last and sends it back.
Abstract: Consider the class of protocols, for two participants, in which the initiator applies a sequence of operators to a message M and sends it to the other participant; in each step, one of the participants applies a sequence of operators to the message received last, and sends it back. This “ping-pong” action continues several times, using sequences of operators as specified by the protocol. The set of operators may include public-key encryptions and decryptions.

109 citations


Journal ArticleDOI
TL;DR: The present article was directly stimulated by my hearing Kleene's thoroughly delightful talk "Origins of Recursive Function Theory," which is now available in printed form (Kleene, 1981), and further stimulated by the appearance of Webb (1980), a provocative philosophical and historical study of Church's thesis at an unusually deep level.
Abstract: To celebrate the occasion of the twentieth anniversary meeting on Foundations of Computer Science, in October 1979, it was held at a very special location, San Juan, Puerto Rico, and three distinguished pioneers of theoretical computer science, Sheila Greibach, Juris Hartmanis, and Stephen C. Kleene were invited to give addresses on the history of the field. The present article was directly stimulated by my hearing Kleene's thoroughly delightful talk "Origins of Recursive Function Theory," which is now available in printed form (Kleene, 1981). It was my great good fortune to have been, during the late 1940s, a student of two of the most important early workers in the field of recursive function theory, Alonzo Church and Emil Post. Later, I edited an anthology (Davis, 1965) of basic papers in the field and marvelled at the richness of the interactions among the remarkable community of logicians that historical crosscurrents had brought to the East coast of the United States, and especially to Princeton, New Jersey, in the 1930s. It is truly remarkable (Gödel, 1946, speaks of a "kind of miracle") that it has proved possible to give a precise mathematical characterization of the class of processes that can be carried out by purely mechanical means. It is in fact the possibility of such a characterization that underlies the ubiquitous applicability of digital computers. In addition it has made it possible to prove the algorithmic unsolvability of important problems, has provided a key tool in mathematical logic, has made available an array of fundamental models in theoretical computer science, and has been the basis of a rich new branch of mathematics. Kleene's account, which is particularly valuable because he is able to write as one of the key participants in the unfolding drama, restimulated my interest in the early history of these ideas.
Another source of stimulation was the appearance of Webb (1980), a provocative philosophical and historical study of Church's thesis at an unusually deep level. I am very grateful for the extremely helpful criticisms, corrections, and new historical material provided by Kleene after reading a preliminary version of this article, although, of course, responsibility for the opinions expressed is entirely my own.

Journal ArticleDOI
TL;DR: This paper investigates the basic properties of pictures and picture description languages from the formal language theory point of view.
Abstract: A picture is a set of unit lines from the Cartesian plane considered as a square grid. A word over the alphabet l, r, u, d is a picture description in the sense that it represents a traversal of a picture, where the interpretation of the symbols l, r, u, d is: l, go one unit line to the left of the current point; r, go one unit line to the right of the current point; u, go one unit line up from the current point; and d, go one unit line down from the current point. A set of picture descriptions forms a picture description language. This paper investigates the basic properties of pictures and picture description languages from the formal language theory point of view.
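The traversal semantics above can be transcribed directly. Representing each unit line as the unordered pair of its endpoints (an implementation choice of ours) makes a picture a plain set, so direction of traversal and retracing do not matter:

```python
# Unit moves for the four description symbols.
MOVES = {'l': (-1, 0), 'r': (1, 0), 'u': (0, 1), 'd': (0, -1)}

def picture(word, start=(0, 0)):
    """Return the set of unit lines traversed by a word over {l, r, u, d}.

    Each line is a frozenset of its two endpoints, so {p, q} == {q, p}.
    """
    lines = set()
    x, y = start
    for c in word:
        dx, dy = MOVES[c]
        nx, ny = x + dx, y + dy
        lines.add(frozenset({(x, y), (nx, ny)}))
        x, y = nx, ny
    return lines

# Distinct descriptions can denote the same picture: both of these trace
# the unit square, and retracing a line ("udud") adds nothing new.
print(picture("urdl") == picture("ruld"))   # True
print(len(picture("udud")))                 # 1
```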

Journal ArticleDOI
TL;DR: A unified categorical description of both λ-algebras and λ-models is given, which gives a convincing argument that the two kinds of models form a natural class of interpretations of the λ-calculus.
Abstract: In 1969 Scott constructed "mathematical" models for the λ-calculus; see Scott (1972). It took some time, however, before a general definition of the notion of a λ-calculus model was given. This was done independently in Barendregt (1977, 1981), Berry (1981), Hindley and Longo (1980), Meyer (1980), Obtulowicz (1979), and Scott (1980). All of these definitions except Berry's are reviewed in Cooperstock (1981). There seemed to be some disagreement on the notion of a λ-calculus model. Barendregt introduced two classes of models, viz. the λ-algebras and the λ-models. Berry's models coincide essentially with the λ-algebras, whereas the models of Hindley–Longo, Meyer, Obtulowicz, and Scott all coincide with the λ-models. Barendregt was inspired by proof-theoretic considerations (ω-incompleteness, see Plotkin, 1974) for introducing both λ-algebras and λ-models. He did this both in a syntactical and a first-order way. We will replace his syntactical method by the so-called environment models. (These are in fact also syntactical but somewhat easier to handle.) Moreover, inspired by Berry (1981) and Meyers (1974) (for the typed λ-calculus) we give a unified categorical description of both λ-algebras and λ-models. By methods taken from Scott (1980), it will be proved that the structures thus obtained consist of all λ-algebras and λ-models. The categorical description gives a convincing argument that the two kinds of models form a natural class of interpretations of the λ-calculus. In the meantime there seemed to have formed a consensus about the need for both λ-algebras and λ-models. The revised version Meyer (1981) includes also λ-algebras. Scott (1980) constructs Cartesian closed categories (ccc's) from λ-theories; but this construction essentially goes via a λ-algebra (λ-theory → term model (which is a λ-algebra) → ccc).
We prefer this way of describing Scott's construction, because different λ-algebras may have the same theory, but yet different ccc's. Now we will give a short description of the three ways of introducing the

Journal ArticleDOI
TL;DR: The structure of NP ≤-degrees is similar to that of the r.e. degrees, and the reducibilities formulated by Cook (1971) and Karp (1973) are just the restrictions to polynomial time of Turing and many-one reducibility, respectively.
Abstract: where R is a relation in P and, for some polynomial p, y ranges over words of length not exceeding p(|x|). This same analogy to the classical characterization of the recursively enumerable sets extends to the polynomial hierarchy, which of course is the natural analogue of the arithmetical hierarchy. Even Wrathall's result (Wrathall, 1976) that Bω (a well-known complete set in PSPACE, see Stockmeyer (1976)) is not in PH, if the hierarchy is proper, is reminiscent of Tarski's theorem that truth is not arithmetic. Efficient reducibilities are used to classify decidable problems in much the same way that effective reducibilities are used to classify undecidable problems. The reducibilities formulated by Cook (1971) and by Karp (1973) are just the restrictions to polynomial time of Turing and many-one reducibilities, respectively. These are denoted ≤T and

Journal ArticleDOI
TL;DR: Some of the properties of codes capable of detecting errors when used on a binary asymmetric channel are examined and in fact the maximum cardinality codes are determined.
Abstract: Some of the properties of codes capable of detecting errors when used on a binary asymmetric (or Z) channel are examined, and in fact the maximum cardinality codes are determined. These results are extended to the q-ary asymmetric channel introduced by Varshamov (1973, IEEE Trans. Inform. Theory 19, 92–95).
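A small illustration of error detection on the Z channel, where (in one common convention) only 1 → 0 transitions occur, so a received word is always componentwise below the sent codeword. A code then detects every asymmetric error pattern exactly when no codeword lies componentwise below another. The antichain framing and the toy codes here are our own sketch, not quoted from the paper:

```python
from itertools import combinations

def covers(a, b):
    """True iff codeword b is reachable from a by 1 -> 0 errors only (b <= a)."""
    return all(y <= x for x, y in zip(a, b))

def detects_all_asymmetric_errors(code):
    # The code detects every asymmetric error iff it is an antichain
    # under the componentwise order.
    return not any(covers(a, b) or covers(b, a)
                   for a, b in combinations(code, 2))

# Constant-weight codewords are pairwise incomparable, hence always detect.
constant_weight = [(1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 0, 1)]
print(detects_all_asymmetric_errors(constant_weight))                # True
print(detects_all_asymmetric_errors([(1, 1, 0, 0), (1, 0, 0, 0)]))   # False
```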

Journal ArticleDOI
TL;DR: It is shown that the derived matrix recurrence relation converges to the solution of a linear system involving the transition matrix, even when the transition matrix has eigenvalues with multiplicity greater than one.
Abstract: A fringe analysis method based on a new way of describing the composition of a fringe in terms of tree collections is presented. It is shown that the derived matrix recurrence relation converges to the solution of a linear system involving the transition matrix, even when the transition matrix has eigenvalues with multiplicity greater than one. As a consequence, bounds and some exact results are obtained on the expected number of splits per insertion and on the expected depth of the deepest safe node in 2–3 trees and B-trees, and on the expected height of 2–3 trees; improvements of the bounds on the expected number of nodes in 2–3 trees are also derived. Bounds and some exact results for 2–3 trees and B-trees using an overflow technique are obtained.

Journal ArticleDOI
TL;DR: It is shown that deciding whether two distant agents can arrive at compatible decisions without any communication can be done in polynomial time if there are two possible decisions for each agent, but is NP-complete if one agent has three or more alternatives.
Abstract: The complexity of two problems of distributed computation and decision-making is studied. It is shown that deciding whether two distant agents can arrive at compatible decisions without any communication can be done in polynomial time if there are two possible decisions for each agent, but is NP-complete if one agent has three or more alternatives. It is also shown that minimizing the amount of communication necessary for the distributed computation of a function, when two distant computers receive each a part of the input, is NP-complete. This proves a conjecture due to A. Yao.

Journal ArticleDOI
TL;DR: The purpose here is to use a connection between the semantics of predicate logic as a programming language and the well-studied theory of inductive definability to give a measure of the incompleteness of the negation as failure rule for proving Herbrand valid negations of formulas, and to show that this rule is very highly incomplete in the sense of the measure.
Abstract: The coupling of resolution techniques for automatic theorem proving with what is termed the procedural interpretation of logic (Kowalski, 1979) has resulted in efforts to implement predicate logic as a programming language. These efforts have already resulted in the language PROLOG. The semantics of predicate logic as a programming language have been formulated with an orientation toward the procedural interpretation of logic in van Emden and Kowalski (1976) and Apt and van Emden (1980). The predicate logic core of PROLOG is a restriction to universally quantified Horn sentences with atomic conclusions, so-called definite clauses, and therefore prevents the user from fully expressing logical negation. In the procedural interpretation of logic, a logic program is regarded as its "if and only if" version, of which half is explicitly presented. The precise definition of this "if and only if" version is given in Clark (1980). The interpretations of a logic program are then restricted to the Herbrand models of the "if and only if" version of the program, and the formula F is Herbrand valid in logic program P iff F is valid in all such Herbrand interpretations of P. Although the user does not have negation available, the control features of PROLOG do allow the user a useful but still limited form of negation: to infer ¬A from a proof of the unprovability of A. This is sound, as we shall see, in the procedural interpretation of logic. Our purpose here is to use a connection between the semantics of predicate logic as a programming language and the well-studied theory of inductive definability to give a measure of the incompleteness of the negation as failure rule for proving Herbrand valid negations of formulas, and then to show that the negation as failure rule is very highly incomplete in the sense of the measure. Moreover, the general problem of deciding whether a formula is Herbrand

Journal ArticleDOI
TL;DR: There exists an algorithm for deciding whether or not an arbitrary regular language is of star height one.
Abstract: There exists an algorithm for deciding whether or not an arbitrary regular language is of star height one

Journal ArticleDOI
TL;DR: It can be concluded that a parallel RAM requires at least Ω(log log n) steps to compute f, and a function achieving the Ω(log n) bound on critical dimensions is presented.
Abstract: A function f : {0, 1}^n → {0, 1} is said to depend on dimension i iff there exists an input vector x such that f(x) differs from f(x^i), where x^i agrees with x in every dimension except i. In this case x is said to be critical for f with respect to i. Function f is called nondegenerated iff it depends on all n dimensions. The main result of this paper is that for each nondegenerated function f : {0, 1}^n → {0, 1} there exists an input vector x which is critical with respect to at least Ω(log n) dimensions. A function achieving this bound is presented. Together with earlier results from Cook and Dwork (“Proceedings, 14th ACM Symp. on Theory of Computing,” 1982) and Reischuk (IBM Research Report No. RJ 3431, 1982) it can be concluded that a parallel RAM requires at least Ω(log log n) steps to compute f.
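The definition of a critical input is easy to check by brute force. The helper below counts, for a given x, the dimensions in which flipping a single bit changes f; the use of OR as the example function is our own illustration:

```python
def critical_dims(f, x):
    """Dimensions i where flipping x[i] changes f(x)."""
    n = len(x)
    return [i for i in range(n)
            if f(x) != f(tuple(b ^ (j == i) for j, b in enumerate(x)))]

f_or = lambda x: int(any(x))   # OR depends on all n dimensions (nondegenerated)

# The all-zeros input is critical in every dimension, far above the
# Omega(log n) guarantee; an input with two ones is critical in none.
print(critical_dims(f_or, (0, 0, 0, 0)))   # [0, 1, 2, 3]
print(critical_dims(f_or, (1, 1, 0, 0)))   # []
```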

Journal ArticleDOI
TL;DR: The following fundamental theorem about the adequacy of the algebraic specification methods for data abstractions is proved: A is computable if, and only if, A possesses an equational specification which defines it under initial and final algebra semantics simultaneously.
Abstract: The following fundamental theorem about the adequacy of the algebraic specification methods for data abstractions is proved. Let A be a data type with n subtypes. Then A is computable if, and only if, A possesses an equational specification, involving at most 3(n + 1) hidden operators and 2(n + 1) axioms, which defines it under initial and final algebra semantics simultaneously.

Journal ArticleDOI
Wolfgang J. Paul1
TL;DR: On-line simulation of real-time (k + 1)-tape Turing machines by k-tape Turing machines requires time n(log n)^(1/(k+1)).
Abstract: On-line simulation of real-time (k + 1)-tape Turing machines by k-tape Turing machines requires time n(log n)^(1/(k+1)).

Journal ArticleDOI
Stathis Zachos1
TL;DR: It is shown that many definitions that arise naturally from different types of algorithms are equivalent in defining the same class R (or ZPP, respectively).
Abstract: Various types of probabilistic algorithms play an increasingly important role in computer science, especially in computational complexity theory. Probabilistic polynomial time complexity classes are defined and compared to each other, emphasizing some structural relationships to the known complexity classes P, NP, PSPACE. The classes R and ZPP, corresponding to the so-called Las Vegas polynomial time bounded algorithms, are given special attention. It is shown that many definitions that arise naturally from different types of algorithms are equivalent in defining the same class R (or ZPP, respectively). These robustness results finally justify the tractability of the above probabilistic classes.

Journal ArticleDOI
TL;DR: The assumption of orientability is shown not to be necessary in the case of 6-connectedness and, unexpectedly, it is shown that the property oforientability is not symmetric with respect to the two types of connectedness.
Abstract: This is a continuation of a series of papers on the digital geometry of three-dimensional images. In an earlier paper by Morgenthaler and Rosenfeld, a three-dimensional analog of the two-dimensional Jordan curve theorem was established. This was accomplished by defining simple surface points under the symmetric consideration of 6-connectedness and 26-connectedness and by characterizing a simple closed surface as a connected collection of “orientable” simple surface points. The necessity of the assumption of orientability, a condition of often prohibitive computational cost to establish, was the major unresolved issue of that paper. In this paper, the assumption is shown not to be necessary in the case of 6-connectedness and, unexpectedly, it is shown that the property of orientability is not symmetric with respect to the two types of connectedness.

Journal ArticleDOI
TL;DR: It is shown that for every context-free language L there effectively exists a test set F, that is, a finite subset F of L such that, for any pair (g, h) of morphisms, g(x) = h(x) for all x in F implies g(x) = h(x) for all x in L; this implies that every algebraic system of equations is equivalent to a finite subsystem.
Abstract: It is shown that for every context-free language L there effectively exists a test set F, that is, a finite subset F of L such that, for any pair (g, h) of morphisms, g(x) = h(x) for all x in F implies g(x) = h(x) for all x in L. This result was claimed earlier but a detailed correct proof is given here. Together with very recent results on systems of equations over a free monoid this result implies that every algebraic system of equations is equivalent to a finite subsystem.
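The test-set property can be exercised on a toy instance (the particular morphisms and language below are our own illustration, not from the paper): a morphism is determined by its images of single letters, extended homomorphically, so agreement on a test word propagates to all of its powers.

```python
def morphism(images):
    """Extend a letter-to-word map homomorphically to whole words."""
    return lambda w: ''.join(images[c] for c in w)

g = morphism({'a': '01', 'b': '0'})
h = morphism({'a': '0', 'b': '10'})

# g and h differ on single letters, yet agree on the word "ab" -- and hence,
# by the homomorphism property, on every word of L = (ab)+.  So for this
# pair, F = {"ab"} already acts as a test set for L.
print(g('a'), h('a'))               # 01 0
print(g('ab') == h('ab'))           # True
print(g('ababab') == h('ababab'))   # True
```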

Journal ArticleDOI
TL;DR: Two extensions of propositional dynamic logic for dealing with infinite computations, LPDL and RPDL, are compared in expressive power; while repeat can be used to encode loop, so that LPDL ⩽ RPDL, it is shown here that the converse fails.
Abstract: Two extensions of propositional dynamic logic for dealing with infinite computations, LPDL and RPDL, are compared in expressive power. The first is obtained by adding the assertion loop(α) for any program α, meaning “α contains an infinite computation,” and the second by adding repeat(α), meaning “α can be repeated indefinitely.” While repeat can be used to encode loop, and hence LPDL ⩽ RPDL, it is shown here that the converse fails. Thus LPDL is strictly weaker in expressive power than RPDL.

Journal ArticleDOI
TL;DR: The principal results imply that nearly minimal size programs can be inferred (in the limit) without loss of inferring power provided the authors are willing to tolerate a finite, but not uniformly bounded, number of anomalies in the synthesized programs.
Abstract: Inductive inference machines are algorithmic devices which attempt to synthesize (in the limit) programs for a function while they examine more and more of the graph of the function. There are many possible criteria of success. We study the inference of nearly minimal size programs. Our principal results imply that nearly minimal size programs can be inferred (in the limit) without loss of inferring power provided we are willing to tolerate a finite, but not uniformly bounded, number of anomalies in the synthesized programs. On the other hand, there is a severe reduction of inferring power in inferring nearly minimal size programs if the maximum number of anomalies allowed is any uniform constant. We obtain a general characterization of the classes of recursive functions which can be synthesized by inferring nearly minimal size programs with anomalies. We also obtain similar results for Popperian inductive inference machines. The exact tradeoffs between mind change bounds on inductive inference machines and anomalies in synthesized programs are obtained. The techniques of recursive function theory, including the recursion theorem, are employed.

Journal ArticleDOI
TL;DR: The first part of this paper investigates the relationship between the classes of sets accepted by space-bounded and finitely leaf-size bounded three-way two-dimensional alternating Turing machines and the classes of sets which are finite intersections of sets accepted by space-bounded three-way two-dimensional nondeterministic Turing machines; the second part investigates the accepting power and closure properties of two-dimensional alternating Turing machines with only universal states.
Abstract: Several properties of two-dimensional alternating Turing machines are investigated. The first part of this paper investigates the relationship between the classes of sets accepted by space-bounded and finitely leaf-size bounded three-way two-dimensional alternating Turing machines and the classes of sets which are finite intersections of sets accepted by space-bounded three-way two-dimensional nondeterministic Turing machines. The second part of this paper investigates the accepting power and closure properties (under Boolean operations) of two-dimensional alternating Turing machines with only universal states.

Journal ArticleDOI
TL;DR: If n − 1 is an odd composite integer then there are at least 2^((1/2)√n) pairwise inequivalent binary error-correcting codes of length 2^n, size 2^(2n), and minimum distance 2^(n−1) − 2^((1/2)n−1).
Abstract: If n − 1 is an odd composite integer then there are at least 2^((1/2)√n) pairwise inequivalent binary error-correcting codes of length 2^n, size 2^(2n), and minimum distance 2^(n−1) − 2^((1/2)n−1).

Journal ArticleDOI
TL;DR: The Lehmann-Smith least fixpoint approach to recursively-defined data types is extended by introducing the dual notion of greatest fixpoint, which allows the definition of infinite lists and trees without recourse to domains bearing a partial order structure.
Abstract: Data types may be considered as objects in any suitable category, and need not necessarily be ordered structures or many-sorted algebras. Arrays may be specified having as parameter any object from a category 𝒦 with finite products and coproducts, if products distribute over coproducts. The Lehmann–Smyth least fixpoint approach to recursively-defined data types is extended by introducing the dual notion of greatest fixpoint, which allows the definition of infinite lists and trees without recourse to domains bearing a partial order structure. Finally, the least fixpoint approach is shown to allow the definition of queues directly in terms of stacks, rather than through a separate equational specification.