
Showing papers on "Computability published in 1994"


Book ChapterDOI
02 Jan 1994
TL;DR: Differentially uniform mappings as discussed by the authors also have desirable cryptographic properties: large distance from affine functions, high nonlinear order, and efficient computability, making them suitable round functions for DES-like ciphers.
Abstract: This work is motivated by the observation that in DES-like ciphers it is possible to choose the round functions in such a way that every non-trivial one-round characteristic has small probability. This gives rise to the following definition. A mapping is called differentially uniform if for every non-zero input difference and any output difference the number of possible inputs has a uniform upper bound. The examples of differentially uniform mappings provided in this paper also have other desirable cryptographic properties: large distance from affine functions, high nonlinear order and efficient computability.
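To make the definition concrete, here is a minimal Python sketch (my illustration, not code from the paper) that computes the maximum entry of an S-box's difference distribution table; the mapping is differentially δ-uniform when this maximum is at most δ. The 3-bit S-box used below is hypothetical.

```python
def differential_uniformity(sbox):
    """Return the max over non-zero input differences a and all output
    differences b of |{x : sbox[x ^ a] ^ sbox[x] == b}|."""
    n = len(sbox)
    worst = 0
    for a in range(1, n):                  # every non-zero input difference
        counts = {}
        for x in range(n):
            b = sbox[x ^ a] ^ sbox[x]      # resulting output difference
            counts[b] = counts.get(b, 0) + 1
        worst = max(worst, max(counts.values()))
    return worst

# Hypothetical 3-bit S-box (a permutation of 0..7); differences taken as XOR.
sbox = [0, 1, 3, 6, 7, 4, 5, 2]
print(differential_uniformity(sbox))       # the smaller, the more uniform
```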

859 citations


Journal Article
TL;DR: The examples of differentially uniform mappings provided in this paper also have other desirable cryptographic properties: large distance from affine functions, high nonlinear order and efficient computability.
Abstract: This work is motivated by the observation that in DES-like ciphers it is possible to choose the round functions in such a way that every non-trivial one-round characteristic has small probability. This gives rise to the following definition. A mapping is called differentially uniform if for every non-zero input difference and any output difference the number of possible inputs has a uniform upper bound. The examples of differentially uniform mappings provided in this paper also have other desirable cryptographic properties: large distance from affine functions, high nonlinear order and efficient computability.

148 citations


Journal ArticleDOI
TL;DR: It is shown that similar systems in dimension two are also capable of universal computation, whereas in dimension one it is necessary to resort to more complex systems to retain this capability.

144 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that no finite "system", in the broadest sense of the term, can potentially cover the whole mathematical field, and that mathematicians will always be able to discover new playing fields.

104 citations


Proceedings ArticleDOI
23 May 1994
TL;DR: It is shown that a synchronization problem has a wait-free solution if and only if its input complex can be continuously “stretched and folded” to cover its output complex.
Abstract: In modern shared-memory multiprocessors, processes can be halted or delayed without warning by interrupts, pre-emption, or cache misses. In such environments, it is desirable to design synchronization protocols that are wait-free: any process that continues to run will finish the protocol in a fixed number of steps, regardless of delays or failures by other processes. Not all synchronization problems have wait-free solutions. In this paper, we give a new, remarkably simple necessary and sufficient combinatorial condition characterizing the problems that have wait-free solutions using shared read/write memory. We associate the range of possible input and output values for any synchronization problem with a high-dimensional geometric structure called a simplicial complex. We show that a synchronization problem has a wait-free solution if and only if its input complex can be continuously "stretched and folded" to cover its output complex. The key to the new theorem is a novel "simplex agreement" protocol, allowing processes to converge asynchronously to a common simplex of a simplicial complex. The proof exploits a number of classical results from algebraic and combinatorial topology.
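A standard illustration of the criterion (my example; the abstract does not spell it out) is binary consensus:

```latex
\mathcal{I} = \text{all assignments of } 0/1 \text{ to processes (connected)}, \qquad
\mathcal{O} = \text{``all decide } 0\text{''} \;\sqcup\; \text{``all decide } 1\text{''} \text{ (disconnected)}.
```

A continuous map sends a connected complex to a connected image, so no "stretching and folding" of the input complex can cover both components of the output complex while respecting validity; hence consensus has no wait-free read/write solution.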

71 citations


Proceedings ArticleDOI
20 Nov 1994
TL;DR: It is proved that the problem of existence of a solution of a system of set constraints with projections is in NEXPTIME, and thus that it is NEXPTIME-complete.
Abstract: Systems of set constraints describe relations between sets of ground terms. They have been successfully used in program analysis and type inference. In this paper we prove that the problem of existence of a solution of a system of set constraints with projections is in NEXPTIME, and thus that it is NEXPTIME-complete. This extends the result of A. Aiken, D. Kozen, and E.L. Wimmers (1993) and R. Gilleron, S. Tison, and M. Tommasi (1990) on decidability of negated set constraints and solves a problem that was open for several years.
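For orientation, a small hypothetical system of set constraints with a projection (my example, not one from the paper) might look as follows:

```latex
X \supseteq \{\mathit{nil}\} \cup \mathit{cons}(Y, X), \qquad Y \supseteq \mathit{cons}^{-1}_{(1)}(X)
```

A solution assigns sets of ground terms to X and Y satisfying both inclusions; cons^{-1}_{(1)}(X) projects out the first arguments of the cons-terms in X, and it is the presence of such projection operators that the NEXPTIME-completeness result covers.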

62 citations


Book
01 Jan 1994
TL;DR: In this book, the author examines the role of symbols in the development of logic, and some examples from the literature show the role that notation and basic theory have played in this development.
Abstract: Introduction. 1: Notation and basic theory. 2: Reduction. 3: Combinatory logic. 4: Semantics. 5: Computability. 6: Types. 7: Practical issues. 8: Other calculi. 9: Further reading. Bibliography. Index

61 citations


Book
01 Jan 1994
TL;DR: This book introduces the beginning computer science student to some of the fundamental ideas and techniques used by computer scientists today, focusing on discrete structures, logic, and computability.
Abstract: This book introduces the beginning computer science student to some of the fundamental ideas and techniques used by computer scientists today, focusing on discrete structures, logic, and computability.

59 citations


Book
01 Jan 1994
TL;DR: An up-to-date, authoritative text for courses in theory of computability and languages that redefines the building blocks of automata theory by offering a single unified model encompassing all traditional types of computing machines and "real world" electronic computers.
Abstract: An up-to-date, authoritative text for courses in theory of computability and languages. The authors redefine the building blocks of automata theory by offering a single unified model encompassing all traditional types of computing machines and "real world" electronic computers. This reformulation of computability and formal language theory provides a framework for building a body of knowledge. A solutions manual and an instructor's software disk are also available.

47 citations


Posted Content
TL;DR: In this paper, it was shown that if a social welfare function satisfying Unanimity and Independence also satisfies Pairwise Computability, then it is dictatorial; this result severely limits, on practical grounds, Fishburn's resolution (1970) of Arrow's impossibility.
Abstract: A social welfare function for a denumerable society satisfies Pairwise Computability if for each pair (x, y) of alternatives, there exists an algorithm that can decide from any description of each profile on {x, y} whether the society prefers x to y. I prove that if a social welfare function satisfying Unanimity and Independence also satisfies Pairwise Computability, then it is dictatorial. This result severely limits, on practical grounds, Fishburn's resolution (1970) of Arrow's impossibility. I also give an interpretation of a denumerable "society." Keywords: Arrow impossibility theorem, Hayek's knowledge problem, algorithms, recursion theory, ultrafilters.
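As a minimal sketch of the definitions (my illustration, not the paper's construction): a dictatorial rule satisfies Pairwise Computability trivially, because deciding society's preference on a pair {x, y} requires reading only one coordinate of the denumerable profile.

```python
def dictatorial_swf(dictator):
    """Pairwise decision procedure for the rule 'society agrees with
    voter `dictator`'.  A profile on the pair (x, y) is a function
    from voter index i to True (i prefers x to y) or False."""
    def prefers_x_to_y(profile):
        # Only one coordinate of the infinite profile is ever read,
        # so this decision is trivially algorithmic.
        return profile(dictator)
    return prefers_x_to_y

decide = dictatorial_swf(0)                 # voter 0 is the dictator
profile = lambda i: (i % 2 == 0)            # even-numbered voters prefer x
print(decide(profile))                      # True: society prefers x to y
```

The theorem says the converse direction is the surprising one: Pairwise Computability together with Unanimity and Independence forces such a dictator to exist.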

37 citations


Book
01 Jan 1994
TL;DR: The main theorem of the theory of effectivity states that in admissibly represented topological spaces a function is continuous iff it has a continuous representation.
Abstract: The main theorem of the theory of effectivity (cf. Kreitz and Weihrauch [KW1], [W1]) states that in admissibly represented topological spaces a function is continuous iff it has a continuous representation. Hence continuity is a necessary condition for computability. We investigate an extended model of computability in order to compute relations. From another point of view these relations are nondeterministic operations or set-valued functions. We show that for a special class of topological spaces (including the complete separable metric ones) and for a certain notion of continuity for relations the main theorem can be extended too.
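A minimal sketch of the underlying idea (assuming the standard Cauchy representation of the reals; this is my illustration, not the paper's machinery): a real is represented by a function producing fast-converging rational approximations, and a continuous operation such as addition is computed by querying its arguments at higher precision.

```python
from fractions import Fraction

# A real x is represented by a function phi with |x - phi(n)| <= 2**-n.

def const(q):
    """Representation of an exactly known rational q."""
    return lambda n: Fraction(q)

def add(phi, psi):
    """Addition is continuous, hence computable on representations:
    a 2**-n approximation of the sum only needs each summand to
    precision 2**-(n+1)."""
    return lambda n: phi(n + 1) + psi(n + 1)

x = const(Fraction(1, 3))
y = const(Fraction(1, 6))
print(float(add(x, y)(10)))   # 0.5, accurate to within 2**-10
```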

Journal ArticleDOI
TL;DR: The difference between classes of languages such as P and PSPACE, NL and SAC^1, and PL and Diff_< is characterized by the number of stack symbols; that is, whether the stack alphabet contains one versus two distinct symbols.

Journal ArticleDOI
TL;DR: The results here can be used to prove complexity theorems on path following algorithms for solving systems of polynomial equations, using a model of computation over the integers [Malajovich].
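A minimal sketch of the kind of path-following algorithm such complexity theorems concern (a generic homotopy continuation over floating point, not the paper's integer model):

```python
def track_root(f, df, g, dg, z, steps=100):
    """Follow a root of H(t, z) = (1 - t) * g(z) + t * f(z) from t = 0,
    where z is a known root of g, to t = 1, yielding a root of f."""
    for k in range(1, steps + 1):
        t = k / steps
        h  = lambda w: (1 - t) * g(w)  + t * f(w)
        dh = lambda w: (1 - t) * dg(w) + t * df(w)
        for _ in range(3):                 # a few Newton corrections per step
            z = z - h(z) / dh(z)
    return z

# Example: deform the solved system g(z) = z**2 - 1 into f(z) = z**2 - 2.
f, df = lambda z: z**2 - 2, lambda z: 2 * z
g, dg = lambda z: z**2 - 1, lambda z: 2 * z
print(track_root(f, df, g, dg, 1.0))       # ~1.41421356..., i.e. sqrt(2)
```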

Journal ArticleDOI
TL;DR: It is proved that the function of normalization in base θ, which maps any θ-representation of a real number onto its θ-development, is a function computable by a finite automaton over any alphabet if and only if θ is a Pisot number.
Abstract: We prove that the function of normalization in base θ, which maps any θ-representation of a real number onto its θ-development, obtained by a greedy algorithm, is a function computable by a finite automaton over any alphabet if and only if θ is a Pisot number.
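A minimal sketch of the greedy algorithm producing the θ-development (my illustration; the digit count is arbitrary): at each step, scale by θ and take the integer part as the next digit.

```python
def greedy_development(x, theta, ndigits=12):
    """Greedy expansion of 0 <= x < 1 in base theta: digits d_i with
    x = sum_i d_i * theta**(-i), each d_i as large as possible."""
    digits = []
    for _ in range(ndigits):
        x *= theta
        d = int(x)          # greedy choice: largest admissible digit
        digits.append(d)
        x -= d
    return digits

phi = (1 + 5 ** 0.5) / 2    # golden ratio, the classic Pisot number
print(greedy_development(0.5, phi))   # digits are 0/1 in this base
```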


Journal ArticleDOI
TL;DR: In this article, it is argued that the impact of procedural rationality on economics depends on the computational resources available to economic agents, which can be studied using the concepts of computability and complexity, as well as game theory.
Abstract: Herbert Simon advocates that economists should study procedural rationality instead of substantive rationality. One approach for studying procedural rationality is to consider algorithmic representations of procedures, which can then be studied using the concepts of computability and complexity. For some time, game theorists have considered the issue of computability and have employed automata to study bounded rationality. Outside game theory very little research has been performed. Very simple examples of the traditional economic optimization models can require transfinite computations. The impact of procedural rationality on economics depends on the computational resources available to economic agents.

Journal ArticleDOI
TL;DR: Using this theory, it is shown that engineering design is a computable function, and a computational methodology is developed that can be described as a "form follows function" design paradigm.
Abstract: Computational abstraction of engineering design leads to an elegant theory defining (1) the process of design as an abstract model of computability, the Turing machine; (2) the artifacts of design as enumerated strings from a (possibly multidimensional) grammar; and (3) design specifications or constraints as formal state changes that govern string enumeration. Using this theory, it is shown that engineering design is a computable function. A computational methodology based on the theory is then developed that can be described as a form follows function design paradigm.
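A toy rendering of the "enumerate strings from a grammar, filter by constraints" view (the component names and the constraint below are entirely hypothetical, purely to illustrate the paradigm):

```python
from itertools import product

COMPONENTS = ["beam", "joint", "plate"]      # hypothetical design alphabet

def enumerate_designs(max_len):
    """Enumerate all component strings up to a given length."""
    for n in range(1, max_len + 1):
        yield from product(COMPONENTS, repeat=n)

def satisfies_spec(design):
    """Toy design constraint: every joint must directly follow a beam."""
    return all(i > 0 and design[i - 1] == "beam"
               for i, part in enumerate(design) if part == "joint")

valid = [d for d in enumerate_designs(3) if satisfies_spec(d)]
print(valid[:5])     # first few artifacts meeting the specification
```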

01 Jan 1994
TL;DR: A variant of 'quasi-logical form' is used as an underspecified meaning representation, related to 'resolved logical forms' via 'conditional equivalences', which define the semantics of contextually dependent constructs with respect to a given context.
Abstract: This paper describes a reversible system for the interpretation and generation of sentences containing context dependent constructs like pronouns, focus, and ellipsis. A variant of 'quasi-logical form' is used as an underspecified meaning representation, related to 'resolved logical forms' via 'conditional equivalences'. These equivalences define the semantics of contextually dependent constructs with respect to a given context. Higher order unification and abduction are used in relating expressions to contexts.

Book ChapterDOI
04 Jul 1994
TL;DR: The new presentation introduces an operator to express recursion, and an ML-style let-constructor allowing one to associate an agent with an agent-variable and use the latter several times in a program.
Abstract: We present a formulation of the polyadic π-calculus featuring a syntactic category for agents, together with a typing system assigning polymorphic types to agents. The new presentation introduces an operator to express recursion, and an ML-style let-constructor allowing one to associate an agent with an agent-variable and use the latter several times in a program. The essence of the monomorphic type system is the assignment of types to names, and multiple name-type pairs to programs [14]. The polymorphic type system incorporates a form of abstraction over types, and inference rules allowing one to introduce and eliminate the abstraction operator. The extended system preserves most of the syntactic properties of the monomorphic system, including subject-reduction and computability of principal typings. We present an algorithm to extract the principal typing of a process, and prove it correct with respect to the typing system. We also study, in the context of π-calculus, some well-known properties of the let-constructor.
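As a hypothetical illustration of the let-constructor and of why polymorphism matters (my notation, not the paper's exact syntax): a forwarder agent bound once and instantiated twice,

```latex
\mathbf{let}\; A = (x, y)\,\overline{x}\langle y \rangle \;\mathbf{in}\; \bigl( A\langle a, u \rangle \mid A\langle b, v \rangle \bigr)
```

If the channels a and b carry payloads of different types, A can only be typed by abstracting over the payload type, which is exactly the ML-style generalization at let that the polymorphic system provides.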

Journal Article
TL;DR: The author describes the state of this new field of Inductive Logic Programming, which grew directly out of the earlier work of Plotkin and Shapiro, and discusses areas for future development.
Abstract: Turing's best known work is concerned with whether universal machines can decide the truth value of arbitrary logic formulae. However, in this paper it is shown that there is a direct evolution in Turing's ideas from his earlier investigations of computability to his later interests in machine intelligence and machine learning. Turing realised that machines which could learn would be able to avoid some of the consequences of Gödel's results on incompleteness and undecidability. Machines which learned could continuously add new axioms to their repertoire. Inspired by a radio talk given by Turing in 1951, Christopher Strachey went on to implement the world's first machine learning program. This particular first is usually attributed to A.L. Samuel. Strachey's program, which did rote learning in the game of Nim, preceded Samuel's checker playing program by four years. Neither Strachey's nor Samuel's system took up Turing's suggestion of learning logical formulae. Developments in this area were delayed until Gordon Plotkin's work in the early 1970s. Computer-based learning of logical formulae is the central theme of the research area of Inductive Logic Programming, which grew directly out of the earlier work of Plotkin and Shapiro. In the present paper the author describes the state of this new field and discusses areas for future development.



01 Jan 1994
TL;DR: In this paper, the authors investigated the degree to which the mathematics used in physical theories can be constructivized by using recursive function theory and classical logic to separate out the algorithmic content of mathematical theories rather than attempting to reformulate them in terms of "intuitionistic" logic.
Abstract: This dissertation is an investigation into the degree to which the mathematics used in physical theories can be constructivized. The techniques of recursive function theory and classical logic are used to separate out the algorithmic content of mathematical theories rather than attempting to reformulate them in terms of "intuitionistic" logic. The guiding question is: are there experimentally testable predictions in physics which are not computable from the data? The nature of Church's thesis, that the class of effectively calculable functions on the natural numbers is identical to the class of general recursive functions, is discussed. It is argued that this thesis is an example of an explication of the very notion of an effectively calculable function. This is contrary to a view of the thesis as a hypothesis about the limitations of the human mind. The extension to functions of a real variable of the notion of effective calculability is discussed, and it is argued that a function of a real variable must be continuous in order to be considered effectively calculable (herein: the Borel-Brouwer thesis). The relation between continuity and computability is significant for the problem at hand. The results of a well-designed experiment do not depend critically upon the precise values of the relevant parameters. Accordingly, if the solution to a problem in mathematical physics depends discontinuously upon the data, it cannot be regarded as an experimentally testable prediction of the theory. The principle that the testable predictions of a physical theory cannot be singular is known as the principle of regularity. This principle is significant, because (by the Borel-Brouwer thesis) discontinuities generate non-computability, but (by the principle of regularity) they also disqualify a prediction from being experimentally testable. A mathematical framework is set up for discussing computability in physical theories. This framework is then applied to the case of quantum mechanics. It is found that, due to the use of unbounded (hence discontinuous) operators in the theory, noncomputable objects appear, but predictions which satisfy the principle of regularity are nevertheless computable functions of the data.
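The link between discontinuity and non-computability invoked here is well illustrated by the Heaviside step function (a standard example, not specific to this dissertation):

```latex
H(x) = \begin{cases} 0, & x < 0 \\ 1, & x \ge 0 \end{cases}
```

Given only arbitrarily accurate rational approximations of x, no algorithm can always decide whether x < 0 or x >= 0, because when x = 0 every finite-precision approximation is compatible with both answers; computability fails exactly at the discontinuity.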





Journal Article
TL;DR: The paper introduces several generalized pumping lemmata which furnish a more elegant technique than the common pumping lemma for showing that certain languages are not regular.
Abstract: The paper introduces several generalized pumping lemmata which furnish a more elegant technique than the common pumping lemma for showing that certain languages are not regular.
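For context, the common pumping lemma argument that these lemmata generalize runs as follows for the standard example:

```latex
\text{If } L = \{a^n b^n : n \ge 0\} \text{ were regular with pumping length } p, \text{ then }
a^p b^p = xyz,\ |xy| \le p,\ |y| \ge 1 \text{ forces } y = a^k\ (k \ge 1),
\text{ yet } xy^2z = a^{p+k} b^p \notin L.
```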

10 Oct 1994
TL;DR: A novel algorithmic method for simulating complex fluids, for instance multiphase single-component fluids and molecular systems, that inherits exact computability on a discrete spacetime lattice and uses non-local interactions to model a richer set of physical dynamics.
Abstract: Presented is a novel algorithmic method for simulating complex fluids, for instance multiphase single component fluids and molecular systems. The algorithm falls under a class of single-instruction multiple-data computation known as lattice-gases, and therefore inherits exact computability on a discrete spacetime lattice. Our contribution is the use of non-local interactions that allow us to model a richer set of physical dynamics, such as crystallization processes, yet do so in a way that remains locally computed. A simple computational scheme is employed that allows all the dynamics to be computed in parallel with two additional bits of local site data, for outgoing and incoming messengers, regardless of the number of long-range neighbors. The computational scheme is an efficient decomposition of a lattice-gas with many neighbors. It is conceptually similar to the idea of virtual intermediate particle momentum exchanges that is well known in particle physics. All 2-body interactions along a particular direction define a spatial partition that is updated in parallel. Random permutation through the partitions is sufficient to recover the necessary isotropy as long as enough momentum exchange directions are used. The algorithm is implemented on the CAM-8 prototype.
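A minimal sketch of the lattice-gas idea (an HPP-style toy on a 1D ring; the paper's non-local, messenger-based CAM-8 algorithm is considerably richer):

```python
import random

# One bit per (site, direction) channel.  The state is purely Boolean and
# each update is a bijection on bit configurations; this is the "exact
# computability" of lattice gases: no roundoff error can ever accumulate.

L = 16
right = [random.random() < 0.3 for _ in range(L)]   # right-moving particles
left  = [random.random() < 0.3 for _ in range(L)]   # left-moving particles

def stream(right, left):
    """Advect each particle one site along its direction (periodic ring)."""
    return ([right[(i - 1) % L] for i in range(L)],
            [left[(i + 1) % L] for i in range(L)])

n0 = sum(right) + sum(left)
for _ in range(100):
    # A full model (HPP/FHP, or the paper's non-local variant) would add a
    # momentum-conserving collision step here before streaming.
    right, left = stream(right, left)
assert sum(right) + sum(left) == n0     # particle number exactly conserved
print(n0, "particles, exactly conserved after 100 steps")
```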