scispace - formally typeset

Showing papers on "Computability published in 1992"


Journal ArticleDOI
TL;DR: It is proved that any functional term of appropriate type encodes a polynomial-time algorithm and that, conversely, any polynomial-time function can be obtained in this way.

275 citations


Proceedings ArticleDOI
22 Jun 1992
TL;DR: The frontier of knowledge about the structural properties of sparse sets is explored and the strongest currently known results, together with the open problems that the results leave, are presented.
Abstract: The frontier of knowledge about the structural properties of sparse sets is explored. A collection of topics related to the question of how hard or easy sparse sets are is surveyed. The strongest currently known results, together with the open problems that they leave, are presented.

70 citations


Book ChapterDOI
01 Jan 1992
TL;DR: Several classical approaches to higher type computability are described and compared, and a new result is proved in Section 8, showing that an intuitively polynomial-time functional is in fact not in the class described.
Abstract: Several classical approaches to higher type computability are described and compared. Their suitability for providing a basis for higher type complexity theory is discussed. A class of polynomial-time functionals is described and characterized. A new result is proved in Section 8, showing that an intuitively polynomial-time functional is in fact not in the class described.

63 citations



Journal ArticleDOI
TL;DR: A classical subject in computational complexity is what is usually called algebraic complexity: the study of algorithms over a ring R, where cost is understood as the number of ring operations the algorithm performs as a function of the input size n, and where the main kinds of results are both upper and lower bounds.
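A small sketch of this cost measure (the function names and example are illustrative, not taken from the article): counting ring operations for naive polynomial evaluation versus Horner's rule.

```python
# Sketch: the algebraic-complexity cost measure, i.e. counting ring
# operations (additions/multiplications) rather than bit operations.
# Example and names are illustrative, not from the article.

def naive_eval(coeffs, x):
    """Evaluate sum(c_i * x^i) naively; returns (value, ring-op count)."""
    ops = 0
    total = 0
    power = 1
    for i, c in enumerate(coeffs):
        if i > 0:
            power *= x          # one multiplication to update x^i
            ops += 1
        total += c * power      # one multiplication and one addition
        ops += 2
    return total, ops

def horner_eval(coeffs, x):
    """Horner's rule: one multiplication and one addition per coefficient."""
    ops = 0
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c       # one multiplication and one addition
        ops += 2
    return acc, ops
```

For 1 + 2x + 3x² at x = 2, both return 17, but Horner's rule performs 6 ring operations against 8 for the naive scheme, and the gap grows linearly with the degree.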

24 citations


Journal ArticleDOI
TL;DR: The notion of “semicomputability” is investigated, intended to generalize the notion of recursive enumerability of relations to abstract structures; this leads to the formulation of a “Generalized Church-Turing Thesis” for definability of relations on abstract structures.
Abstract: We investigate the notion of “semicomputability,” intended to generalize the notion of recursive enumerability of relations to abstract structures. Two characterizations are considered and shown to be equivalent: one in terms of “partial computable functions” (for a suitable notion of computability over abstract structures) and one in terms of definability by means of Horn programs over such structures. This leads to the formulation of a “Generalized Church-Turing Thesis” for definability of relations on abstract structures.
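The Horn-program side of this characterization can be illustrated with a toy sketch (a heavy simplification: propositional atoms stand in for relations over an abstract structure). The semicomputable set is the least fixed point reached by forward chaining.

```python
# Toy sketch: least fixed point of a propositional Horn program via
# forward chaining. A simplification of the paper's setting, which works
# over abstract structures rather than propositional atoms.

def horn_closure(rules, facts):
    """rules: iterable of (body_atoms, head_atom) pairs; facts: initial
    atoms. Returns the least set of atoms closed under the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(a in derived for a in body):
                derived.add(head)
                changed = True
    return derived

rules = [((), "a"),          # a fact: a rule with an empty body
         (("a",), "b"),
         (("a", "b"), "c"),
         (("d",), "e")]      # never fires: "d" is not derivable
derivable = horn_closure(rules, set())   # {"a", "b", "c"}
```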

12 citations


Journal ArticleDOI
Abstract: Contents: 1. Introduction (262); 2. RAM and PLN Networks (264); 3. Automata Theory (266); 4. A Probabilistic Recognition Method (271); 4.1 Structure of Network (272); 4.2 Recognition Algorithm (272); 5. From Grammars to Neural Networks (274); 6. From Neural Networks to Grammars (283); 7. Conclusions (285).

12 citations


Proceedings ArticleDOI
TL;DR: This paper applies the genetic algorithm to a difficult machine learning problem, viz., to learn the description of pushdown automata to accept a context-free language (CFL), given legal and illegal sentences of the language.
Abstract: Genetic algorithms (GAs) are a class of probabilistic optimization algorithms which utilize ideas from natural genetics. In this paper, we apply the genetic algorithm to a difficult machine learning problem, viz., to learn the description of pushdown automata (PDA) to accept a context-free language (CFL), given legal and illegal sentences of the language. Previous work has involved the use of GAs in learning descriptions for finite state machines for accepting regular languages. CFLs are known to properly include regular languages, and hence, the learning problem addressed here is of a greater complexity. The ability to accept context free languages can be applied to a number of practical problems like text processing, speech recognition, etc. © (1992) COPYRIGHT SPIE--The International Society for Optical Engineering.
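The general recipe can be sketched as follows (a deliberately reduced toy: it evolves a finite-state acceptor, the setting of the prior work the abstract mentions, rather than a full pushdown automaton; all names and parameters are ours, not the paper's):

```python
import random

# Toy GA sketch: evolve a deterministic finite acceptor over {a, b} from
# legal/illegal example sentences. A simplification of the paper's task,
# which evolves pushdown automata; all names and parameters are ours.

def accepts(genome, word):
    """genome[s] = (next state on 'a', next state on 'b', accepting?)."""
    state = 0
    for ch in word:
        next_a, next_b, _ = genome[state]
        state = next_a if ch == "a" else next_b
    return genome[state][2]

def fitness(genome, legal, illegal):
    """Fraction of example sentences classified correctly."""
    hits = sum(accepts(genome, w) for w in legal)
    hits += sum(not accepts(genome, w) for w in illegal)
    return hits / (len(legal) + len(illegal))

def mutate(genome, n_states, rate=0.2):
    out = []
    for next_a, next_b, acc in genome:
        if random.random() < rate:
            next_a = random.randrange(n_states)
        if random.random() < rate:
            next_b = random.randrange(n_states)
        if random.random() < rate:
            acc = not acc
        out.append((next_a, next_b, acc))
    return out

def evolve(legal, illegal, n_states=3, pop=30, gens=200):
    rand_genome = lambda: [
        (random.randrange(n_states), random.randrange(n_states),
         random.random() < 0.5) for _ in range(n_states)]
    population = [rand_genome() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda g: -fitness(g, legal, illegal))
        if fitness(population[0], legal, illegal) == 1.0:
            break                      # perfect classifier found
        elite = population[: pop // 3]
        population = elite + [mutate(random.choice(elite), n_states)
                              for _ in range(pop - len(elite))]
    return population[0]

# Learn "even number of a's" from labelled sentences.
legal = ["", "b", "aa", "aab", "baab"]
illegal = ["a", "ab", "ba", "aaa"]
best = evolve(legal, illegal)
```

The PDA version replaces the transition table with a table that also consults and rewrites a stack; fitness is still classification accuracy on the labelled sentences.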

12 citations


Journal ArticleDOI
TL;DR: In this article, fixed points of functors and fibrations are used to model solution concepts abstractly, so that equations whose arguments are solution concepts can themselves be solved abstractly.

9 citations


Book
01 Jan 1992

8 citations


Journal ArticleDOI
TL;DR: The paper introduces the notion of assumption of high-arity variables and shows that, by using the computational interpretation of types, it immediately follows that the execution of any proved-correct program terminates.

Book
01 Jan 1992
TL;DR: This text presents the proceedings of a workshop held in 1989 and covers such areas as computability and the complexity of higher type functions, logics for termination and correctness of functional programs, and concurrent computation as game playing.
Abstract: This text presents the proceedings of a workshop held in 1989 and covers such areas as computability and the complexity of higher type functions, logics for termination and correctness of functional programs, and concurrent computation as game playing.

Proceedings ArticleDOI
02 Oct 1992
TL;DR: The main topics are computational "hardness" of physical systems, computational status of fundamental theories, quantum computation, and the Universe as a computer.
Abstract: This paper reviews connections between physics and computation, and explores their implications. The main topics are computational "hardness" of physical systems, computational status of fundamental theories, quantum computation, and the Universe as a computer.

Book ChapterDOI
19 Jun 1992
TL;DR: In this contribution a natural and simple, as well as general and efficient, framework for studying effectivity (Type 2 theory of effectivity, TTE) is presented.
Abstract: In this contribution a natural and simple, as well as general and efficient, framework for studying effectivity (Type 2 theory of effectivity, TTE) is presented.
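A minimal sketch in the spirit of TTE (under a strong simplifying assumption: a real number is represented here as a function n ↦ rational within 2⁻ⁿ, whereas TTE proper works with Type 2 machines transforming infinite strings):

```python
from fractions import Fraction

# Sketch in the spirit of Type 2 effectivity, under a simplifying
# assumption: a real is represented by a function n -> rational
# approximation within 2^-n. TTE proper uses infinite strings and
# Type 2 machines; this only conveys the flavour.

def const(q):
    """Representation of a rational constant."""
    return lambda n: Fraction(q)

def add(x, y):
    """To get x + y within 2^-n, query both arguments at precision
    n + 1: the two errors of 2^-(n+1) sum to at most 2^-n."""
    return lambda n: x(n + 1) + y(n + 1)

def sqrt2(n):
    """Rational approximation of sqrt(2) within 2^-n, by bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

x = add(sqrt2, const(1))   # the real sqrt(2) + 1, as a representation
approx = x(20)             # a rational within 2^-20 of sqrt(2) + 1
```

The point of the effectivity theory is that operations like `add` are computable on the representations even though the represented objects are infinite.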

Journal ArticleDOI
TL;DR: The importance of the Stability Problem in neurocomputing is discussed, as well as the need for the study of infinite networks, which aims at characterizing problems solvable, in principle, by a neural network.
Abstract: The importance of the Stability Problem in neurocomputing is discussed, as well as the need for the study of infinite networks. Stability must be the key ingredient in the solution of a problem by a neural network without external intervention. Infinite discrete networks seem to be the proper objects of study for a theory of neural computability which aims at characterizing problems solvable, in principle, by a neural network. Precise definitions of such problems and their solutions are given. Some consequences are explored, in particular, the neural unsolvability of the Stability Problem for neural networks.
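The notion of stability used here can be made concrete with a finite toy network (an illustration of the definition only, not the paper's infinite model):

```python
# Toy illustration of stability: a finite discrete threshold network,
# updated synchronously; "stable" means the update map fixes the state.
# A finite illustration only, not the paper's infinite networks.

def step(weights, thresholds, state):
    """Synchronous update: unit i fires iff its weighted input sum
    meets its threshold."""
    n = len(state)
    return tuple(
        int(sum(weights[i][j] * state[j] for j in range(n)) >= thresholds[i])
        for i in range(n))

def stabilizes(weights, thresholds, state, max_steps=100):
    """Return the fixed point the trajectory reaches, or None if the
    network cycles (never stabilizes) within max_steps."""
    seen = set()
    for _ in range(max_steps):
        nxt = step(weights, thresholds, state)
        if nxt == state:
            return state          # stable configuration reached
        if nxt in seen:
            return None           # entered a cycle: no stability
        seen.add(nxt)
        state = nxt
    return None

flip = [[0, -1], [-1, 0]]         # each unit negates the other
thr = [0, 0]
stabilizes(flip, thr, (0, 1))     # -> (0, 1): already a fixed point
stabilizes(flip, thr, (1, 1))     # -> None: (1,1) and (0,0) oscillate
```

For finite networks this question is decidable by exhaustion, as above; the paper's result is that for infinite discrete networks the Stability Problem is not solvable by a neural network.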


Book ChapterDOI
Peter J. Marcer1
01 Jan 1992
TL;DR: Support is provided, from the theory of Lie computability and machines, for Sir John Eccles' hypothesis, based on extensive neurophysiological evidence, that mental events may cause neural events analogously to the probability fields of quantum mechanics.
Abstract: Support is provided, from the theory of Lie computability and machines, for Sir John Eccles' hypothesis, based on extensive neurophysiological evidence, that “Mental events (may) cause neural events analogously to the probability fields of quantum mechanics” (1).

Book
01 Jan 1992
TL;DR: This volume covers the computability and complexity of polynomial optimization problems, identification by model reference adaptive systems, and topics in modern computational methods for optimal control problems.
Abstract: Contents: Computability and complexity of polynomial optimization problems. - Identification by model reference adaptive systems. - Matching problems with knapsack side constraints: a computational study. - Optimization under functional constraints (semi-infinite programming) and applications. - Vector optimization: theory, methods, and application to design problems in engineering. - Nonconvex optimization and its structural frontiers. - Nonlinear optimization problems under data perturbations. - Difference methods for differential inclusions. - Ekeland's variational principle, convex functions and Asplund spaces. - Topics in modern computational methods for optimal control problems.

Journal ArticleDOI
Daniel Ocone1
TL;DR: In this paper, it was shown that the property of finite dimensional computability of a statistic is invariant under the application of the Markov semigroup of a diffusion with a constant diffusion coefficient.
Abstract: Let X be a one-dimensional diffusion with a constant diffusion coefficient. Suppose that we observe X in additive white noise. Beneš showed that the conditional density of X(t) given the noisy observations up to time t can be computed explicitly by a stochastic system with a finite-dimensional state space if the drift of X is a differentiable, everywhere defined solution of a one-dimensional Riccati equation with a quadratic potential. Suppose now that the drift satisfies the same Riccati equation, but only on a bounded or semi-bounded interval I at the endpoints of which it becomes singular. In this case X will have entrance boundaries at the finite endpoints of I. We show that unnormalized conditional estimates will no longer be finite-dimensionally computable in this situation. Our proof uses the following principle, which is of independent interest: the property of finite-dimensional computability of a statistic is invariant under the application of the Markov semigroup of X.
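For reference, the Beneš drift condition alluded to above can be written as follows (standard form from the filtering literature, not quoted from this paper): the drift f must be a differentiable, globally defined solution of

```latex
% Bene\v{s} condition: a Riccati equation with quadratic potential
f'(x) + f(x)^2 = a x^2 + b x + c , \qquad x \in \mathbb{R} .
% The paper studies drifts solving the same equation only on an
% interval $I$, with singularities at the endpoints of $I$.
```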

Proceedings ArticleDOI
08 Apr 1992
TL;DR: This paper investigates the feasibility of using formal proofs of programs in computability theory, and describes a framework for formal verification of programs written in a simple theoretical programming language.
Abstract: Whereas early researchers in computability theory described effective computability in terms of such models as Turing machines, Markov algorithms, and register machines, a recent trend has been to use simple programming languages as computability models. A parallel development to this programming approach to computability theory is the current interest in systematic and scientific development and proof of programs. This paper investigates the feasibility of using formal proofs of programs in computability theory. After describing a framework for formal verification of programs written in a simple theoretical programming language, we discuss the proofs of several typical programs used in computability theory.
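A small example of the kind of program such a framework would treat (the paper's language and proof rules are not reproduced here; precondition, loop invariant, and postcondition are rendered as runtime assertions in Python for illustration):

```python
# Sketch: a typical computability-theory program, multiplication by
# repeated addition, annotated with the assertions a formal proof
# would discharge. Rendered in Python; the paper uses its own simple
# theoretical programming language.

def mult(m, n):
    """Compute m * n for natural numbers by repeated addition."""
    assert m >= 0 and n >= 0      # precondition
    acc, i = 0, 0
    while i < n:
        assert acc == m * i       # loop invariant
        acc += m
        i += 1
    assert acc == m * n           # postcondition
    return acc
```

A formal proof discharges the invariant once and for all instead of checking it at run time, which is what makes the verified program usable as evidence in computability arguments.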

Book ChapterDOI
01 Jan 1992
TL;DR: All non-trivial systems of symbolic expressions are “comprehended” by this model and have inherent incompleteness properties by which any description must proceed by the iterative specification of sub-languages.
Abstract: In the present study we are using a conceptualization of a General Theory of Systems of Symbolic Expressions and Representation to discuss complementarity in language. This conceptualization reconciles our factual knowledge of neuronal control in biological systems with notions of computability and decidability in artifacts as well as living organisms. We thereby identify cognition with computation on representations (i.e. structures of symbolic expressions, possibly containing existential predicates and projecting on the truth values) in domains of symbolic objects closed under allowed operations (i.e. in autonomous or self-defining systems). It will be argued that “linguistic complementarity”, defined in terms of a distinction between “description” and “interpretation” of language, exists in language processing when symbolic expressions are considered partial objects which are related to each other as elements in a complete lattice structure. The formal theory developed by Dana Scott (1976) as a model for the Type Free Lambda Calculus is used in relation to this Theory of Systems of Symbolic Expressions and Representations. We argue that all non-trivial systems of symbolic expressions are “comprehended” by this model and have inherent incompleteness properties by which any description must proceed by the iterative specification of sub-languages. It is shown that continuous domains ensure this. Type Freeness is taken to mean that every distinct expression is a unique type sui generis, but the dichotomy of rules in combinations, where every object can be either operator or operand, implies a complementarity by which value elements and functions, as constant courses of variation, are mutually defining. Meaning is considered from the point of view of denotation or representation by converging sequences of monotonically ascending continuous expressions that approximate objects of infinite (“real”) type as limits.
The methodology of defining a partial order over domains of expressions is also considered.