
Showing papers on "Computability published in 1995"


Journal ArticleDOI
TL;DR: This paper surveys some recent attempts to formulate a plausible and tractable model of bounded rationality and focuses in particular on models which view bounded rationality as stemming from limited information processing.
Abstract: This paper surveys recent attempts to formulate a plausible and tractable model of bounded rationality. The author focuses in particular on models that view bounded rationality as stemming from limited information processing. He discusses partitional models (such as computability, automata, perceptions, and optimal networks), nonpartitional models, and axiomatic approaches.

154 citations


Posted Content
TL;DR: In this article, a formal framework for the analysis of complex systems, based on simulation, is proposed; a notion of a universal simulator and a definition of simulatability allow a description of conditions under which simulations can distribute update functions over system components.
Abstract: Artificial Life and the more general area of Complex Systems do not have a unified theoretical framework, although most theoretical work in these areas is based on simulation. This is primarily due to the insufficient representational power of the classical mathematical frameworks for the description of discrete dynamical systems of interacting objects with often complex internal states. Unlike computation or the numerical analysis of differential equations, simulation does not have a well established conceptual and mathematical foundation. Simulation is an arguably unique union of modeling and computation. However, simulation also qualifies as a separate species of system representation with its own motivations, characteristics, and implications. This work outlines how simulation can be rooted in mathematics and shows which properties some of the elements of such a mathematical framework have. The properties of simulation are described and analyzed in terms of properties of dynamical systems. It is shown how and why a simulation produces emergent behavior and why the analysis of the dynamics of the system being simulated is always an analysis of emergent phenomena. Indeed, the single fundamental class of properties of the natural world that simulation will open to new understanding is that which occurs only in the dynamics produced by the interactions of the components of complex systems. Simulation offers a synthetic, formal framework for the experimental mathematics of representation and analysis of complex dynamical systems. A notion of a universal simulator and the definition of simulatability are proposed. This allows a description of conditions under which simulations can distribute update functions over system components, thereby determining simulatability. The connection between the notion of simulatability and the notion of computability is defined and the concepts are distinguished. The basis of practical detection methods for determining effectively non-simulatable systems is presented. The conceptual framework is illustrated through examples. Keywords: computability, dynamics, emergence, system representation, universal simulator.

29 citations


Journal ArticleDOI
TL;DR: A new study of the decomposition algorithm first proposed by Karp, Miller and Winograd is proposed, based on linear programming resolutions whose duals give exactly the desired multi-dimensional schedules.
Abstract: This paper is devoted to the construction of multi-dimensional schedules for a system of uniform recurrence equations. We show that this problem is dual to the problem of computability of a system of uniform recurrence equations. We propose a new study of the decomposition algorithm first proposed by Karp, Miller and Winograd: we base our implementation on linear programming resolutions whose duals give exactly the desired multi-dimensional schedules. Furthermore, we prove that the schedules built this way are optimal up to a constant factor.
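
For orientation, here is a minimal sketch of the duality involved, in standard notation that is ours rather than the paper's: a linear schedule σ(z) = τ·z for a system of uniform recurrence equations with dependence vectors d_1, …, d_m is valid when every dependence is delayed by at least one step, and linear programming duality turns the existence of such a schedule into the Karp–Miller–Winograd computability test for zero-weight dependence cycles.

```latex
% Sketch (our notation): a valid linear schedule exists iff no nonnegative,
% nonzero combination of dependence vectors closes a zero-weight cycle.
\exists\, \tau :\ \tau \cdot d_j \ge 1 \quad (j = 1, \dots, m)
\quad\Longleftrightarrow\quad
\neg\, \exists\, \lambda \ge 0,\ \lambda \ne 0 :\ \sum_{j=1}^{m} \lambda_j d_j = 0
```

When the left-hand side fails, the witness λ singles out a subsystem on which the decomposition algorithm recurses, which is where the multi-dimensional schedules come from.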

27 citations


Journal ArticleDOI
TL;DR: The notion of frequency computation captures the class Ω of all sets A such that, for some n, the n-fold characteristic function of A can be computed with fewer than n errors; this work considers the recursion-theoretic properties of Ω, with special emphasis on recursively enumerable sets.
Abstract: The notion of frequency computation captures the class Ω of all sets A such that for some n, the n-fold characteristic function of A can be computed with fewer than n errors. We consider the recursion theoretic properties of Ω with special emphasis on recursively enumerable sets.
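
A minimal formalization of the definition just stated, in notation standard in the frequency-computation literature (not quoted from the paper):

```latex
% chi_A is the characteristic function of A.
A \in \Omega \iff \exists n\ \exists F \text{ total computable},\ F : \mathbb{N}^n \to \{0,1\}^n,\
\forall\, x_1 < \dots < x_n :\ \bigl|\{\, i : F(x_1, \dots, x_n)_i \ne \chi_A(x_i) \,\}\bigr| < n
```

That is, on every tuple of n distinct inputs, at least one of the n output bits must be correct.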

26 citations


Journal ArticleDOI
TL;DR: In this article, the authors give an overview of some recent results concerning quantitative adaptive error control in CFD.
Abstract: We give an overview of some recent results concerning quantitative adaptive error control in CFD.

20 citations


Book ChapterDOI
01 Jan 1995
TL;DR: Methods of proof theory are used to assign ordinals to classes of (terminating) programs, the idea being that the ordinal assignment provides a uniform way of measuring computational complexity “in the large”.
Abstract: In this paper we use methods of proof theory to assign ordinals to classes of (terminating) programs, the idea being that the ordinal assignment provides a uniform way of measuring computational complexity “in the large”. We are not concerned with placing prior (e.g. polynomial) bounds on computation-length, but rather with general methods of assessing the complexity of natural classes of programs according to the ways in which they are constructed. We begin with an overview of the method in section 2, the crucial idea being supplied by Buchholz's ω⁺-rule. Section 3 introduces a large class of higher-order programs based on Plotkin's PCF, but with “bounded” fixed point operators controlled by a given well-ordering. A Tait-style computability proof then ensures termination. In section 4 the details of the ordinal assignment method are worked out for the case where the given well-ordering is just the ordering of the natural numbers. The complexity bounds thus obtained turn out to be the slow-growing functions G_α with α below the Bachmann-Howard ordinal. Thus the functions computed by PCF_<-programs are just the provably recursive functions of arithmetic.
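
For reference, the slow-growing hierarchy mentioned above is standardly defined as follows, assuming a fixed assignment of fundamental sequences λ[n] to limit ordinals λ (this is the textbook definition, not a quotation from the chapter):

```latex
G_0(n) = 0, \qquad
G_{\alpha+1}(n) = G_\alpha(n) + 1, \qquad
G_\lambda(n) = G_{\lambda[n]}(n) \quad (\lambda \text{ a limit})
```

For example, with the usual fundamental sequence ω[n] = n, one gets G_ω(n) = n.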

19 citations


Proceedings ArticleDOI
24 Jul 1995
TL;DR: A new study of the decomposition algorithm first proposed by Karp, Miller and Winograd is proposed, based on linear programming resolutions whose duals give exactly the desired multi-dimensional schedules.
Abstract: This paper is devoted to the construction of multi-dimensional schedules for a system of uniform recurrence equations. We show that this problem is dual to the problem of computability of a system of uniform recurrence equations. We propose a new study of the decomposition algorithm first proposed by Karp, Miller and Winograd: we base our implementation on linear programming resolutions whose duals give exactly the desired multi-dimensional schedules. Furthermore, we prove that the schedules built this way are optimal up to a constant factor.

17 citations


Journal ArticleDOI
TL;DR: A general theorem for avoiding undecidable problems in computability theory is proposed, introducing a new class of recursive functions on different axiomatizations of numbers, with an application to the minimum-energy calculus in neural nets insofar as it is equivalent to a well-formed formula of a first-order predicate calculus.
Abstract: In this article we intend to analyze a chaotic system from the standpoint of its computation capability. To pursue this aim, we refer to a complex chaotic dynamics that we characterize via its symbolic dynamics. We show that these dynamical systems are subject to some typical undecidable problems. Particularly, we stress the impossibility of deciding on a unique invariant measure. This depends essentially on the supposition of the existence of a fixed universal grammar. The suggestion is thus to justify a contextual redefinition of the grammar as a function of the same evolution of the system. We propose on this basis a general theorem for avoiding undecidable problems in computability theory by introducing a new class of recursive functions on different axiomatizations of numbers. From it a series expansion on n algebraic fields can be defined. In such a way, we are able to obtain a very fast extraction procedure of unstable periodic orbits from a generic chaotic dynamics. The computational efficiency of this algorithm allows us to characterize a chaotic system by the complete statistics of its unstable cycles. Some examples of these two techniques are discussed. Finally, we introduce the possibility of an application of this same class of recursive functions to the calculus of the absolute minimum of energy in neural nets, insofar as it is equivalent to a well-formed formula of a first-order predicate calculus. © 1995 John Wiley & Sons, Inc.

16 citations


Book ChapterDOI
01 Jan 1995
TL;DR: In this paper, the issues of computability and constructivity in the mathematics of physics are discussed, and the sorts of questions to be addressed are those which might be expressed, roughly, as: Are the mathematical foundations of our current theories unavoidably non-constructive? Or: are the laws of physics computable?
Abstract: In this paper, the issues of computability and constructivity in the mathematics of physics are discussed. The sorts of questions to be addressed are those which might be expressed, roughly, as: Are the mathematical foundations of our current theories unavoidably non-constructive? Or: are the laws of physics computable?

15 citations


Dissertation
20 Nov 1995
TL;DR: The paradigm of Supervisory Control Theory, developed for Discrete Event Systems constrained by legality specifications, is useful and it is shown how it can be extended to this more general class of specifications.
Abstract: We are interested in the problem of designing control software for large-scale systems having discrete event-driven dynamics, in situations where the performance is specified by numerical measures. The paradigm of Supervisory Control Theory, developed for Discrete Event Systems (DES) constrained by legality specifications (a (0, ∞)-cost structure), is useful, and we show how it can be extended to this more general class of specifications. We assume the DES is represented by a formal language consisting of strings contained in the Kleene closure of an alphabet. This language has two kinds of costs on its usage of resources. The design objective is to find sublanguages that minimize these costs. Both deterministic and stochastic languages are considered. We present modelling methods and examples to motivate interesting ways of using our problem formulation. An existence theory is developed. Amongst other things, the theory establishes the existence of minimally restrictive solutions. Various related paradigms in stochastic control, dynamic programming and finite vertex directed graphs are discussed. For DES modelled by finite state machines, we establish computability and present algorithms of polynomial complexity to compute optimal sublanguages. Our findings can collectively be considered as a theory of optimal control for DES, though it differs from the classical theory in interesting ways.

13 citations


01 Sep 1995
TL;DR: The interaction paradigm provides new ways of approaching each of these application areas, demonstrating the value of interaction, while the conceptual framework and theoretical foundation for interactive models are provided.
Abstract: Interaction machines, defined by extending Turing machines with input actions (read statements), are shown to be more expressive than computable functions, providing a counterexample to the hypothesis of Church and Turing that the intuitive notion of computation corresponds to formal computability by Turing machines. The negative result that interaction cannot be modeled by algorithms leads to positive principles of interactive modeling by interface constraints that support partial descriptions of interactive systems whose complete behavior is inherently unspecifiable. The unspecifiability of complete behavior for interactive systems is a computational analog of Gödel incompleteness for the integers. Fortunately the complete behavior of interaction machines is not needed to harness their behavior for practical purposes. Interface descriptions are the primary mechanism used by software designers and application programmers for partially describing systems for the purpose of designing, controlling, predicting, and understanding them. Interface descriptions are an example of "harness constraints" that constrain interactive systems so their behavior can be harnessed for useful purposes. We examine both system constraints like transaction correctness and interface constraints for software design and applications. Sections 1, 2, and 3 provide a conceptual framework and theoretical foundation for interactive models, while sections 4, 5, and 6 apply the framework to distributed systems, software engineering, and artificial intelligence. Section 1 explores the design space of interaction machines, section 2 considers the relation between imperative, declarative, and interactive paradigms of computing, and section 3 examines the limitations of logic and formal semantics. In section 4 we examine process models, transaction correctness, time, programming languages, and operating systems from the point of view of interaction. In section 5, we examine life-cycle models, object-based design, use-case models, design patterns, interoperability, and coordination. In section 6, we examine knowledge representation, intelligent agents, planning for dynamical systems, nonmonotonic logic, and "can machines think?". The interaction paradigm provides new ways of approaching each of these application areas, demonstrating the value of interaction.


Journal ArticleDOI
TL;DR: This review surveys a significant set of recent ideas developed in the study of nonlinear Galerkin approximation, including the Krasnosel’skii calculus, which represents a generalization of the classical inf-sup linear saddlepoint theory.
Abstract: This review surveys a significant set of recent ideas developed in the study of nonlinear Galerkin approximation. A significant role is played by the Krasnosel’skii calculus, which represents a generalization of the classical inf-sup linear saddlepoint theory. A description of a proper extension of this calculus and the relation to the inf-sup theory are part of this review. The general study is motivated by steady-state, self-consistent, drift-diffusion systems. The mixed boundary value problem for nonlinear elliptic systems is studied with respect to defining a sequence of convergent approximations, satisfying requirements of: (i) optimal convergence rate; (ii) computability; and, (iii) stability. It is shown how the fixed point and numerical fixed point maps of the system, in conjunction with the Newton–Kantorovich method applied to the numerical fixed point map, permit a solution of this approximation problem. A critical aspect of the study is the identification of the breakdown of the Newton–Kantorovich…

Journal ArticleDOI
TL;DR: The degrees of unsolvability were introduced in the ground-breaking papers of Post [20] and Kleene and Post [7] as an attempt to measure the information content of sets of natural numbers.
Abstract: The degrees of unsolvability were introduced in the ground-breaking papers of Post [20] and Kleene and Post [7] as an attempt to measure the information content of sets of natural numbers. Kleene and Post were interested in the relative complexity of decision problems arising naturally in mathematics; in particular, they wished to know when a solution to one decision problem contained the information necessary to solve a second decision problem. As decision problems can be coded by sets of natural numbers, this question is equivalent to: Given a computer with access to an oracle which will answer membership questions about a set A, can a program (allowing questions to the oracle) be written which will correctly compute the answers to all membership questions about a set B? If the answer is yes, then we say that B is Turing reducible to A and write B ≤_T A. We say that B ≡_T A if B ≤_T A and A ≤_T B. ≡_T is an equivalence relation, and ≤_T induces a partial ordering on the corresponding equivalence classes; the poset obtained in this way is called the degrees of unsolvability, and elements of this poset are called degrees. Post was particularly interested in computability from sets which are partially generated by a computer, namely, those for which the elements of the set can be enumerated by a computer.

Posted Content
TL;DR: In this article, the authors outline how simulation can be rooted in mathematics and show which properties some of the elements of such a mathematical framework have; the properties of simulation are described and analyzed in terms of properties of dynamical systems.
Abstract: Unlike computation or the numerical analysis of differential equations, simulation does not have a well established conceptual and mathematical foundation. Simulation is an arguably unique union of modeling and computation. However, simulation also qualifies as a separate species of system representation with its own motivations, characteristics, and implications. This work outlines how simulation can be rooted in mathematics and shows which properties some of the elements of such a mathematical framework have. The properties of simulation are described and analyzed in terms of properties of dynamical systems. It is shown how and why a simulation produces emergent behavior and why the analysis of the dynamics of the system being simulated is always an analysis of emergent phenomena. A notion of a universal simulator and the definition of simulatability are proposed. This allows a description of conditions under which simulations can distribute update functions over system components, thereby determining simulatability. The connection between the notion of simulatability and the notion of computability is defined and the concepts are distinguished. The basis of practical detection methods for determining effectively non-simulatable systems is presented. The conceptual framework is illustrated through examples from molecular self-assembly and engineering.

Book ChapterDOI
01 Jan 1995
TL;DR: A collection of partial functions over an arbitrary set M, indexed by elements of the same set, closed under composition, and containing projections, universal functions and functions s of the s-m-n theorem of Recursion Theory.
Abstract: An Effective Applicative Structure is a collection of partial functions over an arbitrary set M, indexed by elements of the same set, closed under composition, and containing projections, universal functions and functions s of the s-m-n theorem of Recursion Theory. The notion of EAS is developed as an abstract approach to computability, filling a notational gap between functional and combinatorial theories.
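
For readers without the recursion-theoretic background, the closure conditions amount to the following familiar equations, written in the usual indexed notation (ours; the chapter's notation may differ), where φ_a denotes the partial function indexed by a ∈ M:

```latex
\varphi_{u}(e, x) \simeq \varphi_{e}(x) \quad \text{(universal function)}
\qquad
\varphi_{s(e, x)}(y) \simeq \varphi_{e}(x, y) \quad \text{(s-m-n)}
```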

Journal ArticleDOI
T. Yamakami
TL;DR: This work reformulates Townsend's topological notions associated with time-bounded computations of function-oracle Turing machines, and further extends his "topological" characterization to all levels of the boldface polynomial hierarchy of type two, which leads to a polynomialized version of descriptive set theory.
Abstract: Townsend introduced a resource-bounded extension of polynomial-time computable functions on strings to type-two functionals, and studied a type-two version of the Meyer-Stockmeyer polynomial hierarchy which is founded on polynomial-time computable functionals of type two. A functional of type two is polynomial-time computable if it is computed by a deterministic function-oracle Turing machine whose runtime is bounded by a polynomial that does not depend on the choice of oracle functions. Townsend also introduced a boldface polynomial hierarchy of type two by a relativization method, and gave a "topological" characterization of the first level of this hierarchy. We reformulate Townsend's topological notions associated with time-bounded computations of function-oracle Turing machines, and further extend his "topological" characterization to all levels of the boldface polynomial hierarchy of type two. This leads to a polynomialized version of descriptive set theory.
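
Formalizing the sentence above in our own notation, a type-two functional F is polynomial-time computable when

```latex
\exists\, M\ \exists\, p \in \mathbb{N}[t]\ \forall f\ \forall x :\
M^{f}(x) \text{ halts within } p(|x|) \text{ steps and outputs } F(f, x),
```

the point being that the single polynomial p bounds the runtime uniformly over every oracle function f.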

Book ChapterDOI
TL;DR: The rigidity and definability in the noncomputable universe is discussed, which includes all possible notions of effective computability relative to oracles.
Abstract: Publisher Summary This chapter discusses rigidity and definability in the noncomputable universe. The noncomputable universe is intimately connected with the world of everyday mathematics. This noncomputability is of a fundamental nature, and does not arise from mere practical limitations such as those on capacity of memory or duration of computational processes. An important aim of recursion theory is to investigate the context of interesting mathematical objects within the noncomputable universe, and Kleene and Post proposed the degrees of unsolvability as an appropriate theoretical framework, or fine structure theory, within which to do this. The subsequent development of local degree theory is largely based on autonomy of interest and motivation, through which evolved elegant techniques and striking results, while its general impact amongst mathematicians and computer scientists is limited by its seeming preoccupation with pathology and technique of ever more prohibitive complexity. There are of course a number of notions on which to base a useful fine structure theory, of which two are especially important: many-one reducibility and Turing reducibility. Many-one reductions are historically significant as the recursion theoretic analogues of natural translations between formal theories, while Turing reducibility includes all possible notions of effective computability relative to oracles.

Journal ArticleDOI
TL;DR: A normal form theorem for non-deterministic and concurrent processes specifiable in terms of various fair merge constructs implies that, although they can be very complex when viewed as classical set-functions, they are all "loosely implementable" in the sense of Park (1980).

Journal ArticleDOI
TL;DR: This work proposes a model for systems involving both intracellular and intercellular computation by introducing the inconsistent relation of Boolean and non-Boolean logic and concludes that undecidability between a part and whole comprises a hierarchical structure.
Abstract: Systems involving both intracellular and intercellular computation are destined to be described as non-computable. We propose a model for such systems by introducing the inconsistent relation of Boolean and non-Boolean logic. A cellular-automata-fashioned model exhibits class-4-like evolution located at the edge of chaos, while no local rule for universality exists over the whole space in our model. A system featuring the inconsistent vertical scheme is approximately articulated into a hierarchical system, whose wholeness cannot be deduced by any approximated local rule. In other words, undecidability between a part and whole comprises a hierarchical structure.

21 Sep 1995
TL;DR: In this paper, the authors proposed a method for simulating complex fluids, such as multiphase single component fluids and molecular systems, by using nonlocal interactions that allow them to model a richer set of physical dynamics, yet to do so in a way that remains locally computed.
Abstract: A novel algorithmic method is presented for simulating complex fluids, for instance multiphase single-component fluids and molecular systems. The algorithm falls under a class of single-instruction multiple-data computation known as lattice gases, and therefore inherits exact computability on a discrete spacetime lattice. Our contribution is the use of nonlocal interactions that allow us to model a richer set of physical dynamics, such as crystallization processes, yet to do so in a way that remains locally computed. A simple computational scheme is employed that allows all the dynamics to be computed in parallel with two additional bits of local site data, for outgoing and incoming messengers, regardless of the number of long-range neighbors. The computational scheme is an efficient decomposition of a lattice gas with many neighbors. It is conceptually similar to the idea of virtual intermediate particle momentum exchanges that is well known in particle physics. All 2-body interactions along a particular direction define a spatial partition that is updated in parallel. Random permutation through the partitions is sufficient to recover the necessary isotropy as long as enough momentum exchange directions are used. The algorithm is implemented on the CAM-8 prototype.

Book ChapterDOI
01 Jan 1995
TL;DR: Values from the normative component are used to determine the choices to be made from among the possibilities revealed by science and engineering, but these possibilities—the raw material for the normative analysis—are a function of the idealizations and approximations used.
Abstract: What is the logic of technological choice? An elementary first move in answering this question is to distinguish between normative and engineering components. On this view, values from the normative component are used to determine the choices to be made from among the possibilities revealed by science and engineering. But these possibilities—the raw material, as it were, for the normative analysis—are a function of the idealizations and approximations used. Because nothing can begin to happen in the way of testing or application of theory in the absence of some calculated numbers, scientists and engineers require real, as opposed to in principle only, computability. But real computability must make do with actually available empirical data, auxiliary theories, computational resources, and mathematical methods. Given real world limitations on the availability and power of these necessary components, idealizations and approximations must be used by both scientist and engineer. There is really no choice for either practitioner but to simplify. As we shall see, such simplification causes problems for the reliability of the claims of science and engineering.

Proceedings ArticleDOI
25 Jan 1995
TL;DR: A uniform family of circuits realizing neural networks to approximately solve the maximum 2-satisfiability problem is discussed; the circuit shows good performance, solving problem instances in 20 μs with relative error less than 0.003.
Abstract: In this paper we discuss a uniform family of circuits, realizing neural networks to solve approximately the maximum 2-satisfiability problem. An implementation on FPGA for problem instances of 16 variables and 480 clauses is presented. The circuit shows good performance, solving problem instances in 20 μs with relative error less than 0.003.
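
The paper's contribution is a specific circuit family on FPGA; purely as a software-level analogue (a hedged sketch of the general technique, ours and not the paper's design), a Hopfield-style network for approximate MAX-2-SAT can treat the number of unsatisfied clauses as an energy and flip one unit at a time whenever the energy does not increase:

```python
import random

def max2sat_hopfield(n_vars, clauses, steps=10000, seed=0):
    """Hedged sketch (not the paper's circuit): asynchronous Hopfield-style
    minimization for MAX-2-SAT. A clause is a pair of literals; literal +k
    means x_k and -k means NOT x_k (variables are 1-based)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_vars + 1)]  # x[0] unused

    def lit(l):                     # truth value of a literal
        return x[l] if l > 0 else 1 - x[-l]

    def energy():                   # number of unsatisfied clauses
        return sum(1 for a, b in clauses if not (lit(a) or lit(b)))

    e = energy()
    for _ in range(steps):
        i = rng.randrange(1, n_vars + 1)
        x[i] ^= 1                   # tentatively flip one unit
        e_new = energy()
        if e_new <= e:              # keep non-worsening flips
            e = e_new
        else:
            x[i] ^= 1               # revert
    return x[1:], len(clauses) - e

# Toy instance: (x1 v x2) & (~x1 v x2) & (x1 v ~x2)
assignment, n_satisfied = max2sat_hopfield(2, [(1, 2), (-1, 2), (1, -2)])
```

A hardware realization updates many units in parallel; the sketch is sequential for clarity.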

Book
01 Jan 1995
TL;DR: On the synthesis of strategies in infinite games, finding the maximum with linear error probabilities: a sequential analysis approach and the structure of log-space probabilistic complexity classes.
Abstract: On the synthesis of strategies in infinite games.- Finding the maximum with linear error probabilities: a sequential analysis approach.- Completeness and weak completeness under polynomial-size circuits.- Communication complexity of key agreement on small ranges.- Pseudorandom generators and the frequency of simplicity.- Classes of bounded counting type and their inclusion relations.- Lower bounds for depth-three circuits with equals and mod-gates.- On realizing iterated multiplication by small depth threshold circuits.- A random NP-complete problem for inversion of 2D cellular automata.- On the subword equivalence problem for infinite words.- On the separators on an infinite word generated by a morphism.- Systolic tree ω-languages.- Structural complexity of ω-automata.- Algorithms explained by symmetries.- Generalized scans and tri-diagonal systems.- Two-dimensional pattern matching in linear time and small space.- On-line and dynamic algorithms for shortest path problems.- On compact representations of propositional circumscription.- A set-theoretic translation method for (poly)modal logics.- On the synthesis of discrete controllers for timed systems.- A fully abstract semantics for causality in the π-calculus.- On the sizes of permutation networks and consequences for efficient simulation of hypercube algorithms on bounded-degree networks.- Exploiting storage redundancy to speed up randomized shared memory simulations.- Interval routing schemes.- A packet routing protocol for arbitrary networks.- A family of tag systems for paperfolding sequences.- Growing context-sensitive languages and Church-Rosser languages.- Deterministic generalized automata.- Optimal simulation of automata by neural nets.- Concurrent process equivalences: Some decision problems.- Optimal lower bounds on the multiparty communication complexity.- Simultaneous messages vs. communication.- Coding and strong coding in trace monoids.- On codings of traces.- Finding largest common embeddable subtrees.- The ?t-coloring problem.- Expander properties in random regular graphs with edge faults.- Dynamic analysis of the sizes of relations.- On slender context-free languages.- Partial derivatives of regular expressions and finite automata constructions.- Dependence orders for computations of concurrent automata.- On the undecidability of deadlock detection in families of nets.- On the average running time of odd-even merge sort.- Optimal average case sorting on arrays.- Normal numbers and sources for BPP.- Lower bounds on learning decision lists and trees.- Line segmentation of digital curves in parallel.- Computability of convex sets.- Enumerating extreme points in higher dimensions.- The number of views of piecewise-smooth algebraic objects.- On the structure of log-space probabilistic complexity classes.- Resource-bounded instance complexity.- On the sparse set conjecture for sets with low density.- Beyond P^NP = NEXP.- Malign distributions for average case circuit complexity.- A possible code in the genetic code.

Proceedings ArticleDOI
13 Dec 1995
TL;DR: In this article, it is shown that the simultaneous stabilization question ("When are three linear systems stabilizable by the same controller?") cannot be solved by a semialgebraic set description nor answered by computational machines.
Abstract: We show that the simultaneous stabilization question ("When are three linear systems stabilizable by the same controller?") cannot be solved by a semialgebraic set description nor answered by computational machines.

02 Jan 1995
TL;DR: One-dimensional bilinear cellular automata over Z_p^p are π-universal, i.e. capable of simulating any one-dimensional cellular automaton, and a construction of a computation-universal monotone map is reported that is also capable of simulating the fixed-point behavior of any cellular automaton, discrete neural network, general automata network, and moreover, any continuous self-map of the Cantor set.
Abstract: This thesis is a two-part study of the computational properties of dynamical systems. Part I studies the computability properties of piecewise linear and piecewise monotone maps of the real interval. Part II is concerned with the computability properties of cellular automata mappings. The major new contributions of Part I are: (1) A construction of a piecewise linear map semi-conjugate to a nontrivial pushdown automaton. (2) The construction of a recursive homomorphism between the attractor-basin portrait of a given halting Turing machine and that of an appropriately chosen tent map. (3) A construction of a computation-universal monotone map that is also capable of simulating the fixed-point behavior of any cellular automaton, discrete neural network, general automata network, and moreover, any continuous self-map of the Cantor set. The major new results of Part II are: (1) One-dimensional bilinear cellular automata over Z_p^p are π-universal, i.e. capable of simulating any one-dimensional cellular automaton. (2) A phenomenological classification of quadric cellular automata over Z_m. (3) Constructions of a 4th-degree polynomial over Z_6 for Banks' Computer, a computation-universal cellular automaton, and a 5th-degree polynomial over Z_32 for the Billiard Ball Model, a computation-universal reversible cellular automaton.
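
As a hedged illustration of the objects named in Part II (our notation and conventions, not necessarily the thesis's), one synchronous step of a one-dimensional bilinear cellular automaton over Z_p updates each cell by a bilinear form in its neighborhood:

```python
def bilinear_ca_step(state, coeffs, p):
    """One synchronous step of a 1-D bilinear cellular automaton over Z_p,
    with periodic boundary. coeffs maps offset pairs (j, k) to coefficients
    in Z_p; new cell i is the sum of a * state[i+j] * state[i+k] mod p."""
    n = len(state)
    return [
        sum(a * state[(i + j) % n] * state[(i + k) % n]
            for (j, k), a in coeffs.items()) % p
        for i in range(n)
    ]

# Example over Z_5 with neighborhood {-1, 0, 1}:
next_state = bilinear_ca_step([1, 2, 0, 4, 3],
                              {(-1, 0): 1, (0, 1): 2, (-1, 1): 3}, 5)
```

Here the coefficient table and the periodic boundary are illustrative choices; "bilinear" refers to the update being a sum of degree-two monomials in neighboring cells, reduced mod p.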

Book ChapterDOI
11 Dec 1995
TL;DR: The computability of induction in Logic Programming is discussed, and an induction system (GA-CIGOL) based on a Genetic Algorithm and Inverse Resolution is proposed that induces a hypothesis from a finite set of positive or negative examples.
Abstract: We discuss the computability of induction in Logic Programming. It is impossible to select a hypothesis, a set of Horn clauses, from fewer than several tens of examples. In order to overcome this problem, we try to computerize induction with an optimization method, on the assumption that hypotheses are relative. We discuss and clarify the criteria used in an optimization method for induction. Additionally, we propose an induction system (GA-CIGOL), based on a Genetic Algorithm and Inverse Resolution, that induces a hypothesis from a finite set of positive or negative examples. Furthermore, we evaluate the learning ability of GA-CIGOL and discuss the problems of computerized induction with an optimization method.

Book ChapterDOI
23 Nov 1995
TL;DR: A possible such model, constituting a chaotic dynamical system, is presented: the analog shift map, which, when viewed as a computational model, has super-Turing power and is equivalent to neural networks and the class of analog machines.
Abstract: This paper reasons about the need to seek particular kinds of models of computation that imply stronger computability than the classical models. One possible such model, constituting a chaotic dynamical system, is presented. This model, which we term the analog shift map, when viewed as a computational model has super-Turing power and is equivalent to neural networks and the class of analog machines. This map may be appropriate for describing natural physical phenomena.
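
For context, a hedged paraphrase of the formalism (the chapter's exact definitions may differ): the map acts on dotted bi-infinite sequences by reading the finite window w around the dot, substituting a string G(w) into the sequence, and then shifting by F(w) positions,

```latex
\Phi(s) \;=\; \sigma^{F(w)}\bigl(s \oplus G(w)\bigr),
\qquad w = \text{the finite dotted window of } s
```

With finite substitutions this is the generalized shift, known to be Turing-equivalent; the analog shift additionally lets the substituted string be infinite, which is where the super-Turing power comes from.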

Journal ArticleDOI
TL;DR: The author's forthcoming book proves central results in computability and complexity theory from a programmer-oriented perspective, among them that cons-free programs augmented with recursion can solve all and only PTIME problems.
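
The book in question is presumably Jones's Computability and Complexity from a Programming Perspective, whose programs are written in small WHILE-style languages rather than Python. Purely as a hedged illustration of the cons-free discipline, the sketch below inspects its input only through head/tail (here, an index naming a suffix) and never constructs data, so all recursive calls range over the polynomially many suffixes of the input; memoizing over them is the intuition behind the PTIME bound.

```python
from functools import lru_cache

def even_length(bits):
    """A 'cons-free' toy program: the input is only destructed, never
    rebuilt, so every argument value names a suffix bits[i:] of the input.
    With memoization there are at most len(bits)+1 distinct calls."""
    n = len(bits)

    @lru_cache(maxsize=None)
    def even(i):                 # i names the suffix bits[i:]
        if i == n:               # empty suffix has even length
            return True
        return not even(i + 1)   # tail step; no constructors used

    return even(0)

assert even_length([1, 0, 1, 1]) is True
assert even_length([1, 0, 1]) is False
```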