
Showing papers on "Turing machine" published in 2021


Journal ArticleDOI
TL;DR: It is shown that there exists no Turing machine that takes the physical description of the communication channel as an input and solves a non-trivial classification task, and that it is therefore impossible to algorithmically detect denial-of-service (DoS) attacks on the transmission.
Abstract: For communication systems there is a recent trend towards shifting functionalities from the physical layer to higher layers by enabling software-focused solutions. Having obtained a (physical layer-based) description of the communication channel, such approaches exploit this knowledge to enable various services by subsequently processing it on higher layers. For this it is a crucial task to first find out in which state the underlying communication channel is. This paper develops a framework based on Turing machines and studies whether or not it is in principle possible to algorithmically solve such classification tasks, i.e., to decide in which state the communication system is. Turing machines have no limitations on computational complexity, computing capacity, or storage, and can simulate any given algorithm; they are therewith a simple but very powerful model of computation that characterizes the fundamental performance limits of today's digital computers. It is shown that there exists no Turing machine that takes the physical description of the communication channel as an input and solves a non-trivial classification task. Subsequently, this general result is used to study communication under adversarial attacks, and it is shown that it is impossible to algorithmically detect denial-of-service (DoS) attacks on the transmission. Jamming attacks on ACK/NACK feedback cannot be detected either and, in addition, ACK/NACK feedback is shown to be useless for the detection of DoS attacks on the actual message transmission. Further applications are discussed, including DoS attacks on the post-Shannon task of identification, and on physical layer security and resilience by design.
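
To make the impossibility result concrete, here is a minimal reduction sketch in Python (our illustration, not the paper's formalism; all names are hypothetical). It shows the standard pattern behind such proofs: any total algorithm `classify` deciding a non-trivial property of computable channel descriptions would yield a decider for the halting problem, which cannot exist.

```python
# Hypothetical sketch: a "machine" is modeled as a Python generator that
# yields once per computation step and returns when it halts.
from itertools import islice

def halts_within(program, n):
    """True iff `program` halts within n steps."""
    return len(list(islice(program(), n))) < n

def make_channel(program, good_channel, bad_channel):
    """A computable channel description whose n-th approximation looks like
    `bad_channel` until `program` halts, and like `good_channel` afterwards."""
    def channel(n):
        return good_channel(n) if halts_within(program, n) else bad_channel(n)
    return channel

def decide_halting(program, classify, good_channel, bad_channel):
    # If a total `classify` deciding a non-trivial channel property existed,
    # this function would decide the halting problem -- a contradiction.
    return classify(make_channel(program, good_channel, bad_channel))
```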

16 citations


Journal ArticleDOI
TL;DR: This work provides a toolbox for carrying out fundamental tasks on a given arrangement of particles, using the arrangement itself as a storage device, similar to a higher-dimensional Turing machine with geometric properties.
Abstract: We contribute results for a set of fundamental problems in the context of programmable matter by presenting algorithmic methods for evaluating and manipulating a collective of particles by a finite automaton that can neither store significant amounts of data nor perform complex computations, and is limited to a handful of possible physical operations. We provide a toolbox for carrying out fundamental tasks on a given arrangement of particles, using the arrangement itself as a storage device, similar to a higher-dimensional Turing machine with geometric properties. Specific results include time- and space-efficient procedures for bounding, counting, copying, reflecting, rotating, or scaling a given complex shape.

14 citations


Journal ArticleDOI
TL;DR: This research presents a quantum computing framework using the circuit model for estimating algorithmic information metrics; it is the first time experimental algorithmic information theory has been implemented using quantum computation.
Abstract: Inferring algorithmic structure in data is essential for discovering causal generative models. In this research, we present a quantum computing framework using the circuit model for estimating algorithmic information metrics. The canonical computation model of the Turing machine is restricted in time and space resources to make the target metrics computable under realistic assumptions. The universal prior distribution for the automata is obtained as a quantum superposition, which is further conditioned to estimate the metrics. Specific cases are explored where the quantum implementation offers a polynomial advantage, in contrast to the exhaustive enumeration needed in the corresponding classical case. The unstructured output data and the computational irreducibility of Turing machines make this algorithm impossible to approximate using heuristics. Thus, exploring the space of program-output relations is one of the most promising problems for demonstrating quantum supremacy using Grover search that cannot be dequantized. Experimental use cases for quantum acceleration are developed for self-replicating programs and the algorithmic complexity of short strings. With quantum computing hardware rapidly attaining technological maturity, we discuss how this framework will have significant advantages for various genomics applications in meta-biology, phylogenetic tree analysis, protein-protein interaction mapping, and synthetic biology. This is the first time experimental algorithmic information theory has been implemented using quantum computation. Our implementation on the Qiskit quantum programming platform is copyleft and publicly available on GitHub.
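
As a flavor of the approach, the following minimal Qiskit sketch (our own illustration, not the circuit from the paper's repository) prepares the starting point described in the abstract: a uniform superposition over all n-bit machine encodings, from which program-output relations can then be explored in superposition rather than by exhaustive enumeration.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 4                         # bits per candidate machine encoding (illustrative)
qc = QuantumCircuit(n)
qc.h(range(n))                # Hadamard on every qubit: all 2^n encodings at once

state = Statevector.from_instruction(qc)
print(state.probabilities())  # uniform prior over the 2^n encodings
```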

12 citations


Proceedings ArticleDOI
29 Jun 2021
TL;DR: In this article, it is shown that strong call-by-value evaluation is reasonable for time, via a new abstract machine realizing useful sharing and having a linear overhead; the machine uses a new mix of sharing techniques, adding on top of useful sharing a form of implosive sharing, which on some terms brings an exponential speed-up.
Abstract: Whether the number of β-steps in the λ-calculus can be taken as a reasonable time cost model (that is, polynomially related to the one of Turing machines) is a delicate problem, which depends on the notion of evaluation strategy. Since the nineties, it has been known that weak (that is, out of abstractions) call-by-value evaluation is a reasonable strategy, while Lévy's optimal parallel strategy, which is strong (that is, it reduces everywhere), is not. The strong case turned out to be subtler than the weak one. In 2014, Accattoli and Dal Lago showed that strong call-by-name is reasonable, by introducing a new form of useful sharing and, later, an abstract machine with an overhead quadratic in the number of β-steps. Here we show that strong call-by-value evaluation is also reasonable for time, via a new abstract machine realizing useful sharing and having a linear overhead. Moreover, our machine uses a new mix of sharing techniques, adding on top of useful sharing a form of implosive sharing, which on some terms brings an exponential speed-up. We give examples of families that the machine executes in time logarithmic in the number of β-steps.
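
For readers unfamiliar with the cost model at stake, the sketch below (ours, not the paper's machine) counts β-steps in a naive weak call-by-value evaluator. The paper's contribution is to match this count with only linear overhead via useful and implosive sharing; the naive evaluator shares nothing and can blow up in size.

```python
from dataclasses import dataclass

@dataclass
class Var: name: str
@dataclass
class Lam: param: str; body: object
@dataclass
class App: fun: object; arg: object

def subst(t, x, v):
    # Naive substitution; safe here because we only evaluate closed terms,
    # so every substituted value is itself closed (no variable capture).
    if isinstance(t, Var):
        return v if t.name == x else t
    if isinstance(t, Lam):
        return t if t.param == x else Lam(t.param, subst(t.body, x, v))
    return App(subst(t.fun, x, v), subst(t.arg, x, v))

def eval_cbv(t, steps=0):
    """Weak call-by-value evaluation; returns (value, number of beta-steps)."""
    if isinstance(t, App):
        f, steps = eval_cbv(t.fun, steps)
        a, steps = eval_cbv(t.arg, steps)
        return eval_cbv(subst(f.body, f.param, a), steps + 1)  # one beta-step
    return t, steps               # lambdas (and variables) are values

# (\x. x) ((\y. y) (\z. z)) evaluates in exactly two beta-steps.
term = App(Lam("x", Var("x")), App(Lam("y", Var("y")), Lam("z", Var("z"))))
print(eval_cbv(term)[1])          # 2
```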

10 citations


Journal ArticleDOI
TL;DR: In this paper, the authors employ the concept of Turing computability, a theoretical model that describes the fundamental limits of what can be solved algorithmically on digital hardware, and ask whether, for a given computable bandlimited signal, it is possible to compute its bandwidth on a Turing machine.
Abstract: The bandwidth of a bandlimited signal is an important number that is relevant in many applications and concepts. For example, according to the Shannon sampling theorem, the bandwidth determines the minimum sampling rate that is required for a perfect reconstruction. In this paper we consider bandlimited signals with finite energy and bandlimited signals that are absolutely integrable, and analyze whether the bandwidth of these signals can be determined algorithmically. We employ the concept of Turing computability, a theoretical model that describes the fundamental limits of what can be solved algorithmically on digital hardware, and ask if, for a given computable bandlimited signal, it is possible to compute its bandwidth on a Turing machine. We show that this is not possible in general, because there exist computable bandlimited signals for which the bandwidth is a non-computable real number. Even the weaker question of whether the bandwidth of a given signal is smaller than a predefined value cannot always be answered algorithmically. Further, we prove that in the case where the bandwidth is not computable, it is even impossible to algorithmically determine a sequence of upper bounds that converges to the actual bandwidth of the signal. As a positive result, we show that the set of signals whose bandwidth is larger than some given value is semi-decidable.
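
The positive result has a simple numerical analogue (our illustration; the paper works in the formal Turing-computability framework, not with sampled signals): spectral energy found above a cutoff λ is a one-sided witness that the bandwidth exceeds λ, whereas failing to find such energy in a finite approximation proves nothing. This asymmetry is exactly what semi-decidability captures.

```python
import numpy as np

def witness_bandwidth_above(signal, sample_rate, lam, tol=1e-6):
    """One-sided check: True is a proof of frequency content above `lam`;
    False only means no witness was found at this resolution."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return bool(np.any(np.abs(spectrum[freqs > lam]) > tol))

t = np.arange(0, 1, 1 / 1000)                   # 1 s sampled at 1 kHz
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
print(witness_bandwidth_above(x, 1000, 100.0))  # True: the 120 Hz component is a witness
print(witness_bandwidth_above(x, 1000, 200.0))  # False: no witness (proves nothing)
```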

8 citations


Journal ArticleDOI
TL;DR: This work considers the possibility of a real-valued Turing machine model, the potential computational and algorithmic opportunities of these techniques, the implications for implementation applications, and the computational complexity space arising from this model.
Abstract: Physical computing unifies real-valued computing, including analog, neuromorphic, optical, and quantum computing. Many real-valued techniques show improvements in energy efficiency, enable smaller area per computation, and potentially improve algorithm scaling. In contrast to digital computation's deep theoretical grounding, these physical computing techniques suffer from not having a strong computational theory to guide application development. We consider the possibility of a real-valued Turing machine model, the potential computational and algorithmic opportunities of these techniques, the implications for implementation applications, and the computational complexity space arising from this model.

8 citations


Proceedings ArticleDOI
23 Aug 2021
TL;DR: The authors propose a Developmental Methodology that trains only a single but optimal network for each application lifetime, under a new standard for performance evaluation in machine learning: developmental errors, reported for all networks trained in a project, including those that the selection of the luckiest network depends on.
Abstract: This work is a theory of Post-Selection practices, which have rarely been studied. Post-Selection means selecting systems after the systems have been trained. Post-Selections Using Validation Sets (PSUVS) are wasteful, and Post-Selections Using Test Sets (PSUTS) are both wasteful and unethical. Both result in systems whose generalization powers are weak. PSUTS falls into two kinds: machine PSUTS and human PSUTS. The connectionist AI school received criticism for its "scruffiness" due to a huge number of network parameters, and now for the machine PSUTS; but the seemingly "clean" symbolic AI school seems more brittle because of its human PSUTS. This paper analyzes why, in deep learning, error-backprop methods with random initial weights suffer from severe local minima, why PSUTS violates well-established research ethics, and why publications that used PSUTS should have transparently reported it. This paper proposes a Developmental Methodology that trains only a single but optimal network for each application lifetime, using a new standard for performance evaluation in machine learning, called developmental errors, reported for all networks trained in a project that the selection of the luckiest network depends on, along with Three Learning Conditions: (1) framework restrictions, (2) training experience, and (3) computational resources. This paper also discusses how the brain-inspired Developmental Networks (DNs) avoid PSUTS by reporting developmental errors, and their maximum likelihood (ML) optimality under the Three Learning Conditions. DNs are not "scruffy" because they are ML estimators of the observed Emergent Turing Machines at each time during their "lives". This implies the best performance given a limited amount of overall available computational resources for a project.
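
A minimal sketch of the reporting standard advocated above (hypothetical code; the names and the toy training function are ours): report the error distribution of every network trained in the project rather than post-selecting the luckiest one.

```python
import random
import statistics

def report_developmental_errors(train_fn, n_networks=20):
    """Train every network and report the full error distribution,
    instead of silently reporting only the luckiest network."""
    errors = [train_fn(seed) for seed in range(n_networks)]
    return {"mean": statistics.mean(errors),
            "stdev": statistics.stdev(errors),
            "worst": max(errors),
            "luckiest (never report alone)": min(errors)}

# Toy stand-in for training one network from a random initialization.
def toy_train(seed):
    return random.Random(seed).uniform(0.05, 0.30)  # pretend validation error

print(report_developmental_errors(toy_train))
```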

8 citations


Journal ArticleDOI
TL;DR: A new automatic engagement recognition method based on the Neural Turing Machine is proposed and applied to identify students' engagement in learning online courses; it shows improved performance over state-of-the-art methods on the DAiSEE dataset.
Abstract: With the continuous and rapid growth of online courses, online learners' engagement recognition has become a novel research topic in the fields of computer vision and pattern recognition. While a few attempts at automatic engagement recognition have been studied in the literature, learning a robust engagement measure is still a challenging task. To address this, we propose a new automatic engagement recognition method based on the Neural Turing Machine. In particular, we first extract students' eye-gaze features, facial action unit features, head pose features, and body pose features, then combine these multimodal features into the final feature for our recognition task. Moreover, we propose an engagement recognition framework based on the idea of the Neural Turing Machine to learn the weight of each short video feature. The features, fused with these weights, are then applied to identify students' engagement in learning online courses. Empirically, we show improved performance over state-of-the-art methods for automatic engagement recognition on the DAiSEE dataset.
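
A hypothetical PyTorch sketch of the weighted multimodal fusion step described above (feature dimensions and all names are invented; in the paper the per-feature weights come from a Neural Turing Machine rather than the plain linear scorer used here):

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, dims, hidden=128):        # dims: size of each modality's feature
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
        self.score = nn.Linear(hidden, 1)        # one scalar weight per modality

    def forward(self, feats):                    # feats: list of (batch, d_i) tensors
        h = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)
        w = torch.softmax(self.score(h), dim=1)  # (batch, n_modalities, 1)
        return (w * h).sum(dim=1)                # fused (batch, hidden) representation

# Eye-gaze, facial action unit, head pose, and body pose features (sizes made up).
fusion = WeightedFusion([32, 17, 6, 50])
feats = [torch.randn(8, d) for d in (32, 17, 6, 50)]
print(fusion(feats).shape)                       # torch.Size([8, 128])
```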

7 citations


Book ChapterDOI
06 Oct 2021
TL;DR: In this paper, an attribute-based functional encryption (FE) scheme for the inner product functionality is proposed, in which secret keys provide access control via polynomial-size bounded-depth circuits; the construction takes inspiration from the attribute-based encryption of Boneh et al. (Eurocrypt 2014).
Abstract: The notion of functional encryption (FE) was proposed as a generalization of plain public-key encryption to enable a much more fine-grained handling of encrypted data, with advanced applications such as cloud computing, multi-party computation, and obfuscating circuits or Turing machines. While FE for general circuits or Turing machines gives a natural instantiation of many cryptographic primitives, existing FE schemes are based on indistinguishability obfuscation or multilinear maps, which either rely on new computational hardness assumptions or are only heuristically claimed to be secure. In this work, we present new techniques directly yielding FE for the inner product functionality, where secret keys provide access control via polynomial-size bounded-depth circuits. More specifically, we encrypt messages with respect to attributes and embed policy circuits into secret keys so that a restricted class of receivers is able to learn certain properties about the messages. Recently, many inner product FE schemes have been proposed; however, none of them uses a general circuit as an access structure. Our main contribution is designing the first construction of an attribute-based FE scheme in the key-policy setting for inner products from the well-studied Learning With Errors (\(\mathsf {LWE}\)) assumption. Our construction takes inspiration from the attribute-based encryption of Boneh et al. from Eurocrypt 2014 and the inner product functional encryption of Agrawal et al. from Crypto 2016. The scheme is proved secure in a stronger setting where the adversary is allowed to ask for secret keys that can decrypt the challenge ciphertext. Doing so requires a careful setting of parameters for handling the noise in ciphertexts to enable correct decryption. Another main advantage of our scheme is that the size of ciphertexts and secret keys depends on the depth of the circuits rather than their size. Additionally, we extend our construction to a much-desired multi-input variant where secret keys are associated with multiple policies subject to different encryption slots. This enhances the applicability of the scheme with finer access control.

7 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that all physically realizable computing automata, from finite automata such as logic gates to linear bounded automata (LBA), can be represented/assembled/built in the laboratory using oscillatory chemical reactions, and that word recognition by the Belousov-Zhabotinsky Turing machine is equivalent to extremal entropy production by the automaton.
Abstract: Computing with molecules is at the center of complex natural phenomena, where the information contained in ordered sequences of molecules is used to implement functionalities of synthesized materials or to interpret the environment, as in biology. This uses large macromolecules and the hindsight of billions of years of natural evolution. But can one implement computation with small molecules? If so, at what levels in the hierarchy of computing complexity? We review here recent work in this area establishing that all physically realizable computing automata, from Finite Automata (FA), such as logic gates, to the Linear Bounded Automaton (LBA, a Turing machine with a finite tape), can be represented/assembled/built in the laboratory using oscillatory chemical reactions. We examine and discuss in depth the fundamental issues involved in this form of computation exclusively done by molecules. We illustrate their implementation with the example of a programmable finite-tape Turing machine which, using the Belousov-Zhabotinsky oscillatory chemistry, is capable of recognizing words in a context-sensitive language and rejecting words outside the language. We offer a new interpretation of the recognition of a sequence of chemicals representing words in the machine's language as an illustration of the "Maximum Entropy Production Principle", concluding that word recognition by the Belousov-Zhabotinsky Turing machine is equivalent to extremal entropy production by the automaton. We end by offering some suggestions to apply the above to problems in computing, polymerization chemistry, and other fields of science.

7 citations


Posted Content
TL;DR: In this article, a modified version of the Global Workspace Theory (GWT) of consciousness is formalized in the form of a Conscious Turing Machine (CTM), also called a conscious AI.
Abstract: The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. In the spirit of Alan Turing's simple yet powerful definition of a computer, the Turing Machine (TM), and the perspective of computational complexity theory, we formalize a modified version of the Global Workspace Theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, and others. We are not looking for a complex model of the brain nor of cognition, but for a simple computational model of (the admittedly complex concept of) consciousness. We do this by defining the Conscious Turing Machine (CTM), also called a conscious AI, and then we define consciousness and related notions in the CTM. While these are only mathematical (TCS) definitions, we suggest why the CTM has the feeling of consciousness. The TCS perspective provides a simple formal framework to employ tools from computational complexity theory and machine learning to help us understand consciousness and related concepts. Previously we explored high-level explanations for the feelings of pain and pleasure in the CTM. Here we consider three examples related to vision (blindsight, inattentional blindness, and change blindness), followed by discussions of dreams, free will, and altered states of consciousness.

Proceedings ArticleDOI
12 Jul 2021
TL;DR: In this article, the authors developed algorithms based on Turing machines and also Blum-Shub-Smale (BSS) machines, where the latter can process and store arbitrary real numbers.
Abstract: Communication systems are subject to adversarial attacks since malevolent adversaries might harm and disrupt legitimate transmissions intentionally. Of particular interest in this paper are so-called denial-of-service (DoS) attacks, in which the jammer completely prevents any transmission. Arbitrarily varying classical-quantum channels, which provide a suitable model to capture the jamming attacks of interest, are studied. Algorithmic detection frameworks are developed based on Turing machines and also on Blum-Shub-Smale (BSS) machines, where the latter can process and store arbitrary real numbers. It is shown that Turing machines are not capable of detecting DoS attacks. However, BSS machines are capable thereof, implying that real number signal processing enables the algorithmic detection of DoS attacks.

Journal ArticleDOI
TL;DR: In this article, the authors use slime mold as a model of minimal cognition and compare it to deep-learning video game bots, which some claim have evolved beyond their merely quantitative algorithms; they find that these discrete Turing machine bots are not able to make the productive, yet unanticipated, "errors" necessary for biological learning.
Abstract: Although machines may be good at mimicking, they are not currently able, as organisms are, to act creatively. We offer an understanding of the emergent qualities of biological sign processing in terms of generalization, association, and encryption. We use slime mold as a model of minimal cognition and compare it to deep-learning video game bots, which some claim have evolved beyond their merely quantitative algorithms. We find that these discrete Turing machine bots are not able to make productive, yet unanticipated, "errors" (necessary for biological learning) which, based on the physicality of signs, their relatively similar shapes, and their relative physical positions spatially and temporally, lead to emergent effects and make learning and evolution possible. In organisms, stochastic resonance at the local level can be leveraged for self-organization at the global level. We contrast all this with the symbolic processing of today's machine learning, whereby each logic node and memory state is discrete. Computer codes are produced by external operators, whereas biological symbols are evolved through an internal encryption process.

Posted Content
TL;DR: In this article, the authors prove that neuromorphic computing is Turing-complete and therefore capable of general-purpose computing; they devise neuromorphic circuits for computing all the μ-recursive functions and operators that can be computed using a Turing machine.
Abstract: Neuromorphic computing is a non-von Neumann computing paradigm that performs computation by emulating the human brain. Neuromorphic systems are extremely energy-efficient and known to consume thousands of times less power than CPUs and GPUs. They have the potential to drive critical use cases such as autonomous vehicles, edge computing, and the internet of things in the future. For this reason, they are expected to be an indispensable part of the future computing landscape. Neuromorphic systems are mainly used for spike-based machine learning applications, although there are some non-machine-learning applications in graph theory, differential equations, and spike-based simulations. These applications suggest that neuromorphic computing might be capable of general-purpose computing. However, the general-purpose computability of neuromorphic computing has not been established yet. In this work, we prove that neuromorphic computing is Turing-complete and therefore capable of general-purpose computing. Specifically, we present a model of neuromorphic computing with just two neuron parameters (threshold and leak) and two synaptic parameters (weight and delay). We devise neuromorphic circuits for computing all the μ-recursive functions (i.e., the constant, successor, and projection functions) and all the μ-recursive operators (i.e., the composition, primitive recursion, and minimization operators). Given that the μ-recursive functions and operators are precisely the ones that can be computed using a Turing machine, this work establishes the Turing-completeness of neuromorphic computing.
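
To make the model concrete, here is a toy discrete-time simulation (our sketch, not one of the paper's circuits) of a neuron with exactly the four parameters named above: threshold and leak for the neuron, weight and delay for each synapse. Wired as below it acts as a coincidence detector; the paper composes such neurons into circuits for all the μ-recursive functions and operators.

```python
def simulate(spike_trains, weights, delays, threshold, leak, t_max):
    """Discrete-time leaky integrate-and-fire neuron; returns its spike times."""
    potential, out = 0.0, []
    for t in range(t_max):
        potential = max(potential - leak, 0.0)      # leak at every tick
        for train, w, d in zip(spike_trains, weights, delays):
            if t - d in train:                      # delayed synaptic input
                potential += w
        if potential >= threshold:                  # fire and reset
            out.append(t)
            potential = 0.0
    return out

# Two inputs, each too weak alone; only coincident spikes cross the threshold.
print(simulate([{2, 5}, {5, 8}], weights=[1, 1], delays=[0, 0],
               threshold=2, leak=1, t_max=10))      # [5]
```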

Proceedings ArticleDOI
14 Jun 2021
TL;DR: In this paper, it is shown that there exists no Turing machine that takes the physical description of the communication channel as an input and solves a non-trivial classification task.
Abstract: For communication systems there is a recent trend towards shifting functionalities from the physical layer to higher layers by enabling software-focused solutions. Having obtained a (physical layer-based) description of the communication channel, such approaches exploit this knowledge to enable various services by subsequently processing it on higher layers. For this it is a crucial task to first find out in which state the underlying communication channel is. This paper develops a framework based on Turing machines and studies whether or not it is in principle possible to algorithmically decide in which state the communication system is. It is shown that there exists no Turing machine that takes the physical description of the communication channel as an input and solves a non-trivial classification task. Subsequently, this general result is used to study communication under adversarial attacks, and it is shown that it is impossible to algorithmically detect denial-of-service (DoS) attacks on the transmission. Jamming attacks on ACK/NACK feedback cannot be detected either and, in addition, ACK/NACK feedback is shown to be useless for the detection of DoS attacks on the actual message transmission.

Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this article, it is shown that there is no universal Turing machine that takes the channel from the class of interest as an input and outputs optimal codes for a whole class of channels.
Abstract: Proving a capacity result usually involves two parts: achievability and a converse, which establish matching lower and upper bounds on the capacity. For achievability, usually only the existence of good (capacity-achieving) codes is shown. Although the existence of such optimal codes is known, constructing capacity-achieving codes had been open for a long time. Recently, significant progress has been made and optimal code constructions have been found, including, for example, polar codes. A crucial observation is that all these constructions are done for a fixed and given channel, and this paper addresses the question of whether or not it is possible to find universal algorithms that can construct optimal codes for a whole class of channels. For this purpose, the concept of Turing machines is used, which provides the fundamental performance limits of digital computers. It is shown that there exists no universal Turing machine that takes the channel from the class of interest as an input and outputs optimal codes. Finally, implications for channel-aware transmission schemes are discussed.

Book
15 Jan 2021
TL;DR: The Turing machine theory for some spinal cord and brain conditions: a toxicological antidotic depurative approach.
Abstract: The aim of this work is to produce a general theory for a new depurative strategy to be evaluated for reducing or delaying some spinal cord and brain degenerative and inflammatory chronic diseases or acute traumatic conditions. An informatics approach is used in order to set up the problem and the process correctly. The scope of this project is to submit to researchers a new therapeutic strategy (from a depurative-toxicological-pharmacological standpoint) for this complex kind of disease. Turing machine theory gives us a method to translate the need for a strategy into a practical working hypothesis. A global conceptual map can help in this field.

Proceedings ArticleDOI
29 Jun 2021
TL;DR: In this paper, the authors introduce a new intersection type system to measure the space consumption of the IAM on the typed term, which is connected to a further structural modification, turning multisets into trees.
Abstract: The space complexity of functional programs is not well understood. In particular, traditional implementation techniques are tailored to time efficiency, and space efficiency induces time inefficiencies, as it prefers re-computing to saving. Girard's geometry of interaction underlies an alternative approach based on the interaction abstract machine (IAM), claimed in the literature to be space efficient. It has also been conjectured to provide a reasonable notion of space for the λ-calculus, but such an important result seems to be elusive. In this paper we introduce a new intersection type system precisely measuring the space consumption of the IAM on the typed term. Intersection types have been repeatedly used to measure time, which they achieve by dropping idempotency, turning intersections into multisets. Here we show that the space consumption of the IAM is connected to a further structural modification, turning multisets into trees. Tree intersection types lead to a finer understanding of some space complexity results from the literature. They also shed new light on the conjecture about reasonable space: we show that the usual way of encoding Turing machines into the λ-calculus cannot be used to prove that the space of the IAM is a reasonable cost model.

Journal ArticleDOI
TL;DR: This work refines the recently introduced model of M systems (for morphogenetic systems), which leverages certain constructs in membrane computing and DNA self-assembly, and provides quantitative evidence of certain macro-properties characteristic of living organisms in a simple cell-like M system model.

Journal ArticleDOI
TL;DR: This work demonstrates that the time interval is a non-trivial design parameter, whose choice should be made with great care in the optimization of the speed, robustness, and energy efficiency of chemical automata computations.
Abstract: Chemical reactions are powerful molecular recognition machines. This power has been recently harnessed to build actual instances of each class of experimentally realizable computing automata, using exclusively small-molecule chemistry (i.e. without requiring biomolecules). The most powerful of them, a programmable Turing machine, uses the Belousov–Zhabotinsky oscillatory chemistry, and accepts/rejects input sequences through a dual oscillatory and thermodynamic output signature. The time interval between the aliquots representing each letter of the input is the parameter that determines the time it takes to run the computation. Here, we investigate this critical performance parameter, and its effect not only on the computation speed, but also on the robustness of the accept/reject oscillatory and thermodynamic criteria. Our work demonstrates that the time interval is a non-trivial design parameter, whose choice should be made with great care. The guidelines we provide can be used in the optimization of the speed, robustness, and energy efficiency of chemical automata computations.

Posted Content
TL;DR: This article analyzes two complexity lower-bound proofs: Ben-Or's proof of the minimal height of algebraic computation trees deciding certain problems, and Mulmuley's proof that restricted Parallel Random Access Machines (PRAMs) over integers cannot decide P-complete problems efficiently.
Abstract: This paper presents a new semantic method for proving lower bounds in computational complexity. We use it to prove that maxflow, a Ptime-complete problem, is not computable in polylogarithmic time on parallel random access machines (PRAMs) working with integers, showing that the class NC over the integers, the complexity class defined by such machines, is different from Ptime, the standard class of polynomial time computable problems (on, say, a Turing machine). On top of showing this new separation result, we show our method captures previous lower bounds results from the literature: Steele and Yao's lower bounds for algebraic decision trees, Ben-Or's lower bounds for algebraic computation trees, Cucker's proof that NC over the reals is not equal to Ptime over the reals, and Mulmuley's lower bounds for "PRAMs without bit operations".

Book ChapterDOI
05 Jan 2021
TL;DR: A simple Oritatami system which intrinsically simulates arbitrary 1D cellular automata in a readable way and in time linear in space and time of the simulated automaton is proposed.
Abstract: The Oritatami model was introduced by Geary et al. (2016) to study the computational potential of RNA cotranscriptional folding as first shown in wet-lab experiments by Geary et al. (Science 2014). In the Oritatami model, a molecule grows component by component (named beads) into the triangular grid and folds as it grows. More precisely, the δ last nascent beads are free to move and adopt the positions that maximize the number of bonds with the current folded structure. Geary et al. (2018) proved that the Oritatami model is capable of efficient Turing universal computation using a complicated construction that simulates Turing machines via tag systems. We propose here a simple Oritatami system which intrinsically simulates arbitrary 1D cellular automata. Being intrinsic, our simulation emulates the behavior of cellular automata in a readable way and in time linear in space and time of the simulated automaton. The Oritatami model has proven to be a fruitful framework to study molecular reconfigurability. Our construction relies on the development of new mechanisms which are simple enough that we believe that some simplification of them may be implemented in the wet lab. An implementation of our construction can be downloaded for testing.
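
For context, the machines being simulated are easy to state in code. The sketch below (ours) runs an elementary 1D cellular automaton, Rule 110, itself Turing universal; the Oritatami construction emulates this kind of local update rule, step for step, within the folding geometry.

```python
def step(cells, rule=110):
    """One synchronous update of an elementary CA on a ring of cells."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 30 + [1] + [0] * 30          # a single seed cell
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```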

Journal ArticleDOI
TL;DR: In this article, the authors study the capabilities of probabilistic finite-state machines that act as verifiers for certificates of language membership for input strings, in the regime where the verifiers are restricted to toss some fixed nonzero number of coins regardless of the input size.
Abstract: We study the capabilities of probabilistic finite-state machines that act as verifiers for certificates of language membership for input strings, in the regime where the verifiers are restricted to toss some fixed nonzero number of coins regardless of the input size. Say and Yakaryilmaz showed that the class of languages that can be verified by these machines within an error bound strictly less than 1/2 is precisely NL, but their construction yields verifiers with error bounds very close to 1/2 for most languages in that class when the definition of "error" is strengthened to include looping forever without giving a response. We characterize a subset of NL for which verification with arbitrarily low error is possible by these extremely weak machines. It turns out that, for any ε > 0, one can construct a constant-coin, constant-space verifier operating within error ε for every language that is recognizable by a linear-time multi-head nondeterministic finite automaton (2nfa(k)). We discuss why it is difficult to generalize this method to all of NL, and give a reasonably tight way to relate the power of linear-time 2nfa(k)'s to simultaneous time-space complexity classes defined in terms of Turing machines.

Journal ArticleDOI
TL;DR: A novel cryptosystem using a contextual array splicing system for DNA-sequenced input data is proposed and proved to be Turing computable, in the domain of the Medical Internet of Things (MIoT).

Journal ArticleDOI
TL;DR: In this article, it was shown that every non-trivial linear time-invariant (LTI) system of the first order shows a complexity blowup if it is simulated on a digital computer.
Abstract: This paper proves that every non-trivial, linear time-invariant (LTI) system of the first order shows a complexity blowup if it is simulated on a digital computer. This means that there exists a low-complexity input signal, which can be generated on a Turing machine in polynomial time, but such that the output signal of the LTI system has high complexity in the sense that the computation time for determining an approximation up to $n$ significant digits grows faster than any polynomial in $n$. Moreover, this input signal can easily and explicitly be generated from the given system parameters by a Turing machine. It is also shown that standard techniques for simulating higher-order LTI systems with real poles show the same complexity blowup. Finally, it is shown that a similar complexity blowup occurs for the calculation of Fourier series approximations and Fourier transforms.
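
For reference, a first-order LTI system of the kind discussed is conventionally written as follows (the standard textbook form, not necessarily the paper's exact parametrization); the blowup concerns computing such an output $y(t)$ to $n$ significant digits for a suitably chosen computable input $x$:

```latex
\dot{y}(t) = a\,y(t) + x(t), \qquad
y(t) = e^{at}\,y(0) + \int_0^t e^{a(t-\tau)}\,x(\tau)\,\mathrm{d}\tau .
```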

Journal ArticleDOI
19 Jul 2021-Entropy
TL;DR: The Gaia hypothesis as mentioned in this paper states that the Earth is a complex system because it instantiates life and therefore an autopoietic, metabolic-repair (M,R) organization at a planetary scale.
Abstract: Current physics commonly qualifies the Earth system as 'complex' because it includes numerous different processes operating over a large range of spatial scales, often modelled as exhibiting non-linear chaotic response dynamics and power scaling laws. This characterization is based on the fundamental assumption that the Earth's complexity could, in principle, be modelled by (surrogated by) a numerical algorithm if enough computing power were granted. Yet, similar numerical algorithms also surrogate different systems having the same processes and dynamics, such as Mars or Jupiter, although these are qualitatively different from the Earth system. Here, we argue that understanding the Earth as a complex system requires a consideration of the Gaia hypothesis: the Earth is a complex system because it instantiates life, and therefore an autopoietic, metabolic-repair (M,R) organization, at a planetary scale. This implies that the Earth's complexity has formal equivalence to a self-referential system that is inherently non-algorithmic and, therefore, cannot be surrogated and simulated in a Turing machine. We discuss the consequences of this, with reference to in-silico climate models, tipping points, planetary boundaries, and planetary feedback loops as units of adaptive evolution and selection.

Posted Content
TL;DR: In this article, the authors prove the existence of stationary solutions of the Euler equations in Euclidean space, of Beltrami type, that can simulate a universal Turing machine.
Abstract: In this article, we pursue our investigation of the connections between the theory of computation and hydrodynamics. We prove the existence of stationary solutions of the Euler equations in Euclidean space, of Beltrami type, that can simulate a universal Turing machine. In particular, these solutions possess undecidable trajectories. Heretofore, the known Turing complete constructions of steady Euler flows in dimension 3 or higher were not associated to a prescribed metric. Our solutions do not have finite energy, and their construction makes crucial use of the non-compactness of $\mathbb R^3$, however they can be employed to show that an arbitrary tape-bounded Turing machine can be robustly simulated by a Beltrami flow on $\mathbb T^3$ (with the standard flat metric). This shows that there exist steady solutions to the Euler equations on the flat torus exhibiting dynamical phenomena of (robust) arbitrarily high computational complexity. We also quantify the energetic cost for a Beltrami field on $\mathbb T^3$ to simulate a tape-bounded Turing machine, thus providing additional support for the space-bounded Church-Turing thesis. Another implication of our construction is that a Gaussian random Beltrami field on Euclidean space exhibits arbitrarily high computational complexity with probability $1$. Finally, our proof also yields Turing complete flows and maps on $\mathbb{S}^2$ with zero topological entropy, thus disclosing a certain degree of independence within different hierarchies of complexity.

Journal ArticleDOI
TL;DR: The halting problem is a prominent example of an undecidable problem. As discussed by the authors, its formulation and undecidability proof are usually attributed to Turing's 1936 landmark paper; however, it was first stated in a 1958 book by Martin Davis.

Posted Content
TL;DR: This article showed that strong call-by-value evaluation is also reasonable, via a new abstract machine realizing useful sharing and having a linear overhead; the machine adds, on top of useful sharing, a form of implosive sharing, which on some terms brings an exponential speed-up.
Abstract: Whether the number of beta-steps in the lambda-calculus can be taken as a reasonable cost model (that is, polynomially related to the one of Turing machines) is a delicate problem, which depends on the notion of evaluation strategy. Since the nineties, it has been known that weak (that is, out of abstractions) call-by-value evaluation is a reasonable strategy, while Levy's optimal parallel strategy, which is strong (that is, it reduces everywhere), is not. The strong case turned out to be subtler than the weak one. In 2014, Accattoli and Dal Lago showed that strong call-by-name is reasonable, by introducing a new form of useful sharing and, later, an abstract machine with an overhead quadratic in the number of beta-steps. Here we show that strong call-by-value evaluation is also reasonable, via a new abstract machine realizing useful sharing and having a linear overhead. Moreover, our machine uses a new mix of sharing techniques, adding on top of useful sharing a form of implosive sharing, which on some terms brings an exponential speed-up. We give an example of a family that the machine executes in time logarithmic in the number of beta-steps.

Proceedings ArticleDOI
TL;DR: The Turing Test has been criticized by philosophers and computer scientists as irrelevant or simply inefficient for evaluating a machine's intelligence; but while arguments against it certainly highlight some of the test's flaws, they also reveal the confusion that exists between thinking and intelligence.
Abstract: The Turing Test was initially suggested as a way to answer the question "Can machines think?". Since then, it has been heavily criticized by philosophers and computer scientists both as irrelevant and as simply inefficient for evaluating a machine's intelligence. But while arguments against it certainly highlight some of the test's flaws, they also reveal the confusion that exists between thinking and intelligence. While we will not attempt here to define the concept of intelligence, we will show that such a definition becomes irrelevant if the Turing Test is instead considered to be a test of the humanness of a conversational partner, an experimental paradigm that can be used to investigate human inferences and expectations. We review studies which use the Turing Test this way, not only in computer science, where it is commonly used to evaluate the humanness of a chatbot, but also in psychology, where it can be used to understand human reasoning in conversation, either with a chatbot or with another human.