
Showing papers on "Computability published in 2023"


Journal ArticleDOI
TL;DR: In this paper, the extraction rate of Turing functionals that translate between notions of randomness with respect to different underlying probability measures is studied. Three classes of extraction procedures are analyzed: a class that generalizes von Neumann's trick for extracting unbiased randomness from the tosses of a biased coin, a class based on work of Knuth and Yao, and a class independently developed by Levin and Kautz that generalizes the data compression technique of arithmetic coding.
Abstract: In this article, we study a notion of the extraction rate of Turing functionals that translate between notions of randomness with respect to different underlying probability measures. We analyze several classes of extraction procedures: (1) a class that generalizes von Neumann’s trick for extracting unbiased randomness from the tosses of a biased coin, (2) a class based on work of Knuth and Yao (which more properly can be characterized as extracting biased randomness from unbiased randomness), and (3) a class independently developed by Levin and Kautz that generalizes the data compression technique of arithmetic coding. For the first two classes of extraction procedures, we identify a level of algorithmic randomness for an input that guarantees that we attain the extraction rate along that input, while for the third class, we calculate the rate attained along sufficiently random input sequences.
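
To make the first class concrete, here is a minimal sketch (ours, not the paper's) of von Neumann's trick in Python: pairs of biased flips are mapped 01→0 and 10→1 and discarded otherwise, which yields unbiased output bits.

```python
import random

def von_neumann_extractor(biased_bits):
    """Yield unbiased bits from an i.i.d. stream of biased bits.

    The pairs (0,1) and (1,0) each occur with probability p(1-p),
    so emitting the first bit of an unequal pair is unbiased;
    equal pairs (0,0) and (1,1) are simply discarded.
    """
    it = iter(biased_bits)
    for a, b in zip(it, it):  # consume the stream two bits at a time
        if a != b:
            yield a

# A coin with bias p = 0.8 still yields an output frequency near 0.5.
p = 0.8
source = (1 if random.random() < p else 0 for _ in range(100_000))
out = list(von_neumann_extractor(source))
print(sum(out) / len(out))
```

The expected yield here, p(1−p) output bits per input pair, is the kind of quantity that the paper's notion of extraction rate formalizes for Turing functionals.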

1 citation


Journal ArticleDOI
TL;DR: In this article, it is proved that the class of effective procedures is undecidable, i.e., that there is no effective procedure for determining whether a given procedure is effective.
Abstract: The “somewhat vague, intuitive” notion from computability theory of an effective procedure (method) or algorithm can be fairly precisely defined even if it is not sufficiently formal and precise to belong to mathematics proper (in a narrow sense)—and even if (as many have asserted) for that reason the Church–Turing thesis is unprovable. It is proved logically that the class of effective procedures is not decidable, i.e., that there is no effective procedure for ascertaining whether a given procedure is effective. This result is proved directly from the notion itself of an effective procedure, without reliance on any (partly) mathematical lemma, conjecture, or thesis invoking recursiveness or Turing-computability. In fact, there is no reliance on anything very mathematical. The proof does not even appeal to a precise definition of ‘effective procedure’. Instead, it relies solely and entirely on a basic grasp of the intuitive notion of an effective procedure. Though the result that effectiveness is undecidable is not surprising, it is also not without significance. It has the consequence, for example, that the solution to a decision problem, if it is to be complete, must be accompanied by a separate argument that the proposed ascertainment procedure invariably terminates with the correct verdict.
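
The result echoes Turing's halting-problem diagonalization; for comparison, here is a sketch of that classical argument (the familiar analogue, not the paper's proof, which argues directly from the intuitive notion of effectiveness):

```python
def halts(f, x):
    """Hypothetical total decider for termination; assumed, not real."""
    raise NotImplementedError

def diagonal(f):
    # Do the opposite of whatever the decider predicts for f run on f.
    if halts(f, f):
        while True:     # if f would halt on itself, loop forever
            pass
    return "halted"

# diagonal(diagonal) halts if and only if it does not halt, so no
# total, effective halts() can exist.
```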

1 citation


Journal ArticleDOI
TL;DR: In this paper, the authors investigate when two topologies on the same space induce different sets of computable points, and propose an effective version of the Baire category theorem that enables one to build points satisfying properties that are co-meager w.r.t. one topology while being computable w.r.t. another.
Abstract: Computable analysis provides ways of representing points in a topological space, and therefore of defining a notion of computable points of the space. In this article, we investigate when two topologies on the same space induce different sets of computable points. We first study a purely topological version of the problem, which is to understand when two topologies are not σ -homeomorphic. We obtain a characterization leading to an effective version, and we prove that two topologies satisfying this condition induce different sets of computable points. Along the way, we propose an effective version of the Baire category theorem which captures the construction technique, and enables one to build points satisfying properties that are co-meager w.r.t. a topology, and are computable w.r.t. another topology. Finally, we generalize the result to three topologies and give an application to prove that certain sets do not have computable type, i.e. have a copy that is semicomputable but not computable.

1 citation


Journal ArticleDOI
TL;DR: In the first half of the 20th century, five classic articles were written by three outstanding scholars, namely Wiener (1894 to 1964), the father of cybernetics, Schrödinger (1887 to 1961), the father of quantum mechanics, and Turing (1912 to 1954), the pioneer of artificial intelligence, as discussed by the authors.
Abstract: In the first half of the 20th century, 5 classic articles were written by 3 outstanding scholars, namely, Wiener (1894 to 1964), the father of cybernetics, Schrödinger (1887 to 1961), the father of quantum mechanics, and Turing (1912 to 1954), the father of artificial intelligence. The articles discuss concepts such as computability, life, machine, control, and artificial intelligence, establishing a solid foundation for the intelligence of machines (how can machines recognize as humans do?) and its future development.

1 citation


Posted ContentDOI
26 Jun 2023
TL;DR: In this article, the authors study the surjection property and the epsilon-surjection property, which were recently introduced to characterize the notion of computable type arising from computability theory, and develop techniques to prove or disprove these properties using homotopy and homology theories.
Abstract: We provide a detailed study of two properties of spaces and pairs of spaces, the surjection property and the epsilon-surjection property, that were recently introduced to characterize the notion of computable type arising from computability theory. For a class of spaces including the finite simplicial complexes, we develop techniques to prove or disprove these properties using homotopy and homology theories, and give applications of these results. In particular, we answer an open question on the computable type property, showing that it is not preserved by taking products.

Posted ContentDOI
11 May 2023
TL;DR: In this paper, a rich computability theory is developed for function classes that besides the total functions contain only finite functions whose domains are initial segments of the natural numbers; this theory embraces the central results of classical computability theory, in which all partial (computable) functions are considered, and its central algorithmic idea is to search in enumerated lists.
Abstract: Partiality is a natural phenomenon in computability that we cannot get around. So the question is whether we can give the areas where partiality occurs, that is, where non-termination happens, more structure. In this paper we consider function classes which besides the total functions only contain finite functions whose domain of definition is an initial segment of the natural numbers. Such functions appear naturally in computation. We show that a rich computability theory can be developed for these function classes which embraces the central results of classical computability theory, in which all partial (computable) functions are considered. To do so, the concept of a Gödel number is generalised, resulting in a broader class of numberings. The central algorithmic idea in this approach is to search in enumerated lists. In this way the notion of computation is reduced to that of enumeration. Besides the development of a computability theory for the function classes, the new numberings – called quasi-Gödel numberings – are studied from a numbering-theoretic perspective: they are complete, and each of the function classes numbered in this way is a retract of the Gödel numbered set of all partial computable functions. Moreover, the Rogers semi-lattice of all computable numberings of the considered function classes is studied, and results as in the case of the computable numberings of the partial computable functions are obtained. The function classes are shown to be effectively given algebraic domains in the sense of Scott–Ershov. The quasi-Gödel numberings are exactly the admissible numberings of the domain. Moreover, the domain can be computably mapped onto every other effectively given one so that every admissible numbering of the computable domain elements is generated by a quasi-Gödel numbering via this mapping.
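
The reduction of computation to enumeration can be illustrated by a small sketch (ours, for illustration only): a function is evaluated by searching an enumerated list of input/output pairs, and the search diverges exactly where the function is undefined.

```python
def eval_by_search(enumerate_graph, x):
    """Evaluate a function at x by scanning an enumeration of its graph.

    enumerate_graph yields (input, output) pairs in some computable
    order; evaluation succeeds as soon as x shows up, reducing the
    notion of computation to that of enumeration."""
    for u, v in enumerate_graph():
        if u == x:
            return v

# A finite function defined on the initial segment {0, 1, 2}.
def graph():
    for u in range(3):
        yield (u, u * u)

print(eval_by_search(graph, 2))  # -> 4
# For an infinite enumeration, querying an undefined argument would
# simply never return -- the structured form of partiality above.
```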

Proceedings ArticleDOI
10 Jul 2023
TL;DR: The Logic Bonbon as discussed by the authors is a liquid-centered dessert that computes its own flavor and presentation through hydrodynamically induced logic operations (AND, OR, and XOR), which form the basis of computation.
Abstract: This proposal presents the "Logic Bonbon," an artwork that embodies an integration of food and computation to celebrate the interconnectedness of human-food interactions within HCI. Based on the authors’ previous research exploring “alternative ways” of computation through food design, the Logic Bonbon is a liquid-centered dessert that “computes” its own flavor and presentation through hydrodynamically induced logic operations (AND, OR, and XOR), which form the basis of computation. This artwork serves as a material speculation, provoking a reimagining of the concept of "resilience" – the theme of this Art Exhibition – across the boundary between the “edible” and the “computable.” It envisions an innovative future for Human-Food Interaction (HFI) design by considering the material paradox between fragility and adaptability, ephemerality and durability, as well as palatability and computability.

Posted ContentDOI
14 Feb 2023
TL;DR: In this article, the authors identify some closely related non-normal functionals that fall on different sides of the abyss separating $\exists^{2}$ and $\exists^{3}$, based on mainstream mathematical notions like quasi-continuity, Baire classes, and semi-continuity.
Abstract: Kleene's computability theory based on his S1-S9 computation schemes constitutes a model for computing with objects of any finite type and extends Turing's 'machine model' which formalises computing with real numbers. A fundamental distinction in Kleene's framework is between normal and non-normal functionals where the former compute the associated Kleene quantifier $\exists^{n}$ and the latter do not. Historically, the focus was on normal functionals, but recently new non-normal functionals have been studied, based on well-known theorems like the uncountability of the reals. These new non-normal functionals are fundamentally different from historical examples like Tait's fan functional: the latter is computable from $\exists^{2}$ while the former are only computable in $\exists^{3}$. While there is a great divide separating $\exists^{2}$ and $\exists^{3}$, we identify certain closely related non-normal functionals that fall on different sides of this abyss. Our examples are based on mainstream mathematical notions, like quasi-continuity, Baire classes, and semi-continuity.

Proceedings ArticleDOI
16 Jun 2023
TL;DR: In this article, the authors present a new communication abstraction called Mutual Broadcast (MBroadcast), which provides each pair of processes with the following property (called mutual ordering): if p broadcasts a message m and p′ broadcasts a message m′, it is not possible for p to deliver first (its message) m and then m′ while p′ delivers first (its message) m′ and then m.
Abstract: This short article presents a new communication abstraction denoted Mutual Broadcast (in short MBroadcast). It provides each pair of processes with the following property (called mutual ordering): for any pair of processes p and p′, if p broadcasts a message m and p′ broadcasts a message m′, it is not possible for p to deliver first (its message) m and then m′ while p′ delivers first (its message) m′ and then m. The computability power of this broadcast abstraction is the same as that of an atomic read/write register. Interestingly, it constitutes the first characterization of RW registers in terms of (binary) message patterns.
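
A small hypothetical checker (names are illustrative, not from the paper) makes the forbidden pattern explicit: mutual ordering rules out the two broadcasters each delivering their own message first.

```python
def violates_mutual_ordering(log_p, log_q, m, m_prime):
    """True iff the delivery logs show the pattern MBroadcast forbids:
    p delivers its own message m before m', while q delivers its own
    message m' before m."""
    return (log_p.index(m) < log_p.index(m_prime)
            and log_q.index(m_prime) < log_q.index(m))

# p broadcast "m" and q broadcast "m'"; both eventually deliver both.
print(violates_mutual_ordering(["m", "m'"], ["m'", "m"], "m", "m'"))  # True
print(violates_mutual_ordering(["m", "m'"], ["m", "m'"], "m", "m'"))  # False
```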

Posted ContentDOI
03 Mar 2023
TL;DR: In this article, in an effort to provide a negative solution to the Kirchberg Embedding Problem (KEP), motivated by the recent refutation of the Connes Embedding Problem, the authors establish two computability-theoretic consequences of a positive solution to KEP.
Abstract: The Kirchberg Embedding Problem (KEP) asks if every C*-algebra embeds into an ultrapower of the Cuntz algebra $\mathcal{O}_2$. In an effort to provide a negative solution to the KEP and motivated by the recent refutation of the Connes Embedding Problem, we establish two computability-theoretic consequences of a positive solution to KEP. Both of our results follow from the a priori weaker assumption that there exists a locally universal C*-algebra with a computable presentation.

Posted ContentDOI
04 May 2023
TL;DR: In this article, the capacity of the band-limited additive colored Gaussian noise (ACGN) channel is studied from a fundamental algorithmic point of view by addressing the question of whether or not the capacity can be algorithmically computed.
Abstract: Designing capacity-achieving coding schemes for the band-limited additive colored Gaussian noise (ACGN) channel has been and is still a challenge. In this paper, the capacity of the band-limited ACGN channel is studied from a fundamental algorithmic point of view by addressing the question of whether or not the capacity can be algorithmically computed. To this aim, the concept of Turing machines is used, which provides fundamental performance limits of digital computers. It is shown that there are band-limited ACGN channels having computable continuous spectral densities whose capacities are non-computable numbers. Moreover, it is demonstrated that for those channels, it is impossible to find computable sequences of asymptotically sharp upper bounds for their capacities.
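
For context, the quantity at stake is the classical water-filling characterization of the band-limited ACGN capacity (a textbook formula, not a result of this paper; conventions vary by a normalization factor depending on real vs. complex signaling). For noise power spectral density $N(f)$ on the band $[0, B]$ and power budget $P$:

```latex
C = \int_{0}^{B} \log_2\!\left( 1 + \frac{[\nu - N(f)]^{+}}{N(f)} \right) \mathrm{d}f,
\qquad \text{where } \nu \text{ is chosen so that } \int_{0}^{B} [\nu - N(f)]^{+} \,\mathrm{d}f = P,
```

with $[x]^{+} = \max\{x, 0\}$. The paper's point is that even when $N(f)$ is a computable continuous function, the number $C$ defined by this expression can fail to be computable, and no computable sequence of asymptotically sharp upper bounds for it need exist.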

Posted ContentDOI
04 May 2023
TL;DR: In this article, a new programming model with support for alternation, imperfect information, and recursion is proposed; it is based on the novel programming construct of urgency annotations, which decorate the choice operators and control the order in which the choices have to be made.
Abstract: We propose a new programming model with support for alternation, imperfect information, and recursion. We model imperfect information with the novel programming construct of urgency annotations that decorate the (angelic and demonic) choice operators and control the order in which the choices have to be made. Our contribution is a study of the standard notions of contextual equivalence for urgency programs. Our first main results are fully abstract characterizations of these relations based on sound and complete axiomatizations. The axiomatization clearly shows how imperfect information distributes over perfect information. Our second main result settles their computability status. Notably, we show that the contextual preorder is (2h-1)-EXPTIME-complete for programs of maximal urgency h when the regular observable is given as an input, and PTIME-complete when the regular observable is fixed. Our findings imply new decidability results for hyper model checking, a prominent problem in security.

Journal ArticleDOI
TL;DR: In this paper, it is shown that for discrete memoryless channels it is impossible to algorithmically compute the capacity-achieving input distribution when the channel is given as an input to the algorithm (or Turing machine).
Abstract: The capacity of a channel can usually be characterized as a maximization of certain entropic quantities. From a practical point of view it is of primary interest to not only compute the capacity value, but also to find the corresponding optimizer, i.e., the capacity-achieving input distribution. This paper addresses the general question of whether or not it is possible to find algorithms that can compute the optimal input distribution depending on the channel. For this purpose, the concept of Turing machines is used which provides the fundamental performance limits of digital computers and therewith fully specifies which tasks are algorithmically feasible in principle. It is shown for discrete memoryless channels that it is impossible to algorithmically compute the capacity-achieving input distribution, where the channel is given as an input to the algorithm (or Turing machine). Finally, it is further shown that it is even impossible to algorithmically approximate these input distributions.
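
For a single, exactly given channel the capacity-achieving input can be approximated by the classical Blahut–Arimoto iteration; a compact sketch follows (a standard algorithm, ours as illustration). The paper's impossibility result concerns the uniform problem: no Turing machine can take an arbitrary channel as input and output, or even approximate, its optimizer with guaranteed precision.

```python
import numpy as np

def blahut_arimoto(W, iters=500):
    """Approximate the capacity-achieving input distribution of a DMC.

    W[x, y] = p(y|x) is the transition matrix. Classic Blahut-Arimoto:
    compute the output law induced by the current input distribution p,
    then rescale p by exp(D(W[x,:] || q)) and renormalize."""
    p = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iters):
        q = p @ W                                   # induced output law
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(W > 0, np.log(W / q), 0.0)
        c = np.exp((W * log_ratio).sum(axis=1))     # exp of divergences
        p = p * c / (p * c).sum()
    q = p @ W                                       # mutual information
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(W > 0, np.log2(W / q), 0.0)
    return p, float((p[:, None] * W * log_ratio).sum())

# Binary symmetric channel, crossover 0.1: the optimal input is uniform
# and the capacity is 1 - H(0.1), about 0.531 bits.
W = np.array([[0.9, 0.1], [0.1, 0.9]])
print(blahut_arimoto(W))
```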

Posted ContentDOI
28 Feb 2023
TL;DR: In this article, the authors use constructive type theory as a framework to revisit, analyze and generalize Tennenbaum's theorem, which states that the only countable model of Peano arithmetic with computable arithmetical operations is the standard model of natural numbers.
Abstract: Tennenbaum's theorem states that the only countable model of Peano arithmetic (PA) with computable arithmetical operations is the standard model of natural numbers. In this paper, we use constructive type theory as a framework to revisit, analyze and generalize this result. The chosen framework allows for a synthetic approach to computability theory, exploiting that, externally, all functions definable in constructive type theory can be shown computable. We then build on this viewpoint and furthermore internalize it by assuming a version of Church's thesis, which expresses that any function on natural numbers is representable by a formula in PA. This assumption provides for a conveniently abstract setup to carry out rigorous computability arguments, even in the theorem's mechanization. Concretely, we constructivize several classical proofs and present one inherently constructive rendering of Tennenbaum's theorem, all following arguments from the literature. Concerning the classical proofs in particular, the constructive setting allows us to highlight differences in their assumptions and conclusions which are not visible classically. All versions are accompanied by a unified mechanization in the Coq proof assistant.

Journal ArticleDOI
TL;DR: In this article, the authors compared the computability of Observational Medical Outcomes Partnership (OMOP)-based queries related to prescreening of patients using two versions of the OMOP common data model (CDM; v5.3 and v5.4) and assessed the performance of the Greater Paris University Hospital (APHP) prescreening tool.
Abstract: To compare the computability of Observational Medical Outcomes Partnership (OMOP)-based queries related to prescreening of patients using two versions of the OMOP common data model (CDM; v5.3 and v5.4) and to assess the performance of the Greater Paris University Hospital (APHP) prescreening tool. We identified the prescreening information items relevant for prescreening of patients with cancer. We randomly selected 15 academic and industry-sponsored urology phase I-IV clinical trials (CTs) launched at APHP between 2016 and 2021. The computability of the related prescreening criteria (PC) was defined by their translation rate in OMOP-compliant queries and by their execution rate on the APHP clinical data warehouse (CDW) containing data of 205,977 patients with cancer. The overall performance of the prescreening tool was assessed by the rate of true- and false-positive cases of three randomly selected CTs. We defined a list of 15 minimal information items being relevant for patients' prescreening. We identified 83 PC of the 534 eligibility criteria from the 15 CTs. We translated 33 and 62 PC in queries on the basis of OMOP CDM v5.3 and v5.4, respectively (translation rates of 40% and 75%, respectively). Of the 33 PC translated in the v5.3 of the OMOP CDM, 19 could be executed on the APHP CDW (execution rate of 58%). Of 83 PC, the computability rate on the APHP CDW reached 23%. On the basis of three CTs, we identified 17, 32, and 63 patients as being potentially eligible for inclusion in those CTs, resulting in positive predictive values of 53%, 41%, and 21%, respectively. We showed that PC could be formalized according to the OMOP CDM and that the oncology extension increased their translation rate through better representation of cancer natural history.
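
As a quick reading aid (our arithmetic, not the paper's), positive predictive value is the share of flagged patients who turn out to be truly eligible, so the reported figures correspond to roughly 9, 13, and 13 true positives:

```python
# PPV = truly eligible / all patients flagged by the prescreening tool.
for flagged, ppv in [(17, 0.53), (32, 0.41), (63, 0.21)]:
    print(f"{flagged} flagged -> ~{round(flagged * ppv)} truly eligible")
```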

Journal ArticleDOI
TL;DR: In this paper, a class of operators on Turing degrees arising naturally from ultrafilters is studied; these operators, called ultrafilter jumps, resemble classical limit computability and are closely tied to Scott sets.
Abstract: We study a class of operators on Turing degrees arising naturally from ultrafilters. Suppose $U$ is a nonprincipal ultrafilter on $\omega$. We can then view a sequence of sets $A = (A_i)_{i \in \omega}$ as an “approximation” of a set $B$ produced by amalgamating the $A_i$ via $U$: we set $\lim_U(A) = \{x : \{i : x \in A_i\} \in U\}$. This can be extended to the Turing degrees, by defining $\delta_U(a) = \{\lim_U(A) : A = (A_i)_{i \in \omega} \in a\}$. The $\delta_U$ – which we call “ultrafilter jumps” – resemble classical limit computability in certain ways. In particular, $\delta_U(a)$ is always a Turing ideal containing $\Delta^0_2(a)$. However, they are also closely tied to Scott sets: $\delta_U(a)$ is always a Scott set containing $a'$. (This yields an alternate proof of the standard result in reverse mathematics that Weak König’s Lemma is strictly weaker than arithmetic comprehension.) Our main result is that the converse also holds: if $S$ is a countable Scott set containing $a'$, then there is some ultrafilter $U$ with $\delta_U(a) = S$. We then turn to the problem of controlling the action of an ultrafilter jump $\delta_U$ on two degrees simultaneously, and for example show that there are nontrivial degrees which are “low” for some ultrafilter jump. Finally, we study the structure on the set of ultrafilters arising from the construction $U \mapsto \delta_U$; in particular, we introduce a natural preordering on this set and show that it is connected with the classical Rudin–Keisler ordering of ultrafilters. We end by presenting two directions for further research.

Proceedings ArticleDOI
02 Mar 2023
TL;DR: In this article, the authors explore the pedagogical connection of undecidability results in theoretical computing with similar (un)computability results in physics that have recently appeared.
Abstract: Classroom examples of non-computable problems most often involve sets of Turing machines, which makes these problems too abstract and far divorced from real-world practice, so some students are turned off by the abstract nature of theoretical CS. In this paper, we explore the pedagogical connection of undecidability results in theoretical computing with similar (un)computability results in physics that have recently appeared. We argue that incorporating these new impossibility and undecidability results can increase students' interest in theoretical computing topics, as well as improve their understanding of the underlying science and mathematics.

Proceedings ArticleDOI
28 Apr 2023
TL;DR: In our modern world, people use deductive thinking every day without even knowing it, as discussed by the authors; it is also applied consciously in various fields, such as mathematics, philosophy, law, science, technology and logic.
Abstract: In our modern world, people use deductive thinking every day without even knowing it. Richard Davidson wrote that people start with generalizations, then apply them to specific situations and, ultimately, draw conclusions [2]. Take, for example, a situation where a person is going to go out. Logical chains are built in the person's head: warm clothes protect from the cold, it is cold outside, a jacket is warm clothes, so I will wear a jacket. Everything happens automatically in a split second. Nevertheless, the method of deductive thinking is also applied consciously in various fields, such as mathematics, philosophy, law, science, technology and logic. For example, in philosophy, deduction is used to develop arguments and substantiate positions. It is used to analyze and evaluate various philosophical theories and concepts. In computer science, deduction plays an important role in computability theory and the development of formal specifications. In mathematics, the deductive method is widely used to prove theorems. In addition, deduction is used to formalize and study various mathematical theories, such as set theory, algebra and number theory.

Posted ContentDOI
31 Jan 2023
TL;DR: In this article, it is shown that in every non-compact model, protocols solving tasks correspond to simplicial maps that need to be continuous, and that the approach used in the ACT, which equates protocols and simplicial complexes, actually works for every compact model.
Abstract: The famous asynchronous computability theorem (ACT) relates the existence of an asynchronous wait-free shared memory protocol for solving a task with the existence of a simplicial map from a subdivision of the simplicial complex representing the inputs to the simplicial complex representing the allowable outputs. The original theorem relies on a correspondence between protocols and simplicial maps in finite models of computation that induce a compact topology. This correspondence, however, is far from obvious for computation models that induce a non-compact topology, and indeed previous attempts to extend the ACT have failed. This paper shows first that in every non-compact model, protocols solving tasks correspond to simplicial maps that need to be continuous. This correspondence is then used to prove that the approach used in ACT that equates protocols and simplicial complexes actually works for every compact model, and to show a generalized ACT, which applies also to non-compact computation models. Finally, the generalized ACT is applied to the set agreement task. Our study combines combinatorial and point-set topological aspects of the executions admitted by the computation model.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of bandwidth computation for a general class of band-limited signals and derive a negative answer to the question of whether it is at least possible to compute non-trivial upper or lower bounds for the bandwidth of such signals.
Abstract: The bandwidth of a signal is an important physical property that is of relevance in many signal- and information-theoretic applications. In this paper we study questions related to the computability of the bandwidth of computable bandlimited signals. To this end we employ the concept of Turing computability, which exactly describes what is theoretically feasible and can be computed on a digital computer. Recently, it has been shown that there exist computable bandlimited signals with finite energy, the actual bandwidth of which is not a computable number, and hence cannot be computed on a digital computer. In this work, we consider the most general class of band-limited signals, together with different computable descriptions thereof. Among other things, our analysis includes a characterization of the arithmetic complexity of the bandwidth of such signals and yields a negative answer to the question of whether it is at least possible to compute non-trivial upper or lower bounds for the bandwidth of a bandlimited signal. Furthermore, we relate the problem of bandwidth computation to the theory of oracle machines. In particular, we consider halting and totality oracles, which belong to the most frequently investigated oracle machines in the theory of computation.
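
For orientation, a standard way (not specific to this paper) to define the quantity in question: the bandwidth of a band-limited signal $f$ is the supremum of the support of its Fourier transform,

```latex
\operatorname{BW}(f) \;=\; \sup \,\{\, |\xi| \;:\; \xi \in \operatorname{supp} \hat{f} \,\}.
```

Computing the bandwidth thus amounts to determining where $\hat{f}$ vanishes; the paper shows that, in general, neither this value nor any non-trivial upper or lower bound for it can be computed from a computable description of the signal.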

Posted ContentDOI
15 Jan 2023
TL;DR: In this article, it is shown that in various situations the optimizer of an optimization problem is not Banach-Mazur computable, and consequently cannot be computed, even in an approximate sense, on Turing machines and hence on digital computers.
Abstract: Optimization problems are a staple of today's scientific and technical landscape. However, at present, solvers of such problems are almost exclusively run on digital hardware. Using Turing machines as a mathematical model for any type of digital hardware, in this paper, we analyze fundamental limitations of this conceptual approach of solving optimization problems. Since in most applications, the optimizer itself is of significantly more interest than the optimal value of the corresponding function, we will focus on computability of the optimizer. In fact, we will show that in various situations the optimizer is unattainable on Turing machines and consequently on digital computers. Moreover, even worse, there does not exist a Turing machine that approximates the optimizer itself up to a certain constant error. We prove such results for a variety of well-known problems from very different areas, including artificial intelligence, financial mathematics, and information theory, often deriving the even stronger result that such problems are not Banach-Mazur computable, not even in an approximate sense.

Journal ArticleDOI
TL;DR: This paper shows that the toy Turing Tumble, suitably extended with an infinitely long game board and an unlimited supply of pieces, is Turing-complete; the authors believe this is the first natural extension of a marble-based computer that has been shown to be universal.
Abstract: It is shown that the toy Turing Tumble, suitably extended with an infinitely long game board and an unlimited supply of pieces, is Turing-complete. This is achieved via direct simulation of a Turing machine. Unlike previously informally presented constructions, we do not encode the finite control infinitely many times, we need only one trigger/ball-hopper pair, and we prove our construction correct. We believe this is the first natural extension of a marble-based computer that has been shown to be universal.


Posted ContentDOI
24 May 2023
TL;DR: In this paper, a personal journey towards a deeper understanding of the mathematical foundations of algorithmic music composition is described, focusing on general issues such as fundamental limits and possibilities, by analogy with metalogic, metamathematics and computability theory.
Abstract: This essay recounts my personal journey towards a deeper understanding of the mathematical foundations of algorithmic music composition. I do not spend much time on specific mathematical algorithms used by composers; rather, I focus on general issues such as fundamental limits and possibilities, by analogy with metalogic, metamathematics, and computability theory. I discuss implications from these foundations for the future of algorithmic composition.



Posted ContentDOI
10 May 2023
TL;DR: In this article, the decidability of verifying DNNs with piecewise-smooth activation functions is shown to be equivalent to a well-known open problem formulated by Tarski, and DNN verification for quantifier-free linear arithmetic specifications is reduced to the DNN reachability problem, whose approximation is NP-complete.
Abstract: Deep neural networks (DNNs) are increasingly being deployed to perform safety-critical tasks. The opacity of DNNs, which prevents humans from reasoning about them, presents new safety and security challenges. To address these challenges, the verification community has begun developing techniques for rigorously analyzing DNNs, with numerous verification algorithms proposed in recent years. While a significant amount of work has gone into developing these verification algorithms, little work has been devoted to rigorously studying the computability and complexity of the underlying theoretical problems. Here, we seek to contribute to the bridging of this gap. We focus on two kinds of DNNs: those that employ piecewise-linear activation functions (e.g., ReLU), and those that employ piecewise-smooth activation functions (e.g., Sigmoids). We prove the two following theorems: 1) The decidability of verifying DNNs with piecewise-smooth activation functions is equivalent to a well-known, open problem formulated by Tarski; and 2) The DNN verification problem for any quantifier-free linear arithmetic specification can be reduced to the DNN reachability problem, whose approximation is NP-complete. These results answer two fundamental questions about the computability and complexity of DNN verification, and the ways it is affected by the network's activation functions and error tolerance; and could help guide future efforts in developing DNN verification tools.
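
Reachability-style analyses are typically approximated in practice; below is a minimal interval-bound-propagation sketch (illustrative only, not from the paper) that over-approximates the outputs reachable by a small ReLU network from a box of inputs.

```python
import numpy as np

def interval_bound_propagation(layers, lo, hi):
    """Propagate an input box [lo, hi] through affine+ReLU layers.

    layers is a list of (W, b) pairs. For an affine map, output bounds
    follow from splitting W into positive and negative parts; ReLU is
    monotone, so clamping the bounds is sound. The result is an
    over-approximation of the truly reachable outputs."""
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = (W_pos @ lo + W_neg @ hi + b,
                  W_pos @ hi + W_neg @ lo + b)
        if i < len(layers) - 1:          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# Toy 2-2-1 ReLU network; an output proved to stay below a threshold
# over the whole box is verified (the converse may be inconclusive).
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
          (np.array([[1.0, 1.0]]), np.zeros(1))]
print(interval_bound_propagation(layers, np.array([0.0, 0.0]),
                                 np.array([1.0, 1.0])))
```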

Posted ContentDOI
08 Feb 2023
TL;DR: In this article, the authors study computable online (c-online) learning with varying requirements for "optimality" in terms of the mistake bound, and give a necessary and sufficient condition for optimal c-online learning.
Abstract: We initiate a study of computable online (c-online) learning, which we analyze under varying requirements for "optimality" in terms of the mistake bound. Our main contribution is to give a necessary and sufficient condition for optimal c-online learning and show that the Littlestone dimension no longer characterizes the optimal mistake bound of c-online learning. Furthermore, we introduce anytime optimal (a-optimal) online learning, a more natural conceptualization of "optimality" and a generalization of Littlestone's Standard Optimal Algorithm. We show the existence of a computational separation between a-optimal and optimal online learning, proving that a-optimal online learning is computationally more difficult. Finally, we consider online learning with no requirements for optimality, and show, under a weaker notion of computability, that the finiteness of the Littlestone dimension no longer characterizes whether a class is c-online learnable with finite mistake bound. A potential avenue for strengthening this result is suggested by exploring the relationship between c-online and CPAC learning, where we show that c-online learning is as difficult as improper CPAC learning.
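
For reference, the flavor of mistake-bounded online learning that the paper's computable variants refine: the textbook halving algorithm over a finite hypothesis class (a standard construction, not the paper's algorithm) makes at most log2|H| mistakes when the target lies in the class.

```python
from math import log2

def halving_algorithm(hypotheses, stream):
    """Online learning over a finite class of {0,1}-valued hypotheses.

    Predict the majority vote of the current version space, then drop
    every hypothesis inconsistent with the revealed label. Each mistake
    at least halves the version space, so if some h in hypotheses is
    consistent with the stream, at most log2(len(hypotheses)) mistakes
    are made."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, label in stream:
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes >= len(version_space) else 0
        if prediction != label:
            mistakes += 1
        version_space = [h for h in version_space if h(x) == label]
    return mistakes

# Threshold functions on {0,...,9}; the target threshold is 7.
hypotheses = [(lambda x, t=t: 1 if x >= t else 0) for t in range(10)]
stream = [(x, 1 if x >= 7 else 0) for x in [3, 9, 6, 7, 5, 8]]
print(halving_algorithm(hypotheses, stream), "<=", log2(len(hypotheses)))
```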