Posted Content

A Map of Update Constraints in Inductive Inference

TL;DR: In this paper, the authors investigate how different learning restrictions reduce learning power and how the different restrictions relate to one another, and give a complete map for nine different restrictions both for the cases of complete information learning and set-driven learning.
Abstract: We investigate how different learning restrictions reduce learning power and how the different restrictions relate to one another. We give a complete map for nine different restrictions both for the cases of complete information learning and set-driven learning. This completes the picture for these well-studied \emph{delayable} learning restrictions. A further insight is gained by different characterizations of \emph{conservative} learning in terms of variants of \emph{cautious} learning. Our analyses greatly benefit from general theorems we give, for example showing that learners with exclusively delayable restrictions can always be assumed total.
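
The abstract's two central restrictions have standard formalizations; the following is a rough sketch in common notation (which may differ from the paper's), for a learner M reading a text T of a target language L, with hypotheses h_n = M(T[n]) and content(T[n]) the finite data seen up to step n:

\[ \textit{conservative:}\quad h_{n+1} \neq h_n \implies \mathrm{content}(T[n+1]) \not\subseteq W_{h_n}; \qquad \textit{cautious:}\quad n < m \implies W_{h_m} \not\subsetneq W_{h_n}. \]

That is, a conservative learner changes its conjecture only on data contradicting the current one, and a cautious learner never conjectures a proper subset of a language it conjectured before.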
Citations
Proceedings ArticleDOI
01 Jan 2016
TL;DR: Three example maps are provided, one pertaining to partially set-driven learning, and two pertaining to strongly monotone learning, which can serve as blueprints for future maps of similar base structure to determine the relations of different learning criteria.
Abstract: A major part of our knowledge about Computational Learning stems from comparisons of the learning power of different learning criteria. These comparisons inform about trade-offs between learning restrictions and, more generally, learning settings; furthermore, they inform about what restrictions can be observed without losing learning power. With this paper we propose that one main focus of future research in Computational Learning should be on a structured approach to determine the relations of different learning criteria. In particular, we propose that, for small sets of learning criteria, all pairwise relations should be determined; these relations can then be easily depicted as a map, a diagram detailing the relations. Once we have maps for many relevant sets of learning criteria, the collection of these maps is an Atlas of Computational Learning Theory, informing at a glance about the landscape of computational learning just as a geographical atlas informs about the earth. In this paper we work toward this goal by providing three example maps, one pertaining to partially set-driven learning, and two pertaining to strongly monotone learning. These maps can serve as blueprints for future maps of similar base structure.

12 citations
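
The two learning settings mapped in this paper also have standard formalizations; sketched roughly below (the notation need not match the paper's), a learner M is partially set-driven (rearrangement-independent) if its conjecture depends only on the set of data seen and on how many data items have been presented, and strongly monotone if its conjectured languages only grow along a text:

\[ \textit{partially set-driven:}\quad \mathrm{content}(\sigma) = \mathrm{content}(\tau) \wedge |\sigma| = |\tau| \implies M(\sigma) = M(\tau); \qquad \textit{strongly monotone:}\quad n \leq m \implies W_{h_n} \subseteq W_{h_m}. \]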

Journal ArticleDOI
TL;DR: A complete map for nine different restrictions both for the cases of complete information learning and set-driven learning is given and a further insight is gained by different characterizations of conservative learning in terms of variants of cautious learning.

9 citations

Proceedings Article
11 Oct 2017
TL;DR: It is shown that strongly locking learning can be assumed for partially set-driven learners, even when learning restrictions apply; when no restrictions apply, the converse also holds: every strongly locking learner can be made partially set-driven.
Abstract: We consider language learning in the limit from text where all learning restrictions are semantic, that is, where any conjecture may be replaced by a semantically equivalent conjecture. For different such learning criteria, starting with the well-known TxtGBc-learning, we consider three different normal forms: strongly locking learning, consistent learning and (partially) set-driven learning. These normal forms support and simplify proofs and give insight into what behaviors are necessary for successful learning (for example when consistency in conservative learning implies cautiousness and strong decisiveness). We show that strongly locking learning can be assumed for partially set-driven learners, even when learning restrictions apply. We give a very general proof relying only on a natural property of the learning restriction, namely, allowing for simulation on equivalent text. Furthermore, when no restrictions apply, the converse is also true: every strongly locking learner can be made partially set-driven. For several semantic learning criteria we show that learning can be done consistently. Finally, we deduce for which learning restrictions partial set-drivenness and set-drivenness coincide, including a general statement about classes of infinite languages. The latter again relies on a simulation argument.

8 citations
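
For orientation, rough versions of the normal forms named above, in common notation (details may differ from the paper's): a learner M is set-driven if M(\sigma) depends only on \mathrm{content}(\sigma), and consistent if every conjecture generates the data it was based on, \mathrm{content}(\sigma) \subseteq W_{M(\sigma)}. A finite sequence \sigma with \mathrm{content}(\sigma) \subseteq L is a locking sequence for M on L if

\[ W_{M(\sigma)} = L \quad\text{and}\quad M(\sigma \tau) = M(\sigma) \text{ for every } \tau \text{ with } \mathrm{content}(\tau) \subseteq L; \]

strongly locking learning requires that, on every text for a learned language L, some initial segment of the text is such a locking sequence.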

Proceedings Article
01 Jan 2020
TL;DR: To understand the restriction of cautious learning more fully, this paper compares the known variants in a number of different settings, namely full-information and (partially) set-driven learning, paired either with the syntactic convergence restriction (explanatory learning) or the semantic convergence restriction (behaviourally correct learning).
Abstract: We investigate language learning in the limit from text with various cautious learning restrictions. Learning is cautious if no hypothesis is a proper subset of a previous guess. Although this is a seemingly natural learning behaviour, cautious learning severely restricts explanatory (syntactic) learning power. To further understand why exactly this loss of learning power arises, Kötzing and Palenta (2016) introduced weakened versions of cautious learning and gave first partial results on their relation. In this paper, we aim to understand the restriction of cautious learning more fully. To this end we compare the known variants in a number of different settings, namely full-information and (partially) set-driven learning, paired either with the syntactic convergence restriction (explanatory learning) or the semantic convergence restriction (behaviourally correct learning). To do so, we make use of normal forms presented in Kötzing et al. (2017), most notably strongly locking and consistent learning. While strongly locking learners have been exploited when dealing with a variety of syntactic learning restrictions, we show how they can be beneficial in the semantic case as well. Furthermore, we expand the normal forms to a broader range of learning restrictions, including an answer to the open question of whether cautious learners can be assumed to be consistent, as stated in Kötzing et al. (2017).

3 citations
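
The two convergence criteria paired with cautiousness above can be sketched as follows (standard notation, possibly differing from the paper's), for hypotheses h_n = M(T[n]) on a text T for the target L:

\[ \textit{explanatory (syntactic convergence):}\quad \exists n_0\, \forall n \geq n_0:\; h_n = h_{n_0} \wedge W_{h_{n_0}} = L; \qquad \textit{behaviourally correct (semantic convergence):}\quad \exists n_0\, \forall n \geq n_0:\; W_{h_n} = L, \]

while cautiousness itself forbids any conjecture for a proper subset of an earlier conjecture's language, i.e., W_{h_m} \not\subsetneq W_{h_n} for m > n.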

Posted Content
TL;DR: The main theorem deduced states the relations between the most important delayable learning success criteria, i.e., those not invalidated by delaying the hypothesis output, and the investigation underpins the claim that \emph{delayability} is the right structural property.
Abstract: Learning from positive and negative information, so-called \emph{informants}, being one of the models for human and machine learning introduced by E.~M.~Gold, is investigated. Particularly, naturally arising questions about this learning setting, originating in results on learning from solely positive information, are answered. By a carefully arranged argument, learners can be assumed to only change their hypothesis when it is inconsistent with the data (such a learning behavior is called \emph{conservative}). The main theorem deduced states the relations between the most important delayable learning success criteria, i.e., those not invalidated by delaying the hypothesis output. Additionally, our investigation of the non-delayable requirement of consistent learning underpins the claim that \emph{delayability} is the right structural property for gaining a deeper understanding of the nature of learning success criteria. Moreover, we obtain an anomaly \emph{hierarchy} when allowing for an increasing finite number of \emph{anomalies}, that is, elements on which the hypothesized language may differ from the language to be learned. In contrast to the vacillatory hierarchy for learning from solely positive information, we observe a \emph{duality} depending on whether infinitely many \emph{vacillations} between different (almost) correct hypotheses are still considered successful learning behavior.

3 citations
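
Rough formalizations of the notions used in this abstract (standard definitions; the paper's own notation may differ): an informant for L presents every string together with its classification, i.e., all pairs (x, \chi_L(x)), whereas a text presents only the elements of L. Learning with at most a anomalies relaxes correctness of the final hypothesis h to

\[ |(W_h \setminus L) \cup (L \setminus W_h)| \leq a, \]

so the conjectured language may disagree with the target on at most a elements; vacillatory learning allows the learner to alternate forever among finitely many such (almost) correct hypotheses instead of converging to a single one.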

References
Journal ArticleDOI
TL;DR: Central concerns of the book are the theories of recursively enumerable sets and of degrees of unsolvability, Turing degrees in particular, as well as generalizations of recursion theory.

3,665 citations

Journal ArticleDOI
TL;DR: It was found that the class of context-sensitive languages is learnable from an informant, but that not even the class of regular languages is learnable from a text.
Abstract: Language learnability has been investigated. This refers to the following situation: A class of possible languages is specified, together with a method of presenting information to the learner about an unknown language, which is to be chosen from the class. The question is now asked, “Is the information sufficient to determine which of the possible languages is the unknown language?” Many definitions of learnability are possible, but only the following is considered here: Time is quantized and has a finite starting time. At each time the learner receives a unit of information and is to make a guess as to the identity of the unknown language on the basis of the information received so far. This process continues forever. The class of languages will be considered learnable with respect to the specified method of information presentation if there is an algorithm that the learner can use to make his guesses, the algorithm having the following property: Given any language of the class, there is some finite time after which the guesses will all be the same and they will be correct. In this preliminary investigation, a language is taken to be a set of strings on some finite alphabet. The alphabet is the same for all languages of the class. Several variations of each of the following two basic methods of information presentation are investigated: A text for a language generates the strings of the language in any order such that every string of the language occurs at least once. An informant for a language tells whether a string is in the language, and chooses the strings in some order such that every string occurs at least once. It was found that the class of context-sensitive languages is learnable from an informant, but that not even the class of regular languages is learnable from a text.

3,460 citations
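
Gold's success criterion is usually stated as identification in the limit; a compact sketch in later standard notation (Gold's own formulation differs in detail): a learner M identifies L from text if for every text T with \mathrm{content}(T) = L

\[ \exists n_0\, \forall n \geq n_0:\; M(T[n]) = M(T[n_0]) \;\wedge\; W_{M(T[n_0])} = L, \]

and a class of languages is learnable if a single learner identifies every language of the class.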

Book
01 Jan 1980
TL;DR: The authors of this book have developed a rigorous and unified theory that opens the study of language learnability to discoveries about the mechanisms of language acquisition in human beings and has important implications for linguistic theory, child language research, and the philosophy of language.
Abstract: The question of language learnability is central to modern linguistics. Yet, despite its importance, research into the problems of language learnability has rarely gone beyond the informal, commonsense intuitions that currently prevail among linguists and psychologists. By focusing their inquiry on formal language learnability theory--the interface of formal mathematical linguistics, linguistic theory and cognitive psychology--the authors of this book have developed a rigorous and unified theory that opens the study of language learnability to discoveries about the mechanisms of language acquisition in human beings. Their research has important implications for linguistic theory, child language research, and the philosophy of language. "Formal Principles of Language Acquisition" develops rigorous mathematical methods for demonstrating the learnability of classes of grammars. It adapts the well-developed theories of transformational grammar to establish psychological motivation for a set of formal constraints on grammars sufficient for learnability. In addition, the research deals with such matters as the complex interaction between the mechanism of language learning and the learning environment, the empirical adequacy of the learnability constraints, feasibility and attainability of classes of grammars, the role of semantics in language learnability, and the adequacy of transformational grammars as models of human linguistic competence. This first serious and extended development of a formal and precise theory of language learnability will interest researchers in psychology and linguistics, and is recommended for use in graduate courses in language acquisition, linguistic theory, psycholinguistics, and mathematical linguistics, as well as interdisciplinary courses that deal with language learning, use, and philosophy. Contents: Methodological Considerations; Foundations of a Theory of Learnability; A Learnability Result for Transformational Grammar; Degree-2 Learnability; Linguistic Evidence for the Learnability Constraints; Function, Performance and Explanations; Further Issues: Linguistic Interaction, Invariance Principle, Open Problems; Notes, Bibliography, Index.

1,144 citations

Journal ArticleDOI
TL;DR: Hartley Rogers' Theory of Recursive Functions and Effective Computability is a standard reference on recursion theory, covering computable functions, recursively enumerable sets, degrees of unsolvability, and related topics.

1,124 citations

Journal ArticleDOI
TL;DR: A theorem characterizing when an indexed family of nonempty recursive formal languages is inferrable from positive data is proved, and other useful conditions for inference from positive data are obtained.
Abstract: We consider inductive inference of formal languages, as defined by Gold (1967), in the case of positive data, i.e., when the examples of a given formal language are successive elements of some arbitrary enumeration of the elements of the language. We prove a theorem characterizing when an indexed family of nonempty recursive formal languages is inferrable from positive data. From this theorem we obtain other useful conditions for inference from positive data, and give several examples of their application. We give counterexamples to two variants of the characterizing condition, and investigate conditions for inference from positive data that avoids "overgeneralization."

805 citations
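
Angluin's characterizing condition is commonly quoted in the following form; this sketch follows standard presentations and may differ in detail from the paper's statement: an indexed family (L_i)_{i \in \mathbb{N}} of nonempty recursive languages is inferrable from positive data iff there is an effective procedure that enumerates, for each i, a finite \emph{tell-tale} set T_i \subseteq L_i with

\[ \forall j:\; T_i \subseteq L_j \implies L_j \not\subsetneq L_i. \]

Intuitively, once the tell-tale of L_i has appeared in the data, conjecturing L_i can no longer overgeneralize a target from the family.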