A Solution to Wiehagen's Thesis
Summary
Introduction
- In Gold-style learning [10] (also known as inductive inference), a learner tries to learn an infinite sequence of function values, given more and more finite information about this sequence.
- Gold, in his seminal paper [10], gave a first, simple learning criterion, later called Ex-learning, where a learner is successful iff it eventually stops changing its conjectures and its final conjecture is a correct program (computing the input sequence); a toy Ex-learner is sketched after this list.
- In Theorem 3 the authors discuss four different learning criteria for which the thesis does not hold.
- From these results on learning with a semantically 1-1 enumeration, the authors derive corollaries showing that the learning criteria to which the theorems apply allow for strongly decisive and conservative learning (see Definition 1); for plain Ex-learning, for example, this proves a stronger version of a result from [15] (which showed that Ex-learning can be done decisively).
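To make the Ex criterion concrete, here is a minimal illustrative sketch (not from the paper, and with all names chosen for this example): a learner for the toy class of almost-everywhere-zero functions that, after seeing all nonzero values, stops changing its conjecture, and whose final conjecture computes the target.

```python
# A toy Ex-learner for the class of almost-everywhere-zero functions:
# each target equals some finite prefix followed by zeros.  The learner
# conjectures "repeat the nonzero data seen so far, then output 0
# forever"; once all nonzero values have appeared, the conjecture never
# changes again and computes the target -- the Ex success criterion.

def make_conjecture(prefix):
    """A 'program' (modeled as a closure) for prefix followed by zeros."""
    prefix = tuple(prefix)

    def program(x):
        return prefix[x] if x < len(prefix) else 0
    program.text = prefix  # the program's "syntax", to track mind changes
    return program

def ex_learn(target, steps):
    """Feed target(0), target(1), ... to the learner; yield conjectures."""
    data = []
    for x in range(steps):
        data.append(target(x))
        trimmed = list(data)
        while trimmed and trimmed[-1] == 0:  # drop trailing zeros so the
            trimmed.pop()                    # conjecture can stabilize
        yield make_conjecture(trimmed)

target = make_conjecture([3, 1, 4])          # f = 3, 1, 4, 0, 0, ...
conjectures = list(ex_learn(target, 10))
assert all(c.text == (3, 1, 4) for c in conjectures[2:])  # converged
```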
Definition 1.
- The authors say that a learner exhibits a U-shape when it first outputs a correct conjecture, abandons it, and then returns to a correct conjecture; a check for this pattern is sketched after this list.
- Forbidding these kinds of U-shapes leads to the respective non-U-shapedness restrictions SynNU, NU and SNU.
- Forbidding returns to abandoned conjectures more generally yields the three corresponding decisiveness restrictions.
- Note that the literature contains many more learning criteria than those constructible from the parts given in this section (see the textbook [11] or the survey [19] for an overview).
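The following small sketch (illustrative only; the helpers are hypothetical) makes the "returning to an abandoned conjecture" patterns concrete on a finite record of conjectures. Since semantic equivalence of programs is undecidable in general, semantic equality is passed in as an oracle `sem_eq`.

```python
# Checks on a finite sequence of conjectures, with semantic equality
# supplied as an oracle (a hypothetical helper for this sketch only).

def returns_to_abandoned(conjectures, sem_eq):
    """True iff some conjecture is semantically abandoned and a
    semantically equal conjecture is later output again."""
    n = len(conjectures)
    for i in range(n):
        for j in range(i + 1, n):
            if not sem_eq(conjectures[j], conjectures[i]):    # abandoned
                if any(sem_eq(conjectures[k], conjectures[i])
                       for k in range(j + 1, n)):             # returned
                    return True
    return False

def has_u_shape(conjectures, correct, sem_eq):
    """True iff a correct conjecture is abandoned and the learner
    later returns to a correct conjecture."""
    n = len(conjectures)
    for i in range(n):
        if correct(conjectures[i]):
            for j in range(i + 1, n):
                if not sem_eq(conjectures[j], conjectures[i]):
                    if any(correct(conjectures[k])
                           for k in range(j + 1, n)):
                        return True
    return False

# Toy run: programs are strings, semantics is string identity,
# and "good" is the only correct program.
seq = ["good", "bad", "good"]
same = lambda p, q: p == q
assert has_u_shape(seq, lambda p: p == "good", same)
assert returns_to_abandoned(seq, same)
```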
3 Learning by Enumeration
- Among the wealth of (theoretically possible) learning criteria, the authors quickly see that some do not allow for learning by enumeration.
- With these definitions, the authors get the following theorem.
- The following learning criteria do not allow for learning by enumeration.
- The power and versatility of Theorem 13 become apparent in connection with Remark 3 and the various examples of sequence acceptance criteria fulfilling its prerequisites; this leads, for example, to the following corollary.
- For sequence acceptance criteria δ fulfilling these prerequisites, RItδ then allows for learning by enumeration; the basic enumeration strategy is sketched below.
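For concreteness, here is a sketch of the classic identification-by-enumeration strategy underlying learning by enumeration. Modeling the hypothesis space as a Python list of total programs is an assumption of this sketch, not the paper's formalization.

```python
# Identification by enumeration: on each data prefix, output the least
# index whose program is consistent with everything seen so far.

def enumeration_learner(enumeration, data):
    """Return the least index i such that enumeration[i] fits the data."""
    for i, program in enumerate(enumeration):
        if all(program(x) == y for x, y in enumerate(data)):
            return i
    return None  # cannot happen if the target occurs in the enumeration

# Toy enumeration of three total functions.
enumeration = [lambda x: 0, lambda x: x, lambda x: x * x]
target = lambda x: x * x                   # the function to be learned

data = []
for x in range(5):
    data.append(target(x))
    print(x, "->", enumeration_learner(enumeration, data))
# The output stabilizes on index 2: once a lower-indexed candidate is
# refuted it stays refuted, so the learner never abandons a conjecture
# that is still consistent with the data.
```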
Definition 5.
- An enumeration e is semantically 1-1 iff any two indices that e maps to semantically equal programs are themselves equal. That is (by taking the contrapositive), different pre-images under e not only give different images, but even semantically different images.
- The authors say that a learning criterion I allows for learning by semantically 1-1 enumeration iff each I-learnable set S is I-learnable by semantically 1-1 enumeration.
- Let h learn by semantically 1-1 enumeration. Such an h never returns to an abandoned conjecture: an abandoned conjecture from the enumeration stays inconsistent with the data, and, by semantic 1-1-ness, no later conjecture computes the same function; hence h is strongly decisive (see the sketch below).
- In particular, for any learning criterion I allowing for learning by semantically 1-1 enumeration, every I-learnable set is I-learnable by a strongly decisive learner.
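The following sketch illustrates the semantic 1-1 property on a toy enumeration. Comparing programs on finitely many inputs only approximates semantic equality (which is undecidable in general); the test points are an assumption made for illustration.

```python
# Approximate check of the semantic 1-1 property on a toy enumeration:
# no two listed programs agree on all test points.

def semantically_one_one(enumeration, test_points):
    """True iff the listed programs are pairwise distinct on the points."""
    behaviours = [tuple(p(x) for x in test_points) for p in enumeration]
    return len(set(behaviours)) == len(behaviours)

enumeration = [lambda x: 0, lambda x: x, lambda x: x * x]
assert semantically_one_one(enumeration, range(10))

# Combined with the enumeration learner above, this illustrates strong
# decisiveness: an abandoned conjecture e(i) stays inconsistent with
# the data, and no later e(j) computes the same function, so nothing
# semantically equal to e(i) is ever output again.
```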