
Showing papers on "Algorithmic learning theory" published in 1978


Journal ArticleDOI
TL;DR: In this paper, a model of second language learning is proposed which attempts to account for discrepancies both in individual achievement and in achievement across different aspects of second language learning, covering the input of information through exposure to the language, the storage of that information by the learner, and the responses produced as a function of the stored information.
Abstract: A model of second language learning is proposed which attempts to account for discrepancies both in individual achievement and achievement in different aspects of second language learning. The model outlines aspects of the input of information through various kinds of exposure to the language, the storage of that information for the language learner, and the responses that are produced as a function of the stored information. The operation of the model is explained in terms of learning processes and learning strategies. The former refer to the obligatory relationships that hold between aspects of the model and are true for all second language learners. The latter describe a group of optional strategies that may be employed by different language learners and in different learning situations. Individual learner characteristics, such as language learning aptitude and attitude, affect the efficiency with which the processes will operate for an individual and the extent to which he will use the learning strategies. Illustrations are provided to explain how the model would account for performance on a number of different learning tasks.

506 citations


Book
01 Jan 1978

153 citations



Book ChapterDOI
04 Sep 1978
TL;DR: Algorithmic Theory of Stacks as discussed by the authors formalizes properties of relational systems of stacks and proves that every relational system of stacks is isomorphic to a system of finite sequences of elements.
Abstract: Algorithmic Theory of Stacks (ATS) formalizes properties of relational systems of stacks. It turns out that, apart from the previously known axioms, a new axiom of algorithmic nature, [while ¬empty(s) do s := pop(s)] true, is required. The representation theorem stating that every relational system of stacks is isomorphic to a system of finite sequences of elements is proved. The connections between ATS and a type STACKS declaration (written in the LOGLAN programming language) are shown.
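To make the algorithmic axiom concrete, here is a minimal Python sketch (an illustrative assumption, not the paper's LOGLAN STACKS declaration): stacks are modelled as finite sequences, as in the representation theorem, and the axiom [while ¬empty(s) do s := pop(s)] true then simply says that repeatedly popping any stack terminates.

# Minimal sketch: stacks as finite sequences (tuples), an assumed model
# matching the representation theorem, not the LOGLAN STACKS declaration.

def empty(s):
    return len(s) == 0

def push(e, s):
    return (e,) + s

def pop(s):
    assert not empty(s), "pop is undefined on the empty stack"
    return s[1:]

def top(s):
    assert not empty(s), "top is undefined on the empty stack"
    return s[0]

def popping_terminates(s):
    # The algorithmic axiom [while not empty(s) do s := pop(s)] true:
    # the loop halts for every stack in the finite-sequence model.
    while not empty(s):
        s = pop(s)
    return True

s = push(3, push(2, push(1, ())))
assert top(s) == 3 and top(pop(s)) == 2
assert popping_terminates(s)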

18 citations


08 Feb 1978
TL;DR: The ACT theory of learning is embodied as a computer simulation program that makes predictions about human learning of various cognitive skills such as language fluency, study skills for social science texts, problem-solving skills in mathematics, and computer programming skills.
Abstract: The paper describes the ACT theory of learning. The theory is embodied as a computer simulation program that makes predictions about human learning of various cognitive skills such as language fluency, study skills for social science texts, problem-solving skills in mathematics, and computer programming skills. The learning takes place within the ACT theory of the performance of such skills. This theory involves a propositional network representation of general factual knowledge and a production system representation of procedural knowledge. Skill learning mainly involves addition and modification of the productions. There are five mechanisms by which this takes place: designation, strengthening, generalization, discrimination, and composition. Each of these five learning mechanisms is discussed in detail and related to available data in procedural learning.
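To illustrate the production-system side of the theory, the following is a hedged Python sketch: the rule format, numeric strengths, and additive strengthening below are assumptions for illustration, not Anderson's ACT simulation program.

# Toy production system with one learning mechanism (strengthening).
# Working memory is a plain dict; all details are illustrative assumptions.

class Production:
    def __init__(self, name, condition, action, strength=1.0):
        self.name = name
        self.condition = condition   # predicate over working memory
        self.action = action         # function that updates working memory
        self.strength = strength

def cycle(productions, memory):
    # Recognize-act cycle: fire the strongest matching production,
    # then strengthen it because it applied successfully.
    matching = [p for p in productions if p.condition(memory)]
    if not matching:
        return False
    chosen = max(matching, key=lambda p: p.strength)
    chosen.action(memory)
    chosen.strength += 0.5           # assumed additive strengthening
    return True

# Toy skill: forming the plural of a regular English noun.
memory = {"goal": "pluralize", "word": "book"}
plural_rule = Production(
    "regular-plural",
    condition=lambda m: m.get("goal") == "pluralize" and "answer" not in m,
    action=lambda m: m.update(answer=m["word"] + "s"),
)
cycle([plural_rule], memory)
print(memory["answer"], plural_rule.strength)   # books 1.5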

13 citations


Book ChapterDOI
01 Jan 1978
TL;DR: Although most traditional concept learning tasks employ only attribute-value descriptions, this learning task requires higher order, relational logic to characterize the structural constraints among the lines of a triangle.
Abstract: Everyone has many personal experiences of learning by example. While much psychological research has investigated "concept learning" (cf. Bruner, Goodnow, & Austin, 1956; Hayes-Roth & Hayes-Roth, in press; Hunt, 1952), that rubric is too narrow to embrace the variety of situations in which learning by example occurs. A brief list of such situations includes:

1. Traditional concept learning, such as inducing the class characteristics of "triangle": "Three distinct line segments such that each line segment is coterminous with a different line segment at each of its endpoints." Such a rule can be induced from various examples of triangles; all examples necessarily manifest the rule, although they may differ from one another in irrelevant ways (e.g., in absolute and relative size, shape, orientation, color, texture). Note that although most traditional concept learning tasks employ only attribute-value descriptions, this learning task requires higher-order, relational logic to characterize the structural constraints among the lines of a triangle (see the sketch after this list).

2. Serial pattern learning, such as predicting the next item in a conceptually organized sequence. Traditional research on this problem has centered on mathematical sequences of symbols and various algorithmic models of memory processes for simulating the sequence generator. Other examples of this type of behavior include anticipation of expectable events (e.g., words or topics in a text that are predictable from preceding context) and prediction of cyclic phenomena. In these situations, subsequences of the preceding sequence of items serve as examples from which the sequence-generation rule is induced.
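A minimal Python sketch of the point made in item 1 (the segment representation and the check below are assumptions for illustration): the triangle rule cannot be stated as attribute values of single segments, but it can be checked as a relation over pairs of endpoints.

# Sketch: testing the relational "triangle" rule on three line segments.
# A segment is a pair of (x, y) endpoints; the representation is assumed.

def is_triangle(segments):
    # Three distinct segments, each coterminous with a *different*
    # other segment at each of its two endpoints.
    if len(segments) != 3 or len(set(segments)) != 3:
        return False
    for i, (a, b) in enumerate(segments):
        others = [seg for j, seg in enumerate(segments) if j != i]
        touches_a = [seg for seg in others if a in seg]
        touches_b = [seg for seg in others if b in seg]
        if not touches_a or not touches_b or touches_a == touches_b:
            return False
    return True

triangle = (((0, 0), (1, 0)), ((1, 0), (0, 1)), ((0, 1), (0, 0)))
open_path = (((0, 0), (1, 0)), ((1, 0), (0, 1)), ((0, 1), (2, 2)))
print(is_triangle(triangle), is_triangle(open_path))   # True False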

12 citations


Journal ArticleDOI
TL;DR: A response to the recent discussions critical of the Bayesian learning procedure on the basis of empirically observed deviations from its prescriptions; Bayes' theorem is embedded in a more general class of learning rules, and examples of surprising learning behaviours and decision strategies are generated.
Abstract: A response is made to the recent discussions critical of the Bayesian learning procedure on the basis of empirically observed deviations from its prescriptions. Bayes' theorem is embedded in a more general class of learning rules which allow for departure from the demands of idealized rational behaviour. Such departures are termed learning impediments or disabilities. Some particular forms and interpretations of impediment functions are presented. Consequences of learning disabilities for the likelihood principle, stable estimation and admissible decision-making are explored. Examples of surprising learning behaviours and decision strategies are generated. Deeper understanding of Bayesian learning and its characteristics results.
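One concrete reading of "embedding Bayes' theorem in a more general class of learning rules" is sketched below in Python; the particular impediment form (a tempering exponent on the likelihood) is an assumption for illustration, not necessarily one of the paper's impediment functions.

# Sketch: an "impeded" Bayesian update over a discrete hypothesis space.
# lam = 1 recovers the ordinary Bayes rule; lam < 1 under-weights evidence.

def impeded_update(prior, likelihood, lam=1.0):
    # prior: hypothesis -> P(h); likelihood: hypothesis -> P(data | h)
    unnorm = {h: p * (likelihood[h] ** lam) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: w / z for h, w in unnorm.items()}

prior = {"fair": 0.5, "biased": 0.5}
lik_heads = {"fair": 0.5, "biased": 0.9}            # P(heads | hypothesis)
print(impeded_update(prior, lik_heads, lam=1.0))    # idealized Bayesian learner
print(impeded_update(prior, lik_heads, lam=0.3))    # learner with an impediment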

6 citations


Journal ArticleDOI
TL;DR: This paper analyzed three well-known paradoxes to show how mathematical statements can cause mental disharmony in mathematics learning and proposed a psychological theory of confusion in mathematics education, which has important implications for classroom practice and curriculum innovation.
Abstract: This paper analyses three well‐known paradoxes to show how mathematical statements can cause mental disharmony. The outcome forms a basis for a psychological theory of confusion in mathematics learning which has important implications for classroom practice and curriculum innovation. In particular, it draws the educator's attention to the true nature of his art— the creation and communication of systems of shared meanings.

3 citations


Journal ArticleDOI
TL;DR: This paper shows the validity of Estes' learning theory using the Tsetlin model (i.e., the interaction between random media and automata) and thereby proposes a proof of the psychological hypothesis.
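For context, the Tsetlin model refers to a deterministic finite automaton interacting with a random environment that rewards its actions with fixed probabilities; the Python sketch below (memory depth, reward probabilities, and update rule are assumptions for illustration) shows such an automaton gradually settling on the more often rewarded action.

import random

# Sketch: a two-action Tsetlin automaton with memory depth N in a
# stationary random environment. All parameters are illustrative.

def run_tsetlin(reward_probs, depth=3, steps=5000, seed=0):
    # States 1..depth select action 0; states depth+1..2*depth select action 1.
    # Reward pushes the state deeper into the current half;
    # penalty pushes it toward, and eventually across, the boundary.
    rng = random.Random(seed)
    state = 1
    counts = [0, 0]
    for _ in range(steps):
        action = 0 if state <= depth else 1
        counts[action] += 1
        rewarded = rng.random() < reward_probs[action]
        if action == 0:
            state = max(1, state - 1) if rewarded else state + 1
        else:
            state = min(2 * depth, state + 1) if rewarded else state - 1
    return counts

print(run_tsetlin(reward_probs=[0.3, 0.7]))   # action 1 chosen far more often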

3 citations



Journal ArticleDOI
TL;DR: Two teaching programs are evaluated for their effectiveness with respect to overall learning gain, retention of the learned content, and transfer to new subject areas, using the probabilistic measurement theory of Rasch.
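For reference, the dichotomous Rasch model expresses the probability of a correct response as a logistic function of the difference between person ability and item difficulty, which is what allows gain, retention, and transfer to be compared on one scale; the Python sketch below uses made-up numbers, not data from the study.

import math

# Sketch of the dichotomous Rasch model; ability/difficulty values are assumed.

def rasch_probability(theta, b):
    # P(correct) = exp(theta - b) / (1 + exp(theta - b))
    return 1.0 / (1.0 + math.exp(-(theta - b)))

item_difficulty = 0.5
ability_before, ability_after = -0.2, 1.1     # assumed pre- and post-instruction
print(rasch_probability(ability_before, item_difficulty))   # ~0.33
print(rasch_probability(ability_after, item_difficulty))    # ~0.65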

Book ChapterDOI
01 Jan 1978
TL;DR: A new algebraic method, the so-called "structure theory" developed by Blickle and coworkers and applied to a wide range of chemical engineering problems, proves to be a very useful tool for learning processes.
Abstract: A new algebraic method, the so-called "structure theory" developed by Blickle and coworkers and applied to solving a wide range of chemical engineering problems, proves to be a very useful tool for learning processes. The learning algorithms based on the structure theory consist of the following steps: labelling, feature selection, training, determination of the structure, and recognition.
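The five steps named in the abstract can be lined up as a simple pipeline; the Python sketch below is a hedged stand-in (function names, the prototype-based training step, and the nearest-prototype recognition rule are assumptions, not the algebraic structure theory itself).

# Sketch of a pipeline with the five named steps: labelling, feature
# selection, training, determination of the structure, recognition.
# All internals are illustrative assumptions.

def label(samples, labels):
    return list(zip(samples, labels))

def select_features(labelled, keep):
    return [([x[i] for i in keep], y) for x, y in labelled]

def train(labelled):
    # Assumed training step: per-class mean feature vectors (prototypes).
    sums, counts = {}, {}
    for x, y in labelled:
        sums[y] = [a + b for a, b in zip(sums.get(y, [0.0] * len(x)), x)]
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def determine_structure(prototypes):
    # Stand-in for the algebraic structure-determination step:
    # here, just a fixed ordering of the recognized classes.
    return sorted(prototypes)

def recognize(x, prototypes, keep):
    xf = [x[i] for i in keep]
    dist = lambda y: sum((a - b) ** 2 for a, b in zip(xf, prototypes[y]))
    return min(prototypes, key=dist)

samples = [(1.0, 0.2, 5.0), (0.9, 0.1, 7.0), (3.1, 0.3, 5.5), (3.0, 0.2, 6.5)]
labels = ["A", "A", "B", "B"]
keep = [0]                                    # feature selection: first feature
prototypes = train(select_features(label(samples, labels), keep))
print(determine_structure(prototypes))        # ['A', 'B']
print(recognize((2.9, 0.0, 6.0), prototypes, keep))   # B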