Author

Linwood Taylor

Bio: Linwood Taylor is an academic researcher from the University of Pittsburgh. The author has contributed to research in the topics of intelligent tutoring systems and implicit attitude, has an h-index of 4, and has co-authored 5 publications receiving 766 citations.

Papers
Proceedings ArticleDOI
01 Aug 2005
TL;DR: The Andes system demonstrates that student learning can be significantly increased by upgrading only their homework problem-solving support, and its key feature appears to be the grain-size of interaction.
Abstract: The Andes system demonstrates that student learning can be significantly increased by upgrading only their homework problem-solving support. Although Andes is called an intelligent tutoring system, it actually replaces only the students' pencil and paper as they do problem-solving homework. Students do the same problems as before, study the same textbook, and attend the same lectures, labs and recitations. Five years of experimentation at the United States Naval Academy indicates that Andes significantly improves student learning. Andes' key feature appears to be the grain-size of interaction. Whereas most tutoring systems have students enter only the answer to a problem, Andes has students enter a whole derivation, which may consist of many steps, such as drawing vectors, drawing coordinate systems, defining variables and writing equations. Andes gives feedback after each step. When the student asks for help in the middle of problem-solving, Andes gives hints on what's wrong with an incorrect step or on what kind of step to do next. Thus, the grain size of Andes' interaction is a single step in solving the problem, whereas the grain size of a typical tutoring system's interaction is the answer to the problem. This report is a comprehensive description of Andes. It describes Andes' pedagogical principles and features, the system design and implementation, the evaluations of pedagogical effectiveness, and our plans for dissemination.
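To make the grain-size distinction concrete, the sketch below contrasts answer-level feedback with step-level feedback of the kind described above. It is a minimal Python illustration; the Step type, the set-lookup checker, and the hint text are assumptions made for the example, not Andes' actual solution-graph machinery.

```python
# Minimal sketch contrasting answer-level and step-level (Andes-style) feedback.
# The Step type, the set-lookup checker, and the hint text are hypothetical;
# Andes itself matches entries against a solution graph.
from dataclasses import dataclass

@dataclass
class Step:
    kind: str      # e.g. "draw_vector", "define_variable", "write_equation"
    content: str   # the student's entry for this step

def answer_level_feedback(final_answer: str, correct_answer: str) -> str:
    """Typical answer-based tutor: feedback only on the final answer."""
    return "Correct." if final_answer == correct_answer else "Incorrect, try again."

def step_level_session(steps: list[Step], correct_entries: set[str]) -> None:
    """Step-based tutor: feedback after every step of the derivation."""
    for i, step in enumerate(steps, start=1):
        if step.content in correct_entries:
            print(f"step {i} ({step.kind}): accepted")
        else:
            # Here a tutor like Andes would hint at what is wrong with the
            # step or what kind of step to do next.
            print(f"step {i} ({step.kind}): flagged; hint on this step offered")

steps = [Step("define_variable", "m = mass of block"),
         Step("write_equation", "F_net = m*a")]
step_level_session(steps, {"m = mass of block", "F_net = m*a"})
```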

580 citations

Proceedings Article
06 May 2005
TL;DR: Five years of experimentation at the United States Naval Academy indicates that the Andes tutoring system significantly improves student learning.
Abstract: Andes is a mature intelligent tutoring system that has helped hundreds of students improve their learning of university physics. It replaces pencil and paper problem solving homework. Students continue to attend the same lectures, labs and recitations. Five years of experimentation at the United States Naval Academy indicates that it significantly improves student learning. This report describes the evaluations and what was learned from them.

124 citations

Book ChapterDOI
02 Jun 2002
TL;DR: This paper discusses attempts to teach the tacit knowledge without making Andes more invasive; Andes is a coach for physics problem solving that has had good evaluations, but it still does not teach complex problem solving as well as the authors would like.
Abstract: Solving complex physics problems requires some kind of knowledge for selecting appropriate applications of physics principles. This knowledge is tacit, in that it is not explicitly taught in textbooks, existing tutoring systems or anywhere else. Experts seem to have acquired it via implicit learning and may not be aware of it. Andes is a coach for physics problem solving that has had good evaluations, but still does not teach complex problem solving as well as we would like. The conventional ITS approach to increasing its effectiveness requires teaching the tacit knowledge explicitly, and yet this would cause Andes to be more invasive. In particular, the textbooks and instructors would have to make space in an already packed curriculum for teaching the tacit knowledge. This paper discusses our attempts to teach the tacit knowledge without making Andes more invasive.

43 citations

Book ChapterDOI
30 Aug 2004
TL;DR: In this paper, two physics tutoring systems, Andes and Pyrenees, were compared: Pyrenees is a model-tracing tutor that teaches a problem-solving strategy explicitly, whereas Andes uses a novel pedagogy, developed over many years of use in the field, that provides virtually no explicit strategic instruction.
Abstract: University physics is typical of many cognitive skills in that there is no standard procedure for solving problems, and yet a few students still master the skill. This suggests that their learning of problem solving strategies is implicit, and that an effective tutoring system need not teach problem solving strategies as explicitly as model-tracing tutors do. In order to compare implicit vs. explicit learning of problem solving strategies, we developed two physics tutoring systems, Andes and Pyrenees. Pyrenees is a model-tracing tutor that teaches a problem solving strategy explicitly, whereas Andes uses a novel pedagogy, developed over many years of use in the field, that provides virtually no explicit strategic instruction. Preliminary results from an experiment comparing the two systems are reported.

29 citations

01 Dec 2004
TL;DR: In this paper, two physics tutoring systems, Andes and Pyrenees, were compared: Pyrenees is a model-tracing tutor that teaches a problem-solving strategy explicitly, whereas Andes uses a novel pedagogy, developed over many years of use in the field, that provides virtually no explicit strategic instruction.
Abstract: University physics is typical of many cognitive skills in that there is no standard procedure for solving problems, and yet a few students still master the skill. This suggests that their learning of problem solving strategies is implicit, and that an effective tutoring system need not teach problem solving strategies as explicitly as model-tracing tutors do. In order to compare implicit vs. explicit learning of problem solving strategies, we developed two physics tutoring systems, Andes and Pyrenees. Pyrenees is a model-tracing tutor that teaches a problem solving strategy explicitly, whereas Andes uses a novel pedagogy, developed over many years of use in the field, that provides virtually no explicit strategic instruction. Preliminary results from an experiment comparing the two systems are reported.

2 citations


Cited by
Journal ArticleDOI
TL;DR: This article reviewed the corpus of research on feedback, with a focus on formative feedback, defined as information communicated to the learner that is intended to modify his or her thinking or behavior to improve learning.
Abstract: This article reviews the corpus of research on feedback, with a focus on formative feedback—defined as information communicated to the learner that is intended to modify his or her thinking or behavior to improve learning. According to researchers, formative feedback should be nonevaluative, supportive, timely, and specific. Formative feedback is usually presented as information to a learner in response to some action on the learner's part. It comes in a variety of types (e.g., verification of response accuracy, explanation of the correct answer, hints, worked examples) and can be administered at various times during the learning process (e.g., immediately following an answer, after some time has elapsed). Finally, several variables have been shown to interact with formative feedback's success at promoting learning (e.g., individual characteristics of the learner and aspects of the task). All of these issues are discussed. This review concludes with guidelines for generating formative feedback.

2,893 citations

Journal ArticleDOI
TL;DR: This paper reviews the corpus of research on feedback, with a particular focus on formative feedback—defined as information communicated to the learner that is intended to modify the learners' thinking or behavior for the purpose of improving learning, and concludes with a set of guidelines for generatingformative feedback.
Abstract: This paper reviews the corpus of research on feedback, with a particular focus on formative feedback—defined as information communicated to the learner that is intended to modify the learner's thinking or behavior for the purpose of improving learning. According to researchers in the area, formative feedback should be multidimensional, nonevaluative, supportive, timely, specific, credible, infrequent, and genuine (e.g., Brophy, 1981; Schwartz & White, 2000). Formative feedback is usually presented as information to a learner in response to some action on the learner's part. It comes in a variety of types (e.g., verification of response accuracy, explanation of the correct answer, hints, worked examples) and can be administered at various times during the learning process (e.g., immediately following an answer, after some period of time has elapsed). Finally, there are a number of variables that have been shown to interact with formative feedback's success at promoting learning (e.g., individual characteristics of the learner and aspects of the task). All of these issues will be discussed in this paper. This review concludes with a set of guidelines for generating formative feedback.
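As a rough illustration of the dimensions this review catalogs, the sketch below encodes feedback type and timing as simple Python enums; the names and the example message are assumptions made for illustration, not a schema proposed by the article.

```python
# Hypothetical encoding of the feedback dimensions discussed in the review:
# what information is returned to the learner and when it is delivered.
from dataclasses import dataclass
from enum import Enum, auto

class FeedbackType(Enum):
    VERIFICATION = auto()      # is the response right or wrong?
    CORRECT_ANSWER = auto()    # explanation of the correct answer
    HINT = auto()              # nudge toward the next step
    WORKED_EXAMPLE = auto()    # a fully worked solution

class Timing(Enum):
    IMMEDIATE = auto()         # right after the learner's response
    DELAYED = auto()           # after some period of time has elapsed

@dataclass
class FormativeFeedback:
    feedback_type: FeedbackType
    timing: Timing
    message: str               # the supportive, specific content itself

# Example: an immediate hint in response to an incorrect step.
fb = FormativeFeedback(FeedbackType.HINT, Timing.IMMEDIATE,
                       "Consider which quantities the problem already gives you.")
print(fb)
```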

1,221 citations

Journal ArticleDOI
TL;DR: It was found that the effect size of human tutoring was much lower than previously thought, and that intelligent tutoring systems were nearly as effective as human tutors.
Abstract: This article is a review of experiments comparing the effectiveness of human tutoring, computer tutoring, and no tutoring. "No tutoring" refers to instruction that teaches the same content without tutoring. The computer tutoring systems were divided by their granularity of the user interface interaction into answer-based, step-based, and substep-based tutoring systems. Most intelligent tutoring systems have step-based or substep-based granularities of interaction, whereas most other tutoring systems (often called CAI, CBT, or CAL systems) have answer-based user interfaces. It is widely believed that as the granularity of tutoring decreases, the effectiveness increases. In particular, when compared to no tutoring, the effect sizes of answer-based tutoring systems, intelligent tutoring systems, and adult human tutors are believed to be d = 0.3, 1.0, and 2.0, respectively. This review did not confirm these beliefs. Instead, it found that the effect size of human tutoring was much lower: d = 0.79. Moreover, the eff...
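The d values quoted here are effect sizes (Cohen's d). As a reminder of what such a number means, the sketch below computes d for a tutored group versus a no-tutoring control using the pooled standard deviation; the score lists are invented purely for illustration.

```python
# Cohen's d for a tutoring condition vs. a no-tutoring control, using the
# pooled standard deviation. The post-test scores are made up solely to
# illustrate the calculation behind figures such as d = 0.79.
from statistics import mean, stdev

def cohens_d(treatment: list[float], control: list[float]) -> float:
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

tutored   = [78.0, 85.0, 90.0, 72.0, 88.0, 81.0]   # hypothetical post-test scores
untutored = [70.0, 75.0, 82.0, 65.0, 77.0, 73.0]
print(f"d = {cohens_d(tutored, untutored):.2f}")
```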

1,018 citations

Journal ArticleDOI
TL;DR: Findings suggest that significant effort should be put into detecting and responding to boredom and confusion, with a particular emphasis on developing pedagogical interventions to disrupt the "vicious cycles" which occur when a student becomes bored and remains bored for long periods of time.
Abstract: We study the incidence (rate of occurrence), persistence (rate of reoccurrence immediately after occurrence), and impact (effect on behavior) of students' cognitive-affective states during their use of three different computer-based learning environments. Students' cognitive-affective states are studied using different populations (Philippines, USA), different methods (quantitative field observation, self-report), and different types of learning environments (dialogue tutor, problem-solving game, and problem-solving-based Intelligent Tutoring System). By varying the studies along these multiple factors, we can have greater confidence that findings which generalize across studies are robust. The incidence, persistence, and impact of boredom, frustration, confusion, engaged concentration, delight, and surprise were compared. We found that boredom was very persistent across learning environments and was associated with poorer learning and problem behaviors, such as gaming the system. Despite prior hypotheses to the contrary, frustration was less persistent, less associated with poorer learning, and did not appear to be an antecedent to gaming the system. Confusion and engaged concentration were the most common states within all three learning environments. Experiences of delight and surprise were rare. These findings suggest that significant effort should be put into detecting and responding to boredom and confusion, with a particular emphasis on developing pedagogical interventions to disrupt the "vicious cycles" which occur when a student becomes bored and remains bored for long periods of time.
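Using the definitions given in the abstract (incidence as rate of occurrence, persistence as rate of reoccurrence immediately after occurrence), these quantities can be computed as simple proportions over a sequence of observations, as sketched below. The observation sequence is invented, and the plain-proportion treatment is a simplification of the analysis actually reported.

```python
# Simplified incidence and persistence for one affective state over a sequence
# of observations. The sequence is invented; the study itself uses a more
# careful statistic, so treat this as an illustration of the definitions only.

def incidence(observations: list[str], state: str) -> float:
    """Fraction of observations in which the state occurs."""
    return sum(obs == state for obs in observations) / len(observations)

def persistence(observations: list[str], state: str) -> float:
    """Fraction of occurrences immediately followed by the same state."""
    occurrences = [i for i, obs in enumerate(observations[:-1]) if obs == state]
    if not occurrences:
        return 0.0
    return sum(observations[i + 1] == state for i in occurrences) / len(occurrences)

obs = ["concentration", "boredom", "boredom", "boredom", "confusion",
       "concentration", "boredom", "delight"]
print(f"incidence of boredom:   {incidence(obs, 'boredom'):.2f}")
print(f"persistence of boredom: {persistence(obs, 'boredom'):.2f}")
```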

765 citations

Proceedings Article
01 Aug 2006
TL;DR: Although tutoring systems differ widely in their task domains, user interfaces, software structures, knowledge bases, etc., their behaviors are in fact quite similar.
Abstract: Tutoring systems are described as having two loops. The outer loop executes once for each task, where a task usually consists of solving a complex, multi-step problem. The inner loop executes once for each step taken by the student in the solution of a task. The inner loop can give feedback and hints on each step. The inner loop can also assess the student's evolving competence and update a student model, which is used by the outer loop to select a next task that is appropriate for the student. For those who know little about tutoring systems, this description is meant as a demystifying introduction. For tutoring system experts, this description illustrates that although tutoring systems differ widely in their task domains, user interfaces, software structures, knowledge bases, etc., their behaviors are in fact quite similar.
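The two-loop behavior described here can be summarized in a short sketch: an outer loop that selects tasks using the student model, and an inner loop that gives feedback on each step and updates that model. The data types, selection rule, and mastery update below are illustrative assumptions, not part of the paper.

```python
# Sketch of the outer-loop / inner-loop structure described in the abstract.
# Task, StudentModel, the selection rule, and the mastery update are
# hypothetical stand-ins, not the paper's specification.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Task:
    name: str
    steps: list[str]                      # skill exercised by each step, in order

@dataclass
class StudentModel:
    mastery: dict[str, float] = field(default_factory=dict)   # skill -> estimate

def select_next_task(model: StudentModel, tasks: list[Task]) -> Optional[Task]:
    """Outer-loop decision: pick an appropriate task (here simply the next one)."""
    return tasks.pop(0) if tasks else None

def tutor(tasks: list[Task], student_attempts_step, model: StudentModel) -> None:
    # Outer loop: runs once per task.
    while (task := select_next_task(model, tasks)) is not None:
        # Inner loop: runs once per step the student takes within the task.
        for skill in task.steps:
            correct = student_attempts_step(task, skill)       # e.g. a UI callback
            # Feedback and hints on this step would be delivered here.
            print(f"{task.name}/{skill}: {'correct' if correct else 'hint given'}")
            # Update the student model; the outer loop uses it to pick tasks.
            old = model.mastery.get(skill, 0.5)
            model.mastery[skill] = old + 0.1 * ((1.0 if correct else 0.0) - old)

# Example run with a student who gets every step right.
tutor([Task("incline-problem", ["draw_fbd", "write_newton_2nd"])],
      lambda task, skill: True, StudentModel())
```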

718 citations