
Showing papers by "Arthur C. Graesser published in 2003"



Journal ArticleDOI
TL;DR: For instance, this paper found that deep comprehenders did not ask more questions, but did generate a higher proportion of good questions about plausible faults that explained the breakdowns, which is an excellent litmus test of deep comprehension.
Abstract: Models of question asking predict that questions are asked when comprehenders experience cognitive disequilibrium, which is triggered by contradictions, anomalies, obstacles, salient contrasts, and uncertainty. Questions should emerge when a person studies a device (e.g., a lock) and encounters a breakdown scenario ("the key turns but the bolt doesn't move"). Participants read illustrated texts and breakdown scenarios, with instructions to ask questions or think aloud. Participants subsequently completed a device-comprehension test, and tests of cognitive ability and personality. Deep comprehenders did not ask more questions, but did generate a higher proportion of good questions about plausible faults that explained the breakdowns. An excellent litmus test of deep comprehension is the quality of questions asked when confronted with breakdown scenarios.

213 citations



Journal ArticleDOI
TL;DR: In this paper, students overheard two computer-controlled virtual agents discussing four computer literacy topics in dialog discourse and four in monologue discourse and found that learners wrote significantly more content and significantly more relevant content in the deep question condition than in the monologue condition.
Abstract: In two experiments, students overheard two computer-controlled virtual agents discussing four computer literacy topics in dialog discourse and four in monologue discourse. In Experiment 1, the virtual tutee asked a series of deep questions in the dialog condition, but only one per topic in the monologue condition in both studies. In the dialog conditions of Experiment 2, the virtual tutee asked either deep questions, shallow questions, or made comments. In a fourth “dialog” condition, the comments were spoken by the virtual tutor. The discourse spoken by the virtual tutor was identical in the dialog and monologue conditions, except in the fourth dialog condition. In both studies, learners wrote significantly more content and significantly more relevant content in the deep question condition than in the monologue condition. No other differences were significant. Results were discussed in terms of advanced organizers, schema theory, and discourse comprehension theory.

69 citations


Proceedings Article
01 Jan 2003
TL;DR: Author(s): Graesser, A.C.; Jackson, G.T.; Matthews, E.C.; Mitchell, H.H.; Olney, A.
Abstract: Author(s): Graesser, A.C.; Jackson, G.T.; Matthews, E.C.; Mitchell, H.H.; Olney, A.; Ventura, M.; Chipman, P.; Franceschetti, D.; Hu, X.; Louwerse, M.M.; Person, N.K.; Tutoring Research Group

68 citations


Proceedings ArticleDOI
31 May 2003
TL;DR: This paper describes classification of typed student utterances within AutoTutor, an intelligent tutoring system that uses part of speech tagging, cascaded finite state transducers, and simple disambiguation rules to classify utterances.
Abstract: This paper describes classification of typed student utterances within AutoTutor, an intelligent tutoring system. Utterances are classified to one of 18 categories, including 16 question categories. The classifier presented uses part of speech tagging, cascaded finite state transducers, and simple disambiguation rules. Shallow NLP is well suited to the task: session log file analysis reveals significant classification of eleven question categories, frozen expressions, and assertions.
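The shallow-NLP pipeline described above can be illustrated with a minimal sketch. This is not AutoTutor's actual classifier: the real system uses part-of-speech tagging and cascaded finite state transducers over 18 categories, while the surface patterns and category names below are invented stand-ins for a few of them.

```python
import re

# Hypothetical, simplified version of the cascaded classification idea:
# ordered surface patterns (standing in for POS tags and finite state
# transducers) map a typed student utterance to a coarse category.
RULES = [
    ("definition-question",   re.compile(r"^\s*what\s+(is|are)\b", re.I)),
    ("causal-question",       re.compile(r"^\s*(why|how\s+come)\b", re.I)),
    ("procedural-question",   re.compile(r"^\s*how\s+(do|does|can|would)\b", re.I)),
    ("verification-question", re.compile(r"^\s*(is|are|was|were|do|does|did|can|could)\b.*\?\s*$", re.I)),
    ("frozen-expression",     re.compile(r"^\s*(ok(ay)?|yes|no|i\s+don'?t\s+know|thanks?)\s*[.!]?\s*$", re.I)),
]

def classify(utterance: str) -> str:
    """Return the first matching category; a simple disambiguation rule
    treats anything unmatched as an assertion."""
    for label, pattern in RULES:
        if pattern.search(utterance):
            return label
    return "assertion"
```

The ordering of the rules acts as the disambiguation step: a "what is" question is claimed by the definition rule before the more general verification rule can fire.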

58 citations



Proceedings Article
09 Aug 2003
TL;DR: A new LSA algorithm significantly improves the precision of AutoTutor's natural language understanding and can be applied to other natural language understanding applications.
Abstract: The intelligent tutoring system AutoTutor uses latent semantic analysis to evaluate student answers to the tutor's questions. By comparing a student's answer to a set of expected answers, the system determines how much information is covered and how to continue the tutorial. Despite the success of LSA in tutoring conversations, the system sometimes has difficulties determining at an early stage whether or not an expectation is covered. A new LSA algorithm significantly improves the precision of AutoTutor's natural language understanding and can be applied to other natural language understanding applications.
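The coverage check at the heart of this approach can be sketched briefly. This is an illustration of the general LSA technique, not AutoTutor's code: sentences are vectors in an LSA space, and an expectation counts as covered when the cosine between the student's answer vector and the expectation vector exceeds a threshold (the 0.7 default below is an arbitrary placeholder).

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two LSA document vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def covered(answer_vec: np.ndarray, expectation_vec: np.ndarray,
            threshold: float = 0.7) -> bool:
    """An expectation is 'covered' when the student's accumulated answer
    is sufficiently close to it in the LSA space."""
    return cosine(answer_vec, expectation_vec) >= threshold
```

In a tutoring loop, the tutor would compare each new student contribution (added to the running answer) against every remaining expectation and continue the dialog until all are covered.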

16 citations


Book ChapterDOI
TL;DR: A study of college students who used a web facility in one of four navigational guide conditions found no significant facilitation of any of the guides on several measures of learning and performance, compared with the No Guide condition.
Abstract: Knowledge management systems will presumably benefit from intelligent interfaces, including those with animated conversational agents. One of the functions of an animated conversational agent is to serve as a navigational guide that shows the user how to use the interface in a productive way. This is a different function from delivering the content of the material. We conducted a study on college students who used a web facility in one of four navigational guide conditions: Full Guide (speech and face), Voice Guide, Print Guide, and No Guide. The web site was the Human Use Regulatory Affairs Advisor (HURAA), a web-based facility that provides help and training on research ethics, based on documents and regulations in United States Federal agencies. The college students used HURAA to complete a number of learning modules and document retrieval tasks. There was no significant facilitation of any of the guides on several measures of learning and performance, compared with the No Guide condition. This result suggests that the potential benefits of conversational guides are not ubiquitous, but they may save time and increase learning under specific conditions that have yet to be isolated.

13 citations



Book ChapterDOI
22 Jun 2003
TL;DR: The Tutoring Research Group from the University of Memphis has developed a pedagogically effective Intelligent Tutoring System (ITS), called AutoTutor, that implements conversational dialog as a tutoring strategy for conceptual physics.
Abstract: The Tutoring Research Group from the University of Memphis has developed a pedagogically effective Intelligent Tutoring System (ITS), called AutoTutor, that implements conversational dialog as a tutoring strategy for conceptual physics. Latent Semantic Analysis (LSA) is used to evaluate the quality of student contributions and to determine which dialog moves AutoTutor makes. By modeling the students' knowledge in this fashion, AutoTutor successfully adapted its pedagogy to match the ideal strategy for students' ability.
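The adaptation described here, using LSA-based student modeling to choose dialog moves, can be sketched as follows. The mapping and cutoff values are invented for illustration; AutoTutor's actual move-selection rules are considerably richer.

```python
# Hypothetical sketch: the LSA coverage score for the current expectation
# selects among increasingly direct tutor dialog moves, so students who
# are close to the answer get subtle nudges and struggling students get
# the information outright.
def select_dialog_move(coverage: float) -> str:
    """Map an LSA coverage score in [0, 1] to a tutor dialog move."""
    if coverage >= 0.8:
        return "positive-feedback"  # expectation essentially covered
    if coverage >= 0.5:
        return "prompt"             # student is close; elicit a key word
    if coverage >= 0.2:
        return "hint"               # nudge toward the expectation
    return "assertion"              # supply the information directly
```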

01 Jan 2003
TL;DR: Author(s): Hu, X.
Abstract: Author(s): Hu, X.; Cai, Z.; Franceschetti, D.; Penumatsa, P.; Graesser, A.C.; Louwerse, M.M.; McNamara, D.S.; Tutoring Research Group

01 Jan 2003
TL;DR: The dialog facilities of one such conversational agent (AutoTutor), which was designed for tutoring, are described, including how Why/AutoTutor creates original dialog pathways for the learner.
Abstract: The interfaces of knowledge management systems will benefit from conversational agents, particularly for users who infrequently use such systems. The design of such agents will presumably share some of the dialog management facilities of systems designed for tutoring. For example, Why/AutoTutor is an automated physics tutor that engages students in conversation by simulating the discourse patterns and pedagogical dialog moves of human tutors. This paper describes how Why/AutoTutor creates original dialog pathways for the learner. The system chains dialog moves, expressions, and discourse markers to simulate the dialog moves of natural human tutors while still controlling the conversational floor and the learning of the student. The agents of some knowledge management facilities of the future will be intelligent conversational agents. Conversational agents direct the flow of mixed-initiative dialog in service of mutual goals. These agents prompt the user when to speak and what to say, provide useful feedback, and answer questions. Conversational agents will be particularly useful for infrequent users of a knowledge management system because such users need the most guidance in managing interactions with the system. Animated conversational agents have recently been designed for learning environments and help facilities (Cassell & Thorisson, 1999; Graesser, VanLehn, Rose, Jordon, & Harter, 2001; Johnson, Rickel, & Lester, 2000). These systems have dialog management facilities that hold mixed-initiative dialog with the user by generating a variety of content-sensitive discourse moves: questions, answers, assertions, hints, suggestions, feedback, summaries, and so on. The design of these systems presumably has features and components that would directly apply to intelligent agents that control knowledge management systems.
The purpose of the present paper is to describe the dialog facilities of one such conversational agent (AutoTutor), which was designed for tutoring. It is an open question whether the design of an intelligent tutoring system overlaps sufficiently with that of a knowledge management system for such reuse to be useful.

Proceedings Article
01 Jan 2003
TL;DR: This paper used Latent Semantic Analysis (LSA) to match sentences a student generates in response to essay-type questions against a set of sentences (expectations) that would appear in a complete and correct response, or which reflect common but incorrect understandings of the material (bads).
Abstract: AutoTutor is an intelligent tutoring system that holds conversations with learners in natural language. AutoTutor uses Latent Semantic Analysis (LSA) to match sentences the student generates in response to essay-type questions to a set of sentences (expectations) that would appear in a complete and correct response or which reflect common but incorrect understandings of the material (bads). The correctness of student contributions is decided using a threshold value of the LSA cosine between the student answer and the expectations. Our results indicate that the best agreement between LSA matches and the evaluations of subject matter experts can be obtained if the cosine threshold is allowed to be a function of the lengths of both the student answer and the expectation being considered.
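A length-dependent threshold of the kind this abstract proposes might look like the sketch below. The linear form, coefficients, and direction of adjustment are invented for illustration; the paper fits its own function to expert judgments.

```python
# Hypothetical illustration: instead of one fixed LSA cosine threshold,
# let the threshold depend on the word lengths of the student answer and
# the expectation being compared.
def length_adjusted_threshold(answer_len: int, expectation_len: int,
                              base: float = 0.7, slope: float = 0.01) -> float:
    """Raise the bar slightly as either text grows, clamped to [0, 0.95].
    (Both the linear form and the coefficients are placeholder choices.)"""
    t = base + slope * (answer_len + expectation_len) / 2.0
    return min(max(t, 0.0), 0.95)

def matches(cos: float, answer_len: int, expectation_len: int) -> bool:
    """Decide coverage using the length-adjusted threshold."""
    return cos >= length_adjusted_threshold(answer_len, expectation_len)
```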

01 Jan 2003
TL;DR: Analysis of transcripts of tutoring sessions between students and expert physics tutors sheds light on how expert tutors use dialog to elicit deep processing of conceptual physics problems for use in improving intelligent tutoring of physics.
Abstract: One-on-one tutoring that encourages students to explain their answers has long been known to be an effective means of increasing student performance, even when the tutors are far from experts in the field concerned (e.g., Chi, de Leeuw, Chiu, and LaVancher, 1994; Bloom, 1984). The design of effective Intelligent Tutoring Systems (ITS) is an area of active research that attempts to take advantage of the benefits of this type of tutoring with the added convenience of automated, just-in-time teaching interventions. Validating ITS dialogues requires comparison with human tutors constrained to conditions similar to those found in ITS interfaces. A basic assumption in the design of ITS is that student productions (questions, statements, and side comments) can be categorized in a way that permits selection of an appropriate tutor response. Advanced ITSs attempt to use Natural Language Processing (NLP) components to give the student an intervention tailored to their specific needs. For these systems to work, a detailed modeling of the conversations that occur during a domain-specific tutoring session is desirable. This study addresses two questions posed by the comparison of ITSs to human tutors. The first is the degree of variance that can be expected between expert tutors in a given discipline, in this instance physics. The second is the extent to which the productions of expert tutors vary from one tutor to another, and whether experience has any impact on the set of dialog moves employed by domain expert tutors. Answers to these questions could be key to the development of a robust ITS. AutoTutor is an ITS that teaches physics by using NLP components to conduct a dialog with the student (Graesser et al., 2000). Students are asked questions in conceptual physics and AutoTutor responds based on the quality of the student response.
The overall selection of tutor responses is based on an extensive analysis of the moves employed by nonexpert human tutors across a broad range of subjects (Graesser & Person, 1994). In the process of developing and validating a version of AutoTutor for conceptual physics, a set of 17 verbatim transcripts of tutoring sessions between students and expert physics tutors was collected. These transcripts represent well over 100 hours of human physics tutoring in a chat room environment. A turn-by-turn analysis of the transcripts was conducted by an experienced physics professor and a graduate student in educational technology, using a modified form of the classification scheme introduced by Graesser and Person (1994). This analysis sheds light on how expert tutors use dialog to elicit deep processing of conceptual physics problems, for use in improving intelligent tutoring of physics.