Author

Albert T. Corbett

Bio: Albert T. Corbett is an academic researcher at Carnegie Mellon University. He has contributed to research on the topics TUTOR and Cognitive tutor, has an h-index of 41, and has co-authored 114 publications receiving 10,985 citations. His previous affiliations include Columbia University and the University of South Carolina.


Papers
Journal ArticleDOI
TL;DR: The 10-year history of tutor development based on the ACT theory is reviewed, and a new system for developing and deploying tutors is described, which is being used to build tutors that meet the National Council of Teachers of Mathematics (NCTM) standards for high-school mathematics in an urban setting.
Abstract: This paper reviews the 10-year history of tutor development based on the ACT theory (Anderson, 1983, 1993). We developed production system models in ACT of how students solved problems in LISP, geometry, and algebra. Computer tutors were developed around these cognitive models. Construction of these tutors was guided by a set of eight principles loosely based on the ACT theory. Early evaluations of these tutors usually but not always showed significant achievement gains. Best-case evaluations showed that students could achieve at least the same level of proficiency as conventional instruction in one third the time. Empirical studies showed that students were learning skills in production-rule units and that the best tutorial interaction style was one in which the tutor provides immediate feedback, consisting of short and directed error messages. The tutors appear to work better if they present themselves to students as nonhuman tools to assist learning rather than as emulations of human tutors. Students working with these tutors display transfer to other environments to the degree that they can map the tutor environment into the test environment. These experiences have coalesced into a new system for developing and deploying tutors. This system involves first selecting a problem-solving interface, then constructing a curriculum under the guidance of a domain expert, then designing a cognitive model for solving problems in that environment, then building instruction around the productions in that model, and finally deploying the tutor in the classroom. New tutors are being built in this system to achieve the NCTM standards for high school mathematics in an urban setting. (http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA312246)

1,826 citations

Journal ArticleDOI
TL;DR: An effort to model students' changing knowledge state during skill acquisition and a series of studies is reviewed that examine the empirical validity of knowledge tracing and has led to modifications in the process.
Abstract: This paper describes an effort to model students' changing knowledge state during skill acquisition. Students in this research are learning to write short programs with the ACT Programming Tutor (APT). APT is constructed around a production rule cognitive model of programming knowledge, called the ideal student model. This model allows the tutor to solve exercises along with the student and provide assistance as necessary. As the student works, the tutor also maintains an estimate of the probability that the student has learned each of the rules in the ideal model, in a process called knowledge tracing. The tutor presents an individualized sequence of exercises to the student based on these probability estimates until the student has 'mastered' each rule. The programming tutor, cognitive model, and learning and performance assumptions are described. A series of studies is reviewed that examines the empirical validity of knowledge tracing and has led to modifications in the process. Currently the model is quite successful in predicting test performance. Further modifications in the modeling process are discussed that may improve performance levels.
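The probability estimates described in this abstract are commonly formalized as Bayesian Knowledge Tracing: after each response, the tutor revises P(rule learned) by Bayes' rule using slip and guess probabilities, then applies a learning-transition probability. A minimal sketch in Python; the function name and default parameter values are illustrative assumptions, not figures taken from the paper:

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """One knowledge-tracing step (illustrative parameter values).

    p_know  -- current estimate that the student knows the rule
    correct -- whether the observed response was correct
    p_slip  -- P(wrong answer | rule known)
    p_guess -- P(right answer | rule not known)
    p_learn -- P(rule becomes known after this practice opportunity)
    """
    if correct:
        # Bayes' rule, conditioning on a correct response
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # Bayes' rule, conditioning on an incorrect response
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Learning transition: the student may have just learned the rule
    return posterior + (1 - posterior) * p_learn
```

A tutor would repeat this update after every attempt and keep presenting exercises for a rule until the estimate crosses a mastery threshold (commonly 0.95).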

1,668 citations

Journal ArticleDOI
TL;DR: The ACT theory of skill acquisition and its PUPS successor provide production-system models of the acquisition of skills such as LISP programming, geometry theorem-proving, and solving of algebraic equations.

545 citations

Journal ArticleDOI
TL;DR: The Knowledge-Learning-Instruction framework is described, which promotes the emergence of instructional principles of high potential for generality, while explicitly identifying constraints of and opportunities for detailed analysis of the knowledge students may acquire in courses.

511 citations

Proceedings ArticleDOI
25 Apr 2004
TL;DR: Analysis of students who choose to game the system suggests that learned helplessness or performance orientation might be better accounts for why students choose this behavior than lack of interest in the material.
Abstract: We investigate the prevalence and learning impact of different types of off-task behavior in classrooms where students are using intelligent tutoring software. We find that within the classrooms studied, no other type of off-task behavior is associated nearly so strongly with reduced learning as "gaming the system": behavior aimed at obtaining correct answers and advancing within the tutoring curriculum by systematically taking advantage of regularities in the software's feedback and help. A student's frequency of gaming the system correlates as strongly to post-test score as the student's prior domain knowledge and general academic achievement. Controlling for prior domain knowledge, students who frequently game the system score substantially lower on a post-test than students who never game the system. Analysis of students who choose to game the system suggests that learned helplessness or performance orientation might be better accounts for why students choose this behavior than lack of interest in the material. This analysis will inform the future re-design of tutors to respond appropriately when students game the system.

486 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

01 Jan 1964
TL;DR: Remembering is examined both through experimental studies and as a study in social psychology, introducing a theory of remembering and the notion of a collective unconscious.
Abstract: Part I. Experimental Studies: 2. Experiment in psychology 3. Experiments on perceiving III Experiments on imaging 4-8. Experiments on remembering: (a) The method of description (b) The method of repeated reproduction (c) The method of picture writing (d) The method of serial reproduction (e) The method of serial reproduction picture material 9. Perceiving, recognizing, remembering 10. A theory of remembering 11. Images and their functions 12. Meaning Part II. Remembering as a Study in Social Psychology: 13. Social psychology 14. Social psychology and the matter of recall 15. Social psychology and the manner of recall 16. Conventionalism 17. The notion of a collective unconscious 18. The basis of social recall 19. A summary and some conclusions.

5,690 citations

Journal ArticleDOI
TL;DR: A wide variety of data on capacity limits suggesting that the smaller capacity limit in short-term memory tasks is real is brought together and a capacity limit for the focus of attention is proposed.
Abstract: Miller (1956) summarized evidence that people can remember about seven chunks in short-term memory (STM) tasks. However, that number was meant more as a rough estimate and a rhetorical device than as a real capacity limit. Others have since suggested that there is a more precise capacity limit, but that it is only three to five chunks. The present target article brings together a wide variety of data on capacity limits suggesting that the smaller capacity limit is real. Capacity limits will be useful in analyses of information processing only if the boundary conditions for observing them can be carefully described. Four basic conditions in which chunks can be identified and capacity limits can accordingly be observed are: (1) when information overload limits chunks to individual stimulus items, (2) when other steps are taken specifically to block the recoding of stimulus items into larger chunks, (3) in performance discontinuities caused by the capacity limit, and (4) in various indirect effects of the capacity limit. Under these conditions, rehearsal and long-term memory cannot be used to combine stimulus items into chunks of an unknown size; nor can storage mechanisms that are not capacity-limited, such as sensory memory, allow the capacity-limited storage mechanism to be refilled during recall. A single, central capacity limit averaging about four chunks is implicated along with other, noncapacity-limited sources. The pure STM capacity limit expressed in chunks is distinguished from compound STM limits obtained when the number of separately held chunks is unclear. Reasons why pure capacity estimates fall within a narrow range are discussed and a capacity limit for the focus of attention is proposed.

5,677 citations

01 Jan 2016
Modern Applied Statistics with S.

5,249 citations

Journal ArticleDOI
TL;DR: It is concluded that recent theories placing the explanatory weight on parallel processing of the irrelevant and the relevant dimensions are likely to be more successful than are earlier theories attempting to locate a single bottleneck in attention.
Abstract: The literature on interference in the Stroop Color-Word Task, covering over 50 years and some 400 studies, is organized and reviewed. In so doing, a set of 18 reliable empirical findings is isolated that must be captured by any successful theory of the Stroop effect. Existing theoretical positions are summarized and evaluated in view of this critical evidence, and the two major candidate theories, relative speed of processing and automaticity of reading, are found to be wanting. It is concluded that recent theories placing the explanatory weight on parallel processing of the irrelevant and the relevant dimensions are likely to be more successful than are earlier theories attempting to locate a single bottleneck in attention.

5,172 citations