
Teachable Agents and the Protégé Effect: Increasing the Effort Towards Learning

30 Jun 2009 · Journal of Science Education and Technology (Springer Netherlands) · Vol. 18, Iss. 4, pp. 334–352
TL;DR: Betty's Brain is a computer-based learning environment that capitalizes on the social aspects of learning, in which students instruct a character called a Teachable Agent (TA) that can reason based on how it is taught.
Abstract: Betty's Brain is a computer-based learning environment that capitalizes on the social aspects of learning. In Betty's Brain, students instruct a character called a Teachable Agent (TA) which can reason based on how it is taught. Two studies demonstrate the protégé effect: students make greater effort to learn for their TAs than they do for themselves. The first study involved 8th-grade students learning biology. Although all students worked with the same Betty's Brain software, students in the TA condition believed they were teaching their TAs, while in another condition, they believed they were learning for themselves. TA students spent more time on learning activities (e.g., reading) and also learned more. These beneficial effects were most pronounced for lower achieving children. The second study used a verbal protocol with 5th-grade students to determine the possible causes of the protégé effect. As before, students learned either for their TAs or for themselves. Like study 1, students in the TA condition spent more time on learning activities. These children treated their TAs socially by attributing mental states and responsibility to them. They were also more likely to acknowledge errors by displaying negative affect and making attributions for the causes of failures. Perhaps having a TA invokes a sense of responsibility that motivates learning, provides an environment in which knowledge can be improved through revision, and protects students' egos from the psychological ramifications of failure.

Summary (2 min read)

Introduction

  • Betty's Brain is a computer-based learning environment that capitalizes on the social aspects of learning.
  • The interactive potential of the computer naturally draws comparisons to social behavior.
  • Whether or not the Turing test is adequate for deciding the intelligence of a computer, it is useful to note that the test is really about the social behavior of the computer.
  • The authors begin with a brief review of agents and avatars, which are the two main classes of virtual characters used in educational applications.
  • This sets the stage for two studies that demonstrate what the authors term the protégé effect: students make greater effort to learn for their TAs than they do for themselves.

Learning and Motivation with Agents, Avatars, and Hybrids

  • Interactive computer characters traditionally come in one of two forms: avatar and agent (Bailenson and Blascovich 2004).
  • Clarebout et al. (2002) have created a typology of pedagogically relevant agent behaviors such as showing, explaining, and questioning.
  • Like agents, avatars (which humans control) may also have benefits for learning.
  • This tendency for adoption has educational potential, when the attributes to be adopted are useful dispositions for learning.
  • A TA is a ‘‘sentient’’ hybrid agent/avatar that has been specifically designed for educational outcomes.

Overview of Studies

  • Given evidence of cognitive gains, the current research was designed to get a closer look at the motivational properties of TAs.
  • Thus, students in the TA condition spent more time working on the concept maps and checking those maps with a quiz.
  • Students then played the Gameshow while thinking aloud.
  • As in Study 1, students in the TA condition were more likely to choose to refine their understanding, and they spent more time doing so.
  • TA students indicated this by distributing and co-mingling mental and responsibility attributions between themselves and their TAs.

General Discussion

  • Two studies demonstrated the existence of a protégé effect: students are more willing to make the effort towards learning on behalf of a computerized protégé than for themselves.
  • The first study, which used a classroom-level intervention, revealed that students who taught TAs spent more time on learning behaviors and ultimately learned more than students who learned for themselves.
  • The verbal data provided possible reasons for the students’ greater effort towards learning.
  • For these students, the TA existed in a middle ground between avatar and agent.

Conclusion

  • Over the next few years, the authors anticipate that avatars and intelligent agents will be increasingly blended.
  • Or in a simulation of classroom interactions, a user may create students with various traits and observe how they would behave as a group.
  • Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the granting agencies.


Teachable Agents and the Protégé Effect: Increasing the Effort Towards Learning

Catherine C. Chase · Doris B. Chin · Marily A. Oppezzo · Daniel L. Schwartz
School of Education, Stanford University, 485 Lasuen Mall, Stanford, CA 94305, USA
e-mail: cchase@stanford.edu

Published online: 30 June 2009
© Springer Science+Business Media, LLC 2009
Abstract Betty's Brain is a computer-based learning environment that capitalizes on the social aspects of learning. In Betty's Brain, students instruct a character called a Teachable Agent (TA) which can reason based on how it is taught. Two studies demonstrate the protégé effect: students make greater effort to learn for their TAs than they do for themselves. The first study involved 8th-grade students learning biology. Although all students worked with the same Betty's Brain software, students in the TA condition believed they were teaching their TAs, while in another condition, they believed they were learning for themselves. TA students spent more time on learning activities (e.g., reading) and also learned more. These beneficial effects were most pronounced for lower achieving children. The second study used a verbal protocol with 5th-grade students to determine the possible causes of the protégé effect. As before, students learned either for their TAs or for themselves. Like study 1, students in the TA condition spent more time on learning activities. These children treated their TAs socially by attributing mental states and responsibility to them. They were also more likely to acknowledge errors by displaying negative affect and making attributions for the causes of failures. Perhaps having a TA invokes a sense of responsibility that motivates learning, provides an environment in which knowledge can be improved through revision, and protects students' egos from the psychological ramifications of failure.

Keywords Educational technology · Motivation · K-12 education · Peer tutoring
The interactive potential of the computer naturally draws comparisons to social behavior. For example, the Turing test proposed that if a human interacts with a computer, and the human believes the computer is a person, then the computer has achieved human intelligence (Turing 1950). A number of computer programs were engineered to challenge the validity of the Turing test. ELIZA, for instance, successfully impersonated the dialog of a Rogerian therapist, but the computer used such simple rules that it would be absurd to consider it truly intelligent (Weizenbaum 1976). Whether or not the Turing test is adequate for deciding the intelligence of a computer, it is useful to note that the test is really about the social behavior of the computer. There could have been other tests of human intelligence; for example, could the computer learn language? But, instead the test assessed whether people would treat the computer as a social entity. Here, we use the natural social attractions of the computer to improve students' science learning.

Computers readily draw forth people's social schemas. Even when they explicitly know they are interacting with a computer, people will behave in socially appropriate ways (Reeves and Nass 1998). People's tendency to attribute social intelligence to computers has fueled the creation of graphical worlds that comingle human and computer intelligence. Examples include Second Life, the Sims, and World of Warcraft, where people interact with graphical characters that may represent a live person or a computer character. These human-computer hybrids not only boost natural social inclinations, they can also produce novel social configurations that sustain unusual psychological states. For instance, game players can program graphical characters to act (and interact) in virtual social worlds, even when the players are no longer at their computer.
The novel social configuration presented here involves software agents that blend student and computer intelligence. We have created a computer-based learning environment that features a Teachable Agent (TA): a graphical computer character that students teach. The TA uses artificial intelligence to learn and reason about what it has been taught. TAs are a hybrid; they reflect their owners' knowledge, yet have minds of their own. This social arrangement has benefits for learning. For example, students are likely to adopt their TAs' reasoning methods (Schwartz et al. 2009). Here, we focus on the motivational consequences.

We begin with a brief review of agents and avatars, which are the two main classes of virtual characters used in educational applications. We then introduce TAs, which combine properties of agents and avatars. This sets the stage for two studies that demonstrate what we term the protégé effect: students make greater effort to learn for their TAs than they do for themselves. The first study produces this effect, even when the only difference between conditions is whether students believe they are learning for their TAs or for themselves. The second study shows the social nature of the interaction with the TA and how it contributes to the protégé effect. We conclude with some initial thoughts on the role of TAs in creating a distinctly social set of motivations to learn, which are supported by an ego-protective buffer, an incrementalist approach to learning, and a sense of responsibility.
Learning and Motivation with Agents, Avatars, and Hybrids
Interactive computer characters traditionally come in one of two forms: avatar and agent (Bailenson and Blascovich 2004). An avatar is a character that represents and is controlled by a human. For example, in a video game, the characters manipulated by the players are avatars. In contrast, an agent is a character controlled by the computer. When people play a hockey video game by themselves, they each control their own avatars, while the computer controls the other players (agents) on the team. One of the interesting things about these computer games is that the users can jump from character to character, so they control whichever player happens to have the hockey puck. This is a nice example of a novel social configuration that computers support.
Agents and avatars each have advantages for education. A number of useful learning situations can be created by agents (for a nice collection of instances, see Baylor 2007). For example, agents can provide role models for how to think or act. Ryokai et al. (2003) used an embodied conversational agent named Sam to engage children in collaborative story-telling. Children who interacted with Sam adopted his conversational behaviors and used more advanced narrative skills than children who conversed with peers. Another type of agent is a pedagogical agent, which provides advice to learners. For instance, Shimoda et al. (2002) used a panoply of agents to deliver meta-cognitive tips during scientific inquiry. Clarebout et al. (2002) have created a typology of pedagogically relevant agent behaviors such as showing, explaining, and questioning.

Agents can also be used to improve motivation. Lester et al. (1997) experimented with five varieties of Herman the Bug, a pedagogical agent who worked with middle school students as they designed a plant. In a condition where the agent gave no advice but exhibited social behaviors of encouragement, students gave the agent high ratings on entertainment value and chose to have Herman help them with homework. Lester et al. (1997) dubbed this the persona effect, claiming that the socialness of the agent helped to engage students with the software. Similarly, Baylor and Kim (2005) found that pedagogical agents equipped with encouraging dialogue were perceived as more motivating and showed a moderate trend for enhancing student self-efficacy.
Like agents, avatars (which humans control) may also have benefits for learning. For example, people may learn to take on the attributes of their avatars. Yee and Bailenson (2007) termed this the proteus effect. In one study, participants were assigned to use either a tall or short avatar. They then played a negotiation game with another person in virtual reality. The people who played as the tall avatar were tougher negotiators and were more likely to come out ahead. Presumably, they took on the stereotype that height confers power and authority. This tendency for adoption has educational potential, when the attributes to be adopted are useful dispositions for learning.

Avatars can also motivate students to take risks. If the avatar makes a mistake, the user does not necessarily suffer the consequences. When getting checked into the boards in a virtual hockey game, the players not only do not get hurt, but they can also "laugh it off". Just as computer simulations of nuclear fusion are physically safer than the real thing (Perkins et al. 2006), avatars can make it psychologically safer to try new things, without experiencing the real consequences of failure.
A hybrid agent/avatar blends the properties of an agent and an avatar. It is a character that includes a bit of the computer and a bit of the human user. A key element of a hybrid agent/avatar is its ability to behave without explicit human control while still reflecting prior interactions with a human user. A growing number of hybrids vary the mix of human dependence and independence. Some applications have the user try to "program" a character so it lives and acts exactly the way the user intends (Gerhard et al. 2004; Imbert and de Antonio 2000). For example, in The Sims, a popular commercial game, computer characters behave based on the attributes supplied by their users plus some amount of their own apparent "free will".

Another example hybrid is the Tamagotchi, a digital pet housed in a small, egg-shaped computer. Children are responsible for feeding, cleaning, and nurturing their Tamagotchis. The pets respond and grow based on the children's care. Children (especially girls) find the responsibility and nurturing highly motivating (Pesce 2000). The research presented here shows that a sense of responsibility towards a hybrid can lead to educationally relevant outcomes as well.
A TA is a "sentient" hybrid agent/avatar that has been specifically designed for educational outcomes. The TA engages learners in a teacher-pupil metaphor and takes on the role of protégé. The student teaches the TA, so the TA is dependent on the student. At the same time, the TA contains artificial intelligence that allows it to behave independently. For instance, the TA can reason, answer questions, and complete various assessments based on how it was taught. Moreover, a TA possesses the educational benefits of both agents and avatars. Like an agent, a TA provides an independent social presence that motivates students to interact with it, plus it offers new models of thinking and reasoning. Like an avatar, the TA has properties that students can adopt, without the intellectual risks that come with learning something on one's own.
A Teachable Agent Called Betty’s Brain
There are several types of TA software (see Schwartz et al. 2007); here we focus on Betty's Brain. Betty was designed to model chains of cause and effect relationships. For example, when the brain's temperature set point rises, several multi-step pathways cause the body's temperature to increase and develop a fever (see Fig. 1). Betty is especially relevant to science domains where long chains of qualitative causes are a useful way to explain phenomena. Biology content like food webs and ecosystems, bodily systems, and global warming are well-modeled by Betty's architecture.
Before teaching in Betty's Brain, each student names and designs the appearance of her own TA (Betty's Brain is the name of the software; students create characters for themselves). A student then teaches her TA by creating a concept map of nodes connected by qualitative causal links; for example, "heat release" decreases "body temperature". The map fancifully symbolizes the interior of the TA's brain. Once taught, a TA can answer questions. For instance, Betty includes a simple query feature. In Fig. 1, the TA uses the map it was taught to answer the query, "If blood flow to skin increases, what happens to body temperature?" Using basic artificial intelligence techniques, the TA animates its reasoning process by successively highlighting each node and link in the causal chain (see Biswas et al. 2005). A student can trace her TA's reasoning, and then remediate its knowledge (and her own) if necessary. A TA always reasons logically, but depending on the nodes and links it was taught, it will reach a right or wrong answer.

Fig. 1 The teachable agent Betty's Brain. Using the Betty software, each student teaches her own TA (in this case, named "Dee") by constructing a concept map as its "brain". Through basic artificial intelligence techniques, the TA can answer questions based on the relationships depicted in its map. Students can query the TA using a pull-down menu. The highlighted links and nodes in the figure show how the TA answers the question, "If 'blood flow to skin' increases, what happens to 'body temperature'?"
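The paper describes this query behavior but does not publish Betty's underlying code. The following minimal Python sketch shows one way to propagate qualitative signs along a taught causal chain in the manner the passage describes; the ConceptMap class, its method names, and the two-link fever encoding are illustrative assumptions, not the actual Betty's Brain implementation.

# Minimal sketch of qualitative causal-chain reasoning over a taught map.
class ConceptMap:
    # A taught "brain": a directed graph whose links carry a qualitative
    # sign, +1 for "increases" and -1 for "decreases".
    def __init__(self):
        self.edges = {}

    def teach(self, cause, effect, sign):
        self.edges.setdefault(cause, []).append((effect, sign))

    def query(self, source, target, change=+1, seen=frozenset()):
        # "If source increases, what happens to target?" Returns +1, -1,
        # or None when the taught map has no causal path.
        if source == target:
            return change
        for effect, sign in self.edges.get(source, ()):
            if effect not in seen:
                result = self.query(effect, target, change * sign, seen | {source})
                if result is not None:
                    return result
        return None

brain = ConceptMap()
brain.teach("blood flow to skin", "heat release", +1)   # more flow, more heat released
brain.teach("heat release", "body temperature", -1)     # releasing heat cools the body

outcome = brain.query("blood flow to skin", "body temperature")
print({+1: "increases", -1: "decreases", None: "no causal path"}[outcome])  # decreases

Read the query as a depth-first search that multiplies the signs of the links along the first causal path it finds, which mirrors how a TA "always reasons logically" yet reaches a right or wrong answer depending on the links it was taught.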
Betty's Brain is not meant to be the only means of instruction, but rather to provide a way for students to organize and reason about content they have learned in the classroom (Schwartz et al. 2007). Betty is intended to complement many styles of instruction, not replace them. One of her complementary strengths is feedback. Betty comes with a number of software options that provide feedback in various forms, some of which can spark classroom discussion. The option shown in Fig. 2a enables a teacher to project multiple TAs' maps using a classroom projector. The teacher can ask the same question of all the TAs simultaneously, then zoom in to focus the discussion on one or two maps. Figure 2b shows the All Possible Questions (APQ) matrix: a tool that asks the TA every possible question. It then compares the answers of the TA with those of a hidden, pre-programmed expert map to produce a grid that indicates which questions the TA got right and wrong.
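A grid like APQ follows directly from the chaining idea sketched above. The fragment below, again a hypothetical Python illustration rather than the shipped tool, poses every "If Y increases, what happens to X?" question to a student map and a hidden expert map and marks where the answers agree. The real APQ additionally flags answers that are correct but reached by the wrong causal path; this sketch collapses that case into a simple match.

# Minimal sketch of the All-Possible-Questions (APQ) comparison.
def chain_answer(edges, source, target, change=+1, seen=frozenset()):
    # +1 ("increases") or -1 ("decreases") propagated along a causal path;
    # None when the map offers no path from source to target.
    if source == target:
        return change
    for effect, sign in edges.get(source, ()):
        if effect not in seen:
            result = chain_answer(edges, effect, target, change * sign, seen | {source})
            if result is not None:
                return result
    return None

def apq_grid(student, expert, concepts):
    # One cell per ordered concept pair (Y, X): does the student's map give
    # the same answer as the hidden expert map?
    return {(y, x): chain_answer(student, y, x) == chain_answer(expert, y, x)
            for y in concepts for x in concepts if y != x}

expert = {"blood flow to skin": [("heat release", +1)],
          "heat release": [("body temperature", -1)]}
student = {"blood flow to skin": [("body temperature", +1)]}  # one wrong shortcut link

concepts = ["blood flow to skin", "heat release", "body temperature"]
for (y, x), match in sorted(apq_grid(student, expert, concepts).items()):
    print(f"If '{y}' increases, what happens to '{x}'? -> {'correct' if match else 'incorrect'}")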
Several of Betty's attributes were designed to encourage students to treat their TAs as social beings. For instance, a TA can draw inferences from questions, take quizzes, play games, and even comment on its own knowledge (depending on the configuration of the software). Betty's Brain also comes with narratives and graphical elements to help support the mindset of teaching. Finally, each student can customize her TA's appearance and give it a name, which makes her TA more personal than a sterile, generic computerized icon. In reality, students are simply programming their TAs in a high-level graphical language, and children know the computer is not really alive. Nevertheless, as we demonstrate in Study 2, students suspend disbelief enough to treat the computer as possessing knowledge and feelings (e.g., Reeves and Nass 1998; Turkle 1995).
One of a TA's most social elements is its ability to externalize its thought processes. When a TA animates its reasoning on the screen, it literally makes its "thinking" visible. A study with 6th-graders indicated that students do learn from the TA's overt model of causal reasoning (Schwartz et al. 2009). In one condition, students worked with their TAs to organize what they had learned from various readings, films, and hands-on activities. In another condition, students learned the same content, but worked with a commercial concept mapping program called Inspiration. Students took periodic paper and pencil tests across 3 weeks of a curriculum about global warming. Over time, the TA students increasingly outperformed the Inspiration students, and TA students demonstrated the greatest advantage on questions that required longer chains of causal inference. These results indicate that students adopted the reasoning process modeled by the TAs in Betty's Brain.
Other studies have also found learning benefits when students work with Betty's Brain. A 2-month study had 5th-graders learn river ecology (Wagster et al. 2007). In the Teach condition, each student taught Betty (in this study all students taught the same graphical character called Betty rather than creating their own TAs). In the Being-Taught condition, Betty's image was replaced with a "mentor agent" named Mr. Davis. In the Being-Taught condition, students also created maps. When a student asked a question of her map, the mentor agent traced through the map (in exactly the same way that Betty did for students in the Teach condition). Thus, the primary difference between conditions was quite subtle: the mindset of teaching versus being taught. Students in the Teach condition produced more accurate concept maps. The benefits also transferred to a unit on land ecology, when the students were no longer in their respective treatments. Students who had been in the Teach condition again made better concept maps.

Fig. 2 Software options for various types of feedback. Panel A shows a front-of-the-class (FOC) display, where teachers project and query multiple "brains" (maps) simultaneously. The highlights around each concept map indicate correct and incorrect answers. Panel B shows the All-Possible-Questions (APQ) matrix. The matrix indicates a TA's accuracy when asked the complete population of possible questions in a hidden expert map. All concepts are displayed on both axes. Each cell displays feedback to the question, "If Y increases, what happens to X?" For both applications, green indicates a correct answer, red indicates incorrect, and yellow indicates correct but by the wrong causal path. A version of the Betty's Brain environment and teacher tools can be found at aaalab.stanford.edu. (Color figure online)
Overview of Studies
Given evidence of cognitive gains, the current research was designed to get a closer look at the motivational properties of TAs. The first study demonstrates the protégé effect: students are willing to work harder to learn for their TAs than for themselves, and this is especially true for low-achieving students. The second study finds that students treat their TAs as social, thinking beings. Students closely monitor and take responsibility for their TAs' failures, which motivates them to revise their own understanding so they can teach better. Both studies were short in duration, only one to three hours, so there was minimal expectation of finding learning differences. Instead, the research focused specifically on affective elements that may have contributed to the learning benefits found in earlier research.
In the current studies, one of Betty's features was particularly important: the Triple-A-Challenge Gameshow. The Gameshow is an online environment where multiple TAs, each taught by a different student, can interact and compete with one another (Fig. 3). Students can log on from home to teach their TAs (by accessing the Betty software), chat with other students, and eventually have their TAs play in a game. During game play, the host poses questions of the form, "If X increases/decreases, what happens to Y?" After each question, the student wagers from 0 to 500 points, and the TA answers based on what it has been taught. Then, the host reveals the correct answer and awards points. Students normally play the Gameshow in rounds, with each round consisting of about six questions, and subsequent rounds including more difficult questions (i.e., requiring longer chains of reasoning).
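As a concrete reading of these mechanics, here is a minimal hypothetical Python sketch of one Gameshow question. The function name is invented, and the symmetric win/lose scoring rule is an assumption, since the text only says the host "awards points."

# Minimal sketch of a single Gameshow question with a student wager.
def play_question(ta_answer, correct_answer, wager, score):
    # A student may wager from 0 to 500 points on the TA's answer.
    assert 0 <= wager <= 500
    # Assumed scoring rule: win the wager on a correct answer, lose it otherwise.
    return score + wager if ta_answer == correct_answer else score - wager

score = 0
# Host: "If 'heat release' increases, what happens to 'body temperature'?"
ta_answer = "decreases"  # derived by the TA from its taught concept map
score = play_question(ta_answer, correct_answer="decreases", wager=300, score=score)
print(score)  # 300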
The Gameshow was developed to make homework more interactive, social, and fun. In one study, Schwartz et al. (2009) found high levels of homework compliance when students used the Gameshow with TAs, and the Gameshow prepared students to learn related content in class over the next few days. In the current studies, the Gameshow was not used for homework, but was used in the classroom in Study 1, and for individual sessions in Study 2. In both studies, the manipulation was whether the character in the software represented a TA, or whether the character was an Avatar that represented the student. In the TA condition, the TAs answered the host's questions while students wagered on their protégés. In the Avatar condition, the students answered the host's questions and wagered on themselves.
Our predictions were simple. Students in both conditions would be engaged by the novelty of the technologies, especially in the context of school. However, the TA would yield a specific type of engagement. Students would be more motivated to learn for their protégés than for themselves. Specifically, they would spend more time reading and revising their knowledge. Furthermore, this motivation would be partially driven by the "make believe" that their TAs have thoughts and feelings and by the sense of responsibility students would develop towards their digital pupils.
Study 1: The Protégé Effect
One of the interesting benefits of new technologies is that they permit "clean tests" that are hard to match in the physical world. For example, most research that claims to have demonstrated a benefit of social interaction for learning has been confounded by the many differences between a social and non-social interaction (e.g., Kuhl et al. 2003; Moreno et al. 2001). For example, demonstrating that an individual learns more by working in a group than working alone may be attributed to the increase of information exchange and not to the fact that the individual was in a social exchange. Chi et al. (2008), recognizing this distinction, proposed that learning from social interaction may be due to the same processes involved in self-explanation (e.g., elaborating on a topic by explaining to oneself).

New technologies provide fresh possibilities for untangling these matters (Blascovich et al. 2002). For example, Okita et al. (2007) had adults interact with a graphical character in immersive virtual reality. The participants and the character discussed the biological mechanisms that sustain a fever. The interactions were covertly scripted so that each participant said and heard the same things at the same times. The experimental manipulation was simply whether the participants were told that the character was a computer agent or that the character represented a person in another room (in reality, it was always a computer agent).

Citations

Journal ArticleDOI
TL;DR: The authors reviewed the theoretical basis of several prominent social-psychological interventions and emphasized that they have lasting effects because they target students' subjective experience and beliefs, such as their belief that they had the potential to improve their intelligence or that they belong and are valued in school.
Abstract: Recent randomized experiments have found that seemingly “small” social-psychological interventions in education—that is, brief exercises that target students’ thoughts, feelings, and beliefs in and about school—can lead to large gains in student achievement and sharply reduce achievement gaps even months and years later. These interventions do not teach students academic content but instead target students’ psychology, such as their beliefs that they have the potential to improve their intelligence or that they belong and are valued in school. When social-psychological interventions have lasting effects, it can seem surprising and even “magical,” leading people either to think of them as quick fixes to complicated problems or to consider them unworthy of serious consideration. The present article discourages both responses. It reviews the theoretical basis of several prominent social-psychological interventions and emphasizes that they have lasting effects because they target students’ subjective experien...

1,079 citations

Journal ArticleDOI
15 Aug 2018
TL;DR: The potential of social robots in education is reviewed, the technical challenges are discussed, and how the robot's appearance and behavior affect learning outcomes is considered.
Abstract: Social robots can be used in education as tutors or peer learners. They have been shown to be effective at increasing cognitive and affective outcomes and have achieved outcomes similar to those of human tutoring on restricted tasks. This is largely because of their physical presence, which traditional learning technologies lack. We review the potential of social robots in education, discuss the technical challenges, and consider how the robot's appearance and behavior affect learning outcomes.

747 citations


Cites background from "Teachable Agents and the Protégé Ef..."

  • ...This is an instance of learning by teaching, which is widely known in human education, also referred to as the protégé effect (72)....


Journal ArticleDOI
TL;DR: This article offers a way to define the potential educational impact of current and future apps, showing how the design and use of educational apps can align with known processes of children's learning and development, and offering a framework that can be used by parents and designers alike.
Abstract: Children are in the midst of a vast, unplanned experiment, surrounded by digital technologies that were not available but 5 years ago. At the apex of this boom is the introduction of applications ("apps") for tablets and smartphones. However, there is simply not the time, money, or resources available to evaluate each app as it enters the market. Thus, "educational" apps-the number of which, as of January 2015, stood at 80,000 in Apple's App Store (Apple, 2015)-are largely unregulated and untested. This article offers a way to define the potential educational impact of current and future apps. We build upon decades of work on the Science of Learning, which has examined how children learn best. From this work, we abstract a set of principles for two ultimate goals. First, we aim to guide researchers, educators, and designers in evidence-based app development. Second, by creating an evidence-based guide, we hope to set a new standard for evaluating and selecting the most effective existing children's apps. In short, we will show how the design and use of educational apps aligns with known processes of children's learning and development and offer a framework that can be used by parents and designers alike. Apps designed to promote active, engaged, meaningful, and socially interactive learning-four "pillars" of learning-within the context of a supported learning goal are considered educational.

592 citations

Journal ArticleDOI
TL;DR: A model of teaching and learning in pedagogical settings that predicts which examples teachers should choose and what learners should infer given a teacher's examples is introduced.

241 citations

References
Journal ArticleDOI
Computing Machinery and Intelligence
01 Oct 1950 · Mind

7,266 citations

Book
01 Jan 1950
TL;DR: If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.
Abstract: I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think". The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll.

6,137 citations


"Teachable Agents and the Protégé Ef..." refers background in this paper

  • ...For example, the Turing test proposed that if a human interacts with a computer, and the human believes the computer is a person, then the computer has achieved human intelligence ( Turing 1950 )....

    [...]

  • ...For example, the Turing test proposed that if a human interacts with a computer, and the human believes the computer is a person, then the computer has achieved human intelligence (Turing 1950)....

    [...]

Book
01 Jan 2003
TL;DR: In this paper, Sherry Turkle uses Internet MUDs (multi-user domains, or in older gaming parlance multi-user dungeons) as a launching pad for explorations of software design, user interfaces, simulation, artificial intelligence, artificial life, agents, virtual reality, and the on-line way of life.
Abstract: From the Publisher: A Question of Identity Life on the Screen is a fascinating and wide-ranging investigation of the impact of computers and networking on society, peoples' perceptions of themselves, and the individual's relationship to machines. Sherry Turkle, a Professor of the Sociology of Science at MIT and a licensed psychologist, uses Internet MUDs (multi-user domains, or in older gaming parlance multi-user dungeons) as a launching pad for explorations of software design, user interfaces, simulation, artificial intelligence, artificial life, agents, "bots," virtual reality, and "the on-line way of life." Turkle's discussion of postmodernism is particularly enlightening. She shows how postmodern concepts in art, architecture, and ethics are related to concrete topics much closer to home, for example AI research (Minsky's "Society of Mind") and even MUDs (exemplified by students with X-window terminals who are doing homework in one window and simultaneously playing out several different roles in the same MUD in other windows). Those of you who have (like me) been turned off by the shallow, pretentious, meaningless paintings and sculptures that litter our museums of modern art may have a different perspective after hearing what Turkle has to say. This is a psychoanalytical book, not a technical one. However, software developers and engineers will find it highly accessible because of the depth of the author's technical understanding and credibility. Unlike most other authors in this genre, Turkle does not constantly jar the technically-literate reader with blatant errors or bogus assertions about how things work. Although I personally don't have time or patience for MUDs,view most of AI as snake-oil, and abhor postmodern architecture, I thought the time spent reading this book was an extremely good investment.

4,965 citations


"Teachable Agents and the Protégé Ef..." refers background in this paper

  • ...Nevertheless, as we demonstrate in Study 2, students suspend disbelief enough to treat the computer as possessing knowledge and feelings (e.g., Reeves and Nass 1998; Turkle 1995)....


Book
01 Jan 1996
TL;DR: This book presents the media equation: people respond to media and computers as they would to real people and places, extending politeness, personality attributions, and emotional reactions to them.
Abstract: Part I. Introduction: 1. The media equation Part II. Media and Manners: 2. Politeness 3. Interpersonal distance 4. Flattery 5. Judging others and ourselves Part III. Media and Personality: 6. Personality of characters 7. Personality of interfaces 8. Imitating a personality Part IV. Media and emotion: 9. Good versus bad 10. Negativity 11. Arousal Part V. Media and Social Roles: 12. Specialists 13. Teammates 14. Gender 15. Voices 16. Source orientation Part VI. Media and Form: 17. Image size 18. Fidelity 19. Synchrony 20. Motion 21. Scene changes 22. Subliminal images Part VII. Final Words: 23. Conclusions about the media equation References.

4,690 citations


"Teachable Agents and the Protégé Ef..." refers background in this paper

  • ...Even when they explicitly know they are interacting with a computer, people will behave in socially appropriate ways ( Reeves and Nass 1998 )....


  • ...Nevertheless, as we demonstrate in Study 2, students suspend disbelief enough to treat the computer as possessing knowledge and feelings (e.g., Reeves and Nass 1998; Turkle 1995)....

