
UC San Diego: UC San Diego Previously Published Works
Title: Embodied Cognition and the Magical Future of Interaction Design
Permalink: https://escholarship.org/uc/item/8773q76m
Journal: ACM Transactions on Computer-Human Interaction, 20(1)
Author: Kirsh, David
Publication Date: 2013-03-01
Peer reviewed
eScholarship.org, Powered by the California Digital Library, University of California

Embodied Cognition and the Magical Future of Interaction Design
DAVID KIRSH, University of California, San Diego
The theory of embodied cognition can provide HCI practitioners and theorists with new ideas about interaction and new principles for better designs. I support this claim with four ideas about cognition: (1) interacting with tools changes the way we think and perceive: tools, when manipulated, are soon absorbed into the body schema, and this absorption leads to fundamental changes in the way we perceive and conceive of our environments; (2) we think with our bodies, not just with our brains; (3) we know more by doing than by seeing: there are times when physically performing an activity is better than watching someone else perform the activity, even though our motor resonance system fires strongly during other-person observation; (4) there are times when we literally think with things. These four ideas have major implications for interaction design, especially the design of tangible, physical, context-aware, and telepresence systems.
Categories and Subject Descriptors: H.1.2 [User/Machine Systems]; H.5.2 [User Interfaces]: Interaction
styles (e.g., commands, menus, forms, direct manipulation)
General Terms: Human Factors, Theory
Additional Key Words and Phrases: Human-computer interaction, embodied cognition, distributed cognition,
situated cognition, interaction design, tangible interfaces, physical computation, mental simulation
ACM Reference Format:
Kirsh, D. 2013. Embodied cognition and the magical future of interaction design. ACM Trans. Comput.-Hum.
Interact. 20, 1, Article 3 (March 2013), 30 pages.
DOI: http://dx.doi.org/10.1145/2442106.2442109
1. INTRODUCTION
The theory of embodied cognition offers us new ways to think about bodies, mind, and
technology. Designing interactivity will never be the same.
The embodied conception of a tool provides a first clue of things to come. When a
person hefts a tool the neural representation of their body schema changes as they
recalibrate their body perimeter to absorb the end-point of the tool [Làdavas 2002].
As mastery develops, the tool reshapes their perception, altering how they see and
act, revising their concepts, and changing how they think about things. This echoes
Marshall McLuhan’s famous line “we shape our tools and thereafter our tools shape
us” [McLuhan 1964]. A stick changes a blind person’s contact and grasp of the world;
a violin changes a musician’s sonic reach; roller-skates change physical speed, altering
the experience of danger, stride, and distance. These tools change the way we encounter,
engage, and interact with the world. They change our minds. As technology digitally
enhances tools we will absorb their new powers. Is there a limit to how far our powers
can be increased? What are the guidelines on how to effectively alter minds?
This work is supported by the National Science Foundation under grant IIS-1002736.
Author’s address: D. Kirsh, Cognitive Science, University of California at San Diego, La Jolla, CA 92093-0515;
email: kirsh@ucsd.edu.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted
without fee provided that copies are not made or distributed for profit or commercial advantage and that
copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for
components of this work owned by others than ACM must be honored. Abstracting with credit is permitted.
To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this
work in other works requires prior specific permission and/or a fee. Permissions may be requested from
Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212)
869-0481, or permissions@acm.org.
© 2013 ACM 1073-0516/2013/03-ART3 $15.00
DOI: http://dx.doi.org/10.1145/2442106.2442109
ACM Transactions on Computer-Human Interaction, Vol. 20, No. 1, Article 3, Publication date: March 2013.

Consider a moment longer how coming tools will change us. On the “perception” side,
our senses will reveal hidden patterns, microscopic, telescopic, and beyond our elec-
tromagnetic range, all visualized imaginatively. On the “action” side, our augmented
control will be fine enough to manipulate with micrometer precision scalpels too small
for our genetic hands; we will drive with millisecond sensitivity vehicles big enough
to span a football field or small enough to enter an artery. Our future is prosthetic: a
world of nuanced feedback and control through enhanced interaction. These are the
obvious things.
Less obvious, though, is how new tools will redefine our lived-in world: how we will
conceptualize how and what we do. New tools make new tasks and activities possible.
This makes predicting the future almost out of reach. Designers need to understand the
dynamic between invention, conception, and cognition. It is complicated. And changing.
Good design needs good science fiction; and good science fiction needs good cognitive
science.
Consider next the role the body itself plays in cognition. This is the second clue to our
imminent future. The new theory of mind emerging over the last twenty years holds
that the physical elements of our body figure in our thought. Unimpaired humans
think with their body in ways that are impossible for the paralyzed. If true, this means
that thought is not confined to the brain; it stretches out, distributed over body and
cortex, suggesting that body parts, because of the tight way we are coupled to them,
may behave like cognitive components, partially shaping how we think.
Before the theories of embodied, situated, and distributed cognition, “thinking” was
assumed to happen exclusively in the head. Voice and gesture were ways of externaliz-
ing thought but not part of creating it. Thought occurred inside; it was only expressed
on the outside. This sidelined everything outside the brain. Thus, utterance, gesture,
and bodily action were not seen as elements of thinking; they were the expression
of thought, proof that thinking was already taking place on the inside. Not really
necessary.
On newer accounts, thinking is a process that is distributed and interactive. Body
movement can literally be part of thinking. In any process, if you change one of the key
components in a functionally significant way you change the possible trajectories of the
system. Apply this to thought and it means that a significant change in body or voice
might affect how we think. Perhaps if we speak faster we make ourselves think faster.
Change our body enough and maybe we can even think what is currently unthinkable.
For instance, a new cognitive prosthesis might enable us to conceptualize things that
before were completely out of reach. And not just the 10
20
digit of pi! It would be a new
way of thinking of pi; something unlike anything we can understand now, in principle.
If modern cognitive theories are right, bodies have greater cognitive consequences than
we used to believe.
This idea can be generalized beyond bodies to the objects we interact with. If a tool
can at times be absorbed into the body then why limit the cognitive to the boundaries
of the skin? Why not admit that humans, and perhaps some higher animals too, may
actually think with objects that are separate from their bodies, assuming the two,
creature and object, are coupled appropriately? If tools can be thought with, why not
admit an even stronger version of the hypothesis: that if an object is cognitively gripped
in the right way then it can be incorporated into our thinking process even if it is not
neurally absorbed? Handling an object, for example, may be part of a thinking process,
if we move it around in a way that lets us appreciate an idea from a new point of
view. Model-based reasoning, literally. Moving the object and attending to what that
movement reveals pushes us to a new mental state that might be hard to reach without
outside help.

If it is true that we can and do literally think with physical objects, even if only for
brief moments, then new possibilities open up for the design of tangible, reality-based,
and natural computing. Every object we couple with in a cognitive way becomes an
opportunity for thought, control, and imagination. These cognitively gripped objects
are not simply thought aids like calculators; things that speed up what, in principle,
we can do otherwise. They let us do things we cannot do without them, or at least
not without huge effort. The implications of a theory of thinking that allows lifeless
material things to be actual constituents of the thinking process are far reaching. They
point to a future where one day, because of digital enhancement and good design, it will be mundane to think what is today inconceivable. Without cognitively informed
designers we will never get there.
1.1. Overview and Organization
This article has six sections. In the next section, Section 2, I review some of the literature on tool absorption [Maravita and Iriki 2004], and tie this to a discussion of the theory of enactive perception [O’Regan and Noë 2001; Noë 2005], to explain why
tool absorption changes the way we perceive the world. The short answer is that in
addition to altering our sense of where our body ends, each tool reshapes our “enactive landscape”: the world we see and partly create as active agents. With a tool in our
hands we selectively see what is tool relevant; we see tool-dependent affordances; we
extend our exploratory and probative capacities. This is obvious for a blind man with a
cane, who alters his body’s length and gains tactile knowledge of an otherwise invisible
world three feet away. His new detailed knowledge of the nearby changes his sense of
the terrain, and of the shape of things too big to handle but small enough to sweep. He
revises his perceptual apprehension of the peripersonal¹ both because he can sweep faster than he can touch and because he has extended his peripersonal field [Iriki et al. 1996; Ladavas 1998]. It is less obvious, though no less true, that a cook who is clever
with a blade, or knows how to wield a spatula, sees the cooking world differently than
a neophyte. Skill with a knife informs how to look at a chicken prior to its dismemberment; it informs how one looks at an unpeeled orange or a cauliflower, attending to this or that feature, seeing possibilities that are invisible to more naïve chefs or diners.
The same holds for spatulas. Without acquaintance with a spatula one would be blind
to the affordances of food that make them cleanly liftable off of surfaces, or the role
and meaning of the way oil coats a surface. With expertise comes expert perception
[Goodwin 1994; Aglioti et al. 2008]. This is a core commitment of embodiment theory:
the concepts and beliefs we have about the world are grounded in our perceptual-action
experience with things, and the more we have tool-mediated experiences the more our
understanding of the world is situated in the way we interact through tools.
In Section 3, the longest part of the article, I present some remarkable findings that
arose in our study of superexpert dancers.
One might think that we already know what our bodies are good for. To some extent,
we do. For instance, the by now classic position of embodied cognition is that the more
actions you can perform the more affordances you register (e.g., if you can juggle you can
see an object as affording juggling) [Gibson 1966]. Our bodies also infiltrate cognition
because our early sensory experience of things, our particular history of interactions
with them, figures in how we understand them ever after. Meaning is modal-sensory
specific [Barsalou 2008]. If we acquired knowledge of a thing visually, or we tend to
¹Peripersonal space is the three-dimensional volume within arm’s reach and leg’s reach. Visual stimuli near
a hand are coded by neurons with respect to the hand, not the eyes or some other location reflecting egocentric
location [Makin et al. 2007].

identify that thing on visual grounds, we stimulate these historic neural connections
in the later visual cortex when thinking of it [Barsalou 1999]. These visual experiences
often activate motor representations too, owing to our history of motor involvement
with the things we see. Thus, when thinking or speaking we regain access to the
constellation of associations typical of interacting with the thing. Even just listening
to language can trigger these activations in the associative cortex. The sentence “the
alarm sounded and John jumped out of bed” will activate areas in the auditory and
motor cortex related to alarms and jumping out of bed [Kaschak et al. 2006; Winter
and Bergen 2012]. This is the received embodiment view.
In the findings reported here I discuss additional ways bodies can play a role in
cognitive processing, ways we can use the physical machinery of the body and not just
our sensory cortex and its associative network. This means that our bodies are good for
more things than have traditionally been assumed. More specifically, I discuss how we use our bodies as simulation devices to physically model things.
For example, we found in our study with dancers that they are able to learn and con-
solidate mastery of a reasonably complex dance phrase better by physically practicing
a distorted model of the phrase than by mentally simulating the phrase undistorted. If
all that matters is what happens in the brain we would not observe this difference in
learning between simulating in the head and simulating with the body. But somehow,
by modeling a movement idea bodily, even when the model is imperfect, the dancers
we studied were able to learn more about the structure of their dance movement than
by simulating it without moving. Perhaps this is intuitive. But more surprisingly, the
dancers learned the phrase better by working with the distorted model than by prac-
ticing the way one intuitively thinks they should: by physically executing the phrase,
or parts of it, in a complete and undistorted manner, repeatedly. In other words, our
dancers learned best when they explored a dance phrase by making a physical model
of the phrase (through dancing it), even though the model they made was imperfect.
Standard practice might not be considered to be modeling. No one predicted that find-
ing! The dancers seem to be using their bodies in a special way when they make these
imperfect models.
This is not specific to dance. Mechanics trying to understand a machine may sketch
on paper an imprecise or distorted model. This can help them explore mechanical
subsystems or help them consider physical principles. Architects may sketch in fluid
strokes their early ideas to get a feel for the way light pours in, or how people might
move through a space. Accuracy is not important, flow is. Violinists when practicing
a hard passage may work on their bowing while largely neglecting their fingers. They
are not aiming for perfection in the whole performance; they are fixating on aspects.
To fixate on certain aspects it may be easier to work with their body and instrument
than to think about those aspects “offline” in their head. These sorts of methods may
be common and intuitive; but on reflection, it is odd, to say the least, that practicing
(literally) the wrong thing can lead to better performance of the right thing [Kirsh et al.
2012]. I think this technique is prevalent, and deeply revealing.
Does anyone understand how or why it works? The knee-jerk reply is that for
sketches, at least, the function of the activity is to take something that is transi-
tory and internal—a thought or idea—and convert it into something that is persistent
and external—a sketch. This allows the agent to come back to it repeatedly, and to
interact with it in different ways than something purely in mind [Buxton 2007]. But
persistence doesn’t explain the utility of making physical actions like gesturing, violin
bowing, or dancing, all of which are external but ephemeral. How do we think with
these ephemera?
Section 4 explores why such ephemera might be so effective. The answer I offer
is that body activity may figure as an external mediating structure in thinking and
