Words are not enough: how preschoolers' integration of perspective and emotion informs their referential understanding.
University of Calgary
PRISM: University of Calgary's Digital Repository
Arts Arts Research & Publications
2017-05
Words are not enough: how preschoolers' integration of perspective and emotion informs their referential understanding
Graham, Susan; San Juan, Valerie; Khu, Melanie
Cambridge University Press
Graham, S. A., San Juan, V., & Khu, M. (2016). Words are not enough: how preschoolers’
integration of perspective and emotion informs their referential understanding. "Journal of Child
Language", 44(3), 500–526. doi: 10.1017/S0305000916000519
http://hdl.handle.net/1880/111869
journal article
https://creativecommons.org/licenses/by/4.0
Unless otherwise indicated, this material is protected by copyright and has been made available
with authorization from the copyright owner. You may use this material in any way that is
permitted by the Copyright Act or through licensing that has been assigned to the document. For
uses that are not allowable under copyright legislation or licensing, you are required to seek
permission.
Downloaded from PRISM: https://prism.ucalgary.ca
Words are not enough: how preschoolers’ integration of
perspective and emotion informs their referential
understanding*
SUSAN A. GRAHAM, VALERIE SAN JUAN AND
MELANIE KHU
University of Calgary
(Received June – Revised August – Accepted September –
First published online November)
ABSTRACT
When linguistic information alone does not clarify a speaker’s intended
meaning, skilled communicators can draw on a variety of cues to
infer communicative intent. In this paper, we review research
examining the developmental emergence of preschoolers’ sensitivity to
a communicative partner’s perspective. We focus particularly on
preschoolers’ tendency to use cues both within the communicative
context (i.e. a speaker’s visual access to information) and within the
speech signal itself (i.e. emotional prosody) to make on-line inferences
about communicative intent. Our review demonstrates that
preschoolers’ ability to use visual and emotional cues of perspective to
guide language interpretation is not uniform across tasks, is
sometimes related to theory of mind and executive function skills,
and, at certain points of development, is only revealed by implicit
measures of language processing.
INTRODUCTION
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it
means just what I choose it to mean—neither more nor less.” “The
question is,” said Alice, “whether you can make words mean so many
* This work was supported by funds from the Canada Foundation for Innovation, the
Canada Research Chairs program, and the University of Calgary, and by an operating
grant from the Social Sciences and Humanities Research Council of Canada awarded to
Susan Graham. Valerie San Juan was supported by a postdoctoral fellowship from
SSHRC and an Eyes High Fellowship from the University of Calgary. We are very
grateful to our collaborators on the research reviewed in this paper: Jared Berman, Craig
Chambers, and Elizabeth Nilsen. We also thank Nina Anderson for her assistance with
the preparation of the manuscript. Address for correspondence: S. Graham, Dept. of
Psychology, University of Calgary, Calgary AB, TN N, Canada; e-mail: susan.
graham@ucalgary.ca
J. Child Lang. 44(3), 500–526. © Cambridge University Press
doi:10.1017/S0305000916000519
terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/S0305000916000519
Downloaded from https://www.cambridge.org/core. University of Calgary Library, on 20 Jun 2017 at 15:15:58, subject to the Cambridge Core
different things.” “The question is,” said Humpty Dumpty, “which is to
be master—that’s all.”
(Lewis Carroll, Through the Looking Glass)
As so cleverly illustrated by this exchange between Humpty Dumpty and
Alice, inferring a speaker’s intended meaning cannot always be
accomplished through words alone. Consider, for example, the following
situation: a child looks at her bookshelf and says to her parent “Can you
get the book?” Given that there are multiple possible referents (i.e. books)
available, how does the parent infer the child’s intended meaning? In the
face of this indeterminacy, listeners can use a variety of cues to infer the
child’s intended meaning. For example, the parent may consider whether
the child has a favourite book she always wants to read; whether there is a
particular book the parent, but not the child, can reach; whether there is a
book that is not visible to the child and thus can be excluded from
consideration; or whether the child sounds happy because a brand new
book is on the shelf. As demonstrated by this example, skilled listeners can
draw upon information about a speaker’s perspectives to gauge that
speaker’s communicative intent. This ability to use information about a
speaker’s perspective to make inferences about that speaker’s intended
meaning is known as communicative perspective taking.
Communicative situations like the one described in the example above are
frequently encountered in everyday interactions. Thus, core questions
arise around children’s abilities to attend to and integrate others’ perspectives
during communicative interactions and whether these perspectives can be
integrated rapidly enough to guide language processing in the moment. In
this paper, we review research examining the developmental emergence of
preschoolers’ sensitivity to a communicative partner’s perspective. We
focus particularly on preschoolers’ tendency to use cues both within the
communicative context (i.e. a speaker’s visual access to information) and
within the speech signal itself (i.e. emotional prosody) to make on-line
inferences about communicative intent. First, we review research
examining the emergence of communicative perspective taking during the
first two years of development, with particular focus on children’s
attention towards others’ visual perspectives. Next, we introduce the visual
world paradigm as a means of examining HOW cues of perspective become
integrated with on-line spoken language processing. We then review
research examining children’s sensitivity to a speaker’s visual perspective
and emotional prosody in referential communication, addressing current
issues in these research areas. We conclude with empirical challenges and
future directions.
THE EMERGENCE OF VISUAL PERSPECTIVE-TAKING AND
COMMUNICATIVE ABILITIES
Visual perspective taking involves tracking what another person can see in
order to form inferences about their knowledge and intentional actions
(Moll & Meltzoff, a). For example, knowing that a person cannot see
a toy that is hidden by a barrier may lead one to infer that she is unaware
of the toy’s presence. Around the same time that infants begin to
engage in verbal communicative interactions, they also begin to track and
reason about the perspectives of others. That is, studies using looking-time
measures have found evidence of perspective taking emerging just after
infants reach their first birthdays (Caron, Kiel, Dayton & Butler, ;
Dunphy-Lelii & Wellman, ; Luo & Baillargeon, ). For example,
-month-olds will selectively follow the gaze of another person whose
visual access to items is not occluded by either a physical barrier (Caron
et al., ; Dunphy-Lelii & Wellman, ) or a blindfold (Brooks &
Meltzoff, ). Similarly, ·-month-old infants will track an agent’s
visual access to a desired item and use the information to interpret the
agent’s subsequent actions (Luo & Baillargeon, ). When assessed
explicitly via verbal or behavioural selection responses, visual perspective-
taking abilities become evident around two years of age (Moll & Meltzoff,
b). For example, -month-olds, but not -month-olds, will
correctly respond to an adult who is searching for a toy (“Where is it? I
cannot find it”) by selecting an item hidden from the adult (Moll &
Tomasello, ).
Given the early development of visual perspective taking, when do
children first begin to consider the visual perspectives of others in
communicative interactions? The first studies to examine this question
suggested that before children reach school-age, they are largely egocentric
in their referential communication and fail to integrate feedback from
their communicative partner (e.g. Glucksberg & Krauss, ; Krauss
& Glucksberg, ). However, advancements in both methods and
technology have led to more sensitive means of assessing children’s visual
perspective taking. We now know that the ability to integrate perspective-
taking and communication abilities emerges during infancy and shows
marked improvement throughout the preschool years.
Between and months of age, infants begin to differentially adapt
their pointing gestures to communicate object location to both
knowledgeable and unknowledgeable agents (Liszkowski, Carpenter &
Tomasello, ). During this same period, infants will also vary their
interpretation of communicative behaviours (e.g. eye-gaze and emotional
reactions towards an object) depending on the visual perspective of
their communicative partner (Moll & Tomasello, ; Moses, Baldwin,
Rosicky & Tidball, ). By the end of their second year, infants begin to
use the perspectives of others to disambiguate spoken language. Specifically,
in word learning studies, researchers have shown that infants as young as
months will attend to where a speaker is looking to correctly infer the
referent of a novel label (e.g. Baldwin, , ; Tomasello, Strosberg &
Akhtar, ). By two years of age, children will monitor what a person
has or has not seen and will adapt their verbal requests for items to match
the knowledge state of their listener (Nayer & Graham, ; O’Neill,
). Overall, these findings suggest that as soon as infants begin to
reason about the visual perspectives of others, they begin to also use this
information to inform their interpretation and production of both non-
verbal and verbal communicative behaviours.
In summary, the ability to integrate visual perspective taking in receptive
and productive communication begins to emerge during the second year of
life. In the next section, we shift our focus to research that has begun to
examine HOW children develop the ability to integrate perspective-taking
abilities with on-line language processing. We begin with a brief overview
of the visual world paradigm as used in referential communication
experiments.
THE VISUAL WORLD PARADIGM
The visual world paradigm is the basic method used to study spoken
language comprehension in real time, drawing upon the systematic
relation between eye-movements and language processing (Allopenna,
Magnuson & Tanenhaus, ; Sedivy, Tanenhaus, Chambers & Carlson,
; Tanenhaus, Spivey-Knowlton, Eberhard & Sedivy, ). In this
paradigm, researchers track participants’ eye-movements as they respond
to spoken instructions in the context of a visual display (see Huettig,
Rommers & Meyer, ; Snedeker & Huang, , for recent reviews of
the paradigm). Using this paradigm, research has demonstrated that
spoken language is processed incrementally – that is, both child and adult
listeners interpret words and sentences as they unfold over time, rather
than waiting to hear an entire sentence before making inferences about a
speaker’s intended meaning (e.g. Allopenna et al., ; Swingley, Pinto &
Fernald, ; Tanenhaus et al., ; Trueswell, Sekerina, Hill &
Logrip, ). Furthermore, this incremental interpretation occurs in real
time, with listeners launching eye-movements to intended referents within
the first few hundred milliseconds of hearing a target word (e.g.
Tanenhaus et al., ; Trueswell et al., ).
Research using the visual world paradigm led to fundamental insights into
the interactive nature of the language processing system – that is, adult and
child listeners integrate linguistic, paralinguistic, and non-linguistic