
Showing papers in "Journal of Nonverbal Behavior in 2000"


Journal ArticleDOI
TL;DR: This article examined the hypotheses that nonverbal behavior could be useful in the detection of deceit and that lie detection would be most accurate if both verbal and nonverbal indicators of deception are taken into account.
Abstract: We examined the hypotheses that (1) a systematic analysis of nonverbal behavior could be useful in the detection of deceit and (2) lie detection would be most accurate if both verbal and nonverbal indicators of deception are taken into account. Seventy-three nursing students participated in a study about "telling lies" and either told the truth or lied about a film they had just seen. The interviews were videotaped and audiotaped, and the nonverbal behavior (NVB) and speech content of the liars and truth tellers were analyzed, the latter with the Criteria-Based Content Analysis technique (CBCA) and the Reality Monitoring technique (RM). Results revealed several nonverbal and verbal indicators of deception. On the basis of nonverbal behavior alone, 78% of the lies and truths could be correctly classified. An even higher percentage could be correctly classified when all three detection techniques (i.e., NVB, CBCA, RM) were taken into account.

398 citations


Journal ArticleDOI
TL;DR: The Japanese and Caucasian Brief Affect Recognition Test (JACBART) as discussed by the authors is a test designed to measure individual differences in emotion recognition ability (ERA). Five studies examined the reliability and validity of the scores produced using this test, and the work provides the first evidence for a correlation between ERA measured by a standardized test and personality.
Abstract: In this article, we report the development of a new test designed to measure individual differences in emotion recognition ability (ERA), five studies examining the reliability and validity of the scores produced using this test, and the first evidence for a correlation between ERA measured by a standardized test and personality. Utilizing Matsumoto and Ekman's (1988) Japanese and Caucasian Facial Expressions of Emotion (JACFEE) and Neutral Faces (JACNeuF), we call this measure the Japanese and Caucasian Brief Affect Recognition Test (JACBART). The JACBART improves on previous measures of ERA by (1) using expressions that have substantial validity and reliability data associated with them, (2) including posers of two visibly different races, (3) balanced across seven universal emotions, (4) with equal distribution of poser race and sex across emotions, and (5) in a format that eliminates afterimages associated with fast exposures. Scores derived using the JACBART are reliable, and three studies demonstrated a correlation between ERA and the personality constructs of Openness and Conscientiousness, while one study reports a correlation with Extraversion and Neuroticism.

330 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate whether factors known to influence the decoding of emotional expressions, such as gender and ethnicity of the stimulus person as well as the intensity of the expression, would also influence attributions of interpersonal intentions.
Abstract: Facial expressions of emotions convey not only information about emotional states but also about interpersonal intentions. The present study investigated whether factors known to influence the decoding of emotional expressions (the gender and ethnicity of the stimulus person as well as the intensity of the expression) would also influence attributions of interpersonal intentions. For this, 145 men and women rated emotional facial expressions posed by both Caucasian and Japanese male and female stimulus persons on perceived dominance and affiliation. The results showed that the sex and the ethnicity of the encoder influenced observers' ratings of dominance and affiliation. For anger displays only, this influence was mediated by expectations regarding how likely it is that a particular encoder group would display anger. Further, affiliation ratings were equally influenced by low intensity and by high intensity expressions, whereas only fairly intense emotional expressions affected attributions of dominance.

325 citations


Journal ArticleDOI
TL;DR: This paper evaluated the effect on verbal language development of encouraging hearing infants to use simple gestures as symbols for objects, requests, and conditions, and found strong evidence that symbolic gesturing does not hamper verbal development and may even facilitate it.
Abstract: The purpose of the present study was to evaluate the effect on verbal language development of purposefully encouraging hearing infants to use simple gestures as symbols for objects, requests, and conditions. To this end, 103 11-month-old infants were divided into three groups, all of whom were seen in the laboratory for a variety of assessments, including standardized language tests at 15, 19, 24, 30, and 36 months. Parents of those in the Sign Training group modeled symbolic gestures and encouraged their infants to use them. Parents of infants in the Non-intervention Control group knew nothing about symbolic gestures or our special interest in language development. As a control for "training effects" (i.e., effects attributable to families being engaged in a language intervention program), parents of a second control group of infants (the Verbal Training group) were asked to make special efforts to model verbal labels. After comparisons of the two control groups minimized concerns about training effects, comparisons between the Sign Training and the Non-intervention Control group indicated an advantage for the Sign Training group on the vast majority of language acquisition measures. These results provide strong evidence that symbolic gesturing does not hamper verbal development and may even facilitate it. A variety of possible explanations for such an effect are discussed.

257 citations


Journal ArticleDOI
TL;DR: This article investigated the effect of gesture as a form of external support for spoken language comprehension and found that the effects of gesture on speech comprehension depend both on the relation of gesture to speech, and on the complexity of the spoken message.
Abstract: Two experiments investigated gesture as a form of external support for spoken language comprehension. In both experiments, children selected blocks according to a set of videotaped instructions. Across trials, the instructions were given using no gesture, gestures that reinforced speech, and gestures that conflicted with speech. Experiment 1 used spoken messages that were complex for preschool children but not for kindergarten children. Reinforcing gestures facilitated speech comprehension for preschool children but not for kindergarten children, and conflicting gestures hindered comprehension for kindergarten children but not for preschool children. Experiment 2 tested preschool children with simpler spoken messages. Unlike Experiment 1, preschool children's comprehension was not facilitated by reinforcing gestures. However, children's comprehension also was not hindered by conflicting gestures. Thus, the effects of gesture on speech comprehension depend both on the relation of gesture to speech, and on the complexity of the spoken message.

155 citations


Journal ArticleDOI
TL;DR: In this article, recognition of facial expressions and vocal prosody was examined using the Diagnostic Assessment of Nonverbal Accuracy 2 (DANVA-2) and the Carolina Older Adult Test of Nonverbal Communication (COAT-NC) in 24 individuals with traumatic brain injury (TBI) and 24 matched controls.
Abstract: Recognition of facial expressions and vocal prosody was examined using the Diagnostic Assessment of Nonverbal Accuracy 2 (DANVA-2) and the Carolina Older Adult Test of Nonverbal Communication (COAT-NC) for 24 individuals with traumatic brain injury (TBI) and 24 matched controls. Results demonstrated that participants without TBI scored significantly higher than participants with TBI when presented with adult photo and voice stimuli. No significant group difference was noted with child photo and voice stimuli. Both groups scored significantly higher on photo subtests than on voice subtests for child and younger adult stimuli. For older adult stimuli, both groups scored significantly higher on the voice subtest than on the photo subtest. For the subjects with TBI, a significant relationship was found between scores on the voice subtests and a functional measure of cognition and communication.

122 citations


Journal ArticleDOI
TL;DR: The authors found that infants initially accept both words and gestures as symbols because parents often produce both verbal labels and gestural routines within the same joint-attention contexts, and that infants use words and symbolic gestures in markedly similar ways, to name and refer to objects.
Abstract: Infants initially use words and symbolic gestures in markedly similar ways, to name and refer to objects. The goal of these studies is to examine how parental verbal and gestural input shapes infants' expectations about the communicative functions of words and gestures. The studies reported here suggest that infants may initially accept both words and gestures as symbols because parents often produce both verbal labels and gestural routines within the same joint-attention contexts. In two studies, we examined the production of verbal and gestural labels in parental input during joint-attention episodes. In Study 1, parent-infant dyads engaged in a picture-book reading task in which parents introduced their infants to drawings of unfamiliar objects (e.g., accordion). Parents' verbal labeling far outstripped their gestural communication, but the number of gestures produced was non-trivial and was highly predictive of infant gestural production. In Study 2, parent-infant dyads engaged in a free-play session with familiar objects. In this context, parents produced both verbal and gestural symbolic acts frequently with reference to objects. Overall, these studies support an input-driven explanation for why infants acquire both words and gestures as object names, early in development.

105 citations


Journal ArticleDOI
TL;DR: In this article, the authors had men and women rate male targets on three personality traits, extraversion, neuroticism, and masculinity-femininity (M-F), and found that women showed higher trait accuracy than men in judging neuroticism.
Abstract: Fifty-three men and 56 women viewed brief video segments of 32 male targets and rated them on three personality traits: extraversion, neuroticism, and masculinity-femininity (M-F). Judges were assessed on general intelligence, Big Five traits, and gender-related traits. Two measures of accuracy were computed: (1) consensus accuracy, which measured the correlation between judges' ratings and corresponding ratings made by previous judges, and (2) trait accuracy, which measured the correlation between judges' ratings and targets' assessed personality. There was no gender difference in overall accuracy. However, women showed higher trait accuracy than men in judging neuroticism. Consensus accuracy exceeded trait accuracy, and extraversion and M-F were judged more accurately than neuroticism. M-F judgments showed the highest level of consensus accuracy. Judges' intelligence correlated positively with accuracy. Except for openness, personality traits were generally unrelated to accuracy.

104 citations


Journal ArticleDOI
TL;DR: This article examined gesture development in 5 congenitally blind and 5 sighted toddlers videotaped longitudinally between the ages of 14 and 28 months in their homes while engaging in free play with a parent or experimenter.
Abstract: Gesture is widely regarded to play an important role in communication, both in conjunction with and independent of speech. Indeed, gesture is known to develop even before the onset of spoken words. However, little is known about the communicative conditions under which gesture emerges. The aim of this study was to explore the role of vision in early gesturing. We examined gesture development in 5 congenitally blind and 5 sighted toddlers videotaped longitudinally between the ages of 14 and 28 months in their homes while engaging in free play with a parent or experimenter. All of the blind children were found to produce at least some gestures during the one-word stage of language development. However, gesture production was relatively low among the blind children relative to their sighted peers. Moreover, although blind and sighted children produced the same overall set of gesture types, the distribution of gesture types across categories differed. In addition, blind children used gestures primarily to communicate about objects that were nearby, while sighted children used them for nearby as well as distally located objects. These findings suggest that gesture may play different roles in the language-learning process for sighted and blind children. Nevertheless, they also make it clear that gesture is a robust phenomenon of early communicative development, emerging even in the absence of experience with a visual model.

73 citations


Journal ArticleDOI
TL;DR: The present results are consistent with the hypothesis that females are more facially reactive than males, but not more reactive in other respects.
Abstract: The aim of the present study was to explore whether females are specifically more facially reactive than males, or whether females are more emotionally reactive in general, as reflected even by non-facial reactions such as autonomic responding and emotional experience. Forty-eight females and 48 males were exposed to pictures of fear-relevant and fear-irrelevant stimuli while EMG activity was detected from the Corrugator supercilii muscle region. Skin conductance responses (SCRs) were measured, and the participants were also required to rate how unpleasant they experienced the stimuli to be. Fear-relevant stimuli evoked a larger corrugator response than fear-irrelevant stimuli, but only for females. Fear-relevant stimuli also elicited larger SCRs and higher ratings of unpleasantness, but these measures were almost identical for females and males. The present results are consistent with the hypothesis that females are more facially reactive than males, but not more reactive in other respects.

34 citations


Journal ArticleDOI
TL;DR: It was deduced that Down syndrome infants are capable of distinguishing the differential significance of faces and toys, so that, in the same way as typically developing infants, they direct their affective behavior fundamentally towards the social element, consistent with the affiliative function implied by smiling.
Abstract: We studied the relation between direction of gaze and smiling in 15 typically developing infants and 15 infants with Down syndrome. All of them were videotaped during face-to-face interaction with their mothers at home, and while having access to their familiar toys. Results showed that mothers in the two groups behaved in a similar way; that Down syndrome infants looked at their mother's face for longer than typically developing children; and that the relationship between looking and smiling was similar in the two cases, reflected as an increase in the time the infant looked at its mother's face and a decrease in the time the infant looked at toys. It was deduced that Down syndrome infants are capable of distinguishing the differential significance of faces and toys, so that, in the same way as typically developing infants, they direct their affective behavior fundamentally towards the social element, which leads us to consider the affiliative function implied by this expression.

Journal ArticleDOI
TL;DR: This article examined how 7- and 8-year-old children, 9- and 10-year-old children, and adults process mismatched, task-related speech and gesture differently as a function of development.
Abstract: Two experiments examined how 7- and 8-year-old children, 9- and 10-year-old children, and adults process mismatched, task-related speech and gesture differently as a function of development. Participants watched videotapes of children speaking and gesturing about the concept of conservation. Using a recognition paradigm, we assessed immediate memory for information conveyed in mismatched speech and gesture. In Experiment 1, we used recognition of verbal statements to probe participants' memory, whereas in Experiment 2, we used recognition of gestural statements to probe memory. When probed with verbal statements in Experiment 1, 9- and 10-year-old children failed to retrieve gestured information. When probed with gestural statements in Experiment 2, 9- and 10-year-old children failed to retrieve verbal information. In contrast, the younger children and adults showed retrieval of both verbal and gestural information across both recognition methods in Experiments 1 and 2. These results suggest a U-shaped function, with the 9- and 10-year-old children showing a limitation in the ability to process contradictory messages simultaneously conveyed in two modalities. Implications for identifying a transitional period in the development of representational skills are discussed.

Journal ArticleDOI
TL;DR: The impact of singular and compound facial expressions on individuals' recognition of faces was investigated in three studies as discussed by the authors, where a face recognition paradigm was used as a measure of the proficiency with which participants processed compound and singular facial expressions.
Abstract: The impact of singular (e.g., sadness alone) and compound (e.g., sadness and anger together) facial expressions on individuals' recognition of faces was investigated. In three studies, a face recognition paradigm was used as a measure of the proficiency with which participants processed compound and singular facial expressions. For both positive and negative facial expressions, participants displayed greater proficiency in processing compound expressions relative to singular expressions. Specifically, the accuracy with which faces displaying compound expressions were recognized was significantly higher than the accuracy with which faces displaying singular expressions were recognized. Possible explanations involving the familiarity, distinctiveness, and salience of the facial expressions are discussed.

Journal ArticleDOI
TL;DR: In this paper, nonverbal decoding and encoding abilities of undergraduates were examined as a function of their self-reported history of interparental violence. Students exposed to domestic violence showed an emotion-specific decoding deficit for recognizing happiness, no evidence for an advantage in decoding anger and fear, and overall deficits in posed encoding of emotions.
Abstract: Nonverbal decoding and encoding abilities of undergraduates were examined as a function of their self-reported history of interparental violence. Students exposed to domestic violence showed decoding and encoding deficits. Results for decoding revealed an emotion-specific deficit for recognizing happiness but no evidence for an advantage in decoding anger and fear. In contrast, students from violent homes showed overall deficits in posed encoding of emotions. There was no evidence for an emotion-specific encoding bias in the pattern of false negatives and no evidence for suppression of general expressiveness. Hence, it appears that the encoding deficit of students from violent homes is a result of inappropriate encoding. Results are discussed in terms of past theoretical explanations for the influence of family environment on nonverbal abilities.