
Showing papers by "Kevin G. Munhall published in 2016"


Journal Article
TL;DR: The results indicate that individual variance in audiovisual speech-in-noise performance can be accounted for, in part, by better use of fine facial detail extracted from the visual signal and by increased fixation on the mouth region for short stimuli.
Abstract: The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in the ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information than participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with the highest-gain participants fixating longer on the mouth region. Our results indicate that individual variance in audiovisual speech-in-noise performance can be accounted for, in part, by better use of fine facial detail extracted from the visual signal and by increased fixation on the mouth region for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.
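The key stimulus manipulation here, low-pass spatial frequency filtering, can be sketched in a few lines. The snippet below is an illustrative approximation only, not the paper's method: the function name lowpass_frame, the cycles-per-face cutoff, and the face-width value are all hypothetical placeholders, and the paper's actual filter implementation and parameters are not reproduced here.

```python
# Illustrative sketch: low-pass spatial frequency filtering of a video
# frame with a Gaussian blur. All names and parameter values below are
# hypothetical; they are not the filter or settings used in the paper.
import numpy as np
from scipy import ndimage

def lowpass_frame(frame, cutoff_cpf, face_width_px):
    """Attenuate spatial frequencies above `cutoff_cpf` (cycles per
    face) in a grayscale frame whose face spans `face_width_px` pixels."""
    # One cycle at the cutoff spans face_width_px / cutoff_cpf pixels.
    cycle_px = face_width_px / cutoff_cpf
    # A spatial Gaussian with sigma_s corresponds to a frequency-domain
    # Gaussian with sigma_f = 1 / (2 * pi * sigma_s), so matching the
    # cutoff gives sigma_s = cycle_px / (2 * pi).
    sigma = cycle_px / (2 * np.pi)
    return ndimage.gaussian_filter(frame, sigma=sigma)

# Example: keep only coarse facial structure (~7 cycles per face).
frame = np.random.rand(480, 640)  # stand-in for a real video frame
blurred = lowpass_frame(frame, cutoff_cpf=7.0, face_width_px=300.0)
```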

19 citations


Journal Article
TL;DR: The results suggest that individual differences in silent speechreading and in the McGurk effect are not related, a conclusion supported by the differential influence of high-resolution visual information on the two tasks and by differences in gaze patterns.
Abstract: Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vo...

17 citations


Posted Content
27 Jul 2016 · bioRxiv
TL;DR: It is shown that participants embraced ownership of a fake hand and of a stranger's voice to a similar degree, controlling both for individual suggestibility and for general susceptibility to body-schema illusions.
Abstract: Body schema, the multimodal representation of one's own body attributes, has previously been demonstrated to be malleable. In the rubber-hand illusion (Botvinick and Cohen, 1998), synchronous visual and tactile stimulation causes a fake hand to be perceived as one's own. Similarly, if a stranger's voice is heard in synchrony with one's own vocal production, that voice comes to be attributed to oneself (Zheng et al., 2011). Multimodal illusions like these distort the body schema on the basis of correlated input, yet the degree to which different instances of such distortion are perceived within the same individuals has never been examined. Here we show that participants embraced ownership of a fake hand and of a stranger's voice to a similar degree, controlling both for individual suggestibility and for general susceptibility to body-schema illusions. Our findings suggest that the perceptual inference that leads to distortion of the body schema is a stable trait.