
Showing papers on "Crossmodal" published in 2007


Journal ArticleDOI
TL;DR: It is suggested that, analogously to the organisation of the visual system, somatosensory processing for the guidance of action can be dissociated from the processing that leads to perception and memory.
Abstract: The functions of the somatosensory system are multiple. We use tactile input to localize and experience the various qualities of touch, and proprioceptive information to determine the position of different parts of the body with respect to each other, which provides fundamental information for action. Further, tactile exploration of the characteristics of external objects can result in conscious perceptual experience and stimulus or object recognition. Neuroanatomical studies suggest parallel processing as well as serial processing within the cerebral somatosensory system that reflect these separate functions, with one processing stream terminating in the posterior parietal cortex (PPC), and the other terminating in the insula. We suggest that, analogously to the organisation of the visual system, somatosensory processing for the guidance of action can be dissociated from the processing that leads to perception and memory. In addition, we find a second division between tactile information processing about external targets in service of object recognition and tactile information processing related to the body itself. We suggest the posterior parietal cortex subserves both perception and action, whereas the insula principally subserves perceptual recognition and learning.

506 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated whether the "unity assumption," according to which an observer assumes that two different sensory signals refer to the same underlying multisensory event, influences the multisensory integration of audiovisual speech stimuli.
Abstract: We investigated whether the “unity assumption,” according to which an observer assumes that two different sensory signals refer to the same underlying multisensory event, influences the multisensory integration of audiovisual speech stimuli. Syllables (Experiments 1, 3, and 4) or words (Experiment 2) were presented to participants at a range of different stimulus onset asynchronies using the method of constant stimuli. Participants made unspeeded temporal order judgments regarding which stream (either auditory or visual) had been presented first. The auditory and visual speech stimuli in Experiments 1–3 were either gender matched (i.e., a female face presented together with a female voice) or else gender mismatched (i.e., a female face presented together with a male voice). In Experiment 4, different utterances from the same female speaker were used to generate the matched and mismatched speech video clips. When performance was measured in terms of the just noticeable difference (JND), the participants in all four experiments found it easier to judge which sensory modality had been presented first when evaluating mismatched stimuli than when evaluating the matched-speech stimuli. These results therefore provide the first empirical support for the “unity assumption” in the domain of the multisensory temporal integration of audiovisual speech stimuli.
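For readers unfamiliar with this analysis, the JND in such temporal order judgment tasks is typically obtained by fitting a psychometric function to the proportion of "visual first" responses across stimulus onset asynchronies (SOAs). The sketch below illustrates that standard approach on made-up data; the SOA values, response proportions, and choice of a cumulative Gaussian are assumptions for illustration, not the authors' actual stimuli or fitting procedure.

```python
# Minimal sketch: estimating the point of subjective simultaneity (PSS) and
# the just noticeable difference (JND) from temporal order judgment data
# collected with the method of constant stimuli. All numbers are invented.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Stimulus onset asynchronies in ms (negative = auditory stream led).
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
# Hypothetical proportion of "visual first" responses at each SOA.
p_visual_first = np.array([0.05, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95, 0.98])

def psychometric(x, pss, sigma):
    """Cumulative Gaussian: probability of judging 'visual first' at SOA x."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa, p_visual_first, p0=[0.0, 100.0])

# The JND is conventionally half the distance between the 25% and 75% points
# of the fitted curve, i.e. sigma * z(0.75) for a cumulative Gaussian.
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

A smaller JND indicates that temporal order was easier to judge, which is the sense in which mismatched (e.g., gender-incongruent) pairs yielded better temporal discrimination than matched pairs in the study above.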

283 citations


Journal ArticleDOI
TL;DR: The authors review the large body of empirical research accumulated over the last 50 years demonstrating the importance of low-level spatiotemporal factors in the multisensory integration of auditory and visual stimuli (as indexed, for example, by research on the ventriloquism effect).
Abstract: Over the last 50 years or so, a large body of empirical research has demonstrated the importance of a variety of low-level spatiotemporal factors in the multisensory integration of auditory and visual stimuli (as, for example, indexed by research on the ventriloquism effect). Here, the evidence highlighting the contribution of both spatial and temporal factors to multisensory integration is briefly reviewed. The role played by the temporal correlation between auditory and visual signals, stimulus motion, intramodal versus crossmodal perceptual grouping, semantic congruency, and the unity assumption in modulating multisensory integration is also discussed. Taken together, the evidence now supports the view that a number of different factors, both structural and cognitive, conjointly contribute to the multisensory integration (or binding) of auditory and visual information.

202 citations


Journal ArticleDOI
TL;DR: The results showed a specific deficit in multisensory speech processing in the absence of any measurable deficit in unisensory speech processing, and suggest that sensory integration dysfunction may be an important and, to date, rather overlooked aspect of schizophrenia.

180 citations


Journal ArticleDOI
TL;DR: This work found that unattended touches to one hand enhanced visual sensitivity for phosphenes induced by occipital transcranial magnetic stimulation (TMS) when the touched hand was spatially coincident with the reported location of the phosphene in external space.

154 citations


Journal ArticleDOI
TL;DR: The results showed that the percentage of visually influenced responses to audiovisual stimuli was reduced when attention was diverted to a tactile task, suggesting that the interactions between the attentional system and crossmodal binding mechanisms may be much more extensive and dynamic than previous studies have suggested.
Abstract: One of the classic examples of multisensory integration in humans occurs when speech sounds are combined with the sight of corresponding articulatory gestures. Despite the longstanding assumption that this kind of audiovisual binding operates in an attention-free mode, recent findings (Alsius et al. in Curr Biol, 15(9):839-843, 2005) suggest that audiovisual speech integration decreases when visual or auditory attentional resources are depleted. The present study addressed the generalization of this attention constraint by testing whether a similar decrease in multisensory integration is observed when attention demands are imposed on a sensory domain that is not involved in speech perception, such as touch. We measured the McGurk illusion in a dual task paradigm involving a difficult tactile task. The results showed that the percentage of visually influenced responses to audiovisual stimuli was reduced when attention was diverted to a tactile task. This finding is attributed to a modulatory effect on audiovisual integration of speech mediated by supramodal attention limitations. We suggest that the interactions between the attentional system and crossmodal binding mechanisms may be much more extensive and dynamic than previous studies have suggested.

147 citations


Journal ArticleDOI
TL;DR: The data suggest the occurrence of a specific audio-visual integration deficit in AD, which might be the consequence of a connectivity breakdown, and corroborate observations from other studies of crossmodal deficits between the auditory and visual modalities in this population.

124 citations


Journal ArticleDOI
TL;DR: It is demonstrated that irrelevant tactile signals facilitate the detection of faint tones and increase auditory intensity ratings; these crossmodal facilitation effects were found for synchronous as compared with asynchronous auditory-tactile stimulation.

124 citations


Journal ArticleDOI
TL;DR: This study provides evidence for enhanced sensitivity to facial cues at the level of reflex-like emotional responses in individuals with PDD and argues against impairments in crossmodal affect processing at this level of perception.
Abstract: Background: Despite extensive research, it is still debated whether impairments in social skills of individuals with pervasive developmental disorder (PDD) are related to specific deficits in the early processing of emotional information. We aimed to test both automatic processing of facial affect as well as the integration of auditory and visual emotion cues in individuals with PDD. Methods: In a group of high-functioning adult individuals with PDD and an age- and IQ-matched control group, we measured facial electromyography (EMG) following presentation of visual emotion stimuli (facial expressions) as well as the presentation of audiovisual emotion pairs (faces plus voices). This emotionally driven EMG activity is considered to be a direct correlate of automatic affect processing that is not under intentional control. Results: Our data clearly indicate that among individuals with PDD facial EMG activity is heightened in response to happy and fearful faces, and intact in response to audiovisual affective information. Conclusions: This study provides evidence for enhanced sensitivity to facial cues at the level of reflex-like emotional responses in individuals with PDD. Furthermore, the findings argue against impairments in crossmodal affect processing at this level of perception. However, given how little comparative work has been done in the area of multisensory perception, there is certainly a need for further exploration.

103 citations


Journal ArticleDOI
TL;DR: In this paper, EEG analyses using a wavelet transform suggested that interelectrode phase synchrony in the gamma-band range (40-50 Hz) was related to behavioral indices of the intermodal illusion under consideration.
Abstract: The integration of multimodal stimuli has been regarded as important for the promotion of adaptive behavior. Although recent work has identified brain areas that respond to multimodal stimuli, the temporal features of this integration are not yet clear. Earlier event-related potential studies revealed crossmodal attention effects, but did not focus on mechanisms underlying crossmodal integration. Here, electroencephalography (EEG) activity in the gamma band was considered as a correlate of multimodal integration. Participants localized a tactile stimulus on their fingers while seeing visual stimuli on rubber hands with the same posture as their hands. EEG analyses using a wavelet transform suggested that interelectrode phase synchrony in the gamma-band range (40-50 Hz) was related to behavioral indices of the intermodal illusion under consideration. The findings suggest a role of high-frequency oscillations in the integrative processing of stimuli across modalities.
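The inter-electrode phase synchrony mentioned here is commonly quantified as a phase-locking value (PLV) computed on wavelet-derived instantaneous phases. The sketch below illustrates that general approach on synthetic signals; the sampling rate, wavelet parameters, and channel pair are assumptions for illustration and do not reproduce the authors' actual recording or analysis pipeline.

```python
# Minimal sketch: gamma-band (40-50 Hz) phase-locking value between two
# "electrodes", using a complex Morlet wavelet transform. Signals are synthetic.
import numpy as np

fs = 500.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)     # one second of data
rng = np.random.default_rng(0)

# Two synthetic channels sharing a 45 Hz component plus independent noise.
eeg_a = np.sin(2 * np.pi * 45 * t) + 0.5 * rng.standard_normal(t.size)
eeg_b = np.sin(2 * np.pi * 45 * t + 0.3) + 0.5 * rng.standard_normal(t.size)

def morlet_wavelet(freq, fs, n_cycles=6):
    """Complex Morlet wavelet centred on `freq` Hz."""
    sd = n_cycles / (2 * np.pi * freq)                # temporal width in s
    tw = np.arange(-5 * sd, 5 * sd, 1 / fs)
    return np.exp(2j * np.pi * freq * tw) * np.exp(-tw**2 / (2 * sd**2))

def phases(x, freqs, fs):
    """Instantaneous phase of x at each frequency via wavelet convolution."""
    return np.array([np.angle(np.convolve(x, morlet_wavelet(f, fs), mode="same"))
                     for f in freqs])

gamma = np.arange(40, 51)                             # 40-50 Hz band
dphi = phases(eeg_a, gamma, fs) - phases(eeg_b, gamma, fs)
plv = np.abs(np.exp(1j * dphi).mean(axis=1)).mean()   # 1 = perfect phase locking
print(f"Gamma-band PLV between the two channels: {plv:.2f}")
```

In a study like the one above, such a PLV (computed per trial or per condition) would then be related to behavioural indices of the illusion, for example by testing whether stronger gamma-band synchrony accompanies a stronger illusion.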

103 citations


Proceedings ArticleDOI
12 Nov 2007
TL;DR: The results suggest that participants can recognize and understand a message presented in a different modality very effectively, and they will aid designers of mobile displays in creating effective crossmodal cues that require minimal training for users and can provide alternative presentation modalities through which information may be presented if the context requires.
Abstract: This paper reports an experiment into the design of crossmodal icons, which can provide an alternative form of output for mobile devices using audio and tactile modalities to communicate information. A complete set of crossmodal icons was created by encoding three dimensions of information in three crossmodal auditory/tactile parameters. Earcons were used for the audio crossmodal icons and Tactons for the tactile ones. The experiment investigated absolute identification of audio and tactile crossmodal icons when a user is trained in one modality and tested in the other (with no training in that other modality), to see whether knowledge could be transferred between modalities. We also compared performance when users were static and mobile, to see what effects mobility might have on recognition of the cues. The results showed that if participants were trained in sound with Earcons and then tested with the same messages presented via Tactons, they could recognize 85% of messages when stationary and 76% when mobile. When trained with Tactons and tested with Earcons, participants could accurately recognize 76.5% of messages when stationary and 71% of messages when mobile. These results suggest that participants can recognize and understand a message in a different modality very effectively, and they will aid designers of mobile displays in creating effective crossmodal cues which require minimal training for users and can provide alternative presentation modalities through which information may be presented if the context requires.

Journal ArticleDOI
TL;DR: Data suggest that endogenously preparing to use a tool enhances visual-tactile interactions near the tools, likely due to the increased behavioural relevance of visual stimuli as each tool use action is prepared before execution.
Abstract: Active tool use in human and non-human primates has been claimed to alter the neural representations of multisensory peripersonal space. To date, most studies suggest that a short period of tool use leads to an expansion or elongation of these spatial representations, which lasts several minutes after the last tool use action. However, the possibility that multisensory interactions also change on a much shorter time scale following or preceding individual tool use movements has not yet been investigated. We measured crossmodal (visual-tactile) congruency effects as an index of multisensory integration during two tool use tasks. In the regular tool use task, the participants used one of two tools in a spatiotemporally predictable sequence after every fourth crossmodal congruency trial. In the random tool use task, the required timing and spatial location of the tool use task varied unpredictably. Multisensory integration effects increased as a function of the number of trials since tool use in the regular tool use group, but remained relatively constant in the random tool use group. The spatial distribution of these multisensory effects, however, was unaffected by tool use predictability, with significant spatial interactions found only near the hands and at the tips of the tools. These data suggest that endogenously preparing to use a tool enhances visual-tactile interactions near the tools. Such enhancements are likely due to the increased behavioural relevance of visual stimuli as each tool use action is prepared before execution.

Journal ArticleDOI
TL;DR: It is suggested that the approach to identify major processing streams based on the processing goal does not preclude interactions between them, and details regarding body representations, haptic object recognition, and crossmodal processing are specified.
Abstract: The commentaries have raised important points regarding different aspects of our model. Some have queried the nature of the proposed dissociations, whereas others have requested and provided further details regarding aspects we had glossed over. Here we suggest that our approach to identify major processing streams based on the processing goal does not preclude interactions between them. We further specify details regarding body representations, haptic object recognition, and crossmodal processing, but are also aware that several features of the model require further filling in.

Dissertation
04 Jun 2007
TL;DR: It is concluded that in comparison to a baseline with visual information, tactile displays can improve performance and lower the risk of sensory and cognitive overload in navigation and orientation tasks.
Abstract: Perceiving and understanding the information from, for example, a visual navigation display may be difficult for people with a visual challenge or in situations where the user's visual sense and cognitive resources are heavily loaded. Developing information presentation schemes that reduce the threat of overloading eyes and mind becomes increasingly important. Employing the sense of touch can reduce the reliance on the visual system. By developing an intuitive information presentation concept, we may also lessen the cognitive load. In the tactile sense, an intuitive presentation concept may be based on the proverbial tap-on-the-shoulder. For instance, a localized vibration on the torso can present the direction of a waypoint. The first two questions addressed in this thesis are concerned with the spatial resolution of the torso for tactile stimuli, and the role of the timing parameters of the presentation. We found a uniform acuity of 3-4 cm for most of the torso. The burst duration of the presentation has only a small effect on the acuity, but the Stimulus Onset Asynchrony (SOA) is an important determinant of performance: a smaller SOA requires a larger distance between stimuli. The third question concerned the ability to determine the absolute location of stimuli. Observers could localize stimuli within 5 cm of their veridical location. Mislocalizations were mainly along a line originating from the body midaxis. The fourth and fifth questions addressed how accurately users couple a direction to a localized vibration, and how observers determine a direction based on a single point stimulus. There is a bias in perceived directions toward the midsagittal plane, but observers are consistent in their direction determination. Observers used two internal reference points (one for each torso half) to determine the direction of the point stimulus. The sixth question concerned the crossmodal visual-tactile perception of space and time. The experimental results indicate that the same internal representation is used in unimodal and multimodal comparisons. The last three questions addressed the behavioural aspects of tactile displays for navigation and orientation. Question 7 looked at the effect on mental effort ratings when a tactile display is present together with, or instead of, visual information. The results show that users rate the required mental effort as lower when they have a tactile display at their disposal. The next relevant question then becomes whether a tactile display can make users immune to (high levels of) mental workload. We found mixed results. We hypothesised that this issue may depend on the design of the tactile information. Favourable effects on performance were found across all tasks tested, including vehicle waypoint navigation, target interception in a jet fighter, maintaining a stable hover in a helicopter, and orienting in microgravity. We conclude that, in comparison to a baseline with visual information, tactile displays can improve performance and lower the risk of sensory and cognitive overload in navigation and orientation tasks. Keywords: Display, Human, Navigation, Orientation, Perception, Performance, Tactile, Touch, Workload

Journal ArticleDOI
TL;DR: In this paper, the effect of modulating crossmodal interactions between visual and somatosensory stimuli that in isolation do not reach perceptual awareness was investigated, and the results suggest that under subthreshold conditions of visual and somatosensory stimulation, crossmodal interactions presented in a spatially and temporally specific manner can sum up to become behaviorally significant.
Abstract: Crossmodal sensory interactions serve to integrate behaviorally relevant sensory stimuli. In this study, we investigated the effect of modulating crossmodal interactions between visual and somatosensory stimuli that in isolation do not reach perceptual awareness. When a subthreshold somatosensory stimulus was delivered within close spatiotemporal congruency to the expected site of perception of a phosphene, a subthreshold transcranial magnetic stimulation pulse delivered to the occipital cortex evoked a visual percept. The results suggest that under subthreshold conditions of visual and somatosensory stimulation, crossmodal interactions presented in a spatially and temporally specific manner can sum up to become behaviorally significant. These interactions may reflect an underlying anatomical connectivity and become further enhanced by attention modulation mechanisms.

Journal ArticleDOI
TL;DR: Perceptual dissociation of absolute-frequency-based crossmodal-integration effects from relative-pitch-based explicit perception of the tones provides evidence for a sensory integration of auditory and visual signals in representing human gender.

Reference EntryDOI
01 Jun 2007
TL;DR: It is argued that there is a consistent overall picture emerging from research: Visual perception depends heavily on inborn and early maturing mechanisms, although evidence also suggests roles for learning, especially in calibration and tuning.
Abstract: In this chapter, we consider early visual perception and its development. We begin by reviewing theories of perceptual development, tracing the influences of historically important concepts that frame contemporary scientific efforts. We consider evidence about basic visual sensitivities that support perceptual knowledge in infancy, including visual acuity and sensitivity to contrast, orientation, pattern, color, and motion. From these foundations, we assess initial capabilities and developmental trends in several important domains: visual perception of space and depth, objects, and faces. In a concluding section, we argue that there is a consistent overall picture emerging from research: Visual perception depends heavily on inborn and early maturing mechanisms, although evidence also suggests roles for learning, especially in calibration and tuning. Besides surveying research on early visual abilities, we use it to illustrate several general themes in perceptual development: the multiple levels of explanation required to understand perception; the interactions of hardwired abilities, maturation, and learning; and the methods that allow assessment of early perception. These themes all have broad relevance for research in cognitive and social development. Keywords: color vision; face perception; infant perception; object perception; perceptual development; space perception; theories of perception; visual development

Journal ArticleDOI
TL;DR: It is shown that in sighted subjects the accuracy of sound localization, measured by a task of head pointing to acoustic targets, is reversibly increased after short-term light deprivation, indicating that auditory-visual crossmodal plasticity can be initiated quite rapidly by depriving the visual cortex of visual input.

Journal ArticleDOI
TL;DR: It is proposed that cross-modality is the predominant process in early blind subjects whereas mental imagery is predominant in blindfolded sighted subjects, which implies that, with training, sensory substitution mainly induces visual-like perception in sighted subjects and mainly auditory or tactile perception in blind subjects.

Journal ArticleDOI
TL;DR: The results suggest that this sort of crossmodal orienting is automatic because it occurred even when participants were provided with detailed information about the target to prevent uninformative auditory cues from orienting attention.

Journal ArticleDOI
TL;DR: The results demonstrate cross-modal effects on auditory object perception in that sound ambiguity was resolved by synchronous presentation of visual stimuli, which promoted either an integrated or segregated perception of the sounds.

Posted Content
TL;DR: In this article, the authors examined whether inputs from one sensory modality can influence the experience of a stimulus in another modality, and they found that shapes (jagged versus rounded) or lighting (bright versus dim) presented in temporal proximity to a gustatory stimulus influenced participants' taste judgments as well as consumption.
Abstract: Consumption experiences typically involve inputs from multiple sensory modalities. However, consumer research has seldom investigated how multiple sensory inputs may interact to affect such experiences. This research examines whether inputs from one sensory modality can influence the experience of a stimulus in another sensory modality. In a series of experiments, we find that shapes (jagged versus rounded) or lighting (bright versus dim) presented in temporal proximity to a gustatory stimulus influences participants' taste judgments as well as consumption. These findings suggest a correspondence between sensory stimuli that are typically thought to belong to categorically distinct sensory modalities.

Journal ArticleDOI
TL;DR: An effect of the temporal structure of irrelevant sounds on visual apparent motion is demonstrated and discussed in light of a related multisensory phenomenon, ‘temporal ventriloquism’, on the assumption that sounds can attract lights in the temporal dimension.
Abstract: When two discrete stimuli are presented in rapid succession, observers typically report a movement of the lead stimulus toward the lag stimulus. The object of this study was to investigate crossmodal effects of irrelevant sounds on this illusion of visual apparent motion. Observers were presented with two visual stimuli that were temporally separated by interstimulus onset intervals from 0 to 350 ms. After each trial, observers classified their impression of the stimuli using a categorisation system. The presentation of short sounds intervening between the visual stimuli facilitated the impression of apparent motion relative to baseline (visual stimuli without sounds), whereas sounds presented before the first and after the second visual stimulus as well as simultaneously presented sounds reduced the motion impression. The results demonstrate an effect of the temporal structure of irrelevant sounds on visual apparent motion that is discussed in light of a related multisensory phenomenon, 'temporal ventriloquism', on the assumption that sounds can attract lights in the temporal dimension.

Journal ArticleDOI
TL;DR: No association between the degree of visual and auditory hemi-inattention was observed amongst the patients, suggesting that there is a certain degree of independence between the mechanisms subserving spatial attention across sensory modalities.

Journal ArticleDOI
TL;DR: This first evidence for a crossmodal deficit in alcoholism contributes to explaining the contrast between experimental results that have, up to now, described only mild impairments in emotional facial expression (EFE) recognition in alcoholic subjects and the many clinical observations suggesting massive problems.
Abstract: Aims: Chronic alcoholism is classically associated with major deficits in the visual and auditory processing of emotions. However, the crossmodal (auditory-visual) processing of emotional stimuli, which occurs most frequently in everyday life, has not yet been explored. The aim of this study was to explore crossmodal processing in alcoholism, and specifically the auditory-visual facilitation effect. Methods: Twenty patients suffering from alcoholism and 20 matched healthy controls had to detect the emotion (anger or happiness) displayed by auditory, visual or auditory-visual stimuli. The stimuli were designed to elicit a facilitation effect (namely, faster reaction times (RTs) for the crossmodal condition than for the unimodal ones). RTs and performance were recorded. Results: While the control subjects showed a significant facilitation effect, the alcoholic individuals did not: their RTs did not differ significantly across modalities. This lack of a facilitation effect is a marker of impaired auditory-visual processing. Conclusions: Crossmodal processing of complex social stimuli (such as faces and voices) is crucial for interpersonal relations. This first evidence for a crossmodal deficit in alcoholism contributes to explaining the contrast between experimental results that have, up to now, described only mild impairments in emotional facial expression (EFE) recognition in alcoholic subjects (e.g. Oscar-Berman et al., 1990) and the many clinical observations suggesting massive problems.

Journal ArticleDOI
TL;DR: The results demonstrate that the difficulties with speech perception by SLI children extend beyond the auditory-only modality to include auditory-visual processing as well.
Abstract: Purpose: It has long been known that children with specific language impairment (SLI) can demonstrate difficulty with auditory speech perception. However, speech perception can also involve the integration of both auditory and visual articulatory information. Method: Fifty-six preschool children, half with and half without SLI, were studied in order to examine auditory-visual integration. Children watched and listened to video clips of a woman speaking [bi] and [gi]. They also listened to audio clips of [bi], [di], and [gi] produced by the same woman. The effect of visual input on speech perception was tested by presenting an auditory [bi] combined with a visually articulated [gi], which tends to alter the phoneme percept (the McGurk effect). Results: Both groups of children performed at ceiling when asked to identify speech tokens in auditory-only and congruent auditory-visual modalities. In the incongruent auditory-visual condition, a stronger McGurk effect was found for the normal language group than for the group with SLI.

Journal ArticleDOI
TL;DR: Investigating audio-visual binding, using the ventriloquism effect as an indicator of perceived binding, revealed activation in the insula, superior temporal sulcus and parieto-occipital sulcus, areas discussed in the literature as being involved in multisensory processing.

Journal ArticleDOI
TL;DR: This paper reviews research demonstrating that deafness affects the development of specific visual functions and their neural substrates, and suggests that visual speech perception skills that develop during periods of deafness have positive implications for later perception of auditory speech.
Abstract: Hearing loss has obvious implications for communication and auditory functioning. A less obvious implication of hearing loss is its effect on the remaining sensory systems, particularly vision. This paper will review research demonstrating that deafness affects the development of specific visual functions and their neural substrates, including motion processing, face processing, and attention to peripheral space. Implications of this cross-modal plasticity are discussed in a review of studies with cochlear implant recipients. This latter work suggests that visual speech perception skills that develop during periods of deafness have positive implications for later perception of auditory speech. These effects are discussed in light of multimodal processing and perceptual learning.

Journal ArticleDOI
TL;DR: This work investigates behavioural correlates of multisensory, in particular audiovisual, integration in the processing of biological motion cues using a new psychophysical paradigm and reports the existence of direction-selective effects, which suggest motion-sensitive, direction-tuned integration mechanisms that are, if not unique to biological visual motion, at least not common to all types of visual motion.

Journal ArticleDOI
TL;DR: The results support the view that the perceptual and decisional components involved in audiovisual interactions in motion processing can coexist but are largely independent of one another.