
Showing papers in "Multisensory Research in 2021"


Journal ArticleDOI
TL;DR: In this paper, the effects of motion direction and eccentricity on vection, discomfort, and sway were investigated using optic flow patterns displayed on the Valve Index. Correlations showed a positive relationship between dizziness and vection duration, and between general discomfort and sway.
Abstract: Virtual Reality experienced through head-mounted displays often leads to vection, discomfort and sway in the user. This study investigated the effect of motion direction and eccentricity on these three phenomena using optic flow patterns displayed on the Valve Index. Visual motion stimuli were presented in the centre, periphery or far periphery and moved either in depth (back and forth) or laterally (left and right). Overall, vection was stronger for motion-in-depth compared to lateral motion. Additionally, eccentricity primarily affected stimuli moving in depth, with stronger vection for more peripherally presented motion patterns compared to more central ones. Motion direction affected the various aspects of VR sickness differently and modulated the effect of eccentricity on VR sickness. For stimuli moving in depth, far-peripheral presentation caused more discomfort, whereas for lateral motion the central stimuli caused more discomfort. Stimuli moving in depth led to more head movements in the anterior-posterior direction when the entire visual field was stimulated. Observers demonstrated more head movements in the anterior-posterior direction than in the medio-lateral direction throughout the entire experiment, independent of the motion direction or eccentricity of the presented moving stimulus. Correlations showed a positive relationship between dizziness and vection duration and between general discomfort and sway. Identifying where in the visual field presented motion causes the least VR sickness without losing vection and presence can guide the development of Virtual Reality games, training and treatment programs.

14 citations


Journal ArticleDOI
TL;DR: In this paper, a crowd-sourced online study was conducted to determine the acoustical/musical attributes that best match saltiness, as well as participants' confidence levels in their choices.
Abstract: Mounting evidence demonstrates that people make surprisingly consistent associations between auditory attributes and a number of the commonly-agreed basic tastes. However, the sonic representation of (association with) saltiness has remained rather elusive. In the present study, a crowd-sourced online study (n = 1819 participants) was conducted to determine the acoustical/musical attributes that best match saltiness, as well as participants' confidence levels in their choices. Based on previous literature on crossmodal correspondences involving saltiness, thirteen attributes were selected to cover a variety of temporal, tactile, and emotional associations. The results revealed that saltiness was associated most strongly with a long decay time, high auditory roughness, and a regular rhythm. In terms of emotional associations, saltiness was matched with negative valence, high arousal, and minor mode. Moreover, significantly higher average confidence ratings were observed for those saltiness-matching choices for which there was majority agreement, suggesting that individuals were more confident about their own judgments when these matched the group response, thereby providing support for the so-called 'consensuality principle'. Taken together, these results help to uncover the complex interplay of mechanisms behind seemingly surprising crossmodal correspondences between sound attributes and taste.

12 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the role of multisensory cues on vection by manipulating the availability of visual, auditory, and tactile stimuli in a VR setting and found that all three sensory cues induced vection when presented in isolation, with visual cues eliciting the highest intensity and longest duration.
Abstract: A critical component to many immersive experiences in virtual reality (VR) is vection, defined as the illusion of self-motion. Traditionally, vection has been described as a visual phenomenon, but more recent research suggests that vection can be influenced by a variety of senses. The goal of the present study was to investigate the role of multisensory cues on vection by manipulating the availability of visual, auditory, and tactile stimuli in a VR setting. To achieve this, 24 adults (mean age = 25.04 years) were presented with a rotating stimulus aimed to induce circular vection. All participants completed trials that included a single sensory cue, a combination of two cues, or all three cues presented together. The size of the field of view (FOV) was manipulated across four levels (no visuals, small, medium, full). Participants rated vection intensity and duration verbally after each trial. Results showed that all three sensory cues induced vection when presented in isolation, with visual cues eliciting the highest intensity and longest duration. The presence of auditory and tactile cues further increased vection intensity and duration compared to conditions where these cues were not presented. These findings support the idea that vection can be induced via multiple types of sensory inputs and can be intensified when multiple sensory inputs are combined.

8 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated whether visual cue reliability affects audiovisual recalibration in adults and children, and found that the immediate ventriloquist aftereffect depended on visual cue reliability in adults but not in 6-7-year-olds.
Abstract: Reliability-based cue combination is a hallmark of multisensory integration, while the role of cue reliability for crossmodal recalibration is less understood. The present study investigated whether visual cue reliability affects audiovisual recalibration in adults and children. Participants had to localize sounds, which were presented either alone or in combination with a spatially discrepant high- or low-reliability visual stimulus. In a previous study we had shown that the ventriloquist effect (indicating multisensory integration) was overall larger in the children groups and that the shift in sound localization toward the spatially discrepant visual stimulus decreased with visual cue reliability in all groups. The present study replicated the onset of the immediate ventriloquist aftereffect (a shift in unimodal sound localization following a single exposure of a spatially discrepant audiovisual stimulus) at the age of 6-7 years. In adults the immediate ventriloquist aftereffect depended on visual cue reliability, whereas the cumulative ventriloquist aftereffect (reflecting the audiovisual spatial discrepancies over the complete experiment) did not. In 6-7-year-olds the immediate ventriloquist aftereffect was independent of visual cue reliability. The present results are compatible with the idea of immediate and cumulative crossmodal recalibrations being dissociable processes and that the immediate ventriloquist aftereffect is more closely related to genuine multisensory integration.
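As context for the reliability-based account above, here is a minimal sketch of the standard maximum-likelihood (MLE) cue-combination rule, in which each cue is weighted by its reliability (inverse variance). This is a textbook formulation, not the authors' analysis code, and all numbers are hypothetical.

```python
import numpy as np

def mle_combine(est_v, sigma_v, est_a, sigma_a):
    """Fuse a visual and an auditory location estimate (e.g., in degrees)."""
    r_v, r_a = 1.0 / sigma_v**2, 1.0 / sigma_a**2  # reliability = inverse variance
    w_v = r_v / (r_v + r_a)                        # visual weight
    fused = w_v * est_v + (1.0 - w_v) * est_a      # reliability-weighted average
    sigma_fused = np.sqrt(1.0 / (r_v + r_a))       # fused estimate is more precise
    return fused, w_v, sigma_fused

# A reliable visual cue (small sigma_v) pulls the fused percept toward the
# visual location, as in the ventriloquist effect; a low-reliability visual
# cue is down-weighted, shrinking the shift.
print(mle_combine(est_v=5.0, sigma_v=1.0, est_a=0.0, sigma_a=4.0))
print(mle_combine(est_v=5.0, sigma_v=4.0, est_a=0.0, sigma_a=4.0))
```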

7 citations


Journal ArticleDOI
TL;DR: This article investigated how preterm newborns and full-term newborns respond to visual numerosity after habituation to auditory stimuli of different numerosities, and found that the two groups show a similar preferential looking response, both preferring the incongruent set after crossmodal habituation.
Abstract: Premature birth is associated with a high risk of damage in the parietal cortex, a key area for numerical and non-numerical magnitude perception and mathematical reasoning. Children born preterm have higher rates of learning difficulties for school mathematics. In this study, we investigated how preterm newborns (born at 28-34 weeks of gestational age) and full-term newborns respond to visual numerosity after habituation to auditory stimuli of different numerosities. The results show that the two groups have a similar preferential looking response to visual numerosity, both preferring the incongruent set after crossmodal habituation. These results suggest that the numerosity system is resistant to prematurity.

5 citations


Journal ArticleDOI
TL;DR: The authors found no significant congruency effects in the blood oxygenation level-dependent (BOLD) signal when participants were attending to visual shapes, however, during attention to auditory pseudowords, they observed greater BOLD activity for incongruent compared to congruent audiovisual pairs.
Abstract: Sound symbolism refers to the association between the sounds of words and their meanings, often studied using the crossmodal correspondence between auditory pseudowords, e.g., 'takete' or 'maluma', and pointed or rounded visual shapes, respectively. In a functional magnetic resonance imaging study, participants were presented with pseudoword-shape pairs that were sound-symbolically congruent or incongruent. We found no significant congruency effects in the blood oxygenation level-dependent (BOLD) signal when participants were attending to visual shapes. During attention to auditory pseudowords, however, we observed greater BOLD activity for incongruent compared to congruent audiovisual pairs bilaterally in the intraparietal sulcus and supramarginal gyrus, and in the left middle frontal gyrus. We compared this activity to independent functional contrasts designed to test competing explanations of sound symbolism, but found no evidence for mediation via language, and only limited evidence for accounts based on multisensory integration and a general magnitude system. Instead, we suggest that the observed incongruency effects are likely to reflect phonological processing and/or multisensory attention. These findings advance our understanding of sound-to-meaning mapping in the brain.

3 citations


Journal ArticleDOI
TL;DR: This article found a positive serial dependence for valence and arousal ratings across consecutive trials, regardless of the stimulus modalities on the two trials.
Abstract: How we perceive the world is not solely determined by what we sense at a given moment in time, but also by what we processed recently. Here we investigated whether such serial dependencies for emotional stimuli transfer from one modality to another. Participants were presented with a random sequence of emotional sounds and images and instructed to rate the valence and arousal of each stimulus (Experiment 1). For both ratings, we conducted an intertrial analysis based on whether the rating on the previous trial was low or high. We found a positive serial dependence for valence and arousal regardless of the stimulus modalities on two consecutive trials. In Experiment 2, we examined whether passively perceiving a stimulus is sufficient to induce a serial dependence: participants were instructed to rate the stimuli only on active trials and not on passive trials. The participants were informed that the active and passive trials were presented in alternating order, so that they were able to prepare for the task. We conducted an intertrial analysis on active trials, based on whether the rating of the stimulus on the previous passive trial (as determined in Experiment 1) was low or high. For both ratings, we again observed positive serial dependencies regardless of the stimulus modality. We conclude that the emotional experience triggered by one stimulus affects the emotional experience of a subsequent stimulus regardless of their sensory modalities, that this occurs in a bottom-up fashion, and that it can be explained by residual activation in the brain's emotional network.
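The intertrial analysis described above can be illustrated with a minimal sketch; this is an assumed reconstruction rather than the authors' code, and the ratings and the size of the injected dependence are hypothetical.

```python
import numpy as np

# Simulate a sequence of valence ratings with a small built-in dependence on
# the previous trial, then split trial n by whether trial n-1 was low or high.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 10, size=500).astype(float)  # hypothetical 1-9 ratings
ratings[1:] += 0.3 * (ratings[:-1] - ratings.mean())   # inject serial dependence

prev, curr = ratings[:-1], ratings[1:]
split = np.median(prev)
after_low = curr[prev <= split].mean()
after_high = curr[prev > split].mean()
print(f"mean rating after low-rated trials:  {after_low:.2f}")
print(f"mean rating after high-rated trials: {after_high:.2f}")
# after_high > after_low indicates a positive serial dependence.
```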

3 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated whether levels of SPD traits were related to audiovisual multisensory temporal processing in a non-clinical sample, revealing two novel findings.
Abstract: Recent literature has suggested that deficits in sensory processing are associated with schizophrenia (SCZ), and more specifically hallucination severity. The DSM-5's shift towards a dimensional approach to diagnostic criteria has led to SCZ and schizotypal personality disorder (SPD) being classified as schizophrenia spectrum disorders. With SCZ and SPD overlapping in aetiology and symptomatology, such as sensory abnormalities, it is important to investigate whether these deficits commonly reported in SCZ extend to non-clinical expressions of SPD. In this study, we investigated whether levels of SPD traits were related to audiovisual multisensory temporal processing in a non-clinical sample, revealing two novel findings. First, less precise multisensory temporal processing was related to higher overall levels of SPD symptomatology. Second, this relationship was specific to the cognitive-perceptual domain of SPD symptomatology, and more specifically, the Unusual Perceptual Experiences and Odd Beliefs or Magical Thinking symptomatology. The current study provides an initial look at the relationship between multisensory temporal processing and schizotypal traits. Additionally, it builds on the previous literature by suggesting that less precise multisensory temporal processing is not exclusive to SCZ but may also be related to non-clinical expressions of schizotypal traits in the general population.

2 citations


Journal ArticleDOI
TL;DR: The authors examined whether cross-modal correspondences (CMCs) modulate perceptual disambiguation by considering the influence of lightness/pitch congruency on the perceptual resolution of the Rubin face/vase (RFV).
Abstract: We examine whether crossmodal correspondences (CMCs) modulate perceptual disambiguation by considering the influence of lightness/pitch congruency on the perceptual resolution of the Rubin face/vase (RFV). We randomly paired a black-and-white RFV (black faces and white vase, or vice versa) with either a high- or low-pitch tone and found that CMC congruency biases the dominant visual percept. The perceptual option that was CMC-congruent with the tone (white/high pitch or black/low pitch) was reported significantly more often than the perceptual option CMC-incongruent with the tone (white/low pitch or black/high pitch). However, the effect was only observed for stimuli presented for longer and not shorter durations, suggesting a perceptual effect rather than a response bias; moreover, we infer an effect on perceptual reversals rather than on initial percepts. We found that the CMC congruency effect for longer-duration stimuli only occurred after several minutes of prior exposure to the stimuli, suggesting that the CMC congruency develops over time. These findings extend the observed effects of CMCs from relatively low-level feature-based effects to higher-level object-based perceptual effects (specifically, resolving ambiguity) and demonstrate that an entirely new category of crossmodal factors (CMC congruency) influences perceptual disambiguation in bistability.

1 citation


Journal ArticleDOI
TL;DR: Findings suggest that embodied visual biological stimuli may modulate visual and tactile multisensory interaction in simultaneity judgements.
Abstract: The concept of embodiment has been used in multiple scenarios, but in cognitive neuroscience it normally refers to the comprehension of the role of one's own body in the cognition of everyday situations and the processes involved in that perception. Multisensory research is gradually embracing the concept of embodiment, but the focus has mostly been concentrated upon audiovisual integration. In two experiments, we evaluated how the likelihood that a perceived stimulus is embodied modulates visuotactile interaction in a simultaneity judgement task. Experiment 1 compared the perception of two visual stimuli with and without biological attributes (hands and geometrical shapes) moving towards each other, while tactile stimuli were delivered to the palm of the participants' hand. Participants judged whether the meeting point of the two periodically-moving visual stimuli was synchronous with the tactile stimulation on their own hands. Results showed that in the hand condition, the Point of Subjective Simultaneity (PSS) was significantly more distant from real synchrony (60 ms after the Stimulus Onset Asynchrony, SOA) than in the geometrical-shape condition (45 ms after SOA). In Experiment 2, we further explored the impact of biological attributes by comparing performance on two visual biological stimuli (hands and ears), which also vary in their motor and visuotactile properties. Results showed that the PSS was equally distant from real synchrony in the hands and ears conditions. Overall, findings suggest that embodied visual biological stimuli may modulate visual and tactile multisensory interaction in simultaneity judgements.
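For readers unfamiliar with the PSS measure reported above, here is a minimal sketch of one common way to estimate it: fitting a Gaussian to the proportion of 'simultaneous' responses across stimulus onset asynchronies. This is not necessarily the authors' procedure, and the data points are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, pss, width):
    """Proportion of 'simultaneous' responses as a function of SOA (ms)."""
    return amp * np.exp(-((soa - pss) ** 2) / (2.0 * width ** 2))

soas = np.array([-200, -100, -50, 0, 50, 100, 200])              # SOA in ms
p_simult = np.array([0.10, 0.35, 0.60, 0.80, 0.90, 0.70, 0.20])  # hypothetical data

popt, _ = curve_fit(gaussian, soas, p_simult, p0=[1.0, 0.0, 100.0])
amp, pss, width = popt
print(f"PSS = {pss:.1f} ms")  # peak of the fitted curve, i.e., the SOA
                              # judged most simultaneous
```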

1 citation


Journal ArticleDOI
TL;DR: In this article, the authors examined the extent to which temporal and spatial properties of sound modulate visual motion processing in spatial localization tasks, and found that sound offset modulated the perceived visual offset location.
Abstract: The present study examines the extent to which temporal and spatial properties of sound modulate visual motion processing in spatial localization tasks. Participants were asked to locate the place at which a moving visual target unexpectedly vanished. Across different tasks, accompanying sounds were factorially varied within subjects as to their onset and offset times and/or positions relative to visual motion. Sound onset had no effect on the localization error. Sound offset modulated the perceived visual offset location, both for temporal and spatial disparities. This modulation did not conform to attraction toward the timing or location of the sounds but, demonstrably in the case of temporal disparities, to bimodal enhancement instead. Favorable indications of a contextual effect of audiovisual presentations on interspersed visual-only trials were also found. The short sound-leading offset asynchrony had benefits equivalent to audiovisual offset synchrony, suggestive of the involvement of early-level mechanisms, constrained by a temporal window, under these conditions. Yet, we tentatively hypothesize that the whole of the results, and how they compare with previous studies, requires the contribution of additional mechanisms, including learning/detection of auditory-visual associations and cross-sensory spread of endogenous attention.

Journal ArticleDOI
TL;DR: In this paper, the authors used foot pedals and required participants to focus on either the hand that was stimulated first (an anatomical bias condition) or the location of the hand that was stimulated first (a spatiotopic bias condition), and showed that a spatiotopic bias can result in either a larger external weight or a smaller internal weight.
Abstract: Exploring the world through touch requires the integration of internal (e.g., anatomical) and external (e.g., spatial) reference frames - you only know what you touch when you know where your hands are in space. The deficit observed in tactile temporal-order judgements when the hands are crossed over the midline provides one tool to explore this integration. We used foot pedals and required participants to focus on either the hand that was stimulated first (an anatomical bias condition) or the location of the hand that was stimulated first (a spatiotopic bias condition). Spatiotopic-based responses produce a larger crossed-hands deficit, presumably by focusing observers on the external reference frame. In contrast, anatomical-based responses focus the observer on the internal reference frame and produce a smaller deficit. This manipulation thus provides evidence that observers can change the relative weight given to each reference frame. We quantify this effect using a probabilistic model that produces a population estimate of the relative weight given to each reference frame. We show that a spatiotopic bias can result in either a larger external weight (Experiment 1) or a smaller internal weight (Experiment 2) and provide an explanation of when each one would occur.
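As a rough illustration of the weighting idea above, here is a minimal toy model, a simplified stand-in for the authors' probabilistic model with hypothetical parameter values, in which temporal-order reports are driven by a weighted combination of an internal (anatomical) and an external (spatial) code that conflict when the hands are crossed.

```python
import numpy as np

def p_correct(w_internal, w_external, crossed, noise=0.5, n=100_000, seed=1):
    """Proportion of correct order reports in a toy two-reference-frame model."""
    rng = np.random.default_rng(seed)
    internal = 1.0                        # anatomical code signals the true order
    external = -1.0 if crossed else 1.0   # spatial code conflicts when crossed
    evidence = (w_internal * internal + w_external * external
                + noise * rng.standard_normal(n))
    return float((evidence > 0).mean())

# A larger external weight predicts a larger crossed-hands deficit.
for w_ext in (0.2, 0.5):
    print(f"w_external={w_ext}: "
          f"uncrossed={p_correct(0.8, w_ext, crossed=False):.2f}, "
          f"crossed={p_correct(0.8, w_ext, crossed=True):.2f}")
```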

Journal ArticleDOI
TL;DR: This article investigated the relationship between autistic traits (Autism-Spectrum Quotient; AQ) and the influence of visual speech on the recognition of Rubin's vase-type speech stimuli with degraded facial speech information.
Abstract: While visual information from facial speech modulates auditory speech perception, it is less influential on audiovisual speech perception among autistic individuals than among typically developed individuals. In this study, we investigated the relationship between autistic traits (Autism-Spectrum Quotient; AQ) and the influence of visual speech on the recognition of Rubin's vase-type speech stimuli with degraded facial speech information. Participants were 31 university students (13 males and 18 females; mean age: 19.2 years, SD: 1.13) who reported normal (or corrected-to-normal) hearing and vision. All participants completed three speech recognition tasks (visual, auditory, and audiovisual stimuli) and the Japanese version of the AQ. The results showed that accuracies of speech recognition for visual (i.e., lip-reading) and auditory stimuli were not significantly related to participants' AQ. In contrast, audiovisual speech perception was less influenced by facial speech information among individuals with high rather than low autistic traits. The weaker influence of visual information on audiovisual speech perception in autism spectrum disorder (ASD) was robust regardless of the clarity of the visual information, suggesting a difficulty in the process of audiovisual integration rather than in the visual processing of facial speech.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the roles of temporal (stimulus alignment) and nontemporal (stimulus complexity) features in crossmodal, specifically auditory over visual, duration perception, and found that temporal alignment had a larger impact on the strength of crossmodal duration percepts than stimulus complexity.
Abstract: Reliable duration perception is an integral aspect of daily life that impacts everyday perception, motor coordination, and the subjective passage of time. The Scalar Expectancy Theory (SET) is a common model that explains how an internal pacemaker, gated by an external stimulus-driven switch, accumulates pulses during sensory events and compares these accumulated pulses to a reference memory duration for subsequent duration estimation. Second-order mechanisms, such as multisensory integration (MSI) and attention, can influence this model and affect duration perception. For instance, diverting attention away from temporal features could delay the switch closure or temporarily open the accumulator, altering pulse accumulation and distorting duration perception. In crossmodal duration perception, auditory signals of unequal duration can induce perceptual compression and expansion of the durations of visual stimuli, presumably via auditory influence on the visual clock. The current project aimed to investigate the roles of temporal (stimulus alignment) and nontemporal (stimulus complexity) features in crossmodal, specifically auditory over visual, duration perception. While temporal alignment had a larger impact on the strength of crossmodal duration percepts than stimulus complexity, both features showcased auditory dominance in the processing of visual duration.
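The pacemaker-accumulator mechanism of SET described above can be illustrated with a toy simulation; this is illustrative only, not the authors' model, and the pulse rate and switch delay are hypothetical.

```python
import numpy as np

def judged_longer(duration_ms, reference_ms, rate_hz=50.0,
                  switch_delay_ms=30.0, rng=None):
    """One trial: does the accumulated pulse count exceed the reference memory?"""
    rng = rng or np.random.default_rng()
    # Attention diverted from time delays switch closure, so early pulses are lost.
    effective_ms = max(duration_ms - switch_delay_ms, 0.0)
    count = rng.poisson(rate_hz * effective_ms / 1000.0)      # accumulated pulses
    reference = rng.poisson(rate_hz * reference_ms / 1000.0)  # reference memory
    return count > reference

rng = np.random.default_rng(2)
trials = [judged_longer(600, 500, rng=rng) for _ in range(1000)]
print(f"P('longer' | 600 ms stimulus vs 500 ms reference) = {np.mean(trials):.2f}")
```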

Journal ArticleDOI
TL;DR: In this article, the authors tested the redundant signals effect (RSE) as a new objective measure of the full body illusion that was designed to directly tap into the multisensory integration underlying the illusion.
Abstract: During a full body illusion (FBI), participants experience a change in self-location towards a body that they see in front of them from a third-person perspective and experience touch to originate from this body. Multisensory integration is thought to underlie this illusion. In the present study we tested the redundant signals effect (RSE) as a new objective measure of the illusion that was designed to directly tap into the multisensory integration underlying the illusion. The illusion was induced by an experimenter who stroked and tapped the participant's shoulder and underarm, while participants perceived the touch on the virtual body in front of them via a head-mounted display. Participants performed a speeded detection task, responding to visual stimuli on the virtual body, to tactile stimuli on the real body, and to combined (multisensory) visual and tactile stimuli. Analysis of the RSE with a race model inequality test indicated that multisensory integration took place in both the synchronous and the asynchronous condition. This surprising finding suggests that simultaneous bodily stimuli from different (visual and tactile) modalities will be transiently integrated into a multisensory representation even when no illusion is induced. Furthermore, this finding suggests that the RSE is not a suitable objective measure of body illusions. Interestingly, however, responses to the unisensory tactile stimuli in the speeded detection task were found to be slower and had a larger variance in the asynchronous condition than in the synchronous condition. The implications of this finding for the literature on body representations are discussed.
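The race model inequality test mentioned above checks, at every time point, whether the multisensory response-time distribution exceeds the bound set by the sum of the unisensory distributions (Miller's inequality); a violation indicates integration beyond statistical facilitation. Here is a minimal sketch with simulated data, an assumed reconstruction rather than the authors' analysis code.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of reaction times evaluated at each time in t."""
    return np.mean(np.asarray(rts)[:, None] <= t, axis=0)

rng = np.random.default_rng(3)
rt_a = rng.normal(320, 40, 200)   # hypothetical auditory-only RTs (ms)
rt_v = rng.normal(340, 40, 200)   # hypothetical visual-only RTs (ms)
rt_av = rng.normal(270, 35, 200)  # hypothetical audiovisual RTs (ms)

t = np.linspace(150, 450, 61)
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # race model (Miller) bound
violation = ecdf(rt_av, t) - bound
print(f"max race-model violation: {violation.max():.3f}")  # > 0 suggests integration
```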

Journal ArticleDOI
TL;DR: In this article, the Schutz-Lipscomb illusion was examined in younger and older adults using a visual point-light representation of a percussive impact event (i.e., a marimbist striking their instrument with a long or short gesture).
Abstract: Previous studies have examined whether audio-visual integration changes in older age, with some studies reporting age-related differences and others reporting no differences. Most studies have either used very basic and ambiguous stimuli (e.g., flash/beep) or highly contextualized, causally related stimuli (e.g., speech). However, few have used tasks that fall somewhere between the extremes of this continuum, such as those that include contextualized, causally related stimuli that are not speech-based; for example, audio-visual impact events. The present study used a paradigm requiring duration estimates and temporal order judgements (TOJs) of audio-visual impact events. Specifically, the Schutz-Lipscomb illusion, in which the perceived duration of a percussive tone is influenced by the length of the visual striking gesture, was examined in younger and older adults. Twenty-one younger and twenty-one older adult participants were presented with a visual point-light representation of a percussive impact event (i.e., a marimbist striking their instrument with a long or short gesture) combined with a percussive auditory tone. Participants completed a tone duration judgement task and a TOJ task. Five audio-visual temporal offsets (-400 to +400 ms) and five spatial offsets (-90° to +90°) were randomly introduced. Results demonstrated that the strength of the illusion did not differ between older and younger adults and was not influenced by spatial or temporal offsets. Older adults showed an 'auditory first bias' when making TOJs. The current findings expand what is known about age-related differences in audio-visual integration by considering them in the context of impact-related events.

Journal ArticleDOI
TL;DR: This article used a variation of the spatial cueing task to examine the effects of unimodal and bimodal attention-orienting primes on target identification latencies and eye gaze movements.
Abstract: The experiment reported here used a variation of the spatial cueing task to examine the effects of unimodal and bimodal attention-orienting primes on target identification latencies and eye gaze movements. The primes were a nonspatial auditory tone and words known to drive attention in a manner consistent with the dominant writing and reading direction, as well as introducing a semantic, temporal bias (past-future) on the horizontal dimension. As expected, past-related (visual) word primes gave rise to shorter response latencies in the left hemifield and future-related words in the right. This congruency effect was differentiated by an asymmetric performance in the right space following future words, driven by the left-to-right trajectory of scanning habits that facilitated search times and eye gaze movements to lateralized targets. The auditory tone prime alone acted as an alarm signal, boosting visual search and reducing response latencies. Bimodal priming, i.e., temporal visual words paired with the auditory tone, impaired performance by delaying visual attention and response times relative to the unimodal visual word condition. We conclude that bimodal primes were no more effective in capturing participants' spatial attention than the unimodal auditory and visual primes. Their contribution to the literature on multisensory integration is discussed.

Journal ArticleDOI
Tsukasa Kimura
TL;DR: In this paper, the author investigated the relationship between the prediction of tactile stimuli facilitated by approaching visual information and spatial coordinates by comparing ERPs across conditions in which the spatial locations of the tactile stimulus and the hand were either consistent (uncrossed hands) or inconsistent (crossed hands).
Abstract: Interaction with other sensory information is important for the prediction of tactile events. Recent studies have reported that the approach of visual information toward the body facilitates the prediction of subsequent tactile events. However, the processing of tactile events is influenced by multiple spatial coordinates, and it remains unclear how this approach effect influences tactile events in different spatial coordinates, i.e., spatial reference frames. We investigated the relationship between the prediction of a tactile stimulus via this approach effect and spatial coordinates by comparing ERPs. Participants were asked to place their arms on a desk and were required to respond to tactile stimuli, which were presented to the left (or right) index finger with a high probability (80%) or to the opposite index finger with a low probability (20%). Before the presentation of each tactile stimulus, visual stimuli sequentially approached the hand to which the high-probability tactile stimulus was presented. In the uncrossed condition, each hand was placed on its corresponding side. In the crossed condition, the hands were crossed and placed on the opposite sides, i.e., the left (right) hand was placed on the right (left) side. Thus, the spatial locations of the tactile stimulus and the hand were consistent in the uncrossed condition and inconsistent in the crossed condition. The results showed that N1 amplitudes elicited by high-probability tactile stimuli decreased only in the uncrossed condition. These results suggest that the prediction of a tactile stimulus facilitated by approaching visual information is influenced by multiple spatial coordinates.

Journal ArticleDOI
TL;DR: In this paper, the crossed-hands deficit was investigated by asking blindfolded participants to visually imagine their crossed arms as uncrossed, which significantly decreased the magnitude of the deficit by bringing information in the two reference frames into alignment.
Abstract: Successful interaction with our environment requires accurate tactile localization. Although we seem to localize tactile stimuli effortlessly, the processes underlying this ability are complex. This is evidenced by the crossed-hands deficit, in which tactile localization performance suffers when the hands are crossed. The deficit results from the conflict between an internal reference frame, based in somatotopic coordinates, and an external reference frame, based in external spatial coordinates. Previous evidence in favour of the integration model employed manipulations of the external reference frame (e.g., blindfolding participants), which reduced the deficit by reducing conflict between the two reference frames. The present study extends this finding by asking blindfolded participants to visually imagine their crossed arms as uncrossed. This imagery manipulation further decreased the magnitude of the crossed-hands deficit by bringing information in the two reference frames into alignment. The manipulation differentially affected males and females, consistent with the previously observed sex difference in this effect: females tend to show a larger crossed-hands deficit than males, and they were also more impacted by the imagery manipulation. Results are discussed in terms of the integration model of the crossed-hands deficit.

Journal ArticleDOI
TL;DR: In this paper, the authors examine processes of audiovisual separation in synaesthesia by using a simultaneity judgement task in which subjects were asked to indicate whether an acoustic and a visual stimulus occurred simultaneously or not.
Abstract: Synaesthesia is a multimodal phenomenon in which the activation of one sensory modality leads to an involuntary additional experience in another sensory modality. To date, normal multisensory processing has hardly been investigated in synaesthetes. In the present study we examine processes of audiovisual separation in synaesthesia by using a simultaneity judgement task. Subjects were asked to indicate whether an acoustic and a visual stimulus occurred simultaneously or not. Stimulus onset asynchronies (SOAs) as well as the temporal order of the stimuli were systematically varied. Our results demonstrate that synaesthetes are better at separating auditory and visual events than control subjects, but only when vision leads.