
Showing papers on "Crossmodal published in 2014"


Journal ArticleDOI
TL;DR: A focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders.

233 citations


Journal ArticleDOI
05 Feb 2014-Neuron
TL;DR: It is shown that visual deprivation leads to improved frequency selectivity as well as increased frequency and intensity discrimination performance of A1 neurons, and that in adults visual deprivation strengthens thalamocortical synapses in A1, but not in primary visual cortex (V1).

107 citations


Journal ArticleDOI
TL;DR: It is shown that numerosity of both auditory and visual sequences is greatly affected by prior adaptation to slow or rapid sequences of events, and this points to a perceptual system that transcends vision and audition to encode an abstract sense of number in space and in time.
Abstract: Much evidence has accumulated to suggest that many animals, including young human infants, possess an abstract sense of approximate quantity, a number sense. Most research has concentrated on apparent numerosity of spatial arrays of dots or other objects, but a truly abstract sense of number should be capable of encoding the numerosity of any set of discrete elements, however displayed and in whatever sensory modality. Here, we use the psychophysical technique of adaptation to study the sense of number for serially presented items. We show that numerosity of both auditory and visual sequences is greatly affected by prior adaptation to slow or rapid sequences of events. The adaptation to visual stimuli was spatially selective (in external, not retinal coordinates), pointing to a sensory rather than cognitive process. However, adaptation generalized across modalities, from auditory to visual and vice versa. Adaptation also generalized across formats: adapting to sequential streams of flashes affected the perceived numerosity of spatial arrays. All these results point to a perceptual system that transcends vision and audition to encode an abstract sense of number in space and in time.
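Adaptation aftereffects of the kind described here are typically quantified as a shift in the point of subjective equality (PSE): after adapting to a rapid stream, a test sequence must contain more events to appear as numerous as the reference. The sketch below is purely illustrative (made-up response proportions and a generic cumulative-Gaussian fit), not the authors' analysis.

```python
# Illustrative sketch (not the authors' code): quantifying a numerosity
# adaptation aftereffect as a shift in the point of subjective equality (PSE).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, slope):
    """Cumulative-Gaussian probability of judging the test 'more numerous'."""
    return norm.cdf(x, loc=pse, scale=slope)

# Hypothetical data: test numerosities and proportion of "test more numerous" responses
test_n = np.array([8, 10, 12, 14, 16, 18, 20])          # reference = 14 events
p_baseline = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 0.99])
p_adapted  = np.array([0.01, 0.04, 0.15, 0.35, 0.60, 0.85, 0.97])  # after adapting to a rapid stream

(pse_base, _), _ = curve_fit(psychometric, test_n, p_baseline, p0=[14, 2])
(pse_adapt, _), _ = curve_fit(psychometric, test_n, p_adapted, p0=[14, 2])

# A positive shift means the test must contain more events to appear equal to the
# reference, i.e. perceived numerosity was reduced by adapting to the rapid stream.
print(f"PSE baseline: {pse_base:.1f}, PSE after adaptation: {pse_adapt:.1f}, "
      f"aftereffect: {pse_adapt - pse_base:+.1f} events")
```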

105 citations


Journal ArticleDOI
TL;DR: The taste-patterns shown by the participants from the four countries tested in the present study are quite different from one another, and these differences cannot easily be attributed merely to whether a country is Eastern or Western, which highlights the impact of cultural background on crossmodal correspondences.
Abstract: We report a cross-cultural study designed to investigate crossmodal correspondences between a variety of visual features (11 colors, 15 shapes, and 2 textures) and the five basic taste terms (bitter, salty, sour, sweet, and umami). A total of 452 participants from China, India, Malaysia, and the USA viewed color patches, shapes, and textures online and had to choose the taste term that best matched the image and then rate their confidence in their choice. Across the four groups of participants, the results revealed a number of crossmodal correspondences between certain colors/shapes and bitter, sour, and sweet tastes. Crossmodal correspondences were also documented between the color white and smooth/rough textures on the one hand and the salt taste on the other. Cross-cultural differences were observed in the correspondences between certain colors, shapes, and one of the textures and the taste terms. The taste-patterns shown by the participants from the four countries tested in the present study are quite different from one another, and these differences cannot easily be attributed merely to whether a country is Eastern or Western. These findings therefore highlight the impact of cultural background on crossmodal correspondences. As such, they raise a number of interesting questions regarding the neural mechanisms underlying crossmodal correspondences.
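Analytically, a claim of cross-cultural differences in these correspondences amounts to testing whether the distribution of taste terms chosen for a given image differs across the four participant groups, for example with a chi-square test on the country-by-taste contingency table. The sketch below uses invented counts purely for illustration and is not the analysis reported in the paper.

```python
# Illustrative sketch (hypothetical counts): testing whether the taste terms
# chosen for one color patch are distributed differently across countries.
import numpy as np
from scipy.stats import chi2_contingency

taste_terms = ["bitter", "salty", "sour", "sweet", "umami"]
countries = ["China", "India", "Malaysia", "USA"]

# Rows = countries, columns = taste terms; counts of participants choosing each
# term for, say, a red color patch. These numbers are made up for illustration.
counts = np.array([
    [10,  5,  8, 70, 20],
    [12,  4, 15, 60, 22],
    [ 8,  6, 10, 75, 14],
    [ 9,  3, 20, 68, 13],
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.3f}")
# A significant result would indicate a cross-cultural difference in the
# color-taste correspondence for this particular color.
```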

99 citations


Journal ArticleDOI
TL;DR: The McGurk effect should be defined as a categorical change in auditory perception induced by incongruent visual speech, resulting in a single percept of hearing something other than what the voice is saying.
Abstract: McGurk and MacDonald (1976) reported a powerful multisensory illusion occurring with audiovisual speech. They recorded a voice articulating a consonant and dubbed it with a face articulating another consonant. Even though the acoustic speech signal was well recognized alone, it was heard as another consonant after dubbing with incongruent visual speech. The illusion has been termed the McGurk effect. It has been replicated many times, and it has sparked an abundance of research. The reason for the great impact is that this is a striking demonstration of multisensory integration. It shows that auditory and visual information is merged into a unified, integrated percept. It is a very useful research tool since the strength of the McGurk effect can be taken to reflect the strength of audiovisual integration. Here I shall make two main claims regarding the definition and interpretation of the McGurk effect since they bear relevance to its use as a measure of multisensory integration. First, the McGurk effect should be defined as a categorical change in auditory perception induced by incongruent visual speech, resulting in a single percept of hearing something other than what the voice is saying. Second, when interpreting the McGurk effect, it is crucial to take into account the perception of the unisensory acoustic and visual stimulus components. There are many variants of the McGurk effect (McGurk and MacDonald, 1976; MacDonald and McGurk, 1978)1. The best-known case is when dubbing a voice saying [b] onto a face articulating [g] results in hearing [d]. This is called the fusion effect since the percept differs from the acoustic and visual components. Many researchers have defined the McGurk effect exclusively as the fusion effect because here integration results in the perception of a third consonant, obviously merging information from audition and vision (van Wassenhove et al., 2007; Keil et al., 2012; Setti et al., 2013). This definition ignores the fact that other incongruent audiovisual stimuli produce different types of percepts. For example, a reverse combination of these consonants, A[g]V[b], is heard as [bg], i.e., the visual and auditory components one after the other. There are other pairings, which result in hearing according to the visual component, e.g., acoustic [b] presented with visual [d] is heard as [d]. Here my first claim is that the definition of the McGurk effect should be that an acoustic utterance is heard as another utterance when presented with discrepant visual articulation. This definition includes all variants of the illusion, and it has been used by MacDonald and McGurk (1978) themselves, as well as by several others (e.g., Rosenblum and Saldana, 1996; Brancazio et al., 2003). The different variants of the McGurk effect represent the outcome of audiovisual integration. When integration takes place, it results in a unified percept, without access to the individual components that contributed to the percept. Thus, when the McGurk effect occurs, the observer has the subjective experience of hearing a certain utterance, even though another utterance is presented acoustically. One challenge with this interpretation of the McGurk effect is that it is impossible to be certain that the responses the observer gives correspond to the actual percepts. The real McGurk effect arises due to multisensory integration, resulting in an altered auditory percept. 
However, if integration does not occur, the observer can perceive the components separately and may choose to respond either according to what he heard or according to what he saw. This is one reason why the fusion effect is so attractive: If the observer reports a percept that differs from both stimulus components, he does not seem to rely on either modality alone, but instead really fuses the information from both. However, this approach does not guarantee a straightforward measure of integration any more than the other variants of the illusion, as is argued below. The second main claim here is that the perception of the acoustic and visual stimulus components has to be taken into account when interpreting the McGurk effect. This issue has been elaborated previously in the extensive work by Massaro and colleagues (Massaro, 1998) and others (Sekiyama and Tohkura, 1991; Green and Norrix, 1997; Jiang and Bernstein, 2011). It is important because the identification accuracy of unisensory components is reflected in audiovisual speech perception. In general, the strength of the McGurk effect is taken to increase when the proportion of responses according to the acoustic component decreases and/or when the proportion of fusion responses increases. That is, the McGurk effect for stimulus A[b]V[g] is considered stronger when fewer B responses and/or more D responses are given. This is often an adequate way to measure the strength of the McGurk effect, provided one keeps in mind that it implicitly assumes that perception of the acoustic and visual components is accurate (or at least constant across the conditions being compared). However, it can lead to erroneous conclusions if this assumption does not hold. The fusion effect provides a prime example of this caveat. It has been interpreted to mean that acoustic and visual information is integrated to produce a novel, intermediate percept. For example, when A[b]V[g] is heard as [d], the percept is thought to emerge due to fusion of the features (for the place of articulation) provided via audition (bilabial) and vision (velar), so that a different, intermediate consonant (alveolar) is perceived (van Wassenhove, 2013). However, McGurk and MacDonald (1976) themselves already wrote that “lip movements for [ga] are frequently misread as [da],” even though, unfortunately, they did not measure speechreading performance. The omission of the unisensory visual condition in the original study is one factor that has contributed to the strong status of the fusion effect as the only real McGurk effect, reflecting true integration. Still, if visual [g] is confused with [d], it is not at all surprising or special if A[b]V[g] is perceived as [d]. To demonstrate the contribution of the unisensory components more explicitly, I will take two examples from my own research, in which fusion-type stimuli produced different percepts depending on the clarity of the visual component. In one study, a McGurk stimulus A[epe]V[eke] was mainly heard as a fusion [ete] (Tiippana et al., 2004). This reflected the fact that in a visual-only identification task, the visual [eke] was confused with [ete] (42% K responses and 45% T responses to visual [eke]).
In another study, a McGurk stimulus A[apa]V[aka] was mainly heard as [aka], and this could be traced back to the fact that in a visual-only identification task, the visual [aka] was clearly distinguishable from [ata], and thus recognized very accurately (100% correct in typical adults; Saalasti et al., 2012; but note the deviant behavior of individuals with Asperger syndrome). Thus, even though the McGurk stimuli were of a fusion type in both studies, their perception differed depending largely on the clarity of the visual components. These findings underscore the importance of knowing the perceptual qualities of the unisensory stimuli before drawing conclusions about multisensory integration. Exactly how to take the properties of the unisensory components into account in multisensory perception of speech is beyond the scope of this paper. Addressing this issue in detail requires carefully designed experimental studies (Bertelson et al., 2003; Alsius et al., 2005), computational modeling (Massaro, 1998; Schwartz, 2010), and investigation of the underlying brain mechanisms (Sams et al., 1991; Skipper et al., 2007). However, the main guideline is that unisensory perception of stimulus components is reflected in multisensory perception of the whole (Ernst and Bulthoff, 2004). During experiments, when the task is to report what was heard, the observer reports the conscious auditory percept evoked by the audiovisual stimulus. If there is no multisensory integration or interaction, the percept is identical for the audiovisual stimulus and the auditory component presented alone. If there is audiovisual integration, the conscious auditory percept changes. The extent to which visual input influences the percept depends on how coherent and reliable the information provided by each modality is. Coherent information is integrated and weighted, for example, according to the reliability of each modality, which is reflected in unisensory discriminability. This perceptual process is the same for audiovisual speech, be it natural, congruent audiovisual speech or artificial, incongruent McGurk speech stimuli. The outcome is the conscious auditory percept. Depending on the relative weighting of audition and vision, the outcome for McGurk stimuli can range from hearing according to the acoustic component (when audition is more reliable than vision) to fusion and combination percepts (when both modalities are informative to some extent) to hearing according to the visual component (when vision is more reliable than audition). Congruent audiovisual speech is treated no differently, showing visual influence when the auditory reliability decreases. The different variants of the McGurk effect are all results of this same perceptual process and reflect audiovisual integration. The McGurk effect is an excellent tool to investigate multisensory integration in speech perception. The main messages of this opinion paper are, first, that the McGurk effect should be defined as a change in auditory perception due to incongruent visual speech, so that observers hear a speech sound other than the one the voice uttered, and second, that the perceptual properties of the acoustic and visual stimulus components should be taken into account when interpreting the McGurk effect as reflecting integration.
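The final paragraph's claim that coherent auditory and visual information is integrated and weighted according to the reliability of each modality is the reliability-weighted (maximum-likelihood) cue-combination rule associated with Ernst and Bulthoff (2004). The snippet below is a minimal sketch of that general rule with arbitrary numbers on an abstract auditory-to-visual axis; it is not a model of the McGurk stimuli themselves.

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue combination,
# the general scheme alluded to in the text (Ernst and Bulthoff, 2004). Values
# are arbitrary and only illustrate how relative reliability shifts the percept.
def combine(est_a, var_a, est_v, var_v):
    """Return the combined estimate and its variance for two independent cues."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # auditory weight (inverse-variance)
    w_v = 1 - w_a                                 # visual weight
    est = w_a * est_a + w_v * est_v
    var = 1 / (1 / var_a + 1 / var_v)             # combined variance is lower than either cue's
    return est, var

# Example: place the percept on an abstract axis from 0 (auditory-consistent)
# to 1 (visual-consistent). Reliable audition pulls the percept toward 0;
# degrading the auditory signal (larger variance) pulls it toward the visual cue.
print(combine(est_a=0.0, var_a=0.2, est_v=1.0, var_v=0.2))  # balanced cues -> ~0.5
print(combine(est_a=0.0, var_a=1.0, est_v=1.0, var_v=0.2))  # noisy audio -> closer to 1
```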

88 citations


Journal ArticleDOI
TL;DR: The integrated framework proposed here has the potential to impact on the way rehabilitation programs for sensory recovery are carried out, with the promising prospect of eventually improving their final outcomes.

86 citations


Journal ArticleDOI
TL;DR: Substantial support for multisensory deficits in dyslexia is found, and the case is made that, to fully understand its neurological basis, it will be necessary to thoroughly probe the integrity of auditory-visual integration mechanisms.

85 citations


Journal ArticleDOI
TL;DR: The results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices.
Abstract: The integration of emotional information from the face and voice of other persons is known to be mediated by a number of “multisensory” cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to face or voice or rather by multimodal neurons receiving input from the two modalities is not fully clear. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli included in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion—although there was a greater weighting of face information—and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, fMRI signal in the right pSTS was reduced in response to a stimulus in which facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices.
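The carry-over design described above implies a simple analytic idea: the predictor for crossmodal adaptation is how far the current stimulus's facial emotion lies from the preceding stimulus's vocal emotion on the angry-happy morph continuum, with greater similarity predicting a reduced pSTS response. The sketch below illustrates only that idea with hypothetical morph values; it is not the study's actual design matrix or analysis code.

```python
# Hedged sketch of the carry-over logic: the crossmodal adaptation predictor is
# the distance between the current trial's FACE emotion and the PRECEDING trial's
# VOICE emotion, both expressed on a 0 = angry ... 1 = happy morph axis.
# All values and names here are illustrative only.
import numpy as np

face_morph  = np.array([0.1, 0.8, 0.5, 0.9, 0.2, 0.6])   # current trial, facial emotion
voice_morph = np.array([0.2, 0.7, 0.4, 0.8, 0.3, 0.5])   # current trial, vocal emotion

# Crossmodal carry-over: similarity of this trial's face to the previous trial's voice.
crossmodal_distance = np.abs(face_morph[1:] - voice_morph[:-1])

# Under crossmodal adaptation, smaller distances (greater similarity) should be
# associated with a reduced pSTS response; a vector like this would enter the
# fMRI design matrix as a parametric modulator.
print(crossmodal_distance)
```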

85 citations


Journal ArticleDOI
TL;DR: The most commonly cited effect that demonstrates perceptual binding in audiovisual speech perception is the McGurk effect (McGurk and MacDonald, 1976), where a listener hears a speaker utter the syllable "ba" and sees the speaker utter the syllable "ga."
Abstract: Speech perception is an inherently multisensory process. When having a face-to-face conversation, a listener not only hears what a speaker is saying, but also sees the articulatory gestures that accompany those sounds. Speech signals in visual and auditory modalities provide complementary information to the listener (Kavanagh and Mattingly, 1974), and when both are perceived in unison, behavioral gains in speech perception are observed (Sumby and Pollack, 1954). Notably, this benefit is accentuated when speech is perceived in a noisy environment (Sumby and Pollack, 1954). To achieve a behavioral gain from multisensory processing of speech, however, the auditory and visual signals must be perceptually bound into a single, unified percept. The most commonly cited effect that demonstrates perceptual binding in audiovisual speech perception is the McGurk effect (McGurk and MacDonald, 1976), where a listener hears a speaker utter the syllable “ba,” and sees the speaker utter the syllable “ga.” When these two speech signals are perceptually bound, the listener perceives the speaker as having said “da” or “tha,” syllables that are not contained in either of the unisensory signals, resulting in a perceptual binding, or integration, of the speech signals (Calvert and Thesen, 2004).

75 citations


Journal ArticleDOI
TL;DR: Findings support the idea that crossmodal correspondences underlie sound symbolism in spoken language.

60 citations


Journal ArticleDOI
TL;DR: The results demonstrate that, during viewing of complex multisensory stimuli, activity in sensory areas reflects both stimulus‐driven signals and their efficacy for spatial orienting; and that the posterior parietal cortex combines spatial information about the visual and the auditory modality.
Abstract: Previous studies on crossmodal spatial orienting typically used simple and stereotyped stimuli in the absence of any meaningful context. This study combined computational models, behavioural measures and functional magnetic resonance imaging to investigate audiovisual spatial interactions in naturalistic settings. We created short videos portraying everyday life situations that included a lateralised visual event and a co-occurring sound, either on the same or on the opposite side of space. Subjects viewed the videos with or without eye-movements allowed (overt or covert orienting). For each video, visual and auditory saliency maps were used to index the strength of stimulus-driven signals, and eye-movements were used as a measure of the efficacy of the audiovisual events for spatial orienting. Results showed that visual salience modulated activity in higher-order visual areas, whereas auditory salience modulated activity in the superior temporal cortex. Auditory salience modulated activity also in the posterior parietal cortex, but only when audiovisual stimuli occurred on the same side of space (multisensory spatial congruence). Orienting efficacy affected activity in the visual cortex, within the same regions modulated by visual salience. These patterns of activation were comparable in overt and covert orienting conditions. Our results demonstrate that, during viewing of complex multisensory stimuli, activity in sensory areas reflects both stimulus-driven signals and their efficacy for spatial orienting; and that the posterior parietal cortex combines spatial information about the visual and the auditory modality.
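The abstract mentions computational saliency maps used to index the strength of stimulus-driven signals at the location of the lateralised event. As a rough illustration of that idea only (this is not the saliency model used in the study), bottom-up visual salience can be approximated by centre-surround contrast and then averaged within the event's region of interest:

```python
# Rough illustration (not the model used in the study): indexing the
# stimulus-driven strength of a lateralised visual event as the mean
# centre-surround contrast ("salience") inside its region of interest.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(frame):
    """Centre-surround contrast of a grayscale frame (2-D float array)."""
    center = gaussian_filter(frame, sigma=2)
    surround = gaussian_filter(frame, sigma=16)
    return np.abs(center - surround)

def event_salience(frame, roi_mask):
    """Mean salience inside the region containing the lateralised event."""
    sal = saliency_map(frame)
    return sal[roi_mask].mean()

# Usage with a hypothetical 360x640 frame and a left-hemifield event region:
frame = np.random.rand(360, 640)
roi = np.zeros_like(frame, dtype=bool)
roi[100:260, 40:280] = True                 # left side of the frame
print(f"event salience index: {event_salience(frame, roi):.3f}")
```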

Journal ArticleDOI
TL;DR: Assessing whether the hedonic congruence between odor and sound stimuli can modulate the perception of odor intensity, pleasantness, and quality in untrained participants reveals that broadband white noise actually had a more pronounced effect on participants’ odor ratings than either the consonant or dissonant musical selections.
Abstract: Previous research has demonstrated that ratings of the perceived pleasantness and quality of odors can be modulated by auditory stimuli presented at around the same time. Here, we extend these results by assessing whether the hedonic congruence between odor and sound stimuli can modulate the perception of odor intensity, pleasantness, and quality in untrained participants. Unexpectedly, our results reveal that broadband white noise, which was rated as unpleasant in a follow-up experiment, actually had a more pronounced effect on participants’ odor ratings than either the consonant or dissonant musical selections. In particular, participants rated the six smells used as being less pleasant and less sweet when they happened to be listening to white noise, as compared to any one of the other music conditions. What is more, these results also add evidence for the existence of a close relationship between an odor’s hedonic character and the perception of odor quality. So, for example, independent of the sound condition, pleasant odors were rated as sweeter, less dry, and brighter than the unpleasant odors. These results are discussed in terms of their implications for the understanding of crossmodal correspondences between olfactory and auditory stimuli.

Journal ArticleDOI
TL;DR: It is proposed that frequency-specific modulations in local oscillatory power and in long-range functional connectivity may serve as neural mechanisms underlying the crossmodal shaping of pain.

Journal ArticleDOI
TL;DR: The link between awareness and binding advocated for visual information processing needs to be revised for multisensory cases as there is not a perfect match between these conditions and those in which multisensory integration and binding occur.
Abstract: Given that multiple senses are often stimulated at the same time, perceptual awareness is most likely to take place in multisensory situations. However, theories of awareness are based on studies and models established for a single sense (mostly vision). Here, we consider the methodological and theoretical challenges raised by taking a multisensory perspective on perceptual awareness. First, we consider how well tasks designed to study unisensory awareness perform when used in multisensory settings, stressing that studies using binocular rivalry, bistable figure perception, continuous flash suppression, the attentional blink, repetition blindness and backward masking can demonstrate multisensory influences on unisensory awareness, but fall short of tackling multisensory awareness directly. Studies interested in the latter phenomenon rely on a method of subjective contrast and can, at best, delineate conditions under which individuals report experiencing a multisensory object or two unisensory objects. As there is not a perfect match between these conditions and those in which multisensory integration and binding occur, the link between awareness and binding advocated for visual information processing needs to be revised for multisensory cases. These challenges point at the need to question the very idea of multisensory awareness.

Journal ArticleDOI
20 May 2014
TL;DR: The authors argue that the category of cross-modal correspondences best captures the core of the phenomenon that is at stake and explain why the use of such cross-sensory pairings by chefs, food companies, marketers, and designers can be particularly effective.
Abstract: Does it make sense to talk about a round wine, or a sharp taste? Many chefs and wine writers certainly seem to think that it does. The historical precedent of ‘the man who tasted shapes’, as well as recent claims that the chemical senses could present us with forms of universal synaesthesia (Stevenson and Tomiczek 2007), make it natural to wonder whether there might not be a widespread form of synaesthesia underlying these surprising reports. Alternatively, however, they might instead reflect nothing more than the metaphorical use of language (cf. White 2008). Intriguingly, a new field of experimental research is now starting to demonstrate many examples where tastes, aromas, flavours, and the oral-somatosensory attributes of foods and beverages are reliably matched to particular shapes. These crossmodal matches are thus both ubiquitous and robust across the general population, or at least within the cultures in which they have been tested to date. After discussing a number of these examples of the crossmodal matching of shape (or shape attributes such as angularity) to food and drink stimuli, we argue that the category of crossmodal correspondences best captures the core of the phenomenon that is at stake. What is more, they may help to explain why the use of such cross-sensory pairings by chefs, food companies, marketers, and designers can be particularly effective. The focus on this specific type of cross-sensory matching demonstrates that it is a much more robust empirical phenomenon than it might at first seem, both because of its extensive use out there in the marketplace, and also because of the theoretical issues it raises about the differences between several plausible alternative explanations of crossmodal associations.

Journal ArticleDOI
TL;DR: This review argues that sensory substitution does indeed show properties of synaesthesia and proposes two testable predictions: firstly that, in an expert user of a sensory substitution device, the substituting modality should not be lost, and secondly that stimulation within the substitution modality, but by means other than a sensory replacement device, should still produce sensation in the normally substituted modality.

Journal ArticleDOI
TL;DR: Although both auditory and visual kinematic cues contribute significantly to the perception of overall expressivity, the effect of visual kinematic cues appears to be somewhat stronger; the results also provide preliminary evidence of crossmodal interactions in the perception of auditory and visual expressivity.
Abstract: In musical performance, bodily gestures play an important role in communicating expressive intentions to audiences. Although previous studies have demonstrated that visual information can have an effect on the perceived expressivity of musical performances, the investigation of audiovisual interactions has been held back by the technical difficulties associated with the generation of controlled, mismatching stimuli. With the present study, we aimed to address this issue by utilizing a novel method in order to generate controlled, balanced stimuli that comprised both matching and mismatching bimodal combinations of different expressive intentions. The aim of Experiment 1 was to investigate the relative contributions of auditory and visual kinematic cues to the perceived expressivity of piano performances, and in Experiment 2 we explored possible crossmodal interactions in the perception of auditory and visual expressivity. The results revealed that although both auditory and visual kinematic cues contribute significantly to the perception of overall expressivity, the effect of visual kinematic cues appears to be somewhat stronger. These results also provide preliminary evidence of crossmodal interactions in the perception of auditory and visual expressivity. In certain performance conditions, visual cues had an effect on the ratings of auditory expressivity, and auditory cues had a small effect on the ratings of visual expressivity.

BookDOI
01 Oct 2014
TL;DR: This edited volume presents a crossmodal perspective on sensory substitution alongside chapters on new models of perception, multimodal perception, the non-visual senses (including smell and taste), and the dominance of the visual.
Abstract: Contents of the volume:
About the Editors
About the Contributors
New Models of Perception
1. Perceiving as Predicting (Andy Clark)
2. Active Perception and the Representation of Space (Mohan Matthen)
3. Distinguishing Top-Down From Bottom-Up Effects (Nicholas Shea)
Multimodal Perception
4. Is Consciousness Multisensory? (Charles Spence and Tim Bayne)
5. Not all perceptual experience is modality specific (Casey O'Callaghan)
6. Is audio-visual perception 'amodal' or 'crossmodal'? (Matthew Nudds)
The Non-Visual Senses
7. What Counts as Touch? (Matthew Fulkerson)
8. Sound stimulants: defending the stable disposition view (John Kulvicki)
9. Olfactory Objects (Clare Batty)
10. Confusing Tastes with Flavours (Charles Spence, Malika Auvray, and Barry Smith)
Sensing Ourselves
11. Inner Sense (Vincent Picciuto and Peter Carruthers)
New Issues Concerning Vision
12. The Diversity of Human Visual Experience (Howard C. Hughes, Robert Fendrich and Sarah E. Streeter)
13. A crossmodal perspective on sensory substitution (Ophelia Deroy and Malika Auvray)
14. The dominance of the visual (Dustin Stokes and Stephen Biggs)
15. More Color Science for Philosophers (C. L. Hardin)
Relating the Modalities
16. Morphing Senses (Erik Myin, Ed Cooke, and Karim Zahidi)
17. A Methodological Molyneux Question: Sensory Substitution, Plasticity and the Unification of Perceptual Theory (Mazviita Chirimuuta and Mark Paterson)
18. The Space of Sensory Modalities (Fiona Macpherson)
19. Distinguishing the Commonsense Senses (Roberto Casati, Jerome Dokic, and Francois Le Corre)
Index

Journal ArticleDOI
TL;DR: In a US sample, there is a crossmodal correspondence between the visual and trigeminal senses that can influence the perception of spiciness: red was associated with significantly higher ratings of expected spice than blue, and darker reds were expected to be spicier than lighter reds.
Abstract: Color cues can influence the experience of flavor, both by influencing identification and perceived intensity of foods. Previous research has largely focused on the crossmodal influence of vision upon taste or olfactory cues. It is plausible that color cues could also affect perceived trigeminal sensation; these studies demonstrate a crossmodal influence of color on piquancy. In our first two experiments, participants rated the spiciness of images of salsas that were adjusted to vary in color and intensity. We found that red was associated with significantly higher ratings of expected spice than blue, and that darker reds were expected to be spicier than lighter reds. In our third experiment, participants tasted and then rated the spiciness of each of four salsas (with two levels of color and of piquancy) when sighted and when blindfolded. Spiciness ratings were unaffected by differing colors when the salsa was mild, but when the piquancy was increased, a lack of increase in color corresponded to a depressed spiciness. These results can be explained using a model of assimilation and contrast. Taken together, our findings show that in our US sample, there is a crossmodal correspondence between visual and trigeminal senses that can influence perception of spiciness.

DOI
01 Apr 2014
TL;DR: Sensory features (i.e., unusual behavioral responses to sensory stimuli) are highly prevalent and heterogeneous across individuals with ASD, and from a developmental perspective, sensory response patterns are associated with and have cascading effects on other core symptoms in ASD.
Abstract: Sensory features (i.e., unusual behavioral responses to sensory stimuli) are highly prevalent and heterogeneous across individuals with ASD. From a developmental perspective, sensory response patterns are associated with and have cascading effects on other core symptoms in ASD. Burgeoning research using novel technologies is beginning to uncover the nature and pathogenesis of these features. As sensory issues often demonstrate a functional impact on adaptive behavior, social participation, and well-being, it is important that the evidence base for clinical interventions continues to grow. Keywords: assessment and intervention; development; enhanced perception; functional impact; hyperresponsiveness; hyporesponsiveness; repetitions; seeking behaviors; sensory features; sensory integration; sensory interests; sensory processing

Journal ArticleDOI
TL;DR: The study tested the "Bouba-Kiki" effect in the auditory-haptic modalities, using 2D cut-outs and 3D models based on Köhler's original drawings, suggesting that, in the absence of a direct visual stimulus, visual imagery plays a role in crossmodal integration.

Journal ArticleDOI
TL;DR: This study is the first to directly show that crossmodal brain activity is specifically related to connectivity in the AF, supporting its role in phoneme–grapheme integration ability and helps to define an interdependent neural network for reading-related integration.
Abstract: Crossmodal integration of auditory and visual information, such as phonemes and graphemes, is a critical skill for fluent reading. Previous work has demonstrated that white matter connectivity along the arcuate fasciculus (AF) is predicted by reading skill and that crossmodal processing particularly activates the posterior STS (pSTS). However, the relationship between this crossmodal activation and white matter integrity has not been previously reported. We investigated the interrelationship of crossmodal integration, both in terms of behavioral performance and pSTS activity, with AF tract coherence using a rhyme judgment task in a group of 47 children with a range of reading abilities. We demonstrate that both response accuracy and pSTS activity for crossmodal auditory-visual rhyme judgments were predictive of fractional anisotropy along the left AF. Unimodal auditory-only or visual-only pSTS activity was not significantly related to AF connectivity. Furthermore, activity in other reading-related ROIs did not show the same AV-only relationship with AF coherence, and AV pSTS activity was not related to connectivity along other language-related tracts. This study is the first to directly show that crossmodal brain activity is specifically related to connectivity in the AF, supporting its role in phoneme-grapheme integration ability. More generally, this study helps to define an interdependent neural network for reading-related integration.
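The central brain-behaviour result is a between-subjects prediction: crossmodal (audiovisual) pSTS activity relates to fractional anisotropy (FA) along the left AF, while unimodal activity does not. The sketch below shows the general form of such a regression with simulated values; it is not the study's data or its exact statistical model, and variable names such as pSTS_av are hypothetical.

```python
# Minimal sketch of the kind of brain-behaviour regression reported here:
# does crossmodal (audiovisual) pSTS activation predict fractional anisotropy
# (FA) along the left arcuate fasciculus across children? Values are simulated.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n_children = 47
pSTS_av = rng.normal(0.5, 0.2, n_children)                             # AV rhyme-task activation (a.u.)
fa_left_af = 0.45 + 0.08 * pSTS_av + rng.normal(0, 0.02, n_children)   # simulated FA values

fit = linregress(pSTS_av, fa_left_af)
print(f"slope = {fit.slope:.3f}, r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
# In the study, the analogous relationship held for crossmodal (AV) activity
# but not for auditory-only or visual-only activity.
```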

Journal ArticleDOI
TL;DR: The data reveal that animals can compare the arrival time of simultaneously emitted multimodal cues to obtain information on relative distance to a source, and it is speculated that communicative benefits from crossmodal comparison may have been an important driver of the evolution of elaborate multi-modal displays.

Journal ArticleDOI
TL;DR: A pattern of functional dissociations suggests complex, multidimensional contributions of the PFC and its subregions to crossmodal cognition.
Abstract: In the present study, we assessed the involvement of the prefrontal cortex (PFC) in the ability of rats to perform crossmodal (tactile-to-visual) object recognition tasks. We tested rats with 3 different types of bilateral excitotoxic lesions: (1) Large PFC lesions, including the medial PFC (mPFC) and ventral and lateral regions of the orbitofrontal cortex (OFC); (2) selective mPFC lesions; and (3) selective OFC lesions. Rats were tested on 2 versions of crossmodal object recognition (CMOR): (1) The original CMOR task, which uses a tactile-only sample phase and a visual-only choice phase; and (2) a "multimodal pre-exposure" version (PE/CMOR), in which simultaneous pre-exposure to the tactile and visual features of an object facilitates CMOR performance over longer memory delays. Inclusive PFC lesions disrupted performance on both versions of CMOR, whereas selective mPFC damage had no effect. Lesions limited to the OFC caused delay-dependent deficits on the CMOR task, but failed to reverse the enhancement produced by multimodal object pre-exposure. This pattern of functional dissociations suggests complex, multidimensional contributions of the PFC and its subregions to crossmodal cognition.

Journal ArticleDOI
TL;DR: Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks, but attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set.

Journal ArticleDOI
TL;DR: A collection of reviews assessing the current state of neuroscience research on sensory substitution, visual rehabilitation, and multisensory processes is described.

Journal ArticleDOI
TL;DR: It is suggested that the fusiform gyrus adapts to input of a new modality even in the mature brain and thus demonstrates an adult type of crossmodal plasticity.

Journal ArticleDOI
TL;DR: Using the same procedure in a unimodal context showed that rapid recalibration does not occur in audition following exposure to asynchronous tones of different frequencies, nor in vision following exposure to asynchronous lines differing in colour and orientation, which suggests that rapid recalibration is in essence an inter-sensory temporal process.

Journal ArticleDOI
TL;DR: It is found that prolonged training with a mechanical hand capable of distal hand movements and providing sensory feedback induces a pattern of interference between visual stimuli close to the prosthesis and touches on the body, which is not observed after brief training.

Journal ArticleDOI
11 Jun 2014-PLOS ONE
TL;DR: The results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing.
Abstract: Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss and in particular if no experience is achieved within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this aim, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm, in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be "rewired" through auditory reafferentation.
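The multisensory gain in the redundancy paradigm described above is commonly quantified as the speed-up of responses to the redundant audio-tactile signal relative to the faster of the two unimodal conditions, sometimes together with a race-model (Miller) inequality test. The sketch below uses one common definition and invented reaction times; the paper's exact formula is not given here, so treat this only as an illustration of the logic.

```python
# Illustrative sketch (one common definition, invented reaction times): computing
# multisensory gain in a redundant-signals paradigm as the relative speed-up of
# audio-tactile responses over the faster unimodal condition.
import numpy as np

rng = np.random.default_rng(1)
rt_auditory  = rng.normal(420, 60, 200)    # ms, unimodal auditory
rt_tactile   = rng.normal(400, 60, 200)    # ms, unimodal tactile
rt_redundant = rng.normal(370, 55, 200)    # ms, audio-tactile redundant signal

fastest_unimodal = min(rt_auditory.mean(), rt_tactile.mean())
gain = (fastest_unimodal - rt_redundant.mean()) / fastest_unimodal
print(f"multisensory gain: {100 * gain:.1f}% speed-up over the faster unimodal condition")

# Race-model (Miller) inequality check at a few time points: if the redundant-signal
# CDF exceeds the sum of the unimodal CDFs, probability summation alone cannot
# explain the speed-up, pointing to genuine audio-tactile integration.
for t in np.percentile(rt_redundant, [10, 25, 50]):
    p_red = np.mean(rt_redundant <= t)
    p_race_bound = min(1.0, np.mean(rt_auditory <= t) + np.mean(rt_tactile <= t))
    print(f"t = {t:.0f} ms: P_redundant = {p_red:.2f}, race-model bound = {p_race_bound:.2f}")
```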