
Showing papers on "Crossmodal" published in 2015


Journal ArticleDOI
TL;DR: A conceptualization based on Bayesian causal inference is proposed for addressing how the authors' nervous system could infer whether an object belongs to their own body, using multisensory, sensorimotor, and semantic information, and how this can account for several experimental findings.
Abstract: Which is my body and how do I distinguish it from the bodies of others, or from objects in the surrounding environment? The perception of our own body and more particularly our sense of body ownership is taken for granted. Nevertheless, experimental findings from body ownership illusions (BOIs) show that under specific multisensory conditions, we can experience artificial body parts or fake bodies as our own body parts or body, respectively. The aim of the present paper is to discuss how and why BOIs are induced. We review several experimental findings concerning the spatial, temporal, and semantic principles of crossmodal stimuli that have been applied to induce BOIs. On the basis of these principles, we discuss theoretical approaches concerning the underlying mechanism of BOIs. We propose a conceptualization based on Bayesian causal inference for addressing how our nervous system could infer whether an object belongs to our own body, using multisensory, sensorimotor, and semantic information, and we discuss how this can account for several experimental findings. Finally, we point to neural network models as an implementational framework within which the computational problem behind BOIs could be addressed in the future.

362 citations
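The Bayesian causal inference proposal summarized in the abstract above can be illustrated with a toy Gaussian model in the spirit of Körding et al. (2007): the observer weighs the probability that a visual and a proprioceptive position signal share a single cause (e.g., "that hand is mine") against the probability that they arise from separate causes. The sketch below is only a minimal illustration of that idea; the function name, default parameters, and priors are illustrative assumptions, not the model specified in the paper.

```python
import numpy as np

def p_common_cause(x_vis, x_prop, sig_vis=1.0, sig_prop=2.0,
                   sig_prior=10.0, mu_prior=0.0, prior_common=0.5):
    """Posterior probability that a visual and a proprioceptive position
    signal share one cause, using the standard Gaussian causal-inference
    formulation. All default values are illustrative, not from the paper."""
    # Likelihood of both signals given ONE common source, with the unknown
    # source position integrated out (closed form for Gaussians).
    var1 = (sig_vis**2 * sig_prop**2 + sig_vis**2 * sig_prior**2
            + sig_prop**2 * sig_prior**2)
    num1 = ((x_vis - x_prop)**2 * sig_prior**2
            + (x_vis - mu_prior)**2 * sig_prop**2
            + (x_prop - mu_prior)**2 * sig_vis**2)
    like_common = np.exp(-0.5 * num1 / var1) / (2 * np.pi * np.sqrt(var1))

    # Likelihood of the signals given TWO independent sources.
    var_v = sig_vis**2 + sig_prior**2
    var_p = sig_prop**2 + sig_prior**2
    num2 = (x_vis - mu_prior)**2 / var_v + (x_prop - mu_prior)**2 / var_p
    like_indep = np.exp(-0.5 * num2) / (2 * np.pi * np.sqrt(var_v * var_p))

    # Bayes' rule over the two causal structures.
    post = like_common * prior_common
    return post / (post + like_indep * (1 - prior_common))

# A small visuo-proprioceptive discrepancy favors the common-cause
# ("my body") interpretation; a large one favors separate causes.
print(p_common_cause(x_vis=1.0, x_prop=2.0))    # high posterior
print(p_common_cause(x_vis=1.0, x_prop=15.0))   # low posterior
```

In the paper's framing, temporal and semantic congruence would enter as additional cues alongside the purely spatial term shown here.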


Journal ArticleDOI
TL;DR: A positive correlation between the individual alpha frequency (IAF) peak and the size of the temporal window of the illusion was found, suggesting that alpha oscillations might represent the temporal unit of visual processing that cyclically gates perception and the neurophysiological substrate promoting audio-visual interactions.

327 citations


Journal ArticleDOI
08 Jul 2015 - Flavour
TL;DR: Some of the innovative ways in which chefs, culinary artists, designers, and marketers are taking the latest insights from research in this area as inspiration for their own creative endeavours are looked at.
Abstract: Can basic tastes, such as sweet, sour, bitter, salty, and possibly also umami, be conveyed by means of colour? If so, how should we understand the relationship between colours and tastes: Is it universal or relative, innate or acquired, unidirectional or bidirectional? Here, we review the growing body of scientific research showing that people systematically associate specific colours with particular tastes. We highlight how these widely shared bidirectional crossmodal correspondences generalize across cultures and stress their difference from synaesthesia (with which they are often confused). The various explanations that have been put forward to account for such crossmodal mappings are then critically evaluated. Finally, we go on to look at some of the innovative ways in which chefs, culinary artists, designers, and marketers are taking—or could potentially push further—the latest insights from research in this area as inspiration for their own creative endeavours.

146 citations


Journal ArticleDOI
TL;DR: This paper reported a series of four experiments designed to assess what drives people's matching of visual roundness/angularity to both "basic" taste names and actual tastes, and found that people consistently matched sweetness to roundness.

118 citations


Journal ArticleDOI
TL;DR: Two types of anatomical pathways are described and quantified that possibly underlie short-latency multisensory integration processes in the primary auditory, somatosensory, and visual cortices of Mongolian gerbils; within this network, V1 provides the most pronounced feedforward-type outputs and receives the most pronounced feedback-type inputs.
Abstract: Multisensory integration recruits not only higher-level association cortex but also low-level and even primary sensory cortices. Here, we describe and quantify two types of anatomical pathways, a thalamocortical and a corticocortical one, that possibly underlie short-latency multisensory integration processes in the primary auditory (A1), somatosensory (S1), and visual cortex (V1). Results were obtained from Mongolian gerbils, a common model species in neuroscience, using simultaneous injections of different retrograde tracers into A1, S1, and V1. Several auditory, visual, and somatosensory thalamic nuclei project not only to the primary sensory area of their own (matched) modality but also to areas of other (non-matched) modalities. The crossmodal output ratios of these nuclei, belonging to both core and non-core sensory pathways, vary between 0.4 and 63.5% of the labeled neurons. Approximately 0.3% of the sensory thalamic input to A1, 5.0% to S1, and 2.1% to V1 arise from non-matched nuclei. V1 has the most crossmodal corticocortical connections, projecting most strongly to S1 and receiving a similar amount of moderate inputs from A1 and S1. S1 is mainly interconnected with V1. A1 has slightly more projections to V1 than to S1, but receives only faint inputs from them. Concerning the layer-specific distribution of the retrogradely labeled somata in cortex, V1 provides the most pronounced feedforward-type outputs and receives (together with S1) the most pronounced feedback-type inputs. In contrast, A1 has the most pronounced feedback-type outputs and feedforward-type inputs in this network. Functionally, the different sets of thalamocortical and corticocortical connections could underlie distinctive types of integration mechanisms for different modality pairings.

93 citations


Journal ArticleDOI
TL;DR: This review synthesizes evidence across sensory modalities to report emerging themes, including the systems' flexibility to emphasize different aspects of a sensory stimulus depending on its predictive features and the ability of different forms of learning to produce similar plasticity in sensory structures.
Abstract: Historically, the body's sensory systems have been presumed to provide the brain with raw information about the external environment, which the brain must interpret to select a behavioral response. Consequently, studies of the neurobiology of learning and memory have focused on circuitry that interfaces between sensory inputs and behavioral outputs, such as the amygdala and cerebellum. However, evidence is accumulating that some forms of learning can in fact drive stimulus-specific changes very early in sensory systems, including not only primary sensory cortices but also precortical structures and even the peripheral sensory organs themselves. This review synthesizes evidence across sensory modalities to report emerging themes, including the systems' flexibility to emphasize different aspects of a sensory stimulus depending on its predictive features and ability of different forms of learning to produce similar plasticity in sensory structures. Potential functions of this learning-induced neuroplasticity are discussed in relation to the challenges faced by sensory systems in changing environments, and evidence for absolute changes in sensory ability is considered. We also emphasize that this plasticity may serve important nonsensory functions, including balancing metabolic load, regulating attentional focus, and facilitating downstream neuroplasticity.

82 citations


Journal ArticleDOI
TL;DR: Brain responses to auditory stimuli in 11 adults who had been deprived of all patterned vision at birth by congenital cataracts until they were treated at 9 to 238 days of age showed enhanced auditory-driven activity in focal visual regions.

78 citations


Journal ArticleDOI
TL;DR: Interestingly, sweet taste intensity was rated progressively lower, whereas the perception of umami taste was augmented during the experimental sound condition, to a progressively greater degree with increasing concentration, and it is postulated that this effect arises from mechanostimulation of the chorda tympani nerve.
Abstract: Our sense of taste can be influenced by our other senses, with several groups having explored the effects of olfactory, visual, or tactile stimulation on what we perceive as taste. Research into multisensory, or crossmodal perception has rarely linked our sense of taste with that of audition. In our study, 48 participants in a crossover experiment sampled multiple concentrations of solutions of 5 prototypic tastants, during conditions with or without broad spectrum auditory stimulation, simulating that of airline cabin noise. Airline cabins are an unusual environment, in which food is consumed routinely under extreme noise conditions, often over 85 dB, and in which the perceived quality of food is often criticized. Participants rated the intensity of solutions representing varying concentrations of the 5 basic tastes on the general Labeled Magnitude Scale. No difference in intensity ratings was evident between the control and sound condition for salty, sour, or bitter tastes. Likewise, panelists did not perform differently during sound conditions when rating tactile, visual, or auditory stimulation, or in reaction time tests. Interestingly, sweet taste intensity was rated progressively lower, whereas the perception of umami taste was augmented during the experimental sound condition, to a progressively greater degree with increasing concentration. We postulate that this effect arises from mechanostimulation of the chorda tympani nerve, which transits directly across the tympanic membrane of the middle ear.

73 citations


Journal ArticleDOI
20 Nov 2015 - Flavour
TL;DR: A growing body of scientific evidence now shows that what people taste when evaluating a wine, and how much they enjoy the experience, can be influenced by the music that happens to be playing at the same time as discussed by the authors.
Abstract: A growing body of scientific evidence now shows that what people taste when evaluating a wine, and how much they enjoy the experience, can be influenced by the music that happens to be playing at the same time. It has long been known that what we hear can influence the hedonic aspects of tasting. However, what the latest research now shows is that by playing the “right” music one can also impact specific sensory-discriminative aspects of tasting as well. Music has been shown to influence the perceived acidity, sweetness, fruitiness, astringency, and length of wine. We argue against an account of such results in terms of synaesthesia, or “oenesthesia,” as some have chosen to call it. Instead, we suggest that attention, directed via the crossmodal correspondences that exist between sound and taste (in the popular meaning of the term, i.e., flavor), can modify (perhaps enhance, or certainly highlight when attended, or suppress when unattended) certain elements in the complex tasting experience that is drinking wine. We also highlight the likely role played by any change in the mood or emotional state of the person listening to the music on taste/aroma perception as well. Finally, we highlight how the crossmodal masking of sweetness perception may come into effect if the music happens to be too loud (a form of crossmodal sensory masking). Taken together, the evidence reviewed here supports the claim that, strange though it may seem, what we hear (specifically in terms of music) really can change our perception of the taste of wine, not to mention how much we enjoy the experience. Several plausible mechanisms that may underlie such crossmodal effects are outlined.

68 citations


Journal ArticleDOI
TL;DR: This article presents a critical perspective about the importance of top-down control for eMSI: in other words, who is controlling whom?
Abstract: Traditional views contend that behaviorally-relevant multisensory interactions occur relatively late during stimulus processing and subsequently to influences of (top-down) attentional control. In contrast, work from the last 15 years shows that information from different senses is integrated in the brain also during the initial 100 ms after stimulus onset and within low-level cortices. Critically, many of these early-latency multisensory interactions (hereafter eMSI) directly impact behavior. The prevalence of eMSI substantially advances our understanding of how unified perception and goal-related behavior emerge. However, it also raises important questions about the dependency of the eMSI on top-down, goal-based attentional control mechanisms that bias information processing toward task-relevant objects (hereafter top-down control). To date, this dependency remains controversial, because eMSI can occur independently of top-down control, making it plausible for (some) multisensory processes to directly shape perception and behavior. In other words, the former is not necessary for these early effects to occur and to link them with perception (see Figure 1A). This issue epitomizes the fundamental question regarding direct links between sensation, perception, and behavior (direct perception), and also extends it in a crucial way to incorporate the multisensory nature of everyday experience. At the same time, the emerging framework must strive to also incorporate the variety of higher-order control mechanisms that likely influence multisensory stimulus responses but which are not based on task-relevance. This article presents a critical perspective about the importance of top-down control for eMSI: in other words, who is controlling whom?
Figure 1(A): Ways in which top-down attentional control and bottom-up multisensory processes may influence direct perception in multisensory contexts.

67 citations


Journal ArticleDOI
TL;DR: Crossmodal correspondence effects were observed in 6-month-old infants but not in younger infants, suggesting that experience and/or further maturation is needed to fully develop this crossmodal association.
Abstract: We examined 4- and 6-month-old infants’ sensitivity to the perceptual association between pitch and object size. Crossmodal correspondence effects were observed in 6-month-old infants but not in younger infants, suggesting that experience and/or further maturation is needed to fully develop this crossmodal association.

Journal ArticleDOI
TL;DR: Support is provided for the claim that ambient sound influences taste judgments, and the approach outlined here may help researchers and experience designers to obtain more profound effects of the auditory or multisensory atmosphere.
Abstract: All of the senses can potentially contribute to the perception and experience of food and drink. Sensory influences come both from the food or drink itself, and from the environment in which that food or drink is tasted and consumed. In this study, participants initially had to pair each of three soundtracks with one of three chocolates (varying on the bitter-sweet dimension). In a second part of the study, the impact of the various music samples on these participants' ratings of the taste of various chocolates was assessed. The results demonstrate that what people hear exerts a significant influence over their rating of the taste of the chocolate. Interestingly, when the results were analysed based on the participants' individual music-chocolate matches (rather than the average response of the whole group), more robust crossmodal effects were revealed. These results therefore provide support for the claim that ambient sound influences taste judgments, and potentially provide useful insights concerning the future design of multisensory tasting experiences.
Practical Applications: The approach outlined here follows the increasing demand from the field of gastronomy for greater influence over the general multisensory atmosphere surrounding eating/drinking experiences. One of the novel contributions of the present research is to show how, by considering a participant's individual response, further insight for user-studies in gastrophysics may be provided. Increasing the personalization of such experiments in the years to come may help researchers to design individualized "sonic seasoning" experiences that are even more effective. In the future, then, the approach outlined here may help researchers and experience designers to obtain more profound effects of the auditory or multisensory atmosphere.

Journal ArticleDOI
TL;DR: Several examples of how the body affects perception are provided, demonstrating how sensory and motor information, body representations, and perceptions (of the body and the world) are interdependent.
Abstract: Incorporating the fact that the senses are embodied is necessary for an organism to interpret sensory information. Before a unified perception of the world can be formed, sensory signals must be processed with reference to body representation. The various attributes of the body, such as shape, proportion, posture, and movement, can both be derived from the various sensory systems and affect perception of the world (including the body itself). In this review we examine the relationships between sensory and motor information, body representations, and perceptions of the world and the body. We provide several examples of how the body affects perception (including but not limited to body perception). First, we show that body orientation affects visual distance perception and object orientation. Also, visual-auditory crossmodal correspondences depend on the orientation of the body: audio "high" frequencies correspond to a visual "up" defined by both gravity and body coordinates. Next, we show that the perceived location of touch is affected by the orientation of the head and eyes on the body, suggesting a visual component to coding body locations. Additionally, the reference frame used for coding touch locations seems to depend on whether gaze is static or moved relative to the body during the tactile task. The perceived attributes of the body, such as body size, affect tactile perception even at the level of detection thresholds and two-point discrimination. Next, long-range tactile masking provides clues to the posture of the body in a canonical body schema. Finally, ownership of seen body parts depends on the orientation and perspective of the body part in view. Together, all of these findings demonstrate how sensory and motor information, body representations, and perceptions (of the body and the world) are interdependent.

Journal ArticleDOI
TL;DR: Differences in eye movements when viewing the talker’s face may be an important contributor to interindividual differences in multisensory speech perception.
Abstract: The McGurk effect is an illusion in which visual speech information dramatically alters the perception of auditory speech. However, there is a high degree of individual variability in how frequently the illusion is perceived: some individuals almost always perceive the McGurk effect, while others rarely do. Another axis of individual variability is the pattern of eye movements made while viewing a talking face: some individuals often fixate the mouth of the talker, while others rarely do. Since the talker's mouth carries the visual speech information necessary to induce the McGurk effect, we hypothesized that individuals who frequently perceive the McGurk effect should spend more time fixating the talker's mouth. We used infrared eye tracking to study eye movements as 40 participants viewed audiovisual speech. Frequent perceivers of the McGurk effect were more likely to fixate the mouth of the talker, and there was a significant correlation between McGurk frequency and mouth looking time. The noisy encoding of disparity model of McGurk perception showed that individuals who frequently fixated the mouth had lower sensory noise and higher disparity thresholds than those who rarely fixated the mouth. Differences in eye movements when viewing the talker's face may be an important contributor to interindividual differences in multisensory speech perception.
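The noisy encoding of disparity model mentioned above treats the McGurk percept as the outcome of comparing a noisily encoded audiovisual disparity to a participant-specific threshold. The snippet below is a simplified sketch of that idea under Gaussian assumptions; the function name, parameterization, and example values are illustrative and may differ from the published formulation.

```python
from scipy.stats import norm

def p_mcgurk(stim_disparity, sensory_noise, disparity_threshold):
    """Simplified noisy-encoding-of-disparity style model: a participant
    encodes the audiovisual disparity of a McGurk stimulus with Gaussian
    noise and fuses the cues (perceives the illusion) when the encoded
    disparity falls below a personal threshold. Values are illustrative,
    not fitted estimates from the study."""
    return norm.cdf((disparity_threshold - stim_disparity) / sensory_noise)

# For a fixed stimulus, lower sensory noise together with a higher
# threshold (the pattern reported for frequent mouth-fixators) predicts
# a higher illusion rate than higher noise with a lower threshold.
print(p_mcgurk(stim_disparity=1.0, sensory_noise=0.5, disparity_threshold=1.5))  # ~0.84
print(p_mcgurk(stim_disparity=1.0, sensory_noise=1.5, disparity_threshold=0.8))  # ~0.45
```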

Journal ArticleDOI
TL;DR: It is demonstrated that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in the modulation of acoustic processing, indicating that crossmodal plasticity following deafness may unmask these existing non-auditory functions.
Abstract: The recent findings in several species that primary auditory cortex processes non-auditory information have largely overlooked the possibility for somatosensory effects. Therefore, the present investigation examined the core auditory cortices (the anterior auditory field, AAF, and the primary auditory field, A1) for tactile responsivity. Multiple single-unit recordings from anesthetized ferret cortex yielded histologically verified neurons (n=311) tested with electronically controlled auditory, visual, and tactile stimuli and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone, and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in auditory cortex are not exclusively visual and that somatosensation plays a significant role in the modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions.

Journal ArticleDOI
TL;DR: This essay examines the processing of motion information as it ascends the primate visual and somatosensory neuraxes and concludes that similar computations are implemented in the two sensory systems.
Abstract: While the different sensory modalities are sensitive to different stimulus energies, they are often charged with extracting analogous information about the environment. Neural systems may thus have evolved to implement similar algorithms across modalities to extract behaviorally relevant stimulus information, leading to the notion of a canonical computation. In both vision and touch, information about motion is extracted from a spatiotemporal pattern of activation across a sensory sheet (in the retina and in the skin, respectively), a process that has been extensively studied in both modalities. In this essay, we examine the processing of motion information as it ascends the primate visual and somatosensory neuraxes and conclude that similar computations are implemented in the two sensory systems.

Journal ArticleDOI
TL;DR: How targeted manipulation of neural activity using invasive and non-invasive neuromodulation techniques has advanced the understanding of multisensory processing is reviewed.
Abstract: We rely on rich and complex sensory information to perceive and understand our environment. Our multisensory experience of the world depends on the brain's remarkable ability to combine signals across sensory systems. Behavioural, neurophysiological and neuroimaging experiments have established principles of multisensory integration and candidate neural mechanisms. Here we review how targeted manipulation of neural activity using invasive and non-invasive neuromodulation techniques have advanced our understanding of multisensory processing. Neuromodulation studies have provided detailed characterizations of brain networks causally involved in multisensory integration. Despite substantial progress, important questions regarding multisensory networks remain unanswered. Critically, experimental approaches will need to be combined with theory in order to understand how distributed activity across multisensory networks collectively supports perception.

Journal ArticleDOI
TL;DR: Evidence highlighting the contribution of crossmodal illusions to restore, at least in part, defective mechanisms underlying a number of disorders of body representation related to pain, sensory, and motor impairments in neuropsychological and neurological diseases is considered.
Abstract: In everyday life, many diverse bits of information, simultaneously derived from the different sensory channels, converge into discrete brain areas, and are ultimately synthetized into unified percepts. Such multisensory integration can dramatically alter the phenomenal experience of both environmental events and our own body. Crossmodal illusions are one intriguing product of multisensory integration. This review describes and discusses the main clinical applications of the most known crossmodal illusions in rehabilitation settings. We consider evidence highlighting the contribution of crossmodal illusions to restore, at least in part, defective mechanisms underlying a number of disorders of body representation related to pain, sensory, and motor impairments in neuropsychological and neurological diseases, and their use for improving neuroprosthetics. This line of research is enriching our understanding of the relationships between multisensory functions and the pathophysiological mechanisms at the basis of a number of brain disorders. The review illustrates the potential of crossmodal illusions for restoring disarranged spatial and body representations, and, in turn, different pathological symptoms.

Journal ArticleDOI
TL;DR: The results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex.
Abstract: Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention.

Journal ArticleDOI
TL;DR: It is shown that intuitive SS sounds can be matched to the correct images by naive sighted participants just as well as by intensively-trained participants, indicating that existing crossmodal interactions and amodal sensory cortical processing may be as important in the interpretation of patterns by SS as crossmodal plasticity.
Abstract: Millions of people are blind worldwide. Sensory substitution (SS) devices (e.g., vOICe) can assist the blind by encoding a video stream into a sound pattern, recruiting visual brain areas for auditory analysis via crossmodal interactions and plasticity. SS devices often require extensive training to attain limited functionality. In contrast to conventional attention-intensive SS training that starts with visual primitives (e.g., geometrical shapes), we argue that sensory substitution can be engaged efficiently by using stimuli (such as textures) associated with intrinsic crossmodal mappings. Crossmodal mappings link images with sounds and tactile patterns. We show that intuitive SS sounds can be matched to the correct images by naive sighted participants just as well as by intensively-trained participants. This result indicates that existing crossmodal interactions and amodal sensory cortical processing may be as important in the interpretation of patterns by SS as crossmodal plasticity (e.g., the strengthening of existing connections or the formation of new ones), especially at the earlier stages of SS usage. An SS training procedure based on crossmodal mappings could both considerably improve participant performance and shorten training times, thereby enabling SS devices to significantly expand blind capabilities.
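For readers unfamiliar with how devices such as The vOICe encode a video stream into a sound pattern, the sketch below illustrates the general scheme (a left-to-right column scan, with pixel row mapped to pitch and brightness mapped to loudness). The parameter values and mapping details are illustrative assumptions, not the device's actual settings.

```python
import numpy as np

def image_to_soundscape(img, duration=1.0, fs=22050,
                        f_min=200.0, f_max=8000.0):
    """Toy vOICe-style encoder: scan a grayscale image (values in [0, 1])
    from left to right, mapping pixel row to sine-wave frequency (top of
    the image = high pitch) and pixel brightness to amplitude. Scan time,
    sample rate, and frequency range are arbitrary choices here."""
    n_rows, n_cols = img.shape
    samples_per_col = int(duration * fs / n_cols)
    t = np.arange(samples_per_col) / fs
    freqs = np.geomspace(f_max, f_min, n_rows)          # top row = highest pitch
    slices = []
    for col in range(n_cols):
        weights = img[:, col][:, None]                  # brightness per row
        tones = np.sin(2 * np.pi * freqs[:, None] * t)  # one sine per row
        slices.append((weights * tones).sum(axis=0))    # mix the column
    sound = np.concatenate(slices)
    return sound / (np.abs(sound).max() + 1e-9)         # normalize to [-1, 1]

# A bright diagonal running from bottom-left to top-right becomes a
# rising pitch sweep over the one-second scan.
img = np.zeros((64, 64))
img[np.arange(64)[::-1], np.arange(64)] = 1.0
waveform = image_to_soundscape(img)
```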

Journal ArticleDOI
01 Feb 2015 - Cortex
TL;DR: It is found that early blind individuals showed significantly superior performance in detecting tactile symmetric patterns compared to sighted controls, and the neural correlates associated with crossmodal neuroplasticity following visual deprivation are identified.

Journal ArticleDOI
TL;DR: This study showed a stronger poststimulus suppression of beta-band power at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli, demonstrating that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages.
Abstract: The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences ...

Journal ArticleDOI
TL;DR: There may be a temporal window in which both MSI and exogenous crossmodal spatial attention can contribute to multisensory response enhancement (MRE), although it is currently unclear what the relative contribution of each of these is to MRE.

Journal ArticleDOI
TL;DR: Results add to the increasing evidence that low frequency oscillations are functionally relevant for integration in distributed brain networks, as demonstrated for crossmodal interactions in visuotactile pattern matching in the current study.

Journal ArticleDOI
TL;DR: It is suggested that cross-modal correspondences modulate cross-modal integration during learning, leading to new learned units that have different stability over time.
Abstract: Experiencing a stimulus in one sensory modality is often associated with an experience in another sensory modality. For instance, seeing a lemon might produce a sensation of sourness. This might indicate some kind of cross-modal correspondence between vision and gustation. The aim of the current study was to explore whether such cross-modal correspondences influence cross-modal integration during perceptual learning. To that end, we conducted two experiments. Using a speeded classification task, Experiment 1 established a cross-modal correspondence between visual lightness and the frequency of an auditory tone. Using a short-term priming procedure, Experiment 2 showed that manipulation of such cross-modal correspondences led to the creation of a crossmodal unit regardless of the nature of the correspondence (i.e., congruent in Experiment 2a or incongruent in Experiment 2b). However, a comparison of priming effect sizes suggested that cross-modal correspondences modulate cross-modal integration during learning, leading to new learned units that have different stability over time. We discuss the implications of our results for the relation between cross-modal correspondence and perceptual learning in the context of a Bayesian explanation of cross-modal correspondences.

Journal ArticleDOI
TL;DR: The results confirm that vision of the body differentially affects nociceptive and non-nociceptive processing, but question the robustness of visual analgesia.
Abstract: Previous studies have suggested that looking at the hand can reduce the perception of pain and the magnitude of the ERPs elicited by nociceptive stimuli delivered onto the hand. In contrast, other studies have suggested that looking at the hand can increase tactile sensory discrimination performance, and enhance the magnitude of the ERPs elicited by tactile stimulation. These opposite effects could be related to differences in the crossmodal effects between vision, nociception, and touch. However, these differences could also be related to the use of different experimental designs. Importantly, most studies on the effects of vision on pain have relied on a mirror to create the illusion that the reflected hand is a direct view of the stimulated hand. Here, we compared the effects of direct versus mirror vision of the hand versus an object on the perception and ERPs elicited by non-nociceptive and nociceptive stimuli. We did not observe any significant effect of vision on the perceived intensity. However, vision of the hand did reduce the magnitude of the nociceptive N240 wave, and enhanced the magnitude of the non-nociceptive P200. Our results confirm that vision of the body differentially affects nociceptive and non-nociceptive processing, but question the robustness of visual analgesia.

Journal ArticleDOI
TL;DR: Results indicate that neural networks in the DLPFC function sequentially in the crossmodal task, from visual stimulus encoding and crossmodal information transfer between visual and tactile stimuli to the behavioral action.
Abstract: Previous studies have shown that neurons of monkey dorsolateral prefrontal cortex (DLPFC) integrate information across modalities and maintain it throughout the delay period of working-memory (WM) tasks. However, the mechanisms of this temporal integration in the DLPFC are still poorly understood. In the present study, to further elucidate the role of the DLPFC in crossmodal WM, we trained monkeys to perform visuo–haptic (VH) crossmodal and haptic–haptic (HH) unimodal WM tasks. The neuronal activity recorded in the DLPFC in the delay period of both tasks indicates that the early-delay differential activity probably is related to the encoding of sample information with different strengths depending on task modality, that the late-delay differential activity reflects the associated (modality-independent) action component of haptic choice in both tasks (that is, the anticipation of the behavioral choice and/or active recall and maintenance of sample information for subsequent action), and that the sustained whole-delay differential activity likely bridges and integrates the sensory and action components. In addition, the VH late-delay differential activity was significantly diminished when the haptic choice was not required. Taken together, the results show that, in addition to the whole-delay differential activity, DLPFC neurons also show early- and late-delay differential activities. These previously unidentified findings indicate that DLPFC is capable of (i) holding the coded sample information (e.g., visual or tactile information) in the early-delay activity, (ii) retrieving the abstract information (orientations) of the sample (whether the sample has been haptic or visual) and holding it in the late-delay activity, and (iii) preparing for behavioral choice acting on that abstract information.

Journal ArticleDOI
TL;DR: It is suggested that multisensory emotion perception involves at least two distinct mechanisms: classical multisensory integration, as shown for neutral expressions, and crossmodal prediction, as evident for emotional expressions.

Journal ArticleDOI
TL;DR: The findings suggest that the reduced neural integration of letters and speech sounds in dyslexic children may show moderate improvement with reading instruction and training and that behavioral improvements relate especially to individual differences in the timing of this neural integration.
Abstract: A failure to build solid letter-speech sound associations may contribute to reading impairments in developmental dyslexia. Whether this reduced neural integration of letters and speech sounds changes over time within individual children and how this relates to behavioral gains in reading skills remains unknown. In this research, we examined changes in event-related potential (ERP) measures of letter-speech sound integration over a 6-month period during which 9-year-old dyslexic readers (n = 17) followed training in letter-speech sound coupling alongside their regular reading curriculum. We presented the Dutch spoken vowels /a/ and /o/ as standard and deviant stimuli in one auditory and two audiovisual oddball conditions. In one audiovisual condition (AV0), the letter "a" was presented simultaneously with the vowels, while in the other (AV200) it preceded vowel onset by 200 ms. Prior to the training (T1), dyslexic readers showed the expected pattern of typical auditory mismatch responses, together with the absence of letter-speech sound effects in a late negativity (LN) window. After the training (T2), our results showed earlier (and enhanced) crossmodal effects in the LN window. Most interestingly, earlier LN latency at T2 was significantly related to higher behavioral accuracy in letter-speech sound coupling. On a more general level, the timing of the earlier mismatch negativity (MMN) in the simultaneous condition (AV0) measured at T1 was significantly related to reading fluency at both T1 and T2, as well as to reading gains. Our findings suggest that the reduced neural integration of letters and speech sounds in dyslexic children may show moderate improvement with reading instruction and training and that behavioral improvements relate especially to individual differences in the timing of this neural integration.
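The letter-speech sound integration effects described above rest on the standard oddball logic: subtract the response to the standard from the response to the deviant and read off the amplitude and latency of the resulting mismatch or late-negativity deflection within a predefined window. The sketch below illustrates that computation on simulated averages; the window boundaries, sampling rate, and data are assumptions for illustration only, not the study's analysis settings.

```python
import numpy as np

def difference_wave_peak(deviant_erp, standard_erp, times,
                         window=(0.400, 0.700)):
    """Deviant-minus-standard difference wave (the usual mismatch measure
    in oddball designs) and the latency/amplitude of its most negative
    point inside a late-negativity window. ERPs are 1-D arrays (a single
    channel or a channel average), `times` is in seconds; the 400-700 ms
    window here is an assumption, not the window used in the study."""
    diff = deviant_erp - standard_erp
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(diff[mask])                  # most negative deflection
    return times[mask][idx], diff[mask][idx]

# Toy usage on simulated averages (250 Hz sampling, -0.1 to 0.8 s epoch).
times = np.arange(-0.1, 0.8, 1 / 250)
standard = np.random.randn(times.size) * 0.1
deviant = standard - 0.5 * np.exp(-((times - 0.55) ** 2) / 0.002)
latency_s, amplitude = difference_wave_peak(deviant, standard, times)
```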

Journal ArticleDOI
TL;DR: These maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts.
Abstract: We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities-sight, sound, and touch-and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts.
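The crossmodal classification analysis described above amounts to training a pattern classifier on response patterns from one modality and testing it on patterns from another; above-chance transfer implies a modality-invariant representation in that region. Below is a minimal scikit-learn sketch of that train-on-one-modality, test-on-another logic; the data shapes, classifier choice, and preprocessing are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

def crossmodal_accuracy(X_train, y_train, X_test, y_test):
    """Train an object classifier on trial-by-voxel patterns from one
    modality and test it on patterns from another. Above-chance test
    accuracy in a region suggests a modality-invariant object
    representation there."""
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

# Toy example: 60 visual and 60 tactile trials, 200 voxels, 3 objects.
# Pure noise, so accuracy should hover around chance (1/3).
rng = np.random.default_rng(0)
X_vis, X_tac = rng.standard_normal((60, 200)), rng.standard_normal((60, 200))
y_vis, y_tac = np.tile([0, 1, 2], 20), np.tile([0, 1, 2], 20)
print(crossmodal_accuracy(X_vis, y_vis, X_tac, y_tac))
```

In practice such an analysis would be run within anatomically or searchlight-defined regions and validated with permutation tests, as is standard for MVPA studies of this kind.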