
Showing papers in "Journal of Cognitive Neuroscience in 2007"


Journal ArticleDOI
TL;DR: Automated calculation of whole-brain volume and estimated total intracranial volume are presented to demonstrate use of the data for measuring differences associated with normal aging and Alzheimer's disease.
Abstract: The Open Access Series of Imaging Studies is a series of magnetic resonance imaging data sets that is publicly available for study and analysis. The initial data set consists of a cross-sectional collection of 416 subjects aged 18 to 96 years. One hundred of the included subjects older than 60 years have been clinically diagnosed with very mild to moderate Alzheimer's disease. The subjects are all right-handed and include both men and women. For each subject, three or four individual T1-weighted magnetic resonance imaging scans obtained in single imaging sessions are included. Multiple within-session acquisitions provide extremely high contrast-to-noise ratio, making the data amenable to a wide range of analytic approaches including automated computational analysis. Additionally, a reliability data set is included containing 20 subjects without dementia imaged on a subsequent visit within 90 days of their initial session. Automated calculation of whole-brain volume and estimated total intracranial volume are presented to demonstrate use of the data for measuring differences associated with normal aging and Alzheimer's disease.

1,526 citations
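As an illustration of the automated group comparison the OASIS abstract above describes, here is a minimal Python sketch that relates normalized whole-brain volume to age and to clinical status in the older participants. The file name and column names (Age, CDR, nWBV) are assumptions about the publicly distributed OASIS-1 demographics spreadsheet, not something taken from the paper itself.

```python
# Sketch of an OASIS-1 group analysis; file name and column names are assumed.
import pandas as pd
from scipy import stats

df = pd.read_csv("oasis_cross-sectional.csv")            # hypothetical local copy
df = df.dropna(subset=["Age", "nWBV"])

# Normalized whole-brain volume (nWBV) is expected to decline with age.
r, p = stats.pearsonr(df["Age"], df["nWBV"])
print(f"age vs. nWBV: r = {r:.2f}, p = {p:.3g}")

# Among participants aged 60+, compare those with a dementia rating (CDR > 0)
# against nondemented controls (CDR = 0) on whole-brain volume.
older = df[df["Age"] >= 60]
ad = older.loc[older["CDR"] > 0, "nWBV"]
ctrl = older.loc[older["CDR"] == 0, "nWBV"]
t, p = stats.ttest_ind(ad, ctrl, equal_var=False)
print(f"CDR>0 vs. CDR=0 nWBV: t = {t:.2f}, p = {p:.3g}")
```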


Journal ArticleDOI
TL;DR: Comparisons of statistical tests using both simulated data and data obtained from a sample of stroke patients with disturbed spatial perception suggest that the Liebermeister approach for binomial data is more sensitive than the chi-square test and that a test described by Brunner and Munzel is more appropriate than the t test for nonbinomial data.
Abstract: Measures of brain activation (e.g., changes in scalp electrical potentials) have become the most popular method for inferring brain function. However, examining brain disruption (e.g., examining behavior after brain injury) can complement activation studies. Activation techniques identify regions involved with a task, whereas disruption techniques are able to discover which regions are crucial for a task. Voxel-based lesion mapping can be used to determine relationships between behavioral measures and the location of brain injury, revealing the function of brain regions. Lesion mapping can also correlate the effectiveness of neurosurgery with the location of brain resection, identifying optimal surgical targets. Traditionally, voxel-based lesion mapping has employed the chi-square test when the clinical measure is binomial and the Student's t test when measures are continuous. Here we suggest that the Liebermeister approach for binomial data is more sensitive than the chi-square test. We also suggest that a test described by Brunner and Munzel is more appropriate than the t test for nonbinomial data because clinical and neuropsychological data often violate the assumptions of the t test. We test our hypotheses by comparing these statistical tests using both simulated data and data obtained from a sample of stroke patients with disturbed spatial perception. We have also developed software to implement these tests (MRIcron), which is made freely available to the scientific community.

1,258 citations
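For the continuous (nonbinomial) case discussed in the abstract above, the contrast between the Student's t test and the Brunner-Munzel test can be sketched on simulated skewed scores at a single voxel. SciPy ships a Brunner-Munzel implementation; the Liebermeister test for binomial data has no standard SciPy routine and is omitted. This is an illustration only, not the authors' MRIcron code.

```python
# At one voxel: compare behavioral scores of patients whose lesion covers the
# voxel against those whose lesion spares it, with simulated skewed scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
spared = rng.exponential(scale=1.0, size=40)              # skewed control scores
lesioned = rng.exponential(scale=2.0, size=25) + 0.5      # shifted, broader distribution

t, p_t = stats.ttest_ind(lesioned, spared, equal_var=False)
w, p_bm = stats.brunnermunzel(lesioned, spared)

print(f"Welch t test:        t = {t:.2f}, p = {p_t:.4f}")
print(f"Brunner-Munzel test: W = {w:.2f}, p = {p_bm:.4f}")
```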


Journal ArticleDOI
TL;DR: The view that humans' responses to the pain of others can be modulated by cognitive and motivational processes, which influence whether observing a conspecific in need of help will result in empathic concern, an important instigator for helping behavior, is supported.
Abstract: Whether observation of distress in others leads to empathic concern and altruistic motivation, or to personal distress and egoistic motivation, seems to depend upon the capacity for self-other differentiation and cognitive appraisal. In this experiment, behavioral measures and event-related functional magnetic resonance imaging were used to investigate the effects of perspective-taking and cognitive appraisal while participants observed the facial expression of pain resulting from medical treatment. Video clips showing the faces of patients were presented either with the instruction to imagine the feelings of the patient (“imagine other”) or to imagine oneself to be in the patient's situation (“imagine self”). Cognitive appraisal was manipulated by providing information that the medical treatment had or had not been successful. Behavioral measures demonstrated that perspective-taking and treatment effectiveness instructions affected participants' affective responses to the observed pain. Hemodynamic changes were detected in the insular cortices, anterior medial cingulate cortex (aMCC), amygdala, and in visual areas including the fusiform gyrus. Graded responses related to the perspective-taking instructions were observed in middle insula, aMCC, medial and lateral premotor areas, and selectively in left and right parietal cortices. Treatment effectiveness resulted in signal changes in the perigenual anterior cingulate cortex, in the ventromedial orbito-frontal cortex, in the right lateral middle frontal gyrus, and in the cerebellum. These findings support the view that humans' responses to the pain of others can be modulated by cognitive and motivational processes, which influence whether observing a conspecific in need of help will result in empathic concern, an important instigator for helping behavior.

1,251 citations


Journal ArticleDOI
TL;DR: It is concluded that, in addition to their role in movement production, the basal ganglia and SMAs may mediate beat perception.
Abstract: When we listen to rhythm, we often move spontaneously to the beat. This movement may result from processing of the beat by motor areas. Previous studies have shown that several motor areas respond when attending to rhythms. Here we investigate whether specific motor regions respond to beat in rhythm. We predicted that the basal ganglia and supplementary motor area (SMA) would respond in the presence of a regular beat. To establish what rhythm properties induce a beat, we asked subjects to reproduce different types of rhythmic sequences. Improved reproduction was observed for one rhythm type, which had integer ratio relationships between its intervals and regular perceptual accents. A subsequent functional magnetic resonance imaging study found that these rhythms also elicited higher activity in the basal ganglia and SMA. This finding was consistent across different levels of musical training, although musicians showed activation increases unrelated to rhythm type in the premotor cortex, cerebellum, and SMAs (pre-SMA and SMA). We conclude that, in addition to their role in movement production, the basal ganglia and SMAs may mediate beat perception.

821 citations


Journal ArticleDOI
TL;DR: Increasing and decreasing positive and negative emotion can thus increase and decrease subjective reactions and associated amygdala activity in line with regulatory goals, and is associated with different patterns of prefrontal activation as a function of emotional valence and regulatory goal.
Abstract: The ability to cope adaptively with emotional events by volitionally altering one's emotional reactions is important for psychological and physical health as well as social interaction. Cognitive regulation of emotional responses to aversive events engages prefrontal regions that modulate activity in emotion-processing regions such as the amygdala. However, the neural correlates of the regulation of positive emotions remain largely unexplored. We used event-related functional magnetic resonance imaging to examine the neural correlates of cognitively increasing and decreasing emotional reactions to positive and negative stimuli. Participants viewed negative, positive, and neutral pictures while attempting to increase, decrease, or not alter their emotional reactions. Subjective reactions were assessed via on-line ratings. Consistent with previous studies, increasing negative and positive emotion engaged primarily left-lateralized prefrontal regions, whereas decreasing emotion activated bilateral prefrontal regions. Different activations unique to increasing versus decreasing emotion were observed for positive and negative stimuli: Unique increase-related activations were observed only for positive stimuli, whereas unique decrease-related activations were observed only for negative stimuli. Regulation also modulated activity in the amygdala, a key emotion-processing region. Regulation effects on amygdala activity were larger for positive than for negative stimuli, potentially reflecting a greater malleability of positive emotional reactions. Increasing and decreasing positive and negative emotion can thus increase and decrease subjective reactions and associated amygdala activity in line with regulatory goals, and is associated with different patterns of prefrontal activation as a function of emotional valence and regulatory goal.

575 citations


Journal ArticleDOI
TL;DR: Four fMRI experiments tested whether the rostro-caudal axis of the PFC is organized hierarchically, based on the level of abstraction at which multiple representations compete to guide selection of action, and found that activation in PFC subregions was consistent with an abstract representational hierarchy.
Abstract: The prefrontal cortex (PFC) is central to flexible and organized action. Recent theoretical and empirical results suggest that the rostro-caudal axis of the frontal lobes may reflect a hierarchical organization of control. Here, we test whether the rostro-caudal axis of the PFC is organized hierarchically, based on the level of abstraction at which multiple representations compete to guide selection of action. Four functional magnetic resonance imaging (fMRI) experiments parametrically manipulated the set of task-relevant (a) responses, (b) features, (c) dimensions, and (d) overlapping cue-to-dimension mappings. A systematic posterior to anterior gradient was evident within the PFC depending on the manipulated level of representation. Furthermore, across four fMRI experiments, activation in PFC subregions was consistent with the sub- and superordinate relationships that define an abstract representational hierarchy. In addition to providing further support for a representational hierarchy account of the rostro-caudal gradient in the PFC, these data provide important empirical constraints on current theorizing about control hierarchies and the PFC.

558 citations


Journal ArticleDOI
TL;DR: Differential engagement of the MPFC, the PCC/precuneus, and temporo-parietal regions in the self-task indicates that these structures act as key players in the evaluation of one's own emotional state during empathic face-to-face interaction.
Abstract: Empathy allows emotional psychological inference about another person's mental states and feelings in social contexts. We aimed at specifying the common and differential neural mechanisms of “self”- and “other”-related attribution of emotional states using event-related functional magnetic resonance imaging. Subjects viewed faces expressing emotions with direct or averted gaze and either focused on their own emotional response to each face (self-task) or evaluated the emotional state expressed by the face (other-task). The common network activated by both tasks included the left lateral orbito-frontal and medial prefrontal cortices (MPFC), bilateral inferior frontal cortices, superior temporal sulci and temporal poles, as well as the right cerebellum. In a subset of these regions, neural activity was significantly correlated with empathic abilities. The self- (relative to the other-) task differentially activated the MPFC, the posterior cingulate cortex (PCC)/precuneus, and the temporo-parietal junction bilaterally. Empathy-related processing of emotional facial expressions recruited brain areas involved in mirror neuron and theory-of-mind (ToM) mechanisms. The differential engagement of the MPFC, the PCC/precuneus, and temporo-parietal regions in the self-task indicates that these structures act as key players in the evaluation of one's own emotional state during empathic face-to-face interaction. Activation of mirror neurons in a task relying on empathic abilities without explicit task-related motor components supports the view that mirror neurons are not only involved in motor cognition but also in emotional interpersonal cognition. An interplay between ToM and mirror neuron mechanisms may hold for the maintenance of a self-other distinction during empathic interpersonal face-to-face interactions.

496 citations


Journal ArticleDOI
TL;DR: Findings show that self-referential processing and perspective taking recruit distinct regions of the MPFC and suggest that the left dorsal MPFC may be involved in decoupling one's own from other people's perspectives on the self.
Abstract: The medial prefrontal cortex (MPFC) appears to play a prominent role in two fundamental aspects of social cognition, that is, self-referential processing and perspective taking. However, it is currently unclear whether the same or different regions of the MPFC mediate these two interdependent processes. This functional magnetic resonance imaging study sought to clarify the issue by manipulating both dimensions in a factorial design. Participants judged the extent to which trait adjectives described their own personality (e.g., “Are you sociable?”) or the personality of a close friend (e.g., “Is Caroline sociable?”) and were also asked to put themselves in the place of their friend (i.e., to take a third-person perspective) and estimate how this person would judge the adjectives, with the target of the judgments again being either the self (e.g., “According to Caroline, are you sociable?”) or the other person (e.g., “According to Caroline, is she sociable?”). We found that self-referential processing (i.e., judgments targeting the self vs. the other person) yielded activation in the ventral and dorsal anterior MPFC, whereas perspective taking (i.e., adopting the other person's perspective, rather than one's own, when making judgments) resulted in activation in the posterior dorsal MPFC; the interaction between the two dimensions yielded activation in the left dorsal MPFC. These findings show that self-referential processing and perspective taking recruit distinct regions of the MPFC and suggest that the left dorsal MPFC may be involved in decoupling one's own from other people's perspectives on the self.

411 citations


Journal ArticleDOI
TL;DR: It is suggested that the amygdala automatically categorizes faces according to face properties commonly perceived to signal untrustworthiness, which is better predicted by consensus ratings of trustworthiness than by an individual's own judgments.
Abstract: Deciding whether an unfamiliar person is trustworthy is one of the most important decisions in social environments. We used functional magnetic resonance imaging to show that the amygdala is involved in implicit evaluations of trustworthiness of faces, consistent with prior findings. The amygdala response increased as perceived trustworthiness decreased in a task that did not demand person evaluation. More importantly, we tested whether this response is due to an individual's idiosyncratic perception or to face properties that are perceived as untrustworthy across individuals. The amygdala response was better predicted by consensus ratings of trustworthiness than by an individual's own judgments. Individual judgments accounted for little residual variance in the amygdala after controlling for the shared variance with consensus ratings. These findings suggest that the amygdala automatically categorizes faces according to face properties commonly perceived to signal untrustworthiness.

408 citations


Journal ArticleDOI
TL;DR: This work compared two tasks that are widely used in research on mentalizing, false belief stories and animations of rigid geometric shapes that depict social interaction, to investigate whether the neural systems that mediate the representation of others' mental states are consistent across these tasks.
Abstract: We compared two tasks that are widely used in research on mentalizing---false belief stories and animations of rigid geometric shapes that depict social interactions---to investigate whether the neural systems that mediate the representation of others' mental states are consistent across these tasks. Whereas false belief stories activated primarily the anterior paracingulate cortex (APC), the posterior cingulate cortex/precuneus (PCC/PC), and the temporo-parietal junction (TPJ)---components of the distributed neural system for theory of mind (ToM)---the social animations activated an extensive region along nearly the full extent of the superior temporal sulcus, including a locus in the posterior superior temporal sulcus (pSTS), as well as the frontal operculum and inferior parietal lobule (IPL)---components of the distributed neural system for action understanding---and the fusiform gyrus. These results suggest that the representation of covert mental states that may predict behavior and the representation of intentions that are implied by perceived actions involve distinct neural systems. These results show that the TPJ and the pSTS play dissociable roles in mentalizing and are parts of different distributed neural systems. Because the social animations do not depict articulated body movements, these results also highlight that the perception of the kinematics of actions is not necessary to activate the mirror neuron system, suggesting that this system plays a general role in the representation of intentions and goals of actions. Furthermore, these results suggest that the fusiform gyrus plays a general role in the representation of visual stimuli that signify agency, independent of visual form.

402 citations


Journal ArticleDOI
TL;DR: ERP evidence is presented that the feedback-elicited error-related negativity, an ERP component attributed to the ACC, can be elicited by positive feedback when a person is expecting negative feedback and vice versa, suggesting that performance monitoring in the ACC is not limited to error processing.
Abstract: Several converging lines of evidence suggest that the anterior cingulate cortex (ACC) is selectively involved in error detection or evaluation of poor performance. Here we challenge this notion by presenting event-related potential (ERP) evidence that the feedback-elicited error-related negativity, an ERP component attributed to the ACC, can be elicited by positive feedback when a person is expecting negative feedback and vice versa. These results suggest that performance monitoring in the ACC is not limited to error processing. We propose that the ACC acts as part of a more general performance-monitoring system that is activated by violations in expectancy. Further, we propose that the common observation of increased ACC activity elicited by negative events could be explained by an overoptimistic bias in generating expectations of performance. These results could shed light on neurobehavioral disorders, such as depression and mania, associated with alterations in performance monitoring and also in judgments of self-related events.

Journal ArticleDOI
TL;DR: The results demonstrate that the visually induced speeding-up and suppression of auditory N1 amplitude reflect multisensory integrative mechanisms of AV events that crucially depend on whether vision predicts when the sound occurs.
Abstract: A question that has emerged over recent years is whether audiovisual (AV) speech perception is a special case of multi-sensory perception. Electrophysiological (ERP) studies have found that auditory neural activity (N1 component of the ERP) induced by speech is suppressed and speeded up when a speech sound is accompanied by concordant lip movements. In Experiment 1, we show that this AV interaction is not speech-specific. Ecologically valid nonspeech AV events (actions performed by an actor such as handclapping) were associated with a similar speeding-up and suppression of auditory N1 amplitude as AV speech (syllables). Experiment 2 demonstrated that these AV interactions were not influenced by whether A and V were congruent or incongruent. In Experiment 3 we show that the AV interaction on N1 was absent when there was no anticipatory visual motion, indicating that the AV interaction only occurred when visual anticipatory motion preceded the sound. These results demonstrate that the visually induced speeding-up and suppression of auditory N1 amplitude reflect multisensory integrative mechanisms of AV events that crucially depend on whether vision predicts when the sound occurs.

Journal ArticleDOI
TL;DR: Demand-related changes in deactivation magnitude correlated with performance changes, suggesting that individual and group differences in deactivation have functional significance.
Abstract: The network of regions shown by functional imaging studies to be deactivated by experimental tasks relative to nominally more passive baselines (task < baseline) may reflect processes engaged during the resting state or “default mode.” Deactivation may result when attention and resources are diverted from default-mode processes toward task processes. Aging is associated with altered patterns of deactivation which may be related to declining resources, difficulties with resource allocation, or both. These possibilities predict that greater task demand, which increases deactivation levels in younger adults, should exacerbate age-related declines in allocating resources away from the default mode. The present study investigated the magnitude and temporal properties of deactivations in young and older adults during tasks that varied in their demand for cognitive control. Two versions of a verb generation task that varied in their demand for selection among competing alternatives were compared to word reading and a fixation baseline condition. Consistent with our hypothesis, greater deactivations were found with increasing demand. Young and older adults showed equivalent deactivations in the minimal selection condition. By contrast, age differences in both the magnitude and time course of deactivation increased with selection demand: Compared to young adults', older adults' deactivation response showed less sensitivity to demand. Demand-related changes in deactivation magnitude correlated with performance changes, suggesting that individual and group differences in deactivation have functional significance.

Journal ArticleDOI
TL;DR: Electroencephalogram measurements are used to determine what happens in the human visual cortex during detection of a texture-defined square under nonmasked and masked conditions; the results indicate that masking derives its effectiveness, at least partly, from disrupting reentrant processing, thereby interfering with the neural mechanisms of figure-ground segmentation and visual awareness itself.
Abstract: In masking, a stimulus is rendered invisible through the presentation of a second stimulus shortly after the first. Over the years, authors have typically explained masking by postulating some early disruption process. In these feedforward-type explanations, the mask somehow “catches up” with the target stimulus, disrupting its processing either through lateral or interchannel inhibition. However, studies from recent years indicate that visual perception---and most notably visual awareness itself---may depend strongly on cortico-cortical feedback connections from higher to lower visual areas. This has led some researchers to propose that masking derives its effectiveness from selectively interrupting these reentrant processes. In this experiment, we used electroencephalogram measurements to determine what happens in the human visual cortex during detection of a texture-defined square under nonmasked (seen) and masked (unseen) conditions. Electroencephalogram derivatives that are typically associated with reentrant processing turn out to be absent in the masked condition. Moreover, extrastriate visual areas are still activated early on by both seen and unseen stimuli, as shown by scalp surface Laplacian current source-density maps. This conclusively shows that feedforward processing is preserved, even when subject performance is at chance as determined by objective measures. From these results, we conclude that masking derives its effectiveness, at least partly, from disrupting reentrant processing, thereby interfering with the neural mechanisms of figure-ground segmentation and visual awareness itself.
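As a rough illustration of the scalp surface Laplacian (current source density) maps mentioned above, the sketch below applies MNE-Python's spherical-spline CSD transform to evoked responses for seen and unseen trials. The file name, event labels, and montage are assumptions made for the sake of the example; this is not the authors' pipeline.

```python
# Sketch: CSD (surface Laplacian) topographies for seen vs. unseen conditions.
# The epochs file, event labels, and montage below are hypothetical.
import mne

epochs = mne.read_epochs("texture_segmentation-epo.fif")
epochs.set_montage("standard_1020")                       # electrode positions required for CSD

evoked_seen = epochs["seen"].average()
evoked_unseen = epochs["unseen"].average()

csd_seen = mne.preprocessing.compute_current_source_density(evoked_seen)
csd_unseen = mne.preprocessing.compute_current_source_density(evoked_unseen)

# Inspect topographies in a latency window where reentrant signatures are expected.
csd_seen.plot_topomap(times=[0.15, 0.25, 0.35])
csd_unseen.plot_topomap(times=[0.15, 0.25, 0.35])
```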

Journal ArticleDOI
TL;DR: Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously.
Abstract: During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated to previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.

Journal ArticleDOI
TL;DR: The results suggest that the rIFG is part of a network active during response inhibition across different response modalities, based on a modified antisaccade task that required heightened response inhibition at the time of antisaccade execution under a minimal preparatory set.
Abstract: The go/no-go task, which effectively taps the ability to inhibit prepotent response tendency, has consistently activated the lateral prefrontal cortex, particularly the right inferior frontal gyrus (rIFG). On the other hand, rIFG activation has rarely been reported in the antisaccade task, seemingly an oculomotor version of the manual go/no-go task. One possible explanation for the variable IFG activation is the modality difference of the two tasks: The go/no-go task is performed manually, whereas the antisaccade task is performed in the oculomotor modality. Another explanation is that these two tasks have different task structures that require different cognitive processes: The traditional antisaccade task requires (i) configuration of a preparatory set prior to antisaccade execution and (ii) response inhibition at the time of antisaccade execution, whereas the go/no-go task requires heightened response inhibition under a minimal preparatory set. To test these possibilities, the traditional antisaccade task was modified in the present functional magnetic resonance imaging study such that it required heightened response inhibition at the time of antisaccade execution under a minimal preparatory set. Prominent activation related to response inhibition was observed in multiple frontoparietal regions, including the rIFG. Moreover, meta-analyses revealed that the rIFG activation in the present study was observed in the go/no-go tasks but not in the traditional antisaccade task, indicating that the rIFG activation was sensitive to the task structure difference, but not to the response modality difference. These results suggest that the rIFG is part of a network active during response inhibition across different response modalities.

Journal ArticleDOI
TL;DR: General considerations for voxel-based methods are outlined, the use of a nonparametric permutation test adapted from functional neuroimaging is characterized, and methods for regional power analysis in lesion studies are presented.
Abstract: Lesion analysis in brain-injured populations complements what can be learned from functional neuroimaging. Voxel-based approaches to mapping lesion-behavior correlations in brain-injured populations are increasingly popular, and have the potential to leverage image analysis methods drawn from functional magnetic resonance imaging. However, power is a major concern for these studies, and is likely to vary regionally due to the distribution of lesion locations. Here, we outline general considerations for voxel-based methods, characterize the use of a nonparametric permutation test adapted from functional neuroimaging, and present methods for regional power analysis in lesion studies.
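A minimal sketch of the nonparametric permutation approach described above, using the maximum-statistic distribution to obtain a family-wise-error-corrected threshold across voxels. The lesion matrix and behavioral scores are simulated, and a simple Welch-type t serves as the voxelwise statistic; the paper's own statistics and power calculations are not reproduced here.

```python
# Voxelwise lesion-behavior mapping with a max-statistic permutation test.
import numpy as np

rng = np.random.default_rng(1)
n_pat, n_vox = 60, 200
lesions = rng.random((n_pat, n_vox)) < 0.2            # True = voxel lesioned in that patient
scores = rng.normal(size=n_pat)
scores[lesions[:, 0]] -= 1.0                          # voxel 0 truly lowers the score

def voxelwise_t(scores, lesions):
    """Welch-type t at every voxel (spared minus lesioned group means)."""
    t = np.zeros(lesions.shape[1])
    for v in range(lesions.shape[1]):
        a, b = scores[lesions[:, v]], scores[~lesions[:, v]]
        if len(a) < 2 or len(b) < 2:
            continue                                  # too few patients: leave t at 0
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t[v] = (b.mean() - a.mean()) / se
    return t

t_obs = voxelwise_t(scores, lesions)
max_null = np.array([voxelwise_t(rng.permutation(scores), lesions).max()
                     for _ in range(500)])            # null distribution of the maximum t
threshold = np.quantile(max_null, 0.95)               # FWE-corrected critical value
print("significant voxels:", np.where(t_obs > threshold)[0])
```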

Journal ArticleDOI
TL;DR: Direct comparisons between children and adults indicated that children activated the MPFC during self-knowledge retrieval to a much greater extent than adults.
Abstract: Previous neuroimaging research with adults suggests that the medial prefrontal cortex (MPFC) and the medial posterior parietal cortex (MPPC) are engaged during self-knowledge retrieval processes. However, this has yet to be assessed in a developmental sample. Twelve children and 12 adults (average age = 10.2 and 26.1 years, respectively) reported whether short phrases described themselves or a highly familiar other (Harry Potter) while undergoing functional magnetic resonance imaging. In both children and adults, the MPFC was relatively more active during self- than social knowledge retrieval, and the MPPC was relatively more active during social than self-knowledge retrieval. Direct comparisons between children and adults indicated that children activated the MPFC during self-knowledge retrieval to a much greater extent than adults. The particular regions of the MPPC involved varied between the two groups, with the posterior precuneus engaged by adults, but the anterior precuneus and posterior cingulate engaged by children. Only children activated the MPFC significantly above baseline during self-knowledge retrieval. Implications for social cognitive development and the processing functions performed by the MPFC are discussed.

Journal ArticleDOI
TL;DR: Results indicate that stress impaired performance on cognitive flexibility tasks, but not control tasks; compared to placebo, cognitive flexibility improved during stress with propranolol, which may provide support for the hypothesis that stress-related impairments in cognitive flexibility are related to the noradrenergic system.
Abstract: Stress-induced activation of the locus ceruleus-norepinephrine (LC-NE) system produces significant cognitive and behavioral effects, including enhanced arousal and attention. Improvements in discrimination task performance and memory have been attributed to this stress response. In contrast, for other cognitive functions that require cognitive flexibility, increased activity of the LC-NE system may produce deleterious effects. The aim of the present study was to determine the effect of pharmacological modulation of the LC-NE system on stress-induced impairments in cognitive flexibility performance in healthy individuals. Cognitive performance, plus psychological and physiological parameters for 16 adults without any history of anxiety disorders, was assessed during four test sessions: stress and no-stress, with each condition tested after administration of propranolol and placebo. The Trier Social Stress Test, a public-speaking and mental arithmetic stressor, was presented to participants for the stress sessions, whereas a similar, but nonstressful, control task (reading, counting) was utilized for the no-stress sessions. Tests of cognitive flexibility included lexical-semantic and associative problem-solving tasks (anagrams, Compound Remote Associates Test). Visuo-spatial memory and motor processing speed tests served as control tasks. Results indicate that (1) stress impaired performance on cognitive flexibility tasks, but not control tasks; (2) compared to placebo, cognitive flexibility improved during stress with propranolol. Therefore, psychological stress, such as public speaking, negatively impacts performance on tasks requiring cognitive flexibility in normal individuals, and this effect is reversed by beta-adrenergic antagonism. This may provide support for the hypothesis that stress-related impairments in cognitive flexibility are related to the noradrenergic system.

Journal ArticleDOI
TL;DR: The data suggest that the early ERP primarily reflects obligatory perceptual processing that is facilitated by active short-term memory representations, whereas the late ERP reflects increased resource allocation due to the motivational relevance of affective cues.
Abstract: A repetition paradigm was used to assess the nature of affective modulation of early and late components of the event-related potential (ERP) during picture viewing. High-density ERPs were measured while participants passively viewed affective or neutral pictures that were repeated up to 90 times each. Both ERP components were modulated by emotional arousal, with ERPs elicited when viewing pleasant and unpleasant pictures different than when viewing neutral pictures. On the other hand, repetition had different effects on these two components. The early occipitotemporal component (150--300 msec) primarily showed a decrease in amplitude within a block of repetitions that did not differ as a function of picture content. The late centroparietal component (300--600 msec) showed a decrease both between and within blocks of repetitions, with neutral pictures eliciting no late positive potential in the final block of the study. The data suggest that the early ERP primarily reflects obligatory perceptual processing that is facilitated by active short-term memory representations, whereas the late ERP reflects increased resource allocation due to the motivational relevance of affective cues.

Journal ArticleDOI
TL;DR: A neural model of face processing is proposed in which face- and eye-selective neurons situated in the superior temporal sulcus region of the human brain respond differently to the face configuration and to the eyes depending on the face context.
Abstract: Unlike most other objects that are processed analytically, faces are processed configurally. This configural processing is reflected early in visual processing following face inversion and contrast reversal, as an increase in the N170 amplitude, a scalp-recorded event-related potential. Here, we show that these face-specific effects are mediated by the eye region. That is, they occurred only when the eyes were present, but not when eyes were removed from the face. The N170 recorded to inverted and negative faces likely reflects the processing of the eyes. We propose a neural model of face processing in which face- and eye-selective neurons situated in the superior temporal sulcus region of the human brain respond differently to the face configuration and to the eyes depending on the face context. This dynamic response modulation accounts for the N170 variations reported in the literature. The eyes may be central to what makes faces so special.

Journal ArticleDOI
TL;DR: The theoretical, statistical, and practical underpinnings of pattern-based classification approaches to functional neuroimaging analysis are examined, showing how analyses of the way patterns of brain activity encode information can offer insight into the nature of neural representations.
Abstract: The goal of pattern-based classification of functional neuroimaging data is to link individual brain activation patterns to the experimental conditions experienced during the scans. These “brain-reading” analyses advance functional neuroimaging on three fronts. From a technical standpoint, pattern-based classifiers overcome fatal flaws in the status quo inferential and exploratory multivariate approaches by combining pattern-based analyses with a direct link to experimental variables. In theoretical terms, the results that emerge from pattern-based classifiers can offer insight into the nature of neural representations. This shifts the emphasis in functional neuroimaging studies away from localizing brain activity toward understanding how patterns of brain activity encode information. From a practical point of view, pattern-based classifiers are already well established and understood in many areas of cognitive science. These tools are familiar to many researchers and provide a quantitatively so...
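The basic "brain-reading" recipe described above, linking activation patterns to experimental conditions, can be sketched with a standard linear classifier and cross-validation. The voxel patterns below are simulated, and scikit-learn's LinearSVC simply stands in for whichever pattern classifier a given study might choose.

```python
# Pattern-based classification of simulated voxel patterns into two conditions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_scans, n_voxels = 120, 2000
y = np.repeat([0, 1], n_scans // 2)                   # condition label for each scan
X = rng.normal(size=(n_scans, n_voxels))              # one voxel pattern per scan
X[y == 1, :50] += 0.3                                 # weak condition-related signal in 50 voxels

clf = make_pipeline(StandardScaler(), LinearSVC())
acc = cross_val_score(clf, X, y, cv=10)               # 10-fold cross-validated decoding accuracy
print(f"mean accuracy: {acc.mean():.2f} (chance = 0.50)")
```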

Journal ArticleDOI
TL;DR: Analysis of the time course of pitch processing, as revealed by the event-related brain potentials to the prosodically congruous and incongruous sentence-final words, showed that musicians were, on average, 300 msec faster than nonmusicians to categorize prosodical congruously and incONGruous endings.
Abstract: The aim of this study was to determine whether musical expertise influences the detection of pitch variations in a foreign language that participants did not understand. To this end, French adults, musicians and nonmusicians, were presented with sentences spoken in Portuguese. The final words of the sentences were prosodically congruous (spoken at normal pitch height) or incongruous (pitch was increased by 35% or 120%). Results showed that when the pitch deviations were small and difficult to detect (35%: weak prosodic incongruities), the level of performance was higher for musicians than for nonmusicians. Moreover, analysis of the time course of pitch processing, as revealed by the event-related brain potentials to the prosodically congruous and incongruous sentence-final words, showed that musicians were, on average, 300 msec faster than nonmusicians to categorize prosodically congruous and incongruous endings. These results are in line with previous ones showing that musical expertise, by increasing discrimination of pitch---a basic acoustic parameter equally important for music and speech prosody---does facilitate the processing of pitch variations not only in music but also in language. Finally, comparison with previous results [Schon, D., Magne, C., & Besson, M. The music of speech: Music training facilitates pitch processing in both music and language. Psychophysiology, 41, 341--349, 2004] points to the influence of semantics on the perception of acoustic prosodic cues.

Journal ArticleDOI
TL;DR: A time-frequency analysis of the 3 to 30 Hz EEG oscillatory activity in a verbal n-back working memory paradigm is proposed, providing further functional information on the fronto-posterior network supporting working memory.
Abstract: Working memory involves the short-term storage and manipulation of information necessary for cognitive performance, including comprehension, learning, reasoning and planning. Although electroencephalogram (EEG) rhythms are modulated during working memory, the temporal relationship of EEG oscillations with the eliciting event has not been well studied. In particular, the dynamics of the neural network supporting memory processes may be best captured in induced oscillations, characterized by a loose temporal link with the stimulus. In order to differentiate induced from evoked functional processes, the present study proposes a time-frequency analysis of the 3 to 30 Hz EEG oscillatory activity in a verbal n-back working memory paradigm. Control tasks were designed to identify oscillatory activity related to stimulus presentation (passive task) and focused attention to the stimulus (detection task). Evoked theta activity (4--8 Hz) phase-locked to the visual stimulus was evidenced in the parieto-occipital region for all tasks. In parallel, induced theta activity was recorded in the frontal region for detection and n-back memory tasks, but not for the passive task, suggesting its dependency on focused attention to the stimulus. Sustained induced oscillatory activity was identified in relation to working memory in the theta and beta (15--25 Hz) frequency bands, larger for the highest memory load. Its late occurrence limited to nonmatched items suggests that it could be related to item retention and active maintenance for further task requirements. Induced theta and beta activities displayed respectively a frontal and parietal topographical distribution, providing further functional information on the fronto-posterior network supporting working memory.
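To make the evoked-versus-induced distinction above concrete, here is a minimal sketch in which total power (the average of single-trial time-frequency transforms) is compared with evoked power (the transform of the trial average); the residual approximates induced activity, which survives averaging of power but not of the raw signal. The simulated data, sampling rate, and wavelet parameters are illustrative assumptions, not the paper's analysis settings.

```python
# Evoked vs. induced oscillatory power at 6 Hz via complex Morlet convolution.
import numpy as np

fs = 250.0
t = np.arange(-0.5, 1.5, 1 / fs)
rng = np.random.default_rng(3)
jitter = rng.uniform(0.2, 0.4, 60)                    # latency jitter -> non-phase-locked burst
trials = np.array([np.sin(2 * np.pi * 6 * (t - j)) * (t > j) * np.exp(-(t - j) / 0.3)
                   + 0.5 * rng.normal(size=t.size) for j in jitter])

def morlet_power(x, freq, fs, n_cycles=5):
    """Power time course of x at one frequency via complex Morlet convolution."""
    sigma = n_cycles / (2 * np.pi * freq)             # width of the Gaussian taper
    wt = np.arange(-5 * sigma, 5 * sigma, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma**2))
    wavelet /= np.sum(np.abs(wavelet))
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

freq = 6.0
total = np.mean([morlet_power(tr, freq, fs) for tr in trials], axis=0)
evoked = morlet_power(trials.mean(axis=0), freq, fs)  # only phase-locked activity survives
induced = total - evoked
print(f"peak induced/evoked power ratio: {induced.max() / evoked.max():.1f}")
```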

Journal ArticleDOI
TL;DR: A common neural network underlying all declarative memory retrieval, as well as unique neural contributions reflecting the specific properties of retrieved memories are suggested.
Abstract: This study sought to explore the neural correlates that underlie autobiographical, episodic, and semantic memory. Autobiographical memory was defined as the conscious recollection of personally relevant events, episodic memory as the recall of stimuli presented in the laboratory, and semantic memory as the retrieval of factual information and general knowledge about the world. Our objective was to delineate common neural activations, reflecting a functional overlap, and unique neural activations, reflecting functional dissociation of these memory processes. We conducted an event-related functional magnetic resonance imaging study in which we utilized the same pictorial stimuli but manipulated retrieval demands to extract autobiographical, episodic, or semantic memories. The results show a functional overlap of the three types of memory retrieval in the inferior frontal gyrus, the middle frontal gyrus, the caudate nucleus, the thalamus, and the lingual gyrus. All memory conditions yielded activation of the left medial-temporal lobe; however, we found a functional dissociation within this region. The anterior and superior areas were active in episodic and semantic retrieval, whereas more posterior and inferior areas were active in autobiographical retrieval. Unique activations for each memory type were also delineated, including medial frontal increases for autobiographical, right middle frontal increases for episodic, and right inferior temporal increases for semantic retrieval. These findings suggest a common neural network underlying all declarative memory retrieval, as well as unique neural contributions reflecting the specific properties of retrieved memories.

Journal ArticleDOI
TL;DR: Results reveal greater modulation of interhemispheric interactions during generation of dominant than nondominant hand movements; this modulation might release muscles from inhibition in the contralateral primary motor cortex while preventing the occurrence of mirror activity in the ipsilateral primary motor cortex, and could therefore contribute to intermanual differences in dexterity.
Abstract: Interhemispheric inhibition (IHI) between motor cortical areas is thought to play a critical role in motor control and could influence manual dexterity. The purpose of this study was to investigate IHI preceding movements of the dominant and nondominant hands of healthy volunteers. Movement-related IHI was studied by means of a double-pulse transcranial magnetic stimulation protocol in right-handed individuals in a simple reaction time paradigm. IHI targeting the motor cortex contralateral (IHIc) and ipsilateral (IHIi) to each moving finger was determined. IHIc was comparable after the go signal, a long time preceding movement onset, in both hands. Closer to movement onset, IHIc reversed into facilitation for the right dominant hand but remained inhibitory for left nondominant hand movements. IHIi displayed a nearly constant inhibition with a trough early in the premovement period in both hands. In conclusion, our results reveal greater modulation of interhemispheric interactions during generation of dominant than nondominant hand movements. This modulation essentially consisted of a shift from a balanced IHI at rest to an IHI predominantly directed toward the ipsilateral primary motor cortex at movement onset. Such a mechanism might release muscles from inhibition in the contralateral primary motor cortex while preventing the occurrence of mirror activity in the ipsilateral primary motor cortex and could therefore contribute to intermanual differences in dexterity.

Journal ArticleDOI
TL;DR: The integration of gesture and speech in comprehension does not appear to be an obligatory process but is modulated by situational factors such as the amount of observed meaningful hand movements.
Abstract: The present series of experiments explored the extent to which iconic gestures convey information not found in speech. Electroencephalogram (EEG) was recorded as participants watched videos of a person gesturing and speaking simultaneously. The experimental sentences contained an unbalanced homonym in the initial part of the sentence (e.g., She controlled the ball ...) and were disambiguated at a target word in the subsequent clause (which during the game ... vs. which during the dance ...). Coincident with the initial part of the sentence, the speaker produced an iconic gesture which supported either the dominant or the subordinate meaning. Event-related potentials were time-locked to the onset of the target word. In Experiment 1, participants were explicitly asked to judge the congruency between the initial homonym-gesture combination and the subsequent target word. The N400 at target words was found to be smaller after a congruent gesture and larger after an incongruent gesture, suggesting that listeners can use gestural information to disambiguate speech. Experiment 2 replicated the results using a less explicit task, indicating that the disambiguating effect of gesture is somewhat task-independent. Unrelated grooming movements were added to the paradigm in Experiment 3. The N400 at subordinate targets was found to be smaller after subordinate gestures and larger after dominant gestures as well as grooming, indicating that an iconic gesture can facilitate the processing of a less frequent word meaning. The N400 at dominant targets no longer varied as a function of the preceding gesture in Experiment 3, suggesting that the addition of meaningless movements weakened the impact of gesture. Thus, the integration of gesture and speech in comprehension does not appear to be an obligatory process but is modulated by situational factors such as the amount of observed meaningful hand movements.

Journal ArticleDOI
TL;DR: Results suggest that responses in regions traditionally implicated in emotional processing and cognitive control are sensitive to rejection stimuli irrespective of RS, but that low RS individuals may activate prefrontal structures to regulate distress associated with viewing such images.
Abstract: Rejection sensitivity (RS) is the tendency to anxiously expect, readily perceive, and intensely react to rejection. This study used functional magnetic resonance imaging to explore whether individual differences in RS are mediated by differential recruitment of brain regions involved in emotional appraisal and/or cognitive control. High and low RS participants were scanned while viewing either representational paintings depicting themes of rejection and acceptance or nonrepresentational control paintings matched for positive or negative valence, arousal and interest level. Across all participants, rejection versus acceptance images activated regions of the brain involved in processing affective stimuli (posterior cingulate, insula), and cognitive control (dorsal anterior cingulate cortex; medial frontal cortex). Low and high RS individuals' responses to rejection versus acceptance images were not, however, identical. Low RS individuals displayed significantly more activity in left inferior and right dorsal frontal regions, and activity in these areas correlated negatively with participants' self-report distress ratings. In addition, control analyses revealed no effect of viewing negative versus positive images in any of the areas described above, suggesting that the aforementioned activations were involved in rejection-relevant processing rather than processing negatively valenced stimuli per se. Taken together, these findings suggest that responses in regions traditionally implicated in emotional processing and cognitive control are sensitive to rejection stimuli irrespective of RS, but that low RS individuals may activate prefrontal structures to regulate distress associated with viewing such images.

Journal ArticleDOI
TL;DR: The results provide an index of memory for face-scene relations, indicate the time by which retrieval and identification of these relations occur, and suggest that retrieval and use of relational memory depends critically on the hippocampus and occurs obligatorily, regardless of response requirements.
Abstract: Little is known about the mechanisms by which memory for relations is accomplished, or about the time course of the critical processes. Here, eye movement measures were used to examine the time course of subjects' access to and use of relational memory. In four experiments, participants studied faces superimposed on scenic backgrounds and were tested with three-face displays superimposed on the scenes viewed earlier. Participants exhibited disproportionate viewing of the face originally studied with the scene, compared to other equally familiar faces in the test display. When a preview of a previously viewed scene was provided, permitting expectancies about the to-be-presented face to emerge, disproportionate viewing was manifested within 500--750 msec after test display onset, more than a full second in advance of explicit behavioral responses, and occurred even when overt responses were not required. In the absence of preview, the viewing effects were delayed by approximately 1 sec. Relational memory effects were absent in the eye movement behavior of amnesic patients with hippocampal damage, suggesting that these effects depend critically on the hippocampal system. The results provide an index of memory for face-scene relations, indicate the time by which retrieval and identification of these relations occur, and suggest that retrieval and use of relational memory depends critically on the hippocampus and occurs obligatorily, regardless of response requirements.

Journal ArticleDOI
TL;DR: The progressive, unaware condition was associated with larger negative after-effects, transfer to the non-exposed hand for the visual and auditory pointing tasks, and greater robustness, but the amount of adaptation obtained remained lower than the exaggerated adaptive capacity seen in patients with neglect.
Abstract: Neglect patients exhibit both a lack of awareness for the spatial distortions imposed during visuomanual prism adaptation procedures, and exaggerated postadaptation negative after-effects. To better understand this unexpected adaptive capacity in brain-lesioned patients, we investigated the contribution of awareness for the optical shift to the development of prism adaptation. The lack of awareness found in neglect was simulated in a multiple-step group where healthy subjects remained unaware of the optical deviation because of its progressive stepwise increase from 2° to 10°. We contrasted this method with the classical single-step group in which subjects were aware of the visual shift because they were directly exposed to the full 10° shift. Because the number of pointing trials was identical in the two groups, the total amount of deviation exposure was 50% larger in the single-step group. Negative after-effects were examined with an open-loop pointing task performed with the adapted hand, and generalization was tested with open-loop pointing with the nonexposed hand to visual and auditory targets. The robustness of adaptation was assessed by an open-loop pointing task after a simple de-adaptation procedure. The progressive, unaware condition was associated with larger negative after-effects, transfer to the non-exposed hand for the visual and auditory pointing tasks, and greater robustness. The amount of adaptation obtained remained, nevertheless, lower than the exaggerated adaptive capacity seen in patients with neglect. Implications for the functional mechanisms and the anatomical substrates of prism adaptation are discussed.