
Showing papers in "Journal of Cognitive Neuroscience in 2003"


Journal ArticleDOI
TL;DR: Results suggest that TID represents reallocation of processing resources from areas in which TID occurs to areas involved in task performance, and short-term memory load and stimulus rate also predict suppression of spontaneous thought.
Abstract: Task-induced deactivation (TID) refers to a regional decrease in blood flow during an active task relative to a "resting" or "passive" baseline. We tested the hypothesis that TID results from a reallocation of processing resources by parametrically manipulating task difficulty within three factors: target discriminability, stimulus presentation rate, and short-term memory load. Subjects performed an auditory target detection task during functional magnetic resonance imaging (fMRI), responding to a single target tone or, in the short-term memory load conditions, to target sequences. Seven task conditions (a common version and two additional levels for each of the three factors) were each alternated with "rest" in a block design. Analysis of covariance identified brain regions in which TID occurred. Analyses of variance identified seven regions (left anterior cingulate/superior frontal gyrus, left middle frontal gyrus, right anterior cingulate gyrus, left and right posterior cingulate gyrus, left posterior parieto-occipital cortex, and right precuneus) in which TID magnitude varied across task levels within a factor. Follow-up tests indicated that for each of the three factors, TID magnitude increased with task difficulty. These results suggest that TID represents reallocation of processing resources from areas in which TID occurs to areas involved in task performance. Short-term memory load and stimulus rate also predict suppression of spontaneous thought, and many of the brain areas showing TID have been linked with semantic processing, supporting claims that TID may be due in part to suspension of spontaneous semantic processes that occur during "rest" (Binder et al., 1999). The concept that the typical "resting state" is actually a condition characterized by rich cognitive activity has important implications for the design and analysis of neuroimaging studies.

1,102 citations


Journal ArticleDOI
TL;DR: It is shown that weak direct currents are capable of improving implicit motor learning in the human and that the primary motor cortex is involved in the acquisition and early consolidation phase of implicit motor learning.
Abstract: Transcranially applied weak direct currents are capable of modulating motor cortical excitability in the human. Anodal stimulation enhances excitability, cathodal stimulation diminishes it. Cortical excitability changes accompany motor learning. Here we show that weak direct currents are capable of improving implicit motor learning in the human. During performance of a serial reaction time task, the primary motor cortex, premotor, or prefrontal cortices were stimulated contralaterally to the performing hand. Anodal stimulation of the primary motor cortex resulted in increased performance, whereas stimulation of the remaining cortices had no effect. We conclude that the primary motor cortex is involved in the acquisition and early consolidation phase of implicit motor learning.

967 citations


Journal ArticleDOI
Moshe Bar
TL;DR: This work proposes a specific mechanism for the activation of top-down facilitation during visual object recognition, and suggests that a partially analyzed version of the input image is projected rapidly from early visual areas directly to the prefrontal cortex (PFC) to be integrated with the bottom-up analysis.
Abstract: The majority of the research related to visual recognition has so far focused on bottom-up analysis, where the input is processed in a cascade of cortical regions that analyze increasingly complex information. Gradually more studies emphasize the role of top-down facilitation in cortical analysis, but it remains something of a mystery how such processing would be initiated. After all, top-down facilitation implies that high-level information is activated earlier than some relevant lower-level information. Building on previous studies, I propose a specific mechanism for the activation of top-down facilitation during visual object recognition. The gist of this hypothesis is that a partially analyzed version of the input image (i.e., a blurred image) is projected rapidly from early visual areas directly to the prefrontal cortex (PFC). This coarse representation activates in the PFC expectations about the most likely interpretations of the input image, which are then back-projected as an "initial guess" to the temporal cortex to be integrated with the bottom-up analysis. The top-down process facilitates recognition by substantially limiting the number of object representations that need to be considered. Furthermore, such a rapid mechanism may provide critical information when a quick response is necessary.

844 citations


Journal ArticleDOI
TL;DR: This paper used fMRI to identify human auditory regions with both sensory and motor response properties, analogous to single unit responses in known visuomotor integration areas, and found that a small set of areas in the superior temporal and temporal-parietal cortex responded both during the listening phase and the rehearsal/humming phase.
Abstract: The concept of auditory-motor interaction pervades speech science research, yet the cortical systems supporting this interface have not been elucidated. Drawing on experimental designs used in recent work in sensory-motor integration in the cortical visual system, we used fMRI in an effort to identify human auditory regions with both sensory and motor response properties, analogous to single-unit responses in known visuomotor integration areas. The sensory phase of the task involved listening to speech (nonsense sentences) or music (novel piano melodies); the "motor" phase of the task involved covert rehearsal/humming of the auditory stimuli. A small set of areas in the superior temporal and temporal-parietal cortex responded both during the listening phase and the rehearsal/humming phase. A left lateralized region in the posterior Sylvian fissure at the parietal-temporal boundary, area Spt, showed particularly robust responses to both phases of the task. Frontal areas also showed combined auditory + rehearsal responsivity consistent with the claim that the posterior activations are part of a larger auditory-motor integration circuit. We hypothesize that this circuit plays an important role in speech development as part of the network that enables acoustic-phonetic input to guide the acquisition of language-specific articulatory-phonetic gestures; this circuit may play a role in analogous musical abilities. In the adult, this system continues to support aspects of speech production, and, we suggest, supports verbal working memory.

603 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated whether it is possible to orient selective spatial attention to internal representations held in working memory in a similar fashion to orienting to perceptual stimuli; subjects were either cued to orient to a spatial location before a stimulus array was presented (pre-cue), cued to a spatial location in working memory after the array was presented (retro-cue), or given no cueing information (neutral cue).
Abstract: Three experiments investigated whether it is possible to orient selective spatial attention to internal representations held in working memory in a similar fashion to orienting to perceptual stimuli. In the first experiment, subjects were either cued to orient to a spatial location before a stimulus array was presented (pre-cue), cued to orient to a spatial location in working memory after the array was presented (retro-cue), or given no cueing information (neutral cue). The stimulus array consisted of four differently colored crosses, one in each quadrant. At the end of a trial, a colored cross (probe) was presented centrally, and subjects responded according to whether it had occurred in the array. There were equivalent patterns of behavioral costs and benefits of cueing for both pre-cues and retro-cues. A follow-up experiment used a peripheral probe stimulus requiring a decision about whether its color matched that of the item presented at the same location in the array. Replication of the behavioral costs and benefits of pre-cues and retro-cues in this experiment ruled out changes in response criteria as the only explanation for the effects. The third experiment used event-related potentials (ERPs) to compare the neural processes involved in orienting attention to a spatial location in an external versus an internal spatial representation. In this task, subjects responded according to whether a central probe stimulus occurred at the cued location in the array. There were both similarities and differences between ERPs to spatial cues toward a perception versus an internal spatial representation. Lateralized early posterior and later frontal negativities were observed for both pre- and retro-cues. Retro-cues also showed additional neural processes to be involved in orienting to an internal representation, including early effects over frontal electrodes.

559 citations


Journal ArticleDOI
TL;DR: What sets Dayan and Abbott’s book apart is that it lands right smack in the center of computational neuroscience, spanning the entire range from models at the cellular level, to those at the cognitive level, such as reinforcement learning.
Abstract: Every field of science relies on having its trusted sources of knowledge, the books that unite investigators with a common language and provide them with the basic toolbox for approaching problems. Physics, for instance, has its "Landau and Lifschitz"; electrical engineers routinely turn to "Horowitz and Hill"; and many a neuroscientist was brought up on "Kandel and Schwartz." Now, at last, the field of computational neuroscience has one of its own with the recent publication of Dayan and Abbott's Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. The emergence of this book represents more than the usual feat of textbook publication. It is a significant development for the field as a whole, because up to now there has been no single book that unites the basic methods and models of computational neuroscience in one place. Those who teach courses on computational models have mostly hobbled along by copying papers and chapters from assorted journals and books, or by writing their own elaborate lecture notes. While there exist several excellent texts on neural networks—such as Introduction to the Theory of Neural Computation (Hertz, Krogh, & Palmer), Neural Networks for Pattern Recognition (Bishop), and Neural Networks: A Comprehensive Foundation (Haykin)—they are mostly written from the perspective of engineering, math, or physics, and so they do not make serious connections to neuroscience. Others that do address brain function are either tilted more towards cognitive science, emphasizing higher-level aspects of brain function—such as Parallel Distributed Processing (McClelland & Rumelhart), An Introduction to Neural Networks (Anderson), and An Introduction to Natural Computation (Ballard)—or else towards lower-level cellular models—as in The Biophysics of Computation: Information Processing in Single Neurons (Koch), and Spikes: Exploring the Neural Code (Rieke et al.). What sets Dayan and Abbott's book apart is that it lands right smack in the center of computational neuroscience, spanning the entire range from models at the cellular level, such as ion channel kinetics, to those at the cognitive level, such as reinforcement learning. It also does a beautiful job explaining state-of-the-art techniques in neural coding, as well as recent advances in unsupervised learning models. And it does all of these with a level of depth and thoroughness that is impressive. There tend to be two camps in the field of computational neuroscience, and they are probably best exemplified by how they use the term "computation." In one, mathematical models are constructed primarily to describe or characterize existing data, and computation is used mainly as a means to simulate or analyze the data, in rather the same way as computational chemistry or computational fluid dynamics. In the other camp, computation is applied in a more theoretical manner, as a metaphor for what the brain is actually doing. The first six chapters of the book fall more in the first category, by using mathematical and computational techniques to characterize neural function. Some of these methods attempt to describe neural function in computational terms, but they are still primarily descriptive in nature.
These include methods for characterizing spike statistics, reverse correlation techniques for measuring receptive field properties, methods for decoding information contained in neural spike trains, and information theoretic techniques for measuring information capacity of neurons and coding efficiency. There are also two chapters covering detailed electrical models of neurons, including channel kinetics, synapses, and cable properties. All of these techniques are covered with the kind of nuts-and-bolts detail that will allow readers to begin implementing and experimenting with them in computer simulations. The last four chapters of the book fall into the second camp of computational neuroscience, presenting more abstract models that extrapolate beyond the available data. Experimentalists often recoil at the idea of entertaining such models, but they are every bit as essential as the descriptive models because they provide a theoretical framework for interpreting data and motivating future experiments. These chapters discuss recurrent network models with attractor dynamics, models of learning and adaptation, and theories of representation based on probabilistic models. Many of these topics are also covered in the more traditional books on neural networks, but the advantage of the presentation here is that it makes more direct contact with neuroscientific data. Also, by using terminology and mathematical notation that is consistent throughout the book, the authors

550 citations


Journal ArticleDOI
TL;DR: The results clarify the functional anatomy of the LIPC by demonstrating that anterior and posterior regions contribute to both semantic and phonological processing, albeit to different extents.
Abstract: The involvement of the left inferior prefrontal cortex (LIPC) in phonological processing is well established from both lesion-deficit studies with neurological patients and functional neuroimaging studies of normals. Its involvement in semantic processing, on the other hand, is less clear. Although many imaging studies have demonstrated LIPC activation during semantic tasks, this may be due to implicit phonological processing. This article presents two experiments investigating semantic functions in the LIPC. Results from a functional magnetic resonance imaging experiment demonstrated that both semantic and phonological processing activated a common set of areas within this region. In addition, there was a reliable increase in activation for semantic relative to phonological decisions in the anterior LIPC while the opposite comparison (phonological vs. semantic decisions) revealed an area of enhanced activation within the posterior LIPC. A second experiment used transcranial magnetic stimulation (TMS) to temporarily interfere with neural information processing in the anterior portion of the LIPC to determine whether this region was essential for normal semantic performance. Both repetitive and single pulse TMS significantly slowed subjects' reactions for the semantic but not for the perceptual control task. Our results clarify the functional anatomy of the LIPC by demonstrating that anterior and posterior regions contribute to both semantic and phonological processing, albeit to different extents. In addition, the findings go beyond simply establishing a correlation between semantic processing and activation in the LIPC and demonstrate that a transient disruption of processing selectively interfered with semantic processing.

515 citations


Journal ArticleDOI
TL;DR: The findings suggest that prefrontal structures play an important part in a network mediating the empathic response and specifically that the right ventromedial cortex has a unique role in integrating cognition and affect to produce the empathic response.
Abstract: Impaired empathic response has been described in patients following brain injury, suggesting that empathy may be a fundamental aspect of the social behavior disturbed by brain damage. However, the neuroanatomical basis of impaired empathy has not been studied in detail. The empathic response of patients with localized lesions in the prefrontal cortex (n = 25) was compared to responses of patients with posterior lesions (n = 17) and healthy control subjects (n = 19). To examine the cognitive processes that underlie the empathic ability, the relationships between empathy scores and the performance on tasks that assess processes of cognitive flexibility, affect recognition, and theory of mind (TOM) were also examined. Patients with prefrontal lesions, particularly when their damage included the ventromedial prefrontal cortex, were significantly impaired in empathy as compared to patients with posterior lesions and healthy controls. However, among patients with posterior lesions, those with damage to the right hemisphere were impaired, whereas those with left posterior lesions displayed empathy levels similar to healthy controls. Seven of nine patients with the most profound empathy deficit had a right ventromedial lesion. A differential pattern regarding the relationships between empathy and cognitive performance was also found: Whereas among patients with dorsolateral prefrontal damage empathy was related to cognitive flexibility but not to TOM and affect recognition, empathy scores in patients with ventromedial lesions were related to TOM but not to cognitive flexibility. Our findings suggest that prefrontal structures play an important part in a network mediating the empathic response and specifically that the right ventromedial cortex has a unique role in integrating cognition and affect to produce the empathic response.

466 citations


Journal ArticleDOI
TL;DR: It is confirmed that pitch processing is enhanced in high-functioning autism and, as predicted by the enhanced perceptual functioning model for peaks of ability in autism, autistic individuals outperform the typically developing population in a variety of low-level perceptual tasks.
Abstract: Past research has shown a superiority of participants with high-functioning autism over comparison groups in memorizing picture-pitch associations and in detecting pitch changes in melodies. A subset of individuals with autism, known as "musical savants," is also known to possess absolute pitch. This superiority might be due to an abnormally high sensitivity to fine-grained pitch differences in sounds. To test this hypothesis, psychoacoustic tasks were devised so as to use a signal detection methodology. Participants were all musically untrained and were divided into a group of 12 high-functioning individuals with autism and a group of 12 normally developing individuals. Their task was to judge the pitch of pure tones in a "same-different" discrimination task and in a "high-low" categorization task. In both tasks, the obtained psychometric functions revealed higher pitch sensitivity for subjects with autism, with a more pronounced advantage over control participants in the categorization task. These findings confirm that pitch processing is enhanced in "high-functioning" autism. Superior performance in pitch discrimination and categorization extends previous findings of enhanced visual performance to the auditory domain. Thus, and as predicted by the enhanced perceptual functioning model for peaks of ability in autism (Mottron & Burack, 2001), autistic individuals outperform the typically developing population in a variety of low-level perceptual tasks.
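As a purely illustrative aside, not drawn from the paper itself: the signal detection methodology mentioned above typically summarizes discrimination performance with a sensitivity index such as d', computed from hit and false-alarm rates. The sketch below uses made-up response counts and a standard log-linear correction, simply to show the form of the calculation.

```python
# Illustrative only: sensitivity index d' for a "same-different" discrimination
# task, computed from hypothetical hit/false-alarm counts.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Return d' from response counts, using a log-linear correction so that
    hit/false-alarm rates of exactly 0 or 1 do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one listener at one pitch-difference level:
print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```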

456 citations


Journal ArticleDOI
TL;DR: The lateral temporal cortex showed strong responses to both moving videos and moving point-light displays, supporting the hypothesis that the lateral temporal cortex is the cortical locus for processing complex visual motion.
Abstract: We used fMRI to study the organization of brain responses to different types of complex visual motion. In a rapid event-related design, subjects viewed video clips of humans performing different whole-body motions, video clips of manmade manipulable objects (tools) moving with their characteristic natural motion, point-light displays of human whole-body motion, and point-light displays of manipulable objects. The lateral temporal cortex showed strong responses to both moving videos and moving point-light displays, supporting the hypothesis that the lateral temporal cortex is the cortical locus for processing complex visual motion. Within the lateral temporal cortex, we observed segregated responses to different types of motion. The superior temporal sulcus (STS) responded strongly to human videos and human point-light displays, while the middle temporal gyrus (MTG) and the inferior temporal sulcus responded strongly to tool videos and tool point-light displays. In the ventral temporal cortex, the lateral fusiform responded more to human videos than to any other stimulus category, while the medial fusiform preferred tool videos. The relatively weak responses observed to point-light displays in the ventral temporal cortex suggest that form, color, and texture (present in video but not point-light displays) are the main contributors to ventral temporal activity. In contrast, in the lateral temporal cortex, the MTG responded as strongly to point-light displays as to videos, suggesting that motion is the key determinant of response in the MTG. Whereas the STS responded strongly to point-light displays, it showed an even larger response to video displays, suggesting that the STS integrates form, color, and motion information.

448 citations


Journal ArticleDOI
TL;DR: Functional brain imaging results confirm the hypothesis that quantity is represented by a common mechanism for both symbolic and nonsymbolic stimuli in IPS.
Abstract: The close behavioral parallels between the processing of quantitative information conveyed by symbolic and nonsymbolic stimuli led to the hypothesis that there exists a common cerebral representation of quantity (Dehaene, Dehaene-Lambertz, & Cohen, 1998). The neural basis underlying the encoding of number magnitude has been localized to regions in and around the intraparietal sulcus (IPS) by brain-imaging studies. However, it has never been demonstrated that these same regions are also involved in the quantitative processing of nonsymbolic stimuli. Using functional brain imaging, we explicitly tested the hypothesis of a common substrate. Angles, lines, and two-digit numbers were presented pairwise, one to the left and one to the right of the fixation point. In the three comparison tasks, participants (n = 18) pressed the key on the side of the largest quantity. In the three control tasks, they indicated the side on which dimming occurred. A conjunction analysis between the three subtractions (comparison task-control task) revealed a site in left IPS that is specifically responsive when two stimuli have to be compared quantitatively, irrespective of stimulus format. The results confirm the hypothesis that quantity is represented by a common mechanism for both symbolic and nonsymbolic stimuli in IPS. In addition, the interaction between task and type of stimulus identified a region anterior to the conjunction site, not specific for quantitative processing, but reflecting general processes loaded by number processing.

Journal ArticleDOI
TL;DR: An asymmetry in the interplay between syntax and semantics during on-line sentence comprehension is revealed, suggesting that semantic integration is influenced by syntactic processing.
Abstract: This study investigated the effects of combined semantic and syntactic violations in relation to the effects of single semantic and single syntactic violations on language-related event-related brain potential (ERP) effects (N400 and P600/SPS). Syntactic violations consisted of a mismatch in grammatical gender or number features of the definite article and the noun in sentence-internal or sentence-final noun phrases (NPs). Semantic violations consisted of semantically implausible adjective–noun combinations in the same NPs. Combined syntactic and semantic violations were a summation of these two respective violation types. ERPs were recorded while subjects read the sentences with the different types of violations and the correct control sentences. ERP effects were computed relative to ERPs elicited by the sentence-internal or sentence-final nouns. The size of the N400 effect to the semantic violation was increased by an additional syntactic violation (the syntactic boost). In contrast, the size of the P600/SPS to the syntactic violation was not affected by an additional semantic violation. This suggests that in the absence of syntactic ambiguity, the assignment of syntactic structure is independent of semantic context. However, semantic integration is influenced by syntactic processing. In the sentence-final position, additional global processing consequences were obtained as a result of earlier violations in the sentence. The resulting increase in the N400 amplitude to sentence-final words was independent of the nature of the violation. A speeded anomaly detection task revealed that it takes substantially longer to detect semantic than syntactic anomalies. These results are discussed in relation to the latency and processing characteristics of the N400 and P600/SPS effects. Overall, the results reveal an asymmetry in the interplay between syntax and semantics during on-line sentence comprehension.

Journal ArticleDOI
TL;DR: The results are consistent with the idea that a positivity with a posterior distribution across the scalp (posterior P600) is an index of syntactic processing difficulty, including repair and revision, and that a frontally distributed positivity is related to ambiguity resolution and/or to an increase in discourse level complexity.
Abstract: One of the core aspects of human sentence processing is the ability to detect errors and to recover from erroneous analysis through revision of ambiguous sentences and repair of ungrammatical sentences. In the present study, we used event-related potentials (ERPs) to help identify the nature of these processes by directly comparing ERPs to complex ambiguous sentence structures with and without grammatical violations, and to simpler unambiguous sentence structures with and without grammatical violations. In ambiguous sentences, preference of syntactic analysis was manipulated such that in one condition, the structures agreed with the preferred analysis, and in another condition, a nonpreferred but syntactically correct analysis (garden path) was imposed. Nonpreferred ambiguous structures require revision, whereas ungrammatical structures require repair. We found that distinct ERPs reflected different characteristics of syntactic processing. Specifically, our results are consistent with the idea that a positivity with a posterior distribution across the scalp (posterior P600) is an index of syntactic processing difficulty, including repair and revision, and that a frontally distributed positivity (frontal P600) is related to ambiguity resolution and/or to an increase in discourse level complexity.

Journal ArticleDOI
TL;DR: These activations were insensitive to retrieval task, suggesting that visually presented tools automatically recruit both left VPMCx and left PMTG in response to action features that are inherent in tool representations.
Abstract: PET was used to investigate the neural correlates of action knowledge in object representations, particularly the left lateralized network of activations previously implicated in the processing of tools and their associated actions: ventral premotor cortex (VPMCx), posterior middle temporal gyrus (PMTG), and intraparietal sulcus (IPS). Judgments were made about the actions and functions associated with manipulable man-made objects (e.g., hammer); this enabled us to measure activations in response to both explicit and implicit retrieval of knowledge about actions associated with manipulable tools. Function judgments were also made about nonmanipulable artifacts (e.g., traffic light) providing a direct comparison for manipulable objects. Although neither the left VPMCx nor the left PMTG were selective for tool stimuli (nonmanipulable objects also activated these areas relative to a visual control condition), both regions responded more strongly to manipulable objects, suggesting a role for these cortical areas in the processing of knowledge associated with tools. Furthermore, these activations were insensitive to retrieval task, suggesting that visually presented tools automatically recruit both left VPMCx and left PMTG in response to action features that are inherent in tool representations. In contrast, the IPS showed clear selectivity for explicit retrieval of action information about manipulable objects. No regions of cortex were more activated by function relative to action judgments about artifacts. These results are consistent with the brain's preferential responsiveness to how we interact with objects, rather than what they are used for.

Journal ArticleDOI
TL;DR: Although TMS can help bridge the gap between psychological models and brain-based arguments of cognitive functions, hypothesis-driven carefully designed experiments that acknowledge the current limitations of TMS are critical.
Abstract: The application of transcranial magnetic stimulation (TMS) to investigate important questions in cognitive neuroscience has increased considerably in the last few years. TMS can provide substantial insights into the nature and the chronometry of the computations performed by specific cortical areas during various aspects of cognition. However, the use of TMS in cognitive studies has many potential perils and pitfalls. Although TMS can help bridge the gap between psychological models and brain-based arguments of cognitive functions, hypothesis-driven carefully designed experiments that acknowledge the current limitations of TMS are critical.

Journal ArticleDOI
TL;DR: In this paper, functional magnetic resonance imaging (fMRI) was used to explore the extent to which the cortical circuitry accessed by natural silent seen speech is activated when seen speech is deprived of its time-varying characteristics; hearing participants were instructed to look for a prespecified visible speech target sequence ("voo" or "ahv") among other monosyllables.
Abstract: Speech is perceived both by ear and by eye. Unlike heard speech, some seen speech gestures can be captured in stilled image sequences. Previous studies have shown that in hearing people, natural time-varying silent seen speech can access the auditory cortex (left superior temporal regions). Using functional magnetic resonance imaging (fMRI), the present study explored the extent to which this circuitry was activated when seen speech was deprived of its time-varying characteristics. In the scanner, hearing participants were instructed to look for a prespecified visible speech target sequence ("voo" or "ahv") among other monosyllables. In one condition, the image sequence comprised a series of stilled key frames showing apical gestures (e.g., separate frames for "v" and "oo" [from the target] or "ee" and "m" [i.e., from nontarget syllables]). In the other condition, natural speech movement of the same overall segment duration was seen. In contrast to a baseline condition in which the letter "V" was superimposed on a resting face, stilled speech face images generated activation in posterior cortical regions associated with the perception of biological movement, despite the lack of apparent movement in the speech image sequence. Activation was also detected in traditional speech-processing regions including the left inferior frontal (Broca's) area, left superior temporal sulcus (STS), and left supramarginal gyrus (the dorsal aspect of Wernicke's area). Stilled speech sequences also generated activation in the ventral premotor cortex and anterior inferior parietal sulcus bilaterally. Moving faces generated significantly greater cortical activation than stilled face sequences, and in similar regions. However, a number of differences between stilled and moving speech were also observed. In the visual cortex, stilled faces generated relatively more activation in primary visual regions (V1/V2), while visual movement areas (V5/MT+) were activated to a greater extent by moving faces. Cortical regions activated more by naturally moving speaking faces included the auditory cortex (Brodmann's Areas 41/42; lateral parts of Heschl's gyrus) and the left STS and inferior frontal gyrus. Seen speech with normal time-varying characteristics appears to have preferential access to "purely" auditory processing regions specialized for language, possibly via acquired dynamic audiovisual integration mechanisms in STS. When seen speech lacks natural time-varying characteristics, access to speech-processing systems in the left temporal lobe may be achieved predominantly via action-based speech representations, realized in the ventral premotor cortex.

Journal ArticleDOI
TL;DR: While 65% of a total of 299 sleep mentation reports were judged to reflect aspects of recent waking life experiences, the episodic replay of waking events was found in no more than 1-2% of the dream reports, consistent with evidence that sleep has no role in episodic memory consolidation.
Abstract: The activity that takes place in memory systems during sleep is likely to be related to the role of sleep in memory consolidation and learning, as well as to the generation of dream hallucinations. This study addressed the often-stated hypothesis that replay of whole episodic memories contributes to the multimodal hallucinations of sleep. Over a period of 14 days, 29 subjects kept a log of daytime activities, events, and concerns, wrote down any recalled dreams, and scored the dreams for incorporation of any waking experiences. While 65% of a total of 299 sleep mentation reports were judged to reflect aspects of recent waking life experiences, the episodic replay of waking events was found in no more than 1-2% of the dream reports. This finding has implications for understanding the unique memory processing that takes place during the night and is consistent with evidence that sleep has no role in episodic memory consolidation.

Journal ArticleDOI
TL;DR: The results illustrate that pseudowords place increased demands on areas that have previously been linked to lexical retrieval, and highlight the importance of including one or more baselines to qualify word type effects.
Abstract: Several functional neuroimaging studies have compared words and pseudowords to test different cognitive models of reading. There are difficulties with this approach, however, because cognitive models do not make clear-cut predictions at the neural level. Therefore, results can only be interpreted on the basis of prior knowledge of cognitive anatomy. Furthermore, studies comparing words and pseudowords have produced inconsistent results. The inconsistencies could reflect false-positive results due to the low statistical thresholds applied or confounds from nonlexical aspects of the stimuli. Alternatively, they may reflect true effects that are inconsistent across subjects; dependent on experimental parameters such as stimulus rate or duration; or not replicated across studies because of insufficient statistical power. In this fMRI study, we investigate consistent and inconsistent differences between word and pseudoword reading in 20 subjects, and distinguish between effects associated with increases and decreases in activity relative to fixation. In addition, the interaction of word type with stimulus duration is explored. We find that words and pseudowords activate the same set of regions relative to fixation, and within this system, there is greater activation for pseudowords than words in the left frontal operculum, left posterior inferior temporal gyrus, and the right cerebellum. The only effects of words relative to pseudowords consistent over subjects are due to decreases in activity for pseudowords relative to fixation; and there are no significant interactions between word type and stimulus duration. Finally, we observe inconsistent but highly significant effects of word type at the individual subject level. These results (i) illustrate that pseudowords place increased demands on areas that have previously been linked to lexical retrieval, and (ii) highlight the importance of including one or more baselines to qualify word type effects. Furthermore, (iii) they suggest that inconsistencies observed in the previous literature may result from effects arising from a small number of subjects only.

Journal ArticleDOI
TL;DR: The results show neural correlates of access to specific word information, with robust differences in activation by words and word-like nonwords, and the absence of facilitatory lexical neighborhood effects on activation in these brain regions argues for an interpretation in terms of semantic access.
Abstract: People can discriminate real words from nonwords even when the latter are orthographically and phonologically word-like, presumably because words activate specific lexical and/or semantic information. We investigated the neural correlates of this identification process using event-related functional magnetic resonance imaging (fMRI). Participants performed a visual lexical decision task under conditions that encouraged specific word identification: Nonwords were matched to words on orthographic and phonologic characteristics, and accuracy was emphasized over speed. To identify neural responses associated with activation of nonsemantic lexical information, processing of words and nonwords with many lexical neighbors was contrasted with processing of items with no neighbors. The fMRI data showed robust differences in activation by words and word-like nonwords, with stronger word activation occurring in a distributed, left hemisphere network previously associated with semantic processing, and stronger nonword activation occurring in a posterior inferior frontal area previously associated with grapheme-to-phoneme mapping. Contrary to lexicon-based models of word recognition, there were no brain areas in which activation increased with neighborhood size. For words, activation in the left prefrontal, angular gyrus, and ventrolateral temporal areas was stronger for items without neighbors, probably because accurate responses to these items were more dependent on activation of semantic information. The results show neural correlates of access to specific word information. The absence of facilitatory lexical neighborhood effects on activation in these brain regions argues for an interpretation in terms of semantic access. Because subjects performed the same task throughout, the results are unlikely to be due to task-specific attentional, strategic, or expectancy effects.

Journal ArticleDOI
TL;DR: The fact that both increased activation of task-specific areas and increased deactivation of task-irrelevant areas mediate cognitive functions underlying good RVIP task performance suggests that two independent circuits, presumably reflecting different cognitive strategies, can be recruited to perform this vigilance task.
Abstract: Sustained attention deficits occur in several neuropsychiatric disorders. However, the underlying neurobiological mechanisms are still incompletely understood. To that end, functional MRI was used to investigate the neural substrates of sustained attention (vigilance) using the rapid visual information processing (RVIP) task in 25 healthy volunteers. In order to better understand the neural networks underlying attentional abilities, brain regions where task-induced activation correlated with task performance were identified. Performance of the RVIP task activated a network of frontal, parietal, occipital, thalamic, and cerebellar regions. Deactivation during task performance was seen in the anterior and posterior cingulate, insula, and the left temporal and parahippocampal gyrus. Good task performance, as defined by better detection of target stimuli, was correlated with enhanced activation in predominantly right fronto-parietal regions and with decreased activation in predominantly left temporo-limbic and cingulate areas. Factor analysis revealed that these performance-correlated regions were grouped into two separate networks comprised of positively activated and negatively activated intercorrelated regions. Poor performers failed to significantly activate or deactivate these networks, whereas good performers either activated the positive or deactivated the negative network, or did both. The fact that both increased activation of task-specific areas and increased deactivation of task-irrelevant areas mediate cognitive functions underlying good RVIP task performance suggests two independent circuits, presumably reflecting different cognitive strategies, can be recruited to perform this vigilance task.

Journal ArticleDOI
TL;DR: The first assessment of motion sensitivity for persons with autism and normal intelligence using motion patterns that require neural processing mechanisms of varying complexity demonstrates that the motion sensitivity of observers with autism is similar to that of nonautistic observers for different types of first-order motion stimuli, but significantly decreased for the same types of second-order stimuli.
Abstract: We present the first assessment of motion sensitivity for persons with autism and normal intelligence using motion patterns that require neural processing mechanisms of varying complexity. Compared to matched controls, our results demonstrate that the motion sensitivity of observers with autism is similar to that of nonautistic observers for different types of first-order (luminance-defined) motion stimuli, but significantly decreased for the same types of second-order (texture-defined) stimuli. The latter class of motion stimuli has been demonstrated to require additional neural computation to be processed adequately. This finding may reflect less efficient integrative functioning of the neural mechanisms that mediate visuoperceptual processing in autism. The contribution of this finding with regards to abnormal perceptual integration in autism, its effect on cognitive operations, and possible behavioral implications are discussed.

Journal ArticleDOI
TL;DR: Performing an object-matching task during the scan significantly improved the ability to predict objects from controls, but had minimal effect on object classification, suggesting that the task-based attentional benefit was non-specific to object categories.
Abstract: Object perception has been a subject of extensive fMRI studies in recent years. Yet the nature of the cortical representation of objects in the human brain remains controversial. Analyses of fMRI data have traditionally focused on the activation of individual voxels associated with presentation of various stimuli. The current analysis approaches functional imaging data as collective information about the stimulus. Linking activity in the brain to a stimulus is treated as a pattern-classification problem. Linear discriminant analysis was used to reanalyze a set of data originally published by Ishai et al. (2000), available from the fMRIDC (accession no. 2-2000-1113D). Results of the new analysis reveal that patterns of activity that distinguish one category of objects from other categories are largely independent of one another, both in terms of the activity and spatial overlap. The information used to detect objects from phase-scrambled control stimuli is not essential in distinguishing one object category from another. Furthermore, performing an object-matching task during the scan significantly improved the ability to predict objects from controls, but had minimal effect on object classification, suggesting that the task-based attentional benefit was non-specific to object categories.
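As an illustrative aside, not the authors' actual pipeline or the fMRIDC dataset: the reanalysis described above treats "which object category was shown" as a supervised pattern-classification problem over voxel activity. The minimal sketch below assumes synthetic voxel patterns and scikit-learn's linear discriminant analysis with cross-validation, simply to show the general form of such an analysis.

```python
# Illustrative only: linear discriminant analysis as a pattern classifier over
# (synthetic) voxel activity patterns, evaluated with cross-validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_categories = 120, 50, 4

# Synthetic data: each object category has a weak, distinct mean voxel pattern,
# buried in trial-to-trial noise.
labels = np.repeat(np.arange(n_categories), n_trials // n_categories)
category_patterns = rng.normal(0.0, 1.0, size=(n_categories, n_voxels))
X = category_patterns[labels] + rng.normal(0.0, 3.0, size=(n_trials, n_voxels))

# Cross-validated classification accuracy; chance level is 1 / n_categories.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean accuracy: {scores.mean():.2f} (chance = {1 / n_categories:.2f})")
```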

Journal ArticleDOI
TL;DR: The reduced activity in the temporal lobe suggests that the perception of the prime word activates a lexical semantic network that shares common elements with the target word, and, thus, the target can be recognized with enhanced neural efficiency.
Abstract: The neural basis underlying implicit semantic priming was investigated using event-related fMRI. Prime-target pairs were presented auditorily for lexical decision (LD) on the target stimulus, which was either semantically related or unrelated to the prime, or was a nonword. A tone task was also administered as a control. Behaviorally, all participants demonstrated semantic priming in the LD task. fMRI results showed that for all three conditions of the LD task, activation was seen in the superior temporal gyrus (STG), the middle temporal gyrus (MTG), and the inferior parietal lobe, with greater activation in the unrelated and nonword conditions than in the related condition. Direct comparisons of the related and unrelated conditions revealed foci in the left STG, left precentral gyrus, left and right MTGs, and right caudate, exhibiting significantly lower activation levels in the related condition. The reduced activity in the temporal lobe suggests that the perception of the prime word activates a lexical-semantic network that shares common elements with the target word, and, thus, the target can be recognized with enhanced neural efficiency. The frontal lobe reductions most likely reflect the increased efficiency in monitoring the activation of lexical representations in the temporal lobe, making a decision, and planning the appropriate motor response.

Journal ArticleDOI
TL;DR: This paper found that morphosyntactic and pragmatic violations elicited significant P600 and N400 effects, respectively, replicating previous ERP studies that have established qualitative differences in processing conceptual and syntactic anomalies.
Abstract: The aim of this study was to gain further insights into how the brain distinguishes between meaning and syntax during language comprehension. Participants read and made plausibility judgments on sentences that were plausible, morphosyntactically anomalous, or pragmatically anomalous. In an event-related potential (ERP) experiment, morphosyntactic and pragmatic violations elicited significant P600 and N400 effects, respectively, replicating previous ERP studies that have established qualitative differences in processing conceptual and syntactic anomalies. Our main focus was a functional magnetic resonance imaging (fMRI) study in which the same subjects read the same sentences presented in the same pseudorandomized sequence while performing the same task as in the ERP experiment. Rapid-presentation event-related fMRI methods allowed us to estimate the hemodynamic response at successive temporal windows as the sentences unfolded word by word, without assumptions about the shape of the underlying response function. Relative to nonviolated sentences, the pragmatic anomalies were associated with an increased hemodynamic response in left temporal and inferior frontal regions and a decreased response in the right medial parietal cortex. Relative to nonviolated sentences, the morphosyntactic anomalies were associated with an increased response in bilateral medial and lateral parietal regions and a decreased response in left temporal and inferior frontal regions. Thus, overlapping neural networks were modulated in opposite directions to the two types of anomaly. These fMRI findings document both qualitative and quantitative differences in how the brain distinguishes between these two types of anomalies. This suggests that morphosyntactic and pragmatic information can be processed in different ways but by the same neural systems.

Journal ArticleDOI
TL;DR: Event-related functional magnetic resonance imaging of the human brain showed greater prestimulus preparatory activity in the pre-supplementary motor area before voluntary antisaccades compared with reflexive prosaccades, illustrating a mechanism for top-down control over reflexive behavior.
Abstract: The dynamic interplay between reflexive and controlled determinants of behavior is one of the most general organizing principles of brain function. A powerful analogue of this interplay is seen in the antisaccade task, which pits reflexive and willed saccadic mechanisms against one another. Event-related functional magnetic resonance imaging of the human brain showed greater prestimulus preparatory activity in the pre-supplementary motor area before voluntary antisaccades (saccades away from a target) compared with reflexive prosaccades (saccades to a target). Moreover, this preparatory activity was critically associated with reflex suppression; it predicted whether the reflex was later successfully inhibited in the trial. These data illustrate a mechanism for top-down control over reflexive behavior.

Journal ArticleDOI
TL;DR: In this article, event-related fMRI was used to assess neural activity in the prefrontal cortex (PFC) and fusiform face area (FFA) of subjects performing a delay-recognition task for faces.
Abstract: Interactions between prefrontal cortex (PFC) and stimulus-specific visual cortical association areas are hypothesized to mediate visual working memory in behaving monkeys. To clarify the roles for homologous regions in humans, event-related fMRI was used to assess neural activity in PFC and fusiform face area (FFA) of subjects performing a delay-recognition task for faces. In both PFC and FFA, activity increased parametrically with memory load during encoding and maintenance of face stimuli, despite quantitative differences in the magnitude of activation. Moreover, timing differences in PFC and FFA activation during memory encoding and retrieval implied a context dependence in the flow of neural information. These results support existing neurophysiological models of visual working memory developed in the nonhuman primate.

Journal ArticleDOI
TL;DR: This is the first time that a single task has been used to demonstrate a double dissociation between the associative learning impairments caused by hippocampal versus basal ganglia damage/dysfunction, and has implications for understanding the distinct contributions of the medial temporal lobe and basal Ganglia to learning and memory.
Abstract: Based on prior animal and computational models, we propose a double dissociation between the associative learning deficits observed in patients with medial temporal (hippocampal) damage versus patients with Parkinson's disease (basal ganglia dysfunction). Specifically, we expect that basal ganglia dysfunction may result in slowed learning, while individuals with hippocampal damage may learn at normal speed. However, when challenged with a transfer task where previously learned information is presented in novel recombinations, we expect that hippocampal damage will impair generalization but basal ganglia dysfunction will not. We tested this prediction in a group of healthy elderly with mild-to-moderate hippocampal atrophy, a group of patients with mild Parkinson's disease, and healthy controls, using an "acquired equivalence" associative learning task. As predicted, Parkinson's patients were slower on the initial learning but then transferred well, while the hippocampal atrophy group showed the opposite pattern: good initial learning with impaired transfer. To our knowledge, this is the first time that a single task has been used to demonstrate a double dissociation between the associative learning impairments caused by hippocampal versus basal ganglia damage/dysfunction. This finding has implications for understanding the distinct contributions of the medial temporal lobe and basal ganglia to learning and memory.

Journal ArticleDOI
TL;DR: It is concluded that the frontal N2 ERP and lateral PFC activation are not markers for withholding an immediate response or switching tasks per se, but are associated with switching into a response-suppression mode.
Abstract: We investigated the extent to which a common neural mechanism is involved in task set-switching and response withholding, factors that are frequently confounded in task-switching and go/no-go paradigms. Subjects' brain activity was measured using event-related electrical potentials (ERPs) and event-related functional MRI (fMRI) neuroimaging in separate studies using the same cognitive paradigm. Subjects made compatible left/right keypress responses to left/right arrow stimuli of 1000 msec duration; they switched every two trials between responding at stimulus onset (GO task—green arrows) and stimulus offset (WAIT task—red arrows). Withholding an immediate response (WAIT vs. GO) elicited an enhancement of the frontal N2 ERP and lateral PFC activation of the right hemisphere, both previously associated with the "no-go" response, but only on switch trials. Task-switching (switch vs. nonswitch) was associated with frontal N2 amplification and right hemisphere ventrolateral PFC activation, but only for the WAIT task. The anterior cingulate cortex (ACC) was the only brain region to be activated for both types of task switch, but this activation was located more rostrally for the WAIT than for the GO switch trials. We conclude that the frontal N2 ERP and lateral PFC activation are not markers for withholding an immediate response or switching tasks per se, but are associated with switching into a response-suppression mode. Different regions within the ACC may be involved in two processes integral to task-switching: processing response conflict (rostral ACC) and overcoming prior response suppression (caudal ACC).

Journal ArticleDOI
TL;DR: The results indicate that cortical activity during reasoning depends on the nature of verbal relations, and all relations elicit mental models that underlie reasoning, but visual relations in addition elicit visual images.
Abstract: The goal of this study was to investigate the neurocognitive processes of mental imagery in deductive reasoning. Behavioral studies yielded four sorts of verbal relations: (1) visuospatial relations that are easy to envisage both visually and spatially; (2) visual relations that are easy to envisage visually but hard to envisage spatially; (3) spatial relations that are hard to envisage visually but easy to envisage spatially; and (4) control relations that are hard to envisage both visually and spatially. In three experiments, visual relations slowed the process of reasoning in comparison with control relations, whereas visuospatial and spatial relations yielded inferences comparable to those of control relations. An experiment using functional magnetic resonance imaging showed that in the absence of any correlated visual input (problems were presented acoustically via headphones), all types of reasoning problems evoked activity in the left middle temporal gyrus, in the right superior parietal cortex, and bilaterally in the precuneus. In the prefrontal cortex, increased activity was found in the middle and inferior frontal gyri. However, only the problems based on visual relations also activated areas of the visual association cortex corresponding to V2. The results indicate that cortical activity during reasoning depends on the nature of verbal relations. All relations elicit mental models that underlie reasoning, but visual relations in addition elicit visual images. This account resolves inconsistencies in the previous literature.

Journal ArticleDOI
TL;DR: The results provide evidence for the production-monitoring hypothesis and clarify the role of different brain regions typically activated in PET and functional magnetic resonance imaging (fMRI) studies of episodic retrieval.
Abstract: We propose a new hypothesis concerning the lateralization of prefrontal cortex (PFC) activity during verbal episodic memory retrieval. The hypothesis states that the left PFC is differentially more involved in semantically guided information production than is the right PFC, and that the right PFC is differentially more involved in monitoring and verification than is the left PFC. This "production-monitoring hypothesis" differs from the existing "systematic-heuristic hypothesis," which proposes that the left PFC is primarily involved in systematic retrieval operations, and the right PFC in heuristic retrieval operations. To compare the two hypotheses, we measured PFC activity using positron emission tomography (PET) during the performance of four episodic retrieval tasks: stem cued recall, associative cued recall, context recognition (source memory), and item recognition. Recall tasks emphasized production processes, whereas recognition tasks emphasized monitoring processes. Stem cued recall and context-recognition tasks underscored systematic operations, whereas associative cued recall and item-recognition tasks underscored heuristic operations. Consistent with the production-monitoring hypothesis, the left PFC was more activated for recall than for recognition tasks and the right PFC was more activated for recognition than for recall tasks. Inconsistent with the systematic-heuristic hypothesis, the left PFC was more activated for heuristic than for systematic tasks and the right PFC showed the converse result. Additionally, the study yielded activation differences outside the PFC. In agreement with a previous recall/recognition PET study, anterior cingulate, cerebellar, and striatal regions were more activated for recall than for recognition tasks, and the converse occurred for posterior parietal regions. A right medial temporal lobe region was more activated for stem cued recall and context recognition than for associative cued recall and item recognition, possibly reflecting perceptual integration. In sum, the results provide evidence for the production-monitoring hypothesis and clarify the role of different brain regions typically activated in PET and functional magnetic resonance imaging (fMRI) studies of episodic retrieval.