
Showing papers in "Journal of Cognitive Neuroscience in 2001"


Journal ArticleDOI
TL;DR: It is shown that visual categorization of a natural scene involves different mechanisms with different time courses: a perceptual, task-independent mechanism, followed by a task-related, category-independent process.
Abstract: Experiments investigating the mechanisms involved in visual processing often fail to separate low-level encoding mechanisms from higher-level behaviorally relevant ones. Using an alternating dual-task event-related potential (ERP) experimental paradigm (animals or vehicles categorization) where targets of one task are intermixed among distractors of the other, we show that visual categorization of a natural scene involves different mechanisms with different time courses: a perceptual, task-independent mechanism, followed by a task-related, category-independent process. Although average ERP responses reflect the visual category of the stimulus shortly after visual processing has begun (e.g. 75–80 msec), this difference is not correlated with the subject's behavior until 150 msec poststimulus.

694 citations


Journal ArticleDOI
TL;DR: It is suggested that amygdala dysfunction in autism might contribute to an impaired ability to link visual perception of socially relevant stimuli with retrieval of social knowledge and with elicitation of social behavior.
Abstract: Autism has been thought to be characterized, in part, by dysfunction in emotional and social cognition, but the pathology of the underlying processes and their neural substrates remain poorly understood. Several studies have hypothesized that abnormal amygdala function may account for some of the impairments seen in autism, specifically, impaired recognition of socially relevant information from faces. We explored this issue in eight high-functioning subjects with autism in four experiments that assessed recognition of emotional and social information, primarily from faces. All tasks used were identical to those previously used in studies of subjects with bilateral amygdala damage, permitting direct comparisons. All subjects with autism made abnormal social judgments regarding the trustworthiness of faces; however, all were able to make normal social judgments from lexical stimuli, and all had a normal ability to perceptually discriminate the stimuli. Overall, these data from subjects with autism show some parallels to those from neurological subjects with focal amygdala damage. We suggest that amygdala dysfunction in autism might contribute to an impaired ability to link visual perception of socially relevant stimuli with retrieval of social knowledge and with elicitation of social behavior.

604 citations


Journal ArticleDOI
TL;DR: Functional magnetic resonance imaging (fMRI) shows that the human auditory cortex displays a similar hierarchical organization: pure tones activate primarily the core, whereas belt areas prefer complex sounds, such as narrow-band noise bursts.
Abstract: The concept of hierarchical processing - that the sensory world is broken down into basic features later integrated into more complex stimulus preferences - originated from investigations of the visual cortex. Recent studies of the auditory cortex in nonhuman primates revealed a comparable architecture, in which core areas, receiving direct input from the thalamus, in turn, provide input to a surrounding belt. Here functional magnetic resonance imaging (fMRI) shows that the human auditory cortex displays a similar hierarchical organization: pure tones (PTs) activate primarily the core, whereas belt areas prefer complex sounds, such as narrow-band noise bursts.

455 citations


Journal ArticleDOI
TL;DR: The model accounts well for the effects upon saccadic reaction time (SRT) of fixation removal, the presence of distractors, execution of pro- versus antisaccades, and variation in target probability, and suggests a possible mechanism for the generation of express saccades.
Abstract: Significant advances in cognitive neuroscience can be achieved by combining techniques used to measure behavior and brain activity with neural modeling. Here we apply this approach to the initiation of rapid eye movements (saccades), which are used to redirect the visual axis to targets of interest. It is well known that the superior colliculus (SC) in the midbrain plays a major role in generating saccadic eye movements, and physiological studies have provided important knowledge of the activity pattern of neurons in this structure. Based on the observation that the SC receives localized sensory (exogenous) and voluntary (endogenous) inputs, our model assumes that this information is integrated by dynamic competition across local collicular interactions. The model accounts well for the effects upon saccadic reaction time (SRT) due to removal of fixation, the presence of distractors, execution of pro- versus antisaccades, and variation in target probability, and suggests a possible mechanism for the generation of express saccades. In each of these cases, the activity patterns of "neurons" within the model closely resemble actual cell behavior in the intermediate layer of the SC. The interaction structure we employ is instrumental for producing a physiologically faithful model and results in new insights and hypotheses regarding the neural mechanisms underlying saccade initiation.

451 citations
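
The competitive-integration account summarized above lends itself to a toy simulation. The sketch below is a deliberately minimal two-node reduction, not the published model: a fixation node and a target node accumulate input, leak, and inhibit one another, and a saccade is triggered when the target node crosses threshold. All parameter values, and the collapse of the collicular map to two nodes, are illustrative assumptions.

```python
import numpy as np

def simulate_srt(fixation_on=True, target_drive=1.0, fixation_drive=0.5,
                 inhibition=0.15, leak=0.1, threshold=8.0, dt=1.0,
                 noise=0.05, max_t=500, seed=1):
    """Steps until the target node crosses threshold (a proxy for SRT)."""
    rng = np.random.default_rng(seed)
    target = 0.0
    # fixation node sits at its pre-target steady state while still driven
    fixation = fixation_drive / leak if fixation_on else 0.0
    for t in range(max_t):
        d_tgt = -leak * target + target_drive - inhibition * fixation
        d_fix = (-leak * fixation - inhibition * target
                 + (fixation_drive if fixation_on else 0.0))
        target = max(0.0, target + dt * d_tgt + noise * rng.standard_normal())
        fixation = max(0.0, fixation + dt * d_fix + noise * rng.standard_normal())
        if target >= threshold:
            return t
    return max_t

# Removing fixation input ("gap" condition) shortens the simulated SRT.
print("overlap:", simulate_srt(fixation_on=True))
print("gap:    ", simulate_srt(fixation_on=False))
```

With these assumed parameters the gap condition yields a markedly shorter SRT; distractor or antisaccade conditions would be modeled by adding further competing nodes to the same dynamic competition.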


Journal ArticleDOI
TL;DR: The results suggest that early face processing in the human brain is subserved by a multiple-component neural system in which both whole-face configurations and face parts are processed.
Abstract: The range of specificity and the response properties of the extrastriate face area were investigated by comparing the N170 event-related potential (ERP) component elicited by photographs of natural faces, realistically painted portraits, sketches of faces, schematic faces, and by nonface meaningful and meaningless visual stimuli. Results showed that the N170 distinguished between faces and nonface stimuli when the concept of a face was clearly rendered by the visual stimulus, but it did not distinguish among different face types: Even a schematic face made from simple line fragments triggered the N170. However, in a second experiment, inversion seemed to have a different effect on natural faces in which face components were available and on the pure gestalt-based schematic faces: The N170 amplitude was enhanced when natural faces were presented upside down but reduced when schematic faces were inverted. Inversion delayed the N170 peak latency for both natural and schematic faces. Together, these results suggest that early face processing in the human brain is subserved by a multiple-component neural system in which both whole-face configurations and face parts are processed. The relative involvement of the two perceptual processes is probably determined by whether the physiognomic value of the stimuli depends upon holistic configuration, or whether the individual components can be associated with faces even when presented outside the face context.

394 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used positron emission tomography (PET) to test the hypothesis that a left parietal area, the supramarginal gyrus, is important for attention in relation to limb movements.
Abstract: It is widely agreed that visuospatial orienting attention depends on a network of frontal and parietal areas in the right hemisphere. It is thought that the visuospatial orienting role of the right parietal lobe is related to its role in the production of overt eye movements. The experiments reported here test the possibility that other parietal regions may be important for directing attention in relation to response modalities other than eye movement. Specifically, we used positron emission tomography (PET) to test the hypothesis that a left parietal area, the supramarginal gyrus, is important for attention in relation to limb movements (Rushworth et al., 1997; Rushworth, Ellison, & Walsh, in press). We have referred to this process as 'motor attention' to distinguish it from orienting attention. In one condition, subjects spent most of the scanning period covertly attending to left-hand movements that they were about to make. Activity in this first condition was compared with a second condition with identical stimuli and movement responses but lacking motor attention periods. Comparison of the conditions revealed that motor attention-related activity was almost exclusively restricted to the left hemisphere despite the fact that subjects only ever made ipsilateral, left-hand responses. Left parietal activity was prominent in this comparison; within the parietal lobe, the critical region for motor attention was the supramarginal gyrus and the adjacent anterior intraparietal sulcus (AIP), a region anterior to the posterior parietal cortex identified with orienting attention. In a second part of the experiment we compared a condition in which subjects covertly rehearsed verbal responses with a condition in which they made verbal responses immediately without rehearsal. A comparison of the two conditions revealed verbal rehearsal-related activity in several anterior left hemisphere areas including Broca's area. The lack of verbal rehearsal-related activity in the left supramarginal gyrus confirms that this area plays a direct role in motor attention that cannot be attributed to any strategy of verbal mediation. The results also provide evidence concerning the importance of ventral premotor cortex (PMv) and Broca's area in motor attention and language processes.

381 citations


Journal ArticleDOI
TL;DR: It is proposed that semantic impairment alone can account for the full range of word production deficits described here, on the basis of both the neuropsychological and computational evidence.
Abstract: The processes required for object naming were addressed in a study of patients with semantic dementia (a selective decline of semantic memory resulting from progressive temporal lobe atrophy) and in a computational model of single-word production. Although all patients with semantic dementia are impaired in both single-word production and comprehension, previous reports had indicated two different patterns: (a) a parallel decline in accuracy of naming and comprehension, with frequent semantic naming errors, suggesting a purely semantic basis for the anomia and (b) a dramatic progressive anomia without commensurate decline in comprehension, which might suggest a mainly postsemantic source of the anomia. Longitudinal data for 16 patients with semantic dementia reflected these two profiles, but with the following additional important specifications: (1) despite a few relatively extreme versions of one or other profile, the full set of cases formed a continuum in the extent of anomia for a given degree of degraded comprehension; (2) the degree of disparity between these two abilities was associated with relative asymmetry in laterality of atrophy: a parallel decline in the two measures characterized patients with greater right- than left-temporal atrophy, while disproportionate anomia occurred with a predominance of atrophy in the left-temporal lobe. In an implemented computational model of naming, semantic representations were distributed across simulated left- and right-temporal regions, but the semantic units on the left were more strongly connected to left-lateralized phonological representations. Asymmetric damage to semantic units reproduced the longitudinal patient profiles of naming relative to comprehension, plus additional characteristics of the patients' naming performance. On the basis of both the neuropsychological and computational evidence, we propose that semantic impairment alone can account for the full range of word production deficits described here.

369 citations
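
A minimal numerical sketch can make the paper's asymmetric-damage logic concrete. In the toy version below (our assumptions, not the authors' implementation), semantic features are split across simulated left and right temporal pools; comprehension draws on both pools equally, while naming depends on a left-weighted projection to phonology, so left-biased damage produces disproportionate anomia at a matched level of comprehension.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                     # semantic feature units per simulated hemisphere
W_LEFT, W_RIGHT = 1.0, 0.3   # assumed connection strengths to phonology

def performance(damage_left, damage_right):
    """Proportion-correct proxies for comprehension and naming."""
    left = rng.random(N) > damage_left      # surviving left semantic units
    right = rng.random(N) > damage_right    # surviving right semantic units
    comprehension = (left.sum() + right.sum()) / (2 * N)
    naming = ((W_LEFT * left.sum() + W_RIGHT * right.sum())
              / (N * (W_LEFT + W_RIGHT)))
    return round(comprehension, 2), round(naming, 2)

# Matched overall damage, opposite asymmetry:
print("left-biased atrophy  (comprehension, naming):", performance(0.6, 0.2))
print("right-biased atrophy (comprehension, naming):", performance(0.2, 0.6))
```

With these assumed weights, both atrophy patterns leave comprehension near 0.6, but naming falls to roughly 0.49 under left-biased damage versus roughly 0.71 under right-biased damage, echoing the disproportionate-anomia profile described above.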


Journal ArticleDOI
TL;DR: This rapid processing mode was seen with a wide range of complex visual images, challenging the idea that short reaction times can only be seen with simple visual stimuli and implying that highly automatic feed-forward mechanisms underlie a far greater proportion of the sophisticated image analysis needed for everyday vision than is generally assumed.
Abstract: The processing required to decide whether a briefly flashed natural scene contains an animal can be achieved in 150 msec (Thorpe, Fize, & Marlot, 1996). Here we report that extensive training with a subset of photographs over a 3-week period failed to increase the speed of the processing underlying such rapid visual categorizations: Completely novel scenes could be categorized just as fast as highly familiar ones. Such data imply that the visual system processes new stimuli at a speed and with a number of stages that cannot be compressed. This rapid processing mode was seen with a wide range of complex visual images, challenging the idea that short reaction times can only be seen with simple visual stimuli and implying that highly automatic feed-forward mechanisms underlie a far greater proportion of the sophisticated image analysis needed for everyday vision than is generally assumed.

349 citations


Journal ArticleDOI
TL;DR: This work investigates the performance, by normal individuals and by subjects with a selective impairment in either motor or visual imagery, of an imagery task involving a mental rotation, and highlights the distinct but complementary contribution of covert motor and visual processes during mental rotation.
Abstract: Recent studies indicate that covert mental activities, such as simulating a motor action and imagining the shape of an object, involve shared neural representations with actual motor performance and with visual perception, respectively. Here we investigate the performance, by normal individuals and by subjects with a selective impairment in either motor or visual imagery, of an imagery task involving a mental rotation. The task involved imagining a hand in a particular orientation in space and making a subsequent laterality judgement. A simple change in the phrasing of the imagery instructions (first-person or third-person imagery) and in actual hand posture (holding the hands on the lap or behind the back) had a strong impact on response time (RT) in normal subjects, and on response accuracy in brain-damaged subjects. The pattern of results indicates that the activation of covert motor and visual processes during mental imagery depends on both top-down and bottom-up factors, and highlights the distinct but complementary contribution of covert motor and visual processes during mental rotation.

344 citations


Journal ArticleDOI
TL;DR: The performance of single neurons was comparable to that of humans, and both responded in a similar way to changes in presentation rate; the implications for the role of temporal cortex cells in perception are discussed.
Abstract: Macaque monkeys were presented with continuous rapid serial visual presentation (RSVP) sequences of unrelated naturalistic images at rates of 14–222 msec/image, while neurons that responded selectively to complex patterns (e.g., faces) were recorded in temporal cortex. Stimulus selectivity was preserved for 65% of these neurons even at surprisingly fast presentation rates (14 msec/image or 72 images/sec). Five human subjects were asked to detect or remember images under equivalent conditions. Their performance in both tasks was above chance at all rates (14–111 msec/image). The performance of single neurons was comparable to that of humans and responded in a similar way to changes in presentation rate. The implications for the role of temporal cortex cells in perception are discussed.

344 citations


Journal ArticleDOI
TL;DR: It is demonstrated that pianists, when listening to well-trained piano music, exhibit involuntary motor activity involving the contralateral primary motor cortex (M1).
Abstract: Pianists often report that pure listening to a well-trained piece of music can involuntarily trigger the respective finger movements. We designed a magnetoencephalography (MEG) experiment to compare the motor activation in pianists and nonpianists while listening to piano pieces. For pianists, we found a statistically significant increase of activity above the region of the contralateral motor cortex. Brain surface current density (BSCD) reconstructions revealed a spatial dissociation of this activity between notes preferably played by the thumb and the little finger according to the motor homunculus. Hence, we could demonstrate that pianists, when listening to well-trained piano music, exhibit involuntary motor activity involving the contralateral primary motor cortex (M1).

Journal ArticleDOI
TL;DR: It was concluded that the N200 effect is related to the lexical selection process, where word-form information resulting from an initial phonological analysis and content information derived from the context interact.
Abstract: An event-related brain potential experiment was carried out to investigate the time course of contextual influences on spoken-word recognition. Subjects were presented with spoken sentences that ended with a word that was either (a) congruent, (b) semantically anomalous, but beginning with the same initial phonemes as the congruent completion, or (c) semantically anomalous beginning with phonemes that differed from the congruent completion. In addition to finding an N400 effect in the two semantically anomalous conditions, we obtained an early negative effect in the semantically anomalous condition where word onset differed from that of the congruent completions. It was concluded that the N200 effect is related to the lexical selection process, where word-form information resulting from an initial phonological analysis and content information derived from the context interact.

Journal ArticleDOI
TL;DR: In this article, a verbal Sternberg task was used with continuously changing targets (novel task, NT) and with constant, practiced targets (practiced task, PT) to examine how the shift from controlled to automatic processing changes brain activity.
Abstract: Behavioral studies have shown that consistent practice of a cognitive task can increase the speed of performance and reduce variability of responses and error rate, reflecting a shift from controlled to automatic processing. This study examines how the shift from controlled to automatic processing changes brain activity. A verbal Sternberg task was used with continuously changing targets (novel task, NT) and with constant, practiced targets (practiced task, PT). NT and PT were presented in a blocked design and contrasted to a choice reaction time (RT) control task (CT) to isolate working memory (WM)-related activity. The three-dimensional (3-D) PRESTO functional magnetic resonance imaging (fMRI) sequence was used to measure hemodynamic responses. Behavioral data revealed that task processing became automated after practice, as responses were faster, less variable, and more accurate. This was accompanied specifically by a decrease in activation in regions related to WM (bilateral but predominantly left dorsolateral prefrontal cortex (DLPFC), right superior frontal cortex (SFC), and right frontopolar area) and the supplementary motor area. Results showed no evidence for a shift of foci of activity within or across regions of the brain. The findings have theoretical implications for understanding the functional anatomical substrates of automatic and controlled processing, indicating that these types of information processing have the same functional anatomical substrate, but differ in efficiency. In addition, there are practical implications for interpreting activity as a measure for task performance, such as in patient studies. Whereas reduced activity can reflect poor performance if a task is not sensitive to practice effects, it can reflect good performance if a task is sensitive to practice effects.

Journal ArticleDOI
TL;DR: The conjunction of temporal and spatial characteristics of P200post and P340post leads to the deduction that input processing-related attention associated with emotional visual stimulation involves an initial, rapid, and brief early attentional response oriented to rapid motor action, being more prominent towards negative stimulation.
Abstract: Several studies on hemodynamic brain activity indicate that emotional visual stimuli elicit greater activation than neutral stimuli in attention-related areas such as the anterior cingulate cortex (ACC) and the visual association cortex (VAC). In order to explore the temporo-spatial characteristics of the interaction between attention and emotion, two processes characterized by involving short and rapid phases, event-related potentials (ERPs) were measured in 29 subjects using a 60-electrode array and the LORETA source localization software. A cue/target paradigm was employed in order to investigate both expectancy-related and input processing-related attention. Four categories of stimuli were presented to subjects: positive arousing, negative arousing, relaxing, and neutral. Three attention-related components were finally analyzed: N280pre (from pretarget ERPs), P200post and P340post (both from posttarget ERPs). N280pre had a prefrontal focus (ACC and/or medial prefrontal cortex) and presented significantly lower amplitudes in response to cues announcing negative targets. This result suggests a greater capacity of nonaversive stimuli to generate expectancy-related attention. P200post and P340post were both elicited in the VAC, and showed their highest amplitudes in response to negative- and to positive-arousing stimuli, respectively. The origin of P200post appears to be located dorsally with respect to the clear ventral-stream origin of P340post. The conjunction of temporal and spatial characteristics of P200post and P340post leads to the deduction that input processing-related attention associated with emotional visual stimulation involves an initial, rapid, and brief 'early' attentional response oriented to rapid motor action, being more prominent towards negative stimulation. This is followed by a slower but longer 'late' attentional response oriented to deeper processing, elicited to a greater extent by appetitive stimulation.

Journal ArticleDOI
TL;DR: Results directly demonstrate that a subset of the left inferior frontal regions involved in phonological processing is also sensitive to transient acoustic features within the range of comprehensible speech.
Abstract: Functional magnetic resonance imaging (fMRI) was used to examine how the brain responds to temporal compression of speech and to determine whether the same regions are also involved in phonological processes associated with reading. Recorded speech was temporally compressed to varying degrees and presented in a sentence verification task. Regions involved in phonological processing were identified in a separate scan using a rhyming judgment task with pseudowords compared to a lettercase judgment task. The left inferior frontal and left superior temporal regions (Broca's and Wernicke's areas), along with the right inferior frontal cortex, demonstrated a convex response to speech compression; their activity increased as compression increased, but then decreased when speech became incomprehensible. Other regions exhibited linear increases in activity as compression increased, including the middle frontal gyri bilaterally. The auditory cortices exhibited compression-related decreases bilaterally, primarily reflecting a decrease in activity when speech became incomprehensible. Rhyme judgments engaged two left inferior frontal gyrus regions (pars triangularis and pars opercularis), of which only the pars triangularis region exhibited significant compression-related activity. These results directly demonstrate that a subset of the left inferior frontal regions involved in phonological processing is also sensitive to transient acoustic features within the range of comprehensible speech.

Journal ArticleDOI
TL;DR: Regions in the left inferior frontal cortex were specifically recruited during semantic processing in a task-dependent manner, and a region in the right cerebellum may be functionally related to those in the left inferior frontal cortex.
Abstract: To distinguish areas involved in the processing of word meaning (semantics) from other regions involved in lexical processing more generally, subjects were scanned with positron emission tomography (PET) while performing lexical tasks, three of which required varying degrees of semantic analysis and one that required phonological analysis. Three closely apposed regions in the left inferior frontal cortex and one in the right cerebellum were significantly active above baseline in the semantic tasks, but not in the nonsemantic task. The activity in two of the frontal regions was modulated by the difficulty of the semantic judgment. Other regions, including some in the left temporal cortex and the cerebellum, were active across all four language tasks. Thus, in addition to a number of regions known to be active during language processing, regions in the left inferior frontal cortex were specifically recruited during semantic processing in a task-dependent manner. A region in the right cerebellum may be functionally related to those in the left inferior frontal cortex. Discussion focuses on the implications of these results for current views regarding neural substrates of semantic processing.

Journal ArticleDOI
TL;DR: His recall of previously unfamiliar newsreel events was impaired, but gained substantially from repetition over a 2-day period, consistent with the hypothesis that the recollective process of episodic memory is not necessary either for recognition or for the acquisition of semantic knowledge.
Abstract: We report the performance on recognition memory tests of Jon, who, despite amnesia from early childhood, has developed normal levels of performance on tests of intelligence, language, and general knowledge. Despite impaired recall, he performed within the normal range on each of six recognition tests, but he appears to lack the recollective phenomenological experience normally associated with episodic memory. His recall of previously unfamiliar newsreel events was impaired, but gained substantially from repetition over a 2-day period. Our results are consistent with the hypothesis that the recollective process of episodic memory is not necessary either for recognition or for the acquisition of semantic knowledge.

Journal ArticleDOI
TL;DR: It is demonstrated that crossmodal links in spatial attention can influence sensory brain responses as early as the N1, and that these links operate in a spatial frame-of-reference that can remap between the modalities across changes in posture.
Abstract: Tactile–visual links in spatial attention were examined by presenting spatially nonpredictive tactile cues to the left or right hand, shortly prior to visual targets in the left or right hemifield. To examine the spatial coordinates of any crossmodal links, different postures were examined. The hands were either uncrossed, or crossed so that the left hand lay in the right visual field and vice versa. Visual judgments were better on the side where the stimulated hand lay, though this effect was somewhat smaller with longer intervals between cue and target, and with crossed hands. Event-related brain potentials (ERPs) showed a similar pattern. Larger amplitude occipital N1 components were obtained for visual events on the same side as the preceding tactile cue, at ipsilateral electrode sites. Negativities in the Nd2 interval at midline and lateral central sites, and in the Nd1 interval at electrode Pz, were also enhanced for the cued side. As in the psychophysical results, ERP cueing effects during the crossed posture were determined by the side of space in which the stimulated hand lay, not by the anatomical side of the initial hemispheric projection for the tactile cue. These results demonstrate that crossmodal links in spatial attention can influence sensory brain responses as early as the N1, and that these links operate in a spatial frame-of-reference that can remap between the modalities across changes in posture.

Journal ArticleDOI
TL;DR: It is hypothesized that the right medial temporal lobe modulates fear responses while viewing emotional pictures, which involves exposure to (emotional) visual information and is consistent with the emotional processing traditionally ascribed to the right hemisphere.
Abstract: In the present study we report a double dissociation between right and left medial temporal lobe damage in the modulation of fear responses to different types of stimuli. We found that right unilateral temporal lobectomy (RTL) patients, in contrast to control subjects and left temporal lobectomy (LTL) patients, failed to show potentiated startle while viewing negative pictures. However, the opposite pattern of impairment was observed during a stimulus that patients had been told signaled the possibility of shock. Control subjects and RTL patients showed potentiated startle while LTL patients failed to show potentiated startle. We hypothesize that the right medial temporal lobe modulates fear responses while viewing emotional pictures, which involves exposure to (emotional) visual information and is consistent with the emotional processing traditionally ascribed to the right hemisphere. In contrast, the left medial temporal lobe modulates fear responses when those responses are the result of a linguistic/cognitive representation acquired through language, which, like other verbally mediated material, generally involves the left hemisphere. Additional evidence from case studies suggests that, within the medial temporal lobe, the amygdala is responsible for this modulation.

Journal ArticleDOI
TL;DR: The results may suggest that the left and right amygdalae play a differential role in affective processing of facial expressions in collaboration with other cortical or subcortical regions, with the left being related with the bilateral prefrontal cortex, and the right with the right temporal lobe.
Abstract: Some involvement of the human amygdala in the processing of facial expressions has been investigated in neuroimaging studies, although the neural mechanisms underlying motivated or emotional behavior in response to facial stimuli are not yet fully understood. We investigated, using functional magnetic resonance imaging (fMRI) and healthy volunteers, how the amygdala interacts with other cortical regions while subjects are judging the sex of faces with negative, positive, or neutral emotion. The data were analyzed by a subtractive method, then, to clarify possible interaction among regions within the brain, several kinds of analysis (i.e., a correlation analysis, a psychophysiological interaction analysis and a structural equation modeling) were performed. Overall, significant activation was observed in the bilateral fusiform gyrus, medial temporal lobe, prefrontal cortex, and the right parietal lobe during the task. The results of subtraction between the conditions showed that the left amygdala, right orbitofrontal cortex, and temporal cortices were predominantly involved in the processing of the negative expressions. The right angular gyrus was involved in the processing of the positive expressions when the negative condition was subtracted from the positive condition. The correlation analysis showed that activity in the left amygdala positively correlated with activity in the left prefrontal cortex under the negative minus neutral subtraction condition. The psychophysiological interaction revealed that the neural responses in the left amygdala and the right prefrontal cortex underwent the condition-specific changes between the negative and positive face conditions. The right amygdaloid activity also had an interactive effect with activity in the right hippocampus and middle temporal gyrus. These results may suggest that the left and right amygdalae play a differential role in affective processing of facial expressions in collaboration with other cortical or subcortical regions, with the left being related with the bilateral prefrontal cortex, and the right with the right temporal lobe.

Journal ArticleDOI
TL;DR: A model based on synchronization and desynchronization of reverberatory neural assemblies is presented, which can parsimoniously account for both the limited capacity of visual working memory, and for the temporary binding of multiple assemblies into a single pattern.
Abstract: Luck and Vogel (1997) showed that the storage capacity of visual working memory is about four objects and that this capacity does not depend on the number of features making up the objects. Thus, visual working memory seems to process integrated objects rather than individual features, just as verbal working memory handles higher-order "chunks" instead of individual features or letters. In this article, we present a model based on synchronization and desynchronization of reverberatory neural assemblies, which can parsimoniously account for both the limited capacity of visual working memory, and for the temporary binding of multiple assemblies into a single pattern. A critical capacity of about three to four independent patterns showed up in our simulations, consistent with the results of Luck and Vogel. The same desynchronizing mechanism optimizing phase segregation between assemblies coding for separate features or multifeature objects poses a limit to the number of oscillatory reverberations. We show how retention of multiple features as visual chunks (feature conjunctions or objects) in terms of synchronized reverberatory assemblies may be achieved with and without long-term memory guidance.
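
The capacity argument can be illustrated with simple arithmetic. Assuming, purely for the sketch, that each reverberating assembly occupies a burst of roughly fixed width within an oscillatory cycle, perfect desynchronization spaces the bursts evenly, and assemblies stay segregated only while that spacing exceeds the burst width. The cycle length and burst width below are assumed round numbers, not the article's parameters.

```python
# Assumed round numbers for illustration only:
T = 150.0   # oscillatory cycle length (ms)
w = 35.0    # burst width of one reverberating assembly (ms)

for n in range(1, 8):
    spacing = T / n   # best achievable onset spacing under even desynchronization
    state = "segregated" if spacing >= w else "overlapping"
    print(f"{n} assemblies: spacing {spacing:5.1f} ms -> {state}")
```

With these numbers the limit falls at four assemblies, in the range the simulations report; the point is only that a fixed cycle divided by a fixed burst width yields a small integer capacity.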

Journal ArticleDOI
TL;DR: Speech stimuli elicited significantly greater activation than both complex and simple nonspeech stimuli in classic receptive language areas, namely the middle temporal gyri bilaterally and in a locus lateralized to the left posterior superior temporal gyrus.
Abstract: The detection of speech in an auditory stream is a requisite first step in processing spoken language. In this study, we used event-related fMRI to investigate the neural substrates mediating detection of speech compared with that of nonspeech auditory stimuli. Unlike previous studies addressing this issue, we contrasted speech with nonspeech analogues that were matched along key temporal and spectral dimensions. In an oddball detection task, listeners heard nonsense speech sounds, matched sine wave analogues (complex nonspeech), or single tones (simple nonspeech). Speech stimuli elicited significantly greater activation than both complex and simple nonspeech stimuli in classic receptive language areas, namely the middle temporal gyri bilaterally and in a locus lateralized to the left posterior superior temporal gyrus. In addition, speech activated a small cluster of the right inferior frontal gyrus. The activation of these areas in a simple detection task, which requires neither identification nor linguistic analysis, suggests they play a fundamental role in speech processing.

Journal ArticleDOI
TL;DR: A general role is suggested for posterior parietal areas in the deployment of visual attentional resources in a covert motion-tracking task that manipulated attentional load by varying the number of tracked balls.
Abstract: Although visual attention is known to modulate brain activity in the posterior parietal, prefrontal, and visual sensory areas, the unique roles of these areas in the control of attentional resources have remained unclear. Here, we report a dissociation in the response profiles of these areas. In a parametric functional magnetic resonance imaging (fMRI) study, subjects performed a covert motion-tracking task, in which we manipulated "attentional load" by varying the number of tracked balls. While strong effects of attention - independent of attentional load - were wide-spread, robust linear increases of brain activity with number of balls tracked were seen primarily in the posterior parietal areas, including the intraparietal sulcus (IPS) and superior parietal lobule (SPL). Thus, variations in attentional load revealed different response profiles in sensory areas as compared to control areas. Our results suggest a general role for posterior parietal areas in the deployment of visual attentional resources.

Journal ArticleDOI
TL;DR: Contrary to the prevalent view that rote rehearsal does not impact learning, data suggest that phonological maintenance mechanisms, in addition to semantic elaboration, support the encoding of an experience such that it can be later remembered.
Abstract: The ability to bring to mind a past experience depends on the cognitive and neural processes that are engaged during the experience and that support memory formation. A central and much debated question is whether the processes that underlie rote verbal rehearsal—that is, working memory mechanisms that keep information in mind—impact memory formation and subsequent remembering. The present study used event-related functional magnetic resonance imaging (fMRI) to explore the relation between working memory maintenance operations and long-term memory. Specifically, we investigated whether the magnitude of activation in neural regions supporting the on-line maintenance of verbal codes is predictive of subsequent memory for words that were rote-rehearsed during learning. Furthermore, during rote rehearsal, the extent of neural activation in regions associated with semantic retrieval was assessed to determine the role that incidental semantic elaboration may play in subsequent memory for rote-rehearsed items. Results revealed that (a) the magnitude of activation in neural regions previously associated with phonological rehearsal (left prefrontal, bilateral parietal, supplementary motor, and cerebellar regions) was correlated with subsequent memory, and (b) while rote rehearsal did not—on average—elicit activation in an anterior left prefrontal region associated with semantic retrieval, activation in this region was greater for trials that were subsequently better remembered. Contrary to the prevalent view that rote rehearsal does not impact learning, these data suggest that phonological maintenance mechanisms, in addition to semantic elaboration, support the encoding of an experience such that it can be later remembered.

Journal ArticleDOI
TL;DR: These results are the first clear demonstration of response bias effects on ERPs linked to recognition memory and are consistent with the idea that frontal cortex areas may be responsible for relaxing the retrieval criterion for negative stimuli so as to ensure that emotional events are not as easily missed or forgotten as neutral events.
Abstract: The question of how emotions influence recognition memory is of interest not only within basic cognitive neuroscience but from clinical and forensic perspectives as well. Emotional stimuli can induce a "recognition bias" such that individuals are more likely to respond "old" to a negative item than to an emotionally neutral item, whether the item is actually old or new. We investigated this bias using event-related brain potential (ERP) measures by comparing the processing of words given "old" responses with accurate recognition of old/new differences. For correctly recognized items, the ERP difference between old items (hits) and new items (correct rejections, CR) was largely unaffected by emotional valence. That is, regardless of emotional valence, the ERP associated with hits was characterized by a widespread positivity between 300 and 700 msec relative to that for CRs. By contrast, the analysis of ERPs to old and new items that were judged "old" (hits and false alarms [FAs], respectively) revealed a differential effect of valence by 300 msec: Neutral items showed a large old/new difference over prefrontal sites, whereas negative items did not. These results are the first clear demonstration of response bias effects on ERPs linked to recognition memory. They are consistent with the idea that frontal cortex areas may be responsible for relaxing the retrieval criterion for negative stimuli so as to ensure that emotional events are not as easily "missed" or forgotten as neutral events.

Journal ArticleDOI
TL;DR: In this article, a form of incidental encoding was explored based on the "Testing" phenomenon: the incidental-encoding task was an episodic memory retrieval task, and subjects viewed old and new words and indicated whether they remembered them.
Abstract: Episodic memory encoding is pervasive across many kinds of task and often arises as a secondary processing effect in tasks that do not require intentional memorization. To illustrate the pervasive nature of information processing that leads to episodic encoding, a form of incidental encoding was explored based on the "Testing" phenomenon: The incidental-encoding task was an episodic memory retrieval task. Behavioral data showed that performing a memory retrieval task was as effective as intentional instructions at promoting episodic encoding. During fMRI imaging, subjects viewed old and new words and indicated whether they remembered them. Relevant to encoding, the fate of the new words was examined using a second, surprise test of recognition after the imaging session. fMRI analysis of those new words that were later remembered revealed greater activity in left frontal regions than those that were later forgotten - the same pattern of results as previously observed for traditional incidental and intentional episodic encoding tasks. This finding may offer a partial explanation for why repeated testing improves memory performance. Furthermore, the observation of correlates of episodic memory encoding during retrieval tasks challenges some interpretations that arise from direct comparisons between "encoding tasks" and "retrieval tasks" in imaging data. Encoding processes and their neural correlates may arise in many tasks, even those nominally labeled as retrieval tasks by the experimenter.

Journal ArticleDOI
TL;DR: A computational model based on the hypothesis that the visual and motor loops achieve both the quick acquisition of novel sequences and the robust execution of well-learned sequences is examined; the dual mechanism with the coordinator proved superior to a single (visual or motor) mechanism.
Abstract: Experimental studies have suggested that many brain areas, including the basal ganglia (BG), contribute to procedural learning. Focusing on the basal ganglia-thalamocortical (BG-TC) system, we propose a computational model to explain how different brain areas work together in procedural learning. The BG-TC system is composed of multiple separate loop circuits. According to our model, two separate BG-TC loops learn a visuomotor sequence concurrently but using different coordinates, one visual, and the other motor. The visual loop includes the dorsolateral prefrontal (DLPF) cortex and the anterior part of the BG, while the motor loop includes the supplementary motor area (SMA) and the posterior BG. The concurrent learning in these loops is based on reinforcement signals carried by dopaminergic (DA) neurons that project divergently to the anterior ("visual") and posterior ("motor") parts of the striatum. It is expected, however, that the visual loop learns a sequence faster than the motor loop due to their different coordinates. The difference in learning speed may lead to inconsistent outputs from the visual and motor loops, and this problem is solved by a mechanism called a "coordinator," which adjusts the contribution of the visual and motor loops to a final motor output. The coordinator is assumed to be in the presupplementary motor area (pre-SMA). We hypothesize that the visual and motor loops, with the help of the coordinator, achieve both the quick acquisition of novel sequences and the robust execution of well-learned sequences. A computational model based on the hypothesis is examined in a series of computer simulations, referring to the results of the 2 × 5 task experiments that have been used on both monkeys and humans. We found that the dual mechanism with the coordinator was superior to the single (visual or motor) mechanism. The model replicated the following essential features of the experimental results: (1) the time course of learning, (2) the effect of opposite hand use, (3) the effect of sequence reversal, and (4) the effects of localized brain inactivations. Our model may account for a common feature of procedural learning: A spatial sequence of discrete actions (subserved by the visual loop) is gradually replaced by a robust motor skill (subserved by the motor loop).
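
A compact reinforcement-learning sketch captures the dual-loop logic described above. This is an illustration under our own assumptions, not the authors' implementation: a fast "visual" learner and a slow "motor" learner are trained on the same fixed sequence by a shared reward signal, a simple confidence-weighted "coordinator" combines their votes, and the visual loop can be "inactivated" after training to show that execution survives on the slow loop.

```python
import numpy as np

rng = np.random.default_rng(0)
SEQ = [2, 0, 3, 1, 2]                    # target action at each sequence step
N_ACT = 4
q_vis = np.zeros((len(SEQ), N_ACT))      # fast "visual" loop (DLPF/anterior BG)
q_mot = np.zeros((len(SEQ), N_ACT))      # slow "motor" loop (SMA/posterior BG)

def trial(learn=True, vis_on=True, mot_on=True, eps=0.1):
    hits = 0
    for s, target in enumerate(SEQ):
        # "coordinator": each loop votes in proportion to its confidence
        vote = np.zeros(N_ACT)
        if vis_on:
            vote += q_vis[s].max() * q_vis[s]
        if mot_on:
            vote += q_mot[s].max() * q_mot[s]
        explore = rng.random() < eps or vote.max() == 0.0
        a = int(rng.integers(N_ACT)) if explore else int(vote.argmax())
        r = float(a == target)
        hits += r
        if learn:                        # dopamine-like reinforcement to both loops
            q_vis[s, a] += 0.5 * (r - q_vis[s, a])    # learns quickly
            q_mot[s, a] += 0.05 * (r - q_mot[s, a])   # learns slowly
    return hits / len(SEQ)

acc = [trial() for _ in range(300)]
print("accuracy, first 20 trials:", round(float(np.mean(acc[:20])), 2))
print("accuracy, last 20 trials: ", round(float(np.mean(acc[-20:])), 2))
# simulated "inactivation" of the visual loop after training:
print("visual loop off:", round(float(np.mean(
    [trial(learn=False, vis_on=False) for _ in range(20)])), 2))
```

Typical output shows accuracy climbing within the first few dozen trials, carried by the fast loop, and near-perfect performance after training even with the visual loop switched off, mirroring the quick-acquisition/robust-execution division of labor.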

Journal ArticleDOI
TL;DR: Repetitive transcranial magnetic stimulation is used to suppress the excitability of a portion of left prefrontal cortex and to assess its role in producing nouns and verbs, demonstrating that grammatical categories have a neuroanatomical basis and that the left prefrontal cortex is selectively engaged in processing verbs as grammatical objects.
Abstract: Selective deficits in producing verbs relative to nouns in speech are well documented in neuropsychology and have been associated with left hemisphere frontal cortical lesions resulting from stroke and other neurological disorders. The basis for these impairments is unresolved: Do they arise because of differences in the way grammatical categories of words are organized in the brain, or because of differences in the neural representation of actions and objects? We used repetitive transcranial magnetic stimulation (rTMS) to suppress the excitability of a portion of left prefrontal cortex and to assess its role in producing nouns and verbs. In one experiment subjects generated real words; in a second, they produced pseudowords as nouns or verbs. In both experiments, response latencies increased for verbs but were unaffected for nouns following rTMS. These results demonstrate that grammatical categories have a neuroanatomical basis and that the left prefrontal cortex is selectively engaged in processing verbs as grammatical objects.

Journal ArticleDOI
TL;DR: A model in which color information influences the perception of digits through reentrant pathways in the visual system is proposed, which suggests that C's colored photisms influence her perception of black digits.
Abstract: When C, a digit-color synaesthete, views black digits, she reports that each digit elicits a highly specific color (a photism), which is experienced as though the color was externally projected onto the digit. We evaluated this claim by assessing whether C's photisms influenced her ability to perceive visually presented digits. C identified and localized target digits presented against backgrounds that were either congruent or incongruent with the color of her photism for the digits. The results showed that C was poorer at identifying and localizing digits on congruent than incongruent trials. Such differences in performance between congruent and incongruent trials were not found with nonsynaesthete control participants. These results suggest that C's colored photisms influence her perception of black digits. We propose a model in which color information influences the perception of digits through reentrant pathways in the visual system.

Journal ArticleDOI
TL;DR: A modified Rubin vase-face illusion was employed to explore to what extent the activation in face-related regions is attributable to the presence of local face features, or is due to a more holistic grouping process that involves the entire face figure.
Abstract: Recent neuroimaging studies have described a differential activation pattern associated with specific object images (e.g., face-related and building-related activation) in human occipito-temporal cortex. However, it is as yet unclear to what extent this selectivity is due to differences in the statistics of local object features present in the different object categories, and to what extent it reflects holistic grouping processes operating across the entire object image. To resolve this question it is essential to use images in which identical sets of local features elicit the perception of different object categories. The classic Rubin vase–face illusion provides an excellent experimental set to test this question. In the illusion, the same local contours lead to the perception of different objects (vase or face). Here we employed a modified Rubin vase–face illusion to explore to what extent the activation in face-related regions is attributable to the presence of local face features, or is due to a more holistic grouping process that involves the entire face figure. Biasing cues (gratings and color) were used to control the perceptual state of the observer. We found enhanced activation in face-related regions during the "face profile" perceptual state compared to the "vase" perceptual state. Control images ruled out the involvement of the biasing cues in the effect. Thus, object-selective activation in human face-related regions entails global grouping processes that go beyond the local processing of stimulus features.