
Showing papers in "Journal of Cognitive Neuroscience in 1994"


Journal ArticleDOI
TL;DR: An event-related brain potential reflecting the acoustic-phonetic process in the phonological stage of word processing was recorded to the terminal words of spoken sentences and the independence of this response from the ERP component known to be sensitive to semantic violations (N400) was demonstrated.
Abstract: An event-related brain potential (ERP) reflecting the acoustic-phonetic process in the phonological stage of word processing was recorded to the terminal words of spoken sentences. The peak latency of this negative-going response occurred between 270 and 300 msec after the onset of the terminal word. The independence of this response (the phonological mismatch negativity, PMN) from the ERP component known to be sensitive to semantic violations (N400) was demonstrated by manipulating sentence endings so that phonemic and semantic violations occurred together or separately. Four conditions used sentences that ended with (1) the highest Cloze probability word (e.g., “The piano was out of tune.”), (2) a word having the same initial phoneme as the highest Cloze probability word but that was, in fact, semantically anomalous (e.g., “The gambler had a streak of bad luggage.”), (3) a word having an initial phoneme different from that of the highest Cloze probability word but that was, in fact, semantically appropriate (e.g., “Don caught the ball with his glove.”), or (4) a word that was semantically anomalous and, therefore, had an initial phoneme that was totally unexpected given the sentence's context (e.g., “The dog chased our cat up the queen.”). Neither the PMN nor the N400 was found in the first condition. Only an N400 was observed in the second condition while only a PMN was seen in the third. Both responses were elicited in the last condition. Finally, a delayed N400 occurred to semantic violations in the second condition where the initial phoneme was identical to that of the highest Cloze probability ending. Results are discussed with regard to the Cohort model of word recognition.
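
The four endings amount to a 2 x 2 crossing of initial-phoneme expectancy and semantic fit, with the PMN tracking the first factor and the N400 the second. A minimal sketch of that design (Python; the condition table and component labels simply restate the abstract, they are not new data or analysis code):

```python
# Sketch of the 2 x 2 design described above: initial phoneme (expected vs.
# unexpected) crossed with semantic fit (congruent vs. anomalous).
# Component labels restate the reported results; they are not new data.
conditions = {
    1: {"ending": "tune",    "phoneme_expected": True,  "semantically_congruent": True,
        "example": "The piano was out of tune."},
    2: {"ending": "luggage", "phoneme_expected": True,  "semantically_congruent": False,
        "example": "The gambler had a streak of bad luggage."},
    3: {"ending": "glove",   "phoneme_expected": False, "semantically_congruent": True,
        "example": "Don caught the ball with his glove."},
    4: {"ending": "queen",   "phoneme_expected": False, "semantically_congruent": False,
        "example": "The dog chased our cat up the queen."},
}

def expected_components(cond):
    """Return the ERP components reported for a condition."""
    comps = []
    if not cond["phoneme_expected"]:
        comps.append("PMN")          # phonological mismatch negativity
    if not cond["semantically_congruent"]:
        comps.append("N400")         # semantic violation response
    return comps or ["none"]

for k, cond in conditions.items():
    print(k, cond["ending"], "->", ", ".join(expected_components(cond)))
```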

447 citations


Journal ArticleDOI
TL;DR: The authors found that the RH processes words with relatively coarser coding than the LH, a conclusion consistent with a recent suggestion that RH coarsely codes visual input (Kosslyn, Chabris, Marsolek, & Koenig, 1992).
Abstract: There are now numerous observations of subtle right hemisphere (RH) contributions to language comprehension. It has been suggested that these contributions reflect coarse semantic coding in the RH. That is, the RH weakly activates large semantic fields---including concepts distantly related to the input word---whereas the left hemisphere (LH) strongly activates small semantic fields---limited to concepts closely related to the input (Beeman, 1993a,b). This makes the RH less effective at interpreting single words, but more sensitive to semantic overlap of multiple words. To test this theory, subjects read target words preceded by either “Summation” primes (three words each weakly related to the target) or Unrelated primes (three unrelated words), and target exposure duration was manipulated so that subjects correctly named about half the target words in each hemifield. In Experiment 1, subjects benefited more from Summation primes when naming target words presented to the left visual field-RH (lvf-RH) than when naming target words presented to the right visual field-LH (rvf-LH), suggesting a RH advantage in coarse semantic coding. In Experiment 2, with a low proportion of related prime-target trials, subjects benefited more from “Direct” primes (one strong associate flanked by two unrelated words) than from Summation primes for rvf-LH target words, indicating that the LH activates closely related information much more strongly than distantly related information. Subjects benefited equally from both prime types for lvf-RH target words, indicating that the RH activates closely related information only slightly more strongly, at best, than distantly related information. This suggests that the RH processes words with relatively coarser coding than the LH, a conclusion consistent with a recent suggestion that the RH coarsely codes visual input (Kosslyn, Chabris, Marsolek, & Koenig, 1992).
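
The coarse-coding account lends itself to a toy numerical illustration. The sketch below is my own simplification, not the authors' model: all activation weights are invented and tuned only so that the qualitative pattern matches the results described above (Summation primes help the RH more than the LH; Direct and Summation primes help the RH about equally, while the LH gains almost everything from the single close associate).

```python
# Toy illustration of coarse (RH) vs. fine (LH) semantic coding.
# All weights are invented; only the qualitative pattern matters.
# LH: strong activation restricted to closely related concepts.
# RH: weak activation spread over close and distantly related concepts.
WEIGHTS = {
    "LH": {"close": 1.00, "distant": 0.03, "unrelated": 0.0},
    "RH": {"close": 0.40, "distant": 0.13, "unrelated": 0.0},
}

def summed_activation(prime_types, hemisphere):
    """Total activation the three primes contribute to the target word."""
    return sum(WEIGHTS[hemisphere][p] for p in prime_types)

summation_primes = ["distant", "distant", "distant"]    # three weak associates
direct_primes    = ["close", "unrelated", "unrelated"]  # one strong associate

for hemi in ("LH", "RH"):
    print(hemi,
          "Summation:", round(summed_activation(summation_primes, hemi), 2),
          "Direct:",    round(summed_activation(direct_primes, hemi), 2))
# LH: Summation 0.09 vs. Direct 1.00 (almost all benefit comes from the close associate).
# RH: Summation 0.39 vs. Direct 0.40 (the two prime types help about equally).
```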

436 citations


Journal ArticleDOI
TL;DR: The idea that fornix transection in the monkey impairs spatial memory but leaves object memory intact is shown to be an oversimplification.
Abstract: A series of five experiments investigated the relationship between object memory and scene memory in normal and fornix-transected monkeys. An algorithm created formally defined background and objects on a large visual display; the disposition of some particular objects in particular places in a particular background constitutes a formally defined scene. The animals learned four types of discrimination problem: (1) object-in-place discrimination learning, in which the correct (rewarded) response was to a particular object that always occupied the same place in a particular unique background, (2) place discrimination learning, in which the correct response was to a particular place in a unique background, with no distinctive object at that place, (3) object discrimination learning in unique backgrounds, in which the correct response was to a particular object that could occupy one or the other of two possible places in a unique background, and (4) object discrimination learning in varying backgrounds, in which the correct response was to a particular object that could appear at any place in any background. The severest impairment produced by fornix transection was in object-in-place learning. Fornix transection did not impair object discrimination learning in varying backgrounds. The results from the other two types of learning task showed intermediate severity of impairment in the fornix-transected animals. The idea that fornix transection in the monkey impairs spatial memory but leaves object memory intact is thus shown to be an oversimplification. The impairments of object memory in the present experiments are analogous to the impairments of episodic memory seen in human amnesic patients.

423 citations


Journal ArticleDOI
TL;DR: The cell responses provide direct evidence for neural mechanisms computing form from nonrigid motion and the selectivity of the cells was for body view, specific direction, and specific type of body motion presented by moving light displays and is not predicted by many current computational approaches to the extraction of form from motion.
Abstract: Cells have been found in the superior temporal polysensory area (STPa) of the macaque temporal cortex that are selectively responsive to the sight of particular whole body movements (e.g., walking) under normal lighting. These cells typically discriminate the direction of walking and the view of the body (e.g., left profile walking left). We investigated the extent to which these cells are responsive under “biological motion” conditions where the form of the body is defined only by the movement of light patches attached to the points of limb articulation. One-third of the cells (25/72) selective for the form and motion of walking bodies showed sensitivity to the moving light displays. Seven of these cells showed only partial sensitivity to form from motion, in so far as the cells responded more to moving light displays than to moving controls but failed to discriminate body view. These seven cells exhibited directional selectivity. Eighteen cells showed statistical discrimination for both direction of movement and body view under biological motion conditions. Most of these cells showed reduced responses to the impoverished moving light stimuli compared to full light conditions. The 18 cells were thus sensitive to detailed form information (body view) from the pattern of articulating motion. Cellular processing of the global pattern of articulation was indicated by the observations that none of these cells were found sensitive to movement of individual limbs and that jumbling the pattern of moving limbs reduced response magnitude. A further 10 cells were tested for sensitivity to moving light displays of whole body actions other than walking. Of these cells 5/10 showed selectivity for form displayed by biological motion stimuli that paralleled the selectivity under normal lighting conditions. The cell responses thus provide direct evidence for neural mechanisms computing form from nonrigid motion. The selectivity of the cells was for body view, specific direction, and specific type of body motion presented by moving light displays and is not predicted by many current computational approaches to the extraction of form from motion.

379 citations



Journal ArticleDOI
TL;DR: The isolation of this novel language-related ERP that is sensitive to semantic manipulations has important consequences for temporal and mechanistic aspects of theories of language processing.
Abstract: Event-related potentials (ERPs) were recorded from the scalp to investigate the processing of word stimuli. Three tasks were used: (1) a task comparing words that provided an anomalous or normal sentence ending, (2) a word-list task in which different word types were examined, and (3) a word-list task in which semantic priming was examined. ERPs were recorded from a 50-channel montage in an attempt to dissociate overlapping ERP features by their scalp distributions. The focus of these studies was the N400, an ERP previously associated with language processing (Kutas & Hillyard, 1980). The temporal interval typically associated with N400 (250--500 msec) was found to contain overlapping ERP features. Two of these features were common to both sentence and word-list tasks---but one appeared different. Anomalous sentence endings and words with semantic content in lists both showed coincident negative left frontotemporal and midline-anterior ERP foci, peaking at 332 msec for sentences and 316 msec for word lists. The most negative voltage obtained in the sentence task peaked at 386 msec and had a midline-posterior focus. A right frontotemporal focus developed after the midline-posterior focus and outlasted its duration. The most negative voltage for content words in lists was reached at 364 msec. The distribution of this ERP was extensive over the midline and appeared to differ from that observed in the sentence task. Modulation of language-related ERPs by word type and semantic priming was investigated using the word-list tasks, which required category-detection responses. Two novel findings were obtained: (1) The ERP distributions for words serving grammatical function and content words differed substantially in word lists. Even when devoid of any sentence context, function words presented significantly attenuated measures of N400 compared to content words. These findings support hypotheses that suggest a differential processing of content and function words. (2) Semantic priming functionally dissociated two ERP features in the 250--500 msec range. The later and most negative midline ERP feature (peaking at 364 msec) was attenuated by semantic priming. However, the earlier left frontotemporal feature (peaking at 316 msec) was enhanced by semantic priming. The isolation of this novel language-related ERP that is sensitive to semantic manipulations has important consequences for temporal and mechanistic aspects of theories of language processing.

314 citations


Journal ArticleDOI
TL;DR: The ability to time the response sequence in such a way that the responses were placed right ahead of the stimuli started to break down, i.e., the task was fulfilled by reactions to the stimuli rather than by advanced responses.
Abstract: The concept of a temporal integration process in the timing mechanisms in the brain, postulated on the basis of experimental observations from various paradigms (for a review see Pöppel, 1978), has been explored in a sensorimotor synchronization task. Subjects synchronized their finger taps to sequences of auditory stimuli with interstimulus-onset intervals (ISIs) between 300 and 4800 msec in different trials. Each tonal sequence consisted of 110 stimuli; the tones had a frequency of 500 Hz and a duration of 100 msec. As observed previously, response onsets preceded onsets of the stimuli by some tens of milliseconds for ISIs in the range from about 600 to 1800 msec. For ISIs longer than or equal to 2400 msec, the ability to time the response sequence in such a way that the responses were placed right ahead of the stimuli started to break down, i.e., the task was fulfilled by reactions to the stimuli rather than by advanced responses. This observation can be understood within the general framework of a temporal integration process that is supposed to have a maximal capacity (integration interval) of approximately 3 sec. Only if successive stimuli fall within one integration period can motor programs be initiated properly by a prior stimulus and thus lead to an appropriate synchronization between the stimulus sequence and corresponding motor acts.
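
The proposed mechanism can be stated as a one-line rule: anticipatory tapping is possible only when the next tone falls inside the current integration interval. A minimal sketch (Python; the ~3-sec capacity and the ISI range come from the abstract, the binary classification rule itself is only an illustration):

```python
# Sketch: classify the ISIs used in the study by whether successive tones fall
# within a ~3-sec temporal integration interval, the capacity proposed above.
INTEGRATION_WINDOW_MS = 3000          # maximal integration capacity (~3 sec)

def predicted_mode(isi_ms):
    """Anticipation is possible only if the next tone falls within the window."""
    return "anticipatory (taps lead the tones)" if isi_ms < INTEGRATION_WINDOW_MS \
        else "reactive (taps follow the tones)"

for isi in (300, 600, 1200, 1800, 2400, 3600, 4800):   # ISIs from the study
    print(f"ISI {isi:>4} ms -> {predicted_mode(isi)}")
# Note: the reported breakdown already begins at ISIs >= 2400 msec, somewhat
# below the nominal 3-sec capacity, so this sharp cutoff is only approximate.
```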

228 citations


Journal ArticleDOI
TL;DR: Three patients with semantic dementia, involving progressive deterioration of semantic memory, performed immediate serial recall of short sequences of familiar words, with a marked advantage in recall of known as compared to familiar but now unknown words.
Abstract: Three patients with semantic dementia, involving progressive deterioration of semantic memory, performed immediate serial recall of short sequences of familiar words. On the basis of their performance in other tasks of word comprehension and production, the stimuli were selected individually for each patient as either known or unknown words. All patients showed a marked advantage in recall of known as compared to familiar but now unknown words. Errors consisted primarily of incorrect combinations of correct phoneme sequences in the stimulus string, with a large number of errors preserving onset/rime syllable structure (e.g., mint, rug reproduced as “rint, mug”). Discussion focuses on the implication of these errors for the structure of phonological representations, and in particular on a hypothesis that meaning plays a crucial role in binding the elements of phonological word forms.
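
The error type described above, onsets and rimes preserved but recombined across list items ("mint, rug" reported as "rint, mug"), can be made concrete with a small sketch. The orthographic onset/rime split below is my own simplification for illustration, not a phonological analysis of the patients' speech:

```python
# Sketch of the reported error type: onsets and rimes from the stimulus words
# are preserved but recombined across list positions ("mint, rug" -> "rint, mug").
# The split below is a crude orthographic stand-in for a phonological analysis.
VOWELS = "aeiou"

def split_onset_rime(word):
    """Split a simple monosyllable into onset (initial consonants) and rime."""
    for i, ch in enumerate(word):
        if ch in VOWELS:
            return word[:i], word[i:]
    return word, ""

def swap_onsets(word1, word2):
    """Recombine: each word keeps its rime but takes the other word's onset."""
    (on1, rime1), (on2, rime2) = split_onset_rime(word1), split_onset_rime(word2)
    return on2 + rime1, on1 + rime2

print(swap_onsets("mint", "rug"))   # -> ('rint', 'mug'), the example error above
```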

218 citations


Journal ArticleDOI
TL;DR: It can be inferred that separate functional subsystems process the two types of spatial relations in patients with unilateral stroke, providing evidence for complementary lateralization of the two types of spatial perception.
Abstract: Sixty patients with unilateral stroke (half with left hemisphere damage and half with right hemisphere damage) and a control group (N = 15) matched for age and educational level were tested in two experiments. In one experiment they were first shown, on each trial, a sample drawing depicting one or more objects. Following a short delay, they were asked to identify the drawing when it was paired with a drawing in which the same object(s) was transformed in categorical or coordinate spatial relations. In the other experiment, the same subjects first were shown, on each trial, a sample drawing. They then judged which of two variants (each in one type of spatial relation) looked more similar to the sample drawing. Typically, patients with left-sided stroke mistakenly identified the categorical transformation for the sample drawing in the first task; in the second task, they judged the categorical transformation as more similar to the sample drawing. Patients with right-sided stroke mistakenly identified the coordinate transformations for the sample drawing in the first task, and, in the second task, typically judged the drawings transformed along coordinate spatial relations as more similar to the sample drawing. These findings provide evidence for complementary lateralization of the two types of spatial perception. It can therefore be inferred that separate functional subsystems process the two types of spatial relations.

154 citations


Journal ArticleDOI
TL;DR: Two experiments examined phonological priming effects on reaction times, error rates, and event-related brain potential (ERP) measures in an auditory lexical decision task and found that the ERP effects in the two experiments could be modulations of the same underlying component, possibly the N400.
Abstract: Two experiments examined phonological priming effects on reaction times, error rates, and event-related brain potential (ERP) measures in an auditory lexical decision task. In Experiment 1 related prime-target pairs rhymed, and in Experiment 2 they alliterated (i.e., shared the consonantal onset and vowel). Event-related potentials were recorded in a delayed response task. Reaction times and error rates were obtained both for the delayed and an immediate response task. The behavioral data of Experiment 1 provided evidence for phonological facilitation of word, but not of nonword decisions. The brain potentials were more negative to unrelated than to rhyming word-word pairs between 450 and 700 msec after target onset. This negative enhancement was not present for word-nonword pairs. Thus, the ERP results match the behavioral data. The behavioral data of Experiment 2 provided no evidence for phonological facilitation. However, between 250 and 450 msec after target onset, i.e., considerably earlier than in Experiment 1, brain potentials were more negative for unrelated than for alliterating word-word and word-nonword pairs. It is argued that the ERP effects in the two experiments could be modulations of the same underlying component, possibly the N400. The difference in the timing of the effects is likely to be due to the fact that the shared segments in related stimulus pairs appeared in different word positions in the two experiments.

145 citations


Journal ArticleDOI
TL;DR: An account of the coexistence of neglect in more than one frame of reference and the presence of object-centered neglect under a restricted set of conditions is presented.
Abstract: When patients with right-sided hemispheric lesions neglect information on the left side, with respect to what set of spatial coordinates is left defined? Two potential reference frames were examined in this study, one where left and right are defined with respect to the midline of the viewer and/or environment (viewer/env-centered) and the other where left and right are defined with respect to the midline of the object (object-centered). By rotating the stimulus 90° clockwise or counterclockwise, and instructing patients with neglect to report the colors appearing around the border of a stimulus, an independent measure was obtained for the number of colors reported from the left and right of the viewer/env- and from the object-based reference frame. Whereas significant object-centered neglect was observed only for upper case asymmetrical letters, and not for symmetrical letters or for drawings of familiar animals or objects, significant viewer/env-based neglect was observed with all the stimulus types. We present an account of the coexistence of neglect in more than one frame of reference and the presence of object-centered neglect under a restricted set of conditions.
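
The logic of the 90° rotation can be sketched with a small coordinate example: rotating the stimulus moves the border that is "left" in object-centered coordinates off the viewer's left, so omissions attributable to each reference frame can be counted separately. In the sketch below (Python) the square stimulus with one patch per border is an invented stand-in for the actual displays:

```python
import math

# Sketch: a square stimulus with one colored patch on each border, labeled in
# object-centered terms. Rotating the stimulus dissociates "object left" from
# "viewer left". Stimulus geometry here is invented for illustration.
object_borders = {"object_left": (-1, 0), "object_right": (1, 0),
                  "object_top": (0, 1), "object_bottom": (0, -1)}

def rotate(point, degrees):
    """Rotate a 2-D point about the stimulus center (counterclockwise)."""
    r = math.radians(degrees)
    x, y = point
    return (round(x * math.cos(r) - y * math.sin(r), 6),
            round(x * math.sin(r) + y * math.cos(r), 6))

def viewer_side(point):
    """Which side of the viewer's midline a patch ends up on."""
    x, _ = point
    return "viewer_left" if x < 0 else "viewer_right" if x > 0 else "viewer_midline"

for rotation in (0, 90, -90):
    layout = {label: viewer_side(rotate(pos, rotation))
              for label, pos in object_borders.items()}
    print(f"rotation {rotation:+4d} deg:", layout)
# At 0 deg, object-left and viewer-left coincide; at +/-90 deg the object's
# left border sits on the viewer's midline (top or bottom), so omissions on
# the object's left cannot be explained by viewer/env-centered neglect alone.
```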

Journal ArticleDOI
TL;DR: A computational model is presented that accounts for normal attentional effects by interactivity and competition among representations of different locations in space, without a dedicated disengage mechanism, and shows that when the model is lesioned, it produces the disengage deficit shown by parietal-damaged patients.
Abstract: Parietal-damaged patients respond abnormally slowly to targets presented in the affected hemifield when preceded by cues in the intact hemifield. This inability to disengage attention from the ipsilesional field to reengage it in the contralesional field has been interpreted as evidence for a distinct “disengage” mechanism, localized in parietal cortex. We present a computational model that accounts for normal attentional effects by interactivity and competition among representations of different locations in space, without a dedicated “disengage” mechanism. We show that when the model is lesioned, it produces the “disengage deficit” shown by parietal-damaged patients. This suggests that the deficit observed in such patients can be understood as an emergent property of interactions among the remaining parts of the system, and need not imply the existence of a dedicated “disengage” mechanism in the normal brain.
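
A minimal sketch in the spirit of that account, with my own simplified architecture and invented parameters rather than the authors' implementation: two location units compete through mutual inhibition, the "lesion" is a reduced input gain on the contralesional side, and settling time after target onset stands in for reaction time.

```python
# Minimal competition-among-locations sketch (all parameters invented).
# Two units code "left" and "right" locations; each excites itself, inhibits
# the other, and receives external input from a cue and then a target.
# A lesion is modeled as reduced input gain on the contralesional (left) side.

def settle_time(cue, target, left_gain=1.0, cue_steps=300, max_steps=2000):
    act = {"left": 0.0, "right": 0.0}
    gain = {"left": left_gain, "right": 1.0}
    dt, threshold = 0.01, 0.8
    for t in range(max_steps):
        inp = {"left": 0.0, "right": 0.0}
        if t < cue_steps:
            inp[cue] += 0.5        # weaker transient cue
        else:
            inp[target] += 1.0     # target stays on until response
        new_act = {}
        for s, other in (("left", "right"), ("right", "left")):
            drive = gain[s] * inp[s] + 0.5 * act[s] - 1.5 * act[other]
            new_act[s] = max(0.0, act[s] + dt * (-act[s] + drive))
        act = new_act
        if t >= cue_steps and act[target] > threshold:
            return t - cue_steps   # settling time after target onset
    return max_steps

for cue in ("left", "right"):
    for target in ("left", "right"):
        print(f"lesioned model: cue {cue:>5}, target {target:>5} ->",
              settle_time(cue, target, left_gain=0.6), "steps")
# The slowest case is a left (contralesional) target following a right
# (ipsilesional) cue, i.e., the "disengage" pattern, even though no dedicated
# disengage operation is built into the sketch.
```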

Journal ArticleDOI
TL;DR: The architecture of the model may offer a functional explanation of hitherto mysterious tectocerebellar projections, and a framework for investigating in greater detail how the cerebellum adaptively controls saccadic accuracy.
Abstract: Saccadic accuracy requires that the control signal sent to the motor neurons must be the right size to bring the fovea to the target, whatever the initial position of the eyes (and corresponding state of the eye muscles). Clinical and experimental evidence indicates that the basic machinery for generating saccadic eye movements, located in the brainstem, is not accurate: learning to make accurate saccades requires cerebellar circuitry located in the posterior vermis and fastigial nucleus. How do these two circuits interact to achieve adaptive control of saccades? A model of this interaction is described, based on Kawato's principle of feedback-error-learning. Its three components were (1) a simple controller with no knowledge of initial eye position, corresponding to the superior colliculus; (2) Robinson's internal feedback model of the saccadic burst generator, corresponding to preoculomotor areas in the brainstem; and (3) Albus's Cerebellar Model Arithmetic Computer (CMAC), a neural net model of the cerebellum. The connections between these components were (1) the simple feedback controller passed a (usually inaccurate) command to the pulse generator, and (2) a copy of this command to the CMAC; (3) the CMAC combined the copy with information about initial eye position to (4) alter the gain on the pulse generator's internal feedback loop, thereby adjusting the size of burst sent to the motor neurons. (5) If the saccade were inaccurate, an error signal from the feedback controller adjusted the weights in the CMAC. It was proposed that connection (2) corresponds to the mossy fiber projection from superior colliculus to oculomotor vermis via the nucleus reticularis tegmenti pontis, and connection (5) to the climbing fiber projection from superior colliculus to the oculomotor vermis via the inferior olive. Plausible initialization values were chosen so that the system produced hypometric saccades (as do human infants) at the start of learning, and position-dependent hypermetric saccades when the cerebellum was removed. Simulations for horizontal eye movements showed that accurate saccades from any starting position could be learned rapidly, even if the error signal conveyed only whether the initial saccade were too large or too small. In subsequent tests the model adapted realistically both to simulated weakening of the eye muscles, and to intrasaccadic displacement of the target, thereby mimicking saccadic plasticity in adults. The architecture of the model may therefore offer a functional explanation of hitherto mysterious tectocerebellar projections, and a framework for investigating in greater detail how the cerebellum adaptively controls saccadic accuracy.
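
A heavily reduced sketch of the feedback-error-learning scheme just described, with the collicular controller, pulse generator, and plant collapsed into simple functions and the CMAC replaced by a lookup table over initial eye positions; all numerical values are invented, so this illustrates the learning loop rather than reproducing the model:

```python
import random

# Reduced feedback-error-learning sketch (all numbers invented).
# "Colliculus": issues a desired displacement with no knowledge of eye position.
# "Plant": the saccade produced depends on initial eye position, so the naive
#          command is inaccurate.
# "Cerebellum": a lookup table over initial eye positions (a crude stand-in for
#          a CMAC) that learns a corrective gain, trained by the spatial error.

def plant(command, eye_pos):
    """Saccade actually produced: effectiveness varies with eye position."""
    return command * (1.0 - 0.01 * eye_pos)        # invented position dependence

positions = list(range(-20, 21, 5))                # initial eye positions (deg)
gain = {p: 1.0 for p in positions}                 # learned corrective gains
lr = 0.1                                           # learning rate

for trial in range(3000):
    eye_pos = random.choice(positions)
    desired = random.choice([-1, 1]) * random.uniform(5.0, 15.0)   # degrees
    command = desired * gain[eye_pos]              # collicular command x gain
    error = plant(command, eye_pos) - desired      # post-saccadic spatial error
    gain[eye_pos] -= lr * error / desired          # feedback-error update

for p in positions:
    print(f"eye position {p:+3d} deg: learned gain {gain[p]:.3f} "
          f"(ideal {1.0 / (1.0 - 0.01 * p):.3f})")
# The paper notes that even a sign-only error signal suffices; the proportional
# update above is just the simplest version to write down.
```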

Journal ArticleDOI
TL;DR: Nine patients with chronic, unilateral lesions of the dorsolateral prefrontal cortex including the frontal eye fields (FEF) made saccades toward contralesional and ipsilesional fields; the effect of FEF lesions on saccades contrasted with that observed in a second experiment requiring a key press response.
Abstract: Nine patients with chronic, unilateral lesions of the dorsolateral prefrontal cortex including the frontal eye fields (FEF) made saccades toward contralesional and ipsilesional fields. The saccades were either voluntarily directed in response to arrows in the center of a visual display, or were reflexively summoned by a peripheral visual signal. Saccade latencies were compared to those made by seven neurologic control patients with chronic, unilateral lesions of dorsolateral prefrontal cortex sparing the FEF, and by 13 normal control subjects. In both the normal and neurologic control subjects, reflexive saccades had shorter latencies than voluntary saccades. In the FEF lesion patients, voluntary saccades had longer latencies toward the contralesional field than toward the ipsilesional field. The opposite pattern was found for reflexive saccades: latencies of saccades to targets in the contralesional field were shorter than saccades summoned to ipsilesional targets. Reflexive saccades toward the ipsilesional field had abnormally prolonged latencies; they were comparable to the latencies observed for voluntary saccades. The effect of FEF lesions on saccades contrasted with that observed in a second experiment requiring a key press response: FEF lesion patients were slower in making key press responses to signals detected in the contralesional field. To assess covert attention and preparatory set the effects of precues providing advance information were measured in both saccade and key press experiments. Neither patient group showed any deficiency in using precues to shift attention or to prepare saccades. The FEF facilitates the generation of voluntary saccades and also inhibits reflexive saccades to exogenous signals. FEF lesions may disinhibit the ipsilesional midbrain which in turn may inhibit the opposite colliculus to slow reflexive saccades toward the ipsilesional field.

Journal ArticleDOI
TL;DR: An increase in N400 amplitude was found in response to the words in the Untitled paragraphs relative to the Titled paragraphs, indicating that global coherence does affect the N400; in addition, subjects in the Titled group showed an enhanced P1-N1 component relative to the Untitled group, suggesting that the presence of global coherence allows greater attention to be allocated to early visual processing of words.
Abstract: Previous research on the N400 component of the event-related brain potential (ERP) has dealt primarily with measuring the degree of expectancy on the part of the reader as a result of the context within a sentence. Research has shown that when the final word in a sentence is unexpected or incoherent, a greater N400 amplitude is elicited than if the final word is expected or coherent within the context of the sentence. The present study investigated whether the N400 component is sensitive to global, as well as local, semantic expectancy. Global coherence refers to the ease with which subjects can relate the current proposition they are reading with theme-related ideas. In the present study, the effect of global coherence on event-related brain potentials was tested using four titled and untitled paragraphs (Bransford & Johnson, 1972; Dooling & Lachman, 1971), presented one word at a time. These paragraphs are noncoherent, and are made coherent only with the presentation of a title. The EEG was recorded in response to every word in all four paragraphs. We found an increase in N400 amplitude in response to the words in the Untitled paragraphs relative to the Titled paragraphs, indicating that global coherence does affect the N400. In addition, subjects in the Titled group showed an enhanced P1-N1 component relative to the Untitled group suggesting that the presence of global coherence allows greater attention to be allocated to early visual processing of words.

Journal ArticleDOI
TL;DR: Patients with parietal volume loss showed electrophysiological and behavioral signs of abnormally narrow regions of enhancement of sensory stimulation at an attended location, suggesting a mechanism that may underlie the impairments in spatial attention that follow damage to parietal cortex.
Abstract: Patients with parietal volume loss showed electrophysiological and behavioral signs of abnormally narrow regions of enhancement of sensory stimulation at an attended location. On a test of focused spatial attention, when compared to normal control subjects and patients without parietal abnormality, patients with abnormalities of parietal cortex demonstrated (1) faster button press RTs to targets, (2) earlier P3b event-related potential (ERP) latencies to targets, and (3) larger than normal P1 ERP attention effects (i.e., greater than normal enhancement of sensory responses at an attended location). These data are evidence for visual attention distributed as a spotlight at the attentional focus with little surrounding processing enhancement. This dysfunctional attentional map facilitates simple responses within the attentional beam quite well, but could hinder responses outside the beam. Severely gated sensory responses outside the immediate attentional focus are likely to result in severely delayed responses to information in those locations. This would be consistent with the response delays seen in patients with parietal damage following an incorrect spatial cue (extinction-like pattern), and also with clinical observations of inattention in such patients. The patterns of sensory enhancement seen in these data suggest a mechanism that may underlie the impairments in spatial attention that follow damage to parietal cortex, and help to specify the role of parietal cortex in spatial attention.

Journal ArticleDOI
TL;DR: Results indicate that point-localization in the somatosensory system is accomplished with respect to a spatially defined frame-of-reference and not strictly with respect to somatotopically defined coordinates.
Abstract: Unilateral parietal lobe damage, particularly in the right cerebral hemisphere, leads to neglect of stimuli on the contralateral side. To determine the reference frame within which neglect operates in the somatosensory system, 11 patients with unilateral neglect were touched simultaneously on the left and right side of the wrist of one hand. The hand was tested in both the palm up and the palm down position. Patients neglected the stimuli on the side of space contralateral to the lesion regardless of hand position. These results indicate that point-localization in the somatosensory system is accomplished with respect to a spatially defined frame-of-reference and not strictly with respect to somatotopically defined coordinates.

Journal ArticleDOI
TL;DR: Young patients with damage to the neocerebellum were found to be impaired in rapidly shifting their attention between visual stimuli that occurred within a single location, suggesting a deficit in the ability to selectively activate and deactivate attention.
Abstract: In a previous study, we found that patients with damage to the neocerebellum were significantly impaired in the ability to rapidly shift their attention between ongoing sequences of auditory and visual stimuli (Akshoomoff & Courchesne, 1992). In the present study, young patients with damage to the neocerebellum were found to be impaired in rapidly shifting their attention between visual stimuli that occurred within a single location. Event-related potentials recorded during the shifting attention experiment suggested that this reflects a deficit in the covert ability to selectively activate and deactivate attention. These results lend further support to the hypothesis that the neocerebellum plays a role in the ability to rapidly shift attention.

Journal ArticleDOI
TL;DR: Performance of patients with quadrant lesions on the inherently ambiguous Cognitive Bias Task (CBT) suggests sexual dimorphism in the fundamental aspects of functional cortical geometry, by emphasizing different cerebral axes.
Abstract: Performance of patients with quadrant lesions on the inherently ambiguous Cognitive Bias Task (CBT) suggests sexual dimorphism in the fundamental aspects of functional cortical geometry, by emphasizing different cerebral axes. In right-handed males, extreme context-dependent and context-independent response selection biases are reciprocally linked to left vs. right frontal systems. In right-handed females, these complementary biases appear to be reciprocally linked to posterior vs. frontal cortices. Frontal lobe functions are more lateralized in males than females due to sexual dimorphism of the left frontal systems. Both in males and females, patterns of CBT scores in non-right-handers with quadrant lesions are opposite to those found in right-handers. This suggests the existence of two functionally and neurally distinct cognitive selection mechanisms. Both mechanisms involve the frontal lobes, but their exact neuroanatomy depends on sex and handedness.

Journal ArticleDOI
TL;DR: The limitations in the ability of a patient with visual form agnosia to use different kinds of pattern information to guide her hand rotation suggest that such information may need to be transmitted from the ventral visual stream to these parietal areas to enable the full range of prehensive acts in the intact individual.
Abstract: We have previously reported that a patient (DF) with visual form agnosia shows accurate guidance of hand and finger movements with respect to the size, orientation, and shape of the objects to which her movements are directed. Despite this, she is unable to indicate any knowledge about these object properties. In the present study, we investigated the extent to which DF is able to use visual shape or pattern to guide her hand movements. In the first experiment, we found that when presented with a stimulus aperture cut in the shape of the letter T, DF was able to guide a T-shaped form into it on about half of the trials, across a range of different stimulus orientations. On the remaining trials, her responses were almost always perpendicular to the correct orientation. Thus, the visual information guiding the rotation of DF's hand appears to be limited to a single orientation. In other words, the visuomotor transformations mediating her hand rotation appear to be unable to combine the orientations of the stem and the top of the T, although they are sensitive to the orientation of the element(s) that comprise the T. In a second experiment, we examined her ability to use different sources of visual information to guide her hand rotation. In this experiment, DF was required to guide the leading edge of a hand-held card onto a rectangular target positioned at different orientations on a flat surface. Here the orientation of her hand was determined primarily by the predominant orientation of the luminance edge elements present in the stimulus, rather than by information about orientation that was conveyed by nonluminance boundaries. Little evidence was found for an ability to use contour boundaries defined by Gestalt principles of grouping (good continuation or similarity) or “nonaccidental” image properties (collinearity) to guide her movements. We have argued elsewhere that the dorsal visual pathway from occipital to parietal cortex may underlie these preserved visuomotor skills in DF. If so, the limitations in her ability to use different kinds of “pattern” information to guide her hand rotation suggest that such information may need to be transmitted from the ventral visual stream to these parietal areas to enable the full range of prehensive acts in the intact individual.

Journal ArticleDOI
TL;DR: By the end of the book the reader is convinced that dynamic systems theory can provide us with rich metaphors in our attempts to understand behavior and development, and that this is a pioneering volume.
Abstract: Developmental scientists have always been receptive to dynamic systems because they come face to face with complexity, nonlinearity, and context dependency every day. The works of scientists from Werner and von Bertalanffy through Piaget, Lerner, Sameroff, and Gottlieb attest to this receptivity. It still seems that the broad scholarly environment became interested only recently in the very general principles of process and change. The enthusiasm for this approach expressed by the participants of the workshop “Dynamic Systems in Development,” held in Kansas City in 1989, convinced Linda B. Smith and Esther Thelen that the papers from the workshop and some invited papers would find a large audience. This book intends to be a pioneering attempt to bridge the general principles with applications to developmental issues that span domains, levels, and time scales. The chapters of the book are organized into two major parts. Part I is introduced with a tutorial on the general principles of dynamic systems by Kelso, Ding, and Schöner. This chapter presents dynamic principles in a generic form applicable to questions of pattern formation in any biological organism over many time scales. The remainder of the first part contains chapters whose focus is primarily on the motor performance of developing infants and children. In Part II the authors cast their nets more broadly into various domains of inquiry, including perception and its relation to action, infant state, cognition and language, and behavior. Thus, they move within the largest social and ecological spheres. Finally, Richard Aslin provides a witty concluding commentary and critique. Experts in systems theories will find an excellent overview of dynamic systems applications in this book. Those who are pursuing work in the developmental sciences but are not familiar with the terms used in the book may initially doubt that it is accessible to them. Fortunately, the logic, definitions, and clear explanations help the reader negotiate some of the topics presented here that might otherwise be perplexing. By the end of the book the reader is convinced that dynamic systems theory can provide us with rich metaphors in our attempts to understand behavior and development. Yet, despite the best efforts of the authors to go beyond metaphors and actually provide new insights into application, the result is not completely successful. Most of the work presented here is based on data obtained independently of the dynamic systems approach. In addition, the term “development” covers a much broader ontogenetic range than postpartum human development. Thus, it might have been helpful if the chapters presented here had been expanded to incorporate studies on the dynamic approach to morphological changes during prenatal maturation. Summing up, this is a pioneering volume. It provides new insight into the rather complex and important field of the developmental sciences. The well-written, clear text and the clever structure of the book offer a valuable resource for those familiar with the dynamic systems approach. It is also accessible to those working in related areas who might regard themselves as largely ignorant of systems theories.

Journal ArticleDOI
TL;DR: Right hemisphere attentional allocation to events in the ipsilateral visual half-field is mediated in part via intact subcortical systems, consistent with a body of evidence from studies in patients with cortical lesions who display different attentional deficits for right versus left hemisphere damage.
Abstract: Hemispheric specialization and subcortical processes in visual attention were investigated in callosotomy (split-brain) patients by measuring reaction times to lateralized stimuli in a spatial cuing paradigm. Cuing effects were obtained for targets presented to the right hemisphere (left visual hemifield) but not for those presented to the left hemisphere (right visual hemifield). These cuing effects were manifest as faster reaction times when the cue correctly indicated the location of the subsequent target (valid trials), as compared to trials in which the cue and target appeared in opposite hemifields (invalid trials). This pattern suggests that the right hemisphere allocated attention to cued locations in either visual hemifield, whereas the left hemisphere allocated attention predominantly to the right hemifield. This finding is consistent with a body of evidence from studies in patients with cortical lesions who display different attentional deficits for right versus left hemisphere damage. Because the present pattern occurs in patients whose cerebral hemispheres are separated at the cortical level, it suggests that right hemisphere attentional allocation to events in the ipsilateral visual half-field is mediated in part via intact subcortical systems.

Journal ArticleDOI
TL;DR: In a group of four commissurotomy patients, the search rate for bilateral stimulus arrays was found to be approximately twice as fast as the search rate for unilateral arrays, indicating that the separated hemispheres were able to scan their respective hemifields independently.
Abstract: Previous studies of visuospatial attention indicated that the isolated cerebral hemispheres of split-brain patients maintain an integrated, unitary focus of attention, presumably due to subcortical attentional mechanisms. The present study examined whether a unitary attentional focus would also be observed during a visual search task in which subjects scanned stimulus arrays for a target item. In a group of four commissurotomy patients, the search rate for bilateral stimulus arrays was found to be approximately twice as fast as the search rate for unilateral arrays, indicating that the separated hemispheres were able to scan their respective hemifields independently. In contrast, the search rates for unilateral and bilateral arrays were approximately equal in a group of six normal control subjects, suggesting that the intact corpus callosum in these subjects is responsible for maintaining a unitary attentional focus during visual search.
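
The "twice as fast" pattern follows from simple serial-search arithmetic once the disconnected hemispheres are allowed to scan their own hemifields in parallel. A minimal sketch (Python; the per-item inspection time is invented and only the ratios matter):

```python
# Sketch of the serial-search arithmetic behind the result above.
# Invented per-item scan time; only the ratios matter.
ITEM_TIME_MS = 50          # time to inspect one item (invented)

def search_time(n_items, independent_hemispheres, bilateral):
    """Time to exhaust the array (worst case, target absent)."""
    if independent_hemispheres and bilateral:
        # Split-brain, bilateral array: each hemisphere scans its own half in parallel.
        return (n_items / 2) * ITEM_TIME_MS
    # Otherwise a single attentional focus scans all items serially.
    return n_items * ITEM_TIME_MS

n = 8
print("split-brain, unilateral:", search_time(n, True, False), "ms")
print("split-brain, bilateral: ", search_time(n, True, True), "ms")   # ~2x faster rate
print("control, unilateral:    ", search_time(n, False, False), "ms")
print("control, bilateral:     ", search_time(n, False, True), "ms")  # no bilateral advantage
```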

Journal ArticleDOI
TL;DR: It is concluded that the anterior attention system increases the potency of processing of consciously perceived stimuli, but there is a component of semantic priming that occurs without both focusing of attention and awareness, involving different cerebral areas to those involved in attention to language.
Abstract: This research takes advantage of combined cognitive and anatomical studies to ask whether attention is necessary for high-level word processing to occur. In Experiment 1 we used a lexical decision task in which two prime words, one in the fovea and the other in the parafovea, appeared simultaneously for 150 msec, followed by a foveal target (word/nonword). Target words were semantically related either to the foveal or to the parafoveal word, or unrelated to them. In one block of trials subjects were also required to perform an auditory shadowing task. From PET studies we know that shadowing activates the anterior cingulate cortex, involved in selective attention. If the anterior attention system is always involved in semantic processing, shadowing should reduce semantic priming obtained from both foveal and parafoveal words. In contrast, if semantic priming by parafoveal words is independent of activation in that attention area, priming will not be affected by shadowing. Our results supported the latter hypothesis. A large priming effect arose from foveal primes, which was reduced by shadowing. For parafoveal primes a smaller priming effect arose, which was not affected by shadowing. In Experiment 2 prime words were masked. Semantic priming was reliable for both foveal and parafoveal words but there were then no differences between them. Most important, the size of priming was similar to that obtained from parafoveal words in Experiment 1. We conclude that the anterior attention system increases the potency of processing of consciously perceived stimuli, but there is a component of semantic priming that occurs without both focusing of attention and awareness, involving different cerebral areas to those involved in attention to language.

Journal ArticleDOI
TL;DR: Electric and magnetic recordings of average power within the high alpha band were made over the parietal and occipital areas of the scalp while subjects were engaged in the mental imagery task of Cooper and Shepard, finding that for large rotation angles of the probe figures, a shift in the spatial pattern of suppression indicates some additional activity in left occipital areas.
Abstract: Electric and magnetic recordings of average power within the high alpha band (10--12 Hz) were made over the parietal and occipital areas of the scalp while subjects were engaged in the mental imagery task of Cooper and Shepard. The subject had to determine whether an abstract probe figure was identical to a memory figure presented earlier at a different orientation, or whether it was the mirror image of the memory figure. Alpha power was found to be suppressed while the subjects were engaged in the comparison, and the duration of suppression increased with the minimum rotation angle to achieve a match. Strong correlations between suppression duration and reaction time give further evidence that the visual cortex is engaged in the process of mental imagery. Moreover, for large rotation angles of the probe figures, where the task is markedly more difficult, a shift in the spatial pattern of suppression indicates some additional activity in left occipital areas.

Journal ArticleDOI
TL;DR: A neural network architecture was designed that learned to produce neural commands to a set of muscle-like actuators based only on information about spatial errors to generate point-to-point horizontal arm movements and the resulting muscle activation patterns and hand trajectories were found to be similar to those observed experimentally for human subjects.
Abstract: Unconstrained point-to-point reaching movements performed in the horizontal plane tend to follow roughly straight hand paths with smooth, bell-shaped velocity profiles. The objective of the research reported here was to explore the hypothesis that these data reflect an underlying learning process that prefers simple paths in space. Under this hypothesis, movements are learned based only on spatial errors between the actual hand path and a desired hand path; temporally varying targets are not allowed. We designed a neural network architecture that learned to produce neural commands to a set of muscle-like actuators based only on information about spatial errors. Following repetitive executions of the reaching task, the network was able to generate point-to-point horizontal arm movements and the resulting muscle activation patterns and hand trajectories were found to be similar to those observed experimentally for human subjects. The implications of our results with respect to current theories of multijoint limb movement generation are discussed.
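
A minimal sketch of the idea of learning from spatial errors alone, under my own simplifying assumptions rather than the authors' network: a two-joint planar arm is parameterized by a few joint-angle via-points, the only training signal is the distance of intermediate hand positions from the straight start-to-target line (no time-indexed targets, no muscle model), and the optimization is crude finite-difference gradient descent.

```python
import math

# Sketch (all parameters invented): movement is parameterized by joint-angle
# via-points; the only training signal is the spatial error between the hand
# path and the straight start-to-target line.

L1, L2 = 0.3, 0.3                      # link lengths (m)

def hand(q):
    """Forward kinematics of a 2-link planar arm."""
    s, e = q
    return (L1 * math.cos(s) + L2 * math.cos(s + e),
            L1 * math.sin(s) + L2 * math.sin(s + e))

def dist_to_segment(p, a, b):
    """Distance from point p to segment a-b (the desired straight hand path)."""
    ax, ay = a; bx, by = b; px, py = p
    t = max(0.0, min(1.0, ((px - ax) * (bx - ax) + (py - ay) * (by - ay)) /
                          ((bx - ax) ** 2 + (by - ay) ** 2)))
    cx, cy = ax + t * (bx - ax), ay + t * (by - ay)
    return math.hypot(px - cx, py - cy)

def spatial_error(params, q_start, q_end):
    """Sum of squared distances of via-point hand positions from the line."""
    a, b = hand(q_start), hand(q_end)
    via = [list(params[i:i + 2]) for i in range(0, len(params), 2)]
    return sum(dist_to_segment(hand(q), a, b) ** 2 for q in via)

q_start, q_end = (0.3, 1.8), (1.2, 0.6)        # start/end joint angles (rad)
# Initialize via-points by joint-space interpolation (gives a curved hand path).
params = []
for k in (0.25, 0.5, 0.75):
    params += [q_start[0] + k * (q_end[0] - q_start[0]),
               q_start[1] + k * (q_end[1] - q_start[1])]

lr, eps = 0.5, 1e-4
print("initial spatial error:", round(spatial_error(params, q_start, q_end), 6))
for step in range(300):
    grads = []
    for i in range(len(params)):
        bumped = params[:]
        bumped[i] += eps
        grads.append((spatial_error(bumped, q_start, q_end) -
                      spatial_error(params, q_start, q_end)) / eps)
    params = [p - lr * g for p, g in zip(params, grads)]
print("final spatial error:  ", round(spatial_error(params, q_start, q_end), 6))
# After training, the via-points have been pulled toward joint configurations
# whose hand positions lie on the straight start-to-target line.
```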

Journal ArticleDOI
TL;DR: A pattern of results suggests that for both cerebral hemispheres, somewhat different aspects of visual information are relevant for categorical versus coordinate spatial processing and that the right hemisphere is superior to the left for coordinate (but not categorical) spatial processing.
Abstract: The present experiment examined the effects of dioptric blurring on the performance of two different spatial processing tasks using the same visual stimuli. One task (the above/below, categorical task) required subjects to indicate whether a dot was above or below a horizontal line. The other task (the coordinate, near/far task) required subjects to indicate whether the dot was within 3 mm of the line. For both tasks, the stimuli on each trial were presented to either the right visual field and left hemisphere (RVF/LH) or the left visual field and right hemisphere (LVF/RH). For the above/below task, dioptric blurring consistently increased reaction time (RT) and did so equally on LVF/RH and RVF/LH trials. Furthermore, there was no significant difference between the two visual fields for either clear or blurred stimuli. For the near/far task, dioptric blurring had no consistent effect on either RT or error rate for either visual field. On an initial block of trials, however, there were significantly fewer errors on LVF/RH than on RVF/LH trials, with the LVF/RH advantage being independent of whether the stimuli were clear or blurred. This initial LVF/RH advantage disappeared quickly with practice, regardless of whether the stimuli were clear or blurred. This pattern of results suggests that for both cerebral hemispheres, somewhat different aspects of visual information are relevant for categorical versus coordinate spatial processing and that the right hemisphere is superior to the left for coordinate (but not categorical) spatial processing.
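
The two judgments can be read as two different predicates over the same measurement of the stimulus; a minimal sketch (Python; the 3 mm criterion comes from the task description, the example offsets are invented):

```python
# Sketch: the same measurement (signed distance of the dot from the line, mm)
# supports a categorical judgment (above/below) and a coordinate judgment
# (within 3 mm or not). Example offsets are invented.
CRITERION_MM = 3.0

def categorical(dot_offset_mm):
    return "above" if dot_offset_mm > 0 else "below"

def coordinate(dot_offset_mm):
    return "near" if abs(dot_offset_mm) <= CRITERION_MM else "far"

for offset in (1.5, -2.0, 4.5, -6.0):
    print(f"offset {offset:+.1f} mm -> {categorical(offset)}, {coordinate(offset)}")
```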

Journal ArticleDOI
TL;DR: A neural model is described of how the brain may autonomously learn a body-centered representation of a three-dimensional (3-D) target position by combining information about retinal target position, eye position, and head position in real time.
Abstract: A neural model is described of how the brain may autonomously learn a body-centered representation of a three-dimensional (3-D) target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation---otherwise known as a parcellated distributed representation---of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of nonfoveated target position to learn a visuomotor representation of both foveated and nonfoveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors which act as error signals that drive the learning process, as well as control the on-line merging of multimodal information.
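
The opponent-processing step described above comes down to taking sums and differences of the two eyes' rotation signals. The sketch below (Python) works this out in a simplified horizontal-plane geometry with an invented interocular distance and target positions; it illustrates the sum/difference recoding only, not the model's learning or its use of outflow signals.

```python
import math

# Sketch of the sum/difference (opponent) recoding described above, in a
# simplified horizontal-plane geometry. Interocular distance and targets are
# invented; angles are the rotations each eye needs to foveate the target,
# measured from straight ahead (positive = leftward).

IOD = 0.065                              # interocular distance in meters (assumed)

def eye_angles(target_x, target_z):
    """Horizontal rotation of each eye to foveate a target at (x, z),
    with x lateral (positive left) and z straight-ahead distance."""
    left_eye_x, right_eye_x = +IOD / 2, -IOD / 2
    theta_left = math.atan2(target_x - left_eye_x, target_z)
    theta_right = math.atan2(target_x - right_eye_x, target_z)
    return theta_left, theta_right

def opponent_recoding(theta_left, theta_right):
    """Sum and difference of the two eyes' signals: a cyclopean (version-like)
    angle and a vergence angle."""
    version = (theta_left + theta_right) / 2      # angular coordinate
    vergence = theta_right - theta_left           # grows as the target nears
    return version, vergence

for x, z in [(0.0, 1.0), (0.0, 0.3), (0.2, 0.5)]:
    tl, tr = eye_angles(x, z)
    version, vergence = opponent_recoding(tl, tr)
    print(f"target ({x:+.2f}, {z:.2f}) m -> version {math.degrees(version):+6.2f} deg, "
          f"vergence {math.degrees(vergence):6.2f} deg")
# Nearer targets yield larger vergence at the same version angle, so the
# (version, vergence) pair specifies egocentric direction and distance.
```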


Journal ArticleDOI
TL;DR: The results suggest that anterior brain regions play an important role in the development of visual attention, and that left hemisphere attentional processes are particularly affected by disruption of anterior function.
Abstract: The neural systems underlying visual attention have been well-documented in adults through studies examining the effects of brain lesions on specific attentional operations (Posner, Cohen, & Rafal, 1982). The questions of how this attentional system develops and how it is affected by disruption during development are only beginning to be addressed. In the present study, a covert orienting task (Posner et al., 1982) was administered to 33 children with bilateral perinatal injury to anterior, posterior, or diffuse brain regions and 36 normal children to determine the effects of such injury on visual attention. Children with bilateral anterior lesions showed lateralized impairment indicating compromise of left hemisphere early attentional processes. In contrast, children with posterior lesions that typically disrupt attention in adults showed only general slowing, with no differences in right or left visual field performance or deficits in specific attentional operations. These results suggest that anterior brain regions play an important role in the development of visual attention, and that left hemisphere attentional processes are particularly affected by disruption of anterior function.