
Showing papers on "Visual perception published in 1998"


Journal ArticleDOI
TL;DR: The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.

1,776 citations


Journal ArticleDOI
TL;DR: Three experiments using a stimulus-response compatibility paradigm with photographs of common graspable objects as stimuli show that compatibility effects of an irrelevant stimulus dimension can be obtained across a wide variety of naturally occurring stimuli, and support the view that intentions to act operate on already existing motor representations of the possible actions in a visual scene.
Abstract: Accounts of visually directed actions usually assume that their planning begins with an intention to act. This article describes three experiments that challenged this view through the use of a stimulus-response compatibility paradigm with photographs of common graspable objects as stimuli. Participants had to decide as fast as possible whether each object was upright or inverted. Experiments 1 and 2 examined the effect of the irrelevant dimension of left-right object orientation on bimanual and unimanual keypress responses. Experiment 3 examined wrist rotation responses to objects requiring either clockwise or anticlockwise wrist rotations when grasped. The results (a) are consistent with the view that seen objects automatically potentiate components of the actions they afford, (b) show that compatibility effects of an irrelevant stimulus dimension can be obtained across a wide variety of naturally occurring stimuli, and (c) support the view that intentions to act operate on already existing motor representations of the possible actions in a visual scene. The use of vision to control actions has typically been framed as a problem that begins with the intention to act. How we use visual information depends, after all, on the goal of the action. Grasping a ball and kicking it require both representing different visual information and transforming that information into very different muscle commands. In this article we explore the possibility that visual objects potentiate actions even in the absence of explicit intentions to act. There are many reasons for supposing that a representation of the visual world includes information about possible actions. Perception and action are intimately linked. Our decisions to act are not made in a vacuum but are informed by the possibilities inherent in any visual scene. In this sense, vision is important for providing information about what actions are possible, as well as for the on-line control of their execution.
Furthermore, knowledge of the possibilities for action depends critically on the relation between the visual world and the physical apparatus of the perceiver--a point long emphasized in the ecological approach to perception and action. How might such action possibilities be represented? A plausible proposal is that the perception of an object (or scene) results in the potentiation of the actions that

1,006 citations


Journal ArticleDOI
TL;DR: Functional anatomical and single-unit recording studies indicate that a set of neural signals in parietal and frontal cortex mediates the covert allocation of attention to visual locations, as originally proposed by psychological studies.
Abstract: Functional anatomical and single-unit recording studies indicate that a set of neural signals in parietal and frontal cortex mediates the covert allocation of attention to visual locations, as originally proposed by psychological studies. This frontoparietal network is the source of a location bias that interacts with extrastriate regions of the ventral visual system during object analysis to enhance visual processing. The frontoparietal network is not exclusively related to visual attention, but may coincide or overlap with regions involved in oculomotor processing. The relationship between attention and eye movement processes is discussed at the psychological, functional anatomical, and cellular level of analysis.

995 citations


Journal ArticleDOI
19 Jun 1998-Science
TL;DR: The results suggest that frontoparietal areas play a central role in conscious perception, biasing the content of visual awareness toward abstract internal representations of visual scenes, rather than simply toward space.
Abstract: When dissimilar images are presented to the two eyes, perception alternates spontaneously between each monocular view, a phenomenon called binocular rivalry. Functional brain imaging in humans was used to study the neural basis of these subjective perceptual changes. Cortical regions whose activity reflected perceptual transitions included extrastriate areas of the ventral visual pathway, and parietal and frontal regions that have been implicated in spatial attention; whereas the extrastriate areas were also engaged by nonrivalrous perceptual changes, activity in the frontoparietal cortex was specifically associated with perceptual alternation only during rivalry. These results suggest that frontoparietal areas play a central role in conscious perception, biasing the content of visual awareness toward abstract internal representations of visual scenes, rather than simply toward space.

713 citations


Journal ArticleDOI
TL;DR: Experiments suggest that attention plays a central role in solving the 'binding problem', which concerns the way in which the authors select and integrate the separate features of objects in the correct combinations.
Abstract: The seemingly effortless ability to perceive meaningful objects in an integrated scene actually depends on complex visual processes. The 'binding problem' concerns the way in which we select and integrate the separate features of objects in the correct combinations. Experiments suggest that attention plays a central role in solving this problem. Some neurological patients show a dramatic breakdown in the ability to see several objects; their deficits suggest a role for the parietal cortex in the binding process. However, indirect measures of priming and interference suggest that more information may be implicitly available than we can consciously access.

673 citations


Journal ArticleDOI
27 Aug 1998-Nature
TL;DR: Modulations of visual signals in two adjacent cortical fields, LIP and 7a, are referenced to the body and to the world, respectively, and segregation of spatial information is consistent with a streaming of information.
Abstract: In order to direct a movement towards a visual stimulus, visual spatial information must be combined with postural information. For example, directing gaze (eye plus head) towards a visible target requires the combination of retinal image location with eye and head position to determine the location of the target relative to the body. Similarly, world-referenced postural information is required to determine where something lies in the world. Posterior parietal neurons recorded in monkeys combine visual information with eye and head position. A population of such cells could make up a distributed representation of target location in an extraretinal frame of reference. However, previous studies have not distinguished between world-referenced and body-referenced signals. Here we report that modulations of visual signals (gain fields) in two adjacent cortical fields, LIP and 7a, are referenced to the body and to the world, respectively. This segregation of spatial information is consistent with a streaming of information, with one path carrying body-referenced information for the control of gaze, and the other carrying world-referenced information for navigation and other tasks that require an absolute frame of reference.

488 citations


Journal ArticleDOI
TL;DR: Evidence for orientation selectivity in V1 is shown by measuring transient functional MRI increases produced at the change in response to gratings of differing orientations, and the bandwidth of the orientation "transients" is measured.
Abstract: Human area V1 offers an excellent opportunity to study, using functional MRI, a range of properties in a specific cortical visual area, whose borders are defined objectively and convergently by retinotopic criteria. The retinotopy in V1 (also known as primary visual cortex, striate cortex, or Brodmann's area 17) was defined in each subject by using both stationary and phase-encoded polar coordinate stimuli. Data from V1 and neighboring retinotopic areas were displayed on flattened cortical maps. In additional tests we revealed the paired cortical representations of the monocular "blind spot." We also activated area V1 preferentially (relative to other extrastriate areas) by presenting radial gratings alternating between 6% and 100% contrast. Finally, we showed evidence for orientation selectivity in V1 by measuring transient functional MRI increases produced at the change in response to gratings of differing orientations. By systematically varying the orientations presented, we were able to measure the bandwidth of the orientation "transients" (45 degrees).

473 citations


Journal ArticleDOI
TL;DR: Functional magnetic resonance imaging is used to study patients with the Charles Bonnet syndrome to find that hallucinations of color, faces, textures and objects correlate with cerebral activity in ventral extrastriate visual cortex, that the content of the hallucinations reflects the functional specializations of the region and that patients who hallucinate have increased ventral extrastriate activity, which persists between hallucinations.
Abstract: Despite recent advances in functional neuroimaging, the apparently simple question of how and where we see—the neurobiology of visual consciousness—continues to challenge neuroscientists. Without a method to differentiate neural processing specific to consciousness from unconscious afferent sensory signals, the issue has been difficult to resolve experimentally. Here we use functional magnetic resonance imaging (fMRI) to study patients with the Charles Bonnet syndrome, for whom visual perception and sensory input have become dissociated. We found that hallucinations of color, faces, textures and objects correlate with cerebral activity in ventral extrastriate visual cortex, that the content of the hallucinations reflects the functional specializations of the region and that patients who hallucinate have increased ventral extrastriate activity, which persists between hallucinations.

457 citations


Journal ArticleDOI
TL;DR: The dissociation observed in the performance of dyslexic individuals on different auditory tasks suggests a sub-modality division similar to that already described in the visual system, which may provide a non-linguistic means of identifying children at risk of reading failure.

426 citations


Journal ArticleDOI
TL;DR: The temporal characteristics of masking illusions in humans and corresponding neuronal responses in the primary visual cortex of awake and anesthetized monkeys are compared to suggest that, for targets that can be masked (those of short duration), the transient neuronal responses associated with onset and turning off of the target may be important in its visibility.
Abstract: A brief visual target stimulus may be rendered invisible if it is immediately preceded or followed by another stimulus. This class of illusions, known as visual masking, may allow insights into the neural mechanisms that underlie visual perception. We have therefore explored the temporal characteristics of masking illusions in humans, and compared them with corresponding neuronal responses in the primary visual cortex of awake and anesthetized monkeys. Stimulus parameters that in humans produce forward masking (in which the mask precedes the target) suppress the transient on-response to the target in monkey visual cortex. Those that produce backward masking (in which the mask comes after the target) inhibit the transient after-discharge, the excitatory response that occurs just after the disappearance of the target. These results suggest that, for targets that can be masked (those of short duration), the transient neuronal responses associated with onset and turning off of the target may be important in its visibility.

400 citations


Journal ArticleDOI
TL;DR: Joint visual attention does not reliably appear prior to 10 months of age, a gaze-following response can be learned, and simple learning is not sufficient as the mechanism through which joint attention cues acquire their signal value.
Abstract: Two experiments examined the origins of joint visual attention with a training procedure. In Experiment 1, infants aged 6-11 months were tested for a gaze-following (joint visual attention) response under feedback and no feedback conditions. In Experiment 2, infants 8-9 months received feedback for either following the experimenter's gaze (natural group) or looking to the opposite side (unnatural group). Results of the 2 experiments indicate that (a) joint visual attention does not reliably appear prior to 10 months of age, (b) from about 8 months of age, a gaze-following response can be learned, and (c) simple learning is not sufficient as the mechanism through which joint attention cues acquire their signal value.

Journal ArticleDOI
TL;DR: Evidently, grip aperture is calibrated to the true size of an object, even when perception of object size is distorted by a pictorial illusion, a result that is consistent with recent suggestions that visually guided prehension and visual perception are mediated by separate visual pathways.
Abstract: The present study examined the effect of a size-contrast illusion (Ebbinghaus or Titchener Circles Illusion) on visual perception and the visual control of grasping movements. Seventeen right-handed participants picked up and, on other trials, estimated the size of "poker-chip" disks, which functioned as the target circles in a three-dimensional version of the illusion. In the estimation condition, subjects indicated how big they thought the target was by separating their thumb and forefinger to match the target's size. After initial viewing, no visual feedback from the hand or the target was available. Scaling of grip aperture was found to be strongly correlated with the physical size of the disks, while manual estimations of disk size were biased in the direction of the illusion. Evidently, grip aperture is calibrated to the true size of an object, even when perception of object size is distorted by a pictorial illusion, a result that is consistent with recent suggestions that visually guided prehension and visual perception are mediated by separate visual pathways.

Journal ArticleDOI
16 Jul 1998-Nature
TL;DR: These findings agree with the proposal that BA37 is an association area that integrates converging inputs from many regions, and confirm a prediction of theories of brain function that depend on convergence zones.
Abstract: Reading words and naming pictures involves the association of visual stimuli with phonological and semantic knowledge. Damage to a region of the brain in the left basal posterior temporal lobe (BA37), which is strategically situated between the visual cortex and the more anterior temporal cortex, leads to reading and naming deficits. Additional evidence implicating this region in linguistic processing comes from functional neuroimaging studies of reading in normal subjects and subjects with developmental dyslexia. Here we test whether the visual component of reading is essential for activation of BA37 by comparing cortical activations elicited by word processing in congenitally blind, late-blind and sighted subjects using functional neuroimaging. Despite the different modalities used (visual and tactile), all groups of subjects showed a common activation of BA37 by words relative to non-word letter-strings. These findings agree with the proposal that BA37 is an association area that integrates converging inputs from many regions. Our study confirms a prediction of theories of brain function that depend on convergence zones; the absence of one input (that is, visual) does not alter the response properties of such a convergence region.

Journal ArticleDOI
TL;DR: A strikingly large number of neurons in the early visual areas remained active during the perceptual suppression of the stimulus, a finding suggesting that conscious visual perception might be mediated by only a subset of the cells exhibiting stimulus selective responses.
Abstract: Figures that can be seen in more than one way are invaluable tools for the study of the neural basis of visual awareness, because such stimuli permit the dissociation of the neural responses that underlie what we perceive at any given time from those forming the sensory representation of a visual pattern. To study the former type of responses, monkeys were subjected to binocular rivalry, and the response of neurons in a number of different visual areas was studied while the animals reported their alternating percepts by pulling levers. Perception-related modulations of neural activity were found to occur to different extents in different cortical visual areas. The cells that were affected by suppression were almost exclusively binocular, and their proportion was found to increase in the higher processing stages of the visual system. The strongest correlations between neural activity and perception were observed in the visual areas of the temporal lobe. A strikingly large number of neurons in the early visual areas remained active during the perceptual suppression of the stimulus, a finding suggesting that conscious visual perception might be mediated by only a subset of the cells exhibiting stimulus selective responses. These physiological findings, together with a number of recent psychophysical studies, offer a new explanation of the phenomenon of binocular rivalry. Indeed, rivalry has long been considered to be closely linked with binocular fusion and stereopsis, and the sequences of dominance and suppression have been viewed as the result of competition between the two monocular channels. The physiological data presented here are incompatible with this interpretation. Rather than reflecting interocular competition, the rivalry is most probably between the two different central neural representations generated by the dichoptically presented stimuli. The mechanisms of rivalry are probably the same as, or very similar to, those underlying multistable perception in general, and further physiological studies might reveal much about the neural mechanisms of our perceptual organization.

Journal ArticleDOI
01 Jan 1998-Brain
TL;DR: It is concluded that agnosopsia, gnosopsia and gnosanopsia are all manifestations of a single condition which the authors call the Riddoch syndrome, in deference to the British neurologist who, in 1917, first characterized the major aspect of this disability.
Abstract: We have studied a patient, G.Y., who was rendered hemianopic following a lesion affecting the primary visual cortex (area V1), sustained 31 years ago, with the hope of characterizing his ability to discriminate visual stimuli presented in his blind field, both psychophysically and in terms of the brain activity revealed by imaging methods. Our results show that (i) there is a correlation between G.Y.'s capacity to discriminate stimuli presented in his blind field and his conscious awareness of the same stimuli and (ii) that G.Y.'s performance on some tasks is characterized by a marked variability, both in terms of his awareness for a given level of discrimination and in his discrimination for a given level of awareness. The observations on G.Y., and a comparison of his capacities with those of normal subjects, leads us to propose a simple model of the relationship between visual discrimination and awareness. This supposes that the two independent capacities are very tightly coupled in normal subjects (gnosopsia) and that the effect of a V1 lesion is to uncouple them, but only slightly. This uncoupling leads to two symmetrical departures, on the one hand to gnosanopsia (awareness without discrimination) and on the other to agnosopsia (discrimination without awareness). Our functional MRI studies show that V5 is always active when moving stimuli, whether slow or fast, are presented to his blind field and that the activity in V5 co-varies with less intense activity in other cortical areas. The difference in cerebral activity between gnosopsia and agnosopsia is that, in the latter, the activity in V5 is less intense and lower statistical thresholds are required to demonstrate it.
Direct comparison of the brain activity during individual 'aware' and 'unaware' trials, corrected for the confounding effects of motion, has also allowed us, for the first time, to titrate conscious awareness against brain activity and show that there is a straightforward relationship between awareness and activity, both in individual cortical areas, in this case area V5, and in the reticular activating system. The imaging evidence, together with the variability in his levels of awareness and discrimination, manifested in his capacity to discriminate consciously on some occasions and unconsciously on others, leads us to conclude that agnosopsia, gnosopsia and gnosanopsia are all manifestations of a single condition which we call the Riddoch syndrome, in deference to the British neurologist who, in 1917, first characterized the major aspect of this disability. We discuss the significance of these results in relation to historical views about the organization of the visual brain.

Journal ArticleDOI
TL;DR: Recent findings from experiments on cross-modal links in attention are reviewed, which reveal extensive spatial links between the modalities and suggest information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations in which attention can be directed.
Abstract: A great deal is now known about the effects of spatial attention within individual sensory modalities, especially for vision and audition. However, there has been little previous study of possible crossmodal links in attention. Here, we review recent findings from our own experiments on this topic, which reveal extensive spatial links between the modalities. An irrelevant but salient event presented within touch, audition, or vision, can attract covert spatial attention in the other modalities (with the one exception that visual events do not attract auditory attention when saccades are prevented). By shifting receptors in one modality relative to another, the spatial coordinates of these crossmodal interactions can be examined. For instance, when a hand is placed in a new position, stimulation of it now draws visual attention to a correspondingly different location, although some aspects of attention do not spatially remap in this way. Crossmodal links are also evident in voluntary shifts of attention. When a person strongly expects a target in one modality (e.g. audition) to appear in a particular location, their judgements improve at that location not only for the expected modality but also for other modalities (e.g. vision), even if events in the latter modality are somewhat more likely elsewhere. Finally, some of our experiments suggest that information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations in which attention can be directed. Such preattentive crossmodal integration can, in some cases, produce helpful illusions that increase the efficiency of selective attention in complex scenes.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that separate, but interactive, visual systems have evolved for the perception of objects on the one hand and the control of actions directed at those objects on the other, and that Marrian or 'reconstructive' approaches and Gibsonian or 'purposive-animate-behaviorist' approaches need not be seen as mutually exclusive, but rather as complementary in their emphases on different aspects of visual function.


Journal ArticleDOI
TL;DR: It is proposed that visual perceptual categorization based on long-term experience begins by 125 ms, P150 amplitude varies with the cumulative experience people have discriminating among instances of specific categories of visual objects (e.g., words, faces), and the P150 is a scalp reflection of letterstring and face intracranial ERPs in posterior fusiform gyrus.
Abstract: The nature and early time course of the initial processing differences between visually matched linguistic and nonlinguistic images were studied with event-related potentials (ERPs). The first effect began at 90 ms when ERPs to written words diverged from other objects, including faces. By 125 ms, ERPs to words and faces were more positive than those to other objects, effects identified with the P150. The amplitude and scalp distribution of P150s to words and faces were similar. The P150 seemed to be elicited selectively by images resembling any well-learned category of visual patterns. We propose that (a) visual perceptual categorization based on long-term experience begins by 125 ms, (b) P150 amplitude varies with the cumulative experience people have discriminating among instances of specific categories of visual objects (e.g., words, faces), and (c) the P150 is a scalp reflection of letterstring and face intracranial ERPs in posterior fusiform gyrus.

Journal ArticleDOI
TL;DR: Without any physical stimulus changes, salient perceptual flips briefly engage widely separated specialized cortical areas, but are also associated with intermittent activity breakdown in structures putatively maintaining perceptual stability.
Abstract: Looking at ambiguous figures results in rivalry with spontaneous alternation between two percepts. Using event-related functional magnetic resonance imaging, we localized transient human brain activity changes during perceptual reversals. Activation occurred in ventral occipital and intraparietal higher-order visual areas, deactivation in primary visual cortex and the pulvinar. Thus, without any physical stimulus changes, salient perceptual flips briefly engage widely separated specialized cortical areas, but are also associated with intermittent activity breakdown in structures putatively maintaining perceptual stability. Together, the dynamics of integrative perceptual experience are reflected in rapid spatially differentiated activity modulation within a cooperative set of neural structures.

Journal ArticleDOI
TL;DR: Comparisons between the responses of single cortical neurons in the behaving macaque monkey and the stimulus parameters that give rise to the ventriloquism aftereffect suggest that the changes in the cortical representation of acoustic space may begin as early as the primary auditory cortex.
Abstract: Cortical representational plasticity has been well documented after peripheral and central injuries or improvements in perceptual and motor abilities. This has led to inferences that the changes in cortical representations parallel and account for the improvement in performance during the period of skill acquisition. There have also been several examples of rapidly induced changes in cortical neuronal response properties, for example, by intracortical microstimulation or by classical conditioning paradigms. This report describes similar rapidly induced changes in a cortically mediated perception in human subjects, the ventriloquism aftereffect, which presumably reflects a corresponding change in the cortical representation of acoustic space. The ventriloquism aftereffect describes an enduring shift in the perception of the spatial location of acoustic stimuli after a period of exposure of spatially disparate and simultaneously presented acoustic and visual stimuli. Exposure of a mismatch of 8° for 20–30 min is sufficient to shift the perception of acoustic space by approximately the same amount across subjects and acoustic frequencies. Given that the cerebral cortex is necessary for the perception of acoustic space, it is likely that the ventriloquism aftereffect reflects a change in the cortical representation of acoustic space. Comparisons between the responses of single cortical neurons in the behaving macaque monkey and the stimulus parameters that give rise to the ventriloquism aftereffect suggest that the changes in the cortical representation of acoustic space may begin as early as the primary auditory cortex.

Journal ArticleDOI
TL;DR: In this paper, the authors show that when attention is directed to part of a perceptual object, other parts of that object enjoy an attentional advantage as well, and that this object-specific advantage accrues to partly occluded objects and to objects defined by subjective contours.
Abstract: A large body of evidence suggests that visual attention selects objects as well as spatial locations. If attention is to be regarded as truly object based, then it should operate not only on object representations that are explicit in the image, but also on representations that are the result of earlier perceptual completion processes. Reporting the results of two experiments, we show that when attention is directed to part of a perceptual object, other parts of that object enjoy an attentional advantage as well. In particular, we show that this object-specific attentional advantage accrues to partly occluded objects and to objects defined by subjective contours. The results corroborate the claim that perceptual completion precedes object-based attentional selection.

Journal ArticleDOI
TL;DR: The findings show that reflexively oriented attention produces modulations in early sensory analysis at the same extrastriate neural locus as the earliest effects of voluntarily focused attention, and stimulus processing was found to be enhanced at later stages of analysis, which reflect stimulus relevance.
Abstract: Attention can be oriented reflexively to a location in space by an abrupt change in the visual scene. In the present study, we investigated the consequences of reflexive attention on the neural processing of visual stimuli. The findings show that reflexively oriented attention produces modulations in early sensory analysis at the same extrastriate neural locus as the earliest effects of voluntarily focused attention. In addition, stimulus processing was found to be enhanced at later stages of analysis, which reflect stimulus relevance. As is the case with behavioral measures of reflexive attention, these physiological enhancement effects are rapidly engaged but short-lived. As time passes between the initial attention-capturing event and subsequent stimuli, the extrastriate effect reverses, and the enhancement of higher order processing subsides. These findings indicate that reflexive attention is able to affect perceptions of the visual world by modulating neural processing as early as extrastriate visual cortex.

Journal ArticleDOI
TL;DR: A correlation between the conscious perception of a visual stimulus and the synchronous activity of large populations of neurons as reflected by steady-state neuromagnetic responses is demonstrated.
Abstract: In binocular rivalry, a subject views two incongruent stimuli through each eye but consciously perceives only one stimulus at a time, with a switch in perceptual dominance every few seconds. To investigate the neural correlates of perceptual dominance in humans, seven subjects were recorded with a 148-channel magnetoencephalography array while experiencing binocular rivalry. A red vertical grating flickering at one frequency was presented to one eye through a red filter and a blue horizontal grating flickering at a different frequency was presented to the other eye through a blue filter. Steady-state neuromagnetic responses at the two frequencies were used as tags for the two stimuli and analyzed with high-resolution power spectra. It was found that a large number of channels showed peaks at both frequencies, arranged in a horseshoe pattern from posterior to anterior regions, whether or not the subject was consciously perceiving the corresponding stimulus. However, the amount of power at the stimulus frequency was modulated in relation to perceptual dominance, being lower in many channels by 50–85% when the subject was not conscious of that stimulus. Such modulation by perceptual dominance, although not global, was distributed to a large subset of regions showing stimulus-related responses, including regions outside visual cortex. The results demonstrate a correlation between the conscious perception of a visual stimulus and the synchronous activity of large populations of neurons as reflected by steady-state neuromagnetic responses.

Journal ArticleDOI
TL;DR: The model recognizes complex images invariantly with respect to shift, rotation, and scale, and contains a low-level subsystem that performs both a fovea-like transformation and detection of primary features (edges).

Journal ArticleDOI
TL;DR: The data demonstrate that discrimination performance is superior when the discrimination stimulus is also the target for manual aiming; when the discrimination stimulus and point...
Abstract: The primate visual system can be divided into a ventral stream for perception and recognition and a dorsal stream for computing spatial information for motor action. How are selection mechanisms in both processing streams coordinated? We recently demonstrated that selection-for-perception in the ventral stream (usually termed “visual attention”) and saccade target selection in the dorsal stream are tightly coupled (Deubel & Schneider, 1996). Here we investigate whether such coupling also holds for the preparation of manual reaching movements. A dual-task paradigm required the preparation of a reaching movement to a cued item in a letter string. Simultaneously, the ability to discriminate between the symbols “E” and “∃” presented tachistoscopically within the surrounding distractors was taken as a measure of perceptual performance. The data demonstrate that discrimination performance is superior when the discrimination stimulus is also the target for manual aiming; when the discrimination stimulus and point...

Journal ArticleDOI
TL;DR: Analysis of a large library of digitized scenes using image processing with orientation-sensitive filters shows a prevalence of vertical and horizontal orientations in indoor, outdoor, and even entirely natural settings, suggesting that this real-world anisotropy is related to the enhanced ability of humans and other animals to process contours in the cardinal axes.
Abstract: In both humans and experimental animals, the ability to perceive contours that are vertically or horizontally oriented is superior to the perception of oblique angles. There is, however, no consensus about the developmental origins or functional basis of this phenomenon. Here, we report the analysis of a large library of digitized scenes using image processing with orientation-sensitive filters. Our results show a prevalence of vertical and horizontal orientations in indoor, outdoor, and even entirely natural settings. Because visual experience is known to influence the development of visual cortical circuitry, we suggest that this real world anisotropy is related to the enhanced ability of humans and other animals to process contours in the cardinal axes, perhaps by stimulating the development of a greater amount of visual circuitry devoted to processing vertical and horizontal contours.
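The scene-statistics analysis this abstract describes — filtering images for oriented energy and comparing cardinal against oblique orientations — can be sketched with a simple gradient-based orientation histogram. This is a minimal illustration, not the authors' actual filter bank: the bin count, the synthetic test image, and the gradient-based orientation estimate are all assumptions made for the example.

```python
import numpy as np

def orientation_histogram(img, n_bins=4):
    # Gradient-magnitude-weighted histogram of edge orientations,
    # binned into n_bins orientations over [0, 180 degrees).
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # Edge orientation is perpendicular to the gradient direction.
    theta = (np.arctan2(gy, gx) + np.pi / 2) % np.pi
    # Bins centered on 0, 45, 90, 135 degrees (for n_bins=4).
    bins = np.round(theta / (np.pi / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = mag[bins == b].sum()
    return hist / hist.sum()

# Synthetic "carpentered scene": a grid of horizontal and vertical bars.
img = np.zeros((64, 64))
img[::8, :] = 1.0   # horizontal lines
img[:, ::8] = 1.0   # vertical lines
hist = orientation_histogram(img)
```

For a scene dominated by horizontal and vertical contours, the cardinal bins (`hist[0]` and `hist[2]`) carry most of the oriented energy; the paper's finding is that real indoor, outdoor, and natural scenes show the same cardinal bias.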

Book
27 Jun 1998
TL;DR: Visual Perception: A Clinical Orientation 1. Experimental Approaches 2. Introductory Concepts 3. The Duplex Retina 4. Photometry 5. Color Vision 6. Anomalies of Color Vision 7. Spatial Vision 8. Temporal Aspects of Vision 9. Motion Perception 10. Depth Perception 11. Psychophysical Methodology 12. Functional Retinal Physiology
Abstract: Visual Perception: A Clinical Orientation 1. Experimental Approaches 2. Introductory Concepts 3. The Duplex Retina 4. Photometry 5. Color Vision 6. Anomalies of Color Vision 7. Spatial Vision 8. Temporal Aspects of Vision 9. Motion Perception 10. Depth Perception 11. Psychophysical Methodology 12. Functional Retinal Physiology 13. Parallel Processing 14. Striate Cortex 15. Information Streams and Extrastriate Processing 16. Gross Electrical Potentials 17. Development and Maturation of Vision Answers to Self-Assessment Questions Practice Exams 1, 2, 3 Answers to Practice Exams References

Journal ArticleDOI
TL;DR: Cognitive neuroscience techniques are used in differentiating between perceptual and postperceptual attentional mechanisms and a specific role of attention is proposed to resolve ambiguities in neural coding that arise when multiple objects are processed simultaneously.
Abstract: What is the role of selective attention in visual perception? Before answering this question, it is necessary to differentiate attentional mechanisms that influence the identification of a stimulus from those that operate after perception is complete. Cognitive neuroscience techniques are particularly well suited to making this distinction because they allow different attentional mechanisms to be isolated in terms of timing and/or neuroanatomy. The present article describes the use of these techniques in differentiating between perceptual and postperceptual attentional mechanisms and then proposes a specific role of attention in visual perception. Specifically, attention is proposed to resolve ambiguities in neural coding that arise when multiple objects are processed simultaneously. Evidence for this hypothesis is provided by two experiments showing that attention—as measured electrophysiologically—is allocated to visual search targets only under conditions that would be expected to lead to ambiguous neural coding.

Journal ArticleDOI
09 Jul 1998-Nature
TL;DR: It is shown that visual grouping is indeed facilitated when elements of one percept are presented at the same time as each other and are temporally separated from elements of another percept or from background elements.
Abstract: The visual system analyses information by decomposing complex objects into simple components (visual features) that are widely distributed across the cortex. When several objects are present simultaneously in the visual field, a mechanism is required to group (bind) together visual features that belong to each object and to separate (segment) them from features of other objects. An attractive scheme for binding visual features into a coherent percept consists of synchronizing the activity of their neural representations. If synchrony is important in binding, one would expect that binding and segmentation are facilitated by visual displays that are temporally manipulated to induce stimulus-dependent synchrony. Here we show that visual grouping is indeed facilitated when elements of one percept are presented at the same time as each other and are temporally separated (on a scale below the integration time of the visual system) from elements of another percept or from background elements. Our results indicate that binding is due to a global mechanism of grouping caused by synchronous neural activation, and not to a local mechanism of motion computation.