
Showing papers on "Visual perception" published in 2000


Journal ArticleDOI
TL;DR: A model for the organization of the human neural system for face perception is proposed that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The model is hierarchical insofar as it is divided into a core system and an extended system.

4,430 citations


Journal ArticleDOI
14 Dec 2000-Nature
TL;DR: It is shown that auditory information can qualitatively alter the perception of an unambiguous visual stimulus to create a striking visual illusion, indicating that visual perception can be manipulated by other sensory modalities.
Abstract: Vision is believed to dominate our multisensory perception of the world. Here we overturn this established view by showing that auditory information can qualitatively alter the perception of an unambiguous visual stimulus to create a striking visual illusion. Our findings indicate that visual perception can be manipulated by other sensory modalities.

1,080 citations


Journal ArticleDOI
TL;DR: Evidence that imagery and perception share common processing mechanisms is strengthened, and it is demonstrated that the specific brain regions activated during mental imagery depend on the content of the visual image.
Abstract: What happens in the brain when you conjure up a mental image in your mind's eye? We tested whether the particular regions of extrastriate cortex activated during mental imagery depend on the content of the image. Using functional magnetic resonance imaging (fMRI), we demonstrated selective activation within a region of cortex specialized for face perception during mental imagery of faces, and selective activation within a place-selective cortical region during imagery of places. In a further study, we compared the activation for imagery and perception in these regions, and found greater response magnitudes for perception than for imagery of the same items. Finally, we found that it is possible to determine the content of single cognitive events from an inspection of the fMRI data from individual imagery trials. These findings strengthen evidence that imagery and perception share common processing mechanisms, and demonstrate that the specific brain regions activated during mental imagery depend on the content of the visual image.

905 citations
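The last finding above, determining the content of single imagery trials, amounts to a per-trial comparison of the responses of the face-selective and place-selective regions. The sketch below is a toy reconstruction of that logic on synthetic data, not the authors' analysis; all names and numbers are invented.

```python
# Toy per-trial decoder: label an imagery trial "face" or "place" by which
# region of interest (ROI) responded more. Illustrative only.
import numpy as np

def classify_trial(ffa_timecourse, ppa_timecourse):
    """Label one trial by whichever ROI shows the larger mean response."""
    return "face" if np.mean(ffa_timecourse) > np.mean(ppa_timecourse) else "place"

# Synthetic data: 2 trials x 10 timepoints of percent signal change per ROI.
rng = np.random.default_rng(0)
ffa = rng.normal([0.8, 0.1], 0.1, size=(10, 2)).T  # FFA responds on trial 0
ppa = rng.normal([0.1, 0.8], 0.1, size=(10, 2)).T  # PPA responds on trial 1

print([classify_trial(f, p) for f, p in zip(ffa, ppa)])  # ['face', 'place']
```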


Journal ArticleDOI
22 Dec 2000-Science
TL;DR: In an informative Perspective, Seung and Lee explain the mathematical intricacies of two new algorithms for modeling the variability of perceptual stimuli and other types of high-dimensional data.
Abstract: One of the great puzzles of visual perception is how an image that is in perpetual flux can still be seen by the observer as the same object. In an informative Perspective, Seung and Lee explain the mathematical intricacies of two new algorithms for modeling the variability of perceptual stimuli and other types of high-dimensional data (Tenenbaum et al., and Roweis and Saul).

809 citations
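The two algorithms the Perspective discusses are Isomap (Tenenbaum et al.) and locally linear embedding (Roweis and Saul), published in the same issue. As a rough illustration of the general idea, not of anything in the Perspective itself, the sketch below recovers a two-dimensional manifold from higher-dimensional points using scikit-learn's implementations; the dataset and parameters are arbitrary choices.

```python
# Nonlinear dimensionality reduction with Isomap and LLE (scikit-learn).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding

# Points in 3-D that actually live on a 2-D curved sheet, standing in for
# a family of stimuli that vary smoothly along a few hidden dimensions.
X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Isomap: preserve geodesic (along-the-manifold) distances.
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# LLE: preserve each point's local linear reconstruction weights.
X_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2,
                               random_state=0).fit_transform(X)

print(X_iso.shape, X_lle.shape)  # (1000, 2) (1000, 2)
```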


Journal ArticleDOI
TL;DR: It is argued that eye-movement data provide an excellent on-line indication of the cognitive processes underlying visual search and reading, and the relationship between attention and eye movements is discussed.

757 citations


Journal ArticleDOI
TL;DR: Two masking processes were found: an early process affected by physical factors such as adapting luminance, and a later process, influenced by attentional factors such as set size, that is called masking by object substitution because it occurs whenever there is a mismatch between the reentrant visual representation and the ongoing lower-level activity.
Abstract: Advances in neuroscience implicate reentrant signaling as the predominant form of communication between brain areas. This principle was used in a series of masking experiments that defy explanation by feed-forward theories. The masking occurs when a brief display of target plus mask is continued with the mask alone. Two masking processes were found: an early process affected by physical factors such as adapting luminance and a later process affected by attentional factors such as set size. This later process is called masking by object substitution, because it occurs whenever there is a mismatch between the reentrant visual representation and the ongoing lower level activity. Iterative reentrant processing was formalized in a computational model that provides an excellent fit to the data. The model provides a more comprehensive account of all forms of visual masking than do the long-held feed-forward views based on inhibitory contour interactions.

728 citations
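For intuition about how reentrant processing yields substitution, the toy loop below (a schematic sketch of ours, not the computational model reported in the paper) lets the target-plus-mask display build up a target hypothesis and lets the trailing mask-alone period erode it through reentrant mismatch; all parameters are arbitrary.

```python
# Toy object-substitution dynamics: confirmation builds a target hypothesis,
# mismatch during the mask-alone period erodes (substitutes) it.

def report_strength(target_frames=2, mask_alone_frames=6, gain=0.4):
    hypothesis = 0.0  # evidence that the display contained the target
    for _ in range(target_frames):       # target + mask on screen
        hypothesis += gain * (1.0 - hypothesis)  # input confirms hypothesis
    for _ in range(mask_alone_frames):   # mask alone remains
        hypothesis -= gain * hypothesis          # reentrant mismatch erodes it
    return hypothesis

# Longer trailing mask -> weaker target report, as in the masking data.
for frames in (0, 2, 4, 8):
    print(frames, round(report_strength(mask_alone_frames=frames), 3))
```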


Journal ArticleDOI
01 Dec 2000-Neuron
TL;DR: The results suggest that content-related activation during imagery in visual extrastriate cortex may be implemented by "top-down" mechanisms in parietal and frontal cortex that mediate the retrieval of face and object representations from long-term memory and their maintenance through visual imagery.

542 citations


Journal ArticleDOI
TL;DR: These findings support a new explanation for the cross-race (CR) recognition deficit based on feature coding differences between CR and SR faces, and appear incompatible with similarity-based models of face categories.
Abstract: One of the most familiar empirical phenomena associated with face recognition is the cross-race (CR) recognition deficit whereby people have difficulty recognizing members of a race different from their own. Most researchers assume that the CR deficit is caused by failure to generalize perceptual encoding expertise from same-race (SR) faces to CR faces. However, this explanation ignores critical differences in the social cognitions and feature coding priorities associated with SR and CR faces. On the basis of data from visual search and perceptual discrimination tasks, it appears that the deficit occurs because people emphasize visual information specifying race at the expense of individuating information when recognizing CR faces. In particular, it is possible to observe a paradoxical improvement in both detection and perceptual discrimination accuracy for CR faces that is limited to those who recognize them poorly. These findings support a new explanation for the CR recognition deficit based on feature coding differences between CR and SR faces, and appear incompatible with similarity-based models of face categories.

533 citations


Journal ArticleDOI
TL;DR: Activity in early visual cortex quantitatively predicted the subject's pattern-detection performance: when activity was greater, the subject was more likely to correctly discern the presence or absence of the pattern.
Abstract: Visual attention can affect both neural activity and behavior in humans. To quantify possible links between the two, we measured activity in early visual cortex (V1, V2 and V3) during a challenging pattern-detection task. Activity was dominated by a large response that was independent of the presence or absence of the stimulus pattern. The measured activity quantitatively predicted the subject's pattern-detection performance: when activity was greater, the subject was more likely to correctly discern the presence or absence of the pattern. This stimulus-independent activity had several characteristics of visual attention, suggesting that attentional mechanisms modulate activity in early visual cortex, and that this attention-related activity strongly influences performance.

529 citations


Journal ArticleDOI
18 May 2000-Nature
TL;DR: It is concluded that prefrontal cortex neurons are part of integrative networks that represent behaviourally meaningful cross-modal associations and are crucial for the temporal transfer of information in the structuring of behaviour, reasoning and language.
Abstract: The prefrontal cortex is essential for the temporal integration of sensory information in behavioural and linguistic sequences. Such information is commonly encoded in more than one sense modality, notably sight and sound. Connections from sensory cortices to the prefrontal cortex support its integrative function. Here we present the first evidence that prefrontal cortex cells associate visual and auditory stimuli across time. We gave monkeys the task of remembering a tone of a certain pitch for 10 s and then choosing the colour associated with it. In this task, prefrontal cortex cells responded selectively to tones, and most of them also responded to colours according to the task rule. Thus, their reaction to a tone was correlated with their subsequent reaction to the associated colour. This correlation faltered in trials ending in behavioural error. We conclude that prefrontal cortex neurons are part of integrative networks that represent behaviourally meaningful cross-modal associations. The orderly and timely activation of neurons in such networks is crucial for the temporal transfer of information in the structuring of behaviour, reasoning and language.

512 citations


Journal ArticleDOI
TL;DR: Recordings from 427 single neurons in the human hippocampus, entorhinal cortex and amygdala revealed a remarkable degree of category-specific firing by individual neurons on a trial-by-trial basis, providing direct support for the role of human medial temporal regions in the representation of different categories of visual stimuli.
Abstract: The hippocampus, amygdala and entorhinal cortex receive convergent input from temporal neocortical regions specialized for processing complex visual stimuli and are important in the representation and recognition of visual images. Recording from 427 single neurons in the human hippocampus, entorhinal cortex and amygdala, we found a remarkable degree of category-specific firing of individual neurons on a trial-by-trial basis. Of the recorded neurons, 14% responded selectively to visual stimuli from different categories, including faces, natural scenes and houses, famous people and animals. Based on the firing rate of individual neurons, stimulus category could be predicted with a mean probability of error of 0.24. In the hippocampus, the proportion of neurons responding to spatial layouts was greater than to other categories. Our data provide direct support for the role of human medial temporal regions in the representation of different categories of visual stimuli.
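Predicting stimulus category from single-neuron firing rates is, in effect, a cross-validated decoding problem. The sketch below shows a generic version of such a decoder on synthetic spike counts; it is not the authors' analysis, and the firing rates are invented.

```python
# Generic category decoder from trial-by-trial firing rates.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
categories = np.repeat([0, 1, 2], 40)        # e.g. faces, houses, animals
mean_rate = np.array([5.0, 12.0, 7.0])       # spikes/s per category (invented)
rates = rng.poisson(mean_rate[categories]).reshape(-1, 1).astype(float)

# Cross-validated probability of error, analogous to the 0.24 reported above.
accuracy = cross_val_score(GaussianNB(), rates, categories, cv=10).mean()
print(f"probability of error ~ {1 - accuracy:.2f}")
```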

Journal ArticleDOI
TL;DR: The argument centers on the proposal that focused attention is needed for the explicit perception of change; perception is proposed to involve a virtual representation, in which object representations do not accumulate but are formed as needed.

Journal ArticleDOI
TL;DR: This article outlines, reviews, and evaluates three new models of backward masking: an extension of the dual-channel approach as realized in the neural network model of retino-cortical dynamics, the perceptual retouch theory, and the boundary contour system.
Abstract: Visual backward masking not only is an empirically rich and theoretically interesting phenomenon but also has found increasing application as a powerful methodological tool in studies of visual information processing and as a useful instrument for investigating visual function in a variety of specific subject populations. Since the dual-channel, sustained-transient approach to visual masking was introduced about two decades ago, several new models of backward masking and metacontrast have been proposed as alternative approaches to visual masking. In this article, we outline, review, and evaluate three such approaches: an extension of the dual-channel approach as realized in the neural network model of retino-cortical dynamics (Ogmen, 1993), the perceptual retouch theory (Bachmann, 1984, 1994), and the boundary contour system (Francis, 1997; Grossberg & Mingolla, 1985b). Recent psychophysical and electrophysiological findings relevant to backward masking are reviewed and, whenever possible, are related to the aforementioned models. Besides noting the positive aspects of these models, we also list their problems and suggest changes that may improve them and experiments that can empirically test them.

Journal ArticleDOI
TL;DR: The Ebbinghaus illusion does not provide evidence for the existence of two distinct pathways for perception and action in the visual system, and the differences found previously can be accounted for by a hitherto unknown, nonadditive effect in the illusion.
Abstract: Neuropsychological studies prompted the theory that the primate visual system might be organized into two parallel pathways, one for conscious perception and one for guiding action. Supporting evidence in healthy subjects seemed to come from a dissociation in visual illusions: In previous studies, the Ebbinghaus (or Titchener) illusion deceived perceptual judgments of size, but only marginally influenced the size estimates used in grasping. Contrary to those results, the findings from the present study show that there is no difference in the sizes of the perceptual and grasp illusions if the perceptual and grasping tasks are appropriately matched. We show that the differences found previously can be accounted for by a hitherto unknown, nonadditive effect in the illusion. We conclude that the illusion does not provide evidence for the existence of two distinct pathways for perception and action in the visual system.

Journal ArticleDOI
19 Oct 2000-Nature
TL;DR: Psychophysical evidence is provided that a sudden sound improves the detectability of a subsequent flash appearing at the same location, showing that the involuntary orienting of attention to sound enhances early perceptual processing of visual stimuli.
Abstract: To perceive real-world objects and events, we need to integrate several stimulus features belonging to different sensory modalities. Although the neural mechanisms and behavioural consequences of intersensory integration have been extensively studied1,2,3,4, the processes that enable us to pay attention to multimodal objects are still poorly understood. An important question is whether a stimulus in one sensory modality automatically attracts attention to spatially coincident stimuli that appear subsequently in other modalities, thereby enhancing their perceptual salience. The occurrence of an irrelevant sound does facilitate motor responses to a subsequent light appearing nearby5,6,7. However, because participants in previous studies made speeded responses rather than psychophysical judgements, it remains unclear whether involuntary auditory attention actually affects the perceptibility of visual stimuli as opposed to postperceptual decision and response processes. Here we provide psychophysical evidence that a sudden sound improves the detectability of a subsequent flash appearing at the same location. These data show that the involuntary orienting of attention to sound enhances early perceptual processing of visual stimuli.
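"Detectability" in such tasks is conventionally quantified with the signal-detection measure d', computed from hit and false-alarm rates. A minimal sketch with invented numbers, not data from the study:

```python
# d' = z(hit rate) - z(false-alarm rate), from signal detection theory.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical flash-detection performance with vs. without a prior sound:
print(d_prime(0.80, 0.20))  # sound at the flash location -> higher sensitivity
print(d_prime(0.65, 0.20))  # no sound (illustrative numbers only)
```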

Journal ArticleDOI
TL;DR: Findings show that the neural network supporting time perception involves the same brain areas that are responsible for the temporal planning and coordination of movements, indicating that time perception and motor timing rely on similar cerebral structures.

Journal ArticleDOI
TL;DR: The results show that perceptual organization in the auditory modality can have an effect on perceptibility in the visual modality.
Abstract: Six experiments demonstrated cross-modal influences from the auditory modality on the visual modality at an early level of perceptual organization. Participants had to detect a visual target in a rapidly changing sequence of visual distractors. A high tone embedded in a sequence of low tones improved detection of a synchronously presented visual target (Experiment 1), but the effect disappeared when the high tone was presented before the target (Experiment 2). Rhythmically based or order-based anticipation was unlikely to account for the effect because the improvement was unaffected by whether there was jitter (Experiment 3) or a random number of distractors between successive targets (Experiment 4). The facilitatory effect was greatly reduced when the tone was less abrupt and part of a melody (Experiments 5 and 6). These results show that perceptual organization in the auditory modality can have an effect on perceptibility in the visual modality.

Journal ArticleDOI
TL;DR: A spatial segregation of opposing contextual interactions in the response properties of neurons in primary visual cortex of alert monkeys and in human perception is found, suggesting that V1 neurons can participate in multiple perceptual processes via spatially segregated and functionally distinct components of their receptive fields.
Abstract: To examine the role of primary visual cortex in visuospatial integration, we studied the spatial arrangement of contextual interactions in the response properties of neurons in primary visual cortex of alert monkeys and in human perception. We found a spatial segregation of opposing contextual interactions. At the level of cortical neurons, excitatory interactions were located along the ends of receptive fields, while inhibitory interactions were strongest along the orthogonal axis. Parallel psychophysical studies in human observers showed opposing contextual interactions surrounding a target line with a similar spatial distribution. The results suggest that V1 neurons can participate in multiple perceptual processes via spatially segregated and functionally distinct components of their receptive fields.

Journal ArticleDOI
TL;DR: Functional magnetic resonance imaging (fMRI) was used to measure an asymmetry in the responses of human primary visual cortex (V1) to oriented stimuli; neural responses in V1 were larger for cardinal stimuli than for oblique stimuli.
Abstract: Visual perception critically depends on orientation-specific signals that arise early in visual processing. Humans show greater behavioral sensitivity to gratings with horizontal or vertical (0°/90°; 'cardinal') orientations than to other, 'oblique' orientations. Here we used functional magnetic resonance imaging (fMRI) to measure an asymmetry in the responses of human primary visual cortex (V1) to oriented stimuli. We found that neural responses in V1 were larger for cardinal stimuli than for oblique (45°/135°) stimuli. Thus the fMRI pattern in V1 closely resembled subjects' behavioral judgments; responses in V1 were greater for those orientations that yielded better perceptual performance.

Journal ArticleDOI
TL;DR: Enhanced peripheral attention to moving stimuli in the deaf may be mediated by alterations of the connectivity between MT/MST and the parietal cortex, one of the primary centers for spatial representation and attention.
Abstract: We compared normally hearing individuals and congenitally deaf individuals as they monitored moving stimuli either in the periphery or in the center of the visual field. When participants monitored the peripheral visual field, greater recruitment (as measured by functional magnetic resonance imaging) of the motion-selective area MT/MST was observed in deaf than in hearing individuals, whereas the two groups were comparable when attending to the central visual field. This finding indicates an enhancement of visual attention to peripheral visual space in deaf individuals. Structural equation modeling was used to further characterize the nature of this plastic change in the deaf. The effective connectivity between MT/MST and the posterior parietal cortex was stronger in deaf than in hearing individuals during peripheral but not central attention. Thus, enhanced peripheral attention to moving stimuli in the deaf may be mediated by alterations of the connectivity between MT/MST and the parietal cortex, one of the primary centers for spatial representation and attention.

Journal ArticleDOI
TL;DR: In this paper, a combination of visual supports for two elementary-age boys with autism was evaluated, and the visual supports were used to aid transitions from one activity to another in community and home settings.
Abstract: A combination of visual supports for two elementary-age boys with autism was evaluated. The visual supports were used to aid transitions from one activity to another in community and home settings....

Journal ArticleDOI
16 Nov 2000-Nature
TL;DR: This study reveals single neuron correlates of volitional visual imagery in humans and suggests a common substrate for the processing of incoming visual information and visual recall.
Abstract: Vivid visual images can be voluntarily generated in our minds in the absence of simultaneous visual input. While trying to count the number of flowers in Van Gogh's Sunflowers, understanding a description or recalling a path, subjects report forming an image in their "mind's eye". Whether this process is accomplished by the same neuronal mechanisms as visual perception has long been a matter of debate. Evidence from functional imaging, psychophysics, neurological studies and monkey electrophysiology suggests a common process, yet there are patients with deficits in one but not the other. Here we directly investigated the neuronal substrates of visual recall by recording from single neurons in the human medial temporal lobe while the subjects were asked to imagine previously viewed images. We found single neurons in the hippocampus, amygdala, entorhinal cortex and parahippocampal gyrus that selectively altered their firing rates depending on the stimulus the subjects were imagining. Of the neurons that fired selectively during both vision and imagery, the majority (88%) had identical selectivity. Our study reveals single neuron correlates of volitional visual imagery in humans and suggests a common substrate for the processing of incoming visual information and visual recall.

Journal ArticleDOI
TL;DR: The findings show that AD affects several aspects of vision and are compatible with the hypothesis that visual dysfunction in AD may contribute to performance decrements in other cognitive domains.

Journal ArticleDOI
TL;DR: A series of 6 experiments investigating crossmodal links between vision and touch in covert endogenous spatial attention found that these visuotactile links in spatial attention apply to common positions in external space.
Abstract: The authors report a series of 6 experiments investigating crossmodal links between vision and touch in covert endogenous spatial attention. When participants were informed that visual and tactile targets were more likely on 1 side than the other, speeded discrimination responses (continuous vs. pulsed, Experiments 1 and 2; or up vs. down, Experiment 3) for targets in both modalities were significantly faster on the expected side, even though target modality was entirely unpredictable. When participants expected a target on a particular side in just one modality, corresponding shifts of covert attention also took place in the other modality, as evidenced by faster elevation judgments on that side (Experiment 4). Larger attentional effects were found when directing visual and tactile attention to the same position rather than to different positions (Experiment 5). A final study with crossed hands revealed that these visuotactile links in spatial attention apply to common positions in external space.

Book
15 May 2000
TL;DR: This book surveys visual development, including models of visual development, the interlinked development of attention and action, and plasticity in visual development.
Abstract (contents): 1. Background context; 2. Paediatric vision testing; 3. Models of visual development; 4. Newborn vision; 5. Developmental optics: refraction and focusing (accommodation); 6. Functional onset of specific cortical modules; 7. Development of integration ('binding') and segmentation processes leading to object perception; 8. The interlinked approach to development of attention and action; 9. Plasticity in visual development; 10. Concluding remarks. References. Index.

Journal ArticleDOI
TL;DR: Results of the present experiments, in combination with other studies presented in this volume, support the notion that induced gamma band activity in the human EEG is closely related to visual information processing and attentional perceptual mechanisms.

Journal ArticleDOI
TL;DR: The most powerful, consistent, and earliest attention effects were those found to occur in area V4, during the 100-300 ms poststimulus interval, and attention effects grew over the time course of the neuronal response.
Abstract: This study quantified the magnitude and timing of selective attention effects across areas of the macaque visual system, including the lateral geniculate nucleus (LGN), lower cortical areas V1 and V2, and multiple higher visual areas in the dorsal and ventral processing streams. We used one stimulus configuration and behavioral paradigm, with simultaneous recordings from different areas to allow direct comparison of the distribution and timing of attention effects across the system. Streams of interdigitated auditory and visual stimuli were presented at a high rate with an irregular interstimulus interval (mean of 4/s). Attention to visual stimuli was manipulated by requiring subjects to make discriminative behavioral responses to stimuli in one sensory modality, ignoring all stimuli in the other. The attended modality was alternated across trial blocks, and difficulty of discrimination was equated across modalities. Stimulus presentation was gated, so that no stimuli were presented unless the subject gazed at the center of the visual stimulus display. Visual stimuli were diffuse light flashes differing in intensity or color and subtending 12 degrees centered at the point of gaze. Laminar event-related potential (ERP) and current source density (CSD) response profiles were sampled during multiple paired penetrations in multiple visual areas with linear array multicontact electrodes. Attention effects were assessed by comparing responses to specific visual stimuli when attended versus when the same stimuli were looked at in the same way but ignored. Effects were quantified by computing a modulation index (MI), the ratio of the differential CSD response produced by attention to the sum of the responses to attended and ignored visual stimuli. The average MI increased across ascending levels of the lower visual pathways, from none in the LGN to 0.0278 in V1, 0.101 in V2, and 0.170 in V4. Above the V2 level, attention effects were larger in ventral stream areas (MI = 0.152) than in dorsal stream areas (MI = 0.052). Although onset latencies were shortest in dorsal stream areas, attentional modulation of the early response was small relative to the stimulus-evoked response. Higher ventral stream areas showed substantial attention effects at the earliest poststimulus time points, followed by the lower visual areas V2 and V1. In all areas, attentional modulation lagged the onset of the stimulus-evoked response, and attention effects grew over the time course of the neuronal response. The most powerful, consistent, and earliest attention effects occurred in area V4, during the 100-300 ms poststimulus interval. Smaller effects occurred in V2 over the same interval, and the bulk of attention effects in V1 were later. In the accompanying paper, we describe the physiology of attention effects in V1, V2, and V4.
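The modulation index defined verbally in the abstract can be written explicitly as MI = (R_attended - R_ignored) / (R_attended + R_ignored), where R is the CSD response to the same stimulus under the two attention conditions. The snippet below is our reading of that definition, not code from the study.

```python
# Attentional modulation index from attended and ignored responses.

def modulation_index(r_attended: float, r_ignored: float) -> float:
    return (r_attended - r_ignored) / (r_attended + r_ignored)

# Example: the V4 average MI of 0.170 corresponds to an attended response
# roughly 1.4 times the ignored one: (1.4 - 1.0) / (1.4 + 1.0) ~= 0.167.
print(modulation_index(1.4, 1.0))
```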

Journal ArticleDOI
TL;DR: The results suggest that children's sensitivity to both dynamic auditory and visual stimuli is related to their literacy skills, and support the hypothesis that sensitivity at detecting dynamic stimuli influences normal children's reading skills.
Abstract: The relationship between sensory sensitivity and reading performance was examined to test the hypothesis that the orthographic and phonological skills engaged in visual word recognition are constrained by the ability to detect dynamic visual and auditory events. A test battery using sensory psychophysics, psychometric tests, and measures of component literacy skills was administered to 32 unselected 10-year-old primary school children. The results suggest that children's sensitivity to both dynamic auditory and visual stimuli is related to their literacy skills. Importantly, after controlling for intelligence and overall reading ability, visual motion sensitivity explained independent variance in orthographic skill but not phonological ability, and auditory FM sensitivity covaried with phonological skill but not orthographic skill. These results support the hypothesis that sensitivity at detecting dynamic stimuli influences normal children's reading skills. Vision and audition may separately affect the ability to extract orthographic and phonological information during reading.
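The control analysis described above is a hierarchical regression: enter the control variables first, then test whether the sensory predictor adds explained variance. The sketch below illustrates the procedure on synthetic data; all variable names and effect sizes are invented, not the study's.

```python
# Hierarchical regression: does motion sensitivity explain orthographic
# skill beyond intelligence and overall reading ability?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 32                                    # matching the reported sample size
iq = rng.normal(100, 15, n)
reading = 0.3 * iq + rng.normal(0, 5, n)
motion = rng.normal(0, 1, n)
ortho = 0.2 * iq + 0.4 * reading + 2.0 * motion + rng.normal(0, 3, n)

base = sm.OLS(ortho, sm.add_constant(np.column_stack([iq, reading]))).fit()
full = sm.OLS(ortho, sm.add_constant(np.column_stack([iq, reading, motion]))).fit()
print(f"R^2 gain from motion sensitivity: {full.rsquared - base.rsquared:.3f}")
```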

Journal ArticleDOI
TL;DR: It is concluded that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.
Abstract: It is well known that discrepancies in the location of synchronized auditory and visual events can lead to mislocalizations of the auditory source, so-called ventriloquism. In two experiments, we tested whether such cross-modal influences on auditory localization depend on deliberate visual attention to the biasing visual event. In Experiment 1, subjects pointed to the apparent source of sounds in the presence or absence of a synchronous peripheral flash. They also monitored for target visual events, either at the location of the peripheral flash or in a central location. Auditory localization was attracted toward the synchronous peripheral flash, but this was unaffected by where deliberate visual attention was directed in the monitoring task. In Experiment 2, bilateral flashes were presented in synchrony with each sound, to provide competing visual attractors. When these visual events were equally salient on the two sides, auditory localization was unaffected by which side subjects monitored for visual targets. When one flash was larger than the other, auditory localization was slightly but reliably attracted toward it, but again regardless of where visual monitoring was required. We conclude that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.

Journal ArticleDOI
TL;DR: Observations in primary visual cortex raise new questions about the circuitry responsible for receptive field surround effects and their contribution to visual perception.