
Showing papers on "Visual perception" published in 2003


Journal ArticleDOI
29 May 2003-Nature
TL;DR: It is shown that action-video-game playing is capable of altering a range of visual skills, and non-players trained on an action video game show marked improvement from their pre-training abilities.
Abstract: As video-game playing has become a ubiquitous activity in today's society, it is worth considering its potential consequences on perceptual and motor skills. It is well known that exposing an organism to an altered visual environment often results in modification of the visual system of the organism. The field of perceptual learning provides many examples of training-induced increases in performance. But perceptual learning, when it occurs, tends to be specific to the trained task; that is, generalization to new tasks is rarely found. Here we show, by contrast, that action-video-game playing is capable of altering a range of visual skills. Four experiments establish changes in different aspects of visual attention in habitual video-game players as compared with non-video-game players. In a fifth experiment, non-players trained on an action video game show marked improvement from their pre-training abilities, thereby establishing the role of playing in this effect.

2,260 citations


Journal ArticleDOI
TL;DR: It is argued that the interplay between the unique demands of word reading and the structural constraints of the visual system leads to the emergence of the Visual Word Form Area.

1,406 citations


Journal ArticleDOI
TL;DR: Current approaches and empirical findings in human gaze control during real-world scene perception are reviewed.

1,318 citations


Journal ArticleDOI
30 Oct 2003-Nature
TL;DR: It is suggested that dynamically switching cortical states could represent the brain's internal context, and therefore reflect or influence memory, perception and behaviour.
Abstract: Spontaneous cortical activity--ongoing activity in the absence of intentional sensory input--has been studied extensively, using methods ranging from EEG (electroencephalography), through voltage sensitive dye imaging, down to recordings from single neurons. Ongoing cortical activity has been shown to play a critical role in development, and must also be essential for processing sensory perception, because it modulates stimulus-evoked activity, and is correlated with behaviour. Yet its role in the processing of external information and its relationship to internal representations of sensory attributes remains unknown. Using voltage sensitive dye imaging, we previously established a close link between ongoing activity in the visual cortex of anaesthetized cats and the spontaneous firing of a single neuron. Here we report that such activity encompasses a set of dynamically switching cortical states, many of which correspond closely to orientation maps. When such an orientation state emerged spontaneously, it spanned several hypercolumns and was often followed by a state corresponding to a proximal orientation. We suggest that dynamically switching cortical states could represent the brain's internal context, and therefore reflect or influence memory, perception and behaviour.

895 citations


Journal ArticleDOI
03 Jan 2003-Science
TL;DR: Attention is tracked in the monkey and the activity of neurons in the lateral intraparietal area (LIP) is correlated with the monkey's attentional performance, revealing the spatial and temporal dynamics of the monkey's attention.
Abstract: Although the parietal cortex has been implicated in the neural processes underlying visual attention, the nature of its contribution is not well understood. We tracked attention in the monkey and correlated the activity of neurons in the lateral intraparietal area (LIP) with the monkey's attentional performance. The ensemble activity in LIP across the entire visual field describes the spatial and temporal dynamics of a monkey's attention. Activity subtending a single location in the visual field describes the attentional priority at that area but does not predict that the monkey will actually attend to or make an eye movement to that location.

834 citations


Journal ArticleDOI
TL;DR: A neuronal network model is described that proposes that the step of conscious perception is related to the entry of processed visual stimuli into a global brain state that links distant areas including the prefrontal cortex through reciprocal connections, and thus makes perceptual information reportable by multiple means.
Abstract: The subjective experience of perceiving visual stimuli is accompanied by objective neuronal activity patterns such as sustained activity in primary visual area (V1), amplification of perceptual processing, correlation across distant regions, joint parietal, frontal, and cingulate activation, γ-band oscillations, and P300 waveform. We describe a neuronal network model that aims at explaining how those physiological parameters may cohere with conscious reports. The model proposes that the step of conscious perception, referred to as access awareness, is related to the entry of processed visual stimuli into a global brain state that links distant areas including the prefrontal cortex through reciprocal connections, and thus makes perceptual information reportable by multiple means. We use the model to simulate a classical psychological paradigm: the attentional blink. In addition to reproducing the main objective and subjective features of this paradigm, the model predicts a unique property of nonlinear transition from nonconscious processing to subjective perception. This all-or-none dynamics of conscious perception was verified behaviorally in human subjects.

700 citations


Book
09 Oct 2003
TL;DR: A book surveying active vision, from covert attention and saccadic orienting through reading, visual search, and natural scenes, to space constancy and trans-saccadic integration, concluding with the Active Vision Cycle and future directions.
Abstract (table of contents):
1 PASSIVE VISION AND ACTIVE VISION: 1.1 Introduction. 1.2 Passive vision. 1.3 Visual attention. 1.4 Active vision. 1.5 Active vision and vision for action. 1.6 Outline of the book.
2 BACKGROUND TO ACTIVE VISION: 2.1 Introduction. 2.2 The inhomogeneity of the visual projections. 2.3 Parallel visual pathways. 2.4 The oculomotor system. 2.5 Saccadic eye movements. 2.6 Summary.
3 VISUAL SELECTION, COVERT ATTENTION AND EYE MOVEMENTS: 3.1 Covert and overt attention. 3.2 Covert spatial attention. 3.3 The relationship between covert and overt attention. 3.4 Speed of attention. 3.5 Neurophysiology of attention. 3.6 Non-spatial attention. 3.7 Active vision and attention. 3.8 Summary.
4 VISUAL ORIENTING: 4.1 Introduction. 4.2 What determines the latency of orienting saccades? 4.3 Physiology of saccade initiation. 4.4 What determines the landing position of orienting saccades? 4.5 Physiology of the WHERE system. 4.6 The Findlay and Walker model. 4.7 Development and plasticity.
5 VISUAL SAMPLING DURING TEXT READING: 5.1 Introduction. 5.2 Basic patterns of visual sampling during reading. 5.3 Perception during fixations in reading. 5.4 Language processing. 5.5 Control of fixation duration. 5.6 Control of landing position. 5.7 Theories of eye control during reading. 5.8 Practical aspects of eye control in reading. 5.9 Overview.
6 VISUAL SEARCH: 6.1 Visual search tasks. 6.2 Theories of visual search. 6.3 The need for eye movements in visual search. 6.4 Eye movements in visual search. 6.5 Ocular capture in visual search. 6.6 Saccades in visual search: scanpaths. 6.7 Physiology of visual search. 6.8 Summary.
7 NATURAL SCENES AND ACTIVITIES: 7.1 Introduction. 7.2 Analytic studies of scene and object perception. 7.3 Dynamic scenes and situations. 7.4 Summary.
8 HUMAN NEUROPSYCHOLOGY: 8.1 Blindsight. 8.2 Neglect. 8.3 Balint's syndrome and dorsal simultanagnosia. 8.4 Frontal lobe damage. 8.5 Orienting without eye movements. 8.6 Summary.
9 SPACE CONSTANCY AND TRANS-SACCADIC INTEGRATION: 9.1 The traditional approach: 'compensatory taking into account'. 9.2 Trans-saccadic integration. 9.3 Resolution of the conflicting results. 9.4 Conclusion: The Active Vision Cycle. 9.5 Future directions.

690 citations


Journal ArticleDOI
TL;DR: Studies of single cells, field potential recordings and functional neuroimaging data indicate that specialized visual mechanisms exist in the superior temporal sulcus (STS) of both human and non-human primates that produce selective neural responses to moving natural images of faces and bodies.
Abstract: The movements of the faces and bodies of other conspecifics provide stimuli of considerable interest to the social primate. Studies of single cells, field potential recordings and functional neuroimaging data indicate that specialized visual mechanisms exist in the superior temporal sulcus (STS) of both human and non-human primates that produce selective neural responses to moving natural images of faces and bodies. STS mechanisms also process simplified displays of biological motion involving point lights marking the limb articulations of animate bodies and geometrical shapes whose motion simulates purposeful behaviour. Facial movements such as deviations in eye gaze, important for gauging an individual's social attention, and mouth movements, indicative of potential utterances, generate particularly robust neural responses that differentiate between movement types. Collectively such visual processing can enable the decoding of complex social signals and through its outputs to limbic, frontal and parietal systems the STS may play a part in enabling appropriate affective responses and social behaviour.

677 citations


Journal ArticleDOI
TL;DR: Cortical and thalamocortical oscillations in different frequency bands could provide a neuronal basis for discrete processes, but are rarely analyzed in this context.

606 citations


Journal ArticleDOI
TL;DR: The hypothesis that sensory encoding of affective stimuli is facilitated implicitly by natural selective attention is supported, suggesting that the affect system not only modulates motor output, but already operates at an early level of sensory encoding.
Abstract: A key function of emotion is the preparation for action. However, organization of successful behavioral strategies depends on efficient stimulus encoding. The present study tested the hypothesis that perceptual encoding in the visual cortex is modulated by the emotional significance of visual stimuli. Event-related brain potentials were measured while subjects viewed pleasant, neutral, and unpleasant pictures. Early selective encoding of pleasant and unpleasant images was associated with a posterior negativity, indicating primary sources of activation in the visual cortex. The study also replicated previous findings in that affective cues also elicited enlarged late positive potentials, indexing increased stimulus relevance at higher-order stages of stimulus processing. These results support the hypothesis that sensory encoding of affective stimuli is facilitated implicitly by natural selective attention. Thus, the affect system not only modulates motor output (i.e., favoring approach or avoidance dispositions), but already operates at an early level of sensory encoding.

596 citations


Journal ArticleDOI
TL;DR: The ITC seems more involved in the analysis of currently viewed shapes, whereas the PFC showed stronger category signals, memory effects, and a greater tendency to encode information in terms of its behavioral meaning.
Abstract: Previous studies have suggested that both the prefrontal cortex (PFC) and inferior temporal cortex (ITC) are involved in high-level visual processing and categorization, but their respective roles are not known. To address this, we trained monkeys to categorize a continuous set of visual stimuli into two categories, “cats” and “dogs.” The stimuli were parametrically generated using a computer graphics morphing system (Shelton, 2000) that allowed precise control over stimulus shape. After training, we recorded neural activity from the PFC and the ITC of monkeys while they performed a category-matching task. We found that the PFC and the ITC play distinct roles in category-based behaviors: the ITC seems more involved in the analysis of currently viewed shapes, whereas the PFC showed stronger category signals, memory effects, and a greater tendency to encode information in terms of its behavioral meaning.

Journal ArticleDOI
TL;DR: Autistic children and typically developing control children were tested on two visual tasks, one involving grouping of small line elements into a global figure and the other involving perception of human activity portrayed in point-light animations; performance was equivalent on the figure task, but autistic children were significantly impaired on the biological motion task.
Abstract: Autistic children and typically developing control children were tested on two visual tasks, one involving grouping of small line elements into a global figure and the other involving perception of human activity portrayed in point-light animations. Performance of the two groups was equivalent on the figure task, but autistic children were significantly impaired on the biological motion task. This latter deficit may be related to the impaired social skills characteristic of autism, and we speculate that this deficit may implicate abnormalities in brain areas mediating perception of human movement.

Journal ArticleDOI
17 Jul 2003-Neuron
TL;DR: It is suggested that the human hippocampus mediates reactivation of crossmodal semantic associations, even in the absence of explicit memory processing; human olfactory perception is notoriously unreliable but benefits substantially from visual cues.


Journal ArticleDOI
TL;DR: Although many neuroimaging studies of visual mental imagery have revealed activation in early visual cortex (Areas 17 or 18), many others have not, and the variability in the literature is not random.
Abstract: Although many neuroimaging studies of visual mental imagery have revealed activation in early visual cortex (Areas 17 or 18), many others have not. The authors review this literature and compare how well 3 models explain the disparate results. Each study was coded 1 or 0, indicating whether activation in early visual cortex was observed, and sets of variables associated with each model were fit to the observed results using logistic regression analysis. Three variables predicted all of the systematic differences in the probability of activation across studies. Two of these variables were identified with a perceptual anticipation theory, and the other was identified with a methodological factors theory. Thus, the variability in the literature is not random.
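The coding scheme described above (each study scored 1 or 0 for early-visual-cortex activation, with study-level variables fit by logistic regression) can be sketched in a few lines. The predictors and data below are invented for illustration only; they are not the review's actual variables or results.

```python
import math

# Hypothetical coding (invented): each study is 1 if early visual cortex
# activation was reported, else 0, with two binary study-level predictors
# standing in for the review's model variables.
X = [(1, 1), (1, 0), (0, 1), (0, 0), (1, 1), (0, 0), (1, 0), (0, 1)]
y = [1, 1, 0, 0, 1, 0, 0, 1]

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Fit P(activation) = sigmoid(b0 + b1*x1 + b2*x2) by gradient descent."""
    w = [0.0, 0.0, 0.0]  # intercept, coefficient 1, coefficient 2
    for _ in range(steps):
        grad = [0.0, 0.0, 0.0]
        for (x1, x2), t in zip(X, y):
            p = 1 / (1 + math.exp(-(w[0] + w[1] * x1 + w[2] * x2)))
            err = p - t  # prediction error drives the gradient
            grad[0] += err
            grad[1] += err * x1
            grad[2] += err * x2
        w = [wi - lr * g / len(X) for wi, g in zip(w, grad)]
    return w

w = fit_logistic(X, y)

def prob(x1, x2):
    """Predicted probability of activation for a study with these predictors."""
    return 1 / (1 + math.exp(-(w[0] + w[1] * x1 + w[2] * x2)))
```

A study coded (1, 1) gets a high predicted probability of activation and one coded (0, 0) a low one, mirroring how the review tested which variables predicted activation across studies.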

Journal ArticleDOI
TL;DR: It is suggested that these two on-line methodologies, eye movements and event-related potentials, can be used in complementary ways to produce a better picture of the mental action the authors call reading.

Journal ArticleDOI
TL;DR: The findings indicate that experimental variations that modify the subjective visual experience of masked stimuli have no effect on motor effects of those stimuli in early processing, and proposes a model that provides a quantitative account of priming effects on response speed and accuracy.
Abstract: Visual stimuli may remain invisible but nevertheless produce strong and reliable effects on subsequent actions. How well features of a masked prime are perceived depends crucially on its physical parameters and those of the mask. We manipulated the visibility of masked stimuli and contrasted it with their influence on the speed of motor actions, comparing the temporal dynamics of visual awareness in metacontrast masking with that of action priming under the same conditions. We observed priming with identical time course for reportable and invisible prime stimuli, despite qualitative changes in the masking time course. Our findings indicate that experimental variations that modify the subjective visual experience of masked stimuli have no effect on motor effects of those stimuli in early processing. We propose a model that provides a quantitative account of priming effects on response speed and accuracy.
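The accumulator intuition behind such a model (a compatible masked prime gives the response process a head start toward threshold, speeding responses without requiring awareness) can be illustrated with a toy sketch. The functional form and all parameters below are invented, not the authors' fitted model.

```python
def response_time(prime_compatible, rate=1.0, head_start=0.2,
                  threshold=1.0, dt=0.001):
    """Toy evidence accumulator: a compatible prime shifts the starting
    evidence toward the response threshold; an incompatible prime shifts
    it away, so more accumulation time is needed."""
    evidence = head_start if prime_compatible else -head_start
    t = 0.0
    while evidence < threshold:
        evidence += rate * dt  # constant drift toward threshold
        t += dt
    return t

fast = response_time(True)   # compatible prime: shorter time to threshold
slow = response_time(False)  # incompatible prime: longer time to threshold
```

With these made-up parameters the compatible-prime response reaches threshold roughly 0.4 s earlier, the qualitative priming pattern that the paper's model accounts for quantitatively.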

Journal ArticleDOI
Hugh R. Wilson
TL;DR: This model demonstrates that competitive inhibition in the first rivalry stage can be eliminated by using suitable stimulus dynamics, thereby revealing properties of a later stage, and suggests that neural competition may be a general characteristic throughout the form-vision hierarchy.
Abstract: Cortical form vision comprises multiple, hierarchically arranged areas with feedforward and feedback interconnections. This complex architecture poses difficulties for attempts to link perceptual phenomena to activity at a particular level of the system. This difficulty has been especially salient in studies of binocular rivalry alternations, where there is seemingly conflicting evidence for a locus in primary visual cortex or alternatively in higher cortical areas devoted to object perception. Here, I use a competitive neural model to demonstrate that the data require at least two hierarchic rivalry stages for their explanation. This model demonstrates that competitive inhibition in the first rivalry stage can be eliminated by using suitable stimulus dynamics, thereby revealing properties of a later stage, a result obtained with both spike-rate and conductance-based model neurons. This result provides a synthesis of competing rivalry theories and suggests that neural competition may be a general characteristic throughout the form-vision hierarchy.
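The core ingredient of such competition models (mutual inhibition between neural populations, destabilized by slow adaptation so that dominance alternates) can be shown with a two-unit toy simulation. This is a generic one-stage sketch with invented parameters, not Wilson's two-stage model.

```python
def simulate(steps=20000, dt=0.001):
    """Two rate units with mutual inhibition and slow adaptation; returns
    which unit is dominant at each time step."""
    r = [0.6, 0.4]          # firing rates of the competing populations
    a = [0.0, 0.0]          # slow adaptation variables
    tau_r, tau_a = 0.02, 1.0  # fast rate dynamics, slow adaptation
    dominant = []
    for _ in range(steps):
        for j in (0, 1):
            k = 1 - j
            # input minus cross-inhibition minus self-adaptation
            drive = 1.0 - 2.0 * r[k] - a[j]
            r[j] += dt / tau_r * (max(0.0, drive) - r[j])
            a[j] += dt / tau_a * (1.5 * r[j] - a[j])
        dominant.append(0 if r[0] > r[1] else 1)
    return dominant

dom = simulate()
switches = sum(1 for x, y in zip(dom, dom[1:]) if x != y)
```

Adaptation slowly undermines the dominant unit until the suppressed one escapes inhibition, producing the spontaneous alternations that this family of rivalry models exhibits.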

Book
01 Jan 2003
TL;DR: In this book, Pylyshyn argues that there is a core stage of vision independent of the influence of our prior beliefs and examines how vision can be intelligent and yet essentially knowledge-free.
Abstract: In Seeing and Visualizing, Zenon Pylyshyn argues that seeing is different from thinking and that to see is not, as it may seem intuitively, to create an inner replica of the world. Pylyshyn examines how we see and how we visualize and why the scientific account does not align with the way these processes seem to us "from the inside." In doing so, he addresses issues in vision science, cognitive psychology, philosophy of mind, and cognitive neuroscience. First, Pylyshyn argues that there is a core stage of vision independent from the influence of our prior beliefs and examines how vision can be intelligent and yet essentially knowledge-free. He then proposes that a mechanism within the vision module, called a visual index (or FINST), provides a direct preconceptual connection between parts of visual representations and things in the world, and he presents various experiments that illustrate the operation of this mechanism. He argues that such a deictic reference mechanism is needed to account for many properties of vision, including how mental images attain their apparent spatial character without themselves being laid out in space in our brains. The final section of the book examines the "picture theory" of mental imagery, including recent neuroscience evidence, and asks whether any current evidence speaks to the issue of the format of mental images. This analysis of mental imagery brings together many of the themes raised throughout the book and provides a framework for considering such issues as the distinction between the form and the content of representations, the role of vision in thought, and the relation between behavioral, neuroscientific, and phenomenological evidence regarding mental representations.

Journal ArticleDOI
TL;DR: The finding that false alarms evoked more activity than misses indicates that activity in early visual cortex corresponded to the subjects' percepts, rather than to the physically presented stimulus.
Abstract: We used functional magnetic resonance imaging (fMRI) to measure activity in human early visual cortex (areas V1, V2 and V3) during a challenging contrast-detection task. Subjects attempted to detect the presence of slight contrast increments added to two kinds of background patterns. Behavioral responses were recorded so that the corresponding cortical activity could be grouped into the usual signal detection categories: hits, false alarms, misses and correct rejects. For both kinds of background patterns, the measured cortical activity was retinotopically specific. Hits and false alarms were associated with significantly more cortical activity than were correct rejects and misses. That false alarms evoked more activity than misses indicates that activity in early visual cortex corresponded to the subjects' percepts, rather than to the physically presented stimulus.
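The signal detection grouping used here (hits, false alarms, misses, correct rejects) is the same bookkeeping that underlies the standard sensitivity index d'. As an illustration of that bookkeeping only (not the paper's fMRI analysis), d' can be computed from the four counts:

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejects):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Assumes no rate is exactly 0 or 1 (those cases need a correction,
    e.g. adding 0.5 to each cell before computing the rates)."""
    z = NormalDist().inv_cdf  # probit / z-transform
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejects)
    return z(hit_rate) - z(fa_rate)

# Made-up counts for a moderately sensitive observer:
d = dprime(hits=70, misses=30, false_alarms=20, correct_rejects=80)
```

Here d is about 1.37. The paper's key contrast regroups the same four cells by percept rather than stimulus: hits and false alarms (both "seen") drove more early-visual activity than misses and correct rejects (both "not seen").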

Journal ArticleDOI
TL;DR: These studies are beginning to indicate that colour is processed not in isolation, but together with information about luminance and visual form, by the same neural circuits, to achieve a unitary and robust representation of the visual world.
Abstract: The perception of colour is a central component of primate vision. Colour facilitates object perception and recognition, and has an important role in scene segmentation and visual memory. Moreover, it provides an aesthetic component to visual experiences that is fundamental to our perception of the world. Despite the long history of colour vision studies, much has still to be learned about the physiological basis of colour perception. Recent advances in our understanding of the early processing in the retina and thalamus have enabled us to take a fresh look at cortical processing of colour. These studies are beginning to indicate that colour is processed not in isolation, but together with information about luminance and visual form, by the same neural circuits, to achieve a unitary and robust representation of the visual world.

Journal ArticleDOI
17 Jul 2003-Neuron
TL;DR: Subjects were scanned during a task that involved remapping of visual signals across hemifields, demonstrating that updating of visual information occurs in human parietal cortex.

Journal ArticleDOI
TL;DR: Recovery after long-term blindness was first studied in 1793, but few cases have been reported since; here, psychophysical and neuroimaging techniques are combined to characterize the effects of long-term visual deprivation on human cortex.
Abstract: Recovery after long-term blindness was first studied in 1793, but few cases have been reported since. We combined psychophysical and neuroimaging techniques to characterize the effects of long-term visual deprivation on human cortex.

Journal ArticleDOI
TL;DR: The results suggest that dedicated, real-time visuomotor mechanisms are engaged for the control of action only after the response is cued, and only if the target is visible; these mechanisms compute the absolute metrics of the target and therefore resist size-contrast illusions.
Abstract: Participants were cued by an auditory tone to grasp a target object from within a size-contrast display. The peak grip aperture was unaffected by the perceptual size illusion when the target array was visible between the response cue and movement onset (vision trials). The grasp was sensitive to the illusion, however, when the target array was occluded from view when the response was cued (occlusion trials). This was true when the occlusion occurred 2.5 s before the response cue (delay), but also when the occlusion coincided with the response cue (no-delay). Unlike previous experiments, vision and occlusion trials were presented in random sequence. The results suggest that dedicated, real-time visuomotor mechanisms are engaged for the control of action only after the response is cued, and only if the target is visible. These visuomotor mechanisms compute the absolute metrics of the target object and therefore resist size-contrast illusions. In other situations (e.g. prior to the response cue, or if the target is no longer visible), a perceptual representation of the target object can be used for action planning. Unlike the real-time visuomotor mechanisms, perception-based movement planning makes use of relational metrics, and is therefore sensitive to size-contrast illusions.

Journal ArticleDOI
TL;DR: Cross-modal binding in auditory-visual speech perception was investigated using the McGurk effect, a phenomenon in which hearing is altered by incongruent visual mouth movements, with functional magnetic resonance imaging and positron emission tomography.

Journal ArticleDOI
TL;DR: The results show that the auditory system can strongly influence visual perception and are consistent with the idea that bimodal sensory conflicts are dominated by the sensory system with the greater acuity for the stimulus parameter being discriminated.
Abstract: Visual stimuli are known to influence the perception of auditory stimuli in spatial tasks, giving rise to the ventriloquism effect. These influences can persist in the absence of visual input following a period of exposure to spatially disparate auditory and visual stimuli, a phenomenon termed the ventriloquism aftereffect. It has been speculated that the visual dominance over audition in spatial tasks is due to the superior spatial acuity of vision compared with audition. If that is the case, then the auditory system should dominate visual perception in a manner analogous to the ventriloquism effect and aftereffect if one uses a task in which the auditory system has superior acuity. To test this prediction, the interactions of visual and auditory stimuli were measured in a temporally based task in normal human subjects. The results show that the auditory system has a pronounced influence on visual temporal rate perception. This influence was independent of the spatial location, spectral bandwidth, and intensity of the auditory stimulus. The influence was, however, strongly dependent on the disparity in temporal rate between the two stimulus modalities. Further, aftereffects were observed following approximately 20 min of exposure to temporally disparate auditory and visual stimuli. These results show that the auditory system can strongly influence visual perception and are consistent with the idea that bimodal sensory conflicts are dominated by the sensory system with the greater acuity for the stimulus parameter being discriminated.
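The closing idea (the modality with the greater acuity for the judged parameter dominates) is what reliability-weighted cue combination predicts: each cue is weighted by its inverse variance, so the more precise sense pulls the fused estimate toward itself. A minimal sketch with invented numbers; this is the standard maximum-likelihood fusion rule, not the authors' analysis:

```python
def combine(est_a, sigma_a, est_v, sigma_v):
    """Inverse-variance weighted fusion of two cues (the maximum-likelihood
    estimate under independent Gaussian noise)."""
    w_a = 1.0 / sigma_a ** 2
    w_v = 1.0 / sigma_v ** 2
    return (w_a * est_a + w_v * est_v) / (w_a + w_v)

# Temporal-rate judgment: audition is the more precise sense (smaller sigma),
# so the fused rate lands near the auditory estimate (numbers are made up).
fused = combine(est_a=10.0, sigma_a=0.5, est_v=14.0, sigma_v=2.0)
```

With these numbers the fused estimate is about 10.24, close to the auditory 10; swapping the sigmas would make vision dominate instead, which is the spatial ventriloquism case.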

Journal ArticleDOI
TL;DR: Advances in Bayesian models of computer vision and in the measurement and modeling of natural image statistics are providing the tools to test and constrain theories of human object perception, which are having an impact on the interpretation of cortical function.

Journal ArticleDOI
TL;DR: These findings demonstrate that children with autism do not have a general difficulty in connecting context information and item information as predicted by weak central coherence theory, and suggest that there is specific difficulty with complex verbal stimuli and in particular with using sentence context to disambiguate meaning.
Abstract: Background: This research investigated the proposal that children with autism are impaired in processing information in its context. To date, this proposal rests almost exclusively on evidence from verbal tasks. Given evidence of visuo-spatial proficiency in autism in other areas of functioning, it is possible that the ability to use context is spared in the visual domain but impaired in the verbal domain. Method: Fifteen children with autism and 16 age and IQ-matched typically developing children were tested on their ability to take account of visual context information (Experiment 1) and verbal context information (Experiment 2) using an adaptation of Palmer's (1975) visual context task. They were also given an adaptation of Tager-Flusberg's (1991) visual and verbal semantic memory task (Experiment 3) and Frith and Snowling's (1983) homograph task (Experiment 4). Results: Experiment 1 showed that children with autism were facilitated by the provision of visual context information. Experiments 2 and 3 showed that the same children were also able to use both verbal context information when identifying words and semantic category information in a verbal task when naming and recalling words. However, in Experiment 4 these children had difficulties with a sentence-processing task when using sentence context to disambiguate homographs. Conclusions: These findings demonstrate that children with autism do not have a general difficulty in connecting context information and item information as predicted by weak central coherence theory. Instead the results suggest that there is specific difficulty with complex verbal stimuli and in particular with using sentence context to disambiguate meaning.

Journal ArticleDOI
TL;DR: Using human fMRI, it is demonstrated that not only higher occipitotemporal but also early retinotopic areas are involved in the perceptual organization and detection of global shapes, providing novel evidence for the role of both early feature integration processes and higher stages of visual analysis in coherent visual perception.

Journal ArticleDOI
TL;DR: It is reported that graspable objects may facilitate visuomotor transformations by automatically grabbing visual spatial attention, and it is suggested that visual sensory gain aids perception and may also have consequences for object-directed actions.
Abstract: Visually guided grasping movements require a rapid transformation of visual representations into object-specific motor programs. Here we report that graspable objects may facilitate these visuomotor transformations by automatically grabbing visual spatial attention. Human subjects viewed two task-irrelevant objects—one was a 'tool', the other a 'non-tool'—while waiting for a target to be presented in one of the two object locations. Using event-related potentials (ERPs), we found that spatial attention was systematically drawn to tools in the right and lower visual fields, the hemifields that are dominant for visuomotor processing. Using event-related fMRI, we confirmed that tools grabbed spatial attention only when they also activated dorsal regions of premotor and prefrontal cortices, regions associated with visually guided actions and their planning. Although it is widely accepted that visual sensory gain aids perception, our results suggest that it may also have consequences for object-directed actions.