
Showing papers on "Visual perception" published in 2009


Journal ArticleDOI
TL;DR: Research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements.
Abstract: Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading is somewhat more advanced than research on eye movements in scene perception and visual search, and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

2,033 citations


Journal ArticleDOI
02 Apr 2009-Nature
TL;DR: It is shown that orientations held in working memory can be decoded from activity patterns in the human visual cortex, even when overall levels of activity are low, and that early visual areas can retain specific information about visual features held in working memory over periods of many seconds when no physical stimulus is present.
Abstract: Visual working memory provides an essential link between perception and higher cognitive functions, allowing for the active maintenance of information about stimuli no longer in view. Research suggests that sustained activity in higher-order prefrontal, parietal, inferotemporal and lateral occipital areas supports visual maintenance, and may account for the limited capacity of working memory to hold up to 3-4 items. Because higher-order areas lack the visual selectivity of early sensory areas, it has remained unclear how observers can remember specific visual features, such as the precise orientation of a grating, with minimal decay in performance over delays of many seconds. One proposal is that sensory areas serve to maintain fine-tuned feature information, but early visual areas show little to no sustained activity over prolonged delays. Here we show that orientations held in working memory can be decoded from activity patterns in the human visual cortex, even when overall levels of activity are low. Using functional magnetic resonance imaging and pattern classification methods, we found that activity patterns in visual areas V1-V4 could predict which of two oriented gratings was held in memory with mean accuracy levels upwards of 80%, even in participants whose activity fell to baseline levels after a prolonged delay. These orientation-selective activity patterns were sustained throughout the delay period, evident in individual visual areas, and similar to the responses evoked by unattended, task-irrelevant gratings. Our results demonstrate that early visual areas can retain specific information about visual features held in working memory, over periods of many seconds when no physical stimulus is present.

1,123 citations
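
A compact way to see the logic of the decoding analysis is a toy linear classifier. The sketch below is an illustration with invented numbers, not the study's pipeline (scikit-learn's LinearSVC is an assumption of convenience): synthetic delay-period "voxel" patterns carry a weak, distributed orientation signal, so the pattern is decodable even though no single voxel, and no overall activity level, is informative on its own.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 120

# 0 / 1 codes which of two oriented gratings is held in memory.
labels = rng.integers(0, 2, n_trials)

# Each orientation biases voxels in opposite directions (small amplitude)
# against unit-variance noise, so no single voxel is reliable by itself.
bias = rng.normal(0.0, 0.1, n_voxels)
patterns = np.outer(2 * labels - 1, bias) + rng.normal(0.0, 1.0, (n_trials, n_voxels))

# Cross-validated accuracy at decoding the remembered orientation.
acc = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=10).mean()
print(f"decoding accuracy: {acc:.2f}")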


Journal ArticleDOI
TL;DR: The results support the notion that ongoing oscillations shape perception by providing a temporal reference frame for neural codes that rely on precise spike timing, and indicate that the visual detection threshold fluctuates over time along with the phase of ongoing EEG activity.
Abstract: Oscillations are ubiquitous in electrical recordings of brain activity. While the amplitude of ongoing oscillatory activity is known to correlate with various aspects of perception, the influence of oscillatory phase on perception remains unknown. In particular, since phase varies on a much faster timescale than the more sluggish amplitude fluctuations, phase effects could reveal the fine-grained neural mechanisms underlying perception. We presented brief flashes of light at the individual luminance threshold while EEG was recorded. Although the stimulus on each trial was identical, subjects detected approximately half of the flashes (hits) and entirely missed the other half (misses). Phase distributions across trials were compared between hits and misses. We found that shortly before stimulus onset, each of the two distributions exhibited significant phase concentration, but at different phase angles. This effect was strongest in the theta and alpha frequency bands. In this time-frequency range, oscillatory phase accounted for at least 16% of variability in detection performance and allowed the prediction of performance on the single-trial level. This finding indicates that the visual detection threshold fluctuates over time along with the phase of ongoing EEG activity. The results support the notion that ongoing oscillations shape our perception, possibly by providing a temporal reference frame for neural codes that rely on precise spike timing.

1,038 citations
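
The phase-concentration analysis has a simple core, sketched below with simulated data (in reality, single-trial phases are extracted from the EEG by time-frequency analysis; all distribution parameters here are invented). Hits and misses are each concentrated, but around opposite phase angles, which is what makes prestimulus phase predictive of detection.

import numpy as np

def resultant_length(phases):
    """Mean resultant vector length: 0 = uniform phases, 1 = identical phases."""
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)
# Simulated theta/alpha phases at flash onset, von Mises distributed:
# hits cluster near 0 rad, misses near pi.
hit_phases = rng.vonmises(0.0, 0.5, size=300)
miss_phases = rng.vonmises(np.pi, 0.5, size=300)

print(f"hit-phase concentration:  {resultant_length(hit_phases):.2f}")
print(f"miss-phase concentration: {resultant_length(miss_phases):.2f}")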


Journal ArticleDOI
TL;DR: It is shown that the phase of EEG α rhythm measured over posterior brain regions can reliably predict both subsequent visual detection and stimulus-elicited cortical activation levels in a metacontrast masking paradigm, suggesting that cortical excitability level may mediate target detection.
Abstract: We often fail to see something that at other times is readily detectable. Because the visual stimulus itself is unchanged, this variability in conscious awareness is likely related to changes in the brain. Here we show that the phase of EEG alpha rhythm measured over posterior brain regions can reliably predict both subsequent visual detection and stimulus-elicited cortical activation levels in a metacontrast masking paradigm. When a visual target presentation coincides with the trough of an alpha wave, cortical activation is suppressed as early as 100 ms after stimulus onset, and observers are less likely to detect the target. Thus, during one alpha cycle lasting 100 ms, the human brain goes through a rapid oscillation in excitability, which directly influences the probability that an environmental stimulus will reach conscious awareness. Moreover, ERPs to the appearance of a fixation cross before the target predict its detection, further suggesting that cortical excitability level may mediate target detection. A novel theory of cortical inhibition is proposed in which increased alpha power represents a “pulsed inhibition” of cortical activity that affects visual awareness.

946 citations


Journal ArticleDOI
TL;DR: Recent work that has begun to delineate a neurobiology of visual expectation is reviewed, and the findings are contrasted with those of the attention literature to explore how these two central influences on visual perception overlap, differ and interact.

801 citations


Journal ArticleDOI
TL;DR: It is demonstrated that visual perceptual learning, an example of adult neural plasticity, modifies the resting covariance structure of spontaneous activity between networks engaged by the task, concluding that functional connectivity serves a dynamic role in brain function, supporting the consolidation of previous experience.
Abstract: The brain is not a passive sensory-motor analyzer driven by environmental stimuli, but actively maintains ongoing representations that may be involved in the coding of expected sensory stimuli, prospective motor responses, and prior experience. Spontaneous cortical activity has been proposed to play an important part in maintaining these ongoing, internal representations, although its functional role is not well understood. One spontaneous signal being intensely investigated in the human brain is the interregional temporal correlation of the blood-oxygen level-dependent (BOLD) signal recorded at rest by functional MRI (functional connectivity-by-MRI, fcMRI, or BOLD connectivity). This signal is intrinsic and coherent within a number of distributed networks whose topography closely resembles that of functional networks recruited during tasks. While it is apparent that fcMRI networks reflect anatomical connectivity, it is less clear whether they have any dynamic functional importance. Here, we demonstrate that visual perceptual learning, an example of adult neural plasticity, modifies the resting covariance structure of spontaneous activity between networks engaged by the task. Specifically, after intense training on a shape-identification task constrained to one visual quadrant, resting BOLD functional connectivity and directed mutual interaction between trained visual cortex and frontal-parietal areas involved in the control of spatial attention were significantly modified. Critically, these changes correlated with the degree of perceptual learning. We conclude that functional connectivity serves a dynamic role in brain function, supporting the consolidation of previous experience.

745 citations
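
At its core, the fcMRI measure defined above is a temporal correlation between regional BOLD time series. A minimal sketch with toy signals (region names and numbers are invented; real analyses involve extensive preprocessing and network-level statistics):

import numpy as np

rng = np.random.default_rng(4)
n_timepoints = 240                       # e.g., an 8-min resting run at TR = 2 s
shared = rng.normal(0, 1, n_timepoints)  # coherent spontaneous fluctuation

# Two regions that partly share the spontaneous signal.
visual_ts = shared + rng.normal(0, 1, n_timepoints)
parietal_ts = shared + rng.normal(0, 1, n_timepoints)

fc = np.corrcoef(visual_ts, parietal_ts)[0, 1]
print(f"resting functional connectivity r = {fc:.2f}")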


Journal ArticleDOI
TL;DR: Multiple strategies, including retinal tiling, hierarchical and parallel processing, and modularity defined spatially and by cell-type-specific connectivity, are used by the visual system to recover the intricate detail of our visual surroundings.
Abstract: Incoming sensory information is sent to the brain along modality-specific channels corresponding to the five senses. Each of these channels further parses the incoming signals into parallel streams to provide a compact, efficient input to the brain. Ultimately, these parallel input signals must be elaborated on and integrated in the cortex to provide a unified and coherent percept. Recent studies in the primate visual cortex have greatly contributed to our understanding of how this goal is accomplished. Multiple strategies including retinal tiling, hierarchical and parallel processing and modularity, defined spatially and by cell type-specific connectivity, are used by the visual system to recover the intricate detail of our visual surroundings.

663 citations


Journal ArticleDOI
15 Jan 2009-Neuron
TL;DR: The orientation selectivity of stimulus-evoked LFP signals in primary visual cortex is determined, and a quantitative estimate indicates that LFPs are more local than often recognized, providing a guide to the interpretation of the increasing number of studies that rest on LFP recordings.

560 citations


Journal ArticleDOI
24 Sep 2009-Neuron
TL;DR: A new Bayesian decoder is demonstrated that uses fMRI signals from early and anterior visual areas to reconstruct complex natural images that accurately reflect both the spatial structure and semantic category of the objects contained in the observed natural image.

523 citations
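
Schematically (this is generic Bayesian decoding, not the paper's specific implementation), such a decoder inverts an encoding model with Bayes' rule. Given measured voxel responses \mathbf{r}, the reconstruction is the image s that maximizes the posterior

    \hat{s} = \arg\max_s \; p(s \mid \mathbf{r}) = \arg\max_s \; p(\mathbf{r} \mid s)\, p(s),

where the likelihood p(\mathbf{r} \mid s) comes from a response model fitted to visual cortex and the prior p(s) captures the statistics (and, plausibly, the semantic regularities) of natural images.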


Journal ArticleDOI
TL;DR: The accumulated evidence demonstrates that microsaccades serve both perceptual and oculomotor goals, and although in some cases their contribution is neither necessary nor unique, microsaccades are a malleable tool conveniently employed by the visual system.

506 citations


Journal ArticleDOI
TL;DR: The study of higher-order topographic cortex promises to yield unprecedented insights into the neural mechanisms of cognitive processes and, in conjunction with parallel studies in non-human primates, into the evolution of cognition.

Journal ArticleDOI
TL;DR: The hypothesis is developed that the brain's ability to see in the present incorporates a representation of the affective impact of those visual sensations in the past, a representation that makes up part of the brain's prediction of what the visual sensations stand for in the present, including how to act on them in the near future.
Abstract: People see with feeling. We ‘gaze’, ‘behold’, ‘stare’, ‘gape’ and ‘glare’. In this paper, we develop the hypothesis that the brain's ability to see in the present incorporates a representation of the affective impact of those visual sensations in the past. This representation makes up part of the brain's prediction of what the visual sensations stand for in the present, including how to act on them in the near future. The affective prediction hypothesis implies that responses signalling an object's salience, relevance or value do not occur as a separate step after the object is identified. Instead, affective responses support vision from the very moment that visual stimulation begins.

Journal ArticleDOI
TL;DR: It is found that monkeys can rapidly reweight visual and vestibular cues according to their reliability, the first such demonstration in a nonhuman species.
Abstract: The perception of self-motion direction, or heading, relies on integration of multiple sensory cues, especially from the visual and vestibular systems. However, the reliability of sensory information can vary rapidly and unpredictably, and it remains unclear how the brain integrates multiple sensory signals given this dynamic uncertainty. Human psychophysical studies have shown that observers combine cues by weighting them in proportion to their reliability, consistent with statistically optimal integration schemes derived from Bayesian probability theory. Remarkably, because cue reliability is varied randomly across trials, the perceptual weight assigned to each cue must change from trial to trial. Dynamic cue reweighting has not been examined for combinations of visual and vestibular cues, nor has the Bayesian cue integration approach been applied to laboratory animals, an important step toward understanding the neural basis of cue integration. To address these issues, we tested human and monkey subjects in a heading discrimination task involving visual (optic flow) and vestibular (translational motion) cues. The cues were placed in conflict on a subset of trials, and their relative reliability was varied to assess the weights that subjects gave to each cue in their heading judgments. We found that monkeys can rapidly reweight visual and vestibular cues according to their reliability, the first such demonstration in a nonhuman species. However, some monkeys and humans tended to over-weight vestibular cues, inconsistent with simple predictions of a Bayesian model. Nonetheless, our findings establish a robust model system for studying the neural mechanisms of dynamic cue reweighting in multisensory perception.
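
The statistically optimal scheme invoked here has a standard closed form, stated for reference (a textbook result, not quoted from the paper's methods). With independent Gaussian noise on each cue, the maximum-likelihood heading estimate weights each cue by its reliability (inverse variance):

    \hat{s} = w_{\mathrm{vis}}\,\hat{s}_{\mathrm{vis}} + w_{\mathrm{vest}}\,\hat{s}_{\mathrm{vest}},
    \qquad w_{\mathrm{vis}} = \frac{1/\sigma_{\mathrm{vis}}^2}{1/\sigma_{\mathrm{vis}}^2 + 1/\sigma_{\mathrm{vest}}^2},
    \qquad w_{\mathrm{vest}} = 1 - w_{\mathrm{vis}},

with combined variance \sigma_{\mathrm{comb}}^2 = \sigma_{\mathrm{vis}}^2\,\sigma_{\mathrm{vest}}^2 / (\sigma_{\mathrm{vis}}^2 + \sigma_{\mathrm{vest}}^2), never larger than either single-cue variance. "Over-weighting" of vestibular cues means the empirically measured w_{\mathrm{vest}} exceeds this prediction.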

Journal ArticleDOI
TL;DR: It is rather puzzling that, given the massive interest in affective neuroscience in the last decade, it still seems to make sense to raise the question "Why bodies?" and to try to provide...
Abstract: Why bodies? It is rather puzzling that given the massive interest in affective neuroscience in the last decade, it still seems to make sense to raise the question ‘Why bodies’ and to try to provide...

Journal ArticleDOI
TL;DR: A critical review and "meta-analysis" of perceptual learning (PL) in adults and children with amblyopia is provided, with a view to extracting principles that might make PL more effective and efficient.

Journal ArticleDOI
01 May 2009-Brain
TL;DR: This review will draw together work from animal and human studies in an attempt to provide an insight into how Parkinson's disease affects the retina and how these changes might contribute to the visual symptoms experienced by patients.
Abstract: As a more complete picture of the clinical phenotype of Parkinson's disease emerges, non-motor symptoms have become increasingly studied. Prominent among these non-motor phenomena are mood disturbance, cognitive decline and dementia, sleep disorders, hyposmia and autonomic failure. In addition, visual symptoms are common, ranging from complaints of dry eyes and reading difficulties, through to perceptual disturbances (feelings of presence and passage) and complex visual hallucinations. Such visual symptoms are a considerable cause of morbidity in Parkinson's disease and, with respect to visual hallucinations, are an important predictor of cognitive decline as well as institutional care and mortality. Evidence exists of visual dysfunction at several levels of the visual pathway in Parkinson's disease. This includes psychophysical, electrophysiological and morphological evidence of disruption of retinal structure and function, in addition to disorders of ‘higher’ (cortical) visual processing. In this review, we will draw together work from animal and human studies in an attempt to provide an insight into how Parkinson's disease affects the retina and how these changes might contribute to the visual symptoms experienced by patients.

Journal ArticleDOI
12 Mar 2009-Neuron
TL;DR: Results show that visual learning can occur in human adults through stimulus-reward pairing, in the absence of a task and without awareness of the stimulus presentation or reward contingencies.

Journal ArticleDOI
TL;DR: Functional magnetic resonance imaging and dynamic causal modeling are used to furnish neurophysiological evidence that statistical associations are learnt, even when task-irrelevant, and posit a dual role for prediction-error in encoding surprise and driving associative plasticity.
Abstract: Confronted with a rich sensory environment, the brain must learn statistical regularities across sensory domains to construct causal models of the world. Here, we used functional magnetic resonance imaging and dynamic causal modeling (DCM) to furnish neurophysiological evidence that statistical associations are learnt, even when task-irrelevant. Subjects performed an audio-visual target-detection task while being exposed to distractor stimuli. Unknown to them, auditory distractors predicted the presence or absence of subsequent visual distractors. We modeled incidental learning of these associations using a Rescorla-Wagner (RW) model. Activity in primary visual cortex and putamen reflected learning-dependent surprise: these areas responded progressively more to unpredicted, and progressively less to predicted visual stimuli. Critically, this prediction-error response was observed even when the absence of a visual stimulus was surprising. We investigated the underlying mechanism by embedding the RW model into a DCM to show that auditory to visual connectivity changed significantly over time as a function of prediction error. Thus, consistent with predictive coding models of perception, associative learning is mediated by prediction-error dependent changes in connectivity. These results posit a dual role for prediction-error in encoding surprise and driving associative plasticity.
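
The Rescorla-Wagner update used to generate trial-by-trial surprise signals fits in a few lines. The sketch below is a generic RW learner (learning rate and trial sequence are invented for illustration; the study fitted such a model to the experiment and used its prediction errors as fMRI regressors):

def rescorla_wagner(outcomes, alpha=0.2, v0=0.5):
    """Track associative strength V and prediction errors across trials."""
    v = v0
    history = []
    for lam in outcomes:        # lam = 1 if the visual distractor appears, else 0
        pe = lam - v            # prediction error: the surprise term
        v = v + alpha * pe      # learning is driven by the prediction error
        history.append((v, pe))
    return history

# An auditory cue that predicts the visual distractor on most trials;
# note the large negative error when the expected stimulus is omitted.
for v, pe in rescorla_wagner([1, 1, 1, 0, 1, 1]):
    print(f"V = {v:.2f}, prediction error = {pe:+.2f}")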

Journal ArticleDOI
01 Sep 2009-Brain
TL;DR: It is concluded that PFC makes a causal contribution to conscious visual perception of masked stimuli, and a dual-route signal detection theory of objective and subjective decision making is outlined.
Abstract: What neural mechanisms support our conscious perception of briefly presented stimuli? Some theories of conscious access postulate a key role of top–down amplification loops involving prefrontal cortex (PFC). To test this issue, we measured the visual backward masking threshold in patients with focal prefrontal lesions, using both objective and subjective measures while controlling for putative attention deficits. In all conditions of temporal or spatial attention cueing, the threshold for access to consciousness was systematically shifted in patients, particularly after a lesion of the left anterior PFC. The deficit affected subjective reports more than objective performance, and objective performance conditioned on subjective visibility was essentially normal. We conclude that PFC makes a causal contribution to conscious visual perception of masked stimuli, and outline a dual-route signal detection theory of objective and subjective decision making.
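
For reference, the basic signal-detection quantities behind "objective and subjective measures" can be computed in a few lines (generic equal-variance SDT, not the paper's dual-route model): sensitivity d' indexes objective discrimination, while the criterion c is where subjective report thresholds can shift independently of d'.

from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Standard equal-variance SDT estimates from hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

d, c = dprime_and_criterion(0.80, 0.20)
print(f"d' = {d:.2f}, criterion = {c:.2f}")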

Journal ArticleDOI
TL;DR: Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination, and support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis.
Abstract: We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces—a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis.
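
A generic statistical intuition for this dissociation (not the authors' specific model): if each face contributes an independent noisy internal estimate, the error on the set mean falls as 1/sqrt(k) with set size k, so a precise average can coexist with poor knowledge of any individual member.

import numpy as np

rng = np.random.default_rng(2)
n_sets, set_size, noise_sd = 10_000, 16, 1.0

# Noisy internal estimates of each face's emotionality (true set mean = 0).
samples = rng.normal(0.0, noise_sd, (n_sets, set_size))

err_individual = np.abs(samples[:, 0]).mean()       # error for one member
err_ensemble = np.abs(samples.mean(axis=1)).mean()  # error for the set mean
print(err_individual / err_ensemble)                # ~ sqrt(16) = 4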

Journal ArticleDOI
TL;DR: The plasticity in the size of this temporal window was investigated using a perceptual learning paradigm in which participants were given feedback during a two-alternative forced choice (2-AFC) audiovisual simultaneity judgment task, suggesting a high degree of flexibility in multisensory temporal processing.
Abstract: The brain’s ability to bind incoming auditory and visual stimuli depends critically on the temporal structure of this information. Specifically, there exists a temporal window of audiovisual integration within which stimuli are highly likely to be bound together and perceived as part of the same environmental event. Several studies have described the temporal bounds of this window, but few have investigated its malleability. Here, the plasticity in the size of this temporal window was investigated using a perceptual learning paradigm in which participants were given feedback during a two-alternative forced-choice (2-AFC) audiovisual simultaneity judgment task. Training resulted in a marked (i.e., approximately 40%) narrowing in the size of the window. To rule out the possibility that this narrowing was the result of changes in cognitive biases, a second experiment employing a two-interval forced choice (2-IFC) paradigm was undertaken during which participants were instructed to identify a simultaneously-presented audiovisual pair presented within one of two intervals. The 2-IFC paradigm resulted in a narrowing that was similar in both degree and dynamics to that using the 2-AFC approach. Together, these results illustrate that different methods of multisensory perceptual training can result in substantial alterations in the circuits underlying the perception of audiovisual simultaneity. These findings suggest a high degree of flexibility in multisensory temporal processing and have important implications for interventional strategies that may be used to ameliorate clinical conditions (e.g., autism, dyslexia) in which multisensory temporal function may be impaired.
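
One standard way to quantify such a window, sketched here with fabricated data (not necessarily the authors' exact analysis), is to fit a Gaussian to the proportion of "simultaneous" responses across audio-visual onset asynchronies and take its width as the window estimate; training should shrink the fitted width.

import numpy as np
from scipy.optimize import curve_fit

def sj_curve(soa_ms, amplitude, center, width):
    """Gaussian profile of 'simultaneous' responses over asynchrony."""
    return amplitude * np.exp(-((soa_ms - center) ** 2) / (2 * width ** 2))

soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_simultaneous = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.25, 0.10])

(amp, center, width), _ = curve_fit(sj_curve, soas, p_simultaneous, p0=[1.0, 0.0, 150.0])
print(f"estimated window width: {width:.0f} ms")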

Journal ArticleDOI
TL;DR: A computational theory of performance in this task is described, which provides a detailed, quantitative account of attentional effects in spatial cuing tasks at the level of response accuracy and the response time distributions.
Abstract: The simplest attentional task, detecting a cued stimulus in an otherwise empty visual field, produces complex patterns of performance. Attentional cues interact with backward masks and with spatial uncertainty, and there is a dissociation in the effects of these variables on accuracy and on response time. A computational theory of performance in this task is described. The theory links visual encoding, masking, spatial attention, visual short-term memory (VSTM), and perceptual decision making in an integrated dynamic framework. The theory assumes that decisions are made by a diffusion process driven by a neurally plausible, shunting VSTM. The VSTM trace encodes the transient outputs of early visual filters in a durable form that is preserved for the time needed to make a decision. Attention increases the efficiency of VSTM encoding, either by increasing the rate of trace formation or by reducing the delay before trace formation begins. The theory provides a detailed, quantitative account of attentional effects in spatial cuing tasks at the level of response accuracy and the response time distributions.
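
A minimal sketch of the diffusion component of such a theory (an illustration of the general mechanism, not the article's full integrated model): evidence accumulates toward one of two bounds, and attention is caricatured here as a higher drift rate, standing in for more efficient VSTM encoding of the cued stimulus.

import numpy as np

def diffusion_trial(drift, bound=1.0, noise=1.0, dt=1e-3, rng=None):
    """Simulate one diffusion decision; returns (upper bound hit?, time in s)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return x > 0, t

rng = np.random.default_rng(3)
for drift, label in [(0.3, "uncued"), (0.9, "cued")]:
    trials = [diffusion_trial(drift, rng=rng) for _ in range(500)]
    p_detect = np.mean([hit for hit, _ in trials])
    mean_rt = np.mean([t for _, t in trials])
    print(f"{label}: p(correct) = {p_detect:.2f}, mean decision time = {mean_rt:.2f} s")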

Journal ArticleDOI
TL;DR: It is demonstrated that, relative to typical children, autistic children have difficulty with visual perspective taking but not with a task requiring mental rotation, and that performance on the level 2 visual perspective-taking task correlated with theory of mind performance.

Journal ArticleDOI
TL;DR: It is indicated that prestimulus fluctuations in visual areas can influence the conscious detection of an upcoming stimulus via two distinct mechanisms: an attention-driven baseline shift in the alpha range, and a decision bias in the gamma range.
Abstract: Visual perception fluctuates across repeated presentations of the same near-threshold stimulus. These perceptual fluctuations have often been attributed to baseline shifts (i.e., ongoing modulations of neuronal activity in visual areas) driven by top-down attention. Using magnetoencephalography, we directly tested whether ongoing attentional modulations could fully account for the perceptual impact of prestimulus activity on a subsequent seen-unseen decision. We found that prestimulus gamma-band fluctuations in lateral occipital areas (LO) predicted visual awareness, but did not reflect the focus of spatial attention. Moreover, these prestimulus signals influenced the decision outcome independently from the strength of the following visual response, suggesting that baseline shifts alone could not explain their perceptual impact. Using a straightforward decision-making model based on the accumulation of sensory evidence over time, we show that prestimulus gamma-band fluctuations in LO behave as a decision bias at stimulus onset, irrespective of subsequent stimulus processing. In contrast, spatial attention suppressed prestimulus alpha-band signals in the same region, and produced a sustained baseline shift that also predicted the outcome of the seen-unseen decision. Together, our results indicate that prestimulus fluctuations in visual areas can influence the conscious detection of an upcoming stimulus via two distinct mechanisms: an attention-driven baseline shift in the alpha range, and a decision bias in the gamma range.

Journal ArticleDOI
TL;DR: It is confirmed that sound does tend to dominate the perceived timing of audio-visual stimuli, and the dominance was predicted qualitatively by considering the better temporal localization of audition, but the quantitative fit was less than perfect.
Abstract: The "ventriloquist effect" refers to the fact that vision usually dominates hearing in spatial localization, and this has been shown to be consistent with optimal integration of visual and auditory signals (Alais and Burr in Curr Biol 14(3):257-262, 2004). For temporal localization, however, auditory stimuli often "capture" visual stimuli, in what has become known as "temporal ventriloquism". We examined this quantitatively using a bisection task, confirming that sound does tend to dominate the perceived timing of audio-visual stimuli. The dominance was predicted qualitatively by considering the better temporal localization of audition, but the quantitative fit was less than perfect, with more weight being given to audition than predicted from thresholds. As predicted by optimal cue combination, the temporal localization of audio-visual stimuli was better than for either sense alone.
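
To make the qualitative prediction concrete with invented numbers: if audition localizes events in time with \sigma_A = 30 ms and vision with \sigma_V = 60 ms, the reliability-weighting rule quoted for the heading-discrimination study above gives an auditory weight w_A = (1/30^2) / (1/30^2 + 1/60^2) = 0.8 and a bimodal threshold \sigma_{AV} = \sqrt{30^2 \cdot 60^2 / (30^2 + 60^2)} \approx 26.8 ms, better than either cue alone. The paper reports this pattern qualitatively, except that observed auditory weights were even larger than the threshold-based prediction.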

Journal ArticleDOI
TL;DR: It is demonstrated that the social context in which pain occurs modulates the brain response to others' pain, and that this modulation may reflect successful adaptation to potential danger present in a social interaction.

Journal ArticleDOI
TL;DR: The authors provide evidence that EC can occur through an implicit misattribution mechanism in which an evaluative response evoked by a valenced stimulus is incorrectly and implicitly attributed to another stimulus, forming or changing an attitude toward this other stimulus.
Abstract: Evaluative conditioning (EC) refers to the formation or change of an attitude toward an object, following that object's pairing with positively or negatively valenced stimuli. The authors provide evidence that EC can occur through an implicit misattribution mechanism in which an evaluative response evoked by a valenced stimulus is incorrectly and implicitly attributed to another stimulus, forming or changing an attitude toward this other stimulus. In 5 studies, the authors measured or manipulated variables related to the potential for the misattribution of an evaluation, or source confusability. Greater EC was observed when participants' eye gaze shifted frequently between a valenced and a neutral stimulus (Studies 1 & 2), when the 2 stimuli appeared in close spatial proximity (Study 3), and when the neutral stimulus was made more perceptually salient than was the valenced stimulus, due to the larger size of the neutral stimulus (Study 4). In other words, conditions conducive to source confusability increased EC. Study 5 provided evidence for multiple mechanisms of EC by comparing the effects of mildly evocative valenced stimuli (those evoking responses that might more easily be misattributed to another object) with more strongly evocative stimuli.

Journal ArticleDOI
TL;DR: The history of the habituation technique, its underlying theory, and procedural variations in its measurement are traced, and an appeal is made for a return to the study of habituation per se as a valid measure of infant learning.

Journal ArticleDOI
TL;DR: A framework, the unconscious binding hypothesis, is proposed to distinguish unconscious processing from conscious processing, according to which the unconscious mind not only encodes individual features but also temporally binds distributed features to give rise to cortical representations; unlike conscious binding, however, unconscious binding is fragile.

Journal ArticleDOI
TL;DR: A cognitive relevance framework is outlined to account for the control of attention and fixation in scenes; in search, participants were much more likely to look at the targets than at the salient regions.
Abstract: We investigated whether the deployment of attention in scenes is better explained by visual salience or by cognitive relevance. In two experiments, participants searched for target objects in scene photographs. The objects appeared in semantically appropriate locations but were not visually salient within their scenes. Search was fast and efficient, with participants much more likely to look to the targets than to the salient regions. This difference was apparent from the first fixation and held regardless of whether participants were familiar with the visual form of the search targets. In the majority of trials, salient regions were not fixated. The critical effects were observed for all 24 participants across the two experiments. We outline a cognitive relevance framework to account for the control of attention and fixation in scenes.