
Journal ArticleDOI

More evidence for sensorimotor adaptation in color perception.

01 Sep 2005-Journal of Vision (Association for Research in Vision and Ophthalmology)-Vol. 6, Iss: 2, pp 145-153

TL;DR: It is shown that sensorimotor adaptation can be obtained for color, as a consequence of the introduction of a new sensorimotor contingency between eye movements and color changes.

Abstract: Sensorimotor adaptation can be defined as a perceptual adaptation whose effects depend on the occurrence and nature of the performed motor actions. Examples of sensorimotor adaptation can be found in the literature on prisms concerning several space-related attributes such as orientation, curvature, and size. In this article, we show that sensorimotor adaptation can be obtained for color, as a consequence of the introduction of a new sensorimotor contingency between eye movements and color changes. In an adaptation phase, trials involved the successive presentation of two patches, first on the left and then on the right, or the reverse. Because the left patch was always red and the right patch always green, a correlation was introduced between left–right (respectively right–left) eye saccades and red–green (respectively green–red) color changes. After 40 min of adaptation, when two yellow patches are successively presented on each side of the screen, the chromaticities of the left and right patches need to be shifted toward the chromaticities of the red and green adaptation patches, respectively, for subjective equality to be obtained. When the eyes are kept fixed during the adaptation stage, creating a strong nonhomogeneity in retinal adaptation, no effect is found. This ensures that adaptation at a given retinal location, if present, cannot explain the present effect. A third experiment shows that the effect depends on the eyes' saccadic movements and not on the position on the screen, that is, on the position of the eyes in the orbits. These results argue for the involvement of sensorimotor mechanisms in color perception. The relation of these experimental findings to a sensorimotor theory of color perception is discussed.
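The contingency set up in the adaptation phase can be sketched as a toy trial generator. This is only an illustration of the design's logic: the side labels, color names, and 50/50 ordering below are placeholders, not the study's actual chromaticities, timings, or trial schedule.

```python
import random

# Toy sketch of the adaptation contingency: because the left patch is
# always red and the right patch always green, saccade direction fully
# predicts the color change across the two successive patches.
# All values are illustrative placeholders, not the study's parameters.

def adaptation_trial():
    """One trial: two patches shown in succession, left->right or right->left."""
    if random.random() < 0.5:
        order = ("left", "right")   # a left-to-right saccade ...
        colors = ("red", "green")   # ... always pairs with a red->green change
    else:
        order = ("right", "left")   # a right-to-left saccade ...
        colors = ("green", "red")   # ... always pairs with a green->red change
    return list(zip(order, colors))

for _ in range(3):
    print(adaptation_trial())
```

The key point the experiments test is whether repeated exposure to this perfect saccade-to-color correlation shifts the color seen at each location after a saccade, rather than adapting a fixed retinal region.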

Topics: Retinal adaptation (68%), Color vision (54%), Adaptation (eye) (51%)



Citations

Journal Article
Abstract: Many current neurophysiological, psychophysical, and psychological approaches to vision rest on the idea that when we see, the brain produces an internal representation of the world. The activation of this internal representation is assumed to give rise to the experience of seeing. The problem with this kind of approach is that it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness. An alternative proposal is made here. We propose that seeing is a way of acting. It is a particular way of exploring the environment. Activity in internal representations does not generate the experience of seeing. The outside world serves as its own, external, representation. The experience of seeing occurs when the organism masters what we call the governing laws of sensorimotor contingency. The advantage of this approach is that it provides a natural and principled way of accounting for visual consciousness, and for the differences in the perceived quality of sensory experience in the different sensory modalities. Several lines of empirical evidence are brought forward in support of the theory, in particular: evidence from experiments in sensorimotor adaptation, visual "filling in," visual stability despite eye movements, change blindness, sensory substitution, and color perception.

2,271 citations


Journal ArticleDOI
TL;DR: This review systematically investigated the role of temporal prediction, temporal control, identity prediction, and motor prediction in previously published reports of sensory attenuation and intentional binding, and assessed the degree to which existing data provide evidence for the role of forward action models in these phenomena.
Abstract: Sensory processing of action effects has been shown to differ from that of externally triggered stimuli, with respect both to the perceived timing of their occurrence (intentional binding) and to their intensity (sensory attenuation). These phenomena are normally attributed to forward action models, such that when action prediction is consistent with changes in our environment, our experience of these effects is altered. Although much progress has been made in recent years in understanding sensory attenuation and intentional binding, a number of important questions regarding the precise nature of the predictive mechanisms involved remain unanswered. Moreover, these mechanisms are often not discussed in empirical papers, and a comprehensive review of these issues is yet to appear. This review attempts to fill this void. We systematically investigated the role of temporal prediction, temporal control, identity prediction, and motor prediction in previously published reports of sensory attenuation and intentional binding. By isolating the individual processes that have previously been contrasted and incorporating these experiments with research in the related fields of temporal attention and stimulus expectation, we assessed the degree to which existing data provide evidence for the role of forward action models in these phenomena. We further propose a number of avenues for future research, which may help to better determine the role of motor prediction in the processing of voluntary action effects, as well as to improve understanding of how these phenomena might fit within a general predictive processing framework. Furthermore, our analysis has important implications for understanding disorders of agency in schizophrenia.

286 citations


Journal ArticleDOI
TL;DR: By assuming that action preparation includes activation of the predicted sensory consequences of the action, this work provides a mechanism to understand sensory attenuation and intentional binding and proposes a possible neural basis for the processing of predicted action effects.
Abstract: Voluntary actions are thought to be selected with respect to their intended goal. Converging data suggest that medial frontal cortex plays a crucial role in linking actions to their predicted effects. Recent neuroimaging data also suggest that during action selection, the brain pre-activates the representation of the predicted action effect. We review evidence of action effect prediction, both in terms of its neurophysiological basis and its functional consequences. By assuming that action preparation includes activation of the predicted sensory consequences of the action, we provide a mechanism to understand sensory attenuation and intentional binding. In this account, sensory attenuation results from more difficult discrimination between the observed action effect and the pre-activation of the predicted effect, as compared to when no (or an incorrect) prediction is present. Similarly, a predicted action effect should also reach the threshold of awareness faster (intentional binding) if its perceptual representation is pre-activated. By comparing this potential mechanism to mental imagery and repetition suppression, we propose a possible neural basis for the processing of predicted action effects.

167 citations


Journal ArticleDOI
TL;DR: The findings show that the human visual system can effectively use peripheral and foveal information about object features and that visual perception does not simply correspond to disconnected snapshots during each fixation.
Abstract: Due to the inhomogeneous visual representation across the visual field, humans use peripheral vision to select objects of interest and foveate them with saccadic eye movements for further scrutiny. Thus, there is usually peripheral information available before and foveal information available after a saccade. In this study we investigated the integration of information across saccades. We measured reliabilities (i.e., the inverse of variance) separately in a presaccadic peripheral and a postsaccadic foveal orientation-discrimination task. From this, we predicted trans-saccadic performance and compared it to observed values. We show that the integration of incongruent peripheral and foveal information is biased according to their relative reliabilities and that the reliability of the trans-saccadic information equals the sum of the peripheral and foveal reliabilities. Both results are consistent with, and indistinguishable from, statistically optimal integration according to the maximum-likelihood principle. Additionally, we tracked the gathering of information around the time of the saccade with high temporal precision by using a reverse-correlation method. Information gathering starts to decline between 100 and 50 ms before saccade onset and recovers immediately after saccade offset. Altogether, these findings show that the human visual system can effectively use peripheral and foveal information about object features and that visual perception does not simply correspond to disconnected snapshots during each fixation.
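The integration rule this abstract describes, with weights proportional to relative reliabilities and a combined reliability equal to their sum, is the standard maximum-likelihood cue-combination scheme. A minimal sketch with made-up orientation estimates and variances (the numbers are illustrative, not the study's data):

```python
# Maximum-likelihood integration of a presaccadic peripheral and a
# postsaccadic foveal orientation estimate. Reliability = 1 / variance.
# Estimates (in degrees) and variances below are illustrative only.

def ml_integrate(est_periph, var_periph, est_fovea, var_fovea):
    r_p = 1.0 / var_periph           # peripheral reliability
    r_f = 1.0 / var_fovea            # foveal reliability
    w_p = r_p / (r_p + r_f)          # weight ~ relative reliability
    w_f = r_f / (r_p + r_f)
    combined_est = w_p * est_periph + w_f * est_fovea
    combined_var = 1.0 / (r_p + r_f)  # reliabilities add under ML integration
    return combined_est, combined_var

# The foveal input is usually more reliable, so it dominates the percept:
est, var = ml_integrate(est_periph=10.0, var_periph=4.0,
                        est_fovea=6.0, var_fovea=1.0)
print(est, var)   # -> approximately 6.8 deg, combined variance 0.8
```

Note that the combined variance (0.8) is smaller than either single-cue variance, which is the signature of optimal integration the study tests for.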

96 citations


Cites background from "More evidence for sensorimotor adap..."

  • ...Several studies have shown that trans-saccadic changes in object features (Cox, Meier, Oertelt, & DiCarlo, 2005; Li & DiCarlo, 2008) and associations between saccade direction and postsaccadic foveal displays (Bompas & O’Regan, 2006) can be learned....



Journal ArticleDOI
TL;DR: The results show that both selective attention and saccadic eye movements influenced the magnitude of the tilt aftereffect, but in different ways, suggesting that trans-saccadic perception is not limited to a single object but instead depends on the allocation of selective attention.
Abstract: When the same object is attended both before and after a saccadic eye movement, its visual features may be remapped to the new retinal position of the object. To further investigate the role of selective attention in trans-saccadic perception, the magnitude of the cross-saccadic tilt aftereffect was measured for both attended and unattended objects. The results show that both selective attention and saccadic eye movements influenced the magnitude of the tilt aftereffect, but in different ways. Dividing attention among multiple objects led to a general decrease in the tilt aftereffect, independent of whether or not a saccade occurred. Making a saccade also resulted in a consistent reduction of the aftereffect, but this was due to incomplete transfer of form adaptation to the new retinal position. The influences of selective attention and saccadic remapping on the tilt aftereffect were independent and additive. These findings suggest that trans-saccadic perception is not limited to a single object but instead depends on the allocation of selective attention. Overall, the results are consistent with the hypothesis that the role of attention is to select salient objects, with trans-saccadic perception mechanisms acting to maintain information about those salient objects across eye movements.

78 citations


References

Journal ArticleDOI
TL;DR: The Psychophysics Toolbox is a software package that supports visual psychophysics and its routines provide an interface between a high-level interpreted language and the video display hardware.
Abstract: The Psychophysics Toolbox is a software package that supports visual psychophysics. Its routines provide an interface between a high-level interpreted language (MATLAB on the Macintosh) and the video display hardware. A set of example programs is included with the Toolbox distribution.

15,313 citations


"More evidence for sensorimotor adap..." refers methods in this paper

  • ...The stimuli were generated using Matlab with the psychophysics toolbox extension (Brainard, 1997; Pelli, 1997) on a PC....



Journal ArticleDOI
TL;DR: The VideoToolbox is a free collection of two hundred C subroutines for Macintosh computers that calibrates and controls the computer-display interface to create accurately specified visual stimuli.
Abstract: The VideoToolbox is a free collection of two hundred C subroutines for Macintosh computers that calibrates and controls the computer-display interface to create accurately specified visual stimuli. High-level platform-independent languages like MATLAB are best for creating the numbers that describe the desired images. Low-level, computer-specific VideoToolbox routines control the hardware that transforms those numbers into a movie. Transcending the particular computer and language, we discuss the nature of the computer-display interface, and how to calibrate and control it.

9,169 citations


"More evidence for sensorimotor adap..." refers methods in this paper

  • ...The stimuli were generated using Matlab with the psychophysics toolbox extension (Brainard, 1997; Pelli, 1997) on a PC....



Book
01 Jan 1966

6,268 citations


Book
01 Jan 1996
TL;DR: Professor Ripley brings together two crucial ideas in pattern recognition; statistical methods and machine learning via neural networks in this self-contained account.
Abstract: From the Publisher: Pattern recognition has long been studied in relation to many different (and mainly unrelated) applications, such as remote sensing, computer vision, space research, and medical imaging. In this book Professor Ripley brings together two crucial ideas in pattern recognition: statistical methods and machine learning via neural networks. Unifying principles are brought to the fore, and the author gives an overview of the state of the subject. Many examples are included to illustrate real problems in pattern recognition and how to overcome them. This is a self-contained account, ideal both as an introduction for non-specialist readers and as a handbook for the more expert reader.

5,508 citations

