Journal ArticleDOI

More evidence for sensorimotor adaptation in color perception.

01 Sep 2005-Journal of Vision (Association for Research in Vision and Ophthalmology)-Vol. 6, Iss: 2, pp 145-153
TL;DR: It is shown that sensorimotor adaptation can be obtained for color, as a consequence of the introduction of a new sensorimotor contingency between eye movements and color changes.
Abstract: Sensorimotor adaptation can be defined as a perceptual adaptation whose effects depend on the occurrence and nature of the performed motor actions. Examples of sensorimotor adaptation can be found in the literature on prisms concerning several space-related attributes like orientation, curvature, and size. In this article, we show that sensorimotor adaptation can be obtained for color, as a consequence of the introduction of a new sensorimotor contingency between eye movements and color changes. In an adaptation phase, trials involved the successive presentation of two patches, first on the left and then on the right, or in the opposite order. Because the left patch was always red and the right patch always green, a correlation was introduced between left–right (respectively right–left) eye saccades and red–green (respectively green–red) color changes. After 40 min of adaptation, when two yellow patches are successively presented on each side of the screen, the chromaticities of the left and right patches must be shifted toward the chromaticities of the red and green adaptation patches, respectively, for subjective equality to be obtained. When the eyes are kept fixed during the adaptation stage, creating a strong nonhomogeneity in retinal adaptation, no effect is found. This ensures that, if present, adaptation at a given retinal location cannot explain the present effect. A third experiment shows a dependency of the effect on the eyes' saccadic movements and not on the position on the screen, that is, on the position of the eyes in the orbits. These results argue for the involvement of sensorimotor mechanisms in color perception. The relation of these experimental findings to a sensorimotor theory of color perception is discussed.
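The saccade-contingent aftereffect and the nulling measurement described in the abstract can be sketched in a toy model. This is illustrative only: the gain value, the linear shift, and the function names are assumptions, not the authors' analysis.

```python
# Toy model of the saccade-contingent color aftereffect described above.
# Chromaticity is a single red-green axis: negative = reddish, positive
# = greenish, 0.0 = neutral yellow. The gain and linearity are assumed.

ADAPT_GAIN = 0.1  # hypothetical strength of the contingent aftereffect

def perceived(chromaticity, saccade):
    """Perceived chromaticity of a patch reached by a given saccade.

    During adaptation, left-to-right saccades always landed on the green
    patch, so after adaptation a neutral patch reached that way looks
    shifted AWAY from green (toward red), and vice versa."""
    shift = {"left_to_right": -ADAPT_GAIN, "right_to_left": +ADAPT_GAIN}[saccade]
    return chromaticity + shift

def null_chromaticity(saccade):
    """Physical chromaticity that appears neutral after adaptation,
    i.e. the value needed for subjective equality with yellow."""
    shift = {"left_to_right": -ADAPT_GAIN, "right_to_left": +ADAPT_GAIN}[saccade]
    return -shift
```

In this toy model, a patch reached by a left-to-right saccade (the side adapted with green) must be shifted toward green to appear yellow, which matches the direction of the shift reported in the abstract.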


Citations
Journal ArticleDOI
TL;DR: The results showed that an action-congruent color was more effective as a valid cue in the search task (increased benefit) but less effective as an invalid cue (reduced cost); the findings are argued to favor the preactivation account.
Abstract: The effect of a salient visual feature in orienting spatial attention was examined as a function of the learned association between the visual feature and the observer’s action. During an initial acquisition phase, participants learned that two keypress actions consistently produced red and green visual cues. Next, in a test phase, participants’ actions continued to result in singletons, but their color could be either congruent or incongruent with the learned action–color associations. Furthermore, the color singletons now functioned as valid or invalid spatial cues in a visual search, in which participants looked for a tilted line (“/” or “\”) among distractors (“X”s). The results showed that an action-congruent color was more effective as a valid cue in the search task (increased benefit), but less effective as an invalid cue (reduced cost). We discuss our findings in terms of both an inhibition account and a preactivation account of action-driven sensory bias, and argue in favor of the preactivation account.

13 citations

Journal ArticleDOI
TL;DR: The authors examined the role of action selection, prior to action execution, in the guidance of visual attention and found that learned action–outcome associations contribute to the attentional bias; furthermore, this guidance was short-lived and disappeared with larger delays between action selection and execution.
Abstract: We plan our actions in order to fulfil certain sensory goals. It is generally believed, therefore, that anticipation of sensory action-outcomes plays an essential role in action selection. In this study, we examined the role of action selection, prior to action execution, in the guidance of visual attention. The experiments began with an initial acquisition phase, in which participants learned to associate two actions (left/right keypress) with two visual outcomes (red/green colour). Next, participants performed in a test phase, in which they continued to select and perform the same actions while their attentional bias was measured for items that resembled the anticipated action-outcome. The findings indicate that learned action-outcome association contributes to the guidance of attention. This guidance, furthermore, was short-lived and disappeared with larger delays between action selection and execution. The findings help situate processes of visual attention in a context that includes action selection.

12 citations


Additional excerpts

  • ...…findings of reduced sensitivity and weaker neural responses to self-caused perceptual events (e. g., Bäß, Widmann, Roye, Schröger, & Jacobsen, 2009; Blakemore, Wolpert, & Frith, 1998; Bompas & O’Regan, 2006a, 2006b; Cardoso-Leite et al., 2010; Kimura & Takeda, 2014; Roussel et al., 2013, 2014)....

    [...]

Journal ArticleDOI
TL;DR: The fact that agency influenced attention when the controlled object contained the target in 100%, 50%, and 25% of trials, and occurred even when participants needed to monitor the center of the display in order to know which arrow key to press, suggests that its influence does not depend on task relevance or volitional decision-making.
Abstract: While the factors that contribute to individuals feeling a sense agency over a stimulus have been extensively studied, the cognitive effects of a sense of agency over a stimulus are little known. Here, we conducted three experiments examining whether attentional selection is biased towards controllable stimuli. In all three experiments, participants moved four circle stimuli, one of which was under their control. A search target then appeared on one of the stimuli. In Experiment 1, the target was always on the controlled stimulus, but we manipulated the degree of control the participant had. In Experiment 2, the controlled stimulus was the target on 50% of the trials. In Experiment 3, we used a central arrow cue to tell participants which arrow key to press (rather than using a free choice task) and made the controlled stimulus the target on 25% of the trials, making it nonpredictive of the target’s location. Across the three experiments we found that visual selection was biased towards controllable stimuli. This attentional bias was larger when participants had full, rather than partial, control over the stimulus, indicating that sense of agency leads one to prioritize objects under their control. The fact that agency influenced attention when the controlled object contained the target in 100%, 50%, and 25% of trials, and occurred even when participants needed to monitor the center of the display in order to know which arrow key to press, suggests that its influence does not depend on task relevance or volitional decision-making.

9 citations

01 Jan 2012
TL;DR: In this article, the authors tried to link the empirically grounded theory of sensory-motor contingency and mirror system based embodied simulation/emulation to newly discovered cases of swimming style-color synesthesia.
Abstract: Synesthesia is traditionally regarded as a phenomenon in which an additional non-standard phenomenal experience occurs consistently in response to ordinary stimulation applied to the same or another modality. Recent studies suggest an important role of semantic representations in the induction of synesthesia. In the present proposal we try to link the empirically grounded theory of sensory-motor contingency and mirror system based embodied simulation/emulation to newly discovered cases of swimming style-color synesthesia. In the latter color experiences are evoked only by showing the synesthetes a picture of a swimming person or asking them to think about a given swimming style. Neural mechanisms of mirror systems seem to be involved here. It has been shown that for mirror-sensory synesthesia, such as mirror-touch or mirror-pain synesthesia (when visually presented tactile or noxious stimulation of others results in the projection of the tactile or pain experience onto oneself), concurrent experiences are caused by overactivity in the mirror neuron system responding to the specific observation. The comparison of different forms of synesthesia has the potential of challenging conventional thinking on this phenomenon and providing a more general, sensory-motor account of synesthesia encompassing cases driven by semantic or emulational rather than pure sensory or motor representations. Such an interpretation could include top-down associations, questioning the explanation in terms of hard-wired structural connectivity. In the paper the hypothesis is developed that the wide-ranging phenomenon of synesthesia might result from a process of hyperbinding between “too many” semantic attribute domains. This hypothesis is supplemented by some suggestions for an underlying neural mechanism.

9 citations

13 Sep 2012
TL;DR: Inspired by theories rooted in the research field of embodied cognition, the authors designed artificial neural architectures for learning sensorimotor laws, united by the idea that actively acquired sensorimotor knowledge enhances perception and results in goal-directed behavior.
Abstract: The active nature of perception and the intimate relation of action and cognition has been emphasized in philosophy and cognitive science for a long time. However, most of the current approaches do not consider the fundamental role of action for perception. Inspired by theories rooted in the research field of embodied cognition we have designed artificial neural architectures for the learning of sensorimotor laws. All our models have in common that the agent actually needs to act to perceive. This core principle is exploited for the design of a series of computational studies, including simulations and real-world robot experiments. In a first experiment, a virtual robot learns to navigate towards a target region. For this purpose, it learns sensorimotor laws and visual features simultaneously, using the world as an outside memory. The control laws are trained using a two-layer network consisting of a feature (sensory) layer that feeds into an action (reinforcement learning) layer. The prediction error modulates the learning of both layers. In a second experiment, we introduce a novel bio-inspired neural architecture that combines reinforcement learning and Sigma-Pi neurons. In a simulation we verify that a virtual agent successfully learns to reach for an object while discovering invariant hand-object relations simultaneously. Again, the prediction error of the action layer is used to modulate all the weights in the network. In a third experiment we extend a recurrent architecture with an adaptive learning regime and use this algorithm for an object categorization task with a real humanoid robot. Based on self-organized dynamic multi-modal sensory perceptions, the robot is able to ‘feel’ different objects and discriminate them with a very low error rate. All these experiments are inspired by the same sensorimotor design principles. 
Further, they are united by the idea that actively acquired sensorimotor knowledge enhances perception and results in goal-directed behavior.

8 citations


Cites background from "More evidence for sensorimotor adap..."

  • ...Thus, it is not internally generated neural activity which is responsible for perceptual presence, but rather the mastery and the access to our sensorimotor skills that enables us to see (O’Regan and Noë, 2001; Bompas and O’Regan, 2006), smell (Cooke and Myin, in press), hear (Aytekin et al....

    [...]

References
Journal ArticleDOI
TL;DR: The Psychophysics Toolbox is a software package that supports visual psychophysics and its routines provide an interface between a high-level interpreted language and the video display hardware.
Abstract: The Psychophysics Toolbox is a software package that supports visual psychophysics. Its routines provide an interface between a high-level interpreted language (MATLAB on the Macintosh) and the video display hardware. A set of example programs is included with the Toolbox distribution.

16,594 citations


"More evidence for sensorimotor adap..." refers methods in this paper

  • ...The stimuli were generated using Matlab with the psychophysics toolbox extension (Brainard, 1997; Pelli, 1997) on a PC....

    [...]

Journal ArticleDOI
TL;DR: The VideoToolbox is a free collection of two hundred C subroutines for Macintosh computers that calibrates and controls the computer-display interface to create accurately specified visual stimuli.
Abstract: The VideoToolbox is a free collection of two hundred C subroutines for Macintosh computers that calibrates and controls the computer-display interface to create accurately specified visual stimuli. High-level platform-independent languages like MATLAB are best for creating the numbers that describe the desired images. Low-level, computer-specific VideoToolbox routines control the hardware that transforms those numbers into a movie. Transcending the particular computer and language, we discuss the nature of the computer-display interface, and how to calibrate and control it.

10,084 citations


"More evidence for sensorimotor adap..." refers methods in this paper

  • ...The stimuli were generated using Matlab with the psychophysics toolbox extension (Brainard, 1997; Pelli, 1997) on a PC....

    [...]

Book
01 Jan 1966

6,307 citations

Book
01 Jan 1996
TL;DR: Professor Ripley brings together two crucial ideas in pattern recognition; statistical methods and machine learning via neural networks in this self-contained account.
Abstract: From the Publisher: Pattern recognition has long been studied in relation to many different (and mainly unrelated) applications, such as remote sensing, computer vision, space research, and medical imaging. In this book Professor Ripley brings together two crucial ideas in pattern recognition: statistical methods and machine learning via neural networks. Unifying principles are brought to the fore, and the author gives an overview of the state of the subject. Many examples are included to illustrate real problems in pattern recognition and how to overcome them. This is a self-contained account, ideal both as an introduction for non-specialist readers and as a handbook for the more expert reader.

5,632 citations

Journal Article
TL;DR: In this article, the authors propose that the brain produces an internal representation of the world, and the activation of this internal representation is assumed to give rise to the experience of seeing, but it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness.
Abstract: Many current neurophysiological, psychophysical, and psychological approaches to vision rest on the idea that when we see, the brain produces an internal representation of the world. The activation of this internal representation is assumed to give rise to the experience of seeing. The problem with this kind of approach is that it leaves unexplained how the existence of such a detailed internal representation might produce visual consciousness. An alternative proposal is made here. We propose that seeing is a way of acting. It is a particular way of exploring the environment. Activity in internal representations does not generate the experience of seeing. The outside world serves as its own, external, representation. The experience of seeing occurs when the organism masters what we call the governing laws of sensorimotor contingency. The advantage of this approach is that it provides a natural and principled way of accounting for visual consciousness, and for the differences in the perceived quality of sensory experience in the different sensory modalities. Several lines of empirical evidence are brought forward in support of the theory, in particular: evidence from experiments in sensorimotor adaptation, visual "filling in," visual stability despite eye movements, change blindness, sensory substitution, and color perception.

2,271 citations