Journal ArticleDOI

Instruction-induced feature binding.

01 Jan 2007-Psychological Research-psychologische Forschung (Springer-Verlag)-Vol. 71, Iss: 1, pp 92-106
TL;DR: The present data show that instructed S–R mappings influence performance on the embedded B-task, even when they have never been practiced and are irrelevant with respect to the B-task.
Abstract: In order to test whether or not instructions specifying the stimulus–response (S–R) mappings for a new task suffice to create bindings between specified stimulus and response features, we developed a dual task paradigm of the ABBA type in which participants saw new S–R instructions for the A-task in the beginning of each trial. Immediately after the A-task instructions, participants had to perform a logically independent B-task. The imperative stimulus for the A-task was presented after the B-task had been executed. The present data show that the instructed S–R mappings influence performance on the embedded B-task, even when they (1) have never been practiced, and (2) are irrelevant with respect to the B-task. These results imply that instructions can induce bindings between S- and R-features without prior execution of the task at hand.
Citations
Journal ArticleDOI
TL;DR: It is argued that the major control operations may take place long before a stimulus is encountered (the prepared-reflex principle), that stimulus-response translation may be more automatic than commonly thought, that action selection and execution are more interwoven than most approaches allow, and that the acquisition of action-contingent events is likely to subserve both the selection and the evaluation of actions.
Abstract: The theory of event coding (TEC) is a general framework explaining how perceived and produced events (stimuli and responses) are cognitively represented and how their representations interact to generate perception and action. This article discusses the implications of TEC for understanding the control of voluntary action and makes an attempt to apply, specify, and concretize the basic theoretical ideas in the light of the available research on action control. In particular, it is argued that the major control operations may take place long before a stimulus is encountered (the prepared-reflex principle), that stimulus-response translation may be more automatic than commonly thought, that action selection and execution are more interwoven than most approaches allow, and that the acquisition of action-contingent events (action effects) is likely to subserve both the selection and the evaluation of actions.

502 citations

Journal ArticleDOI
TL;DR: S–R bindings are more flexible and pervasive than previously thought and enable rapid yet context-dependent behaviors that complicate interpretations of priming.

183 citations

Journal ArticleDOI
TL;DR: These findings support the theory of event coding, which claims that perceptual codes and action plans share a common representational medium, which presumably involves the human premotor cortex.
Abstract: Neurophysiological observations suggest that attending to a particular perceptual dimension, such as location or shape, engages dimension-related action, such as reaching and prehension networks. Here we reversed the perspective and hypothesized that activating action systems may prime the processing of stimuli defined on perceptual dimensions related to these actions. Subjects prepared for a reaching or grasping action and, before carrying it out, were presented with location- or size-defined stimulus events. As predicted, performance on the stimulus event varied with action preparation: planning a reaching action facilitated detecting deviants in location sequences whereas planning a grasping action facilitated detecting deviants in size sequences. These findings support the theory of event coding, which claims that perceptual codes and action plans share a common representational medium, which presumably involves the human premotor cortex.

182 citations


Cites background from "Instruction-induced feature binding..."

  • ...As pointed out in Introduction, we are not the first to demonstrate that action planning can affect perceptual processes (see Craighero et al., 1999; Müsseler & Hommel, 1997; Wenke, Gaschler, & Nattkemper, 2005; Wohlschläger, 2000; among others)....

    [...]

Journal ArticleDOI
TL;DR: By manipulating the salient nature of reference-providing events in an auditory go-nogo Simon task, the present study demonstrates that spatial reference events do not necessarily require social or movement features to induce action coding and suggests that the cSE does not necessarily imply the co-representation of tasks.
Abstract: The joint go-nogo Simon effect (social Simon effect, or joint cSE) has been considered an index of automatic action/task co-representation. Recent findings, however, challenge extreme versions of this social co-representation account by suggesting that the (joint) cSE results from any sufficiently salient event that provides a reference for spatially coding one's own action. By manipulating the salient nature of reference-providing events in an auditory go-nogo Simon task, the present study indeed demonstrates that spatial reference events do not necessarily require social (Experiment 1) or movement features (Experiment 2) to induce action coding. As long as events attract attention in a bottom-up fashion (e.g., auditory rhythmic features; Experiments 3 and 4), events in an auditory go-nogo Simon task seem to be co-represented irrespective of the agent or object producing these events. This suggests that the cSE does not necessarily imply the co-representation of tasks. The theory of event coding provides a comprehensive account of the available evidence on the cSE: the presence of another salient event requires distinguishing the cognitive representation of one's own action from the representation of other events, which can be achieved by referential coding, the spatial coding of one's action relative to the other events.

165 citations

Journal ArticleDOI
TL;DR: The findings suggest that task-relevant stimulus and response features are spontaneously integrated into independent, local event files, each linking one stimulus to one response feature, thereby increasing the likelihood to repeat a response if one or more stimulus features are repeated.
Abstract: Five experiments investigated the spontaneous integration of stimulus and response features. Participants performed simple, prepared responses (R1) to the mere presence of Go signals (S1) before carrying out another, freely chosen response (R2) to another stimulus (S2), the main question being whether the likelihood of repeating a response depends on whether or not the stimulus, or some of its features, are repeated. Indeed, participants were more likely to repeat the previous response if stimulus form or color was repeated than if it was alternated. The same was true for stimulus location, but only if location was made task-relevant, whether by defining the response set in terms of location, by requiring the report of S2 location, or by having S1 be selected against a distractor. These findings suggest that task-relevant stimulus and response features are spontaneously integrated into independent, local event files, each linking one stimulus to one response feature. Upon reactivation of one member of the binary link, activation spreads to the other, thereby increasing the likelihood of repeating a response if one or more stimulus features are repeated. These findings support the idea that both perceptual events and action plans are cognitively represented in terms of their features, and that feature-integration processes cross borders between perception and action.

123 citations


Cites methods from "Instruction-induced feature binding..."

  • ...This is implemented by attentional control settings providing top-down support for codes of feature domains that are considered task-relevant (Folk, Remington & Johnson, 1992; Pratt & Hommel, 2003; Wenke, Gaschler, & Nattkemper, 2005)....

    [...]

References
Journal ArticleDOI
TL;DR: A perceptual theory of knowledge can implement a fully functional conceptual system while avoiding problems associated with amodal symbol systems and implications for cognition, neuroscience, evolution, development, and artificial intelligence are explored.
Abstract: Prior to the twentieth century, theories of knowledge were inherently perceptual. Since then, developments in logic, statistics, and programming languages have inspired amodal theories that rest on principles fundamentally different from those underlying perception. In addition, perceptual approaches have become widely viewed as untenable because they are assumed to implement recording systems, not conceptual systems. A perceptual theory of knowledge is developed here in the context of current cognitive science and neuroscience. During perceptual experience, association areas in the brain capture bottom-up patterns of activation in sensory-motor areas. Later, in a top-down manner, association areas partially reactivate sensory-motor areas to implement perceptual symbols. The storage and reactivation of perceptual symbols operates at the level of perceptual components - not at the level of holistic perceptual experiences. Through the use of selective attention, schematic representations of perceptual components are extracted from experience and stored in memory (e.g., individual memories of green, purr, hot). As memories of the same component become organized around a common frame, they implement a simulator that produces limitless simulations of the component (e.g., simulations of purr). Not only do such simulators develop for aspects of sensory experience, they also develop for aspects of proprioception (e.g., lift, run) and introspection (e.g., compare, memory, happy, hungry). Once established, these simulators implement a basic conceptual system that represents types, supports categorization, and produces categorical inferences. These simulators further support productivity, propositions, and abstract concepts, thereby implementing a fully functional conceptual system. Productivity results from integrating simulators combinatorially and recursively to produce complex simulations. Propositions result from binding simulators to perceived individuals to represent type-token relations. Abstract concepts are grounded in complex simulations of combined physical and introspective events. Thus, a perceptual theory of knowledge can implement a fully functional conceptual system while avoiding problems associated with amodal symbol systems. Implications for cognition, neuroscience, evolution, development, and artificial intelligence are explored.

5,259 citations

Journal ArticleDOI
TL;DR: In this article, the authors propose to delegate the control of goal-directed responses to anticipated situational cues, which elicit these responses automatically when actually encountered, and demonstrate that implementation intentions further the attainment of goals.
Abstract: When people encounter problems in translating their goals into action (e.g., failing to get started, becoming distracted, or falling into bad habits), they may strategically call on automatic processes in an attempt to secure goal attainment. This can be achieved by plans in the form of implementation intentions that link anticipated critical situations to goal-directed responses ("Whenever situation x arises, I will initiate the goal-directed response y!"). Implementation intentions delegate the control of goal-directed responses to anticipated situational cues, which (when actually encountered) elicit these responses automatically. A program of research demonstrates that implementation intentions further the attainment of goals, and it reveals the underlying processes.

4,631 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a theory in which automatization is construed as the acquisition of a domainspeciSc knowledge base, formed of separate representations, instances, of each exposure to the task.
Abstract: This article presents a theory in which automatization is construed as the acquisition of a domain-specific knowledge base, formed of separate representations, instances, of each exposure to the task. Processing is considered automatic if it relies on retrieval of stored instances, which will occur only after practice in a consistent environment. Practice is important because it increases the amount retrieved and the speed of retrieval; consistency is important because it ensures that the retrieved instances will be useful. The theory accounts quantitatively for the power-function speed-up and predicts a power-function reduction in the standard deviation that is constrained to have the same exponent as the power function for the speed-up. The theory accounts for qualitative properties as well, explaining how some may disappear and others appear with practice. More generally, it provides an alternative to the modal view of automaticity, arguing that novice performance is limited by a lack of knowledge rather than a scarcity of resources. The focus on learning avoids many problems with the modal view that stem from its focus on resource limitations.

3,222 citations

Journal ArticleDOI
TL;DR: A new framework for a more adequate theoretical treatment of perception and action planning is proposed, in which perceptual contents and action plans are coded in a common representational medium by feature codes with distal reference, showing that the main assumptions are well supported by the data.
Abstract: Traditional approaches to human information processing tend to deal with perception and action planning in isolation, so that an adequate account of the perception-action interface is still missing. On the perceptual side, the dominant cognitive view largely underestimates, and thus fails to account for, the impact of action-related processes on both the processing of perceptual information and on perceptual learning. On the action side, most approaches conceive of action planning as a mere continuation of stimulus processing, thus failing to account for the goal-directedness of even the simplest reaction in an experimental task. We propose a new framework for a more adequate theoretical treatment of perception and action planning, in which perceptual contents and action plans are coded in a common representational medium by feature codes with distal reference. Perceived events (perceptions) and to-be-produced events (actions) are equally represented by integrated, task-tuned networks of feature codes - cognitive structures we call event codes. We give an overview of evidence from a wide variety of empirical domains, such as spatial stimulus-response compatibility, sensorimotor synchronization, and ideomotor action, showing that our main assumptions are well supported by the data.

2,736 citations

Journal ArticleDOI
TL;DR: The concept of an object file as a temporary episodic representation, within which successive states of an object are linked and integrated, is developed, along with the concept of a reviewing process, which is triggered by the appearance of the target and retrieves just one of the previewed items.

1,855 citations