
Showing papers in "Journal of Experimental Psychology: Human Perception and Performance in 2013"


Journal ArticleDOI
TL;DR: There is no ERP evidence for the salience-driven selection hypothesis for salient distractors because the ERPs were noisy and were averaged across all trials, thereby making it difficult to know whether attention was deployed directly to the target on some trials.
Abstract: People must often search cluttered and continually changing visual environments for objects of interest (targets). The search for the target is particularly challenging when another highly salient object is present in the search array. Visual search for a target in the presence of a salient but irrelevant item (the distractor) has been studied in the laboratory using the additional-singleton paradigm (Lamy & Yashar, 2008; Pinto, Olivers, & Theeuwes, 2005; Theeuwes, 1991, 1992). In this paradigm, observers search covertly for a target singleton while trying to ignore an irrelevant singleton that is also present in the display on a subset of trials. The target and distractor “pop out” from the rest of the items because they each possess a unique feature. In most experiments of this sort, the distractor is chosen to pop out even more than the target, so that the distractor’s salience and the observer’s intention are in opposition. Early studies with the additional-singleton paradigm (Theeuwes, 1991, 1992) gave rise to the salience-driven selection hypothesis, in which the initial visual selection is said to be determined entirely by bottom-up activations based on stimulus salience (for a recent review, see Theeuwes, 2010). According to this hypothesis, vo

212 citations


Journal ArticleDOI
TL;DR: It is demonstrated that during maintenance, even when items are no longer visible, attention resources can be selectively redeployed to protect the accuracy with which a cued item can be recalled over time, but with a corresponding cost in recall for uncued items.
Abstract: Recent studies have demonstrated that memory performance can be enhanced by a cue which indicates the item most likely to be subsequently probed, even when that cue is delivered seconds after a stimulus array is extinguished. Although such retro-cuing has attracted considerable interest, the mechanisms underlying it remain unclear. Here, we tested the hypothesis that retro-cues might protect an item from degradation over time. We employed two techniques that previously have not been deployed in retro-cuing tasks. First, we used a sensitive, continuous scale for reporting the orientation of a memorized item, rather than binary measures (change or no change) typically used in previous studies. Second, to investigate the stability of memory across time, we also systematically varied the duration between the retro-cue and report. Although accuracy of reporting uncued objects rapidly declined over short intervals, retro-cued items were significantly more stable, showing negligible decline in accuracy across time and protection from forgetting. Retro-cuing an object’s color was just as advantageous as spatial retro-cues. These findings demonstrate that during maintenance, even when items are no longer visible, attention resources can be selectively redeployed to protect the accuracy with which a cued item can be recalled over time, but with a corresponding cost in recall for uncued items.

196 citations


Journal ArticleDOI
TL;DR: Evidence is provided for an enduring effect of reward learning on attentional priority: stimuli previously associated with reward in a training phase capture attention when presented as irrelevant distractors over half a year later, without the need for further reward learning.
Abstract: Stimuli that have previously been associated with the delivery of reward involuntarily capture attention when presented as unrewarded and task-irrelevant distractors in a subsequent visual search task. It is unknown how long such effects of reward learning on attention persist. One possibility is that value-driven attentional biases are plastic and constantly evolve to reflect only recent reward history. According to such a mechanism of attentional control, only consistently reinforced patterns of attention allocation persist for extended periods of time. Another possibility is that reward learning creates enduring changes in attentional priority that can persist indefinitely without further learning. Here we provide evidence for an enduring effect of reward learning on attentional priority: stimuli previously associated with reward in a training phase capture attention when presented as irrelevant distractors over half a year later, without the need for further reward learning.

181 citations


Journal ArticleDOI
TL;DR: By manipulating the salient nature of reference-providing events in an auditory go-nogo Simon task, the present study demonstrates that spatial reference events do not necessarily require social or movement features to induce action coding and suggests that the cSE does not necessarily imply the co-representation of tasks.
Abstract: The joint go-nogo Simon effect (social Simon effect, or joint cSE) has been considered as an index of automatic action/task co-representation. Recent findings, however, challenge extreme versions of this social co-representation account by suggesting that the (joint) cSE results from any sufficiently salient event that provides a reference for spatially coding one's own action. By manipulating the salient nature of reference-providing events in an auditory go-nogo Simon task, the present study indeed demonstrates that spatial reference events do not necessarily require social (Experiment 1) or movement features (Experiment 2) to induce action coding. As long as events attract attention in a bottom-up fashion (e.g., auditory rhythmic features; Experiments 3 and 4), events in an auditory go-nogo Simon task seem to be co-represented irrespective of the agent or object producing these events. This suggests that the cSE does not necessarily imply the co-representation of tasks. The theory of event coding provides a comprehensive account of the available evidence on the cSE: the presence of another salient event requires distinguishing the cognitive representation of one's own action from the representation of other events, which can be achieved by referential coding, that is, the spatial coding of one's action relative to the other events.

165 citations


Journal ArticleDOI
TL;DR: Response variability during the 5 trials preceding probe-caught reports of mind wandering (tuned-out and zoned-out mind wandering) is significantly greater than during the 5 trials preceding reports of on-task performance, suggesting that, at least in some tasks, behavioral variability is an online marker of mind wandering.
Abstract: Mind wandering is a pervasive feature of human cognition often associated with the withdrawal of task-related executive control processes. Here, we explore the possibility that, in tasks requiring executive control to sustain consistent responding, moments of mind wandering could be associated with moments of increased behavioral variability. To test this possibility, we developed and administered a novel task (the metronome response task) in which participants were instructed to respond synchronously (via button presses) with the continuous rhythmic presentation of tones. We provide evidence (replicated across 2 independent samples) that response variability during the 5 trials preceding probe-caught reports of mind wandering (tuned-out and zoned-out mind wandering) is significantly greater than during the 5 trials preceding reports of on-task performance. These results suggest that, at least in some tasks, behavioral variability is an online marker of mind wandering.
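As a concrete illustration of the behavioral-variability measure described above, the following Python sketch computes the standard deviation of tap-tone asynchronies over the 5 trials preceding each thought probe; the function name, variable names, and the coding of asynchronies as press time minus tone onset are illustrative assumptions, not the authors' analysis code.

import numpy as np

def variability_before_probes(asynchronies, probe_trials, window=5):
    # asynchronies: 1-D array of (button-press time - tone onset) per trial
    # probe_trials: trial indices at which thought probes were presented
    # Returns the SD of asynchronies over the `window` trials preceding each probe.
    out = []
    for p in probe_trials:
        if p >= window:
            out.append(np.std(asynchronies[p - window:p]))
    return np.array(out)

# Hypothetical comparison: mean pre-probe variability for mind-wandering vs. on-task reports
# variability_before_probes(asynch, mw_probes).mean() vs. variability_before_probes(asynch, on_task_probes).mean()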

163 citations


Journal ArticleDOI
TL;DR: Across 2 experiments, heavy media multitaskers were better able to switch between tasks in the task-switching paradigm; although media multitasking was not associated with an increased ability to process 2 tasks in parallel, it was associated with an increased ability to shift between discrete tasks.
Abstract: The recent rise in media use has prompted researchers to investigate its influence on users' basic cognitive processes, such as attention and cognitive control. However, most of these investigations have failed to consider that the rise in media use has been accompanied by an even more dramatic rise in media multitasking (engaging with multiple forms of media simultaneously). Here we investigate how one's ability to switch between 2 tasks and to perform 2 tasks simultaneously is associated with media multitasking experience. Participants saw displays comprised of a number-letter pair and classified the number as odd or even and/or the letter as a consonant or vowel. In task-switching blocks, a cue indicated which classification to perform on each trial. In dual-task blocks, participants performed both classifications. Heavy and light media multitaskers showed comparable performance in the dual-task. Across 2 experiments, heavy media multitaskers were better able to switch between tasks in the task-switching paradigm. Thus, while media multitasking was not associated with increased ability to process 2 tasks in parallel, it was associated with an increased ability to shift between discrete tasks.

157 citations


Journal ArticleDOI
TL;DR: The influence of top-down cognitive control on 2 putatively distinct forms of distraction was investigated; focal-task engagement was promoted either by increasing the difficulty of encoding the visual to-be-remembered stimuli (by reducing their perceptual discriminability) or by providing foreknowledge of an imminent deviation (Experiment 2).
Abstract: The influence of top-down cognitive control on 2 putatively distinct forms of distraction was investigated. Attentional capture by a task-irrelevant auditory deviation (e.g., a female-spoken token following a sequence of male-spoken tokens)—as indexed by its disruption of a visually presented recall task—was abolished when focal-task engagement was promoted either by increasing the difficulty of encoding the visual to-be-remembered stimuli (by reducing their perceptual discriminability; Experiments 1 and 2) or by providing foreknowledge of an imminent deviation (Experiment 2). In contrast, distraction from continuously changing auditory stimuli (“changing-state effect”) was not modulated by task-difficulty or foreknowledge (Experiment 3). We also confirmed that individual differences in working memory capacity—typically associated with maintaining task-engagement in the face of distraction—predict the magnitude of the deviation effect, but not the changing-state effect. This convergence of experimental and psychometric data strongly supports a duplex-mechanism account of auditory distraction: Auditory attentional capture (deviation effect) is open to top-down cognitive control, whereas auditory distraction caused by direct conflict between the sound and focal-task processing (changing-state effect) is relatively immune to such control.

145 citations


Journal ArticleDOI
TL;DR: The pace at which people incidentally learn to prioritize specific locations is investigated, finding that long-term persistence differentiates incidentally learned attentional biases from the more flexible goal-driven attention.
Abstract: Substantial research has focused on the allocation of spatial attention based on goals or perceptual salience. In everyday life, however, people also direct attention using their previous experience. Here we investigate the pace at which people incidentally learn to prioritize specific locations. Participants searched for a T among Ls in a visual search task. Unbeknownst to them, the target was more often located in one region of the screen than in other regions. An attentional bias toward the rich region developed over dozens of trials. However, the bias did not rapidly readjust to new contexts. It persisted for at least a week and for hundreds of trials after the target's position became evenly distributed. The persistence of the bias did not reflect a long window over which visual statistics were calculated. Long-term persistence differentiates incidentally learned attentional biases from the more flexible goal-driven attention.

129 citations


Journal ArticleDOI
TL;DR: Results indicate that the distractor was prevented from engaging the attentional mechanism associated with the N2pc, that it did not interrupt the deployment of attention to the target, and that competition for attention can be resolved by suppressing locations of irrelevant items on a salience-based priority map.
Abstract: Salient distractors delay visual search for less salient targets in additional-singleton tasks, even when the features of the stimuli are fixed across trials. According to the salience-driven selection hypothesis, this delay is due to an initial attentional deployment to the distractor. Recent event-related potential (ERP) studies found no evidence for salience-driven selection in fixed-feature search, but the methods employed were not optimized to isolate distractor ERP components such as the N2pc and distractor positivity (PD; indices of selection and suppression, respectively). Here, we isolated target and distractor ERPs in two fixed-feature search experiments. Participants searched for a shape singleton in the presence of a more-salient color singleton (Experiment 1) or for a color singleton in the presence of a less-salient shape singleton (Experiment 2). The salient distractor did not elicit an N2pc, but it did elicit a PD on fast-response trials. Furthermore, distractors had no effect on the timing of the target N2pc. These results indicate that (a) the distractor was prevented from engaging the attentional mechanism associated with N2pc, (b) the distractor did not interrupt the deployment of attention to the target, and (c) competition for attention can be resolved by suppressing locations of irrelevant items on a salience-based priority map.

126 citations


Journal ArticleDOI
TL;DR: This work provides the first evidence that humans use ensemble-coding mechanisms to perceive the behavior of a crowd of people with surprisingly high sensitivity, and shows that this pooling provides tolerance against crowd variability and may cause a chaotic group to cohere into a unified Gestalt.
Abstract: Many species, including humans, display group behavior. Thus, perceiving crowds may be important for social interaction and survival. Here, we provide the first evidence that humans use ensemble-coding mechanisms to perceive the behavior of a crowd of people with surprisingly high sensitivity. Observers estimated the headings of briefly presented crowds of point-light walkers that differed in the number and headings of their members (i.e., people in differently sized crowds had identical or increasingly variable directions of walking). We found that observers rapidly pooled information from multiple walkers to estimate the heading of a crowd. This ensemble code was precise; observers perceived the behavior of a crowd better than the behavior of an individual. We also showed that this pooling provided tolerance against crowd variability and may cause a chaotic group to cohere into a unified Gestalt. Sensitive perception of a crowd's behavior required integration of human form and motion, suggesting that the ensemble code was generated in high-level visual areas. Overall, these mechanisms may reflect the prevalence of crowd behavior in nature and a social benefit for perceiving crowds as unified entities.

111 citations


Journal ArticleDOI
TL;DR: The interplay between higher-level planning processes and motor simulation in a joint action task where online feedback was not available is investigated and suggests that joint coordination might rely on similar principles as interlimb coordination.
Abstract: When two or more individuals intend to achieve a joint outcome, they often need to time their own actions carefully with respect to those of their coactors. Online perceptual feedback supports coordination by allowing coactors to entrain with and predict each other's actions. However, joint actions are still possible when no or little online feedback is available. The current study investigated the interplay between higher-level planning processes and motor simulation in a joint action task where online feedback was not available. Pairs of participants performed forward jumps (hops) next to each other with the instruction to land at the same time. They could neither see nor hear each other, but were informed about their own and the partner's jumping distance beforehand. The analysis of basic movement parameters showed that participants adjusted the temporal and spatial properties of the movement preparation and execution phase of their jumps to the specific difference in distance between themselves and their partner. However, this adaptation was made exclusively by the person with the shorter distance to jump, indicating a distribution of coactors' efforts based on task characteristics. A comparison with an individual bipedal coordination condition suggests that joint coordination might rely on similar principles as interlimb coordination. These findings are interpreted within a framework of motor simulation.

Journal ArticleDOI
TL;DR: It is demonstrated via computational simulations and norming studies that corpus-based word frequencies systematically overestimate strengths of word representations, especially in the low-frequency range and in smaller-size vocabularies, challenging the view that the more skilled an individual is in generic mechanisms of word processing, the less reliant he or she will be on the actual lexical characteristics of that word.
Abstract: The importance of vocabulary in reading comprehension emphasizes the need to accurately assess an individual’s familiarity with words. The present article highlights problems with using occurrence counts in corpora as an index of word familiarity, especially when studying individuals varying in reading experience. We demonstrate via computational simulations and norming studies that corpus-based word frequencies systematically overestimate strengths of word representations, especially in the low-frequency range and in smaller-size vocabularies. Experience-driven differences in word familiarity prove to be faithfully captured by the subjective frequency ratings collected from responders at different experience levels. When matched on those levels, this lexical measure explains more variance than corpus-based frequencies in eye-movement and lexical decision latencies to English words, attested in populations with varied reading experience and skill. Furthermore, the use of subjective frequencies removes the widely reported (corpus) Frequency × Skill interaction, showing that more skilled readers are equally faster in processing any word than the less skilled readers, not disproportionally faster in processing lower frequency words. This finding challenges the view that the more skilled an individual is in generic mechanisms of word processing, the less reliant he or she will be on the actual lexical characteristics of that word.
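To make the overestimation logic concrete, here is a minimal sampling sketch in Python (an illustrative assumption, not the authors' actual simulation): if each reader's encounters with a word are Poisson-distributed with a rate proportional to the word's corpus frequency and to that reader's total reading experience, then less experienced readers encounter many low-frequency words rarely or never, so the corpus count overstates how strongly those words are represented for them.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Zipf-like corpus: frequency per million tokens for 5,000 word types
corpus_freq = 1000.0 / np.arange(1, 5001)

def simulated_encounters(total_tokens_read):
    # Poisson-sampled number of encounters with each word for one reader
    return rng.poisson(corpus_freq / 1e6 * total_tokens_read)

small_reader = simulated_encounters(2_000_000)    # limited reading experience
large_reader = simulated_encounters(20_000_000)   # extensive reading experience

# Proportion of the rarer half of the vocabulary never encountered by each reader
low = slice(2500, 5000)
print((small_reader[low] == 0).mean(), (large_reader[low] == 0).mean())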

Journal ArticleDOI
TL;DR: It was found that search reaction time was faster when the target appeared in the high-frequency region rather than the low-frequency regions, suggesting that probability cuing guides spatial attention.
Abstract: Our visual system is highly sensitive to regularities in the environment. Locations that were important in one’s previous experience are often prioritized during search, even though observers may not be aware of the learning. In this study we characterized the guidance of spatial attention by incidental learning of a target’s spatial probability, and examined the interaction between endogenous cuing and probability cuing. Participants searched for a target (T) among distractors (L’s). The target was more often located in one region of the screen than in others. We found that search RT was faster when the target appeared in the high-frequency region rather than the low-frequency regions. This difference increased when there were more items on the display, suggesting that probability cuing guides spatial attention. Additional data indicated that on their own, probability cuing and endogenous cuing (e.g., a central arrow that predicted a target’s location) were similarly effective at guiding attention. However, when both cues were presented at once, probability cuing was largely eliminated. Thus, although both incidental learning and endogenous cuing can effectively guide attention, endogenous cuing takes precedence over incidental learning.

Journal ArticleDOI
TL;DR: The present study shows that it is possible to dissociate explicit expectancies from sequential adaptation effects in a Stroop task, in conditions in which feature repetitions are avoided, and in which the response-to-stimulus interval is set to 0 ms.
Abstract: In conflict tasks, congruency effects are modulated by the sequence of preceding trials. This modulation effect has been interpreted as an influence of a proactive mechanism of adaptation to conflict (Botvinick, Nystrom, Fissell, Carter, & Cohen, 1999), but the possible contribution of explicit expectancies to this adaptation effect remains unclear. The present study shows that it is possible to dissociate explicit expectancies from sequential adaptation effects in a Stroop task, in conditions in which feature repetitions are avoided and the response-to-stimulus interval is set to 0 ms. We found a progressive adaptation effect that depends on the congruency of the previous series of trials, rather than exclusively on the preceding trial. This effect is independent of explicit expectancies (Experiment 1), and can even contradict these expectancies when participants are presented with informative patterns favoring either repeating or alternating congruency (Experiments 2a and 2b). The existence of a progressive adaptation effect independent of explicit expectancies and of repetition priming challenges the idea that conflict adaptation always acts on a top-down basis (Notebaert, Gevers, Verbruggen, & Liefooghe, 2006); rather, it indicates the existence of automatic sources of sequential adaptation, including adaptation to the lack of conflict. Implications of these results for the current understanding of some empirical phenomena of cognitive control, such as the proportion-congruency effect, are highlighted.

Journal ArticleDOI
TL;DR: The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention.
Abstract: In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change-detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention.

Journal ArticleDOI
TL;DR: Novel findings provide converging evidence for reactive control of color-word Stroop interference at the item level, reveal theoretically important factors that modulate reliance on item-specific control versus contingency learning, and suggest an update to the item-specific control account.
Abstract: Prior studies have shown that cognitive control is implemented at the list and context levels in the color–word Stroop task. At first blush, the finding that Stroop interference is reduced for mostly incongruent items as compared with mostly congruent items (i.e., the item-specific proportion congruence [ISPC] effect) appears to provide evidence for yet a third level of control, which modulates word reading at the item level. However, evidence to date favors the view that ISPC effects reflect the rapid prediction of high-contingency responses and not item-specific control. In Experiment 1, we first show that an ISPC effect is obtained when the relevant dimension (i.e., color) signals proportion congruency, a problematic pattern for theories based on differential response contingencies. In Experiment 2, we replicate and extend this pattern by showing that item-specific control settings transfer to new stimuli, ruling out alternative frequency-based accounts. In Experiment 3, we revert to the traditional design in which the irrelevant dimension (i.e., word) signals proportion congruency. Evidence for item-specific control, including transfer of the ISPC effect to new stimuli, is apparent when 4-item sets are employed but not when 2-item sets are employed. We attribute this pattern to the absence of high-contingency responses on incongruent trials in the 4-item set. These novel findings provide converging evidence for reactive control of color–word Stroop interference at the item level, reveal theoretically important factors that modulate reliance on item-specific control versus contingency learning, and suggest an update to the item-specific control account (Bugg, Jacoby, & Chanani, 2011).

Journal ArticleDOI
TL;DR: It is shown that contingent capture can also occur for conceptual information at the superordinate level (e.g., sports equipment, marine animal, dessert food), suggesting that natural images can be decoded into their conceptual meaning to drive shifts of attention within the time course of a single fixation.
Abstract: Attentional capture is an unintentional shift of visuospatial attention to the location of a distractor that is either highly salient, or relevant to the current task set. The latter situation is referred to as contingent capture, in that the effect is contingent on a match between characteristics of the stimuli and the task-defined attentional-control settings of the viewer. Contingent capture has been demonstrated for low-level features, such as color, motion, and orientation. In the present paper we show that contingent capture can also occur for conceptual information at the superordinate level (e.g., sports equipment, marine animal, dessert food). This effect occurs rapidly (i.e., within 200 ms), is a spatial form of attention, and is contingent on attentional-control settings that change on each trial, suggesting that natural images can be decoded into their conceptual meaning to drive shifts of attention within the time course of a single fixation.

Journal ArticleDOI
TL;DR: The results showed that interword spacing reduced children and adults' first pass reading times and refixation probabilities indicating spaces between words facilitated word identification, and adults targeted refixations contingent on initial landing positions to a greater degree than did children.
Abstract: The present study examined children and adults' eye movement behavior when reading word spaced and unspaced Chinese text. The results showed that interword spacing reduced children and adults' first pass reading times and refixation probabilities, indicating spaces between words facilitated word identification. Word spacing effects occurred to a similar degree for both children and adults, though there were differential landing position effects for single and multiple fixation situations in both groups; clear preferred viewing location effects occurred for single fixations, whereas landing positions were closer to word beginnings, and further into the word for adults than children, for multiple fixation situations. Furthermore, adults targeted refixations contingent on initial landing positions to a greater degree than did children. Overall, the results indicate that some aspects of children's eye movements during reading show similar levels of maturity to adults, while others do not.

Journal ArticleDOI
TL;DR: A novel paradigm is introduced, testing QE duration as an independent variable by experimentally manipulating the onset of the last fixation before movement unfolding and investigating the functional mechanisms behind the QE phenomenon by manipulating the predictability of the target position.
Abstract: Evidence suggests that superior motor performance coincides with a longer duration of the last fixation before movement initiation, an observation called “quiet eye” (QE). Although the empirical findings over the last two decades underline the robustness of the phenomenon, little is known about its functional role in motor performance. Therefore, a novel paradigm is introduced, testing QE duration as an independent variable by experimentally manipulating the onset of the last fixation before movement unfolding. Furthermore, this paradigm is employed to investigate the functional mechanisms behind the QE phenomenon by manipulating the predictability of the target position and thereby the amount of information to be processed over the QE period. The results further support the assumption that QE affects motor performance, with experimentally prolonged QE durations increasing accuracy in a throwing task. However, it is only under a high information-processing load that a longer QE duration is beneficial for throwing performance. Therefore, the optimization of information processing, particularly in motor execution, turns out to be a promising candidate for explaining QE benefits on a functional level.

Journal ArticleDOI
TL;DR: It is found that removing found targets from the display or making them salient and easily segregated color singletons improved subsequent search accuracy; however, replacing found targets with random distractor items did not improve subsequent search accuracy.
Abstract: Multiple-target visual searches--when more than 1 target can appear in a given search display--are commonplace in radiology, airport security screening, and the military. Whereas 1 target is often found accurately, additional targets are more likely to be missed in multiple-target searches. To better understand this decrement in 2nd-target detection, here we examined 2 potential forms of interference that can arise from finding a 1st target: interference from the perceptual salience of the 1st target (a now highly relevant distractor in a known location) and interference from a newly created memory representation for the 1st target. Here, we found that removing found targets from the display or making them salient and easily segregated color singletons improved subsequent search accuracy. However, replacing found targets with random distractor items did not improve subsequent search accuracy. Removing and highlighting found targets likely reduced both a target's visual salience and its memory load, whereas replacing a target removed its visual salience but not its representation in memory. Collectively, the current experiments suggest that the working memory load of a found target has a larger effect on subsequent search accuracy than does its perceptual salience.

Journal ArticleDOI
TL;DR: The diffusion model is used to account for the effects of masked and unmasked priming for identity and associatively related primes and leads to the following conclusion: masked related primes give a head start to the processing of the target compared with unrelated primes, whereas unmasked priming affects primarily the quality of the lexical information.
Abstract: In the past decades, hundreds of articles have explored the mechanisms underlying priming. Most researchers assume that masked and unmasked priming are qualitatively different. For masked priming, the effects are often assumed to reflect savings in the encoding of the target stimulus, whereas for unmasked priming, it has been suggested that the effects reflect the familiarity of the prime–target compound cue. In contrast, other researchers have claimed that masked and unmasked priming reflect essentially the same core processes. In this article, we use the diffusion model (R. Ratcliff, 1978, A theory of memory retrieval, Psychological Review, Vol. 85, pp. 59–108) to account for the effects of masked and unmasked priming for identity and associatively related primes. The fits of the model led us to the following conclusion: Masked related primes give a head start to the processing of the target compared with unrelated primes, whereas unmasked priming affects primarily the quality of the lexical information.
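In the standard parameterization of the diffusion model (notation assumed here; no formulas appear in the abstract above), noisy evidence X(t) drifts between two response boundaries:

    dX(t) = v\,dt + s\,dW(t),  with  X(0) = z,  0 < z < a,  and  \mathrm{RT} = T_{\mathrm{decision}} + T_{er},

where v is the drift rate (the quality of the accumulated lexical information), a the boundary separation, z the starting point, and T_er the nondecision (encoding and response-execution) time. Read in these terms, the abstract's conclusion is that a masked related prime's head start plausibly corresponds to a starting point shifted toward the correct boundary or a shortened T_er, whereas an unmasked prime's effect on the quality of lexical information plausibly corresponds to an increased drift rate v.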

Journal ArticleDOI
TL;DR: Evidence of bidirectional interference between the concurrent tasks, such that the executive tasks interfered with timing performance and the timing task interfered with executive performance, suggests that timing relies on the same processing resources that support basic executive functions.
Abstract: Three dual-task experiments were designed to assess the contribution of executive cognitive functions to the perception of time. Each experiment combined a serial temporal production timing task with an executive task emphasizing either shifting, updating, or inhibition. The experiments uncovered evidence of bidirectional interference between the concurrent tasks, such that the executive tasks interfered with timing performance and the timing task interfered with executive performance. Each experiment also included 3 dual-task conditions in which subjects allocated attention to the concurrent tasks in specified proportions. The results showed a reciprocal tradeoff in performance on each task: More attention allocated to timing caused timing performance to improve and executive performance to decline, whereas more attention allocated to the executive task produced the opposite pattern. The findings suggest that timing relies on the same processing resources that support basic executive functions.

Journal ArticleDOI
TL;DR: The results indicate that task conflict and stop-signal inhibition share a common control mechanism that is dissociable from the control mechanism activated by the informational conflict.
Abstract: Performance of the Stroop task reflects two conflicts--informational (between the incongruent word and ink color) and task (between relevant color naming and irrelevant word reading). The task conflict is usually not visible, and is only seen when task control is damaged. Using the stop-signal paradigm, a few studies demonstrated longer stop-signal reaction times for incongruent trials than for congruent trials. This indicates interaction between stopping and the informational conflict. Here we suggest that "zooming in" on task-control failure trials will reveal another interaction--between stopping and task conflict. To examine this suggestion, we combined stop-signal and Stroop tasks in the same experiment. When participants' control failed and erroneous responses to a stop signal occurred, a reverse facilitation emerged in the Stroop task (Experiment 1) and this was eliminated using methods that manipulated the emergence of the reverse facilitation (Experiment 2). Results from both experiments were replicated when all stimuli were used in the same task (Experiment 3). In erroneous response trials, only the task conflict increased, not the informational conflict. These results indicate that task conflict and stop-signal inhibition share a common control mechanism that is dissociable from the control mechanism activated by the informational conflict.
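The stop-signal reaction times referred to above are not observed directly; a common way to estimate them is the integration method (assumed here for illustration, not taken from the article), which takes the go-RT quantile corresponding to the probability of responding on stop-signal trials and subtracts the mean stop-signal delay, as in the Python sketch below.

import numpy as np

def ssrt_integration(go_rts, stop_signal_delays, p_respond_given_stop):
    # Integration-method estimate of stop-signal reaction time (SSRT):
    # the go RT at the quantile equal to p(respond | stop signal),
    # minus the mean stop-signal delay. Argument names are illustrative.
    go_rts = np.sort(np.asarray(go_rts, dtype=float))
    idx = max(int(np.ceil(p_respond_given_stop * len(go_rts))) - 1, 0)
    return go_rts[idx] - np.mean(stop_signal_delays)

# Hypothetical usage: compare SSRT for incongruent vs. congruent Stroop trials
# ssrt_integration(go_rt_incon, ssd_incon, p_resp_incon) vs. ssrt_integration(go_rt_con, ssd_con, p_resp_con)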

Journal ArticleDOI
TL;DR: The present work uses eyetracking and a Visual World Paradigm task without object-relevant actions to assess the time course of activation of action representations, as well as their responsiveness to lexical-semantic context, to support the "Two Action System" model of object and action processing.
Abstract: Previous studies suggest that action representations are activated during object processing, even when task-irrelevant. In addition, there is evidence that lexical-semantic context may affect such activation during object processing. Finally, prior work from our laboratory and others indicates that function-based ("use") and structure-based ("move") action subtypes may differ in their activation characteristics. Most studies assessing such effects, however, have required manual object-relevant motor responses, thereby plausibly influencing the activation of action representations. The present work uses eyetracking and a Visual World Paradigm task without object-relevant actions to assess the time course of activation of action representations, as well as their responsiveness to lexical-semantic context. In two experiments, participants heard a target word and selected its referent from an array of four objects. Gaze fixations on nontarget objects signal activation of features shared between targets and nontargets. The experiments assessed activation of structure-based (Experiment 1) or function-based (Experiment 2) distractors, using neutral sentences ("S/he saw the....") or sentences with a relevant action verb (Experiment 1: "S/he picked up the...."; Experiment 2: "S/he used the...."). We observed task-irrelevant activations of action information in both experiments. In neutral contexts, structure-based activation was relatively faster-rising but more transient than function-based activation. Additionally, action verb contexts reliably modified patterns of activation in both Experiments. These data provide fine-grained information about the dynamics of activation of function-based and structure-based actions in neutral and action-relevant contexts, in support of the "Two Action System" model of object and action processing (e.g., Buxbaum & Kalenine, 2010).

Journal ArticleDOI
TL;DR: A new functional dissociation is established between the roles of different types of WM load in the fundamental visual perception process of detection: loading visual short-term memory reduced detection sensitivity during maintenance, whereas loading WM cognitive control processes enhanced detection sensitivity for a low-priority stimulus.
Abstract: We contrasted the effects of different types of working memory (WM) load on detection. Considering the sensory-recruitment hypothesis of visual short-term memory (VSTM) within load theory (e.g., Lavie, 2010) led us to predict that VSTM load would reduce visual-representation capacity, thus leading to reduced detection sensitivity during maintenance, whereas load on WM cognitive control processes would reduce priority-based control, thus leading to enhanced detection sensitivity for a low-priority stimulus. During the retention interval of a WM task, participants performed a visual-search task while also asked to detect a masked stimulus in the periphery. Loading WM cognitive control processes (with the demand to maintain a random digit order [vs. fixed in conditions of low load]) led to enhanced detection sensitivity. In contrast, loading VSTM (with the demand to maintain the color and positions of six squares [vs. one in conditions of low load]) reduced detection sensitivity, an effect comparable with that found for manipulating perceptual load in the search task. The results confirmed our predictions and established a new functional dissociation between the roles of different types of WM load in the fundamental visual perception process of detection.
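Detection sensitivity in a masked-detection task of this kind is conventionally indexed by the signal-detection measure d' (a standard assumption; the abstract does not state the formula):

    d' = \Phi^{-1}(\text{hit rate}) - \Phi^{-1}(\text{false-alarm rate}),

where \Phi^{-1} is the inverse of the standard normal cumulative distribution function. In these terms, loading WM cognitive control processes raised d' for the low-priority peripheral stimulus, whereas loading VSTM lowered it.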

Journal ArticleDOI
TL;DR: Evidence is provided that longer term contextual learning can rapidly and automatically influence the instantiation of a given attentional set and overcome distraction by salient, task-irrelevant information.
Abstract: A number of studies have demonstrated that the likelihood of a salient item capturing attention is dependent on the "attentional set" an individual employs in a given situation. The instantiation of an attentional set is often viewed as a strategic, voluntary process, relying on working memory systems that represent immediate task priorities. However, influential theories of attention and automaticity propose that goal-directed control can operate more or less automatically on the basis of longer term task representations, a notion supported by a number of recent studies. Here, we provide evidence that longer term contextual learning can rapidly and automatically influence the instantiation of a given attentional set. Observers learned associations between specific attentional sets and specific task-irrelevant background scenes during a training session, and in the ensuing test session, simply reinstating particular scenes on a trial-by-trial basis biased observers to employ the associated attentional set. This directly influenced the magnitude of attentional capture, suggesting that memory for the context in which a task is performed can play an important role in the ability to instantiate a particular attentional set and overcome distraction by salient, task-irrelevant information.

Journal ArticleDOI
TL;DR: An experimental study of the naturally biased association between shape and color found that participants systematically established an association between shapes and colors when explicitly asked to choose the color that they saw as the most naturally related to a series of given shapes.
Abstract: This article presents an experimental study on the naturally biased association between shape and color. For each basic geometric shape studied, participants were asked to indicate the color perceived as most closely related to it, choosing from the Natural Color System Hue Circle. Results show that the choices of color for each shape were not random, that is, participants systematically established an association between shapes and colors when explicitly asked to choose the color that, in their view, without any presupposition, they saw as the most naturally related to a series of given shapes. The strongest relations were found between the triangle and yellows, and the circle and square with reds. By contrast, the parallelogram was connected particularly infrequently with yellows and the pyramid with reds. Correspondence analysis suggested that two main aspects determine these relationships, namely the "warmth" and degree of "natural lightness" of hues.

Journal ArticleDOI
TL;DR: Findings indicate that the phonological and orthographic processing problems of dyslexic readers manifest differently during parafoveal and foveal processing, with each contributing to slower RAN performance and impaired reading fluency.
Abstract: The ability to coordinate serial processing of multiple items is crucial for fluent reading but is known to be impaired in dyslexia. To investigate this impairment, we manipulated the orthographic and phonological similarity of adjacent letters online as dyslexic and nondyslexic readers named letters in a serial naming (RAN) task. Eye movements and voice onsets were recorded. Letter arrays contained target item pairs in which the second letter was orthographically or phonologically similar to the first letter when viewed either parafoveally (Experiment 1a) or foveally (Experiment 1b). Relative to normal readers, dyslexic readers were more affected by orthographic confusability in Experiment 1a and phonological confusability in Experiment 1b. Normal readers were slower to process orthographically similar letters in Experiment 1b. Findings indicate that the phonological and orthographic processing problems of dyslexic readers manifest differently during parafoveal and foveal processing, with each contributing to slower RAN performance and impaired reading fluency.

Journal ArticleDOI
TL;DR: For both aligned and rolling gap pairs, children demonstrated less skill than adults in coordinating self and object movement, which has implications for understanding perception-action-cognition links and for understanding risk factors underlying car-bicycle collisions.
Abstract: This investigation examined how children and adults negotiate a challenging perceptual-motor problem with significant real-world implications: bicycling across two lanes of opposing traffic. Twelve- and 14-year-olds and adults rode a bicycling simulator through an immersive virtual environment. Participants crossed intersections with continuous cross traffic coming from opposing directions. Opportunities for crossing were divided into aligned (far gap opens with or before near gap) and rolling (far gap opens after near gap) gap pairs. Children and adults preferred rolling to aligned gap pairs, though this preference was stronger for adults than for children. Crossing aligned versus rolling gap pairs produced substantial differences in direction of travel, speed of crossing, and timing of entry into the near and far lanes. For both aligned and rolling gap pairs, children demonstrated less skill than adults in coordinating self and object movement. These findings have implications for understanding perception-action-cognition links and for understanding risk factors underlying car-bicycle collisions.

Journal ArticleDOI
TL;DR: Dutch listeners showed comparable effects of category retuning when they heard the same speaker speak her native language (Dutch) during the test, suggesting that production patterns in a second language are deemed a stable speaker characteristic likely to transfer to the native language; thus retuning of phoneme categories applies across languages.
Abstract: Native listeners adapt to noncanonically produced speech by retuning phoneme boundaries by means of lexical knowledge. We asked whether a second-language lexicon can also guide category retuning and whether perceptual learning transfers from a second language (L2) to the native language (L1). During a Dutch lexical-decision task, German and Dutch listeners were exposed to unusual pronunciation variants in which word-final /f/ or /s/ was replaced by an ambiguous sound. At test, listeners categorized Dutch minimal word pairs ending in sounds along an /f/-/s/ continuum. Dutch L1 and German L2 listeners showed boundary shifts of a similar magnitude. Moreover, following exposure to Dutch-accented English, Dutch listeners also showed comparable effects of category retuning when they heard the same speaker speak her native language (Dutch) during the test. The former result suggests that lexical representations in a second language are specific enough to support lexically guided retuning, and the latter implies that production patterns in a second language are deemed a stable speaker characteristic likely to transfer to the native language; thus retuning of phoneme categories applies across languages.