
Showing papers on "Attentional blink published in 2014"


Journal ArticleDOI
TL;DR: The findings distinguish the fundamental contributions of attention and awareness at central stages of visual cognition: Conscious perception emerges in a quantal manner, with attention serving to modulate the probability that representations reach awareness.
Abstract: Attention and awareness are two tightly coupled processes that have been the subject of the same enduring debate: Are they allocated in a discrete or in a graded fashion? Using the attentional blink paradigm and mixture-modeling analysis, we show that awareness arises at central stages of information processing in an all-or-none manner. Manipulating the temporal delay between two targets affected subjects' likelihood of consciously perceiving the second target, but did not affect the precision of its representation. Furthermore, these results held across stimulus categories and paradigms, and they were dependent on attention having been allocated to the first target. The findings distinguish the fundamental contributions of attention and awareness at central stages of visual cognition: Conscious perception emerges in a quantal manner, with attention serving to modulate the probability that representations reach awareness.
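The "mixture-modeling analysis" referred to above is typically a fit of report errors with a mixture of a precise (von Mises) component and a uniform guessing component: the mixing proportion indexes how often the target reached awareness, and the concentration parameter indexes the precision of its representation. The sketch below is a minimal, generic illustration of such a fit in Python with NumPy/SciPy; it is not the authors' analysis code, and the function name, starting values, and simulated data are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def fit_mixture(errors_rad):
    """Fit P(error) = p_mem * VonMises(0, kappa) + (1 - p_mem) * Uniform(-pi, pi).

    errors_rad: response errors in radians, in [-pi, pi].
    Returns (p_mem, kappa): estimated probability that the target reached
    awareness and the concentration (precision) of its representation.
    """
    def neg_log_likelihood(params):
        p_mem, kappa = params
        like = (p_mem * vonmises.pdf(errors_rad, kappa)
                + (1.0 - p_mem) / (2.0 * np.pi))
        return -np.sum(np.log(like + 1e-12))

    result = minimize(neg_log_likelihood,
                      x0=[0.7, 5.0],                    # illustrative starting values
                      bounds=[(0.01, 1.0), (0.1, 100.0)])
    return result.x

# Example with simulated data: 60% of trials yield precise reports,
# the remainder are random guesses.
rng = np.random.default_rng(0)
aware = rng.random(500) < 0.6
errors = np.where(aware,
                  rng.vonmises(0.0, 8.0, 500),          # precise reports
                  rng.uniform(-np.pi, np.pi, 500))      # guesses
p_mem, kappa = fit_mixture(errors)
print(f"p_mem = {p_mem:.2f}, kappa = {kappa:.1f}")
```

Under an all-or-none account, lengthening the T1-T2 lag should raise the fitted mixing proportion while leaving the concentration parameter roughly unchanged, which is the pattern the abstract reports.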

77 citations


Journal ArticleDOI
TL;DR: An established paradigm in visual cognition, the "attentional blink," is used to demonstrate that attention is captured more slowly by plants than by animals, suggesting fundamental differences in how the visual system processes plants.
Abstract: Plants, to many, are simply not as interesting as animals. Students typically prefer to study animals rather than plants and recall plants more poorly, and plants are underrepresented in the classroom. The observed paucity of interest for plants has been described as plant blindness, a term that is meant to encapsulate both the tendency to neglect plants in the environment and the lack of appreciation for plants’ functional roles. While the term plant blindness suggests a perceptual or attentional component to plant neglect, few studies have examined whether there are real differences in how plants and animals are perceived. Here, we use an established paradigm in visual cognition, the “attentional blink,” to compare the extent to which images of plants and animals capture attentional resources. We find that participants are better able to detect animals than plants in rapid image sequences and that visual attention has a different refractory period when a plant has been detected. These results suggest there are fundamental differences in how the visual system processes plants that may contribute to plant blindness. We discuss how perceptual and physiological constraints on visual processing may suggest useful strategies for characterizing and overcoming zoocentrism.

76 citations


Journal ArticleDOI
TL;DR: VGPs and nVGPs did not differ in second target identification performance on an attentional blink task, suggesting that the anti-cueing results were due to flexible control over exogenous attention rather than to more general speed-of-processing differences.
Abstract: Action video game players (VGPs) have demonstrated a number of attentional advantages over non-players. Here, we propose that many of those benefits might be underpinned by improved control over exogenous (i.e., stimulus-driven) attention. To test this we used an anti-cueing task, in which a sudden-onset cue indicated that the target would likely appear in a separate location on the opposite side of the fixation point. When the time between the cue onset and the target onset was short (40 ms), non-players (nVGPs) showed a typical exogenous attention effect. Their response times were faster to targets presented at the cued (but less probable) location compared with the opposite (more probable) location. VGPs, however, were less likely to have their attention drawn to the location of the cue. When the onset asynchrony was long (600 ms), VGPs and nVGPs were equally able to endogenously shift their attention to the likely (opposite) target location. In order to rule out processing-speed differences as an explanation for this result, we also tested VGPs and nVGPs on an attentional blink (AB) task. In a version of the AB task that minimized demands on task switching and iconic memory, VGPs and nVGPs did not differ in second target identification performance (i.e., VGPs had the same magnitude of AB as nVGPs), suggesting that the anti-cueing results were due to flexible control over exogenous attention rather than to more general speed-of-processing differences.

59 citations


Journal ArticleDOI
TL;DR: A theory of attention and consciousness (TAC) is offered that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation, and suggests neural correlates of phenomenal consciousness.
Abstract: Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation.

53 citations


Journal ArticleDOI
TL;DR: The link between awareness and binding advocated for visual information processing needs to be revised for multisensory cases, because the conditions under which individuals report a multisensory experience do not perfectly match those in which multisensory integration and binding occur.
Abstract: Given that multiple senses are often stimulated at the same time, perceptual awareness is most likely to take place in multisensory situations. However, theories of awareness are based on studies and models established for a single sense (mostly vision). Here, we consider the methodological and theoretical challenges raised by taking a multisensory perspective on perceptual awareness. First, we consider how well tasks designed to study unisensory awareness perform when used in multisensory settings, stressing that studies using binocular rivalry, bistable figure perception, continuous flash suppression, the attentional blink, repetition blindness and backward masking can demonstrate multisensory influences on unisensory awareness, but fall short of tackling multisensory awareness directly. Studies interested in the latter phenomenon rely on a method of subjective contrast and can, at best, delineate conditions under which individuals report experiencing a multisensory object or two unisensory objects. As there is not a perfect match between these conditions and those in which multisensory integration and binding occur, the link between awareness and binding advocated for visual information processing needs to be revised for multisensory cases. These challenges point at the need to question the very idea of multisensory awareness.

50 citations


Journal ArticleDOI
TL;DR: It is found that irrelevant emotional pictures gain access to working memory, even when observers are attempting to ignore them and, like the AB, prevent access of a closely following target.
Abstract: Emotion-induced blindness (EIB) refers to impaired awareness of items appearing soon after an irrelevant, emotionally arousing stimulus. Superficially, EIB appears to be similar to the attentional blink (AB), a failure to report a target that closely follows another relevant target. Previous studies of AB using event-related potentials suggest that the AB results from interference with selection (N2 component) and consolidation (P3b component) of the second target into working memory. The present study applied a similar analysis to EIB and, similarly, found that an irrelevant emotional distractor suppressed the N2 and P3b components associated with the following target at short lags. Emotional distractors also elicited a positive deflection that appeared to be similar to the PD component, which has been associated with attempts to suppress salient, irrelevant distractors (Kiss, Grubert, Petersen, & Eimer, 2012; Sawaki, Geng, & Luck, 2012; Sawaki & Luck, 2010). These results suggest that irrelevant emotional pictures gain access to working memory, even when observers are attempting to ignore them and, like the AB, prevent access of a closely following target.

48 citations


Journal ArticleDOI
TL;DR: The findings show that the formation of a durable and consciously accessible working memory trace for a briefly shown visual stimulus can be disturbed by a trailing 2-AFC task for up to several hundred milliseconds after the stimulus has been masked.
Abstract: While studies on visual memory commonly assume that the consolidation of a visual stimulus into working memory is interrupted by a trailing mask, studies on dual-task interference suggest that the consolidation of a stimulus can continue for several hundred milliseconds after a mask. As a result, estimates of the time course of working memory consolidation differ more than an order of magnitude. Here, we contrasted these opposing views by examining if and for how long the processing of a masked display of visual stimuli can be disturbed by a trailing 2-alternative forced choice task (2-AFC; a color discrimination task or a visual or auditory parity judgment task). The results showed that the presence of the 2-AFC task produced a pronounced retroactive interference effect that dissipated across stimulus onset asynchronies of 250-1,000 ms, indicating that the processing elicited by the 2-AFC task interfered with the gradual consolidation of the earlier shown stimuli. Furthermore, this interference effect occurred regardless of whether the to-be-remembered stimuli comprised a string of letters or an unfamiliar complex visual shape, and it occurred regardless of whether these stimuli were masked. Conversely, the interference effect was reduced when the memory load for the 1st task was reduced, or when the 2nd task was a color detection task that did not require decision making. Taken together, these findings show that the formation of a durable and consciously accessible working memory trace for a briefly shown visual stimulus can be disturbed by a trailing 2-AFC task for up to several hundred milliseconds after the stimulus has been masked. By implication, the current findings challenge the common view that working memory consolidation involves an immutable central processing bottleneck, and they also make clear that consolidation does not stop when a stimulus is masked.

46 citations


Journal ArticleDOI
TL;DR: The present work examined whether this training benefited performance directly, by eliminating processing limitations as claimed, or indirectly, by creating expectations about when targets would appear, and found that when temporal expectations were reduced, training-related improvements declined significantly.
Abstract: The attentional blink (AB) refers to a deficit in reporting the second of two sequentially presented targets when they are separated by less than 500 ms. Two decades of research has suggested that the AB is a robust phenomenon that is likely attributable to a fundamental limit in sequential object processing. This assumption, however, has recently been undermined by a demonstration that the AB can be eliminated after only a few hundred training trials (Choi, Chang, Shibata, Sasaki, & Watanabe in Proceedings of the National Academy of Sciences 109:12242-12247, 2012). In the present work, we examined whether this training benefited performance directly, by eliminating processing limitations as claimed, or indirectly, by creating expectations about when targets would appear. Consistent with the latter option, when temporal expectations were reduced, training-related improvements declined significantly. This suggests that whereas training may ameliorate the AB indirectly, the processing limits evidenced in the AB cannot be directly eliminated by brief exposure to the task.

43 citations


Journal ArticleDOI
TL;DR: Using the attentional blink (AB) phenomenon, the assumption that only consciously perceived information is durable (>500 ms) is questioned, and sustained BOLD signal change is found in the right mid-lateral prefrontal cortex, orbitofrontal cortex, and crus II of the cerebellum during maintenance of non-consciously perceived information.
Abstract: Conscious processing is generally seen as required for flexible and willful actions, as well as for tasks that require durable information maintenance. Here we present research that questions the assumption that only consciously perceived information is durable (> 500 ms). Using the attentional blink phenomenon, we rendered otherwise relatively clearly perceived letters non-conscious. In a first experiment we systematically manipulated the delay between stimulus presentation and response, for the purpose of estimating the durability of non-conscious perceptual representations. For items reported not seen, we found that behavioral performance was better than chance across intervals up to 15 seconds. In a second experiment we used fMRI to investigate the neural correlates underlying the maintenance of non-conscious perceptual representations. Critically, the relatively long delay period demonstrated in experiment 1 enabled isolation of the signal change specifically related to the maintenance period, separate from stimulus presentation and response. We found sustained BOLD signal change in the right mid-lateral prefrontal cortex, orbitofrontal cortex, and crus II of the cerebellum during maintenance of non-consciously perceived information. These findings are consistent with the controversial claim that working-memory mechanisms are involved in the short-term maintenance of non-conscious perceptual representations.

41 citations


Journal ArticleDOI
TL;DR: It is demonstrated that attention can be flexibly deployed as either a unitary or a divided focus in the same experimental task, depending on the observer's goals.
Abstract: The distribution of visual attention has been the topic of much investigation, and various theories have posited that attention is allocated either as a single unitary focus or as multiple independent foci. In the present experiment, we demonstrate that attention can be flexibly deployed as either a unitary or a divided focus in the same experimental task, depending on the observer’s goals. To assess the distribution of attention, we used a dual-stream Attentional Blink (AB) paradigm and 2 target pairs. One component of the AB, Lag-1 sparing, occurs only if the second target pair appears within the focus of attention. By varying whether the first-target-pair could be expected in a predictable location (always in-stream) or not (unpredictably in-stream or between-streams), observers were encouraged to deploy a divided or a unitary focus, respectively. When the second-target-pair appeared between the streams, Lag-1 sparing occurred for the Unpredictable group (consistent with a unitary focus) but not for the Predictable group (consistent with a divided focus). Thus, diametrically different outcomes occurred for physically identical displays, depending on the expectations of the observer about where spatial attention would be required.

35 citations


Journal ArticleDOI
TL;DR: The results support the idea that the functional size of the primary visual cortex is an important determinant of the efficiency of selective spatial attention for simple tasks, and that the attentional processing required for complex tasks like reading is to a large extent determined by other brain areas and inter-areal connections.

Journal ArticleDOI
TL;DR: The findings suggest that SZ is characterized by a diffuse pathophysiology affecting all stages of visual processing whereas in BD disruption is only at the latest stage involving higher order attentional functions.

Journal ArticleDOI
TL;DR: Open monitoring meditation produced a smaller attentional blink than focused attention meditation, owing to reduced T1 capture, which may suggest that very advanced practitioners can exert some control over their conscious experience.

Journal ArticleDOI
TL;DR: Findings show that training benefits can transfer across cognitive operations that draw on the central bottleneck in information processing and have implications for theories of the AB and for the design of cognitive-training regimens that aim to produce transferable training benefits.
Abstract: A growing body of research suggests that dual-task interference in sensory consolidation (e.g., the attentional blink, AB) and response selection (e.g., the psychological refractory period, PRP) stems from a common central bottleneck of information processing. With regard to response selection, it is well known that training reduces dual-task interference. We tested whether training that is known to be effective for response selection can also reduce dual-task interference in sensory consolidation. Over two experiments, performance on a PRP paradigm (Exp. 1) and on AB paradigms (differing in their stimuli and task demands, Exps. 1 and 2) was examined after participants had completed a relevant training regimen (T1 practice for both paradigms), an irrelevant training regimen (comparable sensorimotor training, not related to T1 for both tasks), a visual-search training regimen (Exp. 2 only), or after participants had been allocated to a no-training control group. Training that had been shown to be effective for reducing dual-task interference in response selection was also found to be effective for reducing interference in sensory consolidation. In addition, we found some evidence that training benefits transferred to the sensory consolidation of untrained stimuli. Collectively, these findings show that training benefits can transfer across cognitive operations that draw on the central bottleneck in information processing. These findings have implications for theories of the AB and for the design of cognitive-training regimens that aim to produce transferable training benefits.

Journal ArticleDOI
TL;DR: In this paper, the authors examined temporal expectancy effects in greater detail in the context of the attentional blink (AB), in which identification of the second of two targets is impaired when the targets are separated by less than about half a second.
Abstract: Although perception is typically constrained by limits in available processing resources, these constraints can be overcome if information about environmental properties, such as the spatial location or expected onset time of an object, can be used to direct resources to particular sensory inputs. In this work, we examined these temporal expectancy effects in greater detail in the context of the attentional blink (AB), in which identification of the second of two targets is impaired when the targets are separated by less than about half a second. We replicated previous results showing that presenting information about the expected onset time of the second target can overcome the AB. Uniquely, we also showed that information about expected onset (a) reduces susceptibility to distraction, (b) can be derived from salient temporal consistencies in intertarget intervals across exposures, and (c) is more effective when presented consistently rather than intermittently, along with trials that do not contain expectancy information. These results imply that temporal expectancy can benefit object processing at perceptual and postperceptual stages, and that participants are capable of flexibly encoding consistent timing information about environmental events in order to aid perception.

Journal ArticleDOI
TL;DR: Coupling in EEG data is studied to substantiate the role of phase–amplitude modulation in conscious access to visual target representations; delta–gamma coupling showed the largest increase in cases of correct target detection in the most challenging AB conditions.
Abstract: Global workspace access is considered as a critical factor for the ability to report a visual target. A plausible candidate mechanism for global workspace access is coupling of slow and fast brain activity. We studied coupling in EEG data using cross-frequency phase-amplitude modulation measurement between delta/theta phases and beta/gamma amplitudes from two experimental sessions, held on different days, of a typical attentional blink (AB) task, implying conscious access to targets. As the AB effect improved with practice between sessions, theta-gamma and theta-beta coupling increased generically. Most importantly, practice effects observed in delta-gamma and delta-beta couplings were specific to performance on the AB task. In particular, delta-gamma coupling showed the largest increase in cases of correct target detection in the most challenging AB conditions. All these practice effects were observed in the right temporal region. Given that the delta band is the main frequency of the P3 ERP, which is a marker of global workspace activity for conscious access, and because the gamma band is involved in visual object processing, the current results substantiate the role of phase-amplitude modulation in conscious access to visual target representations.
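Cross-frequency phase-amplitude coupling of the kind described above is commonly quantified with a modulation index: the low-frequency phase and high-frequency amplitude envelope are extracted with the Hilbert transform, and the distribution of amplitude across phase bins is compared with a uniform distribution (in the spirit of Tort-style modulation indices). The Python sketch below illustrates that generic computation on synthetic data; it is not the authors' pipeline, and the filter bands, bin count, and signal parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth bandpass filter (second-order sections)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def modulation_index(signal, fs, phase_band=(1, 4), amp_band=(30, 60), n_bins=18):
    """Phase-amplitude coupling between a slow phase band (e.g., delta)
    and a fast amplitude band (e.g., gamma)."""
    phase = np.angle(hilbert(bandpass(signal, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(signal, *amp_band, fs)))

    # Mean fast-band amplitude within each slow-band phase bin
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    which_bin = np.digitize(phase, bins) - 1
    mean_amp = np.array([amp[which_bin == k].mean() for k in range(n_bins)])

    # Normalize to a distribution and take KL divergence from uniform
    p = mean_amp / mean_amp.sum()
    return np.sum(p * np.log(p * n_bins)) / np.log(n_bins)

# Example on synthetic data: gamma amplitude is modulated by the delta cycle
# (largest at the delta peak), so the index should come out clearly above zero.
fs, t = 500, np.arange(0, 20, 1 / 500)
delta = np.sin(2 * np.pi * 2 * t)
gamma = (1 + delta) * np.sin(2 * np.pi * 40 * t)
eeg = delta + 0.3 * gamma + 0.1 * np.random.randn(t.size)
print(f"modulation index = {modulation_index(eeg, fs):.3f}")
```

A larger index indicates that fast-band amplitude is concentrated at particular slow-band phases, which is the kind of delta-gamma coupling the study links to correct target detection.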

Journal ArticleDOI
TL;DR: The results suggest that visual attention, as triggered by a cue or target, is better described by a convergent gradient-field attention model and argue against a biased-competition theory of attention.
Abstract: Recent findings have suggested that transient attention can be triggered at two locations simultaneously. However, it is unclear whether doing so reduces the effect of attention at each attended location. In two experiments, we explored the consequences of dividing attention. In the first experiment, we compared the effects of one or two cues against an uncued baseline to determine the consequences of dividing attention in a paradigm with four rapid serial visual presentation (RSVP) streams. The results indicated that two simultaneous cues increase the accuracy of reporting two targets by almost the same amount as a single cue increases the report of a single target. These results suggest that when attention is divided between multiple locations, the attentional benefit at each location is not reduced in proportion to the total number of cues. A consequent prediction of this finding is that the identification of two RSVP targets should be better when they are presented simultaneously rather than sequentially. In a second experiment, we verified this prediction by finding evidence of lag-0 sparing: Two targets presented simultaneously in different locations were reported more easily than two targets separated by 100 ms. These findings argue against a biased-competition theory of attention. We suggest that visual attention, as triggered by a cue or target, is better described by a convergent gradient-field attention model.

Journal ArticleDOI
06 Oct 2014-Emotion
TL;DR: After neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based.
Abstract: Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only angry facial expressions enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.

Journal ArticleDOI
TL;DR: Impaired T2 report accuracy at a short stimulus-onset asynchrony (SOA) was accompanied by a significant delay of the N2pc to lateral T2 targets when compared to a long SOA condition, suggesting that the attentional blink impacts attention allocation to targets, not distractors.

Journal ArticleDOI
TL;DR: Complementary self-reported ratings support the reappraisal manipulation of negative images, and the modulatory effect of reappraisal on attention was not found for neutral images.
Abstract: Our brain is unable to fully process all the sensory signals we encounter. Attention is the process that helps select input from all available information for detailed processing, and it is largely influenced by the affective value of the stimuli. This study examined if attentional bias toward emotional stimuli can be modulated by cognitively changing their emotional value. Participants were presented with negative and neutral images from four different scene-categories depicting humans ("Reading", "Working", "Crying" and "Violence"). Using cognitive reappraisal, subjects decreased and increased the negativity of one negative (e.g., "Crying") and one neutral (e.g., "Reading") category, respectively, whereas they only had to watch the other two categories (e.g., "Working" and "Violence") without changing their feelings. Subsequently, subjects performed the attentional blink paradigm. Two targets were embedded in a stream of distractors, with the previously seen human pictures serving as the first target (T1) and rotated landmark/landscape images as the second (T2). Subjects then reported T1 visibility and the orientation of T2. We investigated if the detection accuracy of T2 is influenced by the change of the emotional value of T1 due to the reappraisal manipulation. Indeed, T2 detection rate was higher when T2 was preceded by a negative image that was only viewed compared to negative images that were reappraised to be neutral. Thus, more resources were captured by images that had been reappraised before, i.e., whose negativity had been reduced. Possibly, upon re-exposure to negative stimuli, subjects had to recall the previously performed affective change. In this case resources may be allocated to maintain the reappraised value and therefore hinder the detection of a temporally close target. Complementary self-reported ratings support the reappraisal manipulation of negative images.

Journal ArticleDOI
07 Apr 2014-Emotion
TL;DR: The results of this study demonstrate that fearful facial expressions can uniquely and implicitly enhance environmental monitoring above and beyond explicit attentional effects related to task instructions.
Abstract: We previously demonstrated that fearful facial expressions implicitly facilitate memory for contextual events whereas angry facial expressions do not. The current study sought to more directly address the implicit effect of fearful expressions on attention for contextual events within a classic attentional paradigm (i.e., the attentional blink) in which memory is tested on a trial-by-trial basis, thereby providing subjects with a clear, explicit attentional strategy. Neutral faces of a single gender were presented via rapid serial visual presentation (RSVP) while bordered by four gray pound signs. Participants were told to watch for a gender change within the sequence (T1). It is critical to note that the T1 face displayed a neutral, fearful, or angry expression. Subjects were then told to detect a color change (i.e., gray to green; T2) at one of the four peripheral pound sign locations appearing after T1. This T2 color change could appear at one of six temporal positions. Complementing previous attentional blink paradigms, participants were told to respond via button press immediately when a T2 target was detected. We found that, compared with the neutral T1 faces, fearful faces significantly increased target detection ability at four of the six temporal locations (all ps < .05) whereas angry expressions did not. The results of this study demonstrate that fearful facial expressions can uniquely and implicitly enhance environmental monitoring above and beyond explicit attentional effects related to task instructions.

Journal ArticleDOI
R. Adam, Uta Noppeney
TL;DR: The results demonstrate that a sound around the time of T2 increases subjects' awareness of the visual target as a function of T1 and T2 congruency, suggesting that the brain may combine phonological congruency cues provided by the audiovisual inputs at T2 to infer whether auditory and visual signals emanate from a common source and should hence be integrated for perceptual decisions.
Abstract: Capacity limitations of attentional resources allow only a fraction of sensory inputs to enter our awareness. Most prominently, in the attentional blink the observer often fails to detect the second of two rapidly successive targets that are presented in a sequence of distractor items. To investigate how auditory inputs enable a visual target to escape the attentional blink, this study presented the visual letter targets T1 and T2 together with phonologically congruent or incongruent spoken letter names. First, a congruent relative to an incongruent sound at T2 rendered visual T2 more visible. Second, this T2 congruency effect was amplified when the sound was congruent at T1 as indicated by a T1 congruency × T2 congruency interaction. Critically, these effects were observed both when the sounds were presented in synchrony with and prior to the visual target letters suggesting that the sounds may increase visual target identification via multiple mechanisms such as audiovisual priming or decisional interactions. Our results demonstrate that a sound around the time of T2 increases subjects' awareness of the visual target as a function of T1 and T2 congruency. Consistent with Bayesian causal inference, the brain may thus combine (1) prior congruency expectations based on T1 congruency and (2) phonological congruency cues provided by the audiovisual inputs at T2 to infer whether auditory and visual signals emanate from a common source and should hence be integrated for perceptual decisions.

Journal ArticleDOI
TL;DR: It is argued that the pattern of results indicates that in the pre-target interval several processes act in parallel, and the balance between these processes relates to the occurrence of an AB.
Abstract: The attentional blink (AB) is a deficit in conscious perception of the second of two targets if it follows the first within 200-500 msec. The AB phenomenon has been linked to pre-target oscillatory alpha activity. However, this is based on paradigms that use a rapid serial visual presentation (RSVP) stimulus stream in which the targets are embedded. This distracter stream is usually presented at a frequency of 10 Hz and thus generates a steady-state visual-evoked potential (ssVEP) at the center of the alpha frequency band. This makes the interpretation of alpha findings in the AB difficult. To be able to relate these findings either to the presence of the ssVEP or to an effect of endogenously generated alpha activity, we compared AB paradigms with and without different pre-target distracter streams. The distracter stream was always presented at 12 Hz, and power and intertrial phase coherence were analyzed in the alpha range (8-12 Hz). Without a distracter stream, alpha power dropped before target presentation, whereas coherence did not change. Presence of a distracter stream was linked to stronger pre-target power reduction and increased coherence, which were both modulated by distracter stream characteristics. With regard to the AB, results indicated that, whereas ssVEP-related power tended to be higher when both targets were detected, endogenous alpha power tended to be lower. We argue that the pattern of results indicates that in the pre-target interval several processes act in parallel. The balance between these processes relates to the occurrence of an AB.

Journal ArticleDOI
TL;DR: The results demonstrated that target encoding in attentional windows has an all-or-none influence on subsequent item report, and a comparison of the results to a computational model of temporal attention demonstrates how structural limitations on the rate of encoding affect perception, even during sparing.
Abstract: The attentional blink (AB) is a dual-target, rapid serial visual presentation (RSVP) deficit thought to represent a failure of perceptual awareness that reflects the dynamics of temporal attention. However, second target (T2) report is typically unimpaired when the targets appear within 150 ms of one another (i.e., lag-1 sparing). In addition, this sparing can be extended if more targets appear sequentially. It is thought that sequential targets are processed in the same attentional window. Here, we investigated the fate of targets processed in these windows and, specifically, the consequence for subsequent targets when an item at lag-1 is reported versus missed. The results demonstrated that target encoding in attentional windows has an all-or-none influence on subsequent item report: When comparing two- and three-target (T1 and T2 not separated by distractors) RSVP streams, there was no difference in AB magnitude for the final target when either T2 or T1 was missed in the three-target condition, but both of these conditions had significantly smaller blinks than those observed when T1 and T2 were accurately reported. A comparison of our results to a computational model of temporal attention demonstrates how structural limitations on the rate of encoding affect perception, even during sparing.

Journal ArticleDOI
TL;DR: Alzheimer's disease patients under-reproduced the duration of a previously exposed stimulus in the high-attentional relative to the low-attentional task, and the same pattern was observed in older adults.

Journal ArticleDOI
TL;DR: It is shown that fear, anger and pain produced an AB and that alexithymia moderated it, such that difficulty in describing feelings and externally oriented thinking were associated with higher interference after the processing of fear and anger at short presentation intervals.
Abstract: The present studies aimed to analyse the modulatory effect of distressing facial expressions on attention processing. The attentional blink (AB) paradigm is one of the most widely used paradigms for studying temporal attention, and is increasingly applied to study the temporal dynamics of emotion processing. The aims of this study were to investigate how identifying fear and pain facial expressions (Study 1) and fear and anger facial expressions (Study 2) would influence the detection of subsequent stimuli presented within short time intervals, and to assess the moderating influence of alexithymia and affectivity on this effect. It has been suggested that high alexithymia scorers need more attentional resources to process distressing facial expressions and that negative affectivity increases the AB. We showed that fear, anger and pain produced an AB and that alexithymia moderated it, such that difficulty in describing feelings (Study 1) and externally oriented thinking (Study 2) were associated with higher interference after the processing of fear and anger at short presentation intervals. These studies provide evidence that distressing facial expressions modulate attentional processing at short time intervals and that alexithymia influences the early attentional processing of fear and anger expressions. Controlling for state affect did not change these conclusions.

Journal ArticleDOI
TL;DR: It is shown that the two odors have specific effects on attentional control: as compared with the calming lavender aroma, the arousing peppermint condition yielded a larger AB.
Abstract: Increasing evidence suggests that aromas have distinctive effects on the allocation of attention in space: Arousing olfactory fragrances (e.g., peppermint) are supposed to induce a more focused state, and calming olfactory fragrances (e.g., lavender) a broader attentional state. Here, we investigate whether odors have similar effects on the allocation of attention in time. Participants performed the attentional blink (AB) task, known to produce a deficit in reporting the second of two target stimuli presented in close succession in a rapid sequence of distractors, while being exposed to either a peppermint or a lavender aroma. In two experiments using a between-subjects and a within-subjects design, respectively, we show that the two odors have specific effects on attentional control: As compared with the calming lavender aroma, the arousing peppermint condition yielded a larger AB. Our results demonstrate that attentional control is systematically modulated by factors that induce a more or a less distributed state of mind.

Journal ArticleDOI
TL;DR: Results clearly show that masking T1 attenuates the P3 to T1 and delays the P3 to T2 in the AB, and implications for extant theories of the AB are discussed.
Abstract: The attentional blink (AB) refers to the decline in report accuracy of a second target (T2) when presented shortly after a first target (T1) in a rapid serial visual presentation (RSVP) of distractors. It is known that masking T1 increases the magnitude of the AB, and masking a single target (equivalent to T1) in a RSVP stream attenuates the P3 to the target in correct trials. The major purpose of the present study was to clarify how these two effects may be integrated. An intervening distractor was presented at lag 1 (T1+1), at lag 2 (T1+2), or at neither of these two lags (no distractor). T2 was always presented at lag 3, as the last item in the stream. The P3 to T1 was attenuated and the P3 to T2 delayed in the T1+1 condition compared to the two other distractor conditions. These results clearly show that masking T1 attenuates the P3 to T1 and delays the P3 to T2 in the AB. Implications for extant theories of the AB are discussed.

Journal ArticleDOI
TL;DR: Neither spatially focused attention nor attentional engagement is sufficient to prevent attentional capture, and a distractor–target letter compatibility manipulation showed that the peripheral distractor summoned attention, irrespective of whether attention had just been engaged.
Abstract: What conditions, if any, can fully prevent attentional capture (i.e., involuntary allocation of spatial attention to an irrelevant object) has been a matter of debate. In a previous study, Folk, Ester, and Troemel (Psychonomic Bulletin & Review 16:127–132, 2009) suggested that attentional capture can be blocked entirely when attention is already engaged in a different object. This conclusion relied on the finding that in a search for a known-color target in a rapid serial visual presentation stream, a peripheral distractor with the target color did not further impair target identification performance when a distractor also with the target color that appeared in the stream had already captured attention. In the present study, we argue that this conclusion is unwarranted, because the effects of the central and peripheral distractors could not be disentangled. In order to isolate the effect of the peripheral distractor, we introduced a distractor–target letter compatibility manipulation. Our results showed that the peripheral distractor summoned attention, irrespective of whether attention had just been engaged. We conclude that neither spatially focused attention nor attentional engagement is sufficient to prevent attentional capture.

Journal ArticleDOI
TL;DR: The attentional blink (AB) paradigm is used to demonstrate privileged access to perceptual awareness for tool–action recipient object pairs and to investigate how motor affordances modulate their joint processing.
Abstract: Facilitatory effects have been noted between tools and the objects that they act upon (their "action recipients") across several paradigms. However, it has not been convincingly established that the motor system is directly involved in the joint visual processing of these object pairings. Here, we used the attentional blink (AB) paradigm to demonstrate privileged access to perceptual awareness for tool-action recipient object pairs and to investigate how motor affordances modulate their joint processing. We demonstrated a reduction in the size of the AB that was greater for congruent tool-action recipient pairings (e.g., hammer-nail) than for incongruent pairings (e.g., scissors-nail). Moreover, the AB was reduced only when action recipients followed their associated tool in the temporal sequence, but not when this order was reversed. Importantly, we also found that the effect was sensitive to manipulations of the motor congruence between the tool and the action recipient. First, we observed a greater reduction in the AB when the tool and action recipient were correctly aligned for action than when the tool was rotated to face away from the action recipient. Second, presenting a different tool as a distractor between the tool and action recipient target objects removed any benefit seen for congruent pairings. This was likely due to interference from the motor properties of the distractor tool that disrupted the motor synergy between the congruent tool and action recipient targets. Overall, these findings demonstrate that the contextual motoric relationship between tools and their action recipients facilitates their visual encoding and access to perceptual awareness.