
Showing papers in "Attention Perception & Psychophysics in 2016"


Journal ArticleDOI
TL;DR: The degrees of experimental support for six hypotheses about what causes the retro-cue effect are evaluated: attention protects representations from decay, attention prioritizes the selected WM contents for comparison with a probe display, attended representations are strengthened in WM, not-attended representations are removed from WM, a retro-cue provides a head start for retrieval of the target, and attention protects the selected representation from perceptual interference.
Abstract: The concept of attention has a prominent place in cognitive psychology. Attention can be directed not only to perceptual information, but also to information in working memory (WM). Evidence for an internal focus of attention has come from the retro-cue effect: Performance in tests of visual WM is improved when attention is guided to the test-relevant contents of WM ahead of testing them. The retro-cue paradigm has served as a test bed to empirically investigate the functions and limits of the focus of attention in WM. In this article, we review the growing body of (behavioral) studies on the retro-cue effect. We evaluate the degrees of experimental support for six hypotheses about what causes the retro-cue effect: (1) Attention protects representations from decay, (2) attention prioritizes the selected WM contents for comparison with a probe display, (3) attended representations are strengthened in WM, (4) not-attended representations are removed from WM, (5) a retro-cue to the retrieval target provides a head start for its retrieval before decision making, and (6) attention protects the selected representation from perceptual interference. The extant evidence provides support for the last four of these hypotheses.

253 citations


Journal ArticleDOI
TL;DR: Suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
Abstract: Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
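To make the level cue concrete: in the free field, sound level falls by about 6 dB per doubling of source distance (inverse-square spreading). The short Python sketch below illustrates how distance could in principle be estimated from a level change under that assumption; the function name and example values are illustrative only, not taken from the review.

```python
def distance_from_level(ref_distance_m, ref_level_db, observed_level_db):
    """Estimate source distance from sound level, assuming free-field
    spreading: L(d) = L(d_ref) - 20 * log10(d / d_ref), i.e., a drop of
    about 6 dB per doubling of distance."""
    delta_db = ref_level_db - observed_level_db
    return ref_distance_m * 10 ** (delta_db / 20)

# A source at 70 dB when 1 m away that now measures 58 dB (-12 dB,
# two doublings) is estimated to be about 4 m away.
print(round(distance_from_level(1.0, 70.0, 58.0), 2))  # 3.98
```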

169 citations


Journal ArticleDOI
TL;DR: This work addressed the critical question of whether a causal relationship exists between changes in hand position sense and changes in limb ownership by devising a novel setup that mechanically manipulated the position of the participant's hand, without the participant noticing, while the rubber hand illusion was being elicited.
Abstract: The rubber hand illusion is a perceptual illusion in which participants experience an inanimate rubber hand as belonging to their own body. The illusion is elicited by synchronously stroking the rubber hand and the participant's real hand, which is hidden from sight. The feeling of owning the rubber hand is accompanied by changes in hand position sense (proprioception), so that when participants are asked to indicate the location of their (unseen) hand, they indicate that it is located closer to the rubber hand. This "proprioceptive drift" is the most widely used objective measure of the rubber hand illusion, and from a theoretical perspective, it suggests a close link between proprioception and the feeling of body ownership. However, the critical question of whether a causal relationship exists between changes in hand position sense and changes in limb ownership has remained unanswered. Here we addressed this question by devising a novel setup that allowed us to mechanically manipulate the position of the participant's hand without the participant noticing, while the rubber hand illusion was being elicited. Our results showed that changing the sensed position closer to or farther away from the rubber hand did not change the strength of the rubber hand illusion. Thus, the illusion is not dependent on changes in hand position sense. This finding supports models of body ownership and central body representation that hold that proprioceptive drift and the subjective illusion are related to different central processes.

146 citations


Journal ArticleDOI
TL;DR: A Bayesian modeling framework is presented that explains why the tendency to ignore evidence against a selected perceptual choice may be a heuristic adopted by the perceptual decision-making system, rather than reflecting inherent biological limitations, and why this heuristic strategy may be advantageous in real-world contexts.
Abstract: Zylberberg, Barttfeld, and Sigman (Frontiers in Integrative Neuroscience, 6:79, 2012) found that confidence decisions, but not perceptual decisions, are insensitive to evidence against a selected perceptual choice. We present a signal detection theoretic model to formalize this insight, which gave rise to a counter-intuitive empirical prediction: that depending on the observer's perceptual choice, increasing task performance can be associated with decreasing metacognitive sensitivity (i.e., the trial-by-trial correspondence between confidence and accuracy). The model also provides an explanation as to why metacognitive sensitivity tends to be less than optimal in actual subjects. These predictions were confirmed robustly in a psychophysics experiment. In a second experiment we found that, in at least some subjects, the effects were replicated even under performance feedback designed to encourage optimal behavior. However, some subjects did show improvement under feedback, suggesting the tendency to ignore evidence against a selected perceptual choice may be a heuristic adopted by the perceptual decision-making system, rather than reflecting inherent biological limitations. We present a Bayesian modeling framework that explains why this heuristic strategy may be advantageous in real-world contexts.
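The model's core insight can be illustrated with a small simulation. The Python sketch below is not the authors' actual model: it assumes a simple equal-variance signal detection setup with invented parameters, and compares a confidence rule based on the balance of evidence with a heuristic rule that ignores evidence against the chosen alternative, scoring each by how well confidence separates correct from error trials.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
n = 100_000
signal = rng.integers(0, 2, n)   # which alternative (0 or 1) was presented
mu = 1.0                         # evidence boost for the presented alternative

e = rng.normal(0.0, 1.0, (n, 2))          # evidence for each alternative
e[np.arange(n), signal] += mu
choice = e.argmax(axis=1)
correct = choice == signal

conf_balance = np.abs(e[:, 1] - e[:, 0])  # uses all the evidence
conf_heuristic = e[np.arange(n), choice]  # ignores the unchosen alternative

def meta_auc(conf, correct):
    """ROC area for confidence discriminating correct from error trials
    (a simple stand-in for metacognitive sensitivity)."""
    r = rankdata(conf)
    n_pos, n_neg = correct.sum(), (~correct).sum()
    return (r[correct].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(f"balance of evidence: {meta_auc(conf_balance, correct):.3f}")
print(f"chosen-option only:  {meta_auc(conf_heuristic, correct):.3f}")  # lower
```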

92 citations


Journal ArticleDOI
TL;DR: This tutorial highlights and discusses several issues concerning study design and the test of the race model inequality, such as inappropriate control of Type I error, insufficient statistical power, wrong treatment of omitted responses or anticipations, and the interpretation of violations of the race model inequality.
Abstract: When participants respond in the same way to stimuli of two categories, responses are often observed to be faster when both stimuli are presented together (redundant signals) relative to the response time obtained when they are presented separately. This effect is known as the redundant signals effect. Several models have been proposed to explain this effect, including race models and coactivation models of information processing. In race models, the two stimulus components are processed in separate channels, and the faster channel determines the processing time. This mechanism leads, on average, to faster responses to redundant signals. In contrast, coactivation models assume integrated processing of the combined stimuli. To distinguish between these two accounts, Miller (Cognitive Psychology, 14, 247-279, 1982) derived the well-known race model inequality, which has become a routine test for behavioral data in experiments with redundant signals. In this tutorial, we review the basic properties of redundant signals experiments and the statistical procedures that were used to test the race model inequality in studies published between 2011 and 2014. We highlight and discuss several issues concerning study design and the test of the race model inequality, such as inappropriate control of Type I error, insufficient statistical power, wrong treatment of omitted responses or anticipations, and the interpretation of violations of the race model inequality. We make detailed recommendations on the design of redundant signals experiments and on the statistical analysis of redundancy gains. We describe a number of coactivation models that may be considered when the race model has been shown to fail.
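For reference, the race model inequality states that the cumulative RT distribution in the redundant condition may not exceed the sum of the single-signal distributions: F_red(t) <= F_A(t) + F_B(t) for all t. The Python sketch below illustrates only the bare logic of that check on invented data; it is not a substitute for the carefully error-controlled statistical procedures the tutorial recommends.

```python
import numpy as np

def race_model_violation(rt_a, rt_b, rt_red, probs=np.linspace(0.05, 0.95, 19)):
    """Evaluate Miller's (1982) inequality F_red(t) <= F_A(t) + F_B(t)
    at a grid of time points. Positive values indicate violations,
    which speak against a race account and toward coactivation."""
    ts = np.quantile(np.concatenate([rt_a, rt_b, rt_red]), probs)
    cdf = lambda rt: np.mean(rt[:, None] <= ts[None, :], axis=0)
    return cdf(rt_red) - np.minimum(cdf(rt_a) + cdf(rt_b), 1.0)

# Invented data in which redundant responses are strongly speeded.
rng = np.random.default_rng(1)
rt_a = rng.normal(400, 50, 5000)    # single signal A (ms)
rt_b = rng.normal(400, 50, 5000)    # single signal B (ms)
rt_red = rng.normal(330, 40, 5000)  # redundant signals
print(race_model_violation(rt_a, rt_b, rt_red).max())  # > 0: violation
```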

65 citations


Journal ArticleDOI
TL;DR: The role of executive control in stimulus-driven and goal-directed attention in visual working memory is examined using probed recall of a series of objects, a task that allows study of the dynamics of storage through analysis of serial position data.
Abstract: We examined the role of executive control in stimulus-driven and goal-directed attention in visual working memory using probed recall of a series of objects, a task that allows study of the dynamics of storage through analysis of serial position data. Experiment 1 examined whether executive control underlies goal-directed prioritization of certain items within the sequence. Instructing participants to prioritize either the first or final item resulted in improved recall for these items, and an increase in concurrent task difficulty reduced or abolished these gains, consistent with their dependence on executive control. Experiment 2 examined whether executive control is also involved in the disruption caused by a post-series visual distractor (suffix). A demanding concurrent task disrupted memory for all items except the most recent, whereas a suffix disrupted only the most recent items. There was no interaction when concurrent load and suffix were combined, suggesting that deploying selective attention to ignore the distractor did not draw upon executive resources. A final experiment replicated the independent interfering effects of suffix and concurrent load while ruling out possible artifacts. We discuss the results in terms of a domain-general episodic buffer in which information is retained in a transient, limited capacity privileged state, influenced by both stimulus-driven and goal-directed processes. The privileged state contains the most recent environmental input together with goal-relevant representations being actively maintained using executive resources.

60 citations


Journal ArticleDOI
TL;DR: The adaptive choice visual search enables a fresh approach to studying goal-directed control, and contributes new evidence that control is partly determined by both performance maximization and effort minimization, as well as at least one additional factor, which is speculated to include novelty seeking.
Abstract: Goal-directed attentional control supports efficient visual search by prioritizing relevant stimuli in the environment. Previous research has shown that goal-directed control can be configured in many ways, and often multiple control settings can be used to achieve the same goal. However, little is known about how control settings are selected. We explored the extent to which the configuration of goal-directed control is driven by performance maximization (optimally configuring settings to maximize speed and accuracy) and effort minimization (selecting the least effortful settings). We used a new paradigm, adaptive choice visual search, which allows participants to choose one of two available targets (a red or a blue square) on each trial. Distractor colors vary predictively across trials, such that the optimal target switches back and forth throughout the experiment. Results (N = 43) show that participants chose the optimal target most often, updating to the new target when the environment changed, supporting performance maximization. However, individuals were sluggish to update to the optimal color, consistent with effort minimization. Additionally, we found a surprisingly high rate of nonoptimal choices and switching between targets, which could not be explained by either factor. Analysis of participants’ self-reported search strategy revealed substantial individual differences in the control strategies used. In sum, the adaptive choice visual search enables a fresh approach to studying goal-directed control. The results contribute new evidence that control is partly determined by both performance maximization and effort minimization, as well as at least one additional factor, which we speculate to include novelty seeking.
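To make the paradigm's logic concrete, the hypothetical Python sketch below generates displays in which the color matching fewer distractors is the optimal choice and drifts the distractor mix so that the optimal target switches across trials. All names and parameter values are invented for illustration.

```python
import random

def make_display(n_items=24, red_distractor_share=0.75):
    """Generate one toy adaptive-choice display: one red and one blue
    target among distractors. The target whose color matches fewer
    distractors is the 'optimal' (easier-to-find) choice."""
    n_dist = n_items - 2
    n_red = round(n_dist * red_distractor_share)
    items = ["red_target", "blue_target"]
    items += ["red_distractor"] * n_red + ["blue_distractor"] * (n_dist - n_red)
    random.shuffle(items)
    optimal = "blue" if n_red > n_dist - n_red else "red"
    return items, optimal

# Drifting the distractor mix across trials makes the optimal target
# switch back and forth, as in the paradigm described above.
for trial, share in enumerate([0.8, 0.65, 0.45, 0.2]):
    _, optimal = make_display(red_distractor_share=share)
    print(trial, optimal)   # blue, blue, red, red
```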

54 citations


Journal ArticleDOI
TL;DR: Although monetary reward can increase attentional priority for the high-reward target during training, subsequent attentional capture effects may not be reward-based, but reflect, in part, attentional capture by previous targets.
Abstract: Recent research reported that task-irrelevant colors captured attention if these colors previously served as search targets and received high monetary reward. We showed that both monetary reward and value-independent mechanisms influenced selective attention. Participants searched for two potential target colors among distractor colors in the training phase. Subsequently, they searched for a shape singleton in a testing phase. Experiment 1 found that participants were slower in the testing phase if a distractor of a previous target color was present rather than absent. Such slowing was observed even when no monetary reward was used during training. Experiment 2 associated monetary rewards with the target colors during the training phase. Participants were faster finding the target associated with higher monetary reward. However, reward training did not yield value-dependent attentional capture in the testing phase. Attentional capture by the previous target colors was not significantly greater for the previously high-reward color than the previously low or no-reward color. These findings revealed both the power and limitations of monetary reward on attention. Although monetary reward can increase attentional priority for the high-reward target during training, subsequent attentional capture effects may not be reward-based, but reflect, in part, attentional capture by previous targets.

51 citations


Journal ArticleDOI
TL;DR: It seems, therefore, that above-chance static face–voice matching is sensitive to the experimental procedure employed, and voices and dynamic articulating faces, as well as voices and static faces, share concordant source identity information.
Abstract: Research investigating whether faces and voices share common source identity information has offered contradictory results. Accurate face–voice matching is consistently above chance when the facial stimuli are dynamic, but not when the facial stimuli are static. We tested whether procedural differences might help to account for the previous inconsistencies. In Experiment 1, participants completed a sequential two-alternative forced choice matching task. They either heard a voice and then saw two faces or saw a face and then heard two voices. Face–voice matching was above chance when the facial stimuli were dynamic and articulating, but not when they were static. In Experiment 2, we tested whether matching was more accurate when faces and voices were presented simultaneously. The participants saw two face–voice combinations, presented one after the other. They had to decide which combination was the same identity. As in Experiment 1, only dynamic face–voice matching was above chance. In Experiment 3, participants heard a voice and then saw two static faces presented simultaneously. With this procedure, static face–voice matching was above chance. The overall results, analyzed using multilevel modeling, showed that voices and dynamic articulating faces, as well as voices and static faces, share concordant source identity information. It seems, therefore, that above-chance static face–voice matching is sensitive to the experimental procedure employed. In addition, the inconsistencies in previous research might depend on the specific stimulus sets used; our multilevel modeling analyses show that some people look and sound more similar than others.

49 citations


Journal ArticleDOI
TL;DR: Results suggest that non-native listeners show native-like sensitivity to distributional information in the input and use this information to adjust categorization, just as native listeners do, with the specific trajectory of category adaptation governed by initial cue-weighting strategies.
Abstract: Listeners possess a remarkable ability to adapt to acoustic variability in the realization of speech sound categories (e.g., different accents). The current work tests whether non-native listeners adapt their use of acoustic cues in phonetic categorization when they are confronted with changes in the distribution of cues in the input, as native listeners do, and examines to what extent these adaptation patterns are influenced by individual cue-weighting strategies. In line with previous work, native English listeners, who use voice onset time (VOT) as a primary cue to the stop voicing contrast (e.g., 'pa' vs. 'ba'), adjusted their use of f0 (a secondary cue to the contrast) when confronted with a noncanonical "accent" in which the two cues gave conflicting information about category membership. Native Korean listeners' adaptation strategies, while variable, were predictable based on their initial cue weighting strategies. In particular, listeners who used f0 as the primary cue to category membership adjusted their use of VOT (their secondary cue) in response to the noncanonical accent, mirroring the native pattern of "downweighting" a secondary cue. Results suggest that non-native listeners show native-like sensitivity to distributional information in the input and use this information to adjust categorization, just as native listeners do, with the specific trajectory of category adaptation governed by initial cue-weighting strategies.
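The cue-weighting idea can be sketched as a logistic combination of the two cues. In the toy Python example below, the weights are purely illustrative (not fitted to the study's data); it shows how downweighting the secondary f0 cue shrinks f0's influence at an ambiguous VOT.

```python
import numpy as np

def p_voiceless(vot_ms, f0_hz, w_vot, w_f0, b):
    """Toy logistic categorization of /pa/ vs. /ba/ from VOT and f0."""
    return 1.0 / (1.0 + np.exp(-(w_vot * vot_ms + w_f0 * f0_hz + b)))

# Listener with VOT as the primary cue and f0 as a secondary cue.
before = dict(w_vot=0.4, w_f0=0.010, b=-12.0)
# After exposure to an accent in which f0 conflicts with VOT, the
# secondary cue is downweighted (numbers are purely illustrative).
after = dict(w_vot=0.4, w_f0=0.002, b=-12.0)

for f0 in (250, 150):                           # high vs. low f0
    print(f0,
          round(p_voiceless(25, f0, **before), 2),  # f0 shifts the category...
          round(p_voiceless(25, f0, **after), 2))   # ...much less after adapting
```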

43 citations


Journal ArticleDOI
TL;DR: It is concluded that the value-modulated oculomotor capture effect is a consequence of biased competition on the saccade priority map and cannot be explained by a general reduction in saccadic threshold.
Abstract: Recent research has shown that reward learning can modulate oculomotor and attentional capture by physically salient and task-irrelevant distractor stimuli, even when directing gaze to those stimuli is directly counterproductive to receiving reward. This value-modulated oculomotor capture effect may reflect biased competition in the oculomotor system, such that the relationship between a stimulus feature and reward enhances that feature's representation on an internal priority map. However, it is also possible that this effect is a result of reward reducing the threshold for a saccade to be made to salient items. Here, we demonstrate value-modulated oculomotor capture when two reward-associated distractor stimuli are presented simultaneously in the same search display. The influence of reward on oculomotor capture is found to be most prominent at the shortest saccade latencies. We conclude that the value-modulated oculomotor capture effect is a consequence of biased competition on the saccade priority map and cannot be explained by a general reduction in saccadic threshold.

Journal ArticleDOI
TL;DR: The present study evaluated the discrepancy-attention link in a display where novel and familiar stimuli are equated for saliency, and found that novelty captures and binds attention.
Abstract: While the classical distinction between task-driven and stimulus-driven biasing of attention appears to be a dichotomy at first sight, there seems to be a third category that depends on the contrast or discrepancy between active representations and the upcoming stimulus, and may be termed novelty, surprise, or prediction failure. For previous demonstrations of the discrepancy-attention link, stimulus-driven components (saliency) may have played a decisive role. The present study was conducted to evaluate the discrepancy-attention link in a display where novel and familiar stimuli are equated for saliency. Eye tracking was used to determine fixations on novel and familiar stimuli as a proxy for attention. Results show a prioritization of attention by the novel color, and a de-prioritization of the familiar color, which is clearly present at the second fixation, and spans over the next couple of fixations. Saliency, on the other hand, did not prioritize items in the display. The results thus reinforce the notion that novelty captures and binds attention.

Journal ArticleDOI
TL;DR: It is suggested that a general and unselective mechanism is responsible for integrating one's own responses with a large variety of stimuli.
Abstract: Short-term bindings between responses and events in the environment ensure efficient behavioral control. This notion holds true for two particular types of binding: bindings between responses and response-irrelevant distractor stimuli that are present at the time of responding, and also for bindings between responses and the effects they cause. Although both types of binding have been extensively studied in the past, little is known about their interrelation. In three experiments, we analyzed both types of binding processes in a distractor-response binding design and in a response-effect binding design, which yielded two central findings: (1) Distractor-response binding and response-effect binding effects were observed not only in their native, but also in the corresponding “non-native” design, and (2) a manipulation of retrieval delay affected both types of bindings in a similar way. We suggest that a general and unselective mechanism is responsible for integrating one's own responses with a large variety of stimuli.

Journal ArticleDOI
TL;DR: Examination of the underlying relationships across seven paradigms that varied in their response selection and response inhibition requirements provides evidence in support of the hypothesis that response selection and response inhibition reflect two distinct cognitive operations.
Abstract: The abilities to select appropriate responses and suppress unwanted actions are key executive functions that enable flexible and goal-directed behavior. However, to date it has been unclear whether these two cognitive operations tap a common action control resource or reflect two distinct processes. In the present study, we used an individual differences approach to examine the underlying relationships across seven paradigms that varied in their response selection and response inhibition requirements: stop-signal, go–no-go, Stroop, flanker, single-response selection, psychological refractory period, and attentional blink tasks. A confirmatory factor analysis suggested that response inhibition and response selection are separable, with stop-signal and go–no-go task performance being related to response inhibition, and performance in the psychological refractory period, Stroop, single-response selection, and attentional blink tasks being related to response selection. These findings provide evidence in support of the hypothesis that response selection and response inhibition reflect two distinct cognitive operations.
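The reported factor structure can be illustrated with a small generative simulation: two correlated latent abilities produce the observed task scores, and the resulting correlation matrix recovers the two clusters. The Python sketch below is illustrative only: loadings and the factor correlation are invented, the task-to-factor assignments follow the abstract's summary, the flanker task is omitted because the summary does not assign it to a factor, and a simple simulation stands in for the paper's confirmatory factor analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Two correlated latent abilities (correlation value is invented).
inhibition, selection = rng.multivariate_normal(
    [0, 0], [[1.0, 0.3], [0.3, 1.0]], n).T

def task_score(latent, loading=0.7):
    """Observed score = loading * latent ability + unique noise."""
    return loading * latent + np.sqrt(1 - loading ** 2) * rng.normal(0, 1, n)

scores = {                      # assignments per the abstract's CFA summary
    "stop_signal": task_score(inhibition),
    "go_no_go": task_score(inhibition),
    "prp": task_score(selection),
    "stroop": task_score(selection),
    "single_response": task_score(selection),
    "attentional_blink": task_score(selection),
}
r = np.corrcoef([scores[k] for k in scores])
print(list(scores))
print(np.round(r, 2))           # within-factor correlations exceed between
```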

Journal ArticleDOI
Jeff Miller
TL;DR: The idea that individual racers proceed at the same speed in the single and redundant conditions (also known as “context independence”) is shown to be better understood as an inherent part of Raab's original race model than as a separate, additional assumption.
Abstract: As a supplement to Gondan and Minakata’s (2015) tutorial on methods for testing the race model inequality, this theoretical note attempts to clarify further (a) the types of models that obey and violate the inequality and (b) the conclusions that can be drawn when the inequality is violated. In particular, the idea that individual racers proceed at the same speed in the single and redundant conditions (also known as “context independence”) is shown to be better understood as an inherent part of Raab’s (1962) original race model than as a separate, additional assumption. Thus, evidence that individual racers proceeded at different speeds in the single and redundant conditions, if available, should be viewed as supporting one type of coactivation model rather than an alternative model. In addition, it is shown that a class of race-like models without the assumption of context independence is so broad that it can never be falsified.

Journal ArticleDOI
TL;DR: It is demonstrated that previously reward-associated sounds also capture attention, interfering more strongly with the performance of a visual task, suggesting that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value -driven attention can bias cross-modal stimulus competition.
Abstract: It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering more strongly with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.

Journal ArticleDOI
TL;DR: The results suggest that a forward model creates predictions for multiple modalities, and consequently contributes to multisensory interactions in the context of action.
Abstract: Predicting the sensory consequences of our own actions contributes to efficient sensory processing and might help distinguish the consequences of self- versus externally generated actions. Previous research using unimodal stimuli has provided evidence for the existence of a forward model, which explains how such sensory predictions are generated and used to guide behavior. However, whether and how we predict multisensory action outcomes remains largely unknown. Here, we investigated this question in two behavioral experiments. In Experiment 1, we presented unimodal (visual or auditory) and bimodal (visual and auditory) sensory feedback with various delays after a self-initiated buttonpress. Participants had to report whether they detected a delay between their buttonpress and the stimulus in the predefined task modality. In Experiment 2, the sensory feedback and task were the same as in Experiment 1, but in half of the trials the action was externally generated. We observed enhanced delay detection for bimodal relative to unimodal trials, with better performance in general for actively generated actions. Furthermore, in the active condition, the bimodal advantage was largest when the stimulus in the task-irrelevant modality was not delayed (that is, when it was time-contiguous with the action), as compared to when both the task-relevant and task-irrelevant modalities were delayed. This specific enhancement for trials with a nondelayed task-irrelevant modality was absent in the passive condition. These results suggest that a forward model creates predictions for multiple modalities, and consequently contributes to multisensory interactions in the context of action.

Journal ArticleDOI
TL;DR: The results suggest that visual speech can compensate for convergence that is reduced by auditory noise masking, and establish the visibility of articulatory mouth movements as important to the visual enhancement of phonetic convergence.
Abstract: Talkers automatically imitate aspects of perceived speech, a phenomenon known as phonetic convergence. Talkers have previously been found to converge to auditory and visual speech information. Furthermore, talkers converge more to the speech of a conversational partner who is seen and heard, relative to one who is just heard (Dias & Rosenblum, Perception, 40, 1457-1466, 2011). A question raised by this finding is what visual information facilitates the enhancement effect. In the following experiments, we investigated the possible contributions of visible speech articulation to visual enhancement of phonetic convergence within the noninteractive context of a shadowing task. In Experiment 1, we examined the influence of the visibility of a talker on phonetic convergence when shadowing auditory speech either in the clear or in low-level auditory noise. The results suggest that visual speech can compensate for convergence that is reduced by auditory noise masking. Experiment 2 further established the visibility of articulatory mouth movements as being important to the visual enhancement of phonetic convergence. Furthermore, the word frequency and phonological neighborhood density characteristics of the words shadowed were found to significantly predict phonetic convergence in both experiments. Consistent with previous findings (e.g., Goldinger, Psychological Review, 105, 251-279, 1998), phonetic convergence was greater when shadowing low-frequency words. Convergence was also found to be greater for low-density words, contrasting with previous predictions of the effect of phonological neighborhood density on auditory phonetic convergence (e.g., Pardo, Jordan, Mallari, Scanlon, & Lewandowski, Journal of Memory and Language, 69, 183-195, 2013). Implications of the results for a gestural account of phonetic convergence are discussed.

Journal ArticleDOI
TL;DR: A method whereby MDS can be repurposed to successfully quantify the similarity of experimental stimuli is proposed, thereby opening up theoretical questions in visual search and attention that cannot currently be addressed.
Abstract: Visual search is one of the most widely studied topics in vision science, both as an independent topic of interest, and as a tool for studying attention and visual cognition. A wide literature exists that seeks to understand how people find things under varying conditions of difficulty and complexity, and in situations ranging from the mundane (e.g., looking for one's keys) to those with significant societal importance (e.g., baggage or medical screening). A primary determinant of the ease and probability of success during search is the set of similarity relationships that exist in the search environment, such as the similarity between the background and the target, or the likeness of the non-targets to one another. A sense of similarity is often intuitive, but it is seldom quantified directly. This presents a problem in that similarity relationships are imprecisely specified, limiting the capacity of the researcher to examine adequately their influence. In this article, we present a novel approach to overcoming this problem that combines multi-dimensional scaling (MDS) analyses with behavioral and eye-tracking measurements. We propose a method whereby MDS can be repurposed to successfully quantify the similarity of experimental stimuli, thereby opening up theoretical questions in visual search and attention that cannot currently be addressed. These quantifications, in conjunction with behavioral and oculomotor measures, allow for critical observations about how similarity affects performance, information selection, and information processing. We provide a demonstration and tutorial of the approach, identify documented examples of its use, discuss how complementary computer vision methods could also be adopted, and close with a discussion of potential avenues for future application of this technique.
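As a minimal illustration of the quantification step, the Python sketch below applies scikit-learn's MDS to a small precomputed dissimilarity matrix; the resulting inter-point distances serve as the quantified similarity structure. The matrix values are invented, and the article's full approach (integration with behavioral and eye-tracking measures) is not shown.

```python
import numpy as np
from sklearn.manifold import MDS

# Invented pairwise dissimilarity ratings for four stimuli
# (symmetric, zero diagonal), e.g., averaged across observers.
d = np.array([[0.0, 0.2, 0.7, 0.9],
              [0.2, 0.0, 0.6, 0.8],
              [0.7, 0.6, 0.0, 0.3],
              [0.9, 0.8, 0.3, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)  # 2-D stimulus coordinates whose distances
print(coords)                  # approximate the rated dissimilarities
```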

Journal ArticleDOI
TL;DR: Across three experiments, only precursors heard as intelligible speech generated a speech-rate effect, suggesting that rate-dependent speech processing can be domain specific.
Abstract: The perception of reduced syllables, including function words, produced in casual speech can be made to disappear by slowing the rate at which surrounding words are spoken (Dilley & Pitt, Psychological Science, 21(11), 1664-1670, 2010, doi:10.1177/0956797610384743). The current study explored the domain generality of this speech-rate effect, asking whether it is induced by temporal information found only in speech. Stimuli were short word sequences (e.g., minor or child) appended to precursors that were clear speech, degraded speech (low-pass filtered or sinewave), or tone sequences, presented at a spoken rate and a slowed rate. Across three experiments, only precursors heard as intelligible speech generated a speech-rate effect (fewer reports of function words with a slowed context), suggesting that rate-dependent speech processing can be domain specific.

Journal ArticleDOI
TL;DR: Converging evidence suggests that visual attention is required for alternations in the type of bistable perception called binocular rivalry, while alternations during other types of bistable perception appear to continue without requiring attention.
Abstract: How does attention interact with incoming sensory information to determine what we perceive? One domain in which this question has received serious consideration is that of bistable perception: a captivating class of phenomena that involves fluctuating visual experience in the face of physically unchanging sensory input. Here, some investigations have yielded support for the idea that attention alone determines what is seen, while others have implicated entirely attention-independent processes in driving alternations during bistable perception. We review the body of literature addressing this divide and conclude that in fact both sides are correct, depending on the form of bistable perception being considered. Converging evidence suggests that visual attention is required for alternations in the type of bistable perception called binocular rivalry, while alternations during other types of bistable perception appear to continue without requiring attention. We discuss some implications of this differential effect of attention for our understanding of the mechanisms underlying bistable perception, and examine how these mechanisms operate during our everyday visual experiences.

Journal ArticleDOI
TL;DR: The analysis of saccadic reaction times in relation to the motor response showed that peripheral vision is naturally used to detect motion and form changes in MOT, and for changes of comparable task difficulties, motion changes are detected better by peripheral vision than are form changes.
Abstract: In the present study, we investigated whether peripheral vision can be used to monitor multiple moving objects and to detect single-target changes. For this purpose, Experiment 1 first validated a modified multiple object tracking (MOT) setup with a large projection screen and a constant-position centroid phase. Classical findings regarding the use of a virtual centroid to track multiple objects and the dependency of tracking accuracy on target speed were successfully replicated. Experiment 2 then introduced the main experimental manipulations of the to-be-detected target changes. In addition to a button press used for the detection task, gaze behavior was assessed using an integrated eyetracking system. The analysis of saccadic reaction times in relation to the motor response showed that peripheral vision is naturally used to detect motion and form changes in MOT, because saccades to the target often occurred after target-change offset. Furthermore, for changes of comparable task difficulties, motion changes are detected better by peripheral vision than are form changes. These findings indicate that the capabilities of the visual system (e.g., visual acuity) affect change detection rates and that covert-attention processes may be affected by vision-related aspects such as spatial uncertainty. Moreover, we argue that a centroid-MOT strategy might reduce saccade-related costs and that eyetracking seems to be generally valuable to test the predictions derived from theories of MOT. Finally, we propose implications for testing covert attention in applied settings.
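The centroid strategy mentioned above has a simple computational core, sketched below in Python with invented coordinates: gaze is directed at the mean of the target positions, so each target is monitored at some eccentricity by peripheral vision.

```python
import numpy as np

def centroid_gaze(target_xy):
    """Virtual-centroid strategy: fixate the mean of the target positions,
    leaving the individual targets to be monitored by peripheral vision."""
    return target_xy.mean(axis=0)

# One frame of four tracked targets (invented screen coordinates, px).
targets = np.array([[120.0, 300.0], [640.0, 180.0],
                    [900.0, 520.0], [400.0, 700.0]])
gaze = centroid_gaze(targets)
eccentricity = np.linalg.norm(targets - gaze, axis=1)
print(gaze, eccentricity.round(1))  # each target's distance from fixation
```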

Journal ArticleDOI
TL;DR: Evidence is reported showing that action-effect binding also occurs in the spatial domain: spatial binding emerged between the perceived stylus position and the perceived stimulus position when the stimulus was under full control of the hand movement, compared to control conditions without direct control.
Abstract: The temporal interval between an action and its ensuing effect is perceptually compressed. Specifically, the perceived onset of actions is shifted towards their effects in time and, vice versa, the perceived onset of effects is shifted towards their causing actions. In four experiments, we report evidence showing that action-effect binding also occurs in the spatial domain. Participants controlled the location of a visual stimulus by performing stylus movements before they judged either the position of the stylus or the position of the visual stimulus. The results yielded spatial binding between the perceived stylus position and the perceived stimulus position when the stimulus was under full control of the hand movement compared to control conditions without direct control.

Journal ArticleDOI
TL;DR: The results suggest that distractor priming only improves visual search if volitional control is relatively high, and that intertrial priming for distractors is due to decreased attentional capture by repeatedly presented distractors, whereas target processing remains unaffected.
Abstract: Targets are found more easily in a visual search task when their feature is repeatedly presented, an effect known as intertrial priming. Recent findings suggest that priming of distractors can also improve search performance by facilitated suppression of repeated distractor features. The efficacy of intertrial priming for targets can be potentiated by the expectancy of a specific target feature; systematic repetition shows larger intertrial priming than random repetition. For distractors, the underlying mechanism is less clear. We used the systematic lateralization approach to disentangle target- and distractor-related processing with subcomponents of the N2pc. We found no modulation of the NT component, which reflects prioritization of target processing. The ND component, which reflects attentional capture by irrelevant stimuli, however, showed intertrial priming: ND monotonically decreased with repetition of a distractor color, but only if a specific distractor feature was expected, and if the context induced a search that was vulnerable to attentional capture. These observations suggest that distractor priming only improves visual search if volitional control is relatively high. The results also suggest that intertrial priming for distractors is due to decreased attentional capture by repeatedly presented distractors, whereas target processing remains unaffected.
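The lateralization logic behind the N2pc and its subcomponents rests on contralateral-minus-ipsilateral difference waveforms at posterior electrode pairs such as PO7/PO8. The Python sketch below shows that basic computation on invented arrays; the NT/ND decomposition used in the study involves further steps not shown here.

```python
import numpy as np

def contra_minus_ipsi(erp_po7, erp_po8, target_side):
    """Contralateral-minus-ipsilateral difference wave at PO7/PO8,
    the standard basis for the N2pc and its subcomponents.
    erp_po7, erp_po8: arrays of shape (n_trials, n_samples)."""
    if target_side == "left":        # PO8 (right hemisphere) is contralateral
        contra, ipsi = erp_po8, erp_po7
    else:                            # right-side stimulus: PO7 is contralateral
        contra, ipsi = erp_po7, erp_po8
    return (contra - ipsi).mean(axis=0)

# Invented single-subject data: 100 trials x 300 samples per electrode.
rng = np.random.default_rng(3)
wave = contra_minus_ipsi(rng.normal(size=(100, 300)),
                         rng.normal(size=(100, 300)), "left")
print(wave.shape)                    # (300,) difference waveform
```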

Journal ArticleDOI
TL;DR: This article offers a framework of attentional mechanisms that will aid in guiding future research on this topic and concludes that even though the existing evidence mostly favors the account of serial central and parallel noncentral attention, there is no experiment that has conclusively borne out these claims.
Abstract: In this brief review, we argue that attention operates along a hierarchy from peripheral through central mechanisms. We further argue that these mechanisms are distinguished not just by their functional roles in cognition, but also by a distinction between serial mechanisms (associated with central attention) and parallel mechanisms (associated with midlevel and peripheral attention). In particular, we suggest that peripheral attentional deployments in distinct representational systems may be maintained simultaneously with little or no interference, but that the serial nature of central attention means that even tasks that largely rely on distinct representational systems will come into conflict when central attention is demanded. We go on to review both the behavioral and neural evidence for this prediction. We conclude that even though the existing evidence mostly favors our account of serial central and parallel noncentral attention, we know of no experiment that has conclusively borne out these claims. As such, this article offers a framework of attentional mechanisms that will aid in guiding future research on this topic.

Journal ArticleDOI
TL;DR: It is demonstrated that as long as an object is attended, its semantic properties bias attention, even if it is irrelevant to an ongoing task and if more predictive factors are available.
Abstract: Every object is represented by semantic information in extension to its low-level properties. It is well documented that such information biases attention when it is necessary for an ongoing task. However, whether semantic relationships influence attentional selection when they are irrelevant to the ongoing task remains an open question. The ubiquitous nature of semantic information suggests that it could bias attention even when these properties are irrelevant. In the present study, three objects appeared on screen, two of which were semantically related. After a varying time interval, a target or distractor appeared on top of each object. The objects’ semantic relationships never predicted the target location. Despite this, a semantic bias on attentional allocation was observed, with an initial, transient bias to semantically related objects. Further experiments demonstrated that this effect was contingent on the objects being attended: if an object never contained the target, it no longer exerted a semantic influence. In a final set of experiments, we demonstrated that the semantic bias is robust and appears even in the presence of more predictive cues (spatial probability). These results suggest that as long as an object is attended, its semantic properties bias attention, even if it is irrelevant to an ongoing task and if more predictive factors are available.

Journal ArticleDOI
TL;DR: In two studies, it is found that smaller faces take longer to categorize than those that are larger, and this pattern interacts with local background clutter.
Abstract: The perception of facial expressions and objects at a distance are entrenched psychological research venues, but their intersection is not. We were motivated to study them together because of their joint importance in the physical composition of popular movies—shots that show a larger image of a face typically have shorter durations than those in which the face is smaller. For static images, we explore the time it takes viewers to categorize the valence of different facial expressions as a function of their visual size. In two studies, we find that smaller faces take longer to categorize than those that are larger, and this pattern interacts with local background clutter. More clutter creates crowding and impedes the interpretation of expressions for more distant faces but not proximal ones. Filmmakers at least tacitly know this. In two other studies, we show that contemporary movies lengthen shots that show smaller faces, and even more so with increased clutter.

Journal ArticleDOI
TL;DR: A series of experiments showing evidence of both auditory and visual dominance effects is reported, and mechanisms underlying sensory dominance and factors that may modulate sensory dominance are discussed.
Abstract: Approximately 40 years of research on modality dominance has shown that humans are inclined to focus on visual information when presented with compounded visual and auditory stimuli. The current paper reports a series of experiments showing evidence of both auditory and visual dominance effects. Using a behavioral oddball task, we found auditory dominance when examining response times to auditory and visual oddballs: simultaneously presenting pictures and sounds slowed down responses to visual but not auditory oddballs. However, when requiring participants to make separate responses for auditory, visual, and bimodal oddballs, auditory dominance was eliminated with a reversal to visual dominance (Experiment 2). Experiment 3 replicated auditory dominance and showed that increased task demands and asking participants to analyze cross-modal stimuli conjunctively (as opposed to disjunctively) cannot account for the reversal to visual dominance. Mechanisms underlying sensory dominance and factors that may modulate sensory dominance are discussed.

Journal ArticleDOI
TL;DR: The results showed that those very infrequent distractors that signaled reward captured attention, whereas the distractors (both frequent and infrequent ones) not associated with reward were simply ignored, which indicates that even when attention is directed to a location in space, stimuli associated with reward break through the focus of attention, but equally salient stimuli not associated with reward do not.
Abstract: In the present study, we investigated the conditions in which rewarded distractors have the ability to capture attention, even when attention is directed toward the target location. Experiment 1 showed that when the probability of obtaining reward was high, all salient distractors captured attention, even when they were not associated with reward. This effect may have been caused by participants suboptimally using the 100%-valid endogenous location cue. Experiment 2 confirmed this result by showing that salient distractors did not capture attention in a block in which no reward was expected. In Experiment 3, the probability of the presence of a distractor was high, but it only signaled reward availability on a low number of trials. The results showed that those very infrequent distractors that signaled reward captured attention, whereas the distractors (both frequent and infrequent ones) not associated with reward were simply ignored. The latter experiment indicates that even when attention is directed to a location in space, stimuli associated with reward break through the focus of attention, but equally salient stimuli not associated with reward do not.

Journal ArticleDOI
TL;DR: It is demonstrated that body-related action effects affect action control much as environment-related effects do, and therefore support the theoretical assumption of the functional equivalence of all types of action effects.
Abstract: Empirical investigations of ideomotor effect anticipations have mainly focused on action effects in the environment. By contrast, action effects that apply to the agent's body have rarely been put to the test in corresponding experimental paradigms. We present a series of experiments using the response-effect compatibility paradigm, in which we studied the impacts of to-be-produced tactile action effects on action selection, initiation, and execution. The results showed a robust and reliable impact if these tactile action effects were rendered task-relevant (Exp. 1), but not when they were task-irrelevant (Exps. 2a and 2b). We further showed that anticipations of tactile action effects follow the same time course as anticipations of environment-related effects (Exps. 3 and 4). These findings demonstrate that body-related action effects affect action control much as environment-related effects do, and therefore support the theoretical assumption of the functional equivalence of all types of action effects.