
Showing papers in "Attention Perception & Psychophysics in 2017"


Journal ArticleDOI
TL;DR: A brief psychophysical test for determining whether online experiment participants are wearing headphones mitigates concerns over sound quality for online experiments and is a first step toward opening auditory perceptual research to the possibilities afforded by crowdsourcing.
Abstract: Psychophysical experiments conducted remotely over the internet permit data collection from large numbers of participants but sacrifice control over sound presentation and therefore are not widely employed in hearing research. To help standardize online sound presentation, we introduce a brief psychophysical test for determining whether online experiment participants are wearing headphones. Listeners judge which of three pure tones is quietest, with one of the tones presented 180° out of phase across the stereo channels. This task is intended to be easy over headphones but difficult over loudspeakers due to phase cancellation. We validated the test in the lab by testing listeners known to be wearing headphones or listening over loudspeakers. The screening test was effective and efficient, discriminating between the two modes of listening with a small number of trials. When run online, a bimodal distribution of scores was obtained, suggesting that some participants performed the task over loudspeakers despite instructions to use headphones. The ability to detect and screen out these participants mitigates concerns over sound quality for online experiments, a first step toward opening auditory perceptual research to the possibilities afforded by crowdsourcing.
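For illustration, a minimal sketch of the antiphase stimulus is given below in Python/NumPy. The tone frequency, duration, and sample rate are placeholder values, and the level manipulation that defines the correct "quietest" answer is omitted; only the phase manipulation described in the abstract is shown.

```python
# Sketch of the key stimulus manipulation: three stereo pure tones,
# one with its channels 180 degrees out of phase. Over loudspeakers the
# antiphase channels cancel acoustically in the air, so that tone sounds
# quiet; over headphones each ear receives a full-level signal and no
# cancellation occurs. Parameter values are illustrative.
import numpy as np

def make_trial(fs=44100, dur=1.0, freq=200.0, antiphase_idx=1):
    t = np.arange(int(fs * dur)) / fs
    tone = np.sin(2 * np.pi * freq * t)
    trial = []
    for i in range(3):
        left = tone
        right = -tone if i == antiphase_idx else tone  # sign flip = 180° phase
        trial.append(np.column_stack([left, right]))   # (samples, 2) stereo
    return trial  # the listener judges which of the three is quietest
```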

190 citations


Journal ArticleDOI
TL;DR: It was found that under conditions that promote active suppression of the irrelevant singletons, overt attention was less likely to be directed toward the salient distractors than toward nonsalient distractors.
Abstract: For more than 2 decades, researchers have debated the nature of cognitive control in the guidance of visual attention. Stimulus-driven theories claim that salient stimuli automatically capture attention, whereas goal-driven theories propose that an individual's attentional control settings determine whether salient stimuli capture attention. In the current study, we tested a hybrid account called the signal suppression hypothesis, which claims that all stimuli automatically generate a salience signal but that this signal can be actively suppressed by top-down attentional mechanisms. Previous behavioral and electrophysiological research has shown that participants can suppress covert shifts of attention to salient-but-irrelevant color singletons. In this study, we used eye-tracking methods to determine whether participants can also suppress overt shifts of attention to irrelevant singletons. We found that under conditions that promote active suppression of the irrelevant singletons, overt attention was less likely to be directed toward the salient distractors than toward nonsalient distractors. This result provides direct evidence that people can suppress salient-but-irrelevant singletons below baseline levels.

164 citations


Journal ArticleDOI
TL;DR: This tutorial shares what the authors have learned about using tDCS to manipulate how the brain perceives, attends, remembers, and responds to information from the environment, and aims to spur discussion of the standardization of methods to enhance replicability.
Abstract: Noninvasive brain stimulation methods are becoming increasingly common tools in the kit of the cognitive scientist. In particular, transcranial direct-current stimulation (tDCS) is showing great promise as a tool to causally manipulate the brain and understand how information is processed. The popularity of this method of brain stimulation is based on the fact that it is safe, inexpensive, its effects are long lasting, and you can increase the likelihood that neurons will fire near one electrode and decrease the likelihood that neurons will fire near another. However, this method of manipulating the brain to draw causal inferences is not without complication. Because tDCS methods continue to be refined and are not yet standardized, there are reports in the literature that show some striking inconsistencies. Primary among the complications of the technique is that the tDCS method uses two or more electrodes to pass current and all of these electrodes will have effects on the tissue underneath them. In this tutorial, we will share what we have learned about using tDCS to manipulate how the brain perceives, attends, remembers, and responds to information from our environment. Our goal is to provide a starting point for new users of tDCS and spur discussion of the standardization of methods to enhance replicability.

98 citations


Journal ArticleDOI
TL;DR: It is argued that multiple object tracking provides a good means to study the broader topic of continuous and dynamic visual attention, and the review shows how this research has helped to understand visual attention in dynamic settings.
Abstract: Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179–197), multiple object tracking has attracted many researchers. A reason for this is that it is commonly argued that the attentional processes studied with the multiple object tracking paradigm match the attentional processing during real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means to study the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradicting) theories of attentive tracking have been proposed within the almost 30 years since its first report, and a large body of research has been conducted to test these theories. Given the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field of multiple object tracking with an overview of the multiple object tracking paradigm, its basic manipulations, as well as links to other paradigms investigating visual attention and working memory. Further, we review current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.

86 citations


Journal ArticleDOI
TL;DR: Findings on phonetic convergence are consolidated in a large-scale examination of the impacts of talker sex, word frequency, and model talkers on multiple measures of convergence, including a holistic AXB perceptual-similarity task and acoustic measures of duration, F0, F1, F2, and the F1 × F2 vowel space.
Abstract: This study consolidates findings on phonetic convergence in a large-scale examination of the impacts of talker sex, word frequency, and model talkers on multiple measures of convergence. A survey of nearly three dozen published reports revealed that most shadowing studies used very few model talkers and did not assess whether phonetic convergence varied across same- and mixed-sex pairings. Furthermore, some studies have reported effects of talker sex or word frequency on phonetic convergence, but others have failed to replicate these effects or have reported opposing patterns. In the present study, a set of 92 talkers (47 female) shadowed either same-sex or opposite-sex models (12 talkers, six female). Phonetic convergence was assessed in a holistic AXB perceptual-similarity task and in acoustic measures of duration, F0, F1, F2, and the F1 × F2 vowel space. Across these measures, convergence was subtle, variable, and inconsistent. There were no reliable main effects of talker sex or word frequency on any measures. However, female shadowers were more susceptible to lexical properties than were males, and model talkers elicited varying degrees of phonetic convergence. Mixed-effects regression models confirmed the complex relationships between acoustic and holistic perceptual measures of phonetic convergence. In order to draw broad conclusions about phonetic convergence, studies should employ multiple models and shadowers (both male and female), balanced multisyllabic items, and holistic measures. As a potential mechanism for sound change, phonetic convergence reflects complexities in speech perception and production that warrant elaboration of the underspecified components of current accounts.
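As one concrete example of the acoustic measures, the F1 × F2 vowel space can be summarized by the area of the polygon spanned by the per-vowel formant means; a shoelace-formula sketch is given below. Treating the vowels as an ordered polygon is an illustrative assumption, not necessarily the paper's exact computation.

```python
# Area of the F1 x F2 vowel space via the shoelace formula.
# f1, f2: per-vowel mean formant frequencies (Hz), ordered so that the
# vowels trace the perimeter of the space.
import numpy as np

def vowel_space_area(f1, f2):
    x, y = np.asarray(f2, float), np.asarray(f1, float)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
```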

79 citations


Journal ArticleDOI
TL;DR: The results, and those of Matzke et al. (2016), who report that controls also display a substantial although lower trigger-failure rate, indicate that attentional factors need to be taken into account when interpreting results from the stop-signal paradigm.
Abstract: We used Bayesian cognitive modelling to identify the underlying causes of apparent inhibitory deficits in the stop-signal paradigm. The analysis was applied to stop-signal data reported by Badcock et al. (Psychological Medicine 32: 287-297, 2002) and Hughes et al. (Biological Psychology 89: 220-231, 2012), where schizophrenia patients and control participants made rapid choice responses, but on some trials were signalled to stop their ongoing response. Previous research has assumed an inhibitory deficit in schizophrenia, because estimates of the mean time taken to react to the stop signal are longer in patients than controls. We showed that these longer estimates are partly due to failing to react to the stop signal (“trigger failures”) and partly due to a slower initiation of inhibition, implicating a failure of attention rather than a deficit in the inhibitory process itself. Correlations between the probability of trigger failures and event-related potentials reported by Hughes et al. are interpreted as supporting the attentional account of inhibitory deficits. Our results, and those of Matzke et al. (2016), who report that controls also display a substantial although lower trigger-failure rate, indicate that attentional factors need to be taken into account when interpreting results from the stop-signal paradigm.
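The key idea can be written compactly. In the race model augmented with trigger failures (following Matzke and colleagues), the probability of failing to stop at a given stop-signal delay (SSD) is a mixture of two sources, sketched below; the full Bayesian model additionally specifies parametric finishing-time distributions for the go and stop runners.

```latex
% Probability of responding on a stop trial: the stop process is either
% never triggered (probability P_{TF}), or it is triggered but loses the
% race to the go process.
P(\mathrm{respond} \mid \mathrm{SSD})
  = P_{TF} + (1 - P_{TF})\,
    \Pr\!\left( T_{\mathrm{go}} < \mathrm{SSD} + T_{\mathrm{stop}} \right)
```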

69 citations


Journal ArticleDOI
TL;DR: The results of two replication studies and a meta-analysis that included the results from all published studies on the relationship between distractor filtering and media multitasking are reported, leading the authors to question the existence of an association between media multitasking and distractibility in laboratory tasks of information processing.
Abstract: Ophir, Nass, and Wagner (2009, Proceedings of the National Academy of Sciences of the United States of America, 106(37), 15583–15587) found that people with high scores on the media-use questionnaire—a questionnaire that measures the proportion of media-usage time during which one uses more than one medium at the same time—show impaired performance on various tests of distractor filtering. Subsequent studies, however, did not all show this association between media multitasking and distractibility, thus casting doubt on the reliability of the initial findings. Here, we report the results of two replication studies and a meta-analysis that included the results from all published studies into the relationship between distractor filtering and media multitasking. Our replication studies included a total of 14 tests that had an average replication power of 0.81. Of these 14 tests, only five yielded a statistically significant effect in the direction of increased distractibility for people with higher scores on the media-use questionnaire, and only two of these effects held in a more conservative Bayesian analysis. Supplementing these outcomes, our meta-analysis on a total of 39 effect sizes yielded a weak but significant association between media multitasking and distractibility that turned nonsignificant after correction for small-study effects. Taken together, these findings lead us to question the existence of an association between media multitasking and distractibility in laboratory tasks of information processing.
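For readers unfamiliar with the aggregation step, a minimal random-effects meta-analysis in the DerSimonian-Laird style is sketched below; the authors' actual analysis and their small-study correction may differ in detail.

```python
# Minimal random-effects meta-analysis sketch (DerSimonian-Laird),
# illustrating the kind of aggregation applied to the 39 effect sizes.
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects mean of per-study effect sizes."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                       # fixed-effect weights
    mean_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - mean_fe) ** 2)  # heterogeneity statistic Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_re = 1.0 / (variances + tau2)           # random-effects weights
    mean_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mean_re, se, tau2
```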

66 citations


Journal ArticleDOI
TL;DR: It is demonstrated that working-memory representations are not independent but instead interact with each other in a manner that depends on attentional priority.
Abstract: We investigated whether the representations of different objects are maintained independently in working memory or interact with each other. Observers were shown two sequentially presented orientations and required to reproduce each orientation after a delay. The sequential presentation minimized perceptual interactions so that we could isolate interactions between memory representations per se. We found that similar orientations were repelled from each other whereas dissimilar orientations were attracted to each other. In addition, when one of the items was given greater attentional priority by means of a cue, the representation of the high-priority item was not influenced very much by the orientation of the low-priority item, but the representation of the low-priority item was strongly influenced by the orientation of the high-priority item. This indicates that attention modulates the interactions between working memory representations. In addition, errors in the reported orientations of the two objects were positively correlated under some conditions, suggesting that representations of distinct objects may become grouped together in memory. Together, these results demonstrate that working-memory representations are not independent but instead interact with each other in a manner that depends on attentional priority.

62 citations


Journal ArticleDOI
TL;DR: The authors showed that associative reward learning was completely response independent by letting participants perform a task at fixation, while high and low rewards were automatically administered following the presentation of task-irrelevant colored stimuli in the periphery or at fixation.
Abstract: Recent evidence shows that distractors that signal high compared to low reward availability elicit stronger attentional capture, even when this is detrimental to task performance. This suggests that simply correlating stimuli with reward administration, rather than their instrumental relationship with obtaining reward, produces value-driven attentional capture. However, in previous studies, reward delivery was never response independent, as only correct responses were rewarded, nor was it completely task-irrelevant, as the distractor signaled the magnitude of reward that could be earned on that trial. In two experiments, we ensured that associative reward learning was completely response independent by letting participants perform a task at fixation, while high and low rewards were automatically administered following the presentation of task-irrelevant colored stimuli in the periphery (Experiment 1) or at fixation (Experiment 2). In a subsequent unrewarded test phase, using the additional singleton paradigm, the stimuli that previously signaled reward were presented as distractors to assess truly task-irrelevant value-driven attentional capture. The results showed that high compared to low reward-value associated distractors impaired performance, and thus captured attention more strongly. This suggests that genuine Pavlovian conditioning of stimulus-reward contingencies is sufficient to obtain value-driven attentional capture. Furthermore, value-driven attentional capture can occur following associative reward learning of temporally and spatially task-irrelevant distractors that signal the magnitude of available reward (Experiment 1), and is independent of training spatial shifts of attention towards the reward-signaling stimuli (Experiment 2). This confirms and strengthens the idea that Pavlovian reward learning underlies value-driven attentional capture.

62 citations


Journal ArticleDOI
TL;DR: A stringent test of the value-dependence hypothesis using the traditional value-driven attentional capture paradigm showed that, with a sufficiently large sample size, value-dependence was observed based on both criteria, with no evidence of attentional capture without rewards during training.
Abstract: Findings from an increasingly large number of studies have been used to argue that attentional capture can be dependent on the learned value of a stimulus, or value-driven. However, under certain circumstances attention can be biased to select stimuli that previously served as targets, independent of reward history. Value-driven attentional capture, as studied using the training phase-test phase design introduced by Anderson and colleagues, is widely presumed to reflect the combined influence of learned value and selection history. However, the degree to which attentional capture is at all dependent on value learning in this paradigm has recently been questioned. Support for value-dependence can be provided through one of two means: (1) greater attentional capture by prior targets following rewarded training than following unrewarded training, and (2) greater attentional capture by prior targets previously associated with high compared to low value. Using a variant of the original value-driven attentional capture paradigm, Sha and Jiang (Attention, Perception, and Psychophysics, 78, 403–414, 2016) failed to find evidence of either, and raised criticisms regarding the adequacy of evidence provided by prior studies using this particular paradigm. To address this disparity, here we provided a stringent test of the value-dependence hypothesis using the traditional value-driven attentional capture paradigm. With a sufficiently large sample size, value-dependence was observed based on both criteria, with no evidence of attentional capture without rewards during training. Our findings support the validity of the traditional value-driven attentional capture paradigm in measuring what its name purports to measure.

60 citations


Journal ArticleDOI
TL;DR: The research described in this article shows how perceptual control theory (PCT) can provide a “ground truth” for judgments about intention and its implications for psychological research and public policy are discussed.
Abstract: There is limited evidence regarding the accuracy of inferences about intention. The research described in this article shows how perceptual control theory (PCT) can provide a “ground truth” for these judgments. In a series of 3 studies, participants were asked to identify a person’s intention in a tracking task where the person’s true intention was to control the position of a knot connecting a pair of rubber bands. Most participants failed to correctly infer the person’s intention, instead inferring complex but nonexistent goals (such as “tracing out two kangaroos boxing”) based on the actions taken to keep the knot under control. Therefore, most of our participants experienced what we call “control blindness.” The effect persisted with many participants even when their awareness was successfully directed at the knot whose position was under control. Beyond exploring the control blindness phenomenon in the context of our studies, we discuss its implications for psychological research and public policy.

Journal ArticleDOI
TL;DR: This study proposes an alternative, neurobiologically plausible account of rate-dependent perception involving neural entrainment of endogenous oscillations to the rate of a rhythmic stimulus, challenging durational contrast as the explanatory mechanism behind rate-dependent perception.
Abstract: The perception of temporal contrasts in speech is known to be influenced by the speech rate in the surrounding context. This rate-dependent perception is suggested to involve general auditory processes because it is also elicited by nonspeech contexts, such as pure tone sequences. Two general auditory mechanisms have been proposed to underlie rate-dependent perception: durational contrast and neural entrainment. This study compares the predictions of these two accounts of rate-dependent speech perception by means of four experiments, in which participants heard tone sequences followed by Dutch target words ambiguous between /ɑs/ "ash" and /a:s/ "bait". Tone sequences varied in the duration of tones (short vs. long) and in the presentation rate of the tones (fast vs. slow). Results show that the duration of preceding tones did not influence target perception in any of the experiments, thus challenging durational contrast as explanatory mechanism behind rate-dependent perception. Instead, the presentation rate consistently elicited a category boundary shift, with faster presentation rates inducing more /a:s/ responses, but only if the tone sequence was isochronous. Therefore, this study proposes an alternative, neurobiologically plausible account of rate-dependent perception involving neural entrainment of endogenous oscillations to the rate of a rhythmic stimulus.

Journal ArticleDOI
TL;DR: Across experiments, participants’ metacognitive judgments reliably predicted variation in working memory performance but consistently and severely underestimated the extent of failures, suggesting that metacognitive monitoring may be key to working memory success.
Abstract: Working memory performance fluctuates dramatically from trial to trial. On many trials, performance is no better than chance. Here, we assessed participants' awareness of working memory failures. We used a whole-report visual working memory task to quantify both trial-by-trial performance and trial-by-trial subjective ratings of inattention to the task. In Experiment 1 (N = 41), participants were probed for task-unrelated thoughts immediately following 20% of trials. In Experiment 2 (N = 30), participants gave a rating of their attentional state following 25% of trials. Finally, in Experiments 3a (N = 44) and 3b (N = 34), participants reported confidence of every response using a simple mouse-click judgment. Attention-state ratings and off-task thoughts predicted the number of items correctly identified on each trial, replicating previous findings that subjective measures of attention state predict working memory performance. However, participants correctly identified failures on only around 28% of failure trials. Across experiments, participants' metacognitive judgments reliably predicted variation in working memory performance but consistently and severely underestimated the extent of failures. Further, individual differences in metacognitive accuracy correlated with overall working memory performance, suggesting that metacognitive monitoring may be key to working memory success.

Journal ArticleDOI
TL;DR: Three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.
Abstract: We investigated the role of two kinds of attention, visual and central, for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention, visual or central, was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.

Journal ArticleDOI
TL;DR: It is concluded that nonsalient, task-irrelevant but reward-signaling stimuli can affect attentional selection above and beyond top-down or bottom-up attentional control, but only after such stimuli were initially prioritized for selection.
Abstract: Previous research has shown that attentional selection is affected by reward contingencies: previously selected and rewarded stimuli continue to capture attention even if the reward contingencies are no longer in place. In the current study, we investigated whether attentional selection is also affected by stimuli that merely signal the magnitude of reward available on a given trial but, crucially, have never had instrumental value. In a series of experiments, we show that a stimulus signaling high reward availability captures attention even when that stimulus is and was never physically salient or part of the task set, and selecting it is harmful for obtaining reward. Our results suggest that irrelevant reward-signaling stimuli capture attention because participants have learned about the relationship between the stimulus and reward. Importantly, we only observed learning after initial attentional prioritization of the reward-signaling stimulus. We conclude that nonsalient, task-irrelevant but reward-signaling stimuli can affect attentional selection above and beyond top-down or bottom-up attentional control, but only after such stimuli were initially prioritized for selection.

Journal ArticleDOI
TL;DR: This study measured participants’ susceptibility to the McGurk illusion as well as their ability to identify sentences in noise across a range of signal-to-noise ratios in audio-only and audiovisual modalities, suggesting that McGurk susceptibility may not be a valid measure of audiovisual integration in everyday speech processing.
Abstract: In noisy situations, visual information plays a critical role in the success of speech communication: listeners are better able to understand speech when they can see the speaker. Visual influence on auditory speech perception is also observed in the McGurk effect, in which discrepant visual information alters listeners' auditory perception of a spoken syllable. When hearing /ba/ while seeing a person saying /ga/, for example, listeners may report hearing /da/. Because these two phenomena have been assumed to arise from a common integration mechanism, the McGurk effect has often been used as a measure of audiovisual integration in speech perception. In this study, we test whether this assumed relationship exists within individual listeners. We measured participants' susceptibility to the McGurk illusion as well as their ability to identify sentences in noise across a range of signal-to-noise ratios in audio-only and audiovisual modalities. Our results do not show a relationship between listeners' McGurk susceptibility and their ability to use visual cues to understand spoken sentences in noise, suggesting that McGurk susceptibility may not be a valid measure of audiovisual integration in everyday speech processing.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the influence of effect delay and temporal predictability on intentional binding (IB) and found that IB increased with delay (Experiment 1: 200 ms, 250 ms, 300 ms).
Abstract: Stimuli caused by actions (i.e., effects) are perceived earlier than stimuli not caused by actions. This phenomenon is termed intentional binding (IB) and serves as an implicit measure of the sense of agency. We investigated the influence of effect delay and temporal predictability on IB, operationalized as the bias to perceive the effect as temporally shifted toward the action. For short delays, IB increased with delay (Experiment 1: 200 ms, 250 ms, 300 ms). The initial increase declined for longer delays (Experiment 2: 100 ms, 250 ms, 400 ms). This extends previous findings showing IB to decrease with increasing delays for delay ranges of 250 ms to 650 ms. Further, the hypothesis that IB, and thus the sense of agency, might be maximal at different delays depending on the specific characteristics and context of action and effect has important implications for human-machine interfaces.

Journal ArticleDOI
TL;DR: It is argued that when people view natural environments, scene and object relationships are processed obligatorily, such that irrelevant semantic mismatches between scene and object identity can modulate ongoing eye-movement behavior.
Abstract: People have an amazing ability to identify objects and scenes with only a glimpse. How automatic is this scene and object identification? Are scene and object semantics—let alone their semantic congruity—processed to a degree that modulates ongoing gaze behavior even if they are irrelevant to the task at hand? Objects that do not fit the semantics of the scene (e.g., a toothbrush in an office) are typically fixated longer and more often than objects that are congruent with the scene context. In this study, we overlaid a letter T onto photographs of indoor scenes and instructed participants to search for it. Some of these background images contained scene-incongruent objects. Despite their lack of relevance to the search, we found that participants spent more time in total looking at semantically incongruent compared to congruent objects in the same position of the scene. Subsequent tests of explicit and implicit memory showed that participants did not remember many of the incongruent objects, and remembered no more of the congruent ones. We argue that when we view natural environments, scene and object relationships are processed obligatorily, such that irrelevant semantic mismatches between scene and object identity can modulate ongoing eye-movement behavior.

Journal ArticleDOI
TL;DR: It is suggested that media multitasking may be related to the propensity to disengage from ongoing tasks such as the n-back; higher scores on the MMI were associated with an increase in false alarms, but not with a change in hits.
Abstract: A number of studies have recently examined the link between individual differences in media multitasking (using the MMI) and performance on working memory paradigms. However, these studies have yielded mixed results. Here we examine the relation between media multitasking and one particular working memory paradigm, the n-back (2- and 3-back), improving upon previous research by (a) treating media multitasking as a continuous variable and adopting a correlational approach as well as (b) using a large sample of participants. First, we found that higher scores on the MMI were associated with a greater proportion of omitted trials on both the 2-back and 3-back, indicating that heavier media multitaskers were more disengaged during the n-back. In line with such a claim, heavier media multitaskers were also more likely to confess to responding randomly during various portions of the experiment, and to report media multitasking during the experiment itself. Importantly, when controlling for the relation between MMI scores and omissions, higher scores on the MMI were associated with an increase in false alarms, but not with a change in hits. These findings refine the extant literature on media multitasking and working memory performance (specifically, performance on the n-back), and suggest that media multitasking may be related to the propensity to disengage from ongoing tasks.

Journal ArticleDOI
TL;DR: Evidence that short-lived binding effects can be distinguished from learning of longer lasting stimulus–response associations is presented and it is concluded that distinct underlying processes should be assumed for binding and incidental learning effects.
Abstract: A single encounter of a stimulus together with a response can result in a short-lived association between the stimulus and the response [sometimes called an event file; see Hommel, Müsseler, Aschersleben, & Prinz (2001), Behavioral and Brain Sciences, 24, 910-926]. The repetition of stimulus-response pairings typically results in longer lasting learning effects indicating stimulus-response associations [e.g., Logan & Etherton (1994), Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 1022-1050]. An important question is whether or not what has been described as stimulus-response binding in action control research is actually identical to an early stage of incidental learning (e.g., binding might be seen as single-trial learning). Here, we present evidence that short-lived binding effects can be distinguished from learning of longer lasting stimulus-response associations. In two experiments, participants always responded to centrally presented target letters that were flanked by response-irrelevant distractor letters. Experiment 1 varied whether distractors flanked targets on the horizontal or vertical axis. Binding effects were larger for a horizontal than for a vertical distractor-target configuration, while stimulus configuration did not influence incidental learning of longer lasting stimulus-response associations. In Experiment 2, the duration of the interval between response n - 1 and presentation of display n (500 ms vs. 2000 ms) had opposing influences on binding and learning effects. Both experiments indicate that modulating factors influence stimulus-response binding and incidental learning effects in different ways. We conclude that distinct underlying processes should be assumed for binding and incidental learning effects.

Journal ArticleDOI
TL;DR: It is demonstrated that visual information can affect the interpretation, but not the perception, of accented speech: no selective adaptation occurred when the adapting sounds relied on visual information to influence the accentedness of an ambiguous auditory adaptor.
Abstract: Prior studies have reported that seeing an Asian face makes American English sound more accented. The current study investigates whether this effect is perceptual, or if it instead occurs at a later decision stage. We first replicated the finding that showing static Asian and Caucasian faces can shift people’s reports about the accentedness of speech accompanying the pictures. When we changed the static pictures to dubbed videos, reducing the demand characteristics, the shift in reported accentedness largely disappeared. By including unambiguous items along with the original ambiguous items, we introduced a contrast bias and actually reversed the shift, with the Asian-face videos yielding lower judgments of accentedness than the Caucasian-face videos. By changing to a mixed rather than blocked design, so that the ethnicity of the videos varied from trial to trial, we eliminated the difference in accentedness rating. Finally, we tested participants’ perception of accented speech using the selective adaptation paradigm. After establishing that an auditory-only accented adaptor shifted the perception of how accented test words are, we found that no such adaptation effect occurred when the adapting sounds relied on visual information (Asian vs. Caucasian videos) to influence the accentedness of an ambiguous auditory adaptor. Collectively, the results demonstrate that visual information can affect the interpretation, but not the perception, of accented speech.

Journal ArticleDOI
TL;DR: This work investigates whether apparent versus objective instructions modulate findings of distorted body representations underlying position sense, tactile distance perception, as well as the conscious body image and shows that the distortions measured with these paradigms are robust to differences in instructions and do not reflect a dissociation between perception and belief.
Abstract: Several recent reports have shown that even healthy adults maintain highly distorted representations of the size and shape of their body. These distortions have been shown to be highly consistent across different study designs and dependent measures. However, previous studies have found that visual judgments of size can be modulated by the experimental instructions used, for example, by asking for judgments of the participant’s subjective experience of stimulus size (i.e., apparent instructions) versus judgments of actual stimulus properties (i.e., objective instructions). Previous studies investigating internal body representations have relied exclusively on ‘apparent’ instructions. Here, we investigated whether apparent versus objective instructions modulate findings of distorted body representations underlying position sense (Exp. 1), tactile distance perception (Exp. 2), as well as the conscious body image (Exp. 3). Our results replicate the characteristic distortions previously reported for each of these tasks and further show that these distortions are not affected by instruction type (i.e., apparent vs. objective). These results show that the distortions measured with these paradigms are robust to differences in instructions and do not reflect a dissociation between perception and belief.

Journal ArticleDOI
TL;DR: The results indicated that object size judgments do benefit from interaction with the VE, and that this benefit extends to distances beyond the explored space.
Abstract: Distances tend to be underperceived in virtual environments (VEs) by up to 50%, whereas distances tend to be perceived accurately in the real world. Previous work has shown that allowing participants to interact with the VE while receiving continual visual feedback can reduce this underperception. Judgments of virtual object size have been used to measure whether this improvement is due to the rescaling of perceived space, but there is disagreement within the literature as to whether judgments of object size benefit from interaction with feedback. This study contributes to that discussion by employing a more natural measure of object size. We also examined whether any improvement in virtual distance perception was limited to the space used for interaction (1–5 m) or extended beyond (7–11 m). The results indicated that object size judgments do benefit from interaction with the VE, and that this benefit extends to distances beyond the explored space.

Journal ArticleDOI
TL;DR: This study is the first of its kind to investigate whether value judgements are influenced by attentional processes when assimilating information; it shows that valuations can be influenced by attentional processes, leading to less accurate subjective judgements.
Abstract: People often have to make decisions based on many pieces of information. Previous work has found that people are able to integrate values presented in a rapid serial visual presentation (RSVP) stream to make informed judgements on the overall stream value (Tsetsos et al. Proceedings of the National Academy of Sciences of the United States of America, 109(24), 9659–9664, 2012). It is also well known that attentional mechanisms influence how people process information. However, it is unknown how attentional factors impact value judgements of integrated material. The current study is the first of its kind to investigate whether value judgements are influenced by attentional processes when assimilating information. Experiments 1–3 examined whether the attentional salience of an item within an RSVP stream affected judgements of overall stream value. The results showed that the presence of an irrelevant high or low value salient item biased people to judge the stream as having a higher or lower overall mean value, respectively. Experiments 4–7 directly tested Tsetsos et al.’s (Proceedings of the National Academy of Sciences of the United States of America, 109(24), 9659–9664, 2012) theory examining whether extreme values in an RSVP stream become over-weighted, thereby capturing attention more than other values in the stream. The results showed that the presence of both a high (Experiments 4, 6 and 7) and a low (Experiment 5) value outlier captures attention leading to less accurate report of subsequent items in the stream. Taken together, the results showed that valuations can be influenced by attentional processes, and can lead to less accurate subjective judgements.

Journal ArticleDOI
Zaifeng Gao, Fan Wu, Fangfang Qiu, Kaifeng He, Yue Yang, Mowei Shen
TL;DR: Experiments 1–6 consistently revealed a significantly larger impairment for bindings than for the constituent features, suggesting that object-based attention plays a pivotal role in retaining bindings in WM.
Abstract: Over the past decade, it has been debated whether retaining bindings in working memory (WM) requires more attention than retaining constituent features, focusing on domain-general attention and space-based attention. Recently, we proposed that retaining bindings in WM needs more object-based attention than retaining constituent features (Shen, Huang, & Gao, 2015, Journal of Experimental Psychology: Human Perception and Performance, doi: 10.1037/xhp0000018). However, only unitized visual bindings were examined; to establish the role of object-based attention in retaining bindings in WM, more empirical evidence is required. We tested 4 new bindings that had been suggested to require no more attention than their constituent features in the WM maintenance phase: the two constituent features of the binding were stored in different WM modules (cross-module binding, Experiment 1), came from auditory and visual modalities (cross-modal binding, Experiment 2), or were separated temporally (cross-time binding, Experiment 3) or spatially (cross-space binding, Experiments 4-6). In the critical condition, we added a secondary object feature-report task during the delay interval of the change-detection task, such that the secondary task competed for object-based attention with the to-be-memorized stimuli. If more object-based attention is required for retaining bindings than for retaining constituent features, the secondary task should impair binding performance to a larger degree relative to the performance for constituent features. Indeed, Experiments 1-6 consistently revealed a significantly larger impairment for bindings than for the constituent features, suggesting that object-based attention plays a pivotal role in retaining bindings in WM.

Journal ArticleDOI
TL;DR: It is shown that efficient object detection in natural scenes is independently facilitated by spatial and category-based attention, and that neither type of attention influences metacognitive ability for object detection performance.
Abstract: Humans are remarkably efficient in detecting highly familiar object categories in natural scenes, with evidence suggesting that such object detection can be performed in the (near) absence of attention. Here we systematically explored the influences of both spatial attention and category-based attention on the accuracy of object detection in natural scenes. Manipulating both types of attention additionally allowed for addressing how these factors interact: whether the requirement for spatial attention depends on the extent to which observers are prepared to detect a specific object category-that is, on category-based attention. The results showed that the detection of targets from one category (animals or vehicles) was better than the detection of targets from two categories (animals and vehicles), demonstrating the beneficial effect of category-based attention. This effect did not depend on the semantic congruency of the target object and the background scene, indicating that observers attended to visual features diagnostic of the foreground target objects from the cued category. Importantly, in three experiments the detection of objects in scenes presented in the periphery was significantly impaired when observers simultaneously performed an attentionally demanding task at fixation, showing that spatial attention affects natural scene perception. In all experiments, the effects of category-based attention and spatial attention on object detection performance were additive rather than interactive. Finally, neither spatial nor category-based attention influenced metacognitive ability for object detection performance. These findings demonstrate that efficient object detection in natural scenes is independently facilitated by spatial and category-based attention.

Journal ArticleDOI
TL;DR: The novel hypothesis is proposed that, rather than the DRT’s sensitivity to cognitive load being a direct result of a loss of information processing capacity to other tasks, it is an indirect result of a general tendency to be more cautious when making responses in more demanding situations.
Abstract: Cognitive load from secondary tasks is a source of distraction causing injuries and fatalities on the roadway. The Detection Response Task (DRT) is an international standard for assessing cognitive load on drivers' attention that can be performed as a secondary task with little to no measurable effect on the primary driving task. We investigated whether decrements in DRT performance were related to the rate of information processing, levels of response caution, or the non-decision processing of drivers. We had pairs of participants take part in the DRT while performing a simulated driving task, manipulated cognitive load via the conversation between driver and passenger, and observed associated slowing in DRT response time. Fits of the single-bound diffusion model indicated that slowing was mediated by an increase in response caution. We propose the novel hypothesis that, rather than the DRT's sensitivity to cognitive load being a direct result of a loss of information processing capacity to other tasks, it is an indirect result of a general tendency to be more cautious when making responses in more demanding situations.
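The single-bound diffusion model used here has a closed-form response-time distribution: the shifted Wald (inverse Gaussian). A standard parameterization is sketched below, with response caution corresponding to the boundary α; the authors' exact specification may differ.

```latex
% Shifted-Wald density for a DRT response at time t, with boundary
% \alpha (response caution), drift rate \nu (information processing),
% and non-decision time \theta (encoding and motor processes).
f(t \mid \alpha, \nu, \theta)
  = \frac{\alpha}{\sqrt{2\pi (t - \theta)^{3}}}
    \exp\!\left( - \frac{\left[ \alpha - \nu (t - \theta) \right]^{2}}
                        {2 (t - \theta)} \right),
  \qquad t > \theta
```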

Journal ArticleDOI
TL;DR: The results show that familiarity enhances the automatic processing of some types of facial information more than others.
Abstract: In this study, we explore the automaticity of encoding for different facial characteristics and ask whether it is influenced by face familiarity. We used a matching task in which participants had to report whether the gender, identity, race, or expression of two briefly presented faces was the same or different. The task was made challenging by allowing nonrelevant dimensions to vary across trials. To test for automaticity, we compared performance on trials in which the task instruction was given at the beginning of the trial, with trials in which the task instruction was given at the end of the trial. As a strong criterion for automatic processing, we reasoned that if perception of a given characteristic (gender, race, identity, or emotion) is fully automatic, the timing of the instruction should not influence performance. We compared automaticity for the perception of familiar and unfamiliar faces. Performance with unfamiliar faces was higher for all tasks when the instruction was given at the beginning of the trial. However, we found a significant interaction between instruction and task with familiar faces. Accuracy of gender and identity judgments to familiar faces was the same regardless of whether the instruction was given before or after the trial, suggesting automatic processing of these properties. In contrast, there was an effect of instruction for judgments of expression and race to familiar faces. These results show that familiarity enhances the automatic processing of some types of facial information more than others.

Journal ArticleDOI
TL;DR: It is proposed that both image-space properties affect human decisions when recognising images; colour presentation was found not to yield better memory performance than greyscale images.
Abstract: Previous studies have demonstrated that humans have a remarkable capacity to memorise a large number of scenes. The research on memorability has shown that memory performance can be predicted by the content of an image. We explored how remembering an image is affected by the image properties within the context of the reference set, including the extent to which it is different from its neighbours (image-space sparseness) and whether it belongs to the same category as its neighbours (uniformity). We used a reference set of 2,048 scenes (64 categories), evaluated pairwise scene similarity using deep features from a pretrained convolutional neural network (CNN), and calculated the image-space sparseness and uniformity for each image. We ran three memory experiments, varying the memory workload with experiment length and colour/greyscale presentation. We measured the sensitivity and criterion value changes as a function of image-space sparseness and uniformity. Across all three experiments, we found separate effects of (1) sparseness on memory sensitivity, and (2) uniformity on the recognition criterion. People better remembered (and correctly rejected) images that were more separated from others. People tended to make more false alarms and fewer miss errors for images from categorically uniform portions of the image-space. We propose that both image-space properties affect human decisions when recognising images. Additionally, we found that colour presentation did not yield better memory performance over greyscale images.
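The two dependent measures are the standard equal-variance signal-detection quantities, computable from hit and false-alarm counts as sketched below; the log-linear rate correction is an assumption, not necessarily the authors' choice.

```python
# Signal-detection sensitivity (d') and criterion (c) from hit and
# false-alarm rates, separating the sparseness effect (on sensitivity)
# from the uniformity effect (on criterion).
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps rates away from 0 and 1.
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(h) - norm.ppf(fa)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(fa))
    return d_prime, criterion
```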

Journal ArticleDOI
TL;DR: This work tests four qualitative predictions concerning the worst performance rule and its diffusion model explanation in terms of drift rate and suggests that the WPR may be less robust and less ubiquitous than is commonly believed.
Abstract: People with higher IQ scores also tend to perform better on elementary cognitive-perceptual tasks, such as deciding quickly whether an arrow points to the left or the right (Jensen, 2006). The worst performance rule (WPR) finesses this relation by stating that the association between IQ and elementary-task performance is most pronounced when this performance is summarized by people's slowest responses. Previous research has shown that the WPR can be accounted for in the Ratcliff diffusion model by assuming that the same ability parameter, drift rate, mediates performance in both elementary tasks and higher-level cognitive tasks. Here we aim to test four qualitative predictions concerning the WPR and its diffusion model explanation in terms of drift rate. In the first stage, the diffusion model was fit to data from 916 participants completing a perceptual two-choice task; crucially, the fitting happened after randomly shuffling the key variable, i.e., each participant's score on a working memory capacity test. In the second stage, after all modeling decisions were made, the key variable was unshuffled and the adequacy of the predictions was evaluated by means of confirmatory Bayesian hypothesis tests. By temporarily withholding the mapping of the key predictor, we retain flexibility for proper modeling of the data (e.g., outlier exclusion) while preventing biases from unduly influencing the results. Our results provide evidence against the WPR and suggest that it may be less robust and less ubiquitous than is commonly believed.
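The WPR analysis itself is straightforward to sketch: compute per-participant RT quantiles and correlate each quantile with the ability measure; the rule predicts the strongest association at the slowest quantiles. Variable names and quantile choices below are illustrative, not the authors' exact pipeline.

```python
# Worst-performance-rule check: correlate an ability score with
# per-participant RT quantiles; the WPR predicts the strongest
# association for the slowest (highest) quantiles.
import numpy as np
from scipy.stats import pearsonr

def wpr_correlations(rts_per_subject, ability, qs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    # rts_per_subject: list of 1-D arrays of correct-trial RTs, one per person
    quantiles = np.array([np.quantile(rt, qs) for rt in rts_per_subject])
    return [pearsonr(quantiles[:, j], ability) for j in range(len(qs))]
```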