
Showing papers in "Journal of Experimental Psychology: Human Perception and Performance in 2020"


Journal ArticleDOI
TL;DR: It is concluded that the signal-suppression account cannot resolve the long-standing debate regarding stimulus-driven and goal-driven attentional capture, and that the relative salience of items in the display is a crucial factor in attentional control.
Abstract: Recently the signal-suppression account was proposed, positing that salient stimuli automatically produce a bottom-up salience signal that can be suppressed via top-down control processes. Evidence for this hybrid account came from a capture-probe paradigm showing that while searching for a specific shape, observers suppressed the location of the irrelevant color singleton. Here we replicate these findings but also show that this occurs only for search arrays with 4 elements. For larger array sizes, when both target and distractor singleton are salient, there is no evidence for suppression; instead, consistent with the stimulus-driven account, there is clear evidence that the salient distractor captured attention. The current study shows that the relative salience of items in the display is a crucial factor in attentional control. In displays with a few heterogeneous items, top-down suppression is possible. However, in larger displays in which both target and distractor singletons are salient, no top-down suppression is observed. We conclude that the signal-suppression account cannot resolve the long-standing debate regarding stimulus-driven and goal-driven attentional capture. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
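
As a rough illustration of the hybrid architecture at issue, the sketch below implements a toy priority map in Python: each item gets a bottom-up salience value, top-down suppression is subtracted at the singleton's location, and attention selects the peak. This is a minimal sketch of the signal-suppression idea, not the authors' model; all values are hypothetical.

    import numpy as np

    def priority_map(salience, suppression):
        # Toy signal-suppression scheme: bottom-up salience minus
        # top-down suppression; attention goes to the peak.
        return np.asarray(salience) - np.asarray(suppression)

    # Hypothetical 4-item display: item 0 is the shape target,
    # item 3 is the salient color-singleton distractor.
    salience = [1.0, 0.2, 0.2, 1.5]      # the singleton is most salient
    suppression = [0.0, 0.0, 0.0, 1.0]   # learned suppression at the singleton

    attended = int(np.argmax(priority_map(salience, suppression)))
    print(attended)  # 0: the target wins only because the singleton is suppressed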

48 citations


Journal ArticleDOI
TL;DR: Insight is provided into how the visual system prioritizes external information when attention is focused inward, and the importance of task demands in assessing the relationship between eye movements, visual processing, and mind wandering is highlighted.
Abstract: During mind wandering, visual processing of external information is attenuated. Accordingly, mind wandering is associated with changes in gaze behaviors, although findings in the literature are inconsistent. This heterogeneity obfuscates a complete view of the moment-to-moment processing priorities of the visual system during mind wandering. We hypothesize that this observed heterogeneity is an effect of idiosyncrasy across tasks with varying spatial allocation demands, visual processing demands, and discourse processing demands and reflects a strategic, compensatory shift in how the visual system operates during mind wandering. We recorded eye movements and mind wandering (via thought-probes) as 132 college-aged adults completed a battery of 7 short (6 min) tasks with different visual demands. We found that for tasks requiring extensive sampling of the visual field, there were fewer fixations, and, depending on the specific task, fixations were longer and/or more dispersed. This suggests that visual sampling is sparser and potentially slower and more dispersed to compensate for the decreased sampling rate during mind wandering. For tasks that demand centrally focused gaze, mind wandering was accompanied by more exploratory eye movements, such as shorter and more dispersed fixations as well as larger saccades. Gaze behaviors were not reliably associated with mind wandering during a film comprehension task. These findings provide insight into how the visual system prioritizes external information when attention is focused inward and indicate the importance of task demands when assessing the relationship between eye movements, visual processing, and mind wandering. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
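
For readers who want the gaze measures pinned down, the sketch below computes the three discussed here (fixation count, mean fixation duration, and spatial dispersion as RMS distance from the centroid) from a list of fixations. This is a generic illustration with an assumed data format, not the authors' analysis pipeline.

    import math

    def gaze_summary(fixations):
        # fixations: list of (x, y, duration_ms) tuples (assumed format).
        n = len(fixations)
        mean_dur = sum(f[2] for f in fixations) / n
        cx = sum(f[0] for f in fixations) / n   # centroid x
        cy = sum(f[1] for f in fixations) / n   # centroid y
        # Dispersion: RMS distance of fixations from their centroid.
        dispersion = math.sqrt(sum((f[0] - cx) ** 2 + (f[1] - cy) ** 2
                                   for f in fixations) / n)
        return n, mean_dur, dispersion

    print(gaze_summary([(512, 384, 250), (600, 390, 180), (300, 420, 320)]))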

34 citations


Journal ArticleDOI
TL;DR: It is demonstrated that suppression of high-probability distractor locations persists after location probabilities are equalized and likely reflects a genuine reshaping of the priority map rather than more transient effects of selection history.
Abstract: Statistical regularities in distractor location trigger suppression of high-probability distractor locations during visual search. The degree to which such suppression reflects generalizable, persistent changes in a spatial priority map has not been examined. We demonstrate that suppression of high-probability distractor locations persists after location probabilities are equalized and likely reflects a genuine reshaping of the priority map rather than more transient effects of selection history. Statistically learned suppression generalizes across contexts within a task during learning but does not generalize between task paradigms using unrelated stimuli in identical spatial locations. These findings suggest that stimulus features do play a role in learned spatial suppression, potentially gating the weights applied to a spatial priority map. However, the binding of location to context during learning is not automatic, in contrast to the previously reported interaction of location-based statistical learning and stimulus features. (PsycINFO Database Record (c) 2020 APA, all rights reserved).

30 citations


Journal ArticleDOI
TL;DR: It is shown that items are not independent and that the recalled orientation of an individual item is strongly influenced by the summary statistical representation of all items (ensemble representation), and that ensemble information can be an important source of information to constrain uncertain information about individuals.
Abstract: Prevailing theories of visual working memory assume that each encoded item is stored or forgotten as a separate unit independent from other items. Here, we show that items are not independent and that the recalled orientation of an individual item is strongly influenced by the summary statistical representation of all items (ensemble representation). We find that not only is memory for an individual orientation substantially biased toward the mean orientation, but the precision of memory for an individual item also closely tracks the precision with which people store the mean orientation (which is, in turn, correlated with the physical range of orientations). Thus, individual items are reported more precisely when items on a trial are more similar. Moreover, the narrower the range of orientations present on a trial, the more participants appear to rely on the mean orientation as representative of all individuals. This can be observed not only when the range is carefully controlled, but also in randomly generated, unstructured displays, and after accounting for the possibility of location-based 'swap' errors. Our results suggest that information about a set of items is represented hierarchically, and that ensemble information can be an important source of information to constrain uncertain information about individuals. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
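
One minimal way to express the reported pull toward the ensemble (a sketch under assumed parameters, not the authors' fitted model): treat the recalled orientation as the item's orientation shifted toward the ensemble mean, with the weight on the mean growing as the range of orientations narrows. Orientations live in a 180-degree space, so the difference is wrapped.

    def wrap180(a):
        # Wrap an orientation difference into (-90, 90] degrees.
        return (a + 90.0) % 180.0 - 90.0

    def recalled(item, ensemble_mean, orient_range, k=60.0):
        # Hypothetical weighting rule: the pull toward the mean (w)
        # grows as the range of orientations on the trial shrinks.
        w = k / (k + orient_range)
        return item + w * wrap180(ensemble_mean - item)

    # Narrow range: strong pull toward the mean; wide range: weak pull.
    print(recalled(item=30.0, ensemble_mean=45.0, orient_range=20.0))   # 41.25
    print(recalled(item=30.0, ensemble_mean=45.0, orient_range=120.0))  # ~35.0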

22 citations


Journal ArticleDOI
TL;DR: In this article, the authors explored whether a human-like feel of touch biases perceived pleasantness and whether such a bias depends on top-down cognitive and/or bottom-up sensory processes.
Abstract: This study explored whether a human-like feel of touch biases perceived pleasantness and whether such a bias depends on top-down cognitive and/or bottom-up sensory processes. In 2 experiments, 11 materials were stroked across the forearm at different velocities (bottom-up) and participants rated tactile pleasantness and humanness. Additionally, in Experiment 1, participants identified the materials (top-down), whereas in Experiment 2, they rated each material with respect to its somatosensory properties (bottom-up). Stroking felt most pleasant at velocities optimal for the stimulation of CT-afferents, a mechanosensory nerve hypothesized to underpin affective touch. A corresponding effect on perceived humanness was significant in Experiment 1 and marginal in Experiment 2. Whereas material identification was unrelated to both pleasantness and humanness, we observed a robust relation with the somatosensory properties. Materials perceived as smooth, slippery, and soft were also pleasant. A corresponding effect on perceived humanness was significant for the first somatosensory property only. Humanness positively predicted pleasantness and neither top-down nor bottom-up factors altered this relationship. Thus, perceiving gentle touch as human appears to promote pleasure possibly because this serves to reinforce interpersonal contact as a means for creating and maintaining social bonds. (PsycINFO Database Record (c) 2020 APA, all rights reserved).

21 citations


Journal ArticleDOI
TL;DR: The results imply that the modulatory effect of sound on taste was not driven by retrospective interpretation of the taste experience, but by mechanisms such as priming and crossmodal association.
Abstract: Recent evidence demonstrates that the presentation of crossmodally corresponding auditory stimuli can modulate the taste and hedonic evaluation of various foods (an effect often called "sonic seasoning"). To further understand the mechanism underpinning such crossmodal effects, the time at which a soundtrack was presented relative to tasting was manipulated in a series of experiments. Participants heard two soundtracks corresponding to sweet and bitter tastes either exclusively during or after chocolate tasting (Experiment 1) or during and before chocolate tasting (Experiment 2). The results revealed that the soundtracks affected chocolate taste ratings only if they were presented before or during tasting but not if they were heard after tasting. Moreover, participants' individual soundtrack-taste association mediated the strength of the sonic seasoning effect. These results therefore imply that the modulatory effect of sound on taste was not driven by retrospective interpretation of the taste experience, but by mechanisms such as priming and crossmodal association. Taken together, these studies demonstrate the complex interplay of cognitive mechanisms that likely underlie sonic seasoning effects. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

17 citations


Journal ArticleDOI
TL;DR: Eye-tracking with a rewarded visual search task is used to investigate whether capture by reward can be suppressed in the same way as capture by physical salience, and whether reward-related stimuli are given special priority within the visual attention system over and above physically salient stimuli.
Abstract: Salient-but-irrelevant distractors can automatically capture attention and eye-gaze in visual search. However, recent findings have suggested that attention to salient-but-irrelevant stimuli can be suppressed when observers use a specific target template to guide their search (i.e., feature search). A separate line of research has indicated that attentional selection is influenced by factors other than the physical salience of a stimulus and the observer's goals. For instance, pairing a stimulus with reward has been shown to increase the extent to which it captures attention and gaze (as though it has become more physically salient), even when such capture has negative consequences for the observer. Here we used eye-tracking with a rewarded visual search task to investigate whether capture by reward can be suppressed in the same way as capture by physical salience. When participants were encouraged to use feature search, attention to a distractor paired with relatively small reward was suppressed. However, under the same conditions attention was captured by a distractor paired with large reward, even when such capture resulted in reward omission. These findings suggest that reward-related stimuli are given special priority within the visual attention system over and above physically salient stimuli, and have implications for our understanding of real-world biases to reward-related stimuli, such as those seen in addiction. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

16 citations


Journal ArticleDOI
TL;DR: The results highlight that people can use novel sensory information (here, echolocation) to guide actions, demonstrating the action system’s ability to adapt to changes in sensory input and enhance sensory-motor coordination for walking in blind people.
Abstract: People use sensory, in particular visual, information to guide actions such as walking around obstacles, grasping or reaching. However, it is presently unclear how malleable the sensorimotor system is. The present study investigated this by measuring how click-based echolocation may be used to avoid obstacles while walking. We tested 7 blind echolocation experts, 14 sighted echolocation beginners, and 10 blind echolocation beginners. For comparison, we also tested 10 sighted participants, who used vision. To maximize the relevance of our research for people with vision impairments, we also included a condition where the long cane was used and considered obstacles at different elevations. Motion capture and sound data were acquired simultaneously. We found that echolocation experts walked just as fast as sighted participants using vision, and faster than either sighted or blind echolocation beginners. Walking paths of echolocation experts indicated early and smooth adjustments, similar to those shown by sighted people using vision and different from the later and more abrupt adjustments of beginners. Further, for all participants, the use of echolocation significantly decreased collision frequency with obstacles at head, but not ground, level. Further analyses showed that participants who made clicks with higher spectral frequency content walked faster, and that for experts higher clicking rates were associated with faster walking. The results highlight that people can use novel sensory information (here, echolocation) to guide actions, demonstrating the action system's ability to adapt to changes in sensory input. They also highlight that regular use of echolocation enhances sensory-motor coordination for walking in blind people. (PsycINFO Database Record (c) 2019 APA, all rights reserved).

16 citations


Journal ArticleDOI
TL;DR: It is argued here that the weights within the spatial priority map can be dynamically adapted from trial to trial, such that the selection of a target at a particular location increases the weights of the upcoming target location within the spatial priority map, giving rise to more efficient target selection.
Abstract: Previous studies have shown that attentional selection can be biased toward locations that are likely to contain a target and away from locations that are likely to contain a distractor. It is assumed that through statistical learning, participants are able to extract the regularities in the display, which in turn biases attentional selection. The present study employed the additional singleton task to examine the ability of participants to extract regularities that occurred across trials. In four experiments, we found that participants were capable of picking up statistical regularities concerning target positions across trials both in the absence and presence of distracting information. It is concluded that through statistical learning, participants are able to extract intertrial statistical associations regarding subsequent target location, which in turn biases attentional selection. We argue here that the weights within the spatial priority map can be dynamically adapted from trial to trial such that the selection of a target at a particular location increases the weights of the upcoming target location within the spatial priority map, giving rise to a more efficient target selection. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
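
The proposed trial-to-trial adaptation can be caricatured in a few lines (an illustrative sketch with assumed learning and decay rates, not the authors' computational model): after each selection, the weight of the location that the learned regularity predicts next is boosted, while all weights decay toward baseline.

    import numpy as np

    def update_weights(w, predicted_loc, lr=0.3, decay=0.05):
        # Decay all location weights toward a baseline of 1.0, then
        # boost the location the intertrial regularity predicts next.
        w = w + decay * (1.0 - w)
        w[predicted_loc] += lr
        return w

    w = np.ones(8)                 # priority weights for 8 display locations
    for _ in range(20):            # regularity: the target always appears at 3
        w = update_weights(w, predicted_loc=3)
    print(np.round(w, 2))          # location 3 now dominates the priority map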

15 citations


Journal ArticleDOI
TL;DR: Findings support the view that task sets serve as boundaries for the CSE; such modality-specific CSEs are associated with orienting attention to the sensory modality in which task stimuli appear, which may facilitate the formation of a modality-specific task set.
Abstract: Cognitive control processes that enable purposeful behavior are often context-specific. A teenager, for example, may inhibit the tendency to daydream at work but not in the classroom. However, the nature of contextual boundaries for cognitive control processes remains unclear. Therefore, we revisited an ongoing controversy over whether such boundaries reflect (a) an attentional reset that occurs whenever a context-defining (e.g., sensory) feature changes or (b) a disruption of episodic memory retrieval that occurs only when the updated context-defining feature is linked to a different task set. To distinguish between these hypotheses, we used a cross-modal distractor-interference task to determine precisely when changing a salient context-defining feature (the sensory modality in which task stimuli appear) bounds control processes underlying the congruency sequence effect (CSE). Consistent with the task set hypothesis, but not with the attentional reset hypothesis, Experiments 1 and 2 revealed that changing the sensory modality in which task stimuli appear eliminates the CSE only when the task structure enables participants to form modality-specific task sets. Experiment 3 further revealed that such "modality-specific" CSEs are associated with orienting attention to the sensory modality in which task stimuli appear, which may facilitate the formation of a modality-specific task set. These findings support the view that task sets serve as boundaries for the CSE. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
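
For reference, the CSE is a difference of differences: the congruency effect following congruent trials minus the congruency effect following incongruent trials. A minimal computation with made-up mean RTs (lowercase = previous trial, uppercase = current trial):

    def cse(rt_cC, rt_cI, rt_iC, rt_iI):
        # Congruency effect after congruent trials minus the
        # congruency effect after incongruent trials.
        return (rt_cI - rt_cC) - (rt_iI - rt_iC)

    # Hypothetical mean RTs (ms) showing a classic CSE pattern.
    print(cse(rt_cC=500, rt_cI=580, rt_iC=520, rt_iI=560))  # 80 - 40 = 40 ms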

15 citations


Journal ArticleDOI
TL;DR: The results consistently showed that the context cues in the nonmatching cue condition captured attention, as reflected in shorter RTs compared to neutral cues and a substantial N2pc to lateralized context cues.
Abstract: Recent attentional capture studies with the spatial cueing paradigm often found that target-dissimilar precues resulted in longer RTs on valid than invalid cue trials. These same location costs were accompanied by a contralateral positivity over posterior electrodes from 200 to 300 ms, similar to a PD component. Same location costs and the PD have been linked to the inhibition of cues with a unique feature (singleton cues) that do not match the target feature. In some studies reporting same location costs, the cue was surrounded by other cues (i.e., the context cues) that matched the physical or relative feature of the target. We hypothesized that the context cues might have captured attention and might have elicited data patterns that mimicked the inhibitory effects. To disentangle inhibition of the singleton cue from capture by the context cues, we added gray cues to the cue array, which we considered neutral because gray matched neither the target nor the nontarget color. In four experiments, the results consistently showed that the context cues in the nonmatching cue condition captured attention, as reflected in shorter RTs compared to neutral cues and a substantial N2pc to lateralized context cues. By contrast, the evidence for inhibition of the singleton cue was rather weak. Therefore, same location costs and lateralized positivity in the event-related potential of participants in several recent studies probably reflected attentional capture by the context cues, not inhibition of the singleton cue. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
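
The logic of the paradigm turns on the sign of the cue-validity effect: attentional capture predicts faster responses at the cued (valid) location, whereas suppression of the cue predicts same-location costs. A trivial sketch with made-up numbers:

    def validity_effect(rt_invalid, rt_valid):
        # Positive: capture (cued location is faster).
        # Negative: same-location cost (cued location is suppressed).
        return rt_invalid - rt_valid

    print(validity_effect(rt_invalid=620, rt_valid=590))  #  30 ms -> capture
    print(validity_effect(rt_invalid=590, rt_valid=620))  # -30 ms -> suppression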

Journal ArticleDOI
TL;DR: Learned distractor rejection indeed operates alongside guidance from a target template, indicating that theories of visual attention should incorporate guidance by both target templates and learned nontargets.
Abstract: Visual attention is guided toward behaviorally relevant objects by target "templates" stored in visual memory. Visual attention also is guided away from nontarget distractors by learned distractor rejection. In a series of 5 visual search experiments, we asked if learned distractor rejection operated while attention was simultaneously guided by a target template. Participants performed a visual search in 2-color, spatially unsegregated displays where we manipulated attentional guidance by both target templates and consistent nontarget distractors. We observed faster mean response times to the target when a consistent nontarget distractor was present than when it was absent (the hallmark of learned distractor rejection), despite the use of strong target guidance. Learned distractor rejection indeed operates alongside guidance from a target template, indicating that theories of visual attention should incorporate guidance by both target templates and learned nontargets. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: While youths and older adults experience greater difficulties using multiple languages in response to external cues, they are affected less when they can freely use their languages.
Abstract: How bilinguals control their languages and switch between them may change across the life span. Furthermore, bilingual language control may depend on the demands imposed by the context. Across 2 experiments, we examined how Spanish-Basque children, teenagers, younger, and older adults switch between languages in voluntary and cued picture-naming tasks. In the voluntary task, bilinguals could freely choose a language while the cued task required them to use a prespecified language. In the cued task, youths and older adults showed larger language mixing costs than young adults, suggesting that using 2 languages in response to cues was more effortful. Cued switching costs, especially when the switching sequence was predictable, were also greater for youths and older adults. The voluntary switching task showed limited age effects. Older adults, but not youths, showed larger switching costs than younger adults. A voluntary mixing benefit was found in all ages, implying that voluntarily using 2 languages was less effortful than using one language across the life span. Thus, while youths and older adults experience greater difficulties using multiple languages in response to external cues, they are affected less when they can freely use their languages. This shows that age effects on bilingual language control are context-dependent. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
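
The two cost measures this literature relies on have standard definitions, sketched below with hypothetical RTs (not data from the study): the switch cost compares switch with repeat trials within mixed blocks, and the mixing cost compares repeat trials in mixed blocks with single-language blocks.

    def switch_cost(rt_switch, rt_repeat):
        # Cost of switching languages within a mixed block.
        return rt_switch - rt_repeat

    def mixing_cost(rt_repeat_mixed, rt_single):
        # Cost of keeping two languages available at all.
        return rt_repeat_mixed - rt_single

    print(switch_cost(rt_switch=950, rt_repeat=870))         # 80 ms
    print(mixing_cost(rt_repeat_mixed=870, rt_single=780))   # 90 ms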

Journal ArticleDOI
Dirk Kerzel
TL;DR: In search arrays where the target is presented among similar nontarget stimuli, it is advantageous to shift the internal representation of the target features away from the nontarget features; optimal tuning theory provides the best explanation of this shift.
Abstract: In search arrays where the target is presented with similar nontarget stimuli, it is advantageous to shift the internal representation of the target features away from the nontarget features. According to optimal tuning theory (Navalpakkam & Itti, 2007), the shift of the attentional template increases the signal-to-noise ratio because the overlap of neural populations representing the target and nontarget features is reduced. While previous research has shown that the internal representation of the target is indeed shifted, there is little evidence in favor of a shift in attentional selectivity. To fill this gap, we used a cue-target paradigm where shorter reaction times (RTs) at cued than at uncued locations indicate attentional capture by the cue. Consistent with previous research, we found that attentional capture decreased with decreasing similarity between cue and target color. Importantly, target-similar cue colors closer to the nontarget colors captured attention less than target-similar cue colors further away from the nontarget colors, suggesting that attentional selectivity was biased away from the nontarget colors. The shift of attentional selectivity matched the shift of the memory representation of the target. Further, the bias in attentional capture was reduced when the nontarget colors were more distinct from the target. We discuss alternative accounts of the data, such as saliency-driven capture and the relational account of attentional capture (Becker, 2010), but conclude that optimal tuning theory provides the best explanation. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
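
Optimal tuning theory's core prediction can be reproduced numerically. In the sketch below (illustrative only, with assumed Gaussian tuning and a baseline noise term; not the authors' stimuli or parameters), the template that maximizes the target-to-nontarget signal ratio is not the target feature itself but one shifted away from the nontarget feature:

    import numpy as np

    def response(template, feature, sigma=20.0):
        # Assumed Gaussian tuning of the attentional template.
        return np.exp(-0.5 * ((template - feature) / sigma) ** 2)

    target, nontarget = 0.0, 30.0        # hypothetical feature values (degrees)
    templates = np.linspace(-40, 20, 121)
    noise = 0.05                         # assumed baseline noise level
    snr = response(templates, target) / (response(templates, nontarget) + noise)

    best = templates[np.argmax(snr)]
    print(best)  # ~ -21.5: shifted away from the nontarget at +30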

Journal ArticleDOI
TL;DR: The present experiments used combinations of auditory and motor tasks to examine the relation between the direction of the focus of attention (external/internal) and attentional demand on accuracy, and the results are consistent with the predictions of the constrained action and conscious processing hypotheses.
Abstract: Only a few research studies using reaction time (RT) measures have clearly shown that an external focus of attention requires fewer attentional resources than an internal focus of attention. The present experiments used combinations of auditory and motor tasks to examine the relation between the direction of the focus of attention (external/internal) and attentional demand on accuracy. Participants concurrently performed a dart throwing task and either a tone estimation task (Experiments 1 and 2) or a manual force production task (Experiments 3 and 4). In Experiment 1 with a between-subjects design there was a nonsignificant trend for spatial errors in dart throwing to be reduced when focus was directed externally, as opposed to internally, but only in the dual-task condition. In Experiment 2 with a within-subject design both the internal and external focus conditions showed reduced errors in the dual-task conditions compared with the single-task conditions. The correlations between the actual and estimated tones were strong and positive in both experiments (at least .90). In Experiment 3, focusing externally on either task resulted in better force production accuracy than did focusing internally. In Experiment 4, an external focus on either task resulted in better throwing accuracy than did an internal focus. Overall, the results are consistent with the predictions of the constrained action and conscious processing hypotheses that an external focus of attention lowers attentional demands relative to an internal focus of attention, but focus of attention effects also depend on the overall attentional demands of the tasks involved. (PsycINFO Database Record (c) 2019 APA, all rights reserved).

Journal ArticleDOI
TL;DR: It is demonstrated that repetition greatly improves long-term memory, including the ability to discriminate an item from a very similar item (fidelity), in both a lab setting and a naturalistic setting; the results suggest that working memory and long-term memory are capable of storing similarly high-fidelity memories under the right circumstances.
Abstract: Long-term memory is often considered easily corruptible, imprecise, and inaccurate, especially in comparison to working memory. However, most research used to support these findings relies on weak long-term memories: those where people have had only one brief exposure to an item. Here we investigated the fidelity of visual long-term memory in a more naturalistic setting, with repeated exposures, and asked how it compares to visual working memory fidelity. Using psychophysical methods designed to precisely measure the fidelity of visual memory, we demonstrate that long-term memory for the color of frequently seen objects is as accurate as working memory for the color of a single item seen 1 s ago. In particular, we show that repetition greatly improves long-term memory, including the ability to discriminate an item from a very similar item (fidelity), in both a lab setting (Experiments 1-3) and a naturalistic setting (brand logos, Experiment 4). Overall, our results demonstrate the impressive nature of visual long-term memory fidelity, which we find is even higher than previously indicated in situations involving repetition. Furthermore, our results suggest that there is no distinction between the fidelity of visual working memory and visual long-term memory; instead, both memory systems are capable of storing similar, incredibly high-fidelity memories under the right circumstances. Our results also provide further evidence that there is no fundamental distinction between the "precision" of memory and the "likelihood of retrieving a memory," instead suggesting that a single continuous measure of memory strength best accounts for working and long-term memory. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
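
Fidelity in this kind of continuous-report work is typically estimated with a mixture model over response errors; the sketch below writes down that standard density for circular errors in radians. It is offered as a generic tool of the literature, not as the authors' final model (which argues instead for a single continuous memory-strength account).

    import numpy as np
    from scipy.stats import vonmises

    def mixture_pdf(error, kappa, guess_rate):
        # Standard mixture model for continuous report: with probability
        # 1 - guess_rate the error is von Mises around 0 (precision kappa);
        # otherwise the response is a uniform guess on the circle.
        return ((1 - guess_rate) * vonmises.pdf(error, kappa)
                + guess_rate / (2 * np.pi))

    errors = np.linspace(-np.pi, np.pi, 5)
    print(mixture_pdf(errors, kappa=8.0, guess_rate=0.1))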

Journal ArticleDOI
TL;DR: Women's preferences for male faces are associated with their preferences for personality traits, and these findings are consistent with the idea that sex-dimorphic characteristics elicit personality trait judgments, which might in turn affect attractiveness.
Abstract: Women prefer male faces with feminine shape and masculine reflectance. Here, we investigated the conceptual correlates of this preference, showing that it might reflect women's preferences for feminine (vs. masculine) personality in a partner. Young heterosexual women reported their preferences for personality traits in a partner and rated male faces (manipulated on masculinity/femininity) on stereotypically masculine (e.g., dominance) and feminine traits (e.g., warmth). Masculine shape and reflectance increased perceptions of masculine traits but had different effects on perceptions of feminine traits and attractiveness. While masculine shape decreased perceptions of both attractiveness and feminine traits, masculine reflectance increased perceptions of attractiveness and, to a weaker extent, perceptions of feminine traits. These findings are consistent with the idea that sex-dimorphic characteristics elicit personality trait judgments, which might in turn affect attractiveness. Importantly, participants found faces attractive to the extent that these faces elicited their preferred personality traits, regardless of the gender typicality of the traits. In sum, women's preferences for male faces are associated with their preferences for personality traits. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The present study used a design that matched typical partial repetition cost experiments as closely as possible in overall stimulus processing and response requirements; the findings have implications for interpreting partial repetition costs and for feature integration theories in general.
Abstract: In stimulus identification tasks, stimulus and response, and location and response information, is thought to become integrated into a common event representation following a response. Evidence for this feature integration comes from paradigms requiring keypress responses to pairs of sequentially presented stimuli. In such paradigms, there is a robust cost when a target event only partially matches the preceding event representation. This is known as the partial repetition cost. Notably, however, these experiments rely on discrimination responses. Recent evidence has suggested that changing the responses to localization or detection responses eliminates partial repetition costs. If changing the response type can eliminate partial repetition costs it becomes necessary to question whether partial repetition costs reflect feature integration or some other mechanism. In the current study, we look to answer this question by using a design that as closely as possible matched typical partial repetition cost experiments in overall stimulus processing and response requirements. Unlike typical experiments where participants make a cued response to a first stimulus before making a discrimination response to a second stimulus, here we reversed that sequence such that participants made a discrimination response to the first stimulus before making a cued response to the second. In Experiment 1, this small change eliminated or substantially reduced the typically large partial repetition costs. In Experiment 2 we returned to the typical sequence and restored the large partial repetition costs. Experiment 3 confirmed these findings, which have implications for interpreting partial repetition costs and for feature integration theories in general. (PsycINFO Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: Reflexive first saccades tended toward the left and center of the face rather than preferentially targeting emotion-distinguishing features, reflecting the integration of task-relevant information across the face constrained by the differences between foveal and extrafoveal processing.
Abstract: At normal interpersonal distances all features of a face cannot fall within one's fovea simultaneously. Given that certain facial features are differentially informative of different emotions, does the ability to identify facially expressed emotions vary according to the feature fixated, and do saccades preferentially seek diagnostic features? Previous findings are equivocal. We presented faces for a brief time, insufficient for a saccade, at a spatial position that guaranteed that a given feature (an eye, a cheek, the central brow, or the mouth) fell at the fovea. Across 2 experiments, observers were more accurate and faster at discriminating angry expressions when the high spatial-frequency information of the brow was projected to their fovea than when one or the other cheek or eye was. Performance in classifying fear and happiness (Experiment 1) was not influenced by whether the most informative features (eyes and mouth, respectively) were projected foveally or extrafoveally. Observers more accurately distinguished between fearful and surprised expressions (Experiment 2) when the mouth was projected to the fovea. Reflexive first saccades tended toward the left and center of the face rather than preferentially targeting emotion-distinguishing features. These results reflect the integration of task-relevant information across the face, constrained by the differences between foveal and extrafoveal processing (Peterson & Eckstein, 2012). (PsycINFO Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: It is suggested that positive repetition effects depend on consecutive targets that share visual representations, whereas negative repetition effects reflect a more complex relationship between stimulus and response features across targets.
Abstract: Repetition of target features in the same spatial location can either benefit or impair performance in perceptual tasks. Moreover, which of these two effects occurs can depend on whether an intervening event is presented temporally between consecutive targets. Here, we explored these effects for color feature repetitions by varying the representational overlap of consecutive targets. The second target on all experimental trials was a simple perceptual color. The task and first target were manipulated to vary the representation produced in response to the first target (perceptual representation of color in Experiment 1; imagined representation of color in Experiments 2 and 5; conceptual representation of color in Experiment 3; color-unrelated representation in Experiment 4). Perceptual and imagined color representations for the first target produced a positive repetition effect when an intervening event did not appear between targets but produced a negative repetition effect when an intervening event did appear between targets. In contrast, conceptual color and color-unrelated representations produced a negative repetition effect both with and without an intervening event. These results suggest that positive repetition effects depend on consecutive targets that share visual representations, whereas negative repetition effects reflect a more complex relationship between stimulus and response features across targets. (PsycINFO Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: In this paper, the authors examined the relationship between stimuli and action effects and found that action effects that were modality-compatible with the stimuli (e.g., visual stimulus, visual action effect) produced smaller dual-task costs than action effects that were modality-incompatible (e.g., visual stimulus, auditory action effect).
Abstract: The pairings of tasks' stimulus and response modalities affect the magnitude of dual-task costs. For example, dual-task costs are larger when a visual-vocal task is paired with an auditory-manual task compared with when a visual-manual task is paired with an auditory-vocal task. These results are often interpreted as reflecting increased crosstalk between central codes for each task. Here we examine a potential source: modality-based crosstalk between the stimuli and the response-induced sensory consequences (i.e., action effects). In five experiments, we manipulated experimentally induced action effects so that they were either modality-compatible or -incompatible with the stimuli. Action effects that were modality-compatible (e.g., visual stimulus, visual action effect) produced smaller dual-task costs than those that were modality-incompatible (e.g., visual stimulus, auditory action effect). Thus, the relationship between stimuli and action effects contributes to dual-task costs. Moreover, modality-compatible pairs showed an advantage compared with when no action effects were experimentally induced. These results add to a growing body of work demonstrating that postresponse sensory events affect response selection processes. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: Analyses of visual search times indicated that the search bias toward the rich area (formed during the biased stage) was reduced during the unbiased stage, casting doubt on the characterization of probabilistic cuing as an implicit and inflexible search habit.
Abstract: In probabilistic cuing of visual search, participants search for a target object that appears more frequently in one region of the display. This task results in a search bias toward the rich quadrant compared with other quadrants. Previous research has suggested that this bias is inflexible (difficult to unlearn) and implicit (participants are unaware of the biased distribution of targets). We tested these hypotheses in two preregistered, high-powered experiments (Ns = 160 and 161). In an initial biased stage, participants performed a standard probabilistic cuing task. In a subsequent unbiased stage, the target appeared in all quadrants with equal probability. Awareness questions were included after the biased stage in one group of participants, and after the unbiased stage in a second group. Results showed that participants were aware of the rich area, and this effect was larger for the group whose awareness was assessed after the biased stage. In addition, analyses of visual search times indicated that the search bias toward the rich area (formed during the biased stage) was reduced during the unbiased stage. These results cast doubt on the characterization of probabilistic cuing as an implicit and inflexible search habit. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: It is demonstrated that associating specific stimuli with frequent switch requirements not only reduces switch costs but also enhances participants' tendency to switch voluntarily.
Abstract: The ability to switch efficiently between different tasks underpins cognitive flexibility and is impaired in various psychiatric disorders. Recent research has suggested that the control processes mediating switching can be subject to learning, because "switch readiness" can become associated with, and primed by, specific stimuli. In cued task switching, items that are frequently associated with the need to switch incur a smaller behavioral switch cost than do items associated with a low probability of switching, known as the item-specific switch probability (ISSP) effect (Chiu & Egner, 2017). However, it remains unknown whether ISSP associations modulate the efficiency of only cued switching or also impact people's voluntary choice to switch tasks. Here, we addressed this question by combining an ISSP manipulation with a protocol that mixed 75% standard cued task trials with 25% free choice trials, allowing us to measure the effect of ISSP on voluntary switch rate (VSR). We observed robust ISSP effects on cued trials, replicating previous findings. Crucially, we also found that the VSR was greater for items associated with a high than with a low switch likelihood. We thus demonstrate that associating specific stimuli with frequent switch requirements not only reduces switch costs but also enhances participants' tendency to switch voluntarily.

Public Significance Statement: A hallmark of human cognition is people's cognitive flexibility, reflected in the ability to efficiently switch between different tasks, which is impaired by many psychiatric disorders. A recent study demonstrated that the efficiency with which people switch tasks, or "switch readiness," can be improved through learning: People become better at switching tasks for stimuli that are frequently associated with the need to switch compared to stimuli that are rarely associated with switching. In the present study, we show that frequent-switch stimuli also increase people's tendency to switch tasks when they are allowed to choose which task to perform. This finding suggests that it might be possible to employ stimulus-specific cuing of switch readiness to enhance both the ability and the choice to behave more flexibly in clinical populations with deficits in cognitive flexibility.

Journal ArticleDOI
TL;DR: In this article, the authors suggest a new principle of spatial alignment, whereby visual comparison is substantially more efficient when visuals are placed perpendicular to their structural axes, such that the matching components of the visuals are in direct alignment.
Abstract: Humans have a uniquely sophisticated ability to see past superficial features and to understand the relational structure of the world around us. This ability often requires that we compare structures, finding commonalities and differences across visual depictions that are arranged in space, such as maps, graphs, or diagrams. Although such visual comparison of relational structures is ubiquitous in classrooms, textbooks, and news media, surprisingly little is known about how to facilitate this process. Here we suggest a new principle of spatial alignment, whereby visual comparison is substantially more efficient when visuals are placed perpendicular to their structural axes, such that the matching components of the visuals are in direct alignment. In four experiments, this direct alignment led to faster and more accurate comparison than other placements of the same patterns. We discuss the spatial alignment principle in connection to broader work on relational comparison and describe its implications for design and instruction. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: An assessment of how the presence of performance feedback shapes control-learning in the context of item-specific and list-wide proportion of congruency manipulations in a Stroop protocol found that performance feedback did not alter the modulation of the Stroop effect by item-specific cueing but did enhance its modulation by the list-wide context.
Abstract: Cognitive control refers to the use of internal goals to guide how we process stimuli, and control can be applied proactively (in anticipation of a stimulus) or reactively (once that stimulus has been presented). The application of control can be guided by memory; for instance, people typically learn to adjust their level of attentional selectivity to changing task statistics, such as different frequencies of hard and easy trials in the Stroop task. This type of control-learning is highly adaptive, but its boundary conditions are currently not well understood. In the present study, we assessed how the presence of performance feedback shapes control-learning in the context of item-specific (reactive control, Experiments 1a and 1b) and list-wide (proactive control, Experiments 2a and 2b) proportion of congruency manipulations in a Stroop protocol. We found that performance feedback did not alter the modulation of the Stroop effect by item-specific cueing, but did enhance the modulation of the Stroop effect by a list-wide context. Performance feedback thus selectively promoted proactive, but not reactive, adaptation of cognitive control. These results have important implications for experimental designs, potential psychiatric treatment, and theoretical accounts of the mechanisms underlying control-learning. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The efficiency of individuals preferring a switching or response-grouping strategy increased especially when the reduction in resource competition was response-related (manual vs. vocal), even leading to considerable dual-tasking benefits under these circumstances.
Abstract: Previous research has shown that individuals differ with respect to their preferred strategies in self-organized multitasking: They either prefer to work on one task for long sequences before switching to another (blocking), prefer to switch repeatedly after short sequences (switching), or prefer to respond almost simultaneously after processing the stimuli of two concurrently visible tasks (response grouping). In two experiments, we tested to what extent the choice of strategy and related differences in multitasking efficiency were affected by the between-resource competition (Wickens, 2002) of two tasks to be performed concurrently in a self-organized manner. All participants performed a set of dual tasks that differed with respect to the kind of stimuli (verbal vs. spatial) and/or responses (manual vs. vocal). The choice of strategy was hardly affected as most individuals persisted in their response strategy independent of the degree of resource competition. However, the efficiency of individuals preferring a switching or response-grouping strategy increased especially when the reduction in resource competition was response related (manual vs. vocal), leading even to considerable dual-tasking benefits under these circumstances. In contrast, individuals who preferred to block their responses did not achieve any considerable benefits (or costs) with either of the different dual tasks. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The present findings indicated that referential coding may underlie taking the avatar's perspective; accordingly, any study of perspective taking needs to consider and evaluate possible mechanisms of referential coding.
Abstract: Previous studies have shown that users spontaneously take the position of a virtual avatar and solve spatial tasks from the avatar's perspective. The common impression is that users develop a spatial representation that allows them to "see" the world through the eyes of the avatar, that is, from its virtual perspective. In the present paper, this perspective taking assumption is compared with a referential coding assumption that allows the user to act on the basis of changed reference points. Using a spatial compatibility task, Experiment 1 demonstrated that the visual perspective of the avatar was not the determining factor for taking the avatar's spatial position; rather, its hand position (as the reference point) was decisive for the spatial coding of objects. Experiment 2 showed, however, that if the participant's hand position did not correspond with the avatar's hand positions, the spatial referencing by the avatar's hands expired, thereby demonstrating the limits of referential coding. Still, the present findings indicate that referential coding may be at work when taking the avatar's perspective. Accordingly, any study of perspective taking needs to consider and evaluate possible mechanisms of referential coding. (PsycINFO Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: It is concluded that although the predicted interactions in early eye movement measures may exist, they are sufficiently weak that they are difficult to detect even in large eye movement experiments.
Abstract: The time a reader's eyes spend on a word is influenced by visual (e.g., contrast) as well as lexical (e.g., word frequency) and contextual (e.g., predictability) factors. Well-known visual word recognition models predict that visual and higher-level manipulations may have interactive effects on early eye movement measures, because of cascaded processing between levels. Previous eye movement studies provide conflicting evidence as to whether they do, possibly because of inconsistent manipulations or limited statistical power. In the present study, 2 highly powered experiments used sentences in which a target word's frequency and predictability were factorially manipulated. Experiment 1 also manipulated visual contrast, and Experiment 2 also manipulated font difficulty. Robust main effects of all manipulations were evident in both experiments. In Experiment 1, interactions between the effect of contrast and the effects of frequency and predictability were numerically small and statistically unreliable in both early (word skipping, first fixation duration) and later (gaze duration, go-past time) measures. In Experiment 2, frequency and predictability did demonstrate convincing interactions with font difficulty, but only in the later measures, possibly implicating a checking mechanism. We conclude that although the predicted interactions in early eye movement measures may exist, they are sufficiently weak that they are difficult to detect even in large eye movement experiments. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The results have implications for preference and choice in a wide range of contexts, demonstrating the competition between perceptual fluency and ambiguity solution in shaping preference and highlighting the form of the preference decision as a critical factor.
Abstract: Human perceptual processes are highly efficient and rapidly extract information to enable fast and accurate responses. The fluency of these processes is reinforcing, meaning that easy-to-perceive objects are liked more as a result of misattribution of the reinforcement affect to the object identity. However, some critical processes are disfluent, yet their completion can be reinforcing, leading to object preference through a different route. One such example is the identification of objects from camouflage. In a series of 5 experiments, we manipulated object contrast and camouflage to explore the relationship of object preference to perceptual fluency and ambiguity solution. We found that perceptual fluency dominated the process of preference assessment when objects were assessed for "liking". That is, easier-to-perceive objects (high-contrast and noncamouflaged) were preferred over harder-to-perceive objects (low-contrast and camouflaged). However, when objects were assessed for "interest", the disfluent yet reinforcing ambiguity solution process overrode the effect of perceptual fluency, resulting in preference for the harder-to-perceive camouflaged objects over the easier-to-perceive noncamouflaged objects. The results have implications for preference and choice in a wide range of contexts by demonstrating the competition between perceptual fluency and ambiguity solution in shaping preference, and by highlighting the critical role of the form of the preference decision. (PsycINFO Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: Evidence is provided, for the first time, that people make online adjustments of observed actions based on the match between hand grip and object goals, distorting their perceptual representation toward implied goals.
Abstract: Predictive processing accounts of social perception argue that action observation is a predictive process, in which inferences about others’ goals are tested against the perceptual input, inducing a subtle perceptual confirmation bias that distorts observed action kinematics toward the inferred goals. Here we test whether such biases are induced even when goals are not explicitly given but have to be derived from the unfolding action kinematics. In 2 experiments, participants briefly saw an actor reach ambiguously toward a large object and a small object, with either a whole-hand power grip or an index-finger and thumb precision grip. During its course, the hand suddenly disappeared, and participants reported its last seen position on a touch-screen. As predicted, judgments were consistently biased toward apparent action targets, such that power grips were perceived closer to large objects and precision grips closer to small objects, even if the reach kinematics were identical. Strikingly, these biases were independent of participants’ explicit goal judgments. They were of equal size when action goals had to be explicitly derived in each trial (Experiment 1) or not (Experiment 2) and, across trials and across participants, explicit judgments and perceptual biases were uncorrelated. This provides evidence, for the first time, that people make online adjustments of observed actions based on the match between hand grip and object goals, distorting their perceptual representation toward implied goals. These distortions may not reflect high-level goal assumptions, but emerge from relatively low-level processing of kinematic features within the perceptual system.