
Showing papers in "Journal of Experimental Psychology: General" in 2021


Journal ArticleDOI
TL;DR: The Neyman-Rubin causal model is reviewed and used to prove analytically that linear regression yields unbiased estimates of treatment effects on binary outcomes; when interaction terms or fixed effects are included, linear regression is also the safer strategy.
Abstract: When the outcome is binary, psychologists often use nonlinear modeling strategies such as logit or probit. These strategies are often neither optimal nor justified when the objective is to estimate causal effects of experimental treatments. Researchers need to take extra steps to convert logit and probit coefficients into interpretable quantities, and when they do, these quantities often remain difficult to understand. Odds ratios, for instance, are described as obscure in many textbooks (e.g., Gelman & Hill, 2006, p. 83). I draw on econometric theory and established statistical findings to demonstrate that linear regression is generally the best strategy to estimate causal effects of treatments on binary outcomes. Linear regression coefficients are directly interpretable in terms of probabilities and, when interaction terms or fixed effects are included, linear regression is safer. I review the Neyman-Rubin causal model, which I use to prove analytically that linear regression yields unbiased estimates of treatment effects on binary outcomes. Then, I run simulations and analyze existing data on 24,191 students from 56 middle schools (Paluck, Shepherd, & Aronow, 2013) to illustrate the effectiveness of linear regression. Based on these grounds, I recommend that psychologists use linear regression to estimate treatment effects on binary outcomes. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
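
To make the contrast concrete, here is a minimal simulation sketch in Python (illustrative numbers, not the paper's code): with a randomized treatment, the OLS coefficient is directly the difference in outcome probabilities, whereas the logit coefficient is a log odds ratio that needs conversion.

```python
# Minimal sketch (assumed parameters, not the paper's code): linear
# probability model (OLS) vs. logit for a treatment effect on a binary outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
treat = rng.integers(0, 2, n)              # randomized binary treatment
p = np.where(treat == 1, 0.55, 0.40)       # true P(Y = 1) in each arm
y = rng.binomial(1, p)

X = sm.add_constant(treat)
ols = sm.OLS(y, X).fit()
logit = sm.Logit(y, X).fit(disp=0)

print(ols.params[1])    # ~0.15: the treatment effect, directly in probability units
print(logit.params[1])  # a log odds ratio, which requires conversion to interpret
```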

155 citations


Journal ArticleDOI
TL;DR: Performance on accuracy-based measures of attention control tasks was more reliable, had stronger intercorrelations, formed a more coherent latent factor, and had stronger associations with measures of working memory capacity and fluid intelligence.
Abstract: Cognitive tasks that produce reliable and robust effects at the group level often fail to yield reliable and valid individual differences. An ongoing debate among attention researchers is whether conflict resolution mechanisms are task-specific or domain-general, and the lack of correlation between most attention measures seems to favor the view that attention control is not a unitary concept. We have argued that the use of difference scores, particularly in reaction time (RT), is the primary cause of null and conflicting results at the individual differences level, and that methodological issues with existing tasks preclude making strong theoretical conclusions. The present article is an empirical test of this view in which we used a toolbox approach to develop and validate new tasks hypothesized to reflect attention processes. Here, we administered existing, modified, and new attention tasks to over 400 participants (final N = 396). Compared with the traditional Stroop and flanker tasks, performance on the accuracy-based measures was more reliable, had stronger intercorrelations, formed a more coherent latent factor, and had stronger associations to measures of working memory capacity and fluid intelligence. Further, attention control fully accounted for the relationship between working memory capacity and fluid intelligence. These results show that accuracy-based measures can be better suited to individual differences investigations than traditional RT tasks, particularly when the goal is to maximize prediction. We conclude that attention control is a unitary concept. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
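
The difference-score problem the authors describe can be illustrated with a small simulation (a sketch under assumed parameter values, not the authors' analysis): when the true conflict effect varies little across people relative to trial noise, the split-half reliability of an RT difference score collapses even though each condition mean is highly reliable.

```python
# Illustrative simulation (assumptions mine): why RT difference scores
# lose reliability when true individual differences in the effect are small.
import numpy as np

rng = np.random.default_rng(1)
n = 400
true_base = rng.normal(500, 50, n)   # true baseline RT per person
true_eff = rng.normal(60, 10, n)     # true conflict effect (little between-person variance)

def observed(true_vals, noise_sd=30):
    return true_vals + rng.normal(0, noise_sd, n)

# two parallel halves of each condition
congr1, congr2 = observed(true_base), observed(true_base)
incon1, incon2 = observed(true_base + true_eff), observed(true_base + true_eff)

diff1, diff2 = incon1 - congr1, incon2 - congr2
print(np.corrcoef(congr1, congr2)[0, 1])  # single-condition RT: high split-half reliability
print(np.corrcoef(diff1, diff2)[0, 1])    # difference score: near zero
```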

76 citations


Journal ArticleDOI
TL;DR: It is found that functional brain networks that predict intelligence facets overlap to varying degrees with a network that predicts creative ability, particularly within the prefrontal cortex of the executive control network.
Abstract: Are intelligence and creativity distinct abilities, or do they rely on the same cognitive and neural systems? We sought to quantify the extent to which intelligence and creative cognition overlap in brain and behavior by combining machine learning of fMRI data and latent variable modeling of cognitive ability data in a sample of young adults (N = 186) who completed a battery of intelligence and creative thinking tasks. The study had 3 analytic goals: (a) to assess contributions of specific facets of intelligence (e.g., fluid and crystallized intelligence) and general intelligence to creative ability (i.e., divergent thinking originality), (b) to model whole-brain functional connectivity networks that predict intelligence facets and creative ability, and (c) to quantify the degree to which these predictive networks overlap in the brain. Using structural equation modeling, we found moderate to large correlations between intelligence facets and creative ability, as well as a large correlation between general intelligence and creative ability (r = .63). Using connectome-based predictive modeling, we found that functional brain networks that predict intelligence facets overlap to varying degrees with a network that predicts creative ability, particularly within the prefrontal cortex of the executive control network. Notably, a network that predicted general intelligence shared 46% of its functional connections with a network that predicted creative ability-including connections linking executive control and salience/ventral attention networks-suggesting that intelligence and creative thinking rely on similar neural and cognitive systems. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

55 citations


Journal ArticleDOI
TL;DR: There is currently a lack of clear empirical evidence that cognitive sophistication magnifies politically motivated reasoning as commonly understood; the conceptual and empirical challenges that confront tests of this hypothesis are emphasized.
Abstract: Partisan disagreement over policy-relevant facts is a salient feature of contemporary American politics. Perhaps surprisingly, such disagreements are often the greatest among opposing partisans who are the most cognitively sophisticated. A prominent hypothesis for this phenomenon is that cognitive sophistication magnifies politically motivated reasoning-commonly defined as reasoning driven by the motivation to reach conclusions congenial to one's political group identity. Numerous experimental studies report evidence in favor of this hypothesis. However, in the designs of such studies, political group identity is often confounded with prior factual beliefs about the issue in question; and, crucially, reasoning can be affected by such beliefs in the absence of any political group motivation. This renders much existing evidence for the hypothesis ambiguous. To shed new light on this issue, we conducted three studies in which we statistically controlled for people's prior factual beliefs-attempting to isolate a direct effect of political group identity-when estimating the association between their cognitive sophistication, political group identity, and reasoning in the paradigmatic study design used in the literature. We observed a robust direct effect of political group identity on reasoning but found no evidence that cognitive sophistication magnified this effect. In contrast, we found fairly consistent evidence that cognitive sophistication magnified a direct effect of prior factual beliefs on reasoning. Our results suggest that there is currently a lack of clear empirical evidence that cognitive sophistication magnifies politically motivated reasoning as commonly understood and emphasize the conceptual and empirical challenges that confront tests of this hypothesis. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

41 citations


Journal ArticleDOI
TL;DR: It is found that avoiding retrieval leads to significant forgetting in healthy individuals, and that inducing more specific suppression mechanisms fosters voluntary forgetting, suggesting that intact suppression-induced forgetting is a hallmark of psychological well-being.
Abstract: It is still debated whether suppressing the retrieval of unwanted memories causes forgetting and whether this constitutes a beneficial mechanism. To shed light on these 2 questions, we scrutinize the evidence for such suppression-induced forgetting (SIF) and examine whether it is deficient in psychological disorders characterized by intrusive thoughts. Specifically, we performed a focused meta-analysis of studies that have used the think/no-think procedure to test SIF in individuals either affected by psychological disorders or exhibiting high scores on related traits. Overall, across 96 effects from 25 studies, we found that avoiding retrieval leads to significant forgetting in healthy individuals, with a small to moderate effect size (0.28, 95% CI [0.14, 0.43]). Importantly, this effect was indeed larger than for more anxious (-0.21, 95% CI [-0.41, -0.02]) or depressed individuals (0.05, 95% CI [-0.19, 0.29])-though estimates for the healthy may be inflated by publication bias. In contrast, individuals with a stronger repressive coping style showed greater SIF (0.42, 95% CI [0.32, 0.52]). Furthermore, moderator analyses revealed that SIF varied with the exact suppression mechanism that participants were instructed to engage. For healthy individuals, the effect sizes were considerably larger when instructions induced specific mechanisms of direct retrieval suppression or thought substitution than when they were unspecific. These results suggest that intact suppression-induced forgetting is a hallmark of psychological well-being, and that inducing more specific suppression mechanisms fosters voluntary forgetting. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
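
As a schematic of the aggregation step behind effect sizes like those above (simple inverse-variance pooling with made-up numbers; the paper itself used a three-level model to handle multiple effects per study):

```python
# Toy inverse-variance pooling sketch (hypothetical effect sizes; a
# simplification of the three-level meta-analytic model used in the paper).
import numpy as np

g = np.array([0.35, 0.10, 0.42, 0.28])  # per-study standardized effects (invented)
v = np.array([0.02, 0.03, 0.01, 0.02])  # their sampling variances (invented)

w = 1 / v                               # weight each study by its precision
pooled = np.sum(w * g) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(pooled, pooled - 1.96 * se, pooled + 1.96 * se)  # estimate and 95% CI
```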

40 citations


Journal ArticleDOI
TL;DR: The effects of bilingual experience on proactive control, to the extent that they exist in younger adults, are likely small, and future studies will require even larger or qualitatively different samples in combination with valid, granular quantifications of language experience to reveal predictive effects on novel participants.
Abstract: We used insights from machine learning to address an important but contentious question: Is bilingual language experience associated with executive control abilities? Specifically, we assess proactive executive control for over 400 young adult bilinguals via reaction time (RT) on an AX continuous performance task (AX-CPT). We measured bilingual experience as a continuous, multidimensional spectrum (i.e., age of acquisition, language entropy, and sheer second language exposure). Linear mixed effects regression analyses indicated significant associations between bilingual language experience and proactive control, consistent with previous work. Information criteria (e.g., AIC) and cross-validation further suggested that these models are robust in predicting data from novel, unmodeled participants. These results were bolstered by cross-validated LASSO regression, a form of penalized regression. However, the results of both cross-validation procedures also indicated that similar predictive performance could be achieved through simpler models that only included information about the AX-CPT (i.e., trial type). Collectively, these results suggest that the effects of bilingual experience on proactive control, to the extent that they exist in younger adults, are likely small. Thus, future studies will require even larger or qualitatively different samples (e.g., older adults or children) in combination with valid, granular quantifications of language experience to reveal predictive effects on novel participants. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
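
A minimal sketch of the cross-validated LASSO step using scikit-learn (the data and predictor names are placeholders for the bilingual-experience measures described above; the actual analyses also involved mixed-effects regression):

```python
# Hedged sketch of cross-validated LASSO (penalized) regression on
# simulated stand-ins for the bilingual-experience predictors.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 3))                        # e.g., AoA, language entropy, L2 exposure
rt = 500 + 0.5 * X[:, 0] + rng.normal(0, 50, n)    # weak true association with RT

model = make_pipeline(StandardScaler(), LassoCV(cv=10))
model.fit(X, rt)
print(model.named_steps["lassocv"].coef_)  # penalization shrinks weak predictors toward 0
```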

36 citations


Journal ArticleDOI
TL;DR: The study demonstrates experience-driven top-down modulations of saliency signals at the overall-priority and dimension-specific levels that do not reach down to specific distractor features.
Abstract: Many attention theories assume that selection is guided by a preattentive, spatial representation of the scene that combines bottom-up stimulus information with top-down influences (task goals and prior experience) to code for potentially relevant locations (priority map). At which level(s) of priority computation top-down influences modulate bottom-up stimulus signals is an open question. In a visual-search task, here we induced experience-driven spatial suppression (statistical learning) by presenting 1 of 2 salient distractors more frequently in one display region than the other. When a distractor standing out in the same dimension as the target was spatially biased in Experiment 1, processing of both the target and another, spatially unbiased distractor standing out in a different dimension was likewise hampered in the suppressed region. This indicates that constraining spatial suppression to a specific distractor feature is not possible, and participants instead resort to purely space-based (distractor-feature-independent) suppression at a supradimensional, overall-priority map. In line with a common locus of suppression, a novel computational model of distraction in visual search captures all 3 location effects with a single spatial-weighting parameter. In contrast, when the different-dimension distractor was spatially biased in Experiment 2, processing of other objects in the suppressed region was unaffected, indicating suppression constrained to a subordinate, dimension-specific level of priority computation. In sum, we demonstrate experience-driven top-down modulations of saliency signals at the overall-priority and dimension-specific levels that do not reach down to the specific distractor features. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
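
The single-parameter account can be sketched as follows (a conceptual toy, not the authors' fitted model): one learned weight down-scales every signal in the suppressed region, regardless of which feature or dimension produced it.

```python
# Conceptual sketch (my simplification) of a priority map with a single
# spatial-weighting parameter w for the statistically suppressed region.
import numpy as np

salience = np.array([1.0, 1.0, 1.0, 1.0])  # bottom-up signal at 4 locations
region = np.array([0, 0, 1, 1])            # 1 = frequent-distractor region
w = 0.7                                    # learned suppression weight

priority = salience * np.where(region == 1, w, 1.0)
print(priority)  # targets and distractors in the suppressed region lose priority alike
```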

33 citations


Journal ArticleDOI
TL;DR: It is demonstrated that reward-induced cognitive effort allocation in a task-switching paradigm is sensitive to reward context, consistent with the notion of relative value, and confirmed that reward relativity factors into the value computation driving effort allocation, revealing that motivated cognitive control is all relative.
Abstract: Although people seek to avoid expenditure of cognitive effort, reward incentives can increase investment of processing resources in challenging situations that require cognitive control, resulting in improved performance. At the same time, subjective value is relative, rather than absolute: The value of a reward is increased if the local context is reward-poor versus reward-rich. Although this notion is supported by work in economics and psychology, we propose that reward relativity should also play a critical role in the cost-benefit computations that inform cognitive effort allocation. Here we demonstrate that reward-induced cognitive effort allocation in a task-switching paradigm is sensitive to reward context, consistent with the notion of relative value. Informed by predictions of a computational model of divisive reward normalization, we demonstrate that reward-induced switch cost reductions depend critically upon reward context, such that the same reward amount engenders greater control allocation in impoverished versus rich reward context. Succinctly, these results confirm that reward relativity factors into the value computation driving effort allocation, revealing that motivated cognitive control, like choice, is all relative. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
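
The divisive-normalization intuition can be shown with a toy value function (the functional form and parameters here are illustrative, not the authors' fitted model): the same absolute reward yields a larger normalized value in a reward-poor context than in a reward-rich one.

```python
# Toy divisive-normalization sketch (assumed form, not the fitted model):
# subjective value = reward divided by a context-dependent denominator.
def normalized_value(reward, context_mean, sigma=1.0, w=1.0):
    return reward / (sigma + w * context_mean)

print(normalized_value(8, context_mean=2))   # reward-poor context -> higher subjective value
print(normalized_value(8, context_mean=16))  # reward-rich context -> lower subjective value
```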

27 citations


Journal ArticleDOI
TL;DR: The studies show that deontological constraints against instrumental harm are not absolute but get weaker the less people morally value the respective entity; the constraints are strongest for humans, followed by dogs, chimpanzees, pigs, and finally inanimate objects.
Abstract: Most people hold that it is wrong to sacrifice some humans to save a greater number of humans. Do people also think that it is wrong to sacrifice some animals to save a greater number of animals, or do they answer such questions about harm to animals by engaging in a utilitarian cost-benefit calculation? Across 10 studies (N = 4,662), using hypothetical and real-life sacrificial moral dilemmas, we found that participants considered it more permissible to harm a few animals to save a greater number of animals than to harm a few humans to save a greater number of humans. This was explained by a reduced general aversion to harm animals compared with humans, which was partly driven by participants perceiving animals to suffer less and to have lower cognitive capacity than humans. However, the effect persisted even in cases where animals were described as having greater suffering capacity and greater cognitive capacity than some humans, and even when participants felt more socially connected to animals than to humans. The reduced aversion to harming animals was thus also partly due to speciesism-the tendency to ascribe lower moral value to animals due to their species-membership alone. In sum, our studies show that deontological constraints against instrumental harm are not absolute but get weaker the less people morally value the respective entity. These constraints are strongest for humans, followed by dogs, chimpanzees, pigs, and finally inanimate objects. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

27 citations


Journal ArticleDOI
TL;DR: Results indicated that interactions including voice created stronger social bonds and no increase in awkwardness, compared with interactions including text (e-mail, text chat), but miscalibrated expectations about awkwardness or connection could lead to suboptimal preferences for text-based media.
Abstract: Positive social connections improve wellbeing. Technology increasingly affords a wide variety of media that people can use to connect with others, but not all media strengthen social connection equally. Optimizing wellbeing, therefore, requires choosing how to connect with others wisely. We predicted that people's preferences for communication media would be at least partly guided by the expected costs and benefits of the interaction-specifically, how awkward or uncomfortable the interaction would be and how connected they would feel to their partner-but that people's expectations would consistently undervalue the overall benefit of more intimate voice-based interactions. We tested this hypothesis by asking participants in a field experiment to reconnect with an old friend either over the phone or e-mail, and by asking laboratory participants to "chat" with a stranger over video, voice, or text-based media. Results indicated that interactions including voice (phone, video chat, and voice chat) created stronger social bonds and no increase in awkwardness, compared with interactions including text (e-mail, text chat), but miscalibrated expectations about awkwardness or connection could lead to suboptimal preferences for text-based media. Misunderstanding the consequences of using different communication media could create preferences for media that do not maximize either one's own or others' wellbeing. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

25 citations


Journal ArticleDOI
TL;DR: The overconfidence transmission hypothesis, which predicts that individuals calibrate their self-assessments in response to the confidence others display in their social group, is proposed and tested; the results suggest that social transmission processes may be in part responsible for why local confidence norms emerge in groups, teams, and organizations.
Abstract: We propose and test the overconfidence transmission hypothesis, which predicts that individuals calibrate their self-assessments in response to the confidence others display in their social group. Six studies that deploy a mix of correlational and experimental methods support this hypothesis. Evidence indicates that individuals randomly assigned to collaborate in laboratory dyads converged on levels of overconfidence about their own performance rankings. In a controlled experimental context, observing overconfident peers causally increased an individual's degree of bias. The transmission effect persisted over time and across task domains, elevating overconfidence even days after initial exposure. In addition, overconfidence spread across indirect social ties (person to person to person), and transmission operated outside of reported awareness. However, individuals showed a selective in-group bias; overconfidence was acquired only when displayed by a member of one's in-group (and not out-group), consistent with theoretical notions of selective learning bias. Combined, these results advance understanding of the social factors that underlie interindividual differences in overconfidence and suggest that social transmission processes may be in part responsible for why local confidence norms emerge in groups, teams, and organizations. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: This study tests the hypothesis that an individual's internal decision confidence can be used as a signal to learn the accuracy of others' advice, even in the absence of feedback, and explores implications of these individual-level heuristics for network-level patterns of trust and belief formation.
Abstract: In a world where ideas flow freely across multiple platforms, people must often rely on others' advice and opinions without an objective standard to judge whether this information is accurate. The present study explores the hypothesis that an individual's internal decision confidence can be used as a signal to learn the accuracy of others' advice, even in the absence of feedback. According to this "agreement-in-confidence" hypothesis, people can learn about an advisor's accuracy across multiple interactions according to whether the advice offered agrees with their own initial opinions, weighted by the confidence with which these initial opinions are held. We test this hypothesis using a judge-advisor system paradigm to precisely manipulate the profiles of virtual advisors in a perceptual decision-making task. We find that when advisors' and participants' judgments are independent, people can correctly learn advisors' features, like their accuracy and calibration, whether or not objective feedback is available. However, when their judgments (and thus errors) are correlated-as is the case in many real social contexts-predictable distortions in trust can be observed between feedback and feedback-free scenarios. Using agent-based simulations, we explore implications of these individual-level heuristics for network-level patterns of trust and belief formation. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
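
A minimal rendering of the agreement-in-confidence rule (my formulation of the hypothesis as stated above, not the paper's model code): trust in an advisor is nudged up when the advice agrees with one's initial judgment and down when it disagrees, scaled by the confidence of that initial judgment.

```python
# Illustrative "agreement in confidence" update rule (hypothetical
# parameterization): feedback-free learning of an advisor's accuracy.
def update_trust(trust, agreed, confidence, lr=0.1):
    # Move trust toward 1 on confident agreement, toward 0 on confident
    # disagreement; low-confidence trials barely move it.
    target = 1.0 if agreed else 0.0
    return trust + lr * confidence * (target - trust)

trust = 0.5
for agreed, conf in [(True, 0.9), (True, 0.8), (False, 0.2)]:
    trust = update_trust(trust, agreed, conf)
print(round(trust, 3))
```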

Journal ArticleDOI
TL;DR: It is found that with more contextually enriched and immersive pleas for help, participants preferred to escape feeling compassion, although their preference did not differ from also escaping remaining objectively detached, results that temper strong arguments that compassion is an easier route to prosocial motivation.
Abstract: Compassion-the warm, caregiving emotion that emerges from witnessing the suffering of others-has long been considered an important moral emotion for motivating and sustaining prosocial behavior. Some suggest that compassion draws from empathic feelings to motivate prosocial behavior, whereas others try to disentangle these processes to examine their different functions for human prosociality. Many suggest that empathy, which involves sharing in others' experiences, can be biased and exhausting, whereas warm compassionate concern is more rewarding and sustainable. If compassion is indeed a warm and positive experience, then people should be motivated to seek it out when given the opportunity. Here, we ask whether people spontaneously choose to feel compassion, and whether such choices are associated with perceiving compassion as cognitively costly. Across all studies, we found that people opted to avoid compassion when given the opportunity, reported compassion to be more cognitively taxing than empathy and objective detachment, and opted to feel compassion less often to the degree they viewed compassion as cognitively costly. We also revealed two important boundary conditions: first, people were less likely to avoid compassion for close (vs. distant) others, and this choice difference was associated with viewing compassion for close others as less cognitively costly. Second, in the final study we found that with more contextually enriched and immersive pleas for help, participants preferred to escape feeling compassion, although their preference did not differ from also escaping remaining objectively detached. These results temper strong arguments that compassion is an easier route to prosocial motivation. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: It is found that observers were biased to report the presence of expected action outcomes, and this bias is suggestive of a mechanism that would enable generation of largely veridical representations of one's actions and their consequences in an inherently uncertain sensory world.
Abstract: We predict how our actions will influence the world around us. Prevailing models of action control propose that we use these predictions to suppress or ‘cancel’ perception of expected action outcomes. However, contrasting normative Bayesian models in sensory cognition suggest that top-down predictions bias observers toward perceiving what they expect. Here we adjudicated between these models by investigating how expectations influence perceptual decisions about briefly presented action outcomes. Contrary to dominant cancellation models, we found that observers’ perceptual decisions are biased toward the presence of outcomes congruent with their actions. Computational modelling revealed this action-induced bias reflected a bias in how sensory evidence was accumulated, rather than a baseline shift in decision circuits. In combination, these results reveal a gain control mechanism that can explain how we generate largely veridical representations of our actions and their consequences in an inherently uncertain sensory world.
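
The distinction the modelling draws, between a bias in how evidence is accumulated (a gain on the drift) and a baseline shift in decision circuits (a shifted starting point), can be sketched with a toy accumulator (arbitrary parameters; not the paper's model code):

```python
# Toy bounded-accumulator sketch (assumed parameters): an evidence-gain
# (drift) bias vs. a starting-point (baseline) bias both inflate "present"
# reports, but via different mechanisms.
import numpy as np

rng = np.random.default_rng(3)

def p_report_present(drift_bias=0.0, start=0.0, n_trials=2000, n_steps=100):
    present = 0
    for _ in range(n_trials):
        x = start
        for _ in range(n_steps):
            x += drift_bias + rng.normal(0, 1)   # accumulate noisy evidence
            if abs(x) >= 10:                     # hit a decision bound
                break
        present += x > 0
    return present / n_trials

print(p_report_present())                 # unbiased: ~.50
print(p_report_present(drift_bias=0.2))   # gain on evidence accumulation
print(p_report_present(start=3.0))        # baseline shift in the decision variable
```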

Journal ArticleDOI
TL;DR: This paper found that people often deny the mutually beneficial nature of exchange, instead espousing the belief that one or both parties fail to benefit from it, with the most important influences being mercantilist theories of value and theory-of-mind limits.
Abstract: A core proposition in economics is that voluntary exchanges benefit both parties. We show that people often deny the mutually beneficial nature of exchange, instead espousing the belief that one or both parties fail to benefit from the exchange. Across four studies (and 8 further studies in the online supplementary materials), participants read about simple exchanges of goods and services, judging whether each party to the transaction was better off or worse off afterward. These studies revealed that win-win denial is pervasive, with buyers consistently seen as less likely to benefit from transactions than sellers. Several potential psychological mechanisms underlying win-win denial are considered, with the most important influences being mercantilist theories of value (confusing wealth for money) and theory of mind limits (failing to observe that people do not arbitrarily enter exchanges). We argue that these results have widespread implications for politics and society. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: This is the first theoretical account of referential efficiency that is sensitive to the incrementality of language processing, making different cross-linguistic predictions depending on word order.
Abstract: Pragmatic theories and computational models of reference must account for people's frequent use of redundant color adjectives (e.g., referring to a single triangle as "the blue triangle"). The standard pragmatic view holds that the informativity of a referential expression depends on pragmatic contrast: Color adjectives should be used to contrast competitors of the same kind to preempt an ambiguity (e.g., between several triangles of different colors), otherwise they are redundant. Here we propose an alternative to the standard view, the incremental efficiency hypothesis, according to which the efficiency of a referential expression must be calculated incrementally over the entire visual context. This is the first theoretical account of referential efficiency that is sensitive to the incrementality of language processing, making different cross-linguistic predictions depending on word order. Experiment 1 confirmed that English speakers produced more redundant color adjectives (e.g., "the blue triangle") than Spanish speakers (e.g., "el triangulo azul"), but both language groups used more redundant color adjectives in denser displays where it would be more efficient. In Experiments 2A and 2B, we used eye tracking to show that pragmatic contrast is not a processing constraint. Instead, incrementality and efficiency determine that English listeners establish color contrast across categories (BLUE SHAPES > TRIANGULAR ONE), whereas Spanish listeners establish color contrast within a category (TRIANGLES > BLUE ONE). Spanish listeners, however, reversed their visual search strategy when tested in English immediately after. Our results show that speakers and listeners of different languages exploit word order to increase communicative efficiency. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The results suggest that more intelligent individuals benefit from an adaptive modulation of theta-band synchronization during the time-course of information processing, which supports theoretical accounts of intelligence and emphasizes the role of interregional goal-directed information-processing for cognitive control processes in human intelligence.
Abstract: Individual differences in cognitive control have been suggested to act as a domain-general bottleneck constraining performance in a variety of cognitive ability measures, including but not limited to fluid intelligence, working memory capacity, and processing speed. However, owing to psychometric problems associated with the measurement of individual differences in cognitive control, it has been challenging to empirically test the assumption that individual differences in cognitive control underlie individual differences in cognitive abilities. In the present study, we addressed these issues by analyzing the chronometry of intelligence-related differences in midfrontal global theta connectivity, which has been shown to reflect cognitive control functions. We demonstrate in a sample of 98 adults, who completed a cognitive control task while their electroencephalogram was recorded, that individual differences in midfrontal global theta connectivity during stages of higher-order information-processing explained 65% of the variance in fluid intelligence. In comparison, task-evoked theta connectivity during earlier stages of information processing was not related to fluid intelligence. These results suggest that more intelligent individuals benefit from an adaptive modulation of theta-band synchronization during the time-course of information processing. Moreover, they emphasize the role of interregional goal-directed information-processing for cognitive control processes in human intelligence and support theoretical accounts of intelligence, which propose that individual differences in cognitive control processes give rise to individual differences in cognitive abilities. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Posted ContentDOI
TL;DR: In this paper, the authors meta-analyze the evidence for the impact of episodic future thinking on intertemporal choices that have monetary or health-relevant consequences, finding that the effect was stronger when the imagined events were positive, more vivid, and related to the delayed choice.
Abstract: Episodic future thinking (EFT) denotes our capacity to imagine prospective events. It has been suggested to promote farsighted decisions that entail a trade-off between short-term versus long-term gains. Here, we meta-analyze the evidence for the impact of EFT on such intertemporal choices that have monetary or health-relevant consequences. Across 174 effect sizes from 48 articles, a three-level model yielded a medium-sized effect of g = .44, 95% CI [.33, .55]. Notably, this analysis included a substantial number of unpublished experiments, and the effect remained significant following further adjustments for remaining publication bias. We exploited the observed heterogeneity to determine critical core components that moderate the impact of EFT. Specifically, the effect was stronger when the imagined events were positive, more vivid, and related to the delayed choice. We further obtained evidence for the contribution of the episodicity and future-orientedness of EFT. These results indicate that the impact of EFT cannot simply be accounted for by other modes of prospection (e.g., semantic future thinking). Of note, EFT had a greater impact in samples characterized by choice impulsivity (e.g., in obesity), suggesting that EFT can ameliorate maladaptive decision making. It may accordingly constitute a beneficial intervention for individuals who tend to make myopic decisions. Our analyses moreover indicated that the effect is unlikely to merely reflect demand characteristics. This meta-analysis highlights the potential of EFT in promoting long-term goals, a finding that extends from the laboratory to real-life decisions. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The results suggest that blatant dehumanization may be more widespread than previously recognized and that it can persist even in the minds of those who explicitly reject it.
Abstract: Research suggests that some people, particularly those on the political right, tend to blatantly dehumanize low-status groups. However, these findings have largely relied on self-report measures, which are notoriously subject to social desirability concerns. To better understand just how widely blatant forms of intergroup dehumanization might extend, the present article leverages an unobtrusive, data-driven perceptual task to examine how U.S. respondents mentally represent "Americans" versus "Arabs" (a low-status group in the United States that is often explicitly targeted with blatant dehumanization). Data from 2 reverse-correlation experiments (original N = 108; preregistered replication N = 336) and 7 rating studies (N = 2,301) suggest that U.S. respondents' mental representations of Arabs are significantly more dehumanizing than their representations of Americans. Furthermore, analyses indicate that this phenomenon is not reducible to a general tendency for our sample to mentally represent Arabs more negatively than Americans. Finally, these findings reveal that blatantly dehumanizing representations of Arabs can be just as prevalent among individuals exhibiting low levels of explicit dehumanization (e.g., liberals) as among individuals exhibiting high levels of explicit dehumanization (e.g., conservatives)-a phenomenon into which exploratory analyses suggest liberals may have only limited awareness. Taken together, these results suggest that blatant dehumanization may be more widespread than previously recognized and that it can persist even in the minds of those who explicitly reject it. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
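
The reverse-correlation logic behind the perceptual task can be sketched in a few lines (a toy with synthetic noise vectors; the actual studies used face images): averaging the noise patterns that drive one response versus the other recovers the respondent's internal template.

```python
# Schematic reverse-correlation step (toy stimuli, hypothetical template):
# the classification image is the mean noise on "chosen" trials minus
# the mean noise on "not chosen" trials.
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_pix = 1000, 64
noise = rng.normal(size=(n_trials, n_pix))
template = np.zeros(n_pix)
template[10:20] = 1.0                                  # assumed internal template
chosen = noise @ template + rng.normal(0, 1, n_trials) > 0

classification_image = noise[chosen].mean(axis=0) - noise[~chosen].mean(axis=0)
print(classification_image[10:20].mean())  # elevated in the template region
print(classification_image[30:40].mean())  # ~0 elsewhere
```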

Journal ArticleDOI
TL;DR: This article found that cultural tightness, the strictness of cultural norms and normative punishment, helps to catalyze punitive religious beliefs by increasing people's motivation to punish norm violators, and that tightness mediates the impact of ecological threat on punitive belief.
Abstract: Billions of people from around the world believe in vengeful gods who punish immoral behavior. These punitive religious beliefs may foster prosociality and contribute to large-scale cooperation, but little is known about how these beliefs emerge and why people adopt them in the first place. We present a cultural-psychological model suggesting that cultural tightness-the strictness of cultural norms and normative punishment-helps to catalyze punitive religious beliefs by increasing people's motivation to punish norm violators. Our model also suggests that tightness mediates the impact of ecological threat on punitive belief, explaining why punitive religious beliefs are most common in regions with high levels of ecological threat. Five multimethod studies support these predictions. Studies 1-3 focus on the effect of cultural tightness on punitive religious beliefs. Historical increases in cultural tightness precede and predict historical increases in punitive beliefs (Study 1), and both manipulating people's support for tightness (Study 2) and placing people in a simulated tight society (Study 3) increase punitive religious beliefs via the personal motivation to punish norm violators. Studies 4-5 focus on whether cultural tightness mediates the link between ecological threat and punitive religious beliefs. Cultural tightness helps explain why U.S. states with high ecological threat (e.g., natural hazards, scarcity) have the highest levels of punitive religious beliefs (Study 4) and why experimental manipulations of threat increase punitive religious beliefs (Study 5). Past research has shown how religion impacts culture, but our studies show how culture can shape religion. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: It is shown that go/no-go training influences explicit liking for smartphone apps and that this liking partially mediates the effect of the training on consequential choices for using these apps 1 day later (Experiment 2).
Abstract: Human behavior can be classified into 2 basic categories: execution of responses and withholding responses. This classification is used in go/no-go training, where people respond to some objects and withhold their responses to other objects. Despite its simplicity, there is now substantial evidence that such training is powerful in changing human behavior toward such objects. However, it is poorly understood how simple responses can influence behavior. Contrary to the remarkably tenacious idea that go/no-go training changes behavior by strengthening inhibitory control, we propose that the training changes behavior via changes in explicit liking of objects. In two preregistered experiments, we show that go/no-go training influences explicit liking for smartphone apps (Experiments 1 and 2) and that this liking partially mediates the effect of the training on consequential choices for using these apps 1 day later (Experiment 2). The results highlight the role of evaluations when examining how motor response training influences behavior. This knowledge can inform development of more effective applied motor response training procedures and raises new theoretical questions on the relation between motor responses and affect. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The findings suggest a prolonged period of development and refinement of hierarchical beat perception, and a surprisingly weak overall ability to attend to 2 beat levels at the same time across all ages.
Abstract: Most music is temporally organized within a metrical hierarchy, having nested periodic patterns that give rise to the experience of stronger (downbeat) and weaker (upbeat) events. Musical meter presumably makes it possible to dance, sing, and play instruments in synchrony with others. It is nevertheless unclear whether or not listeners perceive multiple levels of periodicity simultaneously, and if they do, when and how they learn to do this. We tested children, adolescents, and musically trained and untrained adults with a new meter perception task. We presented excerpts of human-performed music paired with metronomes that matched or mismatched the metrical structure of the music at 2 hierarchical levels (beat and measure), and asked listeners to provide a rating of fit of metronome and music. Fit ratings suggested that adults with and without musical training were sensitive to both levels of meter simultaneously, but ratings were more strongly influenced by beat-level than by measure-level synchrony. Sensitivity to two simultaneous levels of meter was not evident in children or adolescents. Sensitivity to the beat alone was apparent in the youngest children and increased with age, whereas sensitivity to the measure alone was not present in younger children (5- to 8-year-olds). These findings suggest a prolonged period of development and refinement of hierarchical beat perception and surprisingly weak overall ability to attend to 2 beat levels at the same time across all ages. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: This paper found that children and adults encode minimal group membership as a marker for future collaboration, and that experimentally manipulating this expectation can eliminate their minimal ingroup bias, though not for gender.
Abstract: From early in development, humans show a strong preference for members of their own groups, even in so-called minimal (i.e., arbitrary and unfamiliar) groups, leading to tremendous negative consequences such as outgroup discrimination and derogation. A better understanding of the underlying processes driving humans' group mindedness is an important first step toward fighting discrimination and inequality on a bigger level. Based on the assumption that minimal group allocation elicits the anticipation of future within-group cooperation, which in turn elicits ingroup preference, we investigate whether changing participants' anticipation from within-group cooperation to between-group cooperation reduces their ingroup bias. In the present set of five studies (overall N = 465) we test this claim in two different populations (children and adults), in two different countries (United States and Germany), and in two kinds of groups (minimal and social group based on gender). Results confirm that changing participants' anticipation of who they will cooperate with from ingroup to outgroup members significantly reduces their ingroup bias in minimal groups, though not for gender, a noncoalitional group. In summary, these experiments provide robust evidence for the hypothesis that children and adults encode minimal group membership as a marker for future collaboration. They show that experimentally manipulating this expectation can eliminate their minimal ingroup bias. This study sheds light on the underlying cognitive processes in intergroup behavior throughout development and opens up new avenues for research on reducing ingroup bias and discrimination. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: There was no association between any measure of cognitive functioning and whether participants were currently "brain training" or not, even for the most committed brain trainers, and duration of brain training showed no relationship with any cognitive performance measure.
Abstract: The foundational tenet of brain training is that general cognitive functioning can be enhanced by completing computerized games, a notion that is both intuitive and appealing. Moreover, there is strong incentive to improve our cognitive abilities, so much so that it has driven a billion-dollar industry. However, whether brain training can really produce these desired outcomes continues to be debated. This is, in part, because the literature is replete with studies that use ill-defined criteria for establishing transferable improvements to cognition, often using single training and outcome measures with small samples. To overcome these limitations, we conducted a large-scale online study to examine whether practices and beliefs about brain training are associated with better cognition. We recruited a diverse sample of over 1000 participants, who had been using an assortment of brain training programs for up to 5 years. Cognition was assessed using multiple tests that measure attention, reasoning, working memory and planning. We found no association between any measure of cognitive functioning and whether participants were currently "brain training" or not, even for the most committed brain trainers. Duration of brain training also showed no relationship with any cognitive performance measure. This result was the same regardless of participant age, which brain training program they used, or whether they expected brain training to work. Our results pose a significant challenge for "brain training" programs that purport to improve general cognitive functioning among the general population. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: In this paper, the Balloon Analogue Risk Task (BART) was investigated and it was found that the typical implementation of this task violates the principle of representative design, thus conflicting with the expectations people likely form from real balloons, which may explain the previously observed limitations in some of the BART's psychometric properties.
Abstract: Representative design refers to the idea that experimental stimuli should be sampled or designed such that they represent the environments to which measured constructs are supposed to generalize. In this article we investigate the role of representative design in achieving valid and reliable psychological assessments, by focusing on a widely used behavioral measure of risk taking-the Balloon Analogue Risk Task (BART). Specifically, we demonstrate that the typical implementation of this task violates the principle of representative design, thus conflicting with the expectations people likely form from real balloons. This observation may provide an explanation for the previously observed limitations in some of the BART's psychometric properties (e.g., convergent validity with other measures of risk taking). To experimentally test the effects of improved representative designs, we conducted two extensive empirical studies (N = 772 and N = 632), finding that participants acquired more accurate beliefs about the optimal behavior in the BART because of these task adaptions. Yet, improving the task's representativeness proved to be insufficient to enhance the BART's psychometric properties. It follows that for the development of valid behavioral measurement instruments-as are needed, for instance, in functional neuroimaging studies-our field has to overcome the philosophy of the "repair program" (i.e., fixing existing tasks). Instead, we suggest that the development of valid task designs requires novel ecological assessments, aimed at identifying those real-life behaviors and associated psychological processes that lab tasks are supposed to capture and generalize to. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
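
For context on what "optimal behavior in the BART" means, here is a toy expected-value calculation under standard task parameters (illustrative, not the authors' analysis code): with a burst point uniform over 128 pumps, expected earnings peak at 64 pumps.

```python
# Toy expected-value calculation for a BART-like balloon with a burst
# point uniform over 1..128 pumps (standard illustrative parameters).
import numpy as np

max_pumps = 128
pumps = np.arange(1, max_pumps + 1)
p_survive = 1 - pumps / max_pumps        # P(balloon has not burst after k pumps)
expected = pumps * p_survive             # banked points times survival probability
print(int(pumps[np.argmax(expected)]))   # optimal stopping point: 64
```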

Journal ArticleDOI
TL;DR: Two important cardiac afferent effects on threat learning and memory are revealed: 1) Cardiac signals bias processing toward threat; and 2) cardiac signals are a context for fear memory; altering this context can disrupt the memory.
Abstract: Fear is coupled to states of physiological arousal. We tested how learning and memory of threat, specifically conditioned fear, is influenced by interoceptive signals. Forty healthy individuals were exposed to two threat (conditioned stimuli [CS+], paired with electrocutaneous shocks) and two safety (CS-) stimuli, time-locked to either cardiac ventricular systole (when arterial baroreceptors signal cardiovascular arousal to brainstem), or diastole (when these afferent signals are quiescent). Threat learning was indexed objectively using skin conductance responses (SCRs). During acquisition of threat contingencies, cardiac effects dominated: Stimuli (both CS+ and CS-) presented at systole evoked greater SCR responses, relative to stimuli (both CS+ and CS-) presented at diastole. This difference was amplified in more anxious individuals. Learning of conditioned fear was established by the end of the acquisition phase, which was followed by an extinction phase when unpaired CSs were presented at either the same or switched cardiac contingencies. One day later, electrocutaneous shocks triggered the reinstatement of fear responses. Subsequent presentation of stimuli previously encoded at systole evoked higher SCRs. Moreover, only those participants for whom stimuli had the same cardiac-contingency over both acquisition and extinction phases retained conditioned fear memory (i.e., CS+ > CS-). Our findings reveal two important cardiac afferent effects on threat learning and memory: 1) Cardiac signals bias processing toward threat; and 2) cardiac signals are a context for fear memory; altering this context can disrupt the memory. These observations suggest how threat reactivity may be reinforced and maintained by both acute and enduring states of cardiac arousal. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Journal ArticleDOI
TL;DR: The striking similarity between rhythmic attentional selection of mental representations and perceptual information suggests that attentional oscillations are a general mechanism of information processing in human cognition.
Abstract: Attention selects relevant information regardless of whether it is physically present or internally stored in working memory. Perceptual research has shown that attentional selection of external information is better conceived as rhythmic prioritization than as stable allocation. Here we tested this principle using information processing of internal representations held in working memory. Participants memorized 4 spatial positions that formed the end points of 2 objects. One of the positions was cued for a delayed match-nonmatch test. When uncued positions were probed, participants responded faster to uncued positions located on the same object as the cued position than to those located on the other object, revealing object-based attention in working memory. Manipulating the interval between cue and probe at a high temporal resolution revealed that reaction times oscillated at a theta rhythm of 6 Hz. Moreover, oscillations showed an antiphase relationship between memorized but uncued positions on the same versus other object as the cued position, suggesting that attentional prioritization fluctuated rhythmically in an object-based manner. Our results demonstrate the highly rhythmic nature of attentional selection in working memory. Moreover, the striking similarity between rhythmic attentional selection of mental representations and perceptual information suggests that attentional oscillations are a general mechanism of information processing in human cognition. These findings have important implications for current, attention-based models of working memory. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: Evidence is found that individuals high in obsessive-compulsive and anxious traits show a generalized increase in willingness-to-pay for unusable information about uncertain future outcomes, even though this behavior reduces their expected future reward.
Abstract: Aversion to uncertainty about the future has been proposed as a transdiagnostic trait underlying psychiatric diagnoses including obsessive-compulsive disorder and generalized anxiety. This association might explain the frequency of pathological information-seeking behaviors such as compulsive checking and reassurance-seeking in these disorders. Here we tested the behavioral predictions of this model using a noninstrumental information-seeking task that measured preferences for unusable information about future outcomes in different payout domains (gain, loss, and mixed gain/loss). We administered this task, along with a targeted battery of self-report questionnaires, to a general-population sample of 146 adult participants. Using computational cognitive modeling of choices to test competing theories of information valuation, we found evidence for a model in which preferences for costless and costly information about future outcomes were independent, and in which information preference was modulated by both outcome mean and outcome variance. Critically, we also found positive associations between a model parameter controlling preference for costly information and individual differences in latent traits of both anxiety and obsessive-compulsion. These associations were invariant across different payout domains, providing evidence that individuals high in obsessive-compulsive and anxious traits show a generalized increase in willingness-to-pay for unusable information about uncertain future outcomes, even though this behavior reduces their expected future reward. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: It is hypothesized that the evaluation of a prototype depends on the valence of its category; results from three experiments support this hypothesis, showing that greater typicality increases liking for positive categories and decreases liking for negative categories.
Abstract: A classic phenomenon known as the prototype preference effect (PPE) or beauty-in-averageness effect is that prototypical exemplars of a neutral category are preferred over atypical exemplars. This PPE has been explained in terms of deviance avoidance, hedonic fluency, or preference for certainty and familiarity. However, typicality also facilitates greater activation of category-related information. Thus, prototypes rather than atypical exemplars should be more associated with the valence of the category, either positive or negative. Hence, we hypothesize that the evaluation of a prototype depends on the valence of its category. Results from three experiments crossing a standard PPE paradigm with an evaluative conditioning procedure support our hypothesis. We show that for positive categories, greater typicality increases liking. Critically, for negative categories, greater typicality decreases liking. This pattern of results challenges dominant explanations of prototype evaluation. (PsycInfo Database Record (c) 2021 APA, all rights reserved).

Journal ArticleDOI
TL;DR: Across replication experiments, there is no consistent evidence for the claim that witnessing immoral behavior causes people to increase their general belief in free will, and a novel experiment demonstrated broad support for the norm-violation account, suggesting that people's willingness to attribute free will to others is malleable, but not because people are motivated to blame.
Abstract: Free will is often appraised as a necessary input for holding others morally or legally responsible for misdeeds. Recently, however, Clark and colleagues (2014) argued for the opposite causal relationship. They assert that moral judgments and the desire to punish motivate people's belief in free will. Three replication experiments (Studies 1-2b) attempt to reproduce these findings. Additionally, a novel experiment (Study 3) tests a theoretical challenge derived from attribution theory, which suggests that immoral behaviors do not uniquely influence free will judgments. Instead, our nonviolation model argues that norm deviations of any kind-good, bad, or strange-cause people to attribute more free will to agents. Across replication experiments we found no consistent evidence for the claim that witnessing immoral behavior causes people to increase their general belief in free will. By contrast, we replicated the finding that people attribute more free will to agents who behave immorally compared to a neutral control (Studies 2a and 3). Finally, our novel experiment demonstrated broad support for our norm-violation account, suggesting that people's willingness to attribute free will to others is malleable, but not because people are motivated to blame. Instead, this experiment shows that attributions of free will are best explained by people's expectations for norm adherence, and when these expectations are violated, people infer that an agent expressed their free will to do so. (PsycInfo Database Record (c) 2020 APA, all rights reserved).