
Showing papers in "Journal of Experimental Psychology: General in 2016"


Journal ArticleDOI
TL;DR: A new 7-dimensional model of self-reported ways of being independent or interdependent is developed and validated across cultures and will allow future researchers to test more accurately the implications of cultural models of selfhood for psychological processes in diverse ecocultural contexts.
Abstract: Markus and Kitayama’s (1991) theory of independent and interdependent self-construals had a major influence on social, personality, and developmental psychology by highlighting the role of culture in psychological processes. However, research has relied excessively on contrasts between North American and East Asian samples, and commonly used self-report measures of independence and interdependence frequently fail to show predicted cultural differences. We revisited the conceptualization and measurement of independent and interdependent self-construals in 2 large-scale multinational surveys, using improved methods for cross-cultural research. We developed (Study 1: N = 2924 students in 16 nations) and validated across cultures (Study 2: N = 7279 adults from 55 cultural groups in 33 nations) a new 7-dimensional model of self-reported ways of being independent or interdependent. Patterns of global variation support some of Markus and Kitayama’s predictions, but a simple contrast between independence and interdependence does not adequately capture the diverse models of selfhood that prevail in different world regions. Cultural groups emphasize different ways of being both independent and interdependent, depending on individualism-collectivism, national socioeconomic development, and religious heritage. Our 7-dimensional model will allow future researchers to test more accurately the implications of cultural models of selfhood for psychological processes in diverse ecocultural contexts. (PsycINFO Database Record (c) 2016 APA, all rights reserved)

309 citations


Journal ArticleDOI
TL;DR: The authors tested 9 interventions (8 real and 1 sham) to reduce implicit racial preferences over time and found that none were effective after a delay of several hours to several days, and also found that these interventions did not change explicit racial preferences and were not reliably moderated by motivations to respond without prejudice.
Abstract: Implicit preferences are malleable, but does that change last? We tested 9 interventions (8 real and 1 sham) to reduce implicit racial preferences over time. In 2 studies with a total of 6,321 participants, all 9 interventions immediately reduced implicit preferences. However, none were effective after a delay of several hours to several days. We also found that these interventions did not change explicit racial preferences and were not reliably moderated by motivations to respond without prejudice. Short-term malleability in implicit preferences does not necessarily lead to long-term change, raising new questions about the flexibility and stability of implicit preferences.

298 citations


Journal ArticleDOI
TL;DR: A meta-analysis of 13 new experiments and 9 experiments from other groups found that promoting intuition relative to deliberation increased giving in a Dictator Game among women, but not among men.
Abstract: Are humans intuitively altruistic, or does altruism require self-control? A theory of social heuristics, whereby intuitive responses favor typically successful behaviors, suggests that the answer may depend on who you are. In particular, evidence suggests that women are expected to behave altruistically, and are punished for failing to be altruistic, to a much greater extent than men. Thus, women (but not men) may internalize altruism as their intuitive response. Indeed, a meta-analysis of 13 new experiments and 9 experiments from other groups found that promoting intuition relative to deliberation increased giving in a Dictator Game among women, but not among men (Study 1, N = 4,366). Furthermore, this effect was shown to be moderated by explicit sex role identification (Study 2, N = 1,831): the more women described themselves using traditionally masculine attributes (e.g., dominance, independence) relative to traditionally feminine attributes (e.g., warmth, tenderness), the more deliberation reduced their altruism. Our findings shed light on the connection between gender and altruism, and highlight the importance of social heuristics in human prosociality.

213 citations


Journal ArticleDOI
TL;DR: In this article, the relations among various spatial and mathematics skills were assessed in a cross-sectional study of 854 children from kindergarten, third, and sixth grades (i.e., 5 to 13 years of age).
Abstract: The relations among various spatial and mathematics skills were assessed in a cross-sectional study of 854 children from kindergarten, third, and sixth grades (i.e., 5 to 13 years of age). Children completed a battery of spatial and mathematics tests, and their scores were submitted to exploratory factor analyses both within and across domains. In the within-domain analyses, all of the measures formed single factors at each age, suggesting consistent, unitary structures across this age range. Yet, as in previous work, the 2 domains were highly correlated, both in terms of overall composite score and pairwise comparisons of individual tasks. When both spatial and mathematics scores were submitted to the same factor analysis, the 2 domain-specific factors again emerged, but there also were significant cross-domain factor loadings that varied with age. Multivariate regressions replicated the factor analysis and further revealed that mental rotation was the best predictor of mathematical performance in kindergarten, and visual-spatial working memory was the best predictor of mathematical performance in sixth grade. The mathematical tasks that predicted the most variance in spatial skill were place value (K, 3rd, 6th), word problems (3rd, 6th), calculation (K), fraction concepts (3rd), and algebra (6th). Thus, although spatial skill and mathematics each have strong internal structures, they also share significant overlap, and have particularly strong cross-domain relations for certain tasks.

188 citations


Journal ArticleDOI
TL;DR: The authors suggest that bilingual benefits are not as broad and as robust as has been previously claimed and that earlier effects were possibly due to task-specific effects in selective and often small samples.
Abstract: The question whether being bilingual yields cognitive benefits is highly controversial with prior studies providing inconsistent results. Failures to replicate the bilingual advantage have been attributed to methodological factors such as comparing dichotomous groups and measuring cognitive abilities separately with single tasks. Therefore, the authors evaluated the 4 most prominent hypotheses of bilingual advantages for inhibitory control, conflict monitoring, shifting, and general cognitive performance by assessing bilingualism on 3 continuous dimensions (age of acquisition, proficiency, and usage) in a sample of 118 young adults and relating it to 9 cognitive abilities each measured by multiple tasks. Linear mixed-effects models accounting for multiple sources of variance simultaneously and controlling for parents' education as an index of socioeconomic status revealed no evidence for any of the 4 hypotheses. Hence, the authors' results suggest that bilingual benefits are not as broad and as robust as has been previously claimed. Instead, earlier effects were possibly due to task-specific effects in selective and often small samples.

170 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a functional explanation for why moral intuitions typically follow deontological prescriptions, as opposed to those of other ethical theories. Across 5 studies, they show that people who make characteristically deontological judgments are preferred as social partners, perceived as more moral and trustworthy, and are trusted more in economic games.
Abstract: Moral judgments play a critical role in motivating and enforcing human cooperation, and research on the proximate mechanisms of moral judgments highlights the importance of intuitive, automatic processes in forming such judgments. Intuitive moral judgments often share characteristics with deontological theories in normative ethics, which argue that certain acts (such as killing) are absolutely wrong, regardless of their consequences. Why do moral intuitions typically follow deontological prescriptions, as opposed to those of other ethical theories? Here, we test a functional explanation for this phenomenon by investigating whether agents who express deontological moral judgments are more valued as social partners. Across 5 studies, we show that people who make characteristically deontological judgments are preferred as social partners, perceived as more moral and trustworthy, and are trusted more in economic games. These findings provide empirical support for a partner choice account of moral intuitions whereby typically deontological judgments confer an adaptive function by increasing a person's likelihood of being chosen as a cooperation partner. Therefore, deontological moral intuitions may represent an evolutionarily prescribed prior that was selected for through partner choice mechanisms.

166 citations


Journal ArticleDOI
TL;DR: It is found that it is not the absolute value of information that drives learning but, rather, the gap between the reward expected and reward received, an "information prediction error."
Abstract: Curiosity drives many of our daily pursuits and interactions; yet, we know surprisingly little about how it works. Here, we harness an idea implied in many conceptualizations of curiosity: that information has value in and of itself. Reframing curiosity as the motivation to obtain reward-where the reward is information-allows one to leverage major advances in theoretical and computational mechanisms of reward-motivated learning. We provide new evidence supporting 2 predictions that emerge from this framework. First, we find an asymmetric effect of positive versus negative information, with positive information enhancing both curiosity and long-term memory for information. Second, we find that it is not the absolute value of information that drives learning but, rather, the gap between the reward expected and reward received, an "information prediction error." These results support the idea that information functions as a reward, much like money or food, guiding choices and driving learning in systematic ways.

148 citations


Journal ArticleDOI
TL;DR: The data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics.
Abstract: As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics.

148 citations


Journal ArticleDOI
TL;DR: The findings support the idea that episodic memory processes are involved in means-end problem solving and episodic reappraisal, and that increasing the episodic specificity of imagining constructive behaviors regarding worrisome events may be related to improved psychological well-being.
Abstract: Previous research has demonstrated that an episodic specificity induction (brief training in recollecting details of a recent experience) enhances performance on various subsequent tasks thought to draw upon episodic memory processes. Existing work has also shown that mental simulation can be beneficial for emotion regulation and coping with stressors. Here we focus on understanding how episodic detail can affect problem solving, reappraisal, and psychological well-being regarding worrisome future events. In Experiment 1, an episodic specificity induction significantly improved participants' performance on a subsequent means-end problem solving task (i.e., more relevant steps) and an episodic reappraisal task (i.e., more episodic details) involving personally worrisome future events compared with a control induction not focused on episodic specificity. Imagining constructive behaviors with increased episodic detail via the specificity induction was also related to significantly larger decreases in anxiety, perceived likelihood of a bad outcome, and perceived difficulty to cope with a bad outcome, as well as larger increases in perceived likelihood of a good outcome and indicated use of active coping behaviors compared with the control. In Experiment 2, we extended these findings using a more stringent control induction, and found preliminary evidence that the specificity induction was related to an increase in positive affect and decrease in negative affect compared with the control. Our findings support the idea that episodic memory processes are involved in means-end problem solving and episodic reappraisal, and that increasing the episodic specificity of imagining constructive behaviors regarding worrisome events may be related to improved psychological well-being.

139 citations


Journal ArticleDOI
TL;DR: A large correlational study took a latent-variable approach to the generality of executive control, concluding that either executive deficits are consequences rather than risk factors for schizophrenia, or executive failures barely precede or precipitate diagnosable schizophrenia symptoms.
Abstract: A large correlational study took a latent-variable approach to the generality of executive control by testing the individual-differences structure of executive-attention capabilities and assessing their prediction of schizotypy, a multidimensional construct (with negative, positive, disorganized, and paranoid factors) conveying risk for schizophrenia. Although schizophrenia is convincingly linked to executive deficits, the schizotypy literature is equivocal. Subjects completed tasks of working memory capacity (WMC), attention restraint (inhibiting prepotent responses), and attention constraint (focusing visual attention amid distractors), the latter 2 in an effort to fractionate the "inhibition" construct. We also assessed mind-wandering propensity (via in-task thought probes) and coefficient of variation in response times (RT CoV) from several tasks as more novel indices of executive attention. WMC, attention restraint, attention constraint, mind wandering, and RT CoV were correlated but separable constructs, indicating some distinctions among "attention control" abilities; WMC correlated more strongly with attentional restraint than constraint, and mind wandering correlated more strongly with attentional restraint, attentional constraint, and RT CoV than with WMC. Across structural models, no executive construct predicted negative schizotypy, and only mind wandering and RT CoV consistently (but modestly) predicted positive, disorganized, and paranoid schizotypy; stalwart executive constructs in the schizophrenia literature (WMC and attention restraint) showed little to no predictive power, beyond restraint's prediction of paranoia. Either executive deficits are consequences rather than risk factors for schizophrenia, or executive failures barely precede or precipitate diagnosable schizophrenia symptoms.

121 citations


Journal ArticleDOI
TL;DR: Results indicate that social network processes reflect moral selection, and both online and offline differences in moral purity concerns are particularly predictive of social distance.
Abstract: Does sharing moral values encourage people to connect and form communities? The importance of moral homophily (love of same) has been recognized by social scientists, but the types of moral similarities that drive this phenomenon are still unknown. Using both large-scale, observational social-media analyses and behavioral lab experiments, the authors investigated which types of moral similarities influence tie formations. Analysis of a corpus of over 700,000 tweets revealed that the distance between 2 people in a social network can be predicted based on differences in the moral purity content (but not other moral content) of their messages. The authors replicated this finding by experimentally manipulating perceived moral difference (Study 2) and similarity (Study 3) in the lab and demonstrating that purity differences play a significant role in social distancing. These results indicate that social network processes reflect moral selection, and both online and offline differences in moral purity concerns are particularly predictive of social distance. This research is an attempt to study morality indirectly using an observational big-data study complemented with 2 confirmatory behavioral experiments carried out using traditional social-psychology methodology.

Journal ArticleDOI
TL;DR: There was no effect of training on any measure of self-control, and the implication is that training self-control through repeated practice does not result in generalized improvements in self-control.
Abstract: Can self-control be improved through practice? Several studies have found that repeated practice of tasks involving self-control improves performance on other tasks relevant to self-control. However, in many of these studies, improvements after training could be attributable to methodological factors (e.g., passive control conditions). Moreover, the extent to which the effects of training transfer to real-life settings is not yet clear. In the present research, participants (N = 174) completed a 6-week training program of either cognitive or behavioral self-control tasks. We then tested the effects of practice on a range of measures of self-control, including lab-based and real-world tasks. Training was compared with both active and no-contact control conditions. Despite high levels of adherence to the training tasks, there was no effect of training on any measure of self-control. Trained participants did not, for example, show reduced ego depletion effects, become better at overcoming their habits, or report exerting more self-control in everyday life. Moderation analyses found no evidence that training was effective only among particular groups of participants. Bayesian analyses suggested that the data were more consistent with a null effect of training on self-control than with previous estimates of the effect of practice. The implication is that training self-control through repeated practice does not result in generalized improvements in self-control.

Journal ArticleDOI
TL;DR: The results indicate that 3 influences combine to make altering learners' misconceptions difficult: the sense of fluency that can accompany nonoptimal modes of instruction; pre-existing beliefs learners bring to new tasks; and the willingness, even eagerness, to believe that what enhances others' learning differs from what enhances one's own learning.
Abstract: Interleaving exemplars of to-be-learned categories (rather than blocking exemplars by category) typically enhances inductive learning. Learners, however, tend to believe the opposite, even after their own performance has benefited from interleaving. In Experiments 1 and 2, the authors examined the influence of 2 factors that they hypothesized contribute to the illusion that blocking enhances inductive learning: namely, that (a) blocking creates a sense of fluent extraction during study of the features defining a given category, and (b) learners come to the experimental task with a pre-existing belief that blocking instruction by topic is superior to intermixing topics. In Experiments 3-5, the authors attempted to uproot learners' belief in the superiority of blocking through experience-based and theory-based debiasing techniques by (a) providing detailed theory-based information as to why blocking seems better, but is not, and (b) explicitly drawing attention to the link between study schedule and subsequent performance, both of which had only modest effects. Only when they disambiguated test performance on the 2 schedules by separating them (Experiment 6) did the combination of experience- and theory-based debiasing lead a majority of learners to appreciate interleaving. Overall, the results indicate that 3 influences combine to make altering learners' misconceptions difficult: the sense of fluency that can accompany nonoptimal modes of instruction; pre-existing beliefs learners bring to new tasks; and the willingness, even eagerness, to believe that one is unique as a learner, such that what enhances others' learning differs from what enhances one's own learning.

Journal ArticleDOI
TL;DR: These experiments provide new theoretical insight into the relation between not responding and evaluation, and can be applied to design motor response training procedures aimed at changing people's behavior toward appetitive stimuli.
Abstract: In a series of 6 experiments (5 preregistered), we examined how not responding to appetitive stimuli causes devaluation. To examine this question, a go/no-go task was employed in which appetitive stimuli were consistently associated with cues to respond (go stimuli), or with cues to not respond (either no-go cues or the absence of cues; no-go stimuli). Change in evaluations of no-go stimuli was compared to change in evaluations of both go stimuli and of stimuli not presented in the task (untrained stimuli). Experiments 1 to 3 show that not responding to appetitive stimuli in a go/no-go task causes devaluation of these stimuli regardless of the presence of an explicit no-go cue. Experiments 4a and 4b show that the devaluation effect of appetitive stimuli is contingent on the percentage of no-go trials; devaluation appears when no-go trials are rare, but disappears when no-go trials are frequent. Experiment 5 shows that simply observing the go/no-go task does not lead to devaluation. Experiment 6 shows that not responding to neutral stimuli does not cause devaluation. Together, these results suggest that devaluation of appetitive stimuli by not responding to them is the result of response inhibition. By employing both go stimuli and untrained stimuli as baselines, alternative explanations are ruled out, and apparent inconsistencies in the literature are resolved. These experiments provide new theoretical insight into the relation between not responding and evaluation, and can be applied to design motor response training procedures aimed at changing people's behavior toward appetitive stimuli.

Journal ArticleDOI
TL;DR: Correlations, hierarchical regression analyses, confirmatory factor analyses, structural equation models, and relative weight analyses revealed several key findings, including that attention control and capacity fully mediated the WM and multitasking relationship.
Abstract: Previous research has identified several cognitive abilities that are important for multitasking, but few studies have attempted to measure a general multitasking ability using a diverse set of multitasks. In the final dataset, 534 young adult subjects completed measures of working memory (WM), attention control, fluid intelligence, and multitasking. Correlations, hierarchical regression analyses, confirmatory factor analyses, structural equation models, and relative weight analyses revealed several key findings. First, although the complex tasks used to assess multitasking differed greatly in their task characteristics and demands, a coherent construct specific to multitasking ability was identified. Second, the cognitive ability predictors accounted for substantial variance in the general multitasking construct, with WM and fluid intelligence accounting for the most multitasking variance compared to attention control. Third, the magnitude of the relationships among the cognitive abilities and multitasking varied as a function of the complexity and structure of the various multitasks assessed. Finally, structural equation models based on a multifaceted model of WM indicated that attention control and capacity fully mediated the WM and multitasking relationship.

Journal ArticleDOI
TL;DR: The results suggest that judgments of learning are partially constructed in response to the measurement question; their reactivity places them in the company of other reactive verbal reporting methods, counseling researchers to consider incorporating control groups, creating alternative scales, and exploring other verbal reporting methods.
Abstract: Self-report measurements are ubiquitous in psychology, but they carry the potential of altering processes they are meant to measure. We assessed whether a common metamemory measure, judgments of learning, can change the ongoing process of memorizing and subsequent memory performance. Judgments of learning are a form of metamemory monitoring described as conscious reflection on one's own memory performance or encoding activities for the purpose of exerting strategic control over one's study and retrieval activities (T. O. Nelson & Narens, 1990). Much of the work examining the conscious monitoring of encoding relies heavily on a paradigm in which participants are asked to estimate the probability that they will recall a given item in a judgment of learning. In 5 experiments, we find effects of measuring judgments of learning on how people allocate their study time to difficult versus easy items, and on what they will recall. These results suggest that judgments of learning are partially constructed in response to the measurement question. The tendency of judgments of learning to alter performance places them in the company of other reactive verbal reporting methods, counseling researchers to consider incorporating control groups, creating alternative scales, and exploring other verbal reporting methods. Less directive methods of accessing participants' metacognition and other judgments should be considered as an alternative to response scales.

Journal ArticleDOI
TL;DR: The relationship between adaptive choice and associative memory generalized to more complex, ecologically valid choice behavior, such as social decision-making; individuals more strongly encode experiences of social violations (such as being treated unfairly), suggesting a bias in how individuals form associative memories within social contexts.
Abstract: Prior research illustrates that memory can guide value-based decision-making. For example, previous work has implicated both working memory and procedural memory (i.e., reinforcement learning) in guiding choice. However, other types of memories, such as episodic memory, may also influence decision-making. Here we test the role for episodic memory (specifically item versus associative memory) in supporting value-based choice. Participants completed a task where they first learned the value associated with trial-unique lotteries. After a short delay, they completed a decision-making task where they could choose to reengage with previously encountered lotteries, or new, never-before-seen lotteries. Finally, participants completed a surprise memory test for the lotteries and their associated values. Results indicate that participants chose to reengage more often with lotteries that resulted in high versus low rewards. Critically, participants not only formed detailed, associative memories for the reward values coupled with individual lotteries, but also exhibited adaptive decision-making only when they had intact associative memory. We further found that the relationship between adaptive choice and associative memory generalized to more complex, ecologically valid choice behavior, such as social decision-making. However, individuals more strongly encode experiences of social violations (such as being treated unfairly), suggesting a bias in how individuals form associative memories within social contexts. Together, these findings provide an important integration of the episodic memory and decision-making literatures to better understand key mechanisms supporting adaptive behavior.

Journal ArticleDOI
TL;DR: Results revealed a strong higher-order General Benevolence dimension that accounted for variability across all measurement domains, potentially providing an explanation for why older adults typically contribute more to the public good than young adults.
Abstract: [Correction Notice: An Erratum for this article was reported in Vol 145(10) of Journal of Experimental Psychology: General (see record 2016-46925-004). In the article, there was an error in the Task, Stimuli, and Procedures section. In the 1st sentence in the 6th paragraph, “Following the scanning phase, participants completed self-report questionnaires meant to reflected the Prosocial Disposition construct: the agreeableness scale from the Big F, which includes empathic concern and perspective-taking, and a scale of personality descriptive adjectives related to altruistic behavior (Wood, Nye, & Saucier, 2010).” should have read: “Following the scanning phase, participants completed self-report questionnaires that contained scales to reflect the Prosocial Disposition construct: the Big Five Inventory (BFI; John et al., 1991), from which we used the agreeableness scale to measure prosocial disposition; the Interpersonal Reactivity Index (IRI; Davis, 1980), from which we used the empathic concern and perspective-taking scales; and a scale of personality descriptive adjectives related to altruistic behavior (Wood, Nye, & Saucier, 2010).”] Individual and life span differences in charitable giving are an important economic force, yet the underlying motives are not well understood. In an adult, life span sample, we assessed manifestations of prosocial tendencies across 3 different measurement domains: (a) psychological self-report measures, (b) actual giving choices, and (c) fMRI-derived, neural indicators of “pure altruism.” The latter expressed individuals’ activity in neural valuation areas when charities received money compared to when oneself received money and thus reflected an altruistic concern for others. Results based both on structural equation modeling and unit-weighted aggregate scores revealed a strong higher-order General Benevolence dimension that accounted for variability across all measurement domains. 
The fact that the neural measures likely reflect pure altruistic tendencies indicates that General Benevolence is based on a genuine concern for others. Furthermore, General Benevolence exhibited a robust increase across the adult life span, potentially providing an explanation for why older adults typically contribute more to the public good than young adults.

Journal ArticleDOI
TL;DR: These results were robust across age, gender, static versus dynamic display of the facial expressions, and between- versus within-subjects design.
Abstract: That all humans recognize certain specific emotions from their facial expression (the Universality Thesis) is a pillar of research, theory, and application in the psychology of emotion. Its most rigorous test occurs in indigenous societies with limited contact with external cultural influences, but such tests are scarce. Here we report 2 such tests. Study 1 was of children and adolescents (N = 68; aged 6-16 years) of the Trobriand Islands (Papua New Guinea, South Pacific) with a Western control group from Spain (N = 113, of similar ages). Study 2 was of children and adolescents (N = 36; same age range) of Matemo Island (Mozambique, Africa). In both studies, participants were shown an array of prototypical facial expressions and asked to point to the person feeling a specific emotion: happiness, fear, anger, disgust, or sadness. The Spanish control group matched faces to emotions as predicted by the Universality Thesis: matching was seen on 83% to 100% of trials. For the indigenous societies, in both studies, the Universality Thesis was moderately supported for happiness: smiles were matched to happiness on 58% and 56% of trials, respectively. For other emotions, however, results were even more modest: 7% to 46% in the Trobriand Islands and 22% to 53% in Matemo Island. These results were robust across age, gender, static versus dynamic display of the facial expressions, and between- versus within-subjects design.

Journal ArticleDOI
TL;DR: Results suggest that humans use transient vocal changes to track, signal, and coordinate status relationships as well as influence perceptions of rank and formidability.
Abstract: Similar to the nonverbal signals shown by many nonhuman animals during aggressive conflicts, humans display a broad range of behavioral signals to advertise and augment their apparent size, strength, and fighting prowess when competing for social dominance. Favored by natural selection, these signals communicate the displayer's capacity and willingness to inflict harm, and increase responders' likelihood of detecting and establishing a rank asymmetry, and thus avoiding costly physical conflicts. Included among this suite of adaptations are vocal changes, which occur in a wide range of nonhuman animals (e.g., chimpanzees, rhesus monkeys) prior to aggression, but have not been systematically examined in humans. The present research tests whether and how humans use vocal pitch modulations to communicate information about their intention to dominate or submit. Results from Study 1 demonstrate that in the context of face-to-face group interactions, individuals spontaneously alter their vocal pitch in a manner consistent with rank signaling. Raising one's pitch early in the course of an interaction predicted lower emergent rank, whereas deepening one's pitch predicted higher emergent rank. Results from Study 2 provide causal evidence that these vocal shifts influence perceptions of rank and formidability. Together, findings suggest that humans use transient vocal changes to track, signal, and coordinate status relationships.

Journal ArticleDOI
TL;DR: The hypothesis that scene categories reflect functions, or the possibilities for actions within a scene, is tested; results suggest that a scene's category may be determined by the scene's function.
Abstract: How do we know that a kitchen is a kitchen by looking? Traditional models posit that scene categorization is achieved through recognizing necessary and sufficient features and objects, yet there is little consensus about what these may be. However, scene categories should reflect how we use visual information. We therefore test the hypothesis that scene categories reflect functions, or the possibilities for actions within a scene. Our approach is to compare human categorization patterns with predictions made by both functions and alternative models. We collected a large-scale scene category distance matrix (5 million trials) by asking observers to simply decide whether two images were from the same or different categories. Using the actions from the American Time Use Survey, we mapped actions onto each scene (1.4 million trials). We found a strong relationship between ranked category distance and functional distance (r=0.50, or 66% of the maximum possible correlation). The function model outperformed alternative models of object-based distance (r=0.33), visual features from a convolutional neural network (r=0.39), lexical distance (r=0.27), and models of visual features. Using hierarchical linear regression, we found that functions captured 85.5% of overall explained variance, with nearly half of the explained variance captured only by functions, implying that the predictive power of alternative models was due to their shared variance with the function-based model. These results challenge the dominant school of thought that visual features and objects are sufficient for scene categorization, suggesting instead that a scene’s category may be determined by the scene’s function.
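The abstract above reports model fit as a fraction of the maximum attainable correlation (r = 0.50 described as 66% of the maximum possible). A minimal sketch of that arithmetic follows; the ceiling value of roughly 0.76 is merely back-inferred from those two figures for illustration, not the authors' reported estimate or procedure.

```python
# Sketch: expressing an observed correlation as a fraction of a "noise
# ceiling" (the maximum correlation attainable given measurement noise).
# The ceiling of ~0.76 is a hypothetical value inferred from the abstract's
# r = 0.50 / 66% figures, not taken from the paper itself.

def fraction_of_ceiling(observed_r: float, ceiling_r: float) -> float:
    """Return an observed correlation as a fraction of the noise ceiling."""
    if not 0 < ceiling_r <= 1:
        raise ValueError("ceiling must be in (0, 1]")
    return observed_r / ceiling_r

# With a ceiling near 0.76, r = 0.50 amounts to about 66% of the maximum:
print(round(fraction_of_ceiling(0.50, 0.76), 2))  # ~0.66
```

Reporting the ratio rather than the raw r makes models comparable when human response reliability, not model quality, bounds the achievable correlation.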

Journal ArticleDOI
TL;DR: The results suggested that little evidence is available to support updating as a separate factor from general memory factors; that inhibition does not separate from general speed; and that switching is supported as a narrow factor under general speed, but with a more restricted definition than some clinicians and researchers have conceptualized.
Abstract: Executive function is an important concept in neuropsychological and cognitive research, and is often viewed as central to effective clinical assessment of cognition. However, the construct validity of executive function tests is controversial. The switching, inhibition, and updating model is the most empirically supported and replicated factor model of executive function (Miyake et al., 2000). To evaluate the relation between executive function constructs and nonexplicitly executive cognitive constructs, we used confirmatory factor reanalysis guided by the comprehensive Cattell-Horn-Carroll (CHC) model of cognitive abilities. Data from 7 of the best studies supporting the executive function model were reanalyzed, contrasting executive function models and CHC models. Where possible, we examined the effect of specifying executive function factors in addition to the CHC factors. The results suggested that little evidence is available to support updating as a separate factor from general memory factors; that inhibition does not separate from general speed; and that switching is supported as a narrow factor under general speed, but with a more restricted definition than some clinicians and researchers have conceptualized. The replicated executive function factor structure was integrated with the larger body of research on individual differences in cognition, as represented by the CHC model.

Journal ArticleDOI
TL;DR: It is demonstrated that the physical spaces associated with Black Americans are also subject to negative racial stereotypes, which can powerfully influence how connected people feel to a space, how they evaluate that space, and how they protect that space from harm.
Abstract: Social psychologists have long demonstrated that people are stereotyped on the basis of race. Researchers have conducted extensive experimental studies on the negative stereotypes associated with Black Americans in particular. Across 4 studies, we demonstrate that the physical spaces associated with Black Americans are also subject to negative racial stereotypes. Such spaces, for example, are perceived as impoverished, crime-ridden, and dirty (Study 1). Moreover, these space-focused stereotypes can powerfully influence how connected people feel to a space (Studies 2a, 2b, and 3), how they evaluate that space (Studies 2a and 2b), and how they protect that space from harm (Study 3). Indeed, processes related to space-focused stereotypes may contribute to social problems across a range of domains, from racial disparities in wealth to the overexposure of Blacks to environmental pollution. Together, the present studies broaden the scope of traditional stereotyping research and highlight promising new directions.

Journal ArticleDOI
TL;DR: It is demonstrated that people are more likely to infer a human creator when they hear a voice expressing thoughts than when they read the same thoughts in text, and removing voice from communication would increase the likelihood of mistaking the text's creator for a machine.
Abstract: Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience, a humanlike voice, affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip) did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media.

Journal ArticleDOI
TL;DR: Examination of genetic and environmental overlap between EFs and intelligence in a racially and socioeconomically diverse sample of 811 twins ages 7 to 15 years provides evidence that genetic influences on general intelligence are highly overlapping with those on EF.
Abstract: Executive functions (EFs) are cognitive processes that control, monitor, and coordinate more basic cognitive processes. EFs play instrumental roles in models of complex reasoning, learning, and decision making, and individual differences in EFs have been consistently linked with individual differences in intelligence. By middle childhood, genetic factors account for a moderate proportion of the variance in intelligence, and these effects increase in magnitude through adolescence. Genetic influences on EFs are very high, even in middle childhood, but the extent to which these genetic influences overlap with those on intelligence is unclear. We examined genetic and environmental overlap between EFs and intelligence in a racially and socioeconomically diverse sample of 811 twins ages 7 to 15 years (M = 10.91, SD = 1.74) from the Texas Twin Project. A general EF factor representing variance common to inhibition, switching, working memory, and updating domains accounted for substantial proportions of variance in intelligence, primarily via a genetic pathway. General EF continued to have a strong, genetically mediated association with intelligence even after controlling for processing speed. Residual variation in general intelligence was influenced only by shared and nonshared environmental factors, and there remained no genetic variance in general intelligence that was unique of EF. Genetic variance independent of EF did remain, however, in a more specific perceptual reasoning ability. These results provide evidence that genetic influences on general intelligence are highly overlapping with those on EF.

Journal ArticleDOI
TL;DR: The authors found that when high system justifiers were led to believe that the economy was in a recovery, they recalled climate change information to be more serious than did those assigned to a control condition.
Abstract: The contemporary political landscape is characterized by numerous divisive issues. Unlike many other issues, however, much of the disagreement about climate change centers not on how best to take action to address the problem, but on whether the problem exists at all. Psychological studies indicate that, to the extent that sustainability initiatives are seen as threatening to the socioeconomic system, individuals may downplay environmental problems in order to defend and protect the status quo. In the current research, participants were presented with scientific information about climate change and later asked to recall details of what they had learned. Individuals who were experimentally induced (Study 1) or dispositionally inclined (Studies 2 and 3) to justify the economic system misremembered the evidence to be less serious, and this was associated with increased skepticism. However, when high system justifiers were led to believe that the economy was in a recovery, they recalled climate change information to be more serious than did those assigned to a control condition. When low system justifiers were led to believe that the economy was in recession, they recalled the information to be less serious (Study 3). These findings suggest that because system justification can impact information processing, simply providing the public with scientific evidence may be insufficient to inspire action to mitigate climate change. However, linking environmental information to statements about the strength of the economic system may satiate system justification needs and break the psychological link between proenvironmental initiatives and economic risk.

Journal ArticleDOI
TL;DR: Performance of 5-year-old children on a categorization task involving novel, Greek symbols was measured, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions.
Abstract: Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions.

Journal ArticleDOI
TL;DR: Findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept and indicate that the role of conceptual information should be considered to account for the superior recognition that the authors have for familiar faces and objects.
Abstract: The representation of familiar objects comprises perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were exposed either to rich perceptual information (viewing each face in different angles and illuminations) or to conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects.

Journal ArticleDOI
TL;DR: It is argued that people intuitively distinguish epistemic (knowable) uncertainty from aleatory (random) uncertainty and show that the relative salience of these dimensions is reflected in natural language use.
Abstract: We argue that people intuitively distinguish epistemic (knowable) uncertainty from aleatory (random) uncertainty and show that the relative salience of these dimensions is reflected in natural language use. We hypothesize that confidence statements (e.g., “I am fairly confident,” “I am 90% sure,” “I am reasonably certain”) communicate a subjective assessment of primarily epistemic uncertainty, whereas likelihood statements (e.g., “I believe it is fairly likely,” “I’d say there is a 90% chance,” “I think there is a high probability”) communicate a subjective assessment of primarily aleatory uncertainty. First, we show that speakers tend to use confidence statements to express epistemic uncertainty and they tend to use likelihood statements to express aleatory uncertainty; we observe this in a 2-year sample of New York Times articles (Study 1), and in participants’ explicit choices of which statements more naturally express different uncertain events (Studies 2A and 2B). Second, we show that when speakers apply confidence versus likelihood statements to the same events, listeners infer different reasoning (Study 3): confidence statements suggest epistemic rationale (singular reasoning, feeling of knowing, internal control), whereas likelihood statements suggest aleatory rationale (distributional reasoning, relative frequency information, external control). Third, we show that confidence versus likelihood statements can differentially prompt epistemic versus aleatory thoughts, respectively, as observed when participants complete sentences that begin with confidence versus likelihood statements (Study 4) and when they quantify these statements based on feeling-of-knowing (epistemic) and frequency (aleatory) information (Study 5).

Journal ArticleDOI
TL;DR: The results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes.
Abstract: Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects: real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., do small objects look like other small objects? Because there are many kinds of small objects (e.g., cups, keys), there may be no reliable perceptual features that distinguish them from big objects (e.g., cars, tables). Contrary to this intuition, we demonstrated that big and small objects have reliable perceptual differences that can be extracted by early stages of visual processing. In a series of visual search studies, participants found target objects faster when the distractor objects differed in real-world size. These results held when we broadly sampled big and small objects, when we controlled for low-level features and image statistics, and when we reduced objects to texforms (unrecognizable textures that loosely preserve an object's form). However, this effect was absent when we used more basic textures. These results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes.