
Showing papers in "Perspectives on Psychological Science in 2012"


Journal ArticleDOI
TL;DR: Results show positive relations of psychological need satisfaction and autonomous motivation to beneficial health outcomes, suggesting that SDT is a viable conceptual framework for studying antecedents and outcomes of motivation for health-related behaviors.
Abstract: Behavior change is more effective and lasting when patients are autonomously motivated. To examine this idea, we identified 184 independent data sets from studies that utilized self-determination theory (SDT; Deci & Ryan, 2000) in health care and health promotion contexts. A meta-analysis evaluated relations between the SDT-based constructs of practitioner support for patient autonomy and patients' experience of psychological need satisfaction, as well as relations between these SDT constructs and indices of mental and physical health. Results showed the expected relations among the SDT variables, as well as positive relations of psychological need satisfaction and autonomous motivation to beneficial health outcomes. Several variables (e.g., participants' age, study design) were tested as potential moderators when effect sizes were heterogeneous. Finally, we used path analyses of the meta-analyzed correlations to test the interrelations among the SDT variables. Results suggested that SDT is a viable conceptual framework to study antecedents and outcomes of motivation for health-related behaviors.

1,285 citations


Journal ArticleDOI
TL;DR: The authors conducted a comprehensive literature search, identifying 412 relevant articles, which were sorted into 5 categories: descriptive analysis of users, motivations for using Facebook, identity presentation, the role of Facebook in social interactions, and privacy and information disclosure.
Abstract: With over 800 million active users, Facebook is changing the way hundreds of millions of people relate to one another and share information. A rapidly growing body of research has accompanied the meteoric rise of Facebook as social scientists assess the impact of Facebook on social life. In addition, researchers have recognized the utility of Facebook as a novel tool to observe behavior in a naturalistic setting, test hypotheses, and recruit participants. However, research on Facebook emanates from a wide variety of disciplines, with results being published in a broad range of journals and conference proceedings, making it difficult to keep track of various findings. And because Facebook is a relatively recent phenomenon, uncertainty still exists about the most effective ways to do Facebook research. To address these issues, the authors conducted a comprehensive literature search, identifying 412 relevant articles, which were sorted into 5 categories: descriptive analysis of users, motivations for using Facebook, identity presentation, the role of Facebook in social interactions, and privacy and information disclosure. The literature review serves as the foundation from which to assess current findings and offer recommendations to the field for future research on Facebook and online social networks more broadly.

1,148 citations


Journal ArticleDOI
TL;DR: Although the very public problems experienced by psychology over this 2-year period are embarrassing to those of us working in the field, some have found comfort in the fact that, over the same period, similar concerns have been arising across the scientific landscape.
Abstract: Is there currently a crisis of confidence in psychological science reflecting an unprecedented level of doubt among practitioners about the reliability of research findings in the field? It would certainly appear that there is. These doubts emerged and grew as a series of unhappy events unfolded in 2011: the Diederik Stapel fraud case (see Stroebe, Postmes, & Spears, 2012, this issue), the publication in a major social psychology journal of an article purporting to show evidence of extrasensory perception (Bem, 2011) followed by widespread public mockery (see Galak, LeBoeuf, Nelson, & Simmons, in press; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011), reports by Wicherts and colleagues that psychologists are often unwilling or unable to share their published data for reanalysis (Wicherts, Bakker, & Molenaar, 2011; see also Wicherts, Borsboom, Kats, & Molenaar, 2006), and the publication of an important article in Psychological Science showing how easily researchers can, in the absence of any real effects, nonetheless obtain statistically significant differences through various questionable research practices (QRPs) such as exploring multiple dependent variables or covariates and only reporting these when they yield significant results (Simmons, Nelson, & Simonsohn, 2011). 
For those psychologists who expected that the embarrassments of 2011 would soon recede into memory, 2012 offered instead a quick plunge from bad to worse, with new indications of outright fraud in the field of social cognition (Simonsohn, 2012), an article in Psychological Science showing that many psychologists admit to engaging in at least some of the QRPs examined by Simmons and colleagues (John, Loewenstein, & Prelec, 2012), troubling new meta-analytic evidence suggesting that the QRPs described by Simmons and colleagues may even be leaving telltale signs visible in the distribution of p values in the psychological literature (Masicampo & Lalande, in press; Simonsohn, 2012), and an acrimonious dust-up in science magazines and blogs centered around the problems some investigators were having in replicating well-known results from the field of social cognition (Bower, 2012; Yong, 2012). Although the very public problems experienced by psychology over this 2-year period are embarrassing to those of us working in the field, some have found comfort in the fact that, over the same period, similar concerns have been arising across the scientific landscape (triggered by revelations that will be described shortly). Some of the suspected causes of unreplicability, such as publication bias (the tendency to publish only positive findings) have been discussed for years; in fact, the phrase file-drawer problem was first coined by a distinguished psychologist several decades ago (Rosenthal, 1979). However, many have speculated that these problems have been exacerbated in recent years as academia reaps the harvest of a hypercompetitive academic climate and an incentive scheme that provides rich rewards for overselling one’s work and few rewards at all for caution and circumspection (see Giner-Sorolla, 2012, this issue). 
Equally disturbing, investigators seem to be replicating each other's work even less often than they did in the past, again presumably reflecting an incentive scheme gone askew (a point discussed in several articles in this issue, e.g., Makel, Plucker, & Hegarty, 2012). The frequency with which errors appear in the psychological literature is not presently known, but a number of facts suggest it might be disturbingly high. Ioannidis (2005) has shown through simple mathematical modeling that any scientific field that ignores replication can easily come to the miserable state wherein (as the title of his most famous article puts it) “most published research findings are false” (see also Ioannidis, 2012, this issue, and Pashler & Harris, 2012, this issue). Meanwhile, reports emerging from cancer research have made such grim scenarios seem more plausible: In 2012, several large pharmaceutical companies revealed that their efforts to replicate exciting preclinical findings from published academic studies in cancer biology were only rarely verifying the original results (Begley & Ellis, 2012; see also Osherovich, 2011; Prinz, Schlange, & Asadullah, 2011).
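The QRP highlighted above (testing several dependent variables and reporting whichever reaches significance) can be illustrated with a short simulation. This is a hypothetical sketch, not code from any of the cited papers: it uses a simple two-sided z-test with known variance and draws the dependent variables independently, which if anything understates the correlations real DVs would have.

```python
# Hypothetical sketch of one questionable research practice (QRP):
# under a true null, test several dependent variables (DVs) and call the
# experiment "significant" if ANY of them reaches p < .05.
import math
import random

def z_test_p(a, b):
    """Two-sided p-value for a difference in means (sigma known = 1)."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    return math.erfc(abs(z) / math.sqrt(2))

def false_positive_rate(n_dvs, n_sims=4000, n=20, seed=0):
    """Fraction of null experiments with at least one DV at p < .05.
    DVs are drawn independently here -- a simplification."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        ps = []
        for _ in range(n_dvs):
            a = [rng.gauss(0, 1) for _ in range(n)]
            b = [rng.gauss(0, 1) for _ in range(n)]
            ps.append(z_test_p(a, b))
        hits += min(ps) < 0.05
    return hits / n_sims

print(false_positive_rate(1))  # close to the nominal .05
print(false_positive_rate(3))  # inflated, roughly 1 - .95**3, i.e. ~.14
```

With three independent DVs the per-experiment false-positive rate roughly triples, which is the mechanism Simmons and colleagues demonstrated in richer form.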

1,063 citations


Journal ArticleDOI
TL;DR: In this article, the authors develop strategies for improving scientific practices and knowledge accumulation that account for ordinary human motivations and biases, and demonstrate that the persistence of false findings can be mitigated with strategies that make the fundamental but abstract accuracy motive competitive with the more tangible and concrete incentive.
Abstract: An academic scientist’s professional success depends on publishing. Publishing norms emphasize novel, positive results. As such, disciplinary incentives encourage design, analysis, and reporting decisions that elicit positive results and ignore negative results. Prior reports demonstrate how these incentives inflate the rate of false effects in published science. When incentives favor novelty over replication, false results persist in the literature unchallenged, reducing efficiency in knowledge accumulation. Previous suggestions to address this problem are unlikely to be effective. For example, a journal of negative results publishes otherwise unpublishable reports. This enshrines the low status of the journal and its content. The persistence of false findings can be meliorated with strategies that make the fundamental but abstract accuracy motive—getting it right—competitive with the more tangible and concrete incentive—getting it published. This article develops strategies for improving scientific practices and knowledge accumulation that account for ordinary human motivations and biases.

850 citations


Journal ArticleDOI
TL;DR: Though the process model of depletion may sacrifice the elegance of the resource metaphor, it paints a more precise picture of ego depletion and suggests several nuanced predictions for future research.
Abstract: According to the resource model of self-control, overriding one’s predominant response tendencies consumes and temporarily depletes a limited inner resource. Over 100 experiments have lent support ...

803 citations


Journal ArticleDOI
TL;DR: Neuroscience research has revealed a tight correlation between the activity of the locus coeruleus (i.e., the "hub" of the noradrenergic system) and pupillary dilation; these neurophysiological findings provide important new insights into the meaning of pupillary responses for mental activity.
Abstract: The measurement of pupil diameter in psychology (in short, "pupillometry") has just celebrated 50 years. The method established itself after the appearance of three seminal studies (Hess & Polt, 1960, 1964; Kahneman & Beatty, 1966). Since then, the method has continued to play a significant role within the field, and pupillary responses have been successfully used to provide an estimate of the "intensity" of mental activity and of changes in mental states, particularly changes in the allocation of attention and the consolidation of perception. Remarkably, pupillary responses provide a continuous measure regardless of whether the participant is aware of such changes. More recently, research in neuroscience has revealed a tight correlation between the activity of the locus coeruleus (i.e., the "hub" of the noradrenergic system) and pupillary dilation. As we discuss in this short review, these neurophysiological findings provide important new insights into the meaning of pupillary responses for mental activity. Finally, given that pupillary responses can be easily measured in a noninvasive manner, occur from birth, and can occur in the absence of voluntary, conscious processes, they constitute a very promising tool for the study of preverbal (e.g., infants) or nonverbal participants (e.g., animals, neurological patients).

733 citations


Journal ArticleDOI
TL;DR: This article proposes that researchers preregister their studies and indicate in advance the analyses they intend to conduct, arguing that only these analyses deserve the label “confirmatory,” and only for these analyses are the common statistical tests valid.
Abstract: The veracity of substantive research claims hinges on the way experimental data are collected and analyzed. In this article, we discuss an uncomfortable fact that threatens the core of psychology’s academic enterprise: almost without exception, psychologists do not commit themselves to a method of data analysis before they see the actual data. It then becomes tempting to fine tune the analysis to the data in order to obtain a desired result—a procedure that invalidates the interpretation of the common statistical tests. The extent of the fine tuning varies widely across experiments and experimenters but is almost impossible for reviewers and readers to gauge. To remedy the situation, we propose that researchers preregister their studies and indicate in advance the analyses they intend to conduct. Only these analyses deserve the label “confirmatory,” and only for these analyses are the common statistical tests valid. Other analyses can be carried out but these should be labeled “exploratory.” We illustrate our proposal with a confirmatory replication attempt of a study on extrasensory perception.

709 citations


Journal ArticleDOI
TL;DR: This paper considers 13 meta-analyses covering 281 primary studies in various fields of psychology and finds indications of biases and/or an excess of significant results in seven, highlighting the need for sufficiently powerful replications and changes in journal policies.
Abstract: If science were a game, a dominant rule would probably be to collect results that are statistically significant. Several reviews of the psychological literature have shown that around 96% of papers involving the use of null hypothesis significance testing report significant outcomes for their main results but that the typical studies are insufficiently powerful for such a track record. We explain this paradox by showing that the use of several small underpowered samples often represents a more efficient research strategy (in terms of finding p < .05) than does the use of one larger (more powerful) sample. Publication bias and the most efficient strategy lead to inflated effects and high rates of false positives, especially when researchers also resorted to questionable research practices, such as adding participants after intermediate testing. We provide simulations that highlight the severity of such biases in meta-analyses. We consider 13 meta-analyses covering 281 primary studies in various fields of psychology and find indications of biases and/or an excess of significant results in seven. These results highlight the need for sufficiently powerful replications and changes in journal policies.
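The abstract's central claim, that splitting a fixed participant budget into several underpowered studies is a more "efficient" route to p < .05 than running one well-powered study, can be checked with a toy simulation. This is an illustrative sketch, not the authors' simulation code: it assumes a small true effect (d = 0.2), uses a known-variance z-test, and compares one study of 100 per group against five studies of 20 per group (the same total N), counting a strategy as successful if any of its studies comes out significant.

```python
# Illustrative sketch (not the authors' code): compare two ways of spending
# the same participant budget when a small true effect (d = 0.2) exists.
import math
import random

def study_is_significant(n_per_group, effect, rng):
    """Two-sided z-test with sigma = 1; True if p < .05."""
    a = [rng.gauss(effect, 1) for _ in range(n_per_group)]
    b = [rng.gauss(0, 1) for _ in range(n_per_group)]
    z = (sum(a) / n_per_group - sum(b) / n_per_group) / math.sqrt(2.0 / n_per_group)
    return math.erfc(abs(z) / math.sqrt(2)) < 0.05

def success_rate(n_studies, n_per_group, effect=0.2, n_sims=3000, seed=1):
    """P(at least one of n_studies reaches p < .05)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        if any(study_is_significant(n_per_group, effect, rng)
               for _ in range(n_studies)):
            wins += 1
    return wins / n_sims

one_big = success_rate(n_studies=1, n_per_group=100)
five_small = success_rate(n_studies=5, n_per_group=20)
print(one_big, five_small)  # the split strategy usually hits p < .05 more often
```

The split strategy wins more often even though each small study is badly underpowered; combined with publication bias (only the "hit" gets published), this is exactly the engine of inflated effects the abstract describes.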

588 citations


Journal ArticleDOI
TL;DR: It was found that the majority of replications in psychology journals reported similar findings to their original studies (i.e., they were successful replications), however, replications were significantly less likely to be successful when there was no overlap in authorship between the original and replicating articles.
Abstract: Recent controversies in psychology have spurred conversations about the nature and quality of psychological research. One topic receiving substantial attention is the role of replication in psychological science. Using the complete publication history of the 100 psychology journals with the highest 5-year impact factors, the current article provides an overview of replications in psychological research since 1900. This investigation revealed that roughly 1.6% of all psychology publications used the term replication in text. A more thorough analysis of 500 randomly selected articles revealed that only 68% of articles using the term replication were actual replications, resulting in an overall replication rate of 1.07%. Contrary to previous findings in other fields, this study found that the majority of replications in psychology journals reported similar findings to their original studies (i.e., they were successful replications). However, replications were significantly less likely to be successful when there was no overlap in authorship between the original and replicating articles. Moreover, despite numerous systemic biases, the rate at which replications are being published has increased in recent decades.

565 citations


Journal ArticleDOI
TL;DR: Smartphone research will require new skills in app development and data analysis and will raise tough new ethical issues, but smartphones could transform psychology even more profoundly than PCs and brain imaging did.
Abstract: By 2025, when most of today's psychology undergraduates will be in their mid-30s, more than 5 billion people on our planet will be using ultra-broadband, sensor-rich smartphones far beyond the abilities of today's iPhones, Androids, and Blackberries. Although smartphones were not designed for psychological research, they can collect vast amounts of ecologically valid data, easily and quickly, from large global samples. If participants download the right "psych apps," smartphones can record where they are, what they are doing, and what they can see and hear and can run interactive surveys, tests, and experiments through touch screens and wireless connections to nearby screens, headsets, biosensors, and other peripherals. This article reviews previous behavioral research using mobile electronic devices, outlines what smartphones can do now and will be able to do in the near future, explains how a smartphone study could work practically given current technology (e.g., in studying ovulatory cycle effects on women's sexuality), discusses some limitations and challenges of smartphone research, and compares smartphones to other research methods. Smartphone research will require new skills in app development and data analysis and will raise tough new ethical issues, but smartphones could transform psychology even more profoundly than PCs and brain imaging did.

551 citations


Journal ArticleDOI
TL;DR: This work argues that boredom is universally conceptualized as “the aversive experience of wanting, but being unable, to engage in satisfying activity,” and proposes that boredom be defined in terms of attention.
Abstract: Our central goal is to provide a definition of boredom in terms of the underlying mental processes that occur during an instance of boredom. Through the synthesis of psychodynamic, existential, arousal, and cognitive theories of boredom, we argue that boredom is universally conceptualized as "the aversive experience of wanting, but being unable, to engage in satisfying activity." We propose to map this conceptualization onto underlying mental processes. Specifically, we propose that boredom be defined in terms of attention. That is, boredom is the aversive state that occurs when we (a) are not able to successfully engage attention with internal (e.g., thoughts or feelings) or external (e.g., environmental stimuli) information required for participating in satisfying activity, (b) are focused on the fact that we are not able to engage attention and participate in satisfying activity, and (c) attribute the cause of our aversive state to the environment. We believe that our definition of boredom fully accounts for the phenomenal experience of boredom, brings existing theories of boredom into dialogue with one another, and suggests specific directions for future research on boredom and attention.

Journal ArticleDOI
TL;DR: It is argued that there are no plausible concrete scenarios to back up such forecasts and that what is needed is not patience, but rather systematic reforms in scientific practice.
Abstract: We discuss three arguments voiced by scientists who view the current outpouring of concern about replicability as overblown. The first idea is that the adoption of a low alpha level (e.g., 5%) puts reasonable bounds on the rate at which errors can enter the published literature, making false-positive effects rare enough to be considered a minor issue. This, we point out, rests on statistical misunderstanding: The alpha level imposes no limit on the rate at which errors may arise in the literature (Ioannidis, 2005b). Second, some argue that whereas direct replication attempts are uncommon, conceptual replication attempts are common—providing an even better test of the validity of a phenomenon. We contend that performing conceptual rather than direct replication attempts interacts insidiously with publication bias, opening the door to literatures that appear to confirm the reality of phenomena that in fact do not exist. Finally, we discuss the argument that errors will eventually be pruned out of the literature if the field would just show a bit of patience. We contend that there are no plausible concrete scenarios to back up such forecasts and that what is needed is not patience, but rather systematic reforms in scientific practice.

Journal ArticleDOI
TL;DR: This review argues that recent advances in these related fields may offer a fresh theoretical perspective on how people gather information to support their own learning.
Abstract: A widely advocated idea in education is that people learn better when the flow of experience is under their control (i.e., learning is self-directed). However, the reasons why volitional control might result in superior acquisition and the limits to such advantages remain poorly understood. In this article, we review the issue from both a cognitive and computational perspective. On the cognitive side, self-directed learning allows individuals to focus effort on useful information they do not yet possess, can expose information that is inaccessible via passive observation, and may enhance the encoding and retention of materials. On the computational side, the development of efficient “active learning” algorithms that can select their own training data is an emerging research topic in machine learning. This review argues that recent advances in these related fields may offer a fresh theoretical perspective on how people gather information to support their own learning.

Journal ArticleDOI
TL;DR: The Reproducibility Project is an open, large-scale, collaborative effort to systematically examine the rate and predictors of reproducibility in psychological science.
Abstract: Reproducibility is a defining feature of science. However, because of strong incentives for innovation and weak incentives for confirmation, direct replication is rarely practiced or published. The Reproducibility Project is an open, large-scale, collaborative effort to systematically examine the rate and predictors of reproducibility in psychological science. So far, 72 volunteer researchers from 41 institutions have organized to openly and transparently replicate studies published in three prominent psychological journals in 2008. Multiple methods will be used to evaluate the findings, calculate an empirical rate of replication, and investigate factors that predict reproducibility. Whatever the result, a better understanding of reproducibility will ultimately improve confidence in scientific methodology and findings.

Journal ArticleDOI
TL;DR: It is argued that the field often constructs arguments to block the publication and interpretation of null results, and that null results may be further extinguished through questionable researcher practices; these problems reduce psychological science’s capability to have a proper mechanism for theory falsification, resulting in the promulgation of numerous “undead” theories.
Abstract: Publication bias remains a controversial issue in psychological science. The tendency of psychological science to avoid publishing null results produces a situation that limits the replicability assumption of science, as replication cannot be meaningful without the potential acknowledgment of failed replications. We argue that the field often constructs arguments to block the publication and interpretation of null results and that null results may be further extinguished through questionable researcher practices. Given that science is dependent on the process of falsification, we argue that these problems reduce psychological science’s capability to have a proper mechanism for theory falsification, thus resulting in the promulgation of numerous “undead” theories that are ideologically popular but have little basis in fact.

Journal ArticleDOI
TL;DR: A number of impediments to self-correction that have been empirically studied in psychological science are cataloged and some proposed solutions to promote sound replication practices enhancing the credibility of scientific results are discussed.
Abstract: The ability to self-correct is considered a hallmark of science. However, self-correction does not always happen to scientific evidence by default. The trajectory of scientific credibility can fluctuate over time, both for defined scientific fields and for science at-large. History suggests that major catastrophes in scientific credibility are unfortunately possible and the argument that “it is obvious that progress is made” is weak. Careful evaluation of the current status of credibility of various scientific fields is important in order to understand any credibility deficits and how one could obtain and establish more trustworthy results. Efficient and unbiased replication mechanisms are essential for maintaining high levels of scientific credibility. Depending on the types of results obtained in the discovery and replication phases, there are different paradigms of research: optimal, self-correcting, false nonreplication, and perpetuated fallacy. In the absence of replication efforts, one is left with unconfirmed (genuine) discoveries and unchallenged fallacies. In several fields of investigation, including many areas of psychological science, perpetuated and unchallenged fallacies may comprise the majority of the circulating evidence. I catalogue a number of impediments to self-correction that have been empirically studied in psychological science. Finally, I discuss some proposed solutions to promote sound replication practices enhancing the credibility of scientific results as well as some potential disadvantages of each of them. Any deviation from the principle that seeking the truth has priority over any other goals may be seriously damaging to the self-correcting functions of science.

Journal ArticleDOI
TL;DR: The psychology of dehumanization resulting from inherent features of medical settings, the doctor–patient relationship, and the deployment of routine clinical practices is discussed and six fixes for these problems are proposed.
Abstract: Dehumanization is endemic in medical practice. This article discusses the psychology of dehumanization resulting from inherent features of medical settings, the doctor-patient relationship, and the deployment of routine clinical practices. First, we identify six major causes of dehumanization in medical settings (deindividuating practices, impaired patient agency, dissimilarity, mechanization, empathy reduction, and moral disengagement). Next, we propose six fixes for these problems (individuation, agency reorientation, promoting similarity, personification and humanizing procedures, empathic balance and physician selection, and moral engagement). Finally, we discuss when dehumanization in medical practice is potentially functional and when it is not. Appreciating the multiple psychological causes of dehumanization in hospitals allows for a deeper understanding of how to diminish detrimental instances of dehumanization in the medical environment.

Journal ArticleDOI
TL;DR: A “shift-and-persist” model is proposed in which, in the midst of adversity, some children find role models who teach them to trust others, better regulate their emotions, and focus on their futures.
Abstract: Some individuals, despite facing recurrent, severe adversities in life such as low socioeconomic status (SES), are nonetheless able to maintain good physical health. This article explores why these individuals deviate from the expected association of low SES with poor health, and outlines a “shift-and-persist” model to explain the psychobiological mechanisms involved. This model proposes that in the midst of adversity, some children find role models who teach them to trust others, better regulate their emotions, and focus on their futures. Over a lifetime, these low SES children develop an approach to life that prioritizes shifting oneself (accepting stress for what it is and adapting the self to it) in combination with persisting (enduring life with strength by holding on to meaning and optimism). This combination of shift-and-persist strategies mitigates sympathetic-nervous-system and hypothalamic-pituitary-adrenocortical responses to the barrage of stressors that low SES individuals confront. This tendency vectors individuals off the trajectory to chronic disease by forestalling pathogenic sequelae of stress reactivity, like insulin resistance, high blood pressure, and systemic inflammation. We outline evidence for the model, and argue that efforts to identify resilience-promoting processes are important in this economic climate, given limited resources for improving the financial circumstances of disadvantaged individuals.

Journal ArticleDOI
TL;DR: A meta-analysis of the interest literature showed that interests are indeed related to performance and persistence in work and academic contexts and the correlations between congruence indices and performance were stronger than for interest scores alone.
Abstract: Despite early claims that vocational interests could be used to distinguish successful workers and superior students from their peers, interest measures are generally ignored in the employee selection literature. Nevertheless, theoretical descriptions of vocational interests from vocational and educational psychology have proposed that interest constructs should be related to performance and persistence in work and academic settings. Moreover, on the basis of Holland's (1959, 1997) theoretical predictions, congruence indices, which quantify the degree of similarity or person-environment fit between individuals and their occupations, should be more strongly related to performance than interest scores alone. Using a comprehensive review of the interest literature that spans more than 60 years of research, a meta-analysis was conducted to examine the veracity of these claims. A literature search identified 60 studies and approximately 568 correlations that addressed the relationship between interests and performance. Results showed that interests are indeed related to performance and persistence in work and academic contexts. In addition, the correlations between congruence indices and performance were stronger than for interest scores alone. Thus, consistent with interest theory, the fit between individuals and their environment was more predictive of performance than interest alone.

Journal ArticleDOI
TL;DR: Anderson, Lindsay, and Bushman (1999) compared effect sizes from laboratory and field studies of 38 research topics compiled in 21 meta-analyses and concluded that psychological laboratories produced externally valid results.
Abstract: Anderson, Lindsay, and Bushman (1999) compared effect sizes from laboratory and field studies of 38 research topics compiled in 21 meta-analyses and concluded that psychological laboratories produced externally valid results. A replication and extension of Anderson et al. (1999) using 217 lab-field comparisons from 82 meta-analyses found that the external validity of laboratory research differed considerably by psychological subfield, research topic, and effect size. Laboratory results from industrial-organizational psychology most reliably predicted field results, effects found in social psychology laboratories most frequently changed signs in the field (from positive to negative or vice versa), and large laboratory effects were more reliably replicated in the field than medium and small laboratory effects.

Journal ArticleDOI
TL;DR: In this article, the authors consider recent findings that, despite the widespread bias and logical errors, people at least implicitly detect that their heuristic response conflicts with traditional normative considerations and propose that this conflict sensitivity calls for the postulation of logical and probabilistic knowledge that is intuitive and that is activated automatically when people engage in a reasoning task.
Abstract: Human reasoning has been characterized as often biased, heuristic, and illogical. In this article, I consider recent findings establishing that, despite the widespread bias and logical errors, people at least implicitly detect that their heuristic response conflicts with traditional normative considerations. I propose that this conflict sensitivity calls for the postulation of logical and probabilistic knowledge that is intuitive and that is activated automatically when people engage in a reasoning task. I sketch the basic characteristics of these intuitions and point to implications for ongoing debates in the field.

Journal ArticleDOI
TL;DR: The authors analyze a convenience sample of fraud cases to see whether (social) psychology is more susceptible to fraud than other disciplines and suggest a number of strategies that might reduce the risk of scientific fraud.
Abstract: The recent Stapel fraud case came as a shattering blow to the scientific community of psychologists and damaged both their image in the media and their collective self-esteem. The field responded with suggestions of how fraud could be prevented. However, the Stapel fraud is only one among many cases. Before basing recommendations on one case, it would be informative to study other cases to assess how these frauds were discovered. The authors analyze a convenience sample of fraud cases to see whether (social) psychology is more susceptible to fraud than other disciplines. They also evaluate whether the peer review process and replications work well in practice to detect fraud. There is no evidence that psychology is more vulnerable to fraud than the biomedical sciences, and most frauds are detected through information from whistleblowers with inside information. On the basis of this analysis, the authors suggest a number of strategies that might reduce the risk of scientific fraud.

Journal ArticleDOI
TL;DR: This article considers psychologists’ narrative approach to scientific publications as an underlying reason for the neglect of replication and proposes an incentive structure for replications within psychology that can be developed in a relatively simple and cost-effective manner.
Abstract: Although replications are vital to scientific progress, psychologists rarely engage in systematic replication efforts. In this article, we consider psychologists' narrative approach to scientific publications as an underlying reason for this neglect and propose an incentive structure for replications within psychology. First, researchers need accessible outlets for publishing replications. To accomplish this, psychology journals could publish replication reports in files that are electronically linked to reports of the original research. Second, replications should get cited. This can be achieved by cociting replications along with original research reports. Third, replications should become a valued collaborative effort. This can be realized by incorporating replications in teaching programs and by stimulating adversarial collaborations. The proposed incentive structure for replications can be developed in a relatively simple and cost-effective manner. By promoting replications, this incentive structure may greatly enhance the dependability of psychology's knowledge base.

Journal ArticleDOI
TL;DR: It is argued that the development of some socioemotional skills may be vulnerable to disruption by environmental distraction, for example, from certain educational practices or overuse of social media, and it is hypothesized that high environmental attention demands may bias youngsters to focus on the concrete, physical, and immediate aspects of social situations and self, which may be more compatible with external attention.
Abstract: When people wakefully rest in the functional MRI scanner, their minds wander, and they engage a so-called default mode (DM) of neural processing that is relatively suppressed when attention is focused on the outside world. Accruing evidence suggests that DM brain systems activated during rest are also important for active, internally focused psychosocial mental processing, for example, when recalling personal memories, imagining the future, and feeling social emotions with moral connotations. Here the authors review evidence for the DM and relations to psychological functioning, including associations with mental health and cognitive abilities like reading comprehension and divergent thinking. This article calls for research into the dimensions of internally focused thought, ranging from free-form daydreaming and off-line consolidation to intensive, effortful abstract thinking, especially with socioemotional relevance. It is argued that the development of some socioemotional skills may be vulnerable to disruption by environmental distraction, for example, from certain educational practices or overuse of social media. The authors hypothesize that high environmental attention demands may bias youngsters to focus on the concrete, physical, and immediate aspects of social situations and self, which may be more compatible with external attention. They coin the term constructive internal reflection and advocate educational practices that promote effective balance between external attention and internal reflection.

Journal ArticleDOI
TL;DR: Theory controversies in psychology show multidecade durability, demonstrated here in the subdisciplines of cognitive and social psychology, whereas an analysis of the last two decades of Nobel awards in physics, chemistry, and medicine shows that contributions to method are recognized far more often than contributions to theory.
Abstract: This article documents two facts that are provocative in juxtaposition. First: There is multidecade durability of theory controversies in psychology, demonstrated here in the subdisciplines of cognitive and social psychology. Second: There is a much greater frequency of Nobel science awards for contributions to method than for contributions to theory, shown here in an analysis of the last two decades of Nobel awards in physics, chemistry, and medicine. The available documentation of Nobel awards reveals two forms of method–theory synergy: (a) existing theories were often essential in enabling development of awarded methods, and (b) award-receiving methods often generated previously inconceivable data, which in turn inspired previously inconceivable theories. It is easy to find illustrations of these same synergies also in psychology. Perhaps greater recognition of the value of method in advancing theory can help to achieve resolutions of psychology’s persistent theory controversies.

Journal ArticleDOI
TL;DR: To open the bottleneck, the author suggests putting structures in place to reward broader forms of information sharing beyond the exquisite art of present-day journal publication, a more palatable solution to the crisis in psychological research.
Abstract: The current crisis in psychological research involves issues of fraud, replication, publication bias, and false positive results. I argue that this crisis follows the failure of widely adopted solutions to psychology’s similar crisis of the 1970s. The untouched root cause is an information-economic one: Too many studies divided by too few publication outlets equals a bottleneck. Articles cannot pass through just by showing theoretical meaning and methodological rigor; their results must appear to support the hypothesis perfectly. Consequently, psychologists must master the art of presenting perfect-looking results just to survive in the profession. This favors aesthetic criteria of presentation in a way that harms science’s search for truth. Shallow standards of statistical perfection distort analyses and undermine the accuracy of cumulative data; narrative expectations encourage dishonesty about the relationship between results and hypotheses; criteria of novelty suppress replication attempts. Concerns about truth in research are emerging in other sciences and may eventually descend on our heads in the form of difficult and insensitive regulations. I suggest a more palatable solution: to open the bottleneck, putting structures in place to reward broader forms of information sharing beyond the exquisite art of present-day journal publication.

Journal ArticleDOI
TL;DR: It is argued that, given the current state of affairs in behavioral science, false negatives often constitute a more serious problem than false positives, and that a scientific culture rewarding strong inference is more likely to see progress than a culture preoccupied with tightening its standards for the mere publication of original findings.
Abstract: Several influential publications have sensitized the community of behavioral scientists to the dangers of inflated effects and false-positive errors leading to the unwarranted publication of nonreplicable findings. This issue has been related to prominent cases of data fabrication and survey results pointing to bad practices in empirical science. Although we concur with the motives behind these critical arguments, we note that an isolated debate of false positives may itself be misleading and counter-productive. Instead, we argue that, given the current state of affairs in behavioral science, false negatives often constitute a more serious problem. Referring to Wason’s (1960) seminal work on inductive reasoning, we show that the failure to assertively generate and test alternative hypotheses can lead to dramatic theoretical mistakes, which cannot be corrected by any kind of rigor applied to statistical tests of the focal hypotheses. We conclude that a scientific culture rewarding strong inference (Platt, 1964) is more likely to see progress than a culture preoccupied with tightening its standards for the mere publication of original findings.

Journal ArticleDOI
TL;DR: The authors surveyed a large number of social and personality psychologists and discovered several interesting facts: Although only 6% described themselves as conservative overall, there was more diversity of political opinion on economic issues and foreign policy, and respondents significantly underestimated the proportion of conservatives among their colleagues.
Abstract: A lack of political diversity in psychology is said to lead to a number of pernicious outcomes, including biased research and active discrimination against conservatives. We surveyed a large number (combined N = 800) of social and personality psychologists and discovered several interesting facts. First, although only 6% described themselves as conservative “overall,” there was more diversity of political opinion on economic issues and foreign policy. Second, respondents significantly underestimated the proportion of conservatives among their colleagues. Third, conservatives fear negative consequences of revealing their political beliefs to their colleagues. Finally, they are right to do so: In decisions ranging from paper reviews to hiring, many social and personality psychologists said that they would discriminate against openly conservative colleagues. The more liberal respondents were, the more they said they would discriminate.

Journal ArticleDOI
TL;DR: The authors provide a Bayesian analysis of learning from knowledgeable others, which formalizes how learners may use a person’s actions and goals to make inferences about the actor’s knowledge about the world.
Abstract: From early childhood, human beings learn not only from collections of facts about the world but also from social contexts through observations of other people, communication, and explicit teaching. In these contexts, the data are the result of human actions—actions that come about because of people’s goals and intentions. To interpret the implications of others’ actions correctly, learners must understand the people generating the data. Most models of learning, however, assume that data are randomly collected facts about the world and cannot explain how social contexts influence learning. We provide a Bayesian analysis of learning from knowledgeable others, which formalizes how learners may use a person’s actions and goals to make inferences about the actor’s knowledge about the world. We illustrate this framework using two examples from causal learning and conclude by discussing the implications for cognition, social reasoning, and cognitive development.
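The contrast the abstract draws between randomly collected facts and data generated by a knowledgeable person can be sketched in a toy Bayesian model: if a positive example is known to be drawn from the true set by an informed actor ("strong sampling"), its likelihood under a hypothesis h is 1/|h|, so smaller consistent hypotheses gain posterior weight; if the example is a randomly encountered fact ("weak sampling"), consistency is all that matters. This is a simplified illustration of the general idea, not the article's model, and the hypothesis sets below are hypothetical.

```python
from fractions import Fraction

# Toy hypothesis space: each hypothesis is a set of items sharing a hidden
# property. (These sets are illustrative, not taken from the article.)
hypotheses = {
    "small": {1, 2},
    "medium": {1, 2, 3, 4},
    "large": {1, 2, 3, 4, 5, 6, 7, 8},
}

def posterior(data, knowledgeable):
    """Posterior over hypotheses after seeing positive examples `data`.

    knowledgeable=True models strong sampling (likelihood 1/|h| per
    example); knowledgeable=False models weak sampling (any consistent
    hypothesis is equally likely).
    """
    prior = Fraction(1, len(hypotheses))
    scores = {}
    for name, h in hypotheses.items():
        if not set(data) <= h:
            scores[name] = Fraction(0)  # hypothesis inconsistent with data
        elif knowledgeable:
            scores[name] = prior * Fraction(1, len(h)) ** len(data)
        else:
            scores[name] = prior
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Two positive examples consistent with every hypothesis:
print(posterior([1, 2], knowledgeable=True))   # "small" dominates
print(posterior([1, 2], knowledgeable=False))  # all consistent h tie
```

The same two observations thus license very different conclusions depending on the learner's model of the person who produced them, which is the abstract's central point.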

Journal ArticleDOI
TL;DR: The article describes the implications of this observation and demonstrates how to test for too much successful replication using a set of experiments from a recent research paper.
Abstract: Like other scientists, psychologists believe experimental replication to be the final arbiter for determining the validity of an empirical finding. Reports in psychology journals often attempt to prove the validity of a hypothesis or theory with multiple experiments that replicate a finding. Unfortunately, these efforts are sometimes misguided because in a field like experimental psychology, ever more successful replication does not necessarily ensure the validity of an empirical finding. When psychological experiments are analyzed with statistics, the rules of probability dictate that random samples should sometimes be selected that do not reject the null hypothesis, even if an effect is real. As a result, it is possible for a set of experiments to have too many successful replications. When there are too many successful replications for a given set of experiments, a skeptical scientist should be suspicious that null or negative findings have been suppressed, the experiments were run improperly, or the experiments were analyzed improperly. This article describes the implications of this observation and demonstrates how to test for too much successful replication by using a set of experiments from a recent research paper.
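The probabilistic argument can be made concrete: when each experiment has power well below 1, the chance that every experiment in a multi-study paper rejects the null is the product of the individual powers, so an unbroken run of successes can itself be improbable. Below is a minimal sketch using a normal approximation to the power of a two-sided, two-sample test; the effect size and per-group sample sizes are hypothetical, and this is an illustration of the logic rather than the article's exact procedure.

```python
from math import prod
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test (normal
    approximation) for a true standardized effect size d with
    n_per_group subjects per group."""
    z = NormalDist()
    se = (2 / n_per_group) ** 0.5          # SE of the standardized mean difference
    crit = z.inv_cdf(1 - alpha / 2)        # two-sided critical value
    return 1 - z.cdf(crit - d / se) + z.cdf(-crit - d / se)

# Hypothetical paper: five experiments, each with modest power to detect d = 0.5.
powers = [power_two_sample(0.5, n) for n in [20, 25, 22, 30, 24]]
p_all_succeed = prod(powers)
print([round(p, 2) for p in powers], round(p_all_succeed, 3))
```

With per-experiment power in the .35-.50 range, the probability that all five experiments succeed is only about 1%, so five out of five significant results should raise exactly the suspicions the abstract describes: suppressed null findings or improperly run or analyzed experiments.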