Journal ArticleDOI

Retracted Science and the Retraction Index

01 Oct 2011 - Infection and Immunity (American Society for Microbiology) - Vol. 79, Iss. 10, pp. 3855-3859
TL;DR: Using a novel measure called the “retraction index,” the authors found that the frequency of retraction varies among journals and correlates strongly with journal impact factor.
Abstract: Articles may be retracted when their findings are no longer considered trustworthy due to scientific misconduct or error, when they plagiarize previously published work, or when they are found to violate ethical guidelines. Using a novel measure that we call the “retraction index,” we found that the frequency of retraction varies among journals and shows a strong correlation with the journal impact factor. Although retractions are relatively rare, the retraction process is essential for correcting the literature and maintaining trust in the scientific process.
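
The abstract does not state how the retraction index is computed. Purely as an illustration, treating it as retractions per 1,000 articles a journal published over the study window, a minimal sketch of the calculation and of checking its correlation with impact factor might look like the following; the journal names and figures are placeholders, not data from the paper.

```python
# Hypothetical sketch only: retraction index taken as retractions per 1,000
# articles published over a fixed window, then correlated with impact factor.
# The journals and numbers are made up, not data from the paper.
from statistics import correlation  # Python 3.10+

def retraction_index(retractions: int, articles_published: int) -> float:
    """Retractions per 1,000 published articles over the study window."""
    return 1000.0 * retractions / articles_published

journals = {
    # name: (retractions, articles published, impact factor) -- invented values
    "Journal A": (10, 20000, 30.0),
    "Journal B": (4, 15000, 12.0),
    "Journal C": (1, 10000, 3.0),
}

indices = [retraction_index(r, n) for r, n, _ in journals.values()]
impact_factors = [jif for _, _, jif in journals.values()]
print(correlation(indices, impact_factors))  # Pearson correlation
```
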
Citations
Journal ArticleDOI
TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
Abstract: A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.
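
A short numerical illustration (not taken from the paper) of why low power undermines the credibility of significant findings: under the standard relationship PPV = power * R / (power * R + alpha), where R is the pre-study odds that the tested effect is real and alpha the significance threshold, low power sharply reduces the positive predictive value of a significant result. A minimal sketch with arbitrary example values:

```python
# Illustrative only: how low statistical power erodes the chance that a
# statistically significant result reflects a true effect (its PPV).
# PPV = power * R / (power * R + alpha), with R the pre-study odds that the
# tested effect is real. All values below are arbitrary examples.

def ppv(power: float, alpha: float = 0.05, prior_odds: float = 0.25) -> float:
    return power * prior_odds / (power * prior_odds + alpha)

for power in (0.2, 0.5, 0.8):
    print(f"power={power:.1f} -> PPV={ppv(power):.2f}")
# With prior odds of 1:4, PPV rises from about 0.50 at 20% power to 0.80 at 80% power.
```
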

5,683 citations

Journal ArticleDOI
TL;DR: A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error, compared with 67.4% attributable to misconduct, including fraud or suspected fraud, duplicate publication, and plagiarism.
Abstract: A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%). Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic patterns that may reveal underlying causes.

845 citations


Cites background from "Retracted Science and the Retraction Index"

  • ...Studies of selected retracted articles have suggested that error is more common than fraud as a cause of retraction (3–5) and that rates of retraction correlate with journal impact factor (6)....


  • ...prestigious journals is consistent with the suggestion that the benefits of publishing in such venues are powerful incentives for fraud (4, 6, 32)....


  • ...An association between impact factor and retraction for fraud or error has been noted previously (4, 6, 29, 30)....


Proceedings ArticleDOI
01 Aug 2020
TL;DR: SciSci has revealed choices and trade-offs that scientists face as they advance both their own careers and the scientific horizon, and offers a deep quantitative understanding of the relational structure between scientists, institutions, and ideas, which facilitates the identification of fundamental mechanisms responsible for scientific discovery.
Abstract: The rapid development of digital libraries and the proliferation of scholarly big data have created an unprecedented opportunity to explore scientific production and reward at scale. Fueled by data exploration and computational advances in digital libraries, the science of science is an emerging multidisciplinary field that aims to quantify patterns of scientific relationships and dependencies, and how scientific progress emerges from scholarly big data. In this tutorial, we will provide an overview of the science of science, including major topics on scientific careers, scientific collaborations, and scientific ideas. We will also discuss its historical context, state-of-the-art models and exciting discoveries, and promising future directions for participants interested in mining scholarly big data.

579 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a 60-year meta-analysis of statistical power in the behavioural sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power.
Abstract: Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing-no deliberate cheating nor loafing-by scientists, only that publication is a principal factor for career advancement. Some normative methods of analysis have almost certainly been selected to further publication instead of discovery. In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. We support this argument with empirical evidence and computational modelling. We first present a 60-year meta-analysis of statistical power in the behavioural sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power. To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods. As in the real world, successful labs produce more 'progeny,' such that their methods are more often copied and their students are more likely to start labs of their own. Selection for high output leads to poorer methods and increasingly high false discovery rates. We additionally show that replication slows but does not stop the process of methodological deterioration. Improving the quality of research requires change at the institutional level.
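
The dynamic model itself is not reproduced here; as a toy caricature only (not the authors' model), the selection dynamic can be sketched as a population of labs in which lower methodological "effort" yields more papers, and the most productive lab's methods get copied each generation. All parameters below are arbitrary.

```python
# Toy caricature (not the authors' model): selection for publication output.
# Labs with lower methodological "effort" produce more papers; each generation
# a random lab copies the most productive lab's effort, with small mutation.
# Mean effort tends to drift downward over time. Parameters are arbitrary.
import random

random.seed(1)
N_LABS, GENERATIONS = 20, 200
efforts = [random.uniform(0.3, 0.9) for _ in range(N_LABS)]  # higher = more rigorous

for _ in range(GENERATIONS):
    papers = [random.gauss(10 * (1.2 - e), 1.0) for e in efforts]  # output falls with effort
    best = max(range(N_LABS), key=lambda i: papers[i])
    i = random.randrange(N_LABS)
    efforts[i] = min(0.9, max(0.1, efforts[best] + random.gauss(0, 0.02)))

print(f"mean effort after {GENERATIONS} generations: {sum(efforts) / N_LABS:.2f}")
```
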

435 citations

Journal ArticleDOI
TL;DR: A 60-year meta-analysis of statistical power in the behavioural sciences is presented and it is shown that power has not improved despite repeated demonstrations of the necessity of increasing power, and that replication slows but does not stop the process of methodological deterioration.
Abstract: Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favor them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing---no deliberate cheating nor loafing---by scientists, only that publication is a principal factor for career advancement. Some normative methods of analysis have almost certainly been selected to further publication instead of discovery. In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. We support this argument with empirical evidence and computational modeling. We first present a 60-year meta-analysis of statistical power in the behavioral sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power. To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods. As in the real world, successful labs produce more "progeny", such that their methods are more often copied and their students are more likely to start labs of their own. Selection for high output leads to poorer methods and increasingly high false discovery rates. We additionally show that replication slows but does not stop the process of methodological deterioration. Improving the quality of research requires change at the institutional level.

401 citations


Cites background from "Retracted Science and the Retraction Index"

  • ...…have found that the statistical power of papers published in prestigious (high impact factor) journals is no different from those with lower impact factors (Brembs et al., 2013), while the rate of retractions for journals is positively correlated with impact factor (Fang & Casadevall, 2011)....


References
Journal ArticleDOI
TL;DR: In this paper, the authors investigated a consecutive series of children with chronic enterocolitis and regressive developmental disorder, and identified associated gastrointestinal disease and developmental regression in a group of previously normal children, which was generally associated in time with possible environmental triggers.

2,505 citations

Journal ArticleDOI
15 Feb 1997 - BMJ
TL;DR: Alternative methods for evaluating research are being sought, such as citation rates and journal impact factors, which seem to be quantitative and objective indicators directly related to published science.
Abstract: Evaluating scientific quality is a notoriously difficult problem which has no standard solution. Ideally, published scientific results should be scrutinised by true experts in the field and given scores for quality and quantity according to established rules. In practice, however, what is called peer review is usually performed by committees with general competence rather than with the specialist's insight that is needed to assess primary research data. Committees tend, therefore, to resort to secondary criteria like crude publication counts, journal prestige, the reputation of authors and institutions, and estimated importance and relevance of the research field,1 making peer review as much of a lottery as of a rational process.2 3 On this background, it is hardly surprising that alternative methods for evaluating research are being sought, such as citation rates and journal impact factors, which seem to be quantitative and objective indicators directly related to published science. The citation data are obtained from a database produced by the Institute for Scientific Information (ISI) in Philadelphia, which continuously records scientific citations as represented by the reference lists of articles from a large number of the world's scientific journals. The references are rearranged in the database to show how many times each publication has been cited within a certain period, and by whom, and the results are published as the Science Citation Index (SCI) . On the basis of the Science Citation Index and authors' publication lists, the annual citation rate of papers by a scientific author or research group can thus be calculated. Similarly, the citation rate of a scientific journal—known as the journal impact factor—can be calculated as the mean citation rate of all the articles contained in the journal.4 Journal impact factors, which are published annually in SCI Journal Citation Reports , are widely regarded as …
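
As the abstract describes it, the journal impact factor is a mean citation rate per article; in the SCI Journal Citation Reports it is conventionally computed over a two-year window. A minimal sketch of that conventional calculation, with invented figures:

```python
# Sketch of the conventional two-year journal impact factor: citations received
# in year Y to items the journal published in years Y-1 and Y-2, divided by the
# number of citable items published in those two years. Figures are invented.

def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 2,400 citations in 2011 to articles from 2009-2010, 800 citable items:
print(impact_factor(2400, 800))  # -> 3.0
```
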

2,238 citations


"Retracted Science and the Retractio..." refers background in this paper

  • ...For example, publication in journals with high impact factors can be associated with improved job opportunities, grant success, peer recognition, and honorific rewards, despite widespread acknowledgment that impact factor is a flawed measure of scientific quality and importance (8, 29, 33, 77, 80, 86)....


Journal ArticleDOI
29 May 2009 - PLOS ONE
TL;DR: Meta-regression showed that self-report surveys, surveys using the words “falsification” or “fabrication”, and mailed surveys yielded lower percentages of misconduct, and when these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than others.
Abstract: The frequency with which scientists fabricate and falsify data, or commit other forms of scientific misconduct is a matter of controversy. Many surveys have asked scientists directly whether they have committed or know of a colleague who committed research misconduct, but their results appeared difficult to compare and synthesize. This is the first meta-analysis of these surveys. To standardize outcomes, the number of respondents who recalled at least one incident of misconduct was calculated for each question, and the analysis was limited to behaviours that distort scientific knowledge: fabrication, falsification, "cooking" of data, etc. Survey questions on plagiarism and other forms of professional misconduct were excluded. The final sample consisted of 21 surveys that were included in the systematic review, and 18 in the meta-analysis. A pooled weighted average of 1.97% (N = 7, 95%CI: 0.86-4.45) of scientists admitted to have fabricated, falsified or modified data or results at least once--a serious form of misconduct by any standard--and up to 33.7% admitted other questionable research practices. In surveys asking about the behaviour of colleagues, admission rates were 14.12% (N = 12, 95% CI: 9.91-19.72) for falsification, and up to 72% for other questionable research practices. Meta-regression showed that self-report surveys, surveys using the words "falsification" or "fabrication", and mailed surveys yielded lower percentages of misconduct. When these factors were controlled for, misconduct was reported more frequently by medical/pharmacological researchers than others. Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct.
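
The pooled estimates above come from a formal meta-analytic model; purely to convey the idea of a pooled weighted average of survey-level proportions, a simplified inverse-variance sketch (with invented survey counts) might look like this:

```python
# Simplified illustration of an inverse-variance weighted pooled proportion.
# The meta-analysis itself uses a formal meta-analytic model; this only shows
# the basic weighting idea. The survey counts below are invented.

def pooled_proportion(surveys):
    """surveys: list of (admissions, respondents) tuples; proportions must be in (0, 1)."""
    weights, estimates = [], []
    for k, n in surveys:
        p = k / n
        var = p * (1 - p) / n            # binomial variance of the proportion
        weights.append(1 / var)
        estimates.append(p)
    return sum(w * p for w, p in zip(weights, estimates)) / sum(weights)

# Three hypothetical surveys: (respondents admitting misconduct, total respondents)
print(pooled_proportion([(5, 300), (12, 800), (3, 250)]))
```
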

1,387 citations


"Retracted Science and the Retractio..." refers background in this paper

  • ...In such situations, desperate authors may be enticed to take short cuts, withhold data from the review process, overinterpret results, manipulate images, and engage in behavior ranging from questionable practices to outright fraud (26)....


  • ...However, a meta-analysis of survey data reported that 2% of scientists report having committed serious research misconduct at least once, and one-third admit to having engaged in questionable research practices (26)....


Journal ArticleDOI
12 Mar 2004 - Science
TL;DR: In this article, the derivation of a pluripotent embryonic stem (ES) cell line (SCNT-hES-1) from a cloned human blastocyst was reported.
Abstract: Somatic cell nuclear transfer (SCNT) technology has recently been used to generate animals with a common genetic composition. In this study, we report the derivation of a pluripotent embryonic stem (ES) cell line (SCNT-hES-1) from a cloned human blastocyst. The SCNT-hES-1 cells displayed typical ES cell morphology and cell surface markers and were capable of differentiating into embryoid bodies in vitro and of forming teratomas in vivo containing cell derivatives from all three embryonic germ layers in severe combined immunodeficient mice. After continuous proliferation for more than 70 passages, SCNT-hES-1 cells maintained normal karyotypes and were genetically identical to the somatic nuclear donor cells. Although we cannot completely exclude the possibility that the cells had a parthenogenetic origin, imprinting analyses support a SCNT origin of the derived human ES cells.

721 citations