Journal ArticleDOI

Self-correction in biomedical publications and the scientific impact

01 Feb 2014 - Croatian Medical Journal (Medicinska Naklada) - Vol. 55, Iss. 1, pp. 61-72
TL;DR: The study suggests that the intensified self-correction in biomedicine is due to the attention of readers and authors, who spot errors in their hub of evidence-based information.
Abstract: Aim: To analyze mistakes and misconduct in multidisciplinary and specialized biomedical journals.
Citations
Journal ArticleDOI
TL;DR: With social media identified as a potential source of misinformation on COVID-19 and a high perceived risk of plagiarism, more stringent peer review and skilled post-publication promotion are advisable.
Abstract: Background: The coronavirus disease 2019 (COVID-19) pandemic has led to a large volume of publications, a barrage of non-reviewed preprints on various professional repositories, and a slew of retractions in a short amount of time. Methods: We conducted an e-survey using a cloud-based website to gauge the potential sources of trustworthy information and misinformation, and analyzed researchers', clinicians', and academics' attitudes toward unpublished items and toward pre- and post-publication quality checks in this challenging time. Results: Among 128 respondents (mean age, 43.2 years; M:F, 1.1:1), 60 (46.9%) were scholarly journal editors and editorial board members. Social media channels were identified as the most important sources of both information and misinformation (81 [63.3%] and 86 [67.2%], respectively). Nearly half of the respondents (62, 48.4%) blamed reviewers, editors, and misinterpretation by readers as contributors to misinformation alongside authors. A majority (70, 58.6%) perceived a higher risk of plagiarism, especially plagiarism of ideas (64.1%), followed by inappropriate paraphrasing (54.7%). Opinion was divided on the utility of preprints for changing practice and on changing retraction rates during the pandemic period; higher rejection rates were not supported by most (76.6%), while the importance of peer review was endorsed by a majority (80, 62.5%). More stringent screening by journal editors (61.7%) and facilitating open-access plagiarism software (59.4%), including Artificial Intelligence (AI)-based algorithms (43.8%), were among the suggested solutions. Most (74.2%) supported the need to launch a specialist bibliographic database for COVID-19, with information indexed (62.3%), available as open access (82.8%), after expanding search terms (52.3%), and following due verification by academics (66.4%) and journal editors (52.3%). Conclusion: With social media identified as a potential source of misinformation on COVID-19 and a high perceived risk of plagiarism, more stringent peer review and skilled post-publication promotion are advisable. Journal editors should play a more active role in streamlining the publication and promotion of trustworthy information on COVID-19.

112 citations

Journal ArticleDOI
TL;DR: The article shows that retractions, by highlighting individual cases of misconduct and general policies for preventing misconduct while obscuring the actors and processes through which retractions are effected, produce highly fragmented patterns of visibility, which resemble the bifurcation in current justice systems.
Abstract: Retractions of scientific articles are becoming the most relevant institution for making sense of scientific misconduct. An increasing number of retracted articles, mainly attributed to misconduct, is currently providing a new empirical basis for research about scientific misconduct. This article reviews the relevant research literature from an interdisciplinary context. Furthermore, the results from these studies are contextualized sociologically by asking how scientific misconduct is made visible through retractions. This study treats retractions as an emerging institution that renders scientific misconduct visible, thus following up on the sociology of deviance and its focus on visibility. The article shows that retractions, by highlighting individual cases of misconduct and general policies for preventing misconduct while obscuring the actors and processes through which retractions are effected, produce highly fragmented patterns of visibility. These patterns resemble the bifurcation in current justice systems.

96 citations


Cites background from "Self-correction in biomedical publi..."

  • ...Both articles in higher impact factor journals (Fang and Casadevall, 2011; Fang et al., 2012; Gasparyan et al., 2014) and highly cited articles are retracted more often (Furman et al., 2012), which could either mean that high visibility increases the risk of retraction, or that researchers…...

  • ...It is unanimously acknowledged that retraction rates have been rising steadily since the 1970s with further acceleration after 2000 (Cokol et al., 2008; Fanelli, 2013; Fang et al., 2012; Gasparyan et al., 2014; Grieneisen and Zhang, 2012; He, 2013; Redman et al., 2008; Steen, 2011a; Wager and Williams, 2011)....

Journal ArticleDOI
TL;DR: This article analyses Scopus-based publication activity and evidence on poor writing, lack of related training, emerging anti-plagiarism strategies, and new forms of massive wasting of resources by publishing largely recycled items, which evade the ‘red flags’ of similarity checks.
Abstract: Plagiarism may take place in any scientific journal despite currently employed anti-plagiarism tools. The absence of widely accepted definitions of research misconduct and reliance solely on similarity checks do not allow journal editors to prevent most complex cases of recycling of scientific information and wasteful, or 'predatory,' publishing. This article analyses Scopus-based publication activity and evidence on poor writing, lack of related training, emerging anti-plagiarism strategies, and new forms of massive wasting of resources through the publishing of largely recycled items that evade the 'red flags' of similarity checks. In some non-Anglophone countries, 'copy-and-paste' writing still plagues pre- and postgraduate education. Poor research management, the absence of courses on publication ethics, and limited access to quality sources compound plagiarism as a cross-cultural and multidisciplinary phenomenon. Over the past decade, the advent of anti-plagiarism software checks has helped uncover elementary forms of textual recycling across journals. But such a tool alone proves insufficient for preventing complex forms of plagiarism. Recent mass retractions of plagiarized articles by reputable open-access journals point to critical deficiencies of current anti-plagiarism software, which does not recognize manipulative paraphrasing and editing. Manipulative editing also finds its way into predatory journals, which ignore publication ethics and accommodate nonsensical plagiarized items. The evolving preventive strategies rely increasingly on intelligent (semantic) digital technologies that comprehensively evaluate texts, keywords, graphics, and reference lists. It is the right time to enforce adherence to global editorial guidance and to implement a comprehensive anti-plagiarism strategy that supports all stakeholders of scholarly communication.
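To make the limitation concrete: the elementary 'similarity checks' discussed above reduce to surface-level text overlap, which verbatim copying trips but manipulative paraphrasing does not. The toy sketch below is purely illustrative (it is not the algorithm of any particular anti-plagiarism product, and the example strings are made up) and shows why a word n-gram overlap score flags verbatim reuse yet scores a full paraphrase at zero:

    # Toy illustration (not any real anti-plagiarism tool's algorithm):
    # word n-gram overlap catches verbatim copying but not paraphrase.

    def ngrams(text: str, n: int = 3) -> set:
        """Set of word n-grams in a text, lowercased."""
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a: str, b: str, n: int = 3) -> float:
        """Jaccard overlap of word n-grams: a crude similarity score."""
        ga, gb = ngrams(a, n), ngrams(b, n)
        return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

    original   = "retraction rates have been rising steadily since the 1970s"
    verbatim   = "retraction rates have been rising steadily since the 1970s"
    paraphrase = "since the seventies the pace of retractions has grown continuously"

    print(jaccard(original, verbatim))    # 1.0 -> flagged as recycled text
    print(jaccard(original, paraphrase))  # 0.0 -> evades the similarity check

Semantic (meaning-level) technologies of the kind the abstract advocates compare documents on concepts, keywords, and reference lists rather than on shared word sequences, which is what allows them to catch paraphrased recycling that a surface score like the one above misses.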

72 citations


Cites background from "Self-correction in biomedical publi..."

  • ...Self-correction in top journals can limit and prevent the growth of unethical papers (75)....


Journal ArticleDOI
11 Apr 2014
TL;DR: The aim of combating plagiarism is to improve quality, to achieve satisfactory results, and to compare the results of one's own research rather than copying data from the results of other people's research.
Abstract: Quality is assessed on the basis of adequate evidence, while the best results of research are accomplished through scientific knowledge. Information contained in a scientific work must always be based on scientific evidence. Guidelines for genuine scientific research should be designed on the basis of real results. Dynamic research and the correct methods of scientific work must originate from everyday practice and the fundamentals of the research. Original work should have proper data sources, clearly defined research goals, and methods that are appropriate for the questions included in the study. When selecting the methods, it is necessary to obtain the consent of the patients/respondents to provide data for the project, the so-called informed consent. Only through one's own efforts can true results be reached, from which conclusions can be drawn and which can finally yield a valid scholarly commentary. Text copied from other sources, whether in whole or in part, must be marked as a result of the other studies. High-quality scientific work requires expertise and relevant scientific literature, mostly taken from publications stored in biomedical databases: scientific, professional, and review articles and case reports from physician practices; knowledge can also be acquired at scientific and expert lectures by renowned scientists. The form of the publication must meet the standards for writing a paper. If an article has already been published in a scientific journal, the same article cannot be published in another journal with a few minor adjustments, or without specifying the parts of the first article that are used in the second. Copyright infringement occurs when the author of a new article, with or without mentioning the author, uses a substantial portion of previously published articles, including past contributions in the first article. With the permission of the publisher and the author, another journal can re-publish an already published article. In that case it is not plagiarism, because the journal states that the article was re-published with the permission of the journal in which it was originally released. There can be only one original; a copy is a copy, and plagiarism is a stolen copy. The aim of combating plagiarism is to improve quality, to achieve satisfactory results, and to compare the results of one's own research rather than copying data from the results of other people's research. Copying leads to incorrect results. Nowadays the problem of plagiarism has become widespread, present in almost all spheres of human activity, particularly in science. Scientific institutions and universities should have a center for the surveillance, security, promotion, and development of quality research. Establishing rules and respecting the rules of good practice are the obligations of every research institution, university, and individual researcher, regardless of which area of science is being investigated. There are misunderstandings and doubts about the criteria and standards for when and how to declare someone a plagiarist. The European Association of Science Editors (EASE), the World Association of Medical Editors (WAME), and the Committee on Publication Ethics (COPE) are working on precise definitions of which institution or scientific committee may impose sanctions when plagiarism is proven, and on familiarizing authors with the types of sanctions.
The practice is to inform the editors about discovered plagiarism; articles are withdrawn from the database, while the authors are put on a so-called black list. So far this is the only way of preventing plagiarism, because there are no other sanctions.

65 citations


Cites methods from "Self-correction in biomedical publi..."

  • ...In accordance with the principles of the GSP and Good Laboratory Practice (GLP), institutions should take responsibility for the integrity of research reporting (23-26)....


Journal ArticleDOI
TL;DR: The paper guides authors aiming to publish in scholarly journals through the methods and means of carrying out surveys for valid outcomes, from the planning, execution, and dissemination of surveys to data analysis and the choice of target journals.
Abstract: The coronavirus disease 2019 (COVID-19) pandemic has led to a massive rise in survey-based research. The paucity of perspicuous guidelines for conducting surveys may pose a challenge to the conduct of ethical, valid, and meticulous research. The aim of this paper is to guide authors who seek to publish in scholarly journals on the methods and means of carrying out surveys for valid outcomes. The paper outlines the various aspects, from the planning, execution, and dissemination of surveys to data analysis and the choice of target journals. While providing a comprehensive understanding of the scenarios most conducive to carrying out a survey and of the role of ethical approval, survey validation, and pilot testing, this brief delves deeper into survey designs, methods of dissemination, ways to secure and maintain data anonymity, analytical approaches, reporting techniques, and the process of choosing the appropriate journal. Further, the authors analyze retracted survey-based studies and the reasons for their retraction. This review article intends to guide authors to improve the quality of survey-based research by describing the essential tools and means to do so, in the hope of improving the utility of such studies.

64 citations

References
Journal ArticleDOI
TL;DR: A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error, compared with 67.4% attributable to misconduct, including fraud or suspected fraud, duplicate publication, and plagiarism.
Abstract: A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%). Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic patterns that may reveal underlying causes.
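As a quick arithmetic check on the figures above, the three misconduct categories (fraud or suspected fraud, duplicate publication, and plagiarism) sum exactly to the overall misconduct share:

    43.4\% + 14.2\% + 9.8\% = 67.4\%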

845 citations

Journal ArticleDOI
13 Sep 1997 - BMJ
TL;DR: Although there had previously been no evidence of the impact of duplicate data on meta-analysis, 17% of systematically searched randomised trials of ondansetron as a postoperative antiemetic were covert duplicates, resulting in 28% of patient data being duplicated.
Abstract: Objective: To quantify the impact of duplicate data on estimates of efficacy. Design: Systematic search for published full reports of randomised controlled trials investigating ondansetron's effect on postoperative emesis. Abstracts were not considered. Data sources: Eighty-four trials (11 980 patients receiving ondansetron) published between 1991 and September 1996. Main outcome measures: Percentage of duplicated trials and patient data. Estimation of antiemetic efficacy (prevention of emesis) of the most duplicated ondansetron regimen. Comparison between the efficacy of non-duplicated and duplicated data. Results: Data from nine trials had been published in 14 further reports, duplicating data from 3335 patients receiving ondansetron; none used a clear cross-reference. Intravenous ondansetron 4 mg versus placebo was investigated in 16 reports not subject to duplicate publication, three reports subject to duplicate publication, and six duplicates of those three reports. The number needed to treat to prevent vomiting within 24 hours was 9.5 (95% confidence interval 6.9 to 15) in the 16 non-duplicated reports and 3.9 (3.3 to 4.8) in the three duplicated reports (P…). Conclusions: By searching systematically we found that 17% of published full reports of randomised trials and 28% of the patient data were duplicated. Trials reporting greater treatment effect were significantly more likely to be duplicated. Inclusion of duplicated data in meta-analysis led to a 23% overestimation of ondansetron's antiemetic efficacy. Key messages:
  • Although publishing the same data more than once is strongly discouraged, there has been no evidence of the impact of duplicate data on meta-analysis.
  • Re-analysing an important trial, with cross-referencing to the original reports (overt duplication), may be necessary and valuable in some circumstances.
  • Covert duplication, masked by a change of authors or of language, or by the addition of extra data, causes problems; one danger is that patient data are analysed more than once in meta-analysis.
  • 17% of systematically searched randomised trials of ondansetron as a postoperative antiemetic were covert duplicates, resulting in 28% of patient data being duplicated; none of these reports cross-referenced the original source. Duplication led to a 23% overestimation of ondansetron's antiemetic efficacy, and trials reporting greater treatment effect were significantly more likely to be duplicated.
  • Covert duplication of data has major implications for the assessment of drug efficacy and safety.
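The two NNT figures can be converted into implied absolute effects with a quick worked calculation (a sketch assuming only the standard relation between the number needed to treat and the absolute risk reduction, which the abstract itself does not spell out):

    \mathrm{NNT} = \frac{1}{\mathrm{ARR}}
    \quad\Longrightarrow\quad
    \mathrm{ARR}_{\text{non-duplicated}} = \frac{1}{9.5} \approx 0.105,
    \qquad
    \mathrm{ARR}_{\text{duplicated}} = \frac{1}{3.9} \approx 0.256

In other words, the duplicated reports imply an absolute antiemetic effect roughly 2.4 times larger than the non-duplicated ones, consistent in direction with the 23% overestimation that results once the duplicated data enter the pooled analysis.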

584 citations

Journal ArticleDOI
08 Jul 2013 - PLOS ONE
TL;DR: Lower barriers to the publication of flawed articles are seen in the increase in the number and proportion of retractions by authors with a single retraction, an increase in retractions for “new” offenses such as plagiarism, and a decrease in the time to retraction of flawed work.
Abstract: Background: The number of retracted scientific publications has risen sharply, but it is unclear whether this reflects an increase in publication of flawed articles or an increase in the rate at which flawed articles are withdrawn.

308 citations

Posted Content
TL;DR: In this article, the authors present the most recent and pertinent data on the consequences of our current scholarly communication system with respect to various measures of scientific quality (such as utility/citations, methodological soundness, expert ratings or retractions).
Abstract: Most researchers acknowledge an intrinsic hierarchy in the scholarly journals ('journal rank') that they submit their work to, and adjust not only their submission but also their reading strategies accordingly. On the other hand, much has been written about the negative effects of institutionalizing journal rank as an impact measure. So far, contributions to the debate concerning the limitations of journal rank as a scientific impact assessment tool have either lacked data, or relied on only a few studies. In this review, we present the most recent and pertinent data on the consequences of our current scholarly communication system with respect to various measures of scientific quality (such as utility/citations, methodological soundness, expert ratings or retractions). These data corroborate previous hypotheses: using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently-favored Impact Factor) would have this negative impact. Therefore, we suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary. This new system will use modern information technology to vastly improve the filter, sort and discovery functions of the current journal system.

237 citations