scispace - formally typeset
Institution

University of Amsterdam

EducationAmsterdam, Noord-Holland, Netherlands
About: University of Amsterdam is an education organization based in Amsterdam, Noord-Holland, Netherlands. It is known for its research contributions in the topics: Population & Context (language use). The organization has 59309 authors who have published 140894 publications receiving 5984137 citations. The organization is also known as: UvA & Universiteit van Amsterdam.


Papers
Journal ArticleDOI
Lorenzo Galluzzi1, Lorenzo Galluzzi2, Lorenzo Galluzzi3, Stuart A. Aaronson4, John M. Abrams5, Emad S. Alnemri6, David W. Andrews7, Eric H. Baehrecke8, Nicolas G. Bazan9, Mikhail V. Blagosklonny10, Klas Blomgren11, Klas Blomgren12, Christoph Borner13, Dale E. Bredesen14, Dale E. Bredesen15, Catherine Brenner16, Maria Castedo2, Maria Castedo1, Maria Castedo3, John A. Cidlowski17, Aaron Ciechanover18, Gerald M. Cohen19, V De Laurenzi20, R De Maria21, Mohanish Deshmukh22, Brian David Dynlacht23, Wafik S. El-Deiry24, Richard A. Flavell25, Richard A. Flavell26, Simone Fulda27, Carmen Garrido28, Carmen Garrido2, Pierre Golstein2, Pierre Golstein16, Pierre Golstein29, Marie-Lise Gougeon30, Douglas R. Green, Hinrich Gronemeyer2, Hinrich Gronemeyer31, Hinrich Gronemeyer16, György Hajnóczky6, J. M. Hardwick32, Michael O. Hengartner33, Hidenori Ichijo34, Marja Jäättelä, Oliver Kepp1, Oliver Kepp3, Oliver Kepp2, Adi Kimchi35, Daniel J. Klionsky36, Richard A. Knight37, Sally Kornbluth38, Sharad Kumar, Beth Levine26, Beth Levine5, Stuart A. Lipton, Enrico Lugli17, Frank Madeo39, Walter Malorni21, Jean-Christophe Marine40, Seamus J. Martin41, Jan Paul Medema42, Patrick Mehlen16, Patrick Mehlen43, Gerry Melino19, Gerry Melino44, Ute M. Moll45, Ute M. Moll46, Eugenia Morselli3, Eugenia Morselli1, Eugenia Morselli2, Shigekazu Nagata47, Donald W. Nicholson48, Pierluigi Nicotera19, Gabriel Núñez36, Moshe Oren35, Josef M. Penninger49, Shazib Pervaiz50, Marcus E. Peter51, Mauro Piacentini44, Jochen H. M. Prehn52, Hamsa Puthalakath53, Gabriel A. Rabinovich54, Rosario Rizzuto55, Cecília M. P. Rodrigues56, David C. Rubinsztein57, Thomas Rudel58, Luca Scorrano59, Hans-Uwe Simon60, Hermann Steller61, Hermann Steller26, J. Tschopp62, Yoshihide Tsujimoto63, Peter Vandenabeele64, Ilio Vitale3, Ilio Vitale1, Ilio Vitale2, Karen H. Vousden65, Richard J. Youle17, Junying Yuan66, Boris Zhivotovsky67, Guido Kroemer3, Guido Kroemer2, Guido Kroemer1 
University of Paris-Sud1, French Institute of Health and Medical Research2, Institut Gustave Roussy3, Icahn School of Medicine at Mount Sinai4, University of Texas Southwestern Medical Center5, Thomas Jefferson University6, McMaster University7, University of Massachusetts Medical School8, LSU Health Sciences Center New Orleans9, Roswell Park Cancer Institute10, University of Gothenburg11, Boston Children's Hospital12, University of Freiburg13, University of California, San Francisco14, Buck Institute for Research on Aging15, Centre national de la recherche scientifique16, National Institutes of Health17, Technion – Israel Institute of Technology18, University of Leicester19, University of Chieti-Pescara20, Istituto Superiore di Sanità21, University of North Carolina at Chapel Hill22, New York University23, University of Pennsylvania24, Yale University25, Howard Hughes Medical Institute26, University of Ulm27, University of Burgundy28, Aix-Marseille University29, Pasteur Institute30, University of Strasbourg31, Johns Hopkins University32, University of Zurich33, University of Tokyo34, Weizmann Institute of Science35, University of Michigan36, University College London37, Duke University38, University of Graz39, Ghent University40, Trinity College, Dublin41, University of Amsterdam42, University of Lyon43, University of Rome Tor Vergata44, Stony Brook University45, University of Göttingen46, Kyoto University47, Merck & Co.48, Austrian Academy of Sciences49, National University of Singapore50, University of Chicago51, Royal College of Surgeons in Ireland52, La Trobe University53, University of Buenos Aires54, University of Padua55, University of Lisbon56, University of Cambridge57, University of Würzburg58, University of Geneva59, University of Bern60, Rockefeller University61, University of Lausanne62, Osaka University63, University of California, San Diego64, University of Glasgow65, Harvard University66, Karolinska Institutet67
TL;DR: A nonexhaustive comparison of methods to detect cell death with apoptotic or nonapoptotic morphologies, their advantages and pitfalls is provided and the importance of performing multiple, methodologically unrelated assays to quantify dying and dead cells is emphasized.
Abstract: Cell death is essential for a plethora of physiological processes, and its deregulation characterizes numerous human diseases. Thus, the in-depth investigation of cell death and its mechanisms constitutes a formidable challenge for fundamental and applied biomedical research, and has tremendous implications for the development of novel therapeutic strategies. It is, therefore, of utmost importance to standardize the experimental procedures that identify dying and dead cells in cell cultures and/or in tissues, from model organisms and/or humans, in healthy and/or pathological scenarios. Thus far, dozens of methods have been proposed to quantify cell death-related parameters. However, no guidelines exist regarding their use and interpretation, and nobody has thoroughly annotated the experimental settings for which each of these techniques is most appropriate. Here, we provide a nonexhaustive comparison of methods to detect cell death with apoptotic or nonapoptotic morphologies, their advantages and pitfalls. These guidelines are intended for investigators who study cell death, as well as for reviewers who need to constructively critique scientific reports that deal with cellular demise. Given the difficulties in determining the exact number of cells that have passed the point-of-no-return of the signaling cascades leading to cell death, we emphasize the importance of performing multiple, methodologically unrelated assays to quantify dying and dead cells.

2,218 citations

Journal ArticleDOI
29 Mar 2021-BMJ
TL;DR: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) statement was developed to facilitate transparent and complete reporting of systematic reviews, and has been updated to reflect recent advances in systematic review methodology and terminology.
Abstract: The methods and results of systematic reviews should be reported in sufficient detail to allow users to assess the trustworthiness and applicability of the review findings. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was developed to facilitate transparent and complete reporting of systematic reviews and has been updated (to PRISMA 2020) to reflect recent advances in systematic review methodology and terminology. Here, we present the explanation and elaboration paper for PRISMA 2020, where we explain why reporting of each item is recommended, present bullet points that detail the reporting recommendations, and present examples from published reviews. We hope that changes to the content and structure of PRISMA 2020 will facilitate uptake of the guideline and lead to more transparent, complete, and accurate reporting of systematic reviews.

2,217 citations

Journal ArticleDOI
TL;DR: The special issue on Social Acceptance of Renewable Energy Innovation is a collection of the best papers presented at an international research conference held in Tramelan (Switzerland) in February 2006.

2,195 citations

Journal ArticleDOI
TL;DR: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found.
Abstract: The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

2,192 citations

Journal ArticleDOI
07 Apr 2020-BMJ
TL;DR: Proposed models for covid-19 are poorly reported, at high risk of bias, and their reported performance is probably optimistic, according to a review of published and preprint reports.
Abstract: Objective To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. Design Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. Data sources PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. Study selection Studies that developed or validated a multivariable covid-19 related prediction model. Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). Results 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. 
Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. Conclusion Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported, and at high risk of bias such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/. Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. Systematic review registration Protocol https://osf.io/ehc47/, registration https://osf.io/wy245.
Readers’ note This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
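The C index reported in this review is a concordance measure: the fraction of patient pairs with different outcomes that the model ranks correctly. For a binary diagnostic outcome it coincides with the area under the ROC curve. A minimal sketch of that calculation, using hypothetical scores and labels (illustrative only, not data from the review):

```python
# Concordance (C) index for a binary outcome, where it equals the
# area under the ROC curve. Hypothetical data; illustrative only.

def c_index(scores, labels):
    """Fraction of (event, non-event) pairs the scores rank
    concordantly; tied scores count as half-concordant."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = len(pos) * len(neg)
    if pairs == 0:
        raise ValueError("need at least one event and one non-event")
    concordant = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return concordant / pairs

# Example: model risk scores for six patients, 1 = disease confirmed.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(c_index(scores, labels))  # 8 of 9 pairs concordant ≈ 0.889
```

A C index of 0.5 corresponds to random ranking and 1.0 to perfect discrimination, which is why the 0.54 lower bound among prognostic models in the review is barely better than chance.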

2,183 citations


Authors

Showing all 59759 results

Name                     H-index  Papers  Citations
Richard A. Flavell       231      1328    205119
Scott M. Grundy          187      841     231821
Stuart H. Orkin          186      715     112182
Kenneth C. Anderson      178      1138    126072
David A. Weitz           178      1038    114182
Dorret I. Boomsma        176      1507    136353
Brenda W.J.H. Penninx    170      1139    119082
Michael Kramer           167      1713    127224
Nicholas J. White        161      1352    104539
Lex M. Bouter            158      767     103034
Wolfgang Wagner          156      2342    123391
Jerome I. Rotter         156      1071    116296
David Cella              156      1258    106402
David Eisenberg          156      697     112460
Naveed Sattar            155      1326    116368
Network Information
Related Institutions (5)
University College London
210.6K papers, 9.8M citations

94% related

University of Edinburgh
151.6K papers, 6.6M citations

94% related

University of Pennsylvania
257.6K papers, 14.1M citations

94% related

Columbia University
224K papers, 12.8M citations

94% related

University of Pittsburgh
201K papers, 9.6M citations

94% related

Performance
Metrics
No. of papers from the Institution in previous years
Year  Papers
2023  198
2022  699
2021  9,646
2020  8,532
2019  7,821
2018  6,407