Thomas Krendl Gilbert
Researcher at University of California, Berkeley
Publications - 27
Citations - 374
Thomas Krendl Gilbert is an academic researcher from the University of California, Berkeley. The author has contributed to research on topics including Computer science & Sociotechnical systems. The author has an h-index of 5 and has co-authored 17 publications receiving 152 citations. Previous affiliations of Thomas Krendl Gilbert include the University of California.
Papers
Posted Content
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian K. Hadfield, Heidy Khlaaf, Jingying Yang, Helen Toner, Ruth Fong, Tegan Maharaj, Pang Wei Koh, Sara Hooker, Jade Leung, Andrew Trask, Emma Bluemke, Jonathan Lebensbold, Cullen O'Keefe, Mark Koren, Théo Ryffel, J. B. Rubinovitz, Tamay Besiroglu, Federica Carugati, Jack Clark, Peter Eckersley, Sarah de Haas, Maritza Johnson, Ben Laurie, Alex Ingerman, Igor Krawczuk, Amanda Askell, Rosario Cammarota, Andrew J. Lohn, David Krueger, Charlotte Stix, Peter Henderson, Logan Graham, Carina E. A. Prunkl, Bianca Martin, Elizabeth Seger, Noa Zilberman, Seán Ó hÉigeartaigh, Frens Kroeger, Girish Sastry, Rebecca Kagan, Adrian Weller, Brian Tse, Elizabeth A. Barnes, Allan Dafoe, Paul Scharre, Ariel Herbert-Voss, Martijn Rasser, Shagun Sodhani, Carrick Flynn, Thomas Krendl Gilbert, Lisa Dyer, Saif Khan, Yoshua Bengio, Markus Anderljung +60 more
TL;DR: This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.
Journal ArticleDOI
To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems
Julia Amann, Dennis Vetter, Stig Nikolaj Blomberg, Helle Collatz Christensen, Megan Coffee, Sara Gerke, Thomas Krendl Gilbert, Thilo Hagendorff, Sune Holm, Michelle Livne, Andy Spezzatti, Inga Strümke, Roberto V. Zicari, Vince I. Madai +13 more
TL;DR: Whether explainability can add value to CDSS depends on several key questions: technical feasibility, the level of validation in the case of explainable algorithms, the characteristics of the context in which the system is implemented, the designated role in the decision-making process, and the key user group(s).
Journal ArticleDOI
Hard choices in artificial intelligence
TL;DR: The vagueness in debates about the safety and ethical behavior of AI systems is examined, showing that it cannot be resolved through mathematical formalism alone but instead requires deliberation about the politics of development as well as the context of deployment.
Posted Content
A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics.
TL;DR: This position paper interprets technical bias as an epistemological problem and emergent bias as a dynamical feedback phenomenon, and points to value-sensitive design methodologies to revisit the design and implementation process of automated decision-making systems.
Journal ArticleDOI
Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier
Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund, Renee Wurth +34 more
TL;DR: In this article, the authors use an ethical co-design methodology to ensure trustworthiness early in the design of an artificial intelligence (AI) system component for healthcare, which aims to explain the decisions made by deep learning networks when used to analyze images of skin lesions.