Sara Hooker
Researcher at Google
Publications - 32
Citations - 2433
Sara Hooker is an academic researcher at Google. Her research focuses on computer science and interpretability. She has an h-index of 12 and has co-authored 20 publications receiving 1,377 citations.
Papers
Posted Content
The State of Sparsity in Deep Neural Networks
TL;DR: Shows that unstructured sparse architectures learned through pruning cannot be trained from scratch to the same test-set performance as a model trained with joint sparsification and optimization, and highlights the need for large-scale benchmarks in the field of model compression.
Book Chapter DOI
The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim +7 more
TL;DR: This work uses a simple and common pre-processing step (adding a constant shift to the input data) to show that a transformation with no effect on the model can cause numerous saliency methods to attribute incorrectly.
Proceedings Article
A Benchmark for Interpretability Methods in Deep Neural Networks
TL;DR: In this paper, an empirical measure of the approximate accuracy of feature importance estimates in deep neural networks is proposed, and it is shown that ensemble-based approaches outperform a random assignment of importance.
Posted Content
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian K. Hadfield, Heidy Khlaaf, Jingying Yang, Helen Toner, Ruth Fong, Tegan Maharaj, Pang Wei Koh, Sara Hooker, Jade Leung, Andrew Trask, Emma Bluemke, Jonathan Lebensbold, Cullen O'Keefe, Mark Koren, Théo Ryffel, J. B. Rubinovitz, Tamay Besiroglu, Federica Carugati, Jack Clark, Peter Eckersley, Sarah de Haas, Maritza Johnson, Ben Laurie, Alex Ingerman, Igor Krawczuk, Amanda Askell, Rosario Cammarota, Andrew J. Lohn, David Krueger, Charlotte Stix, Peter Henderson, Logan Graham, Carina E. A. Prunkl, Bianca Martin, Elizabeth Seger, Noa Zilberman, Seán Ó hÉigeartaigh, Frens Kroeger, Girish Sastry, Rebecca Kagan, Adrian Weller, Brian Tse, Elizabeth A. Barnes, Allan Dafoe, Paul Scharre, Ariel Herbert-Voss, Martijn Rasser, Shagun Sodhani, Carrick Flynn, Thomas Krendl Gilbert, Lisa Dyer, Saif Khan, Yoshua Bengio, Markus Anderljung +60 more
TL;DR: This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.
Posted Content
The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim +7 more
TL;DR: In this article, a simple and common pre-processing step, adding a constant shift to the input data, is used to show that a transformation with no effect on the model can cause numerous saliency methods to attribute incorrectly.