
Sara Hooker

Researcher at Google

Publications -  32
Citations -  2433

Sara Hooker is an academic researcher from Google. The author has contributed to research in the topics of computer science and interpretability, has an h-index of 12, and has co-authored 20 publications receiving 1,377 citations.

Papers
Posted Content

The State of Sparsity in Deep Neural Networks

TL;DR: It is shown that unstructured sparse architectures learned through pruning cannot be trained from scratch to the same test-set performance as a model trained with joint sparsification and optimization, and the need for large-scale benchmarks in the field of model compression is highlighted.
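A minimal sketch of unstructured magnitude pruning, the technique these sparsity results concern. This is a common simplified formulation (zero out the smallest-magnitude weights below a threshold), not the exact sparsification schedule studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))  # stand-in for a trained weight matrix

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    # Threshold at the k-th smallest absolute value across the flattened matrix.
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

pruned = magnitude_prune(weights, 0.5)
print(np.mean(pruned == 0))  # fraction of zeroed weights: 0.5
```

The resulting sparsity is unstructured: zeros land anywhere in the matrix rather than along whole rows or channels, which is what makes such architectures hard to accelerate and, per the paper, hard to retrain from scratch.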
Book Chapter

The (Un)reliability of saliency methods

TL;DR: This work uses a simple and common pre-processing step (adding a constant shift to the input data) to show that a transformation with no effect on the model can cause numerous methods to attribute incorrectly.
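The failure described above can be reproduced in a few lines: two linear models that differ only in how a constant input shift is absorbed into the bias produce identical outputs, yet a gradient-times-input attribution (one family of saliency methods) disagrees between them. A minimal NumPy sketch with made-up weights, not the paper's experimental setup:

```python
import numpy as np

w = np.array([1.0, -2.0, 3.0])        # shared weights
b = 0.5                               # bias of model 1
shift = np.array([10.0, 10.0, 10.0])  # constant shift added to the inputs
x = np.array([0.2, -0.4, 0.1])

# Model 2 absorbs the input shift into its bias, so on shifted inputs it
# computes exactly the same function as model 1 on unshifted inputs.
b2 = b - w @ shift
out1 = w @ x + b
out2 = w @ (x + shift) + b2
print(np.isclose(out1, out2))   # True: the shift has no effect on the model

# Gradient * input attribution, however, changes under the shift.
attr1 = w * x
attr2 = w * (x + shift)
print(np.allclose(attr1, attr2))  # False: the attributions disagree
```

Since both models implement the same input-output function, any attribution method that yields different explanations for them is reacting to the parameterization, not the model's behavior.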
Proceedings Article

A Benchmark for Interpretability Methods in Deep Neural Networks

TL;DR: In this paper, an empirical measure of the approximate accuracy of feature importance estimates in deep neural networks is proposed, and it is shown that ensemble based approaches outperform a random assignment of importance.
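A toy illustration of the remove-and-measure idea behind such a benchmark: zero out the features an estimator ranks most important and compare the accuracy drop against a random ranking. The data and model here are hypothetical, and the paper's actual protocol also retrains the model after removal, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: only the first 2 of 10 features carry signal.
n, d = 2000, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Assume a fixed "trained" linear model that uses exactly those features.
w = np.zeros(d)
w[0] = w[1] = 1.0

def accuracy(X, y, w):
    return np.mean(((X @ w) > 0).astype(int) == y)

def degradation(importance, k=2):
    """Zero out the k features ranked most important, then re-evaluate."""
    top = np.argsort(importance)[-k:]
    Xm = X.copy()
    Xm[:, top] = 0.0
    return accuracy(X, y, w) - accuracy(Xm, y, w)

informed = np.abs(w)  # an importance estimate aligned with the model
rand_deg = np.mean([degradation(rng.permutation(d).astype(float))
                    for _ in range(50)])  # random-assignment baseline
print(degradation(informed) > rand_deg)  # True
```

A useful importance estimate should degrade accuracy far more than the random baseline when its top-ranked features are removed, which is the comparison the paper's benchmark formalizes.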
Posted Content

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims

TL;DR: This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.