Been Kim

Researcher at Google

Publications - 75
Citations - 13180

Been Kim is an academic researcher at Google. The author has contributed to research on the topics of Interpretability and Computer science, has an h-index of 38, and has co-authored 70 publications receiving 8631 citations. Previous affiliations of Been Kim include the Massachusetts Institute of Technology and the Allen Institute for Artificial Intelligence.

Papers
Posted Content

Do Neural Networks Show Gestalt Phenomena? An Exploration of the Law of Closure.

TL;DR: The findings suggest that NNs trained with natural images do exhibit closure, in contrast to networks with randomized weights or networks that have been trained on visually random data.
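A minimal sketch of how such a closure probe might look, not the paper's protocol: compare a pretrained network's embedding of a complete triangle, an illusory triangle cued only by "pac-man" discs, and a disordered control where no triangle is implied. The stimulus filenames and the choice of ResNet-50 embeddings below are illustrative assumptions.

```python
# A minimal sketch of a closure probe, not the paper's exact protocol.
# Assumes three hypothetical stimulus images on disk: a complete triangle,
# an "illusory" triangle cued only by pac-man discs, and a disordered
# control where the discs are rotated so no triangle is implied.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Identity()  # use penultimate-layer embeddings
model.eval()

def embed(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0)

complete = embed("triangle_complete.png")    # hypothetical filenames
illusory = embed("triangle_illusory.png")
control = embed("triangle_disordered.png")

# Closure effect: the illusory triangle should sit closer to the complete
# one than the disordered control does, in embedding space.
sim_illusory = F.cosine_similarity(complete, illusory, dim=0).item()
sim_control = F.cosine_similarity(complete, control, dim=0).item()
print(f"closure effect: {sim_illusory - sim_control:+.3f}")
```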
Proceedings Article

Inferring robot task plans from human team meetings: a generative modeling approach with logic-based prior

TL;DR: This work presents an algorithm that reduces the burden of programming and deploying autonomous systems to work in concert with people in time-critical domains, such as military field operations and disaster response, by inferring the final plan from a processed form of the human team's planning conversation.
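As a rough illustration of the idea, not the paper's generative model, one can score candidate plans by combining a logic-based prior, which rules out orderings that violate hard precedence constraints, with a noisy-utterance likelihood. The task set, constraints, and noise rate below are hypothetical.

```python
# A minimal, hypothetical sketch of plan inference with a logic-based prior,
# not the paper's model. Candidate plans are orderings of tasks; the prior
# zeroes out orderings that violate hard precedence constraints, and noisy
# "utterances" about pairwise order supply the likelihood.
from itertools import permutations

tasks = ["scout", "clear", "rescue"]          # hypothetical task set
constraints = [("scout", "clear")]            # scout must precede clear
utterances = [("clear", "rescue"), ("scout", "clear"),
              ("rescue", "clear")]            # noisy pairwise mentions
NOISE = 0.2                                   # chance an utterance is wrong

def prior(plan):
    # Logic-based prior: uniform over plans satisfying all constraints.
    return 1.0 if all(plan.index(a) < plan.index(b) for a, b in constraints) else 0.0

def likelihood(plan, utts):
    p = 1.0
    for a, b in utts:
        consistent = plan.index(a) < plan.index(b)
        p *= (1 - NOISE) if consistent else NOISE
    return p

posterior = {p: prior(p) * likelihood(p, utterances) for p in permutations(tasks)}
z = sum(posterior.values())
for plan, score in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(plan, round(score / z, 3))
```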
Posted Content

On Completeness-aware Concept-Based Explanations in Deep Neural Networks.

TL;DR: The authors define a notion of completeness, which quantifies how sufficient a particular set of concepts is for explaining a model's prediction behavior, based on the assumption that complete concept scores are sufficient statistics of the model prediction.
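A toy sketch of the completeness intuition on synthetic data: if concept scores are sufficient statistics of the model's prediction, a small map from scores to the model's own labels should recover the model's accuracy. The concept vectors, data, and chance-level normalization here are illustrative assumptions, not the paper's exact estimator.

```python
# Synthetic illustration of completeness: a small classifier g is trained to
# reproduce the model's predictions from concept scores alone; the closer it
# gets to perfect agreement, the more "complete" the concept set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, m, k = 2000, 64, 5, 3             # samples, activation dim, concepts, classes

acts = rng.normal(size=(n, d))          # stand-in layer activations phi(x)
W = rng.normal(size=(d, k))
y_model = (acts @ W).argmax(axis=1)     # labels the model itself assigns

concepts = rng.normal(size=(d, m))      # hypothetical unit concept vectors
concepts /= np.linalg.norm(concepts, axis=0)
scores = acts @ concepts                # concept scores v_C(x)

g = LogisticRegression(max_iter=1000).fit(scores, y_model)
acc_concepts = g.score(scores, y_model) # how well scores predict the model
acc_random = 1.0 / k                    # chance-level baseline

# Completeness in [0, 1]: 1 means the concept scores fully explain
# the model's prediction behavior on this data.
completeness = (acc_concepts - acc_random) / (1.0 - acc_random)
print(f"completeness ~ {completeness:.2f}")
```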
Posted Content

Automating Interpretability: Discovering and Testing Visual Concepts Learned by Neural Networks.

TL;DR: This work introduces DTCAV (Discovery TCAV), a global concept-based interpretability method that automatically discovers concepts as image segments along with each concept's estimated importance for a deep neural network's predictions, and validates that the discovered concepts are as coherent to humans as hand-labeled concepts.
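The discovery stage of a DTCAV-style pipeline might be sketched as follows, under illustrative assumptions: SLIC superpixels as candidate segments, a pretrained ResNet-50 embedder, and k-means clusters as "concepts". The paper's actual multi-scale segmentation, filtering, and TCAV importance scoring are more involved.

```python
# A rough sketch of concept discovery in a DTCAV-style pipeline, under
# illustrative assumptions; filenames and hyperparameters are hypothetical.
import numpy as np
import torch
from torchvision import models, transforms
from skimage.segmentation import slic
from sklearn.cluster import KMeans
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Identity()          # use penultimate-layer embeddings
model.eval()

prep = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def segment_embeddings(path, n_segments=15):
    """Cut one image into superpixels and embed each masked-out segment."""
    img = np.array(Image.open(path).convert("RGB"))
    labels = slic(img, n_segments=n_segments, compactness=10)
    embs = []
    for seg_id in np.unique(labels):
        patch = img.copy()
        patch[labels != seg_id] = 117   # gray out everything but the segment
        with torch.no_grad():
            embs.append(model(prep(patch).unsqueeze(0)).squeeze(0).numpy())
    return embs

paths = ["img_001.jpg", "img_002.jpg"]  # hypothetical images of one class
all_embs = np.array([e for p in paths for e in segment_embeddings(p)])

# Segments that cluster together across many images act as one visual
# concept; each cluster can then seed a CAV for TCAV-style scoring.
kmeans = KMeans(n_clusters=4, n_init=10).fit(all_embs)
print("segments per discovered concept:", np.bincount(kmeans.labels_))
```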