
Marco Tulio Ribeiro

Researcher at Microsoft

Publications -  43
Citations -  16859

Marco Tulio Ribeiro is an academic researcher from Microsoft. The author has contributed to research in topics: Computer science & Interpretability. The author has an h-index of 17 and has co-authored 28 publications receiving 9,541 citations. Previous affiliations of Marco Tulio Ribeiro include University of Washington & Universidade Federal de Minas Gerais.

Papers
Proceedings ArticleDOI

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TL;DR: The authors propose LIME, a technique that explains the prediction of any classifier by learning an interpretable model locally around that prediction, and SP-LIME, which selects representative individual predictions and their explanations in a non-redundant way by framing the selection as a submodular optimization problem.
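The core of LIME can be illustrated with a minimal NumPy sketch: perturb the instance, weight perturbations by proximity, and fit a weighted linear surrogate to the black box's outputs. The `black_box` function below is a hypothetical stand-in for any classifier, and the noise scale and kernel width are illustrative choices, not the paper's defaults.

```python
import numpy as np

def black_box(x):
    # Hypothetical black-box classifier: probability from a nonlinear rule.
    return 1 / (1 + np.exp(-(3 * x[..., 0] - 2 * x[..., 1] ** 2)))

def lime_sketch(instance, predict, n_samples=5000, kernel_width=0.75, seed=0):
    """Fit a proximity-weighted linear surrogate around `instance`."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise.
    samples = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    # 2. Weight each sample by its proximity to the instance.
    dists = np.linalg.norm(samples - instance, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 3. Query the black box and solve weighted least squares.
    y = predict(samples)
    X = np.hstack([np.ones((n_samples, 1)), samples])
    w = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)
    return coef[1:]  # per-feature local importance (intercept dropped)

instance = np.array([0.5, 0.1])
importances = lime_sketch(instance, black_box)
```

Here the surrogate's coefficients recover the local behavior of the black box: a positive weight on the first feature (which raises the score) and a negative weight on the second (whose squared term lowers it near this instance).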
Proceedings Article

Anchors: High-Precision Model-Agnostic Explanations

TL;DR: This work introduces a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, “sufficient” conditions for predictions, and proposes an algorithm to efficiently compute these explanations for any black-box model with high probability guarantees.
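The idea behind an anchor — a rule that is locally "sufficient" for a prediction — can be sketched by estimating a candidate rule's precision under perturbation. Everything below is illustrative: `model` is a toy sentiment classifier, the vocabulary is made up, and the search is a greedy pick over single-token anchors rather than the paper's beam search with statistical guarantees.

```python
import random

def model(text):
    # Hypothetical sentiment model: positive iff "good" appears.
    return "positive" if "good" in text.split() else "negative"

def anchor_precision(anchor, instance_tokens, predict, target, n=1000, seed=0):
    """Estimate precision of an anchor: P(predict(z) == target) over
    perturbations z that keep anchor tokens fixed and resample the rest."""
    rng = random.Random(seed)
    vocab = ["movie", "plot", "good", "bad", "fine", "dull"]
    hits = 0
    for _ in range(n):
        z = [t if t in anchor else rng.choice(vocab) for t in instance_tokens]
        hits += predict(" ".join(z)) == target
    return hits / n

tokens = "a good movie".split()
target = model(" ".join(tokens))
# Greedy: pick the single-token anchor with the highest estimated precision.
best = max(tokens, key=lambda t: anchor_precision({t}, tokens, model, target))
```

For this toy model the anchor {"good"} has precision 1.0 — fixing that one token guarantees the prediction regardless of how the rest of the sentence is perturbed — which is exactly the sufficiency property the paper's anchors formalize.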
Proceedings ArticleDOI

Beyond Accuracy: Behavioral Testing of NLP Models with CheckList

TL;DR: CheckList is a task-agnostic methodology for testing NLP models. It includes a matrix of general linguistic capabilities and test types that facilitates comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly.
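One of CheckList's test types, the invariance test (INV), can be sketched in a few lines: apply a label-preserving perturbation and flag any input whose prediction changes. The `model` below is a hypothetical classifier with a deliberately planted bug, and the punctuation perturbation is just one example of the perturbations the methodology covers.

```python
def model(text):
    # Hypothetical sentiment model with a planted bug:
    # it flips to negative when the input ends with "!!".
    base = "positive" if "great" in text else "negative"
    return "negative" if text.endswith("!!") else base

def invariance_test(predict, cases, perturb):
    """CheckList-style INV test: a label-preserving perturbation must not
    change the prediction. Returns the list of failing cases."""
    failures = []
    for text in cases:
        if predict(text) != predict(perturb(text)):
            failures.append(text)
    return failures

cases = ["great film", "boring film"]
# Adding trailing punctuation should not change sentiment.
failures = invariance_test(model, cases, lambda t: t + "!!")
```

Running the test surfaces exactly the buggy behavior: "great film" fails because the perturbed version flips to negative, while "boring film" passes.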
Posted Content

Model-Agnostic Interpretability of Machine Learning.

TL;DR: This paper argues for explaining machine learning predictions with model-agnostic approaches that treat the model as a black-box function. This provides crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models.