Steven Basart

Researcher at University of Chicago

Publications -  18
Citations -  2229

Steven Basart is an academic researcher from the University of Chicago. The author has contributed to research in topics: Language model & Benchmark (computing). The author has an h-index of 10 and has co-authored 15 publications receiving 616 citations. Previous affiliations of Steven Basart include the Toyota Technological Institute at Chicago.

Papers
Posted Content

The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization

TL;DR: It is found that using larger models and artificial data augmentations can improve robustness on real-world distribution shifts, contrary to claims in prior work.
Posted Content

Natural Adversarial Examples

TL;DR: This work introduces two challenging datasets that reliably cause machine learning model performance to substantially degrade and curates an adversarial out-of-distribution detection dataset called IMAGENET-O, which is the first out-of-distribution detection dataset created for ImageNet models.
Posted Content

Measuring Massive Multitask Language Understanding

TL;DR: While most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy.
Posted Content

Scaling Out-of-Distribution Detection for Real-World Settings

TL;DR: This work departs from small-scale settings and explores large-scale multi-class and multi-label settings with high-resolution images and hundreds of classes for out-of-distribution detection, finding that a surprisingly simple detector based on the maximum logit outperforms prior methods in all the large-scale multi-class, multi-label, and segmentation tasks.
Proceedings ArticleDOI

Natural Adversarial Examples

TL;DR: In this article, the authors introduce two challenging datasets that reliably cause machine learning model performance to substantially degrade, and they also curate an adversarial out-of-distribution detection dataset called IMAGENET-O. The datasets are collected with a simple adversarial filtration technique to create datasets with limited spurious cues.