
Li Fei-Fei

Researcher at Stanford University

Publications: 515
Citations: 199,224

Li Fei-Fei is an academic researcher at Stanford University. The author has contributed to research in the topics of Computer science & Medicine, has an h-index of 120, and has co-authored 420 publications receiving 145,574 citations. Previous affiliations of Li Fei-Fei include Google & the California Institute of Technology.

Papers
Book Chapter

Tracking Millions of Humans in Crowded Spaces

TL;DR: This chapter presents the technical details behind understanding the mobility of more than 100 million individuals in crowded train terminals over the course of two years, and shares detailed insights on addressing the occlusion problem with sparsity-promoting priors and discrete combinatorial optimization that models social interactions.
Posted Content

Scaling Robot Supervision to Hundreds of Hours with RoboTurk: Robotic Manipulation Dataset through Human Reasoning and Dexterity

TL;DR: In this paper, the authors leverage and extend the RoboTurk platform to scale up data collection for robotic manipulation using remote teleoperation, resulting in the largest robot dataset collected via teleoperation to date.
Proceedings Article

AI-Based Request Augmentation to Increase Crowdsourcing Participation

TL;DR: A new technique is introduced that augments requests with ML-based strategies drawn from social psychology, using a contextual bandit algorithm to select which strategy to apply for a given task and contributor.
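The TL;DR above mentions a contextual bandit that picks a request strategy per task and contributor. The paper's actual algorithm is not given here, so the following is only an illustrative epsilon-greedy contextual bandit over discrete contexts; the strategy names, context label, and reward simulation are all hypothetical.

```python
import random
from collections import defaultdict

class EpsilonGreedyContextualBandit:
    """Illustrative bandit: keeps a per-(context, strategy) running mean
    of rewards and explores uniformly with probability epsilon."""

    def __init__(self, strategies, epsilon=0.1, seed=0):
        self.strategies = list(strategies)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = defaultdict(int)    # (context, strategy) -> pulls
        self.values = defaultdict(float)  # (context, strategy) -> mean reward

    def select(self, context):
        # Explore with probability epsilon ...
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.strategies)
        # ... otherwise exploit the best-known strategy for this context.
        return max(self.strategies, key=lambda s: self.values[(context, s)])

    def update(self, context, strategy, reward):
        # Incremental running-mean update for the chosen arm.
        key = (context, strategy)
        self.counts[key] += 1
        self.values[key] += (reward - self.values[key]) / self.counts[key]

# Hypothetical simulation: suppose the "curiosity" framing always succeeds
# for new contributors, and the others never do.
bandit = EpsilonGreedyContextualBandit(["reciprocity", "curiosity", "plain"])
for _ in range(500):
    ctx = "new_contributor"
    arm = bandit.select(ctx)
    bandit.update(ctx, arm, 1.0 if arm == "curiosity" else 0.0)

best = max(bandit.strategies, key=lambda s: bandit.values[("new_contributor", s)])
```

After enough rounds the exploit branch settles on the strategy with the highest observed mean reward for that context; a richer implementation would replace the discrete context key with a feature vector (e.g. LinUCB-style linear payoffs).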
Proceedings Article

Scalable Annotation of Fine-Grained Categories Without Experts

TL;DR: This work introduces a graph-based crowdsourcing algorithm to automatically group visually indistinguishable objects together, and presents the largest fine-grained visual dataset reported to date, with 2,657 categories of cars annotated at 1/20th the cost of hiring experts.
Posted Content

HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models

TL;DR: The Human eYe Perceptual Evaluation (HYPE) is a human benchmark for generative realism that is grounded in psychophysics research on perception, reliable across different sets of randomly sampled outputs from a model, and efficient in cost and time.