
Li Fei-Fei

Researcher at Stanford University

Publications - 515
Citations - 199,224

Li Fei-Fei is an academic researcher at Stanford University who has contributed to research in computer science and medicine. The author has an h-index of 120 and has co-authored 420 publications receiving 145,574 citations. Previous affiliations of Li Fei-Fei include Google and the California Institute of Technology.

Papers
Posted Content

Neural Task Graphs: Generalizing to Unseen Tasks from a Single Video Demonstration

TL;DR: Neural Task Graph Networks are proposed, which use a conjugate task graph as the intermediate representation to modularize both the video demonstration and the derived policy; the networks effectively predict task structure on the JIGSAWS surgical dataset and generalize to unseen tasks.
Journal ArticleDOI

AI will change the world, so it's time to change AI.

Tess Posner, Li Fei-Fei
- 09 Dec 2020
TL;DR: To ensure that AI meets its potential as a transformative tool, it must be developed by a truly representative research community, say Tess Posner and Li Fei-Fei.
Proceedings ArticleDOI

Unsupervised camera localization in crowded spaces

TL;DR: This work estimates the relative location of any pair of cameras using only the noisy trajectories observed from each camera, formulating a nonlinear least-squares optimization problem that leverages a continuous approximation of the matching function.

Epitomic Variational Autoencoders

TL;DR: It is shown that the proposed model largely overcomes model over-pruning, a common problem in variational autoencoders (VAEs), uses its model capacity efficiently, and generalizes better than the VAE.
Proceedings ArticleDOI

A Glimpse Far into the Future: Understanding Long-term Crowd Worker Quality

Abstract: Microtask crowdsourcing is increasingly critical to the creation of extremely large datasets. As a result, crowd workers spend weeks or months repeating the exact same tasks, making it necessary to understand their behavior over these long periods of time. We utilize three large, longitudinal datasets of nine million annotations collected from Amazon Mechanical Turk to examine claims that workers fatigue or satisfice over these long periods, producing lower-quality work. We find that, contrary to these claims, workers are extremely stable in their quality over the entire period. To understand whether workers set their quality based on the task's requirements for acceptance, we then perform an experiment in which we vary the required quality for a large crowdsourcing task. Workers did not adjust their quality based on the acceptance threshold: workers above the threshold continued working at their usual quality level, and workers below the threshold self-selected out of the task. Capitalizing on this consistency, we demonstrate that it is possible to predict workers' long-term quality using just a glimpse of their quality on the first five tasks.