Li Fei-Fei

Researcher at Stanford University

Publications: 515
Citations: 199,224

Li Fei-Fei is an academic researcher at Stanford University whose work spans computer science and medicine. She has an h-index of 120 and has co-authored 420 publications receiving 145,574 citations. Her previous affiliations include Google and the California Institute of Technology.

Papers
Posted Content

BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments

TL;DR: BEHAVIOR is a benchmark for embodied AI comprising 100 activities in simulation, spanning everyday household chores such as cleaning, maintenance, and food preparation, and aiming to reproduce the challenges that agents must face in the real world.
Posted Content · DOI

Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior

TL;DR: This paper assesses the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements among three feature models whose inter-correlations were minimized a priori through stimulus preselection.
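
For illustration, a minimal sketch of the variance-partitioning idea: fit regressions on every non-empty subset of predictor models and derive unique contributions from the R² values. The function names are illustrative, and the plain OLS-based R² is an assumption, not necessarily the paper's exact procedure; the feature models and target are assumed to be vectorized representational (dis)similarity measures.

```python
# Variance partitioning sketch: R^2 for all subsets of predictor models.
from itertools import combinations
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on the columns of X."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def partition_variance(models, y):
    """R^2 for every non-empty subset of the predictor models."""
    r2 = {}
    for k in range(1, len(models) + 1):
        for idx in combinations(range(len(models)), k):
            X = np.column_stack([models[i] for i in idx])
            r2[idx] = r_squared(X, y)
    return r2

# The unique variance of model 0 is the full R^2 minus the R^2 of the
# other two models: unique_0 = r2[(0, 1, 2)] - r2[(1, 2)], and
# analogously for models 1 and 2.
```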
Proceedings Article

OPTIMOL: a framework for online picture collection via incremental model learning

TL;DR: OPTIMOL (a framework for Online Picture collection via Incremental MOdel Learning) is a novel, automatic dataset-collection and model-learning system for object categorization. It mimics the human learning process: the more confident data are incorporated into the training set, the more reliable the learned models become.
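
A minimal self-training sketch in the spirit of this incremental loop: a classifier is retrained as its most confident predictions on unlabeled images are folded into the training set. The classifier, threshold, and helper names here are illustrative assumptions, not OPTIMOL's actual model.

```python
# Incremental self-training sketch: confident predictions become labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def incremental_learning(X_seed, y_seed, X_unlabeled,
                         threshold=0.95, rounds=5):
    X_train, y_train = X_seed.copy(), y_seed.copy()
    pool = X_unlabeled.copy()
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        conf = proba.max(axis=1)
        accept = conf >= threshold            # keep only confident predictions
        if not accept.any():
            break
        pseudo = model.classes_[proba[accept].argmax(axis=1)]
        X_train = np.vstack([X_train, pool[accept]])
        y_train = np.concatenate([y_train, pseudo])
        pool = pool[~accept]                  # remove accepted images from pool
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model
```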
Journal Article · DOI

PTPRO suppresses lymph node metastasis of esophageal carcinoma by dephosphorylating MET.

TL;DR: In this article, protein tyrosine phosphatase receptor-type O (PTPRO) is shown to suppress lymph node metastasis of esophageal squamous cell carcinoma (ESCC) by dephosphorylating MET.
Posted Content

Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning.

TL;DR: In this article, the authors demonstrate that Transformers are fairly robust to distribution shifts and hence improve federated learning over heterogeneous data, and they conduct the first rigorous empirical investigation of different neural architectures across a range of federated algorithms, real-world benchmarks, and heterogeneous data splits.
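
To make the federated setting concrete, a minimal FedAvg-style aggregation sketch, assuming each client returns its locally trained parameter arrays; the paper's contribution is comparing architectures (e.g., Transformers vs. CNNs) plugged into such algorithms, and the names here are illustrative.

```python
# FedAvg-style aggregation: average client parameters weighted by
# local dataset size.
import numpy as np

def fedavg(client_weights, client_sizes):
    """client_weights: list of per-client lists of parameter arrays."""
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w        # size-weighted average
    return avg
```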