
Li Fei-Fei

Researcher at Stanford University

Publications -  515
Citations -  199224

Li Fei-Fei is an academic researcher at Stanford University who has contributed to research in the topics of Computer science and Medicine. The author has an h-index of 120 and has co-authored 420 publications receiving 145,574 citations. Previous affiliations of Li Fei-Fei include Google and the California Institute of Technology.

Papers
Posted Content

Characterizing and Improving Stability in Neural Style Transfer

TL;DR: In this article, a recurrent convolutional network is proposed for real-time video style transfer; it incorporates a temporal consistency loss that overcomes the instability of prior methods. The resulting networks can be applied at any resolution, do not require optical flow at test time, and produce high-quality, temporally consistent stylized videos in real time.
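The core idea of a temporal consistency loss is to penalize differences between the current stylized frame and the previous stylized frame warped into the current frame. A minimal sketch of that computation is below; the function name, the occlusion-mask convention, and the use of a plain mean squared error are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def temporal_consistency_loss(stylized_t, warped_stylized_prev, occlusion_mask):
    """Masked mean squared difference between the current stylized frame
    and the previous stylized frame warped into the current frame.
    The occlusion mask zeroes out pixels where the warp is invalid,
    so only reliably corresponding pixels are penalized for flicker."""
    diff = (stylized_t - warped_stylized_prev) ** 2
    masked = occlusion_mask * diff
    return float(np.sum(masked) / max(np.sum(occlusion_mask), 1.0))
```

When consecutive stylized frames agree wherever the mask is valid, the loss is zero; any flicker in unoccluded regions raises it.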
Proceedings Article

Learning to Play With Intrinsically-Motivated, Self-Aware Agents

TL;DR: This work proposes a "world-model" network that learns to predict the dynamic consequences of the agent's actions, and demonstrates that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors.
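One common way to turn a world model into intrinsic motivation is to reward the agent in proportion to the model's prediction error, so that it seeks out interactions it cannot yet predict. The sketch below illustrates that general pattern under stated assumptions; the function signature and the mean-squared-error form are hypothetical, not taken from the paper:

```python
import numpy as np

def prediction_error_reward(world_model, state, action, next_state):
    """Intrinsic reward = error of the world model's prediction of the
    dynamic consequence of the agent's action. High error marks a novel,
    informative interaction that a curiosity-driven policy would pursue."""
    predicted_next = world_model(state, action)
    return float(np.mean((predicted_next - next_state) ** 2))
```

As the world model improves on familiar transitions, their reward shrinks, pushing the policy toward progressively more complex behaviors.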
Proceedings ArticleDOI

Learning Temporal Embeddings for Complex Video Analysis

TL;DR: This paper proposes a scheme for incorporating temporal context based on past and future frames in videos, and compares this to other contextual representations, and shows how data augmentation using multi-resolution samples and hard negatives helps to significantly improve the quality of the learned embeddings.
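Embeddings of this kind are typically trained with a ranking objective: a frame should lie closer to its temporal context (e.g. an aggregate of past and future frame embeddings) than to a hard negative. A minimal sketch of such a hinge loss follows; the name, the squared-distance metric, and the margin value are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def temporal_ranking_loss(anchor, context, negative, margin=1.0):
    """Hinge loss pushing a frame embedding (anchor) closer to its
    temporal context embedding than to a hard negative embedding,
    by at least the given margin in squared Euclidean distance."""
    d_pos = np.sum((anchor - context) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return float(max(0.0, margin + d_pos - d_neg))
```

Hard negatives (frames that are visually similar but temporally unrelated) make this loss informative, which is consistent with the paper's finding that hard-negative mining improves embedding quality.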
Journal ArticleDOI

Visual categorization is automatic and obligatory: evidence from Stroop-like paradigm.

TL;DR: It is demonstrated that entry-level visual categorization is an automatic and obligatory process.
Proceedings ArticleDOI

KETO: Learning Keypoint Representations for Tool Manipulation

TL;DR: In this paper, a set of task-specific keypoints is jointly predicted from 3D point clouds of the tool object by a deep neural network; these keypoints offer a concise and informative description of the object for determining grasps and subsequent manipulation actions.