
Li Fei-Fei

Researcher at Stanford University

Publications: 515
Citations: 199,224

Li Fei-Fei is an academic researcher at Stanford University whose work spans the topics of computer science and medicine. She has an h-index of 120 and has co-authored 420 publications receiving 145,574 citations. Her previous affiliations include Google and the California Institute of Technology.

Papers
Proceedings ArticleDOI

Combining the Right Features for Complex Event Recognition

TL;DR: This paper introduces a hierarchical method for combining features based on an AND/OR graph structure, in which nodes represent combinations of different feature sets, together with an inference procedure that efficiently computes structure scores.
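To make the AND/OR structure concrete, here is a minimal illustrative sketch (not the paper's code; the node encoding and scores are our assumptions): leaves hold per-feature scores, AND nodes require all children and sum their scores, and OR nodes pick the best-scoring alternative.

```python
# Illustrative sketch of scoring an AND/OR graph over feature combinations.
# Node encoding (hypothetical):
#   ("leaf", value)      - a single feature's score
#   ("and", [children])  - all children contribute: sum of child scores
#   ("or",  [children])  - choose the best alternative: max of child scores

def score(node):
    """Recursively compute the score of an AND/OR graph node."""
    kind = node[0]
    if kind == "leaf":
        return node[1]
    children = [score(c) for c in node[1]]
    return sum(children) if kind == "and" else max(children)

# Example: combine (motion AND audio) OR (appearance alone).
graph = ("or", [
    ("and", [("leaf", 0.5), ("leaf", 0.25)]),  # motion + audio = 0.75
    ("leaf", 0.6),                             # appearance     = 0.60
])
best = score(graph)  # 0.75
```

The recursion mirrors how inference over such a graph evaluates every combination once, bottom-up.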
Proceedings ArticleDOI

Improving Image Classification with Location Context

TL;DR: This work tackles the problem of image classification with location context: it explores different ways of encoding and extracting features from GPS coordinates and shows how to incorporate these features naturally into a convolutional neural network, the current state of the art for most image classification and recognition problems.
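One simple encoding in the spirit of this idea can be sketched as follows (a hypothetical example, not the paper's exact pipeline; the grid size and feature dimensions are our assumptions): discretize the GPS coordinates into a coarse grid cell, one-hot encode the cell, and concatenate it with the CNN's image feature vector before the final classifier.

```python
import numpy as np

def gps_one_hot(lat, lon, n_bins=10):
    """One-hot encode a (lat, lon) pair over an n_bins x n_bins world grid."""
    row = min(int((lat + 90.0) / 180.0 * n_bins), n_bins - 1)
    col = min(int((lon + 180.0) / 360.0 * n_bins), n_bins - 1)
    vec = np.zeros(n_bins * n_bins)
    vec[row * n_bins + col] = 1.0
    return vec

image_features = np.random.rand(4096)            # stand-in for CNN features
location_features = gps_one_hot(37.43, -122.17)  # roughly Stanford's campus
combined = np.concatenate([image_features, location_features])
# `combined` would feed the final classification layer.
```

Richer encodings (e.g. learned embeddings of the coordinates) slot into the same concatenation point.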
Proceedings ArticleDOI

Characterizing and Improving Stability in Neural Style Transfer

TL;DR: It is shown that the trace of the Gram matrix representing style is inversely related to the stability of the method, and a recurrent convolutional network incorporating a temporal consistency loss is presented, which overcomes the instability of prior methods.
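The style representation referenced here is the Gram matrix of a convolutional feature map; a minimal sketch of computing it and its trace (the quantity the summary ties to stability) is below. The shapes and normalization are illustrative assumptions.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)  # (c, c) matrix of channel correlations

feat = np.random.rand(64, 32, 32)  # stand-in for one conv layer's output
g = gram_matrix(feat)
style_energy = np.trace(g)         # the trace discussed in the summary
```

The Gram matrix discards spatial layout and keeps only which channels co-activate, which is why it serves as a style descriptor.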
Posted Content

Visual Semantic Planning using Deep Successor Representations

TL;DR: This work addresses the problem of visual semantic planning: predicting a sequence of actions from visual observations that transforms a dynamic environment from an initial state to a goal state. It develops a deep predictive model based on successor representations.
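The tabular form of the successor representation (SR) underlying the "deep successor" idea can be sketched as follows (an illustrative toy, not the paper's model): M[s, s'] estimates the discounted expected future occupancy of state s' starting from s, learned with a TD-style update.

```python
import numpy as np

def sr_update(M, s, s_next, alpha=0.1, gamma=0.95):
    """One temporal-difference update of the successor matrix M."""
    n = M.shape[0]
    one_hot = np.eye(n)[s]
    M[s] += alpha * (one_hot + gamma * M[s_next] - M[s])
    return M

n_states = 4
M = np.zeros((n_states, n_states))
# Walk a simple chain 0 -> 1 -> 2 -> 3 repeatedly; M converges toward the
# discounted occupancy of future states along the chain.
for _ in range(500):
    for s in range(n_states - 1):
        sr_update(M, s, s + 1)
# After convergence, M[0] is roughly [1, gamma, gamma^2, ...] along the chain.
```

Because the SR factors out the dynamics from the reward, value estimates for a new goal come from a single dot product with a reward vector.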
Proceedings ArticleDOI

Finding "It": Weakly-Supervised Reference-Aware Visual Grounding in Instructional Videos

TL;DR: The paper introduces the visually grounded action graph, a structured representation capturing the latent dependency between grounding and references in video, and proposes a new reference-aware multiple-instance learning objective for weakly supervised grounding in videos.
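The core of a multiple-instance learning objective for weak supervision can be sketched as follows (a hedged, generic sketch; the pooling choice and names are our assumptions, not the paper's): only a video-level (bag) label is available, so per-segment (instance) scores are pooled into a bag score and penalized against the bag label.

```python
import math

def mil_loss(instance_scores, bag_label):
    """Negative log-likelihood of the bag label under max-pooled logits.

    instance_scores: raw logits, one per candidate grounding in the video.
    bag_label: 1 if the reference occurs somewhere in the video, else 0.
    """
    bag_logit = max(instance_scores)        # max-pooling over instances
    p = 1.0 / (1.0 + math.exp(-bag_logit))  # sigmoid
    return -math.log(p) if bag_label == 1 else -math.log(1.0 - p)

# A single confident positive instance gives a positive bag low loss:
low = mil_loss([-2.0, 0.1, 4.0], bag_label=1)
# All-negative instances give a positive bag high loss:
high = mil_loss([-4.0, -3.0, -5.0], bag_label=1)
```

Because only the pooled score is supervised, the model is free to discover *which* segment grounds the reference, which is what makes the supervision "weak".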