scispace - formally typeset

Trevor Darrell

Researcher at University of California, Berkeley

Publications -  734
Citations -  222973

Trevor Darrell is an academic researcher at the University of California, Berkeley. He has contributed to research in topics including Computer science and Object detection, has an h-index of 148, and has co-authored 678 publications receiving 181,113 citations. Previous affiliations of Trevor Darrell include the Massachusetts Institute of Technology and Boston University.

Papers
Proceedings ArticleDOI

Multiple person and speaker activity tracking with a particle filter

TL;DR: A system that combines sound and vision to track multiple people, using a particle filter with audio and video state components, and deriving observation likelihood methods based on both audio and video measurements.
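The core loop of a particle filter is predict, weight, resample. The sketch below is a minimal bootstrap filter for a single 2D target, not the paper's audio-visual system: the random-walk motion model, Gaussian likelihood, and the single fused "observation" standing in for joint audio/video measurements are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, observation, motion_std=0.5, obs_std=1.0):
    """One predict-weight-resample cycle of a bootstrap particle filter.
    `particles` is an (N, 2) array of position hypotheses; `observation`
    is a stand-in for a fused audio/video measurement of one target."""
    # Predict: random-walk motion model (a simplifying assumption).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Weight: Gaussian observation likelihood around the measurement.
    sq_dist = np.sum((particles - observation) ** 2, axis=1)
    weights = np.exp(-0.5 * sq_dist / obs_std ** 2)
    weights /= weights.sum()
    # Resample: duplicate likely particles, drop unlikely ones.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Track a target drifting right; the estimate is the posterior mean.
particles = rng.normal(0.0, 1.0, (500, 2))
for t in range(10):
    obs = np.array([0.1 * t, 0.0]) + rng.normal(0.0, 0.2, 2)
    particles = particle_filter_step(particles, obs)
estimate = particles.mean(axis=0)
```

Extending this to multiple people and to separate audio and video likelihood terms (as in the paper) means enlarging the state vector and multiplying per-modality likelihoods when weighting.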
Journal ArticleDOI

Toward Large-Scale Face Recognition Using Social Network Context

TL;DR: In this paper, the authors argue that social network context may be the key for large-scale face recognition to succeed, and they leverage the resources and structure of such social networks to improve face recognition rates on the images shared within them.
Proceedings ArticleDOI

Visual speech recognition with loosely synchronized feature streams

TL;DR: A novel dynamic Bayesian network with a multi-stream structure and observations consisting of articulatory feature classifier scores, which can model varying degrees of co-articulation in a principled way.
Posted Content

Zero-Shot Visual Imitation

TL;DR: An agent first explores the world without any expert supervision and then distills its experience into a goal-conditioned skill policy trained with a novel forward consistency loss; the expert's role is only to communicate the goals (i.e., what to imitate) during inference.
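The idea behind a forward consistency loss is to score a policy's action by the outcome a learned forward model predicts for it, rather than by matching the action itself, so any action that reaches the same next state incurs no penalty. The snippet below is a hedged toy illustration of that idea, not the paper's training setup; the linear dynamics function is a hypothetical stand-in for a learned forward model.

```python
import numpy as np

def forward_consistency_loss(forward_model, state, policy_action, next_state):
    """Compare the state the forward model predicts `policy_action` would
    reach against the state actually reached, instead of comparing actions."""
    predicted_next = forward_model(state, policy_action)
    return float(np.mean((predicted_next - next_state) ** 2))

# Toy check with linear dynamics s' = s + a (a hypothetical stand-in).
dynamics = lambda s, a: s + a
s = np.zeros(3)
target = np.ones(3)
loss_good = forward_consistency_loss(dynamics, s, np.ones(3), target)
loss_bad = forward_consistency_loss(dynamics, s, np.zeros(3), target)
```

Here `loss_good` is zero because the action reaches the target state exactly, while `loss_bad` is penalized because its predicted outcome falls short.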
Proceedings ArticleDOI

3D pose tracking with linear depth and brightness constraints

TL;DR: This paper explores the direct motion estimation problem assuming that video-rate depth information is available, from either stereo cameras or other sensors, and derives linear brightness and depth change constraint equations that govern the velocity field in 3D for both perspective and orthographic camera projection models.
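Brightness constancy gives one linear constraint per pixel on the velocity (Ix*u + Iy*v + It = 0), and depth constancy supplies an analogous second family of constraints; stacking both and solving by least squares overdetermines the motion. The sketch below is a simplified 2D reduction under that assumption, not the paper's full 3D perspective/orthographic formulation.

```python
import numpy as np

def estimate_velocity(Ix, Iy, It, Zx, Zy, Zt):
    """Solve for one 2D image velocity (u, v) by stacking per-pixel
    brightness-constancy constraints (Ix*u + Iy*v + It = 0) with the
    analogous depth-constancy constraints (Zx*u + Zy*v + Zt = 0),
    then taking the least-squares solution of the joint system."""
    A = np.stack([np.concatenate([Ix, Zx]), np.concatenate([Iy, Zy])], axis=1)
    b = -np.concatenate([It, Zt])
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv

# Synthetic gradients consistent with a true velocity of (1.0, -0.5).
rng = np.random.default_rng(1)
Ix, Iy = rng.normal(size=50), rng.normal(size=50)
Zx, Zy = rng.normal(size=50), rng.normal(size=50)
u_true, v_true = 1.0, -0.5
It = -(Ix * u_true + Iy * v_true)
Zt = -(Zx * u_true + Zy * v_true)
uv = estimate_velocity(Ix, Iy, It, Zx, Zy, Zt)
```

With noise-free synthetic gradients the joint system is consistent, so the least-squares solution recovers the true velocity exactly; real depth data would add noise that the extra depth constraints help average out.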