
Trevor Darrell

Researcher at University of California, Berkeley

Publications: 734
Citations: 222,973

Trevor Darrell is an academic researcher at the University of California, Berkeley. He has contributed to research topics including computer science and object detection. He has an h-index of 148 and has co-authored 678 publications receiving 181,113 citations. Previous affiliations of Trevor Darrell include the Massachusetts Institute of Technology and Boston University.

Papers
Patent

Photo-based mobile pointing system and related techniques

TL;DR: In this paper, a mobile deixis device includes a camera to capture an image and a wireless handheld device, coupled to the camera and a wireless network, that compares the image against existing databases to find similar images.
Journal Article

Structured Video Tokens @ Ego4D PNR Temporal Localization Challenge 2022

TL;DR: A learning framework, StructureViT (SViT for short), demonstrates how utilizing the structure of a small number of images, available only during training, can improve a video model by enriching a transformer model with a set of object tokens.
Proceedings Article

Instance-Aware Predictive Navigation in Multi-Agent Environments

TL;DR: In this article, an instance-aware predictive control (IPC) approach is proposed to anticipate future events at the object level and make informed driving decisions in dynamic multi-agent environments.
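As a rough illustration of the predictive-control idea (a generic sampling-based sketch, not the paper's actual model), candidate actions can be scored by rolling out a dynamics model over a short horizon and accumulating an object-level cost; the `dynamics` and `cost` callables below are hypothetical stand-ins for learned components:

```python
def predictive_control(state, dynamics, cost, candidate_actions, horizon=5):
    """Choose the candidate action whose simulated rollout accrues
    the lowest total cost over the planning horizon.

    `dynamics` and `cost` stand in for learned models that, in the
    paper's setting, would predict and score events at the instance
    (object) level rather than on this toy scalar state.
    """
    best_action, best_total = None, float("inf")
    for action in candidate_actions:
        s, total = state, 0.0
        for _ in range(horizon):
            s = dynamics(s, action)   # predicted next state
            total += cost(s)          # cost of that predicted state
        if total < best_total:
            best_action, best_total = action, total
    return best_action

# Toy demo: 1-D state, the same action repeated each step, cost = |state|.
best = predictive_control(
    state=3.0,
    dynamics=lambda s, a: s + a,
    cost=abs,
    candidate_actions=[-1.0, 0.0, 1.0],
)
```

In the toy run, steering the state toward zero (action −1.0) yields the cheapest rollout, so it is selected.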
Posted Content

Modularity Improves Out-of-Domain Instruction Following.

TL;DR: This work proposes a modular architecture for following natural language instructions that describe sequences of diverse subgoals, such as navigating to landmarks or picking up objects, and finds that modularization improves generalization to environments unseen in training and to novel tasks.
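The modular idea can be caricatured in a few lines: each subgoal type gets its own handler, and a controller dispatches a parsed instruction's subgoals in sequence. The module names and string outputs here are invented for illustration; in the paper the modules would be learned policies:

```python
# Hypothetical subgoal modules; in the actual system these would be
# learned policies specialized to one subgoal type each.
def navigate(target):
    return f"navigate to {target}"

def pick_up(obj):
    return f"pick up {obj}"

MODULES = {"navigate": navigate, "pick_up": pick_up}

def follow_instruction(subgoals):
    """Dispatch each (type, argument) subgoal to its dedicated module."""
    return [MODULES[kind](arg) for kind, arg in subgoals]

actions = follow_instruction([
    ("navigate", "the landmark"),
    ("pick_up", "the red block"),
])
```

Because each module is trained and invoked independently, novel combinations of familiar subgoals can be handled without retraining the whole policy, which is the intuition behind the reported out-of-domain generalization.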

Visual Domain Adaptation Using Regularized Cross-Domain Transforms

TL;DR: This work introduces a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation which minimizes the effect of domain-induced changes in the feature distribution, and proves that the resulting model may be kernelized to learn non-linear transformations under a variety of regularizers.
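As a simplified sketch of the cross-domain transform idea (a plain least-squares variant with an identity-bias regularizer, not the paper's metric-learning objective), one can learn a linear map `W` that sends source-domain features toward their target-domain counterparts:

```python
import numpy as np

def learn_transform(Xs, Xt, lam=1.0):
    """Learn a linear map W so that Xs @ W approximates Xt.

    Minimizes ||Xs W - Xt||_F^2 + lam * ||W - I||_F^2, so the
    regularizer pulls W toward the identity (no domain shift).
    Closed form: (Xs^T Xs + lam I) W = Xs^T Xt + lam I.
    """
    d = Xs.shape[1]
    A = Xs.T @ Xs + lam * np.eye(d)
    B = Xs.T @ Xt + lam * np.eye(d)
    return np.linalg.solve(A, B)

# Toy demo with synthetic paired features from two domains, where the
# target domain is an exactly linear perturbation of the source domain.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 5))
shift = np.eye(5) + 0.1 * rng.normal(size=(5, 5))
Xt = Xs @ shift
W = learn_transform(Xs, Xt, lam=0.1)
```

After fitting, `Xs @ W` should sit much closer to `Xt` than the unadapted features do. The kernelized, non-linear version mentioned in the abstract replaces this explicit linear map with transforms in a reproducing-kernel feature space.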