Trevor Darrell
Researcher at University of California, Berkeley
Publications - 734
Citations - 222,973
Trevor Darrell is an academic researcher at the University of California, Berkeley. He has contributed to research in the areas of computer science and object detection, has an h-index of 148, and has co-authored 678 publications receiving 181,113 citations. His previous affiliations include the Massachusetts Institute of Technology and Boston University.
Papers
Proceedings Article
Factorized multi-modal topic model
TL;DR: In this paper, the authors combine the two approaches by presenting a novel HDP-based topic model that automatically learns both shared and private topics, which is shown to be especially useful for querying the contents of one domain given samples of the other.
Posted Content
Quasi-Dense Instance Similarity Learning
TL;DR: This paper presents a simple yet effective quasi-dense matching method to learn instance similarity from hundreds of region proposals in a pair of images, and can outperform existing methods without using location or motion heuristics on joint object detection and tracking.
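The core idea of quasi-dense matching can be illustrated with a minimal numpy sketch, assuming proposal embeddings have already been extracted: compute dense pairwise similarity between the two frames' proposal embeddings, then associate instances by mutual best match, with no location or motion heuristics. All function and variable names here are illustrative, not the paper's API.

```python
import numpy as np

def instance_similarity(key_embs, ref_embs):
    """Cosine similarity between every pair of region-proposal embeddings
    from two frames (rows: key frame, cols: reference frame)."""
    key = key_embs / np.linalg.norm(key_embs, axis=1, keepdims=True)
    ref = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    return key @ ref.T

def bidirectional_match(sim):
    """Associate proposal i with j only when each is the other's best
    match -- a simple, heuristic-free association rule."""
    fwd = sim.argmax(axis=1)  # best reference proposal for each key proposal
    bwd = sim.argmax(axis=0)  # best key proposal for each reference proposal
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

# Toy data: the same five instances observed twice, slightly perturbed.
rng = np.random.default_rng(0)
key = rng.normal(size=(5, 8))
ref = key + 0.01 * rng.normal(size=(5, 8))
sim = instance_similarity(key, ref)
matches = bidirectional_match(sim)
```

In this toy setup, each proposal is matched back to its own counterpart; the actual method learns the embeddings contrastively from hundreds of proposals per image pair.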
Team SRI-Sarnoff's AURORA System @ TRECVID 2011 (Author's Manuscript)
Hui Cheng, Amir Tamrakar, Saad Ali, Qian Yu, Omar Javed, Jingen Liu, Ajay Divakaran, Harpreet Sawhney, Alexander G. Hauptmann, Mubarak Shah, Subhabrata Bhattacharya, Michael Witbrock, Jon Curtis, Gerald Friedland, Robert Mertens, Trevor Darrell, R. Manmatha, James Allan +17 more
TL;DR: This paper presents results from the experimental evaluation for the TRECVID 2011 MED11 (Multimedia Event Detection) task as a part of Team SRI-Sarnoff's AURORA system being developed under the IARPA ALADDIN Program.
Journal ArticleDOI
Correspondence with cumulative similarity transforms
Trevor Darrell, M. Covell +1 more
TL;DR: A local image transform based on cumulative similarity measures is defined and is shown to enable efficient correspondence and tracking near occluding boundaries and results comparing this method to traditional least-squares and robust correspondence matching are shown.
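A toy 1-D sketch of the underlying idea, under loose assumptions about the transform (names, thresholds, and the similarity kernel are illustrative, not the paper's formulation): walk outward from a center pixel, accumulating similarity to the center value, and stop as soon as similarity drops sharply, so the descriptor never mixes values from across a sharp (occluding) boundary.

```python
import numpy as np

def cumulative_similarity_1d(signal, center, tau=0.5, sigma=10.0):
    """Toy cumulative-similarity descriptor around `center`: accumulate a
    Gaussian similarity to the center value while walking outward, and
    truncate at the first position whose similarity falls below `tau`."""
    c = signal[center]
    desc = np.zeros(len(signal))
    for step in (1, -1):  # walk right, then left
        cum = 1.0
        i = center
        while 0 <= i + step < len(signal):
            i += step
            sim = np.exp(-((signal[i] - c) ** 2) / (2 * sigma ** 2))
            if sim < tau:
                break  # crossed a sharp boundary; stop accumulating
            cum *= sim
            desc[i] = cum
    desc[center] = 1.0
    return desc

# A step edge at index 3: the descriptor from index 1 covers the left
# plateau only, never leaking across the boundary.
signal = np.array([100.0, 100.0, 100.0, 0.0, 0.0, 0.0])
desc = cumulative_similarity_1d(signal, center=1)
```

Truncating at the boundary is what makes correspondence robust near occlusions, compared with least-squares matching over a fixed window.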
Journal ArticleDOI
Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning
Colorado Reed, Ritwik Gupta, Shufan Li, Sarah Brockman, Christopher S. Funk, Brian Clipp, Salvatore Candido, Matthew T. Uyttendaele, Trevor Darrell +8 more
TL;DR: Scale-MAE, as described in this paper, pretrains a network by masking an input image at a known input scale, where the area of the Earth covered by the image determines the scale of the ViT positional encoding, not the image resolution.
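The positional-encoding idea can be sketched in a few lines of numpy, under simplifying assumptions (1-D patch grid, standard sinusoidal encoding; not the paper's exact formula): positions are expressed in meters on the ground (patch index times the ground sample distance) rather than in patch indices, so the encoding depends on the Earth area covered rather than on image resolution.

```python
import numpy as np

def gsd_positional_encoding(num_patches, dim, gsd_meters):
    """Sinusoidal positional encoding over ground positions (in meters)
    instead of patch indices; illustrative sketch of scale-aware encoding."""
    pos = np.arange(num_patches) * gsd_meters            # ground position of each patch
    freqs = 1.0 / (10000 ** (np.arange(0, dim, 2) / dim))
    angles = pos[:, None] * freqs[None, :]
    enc = np.zeros((num_patches, dim))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc

# A coarse image (fewer patches, larger GSD) covering the same ground
# extent as a fine image samples the same underlying encoding function:
fine = gsd_positional_encoding(8, 16, gsd_meters=10.0)    # 8 patches x 10 m
coarse = gsd_positional_encoding(4, 16, gsd_meters=20.0)  # 4 patches x 20 m
assert np.allclose(coarse, fine[::2])  # coarse grid = every other fine position
```

With a resolution-based encoding, the two images would receive unrelated encodings for the same ground locations; tying the encoding to ground distance is what makes the learned representation consistent across scales.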