Trevor Darrell
Researcher at University of California, Berkeley
Publications - 734
Citations - 222,973
Trevor Darrell is an academic researcher at the University of California, Berkeley. The author has contributed to research in topics including computer science and object detection, has an h-index of 148, and has co-authored 678 publications receiving 181,113 citations. Previous affiliations of Trevor Darrell include the Massachusetts Institute of Technology and Boston University.
Papers
Proceedings ArticleDOI
Interactive adaptation of real-time object detectors
TL;DR: This paper shows how to create new detectors on the fly using large-scale internet image databases, allowing a user to choose among thousands of available categories to build a detection system suited to a particular robotic application.
Proceedings ArticleDOI
Learning Saliency Propagation for Semi-Supervised Instance Segmentation
TL;DR: This work proposes ShapeProp, which learns to activate the salient regions within a detected object and propagate them to the whole instance through an iterative, learnable message-passing module, establishing a new state of the art for semi-supervised instance segmentation.
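To make the propagation idea concrete, here is a minimal sketch of spreading sparse saliency across a region by iterative neighbor averaging. This is an illustrative, fixed (non-learned) stand-in for ShapeProp's learnable message-passing module, not the paper's implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def propagate_saliency(seed, mask, steps=10, mix=0.5):
    """Iteratively spread a sparse saliency seed across a region mask by
    averaging each cell with its 4-neighbours. A fixed stand-in for a
    learnable message-passing module: each step, every cell mixes its own
    value with the mean of its neighbours' values, restricted to the mask."""
    s = seed.astype(float)
    for _ in range(steps):
        # messages from the four neighbours (zero-padded at the borders)
        up    = np.roll(s,  1, axis=0); up[0, :]     = 0
        down  = np.roll(s, -1, axis=0); down[-1, :]  = 0
        left  = np.roll(s,  1, axis=1); left[:, 0]   = 0
        right = np.roll(s, -1, axis=1); right[:, -1] = 0
        msg = (up + down + left + right) / 4.0
        s = (1 - mix) * s + mix * msg
        s *= mask  # saliency only lives inside the detected region
    return s
```

Starting from a single activated pixel inside a mask, repeated steps diffuse saliency outward until it covers the masked region, which is the intuition behind propagating a salient seed to the whole instance.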
Proceedings ArticleDOI
Body2Hands: Learning to Infer 3D Hands from Conversational Gesture Body Dynamics
TL;DR: This paper proposes a learned deep prior on body motion for 3D hand shape synthesis and estimation in the domain of conversational gestures, based on the insight that body motion and hand gestures are strongly correlated in nonverbal communication settings.
Proceedings Article
Recovering Articulated Model Topology from Observed Rigid Motion
TL;DR: This paper addresses recovering the topology of an articulated model when the rigid motion of its constituent segments is known, by finding the maximum-likelihood tree-shaped factorization of the joint probability density function of the rigid segment motions.
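A maximum-likelihood tree-shaped factorization of a joint density can be found, Chow-Liu style, as the maximum-weight spanning tree over pairwise dependence scores between variables (here, segment motions). The sketch below illustrates that construction under assumed inputs; the dependence matrix and function name are hypothetical, not from the paper.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def ml_tree_edges(dependence):
    """Given a symmetric matrix of pairwise dependence scores between rigid
    segments (e.g. mutual information of their observed motions), return the
    edges of the maximum-weight spanning tree -- the tree-shaped factorization
    that maximises the likelihood in the Chow-Liu construction."""
    # scipy computes a *minimum* spanning tree, so negate the weights
    mst = minimum_spanning_tree(-np.asarray(dependence, dtype=float))
    rows, cols = mst.nonzero()
    # normalise each edge to (low, high) vertex order
    return sorted(tuple(sorted((int(r), int(c)))) for r, c in zip(rows, cols))
```

With three segments where motions 0-1 and 1-2 are strongly dependent but 0-2 only weakly, the recovered tree links 0-1 and 1-2, i.e. segment 1 is the shared joint.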
Posted Content
Regularization Matters in Policy Optimization -- An Empirical Study on Continuous Control
TL;DR: This work presents the first comprehensive study of regularization techniques with multiple policy-optimization algorithms on continuous control tasks, and analyzes why regularization may help generalization in RL from four perspectives: sample complexity, reward distribution, weight norm, and noise robustness.
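As a concrete example of one technique in that family, here is a minimal sketch of adding an L2 (weight-decay) penalty to a policy-optimization objective. The function name and coefficient are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def regularized_policy_loss(policy_loss, weights, l2_coef=1e-4):
    """Add an L2 penalty to a policy-optimization objective:
    total = policy loss + l2_coef * sum of squared policy weights.
    `weights` is a list of the policy network's parameter arrays;
    l2_coef is an illustrative coefficient, not one from the paper."""
    l2 = sum(float(np.sum(w ** 2)) for w in weights)
    return policy_loss + l2_coef * l2
```

In practice the penalty is added to the surrogate loss before each gradient step, discouraging large weight norms, one of the four perspectives the study analyzes.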