
Showing papers by "Sergio Guadarrama published in 2018"


Book ChapterDOI
Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, Kevin Murphy
08 Sep 2018
TL;DR: In this paper, the authors use large amounts of unlabeled video to learn models for visual tracking without manual human supervision, and leverage the natural temporal coherency of color to create a model that learns to colorize gray-scale videos by copying colors from a reference frame.
Abstract: We use large amounts of unlabeled video to learn models for visual tracking without manual human supervision. We leverage the natural temporal coherency of color to create a model that learns to colorize gray-scale videos by copying colors from a reference frame. Quantitative and qualitative experiments suggest that this task causes the model to automatically learn to track visual regions. Although the model is trained without any ground-truth labels, our method learns to track well enough to outperform the latest methods based on optical flow. Moreover, our results suggest that failures to track are correlated with failures to colorize, indicating that advancing video colorization may further improve self-supervised visual tracking.

380 citations
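The mechanism the abstract describes — tracking that emerges from copying colors from a reference frame by matching learned per-pixel features — amounts to a softmax attention step over reference pixels. The sketch below illustrates that step with NumPy; the function and array names are illustrative and not taken from the authors' code:

```python
import numpy as np

def copy_colors(ref_feats, tgt_feats, ref_colors, temperature=1.0):
    """Propagate colors from a reference frame to a target frame.

    ref_feats:  (N, D) learned embeddings of reference-frame pixels
    tgt_feats:  (M, D) learned embeddings of target-frame pixels
    ref_colors: (N, C) quantized color labels (one-hot) of reference pixels
    Returns a (M, C) predicted color distribution per target pixel.
    """
    # Similarity between every target pixel and every reference pixel.
    logits = tgt_feats @ ref_feats.T / temperature        # (M, N)
    # Softmax over reference pixels: where does each target pixel "point"?
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)
    # Copy colors as a weighted average of reference-frame colors.
    return weights @ ref_colors                           # (M, C)
```

Because the pointing weights are computed from features alone, the same weights that copy colors at training time can copy segmentation masks or keypoints at test time — which is how colorization doubles as tracking.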


Posted Content
Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, Kevin Murphy

20 citations