
Raquel Urtasun

Researcher at Uber

Publications -  404
Citations -  63944

Raquel Urtasun is an academic researcher at Uber. The author has contributed to research in topics: Object detection & Segmentation. The author has an h-index of 96 and has co-authored 397 publications receiving 47100 citations. Previous affiliations of Raquel Urtasun include the University of Toronto & the University of California, Berkeley.

Papers
Proceedings ArticleDOI

Are we ready for autonomous driving? The KITTI vision benchmark suite

TL;DR: The autonomous driving platform is used to develop novel, challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection, revealing that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory into the real world.
Journal ArticleDOI

Vision meets robotics: The KITTI dataset

TL;DR: A novel dataset captured from a VW station wagon is presented for use in mobile robotics and autonomous driving research, recorded with a variety of sensor modalities such as high-resolution color and grayscale stereo cameras and a high-precision GPS/IMU inertial navigation system.
Proceedings ArticleDOI

Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books

TL;DR: The authors align books to their movie releases to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in the current datasets, and propose a context-aware CNN to combine information from multiple sources.
Proceedings Article

Skip-thought vectors

TL;DR: The continuity of text from books is used to train an encoder-decoder model that reconstructs the surrounding sentences of an encoded passage, producing highly generic sentence representations that are robust and perform well in practice.
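As a rough illustration of the objective described above, the sketch below computes a skip-thought-style loss on a toy corpus: each sentence is encoded to a vector, and two decoders score the previous and next sentences. The bag-of-words encoder, softmax decoders, random weights, and toy vocabulary are all simplifying assumptions standing in for the paper's GRU encoder-decoder trained on books.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: consecutive "sentences" as arrays of word ids.
# Hypothetical data; the real model trains on sentence triples from books.
VOCAB, DIM = 20, 8
sentences = [rng.integers(0, VOCAB, size=5) for _ in range(4)]

# Shared embedding table plus one output projection per decoder
# (previous-sentence and next-sentence), mirroring the skip-thought setup.
emb = rng.normal(0, 0.1, (VOCAB, DIM))
W_prev = rng.normal(0, 0.1, (DIM, VOCAB))
W_next = rng.normal(0, 0.1, (DIM, VOCAB))

def encode(sent):
    """Bag-of-words encoder standing in for the paper's GRU encoder."""
    return emb[sent].mean(axis=0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def nll(code, target, W):
    """Negative log-likelihood of the target sentence's words given the code."""
    probs = softmax(code @ W)
    return -np.log(probs[target]).sum()

# Skip-thought objective: from sentence s_i, predict s_{i-1} and s_{i+1}.
loss = 0.0
for i in range(1, len(sentences) - 1):
    code = encode(sentences[i])
    loss += nll(code, sentences[i - 1], W_prev)
    loss += nll(code, sentences[i + 1], W_next)
print(loss)
```

Minimizing this loss with gradient descent would push the sentence code to carry information about its neighbors, which is what makes the learned representations generic and transferable.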
Proceedings ArticleDOI

The Role of Context for Object Detection and Semantic Segmentation in the Wild

TL;DR: A novel deformable part-based model is proposed that exploits both local context around each candidate detection and global context at the level of the scene; this context significantly helps in detecting objects at all scales.
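The idea of combining a detector's appearance score with local and global context cues can be sketched as a simple weighted sum. The candidate scores and cue weights below are illustrative assumptions, not the learned parameters of the paper's deformable part-based model.

```python
import numpy as np

# Hypothetical candidates: columns are (appearance score,
# local-context score, global scene-context score).
candidates = np.array([
    [0.9, 0.2, 0.7],    # strong appearance, scene agrees
    [0.8, -0.5, -0.6],  # strong appearance, context disagrees
    [0.4, 0.6, 0.7],    # weak appearance rescued by context
])
w = np.array([1.0, 0.5, 0.5])  # illustrative relative weight of each cue

scores = candidates @ w   # combined score per candidate detection
keep = scores > 0.8       # simple threshold in place of full model inference
print(scores, keep)
```

The point of the toy numbers: context can both suppress a confident detection that is implausible for the scene (second row) and rescue a weak one that fits its surroundings (third row).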