Proceedings ArticleDOI

Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios?

TLDR
The importance of recognition granularity is investigated, from coarse 2D bounding box estimates over 2D instance segmentations to fine-grained 3D object part predictions, and the instance segmentation cue is observed to be by far the strongest in the authors' setting.
Abstract
Existing methods for 3D scene flow estimation often fail in the presence of large displacements or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance of recognition granularity, from coarse 2D bounding box estimates over 2D instance segmentations to fine-grained 3D object part predictions. We compute these cues using CNNs trained on a newly annotated dataset of stereo images and integrate them into a CRF-based model for robust 3D scene flow estimation - an approach we term Instance Scene Flow. We analyze the importance of each recognition cue in an ablation study and observe that the instance segmentation cue is by far the strongest in our setting. We demonstrate the effectiveness of our method on the challenging KITTI 2015 scene flow benchmark, where we achieve state-of-the-art performance at the time of submission.
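As a rough illustration of how an instance-level recognition cue can constrain 3D motion, the sketch below is a simplified assumption, not the authors' CRF formulation: the per-instance mean motion stands in for a properly fitted rigid transform, and pixels of the same detected instance are penalized for deviating from it.

    import numpy as np

    def instance_rigidity_energy(flow, instance_mask, weight=1.0):
        """Toy energy term: per-pixel 3D flow should agree with the mean
        (pseudo-rigid) motion of the instance it belongs to.

        flow          -- (H, W, 3) array of per-pixel 3D scene flow vectors
        instance_mask -- (H, W) integer array, 0 = background, >0 = instance id
        """
        energy = 0.0
        for inst_id in np.unique(instance_mask):
            if inst_id == 0:                              # skip background pixels
                continue
            pix = instance_mask == inst_id
            mean_motion = flow[pix].mean(axis=0)          # surrogate for a rigid motion fit
            residual = np.linalg.norm(flow[pix] - mean_motion, axis=1)
            energy += weight * residual.sum()             # robust penalty omitted for brevity
        return energy

    # usage: encourages all pixels of one car to move consistently
    flow = np.random.randn(4, 5, 3)
    mask = np.zeros((4, 5), dtype=int); mask[1:3, 1:4] = 1
    print(instance_rigidity_energy(flow, mask))

In the paper this kind of per-instance consistency is enforced jointly with photometric and geometric terms inside a CRF; the toy energy above only conveys why instance masks are such a strong cue.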


Citations
Posted Content

SENSE: a Shared Encoder Network for Scene-flow Estimation

TL;DR: SENSE introduces a compact network for holistic scene flow estimation that shares common encoder features among four closely related tasks: optical flow, disparity estimation from stereo, occlusion estimation, and semantic segmentation.
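A minimal sketch of the shared-encoder idea follows; the layer sizes, head designs, and the 19-class semantic head are illustrative assumptions, not the SENSE architecture.

    import torch
    import torch.nn as nn

    class SharedEncoderNet(nn.Module):
        """Toy multi-task net: one encoder, four lightweight task heads."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            # one 1x1 prediction head per task (channel counts are assumptions)
            self.heads = nn.ModuleDict({
                "flow": nn.Conv2d(64, 2, 1),        # optical flow (u, v)
                "disparity": nn.Conv2d(64, 1, 1),   # stereo disparity
                "occlusion": nn.Conv2d(64, 1, 1),   # occlusion probability
                "semantics": nn.Conv2d(64, 19, 1),  # semantic class logits
            })

        def forward(self, image):
            feat = self.encoder(image)              # features computed once, reused by all heads
            return {name: head(feat) for name, head in self.heads.items()}

    out = SharedEncoderNet()(torch.randn(1, 3, 64, 128))
    print({k: tuple(v.shape) for k, v in out.items()})

Sharing the encoder is what makes the model compact: the expensive feature extraction is paid once per image rather than once per task.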
Proceedings ArticleDOI

Binary TTC: A Temporal Geofence for Autonomous Navigation

TL;DR: The authors estimate the time to contact (TTC), i.e., the time until an object collides with the observer's plane, via a series of simpler binary classifications, predicting with low latency whether the observer will collide with an obstacle within a given time.
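A minimal sketch of how such binary answers bound the TTC without regressing it directly; the thresholds and the way the answers are produced are placeholder assumptions, not the paper's network.

    def ttc_interval(binary_answers, thresholds):
        """binary_answers[i] is True if a collision is predicted within
        thresholds[i] seconds; thresholds must be sorted ascending.
        Returns (lower, upper) bounds on the time to contact."""
        lower = 0.0
        upper = float("inf")
        for t, hit in zip(thresholds, binary_answers):
            if hit:
                upper = min(upper, t)   # collision happens no later than t
            else:
                lower = max(lower, t)   # collision happens no earlier than t
        return lower, upper

    # e.g. "no collision within 0.5 s or 1 s, but yes within 2 s and 4 s"
    print(ttc_interval([False, False, True, True], [0.5, 1.0, 2.0, 4.0]))  # -> (1.0, 2.0)

Each yes/no question acts as a temporal geofence: a single low-latency classification already tells the planner whether the obstacle is inside the critical time window.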
Proceedings ArticleDOI

UPFlow: Upsampling Pyramid for Unsupervised Optical Flow Learning

TL;DR: In this paper, a self-guided upsample module is proposed to tackle the interpolation blur caused by bilinear upsampling between pyramid levels, and a pyramid distillation loss provides supervision for intermediate levels by distilling the finest flow as pseudo labels.
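A minimal sketch of the pyramid-distillation idea, assuming plain bilinear resizing and an unweighted L1 loss; the paper's self-guided upsample module is not reproduced here.

    import torch
    import torch.nn.functional as F

    def pyramid_distillation_loss(pyramid_flows):
        """pyramid_flows[0] is the finest (B, 2, H, W) flow; later entries are
        coarser levels. The finest flow, detached, supervises each coarser level."""
        finest = pyramid_flows[0].detach()              # pseudo label, no gradient
        loss = 0.0
        for coarse in pyramid_flows[1:]:
            scale = coarse.shape[-1] / finest.shape[-1]
            # downsample the pseudo label and rescale the flow magnitude accordingly
            target = F.interpolate(finest, size=coarse.shape[-2:],
                                   mode="bilinear", align_corners=False) * scale
            loss = loss + F.l1_loss(coarse, target)
        return loss

    flows = [torch.randn(1, 2, 64, 128), torch.randn(1, 2, 32, 64), torch.randn(1, 2, 16, 32)]
    print(pyramid_distillation_loss(flows).item())

The point is that unsupervised photometric losses are only applied at the finest level, while the coarser levels are trained against the finest prediction instead of being left unsupervised.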
Proceedings ArticleDOI

Visualizing the Invisible: Occluded Vehicle Segmentation and Recovery

TL;DR: Zhang et al. propose an iterative multi-task framework that completes the segmentation mask of an occluded vehicle and recovers the appearance of its invisible parts.
Proceedings ArticleDOI

Few-shot Human Motion Prediction via Learning Novel Motion Dynamics

TL;DR: This work proposes a novel approach named Motion Prediction Network (MoPredNet) for few-shot human motion prediction that can be adapted to predicting new motion dynamics using limited data, and it elegantly captures long-term dependencies in motion dynamics.
References
Proceedings ArticleDOI

Fast R-CNN

TL;DR: Fast R-CNN proposes a Fast Region-based Convolutional Network method for object detection, which employs several innovations to improve training and testing speed while also increasing detection accuracy, achieving a higher mAP on PASCAL VOC 2012.
Posted Content

Fast R-CNN

TL;DR: This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection that builds on previous work to efficiently classify object proposals using deep convolutional networks.
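A minimal sketch of the region-of-interest pooling step that lets a single shared feature map serve many proposals; the grid size and coordinate rounding here are simplified assumptions, not the original layer.

    import numpy as np

    def roi_max_pool(feature_map, roi, output_size=2):
        """Crop one proposal from a shared (C, H, W) feature map and max-pool it
        to a fixed (C, output_size, output_size) grid, so proposals of any size
        can feed the same fully connected classifier."""
        c, h, w = feature_map.shape
        x0, y0, x1, y1 = roi                      # proposal in feature-map coordinates
        xs = np.linspace(x0, x1, output_size + 1).astype(int)
        ys = np.linspace(y0, y1, output_size + 1).astype(int)
        out = np.zeros((c, output_size, output_size), dtype=feature_map.dtype)
        for i in range(output_size):
            for j in range(output_size):
                cell = feature_map[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                                      xs[j]:max(xs[j + 1], xs[j] + 1)]
                out[:, i, j] = cell.reshape(c, -1).max(axis=1)
        return out

    print(roi_max_pool(np.random.rand(8, 16, 16), roi=(2, 3, 10, 12)).shape)  # (8, 2, 2)

Because the convolutional features are computed once per image and only the pooling is repeated per proposal, training and testing are much faster than running the full network on every cropped region.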
Posted Content

Caffe: Convolutional Architecture for Fast Feature Embedding

TL;DR: Caffe is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.
Proceedings ArticleDOI

Are we ready for autonomous driving? The KITTI vision benchmark suite

TL;DR: The autonomous driving platform is used to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection, revealing that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory to the real world.
Proceedings ArticleDOI

Caffe: Convolutional Architecture for Fast Feature Embedding

TL;DR: Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.