Open Access | Posted Content

Deep Semantic Matching for Optical Flow.

TLDR
This work tackles the problem of estimating optical flow from a monocular camera in the context of autonomous driving by estimating the traffic participants with instance-level segmentation, and introduces a new convolutional net that learns to perform flow matching and is able to estimate the uncertainty of its matches.
Abstract
We tackle the problem of estimating optical flow from a monocular camera in the context of autonomous driving. We build on the observation that the scene is typically composed of a static background and a relatively small number of traffic participants that move rigidly in 3D. We propose to estimate the traffic participants using instance-level segmentation. For each traffic participant, we exploit the epipolar constraints that govern its independent motion for faster and more accurate estimation. Our second contribution is a new convolutional net that learns to perform flow matching and is able to estimate the uncertainty of its matches; this is a core element of our flow estimation pipeline. We demonstrate the effectiveness of our approach on the challenging KITTI 2015 flow benchmark and show that it outperforms published approaches by a large margin.
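To make the per-object epipolar constraint concrete, here is a minimal sketch, assuming a fundamental matrix F has already been estimated for one traffic participant (the function names and the pixel tolerance are illustrative, not the authors' code): candidate matches for a pixel are kept only if they lie near its epipolar line, which both shrinks the search space and prunes outliers.

import numpy as np

def epipolar_line(F, x):
    # Epipolar line l = F @ [u, v, 1] in the second image for pixel
    # x = (u, v) in the first image; normalized so that the dot product
    # with a homogeneous point is a distance in pixels.
    l = F @ np.array([x[0], x[1], 1.0])
    return l / np.linalg.norm(l[:2])

def satisfies_epipolar(F, x1, x2, tol=1.0):
    # Keep the candidate match x2 only if it lies within tol pixels
    # of the epipolar line induced by x1.
    l = epipolar_line(F, x1)
    return abs(l @ np.array([x2[0], x2[1], 1.0])) < tol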


Citations
Posted Content

Deep Image Homography Estimation

TL;DR: Two convolutional neural network architectures are presented for HomographyNet: a regression network which directly estimates the real-valued homography parameters, and a classification network which produces a distribution over quantized homographies.
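A heavily condensed sketch of the regression variant follows, assuming PyTorch and the paper's 4-point parameterization (eight corner offsets determine the homography); the layer sizes here are illustrative and far shallower than the VGG-style network the paper uses.

import torch.nn as nn

class HomographyRegressor(nn.Module):
    # Input: two grayscale patches stacked as a 2-channel image.
    # Output: 8 real values, the corner-offset parameterization of the
    # homography relating the two patches.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 8),
        )

    def forward(self, pair):
        return self.head(self.features(pair))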
Posted Content

Building Deep Networks on Grassmann Manifolds

TL;DR: This paper proposes a deep network architecture that generalizes the Euclidean network paradigm to Grassmann manifolds, designing full-rank mapping layers to transform input Grassmannian data into more desirable representations and exploiting re-orthonormalization layers to normalize the resulting matrices.
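The re-orthonormalization step can be sketched with a QR decomposition, which maps a batch of tall matrices back onto orthonormal frames; this is a minimal PyTorch illustration, not the paper's implementation (which also derives the manifold-aware backpropagation).

import torch

def reorthonormalize(X):
    # X: (batch, n, p) with n >= p. QR decomposition returns an
    # orthonormal Q (Q^T Q = I), restoring the property that each
    # sample represents a point on the Grassmann manifold.
    Q, _ = torch.linalg.qr(X)
    return Q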
Book Chapter

Deep Discrete Flow

TL;DR: This work presents an efficient way of training a context network with a large receptive field size on top of a local network using dilated convolutions on patches and provides an extensive empirical investigation of network architectures and model parameters.
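A minimal sketch of such a context network, assuming PyTorch (the channel count and dilation schedule are illustrative): stacking 3x3 convolutions with exponentially growing dilation enlarges the receptive field rapidly without losing resolution.

import torch.nn as nn

def context_network(channels=64):
    # Each 3x3 convolution keeps the spatial size (padding = dilation)
    # while the dilation schedule grows the receptive field
    # exponentially with depth.
    layers = []
    for d in (1, 2, 4, 8, 16):
        layers += [nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                   nn.ReLU()]
    return nn.Sequential(*layers)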
Journal Article

Deep corner prediction to rectify tilted license plate images

TL;DR: This work proposes deep neural network models that locate the four corners of a license plate, which can then be used to compute the perspective transformation that rectifies tilted plate images.
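Given the four predicted corners, the rectification step itself is standard; a minimal OpenCV sketch follows (the output size and corner ordering are assumptions, not the paper's values).

import cv2
import numpy as np

def rectify_plate(image, corners, out_w=240, out_h=80):
    # corners: four (x, y) points in the order top-left, top-right,
    # bottom-right, bottom-left. Warp the tilted plate to a
    # fronto-parallel out_w x out_h view.
    src = np.asarray(corners, dtype=np.float32)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (out_w, out_h))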
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
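For reference, one Adam update written as a minimal NumPy sketch; the default hyperparameters are the paper's suggested values.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square...
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    # ...with bias correction for the zero initialization (t starts at 1).
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # Per-parameter step scaled by the second-moment estimate.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v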
Journal Article

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
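A typical use of these features, sketched with OpenCV's SIFT implementation and Lowe's ratio test (the 0.75 ratio threshold is a common default, not prescribed here):

import cv2

def sift_matches(img1, img2, ratio=0.75):
    # Detect keypoints and 128-D descriptors in two grayscale images,
    # then keep only distinctive correspondences via the ratio test.
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(img1, None)
    _, d2 = sift.detectAndCompute(img2, None)
    pairs = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    return [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]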
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
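The training-mode transform itself is compact; a minimal NumPy sketch (inference additionally uses running averages of the batch statistics, omitted here):

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch (x: (batch, features)),
    # then restore representational power with learned scale and shift.
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_hat + beta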
Posted Content

Adam: A Method for Stochastic Optimization

TL;DR: In this article, Adam is introduced: an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments.
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: In this article, Batch Normalization is proposed: it normalizes layer inputs for each training mini-batch to reduce internal covariate shift in deep neural networks, and it achieves state-of-the-art performance on ImageNet.