Proceedings ArticleDOI

Deep Graphical Feature Learning for the Feature Matching Problem

TLDR
This paper proposes a graph neural network model to transform the coordinates of feature points into local features; promising results on both synthetic and real datasets demonstrate the effectiveness of the proposed method.
Abstract
The feature matching problem is a fundamental problem in various areas of computer vision, including image registration, tracking and motion analysis. Rich local representations are a key part of efficient feature matching methods. However, when the local features are limited to the coordinates of keypoints, it becomes challenging to extract rich local representations. Traditional approaches use pairwise or higher-order handcrafted geometric features to obtain robust matching, which requires solving NP-hard assignment problems. In this paper, we address this problem by proposing a graph neural network model to transform the coordinates of feature points into local features. With our local features, the traditional NP-hard assignment problems are replaced by a simple assignment problem that can be solved efficiently. Promising results on both synthetic and real datasets demonstrate the effectiveness of the proposed method.
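A minimal sketch of the pipeline described above, assuming a hypothetical hand-crafted feature map in place of the learned graph neural network and using SciPy's Hungarian solver (linear_sum_assignment) for the resulting simple assignment problem; it is an illustration of the idea, not the authors' implementation.

```python
# Sketch: per-point features turn the NP-hard quadratic assignment into a
# simple linear assignment. point_features is a hypothetical stand-in for
# the paper's learned graph neural network embedding.
import numpy as np
from scipy.optimize import linear_sum_assignment


def point_features(points: np.ndarray) -> np.ndarray:
    """Placeholder feature map for 2D keypoints (the real method learns this)."""
    x, y = points[:, 0], points[:, 1]
    return np.stack([x, y, x * y, x**2, y**2], axis=1)


def match(points_a: np.ndarray, points_b: np.ndarray) -> np.ndarray:
    """Match two point sets by solving a linear assignment on feature similarity."""
    fa = point_features(points_a)
    fb = point_features(points_b)
    fa /= np.linalg.norm(fa, axis=1, keepdims=True)
    fb /= np.linalg.norm(fb, axis=1, keepdims=True)
    similarity = fa @ fb.T
    # Hungarian algorithm: maximize total similarity in polynomial time.
    rows, cols = linear_sum_assignment(-similarity)
    return np.stack([rows, cols], axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(10, 2))
    perm = rng.permutation(10)
    print(match(pts, pts[perm]))  # with identical point sets, this should recover the permutation
```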



Citations
Proceedings Article

Deep Graph Matching Consensus

TL;DR: This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs that scales well to large, real-world inputs while still being able to recover global correspondences consistently.
Posted Content

Neural Graph Matching Network: Learning Lawler's Quadratic Assignment Problem with Extension to Hypergraph and Multiple-graph Matching

TL;DR: This paper presents a QAP network that learns directly on the affinity matrix (equivalently, the association graph), whereby the matching problem is translated into a vertex classification task; it is the first network to learn directly with the general Lawler's QAP.
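As background for the association-graph view, the sketch below builds a Lawler-QAP affinity matrix from two point sets using an assumed Gaussian affinity on pairwise distances; vertex classification on this graph then amounts to deciding which candidate matches (i, a) to keep. The affinity choice and function name are illustrative, not taken from the paper.

```python
# Sketch of the association graph behind Lawler's QAP (assumed Gaussian
# affinity on pairwise distances; written with explicit loops for clarity).
import numpy as np


def association_affinity(pts1: np.ndarray, pts2: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Affinity matrix K whose rows/columns index candidate matches (i, a)."""
    n1, n2 = len(pts1), len(pts2)
    d1 = np.linalg.norm(pts1[:, None] - pts1[None, :], axis=-1)  # n1 x n1 distances
    d2 = np.linalg.norm(pts2[:, None] - pts2[None, :], axis=-1)  # n2 x n2 distances
    K = np.zeros((n1 * n2, n1 * n2))
    for i in range(n1):
        for j in range(n1):
            for a in range(n2):
                for b in range(n2):
                    # How well edge (i, j) in graph 1 agrees with edge (a, b) in graph 2.
                    K[i * n2 + a, j * n2 + b] = np.exp(-(d1[i, j] - d2[a, b]) ** 2 / sigma**2)
    return K
```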
Posted Content

Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers

TL;DR: This work proposes an end-to-end trainable architecture for deep graph matching that contains unmodified combinatorial solvers and highlights the conceptual advantages of incorporating solvers into deep learning architectures.
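The blackbox-differentiation trick this paper builds on can be sketched briefly: the forward pass calls the solver unmodified, and the backward pass calls the same solver once more on costs perturbed by the incoming gradient, returning a finite difference. The sketch below uses the Hungarian algorithm as a stand-in solver and an assumed interpolation parameter lam; it is a simplification, not the paper's full architecture.

```python
# Sketch of blackbox differentiation of a combinatorial solver.
import numpy as np
from scipy.optimize import linear_sum_assignment


def solve(costs: np.ndarray) -> np.ndarray:
    """Unmodified combinatorial solver returning a 0/1 assignment matrix."""
    rows, cols = linear_sum_assignment(costs)
    y = np.zeros_like(costs)
    y[rows, cols] = 1.0
    return y


def backward(costs: np.ndarray, grad_output: np.ndarray, lam: float = 10.0) -> np.ndarray:
    """Gradient of the loss w.r.t. the costs: a finite difference between the
    solver's output on the original and on gradient-perturbed costs."""
    y = solve(costs)
    y_lam = solve(costs + lam * grad_output)
    return -(y - y_lam) / lam
```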
Posted Content

Deep Graph Matching Consensus

TL;DR: Deep Graph Matching Consensus (DGC) as discussed by the authors is a two-stage neural architecture for learning and refining structural correspondences between graphs, which uses localized node embeddings computed by a graph neural network to obtain an initial ranking of soft correspondences between nodes.
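A bare-bones version of that first stage is sketched below: node embeddings (random placeholders here, standing in for the graph neural network's output) are turned into a row-stochastic soft-correspondence matrix; the consensus/refinement stage is omitted.

```python
# Stage-one sketch: node embeddings -> initial soft correspondences via
# similarity scores and a row-wise softmax.
import numpy as np


def soft_correspondences(emb_a: np.ndarray, emb_b: np.ndarray) -> np.ndarray:
    """Row-stochastic matrix: row i ranks candidate matches in graph B for node i of graph A."""
    scores = emb_a @ emb_b.T
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum(axis=1, keepdims=True)


rng = np.random.default_rng(0)
S = soft_correspondences(rng.normal(size=(5, 16)), rng.normal(size=(7, 16)))
print(S.shape, S.sum(axis=1))  # (5, 7), each row sums to 1
```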
Book ChapterDOI

Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers.

TL;DR: In this paper, an end-to-end trainable architecture for deep graph matching that contains unmodified combinatorial solvers is proposed, which achieves state-of-the-art results on keypoint correspondence benchmarks.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won first place in the ILSVRC 2015 classification task.
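The core idea, letting a stack of layers learn a residual that is added back to its input through an identity shortcut, fits in a few lines. The framework-free sketch below uses random placeholder weights and assumed dimensions; it illustrates the shortcut connection, not the original convolutional architecture.

```python
# Minimal residual-connection sketch (weights are random placeholders).
import numpy as np


def relu(x):
    return np.maximum(0.0, x)


def residual_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """y = ReLU(x + F(x)): the stacked layers only have to learn the residual F."""
    return relu(x + relu(x @ w1) @ w2)


rng = np.random.default_rng(0)
x = rng.normal(size=(4, 64))
y = residual_block(x, 0.1 * rng.normal(size=(64, 64)), 0.1 * rng.normal(size=(64, 64)))
print(y.shape)  # (4, 64): the identity shortcut requires matching dimensions
```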
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
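The update rule itself is compact: exponential moving averages of the gradient and its square, bias-corrected, then a per-parameter step. The sketch below uses the commonly cited default hyperparameters and a toy quadratic objective for illustration; it is not a drop-in optimizer.

```python
# Single Adam update step, plus a toy run minimizing theta**2.
import numpy as np


def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad             # first-moment estimate
    v = b2 * v + (1 - b2) * grad**2          # second-moment estimate
    m_hat = m / (1 - b1**t)                  # bias correction
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v


theta, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)  # gradient of theta**2
print(theta)  # approaches 0
```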
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
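The normalization reduces to standardizing each feature over the mini-batch and re-scaling with learned parameters; the sketch below shows only the training-time forward pass, with running statistics and the backward pass omitted.

```python
# Training-time batch-normalization forward pass; gamma and beta are the
# learned scale and shift.
import numpy as np


def batch_norm(x: np.ndarray, gamma: np.ndarray, beta: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    mean = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                       # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)   # normalize each feature
    return gamma * x_hat + beta               # restore representational power
```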
Proceedings ArticleDOI

Object recognition from local scale-invariant features

TL;DR: Experimental results show that robust object recognition can be achieved in cluttered, partially occluded images with a computation time of under 2 seconds.
Proceedings Article

Rectified Linear Units Improve Restricted Boltzmann Machines

TL;DR: Restricted Boltzmann machines were originally developed with binary stochastic hidden units; replacing these with noisy rectified linear units learns features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.