Large-Scale Image Retrieval with Attentive Deep Local Features
Hyeonwoo Noh, Andre Araujo, Jack Sim, Tobias Weyand, Bohyung Han
pp. 3476–3485
TLDR
An attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature), based on convolutional neural networks trained only with image-level annotations on a landmark image dataset.
Abstract:
We propose an attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature). The new feature is based on convolutional neural networks, which are trained only with image-level annotations on a landmark image dataset. To identify semantically useful local features for image retrieval, we also propose an attention mechanism for keypoint selection, which shares most network layers with the descriptor. This framework can be used for image retrieval as a drop-in replacement for other keypoint detectors and descriptors, enabling more accurate feature matching and geometric verification. Our system produces reliable confidence scores to reject false positives; in particular, it is robust against queries that have no correct match in the database. To evaluate the proposed descriptor, we introduce a new large-scale dataset, referred to as the Google-Landmarks dataset, which poses challenges in both database and query images, such as background clutter, partial occlusion, multiple landmarks, and objects at varying scales. We show that DELF outperforms the state-of-the-art global and local descriptors in the large-scale setting by significant margins.
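The attention mechanism described above scores each location of a dense CNN feature map and keeps only the highest-scoring local descriptors. The sketch below illustrates this selection step with a toy linear attention scorer and random features; the feature-map shape, the scoring function, and the top-K cutoff are illustrative assumptions, not the paper's exact architecture (DELF learns the attention scorer jointly with the descriptor).

```python
import numpy as np

def select_attentive_features(features, attn_weights, top_k=3):
    """Select the top_k local descriptors ranked by an attention score.

    features:     (H, W, D) dense feature map (stand-in for a CNN output).
    attn_weights: (D,) parameters of a toy linear attention scorer.
    Returns the (row, col) keypoint positions and their descriptors.
    """
    h, w, d = features.shape
    flat = features.reshape(-1, d)                # one descriptor per location
    scores = flat @ attn_weights                  # attention score per location
    order = np.argsort(scores)[::-1][:top_k]     # keep highest-scoring locations
    rows, cols = np.unravel_index(order, (h, w))  # recover spatial positions
    return list(zip(rows, cols)), flat[order]

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 8, 16))
w = rng.normal(size=16)
keypoints, descriptors = select_attentive_features(feats, w, top_k=3)
print(len(keypoints), descriptors.shape)  # 3 selected keypoints, 16-D descriptors
```

Because selection happens after dense feature extraction, the same forward pass yields both the descriptors and the attention scores, which is what lets the keypoint selector share most network layers with the descriptor.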
Citations
Posted Content
Class-Balanced Active Learning for Image Classification
TL;DR: A general optimization framework that explicitly takes class balance into account; it can be combined with most existing active learning algorithms to boost the performance of both informativeness-based and representativeness-based methods.
Journal ArticleDOI
Relieving Triplet Ambiguity: Consensus Network for Language-Guided Image Retrieval
TL;DR: A consensus network that self-adaptively learns from noisy triplets to minimize the negative effects of triplet ambiguity in language-guided image retrieval, achieving competitive performance on three datasets.
Journal ArticleDOI
Feature matching for 3D AR: Review from handcrafted methods to deep learning
Houssam Halmaoui, Abdelkrim Haqiq, et al.
TL;DR: A review of image matching approaches for 3D AR, ranging from handcrafted feature algorithms and machine learning methods to recent deep learning approaches based on various CNN architectures and modern end-to-end models.
Journal ArticleDOI
Relative Pose Estimation between Image Object and ShapeNet CAD Model for Automatic 4-DoF Annotation
TL;DR: A pose estimation pipeline consisting of several learned-network stages followed by image similarity measurements, which estimates and represents the pose of an object in an RGB image using only a 4-DoF annotation to a matching CAD model.
Proceedings ArticleDOI
A Novel Deep Learning Framework For Image KeyPoint Description
TL;DR: A pre-trained network is employed for keypoint detection and descriptor extraction, and rotated versions of the input image are aggregated to produce a richer descriptor. The proposed method outperforms the original CNN framework in the number of accurate correspondences, the proportion of correct correspondences, and the matching error.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: A residual learning framework that eases the training of networks substantially deeper than those used previously; it won 1st place in the ILSVRC 2015 classification task.
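The core idea of residual learning is the identity shortcut: instead of learning a mapping directly, each block learns a residual F(x) and outputs x + F(x). A minimal numpy sketch of one such block follows; the two-layer transform, weight shapes, and activations are illustrative assumptions, not the paper's full architecture (which also uses convolutions and batch normalization).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: y = relu(x + F(x)), with F a two-layer transform.
    The weight shapes keep the feature dimension constant so the identity
    shortcut can be added elementwise."""
    f = relu(x @ w1) @ w2  # the learned residual F(x)
    return relu(x + f)     # identity shortcut plus residual

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 32))          # batch of 4 feature vectors
w1 = rng.normal(size=(32, 32)) * 0.1
w2 = rng.normal(size=(32, 32)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # output keeps the (4, 32) input shape
```

Because the shortcut passes x through unchanged, gradients can flow directly across many stacked blocks, which is what makes very deep networks trainable.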
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal ArticleDOI
Distinctive Image Features from Scale-Invariant Keypoints
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
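The reliable matching step this reference describes is conventionally done with nearest-neighbour descriptor search plus Lowe's ratio test, which discards ambiguous matches. Below is a brute-force sketch over random descriptors; the descriptor dimension, the 0.8 ratio, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour matching with Lowe's ratio test:
    a match (i, j) is kept only when the closest descriptor in desc_b
    is sufficiently better than the second closest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distances to all candidates
        j1, j2 = np.argsort(dists)[:2]              # two nearest neighbours
        if dists[j1] < ratio * dists[j2]:           # discard ambiguous matches
            matches.append((i, int(j1)))
    return matches

rng = np.random.default_rng(2)
desc_b = rng.normal(size=(50, 128))                       # database descriptors
desc_a = desc_b[:5] + 0.01 * rng.normal(size=(5, 128))    # noisy copies: true matches
print(ratio_test_matches(desc_a, desc_b))
```

With the noisy copies above, each query's nearest neighbour is its source descriptor and the ratio test passes easily, while a query with no true counterpart would see two comparably distant neighbours and be rejected.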
Journal ArticleDOI
ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei, et al.
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as mentioned in this paper is a benchmark in object category classification and detection on hundreds of object categories and millions of images, which has been run annually from 2010 to present, attracting participation from more than fifty institutions.
Journal ArticleDOI
Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form, providing the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
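The RANSAC paradigm this reference introduces is what backs the geometric verification step in retrieval pipelines: repeatedly fit a model to a random minimal sample and keep the hypothesis with the most inliers. A minimal sketch for 2D line fitting follows; the line model, iteration count, and inlier threshold are illustrative assumptions (image matching would instead fit a homography or affine transform to correspondences).

```python
import numpy as np

def ransac_line(points, iters=200, thresh=0.1, seed=0):
    """Minimal RANSAC for y = a*x + b: fit a line to a random minimal
    sample (2 points) each iteration and keep the model whose inlier
    count (residual below thresh) is largest."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(iters):
        (x1, y1), (x2, y2) = points[rng.choice(len(points), 2, replace=False)]
        if x1 == x2:
            continue                      # degenerate sample, resample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((residuals < thresh).sum())
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 40)
pts = np.column_stack([x, 2.0 * x + 1.0])   # points on y = 2x + 1
pts[::5, 1] += rng.uniform(2.0, 5.0, size=8)  # inject 8 gross outliers
(a, b), n = ransac_line(pts)
print(a, b, n)  # recovers a ≈ 2, b ≈ 1 despite the outliers
```

A least-squares fit over all 40 points would be pulled away from the true line by the outliers, whereas the consensus criterion ignores them entirely, which is why the same idea rejects spurious feature correspondences in image matching.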