Open Access Proceedings Article DOI

Large-Scale Image Retrieval with Attentive Deep Local Features

TL;DR
An attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature), based on convolutional neural networks that are trained only with image-level annotations on a landmark image dataset.
Abstract
We propose an attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature). The new feature is based on convolutional neural networks, which are trained only with image-level annotations on a landmark image dataset. To identify semantically useful local features for image retrieval, we also propose an attention mechanism for keypoint selection, which shares most network layers with the descriptor. This framework can be used for image retrieval as a drop-in replacement for other keypoint detectors and descriptors, enabling more accurate feature matching and geometric verification. Our system produces reliable confidence scores to reject false positives; in particular, it is robust against queries that have no correct match in the database. To evaluate the proposed descriptor, we introduce a new large-scale dataset, referred to as the Google-Landmarks dataset, which involves challenges in both database and query images such as background clutter, partial occlusion, multiple landmarks, and objects at variable scales. We show that DELF outperforms the state-of-the-art global and local descriptors in the large-scale setting by significant margins.
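The abstract describes the core mechanism: dense local descriptors from a CNN, plus an attention head that shares most layers with the descriptor and scores each location so only the most relevant features are kept. Below is a minimal sketch of that idea in PyTorch; the backbone choice, layer sizes, attention activation, and top-k keypoint selection are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of attention-weighted local features in the spirit of DELF.
# Hypothetical module and layer choices; the backbone, head sizes, and score
# activation are illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torchvision


class AttentiveLocalFeatures(nn.Module):
    def __init__(self, dim=1024, num_keypoints=1000):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        # Dense feature map from an intermediate layer (shared with the attention head).
        self.backbone = nn.Sequential(*list(resnet.children())[:-3])  # outputs C=1024 channels
        # Small attention head producing one non-negative relevance score per location.
        self.attention = nn.Sequential(
            nn.Conv2d(dim, 512, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 1, kernel_size=1), nn.Softplus(),
        )
        self.num_keypoints = num_keypoints

    def forward(self, image):
        feats = self.backbone(image)               # (B, C, H, W) dense local descriptors
        scores = self.attention(feats)             # (B, 1, H, W) relevance scores
        b, c, h, w = feats.shape
        flat_feats = feats.flatten(2).transpose(1, 2)    # (B, H*W, C)
        flat_scores = scores.flatten(2).transpose(1, 2)  # (B, H*W, 1)
        # Keypoint selection: keep the k highest-scoring locations.
        k = min(self.num_keypoints, h * w)
        top_scores, idx = flat_scores.squeeze(-1).topk(k, dim=1)
        selected = torch.gather(flat_feats, 1, idx.unsqueeze(-1).expand(-1, -1, c))
        # For training with image-level labels only, the score-weighted average of
        # the dense features can feed a classification head.
        pooled = (flat_feats * flat_scores).sum(dim=1) / flat_scores.sum(dim=1)
        return selected, top_scores, pooled
```

At retrieval time, the selected local features can be matched against database features and verified geometrically (e.g., with RANSAC), as the abstract indicates.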


Citations
Posted Content

Deep Stochastic Attraction and Repulsion Embedding for Image Based Localization.

TL;DR: This paper represents a place as a set of exemplar images depicting the same landmarks, instead of pre-defined geographic locations obtained by partitioning the world, and proposes a new Stochastic Attraction and Repulsion Embedding loss function to facilitate competitive learning.
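The entry above mentions an attraction and repulsion embedding loss. The sketch below shows only a generic contrastive-style attraction/repulsion term; the paper's exact formulation and its stochastic sampling of exemplar images are not reproduced here.

```python
# Generic attraction/repulsion embedding loss (a contrastive-style sketch;
# an illustration of the attract/repel idea, not the paper's exact loss).
import torch
import torch.nn.functional as F


def attraction_repulsion_loss(anchor, positive, negative, margin=0.5):
    """Pull embeddings of the same place together, push different places apart."""
    anchor, positive, negative = (F.normalize(x, dim=-1) for x in (anchor, positive, negative))
    attract = (anchor - positive).pow(2).sum(dim=-1)                  # small for matching pairs
    repel = F.relu(margin - (anchor - negative).pow(2).sum(dim=-1))   # zero once negatives are far enough
    return (attract + repel).mean()
```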
Posted Content

Large-scale, real-time visual-inertial localization revisited

TL;DR: In this article, the authors propose an approach that combines server-side localization with real-time visual-inertial camera pose tracking, achieving low-latency localization queries with fusion running in real time on mobile platforms.
Posted Content

Team JL Solution to Google Landmark Recognition 2019.

TL;DR: The full pipeline, after ensembling the models and applying several steps of re-ranking strategies, scores 0.37606 GAP on the private leaderboard, winning 1st place in the competition.
Proceedings Article DOI

DAME WEB: DynAmic MEan with Whitening Ensemble Binarization for Landmark Retrieval without Human Annotation

TL;DR: This work proposes a simple yet effective module called DynAmic MEan (DAME), which allows a neural network to dynamically learn to aggregate feature maps at the pooling stage based on the input image, in order to generate global descriptors suitable for landmark retrieval.
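As a rough illustration of input-dependent aggregation at the pooling stage, the sketch below lets a small head predict a generalized-mean pooling exponent per image. This is an assumed instantiation for illustration, not the DAME module itself.

```python
# Sketch of input-dependent pooling for global descriptors. The exact DAME
# formulation is not reproduced; a small head predicts a per-image GeM exponent.
import torch
import torch.nn as nn


class DynamicMeanPooling(nn.Module):
    def __init__(self, channels, p_min=1.0, p_max=6.0):
        super().__init__()
        # Predicts one pooling exponent per image from the feature map.
        self.p_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 1), nn.Sigmoid(),
        )
        self.p_min, self.p_max = p_min, p_max

    def forward(self, feats, eps=1e-6):
        # Map the head output into [p_min, p_max], then apply generalized-mean pooling.
        p = self.p_min + (self.p_max - self.p_min) * self.p_head(feats)   # (B, 1)
        p4 = p.unsqueeze(-1).unsqueeze(-1)                                # (B, 1, 1, 1)
        pooled = feats.clamp(min=eps).pow(p4).mean(dim=(-2, -1)).pow(1.0 / p)
        return nn.functional.normalize(pooled, dim=-1)                    # L2-normalized global descriptor
```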
Journal Article DOI

Graph-based Particular Object Discovery

TL;DR: This work proposes a novel salient region detection method that captures, in an unsupervised manner, patterns that are both discriminative and common in the dataset, and improves particular object retrieval on challenging datasets containing small objects.
References
Proceedings Article DOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
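The residual idea can be summarized in a few lines: each block learns a residual function that is added back to its input through an identity shortcut, which keeps very deep stacks easy to optimize. A minimal basic block is sketched below; bottleneck variants and downsampling shortcuts are omitted.

```python
# Minimal residual block: the layers learn a residual F(x) that is added back
# to the input through an identity shortcut, so the output is F(x) + x.
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)   # identity shortcut around the learned residual
```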
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal Article DOI

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
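A common way to use such features in practice is to extract keypoints and descriptors from two images and keep matches that pass Lowe's ratio test. The sketch below uses OpenCV's SIFT implementation; the file names and ratio threshold are illustrative.

```python
# Sketch of SIFT keypoint extraction and matching with Lowe's ratio test,
# using OpenCV (assumes an OpenCV build that includes SIFT).
import cv2

img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)      # hypothetical file names
img2 = cv2.imread("database.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only matches that pass the ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```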
Journal Article DOI

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as mentioned in this paper is a benchmark in object category classification and detection on hundreds of object categories and millions of images, which has been run annually from 2010 to present, attracting participation from more than fifty institutions.
Journal Article DOI

Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography

TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; together they provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
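The paradigm itself is simple to sketch: repeatedly fit a model to a minimal random sample of the data and keep the hypothesis with the most inliers. The example below applies it to 2D line fitting; the threshold and iteration count are illustrative choices.

```python
# Generic RANSAC loop: fit a model to a minimal random sample, count inliers,
# and keep the best hypothesis. Here the model is a 2D line y = a*x + b.
import numpy as np


def ransac_line(points, iters=200, threshold=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue                                # skip degenerate (vertical) samples
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

In image retrieval, the same loop is typically run over putative feature correspondences to estimate a geometric transform and discard outlier matches.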