Open Access Proceedings ArticleDOI

Large-Scale Image Retrieval with Attentive Deep Local Features

TL;DR
An attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature), based on convolutional neural networks, which are trained only with image-level annotations on a landmark image dataset.
Abstract
We propose an attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature). The new feature is based on convolutional neural networks, which are trained only with image-level annotations on a landmark image dataset. To identify semantically useful local features for image retrieval, we also propose an attention mechanism for keypoint selection, which shares most network layers with the descriptor. This framework can be used for image retrieval as a drop-in replacement for other keypoint detectors and descriptors, enabling more accurate feature matching and geometric verification. Our system produces reliable confidence scores to reject false positives; in particular, it is robust against queries that have no correct match in the database. To evaluate the proposed descriptor, we introduce a new large-scale dataset, referred to as the Google-Landmarks dataset, which involves challenges in both database and query images such as background clutter, partial occlusion, multiple landmarks, and objects at variable scales. We show that DELF outperforms the state-of-the-art global and local descriptors in the large-scale setting by significant margins.
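As a rough illustration of the attentive keypoint-selection idea described above, the sketch below scores a dense CNN feature map with a small attention head and keeps the top-scoring positions as local descriptors with their scores as confidence values. This is a minimal sketch, not the authors' implementation; the layer sizes, Softplus scoring, and the `top_k` parameter are assumptions.

```python
import torch
import torch.nn as nn

class AttentiveLocalFeatures(nn.Module):
    """Minimal sketch of attention-weighted local feature selection.

    A dense feature map (e.g. from a CNN backbone) is scored by a small
    attention head; the top-k highest-scoring spatial positions are kept
    as local descriptors, with their attention scores as confidences.
    Sizes and top_k are illustrative, not values from the paper.
    """

    def __init__(self, feat_dim=1024, top_k=100):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Conv2d(feat_dim, 512, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 1, kernel_size=1),
            nn.Softplus(),          # non-negative attention scores
        )
        self.top_k = top_k

    def forward(self, feature_map):
        # feature_map: (B, C, H, W) dense features from the backbone
        scores = self.attention(feature_map)                  # (B, 1, H, W)
        b, c, h, w = feature_map.shape
        flat_feats = feature_map.flatten(2).transpose(1, 2)   # (B, H*W, C)
        flat_scores = scores.flatten(2).transpose(1, 2)       # (B, H*W, 1)
        k = min(self.top_k, h * w)
        top_scores, idx = flat_scores.squeeze(-1).topk(k, dim=1)
        # Gather descriptors at the selected spatial positions.
        descriptors = torch.gather(
            flat_feats, 1, idx.unsqueeze(-1).expand(-1, -1, c))
        return descriptors, top_scores, idx
```

One way such a module can be trained from image-level labels only, consistent with the abstract, is to feed the attention-weighted sum of the feature map into an image classification loss so the attention head learns which locations are discriminative.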


Citations
Journal ArticleDOI

A Light Touch Approach to Teaching Transformers Multi-view Geometry

TL;DR: In this article, a light-touch approach is proposed that uses epipolar lines to guide the Transformer's cross-attention maps, steering visual Transformers toward multi-view geometry while allowing them to break free of it when needed.
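To make the epipolar-guidance idea concrete, here is a small NumPy sketch that turns epipolar lines from a fundamental matrix into a soft bias over cross-attention pairs between two views. The Gaussian falloff and all names are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def epipolar_attention_bias(F, pts_src, pts_dst, sigma=8.0):
    """Soft cross-attention bias from epipolar geometry (illustrative only).

    F        : (3, 3) fundamental matrix mapping source points to epipolar
               lines in the destination image (l = F @ x).
    pts_src  : (N, 2) pixel coordinates of source-view tokens/keypoints.
    pts_dst  : (M, 2) pixel coordinates of destination-view tokens.
    Returns an (N, M) matrix of weights that are large when a destination
    point lies near the epipolar line of a source point.
    """
    src_h = np.hstack([pts_src, np.ones((len(pts_src), 1))])   # (N, 3)
    dst_h = np.hstack([pts_dst, np.ones((len(pts_dst), 1))])   # (M, 3)
    lines = src_h @ F.T                                        # (N, 3) lines (a, b, c)
    # Point-to-line distance |ax + by + c| / sqrt(a^2 + b^2) for every pair.
    num = np.abs(lines @ dst_h.T)                              # (N, M)
    denom = np.linalg.norm(lines[:, :2], axis=1, keepdims=True) + 1e-8
    dist = num / denom
    # Gaussian falloff; could be added to attention logits or used to
    # reweight cross-attention maps.
    return np.exp(-0.5 * (dist / sigma) ** 2)
```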
Proceedings ArticleDOI

Looking Beyond Corners: Contrastive Learning of Visual Representations for Keypoint Detection and Description Extraction

TL;DR: CorrNet, as discussed by the authors, learns to detect repeatable keypoints and extract discriminative descriptions via unsupervised contrastive learning under spatial constraints, achieving competitive results under viewpoint changes and state-of-the-art performance under illumination changes.
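For readers unfamiliar with the contrastive objective, the following is a generic InfoNCE-style loss over descriptors of corresponding keypoints seen in two augmented views. It is a standard formulation sketched under assumed inputs, not CorrNet's exact loss or its spatial constraints.

```python
import torch
import torch.nn.functional as F

def info_nce_descriptor_loss(desc_a, desc_b, temperature=0.07):
    """Generic InfoNCE loss for descriptors of corresponding keypoints.

    desc_a, desc_b: (N, D) descriptors of the same N keypoints in two
    augmented views; row i of desc_a matches row i of desc_b, and all
    non-matching rows serve as negatives.
    """
    a = F.normalize(desc_a, dim=1)
    b = F.normalize(desc_b, dim=1)
    logits = a @ b.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric cross-entropy: each descriptor should retrieve its match.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```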
Journal ArticleDOI

Accurate visual localization with semantic masking and attention

TL;DR: Zhang et al., as discussed by the authors, propose a novel relative pose estimation pipeline that addresses unreliable regions containing objects such as the sky, persons, or moving cars, which introduce noise and interfere with localization.
Proceedings ArticleDOI

Dual Task Learning by Leveraging Both Dense Correspondence and Mis-Correspondence for Robust Change Detection With Imperfect Matches

TL;DR: This work proposes SimSaC, a system that concurrently performs scene flow estimation and change detection, enabling it to detect changes even with imperfect matches, and designs an evaluation protocol that reflects performance in real-world settings.
Proceedings ArticleDOI

Descriptor-Driven Keypoint Detection

TL;DR: A methodology for detecting keypoints so that their usability in image matching tasks is potentially maximized; a novel concept of semi-dense feature representation of images is also preliminarily discussed and illustrated.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won first place in the ILSVRC 2015 classification task.
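The core idea is that each block learns a residual F(x) added to an identity shortcut, y = F(x) + x, rather than a full mapping. Below is a minimal sketch of such a basic block, omitting the strided and projection-shortcut variants used in the full architecture.

```python
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Minimal residual block: output = relu(F(x) + x).

    F(x) is two 3x3 convolutions with batch normalization; the identity
    shortcut lets the block learn only a residual correction to its input.
    Simplified sketch, not the exact block configuration from the paper.
    """

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(residual + x)   # identity shortcut
```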
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Journal ArticleDOI

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
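SIFT-style local features are exactly the kind of handcrafted keypoints that DELF is positioned to replace in a retrieval pipeline. As a point of reference, the snippet below extracts and matches SIFT features with OpenCV (SIFT is in the main module from OpenCV 4.4 onward); the image paths are placeholders.

```python
import cv2

# Extract and match SIFT features between two images (illustrative paths).
img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("database.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)
kp2, desc2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on 2-nearest-neighbour matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(desc1, desc2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative matches after ratio test")
```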
Journal ArticleDOI

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) as mentioned in this paper is a benchmark in object category classification and detection on hundreds of object categories and millions of images, which has been run annually from 2010 to present, attracting participation from more than fifty institutions.
Journal ArticleDOI

Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography

TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; these results provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
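RANSAC is the standard tool behind the geometric verification step mentioned in the abstract: a model is fit to random minimal subsets of matches and scored by its inliers. The sketch below fits a homography to putative keypoint matches with OpenCV's RANSAC and returns the inlier count, which can serve as a robust re-ranking score; the helper name and threshold are assumptions, and `kp1`, `kp2`, `good_matches` follow the hypothetical SIFT matching sketch above.

```python
import numpy as np
import cv2

def verify_matches(kp1, kp2, good_matches, ransac_thresh=4.0):
    """Geometric verification of putative matches with RANSAC.

    Fits a homography to the matched keypoint coordinates and counts the
    inliers that agree with it; the inlier count can be used as an
    image-similarity score for retrieval re-ranking.
    """
    if len(good_matches) < 4:          # homography needs at least 4 pairs
        return 0
    src = np.float32([kp1[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    return 0 if inlier_mask is None else int(inlier_mask.sum())
```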