Journal Article

Iterative Quantization: A Procrustean Approach to Learning Binary Codes for Large-Scale Image Retrieval

TL;DR
This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections by proposing a simple and efficient alternating minimization algorithm, dubbed iterative quantization (ITQ), and demonstrating an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
Abstract
This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
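For a concrete picture of the alternating minimization described above, the sketch below is a minimal NumPy rendering of the ITQ loop (a hypothetical helper, not the authors' released code). It assumes `V` is the zero-centered data already projected onto the top c PCA (or CCA) directions, and alternates between fixing the rotation R to update the codes B = sgn(VR) and fixing B to update R by solving an orthogonal Procrustes problem via an SVD.

```python
import numpy as np

def itq(V, n_iter=50, seed=0):
    """Minimal ITQ sketch: V is (n_samples, n_bits), zero-centered and
    PCA-projected. Returns binary codes in {0, 1} and the rotation R."""
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    # Initialize R with a random orthogonal matrix (QR of a Gaussian matrix).
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))
    for _ in range(n_iter):
        # Step 1: fix R, update the codes B = sgn(VR).
        B = np.where(V @ R >= 0, 1.0, -1.0)
        # Step 2: fix B, update R via orthogonal Procrustes:
        # minimize ||B - VR||_F  =>  R = U W^T, where V^T B = U S W^T (SVD).
        U, _, Wt = np.linalg.svd(V.T @ B)
        R = U @ Wt
    B = np.where(V @ R >= 0, 1, 0)  # map {-1, +1} codes to {0, 1} bits
    return B.astype(np.uint8), R
```

The resulting bit vectors can be compared with XOR and popcount for fast Hamming-distance search; the choice of embedding (PCA vs. CCA) and the number of iterations are the main knobs.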


Citations
Proceedings Article

Weighted Gaussian Loss based Hamming Hashing

TL;DR: Wang et al. propose Weighted Gaussian Loss based Hamming Hashing (WGLHH), which introduces a weighted Gaussian loss to optimize the hashing model; the loss consists of three parts: a novel Gaussian-distribution based loss, a badly-trained-pair attention mechanism, and a quantization loss.
Journal Article

Semantic Cluster Unary Loss for Efficient Deep Hashing

TL;DR: A novel Semantic Cluster Deep Hashing (SCDH) algorithm is proposed by introducing a modified Unary Upper Bound loss, called the Semantic Cluster Unary Loss, which encourages hash codes in the same cluster to share similar semantic information.
Posted Content

Discrete Multi-modal Hashing with Canonical Views for Robust Mobile Landmark Search

TL;DR: Wang et al. propose a novel hashing scheme, named canonical view based discrete multi-modal hashing (CV-DMH), to handle the high bandwidth consumption of query transmission and the huge visual variations of query images sent from mobile devices.
Journal Article

Deep Unsupervised Self-Evolutionary Hashing for Image Retrieval

TL;DR: A simple but effective Deep Unsupervised Self-evolutionary Hashing (DUSH) algorithm is proposed, which uses a curriculum learning strategy to iteratively select pseudo pairs from easy to hard in a low-dimensional Hamming space; results show that the method significantly outperforms state-of-the-art methods.
Proceedings Article

Adaptive Labeling for Deep Learning to Hash

TL;DR: AdaLabelHash, a binary hash function learning approach based on deep neural networks, is introduced; it jointly learns label representations and infers compact binary codes from data.
References
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity, and much more accurate, than existing image datasets.
Journal Article

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
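To illustrate the matching pipeline this entry describes, here is a minimal sketch using OpenCV's SIFT implementation (assuming opencv-python ≥ 4.4, where SIFT ships in the main module; the image paths are placeholders). It extracts descriptors from two views and keeps correspondences that pass Lowe's ratio test.

```python
import cv2

# Load two views of the same scene as grayscale images (placeholder paths).
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```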
Dissertation

Learning Multiple Layers of Features from Tiny Images

TL;DR: The authors describe how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.
Journal Article

LIBLINEAR: A Library for Large Linear Classification

TL;DR: LIBLINEAR is an open source library for large-scale linear classification that supports logistic regression and linear support vector machines and provides easy-to-use command-line tools and library calls for users and developers.
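As a brief usage illustration, the sketch below trains a linear SVM through scikit-learn's LinearSVC wrapper, which uses LIBLINEAR internally, rather than the library's native command-line tools; the synthetic data is a stand-in for high-dimensional image features or binary codes.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Synthetic stand-in for high-dimensional image descriptors.
X, y = make_classification(n_samples=5000, n_features=256,
                           n_informative=64, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LinearSVC(C=1.0)  # linear SVM solved by LIBLINEAR under the hood
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```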
Journal Article

Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope

TL;DR: The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category.