Journal ArticleDOI
Iterative Quantization: A Procrustean Approach to Learning Binary Codes for Large-Scale Image Retrieval
TL;DR: This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections by proposing a simple and efficient alternating minimization algorithm, dubbed iterative quantization (ITQ), and demonstrating an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
Abstract
This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
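The alternating minimization described in the abstract can be sketched in a few lines of NumPy: fixing the rotation, the optimal binary codes are the elementwise signs of the rotated data; fixing the codes, the optimal rotation is the solution of an orthogonal Procrustes problem, obtained from an SVD. This is a minimal illustrative sketch (the function name `itq_rotation` and its defaults are the editor's, not the authors' released code), assuming the input is already zero-centered and PCA-projected:

```python
import numpy as np

def itq_rotation(V, n_iter=50, seed=0):
    """Alternating minimization for ITQ (illustrative sketch).

    V : (n, c) array of zero-centered, PCA-projected data.
    Returns binary codes B in {0, 1}^(n x c) and the learned rotation R.
    """
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    # Initialize R with a random orthogonal matrix (QR of a Gaussian matrix).
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))
    for _ in range(n_iter):
        # Step 1: fix R, map each rotated point to the nearest vertex of the
        # zero-centered binary hypercube, i.e. take elementwise signs.
        B = np.where(V @ R >= 0, 1.0, -1.0)
        # Step 2: fix B, update R by solving the orthogonal Procrustes
        # problem min_R ||B - V R||_F via the SVD of V^T B.
        U, _, Wt = np.linalg.svd(V.T @ B)
        R = U @ Wt
    # Final codes, re-expressed in {0, 1} for compact storage.
    B = np.where(V @ R >= 0, 1, 0).astype(np.uint8)
    return B, R
```

Each iteration can only decrease the quantization error, so the procedure converges quickly in practice; the paper reports that a few dozen iterations suffice.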
Citations
Posted Content
Compressing Deep Convolutional Networks using Vector Quantization
TL;DR: This paper achieves 16-24 times compression of a state-of-the-art CNN with only 1% loss of classification accuracy, and finds that, for compressing the most storage-demanding densely connected layers, vector quantization methods have a clear gain over existing matrix factorization methods.
Proceedings ArticleDOI
Deep Hashing Network for Unsupervised Domain Adaptation
TL;DR: In this article, the authors proposed a novel deep learning framework that can exploit labeled source data and unlabeled target data to learn informative hash codes, to accurately classify unseen target data.
Proceedings ArticleDOI
Supervised Discrete Hashing
TL;DR: This work proposes a new supervised hashing framework in which the learning objective is to generate the optimal binary hash codes for linear classification, and introduces an auxiliary variable to reformulate the objective so that it can be solved efficiently by a regularization algorithm.
Proceedings ArticleDOI
Discriminative Learning of Deep Convolutional Feature Point Descriptors
Edgar Simo-Serra, Eduard Trulls, Luis Ferraz, Iasonas Kokkinos, Pascal Fua, Francesc Moreno-Noguer
TL;DR: This paper uses Convolutional Neural Networks to learn discriminative patch representations, in particular training a Siamese network on pairs of (non-)corresponding patches to produce 128-D descriptors whose Euclidean distances reflect patch similarity and that can be used as a drop-in replacement for SIFT in any task.
Journal ArticleDOI
A Survey on Learning to Hash
TL;DR: In this paper, a comprehensive survey of learning-to-hash algorithms is presented, categorizing them by how they preserve similarity into pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, and quantization, and discussing the relations among these categories.
References
Proceedings Article
Cluster Kernels for Semi-Supervised Learning
TL;DR: A framework is proposed to incorporate unlabeled data into kernel classifiers by modifying the eigenspectrum of the kernel matrix, based on the idea that two points in the same cluster are more likely to have the same label.
Book ChapterDOI
Efficient object category recognition using classemes
TL;DR: A new image descriptor is introduced that allows the construction of compact classifiers with good accuracy on object category recognition, and that allows object-category queries to be made against image databases using efficient classifiers such as linear support vector machines.
Proceedings ArticleDOI
Data fusion through cross-modality metric learning using similarity-sensitive hashing
TL;DR: A framework for supervised similarity learning based on embedding the input data from two arbitrary spaces into the Hamming space is proposed and the utility and efficiency of such a generic approach is demonstrated on several challenging applications including cross-representation shape retrieval and alignment of multi-modal medical images.
Proceedings Article
Sequential Projection Learning for Hashing with Compact Codes
TL;DR: This paper proposes a novel data-dependent projection learning method such that each hash function is designed to correct the errors made by the previous one sequentially, and shows significant performance gains over the state-of-the-art methods on two large datasets containing up to 1 million points.
Journal ArticleDOI
Locality sensitive hashing: A comparison of hash function types and querying mechanisms
TL;DR: This paper compares several families of space hashing functions in a real setup and reveals that an unstructured quantizer significantly improves the accuracy of LSH, as it closely fits the data in the feature space.