Topic: Feature hashing

About: Feature hashing is a research topic. Over its lifetime, 993 publications have been published within this topic, receiving 51,462 citations.
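
Feature hashing (the "hashing trick") maps sparse, high-dimensional features such as word counts into a fixed-length vector by hashing each feature's name to a bucket index, so no vocabulary dictionary needs to be stored. A minimal Python sketch of the signed variant follows; the function name, hash choice and bucket count are illustrative assumptions, not taken from any of the papers below:

```python
import hashlib

def hashed_features(tokens, n_buckets=1024):
    """Map string features into a fixed-length vector by hashing their
    names to bucket indices (no vocabulary is stored). A second bit of
    the hash supplies a sign so that collisions tend to cancel and the
    inner product stays unbiased in expectation."""
    vec = [0.0] * n_buckets
    for tok in tokens:
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        idx = h % n_buckets                          # bucket for this feature
        sign = 1.0 if (h >> 20) & 1 == 0 else -1.0   # signed hashing
        vec[idx] += sign
    return vec

# Documents with different vocabularies map to vectors of the same length.
x = hashed_features("the quick brown fox".split())
y = hashed_features("a completely different sentence".split())
print(len(x), len(y))  # 1024 1024
```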


Papers
Journal Article (DOI)
TL;DR: Comparisons of receiver operating characteristic (ROC) curves indicate that the proposed LLE-based image hashing outperforms some notable hashing algorithms in the classification trade-off between robustness and discrimination.

27 citations

Proceedings Article (DOI)
Guoqiang Zhong, Hui Xu, Pan Yang, Sijiang Wang, Junyu Dong
24 Jul 2016
TL;DR: This paper proposes a supervised hashing method based on a well-designed deep convolutional neural network, which learns hash codes and compact representations of the data simultaneously.
Abstract: Hashing-based methods seek compact and efficient binary codes that preserve the similarity between data. In most existing hashing methods, an input (e.g. an image) is first encoded as a vector of hand-crafted visual features, followed by a hash projection and a quantization step to obtain the compact binary vector. Because most hand-crafted features encode only low-level information about the input, they may not preserve the semantic similarities between pairs of inputs. Moreover, the hash-function learning process is independent of the feature representation, so the features may not be optimal for the hash projection. In this paper, we propose a supervised hashing method based on a well-designed deep convolutional neural network, which learns hash codes and compact representations of the data simultaneously. In particular, the proposed model learns binary codes by adding a compact sigmoid layer before the classifier layer. Experiments on several image data sets show that the proposed model outperforms other state-of-the-art hashing approaches.

26 citations
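
The abstract above describes the overall recipe: learn the feature representation and the hash projection jointly, with a compact sigmoid layer just before the classifier whose activations are thresholded into binary codes at retrieval time. Below is a minimal PyTorch sketch of that recipe; the backbone, code length and class count are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class DeepHashNet(nn.Module):
    """Sketch: a small CNN backbone, a compact sigmoid 'hash' layer,
    and a classifier head on top of it for supervised training.
    All sizes are illustrative, not the paper's architecture."""
    def __init__(self, n_bits=48, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.hash_layer = nn.Sequential(nn.Linear(64 * 4 * 4, n_bits), nn.Sigmoid())
        self.classifier = nn.Linear(n_bits, n_classes)

    def forward(self, x):
        h = self.hash_layer(self.backbone(x).flatten(1))  # activations in (0, 1)
        return self.classifier(h), h

model = DeepHashNet()
logits, h = model(torch.randn(2, 3, 32, 32))
codes = (h > 0.5).int()   # threshold the sigmoid layer to get binary codes
```

In a setup like this, training minimizes a classification loss on the logits, and only the thresholded codes are kept for retrieval.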

Journal Article (DOI)
TL;DR: Techniques are developed for tuning an important parameter that relates the sizes of the address region and the cellar in order to optimize the average running times of different implementations of the coalesced hashing method.
Abstract: The coalesced hashing method is one of the faster searching methods known today. This paper is a practical study of coalesced hashing for use by those who intend to implement or further study the algorithm. Techniques are developed for tuning an important parameter that relates the sizes of the address region and the cellar in order to optimize the average running times of different implementations. A value for the parameter is reported that works well in most cases. Detailed graphs explain how the parameter can be tuned further to meet specific needs. The resulting tuned algorithm outperforms several well-known methods including standard coalesced hashing, separate (or direct) chaining, linear probing, and double hashing. A variety of related methods are also analyzed including deletion algorithms, a new and improved insertion strategy called varied-insertion, and applications to external searching on secondary storage devices.

26 citations
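
For context, coalesced hashing resolves collisions by linking keys into chains inside the table itself; reserving a cellar of slots that home addresses never hash into delays the point at which chains coalesce in the address region. The Python sketch below is a bare-bones illustration; the address factor stands in for the tuning parameter the abstract analyzes, and its value, along with the class and method names, is an assumption for illustration:

```python
class CoalescedHashTable:
    """Bare-bones coalesced hashing with a cellar. Keys hash only into
    the first address_factor * m slots (the address region); the rest
    of the table (the cellar) absorbs colliding keys, taken from the
    highest-numbered free slot downward."""

    def __init__(self, m=11, address_factor=0.86):  # address_factor: illustrative value
        self.address_size = max(1, int(address_factor * m))
        self.keys = [None] * m     # table slots
        self.link = [-1] * m       # chain links, -1 marks the end of a chain
        self.free = m - 1          # free slots are searched from the top down

    def _home(self, key):
        return hash(key) % self.address_size

    def insert(self, key):
        i = self._home(key)
        if self.keys[i] is None:               # home slot is empty
            self.keys[i] = key
            return
        while self.keys[i] != key and self.link[i] != -1:
            i = self.link[i]                   # walk the chain
        if self.keys[i] == key:
            return                             # already present
        while self.free >= 0 and self.keys[self.free] is not None:
            self.free -= 1                     # highest free slot (cellar first)
        if self.free < 0:
            raise RuntimeError("table is full")
        self.keys[self.free] = key
        self.link[i] = self.free               # append to the chain

    def contains(self, key):
        i = self._home(key)
        while i != -1:
            if self.keys[i] == key:
                return True
            i = self.link[i]
        return False

t = CoalescedHashTable()
for k in ["alpha", "beta", "gamma", "delta"]:
    t.insert(k)
print(t.contains("gamma"), t.contains("omega"))  # True False
```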

Journal Article (DOI)
TL;DR: This work proposes a novel approach, Semi-Supervised Semantic Factorization Hashing (S3FH), which improves semantic labels and factorizes them into hash codes by optimizing a joint framework consisting of three interacting parts: semantic factorization, multi-graph learning and multi-modal correlation.
Abstract: Cross-modal hashing can effectively solve large-scale cross-modal retrieval by integrating the advantages of traditional cross-modal analysis and hashing techniques. In cross-modal hashing, preserving semantic correlation is important and challenging, yet current hashing methods cannot preserve it well in the hash codes. Supervised hashing requires labeled data, which is difficult to obtain, while unsupervised hashing cannot effectively learn semantic correlation from multi-modal data. To effectively learn semantic correlation and thereby improve hashing performance, we propose a novel approach, Semi-Supervised Semantic Factorization Hashing (S3FH), for large-scale cross-modal retrieval. The main purpose of S3FH is to improve the semantic labels and factorize them into hash codes. It optimizes a joint framework consisting of three interacting parts: semantic factorization, multi-graph learning and multi-modal correlation. An efficient alternating algorithm is then derived for optimizing S3FH. Extensive experiments on two real-world multi-modal datasets demonstrate the effectiveness of S3FH.

26 citations
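
As a rough illustration of the semantic-factorization idea alone (the multi-graph learning and multi-modal correlation parts of S3FH are omitted, and the relaxation, sizes and names below are assumptions rather than the paper's actual objective), one can approximate a label matrix by the product of a real-valued code matrix and a projection, then threshold the codes to binary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, k = 100, 10, 16                          # samples, label classes, code length (illustrative)
L = (rng.random((n, c)) > 0.7).astype(float)   # toy label matrix

# Relaxed factorization L ~= B @ W, solved by alternating least squares
# on a real-valued code matrix B.
B = rng.standard_normal((n, k))
for _ in range(20):
    W = np.linalg.lstsq(B, L, rcond=None)[0]        # fix B, solve for the projection W
    B = np.linalg.lstsq(W.T, L.T, rcond=None)[0].T  # fix W, solve for the codes B
codes = np.where(B >= 0, 1, -1)                # threshold to binary hash codes
print(codes.shape)                             # (100, 16)
```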

Journal Article (DOI)
TL;DR: A semi-supervised deep learning hashing (DLH) method for fast multimedia retrieval that uses both visual and label information to learn an optimal similarity graph that more precisely encodes the relationships among the training data, and then generates hash codes based on that graph.

26 citations
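
The TL;DR leaves the graph-to-codes step implicit. One classical way to turn a similarity graph into bits, shown here purely as a hypothetical illustration in the spirit of spectral hashing (not necessarily what DLH itself does), is to build a visual affinity graph, let label agreement override the affinities wherever labels are available, and threshold Laplacian eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_bits = 60, 32, 8                       # illustrative sizes
X = rng.standard_normal((n, d))                # visual features
y = np.concatenate([rng.integers(0, 3, 20),    # 20 labeled samples
                    -np.ones(40, dtype=int)])  # -1 marks unlabeled samples

# Visual affinity (Gaussian kernel on squared distances).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / D2.mean())

# Semi-supervised part: where both samples are labeled, label agreement
# overrides the visual affinity.
labeled = np.where(y >= 0)[0]
for i in labeled:
    for j in labeled:
        W[i, j] = 1.0 if y[i] == y[j] else 0.0

Lap = np.diag(W.sum(axis=1)) - W               # graph Laplacian
_, vecs = np.linalg.eigh(Lap)
codes = (vecs[:, 1:n_bits + 1] > 0).astype(int)  # skip the near-constant eigenvector
print(codes.shape)                             # (60, 8)
```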


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Convolutional neural network: 74.7K papers, 2M citations, 84% related
Feature (computer vision): 128.2K papers, 1.7M citations, 84% related
Deep learning: 79.8K papers, 2.1M citations, 83% related
Support vector machine: 73.6K papers, 1.7M citations, 83% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    33
2022    89
2021    11
2020    16
2019    16
2018    38