Topic
Feature hashing
About: Feature hashing is a research topic. Over its lifetime, 993 publications have been published on this topic, receiving 51,462 citations.
Papers published on a yearly basis
Papers
01 Sep 2016
TL;DR: The proposed SDQ-CSLBP method extracts texture features using CSLBP with the standard deviation as a weight factor; experimental results show that the method is robust against content-preserving manipulations and sensitive to content changes and structural tampering.
Abstract: A common approach to image hashing is to use a powerful feature descriptor that captures the essence of an image. Applications of image hashing lie in content authentication, structural tampering detection, retrieval, and recognition. A hash is a compact summary of an image. The Center-Symmetric Local Binary Pattern (CSLBP) is a powerful texture descriptor that captures even the smallest changes, and a compressed hash code can be obtained from it. If the CSLBP feature is weighted by a boost factor, the success rate of image hashing improves. The proposed SDQ-CSLBP method extracts texture features using CSLBP with the standard deviation as the weight factor; the standard deviation, which represents local contrast, is itself a powerful descriptor. The resulting CSLBP histogram has 16 bins per image block, which can be compressed to 8 bins using the flipped-difference concept. Without a weight factor, compressed CSLBP has low discrimination power. Experimental results show that the proposed method is robust against content-preserving manipulations and sensitive to content changes and structural tampering.
7 citations
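The CSLBP descriptor in the abstract above can be sketched as follows. This is a generic CSLBP implementation (comparing the four center-symmetric neighbor pairs of each pixel to get a 4-bit code and a 16-bin block histogram); the paper's standard-deviation weighting and 8-bin compression are not reproduced, and the threshold `T` is an assumed parameter.

```python
import numpy as np

def cslbp(img, T=0.01):
    """Center-Symmetric LBP: compare the 4 center-symmetric
    neighbor pairs of each interior pixel; 4 bits -> codes in [0, 16)."""
    # 8-neighborhood offsets; opposite pairs are (0,4), (1,5), (2,6), (3,7)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    core = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit in range(4):
        dy1, dx1 = offs[bit]
        dy2, dx2 = offs[bit + 4]
        a = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        # set this bit where one side of the symmetric pair is brighter
        core |= ((a.astype(float) - b) > T).astype(np.uint8) << bit
    return core

def block_histogram(codes):
    """16-bin histogram of CSLBP codes for one image block."""
    hist, _ = np.histogram(codes, bins=16, range=(0, 16))
    return hist
```

Per-block histograms like these are then concatenated (and, in the paper, weighted and compressed) to form the image hash.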
TL;DR: Zhang et al. propose a novel deep hashing method, supervised hierarchical deep hashing (SHDH), which performs hash-code learning for hierarchical labeled data by weighting each layer of the hierarchy and uses a deep convolutional neural network to obtain a hash code for each data point.
Abstract: Recently, hashing methods have been widely used in large-scale image retrieval. However, most existing hashing methods do not consider the hierarchical relation of labels, ignoring the rich information stored in the hierarchy. Moreover, most previous works treat each bit of a hash code equally, which does not suit hierarchical labeled data. In this paper, we propose a novel deep hashing method, called supervised hierarchical deep hashing (SHDH), to perform hash-code learning for hierarchical labeled data. Specifically, we define a novel similarity formula for hierarchical labeled data by weighting each layer, and design a deep convolutional neural network to obtain a hash code for each data point. Extensive experiments on several real-world public datasets show that the proposed method outperforms state-of-the-art baselines in the image retrieval task.
7 citations
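The abstract mentions a layer-weighted similarity for hierarchical labels without giving the formula. A hypothetical sketch of such a similarity (the paper's exact definition likely differs): labels are paths from root to leaf, each layer carries a weight, and similarity accumulates weights along the shared prefix of the two paths.

```python
def hierarchical_similarity(path_a, path_b, weights):
    """Layer-weighted similarity for hierarchical labels.

    path_a, path_b: label paths from root to leaf,
    e.g. ('animal', 'dog', 'terrier').
    weights: one weight per layer, typically larger for deeper
    (more specific) layers. Hypothetical formula, not the paper's.
    """
    sim = 0.0
    for a, b, w in zip(path_a, path_b, weights):
        if a != b:
            break              # paths diverge; deeper layers cannot match
        sim += w
    return sim / sum(weights)  # normalize to [0, 1]
```

For example, with weights (1, 2, 4), two dogs of different breeds share the first two layers and score 3/7, while unrelated labels score 0.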
TL;DR: This work proposes a general framework for learning hash functions using affinity-based loss functions that closes the loop and optimizes jointly over the hash functions and the binary codes; it is guaranteed to obtain better hash functions while not being much slower.
Abstract: In binary hashing, one wants to learn a function that maps a high-dimensional feature vector to a vector of binary codes, for application to fast image retrieval. This typically results in a difficult optimization problem, nonconvex and nonsmooth, because of the discrete variables involved. Much work has simply relaxed the problem during training, solving a continuous optimization and truncating the codes a posteriori. This gives reasonable results but is suboptimal. Recent work has applied alternating optimization to the objective over the binary codes and achieved better results, but the hash function was still learned a posteriori, which remains suboptimal. We propose a general framework for learning hash functions using affinity-based loss functions that closes the loop and optimizes jointly over the hash functions and the binary codes. The resulting algorithm can be seen as a corrected, iterated version of the procedure of optimizing first over the codes and then learning the hash function. Compared to this, our optimization is guaranteed to obtain better hash functions while not being much slower, as demonstrated experimentally on various supervised and unsupervised datasets. In addition, the framework facilitates the design of optimization algorithms for arbitrary types of loss and hash functions.
7 citations
TL;DR: This article proposes a restricted blocking strategy by investigating the effect of two rotation operations on an image and its blocks, both theoretically and experimentally, and applies the proposed strategy to the recently reported non-negative matrix factorization (NMF) hashing.
Abstract: Image hashing is a potential solution for image content authentication (a desirable image hashing algorithm should be robust to common image processing operations and various geometric distortions). In the literature, researchers pay more attention to block-based image hashing algorithms due to their robustness to common image processing operations (such as lossy compression, low-pass filtering, and additive noise). However, block-based hashing strategies are sensitive to rotation operations, which makes the robustness of block-based hashing methods against rotation an important issue. In this direction, we propose a restricted blocking strategy by investigating the effect of two rotation operations on an image and its blocks, both theoretically and experimentally. Furthermore, we apply the proposed blocking strategy to the recently reported non-negative matrix factorization (NMF) hashing. Experimental results demonstrate the validity of block-based hashing algorithms with the restricted blocking strategy under rotation operations.
7 citations
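To make the rotation-sensitivity issue concrete, here is a generic block-based hash sketch (per-block mean intensity binarized against the global mean; the paper's restricted blocking and NMF step are not reproduced). A rotation reshuffles pixels across block boundaries, so per-block statistics, and hence the bits, change even though the content is the same.

```python
import numpy as np

def block_hash(img, block=8):
    """Generic block-based image hash: compute the mean intensity of
    each non-overlapping block x block tile, then binarize each block
    mean against the global image mean. Illustrative only."""
    h, w = img.shape
    hb, wb = h // block, w // block
    tiles = img[:hb * block, :wb * block].reshape(hb, block, wb, block)
    means = tiles.mean(axis=(1, 3))            # one mean per block
    return (means > img.mean()).astype(np.uint8).ravel()
```

Comparing `block_hash(img)` with `block_hash(rotated_img)` via Hamming distance shows the fragility that restricted blocking strategies aim to mitigate.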
17 Mar 2017
TL;DR: A Softmax-based Ensemble Model, SEM, adopts only a few key features after feature hashing for CTR estimation in Real-Time Bidding, and outperforms state-of-the-art approaches when fewer than 50 features are adopted, on two real datasets.
Abstract: In Real-Time Bidding (RTB) advertising, evaluating the Click-Through Rate (CTR) of a bid request and an ad is important for bidding strategy optimization on Demand-Side Platforms (DSPs). Regression-based approaches are popular for CTR estimation in RTB because they are highly efficient and scalable. The information in a bid request and an ad contains categorical attributes (such as URL) and numerical attributes (such as ad size). To vectorize this information for the input of regression-based approaches, categorical attributes are generally expanded into several binary features. However, some categorical attributes have infinitely many possible values (such as URL); for these attributes, only values observed during training can be transformed into binary features, so a new attribute or value appearing in the online environment is lost after vectorization. In this paper, we first exploit the feature hashing trick to transform the categorical and numerical attributes into a large fixed-size vector. Since the vector is large and sparse, we propose a Softmax-based Ensemble Model, SEM, which adopts only a few key features after feature hashing for CTR estimation. Experimental results demonstrate that our proposed approach adapts to the harsh online environments of RTB and outperforms state-of-the-art approaches when fewer than 50 features are adopted, on two real datasets.
7 citations
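The feature hashing trick in the abstract above can be sketched as follows: each attribute is hashed to an index in a fixed-size vector, with a second hash providing a +/-1 sign so that collisions cancel in expectation. This is a generic sketch (MD5 as the hash and the attribute names are illustrative, not the paper's choices); note that an unseen URL at serving time still maps to a valid index, which is exactly the out-of-vocabulary problem the trick solves.

```python
import hashlib

def feature_hash(features, dim=2 ** 10):
    """Signed hashing trick: map {name: value} attributes into a
    fixed-size vector. Categorical values contribute weight 1.0 at
    a hashed index; numerical values contribute the value itself."""
    vec = [0.0] * dim
    for name, value in features.items():
        if isinstance(value, str):            # categorical: hashed one-hot
            key, weight = f"{name}={value}", 1.0
        else:                                 # numerical: keep the value
            key, weight = name, float(value)
        digest = hashlib.md5(key.encode()).digest()
        idx = int.from_bytes(digest[:4], "little") % dim
        sign = 1.0 if digest[4] % 2 == 0 else -1.0
        vec[idx] += sign * weight
    return vec
```

The same function handles any URL, seen or unseen, without growing the vector, which is why the downstream model's input dimension stays fixed.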