Topic

Locality-sensitive hashing

About: Locality-sensitive hashing (LSH) is a technique for approximate nearest-neighbor search that hashes similar items into the same buckets with high probability. Over its lifetime, 1,894 publications have been published within this topic, receiving 69,362 citations.
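For context, the core idea is that nearby points should collide under the hash far more often than distant ones. Below is a minimal sketch of one classical construction, random-hyperplane (SimHash-style) LSH for cosine similarity, written in Python; the function names and parameters are illustrative and are not taken from any of the papers listed below.

import numpy as np

def make_hyperplanes(dim, n_bits, seed=0):
    # Each random hyperplane contributes one bit of the signature.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_bits, dim))

def lsh_signature(vec, hyperplanes):
    # Bit i records which side of hyperplane i the vector falls on,
    # so vectors separated by a small angle share most bits.
    return tuple(((hyperplanes @ vec) >= 0).astype(int))

# Two nearly identical vectors usually collide on the full signature.
planes = make_hyperplanes(dim=64, n_bits=16)
a = np.random.default_rng(1).standard_normal(64)
b = a + 0.05 * np.random.default_rng(2).standard_normal(64)
print(lsh_signature(a, planes) == lsh_signature(b, planes))  # usually True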


Papers
Journal Article
TL;DR: A multi-dimensional, quality-ensemble-driven recommendation approach named RecLSH-TOPSIS, based on LSH and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), is proposed; it makes privacy-preserving edge service recommendations across multiple QoS dimensions.

76 citations

Journal Article
TL;DR: This work proposes Deep Hashing based on Classification and Quantization errors (DHCQ), a supervised hashing method for scalable face image retrieval that simultaneously learns image feature representations, hash codes, and classifiers.

76 citations

Proceedings Article
Cheng Yang
21 Oct 2001
TL;DR: The algorithm tries to capture the intuitive notion of similarity perceived by humans: two pieces are similar if they are fully or partially based on the same score, even if they were performed by different people or at different speeds.
Abstract: We present a prototype method for indexing raw-audio music files in a way that facilitates content-based similarity retrieval. The algorithm tries to capture the intuitive notion of similarity perceived by humans: two pieces are similar if they are fully or partially based on the same score, even if they are performed by different people or at different speeds. Local peaks in signal power are identified in each audio file, and a spectral vector is extracted near each peak. Nearby peaks are selectively grouped into "characteristic sequences", which serve as the basis for indexing. A hashing scheme known as locality-sensitive hashing is employed to index the high-dimensional vectors. Retrieval results are ranked by the number of final matches that pass certain linearity criteria.

75 citations
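As a rough illustration of the indexing step described above (a sketch under assumed data structures, not the paper's actual implementation), the extracted high-dimensional vectors can be bucketed by their LSH signature and a query ranked by how many of its vectors collide with each indexed item:

import numpy as np
from collections import defaultdict

class LSHIndex:
    # Toy index: vectors that share an LSH signature fall into the same bucket.
    def __init__(self, dim, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = defaultdict(list)

    def _key(self, vec):
        return tuple(((self.planes @ vec) >= 0).astype(int))

    def add(self, item_id, vec):
        self.buckets[self._key(vec)].append(item_id)

    def query(self, vecs):
        # Score each indexed item by the number of query vectors that
        # land in one of its buckets, then rank candidates by that count.
        scores = defaultdict(int)
        for v in vecs:
            for item_id in self.buckets[self._key(v)]:
                scores[item_id] += 1
        return sorted(scores.items(), key=lambda kv: -kv[1])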

Journal Article
TL;DR: This paper introduces a novel supervised cross-modality hashing framework, which can generate unified binary codes for instances represented in different modalities and significantly outperforms the state-of-the-art multimodality hashing techniques.
Abstract: With the rapid growth of the Internet, exploiting large-scale retrieval techniques for multimodal web data has become one of the most popular yet challenging problems in computer vision and multimedia. Recently, hashing methods have been used for fast nearest neighbor search in large-scale data spaces by embedding high-dimensional feature descriptors into a low-dimensional, similarity-preserving Hamming space. Inspired by this, in this paper we introduce a novel supervised cross-modality hashing framework that can generate unified binary codes for instances represented in different modalities. In particular, in the learning phase, each bit of a code is sequentially learned with a discrete optimization scheme that jointly minimizes its empirical loss based on a boosting strategy. In a bitwise manner, hash functions are then learned for each modality, mapping the corresponding representations into unified hash codes. We refer to this approach as cross-modality sequential discrete hashing (CSDH); it effectively reduces the quantization errors arising from the oversimplified rounding-off step and thus yields high-quality binary codes. In the test phase, a simple fusion scheme is used to generate a unified hash code for final retrieval by merging the predicted hashing results of an unseen instance from different modalities. The proposed CSDH has been systematically evaluated on three standard data sets: Wiki, MIRFlickr, and NUS-WIDE. The results show that our method significantly outperforms state-of-the-art multimodality hashing techniques.

75 citations
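Retrieval with such learned binary codes ultimately reduces to Hamming-distance ranking in the shared code space. Below is a minimal, illustrative sketch of that final lookup step using bit-packed codes (this is not the CSDH learning procedure itself; names and sizes are made up for the example):

import numpy as np

def hamming_rank(query_code, db_codes):
    # Codes are rows of packed bits (np.packbits output); the Hamming
    # distance is the popcount of the XOR of two codes.
    diff = np.unpackbits(np.bitwise_xor(db_codes, query_code), axis=1)
    dists = diff.sum(axis=1)
    return np.argsort(dists), dists

# Example with random 64-bit codes for 1000 database items.
rng = np.random.default_rng(0)
db = np.packbits(rng.integers(0, 2, size=(1000, 64), dtype=np.uint8), axis=1)
order, dists = hamming_rank(db[42], db)
print(order[0], dists[order[0]])  # item 42 at distance 0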

Proceedings Article
15 Oct 2018
TL;DR: Cross-device approximate computation reuse is proposed, which minimizes redundant computation by harnessing the "equivalence" between different input values and reusing previously computed outputs with high confidence.
Abstract: Mobile and IoT scenarios increasingly involve interactive and computation-intensive contextual recognition. Existing optimizations typically resort to computation offloading or simplified on-device processing. Instead, we observe that the same application is often invoked on multiple devices in close proximity. Moreover, the application instances often process similar contextual data that map to the same outcome. In this paper, we propose cross-device approximate computation reuse, which minimizes redundant computation by harnessing the "equivalence" between different input values and reusing previously computed outputs with high confidence. We devise adaptive locality-sensitive hashing (A-LSH) and homogenized k nearest neighbors (H-kNN). The former achieves scalable, constant-time lookup, while the latter provides high-quality reuse and a tunable accuracy guarantee. We further incorporate approximate reuse as a service in the computation offloading runtime. Extensive evaluation shows that, given a 95% accuracy target, the service consistently harnesses over 90% of reuse opportunities, reducing computation latency and energy consumption by a factor of 3 to 10.

74 citations
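The reuse mechanism can be pictured roughly as follows (an illustrative sketch loosely inspired by the A-LSH/H-kNN description, not the authors' implementation): hash the input feature vector, look up previously cached entries in the same bucket, and reuse a cached output only when the nearest cached inputs agree on it.

import numpy as np
from collections import defaultdict, Counter

class ReuseCache:
    # Toy approximate-reuse cache: LSH bucket lookup plus an agreement
    # check among the k nearest cached inputs in that bucket.
    def __init__(self, dim, n_bits=10, k=3, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = defaultdict(list)   # signature -> [(input_vec, output)]
        self.k = k

    def _sig(self, x):
        return tuple(((self.planes @ x) >= 0).astype(int))

    def put(self, x, output):
        self.buckets[self._sig(x)].append((np.asarray(x), output))

    def get(self, x):
        # Reuse only if the k nearest cached inputs in this bucket all map
        # to the same output; otherwise return None so the caller computes
        # the result and stores it with put().
        entries = self.buckets[self._sig(x)]
        if not entries:
            return None
        nearest = sorted(entries, key=lambda e: np.linalg.norm(e[0] - x))[: self.k]
        outputs = [out for _, out in nearest]
        top, count = Counter(outputs).most_common(1)[0]
        return top if count == len(outputs) else None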


Network Information
Related Topics (5)
Deep learning: 79.8K papers, 2.1M citations, 84% related
Feature extraction: 111.8K papers, 2.1M citations, 83% related
Convolutional neural network: 74.7K papers, 2M citations, 83% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Support vector machine: 73.6K papers, 1.7M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    43
2022    108
2021    88
2020    110
2019    104
2018    139