Proceedings ArticleDOI

On the resemblance and containment of documents

Andrei Z. Broder
- 11 Jun 1997 - 
- pp 21-29
TLDR
The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that could be done independently for each document.
Abstract
Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
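To make the sampling idea concrete, the following sketch (a minimal Python illustration, not code from the paper; an ordinary hash function stands in for Rabin fingerprints, and the shingle size w, sample size s, and modulus m are arbitrary choices) turns each document into a set of w-shingles, estimates resemblance from the fixed-size MIN_s sample of the smallest hashed shingles, and estimates containment from a MOD_m sample:

    import hashlib

    def shingles(text, w=4):
        # Contiguous w-word shingles of a document (w=4 is an arbitrary choice).
        words = text.split()
        return {" ".join(words[i:i + w]) for i in range(max(len(words) - w + 1, 1))}

    def h(shingle):
        # 64-bit hash standing in for a Rabin fingerprint.
        return int.from_bytes(hashlib.blake2b(shingle.encode(), digest_size=8).digest(), "big")

    def min_sample(text, s=100, w=4):
        # MIN_s sketch: the s smallest hashed shingles (fixed size per document).
        return set(sorted(h(x) for x in shingles(text, w))[:s])

    def mod_sample(text, m=8, w=4):
        # MOD_m sketch: hashed shingles divisible by m (size grows with the document).
        return {v for v in (h(x) for x in shingles(text, w)) if v % m == 0}

    def est_resemblance(a, b, s=100, w=4):
        # r(A,B) ~ |MIN_s(MIN_s(A) u MIN_s(B)) n MIN_s(A) n MIN_s(B)| / s
        ma, mb = min_sample(a, s, w), min_sample(b, s, w)
        union_sketch = set(sorted(ma | mb)[:s])
        return len(union_sketch & ma & mb) / len(union_sketch)

    def est_containment(a, b, m=8, w=4):
        # c(A,B) ~ |MOD_m(A) n MOD_m(B)| / |MOD_m(A)|
        va, vb = mod_sample(a, m, w), mod_sample(b, m, w)
        return len(va & vb) / len(va) if va else 0.0

Note that the MIN_s sketch has a fixed size per document, which is what allows resemblance to be estimated from fixed-size samples, while the MOD_m sketch used for containment grows with document size.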


Citations
Journal ArticleDOI

ProbMinHash – A Class of Locality-Sensitive Hash Algorithms for the (Probability) Jaccard Similarity

TL;DR: A class of locality-sensitive one-pass hash algorithms that are orders of magnitude faster than the original approach and can be specialized for the conventional Jaccard similarity, resulting in highly efficient algorithms that outperform traditional minwise hashing.
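For background (a definition commonly used in the probability-Jaccard literature, given here as an assumption about what the title refers to, not quoted from the paper): for weight vectors w and w' with non-negative entries,

\[
J_P(w, w') \;=\; \sum_{i:\; w_i > 0,\; w'_i > 0} \frac{1}{\displaystyle\sum_{j} \max\!\left(\frac{w_j}{w_i}, \frac{w'_j}{w'_i}\right)},
\]

which reduces to the conventional Jaccard similarity |A ∩ B| / |A ∪ B| when all non-zero weights are equal.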
Journal ArticleDOI

Deduplication flash file system with PRAM for non-linear editing

TL;DR: A new deduplication file system is designed for an embedded system based on NAND flash memory; to reduce computation overhead, duplication caused by NLE operations is predicted by considering the causality between I/O operations, and garbage-collection overhead can be greatly reduced.
Proceedings ArticleDOI

Sorted deduplication: How to process thousands of backup streams

TL;DR: This paper presents a new exact deduplication approach designed to process thousands of backup streams at the same time on the same fingerprint index; the approach destroys the traditionally exploited temporal chunk locality and creates a new one by sorting fingerprints.
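A simplified illustration of the sorting idea (a Python sketch under the assumption that the fingerprint index can be kept, or streamed, in sorted order; it is not the paper's actual system):

    def classify_chunks(batch_fingerprints, sorted_index):
        # Classify a batch of chunk fingerprints (gathered from many backup
        # streams) as duplicate or new in one sequential merge pass over a
        # sorted fingerprint index.  Sorting the batch first means the index
        # is read in order instead of being probed at random.
        duplicates, new = [], []
        i = 0  # cursor into the sorted index
        for fp in sorted(batch_fingerprints):
            while i < len(sorted_index) and sorted_index[i] < fp:
                i += 1
            if i < len(sorted_index) and sorted_index[i] == fp:
                duplicates.append(fp)
            else:
                new.append(fp)
        return duplicates, new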
Proceedings ArticleDOI

b-bit minwise hashing in practice

TL;DR: This paper is the first study to demonstrate that b-bit minwise hashing implemented using simple hash functions, e.g., the 2-universal (2U) and 4-universal (4U) hash families, can produce learning results very similar to those obtained with fully random permutations.
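The core trick can be sketched in a few lines of Python (a simplified version: it keeps only the lowest b bits of each minwise hash value and applies the basic collision correction, not the paper's full estimator, which also accounts for the set sizes):

    def bbit_sketch(minhash_values, b=2):
        # Keep only the lowest b bits of each minwise hash value.
        mask = (1 << b) - 1
        return [v & mask for v in minhash_values]

    def est_resemblance_bbit(sketch_a, sketch_b, b=2):
        # Two b-bit values match with probability roughly R + (1 - R) / 2^b:
        # true minhash matches plus accidental collisions of the truncated bits.
        # Inverting that relation gives the estimate below.
        p_hat = sum(x == y for x, y in zip(sketch_a, sketch_b)) / len(sketch_a)
        c = 1.0 / (1 << b)
        return max((p_hat - c) / (1 - c), 0.0)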
Posted Content

Information Theoretic Limits of Cardinality Estimation: Fisher Meets Shannon

TL;DR: A new measure of efficiency for cardinality estimators, called the Fisher-Shannon (Fish) number H/I, is defined; it captures the tension between the limiting Shannon entropy H of the sketch and its normalized Fisher information I, which characterizes the variance of a statistically efficient, asymptotically unbiased estimator.
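In symbols (a restatement of the definition above, with the Cramér-Rao bound given as standard statistical background, not as a result of the paper):

\[
\mathrm{Fish} \;=\; \frac{H}{I},
\qquad
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{n\, I(\theta)} \quad \text{for an unbiased estimator from } n \text{ observations,}
\]

the latter being the sense in which Fisher information characterizes the variance of an efficient estimator.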
References
Book

The Probabilistic Method

Joel Spencer
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
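A canonical calculation in this vein (a standard textbook example of the method, not a claim about the book's specific coverage): color each point red or blue independently and uniformly at random; a fixed set of k points is then monochromatic with probability 2^{1-k}, so for a family of m such sets,

\[
\Pr[\text{some set in the family is monochromatic}] \;\le\; m \cdot 2^{\,1-k} \;<\; 1 \quad \text{whenever } m < 2^{k-1},
\]

and a "good" coloring with no monochromatic set must exist.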
Journal ArticleDOI

Syntactic clustering of the Web

TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
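A small-scale illustration of the clustering step in Python (the paper avoids the all-pairs comparison used here by indexing sketch shingles; this sketch simply reuses est_resemblance() from the example near the top of the page):

    def cluster_by_resemblance(docs, threshold=0.5, s=100, w=4):
        # Group documents whose estimated resemblance exceeds a threshold,
        # using union-find over all pairs of documents.
        parent = list(range(len(docs)))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def union(a, b):
            parent[find(a)] = find(b)

        for i in range(len(docs)):
            for j in range(i + 1, len(docs)):
                if est_resemblance(docs[i], docs[j], s, w) >= threshold:
                    union(i, j)

        clusters = {}
        for i in range(len(docs)):
            clusters.setdefault(find(i), []).append(i)
        return list(clusters.values())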
Journal ArticleDOI

Min-Wise Independent Permutations

TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
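The defining property of a min-wise independent family F of permutations, and the way it connects back to resemblance, can be stated as:

\[
\forall X,\ \forall x \in X:\quad \Pr_{\pi \in F}\bigl[\min\{\pi(X)\} = \pi(x)\bigr] \;=\; \frac{1}{|X|},
\]

so that for shingle sets S(A) and S(B),

\[
\Pr_{\pi \in F}\bigl[\min\{\pi(S(A))\} = \min\{\pi(S(B))\}\bigr]
\;=\; \frac{|S(A)\cap S(B)|}{|S(A)\cup S(B)|} \;=\; r(A,B).
\]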
Proceedings Article

Finding similar files in a large file system

TL;DR: Applications of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI

Copy detection mechanisms for digital documents

TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete or partial, and describes algorithms for such detection as well as the metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).
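A toy version of the register-then-detect workflow in Python (an illustrative sketch only, not the paper's mechanism; it reuses shingles() and h() from the first example and scores suspects with a containment-style measure):

    from collections import defaultdict

    class CopyRegistry:
        # Register documents as sets of hashed shingles in an inverted index,
        # then score a suspect document by the fraction of its shingles found
        # in each registered document.
        def __init__(self, w=4):
            self.w = w
            self.index = defaultdict(set)   # hashed shingle -> set of doc ids
            self.sizes = {}                 # doc id -> number of shingles

        def register(self, doc_id, text):
            hashed = {h(x) for x in shingles(text, self.w)}
            self.sizes[doc_id] = len(hashed)
            for v in hashed:
                self.index[v].add(doc_id)

        def detect(self, text):
            # Returns, for each registered document that shares shingles with
            # the suspect text, the fraction of the suspect's shingles it covers.
            hashed = {h(x) for x in shingles(text, self.w)}
            hits = defaultdict(int)
            for v in hashed:
                for doc_id in self.index.get(v, ()):
                    hits[doc_id] += 1
            return {doc_id: count / len(hashed) for doc_id, count in hits.items()}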