Proceedings ArticleDOI

On the resemblance and containment of documents

Andrei Z. Broder
11 Jun 1997
pp. 21-29
TL;DR: The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document.
Abstract
Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
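The definitions in the abstract can be sketched directly: represent each document by its set of shingles (contiguous word w-grams), compute resemblance as the Jaccard ratio and containment as the intersection over |S(A)|, and estimate resemblance by independent random min-hash samples. This is a minimal illustration, not the paper's implementation; it uses Python's built-in `hash` with random salts in place of the Rabin fingerprints the paper uses, and the function names are chosen here for clarity.

```python
import random

def shingles(text, w=3):
    """S(D): the set of contiguous word w-grams (shingles) of a document."""
    words = text.split()
    return {tuple(words[i:i + w]) for i in range(len(words) - w + 1)}

def resemblance(a, b):
    """r(A, B) = |S(A) & S(B)| / |S(A) | S(B)|  (Jaccard coefficient)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

def containment(a, b):
    """c(A, B) = |S(A) & S(B)| / |S(A)|: how much of A appears in B."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa)

def minhash_resemblance(a, b, k=100, seed=0):
    """Estimate r(A, B) from k independent min-hash samples.

    For each random salt, the probability that both sets share the same
    minimum hash equals the resemblance; salted built-in hashes stand in
    for the paper's Rabin fingerprints.
    """
    sa, sb = shingles(a), shingles(b)
    rng = random.Random(seed)
    agree = 0
    for _ in range(k):
        salt = rng.getrandbits(64)
        if min(hash((salt, s)) for s in sa) == min(hash((salt, s)) for s in sb):
            agree += 1
    return agree / k
```

Note that the min-hash estimate uses a fixed number of samples k per document regardless of document length, which is the "fixed size sample" property the abstract highlights.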



Citations
Journal ArticleDOI

RIQ: Fast processing of SPARQL queries on RDF quadruples

TL;DR: This paper proposes a new approach that employs a decrease-and-conquer strategy for fast SPARQL query processing and that can outperform competing systems supporting named graph queries on RDF quadruples across a variety of queries.
Book ChapterDOI

Automatic detection of local reuse

TL;DR: This paper proposes a new fingerprinting technique for local reuse detection in both text-based and object-based documents; the technique exploits the contiguity of documents, allowing the creation of shorter and more flexible fingerprints.
Journal ArticleDOI

Topic discovery in massive text corpora based on Min-Hashing

TL;DR: Sampled Min-Hashing (SMH) as discussed by the authors is a scalable approach to topic discovery which does not require the number of topics to be specified in advance and can handle massive text corpora and large vocabularies using modest computer resources.
Proceedings ArticleDOI

An Intelligent Data De-duplication Based Backup System

TL;DR: The experimental results show that the proposed backup system employs multiple de-duplication strategies simultaneously to substantially eliminate redundant data in the backup process, effectively saving storage space and network bandwidth.
Proceedings ArticleDOI

CombiHeader: Minimizing the number of shim headers in redundancy elimination systems

TL;DR: This paper proposes a novel algorithm, CombiHeader, that allows near-maximum similarity detection using smaller chunk sizes, while a chunk-aggregation technique transmits very few headers with few memory accesses.
References
Book

The Probabilistic Method

Joel Spencer
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal ArticleDOI

Syntactic clustering of the Web

TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI

Min-Wise Independent Permutations

TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
Proceedings Article

Finding similar files in a large file system

TL;DR: Application of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI

Copy detection mechanisms for digital documents

TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete copies or partial copies, and describes algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).