Proceedings Article

On the resemblance and containment of documents

Andrei Z. Broder
- 11 Jun 1997
- pp. 21-29
TL;DR
The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document.
Abstract
Given two documents A and B, we define two mathematical notions, their resemblance r(A, B) and their containment c(A, B), that seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed-size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
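In the paper, a document is reduced to its set of w-shingles S(D), the contiguous w-token subsequences it contains, and the two measures are defined as r(A, B) = |S(A) ∩ S(B)| / |S(A) ∪ S(B)| and c(A, B) = |S(A) ∩ S(B)| / |S(A)|. Below is a minimal Python sketch of the fixed-size sampling estimator for resemblance; it substitutes a SHA-1 based hash for the paper's Rabin fingerprints, and the parameters w and k, along with all function names, are illustrative assumptions rather than the paper's implementation.

```python
import hashlib

def shingles(text, w=4):
    """Set of w-shingles: contiguous w-word subsequences of the document."""
    words = text.split()
    return {tuple(words[i:i + w]) for i in range(max(0, len(words) - w + 1))}

def h(shingle):
    # Stand-in for a Rabin fingerprint; any well-mixed 64-bit hash will do here.
    digest = hashlib.sha1(" ".join(shingle).encode()).digest()
    return int.from_bytes(digest[:8], "big")

def min_sample(shingle_set, k=100):
    """Fixed-size sketch of a document: the k smallest hashed shingle values."""
    return set(sorted(h(s) for s in shingle_set)[:k])

def estimate_resemblance(doc_a, doc_b, w=4, k=100):
    """Estimate r(A, B) from the two fixed-size sketches alone."""
    sa = min_sample(shingles(doc_a, w), k)
    sb = min_sample(shingles(doc_b, w), k)
    # The k smallest values of sa | sb are exactly the k smallest hashed
    # shingles of S(A) union S(B), i.e. a uniform sample of the union.
    union_sample = set(sorted(sa | sb)[:k])
    if not union_sample:
        return 1.0  # two empty documents are trivially identical
    # A sampled value lies in both shingle sets iff it appears in both sketches.
    return len(union_sample & sa & sb) / len(union_sample)
```

Two near-duplicate documents score close to 1 under this estimator, while unrelated ones score close to 0. Containment can be estimated in the same spirit, but, as the paper discusses, it calls for a variable-size (modulus-based) sample rather than the fixed-size one sketched here.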



Citations
Journal Issue

Efficient plagiarism detection for large code repositories

TL;DR: This paper proposes techniques for detecting plagiarism in program code using text similarity measures and local alignment, and shows that the approach is highly scalable while maintaining effectiveness comparable to the popular JPlag and MOSS systems.
Dissertation

Nearest neighbor search: the old, the new, and the impossible

TL;DR: This thesis gives a new algorithm for the approximate NN problem in d-dimensional Euclidean space, and gives evidence that classical approaches to NN under certain hard distances, such as the string edit distance, are likely to fail.
Book Chapter

Estimating Answer Sizes for XML Queries

TL;DR: An extensive experimental evaluation is presented using several XML data sets, both real and synthetic, with a variety of queries, demonstrating that accurate and robust estimates can be achieved with limited space and at minuscule computational cost.
Journal Article

Order statistics and estimating cardinalities of massive data sets

TL;DR: A new class of algorithms is introduced that estimates the cardinality of very large multisets in a single pass using constant memory, based on order statistics rather than on bit patterns in the binary representations of numbers.
Posted Content

Fast Memory-efficient Anomaly Detection in Streaming Heterogeneous Graphs

TL;DR: This work introduces a new similarity function for heterogeneous graphs that compares two graphs by the relative frequency of their local substructures, represented as short strings, and proposes StreamSpot, a clustering-based anomaly detection approach that addresses challenges on two key fronts: heterogeneity and the streaming nature of the data.
References
Book

The Probabilistic Method

Noga Alon, Joel Spencer
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal Article

Syntactic clustering of the Web

TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, yielding a clustering of all documents that are syntactically similar.
Journal Article

Min-Wise Independent Permutations

TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
Proceedings Article

Finding similar files in a large file system

TL;DR: Applications of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings Article

Copy detection mechanisms for digital documents

TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete or partial, describes algorithms for such detection, and presents metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).