Proceedings ArticleDOI

On the resemblance and containment of documents

Andrei Z. Broder
- 11 Jun 1997
- pp. 21–29
TLDR
The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document.
Abstract
Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
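Concretely, the paper defines the resemblance as r(A, B) = |S(A) ∩ S(B)| / |S(A) ∪ S(B)| and the containment as c(A, B) = |S(A) ∩ S(B)| / |S(A)|, where S(D) is the set of contiguous w-token windows ("shingles") of document D. The following is a minimal Python sketch of the two estimators, not the paper's implementation: SHA-1 stands in for Rabin fingerprints, and the shingle width and sample sizes are illustrative.

import hashlib

def shingles(text, w=4):
    # S(D): the set of contiguous w-token windows ("shingles") of the document.
    tokens = text.split()
    if len(tokens) < w:
        return {" ".join(tokens)}
    return {" ".join(tokens[i:i + w]) for i in range(len(tokens) - w + 1)}

def fingerprint(shingle):
    # Stand-in for a Rabin fingerprint: map each shingle to an integer.
    return int(hashlib.sha1(shingle.encode()).hexdigest(), 16)

def bottom_sketch(shingle_set, s=100):
    # Fixed-size sample: the s numerically smallest fingerprints.
    return set(sorted(fingerprint(sh) for sh in shingle_set)[:s])

def resemblance_estimate(sketch_a, sketch_b, s=100):
    # The s smallest fingerprints of the union form a uniform random sample
    # of S(A) ∪ S(B); the fraction of them present in both sketches estimates
    # r(A, B) = |S(A) ∩ S(B)| / |S(A) ∪ S(B)|.
    union_sample = set(sorted(sketch_a | sketch_b)[:s])
    return len(union_sample & sketch_a & sketch_b) / len(union_sample)

def mod_sample(shingle_set, m=16):
    # Variable-size sample: keep fingerprints divisible by m (MOD m sampling).
    return {f for f in (fingerprint(sh) for sh in shingle_set) if f % m == 0}

def containment_estimate(sample_a, sample_b):
    # c(A, B) = |S(A) ∩ S(B)| / |S(A)|, estimated from the MOD m samples.
    return len(sample_a & sample_b) / len(sample_a) if sample_a else 0.0

The fixed-size ("bottom-s") sketch suffices for resemblance; containment is estimated from the variable-size MOD m samples, which matches the abstract's remark that resemblance, but not containment, can be evaluated from a fixed-size sample per document.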


Citations
Proceedings ArticleDOI

Sampling dirty data for matching attributes

TL;DR: New similarity measures between sets of strings are proposed that consider not only set-based similarity but also similarity between string instances, along with efficient algorithms for distributed sample creation and similarity computation that make the measures practical.
Journal ArticleDOI

Libra: Scalable k-mer-based tool for massive all-vs-all metagenome comparisons

TL;DR: A tool called Libra is developed that performs an all-vs-all comparison of metagenomes for precise clustering based on their k-mer content; its Hadoop architecture can scale to datasets of any size, enabling global-scale analyses that link microbial signatures to biological processes.
Patent

Decreasing the fragility of duplicate document detecting algorithms

TL;DR: In a signature-based duplicate detection system, multiple different lexicons are used to generate a document signature composed of multiple sub-signatures; the signature of an e-mail or other document is then defined as the set of sub-signatures generated from the different lexicons.
Dissertation

High-dimensional similarity search and sketching : algorithms and hardness

TL;DR: An algorithm for the ANN problem over the l1 and l2 distances that improves upon the Locality-Sensitive Hashing framework, together with a proof of the equivalence between the existence of short, accurate sketches and the existence of good embeddings into lp spaces for 0 < p ≤ 2.
Posted Content

Anonymizing Unstructured Data

TL;DR: This paper formalizes the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets, defines the optimization problem that arises from this definition, and provides O(k log k)- and O(1)-approximation algorithms.
References
Book

The Probabilistic Method

Joel Spencer
TL;DR: A particular set of problems, all dealing with “good” colorings of an underlying set of points relative to a given family of sets, is explored.
Journal ArticleDOI

Syntactic clustering of the Web

TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI

Min-Wise Independent Permutations

TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
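As a quick illustration of the idea (a sketch under stated assumptions, not the AltaVista implementation): for a permutation π drawn from a min-wise independent family, Pr[min π(S(A)) = min π(S(B))] equals r(A, B), so the fraction of agreeing minima across many independent permutations estimates the resemblance. Below, random affine maps modulo a prime stand in for the permutation family, and the parameters are illustrative.

import random

def minhash_signature(fingerprints, num_perms=64, prime=(1 << 61) - 1, seed=0):
    # Approximate min-wise independent permutations with affine maps
    # x -> (a * x + b) mod prime, keeping the minimum under each map.
    rng = random.Random(seed)
    coeffs = [(rng.randrange(1, prime), rng.randrange(prime)) for _ in range(num_perms)]
    return [min((a * x + b) % prime for x in fingerprints) for a, b in coeffs]

def resemblance_from_signatures(sig_a, sig_b):
    # Each signature slot agrees with probability r(A, B); averaging the
    # agreements over all slots gives an unbiased estimate of the resemblance.
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)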
Proceedings Article

Finding similar files in a large file system

TL;DR: Applications of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI

Copy detection mechanisms for digital documents

TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete or partial, and describes algorithms for such detection along with the metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).