Proceedings ArticleDOI

On the resemblance and containment of documents

Andrei Z. Broder
- 11 Jun 1997 - 
- pp 21-29
TL;DR
The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document.
Abstract
Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
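The sampling scheme described in the abstract can be illustrated with a short sketch: each document is reduced to a set of shingles, and the resemblance |S(A) ∩ S(B)| / |S(A) ∪ S(B)| is estimated from a fixed-size sample of per-hash minima. This is a minimal illustration only; the function names are mine, word shingles stand in for whatever tokenization is used, and Python's built-in `hash` stands in for the Rabin fingerprints the paper uses.

```python
import random
import re

def shingles(text, w=3):
    """Reduce a document to its set of contiguous w-word shingles."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def minhash_sketch(shingle_set, num_hashes=128, seed=42):
    """Fixed-size sample of a document: the minimum shingle hash under
    each of num_hashes salted hash functions (a stand-in for Rabin
    fingerprints of random permutations)."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash((salt, s)) for s in shingle_set) for salt in salts]

def estimated_resemblance(sketch_a, sketch_b):
    """Fraction of coordinates whose minima agree; this is an unbiased
    estimate of |A ∩ B| / |A ∪ B| for the underlying shingle sets."""
    return sum(a == b for a, b in zip(sketch_a, sketch_b)) / len(sketch_a)
```

Because the sketch depends only on one document at a time, the samples can indeed be computed independently per document and compared later, which is the property the abstract emphasizes.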



Citations
Proceedings ArticleDOI

GB-KMV: An Augmented KMV Sketch for Approximate Containment Similarity Search

TL;DR: This paper proposes a novel augmented KMV sketch technique, namely GB-KMV, which is data-dependent and achieves a good trade-off between sketch size and accuracy, and shows that it outperforms the state-of-the-art technique LSH-E in estimation accuracy under practical assumptions.
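For background, the plain KMV (k minimum values) synopsis that GB-KMV augments keeps the k smallest hash values of a set; the k-th smallest value v_k yields the distinct-count estimate (k-1)/v_k. Below is a minimal sketch of the base KMV idea only, not of GB-KMV itself; the function names and the SHA-1-based hash are mine.

```python
import hashlib

def h(x):
    """Hash an element to a pseudo-uniform float in [0, 1)."""
    digest = hashlib.sha1(str(x).encode()).hexdigest()
    return int(digest[:15], 16) / 16**15

def kmv_sketch(items, k=64):
    """KMV synopsis: the k smallest distinct hash values seen."""
    return sorted({h(x) for x in items})[:k]

def kmv_estimate(sketch, k=64):
    """Estimate the number of distinct items as (k-1)/v_k, where v_k is
    the k-th smallest hash value; with fewer than k values the count is exact."""
    if len(sketch) < k:
        return len(sketch)
    return (k - 1) / sketch[k - 1]
```

Intersecting two such synopses gives the set-overlap estimates that containment similarity search builds on; GB-KMV's contribution, per the TL;DR above, is making the sketch allocation data-dependent.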
Book ChapterDOI

Frequent Itemset Mining for Clustering Near Duplicate Web Documents

TL;DR: An approach that computes (closed) sets of attributes with large support as clusters of similar documents, compared against other established methods and software on the same datasets.
Patent

Efficient indexing of error tolerant set containment

TL;DR: In this article, a method and a system for efficient indexing of error-tolerant set containment are presented: a frequency threshold and a query set are obtained, all tokens or token sets within the query set are determined, and all minimal infrequent token sets of the data records are found and used to build an index.
Book ChapterDOI

Preserving Semantic Neighborhoods for Robust Cross-modal Retrieval

TL;DR: This article proposes novel within-modality losses that encourage semantic coherency in both the text and image subspaces, since semantic coherency does not necessarily align with visual coherency.
Posted Content

A Review for Weighted MinHash Algorithms.

TL;DR: This review categorizes weighted MinHash algorithms into quantization-based approaches, "active index"-based ones, and others, and shows the evolution and inherent connections of the weighted MinHash algorithms, from the integer-weighted MinHash ones to the real-valued weighted MinHash ones.
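The integer-weighted case mentioned in this review can be illustrated by the classic replication trick: unfold each element into as many copies as its integer weight, then MinHash the unfolded set; the per-hash collision rate estimates the generalized Jaccard similarity Σ min(w_A, w_B) / Σ max(w_A, w_B). A minimal sketch under those assumptions (names are mine, and Python's salted built-in hash stands in for a proper min-wise hash family):

```python
import random

def expand(weighted):
    """Unfold an integer-weighted set {elem: weight} into distinct
    (elem, copy_index) pairs, one per unit of weight."""
    return {(x, i) for x, w in weighted.items() for i in range(w)}

def weighted_minhash(weighted, num_hashes=256, seed=7):
    """MinHash sketch of the unfolded set under num_hashes salted hashes."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    items = expand(weighted)
    return [min(hash((salt, it)) for it in items) for salt in salts]

def estimated_similarity(sketch_a, sketch_b):
    """Collision rate across coordinates, which estimates the generalized
    Jaccard similarity of the two integer-weighted sets."""
    return sum(a == b for a, b in zip(sketch_a, sketch_b)) / len(sketch_a)
```

Replication is simple but its cost grows with the total weight, which is precisely the limitation that the later "active index" and real-valued algorithms surveyed in this review are designed to remove.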
References
Book

The Probabilistic Method

Joel Spencer
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal ArticleDOI

Syntactic clustering of the Web

TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI

Min-Wise Independent Permutations

TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
Proceedings Article

Finding similar files in a large file system

TL;DR: Applications of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI

Copy detection mechanisms for digital documents

TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete copies or partial copies, and describes algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).