Proceedings ArticleDOI
On the resemblance and containment of documents
TLDR
The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document.
Abstract
Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B), which seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed-size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
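The abstract's idea can be made concrete in a few lines: represent each document by its set of contiguous word w-grams ("shingles"), so that resemblance and containment become set-intersection ratios, and estimate resemblance from a fixed-size sample of minima under random hash functions. The sketch below is a minimal illustration, not the paper's implementation; the shingle width `w`, sample size `k`, and use of Python's built-in `hash` in place of Rabin fingerprints are illustrative assumptions.

```python
import random
import re

def shingles(text, w=4):
    # Set of contiguous w-word shingles of the document.
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + w]) for i in range(len(words) - w + 1)}

def resemblance(a, b):
    # r(A, B) = |S(A) ∩ S(B)| / |S(A) ∪ S(B)|
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

def containment(a, b):
    # c(A, B) = |S(A) ∩ S(B)| / |S(A)|
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa)

def minhash_sketch(shingle_set, k=64, seed=0):
    # Fixed-size sample: the minimum of each of k salted hash functions
    # (standing in for Rabin fingerprints) over the shingle set.
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(k)]
    return [min(hash((salt, s)) for s in shingle_set) for salt in salts]

def estimated_resemblance(sketch_a, sketch_b):
    # Each coordinate matches with probability equal to the resemblance.
    return sum(x == y for x, y in zip(sketch_a, sketch_b)) / len(sketch_a)
```

The sketches have fixed size regardless of document length, which is what makes pairwise near-duplicate detection over large collections feasible.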
Citations
Monolingual Text Similarity Measures: A Comparison of Models over Wikipedia Articles Revisions
Andreas Eiselt, Paolo Rosso, et al.
TL;DR: An exhaustive comparison of similarity estimation models is carried out in order to determine which one performs better on different levels of granularity and languages (English, German, Spanish, and Hindi).
Proceedings Article
Finesse: Fine-Grained Feature Locality based Fast Resemblance Detection for Post-Deduplication Delta Compression
TL;DR: Finesse is proposed, a fine-grained feature-locality-based fast resemblance detection approach that divides each chunk into several fixed-size subchunks, computes features from these subchunks individually, and then groups the features into super-features.
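The scheme in this TL;DR can be loosely sketched as follows. This is a hypothetical simplification: the subchunk count, hash choice (a short blake2b digest rather than the rolling fingerprints Finesse actually uses), and grouping rule are all illustrative assumptions, not the paper's implementation.

```python
import hashlib

def superfeatures(chunk: bytes, n_sub: int = 12, n_groups: int = 4):
    # Split the chunk into n_sub fixed-size subchunks (trailing bytes dropped).
    sub_len = max(1, len(chunk) // n_sub)
    subchunks = [chunk[i * sub_len:(i + 1) * sub_len] for i in range(n_sub)]
    # One feature per subchunk: here, a short digest interpreted as an integer.
    feats = [int.from_bytes(hashlib.blake2b(s, digest_size=8).digest(), "big")
             for s in subchunks]
    # Group the features into n_groups super-features; two chunks are
    # resemblance candidates if they share at least one super-feature.
    return [hash(tuple(feats[g::n_groups])) for g in range(n_groups)]
```

Because each feature depends only on one subchunk, a localized edit invalidates only the super-feature whose group contains that subchunk; the remaining super-features still match, which is what lets similar chunks be detected cheaply.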
Journal ArticleDOI
Copy detection in Chinese documents using Ferret
TL;DR: The Ferret copy detector is extended to Chinese, with experiments on corpora of coursework collected from two Chinese universities showing that Ferret can find both artificially constructed plagiarism and actually occurring, previously undetected plagiarism.
Book ChapterDOI
Sketching for big data recommender systems using fast pseudo-random fingerprints
Yoram Bachrach, Ely Porat, et al.
TL;DR: In practice, the accuracy achieved by the approach is even better than that guaranteed by the theoretical bounds, so even shorter fingerprints suffice to obtain high-quality results.
Posted Content
Hashing for statistics over k-partitions
TL;DR: It is shown that a tabulation-based hash function, mixed tabulation, yields strong concentration bounds for the most popular applications of k-partitioning, similar to those one would get using a truly random hash function.
References
Book
The Probabilistic Method
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal ArticleDOI
Syntactic clustering of the Web
TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI
Min-Wise Independent Permutations
TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
Proceedings Article
Finding similar files in a large file system
TL;DR: Application of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI
Copy detection mechanisms for digital documents
TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete copies or partial copies, and describes algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).