Proceedings ArticleDOI
On the resemblance and containment of documents
Abstract
Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
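The sampling idea in the abstract can be sketched in a few lines (a minimal illustration, not the paper's implementation: it uses word-level shingles and SHA-1-based hash functions in place of Rabin fingerprints, and the parameters `w`, `k`, and the seeding scheme are arbitrary choices):

```python
import hashlib


def shingles(text, w=4):
    """Set of contiguous w-word shingles S(D) of a document."""
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}


def resemblance(a, b, w=4):
    """Exact resemblance r(A, B) = |S(A) & S(B)| / |S(A) | S(B)|."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa | sb)


def minhash_signature(text, w=4, k=16):
    """Fixed-size sample: the minimum of k independent hashes over the shingle set."""
    sig = []
    for seed in range(k):
        sig.append(min(
            int(hashlib.sha1(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingles(text, w)
        ))
    return sig


def estimated_resemblance(a, b, w=4, k=16):
    """The fraction of matching signature slots is an unbiased estimate of r(A, B)."""
    sa, sb = minhash_signature(a, w, k), minhash_signature(b, w, k)
    return sum(x == y for x, y in zip(sa, sb)) / k
```

Because each signature has a fixed size k regardless of document length, two documents can be compared using only their precomputed samples, which is what makes the scheme practical at web scale.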
Citations
Proceedings ArticleDOI
Twister Tries: Approximate Hierarchical Agglomerative Clustering for Average Distance in Linear Time
Michael Cochez,Hao Mou +1 more
TL;DR: This paper proposes the use of locality-sensitive hashing combined with a novel data structure called twister to provide an approximate clustering for average linkage that requires only linear space and is feasible to apply on a larger scale.
Proceedings ArticleDOI
Fast computation of min-Hash signatures for image collections
Ondrej Chum,Jiri Matas +1 more
TL;DR: A new method for highly efficient min-Hash generation for document collections that exploits the inverted file structure which is available in many applications based on a bag or a set of words is proposed.
Proceedings Article
Improved densification of one permutation hashing
Anshumali Shrivastava,Ping Li +1 more
TL;DR: In this article, the authors proposed a new densification procedure that is provably better than the existing scheme; the improvement is most significant for the very sparse datasets that are common on the web.
Journal ArticleDOI
To Petabytes and beyond: recent advances in probabilistic and signal processing algorithms and their application to metagenomics.
R. A. Leo Elworth,Qi Wang,Pavan K. Kota,C. J. Barberan,Benjamin Coleman,Advait Balaji,Gaurav Gupta,Richard G. Baraniuk,Anshumali Shrivastava,Todd J. Treangen +9 more
TL;DR: The fundamentals of the most impactful probabilistic and signal processing algorithms are reviewed and more recent advances are highlighted to augment previous reviews in these areas that have taken a broader approach.
References
Book
The Probabilistic Method
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal ArticleDOI
Syntactic clustering of the Web
TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI
Min-Wise Independent Permutations
TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
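The defining property behind such a family can be checked numerically on a toy example: under a permutation drawn uniformly from the full symmetric group (which is trivially min-wise independent), every element of a set is equally likely to end up as its minimum. A small sketch, using an arbitrary 4-element universe chosen for illustration:

```python
from collections import Counter
from itertools import permutations

universe = [0, 1, 2, 3]
S = {0, 2, 3}

# Count, over all 4! = 24 permutations of the universe, which element of S
# is mapped to the smallest rank.
counts = Counter()
for perm in permutations(universe):
    rank = {x: i for i, x in enumerate(perm)}
    counts[min(S, key=rank.__getitem__)] += 1

# Min-wise independence: each of the 3 elements of S is the minimum
# equally often, 24 / 3 = 8 times.
```

In practice the full symmetric group is far too large, which is why the paper studies small families of permutations that (approximately) preserve this property.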
Proceedings Article
Finding similar files in a large file system
TL;DR: Application of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI
Copy detection mechanisms for digital documents
TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete copies or partial copies, and describes algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).