Proceedings ArticleDOI
On the resemblance and containment of documents
TL;DR: The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document.
Abstract:
Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
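The abstract's sampling idea can be sketched in a few lines: represent each document as a set of word shingles, then keep the minimum value of the set under many independent hash functions as a fixed-size sample whose agreement rate estimates the resemblance. This is a minimal illustration, not the paper's implementation; the paper uses Rabin fingerprints, while the salted SHA-1 hash, word-level shingles, and all function names here are assumptions made for the sketch.

```python
import hashlib

def shingles(text, w=3):
    """Return the set of all contiguous w-word shingles of the text."""
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def resemblance(a, b):
    """r(A, B) = |A ∩ B| / |A ∪ B| over the shingle sets."""
    return len(a & b) / len(a | b)

def containment(a, b):
    """c(A, B) = |A ∩ B| / |A|: the fraction of A also present in B."""
    return len(a & b) / len(a)

def _h(salt, item):
    """Deterministic 64-bit hash of (salt, item); a stand-in for one
    random permutation of the shingle universe."""
    digest = hashlib.sha1(f"{salt}|{item}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def minhash_signature(s, num_hashes=128):
    """Fixed-size sample of a set: its minimum hash value under each
    of num_hashes salted hash functions."""
    return [min(_h(salt, x) for x in s) for salt in range(num_hashes)]

def estimated_resemblance(sig_a, sig_b):
    """Each signature slot agrees with probability r(A, B), so the
    fraction of matching slots is an unbiased estimate of r."""
    matches = sum(x == y for x, y in zip(sig_a, sig_b))
    return matches / len(sig_a)

a = shingles("a rose is a rose is a rose")
b = shingles("a rose is a flower")
print(resemblance(a, b))                  # exact: 0.5
print(containment(b, a))                  # exact: 2/3
print(estimated_resemblance(minhash_signature(a), minhash_signature(b)))
```

Because the samples are computed independently per document, two crawlers can build signatures without ever exchanging the documents themselves, and the signatures alone suffice to estimate resemblance.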
Citations
Proceedings ArticleDOI
Hierarchical substring caching for efficient content distribution to low-bandwidth clients
Utku Irmak, Torsten Suel, et al.
TL;DR: A hierarchical substring caching technique is proposed that provides significant savings over the basic substring caching approach; it is compared to a widely studied alternative based on delta compression, and the paper shows how to integrate the two for best overall performance.
Proceedings ArticleDOI
Stochastic simulation of time-biased gain
TL;DR: Stochastic simulation is used to numerically approximate time-biased gain, a unifying framework for information retrieval evaluation that generalizes many traditional effectiveness measures while accommodating aspects of user behavior not captured by these measures.
Proceedings ArticleDOI
BagMinHash - Minwise Hashing Algorithm for Weighted Sets
TL;DR: BagMinHash is a new minwise hashing algorithm for weighted sets that can be orders of magnitude faster than the current state of the art, without any particular restrictions or assumptions on weights or data dimensionality.
Patent
Software similarity searching
TL;DR: In this article, a similarity analysis of software is performed using pairwise component analysis: each pair consisting of the input file and a file from a corpus is categorized as either a possible match or a mismatch.
Proceedings ArticleDOI
Performance evaluation of similarity measures on similar and dissimilar text retrieval
TL;DR: This paper evaluates the performance of eight popular similarity measures on four degrees of textual similarity using a corpus of plagiarised texts, and shows that most of the measures perform equally well on highly similar texts, with the exception of Euclidean distance and Jensen-Shannon divergence, which performed worse.
References
Book
The Probabilistic Method
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal ArticleDOI
Syntactic clustering of the Web
TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI
Min-Wise Independent Permutations
TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
Proceedings Article
Finding similar files in a large file system
TL;DR: Applications of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI
Copy detection mechanisms for digital documents
TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete copies or partial copies, and describes algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).