Proceedings ArticleDOI
On the resemblance and containment of documents
TL;DR: The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document.
Abstract: Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B), which seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed-size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
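The abstract's reduction can be sketched in a few lines: each document is mapped to its set of shingles (contiguous word w-grams), resemblance and containment become set-intersection ratios, and a fixed-size sample is obtained by taking the minimum of several random hash functions over the shingle set. This is a minimal illustration, not the paper's implementation: word-level shingles, Python's built-in `hash` salted with random integers in place of Rabin fingerprints and min-wise independent permutations, and the function names (`shingles`, `min_sketch`, etc.) are invented for this sketch.

```python
import random

def shingles(text, w=3):
    """Set of contiguous w-word shingles of a document."""
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def resemblance(a, b, w=3):
    """Exact resemblance r(A, B) = |S(A) ∩ S(B)| / |S(A) ∪ S(B)|."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa | sb)

def containment(a, b, w=3):
    """Exact containment c(A, B) = |S(A) ∩ S(B)| / |S(A)|."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa)

def min_sketch(text, num_hashes=100, w=3, seed=0):
    """Fixed-size sample: the minimum of each salted hash over the shingle
    set (salted hashes stand in for min-wise independent permutations)."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    s = shingles(text, w)
    return [min(hash((salt, sh)) for sh in s) for salt in salts]

def estimated_resemblance(sketch_a, sketch_b):
    """Fraction of coordinates where the two sketches agree; an unbiased
    estimate of r(A, B), since the two minima coincide with probability r."""
    return sum(x == y for x, y in zip(sketch_a, sketch_b)) / len(sketch_a)
```

The key property exploited here is that the sketches are computed independently per document, yet comparing any pair of sketches still estimates the pairwise resemblance.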
Citations
Book ChapterDOI
Approximate data exchange
TL;DR: The authors relax classical data exchange problems, such as consistency and type checking, to approximate versions based on Property Testing, which provides a natural framework for consistency and safety questions: approximate solutions are considered first, then exact solutions obtained with a Corrector.
Book ChapterDOI
The case of the duplicate documents measurement, search, and science
Justin Zobel, Yaniv Bernstein +1 more
TL;DR: The case of the duplicate documents is used to explore whether and when it is reasonable to claim that research is successful and to highlight a paradox of computer science research.
Journal ArticleDOI
Lexicon randomization for near-duplicate detection with I-Match
Aleksander Kolcz, Abdur Chowdhury +1 more
TL;DR: This work focuses on I-Match and presents a randomization-based technique for increasing its signature stability, with the proposed method consistently outperforming traditional I-Match by as much as 40–60% in terms of relative improvement in near-duplicate recall.
Journal Article
Conceptual similarity and graph-based method for plagiarism detection
TL;DR: A new representation method for text documents, called text graph-based representation, is discussed, which substantially outperforms modern methods for plagiarism detection.
Journal ArticleDOI
Event detection in online social networks: Methodologies, state-of-the-art, and evolution
TL;DR: This article presents a comprehensive and in-depth survey of existing work on event detection in online social networks, introducing a timeline and a taxonomy of existing methods to elaborate the development of the various technologies under the umbrella of event detection.
References
Book
The Probabilistic Method
TL;DR: A particular set of problems is explored, all dealing with "good" colorings of an underlying set of points relative to a given family of sets.
Journal ArticleDOI
Syntactic clustering of the Web
TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI
Min-Wise Independent Permutations
TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
Proceedings Article
Finding similar files in a large file system
TL;DR: Application of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI
Copy detection mechanisms for digital documents
TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete copies or partial copies, and describes algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).