Proceedings ArticleDOI
On the resemblance and containment of documents
TL;DR: The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document.
Abstract: Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B), which seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed-size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
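The sampling idea from the abstract can be sketched in a few lines: break each document into word-level shingles, and replace the exact set-intersection computation with a fixed-size sketch of per-hash minima. A minimal illustration, using seeded SHA-1 hashes in place of the paper's Rabin fingerprints (the shingle width and hash count are illustrative choices, not the paper's parameters):

```python
import hashlib

def shingles(text, w=3):
    """Break a document into its set of overlapping w-word shingles."""
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(max(len(words) - w + 1, 1))}

def resemblance(a, b, w=3):
    """Exact resemblance r(A, B) = |S(A) & S(B)| / |S(A) | S(B)|."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa | sb)

def sketch(shingle_set, num_hashes=64):
    """Fixed-size sample: the minimum of each seeded hash over the shingle set."""
    return [
        min(int(hashlib.sha1(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingle_set)
        for seed in range(num_hashes)
    ]

def estimated_resemblance(sketch_a, sketch_b):
    """The fraction of positions where the minima agree estimates r(A, B)."""
    return sum(a == b for a, b in zip(sketch_a, sketch_b)) / len(sketch_a)
```

Because each sketch is computed independently per document, two documents can be compared later using only their fixed-size sketches, never their full shingle sets.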
Citations
Journal ArticleDOI
De novo yeast genome assemblies from MinION, PacBio and MiSeq platforms.
Francesca Giordano, Louise Aigrain, Michael A. Quail, Paul Coupland, James K. Bonfield, Robert L. Davies, German Tischler, David K. Jackson, Thomas M. Keane, Jing Li, Jia-Xing Yue, Gianni Liti, Richard Durbin, Zemin Ning +13 more
TL;DR: This paper re-sequenced a well-characterized genome, the Saccharomyces cerevisiae S288C strain, using three different platforms: MinION, PacBio and MiSeq.
Proceedings ArticleDOI
The power of comparative reasoning
TL;DR: A family of algorithms for computing ordinal embeddings based on partial order statistics that provide a nonlinear transformation resulting in sparse binary codes that are well-suited for a large class of machine learning algorithms.
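One concrete instance of such partial-order codes is a winner-take-all style hash: repeatedly permute the feature vector and record which of the first k permuted entries is largest. The codes depend only on the relative ordering of feature values, so they are invariant to any monotonic rescaling, and they can be one-hot encoded into sparse binary form. A minimal sketch (illustrative parameters, not necessarily the paper's exact algorithm):

```python
import random

def wta_hash(x, k=4, num_codes=16, seed=0):
    """Winner-take-all hash: for each code, draw a random permutation of the
    feature indices and record the position of the maximum among the first k
    permuted entries. Only the ordering of values in x matters."""
    rng = random.Random(seed)  # fixed seed so all vectors share the same permutations
    codes = []
    for _ in range(num_codes):
        perm = list(range(len(x)))
        rng.shuffle(perm)
        window = perm[:k]
        codes.append(max(range(k), key=lambda i: x[window[i]]))
    return codes
```

For example, `wta_hash(x)` equals `wta_hash([3 * v + 1 for v in x])`, since a monotone transform never changes which windowed entry is largest.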
Proceedings ArticleDOI
Counting twig matches in a tree
Zhiyuan Chen, H. V. Jagadish, Flip Korn, Nikolaos Koudas, S. Muthukrishnan, Raymond T. Ng, Divesh Srivastava +6 more
TL;DR: This work proposes several estimation algorithms that apply set hashing and maximal overlap to estimate the number of matches of query twiglets formed using different twiglet decomposition techniques, and demonstrates that accurate and robust estimates can be achieved even with limited space.
Journal ArticleDOI
Plagiarism detection using stopword n-grams
TL;DR: It is shown that stopword n-grams reveal important information for plagiarism detection since they are able to capture syntactic similarities between suspicious and original documents and they can be used to detect the exact plagiarized passage boundaries.
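The stopword n-gram idea can be sketched as follows: discard content words, keep only the in-order sequence of stopwords, and compare the resulting n-gram profiles. The short stopword list and the Jaccard overlap measure below are illustrative assumptions, not the paper's exact method:

```python
# Illustrative stopword list; real systems use a standard list of ~50 words.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "or",
             "to", "is", "that", "it", "with", "as", "for"}

def stopword_ngrams(text, n=3):
    """Keep only the stopwords, in document order, and form overlapping n-grams."""
    seq = [w for w in text.lower().split() if w in STOPWORDS]
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def profile_similarity(a, b, n=3):
    """Jaccard similarity between the two documents' stopword n-gram profiles."""
    pa, pb = stopword_ngrams(a, n), stopword_ngrams(b, n)
    if not pa or not pb:
        return 0.0
    return len(pa & pb) / len(pa | pb)
```

Because content words are discarded, the profile captures syntactic structure rather than topic, which is what makes it robust to the word substitutions typical of plagiarized text.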
Book ChapterDOI
Optimized query execution in large search engines with global page ordering
Xiaohui Long, Torsten Suel +1 more
TL;DR: This work studies pruning techniques for query execution in large engines in the case where a global ranking of pages, as provided by PageRank or any other method, is available in addition to the standard term-based ranking, and shows that such techniques have significant potential benefit.
References
Book
The Probabilistic Method
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal ArticleDOI
Syntactic clustering of the Web
TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI
Min-Wise Independent Permutations
TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
Proceedings Article
Finding similar files in a large file system
TL;DR: Applications of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and perhaps even plagiarism detection.
Proceedings ArticleDOI
Copy detection mechanisms for digital documents
TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete copies or partial copies, and describes algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).