Proceedings ArticleDOI
On the resemblance and containment of documents
TL;DR: The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling done independently for each document.
Abstract: Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B), which seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed-size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
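The two measures can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes documents are reduced to sets of w-word shingles (as in the related syntactic-clustering work) and uses Python's built-in `hash`-backed sets in place of Rabin fingerprints.

```python
def shingles(text, w=3):
    """Return the set of w-word shingles of a document."""
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def resemblance(a, b, w=3):
    """r(A, B) = |S(A) ∩ S(B)| / |S(A) ∪ S(B)|."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa | sb)

def containment(a, b, w=3):
    """c(A, B) = |S(A) ∩ S(B)| / |S(A)|."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa)
```

Identical documents have resemblance 1, and a document fully embedded in another has containment 1 in it; the paper's contribution is estimating these ratios from small random samples rather than the full shingle sets.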
Citations
Dissertation
Real-time event detection in massive streams
TL;DR: Proposes an approach to event detection that scales to unbounded streams of text without sacrificing accuracy, enabling event detection over large streams such as Twitter, which no previous approach could handle.
Patent
Granular control over the authority of replicated information via fencing and unfencing
Dan Teodosiu, Nikolaj Bjørner, et al.
TL;DR: Presents a method and system for controlling which content takes precedence and is replicated within a replica set; the method, however, is not suited to file-based systems.
Proceedings ArticleDOI
Fast Similarity Sketching
TL;DR: Presents a new sketch that obtains essentially the best of both worlds: a fast O(t log t + |A|) expected running time together with the same strong concentration bounds as MinHash; its power is demonstrated on popular applications in large-scale classification with linear SVMs, as introduced by Li et al.
Journal ArticleDOI
AccountTrade: Accountability Against Dishonest Big Data Buyers and Sellers
Taeho Jung, Xiang-Yang Li, Wenchao Huang, Zhongying Qiao, Jianwei Qian, Linlin Chen, Junze Han, Jiahui Hou, et al.
TL;DR: Defines a uniqueness index, a new rigorous measure of data uniqueness, and presents several accountable trading protocols that enable data brokers to identify misbehaving entities when misbehavior is detected.
Journal ArticleDOI
Overlap graphs and de Bruijn graphs: data structures for de novo genome assembly in the big data era
Raffaella Rizzi, Stefano Beretta, Murray Patterson, Yuri Pirola, Marco Previtali, Gianluca Della Vedova, Paola Bonizzoni, et al.
TL;DR: Discusses the most recent advances in constructing, representing, and navigating assembly graphs for very large datasets, and explores computational techniques for compactly storing graphs while keeping all functionality intact.
References
Book
The Probabilistic Method
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal ArticleDOI
Syntactic clustering of the Web
TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI
Min-Wise Independent Permutations
TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
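The min-wise property that makes such families useful, namely that the probability two sets yield the same minimum under a random permutation equals their resemblance, can be illustrated with a small sketch. Python's built-in `hash` stands in for a truly min-wise independent family here, so the estimate is approximate:

```python
import random

def min_sample(s, seed):
    """Element of s with the smallest hash under a seeded hash function."""
    return min(s, key=lambda x: hash((seed, x)))

def estimate_resemblance(sa, sb, num_hashes=200, seed=0):
    """Fraction of seeded hash functions under which both sets agree
    on the minimum; concentrates around |A ∩ B| / |A ∪ B|."""
    rng = random.Random(seed)
    seeds = [rng.random() for _ in range(num_hashes)]
    matches = sum(min_sample(sa, s) == min_sample(sb, s) for s in seeds)
    return matches / num_hashes
```

Identical sets estimate to 1, disjoint sets to 0, and for overlapping sets the estimate tightens around the true resemblance as `num_hashes` grows; this is exactly the fixed-size-sample evaluation the abstract above refers to.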
Proceedings Article
Finding similar files in a large file system
TL;DR: Applications of sif can be found in file management, information gathering, program reuse, file synchronization, data compression, and perhaps even plagiarism detection.
Proceedings ArticleDOI
Copy detection mechanisms for digital documents
TL;DR: Proposes a system for registering documents and then detecting copies, either complete or partial; it describes algorithms for such detection and the metrics required for evaluating detection mechanisms, covering accuracy, efficiency, and security.