Proceedings ArticleDOI
On the resemblance and containment of documents
TL;DR: The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that could be done independently for each document.

Abstract: Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
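The sampling idea in the abstract can be sketched concretely: represent each document as a set of word shingles, and estimate the resemblance (the Jaccard coefficient of the two shingle sets) as the fraction of independent min-hash samples on which the two documents agree. This is a minimal illustration, not the paper's implementation — the paper uses Rabin fingerprints and min-wise independent permutations, whereas the seeded SHA-1 hashing and the function names below (`shingles`, `min_sample`, `estimate_resemblance`) are assumptions made for the sketch.

```python
import hashlib

def shingles(text, w=3):
    """The set of contiguous word w-grams ("shingles") of a document."""
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def min_sample(shingle_set, seed):
    """Minimum of a seeded hash over the set: one random sample.
    (Stands in for applying one min-wise independent permutation.)"""
    return min(hashlib.sha1(f"{seed}:{s}".encode()).hexdigest()
               for s in shingle_set)

def estimate_resemblance(doc_a, doc_b, k=20):
    """Estimate r(A, B) = |S(A) & S(B)| / |S(A) | S(B)| as the
    fraction of k independent min-samples on which A and B agree."""
    sa, sb = shingles(doc_a), shingles(doc_b)
    agree = sum(min_sample(sa, i) == min_sample(sb, i) for i in range(k))
    return agree / k
```

Each min-sample agrees with probability exactly equal to the resemblance, so averaging k of them gives an unbiased estimate from a fixed-size sample per document; containment c(A, B) replaces the union in the denominator with |S(A)|.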
Citations
Journal ArticleDOI
ACRONYM: Context Metrics for Linking People to User-Generated Media Content
TL;DR: The context metrics and combination methods that form the recommendation algorithms used by ACRONYM to identify the people represented in multimedia resources increase recommendation accuracy for the photograph annotation use case.
Journal ArticleDOI
Continuous similarity search for evolving queries
TL;DR: This work studies the novel problem of continuous similarity search for evolving queries: given a set of objects and a data stream, the last n items in the stream form an evolving query, and the top-k most similar objects are maintained continuously.
Journal ArticleDOI
OPRCP: approximate nearest neighbor binary search algorithm for hybrid data over WMSN blockchain
TL;DR: The experimental results show that, compared with other mainstream methods, the proposed OPRCP method adapts well to massive high-dimensional data of multiple types and improves the accuracy of query results.
DissertationDOI
Scaling Software Security Analysis to Millions of Malicious Programs and Billions of Lines of Code
TL;DR: It is argued that automatic code reuse detection enables an efficient data reduction of a high volume of incoming malware for downstream analysis, and enhances software security by efficiently finding known vulnerabilities across large code bases and automatically discovering highly correlated features and malware groups.
Journal ArticleDOI
Query Optimization in Arabic Plagiarism Detection: An Empirical Study
TL;DR: It is found that a systematic combination of different heuristics greatly improves the performance of the document retrieval system.
References
Book
The Probabilistic Method
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal ArticleDOI
Syntactic clustering of the Web
TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI
Min-Wise Independent Permutations
TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
Proceedings Article
Finding similar files in a large file system
TL;DR: Applications of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI
Copy detection mechanisms for digital documents
TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete copies or partial copies, and describes algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).