Proceedings ArticleDOI
On the resemblance and containment of documents
TL;DR: The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document.
Abstract:
Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
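The sampling idea from the abstract can be illustrated with a small sketch. This is an assumption-laden illustration, not the paper's implementation: MD5 stands in for Rabin fingerprints, word-level shingles of width `w` are one of several tokenizations the paper considers, and the fixed-size sample keeps the `s` smallest hash values (the MINs-style sketch).

```python
import hashlib


def shingles(text, w=4):
    """Return the set of w-word shingles (contiguous word runs) of a document."""
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(max(len(words) - w + 1, 1))}


def sketch(text, w=4, s=50):
    """Fixed-size sample: the s smallest hash values of the document's shingles.
    MD5 is a stand-in here for the Rabin fingerprints used in the paper."""
    hashes = sorted(int(hashlib.md5(g.encode()).hexdigest(), 16)
                    for g in shingles(text, w))
    return set(hashes[:s])


def resemblance(doc_a, doc_b, w=4, s=50):
    """Estimate r(A, B) from the two fixed-size sketches alone:
    among the s smallest values of the union, count those common to both."""
    sa, sb = sketch(doc_a, w, s), sketch(doc_b, w, s)
    merged = set(sorted(sa | sb)[:s])  # s smallest of the union
    return len(merged & sa & sb) / len(merged)
```

The key point the abstract makes is visible in `resemblance`: each sketch is computed independently per document, yet together they suffice to estimate the set-intersection ratio.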
Citations
Proceedings ArticleDOI
Exponential time improvement for min-wise based algorithms
TL;DR: This paper defines, and gives an efficient time and space construction of, an approximately k-min-wise independent family of hash functions, extending Indyk's construction of approximately min-wise independent families.
Journal ArticleDOI
Fast Discrepancy Identification for RFID-Enabled IoT Networks
TL;DR: This paper designs two discrepant-tag identification protocols with different optimization goals, minimum communication data and minimum communication rounds, for radio frequency identification (RFID) in the EPCglobal Network.
Server-Friendly Delta Compression for Efficient Web Access
Anubhav Savant, Torsten Suel +1 more
TL;DR: This work studies web and proxy server-friendly policies that do not require maintaining multiple older versions of a page, using only reference files accessed by the client within the last few minutes. It shows that very simple policies achieve significant benefits over gzip compression on most web accesses and can be efficiently implemented at web or proxy servers.
Patent
Delta compression of probabilistically clustered chunks of data
TL;DR: In this paper, a method and information handling system (IHS) for performing delta compression on probabilistically clustered chunks of data is described, though the method is noted as unsuitable for large data sets.
DissertationDOI
Document ranking using web evidence
TL;DR: This dissertation demonstrates how web evidence can be used to improve retrieval effectiveness for navigational search tasks; a linear combination of the two types of evidence proves particularly effective, achieving the highest retrieval effectiveness of any query-dependent evidence on navigational and Topic Distillation tasks.
References
Book
The Probabilistic Method
TL;DR: A particular set of problems, all dealing with "good" colorings of an underlying set of points relative to a given family of sets, is explored.
Journal ArticleDOI
Syntactic clustering of the Web
TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI
Min-Wise Independent Permutations
TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
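The property such a family provides can be stated briefly: if a permutation π is drawn min-wise independently, then Pr[min π(S(A)) = min π(S(B))] equals the resemblance r(A, B), so comparing the minima under many independent functions gives an unbiased estimate. A minimal illustration, assuming Python's tuple hashing with random salts as a stand-in for true min-wise independent permutations:

```python
import random


def minhash_signature(items, num_hashes=128, seed=0):
    """One minimum per salted hash function; each salt simulates a
    (pseudo-)random permutation of the universe."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash((salt, x)) for x in items) for salt in salts]


def estimate_resemblance(set_a, set_b, num_hashes=128):
    """Fraction of hash functions whose minima agree, which estimates
    |A ∩ B| / |A ∪ B|."""
    sig_a = minhash_signature(set_a, num_hashes)
    sig_b = minhash_signature(set_b, num_hashes)
    return sum(x == y for x, y in zip(sig_a, sig_b)) / num_hashes
```

Note that Python's `hash` is only a convenience here; the cited paper shows why the quality of the permutation family matters and bounds the error when the family is only approximately min-wise independent.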
Proceedings Article
Finding similar files in a large file system
TL;DR: Applications of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI
Copy detection mechanisms for digital documents
TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete copies or partial copies, and describes algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).