Proceedings ArticleDOI
On the resemblance and containment of documents
TL;DR
The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling done independently for each document.
Abstract
Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B), which seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed-size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
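The sampling idea from the abstract can be illustrated with a minimal sketch. This is not the paper's implementation (which uses Rabin fingerprints and min-wise independent permutations); it substitutes SHA-1-based hash functions, and all function names are hypothetical. Each document is reduced to a set of word shingles, a fixed-size sketch keeps the minimum hash of that set under k hash functions, and resemblance r(A, B) = |A ∩ B| / |A ∪ B| is estimated as the fraction of minima on which the two sketches agree.

```python
import hashlib
import re


def shingles(text, w=4):
    """The set of contiguous w-word sequences ("shingles") of a document."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}


def _h(shingle, seed):
    """One of k seeded hash functions (SHA-1 here, for illustration only)."""
    return int(hashlib.sha1(f"{seed}:{shingle}".encode()).hexdigest(), 16)


def min_sample(shingle_set, k=64):
    """Fixed-size sketch: the minimum hash value under each of k hash functions."""
    return [min(_h(s, seed) for s in shingle_set) for seed in range(k)]


def resemblance_estimate(text_a, text_b, k=64):
    """Estimate r(A, B): for each hash function, the two minima agree with
    probability |A ∩ B| / |A ∪ B|, so the agreement rate estimates r."""
    sketch_a = min_sample(shingles(text_a), k)
    sketch_b = min_sample(shingles(text_b), k)
    return sum(a == b for a, b in zip(sketch_a, sketch_b)) / k
```

Note that the sketch size k is fixed regardless of document length, which is the property the abstract highlights: resemblance can be estimated from a fixed-size sample per document.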
Citations
Proceedings Article
Sparse indexing: large scale, inline deduplication using sampling and locality
Mark Lillibridge, Kave Eshghi, Deepavali Bhagwat, Vinay Deolalikar, Greg Trezise, Peter Thomas Camble, et al.
TL;DR: Sparse indexing, a technique that uses sampling and exploits the inherent locality within backup streams to solve the chunk-lookup disk bottleneck problem that inline, chunk-based deduplication schemes face in large-scale backup, is presented.
Journal ArticleDOI
Learning to Hash for Indexing Big Data—A Survey
TL;DR: Learning to hash is one of the most popular methods for approximate nearest neighbor (ANN) search in big data applications; this survey covers approaches that exploit information such as data distributions or class labels when optimizing the hash codes or functions.
Book ChapterDOI
Identifying and Filtering Near-Duplicate Documents
TL;DR: The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.
Proceedings ArticleDOI
Extreme Binning: Scalable, parallel deduplication for chunk-based file backup
TL;DR: Extreme Binning is presented, a scalable deduplication technique for non-traditional backup workloads that are made up of individual files with no locality among consecutive files in a given window of time.
Journal ArticleDOI
A protocol-independent technique for eliminating redundant network traffic
Neil Spring, David Wetherall, et al.
TL;DR: It is found that dynamic content, streaming media, and other traffic that is not caught by today's Web caches is nonetheless likely to derive from similar information, and similarity detection techniques are adapted to the problem of designing a system to eliminate redundant transfers.
References
Book
The Probabilistic Method
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal ArticleDOI
Syntactic clustering of the Web
TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI
Min-Wise Independent Permutations
TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
Proceedings Article
Finding similar files in a large file system
TL;DR: Application of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI
Copy detection mechanisms for digital documents
TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete copies or partial copies, and describes algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security).