Proceedings ArticleDOI

On the resemblance and containment of documents

Andrei Z. Broder
11 Jun 1997
pp. 21-29
TLDR
The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document.
Abstract
Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
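To make the sampling idea concrete, the following is a minimal Python sketch, assuming word-level shingles and a generic 64-bit hash (hashlib.blake2b) standing in for Rabin fingerprints; the names shingles, fingerprint, sketch, and resemblance_estimate and the parameters w and s are illustrative, not taken from the paper.

import hashlib

def shingles(text, w=4):
    # The set of contiguous w-word shingles of a document.
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def fingerprint(shingle):
    # 64-bit hash of a shingle; a stand-in for a Rabin fingerprint.
    digest = hashlib.blake2b(shingle.encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big")

def sketch(text, w=4, s=100):
    # Keep the s smallest fingerprints: a fixed-size sample of the document.
    return set(sorted(fingerprint(sh) for sh in shingles(text, w))[:s])

def resemblance_estimate(sketch_a, sketch_b, s=100):
    # Estimate r(A, B) as the fraction of the s smallest fingerprints of the
    # union that belong to both sketches.
    smallest_union = set(sorted(sketch_a | sketch_b)[:s])
    return len(smallest_union & sketch_a & sketch_b) / len(smallest_union)

# Example: two near-duplicate sentences.
a = "the quick brown fox jumps over the lazy dog near the river bank today"
b = "the quick brown fox jumps over the lazy dog near the old river bank today"
print(resemblance_estimate(sketch(a, w=3, s=8), sketch(b, w=3, s=8), s=8))

Note that the fixed-size sample suffices only for resemblance; as the paper observes, estimating containment c(A, B) instead calls for a sample whose size grows with the document, such as keeping every fingerprint divisible by a fixed modulus.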



Citations

Large-scale machine learning for classification and search

TL;DR: This thesis proposes several key methods for building scalable semi-supervised kernel machines that can handle real-world linearly inseparable data, and presents a novel kernel-based supervised hashing model that requires only a limited amount of supervision, given as similar and dissimilar data pairs, yet achieves high hashing quality at a practically feasible training cost.

Algorithmic Techniques for Processing Data Streams

TL;DR: More evolved procedures that build on the basic methods of sampling and sketching are presented, and algorithmic schemes for similarity mining, the concept of group testing, and techniques for clustering and summarizing data streams are examined.
Journal ArticleDOI

Dating medieval English charters

TL;DR: Computer-automated statistical methods are proposed for dating English documents from the tenth through the early fourteenth centuries, with the goals of reducing the considerable effort required to date them manually and of improving the accuracy of the assigned dates.
Journal ArticleDOI

Code analyzer for an online course management system

TL;DR: This research implements an Online Detection Plagiarism System (ODPS) that provides a web-based user interface, and shows that a combined approach outperforms any single approach for source code written in various styles.
Journal ArticleDOI

Secure computation of functionalities based on Hamming distance and its application to computing document similarity

TL;DR: This paper presents protocols that are secure, in the sense of full simulatability, against malicious adversaries, and shows applications of HDOT, including protocols for checking the similarity of documents without disclosing additional information about them.
References
Book

The Probabilistic Method

Joel Spencer
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal ArticleDOI

Syntactic clustering of the Web

TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
Journal ArticleDOI

Min-Wise Independent Permutations

TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
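For context, the standard identity behind this use of min-wise independent permutations (a known result restated here, not text from this page): if \pi is drawn uniformly from such a family and S_A, S_B are the shingle sets of documents A and B, then

\Pr\bigl[\min \pi(S_A) = \min \pi(S_B)\bigr] = \frac{|S_A \cap S_B|}{|S_A \cup S_B|} = r(A, B),

so comparing the minima of independently permuted shingle sets gives an unbiased estimator of the resemblance.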
Proceedings Article

Finding similar files in a large file system

TL;DR: Applications of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI

Copy detection mechanisms for digital documents

TL;DR: This paper proposes a system for registering documents and then detecting copies, whether complete or partial, and describes algorithms for such detection as well as the metrics required for evaluating detection mechanisms, covering accuracy, efficiency, and security.