Proceedings ArticleDOI

On the resemblance and containment of documents

Andrei Z. Broder
- 11 Jun 1997 - 
- pp 21-29
TLDR
The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that could be done independently for each document.
Abstract
Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of "roughly the same" and "roughly contained." The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints.
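To make this concrete, below is a minimal sketch (not code from the paper), assuming the usual shingle-based definitions: resemblance as the Jaccard coefficient of the two shingle sets, containment as the fraction of A's shingles that also occur in B, and a fixed-size sample built from the numerically smallest hashed shingles. SHA-1 stands in for the Rabin fingerprints, and the shingle length w and sample size s are illustrative choices.

```python
import hashlib
import re


def shingles(text, w=4):
    """Set of contiguous w-word shingles of a document."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + w]) for i in range(max(len(words) - w + 1, 1))}


def resemblance(a, b, w=4):
    """r(A, B) = |S(A) ∩ S(B)| / |S(A) ∪ S(B)| on the full shingle sets."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa | sb)


def containment(a, b, w=4):
    """c(A, B) = |S(A) ∩ S(B)| / |S(A)|: how much of A appears in B."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa)


def sketch(text, w=4, s=100):
    """Fixed-size sample of a document: the s numerically smallest hashed
    shingles (SHA-1 here is a stand-in for Rabin fingerprints)."""
    hashed = sorted(int(hashlib.sha1(g.encode()).hexdigest(), 16)
                    for g in shingles(text, w))
    return set(hashed[:s])


def estimated_resemblance(sketch_a, sketch_b, s=100):
    """Estimate r(A, B) from the two fixed-size sketches alone: take the s
    smallest values of their union and count how many lie in both sketches."""
    smallest = set(sorted(sketch_a | sketch_b)[:s])
    return len(smallest & sketch_a & sketch_b) / len(smallest)


doc_a = "the quick brown fox jumps over the lazy dog near the river bank"
doc_b = "the quick brown fox jumps over the lazy dog near the river"
print(round(resemblance(doc_a, doc_b), 2))                           # 0.9
print(round(estimated_resemblance(sketch(doc_a), sketch(doc_b)), 2))  # 0.9
```

Because estimated_resemblance looks only at the two fixed-size sketches, each document can be sampled once, independently, and its sketch reused for comparison against any other document.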

Citations

Generalized and efficient outlier detection for spatial, temporal, and high-dimensional data mining

TL;DR: Knowledge Discovery in Databases (KDD) is the process of extracting non-trivial patterns from large databases, with the goal that these patterns are previously unknown, potentially useful, statistically sound, and understandable.
Proceedings ArticleDOI

Optimal Las Vegas Locality Sensitive Data Structures

TL;DR: It is shown that approximate similarity (near neighbour) search can be solved in high dimensions with performance matching state-of-the-art (data-independent) Locality Sensitive Hashing, but with a guarantee of no false negatives.
Proceedings ArticleDOI

Graph Neural Networks for Link Prediction with Subgraph Sketching

TL;DR: A novel full-graph GNN called ELPH (Efficient Link Prediction with Hashing) is proposed; it passes subgraph sketches as messages to approximate the key components of SGNNs without explicit subgraph construction and is provably more expressive than Message Passing GNNs (MPNNs).

Learning to Hash for Indexing Big Data - A Survey

This paper provides readers with a systematic understanding of insights, pros, and cons of the emerging indexing and search methods for Big Data.

TL;DR: A comprehensive survey of the learning-to-hash framework and representative techniques of various types, including unsupervised, semisupervised, and supervised, is provided and the future direction and trends of research are discussed.
Journal ArticleDOI

Index Structures for Fast Similarity Search for Binary Vectors

TL;DR: Index structures for fast similarity search over objects represented by binary vectors are presented, based on hash tables with similarity-preserving hashing as well as on tree structures, neighborhood graphs, and distributed neural autoassociative memory.
References
Book

The Probabilistic Method

Joel Spencer
TL;DR: A particular set of problems - all dealing with “good” colorings of an underlying set of points relative to a given family of sets - is explored.
Journal ArticleDOI

Syntactic clustering of the Web

TL;DR: An efficient way to determine the syntactic similarity of files is developed and applied to every document on the World Wide Web, and a clustering of all the documents that are syntactically similar is built.
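The entry above does not spell out how the clustering is built; as a generic illustration (an assumption, not necessarily that paper's exact pipeline), documents whose pairwise resemblance exceeds a threshold can be grouped with union-find. Exact shingle sets are used here for brevity instead of the fixed-size sketches described earlier, and the threshold and shingle length are illustrative choices.

```python
import re


def shingle_set(text, w=4):
    """Contiguous w-word shingles of a document."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + w]) for i in range(max(len(words) - w + 1, 1))}


def cluster_by_resemblance(docs, w=4, threshold=0.5):
    """Group documents whose pairwise resemblance exceeds `threshold`."""
    parent = list(range(len(docs)))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    sets = [shingle_set(d, w) for d in docs]
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            r = len(sets[i] & sets[j]) / len(sets[i] | sets[j])
            if r > threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(docs)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())


print(cluster_by_resemblance(["a b c d e f g", "a b c d e f h", "x y z w v u"]))
# [[0, 1], [2]]
```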
Journal ArticleDOI

Min-Wise Independent Permutations

TL;DR: This research was motivated by the fact that such a family of permutations is essential to the algorithm used in practice by the AltaVista web index software to detect and filter near-duplicate documents.
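As a small, self-contained check of the property behind this (standard background, not a claim taken from the entry above): for a random permutation of the universe, the probability that two sets share their minimum-ranked element equals their resemblance |A ∩ B| / |A ∪ B|, which is why one min-wise sample per permutation suffices for estimation. The sets and trial count below are illustrative.

```python
import random


def minwise_agreement(a, b, universe, trials=20000, seed=0):
    """Empirical estimate of Pr[min(pi(A)) == min(pi(B))] over random
    permutations pi of the universe; this equals |A ∩ B| / |A ∪ B|."""
    rng = random.Random(seed)
    universe = list(universe)
    hits = 0
    for _ in range(trials):
        rank = {x: i for i, x in enumerate(rng.sample(universe, len(universe)))}
        if min(rank[x] for x in a) == min(rank[x] for x in b):
            hits += 1
    return hits / trials


a = {"ab", "bc", "cd", "de"}
b = {"bc", "cd", "de", "ef"}
print(minwise_agreement(a, b, a | b))  # close to 0.6
print(len(a & b) / len(a | b))         # exactly 0.6
```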
Proceedings Article

Finding similar files in a large file system

TL;DR: Applications of sif can be found in file management, information collecting, program reuse, file synchronization, data compression, and maybe even plagiarism detection.
Proceedings ArticleDOI

Copy detection mechanisms for digital documents

TL;DR: This paper proposes a system for registering documents and then detecting copies, either complete or partial, and describes algorithms for such detection as well as metrics for evaluating detection mechanisms (covering accuracy, efficiency, and security).