Andrei Z. Broder

Researcher at Google

Publications -  241
Citations -  28441

Andrei Z. Broder is an academic researcher at Google. He has contributed to research topics including Web search queries and Web pages. He has an h-index of 67 and has co-authored 241 publications receiving 27310 citations. Previous affiliations of Andrei Z. Broder include AltaVista and IBM.

Papers
Patent

System and method for monitoring web pages by comparing generated abstracts

TL;DR: In this patent, a set of documents is stored in the memories of server computers, which can be connected to one another by a network such as the Internet. The search engine also maintains a first abstract for each indexed document; this abstract is highly dependent on the document's content.
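A minimal sketch of the general idea, not the patented method itself: keep a content-dependent abstract (here simply a hash of the normalized page text, an assumed choice) for each indexed page, then regenerate the abstract from a freshly fetched copy and compare the two to detect changes. The index structure and normalization below are hypothetical.

```python
import hashlib
import re

def make_abstract(page_text: str) -> str:
    """Build a compact, content-dependent abstract of a page.

    Here the abstract is simply a SHA-256 digest of the normalized text;
    the patent describes its own abstract-generation scheme.
    """
    normalized = re.sub(r"\s+", " ", page_text).strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def page_changed(stored_abstract: str, current_text: str) -> bool:
    """Compare the stored abstract with one generated from the current page."""
    return make_abstract(current_text) != stored_abstract

# Hypothetical usage: the index keeps one abstract per indexed URL.
index = {"https://example.com/": make_abstract("Hello,  World!")}
print(page_changed(index["https://example.com/"], "Hello, World!"))    # False (only whitespace differs)
print(page_changed(index["https://example.com/"], "Goodbye, World!"))  # True  (content changed)
```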
Patent

System with a plurality of hash tables each using different adaptive hashing functions

TL;DR: In this patent, a data processing system and method, particularly useful for network address lookup in interconnected local area networks, uses a family of hashing algorithms to implement a dictionary; the scheme is especially advantageous when the underlying hardware allows parallel memory reads in different memory banks.
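A minimal sketch of a dictionary spread over several hash tables, each with its own hash function, in the spirit of the design described above; the table count, the seeded SHA-1 hash functions, and the least-loaded placement rule are assumptions, not the patented algorithm. Because every lookup probes each table with an independent hash function, the probes map naturally onto parallel reads from separate memory banks.

```python
import hashlib

class MultiTableDict:
    """Dictionary spread over several hash tables, each using a different
    hash function (here SHA-1 with a per-table seed, an assumed choice).
    Lookups probe every table; on hardware with independent memory banks
    those probes could be issued in parallel."""

    def __init__(self, num_tables=4, buckets_per_table=64):
        self.num_tables = num_tables
        self.buckets_per_table = buckets_per_table
        # One array of buckets per table; each bucket holds (key, value) pairs.
        self.tables = [[[] for _ in range(buckets_per_table)]
                       for _ in range(num_tables)]

    def _hash(self, table_idx, key):
        # A different hash function per table, obtained by seeding with the table index.
        digest = hashlib.sha1(f"{table_idx}:{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.buckets_per_table

    def insert(self, key, value):
        # Update in place if the key already lives in some table.
        for t in range(self.num_tables):
            bucket = self.tables[t][self._hash(t, key)]
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)
                    return
        # Otherwise place it in the least-loaded candidate bucket ("d-choice" placement).
        best = min(range(self.num_tables),
                   key=lambda t: len(self.tables[t][self._hash(t, key)]))
        self.tables[best][self._hash(best, key)].append((key, value))

    def lookup(self, key):
        # Probe every table with its own hash function.
        for t in range(self.num_tables):
            for k, v in self.tables[t][self._hash(t, key)]:
                if k == key:
                    return v
        return None

# Hypothetical usage, e.g. mapping MAC addresses to switch ports.
d = MultiTableDict()
d.insert("00:1a:2b:3c:4d:5e", "port 7")
print(d.lookup("00:1a:2b:3c:4d:5e"))  # port 7
```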
Journal Article

Existence and Construction of Edge-Disjoint Paths on Expander Graphs

TL;DR: The authors prove sufficient conditions for the existence of edge-disjoint paths connecting any set of $q \leq n/(\log n)^\kappa$ disjoint pairs of vertices on any $n$-vertex bounded-degree expander, where $\kappa$ depends only on the expansion properties of the input graph and not on $n$.
Patent

Compression protocol with multiple preset dictionaries

TL;DR: In this patent, server computers store a plurality of Web pages and partition them into sets, where each set includes Web pages that are substantially similar in content; a preset compression dictionary is then generated for each set.
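A minimal sketch of the preset-dictionary effect using Python's zlib; the example dictionary and page below are assumptions, and the patent defines its own protocol for building and distributing the dictionaries. Pages in the same set share boilerplate, so seeding the compressor and decompressor with the same preset dictionary reduces the compressed size; both endpoints must agree on which dictionary was used.

```python
import zlib

# Assumed preset dictionary for one set of similar pages: boilerplate the
# pages in this set are expected to share.
preset = b"<html><head><title>Example Store</title></head><body><div class='nav'>"

page = (b"<html><head><title>Example Store</title></head><body>"
        b"<div class='nav'>Shoes on sale today</div></body></html>")

# Compress with the preset dictionary...
comp = zlib.compressobj(level=9, zdict=preset)
packed = comp.compress(page) + comp.flush()

# ...and without it, for comparison.
comp_plain = zlib.compressobj(level=9)
packed_plain = comp_plain.compress(page) + comp_plain.flush()

print(len(packed), "<", len(packed_plain))  # the preset dictionary yields a smaller output

# The decompressor must be seeded with the same preset dictionary.
decomp = zlib.decompressobj(zdict=preset)
assert decomp.decompress(packed) + decomp.flush() == page
```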
Proceedings Article

Estimating corpus size via queries

TL;DR: The main idea is to construct an unbiased, low-variance estimator that closely approximates the size of any set of documents defined by certain conditions, among them that each document in the set must match at least one query from a uniformly sampleable query pool of known size, fixed in advance.
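A minimal sketch of a query-pool estimator in the spirit of the summary above; the search interface and degree helper are hypothetical, result lists are assumed not to be truncated, and this is the generic inverse-degree importance-sampling form rather than necessarily the exact estimator of the paper. Sampling queries uniformly from a pool of known size and weighting each returned document by the inverse of its degree (the number of pool queries it matches) yields an unbiased estimate of the number of documents matched by at least one pool query.

```python
import random

def estimate_corpus_size(query_pool, search, matching_queries_in_pool, samples=1000):
    """Estimate how many documents match at least one query in `query_pool`.

    Hypothetical interface, not the paper's exact protocol:
      - search(q) returns every document matching query q (no truncation)
      - matching_queries_in_pool(doc) returns the document's "degree": how
        many queries in the pool it matches (>= 1 for any returned doc)

    For a uniformly sampled query q, the sum of 1/degree over its results
    has expectation |D| / |pool|, so scaling the sample mean by |pool|
    gives an unbiased estimate of |D|.
    """
    total = 0.0
    for _ in range(samples):
        q = random.choice(query_pool)  # uniform sample from the pool
        for doc in search(q):
            total += 1.0 / matching_queries_in_pool(doc)
    return len(query_pool) * total / samples

# Toy corpus to exercise the estimator: documents are word sets,
# and a query matches a document containing that word.
docs = [{"web", "search"}, {"search", "engine"}, {"hash", "table"}]
pool = ["web", "search", "engine", "hash"]
search = lambda q: [doc for doc in docs if q in doc]
degree = lambda doc: sum(1 for q in pool if q in doc)
print(estimate_corpus_size(pool, search, degree, samples=2000))  # close to 3.0
```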