Efficient peer-to-peer keyword searching
Patrick Reynolds, Amin Vahdat, et al.
pp. 21-40
TLDR
A distributed search engine based on a distributed hash table is designed and analyzed; the simulation results predict that the search engine can answer an average query in under one second, using under one kilobyte of bandwidth.
Abstract
The recent file storage applications built on top of peer-to-peer distributed hash tables lack search capabilities. We believe that search is an important part of any document publication system. To that end, we have designed and analyzed a distributed search engine based on a distributed hash table. Our simulation results predict that our search engine can answer an average query in under one second, using under one kilobyte of bandwidth.
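The abstract's approach, a keyword index layered on a distributed hash table, can be illustrated with a toy sketch: each keyword's posting list lives on the node responsible for that keyword's hash, and a multi-keyword query intersects the posting lists fetched from those nodes. The node count, document IDs, and in-process "nodes" below are illustrative assumptions, not the paper's implementation.

```python
import hashlib

NUM_NODES = 8  # illustrative ring size, not from the paper

def node_for(keyword: str) -> int:
    """Map a keyword to the DHT node responsible for its hash."""
    digest = hashlib.sha1(keyword.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_NODES

# Per-node inverted index: node id -> {keyword -> set of document ids}
index = {n: {} for n in range(NUM_NODES)}

def publish(doc_id: str, keywords: list[str]) -> None:
    """Insert the document id into the posting list on each keyword's node."""
    for kw in keywords:
        index[node_for(kw)].setdefault(kw, set()).add(doc_id)

def search(keywords: list[str]) -> set[str]:
    """AND query: intersect posting lists fetched from each keyword's node."""
    postings = [index[node_for(kw)].get(kw, set()) for kw in keywords]
    return set.intersection(*postings) if postings else set()

publish("doc1", ["peer", "search"])
publish("doc2", ["peer", "storage"])
print(search(["peer", "search"]))  # only doc1 carries both keywords
```

In a deployed system each posting-list fetch is a network transfer, which is why per-query bandwidth matters; the Bloom filter entries in the reference list below suggest one standard way to compress such transfers.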
Citations
Journal ArticleDOI
Adapting a pure decentralized peer-to-peer protocol for grid services invocation
Domenico Talia, Paolo Trunfio, et al.
TL;DR: Gridnut, as described in this paper, is a modified Gnutella discovery protocol suited to OGSA Grids; it uses message buffering and merging techniques to make Grid Services effective as a way to exchange messages in a P2P fashion.
Book ChapterDOI
Resource discovery considering semantic properties in data grid environments
TL;DR: The performance evaluation of the proposed method showed a reduction in discovery cost, especially for inter-ontology queries, and a significant reduction in the system's maintenance costs, especially when peers frequently join or leave the system.
Journal ArticleDOI
Optimizing hyperspace hashing via analytical modelling and adaptation
TL;DR: This paper first shows that a misconfiguration may significantly affect the performance of the system, then derives a performance model that provides key insights into the behaviour of hyperspace hashing and is used to automatically and dynamically select the best configuration.
Patent
Generating and using a dynamic bloom filter
TL;DR: A dynamic Bloom filter, as described in this patent, consists of a cascaded set of Bloom filters, each of which is queried individually to return a positive or negative match; the system recursively generates additional Bloom filters as needed for items remaining after the initial Bloom filter is filled, checking items to eliminate duplicates.
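The cascade idea in that summary can be sketched as follows: a dynamic Bloom filter holds a list of fixed-capacity filters, appends a new one when the current filter fills, answers membership by querying every filter in the cascade, and skips duplicates before inserting. The filter sizes, hash construction, and capacity policy here are assumptions for illustration; the patent's actual parameters and structure may differ.

```python
import hashlib

class BloomFilter:
    """A single fixed-size Bloom filter with k deterministic hash probes."""

    def __init__(self, m: int = 1024, k: int = 3):
        self.bits = [False] * m
        self.m, self.k, self.count = m, k, 0

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = True
        self.count += 1

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

class DynamicBloomFilter:
    """A cascade of Bloom filters that grows as items keep arriving."""

    def __init__(self, capacity_per_filter: int = 100):
        self.capacity = capacity_per_filter
        self.filters = [BloomFilter()]

    def add(self, item: str) -> None:
        if item in self:                       # eliminate duplicates first
            return
        if self.filters[-1].count >= self.capacity:
            self.filters.append(BloomFilter())  # current filter is full: grow
        self.filters[-1].add(item)

    def __contains__(self, item: str) -> bool:
        # A query checks every filter in the cascade.
        return any(item in f for f in self.filters)

dbf = DynamicBloomFilter(capacity_per_filter=2)
for word in ["a", "b", "c"]:
    dbf.add(word)
print(len(dbf.filters), "a" in dbf)  # cascade grew to 2 filters; "a" is found
```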
Book ChapterDOI
Peer-to-Peer Keyword Search: A Retrospective
TL;DR: The chapter surveys the history of the field, examines the lasting impacts of peer-to-peer research, and offers at least one view of where the field goes from here.
References
Journal ArticleDOI
The anatomy of a large-scale hypertextual Web search engine
Sergey Brin, Lawrence Page, et al.
TL;DR: This paper provides an in-depth description of Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext and looks at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
Proceedings Article
The PageRank Citation Ranking : Bringing Order to the Web
TL;DR: This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them, and shows how to efficiently compute PageRank for large numbers of pages.
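The computation that summary mentions is usually done by power iteration; the sketch below runs it on a made-up three-page link graph. The damping factor 0.85 is the value commonly associated with the paper; the graph and iteration count are arbitrary illustrative choices.

```python
def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """Power iteration: links maps each page to the pages it links to."""
    n = len(links)
    rank = {p: 1.0 / n for p in links}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in links}
        for page, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in links:
                    new[q] += damping * rank[page] / n
            else:         # otherwise split rank across outgoing links
                for q in outs:
                    new[q] += damping * rank[page] / len(outs)
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
print({p: round(r, 3) for p, r in ranks.items()})
# C, linked to by both A and B, accumulates the most rank; ranks sum to ~1
```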
Journal Article
The Anatomy of a Large-Scale Hypertextual Web Search Engine.
Sergey Brin, Lawrence Page, et al.
TL;DR: Google as discussed by the authors is a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext and is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems.
Proceedings ArticleDOI
Chord: A scalable peer-to-peer lookup service for internet applications
TL;DR: Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
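The logarithmic lookup cost reported above can be sketched in a single process, assuming a static ring: each node keeps a finger table pointing at the successor of id + 2^i, and a lookup repeatedly jumps to the closest finger preceding the key. The node ids and the 8-bit identifier space are made-up examples; real Chord also handles joins, failures, and stabilization, which this sketch omits.

```python
import bisect

M = 8                     # identifier bits; ids live on a ring of size 2**M
RING = 2 ** M
nodes = sorted([5, 20, 60, 100, 150, 200, 240])   # a static example ring

def successor(key: int) -> int:
    """First node clockwise from key on the ring."""
    i = bisect.bisect_left(nodes, key % RING)
    return nodes[i % len(nodes)]

# Finger i of node n points at the successor of n + 2**i.
fingers = {n: [successor(n + 2 ** i) for i in range(M)] for n in nodes}

def in_interval(x: int, a: int, b: int) -> bool:
    """True if x lies in the half-open ring interval (a, b]."""
    x, a, b = x % RING, a % RING, b % RING
    return a < x <= b if a < b else (x > a or x <= b)

def lookup(start: int, key: int) -> tuple[int, int]:
    """Route from `start` to the node owning `key`; return (owner, hops)."""
    node, hops = start, 0
    while True:
        succ = successor(node + 1)
        if in_interval(key, node, succ):
            return succ, hops         # key falls between node and its successor
        nxt = succ                    # fall back to the plain successor
        for f in reversed(fingers[node]):
            if in_interval(f, node, key - 1):   # f strictly precedes the key
                nxt = f
                break
        node, hops = nxt, hops + 1

print(lookup(5, 130))  # node 150 owns key 130; here one finger hop suffices
```

Each finger hop at least halves the remaining ring distance to the key, which is the source of the O(log N) communication cost the TL;DR reports.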
Journal ArticleDOI
Space/time trade-offs in hash coding with allowable errors
TL;DR: Analysis of the paradigm problem demonstrates that allowing a small number of test messages to be falsely identified as members of the given set will permit a much smaller hash area to be used without increasing reject time.
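The trade-off that summary describes has a standard closed form: for an m-bit hash area, k hash probes, and n stored messages, the false-positive probability is approximately (1 - e^(-kn/m))^k. The sketch below, with arbitrary example sizes, shows numerically how shrinking the hash area raises the rate of falsely identified members.

```python
import math

def false_positive_rate(m: int, n: int, k: int) -> float:
    """Approximate Bloom filter false-positive probability."""
    return (1.0 - math.exp(-k * n / m)) ** k

n = 1000  # stored items (illustrative)
for bits_per_item in (4, 8, 16):
    m = bits_per_item * n
    k = max(1, round(math.log(2) * m / n))  # near-optimal probes: (m/n) ln 2
    # The error rate falls sharply as the hash area per item grows.
    print(bits_per_item, k, round(false_positive_rate(m, n, k), 4))
```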
Related Papers (5)
Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems
Antony Rowstron, Peter Druschel, et al.