Efficient peer-to-peer keyword searching
Patrick Reynolds, Amin Vahdat
pp. 21–40
TLDR
A distributed search engine based on a distributed hash table is designed and analyzed, and the simulation results predict that the search engine can answer an average query in under one second, using under one kilobyte of bandwidth.

Abstract
The recent file storage applications built on top of peer-to-peer distributed hash tables lack search capabilities. We believe that search is an important part of any document publication system. To that end, we have designed and analyzed a distributed search engine based on a distributed hash table. Our simulation results predict that our search engine can answer an average query in under one second, using under one kilobyte of bandwidth.
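The abstract describes mapping keywords onto a distributed hash table and intersecting per-keyword result sets to answer multi-keyword queries. A minimal in-process sketch of that idea (a toy stand-in, not the authors' implementation; `node_for`, `publish`, and `search` are illustrative names):

```python
import hashlib

NUM_NODES = 16  # size of the toy DHT ring

def node_for(keyword):
    # Map a keyword to the node responsible for its inverted index,
    # via a hash mod ring size (a stand-in for a real DHT lookup).
    digest = hashlib.sha1(keyword.encode()).hexdigest()
    return int(digest, 16) % NUM_NODES

# Global dict simulating per-node inverted indexes:
# (node, keyword) -> set of document IDs.
index = {}

def publish(doc_id, keywords):
    # Insert the document into the posting list of each of its keywords.
    for kw in keywords:
        index.setdefault((node_for(kw), kw), set()).add(doc_id)

def search(keywords):
    # Fetch each keyword's posting list from its node and intersect them.
    postings = [index.get((node_for(kw), kw), set()) for kw in keywords]
    return set.intersection(*postings) if postings else set()

publish("doc1", ["peer", "search"])
publish("doc2", ["peer", "storage"])
print(search(["peer", "search"]))  # {'doc1'}
```

In the real system each posting list lives on a different node, so the intersection is computed by shipping (compressed) lists between nodes; that network cost is what the paper's bandwidth results measure.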
Citations
Proceedings Article
Efficient multiple-keyword search in DHT-based decentralized systems
TL;DR: A structured peer-to-peer (P2P) search approach for distributed hash table (DHT) based decentralized systems that uses multiple keywords to describe file content more accurately and improves retrieval precision, which is important from the user's perspective as file sharing becomes more popular.
Building a peer-to-peer full-text Web search engine with highly discriminative keys
TL;DR: A novel indexing technique which maintains a global key index in structured P2P overlays, thus limiting the size of the global index, while ensuring scalable search cost and results show reasonable indexing costs while the retrieval quality is comparable to standard centralized solutions with TF-IDF ranking.
Book ChapterDOI
Using information retrieval techniques to route queries in an infobeacons network
TL;DR: The InfoBeacons system, in which a peer-to-peer network of beacons cooperates to route queries to the best information sources, is presented, and alternative architectures for routing queries between beacons are examined.
Book ChapterDOI
Improving query correctness using centralized probably approximately correct (PAC) search
TL;DR: This work proposes a modification to the PAC architecture, introducing a centralized query coordination node, and proposes two heuristic algorithms to iteratively improve the performance of PAC search.
Proceedings Article
Distributed Service Discovery with Guarantees in Peer-to-Peer Networks using Distributed Hashtables.
TL;DR: A way to evaluate the proposed protocol for decentralized service discovery with guarantees by running simulations in comparison with a straightforward way of achieving the same goal in an unstructured, Gnutella-like network.
References
Journal ArticleDOI
The anatomy of a large-scale hypertextual Web search engine
Sergey Brin,Lawrence Page +1 more
TL;DR: This paper provides an in-depth description of Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext and looks at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
Proceedings Article
The PageRank Citation Ranking : Bringing Order to the Web
TL;DR: This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them, and shows how to efficiently compute PageRank for large numbers of pages.
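The PageRank computation summarized above is typically carried out by power iteration: each page's rank is repeatedly redistributed along its outgoing links. A small sketch under that standard formulation (the `pagerank` function and toy graph are illustrative, not from the cited paper):

```python
def pagerank(links, damping=0.85, iters=50):
    # links: node -> list of outgoing neighbors.
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}  # start from a uniform distribution
    for _ in range(iters):
        # Base (teleport) probability for every node.
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                # Each page shares its rank equally among its out-links.
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:
                # Dangling node: spread its rank evenly over all nodes.
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(graph)
```

The ranks form a probability distribution (they sum to 1), and pages with more incoming rank mass, such as `a` here, score higher.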
Proceedings ArticleDOI
Chord: A scalable peer-to-peer lookup service for internet applications
TL;DR: Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
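Chord's core operation is mapping a key to the node that owns it: both keys and nodes are hashed onto a circular identifier space, and a key belongs to the first node clockwise from it (its successor). A minimal sketch of that assignment rule (names like `chord_id` and the 16-bit ring are illustrative; real Chord uses 160-bit identifiers and finger tables for O(log N) routing):

```python
import hashlib
from bisect import bisect_left

RING_BITS = 16
RING = 2 ** RING_BITS

def chord_id(name):
    # Hash a node name or key onto the identifier ring.
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

def successor(node_ids, key_id):
    # The node responsible for a key is the first node at or after it,
    # wrapping around the ring if necessary.
    ids = sorted(node_ids)
    i = bisect_left(ids, key_id)
    return ids[i % len(ids)]

nodes = [chord_id(f"node{i}") for i in range(8)]
owner = successor(nodes, chord_id("peer-to-peer"))
```

This centralized lookup is what each Chord node approximates locally; the finger table lets a node reach the correct successor in logarithmically many hops instead of scanning the whole ring.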
Journal ArticleDOI
Space/time trade-offs in hash coding with allowable errors
TL;DR: Analysis of the paradigm problem demonstrates that allowing a small number of test messages to be falsely identified as members of the given set will permit a much smaller hash area to be used without increasing reject time.
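The structure analyzed above is the Bloom filter: a bit array plus k hash functions that trades a small false-positive rate for a much smaller memory footprint, with no false negatives. A compact sketch (the `BloomFilter` class and its parameters are illustrative, not from the cited paper):

```python
import hashlib

class BloomFilter:
    def __init__(self, m=256, k=3):
        # m bits of state, k hash functions; bits packed into one int.
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k bit positions by salting the hash with an index.
        for i in range(self.k):
            h = hashlib.sha1(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        # All k bits set => "probably present" (may be a false positive);
        # any bit clear => definitely absent (never a false negative).
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("doc1")
bf.add("doc2")
```

This trade-off is why Bloom filters appear in DHT search systems: posting lists can be summarized and shipped between nodes in far fewer bits, at the cost of occasionally including a document that does not match.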
Related Papers
Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems
Antony Rowstron, Peter Druschel