SCOPE: scalable consistency maintenance in structured P2P systems
References
Chord: A scalable peer-to-peer lookup service for Internet applications
Pastry: Scalable, decentralized object location and routing for large-scale peer-to-peer systems
A scalable content-addressable network
Chord: A scalable peer-to-peer lookup protocol for Internet applications
OceanStore: An architecture for global-scale persistent storage
Frequently Asked Questions (13)
Q2. What is the way to reduce the average routing latency of a query?
Considering the topology mismatch problem between overlays and their physical layers in structured P2P systems, [25] proposed an adaptive topology adjusting method to reduce the average routing latency of a query.
Q3. How many keys is the maximum number of messages on a single node in SCOPE?
When the number of keys is 105, the maximal number of messages on a single node increases to 1410 in SCOPE, but still only about one seventh of 10406 in the centralized solution.
Q4. What is the way to reduce the average streaming start-up time?
In [13], a network of streaming media servers is organized into a structured P2P system to fully utilize local cached copies of an object, so that the average streaming start-up time can be reduced.
Q5. What is the way to maintain consistency among replicas?
Most unstructured P2P systems, including centralized ones (e.g., Napster) and decentralized ones (e.g., Gnutella), do not guarantee consistency among replicas.
Q6. How many bits can be used to determine the identifier?
If the authors use a smaller identifier space, the key identifier can be easily calculated by keeping a certain number of least significant bits.
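A minimal sketch of this truncation, assuming full identifiers are plain integers and the smaller space keeps the m least significant bits (the function name is illustrative, not from the paper):

```python
# Map a key identifier into a smaller identifier space by keeping
# only its m least significant bits (i.e., key_id mod 2^m).

def truncate_id(key_id: int, m: int) -> int:
    """Keep the m least significant bits of key_id."""
    return key_id & ((1 << m) - 1)

print(truncate_id(0xDEADBEEF, 8))  # 239 (0xEF)
```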
Q7. How do the authors build an RPT for each key?
1) Basic Structure: After partitioning the identifier space as mentioned above, the authors build an RPT for each key by recursively checking the existence of replicas in the partitions.
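The recursive construction can be sketched as follows; this is an illustrative reading of the idea, not the paper's exact algorithm, and the fanout, replica set, and dictionary layout are assumptions:

```python
# Illustrative sketch: build a replica-partition-tree (RPT) for one key by
# recursively splitting the identifier space [lo, hi) into `fanout` equal
# partitions and keeping only the branches that actually contain replicas.

def build_rpt(lo: int, hi: int, replicas, fanout: int = 4):
    """Return a nested dict for partitions of [lo, hi) holding replicas, else None."""
    present = [r for r in replicas if lo <= r < hi]
    if not present:
        return None  # no replica in this partition: prune the branch
    node = {"range": (lo, hi), "children": []}
    width = (hi - lo) // fanout
    if width == 0:
        return node  # partition can no longer be subdivided
    for i in range(fanout):
        child = build_rpt(lo + i * width, lo + (i + 1) * width, present, fanout)
        if child is not None:
            node["children"].append(child)
    return node

rpt = build_rpt(0, 64, [3, 37, 40])
print(len(rpt["children"]))  # 2: only partitions [0,16) and [32,48) hold replicas
```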
Q8. What is the tradeoff between routing latency and storage overhead?
Considering the tradeoff between routing latency and storage overhead, their partitioning scheme could be dynamic, in which the number of partitions is adaptively changed with respect to the popularity of a key.
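One hypothetical policy for such a dynamic scheme (the popularity measure and the power-of-two rounding below are assumptions, not taken from the paper): give popular keys more partitions, with a cap to bound storage overhead.

```python
# Hypothetical adaptive-partitioning policy: the number of partitions for a
# key grows with its popularity (rounded down to a power of two) but is
# capped, trading routing latency against storage overhead.

def num_partitions(popularity: int, cap: int = 256) -> int:
    """Largest power of two not exceeding popularity, capped at `cap`."""
    return min(cap, 1 << max(0, popularity.bit_length() - 1))

print(num_partitions(1), num_partitions(100), num_partitions(10**6))  # 1 64 256
```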
Q9. What is the way to maintain consistency among a P2P system?
Some existing file-sharing P2P systems assume that the shared data are static or read-only, so that no update mechanism is needed.
Q10. How does the dynamic passive replication scheme work?
In [12], Gedik et al. used a dynamic passive replication scheme to provide reliable service for a P2P Internet monitoring system, where the replication list is maintained by each Continual Queries (CQ) owner.
Q11. How many hops can a subscribe/unsubscribe operation take?
From Lemma 1, at each level l a query node can on average reach the successor of a key in the same partition in log(N/2^(lm)) hops, so the average routing length of a subscribe/unsubscribe operation is:
hop(sub/unsub) = log N + log(N/2^m) + log(N/2^(2m)) + ... + log(N/2^(log N))
= log N + (log N − m) + (log N − 2m) + ... + (log N − (log N/m)·m)
= log²N/(2m) + log N/2.
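As a sanity check (assuming log N is a multiple of m), the per-level terms log N − lm for l = 0 … log N/m can be summed directly and compared against the closed form log²N/(2m) + log N/2:

```python
import math

# Per-level hop counts summed term by term: log N - l*m for l = 0..log N/m.
def hops_sum(n: int, m: int) -> int:
    L = int(math.log2(n))
    return sum(L - l * m for l in range(L // m + 1))

# Closed form: log^2 N / (2m) + log N / 2.
def hops_closed(n: int, m: int) -> float:
    L = math.log2(n)
    return L * L / (2 * m) + L / 2

print(hops_sum(2**20, 4), hops_closed(2**20, 4))  # 60 60.0
```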
Q12. How many levels of the routing table are there?
Using Pastry's default parameters, the routing table of each node has 40 levels and each level consists of 15 entries; the leaf set of each node has 32 entries.
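These sizes follow directly from Pastry's structure: with digit base 2^b over id_bits-bit identifiers, the routing table has id_bits/b levels of 2^b − 1 entries each. A quick check, assuming b = 4 and 160-bit identifiers (which reproduces the 40 levels and 15 entries quoted above):

```python
# Pastry routing-table dimensions: id_bits/b levels, 2^b - 1 entries per level.

def pastry_table_dims(id_bits: int = 160, b: int = 4) -> tuple:
    levels = id_bits // b   # one level per base-2^b digit of the identifier
    entries = 2 ** b - 1    # one entry per possible differing digit value
    return levels, entries

print(pastry_table_dims())  # (40, 15)
```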
Q13. How many hops can an update operation take?
Theorem 4: For an N-node network with partition size 2^m, update operations can on average be finished in O(log²N/(2m)) hops.