LayerP2P: Using Layered Video Chunks in P2P Live Streaming
References
Overview of the Scalable Video Coding Extension of the H.264/AVC Standard
Incentives Build Robustness in BitTorrent
A case for end system multicast
Frequently Asked Questions (12)
Q2. What contributions have the authors mentioned in the paper "LayerP2P: Using Layered Video Chunks in P2P Live Streaming"?
In this paper, the authors propose, prototype, deploy, and validate LayerP2P, a P2P live streaming system that addresses all three of these problems. The authors implement LayerP2P (including seeds, clients, trackers, and layered codecs), deploy the prototype in PlanetLab, and perform extensive experiments. They also examine a wide range of scenarios using trace-driven simulations. The results show that LayerP2P achieves high efficiency, provides differentiated service, adapts to bandwidth-deficient scenarios, and protects against free-riders.
Q3. What have the authors stated about future work in "LayerP2P: Using Layered Video Chunks in P2P Live Streaming"?
The authors implemented LayerP2P (including seeds, clients, trackers, and layered codecs), deployed the prototype in PlanetLab, and performed extensive experiments. They also believe that LayerP2P can serve as a framework for an open design for P2P live streaming systems.
Q4. What is the recent scalable video coding standard?
H.264/SVC [9], the most recent scalable video coding standard, supports SNR scalability (coarse granularity scalability (CGS) and medium granularity scalability (MGS)), spatial scalability and temporal scalability.
Q5. How long is the playback lag between a live event and the peers?
The system playback lag, i.e., the lag between a live event being encoded and sent at the source and that being played at the peers, is set to 30 seconds.
Q6. What is the default scheme for error concealment?
At the receiver the authors use MPlayer [8], which uses FFmpeg as the decoder core, employing its default scheme for error concealment (when LCs are lost).
Q7. How do the received chunk ratios differ across layers for peers in the PlanetLab experiments?
LayerP2P provides more protection to lower layers, so that the received chunk ratios for lower layers are normally higher than those for higher layers.
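The layer-aware protection described above can be sketched as a request-ordering rule. This is a minimal illustrative sketch, not the paper's actual scheduler: the function name and the simple (layer, chunk) sort key are assumptions; the idea it illustrates is that base-layer chunks are requested ahead of enhancement-layer chunks, so lower layers end up with higher received chunk ratios.

```python
def order_requests(missing_chunks):
    """Order (layer, chunk_id) pairs so lower layers are requested first.

    Hypothetical helper for illustration only. Layer 0 is the base layer;
    within a layer, earlier chunks (closer to their playback deadline)
    are requested first.
    """
    return sorted(missing_chunks, key=lambda c: (c[0], c[1]))

# Base-layer chunk 12 outranks enhancement-layer chunks 10 and 11.
print(order_requests([(2, 10), (0, 12), (1, 11)]))
# → [(0, 12), (1, 11), (2, 10)]
```

Under this rule, when upload bandwidth is scarce, the chunks that are dropped are disproportionately high-layer ones, which matches the behavior reported for the PlanetLab experiments.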
Q8. What is the simplest way to decode H.264 video?
A well-designed, real-time decoder, FFmpeg [8], which can decode H.264 temporal scalable video, is available in the public domain.
Q9. How are chunk requests scheduled at the supplier?
If the supplier cannot serve a chunk request within time T − τ (where τ represents the round-trip delay between the receiver and the supplier) from when it receives the chunk request, it simply removes this chunk request from its request queue.
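The supplier-side behavior above can be sketched as a small queue that serves requests first-in-first-out and silently drops any request older than its T − τ budget. This is a hedged sketch under assumed names (`RequestQueue`, `next_to_serve`) and a simplified timing model, not the paper's implementation.

```python
import collections

class RequestQueue:
    """Illustrative supplier-side chunk request queue (names assumed).

    Requests are served FIFO; a request that cannot be served within
    T - tau of its arrival is removed rather than served late.
    """

    def __init__(self, T, tau):
        self.deadline = T - tau            # per-request service budget
        self.queue = collections.deque()   # holds (chunk_id, arrival_time)

    def add(self, chunk_id, now):
        self.queue.append((chunk_id, now))

    def next_to_serve(self, now):
        # Drop expired requests from the front, then serve the oldest
        # request that is still within its deadline.
        while self.queue:
            chunk_id, arrived = self.queue.popleft()
            if now - arrived <= self.deadline:
                return chunk_id
        return None

q = RequestQueue(T=30, tau=2)   # 28-unit budget per request
q.add(5, now=0)
q.add(6, now=1)
print(q.next_to_serve(now=10))  # → 5 (still within budget, FIFO order)
print(q.next_to_serve(now=40))  # → None (request for chunk 6 expired)
```

Dropping rather than serving stale requests keeps the supplier from wasting upload bandwidth on chunks that would arrive after their playback deadline anyway.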
Q10. What is the loose requirement on the video coder?
The authors note that this loose requirement on the video coder is enabled by the chunk-based requesting and delivery architecture adopted by LayerP2P.
Q11. Why do the residential peers receive a higher video quality than those in the single-layer video systems?
Due to the unequal protection to different layers, both types of peers in LayerP2P receive a higher video quality than those in the single-layer video systems.
Q12. What is the order in which the queue is served?
For a particular receiver, the queue is first-in-first-out: the supplier serves requests in the order in which they are received.