Resilient Peer-to-Peer Streaming*
Venkata N. Padmanabhan, Helen J. Wang, Philip A. Chou
Microsoft Research
{padmanab, helenw, pachou}@microsoft.com
Abstract
We consider the problem of distributing “live” streaming media
content to a potentially large and highly dynamic population of hosts.
Peer-to-peer content distribution is attractive in this setting because
the bandwidth available to serve content scales with demand. A key
challenge, however, is making content distribution robust to peer
transience. Our approach to providing robustness is to introduce
redundancy, both in network paths and in data. We use multiple,
diverse distribution trees to provide redundancy in network paths and
multiple description coding (MDC) to provide redundancy in data.
We present a simple tree management algorithm that provides the
necessary path diversity and describe an adaptation framework for
MDC based on scalable receiver feedback. We evaluate these using
MDC applied to real video data coupled with real usage traces
from a major news site that experienced a large flash crowd for live
streaming content. Our results show very significant benefits in using
multiple distribution trees and MDC, with a 22 dB improvement in
PSNR in some cases.
I. INTRODUCTION
We consider the problem of distributing “live” streaming
media content from a server to a potentially large and highly
dynamic population of interested clients. We use the term
“live” to refer to the simultaneous distribution of the same
content to all clients; the content itself may either be truly live
or a playback of a recording. Due to the lack of widespread
support for IP multicast (especially at the inter-domain level),
the server may resort to unicasting the stream to individual
clients. However, this approach only scales up to a point. A
surge in the client population, say due to a flash crowd, could
easily overwhelm the server’s bandwidth.
A range of solutions have been proposed in the literature
and employed in practice. The content provider could purchase
additional bandwidth and install a (possibly distributed) cluster
of servers. Alternatively, the services of a content distribution
network (CDN) such as Akamai could be used to achieve
the necessary scaling, thereby relieving the content provider
from the task of scaling their server site. However, these
approaches may not be cost effective, at least for small or
medium sized sites, because the normal traffic levels may not
be high enough to justify the cost of purchasing additional
bandwidth or subscribing to the services of a CDN. In fact, the
volume of traffic at a small site, even during a flash crowd, may
be too low to be of commercial interest to a CDN operator.
(Consider, for instance, a flash crowd that overwhelms a server
that is webcasting a high school football game.) Furthermore,
* Please visit the CoopNet project page at
http://www.research.microsoft.com/projects/CoopNet/ for additional
information, including a pointer to a more detailed paper [28].
there is some evidence that even large sites (e.g., CNN) are
moving away from CDNs to in-house server farms [23].
An alternative to these infrastructure-based solutions is
end-host-based or peer-to-peer content distribution.¹ A P2P
approach is attractive in this setting because the bandwidth
available to serve content scales with demand (i.e., the number
of interested clients). This is the basis for the CoopNet system
presented in this paper. CoopNet makes selective use of P2P
networking, placing minimal demands on the peers. The goal
is only to help a server tide over crises such as flash crowds
rather than replace the server with a pure P2P system.
There are a few key issues that need to be addressed
in CoopNet. First, users may be wary of dedicating their
bandwidth to the common good, especially when ISPs charge
based on (upstream) bandwidth usage. We address this issue
in CoopNet by insisting that a node participate in and contribute
bandwidth for content distribution only so long as the
user is interested in the content. It stops forwarding traffic
when the user tunes out. This requirement makes CoopNet
fundamentally different from many other P2P systems (e.g.,
[12]) where nodes are expected to route traffic so long as
they are online, even if they are themselves not interested in
the corresponding content. We also insist that a node only
contribute as much upstream bandwidth as it consumes in
the downstream direction.² This creates a natural incentive
structure where a node may tune in to higher bandwidth (and
better quality) content if and only if it is also willing and
able to forward traffic at the higher rate. We do not, however,
consider the enforcement issue (e.g., blocking free-riders) in
this paper.
A second key issue is that the nodes in CoopNet are
inherently unreliable. The outgoing stream from a node may
be disrupted because the user tunes out, the node crashes or
loses connectivity, or simply because the upstream bandwidth
is temporarily used up by a higher-priority user task (e.g.,
sending out an email with large attachments).³ The traditional
approach to end-host-based application-level multicast, which
involves constructing a single distribution tree, is vulnerable
to such failures because the descendants of the failed nodes
might experience severe disruption until the tree is repaired
(or the failed nodes are revived). Parent-driven retransmission
¹ We use the terms end-host-based multicast and peer-to-peer multicast
synonymously in this paper.
² This restriction only applies to the total bandwidth in and out of a node
aggregated over all trees. Thus the individual trees will still be “bushy”, as
explained in Section II-B.
³ We term these as “failures” although the node may not have actually failed.
Proceedings of the 11th IEEE International Conference on Network Protocols (ICNP’03)
1092-1648/03 $17.00 © 2003 IEEE

(ARQ) is not a good fit because we are concerned with the
failure of the parent node itself, not just network packet drops.
So we address the robustness issue in CoopNet by introducing
redundancy, both in network paths and in data. Multiple, diverse
distribution trees spanning the set of participating nodes
are constructed, thus providing redundancy in network paths.
The streaming content is encoded using multiple description
coding (MDC) [19] and the descriptions are distributed over
different trees. As our experimental results show, this approach
significantly improves the quality of the received stream in the
face of a high level of node churn.
The use of multiple trees also enables us to achieve our
goal of making the total upstream and downstream bandwidth
consumptions equal at each node, while still maintaining a
significant fan-out at each node. We explain how this is done
in Section II-B and Figure 1.
In CoopNet, the server plays a central role in constructing
and managing the distribution trees. The availability of a
resourceful server that is likely to be far more robust than any
individual peer greatly simplifies the system design. Note that
in this “centralized” design, the most constrained resource, viz.
bandwidth for forwarding the data stream, is still contributed
by the distributed set of peers and scales with the population
size. In this respect, our design is akin to that of the erstwhile
Napster system. While the central server does constitute a
single point of failure, it is also the source of the data stream.
So failures of the server will disrupt the data stream regardless
of how tree management is done.
Here are the specific contributions of this paper:
1) A simple, centralized tree management algorithm to
construct and maintain a diverse set of trees.
2) A framework for adapting MDC based on scalable
receiver feedback.
3) Evaluation of tree management and MDC adaptation
using real video data coupled with real usage traces
derived from the access logs of the MSNBC news
site [2] that experienced a large flash crowd for live
streaming content on Sep 11, 2001. Our results show the
significant benefits of using multiple, diverse distribution
trees and MDC. The peak signal-to-noise ratio (PSNR)
of the received stream improves by up to 22 dB in some
cases. Our results also indicate that MDC outperforms
pure Forward Error Correction (FEC) in the face of wide
variation in loss rate across clients.
In a previous workshop paper [29], we sketched the basic
idea of CoopNet (viz., combining multiple distribution trees
with MDC) and presented some preliminary analysis. This
paper is substantially different in many respects, both in terms
of algorithms and in terms of evaluation. The tree management
algorithm significantly improves over our previous algorithm.
The adaptation framework for MDC based on scalable receiver
feedback, the application of MDC to real video data for
performance evaluation, and the comparative evaluation of
FEC and MDC are new in this paper.
There are some important issues that we do not discuss
in this paper. First, we do not discuss the bandwidth heterogeneity
issue here. Our longer technical report [28] presents
a framework for accommodating bandwidth heterogeneity and
congestion control based on our recent work on layered
MDC [14]. Second, we do not discuss security issues such
as assuring content integrity, maintaining user privacy, and
preventing free-riders.
The rest of this paper is organized as follows. In Section II,
we present the centralized tree management approach used in
CoopNet. We discuss our MDC construction in Section III and
the adaptation framework based on scalable receiver feedback
in Section IV. We then present a performance evaluation of
these in Section V using real video data and the flash crowd
traces from MSNBC. We discuss related work in Section VI,
and we conclude in Section VII with a summary of our
contributions and an outline of our ongoing work.
II. TREE MANAGEMENT
We now discuss the problem of constructing and maintaining
the distribution trees. The key challenge is to keep up with
the frequent node arrivals and departures that may be typical
of flash crowd scenarios. As noted in Section I, we assume
that nodes participate and contribute bandwidth resources only
for as long as they are interested in receiving content, so they
may depart or fail with little notice.
A. Goals and Design Rationale
There are many and sometimes conflicting goals for the tree
management algorithm:
1) Short trees: The trees should be as short as possible,
i.e., have a minimal number of intermediate end-hosts
between the root and the leaves. Shortness would minimize
the probability of disruption due to the departure,
failure, or congestion at an ancestor node. For it to be
short, each tree should be balanced and as “bushy” as
possible, i.e., the out-degree of each node should be as
much as its bandwidth will allow. However, making the
out-degree large (and thus consuming more bandwidth)
may increase the likelihood of disruption in the CoopNet
stream due to competing traffic from other applications.
2) Tree diversity versus efficiency: The distribution trees
should be diverse, i.e., the set of ancestors of a node
in each tree should be as disjoint as possible. The
effectiveness of the MDC-based distribution scheme depends
critically on the diversity of the distribution trees.
However, striving for diversity may interfere with the
goal of having efficient trees, i.e., ones whose structure
closely matches the underlying network topology. For
instance, if we wish to connect three nodes, one each
located in New York (NY), San Francisco (SF), and
Los Angeles (LA), the structure NYSFLA would
likely be far more efficient than SFNYLA, where
denotes a parent-child relationship. Note that shortness
could make a tree more efficient but not necessarily so.
3) Quick join and leave: The processing of node joins and
leaves should be quick to ensure that an interested node
starts receiving streaming content as soon as possible

after it joins (or migrates to a new parent, as discussed
below) and with minimal interruption (in case one or
more ancestors depart or fail). In particular, the number
of network round-trips needed for the joins and leaves
to complete should be minimal.
4) Scalability: The tree management algorithm should
scale to a large number of nodes, with a correspondingly
high rate of node arrivals and departures. For instance,
in the extreme case of the flash crowd at MSNBC on
September 11, the average rate of node arrivals and
departures was 180 per second while the peak rate was
about 1000 per second (both aggregated over a cluster of
streaming servers). While a distributed algorithm might
scale better than a centralized one, it is generally at
the cost of longer join and leave processing time (i.e.,
more network round-trips are needed compared to the
one needed with centralized tree management).
Some of these goals (appear to) conflict with each other,
so we prioritize them as follows. Since resilience is our main
objective, we choose to focus on building short and diverse
trees with short join and leave times.
We prioritize shortness and diversity over efficiency because
in the CoopNet setting, the peer nodes and their often constrained
last-hop links are likely to be the causes of disruption.
So it makes sense to try to minimize the number of ancestors
that a node has and maximize their diversity. And since the
live streaming application we consider is non-interactive, a
modest delay (from the root to a node) of a few seconds
may be acceptable. That said, having efficient trees would
likely benefit the network as a whole by reducing bandwidth
consumption on the backbone links. So we include efficiency
as a secondary goal.
To enable quick joins and leaves, we use a centralized
tree management scheme, where a central node (possibly the
streaming server) coordinates tree construction and maintenance.
We refer to this node as the “root” to connote the
likelihood that it is (or is collocated with) the root of the
distribution trees in practice. The root (e.g., the MSNBC
server cluster) is often more resourceful and available than
the individual clients, so leveraging it greatly simplifies tree
management and consequently makes joins and leaves quick.
A join or leave operation only requires one or two network
round trips: one to the root and possibly one to the new
parent.
The dependence on the root means that the system is not
self-scaling with respect to the control traffic pertaining
to tree management; it is still self-scaling with respect to
(the more expensive) data traffic. Thus the load imposed on
the server is still greatly reduced compared to the situation
today in a client-server setting. Our prototype implementation
can keep up with about 400 joins and leaves per second on
a laptop with a 2 GHz Mobile Pentium 4 processor. The tree
management task is CPU-bound (the memory and network
bandwidth requirements are quite low) and should scale with
CPU speed. Should the tree management processing on one
root node become a bottleneck, it would be easy to scale up
using a (possibly distributed) cluster of roots and directing
each client to one of the roots, say at random. A client would
retain its association with the assigned root until it departs
the system. If in addition the aggregate bandwidth of the root
nodes (i.e., the source nodes of the data stream) is scaled up,
it would result in shorter, and hence better, trees.
Another criticism of centralized tree management might be
that the root is a single point of failure. Nonetheless, this may
be a moot point in our setting because the root (or a node
collocated with it) is also the source of the data stream. So
the failure or disconnection of the root is also likely to disrupt
the data stream.⁴
B. Centralized Tree Management
The root coordinates all tree management functions. When
a node wishes to join, it contacts the root, which responds with
a designated parent node in each tree. The new node then con-
tacts the parents to have the flow of data started. (Alternatively,
the root could directly notify the parent nodes concurrently
with its message to the new node, thereby reducing the join
time by about an RTT.) When a node leaves gracefully, it
informs the root. The root then finds a new parent for the
children of the departed node (in each tree) and notifies the
children of the identities of their new parents.
In addition, there is the problem of ungraceful leaves where
a node departs because of a network disconnection, host crash,
or another reason that gives it no opportunity to notify the
root or its own children. To accommodate such ungraceful
leaves (and general variability in network quality), each node
monitors the packet loss rate of the incoming stream on each
tree. Losses are deduced from gaps in the packet sequence
number, or a stoppage in the packet stream (for instance,
because the parent got disconnected).
If the loss rate on a tree exceeds a threshold, the node checks
with its parent to see if the parent too is experiencing a high
loss rate on that tree. (The network round trip needed for this
check can possibly be saved by having the parent piggyback
its packet loss rate information on the data stream it forwards
to its children.) If the parent is also experiencing a high loss
rate, then the cause of the problem is probably upstream of
the parent. So the node holds off for a while before checking
with its parent again, hoping that the parent (or one of its
ancestors) will resolve the problem in the meantime.
If the parent is not experiencing the problem or it fails to
respond or resolve the problem, the node contacts the root to
request a new parent for itself in the affected tree. In addition
to returning a new parent to the requesting node, the root also
records the “complaint” against the old parent. Such complaint
information could be used to guide future parent selection and
possibly scale back the level of participation of the suspect
parent, but we do not consider this issue further here. Note that
with this protocol, only the root of the affected subtree would
⁴ This statement is not strictly true because Internet connectivity is not
always transitive [4]. A node may lose direct connectivity to the root and
hence be unable to exchange tree management messages with it but yet be
able to receive the data stream routed via its ancestors (i.e., an overlay path).

contact the server, so there is not an implosion of requests at
the tree management server.
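The per-tree repair decision described above can be sketched as a small decision function. This is only an illustration: the function name, the structure, and the loss-rate threshold are hypothetical, since the paper describes the protocol but fixes no concrete values.

```python
# Sketch of the per-tree loss-repair decision (hypothetical names;
# the threshold value is an assumption, not from the paper).

LOSS_THRESHOLD = 0.05  # assumed; the paper leaves the threshold unspecified

def repair_action(my_loss_rate, parent_loss_rate):
    """Decide what a node should do for one tree, given its own measured
    loss rate and the parent's reported loss rate (None if the parent
    failed to respond to the check)."""
    if my_loss_rate <= LOSS_THRESHOLD:
        return "stay"                    # stream is healthy on this tree
    if parent_loss_rate is not None and parent_loss_rate > LOSS_THRESHOLD:
        # The problem is probably upstream of the parent: hold off and
        # let an ancestor repair it, then re-check later.
        return "wait_and_recheck"
    # Parent looks healthy, or failed to respond or resolve the problem:
    # ask the root for a new parent in this tree (the root also records
    # a "complaint" against the old parent).
    return "request_new_parent"
```

A node would evaluate something like this once per measurement interval on each tree; only the root of an affected subtree ends up contacting the server.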
We now consider the question of how exactly the root
chooses the set of parents for a node. We discuss two tree
construction algorithms: randomized and deterministic.
1) Randomized Tree Construction: This algorithm was
presented in our previous workshop paper [29]. The motivation
is simple: since we would like the trees to be diverse, we
randomize the process of tree construction within the constraints
imposed by node bandwidth and the desire for short trees. The
algorithm proceeds as follows. For each tree, we start at the
root (i.e., the source of the data stream) and search down the
tree until we get to a level that has one or more nodes with
spare bandwidth to support a new child. (Note that this search
is performed in the local data structures maintained at the root
and does not involve any network communication.) We then
randomly pick one of these nodes with “room” as the parent
of the new node in that tree. To further increase diversity, we
could randomly pick the parent from among nodes within K
levels of the first level that has room. K would typically be
set to a small value such as 1 or 2 to avoid sacrificing too
much in terms of the shortness of the tree.
While the total upstream bandwidth consumption at a node
aggregated over all trees is equal to the total downstream band-
width consumption, the upstream and downstream bandwidths
on the individual trees may not be equal. A node may have
multiple children on one tree and none in others. So the trees
will be somewhat bushy.
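The randomized parent search can be sketched roughly as follows. The `Node` class and its degree-limit bookkeeping are hypothetical stand-ins for the local data structures the root actually maintains; no network communication is involved.

```python
import random

# Sketch of the randomized parent search over the root's local data
# structures (class and field names are illustrative assumptions).

class Node:
    def __init__(self, name, degree_limit):
        self.name = name
        self.degree_limit = degree_limit   # out-degree the node's bandwidth allows
        self.children = []

    def has_room(self):
        return len(self.children) < self.degree_limit

def pick_random_parent(root, K=1):
    """Search level by level from the root; gather nodes with room from
    the first level that has any, plus up to K further levels, then pick
    one at random to increase diversity."""
    level, depth = [root], 0
    first_room_level, candidates = None, []
    while level:
        with_room = [n for n in level if n.has_room()]
        if with_room and first_room_level is None:
            first_room_level = depth
        if first_room_level is not None:
            candidates.extend(with_room)
            if depth >= first_room_level + K:
                break
        level = [c for n in level for c in n.children]
        depth += 1
    return random.choice(candidates) if candidates else None
```

Keeping K small (1 or 2) limits how much tree shortness is sacrificed for the extra diversity, as noted above.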
2) Deterministic Tree Construction: While randomization
would result in a degree of tree diversity, the question is
whether we can do better. We leverage the insightful obser-
vation made in the recent work on SplitStream [11] that the
outgoing bandwidth constraint of nodes can be honored by
making each node an interior node in just one tree. (That
said, there are some crucial differences between the SplitStream
approach and ours, which are discussed in Section VI-A.)
In our setting, the centralization of tree construction makes
it relatively easy to honor the bandwidth constraints of each
node. But we can use the idea of making each node an interior
node in exactly one tree to make the trees more bushy and
hence shorter. Figure 1 illustrates a simple example where
doing so results in shorter trees than if tree construction were
randomized.
Making the set of interior nodes in each tree disjoint also
contributes to tree diversity and hence robustness. The failure
of a single node would only disrupt one tree. However, in
the MSNBC scenario considered in Section V, multiple nodes
can fail concurrently, so it is not clear to what extent the
disjointness of the interior nodes helps.
The deterministic algorithm proceeds as follows. When a
new node joins, we first decide the tree in which it is going
to be fertile (i.e., be an interior node that can have children);
the node will be sterile (i.e., a leaf node) in all the remaining
trees. We keep track of the number of fertile nodes in each tree,
and (deterministically) pick the tree with the least number of
fertile nodes as the one in which the new node will be fertile
(a) Randomized construction (b) Deterministic construction
Fig. 1. The (total) out-degree limit for the root (R) is 4 while the limit for
the other nodes is 2. By concentrating the out-degree of each node in one
tree (its “fertile tree”), deterministic tree construction (case (b)) yields more
bushy and hence shorter trees than randomized tree construction (case (a)).
(we term this the “fertile tree” of the node, the rest being its
“sterile trees”). The goal is to roughly balance the number of
fertile nodes in each tree.
To insert the new node into its fertile tree, we start at the
root and proceed down until we reach a level that either has
a node with room (i.e., with spare bandwidth) or a node with
a sterile child. If a node with room is found at that level, we
designate it as the parent. Otherwise, we designate a node with
a sterile child as the parent of the new node and find a new
parent for the sterile child, as discussed below. (The idea is to
have the upper levels of the tree populated by fertile nodes,
which can support children.) In both cases, the parent is chosen
deterministically (say the first node meeting these criteria that
is encountered in the search through our data structures). The
disjointness of the interior (i.e., fertile) nodes across the trees
makes randomization unnecessary.
To insert the new node into one of its sterile trees, we use a
similar procedure as above except that we only consider nodes
with spare bandwidth when searching for a parent. Since the
new node is sterile in this tree, there is nothing to be gained
from substituting an existing sterile node in the upper levels
of the tree with the new node.
With this deterministic algorithm, it is possible (although
quite unlikely in practice) that a tree runs out of capacity to
support new nodes. This can happen, for instance, if a large
number of departing nodes all happen to have been fertile in
the same tree. When a tree runs out of capacity, we pick a
fertile node from the tree with the largest number of fertile
nodes and “migrate” it to the tree that is starved of capacity.
Migration involves changing the designation of the node from
fertile to sterile in one tree (and finding new parents for each
of its children in that tree) and designating it as fertile in the
starved tree.
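The fertile-tree balancing and migration policies above might be sketched as follows. The bookkeeping shown (a per-tree count of fertile nodes) is an assumption for illustration; the paper describes the policy, not a concrete implementation.

```python
# Sketch of fertile-tree assignment and capacity migration
# (hypothetical bookkeeping; only the policy is from the paper).

def assign_fertile_tree(fertile_counts):
    """Pick the tree with the fewest fertile nodes for a newly joined
    node, keeping the fertile population roughly balanced; the node is
    sterile (a leaf) in all other trees."""
    tree = min(range(len(fertile_counts)), key=lambda t: fertile_counts[t])
    fertile_counts[tree] += 1
    return tree

def migration_source(fertile_counts, starved_tree):
    """When a tree runs out of capacity, take a fertile node from the
    tree with the most fertile nodes and re-designate it as fertile in
    the starved tree (its old children get new parents)."""
    donor = max(range(len(fertile_counts)), key=lambda t: fertile_counts[t])
    fertile_counts[donor] -= 1
    fertile_counts[starved_tree] += 1
    return donor
```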
Clearly, it would be desirable for both the deterministic and
the randomized tree construction algorithms to be network
topology-aware. We discuss this issue next.
3) Tree Efficiency/Topology Awareness: As noted in Section
II-A, making the trees efficient is a (secondary) goal. The
idea is to make the tree structure match the underlying
network topology to the extent possible, thereby minimizing
duplication of traffic on network links as well as the number of
underlying IP hops traversed. Thus, given a choice of parents

(subject to the diversity and shortness goals discussed above),
we would like to pick a parent that is close in terms of
network distance (and perhaps even on the same ISP network
to conserve expensive egress bandwidth), where possible. Note
that such proximity to parent nodes (in all trees) does not
necessarily compromise tree diversity or robustness in the
CoopNet setting. Given the high rate of node churn, departures
or failures of end-nodes and/or their network links are more
likely causes of disruption than failures in the interior of the
network. So a set of distinct but nearby parents is still diverse
under this failure model.
What we need is an efficient way to pick a proximate
parent for a node without requiring extensive P2P network
measurements. We use the simple delay-coordinates based
“GeoPing” technique proposed in [27] for a somewhat different
application (viz., determining the geographic location of
Internet hosts). Each node maintains its “delay coordinates”
of (average) ping times to a small set of landmark hosts (say
10 hosts). The pings are repeated at a low frequency and the
averages recomputed to keep the coordinates up-to-date. When
a node wishes to join the distribution trees, it reports its delay
coordinates to the root. Once the root has identified a set of
candidate parents in a tree (subject to the bandwidth and tree
level considerations discussed above), it picks the one whose
delay coordinates are closest to that of the new node (in terms
of Euclidean distance).
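A minimal sketch of this comparison follows; the candidate parents are assumed to have already been filtered by bandwidth and tree level, and the field names are illustrative.

```python
import math

# Sketch of delay-coordinates parent selection (GeoPing-style, [27]).
# Each coordinate vector holds average ping times to the landmark hosts;
# the dict layout for candidates is an assumption for illustration.

def closest_parent(candidates, new_node_coords):
    """Among candidate parents, pick the one whose delay coordinates are
    nearest to the joining node's coordinates in Euclidean distance."""
    return min(candidates,
               key=lambda c: math.dist(c["coords"], new_node_coords))
```

Because ping times to the landmarks are re-measured at low frequency, the coordinates stay reasonably current without extensive P2P measurements.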
We have conducted a separate study to evaluate the efficacy
of the delay coordinates-based approach in finding proximate
peers [21]. The results are encouraging: the latency to
the peer selected based on delay coordinates is within 31%
(1.31X) of the optimal 50% of the time and within 74%
(1.74X) of the optimal 90% of the time. The choice based on
delay coordinates is far better than that resulting from random
selection. However, since we do not have delay coordinates
information for the clients in the MSNBC trace, we do not
consider proximity in the evaluation presented in this paper.
III. MULTIPLE DESCRIPTION CODING
A. MDC Overview
Multiple description coding (MDC) is a method of encoding
an audio and/or video signal into M > 1 separate streams, or
descriptions, such that any subset of these descriptions can
be received and decoded. The distortion with respect to the
original signal is commensurate with the number of descrip-
tions received; i.e., the more descriptions received, the lower
the distortion and the higher the quality of the reconstructed
signal. This differs from layered coding⁵ in that in MDC every
subset of descriptions must be decodable, whereas in layered
subset of descriptions must be decodable, whereas in layered
coding only a nested sequence of subsets must be decodable.
For this extra flexibility, MDC incurs a modest performance
penalty relative to layered coding (Section III-D), which in
turn incurs a slight performance penalty relative to single
description coding.
⁵ Layered coding is also known as embedded, progressive, or scalable
coding.
Fig. 2. Priority encoding packetization of a group of frames (GOF). The
source bits in the range [R_{i-1}, R_i) of the embedded bit stream are mapped
to i source blocks and protected with M - i FEC blocks using an (M, i)
Reed-Solomon code. Any m out of M packets can recover the initial R_m bits
of the bit stream for the GOF.
Many multiple description coding schemes have been investigated
over the years. For an overview see [19]. A particularly
efficient and practical system is based on layered audio or
video coding [30], [24], Reed-Solomon coding [36], priority
encoded transmission [3], and optimized bit allocation [17],
[33], [26]. In such a system the audio and/or video signal is
partitioned into groups of frames (GOFs), each group having
a duration of T = 1 second or so, for example. Each GOF
is then independently encoded, error protected, and packetized
into M packets, as shown in Figure 2. Both layered coding and
Forward Error Correction (FEC) are building blocks for MDC.
Layered coding is used by MDC to prioritize the streaming
data. The bits from a GOF are sorted in decreasing order
of importance (where importance is quantified as the bit’s
contribution towards reducing signal distortion) to form an
embedded bit stream. For example, bits between R_0 and R_1
are more important than the subsequent bits in the embedded
stream in Figure 2. Forward Error Correction (FEC), such as
Reed-Solomon encoding, is then used to protect data units to
different extents depending on their importance.
M descriptions can accommodate up to M priority levels
for a GOF. If any m M packets are received, then the initial
R
m
bits of the bit stream for the GOF can be recovered, result-
ing in distortion D(R
m
),where0=R
0
R
1
··· R
M
and consequently D(R
0
) D(R
1
) ···D(R
M
). Thus all
M packets are equally important; only the number of received
packets determines the reconstruction quality of the GOF.
Further, the expected distortion is $\sum_{m=0}^{M} p(m) D(R_m)$, where $p(m)$ is the probability that $m$ out of $M$ packets are received. Given $p(m)$ and the operational rate-distortion function $D(R)$, this expected distortion can be minimized using a simple procedure that adjusts the rate points $R_1, \ldots, R_M$ subject to a constraint on the packet length [17], [33], [26].$^6$ By assigning the $m$th packet in each GOF to the $m$th description, the entire audio and/or video signal is represented by $M$ descriptions, where each description is a sequence of packets transmitted

$^6$The "optimizer" in our system (Figure 3) performs this function.
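A minimal sketch of such an optimizer, under simplifying assumptions: an illustrative exponential curve stands in for the coder's operational $D(R)$, $p(m)$ is taken to be binomial (i.i.d. packet loss), and the search is a brute-force scan over a coarse grid of nondecreasing rate points, where layer m costs $(R_m - R_{m-1})/m$ bits per packet under the PET layout.

```python
import math
from itertools import combinations_with_replacement

# Sketch of the expected-distortion minimization over rate points.
# Hypothetical inputs: an illustrative D(R) and an i.i.d.-loss p(m); the
# paper's optimizer uses the measured p(m) and the coder's operational D(R).

M = 3
LOSS = 0.2             # assumed independent per-packet loss probability
PACKET_BITS = 1200     # per-packet bit budget for this GOF

def D(R):              # illustrative convex rate-distortion curve
    return 1000.0 * math.exp(-R / 1500.0)

def p(m):              # binomial: probability m of M packets are received
    return math.comb(M, m) * (1 - LOSS) ** m * LOSS ** (M - m)

def packet_bits(rates):  # PET cost: layer m adds (R_m - R_{m-1})/m per packet
    return sum((rates[m] - rates[m - 1]) / m for m in range(1, M + 1))

def expected_distortion(rates):
    return sum(p(m) * D(rates[m]) for m in range(M + 1))

# Brute-force search over nondecreasing rate points on a coarse grid,
# keeping only candidates that fit in the per-packet budget.
grid = range(0, 6001, 300)
best = min((r for r in ((0,) + c for c in combinations_with_replacement(grid, M))
            if packet_bits(r) <= PACKET_BITS),
           key=expected_distortion)
print(best, round(expected_distortion(best), 1))
```

The grid search is only for clarity; with a convex operational $D(R)$ the same minimization can be done far more efficiently, which is what makes per-GOF adaptation to the current $p(m)$ practical.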
Proceedings of the 11th IEEE International Conference on Network Protocols (ICNP’03)
1092-1648/03 $17.00 © 2003 IEEE