
Multi-channel Live P2P Streaming:
Refocusing on Servers
Chuan Wu, Baochun Li
Department of Electrical and Computer Engineering
University of Toronto
Shuqiao Zhao
Multimedia Development Group
UUSee, Inc.
Abstract—Due to peer instability and time-varying peer up-
load bandwidth availability in live peer-to-peer (P2P) streaming
channels, it is preferable to provision adequate levels of stable
upload capacities at dedicated streaming servers, in order to
guarantee the streaming quality in all channels. Most commercial
P2P streaming systems have resorted to the practice of over-
provisioning upload capacities on streaming servers. In this
paper, we have performed a detailed analysis on 400 GB and
7 months of run-time traces from UUSee, a commercial P2P
streaming system, and observed that available capacities on
streaming servers are not able to keep up with the increasing
demand imposed by hundreds of channels. We propose a novel
online server capacity provisioning algorithm that proactively
adjusts the server capacities available to each of the concurrent
channels, such that the supply of server bandwidth in each
channel dynamically adapts to the forecasted demand, taking
into account the number of peers, the streaming quality, and the
priorities of channels. The algorithm is able to learn over time,
and has full ISP awareness to maximally constrain P2P traffic
within ISP boundaries. To evaluate the effectiveness of our solu-
tion, our experimental studies are based on an implementation
of the algorithm with actual channels of P2P streaming traffic,
with real-world traces replayed within a server cluster.
I. INTRODUCTION
With the recent success and commercial deployment of live
P2P streaming [1], hundreds of media channels are routinely
broadcast to millions of users at any given time. The essence of
P2P streaming is the use of peer upload bandwidth to alleviate
the load on dedicated streaming servers [2]. Most existing
research has thus far focused on peer strategies: Should a
mesh or tree topology be constructed? What incentives can be
provisioned to encourage peer bandwidth contribution? How
do we cope with peer churn and maintain the quality of live
streams? We recognize the importance of these open research
challenges, as their solutions seek to maximally utilize peer
upload bandwidth, leading to minimized server costs.
With this paper, however, we shift our focus to the stream-
ing servers. Such refocusing on servers is motivated by our
detailed analysis of 7 months and 400 GB worth of real-
world traces from hundreds of streaming channels in UUSee
[3], a large-scale commercial P2P live streaming system in
China. Like all other state-of-the-art live streaming systems
(including PPLive), in order to maintain a satisfactory and
sustained streaming quality, UUSee has so far resorted to
the practice of over-provisioning server capacities to satisfy
the streaming demand from peers in each of its channels.
Contrary to common belief, we have observed that available
capacities on streaming servers are not able to keep up with
the increasing demand from hundreds of channels. In response,
we advocate allocating limited server capacities to each of the
channels, in order to maximally utilize dedicated servers.
While it is certainly a challenge to determine how much
bandwidth to provision on streaming servers to accommodate
the streaming demand of all concurrent channels, the challenge
is more daunting when we further consider the conflict of
interest between P2P solution providers and ISPs. P2P appli-
cations have significantly increased the volume of inter-ISP
traffic, which in some cases leads to ISP filtering. We seek to
design effective provisioning algorithms on servers with the
awareness of ISP boundaries to minimize inter-ISP traffic.
In this paper, we present Ration, an online server capacity
provisioning algorithm to be carried out on a per-ISP basis.
Ration dynamically computes the minimal amount of server
capacity to be provisioned to each channel inside the ISP, in
order to guarantee a desired level of streaming quality for
each channel. With the analysis of our real-world traces, we
have observed that the number of peers and their contributed
bandwidth in each channel are dynamically varying over time,
and significantly affect the required bandwidth from servers.
Ration is designed to actively predict the bandwidth demand
in each channel in an ISP with time series forecasting and
dynamic regression techniques, utilizing the number of active
peers, the streaming quality, and the server bandwidth usage
within a limited window of recent history. It then proactively
allocates server bandwidth to each channel, respecting the
predicted demand and priority of channels. To show the
effectiveness of Ration, we have implemented it in streaming
servers serving a mesh-based P2P streaming system. In a
cluster of dual-CPU servers, the system emulates real-world
P2P streaming by replaying the scenarios captured in the traces.
The remainder of this paper is organized as follows. In
Sec. II, we motivate our focus on servers by showing our
analysis of 7 months of traces from UUSee. In Sec. III,
we present the design of Ration, and discuss how it may
be deployed with ISP awareness to serve real-world P2P
streaming systems. Sec. IV presents our experimental results
evaluating Ration by replaying traces in a P2P streaming
system running in a server cluster. We discuss related work
and conclude the paper in Sec. V and Sec. VI, respectively.
II. REFOCUSING ON SERVERS: EVIDENCE FROM REAL-WORLD TRACES
Why should we refocus our attention on dedicated streaming
servers in P2P live streaming systems? Starting in September 2006,

we have continuously monitored the performance statistics
of a real-world commercial P2P streaming platform, offered
by UUSee Inc., a leading P2P streaming solution provider
with legal contractual rights with mainstream content providers
in China. Like other such systems, including PPLive, UUSee
maintains a sizable array of 150 dedicated streaming servers
to support its P2P streaming topologies, with hundreds of
channels delivered to millions of users, mostly as 400 Kbps media
streams. UUSee utilizes the “pull-based” design on mesh P2P
topologies, which allows peers to serve other peers (“partners”)
by exchanging media blocks in a sliding window of the stream.
To maximally utilize peer upload bandwidth and alleviate
server load, UUSee incorporates a number of algorithms in
peer selection. Each peer applies an algorithm to estimate
its maximum upload capacity, and continuously estimates its
aggregate instantaneous sending throughput to its partners.
If its estimated sending throughput stays lower than its upload
capacity for 30 seconds, it informs one of the tracking
servers that it is able to receive new connections. The tracking
servers keep a list of such peers, and assign them upon
requests for new partners from other peers. During the streaming
process, neighboring peers also recommend known peers to
each other based on their current streaming quality, represented
by the number of available blocks in their current playback
buffers. A peer may contact a tracking server again to obtain
additional peers with better qualities, once it has experienced
low buffering levels for a sustained period of time.
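To make this reporting rule concrete, the following is a minimal peer-side sketch of the spare-capacity check described above. It is purely illustrative and not UUSee's actual implementation; the class and constant names are our own.

```python
import time

SPARE_CAPACITY_WINDOW = 30  # seconds of spare upload capacity before notifying a tracker

class PeerUploadMonitor:
    """Illustrative sketch of the spare-capacity reporting rule described above."""

    def __init__(self, estimated_upload_capacity_kbps):
        self.capacity = estimated_upload_capacity_kbps
        self.below_capacity_since = None

    def on_throughput_sample(self, aggregate_sending_kbps, notify_tracker):
        """Called with the peer's current aggregate sending throughput to its partners."""
        now = time.time()
        if aggregate_sending_kbps < self.capacity:
            if self.below_capacity_since is None:
                self.below_capacity_since = now
            elif now - self.below_capacity_since >= SPARE_CAPACITY_WINDOW:
                notify_tracker()  # tell a tracking server we can accept new partners
                self.below_capacity_since = None
        else:
            self.below_capacity_since = None
```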
To inspect the run-time behavior of UUSee P2P streaming,
we have implemented extensive measurement and reporting
capabilities within its P2P client application. Each peer collects
a set of its vital statistics, and reports to dedicated trace
servers every 5 minutes via UDP. The statistics include its
IP address, the channel it is watching, its buffer availability
map, the number of available blocks in its current playback
buffer (henceforth referred to as the buffer count), as well as
a list of all its partners, with their corresponding IP addresses,
TCP/UDP ports, and current sending/receiving throughput
to/from each partner. Each dedicated streaming server in
UUSee utilizes a P2P protocol similar to the one deployed on regular
peers, is routinely selected to serve peers, and reports its
related statistics periodically as well.
A. Insufficient “supply” of server bandwidth
What have we discovered from the traces, which represent
snapshots of the system every 5 minutes throughout the 7
months? The first observation we made is related to the in-
sufficient “supply” of server bandwidth, as more channels are
added over time. Such insufficiency has gradually affected the
streaming quality, in both popular and less popular channels.
In order to show bandwidth usage over 7 months and at
different times of a day within one figure, we choose to show
all our 5-minute measurements on representative dates in each
month. One such date, February 17, 2007, is intentionally
chosen to coincide with the Chinese New Year event, with
typical flash crowds due to the broadcast of a celebration
show on a number of the channels.

[Fig. 1. The evolution of server bandwidth, channels, and streaming quality over a period of 7 months: (A) server capacity usage over time; (B) number of channels deployed over time; (C) the streaming quality of a popular channel; (D) the streaming quality of a less popular channel.]

Fig. 1(A) shows the total server bandwidth usage on 150 streaming servers. We may
observe that an increasing amount of server bandwidth has
been consumed over time, though usage stabilized in January 2007. This
rising trend can be explained by the rapidly increasing number
of channels deployed during this period, as shown in Fig. 1(B).
The interesting phenomenon that such bandwidth usage has
stabilized, even during the Chinese New Year flash crowd, has
led to the conjecture that the total uplink capacity of all servers
has been reached. The daily variation of server bandwidth
usage coincides with the daily pattern of peer population.
Our conjecture that server capacities have saturated is
confirmed when we investigate the streaming quality in each
channel. The streaming quality in a channel at each time
is evaluated as the percentage of high-quality peers in the
channel, where a high-quality peer has a buffer count of more
than 80% of the total size of its playback buffer. Representative
results with a popular channel (CCTV1, with more than 10,000
concurrent users) and a less popular channel (CCTV12, with
fewer than 1000 concurrent users) are shown in Fig. 1(C) and
(D), respectively. The streaming quality of both channels has
been decreasing over time, as server capacities are saturated.
During the Chinese New Year flash crowd, the streaming
quality of CCTV1 degraded significantly, due to the lack of
bandwidth to serve a flash crowd of users in the channel.
Would it be possible that the lack of peer bandwidth contri-
bution has overwhelmed the servers? As we noted, the protocol
in UUSee uses optimizing algorithms to maximize peer upload
bandwidth utilization, which in our opinion represents one
of the state-of-the-art peer strategies in P2P streaming. The
following back-of-the-envelope calculation with data from the
traces may be convincing: At one time on October 15, 2006,
about 100,000 peers in the entire network each achieved
a streaming rate of around 400 Kbps, while consuming a bandwidth
level of 2 Gbps from the servers. The upload bandwidth
contributed by peers can be computed as 100,000 × 400 −
2,000,000 = 38,000,000 Kbps, which is 380 Kbps per peer
on average. This represents quite an achievement, as most
of the UUSee clientele are ADSL users in China, with a
maximum of 500 Kbps upload capacity.
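The arithmetic behind this estimate can be checked directly from the figures quoted above:

```python
# Sanity check of the per-peer upload contribution (trace snapshot, October 15, 2006).
peers = 100_000
streaming_rate_kbps = 400            # per-peer streaming rate
server_supply_kbps = 2_000_000       # 2 Gbps consumed from the servers

total_demand_kbps = peers * streaming_rate_kbps                  # 40,000,000 Kbps
peer_contribution_kbps = total_demand_kbps - server_supply_kbps  # 38,000,000 Kbps
print(peer_contribution_kbps / peers)                            # 380.0 Kbps per peer
```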
Indeed, server capacities have increasingly become a bottleneck in real-world P2P live streaming solutions.
B. Increasing volume of inter-ISP traffic
The current UUSee protocol is not aware of ISPs. We now
investigate the volume of inter-ISP traffic during the 7-month

period, computed as the throughput sum of all links across
ISP boundaries at each time, by mapping IP addresses to the
ISPs using a database from UUSee. Fig. 2 reveals that both
the inter-ISP peer-to-peer and server-to-peer traffic have been
increasing, quadrupling over the 7-month period, due to the
increased number of channels and peers.
[Fig. 2. The volume of inter-ISP traffic (peer-to-peer and server-to-peer, in Gbps) increases over time, from 9/15/06 to 3/15/07.]
In China, the two nation-wide ISPs, Netcom and Telecom,
charge each other based on the difference of inter-ISP traffic
volume in both directions, and regional ISPs are charged based
on traffic to and from the nation-wide ISPs. Both charging
mechanisms have made it important for ISPs to limit inter-
ISP traffic. Considering the large and persistent bandwidth
consumption for live streaming, we believe that P2P streaming
systems should be designed to minimize inter-ISP traffic,
which remains one of our objectives in this paper.
C. What is the required server bandwidth for each channel?
[Fig. 3. Relationship among server upload bandwidth, number of peers, and streaming quality for channel CCTV1: (A) over February 13-15, 2007; (B) over short periods on February 13 and February 17.]
To determine the amount of server bandwidth needed for
each channel, we wish to explore the relation among server
upload bandwidth usage, the number of peers, and the achieved
streaming quality in each channel. Based on a detailed trace
analysis, we have identified no strong correlation among the
quantities over a longer period of time. For example, Fig. 3(A)
plots the correlation of the quantities for channel CCTV1 in
a period of three days (February 13-15, 2007). No evident
correlations exist in this period.
Nevertheless, if we focus on a shorter time scale, the
correlation becomes more evident. For example, Fig. 3(B)-1
plots the correlation between server upload bandwidth usage
and the streaming quality during a three-hour period (8pm-
11pm) on February 13, which exhibits a positive square-root
relation between the two quantities. Meanwhile, a negative
correlation is shown to exist between the number of peers and
the streaming quality, in Fig. 3(B)-2. We have also observed
that the shape of such short-term correlation varies from time
to time. For example, during the same time period on February
17, the relation between the number of peers and the streaming
quality follows a reciprocal curve, as shown in Fig. 3(B)-3.
We have observed from the traces that such variations exist in
other channels as well, which can be attributed to the time-
varying peer upload bandwidth availability in the channels.
All of our observations thus far point to the challenging
nature of our problem at hand: how much server bandwidth
should we allocate in each channel to assist peers in each ISP?
III. RATION: ONLINE SERVER CAPACITY PROVISIONING
Our proposal is Ration, an online server capacity provi-
sioning algorithm to be carried out on a per-ISP basis, that
dynamically assigns a minimal amount of server capacity to
each channel to achieve a desired level of streaming quality.
A. Problem formulation
We consider a P2P live streaming system with multiple
channels (such as UUSee). We assume that the tracking server
in the system is aware of ISPs: when it supplies any requesting
peer with information of new partners, it first assigns peers (or
dedicated servers) with available upload bandwidth from the
same ISP. Only when no such peers or servers exist will the
tracking server assign peers from other ISPs.
The focus of Ration is the dynamic provisioning of server
capacity in each ISP, carried out by a designated server in
the ISP. In the ISP that we consider, there are a total of M
concurrent channels to be deployed, represented as a set C.
There are $n_c$ peers in channel $c$, $c \in \mathcal{C}$. Let $s_c$ denote the
server upload bandwidth to be assigned to channel $c$, and $q_c$
denote the streaming quality of channel $c$, i.e., the percentage
of high-quality peers in the channel that have a buffer count
of more than 80% of the size of its playback buffer. Let $U$
be the total amount of server capacity to be deployed in the
ISP.¹ We assume a priority level $p_c$ for each channel $c$, that
can be assigned different values by the P2P streaming solution
provider to reflect the relative importance of the channels.
At each time $t$, Ration proactively computes the amount
of server capacity $s_c^{t+1}$ to be allocated to each channel $c$ for
time $t+1$, that achieves optimal utilization of the limited
overall server capacity across all the channels, based on their
priority and popularity (as defined by the number of peers in
the channel) at time $t+1$. Such an objective can be formally
represented by the optimization problem Provision(t+1) as
follows ($t = 1, 2, \ldots$), in which a streaming quality function
$F_c^{t+1}$ is included to represent the relationship among $q_c$, $s_c$,
and $n_c$ at time $t+1$:

Provision(t+1):
$$\max \sum_{c \in \mathcal{C}} p_c\, n_c^{t+1} q_c^{t+1} \qquad (1)$$
subject to
$$\sum_{c \in \mathcal{C}} s_c^{t+1} \le U,$$
$$q_c^{t+1} = F_c^{t+1}(s_c^{t+1}, n_c^{t+1}), \quad \forall c \in \mathcal{C}, \qquad (2)$$
$$0 \le q_c^{t+1} \le 1, \quad s_c^{t+1} \ge 0, \quad \forall c \in \mathcal{C}.$$

¹ $U$ can be implemented in practice with a number of servers deployed, with the number decided by $U$ and the upload capacity of each server.

Weighting the streaming quality $q_c^{t+1}$ of each channel $c$ with
its priority $p_c$, the objective function in (1) reflects our wish to
differentiate channel qualities based on their priorities. With
channel popularity $n_c^{t+1}$ in the weights, we aim to provide
better streaming qualities for channels with more peers. Noting
that $n_c q_c$ represents the number of high-quality peers in
channel $c$, in this way we guarantee that, overall, more peers
in the network can achieve satisfying streaming qualities.
The challenges in solving Provision(t+1) at time $t$ to derive
the optimal values of $s_c^{t+1}$, $\forall c \in \mathcal{C}$, lie in (1) the uncertainty
of the channel popularity $n_c^{t+1}$, i.e., the number of peers in
each channel in the future, and (2) the dynamic relationship
$F_c^{t+1}$ among $q_c$, $s_c$, and $n_c$ of each channel $c$ at time $t+1$.
In what follows, we present our solutions to both challenges.
B. Active prediction of channel popularity
We first estimate the number of active peers in each channel
$c$ at the future time $t+1$, i.e., $n_c^{t+1}$, $\forall c \in \mathcal{C}$. Existing work
has been modeling the evolution of the number of peers in
P2P streaming systems based on Poisson arrivals and Pareto
lifetime distributions (e.g., [4]). We argue that these models
represent ideal simplifications of real-world P2P live streaming
systems, where peer dynamics are actually affected by many
random factors. To dynamically and accurately predict the
number of peers in a channel, we employ time series forecasting
techniques. We treat the number of peers in each channel $c$,
i.e., $n_c^t$, $t = 1, 2, \ldots$, as an unknown random process evolving
over time, and use the recent historical values to forecast the
most likely values of the process in the future.
As the time series of channel popularity is generally non-
stationary (i.e., its values do not vary around a fixed mean),
we utilize the autoregressive integrated moving average model,
ARIMA(p,d,q), which is a standard linear predictor to tackle
non-stationary time series. With ARIMA(p,d,q), a time series,
$z_t$, $t = 1, 2, \ldots$, is differenced $d$ times to derive a stationary
series, $w_t$, $t = 1, 2, \ldots$, and each value of $w_t$ can be
expressed as the linear weighted sum of $p$ previous values
in the series, $w_{t-1}, \ldots, w_{t-p}$, and $q$ previous random errors,
$a_{t-1}, \ldots, a_{t-q}$. The employment of an ARIMA(p,d,q) model
involves two steps: (1) model identification, i.e., the decision
of model parameters $p$, $d$, $q$, and (2) model estimation, i.e.,
the estimation of the $p + q$ coefficients in the linear weighted
summation.
For model identification of the time series $n_c^t$, $t = 1, 2, \ldots$,
we have derived $d = 2$ based on the differencing analysis
of actual channel popularity time series from the UUSee
traces, and have derived $p = 0$ and $q = 1$ with standard
model identification techniques using autocorrelation and partial
autocorrelation functions for the differenced time series
([5], pp. 187). Due to space constraints, interested readers
are referred to our technical report [6] for details. Having
identified an ARIMA(0,2,1) model, the channel popularity
prediction for time $t+1$, $\bar{n}_c^{t+1}$, can be expressed as follows:

$$\bar{n}_c^{t+1} = 2 n_c^t - n_c^{t-1} + a_{t+1} - \theta a_t, \qquad (3)$$
where $\theta$ is the coefficient for the random error term $a_t$ and
can be estimated with a least squares algorithm. When we use
(3) for prediction in practice, the random error at future time
$t+1$, i.e., $a_{t+1}$, can be treated as zero, and the random error
at time $t$ can be approximated by $a_t = n_c^t - \bar{n}_c^t$ [5]. Therefore,
the prediction function is simplified to

$$\bar{n}_c^{t+1} = 2 n_c^t - n_c^{t-1} - \theta (n_c^t - \bar{n}_c^t). \qquad (4)$$
To dynamically refine the model for accurate prediction of the
popularity of a channel $c$ over time, we propose to carry out the
forecasting in a dynamic fashion: To start, the ARIMA(0,2,1)
model is trained with channel popularity statistics of channel $c$
in the most recent $N_1$ time steps, and the value of the coefficient
$\theta$ is derived. Then, at each following time $t$, $\bar{n}_c^{t+1}$ is predicted
using (4), and the confidence interval of the predicted value (at
a certain confidence level, e.g., 95%) is computed. When time
$t+1$ comes, the actual number of peers, $n_c^{t+1}$, is collected
and tested against the confidence bounds. If the real value lies
outside the confidence interval and such prediction errors have
occurred $T_1$ out of $T_2$ consecutive times, the forecasting model
is retrained, and the above process repeats.
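The following sketch illustrates this forecasting loop, using the simplified predictor in (4). The grid search over $\theta$ stands in for the least squares estimation mentioned above, and the default values of $T_1$ and $T_2$ are placeholders rather than the parameters used in the paper.

```python
import numpy as np

def fit_theta(history, grid=np.linspace(-0.95, 0.95, 39)):
    """Estimate the MA(1) coefficient theta of the ARIMA(0,2,1) predictor by
    minimizing one-step-ahead squared error over the training window
    (a simple stand-in for least squares estimation)."""
    best_theta, best_err = 0.0, float("inf")
    for theta in grid:
        err, prev_pred = 0.0, history[1]   # assume a perfect prediction at t = 1
        for t in range(1, len(history) - 1):
            pred = 2 * history[t] - history[t - 1] - theta * (history[t] - prev_pred)
            err += (history[t + 1] - pred) ** 2
            prev_pred = pred
        if err < best_err:
            best_theta, best_err = theta, err
    return best_theta

def predict_next(n_t, n_t_minus_1, last_prediction, theta):
    """Simplified one-step predictor of Eq. (4): a_{t+1} = 0, a_t = n_t - nbar_t."""
    return 2 * n_t - n_t_minus_1 - theta * (n_t - last_prediction)

def needs_retraining(outside_ci_flags, T1=3, T2=5):
    """Retrain when the actual peer count fell outside the prediction confidence
    interval T1 times within the last T2 intervals (T1, T2 here are placeholders)."""
    return sum(outside_ci_flags[-T2:]) >= T1
```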
C. Dynamic learning of the streaming quality function
Next, we dynamically derive the relationship among stream-
ing quality, server bandwidth usage, and the number of peers
in each channel $c$, denoted as the streaming quality function
$F_c$ in (2), with a statistical regression approach.
From the traces, we have observed $q_c \propto (s_c)^{\alpha_c}$ at short
time scales, where $\alpha_c$ is the exponent of $s_c$, e.g., $q_c \propto (s_c)^{0.5}$
in Fig. 3(B)-1. We also observed $q_c \propto (n_c)^{\beta_c}$, where $\beta_c$ is
the exponent of $n_c$, e.g., $q_c \propto (n_c)^{-1}$ in Fig. 3(B)-3. As
we have made similar relationship observations from a broad
trace analysis of channels over different times, we model the
streaming quality function as

$$q_c = \gamma_c (s_c)^{\alpha_c} (n_c)^{\beta_c}, \qquad (5)$$

where $\gamma_c > 0$ is a weight parameter. Such a function model
is advantageous in that it can be transformed into a multiple
linear regression problem, by taking the logarithm of both sides:

$$\log(q_c) = \log(\gamma_c) + \alpha_c \log(s_c) + \beta_c \log(n_c).$$
Let $Q_c = \log(q_c)$, $S_c = \log(s_c)$, $N_c = \log(n_c)$, $\Gamma_c = \log(\gamma_c)$.
We derive the following multiple linear regression problem:

$$Q_c = \Gamma_c + \alpha_c S_c + \beta_c N_c + \epsilon_c, \qquad (6)$$

where $S_c$ and $N_c$ are regressors, $Q_c$ is the response variable,
and $\epsilon_c$ is the error term. $\Gamma_c$, $\alpha_c$, and $\beta_c$ are regression parameters,
which can be estimated with least squares algorithms.
As we have observed in our trace analysis that the relationship in
(5) is evident on short time scales but varies over the longer term,
we dynamically re-learn the regression model in (6) for each
channel $c$ in the following fashion: To start, the designated
server trains the regression model with the collected channel popularity
statistics, server bandwidth usage, and channel streaming
quality during the most recent $N_2$ time steps, and derives the
values of the regression parameters. At each following time $t$, it
uses the model to estimate the streaming quality based on
the used server bandwidth and the collected number of peers
in the channel at $t$, and examines the fitness of the current
regression model by comparing the estimated value with the
collected actual streaming quality. If the actual value exceeds
the confidence interval of the predicted value for $T_1$ out of $T_2$
consecutive times, the regression model is retrained with the
most recent historical data.
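A minimal sketch of this regression step, assuming ordinary least squares on the log-transformed samples of Eq. (6); the function and variable names are our own:

```python
import numpy as np

def fit_quality_model(s_history, n_history, q_history):
    """Least squares fit of log q = log(gamma) + alpha*log(s) + beta*log(n), Eq. (6),
    over the most recent N2 samples of server bandwidth, peer count, and quality."""
    S = np.log(np.asarray(s_history, dtype=float))
    N = np.log(np.asarray(n_history, dtype=float))
    Q = np.log(np.asarray(q_history, dtype=float))
    X = np.column_stack([np.ones_like(S), S, N])   # regressors: intercept, S_c, N_c
    coeffs, *_ = np.linalg.lstsq(X, Q, rcond=None)
    log_gamma, alpha, beta = coeffs
    return np.exp(log_gamma), alpha, beta

def predict_quality(gamma, alpha, beta, s, n):
    """Streaming quality model of Eq. (5), clipped to the valid range [0, 1]."""
    return min(1.0, gamma * s ** alpha * n ** beta)
```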
We note that the signs of the exponents $\alpha_c$ and $\beta_c$ in (5) reflect
positive or negative correlations between the streaming quality
and its two deciding variables, respectively. Intuitively, we
should always have $0 \le \alpha_c < 1$, as the streaming quality
cannot become worse when more server capacity is provisioned,
and its improvement slows down as more and more server
capacity is provided, until it finally reaches the upper bound of 1.
On the other hand, the sign of $\beta_c$ may be uncertain, depending
on the peer upload bandwidth availability at different times: if
more peers with high upload capacities (e.g., Ethernet peers)
are present, the streaming quality can be improved with more
peers in the channel ($\beta_c > 0$); otherwise, more peers joining
the channel could lead to a downgrade of the streaming quality
($\beta_c < 0$).
D. Optimal allocation of server capacity
Based on the predicted channel popularity and the most
recently derived streaming quality function for each channel,
we are now ready to proactively assign the optimal amount
of server capacity to each channel for time t +1, by solving
problem Provision(t+1) in (1). Replacing $q_c$ with its function
model in (5), we transform the problem in (1) into:
Provision(t+1)':

$$\max \; G \qquad (7)$$
subject to
$$\sum_{c \in \mathcal{C}} s_c^{t+1} \le U, \qquad (8)$$
$$s_c^{t+1} \le B_c^{t+1}, \quad \forall c \in \mathcal{C}, \qquad (9)$$
$$s_c^{t+1} \ge 0, \quad \forall c \in \mathcal{C}, \qquad (10)$$

where the objective function is

$$G = \sum_{c \in \mathcal{C}} p_c\, n_c^{t+1} q_c^{t+1} = \sum_{c \in \mathcal{C}} p_c \gamma_c (n_c^{t+1})^{1+\beta_c} (s_c^{t+1})^{\alpha_c},$$

and $B_c^{t+1} = (\gamma_c (n_c^{t+1})^{\beta_c})^{-\frac{1}{\alpha_c}}$ denotes the maximal server
capacity requirement of channel $c$ at time $t+1$, i.e., the capacity that achieves
$q_c^{t+1} = 1$.
The optimal server bandwidth provisioning for each channel,
$s_c^{t+1}$, $\forall c \in \mathcal{C}$, can be obtained with a water-filling
approach. The implication of the approach is to maximally
allocate the server capacity, at the total amount of $U$, to the
channels with the currently largest marginal utility, as computed
by $\frac{dG}{ds_c^{t+1}}$, as long as the upper bound on $s_c^{t+1}$ indicated in
(9) has not been reached.
In Ration, the server capacity assignment is periodically
carried out to adapt to the changing demand in each of the
channels over time. To minimize the computation overhead,
we propose an incremental water-filling approach, which adjusts
server capacity shares among the channels from their previous
values, instead of a complete re-computation from the very
beginning:
The approach starts with $s_c^{t+1} = s_c^t$, $\forall c \in \mathcal{C}$. It first
computes whether there exists any surplus of the overall
provisioned server capacity, which occurs when not all the server
capacity has been used with respect to the current allocation,
i.e., $U - \sum_{c \in \mathcal{C}} s_c^{t+1} > 0$, or when the allocated capacity of some
channel $c$ exceeds its maximal server capacity requirement for
time $t+1$, i.e., $s_c^{t+1} > B_c^{t+1}$. If so, it adds up the surpluses and
allocates them to the channels whose maximal server capacity
requirements have not been reached, starting from the channel
with the currently largest marginal utility $\frac{dG}{ds_c^{t+1}}$. After this,
it further adjusts the server capacity assignment towards the
achievement of the same marginal utility (water level) across
the channels, by repeatedly identifying the channel with the
currently smallest marginal utility and the channel with the
currently largest marginal utility, and moving bandwidth from
the former to the latter. This process repeats until all channels
have reached the same marginal utility, or have reached their
respective maximal server bandwidth requirements.
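For illustration, the sketch below implements a simple discretized water-filling from scratch rather than the incremental variant detailed in [6]; it captures the same idea of repeatedly granting bandwidth to the channel with the largest marginal utility $\frac{dG}{ds_c^{t+1}}$, subject to the caps $B_c^{t+1}$.

```python
import heapq

def marginal_utility(ch, s, eps=1e-6):
    """dG/ds_c for a channel: p * gamma * n^(1+beta) * alpha * s^(alpha-1)."""
    return (ch["p"] * ch["gamma"] * ch["n"] ** (1 + ch["beta"])
            * ch["alpha"] * max(s, eps) ** (ch["alpha"] - 1))

def waterfill(channels, U, step=1.0):
    """Discretized water-filling: repeatedly grant one `step` of server bandwidth to
    the channel with the largest marginal utility, never exceeding its cap B
    (the capacity at which its quality reaches 1). `channels` maps a channel id to
    a dict with keys p, gamma, alpha, beta, n, B; bandwidth units are arbitrary."""
    alloc = {c: 0.0 for c in channels}
    heap = [(-marginal_utility(ch, step), c) for c, ch in channels.items()]
    heapq.heapify(heap)
    remaining = U
    while remaining >= step and heap:
        _, c = heapq.heappop(heap)
        ch = channels[c]
        if alloc[c] + step > ch["B"]:
            continue                      # cap reached: drop the channel from the heap
        alloc[c] += step
        remaining -= step
        # Re-insert with the utility of its next increment (utility decreases in s).
        heapq.heappush(heap, (-marginal_utility(ch, alloc[c] + step), c))
    return alloc
```

Because $\alpha_c \in [0, 1)$, each channel's marginal utility is decreasing in its allocation, so this greedy allocation approaches the equal-marginal-utility water level that the incremental approach maintains exactly.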
In our accompanying technical report [6], we have included
detailed steps of the incremental water-filling approach and
more discussions based on a graphical illustration. Interested
readers are referred to [6] due to space constraints.
Theorem 1. Given the channel popularity prediction
$n_c^{t+1}$, $\forall c \in \mathcal{C}$, and the most recent streaming quality function
$q_c^{t+1} = \gamma_c (s_c^{t+1})^{\alpha_c} (n_c^{t+1})^{\beta_c}$, $\forall c \in \mathcal{C}$, the incremental water-filling
approach obtains an optimal server capacity provisioning
across all the channels for time $t+1$, i.e., $s_c^{t+1}$, $\forall c \in \mathcal{C}$,
which solves the problem Provision(t+1) in (1).
Again, interested readers are referred to [6] for the proof.
E. Ration: the complete algorithm
Our complete algorithm is summarized in Table I, which
is periodically carried out on a designated server in each ISP.
The only peer participation required is for each peer in
the ISP to send periodic heartbeat messages to the server, each
of which includes its current playback buffer count.
We note that in practice, the allocation interval is decided by
the P2P streaming solution provider based on need, e.g., every
30 minutes, and peer heartbeat intervals can be shorter, e.g.,
every 5 minutes. To train the ARIMA(0,2,1) model, generally no more
than 30-50 samples are required, i.e., $N_1 < 50$, and even
fewer samples are needed to learn the streaming quality function.
Therefore, only a small amount of historical data needs to be
maintained at the server for the execution of Ration.
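As a rough illustration of how the pieces fit together in one allocation round, the sketch below reuses the helper functions from the earlier sketches; it is our own outline, not the pseudocode of Table I, and for brevity it retrains the models every round rather than only after $T_1$-of-$T_2$ prediction failures.

```python
def ration_step(state, U):
    """One allocation round on an ISP's designated server (illustrative only; reuses
    fit_quality_model, fit_theta, predict_next, and waterfill from the sketches above).
    `state` maps channel id -> dict with keys: priority, n_history, s_history,
    q_history, last_prediction."""
    channels = {}
    for c, ch in state.items():
        # Re-learn the quality model and the ARIMA(0,2,1) coefficient from recent
        # history (done every round here for brevity).
        gamma, alpha, beta = fit_quality_model(ch["s_history"], ch["n_history"],
                                               ch["q_history"])
        theta = fit_theta(ch["n_history"])

        # Predict next-interval popularity with the simplified predictor, Eq. (4).
        n_next = predict_next(ch["n_history"][-1], ch["n_history"][-2],
                              ch["last_prediction"], theta)
        ch["last_prediction"] = n_next

        # Server capacity at which the channel's predicted quality would reach 1.
        B = (gamma * n_next ** beta) ** (-1.0 / alpha)
        channels[c] = {"p": ch["priority"], "gamma": gamma, "alpha": alpha,
                       "beta": beta, "n": n_next, "B": B}

    # Water-fill the total per-ISP server capacity U across channels.
    return waterfill(channels, U)
```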
F. Practical implications
Finally, we discuss the practical application of Ration in
real-world P2P live streaming systems. In such systems with
unknown demand for server capacity in each ISP, Ration
can make full utilization of the currently provisioned server
capacity, U, and meanwhile provide excellent guidelines for
the adjustment of U , based on different relationships between
the supply and demand for server capacity.
If the P2P streaming system is operating in the over-provisioning
mode in an ISP, i.e., the total deployed server
capacity exceeds the overall demand from all channels to
achieve the required streaming rate at their peers, Ration
