Proceedings ArticleDOI

Confused, timid, and unstable: picking a video streaming rate is hard

14 Nov 2012 - pp. 225-238
TL;DR: This work measures three popular video streaming services -- Hulu, Netflix, and Vudu -- and finds that accurate client-side bandwidth estimation above the HTTP layer is hard, and rate selection based on inaccurate estimates can trigger a feedback loop, leading to undesirably variable and low-quality video.
Abstract: Today's commercial video streaming services use dynamic rate selection to provide a high-quality user experience. Most services host content on standard HTTP servers in CDNs, so rate selection must occur at the client. We measure three popular video streaming services -- Hulu, Netflix, and Vudu -- and find that accurate client-side bandwidth estimation above the HTTP layer is hard. As a result, rate selection based on inaccurate estimates can trigger a feedback loop, leading to undesirably variable and low-quality video. We call this phenomenon the "downward spiral effect", and we measure it on all three services, present insights into its root causes, and validate initial solutions to prevent it.
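
The failure mode the abstract describes can be made concrete with a minimal sketch (not the paper's code; the rate ladder and safety factor below are invented) of a client-side ABR loop whose HTTP-layer throughput estimate drives rate selection:

```python
# Illustrative sketch: naive client-side rate selection driven by
# per-chunk HTTP-layer throughput estimates.

AVAILABLE_RATES_KBPS = [235, 375, 560, 750, 1050, 1750, 2350, 3000]
SAFETY_FACTOR = 0.8  # request a rate comfortably below the estimate

def estimate_throughput_kbps(chunk_kbits: float, download_secs: float) -> float:
    """HTTP-layer estimate: chunk size over wall-clock download time."""
    return chunk_kbits / download_secs

def select_rate(estimate_kbps: float) -> int:
    """Highest available rate at or below the discounted estimate."""
    target = SAFETY_FACTOR * estimate_kbps
    feasible = [r for r in AVAILABLE_RATES_KBPS if r <= target]
    return feasible[-1] if feasible else AVAILABLE_RATES_KBPS[0]
```

The trouble is that each download starts on a connection whose congestion window decayed during the preceding idle (OFF) period, so the estimate lands below the true available bandwidth; the player then picks a lower rate, downloads finish faster, OFF periods grow longer, and the next estimate drops further.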


Citations
Proceedings ArticleDOI
07 Aug 2017
TL;DR: Pensieve is proposed, a system that generates ABR algorithms using reinforcement learning (RL); it outperforms the best state-of-the-art scheme, with improvements in average QoE of 12%--25%.
Abstract: Client-side video players employ adaptive bitrate (ABR) algorithms to optimize user quality of experience (QoE). Despite the abundance of recently proposed schemes, state-of-the-art ABR algorithms suffer from a key limitation: they use fixed control rules based on simplified or inaccurate models of the deployment environment. As a result, existing schemes inevitably fail to achieve optimal performance across a broad set of network conditions and QoE objectives. We propose Pensieve, a system that generates ABR algorithms using reinforcement learning (RL). Pensieve trains a neural network model that selects bitrates for future video chunks based on observations collected by client video players. Pensieve does not rely on pre-programmed models or assumptions about the environment. Instead, it learns to make ABR decisions solely through observations of the resulting performance of past decisions. As a result, Pensieve automatically learns ABR algorithms that adapt to a wide range of environments and QoE metrics. We compare Pensieve to state-of-the-art ABR algorithms using trace-driven and real-world experiments spanning a wide variety of network conditions, QoE metrics, and video properties. In all considered scenarios, Pensieve outperforms the best state-of-the-art scheme, with improvements in average QoE of 12%--25%. Pensieve also generalizes well, outperforming existing schemes even on networks for which it was not explicitly trained.
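
As a hedged illustration of the RL formulation the abstract describes (the state features, bitrate ladder, and reward coefficients below are placeholders, and the trained actor network is stubbed out), the decision loop looks roughly like:

```python
# Schematic of an RL-driven ABR decision loop; all parameters illustrative.

BITRATES_KBPS = [300, 750, 1200, 1850, 2850, 4300]

def policy(state):
    """Stand-in for the trained neural network: map an observation vector
    to a probability distribution over available bitrates (uniform here)."""
    return [1.0 / len(BITRATES_KBPS)] * len(BITRATES_KBPS)

def select_bitrate(throughput_hist, download_time_hist, buffer_secs, last_level):
    """One ABR decision: build the observation, query the policy, act."""
    state = list(throughput_hist) + list(download_time_hist) + [buffer_secs, last_level]
    probs = policy(state)
    best = max(range(len(probs)), key=probs.__getitem__)  # greedy at test time
    return BITRATES_KBPS[best]

def reward(bitrate_kbps, rebuffer_secs, prev_bitrate_kbps):
    """QoE-style training signal: quality minus rebuffering and smoothness
    penalties (coefficients here are illustrative, not Pensieve's)."""
    return (bitrate_kbps / 1000.0
            - 4.3 * rebuffer_secs
            - abs(bitrate_kbps - prev_bitrate_kbps) / 1000.0)
```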

946 citations


Cites background from "Confused, timid, and unstable: picking a video streaming rate is hard"

  • ...However, many of these goals are inherently conflicting [3, 18, 21]....


  • ...However, selecting the right bitrate can be very challenging due to (1) the variability of network throughput [18, 41, 48, 51, 52]; (2) the conflicting video QoE requirements (high bitrate, minimal rebuffering, smoothness, etc....


  • ...occupancy and chunk download times [18, 21]....


Proceedings ArticleDOI
17 Aug 2014
TL;DR: This work suggests an alternative approach: rather than presuming that capacity estimation is required, it is perhaps better to begin by using only the buffer, and then ask when capacity estimation is needed. This approach reduces the rebuffer rate by 10-20% compared to Netflix's then-default ABR algorithm while delivering a similar average video rate.
Abstract: Existing ABR algorithms face a significant challenge in estimating future capacity: capacity can vary widely over time, a phenomenon commonly observed in commercial services. In this work, we suggest an alternative approach: rather than presuming that capacity estimation is required, it is perhaps better to begin by using only the buffer, and then ask when capacity estimation is needed. We test the viability of this approach through a series of experiments spanning millions of real users in a commercial service. We start with a simple design which directly chooses the video rate based on the current buffer occupancy. Our own investigation reveals that capacity estimation is unnecessary in steady state; however, using simple capacity estimation (based on immediate past throughput) is important during the startup phase, when the buffer itself is growing from empty. This approach allows us to reduce the rebuffer rate by 10-20% compared to Netflix's then-default ABR algorithm, while delivering a similar average video rate, and a higher video rate in steady state.
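
A minimal sketch of the buffer-based idea, assuming an illustrative piecewise-linear map from buffer occupancy to rate (the thresholds and rate ladder are invented, not the paper's exact design):

```python
# Choose the video rate directly from buffer occupancy; no throughput
# estimate is consulted once the buffer is established.

RATES_KBPS = [235, 560, 1050, 1750, 3000]
RESERVOIR_SECS = 10.0  # below this buffer level, always pick the minimum rate
CUSHION_SECS = 90.0    # above this buffer level, always pick the maximum rate

def rate_from_buffer(buffer_secs: float) -> int:
    if buffer_secs <= RESERVOIR_SECS:
        return RATES_KBPS[0]
    if buffer_secs >= CUSHION_SECS:
        return RATES_KBPS[-1]
    # Linear map from buffer occupancy to a target rate in between.
    frac = (buffer_secs - RESERVOIR_SECS) / (CUSHION_SECS - RESERVOIR_SECS)
    target = RATES_KBPS[0] + frac * (RATES_KBPS[-1] - RATES_KBPS[0])
    return max(r for r in RATES_KBPS if r <= target)
```

In steady state the buffer level alone picks the rate; per the abstract, a simple throughput estimate would only be needed during startup, while the buffer is still growing from empty.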

931 citations


Cites background from "Confused, timid, and unstable: picking a video streaming rate is hard"

  • ...Hulu [9] and YouTube [21] are based on capacity estimation....


  • ...In the presence of competing TCP flows, the ON-OFF pattern can trigger a bad interaction between TCP and the ABR algorithm, causing a further underestimate of capacity and a downward spiral in video quality [9]....


Journal ArticleDOI
TL;DR: A generally accepted definition for SDN is presented, including decoupling the control plane from the data plane and providing programmability for network application development, and its three-layer architecture is examined, comprising an infrastructure layer, a control layer, and an application layer.
Abstract: Emerging mega-trends (e.g., mobile, social, cloud, and big data) in information and communication technologies (ICT) present new challenges to the future Internet, for which ubiquitous accessibility, high bandwidth, and dynamic management are crucial. However, traditional approaches based on manual configuration of proprietary devices are cumbersome and error-prone, and they cannot fully utilize the capability of physical network infrastructure. Recently, software-defined networking (SDN) has been touted as one of the most promising solutions for the future Internet. SDN is characterized by two distinguishing features: decoupling the control plane from the data plane and providing programmability for network application development. As a result, SDN is positioned to provide more efficient configuration, better performance, and higher flexibility to accommodate innovative network designs. This paper surveys the latest developments in this active research area of SDN. We first present a generally accepted definition for SDN with the aforementioned two characteristic features and the potential benefits of SDN. We then examine its three-layer architecture, comprising an infrastructure layer, a control layer, and an application layer, and substantiate each layer with existing research efforts and related research areas. We follow that with an overview of the de facto SDN implementation (i.e., OpenFlow). Finally, we conclude this survey with some suggested open research challenges.
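
A toy sketch of the control/data-plane split described above (all class and method names are hypothetical; this is not the OpenFlow API): the controller holds the programmable policy and installs match-action rules, while the switch only matches packets against its flow table.

```python
# Toy control/data-plane separation, for illustration only.

class Switch:
    """Data plane: match-action forwarding, no decision logic."""
    def __init__(self):
        self.flow_table = {}  # destination -> output port

    def handle(self, packet: dict) -> str:
        # Unknown destinations are punted to the controller.
        return self.flow_table.get(packet["dst"], "send_to_controller")

class Controller:
    """Control plane: computes routes and programs the data plane."""
    def __init__(self, switch: Switch):
        self.switch = switch

    def install_route(self, dst: str, out_port: str) -> None:
        self.switch.flow_table[dst] = out_port

# Usage: the controller installs a rule once; the switch then forwards
# matching packets on its own.
sw = Switch()
Controller(sw).install_route("10.0.0.2", "port3")
assert sw.handle({"dst": "10.0.0.2"}) == "port3"
```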

894 citations


Cites background from "Confused, timid, and unstable: picking a video streaming rate is hard"

  • ...the case of network-oblivious P2P applications [14] and video streaming rate picking [15]....


Proceedings ArticleDOI
17 Aug 2015
TL;DR: A principled control-theoretic model is developed to reason about bitrate adaptation in client-side players, and a novel model predictive control algorithm is proposed that optimally combines throughput and buffer occupancy information to outperform traditional approaches.
Abstract: User-perceived quality-of-experience (QoE) is critical in Internet video applications as it impacts revenues for content providers and delivery systems. Given that there is little support in the network for optimizing such measures, bottlenecks could occur anywhere in the delivery system. Consequently, a robust bitrate adaptation algorithm in client-side players is critical to ensure good user experience. Previous studies have shown key limitations of state-of-the-art commercial solutions and proposed a range of heuristic fixes. Despite the emergence of several proposals, there is still a distinct lack of consensus on: (1) how best to design this client-side bitrate adaptation logic (e.g., use rate estimates vs. buffer occupancy); (2) how well specific classes of approaches will perform under diverse operating regimes (e.g., high throughput variability); or (3) how they actually balance different QoE objectives (e.g., startup delay vs. rebuffering). To this end, this paper makes three key technical contributions. First, to bring some rigor to this space, we develop a principled control-theoretic model to reason about a broad spectrum of strategies. Second, we propose a novel model predictive control algorithm that can optimally combine throughput and buffer occupancy information to outperform traditional approaches. Third, we present a practical implementation in a reference video player to validate our approach using realistic trace-driven emulations.
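
A hedged sketch of the model-predictive idea (the QoE weights, chunk duration, and bitrate ladder are illustrative, not the paper's): enumerate bitrate plans over a short horizon, simulate the buffer under a throughput prediction, and commit only the first step of the best plan.

```python
# MPC-style bitrate selection over a short lookahead horizon.
from itertools import product

RATES_KBPS = [300, 750, 1200, 1850, 2850]
CHUNK_SECS = 4.0

def qoe(rate, rebuf, prev_rate):
    # Quality minus rebuffering and switching penalties (placeholder weights).
    return rate / 1000.0 - 4.3 * rebuf - abs(rate - prev_rate) / 1000.0

def mpc_select(buffer_secs, prev_rate, predicted_kbps, horizon=3):
    best_plan, best_score = None, float("-inf")
    for plan in product(RATES_KBPS, repeat=horizon):
        buf, prev, score = buffer_secs, prev_rate, 0.0
        for rate in plan:
            dl_time = rate * CHUNK_SECS / predicted_kbps  # seconds to fetch chunk
            rebuf = max(0.0, dl_time - buf)               # stall if buffer drains
            buf = max(0.0, buf - dl_time) + CHUNK_SECS    # playout + new chunk
            score += qoe(rate, rebuf, prev)
            prev = rate
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan[0]  # apply only the first decision, then re-plan
```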

851 citations

Proceedings ArticleDOI
10 Dec 2012
TL;DR: A principled understanding of bitrate adaptation is presented, and a suite of techniques is developed that can systematically guide the tradeoffs between stability, fairness, and efficiency, leading to a general framework for robust video adaptation.
Abstract: Many commercial video players rely on bitrate adaptation logic to adapt the bitrate in response to changing network conditions. Past measurement studies have identified issues with today's commercial players with respect to three key metrics---efficiency, fairness, and stability---when multiple bitrate-adaptive players share a bottleneck link. Unfortunately, our current understanding of why these effects occur and how they can be mitigated is quite limited. In this paper, we present a principled understanding of bitrate adaptation and analyze several commercial players through the lens of an abstract player model. Through this framework, we identify the root causes of several undesirable interactions that arise as a consequence of overlaying the video bitrate adaptation over HTTP. Building on these insights, we develop a suite of techniques that can systematically guide the tradeoffs between stability, fairness, and efficiency and thus lead to a general framework for robust video adaptation. We pick one concrete instance from this design space and show that it significantly outperforms today's commercial players on all three key metrics across a range of experimental scenarios.
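
One illustrative stabilizer in this design space -- an assumption for this sketch, not quoted from the abstract -- is estimating bandwidth with a harmonic mean of recent per-chunk throughputs, which damps the transient spikes that cause rate oscillation:

```python
# Harmonic-mean throughput smoothing (illustrative stabilizing technique).
from collections import deque

class SmoothedEstimator:
    def __init__(self, window: int = 20):
        self.samples = deque(maxlen=window)  # recent per-chunk kbps samples

    def add(self, kbps: float) -> None:
        self.samples.append(max(kbps, 1e-3))  # guard the reciprocal below

    def estimate_kbps(self) -> float:
        if not self.samples:
            return 0.0
        # Harmonic mean weights low outliers heavily, so one lucky fast
        # chunk cannot trigger an aggressive (unstable) rate switch.
        return len(self.samples) / sum(1.0 / s for s in self.samples)
```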

806 citations

References
01 Apr 1999
TL;DR: This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery, and also discusses various acknowledgment generation methods.
Abstract: This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition, the document specifies how TCP should begin transmission after a relatively long idle period, as well as discussing various acknowledgment generation methods.
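
A didactic sketch of the window rules these algorithms define, in units of whole segments (simplified; real TCP counts bytes and covers many more cases):

```python
# Simplified TCP congestion-window dynamics.

def on_ack(cwnd: float, ssthresh: float) -> float:
    """Window growth on each new ACK."""
    if cwnd < ssthresh:
        return cwnd + 1.0        # slow start: cwnd doubles per RTT
    return cwnd + 1.0 / cwnd     # congestion avoidance: ~+1 segment per RTT

def on_timeout(cwnd: float) -> tuple[float, float]:
    """Retransmission timeout: collapse to one segment and slow-start again."""
    ssthresh = max(cwnd / 2.0, 2.0)
    return 1.0, ssthresh

def on_triple_dupack(cwnd: float) -> tuple[float, float]:
    """Fast retransmit/fast recovery: halve the window instead of collapsing."""
    ssthresh = max(cwnd / 2.0, 2.0)
    return ssthresh, ssthresh
```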

2,237 citations


"Confused, timid, and unstable: pick..." refers background in this paper

  • ...The problem is that during the 4-second OFF period, the TCP congestion window (cwnd) times out — due to inactivity longer than 200ms — and resets cwnd to its initial value of 10 packets [5, 6]....

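The idle-reset behavior in this quote is straightforward to sketch (hypothetical names; the 200 ms threshold and 10-packet initial window come from the quote itself):

```python
# Congestion-window reset after an idle period, per the quoted mechanism.
import time

IDLE_RESET_SECS = 0.200  # inactivity threshold from the quote
INITIAL_CWND = 10        # packets

class Sender:
    def __init__(self):
        self.cwnd = INITIAL_CWND
        self.last_send = time.monotonic()

    def before_send(self) -> None:
        # After an OFF period longer than the threshold, the congestion
        # window is reset, so each new video chunk begins in slow start.
        if time.monotonic() - self.last_send > IDLE_RESET_SECS:
            self.cwnd = INITIAL_CWND
        self.last_send = time.monotonic()
```

Each 4-second OFF period therefore forces the next chunk to start cold, which is precisely what depresses the client's HTTP-layer throughput estimate.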

Proceedings ArticleDOI
24 Oct 2007
TL;DR: This paper presents a traffic characterization study of the popular video sharing service, YouTube, and finds that as with the traditional Web, caching could improve the end user experience, reduce network bandwidth consumption, and reduce the load on YouTube's core server infrastructure.
Abstract: This paper presents a traffic characterization study of the popular video sharing service, YouTube. Over a three-month period we observed almost 25 million transactions between users on an edge network and YouTube, including more than 600,000 video downloads. We also monitored the globally popular videos over this period of time. In the paper we examine usage patterns, file properties, popularity and referencing characteristics, and transfer behaviors of YouTube, and compare them to traditional Web and media streaming workload characteristics. We conclude the paper with a discussion of the implications of the observed characteristics. For example, we find that as with the traditional Web, caching could improve the end user experience, reduce network bandwidth consumption, and reduce the load on YouTube's core server infrastructure. Unlike traditional Web caching, Web 2.0 provides additional meta-data that should be exploited to improve the effectiveness of strategies like caching.

990 citations

Proceedings ArticleDOI
23 Feb 2011
TL;DR: This paper focuses on the rate-adaptation mechanisms of adaptive streaming and experimentally evaluates two major commercial players (Smooth Streaming, Netflix) and one open source player (OSMF).
Abstract: Adaptive (video) streaming over HTTP is gradually being adopted, as it offers significant advantages in terms of both user-perceived quality and resource utilization for content and network service providers. In this paper, we focus on the rate-adaptation mechanisms of adaptive streaming and experimentally evaluate two major commercial players (Smooth Streaming, Netflix) and one open-source player (OSMF). Our experiments cover three important operating conditions. First, how does an adaptive video player react to either persistent or short-term changes in the underlying network's available bandwidth? Can the player quickly converge to the maximum sustainable bitrate? Second, what happens when two adaptive video players compete for available bandwidth at the bottleneck link? Can they share the resources in a stable and fair manner? And third, how does adaptive streaming perform with live content? Is the player able to sustain a short playback delay? We identify major differences between the three players, and significant inefficiencies in each of them.

729 citations

Proceedings ArticleDOI
15 Aug 2011
TL;DR: This paper uses a unique dataset that spans different content types, including short video on demand, long VoD, and live content from popular video content providers, to measure quality metrics such as the join time, buffering ratio, average bitrate, rendering quality, and rate of buffering events.
Abstract: As the distribution of video over the Internet becomes mainstream and its consumption moves from the computer to the TV screen, user expectation for high quality is constantly increasing. In this context, it is crucial for content providers to understand if and how video quality affects user engagement and how to best invest their resources to optimize video quality. This paper is a first step towards addressing these questions. We use a unique dataset that spans different content types, including short video on demand (VoD), long VoD, and live content from popular video content providers. Using client-side instrumentation, we measure quality metrics such as the join time, buffering ratio, average bitrate, rendering quality, and rate of buffering events. We quantify user engagement both at a per-video (or view) level and a per-user (or viewer) level. In particular, we find that the percentage of time spent in buffering (buffering ratio) has the largest impact on user engagement across all types of content. However, the magnitude of this impact depends on the content type, with live content being the most impacted. For example, a 1% increase in buffering ratio can reduce user engagement by more than three minutes for a 90-minute live video event. We also see that the average bitrate plays a significantly more important role in the case of live content than VoD content.

687 citations

Proceedings ArticleDOI
25 Mar 2012
TL;DR: A measurement study of Netflix is performed to uncover its architecture and service strategy, and finds that Netflix employs a blend of data centers and Content Delivery Networks (CDNs) for content distribution.
Abstract: Netflix is the leading provider of on-demand Internet video streaming in the US and Canada, accounting for 29.7% of the peak downstream traffic in the US. Understanding the Netflix architecture and its performance can shed light on how to best optimize its design as well as on the design of similar on-demand streaming services. In this paper, we perform a measurement study of Netflix to uncover its architecture and service strategy. We find that Netflix employs a blend of data centers and Content Delivery Networks (CDNs) for content distribution. We also perform active measurements of the three CDNs employed by Netflix to quantify the video delivery bandwidth available to users across the US. Finally, as improvements to Netflix's current CDN assignment strategy, we propose a measurement-based adaptive CDN selection strategy and a multiple-CDN-based video delivery strategy, and demonstrate their potential to significantly increase users' average bandwidth.
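
A minimal sketch of the measurement-based CDN selection the abstract proposes (names hypothetical): probe each candidate CDN's delivery bandwidth and steer the session to the best, instead of keeping a fixed assignment.

```python
# Measurement-based CDN selection, reduced to its core decision.

def pick_cdn(measured_kbps: dict) -> str:
    """Choose the CDN with the highest recently measured bandwidth."""
    return max(measured_kbps, key=measured_kbps.get)

# Example: pick_cdn({"cdn_a": 1800.0, "cdn_b": 2400.0, "cdn_c": 950.0})
# returns "cdn_b".
```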

521 citations
