Author

Konstantin Miller

Other affiliations: STMicroelectronics
Bio: Konstantin Miller is an academic researcher at Technical University of Berlin. He has contributed to research on topics including quality of experience and video quality, has an h-index of 9, and has co-authored 25 publications receiving 548 citations. His previous affiliations include STMicroelectronics.

Papers
Proceedings ArticleDOI
10 May 2012
TL;DR: This work presents a receiver-driven adaptation algorithm for adaptive streaming that does not rely on cross-layer information or server assistance, integrated with a prototype implementation of a streaming client based on the MPEG DASH (Dynamic Adaptive Streaming over HTTP) standard.
Abstract: Internet video makes up a significant part of Internet traffic, and its share is growing steadily. To guarantee the best user experience across different network access technologies with dynamically varying network conditions, it is essential to adopt technologies that enable proper delivery of the media content. One such technology is adaptive streaming, which dynamically adapts the bit-rate of the stream to varying network conditions. There are various approaches to adaptive streaming. In our work, we focus on the receiver-driven approach, where the media file is subdivided into segments, each segment is provided at multiple bit-rates, and the task of the client is to select the appropriate bit-rate for each segment. With this approach, the challenges are (i) to properly estimate the dynamics of the available network throughput, (ii) to control the filling level of the client buffer in order to avoid underflows resulting in playback interruptions, (iii) to maximize the quality of the stream while avoiding unnecessary quality fluctuations, and, finally, (iv) to minimize the delay between the user's request and the start of playback. We designed and implemented a receiver-driven adaptation algorithm for adaptive streaming that does not rely on cross-layer information or server assistance, and integrated it with a prototype implementation of a streaming client based on the MPEG DASH (Dynamic Adaptive Streaming over HTTP) standard. We evaluated the prototype in real-world scenarios and found that it performs remarkably well even under challenging network conditions. Further, it exhibits stable and fair operation when a common link is shared among multiple clients.
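
As a rough illustration of challenges (i) and (ii), the sketch below shows one plausible form of a receiver-driven selection step: a smoothed throughput estimate with a safety margin, scaled down when the playback buffer runs low. The function name, the constants, and the harmonic-mean estimator are assumptions for illustration, not the algorithm from the paper.

# Minimal sketch of a receiver-driven bitrate selection step
# (an illustrative simplification, not the paper's algorithm).
# The client smooths its throughput samples, applies a safety margin,
# and refuses to step up while the playback buffer is low.

def select_bitrate(bitrates, tput_samples, buffer_sec,
                   min_buffer_sec=10.0, margin=0.8):
    """Pick the highest sustainable bitrate from the sorted list `bitrates`."""
    # The harmonic mean is robust against short throughput spikes.
    est = len(tput_samples) / sum(1.0 / t for t in tput_samples)
    budget = margin * est
    if buffer_sec < min_buffer_sec:
        # Low buffer: prioritize avoiding underflow over quality.
        budget *= buffer_sec / min_buffer_sec
    feasible = [b for b in bitrates if b <= budget]
    return feasible[-1] if feasible else bitrates[0]

# Example: a ~3 Mbit/s estimate with a draining buffer picks a safe rung.
print(select_bitrate([500, 1200, 2500, 5000], [3000, 2800, 3200], 6.0))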

253 citations

Journal ArticleDOI
TL;DR: This work introduces an adaptation algorithm for HTTP-based live streaming called LOLYPOP (short for low-latency prediction-based adaptation), which is designed to operate with a transport latency of a few seconds, and leverages Transmission Control Protocol throughput predictions on multiple time scales.
Abstract: Recently, Hypertext Transfer Protocol (HTTP)-based adaptive streaming has become the de facto standard for video streaming over the Internet. It allows clients to dynamically adapt media characteristics to varying network conditions in order to ensure a high quality of experience (QoE), that is, to minimize playback interruptions while maximizing video quality at a reasonable level of quality changes. In the case of live streaming, this task becomes particularly challenging due to latency constraints. The challenge increases further if a client uses a wireless access network, where the throughput is subject to considerable fluctuations. Consequently, live streams often exhibit latencies of up to 20 to 30 seconds. In the present work, we introduce an adaptation algorithm for HTTP-based live streaming called LOLYPOP (short for low-latency prediction-based adaptation), which is designed to operate with a transport latency of a few seconds. To reach this goal, LOLYPOP leverages Transmission Control Protocol throughput predictions on multiple time scales, from 1 to 10 seconds, along with estimates of the relative prediction error distributions. In addition to satisfying the latency constraint, the algorithm heuristically maximizes the QoE by maximizing the average video quality as a function of the number of skipped segments and quality transitions. To select an efficient prediction method, we studied the performance of several time series prediction methods in IEEE 802.11 wireless access networks. We evaluated LOLYPOP under a large set of experimental conditions, limiting the transport latency to 3 seconds, against a state-of-the-art adaptation algorithm called FESTIVE. We observed that the average selected video representation index is up to a factor of 3 higher than with the baseline approach. We also observed that LOLYPOP is able to reach points from a broader region in the QoE space, and thus is better adjustable to the user profile or service provider requirements.
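
The sketch below illustrates the general idea of discounting a throughput prediction by a quantile of its past relative errors, so that a risk parameter trades the probability of skipping a segment against video quality. The predictor, the error convention, and all parameters are assumptions for illustration; LOLYPOP's actual predictors and selection logic are described in the paper.

# Sketch of combining a throughput prediction with its relative error
# distribution (hypothetical helper, not LOLYPOP itself).

import statistics

def conservative_prediction(tput_samples, past_rel_errors, risk=0.1):
    """Discount a naive throughput prediction by a past-error quantile."""
    prediction = statistics.mean(tput_samples[-5:])  # naive predictor (assumed)
    if not past_rel_errors:
        return prediction
    # rel_error = (predicted - actual) / actual, collected online.
    errors = sorted(past_rel_errors)
    # Guard against the (1 - risk) quantile of past over-predictions.
    idx = min(len(errors) - 1, int((1.0 - risk) * len(errors)))
    over_prediction = max(0.0, errors[idx])
    return prediction / (1.0 + over_prediction)

# Example: a 10% risk level with an occasional 30% over-prediction.
print(conservative_prediction([2500, 2700, 2600], [0.05, -0.1, 0.3, 0.02]))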

56 citations

Journal ArticleDOI
TL;DR: Techniques inspired by the robustness and simplicity of proportional-integral-derivative (PID) controllers are used to jointly address wireless transmission scheduling and video quality adaptation for unicast streaming sessions in a dense wireless access network; this control-theoretic approach efficiently utilizes the available wireless resources, providing a high quality of experience (QoE) to a large number of users.
Abstract: Recently, the way people consume video content has been undergoing a dramatic change. Plain TV sets, which were the center of home entertainment for a long time, are losing ground to hybrid TVs, PCs, game consoles, and, more recently, mobile devices such as tablets and smartphones. The new predominant paradigm is: watch what I want, when I want, and where I want. The challenges of this shift are manifold. On the one hand, broadcast technologies such as DVB-T/C/S need to be extended or replaced by mechanisms supporting asynchronous viewing, such as IPTV and video streaming over best-effort networks, while remaining scalable to millions of users. On the other hand, the dramatic increase in wireless data traffic is beginning to stretch the capabilities of the existing wireless infrastructure to its limits. Finally, video streaming technologies face the challenge of coping with highly heterogeneous end-user devices and dynamically changing network conditions, particularly in wireless and mobile networks. In the present work, our goal is to design an efficient system that supports a large number of unicast streaming sessions in a dense wireless access network. We address this goal by jointly considering the two problems of wireless transmission scheduling and video quality adaptation, using techniques inspired by the robustness and simplicity of proportional-integral-derivative (PID) controllers. We show that the control-theoretic approach makes it possible to efficiently utilize the available wireless resources, providing a high quality of experience (QoE) to a large number of users.
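
For readers unfamiliar with PID control, the snippet below is a textbook discrete PID loop of the kind that could, for example, steer a client's playback buffer toward a setpoint by scaling the requested video rate. It is a generic sketch under assumed gains and setpoint; the joint scheduling and adaptation design in the paper is considerably more involved.

# Textbook discrete PID controller (generic sketch, assumed parameters).

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, measured, dt):
        """Return a control signal, e.g. a video rate scaling factor."""
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# E.g. a controller keeping the playback buffer near 15 seconds:
pid = PID(kp=0.5, ki=0.05, kd=0.1, setpoint=15.0)
signal = pid.step(measured=9.0, dt=1.0)  # positive: buffer too low, reduce rate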

47 citations

Proceedings ArticleDOI
09 Jun 2010
TL;DR: This work integrates network coding into the Ethernet Passive Optical Network architecture to increase downlink throughput by up to 50% without changing the physical layer, and proposes coding packets not only between pairs of nodes but also between an arbitrary number of nodes forming a cycle.
Abstract: Passive Optical Networks are among the most promising technologies for high-speed access to the Internet, as they provide high data rates at low cost. We integrate network coding into the Ethernet Passive Optical Network architecture to increase downlink throughput by up to 50% without changing the physical layer. In contrast to previous work, we propose coding packets not only between pairs of nodes but also between an arbitrary number of nodes forming a cycle. We characterize the expected gain analytically and by means of simulations, and investigate the trade-off between queuing delay, traffic variability, and throughput gain. We show that in practical scenarios, a simple scheme already achieves a considerable share of the maximum possible coding gain.
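
The pairwise case can be illustrated with the classic XOR construction: when two nodes exchange packets through a relay, a single coded broadcast replaces two separate downlink transmissions, since each node can cancel out the packet it contributed. The toy snippet below (illustrative only, not the paper's EPON scheduler) demonstrates the decoding step.

# Toy illustration of the pairwise XOR coding gain. Nodes A and B
# exchange equal-length packets through a relay; one coded broadcast
# replaces two unicasts.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a_to_b = b"payload from A"
pkt_b_to_a = b"payload from B"

coded = xor(pkt_a_to_b, pkt_b_to_a)  # single broadcast downstream

# Node A still holds what it sent, so it can decode B's packet:
assert xor(coded, pkt_a_to_b) == pkt_b_to_a
# Likewise, node B recovers A's packet:
assert xor(coded, pkt_b_to_a) == pkt_a_to_b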

43 citations

Journal ArticleDOI
TL;DR: This work considers the worst-case efficiency of cost sharing methods in resource allocation games, measured as the ratio of the minimum guaranteed surplus of a Nash equilibrium to the maximal surplus, and demonstrates the power of an upper bound on the efficiency loss.
Abstract: Resource allocation problems play a key role in many applications, including traffic networks, telecommunication networks, and economics. In most applications, the allocation of resources is determined by a finite number of independent players, each optimizing an individual objective function. An important question in all these applications is the degree of suboptimality caused by selfish resource allocation. We consider the worst-case efficiency of cost sharing methods in resource allocation games in terms of the ratio of the minimum guaranteed surplus of a Nash equilibrium to the maximal surplus. Our main technical result is an upper bound on the efficiency loss that depends on the class of allowable cost functions and the class of allowable cost sharing methods. We demonstrate the power of this bound by evaluating the worst-case efficiency loss for three well-known cost sharing methods: incremental cost sharing, marginal cost pricing, and average cost sharing.
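
To make the efficiency notion concrete, the toy computation below works through a hypothetical two-player game with quadratic cost under average cost sharing; best-response iteration converges to the Nash equilibrium, whose surplus is then compared to the optimum (here the ratio comes out to 8/9). The game and all numbers are assumptions for illustration, not taken from the paper.

# Toy resource allocation game illustrating efficiency loss under
# average cost sharing (hypothetical example). Two players request
# quantities x1, x2 of a shared resource with cost C(X) = X^2,
# X = x1 + x2, and linear utility v * x_i. Under average cost sharing,
# player i pays (x_i / X) * C(X) = x_i * X.

v = 1.0  # common marginal valuation (assumed)

# Best-response iteration: player i maximizes v*x_i - x_i*(x_i + x_j),
# giving the response x_i = max(0, (v - x_j) / 2).
x1 = x2 = 0.1
for _ in range(100):
    x1 = max(0.0, (v - x2) / 2)
    x2 = max(0.0, (v - x1) / 2)

X_ne = x1 + x2                        # converges to 2v/3
surplus_ne = v * X_ne - X_ne ** 2     # 2v^2/9

X_opt = v / 2                         # maximizer of v*X - X^2
surplus_opt = v * X_opt - X_opt ** 2  # v^2/4

print(f"Nash surplus: {surplus_ne:.4f}, optimal: {surplus_opt:.4f}, "
      f"efficiency: {surplus_ne / surplus_opt:.4f}")  # ~0.8889 = 8/9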

33 citations


Cited by
Journal ArticleDOI
TL;DR: A review of The Analysis of Time Series: An Introduction, 4th edn., by C. Chatfield (Chapman and Hall, London, 1989; ISBN 0 412 31820 2).
Abstract: The Analysis of Time Series: An Introduction, 4th edn. By C. Chatfield. ISBN 0 412 31820 2. Chapman and Hall, London, 1989. 242 pp. £13.50.

1,583 citations

Proceedings ArticleDOI
10 Dec 2012
TL;DR: A principled understanding of bitrate adaptation is presented, and a suite of techniques is developed that can systematically guide the tradeoffs between stability, fairness, and efficiency, leading to a general framework for robust video adaptation.
Abstract: Many commercial video players rely on bitrate adaptation logic to adapt the bitrate in response to changing network conditions. Past measurement studies have identified issues with today's commercial players with respect to three key metrics (efficiency, fairness, and stability) when multiple bitrate-adaptive players share a bottleneck link. Unfortunately, our current understanding of why these effects occur and how they can be mitigated is quite limited. In this paper, we present a principled understanding of bitrate adaptation and analyze several commercial players through the lens of an abstract player model. Through this framework, we identify the root causes of several undesirable interactions that arise as a consequence of overlaying video bitrate adaptation on top of HTTP. Building on these insights, we develop a suite of techniques that can systematically guide the tradeoffs between stability, fairness, and efficiency, and thus lead to a general framework for robust video adaptation. We pick one concrete instance from this design space and show that it significantly outperforms today's commercial players on all three key metrics across a range of experimental scenarios.
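
One class of techniques in this space schedules chunk requests with random jitter so that competing clients do not lock into synchronized on-off download periods at a shared bottleneck; the sketch below shows one plausible form of such a scheduler. The function name and parameters are assumptions for illustration, not the paper's implementation.

# Sketch of randomized chunk scheduling (illustrative, assumed design).

import random

def next_request_time(now, buffer_sec, target_buffer_sec, chunk_sec):
    """Schedule the next chunk request, with random jitter in steady state."""
    if buffer_sec < target_buffer_sec:
        return now  # buffer-filling phase: request immediately
    # Steady state: jitter the request inside the chunk interval so
    # competing clients do not observe each other's on-off bursts.
    return now + random.uniform(0.0, chunk_sec)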

806 citations

Journal ArticleDOI
TL;DR: The paper reviews the technical development of HAS, existing open standardized solutions, and proprietary solutions as a foundation for deriving the QoE influence factors that emerge as a result of adaptation.
Abstract: Changing network conditions pose severe problems for video streaming in the Internet. HTTP adaptive streaming (HAS) is a technology employed by numerous video services that relieves these issues by adapting the video to the current network conditions. It enables service providers to improve resource utilization and Quality of Experience (QoE) by incorporating information from different layers in order to deliver and adapt a video in its best possible quality. It thereby takes into account end-user device capabilities, available video quality levels, current network conditions, and current server load. For end users, the major benefits of HAS compared to classical HTTP video streaming are fewer interruptions of video playback and higher bandwidth utilization, which both generally result in a higher QoE. Adaptation is possible by changing the frame rate, resolution, or quantization of the video, which can be done with various adaptation strategies and related client- and server-side actions. This paper reviews the technical development of HAS, existing open standardized solutions, and proprietary solutions as a foundation for deriving the QoE influence factors that emerge as a result of adaptation. The main contribution is a comprehensive survey of QoE-related works from the human-computer interaction and networking domains, structured according to the QoE impact of video adaptation. More precisely, subjective studies that cover QoE aspects of adaptation dimensions and strategies are revisited. As a result, QoE influence factors of HAS and corresponding QoE models are identified, and open issues and conflicting results are discussed. Furthermore, technical influence factors, which are often ignored in the context of HAS, affect perceptual QoE influence factors and are consequently analyzed as well. This survey gives the reader an overview of the current state of the art and recent developments. At the same time, it targets networking researchers who develop new solutions for HTTP video streaming or assess video streaming from a user-centric point of view. Therefore, this paper is a major step toward truly improving HAS.

746 citations

Journal ArticleDOI
Zhi Li, Xiaoqing Zhu, Josh Gahm, Rong Pan, Hao Hu, Ali C. Begen, David R. Oran
TL;DR: It is argued that video bitrate adaptation must be designed at the application layer using a "probe and adapt" principle, which is akin to, but also orthogonal to, transport-layer TCP congestion control; PANDA, a client-side rate adaptation algorithm for HAS, is presented as a practical embodiment of this principle.
Abstract: Today, the technology for video streaming over the Internet is converging towards a paradigm named HTTP-based adaptive streaming (HAS), which brings two new features. First, by using HTTP/TCP, it leverages network-friendly TCP to achieve both firewall/NAT traversal and bandwidth sharing. Second, by pre-encoding and storing the video at a number of discrete rate levels, it introduces video bitrate adaptivity in a scalable way, so that video encoding is excluded from the closed-loop adaptation. A conventional wisdom in HAS design is that since the TCP throughput observed by a client indicates the available network bandwidth, it can be used as a reliable reference for video bitrate selection. We argue that this is no longer true when HAS becomes a substantial fraction of the total network traffic. We show that when multiple HAS clients compete at a network bottleneck, the discrete nature of the video bitrates makes it difficult for a client to correctly perceive its fair-share bandwidth. Through analysis and test bed experiments, we demonstrate that this fundamental limitation leads to video bitrate oscillation and other undesirable behaviors that negatively impact the video viewing experience. We therefore argue that it is necessary to design at the application layer using a "probe and adapt" principle for video bitrate adaptation (where "probe" refers to a trial increment of the data rate, instead of sending auxiliary piggybacking traffic), which is akin to, but also orthogonal to, transport-layer TCP congestion control. We present PANDA, a client-side rate adaptation algorithm for HAS, as a practical embodiment of this principle. Our test bed results show that, compared to conventional algorithms, PANDA is able to reduce the instability of video bitrate selection by over 75% without increasing the risk of buffer underrun.
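
In the probe-and-adapt spirit, a client nudges its target rate upward and backs off in proportion to the observed shortfall, rather than trusting the raw throughput sample. The update below is a minimal sketch of that idea under assumed parameters; it is not the exact PANDA update rule from the paper.

# Minimal sketch of a probe-and-adapt style rate update (hypothetical
# simplification, not the exact PANDA algorithm).

def update_target_rate(target, measured_tput, probe_step=0.2, kappa=0.5):
    """Return the next target rate (Mbit/s) after one segment download."""
    # Probe: tentatively aim slightly above the current target.
    probe = target + probe_step
    # Adapt: back off in proportion to the shortfall actually observed.
    shortfall = max(0.0, probe - measured_tput)
    return probe - kappa * shortfall

# Example: with 3.0 Mbit/s measured, a 2.9 target keeps inching upward,
# while a 3.5 target is pulled back toward the available bandwidth.
print(update_target_rate(2.9, 3.0))  # 3.05
print(update_target_rate(3.5, 3.0))  # 3.35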

545 citations

Proceedings ArticleDOI
14 Nov 2012
TL;DR: This work measures three popular video streaming services -- Hulu, Netflix, and Vudu -- and finds that accurate client-side bandwidth estimation above the HTTP layer is hard, and rate selection based on inaccurate estimates can trigger a feedback loop, leading to undesirably variable and low-quality video.
Abstract: Today's commercial video streaming services use dynamic rate selection to provide a high-quality user experience. Most services host content on standard HTTP servers in CDNs, so rate selection must occur at the client. We measure three popular video streaming services -- Hulu, Netflix, and Vudu -- and find that accurate client-side bandwidth estimation above the HTTP layer is hard. As a result, rate selection based on inaccurate estimates can trigger a feedback loop, leading to undesirably variable and low-quality video. We call this phenomenon the "downward spiral effect", and we measure it on all three services, present insights into its root causes, and validate initial solutions to prevent it.
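
The feedback loop can be reproduced with a few lines of simulation: if the client's throughput observation is capped near its own request rate (as happens with on-off downloading), each estimate drags the next bitrate choice further down. The model below, including the bitrate ladder and the loss factors, is a hypothetical illustration, not the paper's measurement methodology.

# Tiny simulation of the feedback loop behind a "downward spiral"
# (illustrative model, assumed numbers).

bitrates = [235, 375, 560, 750, 1050, 1750, 2350, 3000]  # kbit/s (assumed)
link = 5000.0      # true available bandwidth, kbit/s
estimate = 3000.0  # client's initial bandwidth estimate

for step in range(5):
    chosen = max(b for b in bitrates if b <= estimate)
    # With idle off periods, the observed throughput reflects the request
    # rate (minus slack lost to TCP restart), not the true link rate.
    observed = min(link, 1.2 * chosen) * 0.7
    estimate = observed
    print(step, chosen, round(observed))  # chosen bitrate keeps falling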

372 citations