Author

Ali C. Begen

Bio: Ali C. Begen is an academic researcher from Özyeğin University. The author has contributed to research in topics: Network packet & Bandwidth (computing). The author has an h-index of 30 and has co-authored 133 publications receiving 4431 citations. Previous affiliations of Ali C. Begen include Georgia Institute of Technology & Cisco Systems, Inc.


Papers
Proceedings ArticleDOI
23 Feb 2011
TL;DR: This paper focuses on the rate-adaptation mechanisms of adaptive streaming and experimentally evaluates two major commercial players (Smooth Streaming, Netflix) and one open source player (OSMF).
Abstract: Adaptive (video) streaming over HTTP is gradually being adopted, as it offers significant advantages in terms of both user-perceived quality and resource utilization for content and network service providers. In this paper, we focus on the rate-adaptation mechanisms of adaptive streaming and experimentally evaluate two major commercial players (Smooth Streaming, Netflix) and one open-source player (OSMF). Our experiments cover three important operating conditions. First, how does an adaptive video player react to either persistent or short-term changes in the underlying network's available bandwidth? Can the player quickly converge to the maximum sustainable bitrate? Second, what happens when two adaptive video players compete for available bandwidth in the bottleneck link? Can they share the resources in a stable and fair manner? And third, how does adaptive streaming perform with live content? Is the player able to sustain a short playback delay? We identify major differences between the three players and significant inefficiencies in each of them.
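
As a concrete illustration of the convergence question posed above, the sketch below implements a minimal throughput-driven bitrate selector; the bitrate ladder, smoothing weight, and safety margin are illustrative assumptions and do not reflect the internal logic of Smooth Streaming, the Netflix player, or OSMF.

# Minimal sketch of a throughput-driven bitrate selector, assuming a fixed
# bitrate ladder and an exponentially smoothed throughput estimate. All
# constants are illustrative, not taken from any of the evaluated players.
BITRATES_KBPS = [300, 700, 1500, 2500, 4000]   # hypothetical encoding ladder
SAFETY_MARGIN = 0.8                            # request below the estimate
ALPHA = 0.3                                    # smoothing weight for new samples

def update_estimate(previous_kbps, measured_kbps, alpha=ALPHA):
    """Exponentially weighted moving average of per-segment throughput."""
    if previous_kbps is None:
        return measured_kbps
    return (1 - alpha) * previous_kbps + alpha * measured_kbps

def select_bitrate(estimate_kbps):
    """Pick the highest bitrate that fits under the discounted estimate."""
    candidates = [b for b in BITRATES_KBPS if b <= SAFETY_MARGIN * estimate_kbps]
    return max(candidates) if candidates else BITRATES_KBPS[0]

# Example: reaction to a persistent bandwidth drop from 3000 to 800 kbps.
estimate = None
for measured in [3000, 3000, 800, 800, 800, 800]:
    estimate = update_estimate(estimate, measured)
    print(f"estimate={estimate:7.1f} kbps -> request {select_bitrate(estimate)} kbps")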

729 citations

Journal ArticleDOI
Zhi Li, Xiaoqing Zhu, Josh Gahm, Rong Pan, Hao Hu, Ali C. Begen, David R. Oran
TL;DR: It is argued that video bitrate adaptation must be designed at the application layer using a "probe and adapt" principle, which is akin to, but also orthogonal to, transport-layer TCP congestion control, and PANDA - a client-side rate adaptation algorithm for HAS - is presented as a practical embodiment of this principle.
Abstract: Today, the technology for video streaming over the Internet is converging towards a paradigm named HTTP-based adaptive streaming (HAS), which brings two new features. First, by using HTTP/TCP, it leverages network-friendly TCP to achieve both firewall/NAT traversal and bandwidth sharing. Second, by pre-encoding and storing the video in a number of discrete rate levels, it introduces video bitrate adaptivity in a scalable way so that the video encoding is excluded from the closed-loop adaptation. A conventional wisdom in HAS design is that since the TCP throughput observed by a client would indicate the available network bandwidth, it could be used as a reliable reference for video bitrate selection. We argue that this is no longer true when HAS becomes a substantial fraction of the total network traffic. We show that when multiple HAS clients compete at a network bottleneck, the discrete nature of the video bitrates results in difficulty for a client to correctly perceive its fair-share bandwidth. Through analysis and test bed experiments, we demonstrate that this fundamental limitation leads to video bitrate oscillation and other undesirable behaviors that negatively impact the video viewing experience. We therefore argue that it is necessary to design at the application layer using a "probe and adapt" principle for video bitrate adaptation (where "probe" refers to trial increment of the data rate, instead of sending auxiliary piggybacking traffic), which is akin to, but also orthogonal to, the transport-layer TCP congestion control. We present PANDA - a client-side rate adaptation algorithm for HAS - as a practical embodiment of this principle. Our test bed results show that compared to conventional algorithms, PANDA is able to reduce the instability of video bitrate selection by over 75% without increasing the risk of buffer underrun.
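
The "probe and adapt" principle can be sketched as a small additive probe on the requested data rate with a proportional backoff when the target overshoots the measured throughput; the update and constants below are a simplified illustration of the idea, not the published PANDA update rule or its parameter values.

# Simplified probe-and-adapt sketch: keep nudging a target rate upward by a
# small probing increment, and back off in proportion to how far the target
# overshoots the measured throughput. Constants are illustrative only.
PROBE_INCREMENT_KBPS = 100.0   # hypothetical additive probe per step
KAPPA = 0.5                    # hypothetical convergence gain
BITRATES_KBPS = [300, 700, 1500, 2500, 4000]

def probe_and_adapt(target_kbps, measured_kbps):
    """One update of the target rate under the probe-and-adapt principle."""
    overshoot = max(0.0, target_kbps - measured_kbps)
    return target_kbps + KAPPA * (PROBE_INCREMENT_KBPS - overshoot)

def quantize(target_kbps):
    """Map the continuous target onto the discrete bitrate ladder."""
    candidates = [b for b in BITRATES_KBPS if b <= target_kbps]
    return max(candidates) if candidates else BITRATES_KBPS[0]

# Example: one of two clients sharing a 3000 kbps bottleneck sees roughly its
# fair share (1500 kbps) while downloading; the target settles just above that
# share and the quantizer maps it back onto the ladder.
target = 500.0
for measured in [1500.0] * 40:
    target = probe_and_adapt(target, measured)
print(f"target={target:.0f} kbps -> request {quantize(target)} kbps")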

545 citations

Proceedings ArticleDOI
07 Jun 2012
TL;DR: This paper describes how the typical behavior of an adaptive streaming player in its steady state, which includes periods of activity followed by periods of inactivity (ON-OFF periods), is the main root cause behind player instability, unfairness between players, and bandwidth underutilization.
Abstract: With an increasing demand for high-quality video content over the Internet, it is becoming more likely that two or more adaptive streaming players share the same network bottleneck and compete for available bandwidth. This competition can lead to three performance problems: player instability, unfairness between players, and bandwidth underutilization. However, the dynamics of such competition and the root causes of these three problems are not yet well understood. In this paper, we focus on the problem of competing video players and describe how the typical behavior of an adaptive streaming player in its steady state, which includes periods of activity followed by periods of inactivity (ON-OFF periods), is the main root cause behind the problems listed above. We use two adaptive players to experimentally showcase these issues. Then, focusing on the issue of player instability, we test how several factors (the ON-OFF durations, the available bandwidth and its relation to available bitrates, and the number of competing players) affect stability.
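
The ON-OFF behavior at the heart of this analysis can be reproduced with a toy download loop in which the player fetches a segment (ON) and then idles (OFF) once the buffer is full; the segment duration, buffer target, bitrate, and bandwidth below are assumptions chosen only to make the pattern visible.

# Toy simulation of the ON-OFF steady state: the player downloads a segment
# only when the buffer has room, otherwise it waits while playback drains the
# buffer. All parameters are illustrative assumptions.
SEGMENT_SECONDS = 4.0       # media duration per segment
MAX_BUFFER_SECONDS = 20.0   # player buffer target
BITRATE_KBPS = 1500.0       # selected video bitrate
BANDWIDTH_KBPS = 6000.0     # available (per-client) bandwidth

buffer_s, clock_s = 0.0, 0.0
for segment in range(8):
    idle_s = 0.0
    if buffer_s + SEGMENT_SECONDS > MAX_BUFFER_SECONDS:
        # OFF period: playback drains the buffer until one more segment fits.
        idle_s = buffer_s + SEGMENT_SECONDS - MAX_BUFFER_SECONDS
        clock_s += idle_s
        buffer_s -= idle_s
    # ON period: fetch one segment at the full available bandwidth.
    download_s = SEGMENT_SECONDS * BITRATE_KBPS / BANDWIDTH_KBPS
    clock_s += download_s
    buffer_s = max(0.0, buffer_s - download_s) + SEGMENT_SECONDS
    print(f"t={clock_s:6.2f}s  OFF={idle_s:.2f}s  ON={download_s:.2f}s  buffer={buffer_s:5.2f}s")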

356 citations

Journal ArticleDOI
TL;DR: This survey provides an overview of the bitrate adaptation algorithms proposed for HTTP adaptive streaming over the last several years, noting that the HAS standards do not mandate any particular adaptation algorithm and leave it to system builders to innovate and implement their own method.
Abstract: In this survey, we present state-of-the-art bitrate adaptation algorithms for HTTP adaptive streaming (HAS). As a key distinction from other streaming approaches, the bitrate adaptation algorithms in HAS are chiefly executed at each client, i.e., in a distributed manner. The objective of these algorithms is to ensure a high quality of experience (QoE) for viewers in the presence of bandwidth fluctuations due to factors like signal strength, network congestion, network reconvergence events, etc. While such fluctuations are common in the public Internet, they can also occur in home networks or even managed networks, where admission control and QoS tools are often in place. Bitrate adaptation algorithms may take factors like bandwidth estimates, playback buffer fullness, device features, viewer preferences, and content features into account, albeit with different weights. Since the viewer's QoE needs to be determined in real time during playback, objective metrics are generally used, including the number of buffer stalls, the duration of startup delay, the frequency and amount of quality oscillations, and video instability. By design, the standards for HAS do not mandate any particular adaptation algorithm, leaving it to system builders to innovate and implement their own method. This survey provides an overview of the different methods proposed over the last several years.
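
As a hedged sketch of how a client might combine two of the inputs listed above (a throughput estimate and buffer fullness), consider the toy decision rule below; its thresholds, weights, and bitrate ladder are arbitrary illustrations rather than a method recommended by the survey or mandated by any HAS standard.

# Illustrative hybrid bitrate decision combining a throughput estimate with
# buffer fullness. Thresholds and ladder are hypothetical; real algorithms
# weight these signals very differently.
BITRATES_KBPS = [300, 700, 1500, 2500, 4000]
LOW_BUFFER_S, HIGH_BUFFER_S = 5.0, 15.0

def choose_bitrate(throughput_kbps, buffer_s):
    """Conservative when the buffer is low, throughput-led otherwise."""
    if buffer_s < LOW_BUFFER_S:
        budget = 0.5 * throughput_kbps       # protect against stalls
    elif buffer_s > HIGH_BUFFER_S:
        budget = 1.1 * throughput_kbps       # allow a cautious upswitch
    else:
        budget = 0.8 * throughput_kbps
    fitting = [b for b in BITRATES_KBPS if b <= budget]
    return max(fitting) if fitting else BITRATES_KBPS[0]

print(choose_bitrate(throughput_kbps=2000, buffer_s=3))    # low buffer -> 700
print(choose_bitrate(throughput_kbps=2000, buffer_s=18))   # full buffer -> 1500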

289 citations

Journal ArticleDOI
TL;DR: In this first part of a two-part article, the authors describe both conventional and emerging streaming solutions using Web and non-Web protocols.
Abstract: The average US consumer watches TV for almost five hours a day. While the majority of viewed content is still broadcast TV programming, the share of time-shifted content is on the rise. One-third of US viewers currently use a digital video recorder (DVR)-like device, but trends indicate that more consumers are migrating to the Web to watch their favorite shows and movies. Increasingly, the Web is coming to digital TV, which incorporates movie downloads and streaming via Web protocols. In this first part of a two-part article, the authors describe both conventional and emerging streaming solutions using Web and non-Web protocols.

263 citations


Cited by
Journal ArticleDOI
08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first encounter, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
05 Mar 2012
TL;DR: Computer Networking: A Top-Down Approach Featuring the Internet explains the engineering problems that are inherent in communicating digital information from point to point, presents the mathematics that determines the best path, shows some code that implements those algorithms, and illustrates the logic by using excellent conceptual diagrams.
Abstract: Certain data-communication protocols hog the spotlight, but all of them have a lot in common. Computer Networking: A Top-Down Approach Featuring the Internet explains the engineering problems that are inherent in communicating digital information from point to point. The top-down approach mentioned in the subtitle means that the book starts at the top of the protocol stack--at the application layer--and works its way down through the other layers, until it reaches bare wire. The authors, for the most part, shun the well-known seven-layer Open Systems Interconnection (OSI) protocol stack in favor of their own five-layer (application, transport, network, link, and physical) model. It's an effective approach that helps clear away some of the hand waving traditionally associated with the more obtuse layers in the OSI model. The approach is definitely theoretical--don't look here for instructions on configuring Windows 2000 or a Cisco router--but it's relevant to reality, and should help anyone who needs to understand networking as a programmer, system architect, or even administration guru. --David Wall. Topics covered: The theory behind data networks, with thorough discussion of the problems that are posed at each level (the application layer gets plenty of attention). For each layer, there's academic coverage of networking problems and solutions, followed by discussion of real technologies. Special sections deal with network security and transmission of digital multimedia. In discussing routing, authors James Kurose and Keith Ross explain (by way of lots of clear, definition-packed text) what routing protocols need to do: find the best route to a destination. Then they present the mathematics that determine the best path, show some code that implements those algorithms, and illustrate the logic by using excellent conceptual diagrams. Real-life implementations of the algorithms--including Internet Protocol (both IPv4 and IPv6) and several popular IP routing protocols--help you to make the transition from pure theory to networking technologies. The treatment of the network layer, at which routing takes place, is typical of the overall style.
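
Since the review singles out the book's treatment of best-path computation at the network layer, a minimal link-state shortest-path sketch is given below; the four-node topology and link costs are made up for illustration and are not taken from the book.

import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source over a dict-of-dicts link-cost graph."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue                      # stale queue entry, already improved
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return dist

# Hypothetical four-router topology with symmetric link costs.
topology = {
    "u": {"v": 2, "w": 5},
    "v": {"u": 2, "w": 1, "x": 4},
    "w": {"u": 5, "v": 1, "x": 1},
    "x": {"v": 4, "w": 1},
}
print(dijkstra(topology, "u"))   # {'u': 0, 'v': 2, 'w': 3, 'x': 4}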

1,079 citations

Proceedings ArticleDOI
07 Aug 2017
TL;DR: Pensieve, a system that generates ABR algorithms using reinforcement learning (RL), is proposed; it outperforms the best state-of-the-art scheme, with improvements in average QoE of 12%--25%.
Abstract: Client-side video players employ adaptive bitrate (ABR) algorithms to optimize user quality of experience (QoE). Despite the abundance of recently proposed schemes, state-of-the-art ABR algorithms suffer from a key limitation: they use fixed control rules based on simplified or inaccurate models of the deployment environment. As a result, existing schemes inevitably fail to achieve optimal performance across a broad set of network conditions and QoE objectives. We propose Pensieve, a system that generates ABR algorithms using reinforcement learning (RL). Pensieve trains a neural network model that selects bitrates for future video chunks based on observations collected by client video players. Pensieve does not rely on pre-programmed models or assumptions about the environment. Instead, it learns to make ABR decisions solely through observations of the resulting performance of past decisions. As a result, Pensieve automatically learns ABR algorithms that adapt to a wide range of environments and QoE metrics. We compare Pensieve to state-of-the-art ABR algorithms using trace-driven and real-world experiments spanning a wide variety of network conditions, QoE metrics, and video properties. In all considered scenarios, Pensieve outperforms the best state-of-the-art scheme, with improvements in average QoE of 12%--25%. Pensieve also generalizes well, outperforming existing schemes even on networks for which it was not explicitly trained.
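
To give a flavor of learning bitrate decisions from interaction rather than from a fixed model, the toy below runs tabular Q-learning over a two-level bandwidth process; it is emphatically not Pensieve, which trains a neural network with a policy-gradient method, and every constant in the sketch is an assumption.

import random

# Toy tabular Q-learning sketch: learn which bitrate to pick in each bandwidth
# state purely from observed rewards. The two-level bandwidth process, reward
# shaping, and learning constants are all illustrative assumptions.
BITRATES = [300, 700, 1500, 2500]          # kbps, hypothetical ladder
BANDWIDTH_STATES = [800, 3000]             # kbps, "low" and "high"
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q = {(s, a): 0.0 for s in range(len(BANDWIDTH_STATES)) for a in range(len(BITRATES))}

def reward(bitrate, bandwidth):
    """Reward higher quality, penalize picking above the available bandwidth."""
    return bitrate / 1000.0 - (5.0 if bitrate > bandwidth else 0.0)

random.seed(1)
state = 0
for step in range(5000):
    action = (random.randrange(len(BITRATES)) if random.random() < EPSILON
              else max(range(len(BITRATES)), key=lambda a: q[(state, a)]))
    next_state = random.randrange(len(BANDWIDTH_STATES))   # bandwidth fluctuates
    r = reward(BITRATES[action], BANDWIDTH_STATES[state])
    best_next = max(q[(next_state, a)] for a in range(len(BITRATES)))
    q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])
    state = next_state

for s, bw in enumerate(BANDWIDTH_STATES):
    best = max(range(len(BITRATES)), key=lambda a: q[(s, a)])
    print(f"bandwidth {bw} kbps -> learned bitrate {BITRATES[best]} kbps")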

946 citations

Proceedings ArticleDOI
17 Aug 2014
TL;DR: This work suggests an alternative approach: rather than presuming that capacity estimation is required, begin by using only the buffer and then ask when capacity estimation is needed; this approach reduces the rebuffer rate by 10-20% compared to Netflix's then-default ABR algorithm while delivering a similar average video rate.
Abstract: Existing ABR algorithms face a significant challenge in estimating future capacity: capacity can vary widely over time, a phenomenon commonly observed in commercial services. In this work, we suggest an alternative approach: rather than presuming that capacity estimation is required, it is perhaps better to begin by using only the buffer, and then ask when capacity estimation is needed. We test the viability of this approach through a series of experiments spanning millions of real users in a commercial service. We start with a simple design that directly chooses the video rate based on the current buffer occupancy. Our own investigation reveals that capacity estimation is unnecessary in steady state; however, using simple capacity estimation (based on immediate past throughput) is important during the startup phase, when the buffer itself is growing from empty. This approach allows us to reduce the rebuffer rate by 10-20% compared to Netflix's then-default ABR algorithm, while delivering a similar average video rate, and a higher video rate in steady state.
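
The buffer-first idea can be sketched as a piecewise-linear map from buffer occupancy to video rate, with a reservoir at the low end and a cap once the buffer is comfortable; the breakpoints and bitrate ladder below are illustrative assumptions, not the values used in the deployed service.

# Sketch of a buffer-based rate map: hold the lowest rate inside a reservoir,
# ramp linearly across a cushion, and cap at the top rate once the buffer is
# comfortable. Breakpoints and bitrates are illustrative assumptions.
BITRATES_KBPS = [300, 700, 1500, 2500, 4000]
RESERVOIR_S, CUSHION_END_S = 5.0, 20.0

def rate_from_buffer(buffer_s):
    """Map buffer occupancy (seconds) to a video rate from the ladder."""
    if buffer_s <= RESERVOIR_S:
        return BITRATES_KBPS[0]
    if buffer_s >= CUSHION_END_S:
        return BITRATES_KBPS[-1]
    fraction = (buffer_s - RESERVOIR_S) / (CUSHION_END_S - RESERVOIR_S)
    target = BITRATES_KBPS[0] + fraction * (BITRATES_KBPS[-1] - BITRATES_KBPS[0])
    return max(b for b in BITRATES_KBPS if b <= target)

for buffer_s in [2, 8, 14, 25]:
    print(f"buffer={buffer_s:2d}s -> {rate_from_buffer(buffer_s)} kbps")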

931 citations

Proceedings ArticleDOI
17 Aug 2015
TL;DR: A principled control-theoretic model for bitrate adaptation in client-side players is developed, and a novel model predictive control algorithm that optimally combines throughput and buffer occupancy information to outperform traditional approaches is presented.
Abstract: User-perceived quality-of-experience (QoE) is critical in Internet video applications as it impacts revenues for content providers and delivery systems. Given that there is little support in the network for optimizing such measures, bottlenecks could occur anywhere in the delivery system. Consequently, a robust bitrate adaptation algorithm in client-side players is critical to ensure good user experience. Previous studies have shown key limitations of state-of-the-art commercial solutions and proposed a range of heuristic fixes. Despite the emergence of several proposals, there is still a distinct lack of consensus on: (1) how best to design this client-side bitrate adaptation logic (e.g., use rate estimates vs. buffer occupancy); (2) how well specific classes of approaches will perform under diverse operating regimes (e.g., high throughput variability); or (3) how they actually balance different QoE objectives (e.g., startup delay vs. rebuffering). To this end, this paper makes three key technical contributions. First, to bring some rigor to this space, we develop a principled control-theoretic model to reason about a broad spectrum of strategies. Second, we propose a novel model predictive control algorithm that can optimally combine throughput and buffer occupancy information to outperform traditional approaches. Third, we present a practical implementation in a reference video player to validate our approach using realistic trace-driven emulations.
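
A minimal sketch of the model-predictive idea: enumerate bitrate sequences over a short horizon, simulate the buffer against a throughput forecast, and apply only the first step of the best-scoring sequence; the QoE weights, horizon length, and forecast below are assumptions, not the paper's formulation.

from itertools import product

# Minimal model-predictive sketch: score every bitrate sequence over a short
# horizon against a throughput forecast and apply only the first decision.
# QoE weights, horizon length, and the forecast itself are illustrative.
BITRATES_KBPS = [300, 700, 1500, 2500]
SEGMENT_S = 4.0
REBUFFER_WEIGHT = 10.0      # penalty per second of stall
SWITCH_WEIGHT = 1.0         # penalty per Mbps of bitrate change

def qoe(sequence, last_kbps, buffer_s, forecast_kbps):
    """Toy QoE score: quality minus stall and switching penalties."""
    score = 0.0
    for bitrate, throughput in zip(sequence, forecast_kbps):
        download_s = SEGMENT_S * bitrate / throughput
        stall_s = max(0.0, download_s - buffer_s)
        buffer_s = max(0.0, buffer_s - download_s) + SEGMENT_S
        score += bitrate / 1000.0
        score -= REBUFFER_WEIGHT * stall_s
        score -= SWITCH_WEIGHT * abs(bitrate - last_kbps) / 1000.0
        last_kbps = bitrate
    return score

def mpc_decision(last_kbps, buffer_s, forecast_kbps):
    """Exhaustively search the horizon and return only the first bitrate."""
    horizon = len(forecast_kbps)
    best = max(product(BITRATES_KBPS, repeat=horizon),
               key=lambda seq: qoe(seq, last_kbps, buffer_s, forecast_kbps))
    return best[0]           # apply the first step, replan at the next segment

print(mpc_decision(last_kbps=1500, buffer_s=8.0,
                   forecast_kbps=[2000.0, 1200.0, 1200.0]))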

851 citations