Author

Bart De Vleeschauwer

Other affiliations: Ghent University
Bio: Bart De Vleeschauwer is an academic researcher from Bell Labs. The author has contributed to research on topics including access networks and quality of experience, has an h-index of 14, and has co-authored 42 publications receiving 612 citations. Previous affiliations of Bart De Vleeschauwer include Ghent University.

Papers
Proceedings ArticleDOI
10 Oct 2005
TL;DR: This paper outlines the architecture of such a system and describes a set of algorithms that assign the microcells to the available servers, using the maximum load experienced by a server as the minimization criterion.
Abstract: With the number of players of massively multiplayer online games (MMOG) going beyond the millions, there is a need for an efficient way to manage these huge digital worlds. These virtual environments are dynamic and sudden increases in player density in a part of the world have an impact on the load of the server responsible for that section of the virtual world. In this paper we propose the division of the world into several interacting microcells that can be dynamically assigned to a set of servers. We outline the architecture of such a system and describe a set of algorithms that assign the microcells to the available servers. The maximum load experienced by a server is used as a minimization criterion. The different algorithms are compared with each other and with the standard approach used in these games.
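The paper's own assignment algorithms are not reproduced in this abstract; as an illustration of the underlying problem, here is a minimal greedy sketch (the function name, data shapes, and heuristic are illustrative assumptions, not the paper's method) that assigns microcells to servers while keeping the maximum server load low:

```python
def assign_microcells(loads, n_servers):
    """Greedily assign microcells to servers, keeping the maximum
    server load low (longest-processing-time-first heuristic).

    loads: dict mapping microcell id -> load estimate.
    Returns (assignment, per-server load list)."""
    server_load = [0.0] * n_servers
    assignment = {}
    # Place the heaviest microcells first, each on the currently
    # least-loaded server.
    for cell, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        target = min(range(n_servers), key=lambda s: server_load[s])
        assignment[cell] = target
        server_load[target] += load
    return assignment, server_load
```

A dynamic system along the lines the abstract describes would rerun such an assignment as player density shifts between microcells.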

87 citations

Proceedings Article
27 May 2013
TL;DR: The performance of AVC- and SVC-based HAS is characterized in terms of perceived video quality, network load and client characteristics, with the goal of identifying advantages and disadvantages of both options.
Abstract: HTTP Adaptive Streaming (HAS) is quickly becoming the dominant type of video streaming in Over-The-Top multimedia services. HAS content is temporally segmented and each segment is offered in different video qualities to the client. It enables a video client to dynamically adapt the consumed video quality to match the capabilities of the network and/or the client's device. As such, the use of HAS allows a service provider to offer video streaming over heterogeneous networks and to heterogeneous devices. Traditionally, the H.264/AVC video codec is used for encoding the HAS content: for each offered video quality, a separate AVC video file is encoded. Obviously, this leads to considerable storage redundancy at the video server, as each video is available in a multitude of qualities. The recent Scalable Video Coding (SVC) extension of H.264/AVC allows encoding a video into different quality layers: by downloading one or more additional layers, the video quality can be improved. While this leads to an immediate reduction of the required storage at the video server, the impact of SVC-based HAS on the network and on the quality perceived by the user is less obvious. In this article, we characterize the performance of AVC- and SVC-based HAS in terms of perceived video quality, network load, and client characteristics, with the goal of identifying the advantages and disadvantages of both options.
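The layered-download idea behind SVC-based HAS can be sketched as a simple per-segment decision; this is an illustrative client heuristic under assumed layer bitrates, not the evaluation methodology of the paper:

```python
def svc_layers_to_fetch(layer_bitrates, available_bandwidth):
    """Decide how many SVC layers to download for the next segment:
    the base layer is always fetched, then enhancement layers are
    added while the cumulative bitrate fits the available bandwidth.

    layer_bitrates: [base, enh1, enh2, ...] in kbit/s.
    Returns (number of layers, cumulative bitrate)."""
    chosen_rate = layer_bitrates[0]  # base layer is mandatory
    n_layers = 1
    for rate in layer_bitrates[1:]:
        if chosen_rate + rate > available_bandwidth:
            break  # next enhancement layer no longer fits
        chosen_rate += rate
        n_layers += 1
    return n_layers, chosen_rate
```

Contrast this with AVC-based HAS, where the client picks exactly one of several complete encodings per segment instead of stacking layers.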

66 citations

Journal ArticleDOI
TL;DR: The Knowledge Plane is presented as an autonomic layer that optimizes the QoE in multimedia access networks from the service originator to the user: it autonomously detects network problems and determines an appropriate corrective action.

52 citations

Proceedings ArticleDOI
22 Oct 2012
TL;DR: An in-network video rate adaptation algorithm is presented that maximizes the provider's revenue and offered QoE and the synergy between the proposed solution and HAS-enabled video clients is evaluated.
Abstract: HTTP Adaptive Streaming (HAS) is becoming the de-facto standard for adaptive streaming solutions. In HAS, video content is split into segments and encoded into multiple qualities, such that the quality of a video can be dynamically adapted during the HTTP download process. This has given rise to intelligent video players that strive to maximize Quality of Experience (QoE) by adapting the displayed quality based on the user's available bandwidth and device characteristics. HAS-based techniques have been widely used in Over-the-Top (OTT) video services. Recently, academia and industry have started investigating the merits of HAS in managed IPTV scenarios. However, the adoption of HAS in a managed environment is complicated by the fact that the quality adaptation component is controlled solely by the end-user. This prevents the service provider from offering any type of QoE guarantees to its subscribers. Moreover, as every user independently makes decisions, this approach does not support coordinated management and global optimization. These shortcomings can be overcome by introducing additional intelligence into the provider's network, which allows overriding the client's decisions. In this paper we investigate how such intelligence can be introduced into a managed multimedia access network. More specifically, we present an in-network video rate adaptation algorithm that maximizes the provider's revenue and offered QoE. Furthermore, the synergy between our proposed solution and HAS-enabled video clients is evaluated.
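As a hedged sketch of what such in-network intelligence might look like (the function, its downgrade policy, and all names are illustrative assumptions, not the paper's revenue-maximizing algorithm): the network node starts from each client's requested quality and overrides the highest requests until the aggregate bitrate fits the managed link:

```python
def cap_client_qualities(requests, bitrates, capacity):
    """In-network override sketch: start every client at its requested
    quality level, then, while the aggregate bitrate exceeds the link
    capacity, downgrade the client currently at the highest level.

    requests: dict client -> quality index into bitrates.
    bitrates: bitrate per quality level, ascending."""
    levels = dict(requests)

    def total():
        return sum(bitrates[q] for q in levels.values())

    while total() > capacity:
        client = max(levels, key=lambda c: levels[c])
        if levels[client] == 0:
            break  # everyone is already at the lowest quality
        levels[client] -= 1
    return levels
```

A real deployment would weight clients by subscription tier or expected revenue rather than treating them uniformly, as the abstract implies.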

45 citations

Journal ArticleDOI
TL;DR: This paper investigates combining HAS with scalable video coding and shows that this combination can reduce the client buffer (an improvement for live and interactive video), reduce storage requirements, increase the cache hit ratio of supporting content delivery network (CDN) nodes, and make the HAS client behave more robustly.
Abstract: HTTP adaptive streaming (HAS) is rapidly evolving into a key video delivery technology, supported by implementations from Microsoft, Apple, and Adobe, and actively pursued by standardization organizations. Using segments in multiple video qualities, distributed via an already available Hypertext Transfer Protocol (HTTP) delivery infrastructure, a HAS client is able to seamlessly adapt to the available bandwidth in the network. However, existing HAS solutions have a number of disadvantages such as the additional storage and bandwidth requirements, a large playout buffer to absorb network impairments, and a non-optimal quality selection under fluctuating network conditions. In this paper, we investigate the opportunity of combining HAS with scalable video coding. We show that this combination creates possibilities to reduce the client buffer, which implies improvements for live and interactive video, and reduces storage requirements, increases the cache hit-ratio for supporting content delivery network (CDN) nodes, and demonstrates more robust behavior in the HAS client, ultimately improving the quality of experience (QoE) for the viewer. © 2012 Alcatel-Lucent.

41 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
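The mail-filtering scenario can be made concrete with a toy learned filter. This naive-Bayes-style sketch (all names are illustrative; it is a minimal demonstration of learning rules from user-labeled examples, not a production filter) learns per-word counts from messages the user has kept or rejected:

```python
import math
from collections import Counter

def train_filter(labeled_messages):
    """Learn per-word spam/ham counts from user-labeled examples:
    the system observes which messages the user rejects and builds
    its filtering rules automatically."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, label in labeled_messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def is_spam(text, counts, totals):
    """Score a new message per class with add-one smoothing and
    return True if the spam class is more likely."""
    vocab = len(counts["spam"] | counts["ham"]) or 1
    score = {}
    for label in ("spam", "ham"):
        logp = math.log((totals[label] + 1) / (sum(totals.values()) + 2))
        for w in text.lower().split():
            logp += math.log((counts[label][w] + 1) /
                             (sum(counts[label].values()) + vocab))
        score[label] = logp
    return score["spam"] > score["ham"]
```

As the essay notes, each user's filter adapts to that user's own labels, with no per-user programming required.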

13,246 citations

Journal ArticleDOI
TL;DR: This paper reviews the technical development of HAS, existing open standardized solutions, and proprietary solutions as a foundation for deriving the QoE influence factors that emerge as a result of adaptation.
Abstract: Changing network conditions pose severe problems to video streaming in the Internet. HTTP adaptive streaming (HAS) is a technology employed by numerous video services that relieves these issues by adapting the video to the current network conditions. It enables service providers to improve resource utilization and Quality of Experience (QoE) by incorporating information from different layers in order to deliver and adapt a video in its best possible quality. Thereby, it allows taking into account end user device capabilities, available video quality levels, current network conditions, and current server load. For end users, the major benefits of HAS compared to classical HTTP video streaming are reduced interruptions of the video playback and higher bandwidth utilization, which both generally result in a higher QoE. Adaptation is possible by changing the frame rate, resolution, or quantization of the video, which can be done with various adaptation strategies and related client- and server-side actions. The technical development of HAS, existing open standardized solutions, but also proprietary solutions are reviewed in this paper as fundamental to derive the QoE influence factors that emerge as a result of adaptation. The main contribution is a comprehensive survey of QoE related works from human computer interaction and networking domains, which are structured according to the QoE impact of video adaptation. To be more precise, subjective studies that cover QoE aspects of adaptation dimensions and strategies are revisited. As a result, QoE influence factors of HAS and corresponding QoE models are identified, but also open issues and conflicting results are discussed. Furthermore, technical influence factors, which are often ignored in the context of HAS, affect perceptual QoE influence factors and are consequently analyzed. This survey gives the reader an overview of the current state of the art and recent developments. 
At the same time, it targets networking researchers who develop new solutions for HTTP video streaming or assess video streaming from a user centric point of view. Therefore, this paper is a major step toward truly improving HAS.

746 citations

Journal ArticleDOI
TL;DR: This survey provides an overview of the bitrate adaptation algorithms for HTTP adaptive streaming proposed over the last several years; by design, the HAS standards leave it to system builders to innovate and implement their own method.
Abstract: In this survey, we present state-of-the-art bitrate adaptation algorithms for HTTP adaptive streaming (HAS). As a key distinction from other streaming approaches, the bitrate adaptation algorithms in HAS are chiefly executed at each client, i.e., in a distributed manner. The objective of these algorithms is to ensure a high quality of experience (QoE) for viewers in the presence of bandwidth fluctuations due to factors like signal strength, network congestion, network reconvergence events, etc. While such fluctuations are common in the public Internet, they can also occur in home networks or even managed networks, where there are often admission control and QoS tools. Bitrate adaptation algorithms may take factors like bandwidth estimates, playback buffer fullness, device features, viewer preferences, and content features into account, albeit with different weights. Since the viewer's QoE needs to be determined in real time during playback, objective metrics are generally used, including the number of buffer stalls, the duration of startup delay, the frequency and amount of quality oscillations, and video instability. By design, the standards for HAS do not mandate any particular adaptation algorithm, leaving it to system builders to innovate and implement their own method.
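A classic buffer-based rule is one of the simplest instances of the client-side algorithms this survey covers; the thresholds and names below are illustrative assumptions, not any specific surveyed algorithm:

```python
def select_bitrate(bitrates, buffer_level, low=5.0, high=15.0, current=0):
    """Buffer-based bitrate selection: step down one quality level
    when the playout buffer runs low (risk of a stall), step up one
    level when it is comfortably full, otherwise hold steady.

    bitrates: available quality levels, ascending.
    buffer_level, low, high: seconds of buffered video."""
    if buffer_level < low and current > 0:
        return current - 1  # buffer draining: drop a quality level
    if buffer_level > high and current < len(bitrates) - 1:
        return current + 1  # plenty of buffer: try the next level up
    return current
```

Real adaptation algorithms typically combine such buffer signals with throughput estimates and penalize frequent quality oscillations, per the QoE metrics listed above.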

289 citations

Journal ArticleDOI
TL;DR: This survey presents a tutorial overview of the popular video streaming techniques deployed for stored videos, identifies various metrics that could be used to quantify the QoE for video streaming services, and presents a comprehensive survey of the literature on tools and measurement methodologies proposed to measure or predict the QoE of online video streaming services.
Abstract: Video-on-demand streaming services have gained popularity over the past few years. An increase in the speed of access networks has also led to a larger number of users watching videos online. Online video streaming traffic is estimated to further increase from the current value of 57% to 69% by 2017 (Cisco, 2014). In order to retain existing users and attract new ones, service providers attempt to satisfy users' expectations and provide a satisfactory viewing experience. The first step toward providing a satisfactory service is to be able to quantify the users' perception of the current service level. Quality of experience (QoE) is a metric that provides a holistic measure of the users' perception of quality. In this survey, we first present a tutorial overview of the popular video streaming techniques deployed for stored videos, followed by identifying various metrics that could be used to quantify the QoE for video streaming services; finally, we present a comprehensive survey of the literature on various tools and measurement methodologies that have been proposed to measure or predict the QoE of online video streaming services.

206 citations