Author

Danny De Vleeschauwer

Other affiliations: Alcatel-Lucent
Bio: Danny De Vleeschauwer is an academic researcher from Bell Labs. The author has contributed to research in topics: Network packet & Video quality. The author has an h-index of 21 and has co-authored 113 publications receiving 1733 citations. Previous affiliations of Danny De Vleeschauwer include Alcatel-Lucent.


Papers
Proceedings ArticleDOI
30 Aug 2004
TL;DR: In this paper, both subjective perceived quality and objective measurements are investigated, and both are shown to be influenced even by the small delay and jitter values typical of access networks.
Abstract: There have been several studies in the past years that investigate the impact of network delay on multi-user applications. Primary examples of these applications are real-time multiplayer games. These studies have shown that high network delays and jitter may indeed influence the player's perception of the quality of the game. However, the proposed test values, which are often high, are not always representative of a large percentile of on-line game players. We have therefore investigated the influence of delay and jitter with values that are more representative of typical access networks. This in effect allows us to simulate a setup with multiplayer game servers that are located at ISP level and players connected through that ISP's access network. To obtain further true-to-life results, we opted to carry out the test using a recent first person shooter (FPS) game, Unreal Tournament 2003. It can, after all, be expected that this new generation of games has built-in features to diminish the effect of small delay values, given the popularity of playing these games over the Internet. In this paper, we investigate both subjective perceived quality and objective measurements and show that both are indeed influenced by even these small delay and jitter values.

153 citations
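To make the kind of experiment described above concrete, the following Python sketch generates per-packet delays as a base delay plus bounded random jitter and summarizes them. The values and the uniform jitter model are illustrative assumptions only, not the authors' actual testbed, which used the real game under controlled network conditions.

import random
import statistics

def sample_delays(n_packets, base_delay_ms, jitter_ms, seed=0):
    # One-way delay = fixed base delay + uniformly distributed jitter.
    # Real access-network jitter follows more complex distributions;
    # this is only a toy model.
    rng = random.Random(seed)
    return [base_delay_ms + rng.uniform(-jitter_ms, jitter_ms)
            for _ in range(n_packets)]

# Illustrative access-network-like settings (tens of milliseconds).
delays = sample_delays(n_packets=10_000, base_delay_ms=40, jitter_ms=10)
print(f"mean delay: {statistics.mean(delays):.1f} ms")
print(f"p99 delay : {sorted(delays)[int(0.99 * len(delays))]:.1f} ms")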

Proceedings ArticleDOI
23 Feb 2011
TL;DR: The benefits of using Scalable Video Coding (SVC) in such a DASH environment are shown; DASH lets video clients dynamically adapt the requested video quality of ongoing video flows to match their current download rate as closely as possible.
Abstract: HTTP-based delivery for Video on Demand (VoD) has been gaining popularity in recent years. Progressive Download over HTTP, typically used in VoD, takes advantage of the widely deployed network caches to relieve video servers from sending the same content to a high number of users in the same access network. However, due to a sharp increase in the requests at peak hours or due to cross-traffic within the network, congestion may arise in the cache feeder link or access link respectively. Since the connection characteristics may vary over time, Dynamic Adaptive Streaming over HTTP (DASH), a recently proposed technique, lets video clients dynamically adapt the requested video quality of ongoing video flows to match their current download rate as closely as possible. In this work we show the benefits of using Scalable Video Coding (SVC) for such a DASH environment.

126 citations
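A minimal sketch of the adaptation idea, assuming an SVC ladder in which each enhancement layer adds a known bitrate on top of the layers below; the ladder values and the greedy rule are illustrative, not the paper's specific client logic.

def pick_svc_layers(layer_bitrates_kbps, throughput_kbps):
    # Request the largest number of cumulative SVC layers whose total
    # bitrate still fits the measured download rate; the base layer is
    # always requested so the stream keeps playing at minimum quality.
    chosen, cumulative = 1, 0
    for i, rate in enumerate(layer_bitrates_kbps, start=1):
        cumulative += rate
        if cumulative > throughput_kbps:
            break
        chosen = i
    return chosen

# Hypothetical ladder: base layer + two enhancement layers (kbit/s each).
ladder = [500, 700, 1300]
print(pick_svc_layers(ladder, throughput_kbps=1400))  # -> 2 layers

Because SVC layers are incremental, a client that downgrades simply stops fetching the top layers instead of switching to a separately encoded representation, which is the property that makes SVC attractive in a DASH setting.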

Book ChapterDOI
01 Jun 2010
TL;DR: This paper presents an overview of various techniques for measuring QoE, thereby mostly focusing on freely available tools and methodologies.
Abstract: Quality of Experience (QoE) relates to how users perceive the quality of an application. Capturing such a subjective measure, either through subjective tests or via objective tools, is an art in its own right. Given the importance of measuring users’ satisfaction to service providers, research on QoE has taken flight in recent years. In this paper we present an overview of various techniques for measuring QoE, focusing mostly on freely available tools and methodologies.

118 citations
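As one concrete example of the objective tools mentioned above, the classic full-reference PSNR metric can be computed in a few lines; the sample pixel values below are made up, and PSNR is only one of many metrics such toolchains rely on.

import math

def psnr(reference, degraded, max_value=255):
    # Peak signal-to-noise ratio between two equal-length pixel sequences:
    # a widely used full-reference objective quality metric.
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_value ** 2 / mse)

print(f"{psnr([10, 200, 30, 40], [12, 198, 33, 40]):.1f} dB")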

Journal ArticleDOI
TL;DR: It is argued that the availability of larger buffers in the network enables IPTV to better offer new services (in particular, time-shifted TV, network personal video recorder, and video-on-demand) than the competing platforms.
Abstract: Currently, digital television is gradually replacing analogue TV. Although these digital TV services can be delivered via various broadcast networks (e.g., terrestrial, cable, satellite), Internet Protocol TV over broadband telecommunication networks offers much more than traditional broadcast TV. Not only can it improve the quality that users experience with this linear programming TV service, but it also paves the way for new TV services, such as video-on-demand, time-shifted TV, and network personal video recorder services, because of its integral return channel and the ability to address individual users. This article first provides an overview of a typical IPTV network architecture and some basic video coding concepts. Based on these, we then explain how IPTV can increase the linear programming TV quality experienced by end users by reducing channel-change latency and mitigating packet loss. For the latter, forward error correction and automatic repeat request techniques are discussed, whereas for the former a solution based on a circular buffer strategy is described. This article further argues that the availability of larger buffers in the network enables IPTV to better offer new services (in particular, time-shifted TV, network personal video recorder, and video-on-demand) than the competing platforms.

97 citations
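The circular-buffer idea for faster channel changes can be sketched as follows: a network node keeps the most recent packets of each channel, anchored at the last random access point (keyframe), and bursts them to a joining viewer so decoding can start immediately. This is a simplified illustration, not the article's exact design; real deployments also pace the burst and bound its size.

from collections import deque

class ChannelChangeBuffer:
    # Keeps recent packets of one TV channel, starting from the last
    # random access point (RAP), so a newly joining viewer can be sent
    # a burst and start decoding without waiting for the next keyframe.

    def __init__(self, max_packets=2000):
        self._buf = deque(maxlen=max_packets)

    def push(self, packet, is_keyframe):
        if is_keyframe:
            self._buf.clear()   # restart the burst window at each RAP
        self._buf.append(packet)

    def burst_for_new_viewer(self):
        return list(self._buf)  # in practice sent faster than real time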

Proceedings Article
27 May 2013
TL;DR: All QoE models analyzed are based on HAS profiles constructed by intercepting the HTTP GET messages, and the sensitivity of the models to unseen profiles, content and devices is investigated.
Abstract: The end user QoE (quality of experience) of content delivered over a radio network is influenced by the radio conditions in the RAN (radio access network). This paper analyses various QoE models for video delivered over a radio network (e.g. LTE (long term evolution)) using HAS (HTTP adaptive streaming). All QoE models analyzed are based on the HAS profiles constructed by intercepting the HTTP GET messages. The performance of all optimally trained models is compared and the sensitivity of the models to unseen profiles, content and devices is investigated.

94 citations
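To illustrate what a HAS profile constructed from intercepted HTTP GET messages can look like, the sketch below turns a GET log into the sequence of requested quality levels and computes a toy quality score. The field name, the penalty, and the scoring rule are hypothetical; the paper's trained QoE models are not specified here.

def has_profile(get_requests):
    # HAS profile = sequence of quality levels requested segment by segment,
    # recovered here from a log of intercepted HTTP GET requests.
    # The "quality_level" field name is a made-up example.
    return [req["quality_level"] for req in get_requests]

def naive_qoe_score(profile, switch_penalty=0.5):
    # Toy proxy: mean requested quality minus a penalty per quality switch.
    switches = sum(1 for a, b in zip(profile, profile[1:]) if a != b)
    return sum(profile) / len(profile) - switch_penalty * switches

log = [{"quality_level": q} for q in [3, 3, 2, 2, 3, 3, 3]]
print(round(naive_qoe_score(has_profile(log)), 2))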


Cited by
Journal ArticleDOI
TL;DR: This paper reviews the technical development of HAS, existing open standardized solutions, and proprietary solutions as a basis for deriving the QoE influence factors that emerge as a result of adaptation.
Abstract: Changing network conditions pose severe problems to video streaming in the Internet. HTTP adaptive streaming (HAS) is a technology employed by numerous video services that relieves these issues by adapting the video to the current network conditions. It enables service providers to improve resource utilization and Quality of Experience (QoE) by incorporating information from different layers in order to deliver and adapt a video in its best possible quality. It thereby allows end-user device capabilities, available video quality levels, current network conditions, and current server load to be taken into account. For end users, the major benefits of HAS compared to classical HTTP video streaming are reduced interruptions of the video playback and higher bandwidth utilization, which both generally result in a higher QoE. Adaptation is possible by changing the frame rate, resolution, or quantization of the video, which can be done with various adaptation strategies and related client- and server-side actions. This paper reviews the technical development of HAS, existing open standardized solutions, and proprietary solutions as a basis for deriving the QoE influence factors that emerge as a result of adaptation. The main contribution is a comprehensive survey of QoE-related works from the human-computer interaction and networking domains, which are structured according to the QoE impact of video adaptation. To be more precise, subjective studies that cover QoE aspects of adaptation dimensions and strategies are revisited. As a result, QoE influence factors of HAS and corresponding QoE models are identified, but also open issues and conflicting results are discussed. Furthermore, technical influence factors, which are often ignored in the context of HAS, affect perceptual QoE influence factors and are consequently analyzed. This survey gives the reader an overview of the current state of the art and recent developments. At the same time, it targets networking researchers who develop new solutions for HTTP video streaming or assess video streaming from a user-centric point of view. Therefore, this paper is a major step toward truly improving HAS.

746 citations

Journal ArticleDOI
TL;DR: In this paper, the authors compare the performance of D2D caching and coded multicasting with conventional unicasting and harmonic broadcasting in terms of the throughput scaling laws of wireless networks.
Abstract: As wireless video is the fastest growing form of data traffic, methods for spectrally efficient on-demand wireless video streaming are essential to both service providers and users. A key property of video on-demand is asynchronous content reuse, such that a few popular files account for a large part of the traffic but are viewed by users at different times. Caching of content on wireless devices in conjunction with device-to-device (D2D) communications makes it possible to exploit this property and provide a network throughput that is significantly in excess of both the conventional approach of unicasting from cellular base stations and the traditional D2D networks for “regular” data traffic. This paper presents in a tutorial and concise form some recent results on the throughput scaling laws of wireless networks with caching and asynchronous content reuse, contrasting the D2D approach with other alternative approaches such as conventional unicasting, harmonic broadcasting, and a novel coded multicasting approach based on caching in the user devices and network-coded transmission from the cellular base station only. Somewhat surprisingly, the D2D scheme with spatial reuse and simple decentralized random caching achieves the same near-optimal throughput scaling law as coded multicasting. Both schemes achieve an unbounded throughput gain (in terms of scaling law) with respect to conventional unicasting and harmonic broadcasting, in the relevant regime where the number of video files in the library is smaller than the total size of the distributed cache capacity in the network. To better understand the relative merits of these competing approaches, we consider a holistic D2D system design incorporating traditional microwave (2 GHz) and millimeter-wave (mm-wave) D2D links; the direct connections to the base station can be used to serve those rare video requests that cannot be found in local caches. We provide extensive simulation results under a variety of system settings and compare our scheme with the systems that exploit transmission from the base station only. We show that, even in realistic conditions and non-asymptotic regimes, the proposed D2D approach offers very significant throughput gains.

617 citations
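A back-of-the-envelope way to see why caching helps under asynchronous content reuse: with a Zipf-like request popularity, even a modest cache within D2D range already serves a large fraction of requests. The popularity exponent, neighbour count, and the "cache the most popular files everywhere" policy below are illustrative assumptions, not the paper's placement schemes.

def zipf(num_files, alpha=0.8):
    # Zipf request popularity over the file library (a common modelling choice).
    w = [r ** -alpha for r in range(1, num_files + 1)]
    s = sum(w)
    return [x / s for x in w]

def d2d_hit_probability(popularity, caching_prob, neighbours):
    # Probability that a request is served from some device within D2D range,
    # assuming each of `neighbours` devices independently stores file f with
    # probability caching_prob[f].
    return sum(p * (1 - (1 - q) ** neighbours)
               for p, q in zip(popularity, caching_prob))

pop = zipf(num_files=1000)
cache = [1.0 if rank < 50 else 0.0 for rank in range(1000)]  # toy policy
print(f"{d2d_hit_probability(pop, cache, neighbours=5):.2f}")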

Journal ArticleDOI
TL;DR: A caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands are proposed.
Abstract: We consider a wireless device-to-device (D2D) network where communication is restricted to be single-hop. Users make arbitrary requests from a finite library of files and have pre-cached information on their devices, subject to a per-node storage capacity constraint. A similar problem has already been considered in an infrastructure setting, where all users receive a common multicast (coded) message from a single omniscient server (e.g., a base station having all the files in the library) through a shared bottleneck link. In this paper, we consider a D2D infrastructureless version of the problem. We propose a caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands. We also consider a random caching strategy, which is more suitable to a fully decentralized implementation. Under certain conditions, both approaches can achieve the information theoretic outer bound within a constant multiplicative factor. In our previous work, we showed that a caching D2D wireless network with one-hop communication, random caching, and uncoded delivery (direct file transmissions) achieves the same throughput scaling law of the infrastructure-based coded multicasting scheme, in the regime of large number of users and files in the library. This shows that the spatial reuse gain of the D2D network is order-equivalent to the coded multicasting gain of single base station transmission. It is, therefore, natural to ask whether these two gains are cumulative, i.e., if a D2D network with both local communication (spatial reuse) and coded multicasting can provide an improved scaling law. Somewhat counterintuitively, we show that these gains do not cumulate (in terms of throughput scaling law). This fact can be explained by noticing that the coded delivery scheme creates messages that are useful to multiple nodes, such that it benefits from broadcasting to as many nodes as possible, while spatial reuse capitalizes on the fact that the communication is local, such that the same time slot can be reused in space across the network. Unfortunately, these two issues are in contrast with each other.

598 citations
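A toy instance of the coded D2D delivery described above, for three users and three files with the subpacket placement indicated in the comments; it follows the general spirit of deterministic subpacket assignment plus XOR exchange between users, but it is not the paper's general construction.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy library: files A, B, C, each split into three subpackets labelled by
# the pair of users that cache them (subpacket contents are made up).
sub = {(name, pair): f"{name}{pair}".encode().ljust(4)
       for name in "ABC" for pair in ("12", "13", "23")}

# User k caches every subpacket whose label contains k (two thirds of each file).
cache = {k: {key: val for key, val in sub.items() if str(k) in key[1]}
         for k in (1, 2, 3)}

# Demands: user 1 wants A, user 2 wants B, user 3 wants C; each user is
# missing exactly the one subpacket of its file not labelled with it.
tx_from_user1 = xor(sub[("B", "13")], sub[("C", "12")])   # serves users 2 and 3
tx_from_user2 = xor(sub[("A", "23")], sub[("C", "12")])   # serves user 1

assert xor(tx_from_user1, cache[2][("C", "12")]) == sub[("B", "13")]  # user 2
assert xor(tx_from_user1, cache[3][("B", "13")]) == sub[("C", "12")]  # user 3
assert xor(tx_from_user2, cache[1][("C", "12")]) == sub[("A", "23")]  # user 1
print("all demands met with two coded D2D transmissions")

In this toy example, uncoded D2D delivery would need three subpacket transmissions (one per missing subpacket), whereas the XOR exchange needs only two; the paper studies how gains of this kind scale with the number of users and files.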

Journal ArticleDOI
TL;DR: Latency determines not only how players experience online gameplay but also how to design the games to mitigate its effects and meet player expectations.
Abstract: Latency determines not only how players experience online gameplay but also how to design the games to mitigate its effects and meet player expectations.

537 citations