Author

Alberto Blanc

Other affiliations: Institut Mines-Télécom
Bio: Alberto Blanc is an academic researcher at École nationale supérieure des télécommunications de Bretagne. He has contributed to research on topics including the Internet and wireless networks, has an h-index of 9, and has co-authored 25 publications receiving 381 citations. Previous affiliations include Institut Mines-Télécom.

Papers
Proceedings ArticleDOI
18 Mar 2015
TL;DR: An integer linear program (ILP) maximizes the average user quality of experience (QoE), and a heuristic algorithm that scales to a large number of videos and users can efficiently satisfy a time-varying demand with an almost constant amount of computing resources.
Abstract: More and more users are watching online videos produced by non-professional sources (e.g., gamers, teachers of online courses, witnesses of public events) by using an increasingly diverse set of devices to access the videos (e.g., smartphones, tablets, HDTV). Live streaming service providers can combine adaptive streaming technologies and cloud computing to satisfy this demand. In this paper, we study the problem of preparing live video streams for delivery using cloud computing infrastructure, e.g., how many representations to use and the corresponding parameters (resolution and bit-rate). We present an integer linear program (ILP) to maximize the average user quality of experience (QoE) and a heuristic algorithm that can scale to a large number of videos and users. We also introduce two new datasets: one characterizing a popular live streaming provider (Twitch) and another characterizing the computing resources needed to transcode a video. They are used to set up realistic test scenarios. We compare the performance of the optimal ILP solution with current industry standards, showing that the latter are sub-optimal. The solution of the ILP also shows the importance of the type of video on the optimal streaming preparation. By taking advantage of this, the proposed heuristic can efficiently satisfy a time-varying demand with an almost constant amount of computing resources.
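To make the kind of optimization described in this abstract concrete, the following is a minimal sketch of a representation-selection ILP under a computing budget, written with PuLP. It is an illustration only, not the paper's model: the candidate representations, QoE values, CPU costs, and the budget below are all hypothetical placeholders, and the actual formulation also accounts for multiple videos and the user population.

```python
# Minimal sketch (not the paper's exact model) of selecting representations
# for live transcoding under a CPU budget, using PuLP. All names, QoE values
# and costs below are hypothetical placeholders.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

# Candidate representations: (resolution, bitrate_kbps), each with an assumed
# average QoE contribution and transcoding CPU cost.
reps = {
    ("1080p", 4500): {"qoe": 4.5, "cpu": 3.0},
    ("720p", 2500):  {"qoe": 4.0, "cpu": 1.8},
    ("480p", 1200):  {"qoe": 3.2, "cpu": 1.0},
    ("360p", 700):   {"qoe": 2.5, "cpu": 0.6},
}
cpu_budget = 3.5  # assumed available computing resources (arbitrary units)

prob = LpProblem("representation_selection", LpMaximize)
x = {r: LpVariable(f"use_{r[0]}_{r[1]}", cat=LpBinary) for r in reps}

# Objective: maximize the total QoE of the chosen representation ladder.
prob += lpSum(reps[r]["qoe"] * x[r] for r in reps)
# Constraint: total transcoding cost must fit within the CPU budget.
prob += lpSum(reps[r]["cpu"] * x[r] for r in reps) <= cpu_budget

prob.solve()
chosen = [r for r in reps if value(x[r]) == 1]
print("Selected representations:", chosen)
```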

81 citations

Proceedings ArticleDOI
08 May 2017
TL;DR: It is shown that measurements collected on the operator's network and on the mobile node, before the connection to the content provider is established, can indeed be exploited to predict the achievable data rate with reasonable accuracy.
Abstract: Downlink data rates can vary significantly in cellular networks, with a potentially non-negligible effect on the user experience. Content providers address this problem by using different representations (e.g., picture resolution, video resolution and rate) of the same content and switch among these based on measurements collected during the connection. If it were possible to know the achievable data rate before the connection establishment, content providers could choose the most appropriate representation from the very beginning. We have conducted a measurement campaign involving 60 users connected to a production network in France, to determine whether it is possible to predict the achievable data rate using measurements collected, before establishing the connection to the content provider, on the operator's network and on the mobile node. We show that it is indeed possible to exploit these measurements to predict the achievable data rate with reasonable accuracy.
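As an illustration of this kind of pre-connection prediction, the sketch below trains a regressor on hypothetical radio and network features. The feature set, the synthetic data, and the choice of a random forest are assumptions for illustration; the paper does not prescribe this particular model here.

```python
# Illustrative sketch only: predicting the achievable downlink rate from
# measurements available before the connection is established. Feature names,
# data, and the random-forest model are assumptions, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 1000
# Hypothetical pre-connection features: radio quality (RSRP, RSRQ),
# cell load seen on the operator side, and device category.
X = np.column_stack([
    rng.uniform(-120, -70, n),   # RSRP (dBm)
    rng.uniform(-20, -3, n),     # RSRQ (dB)
    rng.uniform(0.0, 1.0, n),    # cell load (fraction)
    rng.integers(1, 5, n),       # device category
])
# Synthetic target: achievable rate in Mbps (placeholder relationship).
y = np.clip(0.5 * (X[:, 0] + 120) * (1 - X[:, 2]) + rng.normal(0, 2, n), 0.1, None)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("MAE (Mbps):", mean_absolute_error(y_te, model.predict(X_te)))
```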

68 citations

Journal ArticleDOI
TL;DR: This article formulates an integer linear program that maximizes users' average satisfaction, taking into account network dynamics, type of video content, and user population characteristics, and proposes a few theoretical guidelines that can be used, in realistic settings, to choose the encoding parameters based on the user characteristics, the network capacity, and the type of video content.
Abstract: Adaptive streaming addresses the increasing and heterogeneous demand of multimedia content over the Internet by offering several encoded versions for each video sequence. Each version (or representation) is characterized by a resolution and a bit rate, and it is aimed at a specific set of users, like TV or mobile phone clients. While most existing works on adaptive streaming deal with effective playout-buffer control strategies on the client side, in this article we take the providers' perspective and propose solutions to improve user satisfaction by optimizing the set of available representations. We formulate an integer linear program that maximizes users' average satisfaction, taking into account network dynamics, type of video content, and user population characteristics. The solution of the optimization is a set of encoding parameters corresponding to the representation set that maximizes user satisfaction. We evaluate this solution by simulating multiple adaptive streaming sessions characterized by realistic network statistics, showing that the proposed solution outperforms commonly used vendor recommendations, not only in terms of user satisfaction but also in terms of fairness and outage probability. The simulation results show that video content information as well as network constraints and users' statistics play a crucial role in selecting proper encoding parameters to provide fairness among users and to reduce network resource usage. We finally propose a few theoretical guidelines that can be used, in realistic settings, to choose the encoding parameters based on the user characteristics, the network capacity and the type of video content.
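A stylized version of such a representation-selection ILP can be written as follows. The notation is assumed for illustration only and is not the article's exact formulation: x_r indicates whether representation r is offered, y_{u,r} assigns user u to representation r, s_{u,r} is the satisfaction u derives from r, b_r is the bit rate of r, and B is an overall encoding budget.

```latex
% Stylized sketch of a representation-selection ILP; the notation is
% assumed for illustration and is not the article's exact formulation.
\begin{align*}
\max_{x,\,y}\quad & \frac{1}{|U|}\sum_{u \in U}\sum_{r \in R} s_{u,r}\, y_{u,r}
  && \text{(average user satisfaction)}\\
\text{s.t.}\quad & y_{u,r} \le x_r \quad \forall u \in U,\ r \in R
  && \text{(only offered representations can be assigned)}\\
 & \sum_{r \in R} y_{u,r} = 1 \quad \forall u \in U
  && \text{(each user receives exactly one representation)}\\
 & \sum_{r \in R} b_r\, x_r \le B
  && \text{(total encoding bit-rate budget)}\\
 & x_r,\ y_{u,r} \in \{0,1\}
\end{align*}
```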

67 citations

Proceedings ArticleDOI
19 Mar 2014
TL;DR: This paper proposes a few practical guidelines that can be used to choose the encoding parameters based on the user base characteristics, the network capacity and the type of video content, and shows that video content information as well as network constraints and users' statistics play a crucial role in selecting proper encoding parameters.
Abstract: Adaptive streaming addresses the increasing and heterogeneous demand of multimedia content over the Internet by offering several streams for each video. Each stream has a different resolution and bit rate, aimed at a specific set of users, e.g., TV, mobile phone. While most existing works on adaptive streaming deal with optimal playout-control strategies at the client side, in this paper we concentrate on the providers' side, showing how to improve user satisfaction by optimizing the encoding parameters. We formulate an integer linear program that maximizes users' average satisfaction, taking into account the network characteristics, the type of video content, and the user population. The solution of the optimization is a set of encoding parameters that outperforms commonly used vendor recommendations, in terms of user satisfaction and total delivery cost. Results show that video content information as well as network constraints and users' statistics play a crucial role in selecting proper encoding parameters to provide fairness among users and reduce network usage. By combining patterns common to several representative cases, we propose a few practical guidelines that can be used to choose the encoding parameters based on the user base characteristics, the network capacity and the type of video content.

48 citations

Proceedings ArticleDOI
18 Jun 2012
TL;DR: This paper presents Wi2Me Traces Explorer, an extensible mobile sensing application to characterize current deployments, which allows any mobile user to gather not only access point locations but also their performance in terms of bandwidth, link quality and successful connection rate.
Abstract: With the increasing popularity of WiFi technologies, mobile users may now take advantage of heterogeneous wireless networks. In contrast to cellular networks, community networks, based on sharing residential WiFi access, offer a high access-point density in urban areas but uncontrolled performance. We present Wi2Me Traces Explorer, an extensible mobile sensing application to characterize current deployments. This application allows any mobile user to gather not only access point locations but also their performance in terms of bandwidth, link quality and successful connection rate.
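As a rough illustration of the kind of per-access-point record such a mobile sensing tool might log, the sketch below defines a simple data structure. The field names and the sample values are assumptions, not Wi2Me's actual schema.

```python
# Hypothetical sketch of a per-access-point measurement record; field names
# are assumptions for illustration, not Wi2Me Traces Explorer's actual format.
from dataclasses import dataclass

@dataclass
class APMeasurement:
    bssid: str              # access point MAC address
    ssid: str               # network name (e.g., a community-network SSID)
    latitude: float         # GPS position where the scan was taken
    longitude: float
    rssi_dbm: int           # received signal strength
    throughput_kbps: float  # measured bandwidth, if a connection succeeded
    connection_ok: bool     # whether association/authentication succeeded

sample = APMeasurement("00:11:22:33:44:55", "CommunityWiFi", 48.858, 2.294,
                       -67, 1850.0, True)
print(sample)
```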

26 citations


Cited by
Proceedings ArticleDOI
18 Mar 2015
TL;DR: This paper presents a dataset that contains data on the two main user-generated live streaming systems, Twitch and the live service of YouTube, and shows that both systems generate significant traffic with frequent peaks at more than 1 Tbps.
Abstract: User-generated live video streaming systems are services that allow anybody to broadcast a video stream over the Internet. These over-the-top services have recently gained popularity, in particular with e-sport, and can now be seen as competitors of traditional cable TV. In this paper, we present a dataset for further work on these systems. This dataset contains data on the two main user-generated live streaming systems: Twitch and the live service of YouTube. We collected three months of traces of these services, from January to April 2014. Our dataset includes, every five minutes, the identifier of the online broadcaster, the number of people watching the stream, and various other media information. In this paper, we introduce the dataset and carry out a preliminary study to show its size and its potential. We first show that both systems generate significant traffic with frequent peaks at more than 1 Tbps. Thanks to more than a million unique uploaders, Twitch in particular is able to offer a rich service at any time. Our second main observation is that the popularity of these channels is more heterogeneous than what has been observed in other services gathering user-generated content.
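To illustrate how such five-minute snapshots can be aggregated, the sketch below groups a toy table of (timestamp, service, channel, viewers) rows and derives a rough traffic estimate. The column names, the toy values, and the per-viewer bitrate are assumptions, not the authors' data or code.

```python
# Illustrative sketch (not the authors' code): aggregating assumed 5-minute
# snapshots with columns [timestamp, service, channel_id, viewers] into total
# concurrent viewers and a rough traffic estimate under an assumed bitrate.
import pandas as pd

snapshots = pd.DataFrame({
    "timestamp": pd.to_datetime(["2014-02-01 20:00"] * 3 + ["2014-02-01 20:05"] * 3),
    "service":    ["twitch", "twitch", "youtube"] * 2,
    "channel_id": ["a", "b", "c"] * 2,
    "viewers":    [120_000, 80_000, 50_000, 140_000, 70_000, 60_000],
})

ASSUMED_BITRATE_MBPS = 2.5  # assumption: average stream bitrate per viewer
totals = snapshots.groupby(["timestamp", "service"])["viewers"].sum()
traffic_tbps = totals * ASSUMED_BITRATE_MBPS / 1e6  # Mbps -> Tbps
print(pd.DataFrame({"viewers": totals, "approx_tbps": traffic_tbps}))
```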

176 citations

Journal ArticleDOI
TL;DR: Experimental results show that the proposed service recommendation approach, which combines collaborative filtering with QoS prediction based on user mobility, can significantly improve the accuracy of service recommendation in mobile edge computing.

164 citations

Journal ArticleDOI
TL;DR: In the network layer, IP provides an unreliable datagram service and ensures that any host can exchange packets with any other host; its latest revision, IPv6 (IP version 6), supports 16-byte addresses.
Abstract: The Internet relies heavily on two protocols. In the network layer, IP (Internet Protocol) provides an unreliable datagram service and ensures that any host can exchange packets with any other host. Since its creation in the 1970s, IP has seen the addition of several features, including multicast, IPsec (IP security), and QoS (quality of service). The latest revision, IPv6 (IP version 6), supports 16-byte addresses.
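The 16-byte address size mentioned above can be checked directly with Python's standard ipaddress module, as in this small sketch (the addresses used are documentation examples, not tied to the article).

```python
# Quick check of address sizes with the standard ipaddress module:
# IPv4 addresses pack into 4 bytes, IPv6 addresses into 16 bytes.
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")
print(len(v4.packed), len(v6.packed))  # -> 4 16
```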

129 citations

Journal ArticleDOI
TL;DR: This paper develops a machine-learning-based prediction framework, LinkForecast, that identifies the most important features and uses them to predict link bandwidth in real time; it also investigates the prediction performance when lower-layer features are obtained through standard operating-system APIs instead of specialized tools.
Abstract: Accurate cellular link bandwidth prediction can benefit upper-layer protocols significantly. In this paper, we investigate how to predict cellular link bandwidth in LTE networks. We first conduct an extensive measurement study in two major commercial LTE networks in the US, and identify five types of lower-layer information that are correlated with cellular link bandwidth. We then develop a machine learning based prediction framework, LinkForecast , that identifies the most important features (from both upper and lower layers) and uses these features to predict link bandwidth in real time. Our evaluation shows that LinkForecast is lightweight and the prediction is highly accurate: At the time granularity of one second, the average prediction error is in the range of 3.9 to 17.0 percent for all the scenarios we explore. We further investigate the prediction performance when using lower-layer features obtained through standard APIs provided by the operating system, instead of specialized tools. Our results show that, while the features thus obtained have lower fidelity compared to those from specialized tools, they lead to similar prediction accuracy, indicating that our approach can be easily used over commercial off-the-shelf mobile devices.
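The feature-ranking step described above can be sketched as follows: train a regressor on a mix of upper-layer (past throughput) and lower-layer (RSRP, RSRQ, CQI, handover) features and inspect which ones matter most. The feature names, the synthetic data, and the use of a random forest's built-in importances are assumptions for illustration, not the LinkForecast implementation.

```python
# Sketch of the feature-ranking idea with assumed features and synthetic data;
# this is not LinkForecast itself.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2000
features = {
    "past_throughput_mbps": rng.uniform(0, 50, n),   # upper-layer feature
    "rsrp_dbm":             rng.uniform(-120, -70, n),
    "rsrq_db":              rng.uniform(-20, -3, n),
    "cqi":                  rng.integers(1, 16, n),
    "handover_flag":        rng.integers(0, 2, n),
}
X = np.column_stack(list(features.values()))
# Synthetic next-interval bandwidth (placeholder relationship).
y = (0.8 * features["past_throughput_mbps"] + 0.5 * features["cqi"]
     + rng.normal(0, 2, n))

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Rank features by their contribution to the prediction.
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>22s}: {imp:.2f}")
```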

101 citations

Journal ArticleDOI
TL;DR: This survey analyzes the latest research efforts revolving around Big Data for the transportation and mobility industry, its applications, baseline scenarios, fields and use cases such as routing, planning, infrastructure monitoring, and network design, among others.
Abstract: Big Data is an emerging paradigm and has become a strong attractor of global interest, especially within the transportation industry. The combination of disruptive technologies and new concepts such as the Smart City upgrades the transport data life cycle. In this context, Big Data is considered a new promise for the transportation industry to effectively manage all the data this sector requires for providing safer, cleaner and more efficient transport means, as well as for users to personalize their transport experience. However, Big Data comes along with its own set of technological challenges, stemming from the multiple and heterogeneous transportation/mobility application scenarios. In this survey we analyze the latest research efforts revolving around Big Data for the transportation and mobility industry, its applications, baseline scenarios, fields and use cases such as routing, planning, infrastructure monitoring, and network design, among others. This analysis is done strictly from the Big Data perspective, focusing on those contributions gravitating around techniques, tools and methods for modeling, processing, analyzing and visualizing transport and mobility Big Data. From the literature review, a set of trends and challenges is extracted so as to provide researchers with an insightful outlook on the field of transport and mobility.

95 citations