Author

Jeroen Famaey

Bio: Jeroen Famaey is an academic researcher from the University of Antwerp. The author has contributed to research in the topics The Internet & Computer science. The author has an h-index of 27 and has co-authored 167 publications receiving 2,463 citations. Previous affiliations of Jeroen Famaey include Ghent University & Waterford Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: A novel rate adaptation algorithm is proposed that increases clients' Quality of Experience (QoE) and achieves fairness in a multiclient setting, improving fairness by up to 80% compared to state-of-the-art HAS heuristics in a scenario with three networks.
Abstract: HTTP Adaptive Streaming (HAS) is quickly becoming the de facto standard for video streaming services. In HAS, each video is temporally segmented and stored in different quality levels. Rate adaptation heuristics, deployed at the video player, allow the most appropriate level to be dynamically requested, based on the current network conditions. It has been shown that today’s heuristics underperform when multiple clients consume video at the same time, due to fairness issues among clients. Concretely, this means that different clients negatively influence each other as they compete for shared network resources. In this article, we propose a novel rate adaptation algorithm called FINEAS (Fair In-Network Enhanced Adaptive Streaming), capable of increasing clients’ Quality of Experience (QoE) and achieving fairness in a multiclient setting. A key element of this approach is an in-network system of coordination proxies in charge of facilitating fair resource sharing among clients. The strength of this approach is threefold. First, fairness is achieved without explicit communication among clients and thus no significant overhead is introduced into the network. Second, the system of coordination proxies is transparent to the clients, that is, the clients do not need to be aware of its presence. Third, the HAS principle is maintained, as the in-network components only provide the clients with new information and suggestions, while the rate adaptation decision remains the sole responsibility of the clients themselves. We evaluate this novel approach through simulations, under highly variable bandwidth conditions and in several multiclient scenarios. We show how the proposed approach can improve fairness by up to 80% compared to state-of-the-art HAS heuristics in a scenario with three networks, each containing 30 clients streaming video at the same time.

114 citations
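
As a rough illustration of the proxy-assisted rate adaptation idea described in the abstract above, the Python sketch below picks the highest quality level that fits the client's bandwidth estimate, optionally capped by a fair-share hint from an in-network proxy. The bitrate ladder, the safety margin and the fair-share field are assumptions for illustration; this is not the actual FINEAS algorithm from the paper.

```python
# Illustrative sketch of a proxy-assisted rate adaptation step.
# The bitrate ladder, the fair-share hint and the safety margin are
# assumptions for illustration; this is not the FINEAS algorithm itself.

BITRATES_KBPS = [300, 750, 1500, 3000, 6000]  # hypothetical quality levels

def select_quality(measured_throughput_kbps, proxy_fair_share_kbps=None,
                   safety_margin=0.9):
    """Pick the highest quality level that fits the usable bandwidth.

    The client keeps the final decision (HAS principle); the proxy only
    supplies an optional fair-share hint that caps the local estimate.
    """
    usable = measured_throughput_kbps * safety_margin
    if proxy_fair_share_kbps is not None:
        usable = min(usable, proxy_fair_share_kbps)
    # Choose the highest bitrate not exceeding the usable bandwidth.
    candidates = [b for b in BITRATES_KBPS if b <= usable]
    return candidates[-1] if candidates else BITRATES_KBPS[0]

# Example: 4 Mbps measured locally, but the proxy suggests a 2 Mbps fair share.
print(select_quality(4000, proxy_fair_share_kbps=2000))  # -> 1500
```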

Proceedings ArticleDOI
21 Jun 2016
TL;DR: A sub-1GHz PHY model and the 802.11ah MAC protocol are implemented in ns-3, and simulations show that, with appropriate grouping, the RAW mechanism substantially improves throughput, latency and energy efficiency in dense IoT network scenarios.
Abstract: IEEE 802.11ah is a new Wi-Fi draft for sub-1GHz communications, aiming to address the major challenges of the Internet of Things (IoT): connectivity among a large number of power-constrained stations deployed over a wide area. The new Restricted Access Window (RAW) mechanism promises to increase throughput and energy efficiency by dividing stations into different RAW groups. Only the stations in the same group can access the channel simultaneously, which reduces collision probability in dense scenarios. However, the draft does not specify any RAW grouping algorithms, while the grouping strategy is expected to severely impact RAW performance. To study the impact of parameters such as traffic load, number of stations and RAW group duration on the optimal number of RAW groups, we implemented a sub-1GHz PHY model and the 802.11ah MAC protocol in ns-3 to evaluate its transmission range, throughput, latency and energy efficiency in dense IoT network scenarios. The simulations show that, with appropriate grouping, the RAW mechanism substantially improves throughput, latency and energy efficiency. Furthermore, the results suggest that the optimal grouping strategy depends on many parameters, and intelligent RAW group adaptation is necessary to maximize performance under dynamic conditions. This paper provides a major leap towards such a strategy.

103 citations
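
The sketch below illustrates the RAW grouping idea from the abstract above in a few lines of Python: stations are mapped round-robin to groups and the RAW period is split evenly into per-group slots. The equal-sized groups and uniform slot duration are assumptions; the draft leaves the grouping strategy open, and the paper studies it via ns-3 simulation.

```python
# Minimal sketch of round-robin RAW grouping, assuming equal-sized groups
# and a uniform slot duration; not the grouping strategies evaluated in
# the paper.

def assign_raw_groups(station_ids, num_groups):
    """Map each station to a RAW group; only stations in the same group
    contend for the channel during that group's slot."""
    return {sta: sta % num_groups for sta in station_ids}

def raw_slot_schedule(num_groups, raw_duration_ms):
    """Split the RAW period evenly over the groups (an assumption)."""
    slot = raw_duration_ms / num_groups
    return [(g, g * slot, (g + 1) * slot) for g in range(num_groups)]

stations = range(90)                      # e.g., 90 sensors in a dense cell
groups = assign_raw_groups(stations, 3)   # 3 RAW groups of 30 stations each
print(raw_slot_schedule(3, 60.0))         # [(0, 0.0, 20.0), (1, 20.0, 40.0), (2, 40.0, 60.0)]
```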

Journal ArticleDOI
TL;DR: A novel Reinforcement Learning (RL)-based HAS client is presented that dynamically adapts its behaviour by interacting with the environment to optimize the Quality of Experience (QoE), i.e., the quality as perceived by the end-user.
Abstract: HTTP Adaptive Streaming (HAS) is becoming the de facto standard for Over-The-Top (OTT)-based video streaming services such as YouTube and Netflix. By splitting a video into multiple segments of a couple of seconds and encoding each of these at multiple quality levels, HAS allows a video client to dynamically adapt the requested quality during the playout to react to network changes. However, state-of-the-art quality selection heuristics are deterministic and tailored to specific network configurations. Therefore, they are unable to cope with a vast range of highly dynamic network settings. In this letter, a novel Reinforcement Learning (RL)-based HAS client is presented and evaluated. The self-learning HAS client dynamically adapts its behaviour by interacting with the environment to optimize the Quality of Experience (QoE), the quality as perceived by the end-user. The proposed client has been thoroughly evaluated using a network-based simulator and is shown to outperform traditional HAS clients by up to 13% in a mobile network environment.

102 citations
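
A minimal sketch of the self-learning idea described above, using tabular Q-learning over a toy state space. The discretized state, the reward and the hyperparameters below are placeholder assumptions; they are not the state and QoE reward design used by the client in the letter.

```python
# Toy Q-learning loop for HAS quality selection; state, reward and
# hyperparameters are illustrative assumptions only.
import random
from collections import defaultdict

ACTIONS = [0, 1, 2, 3]              # indices into the quality ladder
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q_table = defaultdict(float)        # (state, action) -> estimated long-term QoE

def choose_quality(state):
    """Epsilon-greedy action selection over the quality levels."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                         - q_table[(state, action)])

# One interaction step: pick a quality, observe a QoE reward, learn from it.
s = (3, 1)                          # e.g. buffer bucket 3, last quality 1
a = choose_quality(s)
update(s, a, reward=2.5, next_state=(4, a))
```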

Proceedings ArticleDOI
15 Jun 2016
TL;DR: Presents the details of an 802.11ah physical and MAC layer implementation in the ns-3 network simulator, which, compared to analytical models, more closely reflects actual protocol behavior and can more easily be adapted to evaluate a broad range of network and traffic conditions.
Abstract: IEEE 802.11ah or HaLow is a new Wi-Fi standard for sub-1GHz communications, aiming to address the major challenges of the Internet of Things: connectivity among a large number of power-constrained stations deployed over a wide area. Existing research on the performance evaluation of 802.11ah is generally based on analytical models, which do not accurately represent real network dynamics and are hard to adjust to different network conditions. To address this gap, we implemented the 802.11ah physical and MAC layer in the ns-3 network simulator, which, compared to analytical models, more closely reflects actual protocol behavior and can more easily be adapted to evaluate a broad range of network and traffic conditions. In this paper, we present the details of our implementation, including a sub-1GHz physical layer model and several novel MAC layer features. Moreover, simulations based on the implemented model are conducted to evaluate the performance of the novel features of IEEE 802.11ah.

77 citations
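
For intuition on what a sub-1GHz PHY model implies for transmission range, the sketch below estimates the maximum link distance from a simple log-distance path loss model. The transmit power, receiver sensitivity and path loss exponent are illustrative assumptions, not the ns-3 PHY model from the paper.

```python
# Back-of-the-envelope range estimate for a sub-1GHz link using a
# log-distance path loss model; all parameter values are assumptions.
import math

def max_range_m(tx_power_dbm, sensitivity_dbm, freq_mhz=900.0,
                path_loss_exp=3.0, ref_dist_m=1.0):
    """Distance at which received power drops to the sensitivity floor."""
    # Free-space loss at the reference distance (Friis: d in km, f in MHz).
    fspl_ref = (20 * math.log10(ref_dist_m / 1000.0)
                + 20 * math.log10(freq_mhz) + 32.44)
    budget = tx_power_dbm - sensitivity_dbm - fspl_ref
    return ref_dist_m * 10 ** (budget / (10 * path_loss_exp))

print(round(max_range_m(0, -98)))  # rough range in metres under these assumptions
```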

Proceedings ArticleDOI
01 Nov 2017
TL;DR: A quantitative experiment that verifies the current levels of the different operating modes published in a LoRa chip's datasheet and compares battery lifetime for the LoRa class A and C modes of operation, allowing the energy drain to be calculated and compared across the different spreading factors and classes.
Abstract: Many Internet of Things (IoT) applications benefit greatly from low-power long-range connectivity. A promising technology to achieve the low-power and long-range requirements is seen in LoRaWAN, a media access control (MAC) protocol maintained by the LoRa Alliance and leveraging Semtech's patented LoRa radio modulation technology. LoRaWAN provides three different device classes (A, B and C), which provide a tradeoff between performance (i.e., throughput and latency) and energy consumption. This paper offers a theoretical and experimental comparison of these classes. The objective of the quantitative experiment was twofold: to verify the published current levels of different operating modes in a LoRa chip's datasheet and to compare the battery lifetime for the LoRa class A and C modes of operation. We used a high-end current sensing circuit to gather the voltage levels and their temporal variation for increasing payload sizes and spreading factors. Using Ohm's law, the energy drain can be calculated and compared across the different spreading factors (SF) and classes.

76 citations
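
The energy calculation described in the abstract above (energy as supply voltage times current times time) can be sketched as follows, comparing a class A and a class C duty cycle. All current levels and time-on-air values below are placeholder assumptions, not the datasheet figures verified in the paper.

```python
# Rough energy comparison of LoRaWAN class A vs class C per uplink cycle,
# using E = V * I * t; the current figures below are placeholder values.

V_SUPPLY  = 3.3         # supply voltage (V)
I_TX_A    = 0.044       # transmit current (assumed, A)
I_RX_A    = 0.011       # receive current (assumed, A)
I_SLEEP_A = 1e-6        # sleep current (assumed, A)

def energy_j(current_a, seconds):
    return V_SUPPLY * current_a * seconds

def class_a_cycle(toa_s, rx_window_s=0.2, period_s=60.0):
    """Class A: sleep except for the uplink and two short RX windows."""
    active = energy_j(I_TX_A, toa_s) + energy_j(I_RX_A, 2 * rx_window_s)
    sleep = energy_j(I_SLEEP_A, period_s - toa_s - 2 * rx_window_s)
    return active + sleep

def class_c_cycle(toa_s, period_s=60.0):
    """Class C: receiver stays open whenever the radio is not transmitting."""
    return energy_j(I_TX_A, toa_s) + energy_j(I_RX_A, period_s - toa_s)

toa_sf7, toa_sf12 = 0.06, 1.5       # illustrative time-on-air values (s)
for sf, toa in (("SF7", toa_sf7), ("SF12", toa_sf12)):
    print(sf, round(class_a_cycle(toa), 4), round(class_c_cycle(toa), 4))
```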


Cited by
Journal ArticleDOI
TL;DR: This paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers, and third parties, and elaborates further on open research challenges.
Abstract: Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing the mobile backhaul and core networks more efficiently. This paper introduces a survey on MEC and focuses on the fundamental key enabling technologies. It elaborates on MEC orchestration, considering both individual services and a network of MEC platforms supporting mobility, bringing light into the different orchestration deployment options. In addition, this paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers, and third parties. Finally, this paper overviews the current standardization activities and elaborates further on open research challenges.

1,351 citations

Proceedings ArticleDOI
07 Aug 2017
TL;DR: Pensieve is proposed, a system that generates ABR algorithms using reinforcement learning (RL) and outperforms the best state-of-the-art scheme, with improvements in average QoE of 12%--25%.
Abstract: Client-side video players employ adaptive bitrate (ABR) algorithms to optimize user quality of experience (QoE). Despite the abundance of recently proposed schemes, state-of-the-art ABR algorithms suffer from a key limitation: they use fixed control rules based on simplified or inaccurate models of the deployment environment. As a result, existing schemes inevitably fail to achieve optimal performance across a broad set of network conditions and QoE objectives. We propose Pensieve, a system that generates ABR algorithms using reinforcement learning (RL). Pensieve trains a neural network model that selects bitrates for future video chunks based on observations collected by client video players. Pensieve does not rely on pre-programmed models or assumptions about the environment. Instead, it learns to make ABR decisions solely through observations of the resulting performance of past decisions. As a result, Pensieve automatically learns ABR algorithms that adapt to a wide range of environments and QoE metrics. We compare Pensieve to state-of-the-art ABR algorithms using trace-driven and real-world experiments spanning a wide variety of network conditions, QoE metrics, and video properties. In all considered scenarios, Pensieve outperforms the best state-of-the-art scheme, with improvements in average QoE of 12%--25%. Pensieve also generalizes well, outperforming existing schemes even on networks for which it was not explicitly trained.

946 citations
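
A minimal sketch of a learned ABR policy in the spirit of the abstract above: a small network maps recent observations to a choice of bitrate. The feature set, network shape and (untrained) weights are placeholders; the actual system trains an actor-critic network on streaming traces.

```python
# Sketch of a neural ABR policy: observations -> distribution over bitrates.
# Features, network shape and weights are placeholder assumptions.
import numpy as np

BITRATES_KBPS = [300, 750, 1200, 2850, 4300]   # example quality ladder

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(8, 16))       # untrained weights (assumption)
W2 = rng.normal(scale=0.1, size=(16, len(BITRATES_KBPS)))

def policy(observation):
    """observation: e.g. past throughputs, buffer level, last bitrate, chunk sizes."""
    h = np.tanh(observation @ W1)              # small hidden layer
    logits = h @ W2
    probs = np.exp(logits - logits.max())      # softmax over bitrate choices
    probs /= probs.sum()
    return int(np.argmax(probs))               # greedy choice at inference time

obs = np.array([3.1, 2.8, 2.5, 2.9, 0.6, 1200/4300, 0.3, 0.2])  # toy features
print(BITRATES_KBPS[policy(obs)])
```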