Author

Steven Latre

Other affiliations: IMEC, Ghent University
Bio: Steven Latre is an academic researcher at the University of Antwerp. His research focuses on wireless networks and Quality of Experience. He has an h-index of 26 and has co-authored 216 publications receiving 2,853 citations. Previous affiliations of Steven Latre include IMEC and Ghent University.


Papers
Journal ArticleDOI
TL;DR: This article introduces NFV, gives an overview of the MANO framework proposed by ETSI, presents representative projects and vendor products that focus on MANO, and discusses their features and relationship with the framework.
Abstract: NFV continues to draw immense attention from researchers in both industry and academia. By decoupling NFs from the physical equipment on which they run, NFV promises to reduce CAPEX and OPEX, make networks more scalable and flexible, and lead to increased service agility. However, despite the unprecedented interest it has gained, there are still obstacles that must be overcome before NFV can advance to reality in industrial deployments, let alone delivering on the anticipated gains. While doing so, important challenges associated with network and function MANO need to be addressed. In this article, we introduce NFV and give an overview of the MANO framework that has been proposed by ETSI. We then present representative projects and vendor products that focus on MANO, and discuss their features and relationship with the framework. Finally, we identify open MANO challenges as well as opportunities for future research.

277 citations

Journal ArticleDOI
TL;DR: A novel rate adaptation algorithm is proposed that increases clients' Quality of Experience (QoE) and achieves fairness in a multiclient setting, improving fairness by up to 80% compared to state-of-the-art HAS heuristics in a scenario with three networks.
Abstract: HTTP Adaptive Streaming (HAS) is quickly becoming the de facto standard for video streaming services. In HAS, each video is temporally segmented and stored in different quality levels. Rate adaptation heuristics, deployed at the video player, allow the most appropriate level to be dynamically requested, based on the current network conditions. It has been shown that today's heuristics underperform when multiple clients consume video at the same time, due to fairness issues among clients. Concretely, this means that different clients negatively influence each other as they compete for shared network resources. In this article, we propose a novel rate adaptation algorithm called FINEAS (Fair In-Network Enhanced Adaptive Streaming), capable of increasing clients' Quality of Experience (QoE) and achieving fairness in a multiclient setting. A key element of this approach is an in-network system of coordination proxies in charge of facilitating fair resource sharing among clients. The strength of this approach is threefold. First, fairness is achieved without explicit communication among clients and thus no significant overhead is introduced into the network. Second, the system of coordination proxies is transparent to the clients, that is, the clients do not need to be aware of its presence. Third, the HAS principle is maintained, as the in-network components only provide the clients with new information and suggestions, while the rate adaptation decision remains the sole responsibility of the clients themselves. We evaluate this novel approach through simulations, under highly variable bandwidth conditions and in several multiclient scenarios. We show how the proposed approach can improve fairness up to 80% compared to state-of-the-art HAS heuristics in a scenario with three networks, each containing 30 clients streaming video at the same time.
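The core mechanics the abstract describes, a client picking the highest sustainable quality level and fairness being measured across clients, can be sketched as follows. This is a minimal illustration, not the authors' FINEAS algorithm; the throughput-based selection rule, the safety margin, and the use of Jain's index as the fairness metric are all assumptions for the example.

```python
def select_quality(bitrates, throughput_mbps, safety=0.9):
    """Pick the highest bitrate not exceeding a safety fraction of the
    measured throughput; fall back to the lowest level otherwise."""
    feasible = [b for b in sorted(bitrates) if b <= safety * throughput_mbps]
    return feasible[-1] if feasible else min(bitrates)

def jain_fairness(qualities):
    """Jain's fairness index over per-client quality levels.
    1.0 means all clients receive the same quality."""
    n = len(qualities)
    s = sum(qualities)
    return s * s / (n * sum(q * q for q in qualities)) if s else 1.0
```

With bitrate ladder [1, 2.5, 5, 8] Mbps and 4 Mbps of measured throughput, the rule selects 2.5 Mbps; a coordination proxy as described above could nudge clients toward selections that raise the Jain index.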

114 citations

Proceedings ArticleDOI
01 Sep 2016
TL;DR: The City of Things testbed is presented, which is a smart city testbed located in the city of Antwerp, Belgium that allows the setup and validation of new smart city experiments both at a technology and user level and illustrates this by a case study on air quality.
Abstract: While smart cities have the potential to monitor and control the city in real-time through sensors and actuators, there is still an important road ahead to evolve from isolated smart city experiments to real large-scale deployments. Important research questions remain on how and which wireless technologies should be setup for connecting the city, how the data should be analysed and how the acceptance by users of applications can be assessed. In this paper we present the City of Things testbed, which is a smart city testbed located in the city of Antwerp, Belgium to address these questions. It allows the setup and validation of new smart city experiments both at a technology and user level. City of Things consists of a multi-wireless technology network infrastructure, the capacity to easily perform data experiments on top and a living lab approach to validate the experiments. In comparison to other smart city testbeds, City of Things consists of an integrated approach, allowing experimentation on three different layers: networks, data and living lab while supporting a wide range of wireless technologies. We give an overview of the City of Things architecture, explain how researchers can perform smart city experiments and illustrate this by a case study on air quality.

112 citations

Proceedings ArticleDOI
21 Jun 2016
TL;DR: A sub-1GHz PHY model and the 802.11ah MAC protocol are implemented in ns-3, and simulations show that, with appropriate grouping, the RAW mechanism substantially improves throughput, latency and energy efficiency in dense IoT network scenarios.
Abstract: IEEE 802.11ah is a new Wi-Fi draft for sub-1GHz communications, aiming to address the major challenges of the Internet of Things (IoT): connectivity among a large number of power-constrained stations deployed over a wide area. The new Restricted Access Window (RAW) mechanism promises to increase throughput and energy efficiency by dividing stations into different RAW groups. Only the stations in the same group can access the channel simultaneously, which reduces collision probability in dense scenarios. However, the draft does not specify any RAW grouping algorithms, while the grouping strategy is expected to severely impact RAW performance. To study the impact of parameters such as traffic load, number of stations and RAW group duration on the optimal number of RAW groups, we implemented a sub-1GHz PHY model and the 802.11ah MAC protocol in ns-3 to evaluate its transmission range, throughput, latency and energy efficiency in dense IoT network scenarios. The simulation shows that, with appropriate grouping, the RAW mechanism substantially improves throughput, latency and energy efficiency. Furthermore, the results suggest that the optimal grouping strategy depends on many parameters, and intelligent RAW group adaptation is necessary to maximize performance under dynamic conditions. This paper provides a major leap towards such a strategy.
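Since the 802.11ah draft leaves the grouping algorithm open, the intuition for why RAW grouping reduces collisions can be sketched with a toy model. The round-robin assignment and the slotted-ALOHA-style contention formula below are illustrative assumptions, not the paper's simulation model.

```python
def assign_raw_groups(station_ids, n_groups):
    """Round-robin assignment of stations to RAW groups; only stations
    in the same group contend for the channel during that group's slot."""
    groups = [[] for _ in range(n_groups)]
    for i, sid in enumerate(station_ids):
        groups[i % n_groups].append(sid)
    return groups

def success_prob(n, p):
    """Slotted-ALOHA-style probability that exactly one of n contending
    stations transmits in a slot, each transmitting with probability p."""
    return n * p * (1 - p) ** (n - 1)
```

Splitting 100 stations into 10 groups means only 10 stations contend per slot, and the per-slot success probability rises sharply, which is the effect the simulations quantify.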

103 citations

Journal ArticleDOI
TL;DR: A novel Reinforcement Learning (RL)-based HAS client is presented that dynamically adapts its behaviour by interacting with the environment to optimize the Quality of Experience (QoE), the quality as perceived by the end-user.
Abstract: HTTP Adaptive Streaming (HAS) is becoming the de facto standard for Over-The-Top (OTT)-based video streaming services such as YouTube and Netflix. By splitting a video into multiple segments of a couple of seconds and encoding each of these at multiple quality levels, HAS allows a video client to dynamically adapt the requested quality during the playout to react to network changes. However, state-of-the-art quality selection heuristics are deterministic and tailored to specific network configurations. Therefore, they are unable to cope with a vast range of highly dynamic network settings. In this letter, a novel Reinforcement Learning (RL)-based HAS client is presented and evaluated. The self-learning HAS client dynamically adapts its behaviour by interacting with the environment to optimize the Quality of Experience (QoE), the quality as perceived by the end-user. The proposed client has been thoroughly evaluated using a network-based simulator and is shown to outperform traditional HAS clients by up to 13% in a mobile network environment.
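The self-learning loop the abstract describes can be sketched as tabular Q-learning over (network state, quality level) pairs. This is a generic sketch, not the authors' client: the state/action encoding, the penalty weights in the QoE-style reward, and the learning-rate values are assumptions for illustration.

```python
from collections import defaultdict

def qoe_reward(quality, prev_quality, rebuffer_s,
               switch_penalty=0.5, rebuffer_penalty=4.0):
    """QoE-style reward: higher quality is good, while quality switches
    and rebuffering are penalized (weights are illustrative)."""
    return (quality
            - switch_penalty * abs(quality - prev_quality)
            - rebuffer_penalty * rebuffer_s)

def q_update(Q, state, action, reward, next_state,
             alpha=0.1, gamma=0.95, actions=(0, 1, 2)):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

After each downloaded segment the client observes the reward, updates Q, and gradually shifts toward quality selections that maximize long-term QoE rather than following a fixed deterministic heuristic.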

102 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
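The mail-filtering scenario in the fourth category can be made concrete with a tiny learned filter. This is a toy naive-Bayes-style sketch of the idea described above (learning which messages the user rejects); the class name, smoothing scheme, and word-level features are assumptions for the example, not a production filter.

```python
import math
from collections import Counter

class SpamFilter:
    """Toy filter that learns from messages the user marks as unwanted,
    using add-one-smoothed word counts and a log-likelihood-ratio score."""
    def __init__(self):
        self.spam_words, self.ham_words = Counter(), Counter()
        self.n_spam = self.n_ham = 0

    def train(self, words, is_spam):
        # Update per-class word counts from one labeled message.
        if is_spam:
            self.spam_words.update(words)
            self.n_spam += 1
        else:
            self.ham_words.update(words)
            self.n_ham += 1

    def predict_spam(self, words):
        # Positive log-likelihood ratio => classify as spam.
        score = math.log((self.n_spam + 1) / (self.n_ham + 1))
        for w in words:
            score += math.log((self.spam_words[w] + 1) / (self.ham_words[w] + 1))
        return score > 0
```

Each time the user rejects a message, `train` updates the counts, so the filtering rules are maintained automatically rather than hand-written, which is exactly the burden the passage argues learning removes.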

13,246 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal ArticleDOI
TL;DR: In this article, the authors survey the state-of-the-art in NFV and identify promising research directions in this area, and also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products.
Abstract: Network function virtualization (NFV) has drawn significant attention from both industry and academia as an important shift in telecommunication service provisioning. By decoupling network functions (NFs) from the physical devices on which they run, NFV has the potential to lead to significant reductions in operating expenses (OPEX) and capital expenses (CAPEX) and facilitate the deployment of new services with increased agility and faster time-to-value. The NFV paradigm is still in its infancy and there is a large spectrum of opportunities for the research community to develop new architectures, systems and applications, and to evaluate alternatives and trade-offs in developing technologies for its successful deployment. In this paper, after discussing NFV and its relationship with complementary fields of software defined networking (SDN) and cloud computing, we survey the state-of-the-art in NFV, and identify promising research directions in this area. We also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products.

1,634 citations

Journal ArticleDOI
TL;DR: This paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers, and third parties, and elaborates further on open research challenges.
Abstract: Multi-access edge computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing the mobile backhaul and core networks more efficiently. This paper introduces a survey on MEC and focuses on the fundamental key enabling technologies. It elaborates MEC orchestration considering both individual services and a network of MEC platforms supporting mobility, bringing light into the different orchestration deployment options. In addition, this paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers, and third parties. Finally, this paper overviews the current standardization activities and elaborates further on open research challenges.

1,351 citations