Author

Pieter-Jan Maenhaut

Bio: Pieter-Jan Maenhaut is an academic researcher from Ghent University. The author has contributed to research in topics: Cloud computing & Scalability. The author has an h-index of 8 and has co-authored 19 publications receiving 147 citations.

Papers
Journal ArticleDOI
TL;DR: This survey investigates how research is adapting to recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model, and identifies several challenges and possible opportunities.
Abstract: Cloud computing heavily relies on virtualization, as virtual resources are typically leased to the consumer, for example as virtual machines. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have gained popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises how these strategies can be adapted for the management of a containerized cloud. Apart from this, the cloud is no longer limited to the centrally hosted data center infrastructure. New deployment models have gained maturity, such as fog and mobile edge computing, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art regarding resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to the recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model. Furthermore, we identify several challenges and possible opportunities for future research.

43 citations

Proceedings ArticleDOI
23 Apr 2018
TL;DR: This paper shows that it is possible to train accurate machine learning models which can predict the type of traffic going through an IPsec or Tor tunnel based on features extracted from the encrypted streams.
Abstract: Internet applications rely on strong encryption techniques to protect the content of all communications between client and server. These encryption algorithms ensure that third parties are unable to obtain the plain-text data, but they also make it hard for the network administrator to enforce restrictions on the types of traffic that are allowed. In this paper we show that we can train accurate machine learning models which can predict the type of traffic going through an IPsec or Tor tunnel based on features extracted from the encrypted streams. We use small, fast-to-execute machine learning models that work on small windows of data. This makes it possible to use our approach in real time, for example as part of a Quality of Service (QoS) system.
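The window-based classification idea described above can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the features (packet-size and inter-arrival statistics), the window size of 32 packets, and the synthetic two-class data are all invented assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def window_features(pkt_sizes, gaps):
    # Cheap per-window statistics computable from an encrypted stream,
    # without looking at payload contents.
    return [np.mean(pkt_sizes), np.std(pkt_sizes), np.mean(gaps), np.std(gaps)]

# Synthetic training data: two hypothetical traffic classes with
# clearly different packet-size profiles.
X, y = [], []
for label, size_mu in [(0, 200.0), (1, 1200.0)]:
    for _ in range(100):
        sizes = rng.normal(size_mu, 50.0, 32)   # 32 packets per window
        gaps = rng.exponential(0.01, 32)        # inter-arrival times (s)
        X.append(window_features(sizes, gaps))
        y.append(label)

# A small forest keeps per-window inference fast, in line with the
# real-time goal stated in the abstract.
clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
probe = window_features(rng.normal(1200.0, 50.0, 32), rng.exponential(0.01, 32))
print(clf.predict([probe])[0])
```

Because each window is summarized by a handful of statistics, classification cost per window stays constant regardless of stream length, which is what makes an online QoS use case plausible.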

19 citations

Journal ArticleDOI
TL;DR: The steps necessary to migrate existing applications to a public cloud environment and the steps required to add multi‐tenancy to these applications are described and verified by means of two case studies.
Abstract: Cloud computing is a technology that enables elastic, on-demand resource provisioning, allowing application developers to build highly scalable systems. Multi-tenancy, the hosting of multiple customers by a single application instance, leads to improved efficiency, improved scalability, and lower costs. While these technologies make it possible to create many new applications, legacy applications can also benefit from the added flexibility and cost savings of cloud computing and multi-tenancy. In this article, we describe the steps required to migrate existing applications to a public cloud environment, and the steps required to add multi-tenancy to these applications. We present a generic approach and verify this approach by means of two case studies: a commercial medical communications software package mainly used within hospitals for nurse call systems, and a schedule planner for managing medical appointments. Both case studies are subject to stringent security and performance constraints, which need to be taken into account during the migration. In our evaluation, we estimate the required investment costs and compare them to the long-term benefits of the migration. Copyright © 2015 John Wiley & Sons Ltd.
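The cost-versus-benefit evaluation mentioned at the end of the abstract amounts to a break-even calculation: a one-off migration investment against a recurring saving. A toy sketch with invented numbers (not figures from the case studies):

```python
import math

def breakeven_months(migration_cost, on_prem_monthly, cloud_monthly):
    """Months until cumulative cloud savings repay the migration cost.

    Returns None when the cloud deployment is not cheaper per month,
    i.e. the investment never breaks even on cost alone.
    """
    savings = on_prem_monthly - cloud_monthly
    if savings <= 0:
        return None
    return math.ceil(migration_cost / savings)

# Hypothetical example: 60k migration effort, 10k/month on-premise
# versus 6k/month in the cloud -> repaid after 15 months.
print(breakeven_months(60_000, 10_000, 6_000))
```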

18 citations

Proceedings ArticleDOI
01 Jun 2017
TL;DR: A general approach for the experimental validation of cloud resource management strategies is presented, together with the introduction of a cloud testbed adapter which was designed to facilitate the step from simulations towards experimental validation on physical cloud testbeds.
Abstract: With cloud computing, the efficient management of resources is of great importance, as an increased utilization of the available resources can result in higher scalability and significant energy and cost reductions. Experimental validation of novel resource management strategies is costly and time consuming, and often requires in-depth knowledge of and control over the underlying cloud platform. As a result, many novel strategies are only evaluated by means of simulations, in which the whole cloud computing environment is modelled and simulated. Nonetheless, experimental validation should also be considered during the evaluation, as these experiments can result in new insights or be used to fine-tune specific parameters. In this paper we present a general approach for the experimental validation of cloud resource management strategies, together with a cloud testbed adapter which was designed to facilitate the step from simulations towards experimental validation on physical cloud testbeds. We illustrate our solution by means of two case studies, focusing on two different types of testbeds. The adapter mainly acts as a dispatcher towards specific services of the evaluated cloud setup, and allows researchers to easily validate their ideas without having to dive deep into the complex details of the underlying cloud platform.
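The "adapter as dispatcher" idea can be sketched as a thin routing layer in front of testbed-specific backends. This is a hypothetical illustration of the pattern, not the paper's actual adapter; the backend name and `allocate` action are invented:

```python
class TestbedAdapter:
    """Single entry point that routes resource-management calls to
    whichever testbed backend is registered under a given name."""

    def __init__(self):
        self._backends = {}

    def register(self, name, backend):
        self._backends[name] = backend

    def dispatch(self, name, action, **kwargs):
        # Look up the backend and forward the call; the strategy under
        # test never talks to the platform API directly.
        return getattr(self._backends[name], action)(**kwargs)

class FakeOpenStackBackend:
    # Stand-in for a real platform client; a real backend would call
    # the testbed's own API here.
    def allocate(self, cpus):
        return f"openstack: allocated {cpus} vCPUs"

adapter = TestbedAdapter()
adapter.register("openstack", FakeOpenStackBackend())
print(adapter.dispatch("openstack", "allocate", cpus=4))
```

The benefit is exactly the one the abstract claims: swapping simulation for a physical testbed only means registering a different backend, with no change to the resource management strategy being validated.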

18 citations

Proceedings ArticleDOI
14 May 2019
TL;DR: Measurements show that the proposed logging mechanism enables organizations to create a log of service interactions with limited delay imposed on the data exchange process.
Abstract: Organizations nowadays are largely computerized, with a mixture of internal and external services providing them with on-demand functionality. In some situations (e.g. emergency situations), cross-organizational collaboration is needed, providing external users access to internal services. Trust between partners in such a collaboration can however be an issue. Although (federated) access control policies may be in place, it is unclear which data was requested and delivered after a collaboration has finished. This may lead to disputes between participating organizations. The open-source permissioned blockchain Hyperledger Fabric is utilized to create a logging mechanism for the actions performed by the participants in such a collaboration. This paper presents the architecture needed for such a logging mechanism and provides details on its operation. A prototype was designed in order to evaluate the performance of an asynchronous logging approach. Measurements show that the proposed logging mechanism enables organizations to create a log of service interactions with limited delay imposed on the data exchange process.
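The asynchronous logging approach evaluated above can be sketched with a producer/consumer queue: the data exchange returns immediately while a background worker commits log entries. Here a plain in-memory list stands in for a Hyperledger Fabric ledger client; all names are illustrative, not the paper's API:

```python
import queue
import threading

log_queue = queue.Queue()
committed = []   # stand-in for the ledger; real code would submit a transaction

def log_worker():
    # Background thread drains the queue so logging never blocks
    # the request path; None is the shutdown sentinel.
    while True:
        entry = log_queue.get()
        if entry is None:
            break
        committed.append(entry)
        log_queue.task_done()

worker = threading.Thread(target=log_worker)
worker.start()

def exchange_data(requester, service):
    # Enqueue the audit record (non-blocking) and respond to the
    # requester without waiting for the ledger commit.
    log_queue.put({"who": requester, "service": service})
    return f"data from {service}"

result = exchange_data("org-a", "patient-records")
log_queue.put(None)   # shut the worker down for this demo
worker.join()
print(result, len(committed))
```

Decoupling the commit from the exchange is what keeps the "limited delay imposed on the data exchange process" property: the requester's latency is one queue insert, not one blockchain transaction.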

13 citations


Cited by
Journal ArticleDOI
TL;DR: A dynamic resource provisioning method (DRPM) with fault tolerance for the data-intensive meteorological workflows is proposed in this article and the nondominated sorting genetic algorithm II (NSGA-II) is employed to minimize the makespan and improve the load balance.
Abstract: Cloud computing is a formidable paradigm to provide resources for handling the services from the Industrial Internet of Things (IIoT), such as the meteorological industry. Generally, meteorological services, with complex interdependent logic, are modeled as workflows. When any of the computing nodes hosting the meteorological workflows fail, all sorts of consequences (e.g., data loss, makespan enlargement, performance degradation) could arise. Thus, recovering the failed tasks as well as optimizing the makespan and the load balance of the computing nodes is still a critical challenge. To address this challenge, a dynamic resource provisioning method (DRPM) with fault tolerance for data-intensive meteorological workflows is proposed in this article. Technically, the Virtual Layer 2 (VL2) network topology is exploited to build the meteorological cloud infrastructure. Then, the nondominated sorting genetic algorithm II (NSGA-II) is employed to minimize the makespan and improve the load balance. Finally, a comprehensive experimental analysis of DRPM is conducted.
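The multi-objective core that NSGA-II builds on is Pareto dominance over objective vectors, here (makespan, load imbalance), both minimized. A minimal sketch with invented candidate plans, not the DRPM algorithm itself:

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective
    # and strictly better in at least one (minimization).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical provisioning plans: (makespan in s, load-imbalance score).
candidates = {
    "plan1": (120, 0.30),
    "plan2": (100, 0.25),
    "plan3": (100, 0.40),
}

# The nondominated front: plans not dominated by any other candidate.
front = [name for name, v in candidates.items()
         if not any(dominates(w, v) for w in candidates.values() if w != v)]
print(sorted(front))
```

NSGA-II repeatedly ranks a population by such nondominated fronts (plus a crowding-distance tie-breaker) and evolves it, so no single weighting of makespan versus load balance has to be fixed in advance.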

96 citations

Proceedings ArticleDOI
01 Aug 2020
TL;DR: This paper identifies tunneling activities that utilize DNS communications over HTTPS by presenting a two-layered approach to detect and characterize DoH traffic using time-series classifiers.
Abstract: Computer networks have fallen easy prey to cyber attacks amid ever-evolving internet services. The Domain Name System (DNS) has not remained untouched by these cybercrime attempts. Encrypted HyperText Transfer Protocol (HTTP) traffic over Secure Socket Layer (SSL), alternatively called HTTPS, has succeeded in preventing DNS attacks to a great extent. To secure DNS traffic, the security community has introduced the concept of DNS over HTTPS (DoH) to improve user privacy and security by combating eavesdropping and DNS data manipulation, thereby preventing Man-in-the-Middle (MitM) attacks. This paper discusses one of the persistent security concerns: abuse of the DNS protocol to create covert channels by tunneling data through DNS packets. We identify tunneling activities that utilize DNS communications over HTTPS by presenting a two-layered approach to detect and characterize DoH traffic using time-series classifiers.
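The two-layered structure can be sketched as a cascade: layer 1 separates DoH from other HTTPS flows, and layer 2 then flags tunneling within the DoH flows. The features and thresholds below are invented stand-ins for the paper's time-series classifiers:

```python
def layer1_is_doh(flow):
    # Layer 1 (illustrative rule): DoH flows tend to consist of many
    # small request/response exchanges rather than bulk transfers.
    return flow["mean_payload"] < 300

def layer2_is_tunnel(flow):
    # Layer 2 (illustrative rule): tunnels push data through the covert
    # channel, giving high request rates and high payload entropy.
    return flow["req_per_sec"] > 20 and flow["entropy"] > 7.0

def classify(flow):
    if not layer1_is_doh(flow):
        return "non-DoH"
    return "DoH-tunnel" if layer2_is_tunnel(flow) else "benign-DoH"

print(classify({"mean_payload": 120, "req_per_sec": 50, "entropy": 7.6}))
```

The cascade keeps the expensive tunnel characterization off the vast majority of traffic: only flows that pass the cheaper layer-1 filter reach layer 2.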

89 citations

Journal ArticleDOI
TL;DR: Variability‐based, Pattern‐driven Architecture Migration allows an organization to select appropriate migration patterns, compose them to define a migration plan, and extend them based on the identification of new patterns in new contexts.
Abstract: Many organizations migrate on-premise software applications to the cloud. However, current coarse-grained cloud migration solutions have made such migrations a non-transparent task, an endeavor based on trial and error. This paper presents Variability-based, Pattern-driven Architecture Migration (V-PAM), a migration method based on (i) a catalogue of fine-grained, service-based cloud architecture migration patterns that target multi-cloud, (ii) a situational migration process framework to guide pattern selection and composition, and (iii) a variability model to structure system migration into a coherent framework. The proposed migration patterns are based on empirical evidence from several migration projects, best practice for cloud architectures, and a systematic literature review of existing research. V-PAM allows an organization to (i) select appropriate migration patterns, (ii) compose them to define a migration plan, and (iii) extend them based on the identification of new patterns in new contexts. The patterns are at the core of our solution, embedded into a process model, with their selection governed by a variability model. Copyright © 2016 John Wiley & Sons, Ltd.

51 citations

Journal ArticleDOI
TL;DR: A comprehensive review of various data representation methods, and the different objectives of Internet traffic classification and obfuscation techniques, largely considering the ML-based solutions.
Abstract: Traffic classification acquired the interest of the Internet community early on. Different approaches have been proposed to classify Internet traffic to manage both security and Quality of Service (QoS). However, traditional classification approaches consisting of modifying the Transmission Control Protocol/Internet Protocol (TCP/IP) scheme have not been adopted due to their complex management. In addition, port-based methods and deep packet inspection have limitations in dealing with new traffic characteristics (e.g., dynamic port allocation, tunneling, encryption). Conversely, machine learning (ML) solutions effectively classify traffic down to the device type and specific user action. Another research direction aims to anonymize Internet traffic and thwart classification to maintain user privacy. Existing traffic surveys focus on classification and do not consider anonymization. Here, we review the Internet traffic classification and obfuscation techniques, largely considering the ML-based solutions. In addition, this paper presents a comprehensive review of various data representation methods, and the different objectives of Internet traffic classification. Finally, we present the key findings, limitations, and recommendations for future research.

46 citations

Journal ArticleDOI
TL;DR: A self-learning fuzzy approach for proactive resource provisioning in a cloud environment, where the key is to predict the parameters of the probability distribution of incoming players in each period; results indicate that the proposed approach allocates resources more efficiently than other approaches.
Abstract: The development of communication infrastructure has made possible the expansion of popular massively multiplayer online games. In these games, players all over the world can interact with one another in a virtual environment. The arrival rate of new players to the game environment causes fluctuations, and players always expect services to be available and to offer an acceptable service-level agreement (SLA), especially in terms of response time and cost. Cloud computing emerged in recent years as a scalable alternative to respond to dynamic changes of the workload. In massively multiplayer online game applications, players are allowed to lease resources from a cloud provider in an on-demand model. Proactive management of cloud resources in the face of workload fluctuations and the dynamism of player arrivals is a challenging issue. This paper presents a self-learning fuzzy approach for proactive resource provisioning in the cloud environment, where the key is to predict the parameters of the probability distribution of incoming players in each period. In addition, we propose a self-learning fuzzy autoscaling decision-maker algorithm to compute the proper number of resources to be allocated to each tier of the massively multiplayer online game by applying the predicted workload and the user SLA. We evaluate the effectiveness of the proposed approach under real and synthetic workloads. The experimental results indicate that the proposed approach is able to allocate resources more efficiently than other approaches.
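The proactive loop described above — predict next-period arrivals, then size capacity against the SLA — can be sketched in miniature. Exponential smoothing stands in for the paper's fuzzy predictor, and the per-server capacity and headroom factor are invented assumptions:

```python
import math

def predict_arrivals(history, alpha=0.5):
    # Exponential smoothing as a simple stand-in for the self-learning
    # fuzzy predictor: recent periods weigh more than old ones.
    est = history[0]
    for x in history[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

def servers_needed(arrival_rate, per_server_capacity=50, headroom=1.2):
    # Size capacity for the predicted load plus SLA headroom, so the
    # tier is provisioned before the players actually arrive.
    return math.ceil(arrival_rate * headroom / per_server_capacity)

# Hypothetical player arrivals per period for one game tier.
rate = predict_arrivals([100, 140, 180, 220])
print(servers_needed(rate))
```

Provisioning from the prediction rather than the current load is what makes the approach proactive: capacity is in place before the arrival spike, instead of reacting after response times have already degraded.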

45 citations