
Showing papers in "Journal of Network and Systems Management in 2020"


Journal ArticleDOI
TL;DR: This survey investigates how research is adapting to two recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model, and identifies several challenges and possible opportunities for future research.
Abstract: Cloud computing heavily relies on virtualization, as with cloud computing virtual resources are typically leased to the consumer, for example as virtual machines. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises how these strategies can be adapted for the management of a containerized cloud. Apart from this, the cloud is also no longer limited to the centrally hosted data center infrastructure. New deployment models have gained maturity, such as fog and mobile edge computing, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art regarding resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to the recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model. Furthermore, we identify several challenges and possible opportunities for future research.

43 citations


Journal ArticleDOI
TL;DR: This article articulates the technical challenges to enable a future AR/VR end-to-end architecture that combines 5G URLLC and Tactile IoT technology to support the next generation of interconnected AR/VR applications.
Abstract: Despite remarkable advances, current augmented and virtual reality (AR/VR) applications are a largely individual and local experience. Interconnected AR/VR, where participants can virtually interact across vast distances, remains a distant dream. The great barrier that stands between current technology and such applications is the stringent end-to-end latency requirement, which should not exceed 20 ms in order to avoid motion sickness and other discomforts. Bringing AR/VR to the next level to enable immersive interconnected AR/VR will require significant advances towards 5G ultra-reliable low-latency communication (URLLC) and a Tactile Internet of Things (IoT). In this article, we articulate the technical challenges of enabling a future AR/VR end-to-end architecture that combines 5G URLLC and Tactile IoT technology to support this next generation of interconnected AR/VR applications. Through the use of IoT sensors and actuators, AR/VR applications will be aware of the environmental and user context, supporting human-centric adaptations of the application logic and lifelike interactions with the virtual environment. We present potential use cases and the required technological building blocks. For each of them, we delve into the current state of the art and the challenges that need to be addressed before the dream of remote AR/VR interaction can become reality.

34 citations


Journal ArticleDOI
TL;DR: This article analyzes the profound changes the networking landscape is expected to undergo over the next decade and provides an overview of the resulting challenges, along with possible solution approaches and opportunities for research.
Abstract: The networking landscape is expected to undergo profound changes over the course of the next decade. New network services are expected to emerge that will enable new applications such as the Tactile Internet, Holographic-Type Communications, or Tele-Driving. Many of these services will be characterized by very high degrees of precision with which end-to-end service levels must be supported. This will have profound implications for the management of those networks and services, from the need to support new methods for assurance of ultra-high-precision services to the need for new network programming models that will allow the industry to move beyond DevOps and SDN towards User-Defined Networking. This article analyzes those implications and provides an overview of challenges along with possible solution approaches and opportunities for research.

31 citations


Journal ArticleDOI
TL;DR: To detect malicious miners that claim greater computing capacity than they possess, this work provides a machine learning module to estimate miners' real capacities; the efficiency of the proposed trust model is studied, and the obtained simulation results are presented and discussed.
Abstract: In blockchain, transactions between parties are grouped into blocks, in order to be added to the blockchain's distributed ledger. Miners are nodes of the network that generate new blocks according to the consensus protocol. The miner that adds a valid block to the distributed ledger is rewarded. However, to find a valid block, the miner needs to solve a computationally difficult problem, which makes it difficult for a single miner to gain rewards. Therefore, miners join mining pools, where the power of individual miners is federated to ensure stable revenues. In public blockchains, access to mining pools is not restricted, which makes mining pools vulnerable to considerable threats such as block withholding (BWH) attacks and distributed denial of service (DDoS) attacks. In the present work, we propose a new reputation-based blockchain named PoolCoin, based on a distributed trust model for mining pools. The trust model used by PoolCoin is inspired by the job market signaling model. The proposed PoolCoin blockchain allows pool managers to select trusted miners for their mining pools, while miners are able to evaluate pool managers in turn. Furthermore, to detect malicious miners that claim greater computing capacity than they possess, we also provide a machine learning module to estimate miners' real capacities. The efficiency of the proposed trust model is studied, and the obtained simulation results are presented and discussed. Thus, the model parameters are optimized in order to detect and exclude misbehaving miners, while honest miners are retained in the mining pool.

29 citations


Journal ArticleDOI
TL;DR: This paper uses the information entropy TOPSIS method to rank the importance of substrate nodes, with the aim of choosing the most appropriate substrate node to accommodate each virtual node in the virtual network embedding process, and uses the shortest path algorithm to perform the link mapping process.
Abstract: Network virtualization is an effective manner to address the ossification issue of the Internet architecture. Virtual network embedding is one of the most critical techniques in network virtualization environments. Several security problems concerning virtual network embedding arise from the fact that it adds a virtual layer into the Internet architecture. In this paper, we propose an approach for security-aware virtual network embedding, called SA-VNE, to address these security problems. Firstly, we use the information entropy TOPSIS method to rank the importance of substrate nodes, with the aim of choosing the most appropriate substrate node to accommodate each virtual node. Secondly, we use the shortest path algorithm to perform the link mapping process. Simulation results demonstrate that the proposed SA-VNE algorithm outperforms existing state-of-the-art security-aware virtual network embedding algorithms in terms of the long-term average revenue, the long-term average VN acceptance ratio, the long-term average revenue-to-cost ratio, and the running time.

29 citations
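
As an illustration of the entropy-weighted TOPSIS ranking step described above, the following minimal Python sketch ranks candidate substrate nodes. The node attributes (CPU, bandwidth, security score) and the assumption that all criteria are benefit-type are invented for illustration; they are not the paper's exact criteria.

```python
"""Entropy-weighted TOPSIS ranking of substrate nodes: a minimal sketch."""
import numpy as np

def entropy_weights(X):
    # Turn each criterion column into a distribution over nodes.
    P = X / X.sum(axis=0)
    n = X.shape[0]
    # Shannon entropy per criterion; log(1) = 0 guards the p = 0 case.
    E = -(P * np.log(np.where(P > 0, P, 1))).sum(axis=0) / np.log(n)
    d = 1 - E                 # diversification: low entropy -> high weight
    return d / d.sum()

def topsis_rank(X, w):
    V = w * X / np.linalg.norm(X, axis=0)        # weighted, vector-normalized
    best, worst = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal points
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    closeness = d_worst / (d_best + d_worst)     # higher = better node
    return np.argsort(-closeness), closeness

# Rows: candidate substrate nodes; columns: CPU, bandwidth, security score
# (all treated as benefit criteria here, an assumption of this sketch).
X = np.array([[8.0, 100.0, 0.9],
              [4.0, 250.0, 0.6],
              [16.0, 80.0, 0.7]])
order, score = topsis_rank(X, entropy_weights(X))
print("node ranking:", order, "closeness:", score.round(3))
```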


Journal ArticleDOI
TL;DR: An integrated framework for network security risk management is presented, based on a probabilistic graphical model called a Bayesian decision network (BDN); experiments show that the network security level improves significantly due to precise assessment and appropriate mitigation of risks.
Abstract: Network security risk management is comprised of several essential processes, namely risk assessment, risk mitigation, and risk validation and monitoring, which should be done accurately to maintain the overall security level of a network at an acceptable level. In this paper, an integrated framework for network security risk management is presented which is based on a probabilistic graphical model called a Bayesian decision network (BDN). Using BDN, we model the information needed for managing security risks, such as information about vulnerabilities, risk-reducing countermeasures and the effects of implementing them on vulnerabilities, with minimal need for expert knowledge. In order to increase the accuracy of the proposed risk assessment process, the exploitation probability of vulnerabilities and the impact of their exploitation on network assets are calculated using inherent, temporal and environmental factors. In the risk mitigation process, a cost-benefit analysis is efficiently done using modified Bayesian inference algorithms, even in the case of a budget limitation. The experimental results show that the network security level improves significantly due to precise assessment and appropriate mitigation of risks.

25 citations
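
The risk mitigation step above weighs countermeasure cost against risk reduction under a budget. As a toy illustration of that budgeted selection step only, here is a greedy benefit/cost sketch; the paper derives its risk figures by Bayesian inference over the BDN rather than from fixed constants, and all names and numbers below are invented.

```python
"""Budget-limited countermeasure selection: a toy cost-benefit sketch."""
from dataclasses import dataclass

@dataclass
class Countermeasure:
    name: str
    cost: float
    risk_reduction: float  # expected drop in probability x impact

def select(countermeasures, budget):
    # Greedy by benefit/cost ratio: a standard heuristic for budgeted
    # selection, not the paper's modified Bayesian inference algorithm.
    chosen, spent = [], 0.0
    ranked = sorted(countermeasures,
                    key=lambda c: c.risk_reduction / c.cost, reverse=True)
    for cm in ranked:
        if spent + cm.cost <= budget:
            chosen.append(cm)
            spent += cm.cost
    return chosen, spent

cms = [Countermeasure("patch web server", 3.0, 8.0),
       Countermeasure("segment network", 5.0, 9.0),
       Countermeasure("disable legacy TLS", 1.0, 2.5)]
chosen, spent = select(cms, budget=6.0)
print([c.name for c in chosen], "total cost:", spent)
```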


Journal ArticleDOI
TL;DR: This paper proposes a model, formulated from a network infrastructure provider's perspective, that allocates the network cost to the different deployed slices; this allocation can later be used to price the different E2E services.
Abstract: Within the upcoming fifth generation (5G) mobile networks, many emerging technologies, such as Software Defined Networking (SDN), Network Function Virtualization (NFV) and network slicing, are proposed to provide more flexibility, agility and cost-efficient deployment. These new networking paradigms are shaping not only the network architectures but will also affect the market structure and business case of the stakeholders involved. Due to its capability of splitting the physical network infrastructure into several isolated logical sub-networks, network slicing opens the network resources to vertical segments, aiming at providing customized and more efficient end-to-end (E2E) services. While many standardization efforts within the 3GPP body have been made regarding the system architectural and functional features for the implementation of network slicing in 5G networks, techno-economic analysis of this concept is still at a very incipient stage. This paper initiates this techno-economic work by proposing a model that allocates the network cost to the different deployed slices, which can then later be used to price the different E2E services. This allocation is made from a network infrastructure provider's perspective. To feed the proposed model with the required inputs, a resource allocation algorithm together with a 5G network function (NF) dimensioning model are also proposed. Results of the different models, as well as the cost savings on the core network resulting from the use of NFV, are also discussed.

25 citations


Journal ArticleDOI
TL;DR: This paper presents a workload prediction model based on extreme learning machines (ELM), whose learning time is very low and which forecasts the workload more accurately, outperforming state-of-the-art techniques by reducing the mean prediction error by up to 100% and 99% on CPU and memory request traces respectively.
Abstract: Cloud computing has drastically transformed the means of computing in the past few years. Apart from numerous advantages, it suffers from a number of issues including resource under-utilization, load balancing and power consumption. Workload prediction is being widely explored to solve these issues using time series analysis, regression and neural network based models. The time series analysis based models are unable to capture the dynamics in the workload behavior, whereas neural network based models offer better accuracy at the cost of high training time. This paper presents a workload prediction model based on extreme learning machines (ELM), whose learning time is very low and which forecasts the workload more accurately. The performance of the model is evaluated over two real-world cloud server workloads, i.e., CPU and memory demand traces of a Google cluster, and compared with predictive models based on state-of-the-art techniques including Auto Regressive Integrated Moving Average (ARIMA), Support Vector Regression (SVR), Linear Regression (LR), Differential Evolution (DE), Blackhole Algorithm (BhA), and Back Propagation (BP). It is observed that the proposed model outperforms the state-of-the-art techniques by reducing the mean prediction error by up to 100% and 99% on CPU and memory request traces respectively.

24 citations
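
The speed claim above follows from how an ELM is trained: the hidden layer is random and fixed, and only the output weights are solved in closed form. Below is a minimal sketch on a synthetic CPU-demand-like trace; the window size, hidden-layer width, and the synthetic trace are illustrative assumptions, not the paper's tuned setup.

```python
"""Extreme learning machine (ELM) for one-step workload forecasting."""
import numpy as np

rng = np.random.default_rng(0)

def make_windows(series, w):
    # Supervised pairs: w past observations predict the next one.
    X = np.stack([series[i:i + w] for i in range(len(series) - w)])
    return X, series[w:]

class ELM:
    def __init__(self, n_hidden=64):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        # Random, fixed input weights and biases.
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # Closed-form least squares: the only "training" an ELM does.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Synthetic CPU-demand-like trace: a daily cycle plus noise.
t = np.arange(2000)
trace = 50 + 20 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 2, t.size)
X, y = make_windows(trace, w=12)
model = ELM().fit(X[:1500], y[:1500])
mae = np.abs(model.predict(X[1500:]) - y[1500:]).mean()
print(f"test mean absolute error: {mae:.3f}")
```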


Journal ArticleDOI
TL;DR: This paper provides an extensive investigation of the state-of-the-art MCDM-based service selection schemes proposed in the literature, provides the required background knowledge, and puts forward a taxonomy of the investigated service selection schemes regarding their applied MCDM methods.
Abstract: The growing number of services that can meet users' functional requirements has inspired many researchers to provide approaches to rank and select the best possible services regarding their quality of service (QoS) and users' preferences. Given the various criteria that must be considered in the service selection process, multi-criteria decision making (MCDM) techniques have been widely applied to help a decision-maker in determining the weight of each QoS factor and ranking the services provided by different service providers. This paper provides an extensive investigation of the state-of-the-art MCDM-based service selection schemes proposed in the literature. It provides the required background knowledge and puts forward a taxonomy of the investigated service selection schemes regarding their applied MCDM methods. Also, it describes how the MCDM methods are adapted by the studied schemes, which datasets and QoS criteria are employed by each system, and which factors and environments are utilized to evaluate the service selection schemes. Finally, concluding remarks are provided, and directions for future studies are highlighted.

22 citations


Journal ArticleDOI
TL;DR: This paper proposes a taxonomy of decision fusion methods that rely on the theory of belief, together with a data fusion method for the Internet of Things (DFIOT) based on Dempster–Shafer (D–S) theory and an adaptive weighted fusion algorithm.
Abstract: In Internet of Things (IoT) ubiquitous environments, a high volume of heterogeneous data is produced by different devices in a quick span of time. In all IoT applications, the quality of information plays an important role in decision making. Data fusion is one of the current research trends in this arena and is the focus of this paper. We particularly consider typical IoT scenarios where the sources' measurements highly conflict, which makes intuitive fusion prone to wrong and misleading results. This paper proposes a taxonomy of decision fusion methods that rely on the theory of belief. It proposes a data fusion method for the Internet of Things (DFIOT) based on Dempster–Shafer (D–S) theory and an adaptive weighted fusion algorithm. It considers the reliability of each device in the network and the conflicts between devices when fusing data, while accounting for the information lifetime and the distance separating sensors and entities, and reducing computation. The proposed method uses a combination of rules based on the Basic Probability Assignment (BPA) to represent uncertain information and to quantify the similarity between two bodies of evidence. To investigate the effectiveness of the proposed method in comparison with D–S, Murphy, Deng and Yuan, a comprehensive analysis is provided using both benchmark data simulation and a real dataset from a smart building testbed. Results show that DFIOT outperforms all the above-mentioned methods in terms of reliability, accuracy and conflict management. The accuracy of the system reached up to 99.18% on benchmark artificial datasets and 98.87% on real datasets with a conflict of 0.58%. We also examine the impact of this improvement from the application perspective (energy saving), and the results show a gain of up to 90% when using DFIOT.

21 citations
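
Dempster's rule of combination, the core operation behind the fusion step described above, can be sketched in a few lines. The two-sensor fire-detection frame and the mass values below are invented for illustration; DFIOT additionally weights each source by reliability and manages conflict adaptively, which this sketch omits.

```python
"""Dempster's rule of combination for two sensor reports: a minimal sketch."""
from itertools import product

def combine(m1, m2):
    # Masses are dicts mapping frozenset hypotheses to belief mass.
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y          # K: mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    # Normalize the surviving mass by 1 - K.
    return {h: v / (1.0 - conflict) for h, v in combined.items()}, conflict

FIRE, OK = frozenset({"fire"}), frozenset({"ok"})
EITHER = FIRE | OK                      # ignorance: "fire or ok"
m_smoke = {FIRE: 0.7, OK: 0.1, EITHER: 0.2}   # smoke detector report
m_temp  = {FIRE: 0.6, OK: 0.3, EITHER: 0.1}   # temperature sensor report
fused, K = combine(m_smoke, m_temp)
print({tuple(h): round(v, 3) for h, v in fused.items()}, "conflict:", round(K, 3))
```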


Journal ArticleDOI
TL;DR: The novel notion of a responsible Internet is proposed, which provides higher degrees of trust and sovereignty for critical service providers and all kinds of other users by improving the transparency, accountability, and controllability of the Internet at the network level.
Abstract: Policy makers in regions such as Europe are increasingly concerned about the trustworthiness and sovereignty of the foundations of their digital economy, because it often depends on systems operated or manufactured elsewhere. To help curb this problem, we propose the novel notion of a responsible Internet, which provides higher degrees of trust and sovereignty for critical service providers (e.g., power grids) and all kinds of other users by improving the transparency, accountability, and controllability of the Internet at the network-level. A responsible Internet accomplishes this through two new distributed and decentralized systems. The first is the Network Inspection Plane (NIP), which enables users to request measurement-based descriptions of the chains of network operators (e.g., ISPs and DNS and cloud providers) that handle their data flows or could potentially handle them, including the relationships between them and the properties of these operators. The second is the Network Control Plane (NCP), which allows users to specify how they expect the Internet infrastructure to handle their data (e.g., in terms of the security attributes that they expect chains of network operators to have) based on the insights they gained from the NIP. We discuss research directions and starting points to realize a responsible Internet by combining three currently largely disjoint research areas: large-scale measurements (for the NIP), open source-based programmable networks (for the NCP), and policy making (POL) based on the NIP and driving the NCP. We believe that a responsible Internet is the next stage in the evolution of the Internet and that the concept is useful for clean slate Internet systems as well.

Journal ArticleDOI
TL;DR: Simulation results demonstrate that FNCR outperforms the formerly proposed approaches employing network coding, in terms of throughput, end-to-end delay, packet delivery ratio, and lifetime of the network.
Abstract: Network coding, as one of the foremost techniques boosting the performance of wireless networks, has recently acquired notable popularity. As a result, a new category of routing approaches, named coding-aware routing schemes, has emerged. In such routing schemes, the possible coding opportunities are identified prior to the path establishment, and paths containing coding opportunities are prioritized. Motivated by the appreciable efficiency of coding-aware routing schemes, this paper leverages fuzzy logic and proposes a novel coding-aware routing approach referred to as the Fuzzy-logic-based Network Coding-aware Routing (FNCR) protocol. Unlike a number of previously proposed coding-aware routing schemes, which merely endeavor to establish paths including more coding opportunities, FNCR embeds a purposefully designed fuzzy system in each node in order to calculate the overall desirability of the nodes in terms of momentous factors such as the coding capability, the remaining energy, and the workload of the node. In addition to a new routing metric which utilizes the calculated overall desirability, the previously proposed coding conditions are modified such that more possible coding opportunities can be identified. Simulation results demonstrate that FNCR outperforms the formerly proposed approaches employing network coding, in terms of throughput, end-to-end delay, packet delivery ratio, and lifetime of the network.

Journal ArticleDOI
TL;DR: A modified dragonfly algorithm is applied to VM placement for better resource utilization at cloud data centers; observations exhibit the superiority of the proposed model in solving the VM placement problem.
Abstract: The ease and affordability offered by cloud computing have attracted a large number of customers. Cloud service providers offer their services to cloud customers, usually in the form of Virtual Machines (VMs). With the growth in the number of customers, cloud data centers encounter an overwhelming number of VM requests. These requests need to be mapped onto the real cloud hardware, and therefore VM placement has been an important research area in the cloud research community. Virtual machine placement, being an NP-hard problem, is modelled as an optimization problem with the objective of minimizing resource wastage. The Dragonfly Algorithm (DA), a nature-inspired technique, originates from the static and dynamic swarming behavior of dragonflies and is well suited to the VM placement problem. Therefore, in the proposed work, a modified dragonfly algorithm is applied to VM placement for better resource utilization at cloud data centers. The performance of the proposed model is analyzed through simulation and a comparative study. Observations obtained from the experiments exhibit the superiority of the proposed model in solving the VM placement problem.
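
To make the optimization objective concrete, the sketch below computes a commonly used resource-wastage measure (the imbalance between leftover CPU and memory on a host) and a first-fit-decreasing baseline placement. Both the wastage formula and the baseline are illustrative stand-ins; the paper optimizes its objective with a modified dragonfly algorithm, which is omitted here.

```python
"""Resource-wastage objective for VM placement, with a first-fit baseline."""

def wastage(cpu_used, mem_used, eps=1e-4):
    # A common wastage form: imbalance between leftover CPU and memory,
    # normalized by total utilization (not necessarily the paper's formula).
    leftover_cpu, leftover_mem = 1.0 - cpu_used, 1.0 - mem_used
    return (abs(leftover_cpu - leftover_mem) + eps) / (cpu_used + mem_used + eps)

def first_fit_decreasing(vms):
    # vms: list of (cpu, mem) demands normalized to [0, 1] of host capacity.
    hosts = []  # each host: [cpu_used, mem_used, assigned_vms]
    for cpu, mem in sorted(vms, key=lambda v: v[0] + v[1], reverse=True):
        for h in hosts:
            if h[0] + cpu <= 1.0 and h[1] + mem <= 1.0:
                h[0] += cpu; h[1] += mem; h[2].append((cpu, mem))
                break
        else:  # no existing host fits: open a new one
            hosts.append([cpu, mem, [(cpu, mem)]])
    return hosts

vms = [(0.5, 0.2), (0.3, 0.6), (0.4, 0.4), (0.2, 0.1), (0.6, 0.3)]
for i, h in enumerate(first_fit_decreasing(vms)):
    print(f"host {i}: cpu={h[0]:.1f} mem={h[1]:.1f} wastage={wastage(h[0], h[1]):.3f}")
```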

Journal ArticleDOI
TL;DR: The cooperative Blockchain Signaling System (BloSS) defines an effective and alternative solution for security management, especially cooperative defenses, by exploiting Blockchains and Software-Defined Networks for sharing attack information, an exchange of incentives, and tracking of reputation in a fully distributed and automated fashion.
Abstract: Distributed Denial-of-Service (DDoS) attacks are one of the major causes of concern for communication service providers. When an attack is highly sophisticated and no countermeasures are available directly, sharing hardware and defense capabilities becomes a compelling alternative. Future network and service management can base its operations on equally distributed systems to neutralize highly distributed DDoS attacks. A cooperative defense allows for the combination of detection and mitigation capabilities, the reduction of overhead at a single point, and the blockage of malicious traffic near its source. The main challenges impairing the widespread deployment of existing cooperative defenses are: (a) high complexity of operation and coordination, (b) the need for trusted and secure communications, (c) a lack of incentives for service providers to cooperate, and (d) determining how the operations of these systems are affected by different legislation, regions, and countries. The cooperative Blockchain Signaling System (BloSS) defines an effective and alternative solution for security management, especially cooperative defenses, by exploiting Blockchains (BC) and Software-Defined Networks (SDN) for sharing attack information, an exchange of incentives, and tracking of reputation in a fully distributed and automated fashion. Therefore, BloSS was prototyped and evaluated through a global experiment, without the burden of maintaining, designing, and developing special registries and gossip protocols.

Journal ArticleDOI
TL;DR: A multi-fuzzy, dynamic and hierarchical trust model (FDTM-IoT) is proposed that provides high performance in detecting attacks and improves network performance on a variety of criteria, including end-to-end delay and packet loss rates.
Abstract: The Internet of Things (IoT) can be described as the pervasive and global network where real-world entities augmented with computing devices, sensors and actuators are connected to the Internet, enabling them to publish their generated data. Thus, an efficient and secure routing service is required to enable efficient communication of information among IoT nodes. This sophisticated, dynamic, and ultra-large-scale network requires the use of contextual information, attention to security issues and consideration of service quality to make proper routing decisions. The routing protocol for low-power and lossy networks (RPL) and its improved versions experience severe performance gaps under network attacks such as blackhole, Sybil and rank attacks. This paper uses the concept of trust as an umbrella to cover countermeasures addressing the consequences of attacks. Accordingly, a multi-fuzzy, dynamic and hierarchical trust model (FDTM-IoT) is proposed. The main dimensions of this model are contextual information (CI), quality of service (QoS) and quality of P2P communication (QPC). Each dimension also has its own sub-dimensions or criteria. FDTM-IoT is integrated into RPL (FDTM-RPL) as its objective function. FDTM-RPL uses trust to deal with attacks. In the proposed method, fuzzy logic is used in trust calculations to account for uncertainty, one of the most important inherent characteristics of trust. The efficiency of FDTM-RPL in various scenarios (from small-scale to large-scale networks, in mobile environments, with different transmission rates and under different attacks) has been compared with the standard RPL protocol. FDTM-RPL provides high performance in detecting attacks. Additionally, it improves network performance on a variety of criteria, including end-to-end delay and packet loss rates.

Journal ArticleDOI
TL;DR: This work provides a review of the applications of machine learning in network and system management, presents the current opportunities and challenges, and highlights the need for dependable, reliable and secure machine learning for network and system management.
Abstract: Modern networks and systems pose many challenges to traditional management approaches. Not only are the number of devices and the volume of network traffic increasing exponentially, but new network protocols and technologies also require new techniques and strategies for monitoring, controlling and managing up-and-coming networks and systems. Moreover, machine learning has recently found successful applications in many fields due to its capability to learn from data and automatically infer patterns for network analytics. Thus, the deployment of machine learning in network and system management has become imminent. This work provides a review of the applications of machine learning in network and system management. Based on this review, we present the current opportunities and challenges, and highlight the need for dependable, reliable and secure machine learning for network and system management.

Journal ArticleDOI
TL;DR: This paper analyzes the characteristics and requirements of future networking applications, and puts forward a novel network architecture adapted to the Tactile Internet called FlexNGIA, a Flexible Next-Generation Internet Architecture.
Abstract: From virtual reality and telepresence, to augmented reality, holoportation, and remotely controlled robotics, these future network applications promise an unprecedented development for society, economics and culture by revolutionizing the way we live, learn, work and play. In order to deploy such futuristic applications and to cater to their performance requirements, recent trends stressed the need for the “Tactile Internet”, an Internet that, according to the International Telecommunication Union (ITU), combines ultra low latency with extremely high availability, reliability and security (ITU-T Technology Watch Report. The Tactile Internet, 2014). Unfortunately, today’s Internet falls short when it comes to providing such stringent requirements due to several fundamental limitations in the design of the current network architecture and communication protocols. This brings the need to rethink the network architecture and protocols, and efficiently harness recent technological advances in terms of virtualization and network softwarization to design the Tactile Internet of the future. In this paper, we start by analyzing the characteristics and requirements of future networking applications. We then highlight the limitations of the traditional network architecture and protocols and their inability to cater to these requirements. Afterward, we put forward a novel network architecture adapted to the Tactile Internet called FlexNGIA, a Flexible Next-Generation Internet Architecture. We then describe some use-cases where we discuss the potential mechanisms and control loops that could be offered by FlexNGIA in order to ensure the required performance and reliability guarantees for future applications. Finally, we identify the key research challenges to further develop FlexNGIA towards a full-fledged architecture for the future Tactile Internet.

Journal ArticleDOI
TL;DR: This work analyzes, from an architectural point of view, the required coordination for the provisioning of 5G services over multiple network segments/domains by means of network slicing, considering as well the use of sensors and actuators to maintain the performance of slices during their lifetime.
Abstract: The current deployment of 5G networks, in a way that supports the highly demanding service types defined for 5G, has brought the need for new techniques to accommodate legacy networks to such requirements. Network slicing, in turn, enables sharing the same underlying physical infrastructure among services with different requirements, while providing a level of isolation between them to guarantee their proper functionality. In this work, we analyse, from an architectural point of view, the required coordination for the provisioning of 5G services over multiple network segments/domains by means of network slicing, considering as well the use of sensors and actuators to maintain the performance of slices during their lifetime. We set up an experimental multi-segment testbed to demonstrate end-to-end service provisioning and its guarantees in terms of specific QoS parameters, such as latency, throughput and Virtual Network Function (VNF) CPU/RAM consumption. The results provided demonstrate the workflow between different network components to coordinate the deployment of slices, besides providing a set of examples for slice maintenance through service monitoring and the use of policy-based actuations.

Journal ArticleDOI
TL;DR: A reasonable global model of load redistribution for the communication network is designed, and it is found that the betweenness centrality can accurately reflect the scale of cascading failure, while the closeness centrality is negatively correlated with the frequency of failure participation.
Abstract: In communication networks, the cascading failure, which is initiated by influential nodes, may cause local paralysis of communication networks and make network management systems face big challenges in both fault location and the rational use of maintenance resources. As network failure is inevitable, how to find the fragile nodes and the root cause of cascading failure has been recognized as an important research problem in both academia and industry. In this paper, we focus on the problem of identifying critical nodes when cascading failures occur in communication networks. Based on the Barabasi–Albert (BA) model, which is used to generate the scale-free network, we design a reasonable global model of load redistribution for the communication network, and we also find that the betweenness centrality can accurately reflect the scale of a cascading failure, and that the closeness centrality is negatively correlated with the frequency of failure participation, by (1) establishing a reasonable model of fault propagation, and (2) extracting and analyzing the dataset derived from the topology information. Simulation results demonstrate that our model can effectively identify critical nodes of networks and that the global redistribution model is more robust than other existing models.
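
A runnable illustration of the ingredients named above (a Barabasi-Albert topology, betweenness centrality as load, failure once load exceeds capacity) can be written with networkx. The cascade rule below is the classic recompute-betweenness model in the style of Motter and Lai, standing in for the paper's own global redistribution model; the tolerance parameter alpha is an invented value.

```python
"""Betweenness-based cascading failure on a scale-free graph: a sketch."""
import networkx as nx

def cascade(G, seed, alpha=0.2):
    # Initial load from betweenness; capacity is proportional to it.
    load0 = nx.betweenness_centrality(G)
    cap = {n: (1 + alpha) * load0[n] for n in G}
    H = G.copy()
    H.remove_node(seed)
    failed = {seed}
    while True:
        # Traffic re-routes: recompute load on the surviving topology.
        load = nx.betweenness_centrality(H)
        over = [n for n in H if load[n] > cap[n]]
        if not over:
            return failed
        failed.update(over)
        H.remove_nodes_from(over)

G = nx.barabasi_albert_graph(200, 2, seed=42)
hub = max(G.degree, key=lambda d: d[1])[0]   # fail the highest-degree node
print(f"cascade size after failing node {hub}: {len(cascade(G, hub))}")
```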

Journal ArticleDOI
TL;DR: A new technique to detect and mitigate DDoS attacks in NDN, which depends on cooperation among NDN routers with the help of a centralized controller, is proposed; it offers better performance compared with previously proposed techniques.
Abstract: Named Data Networking (NDN) is a new and attractive paradigm that has attracted broad interest in recent research as a potential alternative to the existing IP-based (host-based) Internet architecture. Security is explicitly considered one of the most critical issues in NDN. Although the NDN architecture presents higher resilience against most existing attacks, it can nevertheless be exploited to mount a DDoS attack. In a DDoS attack, the attacker tries to create and transmit a large number of fake Interest packets to increase network congestion, causing NDN routers to drop legitimate Interests. This paper proposes a new technique to detect and mitigate DDoS attacks in NDN that depends on cooperation among NDN routers with the help of a centralized controller. The functionality of these routers depends on their positions inside the autonomous system (AS). The simulation results show that the suggested technique is effective and precise in detecting fake name prefixes, and it offers better performance compared with previously proposed techniques.
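
One standard signal for detecting fake name prefixes of the kind described above is the per-prefix Interest satisfaction ratio: spoofed prefixes never return Data, so their ratio collapses. The standalone monitor below sketches only that metric; the threshold, minimum sample size, and class names are hypothetical, and the paper's scheme additionally coordinates edge and core routers through the controller.

```python
"""Per-prefix Interest satisfaction ratio as a DDoS signal: a sketch."""
from collections import defaultdict

class PrefixMonitor:
    def __init__(self, threshold=0.4, min_interests=50):
        self.sent = defaultdict(int)       # Interests forwarded, per prefix
        self.satisfied = defaultdict(int)  # Data packets returned, per prefix
        self.threshold = threshold
        self.min_interests = min_interests

    def on_interest(self, prefix):
        self.sent[prefix] += 1

    def on_data(self, prefix):
        self.satisfied[prefix] += 1

    def suspicious_prefixes(self):
        # Flag prefixes whose satisfaction ratio falls below the threshold;
        # fake prefixes never produce Data, so their ratio tends to zero.
        return [p for p, n in self.sent.items()
                if n >= self.min_interests
                and self.satisfied[p] / n < self.threshold]

mon = PrefixMonitor()
for _ in range(100):
    mon.on_interest("/video/popular"); mon.on_data("/video/popular")
for _ in range(80):
    mon.on_interest("/attack/fake")    # spoofed prefix: no Data ever returns
print(mon.suspicious_prefixes())
```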

Journal ArticleDOI
TL;DR: This work presents a multi-technology flow-management load balancing approach for heterogeneous wireless networks that dynamically re-routes traffic through heterogeneous networks, in order to maximize the global throughput.
Abstract: The number of connected devices reached 18 billion in 2017 and will nearly double by 2022, while new wireless communication technologies are also becoming available. Since modern devices support the use of multiple communication technologies, efforts have been made to enable simultaneous usage of, and handovers between, the different technologies on these devices. However, existing solutions lack the intelligence to decide on fine-grained (e.g., flow or packet level) optimizations that can drastically enhance the network's performance (e.g., throughput) and user experience. To this end, we present a multi-technology flow-management load balancing approach for heterogeneous wireless networks that dynamically re-routes traffic through heterogeneous networks in order to maximize the global throughput. This dynamic approach can be deployed on top of existing solutions and takes into account the specific characteristics of the different technologies, as well as station mobility. We present both a mathematical problem formulation and a heuristic that ensures practical scalability. We demonstrate the heuristic's ability to increase the network-wide throughput by more than 100% across a variety of scenarios, and its scalability up to 10,000 devices.
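
As a toy version of the flow-management idea above, the greedy sketch below assigns flows (largest first) to the technology interface with the most spare capacity. It ignores interference, per-technology characteristics, and mobility, all of which the paper's heuristic models; the capacities and flow demands are invented.

```python
"""Greedy flow-to-technology assignment for load balancing: a sketch."""

def balance(flows, capacity):
    # flows: {flow_id: demand in Mbps}; capacity: {technology: Mbps}.
    residual = dict(capacity)
    assignment = {}
    # Placing the largest flows first reduces the chance of stranding capacity.
    for fid, demand in sorted(flows.items(), key=lambda kv: -kv[1]):
        tech = max(residual, key=residual.get)   # most spare capacity
        if residual[tech] >= demand:
            assignment[fid] = tech
            residual[tech] -= demand
        else:
            assignment[fid] = None               # blocked: nothing fits
    return assignment, residual

flows = {"f1": 40, "f2": 25, "f3": 60, "f4": 10, "f5": 30}
capacity = {"wifi_5ghz": 120, "wifi_2_4ghz": 40, "lte": 50}
assignment, residual = balance(flows, capacity)
print(assignment)
print("spare capacity:", residual)
```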

Journal ArticleDOI
TL;DR: The proposed intelligent multi-source event detection and propagation model can learn from previous propagations to better discover and propagate hot events under users' changing interests, and the interaction broadens the influence scope of hot events.
Abstract: The social network is a huge source of information, which plays an increasingly crucial role in people's daily lives. As a form of online social network management, much information can be discovered via posts, which allow people to exchange and propagate real-life events. Multi-source event propagation involves relevant posts on interesting topics spreading from key users to other microblogging network users. However, traditional microblogging network management must contend with much noisy data. Meanwhile, few studies address the spontaneous transmission of events in microblogging networks, or the cooperation and competition among multiple event sources. To this end, an event detection and multi-source propagation model is established. Specifically, to obtain efficient and accurate hot event detection and propagation, information from previous event detection and propagation is used to create experience sets for intelligent event propagation. A multi-source event propagation model based on individual interest is established to describe the process of multi-source event information detection and dissemination, and the key role of users and information characteristics in the process of communication and network management. The experimental results show that the proposed intelligent multi-source event detection and propagation model can learn from previous propagations to better discover and propagate hot events under users' changing interests. Besides, the interaction broadens the influence scope of hot events. This helps to explain the formation and dissemination of microblogging hot events, and provides a theoretical basis for research on guiding strategies and network management.

Journal ArticleDOI
TL;DR: This article develops an optimization approach that employs the Monte Carlo Tree Search (MCTS) machine learning algorithm to simulate future traffic and improve network performance regarding request blocking and operational cost, and concludes that the selection policy of Last-Good-Reply with Forgetting enables more efficient cloud resource allocation.
Abstract: The rapid development of Cloud Computing and Content Delivery Networks (CDNs) brings a significant increase in data transfers that leads to new optimization challenges in inter-data center networks. In this article, we focus on the cross-stratum optimization of an inter-data center Elastic Optical Network (EON). We develop an optimization approach that employs the Monte Carlo Tree Search (MCTS) machine learning algorithm to simulate future traffic and improve the performance of the network regarding request blocking and operational cost. The key novelty of our approach is the use of various selection strategies applied in the phase of building the search tree under different network scenarios. We evaluate the performance of these selection strategies using representative topologies and real data provided by Amazon Web Services. The main conclusion is that the approach based on the policy of Last-Good-Reply with Forgetting enables more efficient cloud resource allocation, which results in lower request blocking and thus reduces the operational cost of the network.
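
To make the tree search concrete, below is a minimal UCT (upper-confidence tree) loop on a toy binary decision tree with fixed leaf payoffs, standing in for sequential allocation decisions. The paper's contribution lies in the choice of selection strategy (for example, Last-Good-Reply with Forgetting); this sketch uses plain UCB1, and the toy domain, depth, and exploration constant are invented.

```python
"""Minimal UCT (Monte Carlo Tree Search with UCB1 selection)."""
import math
import random

random.seed(1)
DEPTH = 10

def payoff(path):
    # Deterministic pseudo-random leaf value in [0, 1] for each full path.
    return random.Random(hash(path)).random()

class Node:
    def __init__(self, path):
        self.path = path          # tuple of 0/1 moves from the root
        self.children = {}
        self.visits = 0
        self.value = 0.0          # running mean of playout rewards

    def ucb1(self, child, c=1.4):
        return child.value + c * math.sqrt(math.log(self.visits) / child.visits)

def rollout(path):
    # Random playout down to a leaf, then evaluate it.
    while len(path) < DEPTH:
        path += (random.randint(0, 1),)
    return payoff(path)

def mcts(iterations=5000):
    root = Node(())
    for _ in range(iterations):
        node, visited = root, [root]
        # Selection: descend via UCB1 while the node is fully expanded.
        while len(node.path) < DEPTH and len(node.children) == 2:
            node = max(node.children.values(), key=lambda ch: node.ucb1(ch))
            visited.append(node)
        # Expansion: add one untried child if the node is internal.
        if len(node.path) < DEPTH:
            move = 0 if 0 not in node.children else 1
            child = Node(node.path + (move,))
            node.children[move] = child
            node = child
            visited.append(node)
        # Simulation and backpropagation of the playout reward.
        reward = rollout(node.path)
        for n in visited:
            n.visits += 1
            n.value += (reward - n.value) / n.visits
    best = max(root.children, key=lambda m: root.children[m].visits)
    return best, round(root.children[best].value, 3)

print(mcts())
```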

Journal ArticleDOI
TL;DR: This survey paper focuses on the management of big geospatial data generated by IoT data sources, defines a conceptual framework and matches the works of the recent literature with it, and identifies future research frontiers in the field based on the surveyed works.
Abstract: The high abundance of IoT devices has caused an unprecedented accumulation of avalanches of geo-referenced IoT spatial data which, if analyzed correctly, would unleash important information. This can feed decision support systems for better decision making and strategic planning regarding important aspects of our lives that depend heavily on location-based services. Several spatial data management systems for IoT data in the cloud have recently gained momentum. However, the literature is still missing a comprehensive survey that conceptualizes a convenient framework classifying those systems under appropriate categories. In this survey paper, we focus on the management of big geospatial data generated by IoT data sources. We also define a conceptual framework and match the works of the recent literature with it. We then identify future research frontiers in the field based on the surveyed works.

Journal ArticleDOI
TL;DR: This work proposes an adaptive ML-based approach for frame size selection on a per-user basis by taking into account both specific channel conditions and global performance indicators, and relies on standard frame aggregation mechanisms.
Abstract: Software-Defined Networking (SDN) is gaining a lot of traction in wireless systems, with several practical implementations and numerous proposals being made. Despite instigating a shift from monolithic network architectures towards more modulated operations, automated network management requires the ability to extract, utilise and improve knowledge over time. Beyond simply scrutinizing data, Machine Learning (ML) is evolving from a simple tool applied in networking to an active component in what is known as Knowledge-Defined Networking (KDN). This work discusses the inclusion of ML techniques in the specific case of Software-Defined Wireless Local Area Networks (SD-WLANs), paying particular attention to the frame length optimization problem. With this in mind, we propose an adaptive ML-based approach for frame size selection on a per-user basis, taking into account both specific channel conditions and global performance indicators. By relying on standard frame aggregation mechanisms, the model can be seamlessly embedded into any enterprise SD-WLAN by obtaining the data it needs from the control plane and then returning the output back to it in order to efficiently adapt the frame size to the needs of each user. Our approach has been gauged by analysing a multitude of scenarios, with the results showing an average improvement of 18.36% in goodput over standard aggregation mechanisms.
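
A minimal version of per-user frame-size selection can be cast as a regression problem: learn goodput as a function of channel quality and frame length, then pick the length with the best predicted goodput for each user. The synthetic channel model, the SNR-only feature set, and the candidate sizes below are invented; the paper learns from real SD-WLAN control-plane statistics and richer indicators.

```python
"""Per-user frame-size selection via a learned goodput model: a sketch."""
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
SIZES = np.array([1500, 3000, 6000, 11000])    # candidate aggregate bytes

def synth_goodput(snr, size):
    # Toy model: longer frames amortize overhead but suffer more at low SNR.
    err = np.clip(1 - snr / 40, 0.0, 0.9) * size / 11000   # loss probability
    return size * (1 - err) / (size + 400)                  # 400 B of overhead

# Train on synthetic (SNR, size) -> goodput observations.
snr = rng.uniform(5, 40, 4000)
size = rng.choice(SIZES, 4000)
y = synth_goodput(snr, size) + rng.normal(0, 0.01, 4000)
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(np.column_stack([snr, size]), y)

def best_size(user_snr):
    # Predict goodput for every candidate size and keep the argmax.
    X = np.column_stack([np.full(SIZES.size, user_snr), SIZES])
    return SIZES[int(np.argmax(model.predict(X)))]

for s in (8, 20, 35):
    print(f"SNR {s} dB -> frame size {best_size(s)} bytes")
```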

Journal ArticleDOI
TL;DR: An efficient method for dynamically relocating VNFs by considering changes in a user's location and the resources currently available at the NFV nodes is proposed.
Abstract: In a Software-Defined Wireless Network (SDWN), Network Function Virtualization (NFV) technology enables the implementation of network services in software. These softwarized network services, running as virtual machines on NFV nodes, i.e., commercial servers with NFV capability, are called Virtual Network Functions (VNFs). To provide services to users, several different VNFs can be configured into one logical chain, referred to as a Service Function Chain (SFC). While receiving services from a specific VNF located at an NFV node, a mobile user may change its location. This user may continue to receive service from an associated VNF by routing flows through a new NFV node that is closest to its current location. This may introduce an inefficient routing path, which may degrade the network performance. Therefore, it is desirable to relocate the VNFs associated with the service chain of the user to other NFV nodes. To relocate VNFs optimally, we need a new optimal routing path. However, if some NFV nodes on this new path are overloaded, placing these VNFs on overloaded NFV nodes affects the performance of the service chain. To solve this problem, this paper proposes an efficient method for dynamically relocating VNFs by considering changes in a user's location and the resources currently available at the NFV nodes. The performance of the proposed scheme is evaluated using simulations and an experimental testbed for multiple scenarios under three different network topologies. Results indicate that the proposed scheme balances the load on NFV nodes, reduces SFC blocking rates, and improves the network throughput.

Journal ArticleDOI
TL;DR: The 5G could provide a broad array of services to the society, including weather monitoring, medical services, transportation and vehicular services, defense applications, and smart cities applications.
Abstract: In recent years, people have become more connected to the network, sharing information, collaborating, and generating/consuming a huge amount of data. This "hyperconnected world" is driven by the next generation of the Internet, where different networks, such as the traditional Internet, Internet of things (IoT), smart cities, smart grids, and intelligent transportation systems, are federated under the umbrella of one network called 5G. Indeed, the 5G is gaining momentum as it extends the regular Internet by connecting a diverse range of "things" or physical objects like electronic appliances, cars, thermostats, and other devices. The 5G could provide a broad array of services to the society, including weather monitoring, medical services, transportation and vehicular services, defense applications, and smart cities applications.

Journal ArticleDOI
TL;DR: An application-aware firewall mechanism for SDN, which can be implemented as an extension to the network's controller, is proposed and implemented using a Python-based POX controller, with the network topology built using the Mininet emulation tool.
Abstract: Software-Defined Networking (SDN) has recently been rising as a new technology in the IT industry. It is a network architecture that aims to provide better solutions to most of the constraints in contemporary networks. SDN is a centralized control architecture for networking in which the control plane is separated from the data plane, the network services are abstracted from the underlying forwarding devices, and the network's intelligence is centralized in a software-based, directly programmed device called a controller. These features of SDN allow for a more flexible, programmable and innovative network architecture. However, they may pose new vulnerabilities and may lead to new security problems. In this paper, we propose an application-aware firewall mechanism for SDN, which can be implemented as an extension to the network's controller. In order to provide more control and visibility into applications running over the network, the system is able to detect network applications that may at some point affect the network's performance, and it is capable of dynamically enforcing constraint rules on applications. The firewall architecture is designed as four cooperating modules: the Main Module, the Filtering Module, the Application Identification Module, and the Security-Enforcement Module. The proposed mechanism checks the network traffic at the network, transport, and application levels, and installs appropriate security instructions down into the network. The proposed solution's features were implemented and tested using a Python-based POX controller, and the network topology was built using the Mininet emulation tool.

Journal ArticleDOI
TL;DR: A new approach to optimize the switch-to-controller assignment problem with load balancing support is proposed, using an improvement of the Hungarian algorithm; the proposed solution outperforms parallel schemes in terms of flow setup time and load balancing.
Abstract: Software defined networking (SDN) has gained a lot of interest from network operators due to its ability to offer flexibility, efficiency and fine-grained control over forwarding elements (FE) by decoupling the control and data planes. In the control plane, a centralized node, denoted the controller, receives requests from ingress switches and makes decisions on path forwarding. Unfortunately, request processing may lead to controller performance degradation as the number of incoming requests goes up. This paper deals with the controller performance issue in Software Defined WAN (SD-WAN). Mainly, it proposes a new approach to optimize the switch-to-controller assignment problem with load balancing support. The issue is formulated as a Minimum Cost Bipartite Assignment optimization problem, which is solved using an improvement of the Hungarian algorithm. The new algorithm is based on the introduction of the load-driven penalty concept, which aims to achieve a trade-off between the round trip time and the controller load. Finally, a new protocol, denoted the Distributed Hungarian-based Assignment Protocol (DHAP), is described as an implementation of the proposed solution in multi-controller environments. As shown in the results, the proposed solution outperforms parallel schemes in terms of flow setup time and load balancing.
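
The assignment step can be illustrated with SciPy's Hungarian-method solver. In the sketch below, the cost of assigning a switch to a controller is its RTT plus a load penalty, and each controller column is replicated once per available slot so several switches may share one controller. The penalty form, the slot count, and all numbers are invented stand-ins for the paper's load-driven penalty.

```python
"""Switch-to-controller assignment with a load penalty: a sketch."""
import numpy as np
from scipy.optimize import linear_sum_assignment

rtt = np.array([[2.0, 7.0, 9.0],    # rows: switches; cols: controllers (ms)
                [3.0, 4.0, 8.0],
                [8.0, 3.0, 2.0],
                [9.0, 6.0, 2.5]])
load = np.array([0.9, 0.4, 0.2])    # current controller utilization
PENALTY = 10.0                      # trade-off between RTT and load

# Replicate each controller column once per slot so several switches can
# share a controller (here: at most 2 switches per controller).
slots_per_ctrl = 2
cost = np.repeat(rtt + PENALTY * load, slots_per_ctrl, axis=1)
rows, cols = linear_sum_assignment(cost)     # Hungarian method
for sw, slot in zip(rows, cols):
    print(f"switch {sw} -> controller {slot // slots_per_ctrl}")
```

The heavily loaded controller 0 becomes expensive for every switch, so the solver spreads assignments toward the lighter controllers even when their RTT is slightly worse, which is the trade-off the load-driven penalty is meant to capture.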

Journal ArticleDOI
TL;DR: ISDSDN is implemented as an extension module of the POX controller and is evaluated under different attack scenarios, showing that the proposed mechanism is very effective in defending against SYN flood attacks.
Abstract: Software defined networking (SDN) has emerged over the past few years as a novel networking technology that enables fast and easy network management. Separating the control plane and the data plane in SDNs allows for dynamic network management, implementation of new applications, and implementing network-specific functions in software. This paper addresses the problem of SYN flood attacks in SDNs, which are considered among the most challenging threats because their effect extends beyond the targeted end system to the controller and the TCAM of OpenFlow switches. These attacks exploit the three-way handshake connection establishment mechanism in TCP, where attackers overwhelm the victim machine with a flood of spoofed SYN packets, resulting in a large number of half-open connections that would never complete, thereby degrading the performance of the controller and populating OpenFlow switches' TCAMs with spoofed entries. In this paper, we propose ISDSDN, a mechanism for SYN flood attack mitigation in software defined networks. The proposed mechanism adopts the idea of intentional dropping to distinguish between legitimate and attack SYN packets in the context of software defined networks. ISDSDN is implemented as an extension module of the POX controller and is evaluated under different attack scenarios. Performance evaluation shows that the proposed mechanism is very effective in defending against SYN flood attacks.
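
The intentional-dropping idea can be sketched independently of the controller: drop the first SYN of an unknown (source, destination) pair and forward only if the SYN is retransmitted within a window, since legitimate TCP stacks retransmit while spoofed flood sources typically do not. The class name, the window length, and the standalone form are assumptions of this sketch; the actual ISDSDN module installs flow rules through POX rather than filtering packets directly.

```python
"""Intentional first-SYN dropping to filter spoofed floods: a sketch."""
import time

class SynFilter:
    def __init__(self, retry_window=5.0):
        self.first_seen = {}        # (src, dst) -> time of dropped first SYN
        self.whitelist = set()      # pairs that proved they retransmit
        self.retry_window = retry_window

    def handle_syn(self, src, dst, now=None):
        now = now if now is not None else time.monotonic()
        key = (src, dst)
        if key in self.whitelist:
            return "forward"
        seen = self.first_seen.get(key)
        if seen is not None and now - seen <= self.retry_window:
            # A retransmitted SYN inside the window: likely a real TCP stack.
            self.whitelist.add(key)
            return "forward"
        self.first_seen[key] = now  # first (or stale) SYN: drop intentionally
        return "drop"

f = SynFilter()
print(f.handle_syn("10.0.0.1", "10.0.0.9", now=0.0))   # drop (first SYN)
print(f.handle_syn("10.0.0.1", "10.0.0.9", now=1.2))   # forward (retransmit)
print(f.handle_syn("10.0.0.66", "10.0.0.9", now=0.0))  # drop (never retries)
```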