Showing papers in "IEEE Transactions on Network and Service Management in 2022"


Journal ArticleDOI
TL;DR: This state-of-the-art review integrates blockchain and AI/ML with UAV networks utilizing the 6G ecosystem to address the security and privacy, intelligence, and energy-efficiency issues faced by swarms of UAVs operating in 6G mobile networks.
Abstract: Fifth-generation (5G) cellular networks have led to the implementation of beyond 5G (B5G) networks, which are capable of incorporating autonomous services for swarms of unmanned aerial vehicles (UAVs). They provide capacity expansion strategies to address massive connectivity issues and guarantee ultra-high throughput and low latency, especially in extreme or emergency situations where network density, bandwidth, and traffic patterns fluctuate. On the one hand, 6G technology integrates AI/ML, IoT, and blockchain to establish ultra-reliable, intelligent, secure, and ubiquitous UAV networks. On the other hand, 6G networks rely on new enabling technologies such as air interface and transmission technologies, as well as a unique network design, posing new challenges for swarms of UAVs. Keeping these challenges in mind, this article focuses on the security and privacy, intelligence, and energy-efficiency issues faced by swarms of UAVs operating in 6G mobile networks. In this state-of-the-art review, we integrate blockchain and AI/ML with UAV networks utilizing the 6G ecosystem. The key findings are then presented, and potential research challenges are identified. We conclude the review by shedding light on future research in this emerging field.

29 citations


Journal ArticleDOI
TL;DR: In this paper, an intelligent multi-attribute routing scheme (MARS) for two-layered software-defined vehicle networks (SDVNs) is proposed, employing fuzzy logic and the technique for order preference by similarity to ideal solution (TOPSIS) to find the next-hop forwarder.
Abstract: Due to complicated and changing urban traffic conditions and the dynamic mobility of vehicles, the network topology can change rapidly, which causes communication links between vehicles to disconnect frequently and further degrades the performance of vehicular networking. To overcome this problem, we propose an intelligent multi-attribute routing scheme (MARS) for two-layered software-defined vehicle networks (SDVNs). The proposed scheme is divided into two phases: routing path calculation and multi-attribute vehicle autonomous routing decision-making. In this paper, we construct the topology diagram in SDVNs for finding efficient routing paths. To increase the packet arrival rate and reduce the end-to-end delay, an intelligent multi-attribute routing scheme is proposed by employing fuzzy logic and the technique for order preference by similarity to ideal solution (TOPSIS) to find the next-hop forwarder. To solve the uncertainty problem of multiple attributes, we apply fuzzy logic to identify the weight of each attribute in the TOPSIS algorithm. Simulation results demonstrate that MARS can effectively improve the packet delivery ratio and reduce average end-to-end delay in urban environments compared with its counterparts.
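
As a rough illustration of the decision step, the sketch below ranks candidate next-hop vehicles with plain TOPSIS in Python; the attribute set, the fuzzy-derived weights, and the benefit/cost split are all placeholders, since the paper derives its weights from its own fuzzy system.

```python
import numpy as np

def topsis_rank(decision_matrix, weights, benefit_mask):
    """Rank candidate next-hop forwarders with TOPSIS.

    decision_matrix: rows = candidate vehicles, columns = attributes
                     (e.g., link stability, distance to destination, load).
    weights:         attribute weights, here assumed to come from a fuzzy system.
    benefit_mask:    True where larger attribute values are better.
    """
    # Vector-normalize each attribute column, then apply the weights.
    norm = decision_matrix / np.linalg.norm(decision_matrix, axis=0)
    weighted = norm * weights

    # Ideal best/worst take the max (benefit) or min (cost) per attribute.
    ideal_best = np.where(benefit_mask, weighted.max(axis=0), weighted.min(axis=0))
    ideal_worst = np.where(benefit_mask, weighted.min(axis=0), weighted.max(axis=0))

    # Closeness to the ideal solution: higher means a better next hop.
    d_best = np.linalg.norm(weighted - ideal_best, axis=1)
    d_worst = np.linalg.norm(weighted - ideal_worst, axis=1)
    return d_worst / (d_best + d_worst)

# Three candidate forwarders, three attributes (stability, distance, load).
candidates = np.array([[0.8, 120.0, 0.3],
                       [0.6,  80.0, 0.1],
                       [0.9, 200.0, 0.7]])
weights = np.array([0.5, 0.3, 0.2])        # placeholder for fuzzy-derived weights
benefit = np.array([True, False, False])   # distance and load are "costs"
print(topsis_rank(candidates, weights, benefit))  # pick argmax as next hop
```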

24 citations


Journal ArticleDOI
TL;DR: A horizontal federated learning architecture, empowered by fog federations and devised for the mobile environment, is proposed; results show that the proposed model can achieve better accuracy and quality of service than other models presented in the literature.
Abstract: Federated learning using fog computing can suffer from the dynamic behavior of some of the participants in its training process, especially in Internet-of-Vehicles where vehicles are the targeted participants. For instance, the fog might not be able to cope with the vehicles’ demands in some areas due to resource shortages when the vehicles gather for events, or due to traffic congestion. Moreover, the vehicles are exposed to unintentionally leaving the fog coverage area which can result in the task being dropped as the communications between the server and the vehicles weaken. The aforementioned limitations can affect the federated learning model accuracy for critical applications, such as autonomous driving, where the model inference could influence road safety. Recent works in the literature have addressed some of these problems through active sampling techniques, however, they suffer from many complications in terms of stability, scalability, and efficiency of managing the available resources. To address these limitations, we propose a horizontal-based federated learning architecture, empowered by fog federations, devised for the mobile environment. In our architecture, fog computing providers form stable fog federations using a Hedonic game-theoretical model to expand their geographical footprints. Hence, providers belonging to the same federations can migrate services upon demand in order to cope with the federated learning requirements in an adaptive fashion. We conduct the experiments using a road traffic signs dataset modeled with intermodal traffic systems. The simulation results show that the proposed model can achieve better accuracy and quality of service than other models presented in the literature.
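
To make the coalition step concrete, here is a toy hedonic-game sketch in Python: providers switch federations under a made-up utility (new coverage zones gained minus a size-dependent coordination cost) until the partition is stable. Both the utility and the coverage map are illustrative, not the paper's model.

```python
def federation_utility(provider, federation, coverage):
    """Toy utility: coverage zones gained from federation partners, minus a
    coordination cost growing with size (the paper's utilities are richer)."""
    others = federation - {provider}
    if not others:
        return 0.0
    partner_zones = set().union(*(coverage[p] for p in others))
    return len(partner_zones - coverage[provider]) - 0.5 * len(others)

def hedonic_partition(providers, coverage):
    """Greedy best-response dynamics: providers keep switching to the
    federation they prefer until no one benefits from deviating."""
    federations = [{p} for p in providers]          # start as singletons
    changed = True
    while changed:
        changed = False
        for p in providers:
            current = next(f for f in federations if p in f)
            best = max(federations, key=lambda f: federation_utility(p, f | {p}, coverage))
            if federation_utility(p, best | {p}, coverage) > federation_utility(p, current, coverage):
                current.discard(p)
                best.add(p)
                federations = [f for f in federations if f]  # drop emptied sets
                changed = True
    return federations

coverage = {"A": {1, 2}, "B": {2, 3}, "C": {7, 8}}  # zones each fog provider serves
print(hedonic_partition(["A", "B", "C"], coverage))
```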

22 citations


Journal ArticleDOI
TL;DR: This work evaluates QUIC performance over web, cloud storage, and video workloads, compares them to traditional TLS/TCP, and observes that QUIC tends to deliver video content better, with fewer stall events and up to 50% lower stall durations due to its lower latency overheads.
Abstract: QUIC was launched in 2013 with the goal of providing reliable, connection-oriented, end-to-end encrypted transport, and was standardized in May 2021 by the Internet Engineering Task Force (IETF). This work evaluates QUIC performance over web, cloud storage, and video workloads and compares them to traditional TLS/TCP. To this end, we have designed tests (quic_perf, tls_perf and video) and conducted measurements from 2018–2021 using multiple vantage points: an educational network, a high-bandwidth low-RTT residential link in Germany, and a low-bandwidth high-RTT residential link in India. We target the Alexa Top-1M for web workloads and probe them for support of QUIC, TLS 1.2, and TLS 1.3. By measuring the <5.7K websites that support QUIC, we observe that QUIC has up to ≈140% lower mean connection times than TLS 1.2/1.3 over TCP for low-bandwidth and high-RTT networks. When comparing different versions of QUIC, we observe that IETF QUIC connection times are slightly better than different versions (Q050, Q046, Q044, Q043, Q039 and Q035) of gQUIC. For cloud storage workloads, we observe that TLS 1.2 over TCP exhibits higher throughput for larger file sizes (>20 MB up to 2 GB), while QUIC exhibits higher throughput for smaller file sizes (≤20 MB) when downloading files from Google Drive. At the same time, QUIC has much higher CPU utilization than TLS 1.2 over TCP, almost double while downloading a large file (200 MB) from Google Drive, due to in-kernel optimizations that benefit TCP. For video workloads, we observe that QUIC is 534 ms faster than TLS 1.2 over TCP from India (406 ms from Germany) in establishing a connection to YouTube media servers. Although we witness that (similar to cloud storage workloads) the overall download rate is higher over TLS, QUIC still tends to deliver video content better, with fewer stall events and up to 50% lower stall durations due to its lower latency overheads. To support reproducibility, the developed tests and the collected data are made publicly available to the community.
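
For readers who want to reproduce the handshake-timing idea on the TLS side, a minimal standard-library sketch is below; the host is an arbitrary example, and QUIC timing is out of scope here since it requires a dedicated client library (e.g., aioquic).

```python
import socket
import ssl
import time

def tls_connect_time(host: str, port: int = 443):
    """Time the TCP + TLS handshake to a host, in the spirit of the paper's
    tls_perf test; QUIC timing would need a separate QUIC client library."""
    ctx = ssl.create_default_context()
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            elapsed = time.perf_counter() - start
            return elapsed, tls.version()

# Needs network access; repeat and average to smooth out RTT jitter.
secs, version = tls_connect_time("www.google.com")
print(f"{version}: {secs * 1000:.1f} ms")
```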

15 citations


Journal ArticleDOI
TL;DR: This paper presents CLB, a programmable switch-based general-purpose in-network load balancer that can adapt to traffic changes at a very high speed and leads to performance improvement compared to other load balancing schemes.
Abstract: This paper presents CLB, a programmable switch-based general-purpose in-network load balancer that can adapt to traffic changes at very high speed. It uses the Weighted-Cost Multipath (WCMP) mechanism for traffic-aware load balancing over many paths at coarse-grained precision. CLB can be configured to match the load balancing requirements of a wide range of applications at line rate. We have analytically shown that CLB can achieve a bounded response time to traffic changes in the data plane. We implement CLB using the P4 programming language. Our experimental evaluation shows that CLB can successfully distribute the incoming load over multiple paths for a given path-weight distribution and leads to performance improvement compared to other load balancing schemes.
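
The WCMP selection logic itself is simple to sketch outside the data plane. The Python below replicates paths in a lookup table proportionally to their weights and hashes the flow key into it; in CLB this runs as P4 match-action logic at line rate, so treat this purely as an illustration of the mechanism.

```python
import hashlib

def wcmp_select(flow_key: bytes, paths: list[str], weights: list[int]) -> str:
    """Pick a path for a flow with weighted-cost multipath (WCMP).

    Each path is replicated in a table proportionally to its weight, and a
    stable hash of the flow key indexes the table, so all packets of one
    flow stay on the same path.
    """
    table = [p for path, w in zip(paths, weights) for p in [path] * w]
    digest = hashlib.sha256(flow_key).digest()
    index = int.from_bytes(digest[:4], "big") % len(table)
    return table[index]

# Flow 5-tuple serialized as bytes; weights 3:2:1 across three paths.
flow = b"10.0.0.1,10.0.0.2,443,51512,TCP"
print(wcmp_select(flow, ["spine1", "spine2", "spine3"], [3, 2, 1]))
```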

14 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a cost-sensitive deep learning approach to increase the robustness of deep learning classifiers against the imbalanced class problem in NTC, where the dataset is divided into different partitions, and a cost matrix is created for each partition by considering the data distribution.
Abstract: Network traffic classification (NTC) plays an important role in cyber security and network performance, for example in intrusion detection and facilitating a higher quality of service. However, due to the imbalanced nature of traffic datasets, NTC can be extremely challenging, and poor management can degrade classification performance. While existing NTC methods seek to re-balance data distribution through resampling strategies, such approaches are known to suffer from information loss, overfitting, and increased model complexity. To address these challenges, we propose a new cost-sensitive deep learning approach to increase the robustness of deep learning classifiers against the imbalanced class problem in NTC. First, the dataset is divided into different partitions, and a cost matrix is created for each partition by considering the data distribution. Then, the costs are applied to the cost function layer to penalize classification errors. In our approach, costs differ for each type of misclassification because the cost matrix is specifically generated for each partition. To determine its utility, we implement the proposed cost-sensitive learning method in two deep learning classifiers, namely a stacked autoencoder and convolutional neural networks. Our experiments on the ISCX VPN-nonVPN dataset show that the proposed approach can obtain higher classification performance on low-frequency classes in comparison to three other NTC methods.
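
A minimal sketch of the cost-matrix idea follows, under the assumption that misclassifying a class is penalized in inverse proportion to its frequency within the partition; the paper's exact matrix construction and cost-function layer may differ.

```python
import numpy as np

def cost_matrix_from_distribution(class_counts):
    """Build a per-partition misclassification cost matrix: the cost of
    misclassifying class i grows with its rarity in this partition."""
    counts = np.asarray(class_counts, dtype=float)
    freq = counts / counts.sum()
    cost = np.ones((len(counts), len(counts))) / freq[:, None]  # row = true class
    np.fill_diagonal(cost, 0.0)  # correct predictions cost nothing
    return cost

def cost_sensitive_loss(probs, labels, cost):
    """Weighted cross-entropy: each sample's loss is scaled by the cost of
    the (true class, predicted class) cell of the cost matrix."""
    eps = 1e-12
    sample_costs = cost[labels, probs.argmax(axis=1)]
    sample_costs = np.where(sample_costs == 0, 1.0, sample_costs)  # correct preds keep unit weight
    ce = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(sample_costs * ce))

# One partition with a heavily imbalanced class distribution.
cost = cost_matrix_from_distribution([900, 80, 20])
probs = np.array([[0.7, 0.2, 0.1],   # correctly predicts class 0
                  [0.5, 0.3, 0.2]])  # misclassifies minority class 2 as 0
print(cost_sensitive_loss(probs, np.array([0, 2]), cost))
```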

13 citations


Journal ArticleDOI
TL;DR: In this article, a blockchain and onion routing (OR)-based secure and trusted framework is proposed to improve the initial detection rate of malicious data requests from smart home sensors, where the anonymity of the proposed OR network is maintained by storing and tracking the onion nodes' threshold values through the blockchain network.
Abstract: Sensor communication in the smart home environment is still in its infancy, as the information exchange between sensors is vulnerable to security threats. Many traditional solutions use single-layer or multi-layer (i.e., onion routing protocol) encryption/decryption algorithms. However, in the traditional onion routing protocol, if the directory server is compromised, it may not track the malicious onion nodes within the onion network, which calls into question the path anonymity of the onion routing protocol. Motivated by this, we propose a blockchain and onion routing (OR)-based secure and trusted framework in this paper. The anonymity of the proposed OR network is maintained by storing and tracking the onion nodes' threshold values through the blockchain network. A long short-term memory (LSTM) model is also utilized to classify the sensors' data requests as malicious or non-malicious. The performance of the proposed system is evaluated with different performance metrics such as F1 score and accuracy. The LSTM model significantly improves the initial detection rate of malicious data requests from smart home sensors. In addition to these benefits, we consider the entire communication to take place via a 6G channel, reducing the overall communication latency. Additionally, the OR network is simulated over the Shadow simulator to analyze its performance, considering parameters such as packet delivery ratio and malicious onion node detection rate.
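
As a sketch of the classification component, a minimal PyTorch LSTM over windows of request features is shown below; the feature dimensionality and window length are hypothetical, and the paper's model details are not reproduced here.

```python
import torch
import torch.nn as nn

class RequestClassifier(nn.Module):
    """Minimal LSTM that labels a window of sensor-request features as
    malicious (1) or benign (0); the feature layout here is hypothetical."""
    def __init__(self, n_features: int = 8, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)         # final hidden state summarizes the window
        return torch.sigmoid(self.head(h_n[-1])).squeeze(1)

model = RequestClassifier()
windows = torch.randn(4, 20, 8)            # 4 request windows of 20 timesteps each
scores = model(windows)                     # probability each window is malicious
print((scores > 0.5).int())                 # 1 = flag as malicious
```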

13 citations


Journal ArticleDOI
TL;DR: In this article, a consortium blockchain network that supports Hyperledger smart contracts (SC) is deployed to set up secure resource trading among seller and buyer MVNOs, where the pricing and demand problem of the seller and buyers is modeled as a two-stage Stackelberg game.
Abstract: The advent of radio access network (RAN) slicing is envisioned as a new paradigm for accommodating different virtualized networks on a single infrastructure in 5G and beyond. Consequently, infrastructure providers (InPs) desire virtualized networks to share their subleased resources for effective resource management. Nonetheless, security and privacy challenges in the wireless network deter operators from collaborating with one another for resource trading. Lately, blockchain technology has received overwhelming attention for secure resource trading thanks to its security features. This paper proposes a novel hierarchical framework for blockchain-based resource trading among peer-to-peer (P2P) mobile virtual network operators (MVNOs), for autonomous resource slicing in 5G RAN. Specifically, a consortium blockchain network that supports Hyperledger smart contracts (SC) is deployed to set up secure resource trading among seller and buyer MVNOs. With the aim of designing a fair incentive mechanism, we model the pricing and demand problem of the seller and buyers as a two-stage Stackelberg game, where the seller MVNO is the leader and buyer MVNOs are followers. To achieve a Stackelberg equilibrium (SE) for the formulated game, a dueling deep Q-network (Dueling DQN) scheme is designed to obtain optimal pricing and demand policies for autonomous resource allocation at each negotiation interval. Comprehensive simulation results show that the proposed scheme reduces double-spending attacks by 12% in resource trading settings and maximizes the utilities of players. The proposed scheme also outperforms the deep Q-network (DQN), Q-learning (QL) and greedy algorithm (GA) baselines in terms of slice- and system-level satisfaction and resource utilization.
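
The two-stage structure is easy to illustrate with a closed-form toy: followers best-respond to a posted price, and the leader sweeps prices anticipating that response. The utility function and valuations below are assumptions; the paper reaches the equilibrium with a Dueling DQN instead.

```python
import numpy as np

def follower_demand(price, valuation):
    """Buyer MVNO best response: demand maximizing the concave utility
    u(d) = valuation*log(1+d) - price*d, so d* = valuation/price - 1."""
    return max(valuation / price - 1.0, 0.0)

def leader_profit(price, valuations, unit_cost=0.2):
    """Seller MVNO profit once every buyer's stage-2 response is folded in."""
    total_demand = sum(follower_demand(price, v) for v in valuations)
    return (price - unit_cost) * total_demand

# Stage 1: the seller sweeps candidate prices, anticipating stage-2 demand.
valuations = [1.0, 1.5, 2.0]               # hypothetical buyer valuations
prices = np.linspace(0.25, 2.0, 200)
best_price = max(prices, key=lambda p: leader_profit(p, valuations))
print(f"approximate Stackelberg price: {best_price:.3f}")
```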

12 citations


Journal ArticleDOI
TL;DR: The first framework, XeNIDS, for reliable cross-evaluations based on Network Flows is proposed, demonstrating the concealed potential, but also the risks, of cross-evaluations of ML-NIDS.
Abstract: Enhancing Network Intrusion Detection Systems (NIDS) with supervised Machine Learning (ML) is tough. ML-NIDS must be trained and evaluated, operations requiring data where benign and malicious samples are clearly labeled. Such labels demand costly expert knowledge, resulting in a lack of real deployments, as well as in papers always relying on the same outdated data. The situation improved recently, as some efforts disclosed their labeled datasets. However, most past works used such datasets just as ‘yet another’ testbed, overlooking the added potential provided by such availability. In contrast, we promote using such existing labeled data to cross-evaluate ML-NIDS. This approach has received only limited attention and, due to its complexity, requires a dedicated treatment. We hence propose the first cross-evaluation model. Our model highlights the broader range of realistic use-cases that can be assessed via cross-evaluations, allowing the discovery of still unknown qualities of state-of-the-art ML-NIDS. For instance, their detection surface can be extended at no additional labeling cost. However, conducting such cross-evaluations is challenging. Hence, we propose the first framework, XeNIDS, for reliable cross-evaluations based on Network Flows. By using XeNIDS on six well-known datasets, we demonstrate the concealed potential, but also the risks, of cross-evaluations of ML-NIDS.
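
A bare-bones cross-evaluation loop might look like the following, with synthetic stand-ins for two flow datasets that already share a feature schema (the alignment work XeNIDS actually automates):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def cross_evaluate(datasets):
    """Train a flow-based ML-NIDS on each labeled dataset and test it on
    every dataset (including itself); off-diagonal cells show how detection
    transfers across networks, the core idea of a cross-evaluation."""
    results = {}
    for src, (Xtr, ytr, _, _) in datasets.items():
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
        for dst, (_, _, Xte, yte) in datasets.items():
            results[(src, dst)] = f1_score(yte, clf.predict(Xte))
    return results

# Two synthetic stand-ins sharing one NetFlow-style feature schema.
datasets = {}
for name, seed in [("netA", 0), ("netB", 1)]:
    X, y = make_classification(n_samples=2000, n_features=20, random_state=seed)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.4, random_state=seed)
    datasets[name] = (Xtr, ytr, Xte, yte)

for pair, score in cross_evaluate(datasets).items():
    print(pair, f"F1 = {score:.2f}")
```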

12 citations


Journal ArticleDOI
TL;DR: Citrus as mentioned in this paper is a novel intrusion detection framework which is adept at tackling emerging threats through the collection and labelling of live attack data by utilizing diverse Internet vantage points in order to detect and classify malicious behaviour using graph-based metrics as well as a range of machine learning (ML) algorithms.
Abstract: The Internet of Things (IoT), in combination with advancements in Big Data, communications and networked systems, offers a positive impact across a range of sectors including health, energy, manufacturing and transport. By virtue of current business models adopted by manufacturers and ICT operators, IoT devices are deployed over various networked infrastructures with minimal security, opening up a range of new attack vectors. Conventional rule-based intrusion detection mechanisms used by network management solutions rely on pre-defined attack signatures and hence are unable to identify new attacks. In parallel, anomaly detection solutions tend to suffer from high false positive rates due to the limited statistical validation of the ground truth data used for profiling normal network behaviour. In this work we go beyond current solutions and leverage the coupling of anomaly detection and Cyber Threat Intelligence (CTI) with parallel processing for the profiling and detection of emerging cyber attacks. We demonstrate the design, implementation, and evaluation of Citrus: a novel intrusion detection framework which is adept at tackling emerging threats through the collection and labelling of live attack data, utilising diverse Internet vantage points in order to detect and classify malicious behaviour using graph-based metrics as well as a range of machine learning (ML) algorithms. Citrus considers the importance of ground truth data validation, and its flexible software architecture enables both real-time and offline profiling, detection and classification of emerging cyber attacks under optimal computational costs, establishing it as a viable and practical solution for next-generation network defence and resilience strategies.

Journal ArticleDOI
TL;DR: A Fault Tolerant Elastic Resource Management (FT-ERM) framework is proposed that addresses the aforementioned problem from a different perspective by inducing high availability in servers and VMs; it improves the availability of services by up to 34.47% and scales down VM migration and power consumption by up to 88.6% and 62.4%, respectively, over operation without FT-ERM.
Abstract: Cloud computing has become inevitable for every digital service, which has exponentially increased its usage. However, a tremendous surge in cloud resource demand undermines service availability, resulting in outages, performance degradation, load imbalance, and excessive power consumption. Existing approaches mainly attempt to address the problem by using multi-cloud deployments and running multiple replicas of a virtual machine (VM), which accounts for high operational cost. This paper proposes a Fault Tolerant Elastic Resource Management (FT-ERM) framework that addresses the aforementioned problem from a different perspective by inducing high availability in servers and VMs. Specifically, (1) an online failure predictor is developed to anticipate failure-prone VMs based on predicted resource contention; (2) the operational status of each server is monitored with the help of a power analyser, resource estimator and thermal analyser to proactively identify any failure due to overloading and overheating of servers; and (3) failure-prone VMs are assigned to the proposed fault-tolerance unit, composed of a decision matrix and safe box, to trigger VM migration and handle any outage beforehand while maintaining the desired level of availability for cloud users. The proposed framework is evaluated and compared against the state of the art by executing experiments using two real-world datasets. FT-ERM improves the availability of services by up to 34.47% and scales down VM migration and power consumption by up to 88.6% and 62.4%, respectively, over operation without FT-ERM.
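
The proactive flagging step can be caricatured in a few lines; the thresholds below are illustrative, whereas FT-ERM derives its decisions from an online failure predictor plus power, resource, and thermal analysers.

```python
def failure_prone(predicted_usage, cpu_cap=0.9, mem_cap=0.85, temp_cap=75.0):
    """Flag VMs whose predicted utilization or host temperature implies
    contention or overheating; thresholds here are purely illustrative."""
    return [name for name, m in predicted_usage.items()
            if m["cpu"] > cpu_cap or m["mem"] > mem_cap or m["temp"] > temp_cap]

predicted = {
    "vm1": {"cpu": 0.95, "mem": 0.60, "temp": 65.0},   # CPU contention expected
    "vm2": {"cpu": 0.40, "mem": 0.50, "temp": 80.0},   # host overheating
    "vm3": {"cpu": 0.30, "mem": 0.30, "temp": 55.0},
}
# Flagged VMs would be handed to the fault-tolerance unit, which decides
# whether to migrate them before an outage materializes.
print(failure_prone(predicted))   # ['vm1', 'vm2']
```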

Journal ArticleDOI
TL;DR: In this article , the authors provide a survey on Telecommunication Services Marketplaces (TSMs) which employ blockchain technology as the main trust enabling entity in order to avoid any intermediaries.
Abstract: Digital marketplaces were created recently to accelerate the delivery of applications and services to customers. Their appealing feature is to activate and dynamize the demand, supply, and development of digital goods, applications, or services. By being an intermediary between producer and consumer, the primary business model for a marketplace is to charge the producer with a commission on the amount paid by the consumer. However, most of the time, the commission is dictated by the marketplace facilitator itself and creates an imbalance in value distribution, where producer and consumer sides suffer monetarily. In order to eliminate the need for a centralized entity between the producer and consumer, a blockchain-based decentralized digital marketplace concept was introduced. It provides marketplace actors with the tools to perform business transactions in a trusted manner and without the need for an intermediary. In this work, we provide a survey on Telecommunication Services Marketplaces (TSMs) which employ blockchain technology as the main trust enabling entity in order to avoid any intermediaries. We provide an overview of scientific and industrial proposals on the blockchain-based online digital marketplaces at large, and TSMs in particular. We consider in this study the notion of telecommunication services as any service enabling the capability for information transfer and, increasingly, information processing provided to a group of users by a telecommunications system. We discuss the main standardization activities around the concepts of TSMs and provide particular use-cases for the TSM business transactions such as SLA settlement. Also, we provide insights into the main foundational services provided by the TSM, as well as a survey of the scientific and industrial proposals for such services. Finally, a prospect for future developments is given.

Journal ArticleDOI
TL;DR: In this paper , the authors proposed a dynamic distributed multi-path load balancing algorithm that relies on dynamic hashing computing for network flow distribution in DCNs, which dynamically adjusts traffic flow distribution at microsecond level according to the inverse ratio of the buffer occupancy.
Abstract: Benefiting from dense connections in data center networks (DCNs), load balancing algorithms are capable of steering traffic into multiple paths to prevent traffic congestion. However, given each path’s time-varying and asymmetrical traffic state, this may also lead to worse congestion when some paths are overutilised. Especially in two-tier hybrid optical/electrical DCNs (Hoe-DCNs), the port contentions and large-grained optical packets of the fast optical switch (FOS) require the top-of-rack (TOR) switch to have microsecond-level load balancing capability for microburst traffic. This paper establishes a leaf-spine Hoe-DCN model to illustrate, for the first time, the principal characteristics of dynamic load balancing in TOR switches. Moreover, we propose the dynamic distributed multi-path (DDMP) load balancing algorithm, which relies on dynamic hashing for network flow distribution in DCNs and dynamically adjusts traffic flow distribution at microsecond level according to the inverse ratio of buffer occupancy. The simulation results show that our proposed algorithm reduces TOR-to-TOR latency by 15.88% and decreases packet loss by 22.06% compared to conventional algorithms under regular load conditions, effectively improving the overall performance of Hoe-DCNs. Moreover, our proposed algorithm prevents more than 90% of packet loss under low load conditions.
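
The core weighting rule is straightforward to sketch: pick a path with probability proportional to the inverse of its egress buffer occupancy. The Python below is only the rule itself; in the paper this runs as microsecond-level dynamic hashing in the TOR switch.

```python
import random

def ddmp_pick_path(buffer_occupancy):
    """Distribute new traffic across paths with probability proportional
    to the inverse of each egress queue's buffer occupancy."""
    # Guard against empty buffers so a fully idle path doesn't divide by zero.
    inv = [1.0 / max(occ, 1e-6) for occ in buffer_occupancy]
    total = sum(inv)
    weights = [w / total for w in inv]
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

# Occupancy of three TOR egress buffers (fraction of capacity in use).
print(ddmp_pick_path([0.8, 0.2, 0.4]))  # path 1 is chosen most often
```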

Journal ArticleDOI
TL;DR: In this article , a distributed computation offloading scheme based on deep reinforcement learning (DCODRL) was proposed to minimize the weighted average cost, including the latency cost and the energy cost.
Abstract: In heterogeneous wireless networks, massive numbers of mobile terminals randomly generate large numbers of computation tasks (payloads). How to better manage these mobile terminals to achieve acceptable quality of service (QoS), such as latency minimization and energy consumption minimization, is crucial. A multi-access edge computing (MEC) server can be leveraged to execute the offloaded payloads generated by mobile terminals owing to its powerful processing capabilities and location proximity. However, an MEC server cannot tackle all offloaded tasks from multiple mobile terminals, and its energy consumption needs further consideration. We introduce an edge server model combining unmanned aerial vehicles (UAVs) with macro base stations equipped with MEC (MBS-MEC) to process arriving payloads, and all UAVs and MBS-MECs can harvest renewable energy using energy harvesting equipment. Furthermore, we model computation offloading as a deep reinforcement learning scheme without a priori knowledge. Considering the infeasibility of centralized deep-reinforcement-learning-based training for the proposed edge computing framework, we propose a distributed computation offloading scheme based on deep reinforcement learning (DCODRL) to minimize the weighted average cost, comprising the latency cost and the energy cost. Each mobile terminal can be regarded as a learning agent for the DCODRL. To compensate for the lack of cooperation in the DCODRL, we propose a gated-recurrent-unit-assisted multi-agent computation offloading scheme based on deep reinforcement learning (MCODRL) to improve the offloading policy by obtaining global observation information and designing a common reward for all agents. Comprehensive numerical results reflect the convergence and effectiveness of the DCODRL and MCODRL, and the efficacy of the proposed algorithms is further verified through comparisons with two baseline algorithms.
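
To show the shape of the decision loop without the deep-RL machinery, here is a tabular Q-learning stand-in: one terminal repeatedly chooses among local execution, a UAV MEC, and an MBS-MEC under a synthetic latency-plus-energy cost. States, costs, and transitions are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = ["local", "uav_mec", "mbs_mec"]   # offload targets for one terminal
N_STATES = 5                                # quantized local queue length

def step_cost(state, action):
    """Synthetic weighted latency + energy cost; in the paper this comes
    from channel conditions, server queues, and harvested energy."""
    base = {"local": 1.0, "uav_mec": 0.5, "mbs_mec": 0.4}[ACTIONS[action]]
    return base * (1 + 0.3 * state) + rng.normal(0, 0.05)

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(20_000):
    action = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmin(Q[state]))
    cost = step_cost(state, action)
    next_state = int(rng.integers(N_STATES))   # arrivals move the queue randomly
    # Q-learning update, minimizing cost (min in place of the usual max).
    Q[state, action] += alpha * (cost + gamma * Q[next_state].min() - Q[state, action])
    state = next_state

print({s: ACTIONS[int(np.argmin(Q[s]))] for s in range(N_STATES)})
```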

Journal ArticleDOI
TL;DR: In this article , a novel time-based anomaly detection system, Chronos, was proposed to detect anomalous DDoS traffic using an autoencoder and a threshold selection heuristic that maximizes the F1 score across various DDoS attacks.
Abstract: Cognitive network management is becoming quintessential to realizing autonomic networking. However, the widespread adoption of Internet of Things (IoT) devices increases the risk of cyber attacks. Adversaries can exploit vulnerabilities in IoT devices, which can be harnessed to launch massive Distributed Denial of Service (DDoS) attacks. Therefore, intelligent security mechanisms are needed to harden network security against these threats. In this paper, we propose Chronos, a novel time-based anomaly detection system. The anomaly detector, primarily an autoencoder, leverages time-based features over multiple time windows to efficiently detect anomalous DDoS traffic. We develop a threshold selection heuristic that maximizes the F1-score across various DDoS attacks. Further, we compare the performance of Chronos against state-of-the-art approaches. We show that Chronos marginally outperforms another time-based system while using a less complex anomaly detection pipeline, and outclasses flow-based approaches with superior precision. In addition, we showcase the robustness of Chronos in the face of zero-day attacks, noise in training data, and small numbers of training packets, asserting its suitability for online deployment.
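
One plausible reading of the threshold-selection heuristic is a sweep over candidate reconstruction-error thresholds, keeping the one with the best validation F1-score; the sketch below uses synthetic error distributions, not the paper's data.

```python
import numpy as np
from sklearn.metrics import f1_score

def select_threshold(errors, labels, n_candidates=200):
    """Pick the autoencoder reconstruction-error threshold that maximizes
    F1 on a labeled validation window."""
    candidates = np.linspace(errors.min(), errors.max(), n_candidates)
    scores = [f1_score(labels, (errors > t).astype(int)) for t in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

# Synthetic reconstruction errors: attacks (label 1) reconstruct poorly.
rng = np.random.default_rng(0)
errors = np.concatenate([rng.normal(0.1, 0.03, 900), rng.normal(0.5, 0.1, 100)])
labels = np.concatenate([np.zeros(900, dtype=int), np.ones(100, dtype=int)])
print(select_threshold(errors, labels))
```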

Journal ArticleDOI
TL;DR: In this article , a management framework for edge networks realized with flying ad-hoc networks (FANET) consisting of a set of UAVs to provide a remote geographic area with computing and networking facilities for delay-sensitive applications is proposed.
Abstract: The next generation of wireless communications networks, namely 6G, will be aimed at realizing a fully connected world, and at providing ubiquitous connectivity to people and objects even in remote areas that are very far from the structured Internet core network. These goals include the definition and the design of intelligent communications environments mainly characterized by pervasive artificial intelligence and large-scale automation. The target of this paper is the design of a management framework for edge networks realized with Flying Ad-Hoc Networks (FANET) consisting of a set of Unmanned Aerial Vehicles (UAVs) to provide a remote geographic area with computing and networking facilities for delay-sensitive applications. To this purpose, each UAV is equipped with a Computing Element (CE) to process jobs received through vertical offloading from ground devices. In addition, horizontal offload among UAVs of the FANET is introduced for load balancing purposes, to guarantee that the FANET computation delay for each received job is minimized and is almost independent of the activity state of the area covered by the UAV receiving that job. The proposed FANET management framework is based on Deep Reinforcement Learning (DRL) to allow zero-touch adaptation to the time-variant activity state of the area covered by each UAV. Numerical results demonstrate the power of the proposed framework and the enhancements achieved with respect to the current literature.

Journal ArticleDOI
TL;DR: In this paper, a service-driven fragmentation metric (SDFM) is proposed to estimate the fragmentation in the used path and neighboring links; services are assigned to the spectrum slots that lead to the minimum value of SDFM.
Abstract: To support fifth-generation bandwidth-hungry applications, such as the Internet of Things, virtual reality, augmented reality, and cloud computing, elastic optical networks have become the most promising infrastructure, allocating bandwidth to services flexibly. Fragmentation caused by dynamic resource allocation deteriorates the availability of resources in networks, increasing the blocking of requests. Fragmentation occurs not only in the used path but also in the neighboring links that are not included in the used path yet are connected to it. This paper proposes a service-driven fragmentation-aware (SDFA) resource allocation scheme to enhance resource utilization by avoiding fragmentation with joint consideration of the used path and neighboring links. A service-driven fragmentation metric (SDFM) is, for the first time, presented to estimate the fragmentation in the used path and neighboring links. The SDFA scheme prefers to assign services to the spectrum slots that lead to the minimum value of SDFM. Simulation results indicate that SDFA outperforms four conventional fragmentation-aware resource allocation schemes in terms of blocking probability and resource utilization due to lower fragmentation in the network.
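
To make the fragmentation idea tangible, the sketch below scores a link as 1 minus the largest contiguous free block over all free slots, and combines the used path with neighboring links under an assumed weight; the actual SDFM formula is defined in the paper and will differ.

```python
def fragmentation(slots):
    """slots: list of booleans, True = occupied spectrum slot.
    Returns 1 - largest free block / total free slots (0 if nothing is free)."""
    runs, run = [], 0
    for used in slots:
        if used:
            if run:
                runs.append(run)
            run = 0
        else:
            run += 1
    if run:
        runs.append(run)
    total_free = sum(runs)
    return 0.0 if total_free == 0 else 1.0 - max(runs) / total_free

def sdfm_like(path_links, neighbor_links, neighbor_weight=0.5):
    """Stand-in for the service-driven metric: path fragmentation plus a
    discounted term for neighboring links (the weight is illustrative)."""
    path_frag = sum(fragmentation(l) for l in path_links)
    nbr_frag = sum(fragmentation(l) for l in neighbor_links)
    return path_frag + neighbor_weight * nbr_frag

link = [True, False, False, True, False, True, False, False, False, True]
print(fragmentation(link))  # free runs of 2, 1, 3 slots -> 1 - 3/6 = 0.5
```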

Journal ArticleDOI
TL;DR: TSAGen as discussed by the authors is a time series generation tool that can generate KPI data with anomalies and controllable characteristics for KPI anomaly detection, which can be used for comprehensive evaluation of anomaly detection algorithms with diverse user-defined what-if scenarios.
Abstract: A key performance indicator (KPI) consists of critical time series data that reflect the runtime states of network systems (e.g., response time and available bandwidth). Despite the importance of KPI, datasets for KPI anomaly detection available to the public are very limited, due to privacy concerns and the high overhead in manually labelling the data. The insufficiency of public KPI data poses a great barrier for network researchers and practitioners to evaluate and test what-if scenarios in the development of artificial intelligence for IT operations (AIOps) and anomaly detection algorithms. To tackle the difficulty, we develop a univariate time series generation tool called TSAGen, which can generate KPI data with anomalies and controllable characteristics for KPI anomaly detection. Experiment results show that the data generated by TSAGen can be used for comprehensive evaluation of anomaly detection algorithms with diverse user-defined what-if scenarios.
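
In the same spirit as TSAGen (though far less configurable), a KPI with trend, seasonality, noise, and labeled point anomalies can be generated as follows; all components and magnitudes here are invented for illustration:

```python
import numpy as np

def generate_kpi(n=2000, period=200, anomaly_rate=0.01, seed=0):
    """Generate a synthetic KPI: trend + seasonality + noise, with point
    anomalies injected at random positions and a ground-truth label array."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    series = (0.001 * t                              # slow trend
              + np.sin(2 * np.pi * t / period)       # daily-like seasonality
              + rng.normal(0, 0.1, n))               # measurement noise
    labels = np.zeros(n, dtype=int)
    idx = rng.choice(n, size=int(n * anomaly_rate), replace=False)
    series[idx] += rng.choice([-1, 1], size=len(idx)) * rng.uniform(1.5, 3.0, len(idx))
    labels[idx] = 1
    return series, labels  # labels enable detector evaluation

kpi, y = generate_kpi()
print(f"{y.sum()} injected anomalies in {len(kpi)} points")
```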

Journal ArticleDOI
TL;DR: In this article, a machine learning based comprehensive security solution for network intrusion detection using an ensemble supervised ML framework and ensemble feature selection methods is presented, which can identify 99.3% of intrusions successfully with as low as 0.5% false alarms.
Abstract: Proper security solutions in the cyber world are crucial for enforcing network security by providing real-time network protection against network vulnerabilities and data exploitation. An effective intrusion detection strategy is capable of taking a holistic approach to protecting critical systems against unauthorized access or attack. In this paper, we describe a machine learning (ML) based comprehensive security solution for network intrusion detection using an ensemble supervised ML framework and ensemble feature selection methods. In addition, we provide a comparative analysis of several ML models and feature selection methods. The goal of this research is to design a generic detection mechanism and achieve higher accuracy with minimal false positive rates (FPR). The NSL-KDD, UNSW-NB15, and CICIDS2017 datasets are used in the experiments, and results show that our detection model can identify 99.3% of intrusions successfully with as low as 0.5% false alarms, which represents better performance compared to existing solutions.
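
A compact approximation of such a pipeline, with scikit-learn stand-ins for the ensemble learner and feature selection (synthetic data and common estimators replace the paper's datasets and its specific ensembles):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for NSL-KDD-style flow features.
X, y = make_classification(n_samples=3000, n_features=40, n_informative=12, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Feature selection feeding an ensemble of supervised learners (soft voting).
model = make_pipeline(
    SelectKBest(mutual_info_classif, k=15),
    VotingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                    ("lr", LogisticRegression(max_iter=1000)),
                    ("dt", DecisionTreeClassifier(random_state=0))],
        voting="soft"),
)
model.fit(Xtr, ytr)
print(f"holdout accuracy: {model.score(Xte, yte):.3f}")
```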

Journal ArticleDOI
TL;DR: This paper first formulates this problem as a mathematical model that maximizes profits for the ISP, and proposes a novel Deterministic SFC Deployment algorithm and an SFC Adjustment algorithm to efficiently solve the SFC lifetime management problem.
Abstract: Deterministic Networking (DetNet) has recently attracted much attention. It aims at providing deterministic bounded latency and low latency variation for time-sensitive applications (e.g., industrial automation). To improve the quality of service (QoS) guarantee and make network management efficient, it is desirable for an Internet Service Provider (ISP) to obtain an optimal service function chain (SFC) provisioning strategy while providing deterministic service performance for time-sensitive applications. In this paper, we study the deterministic SFC lifetime management problem in a beyond-5G edge fabric, with the objective of maximizing overall profits and ensuring the deterministic latency and jitter of SFC requests. We first formulate this problem as a mathematical model maximizing profits for the ISP. Then, a novel Deterministic SFC Deployment algorithm (Det-SFCD) and an SFC Adjustment algorithm (Det-SFCA) for traffic load variation are proposed to efficiently solve the SFC lifetime management problem. Extensive simulation results show that our proposed algorithms achieve better performance in terms of SFC request acceptance rates, overall profits and latency variation compared with the benchmark algorithm.

Journal ArticleDOI
Liyan Yang, Yubo Song, Shan Gao, Aiqun Hu, Bin Xiao 
TL;DR: Griffin, a NIDS that uses unsupervised machine learning to detect both known and zero-day intrusion attacks in real time with high accuracy, is proposed; it also utilizes a differential privacy framework when training its autoencoders to protect the privacy of datasets, a concern inherent in machine learning approaches.
Abstract: Many efforts have been devoted to the development of efficient Network Intrusion Detection Systems (NIDS) using machine learning approaches in Software-Defined Networks (SDN). Unfortunately, existing solutions fail to detect real-time and zero-day attacks due to their limited throughput and reliance on prior-knowledge-based detection. To this end, we propose Griffin, a NIDS that uses unsupervised machine learning to detect both known and zero-day intrusion attacks in real time with high accuracy. Specifically, Griffin uses an efficient feature extraction framework to capture the sequential features of traffic packets. Then, it utilizes cluster analysis to reduce the feature scale. Moreover, an ensemble autoencoder is built automatically to further extract features with low complexity and high precision to train the model. We evaluate the accuracy, robustness, and complexity of the system using open datasets. The results show that Griffin's complexity is about 40% lower, and its accuracy is at most 19% higher, than existing NIDS. Additionally, even under evasion, Griffin shows at most a 9% decrease in AUC, which is good performance compared with other solutions. Furthermore, this paper also utilizes a differential privacy framework when training the autoencoders to protect the privacy of datasets, a concern inherent in machine learning approaches.

Journal ArticleDOI
TL;DR: In this article , a customizable and communication-efficient federated anomaly detection scheme (hereafter referred to as FedLog), designed to facilitate the identification of abnormal log patterns in large-scale IoT systems, is presented.
Abstract: Runtime log-based anomaly detection is one of several key building blocks in ensuring system security, as well as post-incident forensic investigations. However, existing log-based anomaly detection approaches that are implemented on large-scale Internet of Things (IoT) systems generally upload local data from edge devices to a centralized (cloud) server for processing and analysis. Such a workflow incurs significant communication and computation overheads, with potential privacy implications. Hence, in this paper, we propose a customizable and communication-efficient federated anomaly detection scheme (hereafter referred to as FedLog), designed to facilitate the identification of abnormal log patterns in large-scale IoT systems. Specifically, we first craft a Temporal Convolutional Network-Attention Mechanism-based Convolutional Neural Network (TCN-ACNN) model, to effectively extract fine-grained features from system logs. Second, we develop a new federated learning framework to support IoT devices in establishing a comprehensive anomaly detection model in a collaborative and privacy-preserving manner. Third, a lottery ticket hypothesis based masking strategy is designed to achieve customizable and communication-efficient federated learning in handling non-Independent and Identically Distributed (non-IID) log datasets. We then evaluate the performance of our proposed scheme with those of DeepLog (published in CCS, 2017) and Loganomaly (published in IJCAI, 2019) in both centralized learning and federated learning settings, using two publicly available and widely used real-world datasets (i.e., HDFS and BGL). The findings demonstrate the utility of the proposed FedLog scheme, in terms of log-based anomaly detection.
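
A heavily simplified view of the masking-plus-aggregation idea: keep each client's largest-magnitude weights and average each parameter only over the clients that transmitted it. Mask ratios, shapes, and the aggregation rule below are illustrative, not FedLog's actual procedure.

```python
import numpy as np

def lottery_mask(weights, keep_ratio=0.3):
    """Keep only the largest-magnitude weights (the 'winning ticket');
    clients then exchange just this sparse subset."""
    threshold = np.quantile(np.abs(weights).ravel(), 1.0 - keep_ratio)
    return (np.abs(weights) >= threshold).astype(float)

def fed_avg_masked(client_weights, masks):
    """Aggregate masked client updates: average each parameter over the
    clients that actually transmitted it."""
    num = sum(w * m for w, m in zip(client_weights, masks))
    den = sum(masks)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

rng = np.random.default_rng(1)
clients = [rng.normal(size=(4, 4)) for _ in range(3)]  # toy layer weights
masks = [lottery_mask(w) for w in clients]
print(fed_avg_masked(clients, masks))
```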

Journal ArticleDOI
TL;DR: A service-degradation algorithm aware of application characteristics in Elastic Optical Networks (EONs) is proposed; it considers a proportional Quality-of-Service (QoS) model and cross-layer information to decide which lightpath to degrade, aiming to reduce the impact of resource unavailability on delay- and bandwidth-sensitive applications.
Abstract: Optical networks can support service degradation by providing bandwidth lower than that required, to adapt network provisioning when optical resources are insufficient. This paper proposes a service-degradation algorithm that is aware of application characteristics in Elastic Optical Networks (EONs). The algorithm considers a proportional Quality-of-Service (QoS) model and cross-layer information to decide which lightpath to degrade, and it aims to reduce the impact of resource unavailability on delay- and bandwidth-sensitive applications. Results show that the proposed strategy can decrease blocking probability by up to 93% and reduce the number of applications penalized by service degradation by 100% compared to other approaches unaware of application characteristics.

Journal ArticleDOI
TL;DR: This paper proposes a decentralized and scalable networking strategy based on a specially designed private blockchain that can support the privacy requirements of dynamic charging coordination, authentication, and billing and relies on group signature and distributed random number generators to support the desirable features.
Abstract: Dynamic wireless charging of electric vehicles (EVs) enables the exchange of power between a mobile EV and the electricity grid via a set of charging pads (CPs) deployed along the road. Accordingly, dynamic charging coordination can be introduced for a group of mobile EVs to specify where each EV can charge (i.e., from which CPs). This coordination mechanism maximizes the satisfied charging requests given the limited available energy supply. Upon specifying the optimal set of pads for a given EV, a fast authentication mechanism is required between the EV and the CPs to start the charging process. However, both the coordination and authentication mechanisms require exchanging private information, e.g., EV identities and locations. Hence, there is a need for a strategy that enables privacy-preservation in dynamic charging via supporting: (i) user anonymity and (ii) data unlinkability. In this paper, we propose a decentralized and scalable networking strategy based on a specially designed private blockchain that can support the privacy requirements of dynamic charging coordination, authentication, and billing. The proposed networking strategy relies on group signature and distributed random number generators to support the desirable features. Simulation results demonstrate the efficiency and low complexity of the proposed blockchain-based networking strategy.

Journal ArticleDOI
TL;DR: In this article , the authors proposed a multi-layered framework by combining both symmetric and asymmetric key cryptographic techniques to ensure high availability, integrity, confidentiality, authentication and scalability.
Abstract: Supervisory Control and Data Acquisition (SCADA) networks play a vital role in industrial control systems. Industrial organizations perform operations remotely through SCADA systems to accelerate their processes. However, this enhancement in network capabilities comes at the cost of exposing the systems to cyber-attacks. Consequently, effective solutions are required to secure industrial infrastructure, as cyber-attacks on SCADA systems can have severe financial and/or safety implications. Moreover, SCADA field devices are equipped with microcontrollers for processing information and have limited computational power and resources. This makes the deployment of sophisticated security features challenging. As a result, effective lightweight cryptography solutions are needed to strengthen the security of industrial plants against cyber threats. In this paper, we propose a multi-layered framework combining both symmetric and asymmetric key cryptographic techniques to ensure high availability, integrity, confidentiality, authentication and scalability. Further, an efficient session key management mechanism is proposed by merging random number generation with a hashed message authentication code. Moreover, for each session, we introduce three symmetric key cryptography techniques based on the concept of the Vernam cipher and a pre-shared session key, namely a random prime number generator, a prime counter, and hash chaining. The proposed scheme satisfies the SCADA requirement of a real-time request-response mechanism by supporting broadcast, multicast, and point-to-point communication.
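
The hash-chaining variant is the easiest of the three to sketch: derive one pad per message by repeatedly hashing the pre-shared session key, then apply the Vernam (XOR) cipher. Key sizes and the message format below are assumptions.

```python
import hashlib
import os

def hash_chain_keys(session_key: bytes, n_messages: int):
    """Derive one pad per message by hash-chaining the pre-shared session
    key: k1 = H(k0), k2 = H(k1), ... (the paper's prime-based generators
    are alternative per-session schemes, not shown here)."""
    keys, k = [], session_key
    for _ in range(n_messages):
        k = hashlib.sha256(k).digest()
        keys.append(k)
    return keys

def vernam(data: bytes, pad: bytes) -> bytes:
    """Vernam cipher: XOR the message with a pad of at least equal length.
    Encryption and decryption are the same operation."""
    assert len(pad) >= len(data)
    return bytes(b ^ p for b, p in zip(data, pad))

session_key = os.urandom(32)               # pre-shared per session
pads = hash_chain_keys(session_key, 2)
msg = b"OPEN VALVE 7: 42%"                 # a short SCADA command fits one pad
ct = vernam(msg, pads[0])
assert vernam(ct, pads[0]) == msg          # field device recovers the command
print(ct.hex())
```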