
Showing papers in "IEEE Transactions on Network and Service Management in 2020"


Journal ArticleDOI
TL;DR: In this paper, the authors presented a lightweight deep learning DDoS detection system called Lucid, which exploits the properties of Convolutional Neural Networks (CNNs) to classify traffic flows as either malicious or benign.
Abstract: Distributed Denial of Service (DDoS) attacks are one of the most harmful threats in today's Internet, disrupting the availability of essential services. The challenge of DDoS detection lies in the variety of attack approaches coupled with the volume of live traffic to be analysed. In this paper, we present a practical, lightweight deep learning DDoS detection system called Lucid, which exploits the properties of Convolutional Neural Networks (CNNs) to classify traffic flows as either malicious or benign. We make four main contributions: (1) an innovative application of a CNN to detect DDoS traffic with low processing overhead, (2) a dataset-agnostic preprocessing mechanism to produce traffic observations for online attack detection, (3) an activation analysis to explain Lucid's DDoS classification, and (4) an empirical validation of the solution on a resource-constrained hardware platform. Using the latest datasets, Lucid matches state-of-the-art detection accuracy whilst reducing processing time by 40x compared with the state of the art. Our evaluation results demonstrate that the proposed approach is suitable for effective DDoS detection in resource-constrained operational environments.
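As the abstract describes, Lucid represents each traffic flow as a small matrix of per-packet features and classifies it with a CNN. The following is a minimal sketch of such a flow classifier in PyTorch; the input shape, filter size and pooling are illustrative assumptions, not Lucid's published architecture.

```python
# Minimal sketch of a CNN that classifies a traffic flow, represented as a
# (packets x features) matrix, as malicious or benign. Shapes and hyper-
# parameters are illustrative assumptions, not Lucid's published settings.
import torch
import torch.nn as nn

class FlowCNN(nn.Module):
    def __init__(self, max_packets=10, n_features=11, n_filters=64):
        super().__init__()
        # One convolution that slides over the packet dimension,
        # spanning all per-packet features at once.
        self.conv = nn.Conv2d(1, n_filters, kernel_size=(3, n_features))
        self.pool = nn.AdaptiveMaxPool2d((1, 1))   # global max pooling
        self.fc = nn.Linear(n_filters, 1)          # single logit: malicious?

    def forward(self, x):                          # x: (batch, 1, packets, features)
        h = torch.relu(self.conv(x))
        h = self.pool(h).flatten(1)
        return torch.sigmoid(self.fc(h))           # probability of "malicious"

# Usage: one batch of 32 flows, each padded/truncated to 10 packets.
model = FlowCNN()
flows = torch.randn(32, 1, 10, 11)
scores = model(flows)                              # shape (32, 1), values in [0, 1]
```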

181 citations


Journal ArticleDOI
TL;DR: The features of quantum walk are utilized to construct a new S-box method which plays a significant role in block cipher techniques for 5G-IoT technologies and a new robust video encryption mechanism is proposed.
Abstract: Fifth generation (5G) networks are the base communication technology for connecting objects in the Internet of Things (IoT) environment. 5G is being developed to provide extremely large capacity, robust integrity, high bandwidth, and low latency. The development of new techniques for 5G-IoT will surely bring enormous new security and privacy challenges, so secure techniques for data transmission will be needed as the basis of 5G-IoT technology to address them. Various traditional security mechanisms are provided for 5G-IoT technologies, and most of them are built on mathematical foundations. With the growth of quantum technologies, traditional cryptographic techniques may be compromised because their construction relies on mathematical computation. Quantum walks (QWs) are a universal quantum computational model with inherent cryptographic features that can be utilized to build efficient cryptographic mechanisms. In this paper, we use the features of quantum walks to construct a new S-box method, which plays a significant role in block cipher techniques for 5G-IoT technologies. As an application of the presented S-box mechanism and controlled alternate quantum walks (CAQWs) for 5G-IoT technologies, a new robust video encryption mechanism is proposed. To fulfill the encryption needs of varied files in 5G-IoT, we further utilize the features of quantum walks to propose a novel encryption strategy for the secure transmission of sensitive files in the 5G-IoT paradigm. The analyses and results of the proposed cryptosystems show that they have better security properties and efficacy in terms of cryptographic performance.
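The central object in the abstract is an S-box, i.e., a key-dependent permutation of the byte values 0-255 used for substitution in a block cipher. The sketch below builds such a permutation from a keyed pseudo-random sequence as a stand-in for the probability distribution that the paper derives from controlled alternate quantum walks; the key-to-seed mapping is purely an illustrative assumption.

```python
# Sketch: construct a byte-substitution S-box (a permutation of 0..255) from a
# key-dependent pseudo-random sequence. Here a seeded PRNG stands in for the
# probability distribution that the paper derives from controlled alternate
# quantum walks (CAQWs); the seeding scheme is purely illustrative.
import hashlib
import random

def build_sbox(key: bytes) -> list:
    seed = int.from_bytes(hashlib.sha256(key).digest(), "big")
    rng = random.Random(seed)
    sbox = list(range(256))
    rng.shuffle(sbox)              # key-dependent permutation of byte values
    return sbox

def substitute(data: bytes, sbox: list) -> bytes:
    return bytes(sbox[b] for b in data)

def invert(sbox: list) -> list:
    inv = [0] * 256
    for i, v in enumerate(sbox):
        inv[v] = i
    return inv

sbox = build_sbox(b"session-key")
ct = substitute(b"sensitive 5G-IoT payload", sbox)
pt = substitute(ct, invert(sbox))
assert pt == b"sensitive 5G-IoT payload"
```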

147 citations


Journal ArticleDOI
TL;DR: IoT-Keeper is a lightweight system which secures the communication of IoT and uses the proposed anomaly detection technique to perform traffic analysis at edge gateways, and can detect and mitigate various network attacks—without requiring explicit attack signatures or sophisticated hardware.
Abstract: IoT devices are notoriously vulnerable even to trivial attacks and can be easily compromised. In addition, the resource constraints and heterogeneity of IoT devices make it impractical to secure IoT installations using traditional endpoint and network security solutions. To address this problem, we present IoT-Keeper, a lightweight system which secures IoT communication. IoT-Keeper uses our proposed anomaly detection technique to perform traffic analysis at edge gateways. It uses a combination of fuzzy C-means clustering and a fuzzy interpolation scheme to analyze network traffic and detect malicious network activity. Once malicious activity is detected, IoT-Keeper automatically enforces network access restrictions against the IoT device generating this activity, and prevents it from attacking other devices or services. We have evaluated IoT-Keeper using a comprehensive dataset, collected from a real-world testbed, containing popular IoT devices. Using this dataset, our proposed technique achieved high accuracy (≈0.98) and a low false positive rate (≈0.02) for detecting malicious network activity. Our evaluation also shows that IoT-Keeper has a low resource footprint, and it can detect and mitigate various network attacks—without requiring explicit attack signatures or sophisticated hardware.
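IoT-Keeper's detection combines fuzzy C-means clustering with a fuzzy interpolation scheme. The snippet below is a from-scratch sketch of the fuzzy C-means step only, applied to flow-feature vectors; the fuzzifier m, cluster count and feature set are illustrative assumptions.

```python
# Sketch of fuzzy C-means clustering on flow-feature vectors, the clustering
# step used in IoT-Keeper's anomaly detection. Fuzzifier m, cluster count and
# feature set are illustrative assumptions.
import numpy as np

def fuzzy_cmeans(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix U (rows sum to 1).
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]        # weighted centroids
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)             # normalise memberships
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy usage: two obvious groups of 2-D "flow features".
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, U = fuzzy_cmeans(X)
labels = U.argmax(axis=1)   # crisp assignment; low max-membership could flag anomalies
```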

96 citations


Journal ArticleDOI
TL;DR: In this paper, a large-scale study based on data mined from Twitter is presented, where extensive analysis has been performed on approximately one million COVID-19 related tweets collected over a period of two months.
Abstract: News creation and consumption have been changing since the advent of social media. An estimated 2.95 billion people worldwide used social media in 2019. The spread of the Coronavirus COVID-19 resulted in a tsunami of social media activity. Most platforms were used to transmit relevant news, guidelines and precautions to people. According to the WHO, uncontrolled conspiracy theories and propaganda are spreading faster than the COVID-19 pandemic itself, creating an infodemic and thus causing psychological panic, misleading medical advice, and economic disruption. Accordingly, discussions have been initiated with the objective of moderating all COVID-19 communications, except those initiated from trusted sources such as the WHO and authorized governmental entities. This article presents a large-scale study based on data mined from Twitter. Extensive analysis has been performed on approximately one million COVID-19 related tweets collected over a period of two months. Furthermore, the profiles of 288,000 users were analyzed, including unique users' profiles, meta-data and tweet context. The study drew various interesting conclusions, including the critical impact, in terms of reach, of (1) the exploitation of the COVID-19 crisis to redirect readers to irrelevant topics and (2) the widespread circulation of unauthentic medical precautions and information. Further data analysis revealed the importance of using social networks in a global pandemic crisis by relying on credible users with a variety of occupations, content developers and influencers in specific fields. In this context, several insights and findings have been provided while elaborating computing and non-computing implications and research directions for potential solutions and social network management strategies during crisis periods.

93 citations


Journal ArticleDOI
TL;DR: An efficient and secure multi-user, multi-task computation offloading model for mobile-edge computing with guaranteed latency, energy, and security performance, which also scales well to large-scale IoT networks.
Abstract: Mobile edge computing (MEC) is a new paradigm to alleviate resource limitations of mobile IoT networks through computation offloading with low latency. This article presents an efficient and secure multi-user, multi-task computation offloading model with guaranteed performance in latency, energy, and security for mobile-edge computing. It not only investigates the offloading strategy but also considers resource allocation, compression and security issues. Firstly, to guarantee efficient utilization of the shared resource in multi-user scenarios, radio and computation resources are jointly addressed. In addition, JPEG and MPEG4 compression algorithms are used to reduce the transfer overhead. To fulfill security requirements, a security layer is introduced to protect the transmitted data from cyber-attacks. Furthermore, an integrated model of resource allocation, compression, and security is formulated as an integer nonlinear problem with the objective of minimizing the weighted sum of energy under a latency constraint. As this problem is NP-hard, linearization and relaxation approaches are applied to transform it into a convex one. Finally, an efficient offloading algorithm is designed with detailed processes to make the computation offloading decision for computation tasks of mobile users. Simulation results show that our model not only saves about 46% of system overhead consumption in comparison with local execution but also scales well to large-scale IoT networks.
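The abstract formulates offloading as minimising a weighted sum of energy under a latency constraint. The toy sketch below illustrates only the per-task decision logic (local execution vs. offloading, optionally with compression) using made-up energy and latency models; it is not the paper's integer nonlinear formulation or its relaxation.

```python
# Toy per-task offloading decision: execute locally or offload (optionally with
# compression), choosing the lowest-energy option that meets a latency budget.
# All models and numbers are illustrative, not the paper's formulation.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required
    data_bits: float     # input data size in bits
    deadline_s: float    # latency budget

def local_cost(t, f_local=1e9, k=1e-27):
    latency = t.cycles / f_local
    energy = k * t.cycles * f_local ** 2          # dynamic CPU energy model
    return energy, latency

def offload_cost(t, rate_bps=20e6, p_tx=0.5, f_edge=10e9, ratio=1.0):
    tx_time = (t.data_bits * ratio) / rate_bps    # ratio < 1 models compression
    latency = tx_time + t.cycles / f_edge
    energy = p_tx * tx_time                       # device only pays for transmission
    return energy, latency

def decide(t):
    options = {
        "local": local_cost(t),
        "offload": offload_cost(t),
        "offload+compress": offload_cost(t, ratio=0.4),
    }
    feasible = {k: v for k, v in options.items() if v[1] <= t.deadline_s}
    if not feasible:
        return "reject", None
    return min(feasible.items(), key=lambda kv: kv[1][0])

task = Task(cycles=5e8, data_bits=8e6, deadline_s=0.5)
print(decide(task))   # picks the feasible option with the lowest device energy
```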

90 citations


Journal ArticleDOI
TL;DR: A reinforcement learning (RL)-based offloading scheme is proposed that enables MUs to make optimal offloading decisions based on blockchain transaction states, wireless channel qualities between MUs and the MEC server, and users' hash power states, together with a deep RL algorithm based on a deep Q-network that can efficiently handle a large state space without any prior knowledge of the system dynamics.
Abstract: Blockchain technology with its secure, transparent and decentralized nature has been recently employed in many mobile applications. However, the process of executing extensive tasks such as computation-intensive data applications and blockchain mining requires high computational and storage capability from mobile devices, which would hinder blockchain applications in mobile systems. To meet this challenge, we propose a mobile edge computing (MEC) based blockchain network where multiple mobile users (MUs) act as miners to offload their data processing tasks and mining tasks to a nearby MEC server via wireless channels. Specifically, we formulate task offloading, user privacy preservation and mining profit as a joint optimization problem which is modelled as a Markov decision process, where our objective is to minimize the long-term system offloading utility and maximize the privacy levels for all blockchain users. We first propose a reinforcement learning (RL)-based offloading scheme which enables MUs to make optimal offloading decisions based on blockchain transaction states, wireless channel qualities between MUs and the MEC server, and users' hash power states. To further improve the offloading performance for larger-scale blockchain scenarios, we then develop a deep RL algorithm using a deep Q-network, which can efficiently handle a large state space without any prior knowledge of the system dynamics. Experiment and simulation results show that the proposed RL-based offloading schemes significantly enhance user privacy, and reduce the energy consumption as well as computation latency with minimum offloading costs in comparison with the benchmark offloading schemes.

86 citations


Journal ArticleDOI
TL;DR: This article proposes HitAnomaly, a log-based anomaly detection model utilizing a hierarchical transformer structure to model both log template sequences and parameter values and assess the robustness of the proposed model on unstable log data.
Abstract: Enterprise systems often produce a large volume of logs to record runtime status and events. Anomaly detection from system logs is crucial for service management and system maintenance. Most existing log-based anomaly detection methods use log event indexes parsed from log data to detect anomalies. Those methods cannot handle unseen log templates and lead to inaccurate anomaly detection. Some recent studies focused on the semantics of log templates but ignored the information of parameter values . Therefore, their approaches failed to address the abnormal logs caused by parameter values. In this article, we propose HitAnomaly, a log-based anomaly detection model utilizing a hierarchical transformer structure to model both log template sequences and parameter values. We designed a log sequence encoder and a parameter value encoder to obtain their representations correspondingly. We then use an attention mechanism as our final classification model. In this way, HitAnomaly is able to capture the semantic information in both log template sequence and parameter values and handle various types of anomalies. We evaluated our proposed method on three log datasets. Our experimental results demonstrate that HitAnomaly has outperformed other existing log-based anomaly detection methods. We also assess the robustness of our proposed model on unstable log data.
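HitAnomaly encodes log template sequences and parameter values with hierarchical transformer encoders and classifies via attention. The snippet below sketches only a log-sequence encoder plus a linear classifier in PyTorch; the vocabulary size, dimensions and mean-pooling are illustrative assumptions rather than the published architecture, which additionally encodes parameter values.

```python
# Sketch of a transformer-based encoder over a sequence of log-template IDs,
# followed by a simple anomaly classifier. Dimensions and pooling are
# illustrative; HitAnomaly additionally encodes parameter values.
import torch
import torch.nn as nn

class LogSequenceClassifier(nn.Module):
    def __init__(self, n_templates=200, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(n_templates, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, 2)   # normal vs. anomalous

    def forward(self, template_ids):              # (batch, seq_len) of int IDs
        h = self.encoder(self.embed(template_ids))
        pooled = h.mean(dim=1)                    # mean-pool the event sequence
        return self.classifier(pooled)            # logits, shape (batch, 2)

model = LogSequenceClassifier()
batch = torch.randint(0, 200, (8, 50))            # 8 sessions, 50 log events each
logits = model(batch)
```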

81 citations


Journal ArticleDOI
TL;DR: A secure data query framework for cloud and fog computing that not only guarantees the reliability of required data but also effectively protects data against man-in-the-middle attack, single node attack and collusion attack of malicious users is proposed.
Abstract: Fog computing is mainly used to process the large amount of data produced by terminal devices. As fog nodes are the acquirers closest to the terminal devices, the processed data may be tampered with or illegally captured by malicious nodes while the data is transferred or aggregated. When applications require real-time processing with high security, the cloud service may sample data from the fog service to check the final results. In this paper, we propose a secure data query framework for cloud and fog computing. We use the cloud service to check the queried data when the fog network provides it to users. In the framework, the cloud server pre-designates some data aggregation topology trees to the fog network, and the fog network then acquires related data from fog nodes according to one of the pre-designated data aggregation trees. Additionally, some fog nodes are assigned as sampled nodes that can feed back related data to the cloud server. Based on the security requirements of fog computing, we analyze the security of our proposed framework. Our framework not only guarantees the reliability of required data but also effectively protects data against man-in-the-middle attacks, single node attacks and collusion attacks by malicious users. Also, the experiments show our framework is effective and efficient.

69 citations


Journal ArticleDOI
TL;DR: This approach overcomes current limitations by providing available fog devices with the ability to have services deployed on the fly, and leverages an intelligent container placement scheme that produces efficient volunteer selection and service distribution.
Abstract: With the increasing number of IoT devices, fog computing has emerged, providing processing resources at the edge for the tremendous amount of sensed data and IoT computation. The advantage of the fog is lost if it is not present near IoT devices. Fogs nowadays are pre-configured in specific locations with pre-defined services, which limits their availability and dynamic service updates. In this paper, we address this problem by leveraging containerization and micro-service technologies to build an on-demand fog framework with the help of volunteering devices. Our approach overcomes the current limitations by providing available fog devices with the ability to have services deployed on the fly. Volunteering devices form a resource capacity for building the fog computing infrastructure. Moreover, our framework leverages an intelligent container placement scheme that produces efficient volunteer selection and service distribution. An Evolutionary Memetic Algorithm (MA) is elaborated to solve our multi-objective container placement optimization problem. Real-life and simulated experiments demonstrate various improvements over existing approaches, reflected in the relevance and efficiency of (1) forming volunteering fog devices near users with maximum time availability and shortest distance, and (2) deploying services on the fly on selected fogs with improved QoS.

63 citations


Journal ArticleDOI
TL;DR: An intelligent network slice reconfiguration algorithm (INSRA) is developed based on the discrete BDQ network and the numerical results reveal that INSRA can minimize the long-term resource consumption and achieve high resource efficiency compared with several benchmark algorithms.
Abstract: It is widely acknowledged that network slicing can tackle the diverse usage scenarios and connectivity services that the 5G-and-beyond system needs to support. To guarantee performance isolation while maximizing network resource utilization under dynamic traffic load, network slices need to be reconfigured adaptively. However, it is commonly believed that the fine-grained resource reconfiguration problem is intractable due to the extremely high computational complexity caused by numerous variables. In this article, we investigate the reconfiguration within a core network slice with the aim of minimizing long-term resource consumption by exploiting Deep Reinforcement Learning (DRL). This problem is also intractable using a conventional Deep Q Network (DQN), as it has a multi-dimensional discrete action space which is difficult to explore efficiently. To address the curse of dimensionality, we propose to exploit the Branching Dueling Q-network (BDQ), which incorporates the action branching architecture into DQN to drastically decrease the number of estimated actions. Based on the discrete BDQ network, we develop an intelligent network slice reconfiguration algorithm (INSRA). Extensive simulation experiments are conducted to evaluate the performance of INSRA and the numerical results reveal that INSRA can minimize the long-term resource consumption and achieve high resource efficiency compared with several benchmark algorithms.
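The key component named in the abstract is the Branching Dueling Q-network: a shared state representation feeding one advantage branch per action dimension plus a common state-value stream, so the number of network outputs grows linearly rather than exponentially with the action dimensions. Below is a minimal PyTorch sketch of that head; the state size, branch count and per-branch action count are illustrative placeholders for a slice-reconfiguration setting.

```python
# Minimal Branching Dueling Q-network head: a shared trunk, a state-value
# stream, and one advantage branch per action dimension (e.g., per resource
# to reconfigure). Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class BDQ(nn.Module):
    def __init__(self, state_dim=20, n_branches=5, actions_per_branch=3, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.branches = nn.ModuleList(
            [nn.Linear(hidden, actions_per_branch) for _ in range(n_branches)])

    def forward(self, state):                       # state: (batch, state_dim)
        h = self.trunk(state)
        v = self.value(h)                           # (batch, 1)
        q_per_branch = []
        for branch in self.branches:
            adv = branch(h)                         # (batch, actions_per_branch)
            # Dueling aggregation per branch: Q = V + (A - mean(A)).
            q_per_branch.append(v + adv - adv.mean(dim=1, keepdim=True))
        return torch.stack(q_per_branch, dim=1)     # (batch, branches, actions)

net = BDQ()
q = net(torch.randn(4, 20))
greedy_action = q.argmax(dim=2)                     # one sub-action per branch
```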

63 citations


Journal ArticleDOI
TL;DR: Evaluation results show that the machine learning based detection system can learn from limited ground truth and detect new malicious insiders in unseen data with a high accuracy.
Abstract: Malicious insider attacks represent one of the most damaging threats to networked systems of companies and government agencies. There is a unique set of challenges that come with insider threat detection in terms of hugely unbalanced data, limited ground truth, as well as behaviour drifts and shifts. This work proposes and evaluates a machine learning based system for user-centered insider threat detection. Using machine learning, analysis of data is performed on multiple levels of granularity under realistic conditions for identifying not only malicious behaviours, but also malicious insiders. Detailed analysis of popular insider threat scenarios with different performance measures are presented to facilitate the realistic estimation of system performance. Evaluation results show that the machine learning based detection system can learn from limited ground truth and detect new malicious insiders in unseen data with a high accuracy. Specifically, up to 85% of malicious insiders are detected at only 0.78% false positive rate. The system is also able to quickly detect the malicious behaviours, as low as 14 minutes after the first malicious action. Comprehensive result reporting allows the system to provide valuable insights to analysts in investigating insider threat cases.

Journal ArticleDOI
TL;DR: A method is proposed to quantify the overhead that controller reconfiguration and migration impose on the network while investigating the use-case scenario of an SDN-enabled satellite space segment, and the proposed architecture is compared with alternative solutions in the state of the art on these performance metrics.
Abstract: In the context of the 5G ecosystem, the integration between the terrestrial and satellite networks is envisioned as a potential approach to further enhance the network capabilities. In light of this integration, the satellite community is revisiting its role in the next generation 5G networks. Emerging technologies such as Software-Defined Networking (SDN) which rely on programmable and reconfigurable concepts, are foreseen to play a major role in this regard. Therefore, an interesting research topic is the introduction of management architecture solutions for future satellite networks driven by means of SDN. This anticipates the separation of the data layer from the control layer of the traditional satellite networks, where the control logic is placed on programmable SDN controllers within traditional satellite devices. While a centralized control layer promises delay reductions, it introduces additional overheads due to reconfiguration and migration costs. In this paper, we propose a method to quantify the overhead imposed on the network by the aforementioned parameters while investigating the use-case scenario of an SDN-enabled satellite space segment. We make use of an optimal controller placement and satellite-to-controller assignment which minimizes the average flow setup time with respect to varying traffic demands. Furthermore, we provide insights on the network performance with respect to the migration and reconfiguration cost for our proposed SDN-enabled architecture. Finally, we compare our proposed space segment SDN-enabled architecture with alternative solutions in the state-of-the-art given the aforementioned performance metrics.

Journal ArticleDOI
TL;DR: This work presents Multi-Agent Model-Free Reinforcement Learning schemes, namely Q-Learning (Q-L) and State-Action-Reward-State-Action (SARSA), for resource allocation, which mitigate interference and eliminate the need for a network model.
Abstract: The most prominent challenge for the wireless community is meeting the demand for radio resources. Cognitive Radio (CR) is envisioned as a potential solution that utilizes its cognition ability to enhance the utilization of available radio resources and improve energy efficiency. However, due to the co-existence of Primary Base Stations (PU-BSs) and Cognitive Base Stations (CR-BSs) in CR networks, aggregated interference occurs, which poses a critical challenge for resource allocation. Moreover, in practical scenarios, it is difficult to form a correct network model beforehand due to complex network dynamics. Therefore, this work presents Multi-Agent Model-Free Reinforcement Learning schemes, namely Q-Learning (Q-L) and State-Action-Reward-State-Action (SARSA), for resource allocation, which mitigate interference and eliminate the need for a network model. The proposed schemes are implemented in a decentralized cooperative manner, with CRs acting as multiple agents that form a stochastic dynamic team to obtain an optimal energy-efficient resource allocation strategy. Numerical results reveal that: 1) the proposed cooperative scheme 1 (Cooperative Q-L scheme) expedites convergence; 2) the proposed cooperative scheme 2 (Cooperative SARSA scheme) achieves a significant improvement in network capacity. Both proposed cooperative schemes demonstrate their effectiveness by providing significant improvements in energy efficiency while maintaining users' QoS.
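The two schemes named in the abstract differ only in how the temporal-difference target is formed: Q-learning bootstraps on the greedy next action (off-policy), whereas SARSA uses the action actually taken next (on-policy). The tabular updates below make that difference explicit; the state/action spaces and reward are placeholders, not the paper's CR resource-allocation model.

```python
# Tabular Q-learning vs. SARSA updates. Only the bootstrap target differs:
# Q-learning uses max_a Q(s', a); SARSA uses Q(s', a') for the action actually
# chosen next. States, actions and rewards here are placeholders.
import numpy as np

n_states, n_actions = 10, 4
alpha, gamma = 0.1, 0.9
Q_ql = np.zeros((n_states, n_actions))
Q_sa = np.zeros((n_states, n_actions))

def q_learning_update(s, a, r, s_next):
    target = r + gamma * Q_ql[s_next].max()          # greedy bootstrap (off-policy)
    Q_ql[s, a] += alpha * (target - Q_ql[s, a])

def sarsa_update(s, a, r, s_next, a_next):
    target = r + gamma * Q_sa[s_next, a_next]        # on-policy bootstrap
    Q_sa[s, a] += alpha * (target - Q_sa[s, a])

# One illustrative transition: in state 3, action 1 yields reward 1.0 and
# leads to state 7, where the agent then picks action 2.
q_learning_update(3, 1, 1.0, 7)
sarsa_update(3, 1, 1.0, 7, 2)
```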

Journal ArticleDOI
TL;DR: The experimental results demonstrate that Sel-INT can not only adjust the sampling rate of INT in runtime but also program the corresponding data types dynamically, and they confirm that the proposal can ensure proper accuracy and timeliness for network monitoring while greatly reducing the overheads of INT.
Abstract: It is known that by leveraging a programmable data plane, in-band network telemetry (INT) can be realized to provide a powerful and promising method to collect realtime network statistics for monitoring and troubleshooting. However, existing INT implementations still exhibit a few drawbacks such as the lack of runtime programmability and relatively high overheads due to per-packet operation. In this work, we propose and design a runtime-programmable selective INT system, namely, Sel-INT, to resolve these issues. Specifically, we first design a runtime-programmable selective INT scheme based on protocol oblivious forwarding (POF), and then prototype our design by extending the well-known Open vSwitch (OVS) platform to obtain a software switch that supports Sel-INT and implementing a Data Analyzer to parse, extract and analyze the INT data. Our implementation of Sel-INT is verified and evaluated in a real network testbed that consists of a few stand-alone software switches. The experimental results demonstrate that Sel-INT can not only adjust the sampling rate of INT at runtime but also program the corresponding data types dynamically, and they also confirm that our proposal can ensure proper accuracy and timeliness for network monitoring while greatly reducing the overheads of INT.

Journal ArticleDOI
TL;DR: Experimental results show that a generic architecture of cloud-edge computing with the aim of providing both vertical and horizontal offloading between service nodes can significantly reduce total system costs by about 34%, compared to traditional designs which only support vertical offloading.
Abstract: A collaborative integration between cloud and edge computing is proposed to exploit the advantages of both technologies. However, most of the existing studies have only considered two-tier cloud-edge computing systems which merely support vertical offloading between local edge nodes and remote cloud servers. This paper thus proposes a generic architecture of cloud-edge computing with the aim of providing both vertical and horizontal offloading between service nodes. To investigate the effectiveness of the design for different operational scenarios, we formulate it as a workload and capacity optimization problem with the objective of minimizing the system computation and communication costs. Because such a mixed-integer nonlinear programming (MINLP) problem is NP-hard, we further develop an approximation algorithm which applies a branch-and-bound method to obtain optimal solutions iteratively. Experimental results show that such a cloud-edge computing architecture can significantly reduce total system costs by about 34%, compared to traditional designs which only support vertical offloading. Our results also indicate that, to accommodate the same number of input workloads, a heterogeneous service allocation scenario requires about 23% higher system costs than a homogeneous scenario.

Journal ArticleDOI
TL;DR: This work proposes the first framework that can protect botnet detectors from adversarial attacks through deep reinforcement learning mechanisms and paves the way to novel and more robust cybersecurity detectors based on machine learning applied to network traffic analytics.
Abstract: As cybersecurity detectors increasingly rely on machine learning mechanisms, attacks to these defenses escalate as well. Supervised classifiers are prone to adversarial evasion, and existing countermeasures suffer from many limitations. Most solutions degrade performance in the absence of adversarial perturbations; they are unable to face novel attack variants; they are applicable only to specific machine learning algorithms. We propose the first framework that can protect botnet detectors from adversarial attacks through deep reinforcement learning mechanisms. It automatically generates realistic attack samples that can evade detection, and it uses these samples to produce an augmented training set for producing hardened detectors. In such a way, we obtain more resilient detectors that can work even against unforeseen evasion attacks with the great merit of not penalizing their performance in the absence of specific attacks. We validate our proposal through an extensive experimental campaign that considers multiple machine learning algorithms and public datasets. The results highlight the improvements of the proposed solution over the state-of-the-art. Our method paves the way to novel and more robust cybersecurity detectors based on machine learning applied to network traffic analytics.

Journal ArticleDOI
TL;DR: A VNF-decomposition-based backup strategy is proposed together with a delay-aware hybrid multipath routing scheme for enhancing the reliability of NFV-enabled network services while jointly reducing delays these services experience.
Abstract: Network Function Virtualization (NFV) converts network functions executed by costly middleboxes into instances of Virtual Network Functions (VNFs) hosted by industry-standard Physical Machines (PMs). This has proven to be quite an efficient approach when it comes to enabling automated network operations and the elastic provisioning of resources to support heterogeneous services. Today’s revolutionary services impose a remarkably elevated reliability together with ultra-low latency requirements. Therefore, in addition to having highly reliable VNFs, these VNFs have to be optimally placed in such a way to rapidly route traffic among them with the least utilization of bandwidth. Hence, the proper selection of PMs to meet the above-mentioned reliability and delay requirements becomes a remarkably challenging problem. None of the existing publications addressing such a problem concurrently adopts VNF decomposition to enhance the flexibility of the VNFs’ placement and a hybrid routing scheme to achieve an optimal trade-off between the above-mentioned objectives. In this paper, a VNF-decomposition-based backup strategy is proposed together with a delay-aware hybrid multipath routing scheme for enhancing the reliability of NFV-enabled network services while jointly reducing delays these services experience. The problem is formulated as a Mixed Integer Linear Program (MILP) whose resolution yields an optimal VNF placement and traffic routing policy. Next, the delay-aware hybrid shortest path-based heuristic algorithm is proposed to work around the MILP’s complexity. Thorough numerical analysis and simulations are conducted to validate the proposed algorithm and evaluate its performance. Results show that the proposed algorithm outperforms its existing counterparts by 7.53% in terms of computing resource consumption.
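The placement and routing problem in the abstract is formulated as a Mixed Integer Linear Program. As a much-reduced illustration, the PuLP model below assigns VNFs to physical machines to minimise a placement cost subject to per-PM capacity; it omits VNF decomposition, backups and routing, and all data are made up.

```python
# Much-reduced MILP sketch: assign each VNF to exactly one PM, respect PM CPU
# capacity, minimise total placement cost. Data are made up; the paper's model
# additionally covers VNF decomposition, backups and multipath routing.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

vnfs = {"fw": 4, "nat": 2, "ids": 6}               # CPU demand per VNF
pms = {"pm1": 8, "pm2": 8}                         # CPU capacity per PM
cost = {(v, p): d for v, d in vnfs.items() for p in pms}  # toy cost = demand

x = LpVariable.dicts("place", (vnfs, pms), cat=LpBinary)

prob = LpProblem("vnf_placement", LpMinimize)
prob += lpSum(cost[v, p] * x[v][p] for v in vnfs for p in pms)

for v in vnfs:                                      # each VNF placed exactly once
    prob += lpSum(x[v][p] for p in pms) == 1
for p in pms:                                       # PM capacity constraint
    prob += lpSum(vnfs[v] * x[v][p] for v in vnfs) <= pms[p]

prob.solve()
placement = {v: p for v in vnfs for p in pms if value(x[v][p]) > 0.5}
print(placement)
```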

Journal ArticleDOI
TL;DR: A multi-agent reinforcement LEarning based Smart handover Scheme, named LESS, is proposed, with the purpose of minimizing handover cost while maintaining user QoS, and simulation results show that LESS can significantly improve network performance.
Abstract: Network slicing is identified as a fundamental architectural technology for future mobile networks since it can logically separate networks into multiple slices and provide tailored quality of service (QoS). However, the introduction of network slicing into radio access networks (RAN) can greatly increase user handover complexity in cellular networks. Specifically, both physical resource constraints on base stations (BSs) and logical connection constraints on network slices (NSs) should be considered when making a handover decision. Moreover, various service types call for an intelligent handover scheme to guarantee the diversified QoS requirements. As such, in this article, a multi-agent reinforcement LEarning based Smart handover Scheme, named LESS, is proposed, with the purpose of minimizing handover cost while maintaining user QoS. Due to the large action space introduced by multiple users and the data sparsity caused by user mobility, conventional reinforcement learning algorithms cannot be applied directly. To solve these difficulties, LESS exploits the unique characteristics of slicing in designing two algorithms: 1) LESS-DL, a distributed Q-learning algorithm to make handover decisions with reduced action space but without compromising handover performance; 2) LESS-QVU, a modified Q-value update algorithm which exploits slice traffic similarity to improve the accuracy of Q-value evaluation with limited data. Thus, LESS uses LESS-DL to choose the target BS and NS when a handover occurs, while Q-values are updated by using LESS-QVU. The convergence of LESS is theoretically proved in this article. Simulation results show that LESS can significantly improve network performance. In more detail, the number of handovers, handover cost and outage probability are reduced by around 50%, 65%, and 45%, respectively, when compared with traditional methods.

Journal ArticleDOI
TL;DR: A Continuous-Decision virtual network embedding scheme relying on Reinforcement Learning (CDRL) is proposed in this paper, which regards the node embedding of the same request as a time-series problem formulated by the classic seq2seq model.
Abstract: Network Virtualization (NV) techniques allow multiple virtual network requests to beneficially share resources on the same substrate network, such as node computational resources and link bandwidth. As the most prominent member of the NV technique family, virtual network embedding is capable of efficiently allocating the limited network resources to the users on the same substrate network. However, traditional heuristic virtual network embedding algorithms generally follow a static operating mechanism, which cannot adapt well to dynamic network structures and environments, resulting in inferior node ranking and embedding strategies. Some reinforcement learning aided embedding algorithms have been conceived to dynamically update the decision-making strategies, while the node embedding of the same request is discretized and its continuity is ignored. To address this problem, a Continuous-Decision virtual network embedding scheme relying on Reinforcement Learning (CDRL) is proposed in our paper, which regards the node embedding of the same request as a time-series problem formulated by the classic seq2seq model. Moreover, two traditional heuristic embedding algorithms as well as the classic reinforcement learning aided embedding algorithm are used for benchmarking our proposed CDRL algorithm. Finally, simulation results show that our proposed algorithm is superior to the other three algorithms in terms of long-term average revenue, revenue to cost and acceptance ratio.

Journal ArticleDOI
TL;DR: A privacy protection approach PBCN (Privacy Preserving Approach Based on Clustering and Noise) is proposed, composed of five algorithms including random disturbance based on clustering, graph reconstruction after disturbing the degree sequence, noise node generation, etc.
Abstract: Currently, the wealth of real social relations in social networks exposes users to the potential risk of privacy leakage. Consequently, data holders would like to disturb or anonymize their individual data before publishing them, for the purpose of privacy protection. Due to the high sensitivity and large volume of social network graph data, it is difficult for privacy protection schemes to allocate noise reasonably while keeping desirable data availability and execution efficiency. On the basis of the differential privacy model, combined with clustering and randomization algorithms, a privacy protection approach PBCN (Privacy Preserving Approach Based on Clustering and Noise) is proposed. This proposal is composed of five algorithms, including random disturbance based on clustering, graph reconstruction after disturbing the degree sequence, noise node generation, etc. Furthermore, a privacy measure algorithm based on adjacency degree is put forward in order to objectively evaluate the privacy-preserving strength of various schemes against graph structure and degree attacks. Simulation experiments are conducted to compare the performance of PBCN with Spctr Add/Del, Spctr Switch, DER and HPDP. The experimental results show that PBCN realizes more satisfactory data availability and execution efficiency. Finally, parameter utility analysis demonstrates that PBCN can achieve a “trade-off” between data availability and privacy protection level.
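PBCN perturbs the graph (degree sequence and noise nodes) under the differential privacy model. The snippet below sketches only the most basic building block, Laplace noise calibrated by sensitivity and epsilon added to a degree sequence; the parameter values and post-processing are illustrative assumptions and do not reproduce PBCN's clustering-based pipeline.

```python
# Building-block sketch: Laplace-mechanism perturbation of a graph's degree
# sequence, the kind of noise addition a differential-privacy model relies on.
# Epsilon and sensitivity values are illustrative assumptions.
import numpy as np

def perturb_degree_sequence(degrees, epsilon=1.0, sensitivity=1.0, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=len(degrees))
    noisy = np.asarray(degrees, dtype=float) + noise
    # Post-process: degrees must be non-negative integers.
    return np.clip(np.rint(noisy), 0, None).astype(int)

true_degrees = [5, 3, 3, 2, 1, 1]
print(perturb_degree_sequence(true_degrees, epsilon=0.5))  # smaller epsilon = more noise
```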

Journal ArticleDOI
TL;DR: This paper presents BlockP2P-EP, a novel trust-enhanced blockchain P2P topology which takes transmission rate and transmission reliability into consideration and can exhibit promising network performance in terms of transmission rate and transmission reliability compared to Bitcoin and Ethereum.
Abstract: Blockchain technology offers an intelligent amalgamation of distributed ledger, Peer-to-Peer (P2P), cryptography, and smart contracts to enable trustworthy applications without any third parties. Existing blockchain systems have successfully either resolved the scalability issue by advancing the distributed consensus protocols from the control plane, or complemented the security issue by updating the block structure and encryption algorithms from the data plane. Yet, we argue that the underlying P2P network plane remains an important but unaddressed barrier to accelerating overall blockchain system performance, which can be assessed in terms of how fast and reliable the network is. To improve blockchain network performance in enabling fast and reliable broadcast, we establish a trust-enhanced blockchain P2P topology which takes transmission rate and transmission reliability into consideration. Transmission rate reflects how fast the blockchain network disseminates transactions and blocks, and transmission reliability reveals whether the transmission rate changes drastically over unreliable network connections. This paper presents BlockP2P-EP, a novel trust-enhanced blockchain topology that accelerates transmission rate while retaining transmission reliability. BlockP2P-EP first operates geographical proximity sensing clustering, which leverages the K-Means algorithm to gather proximate peer nodes into clusters. This is followed by a hierarchical topological structure that ensures strong connectivity and a small diameter based on node attribute classification, from which the trust-enhanced network topology is established. On top of the trust-enhanced blockchain topology, BlockP2P-EP conducts a parallel spanning tree broadcast algorithm to enable fast data broadcast among nodes both intra- and inter-cluster. Finally, we adopt an effective node inactivation detection method to reduce network load. To verify the validity of the BlockP2P-EP protocol, we carefully design and implement a blockchain network simulator. Evaluation results show that BlockP2P-EP can exhibit promising network performance in terms of transmission rate and transmission reliability compared to Bitcoin and Ethereum.
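The first stage described in the abstract is geographical proximity sensing clustering with K-Means. The sketch below groups peers by latitude/longitude using scikit-learn's KMeans; the coordinates and cluster count are made up, and the subsequent hierarchical topology and spanning-tree broadcast are not shown.

```python
# Sketch of BlockP2P-EP's first stage: cluster peers by geographic proximity
# with K-Means. Coordinates and cluster count are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans

# (latitude, longitude) of peer nodes, e.g. obtained from IP geolocation.
peers = np.array([
    [40.7, -74.0], [40.8, -73.9], [41.0, -73.7],   # US east coast
    [51.5, -0.1],  [52.5, 13.4],  [48.9, 2.3],     # Europe
    [35.7, 139.7], [37.6, 126.9],                  # East Asia
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(peers)
for cluster_id in range(3):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    print(f"cluster {cluster_id}: peers {members.tolist()}")
# Each cluster would then be wired into the hierarchical, trust-enhanced
# topology and spanning-tree broadcast described in the paper.
```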

Journal ArticleDOI
TL;DR: A group behavior model that can not only effectively analyze user group behavior regarding rumor but also accurately reflect the competition and symbiotic relation between rumor and anti-rumor diffusion is proposed.
Abstract: Traditional rumor diffusion models primarily take the rumor itself and user behavior as their entry points. The complexity of user behavior, the multidimensionality of the communication space, the imbalance of the data samples, and the symbiosis and competition between rumor and anti-rumor are the main challenges for an in-depth study of rumor communication. Given these challenges, this study proposes a group behavior model for rumor and anti-rumor. First, considering the diversity and complexity of the rumor propagation feature space and the advantages of representation learning for feature extraction, we adopt representation learning methods suited to the content and structure of rumor and anti-rumor to reduce the feature dimension of the rumor-spreading data and to express it as a uniform, dense, full-featured representation. Second, this paper introduces evolutionary game theory, combined with user influence on rumor and anti-rumor, to reflect the conflict and symbiotic relationship between them; when expressing the structural characteristics of group communication relationships, we obtain a network structural feature expression of the degree of users' influence on rumor and anti-rumor. Finally, to capture the timeliness of rumor topic evolution, the whole model time-slices and discretizes the life cycle of the rumor to synthesize the full-featured representations of rumor and anti-rumor. The experiments show that the model can not only effectively analyze user group behavior regarding rumor but also accurately reflect the competition and symbiotic relation between rumor and anti-rumor diffusion.

Journal ArticleDOI
TL;DR: This paper proposes BotChase, a two-phased graph-based bot detection system that leverages both unsupervised and supervised ML and outperforms an end-to-end system that employs flow-based features and performs particularly well in an online setting.
Abstract: Bot detection using machine learning (ML), with network flow-level features, has been extensively studied in the literature. However, existing flow-based approaches typically incur a high computational overhead and do not completely capture the network communication patterns, which can expose additional aspects of malicious hosts. Recently, bot detection systems that leverage communication graph analysis using ML have gained attention to overcome these limitations. A graph-based approach is rather intuitive, as graphs are a true representation of network communications. In this paper, we propose BotChase, a two-phased graph-based bot detection system that leverages both unsupervised and supervised ML. The first phase prunes presumable benign hosts, while the second phase achieves bot detection with high precision. Our prototype implementation of BotChase detects multiple types of bots and exhibits robustness to zero-day attacks. It also accommodates different network topologies and is suitable for large-scale data. Compared to the state-of-the-art, BotChase outperforms an end-to-end system that employs flow-based features and performs particularly well in an online setting.
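BotChase is two-phased: an unsupervised phase prunes presumably benign hosts from the communication graph, and a supervised phase then classifies the remaining hosts. The sketch below mirrors that shape with networkx degree features, K-Means pruning and a random-forest classifier; the features, pruning rule and toy labels are illustrative assumptions, not the paper's exact pipeline.

```python
# Two-phase sketch in the spirit of BotChase: (1) unsupervised pruning of hosts
# whose graph features fall in the dominant (presumably benign) cluster, then
# (2) supervised classification of the remaining hosts. Features, pruning rule
# and labels are illustrative assumptions.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Toy communication graph: edges are (src_host, dst_host) flows.
G = nx.DiGraph([("h1", "h2"), ("h1", "h3"), ("h4", "h1"),
                ("bot", "h1"), ("bot", "h2"), ("bot", "h3"), ("bot", "h4")])
hosts = list(G.nodes)
features = np.array([[G.in_degree(h), G.out_degree(h)] for h in hosts])

# Phase 1: keep only hosts outside the largest (presumably benign) cluster.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
dominant = np.bincount(labels).argmax()
suspects = [h for h, l in zip(hosts, labels) if l != dominant]

# Phase 2: supervised model trained on labelled historical hosts (toy labels).
train_X = np.array([[1, 1], [2, 0], [0, 4], [1, 5]])
train_y = np.array([0, 0, 1, 1])                  # 1 = bot
clf = RandomForestClassifier(random_state=0).fit(train_X, train_y)
if suspects:
    suspect_X = np.array([[G.in_degree(h), G.out_degree(h)] for h in suspects])
    print(dict(zip(suspects, clf.predict(suspect_X))))
```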

Journal ArticleDOI
TL;DR: A hierarchical attack graph model is developed that provides a network’s vulnerability and network topology, which can be utilized for the MTD shuffling decisions in selecting highly exploitable hosts in a given network, and determining the frequency of shuffling the hosts’ network configurations.
Abstract: Moving target defense (MTD) has emerged as a proactive defense mechanism aiming to thwart a potential attacker. The key underlying idea of MTD is to increase uncertainty and confusion for attackers by changing the attack surface (i.e., system or network configurations) in a way that invalidates the intelligence collected by the attackers and interrupts attack execution, ultimately leading to attack failure. Recently, the significant advance of software-defined networking (SDN) technology has enabled several complex system operations to be highly flexible and robust, particularly in terms of programmability and controllability with the help of SDN controllers. Accordingly, many security operations have utilized this capability to be optimally deployed in a complex network using the SDN functionalities. In this paper, by leveraging the advanced SDN technology, we developed an attack graph-based MTD technique that shuffles a host's network configurations (e.g., MAC/IP/port addresses) based on its criticality, which is highly exploitable by attackers when the host is on the attack path(s). To this end, we developed a hierarchical attack graph model that provides a network's vulnerability and network topology, which can be utilized for the MTD shuffling decisions in selecting highly exploitable hosts in a given network, and determining the frequency of shuffling the hosts' network configurations. The MTD shuffling with a high priority on more exploitable, critical hosts contributes to providing adaptive, proactive, and affordable defense services aiming to minimize attack success probability with minimum MTD cost. We validated the outperformance of the proposed MTD in terms of attack success probability and MTD cost via both simulation and real SDN testbed experiments.

Journal ArticleDOI
TL;DR: This paper develops a multi-stage architecture of inference models that use flow-level attributes to automatically distinguish IoT devices from non-IoTs, classify individual types of IoT devices, and identify their states during normal operations.
Abstract: Cyber-security risks for Internet of Things (IoT) devices, sourced from a diversity of vendors and deployed in large numbers, are growing rapidly. Therefore, management of these devices is becoming increasingly important to network operators. Existing network monitoring technologies perform traffic analysis using specialized acceleration on network switches, or full inspection of packets in software, which can be complex, expensive, inflexible, and unscalable. In this paper, we use the SDN paradigm combined with machine learning to leverage the benefits of programmable flow-based telemetry with flexible data-driven models to manage IoT devices based on their network activity. Our contributions are three-fold: (1) We analyze traffic traces of 17 real consumer IoT devices collected in our lab over a six-month period and identify a set of traffic flows (per-device) whose time-series attributes computed at multiple timescales (from a minute to an hour) characterize the network behavior of various IoT device types, and their operating states (i.e., booting, actively interacting with a user, or being idle); (2) We develop a multi-stage architecture of inference models that use flow-level attributes to automatically distinguish IoT devices from non-IoTs, classify individual types of IoT devices, and identify their states during normal operations. We train our models and validate their efficacy using real traffic traces; and (3) We quantify the trade-off between performance and cost of our solution, and demonstrate how our monitoring scheme can be used in operation for detecting behavioral changes (firmware upgrades or cyber attacks).

Journal ArticleDOI
TL;DR: The experimental results show that, compared to the state of the art, WisdomSDN can quickly and effectively detect and mitigate DNS amplification attacks with a high detection rate, a low false positive rate, and low overhead, making it a promising solution for mitigating DNS amplification attacks in an SDN environment.
Abstract: As one of the most devastating types of Distributed Denial of Service (DDoS) attacks, the Domain Name System (DNS) amplification attack represents a major threat and one of the main Internet security problems for today's networks. Many protocols that form the Internet infrastructure expose vulnerabilities that can be exploited by attackers to carry out a variety of attacks. DNS, one of the most critical elements of the Internet, is among these protocols. It is vulnerable to DDoS attacks mainly because all exchanges in this protocol use the User Datagram Protocol (UDP). These attacks are difficult to defeat because attackers spoof the IP address of the victim and flood it with valid DNS responses coming from legitimate DNS servers. In this paper, we propose an efficient and scalable solution, called WisdomSDN, to effectively mitigate DNS amplification attacks in the context of software defined networks (SDN). WisdomSDN covers both detection and mitigation of illegitimate DNS requests and responses. WisdomSDN consists of: (1) a novel proactive and stateful scheme (PAS) to perform one-to-one mapping between DNS requests and DNS responses; it operates proactively by forwarding only legitimate responses, excluding amplified illegitimate DNS responses; (2) a machine learning DDoS detection module to detect, in real time, illegitimate DNS requests. This module consists of (a) a Flow Statistics Collection scheme (FSC) to gather the features of flows in an efficient and scalable way using the sFlow protocol; (b) an Entropy Calculation Scheme (ECS) to measure the randomness of network traffic; and (c) a Bayes Network based Filtering scheme (BNF) to classify, based on entropy values, illegitimate DNS requests; and (3) a DNS Mitigation scheme (DM) to effectively mitigate illegitimate DNS requests. The experimental results show that, compared to the state of the art, WisdomSDN can quickly and effectively detect and mitigate DNS amplification attacks with a high detection rate, a low false positive rate, and low overhead, making it a promising solution for mitigating DNS amplification attacks in an SDN environment.
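WisdomSDN's Entropy Calculation Scheme measures the randomness of traffic features to feed the Bayes-network filter. The helper below computes normalised Shannon entropy over a window of DNS requests; the feature choice (queried domain), window size and threshold are illustrative assumptions.

```python
# Sketch of an entropy calculation over a window of DNS requests, in the spirit
# of WisdomSDN's Entropy Calculation Scheme (ECS). Feature choice (queried
# domain), window size and threshold are illustrative assumptions.
import math
from collections import Counter

def normalized_entropy(values):
    counts = Counter(values)
    total = len(values)
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log2(p) for p in probs)
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h                    # 0 = fully concentrated, 1 = uniform

# Benign-looking window: queries spread across many domains.
benign = ["example.com", "news.org", "mail.net", "cdn.io", "shop.com", "blog.dev"]
# Attack-looking window: an amplification flood hammers the same query/domain.
attack = ["victim-reflector.com"] * 20 + ["example.com"]

print(normalized_entropy(benign))       # close to 1.0
print(normalized_entropy(attack))       # much lower than the benign window
# A window whose entropy falls below a tuned threshold would be handed to the
# Bayes-network filter for classification.
```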

Journal ArticleDOI
TL;DR: An experiment-based review of neural-based methods applied to intrusion detection issues, including deep-based approaches and weightless neural networks, which feature surprising outcomes; the evaluation quantifies the value of neural networks when state-of-the-art datasets are used to train the models.
Abstract: The use of Machine Learning (ML) techniques in Intrusion Detection Systems (IDS) has taken a prominent role in the network security management field, due to the substantial number of sophisticated attacks that often pass undetected through classic IDSs. These are typically aimed at recognizing attacks based on a specific signature, or at detecting anomalous events. However, deterministic, rule-based methods often fail to differentiate particular (rarer) network conditions (as in peak traffic during specific network situations) from actual cyber attacks. In this article we provide an experimental-based review of neural-based methods applied to intrusion detection issues. Specifically, we i) offer a complete view of the most prominent neural-based techniques relevant to intrusion detection, including deep-based approaches or weightless neural networks, which feature surprising outcomes; ii) evaluate novel datasets (updated w.r.t. the obsolete KDD99 set) through a designed-from-scratch Python-based routine; iii) perform experimental analyses including time complexity and performance (accuracy and F-measure), considering both single-class and multi-class problems, and identifying trade-offs between resource consumption and performance. Our evaluation quantifies the value of neural networks, particularly when state-of-the-art datasets are used to train the models. This leads to interesting guidelines for security managers and computer network practitioners who are looking at the incorporation of neural-based ML into IDS.

Journal ArticleDOI
TL;DR: This paper presents an anomaly detection method, called SA-Detector, for dealing with a family of saturation attacks through IP spoofing, ICMP flooding, UDP flooding, and other types of TCP flooding, in addition to SYN flooding.
Abstract: As a new networking paradigm, Software-Defined Networking (SDN) separates data and control planes to facilitate programmable functions and improve the efficiency of packet delivery. Recent studies have shown that there exist various security threats in SDN. For example, a saturation attack may disturb the normal delivery of packets and even make the SDN system out of service by flooding the data plane, the control plane, or both. The existing research has focused on saturation attacks caused by SYN flooding. This paper presents an anomaly detection method, called SA-Detector, for dealing with a family of saturation attacks through IP spoofing, ICMP flooding, UDP flooding, and other types of TCP flooding, in addition to SYN flooding. SA-Detector builds upon the study of self-similarity characteristics of OpenFlow traffic between the control and data planes. Our work has shown that the normal and abnormal traffic flows through the OpenFlow communication channel have different statistical properties. Specifically, normal OpenFlow traffic has a low self-similarity degree whereas the occurrences of saturation attacks typically imply a higher degree of self-similarity. Therefore, SA-Detector exploits statistical results and self-similarity degrees of OpenFlow traffic, measured by Hurst exponents, for anomaly detection. We have evaluated our approach in both physical and simulation SDN environments with various time intervals, network topologies and applications, Internet protocols, and traffic generation tools. For the physical SDN environment, the average accuracy of detection is 97.68% and the average precision is 94.67%. For the simulation environment, the average accuracy is 96.54% and the average precision is 92.06%. In addition, we have compared SA-Detector with the existing saturation attack detection methods in terms of the aforementioned performance metrics and controller’s CPU utilization. The experiment results indicate that SA-Detector is effective for the detection of saturation attacks in SDN.
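SA-Detector separates normal from attack traffic using the self-similarity of OpenFlow traffic, measured by the Hurst exponent. The function below estimates the Hurst exponent with classic rescaled-range (R/S) analysis over a message-rate time series; the window sizes are illustrative and the paper's exact estimator may differ.

```python
# Sketch: estimate the Hurst exponent of an OpenFlow message-rate time series
# via rescaled-range (R/S) analysis. A higher exponent indicates stronger
# self-similarity, which SA-Detector associates with saturation attacks.
# Window sizes and any detection threshold are illustrative assumptions.
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    series = np.asarray(series, dtype=float)
    rs_values, sizes = [], []
    for w in window_sizes:
        if w > len(series):
            continue
        rs_per_window = []
        for start in range(0, len(series) - w + 1, w):
            chunk = series[start:start + w]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()               # range of cumulative deviation
            s = chunk.std()
            if s > 0:
                rs_per_window.append(r / s)
        if rs_per_window:
            rs_values.append(np.mean(rs_per_window))
            sizes.append(w)
    # Hurst exponent = slope of log(R/S) versus log(window size).
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

rng = np.random.default_rng(0)
normal_rate = rng.poisson(50, 1024)                  # uncorrelated traffic, H near 0.5
print(round(hurst_rs(normal_rate), 2))
```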

Journal ArticleDOI
TL;DR: An optimization model is formulated, Apt-RAN, that optimizes the energy consumption of the CU pool and the number of handovers, considering different functional splits, and a lightweight polynomial time heuristic algorithm is proposed.
Abstract: The recent adoption of virtualized technologies in the Next Generation Radio Access Network (NG-RAN) has had a significant impact on energy consumption by decreasing the number of active base stations. The base station (gNodeB) of 5G is segregated into cost-efficient Central Units (CUs) hosted on virtual platforms and cheaper and smaller Distributed Units (DUs) present at the cell sites. Multiple CUs are pooled together in a single powerful central cloud, known as the CU pool. The logical connection between DU and CU can be dynamically adjusted and can potentially affect the energy consumption of the CU pool. The deployment of NG-RAN imposes strict latency requirements on the fronthaul link that connects DUs to the CU. To relax these strict latency requirements, various alternate architectures such as Flexible RAN Functional Splits have been proposed by 3GPP. In this paper, we first evaluate the energy consumption of the DU and CU for various functional split options using OpenAirInterface (OAI), a real-time open source software radio solution. We find that lower layer splits have higher energy consumption at the CU compared to higher layer split options. We also observe the variation in energy consumption due to traffic heterogeneity. Motivated by the above study, we formulate an optimization model, Apt-RAN, that optimizes the energy consumption of the CU pool and the number of handovers, considering different functional splits. To address the computational complexity of solving the optimization model, a lightweight polynomial time heuristic algorithm is proposed. Simulation results demonstrate that our proposed model outperforms existing state-of-the-art schemes.

Journal ArticleDOI
TL;DR: DDCOL, an unsupervised online anomaly detection algorithm with parameter adaptation designed, for the first time, from the perspective of anomalies, is robust to expected concept drifts in KPIs and obtains a good feature distribution of normal data.
Abstract: IT companies need to monitor various Key Performance Indicators (KPIs) and detect anomalies in real time to ensure the quality and reliability of Internet-based services. However, due to the diversity of KPIs, the ambiguity and scarcity of anomalies, and the lack of labels, anomaly detection for various KPIs has been a great challenge. To the best of our knowledge, existing KPI anomaly detection methods have not explored the properties of anomalies in KPIs in detail. Therefore, we explore anomalies in KPIs and recognize a common and important form of anomalies, named abrupt changes, which often indicate potential failures in the relevant services. For abrupt changes in various KPIs, we propose DDCOL, an unsupervised online anomaly detection algorithm with parameter adaptation designed, for the first time, from the perspective of anomalies. We propose three techniques: high-order Difference extraction and combination, Density-based Clustering with parameter adaptation, and OnLine detection with subsampling (DDCOL). Compared with traditional statistical methods and unsupervised learning methods, extensive experimental results and analysis on a large number of public KPIs show the competitive performance of DDCOL and the significance of abrupt changes. Furthermore, we provide an interpretation for the promising results, which shows that DDCOL is robust to expected concept drifts in KPIs and obtains a good feature distribution of normal data.
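DDCOL combines high-order difference extraction with density-based clustering to flag abrupt changes in a KPI stream. The snippet below sketches that pipeline with first- and second-order differences and scikit-learn's DBSCAN standing in for the paper's parameter-adaptive clustering; eps, min_samples and the synthetic KPI are illustrative assumptions.

```python
# Sketch of DDCOL's idea: extract high-order differences of a KPI series and
# flag points that density-based clustering marks as outliers (abrupt changes).
# DBSCAN with fixed eps/min_samples stands in for the paper's parameter-adaptive
# clustering; all values here are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
kpi = np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.05, 500)
kpi[300] += 3.0                                   # inject one abrupt change

d1 = np.diff(kpi, n=1, prepend=kpi[0])            # first-order difference
d2 = np.diff(kpi, n=2, prepend=[kpi[0], kpi[0]])  # second-order difference
features = np.column_stack([d1, d2])

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(features)
anomalies = np.where(labels == -1)[0]             # DBSCAN noise points
print(anomalies)                                  # expected to include index ~300
```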