
Showing papers in "IEEE Transactions on Network Science and Engineering in 2020"


Journal ArticleDOI
TL;DR: This article proposes a novel mechanism for data uploading in smart cyber-physical systems that considers both energy conservation and privacy preservation, together with a heuristic algorithm that achieves an energy-efficient uploading scheme by introducing an acceptable number of extra contents.
Abstract: To provide fine-grained access to different dimensions of the physical world, data uploading in smart cyber-physical systems faces novel challenges in both energy conservation and privacy preservation. It is always critical for participants to consume as little energy as possible for data uploading. However, simply pursuing energy efficiency may lead to extreme disclosure of private information, especially when the uploaded contents from participants are more informative than ever. In this article, we propose a novel mechanism for data uploading in smart cyber-physical systems that considers both energy conservation and privacy preservation. The mechanism preserves privacy by concealing abnormal behaviors of participants, while still achieving an energy-efficient uploading scheme by introducing an acceptable number of extra contents. Deriving an optimal uploading scheme is proved to be NP-hard. Accordingly, we propose a heuristic algorithm and analyze its effectiveness. Evaluation results on a real-world dataset demonstrate that the performance of the proposed algorithm is comparable to the optimal results.

447 citations


Journal ArticleDOI
TL;DR: This paper proposes an efficient online CSI prediction scheme, called OCEAN, for predicting CSI from historical data in 5G wireless communication systems, and designs a learning framework that integrates a CNN and a long short-term memory (LSTM) network.
Abstract: Channel state information (CSI) estimation is one of the most fundamental problems in wireless communication systems. Various methods have been developed to conduct CSI estimation. However, they usually require high computational complexity, which makes them unsuitable for 5G wireless communications that employ many new techniques (e.g., massive MIMO, OFDM, and millimeter-wave (mmWave)). In this paper, we propose an efficient online CSI prediction scheme, called OCEAN, for predicting CSI from historical data in 5G wireless communication systems. Specifically, we first identify several important features affecting the CSI of a radio link, so that each data sample consists of the feature information and the CSI. We then design a learning framework that integrates a CNN (convolutional neural network) and a long short-term memory (LSTM) network. We further develop an offline-online two-step training mechanism, enabling the prediction results to be more stable when applying the scheme to practical 5G wireless communication systems. To validate OCEAN's efficacy, we consider four typical case studies and conduct extensive experiments in the four scenarios, i.e., two outdoor and two indoor scenarios. The experiment results show that OCEAN not only obtains the predicted CSI values very quickly but also achieves highly accurate CSI prediction, with up to a 2.650-3.457 percent average difference ratio (ADR) between the predicted and measured CSI.
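
To make the CNN-plus-LSTM arrangement concrete, here is a minimal PyTorch sketch of a predictor in the spirit of OCEAN; the layer sizes, history length, feature count, and single-output head are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of a CNN + LSTM CSI predictor in the spirit of OCEAN.
# All dimensions (feature count, history length, hidden units) are
# illustrative assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn

class CnnLstmCsiPredictor(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        # CNN extracts correlations among the features of each time step.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # LSTM captures the temporal evolution of the extracted features.
        self.lstm = nn.LSTM(input_size=32 * n_features, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)  # next-step CSI value

    def forward(self, x):
        # x: (batch, history, n_features) of historical features + CSI
        b, t, f = x.shape
        z = self.cnn(x.reshape(b * t, 1, f))   # (b*t, 32, f)
        z = z.reshape(b, t, -1)                # (b, t, 32*f)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])           # predicted next-step CSI

model = CnnLstmCsiPredictor()
pred = model(torch.randn(4, 16, 8))            # toy batch of 4 histories
```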

302 citations


Journal ArticleDOI
TL;DR: This paper proposes CiFi, deep convolutional neural networks (DCNN) for indoor localization with commodity 5GHz WiFi, and implements the system with commodity Wi-Fi devices in the 5GHz band and verifies its performance with extensive experiments in two representative indoor environments.
Abstract: With the increasing demand for location-based services, Wi-Fi based localization has attracted great interest because it provides ubiquitous access in indoor environments. In this paper, we propose CiFi, a deep convolutional neural network (DCNN) approach for indoor localization with commodity 5GHz WiFi. Leveraging a modified device driver, we extract phase data of channel state information (CSI), which is used to estimate the angle of arrival (AoA). We then create estimated AoA images as input to a DCNN to train the weights in the offline phase. The location of the mobile device is then predicted using the trained DCNN and newly collected CSI AoA images. We implement the proposed CiFi system with commodity Wi-Fi devices in the 5GHz band and verify its performance with extensive experiments in two representative indoor environments.

181 citations


Journal ArticleDOI
TL;DR: This work proposes a load balancing scheme in a fog network that minimizes the latency of data flows in the communications and processing procedures by associating IoT devices with suitable BSs, and proves the convergence and optimality of the proposed workload balancing scheme.
Abstract: As latency is the key performance metric for IoT applications, fog nodes co-located with cellular base stations can move the computing resources close to IoT devices. Therefore, data flows of IoT devices can be offloaded to fog nodes in their proximity, instead of the remote cloud, for processing. However, the latency of data flows in IoT devices consists of both the communications latency and computing latency. Owing to the spatial and temporal dynamics of IoT device distributions, some BSs and fog nodes are lightly loaded, while others, which may be overloaded, may incur congestion. Thus, the traffic load allocation among base stations (BSs) and computing load allocation among fog nodes affect the communications latency and computing latency of data flows, respectively. To solve this problem, we propose a workload balancing scheme in a fog network to minimize the latency of data flows in the communications and processing procedures by associating IoT devices with suitable BSs. We further prove the convergence and the optimality of the proposed workload balancing scheme. Through extensive simulations, we have compared the performance of the proposed load balancing scheme with other schemes and verified its advantages for fog networking.
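
As a toy illustration of the trade-off the abstract describes, the sketch below models total latency as a communication term at the BS plus a computing term at its co-located fog node, each growing with the load already assigned, and associates devices greedily; the linear load models and the greedy rule are assumptions for illustration, not the paper's balancing algorithm.

```python
# Toy sketch of the latency trade-off behind workload balancing:
# total latency = communication latency at the BS + computing latency at its
# co-located fog node, both growing with the load already assigned.
# The linear load models and greedy association are illustrative assumptions.

def total_latency(bs_load, fog_load, comm_base=1.0, comp_base=2.0):
    return comm_base * (1 + bs_load) + comp_base * (1 + fog_load)

def greedy_associate(n_devices, n_bs):
    bs_load = [0] * n_bs
    fog_load = [0] * n_bs
    assignment = []
    for _ in range(n_devices):
        # pick the BS/fog pair currently offering the lowest latency
        best = min(range(n_bs),
                   key=lambda i: total_latency(bs_load[i], fog_load[i]))
        assignment.append(best)
        bs_load[best] += 1
        fog_load[best] += 1
    return assignment

print(greedy_associate(10, 3))  # devices spread across the 3 BSs
```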

142 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed method can reconstruct end-to-end network traffic with a high degree of accuracy, and in comparison with previous methods, this approach exhibits a significant performance improvement.
Abstract: Estimation of end-to-end network traffic plays an important role in traffic engineering and network planning. The direct measurement of a network's traffic matrix consumes large amounts of network resources and is thus impractical in most cases. How to accurately construct the traffic matrix remains a great challenge. This paper studies end-to-end network traffic reconstruction in large-scale networks. Applying compressive sensing theory, we propose a novel reconstruction method for end-to-end traffic flows. First, the direct measurement of partial Origin-Destination (OD) flows is determined by a random measurement matrix, providing partial measurements. Then, we use the K-SVD approach to obtain a sparse matrix. Combined with compressive sensing, this partially known OD flow matrix can be used to recover the entire end-to-end network traffic matrix. Simulation results show that the proposed method can reconstruct end-to-end network traffic with a high degree of accuracy. Moreover, in comparison with previous methods, our approach exhibits a significant performance improvement.
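
The recovery step can be illustrated with a small compressive-sensing example: observe a few random linear measurements of a traffic vector that is sparse in a dictionary and recover it with orthogonal matching pursuit. A random Gaussian dictionary stands in here for the K-SVD-learned one used in the paper, and scikit-learn's OMP solver is an assumed substitute for the authors' recovery algorithm.

```python
# Toy illustration of compressive-sensing recovery: observe y = Phi @ x of a
# traffic vector x that is sparse in a dictionary D, then recover the sparse
# code with OMP.  A random dictionary stands in for the K-SVD-learned one.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, k, m = 100, 5, 40              # flows, sparsity, measurements
D = rng.standard_normal((n, n))   # stand-in dictionary (K-SVD in the paper)
code = np.zeros(n)
code[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = D @ code                      # "true" OD traffic vector

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                       # partial measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ D, y)               # sensing matrix composed with dictionary
x_hat = D @ omp.coef_
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```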

137 citations


Journal ArticleDOI
TL;DR: Reinforcement learning is exploited to transform the two formulated problems and solve them by leveraging the deep deterministic policy gradient (DDPG) and hierarchical learning architectures, and the proposed resource management schemes can achieve high delay/QoS satisfaction ratios.
Abstract: In this paper, we study joint allocation of the spectrum, computing, and storage resources in a multi-access edge computing (MEC)-based vehicular network. To support different vehicular applications, we consider two typical MEC architectures and formulate multi-dimensional resource optimization problems accordingly, which usually have high computational complexity and long problem-solving time. Thus, we exploit reinforcement learning (RL) to transform the two formulated problems and solve them by leveraging the deep deterministic policy gradient (DDPG) and hierarchical learning architectures. Via offline training, the network dynamics can be automatically learned and appropriate resource allocation decisions can be rapidly obtained to satisfy the quality-of-service (QoS) requirements of vehicular applications. Simulation results show that the proposed resource management schemes can achieve high delay/QoS satisfaction ratios.
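
For readers unfamiliar with DDPG, the following minimal PyTorch sketch shows its building blocks in this setting: a deterministic actor mapping the observed network state to a continuous multi-dimensional allocation, a critic scoring state-action pairs, and softly updated target networks. Dimensions, architectures, and hyper-parameters are placeholder assumptions, not the paper's design.

```python
# Minimal sketch of the DDPG building blocks: actor, critic, target networks
# with Polyak soft updates.  State/action sizes are placeholders.
import copy
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim=10, action_dim=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, action_dim), nn.Sigmoid())
    def forward(self, s):
        # action in [0, 1]: e.g., fractions of spectrum/computing/storage
        return self.net(s)

class Critic(nn.Module):
    def __init__(self, state_dim=10, action_dim=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)

def soft_update(target, source, tau=0.005):
    # Polyak averaging of target-network parameters, as in standard DDPG.
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1 - tau).add_(tau * s.data)

state = torch.randn(1, 10)
allocation = actor(state)           # continuous resource-allocation action
q_value = critic(state, allocation) # critic's score of that decision
soft_update(actor_tgt, actor)
soft_update(critic_tgt, critic)
```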

129 citations


Journal ArticleDOI
TL;DR: The PRDSA scheme is proposed to resist the sinkhole attack and guarantee security for IoT, and is the first work that can detect, bypass, and locate the sinkhole at the same time.
Abstract: Internet of Things (IoT) applications have been growing significantly in recent years; however, the security issue has not been well studied in the literature for the IoT ecosystem. The sinkhole attack is one of the most destructive attacks on IoT, as it is easy to launch and difficult to defend against. In this paper, a Probe Route based Defense Sinkhole Attack (PRDSA) scheme is proposed to resist the sinkhole attack and guarantee security for IoT; it is the first work that can detect, bypass, and locate the sinkhole at the same time. The PRDSA scheme proposes a routing mechanism combining far-sink reverse routing, equal-hop routing, and minimum-hop routing, which can effectively circumvent sinkhole attacks and find a safe route to the real sink, so that the scheme can achieve better sinkhole detection. More importantly, the PRDSA scheme overcomes the limitation of previous schemes that cannot locate the sinkhole. During the detection of the sinkhole attack, the PRDSA scheme requires the nodes and the sink node to return signatures of the information (e.g., IDs), so that the location of the sinkhole can be determined. Furthermore, the PRDSA scheme mainly utilizes the characteristics of network energy consumption: the probe routing for sinkhole detection mainly occurs in regions where residual energy remains, so the scheme has little impact on the network lifetime. Theory and experiments show that this scheme can achieve better performance than existing schemes in terms of network security and lifetime.

128 citations


Journal ArticleDOI
TL;DR: A time-dependent SIR model that tracks the transmission and recovering rates at time $t$ is proposed; using the data provided by the Chinese authorities, the authors show that their one-day prediction errors are almost always less than $3\%$.
Abstract: In this paper, we conduct mathematical and numerical analyses for COVID-19. To predict the trend of COVID-19, we propose a time-dependent SIR model that tracks the transmission rate and the recovering rate at time $t$ . Using the data provided by the Chinese authorities, we show that our one-day prediction errors are almost always less than $3\%$ . The turning point and the total number of confirmed cases in China are predicted under our model. To analyze the impact of undetectable infections on the spread of the disease, we extend our model by considering two types of infected persons: detectable and undetectable infected persons. Whether there is an outbreak is characterized by the spectral radius of a $2 \times 2$ matrix. If $R_0>1$ , then the spectral radius of that matrix is greater than 1, and there is an outbreak. We plot the phase transition diagram of an outbreak and show that several countries were on the verge of COVID-19 outbreaks on Mar. 2, 2020. To illustrate the effectiveness of social distancing, we analyze the independent cascade model for disease propagation in a configuration random network. We show two approaches to social distancing that can lead to a reduction of the effective reproduction number $R_e$ .
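
A minimal sketch of the discrete-time, time-dependent SIR update described above: the transmission rate and recovering rate are backed out day by day from the reported series and then used for a one-day-ahead prediction. The ridge regression/smoothing the authors use to track the rates is omitted, and the numbers below are toy values, not the China data.

```python
# Minimal sketch of a discrete-time, time-dependent SIR model:
# estimate beta(t), gamma(t) from consecutive observations, then make a
# one-day-ahead prediction.  Toy numbers, not the data used in the paper.

def estimate_rates(S, I, R, n):
    """Back out beta(t) and gamma(t) from consecutive daily observations."""
    beta, gamma = [], []
    for t in range(len(I) - 1):
        gamma.append((R[t + 1] - R[t]) / I[t])
        beta.append((S[t] - S[t + 1]) * n / (S[t] * I[t]))
    return beta, gamma

def predict_one_day(S_t, I_t, R_t, beta_t, gamma_t, n):
    """One step of the SIR difference equations."""
    S_next = S_t - beta_t * S_t * I_t / n
    I_next = I_t + beta_t * S_t * I_t / n - gamma_t * I_t
    R_next = R_t + gamma_t * I_t
    return S_next, I_next, R_next

n = 1_000_000                       # total population (toy value)
S = [999_000, 998_500, 997_800]     # susceptible
I = [800, 1_100, 1_500]             # infected
R = [200, 400, 700]                 # recovered
beta, gamma = estimate_rates(S, I, R, n)
print(predict_one_day(S[-1], I[-1], R[-1], beta[-1], gamma[-1], n))
```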

120 citations


Journal ArticleDOI
TL;DR: These studies demonstrate that big data technologies can indeed be utilized to effectively capture network behaviors and predict network activities, and can thus help perform highly effective network management.
Abstract: This paper uses big data technologies to study base stations' behaviors and activities and their predictability in mobile cellular networks. With new technologies quickly appearing, current cellular networks have become larger, more heterogeneous, and more complex, which poses greater challenges for network management and design. How to use network big data to capture cellular network behavior and activity patterns and perform accurate predictions has recently become one of the main problems. To this end, we first exploit a big data platform and technologies to analyze cellular network big data, i.e., Call Detail Records (CDRs). Our CDR data set, which covers more than 1,000 cellular towers, more than a million lines of CDRs, and several million users, and spans more than 100 days, is collected from a national cellular network. Second, we propose our methodology to analyze these big data. A data pre-handling and cleaning approach is proposed to obtain valuable big data sets for our further studies. Feature extraction and call predictability methods are presented to capture base stations' behaviors and dissect their predictability. Third, based on our method, we perform a detailed activity pattern analysis, including call distributions, cross-correlation features, call behavior patterns, and daily activities. Detailed analysis approaches are also proposed to uncover base stations' activities. A series of findings are observed in the analysis process. Finally, a case study is presented to validate the predictability of base stations' behaviors and activities. Our studies demonstrate that big data technologies can indeed be utilized to effectively capture network behaviors and predict network activities, and can thus help perform highly effective network management.

112 citations


Journal ArticleDOI
TL;DR: A mobility-driven cloud-fog-edge collaborative real-time framework, Mobi-IoST, has been proposed, which has IoT, Edge, Fog and Cloud layers and exploits the mobility dynamics of the moving agent.
Abstract: The design of mobility-aware frameworks for edge/fog computing in IoT systems with a back-end cloud is gaining research interest. In this paper, a mobility-driven cloud-fog-edge collaborative real-time framework, Mobi-IoST, is proposed, which has IoT, Edge, Fog and Cloud layers and exploits the mobility dynamics of the moving agent. The IoT and edge devices are considered to be moving agents in a 2-D space, typically over the road network. The framework analyses the spatio-temporal mobility data (GPS logs) along with other contextual information and employs a machine learning algorithm to predict the location of the moving agents (IoT and edge devices) in real time. The accumulated spatio-temporal traces from the moving agents are modelled using a probabilistic graphical model. The major features of the proposed framework are: (i) hierarchical processing of the information using the IoT-Edge-Fog-Cloud architecture to provide better QoS in real-time applications, (ii) use of mobility information for predicting the next location of the agents to deliver processed information, and (iii) efficient handling of delay and power consumption. The performance evaluations show that the proposed Mobi-IoST framework has approximately 93% accuracy and reduces the delay and power by approximately 23–26% and 37–41% respectively compared with the existing mobility-aware task delegation system.

94 citations


Journal ArticleDOI
TL;DR: This paper exploits the intrinsic nature of social networks, i.e., the trust formed through social relationships among users, to enable users to share resources under the framework of 3C and applies a novel deep reinforcement learning approach to automatically make a decision for optimally allocating the network resources.
Abstract: Social networks have been continuously expanding and innovating. The recent advances of computing, caching, and communication (3C) can have significant impacts on mobile social networks (MSNs). MSNs can leverage these new paradigms to provide a new mechanism for users to share resources (e.g., information, computation-based services). In this paper, we exploit the intrinsic nature of social networks, i.e., the trust formed through social relationships among users, to enable users to share resources under the framework of 3C. Specifically, we consider mobile edge computing (MEC), in-network caching and device-to-device (D2D) communications. When considering trust-based MSNs with MEC, caching and D2D, we apply a novel deep reinforcement learning approach to automatically make a decision for optimally allocating the network resources. The decision is made purely by observing the network's states, rather than following any handcrafted or explicit control rules, which makes it adaptive to variable network conditions. Google TensorFlow is used to implement the proposed deep $Q$ -learning approach. Simulation results with different network parameters are presented to show the effectiveness of the proposed scheme.

Journal ArticleDOI
TL;DR: A Reinforcement Learning (RL) based job scheduling algorithm by combining RL with neural network (NN) is proposed to solve the cost minimization problem of big data analytics on geo-distributed data centers connected to renewable energy sources with unpredictable capacity.
Abstract: In the age of big data, companies tend to deploy their services in data centers rather than their own servers. The demands of big data analytics grow significantly, which leads to an extremely high electricity consumption at data centers. In this paper, we investigate the cost minimization problem of big data analytics on geo-distributed data centers connected to renewable energy sources with unpredictable capacity. To solve this problem, we propose a Reinforcement Learning (RL) based job scheduling algorithm by combining RL with neural network (NN). Moreover, two techniques are developed to enhance the performance of our proposal. Specifically, Random Pool Sampling (RPS) is proposed to retrain the NN via accumulated training data, and a novel Unidirectional Bridge Network (UBN) structure is designed for further enhancing the training speed by using the historical knowledge stored in the trained NN. Experiment results on real Google cluster traces and electricity price from Energy Information Administration show that our approach is able to reduce the data centers’ cost significantly compared with other benchmark algorithms.

Journal ArticleDOI
TL;DR: This paper designs a memristor-based sparse compact convolutional neural network (MSCCNN) that reduces the number of memristors and achieves superior accuracy rates while greatly reducing the scale of the hardware circuit.
Abstract: The memristor has been widely studied for hardware implementation of neural networks due to the advantages of nanometer size, low power consumption, fast switching speed and functional similarity to the biological synapse. However, it is difficult to realize memristor-based deep neural networks because general structures such as LeNet and FCN contain a large number of network parameters. To mitigate this problem, this paper designs a memristor-based sparse compact convolutional neural network (MSCCNN) to reduce the number of memristors. We first use average pooling and a $1\times 1$ convolutional layer to replace the fully connected layers. Meanwhile, depthwise separable convolution is utilized to replace traditional convolution to further reduce the number of parameters. Furthermore, a network pruning method is adopted to remove redundant memristor crossbars from the depthwise separable convolutional layers. Therefore, a more compact network structure is obtained while the recognition accuracy remains unchanged. Simulation results show that the designed model achieves superior accuracy rates while greatly reducing the scale of the hardware circuit. Compared with traditional designs of memristor-based CNNs, our proposed model has a smaller area and lower power consumption.
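
The two parameter-reduction ideas, depthwise separable convolution in place of standard convolution and global average pooling with a $1\times 1$ convolution in place of fully connected layers, can be sketched in a few lines of PyTorch; the channel counts and the 10-class head are illustrative assumptions.

```python
# Sketch of the two parameter-reduction ideas: depthwise separable convolution
# and a global-average-pooling + 1x1-conv head replacing fully connected layers.
# Channel counts and the 10-class output are illustrative assumptions.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)    # per-channel filter
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # channel mixing
    def forward(self, x):
        return self.pointwise(self.depthwise(x))

model = nn.Sequential(
    DepthwiseSeparableConv(1, 16), nn.ReLU(),
    DepthwiseSeparableConv(16, 32), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),           # global average pooling ...
    nn.Conv2d(32, 10, kernel_size=1),  # ... plus 1x1 conv replaces FC layers
    nn.Flatten(),
)
print(model(torch.randn(2, 1, 28, 28)).shape)  # torch.Size([2, 10])
```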

Journal ArticleDOI
TL;DR: A data-driven IDS is designed by analyzing the link load behaviors of the Road Side Unit in the IoV against various attacks leading to the irregular fluctuations of traffic flows and a deep learning architecture based on the Convolutional Neural Network is designed to extract the features of link loads, and detect the intrusion aiming at RSUs.
Abstract: As an industrial application of the Internet of Things (IoT), the Internet of Vehicles (IoV) is one of the most crucial techniques for the Intelligent Transportation System (ITS), which is a basic element of smart cities. The primary issue for the deployment of ITS based on IoV is the security of both users and infrastructures. The Intrusion Detection System (IDS) is important for IoV users to keep them away from various attacks launched via malware and to ensure the security of users and infrastructures. In this paper, we design a data-driven IDS by analyzing the link load behaviors of the Road Side Unit (RSU) in the IoV against various attacks that lead to irregular fluctuations of traffic flows. A deep learning architecture based on the Convolutional Neural Network (CNN) is designed to extract the features of link loads and detect intrusions aimed at RSUs. The proposed architecture is composed of a traditional CNN and a fundamental error term in view of the convergence of the backpropagation algorithm. Meanwhile, a theoretical analysis of the convergence is provided via the probabilistic representation of the proposed CNN-based deep architecture. We finally evaluate the accuracy of our method by implementing it on a testbed.

Journal ArticleDOI
TL;DR: A task-oriented user selection incentive mechanism (TRIM) is proposed, in an effort toward a task-centered design framework in MCS, which achieves feasible and efficient user selection while ensuring the privacy and security of the sensing user in M CS.
Abstract: The designs of existing incentive mechanisms in mobile crowdsensing (MCS) are primarily platform-centered or user-centered, while overlooking the multidimensional consideration of sensing task requirements. Therefore, the user selection fails to effectively address the task requirements or the relevant maximization and diversification problems. To tackle these issues, in this paper, with the aid of edge computing, we propose a task-oriented user selection incentive mechanism (TRIM), in an effort toward a task-centered design framework in MCS. Initially, an edge node is deployed to publish the sensing task according to its requirements, and constructs a task vector from multiple dimensions to maximize the satisfaction of the task requirements. Meanwhile, a sensing user constructs a user vector to formalize the personalized preferences for participating in the task response. Furthermore, by introducing a privacy-preserving cosine similarity computing protocol, the similarity level between the task vector and the user vector can be calculated, and subsequently a target user candidate set can be obtained according to the similarity level. In addition, considering the constraint of the task budget, the edge node performs a secondary sensing user selection based on the ratio of the similarity level to the expected reward of the sensing user. By designing a secure multi-party sorting protocol, enhanced by fuzzy closeness and the fuzzy comprehensive evaluation method, the target user set is determined with the aim of maximizing the similarity between the task requirements and the user's preferences, while minimizing the payment of the edge node and ensuring the fairness of the sensing user selection. The simulation results show that TRIM achieves feasible and efficient user selection while ensuring the privacy and security of the sensing user in MCS. Under dynamically changing task requirements, TRIM achieves nearly 90% on the data quality level compliance rate and 70% on the task budget consumption ratio, outperforming the other incentive mechanisms.
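
Stripping away the privacy-preserving protocols, the core selection rule reduces to ranking users by the cosine similarity between the task vector and each user vector per unit of expected reward, subject to the budget. The sketch below illustrates that (non-private) core with made-up vectors and rewards.

```python
# Toy sketch of the non-private core of the selection rule: rank candidates by
# cosine similarity to the task vector per unit of expected reward, then select
# greedily under the budget.  Vectors, rewards and the budget are illustrative.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

task = np.array([0.9, 0.5, 0.7])                 # task requirement vector
users = {"u1": np.array([0.8, 0.4, 0.6]),
         "u2": np.array([0.1, 0.9, 0.2]),
         "u3": np.array([0.7, 0.6, 0.8])}
reward = {"u1": 4.0, "u2": 2.0, "u3": 5.0}       # expected rewards
budget = 8.0

ranked = sorted(users, key=lambda u: cosine(task, users[u]) / reward[u],
                reverse=True)
selected, spent = [], 0.0
for u in ranked:                                  # greedy selection under budget
    if spent + reward[u] <= budget:
        selected.append(u)
        spent += reward[u]
print(selected, spent)
```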

Journal ArticleDOI
TL;DR: A randomized approximation algorithm is presented that is provably superior to state-of-the-art methods with respect to running time.
Abstract: Social networks allow rapid spread of ideas and innovations, while negative information can also propagate widely. When a user receives two opposing opinions, they tend to believe the one that arrives first. Therefore, once misinformation or a rumor is detected, one containment method is to introduce a positive cascade competing against the rumor. Given a budget $k$ , the rumor blocking problem asks for $k$ seed users to trigger the spread of a positive cascade such that the number of users who are not influenced by the rumor is maximized. Prior works have shown that the rumor blocking problem can be approximated within a factor of $(1-1/e)$ by a classic greedy algorithm combined with Monte Carlo simulation. Unfortunately, the Monte Carlo simulation based methods are time consuming, and the existing algorithms either trade performance guarantees for practical efficiency or vice versa. In this paper, we present a randomized approximation algorithm which is provably superior to the state-of-the-art methods with respect to running time. The superiority of the proposed algorithm is demonstrated by experiments on both real-world and synthetic social networks.
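
For context, the greedy-plus-Monte-Carlo baseline that the abstract contrasts with can be sketched as follows; for brevity it estimates plain independent-cascade influence spread rather than the competing rumor/positive-cascade objective, and the graph and propagation probability are toy values.

```python
# Sketch of the Monte Carlo + greedy baseline: repeatedly simulate an
# independent-cascade diffusion and greedily add the seed with the largest
# estimated marginal gain.  This illustrates plain influence spread, not the
# paper's competing-cascade objective; graph and probabilities are toy values.
import random

def ic_spread(graph, seeds, p=0.1, runs=200):
    """Monte Carlo estimate of the expected independent-cascade spread."""
    total = 0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / runs

def greedy_seeds(graph, k):
    seeds = []
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: ic_spread(graph, seeds + [v]))
        seeds.append(best)
    return seeds

toy_graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(greedy_seeds(toy_graph, 2))
```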

Journal ArticleDOI
TL;DR: A Reduced Variable Neighborhood Search (RVNS)-based sEnsor Data Processing Framework (REDPF) is proposed to enhance the reliability of data transmission and the processing speed, and a new scheme is designed to evaluate the health status of elderly people.
Abstract: In recent years, healthcare IoT has been helpful in mitigating, to a large extent, the pressure on hospital and medical resources caused by the aging population. As a safety-critical system, rapid response from the healthcare system is extremely important. To fulfill the low latency requirement, fog computing is a competitive solution that deploys healthcare IoT devices on the edge of clouds. However, these fog devices generate huge amounts of sensor data. Designing a specific framework for fog devices to ensure reliable data transmission and rapid data processing thus becomes a topic of utmost significance. In this paper, a Reduced Variable Neighborhood Search (RVNS)-based sEnsor Data Processing Framework (REDPF) is proposed to enhance the reliability of data transmission and the processing speed. Functionalities of REDPF include fault-tolerant data transmission, self-adaptive filtering and data-load-reduction processing. Specifically, a reliable transmission mechanism, managed by a self-adaptive filter, will recollect lost or inaccurate data automatically. Then, a new scheme is designed to evaluate the health status of elderly people. Through extensive simulations, we show that our proposed scheme improves network reliability and provides a faster processing speed.
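
Reduced Variable Neighborhood Search itself is a simple metaheuristic loop: shake the incumbent in neighborhoods of increasing size and accept only improving moves (the "reduced" variant omits the local-search step of full VNS). The generic sketch below uses a placeholder objective and neighborhood operator, not the paper's scheduling model.

```python
# Generic Reduced Variable Neighborhood Search (RVNS) skeleton: shake in
# neighborhoods of increasing size, accept only improving moves.
# The objective and the shake operator are placeholders.
import random

def shake(x, k):
    # perturb k randomly chosen coordinates (placeholder neighborhood)
    y = list(x)
    for i in random.sample(range(len(y)), k):
        y[i] += random.uniform(-1, 1)
    return y

def rvns(initial, objective, k_max=3, iters=500):
    best, best_val = initial, objective(initial)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = shake(best, k)
            val = objective(candidate)
            if val < best_val:                # move only on improvement
                best, best_val, k = candidate, val, 1
            else:
                k += 1                        # enlarge the neighborhood
    return best, best_val

solution, value = rvns([5.0, -3.0, 2.0, 0.5, -1.0],
                       objective=lambda v: sum(t * t for t in v))
print(solution, value)
```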

Journal ArticleDOI
TL;DR: A privacy-aware task allocation and data aggregation scheme (PTAA) is proposed leveraging bilinear pairing and homomorphic encryption and security analysis shows that PTAA can achieve the desirable security goals.
Abstract: Spatial crowdsourcing (SC) enables task owners (TOs) to outsource spatial-related tasks to a SC-server who engages mobile users in collecting sensing data at some specified locations with their mobile devices. Data aggregation, as a specific SC task, has drawn much attention in mining the potential value of the massive spatial crowdsensing data. However, the release of SC tasks and the execution of data aggregation may pose considerable threats to the privacy of TOs and mobile users, respectively. Besides, it is nontrivial for the SC-server to allocate numerous tasks efficiently and accurately to qualified mobile users, as the SC-server has no knowledge about the entire geographical user distribution. To tackle these issues, in this paper, we introduce a fog-assisted SC architecture, in which many fog nodes deployed in different regions can assist the SC-server to distribute tasks and aggregate data in a privacy-aware manner. Specifically, a privacy-aware task allocation and data aggregation scheme (PTAA) is proposed leveraging bilinear pairing and homomorphic encryption. PTAA supports representative aggregate statistics (e.g., sum, mean, variance, and minimum) with efficient data update while providing strong privacy protection. Security analysis shows that PTAA can achieve the desirable security goals. Extensive experiments also demonstrate its feasibility and efficiency.
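
The additively homomorphic aggregation idea can be illustrated with a Paillier cryptosystem: users encrypt their readings, a fog node sums the ciphertexts without seeing the plaintexts, and only the task owner decrypts the aggregate. The sketch assumes the third-party python-paillier (phe) package and does not reproduce PTAA's bilinear-pairing machinery or its other supported statistics.

```python
# Small sketch of additively homomorphic aggregation with Paillier, using the
# python-paillier (phe) package as a stand-in; not the paper's full PTAA scheme.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

readings = [21, 35, 18, 40]                       # users' sensing data
ciphertexts = [public_key.encrypt(r) for r in readings]

encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:                         # fog node aggregates blindly
    encrypted_sum = encrypted_sum + c

total = private_key.decrypt(encrypted_sum)        # only the task owner decrypts
print(total, total / len(readings))               # sum and mean of the readings
```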

Journal ArticleDOI
Weichao Gao, Wei Yu, Fan Liang, William G. Hatcher, Chao Lu
TL;DR: This paper proposes a generic Privacy-Preserving Auction Scheme (PPAS), in which the two independent entities of Auctioneer and Intermediate Platform comprise an untrusted third-party trading platform and leverages an additional signature verification mechanism to improve the security of the privacy-preserving auction.
Abstract: Cyber-Physical Systems (CPS), such as the smart grid, smart transportation, and smart cities, driven by advances in Internet of Things (IoT) technologies, will provide the infrastructure and integration of smart applications to accelerate the generation and collection of big data to an unprecedented scale. As a fundamental commodity in our current information age, big data is a crucial key to competitiveness in modern commerce. In this paper, we address the issue of privacy preservation for data auction in CPS by leveraging the concept of homomorphic cryptography and secure network protocol design. Specifically, we propose a generic Privacy-Preserving Auction Scheme (PPAS), in which the two independent entities of Auctioneer and Intermediate Platform comprise an untrusted third-party trading platform. Via the implementation of homomorphic encryption and one-time pad, a winner in the auction process can be determined and all bidding information is disguised. To further improve the security of the privacy-preserving auction, we additionally propose an Enhanced Privacy-Preserving Auction Scheme (EPPAS) that leverages an additional signature verification mechanism. The feasibility of both schemes is validated through detailed theoretical analyses and extensive performance evaluations, including assessment of the resilience to attacks. In addition, we discuss some open issues and extensions relevant to our scheme.

Journal ArticleDOI
TL;DR: Contract theory is used to model the incentive mechanism in an OFDM-based cognitive IoT network under a practical scenario with incomplete information, where UIDs' private information is not known by PUs, and a heuristic UID selection method under a finite PU budget is proposed.
Abstract: The Internet of Things (IoT) is considered the future network to support machine-to-machine communications. To realize the IoT network, a large number of IoT devices need to be deployed, which leads to the problem of allocating sufficient spectrum to the rapidly increasing number of devices. Through cooperative spectrum sharing, the unlicensed IoT devices (UIDs) help forward the signals of primary users (PUs) in exchange for dedicated spectrum to transmit their own signals, which can effectively improve the spectrum utilization of the IoT network. However, UIDs are selfish and rational; they may not be willing to participate in cooperative spectrum sharing after considering the potential costs. It is therefore necessary to provide effective incentives to encourage them to take part in cooperative spectrum sharing. In this paper, we use contract theory to model the incentive mechanism in an OFDM-based cognitive IoT network under a practical scenario with incomplete information, where UIDs' private information (i.e., transmission cost and wireless channel characteristics) is not known by PUs. Using contract theory, the negotiations between PUs and UIDs are modeled as a labor market, where PUs and UIDs act as the employer and employees, respectively. We first study the optimal contract design to maximize PUs' utility as well as the social welfare, in which the contract consists of a menu of expected signal-to-noise ratios and payments over each subcarrier. Then, we propose a heuristic UID selection method under a finite PU budget. Finally, simulation results demonstrate the efficiency of our proposed contract design and UID selection method.

Journal ArticleDOI
TL;DR: A priority-based secondary user (SU) call admission and channel allocation scheme that outperforms the greedy non-priority and fair proportion schemes and reduces the blocking probability of higher-priority SU calls while maintaining a sufficient level of channel utilization.
Abstract: The Internet of Things (IoT) is a network of interconnected objects, in which every object in the world seeks to communicate and exchange information actively. This exponential growth of interconnected objects increases the demand for wireless spectrum. However, providing wireless channel access to every communicating object while ensuring its guaranteed quality of service (QoS) requirements is challenging and has not yet been explored, especially for IoT-enabled mission-critical applications and services. Meanwhile, Cognitive Radio-enabled Internet of Things (CR-IoT) is an emerging field that is considered the future of IoT. The combination of CR technology and IoT can better handle the increasing demands of various applications such as manufacturing, logistics, retail, environment, public safety, healthcare, food, and drugs. However, due to the limited and dynamic resource availability, CR-IoT cannot accommodate all types of users. In this paper, we first examine the availability of a licensed channel on the basis of its primary users' activities (e.g., traffic patterns). Second, we propose a priority-based secondary user (SU) call admission and channel allocation scheme, which is further based on a priority-based dynamic channel reservation scheme. The objective of our study is to reduce the blocking probability of higher-priority SU calls while maintaining a sufficient level of channel utilization. The arrival rates of SU calls of all priority classes are estimated using a Markov chain model, and channels for each priority class are then reserved based on this analysis. We compare the performance of the proposed scheme with the greedy non-priority and fair proportion schemes in terms of the SU call-blocking probability, SU call-dropping probability, channel utilization, and throughput. Numerical results show that the proposed priority scheme outperforms the greedy non-priority and fair proportion schemes.

Journal ArticleDOI
TL;DR: Fuzzy logic systems, with their excellent approximation ability, and an adaptive method are used to approximate the unknown system dynamics in the design of virtual controllers for the different-order subsystems, and thereby to design the consensus control protocol.
Abstract: This paper deals with the finite-time consensus control problem for a class of nonlinear strict-feedback multi-agent systems with heterogeneous dynamics. Due to the existence of unknown nonlinear dynamics of the system, this paper adopts the excellent approximation ability of fuzzy logic systems to design the consensus control protocol. Moreover, fuzzy logic systems and adaptive method are used to approximate the unknown system dynamics in the design of virtual controllers for different orders of the subsystems. The proposed direct adaptive fuzzy tracking controller for every agent is constructed via a backstepping design process. The proposed adaptive fuzzy controller can ensure that the outputs of all agents can track a common desired trajectory in finite time. Finally, the performance of the proposed fuzzy adaptive consensus control algorithm is demonstrated by two numerical studies of MASs including both homogeneous and heterogeneous dynamics of the interacting agents.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a statistical approach based on graphon theory to measure the importance of nodes in the presence of uncertainty, and established their connections to classical graph centrality measures.
Abstract: As relational datasets modeled as graphs keep increasing in size and their data-acquisition is permeated by uncertainty, graph-based analysis techniques can become computationally and conceptually challenging. In particular, node centrality measures rely on the assumption that the graph is perfectly known – a premise not necessarily fulfilled for large, uncertain networks. Accordingly, centrality measures may fail to faithfully extract the importance of nodes in the presence of uncertainty. To mitigate these problems, we suggest a statistical approach based on graphon theory: we introduce formal definitions of centrality measures for graphons and establish their connections to classical graph centrality measures. A key advantage of this approach is that centrality measures defined at the modeling level of graphons are inherently robust to stochastic variations of specific graph realizations. Using the theory of linear integral operators, we define degree, eigenvector, Katz and PageRank centrality functions for graphons and establish concentration inequalities demonstrating that graphon centrality functions arise naturally as limits of their counterparts defined on sequences of graphs of increasing size. The same concentration inequalities also provide high-probability bounds between the graphon centrality functions and the centrality measures on any sampled graph, thereby establishing a measure of uncertainty of the measured centrality score.
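
For reference, the graphon analogues of the simplest centrality measures can be written directly in terms of the kernel $W$; the display below states the standard degree and eigenvector centrality functions consistent with the abstract (the paper's Katz, PageRank, and concentration results are not reproduced).

```latex
% Degree and eigenvector centrality functions of a graphon W on [0,1]^2,
% stated in standard form (assumed notation; the paper's Katz/PageRank
% definitions and concentration inequalities are not reproduced here).
d(x) = \int_0^1 W(x,y)\,\mathrm{d}y ,
\qquad
\lambda_{\max}\, c(x) = \int_0^1 W(x,y)\, c(y)\,\mathrm{d}y ,
\quad x \in [0,1].
```

Here $d$ is the graphon degree centrality function, and $c$ is the eigenvector centrality function, i.e., the eigenfunction of the integral operator with kernel $W$ associated with its largest eigenvalue $\lambda_{\max}$.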

Journal ArticleDOI
TL;DR: A stochastic blockchain scheme is proposed to limit the number of cooperative nodes and distribute the load to IoT edge nodes, along with a lightweight mining process in which only the IoT edge nodes compete for block generation and share the block with other nodes.
Abstract: This paper proposes a blockchain-based data checking scheme to protect data integrity in the Internet of Things (IoT). Traditional data integrity schemes such as symmetric key approaches and public key infrastructure (PKI) suffer from a single point of failure and network congestion due to their centralized architecture. Motivated by the distributed data authentication in blockchain, we propose to adopt blockchain to ensure data integrity in IoT networks. However, existing blockchain schemes cannot be directly applied to IoT nodes with limited computing and network resources. Hence, we develop a stochastic blockchain scheme to limit the number of cooperative nodes and distribute the load to IoT edge nodes. In our scheme, the IoT data are broadcast by randomly selected cooperative nodes, thereby introducing uncertainty for the attacker and improving the system security level. Finally, we propose a lightweight mining process in which only the IoT edge nodes compete for block generation and share the block with other nodes. When our scheme is used with 9,000 legitimate nodes and 1,000 compromised nodes, only three cooperative nodes are needed to achieve a probability of successful defense above 99 percent.
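
One simple reading consistent with the quoted 99 percent figure (an illustrative back-of-the-envelope check, not necessarily the paper's exact analysis): if the defense fails only when every randomly chosen cooperative node is compromised, then

```latex
% Back-of-the-envelope check under the stated assumption, with
% N = 9{,}000 legitimate and M = 1{,}000 compromised nodes and k = 3
% cooperative nodes chosen uniformly at random:
P_{\mathrm{defense}} \;\ge\; 1 - \Big(\tfrac{M}{N+M}\Big)^{k}
  \;=\; 1 - \Big(\tfrac{1000}{10000}\Big)^{3} \;=\; 0.999 \;>\; 99\% .
```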

Journal ArticleDOI
TL;DR: This work develops a reinforcement learning routing algorithm (RL-Routing) to solve a traffic engineering problem of SDN in terms of throughput and delay; it considers comprehensive network information for state representation and uses one-to-many network configuration for routing choices.
Abstract: Communication networks are difficult to model and predict because they have become very sophisticated and dynamic. We develop a reinforcement learning routing algorithm (RL-Routing) to solve a traffic engineering (TE) problem of SDN in terms of throughput and delay. RL-Routing solves the TE problem via experience, instead of building an accurate mathematical model. We consider comprehensive network information for state representation and use one-to-many network configuration for routing choices. Our reward function, which uses network throughput and delay, is adjustable for optimizing either upward or downward network throughput. After appropriate training, the agent learns a policy that predicts future behavior of the underlying network and suggests better routing paths between switches. The simulation results show that RL-Routing obtains higher rewards and enables a host to transfer a large file faster than the Open Shortest Path First (OSPF) and Least Loaded (LL) routing algorithms on various network topologies. For example, on the NSFNet topology, the sum of rewards obtained by RL-Routing is 119.30, whereas those of OSPF and LL are 106.59 and 74.76, respectively. The average transmission time for a 40 GB file using RL-Routing is 25.2 s, whereas those of OSPF and LL are 63 s and 53.4 s, respectively.

Journal ArticleDOI
TL;DR: An Energy-Efficient Task Offloading (EETO) policy combined with a hierarchical fog network is proposed to handle the energy-performance trade-off by jointly scheduling and offloading real-time IoT applications, and a constraint-restricted progressive online task offloading policy is introduced to mitigate the backlog of the queues.
Abstract: In recent times, fog computing has become an emerging technology that can extend cloud services towards the network edge to speed up various Internet-of-Things (IoT) applications. In this context, integrating priority-aware scheduling and data offloading allows service providers to efficiently handle a large number of real-time IoT applications and enhance the capability of fog networks. However, energy consumption has been skyrocketing, and it gravely affects the performance of fog networks. To address this issue, in this paper, we introduce an Energy-Efficient Task Offloading (EETO) policy combined with a hierarchical fog network for handling the energy-performance trade-off by jointly scheduling and offloading real-time IoT applications. To achieve this objective, we formulate a heuristic technique for assigning a priority to each incoming task and formulate a stochastic data offloading problem with an efficient virtual queue stability approach, namely the Lyapunov optimization technique. The proposed technique utilizes the current state information to minimize the queue waiting time and overall energy consumption while bounding the drift-plus-penalty term. Furthermore, a constraint-restricted progressive online task offloading policy is introduced to mitigate the backlog of the queues. Extensive simulation with various Quality-of-Service (QoS) parameters shows that the proposed EETO mechanism performs better and saves about 23.79% of the energy usage compared to existing ones.
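
For reference, the generic drift-plus-penalty form behind the Lyapunov optimization step reads as follows; the specific queue dynamics and the energy penalty shown are standard placeholders rather than the paper's exact formulation.

```latex
% Generic drift-plus-penalty form (standard statement; the arrivals a_i,
% services b_i and energy penalty E(t) are placeholders, not the paper's
% exact model).  Each task queue evolves as
Q_i(t+1) = \max\!\big[\,Q_i(t) - b_i(t),\, 0\,\big] + a_i(t),
\qquad
L(\Theta(t)) = \tfrac{1}{2}\sum_i Q_i(t)^2 ,
% and each slot's offloading decision is chosen to minimize an upper bound on
\Delta(\Theta(t)) + V\,\mathbb{E}\big[\,E(t) \mid \Theta(t)\,\big],
% where \Delta is the one-slot conditional Lyapunov drift and the parameter V
% trades off energy consumption against queue backlog (i.e., delay).
```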

Journal ArticleDOI
TL;DR: This work proposes a novel and effective technique that incorporates latent structural constraints into binary compressed sensing and shows high accuracy and robust effectiveness of the method by analyzing artificial small-world and scale-free networks, as well as two empirical networks.
Abstract: A complex network is a model representation of interactions within technological, social, information, and biological networks. Oftentimes, we are interested in identifying the underlying network structure from limited and noisy observational data, which is a challenging problem. Here, to address this problem, we propose a novel and effective technique that incorporates latent structural constraints into binary compressed sensing. We show high accuracy and robust effectiveness of our proposed method by analyzing artificial small-world and scale-free networks, as well as two empirical networks. Our method requires a relatively small number of observations and it is robust against strong measurement noise. These results suggest that incorporating latent structural constraints into an algorithm for identifying the underlying network structure improves the inference of connections in complex networks.

Journal ArticleDOI
TL;DR: In this paper, a machine learning framework was proposed to deal with the user association, subchannel and power allocation problems in such a complex scenario, where the authors focused on maximizing the energy efficiency of the system under the constraints of quality of service (QoS), interference limitation, and power limitation.
Abstract: With the rapid development of future wireless communication, the combination of NOMA technology and millimeter-wave (mmWave) technology has become a research hotspot. The application of NOMA in mmWave heterogeneous networks can meet the diverse needs of users in different applications and scenarios in future communications. In this paper, we propose a machine learning framework to deal with the user association, subchannel and power allocation problems in such a complex scenario. We focus on maximizing the energy efficiency (EE) of the system under the constraints of quality of service (QoS), interference limitation, and power limitation. Specifically, user association is solved through the Lagrange dual decomposition method, while semi-supervised learning and a deep neural network (DNN) are used for the subchannel and power allocation, respectively. In particular, unlabeled samples are introduced to improve the approximation and generalization ability for subchannel allocation. The simulation indicates that the proposed scheme can achieve higher EE with lower complexity.

Journal ArticleDOI
TL;DR: An in-depth analysis of TC of anonymity tools (and, more deeply, of their running services and applications) via a truly hierarchical approach is provided, and a general improvement over the flat approach in terms of all classification metrics is highlighted.
Abstract: Privacy-preserving protocols and tools are increasingly adopted by Internet users nowadays. These mechanisms are challenged by the process of traffic classification (TC) which, besides being an important workhorse for several network management tasks, becomes a key factor in the assessment of their privacy level, both from offensive (malign) and defensive (benign) standpoints. In this paper, we propose TC of anonymity tools (and deeper, of their running services and applications) via a truly hierarchical approach. Capitalizing on a public dataset released in 2017 containing anonymity traffic, we provide an in-depth analysis of TC and we compare the proposed hierarchical approach with a flat counterpart. The proposed framework is investigated in both the usual TC setup and its "early" variant (i.e., only the first segments of a traffic aggregate are used to take a decision). Results highlight a general improvement over the flat approach in terms of all the classification metrics. Further performance gains are also accomplished by tuning the thresholds ensuring progressive censoring. Finally, fine-grained performance investigation allows us to demonstrate the lower severity of errors incurred by the hierarchical approach (as opposed to the flat case) and to highlight poorly classifiable services/applications of each anonymity tool, gathering useful feedback on their privacy level.
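
The "truly hierarchical" idea can be sketched with off-the-shelf classifiers: a parent model first decides the anonymity tool, then a per-tool child model decides the service or application. Features, labels, and the choice of random forests below are placeholder assumptions; the early-classification and censoring-threshold machinery is not shown.

```python
# Minimal sketch of hierarchical traffic classification: a parent classifier
# predicts the anonymity tool, then a per-tool child classifier predicts the
# service.  Synthetic features/labels and random forests are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))              # toy traffic features
tool = rng.integers(0, 3, 300)                  # level 1: anonymity tool
service = tool * 10 + rng.integers(0, 4, 300)   # level 2: service within tool

parent = RandomForestClassifier(random_state=0).fit(X, tool)
children = {t: RandomForestClassifier(random_state=0)
               .fit(X[tool == t], service[tool == t])
            for t in np.unique(tool)}

def classify(x):
    t = parent.predict(x.reshape(1, -1))[0]      # decide the tool first
    return t, children[t].predict(x.reshape(1, -1))[0]

print(classify(X[0]))
```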

Journal ArticleDOI
TL;DR: An air-ground integrated network for B5G wireless communications is presented, where a UAV is deployed as an aerial radio access platform to formulate system strategy intelligently, as well as to provide task offloading and energy harvesting opportunities for terrestrial devices.
Abstract: Providing ubiquitous network accessibility is the key goal for 5G and Beyond 5G (B5G) networks. As the number of devices and the volume of data increase, ensuring the Quality of Service (QoS) with the existing network alone can be challenging. Meanwhile, it is important to perform computation-intensive or delay-sensitive tasks well and provide long-term services in B5G networks. Therefore, we present an air-ground integrated B5G network, where a UAV is deployed as an aerial platform to formulate system strategy intelligently, as well as to provide task offloading and energy harvesting opportunities for terrestrial devices. To gain more insight, we propose an intelligent charging-offloading scheme and formulate the joint multi-task charging-offloading scheduling as an optimization problem aiming to minimize the service latency of all devices by jointly optimizing the task offloading decisions, connection scheduling, and charging and computation resource allocation. However, the formulated problem is a Mixed-Integer Nonlinear Programming problem, which is challenging to solve in general. Therefore, we decompose it into multiple convex sub-problems based on the Block-Coordinate Descent method. Performance evaluation demonstrates that our scheme outperforms the benchmarks in terms of the system service latency of all UDs. Moreover, we present the system working process and industrial applications.