
Showing papers by "Mohsen Guizani published in 2023"


Journal ArticleDOI
TL;DR: In this paper, the authors conduct an in-depth survey of the existing intrusion detection solutions proposed for the IoT ecosystem, which includes the IoT devices as well as the communications between the IoT, fog computing, and cloud computing layers.
Abstract: In the past several years, the world has witnessed an acute surge in the production and usage of smart devices which are referred to as the Internet of Things (IoT). These devices interact with each other as well as with their surrounding environments to sense, gather and process data of various kinds. Such devices are now part of our everyday life and are being actively used in several verticals, such as transportation, healthcare, and smart homes. IoT devices, which usually are resource-constrained, often need to communicate with other devices, such as fog nodes and/or cloud computing servers, to accomplish certain tasks that demand large resource requirements. These communications entail unprecedented security vulnerabilities, where malicious parties find in this heterogeneous and multiparty architecture a compelling platform to launch their attacks. In this work, we conduct an in-depth survey of the existing intrusion detection solutions proposed for the IoT ecosystem, which includes the IoT devices as well as the communications between the IoT, fog computing, and cloud computing layers. Although some survey articles already exist, the originality of this work stems from the following three points: 1) we discuss the security issues of the IoT ecosystem not only from the perspective of IoT devices but also taking into account the communications between the IoT, fog, and cloud computing layers; 2) we propose a novel two-level classification scheme that first categorizes the literature based on the approach used to detect attacks and then classifies each approach into a set of subtechniques; and 3) we propose a comprehensive cybersecurity framework that combines the concepts of explainable artificial intelligence (XAI), federated learning, game theory, and social psychology to offer future IoT systems strong protection against cyberattacks.

17 citations


Journal ArticleDOI
TL;DR: In this article, a novel blockchain-based approach is introduced to manage multi-drone collaboration during a swarm operation. The authors aim to improve the security of the consensus achievement process of multi-drone collaboration, as well as energy efficiency and connectivity during the environment's exploration, while maintaining consensus achievement effectiveness.
Abstract: The Internet of Drones (IoD) allows drones to collaborate safely while operating in a restricted airspace for numerous applications in the Industry 4.0 world. Energy efficiency and sharing sensing data are the main challenges in swarm-drone collaboration for performing complex tasks effectively and efficiently in real-time. Information security of consensus achievement is required for multi-drone collaboration in the presence of Byzantine drones. Even a few Byzantine drones may be enough to cause present swarm coordination techniques to collapse, resulting in unpredictable or calamitous results. One or more Byzantine drones may lead to failure in consensus while exploring the environment. Moreover, blockchain technology is still at an early stage for swarm-drone collaboration. Therefore, we introduce a novel blockchain-based approach to managing multi-drone collaboration during a swarm operation. Within drone swarms, blockchain technology is utilized as a communication tool to broadcast instructions to the swarm. This paper aims to improve the security of the consensus achievement process of multi-drone collaboration, energy efficiency, and connectivity during the environment's exploration while maintaining consensus achievement effectiveness. Improving the security of consensus achievement among drones will increase the feasibility and stability of multi-drone applications by improving connectivity and energy efficiency in the smart world and solving real environmental issues.
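The Byzantine-consensus issue described above can be illustrated with a toy majority vote; this is a hypothetical sketch (not the paper's actual protocol), using the classical Byzantine fault-tolerance bound n >= 3f + 1, with `can_tolerate` and `majority_vote` as illustrative names:

```python
from collections import Counter

def can_tolerate(n: int, f: int) -> bool:
    """Classical BFT bound: n drones can reach consensus despite f Byzantine
    drones only if n >= 3f + 1."""
    return n >= 3 * f + 1

def majority_vote(observations: list) -> object:
    """Return the value reported by a strict majority of drones, or None
    when no strict majority exists (consensus failure)."""
    value, count = Counter(observations).most_common(1)[0]
    return value if count > len(observations) / 2 else None
```

For example, a swarm of four drones tolerates one Byzantine member, while a swarm of three does not; a single faulty reporter in a four-drone swarm is outvoted.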

11 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate a UAV-enabled WPT system that transmits power to a set of sensor nodes at unknown positions, and formulate a multi-objective optimization problem to jointly optimize the UAV's search efficiency, total harvested energy, flight energy consumption, and energy utilization efficiency.
Abstract: Due to outstanding merits such as mobility, high maneuverability, and flexibility, Unmanned Aerial Vehicles (UAVs) are viable mobile power transmitters that can be rapidly deployed in geographically constrained regions. They are good candidates for supplying power to energy-limited Sensor Nodes (SNs) with Wireless Power Transfer (WPT) technology. In this paper, we investigate a UAV-enabled WPT system that transmits power to a set of SNs at unknown positions. A key challenge is how to efficiently gather the locations of SNs and design a power transfer scheme. We formulate a multi-objective optimization problem to jointly optimize these objectives: maximization of the UAV's search efficiency, maximization of total harvested energy, minimization of the UAV's flight energy consumption, and maximization of the UAV's energy utilization efficiency. To tackle these issues, we present a two-stage strategy that includes a UAV Motion Control (UMC) algorithm for obtaining the coordinates of SNs and a Dynamic Genetic Clustering (DGC) algorithm for power transfer via grouping SNs into clusters. First, the UMC algorithm enables the UAV to autonomously control its own motion and conduct target search missions. The objective is to make the energy-restricted UAV find as many SNs as possible without any a priori knowledge of their information. Second, the DGC algorithm is used to optimize the energy consumption of the UAV by combining a genetic clustering algorithm with a dynamic clustering strategy to maximize the amount of energy harvested by SNs and the energy utilization efficiency of the UAV. Finally, experimental results show that the proposed algorithms outperform their counterparts.
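The clustering step that precedes per-cluster power transfer can be sketched in miniature; this is an illustrative nearest-centroid assignment, not the paper's DGC algorithm (`assign_clusters` is a hypothetical name):

```python
import math

def assign_clusters(nodes, centroids):
    """Map each sensor node (x, y) to the index of its nearest cluster
    centroid, as a stand-in for grouping SNs before power transfer."""
    return [min(range(len(centroids)),
                key=lambda i: math.dist(n, centroids[i]))
            for n in nodes]
```

A genetic clustering algorithm, as in the paper, would instead evolve candidate centroid sets and score them with a fitness function such as total harvested energy.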

8 citations


Journal ArticleDOI
TL;DR: In this paper, Wang et al. propose a deep distributed learning-based POI recommendation (Deep-PR) method for mobile edge networks, in which hidden feature components from both local and global subspaces are deeply abstracted via representative learning schemes.
Abstract: With the rapid development of edge intelligence in wireless communication networks, mobile-edge networks (MENs) have been broadly discussed in academia. Supported by the considerable geographical data acquisition ability of the mobile Internet of Things (IoT), MENs can also provide spatial location-based social services to users. Therefore, suggesting reasonable points-of-interest (POIs) to users is essential to improve the user experience of MENs. As simple user-location data is usually sparse and not informative, the existing literature has attempted to extend the feature space from two perspectives: 1) contextual patterns and 2) semantic patterns. However, previous approaches mainly focused on internal features of users, yet ignored latent external features among them. To address this challenge, in this article, a deep distributed-learning-based POI recommendation (Deep-PR) method is proposed for MEN scenarios. In particular, hidden feature components from both local and global subspaces are deeply abstracted via representative learning schemes. Besides, propagation operations are embedded to iteratively reoptimize expressions of the feature space. The combined effect of these two aspects contributes to more fine-grained feature spaces, so that recommendation accuracy can be ensured. Two types of experiments are carried out on three real-world data sets to assess both the efficiency and stability of the proposed Deep-PR. Compared with seven typical baselines with respect to four evaluation metrics, the overall performance of Deep-PR is excellent.

6 citations


Journal ArticleDOI
TL;DR: In this paper, a multi-timescale VNF embedding and flow scheduling algorithm named NEWS is proposed to maximize throughput while reducing VNF embedding cost and energy consumption in the multi-mode green Internet of Things (IoT).
Abstract: The multi-mode green Internet of Things (IoT) provides communication support for social assets of a smart park connecting to the power grid for low-carbon operation. Software defined networking (SDN) and network function virtualization (NFV) can flexibly integrate heterogeneous communication modes through network resource scheduling and route management. However, the joint optimization of virtual network function (VNF) embedding and flow scheduling faces several challenges: differentiated QoS guarantees, the coupling and externality of VNF embedding, and route selection conflicts. In this work, a multi-timescale VNF Embedding and floW Scheduling algorithm named NEWS is proposed to maximize throughput while reducing VNF embedding cost and energy consumption. Specifically, the joint optimization problem is transformed into three subproblems: large-timescale VNF embedding, small-timescale admission control, and small-timescale route selection and computation resource allocation. A swap matching-based low-cost VNF embedding algorithm is proposed for the first subproblem. Then, a queue backlog threshold-based admission control strategy is proposed for the second subproblem. Next, the third subproblem is decomposed into two stages, where a collaborative Q-learning-based backpressure-aware algorithm is presented in the first stage, and a greedy-based computation resource allocation algorithm is given in the second stage. Simulations demonstrate that NEWS achieves superior throughput, embedding cost, and energy consumption.

4 citations


Journal ArticleDOI
TL;DR: In this article, a wave energy prediction model based on a Gated Recurrent Unit (GRU) network is proposed, and a Bayesian optimization algorithm and an attention mechanism are introduced to improve the model's performance.

4 citations


Journal ArticleDOI
TL;DR: In this paper, an AI-enabled secure communication mechanism in a fog computing-based healthcare system (in short, AISCM-FH) is proposed, which provides superior security and extra functionality attributes compared to other competing existing approaches.
Abstract: Fog computing-based Internet of Things (IoT) architecture is useful for various types of delay-efficient network communications and services, like digital healthcare. However, there are privacy and security issues with fog computing-based healthcare systems, which can further increase the risk of leakage of sensitive healthcare data. Therefore, a security mechanism, such as access control for fog computing-based healthcare systems, is needed to protect the data against various potential attacks. Moreover, blockchain technology can be used to solve digital healthcare's data-integrity problems. The use of Artificial Intelligence (AI) further makes the system more effective in predicting health-related diseases. In this paper, an AI-enabled secure communication mechanism in a fog computing-based healthcare system (in short, AISCM-FH) is proposed. The security analysis of the proposed AISCM-FH is provided using the standard random oracle model and also with heuristic (non-mathematical) security analysis. A pragmatic study determines the impact of the proposed AISCM-FH on key performance indicators. Moreover, we include a detailed performance comparison of AISCM-FH with other relevant existing schemes to show that it has low communication and computation costs, and provides superior security and extra functionality attributes compared to other competing approaches.

4 citations


Journal ArticleDOI
01 Feb 2023
TL;DR: This paper proposes C-HealthIER, a cooperative health intelligent emergency response system that aims to reduce the time to receive the first emergency treatment for passengers with abnormal health conditions. It conducts cooperative behavior in response to health emergencies through vehicle-to-vehicle and vehicle-to-infrastructure information sharing to find the nearest treatment provider.
Abstract: The advancement of wireless connectivity in smart cities will enhance connections between their various key elements. Federated intelligent health monitoring systems inside autonomous vehicles will achieve smart cities' goal of improving the quality of life. This paper proposes a novel cooperative health emergency response system within a Cooperative Intelligent Transportation Environment, namely, C-HealthIER. C-HealthIER is a cooperative health intelligent emergency response system that aims to reduce the time to receive the first emergency treatment for passengers with abnormal health conditions. C-HealthIER continuously monitors passengers' health and conducts cooperative behavior in response to health emergencies through vehicle-to-vehicle and vehicle-to-infrastructure information sharing to find the nearest treatment provider. A simulation that integrates three different tools (Veins, SUMO, and OMNeT++) showed that C-HealthIER reduces the total time to receive emergency treatment by at least 92.5% and the time to receive the first emergency treatment by at least 73.2% compared to the time taken by the AutoPilot mode in self-driving cars. C-HealthIER also reduces the travel distance to the first emergency treatment place by 40.9% and thus reduces the travel time by 43.8% compared to receiving treatment at the same hospital in AutoPilot mode.
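The "nearest treatment provider" selection that follows the V2V/V2I information exchange can be sketched as a minimal distance-based choice; the provider names and straight-line distance metric here are hypothetical, and the real system would presumably use road-network travel time:

```python
import math

def nearest_provider(vehicle_pos, providers):
    """Given the vehicle position and a list of (name, position) pairs
    gathered via V2V/V2I sharing, return the closest provider."""
    return min(providers, key=lambda p: math.dist(vehicle_pos, p[1]))
```

For instance, a vehicle at the origin choosing between a hospital at (5, 5) and a clinic at (1, 1) would be routed to the clinic.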

4 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a novel approach for an adaptive upgrade of client resources in federated learning (FL) systems, where the goal is to discover and train in each round the maximum number of samples with the highest quality in order to reach the desired performance.
Abstract: Conventional systems are usually constrained to store data in a centralized location. This restriction has either precluded sensitive data from being shared or put its privacy on the line. Alternatively, federated learning (FL) has emerged as a promising privacy-preserving paradigm that exchanges model parameters instead of the private data of Internet of Things (IoT) devices, known as clients. FL trains a global model by communicating local models generated by selected clients over many communication rounds until high learning performance is ensured. In these settings, FL performance highly depends on selecting the best available clients. This process is strongly related to the quality of their models and their training data. Such selection-based schemes have not been explored yet, particularly for participating clients that have high-quality data yet limited resources. To address these challenges, we propose in this article FedAUR, a novel approach for an adaptive upgrade of client resources in FL. We first introduce a method to measure how a locally generated model affects and improves the global model if selected for aggregation, without revealing raw data. Next, based on the significance of each client's parameters and the resources of their devices, we design a selection scheme that manages and distributes available resources on the server among the appropriate subset of clients. This client selection and resource allocation problem is thus formulated as an optimization problem, where the purpose is to discover and train in each round the maximum number of samples with the highest quality in order to reach the desired performance. Moreover, we present a Kubernetes-based prototype that we implemented to evaluate the performance of the proposed approach.
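The aggregation step underlying such client-selection schemes can be sketched with a minimal FedAvg-style weighted average; this is standard federated averaging over already-selected clients, not FedAUR's selection or significance-measurement method itself:

```python
def fedavg(local_models, sample_counts):
    """Aggregate per-client parameter vectors into a global model,
    weighting each selected client by its local sample count."""
    total = sum(sample_counts)
    dim = len(local_models[0])
    return [sum(m[i] * c for m, c in zip(local_models, sample_counts)) / total
            for i in range(dim)]
```

A client holding three times as many samples thus pulls the global parameters three times as strongly toward its local model.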

4 citations


Journal ArticleDOI
TL;DR: In this paper, a machine learning-based framework for intelligent resource provisioning for micro-grid-connected green SCBSs with a completely modified ring parametric distribution method is proposed, along with an algorithmic implementation for prediction-based renewable resource redistribution with an energy flow control unit mechanism for grid-connected SCBSs, eliminating the need for centralized hardware.
Abstract: Optimal resource provisioning and management of next generation communication networks are crucial for attaining seamless quality of service with reduced environmental impact. Considering the ecological assessment, urban and rural telecommunication infrastructure is moving toward deploying green cellular base stations to cater to the ever-growing traffic demands of heterogeneous networks. In such scenarios, existing learning-based renewable resource provisioning methods lack intelligent and optimal resource management at small cell base stations (SCBSs). Therefore, in this article, we present a novel machine learning-based framework for intelligent resource provisioning for micro-grid-connected green SCBSs with a completely modified ring parametric distribution method. In addition, an algorithmic implementation is proposed for prediction-based renewable resource redistribution with an energy flow control unit mechanism for grid-connected SCBSs, eliminating the need for centralized hardware. Moreover, this modeling enables the prediction mechanism to estimate the future on-demand traffic provisioning capability of SCBSs. Furthermore, we present a numerical analysis of the proposed framework showcasing the system's ability to attain a balanced energy convergence level across all SCBSs at the end of the periodic cycle, signifying our model's merits.

3 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a joint multi-task offloading and resource allocation scheme in satellite IoT to improve offloading efficiency, where tasks with dependencies are modeled as directed acyclic graphs, and an attention mechanism and proximal policy optimization (A-PPO) collaborative algorithm is proposed to learn the best offloading strategy.
Abstract: For multi-task mobile edge computing (MEC) systems in satellite Internet of Things (IoT), there are dependencies between different tasks, which need to be collected and jointly offloaded. It is crucial to allocate computing and communication resources reasonably due to the scarcity of satellite communication and computing resources. To address this issue, we propose a joint multi-task offloading and resource allocation scheme in satellite IoT to improve offloading efficiency. We first construct a novel resource allocation and task scheduling system in which tasks are collected and decided by multiple unmanned aerial vehicle (UAV)-based aerial base stations, while edge computing services are provided by satellites. Furthermore, we investigate the multi-task joint computation offloading problem in this framework. Specifically, we model tasks with dependencies as directed acyclic graphs (DAGs); then we propose an attention mechanism and proximal policy optimization (A-PPO) collaborative algorithm to learn the best offloading strategy. The simulation results show that the A-PPO algorithm can converge in 25 steps. Furthermore, the A-PPO algorithm reduces cost by at least 8.87% compared to several baseline algorithms. In summary, this paper provides new insight into the cost optimization of multi-task MEC systems in satellite IoT.
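The DAG dependency structure described above implies that dependent tasks must be offloaded in a valid topological order; the following is just that ordering step (using Python's standard-library `graphlib`), not the A-PPO policy from the paper:

```python
from graphlib import TopologicalSorter

def offload_order(deps):
    """deps maps each task to the set of tasks it depends on;
    returns one valid execution/offloading order for the DAG."""
    return list(TopologicalSorter(deps).static_order())
```

For a chain where task c needs a and b, and b needs a, any returned order places a before b and b before c; the learned policy would then choose where (UAV or satellite) each task in that order runs.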

Journal ArticleDOI
TL;DR: In this article, the authors discuss the role of the metaverse in enabling wireless applications and present an overview, key enablers, design aspects (i.e., metaverse for wireless and wireless for metaverse), and a novel high-level architecture of metaverse-based wireless systems.
Abstract: The growing landscape of emerging wireless applications is a key driver toward the development of novel wireless system designs. Such a design can be based on the metaverse that uses a virtual model of the physical world systems along with other schemes/technologies (e.g., optimization theory, machine learning, and blockchain). A metaverse using a virtual model performs proactive intelligent analytics prior to a user request for efficient management of the wireless system resources. Additionally, a metaverse will enable self-sustainability to operate wireless systems with the least possible intervention from network operators. Although the metaverse can offer many benefits, it faces some challenges as well. Therefore, in this tutorial, we discuss the role of a metaverse in enabling wireless applications. We present an overview, key enablers, design aspects (i.e., metaverse for wireless and wireless for metaverse), and a novel high-level architecture of metaverse-based wireless systems. We discuss metaverse management, reliability, and security of the metaverse-based system. Furthermore, we discuss recent advances and standardization of metaverse-enabled wireless systems. Finally, we outline open challenges and present possible solutions.

Journal ArticleDOI
TL;DR: In this article, a modal-aware resource allocation architecture for cross-modal collaborative communication is proposed to improve quality of service (QoS) and users' satisfaction in the IIoT.
Abstract: With the development of human-machine interactions, users are increasingly evolving toward an immersive experience with multi-dimensional stimuli. Facing this trend, cross-modal collaborative communication is considered an effective technology in the Industrial Internet of Things (IIoT). In this paper, we focus on open issues of resource reuse, pair interactivity, and user assurance in cross-modal collaborative communication to improve quality of service (QoS) and users' satisfaction. Therefore, we propose a novel architecture of modal-aware resource allocation to address these issues. First, taking all the characteristics of the multiple modalities into account, we introduce network slices to virtualize resource allocation, which is modeled as a Markov Decision Process (MDP). Second, we decompose the problem through the transformation of the probabilistic constraint and Lyapunov optimization. Third, we propose a deep reinforcement learning (DRL) decentralized method for the dynamic environment. Meanwhile, a federated DRL framework is provided to overcome the training limitations of local DRL models. Finally, numerical results demonstrate that our proposed method performs better than other decentralized methods and achieves superiority in cross-modal collaborative communications.

Journal ArticleDOI
TL;DR: In this paper, the authors conduct a survey exploring enhanced AI-based solutions for achieving energy sustainability in IoT applications, through the integration of various Machine Learning (ML) and Swarm Intelligence (SI) techniques in the design of existing protocols.
Abstract: The massive number of Internet of Things (IoT) devices connected to the Internet is continuously increasing. The operations of these devices rely on consuming huge amounts of energy. Power limitation is a major issue hindering the operation of IoT applications and services. The low-power devices which constitute IoT networks drive the need for sustainable sources of energy to carry out their tasks for a prolonged period of time. Moreover, the means to ensure energy sustainability and QoS must consider the stochastic nature of energy supplies and dynamic IoT environments. Artificial Intelligence (AI)-enhanced protocols and algorithms are capable of predicting and forecasting demand as well as providing leverage at different stages, from energy use to supply. AI will improve the efficiency of energy infrastructure and decrease waste in distributed energy systems, ensuring their long-term viability. In this paper, we conduct a survey to explore enhanced AI-based solutions for achieving energy sustainability in IoT applications. AI is relevant through the integration of various Machine Learning (ML) and Swarm Intelligence (SI) techniques in the design of existing protocols. ML mechanisms used in the literature include various supervised and unsupervised learning methods as well as reinforcement learning (RL) solutions. The survey constitutes a complete guide for readers who wish to get acquainted with recent developments and research advances in AI-based energy sustainability in IoT networks. The survey also explores the different open issues and challenges.

Journal ArticleDOI
TL;DR: In this article, the authors propose a solution to the security and interoperability challenges using Self-Sovereign Identity (SSI) integrated with blockchain, where users are the only holders and owners of their identity.
Abstract: With the advancement in computing power and speed, the Internet is being transformed from screen-based information to immersive and extremely low latency communication environments in Web 3.0 and the Metaverse. With the emergence of Metaverse technology, more stringent demands are required in terms of connectivity, such as secure access and data privacy. Future technologies such as 6G, blockchain, and Artificial Intelligence (AI) can mitigate some of these challenges. The Metaverse is now at a stage where security and privacy concerns are crucial for the successful adoption of such a disruptive technology. The Metaverse and Web 3.0 are to be decentralized, anonymous, and interoperable. The Metaverse is the virtual world of Digital Twins and non-fungible tokens (NFTs). The control and possession of users' data on centralized servers is the cause of numerous security and privacy concerns. This paper proposes a solution to the security and interoperability challenges using Self-Sovereign Identity (SSI) integrated with blockchain. The philosophy of Self-Sovereign Identity, where users are the only holders and owners of their identity, helps solve the questions of decentralization, trust, and interoperability in the Metaverse. This work also discusses the vision of a single, open-standard, trustworthy, and interoperable Metaverse with an initial design and implementation of SSI concepts.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a deep reinforcement learning-based scheme to solve the joint optimization problem of dynamic UD-station association and resource allocation, thereby minimizing energy consumption within a limited time delay.
Abstract: Federated learning (FL) allows user devices (UDs) to upload local model parameters to participate in global model training, which protects UDs' data privacy. Nevertheless, FL still faces challenges such as core network congestion, UDs' limited resources, and inefficient mapping between devices and cyber systems. Therefore, in this article, we integrate digital twin (DT) and mobile edge computing (MEC) technologies into a hierarchical FL framework in a heterogeneous cellular network scenario. When UDs are not in the service range of the small base stations (SBSs), the framework allows macro base stations to assist UDs' local computation, thus reducing the transmission delay. It also protects user privacy and allows more users to join the training in order to improve FL accuracy. In addition, we propose a deep reinforcement learning-based scheme to solve the joint optimization problem of dynamic UD-station association and resource allocation, thereby minimizing energy consumption within a limited time delay. Simulation results show that our proposed scheme not only effectively reduces the task transmission failure rate and energy consumption compared with the baseline scheme, but also saves communication cost through the DT network.

Journal ArticleDOI
TL;DR: In this paper, the authors propose an intelligent resource trading framework that integrates multi-agent deep reinforcement learning (MADRL), blockchain, and game theory to manage dynamic resource trading environments.
Abstract: With the Industrial Internet of Things (IIoT), mobile devices (MDs) and their demands for low-latency data communication are increasing. Due to the limited resources of MDs, such as energy, computation, storage, and bandwidth, IIoT systems cannot meet MDs' quality of service (QoS) and security requirements. Recently, UAVs have been deployed as aerial base stations in the IIoT network to provide connectivity and share resources with MDs. We consider a resource trading environment where multiple resource providers compete to sell their resources to MDs and maximize their profit by continually adjusting their pricing strategies. Multiple MDs, on the other hand, interact with the environment to make purchasing decisions based on the prices set by resource providers to reduce costs and improve QoS. We propose a novel intelligent resource trading framework that integrates multi-agent deep reinforcement learning (MADRL), blockchain, and game theory to manage dynamic resource trading environments. A consortium blockchain with a smart contract is deployed to ensure the security and privacy of resource transactions. We formulate the optimization problem using a Stackelberg game. However, the formulated optimization problem in the multi-agent IIoT environment is complex and dynamic, making it difficult to solve directly. Thus, we transform it into a stochastic game to capture the dynamics of the optimization problem. We propose a dynamic pricing algorithm that combines the Stackelberg game with the MADRL algorithm to solve the formulated stochastic game. The simulation results show that our proposed scheme outperforms others in improving resource trading in UAV-assisted IIoT networks.
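The Stackelberg structure of the pricing problem can be illustrated with a toy single-leader, single-follower model; the linear demand function d(p) = max(0, 1 - p) and the grid search are illustrative assumptions, not the paper's MADRL formulation:

```python
def follower_demand(price: float) -> float:
    """Follower (mobile device) best response: buy less as price rises."""
    return max(0.0, 1.0 - price)

def leader_best_price(grid_steps: int = 1000) -> float:
    """Leader (resource provider) picks the price maximizing revenue
    p * d(p), anticipating the follower's best response."""
    prices = [i / grid_steps for i in range(grid_steps + 1)]
    return max(prices, key=lambda p: p * follower_demand(p))
```

For this demand curve the leader's revenue p(1 - p) peaks at p = 0.5; with many competing providers and MDs, the same anticipation structure is what the stochastic game and MADRL agents approximate.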

Journal ArticleDOI
TL;DR: In this article, an application-specific channel selection (ASCS) scheme with a cache-enabled two-tier 6G Heterogeneous Network (HetNet) is proposed.
Abstract: The number of wireless communication devices has grown exponentially with the evolution of wireless systems. Moreover, the development of technologies such as the Internet of Things (IoT) and massive machine-type communications (mMTC) is also fueling the rapid increase in the number of communication devices. It is anticipated that through 6G, wireless communication networks will move in the direction of ultra-dense distribution of enormous numbers of devices with assorted service and rate requirements. This requires huge network energy, capacity, and efficient deployment of the spectrum. This paper employs the massive multi-input multi-output (mMIMO) and non-orthogonal multiple access (NOMA) techniques that increase the multiplexing gain and network capacity. This paper also utilizes the benefits of millimeter wave (mmWave) and Terahertz (THz) channels that possess considerably higher communication capacity. However, these schemes are highly complex. To mitigate this problem, we use BS caching, as diverse applications in 6G will have varied rate requirements. In addition, we propose an application-specific channel selection (ASCS) scheme with a cache-enabled two-tier 6G Heterogeneous Network (HetNet). In the proposed scheme, according to the application requirements of the small cell users, the small cell base station (SBS) switches downlink channels dynamically. The simulation results illustrate that the proposed ASCS scheme can achieve higher spectral efficiency, energy efficiency, and throughput for the cache-enabled two-tier 6G HetNet.

Journal ArticleDOI
TL;DR: This article combines ocean research with the IoT to investigate wave height prediction, which can assist ships in improving the economy and safety of maritime transportation, and proposes a green and low-carbon ocean IoT, the Green Ocean of Things (GOoT).
Abstract: Nowadays, the application fields of the Internet of Things (IoT) cover virtually every aspect of life. This article combines ocean research with the IoT to investigate wave height prediction, which can assist ships in improving the economy and safety of maritime transportation, and proposes a green and low-carbon ocean IoT, the Green Ocean of Things (GOoT). For wave height prediction, we apply a hybrid model (EMD-TCN) combining the temporal convolutional network (TCN) and empirical mode decomposition (EMD) to buoy observation data. We then compare it with TCN, long short-term memory (LSTM), and the hybrid model EMD-LSTM. By testing the data of eight selected NDBC buoys distributed in different sea areas, the effectiveness of the EMD-TCN hybrid model in wave height prediction is verified. The hysteresis problem in previous wave height prediction research is eliminated, while the accuracy of wave height prediction is improved. In the 24 h, 36 h, and 48 h wave height predictions, the minimum mean absolute errors are 0.1265, 0.1689, and 0.1963, respectively; the maximum coefficients of determination are 0.9388, 0.9019, and 0.8712, respectively. In addition, in short-term prediction, the EMD-TCN hybrid model also performs well and shows strong versatility.
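The two evaluation metrics quoted in the abstract (mean absolute error and the coefficient of determination R²) are standard and can be computed as follows; the wave-height series in the usage example are made up for illustration, not the NDBC buoy data:

```python
def mae(y_true, y_pred):
    """Mean absolute error between observed and predicted wave heights."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect forecast gives MAE 0 and R² 1; the abstract's best 24 h figures (MAE 0.1265, R² 0.9388) are read off exactly these formulas.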

Journal ArticleDOI
TL;DR: In this paper, a federated learning-based algorithm is proposed to solve the embedding problem of SFCs in SAGIN, and an SFC scheduling mechanism is proposed that allows SFC reconfiguration to reduce the service blocking rate.
Abstract: Traditional terrestrial wireless communication networks cannot support the requirements of high-quality services for artificial intelligence applications such as smart cities. The space–air–ground-integrated network (SAGIN) could provide a solution to this challenge. However, SAGIN comprises heterogeneous, time-varying, and multidimensional information sources, making it difficult for traditional network architectures to support resource allocation in large-scale complex network environments. This article proposes a service provision method based on service function chaining (SFC) to solve this problem. Network function virtualization (NFV) is essential for efficient resource allocation in SAGIN to meet the resource requirements of user service requests. We propose a federated learning (FL)-based algorithm to solve the embedding problem of SFCs in SAGIN. The algorithm considers the different characteristics of nodes and their resource load to balance resource consumption. Then, an SFC scheduling mechanism is proposed that allows SFC reconfiguration to reduce the service blocking rate. Simulation results show that our proposed FL-VNFE algorithm outperforms other algorithms, with 12.9%, 2.52%, and 10.5% improvements in long-term average revenue, acceptance rate, and long-term average revenue–cost ratio, respectively.
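The load-balancing idea behind SFC embedding can be illustrated with a simple greedy placement rule, sketched below. The node names, capacities, and the most-remaining-capacity heuristic are assumptions for illustration; the paper's FL-VNFE algorithm learns the embedding policy instead of using a fixed rule.

```python
# Illustrative load-balanced embedding of a service function chain (SFC):
# each virtual network function (VNF) is placed on the feasible node with
# the most remaining capacity, spreading resource consumption across
# satellite, UAV, and ground nodes.

def embed_sfc(sfc_demands, node_capacity):
    """Place each VNF of an SFC; return the placement, or None if blocked."""
    remaining = dict(node_capacity)
    placement = []
    for demand in sfc_demands:
        feasible = [n for n, cap in remaining.items() if cap >= demand]
        if not feasible:
            return None  # request blocked: no node can host this VNF
        best = max(feasible, key=lambda n: remaining[n])
        remaining[best] -= demand
        placement.append(best)
    return placement

if __name__ == "__main__":
    nodes = {"satellite": 4, "uav": 6, "ground": 10}
    print(embed_sfc([3, 3, 3], nodes))
```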

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a dynamic resource allocation framework that synergizes blockchain and multi-agent deep reinforcement learning for multi-UAV-enabled 5G-RAN to allocate resources to smart mobile user equipment (SMUE) at optimal cost.
Abstract: In 5G and B5G networks, real-time and secure resource allocation over common telecom infrastructure is challenging. This problem becomes more severe as the number of mobile users grows and when connectivity is interrupted by natural disasters or other emergencies. To address the resource allocation problem, the network slicing technique has been applied to assign virtualized resources to multiple network slices, guaranteeing 5G-RAN quality of service. Moreover, to tackle connectivity interruptions during emergencies, UAVs have been deployed as airborne base stations, providing various services to ground networks. However, this increases the complexity of resource allocation in the shared infrastructure of 5G-RAN. Therefore, this paper proposes a dynamic resource allocation framework that synergizes blockchain and multi-agent deep reinforcement learning for multi-UAV-enabled 5G-RAN to allocate resources to smart mobile user equipment (SMUE) at optimal cost. The blockchain ensures the security of virtual resource transactions among SMUEs, infrastructure providers (InPs), and virtual network operators (VNOs). We formulate the virtualized resource allocation problem as a hierarchical Stackelberg game involving InPs, VNOs, and SMUEs, and then transform it into a stochastic game model. We then adopt a Multi-agent Deep Deterministic Policy Gradient (MADDPG) algorithm to solve the formulated problem and obtain the optimal resource allocation policies that maximize the utility function. The simulation results show that the MADDPG method outperforms state-of-the-art methods in terms of utility optimization and quality-of-service satisfaction.
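A minimal sketch of the Stackelberg structure (the leader anticipates the follower's best response) is given below for a single provider pricing resources to a single operator. The linear demand model and all coefficients are assumptions for illustration; the paper solves the full three-level InP/VNO/SMUE game with MADDPG rather than by backward induction.

```python
# Toy two-player Stackelberg pricing game: the leader (e.g., an InP)
# announces a price; the follower (e.g., a VNO) responds with its demand;
# the leader picks the price that maximizes revenue given that response.

def follower_demand(price: float, a: float = 10.0, b: float = 1.0) -> float:
    """Follower's best response: demand falls linearly in price (assumed)."""
    return max(0.0, a - b * price)

def leader_best_price(prices) -> float:
    """Backward induction over a price grid: the leader evaluates its
    revenue p * d(p) under the follower's anticipated response."""
    return max(prices, key=lambda p: p * follower_demand(p))

if __name__ == "__main__":
    grid = [i * 0.5 for i in range(21)]  # candidate prices 0.0 .. 10.0
    p = leader_best_price(grid)
    print(p, follower_demand(p))  # analytic optimum is p = a / (2b) = 5.0
```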

Proceedings ArticleDOI
01 Mar 2023
TL;DR: In this article, an FL-enabled twin-delayed deep deterministic policy gradient (FL-TD3) framework is proposed as a solution to the formulated problem; it maximizes the ratio of FL accuracy to device energy consumption.
Abstract: Federated learning (FL) is increasingly considered to circumvent the disclosure of private data in mobile edge computing (MEC) systems. Training with large data can enhance FL learning accuracy, but is associated with non-negligible energy use. Scheduling edge devices with small data saves energy but decreases FL learning accuracy. A trade-off between the energy consumption of edge devices and the learning accuracy of FL is formulated in this work. The FL-enabled twin-delayed deep deterministic policy gradient (FL-TD3) framework is proposed as a solution to the formulated problem because the problem's state and action spaces are large and continuous. This framework maximizes the ratio of FL accuracy to device energy consumption. A comparison of the numerical results with the state of the art demonstrates that the ratio is improved significantly.
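The accuracy-versus-energy ratio objective can be sketched by brute force for a handful of devices, as below. The diminishing-returns accuracy model and the device figures are invented for illustration; the paper learns the schedule with FL-TD3 rather than enumerating subsets.

```python
# Sketch of the trade-off objective: pick the subset of edge devices that
# maximizes (FL accuracy) / (energy consumed). Exhaustive enumeration is
# only feasible for tiny fleets; it serves here to make the objective
# concrete.
from itertools import combinations
import math

def accuracy(total_samples: int) -> float:
    """Assumed diminishing-returns accuracy model in the training data size."""
    return 1.0 - math.exp(-total_samples / 500.0)

def best_schedule(devices):
    """Enumerate device subsets; return (names, ratio) of the best one."""
    best, best_ratio = None, -1.0
    for k in range(1, len(devices) + 1):
        for subset in combinations(devices, k):
            samples = sum(d["samples"] for d in subset)
            energy = sum(d["energy"] for d in subset)
            ratio = accuracy(samples) / energy
            if ratio > best_ratio:
                best, best_ratio = subset, ratio
    return [d["name"] for d in best], best_ratio

if __name__ == "__main__":
    devices = [
        {"name": "A", "samples": 800, "energy": 2.0},
        {"name": "B", "samples": 300, "energy": 0.5},
        {"name": "C", "samples": 100, "energy": 1.5},
    ]
    print(best_schedule(devices))
```

Note how the low-energy device wins on the ratio even though scheduling everything would give the highest raw accuracy; that is exactly the trade-off the abstract describes.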

Proceedings ArticleDOI
09 May 2023
TL;DR: Huang et al. propose a hierarchical split federated learning (HSFL) architecture that combines SFL with a hierarchical fashion of learning to avoid a single point of failure and fairness issues.
Abstract: Federated learning (FL) uses a distributed fashion of training, with local model (e.g., convolutional neural network) computation at devices followed by central aggregation at the edge or cloud. Such distributed training uses a significant amount of computational resources (i.e., CPU-cycles/sec) that Internet of Things (IoT) sensors find difficult to provide. Addressing these challenges, split FL (SFL) was recently proposed, based on computing part of a model at devices and the remainder at edge/cloud servers. Although SFL relaxes the computing resource constraints of devices, it still suffers from fairness issues and slow convergence. To address these issues, we propose a novel hierarchical SFL (HSFL) architecture that combines SFL with a hierarchical fashion of learning. To avoid a single point of failure and fairness issues, HSFL has a truly distributed nature (i.e., distributed aggregations). We also define a cost function that is minimized with respect to relative local accuracy, transmit power, resource allocation, and association. Due to the non-convex nature of the problem, we propose a block successive upper-bound minimization (BSUM)-based solution. Finally, numerical results are presented.
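The BSUM idea (cyclically minimizing one block of variables while the others stay fixed) can be shown on a toy coupled quadratic, as below; the paper applies it to the non-convex HSFL cost over the accuracy, power, resource-allocation, and association blocks, which this example does not model.

```python
# Toy block successive upper-bound minimization (BSUM): minimize a coupled
# function by alternating exact per-block minimizations. For a quadratic,
# each block subproblem is its own tight upper bound, so every step is a
# valid BSUM step.

def bsum(iterations: int = 50):
    """Minimize f(x, y) = x^2 + y^2 + x*y - x - y by cycling through the
    two variable blocks. Setting the partial derivatives to zero gives the
    closed-form per-block updates below; the minimizer is (1/3, 1/3)."""
    x, y = 0.0, 0.0
    for _ in range(iterations):
        x = (1.0 - y) / 2.0  # argmin over x with y fixed (2x + y - 1 = 0)
        y = (1.0 - x) / 2.0  # argmin over y with x fixed (2y + x - 1 = 0)
    return x, y

if __name__ == "__main__":
    x, y = bsum()
    print(round(x, 6), round(y, 6))  # converges to (1/3, 1/3)
```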

Journal ArticleDOI
TL;DR: In this paper, an innovative user cooperation (UC) scheme with integrated backscatter communication (BackCom) and active communication (AC) is proposed to enhance system performance in a cooperation-assisted WPMEC scenario.
Abstract: The integrated backscatter communication (BackCom) and active communication (AC) scheme can improve wireless powered mobile edge computing (WPMEC) system performance in general single-user and multi-user scenarios. However, there is little research on the cooperation-assisted WPMEC scenario. In this paper, we consider a cooperation-assisted WPMEC system consisting of a source node (SN), a helper, and a hybrid access point (HAP) integrated with MEC servers. An innovative user cooperation (UC) scheme with integrated BackCom and AC is proposed to enhance the system performance. Acting as a relay, the helper can help the SN transmit its computing tasks when the communication link between the SN and the HAP is poor. Specifically, we aim at maximizing the user energy efficiency (EE) by jointly optimizing the backscatter reflection coefficient for BackCom, the transmission power for AC, system time, and task allocation, while considering the minimum computation bits requirement, the channel capacity, and energy constraints. Based on a fractional program, the EE maximization problem is first transformed into an equivalent one. Then, we exploit variable substitution and convex optimization to transform this non-convex problem into a convex one. In addition, semi-closed-form expressions for the optimal solution are deduced. An energy efficiency maximization algorithm is proposed to solve this problem. Simulation results demonstrate that the proposed scheme significantly improves user EE compared with existing schemes.
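The fractional-programming transformation used for EE maximization can be sketched with Dinkelbach's method, which replaces the ratio R(p)/E(p) by a sequence of subtractive problems R(p) - q*E(p). The log-rate and linear-energy models below are generic placeholders, not the paper's BackCom/AC model.

```python
# Sketch of Dinkelbach's method for a fractional (ratio) objective:
# maximize rate(p) / energy(p) over a power grid by iterating
#   q <- rate(p*) / energy(p*),  p* = argmax rate(p) - q * energy(p),
# until the subtractive objective reaches zero.
import math

def rate(p: float) -> float:        # toy achievable-rate model
    return math.log2(1.0 + 10.0 * p)

def energy(p: float) -> float:      # toy energy model (circuit + transmit)
    return 0.5 + p

def dinkelbach(p_grid, tol=1e-9):
    """Return (optimal power, optimal energy efficiency) over the grid."""
    q = 0.0
    while True:
        p_star = max(p_grid, key=lambda p: rate(p) - q * energy(p))
        f = rate(p_star) - q * energy(p_star)
        q = rate(p_star) / energy(p_star)
        if f < tol:          # subtractive problem solved: q is optimal EE
            return p_star, q

if __name__ == "__main__":
    grid = [i / 100.0 for i in range(1, 101)]  # powers 0.01 .. 1.00
    p, ee = dinkelbach(grid)
    print(p, ee)
```

Because the grid is finite and q increases monotonically, the loop terminates with q equal to the maximum achievable ratio on the grid.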

Journal ArticleDOI
TL;DR: In this paper , the authors proposed an adaptive link state perception scheme (ALPS) for SDVN, which enables the controller to timely obtain the link-state within the beacon interval.
Abstract: The software-defined vehicular networking (SDVN) paradigm alleviates the deficiencies of distributed vehicular networks. The separation of the control plane and the data plane allows the controller to manage the network based on global information. Most existing routing schemes in SDVN obtain the link-state from beacon messages that vehicles periodically send to the controller. However, due to the high mobility of vehicles and the dynamic communication environment, the link-state changes within the beacon interval. In this case, the controller may select an expired link to transmit data during routing calculation, which will undoubtedly result in packet loss. Therefore, it is important for the controller to obtain the link-state in a timely manner during the beacon interval. If the controller can promptly learn that a link has become unavailable, the risk of selecting unavailable links can be significantly reduced. In this paper, we propose an adaptive link-state perception scheme (ALPS) for SDVN, which enables the controller to obtain the link-state in a timely manner within the beacon interval. We obtain the link-state by detecting packet loss on a link. A link quality evaluation method based on fuzzy logic is presented to evaluate the possibility of link failure. After the link evaluation, we present an adaptive threshold adjustment method that dynamically adjusts the detection range to decrease the detection cost. Simulation results demonstrate that ALPS can effectively reduce the packet loss ratio at low cost.
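The fuzzy-logic link evaluation step can be sketched as below: crisp observations (packet-loss ratio, relative speed) are mapped to membership degrees and combined into a link-failure possibility score. The membership functions and rule weights are invented for illustration; ALPS defines its own fuzzy sets over its own link metrics.

```python
# Minimal fuzzy-logic link evaluation: fuzzify two crisp inputs, then
# defuzzify with a weighted average into a failure-possibility score in
# [0, 1] that the controller could compare against a threshold.

def membership_bad(loss_ratio: float) -> float:
    """Triangular membership: fully 'bad' at >= 0.5 loss, fully 'good' at 0."""
    return min(1.0, max(0.0, loss_ratio / 0.5))

def failure_possibility(loss_ratio: float, rel_speed: float) -> float:
    """Combine two fuzzy antecedents (packet loss, relative vehicle speed
    normalized to [0, 1]) with an assumed 0.7/0.3 weighting."""
    bad_loss = membership_bad(loss_ratio)
    fast = min(1.0, max(0.0, rel_speed))
    return 0.7 * bad_loss + 0.3 * fast

if __name__ == "__main__":
    print(failure_possibility(0.4, 0.5))   # lossy link, moderate mobility
    print(failure_possibility(0.05, 0.1))  # healthy link
```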

Journal ArticleDOI
TL;DR: In this paper, a gammatone filter bank-based feature extraction method is proposed; to hide and extract the information, a dither modulation information hiding scheme based on adaptive optimization is proposed.
Abstract: Visible Light Communication (VLC) is an emerging short-range optical communication technology that can alleviate spectrum congestion. However, VLC faces security problems such as man-in-the-middle hijacking and eavesdropping. Since the covert transmission of visible light information offers strong concealment and is difficult to detect, VLC based on information hiding has become a new paradigm for solving these security problems. This article presents a method in which a Red, Green, Blue (RGB) Light-Emitting Diode (LED) is used to secure VLC via information hiding. Specifically, a gammatone filter bank-based feature extraction method is suggested, which extracts a robust feature vector for information hiding and extraction. To hide and extract the information, a dither modulation information hiding scheme based on adaptive optimization is proposed. The presented method uses dither modulation to hide information in the extracted feature vector, which can effectively improve the success rate of information extraction. Based on performance analysis, the superiority of the presented method is illustrated by comparison with existing methods.
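The dither-modulation step can be sketched with basic quantization index modulation (QIM): each feature coefficient is quantized onto one of two dithered lattices to embed a bit, and extraction picks the nearer lattice. The step size and feature values below are placeholders for the gammatone-derived features in the paper.

```python
# Basic dither modulation (QIM): embed bit b by snapping a coefficient to
# the lattice dithered by b * DELTA / 2; extract by testing which dithered
# lattice the received coefficient is closest to.

DELTA = 0.5  # quantization step: larger = more robust, more distortion

def embed(value: float, bit: int) -> float:
    """Quantize onto the lattice dithered by bit * DELTA / 2."""
    dither = bit * DELTA / 2.0
    return DELTA * round((value - dither) / DELTA) + dither

def extract(value: float) -> int:
    """Decide which dithered lattice the value is closest to."""
    errors = [abs(value - embed(value, b)) for b in (0, 1)]
    return 0 if errors[0] <= errors[1] else 1

if __name__ == "__main__":
    features = [0.31, -1.27, 2.06, 0.88]   # stand-in feature coefficients
    bits = [1, 0, 1, 1]
    marked = [embed(v, b) for v, b in zip(features, bits)]
    print([extract(v) for v in marked])    # recovers the embedded bits
```

Extraction also survives perturbations smaller than DELTA / 4, which is what makes the hidden channel robust to moderate noise.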

Journal ArticleDOI
TL;DR: In this paper, a 3-D multi-UAV trajectory optimization is investigated in which ground devices (GDs) select the target UAV for task computing, achieving minimum energy consumption under the premise of fairness among UAVs while processing tasks efficiently.
Abstract: Unmanned aerial vehicle (UAV)-assisted mobile-edge computing (MEC) communication systems have recently gained increasing attention. In this article, we investigate a 3-D multi-UAV trajectory optimization in which ground devices (GDs) select the target UAV for task computing. Specifically, we first design a 3-D dynamic multi-UAV-assisted MEC system in which GDs have real-time mobility and task updates. Next, we formulate the system communication, computation, and flight energy consumption as objective functions based on fairness among UAVs. Then, to pursue fairness among UAVs, we theoretically deduce and mathematically prove the optimal GD selectivity and offloading strategy, that is, how GDs select the optimal UAV for task offloading and how much to offload. While ensuring the optimal offloading strategy and GD selectivity between UAVs and GDs at each step, we model UAV trajectories as a sequence of location updates of all UAVs and apply a multiagent deep deterministic policy gradient (MADDPG) algorithm to find the optimal solution. Simulation results demonstrate that the proposed approach achieves minimum energy consumption under the premise of fairness while processing tasks efficiently.
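The GD-side selection rule can be illustrated with a simple energy-minimizing choice, sketched below. The squared-distance energy model and the coordinates are assumptions for illustration; the paper derives the optimal selectivity and offloading split analytically and learns the UAV trajectories with MADDPG.

```python
# Illustrative ground-device (GD) selectivity: each GD offloads to the UAV
# that minimizes its own offloading energy, modeled here as growing with
# squared 3-D distance to the UAV.

def tx_energy(gd, uav, bits: float = 1.0) -> float:
    """Toy offloading-energy model: proportional to squared 3-D distance."""
    d2 = sum((a - b) ** 2 for a, b in zip(gd, uav))
    return bits * (1.0 + d2)

def select_uav(gd, uavs) -> int:
    """Return the index of the UAV this GD should offload to."""
    return min(range(len(uavs)), key=lambda i: tx_energy(gd, uavs[i]))

if __name__ == "__main__":
    uavs = [(0.0, 0.0, 100.0), (500.0, 0.0, 100.0)]   # hovering positions
    gds = [(10.0, 5.0, 0.0), (480.0, -20.0, 0.0)]     # ground positions
    print([select_uav(gd, uavs) for gd in gds])       # each GD's target UAV
```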

Journal ArticleDOI
TL;DR: In this article, the authors propose a muting strategy managed by positioning functions that utilizes a combination of optimized pseudo-random sequences (CO-PRS) for multiple BSs to coordinate the muting of PRS resources.
Abstract: Device positioning has generally been recognized as an enabling technology for numerous vehicular applications in intelligent transportation systems (ITS). The downlink time difference of arrival (DL-TDOA) technique in cellular networks requires range information of geographically diverse base stations (BSs) to be measured by user equipment (UE) through the positioning reference signal (PRS). However, inter-cell interference from surrounding BSs can be particularly serious under poor network planning or dense deployments. This may lead to a relatively longer measurement time to locate the UE, causing an unacceptable location update rate to time-sensitive applications. In this case, PRS muting of certain wireless resources has been envisioned as a promising solution to increase the detectability of a weak BS. In this paper, to reduce UE measurement latency while ensuring high location accuracy, we propose a muting strategy managed by positioning functions that utilizes a combination of optimized pseudo-random sequences (CO-PRS) for multiple BSs to coordinate the muting of PRS resources. The original sequence is first truncated according to the muting period, and a modified greedy selection is performed to form a set of control sequences as the muting configurations (MC) with balance and concurrency constraints. Moreover, efficient information exchange can be achieved with the seeds used for regenerating the MC. Extensive simulations demonstrate that the proposed scheme outperforms the conventional random and ideal muting benchmarks in terms of measurement latency by about 30%, especially when dealing with severe near-far problems in cellular networks.
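The seed-based regeneration of muting configurations can be sketched as below: each BS's pattern is derived from a seed under a balance constraint (a fixed number of muted slots per period), so exchanging only the seed reproduces the full configuration at the receiver. The period length and muting count are illustrative; the paper additionally enforces concurrency constraints across BSs via a greedy selection, which this sketch omits.

```python
import random

# Sketch of seeded pseudo-random PRS muting: a pattern of 1s (transmit PRS)
# and 0s (muted) is generated per base station from a seed, with exactly
# `muted_slots` muted positions per period (the balance constraint).

def muting_pattern(seed: int, period: int = 8, muted_slots: int = 2):
    """Regenerable muting configuration: 1 = transmit PRS, 0 = muted."""
    rng = random.Random(seed)            # shared seed -> identical patterns
    muted = rng.sample(range(period), muted_slots)
    return [0 if i in muted else 1 for i in range(period)]

if __name__ == "__main__":
    # Each BS derives its pattern from its own seed; only the seed needs
    # to be exchanged to reproduce the muting configuration.
    for seed in (1, 2, 3):
        p = muting_pattern(seed)
        print(p, "active slots:", sum(p))
```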

Journal ArticleDOI
TL;DR: Wang et al. introduce the common security vulnerabilities in blockchain smart contracts and classify the vulnerability detection tools for smart contracts into six categories according to their detection methods: 1) formal verification; 2) symbolic execution; 3) fuzz testing; 4) intermediate representation; 5) taint analysis; and 6) deep learning.
Abstract: With the wide application of the Internet of Things and blockchain, research on smart contracts has received increased attention, and security threat detection for smart contracts is one of its main focuses. This article first introduces the common security vulnerabilities in blockchain smart contracts, and then classifies the vulnerability detection tools for smart contracts into six categories according to their detection methods: 1) formal verification; 2) symbolic execution; 3) fuzz testing; 4) intermediate representation; 5) taint analysis; and 6) deep learning. We test 27 detection tools and analyze them from several perspectives, including the smart contract versions each tool can handle. We conclude that most current vulnerability detection tools can only detect vulnerabilities in a single, old version of smart contracts. Although deep learning methods detect fewer types of smart contract vulnerabilities, they achieve higher detection accuracy and efficiency. Therefore, combining static detection methods such as deep learning with dynamic detection methods such as fuzz testing, to detect more types of vulnerabilities in multi-version smart contracts with higher accuracy, is a direction worthy of future research.

Journal ArticleDOI
TL;DR: Li et al. propose a lightweight authentication and key exchange protocol with anonymity for IoT devices that supports mutual authentication between IoT devices and the server; a comparison of security and performance with other protocols shows that it has the advantages of being lightweight and secure.
Abstract: The number of IoT devices is growing rapidly, and interactions between devices and servers are becoming more frequent. However, IoT devices often sit at the edge of the network, which leaves their communications with the server completely exposed and more vulnerable to attacks. Moreover, IoT devices have limited energy and computational resources. Therefore, we propose in this paper a lightweight authentication and key exchange protocol with anonymity for IoT devices. The proposed scheme supports two-way mutual authentication between IoT devices and the server. We verify the security of the protocol through formal and informal analyses. Finally, we compare its security and performance with other protocols, which shows that our protocol has the advantages of being lightweight and secure.
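A lightweight mutual-authentication handshake of the general kind described here can be sketched with a pre-shared key and HMAC challenge-response using only the standard library. The message layout and the session-key derivation below are assumptions for illustration, not the paper's protocol (which additionally provides device anonymity).

```python
# Minimal mutual authentication via HMAC challenge-response over a
# pre-shared key (PSK): each side proves knowledge of the key on a fresh
# nonce, then both derive a session key from the two nonces.
import hashlib
import hmac
import os

PSK = b"pre-shared-device-key"  # provisioned on both device and server

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def mutual_auth(device_key: bytes, server_key: bytes):
    """Run the two-way handshake; return a session key, or None on failure."""
    n_s, n_d = os.urandom(16), os.urandom(16)  # fresh server/device nonces
    # Server verifies the device's response to the server's challenge.
    if not hmac.compare_digest(respond(device_key, n_s),
                               respond(server_key, n_s)):
        return None
    # Device verifies the server's response to the device's challenge.
    if not hmac.compare_digest(respond(server_key, n_d),
                               respond(device_key, n_d)):
        return None
    # Both sides can now derive the same session key from the nonces.
    return hmac.new(server_key, n_s + n_d, hashlib.sha256).digest()

if __name__ == "__main__":
    session_key = mutual_auth(PSK, PSK)
    print("authenticated, session key length:", len(session_key))
```

Using `hmac.compare_digest` for the verification step avoids timing side channels, one of the standard concerns for constrained-device authentication.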