
Showing papers on "Transmission delay" published in 2020


Proceedings ArticleDOI
07 Jun 2020
TL;DR: A novel framework is proposed to implement distributed federated learning (FL) algorithms within a UAV swarm that consists of a leading UAV and several following UAVs and shows that the joint design strategy can reduce the number of communication rounds needed for convergence by as much as 35% compared with the baseline design.
Abstract: Unmanned aerial vehicle (UAV) swarms must exploit machine learning (ML) in order to execute various tasks ranging from coordinated trajectory planning to cooperative target recognition. However, due to the lack of continuous connections between the UAV swarm and ground base stations (BSs), using centralized ML will be challenging, particularly when dealing with a large volume of data. In this paper, a novel framework is proposed to implement distributed federated learning (FL) algorithms within a UAV swarm that consists of a leading UAV and several following UAVs. Each following UAV trains a local FL model based on its collected data and then sends this trained local model to the leading UAV who will aggregate the received models, generate a global FL model, and transmit it to followers over the intra-swarm network. To identify how wireless factors, like fading, transmission delay, and UAV antenna angle deviations resulting from wind and mechanical vibrations, impact the performance of FL, a rigorous convergence analysis for FL is performed. Then, a joint power allocation and scheduling design is proposed to optimize the convergence rate of FL while taking into account the energy consumption during convergence and the delay requirement imposed by the swarm's control system. Simulation results validate the effectiveness of the FL convergence analysis and show that the joint design strategy can reduce the number of communication rounds needed for convergence by as much as 35% compared with the baseline design.
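The aggregation step at the leading UAV is, in essence, a federated-averaging update. The paper does not publish code, so the following is a minimal sketch of weighted model averaging; the weighting by local sample counts and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def aggregate_global_model(local_models, sample_counts):
    """Weighted average of follower UAVs' local model parameters (FedAvg-style sketch).

    local_models : list of dicts mapping layer name -> np.ndarray of weights
    sample_counts: number of training samples each follower used (assumed weighting)
    """
    total = float(sum(sample_counts))
    global_model = {}
    for name in local_models[0]:
        global_model[name] = sum(
            (n / total) * model[name] for model, n in zip(local_models, sample_counts)
        )
    return global_model

# Toy example: two followers, each with a single-layer model
followers = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
global_w = aggregate_global_model(followers, sample_counts=[100, 300])
# -> w = 0.25*[1, 2] + 0.75*[3, 4] = [2.5, 3.5]
```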

107 citations


Journal ArticleDOI
TL;DR: This work proposes deep actor-critic reinforcement learning based policies for both centralized and decentralized content caching for offload data traffic in wireless networks and considers both the cache hit rate and transmission delay as performance metrics.
Abstract: With the purpose of offloading data traffic in wireless networks, content caching techniques have recently been studied intensively. Using these techniques and caching a portion of the popular files at the local content servers, the users can be served with less delay. Most content replacement policies are based on the content popularity, which depends on the users’ preferences. In practice, such information varies over time. Therefore, an approach to determine the file popularity patterns must be incorporated into caching policies. In this context, we study content caching at the wireless network edge using a deep reinforcement learning framework with Wolpertinger architecture. In particular, we propose deep actor-critic reinforcement learning based policies for both centralized and decentralized content caching. For centralized edge caching, we aim at maximizing the cache hit rate. In decentralized edge caching, we consider both the cache hit rate and transmission delay as performance metrics. The proposed frameworks are assumed to neither have any prior information on the file popularities nor know the potential variations in such information. Via simulation results, the superiority of the proposed frameworks is verified by comparing them with other policies, including least frequently used (LFU), least recently used (LRU), and first-in-first-out (FIFO) policies.
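The LRU, LFU, and FIFO baselines named above are simple replacement rules. As context for the cache hit rate metric, here is a compact sketch of an LRU cache and its hit-rate measurement; the request trace and capacity are made-up illustrative values.

```python
from collections import OrderedDict

def lru_hit_rate(requests, capacity):
    """Serve a request trace with an LRU cache and return the cache hit rate."""
    cache = OrderedDict()              # keys are file ids, ordered by recency
    hits = 0
    for f in requests:
        if f in cache:
            hits += 1
            cache.move_to_end(f)       # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used file
            cache[f] = True
    return hits / len(requests)

print(lru_hit_rate([1, 2, 1, 3, 1, 2, 4, 1], capacity=2))  # -> 0.25
```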

107 citations


Journal ArticleDOI
TL;DR: A new hybrid algorithm is proposed that combines the dragonfly and firefly algorithms, termed firefly-replaced position update in dragonfly, to develop a new clustering model with optimal cluster head selection based on four major criteria: energy, delay, distance, and security.
Abstract: Energy efficiency has become a primary issue in wireless sensor networks (WSN). The sensor nodes are battery powered and thus die after a particular interval. Hence, making data dissemination energy efficient remains challenging for increasing the life span of sensor devices. It has already been proved that clustering can improve the life span of WSNs. In the clustering model, the selection of a cluster head (CH) in each cluster is regarded as a capable method for energy-efficient routing, which minimizes the transmission delay in WSN. However, the main problem lies in selecting the optimal CH that keeps the network service prompt. To date, many research works have addressed this issue by considering different constraints. Under this scenario, this paper attempts to develop a new clustering model with optimal cluster head selection by considering four major criteria: energy, delay, distance, and security. Further, for selecting the optimal CHs, this paper proposes a new hybrid algorithm that combines the dragonfly and firefly algorithms, termed firefly-replaced position update in dragonfly. Finally, the performance of the proposed work is evaluated by comparison with other conventional models in terms of the number of alive nodes, network energy, delay, and risk probability.

105 citations


Posted Content
TL;DR: A quantification of the risk for an unreliable VR performance is conducted through a novel and rigorous characterization of the tail of the end-to-end (E2E) delay, and system reliability for scenarios with guaranteed line-of-sight (LoS) is derived as a function of THz network parameters after deriving a novel expression for the probability distribution function of the THz transmission delay.
Abstract: Wireless virtual reality (VR) imposes new visual and haptic requirements that are directly linked to the quality-of-experience (QoE) of VR users. These QoE requirements can only be met by wireless connectivity that offers high-rate and high-reliability low latency communications (HRLLC), unlike the low rates usually considered in vanilla ultra-reliable low latency communication scenarios. The high rates for VR over short distances can only be supported by an enormous bandwidth, which is available in terahertz (THz) wireless networks. Guaranteeing HRLLC requires dealing with the uncertainty that is specific to the THz channel. To explore the potential of THz for meeting HRLLC requirements, a quantification of the risk for an unreliable VR performance is conducted through a novel and rigorous characterization of the tail of the end-to-end (E2E) delay. Then, a thorough analysis of the tail-value-at-risk (TVaR) is performed to concretely characterize the behavior of extreme wireless events crucial to the real-time VR experience. System reliability for scenarios with guaranteed line-of-sight (LoS) is then derived as a function of THz network parameters after deriving a novel expression for the probability distribution function of the THz transmission delay. Numerical results show that abundant bandwidth and low molecular absorption are necessary to improve the reliability. However, their effect remains secondary compared to the availability of LoS, which significantly affects the THz HRLLC performance. In particular, for scenarios with guaranteed LoS, a reliability of 99.999% (with an E2E delay threshold of 20 ms) for a bandwidth of 15 GHz can be achieved by the THz network, compared to a reliability of 96% for twice the bandwidth, when blockages are considered.
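Tail-value-at-risk is a standard risk measure; under its usual definition (given here as background, since the abstract does not reproduce the paper's formulation), for the E2E delay $D$ at confidence level $\alpha$ it is the expected delay within the worst $(1-\alpha)$ fraction of realizations:

```latex
\mathrm{VaR}_{\alpha}(D) = \inf\{\, d : \Pr(D \le d) \ge \alpha \,\}, \qquad
\mathrm{TVaR}_{\alpha}(D) = \mathbb{E}\bigl[\, D \mid D \ge \mathrm{VaR}_{\alpha}(D) \,\bigr].
```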

79 citations


Journal ArticleDOI
TL;DR: The Lyapunov–Krasovskii functional is constructed with the distributed kernel to make full use of the delay probability distribution, and sufficient conditions for ensuring the stability of the closed-loop system with prescribed $H_{\infty}$ performance are formulated in linear matrix inequalities.
Abstract: This article contributes to design an event-triggered $H_{\infty }$ controller for networked control systems with network channel delay. First, the network channel delay is modeled as a distributed delay with a probability density function as its kernel. Then, the closed-loop event-triggered control system is established as a distributed delay system. To make full use of the delay probability distribution, the Lyapunov–Krasovskii functional is constructed with the distributed kernel. By applying the Lyapunov method, sufficient conditions for ensuring the stability of the closed-loop system with prescribed $H_{\infty }$ performance are formulated in linear matrix inequalities. A numerical example shows that the proposed method is less conservative than some existing results considering delay distribution.
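For readers unfamiliar with distributed-delay modeling, a generic form of such a closed-loop system (a sketch in standard notation, not the paper's exact equations) is:

```latex
\dot{x}(t) = A\,x(t) + A_d \int_{0}^{h} f(\theta)\, x(t-\theta)\, d\theta + B\,\omega(t),
\qquad \int_{0}^{h} f(\theta)\, d\theta = 1,
```

where $f(\theta)$ is the probability density function of the channel delay used as the distributed kernel, $h$ is the delay bound, and $\omega(t)$ is the disturbance entering the $H_{\infty}$ performance index.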

73 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed algorithm substantially outperforms recent techniques with regard to energy efficiency, energy consumption reduction, throughput, and transmission delay performance.
Abstract: The distributed cooperative offloading technique with wireless setting and power transmission provides a possible solution to meet the requirements of next-generation Multi-access Edge Computation (MEC). MEC is a model that gives cloud computing the ability to compute data smoothly at the edge of a densely populated network, in proximity to smart communicating devices (SCDs). This paper presents a cooperative offloading technique based on the Lagrangian Suboptimal Convergent Computation Offloading Algorithm (LSCCOA) for multi-access MEC in a distributed Internet of Things (IoT) network. A computational competition of the SCDs for limited resources, which tends to obstruct smooth task offloading for MEC in a high-demand IoT network, is considered. The proposed suboptimal computational algorithm is implemented to perform task offloading, which is optimized at the cloud edge server without relocating it to the centralized network. This results in minimizing a weighted sum of transmit power consumption, formulated as a mixed-integer optimization problem. Also, the derived fast-convergent suboptimal algorithm is implemented to resolve the non-deterministic polynomial-time (NP)-hard problem. Finally, simulation results show that the proposed algorithm substantially outperforms recent techniques with regard to energy efficiency, energy consumption reduction, throughput, and transmission delay performance.

67 citations


Journal ArticleDOI
TL;DR: The results of the experimental tests show that the proposed multi-path reliable transmission method can effectively reduce data packet loss rate, reduce transmission delay and increase network lifetime.
Abstract: In application environments with a dense distribution of marginal wireless sensor network (WSN) nodes, the data transmission process generates a large number of collisions, which result in loss of transmitted data and increased transmission delay. The multi-path data transmission method can effectively solve the problem of large data loss and transmission delay caused by collisions. A new approach of multi-path reliable transmission for applications of marginal WSN (named RCB-MRT) is proposed in this paper. It adopts a redundancy mechanism to realize the reliability of data transmission, and uses concurrent woven multi-path technology to improve the transmission efficiency of data packets. Firstly, it divides the data packets that the sensor node needs to transmit into several sub-packets with data redundancy, and then forwards the sub-packets to the aggregation node through multiple paths via the intermediate nodes of the marginal environment. The results of our experimental tests show that the proposed multi-path reliable transmission method can effectively reduce the data packet loss rate, reduce transmission delay, and increase network lifetime. The method is very useful for applications of marginal wireless sensor networks.
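The abstract does not specify the redundancy code, so the sketch below uses a simple XOR parity piece as a stand-in for the redundancy mechanism: each packet is split into sub-packets plus one parity sub-packet, so any one lost sub-packet can be rebuilt at the aggregation node. Function names and the sample payload are illustrative.

```python
from functools import reduce

def split_with_parity(packet: bytes, k: int):
    """Split a packet into k equal-size sub-packets plus one XOR parity piece."""
    size = -(-len(packet) // k)                  # ceiling division
    padded = packet.ljust(k * size, b"\x00")     # pad so the split is even
    pieces = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*pieces))
    return pieces, parity

def recover_missing(pieces, parity, missing_index):
    """Rebuild the sub-packet at missing_index from the surviving pieces and the parity."""
    survivors = [p for i, p in enumerate(pieces) if i != missing_index] + [parity]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

pieces, parity = split_with_parity(b"sensor-reading-42", k=3)
assert recover_missing(pieces, parity, 1) == pieces[1]
```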

67 citations


Journal ArticleDOI
TL;DR: The theoretical analysis shows that the amount of redundant data, the transmission delay, and the energy efficiency of nodes are improved significantly.

57 citations


Journal ArticleDOI
TL;DR: An ant colony optimization-based algorithm is devised to solve the problem and achieve a near-optimal solution to the cooperative edge caching scheme, which allows vehicles to fetch one content from multiple caching servers cooperatively.
Abstract: In this article, we propose a cooperative edge caching scheme, which allows vehicles to fetch one content from multiple caching servers cooperatively. Specifically, we consider two types of vehicular content requests, i.e., location-based and popular contents, with different delay requirements. Both types of contents are encoded according to fountain code and cooperatively cached at multiple servers. The proposed scheme can be optimized by finding an optimal cooperative content placement that determines the placing locations and proportions for all contents. To this end, we first analyze the upper-bound proportion of content caching at a single server, which is determined by both the downloading rate and the association duration when the vehicle drives through the server's coverage. For both types of contents, the respective theoretical analyses of transmission delay and service cost (including content caching and transmission cost) are provided. We then formulate an optimization problem of cooperative content placement to minimize the overall transmission delay and service cost. As the problem is a multi-objective multi-dimensional multi-choice knapsack problem, which is proved to be NP-hard, we devise an ant colony optimization-based algorithm to solve the problem and achieve a near-optimal solution. Simulation results are provided to validate the performance of the proposed algorithm, including its convergence and optimality of caching, while guaranteeing low transmission delay and service cost.
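To make the placement-proportion idea concrete, a simplified delay expression (a sketch under the assumption of sequential downloads as the vehicle passes each server; $p_j$, $S_c$, $r_j$, and $\tau_j$ are notation introduced here, not the paper's) is:

```latex
T_c \;\approx\; \sum_{j \in \mathcal{S}_c} \frac{p_j\, S_c}{r_j},
\qquad \text{s.t.}\quad \sum_{j \in \mathcal{S}_c} p_j \ge 1,\qquad
p_j \le \frac{r_j\, \tau_j}{S_c},
```

where $\mathcal{S}_c$ is the set of servers caching content $c$ of size $S_c$, $p_j$ the proportion placed at server $j$, $r_j$ its downloading rate, and $\tau_j$ the association duration; the upper bound on $p_j$ reflects how much can be fetched while driving through that server's coverage.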

57 citations


Posted Content
TL;DR: In this paper, a distributed federated learning (FL) algorithm for UAV swarms is proposed, where each following UAV trains a local FL model based on its collected data and then sends this trained local model to the leading UAV who will aggregate the received models, generate a global FL model, and transmit it to followers over the intra-swarm network.
Abstract: Unmanned aerial vehicle (UAV) swarms must exploit machine learning (ML) in order to execute various tasks ranging from coordinated trajectory planning to cooperative target recognition. However, due to the lack of continuous connections between the UAV swarm and ground base stations (BSs), using centralized ML will be challenging, particularly when dealing with a large volume of data. In this paper, a novel framework is proposed to implement distributed federated learning (FL) algorithms within a UAV swarm that consists of a leading UAV and several following UAVs. Each following UAV trains a local FL model based on its collected data and then sends this trained local model to the leading UAV who will aggregate the received models, generate a global FL model, and transmit it to followers over the intra-swarm network. To identify how wireless factors, like fading, transmission delay, and UAV antenna angle deviations resulting from wind and mechanical vibrations, impact the performance of FL, a rigorous convergence analysis for FL is performed. Then, a joint power allocation and scheduling design is proposed to optimize the convergence rate of FL while taking into account the energy consumption during convergence and the delay requirement imposed by the swarm's control system. Simulation results validate the effectiveness of the FL convergence analysis and show that the joint design strategy can reduce the number of communication rounds needed for convergence by as much as 35% compared with the baseline design.

56 citations


Journal ArticleDOI
TL;DR: A new stability criterion for linear systems with both sampling and transmission delay is proposed using the Wirtinger-based integral inequality and its affine version, applicable for both time-invariant and time-varying transmission delays.
Abstract: To analyze the delay-dependent stability of a load frequency control (LFC) system with transmission delays, the continuous-time model with delay is usually adopted. However, practical LFC is actually a sampled-data system, where the power commands sent to generation units are updated every few seconds. It is therefore desirable to analyze the delay-dependent stability of LFC when sampling is introduced. This paper undertakes stability analysis of LFC with both sampling and transmission delay. The model of the LFC system is first modified to consider sampling and transmission delay separately. Based on Lyapunov stability theory and linear matrix inequalities, a new stability criterion for linear systems with both sampling and transmission delay is proposed using the Wirtinger-based integral inequality and its affine version. The proposed criterion is applicable for both time-invariant and time-varying transmission delays. Case studies are undertaken on both single-area and two-area LFC systems to verify the effectiveness and advantage of the proposed method.
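The Wirtinger-based integral inequality referred to above, in its commonly cited form (reproduced here as background rather than from this paper), bounds the integral of the derivative's quadratic form as:

```latex
\int_{a}^{b} \dot{x}^{T}(s)\, R\, \dot{x}(s)\, ds \;\ge\;
\frac{1}{b-a}\,\zeta_1^{T} R\, \zeta_1 \;+\; \frac{3}{b-a}\,\zeta_2^{T} R\, \zeta_2,
```

with $\zeta_1 = x(b) - x(a)$, $\zeta_2 = x(b) + x(a) - \frac{2}{b-a}\int_{a}^{b} x(s)\,ds$, for any $R \succ 0$; the tighter second term is what reduces conservatism relative to Jensen-type bounds.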

Journal ArticleDOI
TL;DR: Simulation studies with two-area and five-area interconnected power systems show that the proposed method can avoid optimistic or conservative design effectively while the stability and dynamic performance can be ensured simultaneously.
Abstract: Random transmission delay and packet loss may cause load frequency control (LFC) performance degradation or even instability in interconnected power systems. Traditional robust methods mainly focus on guaranteeing asymptotic stability by pre-estimating a maximum-delay case. As a result, the designed controller cannot satisfy the actual operational requirements completely, owing to the improper network performance estimation. In this paper, based on jump system theory, a state-space model for the LFC system is established with the transmission delay as the jumping decision variable, which describes the possible dynamics of the networked LFC system with bounded time delay. Secondly, in order to avoid the defect of conservative or optimistic design caused by an imprecise pre-set delay, the upper bound of the transmission delay in the power communication system is calculated by deterministic network theory. Furthermore, the asymptotic stability constraints in bilinear matrix inequality form are deduced via construction of a Lyapunov function. Moreover, to reduce the peak value, peak time and settling time of frequency deviations caused by power mismatches, an iterative optimization algorithm is proposed to obtain the optimal feedback matrix. Simulation studies with two-area and five-area interconnected power systems show that the proposed method can avoid optimistic or conservative design effectively while the stability and dynamic performance can be ensured simultaneously.

Journal ArticleDOI
TL;DR: An optimal task offloading scheme based on a semi-Markov decision process (SMDP) based on the Bellman equation is proposed to maximize the long-term reward of the system where 802.11p is employed as the transmission protocol for the communications between vehicles.
Abstract: Vehicular fog computing (VFC) is envisioned as a promising solution to process the explosive tasks in autonomous vehicular networks. In the VFC system, task offloading is the key technique to process the computation-intensive tasks efficiently. In the task offloading, the task is transmitted to the VFC system according to the 802.11p standard and processed by the computation resources in the VFC system. The delay of task offloading, consisting of the transmission delay and computing delay, is extremely critical especially for some delay-sensitive applications. Furthermore, the long-term reward of the system (i.e., jointly considers the transmission delay, computing delay, available resources, and diversity of vehicles and tasks) becomes a significantly important issue for providers. Thus, in this article, we propose an optimal task offloading scheme to maximize the long-term reward of the system where 802.11p is employed as the transmission protocol for the communications between vehicles. Specifically, a task offloading problem based on a semi-Markov decision process (SMDP) is formulated. To solve this problem, we utilize an iterative algorithm based on the Bellman equation to approach the desired solution. The performance of the proposed scheme has been demonstrated by extensive numerical results.
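The Bellman-equation-based iteration mentioned above reduces, in its simplest discounted form, to standard value iteration. The transition model, rewards, and discount factor below are placeholders for illustration, not the paper's actual SMDP parameters.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Iterate the Bellman optimality equation for a small discounted MDP/SMDP approximation.

    P[a][s, s'] : transition probabilities under action a
    R[a][s]     : expected immediate reward for taking action a in state s
    Returns the optimal value function and a greedy policy.
    """
    n_states = P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])  # shape (A, S)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy example: 2 states (idle, busy), 2 actions (process locally, offload to VFC)
P = [np.array([[0.9, 0.1], [0.5, 0.5]]), np.array([[0.7, 0.3], [0.2, 0.8]])]
R = [np.array([1.0, 0.0]), np.array([2.0, 0.5])]
V, policy = value_iteration(P, R)
```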

Journal ArticleDOI
TL;DR: This work develops a complete cross-modal signal restoration, reconstruction, and rendering architecture through semantic-based signal fusion and sharing and demonstrates that the proposed mechanism can substantially improve the immersive experience of the user.
Abstract: It is well known that cross-modal services, including audio, video, and haptic signals, will inevitably become the mainstream of multimedia applications. However, because there are significant differences among these three kinds of signals in terms of transmission delay, jitter, and reliability, how to effectively transmit and process them is an extremely challenging problem. Unlike the traditional tactile Internet, which mainly focuses on haptics, this work proposes a collaborative communications mechanism by exploring the temporal, spatial, and semantic relevance of cross-modal signals. On one hand, we design a content-driven scheduling scheme to guarantee high-quality cross-modal services by leveraging the spatio-temporal transmission characteristics. On the other hand, we develop a complete cross-modal signal restoration, reconstruction, and rendering architecture through semantic-based signal fusion and sharing. Importantly, we demonstrate that the proposed mechanism can substantially improve the immersive experience of the user.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an optimal task offloading scheme to maximize the long-term reward of the system where 802.11p is employed as the transmission protocol for the communications between vehicles.
Abstract: Vehicular fog computing (VFC) is envisioned as a promising solution to process the explosive tasks in autonomous vehicular networks. In the VFC system, task offloading is the key technique to process the computation-intensive tasks efficiently. In the task offloading, the task is transmitted to the VFC system according to the 802.11p standard and processed by the computation resources in the VFC system. The delay of task offloading, consisting of the transmission delay and computing delay, is extremely critical especially for some delay-sensitive applications. Furthermore, the long-term reward of the system (i.e., jointly considers the transmission delay, computing delay, available resources, and diversity of vehicles and tasks) becomes a significantly important issue for providers. Thus, in this article, we propose an optimal task offloading scheme to maximize the long-term reward of the system where 802.11p is employed as the transmission protocol for the communications between vehicles. Specifically, a task offloading problem based on a semi-Markov decision process (SMDP) is formulated. To solve this problem, we utilize an iterative algorithm based on the Bellman equation to approach the desired solution. The performance of the proposed scheme has been demonstrated by extensive numerical results.

Journal ArticleDOI
TL;DR: An actor-critic deep reinforcement learning algorithm based on a fuzzy normalized radial basis function neural network (called AC-FNRBF) is proposed to efficiently solve the intelligent transmission scheduling problem in CIoT systems under high-dimensional variables.
Abstract: The cognitive Internet of Things (CIoT) has attracted much interest recently in wireless networks due to its wide applications in smart cities, intelligent transportation systems, and smart metering networks. However, how to smartly schedule the packet transmission in CIoT systems is still a key challenge, that is, how to design a smart agent to realize the intelligent decision making and effective interoperability. In this paper, we model the system state transformation as a Markov decision process, and an actor-critic deep reinforcement learning algorithm based on a fuzzy normalized radial basis function neural network (called AC-FNRBF) is proposed to efficiently solve the intelligent transmission scheduling problem in CIoT systems under high-dimensional variables. The proposed AC-FNRBF algorithm can better approximate both the action function of the actor and the state–action value function of the critic without requiring the system prior knowledge, and a new reward function is established to maximize the system benefit, which jointly takes the transmission packet rate, the system throughput, the power consumption, and the transmission delay into account. Moreover, the AC-FNRBF has the ability to adjust its learning structure and parameters in dynamic environments. Simulation results verify that the proposed algorithm achieves higher transmission packet rate and system throughput with lower power consumption and transmission delay, compared with other existing reinforcement learning algorithms.

Journal ArticleDOI
10 Mar 2020-Sensors
TL;DR: From this evaluation, it is proved that the E2S-DRL reduces energy consumption, reduces delays by up to 40% and enhances throughput and network lifetime up to 35% compared to the existing cTDMA, DRA, LDC and iABC methods.
Abstract: In recent years, the Wireless Sensor Network (WSN) has attracted much attention among industrialists and researchers owing to its contribution to numerous applications including military, environmental monitoring and so on. However, reducing the network delay and improving the network lifetime are always big issues in the domain of WSN. To resolve these downsides, we propose an Energy-Efficient Scheduling using the Deep Reinforcement Learning (DRL) (E2S-DRL) algorithm in WSN. E2S-DRL comprises three phases to prolong network lifetime and reduce network delay: the clustering phase, the duty-cycling phase, and the routing phase. E2S-DRL starts with the clustering phase, where we reduce the energy consumption incurred during data aggregation. It is achieved through the Zone-based Clustering (ZbC) scheme. In the ZbC scheme, hybrid Particle Swarm Optimization (PSO) and Affinity Propagation (AP) algorithms are utilized. Duty cycling is adopted in the second phase by executing the DRL algorithm, from which E2S-DRL reduces the energy consumption of individual sensor nodes effectually. The transmission delay is mitigated in the third (routing) phase using Ant Colony Optimization (ACO) and the Firefly Algorithm (FFA). Our work is modeled in Network Simulator 3.26 (NS3). The results are evaluated in terms of metrics including network lifetime, energy consumption, throughput, and delay. From this evaluation, it is proved that our E2S-DRL reduces energy consumption, reduces delays by up to 40% and enhances throughput and network lifetime up to 35% compared to the existing cTDMA, DRA, LDC and iABC methods.

Journal ArticleDOI
TL;DR: A centralized hierarchical deep reinforcement learning based method is proposed to find an optimal solution for the relay selection problem in multihop 5G mmWave device to device (D2D) transmissions and a power level allocation problem of mmWave D2D links.
Abstract: 5G millimeter wave (mmWave) communication is an efficient technique for low delay and high data rate transmission in vehicular networks. Due to the high path loss in the 5G mmWave band, 5G base stations need to be densely deployed, which may result in great deployment expenditures. In this letter, we jointly consider a relay selection problem in multihop 5G mmWave device to device (D2D) transmissions and a power level allocation problem of mmWave D2D links. We propose a centralized hierarchical deep reinforcement learning based method to find an optimal solution for the problem. The proposed method does not rely on the information of links, and it tries to find an optimal solution based on the information of vehicles. Simulation results show the convergence of the proposed method, and that its transmission delay performance is better than that of a link-quality-prediction based method and close to that of a link-quality-known method.

Journal ArticleDOI
TL;DR: A reinforcement learning (RL) framework based on slow fading parameters and statistical information is proposed, which can significantly optimize the total capacity of V2I links and ensure the latency and reliability requirements of the V2V links.
Abstract: A 5G network is the key driving factor in the development of vehicle-to-vehicle (V2V) communication technology, and V2V communication in 5G has recently attracted great interest. In the V2V communication network, users can choose different transmission modes and power levels for communication, to guarantee their quality-of-service (QoS), high capacity of vehicle-to-infrastructure (V2I) links and ultra-reliability of V2V links. Aiming at V2V communication mode selection and power adaptation in 5G communication networks, a reinforcement learning (RL) framework based on slow fading parameters and statistical information is proposed. In this paper, our objective is to maximize the total capacity of V2I links while guaranteeing the strict transmission delay and reliability constraints of V2V links. Considering the fast channel variations and the continuous-valued state in a high mobility vehicular environment, we use a multi-agent double deep Q-learning (DDQN) algorithm. Each V2V link is considered as an agent, learning the optimal policy with the updated Q-network by interacting with the environment. Experiments verify the convergence of our algorithm. The simulation results show that the proposed scheme can significantly optimize the total capacity of the V2I links and ensure the latency and reliability requirements of the V2V links.
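The double deep Q-learning update that each V2V agent applies follows the standard double DQN target, given here as general background (the paper's exact state, action, and reward definitions are not reproduced):

```latex
y_t = r_t + \gamma\, Q\!\left(s_{t+1},\, \arg\max_{a'} Q(s_{t+1}, a';\, \theta);\; \theta^{-}\right),
```

where $\theta$ are the online network parameters, $\theta^{-}$ the periodically copied target network parameters, and $\gamma$ the discount factor; the online network is then trained to regress $Q(s_t, a_t; \theta)$ toward $y_t$, which decouples action selection from action evaluation and reduces overestimation.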

Journal ArticleDOI
19 Apr 2020-Sensors
TL;DR: This paper proposes a modified architecture of the Long-Term Evolution mobile network to provide services for the Internet of Things by allocating a narrow bandwidth and transferring the scheduling functions from the eNodeB base station to an NB-IoT controller, and develops “smart queue” management algorithms for the IoT traffic prioritization.
Abstract: This paper proposes a modified architecture of the Long-Term Evolution (LTE) mobile network to provide services for the Internet of Things (IoT). This is achieved by allocating a narrow bandwidth and transferring the scheduling functions from the eNodeB base station to an NB-IoT controller. A method for allocating uplink and downlink resources of the LTE/NB-IoT hybrid technology is applied to ensure the Quality of Service (QoS) from end-to-end. This method considers scheduling traffic/resources on the NB-IoT controller, which allows eNodeB planning to remain unchanged. This paper also proposes a prioritization approach within the IoT traffic to provide End-to-End (E2E) QoS in the integrated LTE/NB-IoT network. Further, we develop “smart queue” management algorithms for the IoT traffic prioritization. To demonstrate the feasibility of our approach, we performed a number of experiments using simulations. We concluded that our proposed approach ensures high end-to-end QoS of the real-time traffic by reducing the average end-to-end transmission delay.

Journal ArticleDOI
TL;DR: An integrated closed-loop system with state-dependent uncertainties is constructed by taking the AETS, deception attacks and data buffers into account, and sufficient conditions that guarantee the mean-square exponential stability of the closed- loop system are presented by employing the Lyapunov functional method.
Abstract: The problem of decentralized event-triggered control for a class of network-based state-dependent uncertain systems subject to network transmission delay and deception attacks is considered in this article. To reduce network load, a novel decentralized adaptive event-triggered scheme (AETS) is developed to transmit necessary sampled signals. During network transmission, a more practical deception attack phenomenon is considered, where the attack behaviors in different channels are governed by independent Bernoulli processes. Moreover, a set of improved data buffers are applied in the controller side to organize the decentralized triggered data and alleviate the impact of network transmission delay, such that the transmitted data can be utilized timely. Then, an integrated closed-loop system with state-dependent uncertainties is constructed by taking the AETS, deception attacks and data buffers into account. Sufficient conditions that guarantee the mean-square exponential stability of the closed-loop system are presented by employing the Lyapunov functional method, and the design criterion of the controller gain is given by an exact expression. Finally, the proposed method is applied to the control of electronic circuits to verify its practicability and effectiveness.

Journal ArticleDOI
TL;DR: The comparison between the experiment and the classical opportunistic network algorithm shows that the multiperceived domain algorithm has outstanding performance in reducing the resource consumption of data transmission and improving the efficiency of information transmission.
Abstract: With the advent of 5G communication standards, the number of 5G base stations increases steadily, and the number of mobile terminals and IoT (Internet of Things) devices increases sharply, forming a complex network out of a large number of IoT devices. These devices can be taken as nodes of a community in an opportunistic social network. However, under traditional opportunistic network algorithms and mass data transmission, information transmission is carried out at only a few source nodes in the community, which usually leads to transmission delay, excessive energy consumption, and source node death. Therefore, to solve these problems, we propose an effective data delivery method based on the multiperceived domain algorithm, which recombines communities based on the correlation degree of nodes, and the new communities assist source nodes in transmitting information. Comparison with classical opportunistic network algorithms shows that the method has outstanding performance in reducing the resource consumption of data transmission and improving the efficiency of information transmission.

Journal ArticleDOI
09 Jul 2020-Sensors
TL;DR: In the current research, the metrics of LoRa were quantified to facilitate its practical application in smart buildings, and this may be the first academic research evaluating the RTT performance of LoRa via practical experiments.
Abstract: The Internet of things presents tremendous opportunities for energy management and occupant comfort improvement in smart buildings by making data on environmental and equipment parameters more readily and continuously available. Long-range (LoRa) technology provides a comprehensive wireless solution for data acquisition and communication in smart buildings through its superior performance, such as long-range transmission, low power consumption and strong penetration. Starting with two vital indicators (network transmission delay and packet loss rate), this study explored the coverage and transmission performance of LoRa in buildings in detail. We deployed three LoRa receiver nodes on the same floor and eight LoRa receiver nodes on different floors in a 16-story building, respectively, where the data acquisition terminal was located in the center of the whole building. The communication performance of LoRa was evaluated by changing the transmit power, communication rate, payload length and position of the wireless module. In the current research, the metrics of LoRa were quantified to facilitate its practical application in smart buildings. To the best of our knowledge, this may be the first academic research evaluating the round-trip time (RTT) performance of LoRa via practical experiments.

Journal ArticleDOI
TL;DR: A framework of priority-aware packet transmission scheduling (PPTS) in cluster-based IWSNs is proposed, where the PPTS strategy, the optimization theory, and the implementation design are systematically considered.
Abstract: Industrial wireless sensor networks (IWSNs) are the fundamental components in the next-generation factories. Due to massive heterogeneous data generated from large-scale IWSNs, it is still challenging to achieve predictable, deterministic, and real-time transmission scheduling. In this article, a framework of priority-aware packet transmission scheduling (PPTS) in cluster-based IWSNs is proposed, where the PPTS strategy, the optimization theory, and the implementation design are systematically considered. In particular, the proposed PPTS strategy not only minimizes the transmission delay of high priority packets but also greatly improves the transmission delay of low priority packets. The optimization theory for end-to-end priority-aware scheduling in cluster-based IWSNs is formalized, which contributes to the optimal solution for multidimensional network resources allocation and achieves the minimum of average transmission delay. Finally, the advantages of the PPTS framework over some existing solutions are demonstrated by a case study.

Journal ArticleDOI
TL;DR: It can be found that ABAH can avoid the communication overhead and privacy leakage caused by the revocation list, ensure the integrity of batch verification information, meet the security performance of the vehicular ad hoc network under the Internet of Things, and protect the privacy of users from being disclosed.
Abstract: To study the security performance of the Internet of multimedia things with respect to the privacy protection of user identity, behavior trajectory, and preference under the new information technology industry wave, this study addresses the problems of sharing Internet of things perception data and the exposure of users’ privacy information by designing the Anonymous Batch Authentication Scheme (ABAH) for privacy protection. A Hash-based Message Authentication Code is used to eliminate the list-checking process, and its security performance is analyzed. Compared with the methods of the elliptic curve digital signature algorithm, Bayes least-square method, identity-based bulk verification, anonymous batch authentication and key protocol, conditional privacy authentication scheme, and expert message authentication protocol, the transmission delay, packet loss rate, and computation cost are studied both without considering the revocation list and during the revocation check. The results show that with the increase of information size, the transmission delay and packet loss rate also increase, and the transmission delay of ABAH increases by about 15%, while the correlation between speed and transmission delay is small. In the case of the same amount of verification information, ABAH has the highest verification efficiency, and it still verifies efficiently in the presence of invalid information. The message packet loss rate for ABAH is always 0 when the revocation-check verification overhead is considered. It can be found that ABAH can avoid the communication overhead and privacy leakage caused by the revocation list, ensure the integrity of batch verification information, meet the security performance requirements of the vehicular ad hoc network under the Internet of Things, and protect the privacy of users from being disclosed.

Journal ArticleDOI
TL;DR: This article aims to investigate whether the widely used MAC mechanism, carrier sense multiple access/collision avoidance (CSMA/CA), is suitable for wireless blockchain networks or not, and proposes a stochastic model to analyze the security issue taking into account the malicious double-spending attack.
Abstract: The impact of communication transmission delay on the original blockchain has not been well studied, since blockchain was primarily designed for stable wired communication environments with high communication capacity. However, in a wireless scenario, due to the scarcity of spectrum resources, a blockchain user may have to compete for the wireless channel to broadcast transactions following a media access control (MAC) mechanism. As a result, the communication transmission delay may be significant and pose a bottleneck on the blockchain system performance and security. To facilitate blockchain applications in wireless industrial Internet of Things (IIoTs), this article aims to investigate whether the widely used MAC mechanism, carrier sense multiple access/collision avoidance (CSMA/CA), is suitable for wireless blockchain networks or not. Taking tangle as an example, the system performance is analyzed in terms of confirmation delay, transactions per second, and transaction loss probability by considering the impact of queueing and transmission delay caused by CSMA/CA. Next, a stochastic model is proposed to analyze the security issue, taking into account the malicious double-spending attack. Simulation results provide valuable insights: when running blockchain in a wireless network, the performance would be limited by the traditional CSMA/CA protocol. Meanwhile, we demonstrate that the probability of launching a successful double-spending attack would be affected by CSMA/CA as well.
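To make the transmission-delay bottleneck concrete, the sketch below simulates the binary exponential backoff at the heart of CSMA/CA for a single node. The slot length and contention-window limits are illustrative parameters, not values taken from the paper.

```python
import random

def csma_ca_backoff_delay(collisions_before_success, cw_min=16, cw_max=1024, slot_us=9):
    """Rough backoff delay (microseconds) accumulated over repeated collisions.

    After each collision the contention window doubles (binary exponential backoff),
    and the node waits a uniformly chosen number of idle slots before retrying.
    """
    delay = 0
    cw = cw_min
    for _ in range(collisions_before_success + 1):  # failed attempts plus the final success
        delay += random.randint(0, cw - 1) * slot_us
        cw = min(cw * 2, cw_max)
    return delay

random.seed(0)
print([csma_ca_backoff_delay(k) for k in range(5)])  # backoff delay grows with contention
```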

Journal ArticleDOI
TL;DR: In order to avoid unnecessary information transmissions among agents and achieve better resource utilization, an event-triggered condition with a dynamically adjustable threshold parameter is developed.
Abstract: The practical leader–follower issue is addressed for multi-agent systems via adaptive event-triggered observer-based distributed control. Besides, the network transmission delay is considered when data are transferred from sensor to controller over a shared communication network. Furthermore, in order to avoid unnecessary information transmissions among agents and achieve better resource utilization, we introduce an event-triggered condition with a dynamically adjustable threshold parameter. As an extension, based on the proposed controller and event-triggered scheme, we study a chaotic system. Finally, simulations involving a linear system and a Chua’s circuit system are performed to demonstrate the effectiveness of the proposed theoretical results.

Journal ArticleDOI
TL;DR: This work describes the LFC system dynamics with a switched delay system model, and a design method is presented to obtain the state feedback control gain under denial-of-service attacks and transmission delay.

Journal ArticleDOI
TL;DR: The presented mathematical analyses and simulation results demonstrate that the proposed routing strategy for vehicle-to-vehicle (V2V) communication in urban VANETs is feasible and that it achieves relatively high performance.
Abstract: Due to the characteristics of urban vehicular ad hoc networks (VANETs), many difficulties exist when designing routing protocols. In this paper, we focus on designing an efficient routing strategy for vehicle-to-vehicle (V2V) communication in urban VANETs. Because urban VANET routing performance is affected mainly by intersections, traffic lights, and traffic conditions, we propose an intersection-based distributed routing (IDR) strategy. In view of the fact that traffic lights cause vehicles to stop at intersections, we propose an intersection vehicle fog (IVF) model, in which waiting vehicles dynamically form a collection or fog of vehicles at an intersection. Acting as infrastructure components, the IVFs proactively establish multihop links with adjacent intersections and analyze the traffic conditions on adjacent road segments using fuzzy logic. This approach offloads a large part of the routing work. During routing, the IVFs adjust the routing direction based on the real-time position of the destination, thus avoiding rerouting. Each time an IVF makes a distributed routing decision, the IDR model employs the ant colony optimization (ACO) algorithm to identify an optimal routing path whose connectivity is based on the traffic conditions existing in the multihop links between intersections. Because of the high connectivity of the routing path, the model requires only packet forwarding and not carrying when transmitting along the routing path, which reduces the transmission delay and increases the transmission ratio. The presented mathematical analyses and simulation results demonstrate that our proposed routing strategy is feasible and that it achieves relatively high performance.
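The ACO step used for path selection typically follows the classic transition rule, shown here as general background, with the pheromone and heuristic terms interpreted as road-segment connectivity metrics in the IDR setting (an assumption; the paper's exact formulation is not reproduced):

```latex
p_{ij} \;=\; \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}
{\sum_{k \in \mathcal{N}_i} \tau_{ik}^{\alpha}\,\eta_{ik}^{\beta}},
```

where $\tau_{ij}$ is the pheromone on the segment from intersection $i$ to $j$, $\eta_{ij}$ is a heuristic desirability (e.g., connectivity or inverse delay), $\alpha$ and $\beta$ are weighting exponents, and $\mathcal{N}_i$ is the set of candidate next intersections.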

Journal ArticleDOI
TL;DR: The experimental results suggest that the edge cooperative caching can effectively improve the cache hit rate and the content cache space utilization, and shorten the average content transmission delay.
Abstract: In order to effectively reduce the network transmission delay and improve the network transmission quality, the concept of Content Delivery Network (CDN) is brought forth to provide necessary technical support. In this paper, the edge cooperative caching (ECC) based on machine learning and greedy algorithm is put forward. To start with, the neural collaborative filtering is used to design the content popularity prediction algorithm to realize more accurate prediction of content popularity. Following that, the greedy algorithm after optimization is used to obtain the content delivery strategy of various servers in the cooperative cache domain. Finally, the ECC is adopted to achieve the optimization goal of minimal average content transmission delay. Meanwhile, the simulation experiment is carried out to verify the performance of the ECC. The experimental results suggest that the ECC can effectively improve the cache hit rate and the content cache space utilization, and shorten the average content transmission delay.