
Showing papers on "Transmission delay published in 2019"


Journal ArticleDOI
TL;DR: This paper proves that a global optimal solution can be found in a convex subset of the original feasible region for ultra-reliable and low-latency communications (URLLC), where the blocklength of channel codes is short.
Abstract: In this paper, we aim to find the global optimal resource allocation for ultra-reliable and low-latency communications (URLLC), where the blocklength of channel codes is short. The achievable rate in the short blocklength regime is neither convex nor concave in bandwidth and transmit power. Thus, a non-convex constraint is inevitable in optimizing resource allocation for URLLC. We first consider a general resource allocation problem with constraints on the transmission delay and decoding error probability, and prove that a global optimal solution can be found in a convex subset of the original feasible region. Then, we illustrate how to find the global optimal solution for an example problem, where the energy efficiency (EE) is maximized by optimizing antenna configuration, bandwidth allocation, and power control under the latency and reliability constraints. To improve the battery life of devices and the EE of communication systems, both uplink and downlink resources are optimized. The simulation and numerical results validate the analysis and show that the circuit power dominates the total power consumption when the average inter-arrival time between packets is much larger than the required delay bound. Therefore, optimizing antenna configuration and bandwidth allocation without power control leads to only a minor EE loss.
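
The non-convexity discussed above comes from the finite-blocklength achievable rate. As a minimal sketch, assuming the standard normal approximation from the finite-blocklength literature (which this line of work builds on; the exact formulation and all parameter values here are illustrative):

```python
# Minimal sketch: normal approximation of the achievable rate in the
# short-blocklength regime, which underlies the non-convexity discussed
# above. Parameter values are illustrative only.
import math
from scipy.stats import norm

def achievable_rate(bandwidth_hz, snr, duration_s, error_prob):
    """Approximate achievable rate (bits/s) with blocklength n = B * T."""
    n = bandwidth_hz * duration_s                      # blocklength (symbols)
    shannon = math.log2(1.0 + snr)                     # capacity term
    dispersion = 1.0 - (1.0 + snr) ** -2               # channel dispersion V
    penalty = math.sqrt(dispersion / n) * norm.isf(error_prob) * math.log2(math.e)
    return bandwidth_hz * (shannon - penalty)

# Example: 1 MHz, SNR = 10 (linear), 0.1 ms transmission, 1e-7 decoding error
print(achievable_rate(1e6, 10.0, 1e-4, 1e-7))
```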

166 citations


Journal ArticleDOI
TL;DR: In this article, a self-adaptive discrete particle swarm optimization algorithm with genetic algorithm operators (GA-DPSO) was proposed to optimize the data transmission time when placing data for a scientific workflow.
Abstract: Compared to traditional distributed computing environments such as grids, cloud computing provides a more cost-effective way to deploy scientific workflows. Each task of a scientific workflow requires several large datasets that are located in different datacenters, resulting in serious data transmission delays. Edge computing reduces these transmission delays and supports fixed-location storage of a scientific workflow's private datasets, but its storage capacity is a bottleneck. It is a challenge to combine the advantages of edge and cloud computing to rationalize the data placement of scientific workflows and optimize the data transmission time across different datacenters. In this study, a self-adaptive discrete particle swarm optimization algorithm with genetic algorithm operators (GA-DPSO) was proposed to optimize the data transmission time when placing data for a scientific workflow. This approach considered the characteristics of data placement combining edge computing and cloud computing, as well as the factors impacting transmission delay, such as the bandwidth between datacenters, the number of edge datacenters, and the storage capacity of edge datacenters. The crossover and mutation operators of the genetic algorithm were adopted to avoid the premature convergence of the traditional particle swarm optimization algorithm, which enhanced the diversity of population evolution and effectively reduced the data transmission time. The experimental results show that the data placement strategy based on GA-DPSO can effectively reduce the data transmission time during workflow execution in a combined edge and cloud computing environment.
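
To make the algorithmic idea concrete, below is a minimal, illustrative GA-DPSO-style loop in which particles encode dataset-to-datacenter placements and the velocity update is replaced by crossover with the personal and global bests plus mutation. The cost model, dataset sizes, and bandwidths are toy assumptions, not the paper's experimental setup.

```python
# Illustrative sketch of a discrete PSO that uses GA crossover/mutation
# operators, in the spirit of GA-DPSO. All sizes and bandwidths are toy values.
import random

N_DATASETS, N_DCS = 8, 3                    # datasets to place, datacenters
SIZE = [random.uniform(1, 10) for _ in range(N_DATASETS)]        # GB
BW = [[0 if i == j else random.uniform(0.1, 1.0)                 # GB/s links
       for j in range(N_DCS)] for i in range(N_DCS)]

def transfer_time(placement):
    """Toy cost: move every dataset to datacenter 0 before execution."""
    return sum(SIZE[d] / BW[placement[d]][0]
               for d in range(N_DATASETS) if placement[d] != 0)

def crossover(a, b):
    cut = random.randrange(1, N_DATASETS)
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    return [random.randrange(N_DCS) if random.random() < rate else g for g in p]

swarm = [[random.randrange(N_DCS) for _ in range(N_DATASETS)] for _ in range(20)]
pbest = list(swarm)
gbest = min(swarm, key=transfer_time)
for _ in range(100):
    for i, p in enumerate(swarm):
        # "velocity" update = crossover with personal and global bests + mutation
        child = mutate(crossover(crossover(p, pbest[i]), gbest))
        swarm[i] = child
        if transfer_time(child) < transfer_time(pbest[i]):
            pbest[i] = child
    gbest = min(pbest + [gbest], key=transfer_time)
print(gbest, round(transfer_time(gbest), 2))
```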

147 citations


Journal ArticleDOI
TL;DR: Simulation results demonstrate that the proposed centralized routing scheme outperforms others in terms of transmission delay, and the transmission performance of the proposed routing scheme is more robust with varying vehicle velocity.
Abstract: Establishing and maintaining end-to-end connections in a vehicular ad hoc network (VANET) is challenging due to the high vehicle mobility, dynamic inter-vehicle spacing, and variable vehicle density. Mobility prediction of vehicles can address this challenge, since it enables better route planning and improves overall VANET performance in terms of continuous service availability. In this paper, a centralized routing scheme with mobility prediction is proposed for VANETs, assisted by an artificial intelligence powered software-defined network (SDN) controller. Specifically, the SDN controller can perform accurate mobility prediction through an advanced artificial neural network technique. Then, based on the mobility prediction, the successful transmission probability and average delay of each vehicle's request under frequent network topology changes can be estimated by the roadside units (RSUs) or the base station (BS). The estimation is performed based on a stochastic urban traffic model in which the vehicle arrival follows a non-homogeneous Poisson process. The SDN controller gathers network information from the RSUs and BS, which are treated as switches. Based on the global network information, the SDN controller computes optimal routing paths for the switches (i.e., BS and RSUs). When the source vehicle and destination vehicle are located in the coverage area of the same switch, further routing decisions are made by the RSUs or the BS independently to minimize the overall vehicular service delay. The RSUs or the BS schedule the requests of vehicles via either vehicle-to-vehicle or vehicle-to-infrastructure communication, from the source vehicle to the destination vehicle. Simulation results demonstrate that our proposed centralized routing scheme outperforms others in terms of transmission delay, and its transmission performance is more robust under varying vehicle velocity.
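
The delay estimation above assumes vehicle arrivals follow a non-homogeneous Poisson process. A hedged sketch of how such arrivals can be simulated, via Lewis-Shedler thinning with an arbitrary example rate function:

```python
# Simulating vehicle arrivals as a non-homogeneous Poisson process via
# thinning. The rate function below is an arbitrary toy choice, not the
# paper's traffic model.
import math, random

def rate(t):                       # vehicles/s, peaks mid-interval (toy choice)
    return 0.2 + 0.15 * math.sin(math.pi * t / 300.0)

def nhpp_arrivals(horizon_s, rate_fn, rate_max):
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(rate_max)      # candidate from homogeneous PP
        if t > horizon_s:
            return arrivals
        if random.random() < rate_fn(t) / rate_max:   # accept w.p. rate(t)/rate_max
            arrivals.append(t)

print(len(nhpp_arrivals(300.0, rate, 0.35)), "vehicles in 5 min")
```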

145 citations


Journal ArticleDOI
TL;DR: A novel framework is proposed for optimizing a platoon's operation while jointly taking into account the delay of the wireless V2V network and the stability of the vehicle’s control system, and guidelines for designing an autonomous platoon so as to realize the required wireless network reliability and control system stability.
Abstract: Autonomous vehicular platoons will play an important role in improving on-road safety in tomorrow's smart cities. Vehicles in an autonomous platoon can exploit vehicle-to-vehicle (V2V) communications to collect environmental information so as to maintain the target velocity and inter-vehicle distance. However, due to the uncertainty of the wireless channel, V2V communications within a platoon will experience a wireless system delay. Such delay can impair the vehicles' ability to stabilize their velocity and inter-vehicle distances within the platoon. In this paper, the problem of jointly designing the communication and control systems is studied for wireless connected autonomous vehicular platoons. In particular, a novel framework is proposed for optimizing a platoon's operation while jointly taking into account the delay of the wireless V2V network and the stability of the vehicles' control system. First, stability analysis of the control system is performed, and the maximum wireless system delay that can prevent instability of the control system is derived. Then, delay analysis is conducted to determine the end-to-end delay, including queuing, processing, and transmission delay, for a V2V link in the wireless network. Subsequently, using the derived wireless delay, a lower bound and an approximate expression for the reliability of the wireless system, defined as the probability that the wireless system meets the control system's delay needs, are derived. The parameters of the control system are then optimized to maximize the derived wireless system reliability. Simulation results corroborate the analytical derivations and study the impact of parameters, such as the packet size and the platoon size, on the reliability performance of the vehicular platoon. More importantly, the simulation results shed light on the benefits of integrating control system and wireless network design and provide guidelines for designing an autonomous platoon so as to realize the required wireless network reliability and control system stability.
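
As a rough illustration of the reliability notion used here, the sketch below estimates, by Monte Carlo, the probability that the end-to-end V2V delay (queuing + processing + transmission) stays below a control-imposed bound; the delay distributions and the bound are assumptions, not the paper's derived expressions.

```python
# Monte Carlo estimate of wireless reliability: P(end-to-end delay <= bound).
# All component distributions below are toy assumptions.
import random

def e2e_delay_ms():
    queuing = random.expovariate(1.0 / 2.0)        # mean 2 ms (toy queue model)
    processing = 0.5                               # fixed 0.5 ms
    transmission = random.expovariate(1.0 / 1.5)   # mean 1.5 ms over fading link
    return queuing + processing + transmission

def reliability(delay_bound_ms, n=100_000):
    return sum(e2e_delay_ms() <= delay_bound_ms for _ in range(n)) / n

print(f"P(delay <= 10 ms) ~ {reliability(10.0):.4f}")
```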

122 citations


Journal ArticleDOI
TL;DR: A novel sliding mode estimation-based controller is designed to predict time delays and microgrid states, and to reject the disturbance of estimation errors by regarding estimation errors as disturbance of sliding mode control (SMC).
Abstract: This paper deals with sliding mode estimation for microgrids with time delays. Delays have a great impact on microgrid management and severely degrade microgrid stability and power quality. Both random delays caused by load-dependent congestion and constant transmission delays in the microgrid are considered in this paper. To eliminate the adverse effects of delays, a novel sliding mode estimation-based controller is designed to predict time delays and microgrid states, and to reject the disturbance caused by estimation errors. A mathematical inverter model containing the electrical characteristics is taken as the model of the practical microgrid system. Delay estimation with a learning parameter and state estimation are derived from the inverter model. By regarding estimation errors as a disturbance to the sliding mode control (SMC), the control signal of the SMC is adaptively changed in the sliding mode estimation-based control loop to ensure the stability of the system and the accuracy of the estimation. An exponential reaching law (ERL) is implemented to mitigate chattering and improve the reaching performance of the SMC. A Lyapunov approach is exploited to analyze the stability of the sliding motion. Finally, the proposed SMC strategy is validated by simulation experiments on a microgrid with time delays.
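
For intuition, here is a minimal sliding mode controller with an exponential reaching law on a toy first-order plant; the paper's inverter model and estimator are far richer, so this only illustrates the ERL mechanics, and all dynamics and gains are assumed.

```python
# Sliding mode control with an exponential reaching law (ERL) on a toy
# unstable first-order plant; plant and gains are illustrative assumptions.
import math

def simulate(x0=1.0, dt=1e-3, steps=3000, eps=0.5, k=5.0):
    x, history = x0, []
    for _ in range(steps):
        s = x                                   # sliding surface s = x (toy)
        # ERL: s_dot = -eps*sign(s) - k*s  (faster reaching, less chattering
        # than a pure switching law)
        u = -eps * math.copysign(1.0, s) - k * s
        x += (0.8 * x + u) * dt                 # plant: x_dot = 0.8 x + u
        history.append(x)
    return history

traj = simulate()
print(f"|x| after 3 s: {abs(traj[-1]):.5f}")
```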

102 citations


Journal ArticleDOI
29 Jan 2019-Sensors
TL;DR: Evaluating vehicular Internet-based video services traffic and Vehicle-to-Vehicle (V2V) communications in urban and rural scenarios shows that adopting the 5G network with V2X communications satisfies the IoV communication requirements.
Abstract: The Fifth Generation (5G) cellular network can be considered the path to the ubiquitous Internet and the pervasive computing paradigm. The Internet of Vehicles (IoV) uses the network infrastructure to allow cars to be connected through new radio technologies, and can be supported by 5G networks. In this way, Vehicle-to-Everything (V2X) integration inevitably requires 5G connectivity. This paper presents a 5G V2X ecosystem to provide IoV. The proposed ecosystem is based on the Software-Defined Networking (SDN) concept. Considering vehicles as entertainment consumer points, the network infrastructure must be dimensioned generously enough to guarantee delivery and quality. For this purpose, this paper evaluates vehicular Internet-based video services traffic and Vehicle-to-Vehicle (V2V) communications in urban and rural scenarios. Simulations were performed with the Network Simulator ns-3, employing millimeter Wave (mmWave) communications. Three metrics, data transfer rate, transmission delay, and Packet Delivery Ratio (PDR), were analyzed and compared for rural and urban IoV scenarios. The results show satisfactory performance with respect to the IoV communication requirements when adopting the 5G network with V2X communications.
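
As a hedged illustration of how the three reported metrics can be computed from send/receive traces, consider the sketch below; the trace format and numbers are invented for illustration and do not correspond to ns-3's actual output format.

```python
# Computing PDR, average transmission delay, and data rate from toy
# send/receive traces; field layout is a made-up example, not ns-3 output.
sent = [(1, 0.00), (2, 0.01), (3, 0.02), (4, 0.03)]      # (pkt_id, t_send)
recv = [(1, 0.013), (2, 0.025), (4, 0.047)]              # (pkt_id, t_recv)
PKT_BYTES = 1200

send_t = dict(sent)
delays = [t_rx - send_t[pid] for pid, t_rx in recv]

pdr = len(recv) / len(sent)
avg_delay_ms = 1e3 * sum(delays) / len(delays)
duration = max(t for _, t in recv) - min(t for _, t in sent)
throughput_mbps = len(recv) * PKT_BYTES * 8 / duration / 1e6

print(f"PDR={pdr:.2f}, delay={avg_delay_ms:.1f} ms, rate={throughput_mbps:.2f} Mb/s")
```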

101 citations


Journal ArticleDOI
Juan Luo, Luxiu Yin, Jinyu Hu, Chun Wang, Xuan Liu, Xin Fan, Haibo Luo
TL;DR: Experimental results show that the proposed service models and scheduling algorithm can reduce service latency, improve fog node efficiency, and prolong WSNs life cycle through energy balancing.

97 citations


Journal ArticleDOI
TL;DR: An artificial spider geographic routing protocol for urban VANETs (ASGR) is proposed, which performs best in terms of packet delivery ratio and average transmission delay, with up to 15% and 94% improvement, respectively.
Abstract: Recently, vehicular ad hoc networks (VANETs) have been attracting significant attention for their potential for guaranteeing road safety and improving traffic comfort. Due to high mobility and frequent link disconnections, it becomes quite challenging to establish a reliable route for delivering packets in VANETs. To deal with these challenges, an artificial spider geographic routing protocol for urban VANETs (ASGR) is proposed in this paper. First, from a bionic point of view, we construct a spider web based on the network topology to initially select the feasible paths to the destination using artificial spiders. Next, a connection-quality model and a transmission-latency model are established to generate the routing selection metric used to choose the best route among all feasible paths. Then, a selective forwarding scheme is presented to effectively forward packets along the selected route, taking into account nodal movement and signal propagation characteristics. Finally, we implement our protocol on NS2 with maps of different complexity and various simulation parameters. Numerical results demonstrate that, compared with existing schemes, as the packet generation speed, the number of vehicles, and the number of connections vary, our proposed ASGR still performs best in terms of packet delivery ratio and average transmission delay, with up to 15% and 94% improvement, respectively.

93 citations


Journal ArticleDOI
TL;DR: In this paper, a distributed deep learning algorithm that brings together new neural network ideas from liquid state machine (LSM) and echo state networks (ESNs) is proposed to address the problem of content caching and transmission for a wireless virtual reality (VR) network in which cellular-connected UAVs capture videos on live games or sceneries and transmit them to small base stations (SBSs) that service the VR users.
Abstract: In this paper, the problem of content caching and transmission is studied for a wireless virtual reality (VR) network in which cellular-connected unmanned aerial vehicles (UAVs) capture videos of live games or sceneries and transmit them to small base stations (SBSs) that serve the VR users. To meet the VR delay requirements, the UAVs can extract specific visible content (e.g., the user field of view) from the original 360° VR data and send only this visible content to the users so as to reduce the traffic load over the backhaul and radio access links. The extracted visible content consists of 120° horizontal and 120° vertical images. To further alleviate the UAV-SBS backhaul traffic, the SBSs can also cache the popular contents that users request. This joint content caching and transmission problem is formulated as an optimization problem whose goal is to maximize the users' reliability, defined as the probability that the content transmission delay of each user satisfies the instantaneous VR delay target. To address this problem, a distributed deep learning algorithm that brings together new neural network ideas from liquid state machines (LSM) and echo state networks (ESNs) is proposed. The proposed algorithm enables each SBS to predict the users' reliability so as to find the optimal contents to cache and the content transmission format for each cellular-connected UAV. Analytical results are derived to expose the various network factors that impact content caching and content transmission format selection. Simulation results show that the proposed algorithm yields 25.4% and 14.7% gains in terms of reliability compared to Q-learning and a random caching algorithm, respectively.
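
For readers unfamiliar with reservoir computing, the sketch below shows a bare-bones echo state network (one of the two ideas the proposed algorithm combines) trained for one-step prediction of a toy signal with a ridge readout; the reservoir size, scaling, and signal are standard illustrative choices, not the paper's configuration.

```python
# Minimal echo state network: fixed random reservoir, ridge-regression
# readout, one-step prediction of a toy scalar series.
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500                                   # reservoir size, series length
u = np.sin(0.1 * np.arange(T + 1))                # toy "request" signal
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))         # spectral radius < 1

X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):                                # collect reservoir states
    x = np.tanh(W_in[:, 0] * u[t] + W @ x)
    X[t] = x

y = u[1:]                                         # target: next sample
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```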

90 citations


Journal ArticleDOI
TL;DR: A new certificateless aggregate signcryption scheme (CLASC) is proposed by using a fog computing framework that supports mobility, low latency, and location awareness, and is proved to provide unforgeability and confidentiality under the random oracle model.
Abstract: In recent years, with the development of intelligent vehicles and wireless sensor network technology, research on road safety has attracted much attention in vehicular ad-hoc networks (VANETs). By sensing events on the road, vehicles can broadcast information to inform others of traffic jams or accidents. However, the mobile vehicle network has a large transmission delay, which makes real-time content transmission impossible. In this paper, a new certificateless aggregate signcryption scheme (CLASC) is proposed using a fog computing framework that supports mobility, low latency, and location awareness. It is combined with online/offline encryption (OOE) technology, which avoids many time-consuming operations and improves the security of vehicle users and the efficiency of message authentication. In addition, the scheme has the properties of mutual authentication, anonymity, untraceability, and non-deniability. Based on the hardness of the discrete logarithm problem (DLP) and the computational Diffie–Hellman (CDH) problem, the scheme is further proved to provide unforgeability and confidentiality under the random oracle model. The simulation results show that, compared with existing schemes, this scheme not only meets the security requirements of the system but also achieves higher efficiency in computation and communication.

76 citations


Journal ArticleDOI
TL;DR: An analytical method is presented to quantify the impacts on the microgrid operation reliability from cyber system element failures and transmission interference, and the results demonstrate the effectiveness of the proposed model and method.
Abstract: A microgrid is a typical cyber-physical system. The coordinated control of each unit in a microgrid mainly relies on the cyber system. Once performance degradation of the cyber system happens, such as outages or transmission delay, the stable operation of the microgrid will likely be affected. In this paper, an analytical method is presented to quantify the impacts of cyber system element failures and transmission interference on microgrid operational reliability. First, the link model and the information transmission model of the cyber system are established to describe the cyber routing path and the information transmission service quality, respectively. Then, the sequential Monte Carlo method is employed to simulate the operation of the microgrid. Finally, the impacts of element failures and transmission interference in the cyber system on microgrid operational reliability are quantitatively analyzed. Additionally, a sensitivity analysis is carried out, focusing on the impacts of the cyber network topology, element failure rates, and different types of information transmission quality indexes, including delays, routing errors, and information transmission errors. The results of the numerical examples demonstrate the effectiveness of the proposed model and method.
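
A hedged sketch of the sequential Monte Carlo idea follows: draw exponential times to failure and repair for each cyber element along a route and accumulate path downtime. The failure and repair rates are illustrative assumptions, and overlapping outages are ignored as a rare-event approximation.

```python
# Sequential Monte Carlo sketch of cyber-path availability; rates are toy
# assumptions, and element outages are summed (rare-event approximation,
# so overlaps are neglected).
import random

FAIL_RATE = {"switch": 1 / 8760.0, "link": 1 / 4380.0}   # failures per hour
REPAIR_H = {"switch": 12.0, "link": 6.0}                 # mean repair times
PATH = ["switch", "link", "switch"]          # series elements on the route

def path_downtime(horizon_h=87600.0):
    down = 0.0
    for kind in PATH:                        # series system: any element down
        t = 0.0
        while t < horizon_h:
            t += random.expovariate(FAIL_RATE[kind])     # next failure
            if t >= horizon_h:
                break
            repair = random.expovariate(1.0 / REPAIR_H[kind])
            down += min(repair, horizon_h - t)
            t += repair
    return down

runs = [path_downtime() for _ in range(200)]
print("mean unavailability:", sum(runs) / len(runs) / 87600.0)
```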

Journal ArticleDOI
TL;DR: A dynamic duty cycle (DDC) scheme is proposed for minimizing the delay in WSNs and it is demonstrated that the DDC scheme can outperform other schemes.

Journal ArticleDOI
TL;DR: A delay-based workload allocation problem is formulated which suggests the optimal workload allocations among local edge server, neighbor edge servers, and cloud toward the minimal energy consumption as well as the delay guarantee in an IoT-edge-cloud system.
Abstract: Edge computing has recently emerged as an extension to cloud computing for quality of service (QoS) provisioning, particularly delay guarantees for delay-sensitive applications. By offloading computationally intensive workloads to edge servers, the quality of computation experience, e.g., network transmission delay and transmission energy consumption, can be improved greatly. However, the computation resource of an edge server is so scarce that it cannot respond quickly to bursts of computation requirements. Accordingly, queuing delay is non-negligible in a computationally intensive environment, e.g., one consisting of Internet of Things (IoT) applications. In addition, the computation energy consumption in edge servers may be higher than that in clouds when the workload is heavy. To provide QoS for end users while achieving green computing, cooperation between edge servers and the cloud is significantly important. In this paper, the energy-efficient and delay-guaranteed workload allocation problem in an IoT-edge-cloud computing system is investigated. We formulate a delay-based workload allocation problem which finds the optimal workload allocation among the local edge server, neighbor edge servers, and the cloud toward minimal energy consumption as well as the delay guarantee. The problem is then tackled using a delay-based workload allocation (DBWA) algorithm grounded in Lyapunov drift-plus-penalty theory. Theoretical analysis and simulations have been conducted to demonstrate the efficiency of the proposal for energy efficiency and delay guarantee in an IoT-edge-cloud system.
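
To illustrate the drift-plus-penalty principle behind DBWA (not the paper's exact algorithm), the toy sketch below routes each slot's arrivals to the option minimizing a weighted sum of energy cost and queue backlog pressure; all rates and costs are assumptions.

```python
# Lyapunov drift-plus-penalty sketch: each slot, send arriving workload to
# the destination minimizing V*energy + backlog pressure. Toy parameters.
import random

OPTIONS = {                     # (energy per task J, service rate tasks/slot)
    "local_edge":    (0.4, 5.0),
    "neighbor_edge": (0.6, 8.0),
    "cloud":         (1.0, 50.0),
}
V = 2.0                                         # energy/delay trade-off knob
queues = {k: 0.0 for k in OPTIONS}

for slot in range(1000):
    arrivals = random.randint(0, 10)
    # drift-plus-penalty: argmin of V*energy_cost + queue-weighted backlog
    choice = min(OPTIONS,
                 key=lambda k: V * OPTIONS[k][0] * arrivals + queues[k] * arrivals)
    queues[choice] += arrivals
    for k, (_, mu) in OPTIONS.items():          # serve each queue
        queues[k] = max(0.0, queues[k] - mu)

print({k: round(q, 1) for k, q in queues.items()})
```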

Journal ArticleDOI
TL;DR: An integral-based model is proposed for designing a new event-triggered scheme, which relies on the mean of the system state and the last triggered state, and the co-design conditions of triggering parameters and controller gains are given in linear matrix inequalities to ensure the asymptotic stability of the resulting closed-loop system.
Abstract: This paper presents the event-triggered control for Takagi–Sugeno (T–S) fuzzy networked systems with transmission delay. An integral-based model is proposed for designing a new event-triggered scheme, which relies on the mean of the system state and the last triggered state. To handle the asynchronous premises of the fuzzy system and fuzzy controller, a novel triggering condition is added into the event-triggered mechanism. Then, the closed-loop T–S fuzzy event-triggered control system is established as a distributed delay system. With the help of the Legendre polynomials and their properties, the co-design conditions of triggering parameters and controller gains are given in linear matrix inequalities to ensure the asymptotic stability of the resulting closed-loop system. Finally, an experiment via a practical wireless network is implemented to illustrate the effectiveness of the proposed approach.
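
For intuition, a minimal relative-threshold event trigger of the kind common in this literature is sketched below (the paper's integral-based scheme, which uses the mean of the state and the last triggered state, is more elaborate); the plant, threshold, and sampling period are assumptions.

```python
# Event-triggered transmission sketch: sample periodically, transmit only
# when the state drifts far enough from the last transmitted state.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # stable toy plant
x = np.array([1.0, 0.0])
x_last, sigma, dt, sent = x.copy(), 0.2, 0.01, 0

for step in range(1000):
    x = x + dt * (A @ x)                          # plant evolves
    if np.linalg.norm(x - x_last) > sigma * np.linalg.norm(x):
        x_last = x.copy()                         # event: transmit new state
        sent += 1

print(f"transmissions: {sent}/1000 samples")
```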

Journal ArticleDOI
TL;DR: A data dissemination technique using a time barrier mechanism to reduce the overhead of messages that can clutter the network and is based on the concept of a super-node to timely disseminate the messages.
Abstract: With the advancement of technology and the inception of smart vehicles and smart cities, every vehicle can communicate with other vehicles either directly or through ad-hoc networks. Such platforms can therefore be utilized to disseminate time-critical information. However, in an ad-hoc situation, information coverage can be restricted where no relay vehicle is available. Moreover, critical information must be delivered within a specific period of time, so timely message dissemination is extremely important. The existing data dissemination techniques in VANETs generate a large number of messages through techniques such as broadcast or partial broadcast. Techniques based on broadcast schemes can cause congestion, as all recipients re-broadcast the message and vehicles receive multiple copies of the same message. Further, re-broadcast can degrade the coverage delivery ratio due to channel congestion. The traditional cluster-based approach does not work efficiently either, as clustering schemes add delays by routing all communication through the cluster head. In this paper, we propose a data dissemination technique using a time barrier mechanism to reduce the overhead of messages that can clutter the network. The proposed solution is based on the concept of a super-node to disseminate messages in a timely manner. Moreover, to avoid unnecessary broadcasts, which can also cause the broadcast storm problem, the time barrier technique is adopted: only the farthest vehicle, whose rebroadcast covers the most additional distance, rebroadcasts the message. The message can therefore reach the farthest node in less time, which improves the coverage and reduces the delay. The proposed scheme is compared with traditional probabilistic approaches. The evaluation section shows the reduction in message overhead and transmission delay, and the improved coverage and packet delivery ratio.
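
The time barrier mechanism can be sketched in a few lines: each receiver arms a timer that shrinks with its distance from the sender, so the farthest vehicle fires first and suppresses the rest. The radio range, timer scale, and node count below are toy assumptions.

```python
# Time-barrier rebroadcast sketch: farther receivers wait less, so the
# farthest vehicle rebroadcasts first and the others cancel their timers.
import random

RANGE_M, T_MAX_MS = 300.0, 20.0
neighbors = [random.uniform(0, RANGE_M) for _ in range(15)]   # distances (m)

def barrier(d):                      # farther => shorter wait
    return T_MAX_MS * (1.0 - d / RANGE_M)

timers = sorted((barrier(d), d) for d in neighbors)
winner_wait, winner_dist = timers[0]
suppressed = len(timers) - 1         # others cancel on hearing the winner
print(f"rebroadcast by node at {winner_dist:.0f} m after {winner_wait:.1f} ms; "
      f"{suppressed} duplicates suppressed")
```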

Journal ArticleDOI
TL;DR: The results show that the resilience of CBTC systems can be enhanced using the introduced scheme: not only are the gaps in the optimal velocity-versus-distance curve smaller, but unplanned traction and braking are also reduced.
Abstract: High reliability and low latency are crucial for urban rail transit. In this paper, we introduce communication strategies for communication-based train control (CBTC) systems using long-term evolution for metro (LTE-M) to improve reliability and latency. Specifically, FlashLinQ-based Train-to-Train (T2T) communication schemes are introduced, considering both the transmission delay and packet drops. Quantified resilience is also introduced as a system metric to evaluate the preservation and recovery performance of CBTC systems. First, a novel urban rail transit wireless communication model is established using FlashLinQ-based T2T communications. Then, we introduce a novel cognitive control scheme based on LTE-M with T2T communication to enhance the quality of service and the resilience of multi-train CBTC systems. In the introduced scheme, Q-learning is used to generate optimal control strategies, considering both wireless communication parameter adaptation and train control parameters. Extensive simulations are conducted, and the results show that the resilience of CBTC systems can be enhanced using the introduced scheme. Furthermore, with the introduced scheme, not only are the gaps in the optimal velocity-versus-distance curve smaller, but unplanned traction and braking are also reduced.

Journal ArticleDOI
TL;DR: Considering users' diverse quality of service (QoS) requirements on transmission delay and rate, a coarse resource provisioning scheme and a deep reinforcement learning-based autonomous slicing refinement algorithm are proposed, and a shape-based heuristic algorithm for user resource customization is devised to improve resource utilization and QoS satisfaction.
Abstract: Network slicing has been introduced in fifth-generation (5G) systems to satisfy the requirements of diverse applications from various service providers operating on a common shared infrastructure. However, the heterogeneous characteristics of slices have not been widely explored. In this paper, we investigate dynamic network slicing strategies for mixed traffic in a virtualized radio access network (RAN). Considering users' diverse quality of service (QoS) requirements on transmission delay and rate, a coarse resource provisioning scheme and a deep Q-network (DQN)-based autonomous slicing refinement algorithm are proposed. Then, a shape-based heuristic algorithm for user resource customization is devised to improve resource utilization and QoS satisfaction. In principle, the DQN algorithm allocates only the necessary resources to slices to satisfy users' QoS requirements; for fairness in comparison, we reserve all unused resources back to the slices. In case of a sudden change in the user population of one slice, the algorithm provides isolation. To validate these advantages, system-level simulations are conducted. The results show that the proposed algorithm achieves user satisfaction of up to about 100% and resource utilization of up to 80% against state-of-the-art solutions. The proposed algorithm also improves the performance of slices under mixed traffic against state-of-the-art benchmarks, which fail to balance satisfaction and resource utilization in some slices.

Journal ArticleDOI
TL;DR: A feasible solution to minimize the response time of traffic management services is put forward by enabling real-time content dissemination based on heterogeneous network access in IoV systems, designing a crowdsensing-based system model, and investigating a cluster-based optimization framework.
Abstract: As an application of "smart transport" for the Internet of Things, the Internet of Vehicles (IoV) has emerged as a new research field based on vehicular ad hoc networks (VANETs). With the development of smart vehicles and the integration of sensors, applications of traffic management and road safety in large-scale IoV systems have drawn great attention. By sensing events occurring on roads, vehicles can broadcast messages to inform others about traffic jams or accidents. However, the store-carry-and-forward transmission pattern may cause a large transmission delay, making the implementation of large-scale traffic management difficult. In this paper, we put forward a feasible solution to minimize the response time of traffic management services by enabling real-time content dissemination based on heterogeneous network access in IoV systems. We first design a crowdsensing-based system model for large-scale IoV systems. Then, a cluster-based optimization framework is investigated to provide timely responses for traffic management. Specifically, we estimate the message transmission delay using stochastic theory, which provides a guideline for next-hop relay selection in our delay-sensitive routing scheme. Furthermore, network performance is evaluated on two city road maps, and the performance metrics, comprising average delivery delay, average delivery ratio, average communication cost, and access ratio, demonstrate the superiority of our system. Finally, we conclude our work and discuss future work.

Journal ArticleDOI
TL;DR: A distributed routing protocol, DGGR, is proposed, which comprehensively takes both sparse and dense environments into account to make routing decisions and performs best in terms of average transmission delay and packet delivery ratio under varying packet generation speeds and vehicle densities.
Abstract: Due to random delays, local maxima, and data congestion in vehicular networks, the design of a routing protocol is a genuinely challenging task, especially in urban environments. In this paper, a distributed routing protocol, DGGR, is proposed, which comprehensively takes both sparse and dense environments into account to make routing decisions. To guide route selection, a road weight evaluation (RWE) algorithm is presented to assess road segments; its novelty lies in assigning each road segment a weight based on two delay models, built by exploiting real-time link properties when the segment is connected and historic traffic information when it is disconnected. With the RWE algorithm, the determined routing path can greatly alleviate the risk of local maxima and data congestion. In particular, in view of the large size of a modern city, the road map is divided into a series of Grid Zones (GZs). Based on the position of the destination, packets can be forwarded among different GZs instead of across the whole city map to reduce the computation complexity, with the lowest-delay path determined within each GZ. A backbone link, consisting of selected backbone nodes at intersections and within road segments, is built for data forwarding along the determined path, which further avoids MAC contention. Extensive simulations reveal that, compared with some classic routing protocols, DGGR performs best in terms of average transmission delay and packet delivery ratio under varying packet generation speeds and vehicle densities.

Proceedings ArticleDOI
01 May 2019
TL;DR: To jointly optimize the computing and communication resources in the fog node, a delay-sensitive data offloading problem is formulated that mainly considers the local task execution delay and transmission delay, and an approximate solution is obtained via Quadratically Constrained Quadratic Programming (QCQP).
Abstract: Computational offloading has become an important and essential research issue for delay-sensitive task completion at resource-constrained end-users. Fog computing, which extends the computing and storage resources of cloud computing to the network edge, emerges as a potential solution for low-latency task provisioning via computational offloading. In our offloading scenario, each end-user first offloads its task to its primary fog node. When the primary fog node cannot meet the tolerable latency, it has the possibility to offload to the cloud and/or an assisting fog node to obtain extra computing resources and shorten the computing latency at the expense of additional transmission latency. Therefore, a trade-off needs to be made carefully in the offloading decision. At the same time, in addition to the task data from the end-users under its primary coverage, the primary fog node receives tasks from other end-users via its neighbor fog nodes. Thus, to jointly optimize the computing and communication resources in the fog node, we formulate a delay-sensitive data offloading problem that mainly considers the local task execution delay and transmission delay. An approximate solution is obtained via Quadratically Constrained Quadratic Programming (QCQP). Finally, extensive simulation results demonstrate the effectiveness of the proposed solution, which guarantees minimum end-to-end latency for various task processing densities and traffic intensity levels.
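
As a toy version of the trade-off described above (not the paper's QCQP formulation), the sketch below splits a task between the primary fog node and the cloud and finds, by scalar search, the fraction that minimizes completion time; all rates and the task size are assumptions.

```python
# Offloading trade-off sketch: keeping work at the fog node avoids transmission
# delay but computes slowly; sending it to the cloud computes fast but pays
# uplink delay. Solved by bounded scalar search over the split fraction.
from scipy.optimize import minimize_scalar

TASK_BITS = 8e6
FOG_RATE, CLOUD_RATE = 1e9, 20e9        # CPU cycles/s
DENSITY = 1000.0                        # processing density, cycles per bit
UPLINK = 50e6                           # bits/s, fog -> cloud

def completion_time(alpha):             # alpha = fraction kept at the fog node
    local = alpha * TASK_BITS * DENSITY / FOG_RATE
    remote = (1 - alpha) * TASK_BITS / UPLINK \
             + (1 - alpha) * TASK_BITS * DENSITY / CLOUD_RATE
    return max(local, remote)           # parallel execution finishes at the max

res = minimize_scalar(completion_time, bounds=(0.0, 1.0), method="bounded")
print(f"keep {res.x:.2f} locally, finish in {res.fun * 1e3:.1f} ms")
```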

Journal ArticleDOI
TL;DR: In this article, the problem of wireless VR resource management is investigated for a wireless VR network in which VR contents are sent by a cloud to cellular small base stations (SBSs); the SBSs collect tracking data from the VR users over the uplink in order to generate the VR content and transmit it to the end-users over downlink cellular links.
Abstract: Providing seamless connectivity for wireless virtual reality (VR) users has emerged as a key challenge for future cloud-enabled cellular networks. In this paper, the problem of wireless VR resource management is investigated for a wireless VR network in which VR contents are sent by a cloud to cellular small base stations (SBSs). The SBSs collect tracking data from the VR users, over the uplink, in order to generate the VR content and transmit it to the end-users using downlink cellular links. For this model, the data requested or transmitted by the users can exhibit correlation, since the VR users may engage in the same immersive virtual environment with different locations and orientations. As such, the proposed resource management framework can factor in such spatial data correlation to better manage uplink and downlink traffic and reduce the traffic load in both directions. In the downlink, the cloud can transmit 360° contents or specific visible contents (e.g., the user field of view) extracted from the original 360° contents to the users according to the users' data correlation, so as to reduce the backhaul traffic load. In the uplink, each SBS can associate with the users that have similar tracking information so as to reduce the tracking data size. This data correlation-aware resource management problem is formulated as an optimization problem whose goal is to maximize the users' successful transmission probability, defined as the probability that the content transmission delay of each user satisfies an instantaneous VR delay target. To solve this problem, a machine learning algorithm that uses echo state networks (ESNs) with transfer learning is introduced. By smartly transferring information on the SBS's utility, the proposed transfer-based ESN algorithm can quickly cope with changes in the wireless networking environment due to users' content requests and content request distributions. Simulation results demonstrate that the developed algorithm achieves up to 15.8% and 29.4% gains in terms of successful transmission probability compared to Q-learning with data correlation and Q-learning without data correlation, respectively.

Journal ArticleDOI
TL;DR: A heuristic offloading method, named HOM, is proposed to minimize the total transmission delay, and an offloading framework for deep learning edge services is built upon a centralized unit (CU)-distributed unit (DU) architecture.
Abstract: With the continuous development of the Internet of Things (IoT) and communications technology, especially in the 5G era, mobile tasks involving large volumes of data, such as speech recognition and video classification, have a strong demand for deep learning. Considering the limited computing resources and battery capacity of mobile devices (MDs), these tasks are often offloaded to remote infrastructure, such as cloud platforms, which incurs unavoidable offloading transmission delay. Edge computing (EC) is a novel computing paradigm capable of hosting offloaded computation tasks at the edge of the network, which reduces the transmission delay between the MDs and the cloud. Therefore, combining deep learning and EC is expected to be a solution for these tasks. However, deciding the offloading destination (the cloud or deep learning-enabled edge computing nodes (ECNs)) for computation offloading remains a challenge. In this paper, a heuristic offloading method, named HOM, is proposed to minimize the total transmission delay. To be more specific, an offloading framework for deep learning edge services is built upon a centralized unit (CU)-distributed unit (DU) architecture. We then obtain an appropriate offloading strategy by estimating origin-destination ECN distances and heuristically searching for destination virtual machines to accommodate the offloaded computation tasks. Finally, the effectiveness of the scheme is verified by detailed experimental evaluations.

Journal ArticleDOI
TL;DR: The mathematical proof reveals that the time average of the control strength is crucial for reaching synchronization, and together with the agent dynamics and the topology, this average also governs the largest admissible delay.
Abstract: This paper investigates the synchronization problem of nonlinear multi-agent systems with time-varying control in the presence of transmission delay over a communication network. To facilitate the study, a novel delayed differential inequality with time-varying coefficients is first established. Then, the synchronization problem is recast into the stability problem of a delayed differential system with certain time-dependent parameter. A sufficient criterion is further formulated to guarantee the synchronization. The mathematical proof reveals that the time average of the control strength is crucial for reaching synchronization. Together with the agent dynamics and the topology, this average also governs the largest admissible delay. Moreover, the criterion is applied to synchronization problems with general on-off coupling under data sampling and delayed communications, respectively. Some useful corollaries are consequently deduced. Finally, numerical simulations are presented to illustrate the validity of our theoretical results.

Journal ArticleDOI
TL;DR: In order to reduce the frequency of data transmission and save network resources, a sampled-data-based event-triggered scheme is utilized, and novel conditions are formulated in terms of linear matrix inequalities (LMIs) to ensure the stochastic stability of semi-Markovian jump time-delay systems.

Journal ArticleDOI
Guangjie Han, Tang Zhengkai, Yu He, Jinfang Jiang, James Adu Ansere
TL;DR: A district partition-based data collection algorithm with event dynamic competition in UASNs is proposed, which significantly reduces energy consumption and guarantees load balancing while reducing end-to-end transmission delay.
Abstract: The advent of underwater acoustic sensor networks (UASNs) has enhanced marine environmental monitoring, auxiliary navigation, and marine military defense. One of the core functions of UASNs is data collection. However, current underwater data collection schemes generally suffer from problems such as high energy consumption and high latency. Furthermore, the application of multiple autonomous underwater vehicles (AUVs) has brought additional problems of task assignment and load balancing, leading to significant failures in data collection and in handling spontaneous emergencies. To address these problems, a district partition-based data collection algorithm with event dynamic competition in UASNs is proposed. In this algorithm, the value of information of a packet determines the priority of its transmission to the cluster head. The navigation positions of the mobile sink and the area under the responsibility of each AUV are determined by spatial region division. The path of each AUV within its subregion is then planned using reinforcement learning. Subsequently, dynamic competition among multiple AUVs is used to handle emergency tasks. Simulations demonstrate that our proposed algorithm significantly reduces energy consumption and guarantees load balancing while reducing end-to-end transmission delay.

Journal ArticleDOI
TL;DR: Dynamic power control for NOMA transmissions in wireless caching networks is studied in this letter; the power allocation can be adjusted based on the status of content transmissions, with the focus on minimizing the transmission delay subject to each user's transmission deadline and the total power constraint.
Abstract: The non-orthogonal multiple access (NOMA) technique is capable of improving the efficiency of delivering and pushing contents in wireless caching networks. However, due to differences in data volume and channel conditions, static power control schemes cannot fully explore the potential of NOMA. To solve this problem, dynamic power control for NOMA transmissions in wireless caching networks is studied in this letter, in which the power allocation can be adjusted based on the status of content transmissions. In particular, we focus on minimizing the transmission delay under each user's transmission deadline and the total power constraint. An iterative algorithm is first proposed to approach the optimal solution of dynamic power control. Then a deep neural network (DNN)-based method is designed to strike a balance between performance and computational complexity. Finally, Monte-Carlo simulations are provided for verification.
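
To illustrate the underlying trade-off (not the letter's iterative or DNN method), the sketch below grid-searches the power split between two NOMA users, with successive interference cancellation at the strong user, that minimizes the larger of the two transmission delays; channel gains, bandwidth, and content sizes are assumptions.

```python
# NOMA power split sketch: find the split of a total power budget between a
# weak and a strong user minimizing the larger transmission delay. Toy values.
import math

B = 1e6                                  # Hz
G_WEAK, G_STRONG = 0.2, 1.0              # normalized channel gains
BITS_WEAK, BITS_STRONG = 4e6, 6e6        # remaining content volume (bits)
P_TOTAL, NOISE = 1.0, 0.05

def delays(p_weak):
    p_strong = P_TOTAL - p_weak
    # weak user decodes with the strong user's signal as interference
    r_weak = B * math.log2(1 + p_weak * G_WEAK / (p_strong * G_WEAK + NOISE))
    # strong user removes the weak user's signal via SIC first
    r_strong = B * math.log2(1 + p_strong * G_STRONG / NOISE)
    return BITS_WEAK / r_weak, BITS_STRONG / r_strong

best = min((max(delays(p)), p) for p in [i / 100 * P_TOTAL for i in range(1, 100)])
print(f"max delay {best[0]:.2f} s at p_weak = {best[1]:.2f} W")
```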

Journal ArticleDOI
TL;DR: Experimental results show that the TBRS scheme is superior to the existing schemes in terms of delivery rate, transmission delay and reliability, and the resistance against illegal eavesdropping attacks has increased by an average of 32.53% when compared to other algorithms.

Journal ArticleDOI
TL;DR: A reliable, energy-efficient emergency notification system for epileptic seizure detection, based on conceptual learning and fuzzy classification, is presented, together with a selective data transfer scheme that opts for the most convenient data transmission mode depending on the detected patient's condition.
Abstract: Smart healthcare systems require recording, transmitting, and processing large volumes of multimodal medical data generated from different types of sensors and medical devices, which is challenging and may render some remote health monitoring applications impractical. Moving computational intelligence to the network edge is a promising approach for providing efficient and convenient ways of continuous remote monitoring. Implementing efficient edge-based classification and data reduction techniques is of paramount importance to enable smart healthcare systems with efficient real-time and cost-effective remote monitoring. Thus, we present our vision of leveraging edge computing to monitor, process, and make autonomous decisions for smart health applications. In particular, we present and implement an accurate and lightweight classification mechanism that, leveraging time-domain features extracted from vital signs, allows for reliable seizure detection at the network edge with high classification accuracy and low computational requirements. We then propose and implement a selective data transfer scheme that opts for the most convenient mode of data transmission depending on the detected patient's condition. In addition, we propose a reliable energy-efficient emergency notification system for epileptic seizure detection, based on conceptual learning and fuzzy classification. Our experimental results assess the performance of the proposed system in terms of data reduction, classification accuracy, battery lifetime, and transmission delay. We show the effectiveness of our system and its ability to outperform conventional remote monitoring systems that ignore data processing at the edge by: (i) achieving 98.3% classification accuracy for seizure detection, (ii) extending battery lifetime by 60%, and (iii) decreasing average transmission delay by 90%.
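
A hedged sketch of lightweight time-domain feature extraction for edge-side seizure screening appears below, using mean absolute amplitude, variance, and line length with a toy threshold rule; the paper's actual features, classifier, and thresholds differ.

```python
# Toy time-domain features and threshold rule for seizure screening on
# synthetic signals; thresholds are illustrative assumptions.
import math, random

def features(window):
    n = len(window)
    mean_abs = sum(abs(v) for v in window) / n
    mu = sum(window) / n
    variance = sum((v - mu) ** 2 for v in window) / n
    line_length = sum(abs(window[i] - window[i - 1]) for i in range(1, n))
    return mean_abs, variance, line_length

def is_seizure(window, var_th=4.0, ll_th=400.0):
    _, variance, line_length = features(window)
    return variance > var_th and line_length > ll_th

normal = [random.gauss(0, 1) for _ in range(256)]
ictal = [4 * math.sin(0.8 * i) + random.gauss(0, 1) for i in range(256)]
print(is_seizure(normal), is_seizure(ictal))   # expect: False True
```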

Journal ArticleDOI
TL;DR: This article considers exponential stabilization of linear Networked Control Systems with periodic event-triggered control for a given network specification in terms of a maximum number of successive dropouts and a constant transmission delay.

Proceedings ArticleDOI
24 Jun 2019
TL;DR: In this article, the achievable reliability and latency of VR services over terahertz (THz) links are characterized, and a novel expression for the probability distribution function of the transmission delay is derived as a function of system parameters.
Abstract: Guaranteeing ultra reliable low latency communications (URLLC) with high data rates for virtual reality (VR) services is a key challenge to enable a dual VR perception: visual and haptic. In this paper, a terahertz (THz) cellular network is considered to provide high-rate VR services, thus enabling a successful visual perception. For this network, guaranteeing URLLC with high rates requires overcoming the uncertainty stemming from the THz channel. To this end, the achievable reliability and latency of VR services over THz links are characterized. In particular, a novel expression for the probability distribution function of the transmission delay is derived as a function of the system parameters. Subsequently, the end-to-end (E2E) delay distribution that takes into account both processing and transmission delay is found and a tractable expression of the reliability of the system is derived as a function of the THz network parameters such as the molecular absorption loss and noise, the transmitted power, and the distance between the VR user and its respective small base station (SBS). Numerical results show the effects of various system parameters such as the bandwidth and the region of non-negligible interference on the reliability of the system. In particular, the results show that THz can deliver rates up to 16.4 Gbps and a reliability of 99.999% (with a delay threshold of 30 ms) provided that the impact of the molecular absorption on the THz links, which substantially limits the communication range of the SBS, is alleviated by densifying the network accordingly.
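
As a rough illustration of how such a delay distribution and reliability can be estimated numerically, the sketch below uses a simplified THz path loss with free-space spreading and exponential molecular absorption; the model and every parameter value are assumptions, not the paper's derived expressions.

```python
# Monte Carlo sketch of a THz-link transmission-delay distribution and the
# reliability P(delay <= 30 ms). Path loss model and parameters are assumed.
import math, random

B = 10e9                     # 10 GHz bandwidth
P_TX, NOISE_W = 1.0, 1e-9
K_ABS = 0.05                 # molecular absorption coefficient (1/m), assumed
FRAME_BITS = 50e6            # one VR frame
F_C = 3e11                   # 300 GHz carrier
C = 3e8

def delay_s(distance_m):
    spread = (C / (4 * math.pi * F_C * distance_m)) ** 2
    gain = spread * math.exp(-K_ABS * distance_m)      # spreading + absorption
    snr = P_TX * gain / NOISE_W
    rate = B * math.log2(1 + snr)
    return FRAME_BITS / rate

samples = [delay_s(random.uniform(1.0, 10.0)) for _ in range(100_000)]
reliability = sum(d <= 0.030 for d in samples) / len(samples)
print(f"P(delay <= 30 ms) ~ {reliability:.5f}")
```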