
Showing papers on "Transmission delay published in 2021"


Journal ArticleDOI
TL;DR: Experimental results show that the intelligent module provides energy-efficient, secure transmission with low computational time as well as a reduced bit error rate, which is a key requirement for the intelligent manufacturing of VSNs.
Abstract: Due to technological advancement, smart visual sensing is required in terms of data transfer capacity, energy efficiency, security, and computational efficiency. High-quality image transmission in visual sensor networks (VSNs) consumes considerable space, energy, and transmission delay, and may be exposed to various security threats. Image compression is a key phase of visual sensing systems that needs to be effective. This motivates us to propose a fast and efficient intelligent image transmission module that achieves energy efficiency, minimum delay, and efficient bandwidth utilization. Compressive sensing (CS) is introduced to compress the image quickly so as to reduce energy consumption, minimize time, and use bandwidth efficiently. However, CS alone cannot achieve security against the different kinds of threats. Several methods have been introduced over the last decade to address the security challenges in the CS domain, but efficiency remains a key requirement for the intelligent manufacturing of VSNs. Furthermore, the random variables selected for CS suffer from degraded recovered image quality due to the accumulation of noise. Concerning the above challenges, this paper introduces a novel one-way image transmission module for multiple-input multiple-output systems that provides secure and energy-efficient transmission with the CS model. Secure transmission in the CS domain is achieved using a security matrix, called the compressed secured matrix, and perfect reconstruction is achieved with the random measurement matrix in CS. Experimental results show that the intelligent module provides energy-efficient, secure transmission with low computational time as well as a reduced bit error rate.
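To make the compressive-sensing step concrete, the following is a minimal Python sketch of the generic CS measurement y = Φx with a random Gaussian matrix; the block size, compression ratio, and matrix construction are illustrative assumptions and not the paper's compressed secured matrix.

```python
# Minimal sketch of a compressive-sensing measurement step.
# The measurement matrix, sizes, and block shape are illustrative assumptions.
import numpy as np

def cs_measure(image_block, m):
    """Compress a flattened image block x (length n) into m << n measurements."""
    x = image_block.reshape(-1)
    n = x.size
    phi = np.random.randn(m, n) / np.sqrt(m)  # random Gaussian measurement matrix
    y = phi @ x                               # y = Phi x, the compressed samples
    return y, phi

block = np.random.rand(16, 16)                # stand-in for a 16x16 image block
y, phi = cs_measure(block, m=64)              # keep only 64 of 256 samples
print(y.shape)                                # (64,)
```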

262 citations


Journal ArticleDOI
TL;DR: An adaptive event-triggered scheme for S-MJSs that is more effective than the conventional event-triggered strategy for decreasing network transmission information is developed, and a new adaptive law that can dynamically adjust the event-triggered threshold is designed.
Abstract: This paper examines the adaptive event-triggered fault detection problem of semi-Markovian jump systems (S-MJSs) with output quantization. First, we develop an adaptive event-triggered scheme for S-MJSs that is more effective than conventional event-triggered strategy for decreasing network transmission information. Meanwhile, we design a new adaptive law that can dynamically adjust the event-triggered threshold. Second, we consider output signal quantization and transmission delay in the proposed fault detection scheme. Moreover, we establish novel sufficient conditions for the stochastic stability in the proposed fault detection scheme with an $H_{\infty }$ performance with the help of linear matrix inequalities (LMIs). Finally, we provide simulation results to demonstrate the usefulness of the developed theoretical results.
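For intuition, here is a hedged Python sketch of a generic adaptive event-triggered transmission rule of the kind discussed above: a sample is transmitted only when its deviation from the last transmitted sample exceeds an adaptive threshold. The trigger condition and the adaptive law below are illustrative assumptions, not the specific law derived in the paper.

```python
# Generic adaptive event-triggered transmission rule (illustrative only).
import numpy as np

def adaptive_event_trigger(samples, sigma0=0.2, rho=0.05):
    sigma = sigma0
    last_sent = samples[0]
    sent_indices = [0]
    for k, x in enumerate(samples[1:], start=1):
        err = np.linalg.norm(x - last_sent) ** 2
        if err > sigma * np.linalg.norm(x) ** 2:    # trigger condition
            last_sent = x
            sent_indices.append(k)
            sigma = max(0.01, sigma - rho * sigma)  # tighten threshold after an event
        else:
            sigma = min(1.0, sigma + rho * sigma)   # relax threshold otherwise
    return sent_indices

traj = [np.array([np.sin(0.1 * k), np.cos(0.1 * k)]) for k in range(100)]
print(adaptive_event_trigger(traj))  # indices of samples actually transmitted
```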

183 citations


Journal ArticleDOI
TL;DR: A novel stability criterion is developed for the LFC of the power system by considering the sampling period and transmission delay of the communication network, which ensures that the proposed scheme operates under large sampling periods and transmission delays.
Abstract: Uncertain transmission delays, sampling periods, parameter uncertainties of the power system, load fluctuations, and the intermittent generation of renewable energy sources (RESs) significantly influence a power system's frequency. This article designs a robust delay-dependent PI-based load frequency control (LFC) scheme for a power system based on sampled-data control. First, a sampled-data-based delay-dependent LFC model of the power system is constructed. Then, by applying Lyapunov theory and the linear matrix inequality technique, a novel stability criterion is developed for the LFC of the power system by considering the sampling period and transmission delay of the communication network, which ensures that the proposed scheme operates under large sampling periods and transmission delays. Next, an exponential decay rate (EDR) is introduced to guide the design of a robust PI-based LFC scheme. The robust LFC scheme is designed by setting a small EDR. The values of EDR are adjusted by the given robust performance evaluation conditions of parameter uncertainties and $H_\infty$ performance. Finally, case studies are carried out based on a one-area power system and a three-area power system with RESs. Simulation results show that the proposed LFC scheme exhibits strong robustness against parameter uncertainties of the power system and communication network, load fluctuations, and the intermittent generation of RESs.
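As a concrete illustration of the controller structure, below is a minimal sketch of a discrete PI-based LFC law driven by the area control error (ACE); the gains and sampling period are placeholder values, whereas the paper obtains them from LMI-based stability conditions under sampling and delay.

```python
# Discrete PI-based load frequency control law (illustrative gains and sampling period).
def pi_lfc(ace_samples, kp=0.5, ki=0.2, ts=1.0):
    integral, u = 0.0, []
    for ace in ace_samples:
        integral += ace * ts
        u.append(-(kp * ace + ki * integral))   # PI control signal per sampling instant
    return u

print(pi_lfc([0.05, 0.04, 0.02, 0.0, -0.01]))
```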

105 citations


Journal ArticleDOI
TL;DR: The placement problem of ESs in the IoV is studied, and the six-objective ES deployment optimization model is constructed by simultaneously considering transmission delay, workload balancing, energy consumption, deployment costs, network reliability, and ES quantity.
Abstract: The development of the Internet of Vehicles (IoV) has turned transportation systems into intelligent networks. However, as the number of vehicles increases, an increasing amount of data needs to be analyzed and processed. Roadside units (RSUs) can offload the data collected from vehicles to remote cloud servers for processing, but this causes significant network latency and is unfriendly to applications that require real-time information. Edge computing (EC) brings low service latency to users. There are many studies on computation offloading strategies from vehicles or other mobile devices to edge servers (ESs), but the deployment of the ESs themselves cannot be ignored. In this paper, the placement problem of ESs in the IoV is studied, and a six-objective ES deployment optimization model is constructed by simultaneously considering transmission delay, workload balancing, energy consumption, deployment costs, network reliability, and ES quantity. In addition, the deployment problem of ESs is optimized by a many-objective evolutionary algorithm. By comparing with state-of-the-art methods, the effectiveness of the algorithm and model is verified.

82 citations


Journal ArticleDOI
TL;DR: The simulation results prove that the proposed density-based content distribution method can obviously reduce the average transmission delay of content distribution under different network conditions and has better stability and self-adaptability under continuous time variation.
Abstract: Satellite-terrestrial networks (STN) utilize the spacious coverage and low transmission latency of Low Earth Orbit (LEO) constellations to distribute requested content to ground subscribers. With the development of the storage and computing capacity of satellite onboard equipment, it is considered promising to leverage in-network caching technology in STN to improve content distribution efficiency. However, traditional ground network caching schemes are not suitable for STN, considering dynamic satellite propagation and time-varying topology. More specifically, the unevenness of user distribution makes it difficult to assure quality of experience. To address these problems, we first propose a density-based block division algorithm to divide the content subscribers into a series of blocks of different sizes according to user density. The LEO satellite orbit and time-varying network model is established to describe the STN topology. Next, we propose an approximate minimum coverage vertex set algorithm and a novel cache node selection algorithm for optimal user block matching. The simulation results prove that the proposed density-based content distribution method can obviously reduce the average transmission delay of content distribution under different network conditions and has better stability and self-adaptability under continuous time variation.

81 citations


Journal ArticleDOI
TL;DR: The IoV model of task offloading and migration built on intelligent edge computing can significantly improve the load sharing rate and offloading efficiency and reduce the packet loss ratio and transmission delay when the IoV is processing tasks and uploading data.
Abstract: To investigate the diverse technologies of the Internet of Vehicles (IoV) under intelligent edge computing, artificial intelligence, intelligent edge computing, and the IoV are combined. An IoV model for intelligent edge computing task offloading and migration under the SDVN (Software Defined Vehicular Networks) architecture is proposed, namely the JDE-VCO (Joint Delay and Energy-Vehicle Computational task Offloading) optimization, and simulations are performed. The results show that, in the analysis of the impact of different offloading strategies on the IoV, the JDE-VCO algorithm is superior to the other schemes in terms of transmission delay and total offloading energy consumption. In the analysis of the impact of task offloading in the IoV, the JDE-VCO algorithm achieves lower values than the RTO (Random Tasks Offloading) and UTO (Uniform Tasks Offloading) schemes in terms of the number of tasks per unit time and the average task completion time for the same amount of uploaded data. In the analysis of the packet loss ratio and transmission delay, the packet loss ratio and transmission delay of the JDE-VCO algorithm are lower than those of the RTO and UTO algorithms. Moreover, the packet loss ratio of the JDE-VCO algorithm is about 0.1 and the transmission delay is stable at 0.2 s, which are clear advantages. Therefore, the IoV model of task offloading and migration built on intelligent edge computing can significantly improve the load sharing rate and offloading efficiency and reduce the packet loss ratio and transmission delay when the IoV is processing tasks and uploading data. It provides an experimental basis for the improvement of IoV systems.

71 citations


Journal ArticleDOI
TL;DR: In this article, a clustering-based routing protocol combining a modified K-means algorithm with Continuous Hopfield Network and Maximum Stable Set Problem (KMRP) for VANET is proposed.
Abstract: Vehicular Ad-hoc Networks (VANET) offer several user applications for passengers and drivers, as well as security and internet access applications. To ensure efficient data transmission between vehicles, a reliable routing protocol is considered a significant challenge. This paper suggests a new clustering-based routing protocol combining a modified K-Means algorithm with the Continuous Hopfield Network and the Maximum Stable Set Problem (KMRP) for VANET. In this way, the basic input parameters of the K-Means algorithm, such as the number of clusters and the initial cluster heads, are not selected randomly but are determined using the Maximum Stable Set Problem and the Continuous Hopfield Network. Then the assignment of vehicles to clusters is carried out according to a Link Reliability Model as a metric that replaces the distance parameter in the K-Means algorithm. Finally, the cluster head is selected by a weight function according to the amount of free buffer space, the speed, and the node degree, as sketched below. The simulation results have shown that the designed protocol performs better in a highway vehicular environment compared to the most recent schemes designed for the same objective. In fact, KMRP reduces traffic congestion and thus provides a significant increase in Throughput. In addition, KMRP decreases the transmission delay and guarantees the stability of the clusters under high density and mobility, and performs better in terms of the Packet Delivery Ratio.
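As an illustration of that final step, the sketch below scores candidate cluster heads by free buffer space, speed, and node degree; the weights and normalization are assumptions for illustration and may differ from KMRP's exact weight function.

```python
# Illustrative weighted cluster-head selection over buffer space, speed, and degree.
def cluster_head(vehicles, w_buf=0.4, w_speed=0.3, w_deg=0.3):
    """vehicles: list of dicts with 'id', 'free_buffer', 'speed', 'degree' (normalized)."""
    def score(v):
        # Prefer large free buffer and node degree, penalize high speed.
        return w_buf * v["free_buffer"] + w_deg * v["degree"] - w_speed * v["speed"]
    return max(vehicles, key=score)["id"]

cluster = [
    {"id": "v1", "free_buffer": 0.8, "speed": 0.6, "degree": 4},
    {"id": "v2", "free_buffer": 0.5, "speed": 0.2, "degree": 6},
]
print(cluster_head(cluster))  # prints the id of the best-scoring vehicle
```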

53 citations


Journal ArticleDOI
TL;DR: In this article, a taxonomy of practical network coding methods is proposed, three practical directions for cutting computational complexity and enhancing progressive decoding are illustrated, and the benefits and costs of current network coding algorithms are discussed along with an outline of future research.
Abstract: Network coding is an elegant and novel technique to improve network throughput and performance. It is considered a critical technology for meeting the ever-increasing demands of future wireless networks. It exploits the broadcast nature of wireless media and cooperatively codes packets from different senders to provide reliable, secure, and efficient transmissions. Current research focuses on either transmission delay, coding complexity, forwarding security, or end-to-end throughput. Network coding-aided solutions can recover lost packets without feedback, eliminate latency, reduce the routing cost on diverse paths, or optimize the capacity of unstable wireless networks. However, devices or smart sensors usually have limited computational capacity, and some applications cannot tolerate high decoding delay, which prevents network coding from being widely deployed in the real world. In recent years, many research efforts have considered simplifying the decoding matrix or the coding algorithm to alleviate the shortcomings of network coding and satisfy the extreme demands of future wireless networks. This article summarizes complexity-optimized methods and explains the interaction between coding opportunities and decoding overhead. We propose a taxonomy of practical network coding methods and illustrate three practical directions for cutting computational complexity and enhancing progressive decoding. We also summarize the benefits and costs of current network coding algorithms along with an outline of future research.
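For readers new to the idea, the snippet below shows the textbook XOR form of network coding: a relay combines two packets into one coded transmission, and each receiver recovers the packet it is missing using the one it already holds. It is a generic illustration, not a specific scheme from the surveyed literature.

```python
# Textbook XOR network coding: one coded broadcast serves two receivers.
def xor_packets(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"hello", b"world"
coded = xor_packets(p1, p2)        # relay broadcasts a single coded packet
print(xor_packets(coded, p2))      # receiver holding p2 recovers p1: b'hello'
print(xor_packets(coded, p1))      # receiver holding p1 recovers p2: b'world'
```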

49 citations


Journal ArticleDOI
TL;DR: In this article, a joint task, spectrum, and transmit power allocation problem is investigated for a wireless network in which the BSs are equipped with mobile-edge computing (MEC) servers to jointly provide computational and communication services to users.
Abstract: In this article, a joint task, spectrum, and transmit power allocation problem is investigated for a wireless network in which the base stations (BSs) are equipped with mobile-edge computing (MEC) servers to jointly provide computational and communication services to users. Each user can request one computational task from three types of computational tasks. Since the data size of each computational task is different, as the requested computational task varies, the BSs must adjust their resource (subcarrier and transmit power) and task allocation schemes to effectively serve the users. This problem is formulated as an optimization problem whose goal is to minimize the maximal computational and transmission delay among all users. A multistack reinforcement learning (RL) algorithm is developed to solve this problem. Using the proposed algorithm, each BS can record the historical resource allocation schemes and users’ information in its multiple stacks to avoid learning the same resource allocation scheme and users’ states, thus improving the convergence speed and learning efficiency. The simulation results illustrate that the proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1% compared to the standard $Q$ -learning algorithm.
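A minimal sketch of the "multi-stack" bookkeeping idea is given below: the agent keeps a bounded stack of recently tried (state, action) pairs and prefers unexplored allocations, on top of an ordinary Q-learning update. The class name, stack depth, and update rule are illustrative assumptions rather than the paper's exact algorithm.

```python
# Illustrative Q-learning agent with a bounded "stack" of previously tried choices.
from collections import deque
import random

class MultiStackAgent:
    def __init__(self, actions, depth=50, alpha=0.1, gamma=0.9):
        self.q = {}                       # (state, action) -> estimated value
        self.seen = deque(maxlen=depth)   # stack of recent (state, action) pairs
        self.actions, self.alpha, self.gamma = actions, alpha, gamma

    def act(self, state):
        fresh = [a for a in self.actions if (state, a) not in self.seen]
        pool = fresh or self.actions      # fall back if every scheme was tried
        choice = max(pool, key=lambda a: self.q.get((state, a), 0.0))
        self.seen.append((state, choice))
        return choice

    def update(self, s, a, r, s_next):
        best_next = max(self.q.get((s_next, b), 0.0) for b in self.actions)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (r + self.gamma * best_next - old)

agent = MultiStackAgent(actions=list(range(4)))
a = agent.act(state=0)
agent.update(0, a, r=random.random(), s_next=1)
```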

46 citations


Journal ArticleDOI
TL;DR: In this article, a clustering-based long short-term memory (C-LSTM) approach was proposed to predict the number of content requests using historical request information.
Abstract: Coded caching is effective in leveraging the accumulated storage size in wireless networks by distributing different coded segments of each file across multiple cache nodes. This paper aims to find a wireless coded caching policy that minimizes the total discounted network cost, which involves both transmission delay and cache replacement cost, using tools from deep learning. The problem is known to be challenging due to the unknown, time-variant content popularity as well as the continuous, high-dimensional action space. We first propose a clustering-based long short-term memory (C-LSTM) approach to predict the number of content requests using historical request information. This approach exploits the correlation of the historical request information between different files through clustering. Based on the predicted results, we then propose a supervised deep deterministic policy gradient (SDDPG) approach. This approach, on one hand, can learn the caching policy in continuous action space by using the actor-critic architecture. On the other hand, it accelerates the learning process by pre-training the actor network based on the solution of an approximate problem that minimizes the per-slot cost. Real-world trace-based numerical results show that the proposed prediction and caching policy using deep learning outperform the considered existing methods.

40 citations


Journal ArticleDOI
TL;DR: The Memetic-based RSU (M-RSU) placement algorithm is proposed to reduce communication delay and increase the coverage area among IoV devices through an optimum RSU deployment, together with a Distributed ML (DML)-based Intrusion Detection System (IDS) that protects the SD-IoV network from disastrous security failures.
Abstract: The massive increase in computing and network capabilities has resulted in a paradigm shift from vehicular networks to the Internet of Vehicles (IoV). Owing to the dynamic and heterogeneous nature of IoV, it requires efficient resource management using smart technologies, such as software-defined network (SDN), machine learning (ML), and so on. Roadside units (RSUs) in software-defined-IoV (SD-IoV) networks are responsible for network efficiency and offer several safety functions. However, it is not viable to deploy enough RSUs, and also the existing RSU placement lacks universal coverage within a region. Furthermore, any disruption in network performance or security impacts vehicular activities severely. Thus, this work aims to improve network efficiency through optimal RSU placement and enhance security with a malicious IoV detection algorithm in an SD-IoV network. Therefore, the memetic-based RSU (M-RSU) placement algorithm is proposed to reduce communication delay and increase the coverage area among IoV devices through an optimum RSU deployment. Besides the M-RSU algorithm, the work also proposes a distributed ML (DML)-based intrusion detection system (IDS) that prevents the SD-IoV network from disastrous security failures. The simulation results show that M-RSU placement reduces the transmission delay. The DML-based IDS detects the malicious IoV with an accuracy of 89.82% compared to traditional ML algorithms.

Journal ArticleDOI
TL;DR: In this article, a sliding mode control for networked Markovian jump systems with partially-known transition probabilities via an event-triggered scheme was proposed to reduce network bandwidth usage and save network resources.

Journal ArticleDOI
TL;DR: A framework of edge-based communication optimization is studied to reduce the number of end devices directly connected to the server while avoiding uploading unnecessary local updates, and a model cleaning method based on cosine similarity is proposed to avoid unnecessary communication.
Abstract: Federated learning can achieve the purpose of distributed machine learning without sharing the private and sensitive data of end devices. However, highly concurrent access to the server increases the transmission delay of model updates, and a local model may be an unnecessary model whose gradient is opposite to that of the global model, thus incurring a large amount of additional communication cost. To this end, we study a framework of edge-based communication optimization to reduce the number of end devices directly connected to the server while avoiding uploading unnecessary local updates. Specifically, we cluster devices in the same network location and deploy mobile edge nodes in different network locations to serve as hubs for communication between the cloud and end devices, thereby avoiding the latency associated with high server concurrency. Meanwhile, we propose a model cleaning method based on cosine similarity. If the value of the similarity is less than a preset threshold, the local update will not be uploaded to the mobile edge nodes, thus avoiding unnecessary communication. Experimental results show that, compared with traditional federated learning, the proposed scheme reduces the number of local updates by 60% and accelerates the convergence speed of the regression model by 10.3%.
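The model-cleaning rule lends itself to a short sketch: a local update is uploaded only if its cosine similarity with the current global update is at least a preset threshold. The threshold value and the vector representation of updates below are assumptions for illustration.

```python
# Cosine-similarity filter for local updates (illustrative threshold).
import numpy as np

def should_upload(local_update, global_update, threshold=0.0):
    cos = np.dot(local_update, global_update) / (
        np.linalg.norm(local_update) * np.linalg.norm(global_update) + 1e-12
    )
    return cos >= threshold   # drop updates pointing "against" the global model

g = np.array([1.0, 0.5, -0.2])
print(should_upload(np.array([0.9, 0.4, -0.1]), g))   # True: aligned with the global update
print(should_upload(np.array([-1.0, -0.5, 0.2]), g))  # False: opposite gradient direction
```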

Journal ArticleDOI
TL;DR: In this article, the problem of UAV deployment, power allocation, and bandwidth allocation is investigated for a UAV-assisted wireless system operating at terahertz (THz) frequencies.
Abstract: In this letter, the problem of unmanned aerial vehicle (UAV) deployment, power allocation, and bandwidth allocation is investigated for a UAV-assisted wireless system operating at terahertz (THz) frequencies. In the studied model, one UAV can service ground users using the THz frequency band. However, the highly uncertain THz channel introduces new challenges to the UAV location, user power, and bandwidth allocation optimization problems. Therefore, it is necessary to design a novel framework to deploy UAVs in THz wireless systems. This problem is formally posed as an optimization problem whose goal is to minimize the total delay of the uplink and downlink transmissions between the UAV and the ground users by jointly optimizing the deployment of the UAV, the transmit power, and the bandwidth of each user. The communication delay is crucial for emergency communications. To tackle this nonconvex delay minimization problem, an alternating algorithm is proposed that iteratively solves three subproblems: a location optimization subproblem, a power control subproblem, and a bandwidth allocation subproblem. Simulation results show that the proposed algorithm can reduce the transmission delay by up to 59.3%, 49.8%, and 75.5%, respectively, compared to baseline algorithms that optimize only the UAV location, the bandwidth allocation, or the transmit power control.
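To show the shape of the objective, the sketch below computes a per-user transmission delay as data size over an achievable Shannon-style rate and aggregates it over users; the toy path-loss model stands in for the THz channel, and the alternating optimization over location, power, and bandwidth is omitted.

```python
# Delay objective sketch: delay = data size / achievable rate, summed over users.
# The path-loss constant and noise figure are illustrative placeholders.
import numpy as np

def user_delay(data_bits, bandwidth_hz, tx_power_w, distance_m, noise_psd=1e-17):
    gain = 1e-4 / (distance_m ** 2)                       # toy path-loss model
    rate = bandwidth_hz * np.log2(1 + tx_power_w * gain / (bandwidth_hz * noise_psd))
    return data_bits / rate

users = [(1e6, 1e6, 0.1, 120.0), (2e6, 1e6, 0.1, 300.0)]  # (bits, Hz, W, m)
delays = [user_delay(*u) for u in users]
print(sum(delays), max(delays))                           # total and worst-case delay
```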

Journal ArticleDOI
TL;DR: A deep learning (DL)-based CSI prediction scheme is proposed to address the channel aging problem by exploiting the correlation of changing channels, and numerical results demonstrate that the proposed DL-based predictor can mitigate the channel aging problem in LEO satellite mMIMO systems effectively.
Abstract: Low Earth orbit (LEO) satellites are one of the most promising infrastructures for realizing next-generation global wireless networks with enhanced data rates. Applying massive multiple-input multiple-output (mMIMO) to LEO satellite communication systems is a novel idea to enhance communication capacity and realize global high-speed interconnection. However, obtaining effective instantaneous channel state information (iCSI) is challenging due to the time-varying propagation environment and long transmission delay. In this letter, a deep learning (DL)-based CSI prediction scheme is proposed to address the channel aging problem by exploiting the correlation of changing channels. Specifically, we design a satellite channel predictor (SCP) composed of long short-term memory (LSTM) units. The predictor is first trained by offline learning and then feeds back the corresponding output results online based on the input data to realize channel feature extraction and future CSI prediction in LEO satellite scenarios. Numerical results demonstrate that the proposed DL-based predictor can mitigate the channel aging problem in LEO satellite mMIMO systems effectively.
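Assuming TensorFlow/Keras is available, the following is a minimal sketch of an LSTM-based predictor in the spirit of the SCP: it maps a short window of past CSI samples to the next sample. The window length, feature size, layer width, and training data are illustrative assumptions.

```python
# Minimal LSTM-based CSI predictor sketch (toy data, illustrative dimensions).
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 8, 4   # past CSI samples per input and features per sample (assumed)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(FEATURES),   # predicted next CSI sample
])
model.compile(optimizer="adam", loss="mse")

# Random arrays standing in for historical CSI sequences (offline training phase).
x = np.random.randn(256, WINDOW, FEATURES).astype("float32")
y = np.random.randn(256, FEATURES).astype("float32")
model.fit(x, y, epochs=1, verbose=0)

next_csi = model.predict(x[:1], verbose=0)   # online prediction of the next CSI sample
print(next_csi.shape)                        # (1, FEATURES)
```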

Journal ArticleDOI
TL;DR: This work builds a network of smart nodes where each node comprises a Radio-Frequency Identification (RFID) tag, reduced function RFID reader (RFRR), and sensors, and two levels of security algorithms, including an AES 128 bit with hashing, have been implemented.
Abstract: COVID-19 surprised the whole world by its quick and sudden spread. The coronavirus pushed all community sectors (government, industry, academia, and nonprofit organizations) to take steps forward to stop and control this pandemic. It is evident that IT-based solutions are urgently needed. This study is a small step in this direction, in which health information is monitored and collected continuously. In this work, we build a network of smart nodes where each node comprises a Radio-Frequency Identification (RFID) tag, a reduced function RFID reader (RFRR), and sensors. The smart nodes are grouped into clusters, which are constructed periodically. The RFRR reader of the clusterhead collects data from its members, and once it is close to the primary reader, it conveys its data, and so on. This approach reduces the primary RFID reader's burden by receiving data from the clusterheads only, instead of reading every tag that passes by its vicinity. Besides, this mechanism reduces channel access congestion and thus reduces the interference significantly. Furthermore, to protect the exchanged data from potential attacks, two levels of security algorithms, including AES 128-bit with hashing, have been implemented. The proposed scheme has been validated via mathematical modeling using integer programming, simulation, and prototype experimentation. The proposed technique shows low data delivery losses and a significant drop in transmission delay compared to contemporary approaches.

Journal ArticleDOI
TL;DR: A cluster-based systematic data aggregation model (CSDAM) for real-time data processing is proposed that effectively minimizes energy consumption and transmission delay, thereby increasing the network lifespan.
Abstract: In the present decade, wireless sensor networks are applied in a variety of applications such as health monitoring, agriculture, traffic management, security domains, pollution management, and so on. Owing to the node density, the same data are collected by multiple sensors, which introduces redundancy that should be avoided by means of a proper data aggregation methodology. With that in mind, this paper presents a cluster-based systematic data aggregation model (CSDAM) for real-time data processing. First, the network is formed into clusters with active and sleep-state nodes, and a cluster-head (CH) is selected based on a ranking given to sensors with two criteria: existing energy level (EEL) and geographic location (GL) relative to the base station (BS), i.e., Rank(EEL, GL). Here, the CH is the aggregator. Second, aggregation is carried out in three levels, where the data processing at level 3 is reduced by aggregating the data at levels 1 and 2. If the energy of the aggregator goes below a threshold, another aggregator is chosen. Third, real-time applications should be given more precedence than other applications, so an application-type field is added to each sensor node, from which priority of data processing is given first to real-time applications. The simulation results show that CSDAM minimizes energy consumption and transmission delay effectively, thereby increasing the network lifespan.
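The CH ranking Rank(EEL, GL) can be sketched as follows: nodes with more remaining energy and a shorter distance to the BS rank higher. The combining formula and weights are assumptions for illustration, not the paper's exact ranking.

```python
# Illustrative cluster-head ranking combining remaining energy (EEL) and distance to the BS (GL).
def select_cluster_head(nodes, bs_pos=(0.0, 0.0)):
    """nodes: list of dicts with 'id', 'energy' (EEL), and 'pos' (x, y)."""
    def rank(n):
        dx, dy = n["pos"][0] - bs_pos[0], n["pos"][1] - bs_pos[1]
        dist_to_bs = (dx * dx + dy * dy) ** 0.5   # GL criterion
        return n["energy"] - 0.01 * dist_to_bs    # EEL minus a distance penalty (assumed weights)
    return max(nodes, key=rank)["id"]

nodes = [{"id": 1, "energy": 0.9, "pos": (40, 30)},
         {"id": 2, "energy": 0.7, "pos": (5, 5)}]
print(select_cluster_head(nodes))
```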

Journal ArticleDOI
TL;DR: A novel Software Defined Networking (SDN)-controlled and Cognitive Radio (CR)-enabled V2X routing approach is proposed to achieve ultra-high data rates, using predictive V2X routing that supports intelligent switching between two 5G technologies: millimeter-wave (mmWave) and terahertz (THz).

Journal ArticleDOI
TL;DR: A multi-stage multicast rate adaptation scheme for NDN WLAN, named NDN-MMRA, is proposed to minimize the total transmission time with a reliability guarantee for multicast group members, and is implemented in NS-3 by adopting the ndnSIM module.
Abstract: Named Data Networking (NDN) is considered a prominent architecture for future Wireless Local Area Networks (WLAN), and multicast plays an important role in data delivery such as media streaming, multipoint videoconferencing, etc. However, achieving high-efficiency multicast in NDN WLAN is challenging for two significant reasons. First, without a feedback mechanism in the IEEE 802.11 standards, to guarantee reliability, the current multicast scheme transmits the multicast data at the basic rate (e.g., 1 Mbps for IEEE 802.11b), which inevitably increases the transmission delay for high-speed consumers. Second, as an NDN multicast group is constituted by consumers who are requesting the same content, multicast groups are easy to form and evolve rapidly, and a data rate adaptation scheme is required to accommodate diverse multicast groups. In this paper, we propose a multi-stage multicast rate adaptation scheme for NDN WLAN, named NDN-MMRA, to minimize the total transmission time with a reliability guarantee for multicast group members. In NDN-MMRA, by checking the Pending Interest Table (PIT) status information, the number of consumers in each multicast group as well as their receiving capabilities are known in advance; with the available data rates in a specific 802.11 standard, NDN-MMRA determines: 1) how many transmission stages are required; and 2) which data rate should be adopted in each stage. The merit is that with multi-stage transmissions, the data rate can be adapted in descending order to accommodate high-speed consumers with minimized delay and low-speed consumers with guaranteed reliability. We implement NDN-MMRA in NS-3 by adopting the ndnSIM module and conduct extensive experiments to demonstrate its efficacy under different IEEE 802.11 standards and various underlying WLAN topologies.
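The multi-stage idea can be sketched with a simple greedy plan: serve consumers in stages at descending rates, each stage covering the consumers whose receiving capability is at least the stage rate. The rate set and grouping rule below are illustrative; NDN-MMRA derives them from PIT state and the 802.11 rate set.

```python
# Greedy multi-stage plan: descending rates, each stage covers the fastest remaining consumers.
def plan_stages(consumer_rates_mbps, available_rates_mbps):
    remaining = sorted(consumer_rates_mbps, reverse=True)
    stages = []
    for rate in sorted(available_rates_mbps, reverse=True):
        covered = [c for c in remaining if c >= rate]
        if covered:
            stages.append((rate, len(covered)))
            remaining = [c for c in remaining if c < rate]
        if not remaining:
            break
    return stages  # list of (stage rate, number of consumers served in that stage)

print(plan_stages([54, 48, 11, 2], available_rates_mbps=[54, 11, 2, 1]))
# e.g. [(54, 1), (11, 2), (2, 1)]
```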

Journal ArticleDOI
TL;DR: In this article, the authors proposed to combine control theory and DRL technology to achieve an efficient network control scheme for traffic engineering (TE) in SDN networks, which employs the idea of pinning control theory to select a subset of links in the network and name them critical links.

Journal ArticleDOI
TL;DR: A brief analysis is delivered of solutions addressing recent research problems in WSNs that involve conflicting goals, i.e., the multi-objective optimization (MOO) technique.
Abstract: Wireless sensor networks (WSNs) play a significant role in surveillance and in monitoring real-time applications. Despite their strong ability to handle such tasks, it is difficult to maintain a trade-off among the conflicting goals of network lifetime, transmission delay, high coverage, and packet loss. Various solutions have been proposed by researchers to address these issues, including solutions for real-time network scenarios. This paper delivers a brief analysis of solutions addressing recent research problems in WSNs with conflicting goals, i.e., multi-objective optimization (MOO) techniques. First, key optimization objectives in WSNs are illustrated, covering existing issues such as power control, rate control, and routing. Then, various objective functions used in MOO are elaborated, along with their merits and demerits. Finally, existing approaches for improving optimization metrics, the application performance of existing approaches, and the proposed architecture are discussed.

Journal ArticleDOI
TL;DR: A fog computing-based VNET is presented in this article, in which resource allocation is studied as the corresponding key technique; results show that the proposed solution can significantly reduce the transmission delay with fast convergence.
Abstract: As a typical and prominent component of the Internet of Things, vehicular communication and the corresponding vehicular networks (VNETs) are promising for improving spectral efficiency, decreasing transmission delay, and increasing reliability. The ever-increasing number of vehicles and the demand of passengers/drivers for rich multimedia services bring key challenges to VNETs, which require huge capacity, ultralow delay, and ultrahigh reliability. To meet these performance requirements, a fog computing-based VNET is presented in this article, and resource allocation is researched as the corresponding key technique. In particular, the joint optimization of user association and radio resource allocation is investigated to minimize the transmission delay of the concerned VNET. The proposed optimization problem is formulated as a mixed-integer nonlinear program and transformed into a convex problem by Perron–Frobenius theory and a weighted minimum mean square error method. Numerical results show that the proposed solution can significantly reduce the transmission delay with fast convergence.

Journal ArticleDOI
TL;DR: In this article, a joint recommendation, caching, and beamforming scheme is proposed for multi-cell multi-antenna recommendation aware fog-RANs to minimize the content transmission latency.
Abstract: Content caching is recognized as a promising solution to relieve the heavy burden on backhaul links and decrease the content transmission latency in fog radio access networks (Fog-RANs). However, the content caching design is still a challenging problem when considering the user request patterns, the content delivery strategies, and the limited caching capacity. Recommendation has the capability of reshaping users' content requests to further promote the caching gain. Joint recommendation, caching, and beamforming holds the potential to improve the system performance of Fog-RANs. In this paper, a joint recommendation, caching, and beamforming scheme is proposed for multi-cell multi-antenna recommendation-aware Fog-RANs. Aiming at minimizing the content transmission latency, we formulate a joint recommendation, caching, and beamforming optimization problem. The minimization problem is a very challenging two-timescale mixed-integer nonlinear programming problem, which is hard to solve in general. By exploring structural properties of the problem, we propose an alternative optimization algorithm with low complexity by decomposing the original problem into three sub-problems. Extensive simulations show that our proposed method can significantly reduce the content transmission delay.

Journal ArticleDOI
TL;DR: Simulation results prove that the proposed MAC protocol can effectively avoid transmission collision, reduce transmission delay, improve system energy efficiency, and have good adaptability for different network topologies.

Journal ArticleDOI
TL;DR: This work investigates the performance of the HST-CD network with the NOMA scheme, where the satellite proactively pushes/broadcasts the popular contents to the cache-enabled relay, and then the user is able to directly retrieve the required content from the relay with less transmission delay.
Abstract: Wireless content delivery (CD) and non-orthogonal multiple access (NOMA) have been confirmed to be promising and effective approaches for gaining substantial performance improvements in hybrid satellite-terrestrial (HST) networks. To improve the spectrum efficiency and reduce the delay of retrieving content for the satellite user, we investigate the performance of the HST-CD network with the NOMA scheme, where the satellite proactively pushes/broadcasts the popular contents to the cache-enabled relay, and then the user is able to directly retrieve the required content from the relay with less transmission delay. Specifically, based on the practical propagation model along with stochastic geometry, the outage probability for the cache-enabled relays of the considered network is theoretically derived. Besides, the hit probability of the user in the HST-CD network is also provided. Finally, both simulation and analytical results are provided to validate the effectiveness of the HST-CD network with the NOMA scheme and reveal the influence of key factors on the performance.

Journal ArticleDOI
TL;DR: An unmanned aerial vehicle (UAV)-aided emergency environmental monitoring system using a LoRa mesh networking approach is proposed, which can quickly build a wide-range wireless communication network suitable for small data transmission; its end-to-end delay has an upper bound that can meet the real-time requirement of emergency response.
Abstract: Rapid acquisition of environmental parameters at emergency sites is crucial for emergency response. However, in areas where public ground networks do not provide sufficient coverage or are destroyed by disasters, the rapid backhaul of monitoring data from emergency sites is difficult. Due to the urgent demand for data collection in such areas, we propose an unmanned aerial vehicle (UAV)-aided emergency environmental monitoring system using a LoRa mesh networking approach, which can quickly build a wide-range wireless communication network suitable for small data transmission. First, we designed a LoRa-based mesh network protocol using a custom slotted ALOHA medium access mechanism. Second, we designed and implemented a custom software and hardware platform for the monitoring system. The system adopts a communication mode comprising multiple sensor nodes (SNs), multiple relay nodes (RNs), a gateway (GW), and a network server. Finally, we demonstrated the performance of the protocol and the system through in-lab and outdoor tests. The results show that when the data rate is 0.293 kbps, the line-of-sight 1-hop transmission distance is more than 10 km and the 1-hop transmission delay is less than 6 seconds; the end-to-end delay has an upper bound that can meet the real-time requirement of emergency response; and the least upper bound of throughput per slot of our mesh protocol is much higher than that of LoRaWAN. Evaluation results also demonstrate the network's robustness to environmental changes.

Journal ArticleDOI
TL;DR: A software defined network (SDN)-based scheduling algorithm is proposed that leverages a generative adversarial network (GAN)-based deep distributional Q-network (GAN-DDQN) to learn the action-value distribution for intelligent transmission scheduling, together with a reward-clipping technique for stabilizing the training of GAN-DDQN against the effect of broadly spanning utility values.
Abstract: The Cognitive Internet of Vehicles (CIoV) is an intelligent network that embeds a cognitive mechanism in the Internet of Vehicles (IoV) to sense the environment and observe the network states in order to learn optimal policies adaptively. However, one of the key challenges in CIoV systems is to design a smart agent that can intelligently schedule packet transmission for ultra-reliable low latency communication (URLLC) under extremely random and noisy network conditions. We propose a software defined network (SDN)-based scheduling algorithm that leverages a generative adversarial network (GAN)-based deep distributional Q-network (GAN-DDQN) for learning the action-value distribution for intelligent transmission scheduling. A reward-clipping technique is proposed for stabilizing the training of GAN-DDQN against the effect of broadly spanning utility values. Extensive simulation results verify that GAN-Scheduling achieves higher spectral efficiency (SE), service level agreement (SLA), system throughput, and transmission packet rate, together with lower transmission delay and power consumption, compared to existing reinforcement learning algorithms.
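A minimal sketch of the reward-clipping idea is shown below: broadly spanning utility values are squashed and clipped to a bounded range before being fed to the learner. The tanh squashing and the bounds are illustrative assumptions, not the paper's exact rule.

```python
# Illustrative reward clipping: squash wide-ranging utilities into a bounded interval.
import numpy as np

def clip_reward(utility, low=-1.0, high=1.0, scale=10.0):
    return float(np.clip(np.tanh(utility / scale), low, high))

for u in (-500.0, -3.0, 0.5, 42.0, 1e4):
    print(u, "->", clip_reward(u))
```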

Journal ArticleDOI
TL;DR: In this paper, a delay-deadline-constrained federated learning framework was designed to avoid extremely long training delay, and a dynamic client selection problem was then formulated for computing utility maximization in such a learning framework.
Abstract: Smart grid applications, such as predicting energy consumption, analyzing grid user behavior, and predicting energy theft, are data-driven applications that require machine learning on the wealth of data generated by Internet of Things (IoT) based metering devices. However, traditional methods of uploading this huge amount of data to the remote cloud for data analytics may be inefficient due to the non-negligible network transmission delay. By deploying a number of computing-enabled devices at the network edge, edge computing supports the implementation of machine learning close to the power grid environment. Considering the limited computing resources of edge devices and non-independent and identically distributed (non-IID) data sources, federated learning is a feasible edge computing based machine learning model. In federated learning, distributed mobile clients and a federated server collaborate to perform machine learning. Generally, the more clients that join federated learning, the faster the learning converges and the higher the resource utility. However, the communications between clients and the server in training rounds of federated learning may fail due to the time-varying link reliability in a wireless network of the smart grid, which not only slows down the model convergence rate but also wastes resources, such as the energy consumed for invalid local training. This paper studies a dynamic federated learning problem in a power grid mobile edge computing (GMEC) environment, considering the high dynamics of link reliability. We design a delay-deadline-constrained federated learning framework to avoid extremely long training delay, and then formulate a dynamic client selection problem for computing utility maximization in this framework. Two online client selection algorithms, cli-max greedy and uti-positive guarantee, are proposed to address the problem. Theoretical analysis and simulation results illustrate the efficiency of the proposal.
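As a rough illustration of deadline-constrained client selection, the greedy sketch below admits clients in order of utility as long as their expected round time (local training plus upload over a possibly unreliable link) fits the delay deadline; the utility and time models are assumptions, and this is not the paper's cli-max greedy or uti-positive guarantee algorithm.

```python
# Greedy deadline-constrained client selection (illustrative utility and time models).
def select_clients(clients, deadline_s):
    """clients: list of dicts with 'id', 'utility', 'train_s', 'upload_s',
    and 'link_reliability' (probability that an upload attempt succeeds)."""
    chosen = []
    for c in sorted(clients, key=lambda c: c["utility"], reverse=True):
        expected_time = c["train_s"] + c["upload_s"] / max(c["link_reliability"], 1e-3)
        if expected_time <= deadline_s:
            chosen.append(c["id"])
    return chosen

pool = [
    {"id": "c1", "utility": 0.9, "train_s": 20, "upload_s": 5, "link_reliability": 0.9},
    {"id": "c2", "utility": 0.8, "train_s": 40, "upload_s": 30, "link_reliability": 0.4},
]
print(select_clients(pool, deadline_s=60))   # keeps only clients expected to meet the deadline
```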

Journal ArticleDOI
TL;DR: A delay-aware content caching (DCC) algorithm in the IoV is proposed, which consists of vehicle associations, content caching, and precaching decisions optimization, and the effectiveness of the proposed DCC algorithm is verified.
Abstract: With the emergence of a large number of computational resource-intensive applications and various content delivery services, there is an explosion of data growth in the Internet of Vehicles (IoV). To improve the transmission performance of the IoV, caching content on the edge of the network is considered a potential solution to reduce the content transmission delay. In this article, we investigate a content caching decision optimization method in the IoV to minimize the content fetching delay for vehicles, which is based on vehicle-to-vehicle (V2V) collaboration. A delay-aware content caching (DCC) algorithm in the IoV is proposed, which consists of vehicle associations, content caching, and precaching decision optimization. First, a delay-aware vehicle associations (DVAs) algorithm is proposed to optimize the vehicle associations. Consequently, based on the vehicle association results, the content caching decisions are optimized in two network scenarios according to the existence of handover vehicles. Finally, a practical scenario of Shanghai with time-varying traffic flow is used for simulations, and the effectiveness of the proposed DCC algorithm is verified.

Journal ArticleDOI
TL;DR: The Age of Information (AoI) is introduced to mathematically characterize the impacts of packet loss and transmission delay on the state estimation error, the relationship between the state estimation error and the AoI of sensory data is explored, and the co-design of state estimation and sensory data transmission for marine IoT systems is investigated.
Abstract: In the smart ocean, unmanned surface vehicles (USVs) are deployed to monitor the marine environment in a coordinated manner. Ubiquitous situation awareness of the marine environment can be achieved by state estimation with the sensory data collected by USVs. Therefore, the transmission performance of sensory data, in terms of packet loss and delay, plays an important role in the state estimation of marine IoT systems. However, it is challenging to achieve highly reliable and low-latency transmission for sensory data due to path loss, spectrum scarcity, and transmit power limitations. In this article, we introduce the Age of Information (AoI) to mathematically characterize the impacts of packet loss and transmission delay on the state estimation error. We first explore the relationship between the state estimation error and the AoI of sensory data. We then investigate the co-design of state estimation and sensory data transmission for marine IoT systems. Specifically, a mother ship (MS)-assisted cooperative transmission scheme is proposed to mitigate the impact of limited resources and path loss on the estimation performance. Then, the MS location, channel allocation, and transmit power are jointly optimized to minimize the mean-square error of state estimation, which is achieved by formulating a constrained minimization problem and solving it with a decomposition method. Simulation results demonstrate that the proposed scheme is superior in reducing the estimation error and the power consumption.
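The AoI metric used above has a simple worked form: at time t, the age is t minus the generation time of the freshest delivered packet, so both packet loss and transmission delay inflate it. The sketch below uses an illustrative arrival trace.

```python
# Age of Information: t minus the generation time of the freshest delivered packet.
def age_of_information(t, receptions):
    """receptions: list of (generation_time, reception_time) for delivered packets."""
    delivered = [g for g, r in receptions if r <= t]
    return t - max(delivered) if delivered else float("inf")

# Packets generated every 2 s; the one generated at t=4 was lost (never received).
trace = [(0, 1.0), (2, 3.5), (6, 7.2)]
for t in (3.0, 5.0, 8.0):
    print(t, age_of_information(t, trace))
# At t=5.0 the freshest delivered packet was generated at t=2, so AoI = 3.0.
```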