
Showing papers on "Transmission delay published in 2022"


Journal ArticleDOI
TL;DR: In this paper, a Generative Adversarial Network and Deep Distribution Q Network (GAN-DDQN) is proposed to enhance smart packet transmission scheduling by reducing the distance between the estimated and target action-value particles.
Abstract: The convergence of Artificial Intelligence (AI) can overcome the complexity of network defects and support a sustainable and green system. AI has been used in the Cognitive Internet of Things (CIoT) to improve the handling of large volumes of data, minimize energy consumption, manage traffic, and store data. However, improving smart packet transmission scheduling (TS) in CIoT depends on choosing an optimum channel with a minimum estimated Packet Error Rate (PER), minimal packet delays caused by channel errors, and few subsequent retransmissions. Therefore, we propose a Generative Adversarial Network and Deep Distribution Q Network (GAN-DDQN) to enhance smart packet TS by reducing the distance between the estimated and target action-value particles. Furthermore, GAN-DDQN training based on reward clipping is used to evaluate the value of each action for certain states to avoid large variations in the target action value. The simulation results show that the proposed GAN-DDQN increases throughput and packet transmission while reducing power consumption and Transmission Delay (TD) when compared to fuzzy Radial Basis Function (fuzzy-RBF) and Distributional Q-Network (DQN). Furthermore, GAN-DDQN provides a high rate of 38 Mbps, compared to actor-critic fuzzy-RBF’s rate of 30 Mbps and the DQN algorithm’s rate of 19 Mbps.

17 citations
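The reward-clipping idea above can be sketched in a few lines. This is a generic illustration of clipping rewards before forming the temporal-difference target, not the authors' GAN-DDQN implementation; the discount factor and clip bound are assumed values.

```python
def clipped_td_target(reward, next_value, gamma=0.99, clip=1.0):
    """TD target with the reward clipped to [-clip, clip], so that a
    single outlier reward cannot cause a large swing in the target
    action value during training."""
    r = max(-clip, min(clip, reward))
    return r + gamma * next_value

# An outlier reward of 50 is clipped to 1.0 before discounting,
# while an in-range reward passes through unchanged.
print(clipped_td_target(50.0, 10.0))
print(clipped_td_target(-0.3, 10.0))
```

In distributional variants such as the one described here, the same idea would apply per sampled action-value particle rather than to a single scalar target.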


Journal ArticleDOI
TL;DR: In this article, an adaptive event-triggering scheme with a switching structure is utilized to achieve better adjustment of the triggering frequency, and an active packet-loss approach is presented for handling packet disorder issues caused by large transmission delays.
Abstract: This article investigates stabilization problems of networked event-triggered switched systems under multiasynchronous switching. Different from the existing asynchronous literature, a novel problem is considered in this article, i.e., the event-triggering scheme and controller both have independent switching delays relative to the system, which is called multiasynchronous switching. An adaptive event-triggering scheme with switching structure is utilized to achieve better adjustment in triggering frequency. A systematic framework for analyzing multiasynchronous switching stability and solving controller gains is established. Different Lyapunov functionals are employed, and new tight bound conditions on average dwell time are constructed. Besides, an active packet loss approach for handling packet disorder issues caused by large transmission delays is presented. Since the activation instants of the system, triggering scheme, and controller are mutually staggered with the data updating instants of the actuator, a novel analytical method aiming at the coupling effect is proposed. Finally, the validity of the adopted method in this article is demonstrated.

14 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the user-perceived delay-aware service placement and user-allocation problem in an MEC-enabled network, and proposed LOCUS, a local-search-based algorithm for user-perceived delay-aware service placement and user allocation.
Abstract: In the multi-access edge computing environment, app vendors deploy their services and applications at the network edges, and edge users offload their computation tasks to edge servers. We study the user-perceived delay-aware service placement and user-allocation problem in the edge environment. We model the MEC-enabled network, where the user-perceived delay consists of computing delay and transmission delay. The total cost in the offloading system is defined as the sum of service placement, edge server usage, and energy consumption costs, and we need to minimize the total cost by determining the overall service-placing decision and user-allocation decision, while guaranteeing that the user-perceived delay requirement of each user is fulfilled. Our considered problem is formulated as a Mixed Integer Linear Programming problem, and we prove its NP-hardness. Due to the intractability of the considered problem, we propose a LOCal-search based algorithm for USer-perceived delay-aware service placement and user-allocation in the edge environment, named LOCUS, which starts with a feasible solution and then repeatedly reduces the total cost by performing local-search steps. After that, we analyze the time complexity of LOCUS and prove that it achieves provable guaranteed performance. Finally, we compare LOCUS with other existing methods and show its good performance through experiments.

13 citations
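The improvement loop LOCUS builds on can be sketched as a plain local search: start from a feasible allocation and keep applying single-user moves that lower cost while meeting the delay bound. The costs, delays, and delay bound below are toy values for illustration, not data from the paper.

```python
# Hypothetical inputs: cost[u][s] = cost of allocating user u to server s,
# delay[u][s] = user-perceived delay; D = per-user delay requirement.
cost  = {0: {0: 4, 1: 2}, 1: {0: 3, 1: 5}}
delay = {0: {0: 1, 1: 3}, 1: {0: 2, 1: 1}}
D = 2

def local_search(alloc):
    """Repeatedly move one user to a cheaper feasible server until no
    single move reduces total cost (the LOCUS-style improvement loop)."""
    improved = True
    while improved:
        improved = False
        for u in alloc:
            best = alloc[u]
            for s in cost[u]:
                if delay[u][s] <= D and cost[u][s] < cost[u][best]:
                    best = s
            if best != alloc[u]:
                alloc[u] = best
                improved = True
    return alloc

# User 0 cannot move to server 1 (delay 3 > D); user 1 moves to the cheaper server 0.
print(local_search({0: 0, 1: 1}))  # {0: 0, 1: 0}
```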


Journal ArticleDOI
TL;DR: This work investigates the problem of joint beam training and data transmission control for mmWave delay-sensitive communications and transforms the constrained MDP into an unconstrained one, which is then solved via a parallel-rollout-based reinforcement learning method in a data-driven manner.
Abstract: Future communication networks call for new solutions to support their capacity and delay demands by leveraging potentials of the millimeter wave (mmWave) frequency band. However, the beam training procedure in mmWave systems incurs significant overhead as well as huge energy consumption. As such, deriving an adaptive control policy is beneficial to both delay-sensitive and energy-efficient data transmission over mmWave networks. To this end, we investigate the problem of joint beam training and data transmission control for mmWave delay-sensitive communications in this paper. Specifically, the considered problem is firstly formulated as a constrained Markov Decision Process (MDP), which aims to minimize the cumulative energy consumption over the whole considered period of time under delay constraint. By introducing a Lagrange multiplier, we transform the constrained MDP into an unconstrained one, which is then solved via a parallel-rollout-based reinforcement learning method in a data-driven manner. Our numerical results demonstrate that the optimized policy via parallel-rollout significantly outperforms other baseline policies in both energy consumption and delay performance.

10 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the Takagi-Sugeno fuzzy-model-based networked control system under the fuzzy event-triggered H ∞ control scheme.

6 citations


Journal ArticleDOI
TL;DR: In this article, a multi-path routing strategy based on SDN is proposed. It uses a combination of delay, bandwidth, and node load to construct a link transmission cost model, and adjusts end-to-end routes in time by sensing the network status and the load of the switching nodes in real time, thereby improving transmission efficiency.

6 citations


Journal ArticleDOI
01 Jan 2022
TL;DR: In this paper, the effects of traffic-induced delay and dropout on the finite-horizon quality-of-control of an individual stochastic linear time-invariant system, where quality-of-control is measured by an expected quadratic cost function, are analyzed.
Abstract: Transmission delay and packet dropout are inevitable network-induced phenomena that severely compromise the control performance of network control systems. The real-time network traffic is a major dynamic parameter that directly influences delay and reliability of transmission channels, and thus, acts as an unavoidable source of induced coupling among all network sharing systems. In this letter, we analyze the effects of traffic-induced delay and dropout on the finite-horizon quality-of-control of an individual stochastic linear time-invariant system, where quality-of-control is measured by an expected quadratic cost function. We model delay and dropout of the channel as generic stochastic processes that are correlated with the real-time network traffic induced by the rest of network users. This approach provides a pathway to determine the required networking capabilities to achieve a guaranteed quality-of-control for systems operating over a shared-traffic network. Numerical evaluations are performed using realistic stochastic models for delay and dropout. As a special case, we consider exponential distribution for delay with its rate parameter being traffic-correlated, and traffic-correlated Markov-based packet drop model.

6 citations
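The special case mentioned at the end, an exponentially distributed delay whose rate parameter is traffic-correlated, can be simulated directly. The form of the coupling between traffic and the rate parameter below is assumed for illustration only.

```python
import random

def delay_sample(traffic, base_rate=2.0):
    """Exponentially distributed transmission delay whose rate parameter
    shrinks as background network traffic grows (an assumed coupling)."""
    rate = base_rate / (1.0 + traffic)
    return random.expovariate(rate)

def miss_prob(traffic, deadline=1.0, n=20000, seed=1):
    """Monte Carlo estimate of the probability a packet misses its deadline."""
    random.seed(seed)
    return sum(delay_sample(traffic) > deadline for _ in range(n)) / n

# Heavier background traffic -> slower channel -> more deadline misses,
# which is what couples quality-of-control to the rest of the network's load.
low, high = miss_prob(0.0), miss_prob(4.0)
print(low < high)  # True
```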


Proceedings ArticleDOI
16 May 2022
TL;DR: In this article, a routing scheme inspired by artificial fish swarm (AFS) and ant colony optimization (ACO) algorithms is proposed to find the path with the shortest delay.
Abstract: With the development of marine exploitation, underwater acoustic sensor networks (UWA-SNs) have become a hot research field. However, the harsh environment poses a threat to the security of underwater communication, and most routing protocols ignore the curved propagation of acoustic waves, making them more susceptible to transmission interference and higher transmission delay. To cope with these problems, this work exploits a model under the assumption that curved sound propagation relies on a positive sound speed gradient. In order to find the path with the shortest delay, we design a routing scheme inspired by artificial fish swarm (AFS) and ant colony optimization (ACO) algorithms. Furthermore, we establish the path comprehensive benefit (PCB) to make a tradeoff between transmission delay and the lifetime of the network. The simulation results validate that the algorithm proposed in this work is capable of improving the system performance compared to the benchmark algorithms in terms of both transmission delay and load balance, while ensuring path reliability and the security of the entire network.

5 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an adaptive blocklength transmission framework to minimize the dominant part of the end-to-end delay of wireless networks, focusing on the transmission delay and the queuing delay.
Abstract: With the very stringent demand for real-time transmission of wireless communication services, the requirement of sub-millisecond ultra-low end-to-end delay has been initially proposed in the sixth generation (6G) communication networks. Finite blocklength transmission is one of the potential technologies to meet such a low end-to-end delay demand for the next generation networks. However, as the finite blocklength decreases, the transmission delay decreases while the queuing delay increases, which results in the tradeoff between the transmission delay and the queuing delay. To achieve the optimal balance, in this paper we propose an adaptive blocklength transmission framework to minimize the important part of the end-to-end delay of wireless networks, where we focus on the transmission delay and queuing delay. A dynamic buffering model for variable transmission time interval (V-TTI) is introduced for the time-varying arrival of packets adaptation. Then, we propose the Flexible proximal Alternating direction method of multipliers based Blocklength Optimization (FaBo) scheme to minimize the important part of the end-to-end delay for the single user case. We also propose the Multiple deep Q-learning network based Resource Allocation (MuRa) scheme, which can efficiently balance the transmission delay and queuing delay, to minimize the important part of the end-to-end delay for the multi-user case. Numerical results show that the proposed adaptive blocklength framework can reduce the important part of the end-to-end delay compared with that of long-term evolution and the fifth generation (5G) new radio. We also show that our proposed schemes can quickly converge to the minimum end-to-end delay.

5 citations
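The transmission-versus-queuing tradeoff stems from the finite-blocklength rate penalty: under the standard normal approximation, the achievable rate is roughly R ≈ C − sqrt(V/n)·Q⁻¹(ε), so shorter blocks leave the channel sooner but carry fewer bits per channel use. The sketch below illustrates only that penalty; the SNR and error target are assumed values, and this is not the paper's FaBo or MuRa scheme.

```python
import math

def q_inv(eps):
    """Inverse Gaussian Q-function via bisection (Q(x) = 0.5*erfc(x/sqrt(2)))."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > eps:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def finite_blocklength_rate(snr, n, eps=1e-5):
    """Normal-approximation achievable rate (bits per channel use) at
    blocklength n and block error probability eps over an AWGN channel."""
    capacity = math.log2(1 + snr)
    dispersion = (1 - 1 / (1 + snr) ** 2) * math.log2(math.e) ** 2
    return capacity - math.sqrt(dispersion / n) * q_inv(eps)

# Shorter blocks mean less transmission delay per packet but a larger
# rate penalty, so packets carry fewer bits and the queue drains slower.
r_short = finite_blocklength_rate(10, 100)
r_long = finite_blocklength_rate(10, 2000)
print(r_short < r_long)  # True
```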


Journal ArticleDOI
TL;DR: The simulation results validate that the proposed upper confidence bound (UCB)-based dynamic CoAP mode selection algorithm can flexibly balance the tradeoff between packet-loss ratio and transmission delay as well as satisfy the differentiated QoS in distribution IoT.
Abstract: Lightweight constrained application protocol (CoAP) has emerged as a common communication protocol for resource-constrained equipment in distribution internet of things (IoT). CoAP introduces two modes for data transmission, i.e., non-confirmed mode for reducing transmission delay and confirmed mode for reducing packet-loss ratio, which can be dynamically selected to satisfy the service requirements. However, there are still some challenges in dynamic CoAP mode selection, including incomplete information and differentiated quality of service (QoS) requirements of distributed IoT services. In this paper, we propose an upper confidence bound (UCB)-based dynamic CoAP mode selection algorithm for data transmission to address these challenges. The simulation results validate that, compared with the fixed mode selection algorithm, the proposed algorithm can flexibly balance the tradeoff between packet-loss ratio and transmission delay as well as satisfy the differentiated QoS in distribution IoT.

5 citations
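The UCB idea in this entry maps naturally to a two-armed bandit over CoAP's modes. The sketch below uses the standard UCB1 rule; the per-mode reward probabilities are hypothetical stand-ins for observed QoS, not values from the paper.

```python
import math, random

def ucb_select(counts, values, t):
    """UCB1: play each mode once, then pick the mode maximizing
    empirical mean reward plus an exploration bonus."""
    for m in counts:
        if counts[m] == 0:
            return m
    return max(counts, key=lambda m: values[m] + math.sqrt(2 * math.log(t) / counts[m]))

def run(rounds=5000, seed=0):
    random.seed(seed)
    # Hypothetical per-round QoS reward probabilities of each CoAP mode;
    # the agent does not know these and must learn them online.
    reward_prob = {"non-confirmed": 0.6, "confirmed": 0.8}
    counts = {m: 0 for m in reward_prob}
    values = {m: 0.0 for m in reward_prob}
    for t in range(1, rounds + 1):
        m = ucb_select(counts, values, t)
        r = 1.0 if random.random() < reward_prob[m] else 0.0
        counts[m] += 1
        values[m] += (r - values[m]) / counts[m]   # incremental mean
    return counts

counts = run()
print(counts["confirmed"] > counts["non-confirmed"])  # True
```

Over time the exploration bonus shrinks and the agent settles on the mode with the better observed QoS, while still occasionally probing the other.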


Journal ArticleDOI
01 Jul 2022-Sensors
TL;DR: An optimal message bundling scheme is proposed, based on an objective function for the total energy consumption of a WSN that takes into account the effects of packet retransmissions and thereby strikes the optimal balance between the number of bundled messages and the number of retransmissions for a given link quality.
Abstract: In a wireless sensor network (WSN), reducing the energy consumption of battery-powered sensor nodes is key to extending their operating duration before battery replacement is required. Message bundling can save on the energy consumption of sensor nodes by reducing the number of message transmissions. However, bundling a large number of messages could increase not only the end-to-end delays and message transmission intervals, but also the packet error rate (PER). End-to-end delays are critical in delay-sensitive applications, such as factory monitoring and disaster prevention. Message transmission intervals affect time synchronization accuracy when bundling includes synchronization messages, while an increased PER results in more message retransmissions and, thereby, consumes more energy. To address these issues, this paper proposes an optimal message bundling scheme based on an objective function for the total energy consumption of a WSN, which also takes into account the effects of packet retransmissions and, thereby, strikes the optimal balance between the number of bundled messages and the number of retransmissions given a link quality. The proposed optimal bundling is formulated as an integer nonlinear programming problem and solved using a self-adaptive global-best harmony search (SGHS) algorithm. The experimental results, based on the Cooja emulator of Contiki-NG, demonstrate that the proposed optimal bundling scheme saves up to 51.8% and 8.8% of the total energy consumption with respect to the baseline of no bundling and the state-of-the-art integer linear programming model, respectively.
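The balance the paper optimizes can be seen in a toy energy model: larger bundles amortize the per-packet header, but a longer packet fails more often and costs more retransmissions. All constants below (bit error rate, header and payload sizes, per-bit energy) are assumed, and the sketch simply enumerates bundle sizes rather than running the paper's SGHS algorithm.

```python
def per(k, p_bit=1e-4, header_bits=240, payload_bits=160):
    """Packet error rate of a bundle of k messages, assuming independent
    bit errors over the whole (header + k payloads) packet."""
    bits = header_bits + k * payload_bits
    return 1 - (1 - p_bit) ** bits

def energy_per_message(k, e_bit=1.0):
    """Expected transmit energy per message: one bundle costs bits*e_bit,
    needs 1/(1 - PER) attempts on average, and carries k messages."""
    bits = 240 + k * 160
    return bits * e_bit / ((1 - per(k)) * k)

# Bundling amortises the header, but too-long packets fail more often;
# some intermediate bundle size minimises energy per message.
best = min(range(1, 21), key=energy_per_message)
print(1 < best < 21)  # True
```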

Journal ArticleDOI
TL;DR: In this paper, a joint resource allocation for ultra-reliable and low-latency radio access networks (URLLRANs) with edge computing is investigated, where effective information and energy consumption are taken as performance metrics, based on rate definitions for short packets.
Abstract: This paper investigates a joint resource allocation for ultra-reliable and low-latency radio access networks (URLLRANs) with edge computing. Compared with conventional networks, URLLRANs have more restrictive latency and reliability requirements, and always feature short packet communications. It is challenging to provide edge computing services in URLLRANs, since the processing and transmission delays as well as packet loss during computation and communication must all be taken into consideration. Along these lines, to specify the trade-off between latency and reliability, this paper defines computation rates and transmission rates for short packets. Different from the existing work, the proposal takes effective information as well as energy consumption as performance metrics based on the definition. The packet request rates, computation latency, service rates, communication power, blocklength, and transmission information amounts are jointly optimized to reduce energy consumption and meanwhile generate more effective information for both the computation system and the communication system. To solve the NP-hard problem, the locally optimal solution and global optimal solution are both derived. Simulation results validate the performance advantage of the proposal and also indicate that the locally optimal solution can greatly reduce the computation complexity with only a small performance loss when compared with the global optimal solution.

Journal ArticleDOI
TL;DR: DetNet meets the requirements of deterministic delay, low jitter, and high bandwidth for telesurgery, and may provide an effective network guarantee for developing telemedicine systems.
Abstract: Deterministic Networking (DetNet) is a new technology that can effectively control network delay and may promote the revolution of telemedicine. This study verified the feasibility and advantage of deterministic networking in telesurgery.

Journal ArticleDOI
TL;DR: In this article, a novel augmented Lyapunov functional consisting of a mixed-delay-based augmented part and a time-squared two-sided looped part is proposed to fill the gap.
Abstract: The synchronization control for delayed neural networks (DNNs) via a sampled-data controller considering communication delay is studied by input delay approach. Although few scholars have put forward the coexistence of transmission delay and communication delay in this problem, no report has clarified the interaction between transmission delay and communication delay. Also, the time-squared terms are underutilized. Thus, a novel augmented Lyapunov functional, which consists of a mixed-delay-based augmented part and a time-squared two-sided looped part, is proposed to fill this gap. In the mixed-delay-based augmented part, not only the information of transmission delay and communication delay themselves, but also the interaction between those two delays is considered. Time-dependent quadratic terms as well as the sampling integral states are introduced in the two-sided looped part, so that more characteristic information of the sampling pattern is encompassed and the relationship of the states at the sampling instant is enhanced. Then, this novel augmented functional is applied to the synchronization control of DNNs. A less conservative synchronization criterion is obtained in the form of linear matrix inequalities. A numerical example illustrates the validity and superiority of the presented synchronization criterion.

Journal ArticleDOI
TL;DR: In this paper, a deep reinforcement learning-based medium access control (DL-MAC) protocol for underwater acoustic networks (UANs) is proposed, where one agent node employing the proposed DL-MAC protocol coexists with other nodes employing traditional protocols, such as TDMA or q-Aloha.
Abstract: This article proposes a novel deep-reinforcement learning-based medium access control (DL-MAC) protocol for underwater acoustic networks (UANs) where one agent node employing the proposed DL-MAC protocol coexists with other nodes employing traditional protocols, such as time division multiple access (TDMA) or q-Aloha. The DL-MAC agent learns to exploit the large propagation delays inherent in underwater acoustic communications to improve system throughput by either a synchronous or an asynchronous transmission mode. In the sync-DL-MAC protocol, the agent action space is transmission or no transmission, while in the async-DL-MAC, the agent can also vary the start time in each transmission time slot to further exploit the spatiotemporal uncertainty of the UANs. The deep Q-learning algorithm is applied to both sync-DL-MAC and async-DL-MAC agents to learn the optimal policies. A theoretical analysis and computer simulations demonstrate the performance gain obtained by both DL-MAC protocols. The async-DL-MAC protocol outperforms the sync-DL-MAC protocol significantly in sum throughput and packet success rate by adjusting the transmission start time and reducing the length of time slot.
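The agent's learning problem can be illustrated with a tabular stand-in for the deep Q-network: the state is the slot index modulo a TDMA neighbour's period, and the agent learns when to stay silent. The period, rewards, and learning constants are assumed, and propagation delay is ignored, so this is only a skeleton of the DL-MAC setting, not the paper's protocol.

```python
import random

def train(slots=20000, period=4, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning for slot access: the state is the slot index
    modulo the TDMA neighbour's period, the action is transmit (1) or
    stay silent (0); collisions earn -1, successful sends +1."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(period) for a in (0, 1)}
    for t in range(slots):
        s = t % period
        if random.random() < eps:                      # explore
            a = random.choice((0, 1))
        else:                                          # exploit
            a = max((0, 1), key=lambda act: q[(s, act)])
        busy = (s == 0)                                # neighbour owns slot 0
        r = 0.0 if a == 0 else (-1.0 if busy else 1.0)
        s2 = (t + 1) % period
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
    return q

q = train()
# The learned values prefer silence in the neighbour's slot and
# transmission in the free slots.
print(q[(0, 0)] > q[(0, 1)], q[(1, 1)] > q[(1, 0)])  # True True
```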


Journal ArticleDOI
TL;DR: In this article, a novel data-driven time-delay evaluation method for cyber-physical smart grid systems is proposed: the transmission process of data packets in the cyber layer is modeled, and the probability distribution function of multiple latencies in the spatial sequence is characterized using M/M/1 queuing theory and signal convolution methods.

Proceedings ArticleDOI
20 May 2022
TL;DR: In this article, an efficient and reliable long-message packet-loss feedback mechanism is proposed and tested on the designed BeiDou communication terminal, which significantly improves the success rate of transmitting BeiDou long messages and ensures that the long messages received by users are complete.
Abstract: The short-message communication of the BeiDou satellite navigation system has the disadvantages of a low success rate of transmitting long messages and no communication receipt. In order to improve the success rate and reliability of BeiDou long message data transmission, an efficient and reliable long-message packet-loss feedback mechanism is proposed by improving the original BeiDou communication protocol, and it is tested on the designed BeiDou communication terminal. The results show that the packet loss rate of the terminal with the packet loss feedback mechanism is reduced by an average of 55.78% compared with the packet loss rate without the feedback mechanism, which significantly improves the success rate of transmitting BeiDou long messages and makes the long messages received by the user more complete and accurate.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed multistream concurrent adaptive transmission control method for heterogeneous networks outperforms the general load-balancing streaming decision method in terms of delay, packet loss rate, transmission efficiency, and accuracy.
Abstract: With the development of wireless communication technology, video and multimedia have become an integral part of visual communication design. Designers want higher interactivity, diversity, humanization, and plurality of attributes in the process of visual communication, which places high requirements on the quality and real-time performance of data transmission. To address the problem of transmitting HD video over a heterogeneous wireless network with multiple concurrent streams, with the optimization goal of minimizing the system transmission delay and the delay difference between paths, the video sender and receiver are jointly considered, and the video transmission rate and the receiver-side cache size are adaptively adjusted to improve the user experience. A control model for video transmission based on cooperative wireless communication is established, and video streams with self-similarity and long-range correlation are studied based on the Pareto distribution and P/P/1 queuing theory, on which basis an adaptive streaming decision method for video streams in heterogeneous wireless networks is proposed. Simulation results show that the proposed multistream concurrent adaptive transmission control method for heterogeneous networks outperforms the general load-balancing streaming decision method in terms of delay, packet loss rate, transmission efficiency, and accuracy.


Journal ArticleDOI
01 May 2022-Sensors
TL;DR: In this paper, a leader-follower approach using linear quadratic regulator control was proposed to improve the performance of nonholonomic automated guided vehicles in the presence of packet loss, where a long short-term memory neural network was used to predict the position of the leader by the followers.
Abstract: This paper presents the formation tracking problem for non-holonomic automated guided vehicles. Specifically, we focus on a decentralized leader–follower approach using linear quadratic regulator control. We study the impact of communication packet loss—containing the position of the leader—on the performance of the presented formation control scheme. The simulation results indicate that packet loss degrades the formation control performance. In order to improve the control performance under packet loss, we propose the use of a long short-term memory neural network to predict the position of the leader by the followers in the event of packet loss. The proposed scheme is compared with two other prediction methods, namely, memory consensus protocol and gated recurrent unit. The simulation results demonstrate the efficiency of the long short-term memory in packet loss compensation in comparison with memory consensus protocol and gated recurrent unit.

Journal ArticleDOI
TL;DR: In this article, a delay minimization network that uses deep learning is proposed to solve the end-to-end delay minimization problem in multi-hop time-slotted time-division multiple access (TDMA) networks.

Journal ArticleDOI
TL;DR: In this paper, a flexible index mapping scheme was proposed to fully utilize the available frequency channels and transmission timings to increase the number of bits transmitted by an index and avoid packet collision.
Abstract: Long range wide area network (LoRaWAN) attracts attention due to its ability to realize massive machine-type communication (MTC); however, the throughput is low due to its narrow-band transmission and chirp spread spectrum. Packet-level index modulation (PLIM) increases the throughput by utilizing a data packet’s frequency channel and transmission timing as an information-bearing index. This letter proposes a flexible index mapping scheme to fully utilize the available frequency channels and transmission timings to increase the number of bits transmitted by an index and avoid packet collision. Numerical results show that the proposed scheme improves the throughput and significantly reduces the required memory size of end nodes.
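The number of extra bits an index can carry follows directly from the size of the index set: choosing one of C frequency channels and T candidate timings conveys ⌊log2(C·T)⌋ bits on top of the payload. The channel and timing counts below are example values, not LoRaWAN parameters from the letter.

```python
import math

def index_bits(channels, timings):
    """Bits conveyed by the choice of one (channel, timing) pair out of
    channels * timings combinations, as in packet-level index modulation."""
    return math.floor(math.log2(channels * timings))

# e.g. 8 frequency channels x 16 candidate timings -> 7 extra bits per packet
print(index_bits(8, 16))  # 7
```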

Journal ArticleDOI
TL;DR: An event-triggered network predictive control method was developed for networked control systems with random network delays, packet disorders, and packet dropouts in the feedback and forward channels; based on a time-delay state feedback control law, it actively compensates for time delays that exceed the allowable delay.
Abstract: An event-triggered network predictive control method, which uses allowable time delays, was developed for networked control systems with random network delays, packet disorders, and packet dropouts in the feedback and forward channels. In this method, random communication constraints are uniformly treated as a time delay at each time instant. Subsequently, based on a time-delay state feedback control law, the proposed method is used to actively compensate for the time delay that exceeds the allowable delay. In addition, the introduction of an event-triggered mechanism reduces communication loads and saves network resources. A necessary and sufficient stability condition for the closed-loop system is provided, which is independent of random time delays and is related to the allowable delay. Finally, the simulation results of the two systems verified the effectiveness of the proposed method.

Journal ArticleDOI
01 Oct 2022-Sensors
TL;DR: A reinforcement learning-based Intelligent Tactile Edge (ITE) framework is proposed to ensure both the transparency and stability of teleoperation systems over communication networks with high packet rates and variable time delays; simulation results indicate that the communication system can successfully achieve the QoS and QoE requirements by employing the proposed ITE framework.
Abstract: With the advancement in next-generation communication technologies, the so-called Tactile Internet is getting more attention due to its smart applications, such as haptic-enabled teleoperation systems. The stringent requirements such as delay, jitter, and packet loss of these delay-sensitive and loss-intolerant applications make it more challenging to ensure the Quality of Service (QoS) and Quality of Experience (QoE). In this regard, different haptic codec and control schemes were proposed for QoS and QoE provisioning in the Tactile Internet. However, they maximize the QoE while degrading the system’s stability under varying delays and high packet rates. In this paper, we present a reinforcement learning-based Intelligent Tactile Edge (ITE) framework to ensure both transparency and stability of teleoperation systems with high packet rates and variable time delay communication networks. The proposed ITE first estimates the network challenges, including communication delay, jitter, and packet loss, and then utilizes a Q-learning algorithm to select the optimal haptic codec scheme to reduce network load. The proposed framework aims to explore the optimal relationship between QoS and QoE parameters and make the tradeoff between stability and transparency during teleoperations. The simulation result indicates that the proposed strategy chooses the optimal scheme under different network impairments corresponding to the congestion level in the communication network while improving the QoS and maximizing the QoE. The end-to-end performance of throughput (1.5 Mbps) and average RTT (70 ms) during haptic communication is achieved with a learning rate and discounted factor value of 0.5 and 0.8, respectively. The results indicate that the communication system can successfully achieve the QoS and QoE requirements by employing the proposed ITE framework.

Journal ArticleDOI
TL;DR: In this article, the influence of time-delay interactions on the collective motion of swarming locusts was investigated, and it was shown that time delays of different types can affect the directional switches.
Abstract: Coordinated directional switches often emerge in moving biological groups replete with individual-level interactions. Recent self-propelled particles models can somewhat mimic the patterns of directional switches, but they usually do not include the effects of time delays in the interactions. Here, we focus on investigating the influence of time-delay interactions on the collective motion of swarming locusts, an experimentally well-studied system that exhibits ordered switches between clockwise and counterclockwise movement. We show, both analytically and numerically, that time delays of different types can affect the directional switches. Specifically, for the sufficiently small response delay, increasing the transmission delay can increase the mean switching time, while, for the large response delay, increasing the transmission delay may destroy the ordered directional switches. Our results decipher the role of time-delay interactions in the collective motion, which could be beneficial to the design of collective intelligent devices.

Journal ArticleDOI
TL;DR: In this article, an algorithm based on the alternating direction method of multipliers (ADMM) is designed using auxiliary variables and reformulation linearization technology (RLT) to solve the problem of task offloading in edge computing.
Abstract: Task offloading in edge computing is important for the Industrial Internet of Things (IIoT) to implement computation-intensive applications in real time. However, achieving efficient task offloading in IIoT is very challenging due to the limited computing resources of IIoT devices, the coupling of computing and communication resources, and the unreliability in multihop wireless transmission. In this article, we construct a link model by considering the influence of unreliable links in multihop transmission to reveal the relationship between reliability and transmission delay. Then, a nonconvex optimization problem that minimizes task processing delay is formulated, and task offloading is decided by considering transmission path selection, bandwidth allocation, and computational resource allocation. To solve this problem, an algorithm based on the alternating direction method of multipliers (ADMM) is designed using auxiliary variables and reformulation linearization technology (RLT). The simulation results show that our proposed algorithm can fully utilize the computing power of the edge server and reduce the task processing delay. Compared with the centralized algorithms, the performance of the proposed scheme is only 1% worse, but the calculation time can be reduced by 40%.

Journal ArticleDOI
TL;DR: Theoretical analysis and test results show that the proposed protocol can meet the requirements of data transmission between the on-board and trackside equipment and has the characteristics of simple deployment, high reliability, and high stability.
Abstract: Railway application services impose strict requirements on the reliability, delay, and other indicators of mobile data communication. To meet these requirements, a parallel redundancy protocol for the railway wireless data communication network is proposed. By introducing a redundancy adaptation layer into the standard TCP/IP model, multi-link parallel data transmission is realized and remains transparent to the network layer and above. Transmitting multiple copies of the same packet over different paths gives the application systems redundant communication capability without changing the existing data transmission mode. At the same time, to meet the receiving entity's requirement for redundant data elimination, an improved Bloom filter is proposed: by expanding the bit array and using a countdown mechanism, elements can be dynamically inserted, deleted, and retrieved over time. Theoretical analysis and test results show that the proposed protocol can meet the requirements of data transmission between the on-board and trackside equipment and offers simple deployment, high reliability, and high stability. Compared to a single transmission link, the packet loss rate measured in field tests is reduced by up to 39.67%, and the throughput and end-to-end data transmission delay are also significantly improved. The tests also show that the protocol introduces at most 5.07 μs of additional delay, which does not significantly affect data transmission performance.
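The "expanding bit array plus countdown" idea can be sketched as a time-decaying Bloom filter for duplicate-packet elimination. This is an illustration of the mechanism only, not the paper's exact design; the class name, hash scheme, and parameters (`m`, `k`, `ttl`) are hypothetical:

```python
import hashlib

class CountdownBloomFilter:
    """Duplicate elimination with automatic expiry: inserting a packet
    ID sets its k counters to `ttl`; each tick() ages every counter by
    one, so entries for old packets expire on their own."""

    def __init__(self, m=1024, k=4, ttl=8):
        self.m, self.k, self.ttl = m, k, ttl
        self.counters = [0] * m      # counters instead of single bits

    def _indexes(self, item):
        # k independent positions derived from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def seen_then_insert(self, packet_id):
        """Return True if packet_id is (probably) a duplicate, then
        (re)insert it with a fresh countdown."""
        idxs = list(self._indexes(packet_id))
        duplicate = all(self.counters[i] > 0 for i in idxs)
        for i in idxs:
            self.counters[i] = self.ttl
        return duplicate

    def tick(self):
        # Called once per timeslot: decrement every live counter.
        self.counters = [c - 1 if c > 0 else 0 for c in self.counters]
```

As with any Bloom filter, membership answers can be false positives but never false negatives within the `ttl` window, which suits redundant-copy elimination: at worst a fresh packet is occasionally discarded as a duplicate.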

Journal ArticleDOI
TL;DR: Simulation results demonstrate that the efficiency of the proposed sensor data sharing method is superior to baseline solutions in terms of packet loss ratio, transmission time, and packet dissemination ratio.
Abstract: Due to their good maneuverability, UAVs and vehicles are often used for environment perception in smart cities. To improve the efficiency of sensor data sharing in UAV-assisted mmWave vehicular networks (VNs), this paper proposes a sensor data sharing method based on blockage effect identification and network coding. A method for selecting concurrent sending vehicles is proposed based on mmWave link availability, the number of target vehicles of each sensor data packet, the distance between a sensor data packet and its target vehicle, the number of concurrent sending vehicles, and each packet's waiting time. A coded-packet construction method is put forward based on status information about the packets vehicles already hold. Simulation results demonstrate that the efficiency of the proposed method is superior to baseline solutions in terms of packet loss ratio, transmission time, and packet dissemination ratio.
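At its simplest, constructing a coded packet from "status information about the packets vehicles already hold" is an XOR combination: if each target vehicle is missing a different packet, one coded broadcast serves them all. The sketch below shows only this basic primitive, not the paper's selection logic; the payload contents are made up for illustration:

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length payloads -- the simplest network-coding
    primitive."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

# Vehicle A already holds p2 and is missing p1; vehicle B holds p1 and
# is missing p2. One broadcast of p1 XOR p2 serves both receivers:
# each XORs the coded packet with the packet it already has.
p1, p2 = b"lidar-frame-0001", b"radar-frame-0007"
coded = xor_packets(p1, p2)
recovered_at_A = xor_packets(coded, p2)   # A recovers p1
recovered_at_B = xor_packets(coded, p1)   # B recovers p2
```

Replacing two unicasts with one coded broadcast is where the gains in transmission time and dissemination ratio come from.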

Journal ArticleDOI
Jin Wang, Wei Ding, Man He, Jin-Feng Hu, N. Xiong 
TL;DR: In this paper, a coding-based random packet spraying (CRPS) scheme is proposed to reduce the tail latency caused by retransmissions: the source host transmits forward-error-correction (FEC) encoded packets and dynamically adjusts the data redundancy based on the packet loss rate.
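Dynamically adjusting redundancy to the measured loss rate, as the TL;DR describes, amounts to choosing how many parity packets to add so that delivery succeeds without retransmission with high probability. The following is a hedged sketch of that sizing step under an i.i.d.-loss assumption, not the CRPS algorithm itself; the function name and parameters are hypothetical:

```python
from math import comb

def redundancy_for(k, loss_rate, target=0.99, max_extra=64):
    """Smallest number of parity packets r such that, with i.i.d. loss
    probability `loss_rate`, at least k of the k + r sent packets
    arrive with probability >= target. With an MDS code (e.g.
    Reed-Solomon), any k received packets suffice to decode."""
    for r in range(max_extra + 1):
        n = k + r
        # P(at most r of the n packets are lost) = binomial tail
        ok = sum(comb(n, i) * loss_rate**i * (1 - loss_rate)**(n - i)
                 for i in range(r + 1))
        if ok >= target:
            return r
    return max_extra
```

Re-estimating `loss_rate` from feedback and recomputing `r` per flow captures the "dynamically adjusts the data redundancy" step: more redundancy when paths degrade, less when they are clean, trading bandwidth against retransmission-induced tail latency.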