
Showing papers on "Packet loss published in 2020"


Journal ArticleDOI
TL;DR: In this paper, the authors considered a scenario where the central controller transmits different packets to a robot and an actuator, where the actuator is located far from the controller, and the robot can move between the controller and the actuators.
Abstract: Ultra-reliable and low-latency communication (URLLC) is one of the three pillar applications defined in the fifth-generation new radio (5G NR), and its research is still in its infancy due to the difficulty of guaranteeing extremely high reliability (say a 10⁻⁹ packet loss probability) and low latency (say 1 ms) simultaneously. In URLLC, short packet transmission is adopted to reduce latency, such that the conventional Shannon capacity formula is no longer applicable, and the achievable data rate in the finite blocklength regime becomes a complex expression with respect to the decoding error probability and the blocklength. To provide URLLC service in a factory automation scenario, we consider that the central controller transmits different packets to a robot and an actuator, where the actuator is located far from the controller, and the robot can move between the controller and the actuator. In this scenario, we consider four fundamental downlink transmission schemes: orthogonal multiple access (OMA), non-orthogonal multiple access (NOMA), relay-assisted, and cooperative NOMA (C-NOMA). For all these transmission schemes, we jointly optimize the blocklength and power allocation to minimize the decoding error probability of the actuator subject to the reliability requirement of the robot, the total energy constraints, and the latency constraints. We further develop low-complexity algorithms to address the optimization problems for each transmission scheme. For the general case with more than two devices, we also develop a low-complexity efficient algorithm for the OMA scheme. Our results show that the relay-assisted transmission significantly outperforms the OMA scheme, while the NOMA scheme performs well when the blocklength is very limited. We further show that the relay-assisted transmission has superior performance over the C-NOMA scheme due to the larger feasible region of the former.
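For readers who want to compute the finite-blocklength expression the abstract alludes to, the standard tool is the normal approximation of Polyanskiy et al.; below is a minimal sketch (my own illustration, not the authors' code) of the decoding error probability for an AWGN channel at a given SNR, blocklength, and payload size.

```python
import math

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def decoding_error_prob(snr, blocklength, num_bits):
    """Normal approximation of the block decoding error probability
    (Polyanskiy et al.) for an AWGN channel. C is the Shannon capacity
    and V the channel dispersion, both in bits per channel use."""
    C = math.log2(1.0 + snr)
    V = (1.0 - 1.0 / (1.0 + snr) ** 2) * (math.log2(math.e)) ** 2
    rate = num_bits / blocklength            # coding rate (bits/channel use)
    return q_func((C - rate) * math.sqrt(blocklength / V))

# Example: a 100-bit packet over 200 channel uses at 10 dB SNR
print(decoding_error_prob(10.0, 200, 100))
```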

134 citations


Journal ArticleDOI
TL;DR: In this article, a simple but accurate analytical model of the distributed network split probability is proposed, which can predict the network split time and probability in theory and be used to optimize the parameters of the Raft consensus algorithm.
Abstract: Consensus is one of the key problems in blockchains. There are many articles analyzing the performance of threat models for blockchains, but network stability seems to have received little attention, even though it in fact affects blockchain performance. This paper studies the performance of a well-adopted consensus algorithm, Raft, in networks with a non-negligible packet loss rate. In particular, we propose a simple but accurate analytical model to analyze the distributed network split probability. At a given time, we explicitly present the network split probability as a function of the network size, the packet loss rate, and the election timeout period. To validate our analysis, we implement a Raft simulator, and the simulation results coincide with the analytical results. With the proposed model, one can predict the network split time and probability in theory and optimize the parameters in the Raft consensus algorithm.
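As a rough companion to such an analysis, here is a toy Monte Carlo sketch (my own simplification, not the authors' model or simulator): assume a follower starts an election, splitting the network, when every heartbeat it should receive within one election timeout is lost, and compare against the corresponding closed form.

```python
import random

def split_probability(n_nodes, loss_rate, heartbeats_per_timeout, trials=100_000):
    """Estimate the probability that at least one follower misses every
    heartbeat within its election timeout and triggers an election."""
    splits = 0
    for _ in range(trials):
        for _ in range(n_nodes - 1):                 # each follower
            if all(random.random() < loss_rate
                   for _ in range(heartbeats_per_timeout)):
                splits += 1
                break
    return splits / trials

# Closed form under the same toy assumptions: 1 - (1 - p**k)**(n-1)
n, p, k = 5, 0.2, 3
print(split_probability(n, p, k))
print(1 - (1 - p**k) ** (n - 1))
```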

120 citations


Journal ArticleDOI
TL;DR: A combination of deep reinforcement learning (DRL) and a long short-term memory (LSTM) network is adopted to accelerate the convergence of the algorithm, and the quality of experience (QoE) is introduced to evaluate the results of UAV information sharing.
Abstract: Formation flights of multiple unmanned aerial vehicles (UAVs) can improve the mission success probability relative to a single vehicle. Dynamic spectrum interaction solves the problem of orderly communication among multiple UAVs under limited bandwidth. By introducing a reinforcement learning algorithm, UAVs can obtain the optimal strategy by continuously interacting with the environment. In this paper, two types of UAV formation communication methods are studied. One allows information sharing between two UAVs in the same time slot; the other adopts a dynamic time-slot allocation scheme in which UAVs alternate the use of time slots to realize information sharing. The quality of experience (QoE) is introduced to evaluate the results of UAV information sharing, and an M/G/1 queueing model is used for prioritization and to evaluate UAV packet loss. In terms of algorithms, a combination of deep reinforcement learning (DRL) and a long short-term memory (LSTM) network is adopted to accelerate the convergence of the algorithm. The experimental results show that, compared with the Q-learning and deep Q-network (DQN) methods, the proposed method achieves faster convergence and better performance with respect to the throughput rate.
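For context, evaluations based on an M/G/1 queue usually start from the Pollaczek-Khinchine formula for the mean waiting time, W = λE[S²] / (2(1 − ρ)) with utilization ρ = λE[S]; a small self-check sketch (example values mine):

```python
def mg1_mean_wait(arrival_rate, mean_service, second_moment_service):
    """Pollaczek-Khinchine mean waiting time for an M/G/1 queue."""
    rho = arrival_rate * mean_service        # utilization, must be < 1
    assert rho < 1.0, "queue is unstable"
    return arrival_rate * second_moment_service / (2.0 * (1.0 - rho))

# Sanity check with exponential service (E[S^2] = 2 E[S]^2), i.e. M/M/1:
lam, es = 0.8, 1.0
print(mg1_mean_wait(lam, es, 2 * es**2))     # -> 4.0, matching M/M/1 at rho = 0.8
```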

96 citations


Journal ArticleDOI
TL;DR: A fuzzy sliding mode congestion control algorithm (FSMC) is presented, which adaptively regulates the buffer queue length in congested nodes, significantly reduces the impact of external uncertain disturbances, and achieves good performance: rapid convergence, lower average delay, lower packet loss ratio and higher throughput.
Abstract: Wireless sensor networks (WSNs) act as a building block of the Internet of Things and have been used in various applications to sense the environment and transmit data to the Internet. However, WSNs are very vulnerable to congestion, which results in a higher packet loss ratio, longer delay and lower throughput. To address this issue, this paper presents a fuzzy sliding mode congestion control algorithm (FSMC) for WSNs. Firstly, by applying the signal-to-noise ratio of the wireless channel to the TCP model, a new cross-layer congestion control model between the transport layer and the MAC layer is proposed. Then, by combining fuzzy control with sliding mode control (SMC), a fuzzy sliding mode controller (FSMC) is designed, which adaptively regulates the buffer queue length in congested nodes and significantly reduces the impact of external uncertain disturbances. Finally, extensive simulations are conducted in MATLAB/Simulink and NS-2.35, comparing the proposed controller with traditional control strategies such as fuzzy control, PID and SMC; the results show that FSMC effectively adapts to changes in queue length and achieves good performance, including rapid convergence, lower average delay, lower packet loss ratio and higher throughput.
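To make the control idea concrete, here is a minimal non-fuzzy sliding-mode sketch of my own that regulates a node's queue length toward a reference; in FSMC the fuzzy logic would adapt the switching gain K, which is fixed here.

```python
import math

def simulate_smc_queue(q_ref=50.0, steps=300, dt=0.1, K=8.0):
    """Toy sliding-mode regulation of a buffer queue: treat the queue as
    a first-order system q' = arrivals - service, take the sliding
    surface s = q - q_ref, and drive it to zero with a switching law.
    tanh() softens the classic sign() term to limit chattering."""
    q = 0.0
    arrivals = 20.0                                  # disturbance (pkts/unit time)
    for _ in range(steps):
        s = q - q_ref                                # sliding surface
        service = max(0.0, arrivals + K * math.tanh(s))
        q = max(0.0, q + (arrivals - service) * dt)  # queue dynamics
    return q

print(round(simulate_smc_queue(), 1))                # settles near q_ref = 50
```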

93 citations


Proceedings ArticleDOI
30 Jul 2020
TL;DR: Aeolus is a solution focusing on "pre-credit" packet transmission as a building block for proactive transports; it contains unconventional design principles such as scheduled-packet-first (SPF), which de-prioritizes first-RTT packets instead of prioritizing them as prior work does.
Abstract: As datacenter network bandwidth keeps growing, proactive transport becomes attractive, where bandwidth is proactively allocated as "credits" to senders who then can send "scheduled packets" at a right rate to ensure high link utilization, low latency, and zero packet loss. While promising, a fundamental challenge is that proactive transport requires at least one RTT for credits to be computed and delivered. In this paper, we show such a one-RTT "pre-credit" phase could carry a substantial amount of flows at high link-speeds, but none of the existing proactive solutions treats it appropriately. We present Aeolus, a solution focusing on "pre-credit" packet transmission as a building block for proactive transports. Aeolus contains unconventional design principles such as scheduled-packet-first (SPF) that de-prioritizes the first-RTT packets, instead of prioritizing them as prior work does. It further exploits the preserved, deterministic nature of proactive transport as a means to recover lost first-RTT packets efficiently. We have integrated Aeolus into ExpressPass[14], NDP[18] and Homa[29], and shown, through both implementation and simulations, that the Aeolus-enhanced solutions deliver significant performance or deployability advantages. For example, it improves the average FCT of ExpressPass by 56%, cuts the tail FCT of Homa by 20x, while achieving similar performance as NDP without switch modifications.
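The scheduled-packet-first principle can be pictured as a two-class switch port where credit-backed scheduled packets always depart first and only unscheduled first-RTT packets are dropped under pressure; the following is a simplified sketch of my own, not the Aeolus implementation (which targets switch hardware).

```python
from collections import deque

class SPFPort:
    """Toy switch port with scheduled-packet-first (SPF) priority:
    scheduled packets always dequeue before unscheduled first-RTT
    packets, and only unscheduled packets are evicted or dropped."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.scheduled = deque()
        self.unscheduled = deque()

    def enqueue(self, pkt, is_scheduled):
        total = len(self.scheduled) + len(self.unscheduled)
        if is_scheduled:
            # Credit pacing should keep scheduled traffic within capacity;
            # if the port is full, evict the newest first-RTT packet.
            if total >= self.capacity and self.unscheduled:
                self.unscheduled.pop()
            self.scheduled.append(pkt)
            return True
        if total < self.capacity:
            self.unscheduled.append(pkt)
            return True
        return False                              # drop unscheduled packet

    def dequeue(self):
        if self.scheduled:
            return self.scheduled.popleft()       # scheduled first
        return self.unscheduled.popleft() if self.unscheduled else None
```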

86 citations


Journal ArticleDOI
TL;DR: A smart gateway-based authentication and authorization method to prevent and protect more sensitive physiological data from an attacker and malicious users is proposed.
Abstract: As health data are very sensitive, there is a need to protect and control them with end-to-end security solutions. In general, a number of authentication and authorization schemes are available to protect the sensitive data collected with the help of wearable Internet of Things (IoT) devices. The transport layer security (TLS) protocol is designed to transfer data from source to destination in a reliable manner, ensuring that messages are neither lost nor reordered. The main challenge with TLS is that it cannot tolerate unreliability. To overcome this issue, the Datagram transport layer security (DTLS) protocol has been designed and used in low-power wireless constrained networks. The DTLS protocol consists of a base protocol, record layer, handshake protocol, ChangeCipherSpec and alert protocol. A complex issue with the DTLS protocol is the possibility that an attacker could send a large number of ClientHello messages to a server, causing a denial-of-service (DoS) attack against it. Such a DoS attack opens new connections between the attacker and the server, consumes bandwidth, and forces the allocation of resources for every ClientHello message. To overcome this issue, we propose a smart gateway-based authentication and authorization method to protect sensitive physiological data from attackers and malicious users. The enhanced smart gateway-based DTLS is demonstrated with the help of the Contiki Network Simulator. The packet loss ratio is calculated for CoAP, the host identity protocol, CoAP-DTLS and CoAP-enhanced DTLS to evaluate the performance of the proposed work. Data transmission and handshake times are also calculated to evaluate the efficiency of the enhanced DTLS.
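For context, the baseline DTLS defense that a gateway-based scheme builds on is the stateless cookie exchange of RFC 6347: the server answers a ClientHello with a HelloVerifyRequest carrying an HMAC-based cookie and allocates no state until the client echoes the cookie back. A minimal sketch (parameter choices mine):

```python
import hmac, hashlib, os

SERVER_SECRET = os.urandom(32)      # rotated periodically in practice

def make_cookie(client_ip, client_hello_params):
    """Stateless DTLS-style cookie: an HMAC over the client's identity
    and hello parameters, so the server stores nothing per ClientHello."""
    msg = client_ip.encode() + client_hello_params
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()

def verify_cookie(client_ip, client_hello_params, cookie):
    """Only clients that can receive at their claimed address can echo
    a valid cookie, defeating spoofed ClientHello floods."""
    return hmac.compare_digest(make_cookie(client_ip, client_hello_params), cookie)

c = make_cookie("10.0.0.7", b"cipher_suites...")
print(verify_cookie("10.0.0.7", b"cipher_suites...", c))   # True
print(verify_cookie("10.0.0.8", b"cipher_suites...", c))   # False: spoofed source
```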

72 citations


Journal ArticleDOI
TL;DR: It is revealed through experimental results in MATLAB that the proposed EEA scheme performs better than the constant TPC by enhancing energy efficiency, sustainability, and reliability during data transmission for elderly healthcare.
Abstract: Deep learning (DL) driven cardiac image processing methods manage and monitor the massive medical data collected by the Internet of Things (IoT) through wearable devices. A joint DL and IoT platform, known as Deep-IoMT, extracts accurate cardiac image data from noisy conventional devices and tools. Besides, smart and dynamic technological trends have caught attention in every corner, such as healthcare, which is made possible by portable, lightweight sensor-enabled devices. Their tiny size and resource-constrained nature restrict them from performing several tasks at a time. Thus, energy drain, limited battery lifetime, and a high packet loss ratio (PLR) are the key challenges to be tackled carefully for ubiquitous medical care. Sustainability (i.e., longer battery lifetime), energy efficiency, and reliability are vital ingredients for wearable devices to empower a cost-effective and pervasive healthcare environment. The key contributions of this paper are six-fold. First, a novel self-adaptive power control-based enhanced efficient-aware approach (EEA) is proposed to reduce energy consumption and enhance the battery lifetime and reliability. The proposed EEA and the conventional constant TPC are evaluated by adopting real-time data traces of static (i.e., sitting) and dynamic (i.e., cycling) activities and cardiac images. Second, a novel joint DL-IoMT framework is proposed for the cardiac image processing of remote elderly patients. Third, a DL-driven layered architecture for IoMT is proposed. Fourth, a battery model for IoMT is proposed by adopting the features of the wireless channel and body postures. Fifth, network performance is optimized by introducing sustainability, energy drain, PLR and average threshold RSSI indicators. Sixth, a use case for cardiac-image-enabled elderly patient monitoring is proposed. Finally, experimental results in MATLAB reveal that the proposed EEA scheme performs better than constant TPC by enhancing energy efficiency, sustainability, and reliability during data transmission for elderly healthcare.
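A self-adaptive transmission power control loop of the kind the paper compares against constant TPC can be sketched as closed-loop RSSI tracking; the thresholds and step size below are hypothetical stand-ins, not the paper's EEA parameters.

```python
def adapt_tx_power(tx_power_dbm, rssi_dbm,
                   target_rssi=-85.0, hysteresis=3.0,
                   p_min=-10.0, p_max=4.0, step=1.0):
    """Nudge transmit power so link RSSI stays inside
    [target - hysteresis, target + hysteresis]: lower power saves
    energy, higher power protects the packet loss ratio."""
    if rssi_dbm > target_rssi + hysteresis:
        tx_power_dbm -= step                 # link has margin: save energy
    elif rssi_dbm < target_rssi - hysteresis:
        tx_power_dbm += step                 # link too weak: avoid losses
    return min(p_max, max(p_min, tx_power_dbm))

power = 0.0
for rssi in [-95, -92, -88, -84, -80, -86]:  # measured feedback per packet
    power = adapt_tx_power(power, rssi)
    print(power)
```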

68 citations


Journal ArticleDOI
TL;DR: A novel routing protocol for urban VANETs called RSU-assisted Q-learning-based Traffic-Aware Routing (QTAR) is introduced, combining the advantages of geographic routing with the static road map information, which outperforms the existing traffic-aware routing protocols.
Abstract: In urban vehicular ad hoc networks (VANETs), the high mobility of vehicles along street roads poses daunting challenges to routing protocols and has a great impact on network performance. In addition, the frequent network partition caused by an uneven distribution of vehicles in an urban environment further places higher requirements on the routing protocols in VANETs. More importantly, the high vehicle density during the traffic peak hours and a variety of natural obstacles, such as tall buildings, other vehicles and trees, greatly increase the difficulty of protocol design for high quality communications. Considering these issues, in this paper, we introduce a novel routing protocol for urban VANETs called RSU-assisted Q-learning-based Traffic-Aware Routing (QTAR). Combining the advantages of geographic routing with the static road map information, QTAR learns the road segment traffic information based on the Q-learning algorithm. In QTAR, a routing path consists of multiple dynamically selected high reliability connection road segments that enable packets to reach their destination effectively. For packet forwarding within a road segment, distributed V2V Q-learning (Q-learning occurs between vehicles) integrated with QGGF (Q-greedy geographical forwarding) is adopted to reduce delivery delay and the effect of fast vehicle movements on path sensitivity, while distributed R2R Q-learning (Q-learning occurs between RSU units) is designed for packet forwarding at each intermediate intersection. In the case of a local optimum occurring in QGGF, SCF (store-carry-forward) is used to reduce the possibility of packet loss. Detailed simulation experimental results demonstrate that QTAR outperforms the existing traffic-aware routing protocols, in terms of 7.9% and 16.38% higher average packet delivery ratios than those of reliable traffic-aware routing (RTAR) and greedy traffic-aware routing (GyTAR) in high vehicular density scenarios and 30.96% and 46.19% lower average end-to-end delays with respect to RTAR and GyTAR in low vehicular density scenarios, respectively.
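The learning step in a Q-learning-based routing protocol such as QTAR is the standard tabular update Q(s,a) ← Q(s,a) + α(r + γ·max Q(s',a') − Q(s,a)); the sketch below applies it to road-segment selection, with state and reward encodings that are my simplifications of the paper's two-tier V2V/R2R design.

```python
import random
from collections import defaultdict

class SegmentQRouter:
    """Tabular Q-learning over (intersection, road segment) pairs.
    In practice, rewards would come from measured segment delay
    and connectivity."""
    def __init__(self, alpha=0.3, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self, intersection, segments):
        if random.random() < self.eps:                 # explore
            return random.choice(segments)
        return max(segments, key=lambda s: self.q[(intersection, s)])

    def update(self, intersection, segment, reward,
               next_intersection, next_segments):
        best_next = max((self.q[(next_intersection, s)] for s in next_segments),
                        default=0.0)
        key = (intersection, segment)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```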

67 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed cluster-based Data Aggregation Scheme for Latency and Packet Loss Reduction in WSN reduces the latency and overhead and increases the packet delivery ratio and residual energy.

59 citations


Proceedings ArticleDOI
30 Jul 2020
TL;DR: A network-wide architectural design OmniMon, which simultaneously achieves resource efficiency and full accuracy in flow-level telemetry for large-scale data centers and addresses consistency in network- wide epoch synchronization and accountability in error-free packet loss inference.
Abstract: Network telemetry is essential for administrators to monitor massive data traffic in a network-wide manner. Existing telemetry solutions often face the dilemma between resource efficiency (i.e., low CPU, memory, and bandwidth overhead) and full accuracy (i.e., error-free and holistic measurement). We break this dilemma via a network-wide architectural design OmniMon, which simultaneously achieves resource efficiency and full accuracy in flow-level telemetry for large-scale data centers. OmniMon carefully coordinates the collaboration among different types of entities in the whole network to execute telemetry operations, such that the resource constraints of each entity are satisfied without compromising full accuracy. It further addresses consistency in network-wide epoch synchronization and accountability in error-free packet loss inference. We prototype OmniMon in DPDK and P4. Testbed experiments on commodity servers and Tofino switches demonstrate the effectiveness of OmniMon over state-of-the-art telemetry designs.
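The loss-inference piece of such a telemetry design has a simple core: with consistent epochs, per-flow loss is the difference between upstream and downstream counters. A stripped-down sketch (mine, omitting OmniMon's epoch-synchronization and accountability machinery):

```python
from collections import Counter

def infer_losses(upstream_counts, downstream_counts):
    """Per-flow packet loss in one epoch = packets counted at the
    upstream entity minus packets counted downstream. This is only
    error-free if both counters cover exactly the same epoch."""
    losses = {}
    for flow, sent in upstream_counts.items():
        received = downstream_counts.get(flow, 0)
        if sent > received:
            losses[flow] = sent - received
    return losses

up = Counter({"10.0.0.1->10.0.0.2:80": 1000, "10.0.0.3->10.0.0.4:443": 500})
down = Counter({"10.0.0.1->10.0.0.2:80": 990, "10.0.0.3->10.0.0.4:443": 500})
print(infer_losses(up, down))   # {'10.0.0.1->10.0.0.2:80': 10}
```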

57 citations


Journal ArticleDOI
TL;DR: Simulation studies with two-area and five-area interconnected power systems show that the proposed method can avoid optimistic or conservative design effectively while the stability and dynamic performance can be ensured simultaneously.
Abstract: Random transmission delay and packet loss may cause load frequency control (LFC) performance degradation or even instability in interconnected power systems. Traditional robust methods mainly focus on guaranteeing asymptotic stability by pre-estimating a maximum-delay case. As a result, the designed controller cannot fully satisfy actual operational requirements because of the improper estimation of network performance. In this paper, based on jump system theory, the state-space model of the LFC system is established with the transmission delay as the jumping decision variable, which describes the possible dynamics of the networked LFC system under bounded time delay. Secondly, in order to avoid the conservative or optimistic design caused by an imprecise pre-set delay, the upper bound of the transmission delay in the power communication system is calculated by deterministic network theory. Furthermore, asymptotic stability constraints in bilinear matrix inequality form are deduced via the construction of a Lyapunov function. Moreover, to reduce the peak value, peak time and settling time of frequency deviations caused by power mismatches, an iterative optimization algorithm is proposed to obtain the optimal feedback matrix. Simulation studies with two-area and five-area interconnected power systems show that the proposed method effectively avoids optimistic or conservative design while stability and dynamic performance are ensured simultaneously.

Journal ArticleDOI
TL;DR: Peekaboo is a novel learning-based multipath scheduler that is aware of the dynamic characteristics of the heterogeneous paths and able to learn scheduling decisions to adopt over time based on the current path characteristics and dynamicity levels - from both deterministic and stochastic perspectives.
Abstract: Multipath transport protocols utilize multiple network paths (e.g., WiFi and cellular) to achieve improved performance and reliability, compared with their single-path counterparts. The scheduler of a multipath transport protocol determines how to distribute the data packets onto different paths. However, state-of-the-art multipath schedulers face the challenge when dealing with heterogeneous paths with dynamic path characteristics (i.e., packet loss, fluctuation of delay). In this paper, we propose Peekaboo, a novel learning-based multipath scheduler that is aware of the dynamic characteristics of the heterogeneous paths. Peekaboo is able to learn scheduling decisions to adopt over time based on the current path characteristics and dynamicity levels - from both deterministic and stochastic perspectives. We implement Peekaboo in Multipath QUIC (MPQUIC) and compare it with state-of-the-art multipath schedulers for a wide range of dynamic heterogeneous environments, upon both emulated and real networks. Our results show that Peekaboo outperforms the other schedulers by up to 31.2% in emulated networks and up to 36.3% in real network scenarios.
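At its core, a learning-based multipath scheduler treats each send decision as a bandit choice over paths, rewarded by observed delivery quality; here is a bare-bones epsilon-greedy sketch of my own (Peekaboo itself uses a more elaborate contextual scheme inside MPQUIC).

```python
import random

class BanditScheduler:
    """Pick a path per packet, then reward the choice with the observed
    per-packet delivery quality (here, negative delay)."""
    def __init__(self, paths, eps=0.1):
        self.paths = list(paths)
        self.eps = eps
        self.value = {p: 0.0 for p in self.paths}   # running reward estimate
        self.count = {p: 0 for p in self.paths}

    def pick(self):
        if random.random() < self.eps:              # keep exploring: paths drift
            return random.choice(self.paths)
        return max(self.paths, key=self.value.get)

    def feedback(self, path, reward):
        self.count[path] += 1
        # incremental mean keeps the estimate cheap to maintain
        self.value[path] += (reward - self.value[path]) / self.count[path]

sched = BanditScheduler(["wifi", "lte"])
for _ in range(1000):
    p = sched.pick()
    delay = random.gauss(20, 5) if p == "wifi" else random.gauss(50, 20)
    sched.feedback(p, -delay)
print(sched.value)   # the wifi estimate should dominate
```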

Journal ArticleDOI
TL;DR: An intelligent approach for energy-efficient trajectory design called Neuro-fuzzy Emperor Penguin Optimization (NF-EPO) is presented for mobile-sink-based IoT-supported WSNs, together with an adaptive neuro-fuzzy inference system (ANFIS) for optimal cluster head selection.
Abstract: In WSNs, mobility of the sink node is worthwhile since it creates a potential way to gather information from sensor nodes through direct communication. To mitigate the delay experienced while visiting all cluster head nodes in the network, a mobile sink gathers information only from a limited set of special nodes called rendezvous points (RPs), and the rest of the cluster head nodes transmit their information to the nearest RP. It is extremely difficult to discover an optimal group of rendezvous points and decide the trajectory of the mobile sink. In this paper, we propose an intelligent approach for energy-efficient trajectory design called Neuro-fuzzy Emperor Penguin Optimization (NF-EPO) for mobile-sink-based IoT-supported WSNs. The paper presents an adaptive neuro-fuzzy inference system (ANFIS) for optimal cluster head selection. Here, we consider three input parameters, residual energy, neighbour node sharing and node behaviour history, to choose the best CH. Finally, an effective routing algorithm based on emperor penguin optimization (EPO) is used to find the rendezvous points and the travelling path for the mobile sink. The simulation outcomes illustrate that the proposed method provides superior performance compared to other existing routing schemes. The efficiency of the system is determined using different metrics: network lifetime, energy consumption with static and dynamic sinks, end-to-end delay, bit error rate, packet loss ratio, buffer occupancy, channel load, jitter, latency, packet delivery ratio, and throughput.

Proceedings ArticleDOI
23 Mar 2020
TL;DR: This article extends the current mobile edge offloading models and presents a model for multi-server device-to-device, edge, and cloud offloading, and introduces a new task allocation algorithm exploiting this model for MAR offloading.
Abstract: Mobile Augmented Reality (MAR) applications employ computationally demanding vision algorithms on resource-limited devices. In parallel, communication networks are becoming more ubiquitous. Offloading to distant servers can thus overcome the device limitations at the cost of network delays. Multipath networking has been proposed to overcome network limitations but is not easily adaptable to edge computing due to server proximity and networking differences. In this article, we extend current mobile edge offloading models and present a model for multi-server device-to-device, edge, and cloud offloading. We then introduce a new task allocation algorithm exploiting this model for MAR offloading. Finally, we evaluate the allocation algorithm against naive multipath scheduling and single-path models through both a real-life experiment and extensive simulations. Under sub-optimal network conditions, our model reduces latency compared to single-path offloading and significantly decreases packet loss compared to random task allocation. We also show the impact of varying WiFi parameters on task completion and demonstrate the robustness of our system under network instability: with only 70% WiFi availability, it keeps the excess latency below 9 ms. Finally, we evaluate the capabilities of the upcoming 5G and 802.11ax.

Journal ArticleDOI
TL;DR: Numerical investigations of the novel architecture under a realistic traffic model indicate that, by dynamically allocating the TRXs and elastically controlling the WSS, a packet loss below 1E-5 and a server-to-server latency lower than 3 μs can be guaranteed for different traffic patterns at a load of 0.4.
Abstract: Fast and high-capacity optical switching techniques have the potential to enable low-latency and high-throughput optical data center networks (DCNs) that can handle the rapidly increasing traffic driven by multiple applications. Flexibility of the DCN is of key importance to provide adaptive and dynamic bandwidth and capacity to handle the variable traffic patterns of heterogeneous applications. Aiming at improving the network performance and system flexibility of optical DCNs, we propose and investigate a novel optical DCN architecture named ROTOS based on reconfigurable optical top-of-rack (ToR) switches and fast optical switches. In the proposed DCN architecture, the novel optical flexible ToRs employing multiple transceivers (TRXs) and a wavelength selective switch (WSS) are reconfigured by the software-defined networking (SDN) control plane. The bandwidth can be dynamically allocated to the dedicated optical links on demand according to the desired oversubscription (OV) and the intra-/inter-cluster traffic matrix. Numerical investigations of the novel architecture under a realistic traffic model indicate that, by dynamically allocating the TRXs and elastically controlling the WSS, a packet loss below 1E-5 and a server-to-server latency lower than 3 μs can be guaranteed for different traffic patterns at a load of 0.4. With respect to a DCN with static interconnections, the average packet loss of ROTOS decreases by two orders of magnitude and the average server-to-server latency improves by 21.5%. Scalability investigation shows limited (1.1%) performance degradation as the network scales from 2560 to 40960 servers. Additionally, the dynamic bandwidth allocation of the DCN is experimentally validated; network performance results show a packet loss of 0.05 and 5.85 μs end-to-end latency at a load of 0.8. Finally, investigations on cost and power consumption confirm that the ROTOS DCN architecture has 28.4% lower cost and 35.0% better power efficiency with respect to electrical-switch-based DCNs.

Journal ArticleDOI
TL;DR: The proposed techniques exploit the concept of the Highest Rank Common Ancestor (HRCA) to find a common ancestor with the highest rank among all the ancestors that a pair of nodes share in the target network tree, making the mitigation process lightweight and fast.

Journal ArticleDOI
TL;DR: A novel AI-based reliable and interference-free mobility management algorithm (RIMMA) for fog computing intra-vehicular networks, which significantly improves computation, communication, cooperation, and storage space and outperforms the traditional technique for intercity vehicular networks.
Abstract: Artificial intelligence (AI)-driven fog computing (FC) and its emerging role in vehicular networks is playing a remarkable role in revolutionizing daily human lives. Fog radio access networks are accommodating billions of Internet of Things devices for real-time interactive applications at high reliability. One of the critical challenges in today's vehicular networks is the lack of standard wireless channel models offering good quality of service (QoS) to passengers seeking pleasurable travel (i.e., high-quality videos, images, news, phone calls to friends/relatives). To remedy these issues, this article contributes in four ways. First, we develop a novel AI-based reliable and interference-free mobility management algorithm (RIMMA) for fog computing intra-vehicular networks, because traffic monitoring and driver safety management are important and basic foundations. The proposed RIMMA, in association with FC, significantly improves computation, communication, cooperation, and storage space. Furthermore, owing to its self-adaptive, reliable, intelligent, and mobility-aware nature, sporadic contents are monitored effectively in highly mobile vehicles. Second, we propose a reliable and delay-tolerant wireless channel model with better QoS for passengers. Third, we propose a novel reliable and efficient multi-layer fog-driven inter-vehicular framework. Fourth, we optimize QoS in terms of mobility, reliability, and packet loss ratio. The proposed RIMMA is also compared to an existing competitive conventional method (i.e., a baseline). Experimental results reveal that the proposed RIMMA outperforms the traditional technique for intercity vehicular networks.

Journal ArticleDOI
TL;DR: A design of a complete Software Defined Vehicular Network prototype is proposed, in which several SDN controllers are used to implement routing algorithms to transport the vehicles' control plane through the backbone, to process information received from vehicles to predict the topology and compute routing paths for V2V and V2I communication, and to manage mobility schemes.
Abstract: The next frontier of the automotive revolution will be connected vehicles and the Internet of Vehicles (IoV), which will transform a vehicle into much more than a smartphone on wheels. The fifth-generation (5G) networks are expected to meet the various communication requirements of vehicles. This will enable the creation of a large market of services that exploit the data generated by in-vehicle sensors. The "cloudification" of vehicular network resources through Software Defined Networking (SDN) is a new network paradigm. A critical part of SDN is the transport of the control plane. Therefore, it is crucial for SDN-based On-Board Units, i.e., SDN switches, to keep a robust connection with the SDN controller. In the literature, several works propose SDN-based architectures for vehicular communication; however, nearly all performance evaluations are based on theory or simulation results. This paper proposes the design of a complete Software Defined Vehicular Network prototype. At first, the SDN-based backbone is tested on real hardware composed of OpenFlow switches. Next, the SDN-based radio access is tested with WiFi Access Points that support the Click Modular Router and OpenvSwitch/OpenFlow. Finally, a Single Board Computer is used as the On-Board Unit (OBU), on which OpenFlow switch functionalities are implemented. Several SDN controllers are used to implement routing algorithms to transport the vehicles' control plane through the backbone, to process information received from vehicles in order to predict the topology and compute routing paths for V2V and V2I communication, and finally to manage mobility schemes. Communication quality is evaluated by measuring throughput, delay, processing time, handoff latency and packet loss.

Journal ArticleDOI
TL;DR: The proposed scheme prolongs the lifespan of WSNs as well as of individual nodes compared with existing schemes in the operational environment, and surpasses the existing schemes in terms of lifespan of individual nodes, throughput, packet loss ratio, latency, communication costs and computation costs.
Abstract: Resource-limited networks have various applications in our daily life. However, a challenging issue associated with these networks is a uniform load-balancing strategy to prolong their lifespan. In the literature, various schemes try to improve the scalability and reliability of the networks, but the majority of these approaches assume homogeneous networks. Moreover, most of the techniques use distance, residual energy and hop count values to balance the energy consumption of participating nodes and prolong the network lifetime. Therefore, an energy-efficient load-balancing scheme for heterogeneous wireless sensor networks (WSNs) needs to be developed. In this article, an energy gauge node (EGN) based communication infrastructure is presented to develop a uniform load-balancing strategy for resource-limited networks. EGN measures the residual energy of the participating nodes, i.e., C_i ∈ Network. Moreover, EGN nodes advertise hop-selection information in the network, which is used by ordinary nodes to update their routing tables; ordinary nodes use this information to unicast their collected data to the destination. EGN nodes work on a built-in configuration to categorize their neighboring nodes into powerful, normal and critical energy categories. EGN uses the strength of packet reply (SPR) and round-trip time (RTT) values to measure a neighboring node's residual energy (E_r), and those nodes which have maximum E_r values are advertised as reliable paths for communication. Furthermore, EGN transmits a route request (RREQ) in the network and receives a route reply (RREP) from every node residing in its close proximity, which is used to compute the E_r values of the neighboring nodes. If the E_r value of a neighboring node is less than the defined category threshold value, then this node is advertised as unavailable for communication as a relaying node. The simulation results show that our proposed scheme surpasses the existing schemes in terms of lifespan of individual nodes, throughput, packet loss ratio (PLR), latency, communication costs and computation costs. Moreover, our proposed scheme prolongs the lifespan of the WSN as well as of individual nodes compared with existing schemes in the operational environment.
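The EGN's neighbor classification can be sketched as thresholding an energy estimate derived from SPR and RTT; the estimator and thresholds below are hypothetical stand-ins for the paper's built-in configuration.

```python
def estimate_residual_energy(rtt_ms, spr):
    """Hypothetical E_r estimator: stronger packet replies (SPR in 0..1)
    and faster round trips suggest a healthier node. Returns a 0..1 score."""
    return max(0.0, min(1.0, 0.6 * spr + 0.4 * (1.0 - rtt_ms / 100.0)))

def categorize(neighbors, powerful=0.7, normal=0.3):
    """Split neighbors into the EGN's three energy categories; nodes in
    the 'critical' category are not advertised as relays."""
    table = {}
    for node, (rtt, spr) in neighbors.items():
        e_r = estimate_residual_energy(rtt, spr)
        if e_r >= powerful:
            table[node] = "powerful"
        elif e_r >= normal:
            table[node] = "normal"
        else:
            table[node] = "critical"        # withheld from relaying
    return table

print(categorize({"n1": (10, 0.9), "n2": (60, 0.5), "n3": (95, 0.1)}))
```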

Journal ArticleDOI
TL;DR: A genetic algorithm (GA) based intelligent latency-aware resource allocation scheme (GI-LARE) is proposed; Monte Carlo simulations show that it outperforms static slicing resource allocation, a spatial branch-and-bound-based scheme, and an optimal resource allocation algorithm (ORA).
Abstract: In 5G slice networks, the multi-tenant, multi-tier heterogeneous network will be critical in meeting the quality of service (QoS) requirements of the different slice use cases and in reducing the capital expenditure (CAPEX) and operational expenditure (OPEX) of mobile network operators. Hence, 5G slice networks should be as flexible as possible to accommodate different network dynamics such as user location and distribution, different slice use case QoS requirements, cell load, intra-cluster interference, delay bound, packet loss probability, and the service level agreements (SLA) of mobile virtual network operators (MVNO). Motivated by this, this paper addresses a latency-aware dynamic resource allocation problem for 5G slice networks in a multi-tenant, multi-tier heterogeneous environment, for efficient radio resource management. The latency-aware dynamic resource allocation problem is formulated as a maximum-utility optimisation problem. The optimisation problem is transformed, and a hierarchical decomposition technique is adopted to reduce the complexity of solving it. Furthermore, we propose a genetic algorithm (GA) based intelligent latency-aware resource allocation scheme (GI-LARE). We compare GI-LARE with static slicing (SS) resource allocation, a spatial branch-and-bound-based scheme, and an optimal resource allocation algorithm (ORA) via Monte Carlo simulation. Our findings reveal that GI-LARE outperforms these other schemes.

Journal ArticleDOI
TL;DR: The finite-iteration tracking problem for singular coupled systems is discussed, with iterative learning controllers designed to handle the case of packet losses.
Abstract: The finite-iteration tracking of singular coupled systems with packet-dropping learning controllers is discussed in this paper. The singular coupled systems are constructed to describe systems with algebraic constraints, and iterative learning controllers are then designed to achieve the finite-iteration tracking of singular systems. The definition of finite-iteration tracking is first proposed, and the settling iteration is explicitly calculated. Moreover, the iterative learning controllers are designed to handle the case of packet losses. Simulation results are given to demonstrate the correctness of the given theorems.

Journal ArticleDOI
TL;DR: An improved Rider Optimization Algorithm termed Bypass-Linked Attacker Update-based ROA (BAU-ROA) is used both for the optimal DBN and for optimal shortest-path selection, and the results validate the fruitful performance of the proposed model.

Journal ArticleDOI
TL;DR: FloodDefender is presented, an efficient and protocol-independent defense framework for SDN/OpenFlow networks that can precisely identify and efficiently mitigate the SDN-aimed DoS attacks with very little overhead.
Abstract: The introduction of software-defined networking (SDN) has emerged as a new network paradigm for network innovations. By decoupling the control plane from the data plane in traditional networks, SDN provides high programmability to control and manage networks. However, the communication between the two planes can be a bottleneck of the whole network. SDN-aimed DoS attacks can cause long packet delay and high packet loss rate by using massive table-miss packets to jam links between the two planes. To detect and mitigate SDN-aimed DoS attacks, this paper presents FloodDefender, an efficient and protocol-independent defense framework for SDN/OpenFlow networks. FloodDefender stands between the controller platform and other controller apps, and conforms to the OpenFlow policy without additional devices. The detection module in FloodDefender utilizes new frequency features to precisely identify SDN-aimed DoS attacks. The mitigation module uses three new techniques to efficiently mitigate attack traffic: table-miss engineering to prevent the communication bandwidth from being exhausted; packet filter to filter out attack traffic and save computational resources of the control plane; and flow rule management to eliminate most of useless flow entries in the switch flow table. Our evaluation on a prototype implementation of FloodDefender shows that the defense framework can precisely identify and efficiently mitigate the SDN-aimed DoS attacks with very little overhead.
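The detection side of such a framework reduces, at its simplest, to watching the table-miss (packet-in) rate arriving at the controller; the sliding-window monitor below is my own simplified sketch, not FloodDefender's actual frequency features.

```python
import time
from collections import deque

class PacketInMonitor:
    """Flag a possible SDN-aimed DoS attack when the table-miss
    packet-in rate over a sliding window exceeds a threshold."""
    def __init__(self, window_s=1.0, threshold_pps=500):
        self.window_s = window_s
        self.threshold = threshold_pps
        self.events = deque()                 # timestamps of packet-in events

    def on_packet_in(self, now=None):
        """Record one packet-in event; return True if the current
        windowed rate looks like an attack."""
        now = time.monotonic() if now is None else now
        self.events.append(now)
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()             # expire old events
        return len(self.events) / self.window_s > self.threshold
```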

Journal ArticleDOI
TL;DR: It is theoretically proven that a refined time-delay function can reduce conservatism, and Lyapunov stability theory is utilized to obtain limits on the frequency and duration of packet losses that guarantee the consensus of multiagent systems.
Abstract: This paper considers the sampled-data-based consensus problem of multiagent systems with time-varying delay and packet losses. To distinguish the time delays caused by network-induced delay from those caused by packet losses, a switched-system formulation is utilized. A low-gain controller is designed based on the solution of a parametric algebraic Riccati equation. Lyapunov stability theory is utilized to obtain the limits on the frequency and duration of packet losses that guarantee the consensus of the multiagent systems. On the other hand, we theoretically prove that the refined time-delay function can reduce the conservatism. A simulation example is given to illustrate the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: A communication compensation block (CCB) is proposed to enhance the robustness of distributed SC against communication impairments in MGs operated at higher BW, and power hardware-in-the-loop experimental tests show the merits and applicability of the proposed method.
Abstract: The dynamic response speed of the secondary control (SC) in microgrids (MGs) is limited both by the communication network time delays and the bandwidth (BW) of the primary control (PC). By increasing the PC BW, which depends on how to design and tune the PC, the time delay issues will be multiplied due to the speed range of the communication modules and technologies. This article proposes a communication compensation block (CCB) to enhance the robustness of distributed SC against communication impairments in the MGs operated at the higher BW. The proposed method mitigates malicious time delays and communication non-ideality in distributed networked controls employed in the secondary layer of the MG by prediction, estimation and finally decision on transmitted data. A comprehensive mathematical model of the employed communication network is presented in details. Then, a robust data prediction algorithm based on the temporal and spatial correlation is applied into the SC to compensate for time delays and data packet loss. Furthermore, the effect of the number of stored packets and burst packet loss on the average success rate of the communication block, and the small signal stability analysis of the system in the presence of the CCB are investigated. Power hardware-in-the-loop (PHiL) experimental tests show the merits and applicability of the proposed method.

Journal ArticleDOI
TL;DR: This work reveals theoretically and empirically that controlling the IP packet size is much more effective in avoiding Incast than cutting the congestion window under severe congestion, and designs a general supporting scheme, Packet Slicing, which adjusts the IP packet size on widely used commodity switches.
Abstract: In data center networks, a large number of concurrent TCP connections suffer the TCP Incast throughput collapse due to packet drops in shallow-buffered Ethernet switches. In this work, we first reveal theoretically and empirically that controlling the IP packet size is much more effective in avoiding Incast than cutting the congestion window under severe congestion. We further design a general supporting scheme, Packet Slicing, which adjusts the IP packet size on widely used commodity switches. The design uses standard ICMP signaling, which requires no modification to TCP protocols and can be transparently utilized by various TCP protocols. To alleviate the impact of micro-bursts caused by high flow concurrency, we utilize the TCP Pacing scheme to disperse packets over the round-trip time, helping Packet Slicing to support more concurrent TCP flows. We integrate Packet Slicing with three state-of-the-art data center TCP protocols in NS2 simulation and on a physical testbed. The experimental results show that Packet Slicing broadly improves the goodput of different data center TCP protocols by 26x on average, while having almost no effect on the I/O performance of switches and end hosts.
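The pacing half of the design is easy to state: rather than bursting a whole window, the sender spaces packets evenly across the RTT, and slicing packets smaller multiplies the packet count per window, so each burst gets gentler. A back-of-the-envelope sketch (numbers mine):

```python
def pacing_gap_us(cwnd_bytes, packet_bytes, rtt_us):
    """Inter-packet gap that spreads one congestion window evenly
    over one RTT (classic TCP pacing)."""
    packets_per_rtt = max(1, cwnd_bytes // packet_bytes)
    return rtt_us / packets_per_rtt

# Slicing 1500 B packets down to 500 B triples the packet count per
# window, so the same window drains in smaller, gentler bursts.
print(pacing_gap_us(64_000, 1500, 100))   # ~2.38 us between packets
print(pacing_gap_us(64_000, 500, 100))    # ~0.78 us between packets
```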

Journal ArticleDOI
TL;DR: The present study proposes a scheme named L2RMR, comprising a novel Objective Function (OF) and a new routing metric based on the minimization of path routes, which enhances the average Packet Loss Ratio, End-to-End Delay, and energy consumption criteria in an RPL environment compared with other approaches.

Journal ArticleDOI
TL;DR: A brief review of the relationship between congestion control and ML, and the recent works that apply ML to congestion control that help the agent to make an intelligent congestion control decision or achieve enhanced performance.
Abstract: End-to-end congestion control has been extensively studied for over 30 years as one of the most important mechanisms to ensure efficient and fair sharing of network resources among users. As future networks are becoming more and more complex, conventional rule-based congestion control approaches tend to become inefficient and even ineffective. Inspired by the great success that machine learning (ML) has achieved in addressing large-scale and complex problems, researchers have begun to shift their attention from the rule-based method to an ML-based approach. This article presents a selected review of the recent applications of ML to the field of end-to-end congestion control. In this survey, we start with a brief review of the relationship between congestion control and ML. We then review the recent works that apply ML to congestion control. These works either help the agent to make an intelligent congestion control decision or achieve enhanced performance. Finally, we highlight a series of realistic challenges and shed light on potential future research directions.
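A representative instance of the ML-based approach the survey covers is a reinforcement learning agent mapping measured statistics to a rate adjustment; the skeleton below is illustrative only (state, action, and reward definitions are my assumptions, loosely in the spirit of PCC/Aurora-style work).

```python
import random
from collections import defaultdict

ACTIONS = [-0.25, 0.0, 0.25]          # multiplicative sending-rate change

class RLRateController:
    """Toy Q-learning congestion controller: the reward should favor
    throughput and penalize delay and loss."""
    def __init__(self, alpha=0.2, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    @staticmethod
    def state(throughput, delay_ms, loss):
        # coarse discretization keeps the Q-table tiny
        return (int(throughput // 10), int(delay_ms // 20), loss > 0.01)

    def act(self, s):
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(s, a)])

    def learn(self, s, a, reward, s_next):
        best = max(self.q[(s_next, b)] for b in ACTIONS)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best - self.q[(s, a)])

# One training step: observe stats, act, then reward the transition.
ctl = RLRateController()
s = ctl.state(95.0, 35.0, 0.0)
a = ctl.act(s)
ctl.learn(s, a, reward=95.0 - 0.5 * 35.0, s_next=ctl.state(100.0, 40.0, 0.0))
```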

Journal ArticleDOI
TL;DR: A dynamic Multi-hop Energy Efficient Routing Protocol (DMEERP) is proposed to balance the path reliability ratio and energy consumption; the energy model is implemented based on a channel capacity model.

Journal ArticleDOI
06 May 2020-Sensors
TL;DR: This study proposes two SF allocation schemes to enhance the packet success ratio by lowering the impact of interference, and shows that the SFs are adaptively applied to each ED and that the proposed schemes enhance the packet success delivery ratio compared with typical SF allocation schemes.
Abstract: A long-range wide area network (LoRaWAN) adapts the ALOHA network concept for channel access, resulting in packet collisions caused by intra- and inter-spreading factor (SF) interference. This leads to a high packet loss ratio. In LoRaWAN, each end device (ED) increments the SF after every two consecutive failed retransmissions, thus forcing the EDs to use a high SF. When numerous EDs switch to the highest SF, the network loses its advantage of orthogonality. Thus, the collision probability of the ED packets increases drastically. In this study, we propose two SF allocation schemes to enhance the packet success ratio by lowering the impact of interference. The first scheme, called the channel-adaptive SF recovery algorithm, increments or decrements the SF based on the retransmission of the ED packets, indicating the channel status in the network. The second approach allocates SF to EDs based on ED sensitivity during the initial deployment. These schemes are validated through extensive simulations by considering the channel interference in both confirmed and unconfirmed modes of LoRaWAN. Through simulation results, we show that the SFs have been adaptively applied to each ED, and the proposed schemes enhance the packet success delivery ratio as compared to the typical SF allocation schemes.
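The channel-adaptive SF recovery rule can be captured in a few lines; the two-consecutive-failures trigger follows the LoRaWAN convention described in the abstract, while the decrement-on-success step is my own simplification of the proposed scheme.

```python
SF_MIN, SF_MAX = 7, 12          # LoRaWAN spreading factor range

def adapt_sf(sf, consecutive_failures, delivered):
    """Channel-adaptive SF recovery (simplified): back off to a higher
    (more robust) SF after two consecutive failed retransmissions, and
    creep back toward a lower (faster) SF after a success."""
    if delivered:
        return max(SF_MIN, sf - 1), 0
    consecutive_failures += 1
    if consecutive_failures >= 2:
        return min(SF_MAX, sf + 1), 0
    return sf, consecutive_failures

sf, fails = 9, 0
for outcome in [False, False, True, True, False]:   # per-uplink ACK feedback
    sf, fails = adapt_sf(sf, fails, outcome)
    print(sf, fails)
```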