
Showing papers on "Packet loss published in 2016"


Journal ArticleDOI
TL;DR: Simulation results show that the proposed strategy significantly improves delay, throughput, and packet loss ratio in comparison with other congestion control strategies.
Abstract: In an urban environment, intersections are critical locations in terms of road crashes and the number of people killed or injured. Vehicular ad hoc networks (VANETs) can help reduce traffic collisions at intersections by sending warning messages to vehicles. However, the performance of VANETs should be enhanced to guarantee delivery of messages, particularly safety messages, to the destination. Data congestion control is an efficient way to decrease packet loss and delay and increase the reliability of VANETs. In this paper, a centralized and localized data congestion control strategy is proposed to control data congestion using roadside units (RSUs) at intersections. The proposed strategy consists of three units for detecting congestion, clustering messages, and controlling data congestion. In this strategy, the channel usage level is measured to detect data congestion in the channels. The messages are gathered, filtered, and then clustered by machine learning algorithms. A K-means algorithm clusters the messages based on message size, validity of messages, and type of messages. The data congestion control unit determines appropriate values of transmission range and rate, contention window size, and arbitration interframe spacing for each cluster. Finally, RSUs at the intersections send the determined communication parameters to the vehicles stopped before red traffic lights to reduce communication collisions. Simulation results show that the proposed strategy significantly improves delay, throughput, and packet loss ratio in comparison with other congestion control strategies.
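The clustering step described above can be sketched with a minimal K-means over numeric message features. This is an illustrative reconstruction, not the paper's implementation: the feature encoding (size in bytes, validity in ms, a numeric type code) and the cluster count are assumptions.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means over message feature tuples."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initialize from the data
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centroids[c])))
        # update step: each centroid becomes the mean of its members
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return assign

# hypothetical messages: (size in bytes, validity in ms, type code: 0=safety, 1=service)
msgs = [(100, 50, 0), (110, 55, 0), (800, 500, 1), (820, 480, 1)]
labels = kmeans(msgs, k=2)
```

Each resulting cluster would then be assigned its own transmission range/rate, contention window size, and AIFS values.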

174 citations


Journal ArticleDOI
TL;DR: A priority-based frame selection scheme to suppress the number of redundant data transmissions between sensor nodes and the UAV and a novel routing protocol to reduce the transmission distances between senders and receivers is presented.
Abstract: This paper proposes a novel data acquisition framework in sensor networks using an unmanned aerial vehicle (UAV) with the goal of increasing the efficiency of the data gathering efforts. To maximize the system throughput, we introduce a priority-based frame selection scheme to suppress the number of redundant data transmissions between sensor nodes and the UAV. Toward this goal, we classify the nodes inside the UAV's coverage area into different frames according to their locations. Taking advantage of the mobility of the UAV, we assign different transmission priorities to nodes in different frames. To do that, we introduce an adjustment to the contention window value used in IEEE 802.11 MAC, thereby defining a lower contention window range to the frame with higher priority (urgent area) and a higher contention window range to the frame with lower priority (less important area). The proposed framework leads to a reduction in packet collisions and, at the same time, minimizes the packet loss originated from nodes in the rear-side of the UAV when the UAV moves in the forward direction. To optimize the networks' energy consumption, we present a novel routing protocol based on the aforementioned framework. By leveraging the proposed framework and routing algorithm, we aim to reduce the transmission distances between senders and receivers. A shorter distance leads to better channel quality and energy savings, as is verified by our simulation studies and results.
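The priority mechanism above amounts to giving each frame its own contention window range in the 802.11 MAC. A minimal sketch, assuming hypothetical CW ranges and standard binary exponential backoff; the actual values used in the paper may differ:

```python
import random

# hypothetical CW ranges per frame priority (illustrative, not from the paper)
CW_RANGES = {
    "high":   (7, 15),    # frames in the urgent area: smallest backoff, transmit first
    "medium": (15, 31),
    "low":    (31, 63),   # rear-side frames: largest backoff, lowest priority
}

def backoff_slots(priority, retries, rng):
    """Draw a backoff count; the window doubles per retry, capped at cw_max."""
    cw_min, cw_max = CW_RANGES[priority]
    cw = min((cw_min + 1) * (2 ** retries) - 1, cw_max)
    return rng.randint(0, cw)
```

Nodes in high-priority frames draw shorter backoffs on average and therefore win the channel before rear-side nodes leave the UAV's coverage.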

148 citations


Proceedings ArticleDOI
11 Apr 2016
TL;DR: Although DDS results in higher bandwidth usage than MQTT, its superior performance with regard to data latency and reliability makes it an attractive choice for medical IoT applications and beyond.
Abstract: One of the challenges faced by today's Internet of Things (IoT) is to efficiently support machine-to-machine communication, given that the remote sensors and the gateway devices are connected through low bandwidth, unreliable, or intermittent wireless communication links. In this paper, we quantitatively compare the performance of IoT protocols, namely MQTT (Message Queuing Telemetry Transport), CoAP (Constrained Application Protocol), DDS (Data Distribution Service) and a custom UDP-based protocol in a medical setting. The performance of the protocols was evaluated using a network emulator, allowing us to emulate a low bandwidth, high system latency, and high packet loss wireless access network. This paper reports the observed performance of the protocols and arrives at the conclusion that although DDS results in higher bandwidth usage than MQTT, its superior performance with regard to data latency and reliability makes it an attractive choice for medical IoT applications and beyond.

143 citations


Journal ArticleDOI
TL;DR: It is shown that, unlike many other cases such as intermittent observations or TCP-like systems, the system state follows a Gaussian mixture distribution with exponentially increasing terms, which leads to a Gaussian sum filter-based optimal estimation.
Abstract: We investigate the optimal estimation problem in lossy networked control systems where the control packets are randomly dropped without acknowledgment to the estimator. Most existing results for this setup are concerned with the design of controller, while the optimal estimation and its performance evaluation have been rarely treated. In this paper, we show that, unlike many other cases such as intermittent observations or TCP-like systems, the system state follows a Gaussian mixture distribution with exponentially increasing terms, which leads to a Gaussian sum filter-based optimal estimation. We develop an auxiliary estimator method to establish necessary and sufficient conditions for the stability of the mean estimation error covariance matrices. It is revealed that the stability is independent of the packet loss rate, and is not affected by the lack of acknowledgment. A suboptimal filtering algorithm with improved computational efficiency is then developed. Numerical examples and simulations are employed to illustrate the theoretical results.

131 citations


Journal ArticleDOI
TL;DR: TCP-QS (Quick Start), a modification of TCP congestion control, is put forward in this paper in order to improve the performance of the congestion control mechanism.
Abstract: With the development of the Internet, new kinds of applications appear constantly. They place high requirements on delay and throughput, especially strongly real-time applications such as mobile monitoring and video calls. The satellite network in a Navigation Satellite System, which is necessary for mobile monitoring, has many disadvantages, such as asymmetric bandwidth, an unstable network, and a high bit error rate. This poses a new challenge to existing congestion control methods. In order to improve the performance of the congestion control mechanism, we put forward TCP-QS (Quick Start), a modification of TCP congestion control, in this paper. The TCP-QS algorithm mainly optimizes the slow start stage. At the beginning of the connection, the congestion window cwnd is set to a larger value according to the detected network bandwidth, which shortens the slow start stage of the transmission; the value of ssthresh is then adjusted dynamically according to changes in the network. When packet loss occurs, the algorithm reacts differently according to the cause of the loss.
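A rough sketch of the two ideas described for TCP-QS: seeding cwnd from the detected bandwidth-delay product, and reacting to loss according to its inferred cause. The constants and the mild 0.8 back-off for link-error losses are illustrative assumptions, not values from the paper:

```python
def initial_cwnd(est_bandwidth_bps, rtt_s, mss=1460):
    """Start cwnd near the detected bandwidth-delay product, in MSS-sized segments."""
    bdp_bytes = (est_bandwidth_bps / 8) * rtt_s
    return max(2, int(bdp_bytes / mss))

def on_packet_loss(cwnd, ssthresh, congestion_loss):
    """Illustrative policy: back off hard only for congestion losses, mildly for
    link-error losses (common over high-BER satellite links)."""
    if congestion_loss:
        ssthresh = max(2, cwnd // 2)   # buffer overflow: multiplicative decrease
        cwnd = ssthresh
    else:
        cwnd = max(2, int(cwnd * 0.8)) # bit errors: avoid over-reacting
    return cwnd, ssthresh
```

For a 10 Mb/s satellite link with a 500 ms RTT, the bandwidth-delay product is large, so starting near it skips most of the slow start phase.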

113 citations


Journal ArticleDOI
TL;DR: This study presents a novel quAlity-Driven MultIpath TCP (ADMIT) scheme that integrates the utility maximization based Forward Error Correction (FEC) coding and rate allocation and develops an analytical framework to model the MPTCP-based video delivery quality over multiple communication paths.
Abstract: The proliferating wireless infrastructures with complementary characteristics prompt bandwidth aggregation for concurrent video transmission in heterogeneous access networks. Multipath TCP (MPTCP) is an important transport-layer protocol recommended by the IETF to integrate different access media (e.g., cellular and Wi-Fi). This paper investigates the problem of mobile video delivery using MPTCP in heterogeneous wireless networks with multihomed terminals. To achieve optimal quality for real-time video streaming, we have to seriously consider the path asymmetry of different access networks and the disadvantages of the data retransmission mechanism in MPTCP. Motivated by these critical issues, this study presents a novel quAlity-Driven MultIpath TCP (ADMIT) scheme that integrates utility-maximization-based Forward Error Correction (FEC) coding and rate allocation. We develop an analytical framework to model MPTCP-based video delivery quality over multiple communication paths. ADMIT is able to effectively integrate the most reliable access networks with FEC coding to minimize the end-to-end video distortion. The performance of ADMIT is evaluated through extensive semi-physical emulations in Exata involving H.264 video streaming. Experimental results show that ADMIT outperforms the reference transport protocols in terms of video PSNR (Peak Signal-to-Noise Ratio), end-to-end delay, and goodput. Thus, we recommend ADMIT for streaming high-quality mobile video in heterogeneous wireless networks with multihomed terminals.

109 citations


Proceedings ArticleDOI
22 Aug 2016
TL;DR: It is shown that call performance can potentially improve by 40%-80% on average, with the proposed data-driven relaying techniques closely matching this potential, in the context of the well-provisioned, managed network of a cloud provider rather than the peer-to-peer overlays considered in past work.
Abstract: Interactive real-time streaming applications such as audio-video conferencing, online gaming and app streaming, place stringent requirements on the network in terms of delay, jitter, and packet loss. Many of these applications inherently involve client-to-client communication, which is particularly challenging since the performance requirements need to be met while traversing the public wide-area network (WAN). This is different from the typical situation of cloud-to-client communication, where the WAN can often be bypassed by moving a communication end-point to a cloud “edge”, close to the client. Can we nevertheless take advantage of cloud resources to improve the performance of real-time client-to-client streaming over the WAN? In this paper, we start by analyzing data from a large VoIP provider whose clients are spread across over 21,000 AS’es and nearly all the countries, to understand the challenges faced by interactive audio streaming in the wild. We find that while inter-AS and international paths exhibit significantly worse performance than intra-AS and domestic paths, the pattern of poor performance is nevertheless quite scattered, both temporally and spatially. So any effort to improve performance would have to be fine-grained and dynamic. Then, we turn to the idea of overlay routing, but in the context of the well-provisioned, managed network of a cloud provider rather than peer-to-peer as has been considered in past work. Such a network typically has a global footprint and peers with a large number of network providers. When the performance of a call via the direct path is predicted to be poor, the call traffic could be directed to enter the managed network close to one end point and exit it close to the other end point, thereby avoiding wide-area communication over the public Internet. 
We present and evaluate data-driven techniques for deciding whether to relay a call through the managed network and, if so, how to pick the ingress and egress relays to maximize performance, all while operating within a budget for relaying calls via the managed overlay network. We show that call performance can potentially improve by 40%-80% on average, with our techniques closely matching this potential.

101 citations


Journal ArticleDOI
TL;DR: A channel-aware reputation system with adaptive detection threshold (CRS-A) to detect selective forwarding attacks in WSNs and identify the compromised sensor nodes is proposed, while the attack-tolerant data forwarding scheme can significantly improve the data delivery ratio of the network.
Abstract: Wireless sensor networks (WSNs) are vulnerable to selective forwarding attacks that can maliciously drop a subset of forwarding packets to degrade network performance and jeopardize the information integrity. Meanwhile, due to the unstable wireless channel in WSNs, the packet loss rate during the communication of sensor nodes may be high and vary from time to time. It poses a great challenge to distinguish the malicious drop and normal packet loss. In this paper, we propose a channel-aware reputation system with adaptive detection threshold (CRS-A) to detect selective forwarding attacks in WSNs. The CRS-A evaluates the data forwarding behaviors of sensor nodes, according to the deviation of the monitored packet loss and the estimated normal loss. To optimize the detection accuracy of CRS-A, we theoretically derive the optimal threshold for forwarding evaluation, which is adaptive to the time-varied channel condition and the estimated attack probabilities of compromised nodes. Furthermore, an attack-tolerant data forwarding scheme is developed to collaborate with CRS-A for stimulating the forwarding cooperation of compromised nodes and improving the data delivery ratio of the network. Extensive simulation results demonstrate that CRS-A can accurately detect selective forwarding attacks and identify the compromised sensor nodes, while the attack-tolerant data forwarding scheme can significantly improve the data delivery ratio of the network.
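The core of the detection rule, comparing the deviation between the monitored loss and the channel-estimated normal loss against a threshold, can be sketched as follows. The reputation update rule and its constants are illustrative assumptions; the paper derives the optimal threshold analytically and adapts it to channel conditions rather than fixing it:

```python
def evaluate_forwarding(observed_loss, estimated_normal_loss, threshold):
    """Flag a forwarding node whose monitored packet loss exceeds the
    channel-expected loss by more than the detection threshold."""
    return (observed_loss - estimated_normal_loss) > threshold

def update_reputation(rep, misbehaved, reward=0.05, penalty=0.2):
    """Bounded reputation update in [0, 1] (constants are illustrative)."""
    rep = rep - penalty if misbehaved else rep + reward
    return min(1.0, max(0.0, rep))
```

A node whose observed loss matches the channel estimate keeps (or rebuilds) its reputation, which is what allows normal lossy links to be distinguished from selective forwarding.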

100 citations


Journal ArticleDOI
TL;DR: In the proposed control strategy, a novel state space model is introduced, where, unlike the conventional state space models, the tracking error and the state variables are combined and optimized together.
Abstract: In order to deal with networked control systems (NCSs) under random packet loss and uncertainties, an improved model predictive tracking control is provided in this paper. In the proposed control strategy, a novel state space model is introduced where, unlike in conventional state space models, the tracking error and the state variables are combined and optimized together. Based on the improved state space model, more degrees of design freedom and better control performance can be obtained. A classical angular positioning system with uncertainties and an NCS with packet loss are introduced to illustrate the effectiveness of the proposed model predictive tracking control strategy; the conventional model predictive control (MPC) approach is included as a comparison.

98 citations


Journal ArticleDOI
TL;DR: A novel energy-efficient adaptive power control (APC) algorithm is proposed that adaptively adjusts the transmission power (TP) level based on feedback from the base station, achieving significantly higher energy savings, as demonstrated by Monte Carlo simulations in MATLAB.
Abstract: An important constraint in wireless body area networks (WBANs) is to maximise the energy efficiency of wearable devices due to their limited size and light weight. Two experimental scenarios, 'right wrist to right hip' and 'chest to right hip', are considered with a walking body posture. Analysis of extensive real-time data sets shows that, due to large temporal variations in the wireless channel, neither a constant transmission power nor typical conventional transmission power control (TPC) methods are suitable choices for WBAN. To overcome these problems, a novel energy-efficient adaptive power control (APC) algorithm is proposed that adaptively adjusts the transmission power (TP) level based on feedback from the base station. The main advantages of the proposed algorithm are greater energy savings with an acceptable packet loss ratio (PLR) and lower implementation complexity for the desired tradeoff between energy savings and link reliability. We adapt, optimise and theoretically analyse the required parameters to enhance system performance. The proposed algorithm achieves significantly higher energy savings of 40.9%, as demonstrated by Monte Carlo simulations in MATLAB. However, the only limitation of the proposed algorithm is a slightly higher PLR in comparison to conventional TPC methods such as Gao's and Xiao's.
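The feedback loop of such a power control scheme can be sketched as a simple hysteresis controller: step the TP level down while the received signal has margin, and up when it falls below a target window. The TP levels and RSSI thresholds below are hypothetical, not taken from the paper:

```python
TP_LEVELS = [-25, -20, -15, -10, -5, 0]   # hypothetical TP levels in dBm

def adapt_power(level_idx, rssi_dbm, target_low=-88, target_high=-82):
    """Step the TP level down when the link has margin (RSSI above the target
    window), up when it is weak (RSSI below the window); otherwise hold."""
    if rssi_dbm > target_high and level_idx > 0:
        return level_idx - 1          # save energy
    if rssi_dbm < target_low and level_idx < len(TP_LEVELS) - 1:
        return level_idx + 1          # protect link reliability
    return level_idx
```

Holding inside the target window avoids oscillating between levels as the on-body channel fades with each step of the walking posture.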

95 citations


Journal ArticleDOI
TL;DR: Simulation results suggest that the proposed networked control scheme is superior to other design methods available in the literature, and thus robust to communication imperfections.
Abstract: In this paper, the quality of service of the communication infrastructure implemented in a multiarea power system for load frequency control is assessed in a smart grid environment. Network-induced effects, namely time delay, packet loss, bandwidth, quantization, and changes in communication topology (CT), are addressed to examine the closed-loop system performance. Uncertainty in time delay is approximated using deterministic and stochastic models. The network is modeled considering different network configurations, i.e., changes in CT. The modeled communication network guarantees control-relevant properties, i.e., stability. A decentralized controller and a linear matrix inequality-based linear quadratic regulator are implemented to reduce the mean square error of the state variables of the power system as the CT changes. Simulation results suggest that the proposed networked control scheme is superior to other design methods available in the literature, and thus robust to communication imperfections.

Journal ArticleDOI
TL;DR: A routing protocol for Emergency Response IoT based on Global Information Decision (ERGID) is proposed to improve reliable data transmission and efficient emergency response in IoT, and a mechanism called the Delay Iterative Method (DIM), based on delay estimation, is designed and realized to solve the problem of ignoring valid paths.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This paper proposes infrastructure-based vehicle control system that shares internal states between edge and cloud servers, dynamically allocates computational resources and switches necessary computation on collected sensors according to network conditions in order to achieve safe driving.
Abstract: One of the challenges in autonomous driving is the limited sensing of a single vehicle, which causes spurious warnings and dead-lock situations. We posit that a cloud-based vehicle control system [1] is promising when a number of vehicles must be controlled, since we can collect information from sensors across multiple vehicles for coordination. However, since cloud-based control faces an inherent challenge in long-haul communication, which is susceptible to prolonged latency and packet loss caused by congestion, mobile edge computing (MEC) [2] has recently attracted attention in ITS over next-generation mobile networks such as 5G. Although edge servers can process data from the vehicles with ultra-low latency in MEC, their computational resources are limited compared to the cloud. Therefore, dynamic resource allocation and coordination between edge and cloud servers are necessary. In this paper, we propose an infrastructure-based vehicle control system that shares internal states between edge and cloud servers, dynamically allocates computational resources, and switches the necessary computation on collected sensor data according to network conditions in order to achieve safe driving. We implement a prototype system using micro-cars and evaluate the stability of infrastructure-based vehicle control. We show that the proposed system mitigates the instability of cloud control caused by latency fluctuation. When controlled from the cloud with 150 ms latency, micro-cars deviate by over 0.095 m from the course for 40% of the entire trajectory, possibly causing accidents; MEC-based control, in contrast, stabilizes the driving trajectory. Our proposed system also automatically switches control between the cloud and the edge server according to network conditions without degrading the stability of the driving trajectory. Even when the ratio of control time by the edge server to that by the cloud is suppressed to 54%, we achieve almost the same stability as under full control by the edge controller.

Journal ArticleDOI
Tie Qiu, Xize Liu, Lin Feng, Yu Zhou, Kaiyu Zheng
TL;DR: Simulation results show that the proposed protocol can construct a reliable tree-based network quickly, and the packet success rate of ETSP is much higher than that of ad hoc on-demand distance vector routing and destination-sequenced distance vector routing.
Abstract: Tree networks are widely applied in sensor networks of the Internet of Things (IoT). This paper proposes an efficient tree-based self-organizing protocol (ETSP) for sensor networks of IoT. In ETSP, all nodes are divided into two kinds: network nodes and non-network nodes. Network nodes broadcast packets to their neighboring nodes. Non-network nodes collect the broadcast packets and determine whether to join the network. During the self-organizing process, we use several metrics, namely the number of child nodes, hop count, communication distance, and residual energy, to compute the weight of available sink nodes; the node with the maximum weight is selected as the sink node. Non-network nodes become network nodes when they join the network successfully. A tree-based network is thus built layer by layer. The topology is adjusted dynamically to balance energy consumption and prolong network lifetime. We conduct experiments with NS2 to evaluate ETSP. Simulation results show that our proposed protocol can construct a reliable tree-based network quickly. As the network scale increases, the self-organization time, average hop count, and packet loss ratio do not increase significantly. Furthermore, the packet success rate of ETSP is much higher than that of ad hoc on-demand distance vector routing and destination-sequenced distance vector routing.
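The sink-selection step can be sketched as a weighted score over the four metrics named above. The weight vector and the way each metric is normalized are illustrative assumptions; ETSP's exact weight formula is not reproduced here:

```python
def sink_weight(children, hop, distance, residual_energy,
                w=(0.2, 0.3, 0.2, 0.3)):
    """Score a candidate sink: fewer children, fewer hops and a shorter
    distance are better; more residual energy is better (weights illustrative)."""
    w1, w2, w3, w4 = w
    return (w1 / (1 + children) + w2 / (1 + hop)
            + w3 / (1 + distance) + w4 * residual_energy)

def choose_sink(candidates):
    """A non-network node joins the candidate sink with the maximum weight."""
    return max(candidates, key=lambda c: sink_weight(*c[1:]))

# hypothetical candidates: (node_id, children, hop, distance_m, residual_energy)
nodes = [("a", 4, 2, 30, 0.9), ("b", 1, 1, 10, 0.8)]
```

Here node "b" wins despite slightly lower energy, because it is less loaded, closer, and fewer hops from the root.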

Journal ArticleDOI
TL;DR: A clustering-tree topology control algorithm based on the energy forecast (CTEF) is proposed for saving energy and ensuring network load balancing, while considering the link quality, packet loss rate, etc.
Abstract: How to design an energy-efficient algorithm that maximizes the network lifetime in complicated scenarios is a critical problem for heterogeneous wireless sensor networks (HWSNs). In this paper, a clustering-tree topology control algorithm based on energy forecast (CTEF) is proposed to save energy and ensure network load balancing, while considering link quality, packet loss rate, etc. In CTEF, the average energy of the network is accurately predicted per round (the lifetime of the network is measured in rounds) from the difference between the ideal and actual average residual energy, using the central limit theorem and a normal distribution model. On this basis, cluster heads are selected by a cost function (covering energy, link quality, and packet loss rate) and their distance. Non-cluster-head nodes decide which cluster to join based on energy, distance, and link quality. Furthermore, several non-cluster-head nodes in each cluster are chosen as relay nodes to transmit data over multi-hop communication, decreasing the load of each cluster head and prolonging the lifetime of the network. The simulation results show the efficiency of CTEF. Compared with the low-energy adaptive clustering hierarchy (LEACH), energy dissipation forecast and clustering management (EDFCM), and efficient and dynamic clustering scheme (EDCS) protocols, CTEF achieves longer network lifetime and delivers more data packets to the base station.
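Cluster-head selection by a cost over energy, link quality, and packet loss rate can be sketched as below; the linear form and the weights are illustrative assumptions, not CTEF's actual cost function:

```python
def ch_cost(residual_energy, link_quality, packet_loss_rate,
            alpha=0.5, beta=0.3, gamma=0.2):
    """Lower cost -> better cluster-head candidate (weights are illustrative)."""
    return (alpha * (1 - residual_energy)
            + beta * (1 - link_quality)
            + gamma * packet_loss_rate)

def select_cluster_heads(nodes, k):
    """Pick the k lowest-cost nodes as cluster heads."""
    ranked = sorted(nodes, key=lambda n: ch_cost(n["energy"], n["lq"], n["plr"]))
    return [n["id"] for n in ranked[:k]]

# hypothetical nodes: residual energy, link quality and packet loss rate in [0, 1]
nodes = [{"id": "n1", "energy": 0.9, "lq": 0.8, "plr": 0.05},
         {"id": "n2", "energy": 0.4, "lq": 0.9, "plr": 0.10},
         {"id": "n3", "energy": 0.8, "lq": 0.5, "plr": 0.30}]
```

Penalizing low energy most heavily is one way to express the load-balancing goal; CTEF additionally factors in distance, which is omitted here.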

Proceedings ArticleDOI
06 Dec 2016
TL;DR: LossRadar is proposed, a system that can capture individual lost packets and their detailed information in the entire network on a fine time scale and is easy to implement in hardware switches, achieves low memory and bandwidth overhead, while providing detailed information about individuallost packets.
Abstract: Packet losses are common in data center networks, may be caused by a variety of reasons (e.g., congestion, blackhole), and have significant impacts on application performance and network operations. Thus, it is important to provide fast detection of packet losses independent of their root causes. We also need to capture both the locations and packet header information of the lost packets to help diagnose and mitigate these losses. Unfortunately, existing monitoring tools that are generic in capturing all types of network events often fall short in capturing losses fast with enough details and low overhead. Due to the importance of loss in data centers, we propose a specific monitoring system designed for loss detection. We propose LossRadar, a system that can capture individual lost packets and their detailed information in the entire network on a fine time scale. Our extensive evaluation on prototypes and simulations demonstrates that LossRadar is easy to implement in hardware switches, achieves low memory and bandwidth overhead, while providing detailed information about individual lost packets. We also build a loss analysis tool that demonstrates the usefulness of LossRadar with a few example applications.
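The essence of the approach, comparing what enters a link segment with what leaves it in the same measurement batch, can be sketched with plain counters. The real system encodes each batch into a compact digest on the switch so that memory stays small, rather than storing packet lists as done here; the packet identifiers below are hypothetical:

```python
from collections import Counter

def lost_packets(upstream_batch, downstream_batch):
    """Packets seen entering a segment but not leaving it in the same batch
    are reported as lost, keeping their identifiers for diagnosis."""
    return list((Counter(upstream_batch) - Counter(downstream_batch)).elements())

# hypothetical packet identifiers (e.g. flow id + IP ID)
up = ["f1:1", "f1:2", "f2:7", "f2:8"]
down = ["f1:1", "f2:7"]
```

Because the difference retains the identifiers of the missing packets, the operator learns not just how many packets were lost on the segment but which flows they belonged to.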

Journal ArticleDOI
TL;DR: This paper proposes a mitigation technique against black hole attack with low packet loss and high reliability, and demonstrates that the proposed approach increases packet delivery rate significantly and detects black holes attack effectively.
Abstract: The Routing Protocol for Low-Power and Lossy Networks (RPL) has been standardized by the Internet Engineering Task Force to efficiently manage the functions of the network layer when providing Internet connectivity for wireless sensor networks. RPL has been designed for constrained devices and networks. Owing to their constrained nature, RPL-based networks can be exposed to a wide variety of security attacks. One of the most serious attacks in RPL is the black hole attack, where a malicious node silently drops all the packets that it is supposed to forward. This paper proposes a mitigation technique against the black hole attack with low packet loss and high reliability. The proposed technique consists of a local decision process and a global verification process. First, each node observes the communication behavior of its neighboring nodes by overhearing packets transmitted by its neighbors and attempts to identify suspicious nodes based on their behavior. In the second process, if a node identifies a suspicious node, it verifies whether the suspicious node is a black hole. We demonstrate that the proposed approach increases the packet delivery rate significantly and detects the black hole attack effectively. Copyright © 2016 John Wiley & Sons, Ltd.
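The two-step structure (a local decision from overheard traffic, then a global verification) can be sketched as follows; the forwarding-ratio threshold and the majority rule are illustrative assumptions, not the paper's exact criteria:

```python
def local_decision(overheard_received, overheard_forwarded,
                   suspicion_threshold=0.5):
    """A node suspects a neighbor whose observed forwarding ratio is too low
    (the threshold is illustrative)."""
    if overheard_received == 0:
        return False                      # nothing overheard yet: no verdict
    return overheard_forwarded / overheard_received < suspicion_threshold

def global_verification(reports):
    """Illustrative global step: declare a black hole when a majority of the
    reporting neighbors also flagged the node."""
    return sum(reports) > len(reports) / 2
```

Requiring agreement among several observers is what keeps a single lossy link from condemning an honest node.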

Journal ArticleDOI
TL;DR: This paper studies and compares two classes of congestion control algorithms, reactive state-based and linear adaptive, identifies the root causes of instability in the former, and introduces stable reactive algorithms.
Abstract: Channel congestion is one of the major challenges for IEEE 802.11p-based vehicular networks. Unless controlled, congestion increases with vehicle density, leading to high packet loss and degraded safety application performance. We study two classes of congestion control algorithms, i.e., reactive state-based and linear adaptive. In this paper, the reactive state-based approach is represented by the decentralized congestion control framework defined in the European Telecommunications Standards Institute. The linear adaptive approach is represented by the LInear MEssage Rate Integrated Control (LIMERIC) algorithm. Both approaches control safety message transmissions as a function of channel load [i.e., channel busy percentage (CBP)]. A reactive state-based approach uses CBP directly, defining an appropriate transmission behavior for each CBP value, e.g., via a table lookup. By contrast, a linear adaptive approach identifies the transmission behavior that drives CBP toward a target channel load. Little is known about the relative performance of these approaches and any existing comparison is limited by incomplete implementations or stability anomalies. To address this, this paper makes three main contributions. First, we study and compare the two aforementioned approaches in terms of channel stability and show that the reactive state-based approach can be subject to major oscillation. Second, we identify the root causes and introduce stable reactive algorithms. Finally, we compare the performance of the stable reactive approach with the linear adaptive approach and the legacy IEEE 802.11p. It is shown that the linear adaptive approach still achieves a higher message throughput for any given vehicle density for the defined performance metrics.
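The linear adaptive idea can be illustrated with a LIMERIC-style update: each vehicle decays its own message rate slightly and adds a correction proportional to the gap between the target channel load and the measured aggregate load. The gains and the channel model below are illustrative, not the standardized parameters:

```python
def limeric_step(rate, measured_total, target_total, alpha=0.1, beta=0.01):
    """One linear-adaptive update: decay one's own rate slightly and add a
    correction toward the target aggregate rate (gains are illustrative)."""
    return max(0.0, (1 - alpha) * rate + beta * (target_total - measured_total))

# toy convergence check: N identical vehicles sharing one channel
N, target = 100, 400.0        # hypothetical target aggregate rate (messages/s)
rate = 1.0                    # per-vehicle message rate
for _ in range(200):
    rate = limeric_step(rate, N * rate, target)
total = N * rate              # settles a little below the target, by design
```

Because the update is a contraction, the aggregate rate converges smoothly to a fixed point just under the target instead of oscillating, which is exactly the stability property the reactive state-based table lookup lacks.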

Journal ArticleDOI
TL;DR: Experimental results show that CAASS can dynamically adjust the service level according to the environment variation and outperforms the existing streaming approaches in adaptive streaming media distribution in terms of peak signal-to-noise ratio (PSNR).
Abstract: We consider the problem of streaming media transmission in a heterogeneous network from a multisource server to multiple home terminals. In the wired network, transmission performance is limited by network state (e.g., bandwidth variation, jitter, and packet loss). In the wireless network, multiple user terminals can cause bandwidth competition. Thus, streaming media distribution in a heterogeneous network becomes a severe challenge, and QoS guarantees become critical. In this paper, we propose a context-aware adaptive streaming media distribution system (CAASS), which implements a context-aware module to perceive the environment parameters and uses a strategy analysis (SA) module to deduce the most suitable service level. This approach improves video quality in order to guarantee streaming QoS. We formulate the optimization problem of the relationship between QoS and the environment parameters based on the QoS testing algorithm for IPTV in ITU-T G.1070. We evaluate the performance of the proposed CAASS through 12 types of experimental environments using a prototype system. Experimental results show that CAASS can dynamically adjust the service level according to environment variation (e.g., network state and terminal performance) and outperforms existing streaming approaches in adaptive streaming media distribution in terms of peak signal-to-noise ratio (PSNR).

Journal ArticleDOI
TL;DR: This paper extensively evaluates the MLQoE using three unidirectional datasets containing VoIP calls over wireless networks under various network conditions and feedback from subjects, and performs a preliminary analysis to assess the generality of the methodology using bidirectional VoIP and video traces.
Abstract: The impact of the network performance on the quality of experience (QoE) for various services is not well-understood. Assessing the impact of different network and channel conditions on the user experience is important for improving the telecommunication services. The QoE for various wireless services including VoIP, video streaming, and web browsing, has been in the epicenter of recent networking activities. The majority of such efforts aim to characterize the user experience, analyzing various types of measurements often in an aggregate manner. This paper proposes the MLQoE, a modular algorithm for user-centric QoE prediction. The MLQoE employs multiple machine learning (ML) algorithms, namely, Artificial Neural Networks, Support Vector Regression machines, Decision Trees, and Gaussian Naive Bayes classifiers, and tunes their hyper-parameters. It uses the Nested Cross Validation (nested CV) protocol for selecting the best classifier and the corresponding best hyper-parameter values and estimates the performance of the final model. The MLQoE is conservative in the performance estimation despite multiple induction of models. The MLQoE is modular, in that, it can be easily extended to include other ML algorithms. The MLQoE selects the ML algorithm that exhibits the best performance and its parameters automatically given the dataset used as input. It uses empirical measurements based on network metrics (e.g., packet loss, delay, and packet interarrival) and subjective opinion scores reported by actual users. This paper extensively evaluates the MLQoE using three unidirectional datasets containing VoIP calls over wireless networks under various network conditions and feedback from subjects (collected in field studies). Moreover, it performs a preliminary analysis to assess the generality of our methodology using bidirectional VoIP and video traces. The MLQoE outperforms several state-of-the-art algorithms, resulting in fairly accurate predictions.
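The nested cross-validation protocol at the heart of MLQoE can be sketched on a toy QoE dataset: an inner loop picks the best hyper-parameter, and an outer loop estimates generalization error without letting that choice see the outer test fold. The k-NN stand-in model and the synthetic (packet loss, opinion score) data are assumptions for illustration; MLQoE tunes several ML algorithm families, not just one:

```python
import random
import statistics

def knn_predict(train, x, k):
    """Tiny 1-D k-nearest-neighbour regressor used as a stand-in QoE model."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return statistics.mean(y for _, y in nearest)

def cv_error(data, k, folds=3):
    """Mean absolute error of k-NN under simple cross-validation."""
    err = 0.0
    for f in range(folds):
        test = data[f::folds]
        train = [p for i, p in enumerate(data) if i % folds != f]
        err += sum(abs(knn_predict(train, x, k) - y) for x, y in test) / len(test)
    return err / folds

def nested_cv(data, k_grid=(1, 3, 5), outer_folds=3):
    """Outer loop estimates generalization error; the inner loop picks the best
    k, so the hyper-parameter choice never sees the outer test fold."""
    outer_err = []
    for f in range(outer_folds):
        test = data[f::outer_folds]
        train = [p for i, p in enumerate(data) if i % outer_folds != f]
        best_k = min(k_grid, key=lambda k: cv_error(train, k))   # inner CV
        outer_err.append(sum(abs(knn_predict(train, x, best_k) - y)
                             for x, y in test) / len(test))
    return statistics.mean(outer_err)

# synthetic (packet-loss %, opinion score) pairs: QoE drops with loss
rng = random.Random(0)
data = [(loss, max(1.0, 5.0 - 0.8 * loss + rng.gauss(0, 0.1)))
        for loss in [rng.uniform(0, 5) for _ in range(60)]]
err = nested_cv(data)
```

This separation is why the protocol's performance estimate stays conservative even though many candidate models are induced along the way.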

Journal ArticleDOI
TL;DR: The proposed CMT-NC solution avoids data reordering to mitigate buffer blocking and compensates for the lost packets to reduce the number of retransmissions, and reduces the encoding complexity and fully ensures decoding feasibility.
Abstract: The growing popularity of multihoming mobile terminals has encouraged the use of concurrent multipath transfer (CMT) to provide network diversity and accelerated content distribution in ubiquitous and heterogeneous wireless network environments. However, CMT performance degrades severely, mostly due to the data reordering required by large path dissimilarity and to frequent packet loss caused by wireless channel unreliability. Most delivery approaches follow the packet sequence numbers and thereby enforce strict in-order, packet-specific reception. Passively adapting to network variations, those approaches are not general enough to address CMT problems. This paper proposes to apply network coding (NC) principles to CMT in order to break the strong binding between data packets and their sequence numbers, and thus improve its performance. The proposed CMT-NC solution avoids data reordering to mitigate buffer blocking and compensates for lost packets to reduce the number of retransmissions. Its specific encoding approach reduces encoding complexity and fully ensures decoding feasibility. Furthermore, the group-based transmission management enhances the robustness and reliability of the data transfer. Simulation results show that CMT-NC is a highly efficient data transport solution that outperforms existing state-of-the-art solutions.
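A single-parity erasure code gives the flavor of how coding lets a receiver compensate for a lost packet without retransmission. This is a deliberately simplified stand-in for CMT-NC's group-based encoding, which the abstract does not detail: one XOR parity packet per group allows recovery from any single loss in the group, regardless of arrival order.

```python
def encode_group(packets):
    """Append one XOR parity packet to a group of equal-length packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def decode_group(received, group_size):
    """Recover the group if at most one packet was lost.
    `received` maps packet index -> payload for packets that arrived;
    index `group_size` is the parity packet."""
    if all(i in received for i in range(group_size)):
        return [received[i] for i in range(group_size)]
    missing = [i for i in range(group_size) if i not in received]
    if len(missing) != 1 or group_size not in received:
        return None  # more than one loss: a retransmission is needed
    # XOR of everything that did arrive reconstructs the missing packet.
    rec = bytes(len(received[group_size]))
    for p in received.values():
        rec = bytes(a ^ b for a, b in zip(rec, p))
    out = dict(received)
    out[missing[0]] = rec
    return [out[i] for i in range(group_size)]
```

Since any coded packet can substitute for the lost one, the receiver no longer blocks its buffer waiting for a specific sequence number, which is the key point of applying NC to CMT.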

Journal ArticleDOI
TL;DR: The use of an Artificial Immune System (AIS) to defend against the wormhole attack is investigated, and it is shown that the proposed approach offers better performance than other schemes defending against the wormhole attack.
Abstract: MANETs are mobile networks that are spontaneously deployed over a geographically limited area without requiring any pre-existing infrastructure. Typically, nodes are both autonomous and self-organized, without requiring a central administration or a fixed network infrastructure. Due to their distributed nature, MANETs are vulnerable to a specific routing misbehavior called the wormhole attack. In a wormhole attack, one malicious node tunnels packets from its location to another malicious node. Such wormhole attacks result in a false route with a lower hop count. If the source node follows this fake route, the malicious nodes have the option of delivering the packets or dropping them. This article aims to thwart these attacks. For this purpose, it investigates the use of an Artificial Immune System (AIS) to defend against the wormhole attack. The proposed approach learns rapidly how to detect and bypass the wormhole nodes without affecting the overall performance of the network. The proposed approach is evaluated in comparison with other existing solutions in terms of dropped packet count, packet loss ratio, throughput, packet delivery ratio, and end-to-end delay. Simulation results show that the proposed approach offers better performance than other schemes defending against the wormhole attack.
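A classic AIS building block is negative selection: detectors are generated at random and kept only if they do not match "self" (normal) behavior, so anything a detector later matches is flagged as anomalous. The toy below applies this to a single route feature, the advertised hop count, since a wormhole tunnel typically produces a route with an abnormally low hop count. The feature choice, value ranges, and matching radius are illustrative assumptions, not the paper's actual detector design.

```python
import random

def train_detectors(self_samples, n_detectors, radius, rng,
                    low=0.0, high=20.0):
    """Negative selection: keep only candidates that do NOT match 'self'."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.uniform(low, high)
        if all(abs(cand - s) > radius for s in self_samples):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, radius):
    """A sample matched by any detector lies outside normal behavior."""
    return any(abs(sample - d) <= radius for d in detectors)
```

A route advertising two hops where normal routes need eight to twelve would fall in detector-covered space and be flagged, letting the source bypass it.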

Journal ArticleDOI
TL;DR: This paper extensively reviews and discusses the algorithms developed to address these challenges, the techniques for integrating IP over WSNs, and the attributes of mobility management within IPv4 and IPv6, respectively; special focus is given to a comprehensive review encompassing mechanisms, advantages, and disadvantages of related work within IPv6 mobility management.
Abstract: The Internet of Things (IoT), also referred to as the IP-enabled wireless sensor network (IP-WSN), has become a rich area of research. This is due to the rapid growth in a wide spectrum of critical application domains. However, properties of these systems such as memory size, processing capacity, and power supply impose constraints on IP-WSN applications and their deployment in the real world. Consequently, IP-WSN constantly faces issues whose complexity rises further with IP mobility. IP mobility management is utilized as a mechanism to resolve these issues. The management protocols introduced to support mobility have evolved from host-based to network-based mobility management protocols. Both types of solutions are widely present, with the choice depending on the nature of the systems being deployed. The mobile node (MN) is involved in the mobility-related signaling in host-based protocols, while network-based protocols shield the host by transferring the mobility-related signaling to the network entities. The features of the IoT are inclined towards network-based solutions. The wide spectrum of strategies derived to achieve enhanced performance evidently delivers superior performance, but simultaneously introduces issues such as long handover latency, intense signaling, and packet loss, which affect the QoS of real-time applications. This paper extensively reviews and discusses the algorithms developed to address these challenges, the techniques for integrating IP over WSNs, and the attributes of mobility management within IPv4 and IPv6, respectively; special focus is given to a comprehensive review encompassing mechanisms, advantages, and disadvantages of related work within IPv6 mobility management. The paper concludes with the proposition of several pertinent open issues of high research value.

Journal ArticleDOI
TL;DR: The research was focused on the quality of video data delivery in many scenarios, including different packet loss rates and delay variation values in the network, and an extended QoS model for the estimation of triple play services was designed.
Abstract: The aim of this work is to bring a complex view of video streaming service performance within IP-based networks. Video quality, as a part of multimedia technology, plays a crucial role nowadays due to the rapid increase in video traffic. Since the architecture of IP networks was not designed for real-time services like audio or video, there are many factors that can influence the final quality of service, especially packet loss and delay variation (also known as jitter). The research focused on the quality of video data delivery in many scenarios, including different packet loss rates and simulated delay variation values in the network. The tests performed were evaluated using objective video quality methods. Based on the results of these measurements, an extended QoS model for the estimation of triple play services was designed. The proposed model allows us to compute estimated objective quality parameters that describe the final quality of a video service as a part of triple play services.
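The core idea of such a QoS estimation model, mapping measured packet loss and jitter to a predicted quality score, can be sketched as below. The exponential-decay shape and the coefficients `a` and `b` are illustrative assumptions for the sketch only; the paper's actual fitted model is not reproduced here.

```python
import math

def estimate_mos(loss_rate, jitter_ms, mos_max=4.5, mos_min=1.0,
                 a=55.0, b=0.03):
    """Map a packet loss rate (0..1) and jitter (ms) to an estimated MOS.
    Shape and coefficients are hypothetical, not the paper's model."""
    impairment = a * loss_rate + b * jitter_ms   # combined impairment score
    return mos_min + (mos_max - mos_min) * math.exp(-impairment / 10.0)
```

The useful property of any such mapping is monotonicity: quality is maximal on a clean channel and decreases as loss or jitter grows, never dropping below the scale's floor.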

Proceedings ArticleDOI
10 Apr 2016
TL;DR: This paper describes a set of update scenarios called flow swaps, for which Time4 is the optimal update approach, yielding less packet loss than existing update approaches, and presents the design, implementation, and evaluation of a Time4-enabled OpenFlow prototype.
Abstract: With the rise of Software Defined Networks (SDN), there is growing interest in dynamic and centralized traffic engineering, where decisions about forwarding paths are taken dynamically from a network-wide perspective. Frequent path reconfiguration can significantly improve the network performance, but should be handled with care, so as to minimize disruptions that may occur during network updates. In this paper we introduce Time4, an approach that uses accurate time to coordinate network updates. We characterize a set of update scenarios called flow swaps, for which Time4 is the optimal update approach, yielding less packet loss than existing update approaches. We define the lossless flow allocation problem, and formally show that in environments with frequent path allocation, scenarios that require simultaneous changes at multiple network devices are inevitable. We present the design, implementation, and evaluation of a Time4-enabled OpenFlow prototype. The prototype is publicly available as open source. Our work includes an extension to the OpenFlow protocol that has been adopted by the Open Networking Foundation (ONF), and is now included in OpenFlow 1.5. Our experimental results demonstrate the significant advantages of Time4 compared to other network update approaches.
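The essence of a timed flow swap is that both switches are told in advance to apply their new rules at the same future instant T, so neither path momentarily carries both flows. The sketch below mimics this with threads and a shared wall-clock deadline; the class and method names are illustrative, not the OpenFlow extension's actual API.

```python
import threading
import time

class Switch:
    """Toy switch that applies a scheduled rule update at a given time."""
    def __init__(self, name):
        self.name = name
        self.rule = None
        self.applied_at = None

    def schedule_update(self, rule, at_time):
        """Install `rule` when the local clock reaches `at_time`."""
        def apply():
            time.sleep(max(0.0, at_time - time.time()))
            self.rule = rule
            self.applied_at = time.time()
        t = threading.Thread(target=apply)
        t.start()
        return t

# Swap two flows by having both switches update at the same instant T.
s1, s2 = Switch("s1"), Switch("s2")
T = time.time() + 0.2          # common execution time for the swap
threads = [s1.schedule_update("flow A -> path 2", T),
           s2.schedule_update("flow B -> path 1", T)]
for t in threads:
    t.join()
skew = abs(s1.applied_at - s2.applied_at)   # residual skew between switches
```

The residual skew is what accurate time synchronization minimizes; the smaller it is, the shorter the transient during which a link could be oversubscribed and drop packets.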

Journal ArticleDOI
TL;DR: A new adaptive geographic routing scheme is proposed for establishing a simplex VOD transmission in urban environments, in which multiple independent routes are discovered between source and destination vehicles, with the number of routes depending on the volume of the requested video and the lifetime of each route.
Abstract: Vehicular ad hoc networks (VANETs) have attracted many researchers' attention in recent years. Due to the highly dynamic nature of these networks, providing guaranteed quality-of-service (QoS) video-on-demand (VOD) sessions is a challenging problem. In this paper, a new adaptive geographic routing scheme is proposed for establishing a simplex VOD transmission in urban environments. In this scheme, rather than one route, a number of independent routes are discovered between source and destination vehicles, where the number of routes depends on the volume of the requested video and the lifetime (the span of time during which a route remains almost fixed) of each route. A closed-form equation is derived for estimating the connectivity probability of a route, which is used to select the best-connected routes. Simulation results show improvements in the QoS parameters: the packet loss ratio is decreased by 40.79% and the freezing delay is improved by 25 ms compared with those of junction-based multipath source routing, at the cost of a 2-ms degradation in the end-to-end delay.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed FEC scheme can enhance the perceptual quality of videos, compared to conventional FEC methods for video communications, and can reduce network overhead by 41% on average.
Abstract: With the exponential growth of video traffic over wireless networked and embedded devices, mechanisms are needed to predict and control perceptual video quality to meet the quality of experience (QoE) requirements in an energy-efficient way. This paper proposes an energy-efficient QoE support framework for wireless video communications. It consists of two components: 1) a perceptual video quality model that allows the prediction of video quality in real-time and with low complexity, and 2) an application layer energy-efficient and content-aware forward error correction (FEC) scheme for preventing quality degradation caused by network packet losses. The perceptual video quality model characterizes factors related to video content as well as distortion caused by compression and transmission. Prediction of perceptual quality is achieved through a decision tree using a set of observable features from the compressed bitstream and the network. The proposed model can achieve prediction accuracy of 88.9% and 90.5% on two distinct testing sets. Based on the proposed quality model, a novel FEC scheme is introduced to protect video packets from losses during transmission. Given a user-defined perceptual quality requirement, the FEC scheme adjusts the level of protection for different components in a video stream to minimize network overhead. Simulation results show that the proposed FEC scheme can enhance the perceptual quality of videos. Compared to conventional FEC methods for video communications, the proposed FEC scheme can reduce network overhead by 41% on average.
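Adjusting the level of FEC protection to a quality target can be made concrete with an idealized (n, k) erasure code, where a block of k video packets is recoverable whenever at least k of the n = k + parity transmitted packets arrive. The binomial sizing rule below is a textbook stand-in for the paper's content-aware scheme: more important components get a higher target probability, and hence more parity.

```python
from math import comb

def delivery_prob(k, parity, loss):
    """Probability that at least k of k+parity packets arrive, i.e. that
    an ideal (n, k) erasure code recovers the block, with i.i.d. loss."""
    n = k + parity
    return sum(comb(n, i) * (1 - loss) ** (n - i) * loss ** i
               for i in range(parity + 1))

def min_parity(k, loss, target):
    """Smallest parity count meeting the target recovery probability."""
    parity = 0
    while delivery_prob(k, parity, loss) < target:
        parity += 1
    return parity
```

Minimizing overhead then amounts to giving each stream component exactly the parity its perceptual importance demands and no more, which is the intuition behind the reported 41% average overhead reduction.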

Proceedings ArticleDOI
01 Sep 2016
TL;DR: An error-resilient video compression technique based on hybrid multiple descriptions and redundant pictures is proposed to overcome the impact of packet loss in underwater acoustic transmission.
Abstract: Addressing transmission errors in underwater acoustic channels is a key challenge for real-time video communication between an autonomous underwater vehicle and a surface station. In this paper, we propose an error-resilient video compression technique based on hybrid multiple descriptions and redundant pictures to overcome the impact of packet loss in underwater acoustic transmission. Video sequences are split into two descriptions with selective redundant pictures in order to achieve a balance between coding efficiency and error resiliency. Experiments with underwater video sequences are presented to assess the performance of the proposed approach under various packet loss rates, in comparison to state-of-the-art single and multiple description coding techniques.
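A common multiple-description construction, and a simplified stand-in for the hybrid scheme above, splits the sequence into even and odd frames, with every Nth frame carried redundantly in both descriptions so that either stream alone can resynchronize. The even/odd split and the redundancy period are illustrative choices, not the paper's exact design.

```python
def split_descriptions(frames, redundant_every=4):
    """Split frames into two descriptions; every Nth frame is duplicated
    into both (a selective redundant picture) for resynchronization."""
    d0, d1 = [], []
    for i, f in enumerate(frames):
        (d0 if i % 2 == 0 else d1).append((i, f))
        if i % redundant_every == 0:   # selective redundancy
            (d1 if i % 2 == 0 else d0).append((i, f))
    return d0, d1

def merge(received_descriptions):
    """Reconstruct the sequence from whichever descriptions arrived."""
    frames = {}
    for desc in received_descriptions:
        for i, f in desc:
            frames[i] = f
    return [frames[i] for i in sorted(frames)]
```

If both descriptions arrive, the full sequence is recovered; if one is lost, the survivor still yields roughly half the frames plus the redundant anchors, trading coding efficiency for resilience.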

Journal ArticleDOI
TL;DR: A fairness-based license-assisted access and resource scheduling scheme is designed for coexisting Long Term Evolution-Advanced and WiFi systems in the unlicensed band, and a novel resource scheduling approach employing linear programming is proposed to maximize the utility function.
Abstract: In the face of the explosive surge of mobile data services, spectrum aggregation, or carrier aggregation, technology has been proposed to improve system throughput and spectrum efficiency (SE) by aggregating licensed and unlicensed spectrum bands. However, system performance would severely deteriorate due to channel access collisions if the channel access and resource scheduling approaches are not coordinated among the different networks sharing a spectrum band. Therefore, in order to improve the system throughput and the SE, a fairness-based license-assisted access and resource scheduling scheme is designed for the coexisting Long Term Evolution-Advanced (LTE-A) and WiFi systems in the unlicensed band. The optimal contention window sizes in the proposed fairness-based channel access approach are obtained in terms of various density ratios between these two systems. Furthermore, a novel resource scheduling approach employing linear programming is proposed to maximize a utility function, with the goal of improving the service experience of users and the SE under various spectrum qualities. Theoretical proofs and simulation results verify the enhanced performance of the proposed approaches in terms of key metrics such as throughput, SE, delay, and packet loss ratio.
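As a much-simplified illustration of utility-maximizing scheduling (the paper's full LP formulation is not reproduced): when each resource block can be assigned to at most one user and utilities are additive, the LP's optimum decomposes into a per-block choice of the highest-utility user, since the constraints couple users only within a block.

```python
def schedule(utilities):
    """utilities[u][b]: utility of assigning resource block b to user u.
    With one-user-per-block constraints and additive utilities, the LP
    optimum reduces to assigning each block to its highest-utility user."""
    n_users = len(utilities)
    n_blocks = len(utilities[0])
    assignment = []
    for b in range(n_blocks):
        best = max(range(n_users), key=lambda u: utilities[u][b])
        assignment.append(best)   # user index that gets block b
    return assignment
```

The real coexistence problem adds fairness and channel-access coupling between LTE-A and WiFi, which is what makes a full linear program (rather than this per-block argmax) necessary.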