
Showing papers on "Packet loss published in 2012"


Proceedings ArticleDOI
13 Aug 2012
TL;DR: It is demonstrated that PDQ significantly outperforms TCP, RCP and D3 in data center environments, and is stable, resilient to packet loss, and preserves nearly all its performance gains even given inaccurate flow information.
Abstract: Today's data centers face extreme challenges in providing low latency. However, fair sharing, a principle commonly adopted in current congestion control protocols, is far from optimal for satisfying latency requirements. We propose Preemptive Distributed Quick (PDQ) flow scheduling, a protocol designed to complete flows quickly and meet flow deadlines. PDQ enables flow preemption to approximate a range of scheduling disciplines. For example, PDQ can emulate a shortest job first algorithm to give priority to the short flows by pausing the contending flows. PDQ borrows ideas from centralized scheduling disciplines and implements them in a fully distributed manner, making it scalable to today's data centers. Further, we develop a multipath version of PDQ to exploit path diversity. Through extensive packet-level and flow-level simulation, we demonstrate that PDQ significantly outperforms TCP, RCP and D3 in data center environments. We further show that PDQ is stable, resilient to packet loss, and preserves nearly all its performance gains even given inaccurate flow information.
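The scheduling idea at the core of the abstract can be sketched in a few lines. This is a hypothetical toy, not PDQ itself: it contrasts preemptive shortest-job-first (the full link goes to the flow with the least remaining size while contenders are paused) with idealized fair sharing. Flow sizes and the link rate are made-up numbers.

```python
# Toy comparison of preemptive shortest-job-first vs. fair sharing.
# Not PDQ itself: PDQ realizes this behavior in a distributed way.

def completion_times(flows, rate=1.0):
    """flows: dict name -> size. Finish times under preemptive
    shortest-job-first, where contending flows are paused."""
    t, done = 0.0, {}
    remaining = dict(flows)
    while remaining:
        # pick the flow with the smallest remaining size; others stay paused
        name = min(remaining, key=remaining.get)
        t += remaining.pop(name) / rate
        done[name] = t
    return done

def fair_share_times(flows, rate=1.0):
    """Finish times under idealized processor sharing (fair sharing)."""
    t, done = 0.0, {}
    remaining = dict(flows)
    while remaining:
        n = len(remaining)
        # time until the smallest flow finishes with capacity split n ways
        name = min(remaining, key=remaining.get)
        dt = remaining[name] * n / rate
        t += dt
        for k in list(remaining):
            remaining[k] -= dt * (rate / n)
        del remaining[name]
        done[name] = t
    return done

sjf = completion_times({"short": 1.0, "long": 10.0})
fair = fair_share_times({"short": 1.0, "long": 10.0})
```

With these numbers the short flow finishes at time 1 under SJF but at time 2 under fair sharing, while the long flow finishes at time 11 either way, which is why fair sharing is far from optimal for latency.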

506 citations


Journal ArticleDOI
TL;DR: This paper presents the most relevant research efforts made around the RPL routing protocol that pertain to its performance evaluation, implementation, experimentation, deployment, and improvement, and points out open research challenges in the RPL design.

303 citations


Proceedings ArticleDOI
16 Apr 2012
TL;DR: In this paper, a framework for runtime adaptation of low-power MAC protocol parameters is presented; based on application requirements expressed as network lifetime, end-to-end latency, and end-to-end reliability, pTunes automatically determines optimized parameter values.
Abstract: We present pTunes, a framework for runtime adaptation of low-power MAC protocol parameters. The MAC operating parameters bear great influence on the system performance, yet their optimal choice is a function of the current network state. Based on application requirements expressed as network lifetime, end-to-end latency, and end-to-end reliability, pTunes automatically determines optimized parameter values to adapt to link, topology, and traffic dynamics. To this end, we introduce a flexible modeling approach, separating protocol-dependent from protocol-independent aspects, which facilitates using pTunes with different MAC protocols, and design an efficient system support that integrates smoothly with the application. To demonstrate its effectiveness, we apply pTunes to X-MAC and LPP. In a 44-node testbed, pTunes achieves up to three-fold lifetime gains over static MAC parameters optimized for peak traffic, the latter being current - and almost unavoidable - practice in real deployments. pTunes promptly reacts to changes in traffic load and link quality, reducing packet loss by 80% during periods of controlled wireless interference. Moreover, pTunes helps the routing protocol recover quickly from critical network changes, reducing packet loss by 70% in a scenario where multiple core routing nodes fail.

129 citations


Proceedings ArticleDOI
30 Oct 2012
TL;DR: This paper proposes weighted Vegas (wVegas), a delay-based algorithm for multipath congestion control that uses packet queuing delay as its congestion signal, thus achieving fine-grained load balancing.
Abstract: With the aid of multipath transport protocols, a multihomed host can shift some of its traffic from more congested paths to less congested ones, thus compensating for lost bandwidth on some paths by moderately increasing transmission rates on other ones. However, existing multipath proposals achieve only coarse-grained load balancing due to a rough estimate of network congestion using packet losses. This paper formulates the problem of multipath congestion control and proposes an approximate iterative algorithm to solve it. We prove that a fair and efficient traffic shifting implies that every flow strives to equalize the extent of congestion that it perceives on all its available paths. We call this result the “Congestion Equality Principle”. By instantiating the approximate iterative algorithm, we develop weighted Vegas (wVegas), a delay-based algorithm for multipath congestion control, which uses packet queuing delay as its congestion signal, thus achieving fine-grained load balancing. Our simulations show that, compared with loss-based algorithms, wVegas is more sensitive to changes in network congestion and thus achieves more timely traffic shifting and quicker convergence. Additionally, as it occupies fewer link buffers, wVegas rarely causes packet losses and shows better intra-protocol fairness.
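The "Congestion Equality Principle" can be illustrated with a toy iteration: a flow on two paths repeatedly moves a small amount of traffic from the path with the larger queuing delay to the one with the smaller delay until perceived congestion equalizes. The linear delay model, step size, and path capacities below are illustrative assumptions, not wVegas itself.

```python
# Toy illustration of congestion equalization across two paths.
# delay_fn stands in for the measured queuing delay (rtt - base_rtt).

def equalize(rates, delay_fn, step=0.01, iters=2000):
    """rates: [r1, r2] on two paths; delay_fn(path, rate) -> queuing delay."""
    r = list(rates)
    for _ in range(iters):
        d = [delay_fn(i, r[i]) for i in range(2)]
        # shift a small amount of traffic off the more congested path
        hi, lo = (0, 1) if d[0] > d[1] else (1, 0)
        shift = min(step, r[hi])
        r[hi] -= shift
        r[lo] += shift
    return r

# linear toy delay model: path 0 has twice the headroom of path 1,
# so at equilibrium it should carry roughly twice the traffic
delays = lambda i, rate: rate / (2.0 if i == 0 else 1.0)
r = equalize([5.0, 5.0], delays)
```

Starting from an even 5/5 split, the iteration settles near 6.67/3.33, the point where both paths show equal queuing delay, which is the fine-grained balance the abstract describes.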

128 citations



Book ChapterDOI
21 May 2012
TL;DR: In this article, the authors introduce two new network coding signature schemes, which are provably secure in the standard model, rely on standard assumptions, and are in the same efficiency class as previous solutions based on random oracles.
Abstract: Network Coding is a routing technique where each node may actively modify the received packets before transmitting them. While this departure from passive networks improves throughput and resilience to packet loss, it renders transmission susceptible to pollution attacks, where misbehaving nodes can maliciously alter the transmitted messages. Nodes cannot use standard signature schemes to authenticate the modified packets: this would require knowledge of the original sender's signing key. Network coding signature schemes offer a cryptographic solution to this problem. Very roughly, such signatures allow signing vector spaces (or rather bases of such spaces), and these signatures are homomorphic: given signatures on a set of vectors, it is possible to create signatures for any linear combination of these vectors. Designing such schemes is a difficult task, and the few existing constructions either rely on random oracles or are rather inefficient. In this paper we introduce two new network coding signature schemes. Both of our schemes are provably secure in the standard model, rely on standard assumptions, and are in the same efficiency class as previous solutions based on random oracles.
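A minimal sketch of the network-coding operation that such signatures must survive: an intermediate node outputs linear combinations of received packet vectors over a finite field, with coefficient headers so the sink knows which combination it got. A homomorphic signature on the basis vectors then yields a signature on any such combination. The field size and packet contents here are illustrative assumptions; the signing itself is not shown.

```python
# Linear network coding over a small prime field (illustrative choice).
P = 257

def combine(vectors, coeffs):
    """Linear combination of packet vectors over GF(P)."""
    length = len(vectors[0])
    return [sum(c * v[i] for c, v in zip(coeffs, vectors)) % P
            for i in range(length)]

# two source packets, each prefixed with identity coefficient headers;
# the headers of the output reveal the combination (3*v1 + 5*v2)
v1 = [1, 0, 10, 20]
v2 = [0, 1, 30, 40]
mixed = combine([v1, v2], [3, 5])
```

The pollution attack the abstract describes amounts to a node emitting a vector outside the signed span; the homomorphic signature lets any node verify that `mixed` really lies in the space spanned by `v1` and `v2`.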

88 citations


Journal ArticleDOI
TL;DR: An analytical queueing model based on the embedded Markov chain is presented to study and analyze the performance of rule-based firewalls when subjected to normal traffic flows as well as DoS attack flows targeting different rule positions.
Abstract: Network firewalls act as the first line of defense against unwanted and malicious traffic targeting Internet servers. Predicting the overall firewall performance is crucial to network security engineers and designers in assessing the effectiveness and resiliency of network firewalls against DDoS (Distributed Denial of Service) attacks such as those commonly launched by today's botnets. In this paper, we present an analytical queueing model based on the embedded Markov chain to study and analyze the performance of rule-based firewalls when subjected to normal traffic flows as well as DoS attack flows targeting different rule positions. We derive equations for key features and performance measures of engineering and design significance. These features and measures include throughput, packet loss, packet delay, and the firewall's CPU utilization. In addition, we verify and validate our analytical model using simulation and real experimental measurements.
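To give the flavor of such a queueing model, a much simpler finite-buffer M/M/1/K queue already exhibits the key mechanism: packet loss is the probability of arriving at a full buffer, and attack flows that match deep rule positions inflate per-packet service time (lower the service rate), driving loss up. The paper's embedded-Markov-chain model is more general; the rates and buffer size below are made up.

```python
# Hedged sketch: M/M/1/K blocking probability as a stand-in for the
# firewall's packet-loss behavior under rule-position-dependent load.

def mm1k_loss(lam, mu, K):
    """Blocking (loss) probability of an M/M/1/K queue with arrival
    rate lam, service rate mu, and K buffer positions."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

# DoS packets crafted to traverse deep rule positions lower the
# effective service rate mu, which raises the loss probability
low_load = mm1k_loss(lam=50.0, mu=100.0, K=10)
attack = mm1k_loss(lam=50.0, mu=55.0, K=10)
```

With the same arrival rate, halving the service margin moves the loss probability from well under 0.1% to several percent, which is the qualitative effect the paper quantifies per rule position.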

83 citations


Journal ArticleDOI
TL;DR: This work introduces an APP/MAC/PHY cross-layer architecture that enables optimizing perceptual quality for delay-constrained scalable video transmission and proposes an online QoS-to-QoE mapping technique that quantifies the loss visibility of packets from each video layer, enabling unequal error protection according to packet loss visibility.
Abstract: Delivering high perceptual quality video over wireless channels is challenging due to the changing channel quality and the variations in the importance of one source packet to the next for the end-user's perceptual experience. Leveraging perceptual metrics in concert with link adaptation to maximize perceptual quality and satisfy real-time delay constraints is largely unexplored. We introduce an APP/MAC/PHY cross-layer architecture that enables optimizing perceptual quality for delay-constrained scalable video transmission. We propose an online QoS-to-QoE mapping technique to quantify the loss visibility of packets from each video layer using the ACK history and perceptual metrics. At the PHY layer, we develop a link adaptation technique that uses the QoS-to-QoE mapping to provide perceptually-optimized unequal error protection per layer according to packet loss visibility. At the APP layer, the source rate is adapted by selecting the set of temporal and quality layers to be transmitted based on the channel statistics, source rates, and playback buffer state. The proposed cross-layer optimization framework allows the channel to adapt at a faster time scale than the video codec. Furthermore, it provides a tradeoff between playback buffer occupancy and perceptual quality. We show that the proposed architecture prevents playback buffer starvation, provides immunity against short-term channel fluctuations, regulates the buffer size, and achieves a 30% increase in video capacity versus throughput-optimal link adaptation.

77 citations


Patent
08 May 2012
TL;DR: In this paper, the authors present a method for operating a network processor that receives a first data packet in a stream of data packets and maintains a set of receive-queues adapted to store received data packets.
Abstract: According to embodiments of the invention, there is provided a method for operating a network processor. The network processor receives a first data packet in a stream of data packets and has a set of receive-queues adapted to store received data packets. The network processor processes the first data packet by reading a flow identification in the first data packet; determining a quality of service for the first data packet; mapping the flow identification and the quality of service into an index for selecting a first receive-queue for routing the first data packet; and utilizing the index to route the first data packet to the first receive-queue.
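The claimed mapping can be sketched as a hash of the flow identification combined with the QoS class to pick a queue index. The 5-tuple flow identification, number of classes, and queue layout below are assumptions for illustration, not details from the patent.

```python
# Illustrative (flow id, QoS) -> receive-queue index mapping.
NUM_QOS_CLASSES = 4
QUEUES_PER_CLASS = 8

def queue_index(flow_id: tuple, qos: int) -> int:
    """Map (flow_id, qos) to a receive-queue index; packets of the same
    flow and class always land in the same queue, preserving order."""
    h = hash(flow_id) % QUEUES_PER_CLASS
    return qos * QUEUES_PER_CLASS + h

flow = ("10.0.0.1", "10.0.0.2", 6, 1234, 80)  # src, dst, proto, sport, dport
idx = queue_index(flow, qos=2)
```

Keeping the mapping deterministic per flow is what makes it usable for routing: all packets of one flow hit one queue, while the QoS class partitions the queue space.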

75 citations


Journal ArticleDOI
TL;DR: Improved criteria for stability and stabilization of sampled-data systems with control packet loss are derived and are proved theoretically to be less conservative than the existing results.
Abstract: This technical note presents a new method for stability analysis and stabilization of sampled-data systems with control packet loss. It is assumed that if the control packet from the controller to the actuator is lost, then the actuator input to the plant is set to zero. The new method is based on a novel construction of piecewise differentiable Lyapunov functionals by using an impulsive system representation of sampled-data systems. A significant feature of the new Lyapunov functionals is that they are continuous at impulse times but not necessarily positive definite inside the impulse intervals. Applying the new Lyapunov functionals to sampled-data systems with control packet loss, improved criteria for stability and stabilization are derived. The new criteria are proved theoretically to be less conservative than the existing results. Illustrative examples are given which substantiate the usefulness of the proposed method.
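The setting can be made concrete with a toy simulation: a scalar plant sampled every h seconds, where a lost control packet sets the actuator input to zero, exactly the assumption stated above. The plant parameters, gain, and loss patterns are made-up values chosen to show how losses can destabilize an otherwise stable loop; the paper's Lyapunov analysis is not reproduced here.

```python
# Toy sampled-data loop with zero-input policy on control packet loss.
import math

def simulate(a, b, k, h, losses, x0=1.0):
    """losses: iterable of booleans, True = control packet lost that period.
    Plant x' = a x + b u, discretized exactly over each period h."""
    x = x0
    for lost in losses:
        u = 0.0 if lost else -k * x           # actuator input zeroed on loss
        x = math.exp(a * h) * x + (math.exp(a * h) - 1) / a * b * u
    return x

no_loss = abs(simulate(a=1.0, b=1.0, k=1.5, h=0.1, losses=[False] * 100))
all_lost = abs(simulate(a=1.0, b=1.0, k=1.5, h=0.1, losses=[True] * 100))
```

With every packet delivered the state decays toward zero; with every packet lost the open-loop instability (a > 0) takes over and the state blows up, which is why stability criteria must account for the loss pattern.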

75 citations


Journal ArticleDOI
TL;DR: This work considers the practical barriers to HEVC streaming in realistic environments and proposes HEVStream, a streaming and evaluation framework for HEVC encoded content that fills the current gap in enabling networked HEVC visual applications and permits the implementation, testing and evaluation of HeVC encoded video streaming under a range of packet loss, bandwidth restriction and network delay scenarios in a realistic testbed environment.
Abstract: High Efficiency Video Coding (HEVC) is the next generation video compression standard currently under development within the ITU-T/ISO sponsored Joint Collaborative Team on Video Coding (JCT-VC). The standardization, and eventual adoption, of HEVC will contribute significantly to the future development of many consumer devices. Areas such as broadcast television, multimedia streaming, mobile communications and multimedia/video content storage will all be impacted by implementation of the emerging HEVC standard. Up to this point in time the research focus of HEVC has been on improvements to video compression efficiency, and little work has been conducted into streaming of HEVC. In this work we consider the practical barriers to HEVC streaming in realistic environments and propose HEVStream, a streaming and evaluation framework for HEVC encoded content. Our framework fills the current gap in enabling networked HEVC visual applications and permits the implementation, testing and evaluation of HEVC encoded video streaming under a range of packet loss, bandwidth restriction and network delay scenarios in a realistic testbed environment. We provide a basic error concealment method for HEVC to overcome limitations within the decoder and an RTP packetisation format for HEVC Network Abstraction Layer (NAL) units. Comprehensive results of HEVC streaming experiments under various network circumstances are reported. These results provide an insight into the reduction in picture quality, measured as peak signal-to-noise ratio (PSNR), that can be expected under a wide range of network constraint and packet loss conditions. We report an average loss of 3.61 dB when a bandwidth reduction of 10% is applied. We believe that this work will be amongst the first to report on successful design and implementation of HEVC network applications, and evaluation of the effects of network constraints or limitations on the quality of HEVC encoded video streams.
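The quality metric reported above is PSNR; for reference, here is how the figure is computed from 8-bit pixel data (peak value 255). The sample values are made-up, and real evaluations compute this per frame over full pictures.

```python
# PSNR between a reference and a degraded sequence of 8-bit samples.
import math

def psnr(ref, deg):
    """Peak signal-to-noise ratio in dB (MAX = 255 for 8-bit video)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, deg)) / len(ref)
    if mse == 0:
        return float("inf")  # identical signals: PSNR is unbounded
    return 10 * math.log10(255 ** 2 / mse)

clean = [100, 120, 140, 160]
noisy = [101, 119, 141, 159]   # uniform absolute error of 1 -> MSE = 1
```

An MSE of 1 corresponds to about 48.13 dB, so the reported average loss of 3.61 dB under a 10% bandwidth reduction represents a substantial increase in mean squared distortion.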

Journal ArticleDOI
TL;DR: A new generalized analysis of the unslotted IEEE 802.15.4 medium access control protocol concludes that heterogeneous traffic and limited carrier-sensing range play an essential role in the performance and that routing should account for the presence of dominant nodes to balance the traffic distribution across the network.
Abstract: Many existing analytical studies of the IEEE 802.15.4 medium access control (MAC) protocol are not adequate because they are often based on assumptions such as homogeneous traffic and ideal carrier sensing, which are far from reality for multi-hop networks, particularly in the presence of mobility. In this paper, a new generalized analysis of the unslotted IEEE 802.15.4 MAC is presented. The analysis considers the effects induced by heterogeneous traffic due to multi-hop routing and different traffic generation patterns among the nodes of the network, and the hidden terminals due to reduced carrier-sensing capabilities. The complex relation between MAC and routing protocols is modeled, and novel results on this interaction are derived. For various network configurations, conditions under which routing decisions based on packet loss probability or delay lead to an unbalanced distribution of the traffic load across multi-hop paths are studied. It is shown that these routing decisions tend to direct traffic toward nodes with high packet generation rates, with potential catastrophic effects for the node's energy consumption. It is concluded that heterogeneous traffic and limited carrier-sensing range play an essential role in the performance and that routing should account for the presence of dominant nodes to balance the traffic distribution across the network.

01 Jan 2012
TL;DR: Simulation results show that the proposed scheme provides fast message verification, identifies the black hole, and discovers safe routing that avoids the black hole attack.
Abstract: An ad hoc network is a collection of mobile nodes that dynamically form a temporary network. It operates without the use of existing infrastructure. One of the principal routing protocols used in ad hoc networks is the AODV (ad hoc on-demand distance vector) protocol. This is anticipated to offer a range of flexible services to mobile and nomadic users by means of an integrated homogeneous architecture. Energy-constrained nodes, low channel bandwidth, node mobility, high channel error rates, channel variability, and packet loss are some of the limitations of MANETs. The security of the AODV protocol is compromised by a particular type of attack called the 'black hole attack'. The black hole attack is a security threat in which traffic is redirected to a node that actually does not exist in the network. In this attack, a malicious node advertises itself as having the shortest path to the node whose packets it wants to intercept. This paper presents simulation results for a scheme that provides fast message verification, identifies the black hole, and discovers safe routing that avoids the black hole attack.

Journal ArticleDOI
01 Dec 2012
TL;DR: A directional flooding-based routing protocol, called DFR, is proposed in order to achieve reliable packet delivery in underwater sensor networks, and a simulation study using ns-2 simulator proves that DFR is more suitable for UWSNs, especially when links are prone to packet loss.
Abstract: Unlike terrestrial sensor networks, underwater sensor networks (UWSNs) have salient features such as a long propagation delay, narrow bandwidth, and high packet loss over links. Hence, path setup-based routing protocols proposed for terrestrial sensor networks are not applicable because a large latency of the path establishment is observed, and packet delivery is not reliable in UWSNs. Even though routing protocols such as VBF (vector based forwarding) and HHVBF (hop-by-hop VBF) were introduced for UWSNs, their performance in terms of reliability deteriorates at high packet loss. In this paper, we therefore propose a directional flooding-based routing protocol, called DFR, in order to achieve reliable packet delivery. DFR performs a so-called controlled flooding, where DFR changes the number of nodes which participate in forwarding a packet according to their link quality. When a forwarding node has poor link quality to its neighbor nodes geographically advancing toward the sink, DFR allows more nodes to participate in forwarding the packet. Otherwise, a few nodes are enough to forward the packet reliably. In addition, we identify two types of void problems which can occur during the controlled flooding and introduce their corresponding solutions. Our simulation study using ns-2 simulator proves that DFR is more suitable for UWSNs, especially when links are prone to packet loss. Copyright © 2011 John Wiley & Sons, Ltd. (This paper is an extended version of our previous work presented in MTS/IEEE OCEANS 2008, Quebec City, Canada, September 2008.)
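The controlled-flooding decision can be sketched as a per-node participation rule: a candidate forwarder joins the flood based on its link quality toward the sink, with poor quality widening participation and good quality narrowing it. The threshold rule below is an assumption for illustration and not DFR's exact criterion.

```python
# Illustrative participation rule for controlled flooding.

def should_forward(link_quality, progress_to_sink, base_threshold=0.5):
    """link_quality in [0, 1]; progress_to_sink > 0 means this node is
    geographically closer to the sink than the previous hop."""
    if progress_to_sink <= 0:
        return False                      # never flood away from the sink
    # poor links lower the bar so more neighbors join the flood
    threshold = base_threshold * link_quality
    return progress_to_sink >= threshold

good_link = should_forward(link_quality=0.9, progress_to_sink=0.3)
bad_link = should_forward(link_quality=0.2, progress_to_sink=0.3)
```

With a good link, only nodes making large geographic progress forward the packet (few forwarders); with a poor link, the same node qualifies, so more neighbors participate, trading energy for reliability as the abstract describes.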

Journal ArticleDOI
TL;DR: This article gives an overview of existing QoS techniques and a parametric comparison with recent developments, concentrating mainly on network congestion in the WSN environment.
Abstract: A wireless sensor network (WSN) is one made up of small sensing devices equipped with processors, memory, and short-range wireless communication. Sensor nodes are autonomous nodes, which include smart dust sensors, motes, and so on. They co-operatively monitor physical or environmental conditions and send the sensed data to the sink node. They differ from traditional computer networks due to resource constraints, unbalanced mixture traffic, data redundancy, network dynamics, and energy balance. These kinds of networks support a wide range of applications that have strong requirements to reduce end-to-end delay and losses during data transmissions. When large numbers of sensors are deployed in a sensor field and are active in transmitting the data, there is a possibility of congestion. Congestion may occur due to buffer overflow, channel contention, packet collision, a high data rate, many-to-one traffic, and so on. This leads to packet loss, which causes a decrease in throughput and lifetime. Maximum throughput, energy efficiency, and a minimum error rate can be achieved by minimizing congestion. A number of quality of service (QoS) techniques have been developed to improve the quality of the network. This article gives an overview of existing QoS techniques and a parametric comparison with recent developments. This article mainly concentrates on network congestion in the WSN environment.

Proceedings ArticleDOI
01 Dec 2012
TL;DR: An effective geographic mobility prediction routing protocol is proposed to improve the performance of routing among UAVs; it can provide effective and reliable data routing with acceptable communication overhead in the highly dynamic environment of an ad hoc UAV network.
Abstract: Unmanned Aerial Vehicles (UAVs) play increasingly important roles in modern warfare. However, data routing for communication among UAVs faces several challenges, such as packet loss and routing path failure. The main problem of UAV data routing is caused by the high mobility of the UAVs. In this paper, an effective geographic mobility prediction routing protocol is proposed to improve the performance of routing among UAVs. First, a Gaussian probability density function of UAV movement is derived to reduce the impact of high mobility. Then, two-hop perimeter forwarding is proposed to reduce the impact of routing voids. The experimental results show that the proposed approach can provide effective and reliable data routing with acceptable communication overhead in the highly dynamic environment of an ad hoc UAV network.
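The mobility-prediction idea can be sketched as dead reckoning weighted by a Gaussian confidence term: a neighbour's future position is extrapolated from its last known position and velocity, and the confidence decays with the distance likely travelled since the last update. The confidence model and all parameters below are illustrative assumptions, not the paper's derived PDF.

```python
# Illustrative position prediction with a Gaussian staleness penalty.
import math

def predict(pos, vel, dt, sigma=50.0):
    """Return predicted (x, y) position and a confidence in (0, 1] that
    decays with the distance likely travelled since the last update."""
    pred = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    drift = math.hypot(vel[0], vel[1]) * dt
    confidence = math.exp(-(drift ** 2) / (2 * sigma ** 2))
    return pred, confidence

fresh, c_fresh = predict(pos=(0.0, 0.0), vel=(30.0, 0.0), dt=1.0)
stale, c_stale = predict(pos=(0.0, 0.0), vel=(30.0, 0.0), dt=5.0)
```

A geographic forwarder would prefer neighbours with high confidence, falling back to mechanisms such as two-hop perimeter forwarding when no confident candidate makes progress.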

Journal ArticleDOI
TL;DR: Basic video quality can be efficiently guaranteed to all subscribers while the most utility is created out of limited resources for enhancement information; fast and effective algorithms are designed to bridge the gap between theoretical throughput capacity and implementation concerns.
Abstract: We propose Opportunistic Layered Multicasting (OLM), a joint user scheduling and resource allocation algorithm that provides enhanced quality and efficiency for layered video multicast over Mobile WiMAX. This work is among the first to fully combine layered video multicasting with opportunistic scheduling concepts. The target application is characterized by groups of users acquiring popular video programs over a fading channel. To accommodate various bandwidth requirements and device capabilities, video streams are coded into base and enhancement layers using scalable video coding technology. Correspondingly, the optimization problems, which select the best subset of users to receive a specific video layer and assign the most appropriate modulation and coding scheme for this video layer, are specifically formulated for both video layer types. We also design fast and effective algorithms to bridge the gap between theoretical throughput capacity and implementation concerns. Thus, basic video quality can be efficiently guaranteed to all subscribers while the most utility is created out of limited resources for enhancement information. To overcome the inevitable packet loss in a multicast session, an FEC rate adaptation scheme to approach theoretical performance is also presented. Favorable performance of the proposed algorithms is demonstrated by simulations utilizing realistic Mobile WiMAX parameters.

Proceedings ArticleDOI
25 Feb 2012
TL;DR: The Speculative Reservation Protocol is presented, a new network congestion control mechanism that relieves the effect of hot-spot traffic in high bandwidth, low latency, lossless computer networks and performs comparably to networks without congestion control on benign traffic patterns.
Abstract: Congestion caused by hot-spot traffic can significantly degrade the performance of a computer network. In this study, we present the Speculative Reservation Protocol (SRP), a new network congestion control mechanism that relieves the effect of hot-spot traffic in high bandwidth, low latency, lossless computer networks. Compared to existing congestion control approaches like Explicit Congestion Notification (ECN), which react to network congestion through packet marking and rate throttling, SRP takes a proactive approach of congestion avoidance. Using a light-weight endpoint reservation scheme and speculative packet transmission, SRP avoids hot-spot congestion while incurring minimal overhead. Our simulation results show that SRP responds more rapidly to the onset of severe hot-spots than ECN and has a higher network throughput on bursty network traffic. SRP also performs comparably to networks without congestion control on benign traffic patterns by reducing the latency and throughput overhead commonly associated with reservation protocols.

Patent
Hamid Assarpour1, Marten Terpstra1
29 Jun 2012
TL;DR: In this article, an operating system adds an application signature as a tag in a packet header and uses the tag alone or in combination with one or more additional header fields to map the packet to a network virtualization identifier segregating the application traffic on the network.
Abstract: An operating system adds an application signature as a tag in a packet header. In one embodiment the tag is inserted as a Q-tag in an Ethernet header. When a network element receives the tagged packet, it uses the tag alone or in combination with one or more additional header fields to map the packet to a network virtualization identifier segregating the application traffic on the network. Services are applied to packets according to network virtualization identifier to enable distributed application of services without requiring network elements to maintain state associated with packet flows.

Journal ArticleDOI
TL;DR: This paper proposes a routing algorithm, named as learning automata based fault-tolerant routing algorithm (LAFTRA), which is capable of routing in the presence of faulty nodes in MANETs using multipath routing.
Abstract: Reliable routing of packets in a Mobile Ad Hoc Network (MANET) has always been a major concern. The open medium and the susceptibility of the nodes to faults make the design of protocols for these networks a challenging task. The faults in these networks, which occur either due to the failure of nodes or due to reorganization, can lead to packet loss. Such losses degrade the performance of the routing protocols running on them. In this paper, we propose a routing algorithm, named learning automata based fault-tolerant routing algorithm (LAFTRA), which is capable of routing in the presence of faulty nodes in MANETs using multipath routing. We have used the theory of Learning Automata (LA) for optimizing the selection of paths, reducing the overhead in the network, and learning about the faulty nodes present in the network. The proposed algorithm can be used alongside any existing routing protocol in a MANET. Simulation of our protocol using network simulator 2 (ns-2) shows an increase in packet delivery ratio and a decrease in overhead compared to existing protocols. The proposed protocol gains an edge over FTAR and E2FT by nearly 2%, and by more than 10% when compared with AODV, in terms of packet delivery ratio with nearly 30% faulty nodes in the network. The overhead generated by our protocol is about 1% lower than that of FTAR and nearly 17% lower than that of E2FT when there are nearly 30% faulty nodes.
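The learning-automata machinery behind this kind of path selection can be sketched with a standard linear reward-inaction update: the probability of a path whose packet was delivered is raised, and probabilities are left unchanged on failure. This is the generic L_{R-I} scheme, not LAFTRA's exact update, and the learning rate is an assumed value.

```python
# Linear reward-inaction (L_{R-I}) update over candidate paths.

def reward(probs, chosen, a=0.1):
    """On success, move probability mass toward the chosen path;
    the distribution stays normalized."""
    return [p + a * (1 - p) if i == chosen else p * (1 - a)
            for i, p in enumerate(probs)]

probs = [1 / 3, 1 / 3, 1 / 3]
for _ in range(10):            # path 0 keeps delivering packets
    probs = reward(probs, chosen=0)
```

After a run of successes on path 0 its selection probability dominates, which is how the automaton steers traffic away from paths through faulty nodes without any explicit fault map.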

Patent
24 Feb 2012
TL;DR: In this article, a linear combination of packets to transmit from a transmit queue is determined, and an acknowledgement (ACK) is generated, wherein a packet is acknowledged when a receiving node receives the linear combination and determines which packet of the linear combination has been newly seen.
Abstract: A method, apparatus and computer program product for providing network based flow control is presented. A linear combination of packets to transmit from a transmit queue is determined. The linear combination of packets is transmitted across a network using a sliding window protocol. An acknowledgement (ACK) is generated, wherein a packet is acknowledged when a receiving node receives the linear combination of packets and determines which packet of the linear combination of packets has been newly seen.
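The "seen" notion that drives the ACKs can be sketched with Gaussian elimination: the receiver keeps the received coefficient vectors in reduced form, and packet i counts as seen when some reduced combination has its leading (pivot) coefficient at index i, even if the packet cannot yet be decoded. Coefficients are taken over the rationals here for simplicity rather than a finite field.

```python
# Determine which packets are "seen" from received linear combinations.
from fractions import Fraction

def seen_packets(rows):
    """rows: list of coefficient vectors. Returns the set of seen indices."""
    basis = []
    for row in rows:
        r = [Fraction(x) for x in row]
        # reduce the new combination against every stored pivot
        for b in basis:
            pivot = next(i for i, x in enumerate(b) if x != 0)
            if r[pivot] != 0:
                factor = r[pivot] / b[pivot]
                r = [x - factor * y for x, y in zip(r, b)]
        if any(x != 0 for x in r):
            basis.append(r)
    return {next(i for i, x in enumerate(b) if x != 0) for b in basis}

# combinations p1+p2 and p2+p3 make packets 1 and 2 seen (not yet decoded)
acked = seen_packets([[1, 1, 0], [0, 1, 1]])
```

Acknowledging seen packets rather than decoded ones lets the sliding window advance even though no individual packet has been recovered yet, which is the point of the flow-control scheme.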

Proceedings ArticleDOI
David Smith1, Leif Hanlen1, Dino Miniutti1
01 Apr 2012
TL;DR: This work presents a predictor for real Body-Area-Network channels that is accurate for up to 2 seconds, even with a nominal channel coherence time of 500 ms, and shows that the accurate channel predictor does not translate into substantial reduction in packet loss or power usage over a simple sample-and-hold method.
Abstract: We present a predictor for real Body-Area-Network (BAN) channels that is accurate for up to 2 seconds, even with a nominal channel coherence time of 500 ms. The predictor utilizes the partial-periodicity of measured BAN channels using the previous 4 seconds of channel gain values. We demonstrate use of this predictor for power control with open-access and private channel measurements. When used under a realistic setting for IEEE 802.15.6, with packet loss less than 10%, we show that the accurate channel predictor does not translate into substantial reduction in packet loss or power usage over a simple sample-and-hold method, even though it is a more accurate predictor than sample-and-hold.

Proceedings ArticleDOI
21 May 2012
TL;DR: A fault-tolerant wireless “black channel” is achieved that is able to fulfill soft real-time availability plus providing redundancy, and reliability and performance characteristics are derived from measurements on an experimental setup with SafetyNET p nodes.
Abstract: WLAN according to the IEEE 802.11 standard is widely regarded as unsuitable as a communication channel for real-time and safety applications. Non-determinism and susceptibility to interference lead to packet loss and to excessive, variable latency due to retransmissions. This work proposes a method that compensates for such consequences of stochastic channel fading by the parallel operation of diverse wireless channels, applying frequency and space diversity techniques. A fault-tolerant wireless “black channel” is achieved that is able to fulfill soft real-time availability requirements while providing redundancy. This is realized with standard WLAN components and the “Parallel Redundancy Protocol” (PRP) according to IEC 62439-3. Reliability and performance characteristics are derived from measurements on an experimental setup with SafetyNET p nodes.
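The duplicate-discard idea at the heart of PRP can be sketched in a few lines: each frame is sent on two independent channels with the same sequence number, and the receiver delivers the first copy and drops the twin, so a loss on one channel is masked by the other. The window handling here is deliberately simplified versus the real IEC 62439-3 algorithm.

```python
# Simplified PRP-style duplicate discard at the receiver.

class DuplicateDiscard:
    def __init__(self):
        self.delivered = set()

    def accept(self, seq):
        """Return True if this frame should be delivered up the stack."""
        if seq in self.delivered:
            return False          # twin already arrived on the other channel
        self.delivered.add(seq)
        return True

rx = DuplicateDiscard()
# channel A delivers frames 1, 2; channel B delivers 1, 3 (2 was lost on B)
results = [rx.accept(s) for s in [1, 2, 1, 3]]
```

Note that frame 2, lost on channel B, still reaches the application via channel A with no retransmission delay, which is how the scheme converts stochastic per-channel loss into deterministic delivery as long as both channels do not fail at once.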

Patent
02 Nov 2012
TL;DR: In this paper, a packet inspection unit is coupled with a congestion unit and is configured to determine a level of congestion in the communication network, associated with a capacity of the wireless channel.
Abstract: Access nodes and methods adjust a bit rate of a data stream in a communication network. The access nodes and methods have a packet inspection unit configured to inspect one or more of the data packets to determine that the data stream includes video data. A congestion unit is coupled to the packet inspection unit and is configured to determine a level of congestion in the communication network, the level of congestion associated with a capacity of the wireless channel, the level of congestion capable of varying over time, and the capacity of the wireless channel capable of varying with the level of congestion. A video scaling unit is configured to adjust the bit rate of the data stream responsive to the packet inspection unit and the congestion unit.

Journal ArticleDOI
TL;DR: The state estimator needs to adjust to this new communication medium, as it is affected by interruptions from primary users that result in packet losses; two different cases are considered.
Abstract: Cognitive radio is a very popular research area in the communication community, as cost and bandwidth can be saved by sensing the available licensed spectrum for unlicensed users. This paves the way for the application of cognitive radio in control systems over wireless communication links. A typical control system comprises a sensor interconnected with an actuator, a plant, and a controller. In this paper, it is assumed that the sensor and estimator communicate through a cognitive radio link. The state estimator needs to adjust to this new communication medium, as it is affected by interruptions from primary users, resulting in packet losses. The cognitive radio link is modeled as multiple semi-Markov processes, each of which can capture the channel dynamics. Two different cases are considered. The first case assumes that acknowledgement of packet arrivals is available at the estimator, while the second case does not. For the first case, sufficient conditions are derived for the stability of the peak covariance process, which guarantees the stability of the estimator covariance. For the second case, an estimator is designed. Numerical examples are given to show the performance of the proposed estimators. Several applications of the proposed work are also discussed.
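The core difficulty the abstract describes, estimation over a lossy link, is commonly captured by the Kalman covariance recursion with intermittent observations: the measurement update is applied only when the packet arrives. The scalar sketch below (in the style of Sinopoli et al.'s intermittent-Kalman formulation, not this paper's semi-Markov model) shows how packet losses inflate the prediction-error covariance.

```python
def kalman_covariance_with_losses(a, c, q, r, p0, arrivals):
    """Scalar Kalman prediction-error covariance under packet loss.

    Recursion: P <- a^2 P + q - gamma * (a^2 c^2 P^2) / (c^2 P + r),
    where gamma = 1 if the sensor packet arrived this step, 0 if it was lost.
    When gamma = 0 the correction term vanishes and the covariance grows.
    """
    p = p0
    trace = []
    for gamma in arrivals:
        p = a * a * p + q - gamma * (a * a * c * c * p * p) / (c * c * p + r)
        trace.append(p)
    return trace
```

In the cognitive-radio setting of the paper, the arrival sequence `gamma` would be generated by the semi-Markov processes modeling primary-user activity rather than being i.i.d.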

Journal ArticleDOI
TL;DR: Under the aforementioned assumptions, delay-dependent conditions for the solvability of the addressed problem are presented in terms of linear matrix inequalities.

Journal ArticleDOI
TL;DR: This paper proposes a cooperative WBAN environment that supports multi-hop transmission through cooperation involving both environmental sensors and WBAN nodes that allows interaction between WBAN and environmental sensors in order to ensure data delivery from WBANs to a distant gateway.
Abstract: Wireless Body Area Networks (WBANs) have received significant attention in recent years, due to their potential for increasing efficiency in healthcare monitoring. Typical sensors used for WBANs are low-powered, single-transceiver devices utilizing a single channel for transmission at the Medium Access Control (MAC) layer. However, performance of these devices usually degrades when the density of sensors increases. One approach to counter this performance degradation is to exploit multiple channels at the MAC layer, where optimal usage of the channels is achieved through cooperation between the sensor nodes. In this paper, we propose a cooperative WBAN environment that supports multi-hop transmission through cooperation involving both environmental sensors and WBAN nodes. Our solution extends the cooperation at the MAC layer to a cross-layered, gradient-based routing solution that allows interaction between WBAN and environmental sensors in order to ensure data delivery from WBANs to a distant gateway. Extensive simulations for healthcare scenarios have been performed to validate the cooperation at the MAC layer, as well as the cross-layered gradient-based routing. Comparisons to other cooperative multi-channel MAC and routing solutions have shown the overall performance improvement of the proposed approach, evaluated in terms of packet loss, power consumption and delay.
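The gradient-based forwarding idea can be sketched in a few lines: each node holds a gradient value (e.g., hop distance to the gateway, established during a setup phase), and a packet is forwarded to the neighbor with the smallest gradient strictly below the node's own, whether that neighbor is a WBAN node or an environmental sensor. The function name and the hop-count gradient are illustrative assumptions, not the paper's exact cross-layer metric.

```python
def gradient_next_hop(node_gradient, neighbors):
    """Pick the next hop in gradient-based routing toward the gateway.

    neighbors maps neighbor id -> that neighbor's gradient value.
    Only "downhill" neighbors (gradient strictly smaller than ours) are
    eligible; among them, the one closest to the gateway is chosen.
    """
    candidates = {n: g for n, g in neighbors.items() if g < node_gradient}
    if not candidates:
        return None  # no downhill neighbor: the packet cannot progress here
    return min(candidates, key=candidates.get)
```

A multi-hop path then emerges implicitly: every forwarding step strictly decreases the gradient, so packets cannot loop and always move toward the gateway.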

Journal ArticleDOI
TL;DR: A novel cross-layer approach, which is referred to as Buffer-Aware Network Coding, or BANC, which allows transmission of some packets without network coding to reduce the packet delay and results will show that the proposed approach can strike the optimal tradeoff between power efficiency and QoS.
Abstract: Network coding, which can combine various traffic flows or packets via algebraic operations, has the potential of achieving substantial throughput and power efficiency gains in wireless networks. As such, it is considered as a powerful solution to meet the stringent demands and requirements of next-generation wireless systems. However, because of the random and asynchronous packet arrivals, network coding may result in severe delay and packet loss, because packets need to wait to be network-coded with each other. To overcome this and guarantee quality of service (QoS), we present a novel cross-layer approach, which we shall refer to as Buffer-Aware Network Coding, or BANC, which allows transmission of some packets without network coding to reduce the packet delay. We shall derive the average delay and power consumption of BANC by presenting a random mapping description of BANC and Markov models of buffer states. A cross-layer optimization problem that minimizes the average delay under a given power constraint is then proposed and analyzed. Its solution will not only demonstrate the fundamental performance limits of BANC in terms of the achievable delay region and delay-power tradeoff, but also obtain the delay-optimal BANC schemes. Simulation results will show that the proposed approach can strike the optimal tradeoff between power efficiency and QoS.
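The delay-versus-coding tradeoff behind BANC can be illustrated with a toy relay policy: when both flows have a packet buffered, XOR them into one coded transmission; when only one flow has a packet and it has waited too long, send it uncoded to bound the delay. This simplified, hypothetical policy (`banc_step`, `max_wait`) is our own sketch of the idea, not the paper's Markov-optimal scheme.

```python
from collections import deque

def banc_step(q1, q2, max_wait, clock):
    """One scheduling decision of a buffer-aware network-coding relay.

    q1, q2: deques of (arrival_time, payload_bytes) for the two flows.
    Returns ("coded", xor) when both queues are non-empty, ("uncoded", p)
    when a lone head-of-line packet has waited longer than max_wait,
    and ("idle", None) otherwise.
    """
    if q1 and q2:
        _, p1 = q1.popleft()
        _, p2 = q2.popleft()
        # One coded transmission serves both flows: throughput/power gain.
        return ("coded", bytes(a ^ b for a, b in zip(p1, p2)))
    for q in (q1, q2):
        if q and clock - q[0][0] > max_wait:
            _, p = q.popleft()
            return ("uncoded", p)  # give up on coding to bound the delay
    return ("idle", None)
```

Tuning `max_wait` moves the relay along the delay-power tradeoff curve the paper analyzes: a large value favors coding (power efficiency), a small value favors immediate transmission (low delay).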

Journal ArticleDOI
15 Aug 2012-Sensors
TL;DR: This paper presents a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes.
Abstract: Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic, which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, extended network lifetime and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) in terms of energy consumption, balancing and efficiency.
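The multi-criteria quality function could look roughly like the sketch below: a weighted combination of the route's minimum residual battery, its average energy, and a hop-count penalty, which the ACO metaheuristic would then use to reinforce pheromone on the best paths. The weights and the exact functional form are illustrative assumptions, not the paper's formula.

```python
def route_quality(residual_energies, w_min=0.5, w_avg=0.3, w_hop=0.2):
    """Quality of a candidate route from its nodes' residual battery levels.

    residual_energies: per-hop battery levels in [0, 1] along the route.
    Rewards a high minimum (no nearly-dead node on the path), a high
    average energy, and a short path (1/hops term).
    """
    hops = len(residual_energies)
    e_min = min(residual_energies)
    e_avg = sum(residual_energies) / hops
    return w_min * e_min + w_avg * e_avg + w_hop * (1.0 / hops)
```

Weighting the minimum residual battery most heavily is what steers traffic away from nearly depleted nodes, producing the balanced energy consumption the scheme aims for.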

Proceedings ArticleDOI
10 Jun 2012
TL;DR: This paper introduces a novel end-to-end congestion control algorithm called IA-TCP that avoids the TCP incast congestion problem effectively and validates that the algorithm is scalable in terms of the number of workers achieving enhanced goodput and zero timeouts.
Abstract: In recent years, data center networks commonly accommodate applications such as MapReduce and web search that inherently show the incast communication pattern: multiple workers simultaneously transmit TCP data to a single aggregator. In this environment, TCP performance is significantly degraded in terms of goodput and query completion time, as a result of severe packet loss at Top of Rack (ToR) switches. The TCP senders aggressively transmit packets, causing throughput collapse even though the network pipe size, i.e., the bandwidth-delay product, is extremely small. In this paper, we introduce a novel end-to-end congestion control algorithm called IA-TCP that avoids the TCP incast congestion problem effectively. IA-TCP employs a rate-based algorithm at the aggregator node, which controls both the window size of workers and the ACK delay. Through extensive NS-2 simulations, we validate that our algorithm is scalable in terms of the number of workers, achieving enhanced goodput and zero timeouts.
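The aggregator-side control the abstract describes, dividing the small pipe among many workers via window size and ACK delay, can be sketched as follows. When the fair share falls below one packet, the window cannot shrink further, so the ACK is delayed instead to stretch the effective RTT. The two-branch structure and the returned delay factor are our reading of the mechanism, not the paper's exact equations.

```python
def ia_tcp_shares(bdp_pkts, n_workers):
    """Aggregator-side per-worker (window, ack_delay_factor) assignment.

    bdp_pkts: bandwidth-delay product of the bottleneck path, in packets.
    The aggregate injection of all workers is kept at or below the pipe
    size, so the ToR switch buffer never overflows (no incast collapse).
    """
    share = bdp_pkts / n_workers
    if share >= 1.0:
        return int(share), 1.0      # window = fair share, ACKs unmodified
    # Fair share below one packet: keep window at 1 and delay ACKs so the
    # effective sending rate still sums to at most the pipe capacity.
    return 1, 1.0 / share
```

This is why the scheme stays scalable in the number of workers: beyond the point where windows reach their one-packet minimum, additional workers are absorbed by proportionally larger ACK delays rather than by switch-buffer overflow.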