
Showing papers on "Packet loss published in 2013"


Proceedings ArticleDOI
27 Aug 2013
TL;DR: In this paper, a minimalistic datacenter transport design that provides near theoretically optimal flow completion times even at the 99th percentile for short flows, while still minimizing average flow completion time for long flows is presented.
Abstract: In this paper we present pFabric, a minimalistic datacenter transport design that provides near theoretically optimal flow completion times even at the 99th percentile for short flows, while still minimizing average flow completion time for long flows. Moreover, pFabric delivers this performance with a very simple design that is based on a key conceptual insight: datacenter transport should decouple flow scheduling from rate control. For flow scheduling, packets carry a single priority number set independently by each flow; switches have very small buffers and implement a very simple priority-based scheduling/dropping mechanism. Rate control is also correspondingly simpler; flows start at line rate and throttle back only under high and persistent packet loss. We provide theoretical intuition and show via extensive simulations that the combination of these two simple mechanisms is sufficient to provide near-optimal performance.

765 citations
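pFabric's switch behavior is simple enough to sketch in a few lines. The toy model below (buffer size, packet representation, and tie-breaking are illustrative assumptions, not taken from the paper) shows a port that always transmits the highest-priority packet and, when its tiny buffer is full, evicts the lowest-priority one; lower numbers mean higher priority, e.g. remaining flow size:

```python
class PFabricPort:
    """Toy model of a pFabric switch port: a very small buffer with
    priority-based scheduling (dequeue the best) and dropping (evict
    the worst when full)."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.buffer = []  # list of (priority, packet_id); lower = better

    def enqueue(self, priority, packet_id):
        if len(self.buffer) < self.capacity:
            self.buffer.append((priority, packet_id))
            return True
        # Buffer full: keep the arriving packet only if it beats the
        # current worst-priority occupant.
        worst = max(self.buffer)
        if (priority, packet_id) < worst:
            self.buffer.remove(worst)
            self.buffer.append((priority, packet_id))
            return True
        return False  # arriving packet is the worst; drop it

    def dequeue(self):
        """Transmit the highest-priority buffered packet, if any."""
        if not self.buffer:
            return None
        best = min(self.buffer)
        self.buffer.remove(best)
        return best
```

With a capacity-2 port, enqueuing priorities 5, 3, then 1 evicts the priority-5 packet, and dequeues come out in priority order.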


Journal ArticleDOI
TL;DR: This paper studies an event-triggered communication scheme and an H∞ control co-design method for networked control systems (NCSs) with communication delay and packet loss, based on a novel Lyapunov-Krasovskii functional.

547 citations


Proceedings ArticleDOI
27 Aug 2013
TL;DR: This paper presents the design of novel loss recovery mechanisms for TCP that judiciously use redundant transmissions to minimize timeout-driven recovery and are compatible both with middleboxes and with TCP's existing congestion control and loss recovery.
Abstract: To serve users quickly, Web service providers build infrastructure closer to clients and use multi-stage transport connections. Although these changes reduce client-perceived round-trip times, TCP's current mechanisms fundamentally limit latency improvements. We performed a measurement study of a large Web service provider and found that, while connections with no loss complete close to the ideal latency of one round-trip time, TCP's timeout-driven recovery causes transfers with loss to take five times longer on average. In this paper, we present the design of novel loss recovery mechanisms for TCP that judiciously use redundant transmissions to minimize timeout-driven recovery. Proactive, Reactive, and Corrective are three qualitatively different, easily deployable mechanisms that (1) proactively recover from losses, (2) recover from them as quickly as possible, and (3) reconstruct packets to mask loss. Crucially, the mechanisms are compatible both with middleboxes and with TCP's existing congestion control and loss recovery. Our large-scale experiments on Google's production network, which serves billions of flows, demonstrate a 23% decrease in mean latency and a 47% decrease in 99th-percentile latency over today's TCP.

228 citations


Journal ArticleDOI
TL;DR: The idea is to design an Incast congestion Control for TCP (ICTCP) scheme on the receiver side that adjusts the TCP receive window proactively before packet loss occurs, and achieves almost zero timeouts and high goodput for TCP incast.
Abstract: Transport Control Protocol (TCP) incast congestion happens in high-bandwidth and low-latency networks when multiple synchronized servers send data to the same receiver in parallel. For many important data-center applications such as MapReduce and Search, this many-to-one traffic pattern is common. Hence TCP incast congestion may severely degrade their performances, e.g., by increasing response time. In this paper, we study TCP incast in detail by focusing on the relationships between TCP throughput, round-trip time (RTT), and receive window. Unlike previous approaches, which mitigate the impact of TCP incast congestion by using a fine-grained timeout value, our idea is to design an Incast congestion Control for TCP (ICTCP) scheme on the receiver side. In particular, our method adjusts the TCP receive window proactively before packet loss occurs. The implementation and experiments in our testbed demonstrate that we achieve almost zero timeouts and high goodput for TCP incast.

215 citations
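The receiver-side idea, adjusting the advertised window from the gap between expected and measured throughput before any loss appears, can be sketched as follows. The thresholds and the one-MSS step size here are illustrative assumptions; ICTCP's actual control law differs in detail:

```python
def adjust_receive_window(rwnd, rtt, measured_tput, mss=1460,
                          decrease_thresh=0.2, increase_thresh=0.1):
    """Receiver-side window adjustment in the spirit of ICTCP.

    Expected throughput is rwnd/RTT (bytes/sec). If measured goodput
    falls well short of it, shrink the receive window proactively,
    before packet loss occurs; if they nearly match, allow growth.
    Thresholds and step sizes are invented for this sketch.
    """
    expected_tput = rwnd / rtt
    diff_ratio = (expected_tput - measured_tput) / expected_tput
    if diff_ratio > decrease_thresh:
        return max(2 * mss, rwnd - mss)   # back off by one MSS
    if diff_ratio < increase_thresh:
        return rwnd + mss                 # room to grow
    return rwnd                           # hold steady in between
```

For example, with a 14600-byte window over a 1 ms RTT, expected throughput is 14.6 MB/s; measuring only 5 MB/s triggers a one-MSS decrease.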


Proceedings Article
02 Apr 2013
TL;DR: This work creates an engineered network and routing protocol that can almost instantaneously reestablish connectivity and load balance, even in the presence of multiple failures, and shows that following network link and switch failures, F10 has less than 1/7th the packet loss of current schemes.
Abstract: The data center network is increasingly a cost, reliability and performance bottleneck for cloud computing. Although multi-tree topologies can provide scalable bandwidth and traditional routing algorithms can provide eventual fault tolerance, we argue that recovery speed can be dramatically improved through the co-design of the network topology, routing algorithm and failure detector. We create an engineered network and routing protocol that directly address the failure characteristics observed in data centers. At the core of our proposal is a novel network topology that has many of the same desirable properties as FatTrees, but with much better fault recovery properties. We then create a series of failover protocols that benefit from this topology and are designed to cascade and complement each other. The resulting system, F10, can almost instantaneously reestablish connectivity and load balance, even in the presence of multiple failures. Our results show that following network link and switch failures, F10 has less than 1/7th the packet loss of current schemes. A trace-driven evaluation of MapReduce performance shows that F10's lower packet loss yields a 30% median application-level speedup.

208 citations


Journal ArticleDOI
TL;DR: A novel quality-aware adaptive concurrent multipath transfer solution (CMT-QA) that utilizes SCTP for FTP-like data transmission and real-time video delivery in wireless heterogeneous networks and outperforms existing solutions in terms of performance and quality of service.
Abstract: Mobile devices equipped with multiple network interfaces can increase their throughput by making use of parallel transmissions over multiple paths and bandwidth aggregation, enabled by the stream control transport protocol (SCTP). However, the different bandwidth and delay of the multiple paths will cause data to be received out of order, and in the absence of mechanisms to correct this, serious application-level performance degradations will occur. This paper proposes a novel quality-aware adaptive concurrent multipath transfer solution (CMT-QA) that utilizes SCTP for FTP-like data transmission and real-time video delivery in wireless heterogeneous networks. CMT-QA regularly monitors and analyses each path's data handling capability and makes data delivery adaptation decisions to select the qualified paths for concurrent data transfer. CMT-QA includes a series of mechanisms to distribute data chunks over multiple paths intelligently and control the data traffic rate of each path independently. CMT-QA's goal is to mitigate out-of-order data reception by reducing the reordering delay and unnecessary fast retransmissions. CMT-QA can effectively differentiate between different types of packet loss to avoid unreasonable congestion window adjustments for retransmissions. Simulations show how CMT-QA outperforms existing solutions in terms of performance and quality of service.

188 citations


Journal ArticleDOI
TL;DR: Various network conditions required for different control purposes are discussed, such as the minimum rate coding for stabilizability of linear systems in the presence of time-varying channel capacity, and the critical packet loss condition for stability of the Kalman filter.

171 citations


Proceedings Article
27 May 2013
TL;DR: This paper addresses the problem of placing controllers in SDNs so as to maximize the reliability of control networks, and develops several placement algorithms that can significantly improve the reliability of SDN control networks.
Abstract: The Software-Defined Network (SDN) approach decouples control and forwarding planes. Such separation introduces reliability design issues of the SDN control network, since disconnection between the control and forwarding planes may lead to severe packet loss and performance degradation. This paper addresses the problem of placing controllers in SDNs, so as to maximize the reliability of control networks. After presenting a metric to characterize the reliability of SDN control networks, several placement algorithms are developed. We evaluate these algorithms and further quantify the impact of controller number on the reliability of control networks using real topologies. Our approach can significantly improve the reliability of SDN control networks without introducing unacceptable latencies.

155 citations


01 Jan 2013
TL;DR: Since the accuracy of data is important to the whole system's performance, detecting nodes with faulty readings is an essential issue in network management; removing such nodes from a system, or replacing them with good ones, improves the whole system's performance and at the same time prolongs the lifetime of the network.
Abstract: Since the accuracy of data is important to the whole system's performance, detecting nodes with faulty readings is an essential issue in network management. Removing nodes with faulty readings from a system, or replacing them with good ones, improves the whole system's performance and at the same time prolongs the lifetime of the network. In general, wireless sensor nodes may experience two types of faults that would lead to the degradation of performance. One type is function fault, which typically results in the crash of individual nodes, packet loss, routing failure or network partition. The other type is data fault, in which a node behaves normally in all aspects except for its sensing results, leading to either significantly biased or random errors.

128 citations


Proceedings ArticleDOI
25 Jun 2013
TL;DR: This study shows that forecasting the short-term performance in cellular networks is possible in part due to the channel estimation scheme on the device and the radio resource scheduling algorithm at the base station, and develops a system interface called PROTEUS, which passively collects current network performance, such as throughput, loss, and one-way delay, and then uses regression trees to forecast future network performance.
Abstract: Real-time communication (RTC) applications such as VoIP, video conferencing, and online gaming are flourishing. To adapt and deliver good performance, these applications require accurate estimations of short-term network performance metrics, e.g., loss rate, one-way delay, and throughput. However, the wide variation in mobile cellular network performance makes running RTC applications on these networks problematic. To address this issue, various performance adaptation techniques have been proposed, but one common problem of such techniques is that they only adjust application behavior reactively after performance degradation is visible. Thus, proactive adaptation based on accurate short-term, fine-grained network performance prediction can be a preferred alternative that benefits RTC applications. In this study, we show that forecasting the short-term performance in cellular networks is possible in part due to the channel estimation scheme on the device and the radio resource scheduling algorithm at the base station. We develop a system interface called PROTEUS, which passively collects current network performance, such as throughput, loss, and one-way delay, and then uses regression trees to forecast future network performance. PROTEUS successfully predicts the occurrence of packet loss within a 0.5s time window for 98% of the time windows and the occurrence of long one-way delay for 97% of the time windows. We also demonstrate how PROTEUS can be integrated with RTC applications to significantly improve the perceptual quality. In particular, we increase the peak signal-to-noise ratio of a video conferencing application by up to 15dB and reduce the perceptual delay in a gaming application by up to 4s.

120 citations
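As a flavour of the forecasting step, the sketch below predicts loss in the next time window from features of the recent history. PROTEUS itself learns regression trees over throughput, loss, and one-way delay; this stand-in uses a single hand-set decision rule, and the window size and threshold are invented purely for illustration:

```python
def window_features(loss_history, window=10):
    """Summarize the last `window` samples (1 = packet lost, 0 = delivered):
    overall loss rate plus a crude trend (second half minus first half)."""
    recent = loss_history[-window:]
    loss_rate = sum(recent) / len(recent)
    trend = sum(recent[len(recent) // 2:]) - sum(recent[:len(recent) // 2])
    return loss_rate, trend

def forecast_loss(loss_history, window=10, threshold=0.05):
    """Predict whether the next window will see loss: a single decision
    stump standing in for PROTEUS's learned regression trees."""
    loss_rate, trend = window_features(loss_history, window)
    return loss_rate > threshold or trend > 0
```

A real deployment would replace the stump with trees trained per network, as the paper describes.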


Journal ArticleDOI
TL;DR: In this paper, robust and reliable H∞ filter design for a class of nonlinear networked control systems is investigated; four new theorems are proved to cover the conditions for the robust mean-square stability of the systems under study in terms of LMIs, and a decoupling method for the filter design is developed.
Abstract: This paper investigates robust and reliable H∞ filter design for a class of nonlinear networked control systems: (i) a T-S fuzzy model with its own uncertainties is used to approximate the nonlinear dynamics of the plant; (ii) a new sensor failure model with uncertainties is proposed; and (iii) the signal transfer of the closed-loop system is under a networked communication scheme and is therefore subject to time delay, packet loss, and/or packet reordering. Four new theorems are proved to cover the conditions for the robust mean-square stability of the systems under study in terms of LMIs, and a decoupling method for the filter design is developed. Two examples, one based on a model of an inverted pendulum, are provided to demonstrate the design method. Copyright © 2011 John Wiley & Sons, Ltd.

Patent
24 May 2013
TL;DR: Methods and systems for selective packet capture are described, based on packet capture rules that each pair a trigger condition with an action to perform when the trigger condition is detected.
Abstract: Methods and systems for providing selective packet capture are described. One example method includes identifying a packet capture rule from a set of packet capture rules, the packet capture rule including a trigger condition and an action to perform when the trigger condition is detected; monitoring a network flow to detect whether the network flow satisfies the packet capture rule's trigger condition, wherein monitoring the network flow includes analyzing one or more packets included in the network flow to determine a set of protocol metadata associated with the network flow; and selectively performing the action associated with the packet capture rule on the network flow based on a result of the monitoring.
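The trigger/action structure from the abstract maps naturally onto a small rule engine. A hypothetical sketch follows; the rule fields and flow-metadata keys are invented for illustration, not taken from the patent:

```python
class CaptureRule:
    """A packet capture rule: a trigger predicate over flow metadata plus
    an action to perform when the trigger condition is detected."""

    def __init__(self, trigger, action):
        self.trigger = trigger  # callable: flow metadata -> bool
        self.action = action    # callable invoked on matching flows

def monitor_flow(flow_metadata, rules):
    """Selectively perform each matching rule's action on the flow,
    returning the action results."""
    return [rule.action(flow_metadata)
            for rule in rules
            if rule.trigger(flow_metadata)]
```

For example, a rule that triggers on more than three retransmissions would fire its capture action only for flows whose metadata crosses that threshold.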

Journal ArticleDOI
TL;DR: It is proposed that the flocking behavior of birds can guide the design of a robust, scalable and self-adaptive congestion control protocol in the context of wireless sensor networks (WSNs).

Journal ArticleDOI
TL;DR: An energy efficient MAC protocol for WSNs is presented that avoids overhearing and reduces contention and delay by asynchronously scheduling the wakeup time of neighboring nodes and an energy consumption analysis for multi-hop networks is provided.

Journal ArticleDOI
TL;DR: This article presents the design and implementation of SenseCode, a collection protocol for sensor networks—and, to the best of the knowledge, the first such implemented protocol to employ network coding, and shows that it reduces end-to-end packet error rate in highly dynamic environments, while consuming a comparable amount of network resources.
Abstract: Designing a communication protocol for sensor networks often involves obtaining the right trade-off between energy efficiency and end-to-end packet error rate. In this article, we show that network coding provides a means to elegantly balance these two goals. We present the design and implementation of SenseCode, a collection protocol for sensor networks—and, to the best of our knowledge, the first such implemented protocol to employ network coding. SenseCode provides a way to gracefully introduce a configurable amount of redundant information into the network, thereby decreasing end-to-end packet error rate in the face of packet loss. We compare SenseCode to the best (to our knowledge) existing alternative and show that it reduces end-to-end packet error rate in highly dynamic environments, while consuming a comparable amount of network resources. We have implemented SenseCode as a TinyOS module and evaluate it through extensive TOSSIM simulations.
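The redundancy/efficiency balance that network coding offers can be illustrated with the simplest possible code: one XOR parity packet per generation of equal-length packets, which lets the sink rebuild any single lost packet. SenseCode's actual coding scheme is more general; this is only a flavour of the idea:

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_with_parity(packets):
    """Append one XOR parity packet to a generation of equal-length
    packets: the minimal, configurable-redundancy-of-one case."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return list(packets) + [parity]

def recover(received, lost_index):
    """Reconstruct the single missing packet (marked None or by index)
    by XOR-ing all surviving packets, parity included."""
    out = None
    for i, p in enumerate(received):
        if i == lost_index or p is None:
            continue
        out = p if out is None else xor_bytes(out, p)
    return out
```

One parity packet per generation recovers any one loss; tolerating more losses requires a stronger code, which is exactly the configurable trade-off the protocol exposes.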

BookDOI
01 Jan 2013
TL;DR: High-Performance Network Traffic Processing Systems Using Commodity Hardware and Active Techniques for Available Bandwidth Estimation: Comparison and Application.
Abstract: High-Performance Network Traffic Processing Systems Using Commodity Hardware.- Active Techniques for Available Bandwidth Estimation: Comparison and Application.- Internet Topology Discovery.- Internet PoP Level Maps.- Analysis of Packet Transmission Processes in Peer-to-Peer Networks by Statistical Inference Methods.- Reviewing Traffic Classification.- A Methodological Overview on Anomaly Detection.- Changepoint Detection Techniques for VoIP Traffic.- Distribution-Based Anomaly Detection in Network Traffic.- From Packets to People: Quality of Experience as a New Measurement Challenge.- Internet Video Delivery in YouTube: From Traffic Measurements to Quality of Experience.- Quality Evaluation in Peer-to-Peer IPTV Services.- Cross-Layer FEC-Based Mechanism for Packet Loss Resilient Video Transmission.- Approaches for Utility-Based QoE-Driven Optimization of Network Resource Allocation for Multimedia Services.

Journal ArticleDOI
TL;DR: A fuzzy predictive controller that guarantees the stability of the closed-loop system is obtained by solving a set of linear matrix inequalities (LMIs), with quantization effects described as sector-bound uncertainties via the sector bound approach.
Abstract: This paper studies the approach of model predictive control (MPC) for non-linear systems in a networked environment where both data quantization and packet loss may occur. The non-linear controlled plant in the networked control system (NCS) is represented by a Takagi-Sugeno (T-S) model. The sensed data and control signal are quantized in both links and described as sector-bound uncertainties by applying the sector bound approach. The quantized data are then transmitted over the communication networks and may suffer from packet losses, which are modeled as a Bernoulli process. A fuzzy predictive controller which guarantees the stability of the closed-loop system is obtained by solving a set of linear matrix inequalities (LMIs). A numerical example is given to illustrate the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a fully distributed algorithm for joint clock skew and offset estimation in wireless sensor networks based on belief propagation, which does not require any centralized information processing or coordination, and is scalable with network size.
Abstract: In this paper, we propose a fully distributed algorithm for joint clock skew and offset estimation in wireless sensor networks based on belief propagation. In the proposed algorithm, each node can estimate its clock skew and offset in a completely distributed and asynchronous way: some nodes may update their estimates more frequently than others, using outdated messages from neighboring nodes. In addition, the proposed algorithm is robust to random packet loss. The algorithm does not require any centralized information processing or coordination, and is scalable with network size. It represents a unified framework that encompasses both classes of synchronous and asynchronous algorithms for network-wide clock synchronization. It is shown analytically that the proposed asynchronous algorithm converges to the optimal estimates, with estimation mean-square-error at each node approaching the centralized Cramer-Rao bound under any network topology. Simulation results further show that the convergence speed is faster than that of a synchronous algorithm.

Proceedings ArticleDOI
20 May 2013
TL;DR: A lightweight and energy-efficient joint mechanism, called AJIA (Adaptive Joint protocol based on Implicit ACK), is proposed for packet loss recovery and route quality evaluation in the IoT.
Abstract: This paper addresses the Internet of Things (IoT) paradigm, which is gaining substantial ground in modern wireless telecommunications. The IoT describes a vision where heterogeneous objects like computers, sensors, Radio-Frequency IDentification (RFID) tags or mobile phones are able to communicate and cooperate efficiently to achieve common goals, thanks to a common IP addressing scheme. This paper focuses on the reliability of emergency applications under IoT technology. These applications' success is contingent upon the delivery of high-priority events from many scattered objects to one or more objects without packet loss. Thus, the network has to be self-adaptive and resilient to errors, providing efficient mechanisms for information distribution, especially in the multi-hop scenario. As a future perspective, we propose a lightweight and energy-efficient joint mechanism, called AJIA (Adaptive Joint protocol based on Implicit ACK), for packet loss recovery and route quality evaluation in the IoT. In this protocol, we use the overhearing feature characterizing wireless channels as an implicit ACK mechanism. In addition, the protocol allows for an adaptive selection of the routing path based on link quality.

Journal ArticleDOI
TL;DR: It is proved that the MEMTCS problem is NP-hard and unlikely to have an approximation algorithm with a performance ratio of (1 - o(1)) ln Δ, where Δ is the maximum node degree in a network, and a polynomial-time approximation algorithm is proposed.
Abstract: In duty-cycled wireless sensor networks, the nodes switch between active and dormant states, and each node may determine its active/dormant schedule independently. This complicates the Minimum-Energy Multicasting (MEM) problem, which was primarily studied in always-active wireless ad hoc networks. In this paper, we study the duty-cycle-aware MEM problem in wireless sensor networks both for one-to-many multicasting and for all-to-all multicasting. In the case of one-to-many multicasting, we present a formalization of the Minimum-Energy Multicast Tree Construction and Scheduling (MEMTCS) problem. We prove that the MEMTCS problem is NP-hard, and it is unlikely to have an approximation algorithm with a performance ratio of (1 - o(1)) ln Δ, where Δ is the maximum node degree in a network. We propose a polynomial-time approximation algorithm for the MEMTCS problem with a performance ratio of O(H(Δ + 1)), where H(·) is the harmonic number. In the case of all-to-all multicasting, we prove that the Minimum-Energy Multicast Backbone Construction and Scheduling (MEMBCS) problem is also NP-hard and present an approximation algorithm for it, which has the same approximation ratio as that of the proposed algorithm for the MEMTCS problem. We also provide a distributed implementation of our algorithms, as well as a simple but efficient collision-free scheduling scheme to avoid packet loss. Finally, we perform extensive simulations, and the results demonstrate that our algorithms significantly outperform other known algorithms in terms of the total transmission energy cost, without sacrificing much of the delay performance.

Journal ArticleDOI
TL;DR: The results demonstrated that EEQR has prolonged the network and coverage lifetime, as well as has improved the other QoS routing parameters, such as delay, packet loss ratio, and throughput.

Journal ArticleDOI
04 Jan 2013-Sensors
TL;DR: An adaptive RTO method is proposed, which consists of multiplying a smoothed round-trip time by a constant parameter (K), so that the reliability mechanisms of MQTT-S and CoAP can react properly to packet loss while remaining lightweight in terms of energy, memory and computing for sensor nodes, where these resources are critical.
Abstract: MQTT-S and CoAP are two protocols able to use the publish/subscribe model in Wireless Sensor Networks (WSNs). The high scalability provided by the publish/subscribe model may incur high packet loss and therefore requires an efficient reliability mechanism to cope with this situation. The reliability mechanisms of MQTT-S and CoAP employ a fixed value for the retransmission timeout (RTO). This article argues that this method is not efficient for deploying publish/subscribe in WSNs, because it may be unable to recover a packet, resulting in a lower packet delivery ratio (PDR) at the subscriber nodes. This article proposes and evaluates an adaptive RTO method, which consists of multiplying a smoothed round-trip time by a constant parameter (K). Thanks to this method, the reliability mechanisms of MQTT-S and CoAP can react properly to packet loss while remaining lightweight in terms of energy, memory and computing for sensor nodes, where these resources are critical. We present a detailed evaluation of the effects of the K value on the adaptive RTO method, and we establish the settings that obtain the highest PDR at the subscriber nodes for single-hop and multi-hop scenarios. The results for the single-hop scenario show that using the appropriate K value in the adaptive RTO method increases the PDR by up to 76% for MQTT-S and up to 38% for CoAP, compared with the fixed RTO method. In the multi-hop scenario, the adaptive RTO method increases the PDR by up to 36% for MQTT-S and up to 14% for CoAP.
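The proposed method is easy to state in code: smooth the RTT samples, then scale by K. The smoothing factor below is borrowed from classic TCP RTT estimation (RFC 6298) as an illustrative assumption; the article evaluates the choice of K, not of the smoothing factor:

```python
ALPHA = 0.125  # SRTT smoothing factor, borrowed from RFC 6298 as an
               # illustrative assumption; the article's constant may differ

def update_srtt(srtt, rtt_sample, alpha=ALPHA):
    """Exponentially smoothed round-trip time."""
    return (1 - alpha) * srtt + alpha * rtt_sample

def adaptive_rto(srtt, k):
    """The article's adaptive method: RTO = K * SRTT, with K a constant
    parameter tuned per deployment (single-hop vs multi-hop)."""
    return k * srtt
```

For instance, an SRTT of 100 ms updated with a 200 ms sample becomes 112.5 ms; with K = 4 the resulting RTO is 450 ms.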

Proceedings ArticleDOI
13 Oct 2013
TL;DR: An energy-oriented routing mechanism is proposed to improve the RPL routing protocol by combining the expected transmission count (ETX) and remaining energy metrics; simulations are conducted to analyze the performance of the proposed mechanism.
Abstract: The design and implementation of healthcare systems using sensor network technology is currently one of the most active research topics. Research on wireless sensor networks mostly focuses on energy saving, such as duty cycle scheduling, and on path routing, such as the routing protocol for low power and lossy networks (RPL), which however takes only a single metric, reliability or energy, into account as the routing decision. If RPL considers only the reliability metric, nodes will suffer from uneven energy consumption. If it considers only the energy metric, nodes will suffer from a rise in packet loss ratio. In this paper, we propose an energy-oriented routing mechanism to improve the RPL routing protocol by combining the expected transmission count (ETX) and remaining energy metrics. Simulations are conducted to analyze the performance of the proposed mechanism.
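A hypothetical sketch of combining ETX with remaining energy into a single parent-selection rank follows; the weighting and normalization are invented for illustration, and the paper's actual objective function may differ:

```python
def combined_rank(etx, residual_energy, max_energy, weight=0.5):
    """Blend link reliability (ETX, lower is better) with the node's
    depleted-energy fraction. Lower rank = better parent candidate.
    The 50/50 weight is an illustrative assumption."""
    energy_cost = 1.0 - residual_energy / max_energy
    return weight * etx + (1 - weight) * energy_cost

def pick_parent(candidates, **kw):
    """candidates: list of (node_id, etx, residual_energy, max_energy).
    Returns the id of the lowest-rank candidate."""
    return min(candidates,
               key=lambda c: combined_rank(c[1], c[2], c[3], **kw))[0]
```

This captures the trade-off the abstract describes: a nearly drained node with a slightly better ETX loses to a fresh node with a slightly worse one.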

Journal ArticleDOI
TL;DR: This paper presents an in-depth comparison of the TCP/IP engine of the Contiki embedded operating system to support both trickle multicast (TM) and SMRF, and demonstrates that SMRF achieves significant delay and energy efficiency improvements at the cost of a small increase in packet loss.
Abstract: In wireless sensor deployments, network layer multicast can be used to improve the bandwidth and energy efficiency for a variety of applications, such as service discovery or network management. However, despite efforts to adopt IPv6 in networks of constrained devices, multicast has been somewhat overlooked. The Multicast Forwarding Using Trickle (Trickle Multicast) internet draft is one of the most noteworthy efforts. The specification of the IPv6 routing protocol for low power and lossy networks (RPL) also attempts to address the area but leaves many questions unanswered. In this paper we highlight our concerns about both these approaches. Subsequently, we present our alternative mechanism, called stateless multicast RPL forwarding algorithm (SMRF), which addresses the aforementioned drawbacks. Having extended the TCP/IP engine of the Contiki embedded operating system to support both trickle multicast (TM) and SMRF, we present an in-depth comparison, backed by simulated evaluation as well as by experiments conducted on a multi-hop hardware testbed. Results demonstrate that SMRF achieves significant delay and energy efficiency improvements at the cost of a small increase in packet loss. The outcome of our hardware experiments show that simulation results were realistic. Lastly, we evaluate both algorithms in terms of code size and memory requirements, highlighting SMRF's low implementation complexity. Both implementations have been made available to the community for adoption.

Journal ArticleDOI
TL;DR: A method for dynamic congestion detection and control routing (DCDR) in ad hoc networks based on the estimations of the average queue length at the node level is proposed, which showed better performance than the EDOCR, EDCSCAODV, EDAODV and AODV routing protocols.
Abstract: In mobile ad hoc networks (MANETs), congestion can occur in any intermediate node, often due to limitation in resources, when data packets are being transmitted from the source to the destination. Congestion will lead to high packet loss, long delay and wasted network resources. The primary objective of congestion control is to best utilize the available network resources and keep the load below the capacity. TCP's congestion control techniques have been found inadequate to handle congestion in ad hoc networks, because ad hoc networks involve special challenges like high node mobility and frequent topology changes. This paper proposes a method for dynamic congestion detection and control routing (DCDR) in ad hoc networks based on estimates of the average queue length at the node level. Using the average queue length, a node detects the present congestion level and sends a warning message to its neighbors. The neighbors then attempt to locate a congestion-free alternative path to the destination. This dynamic congestion estimation mechanism supporting congestion control in ad hoc networks ensures reliable communication within the MANET. According to our simulation results, the DCDR showed better performance than the EDOCR, EDCSCAODV, EDAODV and AODV routing protocols.
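The node-level estimate at the heart of DCDR, an average queue length that triggers warnings to neighbors, can be sketched as an exponentially weighted average with coarse thresholds. The weight and threshold values below are illustrative assumptions, not the paper's parameters:

```python
def avg_queue_len(prev_avg, sample, weight=0.2):
    """Exponentially weighted average queue length; RED-style estimators
    use a similar form. The weight is an illustrative choice."""
    return (1 - weight) * prev_avg + weight * sample

def congestion_level(avg, capacity, warn=0.5, critical=0.8):
    """Map average queue occupancy to a coarse congestion level, as a
    DCDR node would before warning its neighbors. Thresholds invented."""
    ratio = avg / capacity
    if ratio >= critical:
        return 'congested'
    if ratio >= warn:
        return 'warning'
    return 'normal'
```

A node crossing into 'warning' or 'congested' would broadcast the message that prompts neighbors to search for a congestion-free alternative path.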

Journal ArticleDOI
TL;DR: A congestion control and service prioritization protocol for real time monitoring of patients’ vital signs using wireless biomedical sensor networks that can detect an anomaly in the received vital signs from a patient and hence assign more priority to patients in need.
Abstract: Recent developments in biosensor and wireless technology have led to a rapid progress in wearable real time health monitoring. Unlike wired networks, wireless networks are subject to more packet loss and congestion. In this paper, we propose a congestion control and service prioritization protocol for real time monitoring of patients' vital signs using wireless biomedical sensor networks. The proposed system is able to discriminate between physiological signals and assign them different priorities. Thus, it would be possible to provide a better quality of service for transmitting highly important vital signs. Congestion control is performed by considering both the congestion situation in the parent node and the priority of the child nodes in assigning network bandwidth to signals from different patients. Given the dynamic nature of patients' health conditions, the proposed system can detect an anomaly in the received vital signs from a patient and hence assign more priority to patients in need. Simulation results confirm the superior performance of the proposed protocol. To our knowledge, this is the first attempt at a special-purpose congestion control protocol specifically designed for wireless biosensor networks.

Proceedings ArticleDOI
17 Jul 2013
TL;DR: The infinite-horizon LQG control problem is explored, conditions that ensure its convergence are investigated, and it is shown how the results presented in this paper can be applied when the observation packets may also be dropped.
Abstract: This paper is concerned with the optimal LQG control of a system through lossy data networks. In particular, we focus on the case where control commands are issued to the system over a communication network in which packets may be randomly dropped according to a two-state Markov chain. Under these assumptions, the optimal finite-horizon LQG problem is solved by means of dynamic programming arguments. The infinite-horizon LQG control problem is explored and conditions that ensure its convergence are investigated. Finally, it is shown how the results presented in this paper can be applied when the observation packets may also be dropped. A numerical simulation shows the relationship between the convergence of the LQG cost and the values of the parameters of the Markov chain.
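A toy closed-loop simulation conveys the effect of control-packet drops in this setting. The scalar plant, the gain, the disturbance, and the arrival sequence below are illustrative assumptions; for simplicity the lossy channel is shown as a fixed delivery pattern rather than a simulated two-state Markov chain:

```python
a, b = 1.1, 1.0                 # scalar open-loop unstable plant
K = -a / b                      # stabilizing deadbeat gain (assumed)
w = 0.5                         # constant disturbance, keeps the state excited
arrivals = [1, 1, 0, 1, 0, 0, 1, 1]   # 1 = control packet delivered, 0 = dropped

x, trace = 1.0, []
for nu in arrivals:
    u = K * x if nu else 0.0    # actuator applies zero input on a drop
    x = a * x + b * u + w       # x_{k+1} = a*x_k + b*u_k + w
    trace.append(round(x, 4))
print(trace)  # -> [0.5, 0.5, 1.05, 0.5, 1.05, 1.655, 0.5, 0.5]
```

Each drop lets the unstable open-loop mode grow (1.05, then 1.655 after two consecutive drops) until the next delivered packet pulls the state back, which is the intuition behind the convergence conditions tying the LQG cost to the Markov chain's parameters.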

Journal ArticleDOI
TL;DR: This paper describes the architecture for a system consisting of a robotic manipulator controlled by a digital controller over a wireless network and shows that the system is stable even in the presence of time-varying delays and is insensitive to network uncertainties.
Abstract: Real-life cyber-physical systems, such as automotive vehicles, building automation systems, and groups of unmanned vehicles, are monitored and controlled by networked control systems (NCS). The overall system dynamics emerges from the interaction among physical dynamics, computational dynamics, and communication networks. Network uncertainties such as time-varying delay and packet loss pose significant challenges. This paper proposes a passive control architecture for designing NCS that are insensitive to network uncertainties. We describe the architecture for a system consisting of a robotic manipulator controlled by a digital controller over a wireless network and show that the system is stable even in the presence of time-varying delays. Experimental results demonstrate the advantages of the passivity-based architecture with respect to stability and performance and show that the system is insensitive to network uncertainties.

Book ChapterDOI
18 Mar 2013
TL;DR: This study, following on previous active measurement studies over the past decade, shows marked and continued increase in the deployment of ECN-capable servers, and usability ofECN on the majority of paths to such servers.
Abstract: Explicit Congestion Notification (ECN) is a TCP/IP extension that can avoid packet loss and thus improve network performance. Though standardized in 2001, it is barely used in today's Internet. This study, following on previous active measurement studies over the past decade, shows marked and continued increase in the deployment of ECN-capable servers, and usability of ECN on the majority of paths to such servers. We additionally present new measurements of ECN on IPv6, passive observation of actual ECN usage from flow data, and observations on other congestion-relevant TCP options (SACK, Timestamps and Window Scaling). We further present initial work on burst loss metrics for loss-based congestion control following from our findings.
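For context, the ECN field occupies the two low-order bits of the IP TOS/traffic-class byte (RFC 3168). A minimal sketch of marking a UDP socket's packets as ECN-Capable Transport, ECT(0), is shown below; note that the TCP-level ECN negotiation (the ECE/CWR handshake that studies like this one probe) is handled by the kernel, and Linux masks ECN bits set through IP_TOS on stream sockets:

```python
import socket

ECT0 = 0x02   # ECN-Capable Transport, codepoint ECT(0) per RFC 3168

# UDP socket: the kernel preserves ECN bits set via IP_TOS for datagrams
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos & 0x03)   # the two ECN bits of the TOS byte
s.close()
```

Outgoing datagrams from this socket carry ECT(0), so an ECN-capable router under congestion may mark them CE instead of dropping them, which is precisely the loss-avoidance benefit the measurement study quantifies.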

Patent
22 Nov 2013
TL;DR: In this article, the authors present methods, systems, paradigms and structures for processing data packets in a communication network by a multi-core network processor, including a plurality of multi-threaded core processors and special purpose processors for processing the data packets atomically and in parallel.
Abstract: Disclosed are methods, systems, paradigms and structures for processing data packets in a communication network by a multi-core network processor. The network processor includes a plurality of multi-threaded core processors and special purpose processors for processing the data packets atomically, and in parallel. An ingress module of the network processor stores the incoming data packets in the memory and adds them to an input queue. The network processor processes a data packet by performing a set of network operations on the data packet in a single thread of a core processor. The special purpose processors perform a subset of the set of network operations on the data packet atomically. An egress module retrieves the processed data packets from a plurality of output queues based on a quality of service (QoS) associated with the output queues, and forwards the data packets towards their destination addresses.