
Showing papers on "Packet loss published in 2005"


Book ChapterDOI
24 Feb 2005
TL;DR: Chainsaw, a p2p overlay multicast system that completely eliminates trees, is presented; simulations show that Chainsaw has a short startup time, good resilience to catastrophic failure, and essentially no packet loss.
Abstract: In this paper, we present Chainsaw, a p2p overlay multicast system that completely eliminates trees. Peers are notified of new packets by their neighbors and must explicitly request a packet from a neighbor in order to receive it. This way, duplicate data can be eliminated and a peer can ensure it receives all packets. We show with simulations that Chainsaw has a short startup time, good resilience to catastrophic failure and essentially no packet loss. We support this argument with real-world experiments on Planetlab and compare Chainsaw to Bullet and Splitstream using MACEDON.

436 citations
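The pull-based exchange Chainsaw describes — neighbors advertise new packet IDs and data moves only on explicit request — can be sketched as a toy illustration. This is not the authors' implementation; the `Peer` class and its method names are hypothetical.

```python
class Peer:
    """Toy sketch of Chainsaw-style pull-based dissemination (names hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.have = set()       # packet ids this peer holds
        self.requested = set()  # ids already requested, to avoid duplicate pulls

    def notify(self, neighbor, packet_id):
        # The neighbor learns the id exists; it pulls only if missing.
        if packet_id not in neighbor.have and packet_id not in neighbor.requested:
            neighbor.requested.add(packet_id)
            return neighbor.request(self, packet_id)
        return False

    def request(self, owner, packet_id):
        # Explicit pull: data flows only on request, so no duplicate payloads.
        if packet_id in owner.have:
            self.have.add(packet_id)
            self.requested.discard(packet_id)
            return True
        return False

a, b = Peer("a"), Peer("b")
a.have.add(1)
a.notify(b, 1)         # b pulls packet 1
a.notify(b, 1)         # second notification is ignored: b already has it
print(sorted(b.have))  # [1]
```

Because a peer requests each packet ID at most once, duplicate data is eliminated by construction, which is the property the abstract emphasizes.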


Proceedings ArticleDOI
02 May 2005
TL;DR: rcc, the router configuration checker, detects two broad classes of BGP configuration faults: route validity faults, where routers may learn routes that do not correspond to usable paths, and path visibility faults, where routers may fail to learn routes for paths that exist in the network.
Abstract: The Internet is composed of many independent autonomous systems (ASes) that exchange reachability information to destinations using the Border Gateway Protocol (BGP). Network operators in each AS configure BGP routers to control the routes that are learned, selected, and announced to other routers. Faults in BGP configuration can cause forwarding loops, packet loss, and unintended paths between hosts, each of which constitutes a failure of the Internet routing infrastructure. This paper describes the design and implementation of rcc, the router configuration checker, a tool that finds faults in BGP configurations using static analysis. rcc detects faults by checking constraints that are based on a high-level correctness specification. rcc detects two broad classes of faults: route validity faults, where routers may learn routes that do not correspond to usable paths, and path visibility faults, where routers may fail to learn routes for paths that exist in the network. rcc enables network operators to test and debug configurations before deploying them in an operational network, improving on the status quo where most faults are detected only during operation. rcc has been downloaded by more than sixty-five network operators to date, some of whom have shared their configurations with us. We analyze network-wide configurations from 17 different ASes to detect a wide variety of faults and use these findings to motivate improvements to the Internet routing infrastructure.

364 citations


Journal ArticleDOI
TL;DR: The problems TCP exhibits in the wireless IP communication environment are analyzed, viable solutions are illustrated by detailed examples, and the standard TCP protocol is modified for improved performance.
Abstract: The Internet provides a platform for rapid and timely information exchange among a disparate array of clients and servers. TCP and IP are separately designed and closely tied protocols that define the rules of communication between end hosts, and are the most commonly used protocol suite for data transfer in the Internet. The combination of TCP/IP dominates today's communication in various networks from the wired backbone to the heterogeneous network. Due to its remarkable simplicity and reliability, TCP has become the de facto standard used in most applications ranging from interactive sessions such as Telnet and HTTP, to bulk data transfer like FTP. TCP was originally designed primarily for wired networks. In a wired network, random bit error rate, a characteristic usually more pronounced in the wireless network, is negligible, and congestion is the main cause of packet loss. The emerging wireless applications, especially high-speed multimedia services and the advent of wireless IP communications carried by the Internet, call for calibration and sophisticated enhancement or modifications of this protocol suite for improved performance. Based on the assumption that packet losses are signals of network congestion, the additive increase multiplicative decrease congestion control of the standard TCP protocol reaches the steady state, which reflects the protocol's efficiency in terms of throughput and link utilization. However, this assumption does not hold when the end-to-end path also includes wireless links. Factors such as high BER, unstable channel characteristics, and user mobility may all contribute to packet losses. Many studies have shown that the unmodified standard TCP performs poorly in a wireless environment due to its inability to distinguish packet losses caused by network congestion from those attributed to transmission errors.
In this article, following a brief introduction to TCP, we analyze the problems TCP exhibits in the wireless IP communication environment, and illustrate viable solutions by detailed examples.

354 citations
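The additive-increase/multiplicative-decrease behavior discussed above, and why it penalizes wireless links, can be illustrated with a minimal sketch. The `aimd_window` helper and its parameters are illustrative, not from the article: the point is that standard TCP halves its window on every loss signal, including losses caused by random bit errors rather than congestion.

```python
def aimd_window(losses, rounds, add=1.0, mult=0.5, init=1.0):
    """Additive-increase/multiplicative-decrease window trace (simplified sketch).

    `losses` is a set of round indices where a packet loss is signaled.
    Standard TCP halves the window on every loss, whether the loss came
    from congestion or from wireless bit errors."""
    w, trace = init, []
    for r in range(rounds):
        if r in losses:
            w = max(1.0, w * mult)  # multiplicative decrease on any loss signal
        else:
            w += add                # additive increase otherwise
        trace.append(w)
    return trace

# A single random-error loss on an uncongested wireless path still halves the window:
print(aimd_window({3}, 6))  # [2.0, 3.0, 4.0, 2.0, 3.0, 4.0]
```

This is exactly the misinterpretation the abstract describes: the sender cannot tell whether round 3's loss was congestion or a transmission error, so it backs off either way.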


Journal ArticleDOI
01 Apr 2005
TL;DR: A protocol that supports the sharing of resources that exist in different packet switching networks is presented and provides for variation in individual network packet sizes, transmission failures, sequencing, flow control, end-to-end error checking, and the creation and destruction of logical process-to-process connections.
Abstract: A protocol that supports the sharing of resources that exist in different packet switching networks is presented. The protocol provides for variation in individual network packet sizes, transmission failures, sequencing, flow control, end-to-end error checking, and the creation and destruction of logical process-to-process connections. Some implementation issues are considered, and problems such as internetwork routing, accounting, and timeouts are exposed.

342 citations


Proceedings ArticleDOI
30 Apr 2005
TL;DR: A technique to detect and recover messages from packet collisions by exploiting the capture effect can differentiate between collisions and packet loss and can identify the nodes involved in the collisions.
Abstract: In this paper we evaluate a technique to detect and recover messages from packet collisions by exploiting the capture effect. It can differentiate between collisions and packet loss and can identify the nodes involved in the collisions. This information is provided at virtually no extra cost and can produce significant improvements in existing collision mediation schemes. We characterize this technique using controlled collision experiments and evaluate it in real world flooding experiments on a 36-node sensor network.

331 citations


Patent
24 Mar 2005
TL;DR: In this paper, the authors present a method for generating a network topology representation based on inspection of application messages at a network device: the device receives a request packet, routes it to the destination, and extracts and stores correlation information from a copy of the request packet in order to determine application-to-application mappings and calculate application response times.
Abstract: A method is disclosed for generating a network topology representation based on inspection of application messages at a network device. According to one aspect, a network device receives a request packet, routes the packet to the destination, and extracts and stores correlation information from a copy of the request packet. When the network device receives a response packet, it examines the contents of a copy of the response packet using context-based correlation rules and matches the response packet with the appropriate stored request packet correlation information. It analyzes recorded correlation information to determine application-to-application mapping and calculate application response times. Another embodiment inserts custom headers that contain information used to match a response packet with a request packet into request packets.

302 citations


Journal ArticleDOI
TL;DR: The simulation results show that by controlling the total traffic rate, the original 802.11 protocol can support strict QoS requirements, such as those required by voice over Internet protocol (VoIP) or streaming video, and at the same time achieve high channel utilization.
Abstract: This paper studies an important problem in the IEEE 802.11 distributed coordination function (DCF)-based wireless local area network (WLAN): how well can the network support quality of service (QoS). Specifically, this paper analyzes the network's performance in terms of maximum protocol capacity or throughput, delay, and packet loss rate. Although the performance of the 802.11 protocol, such as throughput or delay, has been extensively studied in the saturated case, it is demonstrated that maximum protocol capacity can only be achieved in the nonsaturated case and is almost independent of the number of active nodes. By analyzing packet delay, consisting of medium access control (MAC) service time and waiting time, accurate estimates were derived for delay and delay variation when the throughput increases from zero to the maximum value. Packet loss rate is also given for the nonsaturated case. Furthermore, it is shown that the channel busyness ratio provides precise and robust information about the current network status, which can be utilized to facilitate QoS provisioning. The authors have conducted a comprehensive simulation study to verify their analytical results and to tune the 802.11 to work at the optimal point with maximum throughput and low delay and packet loss rate. The simulation results show that by controlling the total traffic rate, the original 802.11 protocol can support strict QoS requirements, such as those required by voice over Internet protocol (VoIP) or streaming video, and at the same time achieve high channel utilization.

278 citations


Journal ArticleDOI
TL;DR: The emerging low-rate Wireless Personal Area Network technology as specified in the Institute of Electrical and Electronics Engineers 802.15.4 standard is considered and its suitability to the medical environment is evaluated.

249 citations


Journal ArticleDOI
TL;DR: This paper studies TCP performance in a stationary multihop wireless network using IEEE 802.11 for channel access control, and proposes link RED that fine-tunes the link-layer packet dropping probability to stabilize the TCP window size around W*.
Abstract: This paper studies TCP performance in a stationary multihop wireless network using IEEE 802.11 for channel access control. We first show that, given a specific network topology and flow patterns, there exists an optimal window size W* at which TCP achieves the highest throughput via maximum spatial reuse of the shared wireless channel. However, TCP grows its window size much larger than W*, leading to throughput reduction. We then explain the TCP throughput decrease using our observations and analysis of the packet loss in an overloaded multihop wireless network. We find that network overload is typically first signified by packet drops due to wireless link-layer contention, rather than the buffer overflow-induced losses observed in the wired Internet. As the offered load increases, the probability of packet drops due to link contention also increases, and eventually saturates. Unfortunately, the link-layer drop probability is insufficient to keep the TCP window size around W*. We model and analyze the link contention behavior, based on which we propose link RED that fine-tunes the link-layer packet dropping probability to stabilize the TCP window size around W*. We further devise adaptive pacing to better coordinate channel access along the packet forwarding path. Our simulations demonstrate a 5 to 30 percent improvement of TCP throughput using the proposed two techniques.

238 citations
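Link RED adapts the classic RED drop-probability curve to link-layer behavior. A minimal sketch of that RED-style curve follows; the thresholds are illustrative, and note the paper drives the marking variable from link-layer contention measurements rather than queue length.

```python
def red_drop_probability(avg, min_th, max_th, max_p):
    """RED-style drop/mark curve (a sketch of the flavor of link RED's tuning;
    `avg` would be a contention measure in the paper, queue length in classic RED).
    Probability is 0 below min_th, ramps linearly to max_p, then jumps to 1."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

print(red_drop_probability(5, 5, 15, 0.1))   # 0.0  (at the lower threshold)
print(red_drop_probability(10, 5, 15, 0.1))  # 0.05 (halfway up the ramp)
print(red_drop_probability(20, 5, 15, 0.1))  # 1.0  (above the upper threshold)
```

The gentle linear ramp is what lets the mechanism nudge TCP's window toward W* before hard drops set in.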


Patent
22 Jul 2005
TL;DR: In this paper, the authors proposed a remote access architecture for peer-to-peer communications and remote access connectivity, which provides a method for establishing a direct connection between peer computing devices via a third computing device such as a gateway.
Abstract: The present invention is generally directed towards a remote access architecture for providing peer-to-peer communications and remote access connectivity. In one embodiment, the remote access architecture of the present invention provides a method for establishing a direct connection between peer computing devices via a third computing device, such as a gateway. Additionally, the present invention provides the following techniques to optimize peer-to-peer communications: 1) false acknowledgement of receipt of network packets allowing communications via a lossless protocol of packets constructed for transmission via a lossy protocol, 2) payload shifting of network packets allowing communications via a lossless protocol of packets constructed for transmission via a lossy protocol, 3) reduction of packet fragmentation by adjusting the maximum transmission unit (MTU) parameter, accounting for overhead due to encryption, 4) application-aware prioritization of client-side network communications, and 5) network disruption shielding for reliable and persistent network connectivity and access.

234 citations


Proceedings ArticleDOI
02 Nov 2005
TL;DR: Results from analysis, simulation and an experimental 48 Mica2 mote testbed show that virtual sinks can scale mote networks by effectively managing growing traffic demands while minimizing the impact on application fidelity.
Abstract: There is a critical need for new thinking regarding overload traffic management in sensor networks. It has now become clear that experimental sensor networks (e.g., mote networks) and their applications commonly experience periods of persistent congestion and high packet loss, and in some cases even congestion collapse. This significantly impacts application fidelity measured at the physical sinks, even under light to moderate traffic loads, and is a direct product of the funneling effect; that is, the many-to-one multi-hop traffic pattern that characterizes sensor network communications. Existing congestion control schemes are effective at mitigating congestion through rate control and packet drop mechanisms, but do so at the cost of significantly reducing application fidelity measured at the sinks. To address this problem we propose to exploit the availability of a small number of all wireless, multi-radio virtual sinks that can be randomly distributed or selectively placed across the sensor field. Virtual sinks are capable of siphoning off data events from regions of the sensor field that are beginning to show signs of high traffic load. In this paper, we present the design, implementation, and evaluation of Siphon, a set of fully distributed algorithms that support virtual sink discovery and selection, congestion detection, and traffic redirection in sensor networks. Siphon is based on a Stargate implementation of virtual sinks that uses a separate longer-range radio network (based on IEEE 802.11) to siphon events to one or more physical sinks, and a short-range mote radio to interact with the sensor field at siphon points. Results from analysis, simulation and an experimental 48 Mica2 mote testbed show that virtual sinks can scale mote networks by effectively managing growing traffic demands while minimizing the impact on application fidelity.

Proceedings ArticleDOI
13 Mar 2005
TL;DR: A radio interference detection protocol (RID) and its variation are presented to detect run-time radio interference relations among nodes; the interference detection results are used to design real collision-free TDMA protocols.
Abstract: In wireless sensor networks, many protocols assume that if node A is able to interfere with node B's packet reception, node B is within node A's communication range. It is also assumed that if node B is within node A's communication range, node A is able to interfere with node B's packet reception from any transmitter. While these assumptions may be useful in protocol design, they are not valid, according to the real experiments we conducted on the MICA2 platform. For a strong link that has a high packet delivery ratio, the interference range is observed to be smaller than the communication range, while for a weak link that has a low packet delivery ratio, the interference range is larger than the communication range. So using communication range information alone is not enough to design real collision-free media access control protocols. This paper presents a radio interference detection protocol (RID) and its variation (RID-B) to detect run-time radio interference relations among nodes. The interference detection results are used to design real collision-free TDMA protocols. With extensive simulations in GlomoSim, and with sensor network application scenarios, we observe that the TDMA protocol that uses the interference detection results has a 100% packet delivery ratio, while the traditional TDMA suffers packet loss of up to 60% under heavy load. In addition to the scheduling-based TDMA protocols, we also explore the application of interference detection on contention-based MAC protocols.

Patent
16 Feb 2005
TL;DR: In this paper, a system and method for policing one or more flows of a data stream of packets associated with differing transmission protocols is presented, where the current capacity level for each flow is determined, as is the packet protocol associated with each packet.
Abstract: A system and method for policing one or more flows of a data stream of packets associated with differing transmission protocols. The current capacity level for each flow is determined, as is the packet protocol associated with each packet. A packet parameter in the packet that is indicative of the bandwidth consumption of the packet is identified. The packet parameter is converted to a predetermined format if the packet is not associated with a predetermined packet protocol. A common bandwidth capacity test is performed to determine whether the packet is conforming or non-conforming, and is a function of the packet parameter and the current bandwidth capacity level.
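A common way to implement the kind of per-flow "bandwidth capacity test" this patent describes is a token bucket. The sketch below is a generic illustration under that assumption, not the patented mechanism; the class and parameter names are hypothetical.

```python
class TokenBucket:
    """Hedged sketch of a per-flow conformance test in the spirit of the
    patent's 'common bandwidth capacity test': the packet's size (its
    'packet parameter') is drawn against accumulated capacity."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst   # refill rate (bytes/s), max depth (bytes)
        self.tokens, self.t = burst, 0.0      # start full

    def conforms(self, now, size):
        # Refill proportionally to elapsed time, capped at the burst depth.
        self.tokens = min(self.burst, self.tokens + (now - self.t) * self.rate)
        self.t = now
        if size <= self.tokens:
            self.tokens -= size
            return True   # conforming packet
        return False      # non-conforming: would exceed current capacity

tb = TokenBucket(rate=1000, burst=1500)  # bytes/s, bytes
print(tb.conforms(0.0, 1500))  # True: full burst available
print(tb.conforms(0.0, 1))     # False: bucket drained
print(tb.conforms(2.0, 1000))  # True: refilled (capped at burst depth)
```

Converting each protocol's size field to a common unit before this test mirrors the patent's "predetermined format" conversion step.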

Journal ArticleDOI
TL;DR: A model for TCP/IP congestion control mechanism is presented and an explicit expression for the throughput of a TCP connection is obtained and bounds on the throughput when there is a limit on the window size.
Abstract: In this paper, we present a model for TCP/IP congestion control mechanism. The rate at which data is transmitted increases linearly in time until a packet loss is detected. At this point, the transmission rate is divided by a constant factor. Losses are generated by some exogenous random process which is assumed to be stationary ergodic. This allows us to account for any correlation and any distribution of inter-loss times. We obtain an explicit expression for the throughput of a TCP connection and bounds on the throughput when there is a limit on the window size. In addition, we study the effect of the Timeout mechanism on the throughput. A set of experiments is conducted over the real Internet and a comparison is provided with other models that make simple assumptions on the inter-loss time process. The comparison shows that our model approximates well the throughput of TCP for many distributions of inter-loss times.
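The fluid dynamics described above — transmission rate growing linearly and being divided by a constant factor at exogenous loss instants — can be integrated numerically to get a time-averaged throughput. The `mean_rate` helper below is an illustrative sketch of that dynamic, not the paper's closed-form expression.

```python
def mean_rate(loss_times, horizon, slope=1.0, factor=2.0):
    """Sketch of the paper's fluid model: the rate grows linearly at `slope`
    and is divided by `factor` at each exogenous loss instant.
    Returns the time-averaged transmission rate (no window limit)."""
    t_prev, rate, area = 0.0, 0.0, 0.0
    for t in sorted(lt for lt in loss_times if lt < horizon):
        dt = t - t_prev
        area += rate * dt + 0.5 * slope * dt * dt  # integrate the linear ramp
        rate = (rate + slope * dt) / factor        # multiplicative cut at the loss
        t_prev = t
    dt = horizon - t_prev
    area += rate * dt + 0.5 * slope * dt * dt
    return area / horizon

# Periodic losses every 2s produce the familiar saw-tooth; the average converges:
print(round(mean_rate([2, 4, 6, 8], 10), 3))  # 2.225
```

Replacing the periodic `loss_times` with samples from a stationary ergodic process is how one would numerically explore the correlated inter-loss distributions the paper handles analytically.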

Journal ArticleDOI
TL;DR: Two queueing models are proposed to simulate the stochastic process of packet delay jitter and loss under DoS attacks; mitigating measures based on packet filtering are shown to be capable of ameliorating the performance degradation.
Abstract: Replacing specialized industrial networks with the Internet is a growing trend in industrial informatics, where packets are used to transmit feedback and control signals between a plant and a controller. Today, denial of service (DoS) attacks cause significant disruptions to the Internet, which will threaten the operation of network-based control systems (NBCS). In this paper, we propose two queueing models to simulate the stochastic process of packet delay jitter and loss under DoS attacks. The motivation is to quantitatively investigate how these attacks degrade the performance of NBCS. The example control system consists of a proportional integral controller, a second-order plant, and two one-way delay vectors induced by attacks. The simulation results indicate that Model I attack (local network DoS attack) impairs the performance because a large number of NBCS packets are lost. Model II attack (nonlocal network DoS attack) deteriorates the performance or even destabilizes the system. In this case, the traffic for NBCS exhibits strong autocorrelation of delay jitter and packet loss. Mitigating measures based on packet filtering are discussed and shown to be capable of ameliorating the performance degradation.

Book ChapterDOI
30 Jun 2005
TL;DR: A thermal-aware routing protocol is proposed that routes data away from high-temperature areas (hot spots); its thermal awareness also provides load balancing, which leads to less packet loss under high load.
Abstract: Implanted biological sensors are a special class of wireless sensor networks that are used in-vivo for various medical applications. One of the major challenges of continuous in-vivo sensing is the heat generated by the implanted sensors due to communication radiation and circuitry power consumption. This paper addresses the issues of routing in implanted sensor networks. We propose a thermal-aware routing protocol that routes the data away from high temperature areas (hot spots). With this protocol, each node estimates the temperature change of its neighbors and routes packets around the hot spot area by a withdraw strategy. The proposed protocol achieves a better balance of temperature rise and experiences only a modestly increased delay compared with shortest-hop routing; its thermal awareness also provides load balancing, which leads to less packet loss under high load.

Patent
22 Jul 2005
TL;DR: In this article, a packet interceptor/processor is coupled with the network so as to be able to intercept and process packets flowing over the network and provides external connectivity to other devices that wish to intercept packets as well.
Abstract: An apparatus and method for enhancing the infrastructure of a network such as the Internet is disclosed. A packet interceptor/processor apparatus is coupled with the network so as to be able to intercept and process packets flowing over the network. Further, the apparatus provides external connectivity to other devices that wish to intercept packets as well. The apparatus applies one or more rules to the intercepted packets which execute one or more functions on a dynamically specified portion of the packet and take one or more actions with the packets. The apparatus is capable of analyzing any portion of the packet including the header and payload. Actions include releasing the packet unmodified, deleting the packet, modifying the packet, logging/storing information about the packet or forwarding the packet to an external device for subsequent processing. Further, the rules may be dynamically modified by the external devices.

Proceedings ArticleDOI
22 Aug 2005
TL;DR: This work introduces a new algorithm for packet loss measurement that is designed to overcome the deficiencies in standard Poisson-based tools and develops and implements a prototype tool, called BADABING, which reports loss characteristics far more accurately than traditional loss measurement tools.
Abstract: Measurement and estimation of packet loss characteristics are challenging due to the relatively rare occurrence and typically short duration of packet loss episodes. While active probe tools are commonly used to measure packet loss on end-to-end paths, there has been little analysis of the accuracy of these tools or their impact on the network. The objective of our study is to understand how to measure packet loss episodes accurately with end-to-end probes. We begin by testing the capability of standard Poisson-modulated end-to-end measurements of loss in a controlled laboratory environment using IP routers and commodity end hosts. Our tests show that loss characteristics reported from such Poisson-modulated probe tools can be quite inaccurate over a range of traffic conditions. Motivated by these observations, we introduce a new algorithm for packet loss measurement that is designed to overcome the deficiencies in standard Poisson-based tools. Specifically, our method creates a probe process that (1) enables an explicit trade-off between accuracy and impact on the network, and (2) enables more accurate measurements than standard Poisson probing at the same rate. We evaluate the capabilities of our methodology experimentally by developing and implementing a prototype tool, called BADABING. The experiments demonstrate the trade-offs between impact on the network and measurement accuracy. We show that BADABING reports loss characteristics far more accurately than traditional loss measurement tools.

Proceedings ArticleDOI
Sik Choi1, Gyung-Ho Hwang1, Taesoo Kwon1, Ae-Ri Lim, Dong-Ho Cho 
05 Dec 2005
TL;DR: This paper proposes an enhanced link-layer handover algorithm in which an MSS can receive downlink data before uplink synchronization during the handover process, reducing data transmission delay and packet loss probability for real-time downlink services.
Abstract: IEEE 802.16 WirelessMAN, aimed at broadband wireless access (BWA), is evolving toward 4G mobile communication systems through the standardization of IEEE 802.16e, which supports mobility on existing fixed WirelessMAN systems. Because the IEEE 802.16e system is based on OFDM(A) technology, a mobile subscriber station (MSS) performs a hard handover when it moves to another base station (BS). Therefore, the MSS is not able to send or receive data during the handover process, and this data is delayed. As a result, real-time packets can be dropped because of handover delay. In this paper, we propose an enhanced link-layer handover algorithm in which an MSS can receive downlink data before uplink synchronization during the handover process. Our proposed scheme reduces data transmission delay and packet loss probability for real-time downlink services.

Patent
25 Aug 2005
TL;DR: In this paper, a method of high speed assemble process capable of dealing with long packets with effective buffer memories usage is presented, which is based on a processing method of fragmented packets in packet transfer equipment for transmitting and receiving packet data between terminals through network.
Abstract: The present invention provides a method of high speed assemble process capable of dealing with long packets with effective buffer memories usage A processing method of fragmented packets in packet transfer equipment for transmitting and receiving packet data between terminals through network, includes, receiving fragmented packets, identifying whether the received packet is a packet fragmented into two from original, or a packet fragmented into three or more, for the packet identified as fragmented into two, storing the two fragmented packets into assembly buffer in fragmentation order, on basis of the respective offset values in the packets, and reading out from top, and for the packet fragmented into three or more, chain-connecting the assembly buffers and storing the packets therein in reception order, reading out the packets after deciding the order by comparing chain information and offset values of the fragmented packets within the chain, and then reassembling the packets

Proceedings ArticleDOI
22 Aug 2005
TL;DR: A simple, low-complexity protocol, called Variable-structure congestion Control Protocol (VCP), is designed and implemented that leverages only the existing two ECN bits for network congestion feedback, and yet achieves performance comparable to XCP, i.e., high utilization, low persistent queue length, negligible packet loss rate, and reasonable fairness.
Abstract: Achieving efficient and fair bandwidth allocation while minimizing packet loss in high bandwidth-delay product networks has long been a daunting challenge. Existing end-to-end congestion control (e.g., TCP) and traditional congestion notification schemes (e.g., TCP+AQM/ECN) have significant limitations in achieving this goal. While the recently proposed XCP protocol addresses this challenge, XCP requires multiple bits to encode the congestion-related information exchanged between routers and end-hosts. Unfortunately, there is no space in the IP header for these bits, and solving this problem involves a non-trivial and time-consuming standardization process. In this paper, we design and implement a simple, low-complexity protocol, called Variable-structure congestion Control Protocol (VCP), that leverages only the existing two ECN bits for network congestion feedback, and yet achieves performance comparable to XCP, i.e., high utilization, low persistent queue length, negligible packet loss rate, and reasonable fairness. On the downside, VCP converges significantly more slowly to a fair allocation than XCP. We evaluate the performance of VCP using extensive ns2 simulations over a wide range of network scenarios. To gain insight into the behavior of VCP, we analyze a simple fluid model, and prove a global stability result for the case of a single bottleneck link shared by flows with identical round-trip times.
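VCP's core idea — a router quantizes its load factor into the two ECN bits, and end hosts switch between multiplicative increase, additive increase, and multiplicative decrease accordingly — can be sketched as follows. The thresholds and factors below are illustrative placeholders, not necessarily the paper's exact values.

```python
# Two-bit load-region codes (0 is left for not-ECN-capable transport).
LOW, HIGH, OVERLOAD = 1, 2, 3

def encode_load(load_factor):
    """Router side: quantize the measured load factor into a two-bit code.
    The 0.8 / 1.0 thresholds are illustrative assumptions."""
    if load_factor < 0.8:
        return LOW
    if load_factor <= 1.0:
        return HIGH
    return OVERLOAD

def update_window(w, code, ai=1.0, mi=1.0625, md=0.875):
    """End-host side: pick the control law from the echoed code.
    The ai/mi/md parameters are illustrative, not the paper's tuning."""
    if code == LOW:
        return w * mi   # multiplicative increase while underutilized
    if code == HIGH:
        return w + ai   # additive increase near full utilization
    return w * md       # multiplicative decrease when overloaded

w = 16.0
w = update_window(w, encode_load(0.5))  # MI: 17.0
w = update_window(w, encode_load(0.9))  # AI: 18.0
w = update_window(w, encode_load(1.2))  # MD: 15.75
print(w)  # 15.75
```

The point of the three-region structure is that aggressive (MI) growth is confined to clearly underloaded periods, so the persistent queue stays short without multi-bit feedback.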

Patent
13 Jan 2005
TL;DR: In this paper, a method of high speed assemble process capable of dealing with long packets with effective buffer memories usage is presented, which includes, receiving fragmented packets, identifying whether the received packet is a packet fragmented into two from original, or a packet fragmentation into three or more, storing the two fragmented packets into assembly buffer in fragmentation order, on basis of the respective offset values in the packets, and reading out from top.
Abstract: The present invention provides a method of high speed assemble process capable of dealing with long packets with effective buffer memories usage. A processing method of fragmented packets in packet transfer equipment for transmitting and receiving packet data between terminals through network, includes, receiving fragmented packets, identifying whether the received packet is a packet fragmented into two from original, or a packet fragmented into three or more, for the packet identified as fragmented into two, storing the two fragmented packets into assembly buffer in fragmentation order, on basis of the respective offset values in the packets, and reading out from top, and for the packet fragmented into three or more, chain-connecting the assembly buffers and storing the packets therein in reception order, reading out the packets after deciding the order by comparing chain information and offset values of the fragmented packets within the chain, and then reassembling the packets.
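The offset-ordered reassembly the patent describes can be illustrated with a minimal sketch: fragments carry byte offsets, are placed into the buffer in offset order regardless of arrival order, and the whole packet is then read out from the top. This is a generic illustration, not the patented buffer-chaining mechanism.

```python
def reassemble(fragments):
    """Offset-based reassembly sketch: `fragments` is a list of
    (offset, payload) pairs arriving in any order. Each payload is
    written into the buffer at its offset; the buffer is then read
    out from the top as the reassembled packet."""
    buf = bytearray(sum(len(p) for _, p in fragments))
    for offset, payload in sorted(fragments):  # order decided by offset, not arrival
        buf[offset:offset + len(payload)] = payload
    return bytes(buf)

# Three fragments delivered out of order still reassemble correctly:
frags = [(4, b"et d"), (0, b"pack"), (8, b"ata")]
print(reassemble(frags))  # b'packet data'
```

This assumes the fragments tile the buffer exactly (no gaps or overlaps); handling partial arrival and timeouts is where a real implementation, like the one claimed, gets involved.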

Proceedings ArticleDOI
13 Mar 2005
TL;DR: It is shown that careful design of packetization schemes in the application layer may significantly improve radio link resource utilization in delay sensitive media streaming under difficult wireless network conditions.
Abstract: Transmitting large packets over wireless networks helps to reduce header overhead, but may have an adverse effect on loss rate due to corruptions in a radio link. Packet loss in lower layers, however, is typically hidden from the upper protocol layers by link or MAC layer protocols. For this reason, errors in the physical layer are observed by the application as higher variance in end-to-end delay rather than increased packet loss rate. We study the effect of packet size on loss rate and delay characteristics in a wireless real-time application. We derive an analytical model for the dependency between packet length and delay characteristics. We validate our theoretical analysis through experiments in an ad hoc network using WLAN technologies. We show that careful design of packetization schemes in the application layer may significantly improve radio link resource utilization in delay sensitive media streaming under difficult wireless network conditions.
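The packet-length/loss-rate trade-off analyzed above can be illustrated under the standard independent-bit-error assumption (a simplification; the paper's model also accounts for the delay introduced by link-layer retransmission). Both helper functions below are illustrative, not the paper's model.

```python
def packet_loss_prob(bits, ber):
    """Probability that a packet of `bits` bits is corrupted on a link with
    independent bit errors at rate `ber` (textbook simplification)."""
    return 1.0 - (1.0 - ber) ** bits

def goodput_fraction(payload_bytes, header_bytes, ber):
    """Useful-bits fraction delivered: larger packets amortize the fixed
    header, but are corrupted with higher probability."""
    total_bits = 8 * (payload_bytes + header_bytes)
    deliver = 1.0 - packet_loss_prob(total_bits, ber)
    return deliver * payload_bytes / (payload_bytes + header_bytes)

# With BER 1e-5 and a 40-byte header, compare a small and a large packet:
small = goodput_fraction(100, 40, 1e-5)
large = goodput_fraction(1400, 40, 1e-5)
print(small < large)  # True: at this BER the larger packet still wins
```

Sweeping `payload_bytes` against different BER values reproduces the qualitative result the paper quantifies: the goodput-optimal packet size shrinks as the channel degrades.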

Patent
14 Oct 2005
TL;DR: In this article, a communication protocol provides a selective ordering of packets such that some sequences of packets on the channel are guaranteed not to be delivered out of order, while other packets may be delivered before earlier-sent packets are received, thereby preempting their delivery.
Abstract: A communication protocol provides a selective ordering of packets such that some sequences of packets on the channel are guaranteed not to be delivered out of order, while other packets on the same channel may be delivered before earlier sent packets are received, thereby preempting their delivery. The communication protocol can be implemented using UDP over IP. The protocol may be used for exchange of information in a distributed multi-player game.
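One way to realize the two delivery classes the abstract describes is a receiver that buffers "ordered" packets until their predecessors arrive, while "latest-wins" packets (e.g., game state updates) preempt any older undelivered update. A sketch under those assumptions (names and packet classes are illustrative, not the patent's wire format):

```python
class SelectiveOrderChannel:
    """Receiver side of a channel mixing ordered and latest-wins packets."""

    def __init__(self):
        self.next_ordered = 0
        self.pending = {}       # out-of-order "ordered" packets, keyed by seq
        self.latest_seen = -1
        self.delivered = []

    def receive(self, seq, kind, payload):
        if kind == "latest":
            if seq > self.latest_seen:   # an older update is obsolete: drop it
                self.latest_seen = seq
                self.delivered.append(payload)
            return
        self.pending[seq] = payload      # "ordered": hold until contiguous
        while self.next_ordered in self.pending:
            self.delivered.append(self.pending.pop(self.next_ordered))
            self.next_ordered += 1

ch = SelectiveOrderChannel()
ch.receive(1, "ordered", "b")     # buffered: seq 0 not yet seen
ch.receive(5, "latest", "pos@5")  # delivered immediately
ch.receive(3, "latest", "pos@3")  # dropped: preempted by the newer seq 5
ch.receive(0, "ordered", "a")     # releases seq 0, then seq 1
```

This matches the abstract's claim that such a protocol can run over plain UDP: no retransmission machinery is needed for the latest-wins class.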

Patent
19 Jul 2005
TL;DR: A forward error correction (FEC) encoding system and method optimized for protecting real-time audio-video streams for transmission over packet-switched networks with minimal latency is proposed in this article.
Abstract: A forward error correction (FEC) encoding system and method optimized for protecting real-time audio-video streams for transmission over packet-switched networks with minimal latency. Embodiments of this invention provide bandwidth-efficient and low-latency FEC for both variable- and constant-bit-rate MPEG-encoded audio and video streams. To maximize bandwidth efficiency and playable frame rate for recovered media streams, embodiments of the invention may sort packets by content type and aggregate them into FEC blocks weighted by the sensitivity of the recovered stream to packet loss of a particular content type. Embodiments of this invention may use temporal constraints to limit FEC block size and thereby facilitate their use in the transport of VBR streams.
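The simplest FEC block of the kind the patent builds on is a single XOR parity packet over a group of equal-length packets, which recovers any one lost packet in the block. A minimal sketch (the patent's content-type weighting and temporal block sizing are omitted):

```python
def xor_parity(packets):
    """XOR all equal-length packets together.

    Used both to generate the parity packet for an FEC block and to
    recover a single missing packet from the survivors plus the parity.
    """
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

block = [b"ABCD", b"EFGH", b"IJKL"]
parity = xor_parity(block)

# Packet 1 is lost in transit; XORing the survivors with the parity
# packet cancels them out and leaves exactly the missing packet:
recovered = xor_parity([block[0], block[2], parity])
```

Weighting blocks by content sensitivity, as the abstract describes, would then mean protecting I-frame packets with smaller (stronger) blocks than B-frame packets.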

Patent
15 Jun 2005
TL;DR: In this article, a packet loss and detection mechanism periodically exchanges traffic packet counts to maintain an accurate diagnosis of the pseudowire health from either endpoint, and the raw packet counts are analyzed to identify misrouted and lost packets.
Abstract: Conventional network packet traffic loss/drop monitoring mechanisms, such as those employed for pseudowire, IP flow, and tunnel traffic monitoring, do not process or diagnose the aggregate counts from both endpoints of a particular pseudowire. A packet loss detection mechanism periodically exchanges traffic packet counts to maintain an accurate diagnosis of pseudowire health from either endpoint. Further, the raw packet counts are analyzed to identify misrouted and lost packets, as both should be considered to assess network health and congestion. The pseudowire statistics are maintained for each pseudowire emanating from a particular edge router, providing a complete view of pseudowire traffic affecting that router. Such statistics are beneficial for problem detection, diagnosis, and verification of throughput criteria such as those expressed in Quality of Service (QoS) terms and/or SLAs (service level agreements).
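The diagnosis reduces to arithmetic on counters exchanged between the two endpoints: comparing what one side sent against what the other received, per direction. A sketch under that assumption (field names are illustrative, not from the patent):

```python
def diagnose(local_rx, remote_tx, remote_rx, local_tx):
    """Diagnose one pseudowire from both endpoints' exchanged counters.

    A positive difference is loss in that direction; a negative one
    would indicate misrouted packets arriving that the peer never sent
    on this pseudowire.
    """
    return {
        "lost_toward_us": remote_tx - local_rx,    # peer sent, we never saw
        "lost_toward_peer": local_tx - remote_rx,  # we sent, peer never saw
    }

# Peer reports 1000 sent / 995 received; we counted 980 in and 1000 out:
stats = diagnose(local_rx=980, remote_tx=1000, remote_rx=995, local_tx=1000)
```

Because each endpoint holds both its own and the peer's counters after an exchange, either side can compute this view independently, as the abstract requires.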

Patent
22 Jul 2005
TL;DR: In this paper, the authors proposed a remote access architecture for peer-to-peer communications and remote access connectivity, which provides a method for establishing a direct connection between peer computing devices via a third computing device such as a gateway.
Abstract: The present invention is generally directed towards a remote access architecture for providing peer-to-peer communications and remote access connectivity. In one embodiment, the remote access architecture of the present invention provides a method for establishing a direct connection between peer computing devices via a third computing device, such as a gateway. Additionally, the present invention provides the following techniques to optimize peer-to-peer communications: 1) false acknowledgement of receipt of network packets, allowing communications via a lossless protocol of packets constructed for transmission via a lossy protocol; 2) payload shifting of network packets, allowing communications via a lossless protocol of packets constructed for transmission via a lossy protocol; 3) reduction of packet fragmentation by adjusting the maximum transmission unit (MTU) parameter, accounting for overhead due to encryption; 4) application-aware prioritization of client-side network communications; and 5) network disruption shielding for reliable and persistent network connectivity and access.
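Technique 3, MTU adjustment, is simple subtraction: the advertised MTU must leave room for every header and for encryption expansion, or the stack will fragment. A sketch with illustrative overhead figures (not the patent's numbers):

```python
def effective_mtu(link_mtu=1500, ip_udp_overhead=28, crypto_overhead=0):
    """Largest payload that fits one link-layer frame after headers and
    encryption expansion, so packets are never fragmented in transit."""
    return link_mtu - ip_udp_overhead - crypto_overhead

# Illustrative AES-CBC-style expansion: a 16-byte IV plus up to
# 16 bytes of block padding must be budgeted for:
payload_limit = effective_mtu(crypto_overhead=16 + 16)
```

Sizing application writes to this limit avoids the extra loss exposure of fragmented packets, which is exactly the failure mode the technique targets.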

Journal ArticleDOI
TL;DR: The obtained results demonstrate that breaking the OSI protocol-layer isolation paradigm and injecting content-level semantics and service-level requirements into the transport and traffic-control protocols leads to intelligent and efficient support of multimedia services over complex network architectures.
Abstract: There is an increasing demand for supporting real-time audiovisual services over next-generation wired and wireless networks. Various link/network characteristics make the deployment of such demanding services more challenging than traditional data applications like e-mail and the Web. These audiovisual applications are bandwidth adaptive but have stringent delay, jitter, and packet loss requirements. Consequently, one of the major requirements for the successful and wide deployment of such services is the efficient transmission of sensitive content (audio, video, image) over a broad range of bandwidth-constrained access networks. These media will typically be compressed according to the emerging ISO/IEC MPEG-4 standard to achieve high bandwidth efficiency and content-based interactivity. MPEG-4 provides an integrated object-oriented representation and coding of natural and synthetic audiovisual content for its manipulation and transport over a broad range of communication infrastructures. In this work, we leverage the characteristics of MPEG-4 and Internet protocol (IP) differentiated-services frameworks to propose an innovative cross-layer content delivery architecture that is capable of receiving information from the network and adaptively tuning transport parameters, bit rates, and QoS mechanisms according to the underlying network conditions. This service-aware IP transport architecture is composed of: 1) an automatic content-level audiovisual object classification model; 2) a reliable application-level framing protocol with fine-grained TCP-friendly rate control and adaptive unequal error protection; and 3) a service-level QoS matching/packet tagging algorithm for seamless IP differentiated-services delivery. The obtained results demonstrate that breaking the OSI protocol-layer isolation paradigm and injecting content-level semantics and service-level requirements into the transport and traffic-control protocols leads to intelligent and efficient support of multimedia services over complex network architectures.
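A "fine-grained TCP-friendly rate control" component typically bounds the sending rate by a TCP-equivalent throughput computed from measured loss rate and RTT; RFC 3448's TFRC equation is the standard form of such a bound (the paper's exact controller may differ):

```python
from math import sqrt

def tfrc_rate(s, rtt, p, rto=None):
    """TCP-friendly sending rate in bytes/s (RFC 3448 throughput equation).

    s: packet size in bytes, rtt: round-trip time in seconds,
    p: loss event rate in (0, 1], rto: retransmission timeout
    (RFC 3448 allows the simplification rto = 4 * rtt).
    """
    if rto is None:
        rto = 4 * rtt
    denom = (rtt * sqrt(2 * p / 3)
             + rto * 3 * sqrt(3 * p / 8) * p * (1 + 32 * p ** 2))
    return s / denom

# 1 kB packets, 100 ms RTT, 1% loss: roughly a hundred kB/s is allowed.
rate = tfrc_rate(s=1000, rtt=0.1, p=0.01)
```

A sender comparing its media bit rate against this bound, and shedding enhancement-layer objects first, would give the adaptive, content-aware behavior the abstract describes.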

Journal ArticleDOI
TL;DR: This paper proposes a new TCP scheme, called TCP New Jersey, which is capable of distinguishing wireless packet losses from congestion-induced packet losses and reacting accordingly; the scheme minimizes network congestion, reduces network volatility, and stabilizes queue lengths while achieving higher throughput than other TCP schemes.
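Loss discrimination in this family of schemes rests on the observation that congestion losses are preceded by queue buildup, visible as inflated RTT, while wireless corruption losses are not. A heuristic sketch in that spirit (the paper's actual estimator differs; the threshold here is illustrative):

```python
def classify_loss(rtt_sample, rtt_min, rtt_max, threshold=0.4):
    """Attribute a packet loss to the wireless link or to congestion.

    Normalizes the RTT sample at the time of loss into [0, 1] between
    the observed minimum (empty queues) and maximum (full queues);
    low normalized queueing delay suggests a corruption loss.
    """
    if rtt_max == rtt_min:
        return "wireless"   # no queueing observed at all
    queueing = (rtt_sample - rtt_min) / (rtt_max - rtt_min)
    return "congestion" if queueing > threshold else "wireless"

# Loss seen while RTT sits near its floor: likely wireless corruption,
# so the sender should not halve its congestion window:
kind = classify_loss(rtt_sample=0.11, rtt_min=0.10, rtt_max=0.30)
```

The payoff named in the TL;DR comes from reacting to "wireless" losses with retransmission only, reserving rate reduction for "congestion" losses.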

Journal ArticleDOI
TL;DR: Results show that the proposed algorithm significantly outperforms other techniques by several dBs in peak signal-to-noise ratio (PSNR), provides good visual quality, and has a rather low complexity, which makes it possible to perform real-time operation with reasonable computational resources.
Abstract: In low bit-rate packet-based video communications, video frames may have very small size, so that each frame fills the payload of a single network packet; thus, packet losses correspond to whole-frame losses, to which the existing error concealment algorithms are badly suited and generally not applicable. In this paper, we deal with the problem of concealment of whole frame-losses, and propose a novel technique which is capable of handling this very critical case. The proposed technique presents other two major innovations with respect to the state-of-the-art: i) it is based on optical flow estimation applied to error concealment and ii) it performs multiframe estimation, thus optimally exploiting the multiple reference frame buffer featured by the most modern video coders such as H.263+ and H.264. If data partitioning is employed, by e.g., sending headers, motion vectors, and coding modes in prioritized packets as can be done in the DiffServ network model, the algorithm is capable of exploiting the motion vectors to improve the error concealment results. The algorithm has been embedded in the H.264 test model software, and tested under both independent and correlated packet loss models with parameters typical of the wireless environment. Results show that the proposed algorithm significantly outperforms other techniques by several dBs in peak signal-to-noise ratio (PSNR), provides good visual quality, and has a rather low complexity, which makes it possible to perform real-time operation with reasonable computational resources.