
Showing papers on "Packet loss published in 1996"


Proceedings ArticleDOI
29 Oct 1996
TL;DR: A mechanism that gives rise to self-similar network traffic is examined: the transfer of files or messages whose sizes are drawn from a heavy-tailed distribution. The performance implications of self-similarity, as represented by various performance measures, are also discussed.
Abstract: Measurements of LAN and WAN traffic show that network traffic exhibits variability on different scales. We examine a mechanism that gives rise to self-similar network traffic and discuss performance. The mechanism we study is the transfer of files or messages whose size is drawn from a heavy-tailed distribution. In a realistic client/server network the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. This causal relationship is robust relative to changes in network resources, network topology, the influence of cross-traffic, and the distribution of interarrival times. Properties of the transport layer play an important role in preserving and modulating this relationship. The reliable transmission and flow control mechanisms of TCP serve to maintain the long-range dependency structure induced by heavy-tailed file size distributions. In contrast, if a non-flow-controlled and unreliable (UDP-based) transport protocol is used, the resulting traffic shows little self-similarity: although still bursty at short time scales, it has little long-range dependence. Performance implications of self-similarity are discussed as represented by various performance measures. Increased self-similarity, as expected, results in degraded performance; queueing delay in particular is discussed. Throughput-related measures such as packet loss and retransmission rate, however, increase only gradually with increasing traffic self-similarity as long as a reliable, flow-controlled transport protocol is used.

502 citations
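The causal chain described above, heavy-tailed transfer sizes inducing burstiness across many time scales, can be illustrated with a short Python sketch; the Pareto parameters below are illustrative and not taken from the paper.

```python
import random

def pareto_sizes(n, alpha=1.2, xmin=1000.0, seed=42):
    """Draw n file sizes from a Pareto (heavy-tailed) distribution
    via inverse-transform sampling: x = xmin / U**(1/alpha)."""
    rng = random.Random(seed)
    return [xmin / rng.random() ** (1.0 / alpha) for _ in range(n)]

sizes = pareto_sizes(100_000)
total = sum(sizes)
largest = max(sizes)
# With alpha < 2 the variance is infinite: a single transfer can carry
# a noticeable share of all bytes, which is the kind of tail behavior
# that induces burstiness at many time scales.
print(f"largest transfer = {100 * largest / total:.2f}% of total bytes")
```

With an exponential distribution of the same mean, the largest of 100,000 transfers would be a negligible fraction of the total; under a heavy tail it is not.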


Patent
15 May 1996
TL;DR: In this paper, a system for screening data packets transmitted between a network to be protected, such as a private network, and another network, such as a public network, is described.
Abstract: A system for screening data packets transmitted between a network to be protected, such as a private network, and another network, such as a public network. The system includes a dedicated computer with multiple (specifically, three) types of network ports: one connected to each of the private and public networks, and one connected to a proxy network that contains a predetermined number of the hosts and services, some of which may mirror a subset of those found on the private network. The proxy network is isolated from the private network, so it cannot be used as a jumping off point for intruders. Packets received at the screen (either into or out of a host in the private network) are filtered based upon their contents, state information and other criteria, including their source and destination, and actions are taken by the screen depending upon the determination of the filtering phase. The packets may be allowed through, with or without alteration of their data, IP (internet protocol) address, etc., or they may be dropped, with or without an error message generated to the sender of the packet. Packets may be sent with or without alteration to a host on the proxy network that performs some or all of the functions of the intended destination host as specified by a given packet. The passing through of packets without the addition of any network address pertaining to the screening system allows the screening system to function without being identifiable by such an address, and therefore it is more difficult to target as an IP entity, e.g. by intruders.

467 citations


Proceedings ArticleDOI
18 Nov 1996
TL;DR: This work experimentally and quantitatively examines the spatial and temporal correlation in packet loss among participants in a multicast session, and shows that there is some spatial correlation in loss among the multicast sites.
Abstract: The success of multicast applications such as Internet teleconferencing illustrates the tremendous potential of applications built upon wide-area multicast communication services. A critical issue for such multicast applications and the higher layer protocols required to support them is the manner in which packet losses occur within the multicast network. We present and analyze packet loss data collected on multicast-capable hosts at 17 geographically distinct locations in Europe and the US and connected via the MBone. We experimentally and quantitatively examine the spatial and temporal correlation in packet loss among participants in a multicast session. Our results show that there is some spatial correlation in loss among the multicast sites. However, the shared loss in the backbone of the MBone is, for the most part, low. We find a fairly significant amount of burst loss (consecutive losses) at most sites. In every dataset, at least one receiver experienced a long loss burst greater than 8 seconds (100 consecutive packets). A predominance of solitary loss was observed in all cases, but periodic losses of length approximately 0.6 seconds and at 30-second intervals were seen by some receivers.

346 citations
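The spatial correlation examined above can be approximated with a simple shared-loss measure over aligned per-receiver loss traces. This is a hypothetical sketch of the idea, not the paper's actual estimator.

```python
def shared_loss_fraction(trace_a, trace_b):
    """Fraction of packets lost by BOTH receivers, relative to packets
    lost by at least one, over aligned 0/1 loss traces (1 = lost).
    A crude measure of spatial loss correlation."""
    both = sum(1 for a, b in zip(trace_a, trace_b) if a and b)
    either = sum(1 for a, b in zip(trace_a, trace_b) if a or b)
    return both / either if either else 0.0

# Hypothetical traces: receivers behind the same congested backbone
# link tend to lose many of the same packets.
r1 = [0, 1, 1, 0, 0, 1, 0, 0]
r2 = [0, 1, 1, 0, 0, 0, 0, 1]
print(shared_loss_fraction(r1, r2))  # 2 shared / 4 total losses = 0.5
```

A value near 1 indicates losses shared upstream (backbone), while a value near 0 indicates losses local to each receiver's tail link.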


Patent
28 May 1996
TL;DR: In this paper, the authors propose an internet activity analyzer that includes a network interface controller, a packet capturing module, a packet analysis module, and a data management module; the network interface controller is connected to a transmission medium for a network segment and arranged to receive the stream of data packets passing along the medium.
Abstract: An internet activity analyzer includes a network interface controller, a packet capturing module, a packet analysis module, and a data management module. The network interface controller is connected to a transmission medium for a network segment and is arranged to receive the stream of data packets passing along the medium. The packet stream is filtered to remove undesired packet data and is stored in a raw packet data buffer. The packet data is decoded at the internet protocol layer to provide information such as timing and sequencing data regarding the exchange of packets between nodes and the packet data for exchanges between multiple nodes may be recompiled into concatenated raw transaction data which may be coherently stored in a raw transaction data buffer. An application level protocol translator translates the raw transaction data and stores the data in a translated transaction data buffer. The translated data provides high level information regarding the transactions between nodes which is used to monitor or compile statistics regarding network or internetwork activity. The data management module communicates with the packet capturing module and the packet analyzer and, particularly, the data in the raw packet, decoded packet, raw transaction, and translated transaction data buffers to provide real time and stored analytical information concerning internet activity.

341 citations


Journal ArticleDOI
TL;DR: The suggested mechanism for dynamic adjustment of the bandwidth requirements of multimedia applications has been implemented and controls the bandwidth of the vic video conferencing system.

307 citations


Proceedings ArticleDOI
28 Aug 1996
TL;DR: This paper presents a decentralized channel access scheme for scalable packet radio networks that is free of packet loss due to collisions and that at each hop requires no per-packet transmissions other than the single transmission used to convey the packet to the next-hop station.
Abstract: Prior work in the field of packet radio networks has often assumed a simple success-if-exclusive model of successful reception. This simple model is insufficient to model interference in large dense packet radio networks accurately. In this paper we present a model that more closely approximates communication theory and the underlying physics of radio communication. Using this model we present a decentralized channel access scheme for scalable packet radio networks that is free of packet loss due to collisions and that at each hop requires no per-packet transmissions other than the single transmission used to convey the packet to the next-hop station. We also show that with a modest fraction of the radio spectrum, pessimistic assumptions about propagation resulting in maximum-possible self-interference, and an optimistic view of future signal processing capabilities that a self-organizing packet radio network may scale to millions of stations within a metro area with raw per-station rates in the hundreds of megabits per second.

305 citations


Journal ArticleDOI
TL;DR: This paper compares the performance of these techniques (excluding temporal scalability) under various loss rates using realistic length material and discusses their relative merits.
Abstract: Transmission of compressed video over packet networks with nonreliable transport benefits when packet loss resilience is incorporated into the coding. One promising approach to packet loss resilience, particularly for transmission over networks offering dual priorities such as ATM networks, is based on layered coding which uses at least two bitstreams to encode video. The base-layer bitstream, which can be decoded independently to produce a lower quality picture, is transmitted over a high priority channel. The enhancement-layer bitstream(s) contain less information, so that packet losses are more easily tolerated. The MPEG-2 standard provides four methods to produce a layered video bitstream: data partitioning, signal-to-noise ratio scalability, spatial scalability, and temporal scalability. Each was included in the standard in part for motivations other than loss resilience. This paper compares the performance of these techniques (excluding temporal scalability) under various loss rates using realistic length material and discusses their relative merits. Nonlayered MPEG-2 coding gives generally unacceptable video quality for packet loss ratios of 10^-3 for small packet sizes. Better performance can be obtained using layered coding and dual-priority transmission. With data partitioning, cell loss ratios of 10^-4 in the low-priority layer are definitely acceptable, while for SNR scalable encoding, cell loss ratios of 10^-3 are generally invisible. Spatial scalable encoding can provide even better visual quality under packet losses; however, it has a high implementation complexity.

227 citations


Proceedings ArticleDOI
08 Nov 1996
TL;DR: This paper describes an effort to characterize the loss behavior of the AT&T WaveLAN, a popular in-building wireless interface, using a trace-based approach and derives another model based on the distributions of the error and error-free length of the packet streams.
Abstract: The loss behavior of wireless networks has become the focus of many recent research efforts. Although it is generally agreed that wireless communications experience higher error rates than wireline, the nature of these lossy links is not fully understood. This paper describes an effort to characterize the loss behavior of the AT&T WaveLAN, a popular in-building wireless interface. Using a trace-based approach, packet loss information is recorded, analyzed, and validated. Our results indicate that WaveLAN experiences an average packet error rate of 2 to 3 percent. Further analysis reveals that these errors are not independent, making it hard to model them with a simple two-state Markov chain. We derive another model based on the distributions of the error and error-free lengths of the packet streams. For validation, we use both the error models and the traces in a simulator. Trace-driven simulations yield an average TCP throughput about 5 percent lower than that of simulations using our best error model.

205 citations
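The simple two-state Markov chain that the paper finds too coarse for WaveLAN is the classic Gilbert loss model, which can be sketched as follows; the transition probabilities are illustrative, not fitted to the paper's traces.

```python
import random

def gilbert_trace(n, p_g2b=0.03, p_b2g=0.3, seed=1):
    """Two-state (Gilbert) packet-loss model: the GOOD state delivers,
    the BAD state drops. p_g2b and p_b2g are the GOOD->BAD and
    BAD->GOOD transition probabilities. Returns a list of 0/1 loss
    flags (1 = packet lost)."""
    rng = random.Random(seed)
    bad = False
    trace = []
    for _ in range(n):
        trace.append(1 if bad else 0)
        # Transition: leave GOOD with prob p_g2b, stay BAD with 1 - p_b2g.
        bad = (rng.random() < p_g2b) if not bad else (rng.random() >= p_b2g)
    return trace

trace = gilbert_trace(100_000)
loss_rate = sum(trace) / len(trace)
# Steady-state loss rate = p_g2b / (p_g2b + p_b2g), about 0.09 here.
print(f"simulated loss rate: {loss_rate:.3f}")
```

The paper's point is that real WaveLAN error and error-free run lengths are not geometrically distributed, which is exactly what this two-state chain assumes.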


Patent
01 Apr 1996
TL;DR: In this article, management packets are defined that are modified in the payload by each node along a virtual connection and are used to measure both end-to-end QoS and specific individual intermediate node performance parameters.
Abstract: Management packets are defined that are modified in the payload by each node along a virtual connection and are used to measure both end-to-end QoS and specific individual intermediate node performance parameters. Management packets are implemented by defining entirely new packets or by modifying ATM OAM cells. Switches or routers for use as intermediate nodes are defined that modify the payload of the management packet and locally measure packet delay and packet loss. An intermediate node measures and records the difference between the arrival and departure times of management packets at that switch utilizing delay-stamp fields within the management packets and either the switch internal routing header or timestamp fields within the packet. At the endpoint of the virtual connection, delay-stamp fields in the management packet indicate individual node delays and the cumulative delay. An intermediate node counts the number of packets it discards and records these values in the payload of the management packet individually and cumulatively.

205 citations
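A minimal sketch of how an intermediate node might update the delay-stamp fields described above; the field names and packet representation here are hypothetical, not the patent's actual management-packet layout.

```python
def stamp_node_delay(mgmt_packet, node_id, arrival_us, departure_us):
    """An intermediate node records its local residence time (departure
    minus arrival) in the management packet's payload, both per-node
    and cumulatively, so the endpoint can read individual node delays
    and the end-to-end total."""
    delay = departure_us - arrival_us
    mgmt_packet["per_node_delay_us"][node_id] = delay
    mgmt_packet["cumulative_delay_us"] += delay
    return mgmt_packet

pkt = {"per_node_delay_us": {}, "cumulative_delay_us": 0, "discards": 0}
stamp_node_delay(pkt, "switch-A", arrival_us=100, departure_us=340)
stamp_node_delay(pkt, "switch-B", arrival_us=900, departure_us=1150)
print(pkt["cumulative_delay_us"])  # 240 + 250 = 490
```

Discard counts would be accumulated the same way, letting the endpoint attribute both delay and loss to specific nodes along the virtual connection.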


Proceedings ArticleDOI
28 Aug 1996
TL;DR: It is argued that quasi-FIFO is adequate for most applications, and a simple technique for speedy restoration of synchronization in the event of loss is described.
Abstract: Link striping algorithms are often used to overcome transmission bottlenecks in computer networks. Traditional striping algorithms suffer from two major disadvantages. They provide inadequate load sharing in the presence of variable length packets, and may result in non-FIFO delivery of data. We describe a new family of link striping algorithms that solves both problems. Our scheme applies to any layer that can provide multiple FIFO channels. We deal with variable sized packets by showing how fair queuing algorithms can be transformed into load sharing algorithms. Our transformation results in practical load sharing protocols, and shows a theoretical connection between two seemingly different problems. The same transformation can be applied to obtain load sharing protocols for links with different capacities. We deal with the FIFO requirement for two separate cases. If a sequence number can be added to each packet, we show how to speed up packet processing by letting the receiver simulate the sender algorithm. If no header can be added, we show how to provide quasi-FIFO delivery. Quasi-FIFO is FIFO except during occasional periods of loss of synchronization. We argue that quasi-FIFO is adequate for most applications. We also describe a simple technique for speedy restoration of synchronization in the event of loss. We develop an architectural framework for transparently embedding our protocol at the network level by striping IP packets across multiple physical interfaces. The resulting strIPe protocol has been implemented within the NetBSD kernel. Our measurements and simulations show that the protocol offers scalable throughput even when striping is done over dissimilar links, and that the protocol synchronizes quickly after packet loss. Measurements show performance improvements over conventional round robin striping schemes and striping schemes that do not resequence packets.

187 citations
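The transformation of fair queuing into load sharing can be sketched as a surplus-round-robin assignment of variable-length packets to links: stay on the current link until it has consumed a quantum's worth of bytes, then advance. This is a simplified illustration of the idea, not the paper's exact algorithm.

```python
def stripe(packets, n_links, quantum=1500):
    """Assign each packet (given by its size in bytes) to a link.
    Charging packet sizes against a per-quantum byte credit balances
    load across links even when packet lengths vary widely."""
    assignment = []          # link index chosen for each packet
    link = 0
    credit = quantum
    for size in packets:
        assignment.append(link)
        credit -= size
        while credit <= 0:   # this link used up its quantum; move on
            link = (link + 1) % n_links
            credit += quantum
    return assignment

sizes = [1500, 100, 100, 1500, 700, 800, 1500]
print(stripe(sizes, n_links=2))  # [0, 1, 1, 1, 0, 0, 1]
```

Note how the run of small packets stays on link 1 until it has received a full quantum of bytes, which is what naive per-packet round robin gets wrong.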


Patent
Juha-Pekka Ahopelto1, Hannu Kari1
08 Jan 1996
TL;DR: In this article, a protocol-independent routing of data packets between a mobile station of a packet radio network and a party (Host) connected to an external network is proposed. The transferring packet radio network thus does not need to understand the protocol of the transferred extraneous data packet or to interpret its contents.
Abstract: The present invention relates to a protocol-independent routing of data packets between a mobile station of a packet radio network and a party (Host) connected to an external network. In the invention, a data packet of an extraneous protocol (IPX) is transferred through a packet radio network using a second protocol (X.25) as encapsulated in a data packet according to the second protocol. The transferring packet radio network does not thus need to understand the protocol of the transferred extraneous data packet or to be able to interpret the contents of the data packet. A data packet network is connected to other packet radio networks, data networks or the backbone network between packet data networks via a gateway node (GPRS GSN), which uses the network-internal protocol (X.25) towards the dedicated packet network and the protocol of each network towards other networks. When a data packet is transferred via a gateway node from a network into another network, the data packet is encapsulated in a packet according to the protocol of the new network. When the encapsulated data packet arrives in a node which supports the protocol of the encapsulated data packet, the encapsulation is stripped away and the data packet is routed forward according to the protocol of the data packet.

Patent
11 Oct 1996
TL;DR: In this paper, a packet object generator and an incoming packet object handler are used; the handler executes the packet object's source and destination methods with their associated data to verify the source and destination of the received packet object.
Abstract: A computer network having first and second network entities. The first network entity includes a packet object generator that generates a packet object including an executable source method, an executable destination method, and data associated with each of the methods. The first network entity also includes a communications interface to transmit the packet object. The second network entity includes a communications interface to receive the packet object and an incoming packet object handler to handle the received packet object. The incoming packet object handler includes a source and destination verifier to execute the source and destination methods with their associated data so as to verify the source and destination of the received object packet.

Patent
08 Jan 1996
TL;DR: In this article, a packet radio system encapsulates data packets of external data networks by a point-to-point protocol PPP (Fig. 4A, 4B), and passes them through one or more sub-networks to a point which supports the protocol of the encapsulated data packet.
Abstract: A packet radio system encapsulates data packets of external data networks by a point-to-point protocol PPP (Fig. 4A, 4B), and passes them through one or more sub-networks to a point which supports the protocol of the encapsulated data packet. In addition, a special radio link protocol of the packet radio network is required on the radio interface between a mobile data terminal equipment and a support node. PPP packets are encapsulated in data packets of said radio link protocol. The disadvantage of the arrangement is that the data packets of both the PPP protocol and the radio link protocol contain protocol-specific control fields, which reduces the transmission capacity of user information. Therefore, a PPP packet is compressed (Fig. 4C) before the encapsulation (Fig. 4D) by removing therefrom the unnecessary control fields. After having been transferred over the radio interface, the PPP packet is decompressed into its original format (Fig. 4F, 4G).

Patent
19 Apr 1996
TL;DR: In this paper, a switch fabric environment, which includes a buffer, packet data of different class-types from different sources is received, stored in the buffer, processed and outputted to its intended destination.
Abstract: In a switch fabric environment, which includes a buffer, packet data of different class-types from different sources is received, stored in the buffer, processed and outputted to its intended destination. As the buffer fills up, transmission from some of the data sources is stopped to avoid dropping of packets. To avoid packet loss, when the occupancy of the buffer reaches a first threshold value, further transmission of a first-class type of data is precluded from the particular source of that data then being received, while transmission from other sources of that same first-class type of data is not precluded until first-class type of data from such other sources is also received. Further, data of a second-class type is not precluded from being transmitted as long as the amount of data stored in the buffer remains below a second threshold, which is greater than the first threshold. When the occupancy of the buffer reaches that second threshold, further transmission from the particular source of that second-class type of data then being received is also precluded. As data from other sources of that second-class type of data is received, further transmissions from those other sources are also precluded. A third-class type of data, however, is not precluded from transmission as long as the amount of data remains below a third threshold value, which is greater than the second threshold value. In order to avoid packet loss, when a packet from any source is received, it is stored regardless of whether transmission from the source of that packet has been precluded. Advantageously, a MAX/MIN distribution of the available bandwidth can be probabilistically achieved without packet loss.
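The per-class threshold rule above reduces to a simple predicate: class k sources may keep transmitting only while buffer occupancy is below that class's threshold. The threshold values below are illustrative, not from the patent.

```python
thresholds = (100, 200, 300)   # illustrative per-class buffer thresholds

def may_transmit(occupancy, cls):
    """Class `cls` sources are told to stop once buffer occupancy
    reaches thresholds[cls]; classes with larger thresholds keep
    sending. Packets already in flight are stored regardless, so no
    packet is dropped by the backpressure itself."""
    return occupancy < thresholds[cls]

# At occupancy 150, class 0 is stopped while classes 1 and 2 continue.
print(may_transmit(150, 0), may_transmit(150, 1), may_transmit(150, 2))
```

Staggering the thresholds this way is what lets the fabric approach a MAX/MIN share of bandwidth without ever discarding a stored packet.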

Proceedings ArticleDOI
16 Sep 1996
TL;DR: This work examines a specific error control mechanism, evaluates its cost as well as the benefit expected from using it, and concludes that this mechanism can be augmented to obtain a joint source/channel coding scheme suitable for both the current and the future integrated services Internet.
Abstract: Anecdotal evidence suggests that the quality of many videoconferences in the Internet is mediocre because of high packet loss rates. This makes it important to design and implement mechanisms that minimize packet loss and its impact in video (and audio) applications. There are two such types of mechanisms. Rate control mechanisms attempt to minimize the amount of packet loss by matching the bandwidth requirements of a video flow to the capacity available in the network. However, they do not prevent packet loss altogether. Error control mechanisms attempt to minimize the visual impact of lost packets at the destinations. We provide motivation for using error control mechanisms based on forward error correction (FEC) and packet reconstruction. We examine a specific mechanism, and evaluate its cost as well as the benefit expected from using it. This mechanism can be augmented to obtain a joint source/channel coding scheme suitable for both the current and the future integrated services Internet.
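The simplest FEC scheme of the kind discussed above sends one parity packet (the XOR of a group of media packets) so that any single loss in the group can be reconstructed at the receiver. This sketch illustrates the principle only; it is not the specific mechanism the paper evaluates.

```python
def xor_parity(packets):
    """Parity packet: byte-wise XOR of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Rebuild the single missing packet (marked None) by XORing the
    surviving packets with the parity packet."""
    missing = received.index(None)
    survivors = [p for p in received if p is not None] + [parity]
    return missing, xor_parity(survivors)

group = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(group)
idx, rebuilt = recover([b"AAAA", None, b"CCCC"], parity)
print(idx, rebuilt)  # 1 b'BBBB'
```

The cost is the parity packet's bandwidth overhead; the benefit is that isolated losses are repaired without retransmission delay, which matters for interactive video.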

Proceedings ArticleDOI
18 Nov 1996
TL;DR: A new error concealment technique for audio transmission over heterogeneous packet switched networks based on time-scale modification of correctly received packets, WSOLA ("Waveform Similarity Overlap-Add"), is presented.
Abstract: We present a new error concealment technique for audio transmission over heterogeneous packet switched networks based on time-scale modification of correctly received packets. An appropriate time-scale modification algorithm, WSOLA ("Waveform Similarity Overlap-Add"), is used and its parameters are optimized for scaling short audio segments. Particular attention is paid to the additional delay introduced by the new technique. For subjective listening tests, packet loss is simulated at error rates of 20% and 33% and the new technique is compared to previous proposals by category and component judgment of speech quality. The test results show that typical disturbance components of other techniques can be avoided and overall sound quality is higher.

Proceedings ArticleDOI
07 May 1996
TL;DR: It is shown that it is only required to keep one copy of the coded data at its highest possible quality and the transcoder is also capable of fast response to network demands to prevent packet loss.
Abstract: It is expected that most of the video services will be based on the MPEG2 standard and many of them using recorded streams. When compressed video is recorded, the characteristics of the channel through which it will be transmitted are assumed to be known beforehand. Therefore a great lack of flexibility arises in transmission of these streams when channels of diverse characteristics are used. If the same video programme is to be simultaneously distributed to several users through channels with different capacities, the service provider needs to keep several copies of that programme, each one encoded according to the corresponding channel characteristics. We show that it is only required to keep one copy of the coded data at its highest possible quality. Transcoding the main stream to lower rates is achieved with minimum delay. Therefore the transcoder is also capable of fast response to network demands to prevent packet loss.

Proceedings ArticleDOI
24 Mar 1996
TL;DR: The specifications and performance of RAMP a reliable adaptive multicast protocol, a combined error control approach that uses immediate, receiver-initiated, NAK-based, unicast error notification combined with originator based unicast retransmission, are presented.
Abstract: The specifications and performance of RAMP, a reliable adaptive multicast protocol, are presented. Initially described in IETF RFC 1458 (1993), RAMP has been enhanced for use over an all-optical, circuit-switched, gigabit network under our ARPA-sponsored testbed for optical networking (TBONE) project. RAMP uses immediate, receiver-initiated, NAK-based, unicast error notification combined with originator-based unicast retransmission. The approach is motivated by the loss characteristics of the TBONE network, where extremely low bit-error rates (10^-12 or better) and the absence of any store-and-forward capabilities in the switches make packet losses almost entirely a result of receiver buffer overflows. As receiver losses are largely independent, use of unicast over multicast for NAKs and retransmission eliminates unnecessary receiver processing overhead associated with reading and discarding redundant packets. Use of immediate rather than delayed NAKs further improves performance by reducing both latency and the likelihood of buffer overflow. The effectiveness of this combined error control approach has been verified by other researchers, as well as through our own investigations. Interestingly, TBONE loss characteristics resemble those of switched virtual circuit ATM networks and packet-switched networks employing reservation services. As these networks provide quality of service guarantees in the switches, the likely source of packet loss is again due to receiver errors and buffer overflows. Hence, RAMP's design is also relevant for the next generation of packet switched networks.

Patent
18 Jun 1996
TL;DR: In this paper, back door packet communication between a workstation on a network and a device outside the network is identified by detecting packets that are associated with communication involving devices outside of the network, and identifying packets, among those detected packets, that are being sent or received by a device that is not authorized for communication with devices outside network.
Abstract: Back door packet communication between a workstation on a network and a device outside the network is identified by detecting packets that are associated with communication involving devices outside the network, and identifying packets, among those detected packets, that are being sent or received by a device that is not authorized for communication with devices outside the network.

Patent
Yoshimitsu Shimojo1
18 Sep 1996
TL;DR: In this paper, a packet transfer device that can be easily realized even when a number of input ports is large is presented, where each input buffer temporarily stores entered packets class by class, and outputs packets of a selected class specified by the control unit, while determining the selected class of packets to be outputted from the input buffers according to a packet storage state in the packet storage units of the input buffer as a whole for each class.
Abstract: A packet transfer device that can be easily realized even when a number of input ports is large. Each input buffer temporarily stores entered packets class by class, and outputs packets of a selected class specified by the control unit, while the control unit determines the selected class of packets to be outputted from the input buffers according to a packet storage state in the packet storage units of the input buffers as a whole for each class. Each input buffer can temporarily store entered packets while selecting packets to be outputted at a next phase, and the control unit can specify packets to be selected in the input buffers according to an output state of packets previously selected in the input buffers as a whole. Packets stored in the buffer can be managed in terms of a plurality of groups, and each packet entered at the buffer can be distributed into a plurality of groups so that packets are distributed fairly among flows. The packets belonging to one of a plurality of groups are then outputted from the buffer toward the output port. A packet transfer at the buffer can be controlled by issuing a packet transfer command according to a log of packet transfer commands with respect to the buffer and a packet storage state of the buffer.

Proceedings ArticleDOI
16 Nov 1996
TL;DR: This work considers the problem of providing communication protocol support for large-scale group collaboration systems for use in environments such as the Internet which are subject to packet loss, wide variations in end-to-end delays, and transient partitions and presents a communication service, Corona, that attempts to meet these requirements.
Abstract: We consider the problem of providing communication protocol support for large-scale group collaboration systems for use in environments such as the Internet which are subject to packet loss, wide variations in end-to-end delays, and transient partitions. We identify a set of requirements that are critical for the design of such group collaboration systems. These include dynamic awareness notifications, reliable data delivery, and scalability to large numbers of users. We present a communication service, Corona, that attempts to meet these requirements. Corona supports two communication paradigms: the publish-subscribe paradigm and the peer group paradigm. We present the interfaces provided by Corona to applications which are based on these paradigms. We describe the semantics of each interface method call and show how they can help meet the above requirements.

Patent
12 Feb 1996
TL;DR: In this article, the authors propose a method and network interface logic for providing an embedded checksum in a packet transferred over a packet network, where a packet is sent directly onto the packet network from the packet storage means of a source computer using its network interface means.
Abstract: A method and network interface logic for providing an embedded checksum in a packet transferred over a packet network. The packet network comprises a plurality of interconnected, directly-attached computers operative to provide data fidelity between any neighboring pair of such computers at least as robust as such data fidelity between any non-neighboring pair of such computers. Each computer comprises packet storage means and network interface means. Such a packet is sent directly onto the packet network from the packet storage means of a source computer using its network interface means. The packet is received from the packet network and transferred directly into the packet storage means of a sibling computer using its network interface means. Such an embedded checksum is associated with the packet at the sibling computer with the embedded checksum being calculated by at least one of the source computer or the sibling computer based on substantially the entire packet for validating error-free receipt.
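An embedded end-to-end checksum of the kind described can be computed in many ways; the RFC 1071 style 16-bit one's-complement checksum below is shown only as a familiar example, not as the patent's mandated algorithm.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum over the packet contents
    (RFC 1071 style). The receiver validates by recomputing over the
    data plus the embedded checksum and expecting zero."""
    if len(data) % 2:
        data += b"\x00"                              # pad to 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold carry back in
    return ~total & 0xFFFF

packet = b"example payload"
csum = internet_checksum(packet)
padded = packet + b"\x00"                # same padding the sender used
print(internet_checksum(padded + csum.to_bytes(2, "big")))  # 0 on success
```

Because neighboring links in the described network are at least as reliable as the end-to-end path, a single embedded checksum computed at either the source or the sibling computer suffices to validate error-free receipt.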

Journal ArticleDOI
Shinsuke Hara1, A. Ogino2, M. Araki, Minoru Okada1, Norihiko Morinaga1 
TL;DR: Computer simulation results show that the proposed SAW-ARQ protocol can achieve high throughput and effectively reduce the number of retransmissions under slow and fast Rayleigh fading/log-normal shadowing conditions.
Abstract: In a noncellular or large cell-size mobile radio communication system, log-normal shadowing as well as Rayleigh fading becomes the predominant source of system degradation. This paper proposes an efficient stop-and-wait automatic repeat request (SAW-ARQ) protocol with adaptive packet length to provide reliable mobile data packet transmission. The adaptive SAW-ARQ protocol controls the transmitted packet length according to the time-varying channel condition, estimated from the numbers of ACKs (acknowledgment packets) and NACKs (negative-acknowledgment packets). Computer simulation results show that the proposed protocol can achieve high throughput and effectively reduce the number of retransmissions under slow and fast Rayleigh fading/log-normal shadowing conditions.
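A controller in the spirit of the adaptive scheme can be sketched as follows: grow the packet length while the channel looks clean (a run of ACKs), shrink it quickly when NACKs suggest fading or shadowing. The thresholds, step sizes, and bounds here are illustrative assumptions, not the paper's parameters.

```python
class AdaptivePacketLength:
    """Hypothetical ACK/NACK-driven packet-length controller."""

    def __init__(self, min_len=64, max_len=1024, grow_after=4):
        self.min_len, self.max_len = min_len, max_len
        self.grow_after = grow_after   # ACKs in a row before growing
        self.length = min_len          # start conservatively
        self.ack_run = 0

    def on_ack(self):
        self.ack_run += 1
        if self.ack_run >= self.grow_after:
            self.length = min(self.length * 2, self.max_len)  # channel looks good
            self.ack_run = 0

    def on_nack(self):
        self.ack_run = 0
        self.length = max(self.length // 2, self.min_len)     # back off fast
```

The asymmetry (slow growth, fast shrink) reflects the usual trade-off: long packets raise throughput on a clean channel but are more likely to be hit by a fade and retransmitted whole.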

Patent
08 Mar 1996
TL;DR: In this paper, the burst errors are dispersed among all packets in the packet block by interleaving the packets together prior to transmission, and the receiver then deinterleaves the packets into their original format, dispersing the burst errors.

Abstract: Packets are transmitted in different block sizes (21, 26) according to the speed of motion of the receiver (24, 29). The packet block size (21, 26) is selected to minimize the effects of burst errors that occur at the receiver. The burst errors are dispersed among all packets in the packet block by interleaving the packets together prior to transmission. The receiver then deinterleaves the packets into their original format, dispersing the burst errors among all packets in the packet block. Since each packet will then contain only a small proportion of the burst error, standard ECC schemes can be used to correct the bit errors in each packet, increasing the probability that all packets will be transmitted successfully.
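The interleaving idea itself is easy to sketch: write the block row-wise (one row per packet) and transmit it column-wise, so that consecutive bytes on the channel come from different packets and a burst touches each packet only lightly. This is a generic block interleaver for illustration, not the patented scheme; equal-length packets are assumed.

```python
def interleave(packets):
    """Send byte i of every packet together: a channel burst is spread
    across all packets in the block instead of wiping out one packet."""
    assert len({len(p) for p in packets}) == 1, "equal-length packets assumed"
    return bytes(p[i] for i in range(len(packets[0])) for p in packets)

def deinterleave(stream, n_packets):
    """Invert the interleaver: packet i is every n_packets-th byte."""
    plen = len(stream) // n_packets
    return [bytes(stream[i + j * n_packets] for j in range(plen))
            for i in range(n_packets)]
```

With a block of n packets, a burst of b consecutive channel bytes corrupts at most ceil(b / n) bytes per packet, which is what lets a per-packet ECC correct it.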

Patent
Douglas M. Grover1
31 Jul 1996
TL;DR: In this article, the authors present a method and apparatus for selecting and displaying command and response packets of interest communicated over a network utilizing a command and response protocol, enabling the quick determination of problems that can occur when such a protocol is used.
Abstract: A method and apparatus for selecting and displaying command and response packets of interest communicated over a network utilizing a command and response protocol. The method and apparatus of the invention enable the quick determination of problems which can occur when utilizing a command and response protocol.

Patent
13 Dec 1996
TL;DR: A cable modem interface unit is positioned between a cable modem and a network driver interface layer as discussed by the authors, where the interface unit includes a control packet filter, coupled to the modem, that determines whether each packet is a control packet or a data packet.

Abstract: A cable modem interface unit is positioned between a cable modem and a network driver interface layer. The cable modem receives packets from a packet source. The interface unit includes a control packet filter coupled to the modem. The control packet filter receives a packet from the cable modem and determines whether the packet is a control packet or a data packet. The interface unit further includes a receive unit coupled to the control packet filter and the network driver interface layer. If the control packet filter determines that the packet is a data packet, the receive unit receives the packet from the control packet filter and sends the packet to the network driver interface layer. The interface unit further includes a protocol handler coupled to the receive unit. If the control packet filter determines that the packet is a control packet, the protocol handler receives the packet from the control packet filter.
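The demultiplexing step the abstract describes amounts to inspecting a type field and routing each packet to one of two sinks. The one-byte type field and the constant values below are illustrative assumptions, not the patent's wire format.

```python
CONTROL, DATA = 0x01, 0x00   # hypothetical 1-byte packet-type field

def dispatch(packet, protocol_handler, network_driver):
    """Sketch of the control packet filter: control packets go to the
    protocol handler, data packets onward to the network driver
    interface layer."""
    kind, payload = packet[0], packet[1:]
    if kind == CONTROL:
        protocol_handler(payload)
    else:
        network_driver(payload)
```

Keeping the filter in front of the driver layer means control traffic never reaches the host networking stack, which matches the unit's placement between the modem and the network driver interface.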

Patent
06 Sep 1996
TL;DR: In this paper, a communication controller at the sending computer divides data transferred from a host into sub-ACK unit packets and transfers them sequentially to a destination without waiting for the sub-ACKs subsequently provided by the destination.

Abstract: A communication controller at the sending computer divides data transferred from a host into sub-ACK unit packets and transfers them sequentially to a destination without waiting for the sub-ACKs subsequently provided by the destination. A communication controller at the receiving computer issues a sub-ACK to the sending computer for each sub-ACK unit packet that has been normally received, and otherwise issues a retransmission request for the missing sub-ACK unit packets; it merges the data in the sub-ACK unit packets back into the original data after they have all been normally received.

Proceedings ArticleDOI
27 May 1996
TL;DR: This paper provides a mechanism by which unnecessary loss of packets can be avoided by allowing transmission only under "good" link conditions.
Abstract: As mobile hosts move, they encounter changing network characteristics. Characteristics such as bandwidth and reliability can change drastically when a mobile host moves from an indoor to an outdoor environment and vice versa. A mobile host can find itself in a lossy environment due to external conditions such as interference and fading. Existing end-to-end protocols are designed with the assumption that packet loss is a random and rare event. However, due to the mobility of hosts, there can be transient yet significant packet loss. In this paper we consider the effect of mobility on unreliable protocols such as UDP. We provide a mechanism by which unnecessary loss of packets can be avoided by allowing transmission only under "good" link conditions.
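The gating idea can be sketched as a sender that consults a link-quality estimate before each transmission and holds packets back while the link is "bad", rather than handing them to an unreliable transport that would simply lose them. The quality metric, threshold, and function names below are illustrative assumptions, not the paper's mechanism.

```python
def send_when_good(packets, link_quality, threshold=0.5, send=print):
    """Transmit only under 'good' link conditions; defer the rest.
    link_quality[i] is a hypothetical per-packet estimate in [0, 1]."""
    held = []
    for pkt, quality in zip(packets, link_quality):
        if quality >= threshold:
            send(pkt)          # link is good: transmit now
        else:
            held.append(pkt)   # link is bad: hold until conditions improve
    return held
```

For UDP-like traffic this trades latency for loss: a deferred packet arrives late (or is eventually dropped by the application), but it is not silently lost to interference or fading.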

Proceedings ArticleDOI
01 Sep 1996
TL;DR: Mean Opinion Score (MOS) curves show that sound distortions due to packet repetition can be reduced and a new error concealment technique is presented, which modifies the time-scale of correctly received packets instead of repeating them.
Abstract: We present a new error concealment technique for audio transmission over packet networks with high packet loss rate. Unlike other techniques it modifies the time-scale of correctly received packets instead of repeating them. This is done by a time-domain algorithm, WSOLA, whose parameters are redefined so that short audio segments like lost packets can be extended. Particular attention is paid to the additional delay introduced by the new technique. For subjective hearing tests, single and double packet loss is simulated at high packet loss rates, and the new technique is compared to previous proposals by category judgment and component judgment of sound quality. Mean Opinion Score (MOS) curves show that sound distortions due to packet repetition can be reduced.
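The core operation, extending a short segment of received audio to cover a lost packet, can be sketched with a fixed-shift overlap-add. Real WSOLA additionally searches for the best-matching shift before cross-fading; this toy version uses a fixed shift, and all parameter values are illustrative.

```python
def extend_segment(x, extra, overlap):
    """Stretch samples x by `extra` samples: cross-fade a copy of the
    tail onto the end instead of repeating a whole packet verbatim.
    Assumes overlap + extra <= len(x)."""
    seg = x[-(overlap + extra):]           # material to splice back in
    out = list(x)
    for i in range(overlap):               # linear cross-fade on the seam
        w = i / overlap
        out[len(x) - overlap + i] = (1 - w) * x[len(x) - overlap + i] + w * seg[i]
    out.extend(seg[overlap:])              # append the remaining samples
    return out
```

Because the output is built from time-shifted copies of the signal itself rather than a hard repetition, the seam is smoothed by the cross-fade, which is the intuition behind the reduced distortion the MOS curves show.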

DOI
01 Jan 1996
TL;DR: This thesis gives a complete description of Phoenix, a toolkit for building fault-tolerant, distributed applications in large scale networks, and its architecture, which is based on a new implementation of the virtually synchronous communication paradigm.
Abstract: Large scale systems are becoming more and more common today. Many distributed applications are emerging that use the capability of world-wide internetworking. Since many applications require availability and consistency in the presence of failures, an adequate support for fault-tolerance is necessary. This can be provided by different paradigms and their implementations. Unfortunately, most of these implementations consider only local area networks, whereas this thesis describes a system, called Phoenix, which aims at large scale networks where additional types of failure have to be taken into account. This thesis gives a complete description of Phoenix, a toolkit for building fault-tolerant, distributed applications in large scale networks. Fault-tolerance in Phoenix is achieved using replicated process groups, and consistency within one process group is achieved by using view synchronous communication. The particularity of Phoenix is the provision of this fault-tolerance and consistency in a large scale environment, where large scale is two-fold: (1) the wide geographical distribution of the replicated processes, and (2) a high number of participating processes in the system. The description of Phoenix given here is based on its architecture. Each layer of Phoenix focuses on a particular problem and proposes a solution. Lower layers are responsible for the geographical large scale aspects and their problems, whereas higher layers provide high order communication and deal with numerical large scale aspects. In large scale networks, in addition to the increased unpredictable latency of messages, communication protocols have to deal with link failures, which are often only transient. The dynamic routing layer in the Phoenix architecture tries to mask these link failures by rerouting. This rerouting not only gives increased reliability of communication, but also a more stable and accurate image of the reachability of the processes. 
On top of the dynamic routing layer, the reliable communication layer provides eventually reliable channels, i.e. messages sent are eventually delivered at the destination provided that the sender and the destination processes are correct. This layer takes into account different parameters of large scale networks, such as (1) increased, unpredictable latency, (2) non-negligible packet desequencing (reordering), and (3) significant packet loss. The consistency among the replicas is based on a new implementation of the virtually synchronous communication paradigm. The implementation is part of the view synchronous communication layer and is based on a modified consensus protocol together with the eventually reliable channels of the reliable communication layer. The modified consensus protocol itself is based on an unstable suspicion model, where incorrectly suspected processes can be considered alive at a later point. This is exploited to keep the protocol live whenever a majority of replicas can communicate with each other. The situation where a distributed system is cut into smaller subsystems, none of which contains a majority, is not uncommon at large scale, but is often only transient. Further, the dynamic routing layer already does its utmost to avoid this situation. Based on the view synchronous communication layer, the ordered multicast communication layer provides different ordering primitives based on solid theoretical definitions, allowing the implementation of different total and uniform orders. The numerical large scale is addressed by assigning different roles to the processes of a distributed system without leaving the context of groups. The idea is to concentrate the fault-tolerance aspect in a small set of core processes, whilst still guaranteeing convenient and efficient access semantics to processes outside this core.
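The "eventually reliable channel" guarantee can be illustrated with a small simulation: each message is retransmitted over a lossy link until it gets through, so every message from a correct sender is eventually delivered. The loss model and function names are illustrative, not Phoenix's implementation.

```python
import random

def eventually_reliable_send(messages, loss_rate=0.3, seed=1):
    """Retransmit each message until the (simulated) lossy link passes it.
    With correct sender and receiver, every message is eventually delivered."""
    rng = random.Random(seed)   # seeded for a reproducible simulation
    delivered = []
    for m in messages:
        while rng.random() < loss_rate:
            pass                # transmission lost; retransmit
        delivered.append(m)     # this attempt got through
    return delivered
```

Note that the guarantee is only eventual: under transient partitions nothing bounds how long delivery takes, which is exactly why the higher layers (consensus with an unstable suspicion model) must tolerate late arrivals rather than treating a slow process as failed forever.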