
Showing papers on "Transmission delay" published in 1999


Journal ArticleDOI
TL;DR: The prevalence of unusual network events such as out-of-order delivery and packet replication is characterized, and a robust receiver-based algorithm for estimating "bottleneck bandwidth" is discussed that addresses deficiencies discovered in techniques based on "packet pair".
Abstract: We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20000 TCP bulk transfers between 35 Internet sites. Because we traced each 100-kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behavior due to the different directions of the Internet paths, which often exhibit asymmetries. We: (1) characterize the prevalence of unusual network events such as out-of-order delivery and packet replication; (2) discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair;" (3) investigate patterns of packet loss, finding that loss events are not well modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and (4) analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales.
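
The receiver-based estimator referred to above is designed to fix weaknesses of the basic "packet pair" technique. The sketch below shows only that basic technique, not the paper's algorithm; the function names, the assumption that probe packets of known size are sent back-to-back, and the use of a median to damp noisy samples are illustrative choices.

    # Basic "packet pair" bottleneck-bandwidth estimation (illustrative only).
    # Back-to-back packets are spread out by the slowest link on the path, so
    # bandwidth is roughly packet size divided by the spacing seen at the receiver.
    from statistics import median

    def packet_pair_estimates(arrival_times, packet_size_bytes):
        """Per-pair bottleneck-bandwidth estimates (bytes/s) from consecutive arrivals."""
        estimates = []
        for t_prev, t_curr in zip(arrival_times, arrival_times[1:]):
            gap = t_curr - t_prev
            if gap > 0:
                estimates.append(packet_size_bytes / gap)
        return estimates

    def robust_bottleneck_bandwidth(arrival_times, packet_size_bytes):
        # A median over many pairs is one simple way to damp the queueing and
        # clock-granularity noise that defeats naive packet pair.
        return median(packet_pair_estimates(arrival_times, packet_size_bytes))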

913 citations


Journal ArticleDOI
TL;DR: From the single-user point of view considered in this paper, there exists an optimal repetition diversity order (or spreading factor) that minimizes the information outage probability for given rate, power, and fading statistics.
Abstract: We study optimal constant-rate coding schemes for a block-fading channel with strict transmission delay constraint, under the assumption that both the transmitter and the receiver have perfect channel-state information. We show that the information outage probability is minimized by concatenating a standard "Gaussian" code with an optimal power controller, which allocates the transmitted power dynamically to the transmitted symbols. We solve the minimum outage probability problem under different constraints on the transmitted power and we derive the corresponding power-allocation strategies. In addition, we propose an algorithm that approaches the optimal power allocation when the fading statistics are not known. Numerical examples for different fading channels are provided, and some applications discussed. In particular, we show that minimum outage probability and delay-limited capacity are closely related quantities, and we find a closed-form expression for the delay-limited capacity of the Rayleigh block-fading channel with transmission over two independent blocks. We also discuss repetition diversity and its relation with direct-sequence or multicarrier spread-spectrum transmission. The optimal power-allocation strategy in this case corresponds to selection diversity at the transmitter. From the single-user point of view considered in this paper, there exists an optimal repetition diversity order (or spreading factor) that minimizes the information outage probability for given rate, power, and fading statistics.
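
In symbols of our own choosing (not the paper's notation), the quantity being minimized can be written for an N-block fading realization g = (g_1, ..., g_N), power allocation p(g) and target rate R as

    P_{\mathrm{out}}(R) \;=\; \Pr\!\left[\frac{1}{N}\sum_{n=1}^{N}\log_2\!\bigl(1 + g_n\,p_n(\mathbf{g})\bigr) < R\right],
    \qquad \text{subject to}\quad \mathbb{E}\!\left[\frac{1}{N}\sum_{n=1}^{N} p_n(\mathbf{g})\right] \le \bar{P}.

Minimizing over the power allocation p(.) yields the optimal power controller, and the delay-limited capacity is the largest R for which the outage probability can be driven to zero under the power constraint.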

822 citations


Journal ArticleDOI
TL;DR: It is found that in the presence of massive packet reordering, transmission control protocol (TCP) performance can be profoundly affected, and that large-scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.
Abstract: It is a widely held belief that packet reordering in the Internet is a pathological behavior, or more precisely, that it is an uncommon behavior caused by incorrect or malfunctioning network components. Some studies of Internet traffic have reported seeing occasional packet reordering events and ascribed these events to "route fluttering", router "pauses" or simply to broken equipment. We have found, however, that parallelism in Internet components and links is causing packet reordering under normal operation and that the incidence of packet reordering appears to be substantially higher than previously reported. More importantly, we observe that in the presence of massive packet reordering, transmission control protocol (TCP) performance can be profoundly affected. Perhaps the most disturbing observation about TCP's behavior is that large-scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.

434 citations


Proceedings ArticleDOI
21 Mar 1999
TL;DR: A simple algorithm is obtained that optimizes a subjective rather than an objective measure of quality, incorporates the constraints of rate control and playout delay adjustment schemes, and adapts to varying loss conditions in the network.
Abstract: Excessive packet loss rates can dramatically decrease the audio quality perceived by users of Internet telephony applications. Previous results suggest that error control schemes using forward error correction (FEC) are good candidates for decreasing the impact of packet loss on audio quality. However, the FEC scheme must be coupled to a rate control scheme. Furthermore, the amount of redundant information used at any given point in time should also depend on the characteristics of the loss process at that time (it would make no sense to send much redundant information when the channel is loss free), on the end-to-end delay constraints (destinations typically have to wait longer to decode the FEC as more FEC information is used), on the quality of the redundant information, etc. However, it is not clear, given all these constraints, how to choose the "best" possible redundant information. We address this issue, and illustrate the approach using an FEC scheme for packet audio standardized in the IETF. We show that the problem of finding the best redundant information can be expressed mathematically as a constrained optimization problem for which we give explicit solutions. We obtain from these solutions a simple algorithm with very interesting features, namely (i) the algorithm optimizes a subjective measure (such as the audio quality perceived at a destination) as opposed to an objective measure of quality (such as the packet loss rate at a destination), (ii) it incorporates the constraints of rate control and playout delay adjustment schemes, and (iii) it adapts to varying loss conditions in the network (estimated online with RTCP feedback). We have been using the algorithm together with a TCP-friendly rate control scheme, and we have found it to provide very good audio quality even over paths with high and varying loss rates. We present simulation and experimental results to illustrate its performance.
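
The paper gives explicit solutions to its constrained optimization problem; the sketch below only illustrates the shape of the decision, searching over redundancy levels for the one that maximizes an assumed subjective-quality function under rate and playout-delay constraints. The quality_model callable, the bitrate and delay formulas, and the constants are placeholders, not the standardized IETF scheme.

    # Illustrative brute-force choice of FEC redundancy level (not the paper's
    # closed-form solution). quality_model(loss_rate, k) is an assumed callable
    # returning the subjective quality obtained with redundancy depth k.
    def choose_redundancy(loss_rate, rate_budget_kbps, max_playout_delay_ms,
                          quality_model, levels=range(5)):
        best, best_quality = None, float("-inf")
        for k in levels:                        # k = amount of redundant information
            rate_kbps = 64 + 16 * k             # assumed bitrate cost of level k
            delay_ms = 20 * (k + 1)             # deeper FEC means waiting longer to decode
            if rate_kbps > rate_budget_kbps or delay_ms > max_playout_delay_ms:
                continue                        # violates rate-control / playout constraints
            q = quality_model(loss_rate, k)     # subjective quality, not raw loss rate
            if q > best_quality:
                best, best_quality = k, q
        return best                             # None if no level satisfies the constraints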

377 citations


01 Sep 1999
TL;DR: This memo defines a metric for one-way packet loss across Internet paths.
Abstract: This memo defines a metric for one-way packet loss across Internet paths. [STANDARDS-TRACK]
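
As a rough illustration only, a one-way loss fraction could be computed from per-packet send and receive records as below; the record format and the "arrived too late counts as lost" timeout are assumptions, not the memo's precise definitions.

    # Illustrative one-way loss computation from measurement records.
    def one_way_loss(sent, received, loss_timeout_s=60.0):
        """sent: {seq: send_time}, received: {seq: arrival_time} -> loss fraction."""
        lost = 0
        for seq, t_sent in sent.items():
            t_rcvd = received.get(seq)
            if t_rcvd is None or t_rcvd - t_sent > loss_timeout_s:
                lost += 1                       # never arrived, or arrived past the threshold
        return lost / len(sent) if sent else 0.0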

290 citations


Patent
27 Oct 1999
TL;DR: In this paper, a linecard architecture for high-speed routing of data in a communications device is proposed, which provides low-latency routing based on packet priority: packet routing and processing occur at line rate (wire speed) for most operations.
Abstract: A linecard architecture for high speed routing of data in a communications device. This architecture provides low latency routing based on packet priority: packet routing and processing occurs at line rate (wire speed) for most operations. A packet data stream is input to the inbound receiver, which uses a small packet FIFO to rapidly accumulate packet bytes. Once the header portion of the packet is received, the header alone is used to perform a high speed routing lookup and packet header modification. The queue manager then uses the class of service information in the packet header to enqueue the packet according to the required priority. Enqueued packets are buffered in a large memory space holding multiple packets prior to transmission across the device's switch fabric to the outbound linecard. On arrival at the outbound linecard, the packet is enqueued in the outbound transmitter portion of the linecard architecture. Another large, multi-packet memory structure, as employed in the inbound queue manager, provides buffering prior to transmission onto the network.

284 citations


Patent
30 Apr 1999
TL;DR: A scalable, high-speed router receives a packet, extracts the routing information from its header into a corresponding header packet, stores the data information in memory, processes the header packet through pipelined filtering, route-table lookup and flow identification stages to determine a route and assign packet forwarding information, and then retrieves the data and forwards the packet to a further destination in accordance with that forwarding information.
Abstract: A scalable, high-speed router for routing packets of information through an interconnected network comprises an interface for receiving a packet containing header and data information; a device for extracting routing information from the header of an arrived packet and generating a corresponding header packet for the arrived packet; a memory device for storing the data information of the arrived packet at predetermined memory locations; a device for processing the corresponding header packet to determine a route for the arrived packet and assigning packet forwarding information to the header packet; and, a device for retrieving the data information from the predetermined memory locations and forwarding both the data and header packet containing the packet forwarding information to the interface for routing the packet to a further destination in accordance with the packet forwarding information. The processing device includes devices performing filtering, route-table lookup and flow identification functions and which devices are organized in a pipelined fashion for successive, high-speed operations on the header packet. The router ensures that the arrived packet is forwarded in accordance with any quality of service requirements and flow specifications.

255 citations


01 Jan 1999
TL;DR: It is demonstrated that algorithms embedded in the end-systems are able to synthesize a telephony-like service by blocking calls at times when the load in the packet network is high.
Abstract: We describe how a packet network with a simple pricing mechanism and no connection acceptance control may be used to carry a telephony-like service with low packet loss and some call blocking. The packet network uses packet marking to indicate congestion, and end-systems are charged a fixed small amount per mark received. The end-systems are thus provided with the information and the incentive to use the packet network efficiently. We demonstrate that algorithms embedded in the end-systems are able to synthesize a telephony-like service by blocking calls at times when the load in the packet network is high.

203 citations


Patent
18 Feb 1999
TL;DR: In this article, a system and method for improving the efficiency of packet-based networks by using aggregate packets are described, which can reduce the transmission time when there are multiple packets being sent to common destinations because the inter-packet time may be reduced.
Abstract: A system and method for improving the efficiency of a packet-based network by using aggregate packets are described. One example method involves determining which network devices support aggregate packets. If a first packet is received on a route that supports aggregate packets, it is then held for a short period. During this short period, if an additional packet is received that shares at least one common route element that also supports aggregate packets with the first packet, the first packet and the additional packet are combined into a single larger aggregate packet. This can reduce the transmission time when there are multiple packets being sent to common destinations because the inter-packet time may be reduced. Additionally, in some networks, this technique allows the bandwidth of a common medium to be more fully used because more of the packets will be closer to the maximum size allowed.
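
The hold-and-combine step can be sketched as follows; the hold period, the maximum aggregate size, the shares_route predicate and the queue interface (poll/put_back) are all hypothetical, standing in for whatever the implementation actually provides.

    import time

    MAX_AGGREGATE_BYTES = 1500      # assumed maximum packet size on the shared medium
    HOLD_PERIOD_S = 0.005           # assumed "short period" to hold the first packet

    def try_aggregate(first_packet, incoming_queue, shares_route):
        """Hold first_packet briefly and merge packets sharing a route element."""
        held = [first_packet]
        size = len(first_packet.payload)
        deadline = time.monotonic() + HOLD_PERIOD_S
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            pkt = incoming_queue.poll(timeout=remaining)   # hypothetical queue interface
            if pkt is None:
                break
            if shares_route(first_packet, pkt) and size + len(pkt.payload) <= MAX_AGGREGATE_BYTES:
                held.append(pkt)                           # fold into one larger aggregate
                size += len(pkt.payload)
            else:
                incoming_queue.put_back(pkt)               # not eligible; send separately
        return held                     # caller wraps these under a single aggregate header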

188 citations


Patent
14 Jan 1999
TL;DR: In this paper, the delay time needed in a jitter buffer is determined by methods and apparatus that vary the size of the jitter buffer based on an estimated variation of packet transmission delay derived from the times of arrival of stored packets, enabling a smooth data feed to an application without excessive delays.
Abstract: In a packet communication system, the delay time needed in a jitter buffer is determined, enabling a smooth data feed to an application without excessive delays, by methods and apparatus that vary the size of the jitter buffer based on an estimated variation of packet transmission delay derived from the times of arrival of stored packets. A variance buffer stores variances of the times of arrival of stored packets, and the estimated variation of packet transmission delay is derived from the stored variances. The size of the jitter buffer can be changed preferentially during periods of discontinuous packet transmission.
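
A minimal sketch of sizing a jitter buffer from the observed variation of arrival times is shown below; the exponential smoothing constant, the nominal 20 ms packet spacing and the four-times-variation margin are assumptions, not the patented method. As the abstract notes, an implementation would preferably apply the resulting size change during periods of discontinuous packet transmission.

    # Illustrative jitter-buffer sizing from estimated delay variation.
    class JitterBufferSizer:
        def __init__(self, packet_interval_s=0.020, alpha=0.125):
            self.expected_gap = packet_interval_s   # nominal sender spacing (assumed)
            self.alpha = alpha                      # smoothing constant (assumed)
            self.var_estimate = 0.0
            self.last_arrival = None

        def on_packet_arrival(self, now):
            if self.last_arrival is not None:
                deviation = abs((now - self.last_arrival) - self.expected_gap)
                # Exponentially weighted estimate of transmission-delay variation.
                self.var_estimate += self.alpha * (deviation - self.var_estimate)
            self.last_arrival = now

        def target_buffer_delay(self):
            # Enough buffering to absorb the estimated variation without adding
            # excessive delay; the 4x margin is an arbitrary illustrative choice.
            return self.expected_gap + 4.0 * self.var_estimate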

178 citations


Patent
09 Jul 1999
TL;DR: In this paper, an advanced reservation algorithm is proposed for assigning future slots of a transmission frame to a data packet in the transmission frame for transmission over a wireless telecommunication network system.
Abstract: A wireless telecommunications network having superior quality of service is provided. A system and method for assigning future slots of a transmission frame to a data packet in the transmission frame for transmission over a wireless telecommunication network system includes applying an advanced reservation algorithm, reserving a first slot for a first data packet of an internet protocol (IP) flow in a future transmission frame based on the algorithm, and reserving a second slot for a second data packet of the IP flow in a transmission frame subsequent in time to the future transmission frame based on the algorithm, wherein the second data packet is placed in the second slot in an isochronous manner to the placement of the first data packet in the first slot. There may be a periodic variation between the placement of the first data packet in the first slot and the placement of the second data packet in the second slot, or no periodic variation between slot placements. The advanced reservation algorithm determines whether the IP flow is jitter-sensitive.

Patent
Keijo Laiho1
07 Jun 1999
TL;DR: In this paper, it was shown that an apparatus in a wireless telecommunications network is provided with Short Message Service via a circuit switched channel unless the apparatus is operating in a packet mode.
Abstract: An apparatus in a wireless telecommunications network is provided with Short Message Service via a circuit switched channel unless the apparatus is operating in a packet mode. If the apparatus is operating in the packet mode, the apparatus is provided with Short Message Service via a packet channel.

Patent
29 Apr 1999
TL;DR: In this article, a credit bucket algorithm is used to ensure that packet flows are within specified bandwidth consumption limits, by stripping the layer 2 header information from the packet and storing a linked list of table entries that includes the fields necessary to implement the credit bucket.
Abstract: A method and apparatus for controlling the flow of variable-length packets to a multiport switch involve accessing forwarding information in a memory (214) based at least partially on layer 4 information from a packet and then forwarding (186) the packet only if the packet is within a bandwidth consumption limit that is specified in the forwarding information. In a preferred embodiment, a credit bucket algorithm is used to ensure that packet flows are within specified bandwidth consumption limits. The preferred method for implementing the credit bucket algorithm to control flows of packets involves first receiving a particular packet from a flow and then stripping the layer 2 header information from the packet. The layer 3 and layer 4 information from the packet is then used to look-up (170) flow-specific forwarding and flow control information in a memory that stores a linked list of table entries that includes the fields necessary to implement the credit bucket algorithm. The credit bucket algorithm is implemented in embedded devices within an application-specific integrated circuit, allowing the control of packet flows based on the application of the flow.
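
The credit bucket check itself fits in a few lines. The sketch below is a generic software token/credit bucket, not the patent's hardware table layout; the per-flow rate and burst parameters are assumed configuration values.

    import time

    class CreditBucket:
        """Generic credit bucket: forward a packet only if credits cover its length."""
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes
            self.credits = burst_bytes
            self.last_update = time.monotonic()

        def allow(self, packet_len):
            now = time.monotonic()
            # Credits accumulate at the configured rate, capped at the burst size.
            self.credits = min(self.capacity,
                               self.credits + (now - self.last_update) * self.rate)
            self.last_update = now
            if packet_len <= self.credits:
                self.credits -= packet_len      # within the bandwidth consumption limit
                return True
            return False                        # over the limit: do not forward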

Journal ArticleDOI
TL;DR: This paper presents a rate-distortion optimized mode selection method for packet lossy environments that takes into account the network conditions and the error concealment method used at the decoder.
Abstract: Reliable transmission of compressed video in a packet lossy environment cannot be achieved without error recovery mechanisms. We describe an effective method for increasing error resilience of video transmission over packet lossy networks such as the Internet. Intra coding (without reference to a previous picture) is a well-known technique for eliminating temporal error propagation in a predictive video coding system. Random intra coding of blocks increases error resilience to packet loss. However, when the error concealment used by the decoder is known, intra encoding following a method that optimizes the tradeoffs between compression efficiency and error resilience is a better alternative. In this paper, we present a rate-distortion optimized mode selection method for packet lossy environments that takes into account the network conditions and the error concealment method used at the decoder. We present results for different packet loss rates and typical packet sizes of the Internet that illustrate the advantages of the proposed method.
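
In notation of our own choosing (not the paper's), a simplified form of such a rate-distortion optimized mode decision for a block b can be written as

    m^{\ast}(b) \;=\; \arg\min_{m\,\in\,\{\mathrm{intra},\,\mathrm{inter}\}} \;\mathbb{E}\!\left[D_b(m)\right] + \lambda\,R_b(m),
    \qquad \mathbb{E}\!\left[D_b(m)\right] \;=\; (1-p)\,D_b^{\mathrm{dec}}(m) + p\,D_b^{\mathrm{conc}},

where p is the packet loss rate, D_b^{dec}(m) is the decoded distortion of block b under mode m, D_b^{conc} is the distortion after the decoder's error concealment, and \lambda trades rate against distortion. This simplified expectation ignores temporal error propagation, which the paper's method additionally accounts for.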

Patent
29 Sep 1999
TL;DR: In this paper, a programmable network element (400) operates on packet traffic flowing through the element in accordance with a gateway program (404, 405, 406) which is dynamically uploaded into the network element or unloaded from it via a mechanism separate from the actual packet traffic as the element operates.
Abstract: A programmable network element (400) operates on packet traffic flowing through the element in accordance with a gateway program (404, 405, 406) which is dynamically uploaded into the network element or unloaded from it via a mechanism separate from the actual packet traffic as the element operates. Such programmable network element can simultaneously operate on plural packet flows with different or the same programs being applied to each flow. A dispatcher (402) provides a packet filter (403) with a set of rules provided by one or more of the dynamically loaded and invoked programs. These rules define, for each program, the characteristics of those packets flowing through the network element that are to be operated upon in some manner. A packet that flows from the network through the filter and satisfies one or more of such rules is sent by the packet filter to the dispatcher. The dispatcher, in accordance with one of the programs, either sends the packet to the program for manipulation by the program itself, or manipulates the packet itself in a manner instructed by the program. The processed packet is sent back through the filter to the network for routing to its destination.

Patent
17 Feb 1999
TL;DR: In this article, a store-and-forward network switch uses an embedded dynamic-random-access memory (DRAM) packet memory, where packets are stored at row boundaries so that DRAM page-mode cycles predominate.
Abstract: A store-and-forward network switch uses an embedded dynamic-random-access memory (DRAM) packet memory. An input port controller receiving a packet writes the packet to the embedded packet memory. The input port controller then sends a message to the output port over an internal token bus. The message includes the row address in the embedded packet memory where the packet was written and its length. The output port reads the message and reads the packet from the embedded memory at the row address before transmitting the packet to external media. Packets are stored at row boundaries so that DRAM page-mode cycles predominate. Only one packet is written to each DRAM row or page. Thus the column address is not sent between ports with the message sent over the token bus. A routing table can also be included in the embedded DRAM.

Patent
26 Feb 1999
TL;DR: In this paper, a network interface is presented that receives packet data from a shared medium and accomplishes the signal processing required to convert the data packet to host computer formatted data separately from receiving the data packets.
Abstract: A network interface is presented that receives packet data from a shared medium and accomplishes the signal processing required to convert the data packet to host computer formatted data separately from receiving the data packet. The network interface receives the data packet, converts the analog signal to a digitized signal, and stores the resulting sample packet in a storage queue. An off-line processor, which may be the host computer itself, performs the signal processing required to interpret the sample packet. In transmission, the off-line processor converts host-formatted data to a digitized version of a transmission data packet and stores that in a transmission queue. A transmitter converts the transmission data packet format and transmits the data to the shared medium.

Proceedings ArticleDOI
05 Oct 1999
TL;DR: Simulation results show the efficiency of the data transmission and the feasibility of merging control using the data transmission algorithm; when a platoon consists of ten vehicles, the transmission rate is three times as efficient as with the conventional algorithm and is sufficient for merging control.
Abstract: This paper deals with merging control of vehicles with inter-vehicle communication on highways, which would greatly contribute to increased safety and decreased traffic congestion. Algorithms for merging control of vehicles using the concept of a virtual vehicle and for inter-vehicle communication for the control are presented. A virtual vehicle is generated by mapping a vehicle on one lane onto another lane, in order to enable longitudinal control between a vehicle on a main lane and one on a sub lane for smooth merging. The inter-vehicle communication is essential in the merging control algorithm, because the generation of the virtual vehicle requires the inter-vehicle communication. The data transmission in the communication consists of an intra-platoon algorithm and an inter-platoon one. The former is characterized by interruption detection gaps that reduce the transmission delay caused by collisions, and the latter by data transmission only between lead vehicles. Simulation results show the efficiency of the data transmission and the feasibility of merging control using the data transmission algorithm. When a platoon consists of ten vehicles, the transmission rate is three times as efficient as with the conventional algorithm and is sufficient for merging control.

Journal ArticleDOI
01 Aug 1999
TL;DR: A solution to the problem of being unable to separate congestion-induced packet loss from other types of packet loss is presented and analysed using simulation techniques.
Abstract: Most commonly used data transfer protocols assume that every packet loss is an indication of network congestion. This interaction between the error recovery and the congestion control procedures results in a low utilisation of the link when there is an appreciable rate of losses due to link errors. This issue is significant for wireless links, particularly those with a long propagation delay. A solution to this problem of being unable to separate congestion-induced packet loss from other types of packet loss is presented and analysed using simulation techniques.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A balanced twin-description interframe video coder is designed and performance results are presented for video transmission over packet networks with packet losses.
Abstract: A balanced twin-description interframe video coder is designed and performance results are presented for video transmission over packet networks with packet losses. The coder is based on a predictive multiple description quantizer structure called mutually-refining DPCM (MR-DPCM). The novel feature of this predictive quantizer is that the decoder and encoder filter states track in either of the two single-channel modes as well as in the two-channel mode. The performance and indeed the suitability of the multiple description approach for a network with packet losses depends on the packetization method. Two packetization methods are considered: a "correct" one and a low-latency but "incorrect" one. Performance results are presented for synthetic sources as well as for a video sequence under a variety of conditions.

Patent
13 Jul 1999
TL;DR: In this paper, a method and apparatus for minimizing overhead in packet re-transmission in a communication system is presented, where each packet is given a sequence number, based on a current transmission rate, the size of the packet, and a previously assigned sequence number.
Abstract: A method and apparatus are provided for minimizing overhead in packet re-transmission in a communication system. Each packet is given a sequence number based on a current transmission rate, the size of the packet, and a previously assigned sequence number. The packet size can be adapted so that the entire packet fits into a single transmission block. The packet size may also be adapted based on throughput. The packet size may be adapted based on the transmission rate and/or throughput, depending on whether the packet is being transmitted for the first time or is being re-transmitted. Alternatively, if the packet is being re-transmitted, the packet is transmitted at its original transmission rate, regardless of the current transmission rate.

Journal ArticleDOI
TL;DR: The problem of video quality prediction and control for high-resolution video transmitted over lossy packet networks is addressed, and it is demonstrated that the reachable quality is upper-bounded and exhibits one optimal coding rate for a given packet loss ratio.
Abstract: We address the problem of video quality prediction and control for high-resolution video transmitted over lossy packet networks. In packet video, the bitstream flows through several subsystems (coder, network, decoder); each of them can impair the information, either by data loss or by introducing some delay. However, each of these subsystems can be fine-tuned in order to minimize these problems and to optimize the quality of the delivered signal, taking into account the available bitrate. The assessment of the end-user quality is a non-trivial issue. We analyse how the user-perceived quality is related to the average encoding bitrate for variable bit rate MPEG-2 video. We then show why simple distortion metrics may lead to inconsistent interpretations. Furthermore, for a given coder setup, we analyse the effect of packet loss on the user-level quality. We then demonstrate that, when jointly studying the impact of coding bit rate and packet loss, the reachable quality is upper-bounded and exhibits one optimal coding rate for a given packet loss ratio.

Patent
26 Jul 1999
TL;DR: In this article, the authors propose a method and apparatus for in-line processing a data packet while routing the packet through a router in a system transmitting data packets between a source and a destination over a network including the router.
Abstract: A method and apparatus for in-line processing a data packet while routing the packet through a router in a system transmitting data packets between a source and a destination over a network including the router. The method includes receiving the data packet and pre-processing layer header data for the data packet as the data packet is received and prior to transferring any portion of the data packet to packet memory. The data packet is thereafter stored in the packet memory. A routing through the router is determined including a next hop index describing the next connection in the network. The data packet is retrieved from the packet memory and a new layer header for the data packet is constructed from the next hop index while the data packet is being retrieved from memory. The new layer header is coupled to the data packet prior to transfer from the router.

Patent
22 Mar 1999
TL;DR: In this paper, a two-phase packet processing technique is provided for routing traffic in a packet-switched, integrated services network which supports a plurality of different service classes, which significantly reduces packet processing latency, particularly with respect to high priority or delay-sensitive packets.
Abstract: A two-phase packet processing technique is provided for routing traffic in a packet-switched, integrated services network which supports a plurality of different service classes. During Phase I, packets are retrieved from the router input interface and classified in order to identify the associated priority level of each packet and/or to determine whether a particular packet is delay-sensitive. If it is determined that a particular packet is delay-sensitive, the packet is immediately and fully processed. If, however, it is determined that the packet is not delay-sensitive, full processing of the packet is deferred and the packet is stored in an intermediate data structure. During Phase II, packets stored within the intermediate data structure are retrieved and fully processed. The technique of the present invention significantly reduces packet processing latency, particularly with respect to high priority or delay-sensitive packets. It is easily implemented in conventional routing systems, imposes little computational overhead, and consumes only a limited amount of memory resources within such systems.
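
A sketch of that two-phase structure follows: Phase I classifies packets and fully processes only the delay-sensitive ones, deferring the rest to an intermediate structure that Phase II drains. The classifier, the input-interface drain() call and the per-pass budget are placeholders, not the patented implementation.

    from collections import deque

    deferred = deque()                          # intermediate data structure for Phase II

    def phase_one(input_interface, is_delay_sensitive, process_fully):
        for pkt in input_interface.drain():     # retrieve packets from the input interface
            if is_delay_sensitive(pkt):
                process_fully(pkt)              # delay-sensitive: process immediately
            else:
                deferred.append(pkt)            # defer full processing to Phase II

    def phase_two(process_fully, budget=64):
        # Fully process a bounded batch of deferred packets per scheduling pass.
        for _ in range(min(budget, len(deferred))):
            process_fully(deferred.popleft())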

Patent
09 Mar 1999
TL;DR: In this article, a packetization method and packet structure is proposed to improve the robustness of a bitstream generated when a still image is decomposed with a wavelet transform.
Abstract: A packetization method and packet structure improve the robustness of a bitstream generated when a still image is decomposed with a wavelet transform. The wavelet coefficients of one “texture unit” are scanned and coded in accordance with a chosen scanning method to produce a bitstream. The bitstreams for an integral number of texture units are assembled into a packet, each of which includes a packet header. Each packet header includes a resynchronization marker to enable a decoder to resynchronize with the bitstream if synchronization is lost, and an index number which absolutely identifies one of the texture units in the packet to enable a decoder to associate following packets with their correct position in the wavelet transform domain. The header information enables a channel error to be localized to a particular packet, preventing the effects of the error from propagating beyond packet boundaries. The invention is applicable to the pending MPEG-4 and JPEG-2000 image compression standards.

Patent
15 Jul 1999
TL;DR: In this paper, a method, apparatus and software program is provided for scheduling and admission controlling of real-time data packet traffic and a delivery deadline is determined for each payload data packet at the packet scheduler and packets are sorted into a time-stamp based queue.
Abstract: A method, apparatus and software program is provided for scheduling and admission controlling of real-time data packet traffic. Data packets are admitted or rejected for real-time processing according to throughput capabilities of a packet scheduler. A delivery deadline is determined for each payload data packet at the packet scheduler and packets are sorted into a time-stamp-based queue. Deadline violations are monitored and an adaptation of payload data packets can be triggered on demand in order to enter a stable state.
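
The sketch below shows one way deadline-sorted queueing with a throughput-based admission test might look; the admission rule (reject a packet whose deadline the queued backlog would already make unreachable) and the violation counter that could trigger payload adaptation are assumptions, not the patented mechanism.

    import heapq, itertools, time

    class DeadlineScheduler:
        def __init__(self, capacity_pkts_per_s):
            self.capacity = capacity_pkts_per_s
            self.queue = []                     # time-stamp-based (deadline-sorted) queue
            self.counter = itertools.count()    # tie-breaker for equal deadlines
            self.violations = 0                 # monitored deadline violations

        def admit(self, packet, deadline):
            # Reject for real-time processing if the backlog already makes the
            # deadline unreachable at the scheduler's throughput.
            backlog_time = len(self.queue) / self.capacity
            if time.monotonic() + backlog_time > deadline:
                return False
            heapq.heappush(self.queue, (deadline, next(self.counter), packet))
            return True

        def serve_next(self):
            if not self.queue:
                return None
            deadline, _, packet = heapq.heappop(self.queue)
            if time.monotonic() > deadline:
                self.violations += 1            # could trigger adaptation of payload packets
            return packet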

Proceedings ArticleDOI
29 Mar 1999
TL;DR: An algorithm that assigns unequal amounts of forward error correction to progressive data so as to provide graceful degradation as packet losses increase is presented, finding that for an exponential packet loss model, good image quality can be obtained, even when 40% of transmitted packets are lost.
Abstract: We present an algorithm that assigns unequal amounts of forward error correction to progressive data so as to provide graceful degradation as packet losses increase. We use the SPIHT coder to compress images in this work, but our algorithm can protect any progressive compression scheme. The algorithm can also use almost any function as a model of packet loss conditions. We find that for an exponential packet loss model with a mean of 20% and a total rate of 0.2 bpp, good image quality can be obtained, even when 40% of transmitted packets are lost.

Proceedings ArticleDOI
21 Mar 1999
TL;DR: This paper reports on the design and the evaluation of transmission control mechanisms specifically designed for multiplayer, distributed (serverless), interactive Internet applications and investigates how the "quality" of the game is influenced by the frequency at which players exchange state information, as well as by network impairments such as packet loss and transmission delay.
Abstract: This paper reports on the design and the evaluation of transmission control mechanisms specifically designed for multiplayer, distributed (serverless), interactive Internet applications. Distributed synchronization and dead reckoning are the main elements of this transmission control infrastructure. These mechanisms have been implemented in a fully distributed, multiplayer game application, i.e., one in which each entity in a game session computes its own local view of the session. The role of each entity is consequently to periodically send its own state to all other session participants (using RTP/UDP/IP multicast) and to periodically compute its own local view of the global game state using information received from the other participants. A detailed experimental analysis is provided using MBone and LAN experiments. We investigate how the "quality" of the game is influenced by the frequency at which players exchange state information, as well as by network impairments such as packet loss and transmission delay.
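
Dead reckoning, one of the two main elements named above, is commonly implemented as linear extrapolation of a remote player's last reported state between updates; the sketch below shows that common form, which is an assumption rather than the paper's exact scheme.

    # Linear dead reckoning: extrapolate a remote entity's position from its
    # last reported state until the next state packet arrives.
    def dead_reckon(last_state, now):
        """last_state: (t_report, position, velocity) -> extrapolated position."""
        t_report, position, velocity = last_state
        dt = now - t_report                     # elapsed time, including transmission delay
        return tuple(p + v * dt for p, v in zip(position, velocity))

    # Example: state reported at t = 10.0 s, extrapolated at t = 10.25 s.
    # dead_reckon((10.0, (0.0, 0.0), (4.0, 1.0)), 10.25) -> (1.0, 0.25)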

Patent
29 Mar 1999
TL;DR: In this article, a clock signal synchronized with the network is used to transmit the arrival interval of a PCR packet containing a PCR value, together with a synchronization residual, from which the receiver calculates delay fluctuations and corrects the PCR value.
Abstract: The transmission side of the communication apparatus of the present invention uses a clock signal synchronized with the network to transmit the arrival interval of a PCR packet, in which a PCR value is included, together with a synchronization residual. The reception side of the communication apparatus calculates the delay fluctuations caused by the network from the arrival interval and the synchronization residual included in the received PCR packet, and modifies the PCR value based on these delay fluctuations.

Patent
29 Apr 1999
TL;DR: In this paper, a search window delay tracking procedure for a multipath search processor of a CDMA radio receiver is presented.
Abstract: The present invention provides a search window delay tracking procedure for use in a multipath search processor of a CDMA radio receiver. A channel impulse response is estimated for a received signal containing plural paths, each path having a corresponding path delay. A search window defines a delay profile that contains the plural paths of the received signal. A mean or average delay is calculated for the estimated channel impulse response (CIR), and an error is determined between the mean CIR delay and a desired delay position corresponding to the center of the CIR search window. An adjustment is made to reduce that error so that the center of the search window and the mean CIR delay are aligned. The error may be processed either linearly (in one embodiment) or non-linearly (in another embodiment) to minimize the error and to reduce the influence of noise. A non-linear filtering process includes calculating a delay spread from the mean CIR delay for successive processing cycles of the window delay tracking procedure, one cycle for each new input. A difference is determined between the successive delay spreads. The adjustment signal is set equal to the error signal if the difference is less than or equal to a threshold. Alternatively, the adjustment signal is set to zero if the difference is greater than the threshold. Consequently, if the delay spread in the current iteration is significantly different from the delay spread in the previous iteration, the new error sample calculated in the current iteration is considered unreliable, and no adjustment is made.
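
A sketch of the non-linear variant described in the abstract: the window centre is nudged toward the mean CIR delay, but the adjustment is zeroed whenever the delay spread changes by more than a threshold between successive cycles. The power-weighted mean, the RMS spread measure and the loop gain are assumptions.

    # Illustrative search-window delay tracking with a spread-change gate.
    def mean_cir_delay(cir_power, delays):
        total = sum(cir_power)
        return sum(p * d for p, d in zip(cir_power, delays)) / total

    def delay_spread(cir_power, delays, mean_delay):
        total = sum(cir_power)
        return (sum(p * (d - mean_delay) ** 2
                    for p, d in zip(cir_power, delays)) / total) ** 0.5

    def track_window(window_center, cir_power, delays, prev_spread, threshold, gain=0.5):
        mean_d = mean_cir_delay(cir_power, delays)
        spread = delay_spread(cir_power, delays, mean_d)
        error = mean_d - window_center
        # If the spread jumped, the new error sample is treated as unreliable
        # and no adjustment is made in this cycle.
        adjustment = gain * error if abs(spread - prev_spread) <= threshold else 0.0
        return window_center + adjustment, spread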