
Showing papers on "Packet loss published in 1992"


Patent
30 Sep 1992
TL;DR: In this article, the authors propose an internetworking system for exchanging packets of information between networks, which consists of a network interface module for connecting a network to the system, receiving packets from the network in a native packet format used by the network and converting each received native packet to a packet having a generic format common to all networks connected to the system.
Abstract: An internetworking system for exchanging packets of information between networks, the system comprising a network interface module for connecting a network to the system, receiving packets from the network in a native packet format used by the network and converting each received native packet to a packet having a generic format common to all networks connected to the system, and converting each of the generic packets to the native packet format for transmission to the network; a communication channel for carrying the generic packets to and from the network interface module, the channel having bandwidth; a first processing module for controlling dynamic allocation and deallocation of the channel bandwidth to the network connected to the system via the network interface module; and a second processing module for receiving all of the generic packets put on the channel by the network interface module, determining a destination network interface module for each of the generic packets on the channel, determining whether each of the generic packets needs to be bridged to the destination network interface module, and transmitting each of the generic packets determined to need bridging to the destination network interface module via the channel.
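The native-to-generic conversion at the heart of the claim can be sketched as follows. This is a minimal illustration, not the patented implementation; the `GenericPacket` fields and function names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class GenericPacket:
    # Hypothetical generic format shared by all attached networks.
    src_net: str      # originating network identifier
    dst_net: str      # destination network identifier
    payload: bytes    # the native packet, carried opaquely

def to_generic(native: bytes, src_net: str, dst_net: str) -> GenericPacket:
    """Wrap a native-format packet in the common generic format."""
    return GenericPacket(src_net, dst_net, native)

def to_native(pkt: GenericPacket) -> bytes:
    """Strip the generic wrapper before transmission on the target network."""
    return pkt.payload
```

The key property is that the channel and processing modules only ever see the generic form, so bridging logic is independent of each network's native framing.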

377 citations


Patent
22 Oct 1992
TL;DR: In this article, an address translation is performed to generate local source and destination addresses which are much shorter than the globally unique addresses contained in the packet as dictated by the protocol, and these local addresses are inserted in a header that is added to the packet.
Abstract: A packet data communication network employs a local switch, router or bridge device functioning to transfer packets between segments of a larger network. When packets enter this device, an address translation is performed to generate local source and destination addresses which are much shorter than the globally-unique addresses contained in the packet as dictated by the protocol. These local addresses are inserted in a header that is added to the packet, in addition to any header already contained in the packet. This added header travels with the packet through the local switch, router or bridge device, but then is stripped off before the packet is sent out onto another network segment. The added header may also contain other information, such as a local name for the source and destination segment (link), as well as status information that is locally useful, but not part of the packet protocol and not necessary for transmission with the packet throughout the network. Local congestion information, results of address translations, and end-of-message information, are examples of such status information.
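The translation-plus-added-header idea can be sketched roughly like this. It is a toy model; the one-byte local ids, table, and function names are all assumptions, and a real device would translate in hardware:

```python
class LocalAddressTable:
    """Hypothetical table mapping long globally-unique addresses to short
    local ids, allocated on first sight."""
    def __init__(self):
        self._map = {}
        self._next = 0

    def local_id(self, global_addr: bytes) -> int:
        if global_addr not in self._map:
            self._map[global_addr] = self._next
            self._next += 1
        return self._map[global_addr]

def add_local_header(pkt: bytes, table: LocalAddressTable,
                     src: bytes, dst: bytes) -> bytes:
    # Prepend a 2-byte local header (one translated address per byte),
    # in addition to any header already in the packet.
    return bytes([table.local_id(src), table.local_id(dst)]) + pkt

def strip_local_header(framed: bytes) -> bytes:
    # The added header is removed before the packet leaves the device.
    return framed[2:]
```

The packet that exits the device is byte-identical to the packet that entered; the short header exists only inside the switch, router, or bridge.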

323 citations


01 Jan 1992
TL;DR: This study attempts to characterize the dynamics of Internet workload from an end-point perspective and concludes that efficient congestion control is still a very difficult problem in large internetworks.
Abstract: Dynamics of Internet load are investigated using statistics of round-trip delays, packet losses and out-of-order sequence of acknowledgments. Several segments of the Internet are studied. They include a regional network (the John von Neumann Center Network), a segment of the NSFNet backbone and a cross-country network consisting of regional and backbone segments. Issues addressed include: (a) dominant time scales in network workload; (b) the relationship between packet loss and different statistics of round-trip delay (average, minimum, maximum and standard-deviation); (c) the relationship between out of sequence acknowledgments and different statistics of delay; (d) the distribution of delay; (e) a comparison of results across different network segments (regional, backbone and cross-country); and (f) a comparison of results across time for a specific network segment. This study attempts to characterize the dynamics of Internet workload from an end-point perspective. A key conclusion from the data is that efficient congestion control is still a very difficult problem in large internetworks. Nevertheless, there are interesting signals of congestion that may be inferred from the data. Examples include (a) presence of slow oscillation components in smoothed network delay, (b) increase in conditional expected loss and conditional out-of-sequence acknowledgments as a function of various statistics of delay, (c) change in delay distribution parameters as a function of load, while the distribution itself remains the same, etc. The results have potential application in heuristic algorithms and analytical approximations for congestion control. Comments University of Pennsylvania Department of Computer and Information Sciences Technical Report No. MSCIS-92-83.
This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/300. On The Dynamics and Significance of Low Frequency Components of Internet Load, MS-CIS-92-83, Distributed Systems Lab, University of Pennsylvania School of Engineering and Applied Science, Computer and Information Science Department.

262 citations


Proceedings ArticleDOI
01 Oct 1992
TL;DR: The results indicate that the proposed hop-by-hop rate-based mechanism for congestion control displays stable behavior for a wide range of traffic conditions and diverse network topologies, and is better than that of the end-to-end control schemes studied here.
Abstract: The flow/congestion control scheme of TCP is based on the sliding window mechanism. As we demonstrate in this paper, the performance of this and other similar end-to-end flow control schemes deteriorates as networks move to the gigabit range. This has been the motivation for our search for a new flow and congestion control scheme. In this paper, we propose as an alternative, a hop-by-hop rate-based mechanism for congestion control. Due to the increasing sophistication in switch architectures, to provide “quality of service” guarantees for real-time as well as bursty data traffic, the implementation of hop-by-hop controls has become relatively inexpensive. A cost-effective implementation of the proposed scheme for a multi-gigabit packet switch is described in [2]. In this paper, we present results of a simulation study comparing the performance of this hop-by-hop flow control scheme to two end-to-end flow control schemes. The results indicate that the proposed scheme displays stable behavior for a wide range of traffic conditions and diverse network topologies. More importantly, the performance of the scheme, measured in terms of the average number of occupied buffers, the end-to-end throughput, the network delay, and the link utilization at the bottleneck, is better than that of the end-to-end control schemes studied here. These results present a convincing case against popular myths about hop-by-hop control mechanisms.
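A per-hop rate controller of the general flavor described, where each node feeds a rate back to its upstream neighbor based on local buffer occupancy, might be sketched as follows. The linear control law and `gain` parameter are assumptions, not the scheme of the paper:

```python
def hop_rate(service_rate: float, queue_len: int, target: int,
             gain: float = 0.1) -> float:
    """Hypothetical per-hop rate controller: ask the upstream node to send
    slower than the local service rate when the local queue exceeds its
    target occupancy, and slightly faster when it is below target."""
    error = target - queue_len
    rate = service_rate * (1.0 + gain * error / max(target, 1))
    return max(rate, 0.0)   # a rate can never be negative
```

Because each hop reacts to its own buffer rather than waiting for end-to-end feedback, the control loop delay is one link's propagation time instead of a full round trip, which is the property that matters at gigabit speeds.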

153 citations


Patent
18 Jun 1992
TL;DR: In this article, a network controller receives encrypted data packets in the form of interleaved streams of cells, and stores the received cells in a buffer until the end of each packet is received, at which time the complete packet is decrypted, error checked, and then transmitted to a host computer.
Abstract: A network controller receives encrypted data packets in the form of interleaved streams of cells, and stores the received cells in a buffer until the end of each packet is received, at which time the complete packet is decrypted, error checked, and then transmitted to a host computer. The network controller's buffer includes a data storage array in which data packets are stored as linked lists, and a packet directory having an entry for each data packet stored in the buffer. Each directory entry contains a pointer to the first and last location in the buffer where a corresponding data packet is stored, as well as status information for the data packet. When free space in the network controller's buffer falls below a specified threshold, the network controller transmits selected partial packets to the host computer without decrypting or error checking, and also stores in its packet directory entry for each transmitted partial packet a "partial transfer" status flag. Additional portions of the partial packets may be sent to the host computer with an indication of the packet to which they belong. Upon receiving the end of a data packet that was partially transferred to the host computer, the remainder of the data packet in the packet buffer is transmitted to the host computer, without decrypting or error checking the partial data packet. The host computer then transmits the complete packet through a loopback path in the network controller for decrypting and error checking.
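The packet directory and the low-buffer partial-transfer policy might look roughly like this. The entry fields and the choice of the longest incomplete packet as the transfer victim are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class DirectoryEntry:
    # Hypothetical directory entry for one buffered packet.
    first: int                      # buffer index of the packet's first cell
    last: int                       # buffer index of the last cell so far
    complete: bool = False          # end-of-packet cell has been received
    partial_transfer: bool = False  # part already pushed to the host

def on_low_buffer(directory):
    """Free-space-low policy: flag the longest incomplete packet for partial
    transfer to the host (its cells would be sent without decryption or
    error checking, to be completed and looped back later)."""
    incomplete = [e for e in directory if not e.complete]
    if not incomplete:
        return None
    victim = max(incomplete, key=lambda e: e.last - e.first)
    victim.partial_transfer = True
    return victim
```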

136 citations


Journal ArticleDOI
01 Apr 1992
TL;DR: Simulation results show that modifications can eliminate the periodic packet losses and substantially reduce the traffic oscillation in a new congestion signal scheme and dual traffic adjustment strategy.
Abstract: The congestion control algorithm embedded in the 4.3-Tahoe BSD TCP implementation has dramatically improved congestion control over the Internet. However, several recent simulation studies on the dynamics of this algorithm have revealed that the algorithm exhibits clear oscillatory patterns in sending window size, round trip delay and bottleneck queue length. In this paper, we present a new congestion signal scheme and a dual traffic adjustment strategy. Simulation results show that our modifications can eliminate the periodic packet losses and substantially reduce the traffic oscillation.

127 citations


Patent
Mai-Huong Nguyen1
15 Oct 1992
TL;DR: In this paper, the authors propose to reduce the complexity of processing TCP control information in an endpoint by only periodically processing the control information in received packets in a high-speed packet network.
Abstract: The Transmission Control Protocol (TCP) is a connection-oriented transport layer protocol that offers a full duplex reliable virtual circuit connection between two endpoints. Each received TCP packet in an endpoint contains both control information and data. The complexity of processing this control information in an endpoint is reduced by only periodically processing the control information. In particular, control information in received packets is not processed in an endpoint until either a) a predetermined number of packets are received, or b) a timer expires, whichever occurs first. As a result, the overall amount of processing associated with the receipt of each TCP packet decreases, which improves the performance of the TCP protocol in a high-speed packet network.
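The deferred processing rule, batch until a packet count is reached or a timer fires, can be sketched as follows. Class and method names are assumptions; a real stack would hang this off its receive interrupt and timer wheel:

```python
class DeferredControlProcessor:
    """Hypothetical batching of TCP control processing: headers are queued
    and only processed once BATCH packets have arrived or a timer fires,
    whichever occurs first."""
    def __init__(self, batch: int = 4):
        self.batch = batch
        self.pending = []      # queued, unprocessed control headers
        self.processed = 0     # count of headers processed so far

    def on_packet(self, header):
        # Queue the control information; data can be delivered immediately.
        self.pending.append(header)
        if len(self.pending) >= self.batch:
            self.flush()

    def on_timer(self):
        # Timer expiry bounds how long control processing is deferred.
        self.flush()

    def flush(self):
        # Process all queued control information in one pass.
        self.processed += len(self.pending)
        self.pending.clear()
```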

118 citations


Proceedings ArticleDOI
01 Oct 1992
TL;DR: A dynamic multi-path routing scheme that has been considered for connection oriented homogeneous high speed networks to bridge the gap between routing and congestion control as the network becomes congested is described.
Abstract: In this paper we describe briefly a dynamic multi-path routing scheme that has been considered for connection oriented homogeneous high speed networks. The fundamental objective of the scheme is to bridge the gap between routing and congestion control as the network becomes congested. Because propagation delay far overshadows queueing and transmission delay in high speed networks, the proposed routing scheme works as a shortest path (minimum hop) first algorithm under light traffic conditions. However as the shortest path becomes congested, the source node uses multiple paths when and if available in order to distribute the load and reduce packet loss. The scheme is a cross between Alternate Path routing and Trunk Reservation. We compare the performance of the proposed scheme with the Shortest Path Only algorithm, the Alternate Path routing algorithm, the Random Routing algorithm, and the Trunk Reservation scheme. The throughput and packet loss performance are compared via simulations. These have been carried out concentrating on a 5 node network with varying traffic patterns, the intention being to gain insight into the strengths and weaknesses of the various schemes.
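The spillover behavior, shortest path first with overflow onto alternates under congestion, can be sketched as a path-selection function. The utilization threshold and tie-breaking rule are assumptions, not the paper's exact scheme:

```python
def pick_path(paths, congestion_threshold: float = 0.8) -> int:
    """Hypothetical selection: use the minimum-hop path while it is
    uncongested; once its utilization crosses the threshold, spread new
    connections over available alternate paths.
    `paths` is a list of (hop_count, utilization) tuples; returns an index."""
    ranked = sorted(enumerate(paths), key=lambda p: p[1][0])  # by hop count
    shortest_idx, (_, util) = ranked[0]
    if util < congestion_threshold:
        return shortest_idx                     # light load: min-hop first
    # Shortest path congested: pick the least-utilized uncongested path.
    candidates = [p for p in ranked if p[1][1] < congestion_threshold]
    if candidates:
        return min(candidates, key=lambda p: p[1][1])[0]
    return shortest_idx                         # all congested: fall back
```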

105 citations


Patent
25 Jun 1992
TL;DR: In this article, a technique for reducing latencies in bridge operation, by facilitating cut-through transmission of a receive data packet while the packet is still being received, but without the need for starting or ending delimiters, or packet lengths, in the packet data.
Abstract: A technique for reducing latencies in bridge operation, by facilitating cut-through transmission of a receive data packet while the packet is still being received, but without the need for starting or ending delimiters, or packet lengths, in the packet data. The technique can be applied to packets inbound from a network, packets outbound to a network, or packets being looped back to a client to which the bridge is connected. In the technique of the invention, each received packet is stored in a buffer memory and a count is maintained of the number of bits in the received packet. A transmit operation is started as soon as possible, preferably while the packet is still being received, and bytes are retrieved from the buffer memory for transmission. The transmit operation is terminated when a transmit byte count reaches the packet length as determined by the receive byte count. For cut-through operations, the transmit operation is started without knowledge of the packet length, but the packet length is made available to the transmit operation upon completion of the receive operation. For store-and-forward operations, the packet length is stored with the packet in the buffer memory, and retrieved for use in the transmit operation.
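The count-based cut-through idea can be modeled in a few lines: transmission starts before the length is known, and stops when the transmit count catches up with the final receive count. The byte-stream framing and `start_threshold` are assumptions for illustration:

```python
def cut_through(receive_stream: bytes, start_threshold: int = 4) -> bytes:
    """Hypothetical cut-through forwarder: buffer incoming bytes and begin
    transmitting once a few bytes are present, without knowing the packet
    length; finish when the transmit count reaches the receive count."""
    buffer = bytearray()
    transmitted = bytearray()
    tx_pos = 0                           # transmit byte count
    for byte in receive_stream:          # receive one byte at a time
        buffer.append(byte)
        if len(buffer) >= start_threshold:
            # Transmit whatever has been received but not yet sent.
            transmitted += buffer[tx_pos:]
            tx_pos = len(buffer)
    # Receive complete: the packet length is now known; drain the rest.
    transmitted += buffer[tx_pos:]
    return bytes(transmitted)
```

No delimiter or length field ever appears in the data path; the receive byte count alone tells the transmitter when to stop.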

104 citations


Patent
10 Dec 1992
TL;DR: In this paper, an adaptive congestion control device (600) and method provides for minimizing congestion on a basis of independent congestion level indicators (626) and further provides efficient recovery in an integrated packet network (17, 26, 32, 38) that becomes congested.
Abstract: An adaptive congestion control device (600) and method provides for minimizing congestion on a basis of independent congestion level indicators (626). The invention further provides efficient recovery in an integrated packet network (17, 26, 32, 38) that becomes congested. In addition, the invention ensures that a user may utilize the network on a space-available basis when capacity is available in the network.

93 citations


Proceedings ArticleDOI
13 Sep 1992
TL;DR: It is concluded that CSMA/CA is quite successful in allocating bandwidth under stress, but that packet capture rate degrades very quickly once the LAN's effective range is exceeded and network maintainers should plan the layout of wireless networks at least as carefully as they plan wired networks.
Abstract: The performance of a high-speed commercial spread-spectrum wireless LAN that uses the CSMA/CA multiple-access strategy was studied. Using synthetic workloads, packet capture success rather than signal propagation characteristics was measured. Specifically, throughput, packet loss rates, range, and patterns of errors within packets were measured. It is concluded that CSMA/CA is quite successful in allocating bandwidth under stress, but that packet capture rate degrades very quickly once the LAN's effective range is exceeded. Hence, network maintainers should plan the layout of wireless networks at least as carefully as they plan wired networks.

Patent
29 May 1992
TL;DR: In this paper, a system implements checksumming of a network packet to be sent over a network, where the network packet is transferred from the main memory to a packet storage memory within a network adapter.
Abstract: A system implements checksumming of a network packet to be sent over a network. A processor constructs the network packet within a main memory. The network packet is transferred from the main memory to a packet storage memory within a network adapter. During the transfer, the network adapter calculates a checksum for the network packet. The network adapter then inserts the checksum into the network packet within the packet storage memory. The network adapter then sends the network packet to the network. In order to calculate the checksum for the network packet, hardware within the network adapter "snoops" an internal bus within the network adapter as the network packet is transported to the packet storage memory. Also, a checksum header is prepended to the network packet which includes control information for checksumming. This control information includes, for example, an indication whether the network adapter is to calculate a checksum and a specification of what data in the network packet is to be checksummed. The control information may additionally include a location within network packet where the checksum is to be inserted.
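The checksum the adapter computes while snooping the transfer is, in spirit, a streaming 16-bit one's-complement sum like the Internet checksum, inserted at an offset named in the prepended control header. A software sketch (the function names and header layout are assumptions):

```python
def ones_complement_sum16(data: bytes) -> int:
    """16-bit one's-complement checksum, as in the Internet checksum,
    computed over the data as it streams past."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return (~total) & 0xFFFF

def insert_checksum(packet: bytearray, start: int, offset: int) -> bytearray:
    """Checksum packet[start:] and write the result at `offset`, mirroring
    the 'what to checksum / where to insert' fields a prepended checksum
    header might carry."""
    csum = ones_complement_sum16(bytes(packet[start:]))
    packet[offset] = csum >> 8
    packet[offset + 1] = csum & 0xFF
    return packet
```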

Journal ArticleDOI
TL;DR: A framing congestion control strategy based on a packet admission policy at the edges of the network and on a service discipline called stop-and-go queuing at the switching nodes is described, which provides bounded end-to-end delay and a small and controllable delay-jitter.
Abstract: The problem of congestion control in high-speed networks for multimedia traffic, such as voice and video, is considered. It is shown that the performance requirements of high-speed networks involve delay, delay-jitter, and packet loss. A framing congestion control strategy based on a packet admission policy at the edges of the network and on a service discipline called stop-and-go queuing at the switching nodes is described. This strategy provides bounded end-to-end delay and a small and controllable delay-jitter. The strategy is applicable to packet switching networks in general, including fixed cell length asynchronous transfer mode (ATM), as well as networks with variable-size packets.
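The eligibility rule of stop-and-go queuing, a packet arriving during frame k is held until frame k+1 begins, can be stated in a few lines (a simplified single-hop model; the dictionary interface is an assumption):

```python
def stop_and_go(arrivals: dict, frame_len: float) -> dict:
    """Hypothetical stop-and-go discipline: a packet arriving during frame k
    becomes eligible for service only at the start of frame k+1, which is
    what bounds per-hop delay and delay-jitter.
    `arrivals` maps packet id -> arrival time; returns eligibility times."""
    eligible = {}
    for pkt, t in arrivals.items():
        frame = int(t // frame_len)            # frame the packet arrived in
        eligible[pkt] = (frame + 1) * frame_len
    return eligible
```

A packet is therefore delayed at most one full frame before becoming eligible, regardless of queue contents, which is why the end-to-end delay bound is deterministic.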

Patent
Oouchi Toshiya1
06 Feb 1992
TL;DR: In this paper, a packet rate control method for a packet network (1l - 1n, 2l - 2nl) is provided, in which a declaration of parameters indicating the rate of fixed-length packets is made to a call control system by a subscriber terminal device prior to communication as to be associated with a logical channel number.
Abstract: There is provided a packet-rate control method for a packet network (1l - 1n, 2l - 2nl), in which a declaration of parameters indicating the rate of fixed-length packets is made to a call control system (5) by a subscriber terminal device prior to communication, and is associated with a logical channel number (LC). The rate of packets input to the network is measured (101) and associated with a logical channel number. While the degree of violation is low, packets transmitted in violation of the user-declared parameters are given (103) a violation mark and admitted into the network. If the degree of violation is high, the packets transmitted in violation of the user-declared parameters are discarded (104, 107) at the input side of the network. When congestion threatens in the network, the marked packets are discarded (113). As a result, violation packets and marked packets can be prevented from affecting the quality of service seen by normal packets.
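The three-way policing decision, admit, mark, or discard at the edge, can be sketched as follows. The rate comparison and the `drop_factor` margin are assumptions standing in for the patent's violation-degree measurement:

```python
def police(measured_rate: float, declared_rate: float,
           drop_factor: float = 1.5) -> str:
    """Hypothetical edge policing decision against declared parameters:
    - conforming traffic is admitted;
    - mild violations are marked (discarded later only under congestion);
    - severe violations are discarded at the network input."""
    if measured_rate <= declared_rate:
        return "admit"
    if measured_rate <= declared_rate * drop_factor:
        return "mark"
    return "drop"
```

Under congestion, a node would then drop "mark"-tagged packets first, so declared-conforming traffic keeps its quality of service.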

Proceedings ArticleDOI
01 May 1992
TL;DR: A new characterization of an arrival stream is introduced, referred to as a self-loss, and it is used to qualitatively predict the effects of multiplexing bursty streams with nonbursty streams and the effectiveness of priority packet discarding is investigated.
Abstract: The authors consider a queuing system with a finite buffer and multiple heterogeneous arrival streams. They focus on Markov modulated arrival processes with different burstings and investigate the loss of individual arrival streams when the parameters of the heterogeneous arrival streams are varied. The analysis includes both continuous-time and discrete-time treatments of multiplexed heterogeneous Markov modulated arrivals. Loss probabilities are derived for a priority packet discarding scheme. A new characterization of an arrival stream is introduced, referred to as a self-loss, and it is used to qualitatively predict the effects of multiplexing bursty streams with nonbursty streams. The effectiveness of priority packet discarding is also investigated through numerical examples.

Proceedings ArticleDOI
01 May 1992
TL;DR: It is shown by simulation results that a switch incorporating the shared-memory copy network has increased throughput and lower buffer requirements to maintain low packet loss probability when compared to a switch with a discrete buffer copy network.
Abstract: A new nonblocking copy network is presented, for use in an ATM switch supporting BISDN, with a shared-memory input buffer. Blocked cells from any switch input are stored in a single shared input buffer. The copy network consists of three Omega networks and shared-memory queues. The design is scalable for large numbers of inputs due to a low hardware complexity, O(N log/sub 2/ N), and distributed operation and control. It is shown by simulation results that a switch incorporating the shared-memory copy network has increased throughput and lower buffer requirements to maintain low packet loss probability when compared to a switch with a discrete buffer copy network.

Patent
Tsang-Ling Sheu1
13 Oct 1992
TL;DR: In this paper, a fault-tolerant bridge/router with a distributed switch-over mechanism is proposed, which can tolerate any single failures and does not rely on network reconfiguration.
Abstract: A fault-tolerant bridge/router ("brouter") with a distributed switch-over mechanism of the present invention can tolerate any single failures and does not rely on network reconfiguration (or alternative paths) and, therefore, substantially improves system reliability/availability. The fault-tolerant brouter utilizes a plurality of processing elements communicating through a multiple-bus switching fabric. Each processing element can effectively support two ports, each port providing an interface to an individual LAN. Each LAN is then linked to two different ports on two different processing elements, respectively, thereby providing processing element redundancy. If a processing element fails, bridging/routing functions can be performed by the other, redundant processing element. The functions are switched using the switch-over mechanism. Because the switch-over mechanism is distributed, no centralized control mechanism is required. The fault-tolerant brouter of the present invention provides the prevention of packet loss so that a source station does not have to resend lost packets blocked due to a failed processing element and provides transparency to end stations so that the packet recovery is independent of the networking protocols implemented. In addition, due to the redundancy of the processing elements for each LAN, traffic from unlike LANs with different media speeds can be evenly balanced. In this manner, the fault-tolerant brouter of the present invention provides significant improvement in system reliability and availability.

Patent
23 Nov 1992
TL;DR: In this paper, the core and enhancement packets are transmitted in frame relay format and congestion forward (CF) and congestion backwards (CB) markers are used to feed back information of congestion conditions within a network to the packet assembler.
Abstract: A system in which core information, for example in the form of a core block or blocks (C), is transmitted in a core packet (PC), and at least some enhancement information, for example, in the form of enhancement blocks (E), is transmitted in an enhancement packet (PE) which is separate from the core packet (PC) and is discardable to relieve congestion. Preferably, the core and enhancement packets have headers (H) which include a discard eligible marker (DE) to indicate whether or not the associated packet can be discarded. The enhancement blocks (E) may be distributed between the core packet and enhancement packet in accordance with congestion conditions, or the enhancement blocks may be incorporated only in the enhancement packet, and the actual number of enhancement blocks included are varied depending on congestion conditions. Preferably, the packets are transmitted in frame relay format and congestion forward (CF) and congestion backwards (CB) markers are used to feed back information of congestion conditions within a network to the packet assembler (7) forming the core and enhancement packets.
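The core/enhancement split with a discard-eligible marker can be sketched like this (dictionary-based packets and function names are assumptions; real frame relay would carry the DE bit in the frame header):

```python
def packetize(core_blocks, enhancement_blocks):
    """Hypothetical assembler: core blocks go in a protected packet, and
    enhancement blocks in a separate packet marked discard-eligible."""
    core_pkt = {"discard_eligible": False, "blocks": list(core_blocks)}
    enh_pkt = {"discard_eligible": True, "blocks": list(enhancement_blocks)}
    return [core_pkt, enh_pkt]

def relieve_congestion(packets):
    """A congested node keeps only packets not marked discard-eligible,
    so core information always gets through."""
    return [p for p in packets if not p["discard_eligible"]]
```

Feedback from congestion markers would then shift enhancement blocks between the two packets, shrinking the discardable share as congestion rises.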

Proceedings ArticleDOI
01 May 1992
TL;DR: The authors apply their folding algorithm to study various queuing phenomena related to an asynchronous transfer mode (ATM) multiplexer, and examine how the different system parameters affect the packet loss and queuing delay, and show the performance improvements by overload controls.
Abstract: For pt.I see ibid., p.1464 (1991). The authors apply their folding algorithm to study various queuing phenomena related to an asynchronous transfer mode (ATM) multiplexer. They examine how the different system parameters affect the packet loss and queuing delay, and show the performance improvements by overload controls. They also analyze the queuing performance under different dynamic resource allocation policies. These analyses provide important insights for ATM design. The wide applicability of the folding algorithm is emphasized by including many practical examples, whose complexities far exceed those found in the literature. A set of highly effective approximation techniques is also proposed to further extend the folding algorithm's application range.

Proceedings ArticleDOI
01 May 1992
TL;DR: The authors present a queuing analysis and a simulation study of banyan switch fabrics based on 2*2 switching elements with crosspoint buffering and indicate that crosspointbuffering provides throughput approaching the offered load under uniform traffic conditions.
Abstract: The authors present a queuing analysis and a simulation study of banyan switch fabrics based on 2*2 switching elements with crosspoint buffering. In particular, the results apply to the PHOENIX switching element based banyan fabrics. The results indicate that crosspoint buffering provides throughput approaching the offered load under uniform traffic conditions. The effect of bursty traffic on the performance of the switch is studied. It is shown that a speedup factor of three or more is required to achieve acceptable delay and packet loss probability. It is also shown that the amount of buffer space required per port increases linearly with the burst size for a desired packet loss performance. For a given burst size the packet loss rate decreases exponentially as the buffer size is increased. The impact of crosspoint buffering and shared buffering in the switching elements on the performance of the banyan fabric is analyzed.

Proceedings ArticleDOI
01 May 1992
TL;DR: The authors discuss the unfairness issue arising in an 802.6 distributed queue dual bus (DQDB) network at heavy loads, and an access control scheme is proposed as a solution.
Abstract: The authors discuss the unfairness issue arising in an 802.6 distributed queue dual bus (DQDB) network at heavy loads. Based on the 802.6 protocol, the end-nodes along a bus experience longer delays than the other nodes. The origin and remedy for this heavy load unfairness problem are discussed. An access control scheme is proposed as a solution. A comparison of the proposed scheme with the 802.6 protocol is presented. The simulation results and performance characteristics are discussed under several types of loads. With symmetric load conditions under the proposed scheme, all active nodes along a bus experience almost the same access delay and packet loss characteristics. The performance under several other load conditions was also found to be satisfactory.

Journal ArticleDOI
TL;DR: A layered packet video coding algorithm based on a progressive transmission scheme that provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence is presented.
Abstract: Some of the important characteristics and requirements of packet video are discussed. A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. A network simulator used in testing the scheme is introduced, and simulation results for various conditions are presented.

Proceedings ArticleDOI
06 Dec 1992
TL;DR: It is shown that a straightforward design of the schedulers on the adapter and the host operating system makes a system prone to receive livelocks, significantly increases receive latency, potentially starves transmission of packets, and increases packet loss.
Abstract: It is shown that a straightforward design of the schedulers on the adapter and the host operating system makes a system prone to receive livelocks, significantly increases receive latency, potentially starves transmission of packets, and increases packet loss. Receive livelocks and starvation of packet transmission can have wide-ranging consequences in the network. When viewed from an end-system perspective, transport protocols can unnecessarily time out and cause retransmissions. For an intermediate system, these problems can result in control packets not being transmitted in a timely manner. Several of these problems are attributable to poor scheduling algorithms based on interrupts and priorities, which are unsuitable in an overloaded environment. That several of these problems are due to the traditional way of scheduling communications is shown by means of a measurement study on an end system. An approach to scheduling communication functions which would alleviate, if not eliminate, these problems is outlined.

Book
Mahmoud Naghshineh1, Roch Guerin1
01 Jan 1992
TL;DR: Various performance measures, including queue length distribution, packet loss probability, and user frame loss probability, are investigated when fast packet-switched networks that operate with either fixed or variable packet sizes are compared.
Abstract: The authors investigate various performance measures of interest, when fast packet-switched networks that operate with either fixed or variable packet sizes are compared. These performance measures include queue length distribution, packet loss probability, and user frame loss probability. The focus is on identifying key parameters that influence the outcome of this comparison, and on quantifying the potential benefits of each approach.

Proceedings ArticleDOI
14 Jun 1992
TL;DR: The author introduces a multichannel asynchronous transfer mode (ATM) switch that guarantees the packet sequence throughout the network when channel grouping is used in the switching nodes.
Abstract: The author introduces a multichannel asynchronous transfer mode (ATM) switch that guarantees the packet sequence throughout the network when channel grouping is used in the switching nodes. The switch consists of two modified omega networks and a batcher sorter. It is shown that the packet sequence will be preserved by providing a virtual first-in, first-out (FIFO) queue within the switch architecture. The virtual FIFO queue is shared by all the input-output pairs so that the packet loss probability is significantly reduced with the same number of buffers. It also has distributed switch control so that the expansion of the switch into larger sizes does not create a bottleneck in the performance. The performance analysis of the switch shows that the number of buffers and the average packet delay can be significantly reduced in the proposed switch while maintaining the required throughput and packet loss probability compared to multichannel switches with dedicated buffers.

Journal ArticleDOI
01 Jan 1992
TL;DR: The buffer size requirement of a shared-memory VCQ for different numbers of virtual channels at various packet loss probabilities is studied and two different implementation architectures for the shared- memory VCQ are presented, and their hardware complexity is compared.
Abstract: In order to take advantage of the low entry cost of the future public ATM (asynchronous transfer mode) network with shared facilities, it is highly desirable to interconnect different hosts and local area networks (LANs) to the ATM network. The interface between the computer hosts or LANs and the ATM network, commonly called a broadband terminal adaptor (BTA), provides the necessary format conversion for the data packets and the ATM cells. It is conceivable that multiple packets from different virtual channels are interleaved as they arrive at the receive-end BTA. The BTA must have a sufficiently large buffer, called a virtual channel queue (VCQ), to temporarily store the partially received packets. Once a complete packet has been received, it is forwarded to the host or LAN. Whenever the buffer fills with all incomplete packets, a packet must be discarded to make room for others. In this paper, we first study, through computer simulations, the buffer size requirement of a shared-memory VCQ for different numbers of virtual channels at various packet loss probabilities. We then present two different implementation architectures for the shared-memory VCQ, and compare their hardware complexity. The second architecture with linked-queue approach, adopted in our work, requires less buffer and has better scalability to accommodate a large number of virtual channels. Various possible error conditions, such as cell losses in the ATM network and the VCQ buffer overflow, are considered. Corresponding solutions are proposed and included in the VCQ designs.
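The receive-side behavior described above can be sketched as a small class. This is a hypothetical toy, not the paper's hardware design: per-VC Python lists stand in for the linked queues, and the victim policy on overflow (discard the longest partial packet) is one simple choice among several the paper's error handling could cover.

```python
class SharedVCQ:
    """Toy shared-memory virtual channel queue (VCQ).

    Cells from many virtual channels interleave on arrival; each VC's
    partially received packet is kept as a per-VC list inside one
    shared cell pool.  A completed packet is forwarded to the host;
    when the pool is full, one partial packet is discarded to make
    room for new arrivals.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.partial = {}        # vc id -> list of buffered cells
        self.used = 0            # cells currently occupying the pool
        self.forwarded = []      # completed packets handed to the host
        self.discarded = 0       # partial packets dropped on overflow

    def arrive(self, vc, cell, last):
        if self.used == self.capacity:
            # buffer full of incomplete packets: evict the longest one
            victim = max(self.partial, key=lambda v: len(self.partial[v]))
            self.used -= len(self.partial.pop(victim))
            self.discarded += 1
        self.partial.setdefault(vc, []).append(cell)
        self.used += 1
        if last:                 # packet complete: forward and free cells
            pkt = self.partial.pop(vc)
            self.used -= len(pkt)
            self.forwarded.append((vc, pkt))

q = SharedVCQ(capacity=4)
for vc, cell, last in [(1, 'a', False), (2, 'x', False), (1, 'b', False),
                       (1, 'c', True),  (3, 'p', False), (3, 'q', False),
                       (3, 'r', False), (2, 'y', False)]:
    q.arrive(vc, cell, last)
print(q.forwarded, q.discarded)   # VC 1's packet forwarded; VC 3 discarded
```

The interleaved trace shows both paths: VC 1 completes and is forwarded while VC 2 and VC 3 are still partial, and the final cell for VC 2 forces the overflow discard.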

Journal ArticleDOI
TL;DR: A closed form is derived for the average packet gap for the multiclassG/G/m/B queueing system in equilibrium and it only depends on the loss behavior of two consecutive packets, which considerably simplifies the monitoring process of real-time packet traffic sessions.
Abstract: Real-time packet traffic is characterized by a strict deadline on the end-to-end time delay and an upper bound on the information loss. Due to the high correlation among consecutive packets, the individual packet loss probability does not adequately characterize the performance of real-time packet sessions. An additional measure of packet loss is necessary to adequately assess the quality of each real-time connection. The additional measure considered here is the average number of consecutively lost packets, also called the average packet gap. We derive a closed form for the average packet gap for the multiclass G/G/m/B queueing system in equilibrium and show that it depends only on the loss behavior of two consecutive packets. This result considerably simplifies the monitoring process of real-time packet traffic sessions. If the packet loss process is Markovian, the number of consecutively lost packets has a geometric distribution.
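The Markovian special case can be checked numerically. Under a two-state loss model with p_ll = P(loss | previous packet lost), a gap extends with probability p_ll, so gap lengths are geometric with mean 1/(1 - p_ll), depending only on the behavior of two consecutive packets. The sketch below (parameter names are ours, not the paper's) compares the closed form against a seeded simulation.

```python
import random

def mean_gap_analytic(p_ll):
    """Mean gap length for a Markov loss process: a gap extends with
    probability p_ll = P(loss | previous packet lost), so its length
    is geometric with mean 1 / (1 - p_ll)."""
    return 1.0 / (1.0 - p_ll)

def mean_gap_simulated(p_gl, p_ll, n=200_000, seed=1):
    """Empirical average gap over a long two-state Markov loss trace;
    p_gl = P(loss | previous packet delivered)."""
    rng = random.Random(seed)
    lost_prev = False
    gaps, current = [], 0
    for _ in range(n):
        lost = rng.random() < (p_ll if lost_prev else p_gl)
        if lost:
            current += 1
        elif current:
            gaps.append(current)
            current = 0
        lost_prev = lost
    return sum(gaps) / len(gaps)

print(mean_gap_analytic(0.7))         # 10/3: mean gap of ~3.33 packets
print(mean_gap_simulated(0.05, 0.7))  # close to 10/3 for a long run
```

Note that p_gl affects how often gaps start but not their mean length, which is the closed form's point: only the two-consecutive-packet loss behavior matters.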

Proceedings ArticleDOI
01 May 1992
TL;DR: An efficient recursive computation methodology is introduced to obtain the exact distribution of the number of lost packets in a block of packet arrivals of a given size for different arrival models and a different number of sessions.
Abstract: An efficient recursive computation methodology is introduced to obtain the exact distribution of the number of lost packets in a block of packet arrivals of a given size, for different arrival models and numbers of sessions. The exact distribution is compared with the distribution obtained under an independence assumption on the packet loss probability. Numerical examples show that the exact distribution may be worse than the distribution obtained under the independence assumption for applications such as forward error correction, or better for applications such as straight message retransmission.
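The flavor of such a recursion can be sketched for one concrete loss model. This is our own illustrative choice, not the paper's arrival models: a two-state Markov loss process, with the distribution of losses in a block of n computed by recursing over (losses so far, state of the last packet). When the two conditional loss probabilities coincide, the losses are independent and the recursion must reduce to the binomial distribution, which gives a built-in sanity check.

```python
from math import comb

def loss_dist_markov(n, p_gl, p_ll, p0=None):
    """Exact distribution of the number of lost packets in a block of n
    under a two-state Markov loss model: p_gl = P(loss | prev delivered),
    p_ll = P(loss | prev lost).  Illustrative stand-in for the paper's
    arrival models."""
    if p0 is None:                       # stationary loss probability
        p0 = p_gl / (p_gl + 1.0 - p_ll)
    # dist[k][s] = P(k losses so far, last packet in state s); s=1 is "lost"
    dist = [[0.0, 0.0] for _ in range(n + 1)]
    dist[0][0] = 1.0 - p0
    dist[1][1] = p0
    for _ in range(n - 1):
        nxt = [[0.0, 0.0] for _ in range(n + 1)]
        for k in range(n + 1):
            for s in (0, 1):
                pr = dist[k][s]
                if pr == 0.0:
                    continue
                p_loss = p_ll if s else p_gl
                nxt[k][0] += pr * (1.0 - p_loss)
                if k + 1 <= n:
                    nxt[k + 1][1] += pr * p_loss
        dist = nxt
    return [d[0] + d[1] for d in dist]   # marginal over the final state

def loss_dist_independent(n, p):
    """Distribution under the independence assumption: Binomial(n, p)."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
```

For forward error correction, the quantity of interest is the tail P(losses > t) for a code correcting t losses per block; the exact and independence-based versions of that tail are what the paper compares.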

Proceedings ArticleDOI
06 Dec 1992
TL;DR: A transport layer protocol for high-speed multimedia applications, namely MHTP (multimedia high- Speed transport protocol), is proposed to resolve problems in interworking multimedia systems using high- speed networks.
Abstract: In considering transport layer protocols for multimedia applications in high-speed networks, several characteristics of multimedia applications have to be taken into account: high-speed transmission capability, real-time constraints, the large bandwidth required by applications, and the fact that some percentage of packet loss is acceptable in some multimedia applications. A transport layer protocol for high-speed multimedia applications, namely MHTP (multimedia high-speed transport protocol), is proposed to resolve problems in interworking multimedia systems over high-speed networks.

Journal ArticleDOI
01 Oct 1992
TL;DR: It is shown that packet-based embedded encoding of speech can double the maximum tolerable missing packet rate and mitigates the effects of parameter mistracking caused by packet loss.
Abstract: Missing packets due to network congestion can seriously degrade the quality of received speech in packet voice communication. A variety of techniques exist that attempt to reconstruct the missing speech segments, but these methods cannot be directly applied to low-bit-rate-encoded speech, such as ADPCM, where the decoded output depends not only on the current transmitted codewords but also on the past history of the decoder. In this case, the decoder parameters at the receiver lose track of the encoder parameters after a missing packet, and the decoding error propagates into the following packets. A novel packet-based embedded encoding scheme is described that mitigates the effects of parameter mistracking caused by packet loss. The method may be applied to any low-bit-rate encoder that uses past information to decode current samples, but specific results are given for its application to the CCITT 32 kbit/s ADPCM encoding standard. It is shown that packet-based embedded encoding of speech can double the maximum tolerable missing-packet rate.
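The mistracking effect is easy to demonstrate with a toy first-order DPCM coder. This is not the CCITT ADPCM algorithm or the paper's embedded scheme; all constants and names below are hypothetical. A packet of codewords is lost, the decoder's predictor state diverges from the encoder's, and the error persists into the *next* packet even though that packet arrives intact. Resynchronizing the predictor at packet boundaries (standing in for the embedded side information) confines the damage to the lost packet.

```python
import math

STEP, A = 0.1, 0.9          # quantizer step and predictor coefficient

def quantize(e):
    return round(e / STEP) * STEP

def dpcm_encode(x):
    """First-order DPCM: quantize the prediction residual, track the
    locally decoded signal as predictor state."""
    state, codes, states = 0.0, [], []
    for s in x:
        q = quantize(s - A * state)
        codes.append(q)
        state = A * state + q
        states.append(state)     # encoder state after each sample
    return codes, states

def dpcm_decode(codes, lost, states=None, pkt=8):
    """Decode; packets whose index is in `lost` arrive empty (residual 0).
    If `states` is given, the decoder resynchronizes its predictor at
    each packet boundary -- a stand-in for embedded side information."""
    state, out = 0.0, []
    for i, q in enumerate(codes):
        if i % pkt == 0 and i > 0 and states is not None:
            state = states[i - 1]          # resync from embedded info
        e = 0.0 if (i // pkt) in lost else q
        state = A * state + e
        out.append(state)
    return out

x = [math.sin(0.3 * n) for n in range(48)]
codes, states = dpcm_encode(x)
plain = dpcm_decode(codes, lost={2})                   # packet 2 lost
resync = dpcm_decode(codes, lost={2}, states=states)

def mse(a, b, lo, hi):
    return sum((u - v) ** 2 for u, v in zip(a[lo:hi], b[lo:hi])) / (hi - lo)

# error on the intact packet *after* the lost one (samples 24..31)
print(mse(x, plain, 24, 32), mse(x, resync, 24, 32))
```

Without resynchronization the state error decays only geometrically (by A per sample), so packet 3 is badly decoded despite being received; with resynchronization its error is back down to pure quantization noise.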