
Showing papers on "Packet loss published in 1988"


Journal ArticleDOI
01 Aug 1988
TL;DR: The measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet; the paper describes algorithms (i)-(v) and the rationale behind them, while (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research and (vii) is described in a soon-to-be-published RFC.
Abstract: In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was misbehaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes". Since that time, we have put seven new algorithms into the 4BSD TCP: (i) round-trip-time variance estimation, (ii) exponential retransmit timer backoff, (iii) slow-start, (iv) more aggressive receiver ack policy, (v) dynamic window sizing on congestion, (vi) Karn's clamped retransmit backoff, and (vii) fast retransmit. Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet. This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC. Algorithms (i)-(v) spring from one observation: the flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them. By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': a new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy? There are only three ways for packet conservation to fail: (1) the connection doesn't get to equilibrium, (2) a sender injects a new packet before an old packet has exited, or (3) the equilibrium can't be reached because of resource limits along the path. In the following sections, we treat each of these in turn.
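
Two of the listed mechanisms, round-trip-time variance estimation with exponential retransmit-timer backoff and the slow-start/dynamic-window rules, can be illustrated with a short sketch. This is a minimal illustration under assumptions, not the 4BSD implementation: the estimator gains (1/8 and 1/4), the deviation multiplier of 4, and the window constants are illustrative choices.

```python
# Hedged sketch of RTT variance estimation with exponential retransmit-timer
# backoff, plus slow-start / congestion-window growth.  Constants are
# illustrative, not necessarily the ones used in 4BSD TCP.

class RetransmitTimer:
    def __init__(self, initial_rto=3.0):
        self.srtt = None      # smoothed round-trip time (seconds)
        self.rttvar = None    # smoothed mean deviation of the RTT
        self.rto = initial_rto
        self.backoff = 1      # exponential backoff multiplier

    def on_rtt_sample(self, rtt):
        """Update the estimator from a timed, unambiguous RTT sample."""
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            err = rtt - self.srtt
            self.srtt += err / 8                         # low-pass filter on the mean
            self.rttvar += (abs(err) - self.rttvar) / 4  # filter on the deviation
        self.rto = self.srtt + 4 * self.rttvar
        self.backoff = 1                                 # a good sample clears backoff

    def on_timeout(self):
        """Exponential backoff: each successive timeout doubles the timer."""
        self.backoff = min(self.backoff * 2, 64)
        return self.rto * self.backoff


class CongestionWindow:
    """Slow-start plus additive growth, in units of segments."""
    def __init__(self, ssthresh=64):
        self.cwnd = 1.0
        self.ssthresh = ssthresh

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0              # slow start: doubles roughly once per RTT
        else:
            self.cwnd += 1.0 / self.cwnd  # congestion avoidance: ~1 segment per RTT

    def on_loss(self):
        self.ssthresh = max(self.cwnd / 2, 2)
        self.cwnd = 1.0                   # restart from one segment


if __name__ == "__main__":
    timer, window = RetransmitTimer(), CongestionWindow(ssthresh=8)
    for sample in (0.10, 0.12, 0.30, 0.11):
        timer.on_rtt_sample(sample)
    for _ in range(20):
        window.on_ack()
    window.on_loss()
    print(f"rto={timer.rto:.3f}s backoff_rto={timer.on_timeout():.3f}s cwnd={window.cwnd}")
```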

5,620 citations


Journal ArticleDOI
TL;DR: A nonblocking, self-routing copy network with constant latency is proposed for use in a multicast packet switch, capable of packet replication and switching, which is usually a serial combination of a copy network and a point-to-point switch.
Abstract: In addition to handling point-to-point connections, a broadband packet network should be able to provide multipoint communications that are required by a wide range of applications. The essential component to enhance the connection capability of a packet network is a multicast packet switch, capable of packet replication and switching, which is usually a serial combination of a copy network and a point-to-point switch. The copy network replicates input packets from various sources simultaneously, after which copies of broadcast packets are routed to their final destinations by the switch. A nonblocking, self-routing copy network with constant latency is proposed. Packet replication is accomplished by an encoding process and a decoding process. The encoding process transforms the set of copy numbers, specified in the headers of incoming packets, into a set of monotone address intervals which form new packet headers. The decoding process performs the packet replication according to the Boolean interval splitting algorithm through the broadcast banyan network; the decision making at each node is based on two bits of header information. This yields minimum complexity in the switch nodes.
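
As a rough illustration of the Boolean interval splitting idea (the exact header layout and node logic are the paper's; the code below is only a simplified sketch of the recursive splitting rule), each packet carries a monotone address interval [min, max]; at stage k a node inspects bit k of both bounds, forwards the packet unchanged if the two bits agree, and otherwise splits the interval and sends one copy out each port.

```python
# Hedged sketch of Boolean interval splitting in a broadcast banyan network.
# A packet header carries an address interval (lo, hi); at stage k each 2x2 node
# looks at bit k of lo and hi (two bits of header information) and either routes
# the packet to one output or splits it into two copies, one per output.
# This illustrates the splitting rule only, not the paper's hardware design.

def split(lo: int, hi: int, stage: int, nbits: int, out=None):
    """Recursively replicate a packet addressed to the interval [lo, hi]."""
    if out is None:
        out = []
    if stage == nbits:            # last stage passed: one copy per address
        out.append(lo)
        return out
    shift = nbits - 1 - stage     # examine bits from most significant to least
    b_lo = (lo >> shift) & 1
    b_hi = (hi >> shift) & 1
    if b_lo == b_hi:
        # Both bounds agree on this bit: route out that port, interval unchanged.
        split(lo, hi, stage + 1, nbits, out)
    else:
        # Bits differ (lo has 0, hi has 1): split the interval and send a copy out
        # each port.  Port 0 keeps [lo, prefix+0+11...1]; port 1 keeps [prefix+1+00...0, hi].
        mid_hi = (lo >> (shift + 1) << (shift + 1)) | ((1 << shift) - 1)
        mid_lo = mid_hi + 1
        split(lo, mid_hi, stage + 1, nbits, out)
        split(mid_lo, hi, stage + 1, nbits, out)
    return out


if __name__ == "__main__":
    # A packet asking for copies on outputs 3..6 of an 8-output (3-stage) network.
    print(split(lo=3, hi=6, stage=0, nbits=3))   # -> [3, 4, 5, 6]
```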

387 citations


Journal ArticleDOI
01 Aug 1988
TL;DR: The scheme is distributed, adapts to the dynamic state of the network, converges to the optimal operating point, is quite simple to implement, and has low overhead while operational.
Abstract: We propose a scheme for congestion avoidance in networks using a connectionless protocol at the network layer. The scheme uses feedback from the network to the users of the network. The interesting challenge for the scheme is to use a minimal amount of feedback (one bit in each packet) from the network to adjust the amount of traffic allowed into the network. The servers in the network detect congestion and set a congestion indication bit on packets flowing in the forward direction. The congestion indication is communicated back to the users through the transport-level acknowledgement. The scheme is distributed, adapts to the dynamic state of the network, converges to the optimal operating point, is quite simple to implement, and has low overhead while operational. The scheme also addresses a very important aspect of fairness, attempting to maintain fair service among the various sources utilizing the network. This paper presents the scheme and the analysis that went into the choice of the various decision mechanisms. We also address the performance of the scheme under transient changes in the network and for pathological conditions.
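
A hedged sketch of how a transport source might react to the one-bit feedback is shown below. The abstract does not spell out the exact decision rule; the threshold (half the acknowledgements in the last window carrying the congestion bit), the additive increase of one packet, and the multiplicative decrease factor of 7/8 are values commonly associated with this line of work and are used here as assumptions.

```python
# Hedged sketch of a source policy driven by a single congestion-indication bit
# echoed in transport-level acknowledgements.  The 50% threshold, +1 additive
# increase, and 0.875 multiplicative decrease are assumptions, not quoted values.

class BinaryFeedbackSource:
    def __init__(self, max_window=128):
        self.window = 1.0          # packets the source may have outstanding
        self.max_window = max_window
        self._bits = []            # congestion bits seen in the current window

    def on_ack(self, congestion_bit: bool):
        """Record one acknowledgement; adjust once per window's worth of acks."""
        self._bits.append(congestion_bit)
        if len(self._bits) >= int(self.window):
            congested_fraction = sum(self._bits) / len(self._bits)
            if congested_fraction >= 0.5:
                self.window = max(1.0, self.window * 0.875)              # back off
            else:
                self.window = min(self.max_window, self.window + 1.0)    # probe upward
            self._bits = []


if __name__ == "__main__":
    src = BinaryFeedbackSource()
    for _ in range(200):                       # uncongested network for a while
        src.on_ack(congestion_bit=False)
    print("window with no congestion bits:", round(src.window, 1))
    for _ in range(200):                       # sustained congestion indications
        src.on_ack(congestion_bit=True)
    print("window after congestion bits:", round(src.window, 1))
```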

315 citations


Journal ArticleDOI
TL;DR: Comparisons to simulations using data collected from real conversations show that the packet loss can be determined accurately if the delay limit is less than 400 ms and more than half the packet length.
Abstract: In a packet-speech multiplexer with limited delay, packets arriving once the queue has reached a certain limit are either discarded, or if embedded encoding has been used, shortened. The uniform arrival and service model, which assumes that the information flow in and out of the multiplexer is uniform rather than in discrete packets, is used to analyze such a multiplexer. The equilibrium queue distribution is described by a set of differential equations that can be solved, together with a set of boundary equations describing the queue behavior at its limits, to yield equilibrium distributions of delay and packet loss. Comparisons to simulations using data collected from real conversations show that the packet loss can be determined accurately if the delay limit is less than 400 ms and more than half the packet length.

187 citations


Patent
06 Dec 1988
TL;DR: In this article, an integrated voice and data network includes a multiplexer arranged with a voice queue for storing voice packets and a data queue for storing data packets; if a signaling message arrives during either transmission interval, that interval is suspended until the entire signaling message has been transmitted.
Abstract: An integrated voice and data network includes a multiplexer arranged with a voice queue for storing voice packets and a data queue for storing data packets. Voice packets are transmitted for a predetermined interval T1. Data packets are transmitted for a predetermined interval T2. The predetermined intervals T1 and T2 may be of different durations. A separate signaling queue can be provided for storing received signaling messages. If a signaling message is moved into the separate signaling queue during either interval T1 or T2, that interval is suspended and the transmission of voice or data packets is interrupted until the entire signaling message is transmitted. Then the interrupted voice or data transmission is resumed for the remainder of the suspended interval T1 or T2. As an alternative, signaling messages can be transmitted during predetermined intervals between the intervals T1 and T2. Block dropping of low-order voice bits also is described for reducing congestion at the node. The multiplexer guarantees a certain minimum bandwidth for voice traffic and data traffic. Concurrently, the multiplexer allows each type of traffic to utilize any spare bandwidth momentarily available because it is not being utilized by the other type of traffic. Signaling messages are serviced with very low delay and zero packet loss.
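
The interval-based service discipline described above can be sketched as a small scheduler loop. Everything concrete in the sketch (queue objects, interval lengths, time counted in packet slots) is an illustrative assumption; only the overall policy, alternating T1/T2 service with immediate suspension whenever a signaling message is queued, comes from the abstract.

```python
# Hedged sketch of the multiplexer's service discipline: voice packets are served
# for an interval T1, data packets for an interval T2, and either interval is
# suspended whenever a signaling message is queued.  Time is counted in packet
# transmission slots purely for illustration.

from collections import deque

def run_multiplexer(voice, data, signaling, t1=3, t2=2, total_slots=20):
    """Serve the queues slot by slot and return the transmission order."""
    order = []
    intervals = [("voice", voice, t1), ("data", data, t2)]
    current = 0
    remaining = intervals[current][2]
    for _ in range(total_slots):
        if signaling:
            # Suspend the current interval: the signaling message goes out first and
            # the remaining T1/T2 budget is preserved until it has been transmitted.
            order.append(("signaling", signaling.popleft()))
            continue
        name, queue, _ = intervals[current]
        if queue:
            order.append((name, queue.popleft()))
        remaining -= 1
        if remaining == 0:                      # interval exhausted: switch traffic type
            current = 1 - current
            remaining = intervals[current][2]
    return order


if __name__ == "__main__":
    voice = deque(f"v{i}" for i in range(6))
    data = deque(f"d{i}" for i in range(6))
    signaling = deque(["sig-0"])
    for kind, unit in run_multiplexer(voice, data, signaling):
        print(kind, unit)
```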

166 citations


Book
01 Dec 1988
TL;DR: In this article, the authors compare the concept of congestion avoidance with that of congestion control and propose a binary feedback scheme to increase or decrease the load of the users to make optimal use of the resources.
Abstract: Widespread use of computer networks and the use of varied technology for the interconnection of computers has made congestion a significant problem. In this report, we summarize our research on congestion avoidance. We compare the concept of congestion avoidance with that of congestion control. Briefly, congestion control is a recovery mechanism, while congestion avoidance is a prevention mechanism. A congestion control scheme helps the network to recover from the congestion state while a congestion avoidance scheme allows a network to operate in the region of low delay and high throughput with minimal queuing, thereby preventing it from entering the congested state in which packets are lost due to buffer shortage. A number of possible alternatives for congestion avoidance were identified. From these alternatives we selected one called the binary feedback scheme in which the network uses a single bit in the network layer header to feed back the congestion information to its users, which then increase or decrease their load to make optimal use of the resources. The concept of global optimality in a distributed system is defined in terms of efficiency and fairness such that they can be independently quantified and apply to any number of resources and users. The proposed scheme has been simulated and shown to be globally efficient, fair, responsive, convergent, robust, distributed, and configuration-independent.
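
The report's idea of quantifying fairness independently of efficiency is usually expressed with the fairness index associated with this line of work; the formula below is given as a reminder of that standard definition rather than as a quotation from the report.

```latex
% Fairness index over the throughputs x_1, ..., x_n of the n users sharing a resource.
% F(x) = 1 when all allocations are equal and approaches 1/n as the allocation becomes
% maximally unfair (all throughput going to a single user).
\[
  F(x) \;=\; \frac{\left(\sum_{i=1}^{n} x_i\right)^{2}}{\,n \sum_{i=1}^{n} x_i^{2}\,},
  \qquad \tfrac{1}{n} \le F(x) \le 1 .
\]
```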

164 citations


PatentDOI
04 Mar 1988
TL;DR: In this paper, a buffer management system for a general multipoint packet switching network is proposed, which determines whether a packet should be stored, retransmitted, or discarded during an overload condition by identifying each incoming packet as either an excess packet or a nonexcess packet based on the number of packets stored in the memory array.
Abstract: A Buffer Management System for a general multipoint packet switching network where the network has terminals transmitting data in the form of packets belonging to multiple channels over communication links through a packet switch array, the packet switches of the array receiving incoming packets from input data links and having memory arrays for temporarily storing the incoming packets before retransmitting the stored packets over output links. The Buffer Management System determines whether a packet should be stored, retransmitted, or discarded during an overload condition by identifying each incoming packet as either an excess packet or a nonexcess packet, based on the number of packets of the same channel as the incoming packet already stored in the memory array. When the memory array is full and contains at least one excess packet, an incoming nonexcess packet is written into the memory array and the excess packet is discarded from it.
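
A hedged sketch of the admission/pushout decision is given below. The per-channel threshold used to classify a packet as "excess", and the data structures, are illustrative assumptions; only the policy, admitting a nonexcess packet into a full buffer by discarding a stored excess packet, is taken from the abstract.

```python
# Hedged sketch of the buffer-management decision: an incoming packet is "excess"
# if its channel already holds at least a threshold number of buffered packets.
# When the buffer is full, a nonexcess arrival may push out a stored excess packet.
# The threshold and data structures are illustrative assumptions.

from collections import deque

class ManagedBuffer:
    def __init__(self, capacity=8, per_channel_limit=3):
        self.capacity = capacity
        self.per_channel_limit = per_channel_limit
        self.buffer = deque()              # (channel, packet), FIFO order

    def _count(self, channel):
        return sum(1 for ch, _ in self.buffer if ch == channel)

    def on_arrival(self, channel, packet):
        """Return 'stored' or 'discarded' for the incoming packet."""
        excess = self._count(channel) >= self.per_channel_limit
        if len(self.buffer) < self.capacity:
            self.buffer.append((channel, packet))
            return "stored"
        if excess:
            return "discarded"             # excess packet meets a full buffer
        # Nonexcess arrival and full buffer: push out a stored excess packet, if any.
        for i, (ch, _) in enumerate(self.buffer):
            if self._count(ch) > self.per_channel_limit:
                del self.buffer[i]
                self.buffer.append((channel, packet))
                return "stored"
        return "discarded"                 # buffer is full of nonexcess packets


if __name__ == "__main__":
    buf = ManagedBuffer(capacity=4, per_channel_limit=1)
    for ch, pkt in [("A", 1), ("A", 2), ("A", 3), ("B", 1), ("C", 1)]:
        print(ch, pkt, "->", buf.on_arrival(ch, pkt))
```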

142 citations


Patent
11 Oct 1988
TL;DR: In this paper, a self-routing multistage switching network is proposed for a fast packet switching system suitable for multimedia communication, with packet buffers provided only at the input stage and switching stages that have no packet storing function.
Abstract: A self-routing multistage switching network for a fast packet switching system suitable for multimedia communication. The self-routing multistage switching network has packet buffer means for storing packets, provided only in an input stage and respectively connected to input ports, followed by switching networks having no packet storing function. While packets are transmitted through it, the network detects whether or not they are delivered rather than discarded, reports information identifying the delivered packets back to the packet buffer means over the transmission routes the packets traversed, and deletes from the packet buffer means the stored copies of the packets that have been successfully transmitted through the network after sending them out. The self-routing multistage switching network is capable of transmitting a plurality of packets for a piece of communication without packets overtaking one another.

115 citations


Book
01 Dec 1988
TL;DR: In this article, a simple congestion control scheme using the acknowledgment timeouts as indications of packet loss and congestion is proposed, which can be used in any network with window flow control, e.g., ARPAnet or ISO.
Abstract: During overload, most networks drop packets due to buffer unavailability. The resulting timeouts at the source provide an implicit mechanism to convey congestion signals from the network to the source. On a timeout, a source should not only retransmit the lost packet, but it should also reduce its load on the network. Based on this realization, we have developed a simple congestion control scheme using the acknowledgment timeouts as indications of packet loss and congestion. This scheme does not require any new message formats, therefore, it can be used in any network with window flow control, e.g., ARPAnet or ISO.
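
The abstract leaves the exact load-reduction rule unspecified; the sketch below assumes one simple possibility, shrinking the window sharply on a timeout and growing it back gradually on successful acknowledgements, layered on an ordinary window flow-control sender.

```python
# Hedged sketch of timeout-based congestion control on top of window flow control.
# The specific adjustment (collapse the window to 1 on a timeout, grow it by one
# packet per window of acks otherwise) is an assumed policy for illustration; the
# abstract only says the source should retransmit the lost packet and reduce its load.

class TimeoutCongestionControl:
    def __init__(self, max_window=32):
        self.window = 1
        self.max_window = max_window
        self._acked_in_window = 0

    def on_ack(self):
        """Called when an acknowledgement arrives before its timer expires."""
        self._acked_in_window += 1
        if self._acked_in_window >= self.window:
            self.window = min(self.window + 1, self.max_window)
            self._acked_in_window = 0

    def on_timeout(self, lost_packet):
        """Called when an acknowledgement timer expires: retransmit and back off."""
        self.window = 1
        self._acked_in_window = 0
        return lost_packet        # the caller re-queues this packet for transmission


if __name__ == "__main__":
    cc = TimeoutCongestionControl()
    for _ in range(30):
        cc.on_ack()
    print("window after 30 acks:", cc.window)
    cc.on_timeout(lost_packet="pkt-17")
    print("window after a timeout:", cc.window)
```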

110 citations


Journal ArticleDOI
TL;DR: The focus is on congestion control (that is, prevention of internal congestion); however some of the proposed schemes require the interaction of flow and congestion control.
Abstract: The reasons why congestion control is more difficult in interconnected local area networks (LANs) than in conventional packet nets are examined. The flow and congestion control mechanisms that can be used in an interconnected LAN environment are reviewed. The focus is on congestion control (that is, prevention of internal congestion); however some of the proposed schemes require the interaction of flow and congestion control. The schemes considered are dropping packets; input buffer limit, i.e. a limit on the number of input packets (i.e. packets from local hosts) that can be buffered in the packet switch; the use of choke packets, in which, whenever a bridge or router experiences congestion, it returns to the source a choke packet containing the header of the packet traveling in the congested direction and the source, on receiving the choke packet, declares the destination congested, and slows (or stops altogether, for a period of time) traffic to that destination; backpressure, which is the regulation of flow along a virtual connection; and congestion prevention, whereby a voice or video connection is accepted only if there is enough bandwidth (in a statistical sense) in the network to support it.
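
A hedged sketch of the choke-packet reaction described above follows; the hold-down duration and data structures are assumptions, while the policy (mark the destination congested on receipt of a choke packet and slow or stop traffic to it for a period of time) comes from the survey.

```python
# Hedged sketch of a source host reacting to choke packets: on receiving a choke
# packet carrying the header of one of its own packets, the source marks that
# destination as congested and stops sending to it for a hold-down period.
# The hold-down length is an illustrative assumption.

import time

class ChokeAwareSource:
    def __init__(self, hold_down_seconds=0.5):
        self.hold_down = hold_down_seconds
        self._congested_until = {}          # destination -> time when throttling ends

    def on_choke_packet(self, destination):
        """A bridge or router returned a choke packet for traffic toward destination."""
        self._congested_until[destination] = time.monotonic() + self.hold_down

    def may_send(self, destination):
        """True if traffic toward this destination is currently allowed."""
        return time.monotonic() >= self._congested_until.get(destination, 0.0)


if __name__ == "__main__":
    src = ChokeAwareSource(hold_down_seconds=0.1)
    print("before choke:", src.may_send("host-B"))
    src.on_choke_packet("host-B")
    print("right after choke:", src.may_send("host-B"))
    time.sleep(0.12)
    print("after hold-down:", src.may_send("host-B"))
```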

90 citations


Journal ArticleDOI
TL;DR: The authors model the internal structure of a packet-switching node in a real-time system and characterize the tradeoff between throughput, delay, and packet loss as a function of the buffer size, switching speed, etc.
Abstract: The authors model the internal structure of a packet-switching node in a real-time system and characterize the tradeoff between throughput, delay, and packet loss as a function of the buffer size, switching speed, etc. They assume a simple shared-single-path switch fabric, though the analysis can be generalized to a wider class of switch fabrics. They show that with a small number of buffers the node will provide a guaranteed delay bound for high-priority traffic, a low average delay for low-priority traffic, no loss of packets at the input and low probability of packet loss at output.

Proceedings ArticleDOI
P. Tran-Gia, Hamid Ahmadi
27 Mar 1988
TL;DR: The authors present and solve a discrete-time G^(X)/D/1-S queuing system with a finite queue size and batch arrivals with a general batch size distribution.
Abstract: The authors present and solve a discrete-time G^(X)/D/1-S queuing system with a finite queue size and batch arrivals with a general batch size distribution. The motivation for this model arises from performance modeling of a statistical multiplexer with synchronous transmission of fixed-size data units in synchronous time slots. The arrival process to the multiplexer, for example, may originate from a number of independent sources with packets of variable lengths. Hence, a packet arrival corresponds to an arrival of a batch of data units. Different performance measures such as percentage of packet loss and data-unit loss are considered under two different admission policies of packets into the queue.
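
The abstract does not name the two admission policies; a common pair in this setting is whole-batch admission (a packet's data units are accepted only if the entire batch fits) versus partial admission (as many data units as fit are accepted). The sketch below simulates a finite discrete-time queue under those two assumed policies to contrast packet loss and data-unit loss; all parameters are arbitrary illustrative choices.

```python
# Hedged sketch: a discrete-time, finite-capacity queue with batch (packet) arrivals,
# one data unit served per slot, compared under two *assumed* admission policies:
#   "whole"   - admit the packet only if all of its data units fit,
#   "partial" - admit as many data units as fit and drop the rest.

import random

def simulate(policy, capacity=10, slots=200_000, p_arrival=0.25, max_batch=6, seed=1):
    rng = random.Random(seed)
    queue = 0
    packets = packets_lost = units = units_lost = 0
    for _ in range(slots):
        if rng.random() < p_arrival:                 # at most one packet arrives per slot
            batch = rng.randint(1, max_batch)        # a packet is a batch of data units
            packets += 1
            units += batch
            free = capacity - queue
            if policy == "whole":
                if batch <= free:
                    queue += batch
                else:
                    packets_lost += 1                # the whole packet is refused
                    units_lost += batch
            else:                                    # "partial" admission
                admitted = min(batch, free)
                queue += admitted
                units_lost += batch - admitted
                if admitted < batch:
                    packets_lost += 1                # the packet arrives truncated
        if queue > 0:                                # deterministic service: 1 unit per slot
            queue -= 1
    return packets_lost / packets, units_lost / units


if __name__ == "__main__":
    for policy in ("whole", "partial"):
        p_loss, u_loss = simulate(policy)
        print(f"{policy:7s}  packet loss = {p_loss:.4f}   data-unit loss = {u_loss:.4f}")
```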

Journal ArticleDOI
TL;DR: A comprehensive model encompassing the process of packet duplication together with both forms of packet elimination is defined and a quasi-static distributed algorithm is developed that is optimal, deadlock free, and loop free.
Abstract: Packet duplication is discussed as a means of increasing network reliability in an environment where packet loss exists. Several methods of routing the duplicates are presented, one of which, the st-numbering, is shown to have the combined advantage of using disjoint paths and more even utilization of network resources. An additional mechanism, deliberate packet elimination, is introduced as a means of controlling congestion that may result, in part, from the duplication. A comprehensive model is defined encompassing the process of packet duplication together with both forms of packet elimination. Within this model, a cost function based on average packet delay is defined. A quasi-static distributed algorithm is developed that is optimal, deadlock free, and loop free. Extension of the model to include packet retransmission is considered.

Proceedings ArticleDOI
28 Nov 1988
TL;DR: In this article, a novel high-performance packet-switching architecture, called the knockout switch, has been recently proposed, which is a nonblocking, cost-effective switch suitable for broadband packet switching.
Abstract: A novel high-performance packet-switching architecture, called the knockout switch, has been recently proposed. It is a nonblocking, cost-effective switch suitable for broadband packet switching. The authors give equations which can be used to derive the packet loss probability and investigate the knockout switch under various uniform traffic patterns. They also compare the knockout switch with the multistage switch under various nonuniform traffic patterns.
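
Under the uniform-traffic assumptions usually made for the knockout switch (each of the N inputs carries a packet with probability p in a slot, destinations independent and uniform, at most L packets accepted per output per slot), the loss probability can be computed as the expected number of packets in excess of L divided by the expected number of arrivals. The sketch below evaluates that expression; it is offered as the standard uniform-traffic calculation, not as a transcription of the paper's equations.

```python
# Hedged sketch of the knockout-concentrator loss calculation under uniform traffic:
# arrivals to one output in a slot ~ Binomial(N, p/N); packets beyond the L accepted
# per slot are lost.  Loss probability = E[(K - L)^+] / E[K].

from math import comb

def knockout_loss(n_inputs: int, l_accept: int, load: float) -> float:
    q = load / n_inputs                       # prob. a given input sends to this output
    expected_excess = 0.0
    for k in range(l_accept + 1, n_inputs + 1):
        pk = comb(n_inputs, k) * q**k * (1 - q) ** (n_inputs - k)
        expected_excess += (k - l_accept) * pk
    return expected_excess / load             # E[K] = n_inputs * q = load


if __name__ == "__main__":
    # Loss probability vs. number of accepted packets L, for 32 inputs at 90% load.
    for l_accept in range(1, 13):
        print(f"L = {l_accept:2d}   P(loss) ~= {knockout_loss(32, l_accept, 0.9):.2e}")
```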

Journal ArticleDOI
TL;DR: In this paper, a multipath interconnection is proposed to overcome the internal link congestion in the banyan interconnection, where multiple (i.e., alternate) paths are provided and one is selected at call-setup time.
Abstract: The banyan interconnection is prone to internal link congestion, resulting in a blocking switch architecture. Several solutions that have been implemented to reduce the severity of link congestion offer packets a multiplicity of paths, which tend to increase packet delay variability and allow delivery of out-of-sequence packets. This, in turn, can lead to an increase in end-to-end protocol complexity, particularly in the case of real-time services. A solution called multipath interconnection is proposed to overcome this difficulty. Multiple (i.e., alternate) paths are provided and one is selected at call-setup time. Subsequent packets belonging to the call are constrained to follow the selected path. A number of path selection strategies are presented.

Proceedings ArticleDOI
25 Oct 1988
TL;DR: The proposed layered coding separates coded information into most significant parts (MSPs) and least significant parts (LSPs) and gives MSP packets priority over LSP packets to reduce the influence of packet loss on picture quality.
Abstract: Asynchronous Transfer Mode (ATM) is expected to be one of the important variable-bit-rate methods for video transmission. Packet loss has the greatest influence on picture quality in a video network. This paper proposes a layered coding technique suitable for ATM using discrete cosine transform (DCT) coding. The proposed layered coding separates coded information into most significant parts (MSPs) and least significant parts (LSPs) and gives MSP packets priority over LSP packets to reduce the influence of packet loss on picture quality. The influence of packet loss on picture quality is also described, and the effectiveness of the proposed layered coding is confirmed with decoded pictures.
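
A hedged sketch of the MSP/LSP idea follows. The actual split used in the paper is not specified here; the sketch assumes one plausible layering (the DC and low-frequency quantized DCT coefficients of each block form the high-priority MSP packet, the remaining coefficients form the low-priority LSP packet) purely to illustrate how the two packet classes would be formed and how a lost LSP packet degrades gracefully.

```python
# Hedged sketch of layered packetization of quantized DCT coefficients: low-frequency
# coefficients (in zigzag order) go into a high-priority MSP packet, the remaining
# coefficients into a low-priority LSP packet.  The split point and packet layout are
# illustrative assumptions, not the paper's exact scheme.

def zigzag_order(n=8):
    """Return (row, col) pairs of an n x n block in JPEG-style zigzag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def layer_block(coeffs, msp_count=10):
    """Split one block of quantized DCT coefficients into MSP and LSP payloads."""
    scan = [coeffs[r][c] for r, c in zigzag_order(len(coeffs))]
    return ({"priority": "high", "payload": scan[:msp_count]},
            {"priority": "low", "payload": scan[msp_count:]})

def reconstruct(msp, lsp_or_none, n=8):
    """Rebuild the block; a lost LSP packet is replaced by zeros (graceful degradation)."""
    scan = list(msp["payload"]) + (list(lsp_or_none["payload"]) if lsp_or_none
                                   else [0] * (n * n - len(msp["payload"])))
    block = [[0] * n for _ in range(n)]
    for value, (r, c) in zip(scan, zigzag_order(n)):
        block[r][c] = value
    return block


if __name__ == "__main__":
    block = [[(64 - (r + c) * 8) if r + c < 3 else 0 for c in range(8)] for r in range(8)]
    msp, lsp = layer_block(block)
    print("MSP payload:", msp["payload"])
    print("row 0 recovered with LSP packet lost:", reconstruct(msp, None)[0][:4])
```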

Patent
Lionel Bustini, Andre Cretegny, Gerard Marmigere, Guy Platel, Pierre Secondo
20 Sep 1988
TL;DR: In this article, voice packets are made to include an EC field whose contents indicate whether the corresponding packet is eligible for clipping, if required, in a node queue within the network; if clipping of a non-eligible packet is required, another bit field is set in the following packet on the same link to limit any possible clipping of successive packets on that link.
Abstract: In a packet switching network, voice packets are made to include an EC field whose contents indicate whether the corresponding packet is eligible for clipping, if required, in a node queue within the network. Should clipping of a non-eligible packet be required, then another bit field would be set in the following packet on the same link to limit any possible clipping of successive packets on that link.

Patent
Ikuko Takada
24 Aug 1988
TL;DR: In this paper, a data transmission system capable of eliminating a local traffic jam comprises a plurality of packet controllers each including a queue buffer for storing packets, the packet controllers being arranged in one-to-one correspondence with channels of a multiplexed transmission line, and an adapter for transmitting an object packet to a packet controller selected in accordance with a priority included in the object packet and an average priority of the packets stored in each queue buffer.
Abstract: A data transmission system capable of eliminating a local traffic jam comprises a plurality of packet controllers, each including a queue buffer for storing packets, the packet controllers being arranged in one-to-one correspondence with channels of a multiplexed transmission line, and an adapter for transmitting an object packet to a packet controller selected in accordance with a priority included in the object packet and an average priority of the packets stored in each queue buffer. Each packet controller receives the object packet from the adapter, stores the received packet, reads out stored packets from the queue buffer, the number of which is determined in accordance with changes in the degree of congestion on the corresponding channel, and transmits the readout packets to other stations through the corresponding channel. Each packet controller calculates an average priority of the packets stored in its queue buffer. The adapter transmits the object packet to a packet controller whose average priority is lower than the priority of the object packet.

Proceedings Article
01 Jan 1988
TL;DR: This work compares the concept of congestion avoidance with that of flow control and congestion control, and models the network and the user policies for congestion avoidance as a feedback control system.
Abstract: Congestion occurs in a computer network when the resource demands exceed the capacity. Packets may be lost due to too much queuing in the network. During congestion, the network throughput may drop and the path delay may become very high. A congestion control scheme helps the network to recover from the congestion state. A congestion avoidance scheme allows a network to operate in the region of low delay and high throughput. Such schemes prevent a network from entering the congested state. Congestion avoidance is a prevention mechanism while congestion control is a recovery mechanism. We compare the concept of congestion avoidance with that of flow control and congestion control. A number of possible alternatives for congestion avoidance have been identified. From these a few were selected for study. The criteria for selection and goals for these schemes have been described. In particular, we wanted the scheme to be globally efficient, fair, dynamic, convergent, robust, distributed, configuration-independent, etc. We model the network and the user policies for congestion avoidance as a feedback control system. The key components of a generic congestion avoidance scheme are: congestion detection, congestion feedback, feedback selector, signal filter, decision function, and increase/decrease algorithms. These components are explained, and the features of the simulation model used are described.

Proceedings ArticleDOI
S.-Q. Li
12 Jun 1988
TL;DR: A simple tool is developed to evaluate the boundary performance of packet loss at various rates; it is especially accurate at high rates and can be directly applied to the design of packet-switched voice systems.
Abstract: A study is presented of the temporal behavior of packet loss on a packet-voice TDM (time-division multiplexed) link. The mean durations of blocking and nonblocking periods, the variance of blocking periods, and the expected packet loss rate within blocking periods are derived. These measures are extended to characterize the performance of packet loss at various rates. The results provide significant information on voice packet loss and can be directly applied to the design of packet-switched voice systems. The packet loss rate in a voice system changes slowly and has large fluctuations. This temporal behavior of packet loss, especially at high rates, is basically characterized by voice correlation and system capacity. Increasing the buffer size merely extends the nonblocking periods and thereby reduces the overall average packet loss rate. Once a blocking period occurs, however, the length of this period as well as the behavior of packet loss within the period becomes irrelevant to the buffer size. Based on this observation, a simple tool is developed to evaluate the boundary performance of packet loss at various rates; it is especially accurate at high rates.

Journal ArticleDOI
TL;DR: A series of simulations shows that this expected behavior occurs when there are very few network stations, very short data packets (but still long relative to ring latency), very short token hold times, and very high network loads.
Abstract: The IEEE standard 802.5 token ring protocol defines eight packet priorities. The intent is that high-priority packets should be delivered prior to low-priority packets. A series of simulations shows that this expected behavior occurs when there are very few network stations, very short data packets (but still long relative to ring latency), very short token hold times, and very high network loads. In the general case, priorities did not markedly influence packet delivery time. Use of the priority system generally resulted in more overhead and longer average packet delays than when all packets were carried as a single priority. The features of the protocol operation that are the cause of this increased delay and lack of priority discrimination are described mathematically.


Journal ArticleDOI
TL;DR: In this paper, the link-level protocol with too small a window (such as three) caused excessive network congestion and was worse than no link level protocol at all, while with a sufficient window, the link protocol offered an improvement under high error conditions with light loads, but the improvement lessened as the load increased.
Abstract: Roundtrip message response time was measured on a simulated packet network subjected to message traffic from interactive users. The links had impairments of errors and propagation delay. Each virtual circuit had its own edge-to-edge protocol. With no link-level protocol in place, the edge-to-edge protocol yielded good performance, provided that the edge-to-edge timeout threshold accounted for network delays due to packet queuing. A link-level protocol with too small a window (such as three) caused excessive network congestion and was worse than no link-level protocol at all. With a sufficient window, the link protocol offered an improvement under high-error conditions with light loads, but the improvement lessened as the load increased.

Journal ArticleDOI
TL;DR: This paper proposes a preemptive packet transfer scheme, where a long packet, composed of a number of virtual cells, can be preempted by short packets only at the end of each transferred cell.
Abstract: In heterogeneous packet-switching systems that handle packets of various sizes, such as short packets for voice and long packets for image or video, resource occupation by long packets causes an intolerable transfer delay of short packets. To avoid this problem, we propose a preemptive packet transfer scheme, where a long packet, composed of a number of virtual cells, can be preempted by short packets only at the end of each transferred cell. First, this paper analyzes the relationship between the transfer delay of short packets and cell size. Next, a division process for a long packet necessitated by short packet arrival is formulated. Then the amount of long packet processing and long packet transfer overhead are calculated. Finally, a design method for the proposed scheme is discussed and a packet format example for this scheme is proposed.
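
The cell-boundary preemption rule can be sketched as a small scheduler; the cell size, the queue representation, and the decision to check for short-packet arrivals after every cell are illustrative assumptions consistent with the scheme described above.

```python
# Hedged sketch of preemptive packet transfer: a long packet is carved into virtual
# cells, and after each transmitted cell any waiting short packets go out before the
# next cell.  Cell size and arrival bookkeeping are illustrative choices.

from collections import deque

def transmit(long_packet, short_arrivals, cell_size=48):
    """long_packet = (name, length); short_arrivals maps cell index -> short packets
    that arrive while that cell is on the wire.  Returns the transmission sequence."""
    name, length = long_packet
    sent, pending, offset, cell_index = [], deque(), 0, 0
    while offset < length:
        cell = min(cell_size, length - offset)
        sent.append(("cell", f"{name}[{offset}:{offset + cell}]"))
        pending.extend(short_arrivals.get(cell_index, []))   # arrivals during this cell
        offset += cell
        cell_index += 1
        # Preemption point: the long packet yields only at a cell boundary.
        while pending:
            sent.append(("short", pending.popleft()))
    return sent


if __name__ == "__main__":
    seq = transmit(("image-A", 200), {0: ["voice-1"], 2: ["voice-2", "voice-3"]})
    for kind, unit in seq:
        print(kind, unit)
```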

Proceedings ArticleDOI
27 Mar 1988
TL;DR: A fluid approximation to an integrated packet voice and data multiplexer where the transmission of voice packets has preemptive priority over data packets is derived.
Abstract: A fluid approximation to an integrated packet voice and data multiplexer where the transmission of voice packets has preemptive priority over data packets is derived. Performance measures such as the average delay for both voice and data packets are given and a comparison of the average delay for data packets as a function of the data packet arrival rate is made between the analytical results and simulation results. A voice-rate flow control procedure is incorporated in the model and its effect on average voice and data packet delays is computed.

Journal ArticleDOI
TL;DR: In this article, a packet voice receiver for voice channels carried on a data network must compensate for random network delays by buffering packets before delivery, and the results include nonstationary buffer length distribution and mean values, as a function of time in the talkspurt.
Abstract: A packet voice receiver for voice channels carried on a data network must compensate for random network delays by buffering packets before delivery. This paper analyzes a generally applicable protocol devised by Barberis and Pazzaglia which waits for late packets. The results include nonstationary buffer length distribution and mean values, as a function of time in the talkspurt. These results may be used in the design of a packet voice receiver.

Proceedings ArticleDOI
08 Mar 1988
TL;DR: The Banyan interconnection, while well suited to multiprocessor and fast packet communication systems, is prone to link congestion resulting in a blocking switch architecture, and Multipath interconnection is proposed to overcome this difficulty.
Abstract: The Banyan interconnection, while well suited to multiprocessor and fast packet communication systems, is prone to link congestion resulting in a blocking switch architecture. Solutions have been proposed to reduce the severity of link congestion. In general, these solutions tend to increase packet delay variability and allow delivery of out-of-sequence packets. This may lead to an increase in end-to-end protocol complexity, particularly in the case of real-time services. Multipath interconnection is proposed to overcome this difficulty. Multiple (i.e. alternate) paths are provided, and one is selected at call setup. Subsequent packets belonging to the call are constrained to follow the selected path. A number of path-selection strategies are presented and evaluated.

Proceedings ArticleDOI
Kiran M. Rege, K.-J. Chen
12 Jun 1988
TL;DR: An analytic model is presented to deal with sizing and managing sources so that the severity of congestion can be minimized and it is demonstrated that with proper system sizing and management, network resources can be efficiently utilized.
Abstract: An analytic model is presented to deal with sizing and managing sources so that the severity of congestion can be minimized. This model does not account for packet loss and the consequent retransmission. However, it computes distributions of the number of packets of each type in the trunk buffer and the corresponding memory requirements, which can be used to estimate the probability of packet loss for a given buffer size. The model uses closed queuing analysis with simple approximations to account for the constraints imposed by window flow control. Some results derived from the model are presented and compared with those obtained through simulation. The analytical model can deal with the buffer sizing problem for high-speed trunks which can carry thousands of virtual circuits. It is demonstrated that with proper system sizing and management, network resources can be efficiently utilized.

01 Jan 1988
TL;DR: Two control procedures are proposed to reduce queueing delays in packet voice systems: priority transmission and selective packet discarding.
Abstract: Packet voice systems consisting of many independent speakers multiplexed on a single channel are examined. Information incurring queueing delays beyond a maximum acceptable limit is discarded. The relation between the acceptable delay limit and the fidelity of communication in these systems is analyzed. Each voice stream is processed using speech activity detection and embedded coding. The encoded information is identified as more significant or less significant, with the former placed in high-priority packets and the latter in low-priority packets. Two control procedures are proposed to reduce queueing delays in the system: priority transmission and selective packet discarding. Using a bivariate Markov chain model, the resultant queueing delays and packet loss probabilities are derived, and the performance of controlled and uncontrolled systems is compared.
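
A hedged sketch of the two control procedures is given below: high-priority packets are always transmitted before low-priority ones, and when the backlog would exceed the acceptable limit, low-priority packets are discarded first. The queue representation, the delay accounting in packet service times, and the discard order are illustrative assumptions.

```python
# Hedged sketch of the two control procedures: priority transmission (high-priority
# packets are served before low-priority ones) and selective packet discarding
# (when the backlog would exceed the limit, low-priority packets are dropped first).
# Delay is measured in packet service times purely for illustration.

from collections import deque

class PriorityVoiceQueue:
    def __init__(self, delay_limit=10):
        self.delay_limit = delay_limit     # maximum tolerable backlog, in service times
        self.high = deque()                # more significant (high-priority) information
        self.low = deque()                 # less significant (low-priority) information

    def enqueue(self, packet, high_priority):
        full = len(self.high) + len(self.low) >= self.delay_limit
        if full and high_priority and self.low:
            self.low.pop()                 # selective discarding: shed low priority first
            full = False
        if full:
            return "discarded"
        (self.high if high_priority else self.low).append(packet)
        return "queued"

    def transmit(self):
        """Priority transmission: high-priority packets are always served first."""
        if self.high:
            return self.high.popleft()
        return self.low.popleft() if self.low else None


if __name__ == "__main__":
    q = PriorityVoiceQueue(delay_limit=4)
    for i in range(6):
        cls = "MSP" if i % 2 == 0 else "LSP"
        print(cls, q.enqueue(f"pkt{i}", high_priority=(i % 2 == 0)))
    print("service order:", [q.transmit() for _ in range(4)])
```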

Journal ArticleDOI
TL;DR: Results presented in earlier work in which an edge-to-edge protocol was supported between network gateways are reviewed and extended, with emphasis placed on studying effects of varying the protocol, and, in particular, varying the acknowledgement methods used in window flow-control.
Abstract: Results presented in earlier work (IEEE J. Sel. Areas Commun. vol.6, no.1, p.190-6, 1988) in which an edge-to-edge protocol was supported between network gateways are reviewed and extended. Emphasis was placed on studying effects of varying the protocol, and, in particular, varying the acknowledgement methods used in window flow-control. The results were obtained with a simulation of a small packet network in which the full X.25 LAPB link protocol was coded in detail on every link in the network. The simulation stepped through each link in half-millisecond time increments, using a Monte Carlo technique to generate data from users and noise on the links. For the current study, an edge-to-edge protocol has been implemented separately for every virtual circuit in the network. Results of the simulation for a three-switch network are presented. The primary measure of performance is the user-perceived round-trip response time from the start of input of a message to completion of a reply from the message destination.