
Showing papers on "Packet loss published in 1981"


Journal ArticleDOI
TL;DR: Perceptual considerations indicate that the packet lengths most robust to losses are in the range 16-32 ms, irrespective of whether interpolation is used; with these lengths, tolerable P_L values can be as high as 2 to 5 percent without interpolation and 5 to 10 percent with interpolation.
Abstract: We have studied the effects of random packet losses in digital speech systems based on 12-bit PCM and 4-bit adaptive DPCM coding. The effects are a function of packet length B and probability of packet loss P_L. We have also studied the benefits of an odd-even sample-interpolation procedure that mitigates these effects (at the cost of increased decoding delay). The procedure is based on arranging a 2B-block of codewords into two B-sample packets, an odd-sample packet and an even-sample packet. If one of these packets is lost, the odd (or even) samples of the 2B-block are estimated from the even (or odd) samples by means of adaptive interpolation. Perceptual considerations indicate that the packet lengths most robust to losses are in the range 16-32 ms, irrespective of whether interpolation is used. With these packet lengths, tolerable P_L values, which are strictly input-speech-dependent, can be as high as 2 to 5 percent without interpolation and 5 to 10 percent with interpolation. These observations are based on a computer simulation with three sentence-length speech inputs, and on informal listening tests.
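The odd-even packetization idea above can be sketched in a few lines. This is a minimal stand-in: the paper uses adaptive interpolation, whereas plain linear interpolation (averaging the two surviving neighbours) is used here for illustration, and the function names are mine.

```python
# Sketch of the odd-even packetization idea: a 2B-sample block is split
# into an even-sample packet and an odd-sample packet; if one packet is
# lost, its samples are estimated from the surviving packet. Linear
# interpolation stands in for the paper's adaptive interpolation.

def split_odd_even(block):
    """Split a 2B-sample block into (even-sample, odd-sample) packets."""
    return block[0::2], block[1::2]

def conceal_lost_odd(even_packet):
    """Estimate lost odd samples as the average of their even neighbours."""
    est = []
    for i, cur in enumerate(even_packet):
        nxt = even_packet[i + 1] if i + 1 < len(even_packet) else cur
        est.append((cur + nxt) / 2)
    return est

def merge(even_packet, odd_packet):
    """Re-interleave even and odd packets into a 2B-sample block."""
    out = []
    for e, o in zip(even_packet, odd_packet):
        out.extend([e, o])
    return out

block = [0, 1, 2, 3, 4, 5, 6, 7]          # a smooth 2B-block, B = 4
even, odd = split_odd_even(block)
recovered = merge(even, conceal_lost_odd(even))  # odd packet lost
print(recovered)                           # close to the original block
```

For smooth (low-frequency) speech segments the interpolated samples land near the originals, which is why the scheme raises the tolerable loss rate; only the final sample, which lacks a right neighbour, is held constant here.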

254 citations


Journal ArticleDOI
TL;DR: The bounding of maximum packet lifetime and related parameters is important for achieving transport protocol reliability, and a mechanism is outlined for enforcing such a bound.

88 citations


Journal ArticleDOI
TL;DR: A distributed drop-and-throttle flow control (DTFC) policy based on a nodal buffer management scheme is proposed; it achieves very good network throughput even for loads fifty times beyond the normal operating region.
Abstract: Store-and-forward packet-switched networks are subject to congestion under heavy load conditions. In this paper a distributed drop-and-throttle flow control (DTFC) policy based on a nodal buffer management scheme is proposed. Two classes of traffic are identified: "new" and "transit" traffic. Packets that have traveled over one or more hops are considered transit packets; packets that are candidates to enter the communication network are considered new packets. At a given node, if the number of allocated buffers is greater than a limit value, then new traffic is rejected, whereas transit traffic is accepted. If the total buffer area is occupied, transit traffic is also rejected and, furthermore, dropped from the network. This policy is analyzed in the context of symmetrical networks. A queueing network model is developed whereby network throughput is expressed in terms of the traffic load, the number of buffers in a node, and the DTFC limit value. Optimal policies, in which the limit value is a function of the traffic load, are found to prevent network congestion and achieve very good network throughput even for loads fifty times beyond the normal operating region. Moreover, suboptimal, easy-to-implement fixed-limit policies offer satisfactory results.
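The per-node DTFC decision rule described above can be sketched as a single function. The parameter names and the exact threshold convention (>= rather than >) are illustrative choices, not taken from the paper.

```python
def dtfc_decision(occupied, capacity, limit, is_transit):
    """Drop-and-throttle flow control decision at one node (a sketch).

    occupied   - buffers currently allocated at the node
    capacity   - total buffer pool size
    limit      - DTFC limit value (new traffic throttled above it)
    is_transit - True for packets that have already traveled >= 1 hop
    Returns 'accept', 'reject' (throttled at network entry), or 'drop'.
    """
    if occupied >= capacity:
        # Buffer pool exhausted: even transit traffic is dropped.
        return 'drop' if is_transit else 'reject'
    if not is_transit and occupied >= limit:
        # Above the limit, new (entering) traffic is throttled,
        # while transit traffic is still accepted.
        return 'reject'
    return 'accept'

print(dtfc_decision(occupied=5, capacity=8, limit=4, is_transit=False))  # reject
print(dtfc_decision(occupied=5, capacity=8, limit=4, is_transit=True))   # accept
print(dtfc_decision(occupied=8, capacity=8, limit=4, is_transit=True))   # drop
```

The asymmetry is the point of the policy: buffers already invested in transit packets are protected by throttling new traffic first, so network resources are not wasted on packets that would later be dropped mid-path.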

36 citations


Journal ArticleDOI
TL;DR: Network design strategies for the control of load fluctuations are proposed and discussed, and the design of input buffer limits for network congestion control, virtual channel window size, and nodal buffer capacity is addressed.
Abstract: An experimental study was conducted using a network simulator to investigate the performance of packet communication networks as a function of: the network resource capacities (channels, buffers), the network load (number of virtual channels, virtual channel loads), protocols (flow control, congestion control, routing), and protocol parameters (virtual channel window size, input buffer limits). Performance characteristics are shown, and the design of input buffer limits for network congestion control, virtual channel window size, and nodal buffer capacity is addressed. Network design strategies for the control of load fluctuations are proposed and discussed.

22 citations


Journal ArticleDOI
TL;DR: The results show that the combination of buffer reservation and processor capacity allocation gives strictly nondecreasing network output as a function of increasing network input load, i.e., undesirable store and forward congestion effects are eliminated.
Abstract: The purpose of this study is twofold. First the study illustrates the utility of applying sparse matrix methods to packet network models. Secondly, these methods are used to give new results about the control of store and forward congestion in packet networks. Store and forward congestion (node to node blocking) reduces the effective traffic carrying capacity of the network by unnecessarily idling network resources. This study shows how store and forward congestion can be controlled by a combination of buffer reservation and processor capacity allocation. The scheme presented is analyzed using a Markovian state-space model of two coupled packet switches. The model contains more detail than previous analytic models. It is therefore solved using numerical sparse matrix methods. The results show that the combination of buffer reservation and processor capacity allocation gives strictly nondecreasing network output as a function of increasing network input load, i.e., undesirable store and forward congestion effects are eliminated.
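The numerical approach above solves a Markov model whose transition matrix is mostly zeros, which is what makes sparse methods pay off. As a toy stand-in (not the paper's two-switch model), here is a sparsely stored transition matrix for a single switch's buffer occupancy, solved for its stationary distribution by power iteration; the chain, its rates, and the solution method are all illustrative assumptions.

```python
# Toy illustration of sparse-matrix solution of a packet-switch Markov
# model: a buffer-occupancy birth-death chain stored as a dict of
# non-zero entries only, solved by power iteration pi <- pi P.

def stationary(P, n, iters=10000):
    """Power-iterate on a sparsely stored row-stochastic matrix P."""
    pi = [1.0 / n] * n
    for _ in range(iters):
        nxt = [0.0] * n
        for i, row in P.items():
            for j, p in row.items():
                nxt[j] += pi[i] * p
        pi = nxt
    return pi

# Buffer with 3 slots: arrival prob 0.3, departure prob 0.5 per step.
n = 4
P = {}
for i in range(n):
    row = {}
    if i < n - 1:
        row[i + 1] = 0.3          # an arrival fills one more buffer
    if i > 0:
        row[i - 1] = 0.5          # a departure frees one buffer
    row[i] = 1.0 - sum(row.values())
    P[i] = row

pi = stationary(P, n)
print([round(p, 3) for p in pi])   # occupancy distribution, sums to 1
```

In the detailed-balance solution each extra occupied buffer is 0.3/0.5 = 0.6 times as likely as the previous level; a realistic model with two coupled switches has far more states, almost all of whose pairwise transition probabilities are zero, which is why the paper turns to sparse numerical methods.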

15 citations


Journal ArticleDOI
01 Oct 1981
TL;DR: A queueing model is described which accounts for the non-Poissonian nature of the packet arrival process as a function of the interarrival time of packets associated with a particular message and the distribution of the number of packets per message.
Abstract: In a data network, when messages arrive at a switch to be served (transmitted) on a line, it seems reasonable to assume that the arrival process can be described as a Poisson (random) process. However, when messages are divided into a number of packets of a maximum length, these packets arrive bunched together. This gives rise to what is referred to as "peaked" traffic. The degree of peakedness depends on 1) the interarrival time of packets associated with a particular message and 2) the distribution of the number of packets per message. In this paper we describe a queueing model which accounts for the non-Poissonian nature of the packet arrival process as a function of these two factors. Since packets are of a fixed maximum length, the model assumes that the packet service time is constant, as opposed to the mathematically more tractable but less realistic assumption of exponentially distributed service time. This queueing model is then used to describe the network delay as affected by: 1) message switching versus packet switching; 2) a priority discipline in the queues; 3) packet interarrival time per message, which is probably controlled by the line speed at the packet origination point; and 4) a network which carries only short inquiry-response traffic as opposed to a network which also carries longer low-priority printer traffic. The general conclusions are that the peakedness in the arrival process caused by a short interarrival time of packets per message, together with the longer printer traffic, would cause excessive delays in a network. If inquiry-response traffic with a short response-time requirement is also to be carried on the same network, a priority discipline has considerable value. Message switching for such a combination of traffic should be avoided.
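The effect of peakedness described above is easy to reproduce in a toy simulation, using the Lindley recursion W[k] = max(0, W[k-1] + D - interarrival) for a single constant-service-time queue. The rates, batch size, and within-message packet gap below are illustrative values, not the paper's parameters.

```python
import random

# Messages arrive as a Poisson stream, but each is cut into several
# packets that arrive bunched together ("peaked" traffic). Compare the
# mean packet wait at a constant-service-time queue against a smoothed
# Poisson packet stream of the same total rate.

def mean_wait(arrivals, service):
    """Mean wait at a FIFO single server via the Lindley recursion."""
    arrivals = sorted(arrivals)
    w, total = 0.0, 0.0
    for k in range(1, len(arrivals)):
        w = max(0.0, w + service - (arrivals[k] - arrivals[k - 1]))
        total += w
    return total / (len(arrivals) - 1)

random.seed(1)
service = 1.0
msg_rate, pkts_per_msg, gap = 0.1, 5, 0.1   # gap = packet spacing in a message

bunched, t = [], 0.0
for _ in range(2000):
    t += random.expovariate(msg_rate)       # Poisson message arrivals
    bunched.extend(t + i * gap for i in range(pkts_per_msg))

smooth, t = [], 0.0
for _ in range(len(bunched)):
    t += random.expovariate(msg_rate * pkts_per_msg)  # same packet rate
    smooth.append(t)

b, s = mean_wait(bunched, service), mean_wait(smooth, service)
print(round(b, 2), round(s, 2))   # bunched arrivals see the larger mean wait
```

Because the within-message gap (0.1) is much shorter than the service time (1.0), later packets of a message queue behind earlier ones even at low overall utilization, which is exactly the delay mechanism the abstract attributes to peaked traffic.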

6 citations


Patent
04 Aug 1981
TL;DR: In this article, the authors proposed a scheme to improve the efficiency of packet synthesis by sending stored packets out of a packet storage memory to a circuit when coupling with a next packet is not indicated at a packet transmitter and receiver.
Abstract: PURPOSE: To improve the efficiency of packet synthesis by having the packet transmitter-receiver send stored packets from a packet storage memory out to the circuit whenever coupling with a next packet is not indicated. CONSTITUTION: For packet transfer from the central controller 6 to the communication controller CCE, control information, such as a packet-coupling indication bit SYN that indicates whether coupling with the next packet is required, is added to the head of each packet before transfer. Based on this control information, the communication controller CCE controls the writing of packets to the packet storage memory 3 and the reading of packets from the packet storage memory 4 to perform packet synthesis. The processing load on the central controller CPU is thereby reduced, improving the efficiency of packet synthesis.
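The coupling logic can be sketched as a small assembler: packets are buffered while the coupling bit is set and flushed to the circuit as soon as a packet arrives with the bit clear. The (syn, payload) representation and function name are illustrative, not the patent's encoding.

```python
# Sketch of the SYN coupling-indication idea: buffer payloads in a
# stand-in for the packet storage memory while SYN is set; when a
# packet arrives with SYN clear, send the synthesized packet out.

def assemble(packets):
    """packets: iterable of (syn, payload); yields synthesized packets."""
    store = []                      # stands in for the packet storage memory
    for syn, payload in packets:
        store.append(payload)
        if not syn:                 # no coupling with a next packet:
            yield b"".join(store)   # send the stored packets to the circuit
            store = []

out = list(assemble([(1, b"AB"), (1, b"CD"), (0, b"EF"), (0, b"GH")]))
print(out)  # [b'ABCDEF', b'GH']
```

Because the decision is driven entirely by the header bit, the communication controller can perform the synthesis on its own, which is how the scheme offloads the central controller.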

6 citations


Book
01 Jan 1981
TL;DR: Simulation is used to extend earlier stochastic analysis and evaluate more completely the performance of P-CSMA, showing that priority functions indeed reduce packet delay, delay variance, and packet loss for the high-priority class.
Abstract: The need for priority functions in multiaccess computer communication networks arises from applications with restrictions on packet delay. One alternative is to have a dedicated channel supporting very low loads to ensure small delays; another is to provide some priority mechanism by which packets that are time-constrained can take precedence over those which are not. One such mechanism, called Prioritized Carrier Sense Multiple Access (P-CSMA), recently proposed and analyzed by Tobagi (10-12), is studied here using simulation. The objective of this work is to extend the results obtained by the stochastic analysis and to evaluate more completely the performance of P-CSMA. Three variations of the operation of the protocol are investigated, namely the nonpreemptive, semipreemptive, and preemptive disciplines. In particular, we study the effect on average packet delay, packet loss, and the variance of delay of several system parameters that prove to be interesting, such as the number of stations, the number of buffers, and the preemption discipline. It is shown that priority functions indeed reduce packet delay, delay variance, and packet loss for the high-priority class.
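The core effect the study measures, that a priority discipline trades low-priority delay for high-priority delay on a shared channel, can be shown with a minimal single-channel, nonpreemptive sketch. This is not P-CSMA itself: no carrier sensing or collisions are modeled, and the arrival rates and class mix are illustrative.

```python
import heapq
import random

# Two packet classes share one server (the channel). Under 'priority',
# class 0 is served before class 1 whenever both are waiting; otherwise
# service is plain FIFO. Nonpreemptive: a transmission in progress is
# never interrupted.

def simulate(arrivals, service, priority):
    """arrivals: list of (time, cls). Returns mean wait per class."""
    arrivals = sorted(arrivals)
    waits = {0: [], 1: []}
    ready, now, i = [], 0.0, 0
    while i < len(arrivals) or ready:
        if not ready:
            now = max(now, arrivals[i][0])   # idle: jump to next arrival
        while i < len(arrivals) and arrivals[i][0] <= now:
            t, cls = arrivals[i]
            key = (cls, t) if priority else (t,)
            heapq.heappush(ready, (key, t, cls))
            i += 1
        if ready:
            _, t, cls = heapq.heappop(ready)
            waits[cls].append(now - t)       # wait until service start
            now += service                   # constant transmission time
    return {c: sum(w) / len(w) for c, w in waits.items()}

random.seed(2)
t, arrivals = 0.0, []
for _ in range(4000):
    t += random.expovariate(0.8)             # Poisson arrivals, service = 1
    arrivals.append((t, random.randint(0, 1)))

fifo = simulate(arrivals, 1.0, priority=False)
prio = simulate(arrivals, 1.0, priority=True)
print(prio[0] < fifo[0])   # high-priority class waits less under priority
```

As the abstract's conclusion suggests, the high-priority class gains (shorter, less variable waits) at the expense of the low-priority class; in a full P-CSMA model the preemption discipline and collision behavior would shift the numbers but not this ordering.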

1 citation

