
Showing papers on "Packet loss published in 1991"


Journal ArticleDOI
TL;DR: The process of packet clustering in a network with well-regulated input traffic is studied and a strategy for congestion-free communication in packet networks is proposed, which provides guaranteed services per connection with no packet loss and an end-to-end delay which is a constant plus a small bounded jitter term.
Abstract: The process of packet clustering in a network with well-regulated input traffic is studied and a strategy for congestion-free communication in packet networks is proposed. The strategy provides guaranteed services per connection with no packet loss and an end-to-end delay which is a constant plus a small bounded jitter term. It is composed of an admission policy imposed per connection at the source node, and a particular queuing scheme practiced at the switching nodes, which is called stop-and-go queuing. The admission policy requires the packet stream of each connection to possess a certain smoothness property upon arrival at the network. This is equivalent to a peak bandwidth allocation per connection. The queuing scheme eliminates the process of packet clustering and thereby preserves the smoothness property as packets travel inside the network. Implementation is simple.

170 citations
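The admission policy above requires each connection's packet stream to satisfy a frame-based smoothness property, equivalent to a peak-bandwidth allocation. A minimal sketch of such a check (the frame length T, packet budget r, and function name are illustrative assumptions, not the paper's notation):

```python
# Illustrative check of a frame-based smoothness condition: a connection
# may inject at most r packets into any frame of length T, which amounts
# to a peak-bandwidth allocation of r packets per T seconds.

def smooth(arrival_times, T, r):
    """True if no frame [kT, (k+1)T) contains more than r arrivals."""
    counts = {}
    for t in arrival_times:
        k = int(t // T)
        counts[k] = counts.get(k, 0) + 1
    return all(c <= r for c in counts.values())

# Three packets land in frame [1.0, 2.0), so the stream violates r = 2.
print(smooth([0.1, 0.4, 1.2, 1.3, 1.9], T=1.0, r=2))  # False
```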


Journal ArticleDOI
TL;DR: The main conclusion is that both the MMPP model and the fluid flow approximation can provide accurate loss predictions for parameter ranges of practical interest.
Abstract: Three different approximation techniques are examined. The performance models studied differ primarily in the manner in which the superposition of the voice sources (i.e., the arrival process) is modeled. The first approach models the superimposed voice sources as a renewal process, and performance calculations are based only on the first two moments of the renewal process. The second approach is based on modeling the superimposed voice sources as a Markov modulated Poisson process (MMPP). The choice of parameters for the MMPP attempts to capture aspects of the arrival process in a more intuitive manner than previously proposed approaches for determining the MMPP parameters and is shown to compute loss more accurately. Finally, a fluid flow approximation for computing packet loss is evaluated. For all three approaches, a unifying example, the case of multiplexing voice sources over a T1-rate link, is considered. The main conclusion is that both the MMPP model and the fluid flow approximation can provide accurate loss predictions for parameter ranges of practical interest.

132 citations


01 Jan 1991
TL;DR: This thesis examines the problem of congestion control in reservationless packet switched wide area data networks by modeling a conversation as a linear system in a simple control-theoretic approach, which is used to synthesize a robust and provably stable flow control protocol.
Abstract: This thesis examines the problem of congestion control in reservationless packet switched wide area data networks. We define congestion as the loss of utility to a network user due to high traffic loads, and congestion control mechanisms as those that maximize a user's utility at high traffic loads. In this thesis, we study mechanisms that act at two time scales: multiple round trip times and less than one round trip time. At these time scales, congestion control involves the scheduling discipline at the output trunks of switches and routers, and the flow control protocol at the transport layer of the hosts. We initially consider the problem of protecting well-behaved users from congestion caused by ill-behaved users by allocating all users a fair share of the network bandwidth. This motivates the design and analysis of the Fair Queueing resource scheduling discipline. We then study the efficient implementation of the discipline by doing an average-case performance evaluation of several data structures for packet buffering. Since a Fair Queueing server maintains logically separate per-conversation queues and approximates a bitwise round-robin server, it partially decouples the service received by incoming traffic streams. This allows us to deterministically model a single conversation in a network of Fair Queueing servers. Analysis of the model shows that a source can estimate the service rate of the slowest server in the path to its destination (the bottleneck) by sending a pair of back-to-back packets (a packet-pair probe) and measuring the inter-acknowledgement spacing. The probe values can be used to control a user's data sending rate. We formalize this notion by modeling a conversation as a linear system in a simple control-theoretic approach. This is used to synthesize a robust and provably stable flow control protocol. The network state, that is, the service rate of the bottleneck, can be estimated from the series of probe values using an estimator based on elementary fuzzy logic. Our analysis and performance claims are examined by simulation experiments on a set of eight test scenarios. We show that under a wide variety of test conditions, both of our schemes provide users with good performance. Thus, these mechanisms should prove useful in future high-speed networks.

129 citations
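The packet-pair idea in the thesis above admits a one-line estimate: the bottleneck server spaces two back-to-back packets apart, so the gap between their acknowledgements reveals the bottleneck service rate. The function name and sample numbers below are illustrative assumptions:

```python
# Minimal sketch of the packet-pair bottleneck estimate: the ack spacing
# of two back-to-back probe packets equals one packet service time at the
# slowest server on the path, so rate = packet size / ack gap.

def estimate_bottleneck_rate(packet_size_bits, inter_ack_gap_s):
    """Estimate the bottleneck service rate (bits/s) from ack spacing."""
    return packet_size_bits / inter_ack_gap_s

# 1000-byte packets whose acks arrive 8 ms apart imply a ~1 Mb/s bottleneck.
rate = estimate_bottleneck_rate(8000, 0.008)
print(rate)  # 1000000.0
```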


Proceedings ArticleDOI
02 Dec 1991
TL;DR: A conflict-free protocol for packet-switched wavelength division multiaccess networks with the use of a control channel, each station in the network can obtain packet backlog information of all the other stations, and so packet transmission can be scheduled to avoid destination conflicts.
Abstract: A conflict-free protocol for packet-switched wavelength division multiaccess networks is proposed. With the use of a control channel, each station in the network can obtain packet backlog information of all the other stations, and so packet transmission can be scheduled to avoid destination conflicts. A very fast scheduling algorithm is proposed. Simulation results show that a maximum throughput of 1 can be achieved, as compared to a maximum of 0.63 for protocols without transmission scheduling. This high throughput performance is obtained because the transmission, reception and processing of backlog information and the transmission and reception of data packets are all done simultaneously in a pipeline operation, and all destination conflicts are avoided in every slot through scheduling. The packet delay is calculated to be only one slot larger (due to scheduling) than that of protocols without transmission scheduling at low traffic conditions.

80 citations


Proceedings ArticleDOI
02 Dec 1991
TL;DR: A buffer management policy, called drop on demand, is proposed which yields a greater switch throughput and lower packet loss probability than previously proposed policies for all input traffic rates.
Abstract: An imbalanced traffic model is presented, and the performance of completely shared buffering and output queuing under imbalanced traffic is studied. It is found that shared buffering does not perform as well as output queuing under this traffic condition. A buffer management policy, called drop on demand, is proposed which yields a greater switch throughput and lower packet loss probability than previously proposed policies for all input traffic rates. The optimal buffer management policy is studied for a class of dynamic allocation schemes with packet purging action. It is found that there exists an optimal stationary policy which can be obtained by solving a linear programming problem.

80 citations


Proceedings ArticleDOI
01 Nov 1991
TL;DR: By integrating network-control into the image data compressional algorithm, the strong interactions between the coder and the network can be exploited and the available network bandwidth can be used best.
Abstract: The advantages of packet video, constant image quality, service integration and statistical multiplexing, are overshadowed by packet loss, delay and jitter. By integrating network control into the image data compression algorithm, the strong interactions between the coder and the network can be exploited and the available network bandwidth can be used best. In order to enable video transmission over today's networks without reservation or priorities and in the presence of high packet loss rates, congestion avoidance techniques need to be employed. This is achieved through rate and flow control, where feedback from the network is used to adapt coding parameters and vary the output rate. From the coding point of view, the network is seen as a data buffer. Analogously to constant bit rate applications, where a controller measures buffer fullness, we attempt to avoid network congestion (i.e., buffer overflow) by monitoring the network and adapting the coding parameters in real time.

76 citations


Book ChapterDOI
18 Nov 1991
TL;DR: The Real-time Channel Administration Protocol provides control and administration services for the Tenet real-time protocol suite, a connection-oriented suite of network and transport layer protocols for realtime communication.
Abstract: The Real-time Channel Administration Protocol (RCAP) provides control and administration services for the Tenet real-time protocol suite, a connection-oriented suite of network and transport layer protocols for realtime communication. RCAP performs per-channel reservation of network resources based on worst-case analysis to provide hard guarantees on delay, jitter, and packet loss bounds. It uses a hierarchical approach to provide these guarantees across a heterogeneous internetwork environment.

70 citations


Journal ArticleDOI
TL;DR: In this article, a dynamic rate control mechanism for voice and video traffic is proposed to achieve better statistical gain for voice traffic and to relieve congestion in fast packet networks, where the feedback delay for the source node to obtain the network congestion information is represented in the model.
Abstract: To achieve better statistical gain for voice and video traffic and to relieve congestion in fast packet networks, a dynamic rate control mechanism is proposed. An analytical model is developed to evaluate the performance of this control mechanism for voice traffic. The feedback delay for the source node to obtain the network congestion information is represented in the model. The study indicates that significant improvement in statistical gain can be realized for smaller capacity links (e.g., links that can accommodate less than 24 voice calls) with a reasonable feedback time (about 100 ms). The tradeoff for increasing the statistical gain is temporary degradation of voice quality to a lower rate. It is shown that whether the feedback delay is exponentially distributed or constant does not significantly affect performance in terms of fractional packet loss and average received coding rate. It is also shown that using the number of calls in talkspurt or the packet queue length as measures of congestion provides comparable performance.

67 citations
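The control loop described above can be illustrated with a toy sketch (the rate ladder and function are assumptions for illustration, not the paper's model): the variable-rate coder steps down when the network reports congestion and steps back up when it clears.

```python
# Toy feedback-driven rate adaptation: step the coder down one rate on a
# congestion report, up one rate otherwise, clamped to the rate ladder.

RATES_KBPS = [16, 24, 32]  # hypothetical coder rates, low to high

def next_rate(current_kbps, congested):
    i = RATES_KBPS.index(current_kbps)
    i = max(0, i - 1) if congested else min(len(RATES_KBPS) - 1, i + 1)
    return RATES_KBPS[i]

r = 32
for congested in (True, True, False):  # two congestion reports, then relief
    r = next_rate(r, congested)
print(r)  # 24: dropped 32 -> 24 -> 16, then recovered to 24
```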



Patent
18 Apr 1991
TL;DR: In this paper, the authors proposed a packet transmission protocol with two windows for controlling the volume of information in the network, i.e., the number of blocks, and the number of packets transmitted through the network.
Abstract: A packet transmission protocol, which operates in a full-duplex mode in a system, which includes a transmitter, a receiver, and a communications network having a channel that logically ties the transmitter and the receiver together through the network, is disclosed. The receiver regularly sends a control packet to the transmitter. The control packet includes a plurality of data fields, which are useful in describing the state of the receiver to the transmitter. The transmitter receives the receiver's control packet and analyzes the data. If it finds that a particular block of packets had been received with an error (or not received at all), then the transmitter retransmits the block. The protocol includes two windows for controlling the volume of information, e.g. number of blocks, in the network. The first window, called the network window, is used to limit the data in the network so that network buffer resources can be sized economically and yet in a manner such that there will not be an excessive loss in the number of packets transmitted through the network. The second window, called the receiver flow control window, is typically larger than the first window and is used to assure that packets are not dropped, or lost, at the receiver. By having the second window larger than the first window, throughput can be increased while still meeting a commitment to the network that limits the number of packets in the network to a value consistent with economical buffer sizing. Typically, the first window is set to the value of the bandwidth delay product of the channel and the second window is set to a value at least twice that of the first window. The functions related to receiver flow control and network congestion control can be decoupled.

57 citations
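The sizing rule stated above (network window equal to the channel's bandwidth-delay product, receiver window at least twice that) reduces to simple arithmetic. A sketch with assumed, illustrative values:

```python
# Illustrative window sizing: the network window equals the bandwidth-delay
# product in packets; the receiver flow control window is set to double it.

def window_sizes(bandwidth_bps, rtt_s, packet_bits):
    network_window = (bandwidth_bps * rtt_s) / packet_bits  # BDP in packets
    receiver_window = 2 * network_window                    # at least twice
    return network_window, receiver_window

# A 1.5 Mb/s channel with 100 ms round-trip delay and 12000-bit packets.
net_w, rcv_w = window_sizes(1_500_000, 0.1, 12_000)
print(net_w, rcv_w)  # 12.5 25.0
```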


Patent
05 Feb 1991
TL;DR: In this paper, a system for structuring information outputted at a variable bit rate from a video signal encoder and for organizing the information into information units or packets is presented.
Abstract: A system for structuring information outputted at a variable bit rate from a video signal encoder and for organizing the information into information units or packets. The information packets are compatible for transmission and processing by packet transmission networks such as asynchronous transfer mode (ATM), capable of sending transmission data streams at variable speed. In a preferred embodiment, the system uses an encoder that comprises video units. Each video unit contains information classified into three distinct classes. Each video unit is associated with a video packet comprising a header and a body. Each of the three distinct classes of information associated with a video unit uses a different technique for providing protection of that information against transmission errors and/or information packet loss.


Proceedings ArticleDOI
07 Apr 1991
TL;DR: The proposed congestion control scheme copes with traffic surges that are shorter than the network round-trip delay, and is targeted towards networks that carry aggregated traffic, and can be applied to ATM-based networks.
Abstract: Issues in congestion control are discussed, and a novel congestion control scheme for high-speed networks is described. The scheme is based on periodic transmission of sample time-stamped packets through the network. Upon reception, the packet delays are calculated, averaged, and used to determine the state of the network. The information on the state of the network is then used to drive the network admission control. The major advantage of the proposed scheme over conventional congestion control techniques is that it copes with traffic surges that are shorter than the network round-trip delay. This is achieved by controlling traffic admission with a continuous estimate of the network state. The scheme is targeted towards networks that carry aggregated traffic, and can be applied to ATM-based networks. >

Proceedings ArticleDOI
07 Apr 1991
TL;DR: A new routing algorithm is proposed for this architecture that facilitates the multicasting of packets and shows that a small increase in the expansion factor is sufficient to maintain a multicast packet loss probability that is equivalent to that obtained in the non-multicast case.
Abstract: A three stage packet switch architecture that uses a two stage, self-routing memoryless interconnection network to interconnect a third stage of smaller packet switches has been proposed by K. Y. Eng et al. (1988) and M. J. Karol and Chih-Lin I. (1989). The authors propose a new routing algorithm for this architecture that facilitates the multicasting of packets. Packets are duplicated as near to the switch's outputs as possible. The authors also analyze the switch's multicast performance, finding an overbound on the packet loss probability. The resulting overbound shows that a small increase in the expansion factor is sufficient to maintain a multicast packet loss probability that is equivalent to that obtained in the non-multicast case.

Patent
24 Apr 1991
TL;DR: In this article, the authors propose a scheme to reduce the loss of message packets that are transmitted in the course of virtual connections in an asynchronous transfer mode and which comprise a packet header identifying the respective virtual connection, the message packets being respectively augmented by a continuous auxiliary identifier and, after multiplication, being separately transmitted via redundant switching matrices of packet switching equipment.
Abstract: In order to reduce the loss of message packets that are transmitted in the course of virtual connections in an asynchronous transfer mode and which comprise a packet header identifying the respective virtual connection, the message packets being respectively augmented by a continuous auxiliary identifier and, after multiplication, being separately transmitted via redundant switching matrices of a packet switching equipment, it is provided that, with reference to the auxiliary identifier, only that message packet transmitted without fault as a first of the multiplied message packets and having an auxiliary identifier that is the next one following the most recently-transmitted message packet is forwarded. In addition, that switching matrix by way of which the transmission of message packets respectively occurs most slowly is identified and message packets that are transmitted via the respective slowest switching matrix as the first of the multiplied message packets are forwarded.

Proceedings ArticleDOI
R.S. Dighe1, C.J. May1, G. Ramamurthy1
07 Apr 1991
TL;DR: The results demonstrate that it is possible to meet the differing needs of traffic types needing datagram transport and those needing virtual-circuit transport by using an intelligent scheduling strategy and a meaningful packet transport protocol.
Abstract: The authors examine the networking environment of the future, pose the technical questions that need to be addressed to provide service assurance, and propose a set of strategies for congestion avoidance. Congestion avoidance is recommended instead of congestion control, which has very limited value in a high-speed network, where the latency in detecting congestion and reacting to it may make the control ineffective. The proposed solution consists of enforcing rate-control at the network edges, bandwidth reservation for continuous bit oriented (CBO) traffic using a unique two-queue strategy, and a scheduler-based packet cross-connect system. Simulation results of a Q+ model of an access node are presented that quantify the performance of this solution and compare it to existing congestion control mechanisms. The results demonstrate that it is possible to meet the differing needs of traffic types needing datagram transport and those needing virtual-circuit transport by using an intelligent scheduling strategy and a meaningful packet transport protocol.

Journal ArticleDOI
01 Oct 1991
TL;DR: A novel congestion control scheme for high-speed networks that copes with traffic surges that are shorter than the network round-trip delay and is targeted toward networks that carry aggregated traffic, and can be applied to ATM-based networks.
Abstract: In this paper, we describe a novel congestion control scheme for high-speed networks. The scheme is based on periodic transmission through the network of time-stamped sampling packets that sense the congestion status of the network. Upon reception, the sampling packet delays are calculated, averaged, and used to determine the state of the network. The information on the state of the network is then used to drive the network adaptive admission control. The major advantage of the proposed scheme over conventional congestion control techniques is that it copes with traffic surges that are shorter than the network round-trip delay. This is achieved by controlling traffic admission with continuous estimate of the network state. The scheme is targeted toward networks that carry aggregated traffic, and can be applied to ATM-based networks.
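The scheme above derives a congestion signal from averaged delays of time-stamped sampling packets. A minimal sketch of that idea (the window size, threshold, and class interface are assumptions of this illustration, not the paper's design):

```python
# Sketch of delay-based congestion sensing: keep a sliding window of
# sampling-packet delays, average them, and compare against a threshold
# to decide whether admission control should throttle traffic.

from collections import deque

class DelayMonitor:
    def __init__(self, window=8, threshold_s=0.05):
        self.samples = deque(maxlen=window)  # recent sampling-packet delays
        self.threshold_s = threshold_s       # assumed congestion threshold

    def record(self, send_ts, recv_ts):
        self.samples.append(recv_ts - send_ts)

    def congested(self):
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold_s

mon = DelayMonitor()
for recv_ts in (0.01, 0.02, 0.09, 0.12):  # delays trending upward
    mon.record(0.0, recv_ts)
print(mon.congested())  # True: average delay 0.06 s exceeds 0.05 s
```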

Proceedings ArticleDOI
05 Mar 1991
TL;DR: The exact distribution of the number of lost packets in a block of given size is obtained and compared with the distribution under the assumption of independent and identical packet loss probability (the i.i.d. distribution).
Abstract: The exact distribution of the number of lost packets in a block of given size is obtained and compared with the distribution under the assumption of independent and identical packet loss probability (the i.i.d. distribution). Numerical examples show that the exact distribution may be worse for applications such as forward error correction.
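The i.i.d. baseline referred to above is simple to state: if each packet in a block of n is lost independently with probability p, the number of losses is Binomial(n, p). The block size and loss rate below are illustrative:

```python
# The i.i.d. reference distribution: number of losses in a block of n
# packets, each lost independently with probability p, is Binomial(n, p).

from math import comb

def iid_loss_pmf(n, p, k):
    """P(exactly k of n packets lost) under independent, identical loss."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability that a 10-packet FEC block sees no loss at a 1% loss rate.
print(round(iid_loss_pmf(10, 0.01, 0), 3))  # 0.904
```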

Proceedings ArticleDOI
23 Jun 1991
TL;DR: An analytic model is developed to understand the effect of oversubscription to links on the probability of packet loss at the reassembly buffer and the effective throughput as seen by the source.
Abstract: The effect of a finite reassembly buffer on the performance of deflection routing under a specific nonuniform traffic model is studied. An analytic model is developed to understand the effect of oversubscription to links on the probability of packet loss at the reassembly buffer and the effective throughput as seen by the source. The effect of adding output buffers at the nodes on these performance measures is also investigated. The performance of deflection routing in networks of different sizes (local, metropolitan and wide area networks) is studied.

Proceedings ArticleDOI
07 Apr 1991
TL;DR: Numerical results indicate that the proposed switch exhibits very good delay-throughput performance over a wide range of input traffic.
Abstract: A space-division, nonblocking packet switch with data concentration and output buffering is proposed. The performance of the switch is evaluated with respect to packet loss probability, the first and second moments of the equilibrium queue length and waiting time, throughput, and buffer overflow probability. Numerical results indicate that the proposed switch exhibits very good delay-throughput performance over a wide range of input traffic. The proposed switch compares favorably with the knockout switch of Y. Yeh et al. (1987) in terms of implementation complexity and packet concentration performance, and with the sunshine switch of J. Giacopelli et al. (1989) and the modular architecture proposed by T. Lee (1990) in terms of fewer basic building elements used to attain the same degree of output buffering.

Proceedings ArticleDOI
02 Dec 1991
TL;DR: A comparison is made between the performance of two different network architectures to implement multichannel metropolitan area networks (M-MANs), and the behavior of both networks under given link failure occurrences is noted.
Abstract: A comparison is made between the performance of two different network architectures to implement multichannel metropolitan area networks (M-MANs). Both networks, the bidirectional Manhattan street network (BMSN) and the shuffle net (SN), are assumed with connectivity degree four. Performance under investigation includes throughput, average delay and packet loss. In the authors' model, users' attachments to network nodes are equipped with input and output buffers, and performance depends on buffer sizes. The network reliability is taken into account and the behavior of both networks under given link failure occurrences is noted. The comparison is based on simulation results. Pros and cons of the two M-MAN architectures are pointed out in the discussion of the results.

Journal ArticleDOI
TL;DR: The basic configuration of the ATM based video transmission system and its packet-loss protection schemes are discussed, and the DCT based layered coding scheme with packet priority classification is proposed as an effective packet-loss protection scheme.
Abstract: This paper discusses packet loss and its protection in an asynchronous transfer mode (ATM) based video distribution system. Packet losses in ATM based networks have such a great impact on the design of coding algorithms and network architectures that they should be exhaustively discussed and resolved. In this paper, the basic configuration of the ATM based video transmission system and its packet-loss protection schemes are first discussed. The DCT based layered coding scheme with packet priority classification is proposed as an effective packet-loss protection scheme. Burstiness characteristics of the broadcast video sources are evaluated and modeled to clarify statistical multiplexing performance and packet-loss properties. The quality degradation caused by the packet losses is also evaluated by the SNR, and the superior performance of the proposed layered coding scheme is verified.

Proceedings ArticleDOI
N. Yin1, M.G. Hluchyj1
07 Apr 1991
TL;DR: The authors show that whether the feedback delay is exponentially distributed or constant does not significantly affect performance in terms of fractional packet loss and average received coding rate, and using the number of calls in talkspurt or the packet queue length as measures of congestion provides comparable performance.
Abstract: To achieve better statistical gain for voice and video traffic and to relieve congestion in integrated networks, a dynamic rate control mechanism is proposed. This mechanism, using a variable rate coder, adjusts the source coding rate based on network feedback information. An analytical model is developed to evaluate the performance of the mechanism for voice traffic. The feedback delay for the source node to obtain the network congestion information is represented in the model. Significant improvement in statistical gain can be realized for smaller capacity links (e.g., links that can accommodate less than 24 calls) with a reasonable feedback time (about 100 ms); but this increase causes a temporary degradation in voice quality to a lower rate. The authors show that whether the feedback delay is exponentially distributed or constant does not significantly affect performance in terms of fractional packet loss and average received coding rate. Using the number of calls in talkspurt or the packet queue length as measures of congestion provides comparable performance.

Proceedings ArticleDOI
07 Apr 1991
TL;DR: A self-routing space-division fast packet switch architecture is proposed which achieves output queuing with a reduced number of internal paths (O(N).
Abstract: A self-routing space-division fast packet switch architecture is proposed which achieves output queuing with a reduced number of internal paths (O(N)). The switch architecture is a multi-level binary tree in which each branch constitutes a group of paths that are shared by all the packets destined to a subset of output ports. The reduction in the number of internal paths is obtained by interleaving the packet distribution and packet concentration functions throughout the switch fabric. Packet loss may occur at each level of the tree and is dependent on the degree of concentration exercised at that level. Owing to the binary tree structure of the switching fabric, a simple mathematical analysis is performed in order to determine the concentration parameters appropriate for each level. Several implementation architectures based on sorting networks are described.

Proceedings ArticleDOI
07 Apr 1991
TL;DR: Analytic expressions and numerical techniques are presented for computing both time-based measures, such as the distribution of periods during which all arriving packets are lost due to excessive delay, and packet-based measures, such as the distribution of the number of consecutively lost packets and the number of successful packets between such periods of loss.
Abstract: The stochastic properties of time-out loss periods are characterized for infinite queues, that is, uninterrupted intervals during which the virtual wait is at or above some fixed threshold. Analytic expressions and numerical techniques are presented for computing both time-based measures, such as the distribution of periods during which all arriving packets are lost due to excessive delay, and packet-based measures, such as the distribution of the number of consecutively lost packets and the number of successful packets between such periods of loss. Both continuous and discrete-time systems are examined. It is shown that the assumption of random packet loss severely underestimates the number of consecutively lost packets.
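The claim above, that random loss underestimates consecutive losses, has a simple arithmetic side: under independent loss with probability p, a consecutive-loss run is geometric with mean 1/(1-p), barely more than one packet for small p, whereas time-out loss periods arrive in long bursts. The loss rate below is an assumed example value:

```python
# Mean consecutive-loss run length under the independent-loss assumption:
# given a loss occurs, the run is geometric with mean 1 / (1 - p).

def mean_run_random(p):
    """Mean consecutive-loss run length under independent loss."""
    return 1 / (1 - p)

# At a 1% loss rate, independent loss predicts runs of about one packet.
print(round(mean_run_random(0.01), 3))  # 1.01
```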

Journal ArticleDOI
TL;DR: Numerical results from a tandem queuing network model show that for a network with very-high-speed/low-error-rate channels, an edge-to-edge scheme gives a smaller packet transmission delay than a link-by-link scheme for both go-back-N and selective-repeat retransmission procedures, while keeping the packet loss probability sufficiently small.
Abstract: The authors investigate the effects of protocol processing overhead on the performance of error recovery schemes. The focus is on the edge-to-edge error recovery scheme, in which retransmissions of erred packets only take place between source and destination nodes. An approximation is obtained for the Laplace transform for the distribution of the end-to-end packet transfer delay, considering the processing time required for error recovery. The performance of the link-by-link error recovery scheme, in which retransmissions take place between adjacent nodes, is evaluated and compared to the performance of the edge-to-edge scheme. Numerical results from a tandem queuing network model show that for a network with very-high-speed/low-error-rate channels, an edge-to-edge scheme gives a smaller packet transmission delay than a link-by-link scheme for both go-back-N and selective-repeat retransmission procedures, while keeping the packet loss probability sufficiently small.

01 Jan 1991
TL;DR: This work addresses the sometimes complex problem of unauthorized access to network resources by choosing internal data structures that simplify per-packet lookup and then devoting 90 per cent of the code to implementing commands that maintain these tables in a manner that is easy for system administrators.
Abstract: By using existing information in packet headers, routers can provide system administrators a facility to manage network connections between computers. Host address, network number, interface, direction, protocol, and port number are parameters that may be used to implement an access control policy. We present experiences developing the packet filtering facility in the NetBlazer dial-up IP router. We address the sometimes conflicting design goals of efficient performance and ease of administration by choosing internal data structures that simplify per-packet lookup and then devoting 90 per cent of our code to implementing commands that maintain these tables in a manner that is easy for system administrators.

Introduction: Wide area networks provide remote sites convenient access to local networks. With this increased convenience comes the often complex problem of unauthorized access to network resources. Packet filtering in an IP router can be used to manage this complexity by controlling which hosts and which services may be accessed from remote locations. In a typical application, host address filters allow remote stations to log in to a host that is known to have carefully administered usernames and passwords but prevent access to hosts that are less secure. Protocol filters allow logins, ftp and mail, but deny remote access to X11 or NFS.

from host xx.lcs.mit.edu tcp port 3 to host score.stanford.edu tcp port telnet reject;
between host sri-nic.arpa and any accept;
Figure 1: Examples of filter specifications

All IP routers do packet filtering to reduce network load. Broadcast packets, packets for which the router does not have a route, packets with bad IP headers, and packets that have been bouncing around over too many gateways (packets with TTL = 0) are not forwarded [2]. Routers from several manufacturers can use IP source and destination address to provide administrative control of which hosts or networks may communicate with each other. Some also use UDP or TCP port or ICMP message type to control what applications network connections are used for [4, 14, 15].
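The first-match filtering idea in the rules quoted above can be sketched in a few lines. The rule dictionary format, field names, and default action here are illustrative assumptions, not the NetBlazer syntax:

```python
# Toy first-match packet filter: a rule matches when every field it
# specifies equals the packet's field (missing fields act as wildcards);
# the first matching rule's action wins, else a default applies.

def match(rule, pkt):
    for field in ("src", "dst", "proto", "port"):
        if rule.get(field, "any") not in ("any", pkt[field]):
            return False
    return True

def filter_packet(rules, pkt, default="accept"):
    for rule in rules:  # first matching rule wins
        if match(rule, pkt):
            return rule["action"]
    return default

rules = [
    {"dst": "score.stanford.edu", "proto": "tcp", "port": "telnet",
     "action": "reject"},
    {"src": "sri-nic.arpa", "action": "accept"},
]
pkt = {"src": "xx.lcs.mit.edu", "dst": "score.stanford.edu",
       "proto": "tcp", "port": "telnet"}
print(filter_packet(rules, pkt))  # reject
```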

Journal ArticleDOI
01 Jan 1991
TL;DR: Performance trade-offs in buffer architecture design for a space-division packet switching system are studied, and choosing m = 3 or 4 is sufficient to exploit the maximum utilization of a non-blocking switch fabric.
Abstract: Performance trade-offs in buffer architecture design for a space-division packet switching system are studied. As described in Figure 1, the system is constructed by a non-blocking switch fabric and input/output buffers. The capacity of the non-blocking switch fabric is defined by the maximum number of packets, denoted by m, which can be simultaneously routed from multiple inputs to each output. The buffer size at each input is considered to be finite, equal to K. The emphasis here is placed on the input packet loss probability for systems constructed with different ms and Ks. From the performance point of view, we conclude: (a) choosing m = 3 or 4 is sufficient to exploit the maximum utilization of a non-blocking switch fabric; (b) introducing input buffers of moderate size K significantly reduces the packet loss probability.

09 Dec 1991
TL;DR: Results are presented showing that PRMA (packet reservation multiple access) is a suitable option for the integrated voice and data services.
Abstract: Examines several multiple access protocols to find a suitable packet transmission scheme for integrated voice and data services. The protocols which are examined are based on a TDMA frame and slot structure. Results are presented showing that PRMA (packet reservation multiple access) is a suitable option for integrated voice and data services. This protocol provides lower speech packet loss and also lower access delay for the data traffic. For example, at 1 Mb/s, 40 voice connections can be supported with 120 data stations for a speech packet loss of less than 1%. The HPS-DB is also an attractive option for integrated voice and data services. Its main attractive feature is that different combinations of protocols can be used. For example, using the HPS-DB protocol, PRMA or TDMA can be used for speech packets and TM-BCMA/CD can be used for data traffic, which provides the lowest mean end-to-end delay.

Book ChapterDOI
22 Apr 1991
TL;DR: The results show that if the authors measure message latency against throughput then at some stage latency increases rapidly with little increase in throughput, and this point is independent of message length.
Abstract: This paper presents a simulation facility developed for the ‘mad-postman’ packet routing communication network. Unlike previously published results for this network, this facility and the results generated provide a complete simulation of the network, the network-processor interface and the application layer. The results are presented in terms of average message latency against network throughput for applications that generate randomly addressed packets at random intervals. The results show that if we measure message latency against throughput, then at some stage latency increases rapidly with little increase in throughput. This point is independent of message length.