
Showing papers on "Queue management system" published in 1993


Journal ArticleDOI
TL;DR: In a system with a single buffer per queue, an allocation policy is obtained that maximizes the throughput and minimizes the delay when the arrival and service statistics of different queues are identical.
Abstract: Consider N parallel queues competing for the attention of a single server. At each time slot each queue may be connected to the server or not, depending on the value of a binary random variable, the connectivity variable. Allocation at each slot is based on the connectivity information and on the lengths of the connected queues only. At the end of each slot, service may be completed with a given fixed probability. Such a queueing model is appropriate for some communication networks with changing topology. In the case of infinite buffers, necessary and sufficient conditions are obtained for stabilizability of the system in terms of the different system parameters. The allocation policy that serves the longest connected queue stabilizes the system when the stabilizability conditions hold. The same policy minimizes the delay for the special case of symmetric queues. In a system with a single buffer per queue, an allocation policy is obtained that maximizes the throughput and minimizes the delay when the arrival and service statistics of different queues are identical.
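
A minimal slotted-simulation sketch of the longest-connected-queue (LCQ) policy analyzed above. The arrival, connectivity, and service-completion probabilities below are illustrative assumptions, not values from the paper.

```python
import random

def simulate_lcq(num_queues=4, slots=100_000, p_arrival=0.1,
                 p_connected=0.7, p_service=0.8, seed=1):
    """Slotted simulation: allocate the server to the longest connected queue."""
    random.seed(seed)
    queues = [0] * num_queues
    served = 0
    for _ in range(slots):
        for i in range(num_queues):                 # Bernoulli arrivals to each queue
            if random.random() < p_arrival:
                queues[i] += 1
        # queues that are non-empty and connected in this slot
        connected = [i for i in range(num_queues)
                     if queues[i] > 0 and random.random() < p_connected]
        if connected:
            target = max(connected, key=lambda i: queues[i])   # LCQ allocation
            if random.random() < p_service:         # service completes with prob. p_service
                queues[target] -= 1
                served += 1
    return served, sum(queues) / num_queues

completed, mean_backlog = simulate_lcq()
print(f"served={completed}, mean residual backlog per queue={mean_backlog:.2f}")
```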

783 citations


Book ChapterDOI
David M. Lucantoni1
10 May 1993
TL;DR: An overview of recent results related to the single server queue with general independent and identically distributed service times and a batch Markovian arrival process is presented, including stationary and transient distributions of the queue length and the waiting time.
Abstract: We present an overview of recent results related to the single server queue with general independent and identically distributed service times and a batch Markovian arrival process (BMAP). The BMAP encompasses a wide range of arrival processes and yet, mathematically, the BMAP/G/1 model is a relatively simple matrix generalization of the M/G/1 queue. Stationary and transient distributions for the queue length and waiting time distributions are presented. We discuss numerical algorithms for computing these quantities, which exploit both matrix analytic results and numerical transform inversion. Two-dimensional transform inversion is used for the transient results.

222 citations


Patent
02 Mar 1993
TL;DR: In this paper, a multi-channel adapter unit (MAU) is proposed that manages the scheduling and processing of data transfers defined by descriptors stored, without explicit linkages, in non-contiguous memory locations.
Abstract: A processor stores descriptors without explicit linkages, in non-contiguous memory locations, and sequentially hands them off to an adapter which manages scheduling and processing of data transfers defined by the descriptors. Each descriptor is handed off in a request signalling process in which the processor polls the availability of a request register in the adapter, and writes the address of a respective descriptor to that register when it is available. The adapter then schedules processing of the descriptor whose address is in the request register. The adapter manages a "Channel Descriptor Table" (CDT), which defines the order of processing of descriptors designated by the requests. In effect, the CDT defines a linked list queue into which the adapter installs descriptors, in the sequence of receipt of respective requests. Using the CDT information, the adapter retrieves successively queued descriptors and controls performance of operations (data transfer or other) defined by them. Accordingly, descriptors in the queue are retrieved and respectively defined operations are performed, in the order of receipt of respective requests; as if the descriptors had been stored by the processor with explicit linking and chaining associations and handed off to the adapter as an explicitly chained set of descriptors. In a preferred embodiment, a "multichannel adapter unit" (MAU), directing data transfers relative to multiple channels, contains one request register for all channels and a separate CDT and "request address port" dedicated to each channel. Requests accompanied by addresses designating these ports are "funneled" through the request register to CDT queues of respective channels. The processor can effectively remove a descriptor from any CDT queue, without potentially compromising handling of data transfers defined by other descriptors in the queue, by writing a "skip code" to the descriptor. Upon retrieving a descriptor with a skip code, the adapter automatically skips the operation defined by that descriptor and chains to a next descriptor (if the queue defined by the CDT is not empty).
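
The per-channel descriptor queue and the skip-code mechanism described in this patent can be sketched roughly as follows; the class names, the SKIP constant, and the string results are hypothetical stand-ins for the hardware registers and operations.

```python
from collections import deque

SKIP = "SKIP"          # hypothetical stand-in for the patent's "skip code"

class Descriptor:
    def __init__(self, op, data):
        self.op = op           # operation defined by the descriptor (e.g. a transfer)
        self.data = data
        self.code = None       # the processor may later write SKIP here

class Channel:
    """Per-channel queue standing in for a Channel Descriptor Table (CDT)."""
    def __init__(self):
        self.cdt = deque()

    def request(self, descriptor):
        # the adapter installs descriptors in the order the requests arrive
        self.cdt.append(descriptor)

    def process_next(self):
        while self.cdt:
            d = self.cdt.popleft()
            if d.code == SKIP:          # skip a cancelled descriptor and chain on
                continue
            return f"performed {d.op} on {d.data}"
        return None

ch = Channel()
a, b = Descriptor("write", "buf0"), Descriptor("read", "buf1")
ch.request(a)
ch.request(b)
a.code = SKIP                  # processor "removes" a descriptor by writing a skip code
print(ch.process_next())       # -> performed read on buf1
```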

134 citations


Journal ArticleDOI
TL;DR: It is illustrated that front dropping not only improves the delay performance on an internodal link, but also improves the overall loss performance for time constrained traffic such as packet voice.
Abstract: When congestion occurs in a packet queuing system, packets can be dropped from the rear or the front of the queue. It is demonstrated that the probability of a packet being dropped is the same in systems with rear and front packet dropping. It is shown that the probability of a packet being delayed longer than a given value in a system with front dropping is less than or equal to that in a system with rear dropping. It is further illustrated that front dropping not only improves the delay performance on an internodal link, but also improves the overall loss performance for time constrained traffic such as packet voice.
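
A small slotted-queue sketch contrasting rear (tail) dropping with front dropping on a finite buffer, in the spirit of the comparison above; the buffer size, arrival probability, and service probability are illustrative.

```python
import random
from collections import deque

def simulate(drop_front, capacity=10, slots=200_000,
             p_arrival=0.9, p_service=0.8, seed=2):
    """Slotted single-server queue; returns (loss fraction, mean delay of served packets)."""
    random.seed(seed)
    q, dropped, arrivals, total_delay, served = deque(), 0, 0, 0, 0
    for t in range(slots):
        if random.random() < p_arrival:
            arrivals += 1
            if len(q) < capacity:
                q.append(t)                    # remember the arrival slot
            elif drop_front:
                q.popleft()                    # drop the oldest packet ...
                q.append(t)                    # ... and admit the new one
                dropped += 1
            else:
                dropped += 1                   # rear drop: reject the new packet
        if q and random.random() < p_service:
            total_delay += t - q.popleft()
            served += 1
    return dropped / arrivals, total_delay / max(served, 1)

for front in (False, True):
    loss, delay = simulate(front)
    print(f"front_drop={front}: loss={loss:.3f}, mean delay={delay:.2f} slots")
```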

46 citations


Patent
15 Jun 1993
TL;DR: In this article, an adjustable invalidation queue for use in the cache memories is proposed. But the queue is flushed down to the lower limit when the contents of the queue attain the upper limit and write requests on the bus are RETRYed.
Abstract: In a time-shared bus computer system with processors having cache memories, an adjustable invalidation queue for use in the cache memories. The invalidation queue has adjustable upper and lower limit positions that define when the queue is logically full and logically empty, respectively. The queue is flushed down to the lower limit when the contents of the queue attain the upper limit. During the queue flushing operation, WRITE requests on the bus are RETRYed. The computer maintenance system sets the upper and lower limits at system initialization time to optimize system performance under maximum bus traffic conditions.
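
A software sketch of the watermark behaviour described above, assuming a simple queue with an upper and a lower limit; the class and method names are invented, and the RETRY of bus WRITEs is reduced to a flag.

```python
from collections import deque

class InvalidationQueue:
    """Queue flushed down to a lower limit once its contents reach the upper limit."""
    def __init__(self, upper, lower):
        assert 0 <= lower < upper
        self.upper, self.lower = upper, lower
        self.entries = deque()
        self.retry_writes = False      # while True, bus WRITE requests would be RETRYed

    def enqueue(self, address):
        self.entries.append(address)
        if len(self.entries) >= self.upper:     # logically full: start flushing
            self.retry_writes = True
            while len(self.entries) > self.lower:
                self.apply_invalidate(self.entries.popleft())
            self.retry_writes = False

    def apply_invalidate(self, address):
        pass    # in hardware this would invalidate the matching cache line

q = InvalidationQueue(upper=8, lower=2)
for addr in range(10):
    q.enqueue(addr)
print(len(q.entries))   # 4: flushed to the lower limit (2) at the 8th entry, then 2 more arrived
```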

42 citations


Patent
21 Jan 1993
TL;DR: In this article, a queue system comprising a plurality of queues, each defined by a set of criteria, is described; the queue system comprises a plurality of header registers, where each header register defines a queue in the queue system, and a plurality of task registers, where each task register can be associated with any queue in the queue system.
Abstract: A plurality of queues where each queue is defined by a set of criteria, the queue system comprises a plurality of header registers where each header register defines a queue in the queue system and a plurality of task registers where each task register can be associated with each queue in the queue system. Each header register has a unique address and contains a previous field and a next field. Each task register has a unique address and contains a previous field and a next field. Each previous field and said next field stores the address of another register in a given queue such that each queue is formed in a double link structure. Control means is provided for dynamically assigning task registers to queues by controlling the addresses stored in the previous and next fields in each header and task registers such that each of said task registers is always assigned to a queue in the queue system.
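
A software analogue of the doubly linked header/task-register structure, with Python object references standing in for register addresses; the helper names (enqueue, move) are invented for illustration.

```python
class Node:
    """Common base: both headers and tasks carry previous/next fields."""
    def __init__(self):
        self.prev = self.next = self

class Header(Node):
    """One per queue; the queue is empty when the header points to itself."""
    pass

class Task(Node):
    def __init__(self, name):
        super().__init__()
        self.name = name

def enqueue(header, task):
    """Link a task at the tail of the doubly linked queue rooted at header."""
    last = header.prev
    last.next, task.prev = task, last
    task.next, header.prev = header, task

def move(task, new_header):
    """Dynamically reassign a task to another queue by rewriting the link fields."""
    task.prev.next, task.next.prev = task.next, task.prev   # unlink from current queue
    enqueue(new_header, task)

ready, waiting = Header(), Header()
t1 = Task("t1")
enqueue(ready, t1)
move(t1, waiting)
print(waiting.next.name)   # -> t1
```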

38 citations


Patent
08 Oct 1993
TL;DR: In this article, a method of delivering messages between application programs is provided, which ensures that no messages are lost and none are delivered more than once, using asynchronous message queuing.
Abstract: A method of delivering messages between application programs is provided which ensures that no messages are lost and none are delivered more than once. The method uses asynchronous message queuing. One or more queue manager programs (100) are located at each computer of a network for controlling the transmission of messages to and from that computer. Messages to be transmitted to a different queue manager are put onto special transmission queues (120). Transmission to an adjacent queue manager comprises a sending process (130) on the local queue manager (100) getting messages from a transmission queue and sending them as a batch of messages within a syncpoint-manager-controlled unit of work. A receiving process (150) on the receiving queue manager receives the messages and puts them within a second syncpoint-manager-controlled unit of work to queues (180) that are under the control of the receiving queue manager. Commitment of the batch is coordinated by the sender transmitting a request for commitment and for confirmation of commitment with the last message of the batch, commit at the sender then being triggered by the confirmation that is sent by the receiver in response to the request. The invention avoids the additional message flow that is a feature of two-phase commit procedures, avoiding the need for resource managers to synchronise with each other. It further reduces the commit flows by permitting batching of a number of messages.
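
A rough sketch of the batched hand-off: the sender transmits a batch, requests commit confirmation with the last message, and commits only once the receiver confirms. The UnitOfWork class below is a simplified stand-in for a syncpoint-manager-controlled unit of work, and all names are hypothetical.

```python
class UnitOfWork:
    """Simplified stand-in for a syncpoint-manager-controlled unit of work."""
    def __init__(self):
        self.pending = []
    def put(self, msg):
        self.pending.append(msg)
    def commit(self):
        committed, self.pending = self.pending, []
        return committed

class Receiver:
    def __init__(self):
        self.queues = {}
    def receive_batch(self, batch):
        uow = UnitOfWork()                         # second unit of work, at the receiver
        for dest, msg in batch:
            uow.put((dest, msg))
        for dest, msg in uow.commit():             # commit, then confirm to the sender
            self.queues.setdefault(dest, []).append(msg)
        return "confirm-commit"

def send_batch(transmission_queue, receiver):
    uow = UnitOfWork()                             # first unit of work, at the sender
    while transmission_queue:
        uow.put(transmission_queue.pop(0))
    # the last message of the batch implicitly carries the request for confirmation
    if receiver.receive_batch(uow.pending) == "confirm-commit":
        uow.commit()                               # sender commits only after confirmation

rx = Receiver()
send_batch([("APP.Q1", "hello"), ("APP.Q1", "world")], rx)
print(rx.queues)    # {'APP.Q1': ['hello', 'world']}
```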

26 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of retrial times on the behavior of a single server retrial queue is investigated and the authors derive monotonicity properties of several system performance measures of interest.
Abstract: A single server retrial queue is a queueing system consisting of a primary queue with finite capacity, an orbit and a server. Customers can arrive at the primary queue either from outside the system or from the orbit. If the primary queue is full, an arriving customer joins the orbit and conducts a retrial later. Otherwise, he enters the primary queue, waits for service and then leaves the system after service completion. We investigate the effect of retrial times on the behavior of the system. In particular, we assume that the retrial time distributions are phase type and introduce a new relation, which we call K-dominance (short for Kalmykov), on these distributions. Longer retrial times with respect to this K-dominance are shown to result in a more congested system in the stochastic sense. From these results, we derive monotonicity properties of several system performance measures of interest.
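
A small continuous-time simulation sketch of the retrial mechanism (finite primary queue, orbit, retrials); exponential retrial times are used here as a simplification of the paper's phase-type distributions, and the rates are illustrative.

```python
import random

def simulate_retrial(lam=0.7, mu=1.0, retrial_rate=0.5, capacity=3,
                     customers=200_000, seed=4):
    """Primary queue with `capacity` places and an orbit; blocked arrivals join the
    orbit and retry after an exponential time. Returns the time-average orbit size."""
    random.seed(seed)
    t, primary, orbit = 0.0, 0, 0
    area_orbit, arrivals = 0.0, 0
    while arrivals < customers:
        rates = {"arrival": lam,
                 "service": mu if primary > 0 else 0.0,
                 "retrial": orbit * retrial_rate}
        total = sum(rates.values())
        dt = random.expovariate(total)
        area_orbit += orbit * dt
        t += dt
        u = random.uniform(0.0, total)
        if u < rates["arrival"]:                   # external arrival
            arrivals += 1
            if primary < capacity:
                primary += 1
            else:
                orbit += 1                         # primary queue full: join the orbit
        elif u < rates["arrival"] + rates["service"]:
            primary -= 1                           # service completion
        else:                                      # retrial from the orbit
            if primary < capacity:
                orbit -= 1
                primary += 1
            # otherwise the retrial fails and the customer remains in orbit
    return area_orbit / t

print(f"time-average orbit size ≈ {simulate_retrial():.2f}")
```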

20 citations


Book ChapterDOI
03 Nov 1993
TL;DR: This paper describes queue monitoring, a policy for managing the effect of delay jitter on audio and video in computer-based conferences that dynamically adjusts display latency in order to support low-latency conferences with an acceptable gap rate.
Abstract: This paper describes queue monitoring, a policy for managing the effect of delay jitter on audio and video in computer-based conferences. By observing delay jitter over time, this policy dynamically adjusts display latency in order to support low-latency conferences with an acceptable gap rate. Queue monitoring is evaluated by comparing it with two other policies in an empirical study of a computer-based conferencing system. Our results show that queue monitoring performs well under a variety of observed network loads.
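
A rough sketch of a queue-monitoring style policy: a frame is discarded once the display queue has stayed above a threshold for a number of consecutive display times, trading an occasional gap for lower latency. The threshold and window values are illustrative; the paper's exact rules and parameters may differ.

```python
from collections import deque

class QueueMonitor:
    """Discard a queued frame after the display queue has exceeded `threshold`
    frames for `window` consecutive display times (illustrative parameters)."""
    def __init__(self, threshold=2, window=30):
        self.threshold, self.window = threshold, window
        self.queue = deque()
        self.over_count = 0

    def frame_arrived(self, frame):
        self.queue.append(frame)

    def display_time(self):
        """Called once per display interval; returns the frame to show, or None (a gap)."""
        if len(self.queue) > self.threshold:
            self.over_count += 1
            if self.over_count >= self.window:
                self.queue.popleft()      # shed display latency by dropping the oldest frame
                self.over_count = 0
        else:
            self.over_count = 0
        return self.queue.popleft() if self.queue else None
```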

19 citations


Proceedings ArticleDOI
28 Mar 1993
TL;DR: An exact analysis of the discrete-time single-server SMP/D/1/s queue with an arbitrary semi-Markov input process and a finite buffer of size s is presented, showing the influence of correlation on both loss probability and waiting time.
Abstract: An exact analysis of the discrete-time single-server SMP/D/1/s queue with an arbitrary semi-Markov input process and a buffer size s is presented. The SSMP/D/1/s queue and the DMAP/D/1/s queue are covered in this model. The analysis yields easy expressions for the queue length density at arrivals (loss probability) and the waiting time density. This queuing system is suitable for modeling correlated input streams of asynchronous transfer mode (ATM) networks involving batches. Calculations show the influence of correlation on both loss probability and waiting time.

17 citations


Journal ArticleDOI
TL;DR: A Markovian queue in which the number of servers depends upon the queue length is discussed, and a relationship is developed to obtain the optimum value of N for the maximum net profit.

Journal ArticleDOI
C. Bisdikian1
TL;DR: The conservation law formula for work-conserving queues with batch arrivals is derived for the case in which the arrival instants of customers constitute an arrivals-see-time-averages (ASTA) process, based on the analysis of the corresponding first-in first-out (FIFO) multiclass queue in steady state.
Abstract: The conservation law formula for work-conserving queues with batch arrivals is derived for the case in which the arrival instants of customers are assumed to constitute an arrivals see time averages (ASTA) process. The derivation is based on the analysis of the corresponding first-in first-out (FIFO) multiclass queue and its steady state. Hence, the mean work load in the queue at an arbitrary time instant can be equated with the mean time that an arbitrary customer has to wait in the FIFO queue.
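
For orientation, the classical conservation law for a non-preemptive, work-conserving multiclass M/G/1 queue can be written as below; this is the single-arrival, Poisson special case of the kind of identity the paper extends to batch arrivals under the ASTA assumption, and the notation is generic rather than the paper's.

```latex
% Conservation law for a non-preemptive, work-conserving multiclass M/G/1 queue:
% the \rho_i-weighted mean waiting times are invariant under the scheduling policy,
% and FIFO (equal waits for all classes) identifies the constant.
\sum_{i=1}^{K} \rho_i\, \mathbb{E}[W_i]
  \;=\; \rho\, \mathbb{E}[W_{\mathrm{FIFO}}]
  \;=\; \frac{\rho}{1-\rho}\sum_{i=1}^{K}\frac{\lambda_i\,\mathbb{E}[S_i^2]}{2},
\qquad \rho=\sum_{i=1}^{K}\rho_i,\quad \rho_i=\lambda_i\,\mathbb{E}[S_i].
```

Under ASTA arrivals the mean workload seen at an arbitrary instant coincides with the mean FIFO waiting time, which is the equality referred to in the abstract.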

Journal Article
TL;DR: The role of an urban traffic control system in monitoring and managing queues in oversaturated or gridlock situations is emphasised in this paper, where the authors address the issue of queue management in the context of managing travel demand, congestion, and capacity.
Abstract: This paper addresses the issue of queue management in the context of managing travel demand, congestion, and capacity. The role of an urban traffic control system in monitoring and managing queues in oversaturated or gridlock situations is emphasised. A control philosophy of equal saturation to conflicting traffic movements may not be appropriate if the blocking of an intersection due to the downstream queue frequently occurs. A more active approach than adaptive signal control is needed; techniques such as the gating of traffic and the allocation of priorities according to queue storage space should be considered. A first step towards queue management is the introduction of on-line queue monitoring in a control system. Empirical studies carried out with the SCATS and TRACS systems in Australia have demonstrated the practicality of this concept. The concept is now implemented in TRACS for refinement in Brisbane and other provincial cities.

Journal ArticleDOI
TL;DR: The transient queueing behavior as affected by the stochastic properties of the underlying two-state Markov chain for the arrival process is explored.


Journal ArticleDOI
TL;DR: A discrete-time tandem network of cut-through queues is presented that allows finite capacity queues, blocking, and bursty traffic, and a new bursty arrival process, IBK(k), for cut-through traffic is introduced.


Proceedings ArticleDOI
14 Sep 1993
TL;DR: The study demonstrates that the delay in the back-pressure signal seriously affects the switch performance, since it could result in severe cell loss at the output ports.
Abstract: This paper presents a study of the effect that the delay in the back-pressure signal has on the performance of a switch architecture with input-output buffering and back-pressure control. The exact value of the delay depends on the specific implementation of the back-pressure mechanism and the contention resolution policy, including the mechanism by which the information is broadcast to the input ports. The study demonstrates that the delay in the back-pressure signal seriously affects the switch performance, since it could result in severe cell loss at the output ports. This cell loss at the output ports can be controlled by modifying the output queue management mechanism, i.e. by incorporating additional buffering at the output ports on top of the original output queue size. The amount of this additional buffering depends on the value of the delay in the back-pressure signal, and it does not seriously affect the size of the input buffers. Higher values of delay in the back-pressure signal do not adversely affect the cell loss at the input ports. This study also investigates the overall input and output buffer allocation policies to achieve acceptable cell loss performance for architectures with delayed back-pressure mechanisms.

Patent
28 Oct 1993
TL;DR: In this article, the authors propose a queuing system that allows threads to be dispatched to real processors through object storage, without large operating overhead, so that operating systems do not need to wait for the system's dispatching process to complete.
Abstract: An architecture uses a process, termed "encapsulation", by which queues and counters are only accessed through a special memory operand called "object storage". The system alone is in control of the object storage, and the user cannot access it directly at any time. If the user needs to access a queue, the user must request it from the system. The system will in turn provide such access by issuing the user a "token". This token is the only means of communication between the user and the requested queue. Because threads can be dispatched to real processors through object storage without large operating overhead, the operating systems do not need to wait for the system's dispatching process to complete. Operating systems can signal the system through the use of object storage that they are authorized to access the processor when needed and thus forego the long dispatching process. In addition, since real processors are not dedicated, they can execute other programs when not needed. Since the state of threads is unknown to the operating system and the object dispatcher is in charge, operating support is kept at a minimum, which in itself is an important advantage of the invention. The encapsulation process along with the queuing system used in the architecture leads to finer granularity.
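
A toy sketch of the encapsulation idea: user code never touches a queue object directly and reaches it only through a token issued by the system. All names here are invented for illustration.

```python
import secrets

class ObjectStorageSystem:
    """Only the system holds the real queues; users are handed opaque tokens."""
    def __init__(self):
        self._queues = {}               # token -> queue, hidden "object storage"

    def request_queue(self):
        token = secrets.token_hex(8)    # the token is the user's only handle
        self._queues[token] = []
        return token

    def enqueue(self, token, item):
        self._queues[token].append(item)    # a bad token raises KeyError

    def dequeue(self, token):
        q = self._queues[token]
        return q.pop(0) if q else None

system = ObjectStorageSystem()
tok = system.request_queue()        # user requests access and receives a token
system.enqueue(tok, "thread-7")
print(system.dequeue(tok))          # -> thread-7
```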

Patent
29 Mar 1993
TL;DR: In this paper, the authors propose a method to take elements out of a queue in correct order and execute them by specifying process start time and priority order into elements which wait to be executed when the elements which are waiting in queue are stored in the queue.
Abstract: PURPOSE: To take elements out of a queue in correct order and execute them by specifying process start time and priority order into elements which wait to be executed when the elements which wait to be executed are stored in the queue. CONSTITUTION: Processes 9a-9c which accept the elements to be executed and their process start time access the queues in the increasing order of the priority and store the elements to be executed in the arrays T n of the process start time of plural queues 16 which are present in time series on the basis of the process start time and a process 9d takes an element to be executed next out of a queue 16 in the increasing order of the process start time and priority. Therefore, there are plural elements to be executed present in plural queues 16 while having the same priority order, they are executed according to the processing start time and never inverted in execution order. COPYRIGHT: (C)1994,JPO
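
The ordering rule described above (earlier process start time first, then higher priority) maps naturally onto a single priority queue keyed by a tuple; the sketch below uses Python's heapq rather than the patent's time-series arrays of queues, and the names are illustrative.

```python
import heapq

class TimedPriorityQueue:
    """Dequeue in increasing order of (process start time, priority number)."""
    def __init__(self):
        self._heap = []
        self._seq = 0                   # tie-breaker keeps insertion order stable

    def put(self, element, start_time, priority):
        heapq.heappush(self._heap, (start_time, priority, self._seq, element))
        self._seq += 1

    def take_next(self):
        return heapq.heappop(self._heap)[-1] if self._heap else None

q = TimedPriorityQueue()
q.put("job-B", start_time=10, priority=2)
q.put("job-A", start_time=10, priority=1)
q.put("job-C", start_time=5,  priority=3)
print([q.take_next() for _ in range(3)])   # ['job-C', 'job-A', 'job-B']
```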

Journal ArticleDOI
TL;DR: In this paper, the server alternates between two queues, a batch queue and an individual queue, giving priority to customers in the latter queue; an expression is derived for the mean customer sojourn time under the assumption of Poisson arrivals and general service times for each of the three phases.

Book ChapterDOI
11 Aug 1993
TL;DR: The basic form of the k-d heap uses no extra space, takes linear time to construct, and supports instant access to the items carrying the minimum key of any dimension, as well as logarithmic time insertion, deletion, and modification of any item in the queue.
Abstract: This paper presents the k-d heap, an efficient data structure that implements a multi-dimensional priority queue. The basic form of the k-d heap uses no extra space, takes linear time to construct, and supports instant access to the items carrying the minimum key of any dimension, as well as logarithmic time insertion, deletion, and modification of any item in the queue. Moreover, it can be extended to a multi-dimensional double-ended mergeable priority queue, capable of efficiently supporting all the operations linked to priority queues. The k-d heap is very easily implemented, and has direct applications.

Journal ArticleDOI
TL;DR: The linear algebraic approach to queuing theory is applied to analyze the performance of a typical single-bus multiprocessor system, which can be modeled as an M/G/1/N queuing system with load-dependent arrivals.
Abstract: The linear algebraic approach to queuing theory is applied to analyze the performance of a typical single-bus multiprocessor system. This system can be modeled as an M/G/1/N queuing system with load-dependent arrivals. The method presented requires only that the nonexponential service time distribution for the system be a matrix-exponential, that is, one with a rational Laplace transform. Using linear algebraic techniques, expressions are obtained for the performance characteristics of interest, such as the processing power for the multiprocessor system. The algorithm does not rely on root finding and can be implemented using symbolic programming techniques. The explicit closed-form expression for the processing power is presented for some special cases.

Patent
30 Dec 1993
TL;DR: Disclosed in this paper is a data processing system and a method of managing a queue of items for processing in which the expected time an item will spend on the queue (130) is calculated.
Abstract: Disclosed is a data processing system and a method of managing a queue (130) of items for processing, in which the expected time an item will spend on the queue (130) is calculated (209) when an item is received to be placed on the queue (130). If this exceeds an upper limit, then the item is rejected (213) and the queue (130) is purged (215) of all items. An indication is provided to the sources of the items that were purged that this has occurred. In this way, it is possible to detect the difference between a queue (130) which is longer, but dynamic, and one which is shorter, but static. The method is applied to management of requests for a communications link between local and remote systems.
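
The reject-and-purge rule might be sketched as below. How the expected time on the queue is estimated (queue length times a mean service time) is an assumption for illustration, since the abstract does not fix a formula, and all names are hypothetical.

```python
class Source:
    """Originator of queued items; is told when its items are purged or rejected."""
    def __init__(self, name):
        self.name = name
    def notify_purged(self, item):
        print(f"{self.name}: {item} purged/rejected")

class ManagedQueue:
    """Reject an incoming item and purge the queue when the expected wait exceeds a limit."""
    def __init__(self, mean_service_time, upper_limit):
        self.mean_service_time = mean_service_time
        self.upper_limit = upper_limit
        self.items = []

    def offer(self, item, source):
        # assumed estimate: items already queued times the mean service time
        expected_wait = len(self.items) * self.mean_service_time
        if expected_wait > self.upper_limit:
            purged, self.items = self.items, []
            for queued_item, queued_source in purged:
                queued_source.notify_purged(queued_item)   # tell each source about the purge
            source.notify_purged(item)                     # the offered item is rejected too
            return False
        self.items.append((item, source))
        return True

q = ManagedQueue(mean_service_time=2.0, upper_limit=5.0)
s = Source("link-A")
for i in range(5):
    q.offer(f"req{i}", s)    # the fourth offer sees an expected wait of 6.0 > 5.0 and purges
```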

Proceedings ArticleDOI
28 Mar 1993
TL;DR: It is shown that this queuing system provides an accurate analytical model for a distributed-queue dual-bus (DQDB) station, as well as a means for an appropriate evaluation of the correlation associated with key traffic processes in that network.
Abstract: A discrete-time priority queuing policy is presented and analyzed. The three-queue system provides distinct service strategies, namely, the consistent-gated (c-G), 1-limited (L), and head-of-line (HoL), to each of the queues. The proposed service policy is potentially applicable to the modeling of network resource allocating policies in integrated services communication networks. In particular, it is shown that this queuing system provides an accurate analytical model for a distributed-queue dual-bus (DQDB) station, as well as a means for an appropriate evaluation of the correlation associated with key traffic processes in that network.


Journal ArticleDOI
TL;DR: In this paper, a branching process input stream that can be used to model the organization of packets (in packetized communication) in messages is defined and studied, and a central result on the effect that such streams have on a packet switch or queue is derived.
Abstract: In a previous paper, we defined and studied a branching process input stream that can be used to model the organization of packets (in packetized communication) in messages. In this paper we derive a central result on the effect that such streams have on a packet switch (or queue) and further study one particular model (the case of so-called short-range dependence), where the distribution of the number of packets in a buffer can explicitly be found (albeit in the form of a transform). The solution to this particular model suggests an approximation for the average buffer content in the general case.

01 Oct 1993
TL;DR: In this article, the authors deal with the estimation of the probability of long queues on approach lanes of signalized intersections; where the consequences of queues reaching sensitive areas are severe, both off- and on-line intersection control strategies should include an evaluation of the associated risk.
Abstract: This paper deals with the estimation of the probability of long queues on approach lanes of signalized intersections. Where the consequences of queues reaching sensitive areas are severe, both off- and on-line intersection control strategies should include an evaluation of the associated risk. Such instances include queues exceeding the length of a turning bay for left turns and blocking a through lane, or queues stretching across an upstream intersection. If the risk is unacceptably high, the cycle time, green intervals, phase structure or their sequence should be modified. This paper summarizes international practices of queue determination and explains the method suggested in Canada for the estimation of the probability of a queue exceeding a given distance. The technique applies arrival flows to determine a Poisson-based dependent probability measure.
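
As an illustration of the kind of calculation involved (not the specific Canadian method), the probability that more vehicles arrive during an effective red interval than a bay can store, assuming Poisson arrivals and no residual queue, is a Poisson tail probability:

```python
import math

def prob_queue_exceeds(n_storage, arrival_rate_vph, red_seconds):
    """P(more than n_storage Poisson arrivals during the red interval).
    Simplified: ignores residual queues and departures during red."""
    mean_arrivals = arrival_rate_vph / 3600.0 * red_seconds
    cdf = sum(math.exp(-mean_arrivals) * mean_arrivals**k / math.factorial(k)
              for k in range(n_storage + 1))
    return 1.0 - cdf

# e.g. a 6-vehicle turning bay, 300 veh/h arriving, 60 s of effective red
print(f"P(queue exceeds bay) ≈ {prob_queue_exceeds(6, 300, 60):.3f}")
```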

Proceedings ArticleDOI
23 May 1993
TL;DR: The proposed methods are used to evaluate the performance of an ATM multiplexer supporting either voice or video traffic, modeled as a discrete time, single server queuing system with an infinite buffer.
Abstract: An asynchronous transfer mode (ATM) multiplexer supporting a number of bursty sources is modeled as a discrete time, single server queuing system with an infinite buffer. The probability generating function (PGF) method is used to analyze the queuing behavior. The PGF method requires the determination of a large number of boundary values, and hence the roots of the characteristic equation. An iterative algorithm for evaluating the roots is proposed. The algorithm is decomposable when the arrival process is a superposition of elemental processes. Conditions for all the roots to be real are established. A set of equations to recursively compute the moments of the queue length is derived. The proposed methods are used to evaluate the performance of an ATM multiplexer supporting either voice or video traffic.

Journal Article
TL;DR: The type of simulation in which the service time depends on the length of the waiting line is discussed, and six simulation models are set up, with their simulation results presented.
Abstract: Queuing system simulation is one of the most important problems in discrete-event system simulation. In this paper, the type of simulation in which the service time depends on the length of the waiting line is discussed. Six simulation models are set up, and the simulation results are also presented in this paper.
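
One of the model types described (service time depending on the current length of the waiting line) might be simulated along the following lines; the specific speed-up rule and all parameter values are illustrative assumptions.

```python
import random

def simulate(customers=50_000, arrival_rate=0.9, base_rate=1.0, k=0.2, seed=3):
    """Single-server FIFO queue in which the service rate grows with the number of
    customers left waiting behind the one entering service (illustrative rule:
    rate = base_rate * (1 + k * waiting)). Returns the mean sojourn time."""
    random.seed(seed)
    t, served, total_sojourn = 0.0, 0, 0.0
    queue = []                                   # arrival times of waiting customers
    in_service = None                            # arrival time of the customer in service
    next_arrival = random.expovariate(arrival_rate)
    departure = float("inf")
    while served < customers:
        if next_arrival <= departure:            # arrival event
            t = next_arrival
            queue.append(t)
            next_arrival = t + random.expovariate(arrival_rate)
            if in_service is None:               # idle server: begin service immediately
                in_service = queue.pop(0)
                rate = base_rate * (1 + k * len(queue))
                departure = t + random.expovariate(rate)
        else:                                    # service-completion event
            t = departure
            total_sojourn += t - in_service
            served += 1
            if queue:
                in_service = queue.pop(0)
                rate = base_rate * (1 + k * len(queue))
                departure = t + random.expovariate(rate)
            else:
                in_service, departure = None, float("inf")
    return total_sojourn / served

print(f"mean sojourn time ≈ {simulate():.3f}")
```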