Topic

Run queue

About: Run queue is a research topic. Over the lifetime, 470 publications have been published within this topic receiving 10633 citations.


Papers
Journal ArticleDOI
TL;DR: A new priority queue implementation for the future event set problem is described and shown experimentally to be O(1) in queue size for the priority increment distributions recently considered by Jones in his review article.
Abstract: A new priority queue implementation for the future event set problem is described in this article. The new implementation is shown experimentally to be O(1) in queue size for the priority increment distributions recently considered by Jones in his review article. It displays hold times three times shorter than splay trees for a queue size of 10,000 events. The new implementation, called a calendar queue, is a very simple structure of the multiple list variety using a novel solution to the overflow problem.
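The abstract describes the calendar queue only at a high level. The following is a minimal Python sketch of the idea, under stated assumptions: events are mapped to "day" buckets by priority modulo a "year" of buckets, dequeue scans forward from the current day, and a rare fallback scan handles the case where no event falls in the current year. Bucket count and width are fixed here for simplicity; the actual structure resizes them dynamically as the queue grows and shrinks, which is what yields the observed O(1) behavior. All names are illustrative.

```python
class CalendarQueue:
    """Minimal calendar-queue sketch: an array of 'day' buckets,
    each kept as a sorted list, indexed by event priority.
    Assumes nonnegative priorities; no dynamic resizing."""

    def __init__(self, num_buckets=8, bucket_width=1.0):
        self.nbuckets = num_buckets
        self.width = bucket_width
        self.buckets = [[] for _ in range(num_buckets)]
        self.size = 0
        self.last_priority = 0.0  # priority of the last event dequeued

    def enqueue(self, priority, item):
        # Map the priority onto a day of the calendar year.
        i = int(priority / self.width) % self.nbuckets
        bucket = self.buckets[i]
        # Keep each day's list sorted; short lists keep this cheap.
        j = 0
        while j < len(bucket) and bucket[j][0] < priority:
            j += 1
        bucket.insert(j, (priority, item))
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("dequeue from empty calendar queue")
        # Phase 1: scan at most one full year starting from the current day,
        # accepting only events that fall within that day's window.
        i = int(self.last_priority / self.width) % self.nbuckets
        bucket_top = (int(self.last_priority / self.width) + 1) * self.width
        for _ in range(self.nbuckets):
            b = self.buckets[i]
            if b and b[0][0] < bucket_top:
                prio, item = b.pop(0)
                self.size -= 1
                self.last_priority = prio
                return prio, item
            i = (i + 1) % self.nbuckets
            bucket_top += self.width
        # Phase 2 (rare): no event this year -- take the global minimum.
        bi = min((k for k in range(self.nbuckets) if self.buckets[k]),
                 key=lambda k: self.buckets[k][0][0])
        prio, item = self.buckets[bi].pop(0)
        self.size -= 1
        self.last_priority = prio
        return prio, item
```

Events dequeue in priority order; for example, enqueuing priorities 3.5, 0.2, 7.1, 1.9 into a 4-bucket queue yields them back as 0.2, 1.9, 3.5, 7.1.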

427 citations

Journal ArticleDOI
TL;DR: Experimental results indicate that no performance improvements can be obtained over the scheduler versions using a one-dimensional workload descriptor, and the best single workload descriptor is the number of tasks in the run queue.
Abstract: A task scheduler based on the concept of a stochastic learning automaton, implemented on a network of Unix workstations, is described. Using an artificial, executable workload, a number of experiments were conducted to determine the effect of different workload descriptions. These workload descriptions characterize the load at one host and determine whether a newly created task is to be executed locally or remotely. Six one-dimensional workload descriptors are examined, along with two more complex workload descriptions. It is shown that the best single workload descriptor is the number of tasks in the run queue. Using the worst workload descriptor, the 1-min load average, increased the mean response time by over 32% compared to the best descriptor. The two best workload descriptors, the number of tasks in the run queue and the system call rate, are combined to measure a host's load. Experimental results indicate that this combination obtains no performance improvement over the scheduler versions using a one-dimensional workload descriptor.
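The placement decision the abstract describes can be sketched as follows. This is a hypothetical simplification, not the paper's automaton: it uses only the single best descriptor the study identifies (run-queue length), with an assumed threshold; the function name, host labels, and threshold value are all illustrative.

```python
def choose_host(run_queue_lengths, local, threshold=2):
    """Decide where a newly created task should run, using run-queue
    length as the workload descriptor.

    run_queue_lengths: dict mapping host name -> number of tasks
                       in that host's run queue (illustrative).
    local:             the host where the task was created.
    threshold:         assumed cutoff for 'lightly loaded'.
    """
    if run_queue_lengths[local] <= threshold:
        return local  # local run queue is short enough: execute here
    # Otherwise ship the task to the host with the shortest run queue.
    return min(run_queue_lengths, key=run_queue_lengths.get)

loads = {"a": 5, "b": 1, "c": 3}
print(choose_host(loads, local="a"))  # -> "b": local queue too long
```

A real scheduler would refresh these lengths periodically and, per the paper, gains nothing by adding a second descriptor such as the system call rate.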

260 citations

Journal ArticleDOI
Martin Eisenberg1
TL;DR: This paper analyzes a queuing model consisting of a system of queues that are attended periodically in a given order by a single server and shows how the moments of these distributions can be calculated.
Abstract: This paper analyzes a queuing model consisting of a system of queues that are attended periodically in a given order by a single server. The server empties each queue before proceeding to the next queue in sequence. A changeover time is required whenever the server switches from one queue to another. For a stationary process expressions are obtained for the Laplace-Stieltjes transforms of the waiting-time and intervisit-time distributions at each queue. It is also shown how the moments of these distributions can be calculated.

259 citations

Patent
25 Nov 1997
TL;DR: In this article, a packet router for a data packet transmission network, in which routers offer priority services of the type required for isochronous handling of data representing real-time voice, is equipped with a Quality of Service (QoS) management system that ensures the guarantees associated with such priority service can be met with a high degree of certainty.
Abstract: A packet router for a data packet transmission network, wherein routers offer priority services of the type required for isochronous handling of data representing real-time voice, includes a Quality of Service (QoS) management system for ensuring that guarantees associated with such priority service can be met with a high degree of certainty. This management system provides prioritized queues including a highest priority queue supporting reservations for the priority service suited to isochronous handling. The highest priority queue and other queues are closely monitored by a QoS manager element for states of near congestion and critical congestion. While neither state exists, filler packet flows are promoted from lower priority queues to the highest priority queue, in order to keep the latter queue optimally utilized. If all lower priority queues are empty at such times, dummy packets are inserted as filler flows. Dummy packets have a form causing routers and other stations receiving them to immediately discard them. The volume of dummy traffic allowed for each queue of the system is a predetermined fraction of the queue's estimated peak traffic load, and that volume is displaceable to allow forwarding of additional traffic through the queue when conditions require it. While a state of near congestion exists, the QoS manager demotes filler flow units from the highest priority queues to lower priority queues, in order to lessen the potential forwarding delays presented to real traffic occupying the highest priority queue. When a state of critical congestion exists in the highest priority queue, admission of new incoming traffic flows to that queue is suspended and forwarding of filler flows from that queue out to the network is also suspended.
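The promote/demote cycle in the abstract can be sketched in a few lines. This is an illustrative simplification of the patent's multi-queue scheme, reduced to one high-priority queue, one lower-priority queue, and a single management step per congestion state; the state names, the `DUMMY` marker, and the `manage` function are assumptions, not the patent's terms.

```python
from collections import deque

NORMAL, NEAR, CRITICAL = "normal", "near", "critical"
DUMMY = "DUMMY"  # dummy packets are discarded immediately by receivers

def manage(high, low, state):
    """One QoS-manager step over two queues (illustrative sketch).

    high: deque for the highest-priority (reserved/isochronous) queue.
    low:  deque for a lower-priority queue supplying filler traffic.
    """
    if state == NORMAL:
        # Keep the highest-priority queue utilized: promote a filler
        # packet from the lower queue, or insert a dummy if it is empty.
        filler = low.popleft() if low else DUMMY
        high.append(filler)
    elif state == NEAR:
        # Near congestion: demote filler (here, any dummy packet) so
        # real traffic in the high queue sees less forwarding delay.
        if DUMMY in high:
            high.remove(DUMMY)
    # CRITICAL: admission of new flows and forwarding of filler are
    # both suspended -- modeled here as doing nothing.
```

The patent additionally caps each queue's dummy volume at a fraction of its estimated peak load; that accounting is omitted here.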

245 citations

Patent
12 Jul 1994
TL;DR: In this article, an improved estimated-waiting-time arrangement uses the average rate at which calls advance through the positions of a particular queue to derive a more accurate estimate of how long a call that is (or may be) enqueued in that queue will have to wait before being serviced by an agent.
Abstract: In an automatic call distribution (ACD) system, an improved estimated waiting time arrangement derives a more accurate estimate of how long a call that is or may be enqueued in a particular queue will have to wait before being serviced by an agent, by using the average rate of advance of calls through positions of the particular queue. For a dequeued call, the arrangement determines the call's individual rate of advance from one queue position to the next toward the head of the queue. It then uses this individual rate to recompute a weighted average rate of advance through the queue derived from calls that preceded the last-dequeued call through the queue. To derive a particular call's estimated waiting time, the arrangement multiplies the present weighted average rate of advance by the particular call's position number in the queue. The arrangement may be called upon to update the derivation at any time before or while the call is in queue. Also, the arrangement performs the estimated waiting time derivation separately and individually for each separate queue. The arrangement advantageously takes into consideration the effect of ACD features that affect the estimated waiting time, including changes in the numbers of agents that are serving the queue due to agent login and logout, multiple split/skill queuing, agents with multiple skills or in multiple splits, priority queuing, interflow, intraflow, and call-abandonment rates.
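The two calculations the abstract describes, blending a dequeued call's observed rate of advance into a weighted average and multiplying that average by a call's queue position, can be sketched as follows. The smoothing weight of 0.8 is an assumed value, not taken from the patent, and both function names are illustrative; rates are expressed as seconds per queue position so that the multiplication matches the abstract's wording.

```python
def update_weighted_rate(avg_secs_per_pos, observed_secs_per_pos, weight=0.8):
    """Blend the last-dequeued call's observed rate of advance
    (seconds per queue position) into the running weighted average.
    The 0.8 weight is an assumed smoothing factor."""
    return weight * avg_secs_per_pos + (1 - weight) * observed_secs_per_pos

def estimated_wait(avg_secs_per_pos, position):
    """Estimated waiting time = weighted average rate of advance
    multiplied by the call's position number in the queue."""
    return avg_secs_per_pos * position
```

For example, if the running average is 10 s/position and the last dequeued call advanced at 20 s/position, the new average is 12 s/position, giving a call at position 5 an estimate of 60 seconds. Per the abstract, this derivation is maintained separately for each queue and can be recomputed at any time while the call waits.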

210 citations


Network Information
Related Topics (5)
- Fair-share scheduling: 24.7K papers, 516.6K citations (73% related)
- Dynamic priority scheduling: 28.2K papers, 585.1K citations (72% related)
- Server: 79.5K papers, 1.4M citations (71% related)
- Queueing theory: 21.4K papers, 438.8K citations (71% related)
- Load balancing (computing): 27.3K papers, 415.5K citations (69% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2021    1
2020    1
2019    2
2017    9
2016    12
2015    12