
Showing papers on "Run queue published in 2008"


Patent
02 Dec 2008
TL;DR: In this article, a system for service queue management is disclosed, including a terminal controlled by the system having a processing platform to generate service queue information for a user, and a mechanism for the user to request or receive the service queue information.
Abstract: A system for service queue management is disclosed including a terminal controlled by the system having a processing platform to generate service queue information for a user, and a mechanism for the user to request or receive the service queue information. The processing platform preferably transmits promotions related to goods and services associated with the service queue information to a user. The mechanism for the user to request or receive service queue information may preferably include a user's mobile communications device or a manually-activated device located on the terminal. Service queue information may include a queue number, next queue number to be issued, estimated wait time for service, a predicted time of service, time of issuance of ticket, predicted queue-waiting time, marketing messages, instructions for use of said ticket dispenser, and average queue-waiting time.

84 citations


Patent
19 Jun 2008
TL;DR: In this paper, a system and method for assessing the status of work waiting for service in a work queue or a work pool is presented, where work items are placed in the queue or pool and have a service time goal.
Abstract: The present invention provides a system and method for assessing the status of work waiting for service in a work queue or a work pool. Work items are placed in the work queue or work pool and have a service time goal. The work items in the work queue or work pool are scanned and a required queue position for each work item is calculated according to the amount of time remaining prior to the expiration of the service time goal and weighted advance time for servicing of work items in the work queue or pool. An array of counters has elements which correspond to required queue positions. Upon the calculation of the required queue position for a work item, the counter corresponding to the required queue position is incremented. When all of the work items are scanned, the array of counters is analyzed to predict a future state of the work queue or work pool.

75 citations
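
As a rough illustration of the scan described above, the following C sketch derives a required queue position for each work item from its remaining time to goal and a weighted advance time, and increments the matching counter. The field names, the position formula, and MAX_POS are illustrative assumptions, not the patented implementation.

```c
#include <stdio.h>

#define MAX_POS 64

typedef struct {
    double time_to_goal;   /* seconds until the service time goal expires */
} work_item;

/* Scan: count how many items require each queue position. */
void scan_queue(const work_item *items, int n,
                double weighted_advance_time, int counters[MAX_POS])
{
    for (int i = 0; i < n; i++) {
        int pos = (int)(items[i].time_to_goal / weighted_advance_time);
        if (pos < 0) pos = 0;                  /* already overdue          */
        if (pos >= MAX_POS) pos = MAX_POS - 1; /* clamp to the array       */
        counters[pos]++;                       /* count the required slot  */
    }
}

int main(void)
{
    work_item q[] = { {30.0}, {5.0}, {12.0}, {3.0} };
    int counters[MAX_POS] = {0};
    scan_queue(q, 4, 6.0, counters);   /* assume ~6 s to service one item */

    /* If more items require position p than can be reached by p, the
       analysis predicts the queue will miss service goals. */
    int cum = 0;
    for (int p = 0; p < 8; p++) {
        cum += counters[p];
        printf("pos %d: %d item(s), cumulative %d\n", p, counters[p], cum);
    }
    return 0;
}
```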


Journal ArticleDOI
TL;DR: This paper considers several discrete-time priority queues with priority jumps; the schemes all differ in their jumping mechanism, each based on a certain jumping criterion, and thus all perform differently.
Abstract: In this paper, we consider several discrete-time priority queues with priority jumps. In a priority scheduling scheme with priority jumps, real-time and non-real-time packets arrive in separate queues, i.e., the high- and low-priority queue respectively. However, to deal with possibly excessive delays, non-real-time packets in the low-priority queue can in the course of time jump to the high-priority queue. These packets are then treated in the high-priority queue as if they were real-time packets. Many criteria can be used to decide when packets of the low-priority queue jump to the high-priority queue. Some criteria have already been introduced in the literature, and we first give an overview of this literature. Second, we propose and analyse a new priority scheme with priority jumps. Finally, we extensively compare all cited schemes. The schemes all differ in their jumping mechanism, based on a certain jumping criterion, and thus all have a different performance. We show the pros and cons of each jumping scheme.

25 citations
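
A minimal sketch of one possible jumping criterion, assuming an age-based rule (the paper surveys several criteria; this threshold rule and all names are illustrative):

```c
#include <stdio.h>

#define QCAP 128

typedef struct { int arrival_slot; } pkt;
typedef struct { pkt buf[QCAP]; int head, tail, len; } queue;

/* Minimal ring-buffer helpers (no overflow checks in this sketch). */
static void push(queue *q, pkt p) { q->buf[q->tail] = p; q->tail = (q->tail + 1) % QCAP; q->len++; }
static pkt  pop (queue *q)        { pkt p = q->buf[q->head]; q->head = (q->head + 1) % QCAP; q->len--; return p; }

/* Age-based jump: any low-priority packet that has waited more than
   max_age slots is moved to the tail of the high-priority queue. */
static void do_jumps(queue *hi, queue *lo, int now, int max_age)
{
    while (lo->len > 0 && now - lo->buf[lo->head].arrival_slot > max_age)
        push(hi, pop(lo));
}

int main(void)
{
    queue hi = {0}, lo = {0};
    push(&lo, (pkt){ .arrival_slot = 0 });   /* non-real-time packet */
    push(&hi, (pkt){ .arrival_slot = 3 });   /* real-time packet     */

    do_jumps(&hi, &lo, /* now = */ 10, /* max_age = */ 5);
    printf("high-priority length: %d, low-priority length: %d\n",
           hi.len, lo.len);                  /* 2 and 0: the packet jumped */
    return 0;
}
```

Serving the high-priority queue first and applying jumps at each slot boundary reproduces the qualitative trade-off the paper analyses: bounded delay for non-real-time packets at some cost to real-time traffic.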


Journal ArticleDOI
TL;DR: A new parallel queue computation model, the 2-offset P-Code queue computation model, is presented together with a new code generation algorithm that takes leveled DAGs as input and produces 2-offset P-Code assembly.

19 citations


Patent
Xin He, Qi Zhang
31 Mar 2008
TL;DR: In this article, lock-free circular queues relying only on atomic aligned read/write accesses in multiprocessing systems are disclosed, in which a comparison between a queue tail index and each queue head index indicates whether there is sufficient room available in a circular queue for at least one more queue entry.
Abstract: Lock-free circular queues relying only on atomic aligned read/write accesses in multiprocessing systems are disclosed. In one embodiment, when comparison between a queue tail index and each queue head index indicates that there is sufficient room available in a circular queue for at least one more queue entry, a single producer thread is permitted to perform an atomic aligned write operation to the circular queue and then to update the queue tail index. Otherwise an enqueue access for the single producer thread would be denied. When a comparison between the queue tail index and a particular queue head index indicates that the circular queue contains at least one valid queue entry, a corresponding consumer thread may be permitted to perform an atomic aligned read operation from the circular queue and then to update that particular queue head index. Otherwise a dequeue access for the corresponding consumer thread would be denied.

15 citations
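
The index comparisons in the abstract can be sketched for the single-producer, single-consumer special case using C11 atomics (the patent covers multiple consumers, each with its own head index; this simplification and all names are mine):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 256   /* power of two so indices wrap cheaply */

typedef struct {
    uint32_t slots[RING_SIZE];
    _Atomic uint32_t tail;   /* written only by the producer */
    _Atomic uint32_t head;   /* written only by the consumer */
} ring;

/* Producer: deny the enqueue unless there is room for one more entry. */
bool enqueue(ring *r, uint32_t v)
{
    uint32_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t h = atomic_load_explicit(&r->head, memory_order_acquire);
    if (t - h >= RING_SIZE)          /* full: enqueue access denied    */
        return false;
    r->slots[t % RING_SIZE] = v;     /* atomic aligned write of entry  */
    atomic_store_explicit(&r->tail, t + 1, memory_order_release);
    return true;
}

/* Consumer: deny the dequeue unless at least one valid entry exists. */
bool dequeue(ring *r, uint32_t *out)
{
    uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t t = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (h == t)                      /* empty: dequeue access denied   */
        return false;
    *out = r->slots[h % RING_SIZE];  /* atomic aligned read of entry   */
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    return true;
}

int main(void)
{
    static ring r;                   /* zero-initialized head and tail */
    uint32_t v;
    enqueue(&r, 42);
    if (dequeue(&r, &v))
        printf("dequeued %u\n", v);
    return 0;
}
```

A multi-consumer variant in the spirit of the abstract would keep one head index per consumer and take the minimum over all heads in the full-queue test, matching the "each queue head index" comparison.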


Patent
Yoo Tae Joon
29 Feb 2008
TL;DR: In this article, a queue processing method and a router perform cache update and queue processing based upon whether the packet capacity stored in the queue exceeds a rising threshold, or whether it falls below a falling threshold after having exceeded the rising threshold.
Abstract: A queue processing method and a router perform cache update and queue processing based upon whether or not the packet capacity stored in the queue exceeds a rising threshold, or whether the packet capacity stored in the queue falls below a falling threshold after having exceeded the rising threshold. This queue processing method and router make it possible to eliminate the overhead associated with updating flow information by using two caches, while concomitantly removing the inequality of packet flows via RED queue management.

13 citations
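
A minimal sketch of the rising/falling-threshold hysteresis, assuming byte counts as the "packet capacity" and with all names invented for illustration:

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  bytes_queued;      /* packet capacity currently stored in the queue */
    int  rising_threshold;
    int  falling_threshold; /* must lie below rising_threshold */
    bool congested;         /* set once the rising threshold is exceeded */
} queue_state;

/* Returns true while cache update / RED-style processing should run. */
static bool update_congestion(queue_state *q)
{
    if (!q->congested && q->bytes_queued > q->rising_threshold)
        q->congested = true;     /* crossed the rising threshold          */
    else if (q->congested && q->bytes_queued < q->falling_threshold)
        q->congested = false;    /* fell back below the falling threshold */
    return q->congested;
}

int main(void)
{
    queue_state q = { .rising_threshold = 1000, .falling_threshold = 400 };
    int levels[] = { 300, 1200, 700, 350 };
    for (int i = 0; i < 4; i++) {
        q.bytes_queued = levels[i];
        printf("queued=%4d congested=%d\n", levels[i], update_congestion(&q));
    }
    return 0;
}
```

The two thresholds keep the state from flapping when the occupancy hovers near a single cut-off, which is what lets the two caches be updated only on genuine transitions.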


Journal ArticleDOI
TL;DR: This paper presents two new availability prediction models: one capable of predicting the CPU availability for a new task on a computer system given information about the tasks in its run queue, and an improvement of the SPAP model capable of making these predictions from real-time measurements provided by a monitoring tool.
Abstract: The success of different computing models, performance analysis, and load balancing algorithms depends on the processor availability information because there is a strong relationship between a process response time and the processor time available for its execution. Therefore, predicting the processor availability for a new process or task in a computer system is a basic problem that arises in many important contexts. Unfortunately, making such predictions is not easy because of the dynamic nature of current computer systems and their workload, which can vary drastically in a short interval of time. This paper presents two new availability prediction models. The first, called the SPAP (static process assignment prediction) model, is capable of predicting the CPU availability for a new task on a computer system having information about the tasks in its run queue. The second, called the DYPAP (dynamic process assignment prediction) model, is an improvement of the SPAP model and is capable of making these predictions from real-time measurements provided by a monitoring tool, without any kind of information about the tasks in the run queue. Furthermore, the implementation of this monitoring tool for Linux workstations is presented. In addition, the results of an exhaustive set of experiments are reported to validate these two models and to evaluate the accuracy of their predictions.

12 citations
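
For intuition only: a common simplification behind run-queue-based predictors is that, under a fair scheduler, a new CPU-bound task competing with n CPU-bound tasks in the run queue receives roughly a 1/(n+1) share of the CPU. The sketch below encodes that assumption; it is not the SPAP or DYPAP model itself.

```c
#include <stdio.h>

/* Fair-share approximation: the new task is one of (n + 1) runnable tasks. */
double predicted_cpu_availability(int runnable_tasks)
{
    return 1.0 / (runnable_tasks + 1);
}

int main(void)
{
    for (int n = 0; n <= 4; n++)
        printf("run queue length %d -> predicted availability %.0f%%\n",
               n, 100.0 * predicted_cpu_availability(n));
    return 0;
}
```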


Proceedings ArticleDOI
20 Oct 2008
TL;DR: This paper considers a single-server cyclic polling system consisting of two queues where the server is delayed by a random switch-over time between visits to successive queues, and studies the cycle time distribution, the waiting times for each customer type, the joint queue length distribution at polling epochs, and the steady-state marginal queue length distributions.
Abstract: In this paper we consider a single-server cyclic polling system consisting of two queues. Between visits to successive queues, the server is delayed by a random switch-over time. Two types of customers arrive at the first queue: high and low priority customers. For this situation the following service disciplines are considered: gated, globally gated, and exhaustive. We study the cycle time distribution, the waiting times for each customer type, the joint queue length distribution at polling epochs, and the steady-state marginal queue length distributions for each customer type.

12 citations
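
The per-visit service disciplines can be contrasted in a few lines. This is an illustration of the service rules only; the paper derives the distributions analytically, and the treatment of mid-visit arrivals here is simplified.

```c
#include <stdio.h>

/* Gated: only customers present when the server arrives (the "gate")
   are served; arrivals during the visit wait for the next cycle. */
static int visit_gated(int *qlen, int arrivals_during_visit)
{
    int served = *qlen;
    *qlen = arrivals_during_visit;
    return served;
}

/* Exhaustive: the server keeps serving until the queue is empty,
   including customers arriving during the visit (second-order
   arrivals are ignored in this sketch). */
static int visit_exhaustive(int *qlen, int arrivals_during_visit)
{
    int served = *qlen + arrivals_during_visit;
    *qlen = 0;
    return served;
}

int main(void)
{
    int q1 = 5, q2 = 5;
    printf("gated: served %d, left %d\n", visit_gated(&q1, 2), q1);
    printf("exhaustive: served %d, left %d\n", visit_exhaustive(&q2, 2), q2);
    return 0;
}
```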


Patent
Martin Skarve, Anders Jonsson
24 Apr 2008
TL;DR: In this paper, the nominal target error rate for transmission of data from a priority queue is adjusted to a new predetermined target error rate depending on the state of the priority queue.
Abstract: The invention deals with the adjustment of the nominal target error rate for transmission of data from a priority queue to a new predetermined target error rate depending on the state of the priority queue. Usually, the adjustment will be to a predefined lower target error rate, based on states of the priority queue such as the amount of data in the priority queue, the time passed since the latest transmission of data from the priority queue, whether the amount of data in the priority queue will fit into one transport block, and whether the data unit to be transmitted is the first or last data unit in the priority queue; it may also be based on the type of data stored in the priority queue. There may be more than one such priority queue.

11 citations
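
A sketch of a state-dependent target selection, assuming a simple rule built from the queue states listed in the abstract (the fields, the threshold, and the rule itself are illustrative assumptions, not the patent's):

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  bytes_queued;
    int  ms_since_last_tx;
    bool fits_one_transport_block; /* all queued data fits in one block */
    bool is_last_data_unit;        /* next unit is the queue's last     */
} pq_state;

/* Pick the target error rate for the next transmission: switch to the
   predefined lower target when the queue is nearly drained or stale. */
static double target_error_rate(const pq_state *q,
                                double nominal, double lower)
{
    if (q->fits_one_transport_block || q->is_last_data_unit ||
        q->ms_since_last_tx > 100)
        return lower;
    return nominal;
}

int main(void)
{
    pq_state q = { .bytes_queued = 120, .fits_one_transport_block = true };
    printf("target: %.3f\n", target_error_rate(&q, 0.10, 0.01)); /* 0.010 */
    return 0;
}
```

The intuition: when little data remains, a retransmission cannot be amortized over later traffic, so a stricter (lower) error target avoids a delay spike on the final block.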


Patent
14 Mar 2008
TL;DR: In this paper, an apparatus and computer program for workload balancing in an asynchronous messaging system are described, in which the number of server instances that process work items from a queue of messages is controlled based upon that queue's average queue depth and one or more predetermined thresholds.
Abstract: The present invention relates to an apparatus and computer program for workload balancing in an asynchronous messaging system. The number of server instances, which process work items from a queue of messages, is controlled based upon that queue's average queue depth and one or more predetermined thresholds.

11 citations
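
The control rule lends itself to a short sketch: compare the queue's average depth against thresholds and step the instance count accordingly. The thresholds, bounds, and step size of one are illustrative assumptions.

```c
#include <stdio.h>

/* Threshold-driven control of the number of server instances. */
static int adjust_instances(int instances, double avg_queue_depth,
                            double high_water, double low_water,
                            int min_inst, int max_inst)
{
    if (avg_queue_depth > high_water && instances < max_inst)
        return instances + 1;    /* queue building up: add a server */
    if (avg_queue_depth < low_water && instances > min_inst)
        return instances - 1;    /* queue drained: remove a server  */
    return instances;
}

int main(void)
{
    int n = 2;
    double depths[] = { 50.0, 260.0, 300.0, 8.0 };
    for (int i = 0; i < 4; i++) {
        n = adjust_instances(n, depths[i], 200.0, 20.0, 1, 8);
        printf("avg depth %.0f -> %d instance(s)\n", depths[i], n);
    }
    return 0;
}
```

Using the average depth rather than the instantaneous depth damps the controller against momentary bursts.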


Patent
29 Jul 2008
TL;DR: The queue manager logic as discussed by the authors updates a linking memory, whose entries form a linked list of packet descriptors, to maintain each queue, and maintains a queue control register for each queue, including head and tail descriptor index values.
Abstract: A network element including a processor with logic for managing packet queues by way of packet descriptor index values that are mapped to addresses in the memory space of the packet descriptors. A linking memory is implemented in the same integrated circuit as the processor, and has entries corresponding to the descriptor index values. Each entry can store the next descriptor index in a packet queue, to form a linked list of packet descriptors. Queue manager logic receives push and pop requests from host applications, and updates the linking memory to maintain the queue. The queue manager logic also maintains a queue control register for each queue, including head and tail descriptor index values.
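
A sketch of the linking-memory idea: an array indexed by descriptor number whose entries hold the next descriptor index, plus head/tail values standing in for the queue control register. All names and sizes here are illustrative, not the patent's.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_DESC 1024
#define NIL 0xFFFF          /* sentinel: no next descriptor */

typedef struct {
    uint16_t link[NUM_DESC]; /* linking memory: next index per descriptor */
    uint16_t head, tail;     /* queue control "register" */
    int      count;
} desc_queue;

void q_init(desc_queue *q) { q->head = q->tail = NIL; q->count = 0; }

/* Push request: append descriptor index d at the tail of the queue. */
void q_push(desc_queue *q, uint16_t d)
{
    q->link[d] = NIL;
    if (q->count == 0) q->head = d;
    else               q->link[q->tail] = d;
    q->tail = d;
    q->count++;
}

/* Pop request: remove and return the head descriptor index, or NIL. */
uint16_t q_pop(desc_queue *q)
{
    if (q->count == 0) return NIL;
    uint16_t d = q->head;
    q->head = q->link[d];
    if (--q->count == 0) q->tail = NIL;
    return d;
}

int main(void)
{
    desc_queue q;
    q_init(&q);
    q_push(&q, 7);
    q_push(&q, 42);
    printf("pop: %u\n", (unsigned)q_pop(&q));   /* 7  */
    printf("pop: %u\n", (unsigned)q_pop(&q));   /* 42 */
    return 0;
}
```

Because entries are small indices rather than pointers and the linking memory sits on the same chip as the processor, a push or pop touches only one linking-memory entry plus the control values.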

Patent
25 Feb 2008
TL;DR: In this paper, when a thread is about to complete, the scheduler looks at the run queue to which the completing thread belongs to dispatch another thread, and identifies a thread that is compatible with the thread still running on the SMT processor.
Abstract: Identifying compatible threads in a Simultaneous Multithreading (SMT) processor environment is provided by calculating a performance metric, such as cycles per instruction (CPI), that occurs when two threads are running on the SMT processor. The CPI that is achieved when both threads were executing on the SMT processor is determined. If the CPI that was achieved is better than the compatibility threshold, then information indicating the compatibility is recorded. When a thread is about to complete, the scheduler looks at the run queue to which the completing thread belongs to dispatch another thread. The scheduler identifies a thread that is (1) compatible with the thread that is still running on the SMT processor (i.e., the thread that is not about to complete), and (2) ready to execute. The CPI data is continually updated so that threads that are compatible with one another are continually identified.
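
A sketch of how the recorded CPI data might be organized and queried, assuming a symmetric pairwise table and a simple threshold test (both are illustrative assumptions, not the patented layout):

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_THREADS 64

/* Observed cycles-per-instruction when threads i and j co-ran;
   0 means the pair has not been measured yet. */
static double cpi_table[MAX_THREADS][MAX_THREADS];

static void record_cpi(int a, int b, double cpi)
{
    cpi_table[a][b] = cpi_table[b][a] = cpi;   /* continually updated */
}

/* A pair is compatible when its measured CPI beats the threshold
   (lower CPI means the two threads share the SMT core well). */
static bool compatible(int a, int b, double threshold)
{
    double cpi = cpi_table[a][b];
    return cpi > 0.0 && cpi < threshold;
}

int main(void)
{
    record_cpi(1, 2, 0.9);
    record_cpi(1, 3, 2.4);
    printf("1+2 compatible: %d\n", compatible(1, 2, 1.5));  /* 1 */
    printf("1+3 compatible: %d\n", compatible(1, 3, 1.5));  /* 0 */
    return 0;
}
```

At dispatch time the scheduler would scan the ready threads in the relevant run queue and prefer the first one for which compatible(running, candidate, threshold) holds.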

Patent
27 Feb 2008
TL;DR: In this paper, the authors present a method, system, and program product wherein at least one command in a first queue is transferred to a second queue when the first queue cannot accept it, and commands in the second queue that should have been in the first queue are transferred back once the first queue can accept them.
Abstract: The present invention is generally directed to a method, system, and program product wherein at least one command in a first queue is transferred to a second queue. When the first queue can no longer accept command(s) and a second queue is able to accept command(s), the second queue accepts the command(s) that the first queue cannot. When the first queue is able to accept command(s), and there are command(s) in the second queue that should have been in the first queue, the command(s) in the second queue are transferred to the first queue.
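
The overflow rule can be sketched directly; capacities and names are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

#define CAP1 4
#define CAP2 16

typedef struct { int cmds[CAP2]; int len, cap; } cq;

static bool cq_push(cq *q, int cmd)
{
    if (q->len >= q->cap) return false;
    q->cmds[q->len++] = cmd;
    return true;
}

static bool cq_shift(cq *q, int *cmd)   /* remove from the front */
{
    if (q->len == 0) return false;
    *cmd = q->cmds[0];
    for (int i = 1; i < q->len; i++) q->cmds[i - 1] = q->cmds[i];
    q->len--;
    return true;
}

/* Accept a command: first queue if possible, else spill to the second. */
static bool accept(cq *q1, cq *q2, int cmd)
{
    return cq_push(q1, cmd) || cq_push(q2, cmd);
}

/* When the first queue has room again, move spilled commands back. */
static void drain_back(cq *q1, cq *q2)
{
    int cmd;
    while (q1->len < q1->cap && cq_shift(q2, &cmd))
        cq_push(q1, cmd);
}

int main(void)
{
    cq q1 = { .cap = CAP1 }, q2 = { .cap = CAP2 };
    for (int cmd = 1; cmd <= 6; cmd++)
        accept(&q1, &q2, cmd);       /* commands 5 and 6 spill to q2 */
    int c;
    cq_shift(&q1, &c);               /* q1 frees one slot...         */
    drain_back(&q1, &q2);            /* ...so command 5 moves back   */
    printf("q1 len %d, q2 len %d\n", q1.len, q2.len);  /* 4 and 1    */
    return 0;
}
```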

Patent
22 Oct 2008
TL;DR: In this paper, a computer system instantiates a virtualized instance of a queue manager instance in a virtual layer associated with the queue manager in the computing system, such that the virtualized queue manager can be used to implement the supplemental commands.
Abstract: In one embodiment, a computer system instantiates a queue manager configured to process a plurality of existing queue manager commands on messages in a message queue. The computer system instantiates a virtualized instance of the queue manager in a virtual layer associated with the queue manager in the computing system. The virtualized queue manager instance provides supplemental queue manager commands usable in addition to the existing queue manager commands, such that the queue manager can be used to implement the supplemental commands without substantial modification. The computer system receives an indication that a message in a message queue is to be accessed according to a specified command, provided by the instantiated virtualized queue manager instance, that is not natively supported by the queue manager, and the virtualized queue manager performs the specified supplemental command by performing one or more existing queue manager commands.
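
A sketch of the composition idea: a supplemental "move" command provided by the virtual layer but executed purely as existing get and put commands. The tiny in-memory queue manager and all names here are stand-ins, not a real queue manager API.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Tiny in-memory stand-in for the existing queue manager. */
typedef struct { char data[64][128]; int head, tail; } mq;

static bool qm_put(mq *q, const char *text)      /* existing command */
{
    if (q->tail - q->head >= 64) return false;
    strncpy(q->data[q->tail++ % 64], text, 127);
    return true;
}

static bool qm_get(mq *q, char *out)             /* existing command */
{
    if (q->head == q->tail) return false;
    strncpy(out, q->data[q->head++ % 64], 128);
    return true;
}

/* Supplemental command provided by the virtual layer: "move" a message,
   implemented entirely from the existing get and put commands. */
static bool vqm_move(mq *from, mq *to)
{
    char m[128];
    return qm_get(from, m) && qm_put(to, m);
}

int main(void)
{
    mq orders = {0}, audit = {0};
    qm_put(&orders, "order#1");
    vqm_move(&orders, &audit);            /* supplemental command */
    char out[128];
    if (qm_get(&audit, out))
        printf("moved message: %s\n", out);
    return 0;
}
```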

Proceedings ArticleDOI
28 May 2008
TL;DR: This work introduces a method of imposing desired active queue management behavior on an upstream queue without requiring administrative control over the queue, and performs comparably to a RED policy applied on the remote queue, without dropping VoIP packets.
Abstract: Consumers or administrators of small business networks usually cannot directly configure the link which connects their network to their Internet provider. The link setup often provides a sub-optimal configuration for end users' traffic patterns and, at best, favors bulk transfers. We introduce a method of imposing desired active queue management behavior on an upstream queue without requiring administrative control over the queue. We achieve this by observing various externally measurable characteristics of the queue's behavior and then manipulating congestion-controlled traffic through standard feedback channels such as packet drops or ECN notifications. This technique can be directly applied to improve the quality of VoIP connections sharing a bottleneck link with multiple TCP connections. Our approach performs comparably to a RED policy applied on the remote queue, without dropping VoIP packets. The complexity of our approach is independent of the number of connections, and hence it can be implemented on a network gateway without adding noticeable processing overhead.
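
For reference, the drop curve of the RED policy the approach is compared against; in the paper's setting the queue estimate would come from external measurements of the upstream link rather than from the router's own counters. The thresholds in the example are illustrative.

```c
#include <stdio.h>

/* Standard RED drop probability as a function of the (here, externally
   inferred) average queue level. */
static double red_drop_prob(double avg_q, double min_th,
                            double max_th, double max_p)
{
    if (avg_q < min_th)  return 0.0;
    if (avg_q >= max_th) return 1.0;
    return max_p * (avg_q - min_th) / (max_th - min_th);
}

int main(void)
{
    /* e.g. thresholds of 5 and 15 packets with max_p = 0.1 */
    for (int q = 0; q <= 20; q += 5)
        printf("avg queue %2d -> drop prob %.3f\n",
               q, red_drop_prob(q, 5, 15, 0.1));
    return 0;
}
```

Applying such drops (or ECN marks) to the congestion-controlled TCP flows at the gateway is what lets the scheme shape the remote queue it cannot configure.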

Patent
12 Feb 2008
TL;DR: In this article, an execution queue stores a write command from the host in response to the issuing of the write command, and the command is removed from the execution queue by a signal indicating that the data designated by the write command has been written to the hard disk.
Abstract: An execution queue stores a write command from the host in response to issuance of the write command from the host, and the command is removed from the execution queue in response to a signal indicating that data designated by the write command has been written to the hard disk. A holding queue stores the write command removed from the execution queue. In response to the command being stored in the holding queue, a request is issued for an acknowledgment from the host. The write command is removed from the holding queue in response to the acknowledgment being received from the host. An outgoing queue stores the write command removed from the holding queue for deletion. The queues are controlled by queue management hardware, the request is issued by the queue management hardware, and the signal and acknowledgment are received by the queue management hardware.
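
The command's path through the three queues amounts to a small state machine. The patent implements this in queue management hardware; this software illustration and its names are mine.

```c
#include <stdio.h>

/* Stages corresponding to the three queues. */
typedef enum { EXECUTION, HOLDING, OUTGOING } stage;
typedef struct { int id; stage where; } write_cmd;

static void request_host_ack(int id)
{
    printf("requesting acknowledgment from host for command %d\n", id);
}

/* Signal received: the data designated by the command is on the disk. */
static void on_write_complete(write_cmd *c)
{
    if (c->where == EXECUTION) {
        c->where = HOLDING;        /* execution queue -> holding queue  */
        request_host_ack(c->id);   /* triggered by entering the holding queue */
    }
}

/* Acknowledgment received from the host. */
static void on_host_ack(write_cmd *c)
{
    if (c->where == HOLDING)
        c->where = OUTGOING;       /* holding queue -> outgoing queue, for deletion */
}

int main(void)
{
    write_cmd c = { .id = 7, .where = EXECUTION };
    on_write_complete(&c);
    on_host_ack(&c);
    printf("command %d stage: %d (2 = outgoing)\n", c.id, (int)c.where);
    return 0;
}
```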

Patent
16 Jun 2008
TL;DR: In this article, a method, system, and medium are provided for re-routing messages from a particular parallel queue instance that is experiencing below-normal message throughput by disabling the slow queue instance.
Abstract: A method, system, and medium are provided for re-routing messages from a particular parallel queue instance that is experiencing below-normal message throughput. The messages are re-routed to the other parallel queue instances by disabling the slow queue instance. A series of determinations are made, prior to disabling the queue instance, to confirm that disabling it is the preferred response to the decreased throughput.

Patent
Roger H. E. Pett
09 Sep 2008
TL;DR: In this article, breakpoints are handled in an asynchronous debug model by building a queue of basic operations to run a debug application program interface (API), where user commands are each broken down into simple commands and placed on the queue.
Abstract: Breakpoints are handled in an asynchronous debug model by building a queue of basic operations to run a debug application program interface (API). User commands are each broken down into simple commands and placed on the queue. In response to a debug event, a new simple command is generated. If, when a first command on the queue is processed, a thread is not stopped at a location with an installed breakpoint, an operation corresponding to the first command is started, the operation is removed from the queue, and a next operation is started. If the thread is stopped at the location with the breakpoint, the thread performs a hop. When the hop terminates, the first command is removed from the queue. If the first command is a run command, and there is no cause to stop the thread, the run command is moved to the end of the queue.

Patent
16 May 2008
TL;DR: In this paper, methods, systems, and computer program products for group-based allocation of terminal server network bandwidth are presented.
Abstract: The present invention extends to methods, systems, and computer program products for group-based allocation of terminal server network bandwidth. Output packets are classified into groups based on classification criteria. Output packets for each group are queued into a corresponding queue. During a queue flush cycle, each queue containing data is flushed for an essentially equal amount of time. Flushing each queue essentially equally reduces the negative impact that can otherwise result when a subset of sessions (or even a single session) requests a disproportionate share of terminal server network bandwidth. Responsiveness can be further increased by distributing the essentially equal amount of time for each queue across the queue flush cycle.
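
A sketch of the equal-time flush cycle (types, the helper, and the slice length are illustrative; the patent additionally distributes each queue's share across the cycle):

```c
#include <stdio.h>

/* Hypothetical per-group output queue. */
typedef struct { const char *group; int backlog_bytes; } out_queue;

/* Stand-in for sending packets from q for slice_us microseconds. */
static void flush_for(out_queue *q, long slice_us)
{
    printf("flushing %s for %ld us\n", q->group, slice_us);
}

/* One flush cycle: every queue holding data gets an essentially equal
   amount of flush time, regardless of how much each session queued. */
static void flush_cycle(out_queue *queues, int n, long slice_us)
{
    for (int i = 0; i < n; i++)
        if (queues[i].backlog_bytes > 0)
            flush_for(&queues[i], slice_us);
}

int main(void)
{
    out_queue qs[] = { {"interactive", 2000}, {"bulk", 900000}, {"idle", 0} };
    flush_cycle(qs, 3, 10000);   /* bulk gets no more time than interactive */
    return 0;
}
```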

Patent
30 Apr 2008
TL;DR: In this article, a queue management module dynamically adjusts one or more of the weights such that subsequent amounts of processing time actually required to process the number of packets defined by each of the queue weights more accurately reflect the desirable quotas assigned to each queue.
Abstract: In general, techniques are described for dynamically managing weighted queues. In accordance with the techniques, a network security device comprises a queue management module that assigns, for each queue of a plurality of queues, a quota, desirable to a user, of the processing time that a processor of the network security device consumes to service that queue. The queue management module determines, based on the desirable quotas, a queue weight for each queue and computes the amount of processing time actually required to service each queue. Based on the computation, the queue management module dynamically adjusts one or more of the weights such that subsequent amounts of processing time actually required to process the number of packets defined by each of the queue weights more accurately reflect the desirable quotas assigned to each of the queues. The network device outputs the number of packets in accordance with the adjusted weights.
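
One plausible form of the dynamic adjustment is a multiplicative correction of each weight by the ratio of assigned quota to measured service time. The rule below is an illustrative assumption, not the patent's formula.

```c
#include <stdio.h>

/* Scale each queue's weight by how far the processing time it actually
   consumed strayed from its assigned quota. */
static void adjust_weights(double *weight, const double *quota,
                           const double *measured, int n)
{
    for (int i = 0; i < n; i++)
        if (measured[i] > 0.0)
            weight[i] *= quota[i] / measured[i];
}

int main(void)
{
    double weight[]   = { 4.0, 2.0 };   /* packets serviced per round */
    double quota[]    = { 0.6, 0.4 };   /* desired share of CPU time  */
    double measured[] = { 0.8, 0.2 };   /* share actually consumed    */

    adjust_weights(weight, quota, measured, 2);
    printf("new weights: %.2f %.2f\n", weight[0], weight[1]); /* 3.00 4.00 */
    return 0;
}
```

A queue that consumed more than its quota is weighted down; one that consumed less is weighted up, so packet counts rather than raw weights come to track the user's intent.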

Patent
19 Aug 2008
TL;DR: In this article, a method for avoiding preemption in a small low-power embedded system is presented, in which the system fetches and runs a periodic atomic task from a periodic run queue and, according to the result of the run, either reduces one of the periodic atomic tasks or performs a task change after setting a field of the executed task to a run-standby state.
Abstract: Provided are a small low-power embedded system and a preemption avoidance method for it. The method includes: fetching and running a periodic atomic task from a periodic run queue; according to the result value of the run, either reducing one of the periodic atomic tasks or performing a task change after setting a field of the executed periodic atomic task to a run-standby state; fetching a sporadic atomic task from a sporadic run queue; acquiring a system clock and running the fetched sporadic atomic task according to its worst-case run time; and, according to the result value of the run, either reducing one of the sporadic atomic tasks or performing an event change after setting a field of the executed sporadic atomic task to a run-standby state.

Journal Article
TL;DR: This paper introduces the design of the controllers in the queue system, including the hardware circuit and software, which are important parts of the whole system.
Abstract: With the assistance of computers and networks, RS-485 can be applied to queue control, which organizes queuing for people and improves quality of life as well. In this paper, we introduce the design of the controllers in the queue system, including the hardware circuit and software, which are important parts of the whole system.

Patent
Qiuming Gao
26 Dec 2008
TL;DR: In this paper, the authors propose a subsequent instruction operation device and method, which includes: establishing a subsequent queue and setting a queue radix address and a queue maximal length for it; and generating subsequent operation instructions according to the length of the data that needs to be written or read.
Abstract: A subsequent instruction operation device and method, the method including: establishing a subsequent queue and setting a queue radix address and a queue maximal length for the subsequent queue; generating subsequent operation instructions according to the length of the data that needs to be written or read and the queue radix address and queue maximal length of the subsequent queue; and executing the subsequent operation instructions in the subsequent queue, completing the data operations on the subsequent queue.