
Showing papers on "Run queue published in 2010"


Proceedings ArticleDOI
09 Jan 2010
TL;DR: This paper provides a user-level implementation of speed balancing on UMA and NUMA multi-socket architectures running Linux and indicates that speed balancing, compared with the native Linux load balancing, improves performance and provides good performance isolation in all cases considered.
Abstract: To fully exploit multicore processors, applications are expected to provide a large degree of thread-level parallelism. While adequate for low core counts and their typical workloads, the current load balancing support in operating systems may not be able to achieve efficient hardware utilization for parallel workloads. Balancing run queue length globally ignores the needs of parallel applications, where threads are required to make equal progress. In this paper we present a load balancing technique designed specifically for parallel applications running on multicore systems. Instead of balancing run queue length, our algorithm balances the time a thread has executed on "faster" and "slower" cores. We provide a user-level implementation of speed balancing on UMA and NUMA multi-socket architectures running Linux and discuss behavior across a variety of workloads, usage scenarios and programming models. Our results indicate that speed balancing, compared with the native Linux load balancing, improves performance and provides good performance isolation in all cases considered. Speed balancing is also able to provide comparable or better performance than DWRR, a fair multi-processor scheduling implementation inside the Linux kernel. Furthermore, parallel application performance is often determined by the implementation of synchronization operations, and speed balancing alleviates the need for tuning the implementations of such primitives.
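The core idea — balance accumulated execution time on "faster" and "slower" cores rather than run queue length — can be sketched as a toy migration policy. This is an illustrative model, not the paper's implementation; all names (`fast_time`, `slow_time`) are assumptions.

```python
def speed_balance(fast_time, slow_time):
    """Given per-thread accumulated time on fast and slow cores, return the
    pair of thread indices that should swap places: the thread that has run
    longest on the fast cores trades with the one that has run longest on
    the slow cores, so all threads make roughly equal progress."""
    most_fast = max(range(len(fast_time)), key=lambda i: fast_time[i])
    most_slow = max(range(len(slow_time)), key=lambda i: slow_time[i])
    return most_fast, most_slow

# Thread 0 has monopolized the fast cores while thread 2 has been stuck on
# slow cores; the balancer swaps them.
fast = [90, 40, 10]
slow = [10, 40, 90]
print(speed_balance(fast, slow))  # -> (0, 2)
```

Repeating this swap periodically equalizes each thread's time on both core classes, which is what lets synchronization-heavy parallel applications avoid stragglers.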

51 citations


Patent
09 Nov 2010
TL;DR: In this paper, an incentive is created from metadata associated with the queue and offered to the registered customer to urge the customer to take a queue balancing action, such as leaving the queue when the customer wait time exceeds a threshold maximum wait time and the tracking device is within the queue.
Abstract: Embodiments of the invention provide for managing attraction attendance levels through tracking current attendance levels and notifying patrons of incentives to alter their attraction selections in real-time. Examples identify an attraction queue that has a customer wait time failing to meet a threshold and determine a geographic relationship of a tracking device associated with a registered customer to the queue. Accordingly, an incentive is created from metadata associated with the queue and offered to the registered customer to urge the registered customer to take a queue balancing action. The queue balancing action may be leaving the queue if the customer wait time exceeds a threshold maximum wait time and the tracking device is within the queue, or choosing to enter the queue if the customer wait time is less than a threshold minimum wait time and the tracking device is outside of the queue.
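The decision logic described above reduces to two threshold checks. A minimal sketch, with hypothetical names (`wait`, `t_min`, `t_max`, `in_queue`) standing in for the patent's wait-time metadata and tracking-device geography:

```python
def queue_balancing_action(wait, t_min, t_max, in_queue):
    """Return the incentive to offer a tracked customer, or None.

    wait     -- current customer wait time for the attraction queue
    t_min    -- threshold minimum wait time
    t_max    -- threshold maximum wait time
    in_queue -- whether the tracking device is within the queue
    """
    if in_queue and wait > t_max:
        return "offer incentive to leave queue"
    if not in_queue and wait < t_min:
        return "offer incentive to enter queue"
    return None

print(queue_balancing_action(wait=75, t_min=10, t_max=60, in_queue=True))
# -> offer incentive to leave queue
```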

29 citations


Journal ArticleDOI
TL;DR: This paper considers a single-server cyclic polling system consisting of two queues where the server is delayed by a random switch-over time between visits to successive queues, and studies the cycle time distribution, the waiting times for each customer type, the joint queue length distribution at polling epochs, and the steady-state marginal queue length distributions.
Abstract: In this paper we consider a single-server cyclic polling system consisting of two queues Between visits to successive queues, the server is delayed by a random switch-over time Two types of customers arrive at the first queue: high and low priority customers For this situation the following service disciplines are considered: gated, globally gated, and exhaustive We study the cycle time distribution, the waiting times for each customer type, the joint queue length distribution at polling epochs, and the steady-state marginal queue length distributions for each customer type

21 citations


Patent
10 May 2010
TL;DR: In this article, a command is received from a first agent via a first predetermined memory-mapped register, the first agent being one of multiple agents representing software processes, each being executed by one of the processor cores of a network processor in a network element.
Abstract: A command is received from a first agent via a first predetermined memory-mapped register, the first agent being one of multiple agents representing software processes, each being executed by one of processor cores of a network processor in a network element. A first queue associated with the command is identified based on the first predetermined memory-mapped register. A pointer is atomically read from a first hardware-based queue state register associated with the first queue. Data is atomically accessed at a memory location of the memory based on the pointer. The pointer stored in the first hardware-based queue state register is atomically updated, including incrementing the pointer of the first hardware-based queue state register, reading a queue size of the queue from a first hardware-based configuration register associated with the first queue, and wrapping around the pointer if the pointer reaches an end of the first queue based on the queue size.
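The pointer update at the heart of the claim — increment, then wrap when the end of the queue is reached — can be modeled in a few lines. The hardware queue state and configuration registers are modeled here as plain Python values; this is an illustration, not the patented mechanism:

```python
def advance_pointer(pointer, queue_size):
    """Increment the queue pointer and wrap it to 0 when it reaches the
    end of the queue, mirroring the atomic read-modify-write described
    above (queue_size plays the role of the configuration register)."""
    pointer += 1
    if pointer >= queue_size:
        pointer = 0
    return pointer

p = 0
for _ in range(5):            # a queue of size 4 wraps after the 4th slot
    p = advance_pointer(p, queue_size=4)
print(p)  # -> 1
```

In the patent this sequence is atomic with respect to the other agents; a software analogue would need a lock or an atomic compare-and-swap around the whole update.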

13 citations


Patent
30 Nov 2010
TL;DR: In this paper, a method for verifying software includes accessing a job queue, accessing a resource queue, and assigning a job from the job queue to a resource from the resource queue if an addition is made to the job queue or to the resource queue.
Abstract: A method for verifying software includes accessing a job queue, accessing a resource queue, and assigning a job from the job queue to a resource from the resource queue if an addition is made to the job queue or to the resource queue. The job queue includes an indication of one or more jobs to be executed by a worker node, each job indicating a portion of a code to be verified. The resource queue includes an indication of one or more worker nodes available to verify a portion of software. The resource is selected by determining the best match for the characteristics of the selected job among the resources in the resource queue.
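The "best match" selection step can be sketched as an overlap score between a job's required characteristics and each worker node's advertised ones. All names here are illustrative assumptions, not the patent's terms:

```python
def best_match(job_needs, resources):
    """Pick the worker node whose advertised characteristics overlap most
    with the job's requirements (both are sets of characteristic tags)."""
    return max(resources, key=lambda r: len(job_needs & resources[r]))

resources = {
    "node-a": {"linux", "x86"},
    "node-b": {"linux", "x86", "gpu"},
}
print(best_match({"linux", "gpu"}, resources))  # -> node-b
```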

10 citations


Patent
Brian Tunning1
05 Apr 2010
TL;DR: In this article, the authors provide techniques for dynamically re-ordering operation requests that have previously been submitted to a queue management unit, by sending one or more priority-change messages for operations that have already been queued.
Abstract: Techniques are provided for dynamically re-ordering operation requests that have previously been submitted to a queue management unit. After the queue management unit has placed multiple requests in a queue to be executed in an order that is based on priorities that were assigned to the operations, the entity that requested the operations (the “requester”) sends one or more priority-change messages. The one or more priority-change messages include requests to perform operations that have already been queued. For at least one of the operations, the priority assigned to the operation in the subsequent request is different from the priority that was assigned to the same operation when that operation was initially queued for execution. Based on the change in priority, the operation whose priority has changed is placed at a different location in the queue, relative to the other operations in the queue that were requested by the same requester.
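A minimal sketch of this re-ordering, assuming the queue management unit keeps its queue sorted by (priority, submission order); the class and method names are illustrative, not the patent's:

```python
import itertools

class RequestQueue:
    """Toy queue management unit: lower priority number runs first, and
    submission order breaks ties within a priority level."""

    def __init__(self):
        self._seq = itertools.count()
        self._items = []                  # (priority, seq, operation)

    def submit(self, op, priority):
        self._items.append((priority, next(self._seq), op))
        self._items.sort()

    def change_priority(self, op, new_priority):
        """Handle a priority-change message: re-place an already-queued
        operation according to its new priority."""
        for i, (prio, seq, name) in enumerate(self._items):
            if name == op:
                self._items[i] = (new_priority, seq, name)
                break
        self._items.sort()

    def order(self):
        return [name for _, _, name in self._items]

q = RequestQueue()
q.submit("write-a", priority=2)
q.submit("write-b", priority=2)
q.change_priority("write-b", new_priority=1)   # write-b now jumps ahead
print(q.order())  # -> ['write-b', 'write-a']
```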

9 citations


Patent
Yi Yang1, Wei Huang1, Mingshi Sun1
22 Apr 2010
TL;DR: In this paper, the authors proposed a queue scheduling method and apparatus that enables the scheduling of any number of queues, and supports expanding the number of queues without changing the hardware implementation logic core.
Abstract: A queue scheduling method and apparatus are disclosed in the embodiments of the present invention. The method comprises: one or more queues are indexed by a first circular linked list; each queue is visited in turn via the front pointer of the first circular linked list, and the value obtained by subtracting the size of the unit to be scheduled at the head of the queue from the queue's weight middle value is treated as the residual weight middle value of the queue; when the weight middle value of a queue in the first circular linked list is less than the unit to be scheduled at its head, the queue is removed from the first circular linked list and its weight middle value is updated with the sum of a set weight value and the residual weight middle value; queues removed from the first circular linked list are linked onto a second circular linked list. The present invention enables the scheduling to support any number of queues, and supports expanding the number of queues without changing the hardware implementation logic core.
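This is essentially a deficit-style weighted scheduler with two lists. A rough model, under assumptions of my own (each queue is a list of unit sizes, the two circular linked lists are modeled as deques, and "credit" stands in for the weight middle value):

```python
from collections import deque

def schedule_round(queues, weights, credit):
    """Serve each active queue while its credit covers the head unit; a
    queue whose credit no longer covers its head unit is recharged
    (configured weight + leftover credit) and parked on the second list
    until the next round."""
    active, parked, served = deque(queues), deque(), []
    while active:
        q = active.popleft()
        while queues[q] and credit[q] >= queues[q][0]:
            credit[q] -= queues[q][0]
            served.append((q, queues[q].pop(0)))
        if queues[q]:                      # head unit too big: recharge
            credit[q] += weights[q]
            parked.append(q)
    return served, parked

queues = {"q1": [3, 3], "q2": [5]}
weights = {"q1": 4, "q2": 4}
credit = dict(weights)
served, parked = schedule_round(queues, weights, credit)
print(served, list(parked))  # -> [('q1', 3)] ['q1', 'q2']
```

Because the lists only hold queue identifiers, adding more queues is just appending more nodes, which is the property the patent claims lets the queue count grow without changing the hardware logic core.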

6 citations


Patent
02 Jun 2010
TL;DR: In this article, a method, system, and medium are provided for enabling a queue manager to handle messages written with a character set the queue manager is not configured to handle, by activating a conversion utility that converts messages from one character set into another.
Abstract: A method, system, and medium are provided for enabling a queue manager to handle messages written with a character set the queue manager is not configured to handle. In a messaging-middleware environment, queue managers receive messages from applications and communicate the messages to queues where they can be retrieved. Upon receiving a message written in a character set the queue manager is not configured to handle, the queue manager may activate a conversion utility that converts messages from the one character set into a character set the queue manager can handle. The converted message may be returned to the queue manager and stored in the queue to which the message was addressed.
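The conversion step can be illustrated with Python's built-in codecs standing in for the conversion utility (the function name and charset choices are assumptions for the example):

```python
def convert_message(raw: bytes, source_charset: str,
                    target_charset: str = "utf-8") -> bytes:
    """Decode a message with its declared character set and re-encode it
    in one the queue manager is configured to handle, so the converted
    message can be returned to the queue manager and stored."""
    return raw.decode(source_charset).encode(target_charset)

msg = "café".encode("latin-1")             # arrives as ISO-8859-1 bytes
print(convert_message(msg, "latin-1").decode("utf-8"))  # -> café
```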

4 citations


Patent
Qiuming Gao1
22 Jun 2010
TL;DR: In this article, a concurrent instruction operation method and device are provided. The method includes establishing a concurrent queue, and setting a queue base address and a queue maximum length for the concurrent queue.
Abstract: A concurrent instruction operation method and device are provided. The method includes: establishing a concurrent queue, and setting a queue base address and a queue maximum length of the concurrent queue; generating concurrent operation instructions according to a length of data that needs to be written or read as well as the queue base address and queue maximum length of the concurrent queue; and executing the concurrent operation instructions in the concurrent queue, and completing a data operation to the concurrent queue.

3 citations


Patent
30 Apr 2010
TL;DR: In this paper, the authors describe a dynamic work queue for applications, in which a message queue is included in a data store accessible to one or more computing devices, and one or more services are configured, in response to a client request, to obtain and process messages for the application from the queue.
Abstract: Disclosed are various embodiments for a dynamic work queue for applications. A message queue is included in a data store accessible to one or more computing devices. One or more network pages are generated in the one or more computing devices indicating which ones of multiple services are configured to obtain and process messages for an application from the queue. Each of the services is executed on a respective one of multiple servers. A request is obtained from a client to configure one or more of the services to obtain and process messages for the application from the queue. The one or more of the services are configured to obtain and process messages for the application from the queue in response to the request.

2 citations


Patent
11 Nov 2010
TL;DR: In this paper, a thread of a process is placed in a run queue associated with a processor and data is added to the thread indicating a time that the thread was placed into the run queue.
Abstract: A method, computer system, and computer program product for identifying a transient thread. A thread of a process is placed in a run queue associated with a processor. Data is added to the thread indicating a time that the thread was placed into the run queue.
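A toy model of the mechanism: tag the thread with its enqueue time so a transient thread — one that spends only a short time on the run queue — can be identified later. The threshold and field names are illustrative assumptions:

```python
import time

def enqueue(run_queue, thread):
    """Place a thread on the run queue, recording when it was enqueued."""
    thread["enqueued_at"] = time.monotonic()
    run_queue.append(thread)

def is_transient(thread, threshold=0.1):
    """Deem a thread transient if it has spent less than `threshold`
    seconds on the run queue (an assumed criterion for this sketch)."""
    return time.monotonic() - thread["enqueued_at"] < threshold

rq = []
t = {"name": "worker-1"}
enqueue(rq, t)
print(is_transient(t))  # -> True (checked immediately after enqueue)
```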

Proceedings ArticleDOI
11 Jul 2010
TL;DR: Experimental results prove the effectiveness of the proposed dynamic balancing algorithm, which uses the CPU run queue length to appraise processor load state and is well suited to compute-intensive tasks.
Abstract: A distributed-control, sender-initiated dynamic balancing algorithm is proposed, aimed at solving the load imbalance problem in homogeneous multi-processor systems. The proposed algorithm uses the CPU run queue length to appraise processor load state, the process runtime to select which load is suitable to be migrated, and a relatively self-contained message mechanism to diffuse processor load state and load balancing requirements; it is well suited to compute-intensive tasks. Experimental results prove the effectiveness of the proposed algorithm.
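The two building blocks — judging load state from run queue length and picking a migration target — can be sketched as follows. The thresholds are illustrative, not from the paper:

```python
def load_state(run_queue_len, low=2, high=5):
    """Appraise a processor's load state from its run queue length
    (thresholds are assumed for this sketch)."""
    if run_queue_len >= high:
        return "overloaded"
    if run_queue_len <= low:
        return "underloaded"
    return "normal"

def pick_receiver(queue_lengths):
    """Sender-initiated step: an overloaded processor migrates work toward
    the processor with the shortest run queue."""
    return min(queue_lengths, key=queue_lengths.get)

lengths = {"cpu0": 7, "cpu1": 1, "cpu2": 3}
print(load_state(lengths["cpu0"]), pick_receiver(lengths))  # -> overloaded cpu1
```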

Patent
09 Dec 2010
TL;DR: In this paper, an approach is provided where a put request is received to put a data entry into a queue, and a detection is made that a primary queue data structure corresponding to the queue is damaged.
Abstract: An approach is provided where a put request is received to put a data entry into a queue. A detection is made that a primary queue data structure corresponding to the queue is damaged. If an alternate queue data structure corresponding to the queue has not yet been created, then the alternate queue data structure is dynamically created. The data entry is then stored in the alternate queue data structure.
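The fallback path can be illustrated with a primary structure whose put may fail and an alternate created lazily on first damage; the class names are assumptions for the sketch:

```python
class QueuePair:
    """Toy model: a queue backed by a primary data structure, with an
    alternate structure created dynamically only when the primary is
    detected to be damaged."""

    def __init__(self, primary):
        self.primary = primary
        self.alternate = None

    def put(self, entry):
        try:
            self.primary.append(entry)
        except Exception:                 # primary structure damaged
            if self.alternate is None:    # create alternate on first use
                self.alternate = []
            self.alternate.append(entry)

class Damaged(list):
    def append(self, item):
        raise RuntimeError("queue data structure damaged")

qp = QueuePair(Damaged())
qp.put("entry-1")
print(qp.alternate)  # -> ['entry-1']
```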

Proceedings ArticleDOI
01 Jan 2010
TL;DR: The simulation results indicate that the proposed DSA scheme can obtain a good tradeoff between average waiting time and interrupted probability; it may reduce the average waiting time of higher priority users, and although the average waiting time of lower priority users increases, their interrupted probability is reduced.
Abstract: In this paper, we propose a new Dynamic Spectrum Access (DSA) scheme which employs a buffer queue with priority. Two coexisting systems, a primary system and a secondary system, are introduced, which share the same bandwidth. When all the bands are busy, a newly arriving secondary user (SU) is inserted into the buffer queue according to the user's priority. The average waiting time and the average queue length of this model are studied. The simulation results indicate that our model can obtain a good tradeoff between average waiting time and interrupted probability. It may reduce the average waiting time of higher priority users; although the average waiting time of lower priority users increases, their interrupted probability is reduced.
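The priority insertion into the buffer queue can be modeled with a heap, with arrival order breaking ties so same-priority users keep FIFO order. Names and priority values are illustrative:

```python
import heapq, itertools

arrival = itertools.count()
buffer_queue = []

def su_arrives(name, priority):
    """All bands busy: insert the newly arriving secondary user into the
    buffer queue by priority (lower number = higher priority)."""
    heapq.heappush(buffer_queue, (priority, next(arrival), name))

su_arrives("su-a", priority=2)
su_arrives("su-b", priority=2)
su_arrives("su-c", priority=1)        # higher priority jumps the queue

order = [heapq.heappop(buffer_queue)[2] for _ in range(3)]
print(order)  # -> ['su-c', 'su-a', 'su-b']
```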

Patent
11 May 2010
TL;DR: In this paper, a gatekeeper is triggered by an incoming system request; based upon the queue size associated with the server and the expiration of elements of the queue, the gatekeeper determines whether to forward the incoming system request to the server.
Abstract: Various embodiments of systems and methods for dynamically protecting a server during sudden surges in traffic are described herein. A gatekeeper is triggered by an incoming system request. Based upon the queue size associated with the server and the expiration of elements of the queue, the gatekeeper determines whether to forward the incoming system request to the server. The queue size comprises a maximum allowable load within a time window. Expired elements in the queue are removed by comparing the difference between the current time and the time-stamped time with the time window. If the queue is not full, or if the queue is full but one of the elements in the queue has expired, the incoming system request may be forwarded to the server. If the queue is full and there are no expired elements in the queue, the incoming system request may be dropped.
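This gatekeeper behaves like a sliding-window limiter: the queue holds time stamps of admitted requests, capped at the maximum allowable load per window. A minimal sketch (time is passed in explicitly to keep the example deterministic; the class name is an assumption):

```python
from collections import deque

class Gatekeeper:
    def __init__(self, max_load, window):
        self.max_load = max_load          # maximum allowable load per window
        self.window = window              # time window in seconds
        self.stamps = deque()             # time stamps of admitted requests

    def admit(self, now):
        # Remove expired elements: current time minus time-stamped time
        # meets or exceeds the time window.
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) < self.max_load:
            self.stamps.append(now)
            return True                   # forward to server
        return False                      # full, nothing expired: drop

gk = Gatekeeper(max_load=2, window=10.0)
decisions = [gk.admit(t) for t in (0.0, 1.0, 2.0, 11.0)]
print(decisions)  # -> [True, True, False, True]
```

The third request is dropped because the queue is full with no expired entries; by the fourth, both earlier stamps have aged out of the window.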
