
Showing papers on "Run queue published in 2017"


Proceedings ArticleDOI
26 Jan 2017
TL;DR: A new linearizable multi-producer-multi-consumer queue is proposed, with wait-free progress bounded by the number of threads and with wait-free bounded memory reclamation; it is easy to plug in other algorithms for either the enqueue or dequeue method.
Abstract: Queues are a widely deployed data structure. They are used extensively in many multithreaded applications, and as a communication mechanism between threads or processes. We propose a new linearizable multi-producer-multi-consumer queue, named the Turn queue, with wait-free progress bounded by the number of threads and with wait-free bounded memory reclamation. Its main characteristics are: a simple algorithm that performs no memory allocation apart from creating the node placed in the queue; a new wait-free consensus algorithm using only the atomic compare-and-swap (CAS) instruction; and easy interchangeability with other algorithms for either the enqueue or dequeue method.
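The CAS-based linking step in an MPMC linked queue can be sketched as below. This is a minimal illustration only: Python has no native compare-and-swap, so a lock-backed `AtomicRef` stands in for it, and the paper's wait-free turn/consensus mechanism (which bounds helping by the number of threads) is omitted; all class and method names here are hypothetical, not the authors' code.

```python
import threading

class AtomicRef:
    """Simulated compare-and-swap cell (Python lacks a native CAS primitive)."""
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()
    def get(self):
        return self._value
    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

class Node:
    def __init__(self, item):
        self.item = item
        self.next = AtomicRef(None)

class TurnStyleQueue:
    """Sketch of a linked queue whose enqueue races are resolved by CAS.

    Only the CAS linking step is shown; the paper's wait-free turn
    mechanism and bounded memory reclamation are not modeled."""
    def __init__(self):
        sentinel = Node(None)
        self.head = AtomicRef(sentinel)
        self.tail = AtomicRef(sentinel)

    def enqueue(self, item):
        node = Node(item)  # the only allocation, as the abstract notes
        while True:
            tail = self.tail.get()
            if tail.next.compare_and_swap(None, node):
                self.tail.compare_and_swap(tail, node)  # swing tail forward
                return
            # another thread won the CAS: help advance tail, then retry
            self.tail.compare_and_swap(tail, tail.next.get())

    def dequeue(self):
        while True:
            head = self.head.get()
            nxt = head.next.get()
            if nxt is None:
                return None  # queue is empty
            if self.head.compare_and_swap(head, nxt):
                return nxt.item
```

The design choice illustrated is that contention is resolved purely by CAS on the tail node's `next` pointer, so losers retry rather than block; the real algorithm replaces unbounded retries with a turn-based consensus to achieve wait-freedom.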

15 citations


Journal ArticleDOI
TL;DR: The results help engineers not only check whether a given cyclic polling system is stable, but also adjust parameters so that the system satisfies given requirements while remaining stable.
Abstract: The stability of a cyclic polling system, with a single server and two infinite-buffer queues, is considered. Customers arrive at the two queues according to independent batch Markovian arrival processes. The first queue is served according to the gated service discipline, and the second queue is served according to a state-dependent time-limited service discipline with the preemptive repeat-different property. The state dependence is that, during each cycle, the predetermined limited time of the server's visit to the second queue depends on the queue length of the first queue at the instant when the server last departed from the first queue. The mean of the predetermined limited time for the second queue either decreases or remains the same as the queue length of the first queue increases. Due to the two service disciplines, the customers in the first queue have higher service priority than those in the second queue, and the service fairness of customers with different service priority levels is also considered. In addition, the switchover times for the server traveling between the two queues are considered, and their means are both positive and finite. First, based on two embedded Markov chains at the cycle beginning instants, the sufficient and necessary condition for the stability of the cyclic polling system is obtained. Then, the calculation methods for the variables related to the stability condition are given. Finally, the influence of some parameters on the stability condition of the cyclic polling system is analyzed. The results are useful for engineers, not only for checking whether a given cyclic polling system is stable, but also for adjusting parameters so that the system satisfies given requirements while remaining stable.
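The cycle structure described above (gated service at queue 1, then a queue-1-dependent time limit at queue 2) can be sketched with a toy simulation. This is not the paper's model: Poisson arrivals and deterministic service stand in for batch Markovian arrivals, and `limit_fn` is an illustrative choice of a limit that shrinks as queue 1 grows; it only shows the shape of one cycle.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler (returns 0 for lam <= 0)."""
    if lam <= 0:
        return 0
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_polling(cycles, lam1, lam2, service=1.0,
                     limit_fn=lambda n1: max(1.0, 10.0 - n1), seed=0):
    """Toy simulation of the two-queue cyclic polling structure.

    Queue 1 is served gated: only customers present when its visit starts
    are served. Queue 2 is then served for at most a time limit that
    depends on queue 1's length when the server departed from queue 1."""
    rng = random.Random(seed)
    q1 = q2 = 0
    for _ in range(cycles):
        q1 += poisson(rng, lam1)      # arrivals while the server was away
        q2 += poisson(rng, lam2)
        gated = q1                    # gate closes at the visit start
        # arrivals to queue 1 during its own service period
        q1 = poisson(rng, lam1 * gated * service)
        budget = limit_fn(q1)         # limit set by queue 1 at departure
        served = min(q2, int(budget // service))
        q2 -= served
    return q1, q2
```

With light load the queue lengths stay small, mirroring the stable regime the paper characterizes; pushing `lam2` up against the shrinking budget is where the stability condition bites.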

11 citations


Patent
02 Feb 2017
TL;DR: A controller area network has a plurality of nodes in communication through a bus. The nodes have controllers and computer-readable instructions that, when executed, receive a new message, insert it into the queue in priority order if the queue is not full, and refuse it if the queue is full and its priority is lower than the priorities of all current messages in the queue.
Abstract: A controller area network has a plurality of nodes in communication through a bus. The nodes have controllers and computer readable instructions that, when executed, perform the steps of: receiving a new message; inserting the new message into the queue in order of priority if the queue is not full; refusing the new message if the queue is full and the priority of the new message is lower than the priorities of current messages in the queue; inserting the new message into the queue in order of priority if the queue is full and the priority of the new message is higher than a priority of at least one of the current messages; removing the new message from the queue if the current time exceeds an expiration indicator; sending the new message to the controller for transmission and holding the new message in the queue during transmission; and removing the new message from the queue after successful transmission.
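The queue rules in the claim can be sketched as follows. This is a minimal reading of the steps, with assumptions the patent text leaves open: lower numeric value means higher priority (the CAN convention), and when a full queue admits a higher-priority message, the lowest-priority message is evicted; all names are hypothetical.

```python
class CanTxQueue:
    """Sketch of the patent's priority-ordered transmit queue.

    Lower numeric priority value = higher priority (CAN convention,
    assumed). Eviction of the lowest-priority message on overflow is
    also an assumption."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.messages = []  # kept sorted, highest priority (lowest value) first

    def offer(self, priority, payload, expires_at):
        msg = {"priority": priority, "payload": payload, "expires_at": expires_at}
        if len(self.messages) < self.capacity:
            self._insert(msg)
            return True
        worst = self.messages[-1]
        if priority >= worst["priority"]:
            return False             # refuse: not higher than any queued message
        self.messages.pop()          # evict the lowest-priority message
        self._insert(msg)
        return True

    def _insert(self, msg):
        i = 0
        while i < len(self.messages) and self.messages[i]["priority"] <= msg["priority"]:
            i += 1
        self.messages.insert(i, msg)  # insertion in priority order

    def purge_expired(self, now):
        self.messages = [m for m in self.messages if m["expires_at"] > now]

    def transmit_next(self, now, send):
        """Send the head message, holding it in the queue until success."""
        self.purge_expired(now)
        if not self.messages:
            return None
        msg = self.messages[0]        # held in the queue during transmission
        if send(msg):
            self.messages.pop(0)      # removed only after successful transmission
            return msg
        return None
```

Holding the message in the queue until `send` succeeds matches the claim's retry-friendly behavior: a failed transmission leaves the message at the head for the next attempt, unless it expires first.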

5 citations


Patent
08 Mar 2017
TL;DR: A method for processing overlapping node events in a distributed system: a target node receives a first node event and generates a first node event task; if the task conflicts neither with the tasks in the run queue of the target node's distributed lock manager (DLM) nor with those in the DLM's conflict queue, it is placed in the run queue, which stores the one or more node event tasks currently being executed.
Abstract: The invention provides a method for processing overlapping node events in a distributed system. The method comprises the following steps: a target node receives a first node event and generates a first node event task; the target node determines that the first node event task conflicts neither with the node event tasks in a run queue of its distributed lock manager (DLM) nor with those in the DLM's conflict queue, and puts the first node event task into the run queue, where the run queue stores one or more node event tasks that are being executed and the conflict queue stores node event tasks waiting for execution; and the first node event task in the run queue is executed.
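The admission check described above can be sketched as a small scheduler. The conflict rule itself is left to the DLM in the patent, so `conflicts(a, b)` here is a hypothetical predicate supplied by the caller, and the promotion of waiters on completion is an assumed (but natural) complement to the claimed steps.

```python
class DlmEventScheduler:
    """Sketch of the patent's run-queue / conflict-queue admission check.

    `conflicts(a, b)` is a caller-supplied, hypothetical predicate; the
    patent does not specify the conflict rule."""
    def __init__(self, conflicts):
        self.conflicts = conflicts
        self.run_queue = []       # tasks currently executing
        self.conflict_queue = []  # tasks waiting for execution

    def submit(self, task):
        clashes = any(self.conflicts(task, t)
                      for t in self.run_queue + self.conflict_queue)
        if clashes:
            self.conflict_queue.append(task)  # wait behind conflicting work
            return False
        self.run_queue.append(task)           # admit for immediate execution
        return True

    def complete(self, task):
        """Finish a running task, then promote waiters that no longer conflict."""
        self.run_queue.remove(task)
        still_waiting = []
        for t in self.conflict_queue:
            if any(self.conflicts(t, r) for r in self.run_queue):
                still_waiting.append(t)
            else:
                self.run_queue.append(t)      # promoted tasks also guard later waiters
        self.conflict_queue = still_waiting
```

Checking the new task against both queues, not just the run queue, is the detail the claim emphasizes: it prevents a late event from jumping ahead of an earlier conflicting event that is still waiting.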

4 citations


Proceedings ArticleDOI
13 Apr 2017
TL;DR: This research explores an extension of MLFQ-NS with the new idea of Intelligent Mitigation (MLFQ-IM), which redirects time not just to the final queue Q(N) but also to Q(N-1). Simulation is used to study its effectiveness and safety in reallocating a percentage of CPU time to the two lowest-priority queues, Q(N) and Q(N-1), under heavy load, at and exceeding the maximum CPU processing capacity.
Abstract: The performance of Multi-Level Feedback Queues (MLFQ) has been explored as a mechanism for allocating CPU time in multiprogramming operating systems. MLFQ-based systems have the advantage of not requiring data to be saved and updated for each process after each burst of CPU time; thus, the overhead computation time to run the scheduling algorithm is small. But MLFQs, along with other algorithms, risk starvation of processes needing large CPU bursts, which drop down to the lowest-priority queue Q(N) of the stack of queues. This research extends previous work investigating the safety of reallocating a small quantity of CPU time from higher-priority queues to the final queue in order to prevent starvation, called MLFQ-No Starvation (MLFQ-NS). It explores an extension of MLFQ-NS with the new idea of Intelligent Mitigation (MLFQ-IM), which redirects time not just to the final queue Q(N) but also to Q(N-1). Through simulation, this work studies the effectiveness and safety of reallocating a percentage of CPU time to the two lowest-priority queues, Q(N) and Q(N-1), under heavy load, at and exceeding the maximum CPU processing capacity.
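The reallocation idea can be sketched as a share adjustment over the queue levels. The even split between Q(N) and Q(N-1) is an assumption; the paper determines the effective split through simulation, and the function name and share representation are illustrative.

```python
def reallocate_time(base_shares, mitigation_pct, split=0.5):
    """Sketch of MLFQ-IM-style mitigation: take `mitigation_pct` of the
    CPU share of every queue above the two lowest and redirect it to
    Q(N) and Q(N-1).

    `base_shares` lists CPU shares from highest priority (index 0) to
    lowest (index -1). The even `split` between the two target queues
    is an assumption, not the paper's tuned value."""
    shares = list(base_shares)
    n = len(shares)
    taken = 0.0
    for i in range(n - 2):            # every queue above the two lowest
        cut = shares[i] * mitigation_pct
        shares[i] -= cut
        taken += cut
    shares[n - 1] += taken * split        # Q(N), lowest priority
    shares[n - 2] += taken * (1 - split)  # Q(N-1), next lowest
    return shares
```

The total CPU share is conserved; only its distribution changes, which is why the studies frame the question as one of safety (how much can be taken from high-priority queues without harming interactive work) rather than capacity.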

3 citations


01 Jan 2017
TL;DR: This contribution presents an overview of the open questions on the key operational aspects and performance figures of the LHC during Run 3 and HL-LHC era, which could be tackled and answered in the current Run 2.
Abstract: This contribution presents an overview of the open questions on the key operational aspects and performance figures of the LHC during Run 3 and HL-LHC era, which could be tackled and answered in the current Run 2.

2 citations


Journal ArticleDOI
TL;DR: In the proposed algorithm, Smart Job First Multilevel Feedback Queue (SJFMLFQ) with Smart Time Quantum (STQ), processes are arranged in ascending order of their CPU execution time and a Smart Priority Factor (SPF) is calculated, on which processes are scheduled in the queue.
Abstract: The multilevel feedback queue (MLFQ) scheduling algorithm is based on the concept of several queues through which a process moves. In earlier scenarios, three queues are defined for scheduling: the two higher-level queues run Round Robin (RR) scheduling and the last-level queue runs FCFS (First Come First Serve). A fixed time quantum is defined for RR scheduling, and the scheduling of a process depends on its arrival time in the ready queue. Previously, a lot of work has been done on MLFQ. In our proposed algorithm, Smart Job First Multilevel Feedback Queue (SJFMLFQ) with Smart Time Quantum (STQ), the processes are arranged in ascending order of their CPU execution time and a Smart Priority Factor (SPF) is calculated, on which processes are scheduled in the queue. The process with the lowest SPF value is scheduled first and the process with the highest SPF value is scheduled last in the queue. Then a smart time quantum (STQ) is calculated for each queue. As a result, we found decreased turnaround time and average waiting time and increased throughput compared with previous approaches, and hence an increase in overall performance.
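The ordering step can be sketched as below. The abstract does not give the SPF or STQ formulas, so both are loudly assumed here: SPF is taken as arrival time plus burst time, and STQ as the mean burst of the ordered queue; they illustrate the pipeline (sort by burst, compute a factor, order by it, derive a quantum), not the paper's actual formulas.

```python
def schedule_sjfmlfq(processes):
    """Sketch of the SJFMLFQ ordering pipeline.

    `processes` is a list of (pid, arrival_time, burst_time) tuples.
    The SPF formula (arrival + burst) and the STQ formula (mean burst)
    are illustrative assumptions; the paper defines its own."""
    ordered = sorted(processes, key=lambda p: p[2])  # ascending CPU burst
    # assumed SPF: smaller means scheduled earlier
    spf = {pid: arrival + burst for pid, arrival, burst in ordered}
    schedule = sorted(ordered, key=lambda p: spf[p[0]])  # lowest SPF first
    # assumed smart time quantum: mean burst of the queue
    stq = sum(p[2] for p in schedule) / len(schedule)
    return schedule, stq
```

A quantum derived from the actual bursts in the queue is what lets the approach shrink turnaround time relative to a fixed RR quantum: short jobs finish within one quantum instead of cycling.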

2 citations


Patent
22 Jun 2017
TL;DR: In this paper, the authors describe methods and systems for provisioning publisher-subscriber queues, which include receiving, by a computing apparatus, a data queue request from a publisher; the computing apparatus may generate at least one of a publisher data queue, a publisher information queue, or an access queue.
Abstract: Methods and systems related to implementations of provisioning publisher-subscriber queues are described. The implementations include receiving, by a computing apparatus, a data queue request from a publisher. The computing apparatus may generate at least one of a publisher data queue, a publisher information queue, or an access queue. The computing apparatus may further control access to the publisher data queue based on the access queue and the publisher information queue.
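The provisioning flow can be sketched as below. The three structures (data, information, access) come from the abstract, but their contents, the class and method names, and the use of a set for access control are all assumptions about what the patent leaves unspecified.

```python
class PubSubProvisioner:
    """Sketch of the patent's provisioning flow: on a publisher's
    data-queue request, create a data queue, an information queue, and
    an access queue, then gate reads on the access queue.

    All structure beyond the three queues is assumed."""
    def __init__(self):
        self.queues = {}

    def handle_request(self, publisher_id):
        """Provision the three per-publisher structures on request."""
        self.queues[publisher_id] = {
            "data": [],      # published payloads
            "info": [],      # metadata about the publisher/stream
            "access": set(), # subscriber ids allowed to read
        }

    def grant(self, publisher_id, subscriber_id):
        self.queues[publisher_id]["access"].add(subscriber_id)

    def publish(self, publisher_id, item):
        self.queues[publisher_id]["data"].append(item)

    def read(self, publisher_id, subscriber_id):
        """Access to the data queue is controlled by the access queue."""
        q = self.queues[publisher_id]
        if subscriber_id not in q["access"]:
            raise PermissionError("subscriber not authorized")
        return list(q["data"])
```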

1 citation


Patent
26 Jan 2017
TL;DR: In this article, a plurality of queues are grouped into a predefined number of queue ranges, each queue range having an associated queue range ready signal, and a queue process sequencer determines which queue range is ready for processing based on the queue range ready signals.
Abstract: Memory systems may include a plurality of queues, a queue ready indicator suitable for grouping the plurality of queues into a predefined number of queue ranges, each queue range having associated with it a queue range ready signal, and setting a queue range ready signal to ready when each queue in the queue range associated with the queue range ready signal is ready for processing, and a queue process sequencer suitable for determining a queue range ready for processing based on the queue range ready signals, and processing a queue within the queue range determined to be ready for processing.
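The range-ready mechanism can be sketched as below. Grouping by contiguous index and the class/method names are assumptions; the claim only requires that a range's ready signal is set when every queue in it is ready, and that the sequencer picks a ready range.

```python
class QueueRangeIndicator:
    """Sketch of the patent's queue-range ready signaling.

    Queues are grouped into a fixed number of equal contiguous ranges
    (an assumption); a range reads as ready only when every queue in
    it is ready."""
    def __init__(self, num_queues, num_ranges):
        assert num_queues % num_ranges == 0
        self.num_ranges = num_ranges
        self.range_size = num_queues // num_ranges
        self.ready = [False] * num_queues

    def set_ready(self, queue_idx, is_ready=True):
        self.ready[queue_idx] = is_ready

    def range_ready(self, range_idx):
        """Ready signal: all queues in the range are ready."""
        start = range_idx * self.range_size
        return all(self.ready[start:start + self.range_size])

    def next_ready_range(self):
        """Sequencer step: pick the first range whose signal is ready."""
        for r in range(self.num_ranges):
            if self.range_ready(r):
                return r
        return None
```

The point of the grouping is that the sequencer inspects one signal per range instead of one per queue, shrinking the ready-polling work from the number of queues to the (predefined, smaller) number of ranges.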

1 citation