Scalably scheduling processes with arbitrary speedup curves
Citations
A survey of hard real-time scheduling for multiprocessor systems
Scheduling heterogeneous processors isn't as easy as you think
A tutorial on amortized local competitiveness in online scheduling
SelfishMigrate: A Scalable Algorithm for Non-clairvoyantly Scheduling Heterogeneous Processors
Competitive algorithms from competitive equilibria: non-clairvoyant scheduling under polyhedral constraints
References
Speed is as powerful as clairvoyance
Optimal Time-Critical Scheduling via Resource Augmentation
Nonclairvoyant scheduling
Scheduling in the dark
Frequently Asked Questions (14)
Q2. Why did it take Opt more than time dt to finish the work w?
Because the speedup function Γ of the phase containing the work w is non-decreasing, it took Opt(J) more than time dt to finish the work w.
Q3. What is the speedup function of a job?
The speedup function Γ of a phase of a job specifies the rate at which that phase is processed as a function of the number p of processors allocated to it. A phase is parallelizable if its speedup function is Γ(p) = p: increasing the number of processors allocated to a parallelizable phase by a factor of s increases the rate of processing by a factor of s.
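As a rough illustration (not code from the paper), the difference between a parallelizable phase and a fully sequential one can be modeled by two simple speedup functions; the function names and the min(p, 1) form for the sequential case are illustrative assumptions:

```python
# Illustrative sketch (not from the paper): modeling phase speedup curves.
# A parallelizable phase has Gamma(p) = p; a sequential phase is modeled
# here as Gamma(p) = min(p, 1) -- extra processors give it no benefit.

def parallelizable(p: float) -> float:
    """Rate of a fully parallelizable phase on p processors."""
    return p

def sequential(p: float) -> float:
    """Rate of a sequential phase: at most one processor's worth of speed."""
    return min(p, 1.0)

def time_to_finish(work: float, speedup, p: float) -> float:
    """Time to finish `work` units of a phase run at rate speedup(p)."""
    return work / speedup(p)

print(time_to_finish(8.0, parallelizable, 4.0))  # 2.0
print(time_to_finish(8.0, sequential, 4.0))      # 8.0
```

Under this model, quadrupling the processors quarters the completion time of a parallelizable phase but leaves a sequential phase unchanged.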
Q4. What is the obvious scheduling algorithm in the multiprocessor setting?
The most obvious scheduling algorithm in the multiprocessor setting is Equi-partition (Equi), which splits the processors evenly among all processes.
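A minimal sketch of Equi's allocation rule (the function name and the choice to return a job-to-share mapping are illustrative assumptions, not the paper's code):

```python
# Illustrative sketch (not the paper's code): Equi-partition (Equi) on a
# machine with P processors gives each of the n alive jobs a P/n share.

def equi_allocation(alive_jobs, num_processors):
    """Return {job: processor share} splitting the processors evenly."""
    n = len(alive_jobs)
    if n == 0:
        return {}
    share = num_processors / n
    return {job: share for job in alive_jobs}

print(equi_allocation(["j1", "j2", "j3", "j4"], 8))  # each job gets 2.0
```

Note that Equi is nonclairvoyant by construction: the allocation depends only on which jobs are currently alive, not on their remaining work or speedup curves.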
Q5. What is the competitiveness of a randomized algorithm?
There is a randomized algorithm, Randomized Multi-Level Feedback Queues, that is O(log n)-competitive against an oblivious adversary [2, 14].
Q6. What is the corollary to Moore’s law?
The founder of chip maker Tilera asserts that a corollary to Moore’s law will be that the number of cores/processors will double every 18 months [15].
Q7. Why was LAPS used instead of SETF?
LAPS was used instead of the more obvious choice of SETF because the analysis of speed scaling algorithms generally requires amortized local competitiveness arguments, and it is not clear what potential function one should use with SETF.
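For intuition, LAPS (Latest Arrival Processor Sharing) can be sketched as follows, under the assumption (following Edmonds and Pruhs' definition of LAPS_β) that it shares the processors equally among the ⌈βn⌉ alive jobs that arrived most recently; the function name and data representation are illustrative:

```python
# Illustrative sketch (assumption: LAPS_beta shares the processors equally
# among the ceil(beta * n) alive jobs with the latest arrival times).
import math

def laps_allocation(alive_jobs, num_processors, beta=0.5):
    """alive_jobs: list of (job_id, release_time). Returns {job_id: share}."""
    n = len(alive_jobs)
    if n == 0:
        return {}
    k = math.ceil(beta * n)
    # Select the k most recently released jobs.
    latest = sorted(alive_jobs, key=lambda j: j[1], reverse=True)[:k]
    share = num_processors / k
    return {job_id: share for job_id, _ in latest}

alloc = laps_allocation([("a", 0), ("b", 3), ("c", 5), ("d", 7)], 8, beta=0.5)
print(alloc)  # {'d': 4.0, 'c': 4.0}
```

With β = 1, LAPS degenerates to Equi; smaller β concentrates the processors on recent arrivals, which is what makes a clean potential-function analysis possible.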
Q8. What is the known competitiveness result for broadcast scheduling?
In broadcast scheduling, if the web server has multiple unsatisfied requests for the same file, it need only broadcast that file once, simultaneously satisfying all the users who issued these requests. [11] showed how to convert any s-speed c-competitive nonclairvoyant algorithm for scheduling jobs with arbitrary speedup curves into a 2s-speed c-competitive algorithm for broadcast scheduling.
Q9. What is the purpose of this paper?
In this paper, the authors consider one such software technical challenge: developing operating system algorithms/policies for scheduling processes with varying degrees of parallelism on a multiprocessor.
Q10. What is the definition of a nonclairvoyant operating system scheduling algorithm?
An operating system scheduling algorithm generally needs to be nonclairvoyant, that is, the algorithm does not require internal knowledge about the jobs (for example, their work requirements), since such information is generally not available to the operating system.
Q11. Why may some jobs be significantly sped up when simultaneously run on multiple processors?
Some jobs may be considerably sped up when simultaneously run on multiple processors, while other jobs may not be sped up at all (this could be because the underlying algorithm is inherently sequential in nature, or because the process was not coded in a way that makes it easily parallelizable).
Q12. What is the meaning of the word nonclairvoyant?
A nonclairvoyant algorithm only knows when processes have been released and finished in the past, and which processes have been run on each processor at each time in the past.
Q13. What is the schedule quality of service metric?
In this paper the authors consider the schedule quality of service metric total response time, which for a schedule S is defined to be F(S) = ∑_{i=1}^{n} (C_i − r_i), where C_i is the completion time of job i and r_i is its release time.
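The metric is straightforward to compute given the completion and release times; a minimal sketch (function name and list representation are illustrative assumptions):

```python
# Illustrative sketch (not from the paper): total response time
# F(S) = sum over jobs i of (C_i - r_i), where C_i is job i's
# completion time and r_i its release time.

def total_response_time(completions, releases):
    """Sum of per-job response times C_i - r_i."""
    return sum(c - r for c, r in zip(completions, releases))

print(total_response_time([5.0, 9.0, 12.0], [0.0, 2.0, 4.0]))  # 20.0
```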
Q14. What is the case that X starts working on w′ with p_o processors?
The schedule X will start working on the work w′ with p_o processors when Opt(J) starts working on the work w, and then after X completes w′, X can let these p_o processors idle until Opt(J) completes w. Now consider the case that p_o ≥ p_a.