
Showing papers on "Scheduling (computing) published in 1990"


Journal ArticleDOI
TL;DR: A new algorithm is presented, the Sporadic Server algorithm, which greatly improves response times for soft-deadline aperiodic tasks and can guarantee hard deadlines for both periodic and aperiodic tasks.
Abstract: This thesis develops the Sporadic Server (SS) algorithm for scheduling aperiodic tasks in real-time systems. The SS algorithm is an extension of the rate monotonic algorithm which was designed to schedule periodic tasks. This thesis demonstrates that the SS algorithm is able to guarantee deadlines for hard-deadline aperiodic tasks and provide good responsiveness for soft-deadline aperiodic tasks while avoiding the schedulability penalty and implementation complexity of previous aperiodic service algorithms. It is also proven that the aperiodic servers created by the SS algorithm can be treated as equivalently-sized periodic tasks when assessing schedulability. This allows all the scheduling theories developed for the rate monotonic algorithm to be used to schedule aperiodic tasks. For scheduling aperiodic and periodic tasks that share data, this thesis defines the interactions and schedulability impact of using the SS algorithm with the priority inheritance protocols. For scheduling hard-deadline tasks with short deadlines, an extension of the rate monotonic algorithm and analysis is developed. To predict performance of the SS algorithm, this thesis develops models and equations that allow the use of standard queueing theory models to predict the average response time of soft-deadline aperiodic tasks serviced with a high-priority sporadic server. Implementation methods are also developed to support the SS algorithm in Ada and on the Futurebus+.

947 citations
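
Since the thesis shows that a sporadic server can be treated as an equivalently-sized periodic task for schedulability analysis, the classical rate-monotonic utilization bound can be applied to the combined task set. The sketch below illustrates that sufficient test; the task parameters and the server's budget and period are hypothetical.

```python
# Classical Liu & Layland sufficient test for rate-monotonic scheduling.
# Per the abstract, a sporadic server with budget C_s and replenishment period T_s
# is simply included as one more periodic task. Values are illustrative only.

def rm_utilization_test(tasks):
    """tasks: list of (computation_time, period) pairs.
    Returns (passes, utilization, bound) for the bound U <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound, utilization, bound

periodic_tasks = [(1, 10), (2, 20), (4, 40)]   # (C_i, T_i), hypothetical
sporadic_server = (2, 25)                      # server budget and period, hypothetical
ok, u, b = rm_utilization_test(periodic_tasks + [sporadic_server])
print(f"U = {u:.3f}, bound = {b:.3f}, schedulable by the sufficient test: {ok}")
```

The bound is only sufficient; a task set that fails it may still be schedulable under an exact analysis.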


Proceedings ArticleDOI
05 Dec 1990
TL;DR: The stability of a queuing network with interdependent servers is considered, and the problem of scheduling server activation under the constraints imposed by the dependency among them is studied.
Abstract: The stability of a queuing network with interdependent servers is considered. The dependency among the servers is described by defining subsets of servers that can be activated simultaneously. Packet radio networks provide the motivation for the consideration of these systems. The problem of scheduling server activation under the constraints imposed by the dependency among them is studied. The stability region of the network under a specific scheduling policy is the set of vectors of arrival rates in the queues of the system for which the stochastic process of the queue lengths is ergodic. The 'supremum' (i.e., union) of the stability regions over all the policies is characterized. Then a scheduling policy is specified, the stability region of which is equal to the union of the stability regions over all the policies. Finally, the behavior of the network for arrival rates outside the stability region is studied.

910 citations
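
The policy whose stability region equals the union over all policies is, in spirit, a maximum-weight activation rule: in each slot, among the server subsets that may be activated together, serve the one holding the most backlogged work. A minimal sketch of that rule follows; the activation sets and queue lengths are hypothetical.

```python
# Max-weight activation sketch: pick the feasible activation set whose queues
# currently hold the most packets. Illustrative data only.

def max_weight_activation(queue_lengths, activation_sets):
    """queue_lengths: server -> backlog; activation_sets: iterable of frozensets of
    servers that may be activated simultaneously. Returns the heaviest feasible set."""
    return max(activation_sets, key=lambda s: sum(queue_lengths[i] for i in s))

queues = {"A": 5, "B": 2, "C": 7, "D": 1}
feasible = [frozenset({"A", "C"}), frozenset({"B", "D"}), frozenset({"A", "D"})]
print(sorted(max_weight_activation(queues, feasible)))   # -> ['A', 'C']
```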


Journal ArticleDOI
TL;DR: Programming assistance and automation concepts are applied to a message-passing system program development tool called Hypertool, which performs scheduling and handles communication primitive insertion automatically, thereby increasing productivity and eliminating synchronization errors.
Abstract: Programming assistance, automation concepts, and their application to a message-passing system program development tool called Hypertool are discussed. Hypertool performs scheduling and handles the communication primitive insertion automatically, thereby increasing productivity and eliminating synchronization errors. Two algorithms, based on the critical-path method, are presented for scheduling processes statically. Hypertool also generates the performance estimates and other program quality measures to help programmers improve their algorithms and programs.

700 citations
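
As context for the critical-path-based static scheduling mentioned above, the sketch below shows a generic list scheduler that prioritizes tasks by their bottom level (longest cost path to an exit node) and assigns each to the earliest-free processor. It is not the exact Hypertool algorithm, and communication costs are ignored; the task graph is hypothetical.

```python
# Minimal critical-path list scheduling of a task graph onto identical processors.
# Communication costs are omitted for brevity; data below are illustrative.

def list_schedule(cost, succ, num_procs):
    """cost: task -> execution time; succ: task -> list of successor tasks."""
    memo = {}
    def bottom_level(t):        # longest cost path from t to an exit node, incl. t
        if t not in memo:
            memo[t] = cost[t] + max((bottom_level(s) for s in succ.get(t, [])), default=0)
        return memo[t]

    preds = {t: [] for t in cost}
    for t, ss in succ.items():
        for s in ss:
            preds[s].append(t)
    ready = [t for t in cost if not preds[t]]
    proc_free = [0.0] * num_procs
    finish, schedule = {}, []
    while ready:
        ready.sort(key=bottom_level, reverse=True)             # critical path first
        task = ready.pop(0)
        p = min(range(num_procs), key=proc_free.__getitem__)   # earliest-free processor
        start = max([proc_free[p]] + [finish[q] for q in preds[task]])
        finish[task] = start + cost[task]
        proc_free[p] = finish[task]
        schedule.append((task, p, start))
        ready += [s for s in succ.get(task, [])
                  if all(q in finish for q in preds[s])]
    return schedule

cost = {"a": 2, "b": 3, "c": 1, "d": 2}      # hypothetical task graph
succ = {"a": ["c", "d"], "b": ["d"]}
print(list_schedule(cost, succ, num_procs=2))
```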


Proceedings Article
01 Jan 1990
TL;DR: In this paper, a comprehensive study of the problem of scheduling broadcast transmissions in a multihop, mobile packet radio network is provided that is based on throughput optimization subject to freedom from interference.
Abstract: A comprehensive study of the problem of scheduling broadcast transmissions in a multihop, mobile packet radio network is provided that is based on throughput optimization subject to freedom from interference. It is shown that the problem is NP-complete.

507 citations


Journal ArticleDOI
TL;DR: The major research results in deterministic parallel-machine scheduling theory are surveyed, and it is revealed that there exist many potential areas worthy of further research.

499 citations


Journal ArticleDOI
TL;DR: In this article, a comprehensive study of the problem of scheduling broadcast transmissions in a multihop, mobile packet radio network is provided that is based on throughput optimization subject to freedom from interference.
Abstract: A comprehensive study of the problem of scheduling broadcast transmissions in a multihop, mobile packet radio network is provided that is based on throughput optimization subject to freedom from interference. It is shown that the problem is NP-complete. A centralized algorithm that runs in polynomial time and results in efficient (maximal) schedules is proposed. A distributed algorithm that achieves the same schedules is then proposed. The algorithm results in a maximal broadcasting zone in every slot.

498 citations
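
To make the "maximal schedule per slot" idea concrete, the greedy sketch below assigns broadcast slots so that no two nodes within two hops transmit together, then pads each slot to a maximal conflict-free set. It is a generic illustration, not the paper's centralized or distributed algorithm; the topology is hypothetical.

```python
# Greedy broadcast slot assignment: for broadcast transmissions, two nodes conflict
# if they are within two hops of each other. Each slot is padded to a maximal
# conflict-free set, possibly reusing nodes scheduled in earlier slots.

def two_hop_conflicts(adj):
    conflicts = {u: set(vs) for u, vs in adj.items()}
    for u, vs in adj.items():
        for v in vs:
            conflicts[u] |= set(adj[v]) - {u}
    return conflicts

def greedy_broadcast_schedule(adj):
    conflicts = two_hop_conflicts(adj)
    unscheduled, slots = set(adj), []
    while unscheduled:
        slot = set()
        for node in sorted(unscheduled):           # first serve unscheduled nodes
            if conflicts[node].isdisjoint(slot):
                slot.add(node)
        for node in sorted(set(adj) - slot):       # pad to a maximal conflict-free set
            if conflicts[node].isdisjoint(slot):
                slot.add(node)
        slots.append(slot)
        unscheduled -= slot
    return slots

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}   # a 5-node chain, hypothetical
print(greedy_broadcast_schedule(adj))
```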


Journal ArticleDOI
TL;DR: A new scheduling heuristic (MH) is introduced that schedules program modules, represented as nodes in a precedence task graph with communication, onto an arbitrary machine topology while taking contention into consideration.

464 citations


Journal ArticleDOI
TL;DR: This paper analyzes the effects of different deterioration schemes and derives optimal scheduling policies that minimize the expected makespan, and, for some models, policies that minimize the variance of the makespan.
Abstract: N jobs are to be processed sequentially on a single machine. While waiting for processing, jobs deteriorate, causing the random processing requirement of each job to grow at a job-specific rate. Under such conditions, the actual processing times of the jobs are no longer exchangeable random variables and the expected makespan is no longer invariant under any scheduling strategy that disallows idleness. In this paper, we analyze the effects of different deterioration schemes and derive optimal scheduling policies that minimize the expected makespan, and, for some models, policies that minimize the variance of the makespan. We also allow for random setup and detaching times. Applications to optimal inventory issuing policies are discussed and extensions are considered.

449 citations


Journal ArticleDOI
TL;DR: Rate monotonic scheduling theory puts real-time software engineering on a sound analytical footing and its implications for Ada are reviewed.
Abstract: Rate monotonic scheduling theory puts real-time software engineering on a sound analytical footing. The authors review the theory and its implications for Ada.

377 citations


Proceedings ArticleDOI
02 Dec 1990
TL;DR: Two queue service disciplines are described: rate-based scheduling, designed for fast packet networks, and hierarchical round robin scheduling, suitable for use in networks based on the asynchronous transfer mode (ATM) being defined in CCITT.
Abstract: Future high-speed networks are expected to carry traffic with a wide range of performance requirements. Two queue service disciplines, rate-based scheduling and hierarchical round robin scheduling, are described. These disciplines allow some connections to receive guaranteed rate and jitter performance, while others receive best-effort service. Rate-based scheduling is designed for fast packet networks, while hierarchical round robin is an extension of round robin scheduling suitable for use in networks based on the asynchronous transfer mode (ATM) being defined in CCITT. Both schemes are feasible at rates of 1 Gb/s. The schemes allow strict bounds on the buffer space required for rate-controlled connections and can provide efficient utilization of transmission bandwidth.

365 citations
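
A simplified way to see how frame-based round robin yields rate guarantees: each connection is allocated a fixed number of slots per frame, so its bandwidth share is slots/frame_length regardless of other traffic. Hierarchical round robin nests several such frames with different lengths; the single-frame sketch below and its slot counts are hypothetical.

```python
# Single-level, frame-based round robin: interleave each connection's allocated
# slots across the frame; unused slots carry best-effort traffic. Illustrative only.

def build_frame(slot_allocation, frame_length):
    """slot_allocation: connection -> slots per frame."""
    assert sum(slot_allocation.values()) <= frame_length
    remaining, frame = dict(slot_allocation), []
    while len(frame) < frame_length:
        progressed = False
        for conn in slot_allocation:                 # one slot per connection per pass
            if len(frame) < frame_length and remaining.get(conn, 0) > 0:
                frame.append(conn)
                remaining[conn] -= 1
                progressed = True
        if not progressed:
            frame.append("best-effort")              # idle slot: best-effort traffic
    return frame

print(build_frame({"voice": 2, "video": 3}, frame_length=8))
# -> ['voice', 'video', 'voice', 'video', 'video', 'best-effort', 'best-effort', 'best-effort']
```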


Journal ArticleDOI
TL;DR: This paper presents an efficient algorithm for dynamic scheduling in real-time systems; it focuses its attention on a small subset of tasks with the shortest deadlines and is shown to be very effective when the maximum allowable scheduling overhead is fixed.
Abstract: Efficient scheduling algorithms based on heuristic functions are developed for scheduling a set of tasks on a multiprocessor system. The tasks are characterized by worst-case computation times, deadlines, and resource requirements. Starting with an empty partial schedule, each step of the search extends the current partial schedule by including one of the tasks yet to be scheduled. The heuristic functions used in the algorithm actively direct the search for a feasible schedule, i.e. they help choose the task that extends the current partial schedule. Two scheduling algorithms are evaluated by simulation. To extend the current partial schedule, one of the algorithms considers, at each step of the search, all the tasks that are yet to be scheduled as candidates. The second focuses its attention on a small subset of tasks with the shortest deadlines. The second algorithm is shown to be very effective when the maximum allowable scheduling overhead is fixed. This algorithm is hence appropriate for dynamic scheduling in real-time systems.
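
A hedged sketch of the second algorithm's key idea, restricting attention at each step to the k unscheduled tasks with the shortest deadlines, is given below. Resource requirements and the paper's richer heuristic functions are omitted (the choice here degenerates to earliest-deadline-first among the candidates), and the task data are hypothetical.

```python
import heapq

def build_schedule(tasks, num_procs, k=3):
    """tasks: list of (name, worst_case_time, deadline). Greedy, no backtracking:
    returns [(name, proc, start)] or None if a deadline would be missed."""
    procs = [(0.0, p) for p in range(num_procs)]        # (free_time, proc) min-heap
    heapq.heapify(procs)
    pending = sorted(tasks, key=lambda t: t[2])          # order by deadline
    schedule = []
    while pending:
        candidates = pending[:k]                         # only the k shortest deadlines
        free_time, proc = heapq.heappop(procs)
        # heuristic: among the candidates, pick the smallest deadline (plain EDF here;
        # the paper's heuristics also fold in resource-dependent earliest start times)
        name, c, d = min(candidates, key=lambda t: t[2])
        if free_time + c > d:
            return None                                  # infeasible extension
        schedule.append((name, proc, free_time))
        heapq.heappush(procs, (free_time + c, proc))
        pending.remove((name, c, d))
    return schedule

tasks = [("t1", 4, 10), ("t2", 2, 5), ("t3", 6, 20), ("t4", 3, 9)]   # hypothetical
print(build_schedule(tasks, num_procs=2, k=2))
```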

Patent
11 Jun 1990
TL;DR: In this patent, the operating system distributively implements an anarchy-based scheduling model for scheduling processes and resources by allowing each processor to access a single image of the operating system, stored in the common memory, that operates on a common set of shared operating-system resources.
Abstract: An integrated software architecture for a highly parallel multiprocessor system having multiple tightly-coupled processors that share a common memory efficiently controls the interface with and execution of programs on such a multiprocessor system. The software architecture combines a symmetrically integrated multithreaded operating system and an integrated parallel user environment. The operating system distributively implements an anarchy-based scheduling model for the scheduling of processes and resources by allowing each processor to access a single image of the operating system stored in the common memory that operates on a common set of operating system shared resources. The user environment provides a common visual representation for a plurality of program development tools that provide compilation, execution and debugging capabilities for multithreaded user programs and assumes parallelism as the standard mode of operation.

Proceedings ArticleDOI
05 Dec 1990
TL;DR: The stack resource policy is at least as good as the PCP in reducing maximum priority inversion and supports a stronger schedulability test with EDF scheduling.
Abstract: The stack resource policy (SRP) is a resource allocation policy which permits processes with different priorities to share a single runtime stack. It is a refinement of the priority ceiling protocol (PCP), which strictly bounds priority inversion and permits simple schedulability tests. With or without stack sharing, the SRP offers the following improvements over the PCP: (1) it unifies the treatment of stack, reader-writer, multiunit resources, and binary semaphores; (2) it applies directly to some dynamic scheduling policies, including earliest deadline first (EDF), as well as to static priority policies; (3) with EDF scheduling, it supports a stronger schedulability test; and (4) it reduces the maximum number of context switches for a job execution request by a factor of two. It is at least as good as the PCP in reducing maximum priority inversion.
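
The SRP's single run-time check can be stated compactly: a ready job may begin execution only if it has the highest priority among ready jobs and its preemption level exceeds the current system ceiling (the maximum ceiling of all currently held resources). The sketch below illustrates that check with hypothetical jobs, resources, and ceilings.

```python
# SRP preemption test sketch. Ceilings, priorities, and preemption levels are
# illustrative; in practice preemption levels derive from relative deadlines.

def system_ceiling(held_resources, ceiling):
    """ceiling: resource -> max preemption level of jobs that may need it."""
    return max((ceiling[r] for r in held_resources), default=0)

def may_preempt(job, ready_jobs, held_resources, ceiling, priority, preemption_level):
    highest = max(ready_jobs, key=lambda j: priority[j])
    return job == highest and preemption_level[job] > system_ceiling(held_resources, ceiling)

priority = {"J1": 3, "J2": 2, "J3": 1}            # larger = more urgent
preemption_level = {"J1": 3, "J2": 2, "J3": 1}
ceiling = {"R1": 3, "R2": 2}                      # resource ceilings
held = {"R2"}                                     # R2 currently locked
print(may_preempt("J1", ["J1", "J2"], held, ceiling, priority, preemption_level))  # True
print(may_preempt("J2", ["J2"], held, ceiling, priority, preemption_level))        # False
```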

Journal ArticleDOI
TL;DR: In this article, a branch and bound method is proposed to minimize the delay of loading and unloading of a ship in a pre-emptive machine scheduling model, where each hold of each ship has a given amount of work and cranes can interrupt their work on individual holds without any loss of efficiency.
Abstract: Typical cargo ships spend 60% of their time in port, costing their owners about $1000 per hour. In this paper, we attack such costs with a method to speed loading and unloading. We model the need for container handling as generic “work,” which cranes can do at a constant rate. Each hold of each ship has a given amount of this work and cranes can interrupt their work on individual holds without any loss of efficiency. In the parlance of scheduling theory, this constitutes an “open shop” with parallel, identical machines, where jobs consist of independent, single-stage, preemptable tasks. Practical problems often involve only a few ships but many holds; the complication of preemptable tasks makes them very complex. The paper presents a branch and bound method which, for this model, minimizes delay costs (weighted tardiness). As part of the method, we extend previous solutions to the feasibility problem of preemptive machine scheduling (to cases where multiple machines can work simultaneously on a single task). Computational results and extensions to more complicated problems are offered. Certain concepts developed here may also be applicable to other problems, both in scheduling and elsewhere. In particular, they may lead to optimal solutions of problems for which feasibility determination methods already exist.
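
A classical building block for the preemptive-feasibility subproblems behind such branch-and-bound methods is McNaughton's wrap-around rule for identical parallel machines, which attains the optimal makespan C* = max(max_j p_j, sum_j p_j / m). The sketch below applies it to hypothetical hold workloads; note that the paper's own extension (several cranes working one hold at once) goes beyond this standard case.

```python
# McNaughton wrap-around schedule for preemptive scheduling on m identical machines.
# Hold workloads below are hypothetical.

def mcnaughton(work, m):
    """work: hold -> amount of work; m: number of cranes.
    Returns (makespan, per-crane lists of (hold, start, end))."""
    c_star = max(max(work.values()), sum(work.values()) / m)
    schedules, crane, t = [[] for _ in range(m)], 0, 0.0
    for hold, p in work.items():
        while p > 1e-12:
            chunk = min(p, c_star - t)
            schedules[crane].append((hold, t, t + chunk))
            p, t = p - chunk, t + chunk
            if t >= c_star - 1e-12:        # crane full: wrap to the next one
                crane, t = crane + 1, 0.0
    return c_star, schedules

c_star, plan = mcnaughton({"hold1": 5, "hold2": 3, "hold3": 4}, m=2)
print(c_star)                              # 6.0
for i, s in enumerate(plan):
    print(f"crane {i}: {s}")
```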

Proceedings ArticleDOI
01 Apr 1990
TL;DR: It is found that the “smallest number of processes first” (SNPF) scheduling discipline performs poorly, and policies that allocate an equal fraction of the processing power to each job in the system perform better, on the whole, than policies that allocated processing power unequally.
Abstract: Scheduling policies for general purpose multiprogrammed multiprocessors are not well understood. This paper examines various policies to determine which properties of a scheduling policy are the most significant determinants of performance. We compare a more comprehensive set of policies than previous work, including one important scheduling policy that has not previously been examined. We also compare the policies under workloads that we feel are more realistic than previous studies have used. Using these new workloads, we arrive at different conclusions than reported in earlier work. In particular, we find that the “smallest number of processes first” (SNPF) scheduling discipline performs poorly, even when the number of processes in a job is positively correlated with the total service demand of the job. We also find that policies that allocate an equal fraction of the processing power to each job in the system perform better, on the whole, than policies that allocate processing power unequally. Finally, we find that for lock access synchronization, dividing processing power equally among all jobs in the system is a more effective property of a scheduling policy than the property of minimizing synchronization spin-waiting, unless demand for synchronization is extremely high. (The latter property is implemented by coscheduling processes within a job, or by using a thread management package that avoids preemption of processes that hold spinlocks.) Our studies are done by simulating abstract models of the system and the workloads.

Journal ArticleDOI
TL;DR: It is concluded that a simple and fast heuristic algorithm, such as HNF, may be sufficient to achieve adequate performance in terms of program execution time and processors' idle time.

Proceedings ArticleDOI
01 Jan 1990
TL;DR: Experimental evaluation of the algorithm shows that the heuristics and search techniques incorporated in the algorithm are extremely effective: if a task set can be feasibly allocated and scheduled, the algorithm is highly likely to find a solution without any backtracking during the search.
Abstract: A static algorithm for allocating and scheduling components of complex periodic tasks across sites in distributed systems is discussed. Besides dealing with the periodicity constraints (which have been the sole concern of many previous algorithms), this algorithm handles precedence, communication, and fault-tolerance requirements of subtasks of the tasks. The algorithm determines the allocation of subtasks of periodic tasks to sites, the scheduled start times of subtasks allocated to a site, and the schedule for communication along the communication channel(s). Experimental evaluation of the algorithm shows that the heuristics and search techniques incorporated in the algorithm are extremely effective. Specifically, they show that, if a task set can be feasibly allocated and scheduled, the algorithm is highly likely to find it without any backtracking during the search.

Journal ArticleDOI
TL;DR: This work examines the effectiveness of optimizations aimed at allowing distributed machines to efficiently compute inner loops over globally defined data structures, by targeting loops in which some array references are made through a level of indirection.

Journal ArticleDOI
TL;DR: In this paper, the problem of minimizing the makespan on a single facility is considered, where the processing time of a job consists of a fixed part and a variable part that depends on the start time of the job.
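
Under the common linear assumption that the variable part grows proportionally to the start time, the makespan can be written in closed form, which makes the sequence dependence visible. This is a hedged worked example; the paper's exact model of the variable part may differ.

```latex
% Assume p_{[k]} = a_{[k]} + b_{[k]} s_{[k]}, where [k] denotes the k-th job in the
% sequence, a its fixed part, b its growth rate, and s its start time.
C_{[k]} = C_{[k-1]} + a_{[k]} + b_{[k]} C_{[k-1]}
        = \bigl(1 + b_{[k]}\bigr) C_{[k-1]} + a_{[k]}, \qquad C_{[0]} = 0,
\quad\Longrightarrow\quad
C_{[n]} = \sum_{j=1}^{n} a_{[j]} \prod_{k=j+1}^{n} \bigl(1 + b_{[k]}\bigr).
```

The makespan therefore depends on the order through the growth factors (1 + b) of the jobs that follow each fixed part a, so sequencing matters even on a single machine with no idle time.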

Journal ArticleDOI
TL;DR: This paper addresses the problem where the scheduler is given the sequence of jobs and must determine the estimated starting times of the procedures in order that the surgeons may plan their personal schedules with respect to hospital rounds and office visits.
Abstract: In this paper, we address a number of scheduling problems that are often faced in a hospital's operating room (OR). The operating room can be modeled as a one machine job shop where the surgical procedures are thought of as the jobs and the operating room itself is the machine. The procedure (job) times are stochastic and the operating room scheduler exerts control over the schedule of jobs. The situation can also be thought of as a D/GI/1 queueing system, where the arrival times of the customers are a decision variable. Initially we address the problem where the scheduler is given the sequence of jobs and must determine the estimated starting times of the procedures in order that the surgeons may plan their personal schedules with respect to hospital rounds and office visits. The costs that must be balanced are (1) the idle time costs if the estimated starting time is later than the actual available start time, and (2) the surgeon's waiting time if the estimated starting time is before the actual available start time.

Proceedings ArticleDOI
28 May 1990
TL;DR: The DASH resource model is defined as a basis for reserving and scheduling the resources involved in end-to-end handling of continuous-media data (information that flows continuously in real time, e.g. digital audio or digital video).
Abstract: The DASH resource model is defined as a basis for reserving and scheduling resources (disk, CPU, network, etc.) involved in end-to-end handling of continuous-media (information flowing continuously in real time, e.g. digital audio or digital video) data. The model uses primitives that express workload characteristics and performance requirements, and defines an algorithm for negotiated reservation of distributed resources. This algorithm is embodied in the session reservation protocol, a backward-compatible extension of the Internet Protocol. Hardware trends and future applications that motivate the DASH resource model are described. The performance requirements for using continuous media and the limitations of existing systems are discussed. The DASH resource model for reserving and scheduling resources is presented. The DASH kernel is briefly described.

Journal ArticleDOI
TL;DR: A multiclass closed queueing network with two single-server stations, a large customer population, and nearly balanced loading is considered, and a static priority policy is obtained that computes an index for each class and awards higher priority at station 1 (respectively, station 2) to classes with smaller (respectively, larger) values of this index.
Abstract: We consider a multiclass closed queueing network with two single-server stations. Each class requires service at a particular station, and customers change class after service according to specified probabilities. There is a general service time distribution for each class. The problem is to schedule the two servers to maximize the long-run average throughput of the network. By assuming a large customer population and nearly balanced loading of the two stations, the scheduling problem can be approximated by a dynamic control problem involving Brownian motion. A reformulation of this control problem is solved exactly and the solution is interpreted in terms of the queueing network to obtain a scheduling rule. We conjecture, quite naturally, that the resulting scheduling rule is asymptotically optimal under heavy traffic conditions, but no attempt is made to prove that. The scheduling rule is a static priority policy that computes an index for each class and awards higher priority at station 1 (respectively, station 2) to classes with the smaller (respectively, larger) values of this index. An analytical comparison of this rule to any other static policy is also obtained. An example is given that illustrates the procedure and demonstrates its effectiveness.

Proceedings ArticleDOI
01 Apr 1990
TL;DR: It is shown that for a wide range of plausible overhead values, dynamic scheduling is superior to static scheduling, and within the class of static schedulers, a simple “run to completion” scheme is preferable to a round-robin approach.
Abstract: Existing work indicates that the commonly used “single queue of runnable tasks” approach to scheduling shared memory multiprocessors can perform very poorly in a multiprogrammed parallel processing environment. A more promising approach is the class of “two-level schedulers” in which the operating system deals solely with allocating processors to jobs while the individual jobs themselves perform task dispatching on those processors. In this paper we compare two basic varieties of two-level schedulers. Those of the first type, static, make a single decision per job regarding the number of processors to allocate to it. Once the job has received its allocation, it is guaranteed to have exactly that number of processors available to it whenever it is active. The other class of two-level scheduler, dynamic, allows each job to acquire and release processors during its execution. By responding to the varying parallelism of the jobs, the dynamic scheduler promises higher processor utilizations at the cost of potentially greater scheduling overhead and more complicated application level task control policies. Our results, obtained via simulation, highlight the tradeoffs between the static and dynamic approaches. We investigate how the choice of policy is affected by the cost of switching a processor from one job to another. We show that for a wide range of plausible overhead values, dynamic scheduling is superior to static scheduling. Within the class of static schedulers, we show that, in most cases, a simple “run to completion” scheme is preferable to a round-robin approach. Finally, we investigate different techniques for tuning the allocation decisions required by the dynamic policies and quantify their effects on performance. We believe our results are directly applicable to many existing shared memory parallel computers, which for the most part currently employ a simple “single queue of tasks” extension of basic sequential machine schedulers. We plan to validate our results in future work through implementation and experimentation on such a system.

Journal ArticleDOI
TL;DR: In this article, a classification scheme is proposed for a class of models that arise in the area of vehicle routing and scheduling and illustrated on a number of problems that have been considered in the literature.

Journal ArticleDOI
01 May 1990
TL;DR: A superscalar processor that combines the best qualities of static and dynamic instruction scheduling to increase the performance of non-numerical applications and shows that a 1.6-times speedup over scalar code is achievable by boosting instructions above only a single conditional branch.
Abstract: This paper describes a superscalar processor that combines the best qualities of static and dynamic instruction scheduling to increase the performance of non-numerical applications. The architecture performs all instruction scheduling statically to take advantage of the compiler's ability to efficiently schedule operations across many basic blocks. Since the conditional branches in non-numerical code are highly data dependent, the architecture introduces the concept of boosted instructions, instructions that are committed conditionally upon the result of later branch instructions. Boosting effectively removes the dependencies caused by branches and makes the scheduling of side-effect instructions as simple as those that are side-effect free. For efficiency, boosting is supported in the hardware by shadow structures that temporarily hold the side effects of boosted instructions until the conditional branches that the boosted instructions depend upon are executed. When the branch condition is determined, the buffered side effects are either committed or squashed. The limited static scheduler in our evaluation system shows that a 1.6-times speedup over scalar code is achievable by boosting instructions above only a single conditional branch. This performance is similar to the performance of a pure dynamic scheduler.

Journal ArticleDOI
TL;DR: An effective multiplier method-based differential dynamic programming (DDP) algorithm for solving the hydroelectric generation scheduling problem (HSP) is presented and results demonstrate the efficiency and optimality of the algorithm.
Abstract: An effective multiplier method-based differential dynamic programming (DDP) algorithm for solving the hydroelectric generation scheduling problem (HSP) is presented. The algorithm is developed for solving a class of constrained dynamic optimization problems. It relaxes all constraints but the system dynamics by the multiplier method and adopts the DDP solution technique to solve the resultant unconstrained dynamic optimization problem. The authors formulate the HSP of the Taiwan power system and apply the algorithm to it. Results demonstrate the efficiency and optimality of the algorithm for this application. Computational results indicate that the growth of the algorithm's run time with respect to the problem size is moderate. CPU times of the testing cases are well within the Taiwan Power Company's desirable performance; less than 30 minutes on a VAX/780 mini-computer for a one-week schedule.

Proceedings ArticleDOI
05 Dec 1990
TL;DR: The authors present a realistic model for studying the I/O scheduling problem in the context of a system that executes real-time transactions; the model takes advantage of the fact that reading from the disk occurs before a transaction commits, while writing to the disk usually occurs after the transaction commits.
Abstract: The I/O scheduling problem is examined in detail. The authors present a realistic model for studying this problem in the context of a system which executes real-time transactions. The model takes advantage of the fact that reading from the disk occurs before a transaction commits, while writing to the disk usually occurs after the transaction commits. New algorithms are presented that exploit this fact in order to meet the deadlines of individual requests. The algorithms are evaluated via detailed simulation and their performance is compared with traditional disk scheduling algorithms.
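
One way to exploit the reads-before-commit observation is to serve pending reads earliest-deadline-first while deferring writes (which follow commit) until no read is waiting. The sketch below is a generic illustration of that idea, not one of the paper's specific algorithms; the request fields are hypothetical.

```python
# Deadline-aware disk request selection favoring reads over writes. Illustrative only.

def next_request(pending):
    """pending: list of dicts with keys 'op' ('read'/'write'), 'deadline', 'block'."""
    reads = [r for r in pending if r["op"] == "read"]
    if reads:
        return min(reads, key=lambda r: r["deadline"])        # EDF among reads
    writes = [r for r in pending if r["op"] == "write"]
    return min(writes, key=lambda r: r["block"]) if writes else None  # e.g. lowest block

pending = [
    {"op": "write", "deadline": 50, "block": 10},
    {"op": "read",  "deadline": 30, "block": 90},
    {"op": "read",  "deadline": 12, "block": 40},
]
print(next_request(pending))    # -> the read with deadline 12
```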

Journal ArticleDOI
F. Bonomi1, Anurag Kumar1
TL;DR: It is shown that if the arrival streams are all Poisson and all jobs have the same exponentially distributed service requirements, the probabilistic splitting of the generic stream that minimizes the average job response time is such that it balances the server idle times in a weighted least-squares sense, where the weighting coefficients are related to the service speeds of the servers.
Abstract: A model comprising several servers, each equipped with its own queue and with possibly different service speeds, is considered. Each server receives a dedicated arrival stream of jobs; there is also a stream of generic jobs that arrive to a job scheduler and can be individually allocated to any of the servers. It is shown that if the arrival streams are all Poisson and all jobs have the same exponentially distributed service requirements, the probabilistic splitting of the generic stream that minimizes the average job response time is such that it balances the server idle times in a weighted least-squares sense, where the weighting coefficients are related to the service speeds of the servers. The corresponding result holds for nonexponentially distributed service times if the service speeds are all equal. This result is used to develop adaptive quasi-static algorithms for allocating jobs in the generic arrival stream when the load parameters are unknown. The algorithms utilize server idle-time measurements which are sent periodically to the central job scheduler. A model is developed for these measurements, and the result mentioned is used to cast the problem into one of finding a projection of the root of an affine function, when only noisy values of the function can be observed.
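
In the simplified case with no dedicated streams and M/M/1 servers, the flavor of the idle-time-balancing result can be recovered with a short Lagrangian argument; this is a hedged illustration, not the paper's general derivation.

```latex
% Split a Poisson stream of rate \Lambda among M/M/1 servers of rates \mu_i with
% probabilities p_i = \lambda_i / \Lambda (no dedicated traffic assumed).
\min_{\substack{\lambda_i \ge 0 \\ \sum_i \lambda_i = \Lambda}}
  \frac{1}{\Lambda}\sum_i \frac{\lambda_i}{\mu_i - \lambda_i}
\;\Longrightarrow\;
  \frac{\mu_i}{(\mu_i - \lambda_i)^2} = \nu \ \ \text{for all } i
\;\Longrightarrow\;
  \mu_i - \lambda_i = \sqrt{\mu_i / \nu}.
```

That is, the optimal split equalizes the idle capacities (mu_i - lambda_i) after scaling by 1/sqrt(mu_i), a weighted balancing of idleness consistent with the weighted least-squares characterization above.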

Journal ArticleDOI
TL;DR: A very general yet powerful backtracking procedure is presented for solving the duration minimization and net present value maximization problems in a precedence- and resource-constrained network of the PERT/CPM variety.

Journal ArticleDOI
TL;DR: The results of experiments conducted using the region scheduling technique in the generation of code for a reconfigurable long instruction word architecture are presented and the advantages of region scheduling over trace scheduling are discussed.
Abstract: Region scheduling, a technique applicable to both fine-grain and coarse-grain parallelism, uses a program representation that divides a program into regions consisting of source and intermediate level statements and permits the expression of both data and control dependencies. Guided by estimates of the parallelism present in regions, the region scheduler redistributes code, thus providing opportunities for parallelism in those regions containing insufficient parallelism compared to the capabilities of the executing architecture. The program representation and the transformations are applicable to both structured and unstructured programs, making region scheduling useful for a wide range of applications. The results of experiments conducted using the technique in the generation of code for a reconfigurable long instruction word architecture are presented. The advantages of region scheduling over trace scheduling are discussed.