
Showing papers on "Dynamic priority scheduling published in 1990"


Journal ArticleDOI
TL;DR: An investigation is conducted of two protocols belonging to the priority inheritance protocols class; the two are called the basic priority inheritance protocol and the priority ceiling protocol, both of which solve the uncontrolled priority inversion problem.
Abstract: An investigation is conducted of two protocols belonging to the priority inheritance protocols class; the two are called the basic priority inheritance protocol and the priority ceiling protocol. Both protocols solve the uncontrolled priority inversion problem. The priority ceiling protocol solves this uncontrolled priority inversion problem particularly well; it reduces the worst-case task-blocking time to at most the duration of execution of a single critical section of a lower-priority task. This protocol also prevents the formation of deadlocks. Sufficient conditions under which a set of periodic tasks using this protocol may be scheduled are derived.

2,443 citations
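The blocking bound above suggests a simple sufficient schedulability check: under the priority ceiling protocol each task is blocked for at most one critical section of a lower-priority task, so a per-task blocking term can be folded into the Liu-Layland utilization bound. A minimal sketch, with illustrative task parameters (not from the paper):

```python
# Sketch: sufficient (not necessary) schedulability test for periodic tasks
# under the priority ceiling protocol. Tasks are sorted by priority
# (index 0 = highest, rate-monotonic order); all numbers are illustrative.

def pcp_schedulable(tasks, blocking):
    """tasks: list of (C, T) pairs; blocking[i]: longest critical section of
    any lower-priority task that can block task i (0 for the lowest)."""
    for i in range(len(tasks)):
        util = sum(c / t for c, t in tasks[: i + 1]) + blocking[i] / tasks[i][1]
        bound = (i + 1) * (2 ** (1 / (i + 1)) - 1)  # Liu-Layland bound for i+1 tasks
        if util > bound:
            return False
    return True

tasks = [(1, 4), (2, 10), (3, 20)]   # (worst-case execution time, period)
blocking = [1, 1, 0]                  # longest lower-priority critical section
print(pcp_schedulable(tasks, blocking))
```

Because the test is only sufficient, a `False` result does not prove infeasibility; an exact analysis would use response times with blocking terms.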


Proceedings ArticleDOI
05 Dec 1990
TL;DR: A general criterion for the schedulability of periodic tasks with arbitrary deadlines under fixed priority scheduling is given, and the results are shown to provide a basis for developing predictable distributed real-time systems.
Abstract: Consideration is given to the problem of fixed priority scheduling of periodic tasks with arbitrary deadlines. A general criterion for the schedulability of such a task set is given. Worst-case bounds are given which generalize the C.L. Liu and J.W. Layland (1973) bound. The results are shown to provide a basis for developing predictable distributed real-time systems.

867 citations
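The paper's arbitrary-deadline analysis generalizes the classic fixed-priority response-time recurrence. The sketch below shows only the simpler deadline-within-period case of that recurrence, with illustrative task parameters:

```python
import math

def response_time(tasks, i, limit=10**6):
    """Fixed-point iteration R = C_i + sum_j ceil(R/T_j) * C_j over the
    higher-priority tasks j < i. tasks: list of (C, T) in priority order.
    Returns the worst-case response time, or None if it exceeds `limit`
    (treated as unschedulable). Valid for deadlines within the period;
    the paper's busy-period analysis handles arbitrary deadlines."""
    c_i = tasks[i][0]
    r = c_i
    while r <= limit:
        nxt = c_i + sum(math.ceil(r / t) * c for c, t in tasks[:i])
        if nxt == r:
            return r
        r = nxt
    return None

tasks = [(1, 4), (2, 6), (3, 13)]  # (C, T), highest priority first
print([response_time(tasks, i) for i in range(3)])
```

Task i is schedulable when the returned response time is at most its deadline.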


Journal ArticleDOI
TL;DR: In this paper, conditions which guarantee stability, robustness, and performance properties of the global gain-scheduled designs are given; these conditions confirm and formalize popular notions regarding gain-scheduled designs, such as that the scheduling variable should vary slowly and capture the plant's nonlinearities.
Abstract: Gain scheduling has proven to be a successful design methodology in many engineering applications. In the absence of a sound theoretical analysis, these designs come with no guarantees of the robustness, performance, or even nominal stability of the overall gain-scheduled design. An analysis is presented for two types of nonlinear gain-scheduled control systems: (1) scheduling on a reference trajectory, and (2) scheduling on the plant output. Conditions which guarantee stability, robustness, and performance properties of the global gain-scheduled designs are given. These conditions confirm and formalize popular notions regarding gain-scheduled designs, such as that the scheduling variable should vary slowly and capture the plant's nonlinearities.

773 citations


Proceedings ArticleDOI
05 Dec 1990
TL;DR: Necessary and sufficient conditions are given for a sporadic task system to be feasible (i.e., schedulable); although the conditions cannot in general be tested efficiently, they lead to a feasibility test that runs in pseudo-polynomial time for a very large percentage of sporadic task systems.
Abstract: Consideration is given to the preemptive scheduling of hard-real-time sporadic task systems on one processor. The authors first give necessary and sufficient conditions for a sporadic task system to be feasible (i.e., schedulable). The conditions cannot, in general, be tested efficiently (unless P=NP). They do, however, lead to a feasibility test that runs in efficient pseudo-polynomial time for a very large percentage of sporadic task systems.

740 citations
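A common form of such a pseudo-polynomial test is the processor-demand check: the system is feasible under EDF iff the demand bound function never exceeds the interval length. A hedged sketch for the synchronous periodic interpretation, checking deadline points within one hyperperiod (task parameters illustrative):

```python
from math import floor, lcm

def edf_feasible(tasks):
    """tasks: list of (C, T, D). Processor-demand test: feasible iff
    dbf(t) <= t at every absolute deadline t. For the synchronous periodic
    case sketched here, checking within one hyperperiod suffices when
    total utilization is at most 1."""
    if sum(c / t for c, t, _ in tasks) > 1:
        return False
    horizon = lcm(*(t for _, t, _ in tasks))
    deadlines = sorted({d + k * t for c, t, d in tasks
                        for k in range(horizon // t + 1) if d + k * t <= horizon})
    for t_pt in deadlines:
        # demand bound: work from jobs released and due within [0, t_pt]
        demand = sum(max(0, floor((t_pt - d) / t) + 1) * c for c, t, d in tasks)
        if demand > t_pt:
            return False
    return True

print(edf_feasible([(1, 4, 3), (2, 6, 6), (1, 12, 10)]))
```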


Journal ArticleDOI
TL;DR: This paper presents an efficient scheduling algorithm for dynamic scheduling in real-time systems that focuses its attention on a small subset of tasks with the shortest deadlines and is shown to be very effective when the maximum allowable scheduling overhead is fixed.
Abstract: Efficient scheduling algorithms based on heuristic functions are developed for scheduling a set of tasks on a multiprocessor system. The tasks are characterized by worst-case computation times, deadlines, and resource requirements. Starting with an empty partial schedule, each step of the search extends the current partial schedule by including one of the tasks yet to be scheduled. The heuristic functions used in the algorithm actively direct the search for a feasible schedule, i.e. they help choose the task that extends the current partial schedule. Two scheduling algorithms are evaluated by simulation. To extend the current partial schedule, one of the algorithms considers, at each step of the search, all the tasks that are yet to be scheduled as candidates. The second focuses its attention on a small subset of tasks with the shortest deadlines. The second algorithm is shown to be very effective when the maximum allowable scheduling overhead is fixed. This algorithm is hence appropriate for dynamic scheduling in real-time systems.

349 citations
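The "small subset with the shortest deadlines" idea can be sketched as a greedy search without backtracking; the heuristic weighting below is illustrative, not the paper's exact function (the original also accounts for resource availability and earliest start times):

```python
def myopic_schedule(tasks, k=3, weight=1.0):
    """Greedy sketch: extend a partial schedule one task at a time,
    considering only the k unscheduled tasks with the shortest deadlines.
    tasks: list of (name, C, deadline). Returns a feasible order or None.
    A full search would backtrack instead of failing; omitted for brevity."""
    remaining = sorted(tasks, key=lambda t: t[2])  # order by deadline
    time, order = 0, []
    while remaining:
        window = remaining[:k]
        # illustrative heuristic: deadline plus weighted computation time
        best = min(window, key=lambda t: t[2] + weight * t[1])
        if time + best[1] > best[2]:
            return None  # deadline miss in this greedy path
        time += best[1]
        order.append(best[0])
        remaining.remove(best)
    return order

print(myopic_schedule([("a", 2, 5), ("b", 1, 3), ("c", 3, 12), ("d", 2, 8)]))
```

Restricting the candidate window to k tasks is what keeps the per-step cost bounded when the allowable scheduling overhead is fixed.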


Journal ArticleDOI
TL;DR: In this article, the authors demonstrate that cyclic material flow and certain distributed scheduling policies can lead to instability in the sense that the required buffer levels are unbounded, even when the set-up times for changing part types are zero.
Abstract: The paper concerns policies for sequencing material through a flexible manufacturing system to meet desired production goals for each part type. The authors demonstrate by examples that cyclic material flow and certain distributed scheduling policies can lead to instability in the sense that the required buffer levels are unbounded. This can be the case even when the set-up times for changing part types are zero. Sufficient conditions are then derived under which a class of distributed policies is stable. Finally, a general supervisory mechanism is presented which will stabilize any scheduling policy (i.e. maintain bounded buffer sizes at all machines) while satisfying the desired production rates.

337 citations


Proceedings ArticleDOI
05 Dec 1990
TL;DR: The stack resource policy is at least as good as the PCP in reducing maximum priority inversion and supports a stronger schedulability test with EDF scheduling.
Abstract: The stack resource policy (SRP) is a resource allocation policy which permits processes with different priorities to share a single runtime stack. It is a refinement of the priority ceiling protocol (PCP), which strictly bounds priority inversion and permits simple schedulability tests. With or without stack sharing, the SRP offers the following improvements over the PCP: (1) it unifies the treatment of stack, reader-writer, multiunit resources, and binary semaphores; (2) it applies directly to some dynamic scheduling policies, including earliest deadline first (EDF), as well as to static priority policies; (3) with EDF scheduling, it supports a stronger schedulability test; and (4) it reduces the maximum number of context switches for a job execution request by a factor of two. It is at least as good as the PCP in reducing maximum priority inversion.

286 citations
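The SRP preemption rule (a job may start only if its preemption level exceeds the current system ceiling) can be sketched as follows; the ceilings and levels here are illustrative, and only single-unit resources are modeled:

```python
class StackResourcePolicy:
    """Minimal sketch of SRP bookkeeping for single-unit resources.
    Each resource has a static ceiling: the highest preemption level of
    any job that may use it. Values below are illustrative."""

    def __init__(self, ceilings):
        self.ceilings = ceilings   # resource name -> ceiling (preemption level)
        self.stack = [0]           # system-ceiling stack; 0 = nothing held

    @property
    def system_ceiling(self):
        return max(self.stack)

    def can_preempt(self, preemption_level):
        # SRP rule: a job may begin execution only if its preemption
        # level is strictly higher than the current system ceiling.
        return preemption_level > self.system_ceiling

    def lock(self, resource):
        self.stack.append(self.ceilings[resource])

    def unlock(self, resource):
        self.stack.pop()

srp = StackResourcePolicy({"r1": 2, "r2": 3})
srp.lock("r1")
print(srp.can_preempt(3), srp.can_preempt(2))  # True False
```

Because a job is blocked only before it starts, never after, each job needs at most two context switches, which is the factor-of-two reduction the abstract mentions.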


Journal ArticleDOI
TL;DR: An optimal acceptance test is presented which returns a decision on the basis of the current state of processor load, considering tasks to be scheduled according to the well-known preemptive algorithm Earliest Deadline; the acceptance algorithm and a complexity analysis are presented.
Abstract: While scheduling theory has been developed over a long period of time, it is important to note that most results concern problems with static characteristics. However, a real-time system is dynamic and requires on-line and adaptive scheduling strategies. So, an important aspect of real-time systems research is to devise methods flexible enough to react to a dynamic change of processor load and to attempt to schedule all the tasks judiciously. In this paper, we are particularly concerned with the problem of scheduling tasks which are of two kinds: periodic and sporadic, on a monoprocessor machine. Periodic tasks are independent, run cyclically and their characteristics are known in advance. In addition, we allow for the unpredictable occurrence of aperiodic task groups, with timing and precedence constraints. Clearly, the main problem is to devise a schedulability test that makes it possible to decide whether a new occurring group can be accepted, without upsetting the tight timing behavior requirements. We present an optimal acceptance test which returns a decision on the basis of the current state of processor load and by considering tasks to be scheduled according to the well known preemptive algorithm Earliest Deadline. The acceptance algorithm and a complexity analysis are presented.

242 citations
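An acceptance test of this flavor can be sketched by checking, at admission time, that cumulative demand never exceeds the time available before each deadline under EDF ordering. This simplified version ignores the periodic load and precedence constraints the paper handles; task parameters are illustrative:

```python
def accept(current_tasks, new_task, now=0):
    """Illustrative EDF acceptance test on one processor. Tasks are
    (C_remaining, absolute_deadline). Admit the new task only if, with
    all admitted tasks sorted by deadline, the cumulative demand fits
    before every deadline."""
    candidate = sorted(current_tasks + [new_task], key=lambda t: t[1])
    demand = 0
    for c, d in candidate:
        demand += c
        if now + demand > d:
            return False  # some task would miss its deadline
    return True

admitted = [(2, 6), (3, 8)]
print(accept(admitted, (2, 10)), accept(admitted, (5, 9)))
```

Rejecting at admission time, rather than at the missed deadline, is what preserves the timing guarantees of the already-accepted tasks.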


Journal ArticleDOI
TL;DR: This work presents a concurrency control protocol for systems using the earliest deadline first scheduling algorithm and shows that the protocol prevents both deadlock and chained blocking.
Abstract: Real-time systems have stringent deadline requirements for their tasks. To meet the requirements, a real-time system must use scheduling algorithms that ensure a predictable response even in the face of mutually exclusive accesses to critical sections. We present a concurrency control protocol for systems using the earliest deadline first scheduling algorithm. The protocol specifies a dynamic priority ceiling for each critical section which is the earliest deadline of jobs which are currently in or will enter the critical section. Jobs trying to enter a critical section will be blocked if they do not have a priority higher than the priority ceiling of any critical section which is in use. We show that the protocol prevents both deadlock and chained blocking. The schedulability condition and implementation issues of the protocol are also discussed.

226 citations
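The protocol's blocking rule can be sketched in one predicate: a job may enter a critical section only if its deadline is earlier (i.e., its priority is higher) than the ceiling of every critical section currently in use. Ceiling values below are illustrative:

```python
def may_enter(job_deadline, ceilings_in_use):
    """Dynamic priority ceiling rule (sketch). Each in-use critical
    section's ceiling is the earliest deadline of the jobs that are in,
    or will enter, that section; smaller deadline = higher priority."""
    return all(job_deadline < c for c in ceilings_in_use)

print(may_enter(5, [8, 12]), may_enter(9, [8, 12]))  # True False
```

Under EDF the ceilings change as jobs arrive and complete, which is what makes the ceiling "dynamic" relative to the static ceilings of the original PCP.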


Proceedings ArticleDOI
01 Apr 1990
TL;DR: It is found that the “smallest number of processes first” (SNPF) scheduling discipline performs poorly, and policies that allocate an equal fraction of the processing power to each job in the system perform better, on the whole, than policies that allocated processing power unequally.
Abstract: Scheduling policies for general purpose multiprogrammed multiprocessors are not well understood. This paper examines various policies to determine which properties of a scheduling policy are the most significant determinants of performance. We compare a more comprehensive set of policies than previous work, including one important scheduling policy that has not previously been examined. We also compare the policies under workloads that we feel are more realistic than previous studies have used. Using these new workloads, we arrive at different conclusions than reported in earlier work. In particular, we find that the “smallest number of processes first” (SNPF) scheduling discipline performs poorly, even when the number of processes in a job is positively correlated with the total service demand of the job. We also find that policies that allocate an equal fraction of the processing power to each job in the system perform better, on the whole, than policies that allocate processing power unequally. Finally, we find that for lock access synchronization, dividing processing power equally among all jobs in the system is a more effective property of a scheduling policy than the property of minimizing synchronization spin-waiting, unless demand for synchronization is extremely high. (The latter property is implemented by coscheduling processes within a job, or by using a thread management package that avoids preemption of processes that hold spinlocks.) Our studies are done by simulating abstract models of the system and the workloads.

219 citations
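An "equal fraction of processing power" policy of the kind found effective here can be sketched as a simple equipartition of processors among jobs; job names are illustrative:

```python
def equipartition(total_processors, jobs):
    """Give each job floor(P/n) processors, handing leftovers out one at
    a time. A real scheduler would also cap each job at its parallelism."""
    n = len(jobs)
    base, extra = divmod(total_processors, n)
    return {job: base + (1 if i < extra else 0) for i, job in enumerate(jobs)}

print(equipartition(10, ["j1", "j2", "j3"]))
```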


Proceedings ArticleDOI
01 Jan 1990
TL;DR: Experimental evaluation of the algorithm shows that the heuristics and search techniques incorporated in the algorithm are extremely effective; specifically, if a task set can be feasibly allocated and scheduled, the algorithm is highly likely to find a solution without any backtracking during the search.
Abstract: A static algorithm for allocating and scheduling components of complex periodic tasks across sites in distributed systems is discussed. Besides dealing with the periodicity constraints (which have been the sole concern of many previous algorithms), this algorithm handles precedence, communication, and fault-tolerance requirements of subtasks of the tasks. The algorithm determines the allocation of subtasks of periodic tasks to sites, the scheduled start times of subtasks allocated to a site, and the schedule for communication along the communication channel(s). Experimental evaluation of the algorithm shows that the heuristics and search techniques incorporated in the algorithm are extremely effective. Specifically, they show that, if a task set can be feasibly allocated and scheduled, the algorithm is highly likely to find it without any backtracking during the search.

Journal ArticleDOI
TL;DR: A multiclass closed queueing network with two single-server stations with a large customer population and nearly balanced loading is considered, and a static priority policy is obtained that computes an index for each class and awards higher priority at station 1 (respectively, station 2) to classes with smaller (respectively, larger) values of this index.
Abstract: We consider a multiclass closed queueing network with two single-server stations. Each class requires service at a particular station, and customers change class after service according to specified probabilities. There is a general service time distribution for each class. The problem is to schedule the two servers to maximize the long-run average throughput of the network. By assuming a large customer population and nearly balanced loading of the two stations, the scheduling problem can be approximated by a dynamic control problem involving Brownian motion. A reformulation of this control problem is solved exactly and the solution is interpreted in terms of the queueing network to obtain a scheduling rule. We conjecture, quite naturally, that the resulting scheduling rule is asymptotically optimal under heavy traffic conditions, but no attempt is made to prove that. The scheduling rule is a static priority policy that computes an index for each class and awards higher priority at station 1 (respectively, station 2) to classes with the smaller (respectively, larger) values of this index. An analytical comparison of this rule to any other static policy is also obtained. An example is given that illustrates the procedure and demonstrates its effectiveness.

Proceedings ArticleDOI
01 Apr 1990
TL;DR: It is shown that for a wide range of plausible overhead values, dynamic scheduling is superior to static scheduling, and within the class of static schedulers, a simple “run to completion” scheme is preferable to a round-robin approach.
Abstract: Existing work indicates that the commonly used “single queue of runnable tasks” approach to scheduling shared memory multiprocessors can perform very poorly in a multiprogrammed parallel processing environment. A more promising approach is the class of “two-level schedulers” in which the operating system deals solely with allocating processors to jobs while the individual jobs themselves perform task dispatching on those processors. In this paper we compare two basic varieties of two-level schedulers. Those of the first type, static, make a single decision per job regarding the number of processors to allocate to it. Once the job has received its allocation, it is guaranteed to have exactly that number of processors available to it whenever it is active. The other class of two-level scheduler, dynamic, allows each job to acquire and release processors during its execution. By responding to the varying parallelism of the jobs, the dynamic scheduler promises higher processor utilizations at the cost of potentially greater scheduling overhead and more complicated application level task control policies. Our results, obtained via simulation, highlight the tradeoffs between the static and dynamic approaches. We investigate how the choice of policy is affected by the cost of switching a processor from one job to another. We show that for a wide range of plausible overhead values, dynamic scheduling is superior to static scheduling. Within the class of static schedulers, we show that, in most cases, a simple “run to completion” scheme is preferable to a round-robin approach. Finally, we investigate different techniques for tuning the allocation decisions required by the dynamic policies and quantify their effects on performance. We believe our results are directly applicable to many existing shared memory parallel computers, which for the most part currently employ a simple “single queue of tasks” extension of basic sequential machine schedulers.
We plan to validate our results in future work through implementation and experimentation on such a system.

01 Jan 1990
TL;DR: This work may improve the current practices employed in designing and constructing supervisory control systems by encouraging the use of modern software engineering methodologies and reducing the amount of tuning that is required to produce systems that meet their real-time constraints--while providing improved scheduling, graceful degradation, and more freedom in modifying the system over time.
Abstract: A real-time application is typically composed of a number of cooperating activities that must execute within specific time intervals. Since there are usually more activities to be executed than there are processors on which to execute them, several activities must share a single processor. Necessarily, satisfying the activities' timing constraints is a prime concern in making the scheduling decisions for that processor. Unfortunately, the activities are not independent. Rather, they share data and devices, observe concurrency constraints on code execution, and send signals to one another. These interactions can be modeled as contention for shared resources that must be used by one activity at a time. An activity awaiting access to a resource currently held by another activity is said to depend on that activity, and a dependency relationship is said to exist between them. Dependency relationships may encompass both precedence constraints and resource conflicts. No algorithm solves the problem of scheduling activities with dynamic dependency relationships in a way that is suitable for all real-time systems. This thesis provides an algorithm, called DASA, that is effective for scheduling the class of real-time systems known as supervisory control systems. Simulation experiments that account for the time required to make scheduling decisions demonstrate that DASA provides equivalent or superior performance to other scheduling algorithms of interest under a wide range of conditions for parameterized, synthetic workloads. DASA performs particularly well during overloads, when it is impossible to complete all of the activities.
This research makes a number of contributions to the field of computer science, including: a formal model for analyzing scheduling algorithms; the DASA scheduling algorithm, which integrates resource management with standard scheduling functions; results that demonstrate the efficacy of DASA in a variety of situations; and a simulator. In addition, this work may improve the current practices employed in designing and constructing supervisory control systems by encouraging the use of modern software engineering methodologies and reducing the amount of tuning that is required to produce systems that meet their real-time constraints--while providing improved scheduling, graceful degradation, and more freedom in modifying the system over time.

Journal ArticleDOI
01 May 1990
TL;DR: A superscalar processor that combines the best qualities of static and dynamic instruction scheduling to increase the performance of non-numerical applications and shows that a 1.6-times speedup over scalar code is achievable by boosting instructions above only a single conditional branch.
Abstract: This paper describes a superscalar processor that combines the best qualities of static and dynamic instruction scheduling to increase the performance of non-numerical applications. The architecture performs all instruction scheduling statically to take advantage of the compiler's ability to efficiently schedule operations across many basic blocks. Since the conditional branches in non-numerical code are highly data dependent, the architecture introduces the concept of boosted instructions, instructions that are committed conditionally upon the result of later branch instructions. Boosting effectively removes the dependencies caused by branches and makes the scheduling of side-effect instructions as simple as those that are side-effect free. For efficiency, boosting is supported in the hardware by shadow structures that temporarily hold the side effects of boosted instructions until the conditional branches that the boosted instructions depend upon are executed. When the branch condition is determined, the buffered side effects are either committed or squashed. The limited static scheduler in our evaluation system shows that a 1.6-times speedup over scalar code is achievable by boosting instructions above only a single conditional branch. This performance is similar to the performance of a pure dynamic scheduler.

Journal ArticleDOI
01 Dec 1990
TL;DR: In this article, the authors define a model that allows for communication delays between precedence-related tasks, and propose a classification of various submodels to address certain types of scheduling problems that arise when a parallel computation is to be executed on a multiprocessor.
Abstract: This paper addresses certain types of scheduling problems that arise when a parallel computation is to be executed on a multiprocessor. We define a model that allows for communication delays between precedence-related tasks, and propose a classification of various submodels. We also review complexity results and optimization and approximation algorithms that have been presented in the literature.

Journal ArticleDOI
TL;DR: An effective multiplier method-based differential dynamic programming (DDP) algorithm for solving the hydroelectric generation scheduling problem (HSP) is presented and results demonstrate the efficiency and optimality of the algorithm.
Abstract: An effective multiplier method-based differential dynamic programming (DDP) algorithm for solving the hydroelectric generation scheduling problem (HSP) is presented. The algorithm is developed for solving a class of constrained dynamic optimization problems. It relaxes all constraints but the system dynamics by the multiplier method and adopts the DDP solution technique to solve the resultant unconstrained dynamic optimization problem. The authors formulate the HSP of the Taiwan power system and apply the algorithm to it. Results demonstrate the efficiency and optimality of the algorithm for this application. Computational results indicate that the growth of the algorithm's run time with respect to the problem size is moderate. CPU times of the testing cases are well within the Taiwan Power Company's desirable performance: less than 30 minutes on a VAX/780 minicomputer for a one-week scheduling.

Journal ArticleDOI
TL;DR: A very general yet powerful backtracking procedure is presented for solving the duration minimization and net present value maximization problems in a precedence- and resource-constrained network of the PERT/CPM variety.

Proceedings ArticleDOI
05 Dec 1990
TL;DR: A new concurrency control algorithm for real-time database systems is proposed, by which real-time scheduling and concurrency control can be integrated.
Abstract: A new concurrency control algorithm for real-time database systems is proposed, by which real-time scheduling and concurrency control can be integrated. The algorithm is founded on a priority-based locking mechanism to support time-critical scheduling by adjusting the serialization order dynamically in favor of high priority transactions. Furthermore, it does not assume any knowledge about the data requirements or execution time of each transaction, making the algorithm very practical.

Journal ArticleDOI
TL;DR: The results of experiments conducted using the region scheduling technique in the generation of code for a reconfigurable long instruction word architecture are presented and the advantages of region scheduling over trace scheduling are discussed.
Abstract: Region scheduling, a technique applicable to both fine-grain and coarse-grain parallelism, uses a program representation that divides a program into regions consisting of source and intermediate level statements and permits the expression of both data and control dependencies. Guided by estimates of the parallelism present in regions, the region scheduler redistributes code, thus providing opportunities for parallelism in those regions containing insufficient parallelism compared to the capabilities of the executing architecture. The program representation and the transformations are applicable to both structured and unstructured programs, making region scheduling useful for a wide range of applications. The results of experiments conducted using the technique in the generation of code for a reconfigurable long instruction word architecture are presented. The advantages of region scheduling over trace scheduling are discussed.

Journal ArticleDOI
03 Jan 1990
TL;DR: Applications of Genetic Algorithms (GAs) to the Job Shop Scheduling (JSS) problem are described, and it is believed GAs can be employed as an additional tool in the Computer Integrated Manufacturing (CIM) cycle.
Abstract: We describe applications of Genetic Algorithms (GAs) to the Job Shop Scheduling (JSS) problem. More specifically, the task of generating inputs to the GA process for schedule optimization is addressed. We believe GAs can be employed as an additional tool in the Computer Integrated Manufacturing (CIM) cycle. Our technique employs an extension to the Group Technology (GT) method for generating manufacturing process plans. It positions the GA scheduling process to receive outputs from both the automated process planning function and the order entry function. The GA scheduling process then passes its results to the factory floor in terms of optimal schedules. An introduction to the GA process is discussed first. Then, an elementary n-task, one processor (machine) problem is provided to demonstrate the GA methodology in the JSS problem arena. The technique is then demonstrated on an n-task, two processor problem, and finally, the technique is generalized to the n-tasks on m-processors (serial) case.
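A toy version of the n-task, one-processor GA formulation can be sketched with permutation chromosomes minimizing total tardiness; the operators and parameters below are illustrative, not the paper's:

```python
import random

def total_tardiness(order, tasks):
    """tasks: name -> (processing_time, due_date)."""
    t, tardy = 0, 0
    for name in order:
        p, d = tasks[name]
        t += p
        tardy += max(0, t - d)
    return tardy

def ga_schedule(tasks, pop=30, gens=60, seed=0):
    """Toy one-machine GA: permutation chromosomes, cut-and-fill
    crossover, swap mutation, elitist survival of the best half.
    Parameters are illustrative, not tuned."""
    rng = random.Random(seed)
    names = list(tasks)
    population = [rng.sample(names, len(names)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: total_tardiness(o, tasks))
        survivors = population[: pop // 2]        # elitism keeps the best found
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(names))
            child = a[:cut] + [n for n in b if n not in a[:cut]]
            if rng.random() < 0.2:                # swap mutation
                i, j = rng.sample(range(len(names)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda o: total_tardiness(o, tasks))
    return best, total_tardiness(best, tasks)

tasks = {"a": (3, 5), "b": (2, 3), "c": (4, 12), "d": (1, 4)}
order, tardiness = ga_schedule(tasks)
print(order, tardiness)
```

Extending this to m serial processors, as the paper does, mainly changes the decoding of a chromosome into a schedule, not the GA machinery itself.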

Proceedings ArticleDOI
24 Jun 1990
TL;DR: A single high level synthesis algorithm is presented that schedules the operations of a data dependence graph, allocates the necessary hardware, and maps the operations to specific functional units by extending the global analysis approach developed for force-directed scheduling to include individual module instances.
Abstract: A single high-level synthesis algorithm is presented that schedules the operations of a data dependence graph, allocates the necessary hardware and maps the operations to specific functional units. This is achieved by extending the global analysis approach developed for force-directed scheduling to include individual module instances. This new algorithm should be applicable to any behavioral synthesis system that schedules operations from a data dependence graph.

Journal ArticleDOI
TL;DR: The performance results show that SSEDV outperforms SSEDO; that both of these new algorithms can improve performance by up to 38% over previously known real-time disk scheduling algorithms; and that all of these real-time scheduling algorithms are significantly better than non-real-time algorithms in the sense of minimizing the transaction loss ratio.
Abstract: In this paper, we present two new disk scheduling algorithms for real-time systems. The two algorithms, called SSEDO (for Shortest Seek and Earliest Deadline by Ordering) and SSEDV (for Shortest Seek and Earliest Deadline by Value), combine deadline information and disk service time information in different ways. The basic idea behind these new algorithms is to give the disk I/O request with the earliest deadline a high priority, but if a request with a larger deadline is "very" close to the current disk arm position, then it may be assigned the highest priority. The performance of SSEDO and SSEDV algorithms is compared with three other proposed real-time disk scheduling algorithms ED, P-SCAN, and FD-SCAN, as well as four conventional algorithms SSTF, SCAN, C-SCAN, and FCFS. An important aspect of the performance study is that the evaluation is not done in isolation with respect to the disk, but as part of an integrated collection of protocols necessary to support a real-time transaction system. The transaction system model is validated on an actual real-time transaction system testbed, called RT-CARAT. The performance results show that SSEDV outperforms SSEDO; that both of these new algorithms can improve performance by up to 38% over previously known real-time disk scheduling algorithms; and that all of these real-time scheduling algorithms are significantly better than non-real-time algorithms in the sense of minimizing the transaction loss ratio.
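The "by value" idea can be sketched as a weighted combination of normalized seek distance and deadline, so a nearly-due request usually wins but a very close one can overtake it. The weighting and normalization below are assumptions for illustration, not the paper's exact formulation:

```python
def ssedv_next(requests, head_pos, alpha=0.7):
    """Sketch of a Shortest-Seek / Earliest-Deadline 'by value' pick:
    each pending request gets a value mixing normalized seek distance
    and normalized deadline; the smallest value is served next.
    requests: dicts with 'id', 'track', 'deadline' (relative)."""
    max_seek = max(abs(r["track"] - head_pos) for r in requests) or 1
    max_dl = max(r["deadline"] for r in requests) or 1

    def value(r):
        seek = abs(r["track"] - head_pos) / max_seek    # closer arm -> smaller
        urgency = r["deadline"] / max_dl                # earlier deadline -> smaller
        return alpha * seek + (1 - alpha) * urgency

    return min(requests, key=value)

pending = [{"id": 1, "track": 90, "deadline": 20},
           {"id": 2, "track": 55, "deadline": 80},
           {"id": 3, "track": 10, "deadline": 60}]
print(ssedv_next(pending, head_pos=50)["id"])
```

With the head at track 50, the nearby request 2 wins despite its later deadline; move the head next to track 90 and the tight-deadline request 1 wins instead.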

Proceedings ArticleDOI
C. Le Pape
13 May 1990
TL;DR: The author advocates a mixed strategy, allowing robots to make and execute individual plans as well as to connect with a central task planner and scheduler when appropriate.
Abstract: Task planning and scheduling techniques developed as part of a project whose goal is to control the operations of many robots in the same environment are presented. Centralized approaches allow task allocation to be improved. However, they are not appropriate for coordinating the actions of multiple mobile robots in a dynamic environment where unforeseeable events occur. Consequently, the author advocates a mixed strategy, allowing robots to make and execute individual plans as well as to connect with a central task planner and scheduler when appropriate. The author first presents individual planning techniques. Then he describes a framework allowing robots to exchange information with a central system able to optimize task allocation.

Journal ArticleDOI
TL;DR: A discussion is presented of two ways of mapping the cells in a two-dimensional area of a chip onto processors in an n-dimensional hypercube such that both small and large cell moves can be applied.
Abstract: A discussion is presented of two ways of mapping the cells in a two-dimensional area of a chip onto processors in an n-dimensional hypercube such that both small and large cell moves can be applied. Two types of move are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support such a parallel cost evaluation. A novel tree broadcasting strategy is presented for the hypercube that is used extensively in the algorithm for updating cell locations in the parallel environment. A dynamic parallel annealing schedule is proposed that estimates the errors due to interacting parallel moves and adapts the rate of synchronization automatically. Two novel approaches in controlling error in parallel algorithms are described: heuristic cell coloring and adaptive sequence control. The performance on an Intel iPSC-2/D4/MX hypercube is reported.

Journal ArticleDOI
01 Jan 1990 - Networks
TL;DR: The model deals with the issues of rest-period identification, work/rest period sequencing, and shift scheduling simultaneously and is designed to handle multiple shift cases with time-varying demands.
Abstract: The rotating workforce scheduling problem involves the construction of an efficient sequence of work and rest periods spanning over a number of weeks. This schedule must satisfy the workforce requirements during the different shifts of each day and conform to all the other conditions imposed on the work/rest periods and their sequence. We consider the modeling of the rotating workforce scheduling problem as a network flow problem. All the constraints on the problem are incorporated in the network itself, except for the staff-covering constraints that are treated as side constraints. The optimal solution to the problem corresponds to a path in the network and is identified using a dual-based approach. The model deals with the issues of rest-period identification, work/rest period sequencing, and shift scheduling simultaneously and is designed to handle multiple shift cases with time-varying demands. The procedure, which is capable of solving large-scale problems, is applied to three well-known problems in rotating workforce scheduling. The computational results presented indicate that this procedure provides a useful method for solving large-scale complex problems in workforce scheduling.
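Independently of the network-flow formulation, the staff-covering side constraints have a simple structure: every employee follows the same cyclic line offset by one week, so on any day the coverage of a shift equals the count of that shift code in the corresponding column of the rotation matrix. A small illustrative check (not the paper's dual-based procedure; shift codes are invented here):

```python
def coverage(rotation, shift):
    """Given one rotation line per week (a string of daily shift codes) and
    a shift code, return how many employees work that shift on each day.
    With one employee per line, offset week by week, daily coverage is the
    count of the code in the corresponding column."""
    days = len(rotation[0])
    return [sum(1 for week in rotation if week[day] == shift)
            for day in range(days)]
```

A feasible rotation is then one whose per-day coverage meets the demand for every shift, which is what the side constraints in the network model enforce.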

Proceedings ArticleDOI
28 May 1990
TL;DR: Two location policies that, by adapting to the system load, capture the advantages of receiver-initiated, sender-initiated, and symmetrically initiated algorithms are presented and can be used in conjunction with a broad range of existing transfer policies.
Abstract: Two location policies that, by adapting to the system load, capture the advantages of receiver-initiated, sender-initiated, and symmetrically initiated algorithms are presented. A key feature of these location policies is that they are general and can be used in conjunction with a broad range of existing transfer policies. By means of simulation, two representative algorithms making use of these adaptive location policies are shown to be stable and to improve performance significantly relative to nonadaptive policies. >
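A minimal sketch of how such an adaptive location policy can be structured: nodes are reclassified between a receiver list and a sender list based on poll outcomes, so a loaded node probes likely receivers first instead of polling at random. The class name, threshold, and list discipline are invented here for illustration, not taken from the paper.

```python
class AdaptiveLocator:
    """Toy adaptive location policy: classify peers by observed load."""

    def __init__(self, nodes, threshold=2):
        self.threshold = threshold        # queue length dividing light/heavy
        self.receiver_list = list(nodes)  # nodes believed lightly loaded
        self.sender_list = []             # nodes believed heavily loaded

    def record_poll(self, node, queue_len):
        """Reclassify a polled node based on its reported queue length."""
        for lst in (self.receiver_list, self.sender_list):
            if node in lst:
                lst.remove(node)
        if queue_len < self.threshold:
            self.receiver_list.insert(0, node)  # likely able to accept work
        else:
            self.sender_list.insert(0, node)    # likely trying to shed work

    def pick_target(self):
        """A loaded sender probes the head of its receiver list first."""
        return self.receiver_list[0] if self.receiver_list else None
```

Because the lists are maintained by whatever polling the transfer policy already performs, this kind of location policy composes with many transfer policies, which is the generality the abstract emphasizes.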

Proceedings ArticleDOI
R. Camposano
02 Jan 1990
TL;DR: A path-based scheduling algorithm for synchronous digital systems is presented, which yields solutions with the minimum number of control steps, taking into account arbitrary constraints that limit the number of operations in each control step.
Abstract: A path-based scheduling algorithm for synchronous digital systems is presented. It yields solutions with the minimum number of control steps, taking into account arbitrary constraints that limit the number of operations in each control step. The result is a finite-state machine that implements the control. Although the complexity of the algorithm is proportional to the number of paths in the control-flow graph, it is shown to be practical for large examples with thousands of nodes. >

Journal ArticleDOI
TL;DR: The procedure presented is an efficient near-optimal method based on the Lagrangian relaxation technique and the list-scheduling concept that can be used to provide quick answers to what-if questions and to reconfigure the schedule to incorporate new jobs and other dynamic changes.
Abstract: A methodology is presented for scheduling jobs on identical, parallel machines. Each job comprises a small number of operations that must be processed in a specified order. The objective is to minimize the total weighted quadratic tardiness of the schedule, subject to capacity and precedence constraints. The procedure presented is an efficient near-optimal method based on the Lagrangian relaxation technique and the list-scheduling concept. In addition, the resulting job-interaction information can be used to provide quick answers to what-if questions and to reconfigure the schedule to incorporate new jobs and other dynamic changes. This scheduling methodology has been implemented in a knowledge-based scheduling system. Typical sizes of problems involve 35 to 40 machines and 100 to 200 jobs, each with 3 to 5 operations. >
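The list-scheduling half of such a method can be sketched briefly: jobs, taken in priority order, go to the machine that becomes free earliest, and the resulting schedule is scored by total weighted quadratic tardiness. This sketch omits the Lagrangian relaxation that would produce the priorities and the precedence constraints between operations; all names are illustrative.

```python
import heapq

def list_schedule(jobs, machines):
    """jobs: (name, processing_time) pairs in priority order; each job is
    placed on the machine that becomes free earliest.
    Returns name -> completion time."""
    free_at = [(0, m) for m in range(machines)]
    heapq.heapify(free_at)
    completion = {}
    for name, proc in jobs:
        t, m = heapq.heappop(free_at)
        completion[name] = t + proc
        heapq.heappush(free_at, (t + proc, m))
    return completion

def weighted_quadratic_tardiness(completion, due, weight):
    """The objective from the abstract: sum of w_j * max(0, C_j - d_j)^2."""
    return sum(weight[j] * max(0, completion[j] - due[j]) ** 2
               for j in completion)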

Proceedings ArticleDOI
24 Jun 1990
TL;DR: A novel property of timing constraints, called well-posedness, is analyzed and used to identify the consistency of constraints in the presence of unbounded-delay operations, and an approach to relative scheduling is presented that, in polynomial time, yields a minimum schedule satisfying the constraints or detects that no schedule exists.
Abstract: Scheduling techniques are used in high-level synthesis of integrated circuits. Traditional scheduling techniques assume fixed execution delays for the operations. For the synthesis of ASIC designs that interface with external signals and events, operations with unbounded delays, i.e., delays unknown at compile time, must also be considered. A relative scheduling technique that supports operations with fixed and unbounded delays is presented. The technique satisfies timing constraints imposed by the user, which place bounds on the time between activations of operations. A novel property of timing constraints called well-posedness is analyzed and used to identify the consistency of constraints in the presence of unbounded-delay operations, and an approach to relative scheduling is presented that yields a minimum schedule satisfying the constraints, or detects that no schedule exists, in polynomial time. >
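The offset computation underlying relative scheduling can be sketched as a longest-path problem: operations anchored to an unbounded-delay operation get start times measured from that anchor's (unknown) completion, propagated along fixed-delay precedence edges. A minimal illustrative version, with invented names and the graph supplied in topological order:

```python
def relative_offsets(ops, edges, delay, anchor):
    """Longest-path start offsets measured from `anchor`'s completion.
    ops: operations in topological order; edges: (u, v) precedence pairs;
    delay[u]: fixed execution delay. The anchor's own unknown delay is
    excluded, since offsets are relative to its completion."""
    offset = {op: None for op in ops}  # None = not reachable from anchor
    offset[anchor] = 0
    for u in ops:
        if offset[u] is None:
            continue
        step = 0 if u == anchor else delay[u]
        for a, b in edges:
            if a == u:
                cand = offset[u] + step
                if offset[b] is None or cand > offset[b]:
                    offset[b] = cand
    return offset
```

At run time the actual start of each operation is its anchor's completion event plus this fixed offset, which is how a schedule can be "minimum" even though the anchor's delay is unknown at compile time.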