
Showing papers on "Scheduling (computing)" published in 1988


Journal ArticleDOI
TL;DR: An approximation method for solving the minimum makespan problem of job shop scheduling that sequences the machines one by one, successively, each time taking the machine identified as a bottleneck among the machines not yet sequenced.
Abstract: We describe an approximation method for solving the minimum makespan problem of job shop scheduling. It sequences the machines one by one, successively, taking each time the machine identified as a bottleneck among the machines not yet sequenced. Every time after a new machine is sequenced, all previously established sequences are locally reoptimized. Both the bottleneck identification and the local reoptimization procedures are based on repeatedly solving certain one-machine scheduling problems. Besides this straight version of the Shifting Bottleneck Procedure, we have also implemented a version that applies the procedure to the nodes of a partial search tree. Computational testing shows that our approach yields consistently better results than other procedures discussed in the literature. A high point of our computational testing occurred when the enumerative version of the Shifting Bottleneck Procedure found in a little over five minutes an optimal schedule to a notorious ten machines/ten jobs problem on which many algorithms have been run for hours without finding an optimal solution.
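The one-machine subproblems at the heart of the procedure are head-body-tail problems (minimizing maximum lateness with release and delivery times). As a minimal illustration, not taken from the paper, here is Schrage's classic greedy rule for such a subproblem, with made-up job data:

```python
import heapq

def schrage(jobs):
    """Greedy heuristic for the one-machine head/tail problem:
    among released jobs, always run the one with the largest tail q.
    jobs: list of (r, p, q) = (release, processing, delivery) times.
    Returns the heuristic makespan max_j (C_j + q_j)."""
    pending = sorted(jobs)            # by release time
    ready, t, best, i = [], 0, 0, 0
    while ready or i < len(pending):
        if not ready:                 # machine idle: jump to next release
            t = max(t, pending[i][0])
        while i < len(pending) and pending[i][0] <= t:
            r, p, q = pending[i]
            heapq.heappush(ready, (-q, p))   # max-heap on tail q
            i += 1
        negq, p = heapq.heappop(ready)
        t += p
        best = max(best, t - negq)    # completion time plus tail
    return best

# Example (invented data): three jobs (release, processing, tail)
print(schrage([(0, 3, 2), (1, 2, 5), (2, 1, 4)]))
```

Being a heuristic, this can miss the optimum; the Shifting Bottleneck Procedure embeds exact or near-exact solutions of these subproblems inside its bottleneck-identification and reoptimization loops.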

1,579 citations


Journal ArticleDOI
TL;DR: A taxonomy of approaches to the resource management problem is presented in an attempt to provide a common terminology and classification mechanism necessary in addressing this problem.
Abstract: One measure of the usefulness of a general-purpose distributed computing system is the system's ability to provide a level of performance commensurate to the degree of multiplicity of resources present in the system. A taxonomy of approaches to the resource management problem is presented in an attempt to provide a common terminology and classification mechanism necessary in addressing this problem. The taxonomy, while presented and discussed in terms of distributed scheduling, is also applicable to most types of resource management.

1,082 citations


Proceedings Article
29 Aug 1988
TL;DR: In this paper, the authors develop a new family of algorithms for scheduling real-time transactions with deadlines, which have four components: a policy to manage overloads, a policy for scheduling the CPU, a policy for scheduling access to data (i.e., concurrency control), and a policy for scheduling I/O requests on a disk device.
Abstract: This thesis has six chapters. Chapter 1 motivates the thesis by describing the characteristics of real-time database systems and the problems of scheduling transactions with deadlines. We also present a short survey of related work and discuss how this thesis has contributed to the state of the art. In Chapter 2 we develop a new family of algorithms for scheduling real-time transactions. Our algorithms have four components: a policy to manage overloads, a policy for scheduling the CPU, a policy for scheduling access to data (i.e., concurrency control), and a policy for scheduling I/O requests on a disk device. In Chapter 3, our scheduling algorithms are evaluated via simulation. Our chief result is that real-time scheduling algorithms can perform significantly better than a conventional non real-time algorithm. In particular, the Least Slack (static evaluation) policy for scheduling the CPU, combined with the Wait Promote policy for concurrency control, produces the best overall performance. In Chapter 4 we develop a new set of algorithms for scheduling disk I/O requests with deadlines. Our model assumes the existence of a real-time database system which assigns deadlines to individual read and write requests. We also propose new techniques for handling requests without deadlines and requests with deadlines simultaneously. This approach greatly improves the performance of the algorithms and their ability to minimize missed deadlines. In Chapter 5 we evaluate the I/O scheduling algorithms using detailed simulation. Our chief result is that real-time disk scheduling algorithms can perform better than conventional algorithms. In particular, our algorithm FD-SCAN was found to be very effective across a wide range of experiments. Finally, in Chapter 6 we summarize our conclusions and discuss how this work has contributed to the state of the art. Also, we briefly explore some interesting new directions for continuing this research.
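The Least Slack rule mentioned above can be sketched in a few lines; the task representation and field names here are illustrative, not the thesis's:

```python
def least_slack_pick(now, tasks):
    """Pick the ready task with minimum slack, where
    slack = deadline - now - remaining work.
    tasks: list of dicts with hypothetical keys 'deadline' and 'remaining'."""
    return min(tasks, key=lambda t: t["deadline"] - now - t["remaining"])

# At time 0: task B has slack 8 - 0 - 6 = 2, task A has slack 10 - 0 - 3 = 7,
# so B (the more urgent one) is chosen.
a = {"deadline": 10, "remaining": 3}
b = {"deadline": 8, "remaining": 6}
print(least_slack_pick(0, [a, b]))
```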

682 citations


Book
01 Jan 1988

600 citations


Journal ArticleDOI
TL;DR: The author proposes a family of heuristic algorithms for Stone's classic model of communicating tasks, with the goal of minimizing the total execution and communication costs incurred by an assignment, and augments this model to include interference costs which reflect the degree of incompatibility between two tasks.
Abstract: We investigate the problem of static task assignment in distributed computing systems, i.e. given a set of k communicating tasks to be executed on a distributed system of n processors, to which processor should each task be assigned? The author proposes a family of heuristic algorithms for Stone's classic model of communicating tasks whose goal is the minimization of the total execution and communication costs incurred by an assignment. In addition, she augments this model to include interference costs which reflect the degree of incompatibility between two tasks. Whereas high communication costs serve as a force of attraction between tasks, causing them to be assigned to the same processor, interference costs serve as a force of repulsion between tasks, causing them to be distributed over many processors. The inclusion of interference costs in the model yields assignments with greater concurrency, thus overcoming the tendency of Stone's model to assign all tasks to one or a few processors. Simulation results show that the algorithms perform well and in particular, that the highly efficient Simple Greedy Algorithm performs almost as well as more complex heuristic algorithms.

424 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: This paper presents several general results concerning competitive algorithms, as well as results on specific on-line problems.
Abstract: An on-line problem is one in which an algorithm must handle a sequence of requests, satisfying each request without knowledge of the future requests. Examples of on-line problems include scheduling the motion of elevators, finding routes in networks, allocating cache memory, and maintaining dynamic data structures. A competitive algorithm for an on-line problem has the property that its performance on any sequence of requests is within a constant factor of the performance of any other algorithm on the same sequence. This paper presents several general results concerning competitive algorithms, as well as results on specific on-line problems.
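As a concrete textbook instance of a competitive on-line algorithm, not specific to this paper: Graham's greedy list scheduling assigns each arriving job to the currently least-loaded of m machines and is (2 - 1/m)-competitive for makespan. A sketch with invented job sizes:

```python
def greedy_online(jobs, m):
    """On-line list scheduling: place each arriving job on the
    least-loaded machine, without knowledge of future jobs."""
    loads = [0] * m
    for p in jobs:
        loads[loads.index(min(loads))] += p
    return max(loads)

def opt_lower_bound(jobs, m):
    """Any schedule needs at least the longest job and the average load."""
    return max(max(jobs), sum(jobs) / m)

jobs, m = [2, 3, 4, 5, 6], 2
online = greedy_online(jobs, m)
bound = opt_lower_bound(jobs, m)
# The on-line makespan stays within the competitive factor 2 - 1/m
# of the off-line lower bound.
print(online, bound, online / bound)
```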

412 citations


Journal ArticleDOI
TL;DR: This work considers the problem of scheduling project networks subject to arbitrary resource constraints in order to minimize an arbitrary regular performance measure (i.e. a non-decreasing function of the vector of completion times).
Abstract: Project networks with time windows are generalizations of the well-known CPM and MPM networks that allow for the introduction of arbitrary minimal and maximal time lags between the starting and completion times of any pair of activities. We consider the problem of scheduling such networks subject to arbitrary (even time dependent) resource constraints in order to minimize an arbitrary regular performance measure (i.e. a non-decreasing function of the vector of completion times). This problem arises in many standard industrial construction or production processes and is therefore particularly suited as a background model in general purpose decision support systems. The treatment is done by a structural approach that involves a generalization of both the disjunctive graph method in job shop scheduling [1] and the order theoretic methods for precedence constrained scheduling [18,23,24]. Besides theoretical insights into the problem structure, this approach also leads to rather powerful branch-and-bound algorithms. Computational experience with this algorithm is reported.

403 citations


Patent
08 Sep 1988
TL;DR: A method for the prospective scheduling, periodic monitoring and dynamic management of a plurality of interrelated and interdependent resources using a computer system; it includes providing a data base containing information about the resources and graphically displaying utilization and availability of the resources as a function of time.
Abstract: The invention relates to the method for the prospective scheduling, periodic monitoring and dynamic management of a plurality of interrelated and interdependent resources using a computer system. The method includes providing a data base containing information about the resources and graphically displaying utilization and availability of the resources as a function of time. Indicia can be made to appear on the display to provide visual identification of symbols as well as information about scheduling, status and conflicts involving the resources. In addition, access to the data base can be made available to provide a continuous update of the display so that the display of the resources is for the most recent data in the data base. Access to the data base can also permit the operator to call up a wide variety of information about the resources and can also be used to track events and procedures.

360 citations


Book ChapterDOI
01 Jan 1988
TL;DR: Consider an open queueing network with I single-server stations and K customer classes, where each customer class requires service at a specified station, and customers change class after service in a Markovian fashion.
Abstract: Consider an open queueing network with I single-server stations and K customer classes. Each customer class requires service at a specified station, and customers change class after service in a Markovian fashion. (With K allowed to be arbitrary, this routing structure is almost perfectly general.) There is a renewal input process and general service time distribution for each class. The correspondence between customer classes and service stations is in general many to one, and the service discipline (or scheduling protocol) at each station is left as a matter for dynamic decision making.

358 citations


J.W. Wong1
01 Dec 1988
TL;DR: The architecture and performance of systems that use a broadcast channel to deliver information to a community of users are discussed, and existing scheduling algorithms are described and their mean-response-time performance evaluated.
Abstract: The architecture and performance of systems that use a broadcast channel to deliver information to a community of users are discussed. Information is organized into units called pages, and at any instant of time, two or more users may request the same page. Broadcast delivery is attractive for such an environment because a single transmission of a page will satisfy all pending requests for that page. Three alternative architectures for broadcast information delivery systems are considered. They are one-way broadcast, two-way interaction, and hybrid one-way broadcast/two-way interaction. An important design issue is the scheduling of page transmissions such that the user response time is minimized. For each architecture, existing scheduling algorithms are described, and their mean-response-time performance evaluated. Properties of scheduling algorithms that yield optimal mean response time are discussed. A comparative discussion of the performance differences of the three architectures is also provided.
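One plausible page-selection rule for broadcast delivery, offered only as an illustration (the paper's actual algorithms may differ), is to transmit the page whose pending requests have accumulated the most total waiting time; a single broadcast then clears all of them:

```python
def next_page(pending, now):
    """Choose the page whose outstanding requests have waited longest in
    aggregate. pending: dict mapping page id -> list of request arrival
    times (an illustrative representation)."""
    return max(pending, key=lambda page: sum(now - t for t in pending[page]))

# Two requests for page A (arrived at t=0 and t=1) have total wait 5 at t=3;
# page B's single request has waited only 1, so A is broadcast first.
print(next_page({"A": [0, 1], "B": [2]}, now=3))
```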

350 citations


Journal ArticleDOI
TL;DR: The effectiveness of the proposed algorithms is empirically evaluated; the results indicate that these heuristic algorithms yield optimal or near-optimal schedules in many cases.

Journal ArticleDOI
TL;DR: Grain packing reduces total execution time by balancing execution time and communication time; used with an optimizing scheduler, it gives consistently better results than human-engineered scheduling and packing.
Abstract: A method called grain packing is proposed as a way to optimize parallel programs. A grain is defined as one or more concurrently executing program modules. A grain begins executing as soon as all of its inputs are available, and terminates only after all of its outputs have been computed. Grain packing reduces total execution time by balancing execution time and communication time. Used with an optimizing scheduler, it gives consistently better results than human-engineered scheduling and packing. The method is language-independent and is applicable to both extended serial and concurrent programming languages, including Occam, Fortran, and Pascal.

Journal ArticleDOI
TL;DR: A trace-driven simulation study of dynamic load balancing in homogeneous distributed systems supporting broadcasting finds that, with initial job placements only, source initiative algorithms perform better than server initiative algorithms, and that the performance of all hosts, even those originally with light loads, is generally improved by load balancing.
Abstract: A trace-driven simulation study of dynamic load balancing in homogeneous distributed systems supporting broadcasting is presented. Information about job CPU and input/output (I/O) demands collected from production systems is used as input to a simulation model that includes a representative CPU scheduling policy and considers the message exchange and job transfer cost explicitly. Seven load-balancing algorithms are simulated and their performances compared. Load balancing is capable of significantly reducing the mean and standard deviation of job response times, especially under heavy load, and for jobs with high resource demands. Algorithms based on periodic or nonperiodic load information exchange provide similar performance, and, among the periodic policies, the algorithms that use a distinguished agent to collect and distribute load information cut down the overhead and scale better. With initial job placements only, source initiative algorithms were found to perform better than server initiative algorithms. The performances of all hosts, even those originally with light loads, are generally improved by load balancing.

Journal ArticleDOI
TL;DR: Whereas central-processing-unit schedulers have traditionally allocated resources fairly among processes, a fair-share scheduler allocates resources so that users get their fair machine share over a long period.
Abstract: Central-processing-unit schedulers have traditionally allocated resources fairly among processes. By contrast, a fair-share scheduler allocates resources so that users get their fair machine share over a long period.
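A minimal sketch of the fair-share idea, with invented data structures (a per-user decayed usage figure and a pick rule); this is not the scheduler described in the paper:

```python
def decay_usage(usage, factor=0.5):
    """Periodically decay each user's accumulated CPU usage so that
    fairness is judged over a long (but not unbounded) window."""
    return {user: u * factor for user, u in usage.items()}

def fair_share_pick(procs, usage):
    """Run a process owned by the user who has consumed the least CPU.
    procs: list of (pid, user); usage: user -> decayed CPU usage."""
    return min(procs, key=lambda p: usage.get(p[1], 0.0))

usage = {"alice": 5.0, "bob": 2.0}
# bob's user-level usage is lower, so his process runs next even if
# alice owns more runnable processes.
print(fair_share_pick([(1, "alice"), (2, "bob")], usage))
usage = decay_usage(usage)  # ageing step between scheduling epochs
```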

Proceedings Article
21 Aug 1988
TL;DR: IPEM (Integrated Planning, Execution and Monitoring) provides a simple, clear and well-defined framework to integrate these processes, yielding a local ability to replan after both unexpected events and execution failure.
Abstract: IPEM, for Integrated Planning, Execution and Monitoring, provides a simple, clear and well defined framework to integrate these processes. Representation integration is achieved by naturally incorporating execution and monitoring information into TWEAK's partial plan representation [Chapman, 1987]. Control integration is obtained by using a production system architecture where IF-THEN rules, referred to as flaws and fixes, specify partial plan transformations. Conflict resolution is done using a scheduler that embodies the current problem solving strategy. Since execution and plan elaboration operations have been designed to be independently applicable, and execution of an action is a scheduling decision like any other, the framework effectively supports interleaving of planning and execution (IPE). This yields a local ability to replan after both unexpected events and execution failure. The framework has served as the basis for an implemented hierarchical, nonlinear planning and execution system that has been tested on numerous examples, on various domains, and has been shown to be reliable and robust.

Journal ArticleDOI
01 May 1988
TL;DR: A preliminary investigation of a number of fundamental issues important in scheduling concurrent jobs on multiprogrammed parallel systems, aimed at gaining insight into system behaviour and the basic principles underlying the performance of scheduling strategies in such parallel systems.
Abstract: Processor scheduling on multiprocessor systems that simultaneously run concurrent applications is currently not well-understood. This paper reports a preliminary investigation of a number of fundamental issues which are important in the context of scheduling concurrent jobs on multiprogrammed parallel systems. The major motivation for this research is to gain insight into system behaviour and understand the basic principles underlying the performance of scheduling strategies in such parallel systems. Based on abstract models of systems and scheduling disciplines, several high level issues that are important in this context have been analysed.

Journal ArticleDOI
01 Mar 1988
TL;DR: This paper discusses solutions for two problems: what is a reasonable method for modeling real-time constraints for database transactions, and how should such transactions be scheduled, given that time constraints add a new dimension to concurrency control?
Abstract: Scheduling transactions with real-time requirements presents many new problems. In this paper we discuss solutions for two of these problems. The first is: what is a reasonable method for modeling real-time constraints for database transactions? Traditional hard real-time constraints (e.g., deadlines) may be too limited. Many transactions have soft deadlines, and a more flexible model is needed to capture these soft time constraints. The second problem we address is scheduling. Time constraints add a new dimension to concurrency control. Not only must a schedule be serializable but it also should meet the time constraints of all the transactions in the schedule.

Proceedings ArticleDOI
01 Jun 1988
TL;DR: The integrated code scheduling method combines two scheduling techniques—one to reduce pipeline delays and the other to minimize register usage—into a single phase, and the DAG-driven register allocator uses a dependency graph to assist in assigning registers.
Abstract: We discuss the issues about the interdependency between code scheduling and register allocation. We present two methods as solutions: (1) an integrated code scheduling technique; and (2) a DAG-driven register allocator. The integrated code scheduling method combines two scheduling techniques—one to reduce pipeline delays and the other to minimize register usage—into a single phase. By keeping track of the number of available registers, the scheduler can choose the appropriate scheduling technique to schedule a better code sequence. The DAG-driven register allocator uses a dependency graph to assist in assigning registers; it introduces much less extra dependency than does an ordinary register allocator. For large basic blocks, both approaches were shown to generate more efficient code sequences than conventional techniques in the simulations.
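A toy list scheduler in the spirit of the integrated approach described above: it tracks the number of live values and switches from a latency-oriented priority to a pressure-reducing one as registers run low. The single-definition/kill-count model and all structures here are simplifying assumptions, not the paper's formulation:

```python
def integrated_schedule(nodes, succs, latency, kills, num_regs):
    """Schedule a DAG of instructions. While registers are plentiful,
    rank ready nodes by latency (to hide pipeline delays); once the
    live-value count nears num_regs, prefer nodes that free the most
    registers. Each node defines one value and kills `kills[n]` values."""
    preds_left = {n: 0 for n in nodes}
    for n in nodes:
        for s in succs.get(n, ()):
            preds_left[s] += 1
    ready = [n for n in nodes if preds_left[n] == 0]
    order, live = [], 0
    while ready:
        if live < num_regs - 1:
            n = max(ready, key=lambda x: latency[x])   # latency-oriented
        else:
            n = max(ready, key=lambda x: kills[x])     # pressure-reducing
        ready.remove(n)
        order.append(n)
        live += 1 - kills[n]
        for s in succs.get(n, ()):
            preds_left[s] -= 1
            if preds_left[s] == 0:
                ready.append(s)
    return order

# Tiny invented DAG: a and b feed c, c feeds d.
order = integrated_schedule(
    nodes=["a", "b", "c", "d"],
    succs={"a": ["c"], "b": ["c"], "c": ["d"]},
    latency={"a": 3, "b": 1, "c": 1, "d": 1},
    kills={"a": 0, "b": 0, "c": 2, "d": 1},
    num_regs=3,
)
print(order)
```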

Journal ArticleDOI
TL;DR: It is shown that the shortest time to extinction (STE) policy is optimal for a class of continuous and discrete time nonpreemptive M/G/1 queues that do not allow unforced idle times.
Abstract: Many problems can be modeled as single-server queues with impatient customers. An example is that of the transmission of voice packets over a packet-switched network. If the voice packets do not reach their destination within a certain time interval of their transmission, they are useless to the receiver and considered lost. It is therefore desirable to schedule the customers such that the fraction of customers served within their respective deadlines is maximized. For this measure of performance, it is shown that the shortest time to extinction (STE) policy is optimal for a class of continuous and discrete time nonpreemptive M/G/1 queues that do not allow unforced idle times. When unforced idle times are allowed, the best policies belong to the class of shortest time to extinction with inserted idle time (STEI) policies. An STEI policy requires that the customer closest to his or her deadline be scheduled whenever it schedules a customer. It also has the choice of inserting idle times while the queue is nonempty. It is also shown that the STE policy is optimal for the discrete time G/D/1 queue where all customers receive one unit of service. The paper concludes with a comparison of the expected customer loss using an STE policy with that of the first-come, first-served (FCFS) scheduling policy for one specific queue.
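A minimal sketch of the STE rule for a batch of impatient customers, assuming (as a simplification not stated in the paper) that all customers are present at the start and that a customer is lost if service has not begun by its deadline:

```python
def ste_order(customers, start=0.0):
    """Nonpreemptively serve customers in order of nearest deadline
    (shortest time to extinction). customers: list of
    (deadline, service_time) pairs, all present at time `start`."""
    served, lost, t = [], [], start
    for deadline, svc in sorted(customers):   # nearest deadline first
        if t < deadline:                      # still alive when service begins
            served.append((deadline, svc))
            t += svc
        else:                                 # expired while waiting: lost
            lost.append((deadline, svc))
    return served, lost

# With deadlines 1, 2, 3 and these service times, STE saves two of the
# three customers; the middle one expires while the first is in service.
served, lost = ste_order([(1, 2), (2, 2), (3, 1)])
print(len(served), len(lost))
```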

Journal ArticleDOI
Robert J. Wittrock1
TL;DR: An algorithm that schedules the loading of parts into a manufacturing line, primarily to minimize the makespan and secondarily to minimize queueing.
Abstract: Consider a manufacturing line that produces parts of several types. Each part must be processed by at most one machine in each of several banks of machines. This paper presents an algorithm that schedules the loading of parts into such a line. The objective is primarily to minimize the makespan and secondarily to minimize queueing. The problem is decomposed into three subproblems and each of these is solved using a fast heuristic. The most challenging subproblem is that of finding a good loading sequence, and this is addressed using workload concepts and an approximation to dynamic programming. We make several extensions to the algorithm in order to handle limited storage capacity, expediting, and reactions to system dynamics. The algorithm was tested by computing schedules for a real production line, and the results are discussed.

Journal ArticleDOI
TL;DR: An algorithm for scheduling the load control using dynamic programming is presented, based on an analytic dynamic model of the load under control, which can be used for different utility objectives, including minimizing production cost and minimizing peak load over a period of time.
Abstract: Many utilities have load management programs whereby they directly control residential appliances in their service area. An algorithm for scheduling the load control using dynamic programming is presented. This method is based on an analytic dynamic model of the load under control. The method can be used for different utility objectives, including minimizing production cost and minimizing peak load over a period of time.

Patent
10 Mar 1988
TL;DR: In this paper, a task scheduler system including an array of priority queues for use in a real-time multitasking operating system including equation lists, configuration lists, a function library, input and output drivers, user-created task definition lists of major and minor tasks and interrupt handlers is presented.
Abstract: A task scheduler system including an array of priority queues for use in a real time multitasking operating system including equation lists, configuration lists, a function library, input and output drivers, user-created task definition lists of major and minor tasks, and interrupt handlers. The system includes task scheduling apparatus which, upon the completion of each library function, interrogates the priority queues, finds the highest priority task segment whose requested resource is available, and executes it, and which executes task segments in the same priority queue in round-robin fashion. The system further includes task creation apparatus and apparatus for maintaining the status of all major tasks in a system in the states of unlocked and done, unlocked and active, unlocked and waiting, locked and active, or locked and waiting. The status maintaining apparatus also includes apparatus for locking tasks into a mode of operation such that the task scheduler will only allow the locked task to execute and the normal state of priority execution is overridden, and waiting apparatus for suspending operation on a task that requires completion of a library function.
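The array-of-priority-queues behaviour described above can be sketched as follows; the resource-availability callback and (task, resource) tuple layout are illustrative, not the patent's:

```python
from collections import deque

def pick_task(queues, available):
    """Scan priority levels from highest (index 0) to lowest; within a
    level, cycle entries round-robin. Return the first task whose
    requested resource is available, or None if nothing can run."""
    for q in queues:
        for _ in range(len(q)):
            task, resource = q.popleft()
            q.append((task, resource))   # rotate to the back either way
            if available(resource):
                return task
    return None

queues = [
    deque([("t1", "disk")]),                 # highest priority
    deque([("t2", "cpu"), ("t3", "cpu")]),   # lower priority
]
cpu_only = lambda r: r == "cpu"
# t1's disk is unavailable, so t2 and t3 alternate round-robin.
print(pick_task(queues, cpu_only))
print(pick_task(queues, cpu_only))
```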

Proceedings ArticleDOI
06 Dec 1988
TL;DR: Two task-scheduling algorithms for distributed hard real-time computer systems are presented, based on a heuristic approach, that outperform all the baseline algorithms except for the ideal but impractical centralized baseline and in many cases perform close to the ideal.
Abstract: Two task-scheduling algorithms for distributed hard real-time computer systems are presented. Both algorithms are based on a heuristic approach and explicitly account for both the deadlines and criticality of tasks when making scheduling decisions. In analyzing the algorithms, a performance metric called the weighted guarantee ratio is defined. It reflects both the percentage of tasks that make their deadlines and their relative worth to the system. The performance is analyzed by simulating the behavior of the algorithms as well as that of several other pertinent baseline algorithms under a wide range of system conditions including a nonhomogeneous task arrival rate. The results show that the algorithms outperform all the baseline algorithms except for the ideal but impractical centralized baseline and in many cases perform close to the ideal.
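The weighted guarantee ratio can be computed directly from the description above; the (weight, met) pair representation is illustrative:

```python
def weighted_guarantee_ratio(tasks):
    """Fraction of total task worth whose deadlines were met.
    tasks: list of (weight, met_deadline) pairs."""
    total = sum(w for w, _met in tasks)
    met = sum(w for w, met in tasks if met)
    return met / total if total else 0.0

# A critical task (weight 3) and a routine one (weight 2) met their
# deadlines; a weight-1 task missed: ratio = 5/6.
print(weighted_guarantee_ratio([(3, True), (1, False), (2, True)]))
```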

Journal ArticleDOI
TL;DR: A hierarchical model for VLSI circuit testing is introduced and very efficient suboptimum algorithms are proposed for defining test schedules for both the equal length test and unequal length test cases.
Abstract: The test scheduling problem for equal length and unequal length tests for VLSI circuits using built-in self-test (BIST) has been modeled. A hierarchical model for VLSI circuit testing is introduced. The test resource sharing model from C. Kime and K. Saluja (1982) is employed to exploit the potential parallelism. Based on this model, very efficient suboptimum algorithms are proposed for defining test schedules for both the equal length test and unequal length test cases. For the unequal length test case, three different scheduling disciplines are defined, and scheduling algorithms are given for two of the three cases. Data on algorithm performance are presented. The issue of the control of the test schedule is also addressed, and a number of structures are proposed for implementation of control.

Journal ArticleDOI
TL;DR: This work developed and performed limited testing of a scheduling system, called OPIS 0, that exhibits opportunistic behavior; such opportunistic views of scheduling are believed to lead to systems that allow more flexibility in designing scheduling procedures and supporting the scheduling function.
Abstract: In a search for more efficient yet effective ways of solving combinatorially complex problems such as jobshop scheduling, we move towards opportunistic approaches that attempt to exploit the structure of a given problem. Rather than adhere to a single problem-solving plan, such approaches are characterized by almost continual surveillance of the current problem-solving state to possibly modify plans so that activity is consistently directed toward those actions that currently seem most promising. Opportunistic behavior may occur in problem decomposition down to selective application of scheduling heuristics. We developed and performed limited testing of a scheduling system, called OPIS 0, that exhibits such behavior to some extent. The results are encouraging when compared to ISIS and a dispatching system. It is believed that such opportunistic views of scheduling would lead to systems that allow more flexibility in terms of designing scheduling procedures and supporting the scheduling function.

Journal ArticleDOI
TL;DR: This article integrates the concepts of the Dynamic Cycle Lengths Heuristic with a nonlinear integer optimization model to obtain an overall scheduling policy that allocates items to machines and schedules production quantities during the next time period.
Abstract: An effective scheduling policy known as the Dynamic Cycle Lengths Heuristic was introduced by Leachman and Gascon in 1988 for the multi-item, single-machine production system facing stochastic, time-varying demands. In this article we develop a heuristic scheduling policy for the multi-machine extension of the same problem. We integrate the concepts of the Dynamic Cycle Lengths Heuristic with a nonlinear integer optimization model to obtain an overall scheduling policy that allocates items to machines and schedules production quantities during the next time period. We report promising performance in limited simulation tests of the policy.

Journal ArticleDOI
TL;DR: A critical review of research on the single machine static scheduling problem with two criteria considered jointly, either with one criterion treated as a constraint or with a joint objective function in which both criteria are included.
Abstract: The single machine static scheduling problem has been studied with a number of criteria. Increasingly, this problem has been considered with two criteria jointly. Such attempts have usually taken two approaches. One approach is to consider one of the two criteria as the objective function and the other as a constraint. The other approach is to consider a joint objective function where both criteria are included. This paper presents a critical review of this research.

Proceedings ArticleDOI
13 Jun 1988
TL;DR: It is found that while placement alone is capable of large improvement in performance, the addition of migration can achieve considerable additional improvement.
Abstract: The authors consider whether the addition of a migration facility to a distributed scheduler already capable of placement can significantly improve performance. They examine performance over a broad range of workload characteristics and file system structures. They find that while placement alone is capable of large improvement in performance, the addition of migration can achieve considerable additional improvement.

Journal ArticleDOI
TL;DR: In this article, the authors show that software pipelining is an effective and viable scheduling technique for VLIW processors, where iterations of a loop in the source program are continuously in...
Abstract: This paper shows that software pipelining is an effective and viable scheduling technique for VLIW processors. In software pipelining, iterations of a loop in the source program are continuously in...

Journal ArticleDOI
TL;DR: This paper presents a method for real-time scheduling and routing of material in a Flexible Manufacturing System (FMS) that extends the earlier scheduling work of Kimemia and Gershwin in which the FMS model includes machines that fail at random times and stay down for random lengths of time.
Abstract: This paper presents a method for real-time scheduling and routing of material in a Flexible Manufacturing System (FMS). It extends the earlier scheduling work of Kimemia and Gershwin in which the FMS model includes machines that fail at random times and stay down for random lengths of time. The new element is the capability of different machines to perform some of the same operations. The times that different machines require to perform the same operation may differ. This paper includes a model, its analysis, a real-time algorithm, and examples.