
Showing papers on "Scheduling (computing) published in 1979"


Posted Content
01 Jan 1979
TL;DR: The complexity of a class of vehicle routing and scheduling problems is investigated: known NP-hardness results are reviewed and results on the worst-case performance of approximation algorithms are compiled.
Abstract: The complexity of a class of vehicle routing and scheduling problems is investigated. We review known NP-hardness results and compile the results on the worst-case performance of approximation algorithms. Some directions for future research are suggested. The presentation is based on two discussion sessions during the Workshop to Investigate Future Directions in Routing and Scheduling of Vehicles and Crews, held at the University of Maryland at College Park from June 4 to June 6, 1979.

1,026 citations


Journal ArticleDOI
TL;DR: It is shown that unit execution time jobs subject to a precedence constraint whose complement is chordal can be scheduled in linear time on m processors.
Abstract: We show that unit execution time jobs subject to a precedence constraint whose complement is chordal can be scheduled in linear time on m processors. Generalizations to arbitrary execution times are NP-complete.

238 citations


Journal ArticleDOI
TL;DR: A new procedure for the routing and scheduling of school buses is presented; implemented and tested in two school districts, it produced about a 20% cost savings in one district and the transportation of 600 additional students with one extra bus in the second.
Abstract: In this paper, a new procedure for the routing and scheduling of school buses is presented. This procedure has been implemented and tested successfully in two school districts. It resulted in about a 20% savings in cost in one of these districts and the transportation of 600 additional students with one extra bus in the second. Also, in this paper, the key aspects of the procedures for the preparation of the data are explained, including the ministop concept, a new and innovative method for the coding of the network data. In each section of this paper, the core algorithm is explained, followed by the modifications that were made in attempting to implement the theoretical procedure. In this way, the contrast between theory and practice can be shown.

172 citations


Proceedings ArticleDOI
30 May 1979
TL;DR: It is formally shown that the performance of a scheduler, i.e., the amount of parallelism that it supports, depends explicitly upon the amount of information that is available to the scheduler.
Abstract: A concurrency control mechanism (or a scheduler) is the component of a database system that safeguards the consistency of the database in the presence of interleaved accesses and update requests. We formally show that the performance of a scheduler, i.e., the amount of parallelism that it supports, depends explicitly upon the amount of information that is available to the scheduler. We point out that most previous work on concurrency control is simply concerned with specific points of this basic trade-off between performance and information. In fact, several of these approaches are shown to be optimal for the amount of information that they use.

111 citations


Journal ArticleDOI
TL;DR: The study reported in this paper focuses on a heuristic which can handle reasonably large problems, and yet can be simply and economically implemented.
Abstract: In this paper we consider the problem of scheduling “n” independent jobs on “m” parallel processors. Each job consists of a single operation with a specific processing time and due date. The processors are identical and the operation of the system is non-preemptive. The objective is to schedule the jobs in such a way that the total tardiness of the n jobs is as small as possible. For the case of a single processor with n jobs, there exist algorithms which provide optimal solutions. On the other hand, currently available optimal scheduling algorithms for multiple processors can handle only small problems. Therefore, practitioners are forced to use heuristic methods to schedule their jobs on multiple processors. This raises questions of the following nature: “Are we scheduling our jobs reasonably well? Are there other schedules with which our total tardiness can be lowered substantially? How far off might the heuristic solution be from the optimal solution?” The study reported in this paper focuses on a heuristic which can handle reasonably large problems, and yet can be simply and economically implemented.
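
This kind of dispatching heuristic can be sketched in a few lines. The code below is a hypothetical earliest-due-date list-scheduling rule, not the specific heuristic studied in the paper: jobs are taken in due-date order, each is assigned to the processor that becomes free first, and the tardiness is accumulated.

```python
import heapq

def total_tardiness_edd(jobs, m):
    """Schedule (processing_time, due_date) jobs on m identical processors
    in earliest-due-date order; return the total tardiness.
    Illustrative heuristic only, not the heuristic evaluated in the paper."""
    free = [0.0] * m                    # next-free times of the m processors
    heapq.heapify(free)
    tardiness = 0.0
    for p, d in sorted(jobs, key=lambda j: j[1]):    # EDD order
        start = heapq.heappop(free)                  # earliest-free processor
        finish = start + p
        tardiness += max(0.0, finish - d)
        heapq.heappush(free, finish)
    return tardiness

print(total_tardiness_edd([(4, 4), (2, 3), (3, 8), (1, 2)], m=2))  # → 1.0
```

Such a rule answers the paper's practical question only approximately; the study's point is precisely to ask how far such a heuristic can be from the unknown optimum.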

73 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of scheduling n tasks onto several identical processors so as to minimize the expected flow-time, and show that the strategy which always serves those tasks whose processing-time distributions have the highest hazard rates is optimal when these distributions are all exponential.
Abstract: We consider the problem of how to schedule n tasks on to several identical processors to meet the objective of minimising the expected flow-time. The strategy which always serves those tasks whose processing-time distributions have the highest hazard rates is shown to be optimal when these distributions are all exponential.
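For exponential distributions the hazard rate is the constant rate λ, so the policy amounts to serving shortest-expected-time tasks first. A minimal single-processor illustration (a simplifying assumption of this sketch; the paper treats several identical processors) brute-forces all orders and confirms that highest hazard rate first minimizes the expected flow-time:

```python
from itertools import permutations

def expected_flow_time(means):
    """Expected total flow time on one processor when tasks are served in
    the given order; by linearity it depends only on the mean times."""
    t = 0.0
    total = 0.0
    for mu in means:
        t += mu          # expected completion time of this task
        total += t
    return total

rates = [2.0, 0.5, 1.0]            # hazard rate of Exp(lam) is lam
means = [1.0 / lam for lam in rates]
best = min(permutations(means), key=expected_flow_time)
print(best)  # → (0.5, 1.0, 2.0): highest hazard rate (shortest mean) first
```
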

70 citations


Journal ArticleDOI
TL;DR: This work examines the problem of scheduling parallel production lines in the glass-container industry with a resource constraint imposed by the furnace melting rate and concludes that a shortest processing time based dispatching rule probably provides the most efficient operating policy.
Abstract: By means of a computer simulation we examine the problem of scheduling parallel production lines in the glass-container industry with a resource constraint imposed by the furnace melting rate. The results of the simulation model are combined with relevant aspects of scheduling theory to arrive at the conclusion that a shortest processing time based dispatching rule probably provides the most efficient operating policy.

51 citations


Journal ArticleDOI
TL;DR: A detailed examination of the first nine steps in the computerized street sweeper routing system (disregarding the final output step) is given, and an example is worked out to illustrate these ideas.

49 citations


Journal ArticleDOI
TL;DR: A chance constraint model is presented for determining the appropriate safety capacity, analogous to safety stock in inventory theory, to meet varying volume demands when forecast errors are present; the methodology can be extended to other service-oriented organizations.
Abstract: This paper illustrates a shift scheduling procedure for a commercial bank's encoder work force for check processing in the presence of daily work load uncertainty. The author presents a chance constraint model for determining the appropriate safety capacity, analogous to safety stock in inventory theory, to meet varying volume demands when forecast errors are present. A series of tests is conducted to evaluate the model's performance under different operating costs, forecast errors, and volume arrival rates, which are based on data collected at Chemical Bank. The results indicate this model provides low-cost solutions. This study provides two contributions to managers and management scientists. First, even though the paper illustrates the encoder work force shift scheduling decision, the methodology presented can be extended to other service-oriented organizations. The main elements that need to be present are: varying between-day work loads, varying within-day work loads, uncertainty in estimating work ...

45 citations


Journal ArticleDOI
TL;DR: It is shown that, both when the processing times for the n tasks are independent exponential random variables and when they are independent hyperexponentials which are mixtures of two fixed exponentials, the policy of performing tasks with longest expected processing time first minimizes the expected makespan.
Abstract: We consider the problem of scheduling n tasks on two identical parallel processors. We show both in the case when the processing times for the n tasks are independent exponential random variables, and when they are independent hyperexponentials which are mixtures of two fixed exponentials, that the policy of performing tasks with longest expected processing time (LEPT) first minimizes the expected makespan, and that in the hyperexponential case the policy of performing tasks with shortest expected processing time (SEPT) first minimizes the expected flow time. The approach is simpler than the dynamic programming approach recently employed by Bruno and Downey.
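Because exponential distributions are memoryless, the dynamic LEPT policy coincides with a static list that starts tasks in decreasing order of expected processing time. The sketch below (illustrative, not the authors' proof technique) gives an exact makespan helper for list scheduling and a Monte Carlo estimate of the expected makespan under a chosen order:

```python
import heapq
import random

def makespan(times, m=2):
    """Makespan of non-preemptive list scheduling of the given processing
    times, taken in order, on m identical processors."""
    free = [0.0] * m
    heapq.heapify(free)
    for p in times:
        heapq.heappush(free, heapq.heappop(free) + p)
    return max(free)

def mean_makespan(means, order, n_samples=20000, seed=1):
    """Monte Carlo estimate of E[makespan] when task j is exponential with
    the given mean and tasks are started in the given static order."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += makespan([rng.expovariate(1.0 / means[j]) for j in order])
    return total / n_samples

means = [3.0, 1.0, 2.0, 0.5]
lept = sorted(range(len(means)), key=lambda j: -means[j])  # longest first
sept = sorted(range(len(means)), key=lambda j: means[j])
print(mean_makespan(means, lept), mean_makespan(means, sept))
```

With enough samples the LEPT estimate should come out no larger than the SEPT one, in line with the theorem; the makespan helper itself is deterministic.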

41 citations


Journal ArticleDOI
01 Oct 1979
TL;DR: An approximate method is described which consists of two stages; in the first stage the problem of scheduling activities with known performing times on parallel machines is solved, and in the second, the continuous resource is allocated among the activities (or parts of activities) which are performed simultaneously in the obtained schedule.
Abstract: Allocation is discussed of constrained resources among activities of a network project, when the resource requirements of activities concern a unit of a discrete resource (machine, processor) from a finite set of m parallel units and an amount of a continuously divisible resource (power, fuel flow, approximate manpower) which is arbitrary within a certain interval. For every activity the function relating the performing speed to the allotted amount of continuous resource is known, as is the state which has to be reached in order to complete the activity. Two optimality criteria are considered: project duration and the mean finishing time of an activity. For the first criterion, the way in which finding the optimal solution is reduced to a constrained nonlinear programming problem is described. The number of variables in this problem depends on the number of m-element combinations of activities which may be performed simultaneously in accordance with the precedence constraints. Consequently, this approach is of more theoretical than practical importance. For some special cases, however, it allows analytical results to be obtained. Next, an approximate method is described which consists of two stages. In the first stage the problem of scheduling activities with known performing times on parallel machines is solved, and in the second, the continuous resource is allocated among the activities (or parts of activities) which are performed simultaneously in the obtained schedule.

01 Jan 1979
TL;DR: A non-exhaustive algorithm is developed giving optimal schedules of a combinational algorithm for a fixed number m of processors.
Abstract: A non-exhaustive algorithm is developed which gives optimal schedules of a combinational algorithm for a fixed number m of processors. Several arguments, such as partial symmetries in the precedence graph, saturation of the processors, timing constraints on the tasks, etc., are used in order to reduce as drastically as possible the computation needed in the selection of an optimal schedule. 1. Introduction. Objectives and motivations of scheduling theory may be formulated in very general terms as follows (we quote Coffman (ref. 2)). The scheduling problems assume a set of resources or processors and a set of tasks which is to be serviced by these resources. Based on prespecified properties of and constraints on the tasks and resources, the problem is to find an efficient algorithm for sequencing the tasks to optimize some desired performance measure. The only measure that will be considered in this paper is the schedule length, i.e. the maximum time spent in the system by the tasks (another measure which is currently considered is the mean time spent in the system by the tasks (ref. 2)). The models of the problem we analyse are deterministic in the sense that all information governing the scheduling decision is assumed to be known in advance. In particular, the tasks and all information describing them are assumed to be available at the outset, which we normally take as time t = 0. The scheduling model, from which subsequent problems are drawn, will now be described in a more systematic way. The system of tasks has the structure of a combinational algorithm (To, ~)

Journal ArticleDOI
TL;DR: To the class of queuing networks analyzable by the method of Baskett, Chandy, Muntz, and Palacios, service centers whose scheduling is random are added, showing that for purposes of this analysis, the results are identical to FCFS queuing.
Abstract: To the class of queuing networks analyzable by the method of Baskett, Chandy, Muntz, and Palacios, we add service centers whose scheduling is random. That is, upon completion of a service interval, the server chooses next to serve one of the waiting customers selected at random. As in the case of first-come first-served (FCFS) scheduling, all tasks must have the same exponentially distributed service time at such a center. We show that for purposes of this analysis, the results are identical to FCFS queuing. Example applications for random selection scheduling in computer system modeling are provided.
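The FCFS equivalence can be made plausible with a small single-server simulation (hypothetical code, not from the paper): because any non-idling discipline is work-conserving, the busy periods, and hence the time of the last departure, coincide under FCFS and random selection; only the individual waits are reshuffled.

```python
import random

def simulate(arrivals, services, discipline, rng):
    """Single non-preemptive server; 'fcfs' serves the head of the queue,
    'random' serves a uniformly chosen waiting customer.
    Returns (mean waiting time, time of the last departure)."""
    queue, waits, t, i, n = [], [], 0.0, 0, len(arrivals)
    while i < n or queue:
        if not queue:                       # idle: jump to the next arrival
            t = max(t, arrivals[i])
        while i < n and arrivals[i] <= t:   # admit everyone already here
            queue.append(i)
            i += 1
        k = 0 if discipline == "fcfs" else rng.randrange(len(queue))
        j = queue.pop(k)
        waits.append(t - arrivals[j])       # delay before service starts
        t += services[j]
    return sum(waits) / n, t

rng = random.Random(7)
arrivals = sorted(rng.uniform(0, 100) for _ in range(200))
services = [rng.expovariate(2.5) for _ in range(200)]
f_mean, f_end = simulate(arrivals, services, "fcfs", random.Random(0))
r_mean, r_end = simulate(arrivals, services, "random", random.Random(0))
print(f_mean, r_mean, abs(f_end - r_end) < 1e-6)
```

The two sample mean waits differ on any one run; the paper's result is that their expectations, and hence the network analysis, coincide when service times at the center are identically exponential.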

Journal ArticleDOI
TL;DR: In this article, a new maintenance scheduling algorithm based on stochastic simulation was proposed to provide a more reliable operation for the system, and better assessment of risk and system costs.
Abstract: This paper critically reviews the various techniques utilised for maintenance scheduling of generating facilities and points out their inaccuracies in minimising the risk of system shortages. The paper describes a new maintenance scheduling algorithm based on stochastic simulation which would provide a more reliable operation for the system, and better assessment of risk and system costs.

Journal ArticleDOI
TL;DR: The hypothesis that the optimal network algorithm developed by Boyce, Farhi and Weischedel may be applied to produce approximate solutions to problems of practical dimensions within a reasonable range of time is supported.
Abstract: The paper describes some possibilities for modifying the optimal network algorithm developed by Boyce, Farhi and Weischedel in a way that makes it applicable to some practical problems of network planning. The modifications, which have been tested with respect to their effect on the efficiency of the algorithm, include the introduction of asymmetrical demand structures, the integration of an existing network, the lexico-minimization of a dynamic objective function, and the consideration of constraints related to interdependencies between candidate links. Two small network problems and one medium-sized problem (61 nodes, 104 links, 16 candidates) have been computed; the results support the hypothesis that the algorithm may be applied to produce approximate solutions to problems of practical dimensions within a reasonable range of time.

Proceedings ArticleDOI
15 May 1979
TL;DR: This paper examines the possibility of using a dedicated multiprocessor network to do the step-by-step computations needed in the digital simulation of the dynamic response of a large power system.
Abstract: This paper examines the possibility of using a dedicated multiprocessor network to do the step-by-step computations needed in the digital simulation of the dynamic response of a large power system. This multiprocessor network would use a general purpose digital computer for the input and output. It is found that over 97% of the computations for a typical 1723-bus, 396-machine, stability study could be done in parallel. Approximately 30% of the computation time is spent solving the network equations, I = YE. Various algorithms for this part of the solution and ways of scheduling the work assignments to the processors are discussed.

Dissertation
01 Jan 1979
Abstract: Thesis. 1979. Ph.D.--Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.

Proceedings ArticleDOI
03 Oct 1979
TL;DR: Relation type procedure parameters serve two purposes: data accessing and access scheduling and Scheduling requirements are analyzed within the framework of the single-assignment approach.
Abstract: Pascal/R, a language extension based on a data structure relation and some high level language constructs for relations [9] is augmented by a procedure concept for concurrent execution of database actions. Relation type procedure parameters serve two purposes: data accessing and access scheduling. Scheduling requirements are analyzed within the framework of the single-assignment approach [10] and proposals for the stepwise reduction of implementation effort are discussed.

Journal ArticleDOI
Smith1
TL;DR: A simple queuing model for multiple channel controllers is created, and an approximate solution is generated that appears to be robust with respect to changes in some of the assumptions used in making the approximation.
Abstract: A multiple channel controller (MCC) is a controller which switches a given number of channels among a larger number of input/output devices and permits simultaneous access to as many devices as there are channels available. A simple queuing model for multiple channel controllers is created, and an approximate solution for this model is generated. The approximate solution is found, using simulation, to be very close for those cases examined to the actual behavior of the model. Further simulations indicate that the approximate solution of the model appears to be robust with respect to changes in some of the assumptions used in making the approximation. Trace data taken from a real system are analyzed and they confirm the predicted utility of MCC's. The problem of optimal scheduling of MCC's is briefly discussed. Alternative system configurations are compared with the objective of minimizing queuing delays.

Journal ArticleDOI
TL;DR: In this paper, an efficient decomposition technique for optimal generation scheduling of hydro-thermal systems is presented, where the decomposed subproblems have been converted into unconstrained nonlinear programming subproblems using an augmented penalty function approach.

Journal ArticleDOI
TL;DR: An experimental extension to VM/370 is described whereby a distinct execution and data domain (Virtual Control Storage) is made available to virtual machines that require access to a resource manager, without requiring a change in the scheduling unit.
Abstract: The architecture of a virtual machine system has specific advantages over that of conventional operating systems because virtual machines are well separated from one another and from the control program. This structure requires that a protected, multi-user resource manager be placed in a distinct virtual machine because the protection domain and scheduling unit are one entity, the virtual machine. But cooperation between distinct virtual machines necessarily entails scheduling overhead and often delay. This paper describes an experimental extension to VM/370 whereby a distinct execution and data domain (Virtual Control Storage) is made available to virtual machines that require access to a resource manager, without requiring a change in the scheduling unit. Thus scheduling overhead and delays are avoided when transition is made between user program and resource manager. A mechanism is described for exchanging data between execution domains by means of address-space mapping.

Journal ArticleDOI
TL;DR: Using the method of discrete time analysis, a numerical method is developed for evaluating the distribution of the delay encountered by a customer in a time-inhomogeneous, single server queue with batch arrivals.
Abstract: Using the method of discrete time analysis, a numerical method is developed for evaluating the distribution of the delay encountered by a customer in a time-inhomogeneous, single server queue with batch arrivals.
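A minimal sketch of a discrete-time recursion in this spirit (an assumed toy model, not the paper's: one unit of work served per slot, with slot-dependent batch-size pmfs to capture time-inhomogeneity) propagates the pmf of the unfinished work slot by slot; the delay a customer encounters can be read off from the backlog it finds on arrival.

```python
def evolve_backlog(pmf_u, batch_pmf):
    """One slot of a discrete-time single-server queue: a random batch
    arrives (batch_pmf[b] = P[B = b]) and one unit of work is served.
    pmf_u maps backlog -> probability; returns the next slot's pmf."""
    nxt = {}
    for u, pu in pmf_u.items():
        for b, pb in enumerate(batch_pmf):
            v = max(u + b - 1, 0)
            nxt[v] = nxt.get(v, 0.0) + pu * pb
    return nxt

# time-inhomogeneous example: the batch-size pmf changes from slot to slot
batch_by_slot = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.6, 0.4]]
pmf = {0: 1.0}                       # start with an empty system
for batch_pmf in batch_by_slot:
    pmf = evolve_backlog(pmf, batch_pmf)
print(pmf)                           # distribution of the unfinished work
```
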

Journal ArticleDOI
TL;DR: A relatively simple routing algorithm which has been a proven cost saver in a number of applications and is suitable for computer implementation by firms without prior computer routing experience is provided.
Abstract: The escalating cost per mile of large vehicle operation has increased the advantages of more efficient automatic scheduling. There has been a significant number of articles relating to computerized routing, but the gap between the literature and practice is still very wide. In this paper, the author provides a detailed description of a relatively simple routing algorithm which has been a proven cost saver in a number of applications. The technique can be used for manual routing in smaller firms and is suitable for computer implementation by firms without prior computer routing experience.

Journal ArticleDOI
TL;DR: In this article, a solution to the complete problem of optimal voltage scheduling and control in large scale systems is presented, as an extension of the differential injection method for optimal voltage allocation in a single system.

Journal ArticleDOI
TL;DR: The problem of scheduling the jobs over the summer construction period, subject to the condition that the network meet a convex annual demand pattern, is formulated as a mixed-integer linear programming problem which is solved by a branch-and-bound algorithm.
Abstract: A number of jobs have to be carried out on a gas transmission network every year, which restrict the transmission capacity of the network while they are in progress. There are certain pairs of jobs which must not be in progress at the same time. The network must meet a convex annual demand pattern. The problem of scheduling the jobs over the summer construction period to meet these conditions is formulated as a mixed-integer linear programming problem which is solved by a branch-and-bound algorithm. Results and benefits to the operators of the network are described.

Journal ArticleDOI
TL;DR: The Held–Karp dynamic programming algorithm for finding an ordering that minimizes the total sum of arbitrary cost functions is extended to one-machine scheduling problems allowing non-simultaneous job arrivals and arbitrary non-decreasing cost functions.
Abstract: This paper is concerned with one-machine scheduling problems allowing non-simultaneous job arrivals. For problems in which all jobs are simultaneously available, Held and Karp developed a dynamic programming algorithm for finding an ordering that minimizes the total sum of arbitrary cost functions. This paper extends the approach of Held and Karp to problems with non-simultaneous arrivals and arbitrary non-decreasing cost functions. The algorithm is illustrated by a problem of minimizing the number of tardy jobs.
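For the simultaneous-arrival case that Held and Karp treated, the subset recursion can be sketched directly (illustrative code; the paper's extension to release dates is omitted here). Since all jobs are available at t = 0 and no idle time helps with non-decreasing costs, the last job of a set S completes exactly at the total processing time of S:

```python
from itertools import combinations

def held_karp_min_cost(p, cost):
    """Subset DP for one machine, all jobs available at t = 0:
    F(S) = min over last jobs j in S of F(S - {j}) + cost[j](sum of p over S),
    where cost[j] is a non-decreasing function of j's completion time."""
    n = len(p)
    F = {frozenset(): 0.0}
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            S = frozenset(subset)
            C = sum(p[j] for j in S)        # completion time of the last job
            F[S] = min(F[S - {j}] + cost[j](C) for j in S)
    return F[frozenset(range(n))]

# a number-of-tardy-jobs objective, as in the paper's illustration;
# the data below are made up: cost[j] is 1 if job j finishes after d[j]
p = [2, 2, 2]
d = [2, 2, 6]
cost = [lambda C, dj=dj: 1.0 if C > dj else 0.0 for dj in d]
print(held_karp_min_cost(p, cost))  # → 1.0 (both d = 2 jobs cannot be on time)
```
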

Proceedings ArticleDOI
01 Jun 1979
TL;DR: This paper considers the scheduling of jobs which will become active; that is, which will begin to compete for use of the system processors, by examining some intrinsic property of the jobs rather than an external one.
Abstract: In a computer system, scheduling occurs whenever the next-task-to-receive-service is selected from a queue of waiting tasks [5]. This paper considers the scheduling of jobs which will become active; that is, which will begin to compete for use of the system processors. Jobs waiting to be activated may be scheduled by referring to properties of the job such as priority or total resource usage [4,7]. It is also possible to examine some feature of the jobs already activated (i.e., the jobs in the multiprogramming mix) in order to determine a suitable candidate for admission to the mix. This is called mix-dependent scheduling. Here, we prefer to examine some intrinsic property of the jobs rather than an external one. The property we have chosen to investigate is the rate of processor (CPU or I/O) usage.

Dissertation
01 Jan 1979
TL;DR: The analysis of the worst-case and expected performance results reveals that there is a high degree of correlation in the behaviour of the algorithms as predicted or estimated by these two performance measurements, respectively.
Abstract: The problem of scheduling independent jobs on heterogeneous multiprocessor models (i.e., those with non-identical or uniform processors) with independent memories has been studied. A number of demand scheduling non-preemptive algorithms have been evaluated with respect to their mean flow and completion time performance criteria. In particular, deterministic analysis has been used to predict the worst-case performance, whereas simulation techniques have been applied to estimate the expected performance of the algorithms. As a result of the deterministic analysis, informative worst-case bounds have been proven, from which the behaviour of the extreme performance of the considered algorithms can be well predicted. Moreover, relaxing some or a combination of the system parameters, our model corresponds to versions which have already been studied (i.e., the classical homogeneous and heterogeneous models or the homogeneous one with independent memories). For such cases, the bounds proven in this thesis either agree with or are better and more informative than the ones found for these simpler models. Finally, the analysis of the worst-case and expected performance results reveals that there is a high degree of correlation in the behaviour of the algorithms as predicted or estimated by these two performance measurements, respectively.

Book ChapterDOI
01 Jan 1979
TL;DR: An improvement to existing heuristic algorithms based on a two-phase integer programming formulation which exploits the structure of the scheduling problem to efficiently produce schedules which meet manpower demands at minimum costs is described.
Abstract: Many service industries face the problem of scheduling employees to meet a work load which can change from day to day and from hour to hour. This paper describes an improvement to existing heuristic algorithms. The improvement is based on a two-phase integer programming formulation which exploits the structure of the scheduling problem to efficiently produce schedules which meet manpower demands at minimum costs. The algorithm also includes restrictions on ‘days-off’ patterns and on working-hour patterns of both part- and full-time employees. An extension of the algorithm to include shifts of arbitrary lengths is also discussed. Results from an application in the food service industry are given.