
Showing papers on "Scheduling (computing)" published in 1972


Journal ArticleDOI
TL;DR: Five well-known scheduling policies for movable head disks are compared using the performance criteria of expected seek time (system oriented) and expected waiting time (individual I/O request oriented) to choose a utility function to measure total performance.
Abstract: Five well-known scheduling policies for movable head disks are compared using the performance criteria of expected seek time (system oriented) and expected waiting time (individual I/O request oriented). Both analytical and simulation results are obtained. The variance of waiting time is introduced as another meaningful measure of performance, showing possible discrimination against individual requests. Then the choice of a utility function to measure total performance including system oriented and individual request oriented measures is described. Such a function allows one to differentiate among the scheduling policies over a wide range of input loading conditions. The selection and implementation of a maximum performance two-policy algorithm are discussed.

232 citations
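As a rough illustration of the comparison described above (not the paper's analytical model: the load pattern, cost constants, and utility weights below are all made up), a short simulation can contrast FCFS with shortest-seek-time-first and fold the system-oriented and request-oriented measures into one utility figure:

```python
import random
import statistics

def simulate(policy, cylinders=200, n_requests=2000, seed=1):
    """Serve a saturated queue of random cylinder requests under one policy."""
    rng = random.Random(seed)
    requests = [rng.randrange(cylinders) for _ in range(n_requests)]
    head, clock = cylinders // 2, 0.0
    pending = list(range(n_requests))        # every request queued at time 0 (heavy load)
    seeks, waits = [], []
    while pending:
        if policy == "FCFS":
            i = pending.pop(0)
        else:                                # SSTF: serve the closest cylinder next
            i = min(pending, key=lambda j: abs(requests[j] - head))
            pending.remove(i)
        seek = abs(requests[i] - head)
        clock += 1.0 + 0.1 * seek            # fixed transfer cost + seek-proportional cost
        seeks.append(seek)
        waits.append(clock)                  # all requests arrived at time 0
        head = requests[i]
    return statistics.mean(seeks), statistics.mean(waits), statistics.pvariance(waits)

def utility(mean_seek, mean_wait, var_wait, weights=(1.0, 1.0, 0.01)):
    """Lower is better: blends the system-oriented and request-oriented measures."""
    return weights[0] * mean_seek + weights[1] * mean_wait + weights[2] * var_wait

for policy in ("FCFS", "SSTF"):
    mean_seek, mean_wait, var_wait = simulate(policy)
    print(policy, round(utility(mean_seek, mean_wait, var_wait), 1))
```

The weights in utility() are the knobs such a comparison turns on: shifting weight onto the variance term penalizes policies that discriminate against unlucky individual requests, which is the trade-off the paper's two-policy selection is about.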


Journal ArticleDOI
TL;DR: An algorithm is presented that will minimize the total processing time for a particular case of the n-job, m-machine scheduling problem by modeling it as a traveling-salesman problem and known solution techniques can be employed.
Abstract: This paper presents an algorithm that will minimize the total processing time for a particular case of the n-job, m-machine scheduling problem. In many industrial processes, jobs are processed by a given sequence of machines. Often, once the processing of a job commences, the job must proceed immediately from one machine to the next without encountering any delays en route. The machine sequence need not be the same for all jobs. Because of this processing constraint that prohibits intermediate queues, most normal scheduling techniques are not applicable. This paper obtains a solution to this constrained scheduling problem by modeling it as a traveling-salesman problem; known solution techniques can then be employed. The paper solves a sample problem and discusses computational considerations.

227 citations
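A concrete way to see the reduction is sketched below, under the simplifying assumption that every job visits the machines in the same order (the paper also allows differing machine sequences); the processing times are invented:

```python
from itertools import permutations

# p[i][k] = processing time of job i on machine k (identical machine order assumed here).
p = [
    [3, 5, 2],
    [4, 1, 6],
    [2, 4, 3],
    [5, 2, 2],
]
n, m = len(p), len(p[0])

def delay(i, j):
    """Minimum gap between the start of job i and the start of job j scheduled
    immediately after it, so that no job ever waits between machines (no-wait)."""
    return max(sum(p[i][:r + 1]) - sum(p[j][:r]) for r in range(m))

def makespan(seq):
    return sum(delay(a, b) for a, b in zip(seq, seq[1:])) + sum(p[seq[-1]])

# The delay matrix is the "distance" matrix of an open-path travelling-salesman
# problem; brute force over permutations is enough for a toy instance.
best = min(permutations(range(n)), key=makespan)
print(best, makespan(best))
```

With the delay matrix in hand, any travelling-salesman code can replace the brute-force search over permutations.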


Journal ArticleDOI
TL;DR: In this paper, the authors present a mathematical investigation into bus scheduling, where the main objective is to minimize the number of buses that is needed and a secondary criterion is the minimization of the waiting time, for which a calculus of variation technique is used.
Abstract: The paper presents a mathematical investigation into bus scheduling. The passenger arrival rate is supposed to be given; the problem is to determine the bus departure rate as a function of time. The primary objective is to minimize the number of buses needed. A secondary criterion is the minimization of the passenger waiting time, for which a calculus-of-variations technique is used. Although the paper deals mainly with a single bus route, it is also shown how the theory can be extended to the case of a pair of linked bus routes. The practical implications are illustrated by an example. The fleet-size formula that is used here is thought to be applicable to many transportation systems.

118 citations
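The fleet-size reasoning can be sketched for the simplest case of a fixed round-trip time (an assumption introduced here; the timetable is invented): a bus departing at time t is available again at t + round_trip, so the fleet must cover the busiest window of that length.

```python
def fleet_size(departure_times, round_trip):
    """Minimum number of buses: the largest number of departures falling inside any
    window of length round_trip (a bus leaving at time t is back at t + round_trip)."""
    departures = sorted(departure_times)
    return max(sum(1 for s in departures if t - round_trip < s <= t) for t in departures)

# Illustrative timetable (minutes after the start of service), 30-minute round trip.
timetable = [0, 10, 18, 25, 31, 40, 55, 70]
print(fleet_size(timetable, 30))
```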


Journal ArticleDOI
TL;DR: It is shown that a simple algorithm provides optimal solutions to problems of scheduling men or equipment to meet cyclic requirements over periods where each man or machine must be idle for two consecutive periods per cycle.
Abstract: It is shown that a simple algorithm provides optimal solutions to problems of scheduling men or equipment to meet cyclic requirements over periods where each man or machine must be idle for two consecutive periods per cycle. An example illustrates the application to scheduling to meet seven distinct daily requirements per week using employees for five consecutive work days.

74 citations
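The following brute-force check of the same model (not the paper's simple algorithm, which avoids any enumeration; the requirements are invented) searches for a minimum workforce in which every employee is idle on two consecutive days per weekly cycle:

```python
from itertools import product

# Daily staffing requirements, Monday..Sunday.
req = [4, 4, 4, 4, 4, 2, 2]

# Each employee works five consecutive days, i.e. takes one of the seven
# possible pairs of consecutive days off (Mon-Tue, ..., Sun-Mon).
patterns = []
for off in range(7):
    week = [1] * 7
    week[off] = week[(off + 1) % 7] = 0
    patterns.append(week)

def coverage(counts):
    return [sum(c * pat[d] for c, pat in zip(counts, patterns)) for d in range(7)]

# Enumerate how many employees get each days-off pattern; the bound below is
# adequate for this toy instance.
best = None
bound = max(req) + 1
for counts in product(range(bound), repeat=7):
    if all(c >= r for c, r in zip(coverage(counts), req)):
        if best is None or sum(counts) < sum(best):
            best = counts
print(sum(best), best)
```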


Journal ArticleDOI
TL;DR: It is concluded that a successful CPU scheduling method must be preemptive and must prevent a given job from holding the CPU for too long a period.
Abstract: Microscopic-level job stream data, obtained in a production environment by an event-driven software probe, is used to drive a model of a multiprogramming computer system. The CPU scheduling algorithm of the model is systematically varied. This technique, called trace-driven modeling, provides an accurate replica of a production environment for the testing of variations in the system. At the same time, alterations in scheduling methods can easily be carried out in a controlled way, with cause-and-effect relationships isolated. The scheduling methods tested included the best and worst possible methods, the traditional methods of multiprogramming theory (round-robin, first-come-first-served, etc.), and dynamic predictors. The relative and absolute performances of these scheduling methods are given. It is concluded that a successful CPU scheduling method must be preemptive and must prevent a given job from holding the CPU for too long a period.

74 citations
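A toy version of the trace-driven idea (the "trace" and the quantum below are invented, nothing like the production job-stream data the paper uses) compares a nonpreemptive FCFS dispatcher with a preemptive round-robin one on the same recorded arrivals and CPU demands:

```python
# Toy "trace": (arrival time, CPU demand) pairs standing in for probe data.
trace = [(0, 10), (1, 2), (2, 1), (3, 8), (4, 2), (10, 1)]

def fcfs(jobs):
    clock, turnaround = 0, []
    for arrive, need in sorted(jobs):
        clock = max(clock, arrive) + need
        turnaround.append(clock - arrive)
    return sum(turnaround) / len(turnaround)

def round_robin(jobs, quantum=1):
    jobs = sorted(enumerate(jobs), key=lambda x: x[1][0])
    remaining = {i: need for i, (_, need) in jobs}
    arrival = {i: a for i, (a, _) in jobs}
    ready, clock, finish, nxt = [], 0, {}, 0
    while remaining:
        while nxt < len(jobs) and jobs[nxt][1][0] <= clock:   # admit arrivals
            ready.append(jobs[nxt][0]); nxt += 1
        if not ready:                                         # idle until next arrival
            clock = jobs[nxt][1][0]
            continue
        i = ready.pop(0)
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        while nxt < len(jobs) and jobs[nxt][1][0] <= clock:   # arrivals during the quantum
            ready.append(jobs[nxt][0]); nxt += 1
        if remaining[i]:
            ready.append(i)                                   # preempted, back of the queue
        else:
            del remaining[i]
            finish[i] = clock
    return sum(finish[i] - arrival[i] for i in finish) / len(finish)

print("FCFS mean turnaround:", fcfs(trace))
print("RR   mean turnaround:", round_robin(trace))
```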


Journal ArticleDOI
TL;DR: These examples display the great versatility of the results and demonstrate the flexibility available for the intelligent design of discriminatory treatment among jobs (in favor of short jobs and against long jobs) in time shared computer systems.
Abstract: Scheduling algorithms for time-shared computing facilities are considered in terms of a queueing theory model. The extremely useful limit of "processor sharing" is adopted, wherein the quantum of service shrinks to zero; this approach greatly simplifies the problem. A class of algorithms is studied for which the scheduling discipline may change for a given job as a function of the amount of service received by that job. These multilevel disciplines form a natural extension to many of the disciplines previously considered. The average response time, conditioned on a job's service requirement, is derived. Explicit solutions are given for the system M/G/1 in which levels may be first-come-first-served (FCFS), feedback (FB), or round-robin (RR) in any order. The service-time distribution is restricted to be a polynomial times an exponential for the case of RR. Examples are described for which the average response time is plotted. These examples display the great versatility of the results and demonstrate the flexibility available for the intelligent design of discriminatory treatment among jobs (in favor of short jobs and against long jobs) in time-shared computer systems.

51 citations
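For orientation, the single-level limiting case is the classical M/G/1 processor-sharing result, which the multilevel FB/RR/FCFS expressions in the paper generalize:

```latex
% M/G/1 under pure processor sharing (single-level RR with quantum -> 0):
% a job requiring x seconds of service has conditional mean response time
\[
  T(x) \;=\; \frac{x}{1-\rho}, \qquad \rho = \lambda\,\bar{x} < 1 ,
\]
% i.e. every job is stretched by the same factor 1/(1-rho), regardless of the
% service-time distribution beyond its mean.  Multilevel disciplines deliberately
% break this neutrality in favour of jobs with small service requirements.
```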


Journal ArticleDOI
TL;DR: The paper gives an algorithm that uses partial enumeration for what is essentially a mixed integer program and employs a maximum-flow computation as a check for feasibility with respect to available resources.
Abstract: This paper treats the problem of project (or machine) scheduling with resource constraints to achieve minimum total duration time as a disjunctive graph. The prospective advantage of this approach is the elimination of the need to consider individual time periods over the program horizon; a feasibility check determines whether the resource constraints can be met by any particular network representation of the project. The paper gives an algorithm that uses partial enumeration for what is essentially a mixed integer program. The algorithm employs a maximum-flow computation as a check for feasibility with respect to available resources.

40 citations
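The abstract does not spell out the network construction, but one standard way a maximum-flow computation can certify resource feasibility is the transportation-style check sketched below, on an invented instance and using the networkx library; the pool names, demands, and compatibility lists are hypothetical:

```python
import networkx as nx

# Hypothetical data: resource pools with available units, and the activities that
# would be in progress simultaneously under one candidate ordering, each needing a
# number of units it may draw only from certain pools.
pools = {"riggers": 2, "fitters": 3}
needs = {"A": 2, "B": 1, "C": 2}
usable = {"A": ["riggers", "fitters"],
          "B": ["riggers"],
          "C": ["fitters"]}

def feasible(pools, needs, usable):
    """The concurrent activity set is resource-feasible iff the maximum
    source->sink flow saturates every activity's demand."""
    g = nx.DiGraph()
    for r, cap in pools.items():
        g.add_edge("source", r, capacity=cap)
    for a, n in needs.items():
        g.add_edge(a, "sink", capacity=n)
        for r in usable[a]:
            g.add_edge(r, a, capacity=n)
    value, _ = nx.maximum_flow(g, "source", "sink")
    return value == sum(needs.values())

print(feasible(pools, needs, usable))
```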


Journal ArticleDOI
TL;DR: The problem of scheduling activities so as to minimize project duration in the presence of resource constraints is considered and a branch and bound algorithm is described.
Abstract: The problem of scheduling activities so as to minimize project duration in the presence of resource constraints is considered. A branch and bound algorithm is described and some limited computational experience is reported.

33 citations


ReportDOI
01 Jan 1972
TL;DR: Simulation results show the algorithms to be useful in scheduling less restricted job sets and an upper bound is seen to compare favorably with the upper bound intrinsic to the model.
Abstract: A simple algorithm to schedule a restricted set of jobs on a multiprocessor system with two classes of processors is described. Through deterministic analysis an upper bound is established for the behavior of the algorithm. This bound is seen to compare favorably with the upper bound intrinsic to the model. Simulation results show the algorithm to be useful in scheduling less restricted job sets.

30 citations


Journal ArticleDOI
TL;DR: The most significant results of this study are that the shortest-imminent-operation rule is superior to others in reducing job lateness and shop flow time and the GASP-II package works efficiently for large-size shop problems.
Abstract: The purpose of this paper is to report on a study which involves a simulation of a hypothetical job shop with several machines. The investigation employs GASP-II as a computer language. This simulation study is concerned with: (1) testing a new method of assigning job due-dates, and (2) comparing and evaluating the effect of different processing-time distributions on the performance of a number of scheduling rules. The most significant results of this study are: (1) The shortest-imminent-operation rule is superior to others in reducing job lateness and shop flow time. (2) The procedure in which the due-date allowance is proportional to the number of operations and work content of the jobs has proved to be beneficial in the case of the non-due-date rules. (3) The operation of a job shop using the shortest-imminent-operation and slack-per-remaining-number-of-operations rules is degraded when the processing-time distribution has an Erlang parameter K equal to 4 or 8. However, performance is better when K = 8 tha...

29 citations
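The dispatching rules named in the results are easy to state as priority keys; the fragment below is illustrative only (not the GASP-II simulator), and the Job fields and sample data are invented:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    next_op_time: float      # processing time of the imminent operation
    due_date: float
    work_remaining: float    # total processing time still to be done
    ops_remaining: int       # number of operations still to be done

def sio_key(job, now):
    """Shortest-imminent-operation rule (lower key = higher priority)."""
    return job.next_op_time

def slack_per_op_key(job, now):
    """Slack per remaining number of operations."""
    return (job.due_date - now - job.work_remaining) / job.ops_remaining

def pick_next(queue, now, rule):
    return min(queue, key=lambda j: rule(j, now))

queue = [Job("J1", 4, 40, 12, 3), Job("J2", 1, 20, 9, 4), Job("J3", 6, 25, 6, 1)]
print(pick_next(queue, now=10, rule=sio_key).name)            # J2 under SIO
print(pick_next(queue, now=10, rule=slack_per_op_key).name)
```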


Journal ArticleDOI
TL;DR: An application of vehicle scheduling techniques to the planning and operation of medical specimen collection services is described and a decision-making procedure is suggested for use in this problem area.
Abstract: An application of vehicle scheduling techniques to the planning and operation of medical specimen collection services is described and a decision-making procedure suggested for use in this problem ...

Proceedings ArticleDOI
05 Dec 1972
TL;DR: The subject of scheduling for movable head rotating storage devices, i.e., disk-like devices, has been discussed at length in recent literature and a comprehensive simulation study has been reported on by Teorey and Pinkerton.
Abstract: The subject of scheduling for movable head rotating storage devices, i.e., disk-like devices, has been discussed at length in recent literature. The early scheduling models were developed by Denning, Frank, and Weingarten. Highly theoretical models have been set forth recently by Manocha, and a comprehensive simulation study has been reported on by Teorey and Pinkerton.

Journal ArticleDOI
01 Feb 1972
TL;DR: In this paper, the authors describe new algorithms based on 1st-order gradient techniques for long-term and short-term scheduling of multistorage hydroelectric and multithermal systems for minimum operational cost.
Abstract: The paper describes new algorithms based on first-order gradient techniques for long-term and short-term scheduling of multistorage hydroelectric and multithermal systems for minimum operational cost, with system variables in discrete form and with water inflows and load demands treated as deterministic. Constraints on various parameters are duly taken into account.

Journal ArticleDOI
01 Jul 1972
TL;DR: The synchronous longitudinal guidance (SLG) approach to allocating guideway space and controlling traffic in a ground transportation network is described, and the required auxiliary SLG functions of entrance and exit control, merge control, and safety assurance are discussed briefly.
Abstract: The synchronous longitudinal guidance (SLG) approach to allocating guideway space and controlling traffic in a ground transportation network is described in this paper. The transportation system considered is one in which completely automated vehicles follow deterministic position-time profiles during their travel through the network. The key feature in the SLG approach is a dynamic scheduling algorithm which allocates guideway space to vehicles in such a way that the capacity of critical points (or bottlenecks) in the network is not exceeded. This guarantees that traffic flows smoothly through the entire network and that queues are confined to the entrances. Two scheduling algorithms are identified in this paper. The basic slot allocation algorithm used in processing vehicle trip requests is outlined first. Then the more general cycle allocation algorithm, which is suitable for control of large networks, is described. The results of a computer simulation of a network run using the latter algorithm are summarized, and the required auxiliary SLG functions of entrance and exit control, merge control, and safety assurance are discussed briefly.


Journal ArticleDOI
TL;DR: It is shown that significant improvements in the measure of system performance can be obtained by using variable time-slice techniques and by selecting the optimum round-robin cycle time.
Abstract: A simulation model of a time-sharing system with a finite noncontiguous store and an infinite auxiliary store is used to study the variation of system parameters such as store size, number of jobs allowed to execute simultaneously, job-scheduling algorithm, etc. The effect of these variations on a measure of system performance is used to ascertain which of the parameters controllable by the job-scheduling algorithm, including the scheduling itself, require optimization, and which of the parameters not normally controllable by the scheduling algorithm have a marked effect on system performance. System performance is based upon the mean cost of delay to all jobs processed. It is shown that significant improvements in the measure of system performance can be obtained by using variable time-slice techniques and by selecting the optimum round-robin cycle time. It appears that these features would benefit from optimization, whereas other parameters controllable by the scheduling algorithm affect system performance in a predictable manner and would not benefit from optimization. Features not normally under the control of the scheduling algorithm can also have a marked effect on the measure of performance; in particular, supervisor overheads, the size of the store, and the speed of the CPU. A comparison is made between the results of the simulation model and two analytical equations for quantum-oriented nonpreemptive time-sharing systems. The comparison is found to be very favorable.

Journal ArticleDOI
TL;DR: It is found a bit misleading to apply the appellation "permanent blocking" to a condition which under fortuitous circumstances might resolve itself, and the term "indefinite postponement" is preferred.
Abstract: We would like to comment on the paper by R. C. Holt [1]. In Section 2 Holt quotes Habermann [2] and then continues as follows: "Habermann states, 'The algorithms [for deciding if a state is safe] decide only whether or not granting a request can produce a deadlock, so assignment rules (according to priority rules for instance) can be implemented freely.' The obvious conclusion would seem to be that a scheduler which grants only safe requests will avoid all deadlocks and grant all requests." Holt's "obvious conclusion" is quite clearly false. The statement is equivalent to "a scheduler which does not grant any unsafe requests will eventually grant all requests." We do not believe that anyone who read the earlier article thoughtfully would reach that conclusion. The earlier articles [2, 3] make it clear that the notion of safe state is equivalent to the existence of a sequence of resource allocations and process completions which results in the return of all resources to the common pool. From this it follows that, were one to choose a scheduling policy which restricted the set of possible sequences to a proper subset of the set of "safe sequences," we must not consider any state to be safe if its safety was predicated on sequences which are not in that subset. With this fact in mind the difficulties discussed in Holt's Section 3 can be systematically avoided. Although we hate to quibble over choice of words, we find it a bit misleading to apply the appellation "permanent blocking" to a condition which under fortuitous circumstances might resolve itself. We prefer the term "indefinite postponement." The algorithm given by Holt in the final section of his paper is unnecessarily cautious and is likely to delay the resolution of the indefinite postponement condition. There is no need to restrict granting of requests to the first process in the safe sequence found. Safe requests by any process preceding Pi in the sequence should always be granted. Certain requests may be granted to those after Pi in the sequence without postponing the grant for Pi.
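The safe-state notion at issue in this exchange is the one checked by a banker-style safety test; a minimal sketch of such a test (with invented resource figures, a single resource type, and field names of my choosing) looks like this:

```python
def is_safe(available, allocation, claim):
    """Return a completion order proving the state safe, or None if no such order exists.
    available: free units of each resource type.
    allocation[p], claim[p]: units held / maximum units ever needed by process p."""
    work = list(available)
    remaining = {p: [c - a for c, a in zip(claim[p], allocation[p])]
                 for p in allocation}
    order = []
    while remaining:
        runnable = next((p for p, need in remaining.items()
                         if all(n <= w for n, w in zip(need, work))), None)
        if runnable is None:
            return None                      # no process can be driven to completion
        # runnable can finish and hand back everything it currently holds
        work = [w + a for w, a in zip(work, allocation[runnable])]
        order.append(runnable)
        del remaining[runnable]
    return order

# Toy state with one resource type (12 units in total).
allocation = {"P1": [5], "P2": [2], "P3": [2]}
claim      = {"P1": [10], "P2": [4], "P3": [9]}
available  = [3]
print(is_safe(available, allocation, claim))   # e.g. ['P2', 'P1', 'P3'] -> safe
```

A state is safe exactly when some completion order exists; the correspondents' point is that a scheduler may only rely on completion orders its own policy would actually permit.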

01 Jun 1972
TL;DR: This paper examines the problem of scheduling a set of tasks on a system with a number of identical processors, and shows that timing anomalies also exist when tasks are restricted to be of equal (unit) length.
Abstract: In this paper we examine the problem of scheduling a set of tasks on a system with a number of identical processors. Several timing anomalies are known to exist for the general case, in which the execution time can increase when inter-task constraints are removed or processors are added. It is shown that these anomalies also exist when tasks are restricted to be of equal (unit) length. Several, increasingly restrictive, heuristic scheduling algorithms are reviewed. The "added processor" anomaly is shown to persist through all of them, though in successively weaker form.
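The schedulers for which the anomalies are stated are list schedulers; a plain one for unit-length tasks is sketched below. The precedence graph and priority list are invented, and this sketch does not by itself exhibit the added-processor anomaly; it only shows the mechanism in which the anomaly arises.

```python
def list_schedule(n_tasks, preds, priority, m):
    """Schedule unit-length tasks on m identical processors: at each time step,
    fill the free processors with ready tasks taken in priority-list order."""
    done, t, schedule = set(), 0, []
    while len(done) < n_tasks:
        ready = [v for v in priority
                 if v not in done and all(p in done for p in preds.get(v, []))]
        step = ready[:m]                 # at most m tasks run in this unit interval
        schedule.append((t, step))
        done.update(step)
        t += 1
    return t, schedule

# Made-up precedence graph (task -> list of predecessors) and priority list.
preds = {2: [0], 3: [0], 4: [1], 5: [2, 3], 6: [4]}
priority = [0, 1, 2, 3, 4, 5, 6]
for m in (2, 3):
    length, sched = list_schedule(7, preds, priority, m)
    print(f"{m} processors -> finish time {length}: {sched}")
```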

Journal ArticleDOI
TL;DR: This paper formulates a model for the scheduling and control of a multiphase project that consists of a given sequence of jobs to be completed by a specified due date and shows that a minimum a priori probability of meeting the due date is guaranteed.
Abstract: This paper formulates a model for the scheduling and control of a multiphase project that consists of a given sequence of jobs to be completed by a specified due date. The duration of each job is the sum of a known fixed time and a random delay; and, while the former may be shortened within given limits, there is no control over the latter. The decision variables are the level of activity in each job and the starting time of the first in the sequence. An optimal decision policy for this model is obtained assuming a simple cost structure. It is shown that, if this policy is followed, a minimum a priori probability of meeting the due date is guaranteed.


Proceedings ArticleDOI
16 May 1972
TL;DR: EXEC 8 is the multiprogramming, time sharing operating system for the Univac 1100 computer systems that provides all the I/O control, file handling, diagnostic error testing, user support systems, etc., normally associated with third generation operating systems.
Abstract: EXEC 8 is the multiprogramming, time sharing operating system for the Univac 1100 computer systems. EXEC 8 attempts to provide satisfactory concurrent batch, demand (interactive), and real time processing through complicated priority scheduling schemes for both real memory and CPU time allocation. Basically, the scheduling schemes allow real time service to have whatever resources it requires and demand and batch service requests share the remainder. The sharing algorithm is quite complicated; in essence, however, it dynamically limits the time average impact of demand service on the system performance to an installation set limit function of the number of active demand users. Within the demand and batch type categories, time and core are allocated by exponential scheduling algorithms biased to favor small jobs, but constrained to service all jobs eventually. In addition, EXEC 8 provides all the I/O control, file handling, diagnostic error testing, user support systems, etc., normally associated with third generation operating systems.
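A generic multilevel-feedback ("exponential") scheduler of the kind the description suggests can be sketched as follows. This is an illustration of the biased-but-eventually-fair idea, not the actual EXEC 8 dispatcher; the level count and quanta are invented.

```python
from collections import deque

def exponential_scheduler(jobs, base_quantum=1, levels=4):
    """jobs: {name: cpu_seconds_needed}.  A job that exhausts its quantum drops one
    level and gets a quantum twice as long next time; the lowest level is served
    round-robin, so every job is eventually completed."""
    queues = [deque() for _ in range(levels)]
    for name in jobs:
        queues[0].append(name)
    remaining = dict(jobs)
    clock, finish = 0, {}
    while remaining:
        level = next(l for l, q in enumerate(queues) if q)   # highest non-empty level
        name = queues[level].popleft()
        quantum = base_quantum * 2 ** level
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] <= 0:
            del remaining[name]
            finish[name] = clock
        else:
            queues[min(level + 1, levels - 1)].append(name)  # demote the long job
    return finish

print(exponential_scheduler({"small": 1, "medium": 4, "big": 20}))
```

Small jobs complete after a single short quantum, while long jobs are pushed to levels with longer quanta but are still guaranteed to finish.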

Journal ArticleDOI
TL;DR: This paper considers the problem of scheduling several time critical processes in parallel and suggests that these considerations perhaps apply less equivocally to hybrid computation in which there are well-defined quantitative accuracy considerations than to other real time applications.
Abstract: In real time applications of computers such as hybrid computation and process control, a single processor may be required to execute several time critical processes. A typical single process will include short, highly time critical phases such as the operation of ADCs or DACs on an analog computer. These phases, of duration δT1, δT2, ..., must be performed in immediate response to events at times T1, T2, .... A process may also include intermediate phases, such as the computation of the next values to be written to the DACs, which must be executed at any time within the intervals (T1, T2), (T2, T3), .... For the purpose of this description a 'crisis' as discussed by Middleton (1971) is considered to be an extra event which triggers a null phase of computation. This paper considers the problem of scheduling several time critical processes in parallel. The premises on which the present discussion is based are given below. They perhaps apply less equivocally to hybrid computation, in which there are well-defined quantitative accuracy considerations, than to other real time applications.


01 Aug 1972
TL;DR: This article investigates the application of minimal-total-processing-time (MTPT) scheduling disciplines to rotating storage units when random arrival of requests is allowed and shows that the sorting procedure is the most time consuming phase of the algorithm.
Abstract: This article investigates the application of minimal-total-processing-time (MTPT) scheduling disciplines to rotating storage units when random arrival of requests is allowed. Fixed-head drum and moving-head disk storage units are considered and particular emphasis is placed on the relative merits of the MTPT scheduling discipline with respect to the shortest-latency-time-first (SLTF) scheduling discipline. The data presented are the results of simulation studies. Situations are discovered in which the MTPT discipline is superior to the SLTF discipline, and situations are also discovered in which the opposite is true. An implementation of the MTPT scheduling algorithm is presented and the computational requirements of the algorithm are discussed. It is shown that the sorting procedure is the most time consuming phase of the algorithm.
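The SLTF baseline against which MTPT is compared is simple to state; in the sketch below, angles are measured in fractions of a revolution and the request data are invented:

```python
def sltf_pick(head_angle, requests):
    """Shortest-latency-time-first on a fixed-head drum: among outstanding requests
    (start_angle, length), pick the one whose leading edge reaches the heads soonest."""
    def latency(req):
        start, _ = req
        return (start - head_angle) % 1.0        # fraction of a revolution to wait
    return min(requests, key=latency)

# Heads are at angular position 0.40 of a revolution; three outstanding records.
requests = [(0.10, 0.05), (0.45, 0.20), (0.90, 0.10)]
print(sltf_pick(0.40, requests))                 # (0.45, 0.20): only 0.05 rev away
```

SLTF looks only at the gap to each record's leading edge; an MTPT schedule is constructed over the whole outstanding set at once, which is where the sorting cost noted in the abstract comes from.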


Journal ArticleDOI
TL;DR: In this article, an economist makes some cross-disciplinary research supported suggestions on scheduling of motion pictures seen on television so that audiences for the reruns may be maximized, based on the assumption that reruns will be viewed more often.
Abstract: An excellent example of cross‐disciplinary research is the following piece in which an economist makes some research‐supported suggestions on scheduling of motion pictures seen on television so that audiences for the reruns may be maximized. Professor Taylor is a member of the economics faculty at Northern Illinois University in Normal.

Journal ArticleDOI
E.G. Mallach1
TL;DR: An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described and results indicate that efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting.
Abstract: An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain "limiting case" assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.

ReportDOI
01 Dec 1972
TL;DR: In this article, a data-sharing scheduler is defined in terms of finite-state machine theory, and a basic theorem of data sharing is stated and proved, and its implications are explored.
Abstract: See also Part 2, AD757903. A data-sharing scheduler is defined in terms of finite-state machine theory. Using the language and concepts of finite-state machines, the authors give precise definitions for the notions of delayed, blocked, deadlock, permanent blocking, and sharing a datum; these notions and their interrelationships lead to a characterization of a class of schedulers (unrestricted and nontrivial); a basic theorem of data sharing is stated and proved, and its implications are explored. The authors also give a narrative summary and discussion of the main result of the work, suitable for the system designer or analyst.

Journal ArticleDOI
Ray Bentley1
TL;DR: The out-of-kilter algorithm is used to solve a complex assignment problem involving interacting and conflicting personal choices subject to interacting resource constraints.
Abstract: The out-of-kilter algorithm is used to solve a complex assignment problem involving interacting and conflicting personal choices subject to interacting resource constraints. An example of successful use is given and extensions into the corporate and social planning fields are suggested.

Proceedings ArticleDOI
05 Dec 1972
TL;DR: It is the intent of this paper to present a cellular processor which can be used for scheduling and controlling a polymorphic computer network, freeing some of the processor time for more important functions.
Abstract: Polymorphic computer systems are comprised of a large number of hardware devices such as memory modules, processors, various input/output devices, etc., which can be combined or connected in a number of ways by a controller to form one or several computers to handle a variety of jobs or tasks. Task assignment and resource allocation in computer networks and polymorphic computer systems are currently being handled by software. It is the intent of this paper to present a cellular processor which can be used for scheduling and controlling a polymorphic computer network, freeing some of the processor time for more important functions. (See Figure 1.)