
Showing papers on "Scheduling (computing)" published in 1973


Journal ArticleDOI
TL;DR: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service and it is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization.
Abstract: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service. It is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets. It is also shown that full processor utilization can be achieved by dynamically assigning priorities on the basis of their current deadlines. A combination of these two scheduling techniques is also discussed.

7,067 citations
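The two results in this abstract translate directly into schedulability tests. A minimal sketch, assuming periodic tasks whose deadlines equal their periods (the paper's task model); the example task set is invented:

```python
# Schedulability tests for the paper's two results, for tasks given as
# (computation_time, period) with deadline = period.

def utilization(tasks):
    return sum(c / t for c, t in tasks)

def rm_schedulable(tasks):
    """Sufficient test for an optimum fixed-priority (rate-monotonic)
    scheduler: U <= n(2^(1/n) - 1), which falls toward ln 2 (~69.3%,
    the 'as low as 70 percent' bound) as the task set grows."""
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

def edf_schedulable(tasks):
    """Deadline-driven priorities achieve full processor utilization: U <= 1."""
    return utilization(tasks) <= 1

tasks = [(1, 4), (2, 6), (1, 8)]   # U ~ 0.708; n=3 bound ~ 0.780
print(utilization(tasks), rm_schedulable(tasks), edf_schedulable(tasks))
```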


Journal ArticleDOI
TL;DR: This paper presents a staff planning and scheduling model that has specific application in the nurse-staffing process in acute hospitals, and more general application in many other service organizations in which demand and production characteristics are similar.
Abstract: This paper presents a staff planning and scheduling model that has specific application in the nurse-staffing process in acute hospitals, and more general application in many other service organizations in which demand and production characteristics are similar. The aggregate planning models that have been developed for goods-producing organizations are not appropriate for these types of service organizations. In this paper the process for staffing services is divided into three decision levels: (a) policy decisions, including the operating procedures for service centers and for the staff-control process itself; (b) staff planning, including hiring, discharge, training, and reallocation decisions; and (c) short-term scheduling of available staff within the constraints determined by the two previous levels. These three planning "levels" are used as decomposition stages in developing a general staffing model. The paper formulates the planning and scheduling stages as a stochastic programming problem, suggests an iterative solution procedure using random loss functions, and develops a noniterative solution procedure for a chance-constrained formulation that considers alternative operating procedures and service criteria, and permits including statistically dependent demands. The discussion includes an example application of the model and illustrations of its potential uses in the nurse-staffing process.

184 citations
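The chance-constrained stage admits a standard deterministic equivalent. A one-period sketch, assuming normally distributed demand (a simplification; the paper also permits statistically dependent demands):

```latex
% Staff level s_t must cover random demand D_t ~ N(mu_t, sigma_t^2)
% with probability at least alpha; Phi^{-1} is the standard normal quantile.
\Pr\{\, s_t \ge D_t \,\} \;\ge\; \alpha
\quad\Longleftrightarrow\quad
s_t \;\ge\; \mu_t + \Phi^{-1}(\alpha)\,\sigma_t .
```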


Patent
15 Aug 1973
TL;DR: A multiprogrammed multiprocessing information processing system has independently operating computing, input/output, and memory modules connected through an exchange, interacting with a multilevel operating system designed to automatically make optimum use of all system resources by controlling system resources and by scheduling jobs in the multiprogramming mix of the processing system, as mentioned in this paper.
Abstract: A multiprogrammed multiprocessing information processing system having independently operating computing, input/output, and memory modules connected through an exchange, and interacting with a multilevel operating system designed to automatically make optimum use of all system resources by controlling system resources and by scheduling jobs in the multiprogramming mix of the processing system. In operation, the operating system insures that all system resources are automatically allocated to meet the needs of the programs introduced into the system, as well as insuring the continuous and automatic reassignment of resources, the initiation of new jobs, and the monitoring of their performance. System reliability is achieved by the incorporation of error detection circuits throughout the system, by single-bit correction of errors in memory, by recording errors for software analysis, and by modularization and redundancy of critical elements.

160 citations


Proceedings ArticleDOI
25 Jun 1973
TL;DR: FANSSIM II as mentioned in this paper is a digital logic simulator under development capable of simulating a 2500 gate network in concurrence with approximately 10,000 single-fault networks; its effective simulation rate is expected to be above a million signals/dollar, exceeding the real simulation rate of the IBM 360-50 by a factor of 50:1.
Abstract: Injecting a single fault into a fault-free digital network creates a “bad” network which is only slightly dissimilar from the original. Injecting the same stimuli (signals) into both of these networks will produce activity sequences which are often identical, normally almost identical, and rarely substantially different from each other. This similarity between good and bad networks and their activities suggests a method of simulation which avoids the customary duplication of essentially identical good and bad simulations. This method consists of simulating good network activity, and of initiating and performing a concurrent bad network simulation only if bad network activity actually differs from good activity. The run time savings inherent in this method are substantial if hundreds or thousands of bad networks can be simulated in concurrence with a single good network. FANSSIM II is a digital logic simulator under development capable of simulating a 2500 gate network in concurrence with approximately 10,000 single-fault networks. The storage requirements for this simulation are estimated to remain under 450,000 bytes. The effective simulation rate is expected to be above a million signals/dollar, exceeding the real simulation rate of 20,000 signals/dollar for the IBM 360-50 by a factor of 50:1. Some of the techniques and features used are the following:
• Fault sources are detected during good network activity and trigger the initiation of concurrent bad network activity.
• Fault effects are transmitted piggyback via good signals or separately as independent bad signals.
• Fault effects arriving at good gates cause the divergence of bad gates.
• Bad gates disappear, or converge, as soon as their inputs and outputs are again in agreement with the associated good gate.
• The passage of time is simulated precisely by using assignable rise and fall gate delays.
• Feedback, reconvergent fanout, and race detection are handled without special mechanisms.
• Economical event handling, desirable here due to accumulation of events of many bad networks, is achieved by using the time-mapping event scheduling technique.

146 citations
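The divergence/convergence mechanism is easy to demonstrate on a toy network. A minimal sketch, assuming two-input NAND gates and stuck-at faults (the network and fault list are invented, and each bad machine is re-simulated in full for brevity; a concurrent simulator such as FANSSIM II would propagate only the diverged gates):

```python
# Good-versus-bad simulation of a tiny combinational network; a fault's
# "diverged" set is the sparse state a concurrent simulator would carry.

GATES = [              # (output, input_a, input_b), topologically ordered
    ("n1", "a", "b"),
    ("n2", "b", "c"),
    ("out", "n1", "n2"),
]

def nand(x, y):
    return 1 - (x & y)

def simulate(inputs, fault=None):
    """fault = (signal, stuck_value), or None for the good machine."""
    values = dict(inputs)
    if fault and fault[0] in values:
        values[fault[0]] = fault[1]          # stuck-at fault on an input
    for out, a, b in GATES:
        values[out] = nand(values[a], values[b])
        if fault and fault[0] == out:
            values[out] = fault[1]           # stuck-at fault on a gate output
    return values

inputs = {"a": 1, "b": 1, "c": 0}
good = simulate(inputs)
for fault in [("n1", 1), ("n2", 0), ("a", 0)]:
    bad = simulate(inputs, fault)
    diverged = {s: v for s, v in bad.items() if good[s] != v}
    print(fault, "diverged signals:", diverged)
```

For the fault ("n2", 0) the divergence never reaches the output, illustrating the convergence the abstract describes.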


Book ChapterDOI
01 Jan 1973
TL;DR: This paper considers the most general type of “network” flow shop in which jobs pass through several stages, each of which is composed of one or more identical processors.
Abstract: This paper considers the most general type of “network” flow shop in which jobs pass through several stages, each of which is composed of one or more identical processors. Jobs are processed on any one of the processors at each stage in ascending order of stage numbers and the objective is minimization of makespan. The class of shops considered is characterized by prohibited in-process inventory and slightly restricted job ordering per processor. Originally designed for the scheduling of nylon polymerization, the algorithms developed in the paper have numerous applications, especially in the chemical processes and petrochemical production areas.

109 citations


Journal ArticleDOI
TL;DR: An easy method of determining the economic manufacturing schedule of a multi-product single machine system on a repetitive basis is described, and an example has been solved to illustrate it.
Abstract: This paper describes an easy method of determining the economic manufacturing schedule of a multi-product single machine system on a repetitive basis. An example has been solved to illustrate the method reported in the paper.

80 citations
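One standard "easy method" for this repetitive multi-product problem is the common-cycle solution sketched below; treating it as the paper's method is an assumption. With setup cost A_i, holding cost h_i, demand rate d_i, and production rate p_i for product i, a rotation cycle of length T costs:

```latex
% Cost per unit time of a common cycle T, and the cycle length minimizing it:
C(T) = \sum_i \left[ \frac{A_i}{T}
      + \tfrac{1}{2}\, h_i d_i \Bigl(1 - \frac{d_i}{p_i}\Bigr) T \right],
\qquad
T^{*} = \sqrt{\frac{2 \sum_i A_i}{\sum_i h_i d_i \bigl(1 - d_i/p_i\bigr)}}.
```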


Journal ArticleDOI
TL;DR: This paper is concerned with the detection of failure of a system when the time to failure is a Weibull variate; graphical aids for computing an appropriate inspection policy on the basis of costs, or of mean time between failure and its detection, are given.
Abstract: This paper is concerned with the detection of failure of a system when the time to failure is a Weibull variate. The suggested inspection policy depends on a single meaningful parameter. Graphical aids for computing an appropriate inspection policy on the basis of costs, or on the basis of mean time between failure and its detection, are given.

63 citations


Journal ArticleDOI
TL;DR: The n job, one-machine scheduling problem is considered where set-up and processing times are random and the objective is to minimize the number of late jobs.
Abstract: The n job, one-machine scheduling problem is considered where set-up and processing times are random and the objective is to minimize the number of late jobs. In the deterministic case, Moore's algorithm is known to produce an optimal schedule. A chance-constrained formulation of the nondeterministic problem is derived in which a job is processed if the probability that it will be completed prior to its due date is greater than a specified level. A deterministic equivalent problem is obtained, for which a modification of Moore's algorithm is proven to produce an optimal schedule.

45 citations
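For reference, the deterministic baseline named in the abstract is Moore's algorithm. A minimal sketch (job data invented; in the paper's chance-constrained setting one would first form the deterministic equivalent of each random set-up plus processing time, then apply the modified algorithm):

```python
# Moore's algorithm: minimizes the number of late jobs on one machine.

def moore(jobs):
    """jobs: list of (processing_time, due_date). Returns the on-time set
    in earliest-due-date (EDD) order; every rejected job is late."""
    on_time, t = [], 0
    for p, d in sorted(jobs, key=lambda j: j[1]):   # EDD order
        on_time.append((p, d))
        t += p
        if t > d:                                   # current job would be late:
            longest = max(on_time, key=lambda j: j[0])
            on_time.remove(longest)                 # drop the longest job so far
            t -= longest[0]
    return on_time

jobs = [(4, 5), (3, 6), (2, 8), (5, 9)]
print(moore(jobs))   # -> [(3, 6), (2, 8)]: two of the four jobs finish on time
```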


Journal ArticleDOI
TL;DR: The theoretical properties of this m-machine problem are explored, and the problem of determining an optimum scheduling procedure is examined; properties of the optimum schedule are given as well as the corresponding reductions in the number of schedules that must be evaluated in the search for an optimum.
Abstract: This paper deals with the sequencing problem of minimizing linear delay costs with parallel identical processors. The theoretical properties of this m-machine problem are explored, and the problem of determining an optimum scheduling procedure is examined. Properties of the optimum schedule are given as well as the corresponding reductions in the number of schedules that must be evaluated in the search for an optimum. An experimental comparison of scheduling rules is reported; this indicates that although a class of effective heuristics can be identified, their relative behavior is difficult to characterize.

42 citations
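A representative member of the "class of effective heuristics" the abstract alludes to (not the paper's own procedure) is weighted-shortest-processing-time dispatching across the m machines:

```python
# WSPT list scheduling: order jobs by delay-cost/processing-time and always
# give the next job to the machine that frees up earliest. Job data invented.
import heapq

def wspt_parallel(jobs, m):
    """jobs: list of (processing_time, delay_cost_per_unit_time).
    Returns the total linear delay cost (sum of weighted completion times)."""
    machines = [0.0] * m                    # machine next-free times (min-heap)
    heapq.heapify(machines)
    cost = 0.0
    for p, w in sorted(jobs, key=lambda j: j[1] / j[0], reverse=True):
        start = heapq.heappop(machines)
        finish = start + p
        cost += w * finish
        heapq.heappush(machines, finish)
    return cost

print(wspt_parallel([(3, 2), (1, 5), (4, 1), (2, 4)], m=2))   # -> 27.0
```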


Journal ArticleDOI
TL;DR: A dynamic programming solution is presented which decomposes the problem of scheduling jobs on M-parallel processors into a sequencing problem within an allocation problem and the computation required for solution is found to depend on the sequencing problem as it is affected by the waiting cost function.
Abstract: The problem of scheduling jobs on M-parallel processors is one of selecting a set of jobs to be processed from a set of available jobs in order to maximize profit. This problem is examined and a dynamic programming solution is presented which decomposes it into a sequencing problem within an allocation problem. The computation required for solution is found to depend on the sequencing problem as it is affected by the waiting cost function. Various forms of the waiting cost function are considered. The solution procedure is illustrated by an example, and possible extensions of the formulation are discussed.

22 citations
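The decomposition the abstract describes, sequencing nested inside allocation, can be shown by brute force on a tiny instance. A sketch assuming a common linear waiting cost (so shortest-processing-time order is optimal within each machine); the paper's dynamic program replaces this enumeration, and all numbers are invented:

```python
# Enumerate every allocation of jobs to M processors (0 = reject the job),
# sequence each machine by SPT, and keep the most profitable schedule.
from itertools import product

def best_schedule(jobs, m, wait_cost):
    """jobs: list of (profit, processing_time)."""
    best_profit, best_assign = float("-inf"), None
    for assign in product(range(m + 1), repeat=len(jobs)):
        profit = sum(pr for (pr, _), a in zip(jobs, assign) if a != 0)
        for k in range(1, m + 1):           # inner sequencing problem
            t = 0.0
            for p in sorted(pt for (_, pt), a in zip(jobs, assign) if a == k):
                t += p                      # SPT completion time on machine k
                profit -= wait_cost * t     # linear waiting cost
        if profit > best_profit:
            best_profit, best_assign = profit, assign
    return best_profit, best_assign

print(best_schedule([(10, 3), (6, 1), (8, 4), (3, 2)], m=2, wait_cost=1.0))
```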


Journal ArticleDOI
TL;DR: The purpose of this paper is to present empirical data from actual program measurements, in the hope that workers in the field might find experimental evidence upon which to substantiate and base theoretical work.
Abstract: The working set model for program behavior has been proposed in recent years as a basis for the design of scheduling and paging algorithms. Although the words “working set” are now commonly encountered in the literature dealing with resource allocation, there is a dearth of published data on program working set behavior. It is the purpose of this paper to present empirical data from actual program measurements, in the hope that workers in the field might find experimental evidence upon which to substantiate and base theoretical work.
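The quantity behind the measurements is Denning's working set W(t, τ): the distinct pages referenced during the last τ references. A minimal measurement sketch (the reference string is invented; the paper's contribution is data from real programs):

```python
# Sliding-window working-set sizes over a page-reference string.
from collections import Counter

def working_set_sizes(refs, tau):
    """Return |W(t, tau)| at each reference t."""
    window, sizes = Counter(), []
    for t, page in enumerate(refs):
        window[page] += 1
        if t >= tau:                   # slide the window of the last tau refs
            old = refs[t - tau]
            window[old] -= 1
            if window[old] == 0:
                del window[old]        # page has left the working set
        sizes.append(len(window))
    return sizes

refs = [1, 2, 1, 3, 2, 2, 4, 1, 3, 3]
print(working_set_sizes(refs, tau=4))   # -> [1, 2, 2, 3, 3, 3, 3, 3, 4, 3]
```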

Journal ArticleDOI
Charles D. Pack1
TL;DR: While there is noticeable improvement in the performance of the computer (model), in the sense that time-shared scheduling delays are reduced, these improvements are offset by the transmission delays imposed by multiplexing so that there may be little or no change in the computer-communications system performance.
Abstract: A study is made of the way in which asynchronous time division multiplexing changes the stochastic nature of the arrival process from a user to the computer and, consequently, affects the performance of a time-shared computer-communications system. It is concluded that while, for certain values of system parameters, there is noticeable improvement in the performance of the computer (model), in the sense that time-shared scheduling delays are reduced, these improvements are offset by the transmission delays imposed by multiplexing so that there may be little or no change in the computer-communications system performance.Analytical and simulation results are based on the model of the computer-communications system being an M/D/1 queue (the multiplexor) in tandem with a single exponential server (the computer). Analytical results include a general description of the output process of an M/D/1 queue and the conditions under which this output process is approximately Poisson.
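The tandem model's mean delays follow from standard queueing formulas. A sketch using the Pollaczek-Khinchine result for the M/D/1 multiplexor, and treating the computer's arrivals as Poisson, which is exactly the approximation whose validity the paper examines; parameter values are illustrative:

```python
# Mean queueing delays in the M/D/1 multiplexor and the exponential computer.

def md1_wait(lam, d):
    """M/D/1 mean wait: P-K formula with E[S^2] = d^2 for fixed service d."""
    rho = lam * d
    assert rho < 1
    return lam * d * d / (2 * (1 - rho))

def mm1_wait(lam, mu):
    """M/M/1 mean wait, assuming Poisson arrivals at the computer."""
    rho = lam / mu
    assert rho < 1
    return rho / (mu - lam)

lam, d, mu = 0.6, 1.0, 0.8            # arrival rate, slot time, service rate
print("multiplexor delay:", md1_wait(lam, d))                 # -> 0.75
print("computer delay (Poisson approx.):", mm1_wait(lam, mu)) # -> 3.75
```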

Journal ArticleDOI
TL;DR: This paper explores the advantages of the concurrent design of the language, operating system, and machine (via microcode) to create an interactive programming laboratory and suggested an important new concept for operating systems: separation of the scheduling from the maintenance functions in resource allocation.
Abstract: This paper explores the advantages of the concurrent design of the language, operating system, and machine (via microcode) to create an interactive programming laboratory. It describes the synergistic effect that the freedom to move and alter features from one of these domains to another has had on the design of this system (which has not been implemented). This freedom simplified both incremental compilation and the system's addressing structure, and centralized the communication mechanisms enabling the construction of hierarchical subsystems. It also suggested an important new concept for operating systems: separation of the scheduling from the maintenance functions in resource allocation. This separation enables incorporation of new scheduling algorithms (decision of what to do) without endangering the system integration (correctly performing the scheduling decisions).
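The proposed separation of scheduling (decision) from maintenance (mechanism) can be sketched as an interface boundary. Class and method names below are invented; this illustrates the concept, not the paper's (unimplemented) system:

```python
# Pluggable scheduling policies decide WHAT to run; the allocator alone
# performs and books the decision, so policies can change without
# endangering system integration.

class FifoPolicy:
    def pick(self, ready):
        return ready[0]

class ShortestFirstPolicy:
    def pick(self, ready):
        return min(ready, key=lambda task: task["estimate"])

class Allocator:
    """Maintenance side: owns all bookkeeping, never the decision logic."""
    def __init__(self, policy):
        self.policy = policy          # swap in a new algorithm freely
        self.ready = []

    def submit(self, task):
        self.ready.append(task)

    def dispatch(self):
        task = self.policy.pick(self.ready)   # decision of what to do
        self.ready.remove(task)               # correctly performing it
        return task

alloc = Allocator(ShortestFirstPolicy())
alloc.submit({"name": "compile", "estimate": 9})
alloc.submit({"name": "edit", "estimate": 2})
print(alloc.dispatch()["name"])               # -> edit
```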

Journal ArticleDOI
TL;DR: In this paper, the Ek/Em/r queue is analyzed using a single generating function with many associated variables, and augmented equations are derived which, when added to the steady state equations, give sufficient conditions to solve for the state probabilities.
Abstract: The Ek/Em/r queue is analysed using a single generating function with many associated variables. Augmenting equations are derived which, when added to the steady state equations, give sufficient conditions to solve for the state probabilities of the system.

Journal ArticleDOI
TL;DR: This paper demonstrates how the model can be used experimentally to obtain model parameters which, in the judgment of management, achieve a desirable balance between objectives.
Abstract: This paper describes a methodology to treat multiple objectives in a mathematical programming problem. A linear programming model is developed for the short-term manpower scheduling problem in a post office so as to get a desirable balance between mail transit times and resource expenditures. The scheduling problem is of particular interest because of 1) the multiplicity of objectives, 2) the existence of several mail classes, each having different arrival patterns, routings, and dispatch times, and 3) the complexity of the different scheduling options available. The paper demonstrates how the model can be used experimentally to obtain model parameters which, in the judgment of management, achieve a desirable balance between objectives. Once the parameters are determined, the model prescribes how to vary the overtime usage, reassign workers to the various work stations, and adjust the priorities of the mail classes. The model has indirect use in studying the effects of changing work capacities, dispatch schedules, and mail arrival patterns.
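The weight-tuning experiments the paper describes can be miniaturized. A hedged sketch in which the three-variable model, its coefficients, and the demand figure are invented stand-ins for the paper's much larger post-office LP:

```python
# Weighted multi-objective LP: trade mail delay against staffing cost by
# varying the objective weights, as in the paper's parameter experiments.
from scipy.optimize import linprog

def schedule(w_transit, w_cost):
    # x = [regular_hours, overtime_hours, delayed_mail_units]
    c = [w_cost * 1.0, w_cost * 1.5, w_transit * 1.0]
    A_ub = [[-1.0, -1.0, -1.0]]      # processed + delayed mail covers the
    b_ub = [-40.0]                   # 40-unit load (1 unit per hour assumed)
    bounds = [(0, 32), (0, 16), (0, None)]
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x

print(schedule(w_transit=2.0, w_cost=1.0))   # delay costly: overtime is used
print(schedule(w_transit=1.2, w_cost=1.0))   # delay cheaper: some mail waits
```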

Journal ArticleDOI
01 May 1973
TL;DR: Simulation studies, based on a multiprocessor-uniprogram environment, indicated that the scheduler was able to adapt to changing work loads and improved turnaround times significantly.
Abstract: This research is directed toward the development of a scheduling algorithm for large digital computer systems. To meet this goal methods of adaptive control and pattern recognition are applied. As jobs are received by the computer, a pattern recognition scheme is applied to the job in an attempt to classify its characteristics, such as a CPU-bound job, an I/O job, a large memory job, etc. Simultaneously, another subsystem, using a linear programming model, evaluates the overall system performance, and from this information an optimized (or desired) job stream is determined. When the processor requests a new job, it is chosen from the various classifications in an attempt to meet the optimal (or desired) job stream. After the jobs are completely processed, their characteristics are compared to the projected classification produced by the pattern discriminant function. The results are then returned to the discriminant function to update the decision mechanism, a minimum-distance discriminant function. From a systems point of view, this results in an adaptive or self-organizing control system. The overall effect is a dynamic scheduling algorithm. Simulation studies indicated that the scheduler was able to adapt to changing work loads, and it improved the turnaround times significantly. These simulation studies were based on a multiprocessor-uniprogram environment.
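The decision mechanism named in the abstract, a minimum-distance discriminant function, is simple to sketch. The job features, classes, and centroid-update rule are assumptions for illustration:

```python
# Nearest-centroid job classifier with feedback updates after jobs complete.
import math

class MinDistanceClassifier:
    def __init__(self, centroids):
        self.centroids = centroids            # class name -> feature vector
        self.counts = {k: 1 for k in centroids}

    def classify(self, x):
        return min(self.centroids,
                   key=lambda k: math.dist(x, self.centroids[k]))

    def update(self, label, x):
        """Fold measured job characteristics back into the centroid."""
        n = self.counts[label] + 1
        c = self.centroids[label]
        self.centroids[label] = [ci + (xi - ci) / n for ci, xi in zip(c, x)]
        self.counts[label] = n

# features: (CPU seconds, I/O operations per second) -- invented
clf = MinDistanceClassifier({"cpu_bound": [100.0, 5.0],
                             "io_bound": [10.0, 80.0]})
job = [90.0, 12.0]
label = clf.classify(job)     # -> "cpu_bound"
clf.update(label, job)        # adapt the decision mechanism
print(label, clf.centroids[label])
```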

Journal ArticleDOI
TL;DR: In this article, the authors obtained a precise and simple statement of the optimal policy for systems with one terminal stop at which interval control can be exerted, and the special case of a two-stop line is analyzed in detail.
Abstract: A transportation system is operated with one vehicle and several service points. Travel times between adjacent stops are random variables whose distribution functions are unchanged over a given period of time. The problem is to devise the scheduling strategy that minimizes average waiting time for passengers of the system. From the calculus of variations, and the principles advanced by Osuna and Newell for dealing with such problems, we are able to obtain a precise and simple statement of optimal policy for systems with one terminal stop at which interval control can be exerted. The special case of a two-stop line is analyzed in detail. An approximate optimal policy for systems with two terminals, very closely related to the one-terminal optimal strategy, is also suggested.

Journal ArticleDOI
TL;DR: In this article, a new formulation is proposed for the problem of the optimal phasing out over some time period of a group of similar items of capital equipment, and the possibility of hiring items from an external source is explicitly considered, as are policy constraints which restrict the minimum number of items which may be held at any time.
Abstract: A new formulation is proposed for the problem of the optimal phasing out over some time period of a group of similar items of capital equipment. The possibility of hiring items from an external source is explicitly considered, as are policy constraints which restrict the minimum number of items which may be held at any time. Initially formulated as an integer program, the problem is transformed to the familiar transportation form of linear programming.

Book ChapterDOI
01 Jan 1973
TL;DR: The solution of scheduling and sequencing problems has in most cases proved to involve serious difficulties; direct solution methods are available only for problems of very special structure, and branch and bound methods, although frequently proposed, have generally not been efficient enough to provide a practical solution procedure.
Abstract: The solution of scheduling and sequencing problems has in most cases proved to involve serious difficulties. Direct solution methods are available only for problems of very special structure — e.g., see [7] — and computational experience with recursive procedures has generally been quite disappointing. Dynamic programming has been of very limited utility (e.g., see [4] and the comments of [8] on [2]). Branch and bound methods, although frequently proposed (e.g., see [1] and [6]), have generally not been efficient enough to provide a practical solution procedure.

Book ChapterDOI
08 Oct 1973
TL;DR: In a broad sense the problems most central to the design of operating systems are sequencing problems, which include sequencing to ensure mutually exclusive use of a resource, determinacy, avoidance of deadlocks, or synchronized execution of tasks.
Abstract: In a broad sense the problems most central to the design of operating systems are sequencing problems. This is reflected in the term, operating systems, itself. These problems include sequencing to ensure mutually exclusive use of a resource, determinacy, avoidance of deadlocks, or synchronized execution of tasks; sequencing to make efficient use of memory and input/output resources; and sequencing task executions to optimize performance measures such as schedule-length and mean finishing time. (A mathematical treatment of these classes of problems can be found in [1]). Clearly, the first objective in the study of these problems has been and is the discovery of algorithms that are optimal in some desirable sense, or if optimality implies an excessive implementation cost, heuristic algorithms that are easily implemented and whose performance is reasonably close to the optimal. The frequently difficult mathematics associated with these studies is concerned with proofs of optimality, general complexity analyses, and the analysis of the performance of algorithms.


Journal ArticleDOI
TL;DR: An approximate approach to the problem of developing schedules for the execution of computer programs in a batch oriented multiprogramming environment is developed by heuristically reducing the dimensionality of the problem to a sequential optimization problem.
Abstract: This paper discusses the problem of developing schedules for the execution of computer programs in a batch oriented multiprogramming environment. An approximate approach to the problem is developed by heuristically reducing the dimensionality of the problem to a sequential optimization problem. The superiority of this heuristic criterion over five other commonly used criteria is shown. A numerical example is given.

Journal ArticleDOI
TL;DR: This paper investigates the problem of scheduling a processor to optimize throughput in a multiprogramming environment and shows that for any set of independent programs a preemptive strategy is not necessary to obtain the minimum running time for the entire batch.
Abstract: This paper investigates the problem of scheduling a processor to optimize throughput in a multiprogramming environment. A deterministic model is used to study the scheduling of a batch of k program...


ReportDOI
31 Dec 1973
TL;DR: This is the final report for the ARPA Contract number DAHC 15-73-C-0368 at UCLA covering the period from June 15, 1973 to November 30, 1975, with a short statement of accomplishments followed by a complete bibliography of published works which were supported under this research contract.
Abstract: This is the final report for the ARPA Contract number DAHC 15-73-C-0368 at UCLA covering the period from June 15, 1973 to November 30, 1975. During this contract period we have been engaged in the following tasks: providing a sophisticated network measurement facility adequate for a variety of uses such as performance measurement, model validation, and the design of network algorithms; conducting experiments on the network to analyze the effect of transmitting various data sources; defining and extending the tools necessary to analyze and evaluate the performance of computer communication systems; developing models of multiple resource systems and computer networks; studying packet communication systems that incorporate satellite and/or radio communications; and designing and beginning implementation of a verifiably secure operating system for the PDP 11/45. Included is a short statement of accomplishments followed by a complete bibliography of published works which were supported under this research contract.

Proceedings ArticleDOI
27 Aug 1973
TL;DR: In this article, a new adaptive method of internal scheduling of resources, with the goal of the optimization of computer system performance, is described, where a general system effectiveness measure is defined which parametrically encompasses the prototypical system effectiveness measures to be considered.
Abstract: The objective of this paper is the description of the development and verification of a new, adaptive method of internal scheduling of resources, with the goal of the optimization of computer system performance. A general system effectiveness measure is defined which parametrically encompasses the prototypical system effectiveness measures to be considered. The adaptive internal scheduler then selects such tasks for resource allocation request fulfillment that a local system effectiveness measure, derived from the general measure, is optimized, leading to semi-optimization of the general measure. The adaptive scheduler functions as a second-order exponential estimator. A predictor-corrector algorithm functions as the adaptive controller by varying the estimator's parameters and the time of application of the estimator in response to the nature of the sequence of deviations between the predicted and actual values of resource utilization. In order to validate the new scheduler, a workload description in the form of task profile distributions was gathered by a software monitor on the Georgia Tech B5700 running a live job stream. A simulator was developed to allow the comparison of the new scheduler with other nonadaptive schedulers shown to be good by various researchers, under various general system effectiveness measure prototypes. The simulator was validated by running it with the B5700 TSSMCP scheduler against the B5700 workload job profiles. Values resulting from the simulation checked quite well against those of the measured B5700 system. The results of other simulation runs show that the new adaptive scheduler is clearly statistically superior to other schedulers under most measures considered and is inferior to no other scheduler under any measure considered, at least in that environment. Only the new internal scheduler is described here.
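The estimator/controller pair can be sketched with double exponential smoothing plus a crude corrector that speeds up tracking after persistent one-sided errors. The adaptation rule, constants, and usage trace are assumptions in the spirit of the abstract, not the paper's algorithm:

```python
# Second-order (double) exponential smoothing of resource utilization, with
# a predictor-corrector loop that enlarges alpha when errors run one-sided.

class AdaptiveEstimator:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.s1 = self.s2 = None          # first- and second-order smoothings
        self.last_sign, self.run = 0, 0

    def predict(self):
        if self.s1 is None:
            return 0.0
        return 2 * self.s1 - self.s2      # Brown's double-smoothing forecast

    def observe(self, x):
        err = x - self.predict()
        sign = 1 if err >= 0 else -1
        self.run = self.run + 1 if sign == self.last_sign else 1
        self.last_sign = sign
        if self.run >= 3:                            # persistent deviation:
            self.alpha = min(0.9, self.alpha + 0.1)  # track faster
        if self.s1 is None:
            self.s1 = self.s2 = float(x)
        else:
            self.s1 += self.alpha * (x - self.s1)
            self.s2 += self.alpha * (self.s1 - self.s2)

est = AdaptiveEstimator()
for usage in [10, 12, 15, 19, 24, 30]:    # rising resource demand
    est.observe(usage)
    print(round(est.predict(), 2))
```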

Journal ArticleDOI
TL;DR: A strategy is described for a controlling algorithm to schedule and supply many data sets to a large, long-running applications program; the strategy also provides protection against breakdown of the computer or its operating system.

Proceedings ArticleDOI
01 Jan 1973
TL;DR: The design and an evaluation method of an operating system in which the optimization of system utilization is controlled by resource-demanded dispatching of program segments and dynamic scheduling of segment execution is presented.
Abstract: This paper presents the design and an evaluation method of an operating system in which the optimization of system utilization is controlled by resource-demanded dispatching of program segments and dynamic scheduling of segment execution. The basic departure of the proposed system design from the operating systems currently in use is that system resources, especially I/O facilities, have much more control over the scheduling of task execution. Each system resource can actively request the dispatcher to activate program segments that use it, whenever the queue length of its associated queue falls under a dynamically adjustable threshold. Program tasks are broken into small segments and resource requirements of these segments are obtained by the system to control segment dispatching. The method of dynamic optimization of the proposed system is presented together with the method for testing the new control algorithms using an existing operating system.
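The central mechanism, resources that actively pull program segments whenever their queues fall below a threshold, can be sketched directly. Resource names, thresholds, and the segment list are invented:

```python
# Resource-demanded dispatching: each resource watches its own queue length
# and asks the dispatcher for matching segments when it runs short of work.
from collections import deque

class Resource:
    def __init__(self, name, threshold):
        self.name, self.threshold = name, threshold
        self.queue = deque()

    def needs_work(self):
        return len(self.queue) < self.threshold   # adjustable threshold

class Dispatcher:
    def __init__(self, resources, ready_segments):
        self.resources = resources
        self.ready = deque(ready_segments)        # (segment_id, resource)

    def service_requests(self):
        """Short-queued resources pull segments that use them."""
        for r in self.resources:
            while r.needs_work() and self._dispatch_to(r):
                pass

    def _dispatch_to(self, r):
        for seg in list(self.ready):
            if seg[1] == r.name:
                self.ready.remove(seg)
                r.queue.append(seg)
                return True
        return False

disk, cpu = Resource("disk", 2), Resource("cpu", 1)
d = Dispatcher([disk, cpu], [("s1", "disk"), ("s2", "cpu"), ("s3", "disk")])
d.service_requests()
print(list(disk.queue), list(cpu.queue))   # disk pulls s1, s3; cpu pulls s2
```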

01 Jan 1973
TL;DR: In an effort to formalize and systematize its contractor performance evaluation practices, the Department of Defense established the Contractor Performance Evaluation (CPE) Program, the major portion of which was discontinued after 8 years.
Abstract: The concept of contractor performance evaluation as an aid to the buying decision is universally applicable. It is practiced by individuals in the conduct of their personal business transactions; by business firms in dealing with their suppliers; and by the Department of Defense in fulfilling its procurement mission. In an effort to formalize and systematize its contractor performance evaluation practices, DOD established the Contractor Performance Evaluation (CPE) Program. After 8 years, its major portion was discontinued. The cancellation's effect on Army procurement practices is assessed. Current attitudes, trends, methods, policies, and techniques relative to contractor performance evaluation are identified and analyzed. Recommendations for improving current practices are presented. (Author)

Journal ArticleDOI
TL;DR: A new sorting, activity cost per unit time in descending order, is shown to be not significantly worse than traditional critical path values, and of considerably greater practical use on site due to easy computation.
Abstract: Resource limitations impose delays in the execution of multiactivity projects. This computer-based experiment investigated both the network configuration and the priorities used at points of obstruction. Reviews of critical path analysis and resource scheduling methods are included in addition to a description of the experiment and a summary of the statistical analysis. The effect on delay of network parameters such as free float, density and connectivity is quantified. A new sorting, activity cost per unit time in descending order, is shown to be not significantly worse than traditional critical path values, and of considerably greater practical use on site due to easy computation.
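The priority rule the experiment evaluates slots into a standard parallel schedule-generation scheme. A sketch with a single resource and an invented four-activity network; only the descending cost-per-unit-time sort comes from the abstract:

```python
# At each time unit, start precedence-eligible activities in descending
# cost-per-unit-time order while resource capacity lasts.

# activity: (duration, resource_demand, cost_per_unit_time, predecessors)
ACTS = {
    "A": (3, 2, 9.0, []),
    "B": (2, 1, 4.0, []),
    "C": (2, 2, 7.0, ["A"]),
    "D": (4, 1, 5.0, ["B"]),
}
CAPACITY = 3

def schedule(acts, capacity):
    start, finish, t = {}, {}, 0
    while len(start) < len(acts):
        free = capacity - sum(acts[a][1] for a in start if finish[a] > t)
        eligible = [a for a in acts if a not in start and
                    all(finish.get(p, float("inf")) <= t for p in acts[a][3])]
        for a in sorted(eligible, key=lambda a: acts[a][2], reverse=True):
            if acts[a][1] <= free:             # fits remaining capacity
                start[a], finish[a] = t, t + acts[a][0]
                free -= acts[a][1]
        t += 1
    return start, max(finish.values())

print(schedule(ACTS, CAPACITY))   # start times and project makespan
```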