
Showing papers on "Scheduling (computing) published in 1982"


Proceedings Article
01 Jan 1982

606 citations


Journal ArticleDOI
TL;DR: Computational results indicate that the procedures provide cost-effective optimal solutions for small problems and good heuristic solutions for larger problems, while simultaneously taking into account a variety of constraint types.
Abstract: This paper introduces methods for formulating and solving a general class of nonpreemptive resource-constrained project scheduling problems in which the duration of each job is a function of the resources committed to it. The approach is broad enough to permit the evaluation of numerous time or resource-based objective functions, while simultaneously taking into account a variety of constraint types. Typical of the objective functions permitted are minimize project duration, minimize project cost given performance payments and penalties, and minimize the consumption of a critical resource. Resources which may be considered include those which are limited on a period-to-period basis such as skilled labor, as well as those such as money, which are consumed and constrained over the life of the project. At the planning stage the user of this approach is permitted to identify several alternative ways, or modes, of accomplishing each job in the project. Each mode may have a different duration, reflecting the magnitude and mix of the resources allocated to it. At the scheduling phase, the procedure derives a solution which specifies how each job should be performed, that is, which mode should be selected, and when each mode should be scheduled. In order to make the presentation concrete, this paper focuses on two problems: given multiple resource restrictions, minimize project completion time, and minimize project cost. The latter problem is also known as the resource-constrained time-cost tradeoff problem. Computational results indicate that the procedures provide cost-effective optimal solutions for small problems and good heuristic solutions for larger problems. The programmed solution algorithms are relatively simple and require only modest computing facilities, which permits them to be potentially useful scheduling tools for organizations having small computer systems.
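As a rough illustration of the kind of multi-mode formulation described above, the following sketch uses assumed notation (the variables x_{jmt}, durations d_{jm}, resource usages r_{jmk}, and availabilities R_{kt} are illustrative, and precedence constraints are omitted for brevity); it is not the authors' exact model.

```latex
% Hypothetical multi-mode sketch: x_{jmt} = 1 if job j is performed in mode m and
% finishes in period t; job N is a dummy terminal job with a single zero-duration mode.
\begin{align*}
\min\; & \sum_{t} t \, x_{N,1,t}
  && \text{(project duration)} \\
\text{s.t.}\; & \sum_{m}\sum_{t} x_{jmt} = 1
  && \forall j \quad \text{(one mode and one finish time per job)} \\
& \sum_{j}\sum_{m} r_{jmk} \sum_{q=t}^{t+d_{jm}-1} x_{jmq} \le R_{kt}
  && \forall k, t \quad \text{(per-period resource limits)} \\
& x_{jmt} \in \{0,1\}.
\end{align*}
```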

454 citations


Book ChapterDOI
01 Jan 1982
TL;DR: A survey of deterministic sequencing and scheduling can be found in this article, where the authors survey the state of the art with respect to optimization and approximation algorithms and interpret these in terms of computational complexity theory.
Abstract: The theory of deterministic sequencing and scheduling has expanded rapidly during the past years. We survey the state of the art with respect to optimization and approximation algorithms and interpret these in terms of computational complexity theory. Special cases considered are single machine scheduling, identical, uniform and unrelated parallel machine scheduling, and open shop, flow shop and job shop scheduling. This paper is a revised version of the survey by Graham et al. (Ann. Discrete Math. 5 (1979) 287–326), with emphasis on recent developments.

326 citations


Journal ArticleDOI
Efe
TL;DR: The purpose of task allocation scheduling in a set of interconnected processors is to reduce job turnaround time by maximizing the utilization of resources while minimizing any communication between processors.
Abstract: When confronted with the need to utilize a remote computer facility or a data base that does not exist in a local computer system, a user looks to distributed processing. Distributed processing not only solves the above problems, it can also increase processing speed by providing facilities for parallel execution. Furthermore, its interconnected set of mini- or microprocessors is flexible, efficient, reliable, modular, and (comparatively) inexpensive. Distributed processing applications can be found in large networks covering a number of computing centers and also in small signal-processing systems. When referring to any of these systems in this article, I call the processing elements simply "processors" and use the term "task" for programs or other kinds of code units submitted to a system. Although there are related areas of research in distributed processing environments, including file allocation and processor scheduling, this article concentrates on the problem of task allocation management. The purpose of task allocation scheduling in a set of interconnected processors is to reduce job turnaround time. This is done by maximizing the utilization of resources while minimizing any communication between processors. No access conflicts arise from a shared memory (as in a multiple processing system) because every processor is assumed to have its own working area. The benefits of task allocation scheduling make distributed processing desirable, but several problems must be solved before they can be realized. For example, when the number of processors in a system increases beyond a certain level, throughput decreases.1 This "saturation effect" can be simply explained: If there are n processors in the system and if each has a processing speed of k, then one would expect the speed of the system to be n × k. In reality, however, a lower processing speed results, caused by such factors as control overheads, communication between processors, unbalanced loading, queueing delays, and the precedence order of the parts of a task assigned to separate processors. In order to eliminate or minimize saturation, these inhibiting factors must also be eliminated or minimized. First, a fast, dynamic assignment algorithm can be used to attack the control overhead problem. Additionally, load distribution must be provided for a system as a means of balancing and minimizing both interprocessor communications (IPC) and queueing delays. Since, in reality, any system's resources are limited, the effect of limited memory size in each processor on saturation must also be considered, as must response time. Response time …
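The saturation effect described above can be illustrated with a toy speedup calculation. The linear per-pair communication overhead model below is an assumption made purely for illustration; only the qualitative behavior (throughput eventually falling as processors are added) reflects the article.

```python
# Toy illustration of the "saturation effect": ideal speed is n * k, but per-processor
# overhead (communication, unbalanced load, queueing) eventually drags throughput down.
# The linear overhead model used here is an assumption, not taken from the article.

def effective_speed(n, k, overhead_per_pair=0.02):
    """Processing speed of n processors of speed k, minus pairwise communication overhead."""
    ideal = n * k
    # communication overhead assumed to grow with the number of processor pairs
    overhead = overhead_per_pair * k * n * (n - 1) / 2
    return max(ideal - overhead, 0.0)

if __name__ == "__main__":
    k = 1.0
    for n in (1, 2, 4, 8, 16, 32, 64, 128):
        print(f"n={n:3d}  ideal={n * k:6.1f}  effective={effective_speed(n, k):6.1f}")
```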

323 citations


Journal ArticleDOI
TL;DR: A categorization process based on two powerful project summary measures is provided, and it is shown that a rule introduced by this research performs significantly better on most categories of projects.
Abstract: Application of heuristic solution procedures to the practical problem of project scheduling has previously been studied by numerous researchers. However, there is little consensus about their findings, and the practicing manager is currently at a loss as to which scheduling rule to use. Furthermore, since no categorization process was developed, it is assumed that once a rule is selected it must be used throughout the whole project. This research breaks away from this tradition by providing a categorization process based on two powerful project summary measures. The first measure identifies the location of the peak of total resource requirements and the second measure identifies the rate of utilization of each resource type. The performance of the rules is classified according to the values of these two measures, and it is shown that a rule introduced by this research performs significantly better on most categories of projects.

300 citations


Journal ArticleDOI
TL;DR: This investigation considers the problem of nonpreemptively assigning a set of independent tasks to a system of identical processors to maximize the earliest processor finishing time and proves that the worst-case performance of the LPT algorithm has an asymptotically tight bound of $\frac{4}{3}$ times the optimal.
Abstract: This investigation considers the problem of nonpreemptively assigning a set of independent tasks to a system of identical processors to maximize the earliest processor finishing time. While this goal is a nonstandard scheduling criterion, it does have natural applications in certain maintenance scheduling and deterministic fleet sizing problems. The problem is NP-hard, justifying an analysis of heuristics such as the well-known LPT algorithm in an effort to guarantee near-optimal results. It is proved that the worst-case performance of the LPT algorithm has an asymptotically tight bound of $\frac{4}{3}$ times the optimal.
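A minimal sketch of the LPT heuristic applied to this criterion (assign the longest remaining task to the currently least-loaded processor, then read off the earliest finishing time); the instance data are made up, and the code demonstrates the rule itself, not the worst-case analysis.

```python
import heapq

def lpt_min_load(tasks, m):
    """Assign tasks (largest first) to the currently least-loaded of m identical processors.

    Returns per-processor loads; the earliest processor finishing time is min(loads).
    LPT is the heuristic whose worst case for this criterion the paper bounds by 4/3.
    """
    loads = [0.0] * m
    heap = [(0.0, i) for i in range(m)]      # min-heap of (load, processor index)
    heapq.heapify(heap)
    for t in sorted(tasks, reverse=True):    # longest processing time first
        load, i = heapq.heappop(heap)
        loads[i] = load + t
        heapq.heappush(heap, (loads[i], i))
    return loads

if __name__ == "__main__":
    tasks = [7, 7, 6, 5, 4, 4, 3, 2]         # made-up instance
    loads = lpt_min_load(tasks, m=3)
    print("loads:", loads, "earliest finishing time:", min(loads))
```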

113 citations


Journal ArticleDOI

107 citations


Journal ArticleDOI
01 Apr 1982
TL;DR: The complexity of scheduling conventional horizontal processors and the ease of scheduling polycyclic processors are demonstrated by means of an example.
Abstract: A horizontal architecture consists of a number of resources that can operate in parallel, each of which is controlled by a field in the wide instruction word. Such architectures offer the potential for high-performance scientific computing at a modest cost. If this potential performance is to be realized, the multiple resources of a horizontal processor must be scheduled effectively. The scheduling task for conventional horizontal processors is quite complex, and the construction of highly optimizing compilers for them is a difficult and expensive project. The polycyclic architecture is a horizontal architecture with architectural support for the scheduling task. The complexity of scheduling conventional horizontal processors and the ease of scheduling polycyclic processors are demonstrated by means of an example.

99 citations


Journal ArticleDOI
TL;DR: Improved priority scheduling rules for a repair shop supporting a multi-item repairable inventory system with a hierarchical product structure are presented in this article, where a variety of scheduling rules are evaluated using a simulation of a representative shop and product structure.
Abstract: Improved priority scheduling rules are presented for a repair shop supporting a multi-item repairable inventory system with a hierarchical product structure. A variety of scheduling rules are evaluated using a simulation of a representative shop and product structure. The results indicate that dynamic rules which use inventory status information perform better than other dynamic or static rules which ignore inventory status; moreover, dynamic rules which use work-in-process inventory information outperform dynamic rules which ignore work-in-process inventory levels. In the simulation, the use of improved scheduling rules provides performance equivalent to a 20% reduction in spares inventory.

83 citations


Journal ArticleDOI
TL;DR: A network-based optimizing approach to the classroom/time model is devised that rapidly approximates solutions, combining the scheduler's insight with the combinatorial searching ability of a computer via a transshipment optimization network model.

78 citations


Journal ArticleDOI
TL;DR: This paper traces the development of the theory of cyclic queues and queue networks from 1954 to the present, including an early application of queueing theory to underground coal mining.
Abstract: The first paper to introduce the concept of a cyclic queue appeared in the Operational Research Quarterly in 1954. The paper dealt with the 'flow' of aircraft engines from operation to maintenance to available for operation. In 1958, the first paper analyzing the cyclic queue model appeared in the same journal; that paper applied queueing theory to underground coal mining. In 1965 it was shown that stochastic queue networks can be treated analytically in the same manner as cyclic queues with a small adjustment in the auxiliary parameters. Since then cyclic queue models have been applied not only to the problems mentioned above but also to many other production and service industry problems: computer design and control, ship operations, production processes, communications flow, ingot movements in a steel mill, to name but a few. In order for this to be possible, extensions and advances in theory have been required, and these have come from many nations and many fields of endeavour. This paper traces the development of the theory of cyclic queues and queue networks from 1954 to the present.

Book ChapterDOI
Gideon Weiss
01 Jan 1982
TL;DR: Optimality of preemptive SEPT and LEPT also holds when processing times are drawn from a common MHR (monotone hazard rate) distribution.
Abstract: m parallel machines are available for the processing of n jobs. The jobs require random amounts of processing. When processing times are exponentially distributed, SEPT (shortest expected processing time first) minimizes the flowtime, while LEPT (longest expected processing time first) minimizes the makespan and maximizes the time to first machine idleness. For m = 2, various other problems can be optimized by different rules. Optimality of preemptive SEPT and LEPT also holds when processing times are drawn from a common MHR (monotone hazard rate) distribution.
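A rough simulation sketch contrasting SEPT and LEPT as nonpreemptive list-scheduling orders with exponentially distributed processing times; the chosen means, the two-machine setting, and the replication count are illustrative assumptions, not part of the chapter.

```python
import heapq
import random

def list_schedule(expected_times, m, order, rng):
    """Nonpreemptive list scheduling on m machines: jobs start in the given order,
    each on the machine that frees up first; actual times are exponential with the
    stated means. Returns (flowtime, makespan) for one sampled realization."""
    free = [0.0] * m
    heapq.heapify(free)
    flowtime = makespan = 0.0
    for j in order:
        start = heapq.heappop(free)
        finish = start + rng.expovariate(1.0 / expected_times[j])
        flowtime += finish
        makespan = max(makespan, finish)
        heapq.heappush(free, finish)
    return flowtime, makespan

if __name__ == "__main__":
    rng = random.Random(1)
    means = [1.0, 2.0, 3.0, 4.0, 6.0, 8.0]          # made-up expected processing times
    sept = sorted(range(len(means)), key=lambda j: means[j])
    lept = sorted(range(len(means)), key=lambda j: -means[j])
    for name, order in (("SEPT", sept), ("LEPT", lept)):
        total_flow = total_make = 0.0
        for _ in range(2000):                        # average over replications
            fl, mk = list_schedule(means, 2, order, rng)
            total_flow += fl
            total_make += mk
        print(f"{name}: mean flowtime {total_flow/2000:6.2f}, mean makespan {total_make/2000:6.2f}")
```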

Book ChapterDOI
01 Jan 1982
TL;DR: In this paper, a polynomial time-bounded algorithm is presented for solving three problems involving the preemptive scheduling of precedence-constrained jobs on parallel machines: the "intree problem", the "two-machine problem with equal release dates", and the "general two machine problem".
Abstract: Polynomial time-bounded algorithms are presented for solving three problems involving the preemptive scheduling of precedence-constrained jobs on parallel machines: the “intree problem”, the “two-machine problem with equal release dates”, and the “general two-machine problem”. These problems are preemptive counterparts of problems involving the nonpreemptive scheduling of unit-time jobs previously solved by Brucker, Garey and Johnson and by Garey and Johnson. The algorithms and proofs (and the running times of the algorithms) closely parallel those presented in their papers. These results improve on previous results in preemptive scheduling and also suggest a close relationship between preemptive scheduling problems and problems in nonpreemptive scheduling of unit-time jobs.

Journal ArticleDOI
Sushil Gupta
TL;DR: A mathematical model based on the branch-and-bound technique to solve static scheduling problems involving n jobs and m machines where the objective is to minimize the cost of setting up the machines is presented.
Abstract: This paper presents a mathematical model based on the branch-and-bound technique to solve static scheduling problems involving n jobs and m machines where the objective is to minimize the cost of setting up the machines. Set-up times are sequence dependent and not included in processing times. There is a finite non-zero cost associated with setting up the machines, which differs from machine to machine. It is further assumed that the routing may be different for different jobs and that a job may return to a machine more than once.
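A toy sketch in the spirit of the model, simplified to a single machine with sequence-dependent setup costs and a very cheap lower bound for pruning; the instance and the bound are illustrative assumptions, not the author's formulation.

```python
import math

def branch_and_bound(setup):
    """Order n jobs on one machine to minimize total sequence-dependent setup cost.

    setup[i][j] is the cost of setting up job j immediately after job i; index 0
    represents the initial machine state. A cheap lower bound (each remaining job's
    cheapest possible incoming setup) is used to prune. Toy-sized illustration only.
    """
    n = len(setup) - 1
    best = [math.inf, None]

    def lower_bound(remaining):
        return sum(min(setup[i][j] for i in range(n + 1) if i != j) for j in remaining)

    def extend(last, remaining, cost, seq):
        if cost + lower_bound(remaining) >= best[0]:
            return                                    # prune this branch
        if not remaining:
            best[0], best[1] = cost, seq
            return
        for j in sorted(remaining, key=lambda j: setup[last][j]):
            extend(j, remaining - {j}, cost + setup[last][j], seq + [j])

    extend(0, frozenset(range(1, n + 1)), 0, [])
    return best[0], best[1]

if __name__ == "__main__":
    # setup[i][j]: cost of job j right after job i (made-up numbers); index 0 = initial state
    setup = [
        [0, 4, 2, 7],
        [0, 0, 3, 6],
        [0, 5, 0, 2],
        [0, 1, 4, 0],
    ]
    cost, order = branch_and_bound(setup)
    print("minimum setup cost:", cost, "job order:", order)
```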

Journal ArticleDOI
TL;DR: It is found that scheduling is helpful in reallocating delay among user classes and can be used to improve the fairness of a network, and a conservation theorem characterizing the effects of scheduling on overall mean end-to-end delay is established.
Abstract: The use of channel scheduling to improve a measure of fairness in packet-switching networks is investigated. This fairness measure is based on mean end-to-end delays derived from Kleinrock's classical model. The network designer can incorporate any desired relative delay among user classes into this fairness measure. It is found that scheduling is helpful in reallocating delay among user classes and can be used to improve the fairness of a network. It is also shown that a parameterized queueing discipline can be used to further improve fairness. A conservation theorem characterizing the effects of scheduling on overall mean end-to-end delay is established. The results are applicable to both fixed and random routing and are found to be relatively insensitive to fluctuations in traffic.
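For reference, a commonly stated single-queue form of the conservation idea the paper builds on (Kleinrock's M/G/1 conservation law) is sketched below; the paper's theorem generalizes this to mean end-to-end delays in a network, so treat this only as background, not as the paper's result.

```latex
% Kleinrock's conservation law for a work-conserving, non-preemptive M/G/1 queue
% serving classes i = 1..K (standard textbook form, stated here as background):
\[
\sum_{i=1}^{K} \rho_i W_i \;=\; \frac{\rho}{1-\rho}\cdot\frac{\sum_{i=1}^{K}\lambda_i\,\mathbb{E}[S_i^2]}{2},
\qquad \rho_i = \lambda_i\,\mathbb{E}[S_i],\;\; \rho = \sum_{i=1}^{K}\rho_i .
\]
% Any such scheduling discipline can only redistribute waiting time among classes;
% it cannot change this weighted sum.
```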

Journal ArticleDOI
TL;DR: Polynomially bounded solution methods are presented to solve a class of precedence constrained scheduling problems in which each job requires a certain amount of nonrenewable resource that is being consumed during its execution.

Journal ArticleDOI
Clyde L. Monma
TL;DR: Linear-time algorithms are presented for several problems of scheduling n equal-length tasks on m identical parallel processors subject to precedence constraints, which improves upon previous time bounds for the maximum lateness problem with treelike precedence constraints.
Abstract: Linear-time algorithms are presented for several problems of scheduling n equal-length tasks on m identical parallel processors subject to precedence constraints. This improves upon previous time bounds for the maximum lateness problem with treelike precedence constraints, the number-of-late-tasks problem without precedence constraints, and the one machine maximum lateness problem with general precedence constraints.

Journal ArticleDOI
TL;DR: It is shown that the often-used dynamic dispatching policy is optimal within the framework of this multiprogramming model; the range within which these properties hold is discussed, and some examples are given.
Abstract: A finite-source queueing model (sometimes called the finite-population, machine-interference, or machine-repairman model), which has often been used in analyzing time-sharing systems and multiprogrammed computer systems, is investigated. The model studied here has two service stations, a processor (single server) and peripherals (infinite server), and a finite number of customers (or jobs) that have distinct service rates at the processor. The model is in equilibrium. It is shown that, under some condition, the utilization factor of the processor can be obtained in analytic form and is independent of various scheduling disciplines employed at the processor, such as FCFS, generalized processor sharing, and preemptive (resume) and nonpreemptive priority disciplines. Other relevant properties of this model are also shown. The range within which these properties hold is discussed, and some examples are given. Examples of application to multiprogramming and time-sharing systems are given; in particular, it is shown that the often-used dynamic dispatching policy (which gives the higher preemptive priority to the more I/O-oriented job) is optimal within the framework of this multiprogramming model.

Journal ArticleDOI
TL;DR: An integrated approach to production scheduling and materials requirements planning is presented, which discusses existing techniques and suggests how the new method can overcome certain deficiencies.
Abstract: This paper presents an integrated approach to production scheduling and materials requirements planning. It discusses existing techniques and suggests how the new method can overcome certain deficiencies. The method is incorporated in a computer program, and sample outputs for an example problem are given. Some industrial experience is reported.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of non-preemptively scheduling n independent tasks on m identical and parallel machines with the objective of minimizing the overall finishing time, and show that if the execution times are restricted to a fixed number, say k, of different values, then it can be solved in polynomial time.
Abstract: We consider the problem of nonpreemptively scheduling n independent tasks on m identical and parallel machines with the objective of minimizing the overall finishing time. The problem has been shown to be NP-complete in the strong sense and hence there probably will not be any pseudopolynomial time algorithm for solving this problem. We show, however, that if the execution times are restricted to a fixed number, say k, of different values, then it can be solved in polynomial time. Our algorithm can be implemented to run in time $O(\log_2 p \cdot \log_2 m \cdot n^{2(k-1)})$ and space $O(\log_2 m \cdot n^{k-1})$ in the worst case, where p is the largest execution time.
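A brute-force sketch of why a fixed number k of distinct execution times makes the problem tractable on small instances: binary-search the makespan and pack per-machine "configurations" of task counts. This is only an illustration, not the paper's polynomial algorithm.

```python
from functools import lru_cache
from itertools import product

def min_makespan(counts, times, m):
    """Minimum makespan when execution times take only k distinct (integer) values.

    counts[i] tasks have execution time times[i]. A deadline T is binary-searched;
    feasibility of T is checked by memoized enumeration of per-machine configurations
    (how many tasks of each value one machine can finish within T). Small instances only.
    """
    def feasible(T):
        configs = [c for c in product(*(range(n + 1) for n in counts))
                   if sum(ci * ti for ci, ti in zip(c, times)) <= T]

        @lru_cache(maxsize=None)
        def pack(remaining, machines):
            if not any(remaining):
                return True
            if machines == 0:
                return False
            for c in configs:
                if all(ci <= ri for ci, ri in zip(c, remaining)) and \
                        pack(tuple(r - ci for r, ci in zip(remaining, c)), machines - 1):
                    return True
            return False

        return pack(tuple(counts), m)

    lo, hi = max(times), sum(c * t for c, t in zip(counts, times))
    while lo < hi:                        # feasibility is monotone in the deadline T
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

if __name__ == "__main__":
    # 5 tasks of length 3, 4 of length 5, 2 of length 7 (made-up), on 3 machines
    print(min_makespan(counts=(5, 4, 2), times=(3, 5, 7), m=3))
```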

Patent
Nicholas Trufyn
06 May 1982
TL;DR: In this paper, the user actuates real-time resource reallocation in a multi-tasking environment where the operating system builds a process queue against a resource and a new task is interrupt invoked with the dispatcher allocating the resource to the next task in the queue, the queue switching being orthogonal to the dispatcher scheduling of processes.
Abstract: User actuates real time resource reallocation in a multi-tasking environment wherein the operating system builds a process queue against a resource and wherein a new task is interrupt invoked with the dispatcher allocating the resource to the next task in the queue, the queue switching being orthogonal to the dispatcher scheduling of processes.

Journal ArticleDOI
TL;DR: A programming model for job shop scheduling is presented which can consider a multiple-performance system of evaluations and incorporate multiple organizational goals.

Book ChapterDOI
01 Jan 1982
TL;DR: A number of examples of models in which the optimal policies can be determined by polynomial time algorithms while the deterministic counterparts of these models are NP-complete are given.
Abstract: In this paper we consider stochastic scheduling models where all relevant data (like processing times, release dates, due dates, etc.) are independent, exponentially distributed random variables. We are interested in the computational complexity of determining optimal policies for these stochastic scheduling models. We give a number of examples of models in which the optimal policies can be determined by polynomial time algorithms while the deterministic counterparts of these models are NP-complete. We also give some examples of stochastic scheduling models for which there exists no polynomial time algorithm if P ≠ NP.

Journal ArticleDOI
TL;DR: A heuristic method is used to solve the vehicle scheduling problem by maintaining local optimality whilst approaching the feasible region and giving results comparable with the best published algorithms.
Abstract: A heuristic method is used to solve the vehicle scheduling problem by maintaining local optimality whilst approaching the feasible region. Tests with published problems show that the technique gives results comparable with the best published algorithms. The practical requirements of real life scheduling are discussed, and the flexibility of the technique is demonstrated for a complex problem involving weekly cyclical deliveries.

Journal ArticleDOI
TL;DR: The Primal Network Flow Convex (PNFC) code implements this algorithm, and three examples from communication networks that can be solved with PNFC are discussed: solving the area transfer problem; scheduling the collection of traffic data records; and planning the placement of pair-gain systems.
Abstract: Algorithms for finding a minimum-cost, single-commodity flow in a capacitated network are based on variants of the simplex method of linear programming. We describe an implementation of a primal algorithm which is fast and can solve large problems. The major ideas incorporated are (i) the sparsity of the network is used to reduce the time and computer storage space requirements; (ii) basic solutions are stored compactly as spanning trees of the network; (iii) a candidate stack is used to allow flexible strategies in choosing an arc to enter the basis tree; (iv) the predecessor and thread data structures are used to efficiently traverse the tree and to update the solution at each iteration; (v) rules are implemented to avoid cycling or stalling caused by degeneracy; and (vi) piecewise-linear, convex arc costs are handled implicitly. The Primal Network Flow Convex (PNFC) code implements this algorithm and three examples, from communication networks, that can be solved with PNFC are discussed: (i) solving the area transfer problem; (ii) scheduling the collection of traffic data records; and (iii) planning the placement of pair-gain systems.
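PNFC itself is not reproduced here; as a stand-in, the sketch below solves a tiny made-up min-cost flow instance with NetworkX's network_simplex routine, which follows the same primal spanning-tree (basis) idea described in the abstract. It does not handle the piecewise-linear convex arc costs that PNFC treats implicitly.

```python
import networkx as nx

# Tiny capacitated min-cost flow instance (made-up data), solved with a primal
# network simplex method, the same algorithm family the PNFC code implements.
G = nx.DiGraph()
G.add_node("s", demand=-4)          # supply of 4 units
G.add_node("t", demand=4)           # demand of 4 units
G.add_node("a", demand=0)
G.add_node("b", demand=0)
G.add_edge("s", "a", capacity=3, weight=1)
G.add_edge("s", "b", capacity=3, weight=2)
G.add_edge("a", "t", capacity=2, weight=1)
G.add_edge("a", "b", capacity=2, weight=1)
G.add_edge("b", "t", capacity=4, weight=1)

flow_cost, flow_dict = nx.network_simplex(G)
print("minimum cost:", flow_cost)
print("flow:", flow_dict)
```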

Book ChapterDOI
01 Jan 1982
TL;DR: Heuristics which are asymptotically optimal in expectation as the number of jobs in the system increases are analyzed for problems whose second stages are either identical or uniform m-machine scheduling problems.
Abstract: This paper surveys recent results for stochastic discrete programming models of hierarchical planning problems. Practical problems of this nature typically involve a sequence of decisions over time at an increasing level of detail and with increasingly accurate information. These may be modelled by multistage stochastic programmes whose lower levels (later stages) are stochastic versions of familiar NP-hard deterministic combinatorial optimization problems and hence require the use of approximations and heuristics for near-optimal solution. After a brief survey of distributional assumptions on processing times under which SEPT and LEPT policies remain optimal for m-machine scheduling problems, results are presented for various 2-level scheduling problems in which the first stage concerns the acquisition (or assignment) of machines. For example, heuristics which are asymptotically optimal in expectation as the number of jobs in the system increases are analyzed for problems whose second stages are either identical or uniform m-machine scheduling problems. A 3-level location, distribution and routing model in the plane is also discussed.

Proceedings ArticleDOI
18 Aug 1982
TL;DR: This paper provides real-time solutions to the resource allocation problem (that is, it gives distributed algorithms with real time response) and makes essential use of probabilistic techniques as first used by [Rabin, 80b], where processes are allowed to make independent Probabilistic choices.
Abstract: In this paper we consider a resource allocation problem which is local in the sense that the maximum number of users competing for a particular resource at any time instant is bounded and also at any time instant the maximum number of resources that a user is willing to get is bounded. The problem may be viewed as that of achieving matchings in dynamically changing hypergraphs, via a distributed algorithm. We show that this problem is related to the fundamental problem of handshake communication (which can be viewed as achieving matchings in a dynamically changing graph, via distributed algorithms) in that an efficient solution to each of them implies an efficient solution to the other. We provide real-time solutions to the resource allocation problem (that is, we give distributed algorithms with real time response). We make essential use of probabilistic techniques as first used by [Rabin, 80b], where processes are allowed to make independent probabilistic choices. On the other hand, no probability assumptions about the system behavior are made. One of our solutions assumes the existence of an underlying real-time handshake communication system, as described in [Reif, Spirakis, 81]. Our other solution is based on efficient synchronization by flag variables, which are written only by one process and read by at most one other process. The special case of equi-speed processes is first examined. Then we generalize to asynchronous processes. Applications are made to dining philosophers, scheduling and two-phase locking in databases.

Book ChapterDOI
01 Jan 1982
TL;DR: Some of the known results for sequencing identical jobs on parallel machines subject to treelike precedence constraints are presented, and highest-level-first policies are shown to minimize the makespan in the deterministic case.
Abstract: In this paper we present some of the known results for sequencing identical jobs on parallel machines subject to treelike precedence constraints. In-trees and out-trees are discussed. Highest-level-first policies are shown to minimize the makespan in the deterministic case. These policies are not necessarily optimal for stochastic processing times, except for the case of two machines and in-tree precedence constraints.
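A small sketch of a highest-level-first schedule for unit-time jobs under in-tree precedence on m identical machines, in the spirit of the policies discussed above; the instance is made up.

```python
def hlf_schedule(parent, m):
    """Highest-level-first schedule for unit-time jobs with in-tree precedence.

    parent[j] is the successor of job j in the in-tree (None for the root); a job may
    start only after all its predecessors (its children in the tree) are done. At each
    unit time step, up to m ready jobs of greatest level (distance to the root) are run.
    Returns the makespan and the schedule as a list of job lists per time step.
    """
    n = len(parent)
    level = [0] * n
    for j in range(n):                       # level = number of edges to the root
        k = j
        while parent[k] is not None:
            level[j] += 1
            k = parent[k]
    pending_preds = [0] * n
    for j in range(n):
        if parent[j] is not None:
            pending_preds[parent[j]] += 1
    done = [False] * n
    schedule = []
    while not all(done):
        ready = [j for j in range(n) if not done[j] and pending_preds[j] == 0]
        step = sorted(ready, key=lambda j: -level[j])[:m]
        for j in step:
            done[j] = True
            if parent[j] is not None:
                pending_preds[parent[j]] -= 1
        schedule.append(step)
    return len(schedule), schedule

if __name__ == "__main__":
    # made-up in-tree on jobs 0..6; job 6 is the root
    parent = [4, 4, 5, 5, 6, 6, None]
    print(hlf_schedule(parent, m=2))
```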

Journal ArticleDOI
Leslie Jill Miller
01 Apr 1982
TL;DR: A priority-based task management scheduling algorithm is then defined which uses the optimal schedule of the formal model as a parameter, and its performance is simulated.
Abstract: A multiprocessor architecture is proposed which is based on the Multics concept of having all on-line information processor-addressable. All memory management is done by an intelligent paged virtual memory system, and each processor deals only with those segments relevant to its single executing program. The processors are chosen to have different implementations of a single system-wide instruction set, and the problem is to effectively schedule different categories of programs, called task groups, on the dissimilar processors. Average weighted instruction times for each task group on every processor are defined as task/processor suitability measures, and typical values are given for different groups of programs running on IBM 370 models. Through the use of linear programming techniques, an optimal schedule for any such multiprocessor is then defined for the static case where task group loads and task/processor suitability values are known in advance. A priority-based task management scheduling algorithm is then defined which uses the optimal schedule of the formal model as a parameter, and its performance is simulated.
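A small sketch of the static linear-programming step described above, using made-up task/processor suitability values; splitting each task group's load across processors to minimize the busiest processor's finishing time is one plausible reading of the optimal-schedule LP, not necessarily the paper's exact objective.

```python
import numpy as np
from scipy.optimize import linprog

# Toy static scheduling LP: split each task group's instruction load across dissimilar
# processors so the busiest processor finishes as early as possible. The suitability
# matrix t[g][p] (average instruction time of group g on processor p) and the loads
# are made-up numbers for illustration.
t = np.array([[1.0, 1.6, 2.2],
              [2.0, 1.2, 1.8],
              [1.5, 1.4, 0.9]])
load = np.array([100.0, 80.0, 120.0])    # instructions to execute per task group
G, P = t.shape

# variables: x[g*P + p] = load of group g placed on processor p, plus T = makespan
c = np.zeros(G * P + 1)
c[-1] = 1.0                               # minimize T

A_eq = np.zeros((G, G * P + 1))
for g in range(G):
    A_eq[g, g * P:(g + 1) * P] = 1.0      # all of group g's load must be placed
b_eq = load

A_ub = np.zeros((P, G * P + 1))
for p in range(P):
    for g in range(G):
        A_ub[p, g * P + p] = t[g, p]      # busy time of processor p ...
    A_ub[p, -1] = -1.0                    # ... must not exceed T
b_ub = np.zeros(P)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (G * P + 1), method="highs")
print("optimal makespan:", res.fun)
print("assignment (rows = task groups, columns = processors):")
print(res.x[:-1].reshape(G, P).round(1))
```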

Journal ArticleDOI
TL;DR: This paper attempts to resolve the existing confusion concerning missing operations by classifying scheduling problems as null-continuous (NC) or null-discontinuous (NDC), the latter comprising those problems which are not null-continuous.
Abstract: This paper attempts to resolve the existing confusion concerning missing operations. Scheduling problems are classified in two groups: (i) null-continuous (NC)—comprising the problems where an optimal schedule remains optimal on replacement of arbitrarily small processing times (existing operations) with zeros (missing operations); (ii) null-discontinuous (NDC)—comprising those problems which are not null-continuous.