
Showing papers on "Flow shop scheduling published in 1972"


Journal ArticleDOI
TL;DR: The optimization problem of minimizing completion time in flow-shop sequencing with no intermediate storage is considered; an application in computer systems is pointed out and techniques are developed to solve the problem.
Abstract: The optimization problem of minimizing the completion time in flow-shop sequencing with an environment of no intermediate storage is considered. Application of this problem in computer systems is pointed out and techniques are developed to solve the problem.

247 citations
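The no-intermediate-storage constraint admits a simple makespan computation: once the job permutation is fixed, each job's start on the first machine is delayed just enough that it then flows through all machines without waiting. A minimal sketch under that assumption (illustrative data, not from the paper); small instances can be solved by enumerating permutations:

```python
from itertools import permutations

def min_gap(p_i, p_j):
    """Smallest start-time gap between consecutive jobs i -> j such that
    job j never waits between machines (no intermediate storage)."""
    gap, done_i, done_j = 0, 0, 0
    for m in range(len(p_i)):
        done_i += p_i[m]                 # job i has finished machines 0..m
        gap = max(gap, done_i - done_j)  # j must not catch up with i
        done_j += p_j[m]                 # job j's work on machines 0..m
    return gap

def no_wait_makespan(order, p):
    start = 0
    for a, b in zip(order, order[1:]):
        start += min_gap(p[a], p[b])
    return start + sum(p[order[-1]])

# Illustrative 3-job, 3-machine instance:
p = [[3, 1, 2], [2, 4, 1], [1, 2, 3]]
best = min(permutations(range(3)), key=lambda o: no_wait_makespan(o, p))
```

Because the makespan decomposes into pairwise gaps plus a constant, finding the best permutation is exactly a traveling-salesman computation over the gap matrix.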


Journal ArticleDOI
TL;DR: An algorithm is presented that will minimize the total processing time for a particular case of the n-job, m-machine scheduling problem by modeling it as a traveling-salesman problem and known solution techniques can be employed.
Abstract: This paper presents an algorithm that will minimize the total processing time for a particular case of the n-job, m-machine scheduling problem. In many industrial processes, jobs are processed by a given sequence of machines. Often, once the processing of a job commences, the job must proceed immediately from one machine to the next without encountering any delays en route. The machine sequence need not be the same for all jobs. Because of this processing constraint that prohibits intermediate queues, most normal scheduling techniques are not applicable. This paper obtains a solution to this constrained scheduling problem by modeling it as a traveling-salesman problem; known solution techniques can then be employed. The paper solves a sample problem and discusses computational considerations.

227 citations
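Once such a problem is expressed as a traveling-salesman problem over job-to-job transition costs, any exact TSP method applies. As an illustration of one known solution technique (not necessarily the one used in the paper), a Held-Karp dynamic program for a small asymmetric instance:

```python
from itertools import combinations

def shortest_hamiltonian_path(d):
    """Held-Karp dynamic program: cheapest path visiting every city
    exactly once (any start, any end) over an asymmetric cost matrix d.
    O(n^2 * 2^n) time, so only suitable for small instances."""
    n = len(d)
    # best[(S, j)]: cheapest path through the cities in bitmask S, ending at j
    best = {(1 << j, j): 0 for j in range(n)}
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                prev = S ^ (1 << j)
                best[(S, j)] = min(best[(prev, i)] + d[i][j]
                                   for i in subset if i != j)
    full = (1 << n) - 1
    return min(best[(full, j)] for j in range(n))
```

A path (rather than a tour) is used here because a schedule has a first and a last job, not a cycle.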


Journal ArticleDOI
TL;DR: It is concluded that a successful CPU scheduling method must be preemptive and must prevent a given job from holding the CPU for too long a period.
Abstract: Microscopic level job stream data obtained in a production environment by an event-driven software probe is used to drive a model of a multiprogramming computer system. The CPU scheduling algorithm of the model is systematically varied. This technique, called trace-driven modeling, provides an accurate replica of a production environment for the testing of variations in the system. At the same time alterations in scheduling methods can be easily carried out in a controlled way with cause and effects relationships being isolated. The scheduling methods tested included the best possible and worst possible methods, the traditional methods of multiprogramming theory, round-robin, first-come-first-served, etc., and dynamic predictors. The relative and absolute performances of these scheduling methods are given. It is concluded that a successful CPU scheduling method must be preemptive and must prevent a given job from holding the CPU for too long a period.

74 citations
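The paper's conclusion, that a scheduler must preempt jobs holding the CPU too long, is easy to illustrate with a toy comparison of first-come-first-served against preemptive round-robin (hypothetical burst times; this is not the trace-driven model from the paper):

```python
from collections import deque

def fcfs_completion(bursts):
    """Completion times when jobs run to completion in arrival order."""
    t, finish = 0, []
    for b in bursts:
        t += b
        finish.append(t)
    return finish

def round_robin_completion(bursts, quantum=1):
    """Completion times under preemptive round-robin with a fixed quantum;
    all jobs are assumed to arrive at time 0."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    t = 0
    while ready:
        j = ready.popleft()
        run = min(quantum, remaining[j])
        t += run
        remaining[j] -= run
        if remaining[j] > 0:
            ready.append(j)   # preempted: back of the queue
        else:
            finish[j] = t
    return finish

# One CPU-bound job holding up two short jobs (hypothetical bursts):
bursts = [10, 1, 1]
```

With these bursts, FCFS completes the jobs at times 10, 11, 12, while round-robin finishes the short jobs at 2 and 3 at the cost of delaying the long job to 12, cutting the mean completion time sharply.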


Journal ArticleDOI
TL;DR: In this paper, the authors discuss the nature of the labor assignment problem in job shops, and suggest a procedure for making labor assignments at the time of actual production, using aggregate information on work flow patterns in the shop.
Abstract: The movement of workers among machines is a tactic often employed by job shop managers to break bottlenecks and smooth the flow of work. Despite its common occurrence in industrial practice, the problem has only recently received attention in the management science literature. The purposes of this paper are (1) to discuss the nature of the labor assignment problem in job shops, and (2) to suggest a procedure for making labor assignments at the time of actual production. The significant features of the specific rules suggested are that they use aggregate information on work flow patterns in the shop, and that they make fewer labor transfers than other rules which have been suggested.

37 citations


Journal ArticleDOI
TL;DR: The scheduling problem considered is a special case of the traveling salesman problem, and the algorithm developed has the attractive property of exhibiting a computational complexity on the order of N log N.
Abstract: Suppose a set of N records must be read or written from a drum, fixed-head disk, or similar storage unit of a computer system. The records vary in length and are arbitrarily located on the surface of the drum. The problem considered here is to find an algorithm that schedules the processing of these records with the minimal total amount of rotational latency (access time), taking into account the current position of the drum. This problem is a special case of the traveling salesman problem. The algorithm that is developed has the attractive property of exhibiting a computational complexity on the order of N log N.

29 citations


Journal ArticleDOI
TL;DR: The most significant results of this study are that the shortest-imminent-operation rule is superior to others in reducing job lateness and shop flow time and the GASP-II package works efficiently for large-size shop problems.
Abstract: The purpose of this paper is to report on a study which involves a simulation of a hypothetical job shop with several machines. The investigation employs GASP-II as a computer language. This simulation study is concerned with: (1) testing a new method of assigning job due-dates and (2) comparing and evaluating the effect of different processing-time distributions on the performance of a number of scheduling rules. The most significant results of this study are: (1) the shortest-imminent-operation rule is superior to others in reducing job lateness and shop flow time; (2) the procedure in which the due-date allowance is proportional to the number of operations and work content of the jobs has proved to be beneficial in the case of the non-due-date rules; (3) the operation of a job shop using the shortest-imminent-operation and slack-per-remaining-number-of-operations rules is degraded when the processing-time distribution has Erlang parameter K equal to 4 or 8. However, performance is better when K = 8 tha...

29 citations
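The shortest-imminent-operation rule is the job-shop analogue of shortest-processing-time dispatching, whose flow-time benefit is easy to verify on a single machine. A toy check (illustrative data only):

```python
def mean_flow_time(order, proc):
    """Mean flow time of a single-machine sequence, all jobs ready at 0.
    proc[j] is the processing time of job j."""
    t, total = 0, 0
    for j in order:
        t += proc[j]      # completion time of job j
        total += t
    return total / len(proc)

proc = [7, 2, 5, 1]                                    # illustrative times
spt = sorted(range(len(proc)), key=lambda j: proc[j])  # shortest job first
```

Here the SPT order completes jobs at times 1, 3, 8, 15 (mean 6.75), whereas the index order completes them at 7, 9, 14, 15 (mean 11.25); on one machine SPT is provably optimal for mean flow time.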


Journal ArticleDOI
TL;DR: In this paper, the authors examine the relationship between the problems of priority scheduling and inventory control and describe several priority scheduling rules that use inventory information in making machine scheduling decisions, such as Minimum Slack Time Per Remaining Operation, Critical Ratio, and a modification of the Shortest Processing Time Rule.
Abstract: This paper examines the relationship between the problems of priority scheduling and inventory control and describes several priority scheduling rules that use inventory information in making machine scheduling decisions. These rules include: Minimum Slack Time Per Remaining Operation, Critical Ratio, and a modification of the Shortest Processing Time Rule. Simulation experiments evaluate the gain in shop and inventory performance resulting from the inclusion of inventory data in scheduling rules. The results indicate that an increase in the timeliness of inventory information for scheduling purposes may not lead to improved performance.

25 citations
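A common formulation of the Critical Ratio rule (the paper's exact variant is not given in the abstract) divides the time remaining until the due date by the work remaining, and dispatches the job with the smallest ratio:

```python
def critical_ratio(now, due_date, work_remaining):
    """Critical Ratio: time remaining over work remaining.
    A ratio below 1 means the job is already behind schedule."""
    return (due_date - now) / work_remaining

def pick_next(now, queue):
    """Dispatch the queued job with the smallest critical ratio.
    queue: list of (job_id, due_date, work_remaining) tuples."""
    job_id, due, work = min(queue, key=lambda j: critical_ratio(now, j[1], j[2]))
    return job_id
```

For example, at time 10 a job due at 14 with 5 units of work left (ratio 0.8) outranks a job due at 20 with the same work left (ratio 2.0).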


Proceedings ArticleDOI
05 Dec 1972
TL;DR: Job scheduling strategies that give high throughput are observed to be more sensitive to the choice of CPU scheduling method than those that yield relatively low throughput. In virtually all previous work, the emphasis has been on improving CPU utilization.
Abstract: There have been very few systematic studies of the effect on system performance of strategies for scheduling jobs for execution in a multi-programming system. Most of this work has been concerned with empirical efforts to obtain job mixes which effectively utilize the central processor. These efforts are frequently carried out in commercial or production oriented installations where the job load consists of a relatively few jobs whose internal characteristics can be well determined. This approach is not feasible in an environment where internal job characteristics are not known before run time, or where internal job characteristics may vary rapidly. Such circumstances are often the case in an industrial or research laboratory or in a university computer center. This study uses as its measures for determining job scheduling strategies such quantities as are frequently known or can be accurately estimated such as amount of core memory required, processor service time required, etc. The specific job scheduling strategies used include first-come-first-serve (FCFS), shortest processor service time first (STF), smallest cost (cost = core size X processor service time) first (SCF), and smallest memory requirement first (SMF). We evaluated both preemptive resume and non-preemptive job scheduling. It is typical of virtually all of the previous work that the emphasis has been on improving CPU utilization. There may often be other goals which are more useful measures of performance such as throughput (job completion rate per unit time), the expected wait time before completion of a given class of job, the utilization of I/O resources, etc. We collected several measures of system performance including all of those listed previously to assess the effects of job scheduling. There has been very little previous study of the interaction between job scheduling and CPU scheduling. 
We systematically vary CPU scheduling algorithms in conjunction with alteration of job scheduling strategies. Those job scheduling strategies which give high throughput are characteristically observed to be more sensitive to CPU scheduling methods than those which yield relatively low throughput. We do not, however, attempt to correlate job scheduling methods with internal job characteristics such as CPU burst time, etc. We did, however, consider the effect of skewed CPU burst time distribution on performance under different pairs of strategies.

9 citations
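The four job scheduling strategies compared in the study reduce to four sort keys over quantities assumed known before run time. A sketch (field names are illustrative, not from the paper):

```python
def order_jobs(jobs, strategy):
    """Order a batch of jobs under one of the four strategies from the
    study. jobs: list of dicts with 'arrival', 'cpu' (service time),
    and 'core' (memory requirement)."""
    keys = {
        "FCFS": lambda j: j["arrival"],          # first-come-first-served
        "STF":  lambda j: j["cpu"],              # shortest service time first
        "SCF":  lambda j: j["core"] * j["cpu"],  # smallest cost first
        "SMF":  lambda j: j["core"],             # smallest memory first
    }
    return sorted(jobs, key=keys[strategy])

jobs = [
    {"arrival": 0, "cpu": 8, "core": 2},
    {"arrival": 1, "cpu": 3, "core": 6},
    {"arrival": 2, "cpu": 5, "core": 1},
]
```

Note that the same three jobs are admitted in a different order under each strategy, which is what makes the interaction with the CPU scheduler worth measuring.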


Journal ArticleDOI
TL;DR: Main memory and intermemory data transmission rate requirements of two-level memory multiprocessor systems are characterized; the notion of an optimal pair (M, C) is introduced and the optimal pairs are found.
Abstract: Main memory and intermemory data transmission rate requirements of two-level memory multiprocessor systems are characterized. Operational software is modeled by computation graphs. For a fixed processing schedule, independent bounds on the main memory M and the data transmission rate C, required to insure that the processing will follow the specified schedule, are determined. Tradeoffs between M and C are completely specified by determination of the family of (M, C) pairs, any member of which will guarantee that the fixed schedule will be followed. The notion of an optimal pair (M, C) is introduced and the optimal pairs are also found. The interrelationship between values of (M, C) pairs and schedules is briefly explored through modification of specified schedules. KEY WORDS AND PHRASES: memory requirement, multiprocessor system, multilevel memory, computation graph, task, schedule, memory occupation graph, Gantt chart. CR CATEGORIES: 4.32, 5.32, 8.1

7 citations


Journal ArticleDOI
TL;DR: In this paper, a controlled experiment was designed to illustrate one scheduling problem characteristic which accounts for the superiority of model decisions over intuitive decisions, and the relative superiority increases as the time-horizon complexity increases.
Abstract: The performance of intuitive aggregate scheduling is compared with the scheduling performance of a mathematical model. A controlled experiment was designed to illustrate one scheduling problem characteristic which accounts for the superiority of model decisions over intuitive decisions. This relative superiority increases as the time-horizon complexity increases. The horizon complexity is a manifestation of the cost structure of the decision setting.

7 citations


01 Dec 1972
TL;DR: In this article, a branch-and-bound algorithm is presented for the sequencing problem with no precedence relationships among jobs. The computational experience is very encouraging: the computer times required to solve the problems are very short, and most problems reach optimality at the early stages of computation.
Abstract: Several theoretical results are developed to obtain an efficient branch-and-bound algorithm for the sequencing problem when all jobs are available to process at time zero and are independent (i.e., there are no precedence relationships among jobs). The branch-and-bound algorithm and its computational results are given for the case of linear penalty functions. The computational experiences are very encouraging. The computer times required to solve the problems are very short and most problems become optimal at the early stages of computation. (Author)
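The branch-and-bound idea for this class of problem can be sketched as a depth-first enumeration of partial sequences, pruned whenever the accumulated penalty already matches or exceeds the best complete sequence found. This uses a naive bound, not the paper's theoretical results:

```python
def min_total_penalty(p, penalty):
    """Depth-first branch and bound for one-machine sequencing.
    p[j]: processing time of job j; penalty(j, C): nonnegative,
    nondecreasing penalty of job j completing at time C."""
    best = [float("inf")]

    def extend(remaining, t, cost):
        if cost >= best[0]:          # bound: partial cost already too high
            return
        if not remaining:
            best[0] = cost
            return
        for j in list(remaining):    # branch on the next job in sequence
            remaining.remove(j)
            extend(remaining, t + p[j], cost + penalty(j, t + p[j]))
            remaining.add(j)

    extend(set(range(len(p))), 0, 0)
    return best[0]

# Linear (weighted-tardiness) penalties on a toy instance:
p, due, w = [3, 2, 4], [4, 2, 9], [2, 3, 1]
best_cost = min_total_penalty(p, lambda j, C: w[j] * max(0, C - due[j]))
```

The pruning is valid because the penalties are nonnegative, so the cost of a partial sequence can only grow as it is extended.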


Journal ArticleDOI
C. C. New1
TL;DR: A totally new approach to the setting of batch quantities and production lead times is described in which both are set together in relation to the work-load on the shop.
Abstract: A totally new approach to the setting of batch quantities and production lead times is described in which both are set together in relation to the work-load on the shop. The method is practical computationally and has application in many companies faced with a "repeat-order" situation. Simulation studies of a 75-machine job-shop are described in which use of the decision rules developed showed considerable improvement over the existing practice. The system is particularly applicable in situations where it is not feasible to use work-sequencing systems on individual machine groups, since it works by the control of inlet times only. Although the heuristic developed was tested using simulation methods all data are realistic and relate to the particular shop for which it was designed.

01 Oct 1972
TL;DR: Two computer methods for industrial optimization of machining conditions are described and demonstrated, which are designed to refine the initial data input with shop test data obtained during normal production, as related to one or more of three production objectives.
Abstract: Two computer methods for industrial optimization of machining conditions are described and demonstrated. The performance index method requires only shop data for machining time, number of pieces produced, and number of tool changes. The production optimization method requires tool life, time, and cost data. Both are designed to refine the initial data input with shop test data obtained during normal production, as related to one or more of three production objectives: minimum unit cost, maximum production rate and maximum profit rate. The computer programs are constructed for use by shop personnel with little knowledge of mathematics or computers. Both methods are rapid and economical, and the programs can be processed by either in-plant or remote computer facilities. The user is given all information needed to install the programs and adapt them to his purposes.


11 Aug 1972
TL;DR: In this paper, the authors consider the problem of sequencing n jobs in a simple two-machine flow shop with the objective of finding a schedule with stochastically smallest makespan.
Abstract: The authors consider the problem of sequencing n jobs in a simple two-machine flow-shop with the objective of finding a schedule with stochastically smallest makespan. Results are derived under a special stochastic structure for the processing time distributions. (Author)
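For the deterministic counterpart of this problem, Johnson's rule gives the minimum-makespan schedule on two machines, which makes a useful point of comparison (the stochastic results in the paper are more general):

```python
def johnson_order(jobs):
    """Johnson's rule for the deterministic two-machine flow shop:
    jobs faster on machine A go first (increasing A-time), the rest go
    last (decreasing B-time). jobs: list of (a_time, b_time) pairs."""
    front = sorted((j for j in range(len(jobs)) if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j in range(len(jobs)) if jobs[j][0] > jobs[j][1]),
                  key=lambda j: jobs[j][1], reverse=True)
    return front + back

def makespan(order, jobs):
    """Makespan of a permutation schedule on two machines in series."""
    t1 = t2 = 0
    for j in order:
        t1 += jobs[j][0]                  # machine A finishes job j
        t2 = max(t2, t1) + jobs[j][1]     # machine B waits for A if needed
    return t2

jobs = [(3, 2), (1, 4), (5, 1), (2, 3)]   # illustrative processing times
```

On this instance Johnson's order achieves makespan 12, which matches the simple lower bound of total machine-A work plus the last job's machine-B time.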

01 Feb 1972
TL;DR: Two algorithms are formulated for scheduling n jobs through a single facility to minimize the number of late jobs when set-up times are sequence dependent: a matrix algorithm for jobs processed in first-come, first-served order, and a branch-and-bound technique that arrives at an optimal solution with no restrictions on the sequence used.
Abstract: Two algorithms have been formulated for scheduling n jobs through a single facility to minimize the number of late jobs when set-up times are sequence dependent. The first is a simple matrix algorithm which solves the problem when jobs must be processed in first-come, first-served (FCFS) order. The second is a branch-and-bound technique which arrives at an optimal solution with no restrictions on the sequence used. Both algorithms are demonstrated by examples. (Author)
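Without sequence-dependent set-up times, minimizing the number of late jobs on one machine is solved exactly by the Moore-Hodgson algorithm; the set-up dependence is what forces the branch and bound in the paper. A sketch of the unconstrained case:

```python
import heapq

def max_on_time(jobs):
    """Moore-Hodgson: maximum number of jobs completed by their due dates
    on a single machine, with no set-up times. jobs: (proc_time, due_date)."""
    kept, t = [], 0                    # max-heap (negated) of kept jobs
    for proc, due in sorted(jobs, key=lambda j: j[1]):  # earliest due first
        heapq.heappush(kept, -proc)
        t += proc
        if t > due:                    # a deadline slips: drop the longest job
            t += heapq.heappop(kept)   # heap stores -proc, so this subtracts
    return len(kept)                   # jobs still scheduled on time
```

Dropping the longest kept job whenever a deadline slips restores feasibility while sacrificing as little future slack as possible, which is why the greedy choice is optimal here.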

Journal ArticleDOI
TL;DR: The authors are sure that the separation of the definition of shops from that of problems makes it possible to systemize various job shop problems and to utilize scheduling theory for system design.
Abstract: A definite description of the flow shop scheduling problems and their analytic results are presented here. A job shop can be regarded as consisting of three kinds of components, i.e. the work station, the intermediate pool and the transport way. The reverse flow shop is defined as a shop having a flow direction opposite to that of the primal flow shop. It makes our problems more practical and more flexible to introduce two more concepts; one is the initial condition and the other is the terminal condition. They represent the desired shapes at the beginning and at the end of the Gantt chart of the solution, respectively. Also, a generalized objective function is adopted, which can be used to estimate the idling cost of any set of work stations, the rate of operations in the shop, etc. On the basis of these ideas, two types of deterministic scheduling problems (the Primal Problems) and their Reverse Problems (the same types of problems concerned with the reverse flow shop) are defined. The authors are sure that the separation of the definition of shops from that of problems makes it possible to systemize various job shop problems and to utilize scheduling theory for system design. The main results can be stated briefly as follows. (1) A theorem which gives one of the principal methods to get the lower bounds of a class of functions including the objective function adopted. (2) Four theorems concerned with the properties of the solutions and the structures of our generalized primal problems. (3) Two theorems stating the mutuality of the solutions of the primal and the reverse problems. These results would be very useful for the construction of the effective algorithms for the solution of the scheduling problems.


Proceedings ArticleDOI
01 Dec 1972
TL;DR: Methods of adaptive control and pattern recognition are applied; simulation studies indicate that the scheduler was able to adapt to changing workloads and significantly improved turnaround times.
Abstract: This research is directed toward the development of a scheduling algorithm for large digital computer systems. To meet this goal, methods of adaptive control and pattern recognition are applied. Simulation studies indicated that the scheduler was able to adapt to changing workloads and it improved the turnaround times significantly.