
Showing papers in "Operations Research in 1960"


Journal ArticleDOI
TL;DR: A technique is presented for the decomposition of a linear program that permits the problem to be solved by alternate solutions of linear sub-programs representing its several parts and a coordinating program that is obtained from the parts by linear transformations.
Abstract: A technique is presented for the decomposition of a linear program that permits the problem to be solved by alternate solutions of linear sub-programs representing its several parts and a coordinating program that is obtained from the parts by linear transformations. The coordinating program generates at each cycle new objective forms for each part, and each part generates in turn from its optimal basic feasible solutions new activity columns for the interconnecting program. Viewed as an instance of a "generalized programming problem" whose columns are drawn freely from given convex sets, such a problem can be studied by an appropriate generalization of the duality theorem for linear programming, which permits a sharp distinction to be made between those constraints that pertain only to a part of the problem and those that connect its parts. This leads to a generalization of the Simplex Algorithm, for which the decomposition procedure becomes a special case. Besides holding promise for the efficient computation of large-scale systems, the principle yields a certain rationale for the "decentralized decision process" in the theory of the firm. Formally, the prices generated by the coordinating program cause the manager of each part to look for a "pure" sub-program, the analogue of a pure strategy in game theory, which he proposes to the coordinator as the best he can do. The coordinator finds the optimum "mix" of pure sub-programs, using new proposals and earlier ones, consistent with over-all demand and supply, and thereby generates new prices that in turn generate new proposals from each of the parts, etc. The iterative process is finite.

2,281 citations


Journal ArticleDOI
TL;DR: In this paper, two types of preventive maintenance policies are considered, and the optimum policies are determined, in each case, as unique solutions of certain integral equations depending on the failure distribution.
Abstract: Two types of preventive maintenance policies are considered. A policy is defined to be optimum if it maximizes “limiting efficiency,” i.e., fractional amount of up-time over long intervals. Elementary renewal theory is used to obtain optimum policies. The optimum policies are determined, in each case, as unique solutions of certain integral equations depending on the failure distribution. It is shown that both solutions are also minimum cost solutions when the proper identifications are made. The two optimum policies are compared under certain restrictions.

1,279 citations
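The optimum-age idea above is easy to illustrate numerically. A minimal Python sketch, assuming a Weibull wear-out failure law and made-up downtime figures (none of the numbers come from the paper), grid-searches for the preventive replacement age that maximizes limiting efficiency, the fractional up-time over long intervals:

```python
import math

def weibull_sf(t, k=2.0, lam=1.0):
    # survival function R(t) of a Weibull failure law (k > 1 means wear-out)
    return math.exp(-((t / lam) ** k))

def limiting_efficiency(T, d_planned=0.05, d_failure=0.5, n=2000):
    # expected up-time per renewal cycle: integral of R(t) from 0 to T (trapezoid rule)
    h = T / n
    up = h * (1.0 + weibull_sf(T)) / 2 + h * sum(weibull_sf(i * h) for i in range(1, n))
    # expected down-time per cycle: short planned stop if the unit survives to age T,
    # long unplanned stop if it fails first
    down = d_planned * weibull_sf(T) + d_failure * (1.0 - weibull_sf(T))
    return up / (up + down)

# grid search for the replacement age with the highest limiting efficiency
best_T = max((0.05 * i for i in range(1, 200)), key=limiting_efficiency)
```

With wear-out failures and planned downtime much shorter than failure downtime, the search settles on a finite replacement age; with an exponential failure law it would push toward never replacing preventively, since age then carries no information.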


Journal ArticleDOI
B. Giffler1, G. L. Thompson1
TL;DR: It is shown that it is practical, in problems of small size, to generate the complete set of all active schedules and to pick the optimal schedules directly from this set and, when this is not practical, to sample at random from the set of all active schedules.
Abstract: Algorithms are developed for solving problems to minimize the length of production schedules. The algorithms generate any one, or all, schedules of a particular subset of all possible schedules, called the active schedules. This subset contains, in turn, a subset of the optimal schedules. It is further shown that every optimal schedule is equivalent to an active optimal schedule. Computational experience with the algorithms shows that it is practical, in problems of small size, to generate the complete set of all active schedules and to pick the optimal schedules directly from this set and, when this is not practical, to sample at random from the set of all active schedules and, thus, to produce schedules that are optimal with a probability as close to unity as is desired. The basic algorithm can also generate the particular schedules produced by well-known machine loading rules.

757 citations
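The active-schedule generation step can be sketched compactly. A minimal Python sketch follows, assuming a made-up two-job, two-machine instance and a shortest-processing-time tie-break for the conflict set (the job data and the tie-break rule are illustrative, not from the paper): at each step, the operation that could complete earliest fixes a machine, and one ready operation overlapping it on that machine is scheduled.

```python
def giffler_thompson(jobs, rule=lambda op: op[3]):
    # jobs: {job: [(machine, duration), ...]} in technological order
    nxt = {j: 0 for j in jobs}        # index of each job's next unscheduled operation
    job_free = {j: 0 for j in jobs}   # time at which each job becomes available
    mach_free = {}                    # time at which each machine becomes available
    schedule = []
    while any(nxt[j] < len(ops) for j, ops in jobs.items()):
        # candidates: (earliest start, machine, job, duration) of every job's next op
        cand = []
        for j, ops in jobs.items():
            if nxt[j] < len(ops):
                m, d = ops[nxt[j]]
                cand.append((max(job_free[j], mach_free.get(m, 0)), m, j, d))
        # the operation that could complete earliest determines the machine to load
        est0, m0, j0, d0 = min(cand, key=lambda c: c[0] + c[3])
        c0 = est0 + d0
        # conflict set: ready operations on that machine that would overlap it
        conflict = [c for c in cand if c[1] == m0 and c[0] < c0]
        est, m, j, d = min(conflict, key=rule)   # resolve by the priority rule (SPT here)
        start = max(job_free[j], mach_free.get(m, 0))
        schedule.append((j, m, start, start + d))
        job_free[j] = mach_free[m] = start + d
        nxt[j] += 1
    return schedule

jobs = {"J1": [("M1", 3), ("M2", 2)], "J2": [("M2", 2), ("M1", 4)]}
schedule = giffler_thompson(jobs)   # [(job, machine, start, end), ...]
```

Enumerating all choices in the conflict set, rather than resolving it with one rule, would generate the complete set of active schedules the abstract refers to; sampling the choice at random gives the random-sampling variant.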


Journal ArticleDOI
TL;DR: This formulation of discrete linear programming seems, however, to involve considerably fewer variables than two other recent proposals and on these grounds may be worth some computer experimentation.
Abstract: This is a proposal for the application of discrete linear programming to the typical job-shop scheduling problem—one that involves both sequencing restrictions and also noninterference constraints for individual pieces of equipment. Thus far, no attempt has been made to establish the computational feasibility of the approach in the case of large-scale realistic problems. This formulation seems, however, to involve considerably fewer variables than two other recent proposals [Bowman, E. H. 1959. The schedule-sequencing problem. Opns Res. 7 621–624; Wagner, H. 1959. An integer linear-programming model for machine scheduling. Naval Res. Log. Quart. (June).], and on these grounds may be worth some computer experimentation.

546 citations


Journal ArticleDOI
TL;DR: Criteria are presented for the design of amber signal light phases through whose use such “dilemma zones” can be avoided, in the interest of over-all safety at intersections.
Abstract: A theoretical analysis and observations of the behavior of motorists confronted by an amber signal light are presented. The following problem is discussed: when confronted with an improperly timed amber light phase, a motorist may find himself, at the moment the amber phase commences, in the predicament of being too close to the intersection to stop safely or comfortably and yet too far from it to pass completely through the intersection before the red signal commences. The influence on this problem of the speed of approach to the intersection is analyzed. Criteria are presented for the design of amber signal light phases through whose use such "dilemma zones" can be avoided, in the interest of over-all safety at intersections.

298 citations
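The dilemma-zone argument reduces to two distances: the closest point from which a driver can still stop, and the farthest point from which he can clear the intersection before red. A minimal sketch with illustrative parameter values (reaction time, deceleration, intersection width, and vehicle length are our assumptions, not the paper's figures):

```python
def min_amber_duration(v, a=3.0, delta=1.0, w=15.0, L=5.0):
    """Shortest amber phase (s) leaving no dilemma zone at approach speed v (m/s),
    assuming reaction time delta (s), comfortable deceleration a (m/s^2),
    intersection width w and vehicle length L (m); all values illustrative."""
    return delta + v / (2 * a) + (w + L) / v

def dilemma_zone(v, tau, a=3.0, delta=1.0, w=15.0, L=5.0):
    """Interval of distances from the stop line where the driver can neither stop
    safely nor clear before red; None if the amber duration tau is long enough."""
    x_stop = v * delta + v * v / (2 * a)   # closest distance allowing a safe stop
    x_clear = v * tau - (w + L)            # farthest distance allowing clearance
    return (x_clear, x_stop) if x_stop > x_clear else None
```

At 20 m/s with these values, a 4-second amber leaves a dilemma zone of roughly 60 to 87 m from the stop line, while the duration returned by `min_amber_duration` closes it.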


Journal ArticleDOI
TL;DR: Linear-programming solutions to the assembly-line balancing problem are offered in two forms; feasible solutions depend on work recently presented on integer solutions to linear-programming problems.
Abstract: Linear-programming solutions to the assembly-line balancing problem are offered in two forms. Feasible solutions depend on work recently presented on integer solutions to linear-programming problems. As yet, the computation involved for a practical problem would be quite large.

227 citations


Journal ArticleDOI
TL;DR: In this paper, the authors further develop the approach to the traffic-flow problem based on an integro-differential equation of the Boltzmann type considered by one of them (I. P.) in a recent paper.
Abstract: The approach to the traffic-flow problem based on an integro-differential equation of the Boltzmann type which has been considered by one of us (I. P.) in a recent paper is further developed. The possibility of passing is explicitly introduced into the equation for the velocity distribution function. As in the previous paper, it is shown that at sufficiently high concentration a collective flow process must take place. In order to study more specifically the effects of one car on another, we define reduced n-car distribution functions giving the probability of finding a cluster of n cars all having the same velocity. We derive an equation for the evolution of this distribution function. Study of it yields some information as to the way traffic changes from relatively free flow to completely hindered, "condensed" flow.

210 citations


Journal ArticleDOI
TL;DR: Several methods are available for determining the shortest route through a network and these methods are described in some detail with added remarks as to their relative merits.
Abstract: Several methods are available for determining the shortest route through a network. These methods are described in some detail, with added remarks as to their relative merits. Most of the methods are intended for both manual and digital computation; however, two analog methods are included. Brief mention is also made of the duality between the shortest-route problem and the network capacity problem.

119 citations
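One method of the label-setting kind surveyed here (now usually credited to Dijkstra) is short enough to state in code. A sketch on a made-up three-node network, with the graph as an adjacency list:

```python
import heapq

def shortest_route(graph, src, dst):
    # graph: {node: [(neighbor, length), ...]}; returns (distance, route)
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    route, node = [dst], dst
    while node != src:
        node = prev[node]
        route.append(node)
    return dist[dst], route[::-1]

demo = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)]}
length, route = shortest_route(demo, "a", "c")
```

Each node is labeled permanently in order of increasing distance, so the first time the destination is taken from the queue its label is final.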


Journal ArticleDOI
TL;DR: In this paper, it was shown, through numerical experiments for M × J deterministic flow shops performed on an IBM 704 digital computer, that the schedule times are approximately normally distributed for large numbers of jobs; the meaning of this result in the decision-theoretical problem of sampling for a minimum is discussed.
Abstract: Numerical experiments for M × J deterministic flow shops performed on an IBM 704 digital computer lead to the conclusion that the schedule times are approximately normally distributed for large numbers of jobs. The meaning of this result in the decision-theoretical problem of sampling for a minimum is discussed. Examples of these results for 10 × 100 and 10 × 20 schedules are given.

113 citations
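The experiment is easy to reproduce in miniature: compute the flow-shop schedule time by the usual completion-time recurrence and sample many random job orders. A sketch at the paper's 10 × 100 size, but with uniform random processing times of our own choosing (the paper's data and sample sizes are not reproduced here):

```python
import random

def makespan(p):
    # p[m][j] = processing time of job j on machine m; jobs run in the given order
    M, J = len(p), len(p[0])
    C = [[0.0] * J for _ in range(M)]
    for m in range(M):
        for j in range(J):
            ready_on_machine = C[m][j - 1] if j else 0.0
            ready_from_prev = C[m - 1][j] if m else 0.0
            C[m][j] = max(ready_on_machine, ready_from_prev) + p[m][j]
    return C[-1][-1]

random.seed(1)
M, J = 10, 100
times = [[random.random() for _ in range(J)] for _ in range(M)]
# sample random job orders and collect the resulting schedule times
spans = []
for _ in range(200):
    order = random.sample(range(J), J)
    spans.append(makespan([[times[m][j] for j in order] for m in range(M)]))
```

A histogram of `spans` is what the paper examines; for large J its shape is approximately normal, which is what makes random sampling for a near-minimum schedule tractable.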


Journal ArticleDOI
TL;DR: In this article, the problem of whether or not a search activity should be started and, if started, whether it should be continued, is addressed, and the optimal policies are characterized.
Abstract: This paper deals with the problem of whether or not a search activity should be started and, if started, whether or not it should be continued. This problem suggests a model that is described. The model gives rise to a general functional equation for which existence and uniqueness conditions are given. Several examples are discussed, solutions to the specific functional equations appropriate to the examples are given, and the optimal policies are characterized.

94 citations


Journal ArticleDOI
TL;DR: This paper defines dependability as the probability that a system will be able to operate when needed, and states that most systems are not required to perform their duty constantly, so the system has a different set of failure and repair rates depending upon whether the system is performing its duty, is off, or is warming up.
Abstract: This paper defines dependability as the probability that a system will be able to operate when needed. There are many possible measures of dependability. Three of the most important are pointwise availability, the probability that the system will be operable at a specified instant of time; reliability, the probability that the system will not fail during a given interval of time; and interval availability, the expected fraction of a given interval of time during which the system will be operable. The probability that the system is operable at the start of the interval must be determined to evaluate reliability or interval availability. If the system's failure and repair rates do not change, this probability is easily determined. However, most systems are not required to perform their duty constantly; therefore, the system has a different set of failure and repair rates depending upon whether it is performing its duty, is off, or is warming up. An equation is given to determine the probability that the system is operable at the start of an interval if the system repeats the series of different intervals (duty, off, warming up, etc.) in identical form.

Journal ArticleDOI
TL;DR: The maximum capacity route is defined to be the route between any two given cities (nodes) that allows the greatest flow that differs from the maximum capacity problem in that only the single best route is desired.
Abstract: The maximum capacity route is defined to be the route between any two given cities (nodes) that allows the greatest flow. This differs from the maximum capacity problem [1] in that only the single best route is desired; in the maximum capacity problem, the object is to find the maximum flow between two nodes using as many different routes as needed. The maximum capacity route problem arises in automatic teletype networks where, for any given origin-destination pair, there is only one route specified.
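The single-best-route version admits a label-setting scheme like the shortest-route one, except that routes are compared by their bottleneck link rather than by summed length. A sketch on a made-up three-node network (the example data are ours):

```python
import heapq

def max_capacity_route(graph, src, dst):
    # graph: {node: [(neighbor, link_capacity), ...]}
    # best[v] = largest bottleneck capacity achievable on a route from src to v
    best, prev = {src: float("inf")}, {}
    pq = [(-float("inf"), src)]
    while pq:
        cap, u = heapq.heappop(pq)
        cap = -cap
        if u == dst:
            break
        if cap < best.get(u, 0.0):
            continue                      # stale queue entry, skip
        for v, c in graph.get(u, []):
            nc = min(cap, c)              # bottleneck of the extended route
            if nc > best.get(v, 0.0):
                best[v], prev[v] = nc, u
                heapq.heappush(pq, (-nc, v))
    route, node = [dst], dst
    while node != src:
        node = prev[node]
        route.append(node)
    return best[dst], route[::-1]

demo = {"a": [("b", 5), ("c", 3)], "b": [("c", 4)]}
cap, route = max_capacity_route(demo, "a", "c")
```

In the demo the direct link a-c carries only 3 units, while the two-link route through b carries min(5, 4) = 4, so the latter is the maximum capacity route.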

Journal ArticleDOI
TL;DR: In this article, the Laplace transform of the transient probabilities of the ordered queuing problem with Poisson inputs, multiple channels, and exponential service times is derived for the two-channel case and known equilibrium conditions are shown to hold.
Abstract: In this paper we obtain the Laplace transform of the transient probabilities of the ordered queuing problem, with Poisson inputs, multiple channels, and exponential service times. Explicit expressions are derived for the two-channel case and known equilibrium conditions are shown to hold. The proof proceeds in two stages. The first obtains the Laplace transform of the generating function of the system and the second solves a first order linear partial differential equation in a restricted generating function introduced to determine the Laplace transform of the probability functions appearing in the first generating function. In the two-channel case the solution is decomposed into two components, one of which is immediately related to the well-known solution of the single-channel queue. We also study the problem with different service distributions for the two-channel case and compute the distribution of a busy period for that case.

Journal ArticleDOI
TL;DR: In this article, an elementary description of importance sampling as used in Monte Carlo analyses is given, along with an overview of the statistical sampling procedures that can be used to reduce the required computer time.
Abstract: Some Monte Carlo analyses require hundreds of hours of high speed computer time. Many problems of current interest can not be handled because the computer time required would be too great. Statistical sampling procedures have been developed that greatly reduce the required computer time. Importance sampling is one of these. This paper is an elementary description of importance sampling as used in Monte Carlo analyses.
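As an elementary illustration of the idea, consider estimating a rare-event probability, here the standard normal tail P(X > 4), by sampling from a distribution centered where the event actually occurs and re-weighting each hit by the likelihood ratio. The example and all numbers are ours, not the paper's:

```python
import math, random

def phi(x, mu=0.0):
    # standard-deviation-1 normal density centered at mu
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def tail_prob_importance(n=20000, t=4.0, seed=7):
    # estimate P(X > t) for X ~ N(0,1), sampling from N(t,1) instead
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)                  # draw from the "important" region
        if x > t:
            total += phi(x) / phi(x, mu=t)     # likelihood-ratio weight
    return total / n

estimate = tail_prob_importance()
```

The true value is about 3.17e-5, so crude Monte Carlo with the same 20,000 draws would typically score zero hits; the weighted estimator lands within a few per cent, which is the computer-time saving the paper describes.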

Journal ArticleDOI
TL;DR: It is shown that an “equivalent” system with homogeneous servers does not exist and the error incurred in assigning each server the arithmetic mean of the service rates of all of them is analyzed and illustrated.
Abstract: A "multiple-booth" server system with nonhomogeneous servers is analyzed under the assumption of Poisson distributed arrivals and exponential service times with different individual mean service rates for each server. Explicit expressions for the state probabilities are obtained in closed form under "steady-state" conditions, and the expected length of the waiting line is derived therefrom. It is shown that an "equivalent" system with homogeneous servers does not exist. The error incurred in assigning each server the arithmetic mean of the service rates of all of them is analyzed and illustrated, with the expected number in the system used as the criterion of comparison.

Journal ArticleDOI
TL;DR: One of the papers from the Symposium Case Histories Five Years After, held at the Fourteenth National Meeting of the Society, in St. Louis, Missouri, October 23, 1958 as mentioned in this paper.
Abstract: One of the papers from the Symposium Case Histories Five Years After, held at the Fourteenth National Meeting of the Society, in St. Louis, Missouri, October 23, 1958.

Journal ArticleDOI
TL;DR: The solution of the Akers-Friedman production scheduling problem for the case of two parts and m machines (2 × m) is given and the combination of dynamic programming and graphical approaches makes the method of solution especially effective.
Abstract: The solution of the Akers-Friedman production scheduling problem for the case of two parts and m machines (2 × m) is given. The combination of dynamic programming and graphical approaches makes the method of solution especially effective. Even for large m the solution is very quickly obtained by using the proposed method. The case n × m is discussed and a 3 × 10 example is solved.

Book ChapterDOI
TL;DR: In this paper, a simple two-dimensional Markov process is used to study characteristics of market dynamics, including time relations, period-to-period changes in market shares, gains and losses resulting from promotional activity and rapidity of convergence to new steady state values.
Abstract: Brand preference information combined with a simple two-dimensional Markov process is used to study characteristics of market dynamics. Advertising temporarily alters the brand preference structure of the consuming public. Time relations, period-to-period changes in market shares, gains and losses resulting from promotional activity, and rapidity of convergence to new steady-state values are discussed. Sensitivity characteristics of the relations are commented upon.
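The mechanics of such a two-brand Markov model are simple to sketch: a switching matrix is applied to the share vector period by period until it settles at the new steady state. The transition probabilities below are illustrative, not taken from the study:

```python
def steady_state_share(P, s0, periods=50):
    # P[i][j]: probability a brand-i buyer purchases brand j next period
    s = list(s0)
    history = [tuple(s)]
    for _ in range(periods):
        s = [sum(s[i] * P[i][j] for i in range(len(s))) for j in range(len(s))]
        history.append(tuple(s))
    return s, history

# illustrative switching matrix: promotion raises brand A's retention to 0.8
P = [[0.8, 0.2],
     [0.3, 0.7]]
shares, history = steady_state_share(P, [0.5, 0.5])
```

With these numbers brand A's share converges geometrically (at rate 0.5 per period, the subdominant eigenvalue) from 0.5 toward the steady-state value 0.3 / (0.2 + 0.3) = 0.6; `history` shows the period-to-period changes the abstract mentions.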

Journal ArticleDOI
TL;DR: In this article, the authors developed methods for determining, based on sales data, the duration and timing of the selling seasons, and for forecasting total sales for the season for each individual item in the line, at different probability levels.
Abstract: Because of high obsolescence costs, optimum decisions on the amounts of highly seasonal, styled items to place into inventory in anticipation of customer orders hinge primarily on the probabilities of selling these amounts before the end of the season. In a study of the operations of a textile manufacturer, methods were developed for determining, based on sales data, the duration and timing of the selling seasons, and for forecasting total sales for the season for each individual item in the line, at different probability levels. From these, criteria for weekly re-evaluation of inventories are established.

Journal ArticleDOI
TL;DR: This work considers a closed system with two stages, the first a repair stage and the second an operating stage, of which a maximum of A can be operating or productive at any one time (A operators).
Abstract: The finite queue problem, for which tables exist [Peck, L. G., R. N. Hazelwood. 1958. Finite Queuing Tables. ORSA Publications in Operations Research No. 2. Wiley, New York.], is a special case of the cyclic queue [Koenigsberg, E. 1958. Oper. Res. Quart. 9 22–35]. We consider a closed system with two stages, the first a repair stage and the second an operating stage. There are N machines in the system, of which a maximum of A can be operating or productive at any one time (A operators). When breakdowns occur at a mean rate μ2, the machines enter the repair stage, which has M parallel servers (M repairmen) who service the machines at a mean rate μ1. If μ1 and μ2 are defined by an exponential distribution, then the numbers N, A, M, and μ2/μ1 define the output of the system. When A = N the problem is identical to the Swedish Machine problem, for which tables are already available [Peck, L. G., R. N. Hazelwood. 1958. Finite Queuing Tables. ORSA Publications in Operations Research No. 2. Wiley, New York.].
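With exponential repair and breakdown times the system is a birth-death chain in the number of machines under repair, so the steady-state probabilities follow from the usual product form. A sketch with illustrative values of N, A, M, and μ2/μ1 (not drawn from the cited tables):

```python
def machine_repair_probs(N, A, M, mu1, mu2):
    # state k = number of machines in the repair stage, k = 0..N
    # breakdown rate: min(A, N - k) machines operating, each failing at mu2
    # repair rate:    min(M, k) busy repairmen, each working at mu1
    weights = [1.0]
    for k in range(N):
        lam = min(A, N - k) * mu2
        mu = min(M, k + 1) * mu1
        weights.append(weights[-1] * lam / mu)
    total = sum(weights)
    return [w / total for w in weights]

p = machine_repair_probs(N=5, A=3, M=2, mu1=1.0, mu2=0.2)
# expected number of productive machines (min(A, N - k) are running in state k)
output = sum(min(3, 5 - k) * pk for k, pk in enumerate(p))
```

Setting A = N recovers the Swedish Machine (machine-interference) case the abstract mentions; only the ratio μ2/μ1 matters, as the abstract states, since scaling both rates rescales time without changing the probabilities.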

Journal ArticleDOI
TL;DR: In this paper, an operational study of tunnel-traffic flow is presented, where a bottleneck within the tunnel is shown to cause shock waves which travel backward in the stream and continue to the entrance of the tunnel.
Abstract: An operational study of tunnel-traffic flow is presented. A bottleneck within the tunnel is shown to cause shock waves which travel backward in the stream and continue to the entrance of the tunnel. Flow and density data at the bottleneck are fitted to a fluid-model description of the flow. Analysis of the speeds of successive vehicles indicates that the speeds stabilize at a level where the resultant flow is below the maximum allowable flow at the bottleneck. This speed is such that shock waves develop. Controlling the traffic input by platooning the entering vehicles is shown to eliminate the shock waves and allow higher flows.

Journal ArticleDOI
TL;DR: A slight modification of the existing technique used by Gaver and by Luchak to solve queuing problems with Poisson input and a wide class of service time distributions has been made to obtain the time-dependent solution of the bulk-service queuing problem.
Abstract: A slight modification of the existing technique used by Gaver and by Luchak to solve queuing problems with Poisson input and a wide class of service time distributions has been made to obtain the time-dependent solution of the bulk-service queuing problem. Some known results have been deduced as corollaries, and the advantage of introducing this modification has been pointed out.

Journal ArticleDOI
TL;DR: In this paper, the relation between the distributions of the spacings between the events and the cumulative number of events that occur in a given time interval is explored, and various asymptotic results are obtained for the Poisson, Erlang and Stuttering Poisson processes.
Abstract: The theory of recurrent events is often used in operational models to describe the stochastic repetition of a certain event, for instance, customers arriving at a queue. This paper explores in some detail the relation between the distributions of (1) the spacings between the events, and (2) the cumulative number of events that occur in a given time interval. It is shown that different results are obtained for the second class of distributions, called the state probabilities, and their averages, depending upon the location of the origin of the measurement interval relative to the process. Two situations of interest, starting after an event has occurred and starting "at random," are explored, and various asymptotic results are obtained. Special results for the Poisson, Erlang, and Stuttering-Poisson processes are presented. A condition for the asymptotic normality of the state probabilities is also found.

Journal ArticleDOI
R. F. Rinehart1
TL;DR: In this article, a case study of discrepancies and their causes in a supply facility of an agency of the Federal Government is presented, showing that about 80% of the discrepancies were caused by the discrepancy-correcting procedures themselves.
Abstract: A matter that has received scant attention in the literature of operations research is the role of discrepancies between physical stocks and stocks of record in a supply operation. Discrepancies not only exert a deleterious effect on the performance of the supply function, but may have a serious effect on the realization of benefits to be derived from implementation of theoretically optimal methods of inventory control and procurement or production scheduling. The present paper reports on a case study of discrepancies and their causes in a supply facility of an agency of the Federal Government. Contrary to the general expectation that the massive and complex paper-work system connected with processing of issues and receipts was the principal source of discrepancies, it was possible to determine that about 80 per cent of the discrepancies were caused by the discrepancy-correcting procedures themselves. As a result, two principal targets for further study are clearly delineated: (1) the inventorying process and (2) the stock-balance adjustment procedures. Even on the basis of preliminary study, some ameliorating recommendations were possible.

Journal ArticleDOI
TL;DR: It is shown that wye-delta transformations analogous to those used with electrical networks are available for the maximum-flow problem both with and without node capacities and that for the minimum-route problem dual transformations apply.
Abstract: In network problems such as the maximum-flow problem and the minimum-route problem, it is often desirable to attempt to simplify the given network before applying the various algorithms available for its solution. This is especially true when the maximum flow or minimum route between a number of different pairs of points is desired. Various transformations are discussed that can lead to considerable simplification. In particular, it is shown that wye-delta transformations analogous to those used with electrical networks are available. The application of these transformations to the maximum-flow problem both with and without node capacities is discussed and it is shown that for the minimum-route problem dual transformations apply. The effect of the topological properties of a network on the usefulness of these transformations is examined briefly. The application of the transformations to two networks in the literature is shown.

Journal ArticleDOI
TL;DR: This paper comprises an extension of Model I of a recent paper of Gluss, 1959 Opns.
Abstract: This paper comprises an extension of Model I of a recent paper of Gluss [1959. Opns. Res. 7 468–477], and its purpose is to dictate strategies that minimize the expected cost in time of locating a fault in a complex system of equipment. These strategies are specialized for use with automatic testing equipment. The model assumes that the complex system consists of N modules containing n1, ..., nN elements respectively, that the costs of examining the modules are t1, ..., tN respectively, and that the costs of examining the elements within the rth module are tr1, ..., trnr. It is further assumed that module tests are performed to find which module is faulty before element tests are performed, and that there exist probabilities at each stage that errors of two kinds can be made: (1) the test fails to detect an actual fault in the module or item tested; (2) the test finds a fault that does not exist. The estimation of the probabilities of faults lying in respective modules or elements is performed in a different way from that in Gluss' paper: they are computed from element reliability data by manipulation of their λ parameters, where λ is the element failure rate. Furthermore, consideration is given to fault symptoms, which are supplied by weighting the probabilities according to the symptom information. Because of the anticipated difficulty in obtaining the necessary parameter estimates, the analysis may be most useful for its illumination of the influence the several required estimates have on the optimum search routine.
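The paper's full model includes test errors and symptom weighting; a much-reduced sketch, assuming perfect tests and exactly one fault, still shows the core cost trade-off. Under those simplifying assumptions (ours, not the paper's), inspecting modules in decreasing order of fault-probability-to-cost ratio minimizes the expected search cost, by the standard interchange argument:

```python
def optimal_inspection_order(modules):
    # modules: list of (name, fault_probability, test_cost); perfect tests, one fault
    # sorting by probability/cost descending minimizes expected cost to find the fault
    return sorted(modules, key=lambda m: m[1] / m[2], reverse=True)

def expected_cost(order):
    cost, elapsed = 0.0, 0.0
    for name, p, t in order:
        elapsed += t            # total testing time spent when this module is checked
        cost += p * elapsed     # weighted by the chance the fault is found here
    return cost

mods = [("A", 0.5, 2.0), ("B", 0.3, 1.0), ("C", 0.2, 1.0)]
best = optimal_inspection_order(mods)
```

Here B is tested first despite its lower fault probability, because its ratio 0.3/1.0 beats A's 0.5/2.0; the expected cost of the order B, A, C is 2.6, lower than any other permutation of these three modules.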

Journal ArticleDOI
TL;DR: This paper compares the waiting-time distributions under different queue disciplines of customers in a single-server queuing system with Poisson input and general (independent) service time, and shows that the variance of the waiting-time distribution when the queue discipline is "last-come, first-served" is greater than the corresponding variance when the queue discipline is "first-come, first-served."
Abstract: In this paper we compare the waiting-time distributions under different queue disciplines of customers in a single-server queuing system with Poisson input and general (independent) service time. To make the comparison it is necessary to know the distribution of the unexpended service time at a moment of arrival, and the distribution of busy periods. These are discussed and it is shown that the variance of the waiting-time distribution when the queue discipline is “last-come, first-served” is greater (whatever the service-time distribution) than the corresponding variance when the queue discipline is “first-come, first-served.” The same comparison is also discussed (and is shown to be simpler) for the system with general independent input and negative exponential distribution of service times.

Journal ArticleDOI
TL;DR: A mathematical investigation is presented of the type of queuing problem in which customers arrive at random, form a single queue in order of arrival, and are served in batches, the size of each batch being either a fixed number s of customers or the total number in the queue, whichever is less.
Abstract: Bailey [Bailey, N. T. J. 1954. On queuing processes with bulk service. J. Roy. Stat. Soc. B16 80–87], using the "imbedded-Markov-chain" technique devised by Kendall [Kendall, D. G. 1951. Some problems in the theory of queues. J. Roy. Stat. Soc. B13 151–185; 1953. Stochastic processes occurring in the theory of queues and their analysis by the 'imbedded Markov chain'. Ann. Math. Stat. 24 338–354], presented a mathematical investigation of the type of queuing problem in which customers arrive at random, form a single queue in order of arrival, and are served in batches, the size of each batch being either a fixed number s of customers or the total number in the queue, whichever is less. In the following note, the same problem has been solved by a different method.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the case where customers arrive at a counter at the instants τ1, τ2, …, τn, …, with identically distributed, independent inter-arrival times, and are served by a single server with exponentially distributed service times; the transition probabilities of the queue-size process and the distribution of the busy period are determined.
Abstract: Customers arrive at a counter at the instants τ1, τ2, …, τn, …, where the inter-arrival times τn − τn−1 (n = 1, 2, …; τ0 = 0) are identically distributed, independent, random variables. The customers will be served by a single server. The service times are identically distributed, independent, random variables with exponential distribution. Let ξ(t) denote the queue size at the instant t. If ξ(τn − 0) = k, then a transition Ek → Ek+1 is said to occur at the instant t = τn. The following probabilities are determined: ρ(n)ik = P{ξ(τn − 0) = k ∣ ξ(0) = i + 1}, P*ik(t) = P{ξ(t) = k ∣ ξ(0) = i}, Gn(x) = the probability that a busy period consis...

Journal ArticleDOI
TL;DR: This paper is concerned with the analysis of the reliability of complex systems in which components are used intermittently and which are maintained in operating condition by component replacement.
Abstract: This paper is concerned with the analysis of the reliability of complex systems in which components are used intermittently and which are maintained in operating condition by component replacement. The idea that a failed component causes system failure only when it is called into use is expressed mathematically. Based on component failure distributions and usage properties, the system reliability and expected time to system failure are derived as functions of system age for two different maintenance policies. With both policies, a component is replaced whenever it causes system failure. In the first, this is the only maintenance, while in the second, system check-outs are conducted at fixed intervals and all components which have failed without causing system failure are replaced. The two policies are compared and, for the second, the dependence of system reliability on the maintenance interval is determined.