
Showing papers in "Operations Research in 1985"


Journal ArticleDOI
David E. Bell1
TL;DR: The implications of disappointment, a psychological reaction caused by comparing the actual outcome of a lottery to one's prior expectations, are explored for decision making under uncertainty; explicit recognition that decision makers may be paying a premium to avoid potential disappointment provides an interpretation for some known behavioral paradoxes.
Abstract: Decision analysis requires that two equally desirable consequences should have the same utility and vice versa. Most analyses of financial decision making presume that two consequences with the same dollar outcome will be equally preferred. However, winning the top prize of $10,000 in a lottery may leave one much happier than receiving $10,000 as the lowest prize in a lottery. This paper explores the implications of disappointment, a psychological reaction caused by comparing the actual outcome of a lottery to one's prior expectations, for decision making under uncertainty. Explicit recognition that decision makers may be paying a premium to avoid potential disappointment provides an interpretation for some known behavioral paradoxes, and suggests that decision makers may be sensitive to the manner in which a lottery is resolved. The concept of disappointment is integrated into utility theory in a prescriptive model.
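As a rough illustration of the idea (not Bell's prescriptive model itself), one can penalize each outcome of a lottery by its shortfall below the lottery's prior expectation; the disappointment coefficient d and elation coefficient e below are assumed illustrative parameters:

```python
def disappointment_value(outcomes, probs, d=0.5, e=0.0):
    """Expected value of a lottery when each outcome is adjusted by a
    disappointment penalty d (and optionally an elation bonus e) relative
    to the lottery's prior expectation. Illustrative sketch only."""
    expectation = sum(p * x for p, x in zip(probs, outcomes))
    adjusted = [
        x - d * max(0.0, expectation - x) + e * max(0.0, x - expectation)
        for x in outcomes
    ]
    return sum(p * a for p, a in zip(probs, adjusted))

# A 50/50 lottery over $0 and $10,000: a disappointment-averse decision
# maker values it below its $5,000 expectation.
ev = disappointment_value([0, 10_000], [0.5, 0.5], d=0.0)  # plain expectation
dv = disappointment_value([0, 10_000], [0.5, 0.5], d=0.5)  # with disappointment
print(ev, dv)
```

The gap between the two values is the premium such a decision maker would pay to avoid potential disappointment.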

1,203 citations


Journal ArticleDOI
TL;DR: In this article, the authors considered a class of M/G/1 queueing models with a server who is unavailable for occasional intervals of time and showed that the stationary number of customers present in the system at a random point in time is distributed as the sum of two or more independent random variables.
Abstract: This paper considers a class of M/G/1 queueing models with a server who is unavailable for occasional intervals of time. As has been noted by other researchers, for several specific models of this type, the stationary number of customers present in the system at a random point in time is distributed as the sum of two or more independent random variables, one of which is the stationary number of customers present in the standard M/G/1 queue (i.e., the server is always available) at a random point in time. In this paper we demonstrate that this type of decomposition holds, in fact, for a very general class of M/G/1 queueing models. The arguments employed are both direct and intuitive. In the course of this work, moreover, we obtain two new results that can lead to remarkable simplifications when solving complex M/G/1 queueing models.
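A classic special case of this decomposition is the M/G/1 queue with server vacations, where the mean delay is the standard Pollaczek-Khinchine delay plus an independent residual-vacation term. A sketch under that special-case assumption (the paper's class is more general):

```python
def mg1_vacation_wait(lam, ES, ES2, EV, EV2):
    """Mean waiting time in an M/G/1 queue with (multiple) server
    vacations, illustrating the decomposition: the standard
    Pollaczek-Khinchine wait plus an independent vacation term."""
    rho = lam * ES
    assert rho < 1, "queue must be stable"
    wq_mg1 = lam * ES2 / (2 * (1 - rho))   # standard M/G/1 waiting time
    vacation = EV2 / (2 * EV)              # mean residual vacation time
    return wq_mg1 + vacation

# Exponential service with mean 0.5 (so E[S^2] = 2 * 0.5**2 = 0.5) and
# deterministic vacations of length 1 (E[V^2] = 1). Values illustrative.
w = mg1_vacation_wait(lam=1.0, ES=0.5, ES2=0.5, EV=1.0, EV2=1.0)
print(w)
```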

664 citations


Journal ArticleDOI
TL;DR: Decomposition and partitioning methods are developed for solving multistage stochastic linear programs, which model problems in financial planning, dynamic traffic assignment, economic policy analysis, and many other applications.
Abstract: Multistage stochastic linear programs model problems in financial planning, dynamic traffic assignment, economic policy analysis, and many other applications. Equivalent representations of such problems as deterministic linear programs are, however, excessively large. This paper develops decomposition and partitioning methods for solving these problems and reports on computational results on a set of practical test problems.

608 citations


Journal ArticleDOI
TL;DR: A Lagrangean relaxation of a zero-one integer programming formulation of the problem of cutting a number of rectangular pieces from a single large rectangle is developed and used as a bound in a tree search procedure.
Abstract: We consider the two-dimensional cutting problem of cutting a number of rectangular pieces from a single large rectangle so as to maximize the value of the pieces cut. We develop a Lagrangean relaxation of a zero-one integer programming formulation of the problem and use it as a bound in a tree search procedure. Subgradient optimization is used to optimize the bound derived from the Lagrangean relaxation. Problem reduction tests derived from both the original problem and the Lagrangean relaxation are given. Incorporating the bound and the reduction tests into a tree search procedure enables moderately sized problems to be solved.
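The bounding idea, relaxing a complicating constraint into the objective with a multiplier and tightening the multiplier by subgradient steps, can be sketched on a small 0-1 knapsack rather than the paper's cutting formulation. The instance and step sizes below are illustrative:

```python
def lagrangian_knapsack_bound(values, weights, capacity, iters=100):
    """Upper bound on a 0-1 knapsack optimum via Lagrangean relaxation of
    the capacity constraint, tightened by subgradient optimization.
    A generic sketch of the bounding technique, not the paper's model."""
    lam, best = 0.0, float("inf")
    for k in range(1, iters + 1):
        # The relaxed problem separates by item: take i iff v_i - lam*w_i > 0.
        x = [1 if v - lam * w > 0 else 0 for v, w in zip(values, weights)]
        bound = sum(xi * (v - lam * w)
                    for xi, v, w in zip(x, values, weights)) + lam * capacity
        best = min(best, bound)  # keep the tightest (smallest) upper bound
        g = capacity - sum(xi * w for xi, w in zip(x, weights))  # subgradient
        lam = max(0.0, lam - (1.0 / k) * g)  # step toward a smaller bound
    return best

b = lagrangian_knapsack_bound([10, 7, 5], [4, 3, 2], 5)
print(round(b, 2))
```

On this instance the best feasible knapsack value is 12, and the Lagrangean bound converges toward the LP relaxation value of 12.5.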

467 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed an analytic method for minimizing the cost of distributing freight by truck from a supplier to many customers, and derived formulas for transportation and inventory costs, and determined the optimal trade-off between these costs.
Abstract: This paper develops an analytic method for minimizing the cost of distributing freight by truck from a supplier to many customers. It derives formulas for transportation and inventory costs, and determines the optimal trade-off between these costs. The paper analyzes and compares two distribution strategies: direct shipping (i.e., shipping separate loads to each customer) and peddling (i.e., dispatching trucks that deliver items to more than one customer per load). The cost trade-off in each strategy depends on shipment size. Our results indicate that, for direct shipping, the optimal shipment size is given by the economic order quantity (EOQ) model, while for peddling, the optimal shipment size is a full truck. The peddling cost trade-off also depends on the number of customers included on a peddling route. This trade-off is evaluated analytically and graphically. The focus of this paper is on an analytic approach to solving distribution problems. Explicit formulas are obtained in terms of a few easily measurable parameters. These formulas require the spatial density of customers, rather than the precise locations of every customer. This approach simplifies distribution problems substantially while providing sufficient accuracy for practical applications. It allows cost trade-offs to be evaluated quickly using a hand calculator, avoiding the need for computer algorithms and mathematical programming techniques. It also facilitates sensitivity analyses that indicate how parameter value changes affect costs and operating strategies.
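The direct-shipping result reduces to the classical EOQ trade-off between fixed shipment cost and holding cost; a minimal sketch, with illustrative parameter values rather than ones from the paper:

```python
import math

def eoq(demand_rate, fixed_cost, holding_cost):
    """Classical economic order quantity: the shipment size that balances
    fixed transportation cost per shipment against inventory holding cost.
    Parameter values below are illustrative."""
    return math.sqrt(2 * fixed_cost * demand_rate / holding_cost)

# Demand of 1,000 items/year, $50 fixed cost per shipment,
# $4 holding cost per item per year.
q = eoq(1000, 50, 4)
print(round(q, 1))  # 158.1
```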

420 citations


Journal ArticleDOI
TL;DR: Regenerative theory is used to derive certain relations between the steady-state probabilities of a Markov chain, which are then used to develop a numerically stable modification of the Gauss-Jordan method for computing these probabilities.
Abstract: We apply regenerative theory to derive certain relations between steady state probabilities of a Markov chain. These relations are then used to develop a numerical algorithm to find these probabilities. The algorithm is a modification of the Gauss-Jordan method, in which all elements used in numerical computations are nonnegative; as a consequence, the algorithm is numerically stable.
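A well-known state-reduction scheme with exactly this property (every intermediate quantity is nonnegative, so no subtractive cancellation occurs) can serve as a sketch of the idea, though it may differ in detail from the paper's algorithm:

```python
def steady_state(P):
    """Steady-state probabilities of a finite Markov chain by state
    reduction; only additions, multiplications, and divisions of
    nonnegative quantities are used, so the scheme is numerically stable.
    A sketch in the spirit of the paper's algorithm."""
    n = len(P)
    P = [row[:] for row in P]                  # work on a copy
    for k in range(n - 1, 0, -1):
        s = sum(P[k][:k])                      # rate from state k to lower states
        for i in range(k):
            P[i][k] /= s
        for i in range(k):
            for j in range(k):
                P[i][j] += P[i][k] * P[k][j]   # censor out state k
    x = [1.0] + [0.0] * (n - 1)
    for k in range(1, n):                      # back-substitute
        x[k] = sum(x[i] * P[i][k] for i in range(k))
    total = sum(x)
    return [v / total for v in x]

pi = steady_state([[0.9, 0.1], [0.5, 0.5]])
print(pi)  # stationary distribution [5/6, 1/6]
```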

347 citations


Journal ArticleDOI
TL;DR: An integer linear programming algorithm for vehicle routing problems involving capacity and distance constraints using constraint relaxation and a new class of subtour elimination constraints is described.
Abstract: This paper describes an integer linear programming algorithm for vehicle routing problems involving capacity and distance constraints. The method uses constraint relaxation and a new class of subtour elimination constraints. Two versions of the algorithm are presented, depending upon the nature of the distance matrix. Exact solutions are obtained for problems involving up to sixty cities.

317 citations


Journal ArticleDOI
TL;DR: This paper presents a new branch and bound algorithm for the single machine total weighted tardiness problem that obtains lower bounds using a Lagrangian relaxation approach with subproblems that are total weighted completion time problems.
Abstract: This paper presents a new branch and bound algorithm for the single machine total weighted tardiness problem. It obtains lower bounds using a Lagrangian relaxation approach with subproblems that are total weighted completion time problems. The well-known subgradient optimization technique is replaced by a multiplier adjustment method that leads to an extremely fast bound calculation. The method incorporates various devices for checking dynamic programming dominance in the search tree. Extensive computational results for problems with up to 50 jobs show the superiority of the algorithm over existing methods.

283 citations


Journal ArticleDOI
TL;DR: The paper derives conditions to determine whether a pair of alternatives can be ranked given the partial information about weighting constants, and presents an algorithm that partially rank-orders the complete set of alternatives based on the pairwise ranking information.
Abstract: A method is presented for ranking multiattributed alternatives using a weighted-additive evaluation function with partial information about the weighting (scaling) constants. The method is applied to evaluate materials for use in nuclear waste containment. The paper derives conditions to determine whether a pair of alternatives can be ranked given the partial information about weighting constants, and presents an algorithm that partially rank-orders the complete set of alternatives based on the pairwise ranking information.

278 citations


Journal ArticleDOI
TL;DR: A model and algorithm that can be used to find consistent and realistic reorder intervals for each item in large-scale production-distribution systems is presented and the optimal solution can be found using the proposed algorithm, which is a polynomial time algorithm.
Abstract: The objective of this paper is to present a model and algorithm that can be used to find consistent and realistic reorder intervals for each item in large-scale production-distribution systems. We assume such systems can be represented by directed acyclic graphs. Demand for each end item is assumed to occur at a constant and continuous rate. Production is instantaneous and no backorders are allowed. Both fixed setup costs and echelon holding costs are charged at each stage. We limit our attention to nested and stationary policies. Furthermore, we restrict the reorder interval for each stage to be a power of 2 times a base planning period. The model that results from these assumptions is an integer nonlinear programming problem. The optimal solution can be found using the proposed algorithm, which is a polynomial time algorithm. A real world example is given to illustrate the procedure.
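For a single stage, the effect of restricting the reorder interval to a power of 2 times a base planning period is easy to sketch; the cost function and parameter values below are the standard single-stage ones, not the paper's joint multi-stage model:

```python
def best_power_of_two_interval(K, h, base=1.0, k_range=range(-5, 10)):
    """Choose the reorder interval T = 2**k * base minimizing the standard
    single-stage average cost K/T + h*T/2 (setup plus holding).
    Single-stage sketch only; the paper optimizes all stages jointly."""
    def cost(T):
        return K / T + h * T / 2
    return min((2 ** k * base for k in k_range), key=cost)

# Continuous optimum is T* = sqrt(2K/h) = sqrt(2*100/8) = 5; power-of-two
# rounding is known to cost at most about 6% more than that optimum.
T = best_power_of_two_interval(K=100.0, h=8.0)
print(T)  # 4.0
```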

258 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the mixed zero-one problem with continuous variables, and derive two classes of facet-defining linear inequalities of the convex hull of X, and show that these facets can be used as cutting planes to strengthen the formulations of certain mixed integer problems.
Abstract: Many problems in the Operations Research/Management Science literature can be formulated with both zero-one and continuous variables. However, the exact optimization of such mixed zero-one models remains a computational challenge. In this paper, we propose to study mixed problems from a mathematical point of view that is similar in spirit to recent research on purely combinatorial problems that has investigated systems of defining linear inequalities or facets of the underlying polytope. At least two numerical studies have validated this line of research computationally, and the advances in the problem-solving capabilities are considerable. We expect that similar gains are possible for the mixed zero-one problem. More precisely, we consider the mixed integer programs whose feasible region X is composed of (i) a simple additive constraint in the continuous variables xj, for j = 1, 2, ..., n, and (ii) constraints 0 ≤ xj ≤ mjyj defined by binary variables yj, for j = 1, 2, ..., n. This type of feasible region arises in a variety of mixed integer problems, and particularly in network problems with fixed charges on the arcs. We derive two classes of facet-defining linear inequalities of the convex hull of X, and show that the second of these classes gives a complete description of the convex hull when mj = m for all j. We also develop methods to detect violated inequalities from these classes, so that these facets can be used as cutting planes to strengthen the formulations of certain mixed integer problems.

Journal ArticleDOI
TL;DR: In this article, the impact of dependence on the precision and value of information is investigated, and the results indicate that positive dependence among information sources can have a serious detrimental effect on the information.
Abstract: In many inferential and decision-making situations, information is obtained from a number of information sources, and the separate pieces of information are often not independent. This paper investigates the impact of dependence on the precision and value of information. The results indicate that positive dependence among information sources can have a serious detrimental effect on the precision and value of the information. Differences in precision between the dependent and independent cases can be remarkably large. With dependence, the incremental value of information can decrease very rapidly, and the limiting value of information as more sources are considered can be considerably less than the expected value of perfect information. The results of this paper have implications for the acquisition and use of information in decision-making problems.

Journal ArticleDOI
TL;DR: This paper presents inclusion-exclusion bounds and compares them with disjoint subset bounds, the latter based on a generalization of Abraham's recursive disjoint products.
Abstract: The reliability literature has recently introduced several multistate models. This paper discusses reliability bounds in the most general of these models. It presents inclusion-exclusion bounds and compares them with disjoint subset bounds. The latter bounds are based on a generalization of Abraham's recursive disjoint products.

Journal ArticleDOI
TL;DR: Hakimi's one-median problem is extended by embedding it in a general queueing context and properties of the optimal location as a function of demand rate are developed.
Abstract: This paper extends Hakimi's one-median problem by embedding it in a general queueing context. Demands for service arise solely on the nodes of a network G and occur in time as a Poisson process. A single mobile server resides at a facility located on G. The server, when available, is dispatched immediately to any demand that occurs. When a demand finds the server busy with a previous demand, it is either rejected (Model 1) or entered into a queue that is depleted in a first-come, first-served manner (Model 2). Service time for each demand comprises travel time to the scene, on-scene time, travel time back to the facility and possibly additional off-scene time. One desires to locate the facility on G so as to minimize average cost of response, which is either a weighted sum of mean travel time and cost of rejection (Model 1), or the sum of mean queueing delay and mean travel time (Model 2). For Model 1, one finds that the optimal location reduces to Hakimi's familiar nodal result. For Model 2, nonlinearities in the objective function can yield an optimal solution that is either at a node or on a link. Properties of the objective function for Model 2 are utilized to develop efficient finite-step procedures for finding the optimal location. Certain interesting properties of the optimal location as a function of demand rate are also developed.
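Since the Model 1 optimum reduces to a node, it can be found by enumerating nodes over all-pairs shortest-path distances; a sketch using Floyd-Warshall on an assumed illustrative network:

```python
import math

def hakimi_one_median(n, edges, demand):
    """Demand-weighted 1-median over the nodes of a network (the nodal
    optimum that Model 1 reduces to), via Floyd-Warshall all-pairs
    shortest paths. The network and demands below are illustrative."""
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:                  # undirected links
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])

    def cost(f):                           # weighted mean travel cost at node f
        return sum(demand[j] * d[f][j] for j in range(n))

    return min(range(n), key=cost)

edges = [(0, 1, 2.0), (1, 2, 2.0), (1, 3, 1.0), (2, 3, 2.0)]
f = hakimi_one_median(4, edges, demand=[1.0, 1.0, 1.0, 1.0])
print(f)  # node 1
```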

Journal ArticleDOI
TL;DR: A modeling format and a solution algorithm for partial and general economic equilibrium problems are presented; computational experience on problems from the literature demonstrates that the algorithm is economical in terms of the number of pivots, function evaluations, and CPU time.
Abstract: This paper presents a modeling format and a solution algorithm for partial and general economic equilibrium problems. It reports on computational experience from a series of small to medium sized problems taken from the literature on computation of economic equilibria. The common characteristic of these models is the presence of weak inequalities and complementary slackness, e.g., a linear technology with alternative activities or various institutional constraints on prices. The algorithm computes the equilibrium by solving a sequence of linear complementarity problems. The iterative outer part of this algorithm is a Newton process. For the inner part, we use Lemke's almost complementary pivoting algorithm. Theoretical results for the performance of this algorithm are at present available only for the partial equilibrium cases. Our computational experience with both types of models, however, is encouraging. The algorithm solved all nine test problems when initiated at reasonable starting points. Five of these nine problems are solved for several different starting points, indicating a large region over which the algorithm converges. Our results demonstrate that the algorithm is economical in terms of the number of pivots, function evaluations and CPU time.

Journal ArticleDOI
TL;DR: A new method is presented for obtaining a probability distribution function that bounds the exact probability distribution of the project completion time from below; this bounding distribution is proved to be better (tighter) than any of the existing lower bounds.
Abstract: We consider the PERT model of a project composed of activities whose durations are random variables with known distributions. For the situations in which the activity durations are completely independent, we present a new method for obtaining a probability distribution function that bounds the exact probability distribution of the project completion time from below. The bounding distribution can be used to obtain an upper bound on the mean completion time of the project. We also prove and illustrate that this bounding distribution is better (tighter) than any of the existing lower bounds, implying that the corresponding upper bound on the mean completion time is tighter than any of the existing upper bounds.

Journal ArticleDOI
TL;DR: In this paper, the authors extend the price increase model by relaxing the requirement on the timing of the price increase, and develop optimal ordering strategies for situations where the price increase becomes effective at any future specified time.
Abstract: The familiar model for determining the optimal ordering strategy, given an announced price increase, assumes that the buyer has an opportunity to place an order at the end of the next economic order quantity cycle before the price increase takes effect. This paper extends the price increase model by relaxing the requirement on the timing of the price increase. Specifically, we develop optimal ordering strategies for situations where the price increase becomes effective at any future specified time. We also calculate savings for alternate ordering strategies.

Journal ArticleDOI
TL;DR: Alternative formulations of this problem using a Lagrangean relaxation approach to decouple the problem and provide lower bounds used in a branch and bound algorithm are presented.
Abstract: Certain manufacturing situations involve a small number of items produced sequentially on the same facility with high changeover cost. Production schedules are characterized by long runs, and individual items are produced infrequently. At the same time, seasonal demand patterns require that the items be maintained in inventory in the right mix. This paper presents alternative formulations of this problem. It uses a Lagrangean relaxation approach to decouple the problem and provide lower bounds used in a branch and bound algorithm. Some experimental computations on small problems are reported.

Journal ArticleDOI
TL;DR: Using a closed queueing network model, the consequences of varying workloads among multiserver queues that may be of unequal size and the problem of assigning servers of similar types to the queues in the network are solved.
Abstract: Using a closed queueing network model, we explore the consequences of varying workloads among multiserver queues that may be of unequal size. In addition, we solve the problem of assigning servers of similar types to the queues in the network to maximize expected throughput. We show that (1) unbalanced configurations of assigned servers are superior to balanced ones, and (2) unbalanced workloads are better than balanced ones. We find that there can be significant differences in system throughput from balanced versus unbalanced configurations/workloads. Finally, we discuss applications to planning problems of flexible manufacturing systems.

Journal ArticleDOI
TL;DR: It is shown how a class of norms with polygonal contours, called block norms, can yield attractive choices as distance functions and linear programming formulations of facility location problems such as the Weber problem and the Rawls problem.
Abstract: In formulating a continuous location model with facilities represented as points in R^n (typically in the plane), one must characterize the distance between two points as a function of their coordinates. Two criteria in selecting a distance function are (1) to obtain good approximations of actual distances, and (2) to obtain a mathematical model of the location problem that is easy to solve. In this paper, we show how a class of norms with polygonal contours, called block norms, can yield attractive choices as distance functions with respect to these criteria. In particular, we consider the following relevant properties of block norms: they generalize the concepts of rectilinear or city-block travel; they are dense in the set of all norms; they have interesting travel interpretations; in the plane, they can be expressed as a sum of the absolute values of linear functions; they often give better approximations to actual highway distances than the most frequently used family of norms, the l_p norms; and, finally, they yield linear programming formulations of certain facility location problems (i.e., the Weber problem and the Rawls problem).

Journal ArticleDOI
TL;DR: This paper uses signatures to describe a method for finding optimal assignments that terminates in at most (n-1)(n-2)/2 pivot steps and takes at most O(n^3) work.
Abstract: The "signature" of a dual feasible basis of the assignment problem is an n-vector whose ith component is the number of nonbasic activities of type (i, j). This paper uses signatures to describe a method for finding optimal assignments that terminates in at most (n-1)(n-2)/2 pivot steps and takes at most O(n^3) work.

Journal ArticleDOI
TL;DR: This work represents the only exact analysis of an MRP-type assembly system when demand is stochastic and characterized the forms of both the optimal order policy for the components and the optimal assembly policy of the end product for a multiperiod problem of arbitrary, but finite length.
Abstract: This paper considers an inventory system in which an end product is assembled from two components, each of which is ordered from an external supplier. Only the end product has final demand, which is assumed to be random. Using the functional equation approach of dynamic programming, we characterize the forms of both the optimal order policy for the components and the optimal assembly policy of the end product for a multiperiod problem of arbitrary, but finite length. We believe that this work represents the only exact analysis of an MRP-type assembly system when demand is stochastic.

Journal ArticleDOI
TL;DR: A new polynomially bounded shortest path algorithm, called the partitioning shortest path (PSP) algorithm, is developed for finding the shortest path from one node to all other nodes in a network containing no cycles with negative lengths.
Abstract: This paper develops a new polynomially bounded shortest path algorithm, called the partitioning shortest path (PSP) algorithm, for finding the shortest path from one node to all other nodes in a network containing no cycles with negative lengths. This new algorithm includes as variants the label setting algorithm, many of the label correcting algorithms, and the apparently computationally superior threshold algorithm.

Journal ArticleDOI
Kamal Golabi1
TL;DR: Finite horizon as well as infinite horizon models are studied and it is shown that the critical number strategy is also average-cost optimal, and the relationship of these expressions to minimal expected cost is demonstrated.
Abstract: We consider a single-item inventory model with deterministic demands. At the beginning of each period, a random ordering price is received according to a known distribution function. A decision must be made as to how much (if any) of the item to order in each period so as to minimize total expected costs while satisfying all demands. We show that, in each period, a sequence of critical price levels determines the optimal ordering strategy, so that it is optimal to satisfy the demands of the next n periods if and only if the random price falls between the nth and the (n+1)st levels. We derive recursive expressions that describe the critical price numbers, and demonstrate the relationship of these expressions to minimal expected cost. We study finite horizon as well as infinite horizon models and show that the critical number strategy is also average-cost optimal.

Journal ArticleDOI
TL;DR: The article analyzes the problems that motivate overbooking, discusses the relevant practices of the air carriers, and describes significant contributions and implementations of operations research.
Abstract: This paper surveys the application of operations research to airline overbooking, a distinctive problem of considerable significance to the public as well as the airline industry, and one which has received great attention in the press. The airlines have been overbooking their flights deliberately for decades, while moving gradually from a posture of categorical denial to a disclosure of their practices. Through the years, they have developed and employed a variety of statistical and tactical models to predict and control the consequences of this controversial procedure for increasing load factors. Yet, the situation still has not been resolved satisfactorily. The article analyzes the problems that motivate overbooking, discusses the relevant practices of the air carriers, and describes significant contributions and implementations of operations research.

Journal ArticleDOI
TL;DR: Methods for strengthening the linear programs, as well as other techniques necessary for a commercial branch-and-bound code to be successful in solving 0-1 programming problems, are described.
Abstract: We present methods that are useful in solving some large scale hierarchical planning models involving 0-1 variables. These 0-1 programming problems initially could not be solved with any standard techniques. We employed several approaches to take advantage of the hierarchical structure of variables ordered by importance and other structures present in the models. Critical, but not sufficient for success, was a strong linear programming formulation. We describe methods for strengthening the linear programs, as well as other techniques necessary for a commercial branch-and-bound code to be successful in solving these problems.

Journal ArticleDOI
TL;DR: A result is given that quantifies the loss in variance reduction caused by the estimation of the optimal control matrix in Monte Carlo simulation and derives analytically the optimal size of the vector of control variates under specific assumptions on the covariance matrix.
Abstract: This paper considers some statistical aspects of applying control variates to achieve variance reduction in the estimation of a vector of response variables in Monte Carlo simulation. It gives a result that quantifies the loss in variance reduction caused by the estimation of the optimal control matrix. For the one-dimensional case, we derive analytically the optimal size of the vector of control variates under specific assumptions on the covariance matrix. For the multidimensional case, our numerical results show that good variance reduction is achieved when the number of control variates is relatively small (approximately of the same order as the number of unknown parameters). Finally, we give some recommendations for future research.

Journal ArticleDOI
TL;DR: Two solution methods are presented for generalized versions of the minisum and minimax location problems in which location is restricted to the union of a finite number of convex polygons, distances are approximated by norms that may differ with the given points, and transportation costs are increasing and continuous functions of distance.
Abstract: The minisum (minimax) problem consists of locating a single facility in the plane with the aim of minimizing the sum of the weighted distances (the maximum weighted distance) to m given points. We present two solution methods for generalized versions of these problems in which (i) location is restricted to the union of a finite number of convex polygons; (ii) distances are approximated by norms that may differ with the given points; and (iii) transportation costs are increasing and continuous functions of distance. Computational experience is described.

Journal ArticleDOI
TL;DR: In this article, the M/M/s queue with an arbitrary number of customers present at time zero is considered, and the authors obtain probabilities in a relatively simple closed form that can be used to evaluate exactly several measures of system performance including the expected delay in queue of each arriving customer.
Abstract: Although the transient behavior of a queueing system is often of interest, available analytical results are usually quite restricted or are very complicated. We consider the M/M/s queue with an arbitrary number of customers present at time zero. We obtain probabilities in a relatively simple closed form that can be used to evaluate exactly several measures of system performance, including the expected delay in queue of each arriving customer. A numerical examination is carried out to see how the choice of initial condition affects the nature of convergence of the expected delays to their steady-state values. We also discuss the implications of these results for the initialization of steady-state simulations.
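The steady-state values to which these expected delays converge are given by the classical Erlang-C formula; a sketch of that standard limiting quantity (not the paper's transient expressions):

```python
import math

def mms_mean_wait(lam, mu, s):
    """Steady-state expected delay in queue for the M/M/s queue
    (Erlang-C formula), the limit that transient expected delays
    approach as time grows."""
    a = lam / mu                     # offered load
    rho = a / s
    assert rho < 1, "queue must be stable"
    tail = a ** s / (math.factorial(s) * (1 - rho))
    p0_inv = sum(a ** k / math.factorial(k) for k in range(s)) + tail
    prob_wait = tail / p0_inv        # probability an arrival must wait
    return prob_wait / (s * mu - lam)

# Two servers, arrival rate 1, service rate 1 per server (illustrative).
w = mms_mean_wait(lam=1.0, mu=1.0, s=2)
print(round(w, 4))  # 0.3333
```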

Journal ArticleDOI
TL;DR: A time-dependent stopping problem and its application to the decision-making process associated with transplanting a live organ, and it is shown that the control-limit type policy that maximizes the expected reward is a nonincreasing function of time.
Abstract: We consider a time-dependent stopping problem and its application to the decision-making process associated with transplanting a live organ. "Offers" e.g., kidneys for transplant become available from time to time. The values of the offers constitute a sequence of independent identically distributed positive random variables. When an offer arrives, a decision is made whether to accept it. If it is accepted, the process terminates. Otherwise, the offer is lost and the process continues until the next arrival, or until a moment when the process terminates by itself. Self-termination depends on an underlying lifetime distribution which in the application corresponds to that of the candidate for a transplant. When the underlying process has an increasing failure rate, and the arrivals form a renewal process, we show that the control-limit type policy that maximizes the expected reward is a nonincreasing function of time. For non-homogeneous Poisson arrivals, we derive a first-order differential equation for the control-limit function. This equation is explicitly solved for the case of discrete-valued offers, homogeneous Poisson arrivals, and Gamma distributed lifetime. We use the solution to analyze a detailed numerical example based on actual kidney transplant data.