
Showing papers in "Operations Research in 1980"


Journal ArticleDOI
TL;DR: Multiattribute utility theory is used to suggest a form for a utility function over life years and health status, motivated by treatment choices in coronary artery disease and chronic kidney disease; an empirical assessment shows that stated preferences appear consistent with this functional form.
Abstract: Multiattribute utility theory is used to suggest a form for a utility function over life years and health status. The analysis is motivated by consideration of two diseases—coronary artery disease and chronic kidney disease—in which the choice of treatment may depend on the patient's tradeoff between these attributes. Certain plausible independence properties lead to a quasi-additive utility function for life years and health status. Particular attention is given to utility functions for life years exhibiting constant proportional risk posture. An empirical assessment shows that stated preferences appear consistent with this functional form. The derived utility functions have been applied to the treatment decision of whether to prescribe coronary artery bypass graft surgery in patients with coronary artery disease.
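For illustration only (a sketch in standard multiattribute-utility notation, not the paper's assessed parameters): a quasi-additive utility function over life years y and health status h can be written with single-attribute utilities u_Y, u_H scaled to [0, 1] and scaling constants k_Y, k_H, and constant proportional risk posture over life years is consistent with a power-form u_Y:

    u(y, h) = k_Y\, u_Y(y) + k_H\, u_H(h) + (1 - k_Y - k_H)\, u_Y(y)\, u_H(h), \qquad
    u_Y(y) = (y / y_{\max})^b, \quad b > 0
    \quad (b < 1 \text{ risk averse}, \; b = 1 \text{ risk neutral}, \; b > 1 \text{ risk prone}).

Here y_max is the longest life span considered; b and the scaling constants would be assessed from the patient's stated tradeoffs, as in the empirical assessment mentioned above.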

657 citations


Journal ArticleDOI
TL;DR: An algorithm for the 0-1 knapsack problem (KP) is described which relies mainly on three new ideas, one of which is a binary search-type procedure for solving the associated linear program (LKP) that, unlike earlier methods, does not require any ordering of the variables.
Abstract: We describe an algorithm for the 0-1 knapsack problem (KP), which relies mainly on three new ideas. The first one is to focus on what we call the core of the problem, namely, a knapsack problem equivalent to KP, defined on a particular subset of the variables. The size of this core is usually a small fraction of the full problem size, and does not seem to increase with the latter. While the core cannot be identified without solving KP, a satisfactory approximation can be found by solving the associated linear program (LKP). The second new ingredient is a binary search-type procedure for solving LKP which, unlike earlier methods, does not require any ordering of the variables. The computational effort involved in this procedure is linear in the number of variables. Finally, the third new feature is a simple heuristic which under certain conditions finds an optimal solution with a probability that increases with the size of KP. Computational experience with an algorithm based on the above ideas, on several ...
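For orientation, here is the classical sort-based construction of the LP-relaxation optimum (Dantzig's fractional solution). It is given only as a reference point, since the paper's second idea is precisely a binary-search procedure that finds the break item without any such ordering:

    def lp_knapsack(values, weights, capacity):
        """Fractional (LP-relaxation) optimum of a 0-1 knapsack instance.

        Classical approach: sort by value/weight ratio and fill greedily.
        The item at which capacity runs out is the break item around which
        an approximate core can be built.
        """
        order = sorted(range(len(values)),
                       key=lambda i: values[i] / weights[i], reverse=True)
        x = [0.0] * len(values)
        remaining, total = capacity, 0.0
        for i in order:
            if weights[i] <= remaining:
                x[i] = 1.0
                remaining -= weights[i]
                total += values[i]
            else:
                x[i] = remaining / weights[i]   # fractional break item
                total += values[i] * x[i]
                break
        return total, x

    # Hypothetical data: print(lp_knapsack([60, 100, 120], [10, 20, 30], 50))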

468 citations


Journal ArticleDOI
TL;DR: One of the major conclusions is that it is not difficult to get within 2–3% of optimality using a composite heuristic which requires on the order of n³ computations where n is the number of nodes in the network.
Abstract: There have been a multitude of heuristic algorithms proposed for the solution of large scale traveling salesman problems. Our intent in this paper is to examine some of these well known heuristics, to introduce some new heuristics, and to compare these approximate techniques on the basis of efficiency and accuracy. We emphasize the strengths and weaknesses of each algorithm tested. One of our major conclusions is that it is not difficult to get within 2–3% of optimality using a composite heuristic which requires on the order of n³ computations where n is the number of nodes in the network.
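To make "composite heuristic" concrete, here is a minimal sketch in that spirit: a tour-construction step (nearest neighbor) followed by a tour-improvement step (2-opt), for a symmetric distance matrix. It illustrates the general idea and is not necessarily one of the specific heuristics compared in the paper:

    def nearest_neighbor(dist, start=0):
        """Construct a tour by repeatedly visiting the closest unvisited node."""
        n = len(dist)
        tour, unvisited = [start], set(range(n)) - {start}
        while unvisited:
            last = tour[-1]
            nxt = min(unvisited, key=lambda j: dist[last][j])
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    def two_opt(tour, dist):
        """Improve a tour by reversing segments while that shortens it."""
        improved = True
        while improved:
            improved = False
            n = len(tour)
            for i in range(1, n - 1):
                for j in range(i + 1, n):
                    a, b = tour[i - 1], tour[i]
                    c, d = tour[j], tour[(j + 1) % n]
                    if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                        tour[i:j + 1] = reversed(tour[i:j + 1])
                        improved = True
        return tour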

290 citations


Journal ArticleDOI
TL;DR: A Dynamic Programming approach for sequencing a given set of jobs on a single machine is developed so that the total processing cost is minimized; it offers savings in computational effort as compared to the classical Dynamic Programming approach to sequencing problems.
Abstract: A Dynamic Programming approach for sequencing a given set of jobs in a single machine is developed, so that the total processing cost is minimized. Assume that there are N distinct groups of jobs, where the jobs within each group are identical. A very general, yet additive cost function is assumed. This function includes the overall completion time minimization problem as well as the total weighted completion time minimization problem as special cases. Priority considerations are included; no job may be shifted by more than a prespecified number of positions from its initial, First Come-First Served position in a prescribed sequence. The running time and the storage requirement of the Dynamic Programming algorithm are both polynomial functions of the maximum number of jobs per group, and exponential functions of the number of groups N. This makes our approach practical for real-world problems in which this latter number is small. More importantly, the algorithm offers savings in computational effort as compared to the classical Dynamic Programming approach to sequencing problems, savings which are solely due to taking advantage of group classifications. Specific cost functions, as well as a real-world problem for which the algorithm is particularly well-suited, are examined. The problem application is the optimal sequencing of aircraft landings at an airport. A numerical example as well as suggestions on possible extensions to the model are also presented.
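A minimal sketch of the group-count state space (with a hypothetical additive cost function cost(g, pos) giving the cost of placing a group-g job in sequence position pos; the paper's cost structure is more general and also enforces the position-shift limits described above):

    from functools import lru_cache

    def sequence_groups(group_sizes, cost):
        """DP over how many jobs of each group have already been sequenced.

        The state space has prod(n_g + 1) states: polynomial in the maximum
        group size, exponential in the number of groups N.
        """
        total_jobs = sum(group_sizes)

        @lru_cache(maxsize=None)
        def best(counts):
            pos = sum(counts)
            if pos == total_jobs:
                return 0.0
            options = []
            for g, (done, size) in enumerate(zip(counts, group_sizes)):
                if done < size:
                    nxt = counts[:g] + (done + 1,) + counts[g + 1:]
                    options.append(cost(g, pos) + best(nxt))
            return min(options)

        return best(tuple(0 for _ in group_sizes))

    # Hypothetical cost: group index weights the (0-based) position.
    # print(sequence_groups([2, 3], lambda g, pos: (g + 1) * pos))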

211 citations


Journal ArticleDOI
TL;DR: The heuristic solutions are compared with optimal solutions obtained by branch and bound in numerous randomly generated problems and are found to be optimal in most cases.
Abstract: This paper treats the problem of minimizing the total weighted flow cost plus job-processing cost in a single machine sequencing problem for jobs having processing costs which are linear functions of processing times. The optimal job sequence and processing times are obtainable from the solution of an associated problem of optimal row and column selection in a symmetric matrix. Some sufficient conditions for expediting certain jobs are proved. In order to handle cases in which these conditions fail to complete the solution to the problem, a heuristic algorithm with a provable performance bound is developed. The heuristic solutions are compared with optimal solutions obtained by branch and bound in numerous randomly generated problems and are found to be optimal in most cases.

197 citations


Journal ArticleDOI
TL;DR: An analytic derivation of path optimality indices for directed, acyclic networks in stochastic shortest route analysis is presented, and uniformly directed cutsets (UDCs) are shown to be important to the efficient implementation of the prescribed analytic procedure.
Abstract: The problem addressed in this paper is the selection of the shortest path through a directed, acyclic network where the arc lengths are independent random variables. This problem has received little attention although the deterministic version of the problem has been studied extensively. The concept of a path optimality index is introduced as a performance measure for selecting a path of a stochastic network. A path optimality index is defined as the probability a given path is shorter than all other network paths. This paper presents an analytic derivation of path optimality indices for directed, acyclic networks. A new network concept, Uniformly Directed Cutsets (UDCs), is introduced. UDCs are shown to be important to the efficient implementation of the prescribed analytic procedure. There are strong indications that stochastic shortest route analysis has numerous applications in operations research and management science. Potential application areas include equipment replacement analysis, reliability ...
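For intuition, path optimality indices can be checked by simulation on a small hypothetical network (the paper computes them analytically via UDCs rather than by sampling; the arcs, distributions and paths below are made up for illustration):

    import random

    arcs = {                                   # independent random arc lengths
        ("s", "a"): lambda: random.uniform(1, 3),
        ("s", "b"): lambda: random.expovariate(0.5),
        ("a", "t"): lambda: random.uniform(2, 4),
        ("b", "t"): lambda: random.uniform(1, 2),
        ("s", "t"): lambda: random.uniform(4, 6),
    }
    paths = {
        "s-a-t": [("s", "a"), ("a", "t")],
        "s-b-t": [("s", "b"), ("b", "t")],
        "s-t":   [("s", "t")],
    }

    def optimality_indices(arcs, paths, trials=100_000):
        """Estimate P(path is the shortest) for every path by Monte Carlo."""
        wins = dict.fromkeys(paths, 0)
        for _ in range(trials):
            sample = {arc: draw() for arc, draw in arcs.items()}  # one draw per arc
            lengths = {name: sum(sample[a] for a in arc_list)
                       for name, arc_list in paths.items()}
            wins[min(lengths, key=lengths.get)] += 1
        return {name: w / trials for name, w in wins.items()}

    # print(optimality_indices(arcs, paths))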

192 citations


Journal ArticleDOI
TL;DR: This work identifies a large class of cyclic staffing problems for which special structure permits the ILP to be solved parametrically as a bounded series of network flow problems.
Abstract: A fundamental problem of cyclic staffing is to size and schedule a minimum-cost workforce so that sufficient workers are on duty during each time period. This may be modeled as an integer linear program with a cyclically structured 0-1 constraint matrix. We identify a large class of such problems for which special structure permits the ILP to be solved parametrically as a bounded series of network flow problems. Moreover, an alternative solution technique is shown in which the continuous-valued LP is solved, and the result rounded in a special way to yield an optimum solution to the ILP.
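A small sketch of the cyclic 0-1 structure and its LP relaxation (a one-week cycle in which each worker pattern covers five consecutive days; the requirements are hypothetical, scipy is assumed to be available, and only a naive ceiling is applied at the end, not the special rounding the paper proves yields an ILP optimum):

    import math
    from scipy.optimize import linprog

    req = [17, 13, 15, 19, 14, 16, 11]      # hypothetical staff required per day
    days = len(req)

    # Column j: a worker who starts on day j and works 5 consecutive days,
    # wrapping around the week -- a cyclically structured 0-1 matrix.
    cover = [[1 if (d - j) % days < 5 else 0 for j in range(days)]
             for d in range(days)]

    # LP relaxation: minimize total workers subject to per-day coverage.
    res = linprog(c=[1.0] * days,
                  A_ub=[[-a for a in row] for row in cover],
                  b_ub=[-r for r in req],
                  bounds=[(0, None)] * days,
                  method="highs")
    print("LP lower bound:", res.fun)
    print("naive rounding:", [math.ceil(x) for x in res.x])   # not the paper's rule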

174 citations


Journal ArticleDOI
TL;DR: It is shown that a search plan maximizes the overall probability of detection if and only if, for each time interval i, the search conducted at time i maximizes the probability of detecting a stationary target whose probability of occupying cell c equals the probability that the moving target occupies cell c at time i and is not detected at any other time interval.
Abstract: We consider optimal search for a moving target in discrete space. A limited amount of search effort is available at each of a fixed number of time intervals and we assume an exponential detection function. We show that a search plan maximizes the overall probability of detection if and only if for each time interval i the search conducted at time i maximizes the probability of detecting a stationary target with the probability that the stationary target occupies cell c equal to the probability that the moving target occupies cell c at time i and is not detected by the search at any time interval other than i. This characterization gives an iterative algorithm to compute optimal search plans. These plans are compared with incrementally optimal plans.

167 citations


Journal ArticleDOI
Chris N. Potts
TL;DR: In this paper, a single machine sequencing problem is considered in which each job has a release date, a processing time and a delivery time, and the objective is to find a sequence of jobs which minimizes the time by which all jobs are delivered.
Abstract: The single machine sequencing problem is considered in which each job has a release date, a processing time and a delivery time. The objective is to find a sequence of jobs which minimizes the time by which all jobs are delivered. A heuristic is presented which never deviates by more than 50% from the optimum.
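A minimal sketch of the greedy rule usually taken as the starting point for such heuristics (among the jobs already released, schedule the one with the largest delivery time); this is offered to illustrate the problem data, not as the paper's exact heuristic or its worst-case analysis:

    def greedy_delivery(jobs):
        """jobs: list of (release, processing, delivery) triples.

        Returns the time by which all jobs are delivered under the greedy
        rule, together with the sequence used.
        """
        remaining = list(range(len(jobs)))
        t, objective, sequence = 0, 0, []
        while remaining:
            ready = [j for j in remaining if jobs[j][0] <= t]
            if not ready:                      # idle until the next release date
                t = min(jobs[j][0] for j in remaining)
                continue
            j = max(ready, key=lambda k: jobs[k][2])   # largest delivery time
            release, processing, delivery = jobs[j]
            t += processing
            objective = max(objective, t + delivery)
            sequence.append(j)
            remaining.remove(j)
        return objective, sequence

    # Hypothetical instance: print(greedy_delivery([(0, 4, 7), (2, 3, 9), (3, 2, 1)]))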

164 citations


Journal ArticleDOI
TL;DR: Making use of a well known conservation law, this work proves a necessary and sufficient condition for the existence of a scheduling strategy that achieves the desired performance.
Abstract: In this paper we study the problem of designing scheduling strategies when the demand on the system is known and waiting time requirements are pre-specified. This important synthesis problem has received little attention in the literature, and contrasts with the common analytical approach to the study of service systems. This latter approach contributes only indirectly to the problem of finding satisfactory scheduling rules when the desired (or required) response-time performance is known in advance. Briefly, the model studied assumes a Markov queueing system with M (priority) classes of jobs. For each class, a desired mean waiting time is given in advance. Making use of a well known conservation law, we prove a necessary and sufficient condition for the existence of a scheduling strategy that achieves the desired performance. We also give a constructive procedure for checking the condition and, if a solution exists, a procedure for finding one such strategy. Our assumptions are discussed and the possibil...
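The conservation law referred to is, in its standard form for a single-server system under any work-conserving, non-preemptive discipline (written here as a hedged reminder; the paper states the exact version it needs):

    \sum_{i=1}^{M} \rho_i W_i = \frac{\rho\, W_0}{1 - \rho}, \qquad
    \rho_i = \frac{\lambda_i}{\mu_i}, \quad \rho = \sum_{i=1}^{M} \rho_i < 1, \quad
    W_0 = \sum_{i=1}^{M} \frac{\lambda_i\, \mathrm{E}[S_i^2]}{2},

so the left-hand side is the same constant for every such scheduling rule. Any pre-specified vector (W_1, ..., W_M) of mean waiting times can therefore be achievable only if it satisfies this equality; the paper establishes the full necessary and sufficient condition and a constructive procedure.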

159 citations


Journal ArticleDOI
TL;DR: The model developed in this paper incorporates two submodels: a purchase timing model which describes the occurrence over time of purchases of the product class, and a multibrand stochastic choice model which specifies how any brand may be chosen on a given purchase occasion.
Abstract: The model developed in this paper incorporates two submodels: a purchase timing model which describes the occurrence over time of purchases of the product class, and a multibrand stochastic choice model which specifies how any brand may be chosen on a given purchase occasion. The mathematical derivations obtained by combining both submodels lead to the identification of the formal connection between the aggregates of the market—in particular, market share, penetration, duplication and brand switching. The combining is done under the assumption of independence between the zero-order choice process and the Erlang purchase timing process. The model is fully determined when the following four types of parameters are known: the market shares m_n; a measure of heterogeneity of the population in terms of choice, p; the order of the Erlang timing process, r; and two parameters which describe the distribution over the population of the purchase rate of the product class—a shape parameter k and a scale parameter c. An...

Journal ArticleDOI
TL;DR: Possible fatalities to members of the public are defined in this paper as public risk, and it is shown that any utility function over the number of fatalities which exhibits the equity condition defined here must be risk prone.
Abstract: Possible fatalities to members of the public are defined in this paper as public risk. Given that other things, such as the benefits to individuals in society, are equal, there may be a preference for an equitable balancing of individual risks. This concept of equity is defined, and it is shown that any utility function over the number of fatalities which exhibits this equity condition must be risk prone. Commonly used indicators, such as the average risk per person and the expected number of fatalities, do not promote equity. An attitude of aversion toward catastrophes is defined and shown to conflict with risk equity.

Journal ArticleDOI
TL;DR: It is found that this method provides a fairly good approximation procedure for obtaining system performance measures such as blocking probabilities, output rates, etc., in open restricted queueing networks.
Abstract: This paper presents an approximation method for analyzing open restricted queueing networks with exponential service time and Poisson arrivals. Analysis is made by node-by-node decomposition through the introduction of a pseudo-arrival rate and an effective service rate. The method is applied to example networks and evaluated by comparing the results obtained thereby with those by simulations or exact calculations. We find that this method provides a fairly good approximation procedure for obtaining system performance measures such as blocking probabilities, output rates, etc., in open restricted queueing networks.
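For reference, the kind of per-node quantity such a decomposition composes: the stationary distribution and blocking probability of a finite-buffer exponential node (textbook M/M/1/K formulas, shown here only to indicate what a pseudo-arrival rate and effective service rate would be plugged into; the node-by-node decomposition itself is not reproduced):

    def mm1k(arrival_rate, service_rate, K):
        """Stationary distribution, blocking probability and output rate of M/M/1/K."""
        rho = arrival_rate / service_rate
        if abs(rho - 1.0) < 1e-12:
            probs = [1.0 / (K + 1)] * (K + 1)
        else:
            norm = (1.0 - rho ** (K + 1)) / (1.0 - rho)
            probs = [rho ** n / norm for n in range(K + 1)]
        blocking = probs[K]
        output_rate = arrival_rate * (1.0 - blocking)
        return probs, blocking, output_rate

    # Example: print(mm1k(0.9, 1.0, 5))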

Journal ArticleDOI
TL;DR: A class of instances of the problem is identified which is difficult to solve by such algorithms: if reading the data takes t units of time, then the time required to solve these instances grows exponentially with the square root of t.
Abstract: We consider a class of algorithms which use the combined powers of branch-and-bound, dynamic programming and rudimentary divisibility arguments for solving the zero-one knapsack problem. Our main result identifies a class of instances of the problem which are difficult to solve by such algorithms. More precisely, if reading the data takes t units of time, then the time required to solve the problem grows exponentially with the square root of t.

Journal ArticleDOI
TL;DR: In this article, a general structure for using intensity measures to estimate consumer preference functions is provided, including alternative measurement theories, axioms for developing testable implications of each theory, statistical tests that distinguish which theory describes how consumers use the intensity measures, and functional forms appropriate for the preference functions implied by each theory.
Abstract: To design successful new products and services, managers need to measure consumer preferences relative to product attributes. Many existing methods use ordinal measures. Intensity measures have the potential to provide more information per question, thus allowing more accurate models or fewer consumer questions (lower survey cost, less consumer wearout). To exploit this potential, researchers must be able to identify how consumers react to these questions and must be able to estimate intensity-based preference functions. This paper provides a general structure for using intensity measures for estimating consumer preference functions. Within the structure: (1) alternative measurement theories are reviewed, (2) axioms for developing testable implications of each theory are provided, (3) statistical tests to test these implications and distinguish which theory describes how consumers are using the intensity measures are developed, (4) functional forms appropriate for the preference functions implied by each ...

Journal ArticleDOI
TL;DR: A reasonable criterion for the resulting loss in accuracy is defined and bounds on this quantity are derived; it is also shown that standard iterative methods can be used to improve the accuracy of a given aggregated problem.
Abstract: This paper explores the effects of aggregating variables in large linear programs. We define a reasonable criterion for the resulting loss in accuracy, and derive bounds on this quantity. A posteriori bounds may be calculated after solving the aggregated problem, and a priori bounds before. Also, we show that standard iterative methods can be used to improve the accuracy of a given aggregated problem. A numerical example illustrates the results.

Journal ArticleDOI
TL;DR: The interactive approach adopted here focuses on reducing the feasible region of the decision space rather than improving the stored image of the overall preference function, which obviates the need for any type of choice among vectors and stays reasonably within the decision maker's capability to supply the necessary information for problem solution.
Abstract: There is a need to develop user-oriented math programming techniques for resolution of decision problems in which several objectives must be considered. One approach, the Geoffrion-Dyer-Feinberg algorithm, allows interaction between the computer and the decision maker during the solution process. The interactive approach is adopted in this paper. However, our approach focuses on reducing the feasible region of the decision space rather than improving the stored image of the overall preference function. In so doing, the problem is reduced to a series of pairwise tradeoffs between the objectives. This obviates the need for any type of choice among vectors on the part of the decision maker and stays reasonably within his capability to supply necessary information for problem solution.

Journal ArticleDOI
TL;DR: The traditional perishable inventory costs of ordering, holding, shortage or penalty, disposal and revenue are incorporated into the continuous review framework and the type of policy that is optimal with respect to long run average expected cost is presented for both the backlogging and lost-sales models.
Abstract: This paper extends the notions of perishable inventory models to the realm of continuous review inventory systems. The traditional perishable inventory costs of ordering, holding, shortage or penalty, disposal and revenue are incorporated into the continuous review framework. The type of policy that is optimal with respect to long run average expected cost is presented for both the backlogging and lost-sales models. In addition, for the lost-sales model the cost function is presented and analyzed.

Journal ArticleDOI
TL;DR: Decision analysis is a process that enhances effective decision making by providing for both logical, systematic analysis and imaginative creativity; criticisms of its application question how much decision analysis improves actual decision making.
Abstract: Making decisions is what you do when you don't know what to do. Decision analysis is a process that enhances effective decision making by providing for both logical, systematic analysis and imaginative creativity. The procedure permits representing the decision-maker's information and preferences concerning the uncertain, complex, and dynamic features of the decision problem. As decision analysis has become more accepted and influential, the ethical responsibility of decision analysts has increased. Analysts must be sensitive to assuming improper roles of advocacy and to participating in analyses whose means or ends are ethically repugnant. Criticisms of decision analysis are examined at three levels. Application criticisms question how much decision analysis improves actual decision making. Conceptual criticisms argue that the decomposition and recomposition of the decision analysis process may lead to a misshapen framing of the problem or to a suppression of “soft” or “fragile” considerations. Criticisms...

Journal ArticleDOI
TL;DR: It is shown that the problem may be modeled as an M/D/1 queueing system with the optimal policy being a two-critical-number policy; in computational tests that were performed, optimal policies were computed in less than 1/8 second of CPU time.
Abstract: A single production facility is dedicated to producing one type of product with completed units going directly into inventory. The demand for the product is governed by a Poisson process and is supplied directly from inventory when available, or is backordered until it is produced by the production facility. Relevant costs are a linear inventory holding cost, a linear backorder cost, and a fixed setup cost for initiating a production run. The objective is to find a control policy to minimize the expected cost per time unit. It is shown that the problem may be modeled as an M/D/1 queueing system with the optimal policy being a two-critical-number policy. Cost expressions are derived as functions of the policy parameters, and based on a convexity property of these cost expressions, an efficient search procedure is proposed for finding the optimal policy. In computational tests that were performed on an IBM 360/65, optimal policies were computed in less than 1/8 second of CPU time.
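A hedged simulation sketch of a two-critical-number policy, interpreted here as: start a production run (and pay the setup cost) when inventory falls to the lower number s, produce one unit every 1/mu time units, and stop when inventory reaches the upper number S. The interpretation and all parameter values are assumptions for illustration; the paper evaluates such policies through closed-form cost expressions, not simulation:

    import math, random

    def average_cost(s, S, lam=0.8, mu=1.0, K=50.0, h=1.0, b=10.0,
                     horizon=200_000.0, seed=1):
        """Estimate long-run average cost per unit time under the (s, S) rule."""
        rng = random.Random(seed)
        t, inv, cost, producing = 0.0, S, 0.0, False
        next_demand = rng.expovariate(lam)
        next_done = math.inf
        while t < horizon:
            t_next = min(next_demand, next_done)
            cost += (h * inv if inv > 0 else -b * inv) * (t_next - t)  # holding / backorder
            t = t_next
            if next_demand <= next_done:                      # a demand occurs
                inv -= 1
                next_demand = t + rng.expovariate(lam)
                if not producing and inv <= s:
                    producing, cost = True, cost + K          # set up a production run
                    next_done = t + 1.0 / mu
            else:                                             # a unit is completed
                inv += 1
                if inv >= S:
                    producing, next_done = False, math.inf
                else:
                    next_done = t + 1.0 / mu
        return cost / t

    # print(average_cost(s=0, S=5))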

Journal ArticleDOI
TL;DR: It is shown that under reasonable assumptions, solving a multistage stochastic program with recourse is equivalent to solving a nested sequence of piecewise quadratic programs and the algorithm presented in an earlier report is extended to the multistages situation.
Abstract: We consider a multistage stochastic program with recourse, with discrete distribution, quadratic objective function and linear inequality constraints. We show that under reasonable assumptions, solving such a program is equivalent to solving a nested sequence of piecewise quadratic programs and we extend the algorithm presented in an earlier report to the multistage situation. Finally, we consider the application of the method to an energy investment problem and report on the results of numerical experiments.

Journal ArticleDOI
TL;DR: This paper proposes a new approach and develops an efficient algorithm for solving a class of (simplified) portfolio selection problems based on the technique of parametric principal pivoting that achieves enormous savings in computer storage and computations.
Abstract: This paper proposes a new approach and develops an efficient algorithm for solving a class of (simplified) portfolio selection problems. The approach is based on the technique of parametric principal pivoting. The algorithm is particularly suited for problems with special structure and can handle potentially large problems. When specialized to the multiple index model, the algorithm achieves enormous savings in computer storage and computations.

Journal ArticleDOI
TL;DR: An extension of the Lin-Kernighan local search algorithm for the solution of the asymmetric traveling salesman problem is presented and computational results suggest that the heuristic is feasible for fairly large instances.
Abstract: We present an extension of the Lin-Kernighan local search algorithm for the solution of the asymmetric traveling salesman problem. Computational results suggest that our heuristic is feasible for fairly large instances. We also present some theoretical results which guided our design of the heuristic.

Journal ArticleDOI
TL;DR: The problem is formulated and solved as a nonconvex mathematical programming problem using a procedure termed DISpersion-CONcentration, and the total cost is the sum of weighted distances between all pairs of facilities.
Abstract: Layout problems often involve a given number of facilities which must be located in the plane. Each of these facilities has a given area, and the cost of interactions between every facility pair is known. Problem optimality is achieved when facilities do not overlap and the total cost, which is the sum of weighted distances between all pairs of facilities, is minimized. The problem is formulated and solved as a nonconvex mathematical programming problem using a procedure termed DISpersion-CONcentration.

Journal ArticleDOI
TL;DR: This paper presents a new algorithm for solving the assignment problem based on a scheme of relaxing the given problem into a series of simple network flow transportation problems for each of which an optimal solution can be easily obtained.
Abstract: This paper presents a new algorithm for solving the assignment problem. The algorithm is based on a scheme of relaxing the given problem into a series of simple network flow transportation problems for each of which an optimal solution can be easily obtained. The algorithm is thus seen to be able to take advantage of the nice properties in both the primal and the dual approaches for the assignment problem. The computational bound for the algorithm is shown to be O(n³), and the average computation time is better than that of most of the specialized assignment algorithms.

Journal ArticleDOI
TL;DR: In this note it is shown that Stidham's proof applies directly to the more general case of H = λG, provided λ and G are finite and a simple technical assumption is satisfied.
Abstract: Brumelle has generalized the queueing formula L = λW to H = λG, where λ is the arrival rate and H and G are respectively time and customer averages of some queue statistics which have a certain relationship to each other but are otherwise arbitrary. Stidham has developed a simple proof of L = λW for each sample path, in which the only requirement is that λ and W be finite. In this note it is shown that Stidham's proof applies directly to the more general case of H = λG, provided λ and G are finite and a simple technical assumption is satisfied. The result is used to obtain time average probabilities in the queue GI/M/c/K. Finally, a counterexample is given to demonstrate that the technical assumption is not superfluous, even in the special case where H and G can be interpreted, respectively, as the time average number of units in the system and the average time spent by a unit in the system, as is the case with both L = λW and the application to the queue GI/M/c/K.
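As a reminder of the notation (standard sample-path definitions; the technical assumption mentioned above is stated in the note itself):

    \lambda = \lim_{T \to \infty} \frac{N(T)}{T}, \qquad
    H = \lim_{T \to \infty} \frac{1}{T} \int_0^T H(t)\, dt, \qquad
    G = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} G_n,

where N(T) counts arrivals in [0, T] and each G_n is the time integral of customer n's contribution to the process H(t); the result asserts H = \lambda G whenever \lambda and G exist and are finite. Taking H(t) to be the number of units in the system and G_n the time unit n spends in the system recovers L = \lambda W.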

Journal ArticleDOI
TL;DR: A new formulation of the time-dependent salesman problem is presented which uses n³ variables and only n constraints to solve the problem.
Abstract: A new formulation of the time-dependent salesman problem is presented which uses n³ variables and only n constraints.

Journal ArticleDOI
TL;DR: A dual bound, based on the dual of the LP relaxation of the integer programming formulation of the p-median problem, is developed and tested in a branch-and-bound algorithm.
Abstract: The p-median problem consists of locating p facilities on a network, so that the sum of shortest distances from each of the nodes of the network to its nearest facility is minimized. A dual bound, based on the dual of the LP relaxation of the integer programming formulation of the problem, is developed and tested in a branch-and-bound algorithm. Computational results show that the resulting solution procedure has some advantages over existing exact methods for this problem.
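For reference, the standard integer programming formulation whose LP relaxation and dual underlie the bound (d_{ij} is the shortest distance from demand node i to candidate site j; y_j = 1 if a facility is located at j; x_{ij} = 1 if node i is assigned to j):

    \min \sum_{i} \sum_{j} d_{ij}\, x_{ij}
    \quad \text{s.t.} \quad \sum_{j} x_{ij} = 1 \;\; \forall i, \qquad
    x_{ij} \le y_j \;\; \forall i, j, \qquad
    \sum_{j} y_j = p, \qquad
    x_{ij},\, y_j \in \{0, 1\}.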

Journal ArticleDOI
TL;DR: Five problems of finding efficient vectors as a subset of a finite set of vectors are shown to be related and a common methodology based on the simplex method of linear programming is developed for solving all of them.
Abstract: Five problems of finding efficient vectors as a subset of a finite set of vectors are shown to be related and a common methodology based on the simplex method of linear programming is developed for solving all of them. Randomly generated problems for one of the five types are solved using the method, and the implications regarding computational requirements are discussed.

Journal ArticleDOI
Paul Zipkin
TL;DR: Methods for assessing the loss in accuracy resulting from aggregation are developed; several reasonable measures of "accuracy loss" for this case are defined, and bounds on these quantities are derived.
Abstract: Most applied linear programs reflect a certain degree of aggregation-either explicit or implicit-of some larger, more detailed problem. This paper develops methods for assessing the loss in accuracy resulting from aggregation. We showed previously that, when columns only are aggregated, a feasible solution to the larger problem can be recovered. This may not be the case under row-aggregation. Several reasonable measures of "accuracy loss" for this case are defined, and the bounds on these quantities derived. These results enable the modeler to compare and evaluate alternative approximate models of the same problem.