
Showing papers in "Management Science in 1977"


Journal ArticleDOI
TL;DR: The number of days required to clear a check drawn on a bank in city j depends on the city i in which the check is cashed; a company paying bills can therefore choose where its checks are drawn so as to maximize its available funds.
Abstract: The number of days required to clear a check drawn on a bank in city j depends on the city i in which the check is cashed. Thus, to maximize its available funds, a company that pays bills to numero...
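The abstract describes a disbursement-float model: clearing time depends on the (payee city, bank city) pair, and the payer wants slow-clearing assignments. A minimal brute-force sketch of that kind of model; the clearing-time matrix, volumes, and the `best_banks` helper are all invented for illustration, not taken from the paper.

```python
from itertools import combinations

# Invented clearing-time matrix: days[i][j] = days for a check cashed in
# city i to clear when drawn on a bank in city j. Volumes also invented.
days = [[2, 4, 6],
        [4, 2, 5],
        [6, 5, 2]]
volume = [100, 80, 60]               # daily disbursements to payees in city i

def best_banks(k):
    """Brute force: open k disbursement banks and draw each payment on the
    open bank that clears slowest, maximizing total float (volume x days)."""
    n = len(days[0])
    return max(combinations(range(n), k),
               key=lambda S: sum(v * max(days[i][j] for j in S)
                                 for i, v in enumerate(volume)))
```

With these numbers, a single bank is best placed in city 2, and the best pair is cities 0 and 2; real instances of this problem are solved with integer programming rather than enumeration.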

842 citations


Journal ArticleDOI
TL;DR: Computational experience with eleven flow shop sequencing heuristics is reported; three previously unreported heuristics are included in the study, one of which turns out to be superior to the other ten heuristics tested.
Abstract: This paper presents computational experience with eleven flow shop sequencing heuristics. Included in the study are three previously unreported heuristics, one of which turns out to be superior to the other ten heuristics tested. The comparisons were made on a variety of problem sizes, up to fifty jobs and fifty operations.
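The heuristics themselves are not reproduced in this listing, but the objective they approximate can be sketched: the makespan of a permutation flow shop, plus an exhaustive search usable only on tiny instances (the instance below is invented).

```python
from itertools import permutations

def makespan(p, seq):
    """Makespan of a permutation flow shop: p[job][machine] holds processing
    times, every machine processes jobs in the same order `seq`."""
    c = [0] * len(p[0])                      # completion time per machine
    for j in seq:
        for k in range(len(c)):
            # job j starts on machine k when both the machine and the job's
            # previous operation are done
            c[k] = max(c[k], c[k - 1] if k else 0) + p[j][k]
    return c[-1]

def best_order(p):
    """Exhaustive search; only feasible for very small instances."""
    return min(permutations(range(len(p))), key=lambda s: makespan(p, s))
```

For p = [[3, 2], [1, 4]] the two orders give makespans 9 and 7, so the exhaustive search returns (1, 0); heuristics like those compared in the paper aim to get near this optimum for fifty jobs, where enumeration is hopeless.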

512 citations


Journal ArticleDOI
TL;DR: Analysis of the results shows that, in many cases, the decisions/decision-making process of the participants was affected by the information system structure and/or attributes of individual decision makers.
Abstract: The use of computer based information-decision systems to support decision making in organizations has increased significantly in the last decade. Very little effort has been devoted, however, to determine what relationships exist between the structure of information presented for decision making and the ensuing effectiveness of the decision. This article summarizes a series of experiments. The Minnesota Experiments, which were conducted to examine the significance of various information system characteristics on decision activity. Several research programs administered in the period 1970--1975 are discussed in this paper. By varying the manner in which information was provided to participants in each experiment, the impact of various information system characteristics and individual differences on decision effectiveness was investigated. Analysis of the results shows that, in many cases, the decisions/decision-making process of the participants was affected by the information system structure and/or attributes of individual decision makers. The results suggest guidelines for the designers of information systems and fruitful avenues for continued research.

360 citations


Journal ArticleDOI
Peter A. Morris1
TL;DR: It is proved that the decision maker should process a calibrated expert's opinion by multiplying the expert's probability assessment by his own prior probability assessment and normalizing; the existence of a composite probability function which measures the joint information contained in the probability assessments generated by a panel of experts is also established.
Abstract: This paper presents a new approach to the problem of expert resolution. The proposed analytic structure provides a mechanism by which a decision maker can incorporate the possibly conflicting probability assessments of a group of experts. The approach is based upon the Bayesian inferential framework presented in [Morris, P. A. 1974. Decision analysis expert use. Management Sci. 20(9), May]. A number of specific results are derived from analysis of a generic model structure. In the single expert continuous variable case, we prove that the decision maker should process a calibrated expert's opinion by multiplying the expert's probability assessment by his own prior probability assessment and normalizing. A method for subjectively calibrating an expert is also presented. In the multi-expert case, we obtain a simple multiplicative rule for combining the expert judgments. We also prove the existence of a composite probability function which measures the joint information contained in the probability assessments generated by a panel of experts. The interesting result is that the composite prior should be processed as if it were the probability statement of a single calibrated expert.
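The multiplicative rule is easy to see in a discrete sketch (the paper's results concern continuous assessments from calibrated experts; the version below simply assumes calibration and finite state spaces):

```python
def combine(prior, experts):
    """Multiply the decision maker's prior over discrete states by each
    expert's probability assessment, then normalize (a discrete sketch of
    the multiplicative rule; calibration is simply assumed here)."""
    post = list(prior)
    for e in experts:
        post = [p * q for p, q in zip(post, e)]
    z = sum(post)
    return [p / z for p in post]
```

With a uniform prior, one expert's assessment passes through unchanged; two agreeing experts sharpen the posterior, which is the panel-of-experts effect the abstract describes.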

349 citations


Journal ArticleDOI
TL;DR: The results indicate that significant reductions in crane travel time and distance are obtainable in some real-world and previously modeled situations via the proposed storage assignment/interleaving policies, which may be directly translated into increased throughput capacity for existing systems.
Abstract: This paper extends work previously reported on storage assignment rules for automatic warehousing systems to include interleaving; that is, the sequencing of storage and retrieve requests. Using both continuous analytical models and discrete evaluation procedures, this paper compares the operating performance of several storage assignment/interleaving policies. The results indicate that significant reductions in crane travel time and distance are obtainable in some real-world and/or previously modeled situations via the proposed storage assignment/interleaving policies. These reductions may be directly translated into increased throughput capacity for existing systems. The storage assignment/interleaving policies may also be used to improve the design of proposed systems to achieve a more desirable balance between throughput and storage capacity. Although the system examined is a high-rise automatic warehouse, the results presented have implications for the warehousing function in general.

317 citations


Journal ArticleDOI
TL;DR: In this paper, an efficient branch-and-bound solution procedure is described for the warehouse location problem in which limitations on the amount of goods which can be handled are also imposed. The method is made efficient by developing dominance, lower and upper bounding procedures, and branch and node selection rules utilizing the special structure of this problem.
Abstract: This paper describes an efficient solution procedure for the warehouse location problem in which limitations on the amount of goods which can be handled are also imposed. The proposed branch-and-bound solution method is made efficient by developing dominance, lower and upper bounding procedures and branch and node selection rules utilizing the special structure of this problem. Computational results are provided for large sized problems.

225 citations


Journal ArticleDOI
TL;DR: In this paper, a modified version of Benders' original mixed-integer programming algorithm is proposed, in which the solution of a sequence of integer programs is replaced by the solution of a sequence of linear programs plus some (hopefully few) integer programs.
Abstract: As applied to mixed-integer programming, Benders' original work made two primary contributions: (1) development of a “pure integer” problem (Problem P) that is equivalent to the original mixed-integer problem, and (2) a relaxation algorithm for solving Problem P that works iteratively on an LP problem and a “pure integer” problem. In this paper a modified algorithm for solving Problem P is proposed, in which the solution of a sequence of integer programs is replaced by the solution of a sequence of linear programs plus some (hopefully few) integer programs. The modified algorithm will still allow for taking advantage of any special structures (e.g., an LP subproblem that is a “network problem”) just as in Benders' original algorithm. The modified Benders' algorithm is explained and limited computational results are given.

197 citations


Journal ArticleDOI
TL;DR: This report also appears as Working Paper No. 260, Western Management Science Institute, University of California, Los Angeles, November 1976.
Abstract: A complete description is given of the design, implementation and use of a family of very fast and efficient large scale minimum cost (primal simplex) network programs. The class of capacitated transshipment problems solved is the most general of the minimum cost network flow models which include the capacitated and uncapacitated transportation problems and the classical assignment problem; these formulations are used for a large number of diverse applications to determine how (or at what rate) a good should flow through the arcs of a network to minimize total shipment costs. The presentation tailors the unified mathematical framework of linear programming to networks with special emphasis on data structures which are not only useful for basis representation, basis manipulation, and pricing mechanisms, but which also seem to be fundamental in general mathematical programming. A review of pertinent optimization literature accompanies computational testing of the most promising ideas. Tuning experiments for...

193 citations


Journal ArticleDOI
TL;DR: In this paper, a general bounding approach is developed which includes all previously presented lower bounds as special cases, and the strongest bound obtained in this way is combined with two enumeration schemes, the relative merits of which are discussed.
Abstract: The classical combinatorial optimization problem of minimizing maximum completion time in a general job-shop has been the subject of extensive research. In this paper we review and extend this work. A general bounding approach is developed which includes all previously presented lower bounds as special cases. The strongest bound obtainable in this way is combined with two enumeration schemes, the relative merits of which are discussed. The results of some computational experiments and a large bibliography are included as well.

187 citations


Journal ArticleDOI
TL;DR: A heuristic method is proposed for solving the problem where n is large; this method requires very little computing and was found to produce very good results for a sample of problems of varying size.
Abstract: The paper considers the problem of n given jobs to be processed on a single machine where it is desirable to minimise the variance of job waiting times. A theorem is presented to the effect that the optimal sequence must be V-shaped (i.e., the jobs must be arranged in descending order of processing times if they are placed before the shortest job, but in ascending order of processing times if placed after it), and an algorithm for determining the optimal solution is given. A heuristic method is proposed for solving the problem where n is large; this method requires very little computing and was found to produce very good results for a sample of problems of varying size. The concept of the “efficient set” is examined and heuristic methods for generating this set are given.
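The V-shape property can be checked numerically on a toy instance (the jobs below are invented; exhaustive search stands in for the paper's algorithm and is usable only for tiny n):

```python
from itertools import permutations

def waiting_times(seq):
    """Waiting time of each job = total processing time of its predecessors."""
    w, t = [], 0
    for p in seq:
        w.append(t)
        t += p
    return w

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def best_sequence(jobs):
    """Exhaustive minimization of waiting-time variance (tiny n only)."""
    return min(permutations(jobs), key=lambda s: variance(waiting_times(s)))
```

For jobs (1, 2, 4, 7, 9) the minimizer comes out V-shaped, descending down to the shortest job and ascending after it, as the theorem requires.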

187 citations


Journal ArticleDOI
Daniel P. Heyman1
TL;DR: It is shown that the optimal cost rate is larger than the one achieved by the comparable optimal N-policy which activates the server when N customers are in the queue and the expected number of customers present when the server is activated is the optimal value of N.
Abstract: We consider situations where the server cannot continuously monitor its queue to sense customer arrivals. For this situation we introduce the T-policy which activates the server T time units after the end of the last busy period. We consider in detail an M/G/1 queue with linear customer holding costs and a fixed charge for activating the server. For the minimum cost-rate criterion we obtain the optimal value of T and the optimal cost rate. We show that the optimal cost rate is larger than the one achieved by the comparable optimal N-policy which activates the server when N customers are in the queue. We also show that under the optimal T-policy, the expected number of customers present when the server is activated is the optimal value of N.
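The link the abstract draws between the T-policy and the N-policy rests on a simple fact: Poisson arrivals over an idle interval of length T average λT customers. A small simulation check (rate and interval invented):

```python
import random

def customers_at_activation(lam, T, reps=20000, seed=1):
    """Average number of Poisson(lam) arrivals accumulated during an idle
    interval of length T, i.e., the queue seen at T-policy activation."""
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        t, n = rng.expovariate(lam), 0   # exponential interarrival times
        while t <= T:
            n += 1
            t += rng.expovariate(lam)
        total += n
    return total / reps
```

With lam = 2 and T = 3 the estimate is close to λT = 6; per the paper, under the optimal T this expected count equals the optimal N of the comparable N-policy.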

Journal ArticleDOI
TL;DR: In this paper, the authors suggest that two aspects of cognitive and personality traits are relevant to user adaptive behavior: (1) cognitive style and (2) implementation apprehension, e.g., resistance to change, defense mechanisms and stress tolerance.
Abstract: Presently, observation and interviews are used in gathering information to facilitate organizational acceptance of MIS modifications. The authors propose that the measurement and evaluation of users' cognitive styles and related personality traits may provide an effective means for attaining successful MIS modifications. The authors suggest that two aspects of cognitive and personality traits are relevant to user adaptive behavior: (1) cognitive style and (2) implementation apprehension, e.g., resistance to change, defense mechanisms and stress tolerance. These design aspects are incorporated in a field study.

Journal ArticleDOI
TL;DR: In this article, a zero-one integer programming formulation of the project scheduling problem is presented which maximizes the discounted value of cash flows in a project when progress payments and cash outflows are made upon the completion of certain activities.
Abstract: A zero-one integer programming formulation of the project scheduling problem is reported which maximizes the discounted value of cash flows in a project when progress payments and cash outflows are made upon the completion of certain activities. While this problem has been treated previously, it has not been addressed in a manner which allows for the type of constraints treated in this paper. And while cash can be treated as any other resource in the noncash flow, resource-constrained version of this problem, the effects of cash inflows and outflows and the time sequencing alternatives for them allow for a different treatment of cash resources than has been previously reported.

Journal ArticleDOI
TL;DR: In this article, a variety of well-known facility location and location-allocation models are shown to be equivalent to, and therefore solvable as, generalized assignment problems (GAPs), a 0-1 programming model in which it is desired to minimize the cost of assigning n tasks to a subset of m agents.
Abstract: A variety of well-known facility location and location-allocation models are shown to be equivalent to, and therefore solvable as, generalized assignment problems (GAPs). The GAP is a 0-1 programming model in which it is desired to minimize the cost of assigning n tasks to a subset of m agents. Each task must be assigned to one agent, but each agent is limited only by the amount of a resource (e.g., time) available to him and the fact that the amount of resource required by a task depends on both the task and the agent performing it. The facility location models considered are divided into public and private sector models. In the public sector, both p-median and capacity constrained p-median problems are treated. In the p-median problem exactly p of n sites must be selected to provide service to all n. Each site has an associated weight (e.g., its population), and it is desired to minimize the weighted average distance between the n sites and their respective service sites. The capacity constrained p-median problem differs only in that there is an upper limit on the sum of the weights of the sites served by each service site. In the private sector we consider both capacitated and uncapacitated warehouse location problems in which each customer's demands must be satisfied by a single warehouse. In addition, we show how certain types of constraints limiting the site and capacity combinations allowed can be incorporated into these models through their treatment as GAPs. An existing algorithm for the GAP is modified to take advantage of the special structure of these facility location problems, and computational results are reported.
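The p-median objective described above is easy to state in code; a brute-force sketch for tiny instances (the distance matrix and weights below are invented, and the paper of course solves these via a GAP algorithm, not enumeration):

```python
from itertools import combinations

def p_median(dist, weight, p):
    """Brute-force p-median: open p of n sites, each site is served by its
    nearest open site; minimize the weighted sum of distances."""
    n = len(dist)
    def cost(S):
        return sum(weight[i] * min(dist[i][j] for j in S) for i in range(n))
    return min(combinations(range(n), p), key=cost)
```

For a 4-site instance with weights [3, 1, 1, 2], opening sites 0 and 3 turns out cheapest; the capacity-constrained variant would additionally reject any S whose assigned weights exceed a service site's limit.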

Journal ArticleDOI
TL;DR: This paper describes the results of a two-and-a-half year effort in proactive planning at the Bureau of the Census in Washington, D.C. and the methodology that was utilized and developed to achieve those results.
Abstract: This paper describes the results of a two-and-a-half year effort in proactive planning at the Bureau of the Census in Washington, D.C. Most large-scale organizations merely react to their future(s) instead of actively planning for and thus anticipating the future(s) they would like to bring about; would the Bureau be bold enough to try and break out of this pattern by engaging in what Ackoff and others have termed proactive or interactive planning? This paper not only describes the substantive results of the effort, but more importantly, the methodology that was utilized and developed to achieve those results. Approximately 120 self-selecting participants from all branches and levels of the Bureau (from secretaries to division heads to the Director) were first given the explicit instruction to think as freely as they could about the year 2000 (i.e., not to be hampered by current constraints in the internal or external environment) and to write out a scenario indicating what for them the Bureau should be l...

Journal ArticleDOI
TL;DR: In this article, the authors take stock of what is known and suggest some conceptual foundations for future progress in the areas of postoptimality analysis and parametric optimization techniques for integer programming.
Abstract: The purpose of this paper is to take stock of what is known and to suggest some conceptual foundations for future progress in the areas of postoptimality analysis and parametric optimization techniques for integer programming.

Journal ArticleDOI
TL;DR: The results here and elsewhere indicate that consensus and collaboration problems between R&D and marketing may be alleviated by replacing the interacting decision making process, which is typically used by many organizations, with a combined nominal-interacting process.
Abstract: Because R&D and marketing are dependent upon each other for new product development, it is imperative that they achieve consensus and organizational integration (a team spirit of collaboration and joint commitment). But, consensus and integration are often inhibited by the differing viewpoints of R&D and marketing, which are a natural consequence of their specialized organizational roles and cultures. There is a need for a process that will bridge these dissonant viewpoints and cultures, while otherwise preserving the specialized orientations of the two parties. The bridging properties of three group decision making processes---nominal, interacting, and combined nominal-interacting---were tested by nine strategic planning teams, each composed of R&D and marketing personnel. The combined nominal-interacting process yielded very high levels of statistical consensus and group integration. The nominal process produced statistical consensus but it did not yield high levels of integration. The interacting process did not produce either consensus or integration. The results here and elsewhere indicate that consensus and collaboration problems between R&D and marketing may be alleviated by replacing the interacting decision making process, which is typically used by many organizations, with a combined nominal-interacting process.

Journal ArticleDOI
TL;DR: A formal theory for predicting the success of information system development projects is developed and one important managerial tool to establish effective user-specialist communication is suggested.
Abstract: It is generally agreed between researchers and practitioners that user involvement is a key to the success of computer based information systems. This paper treats user involvement within the framework of a dyadic (two-party) communication relationship between user and specialist. We add to previous work in three respects. First, the theoretical framework is enriched by specifying what we consider to be important contingency factors for the interaction between user and specialist. Second, a formal theory for predicting the success of information system development projects is developed. Third, one important managerial tool to establish effective user-specialist communication is suggested.

Journal ArticleDOI
TL;DR: In this article, a solution procedure for solving the project time/cost tradeoff problem of reducing a project duration at a minimum cost is introduced, where a minimal cut in a flow network derived from the original project network is used to identify the project activities which should experience a duration modification in order to achieve the total project reduction.
Abstract: This paper introduces a solution procedure for solving the project time/cost tradeoff problem of reducing a project duration at a minimum cost. The solution to the time/cost problem is achieved by locating a minimal cut in a flow network derived from the original project network. This minimal cut is then utilized to identify the project activities which should experience a duration modification in order to achieve the total project reduction. The paper will document this cut-based procedure and provide a practical application to a project situation.

Journal ArticleDOI
TL;DR: In this article, a methodology for estimating minimum cost due-dates in a job shop production system is presented and illustrates a methodology that is based on multiple linear and nonlinear regression to estimate the relationship between the response measures and the value of K, the multiple of total processing time employed in assigning due dates, for various dispatching-labor assignment operating policies.
Abstract: This paper presents and illustrates a methodology for estimating minimum cost due-dates in a job shop production system. Simulation of a hypothetical dual-constrained job shop is used to derive response measures of shop performance for various dispatching, labor assignment and due-date assignment rules. Multiple linear and nonlinear regression is used to estimate the relationship between the response measures and the value of K, the multiple of total processing time employed in assigning due-dates, for various dispatching-labor assignment operating policies. Five response measures are used as dependent variables for the regression models. They are mean job flow-time cost, mean job lateness cost, mean job earliness cost, mean job due-date cost, and mean labor transfer cost. The coefficients of the regression analysis are employed to predict shop performance in terms of the five component cost measures for varying levels of K given the dispatching-labor assignment operating policy. The estimated relationshi...

Journal ArticleDOI
TL;DR: The problem of determining both lot sizes and repeating sequences for N products on one facility is difficult due to its combined combinatorial and continuous nature; this paper formulates the known-sequence case as a convex quadratic program and explores heuristics for finding sequences to feed to it.
Abstract: The problem of determining both lot sizes and repeating sequences for N products on one facility is difficult due to the combinatorial and continuous nature of the problem. The work that has been done on the problem has made various assumptions (the “zero-switch rule” or “equal lot size,” for example) to simplify the problem. Several heuristics have been suggested that come fairly close to a computable lower bound on cost. This paper discusses a mathematical programming formulation of the entire problem, along with reasons for its current nonpracticality. Heuristics for reducing the problem to a manageable mathematical programming formulation are presented. Specifically, a formulation is given when the sequence is known with both potentially unequal lot sizes and idle time periods as variables. The formulation is a convex quadratic program. Heuristics for finding sequences to feed to the quadratic program are explored, and examples from previous literature demonstrate that the methods here give consistent results as good as and sometimes significantly better than previous methods. Finally, we discuss situations in which such methods might be important and situations in which easier methods should suffice.

Journal ArticleDOI
TL;DR: In this paper, the rate of outdating of a perishable product (such as blood) and the age distribution of the inventory are analyzed, assuming that after each period's demand, the inventory is replenished with fresh product, up to a constant level.
Abstract: The rate of outdating of a perishable product (such as blood) and the age distribution of the inventory are analyzed. It is assumed that after each period's demand, the inventory is replenished with fresh product, up to a constant level. The age distribution is treated as a finite Markov chain. Expected outdating is shown to be convex in the inventory level. Upper and lower bounds for the expected outdating are obtained for the case of general demand distribution. Better and easily computable bounds and approximate distribution are found for the Poisson demand case.
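A simulation sketch of the order-up-to model described above. The FIFO issuing discipline and the exact aging/replenishment timing are modeling assumptions of this sketch; the paper derives its bounds analytically from the Markov chain rather than by simulation.

```python
import random

def outdate_rate(S, lifetime, demand, periods=20000, seed=7):
    """Average units outdated per period under order-up-to-S with FIFO
    issuing; stock perishes after `lifetime` periods on the shelf.
    `demand(rng)` returns one period's demand."""
    rng = random.Random(seed)
    ages = [0] * lifetime             # ages[a] = units on hand of age a
    outdated = 0
    for _ in range(periods):
        d = demand(rng)
        for a in range(lifetime - 1, -1, -1):   # issue oldest stock first
            used = min(ages[a], d)
            ages[a] -= used
            d -= used
        outdated += ages[-1]          # unsold units at maximum age perish
        ages = [0] + ages[:-1]        # surviving stock ages one period
        ages[0] = S - sum(ages)       # replenish with fresh product up to S
    return outdated / periods
```

Two sanity checks: if demand always clears the shelf, nothing outdates; with zero demand, everything eventually outdates at rate S/lifetime.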

Journal ArticleDOI
TL;DR: An optimization program that is used to help electric utilities plan investments for power generation offers special economies which make it attractive to power system planners.
Abstract: This paper describes the development and application of an optimization program that is used to help electric utilities plan investments for power generation. For each year over a planning horizon the program determines what types and sizes of generating plants should be constructed, so as to minimize total discounted cost while meeting reliably the system's forecasted demands for electricity. The problem is formulated as a large-scale, chance constrained, mixed integer program. The solution algorithm employs Benders' Partitioning Principle, a mixed integer linear programming code, and a successive linearization procedure. Computation costs are low and, in the important area of sensitivity analysis, the program offers special economies which make it attractive to power system planners. Computational results are presented for a full sized generation planning problem for the six New England states where the algorithm is currently being used for planning generating facilities.

Journal ArticleDOI
TL;DR: Several results associated with observation quality are determined which are sufficient for particularly simple characterizations of an optimal policy for the two-state case and generalizations of several results due to Ross are presented.
Abstract: This paper studies the problem of optimally controlling a discrete-time production process with countable state space which is subject to one of three control settings at each time interval: produce, inspect while producing, or repair (revise) the process. The cost of the item produced and the inspection and repair costs are assumed dependent on the state of the production process. It is assumed that the inspector-decisionmaker receives imperfect on-line observations of the production process at both times of production and inspection. Bounds on optimal cost are obtained. For the two-state case, several results associated with observation quality are determined which are sufficient for particularly simple characterizations of an optimal policy. Generalizations of several results due to Ross are also presented.

Journal ArticleDOI
TL;DR: The single-server queueing system is studied where arrivals are rejected if their waiting plus service times would exceed a fixed amount K.
Abstract: The single-server queueing system is studied where arrivals are rejected if their waiting plus service times would exceed a fixed amount K. Applications of this model include equipment repair facilities and buffered communication devices with constant discharge rate receiving messages from a high-speed data channel. A procedure for computing the equilibrium behavior is described for the case of random arrivals and arbitrary service time requirements. Detailed analytic results and graphs are given for the case of exponentially distributed service time requirements, including the server utilization, rejection probability, mean time in system, mean server busy period, and the mean and probability density function of the virtual waiting time.
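A simulation sketch of the admission rule described, specialized to exponential service (the case the paper analyzes in detail); arrival and service rates and K are invented, and the estimator uses the workload seen by each arrival rather than the paper's analytic equilibrium results.

```python
import random

def rejection_probability(lam, mu, K, n=50000, seed=3):
    """M/M/1 sketch of the bounded-sojourn queue: an arrival is rejected
    if its waiting-plus-service time would exceed K."""
    rng = random.Random(seed)
    workload, rejected = 0.0, 0
    for _ in range(n):
        workload = max(0.0, workload - rng.expovariate(lam))  # work done
        service = rng.expovariate(mu)                         # between arrivals
        if workload + service > K:
            rejected += 1             # admitting it would overstay the limit
        else:
            workload += service
    return rejected / n
```

As K grows the rejection probability vanishes and the system behaves like an ordinary M/M/1 queue; for tight K under heavy load, a noticeable fraction of arrivals is turned away.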

Journal ArticleDOI
TL;DR: An expected cost model is developed for a process whose mean is controlled by an X̄ chart and whose variance is controlled by an R chart; the optimal interval between samples and the expected cost are also found for several examples where Shewhart's heuristic design is used in place of the optimal design.
Abstract: In this paper, we develop an expected cost model for a process whose mean is controlled by an X̄ chart and whose variance is controlled by an R chart. The expected cost comprises the fixed and variable costs of sampling, the cost of investigating and correcting the process when at least one control chart indicates that the process parameters have shifted, and the cost of producing defective units. We use a search procedure to determine the sample size, interval between samples and control limits for both charts that minimize the expected cost. Optimal solutions to numerical examples are presented. A sensitivity analysis of the model is performed. In addition, we find the optimal interval between samples and the expected cost for several examples with large shifts in the mean and variance where Shewhart's heuristic design is used in place of the optimal design. Comparison of the expected cost of the optimal design to the expected cost of Shewhart's design shows an increase in expected cost of only 0.4 to 8.2 percent for the latter design. But other situations are discussed and examples presented which indicate that the optimal design is preferred.
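Shewhart's heuristic design, the baseline against which the cost-optimal design is compared, sets control limits from tabulated constants. A sketch for subgroup size n = 5 (constants from the standard Shewhart tables; the grand mean and average range below are placeholders):

```python
# Constants for subgroup size n = 5 from the standard Shewhart tables.
A2, D3, D4 = 0.577, 0.0, 2.114

def shewhart_limits(xbar_bar, r_bar):
    """Heuristic X-bar and R chart control limits for n = 5 subgroups:
    xbar_bar is the grand mean, r_bar the average subgroup range."""
    return {
        "xbar": (xbar_bar - A2 * r_bar, xbar_bar + A2 * r_bar),
        "R": (D3 * r_bar, D4 * r_bar),
    }
```

The paper's optimal design instead searches over sample size, sampling interval, and limits to minimize the expected cost; the comparison above shows the heuristic costs only 0.4 to 8.2 percent more in the reported examples.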

Journal ArticleDOI
TL;DR: In this paper, an integrated approach consisting of model formulation, empirical investigation, and optimization is carried out to determine an optimal policy for investment in advertising for a firm that wishes to maximize its discounted profits.
Abstract: This paper determines an optimal policy for investment in advertising for a firm that wishes to maximize its discounted profits. To that end, an integrated approach consisting of model formulation, empirical investigation, and optimization is carried out. A model of market share response to advertising is formulated as a first-order Markov process, with nonstationary transition probabilities. These probabilities are assumed to be a function of the advertising goodwill accumulated by the firm and its competitors. The model as specified is nonlinear in its parameters, and nonlinear regression techniques are applied to estimate them. It is shown that this nonlinear form offers, via likelihood ratio tests, a unique opportunity for testing the model, and in a resulting empirical test, the model is found to be consistent with the data. Given these empirical findings, an optimal advertising policy is derived by the use of optimal control theory. The managerial implications of the recommended multi-period policy are examined, and the policy's sensitivity to managerial inputs and economic conditions is analyzed and illustrated.

Journal ArticleDOI
TL;DR: In this paper, a branch-and-bound algorithm is presented for determining an optimal single-cycle policy for arborescent inventory systems with fixed order cost at each stage and proportional holding costs on each stage's echelon inventory.
Abstract: In this paper we examine optimal and near-optimal continuous review policies for a deterministic arborescent inventory system: Known and constant outside demand must be met without backlogging or lost sales at minimum average system cost per unit time. Costs are of two types: A fixed order cost at each stage and proportional holding costs on each stage's echelon inventory. We describe some characteristics of optimal policies and, under fairly mild conditions (e.g., zero initial inventory), prove that the optimal stationary policy is a “single-cycle” policy. We present an efficient branch-and-bound algorithm for determining optimal single-cycle policies for arborescent systems. We also examine the near-optimality of “system myopic” single-cycle policies.

Journal ArticleDOI
TL;DR: In this paper, the effect of decision flexibility on the value of information is explored, and it is shown that sensitivity analysis can be used as a basis for obtaining simplified approximations for EVPIGUFs.
Abstract: This paper explores the effect of decision flexibility on the value of information. The usefulness of calculating the value of information under various assumptions concerning decision flexibility is illustrated with a simple economic example. An upper limit to the value of information given some level of flexibility is the expected value of perfect information given undiminished flexibility (EVPIGUF). By approximating an arbitrary smooth value function with a quadratic equation, first order characteristics of the EVPIGUF are identified. Finally, it is shown that sensitivity analysis can be used as a basis for obtaining simplified approximations for EVPIGUFs.
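EVPIGUF builds on the classical expected value of perfect information. A minimal discrete EVPI computation (the paper's conditioning on flexibility is not modeled here; the payoff table is invented):

```python
def evpi(prior, payoff):
    """Expected value of perfect information for a discrete decision:
    payoff[a][s] = payoff of action a in state s, `prior` over states."""
    # Best action committed to before learning the state.
    best_without = max(sum(p * row[s] for s, p in enumerate(prior))
                       for row in payoff)
    # Expected payoff when the true state is revealed before acting.
    best_with = sum(p * max(row[s] for row in payoff)
                    for s, p in enumerate(prior))
    return best_with - best_without
```

For two equally likely states and payoffs [[10, 0], [0, 10]], perfect information is worth 5; diminished flexibility can only lower the value of information below this EVPI-style ceiling, which is the role EVPIGUF plays above.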

Journal ArticleDOI
TL;DR: In this paper, the authors formulate a general model for an uncapacitated facility location problem in which demands are related to the prices established at the various locations, and pricing decisions and location decisions are determined simultaneously in this model.
Abstract: We formulate a general model for an uncapacitated facility location problem in which demands are related to the prices established at the various locations. Pricing decisions and location decisions are determined simultaneously in this model, in contrast to traditional location models that assume fixed demands and prices. We show that a transformation of the general model is equivalent to the fixed-demand location model of Efroymson and Ray, and thus may be solved by any of the exact or heuristic methods available for that model. Specifying either a private sector objective of maximizing profits or a public sector objective of maximizing net social benefits provides a particular case of the general model. A third plausible objective is the “quasi-public” one of maximizing net social benefits subject to a constraint that ensures sufficient revenues to cover costs. A Lagrangian relaxation of this constraint, which yields another case of the general model, is used to develop a solution procedure for the quasi-public objective. Details of the solution approach are given for quadratic revenue functions, and an example illustrates the procedure.