
Showing papers in "Management Science in 1976"


Book ChapterDOI
TL;DR: In this article, a growth model for the timing of initial purchase of new products is proposed, and a behavioral rationale for the model is offered in terms of innovative and imitative behavior.
Abstract: A growth model for the timing of initial purchase of new products is proposed. The basic assumption of the model is that the timing of a consumer’s initial purchase is related to the number of previous buyers. A behavioral rationale for the model is offered in terms of innovative and imitative behavior. The model yields good predictions of the sales peak and the timing of the peak when applied to historical data. A long-range forecast is developed for the sales of color television sets.

4,408 citations
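
The abstract does not spell out the functional form, but growth models of this innovative/imitative type are commonly written with a first-purchase hazard that rises linearly in the fraction of previous buyers. The sketch below is a minimal discrete-time illustration under that assumption; the coefficients p and q and the market size m are invented, not taken from the paper.

    # Discrete-time sketch of an innovation/imitation diffusion curve. The hazard of a
    # first purchase in a period is assumed to be p + q * (fraction who already bought);
    # p, q and the market size m are illustrative values only.
    def diffusion_sales(p=0.03, q=0.38, m=1_000_000, periods=20):
        cumulative = 0.0
        sales = []
        for _ in range(periods):
            new_buyers = (p + q * cumulative / m) * (m - cumulative)
            sales.append(new_buyers)
            cumulative += new_buyers
        return sales

    s = diffusion_sales()
    peak = max(range(len(s)), key=lambda t: s[t])
    print(f"peak period: {peak}, peak-period sales: {s[peak]:.0f}")

With p smaller than q the curve rises to a peak and then declines, which is the qualitative behavior the abstract refers to.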


Journal ArticleDOI
TL;DR: In this article, families of scoring rules for the elicitation of continuous probability distributions are developed and discussed; these rules involve the computation of a score based on the assessor's stated probabilities and on the event that actually occurs.
Abstract: Personal, or subjective, probabilities are used as inputs to many inferential and decision-making models, and various procedures have been developed for the elicitation of such probabilities. Included among these elicitation procedures are scoring rules, which involve the computation of a score based on the assessor's stated probabilities and on the event that actually occurs. The development of scoring rules has, in general, been restricted to the elicitation of discrete probability distributions. In this paper, families of scoring rules for the elicitation of continuous probability distributions are developed and discussed.

771 citations
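
For readers who have not met scoring rules for continuous forecasts, one widely used strictly proper example (not necessarily among the families developed in the paper) is the continuous ranked probability score, which has a closed form for a normal forecast. The sketch below only illustrates the general idea of scoring a stated distribution against the realized value.

    import math

    def crps_normal(mu, sigma, y):
        """Continuous ranked probability score of a Normal(mu, sigma) forecast
        against the realized value y; lower scores are better."""
        z = (y - mu) / sigma
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
        cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
        return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

    # A sharp forecast that still covers the outcome scores better than a vague one.
    print(crps_normal(10.0, 1.0, 10.5))   # ~0.33
    print(crps_normal(10.0, 5.0, 10.5))   # ~1.19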


Journal ArticleDOI
TL;DR: In this paper, a man-machine interactive mathematical programming method is presented for solving the multiple criteria problem involving a single decision maker, where all decision-relevant criteria or objective functions are concave functions to be maximized, and the constraint set is convex.
Abstract: In this paper a man-machine interactive mathematical programming method is presented for solving the multiple criteria problem involving a single decision maker. It is assumed that all decision-relevant criteria or objective functions are concave functions to be maximized, and that the constraint set is convex. The overall utility function is assumed to be unknown explicitly to the decision maker, but is assumed to be implicitly a linear function, and more generally a concave function, of the objective functions. To solve a problem involving multiple objectives, the decision maker is asked to answer yes-or-no questions regarding certain trade-offs that he likes or dislikes. Convergence of the method is proved, and a numerical example is presented. Tests of the method as well as an extension of the method for solving integer linear programming problems are also described.

732 citations


Journal ArticleDOI
TL;DR: It is shown that significant reductions in crane travel time and distance are obtainable from turnover-based rules, and these improvements can be directly translated into increased throughput capacity for existing systems, and may be used to alter the design of proposed systems.
Abstract: In the past few years, increasing numbers of automatic warehousing systems using computer-controlled stacker cranes have been installed. Our research concerns the scientific scheduling and design of these systems. There are three elements to scheduling: the assignment of multiple items to the same pallet (pallet assignment); the assignment of pallet loads to storage locations (storage assignment); and rules for sequencing storage and retrieve requests (interleaving). This paper deals with optimal storage assignment. Results are obtained which compare the operating performance of three storage assignment rules: random assignment, which is similar to the closest-open-location rule used by many currently operating systems; full turnover-based assignment; and class-based turnover assignment. It is shown that significant reductions in crane travel time and distance are obtainable from turnover-based rules. These improvements can, under certain circumstances, be directly translated into increased throughput capacity for existing systems, and may be used to alter the design (e.g., size and number of racks, speed of cranes) of proposed systems in order to achieve a more desirable system balance between throughput and storage capacity.

606 citations
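
A back-of-the-envelope comparison shows why turnover-based assignment reduces crane travel when demand is skewed across items. The slot distances and the turnover profile below are invented for illustration and are not the operating data used in the paper.

    # Expected single-command crane travel distance under two storage assignment rules,
    # assuming a skewed turnover profile: (a) random assignment over all slots versus
    # (b) full turnover-based assignment (fastest-moving items in the closest slots).
    slots = list(range(1, 101))                                # travel "distance" of each slot
    turnover = sorted((1.0 / i for i in slots), reverse=True)  # skewed, unnormalized demand
    shares = [t / sum(turnover) for t in turnover]

    random_rule = sum(slots) / len(slots)                      # every slot equally likely
    turnover_rule = sum(s * d for s, d in zip(shares, slots))  # busiest items in nearest slots
    print(round(random_rule, 1), round(turnover_rule, 1))      # e.g. ~50.5 vs ~19.3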


Journal ArticleDOI
TL;DR: In this paper, the authors generalize the combinatorial station selection problem discussed by J. M. Rhys, J. D. Murchland, M. L. Balinski and P. Hansen, and define a closure of a directed graph G as a subset of nodes such that if a node belongs to the closure, all its successors also belong to the subset; a maximal closure is a closure of maximal value.
Abstract: This paper generalizes the selection problem discussed by J. M. Rhys [Rhys, J. M. W. 1970. Shared fixed cost and network flows. Management Sci. 17(3), November.], J. D. Murchland [Murchland, J. D. 1968. Rhys's combinatorial station selection problem. London Graduate School of Business Studies, Transport Network Theory Unit, Report LBS-TNT-68, June 10.], M. L. Balinski [Balinski, M. L. 1970. On a selection problem. Management Sci. 17(3), November.] and P. Hansen [Hansen, P. 1974. Quelques approches de la programmation non lineaire en variables 0-1. Conference on Mathematical Programming, Bruxelles, May.]. Given a directed graph G, a closure of G is defined as a subset of nodes such that if a node belongs to the closure, all its successors also belong to the subset. If a real number is associated with each node of G, a maximal closure is defined as a closure of maximal value.

256 citations
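
A maximal closure can be found by a standard reduction to a minimum s-t cut in an augmented network: positive-weight nodes are linked from a source with capacity equal to their weight, negative-weight nodes are linked to a sink with capacity equal to the absolute weight, and the original arcs get unbounded capacity. The sketch below implements that textbook reduction with networkx (an assumed dependency) on an invented graph; it is an illustration of the problem, not code from the paper.

    import networkx as nx

    weights = {"a": 5, "b": -2, "c": -3, "d": 4}   # value attached to each node (illustrative)
    arcs = [("a", "b"), ("a", "c"), ("d", "c")]    # (u, v): if u is in the closure, v must be too

    G = nx.DiGraph()
    for u, v in arcs:
        G.add_edge(u, v)                           # no 'capacity' attribute => treated as infinite
    for v, w in weights.items():
        if w > 0:
            G.add_edge("s", v, capacity=w)
        elif w < 0:
            G.add_edge(v, "t", capacity=-w)

    cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
    closure = source_side - {"s"}                  # nodes on the source side form a maximal closure
    print(closure, sum(weights[v] for v in closure))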


Journal ArticleDOI
TL;DR: In this paper, the authors consider the situation of a family of items sharing a common supplier or common production facility, and present an alternative procedure which is extremely simple to use, yet appears to have rather small cost penalties in relation to the earlier more sophisticated methods.
Abstract: We consider the situation of a family of items sharing a common supplier or common production facility. There is a major setup (header) cost for making a replenishment of the family and a minor (line) cost for each item included in the replenishment. In such a setting it makes sense to coordinate replenishments of the various items of a family. Under the assumption of deterministic demand, several authors have suggested iterative solution procedures which are not necessarily guaranteed to be optimal and yet are not easy to implement. We present an alternative procedure which is extremely simple to use, yet appears to have rather small cost penalties in relation to the earlier, more sophisticated methods.

237 citations
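
The deterministic coordinated-replenishment setting can be made concrete as follows: if item i is included in every k_i-th family replenishment, the cost rate is (A + sum of a_i/k_i)/T plus T/2 times the sum of k_i*h_i*D_i, and the best base cycle T for given multiples follows from the usual square-root trade-off. The naive enumeration below illustrates that cost structure only; it is not the simple procedure proposed in the paper, and all parameter values are invented.

    import math
    from itertools import product

    A = 40.0                                  # major (header) setup cost per family replenishment
    items = [                                 # (minor line cost a_i, holding cost h_i, demand D_i)
        (2.0, 0.5, 1200.0),
        (3.0, 0.2, 400.0),
        (1.5, 1.0, 200.0),
    ]

    def cost_rate(ks):
        """Cost per unit time when item i is ordered every ks[i]-th family replenishment."""
        setup = A + sum(a / k for (a, _, _), k in zip(items, ks))
        hold = sum(k * h * d for (_, h, d), k in zip(items, ks))
        T = math.sqrt(2 * setup / hold)       # best base cycle for these multiples
        return setup / T + T * hold / 2, T

    best = min((cost_rate(ks) + (ks,)
                for ks in product(range(1, 5), repeat=len(items)) if min(ks) == 1),
               key=lambda t: t[0])
    print(best)                               # (cost per unit time, base cycle T, multiples k_i)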


Journal ArticleDOI
TL;DR: In this paper, it is shown that the implied group cardinal utility function must be a linear combination of the individual cardinal utility functions, and that interpersonal comparisons of preferences are required to assess it.
Abstract: Given a group composed of N individuals and given that each has rated all of the alternatives using a cardinal utility function, the problem is to aggregate these to obtain a group cardinal utility function for evaluating each alternative. The cardinal utilities indicate the strength of preference of one alternative relative to others as well as order the alternatives. Five assumptions, which seem reasonable for the aggregation, are postulated, and it is shown that the implied group cardinal utility function must be a linear combination of the individual cardinal utility functions. For assessing such a function, interpersonal comparisons of preferences are required. Suggestions for who should make these comparisons and how they might be done are given.

220 citations
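
The aggregation result is easy to illustrate: once interpersonal weights have been chosen, the group utility of each alternative is just the weighted sum of the individual cardinal utilities. The ratings and weights below are invented; choosing the weights is exactly the interpersonal-comparison step the paper discusses.

    # Group cardinal utility as a weighted linear combination of individual cardinal
    # utilities (illustrative data; the weights encode interpersonal comparisons).
    individual_utilities = {
        "person1": {"x": 1.0, "y": 0.4, "z": 0.0},
        "person2": {"x": 0.0, "y": 1.0, "z": 0.6},
        "person3": {"x": 0.3, "y": 0.0, "z": 1.0},
    }
    weights = {"person1": 0.5, "person2": 0.3, "person3": 0.2}

    group = {alt: sum(weights[p] * u[alt] for p, u in individual_utilities.items())
             for alt in ("x", "y", "z")}
    print(max(group, key=group.get), group)   # alternative with the highest group utility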


Journal ArticleDOI
TL;DR: It is shown that Dinkelbach's algorithm converges superlinearly and often locally quadratically, and a priori and a posteriori error estimates are derived.
Abstract: Dinkelbach's algorithm [Dinkelbach, W. 1967. On nonlinear fractional programming. Management Sci. 13, 492--498.] solving the parametric equivalent of a fractional program is investigated. It is shown that the algorithm converges superlinearly and often locally quadratically. A priori and a posteriori error estimates are derived. Using those estimates and duality as introduced in Part I, a revised version of the algorithm is proposed. In addition, a similar algorithm is presented where, in contrast to Dinkelbach's procedure, the rate of convergence is still controllable. Error estimates are derived also for this algorithm.

201 citations
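
Dinkelbach's parametric idea is to maximize a ratio f(x)/g(x), with g positive, by repeatedly solving the parametric problem max f(x) - q*g(x) and resetting q to the ratio at the maximizer, stopping when the parametric optimum reaches zero. The sketch below runs that iteration on an invented one-dimensional example and solves each subproblem by crude grid search; it illustrates the basic iteration the abstract refers to, not the revised algorithm of the paper.

    def f(x):                         # numerator (illustrative)
        return -x * x + 4 * x + 1

    def g(x):                         # denominator, positive on the feasible grid
        return x + 1

    grid = [i / 1000 for i in range(0, 3001)]          # feasible set: [0, 3]

    q = 0.0
    for _ in range(20):
        x = max(grid, key=lambda x: f(x) - q * g(x))   # parametric subproblem
        if abs(f(x) - q * g(x)) < 1e-9:                # subproblem optimum is ~0: q is the optimal ratio
            break
        q = f(x) / g(x)

    print(x, q)   # maximizer and maximal ratio; here x -> 1.0 and f/g -> 2.0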


Journal ArticleDOI
TL;DR: In this article, a model for maximizing the profit of a product sold under warranty is proposed and analyzed, where the decision variables are the price of the product and the length of the period throughout which the manufacturer is responsible for service.
Abstract: This paper proposes and analyzes a model for maximizing the profit of a product sold under warranty. The decision variables assumed in the model are the price of the product and the length of the period throughout which the manufacturer is responsible for service. The paper focuses on the dependence of optimal profit on price and protection period for a realistic and general description of a warrantied product. Failures are stochastic and repairs are of constant cost. Demand depends exponentially on price and warranty duration. After an introduction and literature review, the basic model is presented and assumptions are made about the particular probability law governing product failures. The optimization procedure is then illustrated for certain values of input parameters. Finally, an extensive economic analysis of the sensitivity of the optimal solution to variations in input parameters is provided.

181 citations
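
To make the trade-off concrete, the toy search below maximizes profit when demand is exponential in price and warranty length, failures occur at a constant rate (so expected repairs over a warranty of length w are lambda*w), and each repair costs a fixed amount. The functional forms match the spirit of the abstract, but all of the specific expressions and parameter values are invented and do not reproduce the paper's model or optimization procedure.

    import math

    c_unit, c_repair, lam = 50.0, 8.0, 0.4                     # unit cost, repair cost, failure rate per year
    k, a, b = 1e5, 0.04, 0.30                                  # demand scale and price/warranty sensitivities

    def profit(price, warranty):
        demand = k * math.exp(-a * price + b * warranty)       # exponential demand in price and warranty
        margin = price - c_unit - c_repair * lam * warranty    # margin net of expected warranty cost per unit
        return margin * demand

    best = max(((profit(p, w), p, w)
                for p in range(60, 151)                        # candidate prices
                for w in (i / 4 for i in range(13))),          # warranty lengths 0..3 years
               key=lambda t: t[0])
    print(best)                                                # (profit, price, warranty length)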


Journal ArticleDOI
TL;DR: This paper considers the project scheduling problem with multiple constrained resources and shows that the choice of priority rule is important with the parallel method, but that with the sampling method, although the rule does affect the distribution of the sample, the choice of rule is not significant.
Abstract: This paper considers the project scheduling problem with multiple constrained resources. Two classes of heuristic procedure, both making use of priority rules, are discussed: the parallel method, which generates just one schedule; and the sampling method, which generates a set of schedules using probabilistic techniques and selects the best schedule from this sample. An experimental investigation is described in which a set of projects with different characteristics is scheduled by each of these heuristics with a variety of priority rules. The effects of the heuristic method, the project characteristics and the priority rules are assessed. It is shown that the choice of priority rule is important with the parallel method, but with the sampling method, although it does affect the distribution of the sample, the choice of rule is not significant. The sampling method with sample size 100 is shown to produce samples at least 7% better than those generated by the corresponding parallel method, with 99% confidence. Further results are discussed and conclusions are presented.

180 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examine changes in the optimal proportions of investment capital placed in a safe asset and in a risky asset by an expected-utility-maximizing, risk-averse investor, showing that the proportion in the safe asset rises with its return provided that the investor's absolute risk aversion is nondecreasing or his proportional risk aversion never exceeds unity.
Abstract: This paper examines changes in the optimal proportions of investment capital placed in a safe asset and in a risky asset by an expected utility maximizing risk averse investor. If the return for the safe asset increases and the risky asset distribution remains fixed, the optimal proportion invested in the safe asset will increase provided that the investor's absolute risk aversion is nondecreasing or his proportional risk aversion never exceeds unity. Otherwise, it can be optimal to decrease holdings in the safe asset when its return increases. If the return for the safe asset remains fixed and the risky distribution improves by a first degree stochastic dominance change, the optimal proportion invested in the risky asset will increase (or not decrease) provided that proportional risk aversion never exceeds one plus the product of the gross return for the safe asset times absolute risk aversion. Otherwise, it may be optimal to decrease holdings in the risky asset when its distribution improves in the indi...
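
A small numerical illustration of the comparative static in the first result: a constant-relative-risk-aversion investor (with relative risk aversion below one, so the sufficient condition in the abstract holds) splits wealth between a safe and a risky asset, and the risky share found by grid search weakly falls when the safe return rises. The utility form, return distribution and parameter values are invented for illustration and are not taken from the paper.

    def optimal_risky_share(safe_gross, risky_outcomes, probs, gamma=0.5):
        """Grid-search the fraction placed in the risky asset that maximizes expected
        CRRA utility (gamma is the constant relative risk aversion, here < 1)."""
        def expected_utility(alpha):
            eu = 0.0
            for r, pr in zip(risky_outcomes, probs):
                wealth = (1 - alpha) * safe_gross + alpha * r   # terminal wealth per dollar invested
                eu += pr * wealth ** (1 - gamma) / (1 - gamma)
            return eu
        return max((i / 100 for i in range(101)), key=expected_utility)

    risky = [0.6, 1.1, 1.6]          # gross returns of the risky asset (illustrative)
    probs = [0.3, 0.4, 0.3]
    for safe in (1.02, 1.06):
        print(safe, optimal_risky_share(safe, risky, probs))   # risky share falls as the safe return rises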

Journal ArticleDOI
TL;DR: In this paper, the relation between the compromise solution and its parameter is investigated; it is shown that, under some nice conditions, the compromise solution is a continuous function of its parameter, and a fundamental monotonicity result (Theorem 3.1) concerning compromise solutions is derived.
Abstract: We report some new results on compromise solutions studied by Yu [Yu, P. L. 1973. A class of solutions for group decision problems. Management Sci. 19(8), April, 936--946.]. The following article focuses on the relation between the compromise solution and its parameter. In particular, we show that, under some nice conditions, the compromise solution is a continuous function of its parameter. A fundamental monotonicity result (Theorem 3.1) concerning compromise solutions is derived. The result enables us to generate the bounds of all compromise solutions. When n = 2, two monotonicity results are derived. These yield a good interpretation of the parameter p: when p is small the “group utility” is emphasized, and when p increases the individual regrets receive more weight. Finally we construct an example to illustrate that the monotonicity results for n = 2 are almost impossible to generalize to n > 2.
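
The role of the parameter p can be seen on a toy finite problem: with an ideal point formed from the best attainable value of each criterion, the compromise solution with parameter p minimizes the l_p norm of the regret vector, so p = 1 stresses total (group) regret while large p stresses the worst individual regret. The alternatives below are invented purely to show the switch.

    alternatives = {                 # (criterion 1, criterion 2), both to be maximized
        "A": (10.0, 4.0),
        "B": (7.0, 6.0),
        "C": (2.0, 9.0),
    }
    ideal = tuple(max(v[i] for v in alternatives.values()) for i in range(2))

    def compromise(p):
        def regret_norm(point):
            return sum((ideal[i] - point[i]) ** p for i in range(2)) ** (1.0 / p)
        return min(alternatives, key=lambda k: regret_norm(alternatives[k]))

    for p in (1, 2, 10):
        print(p, compromise(p))      # p = 1 picks A (least total regret); larger p picks B (more balanced)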

Journal ArticleDOI
TL;DR: In this paper, a duality theory for linear and concave-convex fractional programs is developed and related to recent results by Bector, Craven-Mond, Jagannathan, Sharma-Swarup, et al.
Abstract: This paper, which is presented in two parts, is a contribution to the theory of fractional programming, i.e., maximization of quotients subject to constraints. In Part I a duality theory for linear and concave-convex fractional programs is developed and related to recent results by Bector, Craven-Mond, Jagannathan, Sharma-Swarup, et al. Basic duality theorems of linear, quadratic and convex programming are extended. In Part II Dinkelbach's algorithm solving fractional programs is considered. The rate of convergence as well as a priori and a posteriori error estimates are determined. In view of these results the stopping rule of the algorithm is changed. Also the starting rule is modified using duality as introduced in Part I. Furthermore a second algorithm is proposed. In contrast to Dinkelbach's procedure the rate of convergence is still controllable. Error estimates are obtained too.

Journal ArticleDOI
TL;DR: A model is developed for evaluating subsets where the choice criterion is one of balance among the attributes of items in the subset chosen, and the model is applied to the problem of evaluating subsets of television shows and of choosing the most balanced subset of shows.
Abstract: There are numerous situations in management and elsewhere in which an individual decision maker chooses subsets of multiattributed items. The specification of a measure of goodness for selecting subsets may differ from one situation to the next. In this paper, a model is developed for evaluating subsets where the choice criterion is one of balance among the attributes of items in the subset chosen. A method for determining the parameters of the model from a small number of judgments on subsets using linear programming is discussed. The model is applied to the problem of evaluating subsets of television shows and of choosing the most balanced subset of shows. Several extensions of the model and potential applications are also given.

Journal ArticleDOI
TL;DR: In this article, a formal connection between brand switching models and multi-period analysis of brand choice behavior is established, and a formal, well-defined stochastic process is developed for analysis of any event or combination of events in terms of subsets of the market.
Abstract: We show in this paper the formal connection between brand switching models and multiperiod analysis of brand choice behavior. A formal, well-defined stochastic process is developed from stochastic choice premises which permits analysis of any event or combination of events in terms of subsets of the market. The formal connection between single-brand analysis and multi-brand analysis is demonstrated. The parameters of the stochastic process involve the market shares and a measure of the heterogeneity of the population. This information is sufficient to permit the estimation of aggregate brand switching on adjacent trials or brand penetration and multiple brand purchasing over sequences of purchase occasions.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a computer applicable methodology for analyzing multiple objective linear programming problems when interval, rather than fixed, weights are assigned to each of the objectives, and the procedure presented in this paper involves converting the multiple-objective linear programming problem with interval criterion weights into an equivalent vector-maximum problem.
Abstract: The purpose of this research is to develop a computer applicable methodology for analyzing multiple objective linear programming problems when interval, rather than fixed, weights are assigned to each of the objectives. Rather than obtaining a single efficient extreme point as one would normally expect with fixed weights, a cluster of efficient extreme points is typically generated when interval weights are specified. Then, from the subset of solutions generated, it is contemplated that the decision-maker will be able to qualitatively identify his efficient extreme point of greatest utility (which should either be the decision-maker's optimal point or be close enough to it to terminate the decision process). The procedure presented in this paper involves converting the multiple objective linear programming problem with interval criterion weights into an equivalent vector-maximum problem. Once in such form, an algorithm for the vector-maximum problem can then be used to determine the subset of “efficient ...

Journal ArticleDOI
TL;DR: In this paper, the costs and benefits of using corporate simulation models are examined. And the authors speculate on future developments in the field of corporate modeling, drawing on the survey results.
Abstract: Why are over 2,000 corporations either using, developing, or planning to develop some form of corporate simulation model? What types of companies are using corporate planning models? How are they being used? Which resources are required? These are among the questions which were raised in a recent survey of 346 companies whose results are summarized in this paper. The paper also examines the costs and benefits to be derived from using corporate simulation models. Finally, drawing on the survey results, the authors speculate on future developments in the field of corporate modeling.

Journal ArticleDOI
TL;DR: An efficient algorithm is presented for the 0-1 knapsack problem and the availability of a callable FORTRAN subroutine which solves this problem is announced.
Abstract: In this note we present an efficient algorithm for the 0-1 knapsack problem and announce the availability of a callable FORTRAN subroutine which solves this problem. Computational results show that 50 variable problems can be solved in an average of 4 milliseconds and 200 variable problems in an average of 7 milliseconds on an IBM 360/91.
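
The note describes a specialized routine whose details are not given in the abstract; as a point of reference only, the sketch below solves the same 0-1 knapsack problem (maximize total value subject to a capacity on total weight) with the textbook dynamic-programming recursion on invented data.

    def knapsack(values, weights, capacity):
        """Textbook 0-1 knapsack DP: best[c] is the maximum value achievable with capacity c."""
        best = [0] * (capacity + 1)
        for v, w in zip(values, weights):
            for c in range(capacity, w - 1, -1):   # descend so each item is used at most once
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    print(knapsack(values=[10, 13, 7, 8], weights=[3, 4, 2, 3], capacity=7))   # -> 23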

Journal ArticleDOI
TL;DR: Heuristic methods are presented for scheduling telephone traffic exchange operators to meet demand that varies over a 24-hour operating period, both in terms of solution quality and computational efficiency.
Abstract: Heuristic methods are presented for scheduling telephone traffic exchange operators to meet demand that varies over a 24-hour operating period. Two types of heuristics are described: (1) for determining the work shift types to be considered in preparing an operator shift schedule, and (2) for constructing an operator shift schedule from a given set of work shift types. These heuristics are evaluated both in terms of solution quality and computational efficiency, using actual operating data.
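
As a rough picture of what the second kind of heuristic does, the sketch below greedily adds, one operator at a time, the shift type that covers the most still-unmet period requirements. The demand profile, the shift types and the greedy rule itself are invented for illustration; the paper's heuristics are evaluated on actual operating data and may differ in detail.

    demand = [2, 3, 5, 6, 6, 4, 3, 2]                    # operators required in each period
    shifts = {"early": range(0, 4), "mid": range(2, 6), "late": range(4, 8)}

    remaining = list(demand)
    schedule = {name: 0 for name in shifts}
    while any(r > 0 for r in remaining):
        # add one operator on the shift type that still covers the most unmet periods
        name, periods = max(shifts.items(),
                            key=lambda kv: sum(1 for t in kv[1] if remaining[t] > 0))
        schedule[name] += 1
        for t in periods:
            remaining[t] = max(0, remaining[t] - 1)
    print(schedule)                                      # operators assigned to each shift type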

Journal ArticleDOI
TL;DR: In this article, a bound on the perishing cost is obtained that is a function only of the total inventory on hand; substituting this bound and an approximate transfer function, optimal policies for the new problem can be computed myopically.
Abstract: Recent work in describing optimal ordering policies for perishable inventory casts the problem as a multi-dimensional dynamic program, the dimensionality being one less than the useful lifetime of the product, m. As m becomes large the computational problem becomes severe, so that developing approximations is a key problem. A bound on the perishing cost is obtained that is a function only of the total inventory on hand. Substituting this bound and an approximate transfer function, optimal policies can be computed myopically for the new problem. When the demand density possesses a monotone likelihood ratio, it is shown that the function to be minimized is strictly pseudo-convex. Sample computations for special cases indicate that the approximation results in costs which are generally less than one percent higher than the optimal.

Journal ArticleDOI
Warren E. Walker
TL;DR: The algorithm is being used by the U.S. Environmental Protection Agency's Office of Solid Waste Management Programs to decide on the number, type, size, and location of the disposal facilities to operate in a region, and how to allocate the region's wastes to these facilities.
Abstract: An algorithm with three variations is presented for the approximate solution of fixed charge problems. Computational experience shows it to be extremely fast and to yield very good solutions. The basic approach is (1) to obtain a local optimum by using the simplex method with a modification of the rule for selection of the variable to enter the basic solution, and (2) once at a local optimum, to search for a better extreme point by jumping over adjacent extreme points to resume iterating two or three extreme points away. This basic approach is the same as that used by Steinberg [Steinberg, D. I. 1970. The fixed charge problem. Naval Res. Log. Quart. 17, 217--236.], Cooper [Cooper, L. 1975. The fixed charge problem---I: A new heuristic method. Comp. & Maths. with Appls. 1, 89--95.], and Denzler [Denzler, D. R. 1969. An approximate algorithm for the fixed charge problem. Naval Res. Log. Quart. 16, 411--416.] in their algorithms, but is an extension and improvement of all three. The algorithm is being used by the U.S. Environmental Protection Agency's Office of Solid Waste Management Programs to decide on the number, type, size, and location of the disposal facilities to operate in a region, and how to allocate the region's wastes to these facilities.

Journal ArticleDOI
TL;DR: It is hoped that the paper will help to unify the health status index concept, will serve to standardize terminology and notation, and will facilitate comparisons of the various models.
Abstract: A general mathematical formulation of the health status index model is developed. Equations are given for three types of population health indexes and for the determination of the amount of health improvement created by a health care program. Fourteen of the major health status index models from the literature are analyzed, and it is shown that each can be viewed as a special case of the general formulation. It is hoped that the paper will help to unify the health status index concept, will serve to standardize terminology and notation, and will facilitate comparisons of the various models.
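
The general formulation the abstract refers to is not reproduced here, but many of the indexes it covers reduce to a weighted average of the proportions of the population in each health state, with state weights between 0 (death) and 1 (full health); the program benefit is then the change in this index. The states, weights and prevalences below are invented solely to show the arithmetic.

    # A population health status index as a prevalence-weighted average of health-state
    # weights, and the improvement attributed to a (hypothetical) health care program.
    state_weights = {"well": 1.0, "disabled": 0.6, "bedridden": 0.3, "dead": 0.0}
    before = {"well": 0.80, "disabled": 0.12, "bedridden": 0.05, "dead": 0.03}
    after = {"well": 0.84, "disabled": 0.10, "bedridden": 0.03, "dead": 0.03}

    def health_index(prevalence):
        return sum(state_weights[s] * p for s, p in prevalence.items())

    print(health_index(before), health_index(after), health_index(after) - health_index(before))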

Journal ArticleDOI
TL;DR: In this paper, the authors integrate aggregate planning and systematic productivity change into a single computer-based model which permits the development of aggregate-output plans in the face of changing productivity.
Abstract: Research reports of others indicate that many manufacturing organizations (a) experience the problem of aggregate planning, and (b) experience systematic productivity changes throughout the “life” of a product. Methods for resolving (a) and for quantifying (b) have been developed and applied independently in the operations management literature. All current aggregate planning models are suitable only for constant productivity situations. The current research integrates (a) and (b) into a single computer-based model which permits the development of aggregate-output plans in the face of changing productivity. The model requires reformulation of traditional aggregate planning methods to incorporate changes in productivity and thereafter solves the reformulated planning problem using direct computer search. The potential significance of the model is demonstrated by generating a series of aggregate plans for various learning rates. These plans are then used to develop manpower schedules, to analyze cash flow, and to make product pricing decisions.
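
The productivity change the model incorporates is usually captured by a learning curve, under which the labor required for the n-th cumulative unit falls as a power of n. The short sketch below shows how required hours for successive monthly output plans fall as learning accumulates; the learning rate, first-unit hours and output figures are invented and only illustrate the effect, not the paper's model.

    import math

    def labor_hours(start_unit, units, first_unit_hours=10.0, learning_rate=0.85):
        """Hours to produce `units` units starting at cumulative unit `start_unit`,
        under a standard learning curve (hours for unit n = first_unit_hours * n**b)."""
        b = math.log2(learning_rate)              # learning exponent (negative)
        return sum(first_unit_hours * n ** b for n in range(start_unit, start_unit + units))

    cumulative = 1
    for month, output in enumerate([100, 120, 150], start=1):
        hours = labor_hours(cumulative, output)
        cumulative += output
        print(month, output, round(hours, 1))     # required hours fall as learning accumulates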

Journal ArticleDOI
TL;DR: In this article, an improved model for solving the long-run multiple warehouse location problem was proposed, which provides a synthesis of a mixed integer programming formulation for the single-period warehouse location model with a dynamic programming procedure for finding the optimal sequence of configurations over multiple periods.
Abstract: This paper proposes an improved model for solving the long-run multiple warehouse location problem. The approach used provides a synthesis of a mixed integer programming formulation for the single-period warehouse location model with a dynamic programming procedure for finding the optimal sequence of configurations over multiple periods. We show that only the Rt best rank order solutions in any single period need be considered as candidates for inclusion in the optimal multi-period solution. Thus the computational feasibility of the dynamic programming procedure is enhanced by restricting the state space to these Rt best solutions. Computational results on the ranking procedure are presented, and a problem involving two plants, five warehouses, 15 customer zones, and five periods is solved to illustrate the application of the method.
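
The dynamic-programming step can be pictured with a toy instance: given, for each period, a short list of candidate warehouse configurations and their single-period operating costs (standing in for the Rt best single-period solutions), pick the cheapest sequence when changing configuration between periods incurs a switching cost. The candidate costs and the flat switching cost below are invented, and the real model's reconfiguration costs are richer than this.

    period_candidates = [                     # per period: {configuration: single-period cost}
        {"AB": 100.0, "AC": 104.0},
        {"AB": 102.0, "AC": 98.0},
        {"AB": 101.0, "AC": 99.0},
    ]
    switch_cost = 6.0                         # assumed cost of changing configurations between periods

    best = dict(period_candidates[0])         # cheapest cost of ending period 1 in each configuration
    for candidates in period_candidates[1:]:
        best = {cfg: cost + min(prev_cost + (0.0 if prev == cfg else switch_cost)
                                for prev, prev_cost in best.items())
                for cfg, cost in candidates.items()}
    print(min(best.values()))                 # cost of the cheapest multi-period configuration sequence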

Journal ArticleDOI
TL;DR: In this article, the authors apply cooperative game theory to the assignment of “fair” costs and benefits to the participants in a cooperative venture, illustrated with a problem in water resource development.
Abstract: As the demand for natural resources intensifies, the costs of further exploitation of these resources become enormous. These enormous costs in turn require the development of a given resource to be a cooperative venture among several participants, each of whom must be assured that the costs and benefits of the venture will be “fairly” distributed among them. It is at this point that the theory of cooperative games may offer guidelines as to what is fair or not fair to each participant or player in a given cost allocation. To illustrate an application of cooperative game theory in assigning “fair” costs and benefits to the participants in a cooperative venture we will consider a problem in water resource development.
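
One allocation rule that cooperative game theory supplies is the Shapley value, which charges each participant its average marginal cost over all orders in which the coalition could form. The three-participant cost figures below are invented; the paper discusses which solution concepts are appropriate for the water-resources setting, and the Shapley value is only one candidate.

    from itertools import permutations

    cost = {                                  # c(S): cost of serving coalition S on its own (illustrative)
        frozenset(): 0.0,
        frozenset("A"): 60.0, frozenset("B"): 50.0, frozenset("C"): 40.0,
        frozenset("AB"): 90.0, frozenset("AC"): 80.0, frozenset("BC"): 70.0,
        frozenset("ABC"): 110.0,
    }
    players = ["A", "B", "C"]

    shapley = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:                      # average each player's marginal cost over all join orders
        coalition = frozenset()
        for p in order:
            shapley[p] += cost[coalition | {p}] - cost[coalition]
            coalition = coalition | {p}
    shapley = {p: round(v / len(orders), 2) for p, v in shapley.items()}
    print(shapley)                            # allocations sum to the joint cost of 110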

Journal ArticleDOI
TL;DR: In this paper, a continuous time model of cash management is formulated with stochastic demand and allowing for positive and negative cash balances, and the form of the optimal policy is assumed to be of a simple form d, D, U, u.
Abstract: A continuous time model of cash management is formulated with stochastic demand and allowing for positive and negative cash balances. The form of the optimal policy is assumed to be of a simple form d, D, U, u. The parameters of the optimal policy are explicitly evaluated and the properties of the system are discussed.
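
Policies of this control-band type restore the balance to an inner level whenever it drifts to an outer limit: up to D when it falls to d, and down to U when it rises to u. The discrete-time simulation below just shows the mechanics of such a policy; the cash-flow process, the cost rates and the parameter values are invented, and the paper's model evaluates the policy parameters analytically rather than by simulation.

    import random

    def simulate(d, D, U, u, days=10_000, seed=1):
        """Count transfers and accumulate holding/penalty cost under a (d, D, U, u) policy."""
        rng = random.Random(seed)
        balance, transfers, holding = 0.5 * (D + U), 0, 0.0
        for _ in range(days):
            balance += rng.gauss(0.0, 10.0)             # stochastic net cash flow for the day
            if balance <= d:
                balance, transfers = D, transfers + 1   # replenish cash up to D
            elif balance >= u:
                balance, transfers = U, transfers + 1   # invest excess cash down to U
            holding += 0.0002 * max(balance, 0.0) + 0.001 * max(-balance, 0.0)
        return transfers, round(holding, 2)

    print(simulate(d=-20.0, D=20.0, U=60.0, u=100.0))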

Journal ArticleDOI
TL;DR: In this article, a new branch-and-bound procedure specialized for the fixed-charge transportation problem is developed, in which simple penalties, easily constructed from the optimal solution of the relaxed transportation problem, are used to fathom and to guide separation.
Abstract: A new branch-and-bound procedure specialized for the fixed-charge transportation problem has been developed. The technique strongly exploits the underlying transportation structure. The relaxed problem assumes this form and simple penalties are easily constructed from the optimal solution of this transportation problem. These penalties are used in the standard branch-and-bound tasks of fathoming and guiding separation. The procedure has been coded, and over sixty test problems have been solved.

Journal ArticleDOI
TL;DR: In this article, a simulation study of the relative impact of due-date assignment, dispatching and labor assignment decision rules on the performance of a dual-constrained job shop is presented.
Abstract: This paper describes a simulation study of the relative impact of due-date assignment, dispatching and labor assignment decision rules on the performance of a hypothetical dual-constrained job shop. Six criteria are used to measure the performance of the decision rules. They are mean flow-time, variance of flow-time, mean lateness, variance of lateness, proportion of jobs late, and total labor transfers. Multiple regression is the principal method of analysis of the results of the experiments. This technique provides regression coefficients and analysis of variance statistics for the decision rules for the various performance measures. Since the objective of this study is to go beyond simple statements of the significance of the various decision rules, the multiple regression results are used with analysis of variance statistics to indicate the relative impact of the decision rules. The regression coefficients and the omega-squared (ω²) indices [Hays, W. L. 1962. Statistics. Holt, Rinehart and Winston, New York, 406--408.] indicate that the relative importance of the due-date assignment, dispatching and labor assignment decision rules is dependent upon the measure of performance considered. In addition, the relative importance and optimality of the dispatching rules are dependent upon the due-date tightness for selected performance measures.

Journal ArticleDOI
TL;DR: In this paper, an explicit utility formulation, incorporating both risk and time preference, was developed based on some results in the axiomatic theory of choice under uncertainty, for Markov decision processes with finite state and action spaces.
Abstract: Optimality criteria for Markov decision processes have historically been based on a risk neutral formulation of the decision maker's preferences. An explicit utility formulation, incorporating both risk and time preference and based on some results in the axiomatic theory of choice under uncertainty, is developed. This forms an optimality criterion called utility optimality with constant aversion to risk. The objective is to maximize the expected utility using an exponential utility function. Implicit in the formulation is an interpretation of the decision process which is not sequential. It is shown that optimal policies exist which are not necessarily stationary for an infinite horizon stationary Markov decision process with finite state and action spaces. An example is given.

Journal ArticleDOI
TL;DR: The objective is to find a grouping of tasks into stations that satisfies all precedence relations and minimizes the number of stations, subject to the constraint that the probability that the resulting station work content at each station is no more than the given cycle time is bounded by a given value.
Abstract: Consider an assembly line balancing problem with stochastic task times. Our objective is to find a grouping of tasks into stations that satisfies all precedence relations and minimizes the number of stations, subject to the constraint that the probability that the resulting station work content at each station is no more than the given cycle time is bounded by a given value. Similar to Held and Karp's approach, we formulate the problem in dynamic programming. The solution procedure is based on Mitten's preference order dynamic programming.
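
The chance constraint in this formulation can be made concrete under a normal approximation: a set of tasks fits a station if its mean total time plus a safety factor times its standard deviation stays within the cycle time. The check below uses that approximation with invented task data and a required probability of 0.95; the paper's dynamic-programming procedure, not shown here, searches over groupings subject to such a feasibility test and to the precedence relations.

    import math
    from statistics import NormalDist

    def station_feasible(tasks, cycle_time, min_prob=0.95):
        """tasks: list of (mean, variance) of the task times assigned to one station.
        Returns True if, under a normal approximation, the station work content is
        no more than the cycle time with probability at least min_prob."""
        mean = sum(m for m, _ in tasks)
        std = math.sqrt(sum(v for _, v in tasks))
        z = NormalDist().inv_cdf(min_prob)       # safety factor for the required probability
        return mean + z * std <= cycle_time

    print(station_feasible([(3.0, 0.25), (4.0, 0.5), (2.0, 0.3)], cycle_time=11.0))   # True
    print(station_feasible([(3.0, 0.25), (4.0, 0.5), (2.0, 0.3), (2.5, 0.4)], 11.0))  # False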