Showing papers in "Management Science in 1963"


Journal ArticleDOI
TL;DR: The DELPHI method was devised in order to obtain the most reliable opinion consensus of a group of experts by subjecting them to a series of questionnaires in depth interspersed with controlled opinion feedback as mentioned in this paper.
Abstract: This paper gives an account of an experiment in the use of the so-called DELPHI method, which was devised in order to obtain the most reliable opinion consensus of a group of experts by subjecting them to a series of questionnaires in depth interspersed with controlled opinion feedback.

4,856 citations


Journal ArticleDOI
TL;DR: Preliminary evidence suggests that the relatively few parameters used by the model can lead to very nearly the same results obtained with much larger sets of relationships among securities, as well as the possibility of low-cost analysis.
Abstract: This paper describes the advantages of using a particular model of the relationships among securities for practical applications of the Markowitz portfolio analysis technique. A computer program has been developed to take full advantage of the model: 2,000 securities can be analyzed at an extremely low cost—as little as 2% of that associated with standard quadratic programming codes. Moreover, preliminary evidence suggests that the relatively few parameters used by the model can lead to very nearly the same results obtained with much larger sets of relationships among securities. The possibility of low-cost analysis, coupled with a likelihood that a relatively small amount of information need be sacrificed, makes the model an attractive candidate for initial practical applications of the Markowitz technique.
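
The key computational saving comes from the single-index ("diagonal") structure of the model. The sketch below is illustrative only, not the paper's program, and all numbers are hypothetical: it shows how portfolio variance can be computed from each security's sensitivity to one index plus a residual variance, rather than from a full covariance matrix.

```python
import numpy as np

# Single-index (diagonal) model sketch: each security's return is assumed to be
# R_i = a_i + b_i * I + e_i, with I a common index and the residuals e_i
# uncorrelated across securities.  Portfolio variance then needs only the b_i
# and the residual variances, not an n-by-n covariance matrix.

def portfolio_variance_single_index(weights, betas, resid_vars, index_var):
    weights = np.asarray(weights, dtype=float)
    betas = np.asarray(betas, dtype=float)
    resid_vars = np.asarray(resid_vars, dtype=float)
    systematic = (weights @ betas) ** 2 * index_var      # common-index term
    idiosyncratic = np.sum(weights ** 2 * resid_vars)    # security-specific term
    return systematic + idiosyncratic

# Hypothetical three-security portfolio:
print(portfolio_variance_single_index(
    weights=[0.5, 0.3, 0.2],
    betas=[1.1, 0.8, 1.4],
    resid_vars=[0.02, 0.01, 0.03],
    index_var=0.04))
```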

2,545 citations


Book ChapterDOI
TL;DR: The heuristic approach outlined in this paper appears to offer significant advantages in the solution of this class of problems in that it provides considerable flexibility in the specification (modeling) of the problem to be solved and is economical of computer time.
Abstract: A heuristic computer program for locating warehouses is outlined and compared with efforts at solving the problem either by means of simulation or as a variant of linear programming. The heuristic approach outlined offers advantages in the solution of this class of problems in that it (1) provides considerable flexibility in the specification of the problem to be solved; (2) can be used to study large-scale problems, that is, complexes with several hundred potential warehouse sites and several thousand shipment destinations; and (3) is economical of computer time.
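
The abstract does not spell out the heuristic itself, so the following is only a minimal sketch of one greedy "add" heuristic in the same spirit, with hypothetical costs and demands: open, one at a time, the warehouse whose addition most reduces total cost, and stop when no further addition helps.

```python
# Greedy "add" location heuristic (illustrative sketch, not the authors' program).

def total_cost(open_sites, fixed_cost, ship_cost, demand):
    # ship_cost[s][d] is the unit cost of shipping from site s to destination d;
    # each destination is served from its cheapest open site.
    shipping = sum(min(ship_cost[s][d] for s in open_sites) * demand[d]
                   for d in range(len(demand)))
    return sum(fixed_cost[s] for s in open_sites) + shipping

def greedy_locate(fixed_cost, ship_cost, demand):
    open_sites, candidates = [], set(range(len(fixed_cost)))
    best = float("inf")
    while candidates:
        site, cost = min(
            ((s, total_cost(open_sites + [s], fixed_cost, ship_cost, demand))
             for s in candidates),
            key=lambda t: t[1])
        if cost >= best:          # no candidate improves the current solution
            break
        open_sites.append(site)
        candidates.remove(site)
        best = cost
    return open_sites, best

# Hypothetical data: 3 candidate sites, 3 shipment destinations.
fixed_cost = [100, 80, 120]
ship_cost = [[4, 9, 5], [6, 3, 8], [5, 6, 2]]
demand = [30, 20, 25]
print(greedy_locate(fixed_cost, ship_cost, demand))
```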

916 citations


Journal ArticleDOI
TL;DR: In this article, the equivalence of the Koopmans-Beckmann problem to a linear assignment problem with certain additional constraints is demonstrated. A method for calculating a lower bound on the cost function is presented, and this forms the basis for an algorithm to determine optimal solutions.
Abstract: This paper presents a formulation of the quadratic assignment problem, of which the Koopmans-Beckmann formulation is a special case. Various applications for the formulation are discussed. The equivalence of the problem to a linear assignment problem with certain additional constraints is demonstrated. A method for calculating a lower bound on the cost function is presented, and this forms the basis for an algorithm to determine optimal solutions. Further generalizations to cubic, quartic, N-adic problems are considered.
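
For reference, a hedged sketch of the standard formulation (conventional notation, not necessarily the paper's symbols): with binary assignment variables x_{ij} = 1 when facility i is placed at location j,

```latex
\min_{x}\ \sum_{i,j,k,l} c_{ijkl}\, x_{ij}\, x_{kl}
\quad \text{s.t.} \quad
\sum_{i} x_{ij} = 1 \ \ \forall j, \qquad
\sum_{j} x_{ij} = 1 \ \ \forall i, \qquad
x_{ij} \in \{0,1\}.
```

The Koopmans-Beckmann case corresponds to cost coefficients of the product form c_{ijkl} = f_{ik} d_{jl}, i.e., the flow between facilities i and k times the distance between locations j and l.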

867 citations


Journal ArticleDOI
TL;DR: A computer program governed by an algorithm which determines how relative location patterns should be altered to obtain sequentially the most improved pattern with each change, commands their alteration, evaluates the results of alterations, and identifies the sub-optimum relative location patterns.
Abstract: This paper presents a new methodology for determining suboptimum relative location patterns for physical facilities. It presents a computer program governed by an algorithm which determines how relative location patterns should be altered to obtain sequentially the most improved pattern with each change, commands their alteration, evaluates the results of alterations, and identifies the sub-optimum relative location patterns. The computer output yields a block diagrammatic layout of the facility areas, and the areas need not be equal. A manufacturing plant layout example is used because the methodology is well illustrated by reference to some kind of example, and this one is commonly known. The methodology itself, however, is general in nature and not restricted to such applications.
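
The sketch below is not the paper's program; it is a minimal pairwise-interchange loop in the same spirit, with hypothetical flow and distance data: at each step it makes the single exchange of two facilities' locations that most reduces total flow-times-distance cost, and stops when no exchange improves the pattern.

```python
import itertools

def layout_cost(assign, flow, dist):
    # assign[f] is the location index currently holding facility f.
    n = len(assign)
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n))

def improve_layout(assign, flow, dist):
    assign = list(assign)
    while True:
        base = layout_cost(assign, flow, dist)
        best_swap, best_cost = None, base
        for i, j in itertools.combinations(range(len(assign)), 2):
            assign[i], assign[j] = assign[j], assign[i]   # try the exchange
            c = layout_cost(assign, flow, dist)
            if c < best_cost:
                best_swap, best_cost = (i, j), c
            assign[i], assign[j] = assign[j], assign[i]   # undo
        if best_swap is None:                             # no improving exchange
            return assign
        i, j = best_swap
        assign[i], assign[j] = assign[j], assign[i]       # commit the best exchange

# Hypothetical 3-facility example:
flow = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]   # material flow between facilities
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]   # distance between locations
print(improve_layout([0, 1, 2], flow, dist))
```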

803 citations


Journal ArticleDOI
TL;DR: In this article, the authors report some research, ideas, and theory about managerial decision making, including the idea that management's own past decisions can be incorporated into a system for improving their present decisions.
Abstract: This paper reports some research, ideas, and theory about managerial decision making. The first research projects dealt with are aggregate production and employment scheduling. From this is developed the idea that management's own (past) decisions can be incorporated into a system for improving their present decisions. Decision rules are developed, with the coefficients in the rules derived from management's past decisions (rather than from a cost or value model). Half a dozen test cases are used to illustrate and test these ideas (theory). Some rationale about decision making in organizations and criteria surfaces is supplied to help interpret the major ideas presented.
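
A minimal sketch of the general idea, not the paper's rules or data: the coefficients of a linear decision rule are fitted to management's own past decisions by least squares, and the fitted rule is then applied to the current period. The variables and numbers below are hypothetical.

```python
import numpy as np

# Fit a linear production-rate rule to management's past decisions, then
# use the fitted rule for the current period (illustrative only).
past_forecast   = np.array([100, 120, 90, 110, 130])   # demand forecasts
past_inventory  = np.array([40, 35, 50, 45, 30])       # starting inventories
past_production = np.array([105, 118, 95, 108, 128])   # management's own decisions

X = np.column_stack([np.ones(len(past_forecast)), past_forecast, past_inventory])
coef, *_ = np.linalg.lstsq(X, past_production, rcond=None)

def decide_production(forecast, inventory):
    """Apply the fitted decision rule to the current period."""
    return coef @ [1.0, forecast, inventory]

print(decide_production(forecast=115, inventory=38))
```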

396 citations


Journal ArticleDOI
TL;DR: In this article, a new efficiency criterion is proposed for the Markowitz portfolio selection approach, where the use of standard deviation as a measure of risk in the original Markowitz analysis and elsewhere in economic theory is sometimes unreasonable.
Abstract: A new efficiency criterion is proposed for the Markowitz portfolio selection approach. It is shown that the use of standard deviation as a measure of risk in the original Markowitz analysis and elsewhere in economic theory is sometimes unreasonable. An investment with a relatively high standard deviation (σ) will be relatively safe if its expected value (E) is sufficiently high. For the net result may be a high expected floor, E − Kσ, beneath the future value of the investment (where K is some constant). The revised efficient set is shown to eliminate paradoxical cases arising from the Markowitz calculation. It also simplifies the task left to the investor. For it yields a smaller efficient set (which is a subset of the Markowitz efficient set) and therefore reduces the range of alternatives from among which the investor must still select his portfolio. The proposed criterion may also be somewhat more easily understood by the nonprofessional.
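
An illustrative reading of the criterion, with hypothetical numbers: each portfolio P is judged not only by its expected value E_P but also by its expected floor

```latex
\mathrm{floor}(P) \;=\; E_P - K\,\sigma_P ,
```

where K reflects the investor's required confidence. With K = 2, a Markowitz-efficient portfolio with E = 10% and σ = 4% has a floor of 2%, while another with E = 14% and σ = 5% has a floor of 4%; the second offers both a higher expectation and a higher floor, so the first drops out of the revised efficient set.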

393 citations


Journal ArticleDOI
TL;DR: This paper is concerned with the manner in which an investigation by simulated experimentation is conducted after the model has been formulated and the computer program completed, and with methods of improving the precision of results.
Abstract: This paper is concerned with the manner in which an investigation by simulated experimentation is conducted, after the model has been formulated and the computer program completed. It considers the...

340 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived the probability distribution of the present worth, annual cost, or internal rate of return of the proposed investment under certain assumptions, and used this information for an accurate appraisal of a risky investment.
Abstract: The amount of risk involved is often one of the important considerations in the evaluation of proposed investments. This paper is concerned with the derivation of the type of explicit, well-defined, and comprehensive information that is essential for an accurate appraisal of a risky investment. It is shown how, under certain assumptions, such information in the form of the probability distribution of the present worth, annual cost, or internal rate of return of the proposed investment can be derived. The derivation and use of this information is then illustrated by an example.
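
As an illustration only (one common set of assumptions, not necessarily the exact ones used in the paper): if the net cash flows X_t are mutually independent and normally distributed, the present worth at discount rate r is itself normal, and its distribution follows directly. The figures below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

# Present worth PW = sum_t X_t / (1 + r)**t of independent normal cash flows
# X_t with means mu[t] and standard deviations sigma[t] is normal; its mean
# and variance follow by direct calculation.

def present_worth_distribution(mu, sigma, r):
    mean = sum(m / (1 + r) ** t for t, m in enumerate(mu))
    var = sum(s ** 2 / (1 + r) ** (2 * t) for t, s in enumerate(sigma))
    return mean, sqrt(var)

mu = [-1000, 400, 400, 400]   # expected net cash flows, years 0..3 (hypothetical)
sigma = [0, 50, 60, 80]       # their standard deviations (hypothetical)
m, s = present_worth_distribution(mu, sigma, r=0.10)
print(m, s, NormalDist(m, s).cdf(0))   # mean, std dev, P(present worth < 0)
```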

304 citations


Journal ArticleDOI
TL;DR: In this paper, the optimal ordering policy for an n-period dynamic inventory problem in which the ordering cost is linear plus a fixed reorder cost and the other one-period costs are convex is characterized by a pair of critical numbers, (sn, Sn).
Abstract: The optimal ordering policy for an n-period dynamic inventory problem in which the ordering cost is linear plus a fixed reorder cost and the other one-period costs are convex is characterized by a pair of critical numbers, (sn, Sn); see Scarf [4]. In this paper we give bounds for the sequences {sn} and {Sn} and discuss their limiting behavior. The limiting (s, S) policy characterizes the optimal ordering policy for the infinite horizon problem. Similar results are obtained if there is a time-lag in delivery.
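
For readers unfamiliar with the notation, the (s, S) form of the policy is simple to state; the sketch below is purely illustrative (the paper's contribution is bounds on, and limits of, the sequences {sn} and {Sn}, not the rule itself), and the numbers are hypothetical.

```python
# (s, S) ordering rule: if the starting stock x is below the reorder point s,
# order enough to bring stock up to S; otherwise do not order.

def order_quantity(x, s, S):
    return S - x if x < s else 0

print(order_quantity(3, s=5, S=20))   # stock 3 < 5, so order 17
print(order_quantity(8, s=5, S=20))   # stock 8 >= 5, so order nothing
```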

280 citations


Journal ArticleDOI
TL;DR: This paper presents a synthesis of both dynamic and linear programming methods applicable to the same kind of model for the case of time discounting, and gives order quantities as a function of initial stock levels.
Abstract: R. Howard (R. Howard. 1960. Dynamic Programming and Markov Processes. John Wiley and Sons, Inc., New York.) and A. Manne (A. Manne. 1960. Linear programming and sequential decisions. Management Science, April.) have suggested two methods for optimizing average costs per unit time by means of a sequential decision rule. This rule gives order quantities as a function of initial stock levels, and is based upon the assumption of a Markov process which is both homogeneous and discrete. The formulation does not include any restrictive hypothesis on the structure of costs. This paper presents a synthesis of both dynamic and linear programming methods applicable to the same kind of model for the case of time discounting.
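
For orientation, a hedged sketch of the standard linear-programming form of the discounted problem (textbook notation, not necessarily the exact synthesis presented in the paper): with states i, ordering decisions a, one-period costs c_i(a), transition probabilities p_{ij}(a), and discount factor β < 1,

```latex
\max_{v}\ \sum_{i} v_i
\quad \text{s.t.} \quad
v_i \;\le\; c_i(a) + \beta \sum_{j} p_{ij}(a)\, v_j
\qquad \forall\, i,\ \forall\, a .
```

The optimal v is the vector of minimal expected discounted costs, and an optimal stationary decision rule can be read off from the constraints that hold with equality.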

Journal ArticleDOI
TL;DR: In this paper, the authors define the problem of control as that of choosing operating rules for members of an organization and enforcement rules for the operating rules so as to maximize the organization's objective function.
Abstract: The problem of control is defined as that of choosing operating rules for members of an organization and enforcement rules for the operating rules so as to maximize the organization's objective function. The control problem is sketched for three characteristic types of large organizations: large corporations, governments in their budgetary aspects, and economic systems as a whole. The essential roles of uncertainty and of differential amounts of information in different parts of an organization in the problem of control are brought out. The merits and drawbacks of the price system as a control mechanism are discussed in light of the preceding discussion, with especial reference to the effects of uncertainty.

Journal ArticleDOI
TL;DR: The procedure presented in this paper leads to optimal line balances by operation on a matrix of zeros and ones called a "Precedence Matrix" to illustrate the method in detail and comparisons are made.
Abstract: Assembly line balancing consists of assigning work elements, which are subject to sequencing restrictions, along an assembly line in an optimal manner. The procedure presented in this paper leads to optimal line balances by operation on a matrix of zeros and ones called a “Precedence Matrix.” A nine element problem is used to illustrate the method in detail and comparisons with other procedures are made. Several balances obtained by applying the FORTRAN program appended to the paper to actual and sample problems are also shown.
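
The sketch below is not the paper's procedure (which produces optimal balances); it is only a simple greedy assignment driven by the same two ingredients the abstract names, a zero-one precedence matrix and a cycle time, with hypothetical element times.

```python
# Greedy line-balancing sketch: P[i][j] = 1 means element i must be finished
# before element j.  Fill each station with eligible elements (all predecessors
# already assigned) until the cycle time is reached, then open a new station.

def balance(times, P, cycle_time):
    n = len(times)
    unassigned = set(range(n))
    stations = []
    while unassigned:
        station, load = [], 0.0
        progress = True
        while progress:
            progress = False
            for j in sorted(unassigned):
                preds_done = all(P[i][j] == 0 or i not in unassigned
                                 for i in range(n))
                if preds_done and load + times[j] <= cycle_time:
                    station.append(j)
                    load += times[j]
                    unassigned.remove(j)
                    progress = True
                    break
        stations.append(station)
    return stations

times = [3, 4, 2, 5, 3]                 # hypothetical element times
P = [[0, 1, 1, 0, 0],                   # element 0 precedes elements 1 and 2
     [0, 0, 0, 1, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0]]
print(balance(times, P, cycle_time=7))
```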

Journal ArticleDOI
TL;DR: In this article, the optimal composition of cattle feed, which can be formulated as a linear programming problem in the case of certainty, is considered for the situation where the compositions of inputs vary, so that the coefficients of the constraints are not constant but must be treated as stochastic.
Abstract: The optimal composition of cattle feed, which can be formulated as a linear programming problem in the case of certainty, is considered when compositions of inputs vary. In the corresponding linear programming formulation the coefficients of the constraints are not constant but can be considered as stochastic. Reformulating the constraints as chance constraints, a nonlinear programming problem results. For an illustrative example this problem is solved using one of Zoutendijk's methods of feasible directions.
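
A hedged sketch of the standard reformulation step (not necessarily the exact form used in the paper): if the content coefficients a_{ij} in nutrient constraint i are normally distributed with means \bar{a}_{ij} and covariance matrix V_i, then requiring the constraint to hold with probability at least α_i gives the deterministic equivalent

```latex
\Pr\Big(\sum_j a_{ij}\, x_j \ge b_i\Big) \ge \alpha_i
\;\Longleftrightarrow\;
\sum_j \bar{a}_{ij}\, x_j \;-\; \Phi^{-1}(\alpha_i)\,\sqrt{x^{\top} V_i\, x} \;\ge\; b_i ,
```

where Φ^{-1} is the standard normal quantile; the square-root term is what turns the feed-mix problem into a nonlinear program.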

Journal ArticleDOI
TL;DR: In this article, an extended dual theorem comparable in precision and exhaustiveness to the finite space theorem is developed for arbitrary convex programs with convex constraints which subsumes in principle all characterizations of optimality or duality in convex programming.
Abstract: By constructing a new infinite dimensional space for which the extreme point—linear independence and opposite sign theorems of Charnes and Cooper continue to hold, and, building on a little-known work of Haar (herein presented), an extended dual theorem comparable in precision and exhaustiveness to the finite space theorem is developed. Building further on this a dual theorem is developed for arbitrary convex programs with convex constraints which subsumes in principle all characterizations of optimality or duality in convex programming. No differentiability or constraint qualifications are involved, and the theorem lends itself to new computational procedures.

Journal ArticleDOI
TL;DR: In this article, the authors considered a model in which all parts of the system but one are monitored continuously (monitored) and that the remaining part cannot be inspected except when it is replaced.
Abstract: In a system with several stochastically failing parts and economies of scale in their maintenance, it may be advantageous to follow an “opportunistic” policy for maintenance. In opportunistic policies the action to be taken on a given part at a given time depends on the state of the other parts of the system. For the model considered in this paper, it is assumed that all parts of the system but one are inspected continuously (monitored) and that the remaining part cannot be inspected except when it is replaced. If the monitored parts have exponential distributions of time to failure, the optimal replacement policy for the remaining part has the following form: Let the non-monitored part be labeled 0 and let there be M monitored parts, labeled 1, …, M; then there are M + 1 numbers n1, …, nM, N, with 0 ≦ ni ≦ N ≦ ∞ such that: (a) if Part i fails at a time when the age of Part 0 is between 0 and ni, replace Part i alone (i = 1, …, M); (b) if Part i fails at a time when the age of Part 0 is between ni and N,...

Journal ArticleDOI
TL;DR: In this paper, three experiments are reported relating types of planning activities to performance and attitudes, and the findings are consistent with earlier studies showing a strong, positive relationship between participation in planning on the one hand and morale and productivity on the other.
Abstract: Three experiments are reported relating types of planning activities to performance and attitudes. In general, the findings are consistent with earlier studies showing a strong, positive relationship between participation in planning on the one hand and morale and productivity on the other. In all three experiments, performance and attitudes were better when subjects were using plans developed by themselves than when using plans developed by others.

Journal ArticleDOI
TL;DR: In this paper, the authors present the results of some digital computer simulation tests of a procedure for the economic planning of lot sizes, work force, and inventories, which is both dynamic and stochastic.
Abstract: This paper presents the results of some digital computer simulation tests of a procedure for the economic planning of lot sizes, work force, and inventories. A dynamic, deterministic, linear programming model was used to obtain approximate solutions to the actual problem which is both dynamic and stochastic. The tests were made with data taken from an actual factory. An alternate procedure, based upon single-item inventory control, was also tested; its results were compared with those obtained from the linear programming model. On the basis of these tests, this linear programming method appears to offer a promising method for the practical economic planning of production activities.

Journal ArticleDOI
TL;DR: This paper shows that Fulkerson's algorithm can be given a structural interpretation using concepts that are familiar to civil engineers, whose traditional mathematical background includes neither linear programming nor network flow theory.
Abstract: For a project that consists of numerous jobs subject to technological ordering restrictions the polygon representing project cost versus completion time is to be determined when the normal and crash completion times are known for each job and the cost of doing the job varies linearly between these times. A linear programming formulation of this problem was given by Kelley [Kelley, J. E., Jr. 1961. Critical-path planning and scheduling: Mathematical basis. Oper. Res. 9 296–320.] and a network flow formulation by Fulkerson [Fulkerson, D. R. 1961. A network flow computation for project cost curves. Management Sci. 7 (2, January) 167–178.]. Since the traditional mathematical background of civil engineers includes neither linear programming nor network flow theory, these methods are not as widely used in the building industry as they deserve. This paper shows that Fulkerson's algorithm can be given a structural interpretation using concepts that are familiar to civil engineers.
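
For reference, a hedged sketch of the linear-programming form of the time-cost trade-off (standard critical-path notation, not the structural interpretation developed in the paper): let job (i, j) run from event i to event j, with crash duration d_{ij}, normal duration D_{ij}, and cost decreasing linearly at rate c_{ij} as its duration t_{ij} is lengthened; with event times y_i and a required completion time T,

```latex
\min \;\; \sum_{(i,j)} c_{ij}\,\bigl(D_{ij} - t_{ij}\bigr)
\quad \text{s.t.} \quad
d_{ij} \le t_{ij} \le D_{ij}, \qquad
y_j - y_i \ge t_{ij}, \qquad
y_{\text{finish}} - y_{\text{start}} \le T .
```

Solving this for each value of T traces out the polygon of project cost versus completion time.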

Journal ArticleDOI
TL;DR: A Simulation Process was developed at the El Segundo Division of Hughes Aircraft Company whereby actual shop operating data contained in an existing data processing system is used as input to an IBM 7090 computer for simulation of the El Segundo Fabrication Shop.
Abstract: A Simulation Process was developed at the El Segundo Division of Hughes Aircraft Company whereby actual shop operating data contained in an existing data processing system is used as input to an IBM 7090 computer for simulation of the El Segundo Fabrication Shop. The Simulation Process was used as a study tool to evaluate the effects of various management decisions on the operation of the Fabrication Shop.

Journal ArticleDOI
TL;DR: This paper evaluated the Carnegie Tech Management Game and found that players reported learning many kinds of things from their experience, but learning derived more from interpersonal interactions with other players and with outside groups like boards of directors than from interaction with the game model itself.
Abstract: This study was undertaken to evaluate a complex management simulation exercise as an environment for learning. The exercise was the Carnegie Tech Management Game; the players were students in a graduate management program. Players reported learning many kinds of things from their experience, but learning derived more from interpersonal interactions with other players and with outside groups like boards of directors than from interaction with the game model itself. Players may learn more about recognizing problems for future attention than about solutions of problems that can be applied in new situations. The kinds and amounts of learning vary with the length of game play, with team success or failure, and with individual job assignment on the team. They do not vary with measures of status on the team.

Journal ArticleDOI
TL;DR: In this paper, the problem of finding the decision rule which maximizes the expected length of time between replacements subject to the side conditions that the probabilities of replacement through certain undesirable states are bounded by prescribed numbers is considered.
Abstract: A system is observed periodically and classified into one of a finite number of states. On the basis of the observations certain maintenance or replacement decisions are made. Transition probabilities from state to state are determined by the decisions. The problem considered is that of finding the decision rule which maximizes the expected length of time between replacements subject to the side conditions that the probabilities of replacement through certain undesirable states are bounded by prescribed numbers.

Journal ArticleDOI
TL;DR: In this article, the authors present some basic results for the case where the study is to be based upon fundamental cost considerations and the assumption of an infinite calling population, and include a number of economic models and accompanying procedures for determining the level of service which minimizes the total of the expected cost of service and the expected cost of waiting for that service.
Abstract: Studies of industrial waiting line problems typically involve determining the proper balance between the amount of service and the amount of waiting for that service. This paper presents some basic results for the case where the study is to be based upon fundamental cost considerations and the assumption of an infinite calling population. Included are a number of economic models and accompanying procedures for determining the level of service which minimizes the total of the expected cost of service and the expected cost of waiting for that service.
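
As an illustration only (a textbook M/M/c model, not necessarily one of the paper's models, with hypothetical costs and rates): choose the number of servers that minimizes the sum of server cost and expected waiting cost per unit time.

```python
from math import factorial

# M/M/c sketch: arrival rate lam, service rate mu per server, cost c_s per
# server per unit time, cost c_w per customer per unit time in the system.
# Uses the standard Erlang-C expressions for the expected number in system.

def mmc_mean_number_in_system(lam, mu, c):
    a = lam / mu                       # offered load
    rho = a / c
    if rho >= 1:
        return float("inf")            # unstable: too few servers
    p0 = 1 / (sum(a**k / factorial(k) for k in range(c))
              + a**c / (factorial(c) * (1 - rho)))
    lq = p0 * a**c * rho / (factorial(c) * (1 - rho) ** 2)
    return lq + a                      # L = Lq + a

def best_servers(lam, mu, c_s, c_w, c_max=50):
    costs = {c: c * c_s + c_w * mmc_mean_number_in_system(lam, mu, c)
             for c in range(1, c_max + 1)}
    return min(costs, key=costs.get)

print(best_servers(lam=8.0, mu=3.0, c_s=20.0, c_w=15.0))
```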

Journal ArticleDOI
TL;DR: In this article, the authors calculate operating characteristics of opportunistic replacement and inspection policies, an operating characteristic being a function defined on the stochastic process induced when the policy is implemented; among the characteristics derived are the expected rate of joint replacement of the uninspected part and one of the monitored parts, the expected rate of planned replacement of the uninspected part, and the probability of at least m failures of a monitored part in the interval [0, t].
Abstract: This paper calculates some of the operating characteristics of opportunistic replacement and inspection policies. An operating characteristic of a particular policy is a function defined on the stochastic process induced when the policy is implemented. An opportunistic replacement policy makes the replacement of a single uninspected part conditional on the state (good or failed) of one or more continuously inspected (monitored) parts. An opportunistic inspection policy makes the inspection of a non-monitored part conditional on the state (good or failed) of a monitored part. Some of the operating characteristics of these policies examined in this paper are: the expected rate of opportunistic (joint) replacement of the uninspected part and one of the monitored parts; the expected rate of planned replacement of the uninspected part; the probability of at least m failures of a monitored part in the interval [0, t]; the expected rate of opportunistic inspection—inspection of the non-monitored part which is tr...

Journal ArticleDOI
TL;DR: Some of the more recent theoretical and computational developments in non-linear programming are surveyed and the notions of Lagrange multipliers and duality are discussed together with applications of these ideas to scientific and business problems.
Abstract: Some of the more recent theoretical and computational developments in non-linear programming are surveyed. The notions of Lagrange multipliers and duality are discussed together with applications of these ideas to scientific and business problems. Moreover, several algorithms for solving quadratic programming problems are reviewed. Explicit rules are given for two of these algorithms, and a simple example is solved by both methods. A large step gradient method for the solution of convex programs is given and one of Gomory's algorithms for integer programming is described. Simple examples are solved using both of these techniques. Linear fractional programming is also discussed briefly.

Journal ArticleDOI
TL;DR: If certain information is given about the "psychology" of the players who participate in an n-person cooperative game, concerning their bargaining abilities, their moral codes, their roles in the various coalitions and their a priori expectations, then it is possible to define a measure for the power of each coalition which, perhaps, is a better description of the game than the usual characteristic function.
Abstract: If certain information is given about the “psychology” of the players who participate in an n-person cooperative game, concerning their bargaining abilities, their moral codes, their roles in the various coalitions and their a priori expectations, then it is possible to define a measure for the power of each coalition which, perhaps, is a better description of the game than the usual characteristic function. The required information, called the standard of fairness of the players is a Thrall partition function which satisfies certain requirements. Its determination is discussed both from an experimental and from a theoretical point of view. In terms of the power, every game becomes a constant-sum game. Applications to the von Neumann and Morgenstern solutions and to the bargaining set are discussed.

Journal ArticleDOI
TL;DR: In this article, a distinction is made between two related approaches to stochastic linear programming, the passive approach and the active approach. In the passive approach, the statistical distribution of the optimum value of the objective function is estimated either exactly or approximately by numerical methods, and optimum decision rules are based on characteristics of the estimated distribution.
Abstract: A linear programming problem is said to be stochastic if one or more of the coefficients in the objective function or the system of constraints or resource availabilities is known only by its probability distribution. Various approaches are available in this case, which may be classified into three broad types: ‘chance constrained programming’, ‘two-stage programming under uncertainty’ and ‘stochastic linear programming’. For problems of ‘stochastic linear programming’ a distinction is usually made between two related approaches to stochastic programming, the passive and the active approach respectively. In the passive approach to stochastic linear programming the statistical distribution of the optimum value of the objective function is estimated either exactly or approximately by numerical methods and optimum decision rules are based on the different characteristics of the estimated distribution. In the active approach, a new set of decision variables are introduced which indicate the proportions of dif...

Journal ArticleDOI
TL;DR: In this article, the problem of maximizing expected net value of an action, i.e., the expected value of the action finally taken minus the expected total cost of searching and testing, is studied.
Abstract: A person searches through a population of possible actions, looking for one with high value. He discovers these actions one after another, paying a certain cost each time. On his first encounter with a possible action, the person obtains some preliminary information about its actual value, and at this point he can take the action, he can continue looking, or, at a certain cost, he can perform a test and obtain some more information about its actual value. If he decides to test, then having obtained the additional information he can again either take the action or continue looking. The problem is to conduct this process in such a manner as to maximize expected net value, that is, the expected value of the action finally taken minus the expected total cost of searching and testing. This problem is analyzed and optimal policies are given in the case where the possible actions are regarded as independent selections from a large population. The joint distribution in this population of the actual value, the pre...

Journal ArticleDOI
TL;DR: The algorithm is computationally very simple, requiring the solution of a single linear programming problem which can be accomplished with only slight modification of existing computer codes for the revised simplex method.
Abstract: This paper describes an algorithm for the solution of the convex programming problem using the simplex method. The algorithm is computationally very simple, requiring the solution of a single linear programming problem which can be accomplished with only slight modification of existing computer codes for the revised simplex method.

Journal ArticleDOI
TL;DR: In this article, the basic ideas of the committee decision problem are briefly restated and the problem is formulated, and some proposals are made for the weights of the linear combination of individual preference functions.
Abstract: The order of discussion is as follows. In Section 2 the basic ideas (which are already “classical”) are briefly restated and the problem is formulated. In Section 3 some proposals are made for the weights of the linear combination of individual preference functions. Section 4 illustrates the committee decision problem by a very simple example of measures of central tendency, the solution of which suggests a method for solving the general problem. Section 5 is the heart of the article; it introduces the symmetry condition. The last section deals with the case when symmetry cannot be realized.