Showing papers on "Stochastic programming published in 1969"
••
TL;DR: In this paper, the authors discuss the optimal consumption-investment problem for an investor whose utility for consumption over time is a discounted sum of single-period utilities, with the latter being constant over time and exhibiting constant relative risk aversion (power-law or logarithmic functions).
Abstract: Publisher Summary This chapter reviews the optimal consumption-investment problem for an investor whose utility for consumption over time is a discounted sum of single-period utilities, with the latter being constant over time and exhibiting constant relative risk aversion (power-law functions or logarithmic functions). It presents a generalization of Phelps' model to include portfolio choice and consumption. The explicit form of the optimal solution is derived for the special case of utility functions having constant relative risk aversion. The optimal portfolio decision is independent of time, wealth, and the consumption decision at each stage. Most analyses of portfolio selection, whether they are of the Markowitz–Tobin mean-variance or of more general type, maximize over one period. The chapter only discusses special and easy cases that suffice to illustrate the general principles involved and presents the lifetime model that reveals that investing for many periods does not itself introduce extra tolerance for riskiness at early or any stages of life.
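The wealth-independence of the optimal decisions can be made concrete with a small backward-induction sketch. This is an illustration under simplifying assumptions (log utility, a fixed horizon `T` and discount factor `beta`, both invented here), not the chapter's derivation: the value function takes the form V_t(W) = a_t log W + b_t, so the optimal consumption *fraction* does not depend on wealth.

```python
# Sketch, assuming log utility: backward induction showing that the
# optimal consumption fraction is independent of wealth.
# With V_t(W) = a_t*log(W) + b_t, the first-order condition gives
# c_t = 1/a_t and the recursion a_t = 1 + beta*a_{t+1}, a_T = 1.

def consumption_fractions(T, beta):
    a = 1.0
    fracs = [1.0]               # consume all remaining wealth in the last period
    for _ in range(T - 1):
        a = 1.0 + beta * a      # recursion on the log-wealth coefficient
        fracs.append(1.0 / a)
    return list(reversed(fracs))  # fracs[t] = share of wealth consumed at t

fracs = consumption_fractions(T=5, beta=0.9)
# closed form for comparison: c_t = (1 - beta) / (1 - beta**(T - t))
```

Note that the fractions depend only on time remaining and the discount factor, matching the chapter's point that the decision rule is independent of wealth at each stage.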
2,369 citations
••
TL;DR: Stochastic dynamic prediction as mentioned in this paper assumes the laws governing atmospheric behavior are entirely deterministic, but seeks solutions corresponding to probabilistic statements of the initial conditions, thus recognizing the impossibility of exact or sufficiently dense observations.
Abstract: Stochastic dynamic prediction assumes the laws governing atmospheric behavior are entirely deterministic, but seeks solutions corresponding to probabilistic statements of the initial conditions, thus recognizing the impossibility of exact or sufficiently dense observations. The equation that must be solved is the continuity equation for probability. For practical reasons only approximate solutions to this equation are possible in general. Deterministic forecasts represent a very low order of approximation. More exact methods are developed and some of the attributes and advantages of stochastic dynamic predictions are illustrated by applying them to a low order set of dynamic equations. Stochastic dynamic predictions have significantly smaller mean square errors than deterministic procedures, and also give specific information on the nature and extent of the uncertainty of the forecast. Also the range of time over which useful forecasts can be obtained is extended. However, they also require considerably more extensive calculations. The question of analysis to obtain the initial stochastic statement of the atmospheric state is considered and one finds here too a promise of significant advantages over present deterministic methods. It is shown how the stochastic method can be used to assess the value of new or improved data by considering their influence on the decrease in the uncertainty of the forecast. Comparisons among physical-numerical models are also made more effectively by applying stochastic methods. Finally the implications of stochastic dynamic prediction on the question of predictability are briefly considered, with the conclusion that some earlier estimates have been too pessimistic. DOI: 10.1111/j.2153-3490.1969.tb00483.x
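The idea of propagating a probabilistic initial state through deterministic dynamics can be sketched with a Monte Carlo ensemble on a toy low-order system. The logistic map, the ensemble size, and the initial spread `sigma` below are all invented stand-ins, not the paper's equations:

```python
import random

def step(x):
    # toy nonlinear "atmosphere": a logistic map stands in for the
    # low-order dynamic equations (an assumption for illustration)
    return 3.7 * x * (1.0 - x)

def forecast(x0, sigma, n_members=2000, n_steps=10, seed=0):
    """Propagate an ensemble of perturbed initial conditions and return
    the forecast mean and variance; the variance quantifies the growth
    of initial-condition uncertainty under the deterministic dynamics."""
    rng = random.Random(seed)
    members = [min(max(x0 + rng.gauss(0, sigma), 0.0), 1.0)
               for _ in range(n_members)]
    for _ in range(n_steps):
        members = [step(x) for x in members]
    mean = sum(members) / len(members)
    var = sum((x - mean) ** 2 for x in members) / len(members)
    return mean, var

# deterministic forecast: a single trajectory from the mean initial state
det = 0.3
for _ in range(10):
    det = step(det)

mean, var = forecast(0.3, sigma=0.01)
```

As in the paper's argument, the deterministic forecast is one low-order summary of the ensemble, while the ensemble variance gives specific information on forecast uncertainty.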
407 citations
••
TL;DR: Stochastic dynamic prediction assumes that the laws governing atmospheric behavior are entirely deterministic, but seeks solutions corresponding to probabilistic statements of the initial conditions, as mentioned in this paper.
Abstract: Stochastic dynamic prediction assumes the laws governing atmospheric behavior are entirely deterministic, but seeks solutions corresponding to probabilistic statements of the initial conditions, th...
278 citations
•
01 Nov 1969
TL;DR: A unifying framework of concepts central to the optimization of large structured systems is developed and used in the organization of the literature.
Abstract: : A unifying framework of concepts central to the optimization of large structured systems is developed and used in the organization of the literature. The basic concepts are divided into two groups: (1) problem manipulations, in which a given problem is restated in an alternative form more amenable to solution, and (2) solution strategies, which reduce an optimization problem to a related sequence of simpler problems that can be solved by specialized methods.
225 citations
••
TL;DR: Several specialized dynamic programming techniques applicable to water system problems are introduced, including successive approximations, forward dynamic programming, dynamic programming for stochastic control, and iteration in policy space.
73 citations
••
TL;DR: This work is concerned with the optimal control of a discrete-time linear system with random parameters and the method of solution is based on the dynamic programming approach which leads to functional recurrence equations.
Abstract: This work is concerned with the optimal control of a discrete-time linear system with random parameters. It is assumed that the parameters of the system vary randomly during the process, namely, the parameters constitute sequences of random variables. These random variables are not necessarily independent. An important particular case occurs where there are unknown constant parameters in the system. The measurements of the state of the system contain additive noise. A quadratic function of the state and controller, with appropriate weighting, serves as the criterion function. The solutions for the open-loop controller and the open-loop feedback controller are presented. The method of solution is based on the dynamic programming approach which leads to functional recurrence equations.
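The functional recurrence equations can be illustrated on a scalar special case. The sketch below is our illustration, not the paper's model: a system x_{t+1} = a_t x_t + b u_t with i.i.d. random gain a_t (mean `m`, second moment `s2`) and quadratic cost Σ(q x² + r u²). Because the parameters are random, the recursion uses E[a] and E[a²] separately, so certainty equivalence does not hold:

```python
# Sketch (illustrative scalar case, parameters invented): dynamic-programming
# recursion for the quadratic cost-to-go V_t(x) = p_t * x^2 of the random-
# parameter linear system x_{t+1} = a_t*x_t + b*u_t.

def feedback_gains(T, m, s2, b, q, r):
    """Backward recursion: the minimizing control is u_t = -k_t * x_t with
    k_t = p*m*b/(r + p*b^2), and
    p_t = q + p*s2 - (p*m*b)^2 / (r + p*b^2)."""
    p = q                       # terminal cost weight
    gains = []
    for _ in range(T):
        k = (p * m * b) / (r + p * b * b)
        p = q + p * s2 - (p * m * b) ** 2 / (r + p * b * b)
        gains.append(k)
    return list(reversed(gains)), p   # gains[0] applies at the first stage

gains, p0 = feedback_gains(T=20, m=1.0, s2=1.1, b=1.0, q=1.0, r=1.0)
```

With `s2 > m**2` (a genuinely random gain), the recursion penalizes state excursions more heavily than the deterministic Riccati recursion would.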
66 citations
••
TL;DR: In this article, it was shown that the objective of stochastic programs with recourse is also lower semi-continuous and a lemma of general interest in the theory of convex functions is established.
Abstract: : In an earlier paper, 'Stochastic Programs with Recourse,' the authors introduced a general class of stochastic (linear) programs and showed, among other things, that the objective of any such program is convex when considered as a function of the first-stage decision variables. In this paper it is shown that the objective is also lower semi-continuous. In the process of proving this result, a lemma of general interest in the theory of convex functions is established.
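In the notation now standard for this model (our notation, not necessarily the paper's), the function shown to be convex and lower semi-continuous in the first-stage variables x is the expected recourse objective:

```latex
\min_{x}\; c^{\top}x + \mathbb{E}_{\xi}\,Q(x,\xi),
\qquad
Q(x,\xi) \;=\; \min_{y \ge 0}\,\bigl\{\, q^{\top}y \;:\; W y = h(\xi) - T(\xi)\,x \,\bigr\}.
```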
30 citations
••
TL;DR: This article describes an analytic approach to flight scheduling within an airlift system that consists of a monthly planning model that produces an initial schedule and a daily model for making periodic changes in the schedule, formulated as two-stage stochastic linear programs.
Abstract: This article describes an analytic approach to flight scheduling within an airlift system. The model takes explicit account of the uncertainty present in cargo requirements or demands. For computational feasibility, the approach consists of two related models: (1) a monthly planning model that produces an initial schedule, and (2) a daily model for making periodic changes in the schedule. Both are formulated as two-stage stochastic linear programs. A detailed mathematical description of each model and its physical interpretation is given.
The monthly model determines the number of flights for each type of aircraft in the fleet. Excess demands on certain routes are assumed to be met, at least in part, by spot procurement of commercial lift from outside the system. The flight assignment is determined by minimizing the expected total system cost, which consists of operating costs, costs of reallocating aircraft to different routes, spot commercial procurement costs, and other penalty costs of excess demand. The model accounts for limitations on the number of flying hours and the carrying capacities of various aircraft in satisfying demands.
In the daily model the number of aircraft of each type to switch from one route to another and the number of commercial flights on spot contract to add on the current day are the principal decision variables. These are determined by balancing operating, procurement, and redistribution costs against the expected costs of additional cargo delay. The current state of the system — the amount of unmoved cargo on various routes and the position of aircraft throughout the system — plays a role in determining these decisions.
A description of two variants of an algorithm recently developed for this class of problems is presented. Both versions, which use ideas from convex programming, make extensive use of linear programming codes for the brunt of the calculations. The models may thus be solved by augmenting existing linear programming routines.
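The first-stage/recourse structure of the monthly model can be sketched on a one-route toy instance. All numbers below are invented for illustration; the first stage picks the number of scheduled flights, and the recourse buys spot commercial lift for whatever demand is left uncovered in each scenario:

```python
# Toy two-stage sketch in the spirit of the airlift model (all parameters
# made up): minimize operating cost plus expected spot-procurement cost.

CAP = 10          # cargo units carried per scheduled flight
C_FLY = 100.0     # operating cost per scheduled flight
C_SPOT = 30.0     # spot commercial cost per unit of excess demand
SCENARIOS = [(0.3, 60), (0.5, 100), (0.2, 150)]   # (probability, demand)

def expected_cost(x):
    """First-stage cost of x flights plus expected recourse cost."""
    spot = sum(p * max(d - CAP * x, 0) * C_SPOT for p, d in SCENARIOS)
    return C_FLY * x + spot

# the deterministic equivalent is small enough to solve by enumeration
best_x = min(range(0, 21), key=expected_cost)
```

In the paper the deterministic equivalent is a structured linear program solved with standard LP codes; enumeration here just makes the trade-off between scheduled capacity and expected spot procurement visible.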
19 citations
••
TL;DR: In this article, a dynamic programming model for selecting an optimal combination of transportation modes over a multiperiod planning horizon is presented. The model is formulated as an optimal discrete-time stochastic control problem in which the cost is quadratic and the dynamic equations are linear in the state and control variables.
Abstract: This paper develops a dynamic programming model for selecting an optimal combination of transportation modes over a multiperiod planning horizon. The formulation explicitly incorporates uncertainty regarding future requirements or demands for a number of commodity classes. In addition to determining the optimal modes to employ, the model assigns individual commodity classes to various modes, determines which supply points serve which destinations, and reroutes carriers from destinations to alternative sources where they will be most effective. The model is formulated as an optimal discrete time stochastic control problem where cost is quadratic and dynamic equations linear in the state and control variables. This model may be solved in closed form by an efficient dynamic programming algorithm that permits the treatment of relatively large scale systems. Also developed is an alternative, generally suboptimal method of solution, based upon solving a sequence of convex programming problems over time. This te...
16 citations
••
IBM
TL;DR: In this paper, a wide class of single-product, dynamic inventory problems with convex cost functions and a finite horizon is investigated as a stochastic programming problem, which can be substantially reduced in size to a linear program with upper-bounded variables.
Abstract: A wide class of single-product, dynamic inventory problems with convex cost functions and a finite horizon is investigated as a stochastic programming problem. When demands have finite discrete distribution functions, we show that the problem can be substantially reduced in size to a linear program with upper-bounded variables. Moreover, we show that the reduced problem has a network representation; thus network flow theory can be used for solving this class of problems. A consequence of this result is that, if we are dealing with an indivisible commodity, an integer solution of the dynamic inventory problem exists. This approach can be computationally attractive if the demands in different periods are correlated or if ordering cost is a function of demand.
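For a concrete feel of the problem class, here is a small dynamic-programming sketch of a single-product finite-horizon inventory problem with discrete demand. This is our illustration with invented costs and a lost-sales assumption, not the paper's LP/network-flow reduction:

```python
# Illustrative DP sketch (not the paper's reduction to an upper-bounded LP
# or network flow): single product, convex piecewise-linear holding and
# shortage costs, finite discrete demand distribution, finite horizon.

import functools

DEMAND = [(0.5, 1), (0.5, 3)]       # (probability, demand) each period
HOLD, SHORT, ORDER = 1.0, 4.0, 2.0  # per-unit holding, shortage, ordering cost
MAX_INV, MAX_ORDER, T = 10, 5, 4

@functools.lru_cache(maxsize=None)
def cost_to_go(t, inv):
    """Minimum expected cost from period t onward given on-hand inventory."""
    if t == T:
        return 0.0
    best = float("inf")
    for q in range(MAX_ORDER + 1):
        c = ORDER * q
        for p, d in DEMAND:
            nxt = min(inv + q - d, MAX_INV)
            c += p * (HOLD * max(nxt, 0) + SHORT * max(-nxt, 0))
            c += p * cost_to_go(t + 1, max(nxt, 0))   # lost sales: inventory floors at 0
        best = min(best, c)
    return best

total = cost_to_go(0, 0)
```

The paper's point is that, with finite discrete demand, this kind of problem collapses to an upper-bounded LP with a network representation, which also guarantees integer optimal solutions for indivisible commodities.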
••
01 Jan 1969
TL;DR: A method is described for determining the optimum levels for the input variables of a quadratic response surface design; the problem requires the minimization of the generalized variance subject to linear constraints.
Abstract: : A method is described for determining the optimum levels for the input variables of a quadratic response surface design. The problem is formulated as a mathematical program, and an algorithm for solving the problem is described. Specifically, the problem requires the minimization of the generalized variance subject to linear constraints. Attention is given to the non-convexity of the objective function, and a stochastic control is developed. A number of examples are given to illustrate the method. (Author)
••
TL;DR: Control problems arising in stochastic service systems are treated by a decomposition technique involving the methods of queuing theory and dynamic programming to find the optimal operation of the surgical facilities in a hospital.
Abstract: Control problems arising in stochastic service systems are treated by a decomposition technique involving the methods of queuing theory and dynamic programming. This approach is applied to the optimal operation of the surgical facilities in a hospital. The generalization of the method to other stochastic service systems is immediate.
••
01 Jan 1969
TL;DR: The problems of statistical distribution of the maximand are here analyzed under stochastic and chance-constrained linear programming, and uses of non-central Chi-square, truncated normal, non-central F, and other non-negative distributions of statistical reliability theory are indicated.
Abstract: The problems of statistical distribution of the maximand are here analyzed under stochastic and chance-constrained linear programming. Uses of non-central Chi-square, truncated normal, non-central F and other non-negative distributions of statistical reliability theory are indicated. This analysis would be useful for economic models involving input-output coefficients, which are usually required to be non-negative.
••
01 Aug 1969
TL;DR: The problem is that of optimally testing a coherent system to learn some characteristic of it, for example, whether it is operating or not; branch-and-bound and dynamic programming solutions are given, along with a comparison of their computation times.
Abstract: : The problem is that of optimally testing a coherent system to learn some characteristic of it, for example, whether it is operating or not. A branch and bound and a dynamic programming solution are given, as well as a comparison of computer computation times for both. Several specific models with analytical solutions are also presented.
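A classical special case (our illustration, not necessarily one of the paper's models) is sequencing the tests of a series system: testing stops at the first failed component, and the well-known rule is to test in increasing order of cost divided by failure probability. The sketch below checks the rule against brute force on invented data:

```python
# Sketch: optimal test sequencing for a series system (data invented).
# Expected cost of an order is sum over components of (test cost) times
# the probability that all earlier components were found working.

from itertools import permutations

COMPONENTS = [(3.0, 0.1), (2.0, 0.4), (5.0, 0.3)]   # (test cost, P(failed))

def expected_cost(order):
    total, p_alive = 0.0, 1.0
    for c, q in order:
        total += p_alive * c        # this test runs only if no failure yet
        p_alive *= (1.0 - q)
    return total

brute = min(permutations(COMPONENTS), key=expected_cost)
ratio = tuple(sorted(COMPONENTS, key=lambda cq: cq[0] / cq[1]))
```

An adjacent-swap argument shows the ratio rule is optimal for this objective, which is why the brute-force minimum agrees with it; the paper's branch-and-bound and DP methods handle general coherent structures where no such closed-form ordering exists.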
••
01 Aug 1969
TL;DR: A study of the properties of the deterministic equivalent program of a stochastic program with recourse, in which the various properties of the objective functions are examined.
Abstract: : The paper presents a study of the properties of the deterministic equivalent program of a stochastic program with recourse. After a brief discussion of the place of the stochastic programming model in the realm of stochastic optimization and definition of the problem under consideration, the characterization of feasible solutions are discussed, and the various properties of the objective functions are examined (dependent on or independent from the type of distribution of the random elements).
••
TL;DR: A new and unifying formal framework for duality in optimization problems, relating more closely programs to games, is proposed by means of the concepts of “hemi-games” and “quasi-duality”, and a new generalization of Lagrange multipliers is presented.
Abstract: A new and unifying formal framework for duality in optimization problems, relating programs more closely to games by means of the concepts of “hemi-games” and “quasi-duality”, is proposed here. A new generalization of the idea of Lagrange multipliers is also presented.
Associated with each programming problem, we consider, through a generalization of the Lagrange function, one particular game from the many possible ones, such that our programming problem is one of the two hemi-games of that game.
Each hemi-game can be considered as a programming problem. Then, for each game we have a pair of programming problems: the two hemi-games of the game considered.
If the game has a solution, then so does each of the two associated programming problems; and the solution of each one is an optimal strategy for the respective player.
This couple of programming problems constitutes a pair of dual problems, and each pair of dual problems can be thought of as such a couple of programming problems associated with the two players of a solvable game.
Various and seemingly disparate dualities, already considered in the literature, are then exhibited in order to show how they can be obtained from the hemi-game notion proposed.
This conceptual framework of duality is not used here for obtaining “new” results, but seems sufficiently interesting in itself.
••
01 Apr 1969
TL;DR: In this paper, a broad class of optimal structural design problems is formulated as an optimal control problem and two methods of solving this class of problems are then presented and their application to optimal design problems are discussed.
Abstract: : The class of problems treated falls into the rapidly developing field of optimal structural design. A structure is initially laid out with its geometry fixed but with the distribution of material in the elements left to the designer's choice. The amount and distribution of material is chosen so that the structure performs some function and is best in some sense. The examples treated in this report take minimum weight as their optimality criterion. A broad class of optimal design problems is first formulated as an optimal control problem. Two methods of solving this class of problems are then presented and their application to optimal design problems is discussed. Three optimal design problems are formulated and solved in detail. These problems contain many of the features expected of real-world problems and illustrate the power of computational methods which can be applied for their solution. (Author)
01 Dec 1969
TL;DR: It was found that dynamic programming gives the optimal solution to the allocation problem and the model based on Lagrange multipliers gave a solution that was within 0.5% of the optimal Solution given by dynamic programming.
Abstract: : The reliability allocation problem has previously been solved by methods other than the classical optimization methods. The thesis brings together and compares three allocation models that are based on optimization methods. It was found that dynamic programming gives the optimal solution to the allocation problem. The model based on Lagrange multipliers gave a solution that was within 0.5% of the optimal solution given by dynamic programming. The linear programming algorithm gave a solution that was within 0.6% of the optimal solution.
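The dynamic-programming model for reliability allocation can be sketched on a small instance. The formulation below (series system, parallel redundancy at each stage, a single cost budget) is a common textbook version chosen for illustration; the numbers are invented:

```python
# Sketch of DP for reliability allocation (data invented): a series
# system of stages, each stage carrying k >= 1 parallel components;
# maximize system reliability subject to a total cost budget.

STAGES = [(0.8, 2), (0.7, 3), (0.9, 4)]   # (component reliability, unit cost)
BUDGET = 15

def stage_rel(r, k):
    return 1.0 - (1.0 - r) ** k            # k components in parallel

def best_reliability(i=0, budget=BUDGET):
    """Stage-by-stage recursion over the remaining budget."""
    if i == len(STAGES):
        return 1.0
    r, cost = STAGES[i]
    # reserve enough budget for at least one component per later stage
    reserve = sum(c for _, c in STAGES[i + 1:])
    best = 0.0
    k = 1
    while k * cost <= budget - reserve:
        best = max(best,
                   stage_rel(r, k) * best_reliability(i + 1, budget - k * cost))
        k += 1
    return best

R = best_reliability()
```

As the thesis reports, this DP recursion gives the exact optimum; Lagrange-multiplier and LP formulations trade a small optimality gap for cheaper computation.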
••
TL;DR: In this paper, a modified definition of error, employing a properly chosen single-time-constant ideal, leads, on mean-square-error stochastic optimization, to systems which have much better damping characteristics.
Abstract: It is generally known that the design of linear systems for stochastic optimization for mean-square-error minimization leads to optimal systems that are insufficiently damped to be of practical utility. It is shown here that a modified definition of error, employing a properly chosen single-time-constant ideal, leads, on mean-square-error stochastic optimization, to systems which have much better damping characteristics. The procedure is illustrated by a typical gain-optimization problem in a servo system.
••
TL;DR: In this paper, it was shown that the optimal stochastic control is the conditional expectation of the deterministic control, given the measurement history, for a class of deterministic systems.
Abstract: It is shown that, for a class of stochastic systems, i.e., those in which the cost increases as the distance between the stochastic and the deterministic controls increases, the optimal stochastic control is the conditional expectation of the deterministic control, given the measurement history.
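In symbols (our notation, not necessarily the paper's): if u^d_t denotes the control that would be optimal for the deterministic problem, the result states

```latex
u^{*}_{t} \;=\; \mathbb{E}\bigl[\, u^{d}_{t} \;\big|\; y_{0}, y_{1}, \ldots, y_{t} \,\bigr],
```

where y_0, ..., y_t is the measurement history available at time t.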
••
IBM
TL;DR: An experimental real-time system is described for assigning customer engineers (servicemen) to requests for service, preventive maintenance and engineering- and sales-change activities and is viewed as a stochastic programming formulation.
Abstract: An experimental real-time system is described for assigning customer engineers (servicemen) to requests for service, preventive maintenance and engineering- and sales-change activities. The system, which can be applied to service organizations of many kinds, is viewed as a stochastic programming formulation. The resultant mathematical programming problem is structured as a control system, an inner control loop and an outer adaptive feedback loop in which system parameters are adjusted based on a performance index. Tests of the system have been made using data from the Brooklyn, New York and Washington, D. C. IBM Field Engineering Division branch offices.