
Showing papers on "Stochastic programming published in 1971"


Book
01 Jun 1971
TL;DR: The Classics Edition covers static and dynamic optimization and their applications in economics, including the theory of the household, the theory of the firm, general equilibrium, and welfare economics.
Abstract: Preface to the Classics Edition. Preface. Part I. Introduction: Economizing and the Economy. Part II. Static Optimization: The Mathematical Programming Problem; Classical Programming; Nonlinear Programming; Linear Programming; Game Theory. Part III. Applications of Static Optimization: Theory of the Household; Theory of the Firm; General Equilibrium; Welfare Economics. Part IV. Dynamic Optimization: The Control Problem; Calculus of Variations; Dynamic Programming; Maximum Principle; Differential Games. Part V. Applications of Dynamic Optimization: Optimal Economic Growth. Appendix A: Analysis. Appendix B: Matrices. Index.

857 citations


Book
01 Jan 1971
TL;DR: This book gives a detailed treatment of the simpler problems and fills the need to introduce the student to the more sophisticated mathematical concepts required for advanced theory, describing their roles and necessity in an intuitive and natural way.
Abstract: The text treats stochastic control problems for Markov chains, discrete-time Markov processes, and diffusion models, and discusses methods of putting other problems into the Markovian framework. Computational methods are discussed and compared for Markov chain problems. Other topics include the fixed and free time of control, discounted cost, minimizing the average cost per unit time, and optimal stopping. Filtering and control for linear systems, and stochastic stability for discrete-time problems, are discussed thoroughly. The book gives a detailed treatment of the simpler problems and fills the need to introduce the student to the more sophisticated mathematical concepts required for advanced theory by describing their roles and necessity in an intuitive and natural way. Diffusion models are developed as limits of stochastic difference equations and also via the stochastic integral approach. Examples and exercises are included.
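The Markov chain computational methods the book compares can be illustrated with a few lines of value iteration for a discounted-cost problem; the two-state, two-action chain, costs, and discount factor below are invented for the sketch, not taken from the text.

```python
import numpy as np

# Value iteration for a discounted-cost controlled Markov chain (toy data).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[a, s, s']: transition matrix per action
              [[0.5, 0.5], [0.6, 0.4]]])
c = np.array([[1.0, 4.0],                 # c[a, s]: one-stage cost of action a in state s
              [2.0, 0.5]])
beta = 0.95                               # discount factor

V = np.zeros(2)
for _ in range(1000):
    Q = c + beta * P @ V                  # Q[a, s] = c(a, s) + beta * E[V(s')]
    V_new = Q.min(axis=0)                 # minimize over actions
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmin(axis=0)                 # optimal action in each state
print("V* =", V, "policy =", policy)
```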

643 citations



Journal ArticleDOI
Allan N. Rae1
TL;DR: A method is presented for determining the money value of additional information and of additional resources, and the expected cost of uncertainty, in the stochastic programming model.
Abstract: This paper presents a further development of discrete stochastic programming, viewed within the context of Bayesian decision theory. Some probability models and information structures (with and without additional information) are discussed, followed by an indication of how the stochastic programming matrix may be set up to reflect the various information structures. Some expected utility theories are then reviewed, and their usefulness in allowing the specification of a wide variety of objective functions for the stochastic programming model is illustrated. Lastly, a method is presented for determining the money value of additional information, additional resources, and the expected cost of uncertainty.
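The "money value of additional information" is the expected-value-of-perfect-information idea from Bayesian decision theory; a minimal sketch follows, with an invented two-act, two-state payoff table standing in for the paper's stochastic programming matrix.

```python
import numpy as np

# EVPI sketch: how much would perfect knowledge of the state of nature be worth?
payoff = np.array([[100.0, -20.0],   # payoff[a, s]: net revenue of act a in state s
                   [ 40.0,  30.0]])
prior = np.array([0.6, 0.4])         # prior probabilities of the states

# Expected value with only prior information: commit to the single best act.
ev_prior = (payoff @ prior).max()

# Expected value with perfect information: pick the best act in each state.
ev_perfect = (payoff.max(axis=0) * prior).sum()

evpi = ev_perfect - ev_prior         # upper bound on what extra information is worth
print(f"EV (prior) = {ev_prior:.1f}, EV (perfect info) = {ev_perfect:.1f}, EVPI = {evpi:.1f}")
```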

124 citations


Journal ArticleDOI
Allan N. Rae1
TL;DR: An empirical application of discrete stochastic programming is presented, including a discussion of data requirements, matrix construction, and solution interpretation; based on this empirical evidence, the problem-solving potential of the technique is evaluated.
Abstract: Discrete stochastic programming has been suggested as a means of solving sequential decision problems under uncertainty, but as yet little or no empirical evidence of the capabilities of this technique in solving such problems has appeared. This paper presents in some detail an empirical application of discrete stochastic programming, including a discussion of data requirements, matrix construction, and solution interpretation. Based on this empirical evidence, the problem-solving potential of the technique is evaluated.

122 citations


Book
01 Jan 1971

114 citations


Journal ArticleDOI
TL;DR: A study is reported in which an optimal operating policy for a multipurpose reservoir was determined by a stochastic dynamic programming approach, with the policy stated in terms of the state of the reservoir as indicated by the storage volume and the river flow in the preceding month.

Abstract: For a multipurpose single reservoir a deterministic optimal operating policy can be readily devised by the dynamic programming method. However, this method can only be applied to sets of deterministic stream flows, as might be used repetitively in a Monte Carlo study or possibly in a historical study. This paper reports a study in which an optimal operating policy for a multipurpose reservoir was determined by a stochastic dynamic programming approach; the policy is stated in terms of the state of the reservoir, indicated by the storage volume and the river flow in the preceding month. Such a policy could be implemented in real-time operation on a monthly basis, or it could be used in a design study. As contrasted with deterministic dynamic programming, this method avoids the artificiality of using a single set of stream flows. The data for this study are the conditional probabilities of the stream flow in successive months, the physical features of the reservoir in question, and the return functions and constraints under which the system operates.
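A toy version of the backward recursion the paper describes, assuming invented storage levels, flow classes, conditional flow probabilities, and a stand-in return function:

```python
import numpy as np

# Stochastic DP for reservoir operation: state = (storage, previous flow class).
S = np.arange(0, 5)                 # discretized storage volumes
Q = np.array([1, 3])                # low / high inflow classes
P_flow = np.array([[0.7, 0.3],      # P_flow[i, j]: P(next class j | current class i),
                   [0.4, 0.6]])     # the paper's conditional stream-flow probabilities
S_MAX, T = S[-1], 12

def benefit(release):               # stand-in return function for the release decision
    return np.sqrt(release)

V = np.zeros((len(S), len(Q)))      # terminal value
policy = np.zeros((T, len(S), len(Q)), dtype=int)
for t in reversed(range(T)):
    V_new = np.full_like(V, -np.inf)
    for si, s in enumerate(S):
        for qi, q in enumerate(Q):
            for r in range(0, s + q + 1):          # feasible monthly releases
                s_next = min(s + q - r, S_MAX)     # spill above capacity
                val = benefit(r) + P_flow[qi] @ V[s_next]
                if val > V_new[si, qi]:
                    V_new[si, qi], policy[t, si, qi] = val, r
    V = V_new
print("month-0 release policy:\n", policy[0])
```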

101 citations


Journal ArticleDOI
TL;DR: Existence, uniqueness, and characterizing properties are given for a class of constrained minimization problems in real Euclidean space; the solutions of these problems, which are discrete analogues of generalized splines, are called discrete splines.
Abstract: Existence, uniqueness and characterizing properties are given for a class of constrained minimization problems in real Euclidean space. These problems are the discrete analogues of minimization problems in Banach space whose solutions are generalized splines. Solutions of these discrete problems, which are called discrete splines, can be obtained by algorithms of mathematical programming.
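One way to compute such a discrete spline by mathematical programming, as the abstract indicates, is to solve the KKT system of an equality-constrained quadratic problem; the data and the second-difference objective below are illustrative choices, not the paper's general formulation.

```python
import numpy as np

# Discrete spline: minimize the sum of squared second differences of x
# subject to interpolating prescribed values at given knot indices.
n = 21
knots = np.array([0, 5, 10, 15, 20])       # indices where values are prescribed
vals  = np.array([0.0, 1.0, 0.5, 2.0, 1.5])

# Second-difference operator D (shape (n-2, n)).
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

A = np.zeros((len(knots), n))              # interpolation constraints A x = vals
A[np.arange(len(knots)), knots] = 1.0

# KKT system for  min ||D x||^2  s.t.  A x = vals.
K = np.block([[2 * D.T @ D, A.T],
              [A, np.zeros((len(knots), len(knots)))]])
rhs = np.concatenate([np.zeros(n), vals])
x = np.linalg.solve(K, rhs)[:n]
print(np.round(x[knots], 6))               # reproduces the prescribed values
```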

74 citations


Journal ArticleDOI
TL;DR: A guide is provided to the literature on random search methods for parameter optimization problems, describing some of the theoretical results obtained as well as the development of practical algorithms.

Abstract: A class of algorithms known as random search methods has been developed for obtaining solutions to parameter optimization problems. This paper provides a guide to the literature in this area, while describing some of the theoretical results obtained as well as the development of practical algorithms. Included are brief descriptions of the problems associated with inequality constraints, noisy measurements, and the location of the global optimum. An attempt is made to indicate types of problems for which random search methods are especially attractive.
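A minimal adaptive random-search routine in the spirit of the algorithms surveyed; the test function, contraction rule, and constants are arbitrary choices for illustration.

```python
import numpy as np

# Random search: sample a perturbation, keep it if it improves the objective,
# shrink the step size after repeated failures.
def f(x):
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

rng = np.random.default_rng(0)
x = np.zeros(2)
fx, step, fails = f(x), 1.0, 0
for _ in range(5000):
    cand = x + step * rng.standard_normal(2)
    fc = f(cand)
    if fc < fx:
        x, fx, fails = cand, fc, 0
    else:
        fails += 1
        if fails >= 20:              # contract the search region after 20 failures
            step, fails = 0.5 * step, 0
print(x, fx)
```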

53 citations


Journal ArticleDOI
TL;DR: The general discrete-time linear quadratic stochastic control problem is solved in two steps, first using dynamic programming to obtain a solution to the stochastic control problem in which perfect measurements of the state are available, then converting the noisy-measurement problem into one of that form.
Abstract: This paper treats the general discrete-time linear quadratic stochastic control problem. This problem is solved in two steps. Dynamic programming is used to obtain a solution to the stochastic control problem in which perfect measurements of the state are available. Then the stochastic control problem in which only noisy measurements of a linear operator on the state are available is converted into a new stochastic control problem in which perfect measurements of the state are available. This conversion is based upon Kalman filter theory and is valid whenever the disturbances and measurement noises are Gaussian.
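The perfect-measurement step the abstract describes amounts to a backward Riccati recursion that yields the feedback gains; a sketch with invented system and cost matrices:

```python
import numpy as np

# Backward Riccati recursion for the discrete-time LQ problem with perfect
# state measurements (the dynamic-programming step).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Qc = np.eye(2)                 # state cost
Rc = np.array([[1.0]])         # control cost
N = 50

P = Qc.copy()                  # terminal cost-to-go
gains = []
for _ in range(N):
    K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)   # u_k = -K x_k
    P = Qc + A.T @ P @ (A - B @ K)                        # updated cost-to-go
    gains.append(K)
print("steady-state gain K =", gains[-1])
```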

51 citations


Journal ArticleDOI
TL;DR: It is proved that the optimal control law can be realized by the cascade of a Kalman filter and a linear feedback; detailed discussion of the assumptions required in this proof provides some motivation for different extension results.
Abstract: The problem of controlling stochastic linear systems with quadratic criteria is considered. It is proved that the optimal control law can be realized by the cascade of a Kalman filter and a linear feedback. The importance of different assumptions required in this proof is discussed in detail. This discussion provides some motivation for different extension results.
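A simulation sketch of the cascade structure the paper proves optimal: a Kalman filter produces the state estimate, and a linear feedback acts on that estimate. The system matrices, noise levels, and feedback gain here are assumptions for illustration; in practice the gain would come from a Riccati recursion such as the one sketched above.

```python
import numpy as np

# Closed loop: Kalman filter state estimate fed into a fixed linear feedback.
rng = np.random.default_rng(1)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
W, V = 0.01 * np.eye(2), np.array([[0.1]])   # process / measurement noise covariances
K = np.array([[0.6, 1.2]])                   # assumed feedback gain

x, xhat, P = np.array([5.0, 0.0]), np.zeros(2), np.eye(2)
for _ in range(100):
    u = -K @ xhat                            # feedback acts on the estimate
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), W)
    y = C @ x + rng.normal(0.0, np.sqrt(V[0, 0]), 1)
    # Kalman filter: time update, then measurement update.
    xhat, P = A @ xhat + (B @ u).ravel(), A @ P @ A.T + W
    L = P @ C.T @ np.linalg.inv(C @ P @ C.T + V)
    xhat = xhat + (L @ (y - C @ xhat)).ravel()
    P = (np.eye(2) - L @ C) @ P
print("final state:", x, "estimate:", xhat)
```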

Journal ArticleDOI
TL;DR: This paper presents a discrete stochastic programming model for commercial bank bond portfolio management that provides an optimization technique that explicitly takes into consideration the dynamic nature of the problem and that incorporates risk by treating future cash flows and interest rates as discrete random variables.
Abstract: This paper presents a discrete stochastic programming model for commercial bank bond portfolio management. It differs from previous bond portfolio models in that it provides an optimization technique that explicitly takes into consideration the dynamic nature of the problem and that incorporates risk by treating future cash flows and interest rates as discrete random variables. The model's data requirements and its computational demands are sufficiently limited so that it can be implemented as a normative aid to bond portfolio management. In addition, it can be extended by the addition of other asset and liability categories to serve as a more general model for commercial bank asset and liability management.
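The flavor of treating returns as discrete random variables can be shown with a one-period, two-asset scenario LP; this is a drastic simplification of the paper's multiperiod model, with invented returns and constraints.

```python
import numpy as np
from scipy.optimize import linprog

# One-period sketch: gross returns on two bond classes are discrete random
# variables; maximize expected terminal value subject to a budget and a cap.
scenarios = np.array([[1.06, 1.09],    # gross returns per scenario
                      [1.05, 0.97],
                      [1.04, 1.02]])
prob = np.array([0.3, 0.4, 0.3])
expected = prob @ scenarios            # expected gross return of each bond class

# linprog minimizes, so negate expected returns; x = dollars in each class.
res = linprog(c=-expected,
              A_ub=[[0.0, 1.0]], b_ub=[60.0],     # at most $60 in the riskier class
              A_eq=[[1.0, 1.0]], b_eq=[100.0],    # invest the full $100 budget
              bounds=[(0, None), (0, None)])
print("allocation:", res.x, "expected terminal value:", -res.fun)
```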

Journal ArticleDOI
TL;DR: Optimization problems involving linear systems with retardations in the controls are studied in a systematic way in this paper, where controllability, existence and uniqueness of the optimal control, sufficient conditions, techniques of synthesis, and dynamic programming are discussed.
Abstract: Optimization problems involving linear systems with retardations in the controls are studied in a systematic way. Some physical motivation for the problems is discussed. The topics covered are: controllability, existence and uniqueness of the optimal control, sufficient conditions, techniques of synthesis, and dynamic programming. A number of solved examples are presented.

Journal ArticleDOI
TL;DR: The application of linear and quadratic programming to optimal control problems and to stochastic or deterministic system design problems is discussed and illustrated with examples.
Abstract: The application of linear and quadratic programming to optimal control problems and to stochastic or deterministic system design problems is discussed and illustrated with examples.
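One classical instance of such an application, offered here as an illustration rather than as the paper's own example, is posing a minimum-fuel control problem as a linear program by splitting each control into positive and negative parts.

```python
import numpy as np
from scipy.optimize import linprog

# Minimum-fuel control, min sum |u_k|, for a double integrator steered to the
# origin; |u| is linearized via u = u_plus - u_minus. Horizon and dynamics
# are arbitrary illustrative choices.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
x0, N = np.array([10.0, 0.0]), 20

# x_N = A^N x0 + sum_k A^(N-1-k) B u_k  ==  0  (two equality constraints).
G = np.hstack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])
A_eq = np.hstack([G, -G])                      # columns for u_plus and u_minus
b_eq = -np.linalg.matrix_power(A, N) @ x0
c = np.ones(2 * N)                             # sum(u_plus + u_minus) = sum |u|
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * N))
u = res.x[:N] - res.x[N:]
print("fuel used:", np.abs(u).sum())
```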

Journal ArticleDOI
TL;DR: Linear programming versions of some control problems on Markov chains are derived and studied under conditions typical of problems that arise by discretizing continuous-time, continuous-state systems or in discrete-state systems.

Abstract: Linear programming versions of some control problems on Markov chains are derived, and are studied under conditions which occur in typical problems arising from the discretization of continuous-time, continuous-state systems or in discrete-state systems. Control interpretations of the dual variables and simplex multipliers are given. The formulation allows the treatment of ‘state space’-like constraints which cannot be handled conveniently with dynamic programming. The relation between dynamic programming on Markov chains and the deterministic discrete maximum principle is explored, and some insight is obtained into the problem of singular stochastic controls (with respect to a stochastic maximum principle).
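A small invented instance of the primal LP for a discounted chain: the value function solves a linear program whose constraints run over state-action pairs; the dual variables of those rows are, loosely, discounted state-action frequencies, which is the control interpretation mentioned in the abstract.

```python
import numpy as np
from scipy.optimize import linprog

# LP version of a discounted Markov-chain control problem:
#   min sum_s v(s)  s.t.  v(s) >= r(s,a) + beta * sum_s' P(s'|s,a) v(s')  for all (s,a).
P = np.array([[[0.8, 0.2], [0.3, 0.7]],    # P[a, s, s']
              [[0.5, 0.5], [0.9, 0.1]]])
r = np.array([[1.0, 0.0],                  # r[a, s]
              [0.0, 2.0]])
beta, nS, nA = 0.9, 2, 2

# Rewrite each constraint as  beta * P[a, s] . v - v(s) <= -r[a, s].
A_ub, b_ub = [], []
for a in range(nA):
    for s in range(nS):
        row = beta * P[a, s]               # fresh array; safe to modify
        row[s] -= 1.0
        A_ub.append(row)
        b_ub.append(-r[a, s])
res = linprog(c=np.ones(nS), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * nS)
print("value function:", res.x)
```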

Journal ArticleDOI
TL;DR: Conditions under which stochastic dynamic programs easily reduce to static deterministic programs are investigated; though strict, the conditions are rich enough to aid in the solution of a number of practical problems.
Abstract: This paper investigates conditions under which stochastic dynamic programs easily reduce to static deterministic programs. The conditions, though strict, are still rich enough to aid in the solution of a number of practical problems.


01 Oct 1971
TL;DR: The control of stochastic dynamic systems is studied, with particular emphasis on controls that influence the quality or nature of the measurements made to effect control, and on the means by which dynamic programming may be applied to solve a combined control/measurement problem.

Abstract: The control of stochastic dynamic systems is studied, with particular emphasis on controls that influence the quality or nature of the measurements made to effect control. Four main areas are discussed: (1) the meaning of stochastic optimality and the means by which dynamic programming may be applied to solve a combined control/measurement problem; (2) a technique by which it is possible to apply deterministic methods, specifically the minimum principle, to the study of stochastic problems; (3) application of the described methods to linear systems with Gaussian disturbances to study the structure of the resulting control system; and (4) several applications.

Journal ArticleDOI
01 Jan 1971
TL;DR: Stochastic programming problems appear when decisions are made in situations with uncertainty and risk, where any action has an ambiguous outcome and each solution x = (x1, …, xn) can be associated with indicators fi(x, ω) that depend on x and on the state of nature ω, an element of the probability space (Ω, A, P).

Abstract: Stochastic programming problems appear when we make decisions in situations with uncertainty and risk, when any action has an ambiguous outcome and to each solution x = (x1, …, xn) it is possible to associate certain indicators fi(x, ω), i = 1, …, m, that depend on x and on the state of nature ω, which is an element of the probability space (Ω, A, P). Since for any x the value of the objective function f1(x, ω) and of the constraint functions fi(x, ω), i = 2, …, m, will depend on the realization ω, we have great freedom in determining the feasible and the optimal solutions of stochastic programming problems; for example, in deciding whether they should be deterministic or random.
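A compact modern statement of this problem class; the chance-constrained form below is one standard way to make feasibility precise, offered as an illustration rather than as the chapter's own definition.

```latex
\min_{x}\ \mathbb{E}_{\omega}\bigl[f_{1}(x,\omega)\bigr]
\quad \text{subject to} \quad
\mathbb{P}\bigl\{\omega : f_{i}(x,\omega) \le 0\bigr\} \ge \alpha_{i},
\qquad i = 2,\dots,m
```

Here x = (x1, …, xn) may be required to be deterministic, or x may itself be a random decision rule, which is exactly the freedom the abstract mentions.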


01 Jan 1971
TL;DR: An algorithm and a computational procedure are developed for optimization of multidimensional, nonlinear, discrete dynamic processes; the algorithm is based on dynamic programming but is free of the dimensionality problems usually associated with it.

Abstract: The subject of this thesis is the development of an algorithm and a computational procedure for optimization of multidimensional, nonlinear, discrete and dynamic processes. The algorithm is based on dynamic programming, but it is free of the dimensionality problems usually associated with dynamic programming. Bounds on both state and control variables are accounted for. The contents of the thesis are summarized as follows. First, a review is made of several techniques described in the literature which use the method of dynamic programming. Second, the method of region-limiting strategies is described, together with the use of functional approximation to represent the minimal cost function. Third, a procedure is presented to reduce the computing effort when a quadratic polynomial is used as the approximating function. Fourth, a computer program to implement the present method is described, and the results obtained by applying the method to several different trajectory optimization problems are given. Fifth, some parallel-processing and array-processing systems are reviewed, and procedures are described for adapting the method of region-limiting strategies for implementation on such machines. Sixth, recommendations are made to further develop the technique for a wider range of optimization problems.
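The core of the functional-approximation idea, fitting a quadratic to the minimal cost function over a limited region at each stage, can be sketched as follows; the dynamics, cost, sampling region, and control grid are invented stand-ins for the thesis's actual procedure.

```python
import numpy as np

# At each stage, sample states in a limited region, compute the one-step cost
# plus the fitted next-stage value, and refit a quadratic by least squares.
rng = np.random.default_rng(2)

def step_cost(x, u):
    return x @ x + 0.1 * u ** 2

def dynamics(x, u):
    return np.array([x[0] + x[1], x[1] + u])

def features(x):                       # quadratic monomial basis in two states
    return np.array([1.0, x[0], x[1], x[0]**2, x[0]*x[1], x[1]**2])

V_coef = np.zeros(6)                   # terminal cost-to-go approximated by 0
for stage in range(10):                # backward over stages
    X = rng.uniform(-2.0, 2.0, size=(50, 2))   # sample the limited region
    targets = []
    for x in X:
        u_grid = np.linspace(-1.0, 1.0, 21)    # bounded control grid
        vals = [step_cost(x, u) + features(dynamics(x, u)) @ V_coef
                for u in u_grid]
        targets.append(min(vals))
    Phi = np.array([features(x) for x in X])
    V_coef, *_ = np.linalg.lstsq(Phi, np.array(targets), rcond=None)
print("fitted quadratic coefficients:", V_coef)
```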


Journal ArticleDOI
TL;DR: Using the example of several two-dimensional problems in the theory of elasticity, the article demonstrates a new calculating algorithm that makes it possible to use one of the methods of mathematical optimization, i.e., dynamic programming in a discrete form.

Abstract: 1. Using the example of several two-dimensional problems in the theory of elasticity, the present article has demonstrated the use of a new calculating algorithm, making it possible to use one of the methods of mathematical optimization, i.e., dynamic programming in a discrete form. 2. The proposed algorithm, with certain modifications, can be extended also to the case of irregular regions.


01 Jan 1971
TL;DR: The methods are discussed and compared, both in theory and by means of sample problems using computer programs supplied with the report, on the basis of such factors as efficiency, handling of constraints, and termination mechanisms.
Abstract: The report surveys a number of optimization methods which can be applied to non-differentiable functions. The methods include both deterministic and non-deterministic approaches to the solution of problems in optimization. The methods are discussed and compared, both in theory and by means of sample problems using computer programs supplied with the report, on the basis of such factors as efficiency, handling of constraints, and termination mechanisms. Some guidelines are offered for selection of techniques and programs most suitable for several types of problems.


01 Jan 1971
Abstract: The linear programming model with stochastic elements in the vector of cost coefficients or the vector of resource requirements has been approached in many ways. The foremost attempts at a solution involve the transformation of the model to a deterministic equivalent. There are a number of deterministic equivalents which have been developed for this purpose. The objective of this study is to develop an experimental model which can be used to evaluate proposed deterministic equivalents to the stochastic programming model. This experimental model has been designed to determine the responses of a deterministic equivalent to induced changes in the properties and the positions of the stochastic parameters which appear in the linear programming model. Three different linear deterministic equivalents were evaluated in this study: the one-stage expected-value approach, the two-stage slack approach to programming under uncertainty, and the active approach to linear programming under risk. The experimental model was used to evaluate, in turn, two different variations of an empirical stochastic linear programming problem in terms of each deterministic equivalent. Two variations of the empirical problem were analyzed so that conclusions could be stated for either a tightly constrained or a slightly constrained problem. A Monte Carlo simulation of each of these empirical problems was also performed. The results of these simulations were used as standards with which to evaluate the results of each deterministic equivalent. The experimental procedure was divided into three phases. In the first phase the stochastic parameters were limited to the vector of resource requirements, in the second phase the stochastic parameters appeared only in the vector of cost coefficients, while in the third phase the stochastic parameters appeared in both vectors simultaneously. In all cases the stochastic parameters were assumed to be normally and independently distributed with known means and variances, while the nonstochastic parameters in the problem were assumed to be constant and equal to their expected values. In all three phases of the experiment the deterministic equivalents were analyzed for each experimental problem as the positions of the stochastic parameters changed and as the variances of the stochastic parameters increased. For all initial conditions and for each of the deterministic equivalents, the null hypothesis of no difference between the results of the simulation
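The study's comparison logic, solving a deterministic equivalent and checking it against a Monte Carlo simulation, can be sketched for the one-stage expected-value approach; the LP below is invented, with normally distributed resource requirements as in the study's assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Compare the expected-value deterministic equivalent of a small stochastic LP
# against a Monte Carlo simulation of the random resource vector b.
rng = np.random.default_rng(3)
c = np.array([-3.0, -5.0])                 # maximize 3x1 + 5x2 (negated for linprog)
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b_mean, b_std = np.array([10.0, 15.0]), np.array([1.0, 2.0])

# One-stage expected-value equivalent: replace random b by its mean.
x_ev = linprog(c, A_ub=A, b_ub=b_mean, bounds=[(0, None)] * 2).x

# Monte Carlo standard: resolve the LP for sampled b and average the optima.
vals = []
for _ in range(500):
    b = rng.normal(b_mean, b_std)
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    if res.success:
        vals.append(-res.fun)
print("EV-equivalent objective:", -(c @ x_ev))
print("Monte Carlo mean optimum:", np.mean(vals))
```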