
Showing papers on "Stochastic programming published in 1968"


Journal ArticleDOI
TL;DR: In this paper, a method is presented for solving linear programming problems where (any number of) the functional, restraint, and input-output coefficients are subject to discrete probability distributions, with the objective function formulated in terms of variance and/or expectation.
Abstract: A method is presented for solving linear programming problems where (any number of) the functional, restraint, and input-output coefficients are subject to discrete probability distributions. The objective function is formulated in terms of variance and/or expectation. The procedure involves the simultaneous generation of all (mutually exclusive) possible outcomes and hence the transference of all variability into the objective function of a very much enlarged linear program.

177 citations
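The enumeration idea can be sketched in a few lines: generate all mutually exclusive joint outcomes of the discrete coefficient distributions, then (for the expectation case) fold the variability into the objective of a single enlarged linear program. The numbers and the use of SciPy's `linprog` are illustrative assumptions, not the paper's own example.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Hypothetical discrete distributions for two cost coefficients:
# each coefficient independently takes one of two values.
c1_vals, c1_probs = [3.0, 5.0], [0.6, 0.4]
c2_vals, c2_probs = [2.0, 4.0], [0.5, 0.5]

# Enumerate all mutually exclusive joint outcomes (scenarios).
scenarios = []
for (c1, p1), (c2, p2) in itertools.product(zip(c1_vals, c1_probs),
                                            zip(c2_vals, c2_probs)):
    scenarios.append((np.array([c1, c2]), p1 * p2))

# Expectation objective: the variability is transferred into the
# objective of one enlarged (here: probability-weighted) linear program.
expected_c = sum(p * c for c, p in scenarios)

# minimize E[c]^T x  subject to  x1 + x2 >= 10, x >= 0
res = linprog(c=expected_c, A_ub=[[-1.0, -1.0]], b_ub=[-10.0],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # all weight goes to the cheaper-in-expectation variable
```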


Book
01 Jan 1968
TL;DR: In this article, the authors present selected applications of nonlinear programming in some detail, beginning with a general introduction to nonlinear programming that contains definitions, classifications of problems, mathematical characteristics, and solution procedures.
Abstract: : The report presents selected applications of nonlinear programming in some detail. The first chapter, which is a general introduction to nonlinear programming, contains definitions, classifications of problems, mathematical characteristics, and solution procedures. The remaining chapters deal with various problems and their nonlinear programming models.

152 citations


Journal ArticleDOI
TL;DR: Under suitable normality assumptions this problem is amenable to a quadratic programming formulation and the objective function consists of the maximization of the probability that a realization (in terms of target variables) will lie in a confidence region of predetermined size.
Abstract: This paper deals with the problem of attaining a set of targets (goals) by means of a set of instruments (subgoals) when the relation between the two groups of variables can be expressed with a linear system of stochastic equations. The objective function consists of the maximization of the probability that a realization (in terms of target variables) will lie in a confidence region of predetermined size. Under suitable normality assumptions this problem is amenable to a quadratic programming formulation.

120 citations
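One way to read the reduction: under the normality assumptions, maximizing the probability that the realization falls in a fixed-size confidence region around the targets amounts to minimizing a quadratic form in the instruments, which is a quadratic program. A minimal numerical sketch, with invented matrices (not the paper's model):

```python
import numpy as np

# Hypothetical linear stochastic system: targets y = A x + eps,
# eps ~ N(0, Sigma); t is the target vector.
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])
Sigma = np.array([[0.5, 0.1],
                  [0.1, 0.4]])
t = np.array([4.0, 3.0])

# Maximizing Pr(y lies in a fixed ellipsoidal region around t) under
# normality amounts to minimizing the quadratic form
#     (A x - t)' Sigma^{-1} (A x - t);
# unconstrained, this quadratic program has a closed-form minimizer.
W = np.linalg.inv(Sigma)
x_star = np.linalg.solve(A.T @ W @ A, A.T @ W @ t)
print(x_star)   # instrument settings centering the mean of y on t
```

With additional linear restrictions on the instruments this becomes a constrained QP rather than a linear solve.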


Book
01 Jan 1968
TL;DR: In this paper, a comprehensive treatment of stochastic systems is presented, beginning with the foundations of probability and ending with optimal control, which leads to the solution of optimal control problems resulting in controllers with significant practical application.
Abstract: A comprehensive treatment of stochastic systems beginning with the foundations of probability and ending with stochastic optimal control. The book divides into three interrelated topics. First, the concepts of probability theory, random variables and stochastic processes are presented, which leads easily to expectation, conditional expectation, and discrete time estimation and the Kalman filter. With this background, stochastic calculus and continuous-time estimation are introduced. Finally, dynamic programming for both discrete-time and continuous-time systems leads to the solution of optimal stochastic control problems resulting in controllers with significant practical application. This book will be valuable to first year graduate students studying systems and control, as well as professionals in this field.

102 citations


Journal ArticleDOI
TL;DR: An illustrative example is developed from an actual application of goal programming to media planning over a period of time, which involves distributions of frequencies by demographic and other characteristics as well as budget and other constraining limitations.
Abstract: An illustrative example is developed from an actual application of goal programming to media planning over a period of time. These goals involve distributions of frequencies by demographic and other characteristics as well as budget and other constraining limitations.

29 citations


Journal ArticleDOI
TL;DR: The technique is to replace probability distributions by their corresponding expectations, and to use the values of the states in the corresponding deterministic system under its optimal policy to determine an approximate policy in the stochastic system through a single application of Howard's policy improvement operation.
Abstract: This note describes and illustrates a computational technique to obtain approximate solutions to stochastic dynamic programming problems. The technique is to replace probability distributions by their corresponding expectations, and to use the values of the states in the corresponding deterministic system under its optimal policy to determine an approximate policy in the stochastic system through a single application of Howard's policy improvement operation. Two examples are given.

25 citations
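The technique can be illustrated on a toy problem: solve the deterministic system in which the random quantity is replaced by its expectation, then apply a single step of Howard's policy improvement in the stochastic system using the deterministic values. The inventory model below is a hypothetical stand-in for the paper's examples; all numbers are illustrative.

```python
# Hypothetical single-item inventory MDP (illustrative numbers).
STATES = range(4)          # stock on hand, capped at 3
ACTIONS = range(4)         # order quantity
DEMAND = {0: 0.25, 1: 0.5, 2: 0.25}   # E[D] = 1
GAMMA = 0.9

def step_cost(s, a, d):
    stock = min(s + a, 3)
    holding = 1.0 * max(stock - d, 0)
    shortage = 4.0 * max(d - stock, 0)
    return 2.0 * a + holding + shortage

def next_state(s, a, d):
    return max(min(s + a, 3) - d, 0)

# 1) Solve the *deterministic* system where D is replaced by E[D] = 1.
V = {s: 0.0 for s in STATES}
for _ in range(500):                       # value iteration
    V = {s: min(step_cost(s, a, 1) + GAMMA * V[next_state(s, a, 1)]
                for a in ACTIONS)
         for s in STATES}

# 2) One application of Howard's policy improvement in the stochastic
#    system, plugging in the deterministic values V.
def q(s, a):
    return sum(p * (step_cost(s, a, d) + GAMMA * V[next_state(s, a, d)])
               for d, p in DEMAND.items())

policy = {s: min(ACTIONS, key=lambda a: q(s, a)) for s in STATES}
print(policy)   # approximate policy for the stochastic system
```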


Journal ArticleDOI
TL;DR: The purpose of this paper is to develop a way of looking at stochastic programming problems which is natural in statistical decision theory, to relate this approach to previous research on linear programming under risk (in which it is implicit), and to make a detailed investigation of one type of stochastic linear programming problem within this framework.

20 citations


Journal ArticleDOI
TL;DR: This paper shows that chance-constrained equivalents exist for several stochastic programming problems that are concerned with selecting a decision vector which will optimize the expectation of a random loss function, sometimes subject to deterministic constraints.
Abstract: This paper shows that chance-constrained equivalents exist for several stochastic programming problems that are concerned with selecting a decision vector which will optimize the expectation of a random loss function, sometimes subject to deterministic constraints. The equivalent chance-constrained problems are concerned with selecting a decision vector that will optimize a deterministic function subject to one or more probability constraints of the form Pr[y_i ≥ β_i] ≥ α_i, i ∈ I, and sometimes also subject to deterministic constraints. Equivalence is shown for stochastic programming problems with either linear or nonlinear random loss functions, including many stochastic scheduling and inventory problems. Such stochastic programming problems may be converted to equivalent chance-constrained problems that can be solved in their certainty equivalent form, for example, by the Horizon Method of Stochastic Scheduling.

20 citations
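A one-dimensional sketch of such an equivalence, with invented numbers: for a normally distributed quantity, the chance constraint Pr[x ≥ D] ≥ α has the deterministic (certainty equivalent) form x ≥ μ + Φ⁻¹(α)·σ.

```python
from statistics import NormalDist

# Hypothetical illustration: choose stock x so that
# Pr[x >= demand] >= alpha, with demand ~ N(mu, sigma).
mu, sigma, alpha = 100.0, 15.0, 0.95

# Deterministic equivalent of the chance constraint:
#     x >= mu + z_alpha * sigma,   z_alpha = Phi^{-1}(alpha)
z = NormalDist().inv_cdf(alpha)
x_min = mu + z * sigma
print(round(x_min, 1))   # ~124.7

# Check: the deterministic equivalent meets the chance constraint exactly.
prob = NormalDist(mu, sigma).cdf(x_min)
print(round(prob, 3))    # 0.95
```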


Journal ArticleDOI
TL;DR: A method is presented for selecting a subset of features from a specified set when economic considerations prevent utilization of the complete set, and the formulation of the feature selection problem as a dynamic programming problem permits an optimal solution to feature selection problems which previously were uncomputable.
Abstract: A method is presented for selecting a subset of features from a specified set when economic considerations prevent utilization of the complete set. The formulation of the feature selection problem as a dynamic programming problem permits an optimal solution to feature selection problems which previously were uncomputable. Although optimality is defined in terms of a particular measure, the Fisher return function, other criteria may be substituted as appropriate to the problem at hand. This mathematical model permits the study of interactions among processing time, cost, and probability of correctly classifying patterns, thus illustrating the advantages of dynamic programming. The natural limitation of the model is that the only features which can be selected are those supplied by its designer. Conceptually, the dynamic programming approach can be extended to problems in which several constraints limit the selection of features, but the computational difficulties become dominant as the number of constraints grows beyond two or three.

19 citations
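The dynamic-programming formulation can be sketched as a knapsack-style recursion over features under a cost budget. The additive per-feature scores below stand in for the paper's Fisher return function, and the feature names, costs, and scores are invented for illustration.

```python
# Hypothetical feature set: (name, cost, score).  An additive
# per-feature score stands in for the Fisher return function.
features = [("f1", 3, 10.0), ("f2", 2, 7.0), ("f3", 4, 12.0), ("f4", 1, 3.0)]
budget = 6

# Dynamic program: best[b] = (score, chosen features) achievable with
# total cost <= b, filled in one feature at a time.
best = [(0.0, [])] * (budget + 1)
for name, cost, score in features:
    new = best[:]
    for b in range(cost, budget + 1):
        prev_score, prev_set = best[b - cost]
        if prev_score + score > new[b][0]:
            new[b] = (prev_score + score, prev_set + [name])
    best = new

score, chosen = best[budget]
print(chosen, score)
```

With several simultaneous resource constraints the state space gains one budget dimension per constraint, which is the computational growth the abstract warns about.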


Journal ArticleDOI
TL;DR: Existing methods of optimal design are shown to correspond to various formulations of mathematical programs, and the duality theorems of mathematical programming can be used to obtain necessary and sufficient criteria of optimality.

18 citations


Journal ArticleDOI
TL;DR: In this article, the adaptive case is shown to be reducible to the stochastic one, and the optimal policies and expected gains are determined for some important classes of information structures.
Abstract: Problems of multi-stage decision under uncertainty are usually classified into “stochastic” and “adaptive” ones, depending on whether the decision maker does or does not know the relevant probability distribution. If the Bayesian approach is taken, then, in the adaptive case, the decision maker is assumed to know the prior distribution of certain parameters. It is shown in the paper that the adaptive case is then reducible to the stochastic one. The problems can also be classified according to the kind of information memory available to the decision maker. In the paper, optimal policies and the expected gains they yield, are determined for some important classes of “information structures.” In cases in which added information does not increase the expected gain, a sufficient information structure is specified.

Journal ArticleDOI
TL;DR: This correspondence describes the formulation and solution of a nonlinear, non-Gaussian stochastic control problem, used to obtain the solution to the problem of optimally controlling a robot, equipped with sensors, that is operating in an unknown environment.
Abstract: This correspondence describes the formulation and solution of a nonlinear, non-Gaussian stochastic control problem. Dynamic programming is used to obtain the solution to the problem of optimally controlling a robot, equipped with sensors, that is operating in an unknown environment.


Journal ArticleDOI
TL;DR: It is shown that a two-stage stochastic program with recourse whose right-hand sides are random has optimal decision rules which are continuous and piecewise linear, but this result does not extend to programs with three or more stages.

01 Feb 1968
TL;DR: In this paper, the authors introduce risk aversion into stochastic programming with recourse, where the objective becomes to maximize the expected (concave) utility of the net payoffs.
Abstract: In stochastic programming with recourse the objective is to maximize expected net payoff. This implicitly assumes no aversion to risk. This paper introduces risk aversion into stochastic programming with recourse. The objective becomes to maximize the expected (concave) utility of the net payoffs. Because of the special structure of the problem a number of computational shortcuts are possible in the mathematical program that results. The latest representation of the gradient is but a slight modification of the latest representation of the linear objective function without risk aversion. All the second-stage problems can be solved as linear programs. Unfortunately it appears necessary to solve the first-stage problem as a nonlinear program.
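The effect of the concave utility can be seen in a toy two-stage problem, a newsvendor-style recourse with invented numbers: the risk-neutral objective and the expected-utility objective select different first-stage decisions.

```python
import math

# Hypothetical two-stage sketch: the first stage buys x units at cost 1;
# the recourse (second stage) sells min(x, D) at price 3 under random
# demand D.  Risk neutrality maximizes expected net payoff; the
# risk-averse variant maximizes expected concave utility of the payoff.
scenarios = [(5, 0.5), (15, 0.5)]        # (demand, probability)

def payoff(x, d):
    return 3.0 * min(x, d) - 1.0 * x     # second-stage revenue minus cost

def expected(f, x):
    return sum(p * f(payoff(x, d)) for d, p in scenarios)

utility = lambda w: 1.0 - math.exp(-0.3 * w)   # concave (risk-averse)

xs = range(0, 16)
x_neutral = max(xs, key=lambda x: expected(lambda w: w, x))
x_averse = max(xs, key=lambda x: expected(utility, x))
print(x_neutral, x_averse)   # the risk-averse decision orders less
```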

Journal ArticleDOI
TL;DR: A multistage decision problem is optimized using a new formulation of stochastic dynamic programming that employs at one stage a Markov decision process with an infinite number of substages and shows how this process may be compressed and handled as one stage in the larger problem.
Abstract: A multistage decision problem is optimized using a new formulation of stochastic dynamic programming. The problem optimized in this paper concerns a semiconductor production process where the transitions at each work station are stochastic. The mathematical model employs at one stage a Markov decision process with an infinite number of substages and shows how this process may be compressed and handled as one stage in the larger problem.

Proceedings ArticleDOI
15 Jul 1968
TL;DR: The aim of this paper is to illustrate the application of backtrack programming to the design of welded plate girders.
Abstract: The object of engineering design is to satisfy some need of man with the maximization or minimization of some measure of effectiveness of the solution. Common measures of effectiveness are cost, cost-benefit ratio, and profit. In mathematical terminology an object or facility can be described by a list or vector of parameter values. The position of each element in the vector associates it with a particular parameter. The performance of the object or facility and the constraints imposed on the performance are described by a set of equalities and inequalities called the design equations. The effectiveness of the solution is indicated by the value obtained by evaluating an objective function with the design parameter values. The object of the design process is to determine the vector of parameter values, the optimum solution, which satisfies the design equations and maximizes or minimizes, as appropriate, the value of the objective function.

There are a number of mathematical procedures which are useful in optimization problems. However, most methods are applicable only to particular classes of problems. The application of linear programming, for example, is limited to problems in which the design variables are involved only in linear relationships. Dynamic programming is applicable to problems in which there is a sequential flow of information. Gradient search methods are useful in many problems, but may lead to incorrect results if the function is not unimodal. Difficulties are encountered also if one or more of the parameters are defined only at discrete values. Backtrack programming [2, 3], on the other hand, is generally applicable to optimization problems, including those for which more specialized techniques are available. The aim of this paper is to illustrate the application of backtrack programming to the design of welded plate girders.
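A minimal backtrack-programming sketch, with invented parameters rather than the paper's girder model: each design parameter is chosen from a discrete list, and a partial vector is abandoned as soon as it cannot satisfy the design equations or cannot beat the best complete design found so far.

```python
# Hypothetical discrete design problem: minimize cost subject to a
# required total "strength" (all lists and coefficients are invented).
choices = [[1, 2, 3], [2, 4, 6], [1, 3, 5]]   # values per design parameter
cost_w = [4.0, 2.0, 1.0]                      # cost contribution weights
strength_w = [1.0, 1.0, 2.0]                  # strength contribution weights
REQUIRED = 14.0                               # design equation: strength >= 14

best = {"cost": float("inf"), "design": None}

def max_remaining_strength(k):
    # Optimistic bound on strength still obtainable from parameters k..n-1.
    return sum(strength_w[i] * max(choices[i]) for i in range(k, len(choices)))

def backtrack(partial, cost, strength):
    k = len(partial)
    if cost >= best["cost"]:
        return                                # bound: cannot beat incumbent
    if strength + max_remaining_strength(k) < REQUIRED:
        return                                # infeasible: prune this branch
    if k == len(choices):
        best["cost"], best["design"] = cost, list(partial)
        return
    for v in choices[k]:
        backtrack(partial + [v],
                  cost + cost_w[k] * v,
                  strength + strength_w[k] * v)

backtrack([], 0.0, 0.0)
print(best)
```

The pruning tests are what distinguish backtracking from exhaustive enumeration: the search still visits only complete designs that could be optimal.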

Journal ArticleDOI
TL;DR: Dynamic programming is applied to an optimal stochastic orbital-transfer strategy, and the supporting computer program is described.
Abstract: Dynamic programming is applied to an optimal stochastic orbital-transfer strategy, and the supporting computer program is described.


Posted ContentDOI
TL;DR: The role of dynamic programming as a means of examining the allocation and pricing problems in the theory of the firm is considered in this paper, and it is concluded that some theoretical contribution may be possible by using dynamic programming to attack problems beyond the scope of conventional methods.
Abstract: The role of dynamic programming as a means of examining the allocation and pricing problems in the theory of the firm is considered in this paper. The production relationships and equilibrium conditions as specified by neoclassical theory and linear programming are stated and dynamic programming formulations of each of these models are constructed and compared. It is demonstrated that dynamic programming adds nothing to established theory in these cases, providing simply an alternative means of computation which might be preferred for some empirical problems. It is concluded that some theoretical contribution may be possible by using dynamic programming to attack problems beyond the scope of conventional methods.