
Showing papers on "Stochastic programming published in 1975"


Journal ArticleDOI
TL;DR: This short paper considers a discretization procedure often employed in practice and shows that the solution of the discretized algorithm converges to the solution of the continuous algorithm as the discretization grids become finer and finer.
Abstract: The computational solution of discrete-time stochastic optimal control problems by dynamic programming requires, in most cases, discretization of the state and control spaces whenever these spaces are infinite. In this short paper we consider a discretization procedure often employed in practice. Under certain compactness and Lipschitz continuity assumptions we show that the solution of the discretized algorithm converges to the solution of the continuous algorithm, as the discretization grids become finer and finer. Furthermore, any control law obtained from the discretized algorithm results in a value of the cost functional which converges to the optimal value of the problem.
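The convergence claim can be illustrated on a toy problem (a sketch with invented dynamics and costs, not the paper's setting): a finite-horizon quadratic-cost control problem is solved by dynamic programming on a coarse and on a fine grid, with nearest-grid-point projection of the successor state.

```python
# Toy check of grid-refinement convergence for discretized dynamic
# programming (illustrative sketch; not the paper's actual example).
# Dynamics: x' = clip(x + u) on [-1, 1], stage cost x^2 + u^2, horizon T.

def dp_value(n_pts, T=5, x0=1.0):
    xs = [-1.0 + 2.0 * i / (n_pts - 1) for i in range(n_pts)]  # state grid
    us = xs                                 # reuse the same grid for controls
    nearest = lambda x: min(range(n_pts), key=lambda i: abs(xs[i] - x))
    V = [0.0] * n_pts                       # terminal cost is zero
    for _ in range(T):
        V = [min(x * x + u * u + V[nearest(max(-1.0, min(1.0, x + u)))]
                 for u in us)
             for x in xs]
    return V[nearest(x0)]

v_coarse = dp_value(5)    # grid spacing 0.5
v_fine = dp_value(41)     # grid spacing 0.05
```

The fine grid recovers a lower cost, consistent with convergence toward the continuous optimum as the discretization is refined.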

191 citations


Book ChapterDOI
J. A. Tomlin1
01 Jan 1975
TL;DR: Particular attention is given to two “optimal” scaling methods, giving results on their speed and effectiveness (in terms of their optimality criteria) as well as their influence on the numerical behavior of the problem.
Abstract: The scaling of linear programming problems remains a rather poorly understood subject (as indeed it does for linear equations). Although many scaling techniques have been proposed, the rationale behind them is not always evident and very few numerical results are available. This paper considers a number of these techniques and gives numerical results for several real problems. Particular attention is given to two “optimal” scaling methods, giving results on their speed and effectiveness (in terms of their optimality criteria) as well as their influence on the numerical behavior of the problem.

79 citations


Journal ArticleDOI
01 Oct 1975-Ecology
TL;DR: In this paper, a general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival.
Abstract: Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of the general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population.
Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.
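A minimal sketch of the kind of feedback harvest model described above, computed by backward stochastic dynamic programming under the additive-mortality hypothesis (total survival S0 - h). All population, survival, growth and pond-state transition numbers below are invented for illustration.

```python
# Feedback harvest sketch: maximize expected discounted harvest over a
# Markovian environment (wet/dry pond state).  Data invented.
P_GRID = [i / 10 for i in range(11)]          # normalized population sizes
HARVESTS = [i / 10 for i in range(8)]         # harvest fractions 0.0 .. 0.7
GROWTH = {"wet": 1.4, "dry": 0.9}             # environment-dependent growth
TRANS = {"wet": {"wet": 0.7, "dry": 0.3},     # Markovian pond-state dynamics
         "dry": {"wet": 0.3, "dry": 0.7}}
S0, DISCOUNT, HORIZON = 0.8, 0.95, 10         # baseline survival, etc.

def nearest(p):
    return min(range(len(P_GRID)), key=lambda i: abs(P_GRID[i] - p))

V = {e: [0.0] * len(P_GRID) for e in TRANS}   # terminal values
for _ in range(HORIZON):
    V = {e: [max(h * p + DISCOUNT * sum(
                     pr * V[e2][nearest(min(p * max(S0 - h, 0.0)
                                            * GROWTH[e2], 1.0))]
                     for e2, pr in TRANS[e].items())
                 for h in HARVESTS)
             for p in P_GRID]
         for e in TRANS}
```

The computed value is a function of the observed state (population, environment), which is the direct feedback control structure the paper argues for.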

70 citations


Book ChapterDOI
01 Jan 1975
TL;DR: It is shown that stochastic optimization problems resemble deterministic optimization problems up to the nonanticipativity restriction on the choice of the control policy; the variables associated with this restriction form a martingale and yield an equivalent problem allowing pointwise optimization, a result strongly akin to the maximum principle.
Abstract: It is shown that stochastic optimization problems resemble deterministic optimization problems up to the nonanticipativity restriction on the choice of the control policy. This condition can be introduced explicitly in the form of a constraint. Doing so leads to an optimization problem for which it is possible to derive optimality criteria. The convex case is considered here. It is shown that the variables associated with the nonanticipativity restriction form a martingale. These variables can be used to formulate an equivalent problem allowing for pointwise optimization, a result strongly akin to the maximum principle. Finally, two simple examples are used to illustrate the main results.
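The nonanticipativity restriction can be made concrete in a two-scenario sketch (invented costs and demands, not the chapter's own examples): forcing a single first-stage decision across scenarios can only increase the optimal expected cost relative to the anticipative relaxation in which the decision may depend on the realized scenario.

```python
# Two-scenario newsvendor sketch: the nonanticipative first-stage
# decision must be common to all scenarios, while a clairvoyant
# decision may depend on the realized demand.  Costs are invented.
DEMANDS = {1.0: 0.5, 3.0: 0.5}      # demand scenario -> probability
ORDER_COST, SHORTAGE_COST = 1.0, 2.0

def cost(x, d):
    return ORDER_COST * x + SHORTAGE_COST * max(d - x, 0.0)

xs = [0.5 * i for i in range(7)]    # candidate first-stage decisions 0 .. 3

# One x for all scenarios (nonanticipative, i.e. implementable):
v_nonant = min(sum(p * cost(x, d) for d, p in DEMANDS.items()) for x in xs)
# One x per scenario (the nonanticipativity constraint relaxed):
v_clair = sum(p * min(cost(x, d) for x in xs) for d, p in DEMANDS.items())
```

The gap v_nonant - v_clair is the expected value of perfect information; in the chapter's framework, the multipliers pricing the nonanticipativity constraint form a martingale.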

50 citations


Journal ArticleDOI
TL;DR: In this article, the authors treat the problem of existence of optimal control for a large class of delay-differential Ito equations, where the control is a non-anticipative measurable function of the trajectory (the case of complete information).
Abstract: The paper treats the problem of existence of optimal controls for a large class of delay-differential Ito equations, where the control is a nonanticipative measurable function of the trajectory (the case of complete information). The technique, which seems simpler than past approaches to the problem, requires the use of results on weak convergence of measures, and gives fairly general results. Control can be either over a fixed-time interval, or it can terminate when a target set is reached, and there can be additional (almost everywhere continuous) side constraints.

47 citations


Journal ArticleDOI
TL;DR: In this paper, the linearly constrained multistage stochastic programming problem is interpreted as a programming problem in $L_p$-space, and a duality theory is developed from the general results of Rockafellar [16].
Abstract: The linearly constrained multistage stochastic programming problem is interpreted as a programming problem in $L_p$-space, linear if the stochastic problem is linear, and a duality theory is developed from the general results of Rockafellar [16]. The duality is symmetric for linear problems, provided that the stochastic model is suitably generalized, and can be given an economic interpretation. If a certain set $\mathcal{C}$, closely related to the epigraph of the perturbation function, is closed, then the stochastic programming problem attains its minimum, which equals the supremum of the dual problem. The closedness of $\mathcal{C}$ follows from simple conditions on the technology matrix $A$ for the problem.

43 citations


Journal ArticleDOI
TL;DR: In this article, design equations for an optimal limited state feedback controller problem were developed for the stochastic case, in which both plant noise and measurement noise may be present, and for the case of a dynamic compensator.
Abstract: Design equations are developed for an optimal limited state feedback controller problem. These equations are developed for the stochastic case, in which both plant noise and measurement noise may be present, and for the case of a dynamic compensator. Four possible approaches to the solution of the nonlinear design equations are described. A fourth-order example illustrates some of the difficulties associated with the solution of these equations and suggests additional areas for study.

38 citations


Journal ArticleDOI
01 Dec 1975-Metrika
TL;DR: In this paper it is shown that a number of different problems concerning stratification and grouping of random variables or their values result in optimization problems of the same structure, whose global optimal solution can be determined by dynamic programming.
Abstract: In statistics and its fields of application a number of different problems in respect to stratification and grouping of random variables or their values result in optimization problems of the same structure. By a suitable transformation a global optimal solution of these problems can be determined by dynamic programming. The approach is illustrated for discrete and continuous random variables by numerical results.
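A standard dynamic programming recursion for one problem of this structure, optimal stratification of a sorted sample into k contiguous strata minimizing the total within-stratum sum of squares, can be sketched as follows (function names and layout are my own, not the paper's):

```python
# Optimal stratification by dynamic programming: best[i][j] is the
# minimal within-stratum sum of squares for the first i sorted points
# split into j contiguous strata.

def sse(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def optimal_strata(data, k):
    data, n = sorted(data), len(data)
    INF = float("inf")
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(j, n + 1):
            for t in range(j - 1, i):         # t: start of the last stratum
                c = best[t][j - 1] + sse(data[t:i])
                if c < best[i][j]:
                    best[i][j], cut[i][j] = c, t
    strata, i = [], n                         # recover the strata boundaries
    for j in range(k, 0, -1):
        strata.append(data[cut[i][j]:i])
        i = cut[i][j]
    return best[n][k], strata[::-1]

total, groups = optimal_strata([10, 1, 11, 2], 2)
```

Because the data are sorted first, an optimal grouping is contiguous, which is what makes the dynamic programming formulation exact.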

35 citations


Book ChapterDOI
01 Jan 1975
TL;DR: This chapter defines N-decision problems from that point of view and discusses the properties of optimal decisions and goals with a view to solving N-decision problems.
Abstract: Publisher Summary There are many problems in a fuzzy environment where it is necessary to decide a present estimated goal. Therefore, it becomes necessary to formulate decision problems in such a sense that an estimated goal can be decided. This chapter defines N-decision problems from that point of view and discusses the properties of optimal decision and goal with a view to solving N-decision problems. It discusses some properties of 1-decision problems and shows that 1-decision problems can be reduced to simply 0-decision problems. As it seems that almost all real-world problems involving economic systems and public systems satisfy a pseudo complement in the domain under consideration, almost all N-decision problems can be solved by the method for solving 0-decision problems. 0-decision problems may be regarded as optimization problems of logical functions. The chapter discusses the properties of optimization problems including logical functions.

27 citations


Journal ArticleDOI
TL;DR: In this article, the money multiplier in a simple monetary macroeconomic model is treated as a random variable and the optimal control law is derived, and some consequences of erroneous modeling of the random disturbance are exhibited by simulation.
Abstract: Multiplicative random disturbances frequently occur in economic modeling. The money multiplier in a simple monetary macroeconomic model is treated as a random variable in this paper. The optimal control law is derived, and some consequences of erroneous modeling of the random disturbance are exhibited by simulation.
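The effect of modeling the multiplier as random can be seen in a one-period sketch (invented numbers, not the paper's macroeconomic model): with cost E[(x + B u)^2] and a random gain B, the certainty-equivalent rule that replaces B by its mean incurs a strictly higher expected cost than the cautious control that accounts for Var(B).

```python
# One-period sketch of control with a multiplicative random gain:
# minimize E[(x + B*u)^2], where B has mean mB and variance vB.
x, mB, vB = 1.0, 1.0, 0.5     # invented state and multiplier moments

def expected_cost(u):
    # E[(x + B u)^2] = (x + mB u)^2 + vB u^2 when only B is random
    return (x + mB * u) ** 2 + vB * u * u

u_cautious = -x * mB / (mB ** 2 + vB)   # minimizer of the expected cost
u_naive = -x / mB                       # certainty equivalent: ignores vB
```

The cautious gain shrinks as vB grows, which is one concrete consequence of erroneously modeling the random disturbance as deterministic.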

25 citations


01 Sep 1975
TL;DR: In this article, a theory and methods for analyzing sensitivity of the optimal value and optimal solution set to perturbations in problem data in nonlinear bounded optimization problems with discrete variables are presented.
Abstract: Theory and methods are presented for analyzing sensitivity of the optimal value and optimal solution set to perturbations in problem data in nonlinear bounded optimization problems with discrete variables. Emphasis is given to studying behavior of the optimal value function. Theory is developed primarily for mixed integer programming (MIP) problems, where the domain is a subset of a Euclidean vector space.

Proceedings ArticleDOI
01 Dec 1975
TL;DR: A class of monotone mappings underlying many sequential optimization problems over a finite or infinite horizon which are of interest in applications is considered, and some fixed point properties of the optimal value function are proved.
Abstract: In this paper we consider a class of monotone mappings underlying many sequential optimization problems over a finite or infinite horizon which are of interest in applications. This class of problems includes deterministic and stochastic optimal control problems, minimax control problems, semi-Markov decision problems and others. We prove some fixed point properties of the optimal value function and we analyze the convergence properties of a related generalized dynamic programming algorithm. We also give a sufficient condition for convergence, which is widely applicable and considerably strengthens known related results.
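A hedged illustration of the fixed point and monotonicity properties on a two-state discounted decision problem (all data invented; the paper's class of mappings is far more general than this Bellman operator):

```python
# Tiny discounted Markov decision problem: the Bellman mapping T is
# monotone and has the optimal value function as its fixed point.
R = {0: {"a": 1.0, "b": 0.0}, 1: {"a": 0.0, "b": 2.0}}   # rewards
P = {0: {"a": [0.8, 0.2], "b": [0.2, 0.8]},              # transitions
     1: {"a": [0.5, 0.5], "b": [0.1, 0.9]}}
BETA = 0.9

def T(v):
    return [max(R[s][a] + BETA * sum(p * v[t] for t, p in enumerate(P[s][a]))
                for a in R[s])
            for s in sorted(R)]

v = [0.0, 0.0]
for _ in range(600):                 # value iteration: converges geometrically
    v = T(v)
residual = max(abs(a - b) for a, b in zip(T(v), v))      # fixed-point check
lo, hi = [0.0, 0.0], [1.0, 1.0]
monotone = all(a <= b for a, b in zip(T(lo), T(hi)))     # monotonicity check
```

The residual going to zero is the fixed point property; the monotonicity check is the structural assumption that drives the paper's convergence analysis.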

Journal ArticleDOI
TL;DR: In this paper, necessary and sufficient conditions for the existence of finite optimum solutions to programming problems with absolute-value objective functions subject to linear constraints are proved.
Abstract: This paper considers some programming problems with absolute-value (objective) functions subject to linear constraints. Necessary and sufficient conditions for the existence of finite optimum solutions to these problems are proved.

Journal ArticleDOI
TL;DR: There are three main reasons why a purely linear programming model may not represent a constrained optimization problem adequately: economies of scale, other non-linearities that do not invalidate a local optimum, and random data; these lead respectively to integer programming, non-linear programming and stochastic programming.
Abstract: There are three main reasons why a purely linear programming model may not represent a constrained optimization problem adequately: economies of scale, other non-linearities that do not invalidate a local optimum, random data. These reasons lead respectively to integer programming, non-linear programming and stochastic programming. Examples of each type of model are discussed. These have all been solved using a standard mathematical programming system to exploit sparseness efficiently. Economies of scale arise when selecting a set of new pipelines to expand the capacity of a given network. This problem involves non-linear functions, but is essentially an integer programming problem because we must use branch and bound methods to find the best combinations of pipelines, and pipeline diameters. An unsuccessful and a subsequent successful formulation for this problem are discussed. A non-linear programming model for allocating resources in health care is outlined. A model for multi-time-period production scheduling with stochastic demands is also outlined. The model requires data defining the uncertainties in demand forecasts, and the extent to which these are correlated with each other and with past sales. The existence of software for this model may encourage more people to quantify these data.

Journal ArticleDOI
TL;DR: The truncated block enumeration method of multiple choice programming is described and used in the development of an algorithm to solve problems of this type.
Abstract: This paper considers multiple choice programming problems in which the elements of the activity matrix can be normally distributed random variables or random vectors. The truncated block enumeration method of multiple choice programming is described and used in the development of an algorithm to solve problems of this type. Deterministic inequalities computed from the means and variances are employed by the block pivoting algorithm to assure fast convergence to a (sub)optimal solution. The solution will satisfy each constraint with the required marginal probabilities, but a lower bound of the joint probabilities is also computed. As an option, problems can be solved when the lower bound of the joint probability that all the constraints are satisfied is specified alone.
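The deterministic inequalities computed from means and variances are in the spirit of the usual chance-constraint equivalent for normal coefficients. A sketch with invented data follows, assuming independent coefficients (the paper handles correlated random vectors as well): P(a·x ≤ b) ≥ α becomes μ·x + z_α·sqrt(Σ σ_i² x_i²) ≤ b.

```python
import math, random

# Deterministic equivalent of one chance constraint with independent
# normal coefficients, plus a Monte Carlo sanity check.  Data invented.
mu, sigma = [2.0, 3.0], [0.5, 0.4]          # coefficient means and std devs
x, b, z95 = [1.0, 1.0], 7.0, 1.645          # decision, rhs, z for alpha=0.95

mean = sum(m * xi for m, xi in zip(mu, x))
std = math.sqrt(sum((s * xi) ** 2 for s, xi in zip(sigma, x)))
det_ok = mean + z95 * std <= b              # deterministic inequality

random.seed(0)                              # Monte Carlo verification
hits = sum(sum(random.gauss(m, s) * xi
               for m, s, xi in zip(mu, sigma, x)) <= b
           for _ in range(20000))
freq = hits / 20000                         # empirical satisfaction frequency
```

If the deterministic inequality holds, the empirical satisfaction frequency should be at least the required marginal probability, which is the mechanism the block pivoting algorithm exploits.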

Journal ArticleDOI
TL;DR: In this paper, a methodological discussion of economic problems which necessitate the introduction of stochastic considerations is presented, and examples are taken from the work of the author and his collaborators concerning the application of Stochastic processes to economic growth.
Abstract: A methodological discussion of economic problems which necessitate the introduction of stochastic considerations. Examples are taken from the work of the author and his collaborators concerning the application of stochastic processes to economic growth, stochastic programming utilized in planning and stochastic control theory applied to problems of economic policy.

Journal ArticleDOI
TL;DR: The problem of multiple-objective optimization for the environment development system is formulated, and the so-called Pareto-optimal solution set is introduced; this set is related in a one-to-one manner to a family of auxiliary scalar index problems.
Abstract: This paper deals with multiple-objective optimization problems for the environment development system. The model of the environment development system, which was introduced by Kulikowski, is described by a system of nonlinear differential equations which include n interconnected exogenous and m endogenous controlled factors or processes. The problem of multiple-objective optimization for the environment development system is formulated. A main difficulty of multiple-objective optimization is that it is no longer clear what one means by an optimal solution. A possible remedy for this situation is to refine the concept of optimal solution by introducing the so-called Pareto-optimal solution set. Then the multiple-objective optimization problem boils down to determining the set of Pareto-optimal solutions. The Pareto-optimal solution set is related in a one-to-one manner to a family of auxiliary scalar index problems. For an unconstrained multiple-objective optimization problem for the environment deve...

Journal ArticleDOI
01 Nov 1975
TL;DR: A finite-horizon partially observed stochastic optimization problem is presented, where the underlying process subject to control is a finite-state discrete-time controlled semi-Markov vector process, the information pattern is classical, and times of control reset and noise-corrupted observation occur at times of core process transition.
Abstract: A finite-horizon partially observed stochastic optimization problem is presented, where the underlying (or core) process subject to control is a finite-state discrete-time controlled semi-Markov vector process, the information pattern is classical, and times of control reset and noise-corrupted observation occur at times of core process transition. Conditions for optimality are stated. The new problem formulation is shown to generalize several well-known problem formulations. Cost equality and inequality results associated with observation quality are determined, and the subsequent simplified dynamic programming equations obtained. Particular attention is given to cases where not all elements of the vector core process are either completely observed or completely unobserved.


Journal ArticleDOI
TL;DR: Numerical results for a number of functions and circuit tolerance optimization problems are presented in this paper to demonstrate the performance of DISOPT.
Abstract: An integrated computer programme in Fortran IV for continuous or discrete non-linear programming problems is presented. Several recent techniques and algorithms for non-linear programming have been adapted and new ideas have been introduced. They include the minimax and exterior-point approaches to non-linear programming, least pth optimization and the Dakin tree-search algorithm. The user may optionally choose the combination of techniques and algorithms best suited to his problems. Since many practical design problems can be easily formulated as non-linear programming problems, the programme, called DISOPT, enjoys a very wide range of applications such as continuous and discrete tolerance assignments, digital filter design, circuit design, system modelling and approximation problems. Numerical results for a number of functions and circuit tolerance optimization problems are presented in this paper to demonstrate the performance of DISOPT.

Journal ArticleDOI
TL;DR: It is shown that for stochastic linear programs with simple randomization the minimum risk solution does not depend on the probability distribution of coefficients and can be obtained by linear programming.
Abstract: In this paper some recent results in stochastic linear programming are presented which, for obtaining numerical solutions, require mainly the existing efficient programs for linear and quadratic programming. It is also shown that for stochastic linear programs with simple randomization the minimum risk solution does not depend on the probability distribution of the coefficients and can be obtained by linear programming. The relevance of the results to planning under uncertainty is illustrated, and numerical examples and computational experience are reported. All the methods presented can provide numerical results for problems of dimensions met in applications.


Proceedings ArticleDOI
01 Dec 1975
TL;DR: This paper discusses the control of nonlinear stochastic systems and, in particular, linear systems with unknown parameters; explicit expressions for the probing and caution terms in a stochastic control problem are obtained by a closed-loop approximation of the stochastic dynamic programming equation.
Abstract: This paper discusses the control of nonlinear stochastic systems and, in particular, linear systems with unknown parameters. The stochastic nature of the problem leads to the probing and caution properties of the control. Explicit expressions of the probing and caution terms in a stochastic control problem are presented. These terms are obtained by a closed-loop approximation of the stochastic dynamic programming equation. An approximate value of information can be evaluated and the benefit to be derived from probing (experimentation) can be traded off against its cost. The interplay between caution and probing is discussed.

Journal ArticleDOI
TL;DR: In this article, the problem of control of a stochastic system with control-dependent noise is investigated, and the control policy given by the algorithm reduces to the optimal control policy in two cases in which the optimal control policy is known.

Journal ArticleDOI
TL;DR: In this article, an intertemporal model of an open economy incorporating risk in the export price is developed, and the optimal trajectories for consumption, investment, growth, exports, and resource allocation are determined using dynamic stochastic programming.

01 Jan 1975
TL;DR: In this article, an infinite-horizon stochastic dynamic programming (DP) problem with a recursive additive reward system is studied by using linear programming (LP), and the optimal stationary policy can be obtained by the usual LP method.
Abstract: We study, by using linear programming (LP), an infinite-horizon stochastic dynamic programming (DP) problem with the recursive additive reward system. Since this DP problem has discount factors which may depend on the transition, it includes the "discounted" Markovian decision problem. It is shown that this problem can also be formulated as an LP problem and that the optimal stationary policy can be obtained by the usual LP method. Some interesting examples of DP models and their numerical solutions by the LP algorithm are illustrated. Furthermore, it is verified that these solutions coincide with those obtained by Howard's policy iteration algorithm.
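A small sketch of a recursive additive reward system with transition-dependent discount factors (all data invented): the optimal value v solves v(s) = max_a [r(s,a) + Σ p(s'|s,a)·β(s,a,s')·v(s')], and is feasible and tight for the constraints of the corresponding LP, which is what makes the LP formulation work.

```python
# Transition-dependent discounting: TR maps state -> action -> list of
# (next_state, probability, beta) triples.  Data invented for the sketch.
R = {0: {"a": 1.0, "b": 0.5}, 1: {"a": 0.0, "b": 2.0}}
TR = {0: {"a": [(0, 0.8, 0.9), (1, 0.2, 0.8)],
          "b": [(1, 1.0, 0.7)]},
      1: {"a": [(0, 1.0, 0.9)],
          "b": [(1, 1.0, 0.6)]}}

def q(s, a, v):
    return R[s][a] + sum(p * beta * v[t] for t, p, beta in TR[s][a])

v = [0.0, 0.0]
for _ in range(600):                 # contraction since every beta < 1
    v = [max(q(s, a, v) for a in R[s]) for s in sorted(R)]

# LP view: v must satisfy v(s) >= q(s,a,v) for every action (feasibility),
# with equality at a maximizing action in each state (tightness).
feasible = all(v[s] >= q(s, a, v) - 1e-9 for s in R for a in R[s])
tight = all(abs(v[s] - max(q(s, a, v) for a in R[s])) < 1e-8 for s in R)
```

With constant beta this reduces to the ordinary discounted Markovian decision problem mentioned in the abstract.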

Journal ArticleDOI
TL;DR: The special structure of these models is exploited to develop the optimization techniques which are illustrated by simple design examples.
Abstract: Reliability of a component can be computed if the probability distributions for the stress and strength are known. The factors which determine the parameters of the distributions for stress and strength random variables can be controlled in design problems. This leads to the problem of finding the optimal values of these parameters subject to resource and design constraints. Some optimization models are discussed. The special structure of these models is exploited to develop the optimization techniques which are illustrated by simple design examples.

Journal ArticleDOI
TL;DR: The purpose of this paper is to provide a survey of the state of the art of dynamic programming, together with an indication of the problem areas to which dynamic programming has been applied.
Abstract: The purpose of this paper is to provide a survey of the state of the art of dynamic programming. An indication of the problem areas to which dynamic programming has been applied is contained. In addition, a discussion of various theoretical advances in dynamic programming is presented. The paper is divided into four sections. In the first, a discussion of continuous-parameter dynamic programs is given and applications to optimal control problems are discussed. The second section contains a discussion of discrete deterministic dynamic programs and applications to areas such as scheduling. The third section contains a discussion of the solution of stochastic decision problems via dynamic programming. The final section is devoted to the treatment of combinatorial problems and an indication of how they can often be handled via dynamic programming. An extensive bibliography of both theory and applications is appended.

Journal ArticleDOI
TL;DR: In this article, the passive and active approach to stochastic linear programming is considered and alternative approaches are discussed in its discrete version, and the theory is illustrated with the help of econometric models for Indian economic planning.
Abstract: We consider the passive and active approach to stochastic linear programming and mention also some alternative approaches. Stochastic control theory is discussed in its discrete version. The theory is illustrated with the help of econometric models for Indian economic planning. 1. Stochastic programming. 2. Stochastic control theory.

Book ChapterDOI
08 Sep 1975
TL;DR: Deterministic models for the prediction of expected performance have been presented, and it has been shown that optimal controllers designed using the models are good approximations to the true optimal controllers for the stochastic system.
Abstract: Stochastic control problems for a rather general class of nonlinear systems have been considered in this paper. Deterministic models for the prediction of expected performance have been presented, and it has been shown that optimal controllers designed using the models are good approximations to the true optimal controllers for the stochastic system. Some results have been presented on the use of an estimator coupled with an optimal controller based on continuous observation of the state.