
Showing papers on "Stochastic programming" published in 1970



Journal ArticleDOI
TL;DR: In this article, the authors use stochastic dynamic programming (SDP), together with a statistical streamflow-prediction model, to determine the optimal monthly total hydrogeneration for a large hydroelectric power system in the Pacific Northwest.
Abstract: For a large hydroelectric power system, such as that of the Pacific Northwest, an important operational decision each month is the amount of hydrogeneration. This decision is important because the inflow of the water is uncertain while hydro, with zero marginal cost, can be used not only to satisfy firm load commitments, but also to displace other firm resources or to serve secondary loads. In such a case, the tradeoff between savings at the present and expected benefits in the future is determined mainly by the total hydrogeneration. The use of a composite representation of multireservoir hydroelectric power systems to determine the optimal monthly total hydrogeneration is described. The analytical tool employed is that of stochastic dynamic programming, and the statistical model for the streamflow prediction is based on previous flows and snowpack information. For the anticipated 1975 system in the Pacific Northwest, comparison between the optimal operation introduced here and the presently used rule-curve operation indicates that substantial savings may be obtained, mainly owing to the more uniform displacement of the high marginal cost thermal resources by hydrogeneration.

114 citations
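
As a rough illustration of the recursion described in the abstract, the sketch below runs a backward stochastic dynamic program over a discretized composite storage. The storage grid, inflow probability mass function, monthly horizon, and thermal displacement value are all invented for illustration and are not taken from the paper; the streamflow forecast is collapsed into a single fixed inflow distribution.

    import numpy as np

    # Toy backward SDP over a composite reservoir; all numbers are illustrative assumptions.
    S = np.linspace(0.0, 100.0, 21)           # composite storage levels (energy units)
    inflow_vals  = np.array([10.0, 25.0, 40.0])
    inflow_probs = np.array([0.3, 0.5, 0.2])  # forecast-conditioned inflow pmf
    months = 12
    thermal_cost = 3.0                        # marginal cost displaced by each unit of hydro
    V = np.zeros(len(S))                      # terminal value of stored water

    for t in reversed(range(months)):
        V_new = np.empty_like(V)
        for i, s in enumerate(S):
            best = -np.inf
            for h in np.linspace(0.0, s, 11):              # monthly hydrogeneration decision
                exp_val = 0.0
                for q, p in zip(inflow_vals, inflow_probs):
                    s_next = min(max(s - h + q, S[0]), S[-1])   # spill / empty at the bounds
                    exp_val += p * np.interp(s_next, S, V)
                best = max(best, thermal_cost * h + exp_val)    # present savings + future value
            V_new[i] = best
        V = V_new

    print("value of starting the year half full:", np.interp(50.0, S, V))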


Journal ArticleDOI
TL;DR: This paper addresses the question of how much better (i.e., how much more profitable) the authors could expect their plans to be if somehow they could know at planning time what the outcomes of the uncertain events will turn out to be.
Abstract: The problem of planning under uncertainty has many aspects; in this paper we consider the aspect that has to do with evaluating the state of information. We address ourselves to the question of how much better (i.e., how much more profitable) we could expect our plans to be if somehow we could know at planning time what the outcomes of the uncertain events will turn out to be. This expected increase in profitability is the “expected value of perfect information” and represents an upper bound to the amount of money that it would be worthwhile to spend in any survey or other investigation designed to provide that information beforehand. In many cases, the amount of calculation to compute an exact value is prohibitive. However, we derive bounds (estimates) for the value. Moreover, in the case of operations planning by linear or convex programming, we show how to evaluate these bounds as part of a post-optimal analysis.

109 citations
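
A minimal numerical illustration of the "expected value of perfect information" defined above: compare the best plan made before uncertainty resolves with the average of the best plans made after each outcome is known. The two-outcome production problem and every number in it are invented for illustration only.

    import numpy as np

    demand = np.array([80.0, 120.0])     # two equally likely demand outcomes
    prob   = np.array([0.5, 0.5])
    price, cost = 10.0, 6.0              # sell price and unit production cost

    def profit(x, d):                    # produce x before demand d is known
        return price * min(x, d) - cost * x

    xs = np.arange(0.0, 201.0)           # candidate production levels
    # Here-and-now: choose one x maximizing expected profit.
    here_and_now = max(np.dot(prob, [profit(x, d) for d in demand]) for x in xs)
    # Wait-and-see: pick the best x separately for each outcome, then average.
    wait_and_see = np.dot(prob, [max(profit(x, d) for x in xs) for d in demand])

    print("EVPI (upper bound on what a perfect forecast is worth):",
          wait_and_see - here_and_now)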



Journal ArticleDOI
TL;DR: The algorithms presented apply when the preference functions h(x) and g(y) are convex and continuously differentiable, K is a convex polytope, ξ has a distribution that satisfies mild convergence conditions, and the objective is to minimize the expectation of the sum of the two preference functions.
Abstract: This paper presents computational algorithms for the solution of a class of stochastic programming problems. Let x and y represent the decision and state vectors, and suppose that x must be chosen from some set K and that y is a linear function of both x and an additive random vector ξ. If y is uniquely determined once x is chosen and ξ is observed, we say that the problem has simple recourse. The algorithms presented apply, e.g., when the preference functions h(x) and g(y) are convex and continuously differentiable, K is a convex polytope, ξ has a distribution that satisfies mild convergence conditions, and the objective is to minimize the expectation of the sum of the two preference functions. An illustrative example of an inventory problem is formulated, and the special case when g is asymmetric, quadratic, and separable is presented in detail to illustrate the calculations involved.

47 citations
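
A one-dimensional sketch of the simple-recourse structure described above: choose x, observe ξ, and incur the state penalty on y = ξ - x. The ordering cost h, the asymmetric quadratic penalty g, the demand distribution, and the crude grid search (standing in for the paper's differentiable convex algorithms) are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    xi = rng.normal(loc=50.0, scale=10.0, size=20000)   # sampled demand scenarios

    h = lambda x: 2.0 * x                                # convex ordering cost
    def g(y):                                            # asymmetric quadratic penalty
        return np.where(y > 0, 3.0 * y**2, 1.0 * y**2)   # shortages hurt more than surpluses

    def expected_cost(x):
        return h(x) + g(xi - x).mean()

    # The expectation is smooth and convex, so any convex solver would do;
    # a grid search keeps the sketch dependency-free.
    grid = np.linspace(0.0, 100.0, 1001)
    best_x = grid[np.argmin([expected_cost(x) for x in grid])]
    print("approximately optimal order quantity:", best_x)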


Journal ArticleDOI
TL;DR: This paper develops a second-order algorithm for solving discrete-time dynamic optimization problems with terminal constraints that utilizes strong variations and, as a result, has certain advantages over existing discrete- time methods.
Abstract: Recently, the notion of Differential Dynamic Programming has been used to obtain new second-order algorithms for solving non-linear optimal control problems. (Unlike conventional Dynamic Programming, the Principle of Optimality is applied in the neighborhood of a nominal, non-optimal, trajectory.) A novel feature of these algorithms is that they permit strong variations in the system trajectory. In this paper, Differential Dynamic Programming is used to develop a second-order algorithm for solving discrete-time dynamic optimization problems with terminal constraints. This algorithm also utilizes strong variations and, as a result, has certain advantages over existing discrete-time methods. A non-linear computed example is presented, and comparisons are made with the results of other researchers who have solved this problem. The experience gained during the computation has suggested some extensions to an earlier, previously published Differential Dynamic Programming algorithm for continuous-time problems. These extensions and their implications are discussed. (Author)

47 citations
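
To make the backward/forward structure of differential dynamic programming concrete, here is a stripped-down scalar sketch. It keeps only first-order dynamics terms (an iLQR-style simplification), omits the terminal constraints and safeguards treated in the paper, and uses an invented toy system and cost.

    import numpy as np

    f  = lambda x, u: x + 0.1 * (u - 0.2 * x**3)   # discrete-time dynamics (invented)
    fx = lambda x, u: 1.0 - 0.06 * x**2
    fu = lambda x, u: 0.1
    N, x0 = 30, 2.0
    u = np.zeros(N)                                # nominal control sequence

    for _ in range(40):
        x = np.empty(N + 1); x[0] = x0
        for k in range(N):                         # roll out the nominal trajectory
            x[k + 1] = f(x[k], u[k])
        Vx, Vxx = 20.0 * x[N], 20.0                # expansion of terminal cost 10*x_N^2
        kff, K = np.empty(N), np.empty(N)
        for k in reversed(range(N)):               # backward pass (running cost x^2 + u^2)
            Qx  = 2.0 * x[k] + fx(x[k], u[k]) * Vx
            Qu  = 2.0 * u[k] + fu(x[k], u[k]) * Vx
            Qxx = 2.0 + fx(x[k], u[k]) ** 2 * Vxx
            Quu = 2.0 + fu(x[k], u[k]) ** 2 * Vxx
            Qux = fu(x[k], u[k]) * fx(x[k], u[k]) * Vxx
            kff[k], K[k] = -Qu / Quu, -Qux / Quu
            Vx  = Qx - Qux * Qu / Quu
            Vxx = Qxx - Qux ** 2 / Quu
        xk = x0                                    # forward pass with the new feedback policy
        for k in range(N):
            u[k] = u[k] + kff[k] + K[k] * (xk - x[k])
            xk = f(xk, u[k])

    x = np.empty(N + 1); x[0] = x0                 # re-roll the converged trajectory
    for k in range(N):
        x[k + 1] = f(x[k], u[k])
    print("cost of the improved trajectory:",
          float(np.sum(x[:N] ** 2 + u ** 2) + 10.0 * x[N] ** 2))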


Journal ArticleDOI
TL;DR: This paper attempts to review and compare three such mathematical modeling and solution techniques, namely dynamic programming, policy iteration, and linear programming, used to derive alternative sequential operating policies for a multiple purpose reservoir.
Abstract: Within the past few years, a number of papers have been published in which stochastic mathematical programming models, incorporating first order Markov chains, have been used to derive alternative sequential operating policies for a multiple purpose reservoir. This paper attempts to review and compare three such mathematical modeling and solution techniques, namely dynamic programming, policy iteration, and linear programming. It is assumed that the flows into the reservoir are serially correlated stochastic quantities. The design parameters are assumed fixed, i.e., the reservoir capacity and the storage and release targets, if any, are predetermined. The models are discrete since the continuous variables of time, volume, and flow are approximated by discrete units. The problem is to derive an optimal operating policy. Such a policy defines the reservoir release as a function of the current storage volume and inflow. The form of the solution and some of the advantages, limitations and computational efficiencies of each of the models and their algorithms are compared using a simplified numerical example.

42 citations
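
The policy-iteration variant mentioned above can be sketched on a very small discretized model: state = (storage volume, current inflow class), the inflow follows a first-order Markov chain, and the decision is the release. The capacity, release target, transition matrix, and penalty function below are all invented for illustration.

    import numpy as np

    storages = np.arange(0, 5)                      # discretized storage volumes
    inflows  = np.arange(0, 3)                      # discretized inflow classes
    P = np.array([[0.6, 0.3, 0.1],                  # first-order Markov chain for inflow
                  [0.3, 0.4, 0.3],
                  [0.1, 0.3, 0.6]])
    cap, target, beta = 4, 2, 0.95

    def step(s, q, r):                              # next storage and one-period benefit
        s_next = min(max(s + q - r, 0), cap)
        return s_next, -(r - target) ** 2           # penalize deviation from the release target

    states = [(s, q) for s in storages for q in inflows]
    idx = {sq: i for i, sq in enumerate(states)}
    policy = {sq: 0 for sq in states}               # start by releasing nothing

    for _ in range(50):
        # policy evaluation: solve (I - beta * P_pi) v = r_pi
        A = np.eye(len(states)); b = np.zeros(len(states))
        for (s, q), i in idx.items():
            s_next, reward = step(s, q, policy[(s, q)])
            b[i] = reward
            for q_next, p in enumerate(P[q]):
                A[i, idx[(s_next, q_next)]] -= beta * p
        v = np.linalg.solve(A, b)
        # policy improvement
        new_policy = {}
        for (s, q) in states:
            def value(r):
                s_next, reward = step(s, q, r)
                return reward + beta * sum(p * v[idx[(s_next, qn)]] for qn, p in enumerate(P[q]))
            new_policy[(s, q)] = max(range(0, s + q + 1), key=value)
        if new_policy == policy:
            break
        policy = new_policy

    print("release policy (storage, inflow) -> release:", policy)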


Journal ArticleDOI
Kailash C. Kapur
TL;DR: A general mathematical optimization model for such systems is developed which has broad applications for the planning, system design and evaluation of many transportation systems, and three types of solution techniques are discussed.
Abstract: Transportation systems involve multiple objective functions and multi-factor decision situations. A general mathematical optimization model for such systems is developed which has broad applications for the planning, system design and evaluation of many transportation systems. Three types of solution techniques are discussed. For multi-objective linear programs, a solution is obtained which satisfies the decision maker's preferences, and optimization from the decision maker's point of view is considered. A goal programming solution technique is given when goals for the system can be defined. If this is not possible, an overall utility function is defined on the various objective functions, a concept of additive utilities is explored, and a parametric programming solution is given.

31 citations
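
The goal-programming technique mentioned above can be illustrated with a tiny linear program in which soft "goals" become constraints with under/over-achievement variables and the weighted deviations are minimized. The two-route transportation flavour, the goal levels, and the weights are invented for illustration.

    import numpy as np
    from scipy.optimize import linprog

    # Variables: x1, x2 (traffic assigned to two routes),
    #            d1m, d1p, d2m, d2p (under/over-achievement of two goals).
    # Goal 1: total cost 3*x1 + 5*x2 should be about 120.
    # Goal 2: total service x1 + x2 should be about 30.
    c = np.array([0, 0, 1, 1, 2, 2], dtype=float)          # weights on the deviation variables
    A_eq = np.array([[3, 5, 1, -1, 0,  0],                 # 3x1 + 5x2 + d1m - d1p = 120
                     [1, 1, 0,  0, 1, -1]], dtype=float)   # x1 + x2  + d2m - d2p = 30
    b_eq = np.array([120.0, 30.0])
    A_ub = np.array([[1, 0, 0, 0, 0, 0],                   # capacity on route 1
                     [0, 1, 0, 0, 0, 0]], dtype=float)     # capacity on route 2
    b_ub = np.array([25.0, 20.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 6, method="highs")
    print("route flows:", res.x[:2], "weighted goal deviations:", res.fun)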


Journal ArticleDOI
TL;DR: Direct applications of mathematical programming techniques in numerical solutions of optimal control problems are reviewed and areas of application include aerospace trajectory optimization and rendezvous problems, computer control of processes, and nuclear reactor control problems.
Abstract: Direct applications of mathematical programming techniques in numerical solutions of optimal control problems are reviewed. The types of control systems discussed include linear, nonlinear, continuous- and discrete-time, deterministic, stochastic, and distributed-parameter systems. The areas of application include aerospace trajectory optimization and rendezvous problems, computer control of processes, and nuclear reactor control problems. A classified bibliography is included.

28 citations


Book ChapterDOI
01 Jan 1970
TL;DR: A number of algorithms will be described which are based on the principle of feasible directions; special problems like linear programming, unconstrained optimization, optimization subject to linear equality constraints, quadratic programming, and linearly constrained nonlinear programming will be briefly dealt with.
Abstract: A number of algorithms will be described which are based on the principle of feasible directions. Special problems like linear programming, unconstrained optimization, optimization subject to linear equality constraints, quadratic programming and linearly constrained nonlinear programming will be briefly dealt with.

23 citations
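
One concrete member of the feasible-directions family sketched above is the conditional-gradient (Frank-Wolfe) iteration for linearly constrained convex minimization: at each step a linear program supplies a feasible direction, followed by a line search. The quadratic objective and the small polytope below are invented test data, not drawn from the chapter.

    import numpy as np
    from scipy.optimize import linprog

    Q = np.array([[2.0, 0.5], [0.5, 1.0]])           # positive definite
    q = np.array([-4.0, -3.0])
    f      = lambda x: 0.5 * x @ Q @ x + q @ x
    grad_f = lambda x: Q @ x + q
    A_ub = np.array([[1.0, 1.0]]); b_ub = np.array([3.0])   # x1 + x2 <= 3, x >= 0
    bounds = [(0.0, None), (0.0, None)]

    x = np.zeros(2)                                   # feasible starting point
    for k in range(100):
        # direction-finding subproblem: minimize the linearized objective over the polytope
        lp = linprog(grad_f(x), A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        d = lp.x - x                                  # feasible direction
        if grad_f(x) @ d > -1e-9:                     # no descent direction left: stop
            break
        # exact line search for the quadratic objective, clipped to [0, 1]
        step = np.clip(-(grad_f(x) @ d) / (d @ Q @ d), 0.0, 1.0)
        x = x + step * d

    print("approximate minimizer:", x, "objective:", f(x))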


Journal ArticleDOI
TL;DR: In this article, the authors derived deterministic equivalent problems for a general class of chance constrained but not necessarily linear programming problems under the assumption of discrete distributions for the random variables involved.
Abstract: Under the assumption of discrete distributions for the random variables involved, deterministic equivalent problems are derived for a general class of chance constrained but not necessarily linear programming problems. These permit the explicit solution of such problems for all or most types of optimal stochastic decision rules which are of interest, including optimal multistage rules and not restricted to the class of linear rules. The formulation given encompasses certain cases of stochastic programming with recourse, and the deterministic equivalents derived for these reduce to well-known versions available in the literature.
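
A one-constraint illustration of the reduction described above: with a discrete distribution for ξ, the chance constraint P(x ≥ ξ) ≥ α is equivalent to the deterministic constraint x ≥ (α-quantile of ξ). The distribution, the probability level, and the surrounding cost story are invented for illustration.

    import numpy as np

    xi_vals  = np.array([10.0, 14.0, 18.0, 25.0])
    xi_probs = np.array([0.4, 0.3, 0.2, 0.1])
    alpha = 0.9

    cdf = np.cumsum(xi_probs)
    quantile = xi_vals[np.searchsorted(cdf, alpha)]   # smallest value whose CDF reaches alpha

    # Minimizing a cost that increases in x subject to P(x >= xi) >= alpha is then
    # just the deterministic problem "minimize cost(x) subject to x >= quantile".
    x_star = quantile
    print("deterministic equivalent bound:", quantile)
    print("check: P(x >= xi) =", xi_probs[xi_vals <= x_star].sum())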

Journal ArticleDOI
TL;DR: In this article, the authors show that the dynamic programming approach used to study inverse problems of the classic type can be equally advantageous for problems of a characteristically economic type, and point out that the same methods apply with equal effectiveness to stochastic and adaptive processes.

Proceedings ArticleDOI
01 Dec 1970
TL;DR: It is shown how various known approaches of production cost simulation can be used in conjunction with the dynamic programming optimization to accommodate large systems that may be represented with great technical detail.
Abstract: This paper extends previously published results on the application of a dynamic programming optimization approach to the planning of systems, with special emphasis on the long-term expansion of power systems. Future uncertainties about loads, equipment costs, etc. are considered explicitly in the method described. A practical computational solution to the resulting stochastic optimization is obtained by means of the recently developed open-loop feedback approximation. Further, it is shown how various known approaches of production cost simulation can be used in conjunction with the dynamic programming optimization to accommodate large systems that may be represented with great technical detail.
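
The open-loop feedback approximation referred to above can be sketched in a few lines: at every stage, solve a deterministic plan for the remaining horizon using currently expected loads, commit only the first decision, then observe the realized load and repeat. The loads, costs, and the greedy inner "plan" below are invented for illustration and are much simpler than the production-cost simulation the paper couples to the optimization.

    import numpy as np

    rng = np.random.default_rng(1)
    T = 6
    expected_growth = 10.0                 # expected annual load growth
    capacity, load = 100.0, 95.0
    unit_size, build_cost, shortage_cost = 15.0, 1.0, 5.0
    total_cost = 0.0

    def open_loop_plan(cap, lod, years):
        """Deterministic plan: build just enough units each year to cover expected load."""
        builds = []
        for _ in range(years):
            lod += expected_growth
            n = max(0, int(np.ceil((lod - cap) / unit_size)))
            builds.append(n)
            cap += n * unit_size
        return builds

    for t in range(T):
        plan = open_loop_plan(capacity, load, T - t)    # re-optimize over the remaining horizon
        build_now = plan[0]                             # feedback: commit only the first decision
        capacity += build_now * unit_size
        load += expected_growth + rng.normal(0.0, 5.0)  # actual load growth is uncertain
        total_cost += build_cost * build_now + shortage_cost * max(0.0, load - capacity)

    print("capacity after horizon:", capacity, " total cost:", round(total_cost, 1))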

Book ChapterDOI
01 Jan 1970
TL;DR: Methods and results of the theory of linear and nonlinear programming can be applied to a set of statistical problems, and a survey of such applications is carried out.
Abstract: One of the main problems in mathematical statistics is to find procedures which satisfy certain optimality conditions. A great variety of these conditions turns out to be equivalent to programming problems. Hence methods and results of the theory of linear and nonlinear programming can be applied to a set of statistical problems. The paper is a survey of such applications. Since there is always a time lag between the development of new methods in one field (programming) and their applications in another field (statistics), we are mainly concerned with applications of (infinite) linear programming. As another field of applications we discuss a problem in probability theory which is connected with the moment problem.
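
A small finite instance of the moment-problem connection mentioned above: find the largest possible P(X ≥ 4) over all distributions supported on {0, ..., 10} with prescribed first and second moments. The support and the moment values are invented; the point is that the problem is a linear program in the probability weights.

    import numpy as np
    from scipy.optimize import linprog

    support = np.arange(0, 11)
    A_eq = np.vstack([np.ones_like(support, dtype=float),   # probabilities sum to one
                      support.astype(float),                # fixed mean E[X] = 2
                      (support ** 2).astype(float)])        # fixed second moment E[X^2] = 6
    b_eq = np.array([1.0, 2.0, 6.0])
    c = -(support >= 4).astype(float)                       # maximize the tail probability

    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * len(support), method="highs")
    print("sharpest bound on P(X >= 4) given these moments:", -res.fun)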

Journal ArticleDOI
TL;DR: Optimal solutions and algorithms are presented for both continuous and discrete optimization problems for S-convex and symmetric objective and feasibility regions.
Abstract: Mathematical programs with S-convex and with symmetric objective and feasibility regions are investigated. Optimal solutions and algorithms are presented for both continuous and discrete optimization problems.


Journal ArticleDOI
TL;DR: In this article, a particular class of extensions of the traditional linear programming model is presented: problems containing integer variables restricted to a value of either zero or one, referred to as dichotomous-integer variables.
Abstract: Linear programming has received much attention as a tool for managerial decision making in business and government. It provides the decision maker with a precise and simple framework for defining his problem and a quick and simple means of obtaining an optimal solution to that problem. In addition, linear programming has the advantages of being applicable to a wide range of managerial problems, being able to consider simultaneously each of the multiple goals and relationships which may be represented by a complex problem, and providing information about the relative significance of each constraint with respect to the objective (shadow prices). However, because of the simplicity inherent in the linear programming model, there are many managerial problems to which it cannot readily be applied. Such problems may be too complex for a modeling approach, or may contain relationships which cannot be represented by linear functions, or may be characterized by probabilistic relationships or perhaps relationships which change over time. Most managers could probably cite "additional considerations" which the simple linear programming model would be unable to consider if applied to problems in their area of authority. To cope with complexities of this sort, several variations and extensions of linear programming have been developed. Quadratic programming and integer programming have been developed to attack problems of nonlinearity of functional relationships. Stochastic programming deals with problems containing probabilistic relationships. Dynamic programming can be adapted to problems in which time or the sequence of events is an important consideration. However, it can be stated that, in general, what these techniques contribute in terms of more accurate representation of actual problems is often sacrificed in the form of great difficulty in reaching an optimal solution. This paper concentrates upon a particular class of extensions of the traditional linear programming model: those problems containing integer variables which are restricted to a value of either zero or one. These variables are referred to as "dichotomous-integer variables," and the formulation and solution of problems containing such variables is referred to as "dichotomous-integer programming." Dichotomous-integer variables generally indicate discrete changes in objective or constraint functions, or the presence or absence of some condition or decision. Several well-known linear and integer programming problems are presented in this paper and then extended by means of adding dichotomous-integer variables representing additional considerations or complicating factors which are typical of many real world problems. The purpose of the paper is to demonstrate that integer programming, particularly dichotomous-integer programming, provides a powerful means of representing such considerations.
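
A small fixed-charge example of the dichotomous-integer idea described above: a zero-one variable y_j switches a setup cost on only when activity j is used, enforced by x_j ≤ M·y_j. The data are invented, and the tiny model is solved by enumerating the binary choices (with a simple greedy continuous step) rather than by a branch-and-bound code, to keep the sketch dependency-light.

    import numpy as np
    from itertools import product

    profit = np.array([5.0, 4.0])        # per-unit profit of two activities
    setup  = np.array([30.0, 60.0])      # fixed charge if an activity is opened
    M      = np.array([20.0, 20.0])      # upper bound (capacity) on each activity
    resource_use = np.array([2.0, 3.0])  # shared resource consumption per unit
    resource_cap = 50.0

    best = (-np.inf, None, None)
    for y in product([0, 1], repeat=2):                  # enumerate the dichotomous variables
        y = np.array(y)
        # with y fixed, the continuous part has a single resource constraint,
        # so filling greedily by profit per unit of resource is optimal
        cap = M * y
        remaining, x = resource_cap, np.zeros(2)
        for j in np.argsort(-profit / resource_use):
            x[j] = min(cap[j], remaining / resource_use[j])
            remaining -= x[j] * resource_use[j]
        value = profit @ x - setup @ y
        if value > best[0]:
            best = (value, y.copy(), x.copy())

    print("best net profit:", best[0], "open:", best[1], "levels:", best[2])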


Journal ArticleDOI
TL;DR: A mathematical programming approach to the computational realization of optimal control problems for nonlinear discrete time systems is discussed and an algorithm which implements mathematical programming solutions on a sequential basis is proposed.
Abstract: A mathematical programming approach to the computational realization of optimal control problems for nonlinear discrete time systems is discussed. The method proposed is applicable to uniformly or nonuniformly sampled systems. The sampling intervals may be known or unknown a priori. The method includes any additional state-space, control and time interval constraints. The constraints may generally be nonlinear, of equality or inequality type. Several examples of implementing this method are presented. In addition to deterministic discrete time systems, stochastic systems are considered as well. An algorithm which implements mathematical programming solutions on a sequential basis is proposed.
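
As a generic illustration of treating a discrete-time control problem as a mathematical program (not the paper's sequential algorithm), the sketch below transcribes states and controls into one decision vector and imposes the dynamics as equality constraints. The double-integrator-like system, horizon, and control bounds are invented.

    import numpy as np
    from scipy.optimize import minimize

    N, dt = 10, 0.2
    n_x, n_u = 2, 1

    def unpack(z):
        x = z[: (N + 1) * n_x].reshape(N + 1, n_x)      # states x_0 .. x_N
        u = z[(N + 1) * n_x:].reshape(N, n_u)           # controls u_0 .. u_{N-1}
        return x, u

    def objective(z):
        x, u = unpack(z)
        return float(np.sum(x ** 2) * dt + np.sum(u ** 2) * dt)

    def dynamics_defects(z):                            # equality constraints: x_{k+1} = f(x_k, u_k)
        x, u = unpack(z)
        d = []
        for k in range(N):
            d.append(x[k + 1, 0] - (x[k, 0] + dt * x[k, 1]))
            d.append(x[k + 1, 1] - (x[k, 1] + dt * u[k, 0]))
        return np.array(d)

    def initial_state(z):
        x, _ = unpack(z)
        return x[0] - np.array([1.0, 0.0])

    z0 = np.zeros((N + 1) * n_x + N * n_u)
    bounds = [(None, None)] * ((N + 1) * n_x) + [(-1.0, 1.0)] * (N * n_u)   # control bounds
    res = minimize(objective, z0, bounds=bounds, method="SLSQP",
                   constraints=[{"type": "eq", "fun": dynamics_defects},
                                {"type": "eq", "fun": initial_state}])
    print("optimal cost:", res.fun, "final state:", unpack(res.x)[0][-1])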

01 Jan 1970
TL;DR: In this paper, the authors study optimization problems involving linear systems with retardations in the controls, discuss some physical motivation for the problems, and cover controllability, existence and uniqueness of the optimal control, sufficient conditions, techniques of synthesis, and dynamic programming.
Abstract: Optimization problems involving linear systems with retardations in the controls are studied in a systematic way. Some physical motivation for the problems is discussed. The topics covered are: controllability, existence and uniqueness of the optimal control, sufficient conditions, techniques of synthesis, and dynamic programming. A number of solved examples are presented. (Author)


Posted ContentDOI
TL;DR: In this article, the authors investigate the utility implications of stochastic programming models in connection with the expected utility maximization principle and attempt to impute the value(s) of key parameters in each of these models, and in some other utility models, from the actual decision made within the framework of mean-standard deviation analysis.
Abstract: The main interest is in investigating several kinds of implications of stochastic programming models and their uses to provide several types of dialogues between formal models and the decision maker himself under risk-taking situations. First, some stochastic programming models which are often discussed in the literature are presented and parametric evaluations using dual evaluators are attempted. Second, we see a close relationship between these models and the mean-standard deviation analysis mainly developed in the context of portfolio selection theory. Third, we explore utility implications of these models in connection with the expected utility maximization principle. Finally, attempts are made to impute the value(s) of key parameters in each of these models and some other utility models from the actual decision made within the framework of mean-standard deviation analysis. (Author)
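
A tiny numerical sketch of the mean-standard deviation relationship and the imputation idea discussed above: trace the trade-off for a two-activity plan, let a risk-aversion weight λ select one point, and then read an imputed λ back from an observed decision. The returns, covariances, and the crude grid search are illustrative assumptions, not the paper's dual-evaluator procedure.

    import numpy as np

    mu  = np.array([0.10, 0.04])                        # expected returns of two activities
    cov = np.array([[0.04, 0.00],
                    [0.00, 0.01]])

    weights = np.linspace(0.0, 1.0, 101)                # fraction placed in activity 1
    means  = np.array([w * mu[0] + (1 - w) * mu[1] for w in weights])
    sigmas = np.array([np.sqrt(np.array([w, 1 - w]) @ cov @ np.array([w, 1 - w]))
                       for w in weights])

    lam = 0.5                                           # assumed risk-aversion weight
    chosen = np.argmax(means - lam * sigmas)            # maximize mean - lambda * std
    print("chosen mix:", weights[chosen], "mean:", means[chosen], "std:", sigmas[chosen])

    # Imputation in reverse: given an observed decision, find the lambda that
    # would rationalize it as the maximizer of mean - lambda * std.
    observed = 0.40
    lams = np.linspace(0.01, 3.0, 300)
    imputed = lams[np.argmin([abs(weights[np.argmax(means - l * sigmas)] - observed)
                              for l in lams])]
    print("risk-aversion weight imputed from the observed mix:", round(imputed, 2))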

01 Mar 1970
TL;DR: Dynamic Programming methodology is introduced for inspection models, which deal with operating systems whose stochastic failure is detected by observations carried out intermittently; the approach can be utilized for any type of failure rate (increasing, decreasing, or mixed).
Abstract: Inspection models deal with operating systems whose stochastic failure is detected by observations carried out intermittently. Solutions of the problems under consideration using differentiation have previously been given by the authors. In the current study, Dynamic Programming methodology is introduced for this purpose. The approach has many potential advantages: it can be utilized for any type of failure rate (increasing, decreasing, or mixed). Furthermore, the method is applicable if additional types of costs are introduced, or if costs are time dependent. (Author)
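
A discretized, finite-horizon toy version of this kind of inspection problem, solved by backward dynamic programming: the state is the time of the last inspection at which the unit was still working, and the decision is when to inspect next. The failure-time distribution, both costs, and the forced final check at the horizon are invented assumptions, not the authors' model.

    import numpy as np

    N = 12                                   # planning horizon (a final check is forced at N)
    t = np.arange(1, N + 1)
    pmf = np.exp(-0.25 * t); pmf /= pmf.sum()              # failure-time distribution on 1..N
    surv = np.concatenate(([1.0], 1.0 - np.cumsum(pmf)))   # surv[j] = P(T > j)
    c_insp, c_wait = 1.0, 0.5                # cost per inspection, cost per period undetected

    V = np.zeros(N + 1)                      # V[j]: expected future cost given working at time j
    nxt = np.zeros(N + 1, dtype=int)
    for j in range(N - 1, -1, -1):
        best, best_k = np.inf, j + 1
        for k in range(j + 1, N + 1):
            fail_in = pmf[j:k] / surv[j]                    # P(T = t | T > j) for t in (j, k]
            undetected = np.dot(fail_in, k - t[j:k])        # expected undetected periods
            cost = c_insp + c_wait * undetected + (surv[k] / surv[j]) * V[k]
            if cost < best:
                best, best_k = cost, k
        V[j], nxt[j] = best, best_k

    schedule, j = [], 0                      # read off the optimal schedule from time 0
    while j < N:
        j = nxt[j]
        schedule.append(j)
    print("optimal inspection times:", schedule, " expected cost:", round(V[0], 3))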




01 Dec 1970
TL;DR: A model is presented which captures the salient features of decentralization problems and behavior via a two-level goal-programming approach; the solution technique used seems to have good potential for other non-convex problems, like those which arise from chance-constrained programming when zero-order decision rules are used.
Abstract: The paper presents a model which includes the salient features of decentralization problems and behavior by a two-level goal-programming approach. The fact that it turns out to be a non-convex programming problem is of interest, both for the managerial implications and the mathematical problems involved. The technique used to reach a solution, or some similar methods, seems to have a good potential for the solution of other non-convex problems, like those which arise from chance-constrained programming when zero-order decision rules are used, or other types of dynamic planning which involve decomposition of goals over time similar to the decomposition between units dealt with here. (Author)

Journal ArticleDOI
TL;DR: In this article, the known solution of the linear time-dependent stochastic optimal control problem with a quadratic performance index is used to solve a class of problems which have the same system equations but a different class of non-quadratic performance indices.
Abstract: The solution of the linear time-dependent stochastic optimal control problem with a quadratic performance index is well known. This paper presents a method of using this known solution to solve a class of problems which have the same system equations but with a class of non-quadratic performance indices.
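
The "well known" linear-quadratic solution leaned on above is the linear feedback law obtained from a backward Riccati recursion; by certainty equivalence the same gains remain optimal under additive noise. A minimal sketch with invented system and weighting matrices follows.

    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q = np.eye(2); R = np.array([[0.5]]); Qf = 10.0 * np.eye(2)
    N = 50

    P = Qf
    gains = []
    for _ in range(N):                                   # backward Riccati recursion
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()                                      # gains[k] is the feedback at stage k

    rng = np.random.default_rng(0)
    x = np.array([1.0, 0.0])
    for k in range(N):                                   # simulate u_k = -K_k x_k under noise
        u = -gains[k] @ x
        x = A @ x + B @ u + rng.normal(0.0, 0.02, size=2)
    print("state after the horizon:", x)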