
Showing papers on "Stochastic programming" published in 1978




Journal ArticleDOI
TL;DR: In the general framework of infinite-dimensional convex programming, two fundamental principles are demonstrated and used to derive several basic algorithms to solve a so-called "master" (constrained optimization) problem.
Abstract: In the general framework of infinite-dimensional convex programming, two fundamental principles are demonstrated and used to derive several basic algorithms to solve a so-called "master" (constrained optimization) problem. These algorithms consist in solving an infinite sequence of "auxiliary" problems whose solutions converge to the master's optimal one. By making particular choices for the auxiliary problems, one can recover either classical algorithms (gradient, Newton-Raphson, Uzawa) or decomposition-coordination (two-level) algorithms. The advantages of the theory are that it clearly sets out the connection between classical and two-level algorithms, provides a framework for classifying the two-level algorithms, and gives a systematic way of deriving new algorithms.

186 citations
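A minimal sketch of the auxiliary-problem idea in the unconstrained case (the quadratic core, step size and test objective below are illustrative assumptions, not taken from the paper): choosing the auxiliary core K(u) = ||u||^2 / 2 makes each auxiliary problem solvable in closed form, and the resulting iteration is exactly the classical gradient algorithm.

```python
import numpy as np

# illustrative "master" objective f(u) = 0.5*u·A·u - b·u, with gradient A u - b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad_f = lambda u: A @ u - b

u, eps = np.zeros(2), 0.2
for _ in range(200):
    # auxiliary problem: min_v 0.5*||v||^2 + (eps*grad_f(u) - u)·v,
    # whose closed-form solution is the gradient step below
    u = u - eps * grad_f(u)
print(u)   # converges to the master optimum A^{-1} b = [0.2, 0.4]
```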


Journal ArticleDOI
TL;DR: This example indicates that for some mechanical engineering optimization problems, the multicriterion optimization approach can automatically yield a solution which is both optimal and acceptable to the designer.

102 citations


Book ChapterDOI
01 Jan 1978
TL;DR: In this paper, the problem of finding optimal reservoir capacities is solved by minimizing total building cost, possibly plus a penalty, where a reliability-type constraint as well as lower and upper bounds for the capacities are prescribed.
Abstract: Mathematically a natural river system is a rooted directed tree where the orientations of the edges coincide with the directions of the streamflows. Assume that in some of the river valleys it is possible to build reservoirs whose purpose will be to retain the flood, once a year, say. The problem is to find optimal reservoir capacities by minimizing total building cost, possibly plus a penalty, where a reliability-type constraint as well as lower and upper bounds for the capacities are prescribed. The solution of the obtained nonlinear programming problem is based on the supporting hyperplane method of Veinott combined with simulation of multivariate probability distributions. Numerical illustrations are given.

85 citations


Journal ArticleDOI
TL;DR: An algorithm to solve the general convex (nondifferentiable) programming problem with noise is proposed and Probabilistic convergence theorems are obtained.
Abstract: The problem of minimizing a nonlinear function with nonlinear constraints when the values of the objective, the constraints and their gradients have errors, is studied. This noise may be due to the stochastic nature of the problem or to numerical error.

56 citations
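As a rough illustration of optimization with noisy gradient information (a generic stochastic-approximation sketch under an assumed Gaussian noise model, not the algorithm of the paper), a diminishing step size lets the iterates average out the errors:

```python
import numpy as np

rng = np.random.default_rng(0)
# true gradient of f(x) = 0.5*||x||^2, observed with additive noise
noisy_grad = lambda x: x + 0.5 * rng.standard_normal(x.shape)

x = np.array([5.0, -3.0])
for k in range(1, 5001):
    x = x - (1.0 / k) * noisy_grad(x)   # steps sum to infinity, squared steps are summable
print(x)                                # drifts toward the true minimiser at the origin
```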


Journal ArticleDOI
Luc Devroye1
TL;DR: The global convergence and the asymptotic optimality of the sequential sampling procedure are proved for both the stochastic and the deterministic optimization problem.
Abstract: A sequential random search method for the global minimization of a continuous function is proposed. The algorithm gradually concentrates the random search effort on areas neighboring the global minima. A modification is included for the case that the function cannot be evaluated exactly. The global convergence and the asymptotic optimality of the sequential sampling procedure are proved for both the stochastic and the deterministic optimization problem.

47 citations
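A minimal sketch of a sequential random search of this flavour (the Gaussian sampling, the geometric shrink factor and the toy objective are illustrative assumptions, not Devroye's exact procedure): the sampling radius is reduced gradually so the effort concentrates around the best point found so far.

```python
import numpy as np

def sequential_random_search(f, x0, n_iter=5000, sigma0=2.0, shrink=0.999, seed=0):
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, dtype=float)
    best_f, sigma = f(best_x), sigma0
    for _ in range(n_iter):
        cand = best_x + sigma * rng.standard_normal(best_x.shape)
        fc = f(cand)
        if fc < best_f:                      # keep only improving points
            best_x, best_f = cand, fc
        sigma *= shrink                      # gradually concentrate the search
    return best_x, best_f

# toy multimodal objective with global minimum 0 at the origin
f = lambda x: float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
print(sequential_random_search(f, [3.0, -2.0]))
```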


Book ChapterDOI
01 Jan 1978
TL;DR: The models discussed in the present paper are generalizations of the models introduced previously by A. Prekopa and M. Ziermann; they are stochastic programming models in which the initial stock levels are determined by algorithms rather than simple formulas.
Abstract: The models discussed in the present paper are generalizations of the models introduced previously by A. Prekopa [6] and M. Ziermann [13]. In the mentioned papers the initial stock level of one basic commodity is determined provided that the delivery and demand process allow certain homogeneity (in time) assumptions if they are random. Here we are dealing with more than one basic commodity and drop the time homogeneity assumption. Only the delivery processes will be assumed to be random. They will be supposed to be stochastically independent. The first model discussed in this paper was already introduced in [9]. All these models are stochastic programming models and algorithms are used to determine the initial stock levels rather than simple formulas. We have to solve nonlinear programming problems where one of the constraints is probabilistic. The function and gradient values of the corresponding constraining function are determined by simulation. A numerical example is detailed.

28 citations


Journal ArticleDOI
TL;DR: This paper presents an alternative approach based on the mean absolute deviation, which permits solution by a conventional linear programming algorithm whilst avoiding some of those assumptions previously required.
Abstract: Deriving acceptable farm plans where input-output coefficients are stochastic is a complex problem. Previous formulations have required many simplifying assumptions about the stochastic variables in the analysis. This paper presents an alternative approach based on the mean absolute deviation, which permits solution by a conventional linear programming algorithm whilst avoiding some of those assumptions previously required. The formulation also incorporates a stochastic objective function. Examples are provided using the situation of stochastic feed supply with reference to representative sheep-grain farms on the Northern Tablelands of New South Wales. Results from these suggest that this alternative approach is a distinct improvement on earlier stochastic formulations which utilize linear programming algorithms.

20 citations
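A minimal MOTAD-style sketch of the idea (all numbers, the single land constraint and the margin target are invented for illustration; the paper's actual model is richer): choose activity levels that meet a target expected gross margin while minimizing the mean absolute deviation of total margin over the scenarios, all within an ordinary linear program.

```python
import numpy as np
from scipy.optimize import linprog

C = np.array([[120.,  80., 100.],     # gross margins ($/ha) of 3 activities in 4 seasons
              [ 60., 110.,  95.],
              [140.,  70., 105.],
              [ 90., 100.,  90.]])
cbar = C.mean(axis=0)                  # expected margins
S, J = C.shape
target, land = 9500.0, 100.0           # required expected margin, hectares available

# variables: activity levels x (J of them) followed by deviation variables d (S of them)
c_obj = np.concatenate([np.zeros(J), np.full(S, 1.0 / S)])   # minimise mean |deviation|

A_ub, b_ub = [], []
for s in range(S):
    dev = C[s] - cbar
    for sign in (+1.0, -1.0):          # sign*dev·x - d_s <= 0 encodes |dev·x| <= d_s
        row = np.zeros(J + S)
        row[:J], row[J + s] = sign * dev, -1.0
        A_ub.append(row); b_ub.append(0.0)
A_ub.append(np.concatenate([-cbar, np.zeros(S)])); b_ub.append(-target)    # expected margin >= target
A_ub.append(np.concatenate([np.ones(J), np.zeros(S)])); b_ub.append(land)  # land limit

res = linprog(c_obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print("activity levels:", res.x[:J], " mean absolute deviation:", res.fun)
```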


Book ChapterDOI
01 Jan 1978
TL;DR: In this article, a large nonlinear stochastic programming model was formulated to determine the optimal monthly water release rules for the dams for the whole year, and detailed water distributions to various sectors were treated as linear programming subproblems.
Abstract: The model concerns the water release and distribution problem of the Karun River and its tributaries in Khuzestan, Iran. The system consists of three dams with three hydroelectric plants, 16 irrigation areas and 13 municipal/industrial demand locations. Water inflows fluctuate with time. A large nonlinear stochastic programming model was formulated to determine the optimal monthly water release rules for the dams for the whole year. Recourse actions and chance constraints are incorporated in the model to account for the uncertainty of the inflows. Detailed water distributions to various sectors are treated as linear programming subproblems.

16 citations
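A tiny sketch of how a single chance constraint of this kind can be made deterministic (the normal-inflow assumption and all numbers are illustrative, not taken from the Karun model): requiring the end-of-month storage to stay above a minimum with probability alpha caps the release at a closed-form value.

```python
from scipy.stats import norm

storage, dead_storage = 400.0, 100.0   # current and minimum storage (illustrative units)
mu, sigma, alpha = 150.0, 40.0, 0.9    # inflow mean, inflow std, required reliability

# P(storage + inflow - release >= dead_storage) >= alpha, with inflow ~ N(mu, sigma^2)
max_release = storage - dead_storage + mu + sigma * norm.ppf(1.0 - alpha)
print("largest release meeting the reliability requirement:", max_release)
```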


Journal ArticleDOI
TL;DR: The paper describes a modified "wait-and-see" approach to solving two-stage stochastic programming problems that allows management to consider a wide variety of objectives in making the choice between alternatives and facilitates detection of the cause of any infeasibility due to management policy constraints.
Abstract: The paper describes a modified "wait-and-see" approach to solving two-stage stochastic programming problems. The approach, which involves a detailed sensitivity analysis in the classical sense, is described within the frameworks of decision theory and probabilistic programming. Although optimality in the mathematical sense cannot be guaranteed by using the approach, it is suggested that the managerial benefits weigh heavily in its favour. The approach allows management to consider a wide variety of objectives in making the choice between alternatives and facilitates detection of the cause of any infeasibility due to management policy constraints. In addition, it allows much simpler programming calculations and provides an upper bound on the benefits that can be obtained by solving the full "here-and-now" problem, so that a judgement of the worth of the added computational burden can easily be made.

14 citations
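A minimal sketch of the "wait-and-see" calculation on an invented one-product example (not the paper's application): solve one LP per demand scenario as if that demand were known in advance, then average the optimal values; for a maximisation problem this average is an upper bound on the expected profit of any "here-and-now" decision.

```python
import numpy as np
from scipy.optimize import linprog

demands = np.array([70.0, 100.0, 130.0])   # equally likely demand scenarios
price, cost = 5.0, 3.0

def scenario_profit(d):
    # variables: x = units produced, y = units sold; maximise price*y - cost*x
    c = [cost, -price]                      # linprog minimises, so negate the profit terms
    A_ub = [[-1.0, 1.0],                    # y <= x  (cannot sell more than produced)
            [ 0.0, 1.0]]                    # y <= d  (cannot sell more than demanded)
    res = linprog(c, A_ub=A_ub, b_ub=[0.0, d], bounds=(0, None))
    return -res.fun

ws_value = np.mean([scenario_profit(d) for d in demands])
print("wait-and-see bound on expected profit:", ws_value)
```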


Journal ArticleDOI
01 Dec 1978
TL;DR: Stochastic dynamic decision processes with finite state and action spaces can be handled by policy and value iteration, both typical dynamic programming techniques, as well as by linear programming; the most important results concerning the latter are reported.
Abstract: As has been known for a long time, stochastic dynamic decision processes with finite state and action spaces can be handled by policy and value iteration, both typical dynamic programming techniques, as well as by linear programming. In the present paper, the most important results concerning the latter are reported and an outlook on more general settings is given.
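For the linear-programming route, a minimal sketch on an invented two-state, two-action discounted problem (the data and the discount factor are illustrative): minimise the sum of the state values subject to v(s) >= r(s,a) + gamma * sum_s' P(s'|s,a) v(s') for every state-action pair, then read off a greedy policy.

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
# P[a, s, s'] = transition probability, R[s, a] = expected one-step reward
P = np.array([[[0.8, 0.2],
               [0.3, 0.7]],
              [[0.5, 0.5],
               [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
nS, nA = R.shape

# LP: minimise sum_s v(s)  subject to  v(s) >= R[s,a] + gamma * sum_s' P[a,s,s'] v(s')
A_ub, b_ub = [], []
for s in range(nS):
    for a in range(nA):
        A_ub.append(gamma * P[a, s] - np.eye(nS)[s])   # (gamma*P(.|s,a) - e_s)·v <= -R[s,a]
        b_ub.append(-R[s, a])

res = linprog(np.ones(nS), A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(None, None))
v = res.x
Q = R + gamma * (P @ v).T                              # state-action values under the optimal v
print("optimal values:", v, " greedy policy:", Q.argmax(axis=1))
```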


Journal ArticleDOI
TL;DR: The optimal policies and minimal value functions of the so-constructed Markovian decision process are calculated by dynamic programming; convergence theorems are proved and bounds for the minimal value function are obtained.
Abstract: In this note we investigate a stochastic dynamic programming problem with countable state space and finite action space. An approximation procedure is given for the numerical solution by a decomposition algorithm of the state space. The optimal policies and minimal value functions of the so-constructed Markovian decision process are calculated by dynamic programming. Convergence theorems are proved and bounds for the minimal value function are obtained.

Journal ArticleDOI
TL;DR: A model is developed which can take into account various real-life factors in the planning phase, including delays in production and the probability of production being scrapped or of extra work being required; some computational results are reported.


Journal ArticleDOI
TL;DR: It is shown that it is possible under certain assumptions to obtain some or even all ε-efficient solutions of the stochastic problem by solving the parametric problem with respect to a certain parameter set.
Abstract: In this paper we study the relation between the general concept for an optimal solution for stochastic programming problems with a random objective function, namely the concept of an ε-efficient solution, and the associated parametric problem. We show that it is possible under certain assumptions to obtain some or even all ε-efficient solutions of the stochastic problem by solving the parametric problem with respect to a certain parameter set.

01 Jan 1978
TL;DR: The main purpose of this article is to review briefly some important applications of nondifferentiable and stochastic optimization and to characterize principal directions of research.
Abstract: Optimization methods are of a great practical importance in systems analysis. They allow us to find the best behavior of a system, determine the optimal structure and compute the optimal parameters of the control system etc. The development of nondifferentiable optimization, differentiable and nondifferentiable stochastic optimization allows us to state and effectively solve new complex optimization problems which were impossible to solve by classical optimization methods. The main purpose of this article is to review briefly some important applications of nondifferentiable and stochastic optimization and to characterize principal directions of research. Clearly, the interests of the author have influenced the content of this article.

Journal ArticleDOI
TL;DR: In this paper, an exact method for solving all-integer linear programming problems is presented; dynamic-programming methodology is used to search candidate hyperplanes efficiently for the optimal feasible integer solution, and the explosive storage requirements of high-dimensional dynamic programming are avoided by an analytic representation of the optimal allocation at each stage.
Abstract: An exact method for solving all-integer linear-programming problems is presented. Dynamic-programming methodology is used to search efficiently candidate hyperplanes for the optimal feasible integer solution. The explosive storage requirements for high-dimensional dynamic programming are avoided by the development of an analytic representation of the optimal allocation at each stage. Computational results for problems of small to moderate size are also presented.

Journal ArticleDOI
TL;DR: In this paper, a method of using deterministic dynamic programming to determine the long term operating policy for a reservoir system is described, which is defined to be the policy which is to be used as a matter of routine, without foreknowledge of future events.
Abstract: This paper describes a method of using deterministic dynamic programming to determine the long term operating policy for a reservoir system. The long term policy is defined to be the policy which is to be used as a matter of routine, without foreknowledge of future events. Stochastic dynamic programming formulations of this problem have been advocated, but because of excessive computer times used in the analysis, the approach is only feasible for small systems. This complication does not apply to deterministic formulations, but the analysis produces only the optimal policy for the particular flow trace that was used, not one that may occur at random. The paper describes one way of overcoming this latter restriction. The effect of length of data sequence and reservoir size is also investigated. Policies produced by the new method compared favourably with those obtained by simulation.
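A minimal backward-DP sketch in the same spirit (single reservoir, coarse storage grid, a fixed historical inflow trace and a square-root release benefit, all invented for illustration): the recursion yields a release as a function of period and current storage, which is the kind of rule the paper then generalises beyond the particular flow trace.

```python
import numpy as np

inflows = [30, 50, 80, 60, 20, 10]          # historical inflow trace (one value per period)
levels = np.arange(0, 101, 10)              # discretised storage levels, capacity 100
T, nL = len(inflows), len(levels)

V = np.zeros((T + 1, nL))                   # value-to-go, terminal value set to 0
policy = np.zeros((T, nL))

for t in range(T - 1, -1, -1):              # backward recursion over periods
    for i, s in enumerate(levels):
        best = -np.inf
        for r in range(0, int(s) + inflows[t] + 1, 10):       # candidate releases
            s_next = min(s + inflows[t] - r, levels[-1])      # water above capacity spills
            val = np.sqrt(r) + V[t + 1, int(s_next // 10)]    # release benefit + future value
            if val > best:
                best, policy[t, i] = val, r
        V[t, i] = best

print(policy[0])   # first-period release rule as a function of current storage
```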

Journal ArticleDOI
TL;DR: After reviewing existing approaches to the general stochastic programming problem, an improved experimental method is proposed that uses a variety of mathematical programming algorithms and any desired pattern of parameter variation.
Abstract: After reviewing existing approaches to the general stochastic programming problem, an improved experimental method is proposed. This method uses a variety of mathematical programming algorithms and any desired pattern of parameter variation. Statistical analysis of the results allows decision-makers to make probabilistic statements about the values of the decision variables and of the objective function. Illustrative examples are given.
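A bare-bones sketch of the experimental idea (the two-variable LP and the normal sampling of the objective coefficients are invented for illustration): sample the uncertain coefficients, solve the resulting LP each time, and summarise the optimal objective values statistically.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
A_ub = [[1.0, 2.0], [3.0, 1.0]]                      # fixed resource constraints
b_ub = [14.0, 18.0]

profits = []
for _ in range(500):
    c = -rng.normal([3.0, 5.0], [0.5, 1.0])          # sampled profit coefficients (maximise)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    profits.append(-res.fun)

lo, hi = np.percentile(profits, [5, 95])
print(f"mean profit {np.mean(profits):.2f}, 5th-95th percentile [{lo:.2f}, {hi:.2f}]")
```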

Journal ArticleDOI
TL;DR: Numerical examples illustrate the possibilities of applying the present approach and the results obtained for the management and control of complex technological processes and for the management of computational processes in real-time computer operating systems.

Journal ArticleDOI
TL;DR: A global algorithm is presented for the minimax problem of a stochastic program in which some of the right-hand-side parameters are stochastic, and it is shown how minimax solutions may be obtained where stochastic parameters occur solely in the objective function, and in the objective function and right-hand sides simultaneously.

Book ChapterDOI
01 Jan 1978
TL;DR: This chapter describes dynamic programming as an extremely flexible optimization technique that can be applied to multi-sector growth models to maximize some objective function subject to constraints imposed by factors such as technology and resource availability.
Abstract: This chapter describes dynamic programming as an extremely flexible optimization technique. The variety of problems that have been formulated as dynamic programs seems endless, accounting for the frequent use of dynamic programming as a conceptual and analytical tool. Its application to solving problems has been limited by the computational difficulties, which arise when the number of possible states is large. According to Bellman, the explosive increase in the computing time and computer storage requirements for state spaces of high dimension is called the curse of dimensionality. This presents an obstacle to the widespread application of dynamic programming because representative models of real systems require multidimensional state spaces. An approach to this problem is to search for efficient heuristics. In multi-sector growth models, the status of an economy is represented by a state vector whose components indicate the current stock levels of the various goods; the goal is to control the economy over time so as to maximize some objective function subject to constraints imposed by factors such as technology and resource availability.
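A one-line illustration of the curse of dimensionality mentioned above (the grid of 10 stock levels per good is an assumption made only for the arithmetic): the number of table entries a tabular DP must store per stage grows exponentially with the number of goods.

```python
levels = 10                                   # assumed discrete stock levels per good
for goods in (2, 4, 6, 8):
    print(f"{goods} goods x {levels} levels each -> {levels**goods:,} states per stage")
```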

ReportDOI
01 Jan 1978
TL;DR: Program MOGG is a FORTRAN code which finds a global optimum of nonconvex optimization problems that have been approximated arbitrarily closely by separable problems wherein all functions are piecewise linear.
Abstract: The global optima of nonconvex optimization problems are, in general, impossible to find. Many such problems, however, can be approximated arbitrarily closely by separable problems wherein all functions are piecewise linear. Program MOGG is a FORTRAN code which will find a global optimum to these latter problems. The code is based on a branch and bound algorithm that is guaranteed to terminate after a finite number of steps. The code incorporates a linear programming subsystem designed to be numerically stable even for ill-conditioned problems.

Book ChapterDOI
01 Jan 1978
TL;DR: A computationally practical algorithm for obtaining an approximate solution using the basic stochastic dynamic programming approach is developed; it preserves the "closed loop" feature of the dynamic programming solution in that the resulting decision policy depends both on the results of past experiments and on the statistics of the outcomes of future experiments.
Abstract: The paper is concerned with the development of a stochastic mathematical model for management of a large-scale Research and Development program. The problem of optimal funding of a complex R & D program, consisting of several projects, their components and possible technical approaches, is considered. It is assumed that the costs of technical approaches as well as the probabilities of technical success are not known with certainty. So it is advantageous to perform a limited number of diagnostic experiments in order to reduce this uncertainty. The problem is to develop a policy for performing experiments and allocating resources on the basis of the results of the experiments, such that a chosen performance index is optimized. A computationally practical algorithm for obtaining an approximate solution using the basic stochastic dynamic programming approach is developed. This algorithm preserves the "closed loop" feature of the dynamic programming solution in that the resulting decision policy depends both on the results of past experiments and on the statistics of the outcomes of future experiments. In other words, the present decision takes into account the value of future information.




01 Mar 1978
TL;DR: In this paper, a dynamic programming approach to the multicriterion optimization problem is proposed, based on the concept that the ideal solution to a multiobjective problem must be a Pareto optimal solution.
Abstract: Decision makers are often confronted with problems for which there exist several distinct measures of success. Such problems can often be expressed in terms of linear or nonlinear programming models with several 'criterion' functions instead of single objective functions. A variety of techniques have been applied to multicriterion problems, but the approach used here, 'The Dynamic Programming Approach to the Multicriterion Optimization Problem,' is based on the concept that the ideal solution to a multiobjective problem must be a Pareto optimal solution. In many cases simply narrowing the set of candidate solutions to the set of all Pareto optimal solutions may enable the decision maker to find the compromise being sought. The determination of nondominated points and corresponding nondominated values (Pareto optimal solutions) related to the multicriterion optimization problem is approached through the use of dynamic programming. The dynamic programming approach has an attractive property which provides the basis for generation of nondominated solutions at each stage by the decomposition method. By using recursive equations we can find the nondominated points and corresponding nondominated solutions of the multiaggregate return function.
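A minimal sketch of stage-wise generation of nondominated points (two criteria and invented stage options; not the paper's recursive equations in full): at every stage, extend each kept value vector by each option's gains and prune the dominated combinations.

```python
from itertools import product

def nondominated(points):
    # keep a point unless some other point is at least as good in both criteria
    return sorted({p for p in points
                   if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)})

stages = [                                   # options per stage: (profit, reliability) gains
    [(3, 1), (1, 4), (2, 2)],
    [(2, 2), (4, 0), (0, 5)],
    [(1, 3), (3, 1)],
]

front = [(0, 0)]
for options in stages:
    front = nondominated([(a + c, b + d) for (a, b), (c, d) in product(front, options)])
print(front)   # nondominated cumulative (profit, reliability) vectors
```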

01 Jan 1978
TL;DR: The aim of the author is to develop methods of optimal decision making which avoid direct comparison of different decisions and use only easily accessible information from the computational point of view.
Abstract: The development of optimization methods has a significant meaning for systems analysis. Optimization methods provide working tools for quantitative decision making based on correct specification of the problem and appropriately chosen solution methods. Not all problems of systems analysis are optimization problems, of course, but in any systems problem optimization methods are useful and important tools. The power of these methods and their ability to handle different problems makes it possible to analyze and construct very complicated systems. Economic planning, for instance, would be much more limited without linear programming techniques, which are very specific optimization methods. LP methods had a great impact on the theory and practice of systems analysis, not only as a computing aid but also in providing a general model or structure for systems problems. LP techniques, however, are not the only possible optimization methods. The consideration of uncertainty, partial knowledge of the system's structure and characteristics, conflicting goals and unknown exogenous factors calls for more general models and consequently for more sophisticated methods to work with these models. Nondifferentiable optimization methods seem better suited to handle these features than other techniques at the present time. The theory of nondifferentiable optimization studies extremum problems of complex structure involving interactions of subproblems, stochastic factors, multi-stage decisions and other difficulties. This publication covers one particular, but unfortunately common, situation when an estimation of the outcome of some definite decision needs the solution of a difficult auxiliary, internal, extremum problem. Solution of this auxiliary problem may be very time-consuming and so may hinder the wide analysis of different decisions. The aim of the author is to develop methods of optimal decision making which avoid direct comparison of different decisions and use only information that is easily accessible from the computational point of view.