
Showing papers on "Stochastic programming published in 1994"


Book
15 Apr 1994
TL;DR: Puterman as discussed by the authors provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models, focusing primarily on infinite horizon discrete time models and models with discrete state spaces while also examining models with arbitrary state spaces, finite horizon models, and continuous-time discrete state models.
Abstract: From the Publisher: The past decade has seen considerable theoretical and applied research on Markov decision processes, as well as the growing use of these models in ecology, economics, communications engineering, and other fields where outcomes are uncertain and sequential decision-making processes are needed. A timely response to this increased activity, Martin L. Puterman's new work provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models. It discusses all major research directions in the field, highlights many significant applications of Markov decision process models, and explores numerous important topics that have previously been neglected or given cursory coverage in the literature. Markov Decision Processes focuses primarily on infinite horizon discrete time models and models with discrete state spaces while also examining models with arbitrary state spaces, finite horizon models, and continuous-time discrete state models. The book is organized around optimality criteria, using a common framework centered on the optimality (Bellman) equation for presenting results. The results are presented in a "theorem-proof" format and elaborated on through both discussion and examples, including results that are not available in any other book. A two-state Markov decision process model, presented in Chapter 3, is analyzed repeatedly throughout the book and demonstrates many results and algorithms. Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk sensitive optimality criteria. It also explores several topics that have received little or no attention in other books, including modified policy iteration, multichain models with average reward criterion, and sensitive optimality.
In addition, a Bibliographic Remarks section in each chapter comments on relevant historical developments.
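The Bellman-equation framework the book is organized around can be illustrated with a minimal value-iteration sketch for a two-state, two-action MDP. The transition probabilities and rewards below are invented for illustration and are not the model from Chapter 3 of the book:

```python
# Value iteration for a hypothetical two-state, two-action MDP.
# Solves the Bellman optimality equation
#   V(s) = max_a [ r(s,a) + gamma * sum_s' P(s'|s,a) V(s') ].

# P[s][a] = transition probabilities to states 0 and 1
P = {0: {0: [0.9, 0.1], 1: [0.4, 0.6]},
     1: {0: [0.2, 0.8], 1: [0.5, 0.5]}}
# r[s][a] = immediate reward
r = {0: {0: 1.0, 1: 2.0},
     1: {0: 0.5, 1: 1.5}}
gamma = 0.9  # discount factor

V = {0: 0.0, 1: 0.0}
for _ in range(1000):
    V_new = {s: max(r[s][a] + gamma * sum(P[s][a][t] * V[t] for t in (0, 1))
                    for a in (0, 1))
             for s in (0, 1)}
    if max(abs(V_new[s] - V[s]) for s in (0, 1)) < 1e-10:
        V = V_new
        break
    V = V_new

# greedy policy with respect to the converged value function
policy = {s: max((0, 1),
                 key=lambda a: r[s][a] + gamma * sum(P[s][a][t] * V[t] for t in (0, 1)))
          for s in (0, 1)}
print(V, policy)
```

Value iteration is only one of the solution methods the book treats; policy iteration and modified policy iteration operate on the same Bellman equation.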

11,625 citations


Journal ArticleDOI
TL;DR: In this paper, a stochastic dynamic programming model is used to value the operating flexibility of a multinational corporation to shift production between two manufacturing plants located in different countries, treating this flexibility as an option whose value depends on the real exchange rate.
Abstract: The multinational corporation is a network of activities located in different countries. The value of this network derives from the opportunity to benefit from uncertainty through the coordination of subsidiaries which are geographically dispersed. We model this coordination as the operating flexibility to shift production between two manufacturing plants located in different countries. A stochastic dynamic programming model treats explicitly this flexibility as equivalent to owning an option, the value of which is dependent upon the real exchange rate. The model is extended to analyze hysteresis effects and within-country growth options. We show that the management of across-border coordination has led to changes in the heuristic rules used for performance evaluation and transfer pricing.
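The switching-option idea can be sketched by backward induction on a binomial lattice for the real exchange rate: at each node the firm either keeps producing in its current plant or pays a cost to shift production. All numbers and profit functions below are invented for illustration, not the authors' calibration:

```python
# Toy backward induction: value of the flexibility to shift production
# between a home and a foreign plant as the real exchange rate e moves
# on a binomial lattice. Illustrative numbers only.
u, d, p = 1.1, 1 / 1.1, 0.5   # lattice up/down factors and up-probability
e0, T = 1.0, 20               # initial real exchange rate, number of periods
switch_cost = 0.05            # cost of shifting production between plants

def other(plant):
    return "foreign" if plant == "home" else "home"

def profit(plant, e):
    # home plant earns more when e is high, foreign plant when e is low
    return e - 1.0 if plant == "home" else 1.0 - e

# V[(t, j, plant)] = value from period t on, after j up-moves, operating `plant`
V = {}
for t in range(T, -1, -1):
    for j in range(t + 1):
        e = e0 * u**j * d**(t - j)
        for plant in ("home", "foreign"):
            if t == T:
                cont_stay = cont_switch = 0.0
            else:
                cont_stay = p * V[(t + 1, j + 1, plant)] + (1 - p) * V[(t + 1, j, plant)]
                cont_switch = p * V[(t + 1, j + 1, other(plant))] + (1 - p) * V[(t + 1, j, other(plant))]
            stay = profit(plant, e) + cont_stay
            switch = profit(other(plant), e) - switch_cost + cont_switch
            V[(t, j, plant)] = max(stay, switch)
flexible = V[(0, 0, "home")]

# benchmark: value of being locked into the home plant forever (no flexibility)
R = {}
for t in range(T, -1, -1):
    for j in range(t + 1):
        e = e0 * u**j * d**(t - j)
        cont = 0.0 if t == T else p * R[(t + 1, j + 1)] + (1 - p) * R[(t + 1, j)]
        R[(t, j)] = profit("home", e) + cont
rigid = R[(0, 0)]
print(flexible, rigid)
```

The gap between `flexible` and `rigid` is the option value of across-border coordination; a positive switching cost is what generates the hysteresis effects the paper analyzes.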

1,092 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed an asset/liability management model using multistage stochastic programming, which determines an optimal investment strategy that incorporates a multi-period approach and enables the decision makers to define risks in tangible operational terms.
Abstract: Frank Russell Company and The Yasuda Fire and Marine Insurance Co., Ltd., developed an asset/liability management model using multistage stochastic programming. It determines an optimal investment strategy that incorporates a multiperiod approach and enables the decision makers to define risks in tangible operational terms. It also handles the complex regulations imposed by Japanese insurance laws and practices. The most important goal is to produce a high-income return to pay annual interest on savings-type insurance policies without sacrificing the goal of maximizing the long-term wealth of the firm. During the first two years of use, fiscal 1991 and 1992, the investment strategy devised by the model yielded extra income of 42 basis points (¥8.7 billion or US$79 million).

371 citations


Journal ArticleDOI
TL;DR: The present paper is intended to review the existing literature on multi-objective combinatorial optimization (MOCO) problems and examines various classical combinatorial problems in a multi-criteria framework.
Abstract: In the last 20 years many multi-objective linear programming (MOLP) methods with continuous variables have been developed. However, in many real-world applications discrete variables must be introduced. It is well known that MOLP problems with discrete variables can have special difficulties and so cannot be solved by simply combining discrete programming methods and multi-objective programming methods. The present paper is intended to review the existing literature on multi-objective combinatorial optimization (MOCO) problems. Various classical combinatorial problems are examined in a multi-criteria framework. Some conclusions are drawn and directions for future research are suggested.

320 citations


Journal ArticleDOI
TL;DR: In this paper, a robust optimization model for planning power system capacity expansion in the face of uncertain power demand is developed. But the model is not suitable for the case of large-scale power systems.
Abstract: We develop a robust optimization model for planning power system capacity expansion in the face of uncertain power demand. The model generates capacity expansion plans that are both solution and model robust. That is, the optimal solution from the model is ‘almost’ optimal for any realization of the demand scenarios (i.e. solution robustness). Furthermore, the optimal solution has reduced excess capacity for any realization of the scenarios (i.e. model robustness). Experience with a characteristic test problem illustrates not only the unavoidable trade-offs between solution and model robustness, but also the effectiveness of the model in controlling the sensitivity of its solution to the uncertain input data. The experiments also illustrate the differences of robust optimization from the classical stochastic programming formulation.
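The solution-robustness/model-robustness trade-off can be sketched with a Mulvey-style objective that combines expected scenario cost, a variance penalty (solution robustness), and a penalty on expected excess capacity (model robustness). The scenario data and weights below are invented for illustration:

```python
# Toy robust capacity-expansion objective over demand scenarios:
#   minimize  mean cost + lam * variance of cost + omega * expected excess capacity
# Illustrative numbers; not the paper's power-system model.
scenarios = [(0.25, 80.0), (0.5, 100.0), (0.25, 130.0)]  # (probability, demand)
build_cost, shortage_penalty = 1.0, 3.0

def scenario_cost(x, demand):
    return build_cost * x + shortage_penalty * max(demand - x, 0.0)

def robust_objective(x, lam, omega):
    mean = sum(pr * scenario_cost(x, d) for pr, d in scenarios)
    var = sum(pr * (scenario_cost(x, d) - mean) ** 2 for pr, d in scenarios)
    excess = sum(pr * max(x - d, 0.0) for pr, d in scenarios)
    return mean + lam * var + omega * excess

def best_capacity(lam, omega):
    xs = [x / 10.0 for x in range(0, 2001)]  # grid search on [0, 200]
    return min(xs, key=lambda x: robust_objective(x, lam, omega))

x_neutral = best_capacity(0.0, 0.0)   # classical expected-cost solution
x_robust = best_capacity(0.05, 0.0)   # variance-averse: solution robustness
x_lean = best_capacity(0.0, 6.0)      # penalize excess capacity: model robustness
print(x_neutral, x_robust, x_lean)
```

Raising the variance weight pushes toward more capacity (less cost dispersion), while penalizing excess capacity pushes the other way, mirroring the unavoidable trade-off the experiments illustrate.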

187 citations


Journal ArticleDOI
TL;DR: An optimal dynamic solution is presented that simplifies the structure of the control mechanism by exercising ground-holding on groups of aircraft instead of individual flights, using stochastic linear programming with recourse.
Abstract: Existing probabilistic solutions to the ground-holding problem in air traffic control are of a static nature, with ground-holds assigned to aircraft at the beginning of daily operations. In this paper we present an optimal dynamic solution that simplifies the structure of the control mechanism by exercising ground-holding on groups of aircraft instead of individual flights. Using stochastic linear programming with recourse, we have been able to solve problem instances for one of the largest airports in the U.S. with just a powerful PC. We illustrate the advantage of the probabilistic dynamic solution over: (a) the static solution; (b) a deterministic solution; and (c) the passive strategy of no ground-holding.

151 citations


Journal ArticleDOI
TL;DR: It is speculated that the stochastic training method implemented in this study for training recurrent perceptrons can be used to train perceptron networks that have radically recurrent architectures.
Abstract: Evolutionary programming, a systematic multi-agent stochastic search technique, is used to generate recurrent perceptrons (nonlinear IIR filters). A hybrid optimization scheme is proposed that embeds a single-agent stochastic search technique, the method of Solis and Wets, into the evolutionary programming paradigm. The proposed hybrid optimization approach is further augmented by "blending" randomly selected parent vectors to create additional offspring. The first part of this work investigates the performance of the suggested hybrid stochastic search method. After demonstration on the Bohachevsky and Rosenbrock response surfaces, the hybrid stochastic optimization approach is applied in determining both the model order and the coefficients of recurrent perceptron time-series models. An information criterion is used to evaluate each recurrent perceptron structure as a candidate solution. It is speculated that the stochastic training method implemented in this study for training recurrent perceptrons can be used to train perceptron networks that have radically recurrent architectures.
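The flavor of the hybrid search can be sketched on one of the test surfaces the paper names: evolutionary programming with Gaussian mutation on the Rosenbrock function, augmented by a Solis-Wets-style local perturbation of the current best point. All parameter choices are illustrative, not the authors':

```python
import random

# Minimal evolutionary-programming sketch: Gaussian-mutation search on the
# Rosenbrock surface with an occasional Solis-Wets-style local step.
random.seed(1)

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

dim, mu = 2, 30
pop = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(mu)]
initial_best = min(rosenbrock(ind) for ind in pop)

for gen in range(300):
    # each parent creates one offspring by Gaussian mutation
    offspring = [[xi + random.gauss(0, 0.1) for xi in ind] for ind in pop]
    # Solis-Wets-style local step: small perturbation of the best, kept if better
    best = min(pop + offspring, key=rosenbrock)
    local = [xi + random.gauss(0, 0.01) for xi in best]
    if rosenbrock(local) < rosenbrock(best):
        offspring.append(local)
    # truncation selection back to mu individuals
    pop = sorted(pop + offspring, key=rosenbrock)[:mu]

final_best = rosenbrock(pop[0])
print(initial_best, final_best)
```

The paper's actual scheme also blends randomly selected parents and applies the search to recurrent perceptron coefficients and model order; this sketch only shows the outer stochastic-search loop.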

130 citations


Book
01 May 1994
TL;DR: New models of fuzzy multicriteria, discrete, geometric, fractional, dynamic and stochastic programming are discussed, and relations to genetic algorithms, simulated annealing and neural networks presented in the book may lead to a new generation of fuzzy optimization models.
Abstract: The purpose of this text is to provide a comprehensive exposition of relevant recent developments in the field of broadly perceived fuzzy optimization. New models of fuzzy multicriteria, discrete, geometric, fractional, dynamic and stochastic programming are discussed. Relations to genetic algorithms, simulated annealing and neural networks presented in the book may lead to a new generation of fuzzy optimization models.

126 citations


Journal ArticleDOI
TL;DR: The purpose of the paper is to present the numerical results of a comparative study of eleven mathematical programming codes which represent typical realizations of the mathematical methods mentioned in the structural optimization system MBB-LAGRANGE, which proceeds from a typical finite element analysis.
Abstract: For FE-based structural optimization systems, a large variety of different numerical algorithms is available, e.g. sequential linear programming, sequential quadratic programming, convex approximation, generalized reduced gradient, multiplier, penalty or optimality criteria methods, and combinations of these approaches. The purpose of the paper is to present the numerical results of a comparative study of eleven mathematical programming codes which represent typical realizations of the mathematical methods mentioned. They are implemented in the structural optimization system MBB-LAGRANGE, which proceeds from a typical finite element analysis. The comparative results are obtained from a collection of 79 test problems. The majority of them are academic test cases, the others possess some practical real-life background. Optimization is performed with respect to sizing of trusses and beams, wall thicknesses, etc., subject to stress, displacement, and many other constraints. Numerical comparison is based on reliability and efficiency measured by calculation time and number of analyses needed to reach a certain accuracy level.

115 citations


Journal ArticleDOI
TL;DR: A nonlinear programming algorithm is described that exploits the matrix sparsity arising in sequential quadratic programming applications and is used to solve trajectory optimization problems with nonlinear equality and inequality constraints.
Abstract: One of the most effective numerical techniques for solving nonlinear programming problems is the sequential quadratic programming approach. Many large nonlinear programming problems arise naturally in data fitting and when discretization techniques are applied to systems described by ordinary or partial differential equations. Problems of this type are characterized by matrices which are large and sparse. This paper describes a nonlinear programming algorithm which exploits the matrix sparsity produced by these applications. Numerical experience is reported for a collection of trajectory optimization problems with nonlinear equality and inequality constraints.

105 citations


Journal ArticleDOI
TL;DR: In this paper, a nonlinear disaggregation technique for the operation of multireservoir systems is described, where the disaggregation is done by training a neural network to give, for an aggregated storage level, the storage level of each reservoir of the system.
Abstract: This paper describes a nonlinear disaggregation technique for the operation of multireservoir systems. The disaggregation is done by training a neural network to give, for an aggregated storage level, the storage level of each reservoir of the system. The training set is obtained by solving the deterministic operating problem of a large number of equally likely flow sequences. The training is achieved using the back propagation method, and the minimization of the quadratic error is computed by a variable step gradient method. The aggregated storage level can be determined by stochastic dynamic programming in which all hydroelectric installations are aggregated to form one equivalent reservoir. The results of applying the learning disaggregation technique to Quebec's La Grande river are reported, and a comparison with the principal component analysis disaggregation technique is given.

Book
09 May 1994
TL;DR: In this book, the authors present a survey of classical optimization techniques, optimization and inequalities, and linear and non-linear programming methods, as well as their applications in simulation and in optimization in function spaces.
Abstract: Synopsis: Classical Optimisation Techniques, Optimisation and Inequalities, Numerical Methods of Optimisation, Linear Programming Techniques, Non-linear Programming Techniques, Dynamic Programming Methods, Variational Methods, Stochastic Approximation Procedures, Optimisation in Simulation, Optimisation in Function Spaces Classical Optimisation Techniques: Preliminaries, Necessary and Sufficient Conditions for an Extremum, Constrained Optimisation - Lagrange Multipliers, Statistical Applications Optimisation and Inequalities: Classical Inequalities, Matrix Inequalities, Applications Numerical Methods of Optimisation: Numerical Evaluation of Roots of Equations, Direct Search Methods, Gradient Methods, Convergence of Numerical Procedures, Non-linear Regression and Other Statistical Algorithms Linear Programming Techniques: Linear Programming Problem, Standard Form of the Linear Programming Problem, Simplex Method, Karmarkar's Algorithm, Zero-Sum Two Person Finite-Games and Linear Programming, Integer Programming, Statistical Applications Non-linear Programming Methods: Statistical Examples, Kuhn-Tucker Conditions, Quadratic Programming, Convex Programming, Applications, Statistical Control of Optimisation, Stochastic Programming, Geometric Programming Dynamic Programming Methods: Regulation and Control, Functional Equation and Principles of Optimality, Dynamic Programming and Approximation, Patient Care through Dynamic Programming, Pontryagin Maximum Principle, Miscellaneous Applications Variational Methods: Statistical Applications, Euler-Lagrange Equations, Neyman-Pearson Technique, Robust Statistics and Variational Methods, Penalised Maximum Likelihood Estimates Stochastic Approximation Procedures: Robbins-Monro Procedure, General Case, Kiefer-Wolfowitz Procedure, Applications, Stochastic Approximation and Filtering Optimisation in Simulation: Optimisation Criteria, Optimality of Regression Experiments, Response Surface Methods, Miscellaneous Stochastic Methods, Applications Optimisation in Function Spaces: Preliminaries, Optimisation Results, Splines in Statistics, Chapter Exercises, Bibliography, Index.

Journal ArticleDOI
TL;DR: In this article, a two-stage stochastic programming formulation is developed where the objective is to determine an optimal plan (i.e., process utilization levels, purchases and sales of materials) and/or an optimal capacity expansion policy that maximizes an expected profit.
Abstract: The problem of planning under uncertainty is addressed. Short term production planning with a time horizon of a few weeks or months and long-range planning including capacity expansion options are considered. Based on the postulation of general probability distribution functions describing process uncertainty, a two-stage stochastic programming formulation is developed where the objective is to determine an optimal plan (i.e., process utilization levels, purchases and sales of materials) and/or an optimal capacity expansion policy that maximizes an expected profit. A decomposition-based optimization approach is proposed, where planning decisions are taken by coupling economic optimality and plan feasibility without requiring an a priori discretization of the uncertainty. The proposed algorithmic procedure features a highly parallel solution structure which can be exploited for computational efficiency. Three example problems are presented to illustrate the steps of the novel planning under uncertainty optimization algorithm.
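The two-stage structure (commit to a plan now, exercise recourse after uncertainty resolves) can be sketched with a toy example. Note the paper deliberately avoids an a priori discretization of the uncertainty; the scenario discretization below is used only to make the structure concrete, and all numbers are invented:

```python
# Toy two-stage planning sketch: choose first-stage production capacity x,
# then observe demand and sell min(x, demand) at the second stage.
scenarios = [(0.3, 60.0), (0.4, 100.0), (0.3, 140.0)]  # (probability, demand)
price, unit_cost = 5.0, 2.0

def expected_profit(x):
    # first-stage cost is paid up front; the recourse decision per scenario
    # is simply to sell as much as demand allows
    return sum(pr * (price * min(x, d) - unit_cost * x) for pr, d in scenarios)

# brute-force the first-stage decision over a grid
grid = [x / 2.0 for x in range(0, 401)]  # capacities 0.0, 0.5, ..., 200.0
x_star = max(grid, key=expected_profit)
print(x_star, expected_profit(x_star))
```

In the paper the recourse problem is itself an optimization (process utilization, purchases and sales) solved within a decomposition scheme, rather than the closed-form recourse used here.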

Journal ArticleDOI
TL;DR: Upper and lower bounds on two-stage stochastic linear programs using limited moment information are developed using first and cross moments, from which bounds using only first moments are developed.
Abstract: This paper develops upper and lower bounds on two-stage stochastic linear programs using limited moment information. The case considered is when both the right-hand side as well as the objective coefficients of the second stage problem are random. Random variables are allowed to have arbitrary multivariate probability distributions with bounded support. First, upper and lower bounds are obtained using first and cross moments, from which we develop bounds using only first moments. The bounds are shown to solve the respective general moment problems.
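The flavor of first-moment bounds can be shown with the classical pair for a convex recourse-style function: Jensen's inequality gives a lower bound from the mean, and the Edmundson-Madansky bound uses the endpoints of the bounded support. This sketch illustrates only that flavor, not the paper's cross-moment construction; the function and distribution are invented:

```python
# First-moment bounds on E[f(X)] for a convex function f with X uniform
# on a bounded support [a, b].
a, b = 0.0, 10.0
mean = (a + b) / 2.0

def f(xi):
    # convex piecewise-linear "recourse cost" for a fixed first-stage decision
    return max(4.0 - xi, 0.0) + 2.0 * max(xi - 7.0, 0.0)

lower = f(mean)                                            # Jensen
upper = ((b - mean) * f(a) + (mean - a) * f(b)) / (b - a)  # Edmundson-Madansky

# fine midpoint-rule discretization to approximate the true expectation
n = 100000
approx = sum(f(a + (b - a) * (i + 0.5) / n) for i in range(n)) / n
print(lower, approx, upper)
```

The bounds sandwich the true expectation; the paper's contribution is tightening such bounds when both the right-hand side and the objective coefficients are random, using cross moments.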

Journal ArticleDOI
TL;DR: It is shown that asymptotic optimality can be achieved with a finite master program provided that a quadratic regularizing term is included, and that the elimination of the cutting planes impacts neither the number of iterations required nor the statistical properties of the terminal solution.
Abstract: Stochastic decomposition is a stochastic analog of Benders' decomposition in which randomly generated observations of random variables are used to construct statistical estimates of supports of the objective function. In contrast to deterministic Benders' decomposition for two stage stochastic programs, the stochastic version requires infinitely many inequalities to ensure convergence. We show that asymptotic optimality can be achieved with a finite master program provided that a quadratic regularizing term is included. Our computational results suggest that the elimination of the cutting planes impacts neither the number of iterations required nor the statistical properties of the terminal solution.

Proceedings ArticleDOI
11 Dec 1994
TL;DR: The paper explains a method, called sample-path optimization, for optimizing performance functions in certain stochastic systems that can be modeled by simulation, gives conditions under which it converges, and displays sample calculations that indicate how it performs.
Abstract: This paper summarizes information about a method, called sample-path optimization, for optimizing performance functions in certain stochastic systems that can be modeled by simulation. We explain the method, give conditions under which it converges, and display some sample calculations that indicate how it performs. We also describe briefly some more extensive numerical experiments on large systems (PERT networks with up to 110 stochastic arcs, and tandem production lines with up to 50 machines). Details of these experiments are reported elsewhere; we give references to this and other related work. We conclude with some currently unanswered questions.
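The core idea (fix one sample path of the randomness, then optimize the resulting deterministic sample-average function with an ordinary deterministic method) can be sketched on a toy system. The cost function and parameters below are invented, not from the paper's PERT or production-line experiments:

```python
import random

# Sample-path optimization sketch: one fixed sample path of demands turns a
# stochastic cost into a deterministic sample-average function of x.
random.seed(0)
demands = [random.gauss(50.0, 10.0) for _ in range(2000)]  # fixed sample path

def sample_average_cost(x):
    # holding cost 1 per unit of excess, shortage cost 4 per unit short
    return sum(max(x - d, 0.0) + 4.0 * max(d - x, 0.0) for d in demands) / len(demands)

# the fixed-path function is deterministic and convex, so a deterministic
# method applies; here, golden-section search on [0, 100]
phi = (5 ** 0.5 - 1) / 2
lo, hi = 0.0, 100.0
while hi - lo > 1e-6:
    m1, m2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if sample_average_cost(m1) < sample_average_cost(m2):
        hi = m2
    else:
        lo = m1
x_star = (lo + hi) / 2
print(x_star)
```

Because the sample path is frozen, the optimizer never sees simulation noise; convergence of `x_star` to the true optimum as the path lengthens is the kind of result the paper establishes conditions for.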

Journal ArticleDOI
TL;DR: This work considers the problem of optimizing the steady-state mean of a controlled regenerative process using a stochastic optimization algorithm driven by infinitesimal perturbation analysis (IPA) derivative estimates and derives IPA derivative estimates for the problem and proves almost sure convergence of the algorithm.
Abstract: We consider the problem of optimizing the steady-state mean of a controlled regenerative process using a stochastic optimization algorithm driven by infinitesimal perturbation analysis (IPA) derivative estimates. We derive IPA derivative estimates for our problem and prove almost sure convergence of the algorithm. The generality of our formulation should encompass a wide variety of practical systems. We illustrate our framework and results via several examples.

Journal ArticleDOI
TL;DR: An upper bound for the rate of convergence is given in terms of the objective functions of the associated deterministic problems and it is shown how it can be applied to derivation of the Law of Iterated Logarithm for the optimal solutions.
Abstract: In this paper we study stability of optimal solutions of stochastic programming problems with fixed recourse. An upper bound for the rate of convergence is given in terms of the objective functions of the associated deterministic problems. As an example it is shown how it can be applied to derivation of the Law of Iterated Logarithm for the optimal solutions. It is also shown that in the case of simple recourse this upper bound implies upper Lipschitz continuity of the optimal solutions with respect to the Kolmogorov--Smirnov distance between the corresponding cumulative probability distribution functions.

Journal ArticleDOI
TL;DR: A stochastic dynamic programming model for determining the optimal ordering policy for a perishable or potentially obsolete product so as to satisfy known time-varying demand over a specified planning horizon is presented.

Journal ArticleDOI
TL;DR: A parallel matrix factorization procedure using the Sherman–Morrison–Woodbury formula is developed, based on the work of Birge and Qi, that achieves near perfect speedup in deterministic equivalent formulation of two-stage stochastic programs.
Abstract: Solving the deterministic equivalent formulation of two-stage stochastic programs using interior point algorithms requires the solution of linear systems of the form\[ \left( AD^2 A^\top \right) dy = b . \] The constraint matrix A has a dual, block-angular structure. We develop a parallel matrix factorization procedure using the Sherman–Morrison–Woodbury formula, based on the work of Birge and Qi [Management Sci., 34 (1988), pp. 1472–1479]. This procedure requires the solution of smaller, independent systems of equations. With the use of optimal communication algorithms and careful attention to data layout a parallel implementation is obtained that achieves near perfect speedup. Scalable performance is observed on an Intel iPSC/860 hypercube and a Connection Machine CM–5. Results are reported with the solution of the linear systems arising when solving stochastic programs with up to 98,304 scenarios, which correspond to deterministic equivalent linear programs with up to 1,966,090 constraints and 13,762,6...
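The algebra underlying the factorization is the Sherman-Morrison-Woodbury identity, which reduces a solve with a low-rank-modified matrix to a solve with the easy base matrix plus a small "capacitance" system. The matrices below are random and purely illustrative of the identity, not the paper's dual block-angular systems:

```python
import numpy as np

# Sanity check of the Sherman-Morrison-Woodbury identity:
# (S + U V^T)^{-1} = S^{-1} - S^{-1} U (I + V^T S^{-1} U)^{-1} V^T S^{-1}
rng = np.random.default_rng(0)
n, k = 50, 3                              # full dimension, low-rank update size
S = np.diag(rng.uniform(1.0, 2.0, n))     # easy-to-invert "base" matrix
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
b = rng.standard_normal(n)

# direct solve of (S + U V^T) y = b
y_direct = np.linalg.solve(S + U @ V.T, b)

# Woodbury solve: only S^{-1} (diagonal here) and a small k x k system
S_inv_b = b / np.diag(S)
S_inv_U = U / np.diag(S)[:, None]
capacitance = np.eye(k) + V.T @ S_inv_U   # k x k
y_woodbury = S_inv_b - S_inv_U @ np.linalg.solve(capacitance, V.T @ S_inv_b)

print(np.max(np.abs(y_direct - y_woodbury)))
```

In the parallel procedure, the independent diagonal-block solves are distributed across processors and only the small coupled system is handled centrally, which is what yields the near-perfect speedup reported.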

Journal ArticleDOI
TL;DR: In this paper, the authors present two methods for approximating the optimal groundwater pumping policy for several interrelated aquifers in a stochastic setting that also involves conjunctive use of surface water.
Abstract: This paper presents two methods for approximating the optimal groundwater pumping policy for several interrelated aquifers in a stochastic setting that also involves conjunctive use of surface water. The first method employs a policy iteration dynamic programming (DP) algorithm where the value function is estimated by Monte Carlo simulation combined with curve-fitting techniques. The second method uses a Taylor series approximation to the functional equation of DP which reduces the problem, for a given observed state, to solving a system of equations equal in number to the aquifers. The methods are compared using a four-state variable, stochastic dynamic programming model of Madera County, California. The two methods yield nearly identical estimates of the optimal pumping policy, as well as the steady state pumping depth, suggesting that either method can be used in similar applications.

Journal ArticleDOI
TL;DR: Experiments indicate that evolutionary programming generally outperforms genetic algorithms and the General Algebraic Modeling System, a numerical optimization software package.
Abstract: Evolutionary programming is a stochastic optimization procedure that can be applied to difficult combinatorial problems. Experiments are conducted with three standard optimal control problems (linear-quadratic, harvest, and push-cart). The results are compared to those obtained with genetic algorithms and the General Algebraic Modeling System (GAMS), a numerical optimization software package. The results indicate that evolutionary programming generally outperforms genetic algorithms. Evolutionary programming also compares well with GAMS on certain problems for which GAMS is specifically designed and outperforms GAMS on other problems. The computational requirements for each procedure are briefly discussed.

Journal ArticleDOI
TL;DR: The effect of uncertainty on future plant operation is investigated via the incorporation of explicit future plan feasibility constraints into a two-stage stochastic programming formulation with an objective to maximize an expected profit over a time horizon, and via the use of the value of perfect information (VPI).

Journal ArticleDOI
TL;DR: A multistage stochastic linear program can be approximated by replacing the underlying information structure by a coarser structure, and the introduction of the partially aggregated problems leads to a general disaggregation framework for developing heuristic approaches to solving the original problem.
Abstract: A multistage stochastic linear program can be approximated by replacing the underlying information structure by a coarser structure. Given the ordinary Lagrangian for the original problem, this amounts to restricting both the primal and dual variables to be adapted to the coarser information structure. The resulting primal problem has the same form as the original, but with aggregated constraints and decisions. Mutual upper and lower bounds on the optimal values of the original and aggregated problems are obtained via partially aggregated problems formed by restricting only the primal variables or only the dual variables to the coarser structure. The introduction of the partially aggregated problems leads to a general disaggregation framework for developing heuristic approaches to solving the original problem.

Journal ArticleDOI
TL;DR: In this article, the authors considered a two-machine flow shop subject to breakdown and repair of machines and subject to non-negativity constraints on work-in-process and showed that the value function of the problem is locally Lipschitz and is a viscosity solution to the dynamic programming equation together with certain boundary conditions.

Journal ArticleDOI
TL;DR: In this paper, a demand driven stochastic dynamic programming (DDSP) model is developed that allows the use of actual variable monthly demand in generating the operating policies, and the associated penalty for each operating policy is a function of the release and the expected storage.
Abstract: In this study a demand driven stochastic dynamic programming (DDSP) model is developed that allows the use of actual variable monthly demand in generating the operating policies. In DDSP, the uncertainties of the streamflow process, and the forecasts are captured using Bayesian decision theory (BDT)—probabilities are continuously updated for each month. Furthermore, monthly demand along with inflow, storage, and flow forecast are included as hydrologic state variables in the algorithm. In this model the associated penalty for each operating policy is a function of the release and the expected storage. The operating policies are compared and tested in a hydrologic real-time simulation model and in a real-life operation model. The objectives of this paper are: to evaluate the usefulness and the hydrologic reliability of the generated operating policies by DDSP model; and to demonstrate how the assumption of fixed demand in optimization is deficient when the demand is actually variable or uncertain. The reliability of the operating policies is measured in terms of meeting the required demand when the operating policies are applied in simulation/operation models.
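The Bayesian-updating step (revising the probabilities of inflow states each month as forecasts are observed) can be sketched with a discrete Bayes' rule update. The states, likelihoods, and observations below are invented for illustration, not the study's calibration:

```python
# Sketch of monthly Bayesian updating of inflow-state probabilities.
states = ["dry", "normal", "wet"]
prior = {"dry": 1 / 3, "normal": 1 / 3, "wet": 1 / 3}

# P(forecast | true inflow state): forecasts are informative but noisy
likelihood = {
    "dry":    {"dry": 0.6, "normal": 0.3, "wet": 0.1},
    "normal": {"dry": 0.2, "normal": 0.6, "wet": 0.2},
    "wet":    {"dry": 0.1, "normal": 0.3, "wet": 0.6},
}

def bayes_update(belief, forecast):
    # posterior(s) proportional to prior(s) * P(forecast | s)
    unnorm = {s: belief[s] * likelihood[s][forecast] for s in states}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# two consecutive months of "wet" forecasts sharpen belief in wet inflows
post = bayes_update(prior, "wet")
post2 = bayes_update(post, "wet")
print(post, post2)
```

In the DDSP model the updated probabilities feed back into the dynamic program each month, alongside demand, inflow, storage, and forecast state variables.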

Journal ArticleDOI
TL;DR: The goal of this paper is to introduce a new methodology, through an interactive algorithm, for solving multi-objective simulation optimization problems.

Journal ArticleDOI
TL;DR: The stability of the optimal value and the solutions of stochastic programming problems and the stability of sensors that model the possibility of making inquiries to improve the probabilistic information available about the uncertain quantities are analyzed.
Abstract: The paper examines the stability of the optimal value and the solutions of stochastic programming problems. Stability is checked with respect to variations in both the problem formulation and the probability distribution that describes the uncertainty. Of particular interest is the case where the payoff functions may be discontinuous. These results are applied to analyze the stability of sensors that model the possibility of making inquiries to improve the probabilistic information available about the uncertain quantities.

Journal ArticleDOI
TL;DR: A computationally more appealing order-cone decomposition scheme is proposed which behaves quadratically in the number of random variables and the resulting upper and lower approximations are amenable to efficient solution techniques.
Abstract: We previously obtained tight upper and lower bounds to the expectation of a saddle function of multivariate random variables using first and cross moments of the random variables without independence assumptions. These bounds are applicable when domains of the random vectors are compact sets in the euclidean space. In this paper, we extend the results to the case of unbounded domains, similar in spirit to the extensions by Birge and Wets in the pure convex case. The relationship of these bounds to a certain generalized moment problem is also investigated. Finally, for solving stochastic linear programs utilizing the above bounding procedures, a computationally more appealing order-cone decomposition scheme is proposed which behaves quadratically in the number of random variables. Moreover, the resulting upper and lower approximations are amenable to efficient solution techniques.

Journal ArticleDOI
01 Oct 1994-Networks
TL;DR: A new method is proposed to approximate the expected recourse function as a convex, separable function of the supplies to the second stage, by decomposing the second-stage network via Lagrangian relaxation into tree subproblems whose expected recourse functions are numerically tractable.
Abstract: We study a class of two-stage dynamic networks with random arc capacities where the decisions in the first stage must be made before realizing the random quantities in the second stage. The expected total cost of the second stage is a function of the first-stage decisions, known as the expected recourse function, which is generally intractable. This paper proposes a new method to approximate the expected recourse function as a convex, separable function of the supplies to the second stage. First, the second-stage network is decomposed using Lagrangian relaxation into tree subproblems whose expected recourse functions are numerically tractable. Subgradient optimization is then used to update the Lagrange multipliers to improve the approximations. Numerical experiments show that this structural decomposition approach can produce high-quality approximations of the expected recourse functions for large stochastic networks. © 1994 by John Wiley & Sons, Inc.