
Showing papers on "Stochastic programming" published in 1987


Book
01 Apr 1987
TL;DR: Dynamic Programming: Deterministic and Stochastic Models, a textbook treatment of dynamic programming in both its deterministic and stochastic forms.

1,311 citations


Book
01 Oct 1987
TL;DR: This book surveys global optimization methods for nonconvex quadratic problems, including enumerative, cutting plane, branch and bound, and bilinear programming methods, together with test problems for global nonconvex quadratic programming algorithms.
Abstract: Convex sets and functions. Optimality conditions in nonlinear programming. Combinatorial optimization problems that can be formulated as nonconvex quadratic problems. Enumerative methods in nonconvex programming. Cutting plane methods. Branch and bound methods. Bilinear programming methods for nonconvex quadratic problems. Large scale problems. Global minimization of indefinite quadratic problems. Test problems for global nonconvex quadratic programming algorithms.

472 citations


Journal ArticleDOI
Jon M. Conrad
TL;DR: A book-length treatment of dynamic programming for agricultural and natural resource management, including a general-purpose computer program (GPDP) for solving deterministic, stochastic and infinite-stage dynamic programming problems, with a Lagrangian derivation of the Discrete Maximum Principle given in an appendix.
Abstract:
I Introduction.
1 The Management of Agricultural and Natural Resource Systems: 1.1 The Nature of Agricultural and Natural Resource Problems; 1.2 Management Techniques Applied to Resource Problems (1.2.1 Farm management; 1.2.2 Forestry management; 1.2.3 Fisheries management); 1.3 Control Variables in Resource Management (1.3.1 Input decisions; 1.3.2 Output decisions; 1.3.3 Timing and replacement decisions); 1.4 A Simple Derivation of the Conditions for Intertemporal Optimality (1.4.1 The general resource problem without replacement; 1.4.2 The general resource problem with replacement); 1.5 Numerical Dynamic Programming (1.5.1 Types of resource problem; 1.5.2 Links with simulation; 1.5.3 Solution procedures; 1.5.4 Types of dynamic programming problem); 1.6 References; 1.A Appendix: A Lagrangian Derivation of the Discrete Maximum Principle; 1.B Appendix: A Note on the Hamiltonian Used in Control Theory.
II The Methods of Dynamic Programming.
2 Introduction to Dynamic Programming: 2.1 Backward Recursion Applied to the General Resource Problem; 2.2 The Principle of Optimality; 2.3 The Structure of Dynamic Programming Problems; 2.4 A Numerical Example; 2.5 Forward Recursion and Stage Numbering; 2.6 A Simple Crop-irrigation Problem (2.6.1 The formulation of the problem; 2.6.2 The solution procedure); 2.7 A General-Purpose Computer Program for Solving Dynamic Programming Problems (2.7.1 An introduction to the GPDP programs; 2.7.2 Data entry using DPD; 2.7.3 Using GPDP to solve the least-cost network problem; 2.7.4 Using GPDP to solve the crop-irrigation problem); 2.8 References.
3 Stochastic and Infinite-Stage Dynamic Programming: 3.1 Stochastic Dynamic Programming (3.1.1 Formulation of the stochastic problem; 3.1.2 A stochastic crop-irrigation problem); 3.2 Infinite-stage Dynamic Programming for Problems With Discounting (3.2.1 Formulation of the problem; 3.2.2 Solution by value iteration; 3.2.3 Solution by policy iteration); 3.3 Infinite-stage Dynamic Programming for Problems Without Discounting (3.3.1 Formulation of the problem; 3.3.2 Solution by value iteration; 3.3.3 Solution by policy iteration); 3.4 Solving Infinite-stage Problems in Practice (3.4.1 Applications to agriculture and natural resources; 3.4.2 The infinite-stage crop-irrigation problem; 3.4.3 Solution to the crop-irrigation problem with discounting; 3.4.4 Solution to the crop-irrigation problem without discounting); 3.5 Using GPDP to Solve Stochastic and Infinite-stage Problems (3.5.1 Stochastic problems; 3.5.2 Infinite-stage problems); 3.6 References.
4 Extensions to the Basic Formulation: 4.1 Linear Programming for Solving Stochastic, Infinite-stage Problems (4.1.1 Linear programming formulations of problems with discounting; 4.1.2 Linear programming formulations of problems without discounting); 4.2 Adaptive Dynamic Programming; 4.3 Analytical Dynamic Programming (4.3.1 Deterministic, quadratic return, linear transformation problems; 4.3.2 Stochastic, quadratic return, linear transformation problems; 4.3.3 Other problems which can be solved analytically); 4.4 Approximately Optimal Infinite-stage Solutions; 4.5 Multiple Objectives (4.5.1 Multi-attribute utility; 4.5.2 Risk; 4.5.3 Problems involving players with conflicting objectives); 4.6 Alternative Computational Methods (4.6.1 Approximating the value function in continuous form; 4.6.2 Alternative dynamic programming structures; 4.6.3 Successive approximations around a nominal control policy; 4.6.4 Solving a sequence of problems of reduced dimension; 4.6.5 The Lagrange multiplier method); 4.7 Further Information on GPDP (4.7.1 The format for user-written data files; 4.7.2 Redimensioning arrays in FDP and IDP); 4.8 References; 4.A Appendix: The Slope and Curvature of the Optimal Return Function V_i(x_i).
III Dynamic Programming Applications to Agriculture.
5 Scheduling, Replacement and Inventory Management: 5.1 Critical Path Analysis (5.1.1 A farm example; 5.1.2 Solution using GPDP; 5.1.3 Selected applications); 5.2 Farm Investment Decisions (5.2.1 Optimal tractor replacement; 5.2.2 Formulation of the problem without tax; 5.2.3 Formulation of the problem with tax; 5.2.4 Discussion; 5.2.5 Selected applications); 5.3 Buffer Stock Policies (5.3.1 Stochastic yields: planned production and demand constant; 5.3.2 Stochastic yields and demand: planned production constant; 5.3.3 Planned production a decision variable; 5.3.4 Selected applications); 5.4 References.
6 Crop Management: 6.1 The Crop Decision Problem (6.1.1 States; 6.1.2 Stages; 6.1.3 Returns; 6.1.4 Decisions); 6.2 Applications to Water Management; 6.3 Applications to Pesticide Management; 6.4 Applications to Crop Selection; 6.5 Applications to Fertilizer Management (6.5.1 Optimal rules for single-period carryover functions; 6.5.2 Optimal rules for a multiperiod carryover function; 6.5.3 A numerical example; 6.5.4 Extensions); 6.6 References.
7 Livestock Management: 7.1 Livestock Decision Problems; 7.2 Livestock Replacement Decisions (7.2.1 Types of problem; 7.2.2 Applications to dairy cows; 7.2.3 Periodic revision of estimated yield potential); 7.3 Combined Feeding and Replacement Decisions (7.3.1 The optimal ration sequence: an example; 7.3.2 Maximizing net returns per unit of time; 7.3.3 Replacement a decision option); 7.4 Extensions to the Combined Feeding and Replacement Problem (7.4.1 The number of livestock; 7.4.2 Variable livestock prices; 7.4.3 Stochastic livestock prices; 7.4.4 Ration formulation systems); 7.5 References; 7.A Appendix: Yield Repeatability and Adaptive Dynamic Programming (7.A.1 The concept of yield repeatability; 7.A.2 Repeatability of average yield; 7.A.3 Expected yield given average individual and herd yields; 7.A.4 Yield probabilities conditional on recorded average yields).
IV Dynamic Programming Applications to Natural Resources.
8 Land Management: 8.1 The Theory of Exhaustible Resources (8.1.1 The simple theory of the mine; 8.1.2 Risky possession and risk aversion; 8.1.3 Exploration); 8.2 A Pollution Problem (8.2.1 Pollution as a stock variable; 8.2.2 A numerical example); 8.3 Rules for Making Irreversible Decisions Under Uncertainty (8.3.1 Irreversible decisions and quasi-option value; 8.3.2 A numerical example; 8.3.3 The discounting procedure); 8.4 References.
9 Forestry Management: 9.1 Problems in Forestry Management; 9.2 The Optimal Rotation Period (9.2.1 Deterministic problems; 9.2.2 Stochastic problems; 9.2.3 A numerical example of a combined rotation and protection problem); 9.3 The Optimal Rotation and Thinning Problem (9.3.1 Stage intervals; 9.3.2 State variables; 9.3.3 Decision variables; 9.3.4 Objective function); 9.4 Extensions (9.4.1 Allowance for distributions of tree sizes and ages; 9.4.2 Alternative objectives); 9.5 References.
10 Fisheries Management: 10.1 The Management Problem; 10.2 Modelling Approaches (10.2.1 Stock dynamics; 10.2.2 Stage return; 10.2.3 Developments in analytical modelling); 10.3 Analytical Dynamic Programming Approaches (10.3.1 Deterministic results; 10.3.2 Stochastic results); 10.4 Numerical Dynamic Programming Applications (10.4.1 An application to the southern bluefin tuna fishery; 10.4.2 A review of applications); 10.5 References.
V Conclusion.
11 The Scope for Dynamic Programming Applied to Resource Management: 11.1 Dynamic Programming as a Method of Conceptualizing Resource Problems; 11.2 Dynamic Programming as a Solution Technique; 11.3 Applications to Date; 11.4 Expected Developments; 11.5 References.
Appendices: A1 Coding Sheets for Entering Data Using DPD; A2 Program Listings (A2.1 Listing of DPD; A2.2 Listing of FDP; A2.3 Listing of IDP; A2.4 Listing of DIM). Author Index.

204 citations


Journal ArticleDOI
TL;DR: The minimax approach to stochastic programming with recourse deals with incomplete knowledge of the distribution F of the random coefficients; it leads to a deterministic minimax program whose objective can be evaluated using general results on the moment problem.
Abstract: The minimax approach to stochastic programming with recourse deals with the case of incomplete knowledge of the distribution F of the random coefficients. It leads to the deterministic program

\[
(*)\qquad \min_{x \in X}\; \sup_{F \in \mathcal{F}}\; \int \varphi(x,\omega)\, F(d\omega),
\]

where X is a given set of admissible solutions, φ is a recourse function and 𝓕 is a given set of distributions. To express the objective function of (*) in a form suitable for further computation, general results on the moment problem can be used. In Section 1, the main ideas are briefly surveyed. Subsequently, the method is applied to (*) in Section 2. It is shown how the results can be used both to draw conclusions on robustness of the optimal value of the given stochastic program with respect to the set of distributions considered, and to study sensitivity of the optimal solution with respect to a specified distribution. In Section 3, an application to a stochastic model of a water resource system with incomplete knowledge about the distribution of the random demand is suggested.

177 citations


Journal ArticleDOI
TL;DR: The authors provide sufficient conditions for the equilibrium price system and a vector of exogenously specified state variable processes to form a diffusion process in a pure exchange economy; the conditions involve smoothness of agents' utility functions and certain nice properties of the aggregate endowment process and the dividend processes of traded assets.
Abstract: This paper provides sufficient conditions for the equilibrium price system and a vector of exogenously specified state variable processes to form a diffusion process in a pure exchange economy. The conditions involve smoothness of agents' utility functions and certain nice properties of the aggregate endowment process and the dividend processes of traded assets. In place of dynamic programming, a martingale representation technique is utilized to characterize equilibrium portfolio policies. This technique is useful even when there does not exist a finite dimensional Markov structure in the economy and thus Markovian stochastic dynamic programming is not applicable. Agents are shown to hold certain hedging mutual funds and the riskless asset. In contrast to earlier results, the market portfolio does not have a special role in hedging, since the markets are dynamically complete. When there exists a finite dimensional Markov system in the economy, the dimension of the hedging demand identified through the Markovian dynamic programming may be much larger than that identified by the martingale method. Copyright 1987 by The Econometric Society.

150 citations


Journal ArticleDOI
TL;DR: In this article, the sufficiency tests are applied to the necessary conditions to determine when solutions of the stochastic optimization problems also solve the deterministic robust stability problems, and the modified Riccati equation approach of Petersen and Hollot is generalized in the static case and extended to dynamic compensation.
Abstract: Three parallel gaps in robust feedback control theory are examined: sufficiency versus necessity, deterministic versus stochastic uncertainty modeling, and stability versus performance. Deterministic and stochastic output-feedback control problems are considered with both static and dynamic controllers. The static and dynamic robust stabilization problems involve deterministically modeled bounded but unknown measurable time-varying parameter variations, while the static and dynamic stochastic optimal control problems feature state-, control-, and measurement-dependent white noise. General sufficiency conditions for the deterministic problems are obtained using Lyapunov's direct method, while necessary conditions for the stochastic problems are derived as a consequence of minimizing a quadratic performance criterion. The sufficiency tests are then applied to the necessary conditions to determine when solutions of the stochastic optimization problems also solve the deterministic robust stability problems. As an additional application of the deterministic result, the modified Riccati equation approach of Petersen and Hollot is generalized in the static case and extended to dynamic compensation.
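As background for the sufficiency side (a standard construction, not taken from this paper): for a linear system with deterministically bounded time-varying parameter uncertainty, Lyapunov's direct method with a quadratic candidate V(x) = xᵀPx, P ≻ 0, gives robust stability whenever

\[
(A + \Delta A(t))^{\top} P + P\,(A + \Delta A(t)) \prec 0 \quad \text{for all admissible } \Delta A(t),
\]

since then the derivative of V along trajectories is negative for every admissible parameter variation. The paper's sufficiency conditions are of this general flavor, developed for static and dynamic output feedback and coupled to a quadratic performance criterion.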

130 citations


Journal ArticleDOI
TL;DR: Two dynamic programming models, one deterministic (DPR) and one stochastic (SDP), are compared as generators of reservoir operating rules; for the test cases, the DPR-generated rules are more effective in the operation of medium to very large reservoirs, and the SDP-generated rules are more effective for small reservoirs.
Abstract: Two dynamic programming models — one deterministic and one stochastic — that may be used to generate reservoir operating rules are compared. The deterministic model (DPR) consists of an algorithm that cycles through three components: a dynamic program, a regression analysis, and a simulation. In this model, the correlation between the general operating rules, defined by the regression analysis and evaluated in the simulation, and the optimal deterministic operation defined by the dynamic program is increased through an iterative process. The stochastic dynamic program (SDP) describes streamflows with a discrete lag-one Markov process. To test the usefulness of both models in generating reservoir operating rules, real-time reservoir operation simulation models are constructed for three hydrologically different sites. The rules generated by DPR and SDP are then applied in the operation simulation model and their performance is evaluated. For the test cases, the DPR generated rules are more effective in the operation of medium to very large reservoirs and the SDP generated rules are more effective for the operation of small reservoirs.
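A minimal sketch of the stochastic model's core computation, assuming a discretized storage state and a lag-one Markov chain over inflow classes (all numbers and names here are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical discretization: storage grid chosen so each state equals its index.
S = np.arange(0, 5)            # storage states (units of volume)
Q = np.array([1, 2, 3])        # discrete inflow classes
P = np.array([[0.6, 0.3, 0.1], # lag-one Markov transitions:
              [0.2, 0.6, 0.2], # P[i, j] = Pr(next inflow class j | current i)
              [0.1, 0.3, 0.6]])
R = np.arange(0, 4)            # candidate releases
S_MAX = S[-1]

def benefit(release):
    # Hypothetical concave benefit of water released (e.g., hydropower).
    return np.sqrt(release)

# Backward recursion over a finite horizon of T stages.
T = 50
V = np.zeros((len(S), len(Q)))                  # terminal values
policy = np.zeros((T, len(S), len(Q)), dtype=int)
for t in reversed(range(T)):
    V_new = np.full_like(V, -np.inf)
    for si, s in enumerate(S):
        for qi, q in enumerate(Q):
            for ri, r in enumerate(R):
                s_next = min(s + q - r, S_MAX)  # mass balance with spill
                if s_next < 0:                  # infeasible release
                    continue
                ev = P[qi] @ V[int(s_next)]     # expectation over next inflow
                val = benefit(r) + ev
                if val > V_new[si, qi]:
                    V_new[si, qi] = val
                    policy[t, si, qi] = ri
    V = V_new
print(policy[0])  # release rule indexed by (storage, current inflow class)
```

The DPR alternative described in the abstract instead optimizes against a deterministic inflow record and fits general operating rules to the optimal releases by regression, checking them in simulation.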

117 citations


Journal ArticleDOI
TL;DR: The effect of changes in problem functions and/or distributions in certain two-stage stochastic programming problems with recourse is analyzed; under reasonable assumptions, the locally optimal value of the perturbed problem is continuous and the corresponding set of local optimizers is upper semicontinuous with respect to the parameters, including the second-stage probability distribution.
Abstract: We analyze the effect of changes in problem functions and/or distributions in certain two-stage stochastic programming problems with recourse. Under reasonable assumptions the locally optimal value of the perturbed problem will be continuous and the corresponding set of local optimizers will be upper semicontinuous with respect to the parameters (including the probability distribution in the second stage).
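For reference, the generic two-stage stochastic program with recourse that such stability results concern can be written in the standard form (not copied from the paper)

\[
\min_{x \in X}\; c^{\top}x + \mathbb{E}_{\xi \sim F}\, Q(x,\xi),
\qquad
Q(x,\xi) = \min_{y \ge 0}\,\{\, q(\xi)^{\top}y \;:\; Wy = h(\xi) - T(\xi)\,x \,\},
\]

and the perturbation question is how the (locally) optimal value and the set of (local) minimizers behave as the problem functions and the distribution F vary.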

105 citations


Journal ArticleDOI
TL;DR: The relationship between fuzzy programming and stochastic programming is determined and equivalent precise analogues are derived for these problems.

97 citations


Journal ArticleDOI
TL;DR: Backward, forward, and backward-forward abstract dynamic programming models are presented that weaken previous sufficient conditions and that include, but are not restricted to, optimization problems; several extremization and nonextremization problems illustrate their applicability.
Abstract: The unifying purpose of the abstract dynamic programming models is to find sufficient conditions on the recursive definition of the objective function that guarantee the validity of the dynamic programming iteration. This paper presents backward, forward, and backward-forward models that weaken previous sufficient conditions and that include, but are not restricted to, optimization problems. The backward-forward model is devoted to the simultaneous solution of a collection of interrelated sequential problems based on the independent computation of a cost-to-arrive function and a cost-to-go function. Several extremization and nonextremization problems illustrate the applicability of the proposed models.
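A small illustration of the backward-forward idea on a toy layered shortest-path instance (additive costs assumed; the instance is invented): the cost-to-arrive function is computed by a forward pass, the cost-to-go function by a backward pass, and their sum at a node is the best cost among paths constrained to pass through that node.

```python
# Toy DAG: nodes listed in topological order, weighted edges.
edges = {           # edges[u] = list of (v, cost)
    "s": [("a", 2), ("b", 5)],
    "a": [("b", 1), ("t", 7)],
    "b": [("t", 3)],
    "t": [],
}
order = ["s", "a", "b", "t"]
INF = float("inf")

# Forward pass: cost-to-arrive from "s" at each node.
arrive = {u: INF for u in order}
arrive["s"] = 0
for u in order:
    for v, c in edges[u]:
        arrive[v] = min(arrive[v], arrive[u] + c)

# Backward pass: cost-to-go from each node to "t".
go = {u: INF for u in order}
go["t"] = 0
for u in reversed(order):
    for v, c in edges[u]:
        go[u] = min(go[u], c + go[v])

# Best total cost of any s-t path passing through u: arrive[u] + go[u].
for u in order:
    print(u, arrive[u] + go[u])
```

Computing both functions once answers a whole collection of interrelated "through node u" problems simultaneously, which is the point of the backward-forward model's independent cost-to-arrive and cost-to-go computations.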

88 citations


Journal ArticleDOI
TL;DR: Bounds on the expected value of a convex function are obtained by means of an approximating generalized moment problem, and numerical implementation is discussed in the context of stochastic programming problems.
Abstract: Bounds on the expected value of a convex function are obtained by means of an approximating generalized moment problem. Numerical implementation is discussed in the context of stochastic programming problems.
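The classical special case that such generalized moment problems extend (standard results, not the paper's sharper construction): for convex f and a random ξ supported on [a, b] with mean μ, Jensen's inequality and the Edmundson-Madansky inequality give

\[
f(\mu) \;\le\; \mathbb{E}\,f(\xi) \;\le\; \frac{b-\mu}{b-a}\,f(a) + \frac{\mu-a}{b-a}\,f(b),
\]

where the upper bound is the expectation under the extremal two-point distribution on {a, b} with the prescribed mean, i.e., the solution of a simple moment problem.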

Journal ArticleDOI
TL;DR: In this article, an optimization model is developed which can be used as an analytical tool in the decision-making process for reservoir operation, taking into consideration the uncertainty of forecast at the time a policy must be established.
Abstract: Hydropower production plays an important role in the operation of electric energy production-distribution systems. Since most of the economically feasible hydroelectric sites have already been developed, it is necessary to examine and develop practical, real-time operational models which can be used to increase the output from existing hydropower plants. Three main issues are addressed by this research: the potential of increasing the output from existing hydropower plants, the alleviation of dimensionality problems for multistate dynamic programming, and the use of probabilistic forecast in the decision-making process. An optimization model is developed which can be used as an analytical tool in the decision-making process for reservoir operation. The model takes into consideration the uncertainty of forecast at the time a policy must be established. The uncertainty is expressed in terms of the second moments of the forecast probability distributions. There is no limitation on the type of distribution, and it is assumed that forecast is made by a conceptual type of watershed model. The proposed methodology is applicable to constrained stochastic systems with quadratic objective functions and linear dynamics. It uses the decomposition principle of dynamic programming without discretizing the state or control variable and therefore the method can be used for large-scale systems. It is an iterative procedure which requires an initially feasible solution and solves a series of quadratic programming problems at each iteration. The applicability of the research is demonstrated through case studies.
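The problem class named at the end, quadratic objectives with linear dynamics, is the LQ setting; the following is a sketch of the standard finite-horizon LQ backward recursion that such decomposition methods build on (a textbook recursion with invented numbers, not the paper's constrained iterative algorithm):

```python
import numpy as np

# Hypothetical 2-state, 1-control linear system:
# x_{t+1} = A x_t + B u_t, cost = sum of x'Qx + u'Ru over the horizon.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.5]])
Q = np.eye(2)
R = np.array([[1.0]])
T = 20

# Backward Riccati recursion for the finite-horizon LQ regulator.
P = Q.copy()                 # terminal cost weight
gains = []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
    P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
    gains.append(K)
gains.reverse()              # gains[t] applies at stage t: u_t = -K_t x_t

x = np.array([[1.0], [0.5]])
for K in gains:
    u = -K @ x
    x = A @ x + B @ u
print(x.ravel())             # state is driven toward the origin
```

The recursion avoids discretizing states or controls, which is the property the abstract highlights for large-scale systems; the paper's method additionally handles constraints and forecast uncertainty through iterated quadratic programs.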

Journal ArticleDOI
TL;DR: A stochastic dynamic programming approach, together with a numerical example, is used to compute quality control policies, establishing a linkage between quality control, the production learning process, and the firm's propensity to use quality control under various conditions.
Abstract: The purpose of this paper is to establish a linkage between quality control, the production learning process, and the firm's propensity to use quality control under various conditions. A stochastic dynamic programming approach, together with a numerical example, is used to compute quality control policies and to justify the intensive use of quality control in manufacturing.

01 Jan 1987
TL;DR: The stability problem with respect to the probability measure, involved with those approximations as well as with inexact information in applied problems, is discussed for recourse and chance constrained models.
Abstract: It has become an accepted approach to attack stochastic programming problems by approximating the given probability distribution in various ways. After sketching one of these approaches for recourse problems, the stability problem with respect to the probability measure, involved with those approximations as well as with inexact information in applied problems, is discussed for recourse and chance constrained models. Published in: J. Guddat, H.Th. Jongen, B. Kummer, and F. Nožicka (eds): Parametric Optimization and Related Topics. Akademie-Verlag, Berlin (1987) 387–407.

Journal ArticleDOI
TL;DR: A duality theory is developed in which a general relation between φ-divergence and utility functions is revealed, via the conjugate transform, and a new type of certainty equivalent concept emerges.
Abstract: The paper considers stochastically constrained nonlinear programming problems. A penalty function is constructed in terms of a "distance" between random variables, defined in terms of the φ-divergence functional, a generalization of the relative entropy. A duality theory is developed in which a general relation between φ-divergence and utility functions is revealed, via the conjugate transform, and a new type of certainty equivalent concept emerges.
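For orientation, the standard definitions behind this construction (the paper's notation may differ): for convex φ with φ(1) = 0, the φ-divergence of P from Q, the conjugate transform of φ, and the resulting variational (dual) representation are

\[
I_{\varphi}(P,Q) = \int \varphi\!\Big(\frac{dP}{dQ}\Big)\, dQ,
\qquad
\varphi^{*}(s) = \sup_{t}\,\{\,st - \varphi(t)\,\},
\qquad
I_{\varphi}(P,Q) = \sup_{g}\,\Big\{\, \mathbb{E}_{P}[g] - \mathbb{E}_{Q}[\varphi^{*}(g)] \,\Big\},
\]

with the relative entropy recovered for φ(t) = t log t. Utility-like functions g enter through the conjugate φ*, which is the kind of relation the duality theory develops.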

Journal ArticleDOI
TL;DR: In this article, a chance-constrained stochastic programming model is developed for water quality optimization, which determines the least cost allocation of waste treatment plant biochemical oxygen demand (BOD) removal efficiencies, subject to probabilistic restrictions on maximum allowable instream dissolved oxygen deficit.
Abstract: A chance-constrained stochastic programming model is developed for water quality optimization. It determines the least cost allocation of waste treatment plant biochemical oxygen demand (BOD) removal efficiencies, subject to probabilistic restrictions on maximum allowable instream dissolved oxygen deficit. The new model extends well beyond traditional approaches that assume streamflow is the sole random variable. In addition to streamflow, other random variables in the model are initial in-stream BOD level and dissolved oxygen (DO) deficit; waste outfall flow rates, BOD levels and DO deficits; deoxygenation k1, reaeration k2, and sedimentation-scour rate k3 coefficients of the Streeter-Phelps DO sag model; photosynthetic input-benthic depletion rates Ai, and nonpoint source BOD input rate Pi for the Camp-Dobbins extensions to the Streeter-Phelps model. These random variables appear in more highly aggregated terms which in turn form part of the probabilistic constraints of the water quality optimization model. Stochastic simulation procedures for estimating the probability density functions and covariances of these aggregated terms are discussed. A new chance-constrained programming variant, imbedded chance constraints, is presented along with an example application. In effect, this method imbeds a chance constraint within a chance constraint in a manner which is loosely associated with the distribution-free method of chance-constrained programming. It permits the selection of nonexpected value realizations of the mean and variance estimates employed in the deterministic equivalents of traditional chance-constrained models. As well, it provides a convenient mechanism for generating constraint probability response surfaces. A joint chance-constrained formulation is also presented which illustrates the possibility for prescription of an overall system reliability level, rather than reach-by-reach reliability assignment.
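A minimal sketch of the kind of stochastic simulation step the abstract mentions, using the classical Streeter-Phelps deficit formula with invented distributions and parameters (everything below is hypothetical, for illustration only): estimate the reliability of the dissolved oxygen constraint for candidate BOD removal efficiencies.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                       # Monte Carlo replicates

# Hypothetical random inputs (assumes k2 > k1 with high probability).
k1 = rng.lognormal(mean=np.log(0.35), sigma=0.15, size=N)  # deoxygenation
k2 = rng.lognormal(mean=np.log(0.70), sigma=0.20, size=N)  # reaeration
L0 = rng.normal(20.0, 3.0, size=N)                         # initial BOD (mg/L)
D0 = rng.normal(1.5, 0.4, size=N)                          # initial DO deficit

def deficit(t, removal):
    """Streeter-Phelps DO deficit at travel time t, removal efficiency in [0, 1]."""
    L = L0 * (1.0 - removal)
    return (k1 * L / (k2 - k1)) * (np.exp(-k1 * t) - np.exp(-k2 * t)) \
           + D0 * np.exp(-k2 * t)

D_MAX = 6.0                       # allowable deficit (mg/L)
for removal in (0.5, 0.7, 0.9):
    reliability = np.mean(deficit(t=2.0, removal=removal) <= D_MAX)
    print(f"removal={removal:.0%}  P(deficit <= {D_MAX}) ~ {reliability:.3f}")
```

Sampling the aggregated random terms in this way yields the empirical distributions and covariances that feed the chance constraints of the optimization model.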

Journal ArticleDOI
TL;DR: A special Lyapunov function technique is used to show that the method is convergent with probability 1 to stationary points and recovers the subgradient and Lagrange multipliers that appear in necessary conditions of optimality.
Abstract: For a stochastic programming problem with a nonsmooth objective, a recursive stochastic subgradient method is proposed in which successive directions are obtained from quadratic programming subproblems. The subproblems result from linearization of the original constraints and from approximation of the objective by a quadratic function involving stochastic ε-subgradient estimates constructed in the course of computation. A special Lyapunov function technique is used to show that the method is convergent with probability 1 to stationary points and recovers the subgradients and Lagrange multipliers that appear in necessary conditions of optimality.
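A much-simplified sketch of the underlying recursion, with projection onto a box standing in for the paper's quadratic programming direction-finding subproblems (objective, data, and step-size rule invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.zeros(3)                     # iterate
lo, hi = -1.0, 1.0                  # box constraint, a stand-in for the
                                    # linearized constraints of the paper

def sample():
    # Hypothetical stochastic data: minimize f(x) = E|a.x - b|.
    a = rng.normal(size=3)
    b = a @ np.array([0.5, -0.2, 0.8]) + 0.1 * rng.normal()
    return a, b

for k in range(1, 20_001):
    a, b = sample()
    g = np.sign(a @ x - b) * a      # stochastic subgradient of |a.x - b|
    x = x - (1.0 / k**0.75) * g     # diminishing, non-summable step size
    x = np.clip(x, lo, hi)          # projection onto the feasible box
print(x)                            # approaches roughly [0.5, -0.2, 0.8]
```

The paper replaces the crude projection step by a QP whose quadratic model uses accumulated ε-subgradient estimates, which is what yields convergence with probability 1 together with multiplier estimates.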

Journal ArticleDOI
TL;DR: In this article, the authors considered the discounted return criterion for semi-Markov decision models and established conditions for the existence of a policy which is asymptotically discount optimal uniformly in the unknown parameter.
Abstract: The principle of estimation and control was introduced and studied independently by Kurano and Mandl under the average return criterion for models in which some of the data depend on an unknown parameter. Kurano and Mandl considered Markov decision models with finite state space and bounded rewards. Conditions are established for the existence of an optimal policy based on a consistent estimator for the unknown parameter which is optimal uniformly in the parameter. These results were extended by Kolonko to semi-Markov models with denumerable state space and unbounded rewards. The present paper considers the same principle of estimation and control for the discounted return criterion. The underlying semi-Markov decision model may have a denumerable state space and unbounded rewards. Conditions are established for the existence of a policy which is asymptotically discount optimal uniformly in the unknown parameter. The essential conditions are continuity and compactness conditions and a multiplicative form ...

Journal ArticleDOI
TL;DR: An approximation formula for chance constraints is presented which can be used in either the single- or multiple-objective case; it places a bound on the chance constraint at least as tight as the true non-linear form, thus overachieving the chance constraint at the expense of other constraints or objectives.
Abstract: Decision environments involve the need to solve problems with varying degrees of uncertainty as well as multiple, potentially conflicting objectives. Chance constraints consider the uncertainty encountered. Codes incorporating chance constraints into a mathematical programming model are not available on a widespread basis owing to the non-linear form of the chance constraints. Therefore, accurate linear approximations would be useful to analyse this class of problems with efficient linear codes. This paper presents an approximation formula for chance constraints which can be used in either the single- or multiple-objective case. The approximation presented will place a bound on the chance constraint at least as tight as the true non-linear form, thus overachieving the chance constraint at the expense of other constraints or objectives.
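For context, the standard normal-case deterministic equivalent that makes the non-linearity explicit (background, not the paper's specific approximation): if the coefficients ãⱼ are independent normals with means μⱼ and variances σⱼ², then for α ≥ 0.5

\[
\Pr\Big(\sum_{j} \tilde a_{j} x_{j} \le b\Big) \ge \alpha
\;\Longleftrightarrow\;
\sum_{j} \mu_{j} x_{j} + \Phi^{-1}(\alpha)\sqrt{\sum_{j} \sigma_{j}^{2} x_{j}^{2}} \;\le\; b,
\]

where Φ⁻¹ is the standard normal quantile. It is the square-root term that a linear formula must bound from above, which is why such an approximation is at least as tight as, and overachieves, the true constraint.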

Journal ArticleDOI
TL;DR: A survey of methods for stochastic programming decision models with incomplete knowledge of the distribution of the random parameters, which can be used to study robustness of the optimal solution and of the optimal value of the objective function relative to small changes of the underlying distribution.
Abstract: The possibility of successful applications of stochastic programming decision models has been limited by the assumed complete knowledge of the distribution F of the random parameters as well as by the limited scope of the existing numerical procedures. We shall give a survey of selected methods which can be used to deal with the incomplete knowledge of the distribution F, namely to study robustness of the optimal solution and the optimal value of the objective function relative to small changes of the underlying distribution and to get error bounds in approximation schemes.


Journal ArticleDOI
TL;DR: A new piecewise linear upper bound is presented on the network recourse function, the optimal value of a pure minimum cost network flow problem as a function of supply, demand and arc capacities; compared to the standard Madansky bound it is shown computationally to be a little weaker, but much faster to find.
Abstract: We consider the optimal value of a pure minimum cost network flow problem as a function of supply, demand and arc capacities. We present a new piecewise linear upper bound on this function, which is called the network recourse function. The bound is compared to the standard Madansky bound, and is shown computationally to be a little weaker, but much faster to find. The amount of work is linear in the number of stochastic variables, not exponential as is the case for the Madansky bound. Therefore, the reduction in work increases as the number of stochastic variables increases. Computational results are presented.
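A sketch of what evaluating the network recourse function at one scenario looks like, using networkx on a toy instance (the paper's bound construction itself is not reproduced). A Madansky-style bound requires evaluating the recourse cost at every combination of extreme values of the stochastic parameters, hence 2^n evaluations for n stochastic variables; here n = 1 for illustration.

```python
import networkx as nx

def recourse_cost(demand_at_t):
    """Min-cost flow value for one realization of the stochastic demand."""
    G = nx.DiGraph()
    G.add_node("s", demand=-demand_at_t)         # supply node
    G.add_node("t", demand=demand_at_t)          # demand node (stochastic)
    G.add_edge("s", "m", capacity=10, weight=1)
    G.add_edge("m", "t", capacity=10, weight=2)
    G.add_edge("s", "t", capacity=10, weight=5)  # expensive direct arc
    flow = nx.min_cost_flow(G)
    return nx.cost_of_flow(G, flow)

# Corner average (the two-point extremal distribution matching the mean)
# upper-bounds the value at the mean, since the recourse function is convex.
lo, hi, mean = 4, 12, 8
upper = 0.5 * recourse_cost(lo) + 0.5 * recourse_cost(hi)
print(recourse_cost(mean), upper)   # value at mean vs. corner-based bound
```

With many stochastic demands and capacities, the corner enumeration explodes exponentially, which is exactly the cost the paper's linear-in-n bound avoids.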

Journal ArticleDOI
TL;DR: It is shown that fully polynomial time approximation schemes can be developed for a large class of stochastic programming problems with 0–1 variables in which cost coefficients are subject to independent normal distributions, provided that the deterministic versions obtained by replacing the cost coefficients with constants are solvable in polynomial time.



Journal ArticleDOI
TL;DR: In this article, a decision-analytic approach is taken to the problem of assessing the economic value of imperfect weather forecasts, focusing on measures of the quality of such information and on the relationship between quality and economic value.
Abstract: A decision-analytic approach is taken to the problem of assessing the economic value of imperfect weather forecasts. Emphasis is placed on measures of the quality of such information and on the relationship between quality and economic value. The fallowing/planting problem for a spring wheat farmer is examined in detail as a specific application. It is assumed that the farmer's goal is to maximize the total expected discounted return over an infinite horizon, which places this problem within the general framework of Markov decision processes. By means of stochastic dynamic programming, the economic value to the farmer of currently available seasonal precipitation forecasts, as well as of hypothetical improvements in the quality of such forecasts, is estimated. Because the relationship between the quality and value of forecasts is highly nonlinear, the need to explicitly determine value-of-information estimates, rather than relying on quality as a surrogate for value, is made clear.
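A toy decision-analytic computation of forecast value in the spirit of the fallowing/planting example, with entirely hypothetical payoffs and forecast quality (a single-period sketch; the paper uses an infinite-horizon Markov decision process solved by stochastic dynamic programming):

```python
import numpy as np

# Hypothetical single-season payoffs: rows = action (fallow, plant),
# cols = weather state (dry, wet).
payoff = np.array([[10.0, 10.0],   # fallow: safe either way
                   [-20.0, 60.0]]) # plant: loses if dry, wins if wet
prior = np.array([0.6, 0.4])       # climatological P(dry), P(wet)

# Value without a forecast: best single action against the prior.
v_no_forecast = max(payoff @ prior)

# Imperfect forecast with hit rate q: P(forecast = true state) = q.
def value_with_forecast(q):
    likelihood = np.array([[q, 1 - q],   # P(forecast f | state s)
                           [1 - q, q]])
    v = 0.0
    for f in range(2):
        p_f = likelihood[f] @ prior              # P(forecast = f)
        posterior = likelihood[f] * prior / p_f  # Bayes update
        v += p_f * max(payoff @ posterior)       # act optimally given f
    return v

for q in (0.5, 0.7, 0.9, 1.0):
    print(q, value_with_forecast(q) - v_no_forecast)
```

The printed value-of-information curve is zero at q = 0.5 (an uninformative forecast) and rises nonlinearly with quality, which is the abstract's point that quality cannot serve as a surrogate for value.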


Journal ArticleDOI
K.-J. Bierth
TL;DR: The expected average reward criterion is considered in place of the average expected reward criterion commonly used in stochastic dynamic programming; the former seems more natural and yields stronger results.
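The distinction, stated in standard notation (a plausible reading of the two criteria, not copied from the paper): for rewards r_t under a policy π,

\[
\text{expected average reward:}\quad
\mathbb{E}_{\pi}\!\left[\liminf_{n\to\infty}\frac{1}{n}\sum_{t=1}^{n} r_{t}\right],
\qquad
\text{average expected reward:}\quad
\liminf_{n\to\infty}\frac{1}{n}\sum_{t=1}^{n}\mathbb{E}_{\pi}[r_{t}],
\]

the first taking the long-run average pathwise before the expectation, the second after.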

Journal ArticleDOI
TL;DR: A general approach is presented to the development of deterministic equivalents of constraints that must be satisfied within certain probability limits, together with a deterministic transformation of a stochastic programming problem with random variables in the objective function.

Journal ArticleDOI
01 Sep 1987
TL;DR: The developed approximation depends only on the first two statistical moments of the random inputs and on derivatives of the cost functions up to third order; it elucidates the stochastic optimization problem, yielding insights that cannot easily be obtained from the numerical application of discrete DP.
Abstract: A new approximate method of solution for stochastic optimal control problems with many state and control variables is introduced. The method is based on the expansion of the optimal control into the deterministic feedback control plus a caution term. The analytic, small-perturbation calculation of the caution term is at the heart of the new method. The developed approximation depends only on the first two statistical moments of the random inputs and on derivatives of the cost functions up to third order. Its computational requirements do not exhibit the exponential growth exhibited by discrete stochastic DP, so the method can be used as a suboptimal solution to problems for which application of stochastic DP is not feasible. The method is accurate when the cost-to-go functions are approximately cubic in a neighbourhood around the deterministic trajectory, the size of which depends on forecasting uncertainty. Furthermore, the method elucidates the stochastic optimization problem, yielding insights which cannot be easily obtained from the numerical application of discrete DP.