
Showing papers on "Stochastic programming published in 1988"


Book
01 Jan 1988
TL;DR: In this book, the authors describe a powerful and flexible technique for the modeling of behavior, based on evolutionary principles, which employs stochastic dynamic programming and permits the analysis of behavioral adaptations wherein organisms respond to changes in their environment and in their own current physiological state.
Abstract: This book describes a powerful and flexible technique for the modeling of behavior, based on evolutionary principles. The technique employs stochastic dynamic programming and permits the analysis of behavioral adaptations wherein organisms respond to changes in their environment and in their own current physiological state. Models can be constructed to reflect sequential decisions concerned simultaneously with foraging, reproduction, predator avoidance, and other activities. The authors show how to construct and use dynamic behavioral models. Part I covers the mathematical background and computer programming, and then uses a paradigm of foraging under risk of predation to exemplify the general modeling technique. Part II consists of five "applied" chapters illustrating the scope of the dynamic modeling approach. They treat hunting behavior in lions, reproduction in insects, migrations of aquatic organisms, clutch size and parental care in birds, and movement of spiders and raptors. Advanced topics, including the study of dynamic evolutionarily stable strategies, are discussed in Part III.
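A minimal sketch of the kind of model the book develops, assuming an invented forager with energy reserves that chooses each period between a safe, poor patch and a risky, rich one; the horizon, patch parameters, and fitness criterion below are hypothetical, not taken from the text:

```python
# Minimal sketch of a behavioral stochastic dynamic program in the spirit of the
# book (illustrative only; all parameters are invented).
import numpy as np

T, X_MAX = 20, 10                              # time horizon, maximum energy reserves
COST = 1                                       # metabolic cost per period
patches = [
    {"risk": 0.00, "p_food": 0.4, "gain": 2},  # safe but poor patch
    {"risk": 0.02, "p_food": 0.7, "gain": 3},  # risky but rich patch
]

F = np.zeros((T + 1, X_MAX + 1))               # F[t, x] = expected terminal fitness
F[T, 1:] = 1.0                                 # surviving to T with reserves x > 0 counts as fit
policy = np.zeros((T, X_MAX + 1), dtype=int)

for t in range(T - 1, -1, -1):                 # backward induction over time
    for x in range(1, X_MAX + 1):
        values = []
        for p in patches:
            x_fed = min(x - COST + p["gain"], X_MAX)
            x_hungry = max(x - COST, 0)
            survive = 1.0 - p["risk"]          # probability of escaping predation this period
            values.append(survive * (p["p_food"] * F[t + 1, x_fed]
                                     + (1 - p["p_food"]) * F[t + 1, x_hungry]))
        policy[t, x] = int(np.argmax(values))
        F[t, x] = max(values)

print(policy[0])                               # patch choice at t = 0 as a function of reserves
```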

817 citations


BookDOI
01 Mar 1988
TL;DR: This is a comprehensive and timely overview of the numerical techniques that have been developed to solve stochastic programming problems, covering all major advances in the field (both Western and Soviet).
Abstract: This is a comprehensive and timely overview of the numerical techniques that have been developed to solve stochastic programming problems. After a brief introduction to the field, where accent is laid on modeling questions, the next few chapters lay out the challenges that must be met in this area. They also provide the background for the description of the computer implementations given in the third part of the book. Selected applications are described next. Some of these have directly motivated the development of the methods described in the earlier chapters. They include problems that come from facilities location, exploration investments, control of ecological systems, energy distribution and generation. Test problems are collected in the last chapter. This is the first book devoted to this subject. It comprehensively covers all major advances in the field (both Western and Soviet). It is only because of the recent developments in computer technology, that we have now reached a point where our computing power matches the inherent size requirements faced in this area. The book demonstrates that a large class of stochastic programming problems are now in the range of our numerical capacities.

584 citations


Journal ArticleDOI
TL;DR: This paper describes a multicut algorithm to carry out outer linearization of stochastic programs and presents experimental and theoretical justification for reductions in major iterations.
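For orientation, a schematic multicut master problem, in notation assumed here rather than copied from the paper, keeps one outer-linearization variable per scenario instead of aggregating all scenario cuts into a single one:

\[
\begin{aligned}
\min_{x,\,\theta_1,\dots,\theta_K}\quad & c^\top x + \sum_{k=1}^{K} p_k\,\theta_k\\
\text{s.t.}\quad & Ax = b,\; x \ge 0,\\
& \theta_k \ge Q_k(\hat{x}_j) + g_{kj}^\top (x - \hat{x}_j), \qquad j = 1,\dots,J,\ \ k = 1,\dots,K,
\end{aligned}
\]

where Q_k is the recourse value of scenario k, the \hat{x}_j are earlier iterates, and g_{kj} is a subgradient of Q_k at \hat{x}_j. The disaggregated cuts carry more information per major iteration, which is where the reported reduction in major iterations comes from.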

532 citations


DOI
01 Jan 1988

361 citations


Journal ArticleDOI
TL;DR: In this paper, the author presents a stochastic dynamic programming model that captures the essential elements of this problem, and a numerical example demonstrates the optimal mode-switching decision rules, reflecting the value derived from the ability to better cope with uncertainty.
Abstract: The author studies the topical issue of flexible manufacturing system (FMS) justification. He contends that current evaluation methods fall short of capturing a key advantage of an FMS: the value of flexibility. He identifies various benefits of FMS that arise from the ability to switch between modes of production, and in particular, he models the value derived from the ability to better cope with uncertainty. A model to capture this value must solve for the value of flexibility together with the dynamic operating schedule of the production process. He presents a stochastic dynamic programming model that captures the essential elements of this problem. A numerical example further demonstrates the optimal mode switching decision rules.

227 citations


Journal ArticleDOI
TL;DR: A method is presented to obtain sharp lower and upper bounds for the probability that at least one out of a number of events in an arbitrary probability space will occur, utilizing only the first few terms in the inclusion-exclusion formula.
Abstract: We present a method to obtain sharp lower and upper bounds for the probability that at least one out of a number of events in an arbitrary probability space will occur. The input data are some of the binomial moments of the occurrences, such as the sum of the probabilities of the individual events, or the sum of the joint probabilities of all pairs of events. We develop a special, very simple linear programming algorithm to obtain these bounds. The method allows us to compute good bounds in an optimal way, utilizing only the first few terms in the inclusion-exclusion formula. Possible applications include obtaining bounds for the reliability of a stochastic system, solving algorithmically some stochastic programming problems, and approximating multivariate probabilities in statistics. In a numerical example we approximate the probability that a Gaussian process runs below a given level in a number of consecutive epochs.
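A small sketch of the bounding linear program, in a formulation assumed here (not code from the paper), with SciPy's general-purpose linprog standing in for the special-purpose algorithm the authors develop:

```python
# Sharp lower and upper bounds on the probability that at least one of n events
# occurs, given the first two binomial moments S1 = sum_i P(A_i) and
# S2 = sum_{i<j} P(A_i A_j). Illustrative formulation, not the authors' code.
from math import comb
from scipy.optimize import linprog

def union_bounds(n, S1, S2):
    # Decision variables v[i] = P(exactly i of the n events occur), i = 0, ..., n.
    A_eq = [
        [1.0] * (n + 1),                      # the v[i] form a probability distribution
        [comb(i, 1) for i in range(n + 1)],   # first binomial moment equals S1
        [comb(i, 2) for i in range(n + 1)],   # second binomial moment equals S2
    ]
    b_eq = [1.0, S1, S2]
    c = [0.0] + [1.0] * n                     # objective: P(at least one event occurs)
    lo = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs")                  # sharp lower bound
    hi = linprog([-ci for ci in c], A_eq=A_eq, b_eq=b_eq, method="highs")  # sharp upper bound
    return lo.fun, -hi.fun

# Invented, feasible moments for n = 4 events:
print(union_bounds(4, S1=1.2, S2=0.5))
```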

147 citations


Journal ArticleDOI
TL;DR: A new computational algorithm, gradient dynamic programming, is presented for the solution of discrete-time, linearly constrained stochastic optimal control problems decomposable in stages; its single-stage solution method is much more efficient than the conventional approach based on enumeration or iterative methods with a linear rate of convergence.
Abstract: A new computational algorithm is presented for the solution of discrete time linearly constrained stochastic optimal control problems decomposable in stages. The algorithm, designated gradient dynamic programming, is a backward moving stagewise optimization. The main innovations over conventional discrete dynamic programming (DDP) are in the functional representation of the cost-to-go function and the solution of the single-stage problem. The cost-to-go function (assumed to be of requisite smoothness) is approximated within each element defined by the discretization scheme by the lowest-order polynomial which preserves its values and the values of its gradient with respect to the state variables at all nodes of the discretization grid. The improved accuracy of this Hermitian interpolation scheme reduces the effect of discretization error and allows the use of coarser grids, which reduces the dimensionality of the problem. At each stage, the optimal control is determined at each node of the discretized state space using a constrained Newton-type optimization procedure which has a quadratic rate of convergence. The set of constraints which act as equalities is determined from an active set strategy which converges under lenient convexity requirements. This method of solving the single-stage optimization is much more efficient than the conventional way based on enumeration or iterative methods with a linear rate of convergence. Once the optimal control is determined, the cost-to-go function and its gradient with respect to the state variables are calculated to be used at the next stage. The proposed technique permits the efficient optimization of stochastic systems whose high dimensionality does not permit solution under the conventional DDP framework and for which successive approximation methods are not directly applicable due to stochasticity. Results for a four-reservoir example are presented.

The purpose of this paper is to present a new computational algorithm for the stochastic optimization of sequential decision problems. One important and extensively studied class of such problems in the area of water resources is the discrete time optimal control of multireservoir systems under stochastic inflows. Other applications include the optimal design and operation of sewer systems [e.g., Mays and Wenzel, 1976; Labadie et al., 1980], the optimal conjunctive utilization of surface and groundwater resources [e.g., Buras, 1972], and the minimum cost water quality maintenance in rivers [e.g., Dracup and Fogarty, 1974; Chang and Yeh, 1973], to mention only a few of the water resources applications and pertinent references. An extensive review of dynamic programming applications in water resources can be found in the works by Yakowitz [1982] and Yeh [1985].
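The Hermitian interpolation step can be illustrated in one dimension with a short sketch; the grid, cost values, and gradients below are invented, and SciPy's cubic Hermite spline stands in for the element-wise polynomial approximation described above:

```python
# One-dimensional illustration of the Hermitian interpolation idea (not the
# authors' code): the cost-to-go between grid nodes is approximated by the
# lowest-order polynomial matching both the nodal values and the nodal gradients.
import numpy as np
from scipy.interpolate import CubicHermiteSpline

nodes = np.linspace(0.0, 4.0, 5)            # coarse discretization of one state variable
cost_to_go = nodes**2 + np.sin(nodes)       # cost-to-go values F(x) at the nodes
gradient = 2 * nodes + np.cos(nodes)        # dF/dx at the nodes, carried along by the algorithm

F_hat = CubicHermiteSpline(nodes, cost_to_go, gradient)

x = 1.37                                    # an off-grid state visited during the stage optimization
print(F_hat(x), F_hat(x, 1))                # interpolated cost-to-go and its derivative at x
```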

105 citations


Journal ArticleDOI
TL;DR: A variant of Karmarkar's algorithm for block-angular structured linear programs, such as stochastic linear programs, is presented, which gives a worst-case bound on the order of the running time that can be an order of magnitude better than that of Karmarkar's standard algorithm.
Abstract: We present a variant of Karmarkar's algorithm for block-angular structured linear programs, such as stochastic linear programs. By computing the projection efficiently, we give a worst-case bound on the order of the running time that can be an order of magnitude better than that of Karmarkar's standard algorithm. Further implications for approximations and very large-scale problems are given.
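For context, the block-angular structure in question is that of the deterministic equivalent of a two-stage stochastic linear program, written here in generic notation (assumed, not the paper's):

\[
\begin{aligned}
\min\quad & c^\top x + \sum_{k=1}^{K} p_k\, q_k^\top y_k\\
\text{s.t.}\quad & A x = b,\\
& T_k x + W y_k = h_k, \qquad k = 1,\dots,K,\\
& x \ge 0,\; y_k \ge 0,
\end{aligned}
\]

so the constraint matrix consists of a coupling block A plus K nearly independent recourse blocks, and this is the structure the efficient projection computation exploits.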

103 citations


Journal ArticleDOI
TL;DR: This paper presents a straightforward extension of the E-M inequality for the dependent case, yielding a discrete distribution that is extremal with respect to a partial ordering of a set of distribution functions and proves that the bounds for the expectation behave monotonically by applying the obtained inequality to smaller and smaller subintervals.
Abstract: Bounding the expectation of a convex function of a multivariate random variable is of great importance for solving stochastic linear programming problems (SLP) with recourse. The classic bounds are those of Jensen and Edmundson-Madansky, whereby Jensen's inequality is available in the independent as well as in the dependent case, in contrast with the Edmundson-Madansky (E-M) inequality [Madansky, A. 1959. Bounds on the expectation of a convex function of a multivariate random variable. Ann. Math. Statist. 30 743--746], which is valid only in the independent case. This paper presents a straightforward extension of the E-M inequality for the dependent case, yielding a discrete distribution that is extremal with respect to a partial ordering of a set of distribution functions. Further, we prove that the bounds for the expectation behave monotonically by applying the obtained inequality to smaller and smaller subintervals. As this requires the evaluation of conditional probabilities and expectations, we replace the given distribution by a discrete one resulting from sampling, and determine the sampling size that keeps the statistical error negligible. Finally, we conclude with the application to SLP recourse problems and state some computational results illustrating the effort for solving recourse problems.
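In the one-dimensional independent case, the two classic bounds cited above take the following familiar form (generic notation assumed here) for a convex function \varphi and a random variable \xi with support [a, b] and mean \mu:

\[
\varphi(\mu) \;\le\; \mathbb{E}\big[\varphi(\xi)\big] \;\le\; \frac{b-\mu}{b-a}\,\varphi(a) + \frac{\mu-a}{b-a}\,\varphi(b),
\]

the left inequality being Jensen's bound and the right the Edmundson-Madansky bound, which replaces the distribution by a two-point distribution concentrated on the endpoints; the paper's extension constructs an analogous extremal discrete distribution when the components of \xi are dependent.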

92 citations


Journal ArticleDOI
TL;DR: In this article, a structural model of household health production that jointly determines the demand for leisure and the demand for consumption for elderly males is developed and estimated within a stochastic dynamic programming framework.
Abstract: Our article develops and estimates a structural model of household health production that jointly determines the demand for leisure and the demand for consumption for elderly males. We use a stochastic dynamic programming framework based on the ...

90 citations


Journal ArticleDOI
TL;DR: A resource-directed decomposition method that simultaneously exploits stochastic programming and mixed integer programming model structures is proposed and applied to fuel contract and plant construction decisions faced by an electric utility.
Abstract: This paper reports on the application of stochastic programming with recourse to strategic planning decisions regarding resource acquisition. A resource directed decomposition method, which simultaneously exploits stochastic programming and mixed integer programming model structures, is proposed. Computational experience with the method applied to fuel contract and plant construction decisions faced by an electric utility is presented.

Journal ArticleDOI
01 Sep 1988-Networks
TL;DR: The main results obtained include the proof of some counterintuitive facts, the proof of the validity of applying stochastic programming to this problem, and the proof that the computational complexity of a particular SSP problem is polynomial.
Abstract: This paper considers Stochastic Shortest Path (SSP) problems in probabilistic networks. A variety of approaches have already been proposed in the literature. However, unlike in the deterministic case, they are related to distinct models, interpretations and applications. We have chosen to look at the case where detours from the original path must be taken whenever the “first-choice” arc fails. The main results obtained include the proof of some counterintuitive facts (e.g., the SSP may contain a cycle), the proof of the validity of applying stochastic programming to this problem and the proof that the computational complexity of a particular SSP problem is polynomial.


Journal ArticleDOI
TL;DR: A numerical approach for computing optimal dynamic checkpointing strategies for general rollback and recovery systems is presented, based on value-iteration stochastic dynamic programming with spline or finite-element approximation of the value and policy functions.
Abstract: A numerical approach for computing optimal dynamic checkpointing strategies for general rollback and recovery systems is presented. The system is modeled as a Markov renewal decision process. General failure distributions, random checkpointing durations, and reprocessing-dependent recovery times are allowed. The aim is to find a dynamic decision rule to maximize the average system availability over an infinite time horizon. A computational approach to approximate such a rule is proposed. This approach is based on value-iteration stochastic dynamic programming with spline or finite-element approximation of the value and policy functions. Numerical illustrations are provided.
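A toy discrete, discounted stand-in conveys the flavor of the value-iteration computation; it is not the paper's Markov renewal model, and the parameters are invented. The state is the amount of uncommitted work since the last checkpoint, and each period the system either works, risking a failure that loses that work, or checkpoints to commit it:

```python
# Toy value-iteration sketch for a dynamic checkpointing rule (illustrative only;
# the paper uses a Markov renewal decision process with spline or finite-element
# approximation of the value and policy functions).
import numpy as np

S_MAX, P_FAIL, GAMMA = 20, 0.05, 0.98   # max uncommitted work, failure prob per step, discount
V = np.zeros(S_MAX + 1)                 # V[s]: value when s units of work are uncommitted

def q_values(s, V):
    # "work": one more unit succeeds with prob 1 - P_FAIL; a failure loses the uncommitted work.
    work = (1 - P_FAIL) * GAMMA * V[min(s + 1, S_MAX)] + P_FAIL * GAMMA * V[0]
    # "checkpoint": commit the s units now (reward s) and restart from zero.
    checkpoint = s + GAMMA * V[0]
    return work, checkpoint

for _ in range(5000):                   # value iteration until (approximate) convergence
    V_new = np.array([max(q_values(s, V)) for s in range(S_MAX + 1)])
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = ["checkpoint" if q_values(s, V)[1] >= q_values(s, V)[0] else "work"
          for s in range(S_MAX + 1)]
print(policy)                           # typically: keep working while little is at risk, then checkpoint
```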

Journal ArticleDOI
TL;DR: Recent work on dynamic stochastic programming problems and their applications is surveyed, including new results on the measurability and interpretation, in terms of the expected value of perfect information (EVPI), of the dual multiplier processes corresponding to these problems.
Abstract: This paper surveys recent work on dynamic stochastic programming problems and their applications. New results are included on the measurability and interpretation, in terms of the expected value of perfect information (EVPI), of the dual multiplier processes corresponding to these problems. A final section reports preliminary computational experiments with algorithms for 2-stage problems.
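For reference, in the two-stage setting the EVPI mentioned above is commonly defined (generic notation, not the paper's) as the gap between the here-and-now and wait-and-see values:

\[
\mathrm{EVPI} \;=\; \min_{x}\,\mathbb{E}_{\xi}\big[f(x,\xi)\big] \;-\; \mathbb{E}_{\xi}\Big[\min_{x} f(x,\xi)\Big] \;\ge\; 0,
\]

and the paper interprets the dual multiplier processes of the dynamic problem in these terms.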

Journal Article
TL;DR: The application of improved mathematical techniques to the PAVER and Micro PAVER Pavement Management Systems is described, and the suitability of this approach for investigating the effects of deferred maintenance is presented.
Abstract: This paper describes the application of improved mathematical techniques to the PAVER and Micro PAVER Pavement Management Systems. The use of stochastic dynamic programming to determine optimal strategies and related mean costs over specified life-cycle periods is outlined. The incorporation of simple simulation techniques to estimate the variance associated with these costs is described. The suitability of this approach for investigating the effects of deferred maintenance is presented. The use of outputs from these programs in subsequent prioritization and budget allocation modules is briefly discussed. An example that incorporates outputs from the dynamic programming and simulation programs is shown, and the validity of these outputs is discussed.



Book
31 May 1988
TL;DR: In this book, the perturbation method is developed for mathematical programming and applied to approximate decomposition and aggregation of finite-dimensional deterministic problems, singular programs, stochastic programming, suboptimal linear regulator design, and nonlinear optimal control, including discrete dynamic systems with weak or aggregatable controls.
Abstract: Table of contents:
1: The Perturbation Method in Mathematical Programming
1.1. Formulation and peculiarities of problems
1.2. Perturbations in linear programs
1.3. Nonlinear programs: perturbations in objective functions
1.4. Necessary and sufficient conditions for an extremum. Quasiconvex and quasilinear programs
1.5. Perturbations in nonconvex programs
2: Approximate Decomposition and Aggregation for Finite Dimensional Deterministic Problems
2.1. Perturbed decomposable structures and two-level planning
2.2. Aggregation of activities
2.3. Weakly controllable input-output characteristics
2.4. Input-output analysis
2.5. Aggregation in optimization models based on input-output analysis
2.6. Aggregation in the interregional transportation problem with regard to price scales
2.7. Optimization of discrete dynamic systems
2.8. Control of weakly dynamic systems under state variable constraints
3: Singular Programs
3.1. Singularity and regularization in quasiconvex problems
3.2. The auxiliary problem in the singular case
3.3. An approximate aggregation of Markov chains with incomes
3.4. An approximation algorithm for Markov programming
3.5. An iterative algorithm for suboptimization
3.6. An artificial introduction of singular perturbations in compact inverse methods
4: The Perturbation Method in Stochastic Programming
4.1. One- and two-stage problems
4.2. Optimal control problems with small random perturbations
4.3. Discrete dynamic systems with weak or aggregatable controls. An asymptotic stochastic maximum principle
4.4. Sliding planning and suboptimal decomposition of operative control in a production system
4.5. Sliding planning on an infinite horizon
4.6. Control of weakly dynamic systems under random disturbances
5: Suboptimal Linear Regulator Design
5.1. The LQ problem. Suboptimal decomposition
5.2. Loss of controllability, singularity, and suboptimal aggregation
5.3. Examples of suboptimal regulator synthesis
5.4. Control of oscillatory systems
5.5. LQG problems
6: Nonlinear Optimal Control Problems
6.1. The maximum principle and smooth solutions
6.2. The general terminal problem
6.3. Difference approximations
6.4. Weak control (nonuniqueness of the reduced solution)
6.5. Aggregation in a singular perturbed problem
Related Literature

Journal ArticleDOI
TL;DR: In this paper, the authors give a new upper bound that requires operations that only grow polynomially in the number of random variables and show that this bound is sharp if the function is linear.
Abstract: Stochastic linear programs require the evaluation of an integral in which the integrand is itself the value of a linear program. This integration is often approximated by discrete distributions that bound the integral from above or below. A difficulty with previous upper bounds is that they generally require a number of function evaluations that grows exponentially in the number of variables. We give a new upper bound that requires operations that only grow polynomially in the number of random variables. We show that this bound is sharp if the function is linear and give computational results to illustrate its performance.
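The integral in question is the expected recourse function of a two-stage stochastic linear program, which in generic notation (assumed here) reads:

\[
\mathcal{Q}(x) = \mathbb{E}_{\xi}\big[Q(x,\xi)\big], \qquad
Q(x,\xi) = \min_{y\ge 0}\big\{\, q(\xi)^\top y \;:\; W y = h(\xi) - T(\xi)\,x \,\big\},
\]

so each evaluation of the integrand requires solving a linear program, which is why discrete approximating distributions with few support points, and bounds whose cost grows only polynomially in the number of random variables, are attractive.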

Book
31 Dec 1988
TL;DR: In this book, the authors propose a nonparametric approach for the estimation of the production frontier in the stochastic case and apply it in the context of data analysis and information theory.
Abstract: Table of contents:
1 Efficiency Analysis in Production
1.1 Partial and General Equilibrium Models
1.2 Production Frontier as Flexible Production Functions
1.3 Parametric Forms and their Econometric Estimation
1.4 Nonparametric Theory: Different Facets
1.5 Implications of Nonparametric Theory
2 The Nonparametric Approach
2.1 Convex Hull Method
2.2 Stochastic Micro and Macro Frontier
2.3 Data Envelopment Analysis
2.4 Consistency Approach through Data Adjustment
2.5 Distribution of Technical and Price Efficiency
3 Interface With Parametric Theory
3.1 Average versus Optimal Production Function
3.2 Estimation of Production Frontier
3.3 Robust Methods of Estimation
3.4 Data Screening and Cluster Analysis
3.5 Problems in Parametric Theory
4 Implications of Nonparametric Theory
4.1 Economic Implications
4.2 Econometrics and Minimax Frontiers
4.3 The Regression and Index Number Problem
4.4 Joint Costs and Nonlinear Frontiers
4.5 Efficiency Analysis in the Stochastic Case
5 Extensions of Nonparametric Approach
5.1 Economic Generalizations
5.2 Extensions of Data Envelopment Analysis
5.3 Combining Parametric and Nonparametric Models
5.4 Nonparametric Regression and Bootstrap Methods
5.5 Distribution of Technical and Price Efficiency
6 Applications of Nonparametric Theory
6.1 Production in Quasi-Market and Non-Market Systems
6.2 Managerial Models in Operations Research
6.3 Economic Development and International Trade
6.4 Stochastic Programming and Markov Process Models
6.5 Multivariate Data Analysis and Information Theory
Author Index


Journal ArticleDOI
TL;DR: In this article, possibility and necessity measures are employed to describe uncertainty in fuzzy programming and in other optimization problems that traditionally use random variables, and the possibility approach to multiobjective programming is shown to produce the same results as fuzzy programming.

Journal ArticleDOI
TL;DR: It is described in this paper how large-scale system methods for solving multi-staged systems, such as Benders decomposition, high-speed sampling or Monte Carlo simulation, and parallel processors, can be combined to solve some important planning problems involving uncertainty.
Abstract: Industry and government routinely solve deterministic mathematical programs for planning and scheduling purposes, some involving thousands of variables with a linear or non-linear objective and inequality constraints. The solutions obtained are often ignored because they do not properly hedge against future contingencies. It is relatively easy to reformulate models to include uncertainty. The bottleneck has been (and is) our capability to solve them. The time is now ripe for finding a way to do so. To this end, we describe in this paper how large-scale system methods for solving multi-staged systems, such as Benders decomposition, high-speed sampling or Monte Carlo simulation, and parallel processors, can be combined to solve some important planning problems involving uncertainty. For example, parallel processors may make it possible to come to better grips with the fundamental problems of planning, scheduling, design, and control of complex systems such as the economy, an industrial enterprise, an energy system, a water-resource system, military models for planning-and-control, decisions about investment, innovation, employment, and health-delivery systems.
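As a rough sketch of how these ingredients fit together (all assumptions and data below are mine, not the paper's): for a fixed first-stage plan, scenarios are sampled and the independent second-stage linear programs are solved on a pool of parallel workers, producing the kind of expected-recourse estimate a Benders-type master problem would consume.

```python
# Monte Carlo estimate of the expected recourse cost for a fixed first-stage plan,
# with scenario subproblems solved in parallel (illustrative only; the matrices,
# costs, and scenario distribution are invented).
import numpy as np
from multiprocessing import Pool
from scipy.optimize import linprog

W = np.array([[1.0, 0.0], [0.0, 1.0]])      # invented recourse matrix
q = np.array([2.0, 3.0])                    # invented second-stage costs
T = np.array([[1.0], [0.5]])                # invented technology matrix

def recourse_cost(args):
    x, h = args                             # first-stage decision and a sampled right-hand side
    res = linprog(q, A_eq=W, b_eq=h - T @ x, bounds=[(0, None)] * 2, method="highs")
    return res.fun

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.array([1.0])                     # candidate first-stage decision
    scenarios = [(x, rng.uniform(2.0, 4.0, size=2)) for _ in range(1000)]
    with Pool() as pool:                    # scenario subproblems are solved in parallel
        costs = pool.map(recourse_cost, scenarios)
    print(np.mean(costs))                   # estimate of the expected recourse cost E[Q(x, xi)]
```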

Journal ArticleDOI
TL;DR: In this article, the authors deal with a statistical approach to stability analysis in nonlinear stochastic programming, where the distribution function of the underlying random variable is estimated by the empirical distribution function, and the problem of estimated parameters is considered.
Abstract: The paper deals with a statistical approach to stability analysis in nonlinear stochastic programming. Firstly, the distribution function of the underlying random variable is estimated by the empirical distribution function, and secondly, the problem of estimated parameters is considered. In both cases, the probability that the solution set of the approximate problem is not contained in an ε-neighbourhood of the solution set of the original problem is estimated, and under differentiability properties an asymptotic expansion for the density of the (unique) solution to the approximate problem is derived.


01 Sep 1988
TL;DR: In this paper, a description of stochastic dynamical optimization models is given, which is intended to exhibit some of the connections between various formulations that have appeared in the literature, and indicate some difficulties that must be overcome when trying to adapt solution methods that have been successfully applied to one class of problems to another.
Abstract: This description of stochastic dynamical optimization models is intended to exhibit some of the connections between various formulations that have appeared in the literature, and to indicate some of the difficulties that must be overcome when trying to adapt solution methods that have been successfully applied to one class of problems to an apparently related but different class of problems. The emphasis is on solvable models. The authors begin with the least dynamical versions of stochastic optimization models, one- and two-stage models, then consider discrete time models, and conclude with continuous time models.

Book ChapterDOI
01 Dec 1988
TL;DR: In this paper, the authors discuss the risks associated with dedicated and other fixed income portfolio selection problems, including uncertainty in the cash requirements or liabilities of the dedicated portfolio and the possibility of bond recalls or defaults due to bankruptcy, to name just a few.
Abstract: Dedicated bond portfolios, even when they are conservatively assembled and managed, are still subject to major uncertainties and risks. Uncertainties associated with interest rates, and their effect on bond prices, have received the greatest attention from financial planners and theorists. Other uncertainties include the cash requirements or liabilities of the dedicated portfolio and the possibility of bond recalls or defaults due to bankruptcy, to name just a few. Joehnck [1] has a lengthier discussion of these and other risks associated with dedicated and other fixed income portfolio selection problems.