
Showing papers on "Stochastic programming published in 2003"


Book ChapterDOI
01 Jan 2003
TL;DR: In this article, Monte Carlo sampling methods for solving large scale stochastic programming problems are discussed, where a random sample is generated outside of an optimization procedure, and then the constructed so-called sample average approximation (SAA) problem is solved by an appropriate deterministic algorithm.
Abstract: In this chapter we discuss Monte Carlo sampling methods for solving large scale stochastic programming problems. We concentrate on the “exterior” approach, where a random sample is generated outside of an optimization procedure, and then the constructed so-called sample average approximation (SAA) problem is solved by an appropriate deterministic algorithm. We study statistical properties of the obtained SAA estimators. The developed statistical inference is incorporated into validation analysis and error estimation. We describe some variance reduction techniques which may enhance convergence of sampling-based estimates. We also discuss difficulties in extending this methodology to multistage stochastic programming. Finally, we briefly discuss the SAA method applied to stochastic generalized equations and variational inequalities.
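The exterior SAA recipe the chapter describes — draw the sample once, outside the optimization, then solve the resulting deterministic problem — can be sketched on a toy newsvendor instance (the instance and all numbers below are illustrative, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative newsvendor instance: choose an order quantity x to maximize
# E[p * min(x, D) - c * x] under random demand D.
p, c = 5.0, 3.0

def saa_objective(x, demand_sample):
    """Sample average approximation of the expected profit at x."""
    return np.mean(p * np.minimum(x, demand_sample) - c * x)

# "Exterior" sampling: generate the random sample once, outside the solver...
sample = rng.exponential(scale=100.0, size=10_000)

# ...then solve the deterministic SAA problem (here by simple grid search).
grid = np.linspace(0.0, 400.0, 401)
values = [saa_objective(x, sample) for x in grid]
x_saa = grid[int(np.argmax(values))]
```

For this instance the true optimizer is the (p - c)/p = 0.4 quantile of the Exp(100) demand, about 51, so the SAA solution concentrates near that value as the sample grows — exactly the kind of statistical behaviour the chapter's inference results quantify.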

990 citations


Journal ArticleDOI
TL;DR: Although these stochastic methods cannot guarantee global optimality with certainty, their robustness, plus the fact that in inverse problems they have a known lower bound for the cost function, make them the best available candidates.
Abstract: Here we address the problem of parameter estimation (inverse problem) of nonlinear dynamic biochemical pathways. This problem is stated as a nonlinear programming (NLP) problem subject to nonlinear differential-algebraic constraints. These problems are known to be frequently ill-conditioned and multimodal. Thus, traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To surmount this limitation, the use of several state-of-the-art deterministic and stochastic global optimization methods is explored. A case study considering the estimation of 36 parameters of a nonlinear biochemical dynamic model is taken as a benchmark. Only a certain type of stochastic algorithm, evolution strategies (ES), is able to solve this problem successfully. Although these stochastic methods cannot guarantee global optimality with certainty, their robustness, plus the fact that in inverse problems they have a known lower bound for the cost function, make them the best available candidates.

908 citations


Journal ArticleDOI
TL;DR: Two new versions of forward and backward type algorithms are presented for computing such optimally reduced probability measures approximately for convex stochastic programs with an (approximate) initial probability distribution P having finite support supp P.
Abstract: We consider convex stochastic programs with an (approximate) initial probability distribution P having finite support supp P, i.e., finitely many scenarios. The behaviour of such stochastic programs is stable with respect to perturbations of P measured in terms of a Fortet-Mourier probability metric. The problem of optimal scenario reduction consists in determining a probability measure that is supported by a subset of supp P of prescribed cardinality and is closest to P in terms of such a probability metric. Two new versions of forward and backward type algorithms are presented for computing such optimally reduced probability measures approximately. Compared to earlier versions, the computational performance (accuracy, running time) of the new algorithms has been improved considerably. Numerical experience is reported for different instances of scenario trees with computable optimal lower bounds. The test examples also include a ternary scenario tree representing the weekly electrical load process in a power management model.

851 citations


Journal ArticleDOI
TL;DR: Arguments from stability analysis indicate that Fortet-Mourier type probability metrics may serve as such canonical metrics in a convex stochastic programming problem with a discrete initial probability distribution.
Abstract: Given a convex stochastic programming problem with a discrete initial probability distribution, the problem of optimal scenario reduction is stated as follows: Determine a scenario subset of prescribed cardinality and a probability measure based on this set that is the closest to the initial distribution in terms of a natural (or canonical) probability metric. Arguments from stability analysis indicate that Fortet-Mourier type probability metrics may serve as such canonical metrics. Efficient algorithms are developed that determine optimal reduced measures approximately. Numerical experience is reported for reductions of electrical load scenario trees for power management under uncertainty. For instance, it turns out that after 50% reduction of the scenario tree the optimal reduced tree still has about 90% relative accuracy.
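The flavour of backward-type scenario reduction can be conveyed by a small greedy sketch (a simplified heuristic in the spirit of the paper, not the authors' exact algorithm): repeatedly delete the scenario that is cheapest to remove under a transport-type distance and move its probability to the nearest surviving scenario.

```python
import numpy as np

def backward_reduce(scenarios, probs, keep):
    """Greedy backward-type reduction (a simplified sketch, not the authors'
    exact method): repeatedly delete the scenario with smallest removal cost
    p_i * min_{j alive} |s_i - s_j|, redistributing its probability to the
    nearest surviving scenario."""
    s = np.asarray(scenarios, dtype=float)
    q = np.asarray(probs, dtype=float).copy()
    alive = list(range(len(q)))
    dist = np.abs(s[:, None] - s[None, :])
    while len(alive) > keep:
        best_cost, best_i = None, None
        for i in alive:
            cost = q[i] * min(dist[i, j] for j in alive if j != i)
            if best_cost is None or cost < best_cost:
                best_cost, best_i = cost, i
        alive.remove(best_i)
        j = min(alive, key=lambda k: dist[best_i, k])  # nearest survivor
        q[j] += q[best_i]                              # redistribute mass
        q[best_i] = 0.0
    return alive, q

kept, q = backward_reduce([0.0, 1.0, 1.1, 5.0], [0.25, 0.25, 0.25, 0.25], keep=2)
```

On this tiny example the two closely spaced scenarios are merged first; the Fortet-Mourier machinery in the paper generalizes the distance and extends the idea to scenario trees.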

838 citations


Journal ArticleDOI
TL;DR: In this article, an efficient method based on linear programming for approximating solutions to large-scale stochastic control problems is proposed, with error bounds that guide the selection of basis functions and state-relevance weights.
Abstract: The curse of dimensionality gives rise to prohibitive computational requirements that render infeasible the exact solution of large-scale stochastic control problems. We study an efficient method based on linear programming for approximating solutions to such problems. The approach "fits" a linear combination of pre-selected basis functions to the dynamic programming cost-to-go function. We develop error bounds that offer performance guarantees and also guide the selection of both basis functions and "state-relevance weights" that influence quality of the approximation. Experimental results in the domain of queueing network control provide empirical support for the methodology.
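The "fit a linear combination of basis functions by LP" idea can be shown on a two-state MDP (all numbers invented). With indicator basis functions the approximate LP reproduces the exact cost-to-go, which makes it easy to cross-check against value iteration:

```python
import numpy as np
from scipy.optimize import linprog

alpha = 0.9
g = np.array([[1.0, 4.0],        # g[x, a]: cost of action a in state x
              [3.0, 0.5]])
P = np.array([[[0.8, 0.2], [0.3, 0.7]],   # P[a, x, x']: transition probs
              [[0.1, 0.9], [0.6, 0.4]]])

Phi = np.eye(2)                  # indicator basis: the ALP is then exact
c = np.array([0.5, 0.5])         # state-relevance weights

# ALP: max c'(Phi r)  s.t.  (Phi r)(x) <= g(x,a) + alpha * P(.|x,a)'(Phi r)
A_ub, b_ub = [], []
for x in range(2):
    for a in range(2):
        A_ub.append(Phi[x] - alpha * (P[a, x] @ Phi))
        b_ub.append(g[x, a])
res = linprog(-(c @ Phi), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * 2)
V_alp = Phi @ res.x

# Cross-check against plain value iteration.
V_vi = np.zeros(2)
for _ in range(600):
    Q = g + alpha * np.einsum('axy,y->xa', P, V_vi)
    V_vi = Q.min(axis=1)
```

With an incomplete basis (fewer columns in `Phi` than states) the same LP instead returns the weighted-best fit, which is where the paper's error bounds and the choice of `c` come into play.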

643 citations


Journal ArticleDOI
TL;DR: This work presents a detailed computational study of the application of the SAA method to solve three classes of stochastic routing problems and finds provably near-optimal solutions to these difficult stochastic programs using only a moderate amount of computation time.
Abstract: The sample average approximation (SAA) method is an approach for solving stochastic optimization problems by using Monte Carlo simulation. In this technique the expected objective function of the stochastic problem is approximated by a sample average estimate derived from a random sample. The resulting sample average approximating problem is then solved by deterministic optimization techniques. The process is repeated with different samples to obtain candidate solutions along with statistical estimates of their optimality gaps. We present a detailed computational study of the application of the SAA method to solve three classes of stochastic routing problems. These stochastic problems involve an extremely large number of scenarios and first-stage integer variables. For each of the three problem classes, we use decomposition and branch-and-cut to solve the approximating problem within the SAA scheme. Our computational results indicate that the proposed method is successful in solving problems with up to 2^1694 scenarios to within an estimated 1.0% of optimality. Furthermore, a surprising observation is that the number of optimality cuts required to solve the approximating problem to optimality does not significantly increase with the size of the sample. Therefore, the observed computation times needed to find optimal solutions to the approximating problems grow only linearly with the sample size. As a result, we are able to find provably near-optimal solutions to these difficult stochastic programs using only a moderate amount of computation time.

461 citations


Book ChapterDOI
01 Jan 2003
TL;DR: This chapter starts with motivating examples and then proceeds to formulation of linear, and later nonlinear, two-stage stochastic programming problems, and gives a functional description of two-stage programs.
Abstract: In this introductory chapter we discuss some basic approaches to modeling of stochastic optimization problems. We start with motivating examples and then proceed to formulation of linear, and later nonlinear, two-stage stochastic programming problems. We give a functional description of two-stage programs. After that we proceed to a discussion of multistage stochastic programming and its connections with dynamic programming. We end this chapter by introducing robust and min–max approaches to stochastic programming. Finally, in the appendix, we introduce and briefly discuss some relevant concepts from probability and optimization theories.

410 citations


Journal ArticleDOI
TL;DR: This work presents an algorithm that produces a discrete joint distribution consistent with specified values of the first four marginal moments and correlations, constructed by decomposing the multivariate problem into univariate ones, and using an iterative procedure that combines simulation, Cholesky decomposition and various transformations to achieve the correct correlations.
Abstract: In stochastic programming models we always face the problem of how to represent the random variables. This is particularly difficult with multidimensional distributions. We present an algorithm that produces a discrete joint distribution consistent with specified values of the first four marginal moments and correlations. The joint distribution is constructed by decomposing the multivariate problem into univariate ones, and using an iterative procedure that combines simulation, Cholesky decomposition and various transformations to achieve the correct correlations without changing the marginal moments. With the algorithm, we can generate 1000 one-period scenarios for 12 random variables in 16 seconds, and for 20 random variables in 48 seconds, on a Pentium III machine.
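The Cholesky step at the core of such generators can be isolated in a few lines — this shows only the correlation-inducing transformation (with invented target correlations), not the paper's full four-moment iterative procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Target correlation matrix for three random variables (illustrative values).
R = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])

n = 100_000
Z = rng.standard_normal((n, 3))

# Remove sampling noise so the sample is exactly standardized and
# uncorrelated, then impose the target correlations via the Cholesky
# factor of R -- the same building block the paper's procedure uses.
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)
L = np.linalg.cholesky(np.cov(Z.T, bias=True))
Z = Z @ np.linalg.inv(L).T            # exact decorrelation
X = Z @ np.linalg.cholesky(R).T       # impose target correlations
sample_corr = np.corrcoef(X.T)
```

The full algorithm additionally matches the third and fourth marginal moments through nonlinear transformations, iterating because those transformations perturb the correlations again.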

401 citations


Journal ArticleDOI
TL;DR: A novel scheme is proposed in which optimality is achieved by tracking the necessary conditions of optimality, separating the constraint-seeking from the sensitivity-seeking components of the inputs.

362 citations


Journal ArticleDOI
TL;DR: Stochastic optimization problems involving stochastic dominance constraints are introduced; necessary and sufficient conditions of optimality and duality theory are developed, and it is shown that the Lagrange multipliers corresponding to the dominance constraints are concave nondecreasing utility functions.
Abstract: We introduce stochastic optimization problems involving stochastic dominance constraints. We develop necessary and sufficient conditions of optimality and duality theory for these models and show that the Lagrange multipliers corresponding to dominance constraints are concave nondecreasing utility functions. The models and results are illustrated on a portfolio optimization problem.
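As a concrete reference point for what a dominance constraint asserts: X dominates Y in second order iff the expected shortfall E[(t - X)_+] never exceeds E[(t - Y)_+]. This standard characterization is easy to check empirically (the toy data below are mine, not the paper's):

```python
import numpy as np

def ssd_dominates(x, y, thresholds):
    """Empirical second-order dominance test on equally likely outcomes:
    the expected shortfall of X below every threshold t must not exceed
    that of Y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sx = np.maximum(thresholds[:, None] - x, 0.0).mean(axis=1)
    sy = np.maximum(thresholds[:, None] - y, 0.0).mean(axis=1)
    return bool(np.all(sx <= sy + 1e-12))

t_grid = np.linspace(-3.0, 3.0, 121)
x = np.array([0.0, 1.0])     # outcomes of X, equally likely
y = np.array([-1.0, 2.0])    # a mean-preserving spread of X
```

Here X second-order dominates Y (every risk-averse decision maker prefers X) but not conversely — the one-sided relationship that, imposed as a constraint on a portfolio return, leads to the utility-function multipliers of the paper.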

327 citations


Book
29 Aug 2003
TL;DR: The original contribution of Dynamic Economics: Quantitative Methods and Applications lies in the integrated approach to the empirical application of dynamic optimization programming models, showing that empirical applications actually complement the underlying theory of optimization, while dynamic programming problems provide needed structure for estimation and policy evaluation.
Abstract: This book is an effective, concise text for students and researchers that combines the tools of dynamic programming with numerical techniques and simulation-based econometric methods. Doing so, it bridges the traditional gap between theoretical and empirical research and offers an integrated framework for studying applied problems in macroeconomics and microeconomics. In part I the authors first review the formal theory of dynamic optimization; they then present the numerical tools and econometric techniques necessary to evaluate the theoretical models. In language accessible to a reader with a limited background in econometrics, they explain most of the methods used in applied dynamic research today, from the estimation of probability in a coin flip to a complicated nonlinear stochastic structural model. These econometric techniques provide the final link between the dynamic programming problem and data. Part II is devoted to the application of dynamic programming to specific areas of applied economics, including the study of business cycles, consumption, and investment behavior. In each instance the authors present the specific optimization problem as a dynamic programming problem, characterize the optimal policy functions, estimate the parameters, and use models for policy evaluation. The original contribution of Dynamic Economics: Quantitative Methods and Applications lies in the integrated approach to the empirical application of dynamic optimization programming models. This integration shows that empirical applications actually complement the underlying theory of optimization, while dynamic programming problems provide needed structure for estimation and policy evaluation.

Book ChapterDOI
01 Jan 2003
TL;DR: In this article, a tour of good energy optimization models that explicitly deal with uncertainty is given, where the uncertainty usually stems from unpredictability of demand and/or prices of energy, or from resource availability and prices.
Abstract: We give the reader a tour of good energy optimization models that explicitly deal with uncertainty. The uncertainty usually stems from unpredictability of demand and/or prices of energy, or from resource availability and prices. Since most energy investments or operations involve irreversible decisions, a stochastic programming approach is meaningful. Many of the models deal with electricity investments and operations, but some oil and gas applications are also presented. We consider both traditional cost minimization models and newer models that reflect industry deregulation processes. The oldest research precedes the development of linear programming, and most models within the market paradigm have not yet found their final form.

Book ChapterDOI
01 Jan 2003
TL;DR: In this paper, continuity properties of optimal values and solution sets relative to changes of the original probability distribution, varying in some space of probability measures equipped with some convergence and metric, are studied.
Abstract: The behaviour of stochastic programming problems is studied in case of the underlying probability distribution being perturbed and approximated, respectively. Most of the theoretical results provide continuity properties of optimal values and solution sets relative to changes of the original probability distribution, varying in some space of probability measures equipped with some convergence and metric, respectively. We start by discussing relevant notions of convergence and distances for probability measures. Then we associate a distance with a stochastic program in a natural way and derive (quantitative) continuity properties of values and solutions by appealing to general perturbation results for optimization problems. Later we show how these results relate to stability with respect to weak convergence and how certain ideal probability metrics may be associated with more specific stochastic programs. In particular, we establish stability results for two-stage and chance constrained models. Finally, we present some consequences for the asymptotics of empirical approximations and for the construction of scenario-based approximations of stochastic programs.

Journal ArticleDOI
TL;DR: A continuous global optimization heuristic for a stochastic approximation of an objective function, which is not globally convex, is introduced and some results of the estimation of the parameters for a specific agent based model of the DM/US-$ foreign exchange market are presented.

Book
01 Jan 2003
TL;DR: An attempt is made to describe the theoretical properties of several stochastic adaptive search methods, which may allow us to better predict algorithm performance and ultimately design new and improved algorithms.

Abstract: The field of global optimization has been developing at a rapid pace. There is a journal devoted to the topic, as well as many publications and notable books discussing various aspects of global optimization. This book is intended to complement these other publications with a focus on stochastic methods for global optimization. Stochastic methods, such as simulated annealing and genetic algorithms, are gaining in popularity among practitioners and engineers because they are relatively easy to program on a computer and may be applied to a broad class of global optimization problems. However, the theoretical performance of these stochastic methods is not well understood. In this book, an attempt is made to describe the theoretical properties of several stochastic adaptive search methods. Such a theoretical understanding may allow us to better predict algorithm performance and ultimately design new and improved algorithms. This book consolidates a collection of papers on the analysis and development of stochastic adaptive search. The first chapter introduces random search algorithms. Chapters 2-5 describe the theoretical analysis of a progression of algorithms. A main result is that the expected number of iterations for pure adaptive search is linear in dimension for a class of Lipschitz global optimization problems. Chapter 6 discusses algorithms, based on the Hit-and-Run sampling method, that have been developed to approximate the ideal performance of pure random search. The final chapter discusses several applications in engineering that use stochastic adaptive search methods.
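The contrast the book analyzes — pure random search versus adaptive variants that only accept improving points — can be sketched as follows (a rough simplification of the Hit-and-Run-based algorithms, on an invented objective):

```python
import numpy as np

rng = np.random.default_rng(7)

def f(x):
    # Toy objective on the box [-1, 1]^n; minimum 0 at the origin.
    return float(np.sum(x * x))

def pure_random_search(dim, iters):
    """Baseline: independent uniform samples, keep the best value seen."""
    return min(f(rng.uniform(-1.0, 1.0, dim)) for _ in range(iters))

def improving_hit_and_run(dim, iters):
    """Rough sketch of an improving Hit-and-Run step: move a random
    distance along a uniformly random direction and accept only
    improvements (a simplification of the algorithms in the book)."""
    x = rng.uniform(-1.0, 1.0, dim)
    fx = f(x)
    for _ in range(iters):
        d = rng.standard_normal(dim)
        d /= np.linalg.norm(d)
        y = np.clip(x + rng.uniform(0.0, 1.0) * d, -1.0, 1.0)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return fx

best_prs = pure_random_search(dim=5, iters=2000)
best_ihr = improving_hit_and_run(dim=5, iters=2000)
```

With these settings the improving variant typically ends far below the pure-random baseline; quantifying that gap (e.g. the linear-in-dimension result for pure adaptive search) is the subject of the book's analysis.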

Journal ArticleDOI
TL;DR: This work identifies corresponding two- and multi-stage stochastic integer programs that are large-scale block-structured mixed-integer linear programs if the underlying probability distributions are discrete.
Abstract: Including integer variables into traditional stochastic linear programs has considerable implications for structural analysis and algorithm design. Starting from mean-risk approaches with different risk measures we identify corresponding two- and multi-stage stochastic integer programs that are large-scale block-structured mixed-integer linear programs if the underlying probability distributions are discrete. We highlight the role of mixed-integer value functions for structure and stability of stochastic integer programs. When applied to the block structures in stochastic integer programming, well known algorithmic principles such as branch-and-bound, Lagrangian relaxation, or cutting plane methods open up new directions of research. We review existing results in the field and indicate departure points for their extension.

Journal ArticleDOI
TL;DR: Algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform are described and large sample-average approximations of problems from the literature are presented.
Abstract: We describe algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform. In particular, we examine serial and asynchronous versions of the L-shaped method and a trust-region method. The parallel platform of choice is the dynamic, heterogeneous, opportunistic platform provided by the Condor system. The algorithms are of master-worker type (with the workers being used to solve second-stage problems), and the MW runtime support library (which supports master-worker computations) is key to the implementation. Computational results are presented on large sample-average approximations of problems from the literature.
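The serial L-shaped loop — a master problem alternating with scenario subproblems that generate optimality cuts — fits in a few lines for a toy simple-recourse instance (the numbers and the closed-form subproblem duals are my illustration, not the paper's test problems):

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-stage program:  min c*x + E[q * (d - x)_+],  0 <= x <= 10,
# with discrete demand scenarios d_s.
c, q = 1.0, 3.0
d = np.array([2.0, 6.0, 9.0])
p = np.array([0.3, 0.4, 0.3])

cuts = []                       # optimality cuts: theta >= a_k - b_k * x
x, theta = 0.0, -np.inf
for _ in range(50):
    # Scenario subproblem min{q*y : y >= d_s - x, y >= 0} has optimal dual
    # multiplier q when d_s > x and 0 otherwise; aggregating over scenarios
    # gives the single-cut coefficients.
    lam = np.where(d > x, q, 0.0)
    Qx = float(p @ (q * np.maximum(d - x, 0.0)))    # expected recourse cost
    if theta >= Qx - 1e-8:      # master's estimate matches reality: optimal
        break
    cuts.append((float(p @ (lam * d)), float(p @ lam)))
    # Master LP over (x, theta):  min c*x + theta  s.t. all cuts so far.
    A_ub = np.array([[-b_k, -1.0] for a_k, b_k in cuts])
    b_ub = np.array([-a_k for a_k, b_k in cuts])
    res = linprog([c, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 10.0), (None, None)])
    x, theta = res.x

total_cost = c * x + Qx
```

Real implementations obtain the cut coefficients from the duals returned by an LP solver rather than in closed form; the paper's contribution is to farm exactly those second-stage solves out to Condor workers, asynchronously and at scale.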

Journal ArticleDOI
TL;DR: The developed TISP model provides a linkage to predefined policies determined by authorities that have to be respected when a modeling effort is undertaken and furnishes the reflection of uncertainties presented as both probabilities and intervals.
Abstract: This study introduces a two-stage interval-stochastic programming (TISP) model for the planning of solid-waste management systems under uncertainty. The model is derived by incorporating the concept of two-stage stochastic programming within an interval-parameter optimization framework. The approach has the advantage that policy determined by the authorities, and uncertain information expressed as intervals and probability distributions, can be effectively communicated into the optimization processes and resulting solutions. In the modeling formulation, penalties are imposed when policies expressed as allowable waste-loading levels are violated. In its solution algorithm, the TISP model is converted into two deterministic submodels, which correspond to the lower and upper bounds for the desired objective-function value. Interval solutions, which are stable in the given decision space with associated levels of system-failure risk, can then be obtained by solving the two submodels sequentially. Two special characteristics of the proposed approach make it unique compared with other optimization techniques that deal with uncertainties. First, the TISP model provides a linkage to predefined policies determined by authorities that have to be respected when a modeling effort is undertaken; second, it furnishes the reflection of uncertainties presented as both probabilities and intervals. The developed model is applied to a hypothetical case study of regional solid-waste management. The results indicate that reasonable solutions have been generated. They provide desired waste-flow patterns with minimized system costs and maximized system feasibility. The solutions present as stable interval solutions with different risk levels in violating the waste-loading criterion and can be used for generating decision alternatives.

Journal ArticleDOI
TL;DR: In this article, a stochastic programming formulation for fleet sizing under uncertainty in future demands and operating conditions is presented, where a partial moment measure of risk is incorporated into the expected recourse function.
Abstract: We create a formulation and a solution procedure for fleet sizing under uncertainty in future demands and operating conditions. The formulation focuses on robust optimization, using a partial moment measure of risk. This risk measure is incorporated into the expected recourse function of a two-stage stochastic programming formulation, and stochastic decomposition is used as a solution procedure. A numerical example illustrates the importance of including uncertainty in the fleet sizing problem formulation, and the nature of the fundamental tradeoff between acquiring more vehicles and accepting the risk of potentially high costs if insufficient resources are available.

Journal ArticleDOI
TL;DR: This work presents a simple but effective solution method capable of tackling problems with large numbers of potential members (e.g. >100,000,000). The method requires only a ground structure with minimal connectivity in the first iteration; members are then added as required in subsequent iterations until the (provably) optimal solution is found.
Abstract: Computerized layout (or “topology”) optimization was pioneered almost four decades back. However, despite dramatic increases in available computer power and the application of increasingly efficient optimization algorithms, even now only relatively modest sized problems can be tackled using the traditional “ground structure” approach. This is because of the need, in general, for the latter to contain every conceivable member connecting together the nodes in a problem. A simple, but effective solution method capable of tackling problems with large numbers of potential members (e.g. >100,000,000) is presented. Though the method draws on the linear programming technique of “column generation”, since layout optimization specific heuristics are employed it is presented as an iterative “member adding” method. The method requires a ground structure with minimal connectivity to be used in the first iteration; members are then added as required in subsequent iterations until the (provably) optimal solution is found.

Journal ArticleDOI
TL;DR: In this paper, a model of the foundations versus flexibility trade-off enables the competing options to be optimized by balancing the expected profits that may arise from future expansion, and the cost of enhancing the foundation.
Abstract: Infrastructure facilities are generally heavy, fixed, and normally irreversible once construction has been completed. As existing facilities, they may confront economic competition of an increased space demand and the need for future expansion. Due to economic-based irreversibility, the expansion of a constructed facility requires the foundation and, to a lesser degree, columns to be enhanced and options for expansion to be accounted for at the very beginning of construction. Enhancing the foundation and columns represents an up-front cost, but has a return in flexibility for future expansion. This trade-off can be viewed as an investment problem, in that a premium has to be paid first for an option that can be exercised later. A model of the foundations versus flexibility trade-off enables the competing options to be optimized by balancing the expected profits that may arise from future expansion, i.e., the value of flexibility, and the cost of enhancing the foundation. Use of the model is demonstrated for the construction of a public parking garage, with the optimal foundation size determined. The evolution of parking demand is modeled with a trinomial lattice. Stochastic dynamic programming is used to determine the optimal expansion process. A model that does not consider the value of flexibility is compared with two value-flexible models. The value of flexibility in this case study is so significant that failure to account for flexibility is not economical. Valuation modeling such as discounted cash flow analysis with uncertainty modeling is important to capitalize on the worth of flexibility.
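The lattice-plus-backward-recursion machinery can be illustrated with a stripped-down expansion option (all numbers invented; the paper's garage model is richer): demand moves up, sideways, or down each period on a recombining trinomial lattice, and at every node one compares exercising the expansion against the discounted continuation value, exactly as in American option pricing.

```python
import numpy as np

# Hypothetical calibration: demand multiplies by u, 1, or 1/u each period;
# a one-time expansion costs K and yields the current demand value D.
T, u = 10, 1.15
pu, pm, pd = 0.3, 0.4, 0.3
disc = 1.0 / 1.05
D0, K = 100.0, 110.0

# Node j at time t carries demand D0 * u**(j - t), j = 0..2t.
V = np.maximum(D0 * u ** (np.arange(2 * T + 1) - T) - K, 0.0)  # at t = T
for t in range(T - 1, -1, -1):
    D = D0 * u ** (np.arange(2 * t + 1) - t)
    cont = disc * (pd * V[0:2 * t + 1] + pm * V[1:2 * t + 2]
                   + pu * V[2:2 * t + 3])
    V = np.maximum(D - K, cont)   # exercise now vs. keep the flexibility
option_value = V[0]
```

Immediate exercise at t = 0 is worthless here (D0 < K), so the entire `option_value` is the value of flexibility — the quantity the paper balances against the up-front cost of stronger foundations.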

Journal ArticleDOI
TL;DR: The network simplex algorithm, stochastic simulation, and a genetic algorithm are integrated to produce a hybrid intelligent algorithm for solving the capacitated location-allocation problem with stochastic demands.

Proceedings ArticleDOI
01 Oct 2003
TL;DR: An overview is given of the genetic algorithm (GA) and of particle swarm optimization (PSO), a newer stochastic algorithm that has been shown to be a valuable addition to the electromagnetic design engineer's toolbox.
Abstract: Modern antenna designers are constantly challenged to seek optimum solutions for complex electromagnetic device designs. The temptation has grown because of ever-increasing advances in computational power. Standard brute-force design techniques are systematically being replaced by state-of-the-art optimization techniques. The ability to use numerical methods to accurately and efficiently characterize the relative quality of a particular design has encouraged EM engineers to apply stochastic global optimizers. The genetic algorithm (GA) is the most popular of the so-called evolutionary methods in the electromagnetics community. Recently, a new stochastic algorithm called particle swarm optimization (PSO) has been shown to be a valuable addition to the electromagnetic design engineer's toolbox. In this paper we provide an overview of both techniques and present some representative examples. Most of the material in this invited plenary-session paper is based on earlier published work by the author and his students at UCLA.
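A textbook global-best PSO — the kind of optimizer the paper surveys — is only a few lines. This generic sketch minimizes the standard sphere benchmark in place of an actual antenna cost model:

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    # Benchmark objective (a stand-in for an EM simulation-driven cost).
    return np.sum(x * x, axis=-1)

def pso(f, dim=4, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Generic global-best PSO (a textbook sketch, not the paper's code)."""
    x = rng.uniform(-5.0, 5.0, (swarm, dim))   # particle positions
    v = np.zeros_like(x)                        # velocities
    pbest, pbest_f = x.copy(), f(x)             # personal bests
    g = pbest[np.argmin(pbest_f)].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, dim))
        # Inertia + attraction to personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = f(x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(f(g))

best_x, best_f = pso(sphere)
```

In practice each call to `f` would be a full-wave EM simulation, which is why the relative cost per fitness evaluation, not the optimizer's bookkeeping, dominates design time.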

Journal ArticleDOI
TL;DR: This paper shows how one can formulate this problem as a Markov decision process with recourse that considers decision making throughout the process life cycle and at different hierarchical levels, and decomposes the problem into a sequence of single-period subproblems, each of which is a two-stage stochastic program with recourse.

Journal ArticleDOI
TL;DR: In this article, the essential elements of semidefinite programming as a computational tool for the analysis of systems and control problems are presented, with particular emphasis on general duality properties such as providing suboptimality or infeasibility certificates.

Journal ArticleDOI
TL;DR: Two classes of optimization models to maximize revenue in a restaurant (while controlling average waiting time as well as perceived fairness) that may violate the first-come-first-serve (FCFS) rule are developed.
Abstract: We develop two classes of optimization models to maximize revenue in a restaurant (while controlling average waiting time as well as perceived fairness) that may violate the first-come-first-serve (FCFS) rule. In the first class of models, we use integer programming, stochastic programming, and approximate dynamic programming methods to decide dynamically when, if at all, to seat an incoming party during the day of operation of a restaurant that does not accept reservations. In a computational study with simulated data, we show that optimization-based methods enhance revenue relative to the industry practice of FCFS by 0.11% to 2.22% for low-load factors, by 0.16% to 2.96% for medium-load factors, and by 7.65% to 13.13% for high-load factors, without increasing, and occasionally decreasing, waiting times compared to FCFS. The second class of models addresses reservations. We propose a two-step procedure: Use a stochastic gradient algorithm to decide a priori how many reservations to accept for a future time and then use approximate dynamic programming methods to decide dynamically when, if at all, to seat an incoming party during the day of operation. In a computational study involving real data from an Atlanta restaurant, the reservation model improves revenue relative to FCFS by 3.5% for low-load factors and 7.3% for high-load factors.

Book ChapterDOI
01 Jan 2003
TL;DR: This chapter explores the use of concepts from stochastic programming in the context of resource allocation problems that arise in freight transportation and focuses on the degree to which some techniques exploit the natural structure of these problems.
Abstract: Freight transportation is characterized by highly dynamic information processes: customers call in orders over time to move freight; the movement of freight over long distances is subject to random delays; equipment failures require last minute changes; and decisions are not always executed in the field according to plan. The high-dimensionality of the decisions involved has made transportation a natural application for the techniques of mathematical programming, but the challenge of modeling dynamic information processes has limited their success. In this chapter, we explore the use of concepts from stochastic programming in the context of resource allocation problems that arise in freight transportation. Since transportation problems are often quite large, we focus on the degree to which some techniques exploit the natural structure of these problems. Experimental work in the context of these applications is quite limited, so we highlight the techniques that appear to be the most promising.

Book
05 Dec 2003
TL;DR: In this article, the authors present an easily readable, up-to-date treatment of asset and wealth management in the presence of liabilities and other portfolio complexities using discrete-time, multi-period stochastic programming.
Abstract: All individuals and institutions face asset/liability management problems on a continuous basis. In this Research Foundation monograph, the author presents an easily readable, up-to-date treatment of asset and wealth management in the presence of liabilities and other portfolio complexities. The approach discussed and recommended is discrete-time, multiperiod stochastic programming. For most practical purposes, such models provide a superior alternative to other approaches, such as mean-variance, simulation, control theory, and continuous-time finance.

Journal ArticleDOI
TL;DR: In this paper, the authors present a tool set robust to changes in demand that considers a set of possible, discrete demand scenarios with associated probabilities, and determines the tools to purchase, under a budget constraint, to minimize weighted average unmet demand.
Abstract: In the semiconductor industry, capacity planning, the calculation of number of tools needed to manufacture forecasted product demands, is difficult because of sensitivity to product mix and uncertainty in future demand. Planning for a single demand profile can result in a large gap between planned capacity and actual capability when the realized product mix turns out differently from the one planned. This paper presents a method which accepts this uncertainty and uses stochastic integer programming to find a tool set robust to changes in demand. It considers a set of possible, discrete demand scenarios with associated probabilities, and determines the tools to purchase, under a budget constraint, to minimize weighted average unmet demand. The resulting robust tool set deals well with all the scenarios at no or minimal additional cost compared to that for a single demand profile. We also discuss the modifications of conventional business processes, needed to implement this method for dealing explicitly with uncertainty in demand.

Journal ArticleDOI
TL;DR: A Branch-and-Fix Coordination approach is introduced for coordinating the selection of the branching nodes and branching variables in the scenario subproblems to be jointly optimized.