scispace - formally typeset

Showing papers on "Stochastic programming published in 2006"


Journal ArticleDOI
TL;DR: A large deviation-type approximation, referred to as “Bernstein approximation,” of the chance constrained problem is built that is convex and efficiently solvable and extended to the case of ambiguous chance constrained problems, where the random perturbations are independent with the collection of distributions known to belong to a given convex compact set.
Abstract: We consider a chance constrained problem, where one seeks to minimize a convex objective over solutions satisfying, with a given close to one probability, a system of randomly perturbed convex constraints. This problem may happen to be computationally intractable; our goal is to build its computationally tractable approximation, i.e., an efficiently solvable deterministic optimization program with the feasible set contained in the chance constrained problem. We construct a general class of such convex conservative approximations of the corresponding chance constrained problem. Moreover, under the assumptions that the constraints are affine in the perturbations and the entries in the perturbation vector are independent-of-each-other random variables, we build a large deviation-type approximation, referred to as “Bernstein approximation,” of the chance constrained problem. This approximation is convex and efficiently solvable. We propose a simulation-based scheme for bounding the optimal value in the chance constrained problem and report numerical experiments aimed at comparing the Bernstein and well-known scenario approximation approaches. Finally, we extend our construction to the case of ambiguous chance constrained problems, where the random perturbations are independent with the collection of distributions known to belong to a given convex compact set rather than to be known exactly, while the chance constraint should be satisfied for every distribution given by this set.
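As an illustration of the scenario-approximation baseline that this paper compares against (a toy sketch; the one-dimensional problem, sampling model, and numbers are invented for the example, not taken from the paper): to approximate a chance constraint P(ξ ≤ x) ≥ 1 − ε, the scenario approach draws N samples and enforces the constraint at every one of them, yielding a conservative feasible solution.

```python
import random

def scenario_approximation(samples):
    # Minimize x subject to "x >= xi with high probability":
    # the scenario approach enforces the constraint at every
    # sampled realization, so the minimizer is simply the
    # largest observed sample (a conservative approximation).
    return max(samples)

random.seed(0)
xi = [random.gauss(0.0, 1.0) for _ in range(1000)]
x_star = scenario_approximation(xi)
# x_star is feasible for every drawn scenario by construction
```

Larger sample sizes give stronger probabilistic feasibility guarantees at the cost of increasing conservatism, which is the trade-off the paper's Bernstein approximation is designed to improve on.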

1,099 citations


Journal ArticleDOI
TL;DR: It is shown that the structure of the optimal robust policy is of the same base-stock character as the optimal stochastic policy for a wide range of inventory problems in single installations, series systems, and general supply chains.
Abstract: We propose a general methodology based on robust optimization to address the problem of optimally controlling a supply chain subject to stochastic demand in discrete time. This problem has been studied in the past using dynamic programming, which suffers from dimensionality problems and assumes full knowledge of the demand distribution. The proposed approach takes into account the uncertainty of the demand in the supply chain without assuming a specific distribution, while remaining highly tractable and providing insight into the corresponding optimal policy. It also allows adjustment of the level of robustness of the solution to trade off performance and protection against uncertainty. An attractive feature of the proposed approach is its numerical tractability, especially when compared to multidimensional dynamic programming problems in complex supply chains, as the robust problem is of the same difficulty as the nominal problem, that is, a linear programming problem when there are no fixed costs, and a mixed-integer programming problem when fixed costs are present. Furthermore, we show that the optimal policy obtained in the robust approach is identical to the optimal policy obtained in the nominal case for a modified and explicitly computable demand sequence. In this way, we show that the structure of the optimal robust policy is of the same base-stock character as the optimal stochastic policy for a wide range of inventory problems in single installations, series systems, and general supply chains. Preliminary computational results are very promising.
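The base-stock structure referred to above can be illustrated with a toy single-installation simulation (a sketch with invented unit costs and a zero-lead-time assumption; this is not the paper's robust formulation, only the policy shape it recovers):

```python
import random

def simulate_base_stock(S, demands, h=1.0, b=4.0):
    """Simulate an order-up-to-S (base-stock) policy and return the
    total holding + backlog cost. h and b are illustrative unit
    holding and backlog costs; replenishment lead time is zero."""
    inventory = S
    cost = 0.0
    for d in demands:
        inventory -= d
        cost += h * max(inventory, 0) + b * max(-inventory, 0)
        inventory = S  # order up to the base-stock level S
    return cost

random.seed(1)
demands = [random.randint(0, 10) for _ in range(50)]
costs = {S: simulate_base_stock(S, demands) for S in range(0, 15)}
best_S = min(costs, key=costs.get)
```

Sweeping S and picking the cheapest level, as done here by brute force, mimics the one-parameter structure of the optimal policy; the paper's contribution is computing such policies from a robust model without knowing the demand distribution.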

619 citations


Journal ArticleDOI
TL;DR: A recently developed software tool executing on a computational grid is used to solve many large instances of these problems, allowing for high-quality solutions and to verify optimality and near-optimality of the computed solutions in various ways.
Abstract: We investigate the quality of solutions obtained from sample-average approximations to two-stage stochastic linear programs with recourse. We use a recently developed software tool executing on a computational grid to solve many large instances of these problems, allowing us to obtain high-quality solutions and to verify optimality and near-optimality of the computed solutions in various ways.
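The sample-average approximation being assessed can be sketched on a toy newsvendor instance (prices and the demand model are invented for illustration; real studies such as this one use large two-stage linear programs):

```python
import random

def saa_newsvendor(demand_samples, price=5.0, cost=3.0):
    """Sample-average approximation of a newsvendor problem: choose
    the order quantity maximizing average profit over the sampled
    demands. For this problem the SAA optimum is a
    (1 - cost/price)-quantile of the empirical demand distribution."""
    def avg_profit(q):
        return sum(price * min(q, d) - cost * q
                   for d in demand_samples) / len(demand_samples)
    candidates = sorted(set(demand_samples))
    return max(candidates, key=avg_profit)

random.seed(2)
demand = [random.randint(20, 80) for _ in range(500)]
q_star = saa_newsvendor(demand)
```

Re-solving with independent sample batches and comparing the resulting objective estimates is the basic mechanism behind the optimality-verification techniques the paper applies at much larger scale.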

449 citations


Journal ArticleDOI
TL;DR: The robust sampled problem is shown to be a good approximation for the ambiguous chance constrained problem with a high probability using the Strassen-Dudley Representation Theorem that states that when the distributions of two random variables are close in the Prohorov metric one can construct a coupling of the random variables such that the samples are close with ahigh probability.
Abstract: In this paper we study ambiguous chance constrained problems where the distributions of the random parameters in the problem are themselves uncertain. We focus primarily on the special case where the uncertainty set of distributions is a ball, in the Prohorov metric ρp, around a central (nominal) measure. The ambiguous chance constrained problem is approximated by a robust sampled problem where each constraint is a robust constraint centered at a sample drawn according to the central measure. The main contribution of this paper is to show that the robust sampled problem is a good approximation for the ambiguous chance constrained problem with a high probability. This result is established using the Strassen-Dudley Representation Theorem, which states that when the distributions of two random variables are close in the Prohorov metric, one can construct a coupling of the random variables such that the samples are close with a high probability. We also show that the robust sampled problem can be solved efficiently both in theory and in practice.

337 citations


Journal ArticleDOI
TL;DR: In this article, an optimization-via-simulation algorithm, called COMPASS, was proposed for use when the performance measure is estimated via a stochastic, discrete-event simulation and the decision variables are integer ordered.
Abstract: We propose an optimization-via-simulation algorithm, called COMPASS, for use when the performance measure is estimated via a stochastic, discrete-event simulation, and the decision variables are integer ordered. We prove that COMPASS converges to the set of local optimal solutions with probability 1 for both terminating and steady-state simulation, and for both fully constrained problems and partially constrained or unconstrained problems under mild conditions.

261 citations


Journal ArticleDOI
TL;DR: In this article, an interval-parameter multi-stage stochastic linear programming (IMSLP) method was developed for water resources decision making under uncertainty. Penalties are exercised with recourse against any infeasibility, which permits in-depth analyses of various policy scenarios associated with different levels of economic consequences when the promised water allocation targets are violated.

261 citations


01 Jan 2006
TL;DR: The purpose of this tutorial is to present a mathematical framework that is well-suited to the limited information available in real-life problems and captures the decision-maker’s attitude towards uncertainty; the proposed approach builds upon recent developments in robust and data-driven optimization.
Abstract: Traditional models of decision-making under uncertainty assume perfect information, i.e., accurate values for the system parameters and specific probability distributions for the random variables. However, such precise knowledge is rarely available in practice, and a strategy based on erroneous inputs might be infeasible or exhibit poor performance when implemented. The purpose of this tutorial is to present a mathematical framework that is well-suited to the limited information available in real-life problems and captures the decision-maker’s attitude towards uncertainty; the proposed approach builds upon recent developments in robust and data-driven optimization. In robust optimization, random variables are modeled as uncertain parameters belonging to a convex uncertainty set and the decision-maker protects the system against the worst case within that set. Data-driven optimization uses observations of the random variables as direct inputs to the mathematical programming problems. The first part of the tutorial describes the robust optimization paradigm in detail in single-stage and multi-stage problems. In the second part, we address the issue of constructing uncertainty sets using historical realizations of the random variables and investigate the connection between convex sets, in particular polyhedra, and a specific class of risk measures.
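In the spirit of the tutorial's first part, the basic robust-optimization reduction for a box uncertainty set can be written down directly (a minimal sketch; the coefficient ranges and the decision vector are made up for the example):

```python
def robust_box_lhs(x, a_lo, a_hi):
    """Worst-case value of a.x when each coefficient a_i ranges
    independently over [a_lo[i], a_hi[i]] (a box uncertainty set).
    The supremum picks a_hi[i] where x_i >= 0 and a_lo[i] where
    x_i < 0, so the robust constraint a.x <= b for all a in the box
    reduces to this single deterministic inequality."""
    return sum(hi * xi if xi >= 0 else lo * xi
               for xi, lo, hi in zip(x, a_lo, a_hi))

x = [1.0, -2.0, 0.5]
a_lo = [0.5, 1.0, -1.0]
a_hi = [1.5, 3.0, 2.0]
worst = robust_box_lhs(x, a_lo, a_hi)
# worst = 1.5*1.0 + 1.0*(-2.0) + 2.0*0.5 = 0.5
```

Ellipsoidal or polyhedral uncertainty sets lead to analogous deterministic counterparts (second-order cone or linear constraints respectively), which is the central tractability argument of the robust paradigm.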

260 citations


Journal ArticleDOI
TL;DR: Monte Carlo simulations demonstrate the viability of the genetic algorithm by showing that it consistently and quickly provides good feasible solutions, which makes the real time implementation for high-dimensional problems feasible.

256 citations


Journal ArticleDOI
TL;DR: Under the assumption that the stochastic parameters are independently distributed, it is shown that two-stage stochastic programming problems are #P-hard and certain multi-stage stochastic programming problems are PSPACE-hard.
Abstract: Stochastic programming is the subfield of mathematical programming that considers optimization in the presence of uncertainty. During the last four decades a vast quantity of literature on the subject has appeared. Developments in the theory of computational complexity allow us to establish the theoretical complexity of a variety of stochastic programming problems studied in this literature. Under the assumption that the stochastic parameters are independently distributed, we show that two-stage stochastic programming problems are #P-hard. Under the same assumption we show that certain multi-stage stochastic programming problems are PSPACE-hard. The problems we consider are non-standard in that distributions of stochastic parameters in later stages depend on decisions made in earlier stages.

245 citations


Journal ArticleDOI
TL;DR: This work presents a hybrid mixed-integer disjunctive programming formulation for the stochastic program corresponding to this class of problems and hence extends the stochastic programming framework.
Abstract: We address a class of problems where decisions have to be optimized over a time horizon given that the future is uncertain and that the optimization decisions influence the time of information discovery for a subset of the uncertain parameters. The standard approach to formulate stochastic programs is based on the assumption that the stochastic process is independent of the optimization decisions, which is not true for the class of problems under consideration. We present a hybrid mixed-integer disjunctive programming formulation for the stochastic program corresponding to this class of problems and hence extend the stochastic programming framework. A set of theoretical properties that lead to reduction in the size of the model is identified. A Lagrangean duality based branch and bound algorithm is also presented.

235 citations


Journal ArticleDOI
TL;DR: The extension of Robust Optimization methodology developed in this paper opens up new possibilities to solve efficiently multi-stage finite-horizon uncertain optimization problems, in particular, to analyze and to synthesize linear controllers for discrete time dynamical systems.
Abstract: In this paper, we propose a new methodology for handling optimization problems with uncertain data. With the usual Robust Optimization paradigm, one looks for the decisions ensuring a required performance for all realizations of the data from a given bounded uncertainty set, whereas with the proposed approach, we require also a controlled deterioration in performance when the data is outside the uncertainty set.The extension of Robust Optimization methodology developed in this paper opens up new possibilities to solve efficiently multi-stage finite-horizon uncertain optimization problems, in particular, to analyze and to synthesize linear controllers for discrete time dynamical systems.

Journal ArticleDOI
TL;DR: This paper considers a stochastic integer programming model for the airline crew scheduling problem and develops a branching algorithm to identify expensive flight connections and find alternative solutions that better withstand disruptions.
Abstract: Traditional methods model the billion-dollar airline crew scheduling problem as deterministic and do not explicitly include information on potential disruptions. Instead of modeling the crew scheduling problem as deterministic, we consider a stochastic crew scheduling model and devise a solution methodology for integrating disruptions in the evaluation of crew schedules. The goal is to use that information to find robust solutions that better withstand disruptions. Such an approach is important because we can proactively consider the effects of certain scheduling decisions. By identifying more robust schedules, cascading delay effects are minimized. In this paper we describe our stochastic integer programming model for the airline crew scheduling problem and develop a branching algorithm to identify expensive flight connections and find alternative solutions. The branching algorithm uses the structure of the problem to branch simultaneously on multiple variables without invalidating the optimality of the algorithm. We present computational results demonstrating the effectiveness of our branching algorithm.

Journal ArticleDOI
TL;DR: It is proved that the classical mean-variance criterion leads to computational intractability even in the simplest stochastic programs, and a number of alternative mean-risk functions are shown to be computationally tractable using slight variants of existing stochastic programming decomposition algorithms.
Abstract: Traditional stochastic programming is risk neutral in the sense that it is concerned with the optimization of an expectation criterion. A common approach to addressing risk in decision making problems is to consider a weighted mean-risk objective, where some dispersion statistic is used as a measure of risk. We investigate the computational suitability of various mean-risk objective functions in addressing risk in stochastic programming models. We prove that the classical mean-variance criterion leads to computational intractability even in the simplest stochastic programs. On the other hand, a number of alternative mean-risk functions are shown to be computationally tractable using slight variants of existing stochastic programming decomposition algorithms. We propose decomposition-based parametric cutting plane algorithms to generate mean-risk efficient frontiers for two particular classes of mean-risk objectives.
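A weighted mean-risk objective over a discrete scenario set is easy to state concretely. The sketch below uses the upper semideviation as the dispersion statistic, one of the tractable choices of the kind the paper studies (the scenario costs, probabilities, and risk weight are invented for the example):

```python
def mean_risk(costs, probs, lam=0.5):
    """Weighted mean-risk objective E[Z] + lam * risk(Z) over a
    discrete scenario set, using the upper semideviation
    E[max(Z - E[Z], 0)] as the (tractable) dispersion statistic."""
    mean = sum(p * c for p, c in zip(probs, costs))
    semidev = sum(p * max(c - mean, 0.0) for p, c in zip(probs, costs))
    return mean + lam * semidev

costs = [10.0, 20.0, 40.0]
probs = [0.5, 0.3, 0.2]
# mean = 19.0, semidev = 0.3*1 + 0.2*21 = 4.5, objective = 21.25
val = mean_risk(costs, probs, lam=0.5)
```

Sweeping the weight lam and re-optimizing traces out a mean-risk efficient frontier, which is what the paper's parametric cutting plane algorithms generate for full stochastic programs.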

Journal ArticleDOI
TL;DR: This paper considers Conditional Value-at-Risk as risk measure in the framework of two-stage stochastic integer programming, and presents an explicit mixed-integer linear programming formulation of the problem when the probability distribution is discrete and finite.
Abstract: In classical two-stage stochastic programming the expected value of the total costs is minimized. Recently, mean-risk models - studied in mathematical finance for several decades - have attracted attention in stochastic programming. We consider Conditional Value-at-Risk as risk measure in the framework of two-stage stochastic integer programming. The paper addresses structure, stability, and algorithms for this class of models. In particular, we study continuity properties of the objective function, both with respect to the first-stage decisions and the integrating probability measure. Further, we present an explicit mixed-integer linear programming formulation of the problem when the probability distribution is discrete and finite. Finally, a solution algorithm based on Lagrangean relaxation of nonanticipativity is proposed.
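For a discrete and finite distribution, the Conditional Value-at-Risk used here has a direct computational form via the Rockafellar-Uryasev minimization (the cost distribution below is an invented example):

```python
def cvar(costs, probs, alpha=0.95):
    """Conditional Value-at-Risk of a discrete cost distribution via
    the Rockafellar-Uryasev formula:
        CVaR_alpha(Z) = min_t  t + E[max(Z - t, 0)] / (1 - alpha).
    For a finite distribution the minimizing t can be taken to be
    one of the realized costs, so a scan over them suffices."""
    def objective(t):
        tail = sum(p * max(c - t, 0.0) for p, c in zip(probs, costs))
        return t + tail / (1.0 - alpha)
    return min(objective(t) for t in costs)

costs = [0.0, 10.0, 100.0]
probs = [0.90, 0.05, 0.05]
c95 = cvar(costs, probs, alpha=0.95)  # average cost in the worst 5% tail
```

The same minimization, written with an auxiliary variable t and linearized max terms, is exactly what yields the mixed-integer linear programming formulation the paper presents for the two-stage case.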

Journal ArticleDOI
TL;DR: This paper discusses alternative decomposition methods in which the second-stage integer subproblems are solved using branch-and-cut methods, and lays the foundation for two-stage stochastic mixed-integer programs.
Abstract: Decomposition has proved to be one of the more effective tools for the solution of large-scale problems, especially those arising in stochastic programming. A decomposition method with wide applicability is Benders' decomposition, which has been applied to both stochastic programming and integer programming problems. However, this method of decomposition relies on convexity of the value function of linear programming subproblems. This paper is devoted to a class of problems in which the second-stage subproblem(s) may impose integer restrictions on some variables. The value function of such integer subproblem(s) is not convex, and new approaches must be designed. In this paper, we discuss alternative decomposition methods in which the second-stage integer subproblems are solved using branch-and-cut methods. One of the main advantages of our decomposition scheme is that Stochastic Mixed-Integer Programming (SMIP) problems can be solved by dividing a large problem into smaller MIP subproblems that can be solved in parallel. This paper lays the foundation for such decomposition methods for two-stage stochastic mixed-integer programs.
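The Benders (L-shaped) mechanism this line of work builds on can be sketched on a tiny continuous two-stage problem: minimize c*x plus expected linear recourse, adding one subgradient cut per iteration. This is an illustrative toy (the instance, the grid-search "master solver", and all numbers are invented; real implementations solve LP masters and MIP subproblems):

```python
def l_shaped(c, q, demands, probs, x_max=100.0, tol=1e-6):
    """Tiny L-shaped (Benders) sketch for
        min_{0 <= x <= x_max}  c*x + E[ q * max(d - x, 0) ].
    Each iteration adds a cutting plane theta >= g*x + h from a
    subgradient of the expected recourse, then re-solves the 1-D
    master problem by fine grid search (a stand-in for an LP solver)."""
    def expected_recourse(x):
        return sum(p * q * max(d - x, 0.0) for p, d in zip(probs, demands))

    cuts = []  # pairs (g, h) meaning theta >= g*x + h
    x = 0.0
    for _ in range(100):
        # subgradient of the expected recourse at the current x
        g = -sum(p * q for p, d in zip(probs, demands) if d > x)
        h = expected_recourse(x) - g * x
        cuts.append((g, h))
        candidates = [i * x_max / 1000 for i in range(1001)]
        x_new = min(candidates,
                    key=lambda t: c * t + max(gc * t + hc for gc, hc in cuts))
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

x_star = l_shaped(1.0, 3.0, [30.0, 60.0, 90.0], [0.5, 0.3, 0.2])
```

With integer second-stage variables the recourse value function loses this convexity, which is precisely why the paper replaces plain subgradient cuts with branch-and-cut-based machinery.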

Journal ArticleDOI
TL;DR: This work considers the case when customers can call in orders during the daily operations, and a heuristic solution method is developed where sample scenarios are generated, solved heuristically and combined iteratively to form a solution to the overall problem.
Abstract: The statement of the standard vehicle routing problem cannot always capture all aspects of real-world applications. As a result, extensions or modifications to the model are warranted. Here we consider the case when customers can call in orders during the daily operations; i.e., both customer locations and demands may be unknown in advance. This is modeled as a combined dynamic and stochastic programming problem, and a heuristic solution method is developed where sample scenarios are generated, solved heuristically, and combined iteratively to form a solution to the overall problem.

Journal ArticleDOI
TL;DR: Estimates of the sample sizes required to solve a multistage stochastic programming problem with a given accuracy by the (conditional sampling) sample average approximation method are derived based on Cramér's Large Deviations Theorem.

Journal Article
TL;DR: In this paper, machine learning models are shown to accurately predict the run-time distributions of incomplete and randomized search methods, such as stochastic local search algorithms; information about an algorithm's parameter settings can be incorporated into a model and used to automatically adjust the algorithm's parameters on a per-instance basis in order to optimize its performance.
Abstract: Machine learning can be used to build models that predict the run-time of search algorithms for hard combinatorial problems. Such empirical hardness models have previously been studied for complete, deterministic search algorithms. In this work, we demonstrate that such models can also make surprisingly accurate predictions of the run-time distributions of incomplete and randomized search methods, such as stochastic local search algorithms. We also show for the first time how information about an algorithm's parameter settings can be incorporated into a model, and how such models can be used to automatically adjust the algorithm's parameters on a per-instance basis in order to optimize its performance. Empirical results for Novelty+ and SAPS on structured and unstructured SAT instances show very good predictive performance and significant speedups of our automatically determined parameter settings when compared to the default and best fixed distribution-specific parameter settings.

Book ChapterDOI
25 Sep 2006
TL;DR: It is demonstrated for the first time how information about an algorithm's parameter settings can be incorporated into a model, and how such models can be used to automatically adjust the algorithm's parameters on a per-instance basis in order to optimize its performance.
Abstract: Machine learning can be used to build models that predict the run-time of search algorithms for hard combinatorial problems. Such empirical hardness models have previously been studied for complete, deterministic search algorithms. In this work, we demonstrate that such models can also make surprisingly accurate predictions of the run-time distributions of incomplete and randomized search methods, such as stochastic local search algorithms. We also show for the first time how information about an algorithm's parameter settings can be incorporated into a model, and how such models can be used to automatically adjust the algorithm's parameters on a per-instance basis in order to optimize its performance. Empirical results for Novelty+ and SAPS on structured and unstructured SAT instances show very good predictive performance and significant speedups of our automatically determined parameter settings when compared to the default and best fixed distribution-specific parameter settings.

Journal ArticleDOI
TL;DR: The simulated annealing (SA) approach, which is one of the leading stochastic search methods, is employed for specifying a large-scale linear regression model and the results are compared to the results of the more common stepwise regression (SWR) approach for model specification.

Proceedings ArticleDOI
11 Jun 2006
TL;DR: In this paper, a stochastic bottom-up electricity market model is presented to optimise the unit commitment considering five kinds of markets and taking explicitly into account the stochastic behaviour of the wind power generation and of the prediction error.
Abstract: A large share of integrated wind power causes technical and financial impacts on the operation of the existing electricity system due to the fluctuating behaviour and unpredictability of wind power. The presented stochastic bottom-up electricity market model optimises the unit commitment considering five kinds of markets and taking explicitly into account the stochastic behaviour of the wind power generation and of the prediction error. It can be used for the evaluation of varying electricity prices and system costs due to wind power integration and for the investigation of integration measures.

Journal ArticleDOI
TL;DR: In this paper, Monte Carlo sampling-based procedures for assessing solution quality in stochastic programs are developed. Quality is defined via the optimality gap, and the procedures' output is a confidence interval on this gap.
Abstract: Determining whether a solution is of high quality (optimal or near optimal) is fundamental in optimization theory and algorithms. In this paper, we develop Monte Carlo sampling-based procedures for assessing solution quality in stochastic programs. Quality is defined via the optimality gap and our procedures' output is a confidence interval on this gap. We review a multiple-replications procedure that requires solution of, say, 30 optimization problems and then, we present a result that justifies a computationally simplified single-replication procedure that only requires solving one optimization problem. Even though the single replication procedure is computationally significantly less demanding, the resulting confidence interval might have low coverage probability for small sample sizes for some problems. We provide variants of this procedure that require two replications instead of one and that perform better empirically. We present computational results for a newsvendor problem and for two-stage stochastic linear programs from the literature. We also discuss when the procedures perform well and when they fail, and we propose using ɛ-optimal solutions to strengthen the performance of our procedures.
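The single-replication idea can be illustrated on a toy problem where everything is computable by hand: min over x of E[(x − ξ)²]. The sketch solves the sampled problem (whose optimum is the sample mean), averages the pointwise cost differences against a candidate, and adds a normal-approximation error term (the problem, candidate, and z-value are invented for the example, not the paper's test instances):

```python
import random, statistics

def gap_upper_bound(x_hat, samples, z=1.645):
    """One-replication upper bound on the optimality gap of x_hat
    for min_x E[(x - xi)^2]: the sampled problem's optimum is the
    sample mean, so average the pointwise differences and add an
    approximate one-sided normal confidence half-width."""
    x_n = statistics.fmean(samples)  # optimum of the sampled problem
    diffs = [(x_hat - s) ** 2 - (x_n - s) ** 2 for s in samples]
    gap = statistics.fmean(diffs)
    half_width = z * statistics.stdev(diffs) / len(diffs) ** 0.5
    return gap + half_width

random.seed(3)
xi = [random.gauss(2.0, 1.0) for _ in range(2000)]
bound = gap_upper_bound(2.0, xi)  # candidate equals the true optimum
```

Because the candidate here is the true optimum, the bound should be a small positive number driven only by sampling error, which is exactly the behavior the paper's coverage analysis is about.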

Journal ArticleDOI
TL;DR: A stochastic model of freeway traffic at a time scale and of a level of detail suitable for on-line estimation, routing and ramp metering control is presented.
Abstract: Traffic flow on freeways is a non-linear, many-particle phenomenon, with complex interactions between vehicles. This paper presents a stochastic model of freeway traffic at a time scale and of a level of detail suitable for on-line estimation, routing and ramp metering control. The freeway is considered as a network of interconnected components, corresponding to one-way road links consisting of consecutively connected short sections (cells). The compositional model proposed here extends the Daganzo cell transmission model by defining sending and receiving functions explicitly as random variables, and by also specifying the dynamics of the average speed in each cell. Simple stochastic equations describing the macroscopic traffic behavior of each cell, as well as its interaction with neighboring cells are obtained. This will allow the simulation of quite large road networks by composing many links. The model is validated over synthetic data with abrupt changes in the number of lanes and over real traffic data sets collected from a Belgian freeway.
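One update step of a cell-transmission-style model with randomized sending and receiving functions can be sketched as follows (a toy illustration: the fundamental-diagram parameters, noise model, and densities are invented, and the paper's speed dynamics are omitted):

```python
import random

def ctm_step(densities, v_free=1.0, w=0.5, rho_max=4.0, q_max=1.0,
             noise=0.05):
    """One update of a cell-transmission-style model with randomized
    sending/receiving functions. Sending: min(v_free*rho, q_max);
    receiving: min(w*(rho_max - rho), q_max); each perturbed by
    multiplicative Gaussian noise truncated at zero. The realized
    inter-cell flow is the minimum of the two."""
    flows = []
    for up, down in zip(densities[:-1], densities[1:]):
        send = min(v_free * up, q_max) * max(0.0, 1.0 + random.gauss(0.0, noise))
        recv = min(w * (rho_max - down), q_max) * max(0.0, 1.0 + random.gauss(0.0, noise))
        flows.append(min(send, recv))
    new = densities[:]
    for i, f in enumerate(flows):
        new[i] -= f       # vehicles leaving cell i
        new[i + 1] += f   # vehicles entering cell i+1
    return new

random.seed(4)
rho = [2.0, 1.0, 3.0, 0.5]
rho_next = ctm_step(rho)
# on a closed stretch (no ramps) total density is conserved
```

Composing many such cells into links, and links into a network, gives the kind of compositional simulation the paper validates against Belgian freeway data.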

Book ChapterDOI
01 Jan 2006
TL;DR: In this article, the authors consider optimization problems involving coherent measures of risk and derive necessary and sufficient conditions of optimality for these problems, and discuss the nature of the non-anticipativity constraints.
Abstract: We consider optimization problems involving coherent measures of risk. We derive necessary and sufficient conditions of optimality for these problems, and we discuss the nature of the nonanticipativity constraints. Next, we introduce dynamic measures of risk, and formulate multistage optimization problems involving these measures. Conditions similar to dynamic programming equations are developed. The theoretical considerations are illustrated with many examples of mean-risk models applied in practice.

Journal ArticleDOI
TL;DR: It is shown that for a broad class of 2-stage linear models with recourse, one can, for any ε > 0, in time polynomial in 1/ε and the size of the input, compute a solution of value within a factor (1 + ε) of the optimum, in spite of the fact that exponentially many second-stage scenarios may occur.
Abstract: Stochastic optimization problems attempt to model uncertainty in the data by assuming that the input is specified by a probability distribution. We consider the well-studied paradigm of 2-stage models with recourse: first, given only distributional information about (some of) the data one commits on initial actions, and then once the actual data is realized (according to the distribution), further (recourse) actions can be taken. We show that for a broad class of 2-stage linear models with recourse, one can, for any ε > 0, in time polynomial in 1/ε and the size of the input, compute a solution of value within a factor (1 + ε) of the optimum, in spite of the fact that exponentially many second-stage scenarios may occur. In conjunction with a suitable rounding scheme, this yields the first approximation algorithms for 2-stage stochastic integer optimization problems where the underlying random data is given by a “black box” and no restrictions are placed on the costs in the two stages. Our rounding approach for stochastic integer programs shows that an approximation algorithm for a deterministic analogue yields, with a small constant-factor loss, provably near-optimal solutions for the stochastic generalization. Among the applications we consider are stochastic versions of the multicommodity flow, set cover, vertex cover, and facility location problems.

Journal ArticleDOI
TL;DR: The goal is to provide an effective computational tool for the optimization of a large-scale transit route network to minimize transfers with reasonable route directness while maximizing service coverage.
Abstract: This paper presents a stochastic mathematical methodology for transit route network optimization. The goal is to provide an effective computational tool for optimizing a large-scale transit route network so as to minimize transfers with reasonable route directness while maximizing service coverage. The methodology includes representation of transit route network solution search spaces, representation of transit route and network constraints, and a stochastic search scheme based on an integrated simulated annealing and genetic algorithm solution search method. The methodology has been implemented as a computer program, tested using previously published results, and applied to a large-scale realistic network optimization problem.
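The simulated-annealing half of the hybrid search scheme described above follows a standard recipe that can be sketched generically (the toy objective, neighborhood, and cooling schedule below are invented for illustration; the paper applies this to encoded route networks):

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.99,
                        iters=2000):
    """Generic simulated-annealing search: always accept downhill
    moves, accept uphill moves with probability exp(-delta/T), and
    cool the temperature T geometrically."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x)
        fy = cost(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

random.seed(5)
# Toy objective: minimize (x - 3)^2 over the integers via +/-1 moves.
best, fbest = simulated_annealing(lambda x: (x - 3) ** 2,
                                  lambda x: x + random.choice([-1, 1]),
                                  x0=50)
```

In the hybrid scheme, the genetic-algorithm component supplies diversified candidate networks while annealing-style acceptance governs local refinement of each one.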

Journal ArticleDOI
TL;DR: Quantitative stability of linear multistage stochastic programs is studied and it is shown that the infima of such programs depend (locally) Lipschitz continuously on the sum of an $L_{r}$-distance and a distance measure for the filtrations of the original and approximate stochastic processes.
Abstract: Quantitative stability of linear multistage stochastic programs is studied. It is shown that the infima of such programs depend (locally) Lipschitz continuously on the sum of an $L_{r}$-distance and a distance measure for the filtrations of the original and approximate stochastic (input) processes. Various aspects of the result are discussed and an illustrative example is given. Consequences for the reduction of scenario trees are also discussed.

Journal ArticleDOI
TL;DR: Experimental results show how dimensionality issues, given by the large number of basins and realistic modeling of the stochastic inflows, can be mitigated by employing neural approximators for the value functions, and efficient discretizations of the state space, such as orthogonal arrays, Latin hypercube designs and low-discrepancy sequences.

Book
09 Oct 2006
TL;DR: This book provides a practical introduction to computationally solving discrete optimization problems using dynamic programming; from the examples presented, readers should more easily be able to formulate dynamic programming solutions to their own problems of interest.
Abstract: This book provides a practical introduction to computationally solving discrete optimization problems using dynamic programming. From the examples presented, readers should more easily be able to formulate dynamic programming solutions to their own problems of interest. We also provide and describe the design, implementation, and use of a software tool that has been used to numerically solve all of the problems presented earlier in the book.
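The flavor of the examples such a book works through can be conveyed by the classic 0/1 knapsack recursion, a standard dynamic program (this particular instance is an invented illustration, not taken from the book):

```python
def knapsack(values, weights, capacity):
    """Classic dynamic-programming recursion for the 0/1 knapsack:
    best[w] holds the maximum value achievable with total weight <= w.
    Iterating weights downward ensures each item is used at most once."""
    best = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        for w in range(capacity, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + v)
    return best[capacity]

# items as (value, weight): (60, 10), (100, 20), (120, 30)
opt = knapsack([60, 100, 120], [10, 20, 30], 50)
# optimal choice: the second and third items, total value 220
```

The same pattern, a state space, a stage-by-stage recursion over it, and a table of optimal values, underlies every discrete optimization example the book formulates.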

Journal ArticleDOI
TL;DR: In this article, a stochastic version of the classical multi-item Capacitated Lot-Sizing Problem (CLSP) is considered, where demand uncertainty is explicitly modeled through a scenario tree.
Abstract: We consider a stochastic version of the classical multi-item Capacitated Lot-Sizing Problem (CLSP). Demand uncertainty is explicitly modeled through a scenario tree, resulting in a multi-stage mixed-integer stochastic programming model with recourse. We propose a plant-location-based model formulation and a heuristic solution approach based on a fix-and-relax strategy. We report computational experiments to assess not only the viability of the heuristic, but also the advantage (if any) of the stochastic programming model with respect to the considerably simpler deterministic model based on expected value of demand. To this aim we use a simulation architecture, whereby the production plan obtained from the optimization models is applied in a realistic rolling horizon framework, allowing for out-of-sample scenarios and errors in the model of demand uncertainty. We also experiment with different approaches to generate the scenario tree. The results suggest that there is an interplay between different manager...