Journal ArticleDOI

Distributionally robust joint chance constraints with second-order moment information

01 Feb 2013-Mathematical Programming (Springer-Verlag)-Vol. 137, Iss: 1, pp 167-198
TL;DR: It is proved that the Worst-Case CVaR approximation is exact for robust individual chance constraints with concave or (not necessarily concave) quadratic constraint functions, and it is demonstrated that the Worst-Case CVaR can be computed efficiently for these classes of constraint functions.
Abstract: We develop tractable semidefinite programming based approximations for distributionally robust individual and joint chance constraints, assuming that only the first- and second-order moments as well as the support of the uncertain parameters are given. It is known that robust chance constraints can be conservatively approximated by Worst-Case Conditional Value-at-Risk (CVaR) constraints. We first prove that this approximation is exact for robust individual chance constraints with concave or (not necessarily concave) quadratic constraint functions, and we demonstrate that the Worst-Case CVaR can be computed efficiently for these classes of constraint functions. Next, we study the Worst-Case CVaR approximation for joint chance constraints. This approximation affords intuitive dual interpretations and is provably tighter than two popular benchmark approximations. The tightness depends on a set of scaling parameters, which can be tuned via a sequential convex optimization algorithm. We show that the approximation becomes essentially exact when the scaling parameters are chosen optimally and that the Worst-Case CVaR can be evaluated efficiently if the scaling parameters are kept constant. We evaluate our joint chance constraint approximation in the context of a dynamic water reservoir control problem and numerically demonstrate its superiority over the two benchmark approximations.
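
For a constraint function that is linear in the uncertain parameters, the worst-case chance constraint over all distributions with given mean and covariance reduces to a single second-order cone constraint (the worst-case CVaR constraint, which the paper proves exact in this case): the mean loss plus sqrt((1-eps)/eps) standard deviations must stay below the threshold. The following Python sketch (numpy/cvxpy; the portfolio data are illustrative, not from the paper) encodes that constraint in a toy problem:

```python
import numpy as np
import cvxpy as cp

# Illustrative first- and second-order moment information (not from the paper)
mu = np.array([0.05, 0.08, 0.12])            # mean of uncertain returns xi
Sigma = np.array([[0.010, 0.002, 0.000],
                  [0.002, 0.020, 0.005],
                  [0.000, 0.005, 0.040]])    # covariance of xi
eps = 0.05                                    # chance-constraint risk level
kappa = np.sqrt((1 - eps) / eps)              # moment-based safety factor

S = np.linalg.cholesky(Sigma)                 # Sigma = S @ S.T

x = cp.Variable(3)                            # portfolio weights
t = cp.Variable()                             # worst-case loss threshold

# sup_P P(-xi^T x > t) <= eps over all distributions with mean mu and
# covariance Sigma is equivalent to the second-order cone constraint
#   -mu^T x + kappa * ||S^T x||_2 <= t,
# which for linear constraint functions coincides with the worst-case CVaR
# constraint shown to be exact in the paper.
constraints = [
    -mu @ x + kappa * cp.norm(S.T @ x, 2) <= t,
    cp.sum(x) == 1,
    x >= 0,
]

prob = cp.Problem(cp.Minimize(t), constraints)
prob.solve()
print("weights:", x.value, "worst-case loss threshold:", t.value)
```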


Citations
Journal ArticleDOI
TL;DR: A unifying framework for modeling and solving distributionally robust optimization problems is proposed, introducing standardized ambiguity sets that contain all distributions with prescribed conic representable confidence sets and with mean values residing on an affine manifold.
Abstract: Distributionally robust optimization is a paradigm for decision making under uncertainty where the uncertain problem data are governed by a probability distribution that is itself subject to uncertainty. The distribution is then assumed to belong to an ambiguity set comprising all distributions that are compatible with the decision maker's prior information. In this paper, we propose a unifying framework for modeling and solving distributionally robust optimization problems. We introduce standardized ambiguity sets that contain all distributions with prescribed conic representable confidence sets and with mean values residing on an affine manifold. These ambiguity sets are highly expressive and encompass many ambiguity sets from the recent literature as special cases. They also allow us to characterize distributional families in terms of several classical and/or robust statistical indicators that have not yet been studied in the context of robust optimization. We determine conditions under which distributionally robust optimization problems based on our standardized ambiguity sets are computationally tractable. We also provide tractable conservative approximations for problems that violate these conditions.
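
As a minimal illustration of why mean-based ambiguity sets yield tractable problems: for a linear objective, the expectation depends on the distribution only through its mean, so the worst-case expectation reduces to optimizing over the confidence set itself. A hedged Python sketch, assuming a simple box confidence set for the mean (hypothetical data, far simpler than the conic sets the paper treats):

```python
import numpy as np
import cvxpy as cp

# If the objective is linear, E_P[c^T xi] = c^T mu, so a mean-based
# ambiguity set turns the worst-case expectation into a small LP
# over the confidence set for the mean.
c = np.array([1.0, -2.0, 0.5])
mu_lo = np.array([-1.0, 0.0, 0.2])   # box confidence set for the mean
mu_hi = np.array([ 1.0, 0.5, 0.8])

mu = cp.Variable(3)
worst = cp.Problem(cp.Maximize(c @ mu), [mu >= mu_lo, mu <= mu_hi])
worst.solve()
print("worst-case expectation:", worst.value)
```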

789 citations


Additional excerpts

  • ...Robust optimization, ambiguous probability distributions, conic optimization....


Journal ArticleDOI
TL;DR: An overview of developments in robust optimization since 2007 is provided to give a representative picture of the research topics most explored in recent years, to identify common themes in the investigations of independent research teams, and to highlight the contributions of rising as well as established researchers to both the theory of robust optimization and its practice.

742 citations

Posted Content
TL;DR: The paper argues that the set of distributions hedged against should be chosen to be appropriate for the application at hand, and that some of the choices that have been popular until recently are, for many applications, not good choices.
Abstract: Distributionally robust stochastic optimization (DRSO) is an approach to optimization under uncertainty in which, instead of assuming that there is an underlying probability distribution that is known exactly, one hedges against a chosen set of distributions. In this paper, we consider sets of distributions that are within a chosen Wasserstein distance from a nominal distribution. We argue that such a choice of sets has two advantages: (1) The resulting distributions hedged against are more reasonable than those resulting from other popular choices of sets, such as the φ-divergence ambiguity set. (2) The problem of determining the worst-case expectation has desirable tractability properties. We derive a dual reformulation of the corresponding DRSO problem and construct approximate worst-case distributions (or an exact worst-case distribution if it exists) explicitly via the first-order optimality conditions of the dual problem. Our contributions are five-fold. (i) We identify necessary and sufficient conditions for the existence of a worst-case distribution, which is naturally related to the growth rate of the objective function. (ii) We show that the worst-case distributions resulting from an appropriate Wasserstein distance have a concise structure and a clear interpretation. (iii) Using this structure, we show that data-driven DRSO problems can be approximated to any accuracy by robust optimization problems, and thereby many DRSO problems become tractable by using tools from robust optimization. (iv) To the best of our knowledge, our proof of strong duality is the first constructive proof for DRSO problems, and we show that the constructive proof technique is also useful in other contexts. (v) Our strong duality result holds in a very general setting, and we show that it can be applied to infinite dimensional process control problems and worst-case value-at-risk analysis.
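
For a one-dimensional piecewise linear (hence Lipschitz) objective over a 1-Wasserstein ball around the empirical distribution, the dual reformulation admits a simple evaluation: the inner supremum is finite only for multipliers at least the Lipschitz constant L, and the worst-case expectation equals the empirical mean plus L times the radius. A numpy sketch with illustrative data (a sanity check of the duality in this special case, not the paper's general algorithm):

```python
import numpy as np

# f(xi) = max_k (a_k * xi + b_k): convex piecewise linear, Lipschitz
# with constant L = max_k |a_k|. Illustrative coefficients and samples.
a = np.array([1.0, -0.5, 2.0])
b = np.array([0.0, 1.0, -3.0])
xi_hat = np.array([-1.2, 0.3, 0.8, 2.1, -0.4])   # observed samples
rho = 0.25                                        # Wasserstein radius
L = np.abs(a).max()

def f(xi):
    return (np.outer(xi, a) + b).max(axis=1)

# Dual: inf_{lam >= 0} lam*rho + mean_i sup_xi [f(xi) - lam*|xi - xi_i|].
# Since f is L-Lipschitz, the inner sup is +inf for lam < L and equals
# f(xi_i) for lam >= L, so the dual reduces to L*rho + empirical mean of f.
dual_values = [lam * rho + f(xi_hat).mean() for lam in np.linspace(L, 3 * L, 50)]
print("dual bound:  ", min(dual_values))
print("closed form: ", L * rho + f(xi_hat).mean())
```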

505 citations


Additional excerpts

  • ...Key words: distributionally robust optimization; data-driven; ambiguity set; worst-case distribution. MSC2000 subject classification: Primary: 90C15; secondary: 90C46. OR/MS subject classification: Primary: programming: stochastic....


Journal ArticleDOI
TL;DR: This paper derives an equivalent reformulation for DCC and shows that it is equivalent to a classical chance constraint with a perturbed risk level, and analyzes the relationship between the conservatism of DCC and the size of historical data, which can help indicate the value of data.
Abstract: In this paper, we study data-driven chance constrained stochastic programs, or more specifically, stochastic programs with distributionally robust chance constraints (DCCs) in a data-driven setting to provide robust solutions for the classical chance constrained stochastic program facing ambiguous probability distributions of random parameters. We consider a family of density-based confidence sets based on a general φ-divergence measure, and formulate DCC from the perspective of robust feasibility by allowing the ambiguous distribution to run adversely within its confidence set. We derive an equivalent reformulation for DCC and show that it is equivalent to a classical chance constraint with a perturbed risk level. We also show how to evaluate the perturbed risk level by using a bisection line search algorithm for general φ-divergence measures. In several special cases, our results can be strengthened such that we can derive closed-form expressions for the perturbed risk levels. In addition, we show that the conservatism of DCC vanishes as the size of historical data goes to infinity. Furthermore, we analyze the relationship between the conservatism of DCC and the size of historical data, which can help indicate the value of data. Finally, we conduct extensive computational experiments to test the performance of the proposed DCC model and compare various φ-divergence measures based on a capacitated lot-sizing problem with a quality-of-service requirement.
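
As a hedged illustration of the perturbed-risk-level idea for the Kullback-Leibler divergence (one member of the φ-divergence family), the sketch below uses the standard two-point reduction (the worst-case probability of an event with nominal probability p over a KL ball of radius d is the largest q with KL((q,1-q)||(p,1-p)) <= d) together with a bisection search for the perturbed level; this mirrors the spirit of the paper's bisection scheme but is not its exact algorithm or closed forms:

```python
import numpy as np

def kl(q, p):
    """KL divergence between Bernoulli(q) and Bernoulli(p)."""
    def term(x, y):
        return 0.0 if x == 0.0 else x * np.log(x / y)
    return term(q, p) + term(1 - q, 1 - p)

def worst_case_prob(p, d, tol=1e-10):
    """Largest q in [p, 1] with KL((q,1-q)||(p,1-p)) <= d, by bisection."""
    lo, hi = p, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kl(mid, p) <= d:
            lo = mid
        else:
            hi = mid
    return lo

def perturbed_risk_level(alpha, d, tol=1e-10):
    """Largest nominal level a' with worst_case_prob(a', d) <= alpha, so a
    classical chance constraint at level a' implies the DCC at level alpha.
    Bisection works because worst_case_prob is increasing in p."""
    lo, hi = 0.0, alpha
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_case_prob(mid, d) <= alpha:
            lo = mid
        else:
            hi = mid
    return lo

print(perturbed_risk_level(alpha=0.05, d=0.01))  # a perturbed level < 0.05
```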

437 citations


Cites background or methods from "Distributionally robust joint chanc..."

  • ...[39], we develop an equivalent reformulation for the joint DCCs....


  • ...[39], we propose an algorithm based on iteratively solving two convex optimization problems (hereafter denoted as iterative convex optimization) to solve [DCCP]....


  • ...[39], we develop a more general approach to obtain the result, which can easily be extended to obtain reformulations under other forms of moment information (e....


  • ...[39] consider exact solution approaches for the joint chance constraint version of DRCC....


  • ...[39] develop an equivalent reformulation for the single DCCs and a worst-case CVaR-based approximation for the joint DCCs....


Posted Content
TL;DR: Main concepts and contributions to DRO are surveyed, along with its relationships with robust optimization, risk-aversion, chance-constrained optimization, and function regularization.
Abstract: The concepts of risk-aversion, chance-constrained optimization, and robust optimization have developed significantly over the last decade. Statistical learning community has also witnessed a rapid theoretical and applied growth by relying on these concepts. A modeling framework, called distributionally robust optimization (DRO), has recently received significant attention in both the operations research and statistical learning communities. This paper surveys main concepts and contributions to DRO, and its relationships with robust optimization, risk-aversion, chance-constrained optimization, and function regularization.

348 citations


Cites background from "Distributionally robust joint chanc..."

  • ...[345] study a safe approximation to distributionally robust individual and joint chance constraints based on the worst-case CVaR....


  • ...[345] show that the CVaR approximation is exact for joint chance constraints whose constraint functions depend linearly on ξ̃....


References
Proceedings ArticleDOI
02 Sep 2004
TL;DR: The free MATLAB toolbox YALMIP is introduced; developed initially to model SDPs and solve them by interfacing external solvers, it makes development of optimization problems in general, and control-oriented SDP problems in particular, extremely simple.
Abstract: The MATLAB toolbox YALMIP is introduced, and it is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory. The free toolbox was developed initially to model SDPs and solve them by interfacing external solvers. It makes development of optimization problems in general, and control-oriented SDP problems in particular, extremely simple. In fact, learning three YALMIP commands is enough for most users to model and solve their optimization problems.
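
YALMIP itself is MATLAB-only; as an analogous Python illustration of the same high-level, solver-dispatching modeling style (using cvxpy rather than YALMIP, with an illustrative system matrix), here is a control-flavored SDP that searches for a Lyapunov stability certificate:

```python
import numpy as np
import cvxpy as cp

# Illustrative stable system matrix (not from the paper)
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])

P = cp.Variable((2, 2), symmetric=True)
I = np.eye(2)

# Lyapunov LMI: P >= I and A^T P + P A <= -I certify stability of x' = A x.
constraints = [P >> I, A.T @ P + P @ A << -I]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()                      # dispatched to an installed SDP solver
print("P =", P.value)
```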

7,676 citations

Journal ArticleDOI
TL;DR: In this paper, a new approach to optimizing or hedging a portfolio of financial instruments to reduce risk is presented and tested on applications; it focuses on minimizing Conditional Value-at-Risk (CVaR) rather than minimizing Value-at-Risk (VaR), but portfolios with low CVaR necessarily have low VaR as well.
Abstract: A new approach to optimizing or hedging a portfolio of financial instruments to reduce risk is presented and tested on applications. It focuses on minimizing Conditional Value-at-Risk (CVaR) rather than minimizing Value-at-Risk (VaR), but portfolios with low CVaR necessarily have low VaR as well. CVaR, also called Mean Excess Loss, Mean Shortfall, or Tail VaR, is moreover considered to be a more consistent measure of risk than VaR. Central to the new approach is a technique for portfolio optimization which calculates VaR and optimizes CVaR simultaneously. This technique is suitable for use by investment companies, brokerage firms, mutual funds, and any business that evaluates risks. It can be combined with analytical or scenario-based methods to optimize portfolios with large numbers of instruments, in which case the calculations often come down to linear programming or nonsmooth programming. The methodology can also be applied to the optimization of percentiles in contexts outside of finance.
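
The core device of this reference is the auxiliary formulation F(x, alpha) = alpha + (1/(1-beta)) E[(loss(x, xi) - alpha)+], whose joint minimization over (x, alpha) computes CVaR and recovers VaR as a by-product, and which becomes a linear program under scenario sampling. A minimal cvxpy sketch with synthetic return scenarios (illustrative data, not from the paper):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
R = rng.normal(loc=[0.05, 0.08, 0.12], scale=[0.10, 0.15, 0.25],
               size=(1000, 3))          # synthetic return scenarios
beta = 0.95                              # CVaR confidence level
N = R.shape[0]

x = cp.Variable(3)                       # portfolio weights
alpha = cp.Variable()                    # converges to VaR at the optimum
u = cp.Variable(N)                       # scenario excess losses

loss = -R @ x                            # loss = negative portfolio return
# Rockafellar-Uryasev: CVaR_beta = min_alpha alpha + mean(u)/(1-beta),
# with u_i >= loss_i - alpha and u_i >= 0 -- a linear program.
cvar = alpha + cp.sum(u) / ((1 - beta) * N)
constraints = [u >= loss - alpha, u >= 0, cp.sum(x) == 1, x >= 0]

prob = cp.Problem(cp.Minimize(cvar), constraints)
prob.solve()
print("CVaR:", prob.value, "VaR (approx):", alpha.value)
```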

5,622 citations


"Distributionally robust joint chanc..." refers methods in this paper

  • ...To this end, we first recall the definition of CVaR due to Rockafellar and Uryasev [24]....


Journal ArticleDOI
TL;DR: This paper proposes a model that describes uncertainty in both the distribution form (discrete, Gaussian, exponential, etc.) and moments (mean and covariance matrix) and demonstrates that for a wide range of cost functions the associated distributionally robust stochastic program can be solved efficiently.
Abstract: Stochastic programming can effectively describe many decision-making problems in uncertain environments. Unfortunately, such programs are often computationally demanding to solve. In addition, their solution can be misleading when there is ambiguity in the choice of a distribution for the random parameters. In this paper, we propose a model that describes uncertainty in both the distribution form (discrete, Gaussian, exponential, etc.) and moments (mean and covariance matrix). We demonstrate that for a wide range of cost functions the associated distributionally robust (or min-max) stochastic program can be solved efficiently. Furthermore, by deriving a new confidence region for the mean and the covariance matrix of a random vector, we provide probabilistic arguments for using our model in problems that rely heavily on historical data. These arguments are confirmed in a practical example of portfolio selection, where our framework leads to better-performing policies on the “true” distribution underlying the daily returns of financial assets.
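
On the data-driven side, the paper derives analytical confidence regions for the mean and the covariance matrix; as a loose, hedged stand-in, the numpy sketch below calibrates the size of a mean-uncertainty ellipsoid by bootstrap resampling (an illustration of the kind of quantity those regions control, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.multivariate_normal(mean=[0.05, 0.08],
                               cov=[[0.010, 0.002], [0.002, 0.020]],
                               size=500)                # historical samples

mu_hat = data.mean(axis=0)
Sigma_hat = np.cov(data, rowvar=False)

# Bootstrap the Mahalanobis deviation of the resampled mean from mu_hat
# to calibrate a mean-uncertainty ellipsoid
#   (mu - mu_hat)^T Sigma_hat^{-1} (mu - mu_hat) <= gamma1.
Sigma_inv = np.linalg.inv(Sigma_hat)
devs = []
for _ in range(2000):
    sample = data[rng.integers(0, len(data), size=len(data))]
    d = sample.mean(axis=0) - mu_hat
    devs.append(d @ Sigma_inv @ d)
gamma1 = np.quantile(devs, 0.95)        # 95% bootstrap calibration
print("mu_hat:", mu_hat, "gamma1:", gamma1)
```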

1,569 citations


"Distributionally robust joint chanc..." refers methods in this paper

  • ...where the interchange of the maximization and minimization operations is justified by a stochastic saddle point theorem due to Shapiro and Kleywegt [26], see also Delage and Ye [11] or Natarajan et al....


Journal ArticleDOI
TL;DR: SOCP formulations are given for four examples, including the convex quadratically constrained quadratic programming (QCQP) problem and problems involving fractional quadratic functions; many of the problems presented in the survey paper of Vandenberghe and Boyd as examples of SDPs can in fact be formulated as SOCPs and should be solved as such.
Abstract: Second-order cone programming (SOCP) problems are convex optimization problems in which a linear function is minimized over the intersection of an affine linear manifold with the Cartesian product of second-order (Lorentz) cones. Linear programs, convex quadratic programs and quadratically constrained convex quadratic programs can all be formulated as SOCP problems, as can many other problems that do not fall into these three categories. These latter problems model applications from a broad range of fields from engineering, control and finance to robust optimization and combinatorial optimization. On the other hand, semidefinite programming (SDP), that is, the optimization problem over the intersection of an affine set and the cone of positive semidefinite matrices, includes SOCP as a special case. Therefore, SOCP falls between linear (LP) and quadratic (QP) programming and SDP. Like LP, QP and SDP problems, SOCP problems can be solved in polynomial time by interior point methods. The computational effort per iteration required by these methods to solve SOCP problems is greater than that required to solve LP and QP problems but less than that required to solve SDPs of similar size and structure. Because the set of feasible solutions for an SOCP problem is not polyhedral as it is for LP and QP problems, it is not readily apparent how to develop a simplex or simplex-like method for SOCP. While SOCP problems can be solved as SDP problems, doing so is not advisable both on numerical grounds and computational complexity concerns. For instance, many of the problems presented in the survey paper of Vandenberghe and Boyd [VB96] as examples of SDPs can in fact be formulated as SOCPs and should be solved as such. In §2-3 below we give SOCP formulations for four of these examples, including the convex quadratically constrained quadratic programming (QCQP) problem and problems involving fractional quadratic functions.
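
As a concrete instance of the QCQP-to-SOCP reduction discussed above: with a Cholesky factorization Q = F^T F, the constraint x^T Q x <= s is equivalent to the second-order cone constraint ||(2Fx, 1-s)||_2 <= 1+s. A small cvxpy sketch with illustrative data (hypothetical Q, q, c):

```python
import numpy as np
import cvxpy as cp

# Illustrative QCQP data (Q positive definite)
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 0.0])
c = np.array([1.0, 1.0])
F = np.linalg.cholesky(Q).T              # Q = F.T @ F

x = cp.Variable(2)
s = cp.Variable()                        # epigraph variable for x^T Q x

# ||F x||^2 <= s, written as the second-order cone ||(2 F x, 1 - s)|| <= 1 + s
soc = cp.SOC(1 + s, cp.hstack([2 * (F @ x), cp.reshape(1 - s, (1,))]))

# QCQP constraint x^T Q x + q^T x + r <= 0 with r = -1, via the epigraph s
prob = cp.Problem(cp.Minimize(c @ x), [soc, s + q @ x - 1.0 <= 0])
prob.solve()
print("x =", x.value)
```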

1,535 citations


"Distributionally robust joint chanc..." refers background in this paper

  • ...In this case, the chance constrained problem becomes a tractable second-order cone program (SOCP), which can be solved in polynomial time, see Alizadeh and Goldfarb [1]....


Journal ArticleDOI
TL;DR: A rich family of control problems which are in general hard to solve in a deterministically robust sense is therefore amenable to polynomial-time solution, if robustness is intended in the proposed risk-adjusted sense.
Abstract: This paper proposes a new probabilistic solution framework for robust control analysis and synthesis problems that can be expressed in the form of minimization of a linear objective subject to convex constraints parameterized by uncertainty terms. This includes the wide class of NP-hard control problems representable by means of parameter-dependent linear matrix inequalities (LMIs). It is shown in this paper that by appropriate sampling of the constraints one obtains a standard convex optimization problem (the scenario problem) whose solution is approximately feasible for the original (usually infinite) set of constraints, i.e., the measure of the set of original constraints that are violated by the scenario solution rapidly decreases to zero as the number of samples is increased. We provide an explicit and efficient bound on the number of samples required to attain a-priori specified levels of probabilistic guarantee of robustness. A rich family of control problems which are in general hard to solve in a deterministically robust sense is therefore amenable to polynomial-time solution, if robustness is intended in the proposed risk-adjusted sense.
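
A minimal sketch of the scenario approach for an uncertain linear constraint (illustrative data; the sample size is fixed by hand rather than by the paper's explicit bound): draw N i.i.d. samples of the uncertainty, enforce the constraint at every sample, and solve the resulting ordinary convex program:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, N = 2, 500                            # decision dimension, sample count
c = np.array([-1.0, -1.0])

x = cp.Variable(n)
# Uncertain constraint a(xi)^T x <= 1 with random coefficients a(xi);
# the scenario approach replaces the probabilistic requirement by
# enforcement at N i.i.d. samples of the uncertainty.
A = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(N, n))
constraints = [A @ x <= 1, x >= 0]

prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print("scenario solution:", x.value)
```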

1,122 citations


"Distributionally robust joint chanc..." refers background in this paper

  • ...Recently, Calafiore and Campi [5] as well as Luedtke and Ahmed [17] have proposed to replace the chance constraint (2) by a pointwise constraint that must hold at a finite number of sample points drawn randomly from the distribution Q....


  • ...Calafiore and Campi [5] showed that one requires O(n/ε) samples to guarantee that a solution of the approximate problem is feasible in the original chance constrained program....
