
Showing papers by "Jong-Shi Pang published in 2022"


Journal Article (DOI)
TL;DR: In this paper, the affine chance-constrained stochastic program (ACC-SP) is considered, and the authors propose a sampling-based algorithm with almost-sure convergence, under a directional derivative condition, to a Clarke stationary solution.
Abstract: Chance-constrained programs (CCPs) constitute a difficult class of stochastic programs due to their possible nondifferentiability and nonconvexity, even with simple linear random functionals. Existing approaches for solving CCPs mainly deal with convex random functionals within the probability function. In the present paper, we consider two generalizations of the class of chance constraints commonly studied in the literature: one generalization involves probabilities of disjunctive nonconvex functional events, and the other involves mixed-signed affine combinations of the resulting probabilities; together, we coin the term affine chance constraint (ACC) system for these generalized chance constraints. Our proposed treatment of such an ACC system involves the fusion of several individually known ideas: (a) parameterized upper and lower approximations of the indicator function in the expectation formulation of probability; (b) external (i.e., fixed) versus internal (i.e., sequential) sampling-based approximation of the expectation operator; (c) constraint penalization as a relaxation of feasibility; and (d) convexification of nonconvexity and nondifferentiability via surrogation. The integration of these techniques for solving the affine chance-constrained stochastic program (ACC-SP) is the main contribution of this paper. Indeed, combined together, these ideas lead to several algorithmic strategies with varying degrees of practicality and computational effort for the nonconvex ACC-SP. In an external sampling scheme, a given sample batch (presumably large) is applied to a penalty formulation of a fixed-accuracy approximation of the chance constraints of the problem via their expectation formulation. This results in a sample average approximation scheme, whose almost-sure convergence, under a directional derivative condition, to a Clarke stationary solution of the expectation-constrained SP as the sample sizes tend to infinity is established.
In contrast, sequential sampling, along with surrogation, leads to a sequential convex programming-based algorithm whose asymptotic convergence, for fixed- and diminishing-accuracy approximations of the indicator function, can be established under prescribed increments of the sample sizes.
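Idea (a) above, a parameterized smooth approximation of the indicator function inside a sample average, can be sketched minimally as follows. This is an illustration only, not the paper's algorithm: the linear random functional g(x, ξ) = ξᵀx − 1 and the sigmoid smoothing are hypothetical choices made for the example.

```python
import numpy as np

def smoothed_prob(x, xi, eps=0.1):
    # Hypothetical linear random functional: g(x, xi) = xi . x - 1.
    g = xi @ x - 1.0
    # Sigmoid smoothing of the indicator 1{g <= 0}; as eps -> 0 the
    # sigmoid approaches the indicator pointwise (away from g = 0).
    # The sample mean approximates P(g(x, xi) <= 0).
    return np.mean(1.0 / (1.0 + np.exp(g / eps)))

rng = np.random.default_rng(0)
xi = rng.normal(size=(10000, 2))   # external (fixed) sample batch
x = np.array([0.1, 0.1])
p_hat = smoothed_prob(x, xi)       # smoothed sample average approximation
```

Shrinking `eps` trades smoothness for accuracy of the probability estimate, which is the fixed- versus diminishing-accuracy trade-off the abstract refers to.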

5 citations



DOI
TL;DR: In this article, Liu and Pang propose a risk-based robust statistical learning model using stochastic difference-of-convex value-function optimization.
Abstract: For the treatment of outliers, the paper “Risk-Based Robust Statistical Learning by Stochastic Difference-of-Convex Value-Function Optimization” by Junyi Liu and Jong-Shi Pang proposes a risk-based robust statistical learning model. Employing a variant of the conditional value-at-risk risk measure, called the interval conditional value-at-risk (In-CVaR), the model aims to exclude the risks associated with the left and right tails of the loss. The resulting nonsmooth and nonconvex model considers the population In-CVaR risk and distinguishes the upside and downside losses with asymmetric weights. For the solution of the model in both regression and classification, the authors show that the objective function is the difference of two convex functions, each being the optimal objective value of a univariate convex stochastic program. A sampling- and convex programming-based algorithm is developed with appropriate control of incremental sample sizes, and its subsequential almost-sure convergence to a critical point is established. Numerical results illustrate the practical performance of the model and methodology.
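The difference-of-convex structure described above can be illustrated on an empirical sample: by the Rockafellar–Uryasev formula, each tail value is the optimal value of a univariate convex program, and an interval average of losses is a scaled difference of two such values. The sketch below is an illustrative reading of this structure, not the paper's exact In-CVaR definition; the names `cvar` and `interval_cvar` are hypothetical.

```python
import numpy as np

def cvar(losses, alpha):
    # Rockafellar-Uryasev value: min_t  t + E[(Z - t)_+] / (1 - alpha),
    # attained at the alpha-quantile for an empirical sample.
    t = np.quantile(losses, alpha)
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

def interval_cvar(losses, a, b):
    # Average loss between the a- and b-quantiles (tails excluded),
    # written as a scaled difference of two convex CVaR values:
    # (1-a)*CVaR_a - (1-b)*CVaR_b integrates the quantile from a to b.
    return ((1.0 - a) * cvar(losses, a) - (1.0 - b) * cvar(losses, b)) / (b - a)

losses = np.arange(1, 101, dtype=float)
mid = interval_cvar(losses, 0.1, 0.9)  # average of the middle 80% of losses
```

Each of the two terms is convex in the underlying decision variable when the loss is convex, which is what makes the difference-of-convex algorithmic machinery applicable.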

3 citations


27 Sep 2022
TL;DR: In this article, a general class of convex submodular optimization problems with indicator variables is studied, where the key insight is that, possibly after a suitable reformulation, indicator constraints preserve submodularity.
Abstract: The problem of inferring Markov random fields (MRFs) with a sparsity or robustness prior can be naturally modeled as a mixed-integer program. This motivates us to study a general class of convex submodular optimization problems with indicator variables, which we show to be polynomially solvable in this paper. The key insight is that, possibly after a suitable reformulation, indicator constraints preserve submodularity. Fast computations of the associated Lovász extensions are also discussed under certain smoothness conditions, and can be implemented using only linear-algebraic operations in the case of quadratic objectives.
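The Lovász extension mentioned above can be evaluated by a single sort followed by a telescoping sum over the induced chain of sets. A minimal sketch under that standard definition follows; the set function used in the example is a simple modular (cardinality) function chosen for illustration, not the paper's MRF objective.

```python
import numpy as np

def lovasz_extension(f, z):
    # Lovász extension of a set function f at z:
    # sort coordinates of z in decreasing order, build the chain of
    # "top-k" sets, and telescope the marginal gains of f weighted by z.
    order = np.argsort(-z)          # indices in decreasing order of z
    val, prev, S = 0.0, f(frozenset()), set()
    for i in order:
        S.add(int(i))
        cur = f(frozenset(S))
        val += z[i] * (cur - prev)  # marginal gain weighted by z_i
        prev = cur
    return val

card = lambda S: float(len(S))      # modular example: f(S) = |S|
val = lovasz_extension(card, np.array([0.5, 0.2, 0.9]))
```

On indicator vectors of sets the extension agrees with f itself, and for submodular f it is convex on the unit cube, which is what connects the combinatorial problem to convex optimization.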

2 citations