Showing papers by "Aharon Ben-Tal" published in 2009


Journal ArticleDOI
TL;DR: This paper studies the dual problems associated with the robust counterparts of uncertain convex programs and shows that while the primal robust problem corresponds to a decision maker operating under the worst possible data, the dual problem corresponds to a decision maker operating under the best possible data.
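In schematic form (the notation here is assumed for illustration, not quoted from the paper): for an uncertain LP whose data $u$ ranges over an uncertainty set $\mathcal{U}$, the "primal worst equals dual best" result states that, under suitable regularity conditions,

$$\min_{x}\;\Big\{\, \max_{u \in \mathcal{U}} c(u)^T x \;:\; A(u)\,x \ge b(u) \;\; \forall u \in \mathcal{U} \,\Big\} \;=\; \max_{u \in \mathcal{U}}\; \max_{y \ge 0}\;\Big\{\, b(u)^T y \;:\; A(u)^T y = c(u) \,\Big\},$$

i.e., the robust (worst-case) primal attains the same value as the optimistic (best-case) dual.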

169 citations



Journal ArticleDOI
TL;DR: This paper considers the chance-constrained version of an affinely perturbed linear matrix inequality (LMI) constraint, assuming the primitive perturbations to be independent with light-tail distributions (e.g., bounded or Gaussian).
Abstract: In this paper, we consider the chance-constrained version of an affinely perturbed linear matrix inequality (LMI) constraint, assuming the primitive perturbations to be independent with light-tail distributions (e.g., bounded or Gaussian). Constraints of this type, playing a central role in chance-constrained linear/conic quadratic/semidefinite programming, are typically computationally intractable. The goal of this paper is to develop a tractable approximation to these chance constraints. Our approximation is based on measure concentration results and is given by an explicit system of LMIs. Thus, the approximation is computationally tractable; moreover, it is also safe, meaning that a feasible solution of the approximation is feasible for the chance constraint.
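Concretely, the constraint class in question has the following shape (the notation is assumed for illustration, not quoted from the paper):

$$\mathrm{Prob}_{\zeta}\Big\{\, A_0(x) + \sum_{i=1}^{L} \zeta_i\, A_i(x) \succeq 0 \,\Big\} \;\ge\; 1 - \epsilon,$$

where the symmetric matrices $A_i(x)$ depend affinely on the decision vector $x$, the $\zeta_i$ are the independent light-tail perturbations, and $\epsilon \in (0,1)$ is the violation tolerance. The approximation replaces this intractable probabilistic constraint by an explicit system of LMIs whose feasible set is contained in that of the chance constraint, which is what makes it safe.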

57 citations


Proceedings Article
07 Dec 2009
TL;DR: Results on real-world datasets show that the new MKL formulation is well-suited for object categorization tasks and that the MD based algorithm outperforms state-of-the-art MKL solvers like simpleMKL in terms of computational effort.
Abstract: Motivated by real-world problems, like object categorization, we study a particular mixed-norm regularization for Multiple Kernel Learning (MKL). It is assumed that the given set of kernels is grouped into distinct components, where each component is crucial for the learning task at hand. The formulation hence employs l∞ regularization for promoting combinations at the component level and l1 regularization for promoting sparsity among kernels in each component. While previous attempts have formulated this as a non-convex problem, the formulation given here is an instance of a non-smooth convex optimization problem which admits an efficient Mirror-Descent (MD) based procedure. The MD procedure optimizes over a product of simplexes, which is not a well-studied case in the literature. Results on real-world datasets show that the new MKL formulation is well-suited for object categorization tasks and that the MD based algorithm outperforms state-of-the-art MKL solvers like simpleMKL in terms of computational effort.
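Since the computational core here is mirror descent over a product of simplexes, a minimal sketch may help fix ideas. This is not the paper's implementation: the entropic (exponentiated-gradient) update below is merely the standard choice for simplex geometry, and the function names, step size, and toy objective are all assumptions.

import numpy as np

def mirror_descent_product_simplex(grad_oracle, sizes, steps=200, eta=0.1):
    # Maintain one probability vector per component; start at the uniform point.
    blocks = [np.full(n, 1.0 / n) for n in sizes]
    for _ in range(steps):
        x = np.concatenate(blocks)
        g = grad_oracle(x)  # (sub)gradient of the non-smooth objective at x
        offset = 0
        for i, n in enumerate(sizes):
            gi = g[offset:offset + n]
            # Entropic mirror step: multiplicative update followed by
            # renormalization, i.e., the Bregman projection back onto
            # the i-th simplex.
            w = blocks[i] * np.exp(-eta * gi)
            blocks[i] = w / w.sum()
            offset += n
    return blocks

# Toy usage: minimize ||x - c||^2 over a product of two simplexes.
c = np.array([3.0, 1.0, 2.0, 0.5, 4.0])
print(mirror_descent_product_simplex(lambda x: 2 * (x - c), sizes=[3, 2]))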

56 citations


Book ChapterDOI
19 Apr 2009
TL;DR: Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle interval-valued uncertainty than the state of the art.
Abstract: This paper presents a Chance-Constrained Programming approach for constructing maximum-margin classifiers which are robust to interval-valued uncertainty in training examples. The methodology ensures that uncertain examples are classified correctly with high probability by employing chance-constraints. The main contribution of the paper is to pose the resultant optimization problem as a Second Order Cone Program by using large deviation inequalities due to Bernstein. Apart from the support and mean of the uncertain examples, these Bernstein-based relaxations make no further assumptions on the underlying uncertainty. Classifiers built using the proposed approach are less conservative, yield higher margins, and hence are expected to generalize better than existing methods. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle interval-valued uncertainty than the state of the art.
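To make the cone structure concrete, here is a hedged sketch of such a Second Order Cone Program in cvxpy, assuming each uncertain example is summarized by a mean vector and interval half-widths; the coefficient kappa and the exact constraint shape stand in for the paper's Bernstein-based constants and are illustrative only.

import cvxpy as cp
import numpy as np

def robust_max_margin(mu, half_width, y, kappa=2.0, C=1.0):
    # mu: (m, n) means of the uncertain examples; half_width: (m, n)
    # interval half-widths (the support information); y: labels in {-1, +1}.
    m, n = mu.shape
    w, b = cp.Variable(n), cp.Variable()
    xi = cp.Variable(m, nonneg=True)  # slack variables
    cons = []
    for i in range(m):
        # Second-order cone surrogate for the chance constraint
        # Prob{ y_i (w^T x_i + b) >= 1 - xi_i } >= 1 - eps: the margin
        # at the mean must dominate a scaled norm of w along the
        # uncertain directions.
        cons.append(
            y[i] * (mu[i] @ w + b)
            >= 1 - xi[i] + kappa * cp.norm(cp.multiply(half_width[i], w), 2)
        )
    obj = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
    cp.Problem(obj, cons).solve()
    return w.value, b.value

# Toy usage with two noisy 2-D examples.
mu = np.array([[1.0, 1.0], [-1.0, -1.0]])
hw = np.full_like(mu, 0.1)
print(robust_max_margin(mu, hw, y=np.array([1, -1])))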

16 citations


01 Jan 2009
TL;DR: In this paper, a robust linear programming model based on a robust optimization approach is developed for evacuation via surface transportation networks in an uncertain environment, where loss of life or property may occur.
Abstract: This paper considers evacuation via surface transportation networks in an uncertain environment. The authors focus on uncertainty associated with the significant cost of infeasibility in evacuation, where loss of life or property may occur. In particular, they flexibly adjust the impact of infeasibility through a penalty function, and develop a robust linear programming model based on a robust optimization approach. The authors show that robustness is important in evacuation and that a robust solution may outperform a nominal deterministic solution in both quality and feasibility.
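As a hedged illustration of the modeling idea (the notation is assumed, not taken from the paper): if a capacity constraint $a^T x \le b$ has box-uncertain coefficients $a \in [\bar a - \delta,\, \bar a + \delta]$, its robust counterpart is the linear-programming-representable constraint

$$\bar a^T x + \sum_j \delta_j\, |x_j| \;\le\; b,$$

and hard infeasibility can be softened by charging the violation in the objective through a penalty term such as $\rho\,\max\{\bar a^T x + \sum_j \delta_j |x_j| - b,\; 0\}$ with penalty weight $\rho > 0$.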

5 citations


Book ChapterDOI
31 Jan 2009
TL;DR: In this chapter, the concepts of the uncertain Linear Optimization problem and its Robust Counterpart are introduced, and the computational issues associated with the emerging optimization problems are studied.
Abstract: In this chapter, we introduce the concept of the uncertain Linear Optimization problem and its Robust Counterpart, and study the computational issues associated with the emerging optimization problems. Recall that the Linear Optimization (LO) problem is of the form

$$\min_x \left\{ c^T x + d \;:\; Ax \le b \right\}, \tag{1.1.1}$$

where $x \in \mathbb{R}^n$ is the vector of decision variables, $c \in \mathbb{R}^n$ and $d \in \mathbb{R}$ form the objective, $A$ is an $m \times n$ constraint matrix, and $b \in \mathbb{R}^m$ is the right hand side vector. Clearly, the constant term $d$ in the objective, while affecting the optimal value, does not affect the optimal solution; this is why it is traditionally skipped. As we shall see, when treating LO problems with uncertain data there are good reasons not to neglect this constant term.

The structure of problem (1.1.1) is given by the number $m$ of constraints and the number $n$ of variables, while the data of the problem are the collection $(c, d, A, b)$, which we will arrange into an $(m+1) \times (n+1)$ data matrix

$$D = \begin{bmatrix} c^T & d \\ A & b \end{bmatrix}.$$

Usually not all constraints of an LO program, as it arises in applications, are of the form $a^T x \le \mathrm{const}$; there can be linear "≥" inequalities and linear equalities as well. Clearly, the constraints of the latter two types can be represented equivalently by linear "≤" inequalities, and we will assume henceforth that these are the only constraints in the problem.

Typically, the data of real world LOs (Linear Optimization problems) is not known exactly. The most common reasons for data uncertainty are as follows:

• Some of the data entries (future demands, returns, etc.) do not exist when the problem is solved and hence are replaced with their forecasts. These data entries are thus subject to prediction errors.
• Some of the data (parameters of technological devices/processes, contents associated with raw materials, etc.) cannot be measured exactly; in reality their values drift around the measured "nominal" values. These data are subject to measurement errors.
• Some of the decision variables (intensities with which we intend to use various technological processes, parameters of physical devices we are designing, etc.) cannot be implemented exactly as computed. The resulting implementation errors are equivalent to appropriate artificial data uncertainties.

Indeed, …
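For reference, the chapter's central object, the Robust Counterpart of the uncertain LO problem, can be stated compactly (with $\mathcal{U}$ denoting the uncertainty set for the data $(c, d, A, b)$):

$$\min_x \left\{ \sup_{(c,d,A,b) \in \mathcal{U}} \left[ c^T x + d \right] \;:\; Ax \le b \;\; \forall (c,d,A,b) \in \mathcal{U} \right\}.$$

A feasible solution of the Robust Counterpart must satisfy every realization of the uncertain constraints and is judged by its guaranteed (worst-case) objective value, which is why the constant term $d$ can no longer be neglected.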

1 citation