
Showing papers by "Aharon Ben-Tal published in 2022"


Journal ArticleDOI
TL;DR: The paper proves that maximizing a convex function over convex constraints can be reformulated as an adjustable robust optimization (ARO) problem in which each adjustable variable corresponds to a unique constraint of the original problem, and it uses ARO techniques to obtain approximate solutions to the convex maximization problem.
Abstract: Maximizing a convex function over convex constraints is an NP-hard problem in general. We prove that such a problem can be reformulated as an adjustable robust optimization (ARO) problem in which each adjustable variable corresponds to a unique constraint of the original problem. We use ARO techniques to obtain approximate solutions to the convex maximization problem. To demonstrate the complete approximation scheme, we distinguish the case of a single nonlinear constraint from that of multiple linear constraints. For the first case, we give three examples in which one can analytically eliminate the adjustable variable and efficiently solve the resulting static robust optimization problem approximately. More specifically, we show that the norm-constrained log-sum-exp (geometric) maximization problem can be approximated by (convex) exponential cone optimization techniques. For the second case of multiple linear constraints, the equivalent ARO problem can be represented as an adjustable robust linear optimization problem. Using linear decision rules then yields a safe approximation of the constraints. The resulting problem is a convex optimization problem whose solution gives an upper bound on the global optimum of the original problem; the optimal linear decision rule also yields a lower-bound solution. We derive the approximation problems explicitly for quadratic maximization, geometric maximization, and sum-of-max-linear-terms maximization problems with multiple linear constraints. Numerical experiments show that, unlike state-of-the-art solvers, we can approximate large-scale problems swiftly with tight bounds. In several cases the upper and lower bounds coincide, which establishes global optimality in those cases.

Summary of Contribution: Maximizing a convex function over a convex set is a hard optimization problem. We reformulate this problem as an optimization problem under uncertainty, which transfers the hardness of the problem from its nonconvexity to the uncertainty of the new problem. The equivalent uncertain optimization problem can be relaxed tightly by using adjustable robust optimization techniques. Besides building a new bridge between convex maximization and robust optimization, this approach also yields strong algorithms that improve on state-of-the-art optimization solvers in both solution time and solution quality for various convex maximization problems.

5 citations
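
As a hedged illustration of the linear decision rule (LDR) step mentioned in the abstract, the sketch below builds the robust counterpart of a toy adjustable robust linear constraint with box uncertainty, assuming cvxpy is available. The data (A, b, c) and the model itself are hypothetical stand-ins, not the paper's actual ARO reformulation of convex maximization; the point is only how substituting y(z) = u + Vz turns a for-all-z constraint into a tractable convex one.

# A minimal sketch, assuming cvxpy and toy data: a generic LDR safe
# approximation for an adjustable robust linear constraint with box
# uncertainty, not the paper's construction.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2                       # first-stage variables, uncertainty dim
A = rng.standard_normal((m, n))   # hypothetical problem data
b = rng.standard_normal(m)
c = np.ones(n)
I = np.eye(m)

x = cp.Variable(n)         # here-and-now decision
u = cp.Variable(m)         # LDR intercept: y(z) = u + V z
V = cp.Variable((m, m))    # LDR slope

# Robust counterpart of  A x + y(z) >= b + z  for all ||z||_inf <= 1.
# Row i: (A x)_i + u_i + min_{||z||_inf<=1} (V_i - e_i)^T z >= b_i,
# and the inner minimum equals -||V_i - e_i||_1.
constraints = [A[i] @ x + u[i] - cp.norm1(V[i] - I[i]) >= b[i]
               for i in range(m)]
constraints += [cp.abs(x) <= 10]   # keep the toy model bounded

prob = cp.Problem(cp.Minimize(c @ x + cp.sum(u)), constraints)
prob.solve()
print("value of the LDR safe approximation:", prob.value)

In the paper's setting, solving the LDR approximation of the equivalent ARO problem gives an upper bound on the global optimum of the convex maximization problem, and the optimal linear decision rule yields a lower-bound solution.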


Journal ArticleDOI
TL;DR: Five different methods are used to approximate the NP-hard problem of maximizing a convex function (MCF) by a tractable convex optimization problem; all five use the minimizer of the convex objective function in their construction.
Abstract: In this paper, an algorithm for maximizing a convex function over a convex feasible set is proposed. The algorithm, called CoMax, consists of two phases: in phase 1, a feasible starting point is obtained, which is then used in a gradient ascent algorithm in phase 2. The main contribution of the paper is connected to phase 1; five different methods are used to approximate the original NP-hard problem of maximizing a convex function (MCF) by a tractable convex optimization problem. All the methods use the minimizer of the convex objective function in their construction. In phase 2, the gradient ascent algorithm yields stationary points of the MCF problem. The performance of CoMax is tested on a wide variety of MCF problems, demonstrating its efficiency.
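
The phase-2 idea can be illustrated with a standard linearization argument; the sketch below is a generic scheme under assumptions (scipy available, feasible set a polytope, toy objective), not CoMax itself, whose actual steps are given in the paper. Since f is convex, f(x') >= f(x) + grad f(x)^T (x' - x), so moving to the vertex that maximizes the gradient's linear model never decreases f, and iterating stops at a stationary point.

# Hedged sketch of a phase-2-style ascent for maximizing a convex f over
# a polytope {x : A x <= b}. Each step solves an LP in the gradient
# direction; by convexity the objective never decreases, so the iterates
# stop at a stationary vertex.
import numpy as np
from scipy.optimize import linprog

def convex_ascent(f, grad, A, b, x0, max_iter=100, tol=1e-9):
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        # maximize g^T x'  <=>  minimize -g^T x'  over the polytope
        res = linprog(-g, A_ub=A, b_ub=b, bounds=[(None, None)] * len(x))
        x_new = res.x
        if f(x_new) <= f(x) + tol:   # no ascent left: x is stationary
            break
        x = x_new
    return x

# Toy instance: maximize ||x||^2 over the box [-1, 1]^2.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
x_star = convex_ascent(lambda x: x @ x, lambda x: 2 * x, A, b,
                       x0=np.array([0.3, -0.2]))
print(x_star)   # a vertex of the box, here (1, -1)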