Open Access · Proceedings ArticleDOI

An improved cutting plane method for convex optimization, convex-concave games, and its applications

TLDR
A novel multi-layered data structure for leverage score maintenance is developed, combining diverse techniques such as random projection, batched low-rank update, inverse maintenance, polynomial interpolation, and fast rectangular matrix multiplication.
Abstract
Given a separation oracle for a convex set K ⊂ ℝⁿ that is contained in a box of radius R, the goal is to either compute a point in K or prove that K does not contain a ball of radius ε. We propose a new cutting plane algorithm that uses an optimal O(n log(κ)) evaluations of the oracle and an additional O(n²) time per evaluation, where κ = nR/ε. This improves upon Vaidya's O(SO · n log(κ) + n^{ω+1} log(κ)) time algorithm [Vaidya, FOCS 1989a] in terms of polynomial dependence on n, where ω < 2.373 is the exponent of matrix multiplication and SO is the time for one oracle evaluation. It also improves upon the O(SO · n log(κ) + n³ log^{O(1)}(κ)) time algorithm [Lee, Sidford and Wong, FOCS 2015] in terms of dependence on κ. For many important applications in economics, κ = Ω(exp(n)), which leads to a significant difference between log(κ) and log^{O(1)}(κ). We also provide evidence that the O(n²) time per evaluation cannot be improved, and thus our running time is optimal. A bottleneck of previous cutting plane methods is the computation of leverage scores, a measure of the relative importance of past constraints. Our result is achieved by a novel multi-layered data structure for leverage score maintenance, which is a sophisticated combination of diverse techniques such as random projection, batched low-rank update, inverse maintenance, polynomial interpolation, and fast rectangular matrix multiplication. Interestingly, our method requires a combination of different fast rectangular matrix multiplication algorithms. Our algorithm not only works for the classical convex optimization setting, but also generalizes to convex-concave games. We apply our algorithm to improve the runtimes of many interesting problems, e.g., Linear Arrow-Debreu Markets, Fisher Markets, and Walrasian equilibrium.
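The separation-oracle model described in the abstract can be illustrated with the classical ellipsoid method, the simplest cutting-plane scheme: given only an oracle that returns a separating hyperplane for infeasible points, the method shrinks an enclosing ellipsoid until it finds a point in K. This is a minimal sketch of the oracle model, not the paper's algorithm (which maintains leverage scores to choose far better cuts); the ball-shaped toy set K and the function names are illustrative assumptions.

```python
import numpy as np

def ellipsoid_cutting_plane(oracle, n, R, max_iters=500):
    """Find a point in a convex set K ⊆ R^n using only a separation oracle.

    oracle(x) returns None if x ∈ K; otherwise a vector g such that
    K ⊆ {y : g·y ≤ g·x} (a hyperplane through x separating x from K).
    The search starts from a ball of radius R around the origin.
    """
    x = np.zeros(n)
    P = (R ** 2) * np.eye(n)  # current ellipsoid: {y : (y-x)ᵀ P⁻¹ (y-x) ≤ 1}
    for _ in range(max_iters):
        g = oracle(x)
        if g is None:                 # center is feasible: done
            return x
        gPg = g @ P @ g
        if gPg <= 0:                  # numerical degeneracy; give up
            break
        gn = g / np.sqrt(gPg)         # normalize cut in the ellipsoid metric
        b = P @ gn
        x = x - b / (n + 1)           # shift center into the kept half-space
        P = (n ** 2 / (n ** 2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(b, b))
    return None

# Toy K (assumed for illustration): ball of radius 0.1 around (0.5, 0.3),
# contained in a box of radius R = 2.
center, r = np.array([0.5, 0.3]), 0.1

def ball_oracle(x):
    d = x - center
    return None if np.linalg.norm(d) <= r else d  # x - center is a valid cut

pt = ellipsoid_cutting_plane(ball_oracle, n=2, R=2.0)
```

Each iteration shrinks the ellipsoid's volume by a constant factor, giving the classical O(n² log(κ)) oracle-call bound; the paper's contribution is bringing the oracle complexity down to the optimal O(n log(κ)) with only O(n²) extra time per call.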


Citations
Journal ArticleDOI

Solving Linear Programs in the Current Matrix Multiplication Time

TL;DR: In this paper, the authors present an O((n^ω + n^{2.5−α/2} + n^{2+1/6}) log(n/δ)) time algorithm for linear programs of the form min_{Ax=b, x≥0} c⊤x with n variables, where α is the dual exponent of matrix multiplication.
Posted Content

A Nearly-Linear Time Algorithm for Linear Programs with Small Treewidth: A Multiscale Representation of Robust Central Path

TL;DR: This paper shows how to solve a linear program of the form min_{Ax=b} c⊤x in time O(n · τ² log(1/ε)), where τ is the treewidth; it obtains the first IPM with o(rank(A)) time per iteration when the treewidth is small, via a novel representation of the solution under a multiscale basis similar to the wavelet basis.
Proceedings ArticleDOI

A Faster Interior Point Method for Semidefinite Programming

TL;DR: In this paper, the authors present a faster interior point method that solves generic SDPs with an n × n variable matrix and m constraints.
Proceedings ArticleDOI

A faster algorithm for solving general LPs

TL;DR: This paper gives the fastest known LP solver for general linear programs, running in O*(n^ω + n^{2.055}) time instead of the previous O*(n^{2.16}).
Posted Content

Training (Overparametrized) Neural Networks in Near-Linear Time

TL;DR: In this paper, the authors reformulate the Gauss-Newton iteration as an approximation problem, and then use a Fast-JL type dimension reduction to find a sufficiently good approximate solution via conjugate gradient.
References
Proceedings ArticleDOI

A new polynomial-time algorithm for linear programming

TL;DR: The algorithm consists of repeated application of projective transformations, each followed by optimization over an inscribed sphere, producing a sequence of points that converges to the optimal solution in polynomial time.
Proceedings ArticleDOI

Multiplying matrices faster than coppersmith-winograd

TL;DR: An automated approach for designing matrix multiplication algorithms based on constructions similar to the Coppersmith-Winograd construction is developed, yielding a new improved bound on the matrix multiplication exponent, ω < 2.3727.
Proceedings ArticleDOI

Powers of tensors and fast matrix multiplication

TL;DR: This paper presents a method to analyze the powers of a given trilinear form and obtain upper bounds on the asymptotic complexity of matrix multiplication and obtains the upper bound ω < 2.3728639 on the exponent of square matrix multiplication, which slightly improves the best known upper bound.
Proceedings ArticleDOI

Speeding-up linear programming using fast matrix multiplication

TL;DR: An algorithm for solving linear programming problems that requires O((m+n)^{1.5} n L) arithmetic operations in the worst case is presented, improving on the best known time complexity for linear programming by a factor of about √n.
Proceedings ArticleDOI

Unifying and Strengthening Hardness for Dynamic Problems via the Online Matrix-Vector Multiplication Conjecture

TL;DR: In this article, it was conjectured that there is no truly subcubic, i.e., O(n^{3−ε}), time algorithm for the online Boolean matrix-vector multiplication problem, and this conjecture was used to unify and strengthen hardness results for dynamic problems.