Topic

Convex optimization

About: Convex optimization is a research topic. Over its lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.


Papers
Book Chapter
TL;DR: In this article, homotopy (i.e. continuation) ideas are used to globally accelerate the convergence of Newton's method for finding saddle points of smooth convex-concave functions, and the results are applied to construct affine-invariant relatives of Karmarkar's projectively invariant interior point method.
Abstract: For an arbitrary, finite intersection of halfspaces ⟨a_i, x⟩ ≥ b_i, i = 1, ..., m, x ∈ R^n, i.e. a bounded, convex polyhedron P(a^m, b^m), we define a "central" point x(a^m, b^m) ∈ P, which has the following properties: x depends on (a^m, b^m) analytically (i.e. rather smoothly); x is affinely invariant; there exist ellipsoids centered at x, one containing P and one contained in P, with similarity ratio m - 1; x(a^m, b^m) can be computed effectively by maximizing a strongly concave, analytic function over P. New methods are presented for globalizing (globally accelerating) the convergence of Newton's method for finding saddle points of smooth convex-concave functions, based on homotopy (i.e. continuation) ideas. The above results are applied to outline new approaches to linear (smooth, convex) programming by constructing affine-invariant relatives of Karmarkar's projectively invariant interior point method.
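
The "central" point defined in this abstract is what is now usually called the analytic centre of the polyhedron, and the computation the abstract alludes to, maximizing a strongly concave barrier over P, can be sketched for instance with a damped Newton method. The snippet below is a minimal illustration of that idea, assuming the halfspace description A x ≥ b, a strictly feasible starting point, and NumPy; the damping rule, tolerances and toy data are illustrative choices, not the paper's.

```python
import numpy as np

def analytic_center(A, b, x0, tol=1e-9, max_iter=50):
    """Maximize F(x) = sum_i log(a_i^T x - b_i) over {x : A x > b} by damped Newton."""
    x = x0.astype(float)
    for _ in range(max_iter):
        s = A @ x - b                       # slacks; must remain strictly positive
        g = A.T @ (1.0 / s)                 # gradient of F at x
        H = (A / s[:, None] ** 2).T @ A     # minus the Hessian of F (positive definite)
        dx = np.linalg.solve(H, g)          # Newton direction for maximizing F
        if g @ dx / 2.0 < tol:              # Newton-decrement stopping test
            break
        t = 1.0
        while np.any(A @ (x + t * dx) - b <= 0):
            t *= 0.5                        # damp the step so the iterate stays interior
        x = x + t * dx
    return x

# Toy example: the unit box 0 <= x <= 1 written as A x >= b; its analytic centre is (0.5, 0.5).
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([0.0, 0.0, -1.0, -1.0])
print(analytic_center(A, b, np.array([0.2, 0.7])))
```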

368 citations

Journal Article
TL;DR: An iterative gradient user association and power allocation algorithm is proposed and shown to converge rapidly to an optimal point.
Abstract: Millimeter wave (mmWave) communication technologies have recently emerged as an attractive solution to meet the exponentially increasing demand for mobile data traffic. Moreover, ultra dense networks (UDNs) combined with mmWave technology are expected to increase both energy efficiency and spectral efficiency. In this paper, user association and power allocation in mmWave-based UDNs are considered with attention to load balance constraints, energy harvesting by base stations, user quality of service requirements, energy efficiency, and cross-tier interference limits. The joint user association and power optimization problem is modeled as a mixed-integer programming problem, which is then transformed into a convex optimization problem by relaxing the user association indicator and solved by Lagrangian dual decomposition. An iterative gradient user association and power allocation algorithm is proposed and shown to converge rapidly to an optimal point. The complexity of the proposed algorithm is analyzed, and its effectiveness compared with existing methods is verified by simulations.
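
As a concrete illustration of the relax-and-round step mentioned in the abstract, the sketch below relaxes binary user association indicators to the interval [0, 1], solves the resulting convex problem with a generic solver, and rounds back to a hard assignment. It is a deliberately simplified toy, assuming a fixed per-link rate matrix, simple load caps, and CVXPY as the solver; the paper's actual formulation additionally handles power allocation, energy harvesting, QoS and cross-tier interference and is solved via Lagrangian dual decomposition.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
K, J = 12, 3                                    # users, base stations (toy sizes)
rate = np.log1p(rng.exponential(5.0, (K, J)))   # per-link spectral efficiencies (toy data)
cap = np.array([5, 5, 5])                       # load-balance cap per base station

x = cp.Variable((K, J), nonneg=True)            # relaxed association indicators in [0, 1]
constraints = [cp.sum(x, axis=1) == 1,          # every user is (fractionally) served
               cp.sum(x, axis=0) <= cap,        # load-balance constraint per base station
               x <= 1]
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(x, rate))), constraints)
prob.solve()

assignment = np.argmax(x.value, axis=1)         # round the relaxation to a hard association
print(assignment, "relaxed objective:", round(prob.value, 3))
```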

367 citations

Posted Content
TL;DR: This paper provides the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems and introduces a summary parameter, called the statistical dimension, that canonically extends the dimension of a linear subspace to the class of convex cones.
Abstract: Recent research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the $\ell_1$ minimization method for identifying a sparse vector from random linear measurements. Indeed, the $\ell_1$ approach succeeds with high probability when the number of measurements exceeds a threshold that depends on the sparsity level; otherwise, it fails with high probability. This paper provides the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems with random measurements, to demixing problems under a random incoherence model, and also to cone programs with random affine constraints. The applied results depend on foundational research in conic geometry. This paper introduces a summary parameter, called the statistical dimension, that canonically extends the dimension of a linear subspace to the class of convex cones. The main technical result demonstrates that the sequence of intrinsic volumes of a convex cone concentrates sharply around the statistical dimension. This fact leads to accurate bounds on the probability that a randomly rotated cone shares a ray with a fixed cone.
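
The ℓ1 phase transition the abstract uses as its running example can be reproduced numerically in a few lines. The sketch below, assuming CVXPY as the solver and illustrative problem sizes, draws a random sparse vector, takes m Gaussian measurements, and checks whether equality-constrained ℓ1 minimization recovers it; sweeping m shows the sharp success/failure transition whose location the paper predicts via the statistical dimension.

```python
import numpy as np
import cvxpy as cp

def l1_recovers(n=100, s=10, m=60, seed=0):
    """Return True if l1 minimization recovers an s-sparse x0 from m random measurements."""
    rng = np.random.default_rng(seed)
    x0 = np.zeros(n)
    x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    A = rng.standard_normal((m, n))             # random Gaussian measurement matrix
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == A @ x0]).solve()
    return np.linalg.norm(x.value - x0) <= 1e-5 * np.linalg.norm(x0)

# Recovery is nearly certain well above the threshold and nearly impossible well below it.
for m in (20, 40, 60, 80):
    print(m, l1_recovers(m=m))
```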

366 citations

Journal Article
TL;DR: This work proposes a distributed subgradient method that uses averaging algorithms to share information locally among the agents while cooperatively minimizing a sum of convex functions, where each function is the local objective of one agent.
Abstract: We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about its own local function and communicates with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works on multi-agent optimization that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide almost sure convergence results for our subgradient algorithm.
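
A minimal simulation of the kind of scheme the abstract describes is given below, under simplifying assumptions of our own: scalar local objectives f_i(x) = |x - c_i| (so the sum is minimized at the median of the c_i), independent link failures on a complete graph, and Metropolis averaging weights. Each agent first averages over whichever links happen to be up and then takes a diminishing-step subgradient step on its local objective; all numerical choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
c = rng.uniform(0.0, 10.0, n)            # local objectives f_i(x) = |x - c_i|
x = rng.uniform(0.0, 10.0, n)            # each agent's current estimate

for k in range(1, 2001):
    # Links of the complete graph fail independently; a link is up with probability 0.3.
    up = np.triu(rng.random((n, n)) < 0.3, 1)
    adj = up | up.T
    deg = adj.sum(axis=1)

    # Metropolis weights: doubly stochastic for whatever graph happens to be up.
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.flatnonzero(adj[i]):
            W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()

    g = np.sign(x - c)                   # subgradient of each local |x - c_i|
    x = W @ x - (1.0 / k) * g            # averaging step, then local subgradient step

print("agents:", np.round(x, 2), " median of c:", round(float(np.median(c)), 2))
```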

366 citations

Journal Article
TL;DR: This paper studies two problems that often occur in applications arising in wireless sensor networks, namely reaching consensus on the values of local variables and cooperatively solving a convex optimization problem, and provides diminishing step size algorithms that guarantee asymptotic convergence for both.
Abstract: In this paper, we study two problems which often occur in various applications arising in wireless sensor networks. These are the problem of reaching an agreement on the value of local variables in a network of computational agents and the problem of cooperatively solving a convex optimization problem, where the objective function is the aggregate sum of local convex objective functions. We incorporate the presence of a random communication graph between the agents in our model as a more realistic abstraction of the gossip and broadcast communication protocols of a wireless network. An added ingredient is the presence of local constraint sets to which the local variables of each agent are constrained. Our model allows for the objective functions to be nondifferentiable and accommodates the presence of noisy communication links and subgradient errors. For the consensus problem we provide a diminishing step size algorithm which guarantees asymptotic convergence. The distributed optimization algorithm uses two diminishing step size sequences to account for communication noise and subgradient errors. We establish conditions on these step sizes under which we can achieve the dual task of reaching consensus and converging to the optimal set with probability one. In both cases we consider the constant step size behavior of the algorithm and establish asymptotic error bounds.
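
To isolate just the consensus part of the result, here is a small gossip-style sketch with noisy links, assuming scalar agent values, additive Gaussian link noise, and a diminishing step size gamma_k = k^(-0.75), which is square-summable but not summable so the noise is averaged out over time; the network size and noise level are illustrative choices rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
x = rng.uniform(0.0, 100.0, n)                    # initial local values

for k in range(1, 50001):
    i, j = rng.choice(n, size=2, replace=False)   # a random pair gossips at each tick
    gamma = 1.0 / k ** 0.75                       # diminishing step size
    noisy_xj = x[j] + rng.normal(0.0, 1.0)        # i receives j's value over a noisy link
    noisy_xi = x[i] + rng.normal(0.0, 1.0)        # j receives i's value over a noisy link
    x[i], x[j] = x[i] + gamma * (noisy_xj - x[i]), x[j] + gamma * (noisy_xi - x[j])

print("spread after gossiping:", round(float(x.max() - x.min()), 4))
```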

366 citations


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations, 94% related
Robustness (computer science): 94.7K papers, 1.6M citations, 89% related
Linear system: 59.5K papers, 1.4M citations, 88% related
Markov chain: 51.9K papers, 1.3M citations, 86% related
Control theory: 299.6K papers, 3.1M citations, 83% related
Performance Metrics
No. of papers in the topic in previous years:
Year: Papers
2023: 392
2022: 849
2021: 1,461
2020: 1,673
2019: 1,677
2018: 1,580