Topic
Convex optimization
About: Convex optimization is a research topic. Over its lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.
Papers
31 Mar 1994
TL;DR: In this book, the authors introduce interior point methods (IPMs), covering the logarithmic barrier method and the center method, and discuss techniques for reducing the complexity of IPMs for LP.
Abstract: Glossary of Symbols and Notations. 1. Introduction of IPMs. 2. The logarithmic barrier method. 3. The center method. 4. Reducing the complexity for LP. 5. Discussion of other IPMs. 6. Summary, conclusions and recommendations. Appendices: A. Self-concordance proofs. B. General technical lemmas. Bibliography. Index.
205 citations
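The logarithmic barrier method from the table of contents above can be illustrated with a short sketch. This is a minimal, hypothetical implementation (the function name, step rule, and parameter values are mine, not the book's): it minimizes t*c@x - sum(log(b - A@x)) with damped Newton steps and repeatedly increases t, following the central path toward the LP optimum.

```python
import numpy as np

def lp_log_barrier(c, A, b, x0, t0=1.0, mu=10.0, outer=6, newton_iters=20):
    """Barrier-method sketch for: minimize c @ x subject to A @ x <= b.
    Minimizes t*c@x - sum(log(b - A@x)) by damped Newton, then raises t."""
    x, t = x0.astype(float).copy(), t0
    for _ in range(outer):
        for _ in range(newton_iters):
            s = b - A @ x                        # slacks; feasibility means s > 0
            g = t * c + A.T @ (1.0 / s)          # gradient of the barrier objective
            H = A.T @ np.diag(1.0 / s**2) @ A    # Hessian of the barrier term
            dx = -np.linalg.solve(H, g)
            step = 1.0
            while np.any(b - A @ (x + step * dx) <= 0):
                step *= 0.5                      # backtrack to stay strictly feasible
            x = x + step * dx
        t *= mu                                  # tighten the barrier
    return x

# 1-D example: minimize x subject to 1 <= x <= 3; the optimum is x = 1.
c = np.array([1.0])
A = np.array([[-1.0], [1.0]])
b = np.array([-1.0, 3.0])
x = lp_log_barrier(c, A, b, x0=np.array([2.0]))
```

The iterates approach the optimum x = 1 from the interior of the feasible set, with the gap to the boundary shrinking roughly like 1/t.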
TL;DR: Several important problems in control theory can be reformulated as semidefinite programming problems, i.e., minimization of a linear objective subject to linear matrix inequality constraints, yielding new results or new proofs for existing results from control theory.
Abstract: Several important problems in control theory can be reformulated as semidefinite programming problems, i.e., minimization of a linear objective subject to linear matrix inequality (LMI) constraints. From convex optimization duality theory, conditions for infeasibility of the LMIs, as well as dual optimization problems, can be formulated. These can in turn be reinterpreted in control or system theoretic terms, often yielding new results or new proofs for existing results from control theory. We explore such connections for a few problems associated with linear time-invariant systems.
205 citations
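To make the LMI language above concrete, here is a small numerical check (the matrices are a toy example of mine, not from the paper): the Lyapunov inequality A.T @ P + P @ A < 0 with P > 0 is an LMI in P, and "X < 0" simply means every eigenvalue of the symmetric matrix X is negative.

```python
import numpy as np

# Toy example: certify stability of dx/dt = A @ x via the Lyapunov LMI
#   P > 0  and  A.T @ P + P @ A < 0   (both are linear constraints on P).
A = np.array([[-1.0,  1.0],
              [ 0.0, -2.0]])
P = np.eye(2)                          # candidate certificate
lmi = A.T @ P + P @ A                  # symmetric since P is symmetric
# Definiteness is checked via the eigenvalues of the symmetric matrices.
p_eigs = np.linalg.eigvalsh(P)
lmi_eigs = np.linalg.eigvalsh(lmi)
```

A semidefinite programming solver automates exactly this search for P, treating the eigenvalue conditions as convex constraints.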
TL;DR: Theoretically, it is shown that if the nonexpansive operator $T$ has a fixed point, then with probability one, ARock generates a sequence that converges to a fixed point of $T$.
Abstract: Finding a fixed point to a nonexpansive operator, i.e., $x^*=Tx^*$, abstracts many problems in numerical linear algebra, optimization, and other areas of scientific computing. To solve fixed-point problems, we propose ARock, an algorithmic framework in which multiple agents (machines, processors, or cores) update $x$ in an asynchronous parallel fashion. Asynchrony is crucial to parallel computing since it reduces synchronization wait, relaxes communication bottleneck, and thus speeds up computing significantly. At each step of ARock, an agent updates a randomly selected coordinate $x_i$ based on possibly out-of-date information on $x$. The agents share $x$ through either global memory or communication. If writing $x_i$ is atomic, the agents can read and write $x$ without memory locks.
Theoretically, we show that if the nonexpansive operator $T$ has a fixed point, then with probability one, ARock generates a sequence that converges to a fixed point of $T$. Our conditions on $T$ and the step sizes are weaker than in comparable work. Linear convergence is also obtained.
We propose special cases of ARock for linear systems, convex optimization, machine learning, as well as distributed and decentralized consensus problems. Numerical experiments of solving sparse logistic regression problems are presented.
205 citations
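ARock's asynchronous coordinate updates can be mimicked in a single-threaded toy (a sketch of mine, not the authors' code): delayed reads from a history buffer stand in for stale shared memory, and the nonexpansive operator is T(x) = x - alpha*(A@x - b), whose fixed point solves A@x = b.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed point of T(x) = x - alpha*(A@x - b) is the solution of A@x = b.
# A is symmetric positive definite; alpha < 2/lambda_max keeps T nonexpansive.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
alpha = 0.2                     # makes T nonexpansive for this A
eta = 0.9                       # ARock step size
x = np.zeros(2)
history = [x.copy()]            # past iterates simulate stale shared memory

for _ in range(4000):
    delay = rng.integers(0, 3)                         # read is up to 2 steps old
    stale = history[max(0, len(history) - 1 - delay)]
    i = rng.integers(0, 2)                             # agent picks a coordinate
    residual = alpha * (A @ stale - b)                 # (I - T) at the stale copy
    x[i] -= eta * residual[i]                          # update only coordinate i
    history.append(x.copy())
```

Despite the stale reads and single-coordinate writes, the iterates settle at the solution of A@x = b, which is the qualitative content of the convergence theorem above.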
TL;DR: It is shown that the system classes presented have the common feature that all stabilizing controllers can be characterized by convex constraints on the Youla-Kucera parameter, and a solution to a general optimal performance problem that incorporates time domain and frequency domain constraints is obtained.
Abstract: In this paper, the design of controllers that incorporate structural and multiobjective performance requirements is considered. The control structures under study cover nested, chained, hierarchical, delayed interaction and communications, and symmetric systems. Such structures are strongly related to several modern-day and future applications including integrated flight propulsion systems, platoons of vehicles, micro-electro-mechanical systems, networked control, control of networks, production lines and chemical processes. It is shown that the system classes presented have the common feature that all stabilizing controllers can be characterized by convex constraints on the Youla-Kucera parameter. Using this feature, a solution to a general optimal performance problem that incorporates time domain and frequency domain constraints is obtained. A synthesis procedure is provided which at every step yields a feasible controller together with a measure of its performance with respect to the optimal. Convergence to the optimal performance is established. An example of a multinode network congestion control problem is provided that illustrates the effectiveness of the developed methodology.
205 citations
19 Jun 2016
TL;DR: In this article, stochastic variance reduced gradient (SVRG) is used for non-convex finite-sum problems and shown to be provably faster than SGD and gradient descent.
Abstract: We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we obtain nonasymptotic rates of convergence of SVRG for nonconvex optimization, showing that it is provably faster than SGD and gradient descent. We also analyze a subclass of nonconvex problems on which SVRG attains linear convergence to the global optimum. We extend our analysis to mini-batch variants, showing (theoretical) linear speedup due to minibatching in parallel settings.
205 citations
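The SVRG update analyzed above is easy to state in code. Here is a minimal sketch on a convex least-squares finite sum (the data, step size, and loop counts are illustrative; the paper's analysis also covers nonconvex losses): at each snapshot the full gradient is computed once, then cheap stochastic steps are taken whose variance shrinks as the iterate approaches the snapshot.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite sum: f(w) = (1/n) * sum_i 0.5*(X[i] @ w - y[i])**2
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                                   # noiseless targets

def grad_i(w, i):
    """Gradient of the i-th summand at w."""
    return (X[i] @ w - y[i]) * X[i]

w = np.zeros(d)
lr = 0.05
for epoch in range(30):
    w_snap = w.copy()
    full_grad = (X.T @ (X @ w_snap - y)) / n     # one full pass per epoch
    for _ in range(n):                           # inner stochastic loop
        i = rng.integers(n)
        # variance-reduced direction: g_i(w) - g_i(w_snap) + full gradient
        v = grad_i(w, i) - grad_i(w_snap, i) + full_grad
        w -= lr * v
```

Because the correction term cancels the stochastic noise near the snapshot, the iterates converge linearly to w_true on this strongly convex instance, mirroring the linear-rate results discussed in the abstract.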