Topic
Convex optimization
About: Convex optimization is a research topic. Over its lifetime, 24,906 publications on this topic have received 908,795 citations. The topic is also known as: convex optimisation.
[Chart] Papers published on a yearly basis
Papers
23 Jul 2014
TL;DR: This paper presents a method to analyze the powers of a given trilinear form and obtain upper bounds on the asymptotic complexity of matrix multiplication; it yields the upper bound ω < 2.3728639 on the exponent of square matrix multiplication, slightly improving the best known bound.
Abstract: This paper presents a method to analyze the powers of a given trilinear form (a special kind of algebraic construction also called a tensor) and obtain upper bounds on the asymptotic complexity of matrix multiplication. Compared with existing approaches, this method is based on convex optimization, and thus has polynomial-time complexity. As an application, we use this method to study powers of the construction given by Coppersmith and Winograd [Journal of Symbolic Computation, 1990] and obtain the upper bound ω < 2.3728639 on the exponent of square matrix multiplication, which slightly improves the best known upper bound.
815 citations
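To make "an upper bound on the exponent ω" concrete, here is a minimal sketch of the classical recursive argument, not the paper's convex-optimization analysis of tensor powers: a bilinear algorithm multiplying m × m matrices with t scalar multiplications implies ω ≤ log_m t. The function name is illustrative; the example values are the well-known naive, Strassen (1969), and Pan (1978) constructions.

```python
import math

def exponent_bound(m: int, t: int) -> float:
    """Upper bound on omega implied by an algorithm that multiplies
    m x m matrices using t scalar multiplications."""
    return math.log(t, m)

print(exponent_bound(2, 8))         # naive 2x2 blocking: omega <= 3.0
print(exponent_bound(2, 7))         # Strassen (1969):    omega <= 2.807...
print(exponent_bound(70, 143640))   # Pan (1978):         omega <= 2.795...
```

The paper's contribution is a much finer analysis of this kind of recursion, optimizing over powers of the Coppersmith-Winograd tensor via a convex program to reach 2.3728639.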
TL;DR: This paper uses the n-rank of a tensor as a sparsity measure and considers the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of the lowest n-rank that fulfills some linear constraints.
Abstract: In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor. In the sparse-vector approximation problem (compressed sensing) and the low-rank matrix recovery problem, convex relaxation techniques have proved to be valuable solution strategies. Here, we adapt these techniques to the tensor setting. We use the n-rank of a tensor as a sparsity measure and consider the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of the lowest n-rank that fulfills some linear constraints. We introduce a tractable convex relaxation of the n-rank and propose efficient algorithms to solve the low-n-rank tensor recovery problem numerically. The algorithms are based on the Douglas–Rachford splitting technique and its dual variant, the alternating direction method of multipliers.
814 citations
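The splitting methods the abstract describes reduce, per mode, to two building blocks: unfolding the tensor into a matrix (which defines the n-rank) and singular value thresholding, the proximal operator of the nuclear norm used in the convex relaxation. A minimal numpy sketch of those two blocks follows; the threshold tau and the averaging pass are illustrative, not the paper's full algorithm.

```python
import numpy as np

def unfold(X: np.ndarray, mode: int) -> np.ndarray:
    """Mode-n unfolding: arrange the mode-`mode` fibers of X as rows."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def fold(M: np.ndarray, mode: int, shape) -> np.ndarray:
    """Inverse of `unfold` for a tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def svt(M: np.ndarray, tau: float) -> np.ndarray:
    """Singular value thresholding: the prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# One shrinkage pass over all modes, averaged -- the kind of step the
# Douglas-Rachford/ADMM iterations repeat (tau = 0.5 is made up here):
X = np.random.rand(4, 5, 6)
Y = np.mean([fold(svt(unfold(X, k), tau=0.5), k, X.shape)
             for k in range(X.ndim)], axis=0)
```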
01 Jan 1987
TL;DR: This work follows Rockafellar's conjugate duality approach to convex/nonconvex programs in nonlinear optimization, while technically relying on fundamental theorems of a matroid-theoretic nature.
Abstract: A theory of "discrete convex analysis" is developed for integer-valued functions defined on integer lattice points. The theory parallels ordinary convex analysis, covering discrete analogues of the fundamental concepts such as conjugacy, subgradients, the Fenchel min-max duality, separation theorems and the Lagrange duality framework for convex/nonconvex optimization. The technical development is based on matroid-theoretic concepts, in particular, submodular functions and exchange axioms. Sections 1–4 extend the conjugacy relationship between submodularity and exchangeability, deepening our understanding of the relationship between convexity and submodularity investigated in the eighties by A. Frank, S. Fujishige, L. Lovász and others. Sections 5 and 6 establish duality theorems for M- and L-convex functions, namely, the Fenchel min-max duality and separation theorems. These are generalizations of the discrete separation theorem for submodular functions due to A. Frank and the optimality criteria for the submodular flow problem due to M. Iri and N. Tomizawa, S. Fujishige, and A. Frank. A novel Lagrange duality framework is also developed in integer programming. We follow Rockafellar's conjugate duality approach to convex/nonconvex programs in nonlinear optimization, while technically relying on fundamental theorems of a matroid-theoretic nature.
810 citations
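The conjugacy at the heart of this theory is the discrete Legendre–Fenchel transform f*(p) = max_x { ⟨p, x⟩ − f(x) } taken over integer lattice points. A brute-force sketch on a small finite box; the function f and the box are illustrative examples, not taken from the paper.

```python
from itertools import product

def conjugate(f, points):
    """Return p -> f*(p) = max over x in `points` of <p, x> - f(x),
    for an integer-valued f on a finite set of lattice points."""
    def f_star(p):
        return max(sum(pi * xi for pi, xi in zip(p, x)) - f(x)
                   for x in points)
    return f_star

# Example: f(x) = x1^2 + x2^2 on {-3,...,3}^2, a simple discrete convex f.
box = list(product(range(-3, 4), repeat=2))
f = lambda x: x[0] ** 2 + x[1] ** 2
f_star = conjugate(f, box)
print(f_star((2, 0)))  # -> 1, attained at x = (1, 0): 2*1 - 1^2 = 1
```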
07 Jul 2001
TL;DR: It is proved that the set of all reflectance functions (the mapping from surface normals to intensities) produced by Lambertian objects under distant, isotropic lighting lies close to a 9D linear subspace, implying that the images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately with a low-dimensional linear subspace.
Abstract: We prove that the set of all reflectance functions (the mapping from surface normals to intensities) produced by Lambertian objects under distant, isotropic lighting lies close to a 9D linear subspace. This implies that the images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately with a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce non-negative lighting functions.
806 citations
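The 9D subspace in question is spanned by the real spherical harmonics of orders 0–2. A minimal numpy sketch evaluating that basis at unit surface normals and fitting lighting coefficients by least squares; the basis formulas are the standard real spherical harmonics, while the synthetic intensities and variable names are illustrative, not the paper's algorithm.

```python
import numpy as np

def sh9(normals: np.ndarray) -> np.ndarray:
    """The nine real spherical harmonics Y_lm, l <= 2, at unit normals (N, 3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.5 * np.sqrt(1 / np.pi) * np.ones_like(x),      # Y_0,0
        np.sqrt(3 / (4 * np.pi)) * y,                    # Y_1,-1
        np.sqrt(3 / (4 * np.pi)) * z,                    # Y_1,0
        np.sqrt(3 / (4 * np.pi)) * x,                    # Y_1,1
        0.5 * np.sqrt(15 / np.pi) * x * y,               # Y_2,-2
        0.5 * np.sqrt(15 / np.pi) * y * z,               # Y_2,-1
        0.25 * np.sqrt(5 / np.pi) * (3 * z ** 2 - 1),    # Y_2,0
        0.5 * np.sqrt(15 / np.pi) * x * z,               # Y_2,1
        0.25 * np.sqrt(15 / np.pi) * (x ** 2 - y ** 2),  # Y_2,2
    ], axis=1)

# Fit 9 lighting coefficients to intensities observed at random unit normals.
n = np.random.randn(500, 3)
n /= np.linalg.norm(n, axis=1, keepdims=True)
B = sh9(n)                              # (500, 9) basis matrix
intensities = B @ np.arange(1.0, 10.0)  # synthetic image data
coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
```

The paper goes further: to keep the recovered lighting physically meaningful (non-negative), it replaces this unconstrained least-squares fit with a convex optimization over the lighting function.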