Topic
Convex optimization
About: Convex optimization is a research topic. Over its lifetime, 24,906 publications on this topic have received 908,795 citations. The topic is also known as: convex optimisation.
Papers published on a yearly basis
Papers
01 May 2017
TL;DR: This work asks how much interaction is necessary to optimize convex functions in the local DP model, and provides new algorithms that are either noninteractive or use relatively few rounds of interaction.
Abstract: Recent large-scale deployments of differentially private algorithms employ the local model for privacy (sometimes called PRAM or randomized response), where data are randomized on each individual's device before being sent to a server that computes approximate, aggregate statistics. The server need not be trusted for privacy, leaving data control in users' hands. For an important class of convex optimization problems (including logistic regression, support vector machines, and the Euclidean median), the best known locally differentially-private algorithms are highly interactive, requiring as many rounds of back and forth as there are users in the protocol. We ask: how much interaction is necessary to optimize convex functions in the local DP model? Existing lower bounds either do not apply to convex optimization, or say nothing about interaction. We provide new algorithms which are either noninteractive or use relatively few rounds of interaction. We also show lower bounds on the accuracy of an important class of noninteractive algorithms, suggesting a separation between what is possible with and without interaction.
160 citations
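The interactive protocol described in the abstract can be illustrated with a toy sketch: in each round every user clips and randomizes their own gradient before reporting it, so the untrusted server only ever sees noisy reports. This is a minimal illustration under assumed parameters, not the paper's actual mechanism; the clipping norm, noise scale `sigma`, and round count are arbitrary choices, and calibrating `sigma` to a formal (ε, δ) guarantee is omitted.

```python
import numpy as np

def local_randomizer(grad, clip=1.0, sigma=1.0, rng=None):
    """Each user clips their gradient to norm `clip` and adds Gaussian
    noise before sending it to the untrusted server (local model)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    return grad + rng.normal(0.0, sigma, size=grad.shape)

def logistic_grad(w, x, y):
    # Gradient of the log-loss for one example, with y in {-1, +1};
    # the exponent is capped to avoid overflow.
    z = y * (x @ w)
    s = 1.0 / (1.0 + np.exp(np.minimum(z, 50.0)))
    return -y * x * s

def ldp_sgd(X, y, rounds=60, lr=0.5, sigma=1.0, seed=0):
    """Interactive local-DP gradient descent: each round the server
    broadcasts w, every user replies with a randomized gradient, and
    the server averages the noisy reports."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(rounds):
        reports = [local_randomizer(logistic_grad(w, X[i], y[i]),
                                    sigma=sigma, rng=rng)
                   for i in range(n)]
        w -= lr * np.mean(reports, axis=0)
    return w
```

Note that the round count equals the number of server/user interactions, which is exactly the quantity the paper seeks to reduce.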
TL;DR: This paper derives linear convergence rates of several first order methods for solving smooth non-strongly convex constrained optimization problems, i.e. involving an objective function with a Lipschitz continuous gradient that satisfies some relaxed strong convexity condition.
Abstract: The standard assumption for proving linear convergence of first order methods for smooth convex optimization is strong convexity of the objective function, an assumption which does not hold for many practical applications. In this paper, we derive linear convergence rates of several first order methods for solving smooth non-strongly convex constrained optimization problems, i.e., problems whose objective function has a Lipschitz continuous gradient and satisfies some relaxed strong convexity condition. In particular, in the case of smooth constrained convex optimization, we provide several relaxations of the strong convexity conditions and prove that they are sufficient for obtaining linear convergence for several first order methods such as projected gradient, fast gradient and feasible descent methods. We also provide examples of functional classes that satisfy our proposed relaxations of strong convexity conditions. Finally, we show that the proposed relaxed strong convexity conditions cover important applications, including solving linear systems, linear programming, and dual formulations of linearly constrained convex problems.
160 citations
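As a concrete illustration of this setting, a least-squares objective with a rank-deficient matrix is smooth and convex but not strongly convex, yet projected gradient still converges linearly under relaxed conditions such as quadratic growth. A minimal sketch follows; the step size 1/L and the ball constraint are illustrative choices, not taken from the paper.

```python
import numpy as np

def project_ball(w, radius=1.0):
    """Euclidean projection onto the ball {w : ||w|| <= radius}."""
    n = np.linalg.norm(w)
    return w if n <= radius else w * (radius / n)

def projected_gradient(A, b, radius=1.0, iters=100):
    """Projected gradient for min ||Aw - b||^2 over a ball.
    The objective's gradient is Lipschitz with L = 2 * lambda_max(A^T A);
    when A is rank-deficient the objective is not strongly convex, but a
    relaxed condition (quadratic growth on the solution set) still holds."""
    L = 2.0 * np.linalg.eigvalsh(A.T @ A)[-1]  # eigvalsh sorts ascending
    w = np.zeros(A.shape[1])
    history = []
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ w - b)
        w = project_ball(w - grad / L, radius)
        history.append(np.sum((A @ w - b) ** 2))
    return w, history
```

Running this on a rank-1 matrix shows the objective decreasing geometrically even though strong convexity fails everywhere.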
TL;DR: In this paper, the authors considered the problem of finding a point in the intersection of countably many closed and convex sets in a Hilbert space; extrapolated iterations of convex combinations of approximate projections onto subfamilies of sets were investigated to solve this problem.
Abstract: The classical problem of finding a point in the intersection of countably many closed and convex sets in a Hilbert space is considered. Extrapolated iterations of convex combinations of approximate projections onto subfamilies of sets are investigated to solve this problem. General hypotheses are made on the regularity of the sets and various strategies are considered to control the order in which the sets are selected. Weak and strong convergence results are established within this broad framework, which provides a unified view of projection methods for solving Hilbertian convex feasibility problems.
159 citations
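The unextrapolated special case of these methods, a plain convex combination of exact projections onto two sets, can be sketched as follows. The particular sets (a ball and a halfspace), the uniform weights, and the iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def proj_ball(x, center, radius):
    """Projection onto the ball {x : ||x - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + d * (radius / n)

def proj_halfspace(x, a, beta):
    """Projection onto the halfspace {x : a.x <= beta}."""
    v = a @ x - beta
    return x if v <= 0 else x - v * a / (a @ a)

def averaged_projections(x0, projections, weights=None, iters=500):
    """Iterate a convex combination of projections onto each set (the
    parallel projection scheme underlying the paper's framework; the
    paper additionally allows extrapolation and approximate projections).
    When the intersection is nonempty, the iterates converge to a point
    in it."""
    k = len(projections)
    weights = weights or [1.0 / k] * k
    x = np.asarray(x0, float)
    for _ in range(iters):
        x = sum(w * P(x) for w, P in zip(weights, projections))
    return x
```

With sets that intersect with nonempty interior, the regularity hypotheses of such results hold and convergence is fast in practice.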
TL;DR: A new comparison model is proposed by employing a new approximation for the time-varying delay state, and then, sufficient conditions for the obtained filtering error system are derived by this comparison model.
Abstract: This paper is concerned with the problem of induced ℓ2 filter design for a class of discrete-time Takagi-Sugeno fuzzy Itô stochastic systems with time-varying delays. Attention is focused on the design of a filter that guarantees an induced ℓ2 performance for the filtering error system. A new comparison model is proposed by employing a new approximation for the time-varying delay state, and sufficient conditions for the filtering error system are then derived from this comparison model. The desired filter is constructed by solving a convex optimization problem, which can be efficiently handled by standard numerical algorithms. Finally, simulation examples are provided to illustrate the effectiveness of the proposed approaches.
159 citations
25 Mar 2012
TL;DR: This work proposes a novel approach to reconstruct hyperspectral images from a very small number of noisy compressive measurements, based on a convex minimization which penalizes both the nuclear norm and the ℓ2,1 mixed norm of the data matrix.
Abstract: We propose a novel approach to reconstruct hyperspectral images from a very small number of noisy compressive measurements. Our reconstruction approach is based on a convex minimization which penalizes both the nuclear norm and the ℓ2,1 mixed norm of the data matrix. Thus, the solution tends to have a simultaneously low-rank and joint-sparse structure. We explain how these two assumptions fit hyperspectral data, and through several simulations we show that our proposed reconstruction scheme significantly improves the state-of-the-art tradeoffs between the reconstruction error and the required number of CS measurements.
159 citations
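Solvers for such a nuclear-norm plus ℓ2,1 penalized problem are typically built from two proximal operators: singular value thresholding (which promotes low rank) and row-wise soft thresholding (which promotes joint sparsity). The sketch below shows these standard building blocks under that assumption; it is not the paper's specific algorithm, and the threshold `tau` is an arbitrary illustrative value.

```python
import numpy as np

def prox_nuclear(X, tau):
    """Singular value thresholding: the prox of tau * ||X||_*.
    Shrinks singular values toward zero, promoting low rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_l21(X, tau):
    """Row-wise soft threshold: the prox of tau * ||X||_{2,1}.
    Zeroes out rows with small norm, promoting joint (row) sparsity."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

Applying `prox_nuclear` to a matrix with singular values (3, 1, 0.2) and tau = 0.5 yields singular values (2.5, 0.5, 0), dropping the rank by one; `prox_l21` similarly annihilates rows whose norm is below tau.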