
Convex optimization

About: Convex optimization is a research topic. Over its lifetime, 24,906 publications have been published on this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.


Papers
Journal ArticleDOI
Masao Fukushima
TL;DR: Each iteration of the proposed algorithm projects onto a halfspace containing the given closed convex set rather than onto that set itself, so the algorithm is easy to implement, and its global convergence to the solution can be established under suitable conditions.
Abstract: This paper presents a modification of the projection methods for solving variational inequality problems. Each iteration of the proposed algorithm consists of projection onto a halfspace containing the given closed convex set rather than the latter set itself. The algorithm can thus be implemented very easily and its global convergence to the solution can be established under suitable conditions.

188 citations
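The halfspace projection that drives the modification above has a simple closed form, which is what makes the method easy to implement. A minimal sketch in Python with NumPy (the function name and example data are illustrative, not from the paper):

```python
import numpy as np

def project_onto_halfspace(x, a, b):
    """Project x onto the halfspace {y : a @ y <= b}.

    Unlike projection onto a general closed convex set, this
    projection has a closed-form solution.
    """
    violation = a @ x - b
    if violation <= 0.0:
        return x.copy()                      # x already lies in the halfspace
    return x - (violation / (a @ a)) * a     # move along the normal direction a

# Example: project (2, 2) onto {y : y1 + y2 <= 1}
p = project_onto_halfspace(np.array([2.0, 2.0]),
                           np.array([1.0, 1.0]), 1.0)
# p lands on the boundary hyperplane y1 + y2 = 1
```

In a projection method for variational inequalities, a halfspace containing the feasible set is constructed at each iterate, and this cheap projection replaces the (possibly expensive) exact projection onto the set itself.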

Book ChapterDOI
01 Jan 1993
TL;DR: Approximation of convex bodies is frequently encountered in geometric convexity, discrete geometry, the theory of finite-dimensional normed spaces, in geometric algorithms and optimization, and in the realm of engineering as discussed by the authors.
Abstract: This chapter reviews various aspects of approximation of convex bodies. Approximation of convex bodies is frequently encountered in geometric convexity, discrete geometry, the theory of finite-dimensional normed spaces, geometric algorithms and optimization, and the realm of engineering. Approximation problems in optimization also often arise from more practical problems in operations research and pattern recognition. Several effective approximation algorithms formulated for convex functions or convex bodies are described in the chapter. In the former case the approximation is considered with respect to the maximum norm, in the latter with respect to the Hausdorff metric. The chapter emphasizes more recent developments in approximation, but many older results are also described.

188 citations

Journal ArticleDOI
TL;DR: This paper presents necessary and sufficient conditions for a convex envelope to be a polyhedral function and illustrates how these conditions may be used in constructing convex envelopes.
Abstract: Convex envelopes of multilinear functions on a unit hypercube are polyhedral. This well-known fact makes the convex envelope approximation very useful in the linearization of non-linear 0–1 programming problems and in global bilinear optimization. This paper presents necessary and sufficient conditions for a convex envelope to be a polyhedral function and illustrates how these conditions may be used in constructing convex envelopes. The main result of the paper is a simple analytical formula, which defines some faces of the convex envelope of a multilinear function. This formula proves to be a generalization of the well-known convex envelope formula for multilinear monomial functions.

188 citations
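The simplest instance of the polyhedral-envelope fact above is the bilinear monomial xy on the unit square, whose convex envelope is the classical McCormick underestimator max(0, x + y - 1). A small Python illustration of this textbook special case (not the paper's general formula):

```python
def convex_envelope_xy(x, y):
    """Convex envelope of f(x, y) = x * y over the unit square [0, 1]^2.

    The envelope is polyhedral: the pointwise maximum of the two
    affine pieces 0 and x + y - 1 (the McCormick lower bound).
    """
    return max(0.0, x + y - 1.0)

# The envelope agrees with x * y at every vertex of the square
# and strictly underestimates it in the interior, e.g. at (0.5, 0.5)
# the envelope gives 0 while x * y = 0.25.
```

Replacing a bilinear term by such affine pieces is exactly what makes convex-envelope linearizations of 0–1 programs and bilinear problems tractable.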

Journal ArticleDOI
TL;DR: Based on a new model and an improved separation lemma, an observer-based controller is developed for the asymptotic stabilization of the NCSs, with conditions expressed in terms of nonlinear matrix inequalities.

188 citations

Journal ArticleDOI
TL;DR: It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution of the original optimization problem by using the Lagrangian saddle-point theorem.
Abstract: A recurrent neural network is proposed for solving the non-smooth convex optimization problem with convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved, using the Lagrangian saddle-point theorem, that the equilibrium point set of the proposed neural network coincides with the optimal solution set of the original optimization problem. Under weak conditions, the proposed neural network is proved to be stable, and its state converges to one of its equilibrium points. Compared with existing neural network models for non-smooth optimization problems, the proposed network can handle a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is applied to the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show satisfactory identification accuracy, demonstrating the effectiveness and efficiency of the proposed approach.

188 citations
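The continuous-time dynamics described above can be mimicked in discrete time by a forward-Euler step along a Clarke subgradient. A toy sketch in Python (the objective, step size, and function names are illustrative assumptions, not the paper's network, which additionally handles constraints via Lagrangian terms):

```python
import numpy as np

def subgradient_flow(x0, subgrad, step=0.01, iters=2000):
    """Forward-Euler discretization of dx/dt = -g(x), where g(x)
    is an element of the Clarke generalized gradient of a convex f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * subgrad(x)
    return x

# Nonsmooth convex objective f(x) = |x - 1| + 0.5 * x**2;
# for x != 1 its Clarke gradient is {sign(x - 1) + x}, and the
# minimizer is x* = 1, where 0 lies in [-1, 1] + 1 = [0, 2].
def g(x):
    return np.sign(x - 1.0) + x

x_star = subgradient_flow(np.array([5.0]), g)  # approaches 1.0
```

The appeal of such dynamics, as in the paper, is that no smoothness of f is needed: any selection from the generalized gradient drives the state toward the equilibrium set.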


Network Information
Related Topics (5)

Optimization problem: 96.4K papers, 2.1M citations, 94% related
Robustness (computer science): 94.7K papers, 1.6M citations, 89% related
Linear system: 59.5K papers, 1.4M citations, 88% related
Markov chain: 51.9K papers, 1.3M citations, 86% related
Control theory: 299.6K papers, 3.1M citations, 83% related
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2023    392
2022    849
2021    1,461
2020    1,673
2019    1,677
2018    1,580