scispace - formally typeset
Author

Dominikus Noll

Bio: Dominikus Noll is an academic researcher at the University of Toulouse. He has contributed to research on topics including iterative reconstruction and semidefinite programming, has an h-index of 36, and has co-authored 157 publications receiving 4,075 citations. Previous affiliations of Dominikus Noll include MathWorks and Paul Sabatier University.


Papers
Journal ArticleDOI
TL;DR: This work develops nonsmooth optimization techniques to solve H∞ synthesis problems under additional structural constraints on the controller; the approach avoids the use of Lyapunov variables and therefore leads to moderate-size optimization programs even for very large systems.
Abstract: We develop nonsmooth optimization techniques to solve $H_\infty$ synthesis problems under additional structural constraints on the controller. Our approach avoids the use of Lyapunov variables and therefore leads to moderate-size optimization programs even for very large systems. The proposed framework is versatile and can accommodate a number of challenging design problems including static, fixed-order, fixed-structure, decentralized control, design of PID controllers and simultaneous design and stabilization problems. Our algorithmic strategy uses generalized gradients and bundling techniques suited for the $H_\infty$ norm and other nonsmooth performance criteria. We compute descent directions by solving quadratic programs and generate steps via line search. Convergence to a critical point from an arbitrary starting point is proved and numerical tests are included to validate our methods. The proposed approach proves to be efficient even for systems with several hundreds of states.
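The QP-based descent step can be sketched in miniature. The toy example below is my own and omits the paper's bundle machinery for the H∞ norm: it minimizes a max of two smooth functions, where the descent direction is the negated minimum-norm point of the convex hull of the active gradients (for two gradients, a one-variable QP with a closed-form solution), followed by a backtracking line search.

```python
import math

def clarke_direction(x, fs, grads, tol=1e-6):
    # Collect gradients of all branches active (to tolerance) at the current max.
    vals = [f(x) for f in fs]
    fmax = max(vals)
    act = [g(x) for g, v in zip(grads, vals) if fmax - v <= tol]
    if len(act) == 1:
        gx, gy = act[0]
    else:
        # Min-norm point of the segment [g1, g2]: the two-gradient QP in closed form.
        (ax, ay), (bx, by) = act[0], act[1]
        dx, dy = bx - ax, by - ay
        den = dx * dx + dy * dy
        s = 0.0 if den == 0 else max(0.0, min(1.0, -(ax * dx + ay * dy) / den))
        gx, gy = ax + s * dx, ay + s * dy
    return (-gx, -gy), fmax

def minimize_max(fs, grads, x, iters=100):
    # Descent on f = max_i f_i: QP-based direction, then backtracking line search.
    for _ in range(iters):
        (dx, dy), fx = clarke_direction(x, fs, grads)
        if math.hypot(dx, dy) < 1e-9:
            break                                  # (near-)critical point
        t = 1.0
        while max(f((x[0] + t * dx, x[1] + t * dy)) for f in fs) > fx - 1e-4 * t * (dx * dx + dy * dy):
            t *= 0.5
            if t < 1e-12:
                return x
        x = (x[0] + t * dx, x[1] + t * dy)
    return x

# f(x) = max( x1^2 + (x2-1)^2 , x1^2 + (x2+1)^2 ): nonsmooth along x2 = 0, minimum 1 at (0, 0).
fs = [lambda x: x[0] ** 2 + (x[1] - 1) ** 2, lambda x: x[0] ** 2 + (x[1] + 1) ** 2]
grads = [lambda x: (2 * x[0], 2 * (x[1] - 1)), lambda x: (2 * x[0], 2 * (x[1] + 1))]
x = minimize_max(fs, grads, (3.0, 2.0))
```

The iteration lands on the kink x2 = 0 and stops there because the convex hull of the two active gradients contains zero, i.e. the point is critical for the max function.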

577 citations

13 Mar 2005
TL;DR: In this paper, a nonsmooth optimization technique is proposed to solve H∞ synthesis problems under additional structural constraints on the controller; it avoids the use of Lyapunov variables and therefore leads to moderate-size optimization programs even for very large systems.
Abstract: We develop nonsmooth optimization techniques to solve H∞ synthesis problems under additional structural constraints on the controller. Our approach avoids the use of Lyapunov variables and therefore leads to moderate-size optimization programs even for very large systems. The proposed framework is very versatile and can accommodate a number of challenging design problems including static, fixed-order, fixed-structure, decentralized control, design of PID controllers and simultaneous design and stabilization problems. Our algorithmic strategy uses generalized gradients and bundling techniques suited for the H∞ norm and other nonsmooth performance criteria. Convergence to a critical point from an arbitrary starting point is proved (full version) and numerical tests are included to validate our methods.

334 citations

Journal ArticleDOI
TL;DR: This paper discusses nonlinear optimization techniques in robust control synthesis, with special emphasis on design problems which may be cast as minimizing a linear objective function under linear matrix inequality (LMI) constraints in tandem with nonlinear matrix equality constraints.
Abstract: This paper discusses nonlinear optimization techniques in robust control synthesis, with special emphasis on design problems which may be cast as minimizing a linear objective function under linear matrix inequality (LMI) constraints in tandem with nonlinear matrix equality constraints. The latter type of constraints renders the design numerically and algorithmically difficult. We solve the optimization problem via sequential semidefinite programming (SSDP), a technique which expands on sequential quadratic programming (SQP) known in nonlinear optimization. Global and fast local convergence properties of SSDP are similar to those of SQP, and SSDP is conveniently implemented with available semidefinite programming (SDP) solvers. Using two test examples, we compare SSDP to the augmented Lagrangian method, another classical scheme in nonlinear optimization, and to an approach using concave optimization.
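The smooth-case mechanics that SSDP generalizes can be sketched on a toy problem (the example program and starting point below are my own; the paper's version handles LMI constraints with an SDP solver in the subproblem). For one smooth equality constraint, SQP reduces to Newton's method on the KKT system:

```python
def solve3(M, r):
    # Solve a 3x3 linear system M y = r by Gaussian elimination with partial pivoting.
    A = [row[:] + [ri] for row, ri in zip(M, r)]
    for c in range(3):
        p = max(range(c, 3), key=lambda i: abs(A[i][c]))
        A[c], A[p] = A[p], A[c]
        for i in range(c + 1, 3):
            m = A[i][c] / A[c][c]
            for j in range(c, 4):
                A[i][j] -= m * A[c][j]
    y = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        y[i] = (A[i][3] - sum(A[i][j] * y[j] for j in range(i + 1, 3))) / A[i][i]
    return y

def sqp(x1, x2, lam, iters=20):
    # Minimize f(x) = x1 + x2  subject to  h(x) = x1^2 + x2^2 - 1 = 0.
    # Each step solves the Newton system on the KKT conditions: J * delta = -F.
    for _ in range(iters):
        F = [1 + 2 * lam * x1, 1 + 2 * lam * x2, x1 ** 2 + x2 ** 2 - 1]  # stationarity + feasibility
        J = [[2 * lam, 0.0,     2 * x1],
             [0.0,     2 * lam, 2 * x2],
             [2 * x1,  2 * x2,  0.0]]
        d = solve3(J, [-F[0], -F[1], -F[2]])
        x1, x2, lam = x1 + d[0], x2 + d[1], lam + d[2]
    return x1, x2, lam

x1, x2, lam = sqp(-0.8, -0.6, 0.5)
# Converges to x = (-1/sqrt(2), -1/sqrt(2)) with multiplier lam = 1/sqrt(2).
```

The fast local convergence seen here is the SQP behavior the abstract refers to; SSDP keeps the same outer structure but replaces the tangent QP with a semidefinite subproblem.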

146 citations

Journal ArticleDOI
TL;DR: In this paper, a new approach to parametric robust controller design is presented, built around a nonsmooth minimization method tailored to functions that are semi-infinite minima of smooth functions; the technique can deal with complex problems involving multiple, possibly repeated uncertain parameters.
Abstract: We present a new approach to parametric robust controller design, where we compute controllers of arbitrary order and structure which minimize the worst-case $H_{\infty}$ norm over a pre-specified set of uncertain parameters. At the core of our method is a nonsmooth minimization method tailored to functions which are semi-infinite minima of smooth functions. A rich test bench and a more detailed example illustrate the potential of the technique, which can deal with complex problems involving multiple, possibly repeated uncertain parameters.
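A scalar caricature of the worst-case formulation (entirely my own toy setup, not the paper's algorithm): sample the uncertain parameter on a grid, take the pointwise maximum of the cost, and minimize the result. Here each branch is convex, so the max is convex and ternary search suffices, whereas the paper treats the true semi-infinite, nonsmooth problem:

```python
def worst_case(x, deltas):
    # Worst-case cost over the sampled uncertainty set: max over delta of (x - delta)^2.
    return max((x - d) ** 2 for d in deltas)

def ternary_min(f, lo, hi, iters=100):
    # Ternary search: valid because f is convex (a pointwise max of convex branches).
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

deltas = [-1 + 0.1 * k for k in range(21)]     # grid over the parameter box [-1, 1]
x_star = ternary_min(lambda x: worst_case(x, deltas), -3.0, 3.0)
# Minimizer is x = 0, where the two extreme scenarios delta = +/-1 tie at value 1.
```

The tie between the two extreme scenarios at the optimum is exactly the kind of nonsmoothness the paper's tailored minimization method is designed to handle.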

109 citations

Journal ArticleDOI
TL;DR: In this article, an augmented Lagrangian method is developed to determine locally optimal solutions of the reduced- and fixed-order H∞ synthesis problems, cast as optimization programs with linear matrix inequality (LMI) constraints along with nonlinear equality constraints representing a matrix inversion condition.
Abstract: In this paper we develop an augmented Lagrangian method to determine local optimal solutions of the reduced- and fixed-order H∞ synthesis problems. We cast these synthesis problems as optimization programs with a linear cost subject to linear matrix inequality (LMI) constraints along with nonlinear equality constraints representing a matrix inversion condition. The special feature of our algorithm is that only equality constraints are included in the augmented Lagrangian, while LMI constraints are kept explicitly in order to exploit currently available semidefinite programming (SDP) codes. The step computation in the tangent problem is based on a Gauss–Newton model, and a specific line search and a first-order Lagrange multiplier update rule are used to enhance efficiency. A number of computational results are reported and underline the strong practical performance of the algorithm. Copyright © 2003 John Wiley & Sons, Ltd.
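The structural split described here, equality constraints absorbed into the augmented Lagrangian with a first-order multiplier update, can be sketched on a toy equality-constrained program (my own example; no LMI constraints or Gauss–Newton model here):

```python
def augmented_lagrangian(mu=10.0, outer=20, inner=500, step=0.02):
    # Toy problem: minimize x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
    # Only the equality constraint enters the augmented Lagrangian:
    #   L_A(x) = f(x) + lam * h(x) + (mu / 2) * h(x)^2.
    x1 = x2 = 0.0
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):                  # inner loop: gradient descent on L_A
            h = x1 + x2 - 1.0
            g = lam + mu * h                    # shared term d L_A / d h
            x1 -= step * (2 * x1 + g)
            x2 -= step * (2 * x2 + g)
        lam += mu * (x1 + x2 - 1.0)             # first-order multiplier update
    return x1, x2, lam

x1, x2, lam = augmented_lagrangian()
# Optimum is x = (0.5, 0.5) with multiplier lam = -1.
```

With exact inner minimization, the multiplier error contracts by a factor 1/(1 + mu) per outer iteration, which is why a moderate penalty mu already gives fast convergence.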

105 citations


Cited by
Book
01 Feb 1993
TL;DR: This book is a systematic treatment of the theory of convex bodies, covering Minkowski addition, curvature measures and quermassintegrals, and mixed volumes and their inequalities, together with selected applications.
Abstract: 1. Basic convexity 2. Boundary structure 3. Minkowski addition 4. Curvature measure and quermassintegrals 5. Mixed volumes 6. Inequalities for mixed volumes 7. Selected applications. Appendix.
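Minkowski addition and mixed volumes admit a quick numerical check. The sketch below (my own, standard-library Python) forms the Minkowski sum of two convex polygons as the convex hull of pairwise vertex sums and verifies the planar expansion A(K + L) = A(K) + 2 V(K, L) + A(L), where V(K, L) is the mixed volume (mixed area):

```python
from itertools import product

def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    # Andrew's monotone chain convex hull, returned counter-clockwise.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(reversed(pts))
    return lower[:-1] + upper[:-1]

def area(poly):
    # Shoelace formula; positive for counter-clockwise orientation.
    n = len(poly)
    return 0.5 * sum(poly[i][0] * poly[(i + 1) % n][1] - poly[(i + 1) % n][0] * poly[i][1]
                     for i in range(n))

def minkowski_sum(P, Q):
    # For convex P and Q, the sum is the convex hull of all pairwise vertex sums.
    return hull([(p[0] + q[0], p[1] + q[1]) for p, q in product(P, Q)])

K = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square, A(K) = 1
L = [(0, 0), (2, 0), (2, 2), (0, 2)]   # side-2 square, A(L) = 4
S = minkowski_sum(K, L)                # side-3 square
# A(K + L) = A(K) + 2 V(K, L) + A(L); here V(K, L) = 1 * 2 = 2, so the area is 9.
print(area(S))  # → 9.0
```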

3,954 citations

Book
27 Nov 2013
TL;DR: This monograph discusses the many different interpretations of proximal operators and algorithms, describes their connections to many other topics in optimization and applied mathematics, surveys some popular algorithms, and provides a large number of examples of proximal operators that commonly arise in practice.
Abstract: This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.
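The base operation described here, evaluating a proximal operator, is easy to see for the ℓ1 norm, whose prox is soft thresholding; plugging it into proximal gradient gives the classic ISTA iteration for the lasso. A minimal sketch with toy data of my own choosing:

```python
def soft_threshold(v, t):
    # Proximal operator of t*|.|: shrink toward zero and clip at zero.
    if abs(v) <= t:
        return 0.0
    return v - t if v > 0 else v + t

def ista(A, b, lam, step, iters=200):
    # Proximal gradient (ISTA) for  min_x  0.5 * ||A x - b||^2 + lam * ||x||_1.
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]   # residual A x - b
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]          # gradient A^T r
        x = [soft_threshold(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

A = [[3.0, 0.0], [0.0, 1.0]]
b = [6.0, 0.5]
x = ista(A, b, lam=1.0, step=1.0 / 9.0)   # step = 1/L with L = largest eigenvalue of A^T A
# The problem separates: x = (17/9, 0); the l1 prox zeroes the small coefficient exactly.
```

The second coefficient being driven exactly to zero, not merely made small, is the sparsity-inducing effect of the ℓ1 proximal operator.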

3,627 citations

Journal ArticleDOI
TL;DR: It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
Abstract: The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
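For intuition on the rank-to-cardinality dictionary: the nuclear norm sums singular values the way the ℓ1 norm sums absolute entries. For a 2×2 matrix it even has a closed form, since σ1² + σ2² = ‖A‖_F² and σ1·σ2 = |det A| give σ1 + σ2 = sqrt(‖A‖_F² + 2|det A|). A quick check (example matrices are mine):

```python
import math

def nuclear_norm_2x2(A):
    # sigma1 + sigma2 = sqrt(||A||_F^2 + 2*|det A|), since
    # sigma1^2 + sigma2^2 = ||A||_F^2 and sigma1 * sigma2 = |det A|.
    (a, b), (c, d) = A
    frob2 = a * a + b * b + c * c + d * d
    det = a * d - b * c
    return math.sqrt(frob2 + 2 * abs(det))

print(nuclear_norm_2x2([[3.0, 0.0], [0.0, 4.0]]))   # diagonal: 3 + 4 → 7.0
print(nuclear_norm_2x2([[1.0, 1.0], [1.0, 1.0]]))   # rank one: singular values (2, 0) → 2.0
```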

3,432 citations