Author

Jean-Baptiste Hiriart-Urruty

Bio: Jean-Baptiste Hiriart-Urruty is an academic researcher from Paul Sabatier University. The author has contributed to research on the topics of subderivatives and convex analysis, has an h-index of 26, and has co-authored 106 publications receiving 6,645 citations. Previous affiliations of Jean-Baptiste Hiriart-Urruty include the Institut de Mathématiques de Toulouse and the University of Clermont-Ferrand.


Papers
Book
21 Oct 1993
TL;DR: This volume covers the inner construction of the subdifferential, conjugacy in convex analysis, approximate subdifferentials of convex functions, abstract duality for practitioners, methods of ε-descent, and the dual and primal forms of bundle methods, including acceleration of the cutting-plane algorithm.
Abstract: IX. Inner Construction of the Subdifferential. X. Conjugacy in Convex Analysis. XI. Approximate Subdifferentials of Convex Functions. XII. Abstract Duality for Practitioners. XIII. Methods of ε-Descent. XIV. Dynamic Construction of Approximate Subdifferentials: Dual Form of Bundle Methods. XV. Acceleration of the Cutting-Plane Algorithm: Primal Forms of Bundle Methods. Bibliographical Comments. References.
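For background on the objects named in these chapters, the approximate (ε-)subdifferential underlying the ε-descent and bundle methods is usually defined as follows (a standard definition given for orientation, not quoted from the book):

% epsilon-subdifferential of a convex function f at a point x
\[
  \partial_{\varepsilon} f(x) = \{\, s \in \mathbb{R}^n : f(y) \ge f(x) + \langle s, y - x \rangle - \varepsilon \ \ \text{for all } y \in \mathbb{R}^n \,\}, \qquad \varepsilon \ge 0 .
\]
% Setting epsilon = 0 recovers the ordinary subdifferential \partial f(x);
% bundle methods accumulate subgradient information to approximate this set dynamically.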

3,043 citations

Book
25 Sep 2001
TL;DR: This book treats convex sets and convex functions, sublinearity and support functions (including the correspondence between convex sets and sublinear functions), subdifferentials of finite convex functions, and conjugacy in convex analysis.
Abstract: Introduction: Notation, Elementary Results. Convex Sets: Generalities; Convex Sets Attached to a Convex Set; Projection onto Closed Convex Sets; Separation and Applications; Conical Approximations of Convex Sets. Convex Functions: Basic Definitions and Examples; Functional Operations Preserving Convexity; Local and Global Behaviour of a Convex Function; First- and Second-Order Differentiation. Sublinearity and Support Functions: Sublinear Functions; The Support Function of a Nonempty Set; Correspondence Between Convex Sets and Sublinear Functions. Subdifferentials of Finite Convex Functions: The Subdifferential: Definitions and Interpretations; Local Properties of the Subdifferential; First Examples; Calculus Rules with Subdifferentials; Further Examples; The Subdifferential as a Multifunction. Conjugacy in Convex Analysis: The Convex Conjugate of a Function; Calculus Rules on the Conjugacy Operation; Various Examples; Differentiability of a Conjugate Function.
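For orientation, the three central objects named in this table of contents are standardly defined as follows (general definitions, not excerpts from the book):

% Support function of a nonempty set C, subdifferential of a finite convex f,
% and convex conjugate of f (all on R^n).
\[
  \sigma_C(s) = \sup_{x \in C} \langle s, x \rangle, \qquad
  \partial f(x) = \{\, s : f(y) \ge f(x) + \langle s, y - x \rangle \ \text{for all } y \,\}, \qquad
  f^{*}(s) = \sup_{x} \{ \langle s, x \rangle - f(x) \}.
\]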

1,235 citations

Journal Article
TL;DR: This paper presents a generalization of the Hessian matrix to C^{1,1} functions, i.e., to functions whose gradient mapping is locally Lipschitz, and derives a second-order Taylor expansion of a C^{1,1} function.
Abstract: In this paper, we present a generalization of the Hessian matrix to C^{1,1} functions, i.e., to functions whose gradient mapping is locally Lipschitz. This type of function arises quite naturally in nonlinear analysis and optimization. First the properties of the generalized Hessian matrix are investigated and then some calculus rules are given. In particular, a second-order Taylor expansion of a C^{1,1} function is derived. This allows us to get second-order optimality conditions for nonlinearly constrained mathematical programming problems with C^{1,1} data.
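A sketch of the two objects involved, in the form standard for this setting (stated for orientation; the paper's precise statements may differ):

% For f of class C^{1,1} the gradient is locally Lipschitz, hence (Rademacher)
% twice differentiable almost everywhere; the generalized Hessian at x is
\[
  \partial^{2} f(x) = \mathrm{co}\,\{\, \lim_{k \to \infty} \nabla^{2} f(x_k) : x_k \to x,\ f \ \text{twice differentiable at } x_k \,\},
\]
% and a second-order, mean-value-type expansion holds: for y near x there exist
% \xi on the segment [x, y] and a matrix A \in \partial^{2} f(\xi) with
\[
  f(y) = f(x) + \langle \nabla f(x), y - x \rangle + \tfrac{1}{2} \langle A (y - x), y - x \rangle .
\]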

385 citations

Journal Article
TL;DR: This study presents properties of tangent cones associated with an arbitrary subset of a Banach space, relates them to existing results, and uses them to derive optimality conditions in nondifferentiable programming.
Abstract: This study is devoted to constrained optimization problems in Banach spaces. We present different properties of tangent cones associated with an arbitrary subset of a Banach space and relate them to some of the existing results. In the absence of both differentiability and convexity assumptions on the functions involved in the optimization problem, the consideration of these tangent cones and their polars leads us to introduce new concepts in nondifferentiable programming. Necessary optimality conditions are first developed in a general abstract form; then these conditions are made more precise in the presence of equality constraints by introducing the concept of normal subcone.
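For orientation, one widely used tangent cone (the contingent, or Bouligand, cone) and its polar take the following standard form; these need not coincide with every cone considered in the paper:

% Contingent cone to S at a point x-bar in S, and its (negative) polar cone.
\[
  T_S(\bar x) = \{\, d : \exists\, t_k \downarrow 0,\ d_k \to d \ \text{with}\ \bar x + t_k d_k \in S \ \text{for all } k \,\}, \qquad
  T_S(\bar x)^{\circ} = \{\, x^{*} : \langle x^{*}, d \rangle \le 0 \ \text{for all } d \in T_S(\bar x) \,\}.
\]
% In the smooth case, if \bar x minimizes f over S then
% \langle \nabla f(\bar x), d \rangle \ge 0 for every d in T_S(\bar x),
% i.e. -\nabla f(\bar x) lies in the polar cone; the paper develops analogous
% conditions without differentiability or convexity assumptions.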

331 citations

Journal Article
TL;DR: This work surveys essential properties of the so-called copositive matrices, whose study spans more than fifty-five years, with special emphasis on variational aspects related to the concept of copositivity.
Abstract: This work surveys essential properties of the so-called copositive matrices, whose study spans more than fifty-five years. Special emphasis is given to variational aspects related to the concept of copositivity. In addition, some new results on the geometry of the cone of copositive matrices are presented here for the first time.
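As a reminder of the central definition (standard, not specific to this survey), a symmetric matrix A is copositive when its quadratic form is nonnegative on the nonnegative orthant:

\[
  x^{\top} A x \ge 0 \quad \text{for all } x \ge 0 \ \text{(componentwise)} .
\]
% Every positive semidefinite matrix and every symmetric matrix with nonnegative
% entries is copositive, so the copositive cone contains the sum of those two cones;
% for n >= 5 the inclusion is strict (the Horn matrix is the classical example).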

157 citations


Cited by
Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
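As a concrete illustration of the method discussed above, here is a minimal ADMM sketch for the lasso problem minimize (1/2)||Ax - b||_2^2 + lam*||x||_1; the splitting, the fixed penalty rho, and all names are illustrative choices, not code from the review.

import numpy as np

def soft_threshold(v, kappa):
    # Elementwise soft-thresholding: proximal operator of kappa * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    # Minimize 0.5*||A x - b||^2 + lam*||x||_1 with the splitting
    # f(x) = 0.5*||A x - b||^2, g(z) = lam*||z||_1, constraint x = z.
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                                  # scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))    # reused by every x-update
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal step on the l1 term
        z = soft_threshold(x + u, lam / rho)
        # dual update
        u = u + x - z
    return z

# Example use with synthetic data:
# rng = np.random.default_rng(0)
# A = rng.standard_normal((50, 100)); x0 = np.zeros(100); x0[:5] = 1.0
# b = A @ x0 + 0.01 * rng.standard_normal(50)
# x_hat = lasso_admm(A, b, lam=0.1)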

17,433 citations

Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

Book
01 Jan 1994
TL;DR: In this book, the authors present a brief history of LMIs in control theory and treat a collection of standard problems involving linear matrix inequalities, together with related matrix problems, linear differential inclusions, and LMI problems with analytic solutions.
Abstract: Preface. 1. Introduction: Overview; A Brief History of LMIs in Control Theory; Notes on the Style of the Book; Origin of the Book. 2. Some Standard Problems Involving LMIs: Linear Matrix Inequalities; Some Standard Problems; Ellipsoid Algorithm; Interior-Point Methods; Strict and Nonstrict LMIs; Miscellaneous Results on Matrix Inequalities; Some LMI Problems with Analytic Solutions. 3. Some Matrix Problems: Minimizing Condition Number by Scaling; Minimizing Condition Number of a Positive-Definite Matrix; Minimizing Norm by Scaling; Rescaling a Matrix Positive-Definite; Matrix Completion Problems; Quadratic Approximation of a Polytopic Norm; Ellipsoidal Approximation. 4. Linear Differential Inclusions: Differential Inclusions; Some Specific LDIs; Nonlinear System Analysis via LDIs. 5. Analysis of LDIs: State Properties: Quadratic Stability; Invariant Ellipsoids. 6. Analysis of LDIs: Input/Output Properties: Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties. 7. State-Feedback Synthesis for LDIs: Static State-Feedback Controllers; State Properties; Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties; Observer-Based Controllers for Nonlinear Systems. 8. Lur'e and Multiplier Methods: Analysis of Lur'e Systems; Integral Quadratic Constraints; Multipliers for Systems with Unknown Parameters. 9. Systems with Multiplicative Noise: Analysis of Systems with Multiplicative Noise; State-Feedback Synthesis. 10. Miscellaneous Problems: Optimization over an Affine Family of Linear Systems; Analysis of Systems with LTI Perturbations; Positive Orthant Stabilizability; Linear Systems with Delays; Interpolation Problems; The Inverse Problem of Optimal Control; System Realization Problems; Multi-Criterion LQG; Nonconvex Multi-Criterion Quadratic Problems. Notation; List of Acronyms; Bibliography; Index.
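For reference, the basic object treated throughout the book has the standard form below; the quadratic-stability condition is given as a representative example, with symbols chosen generically rather than taken from the text.

% A linear matrix inequality in the variable x in R^m, with symmetric matrices F_i:
\[
  F(x) = F_0 + x_1 F_1 + \cdots + x_m F_m \succeq 0 .
\]
% Example: quadratic stability of the polytopic LDI \dot{x} = A(t) x,
% A(t) \in \mathrm{Co}\{A_1, \dots, A_L\}, amounts to finding P such that
\[
  P \succ 0, \qquad A_i^{\top} P + P A_i \prec 0 \quad \text{for } i = 1, \dots, L .
\]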

11,085 citations

Proceedings Article
01 Jan 2010
TL;DR: Adaptive subgradient methods dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning, which allows very predictive but rarely seen features to be found like needles in haystacks.
Abstract: We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms.
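A minimal sketch of the diagonal form of the adaptive subgradient (AdaGrad) update described above; the step size eta and smoothing constant eps are illustrative hyperparameters rather than values from the paper.

import numpy as np

def adagrad_step(x, grad, accum, eta=0.1, eps=1e-8):
    # One diagonal AdaGrad step. accum holds the running sum of squared
    # (sub)gradient coordinates; rarely updated coordinates keep a small
    # accumulator and therefore receive relatively larger steps.
    accum = accum + grad ** 2
    x = x - eta * grad / (np.sqrt(accum) + eps)
    return x, accum

# Example: minimize the quadratic ||x - target||^2.
target = np.array([1.0, -2.0, 3.0])
x, accum = np.zeros(3), np.zeros(3)
for _ in range(500):
    grad = 2.0 * (x - target)        # gradient of the objective
    x, accum = adagrad_step(x, grad, accum)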

7,244 citations

Journal Article
TL;DR: This work describes and analyzes an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight.
Abstract: We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms.

6,984 citations