Author

R. A. Poliquin

Bio: R. A. Poliquin is an academic researcher from the University of Alberta. The author has contributed to research in topics including convex functions and subgradient methods. The author has an h-index of 14 and has co-authored 23 publications receiving 1,435 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, a local theory is developed for the property of the distance function d_C of a closed set C being continuously differentiable outside of C on some neighborhood of a point x ∈ C; this is shown to be equivalent to the prox-regularity of C at x, a condition on normal vectors that is commonly fulfilled in variational analysis and has the advantage of being verifiable by calculation.
Abstract: Recently Clarke, Stern and Wolenski characterized, in a Hilbert space, the closed subsets C for which the distance function d_C is continuously differentiable everywhere on an open “tube” of uniform thickness around C. Here a corresponding local theory is developed for the property of d_C being continuously differentiable outside of C on some neighborhood of a point x ∈ C. This is shown to be equivalent to the prox-regularity of C at x, which is a condition on normal vectors that is commonly fulfilled in variational analysis and has the advantage of being verifiable by calculation. Additional characterizations are provided in terms of d_C being locally of class C^{1+} or such that d_C + σ|·|² is convex around x for some σ > 0. Prox-regularity of C at x corresponds further to the normal cone mapping N_C having a hypomonotone truncation around x, and leads to a formula for P_C by way of N_C. The local theory also yields new insights on the global level of the Clarke-Stern-Wolenski results, and on a property of sets introduced by Shapiro, as well as on the concept of sets with positive reach considered by Federer in the finite dimensional setting.
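
For orientation, the objects involved are, in standard notation (a reminder, not a quotation from the paper): the distance function and metric projection of a closed set C in a Hilbert space are

    d_C(x) = \inf_{c \in C} \|x - c\|, \qquad P_C(x) = \{ c \in C : \|x - c\| = d_C(x) \},

and prox-regularity of C at x for a normal vector \bar{v} \in N_C(x) asks, roughly, that normal vectors obey a one-sided quadratic estimate: there exist \varepsilon > 0 and \rho > 0 with

    \langle v, c' - c \rangle \le \tfrac{\rho}{2} \|c' - c\|^2 \quad \text{for all } c, c' \in C \text{ near } x \text{ and all } v \in N_C(c) \text{ near } \bar{v}.

Under such an estimate the projection P_C becomes single-valued near x off C, which is what ties it to the continuous differentiability of d_C there.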

401 citations

Journal ArticleDOI
TL;DR: The class of prox-regular functions covers all lsc, proper, convex functions, lower-C^2 functions and strongly amenable functions, hence a large core of the functions of interest in variational analysis and optimization, as mentioned in this paper.
Abstract: The class of prox-regular functions covers all lsc, proper, convex functions, lower-C^2 functions and strongly amenable functions, hence a large core of functions of interest in variational analysis and optimization. The subgradient mappings associated with prox-regular functions have unusually rich properties, which are brought to light here through the study of the associated Moreau envelope functions and proximal mappings. Connections are made between second-order epi-derivatives of the functions and proto-derivatives of their subdifferentials. Conditions are identified under which the Moreau envelope functions are convex or strongly convex, even if the given functions are not.
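
For reference, the Moreau envelope and proximal mapping of f with parameter \lambda > 0 are, in standard notation (not quoted from the paper),

    e_\lambda f(x) = \inf_w \big\{ f(w) + \tfrac{1}{2\lambda} \|w - x\|^2 \big\}, \qquad P_\lambda f(x) = \operatorname{argmin}_w \big\{ f(w) + \tfrac{1}{2\lambda} \|w - x\|^2 \big\}.

In the classical convex case e_\lambda f is continuously differentiable with \nabla e_\lambda f(x) = \lambda^{-1}(x - P_\lambda f(x)); the theme of the abstract is that, for prox-regular f and small \lambda, these objects locally retain much of that good behavior even without convexity.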

337 citations

Journal ArticleDOI
TL;DR: The classical condition of a positive-definite Hessian in smooth problems without constraints is found to have an exact counterpart much more broadly in the positivity of a certain generalized Hessian mapping.
Abstract: The behavior of a minimizing point when an objective function is tilted by adding a small linear term is studied from the perspective of second-order conditions for local optimality. The classical condition of a positive-definite Hessian in smooth problems without constraints is found to have an exact counterpart much more broadly in the positivity of a certain generalized Hessian mapping. This fully characterizes the case where tilt perturbations cause the minimizing point to shift in a Lipschitzian manner.
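
In formulas, and sketching only the smooth case for orientation (background, not material from the paper): tilting replaces the problem of minimizing f by

    \min_x \; f(x) - \langle v, x \rangle,

and tilt stability asks that the local minimizer x(v) depend single-valuedly and Lipschitz continuously on the tilt vector v near v = 0. When f is C^2 and unconstrained, the optimality condition \nabla f(x(v)) = v together with \nabla^2 f(\bar{x}) \succ 0 gives x(v) = (\nabla f)^{-1}(v) locally, a Lipschitz function by the inverse function theorem; the generalized Hessian positivity mentioned in the abstract plays the same role when f need not be smooth.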

165 citations

Journal ArticleDOI
TL;DR: Prox-regularity of the essential objective function and positive definiteness of its coderivative Hessian are the keys to the Lipschitzian stability of local solutions to finite-dimensional parameterized optimization problems in a very general setting.
Abstract: Necessary and sufficient conditions are obtained for the Lipschitzian stability of local solutions to finite-dimensional parameterized optimization problems in a very general setting. Properties of prox-regularity of the essential objective function and positive definiteness of its coderivative Hessian are the keys to these results. A previous characterization of tilt stability arises as a special case.
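
Schematically, and in generic notation rather than the paper's own, the parameterized problems in question take the form

    \mathcal{P}(v, u): \quad \min_x \; f(x, u) - \langle v, x \rangle,

with a basic parameter u entering the objective and a tilt parameter v entering linearly. Lipschitzian stability then means that the local solution mapping (v, u) \mapsto x(v, u) is single-valued and Lipschitz continuous near the reference parameter values; the tilt stability characterization mentioned above corresponds to the special case in which only v is perturbed.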

126 citations

Journal ArticleDOI
TL;DR: In this article, the authors studied the problem of determining functions that can be recovered, up to an additive constant, from the knowledge of their subgradients, which is not very well understood.
Abstract: In nonsmooth analysis and optimization, subgradients come in many different flavors, e.g. approximate, Dini, proximal, (Clarke) generalized; see [2, 5, 6, 12, 14, 19]. These subgradients are important and valuable tools. However, many questions remain unsolved concerning the exact link between the function and its subgradients. For instance, can two functions, not differing by an additive constant, have the same subgradients? In this paper, we study the fundamental problem of determining functions that can be recovered, up to an additive constant, from the knowledge of their subgradients. This “integration” problem is not very well understood, and very few functions or classes of functions are known to be recoverable from their subgradients. In Section 4, we show that if the “basic constraint qualification” holds at x̄, then the composition of a closed (i.e. lower semicontinuous) proper convex function with a twice continuously differentiable mapping is determined up to an additive constant by its generalized subgradients (actually in this case all above-mentioned subgradients are the same). Besides the obvious theoretical interest of this integration problem, it is our hope (or perhaps our long-term goal) that once this problem is better understood, we can then tackle the question of uniqueness of solutions to generalized differential equations involving subgradients in place of partial derivatives. An example of such an equation that well deserves study is the extended Hamilton-Jacobi equation used in optimal control; see Clarke [2]. Let us also mention that a problem similar to the integration problem is the one of determining the set-valued mappings that are in fact subgradient set-valued mappings (uniqueness is not mandatory); for a contribution to this problem see Janin [8]. Before we look at some of the known cases where the function can be recovered from its subgradients, let us look at some negative examples. It is clear that not every function can be recovered, up to an additive constant, from its subgradients. We only need to look at the following two functions:

    f(x) = 0 \text{ for } x \le 0, \quad f(x) = 1 \text{ for } x > 0; \qquad g(x) = 0 \text{ for } x \le 0, \quad g(x) = 2 \text{ for } x > 0.
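
A short check of why this pair defeats integration (added here for clarity, not part of the original abstract): away from the origin both f and g are locally constant, so their only subgradient at any x ≠ 0 is 0; at x = 0 both functions take the value 0 and jump upward to the right, and one verifies that the proximal (and likewise the limiting and Clarke) subgradients at 0 form the half-line [0, ∞) for both. Hence

    \partial f(x) = \partial g(x) \quad \text{for every } x,

yet f - g is not constant, so neither function can be recovered from its subgradients even up to an additive constant.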

88 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Book
27 Nov 2013
TL;DR: The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proximal operators that commonly arise in practice are provided.
Abstract: This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.
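
As a small, self-contained illustration of the “base operation” described above (an independent sketch in Python, not code from the monograph; the helper names are invented here), two proximal operators with closed forms are soft thresholding, for a scaled absolute value, and projection onto a box, for an indicator function:

    import numpy as np

    def prox_l1(v, lam):
        """Proximal operator of f(x) = lam*|x|: solves argmin_x lam*|x| + 0.5*(x - v)**2.

        The solution is the elementwise soft-thresholding of v at level lam.
        """
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def prox_box(v, lo, hi):
        """Proximal operator of the indicator of [lo, hi]: the projection onto the box."""
        return np.clip(v, lo, hi)

    v = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
    print(prox_l1(v, lam=1.0))          # [-1. -0.  0.  0.  2.]
    print(prox_box(v, lo=0.0, hi=1.0))  # [0.  0.  0.  0.5 1.]

Both maps are instances of the small subproblems the abstract refers to: evaluating them costs only elementwise arithmetic.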

3,627 citations

Journal ArticleDOI
TL;DR: This work proves an abstract convergence result for descent methods that satisfy a sufficient-decrease assumption and allow a relative error tolerance; the result guarantees the convergence of bounded sequences under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality.
Abstract: In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption, and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences, under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality. This assumption covers a wide range of problems, including nonsmooth semi-algebraic (or more generally tame) minimization. The specialization of our result to different kinds of structured problems provides several new convergence results for inexact versions of the gradient method, the proximal method, the forward–backward splitting algorithm, the gradient projection method and some proximal regularization of the Gauss–Seidel method in a nonconvex setting. Our results are illustrated through feasibility problems and iterative thresholding procedures for compressive sensing.
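
To make one of those structured instances concrete, here is a minimal sketch of forward-backward splitting (iterative soft-thresholding) for an l1-regularized least-squares problem; the problem data, step size and iteration count are arbitrary choices for illustration, not the authors' setup:

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t*||x||_1 (elementwise soft thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam, n_iter=500):
        """Forward-backward splitting for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)               # forward (explicit gradient) step
            x = soft_threshold(x - grad / L, lam / L)  # backward (proximal) step
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true
    x_hat = ista(A, b, lam=0.1)
    print("recovered nonzeros:", int(np.sum(np.abs(x_hat) > 1e-3)))

Each iteration alternates an explicit gradient step on the smooth term with the proximal map of the nonsmooth term, which is the pattern whose inexact variants fall under the paper's abstract convergence result.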

1,282 citations

Book
01 Jan 2000
TL;DR: This book covers convex analysis and nonsmooth optimization, including Fenchel duality, the Karush-Kuhn-Tucker Theorem and fixed points, with a postscript contrasting infinite and finite dimensions and a list of results and notation.
Abstract: Background * Inequality constraints * Fenchel duality * Convex analysis * Special cases * Nonsmooth optimization * The Karush-Kuhn-Tucker Theorem * Fixed points * Postscript: infinite versus finite dimensions * List of results and notation.

1,063 citations