scispace - formally typeset
Author

Regina S. Burachik

Bio: Regina S. Burachik is an academic researcher from the University of South Australia. The author has contributed to research in the topics Duality (optimization) and Monotone polygon, has an h-index of 27, and has co-authored 95 publications receiving 2,562 citations. Previous affiliations of Regina S. Burachik include the Pontifical Catholic University of Rio de Janeiro and the Instituto Nacional de Matemática Pura e Aplicada.


Papers
Book
01 Jun 2011
TL;DR: The material presented provides a survey of the state-of-the-art theory and practice in fixed-point algorithms, identifying emerging problems driven by applications, and discussing new approaches for solving these problems.
Abstract: "Fixed-Point Algorithms for Inverse Problems in Science and Engineering" presents some of the most recent work from top-notch researchers studying projection and other first-order fixed-point algorithms in several areas of mathematics and the applied sciences. The material presented provides a survey of the state-of-the-art theory and practice in fixed-point algorithms, identifying emerging problems driven by applications, and discussing new approaches for solving these problems. This book incorporates diverse perspectives from broad-ranging areas of research including variational analysis, numerical linear algebra, biotechnology, materials science, computational solid-state physics, and chemistry. Topics presented include: Theory of fixed-point algorithms: convex analysis, convex optimization, subdifferential calculus, nonsmooth analysis, proximal point methods, projection methods, resolvent and related fixed-point theoretic methods, and monotone operator theory. Numerical analysis of fixed-point algorithms: choice of step lengths, of weights, of blocks for block-iterative and parallel methods, and of relaxation parameters; regularization of ill-posed problems; numerical comparison of various methods. Areas of application: engineering (image and signal reconstruction and decompression problems), computed tomography and radiation treatment planning (convex feasibility problems), astronomy (adaptive optics), crystallography (molecular structure reconstruction), computational chemistry (molecular structure simulation), and other areas. Because of the variety of applications presented, this book can easily serve as a basis for new and innovative research and collaboration.
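As a concrete illustration of the projection methods surveyed in the book, the following sketch (not taken from the book itself) runs alternating projections on a toy convex feasibility problem: find a point in the intersection of the unit disk and the half-plane x ≤ 0.

```python
import math

def project_ball(p, center, radius):
    """Project a point p onto a closed disk of given center and radius."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    d = math.hypot(dx, dy)
    if d <= radius:
        return p
    s = radius / d
    return (center[0] + s * dx, center[1] + s * dy)

def project_halfplane(p, a, b):
    """Project a point p onto the half-plane {x : <a, x> <= b}."""
    val = a[0] * p[0] + a[1] * p[1]
    if val <= b:
        return p
    n2 = a[0] ** 2 + a[1] ** 2
    t = (val - b) / n2
    return (p[0] - t * a[0], p[1] - t * a[1])

# Alternating projections: compose the two projections until a fixed point
# (a point of the intersection) is reached.
x = (5.0, 5.0)
for _ in range(200):
    x = project_halfplane(project_ball(x, (0.0, 0.0), 1.0), (1.0, 0.0), 0.0)
```

The composed projection map is a fixed-point operator whose fixed points are exactly the points of the intersection, which is the basic mechanism behind the convex feasibility applications listed above.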

355 citations

Book
29 Nov 2010
TL;DR: This book covers set convergence and point-to-set mappings, convex analysis and fixed point theorems, maximal monotone operators, enlargements of monotone operators, and recent topics in proximal theory.
Abstract: Set Convergence and Point-to-Set Mappings.- Convex Analysis and Fixed Point Theorems.- Maximal Monotone Operators.- Enlargements of Monotone Operators.- Recent Topics in Proximal Theory.

246 citations

Journal Article
TL;DR: In this paper, an enlargement Te of a point-to-set operator T is introduced which, when T is maximal monotone, inherits most properties of the e-subdifferential, e.g., it is bounded on bounded sets and Te(x) contains the image through T of a sufficiently small ball around x; it is applied to generate an inexact proximal point method with generalized distances for variational inequalities.
Abstract: Given a point-to-set operator T, we introduce the operator Te defined as Te(x) = {u : ⟨u − v, x − y⟩ ≥ −e for all y ∈ Rn, v ∈ T(y)}. When T is maximal monotone, Te inherits most properties of the e-subdifferential, e.g., it is bounded on bounded sets, Te(x) contains the image through T of a sufficiently small ball around x, etc. We prove these and other relevant properties of Te, and apply it to generate an inexact proximal point method with generalized distances for variational inequalities, whose subproblems consist of solving problems of the form 0 ∈ He(x), while the subproblems of the exact method are of the form 0 ∈ H(x). If ek is the coefficient used in the kth iteration and the ek's are summable, then the sequence generated by the inexact algorithm is still convergent to a solution of the original problem. If the original operator is well behaved enough, then the solution set of each subproblem contains a ball around the exact solution, and so each subproblem can be finitely solved.
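To make the enlargement concrete, the sketch below (an illustration, not from the paper) checks the defining inequality numerically for the scalar maximal monotone operator T(y) = y. For this T, minimizing (u − y)(x − y) over y gives the value −(u − x)²/4, so u ∈ Te(x) exactly when |u − x| ≤ 2√e.

```python
def in_enlargement(u, x, e, grid):
    """Test u ∈ Te(x) for the scalar operator T(y) = y by checking
    (u - v)(x - y) >= -e with v = y over a grid of sample points y."""
    return all((u - y) * (x - y) >= -e for y in grid)

e, x = 0.25, 1.0
grid = [k * 0.001 - 5.0 for k in range(10001)]  # y in [-5, 5]

# For T = identity, u ∈ Te(x) iff |u - x| <= 2*sqrt(e) = 1.0 here.
inside = in_enlargement(x + 0.9, x, e, grid)   # |u - x| = 0.9: should pass
outside = in_enlargement(x + 1.2, x, e, grid)  # |u - x| = 1.2: should fail
```

Note how Te(x) strictly contains T(x) = {x} for every e > 0, which is the "enlargement" behavior the abstract describes.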

201 citations

Journal Article
TL;DR: It is proved that the sequence converges weakly if and only if the problem has solutions, in which case the weak limit is a solution; if the problem does not have solutions, then the sequence is unbounded.
Abstract: We consider a generalized proximal point method for solving variational inequality problems with monotone operators in a Hilbert space. It differs from the classical proximal point method (as discussed by Rockafellar for the problem of finding zeroes of monotone operators) in the use of generalized distances, called Bregman distances, instead of the Euclidean one. These distances play not only a regularization role but also a penalization one, forcing the sequence generated by the method to remain in the interior of the feasible set so that the method becomes an interior point one. Under appropriate assumptions on the Bregman distance and the monotone operator we prove that the sequence converges (weakly) if and only if the problem has solutions, in which case the weak limit is a solution. If the problem does not have solutions, then the sequence is unbounded. We extend similar previous results for the proximal point method with Bregman distances which dealt only with the finite dimensional case and which applied only to convex optimization problems or to finding zeroes of monotone operators, which are particular cases of variational inequality problems.
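A minimal one-dimensional sketch of the interior point behavior described above, assuming the entropy kernel h(x) = x log x (whose Bregman distance has derivative log x − log y in x) and the monotone operator F(x) = 2(x − 2) on the positive half-line; each subproblem solves F(x) + (1/λ)(log x − log x_k) = 0 by bisection. This is illustrative, not the paper's algorithm verbatim.

```python
import math

def bregman_prox_step(xk, lam, fprime):
    """Solve fprime(x) + (1/lam)*(log x - log xk) = 0 by bisection.
    The left-hand side is increasing in x for a monotone fprime."""
    g = lambda x: fprime(x) + (math.log(x) - math.log(xk)) / lam
    lo, hi = 1e-12, 1e6
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

fprime = lambda x: 2 * (x - 2)  # operator with unique zero at x* = 2
x = 0.5
for _ in range(30):
    x = bregman_prox_step(x, lam=1.0, fprime=fprime)
```

The log term blows up as x approaches 0, so every iterate stays in the interior of the feasible set (x > 0) without any explicit constraint handling, which is the penalization role of the Bregman distance mentioned in the abstract.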

132 citations

Journal Article
TL;DR: In this article, the authors consider two finite procedures for determining the step size of the steepest descent method for unconstrained optimization, without performing exact one-dimensional minimizations, and prove, for a convex objective, convergence of the whole sequence to a minimizer without any level set boundedness assumption and, for one of the procedures, without any Lipschitz condition.
Abstract: Several finite procedures for determining the step size of the steepest descent method for unconstrained optimization, without performing exact one-dimensional minimizations, have been considered in the literature. The convergence analysis of these methods requires that the objective function have bounded level sets and that its gradient satisfy a Lipschitz condition, in order to establish just stationarity of all cluster points. We consider two of such procedures and prove, for a convex objective, convergence of the whole sequence to a minimizer without any level set boundedness assumption and, for one of them, without any Lipschitz condition.
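One well-known finite step-size procedure of this kind is Armijo backtracking: try a full step and halve it until a sufficient-decrease test holds, which terminates after finitely many halvings. The sketch below applies it to a convex quadratic (an illustration of the general idea, not the paper's exact procedures).

```python
def grad_descent_armijo(f, grad, x0, iters=1000):
    """Steepest descent with Armijo backtracking: halve the trial
    step until the sufficient-decrease condition holds."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        gg = sum(gi * gi for gi in g)  # squared gradient norm
        t = 1.0
        # Accept t once f(x - t g) <= f(x) - 0.5 * t * ||g||^2
        while f([xi - t * gi for xi, gi in zip(x, g)]) > f(x) - 0.5 * t * gg:
            t *= 0.5
            if t < 1e-16:
                break
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# Convex quadratic with minimizer at (1, -2)
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: [2 * (x[0] - 1), 20 * (x[1] + 2)]
x = grad_descent_armijo(f, grad, [5.0, 5.0])
```

Each inner loop is a finite procedure (no exact line minimization), matching the class of methods the abstract analyzes.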

131 citations


Cited by
Book
27 Nov 2013
TL;DR: The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proximal operators that commonly arise in practice are provided.
Abstract: This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.
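A standard concrete instance of a proximal operator with a closed-form solution is that of the scaled l1 norm, which reduces to componentwise soft thresholding (a textbook example; the function name below is ours):

```python
def soft_threshold(v, lam):
    """prox of lam*|.| applied componentwise:
    argmin_x  lam*|x| + 0.5*(x - v)^2  =  sign(v) * max(|v| - lam, 0)."""
    return [max(abs(vi) - lam, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]
```

Components with magnitude below lam are set exactly to zero, which is why this operator is the base operation of proximal gradient methods for sparse problems such as the lasso.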

3,627 citations

Journal Article
01 Mar 1970

1,097 citations