Journal ArticleDOI
Error bounds and convergence analysis of feasible descent methods: a general approach
Zhi-Quan Luo, Paul Tseng
Abstract:
We survey and extend a general approach to analyzing the convergence and the rate of convergence of feasible descent methods that does not require any nondegeneracy assumption on the problem. This approach is based on a certain error bound for estimating the distance to the solution set and is applicable to a broad class of methods.
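The error-bound idea in the abstract can be illustrated with a minimal sketch, assuming a toy box-constrained quadratic: the distance from an iterate to the solution set is estimated through the natural residual ||x - P(x - grad f(x))||, and the same projection step drives a feasible descent (projected gradient) iteration. All problem data and function names below are illustrative assumptions, not the paper's setting.

```python
# Toy feasible descent sketch: minimize f(x) = 0.5*||x - c||^2 over the
# box [0, 1]^2 by projected gradient, and monitor the natural residual
# r(x) = ||x - P(x - grad f(x))||, the quantity the error bound estimates.

def grad(x, c):
    # gradient of f(x) = 0.5*||x - c||^2
    return [xi - ci for xi, ci in zip(x, c)]

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n
    return [min(max(xi, lo), hi) for xi in x]

def residual(x, c):
    # natural residual: zero exactly at solutions
    g = grad(x, c)
    p = project_box([xi - gi for xi, gi in zip(x, g)])
    return sum((xi - pi) ** 2 for xi, pi in zip(x, p)) ** 0.5

def projected_gradient(x, c, step=0.5, iters=100):
    for _ in range(iters):
        g = grad(x, c)
        x = project_box([xi - step * gi for xi, gi in zip(x, g)])
    return x

c = [0.3, 1.7]                       # box-constrained minimizer: (0.3, 1.0)
x = projected_gradient([0.9, 0.1], c)
print(x, residual(x, c))
```

At the solution the residual vanishes, so a small residual certifies (via the error bound) that the iterate is close to the solution set.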
Citations
Journal ArticleDOI
Convergence of a block coordinate descent method for nondifferentiable minimization
TL;DR: In this article, the convergence properties of a block coordinate descent method applied to minimize a nonconvex function f(x1, ..., xN) with certain separability and regularity properties were studied.
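A minimal sketch of the two-block case, assuming a toy smooth quadratic f(x, y) = x^2 + y^2 + x*y - 3x - 3y with minimizer (1, 1): each step exactly minimizes over one block with the other block held fixed. The problem and function names are illustrative assumptions, not the cited paper's setting.

```python
# Two-block coordinate descent with exact block minimization on
# f(x, y) = x^2 + y^2 + x*y - 3x - 3y (strictly convex, minimizer (1, 1)).

def bcd(x, y, iters=60):
    for _ in range(iters):
        x = (3.0 - y) / 2.0   # argmin_x f(x, y): solve 2x + y - 3 = 0
        y = (3.0 - x) / 2.0   # argmin_y f(x, y): solve 2y + x - 3 = 0
    return x, y

x, y = bcd(0.0, 0.0)
print(x, y)
```

The alternation contracts toward (1, 1) linearly (by a factor 1/4 per sweep here), which is the kind of rate the error-bound analysis is used to establish.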
Journal ArticleDOI
Coordinate descent algorithms
TL;DR: Identifies a certain problem structure that arises frequently in machine learning applications, and shows that efficient implementations of accelerated coordinate descent algorithms are possible for problems of this type.
Journal ArticleDOI
A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion
Yangyang Xu, Wotao Yin
TL;DR: This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables, and proposes a generalized block coordinate descent method.
Journal ArticleDOI
A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings
TL;DR: A modification to the forward-backward splitting method for finding a zero of the sum of two maximal monotone mappings is proposed, under which the method converges assuming only the forward mapping is (Lipschitz) continuous on some closed convex subset of its domain.
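The forward-backward idea can be sketched on a one-dimensional toy problem, min_x 0.5*(x - b)^2 + lam*|x|: a forward (explicit gradient) step on the smooth term followed by a backward (proximal/resolvent) step on the nonsmooth one. The problem and parameters are assumptions for illustration, and this is plain forward-backward splitting, not the paper's modified scheme.

```python
# Forward-backward splitting for min_x 0.5*(x - b)^2 + lam*|x|.
# The backward step is the proximal map of lam*|x| (soft thresholding).

def soft_threshold(z, t):
    # proximal map of t*|x|
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def forward_backward(b, lam, step=1.0, iters=50, x=0.0):
    for _ in range(iters):
        # forward step on the smooth term, then backward (prox) step
        x = soft_threshold(x - step * (x - b), step * lam)
    return x

x = forward_backward(b=2.0, lam=0.5)
print(x)   # closed-form solution: soft_threshold(2.0, 0.5) = 1.5
```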
Journal ArticleDOI
A coordinate gradient descent method for nonsmooth separable minimization
Paul Tseng, Sangwoon Yun
TL;DR: Proposes a (block) coordinate gradient descent method for minimizing the sum of a smooth function and a separable convex function, and establishes global convergence and, under a local Lipschitzian error bound assumption, linear convergence for this method.
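A toy sketch of coordinate descent on a nonsmooth separable objective, assuming the illustrative instance f(x1, x2) = x1^2 + x2^2 + x1*x2 - 3*x1 - 3*x2 + |x1| + |x2| (minimizer (2/3, 2/3)); each coordinate subproblem is solved exactly via soft thresholding. All data and names here are assumptions, not the paper's method in detail.

```python
# Cyclic coordinate descent on a smooth quadratic plus separable l1 term.
# Each coordinate update minimizes x_i^2 + (coupling) + |x_i| exactly.

def soft(z, t):
    # proximal map of t*|x|
    return max(z - t, 0.0) if z >= 0 else min(z + t, 0.0)

def coord_descent(x1, x2, lam=1.0, iters=80):
    for _ in range(iters):
        x1 = soft((3.0 - x2) / 2.0, lam / 2.0)  # exact argmin over x1
        x2 = soft((3.0 - x1) / 2.0, lam / 2.0)  # exact argmin over x2
    return x1, x2

x1, x2 = coord_descent(0.0, 0.0)
print(x1, x2)
```

Separability of the nonsmooth term is what makes each coordinate subproblem this cheap; the cited paper's linear-rate guarantee rests on the same Lipschitzian error bound surveyed in the main article.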
References
Book
Iterative Solution of Nonlinear Equations in Several Variables
J.M. Ortega, Werner C. Rheinboldt
TL;DR: A standard reference on iterative methods for nonlinear equations in several variables, covering the convergence of minimization methods and convergence under partial ordering.
Book
Parallel and Distributed Computation: Numerical Methods
TL;DR: This work discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and it presents both Jacobi and Gauss-Seidel iterations, which serve as algorithms of reference for many of the computational approaches addressed later.
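The two reference iterations named above can be contrasted in a small sketch, assuming a toy 2x2 diagonally dominant system: Jacobi updates every component from the previous iterate (hence parallelizable), while Gauss-Seidel updates in place, immediately reusing fresh values. The system and names are illustrative assumptions.

```python
# Jacobi vs. Gauss-Seidel on A x = b with A diagonally dominant,
# so both iterations converge. Exact solution: x = [1.0, 2.0].
A = [[4.0, 1.0], [2.0, 5.0]]
b = [6.0, 12.0]

def jacobi(x, iters=80):
    for _ in range(iters):
        # all components computed from the old iterate x
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(2) if j != i)) / A[i][i]
             for i in range(2)]
    return x

def gauss_seidel(x, iters=80):
    x = list(x)
    for _ in range(iters):
        for i in range(2):
            # in-place update: later components see the new x[0], etc.
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(2) if j != i)) / A[i][i]
    return x

xj = jacobi([0.0, 0.0])
xg = gauss_seidel([0.0, 0.0])
print(xj, xg)
```

Gauss-Seidel typically converges in fewer sweeps on such systems, at the cost of the sequential dependence that the book's parallel-computation discussion is about.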
Book
Linear and nonlinear programming
David G. Luenberger, Yinyu Ye
TL;DR: A classic text covering linear programming and unconstrained optimization, including convergence properties for both standard and accelerated steepest descent methods.
Journal ArticleDOI
Monotone Operators and the Proximal Point Algorithm
TL;DR: The proximal point algorithm is investigated in a more general form in which the requirement of exact minimization at each iteration is weakened and the subdifferential $\partial f$ is replaced by an arbitrary maximal monotone operator T.
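The exact proximal point iteration is x_{k+1} = (I + c T)^{-1} x_k, the resolvent of the maximal monotone operator T. A minimal sketch, assuming the simplest instance T = subdifferential of |x|, whose resolvent is the soft-threshold map; the step size and starting point are illustrative assumptions.

```python
# Proximal point algorithm x_{k+1} = (I + c*T)^{-1} x_k for
# T = d|x| (maximal monotone); the resolvent is soft thresholding by c,
# and the iterates converge to the zero of T at x = 0.

def resolvent(x, c):
    # (I + c * d|.|)^{-1} x: soft threshold by c
    if x > c:
        return x - c
    if x < -c:
        return x + c
    return 0.0

def proximal_point(x, c=0.3, iters=20):
    for _ in range(iters):
        x = resolvent(x, c)
    return x

x = proximal_point(5.0)
print(x)
```

Each step moves a fixed distance c toward 0 until the iterate lands inside [-c, c], where the resolvent maps it exactly to the zero of T; the paper's generalization allows these resolvent evaluations to be computed only approximately.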
Journal ArticleDOI
The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming
TL;DR: This method can be regarded as a generalization of the methods discussed in [1–4] and can be applied to the approximate solution of problems in linear and convex programming.
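The simplest instance of finding a common point of convex sets by successive projection can be sketched as follows, assuming two lines in the plane whose unique common point is the origin; this is plain alternating projection, a special case rather than the paper's full relaxation scheme.

```python
# Alternating projections onto two convex sets in the plane:
# C1 = {(a, b): b = a} and C2 = {(a, b): b = 0}, with C1 ∩ C2 = {(0, 0)}.

def proj_diag(p):
    # Euclidean projection onto the line y = x
    m = (p[0] + p[1]) / 2.0
    return (m, m)

def proj_axis(p):
    # Euclidean projection onto the line y = 0
    return (p[0], 0.0)

def alternating_projections(p, iters=60):
    for _ in range(iters):
        p = proj_axis(proj_diag(p))
    return p

p = alternating_projections((1.0, 0.0))
print(p)
```

Each full cycle halves the distance to the intersection here, so the iterates converge linearly to the common point, the behavior the relaxation method generalizes to arbitrary families of convex sets.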