
Showing papers in "Optimization Methods & Software in 2020"


Journal ArticleDOI
TL;DR: In this paper, the authors study sketching, a dimensionality reduction technique that has received much attention in the statistics community, in the context of Newton's method for solving finite-sum problems.
Abstract: Sketching, a dimensionality reduction technique, has received much attention in the statistics community. In this paper, we study sketching in the context of Newton's method for solving finite-sum ...

83 citations
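The general idea can be illustrated with a toy 1-D sketch (my own illustration, not the paper's algorithm): Newton's method on a finite-sum quadratic, where the full gradient is used but the Hessian is estimated from a random subsample ("sketch") of the terms.

```python
import random

# Toy sketched-Newton illustration (assumed setup, not the paper's method):
# f(x) = (1/n) * sum_i f_i(x) with f_i(x) = 0.5 * a[i] * x**2 - b[i] * x.
def sketched_newton(a, b, x0, sample_size, iters, seed=0):
    rng = random.Random(seed)
    n = len(a)
    x = x0
    for _ in range(iters):
        grad = sum(a[i] * x - b[i] for i in range(n)) / n   # full gradient
        idx = rng.sample(range(n), sample_size)             # random sketch
        hess = sum(a[i] for i in idx) / sample_size         # sketched Hessian
        x -= grad / hess                                    # Newton-like step
    return x

a = [1.0, 2.0, 3.0, 4.0]
b = [1.0, 1.0, 1.0, 1.0]
# Minimizer of the averaged objective: x* = mean(b) / mean(a) = 0.4
x = sketched_newton(a, b, x0=5.0, sample_size=2, iters=50)
# x is close to 0.4: each step contracts the error since the sketched
# Hessian stays within a constant factor of the true one here.
```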


Journal ArticleDOI
TL;DR: Sdpnal+ as mentioned in this paper is a MATLAB software package that implements an augmented Lagrangian based method to solve large scale semidefinite programming problems with bound constraints.
Abstract: Sdpnal+ is a MATLAB software package that implements an augmented Lagrangian based method to solve large scale semidefinite programming problems with bound constraints. The implementation w...

59 citations


Journal ArticleDOI
TL;DR: In this paper, a new variant of accelerated gradient descent is proposed; the method does not require any information about the objective function and uses exact line search in its practical implementation.
Abstract: In this paper, a new variant of accelerated gradient descent is proposed. The proposed method does not require any information about the objective function, uses exact line search for the practical...

45 citations
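For a quadratic objective, exact line search has a closed form, which is what makes such methods practical; here is a minimal sketch (plain gradient descent, not the accelerated variant of the paper) on a 2-D quadratic f(x) = 0.5·xᵀAx − bᵀx, where the stepsize t = (gᵀg)/(gᵀAg) is exact.

```python
# Plain gradient descent with exact line search on a 2-D quadratic.
# Assumed illustrative problem, not taken from the paper.
def grad_descent_exact_ls(A, b, x, iters):
    for _ in range(iters):
        g = [A[0][0] * x[0] + A[0][1] * x[1] - b[0],
             A[1][0] * x[0] + A[1][1] * x[1] - b[1]]        # gradient A x - b
        Ag = [A[0][0] * g[0] + A[0][1] * g[1],
              A[1][0] * g[0] + A[1][1] * g[1]]
        gg = g[0] * g[0] + g[1] * g[1]
        if gg == 0.0:
            break
        t = gg / (g[0] * Ag[0] + g[1] * Ag[1])   # exact minimizer along -g
        x = [x[0] - t * g[0], x[1] - t * g[1]]
    return x

A = [[3.0, 1.0], [1.0, 2.0]]   # symmetric positive definite
b = [1.0, 1.0]
x = grad_descent_exact_ls(A, b, [0.0, 0.0], 100)
# Solves A x = b, whose solution is x = [0.2, 0.4]
```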


Journal ArticleDOI
TL;DR: In this paper, the scope of the alternating direction method of multipliers (ADMM) is expanded: ADMM, when employed to solve problems with multiaffine constraints that satisfy certain verifiable assumptions, is shown to retain its convergence properties.
Abstract: We expand the scope of the alternating direction method of multipliers (ADMM). Specifically, we show that ADMM, when employed to solve problems with multiaffine constraints that satisfy certain ver...

44 citations
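For context, here is the classic two-block ADMM on the simplest linearly constrained problem — minimize 0.5(x−a)² + 0.5(z−b)² subject to x − z = 0 — with scaled dual variable u. This is only the basic alternating scheme; the paper's contribution concerns the much harder multiaffine-constraint case.

```python
# Two-block ADMM sketch on a toy consensus problem (assumed example).
def admm_consensus(a, b, rho=1.0, iters=100):
    x = z = u = 0.0   # u is the scaled dual variable
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # x-update: argmin of x-part
        z = (b + rho * (x + u)) / (1.0 + rho)   # z-update: argmin of z-part
        u = u + x - z                           # dual update on x - z = 0
    return x, z

x, z = admm_consensus(2.0, 4.0)
# Consensus solution: x = z = (a + b) / 2 = 3.0
```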


Journal ArticleDOI
TL;DR: An iterative algorithm is developed that solves adaptively refined MIP relaxations; convergence results are given for a wide range of MINLPs, requiring only continuous nonlinearities with bounded domains and an oracle computing the maxima of the nonlinearities on their domains.
Abstract: We propose a method for solving mixed-integer nonlinear programmes (MINLPs) to global optimality by discretization of occurring nonlinearities. The main idea is based on using piecewise lin...

33 citations


Journal ArticleDOI
TL;DR: In this article, an implementation of the L-BFGS method designed to deal with two adversarial situations is described; the first occurs in distributed computing environments where some of the computing nodes may be slow or unreliable.
Abstract: This paper describes an implementation of the L-BFGS method designed to deal with two adversarial situations. The first occurs in distributed computing environments where some of the comput...

31 citations


Journal ArticleDOI
TL;DR: Theoretical analysis shows that each of the defined dynamical system models of the nonlinear gradient dynamic approach ensures convergence for the tensor complementarity problem (TCP), and computer-simulation results substantiate that the considered dynamical systems can solve the TCP.
Abstract: Nonlinear gradient dynamic approach for solving the tensor complementarity problem (TCP) are presented. Theoretical analysis shows that each of the defined dynamical system models ensures the conve...

29 citations


Journal ArticleDOI
TL;DR: DICOPT can have difficulty solving instances in which some of the nonlinear constraints are so restrictive that the nonlinear subproblems generated by the algorithm are infeasible; this is addressed with a feasibility pump algorithm, which modifies the objective function in order to efficiently find feasible solutions.
Abstract: The solver DICOPT is based on the outer-approximation algorithm used for solving mixed-integer nonlinear programming (MINLP) problems. This algorithm is very effective for solving some type...

24 citations


Journal ArticleDOI
TL;DR: Updating the augmented Lagrangian multiplier by a closed-form expression yields an efficient first-order infeasible approach for optimization problems with orthogonality constraints.
Abstract: Updating the augmented Lagrangian multiplier by closed-form expression yields efficient first-order infeasible approach for optimization problems with orthogonality constraints. Hence, parallelizat...

22 citations


Journal ArticleDOI
TL;DR: Based on the simple intuition that the centripetal acceleration of an object in uniform circular motion points towards the centre of the circle, methods are proposed to alleviate the cyclic behaviour of iterates when training generative adversarial networks (GANs).
Abstract: Training generative adversarial networks (GANs) often suffers from cyclic behaviours of iterates. Based on a simple intuition that the direction of centripetal acceleration of an object moving in u...

22 citations
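The cyclic behaviour the entry mentions is easy to reproduce in miniature: simultaneous gradient descent-ascent on the bilinear saddle problem min_x max_y x·y spirals away from the equilibrium (0, 0) instead of converging. This toy demo illustrates only the failure mode, not the paper's correction.

```python
# Simultaneous gradient descent-ascent on f(x, y) = x * y (toy example).
def simultaneous_gda(x, y, lr, steps):
    for _ in range(steps):
        gx, gy = y, x                       # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy     # descend in x, ascend in y
    return x, y

x, y = simultaneous_gda(1.0, 0.0, 0.1, 100)
# The radius grows by sqrt(1 + lr**2) each step, so the iterates
# spiral outwards rather than approaching the saddle point (0, 0).
```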


Journal ArticleDOI
TL;DR: Algencan is a well-established safeguarded Augmented Lagrangian algorithm introduced in [R. Andreani, E. G. Birgin, J. M. Martinez, and M. L. Schuverdt].
Abstract: Algencan is a well established safeguarded Augmented Lagrangian algorithm introduced in [R. Andreani, E. G. Birgin, J. M. Martinez, and M. L. Schuverdt, On Augmented Lagrangian methods with general...

Journal ArticleDOI
TL;DR: New versions of Newton-like algorithms are provided, combining the Newton and damped Newton methods with a special step-size choice; under some assumptions, the convergence is global.
Abstract: Newton's method is one of the most powerful methods for finding solutions of nonlinear equations and for proving their existence. In its ‘pure’ form it has fast convergence near the solution, but sma...
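The pure-versus-damped contrast can be sketched for a scalar equation f(x) = 0: the damped variant shrinks the Newton step until the residual decreases, which guards against wild steps far from the solution. This is a generic illustration with a backtracking rule of my own choosing, not the paper's specific step-size rule.

```python
# Damped Newton for a scalar equation f(x) = 0 (assumed illustrative rule).
def damped_newton(f, df, x, iters=50):
    for _ in range(iters):
        step = -f(x) / df(x)                   # pure Newton step
        t = 1.0
        while abs(f(x + t * step)) >= abs(f(x)) and t > 1e-12:
            t *= 0.5                           # damp until |f| decreases
        x += t * step
    return x

# Example: f(x) = x**3 - 2 with root 2**(1/3). From x = 0.01 the pure
# Newton step is enormous (df is tiny); damping keeps iterates controlled.
root = damped_newton(lambda x: x ** 3 - 2.0, lambda x: 3.0 * x ** 2, 0.01)
```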

Journal ArticleDOI
TL;DR: A new mathematical model and a new efficient Branch & Bound method are proposed for the Constrained Shortest Path Tour Problem, aimed at finding a shortest path from a single origin to a single destination, such that a sequence of disjoint and possibly different-sized node subsets is crossed in a given fixed order.
Abstract: Given a directed graph with non-negative arc lengths, the Constrained Shortest Path Tour Problem (CSPTP) is aimed at finding a shortest path from a single-origin to a single-destination, such that ...

Journal ArticleDOI
TL;DR: In this article, a new stepsize for the gradient method is proposed, which converges to the reciprocal of the largest eigenvalue of the Hessian in cases where Dai–Yang's asymptotically optimal gradient method fails.
Abstract: We propose a new stepsize for the gradient method. It is shown that this new stepsize will converge to the reciprocal of the largest eigenvalue of the Hessian, when Dai-Yang's asymptotic optimal gr...
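Why the reciprocal of the largest Hessian eigenvalue matters can be shown with a hedged toy example (not the paper's stepsize): for a quadratic with Hessian eigenvalues in (0, L], the fixed stepsize 1/L makes every coordinate update a contraction.

```python
# Gradient method with fixed stepsize 1/L on a diagonal quadratic
# f(x) = sum_i (0.5 * h[i] * x[i]**2 - b[i] * x[i]). Assumed example.
def grad_method(diag_hess, b, x, step, iters):
    for _ in range(iters):
        x = [xi - step * (hi * xi - bi)
             for xi, hi, bi in zip(x, diag_hess, b)]
    return x

h = [1.0, 4.0, 10.0]   # largest Hessian eigenvalue L = 10
b = [1.0, 4.0, 10.0]   # minimizer x* = [1, 1, 1]
x = grad_method(h, b, [0.0, 0.0, 0.0], step=1.0 / max(h), iters=400)
# Each coordinate error shrinks by |1 - h[i]/L| < 1 per iteration.
```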

Journal ArticleDOI
TL;DR: An extensible open-source deterministic global optimizer (EAGO) programmed entirely in the Julia language is presented.
Abstract: An extensible open-source deterministic global optimizer (EAGO) programmed entirely in the Julia language is presented. EAGO was developed to serve the need for supporting higher-complexity user-de...

Journal ArticleDOI
TL;DR: Global and super-exponential convergence properties of the proposed model, as well as the behaviour of its equilibrium state, are investigated, and the generality and effectiveness of the discovered ZNN evolution design are illustrated.
Abstract: A varying-parameter ZNN (VPZNN) neural design is defined for approximating various generalized inverses and expressions involving generalized inverses of complex matrices. The proposed model is ter...

Journal ArticleDOI
TL;DR: It is shown that the non-accelerated schemes take at most $\mathcal{O}\left(\epsilon^{-1/(p+\nu-1)}\right)$ iterations to reduce the norm of the gradient of the objective below a given $\epsilon\in (0,1)$.
Abstract: In this paper, we consider the problem of finding ε-approximate stationary points of convex functions that are p-times differentiable with ν-Hölder continuous pth derivatives. We present tensor met...

Journal ArticleDOI
TL;DR: Bounds are derived for the objective errors and gradient residuals when finding approximations to the solution of common regularized quadratic optimization problems within evolving Krylov spaces; these provide upper bounds on the number of iterations required to achieve a given stated accuracy.
Abstract: We derive bounds for the objective errors and gradient residuals when finding approximations to the solution of common regularized quadratic optimization problems within evolving Krylov spaces. The...
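The conjugate-gradient method is the standard way such Krylov-space approximations are built, with the gradient residual r = b − Ax as the quantity the bounds control; a minimal sketch on a 3×3 system (my own example, not the paper's setting):

```python
# Minimal conjugate-gradient sketch: iterates live in growing Krylov
# spaces span{b, Ab, A^2 b, ...}; exact in at most n steps in exact
# arithmetic. Assumed illustrative system, not from the paper.
def cg(A, b, iters):
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                       # residual b - A*0
    p = r[:]
    rr = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = mv(p)
        alpha = rr / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = sum(ri * ri for ri in r)
        if rr_new < 1e-24:         # residual essentially zero: done
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = cg(A, b, iters=3)   # solution is [2/9, 1/9, 13/9]
```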

Journal ArticleDOI
TL;DR: A derivative-free variant of the VU-algorithm for convex finite-max objective functions is established, which demonstrates the feasibility and practical value of this superlinearly convergent method for minimizing nonsmooth, convex functions.
Abstract: The VU-algorithm is a superlinearly convergent method for minimizing nonsmooth, convex functions. At each iteration, the algorithm works with a certain V-space and its orthogonal U-space, such that...

Journal ArticleDOI
TL;DR: This paper studies the convergence rate of the Levenberg-Marquardt (LM) method under the Hölderian local error bound condition and the Hölderian continuity of the Jacobian, which are more general than the local error bound condition and the Lipschitz continuity of the Jacobian.
Abstract: In this paper, we study the convergence rate of the Levenberg-Marquardt (LM) method under the Hölderian local error bound condition and the Hölderian continuity of the Jacobian, which are more gene...
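The LM iteration itself is compact: it solves (JᵀJ + μI)d = −Jᵀr and adapts the damping parameter μ. A 1-D sketch on the least-squares problem min 0.5·r(x)² with r(x) = x² − 2 (my own toy example with a crude μ update, not the paper's scheme):

```python
# 1-D Levenberg-Marquardt sketch for r(x) = x**2 - 2 (root sqrt(2)).
def levenberg_marquardt(x, mu=1.0, iters=100):
    for _ in range(iters):
        r = x * x - 2.0
        J = 2.0 * x                         # Jacobian (scalar derivative)
        d = -J * r / (J * J + mu)           # solve (J^T J + mu) d = -J^T r
        if abs((x + d) ** 2 - 2.0) < abs(r):
            x, mu = x + d, mu * 0.5         # success: accept, relax damping
        else:
            mu *= 2.0                       # failure: increase damping
    return x

x = levenberg_marquardt(3.0)
# As mu shrinks the step approaches Gauss-Newton, so x -> sqrt(2).
```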

Journal ArticleDOI
TL;DR: If a is sufficiently large, satisfying a condition that depends only on the Armijo parameter, then, when the method is initiated at a suitable starting point, the iterates converge to a non-optimal point, although f is unbounded below.
Abstract: It has long been known that the gradient (steepest descent) method may fail on non-smooth problems, but the examples that have appeared in the literature are either devised specifically to defeat a...
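The basic failure mode on non-smooth functions is visible even in 1-D: on f(x) = |x|, the (sub)gradient method with a constant stepsize ends up oscillating near the kink instead of converging. This demo shows only the generic phenomenon, not the paper's specific Armijo-based example.

```python
# Constant-stepsize subgradient descent on f(x) = |x| (toy demonstration).
def subgrad_descent_abs(x, step, iters):
    for _ in range(iters):
        g = 1.0 if x > 0 else -1.0   # a subgradient of |x|
        x -= step * g
    return x

x = subgrad_descent_abs(1.0, 0.3, 100)
# The iterates settle into a cycle around 0 (here roughly between
# 0.1 and -0.2) and never reach the minimizer x = 0.
```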

Journal ArticleDOI
TL;DR: This work uses standard optimality conditions and practical subproblem solves to show a same-order sharp complexity bound for second-order criticality, and extends the method in Birgin et al. to finding second-order critical points, under the same problem smoothness assumptions as were needed for first-order complexity.
Abstract: An adaptive regularization algorithm is proposed that uses Taylor models of the objective of order p, p≥2, of the unconstrained objective function, and that is guaranteed to find a first- and secon...

Journal ArticleDOI
TL;DR: In this paper, first-order methods are proposed for minimization of a convex function on a simple convex set, assuming that the objective is a composite function given as a sum of simple components.
Abstract: In this paper, we propose new first-order methods for minimization of a convex function on a simple convex set. We assume that the objective function is a composite function given as a sum of a sim...
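Composite objectives f = g + h, with g smooth and h "simple", are the natural setting for proximal-gradient methods, where "simple" means the proximal step of h is cheap. A 1-D sketch with g(x) = 0.5(x−a)² and h(x) = λ|x| (whose prox is soft-thresholding) — an illustration of the composite structure, not the paper's algorithm:

```python
# Proximal-gradient (ISTA-style) sketch on a 1-D lasso-type problem.
def soft_threshold(v, t):
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def proximal_gradient(a, lam, step=0.5, iters=200):
    x = 0.0
    for _ in range(iters):
        # gradient step on the smooth part, then prox of lam * |.|
        x = soft_threshold(x - step * (x - a), step * lam)
    return x

x = proximal_gradient(a=3.0, lam=1.0)
# Closed form for this problem (a > lam): x* = a - lam = 2.0
```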

Journal ArticleDOI
TL;DR: In this article, machine learning (ML) problems are posed as highly nonlinear and nonconvex unconstrained optimization problems and addressed with methods based on stochastic gradient descent (SGD).
Abstract: Machine learning (ML) problems are often posed as highly nonlinear and nonconvex unconstrained optimization problems. Methods for solving ML problems based on stochastic gradient descent are easily...
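A bare-bones SGD-style sketch on the finite-sum least-squares objective f(x) = (1/n)·Σ 0.5(x − dᵢ)², whose minimizer is the mean of the data. Terms are visited cyclically here for reproducibility; with the classic 1/t stepsize the iterate is exactly the running average of the visited samples. An assumed toy example, unrelated to the paper's methods.

```python
# Incremental-gradient sketch with the classic 1/t stepsize.
def incremental_gradient(data, epochs=200):
    x, t = 0.0, 0
    for _ in range(epochs):
        for d in data:                 # one pass over the terms
            t += 1
            x -= (1.0 / t) * (x - d)   # gradient of the single sampled term
    return x

x = incremental_gradient([1.0, 2.0, 3.0, 6.0])
# With stepsize 1/t, x is the running mean of all visited samples: 3.0
```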

Journal ArticleDOI
TL;DR: A range-constrained orthogonal CCA (OCCA) model and its variant are established and applied to three data-analysis tasks on real-life datasets, namely unsupervised feature fusion, multi-target regression and multi-label classification.
Abstract: Canonical correlation analysis (CCA) is a cornerstone of linear dimensionality reduction techniques that jointly maps two datasets to achieve maximal correlation. CCA has been widely used in applic...

Journal ArticleDOI
TL;DR: In this paper, the authors analyse the Basic Tensor Methods, which use approximate solutions of the auxiliary problems, and describe the quality of this solution by the residual in the function value.
Abstract: In this paper, we analyse the Basic Tensor Methods, which use approximate solutions of the auxiliary problems. The quality of this solution is described by the residual in the function value, which...

Journal ArticleDOI
TL;DR: It is shown that a special class of optimal control problems with complementarity constraints on the control functions possesses optimal solutions whenever the underlying control space is a first-order Sobolev space.
Abstract: A special class of optimal control problems with complementarity constraints on the control functions is studied. It is shown that such problems possess optimal solutions whenever the underlying co...

Journal ArticleDOI
TL;DR: The densest k-subgraph problem is the problem of finding a k-vertex subgraph of a graph with the maximum number of edges; an approach for solving large instances of the problem is introduced.
Abstract: The densest k-subgraph problem is the problem of finding a k-vertex subgraph of a graph with the maximum number of edges. In order to solve large instances of the densest k-subgraph problem, we int...
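The problem definition is easy to make concrete by brute force on a toy graph: enumerate all k-vertex subsets and count induced edges. This exponential enumeration is exactly what becomes hopeless at scale, which is the paper's motivation for smarter approaches.

```python
from itertools import combinations

# Brute-force densest k-subgraph on a toy graph (assumed example).
def densest_k_subgraph(n, edges, k):
    edge_set = {frozenset(e) for e in edges}
    best = max(
        combinations(range(n), k),
        key=lambda S: sum(frozenset(p) in edge_set
                          for p in combinations(S, 2)),
    )
    count = sum(frozenset(p) in edge_set for p in combinations(best, 2))
    return set(best), count

# Triangle 0-1-2 plus a pendant path 3-4: densest 3-subgraph is the triangle.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]
S, m = densest_k_subgraph(5, edges, k=3)
# S == {0, 1, 2} with m == 3 induced edges
```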

Journal ArticleDOI
TL;DR: This paper considers a resource allocation problem, stated as a convex minimization problem with linear constraints, and uses gradient and accelerated gradient descent to solve it.
Abstract: In this paper, we consider resource allocation problem stated as a convex minimization problem with linear constraints. To solve this problem, we use gradient and accelerated gradient descent appli...
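One standard way gradient methods reach linearly constrained problems is through the dual; a toy sketch (my own example, not the paper's formulation): minimize Σ 0.5·xᵢ² subject to Σ xᵢ = c. For a fixed multiplier λ the Lagrangian minimizer is xᵢ = λ, and gradient ascent on the dual adjusts λ until the resource constraint holds.

```python
# Dual-gradient sketch for a toy resource-allocation problem.
def dual_gradient(n, c, lr=0.1, iters=500):
    lam = 0.0
    for _ in range(iters):
        x = [lam] * n                     # argmin_x of the Lagrangian
        lam += lr * (c - sum(x))          # ascent on the constraint gap
    return x

x = dual_gradient(n=4, c=8.0)
# Optimal allocation is uniform: x_i = c / n = 2.0
```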

Journal ArticleDOI
TL;DR: An augmented Lagrangian method is proposed for the L1-loss model, which is designed to solve the primal problem of support vector machine (SVM) and is competitive with the most popular solvers in both speed and accuracy.
Abstract: Support vector machine (SVM) has proved to be a successful approach for machine learning. Two typical SVM models are the L1-loss model for support vector classification (SVC) and ε-L1-loss model fo...
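The primal L1-loss (hinge-loss) SVC objective the entry refers to can be sketched in 1-D and minimized by plain subgradient descent — a deliberately simple substitute for the paper's augmented Lagrangian method, on an assumed toy dataset with no bias term:

```python
# Primal hinge-loss SVM sketch: min_w 0.5*w**2 + C * sum_i max(0, 1 - y_i*w*x_i)
def hinge_svm_1d(xs, ys, C=10.0, lr=0.01, iters=2000):
    w = 0.0
    for _ in range(iters):
        g = w                                   # gradient of 0.5 * w**2
        for xi, y in zip(xs, ys):
            if y * w * xi < 1.0:                # margin violated: hinge active
                g -= C * y * xi                 # subgradient of the hinge term
        w -= lr * g
    return w

xs = [-2.0, -1.0, 1.0, 2.0]
ys = [-1, -1, 1, 1]
w = hinge_svm_1d(xs, ys)
# w settles near 1, so every point satisfies y_i * w * x_i > 0
```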