
Showing papers in "Journal of Optimization Theory and Applications in 2021"


Journal ArticleDOI
TL;DR: New second-order methods with a convergence rate faster than the existing lower bound for this type of scheme are presented; the main idea is to implement the third-order scheme of Nesterov using only a second-order oracle.
Abstract: In this paper, we present new second-order methods with convergence rate $$O\left( k^{-4}\right) $$, where k is the iteration counter. This is faster than the existing lower bound for this type of scheme (Agarwal and Hazan in Proceedings of the 31st conference on learning theory, PMLR, pp. 774–792, 2018; Arjevani and Shiff in Math Program 178(1–2):327–360, 2019), which is $$O\left( k^{-7/2} \right) $$. Our progress can be explained by a finer specification of the problem class. The main idea of this approach consists in implementing the third-order scheme from Nesterov (Math Program 186:157–183, 2021) using the second-order oracle. At each iteration of our method, we solve a nontrivial auxiliary problem by a linearly convergent scheme based on the relative non-degeneracy condition (Bauschke et al. in Math Oper Res 42:330–348, 2016; Lu et al. in SIOPT 28(1):333–354, 2018). During this process, the Hessian of the objective function is computed once, and the gradient is computed $$O\left( \ln {1 \over \epsilon }\right) $$ times, where $$\epsilon $$ is the desired accuracy of the solution for our problem.

28 citations


Journal ArticleDOI
TL;DR: A projected subgradient method for solving constrained nondifferentiable quasiconvex multiobjective optimization problems, based on the Plastria subdifferential, is proposed, and its convergence to a Pareto efficient solution is established under suitable, yet rather general, assumptions.
Abstract: In this paper, we propose a projected subgradient method for solving constrained nondifferentiable quasiconvex multiobjective optimization problems. The algorithm is based on the Plastria subdifferential to overcome potential shortcomings known from algorithms based on the classical gradient. Under suitable, yet rather general assumptions, we establish the convergence of the full sequence generated by the algorithm to a Pareto efficient solution of the problem. Numerical results are presented to illustrate our findings.
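
As a rough illustration of the iteration template (not the paper's Plastria-subdifferential-based multiobjective method), the sketch below runs a projected subgradient step with normalized subgradients and a diminishing step size on a single quasiconvex toy objective; the projection, objective and step rule are all assumptions made for the example.

```python
# Single-objective analogue of a projected subgradient iteration:
#   x_{k+1} = P_C(x_k - alpha_k * g_k / ||g_k||),  alpha_k diminishing.
# The paper's algorithm works with Plastria subdifferentials and several
# objectives at once; this sketch only conveys the update template.
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n (stand-in for P_C)."""
    return np.clip(x, lo, hi)

def projected_subgradient(subgrad, x0, iters=200):
    x = x0.copy()
    for k in range(iters):
        g = subgrad(x)
        gnorm = np.linalg.norm(g)
        if gnorm == 0.0:                         # already at a minimizer
            break
        step = 1.0 / np.sqrt(k + 1.0)            # diminishing step size
        x = project_box(x - step * g / gnorm)
    return x

# Toy quasiconvex objective f(x) = sqrt(|x_1| + |x_2|); sign(x) is a positive
# multiple of the gradient where f is differentiable, so after normalization
# it gives the same search direction.
x = projected_subgradient(lambda x: np.sign(x), np.array([0.9, -0.7]))
print(x)   # the iterates settle near the minimizer 0 of this toy example
```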

28 citations


Journal ArticleDOI
TL;DR: In this paper, a second-order time-continuous dynamic system with fast convergence guarantees is proposed to solve structured convex minimization problems with an affine constraint, which is associated with the augmented Lagrangian formulation of the minimization problem.
Abstract: In this paper, we propose in a Hilbertian setting a second-order time-continuous dynamic system with fast convergence guarantees to solve structured convex minimization problems with an affine constraint. The system is associated with the augmented Lagrangian formulation of the minimization problem. The corresponding dynamics brings into play three general time-varying parameters, each with specific properties, and which are, respectively, associated with viscous damping, extrapolation and temporal scaling. By appropriately adjusting these parameters, we develop a Lyapunov analysis which provides fast convergence properties of the values and of the feasibility gap. These results will naturally pave the way for developing corresponding accelerated ADMM algorithms, obtained by temporal discretization.

23 citations


Journal ArticleDOI
TL;DR: This paper conducts a thorough study on the first-order and second-order (sub)differentiation of real-valued functions in quaternion matrices, with a newly introduced operation called R-product as the key tool for the authors' calculus.
Abstract: The class of quaternion matrix optimization (QMO) problems, with quaternion matrices as decision variables, has been widely used in color image processing and other engineering areas in recent years. However, optimization theory for QMO is far from adequate. The main objective of this paper is to provide necessary theoretical foundations on optimality analysis, in order to enrich the contents of optimization theory and to pave way for the design of efficient numerical algorithms as well. We achieve this goal by conducting a thorough study on the first-order and second-order (sub)differentiation of real-valued functions in quaternion matrices, with a newly introduced operation called R-product as the key tool for our calculus. Combining with the classical optimization theory, we establish the first-order and the second-order optimality analysis for QMO. Particular treatments on convex functions, the $$\ell _0$$ -norm and the rank function in quaternion matrices are tailored for a sparse low rank QMO model, arising from color image denoising, to establish its optimality conditions via stationarity.

23 citations


Journal ArticleDOI
TL;DR: In this paper, a new viscosity extragradient algorithm was proposed for solving variational inequality problems involving a pseudo-monotone and non-Lipschitz continuous operator in real Hilbert spaces.
Abstract: In this paper, we propose a new viscosity extragradient algorithm for solving variational inequality problems involving a pseudo-monotone and non-Lipschitz continuous operator in real Hilbert spaces. We prove a strong convergence theorem under some appropriate conditions imposed on the parameters. Finally, we give some numerical experiments to illustrate the advantages of our proposed algorithm. The main results obtained in this paper extend and improve some related works in the literature.
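
For intuition, here is a bare-bones sketch of the classical (Korpelevich) extragradient step with a fixed step size on a toy monotone variational inequality; the paper's method additionally uses a viscosity term and an adaptive step rule to handle non-Lipschitz operators, none of which is reproduced here, and the operator, feasible set and step size below are assumptions for the example.

```python
# Classical extragradient iteration for VI(F, C):
# find x* in C such that <F(x*), x - x*> >= 0 for all x in C.
#   y_k     = P_C(x_k - lam * F(x_k))   (prediction)
#   x_{k+1} = P_C(x_k - lam * F(y_k))   (correction)
import numpy as np

def extragradient(F, project, x0, lam=0.1, iters=500):
    x = x0.astype(float)
    for _ in range(iters):
        y = project(x - lam * F(x))     # prediction step
        x = project(x - lam * F(y))     # correction step
    return x

# Toy example: F(x) = A x with a rotational (skew) component, C the unit box;
# the unique solution is x* = 0 since F(0) = 0 and 0 lies in C.
A = np.array([[1.0, 2.0], [-2.0, 1.0]])
x = extragradient(lambda v: A @ v, lambda z: np.clip(z, -1.0, 1.0),
                  np.array([0.8, -0.5]))
print(x)                                # close to the solution x* = 0
```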

19 citations


Journal ArticleDOI
TL;DR: In this paper, the optimality conditions for a class of constrained interval-valued optimization problems governed by path-independent curvilinear integral (mechanical work) cost functionals are studied.
Abstract: In this paper, optimality conditions are studied for a class of constrained interval-valued optimization problems governed by path-independent curvilinear integral (mechanical work) cost functionals. Specifically, a minimal criterion of optimality for a local LU-optimal solution of the considered PDE&PDI-constrained variational control problem to be its global LU-optimal solution is formulated and proved. In addition, the main result is highlighted by an illustrative application describing the controlled behavior of an artificial neural system.

18 citations


Journal ArticleDOI
TL;DR: It is shown that the marginal function of the considered control system is lower semi-continuous and the optimal states operator generates a continuous branch in a suitable function space.
Abstract: We consider an optimal control problem for non-isothermal steady flows of low-concentrated aqueous polymer solutions in a bounded 3D domain. In this problem, the state functions are the flow velocity and the temperature, while the control function is the heat flux through a given part of the boundary of the flow domain. We obtain sufficient conditions for the existence of weak solutions that minimize a cost functional under a given bounded set of admissible controls. It is shown that the marginal function of the considered control system is lower semi-continuous and the optimal states operator generates a continuous branch in a suitable function space.

16 citations


Journal ArticleDOI
TL;DR: In this article, the worst-case convergence bound of the gradient norm of a first-order method for smooth convex minimization was optimized in terms of the worst case convergence bound (i.e., efficiency) of the decrease in gradient norm.
Abstract: This paper optimizes the step coefficients of first-order methods for smooth convex minimization in terms of the worst-case convergence bound (i.e., efficiency) of the decrease in the gradient norm. This work is based on the performance estimation problem approach. The worst-case gradient bound of the resulting method is optimal up to a constant for large-dimensional smooth convex minimization problems, under the initial bounded condition on the cost function value. This paper then illustrates that the proposed method has a computationally efficient form that is similar to the optimized gradient method.

16 citations


Journal ArticleDOI
TL;DR: The conic operator splitting method (COSMO) solver as discussed by the authors alternates between solving a quasi-definite linear system with a constant coefficient matrix and a projection onto convex sets.
Abstract: This paper describes the conic operator splitting method (COSMO) solver, an operator splitting algorithm and associated software package for convex optimisation problems with quadratic objective function and conic constraints. At each step, the algorithm alternates between solving a quasi-definite linear system with a constant coefficient matrix and a projection onto convex sets. The low per-iteration computational cost makes the method particularly efficient for large problems, e.g. semidefinite programs that arise in portfolio optimisation, graph theory, and robust control. Moreover, the solver uses chordal decomposition techniques and a new clique merging algorithm to effectively exploit sparsity in large, structured semidefinite programs. Numerical comparisons with other state-of-the-art solvers for a variety of benchmark problems show the effectiveness of our approach. Our Julia implementation is open source, designed to be extended and customised by the user, and is integrated into the Julia optimisation ecosystem.

16 citations


Journal ArticleDOI
TL;DR: This work investigates the viability of using a standard safeguarded multiplier penalty method, without any problem-tailored modifications, to solve the reformulation of cardinality-constrained optimization problems as continuous nonlinear optimization problems with an orthogonality-type constraint.
Abstract: A reformulation of cardinality-constrained optimization problems into continuous nonlinear optimization problems with an orthogonality-type constraint has gained some popularity during the last few years. Due to the special structure of the constraints, the reformulation violates many standard assumptions and therefore is often solved using specialized algorithms. In contrast to this, we investigate the viability of using a standard safeguarded multiplier penalty method without any problem-tailored modifications to solve the reformulated problem. We prove global convergence towards an (essentially strongly) stationary point under a suitable problem-tailored quasinormality constraint qualification. Numerical experiments illustrating the performance of the method in comparison to regularization-based approaches are provided.

16 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the problem of applying partial or total containment to the SIR epidemic model during a given finite time interval in order to minimize the epidemic final size, that is, the cumulative number of cases infected during the complete course of an epidemic.
Abstract: The aim of this article is to understand how to apply partial or total containment to the SIR epidemic model during a given finite time interval in order to minimize the epidemic final size, that is, the cumulative number of cases infected during the complete course of an epidemic. The existence and uniqueness of an optimal strategy are proved for this infinite-horizon problem, and a full characterization of the solution is provided. The best policy consists in applying the maximal allowed social distancing effort until the end of the interval, starting at a date that is not always the closest date and may be found by a simple algorithm. Both theoretical results and numerical simulations demonstrate that it leads to a significant decrease in the epidemic final size. We show that in any case the optimal intervention has to begin before the number of susceptible cases has crossed the herd immunity level, and that its limit is always smaller than this threshold. This problem is also shown to be equivalent to the minimum containment time necessary to stop at a given distance after this threshold value.
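
The following is a hedged numerical illustration of the trade-off discussed above, not the paper's analysis: an SIR model whose transmission rate is reduced by a fixed containment factor on a finite window starting at date t0, with the epidemic final size read off as R at the end of the horizon. All parameter values are arbitrary choices for the sketch.

```python
# SIR model with containment of strength u_max applied on [t0, t0 + duration];
# the final size is the cumulative fraction ever infected, R(infinity).
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1           # transmission and recovery rates (assumed)
u_max, duration = 0.7, 60.0      # containment effort and window length (assumed)

def sir(t, y, t0):
    S, I, R = y
    b = beta * (1.0 - u_max) if t0 <= t <= t0 + duration else beta
    return [-b * S * I, b * S * I - gamma * I, gamma * I]

def final_size(t0, y0=(0.999, 0.001, 0.0), horizon=2000.0):
    sol = solve_ivp(sir, (0.0, horizon), y0, args=(t0,),
                    max_step=1.0, rtol=1e-8, atol=1e-10)
    return sol.y[2, -1]

for t0 in (0.0, 20.0, 40.0, 80.0):
    print(f"containment starting at t0 = {t0:5.1f}: final size {final_size(t0):.3f}")
```

Varying t0 in this toy model reproduces the qualitative message of the paper: starting too early or too late leaves a larger final size than starting shortly before the susceptible fraction crosses the herd immunity level.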

Journal ArticleDOI
TL;DR: In this paper, the authors studied the split common fixed point problem for Bregman relatively nonexpansive operators and the split feasibility problem with multiple output sets in real reflexive Banach spaces.
Abstract: We study the split common fixed point problem for Bregman relatively nonexpansive operators and the split feasibility problem with multiple output sets in real reflexive Banach spaces. Using Bregman distances, we propose several new cyclic projection algorithms for solving these problems.

Journal ArticleDOI
TL;DR: This paper generalizes the projected Jacobi and the projected Gauss–Seidel methods to vertical linear complementarity problems (VLCPs) characterized by matrices with positive diagonal entries and proves the convergence of the proposed procedures when the matrices of the problem satisfy some assumptions of strict or irreducible diagonal dominance.
Abstract: In this paper, we generalize the projected Jacobi and the projected Gauss–Seidel methods to vertical linear complementarity problems (VLCPs) characterized by matrices with positive diagonal entries. First, we formulate the methods and show that the subproblems that must be solved at each iteration have an explicit solution, which is easy to compute. Then, we prove the convergence of the proposed procedures when the matrices of the problem satisfy some assumptions of strict or irreducible diagonal dominance. In this context, for simplicity, we first analyze the convergence in the special case of VLCPs of dimension $$2n\times n$$ , and we then generalize the results to VLCPs of an arbitrary dimension $$\ell n\times n$$ . Finally, we provide several numerical experiments (involving both full and sparse matrices) that show the effectiveness of the proposed approaches. In this context, our methods are compared with existing solution methods for VLCPs. A parallel implementation of the projected Jacobi method in CUDA is also presented and analyzed.
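
As a point of reference, the sketch below implements the projected Jacobi iteration for a standard linear complementarity problem, i.e., the one-block special case ($$\ell =1$$) of the vertical problem treated in the paper; the explicit componentwise update and the diagonal-dominance assumption mirror the ones described above, while the test matrix is an arbitrary choice.

```python
# Projected Jacobi iteration for the standard LCP(q, M):
#   find x >= 0 with Mx + q >= 0 and x'(Mx + q) = 0,
# using the explicit update x_{k+1} = max(0, x_k - D^{-1}(M x_k + q)),
# D = diag(M). Convergence holds, e.g., for matrices with positive diagonal
# that are strictly diagonally dominant, as in the assumptions above.
import numpy as np

def projected_jacobi_lcp(M, q, iters=300):
    x = np.zeros_like(q)
    d = np.diag(M)                          # positive diagonal assumed
    for _ in range(iters):
        x = np.maximum(0.0, x - (M @ x + q) / d)
    return x

M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])          # strictly diagonally dominant
q = np.array([-1.0, 2.0, -3.0])
x = projected_jacobi_lcp(M, q)
print(x)
print(M @ x + q)                            # x >= 0, Mx + q >= 0, x*(Mx+q) ~ 0
```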

Journal ArticleDOI
TL;DR: In this paper, a numerical method is developed for solving a class of delay fractional optimal control problems involving nonlinear time-delay systems and subject to state inequality constraints, where the fractional derivatives in this class of problems are described in the sense of Caputo, and they can be of different orders.
Abstract: In this paper, a numerical method is developed for solving a class of delay fractional optimal control problems involving nonlinear time-delay systems and subject to state inequality constraints. The fractional derivatives in this class of problems are described in the sense of Caputo, and they can be of different orders. First, we propose a numerical integration scheme for the fractional time-delay system and prove that the convergence rate of the numerical solution to the exact one is of second order based on Taylor expansion and linear interpolation. This gives rise to a discrete-time optimal control problem. Then, we derive the gradient formulas of the cost and constraint functions with respect to the decision variables and present a gradient computation procedure. On this basis, a gradient-based optimization algorithm is developed to solve the resulting discrete-time optimal control problem. Finally, several example problems are solved to demonstrate the effectiveness of the developed solution approach.

Journal ArticleDOI
TL;DR: The aim is to assess the quality of the computed solutions in terms of solution reliability, and to analyze the trade-off between the risk-averseness of the decision maker and the transportation cost.
Abstract: We address the problem of delivering parcels in an urban area, within a given time horizon, by conventional vehicles, i.e., trucks, equipped with drones. Both the trucks and the drones perform deliveries, and the drones are carried by the trucks. We focus on the energy consumption of the drones, which we assume to be influenced by atmospheric events. Specifically, we manage the delivery process in such a way as to avoid energy disruption under adverse weather conditions. We address the problem within the framework of robust optimization, thus preventing energy disruption in the worst case. We consider several polytopes to model the uncertain energy consumption, and we propose a decomposition approach based on Benders’ combinatorial cuts. A computational study is carried out on benchmark instances. The aim is to assess the quality of the computed solutions in terms of solution reliability, and to analyze the trade-off between the risk-averseness of the decision maker and the transportation cost.

Journal ArticleDOI
TL;DR: This paper proves that the hybrid method has a global convergence property under the strong Wolfe conditions and that the Hager–Zhang-type method has the sufficient descent property regardless of whether a line search is used.
Abstract: This paper considers sufficient descent Riemannian conjugate gradient methods with line search algorithms. We propose two kinds of sufficient descent nonlinear conjugate gradient methods and prove that these methods satisfy the sufficient descent condition on Riemannian manifolds. One is a hybrid method combining a Fletcher–Reeves-type method with a Polak–Ribiere–Polyak-type method, and the other is a Hager–Zhang-type method, both of which are generalizations of those used in Euclidean space. Moreover, we prove that the hybrid method has a global convergence property under the strong Wolfe conditions and that the Hager–Zhang-type method has the sufficient descent property regardless of whether a line search is used. Further, we review two kinds of line search algorithms on Riemannian manifolds and numerically compare our generalized methods by solving several Riemannian optimization problems. The results show that the performance of the proposed hybrid methods greatly depends on the type of line search used. Meanwhile, the Hager–Zhang-type method has the fast convergence property regardless of the type of line search used.
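
To give a concrete feel for a Riemannian conjugate gradient iteration (though not the hybrid or Hager–Zhang variants with Wolfe searches analyzed in the paper), the sketch below runs a Fletcher–Reeves-type method on the unit sphere for the Rayleigh quotient, using a retraction by normalization, vector transport by projection, and a plain backtracking Armijo search; all of these concrete choices are assumptions for the example.

```python
# Fletcher-Reeves-type Riemannian CG on the unit sphere, minimizing the
# Rayleigh quotient f(x) = x'Ax (whose minimizer is an eigenvector of the
# smallest eigenvalue). Backtracking Armijo search instead of a Wolfe search.
import numpy as np

def sphere_cg(A, x0, iters=100, tol=1e-10):
    proj = lambda x, v: v - (x @ v) * x              # tangent-space projection
    retract = lambda x, v: (x + v) / np.linalg.norm(x + v)
    f = lambda x: x @ A @ x
    grad = lambda x: proj(x, 2.0 * A @ x)            # Riemannian gradient

    x = x0 / np.linalg.norm(x0)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0.0:                             # safeguard: restart with -g
            d = -g
        t = 1.0                                      # backtracking Armijo search
        while f(retract(x, t * d)) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = retract(x, t * d)
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)             # Fletcher-Reeves coefficient
        d = -g_new + beta * proj(x_new, d)           # transport d by projection
        x, g = x_new, g_new
    return x

A = np.diag([1.0, 2.0, 5.0])
x = sphere_cg(A, np.array([1.0, 1.0, 1.0]))
print(x, x @ A @ x)      # approaches +/- e_1 and the smallest eigenvalue 1.0
```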

Journal ArticleDOI
TL;DR: It is proved that accumulation points of the sequence generated by the diagonal steepest descent method are Pareto-critical points under standard assumptions.
Abstract: In this paper, we propose two methods for solving unconstrained multiobjective optimization problems. First, we present a diagonal steepest descent method, in which, at each iteration, a common diagonal matrix is used to approximate the Hessian of every objective function. This method works directly with the objective functions, without using any kind of a priori chosen parameters. It is proved that accumulation points of the sequence generated by the method are Pareto-critical points under standard assumptions. Based on this approach and on the Nesterov step strategy, an improved version of the method is proposed and its convergence rate is analyzed. Finally, computational experiments are presented in order to analyze the performance of the proposed methods.
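
For reference, the classical steepest descent direction for two objectives (the Fliege–Svaiter subproblem, without the diagonal Hessian approximation or the Nesterov acceleration proposed in the paper) admits a closed form, sketched below on an assumed toy bi-objective problem.

```python
# Common descent direction for two objectives: solve
#   min_d  max_i grad f_i(x)' d + 0.5 * ||d||^2,
# whose solution is d = -(lam*g1 + (1-lam)*g2), with lam in [0, 1] minimizing
# ||lam*g1 + (1-lam)*g2||; for two objectives the minimizer has a closed form.
import numpy as np

def common_descent_direction(g1, g2):
    diff = g1 - g2
    denom = diff @ diff
    lam = 1.0 if denom == 0.0 else np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return -(lam * g1 + (1.0 - lam) * g2)

# Toy bi-objective problem: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2,
# whose Pareto set is the segment [a, b].
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = np.array([2.0, 2.0])
for _ in range(200):
    d = common_descent_direction(2.0 * (x - a), 2.0 * (x - b))
    if np.linalg.norm(d) < 1e-8:        # (approximately) Pareto-critical
        break
    x = x + 0.25 * d                    # fixed step for simplicity
print(x)                                # a point of the Pareto set [a, b]
```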

Journal ArticleDOI
TL;DR: In this article, an efficient descent method for unconstrained, locally Lipschitz multiobjective optimization problems is presented, which is realized by combining a theoretical result regarding the computation of descent directions for nonsmooth multiobjective optimization problems with a practical method to approximate the subdifferentials of the objective functions.
Abstract: We present an efficient descent method for unconstrained, locally Lipschitz multiobjective optimization problems. The method is realized by combining a theoretical result regarding the computation of descent directions for nonsmooth multiobjective optimization problems with a practical method to approximate the subdifferentials of the objective functions. We show convergence to points which satisfy a necessary condition for Pareto optimality. Using a set of test problems, we compare our method with the multiobjective proximal bundle method by Makela. The results indicate that our method is competitive while being easier to implement. Although the number of objective function evaluations is larger, the overall number of subgradient evaluations is smaller. Our method can be combined with a subdivision algorithm to compute entire Pareto sets of nonsmooth problems. Finally, we demonstrate how our method can be used for solving sparse optimization problems, which are present in many real-life applications.

Journal ArticleDOI
TL;DR: The unique regular solution of this differential game, which provides a semipermeable Barrier surface, is synthesized and verified in this paper, and it is proven that the obtained strategies provide the saddle-point solution of the game.
Abstract: In the Target–Attacker–Defender differential game, an Attacker missile strives to capture a Target aircraft. The Target tries to escape the Attacker and is aided by a Defender missile which aims at intercepting the Attacker, before the latter manages to close in on the Target. The conflict between these intelligent adversaries is naturally modeled as a zero-sum differential game. The Game of Degree when the Attacker is able to win the Target–Attacker–Defender differential game has not been fully solved, and it is addressed in this paper. Previous attempts at designing the players’ strategies have not been proven to be optimal in the differential game sense. In this paper, the optimal strategies of the Game of Degree in the Attacker’s winning region of the state space are synthesized. Also, the value function is obtained, and it is shown that it is continuously differentiable, and it is the solution of the Hamilton–Jacobi–Isaacs equation. The obtained state feedback strategies are compared to recent results addressing this differential game. It is shown that the correct solution of the Target–Attacker–Defender differential game that provides a semipermeable Barrier surface is synthesized and verified in this paper.

Journal ArticleDOI
TL;DR: In this paper, a nonlinear extension operator is introduced, based on the method of mappings, to link a boundary control to domain deformations, ensuring admissibility of the resulting shapes and handling large deformations while maintaining a high level of mesh quality.
Abstract: In this article, we propose a shape optimization algorithm which is able to handle large deformations while maintaining a high level of mesh quality. Based on the method of mappings, we introduce a nonlinear extension operator, which links a boundary control to domain deformations, ensuring admissibility of resulting shapes. The major focus is on comparisons between well-established approaches involving linear-elliptic operators for the extension and the effect of additional nonlinear advection on the set of reachable shapes. It is moreover discussed how the computational complexity of the proposed algorithm can be reduced. The benefit of the nonlinearity in the extension operator is substantiated by several numerical test cases of stationary, incompressible Navier–Stokes flows in 2d and 3d.

Journal ArticleDOI
TL;DR: A general nonconvex Burer–Monteiro formulation for low-rank minimization problems uses geometries induced by quartic kernels on matrix spaces, and a novel family of Gram quartic kernels is introduced that considerably improves numerical performance.
Abstract: We study a general nonconvex formulation for low-rank minimization problems. We use recent results on non-Euclidean first-order methods to provide efficient and scalable algorithms. Our approach uses the geometry induced by the Bregman divergence of well-chosen kernel functions; for unconstrained problems, we introduce a novel family of Gram quartic kernels that improve numerical performance. Numerical experiments on Euclidean distance matrix completion and symmetric nonnegative matrix factorization show that our algorithms scale well and reach state-of-the-art performance when compared to specialized methods.

Journal ArticleDOI
TL;DR: This paper proposes a smoothing Newton algorithm with nonmonotone line search to solve the weighted complementarity problem and shows that any accumulation point of the iterates generated by this algorithm, if it exists, is a solution of the considered WCP.
Abstract: The weighted complementarity problem (denoted by WCP) significantly extends the general complementarity problem and can be used for modeling a larger class of problems from science and engineering. In this paper, by introducing a one-parametric class of smoothing functions which includes the weight vector, we propose a smoothing Newton algorithm with nonmonotone line search to solve WCP. We show that any accumulation point of the iterates generated by this algorithm, if it exists, is a solution of the considered WCP. Moreover, when the solution set of WCP is nonempty, under assumptions weaker than the Jacobian nonsingularity assumption, we prove that the iteration sequence generated by our algorithm is bounded and converges to one solution of WCP with a local superlinear or quadratic convergence rate. Promising numerical results are also reported.

Journal ArticleDOI
TL;DR: In this article, the authors provide an overview on optimality conditions and exact penalization for the mathematical program with switching constraints (MPSC), which is a new class of optimization problems with important applications.
Abstract: In this paper, we give an overview on optimality conditions and exact penalization for the mathematical program with switching constraints (MPSC). MPSC is a new class of optimization problems with important applications. It is well known that if MPSC is treated as a standard nonlinear program, some of the usual constraint qualifications may fail. To deal with this issue, one could reformulate it as a mathematical program with disjunctive constraints (MPDC). In this paper, we first survey recent results on constraint qualifications and optimality conditions for MPDC, then apply them to MPSC. Moreover, we provide two types of sufficient conditions for the local error bound and exact penalty results for MPSC. One comes from the directional quasi-normality for MPDC, and the other is obtained via the local decomposition approach.

Journal ArticleDOI
TL;DR: In this article, a universal lower bound on the tracking iterate error of online gradient descent was established for smooth time-varying optimization problems with constant step-size, momentum, and extrapolation-length.
Abstract: This paper investigates online algorithms for smooth time-varying optimization problems, focusing first on methods with constant step-size, momentum, and extrapolation-length. Assuming strong convexity, precise results for the tracking iterate error (the limit supremum of the norm of the difference between the optimal solution and the iterates) for online gradient descent are derived. The paper then considers a general first-order framework, where a universal lower bound on the tracking iterate error is established. Furthermore, a method using “long-steps” is proposed and shown to achieve the lower bound up to a fixed constant. This method is then compared with online gradient descent for specific examples. Finally, the paper analyzes the effect of regularization when the cost is not strongly convex. With regularization, it is possible to achieve a non-regret bound. The paper ends by testing the accelerated and regularized methods on synthetic time-varying least-squares and logistic regression problems, respectively.
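
A minimal sketch of the baseline analyzed first in the paper, online gradient descent with a constant step size tracking a drifting optimum, is given below; the synthetic time-varying least-squares data, drift model and step size are assumptions for the illustration, and the momentum, extrapolation and "long-step" variants are not reproduced.

```python
# Online gradient descent x_{k+1} = x_k - alpha * grad f_k(x_k) tracking the
# drifting minimizer of f_k(x) = 0.5 * ||A x - b_k||^2 (strongly convex since
# A has full column rank). The tracking iterate error ||x_k - x_k*|| settles
# in a neighbourhood whose size grows with the drift.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
alpha = 1.0 / np.linalg.norm(A, 2) ** 2      # constant step <= 1/L, L = ||A||^2

x = np.zeros(5)
x_star = np.zeros(5)
errors = []
for k in range(300):
    x_star = x_star + 0.01 * rng.standard_normal(5)   # slowly drifting optimum
    b = A @ x_star
    x = x - alpha * (A.T @ (A @ x - b))               # online gradient step
    errors.append(np.linalg.norm(x - x_star))         # tracking iterate error

print(max(errors[-50:]))    # rough estimate of the limit supremum of the error
```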

Journal ArticleDOI
TL;DR: The complexity of the forward–backward splitting method with Beck–Teboulle’s line search is studied for convex optimization problems in which the objective function can be split into the sum of a differentiable function and a nonsmooth function.
Abstract: In this paper, we study the complexity of the forward–backward splitting method with Beck–Teboulle’s line search for solving convex optimization problems, where the objective function can be split into the sum of a differentiable function and a nonsmooth function. We show that the method converges weakly to an optimal solution in Hilbert spaces, under mild standing assumptions without the global Lipschitz continuity of the gradient of the differentiable function involved. Our standing assumptions are weaker than the corresponding conditions in the paper of Salzo (SIAM J Optim 27:2153–2181, 2017). The conventional complexity of sublinear convergence for the functional value is also obtained under the local Lipschitz continuity of the gradient of the differentiable function. Our main results are about the linear convergence of this method (in the quotient type), in terms of both the function value sequence and the iterative sequence, under only the quadratic growth condition. Our proof technique works directly from the quadratic growth condition and some properties of the forward–backward splitting method, without using error bounds or the Kurdyka–Łojasiewicz inequality as in other publications in this direction.
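
As a concrete instance of the forward–backward iteration with the Beck–Teboulle backtracking rule, the sketch below solves an $$\ell _1$$-regularized least-squares toy problem (smooth quadratic plus nonsmooth $$\ell _1$$ term); the problem data, regularization weight and backtracking factor are assumptions made for the example.

```python
# Forward-backward splitting with Beck-Teboulle backtracking for
#   min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# The step t is halved until the sufficient-decrease test
#   f(z) <= f(x) + <grad f(x), z - x> + ||z - x||^2 / (2t)
# holds at z = prox_{t*g}(x - t*grad f(x)); no knowledge of the Lipschitz
# constant of grad f is required.
import numpy as np

def soft_threshold(v, tau):                       # prox of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fbs_backtracking(A, b, lam=0.1, iters=200, t=1.0, eta=0.5):
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    gradf = lambda x: A.T @ (A @ x - b)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = gradf(x)
        while True:                               # Beck-Teboulle line search
            z = soft_threshold(x - t * g, t * lam)
            if f(z) <= f(x) + g @ (z - x) + np.sum((z - x) ** 2) / (2.0 * t):
                break
            t *= eta
        x = z
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10) * (rng.random(10) < 0.3)
b = A @ x_true + 0.01 * rng.standard_normal(30)
print(fbs_backtracking(A, b))                     # sparse approximate recovery
```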

Journal ArticleDOI
TL;DR: In this paper, the authors presented the relationship between robust solutions of the uncertain optimization problem and those of its corresponding scalar optimization problem, and deduced sufficient optimality conditions for robust solutions under assumptions of generalized convexity.
Abstract: This paper deals with robust $$\varepsilon $$ -quasi Pareto efficient solutions of an uncertain semi-infinite multiobjective optimization problem. By using robust optimization and a modified $$\varepsilon $$ -constraint scalarization methodology, we first present the relationship between robust $$\varepsilon $$ -quasi solutions of the uncertain optimization problem and that of its corresponding scalar optimization problem. Then, we obtain necessary optimality conditions for robust $$\varepsilon $$ -quasi Pareto efficient solutions of the uncertain optimization problem in terms of a new robust-type subdifferential constraint qualification. We also deduce sufficient optimality conditions for robust $$\varepsilon $$ -quasi Pareto efficient solutions of the uncertain optimization problem under assumptions of generalized convexity. Besides, we introduce a Mixed-type robust $$\varepsilon $$ -multiobjective dual problem (including Wolfe type and Mond-Weir type dual problems as special cases) of the uncertain optimization problem, and explore robust $$\varepsilon $$ -quasi weak, $$\varepsilon $$ -quasi strong, and $$\varepsilon $$ -quasi converse duality properties. Furthermore, we introduce an $$\varepsilon $$ -quasi saddle point for the uncertain optimization problem and investigate the relationships between the $$\varepsilon $$ -quasi saddle point and the robust $$\varepsilon $$ -quasi Pareto efficient solution for the uncertain optimization problem.

Journal ArticleDOI
TL;DR: In this article, the authors present an indirect approach to trajectory optimization based on the use of the maximum principle and the continuation method, which simplifies the optimization of limited power trajectories with a fixed angular distance and free transfer duration.
Abstract: Optimization of low-thrust trajectories is necessary in the design of space missions using electric propulsion systems. We consider the problem of limited power trajectory optimization, which is a well-known case of the low-thrust optimization problem. In the article, we present an indirect approach to trajectory optimization based on the use of the maximum principle and the continuation method. We introduce the concept of auxiliary longitude and use it as a new independent variable instead of time. The use of equations of motion in the equinoctial elements and a new independent variable allowed us to simplify the optimization of limited power trajectories with a fixed angular distance and free transfer duration. The article presents a new form of necessary optimality conditions for this problem and describes an efficient new numerical method to solve the limited power trajectory optimization problem. We show the existence of several trajectories with a fixed transfer duration and free angular distance that satisfy the necessary optimality conditions. Using numerical examples, we confirm the existence of the limiting values of the characteristic velocity and the product of the cost function value and the transfer duration as the angular distance increases. The high computational performance of the developed technique makes it possible to carry out and present an analysis of the angular flight range and initial true longitude impact on the cost function, transfer duration, and characteristic velocity.

Journal ArticleDOI
TL;DR: In this article, a new theoretical analysis of local superlinear convergence of classical quasi-Newton methods from the convex Broyden class was presented, and it was shown that the corresponding rate of the Broyden-Fletcher-Goldfarb-Shanno method depends only on the product of the dimensionality of the problem and the logarithm of its condition number.
Abstract: We present a new theoretical analysis of local superlinear convergence of classical quasi-Newton methods from the convex Broyden class. As a result, we obtain a significant improvement in the currently known estimates of the convergence rates for these methods. In particular, we show that the corresponding rate of the Broyden–Fletcher–Goldfarb–Shanno method depends only on the product of the dimensionality of the problem and the logarithm of its condition number.
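
For concreteness, the Broyden–Fletcher–Goldfarb–Shanno update mentioned above acts on an inverse Hessian approximation via $$H_{k+1}=(I-\rho _k s_k y_k^T)H_k(I-\rho _k y_k s_k^T)+\rho _k s_k s_k^T$$ with $$\rho _k=1/(y_k^T s_k)$$; the sketch below applies it to a strongly convex quadratic with an exact line search, with all problem data chosen arbitrarily for the illustration.

```python
# BFGS (a member of the convex Broyden class) on the strongly convex quadratic
# 0.5*x'Ax - b'x, with an exact line search along the quasi-Newton direction.
import numpy as np

def bfgs_quadratic(A, b, x0, iters=50, tol=1e-10):
    n = len(x0)
    H = np.eye(n)                                 # inverse Hessian approximation
    x = x0.astype(float)
    g = A @ x - b                                 # gradient of the quadratic
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                                # quasi-Newton direction
        t = -(g @ d) / (d @ A @ d)                # exact line search step
        s = t * d
        x_new = x + s
        g_new = A @ x_new - b
        y = g_new - g
        rho = 1.0 / (y @ s)
        V = np.eye(n) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)    # BFGS inverse-Hessian update
        x, g = x_new, g_new
    return x

A = np.diag([1.0, 10.0, 100.0])                   # condition number 100
b = np.ones(3)
print(bfgs_quadratic(A, b, np.zeros(3)))
print(np.linalg.solve(A, b))                      # exact minimizer for comparison
```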

Journal ArticleDOI
TL;DR: In this paper, the convergence rate of the alternating direction method of multipliers for solving nonconvex separable optimization problems is studied; under an error bound condition, local linear convergence is established, and the analysis is illustrated on nonconvex quadratic programming problems with a simplex constraint.
Abstract: In this paper, we consider the convergence rate of the alternating direction method of multipliers for solving nonconvex separable optimization problems. Based on the error bound condition, we prove that the sequence generated by the alternating direction method of multipliers converges locally to a critical point of the nonconvex optimization problem at a linear convergence rate, and that the corresponding sequence of augmented Lagrangian function values converges at a linear convergence rate. We illustrate the analysis by applying the alternating direction method of multipliers to nonconvex quadratic programming problems with a simplex constraint, and by comparing it with some state-of-the-art algorithms: the proximal gradient algorithm, the proximal gradient algorithm with extrapolation, and the fast iterative shrinkage–thresholding algorithm.
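
To make the splitting concrete, here is a plain two-block ADMM applied to a small quadratic program over the unit simplex, the kind of problem used for illustration above; a convex quadratic is chosen so the x-subproblem is a simple linear solve, and the penalty parameter and data are assumptions for the sketch.

```python
# Two-block ADMM for  min 0.5*x'Qx + c'x  s.t.  x in the unit simplex,
# with the splitting f(x) = 0.5*x'Qx + c'x, g(z) = indicator of the simplex,
# and the consensus constraint x = z.
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {x >= 0, sum(x) = 1} (sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[idx] - 1.0) / (idx + 1.0)
    return np.maximum(v - theta, 0.0)

def admm_simplex_qp(Q, c, rho=1.0, iters=500):
    n = len(c)
    x = np.full(n, 1.0 / n)
    z = x.copy()
    y = np.zeros(n)
    K = Q + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(K, rho * z - y - c)   # x-update: quadratic subproblem
        z = project_simplex(x + y / rho)          # z-update: projection
        y = y + rho * (x - z)                     # dual (multiplier) update
    return z

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])
c = np.array([-1.0, 0.0])
print(admm_simplex_qp(Q, c))                      # approx. (0.75, 0.25) for this data
```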

Journal ArticleDOI
TL;DR: An optimal dosing algorithm (OptiDose) is developed and validated that computes the optimal individualized dosing regimen for pharmacokinetic–pharmacodynamic models in substantially different scenarios with various routes of administration by solving an optimal control problem.
Abstract: Providing the optimal dosing strategy of a drug for an individual patient is an important task in pharmaceutical sciences and daily clinical application. We developed and validated an optimal dosing algorithm (OptiDose) that computes the optimal individualized dosing regimen for pharmacokinetic–pharmacodynamic models in substantially different scenarios with various routes of administration by solving an optimal control problem. The aim is to compute a control that brings the underlying system as close as possible to a desired reference function by minimizing a cost functional. In pharmacokinetic–pharmacodynamic modeling, the controls are the administered doses and the reference function can be the disease progression. Drug administration at certain time points provides a finite number of discrete controls, the drug doses, determining the drug concentration and its effect on the disease progression. Consequently, rewriting the cost functional gives a finite-dimensional optimal control problem depending only on the doses. Adjoint techniques allow the gradient of the cost functional to be computed efficiently, which makes it possible to solve the optimal control problem with robust algorithms such as quasi-Newton methods from finite-dimensional optimization. OptiDose is applied to three relevant but substantially different pharmacokinetic–pharmacodynamic examples.
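
A tiny finite-dimensional analogue of this idea (not the OptiDose implementation itself) is sketched below: a one-compartment pharmacokinetic model with first-order elimination receives a few bolus doses, the cost is the squared deviation of the concentration from a desired target level, and the doses are optimized with a quasi-Newton method; gradients are obtained here by finite differences rather than by the adjoint technique, and every model constant is an assumption for the illustration.

```python
# Optimal dosing toy problem: choose bolus doses d_j at fixed times tau_j so
# that the concentration of a one-compartment PK model stays close to a
# target level; solved as a finite-dimensional problem with L-BFGS-B.
import numpy as np
from scipy.optimize import minimize

ke, V = 0.2, 10.0                        # elimination rate and volume (assumed)
dose_times = np.array([0.0, 12.0, 24.0]) # dosing time points (assumed)
t_grid = np.linspace(0.0, 36.0, 200)
target = 1.5                             # desired concentration level (assumed)

def concentration(doses, t):
    c = np.zeros_like(t)
    for d, tau in zip(doses, dose_times):
        c += np.where(t >= tau, (d / V) * np.exp(-ke * (t - tau)), 0.0)
    return c

def cost(doses):                         # squared deviation from the target
    return np.sum((concentration(doses, t_grid) - target) ** 2)

res = minimize(cost, x0=np.full(3, 10.0), method="L-BFGS-B",
               bounds=[(0.0, 50.0)] * 3)
print(res.x, res.fun)                    # optimized doses and residual cost
```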