
Showing papers on "Convergence (routing) published in 2014"


Journal ArticleDOI
TL;DR: This paper establishes the linear convergence rate of decentralized consensus ADMM with strongly convex local objective functions, given explicitly in terms of the network topology, the properties of the local objective functions, and the algorithm parameter.
Abstract: In decentralized consensus optimization, a connected network of agents collaboratively minimizes the sum of their local objective functions over a common decision variable, with information exchange restricted to neighbors. To this end, one can first obtain a problem reformulation and then apply the alternating direction method of multipliers (ADMM). The method applies iterative computation at the individual agents and information exchange between the neighbors. This approach has been observed to converge quickly and is deemed powerful. This paper establishes its linear convergence rate for the decentralized consensus optimization problem with strongly convex local objective functions. The theoretical convergence rate is explicitly given in terms of the network topology, the properties of local objective functions, and the algorithm parameter. This result is not only a performance guarantee but also a guideline toward accelerating the ADMM convergence.
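A simplified decentralized consensus ADMM iteration of the kind analyzed here can be sketched as follows. The ring topology, penalty parameter `c`, and quadratic local objectives f_i(x) = (x - b_i)^2/2 are illustrative assumptions, not details taken from the paper:

```python
# Decentralized consensus ADMM sketch (quadratic local objectives assumed).
# Each agent i holds f_i(x) = 0.5 * (x - b_i)^2; the network-wide minimizer
# of sum_i f_i is the average of the b_i, so all agents should agree on it.

def decentralized_admm(neighbors, b, c=0.5, iters=1000):
    n = len(b)
    deg = [len(neighbors[i]) for i in range(n)]
    x = [0.0] * n          # local primal variables
    alpha = [0.0] * n      # local dual variables
    for _ in range(iters):
        # primal update: solve, for each agent i,
        #   (x - b_i) + alpha_i + 2*c*d_i*x = c*(d_i*x_i + sum_{j in N_i} x_j)
        x_new = [
            (b[i] - alpha[i] + c * (deg[i] * x[i] + sum(x[j] for j in neighbors[i])))
            / (1.0 + 2.0 * c * deg[i])
            for i in range(n)
        ]
        # dual update using the freshly computed primal variables
        alpha = [
            alpha[i] + c * (deg[i] * x_new[i] - sum(x_new[j] for j in neighbors[i]))
            for i in range(n)
        ]
        x = x_new
    return x

# 4-agent ring; all agents approach mean(b) = 2.5
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(decentralized_admm(ring, [1.0, 2.0, 3.0, 4.0]))
```

Since the dual increments cancel in pairs over each edge, the duals always sum to zero, which forces the consensus value to be the minimizer of the sum of the local objectives.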

836 citations


Journal ArticleDOI
TL;DR: It is shown that the iterative performance index function is nonincreasingly convergent to the optimal solution of the Hamilton-Jacobi-Bellman equation, and it is proven that any of the iterative control laws can stabilize the nonlinear systems.
Abstract: This paper is concerned with a new discrete-time policy iteration adaptive dynamic programming (ADP) method for solving the infinite horizon optimal control problem of nonlinear systems. The idea is to use an iterative ADP technique to obtain the iterative control law, which optimizes the iterative performance index function. The main contribution of this paper is to analyze the convergence and stability properties of policy iteration method for discrete-time nonlinear systems for the first time. It shows that the iterative performance index function is nonincreasingly convergent to the optimal solution of the Hamilton-Jacobi-Bellman equation. It is also proven that any of the iterative control laws can stabilize the nonlinear systems. Neural networks are used to approximate the performance index function and compute the optimal control law, respectively, for facilitating the implementation of the iterative ADP algorithm, where the convergence of the weight matrices is analyzed. Finally, the numerical results and analysis are presented to illustrate the performance of the developed method.

535 citations


Proceedings Article
21 Jun 2014
TL;DR: In this paper, a Newton-type method for distributed optimization is presented, which is particularly well suited for stochastic optimization and learning problems, with a linear rate of convergence for quadratic objectives.
Abstract: We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.

473 citations


Journal ArticleDOI
Zongyu Zuo1, Lin Tie1
TL;DR: It is shown that the settling time of the proposed new class of finite-time consensus protocols is upper bounded for arbitrary initial conditions, which makes it possible, for network consensus problems, to design and estimate the convergence time offline for a given undirected information flow and number of agents.
Abstract: This paper is devoted to investigating the finite-time consensus problem for a multi-agent system in networks with undirected topology. A new class of global continuous time-invariant consensus protocols is constructed for each single-integrator agent dynamics with the aid of Lyapunov functions. In particular, it is shown that the settling time of the proposed new class of finite-time consensus protocols is upper bounded for arbitrary initial conditions. This makes it possible, for network consensus problems, to design and estimate the convergence time offline for a given undirected information flow and number of agents. Finally, a numerical simulation example is presented as a proof of concept.
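A protocol of this general finite-time type can be simulated with a forward-Euler sketch. The sign-power coupling, exponent, step size, and ring topology below are illustrative choices, not the paper's specific protocol:

```python
# Euler simulation of a finite-time-style consensus protocol
#   xdot_i = sum_{j in N_i} sign(x_j - x_i) * |x_j - x_i|**alpha,  0 < alpha < 1
# (exponent, step size, and ring topology are illustrative assumptions).
import math

def simulate(x0, neighbors, alpha=0.5, h=0.01, steps=2000):
    x = list(x0)
    for _ in range(steps):
        dx = [
            sum(math.copysign(abs(x[j] - x[i]) ** alpha, x[j] - x[i])
                for j in neighbors[i])
            for i in range(len(x))
        ]
        x = [x[i] + h * dx[i] for i in range(len(x))]
    return x

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = simulate([1.0, 2.0, 3.0, 4.0], ring)
print(max(x) - min(x))   # spread shrinks toward 0
print(sum(x) / len(x))   # the average is preserved by the antisymmetric coupling
```

Because the coupling between each pair of agents is exactly antisymmetric, the state average is an invariant, so the agents agree on the average of the initial conditions.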

393 citations


Journal ArticleDOI
TL;DR: A procedure is presented for estimating the numerical uncertainty of any integral or local flow quantity resulting from a fluid flow computation; the procedure requires solutions on systematically refined grids, with least-squares fits to power-series expansions to handle noisy data.

369 citations


Journal ArticleDOI
TL;DR: The algorithm iPiano, which combines forward-backward splitting with an inertial force, solves minimization problems composed of a differentiable and a convex function; the analysis yields global convergence of the function values and the arguments.
Abstract: In this paper we study an algorithm for solving a minimization problem composed of a differentiable (possibly nonconvex) and a convex (possibly nondifferentiable) function. The algorithm iPiano combines forward-backward splitting with an inertial force. It can be seen as a nonsmooth split version of the Heavy-ball method from Polyak. A rigorous analysis of the algorithm for the proposed class of problems yields global convergence of the function values and the arguments. This makes the algorithm robust for usage on nonconvex problems. The convergence result is obtained based on the Kurdyka--Łojasiewicz inequality. This is a very weak restriction, which was used to prove convergence for several other gradient methods. First, an abstract convergence theorem for a generic algorithm is proved, and then iPiano is shown to satisfy the requirements of this theorem. Furthermore, a convergence rate is established for the general problem class. We demonstrate iPiano on computer vision problems---image denoising wit...
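The inertial forward-backward step behind iPiano can be sketched on a toy composite problem. The smooth quadratic term, the ℓ1 term, and the step-size choices below are illustrative assumptions, not the paper's experiments:

```python
# iPiano-style inertial forward-backward step:
#   x_{k+1} = prox_{a*g}( x_k - a*grad_f(x_k) + b*(x_k - x_{k-1}) )
# Toy problem: f(x) = 0.5*(x - 3)^2 (smooth), g(x) = |x| (convex, nonsmooth);
# the minimizer of f + g is x* = 2.  Step sizes a, b are illustrative and
# satisfy the condition a < 2*(1 - b)/L with Lipschitz constant L = 1.

def prox_l1(v, t):
    """Proximal map of t*|x|: soft-thresholding."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def ipiano(x0, a=0.9, b=0.4, iters=200):
    x_prev, x = x0, x0
    for _ in range(iters):
        grad = x - 3.0                        # gradient of f
        v = x - a * grad + b * (x - x_prev)   # forward step plus inertial term
        x_prev, x = x, prox_l1(v, a)          # backward (proximal) step
    return x

print(ipiano(0.0))   # approaches the minimizer x* = 2
```

Setting b = 0 recovers plain forward-backward splitting; the inertial term b*(x_k - x_{k-1}) is the Heavy-ball ingredient the abstract refers to.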

354 citations


Journal ArticleDOI
TL;DR: In this article, the authors use second-order cone programming (SOCP) to solve nonconvex optimal control problems with concave state inequality constraints and nonlinear terminal equality constraints.
Abstract: Motivated by aerospace applications, this paper presents a methodology to use second-order cone programming to solve nonconvex optimal control problems. The nonconvexity arises from the presence of concave state inequality constraints and nonlinear terminal equality constraints. The development relies on a solution paradigm in which the concave inequality constraints are approximated by successive linearization. Analysis is performed to establish the guaranteed satisfaction of the original inequality constraints, the existence of the successive solutions, and the equivalence of the solution of the original problem to the converged successive solution. These results lead to a rigorous proof of the convergence of the successive solutions under appropriate conditions as well as nonconservativeness of the converged solution. The nonlinear equality constraints are treated in a two-step procedure in which the constraints are first approximated by first-order expansions, then compensated by second-order correct

248 citations


Journal ArticleDOI
TL;DR: The demonstrated convergence is comparable to or even better than that of the DMRG algorithm, and the proposed algorithms are also efficient for non-SPD systems, for example those arising from the chemical master equation describing the gene regulatory model at the mesoscopic scale.
Abstract: We propose algorithms for the solution of high-dimensional symmetrical positive definite (SPD) linear systems with the matrix and the right-hand side given and the solution sought in a low-rank format. Similarly to density matrix renormalization group (DMRG) algorithms, our methods optimize the components of the tensor product format subsequently. To improve the convergence, we expand the search space by an inexact gradient direction. We prove the geometrical convergence and estimate the convergence rate of the proposed methods utilizing the analysis of the steepest descent algorithm. The complexity of the presented algorithms is linear in the mode size and dimension, and the demonstrated convergence is comparable to or even better than the one of the DMRG algorithm. In the numerical experiment we show that the proposed methods are also efficient for non-SPD systems, for example, those arising from the chemical master equation describing the gene regulatory model at the mesoscopic scale.

241 citations


Journal ArticleDOI
TL;DR: In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution, and so the upper bound on the optimal marking parameters does not depend on the efficiency constant.
Abstract: This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Solely four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution, and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions.

234 citations


Journal ArticleDOI
TL;DR: This paper addresses the model-free nonlinear optimal control problem by introducing the reinforcement learning (RL) technique and developing a data-based approximate policy iteration (API) method that uses real system data rather than a system model.

229 citations


Journal ArticleDOI
TL;DR: A graphical tool that allows one to study, in an easy and compact way, the real dynamics of iterative methods whose iterations depend on one parameter; as an example, the dynamics of the damped Newton's method applied to a cubic polynomial is presented.

Journal ArticleDOI
TL;DR: A new data-based iterative optimal learning control scheme for discrete-time nonlinear systems using iterative adaptive dynamic programming (ADP) approach is established and the developed control scheme is applied to solve a coal gasification optimal tracking control problem.
Abstract: In this paper, we establish a new data-based iterative optimal learning control scheme for discrete-time nonlinear systems using iterative adaptive dynamic programming (ADP) approach and apply the developed control scheme to solve a coal gasification optimal tracking control problem. According to the system data, neural networks (NNs) are used to construct the dynamics of coal gasification process, coal quality and reference control, respectively, where the mathematical model of the system is unnecessary. The approximation errors from neural network construction of the disturbance and the controls are both considered. Via system transformation, the optimal tracking control problem with approximation errors and disturbances is effectively transformed into a two-person zero-sum optimal control problem. A new iterative ADP algorithm is then developed to obtain the optimal control laws for the transformed system. Convergence property is developed to guarantee that the performance index function converges to a finite neighborhood of the optimal performance index function, and the convergence criterion is also obtained. Finally, numerical results are given to illustrate the performance of the present method.

Book
23 Jan 2014
TL;DR: This book presents results on the rate of convergence for functions of bounded variation, convergence for BV/bounded functions on Bézier variants, and further results on the rate of convergence in simultaneous approximation.
Abstract: 1. Preliminaries.- 2. Approximation by Certain Operators.- 3. Complete Asymptotic Expansion.- 4. Linear and Iterative Combinations.- 5. Better Approximation.- 6. Complex Operators in Compact Disks.- 7. Rate of Convergence for Functions of BV.- 8. Convergence for BV/Bounded Functions on Bezier Variants.- 9. Some More Results on Rate of Convergence.- 10. Rate of Convergence in Simultaneous Approximation.- 11. Future Scope and Open Problems.

Journal ArticleDOI
TL;DR: The formulation of time integration algorithms for mechanism analysis problems is discussed, with attention to the treatment of constraints and of the terms associated with finite rotations. It is shown that, in order to time-integrate constrained systems, algorithmic damping at infinite frequency is of utmost importance.

Journal ArticleDOI
TL;DR: The finite-time convergence of a nonlinear but continuous consensus algorithm for multi-agent networks with unknown inherent nonlinear dynamics is analyzed, and it is shown that the proposed nonlinear consensus algorithm can guarantee finite-time convergence if the directed switching interaction graph has a directed spanning tree at each time interval.

Proceedings ArticleDOI
11 Aug 2014
TL;DR: It is shown that, with sufficient damping, the algorithm can be guaranteed to converge, but the amount of damping grows with the peak-to-average ratio of the squared singular values of A, which explains the good performance of AMP methods on i.i.d. matrices.
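The role of damping can be illustrated with a toy fixed-point iteration. The scalar map and damping factor below are illustrative stand-ins, not the AMP equations:

```python
# Damped fixed-point iteration  x <- (1 - delta)*x + delta*F(x).
# F(x) = -1.5*x + 5 has fixed point x* = 2, but the undamped iteration
# diverges because |F'| = 1.5 > 1.  With delta = 0.5 the effective slope is
# (1 - delta) + delta*(-1.5) = -0.25, so the damped iteration converges.

def F(x):
    return -1.5 * x + 5.0

def damped_iteration(x0, delta, iters=100):
    x = x0
    for _ in range(iters):
        x = (1.0 - delta) * x + delta * F(x)
    return x

print(damped_iteration(0.0, 0.5))   # approaches the fixed point x* = 2
```

The trade-off the TL;DR describes is visible here: the worse-conditioned the map (the larger |F'|), the smaller delta must be, and the slower the damped iteration becomes.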

Journal ArticleDOI
TL;DR: An online algorithm to record and forget data is presented, and its effects on the resulting switched closed-loop dynamics are analysed. It is shown that when radial basis function neural networks are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE.
Abstract: In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be gu...

Posted Content
TL;DR: In this paper, a stochastic proximal gradient algorithm for convex optimization problems is proposed, where a convex objective function is given by the sum of a smooth and a possibly non-smooth component.
Abstract: We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems, where a convex objective function is given by the sum of a smooth and a possibly non-smooth component. We consider the iterates convergence and derive $O(1/n)$ non-asymptotic bounds in expectation in the strongly convex case, as well as almost sure convergence results under weaker assumptions. Our approach allows us to avoid averaging and to weaken boundedness assumptions which are often considered in theoretical studies and might not be satisfied in practice.
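A stochastic proximal gradient iteration of this shape can be sketched on a toy problem. The data, regularization weight, and diminishing step schedule below are illustrative assumptions:

```python
# Stochastic proximal gradient sketch for
#   min_x  mean_i[ 0.5*(a_i*x - b_i)^2 ] + lam*|x|
# using  x_{k+1} = prox_{g_k * lam*|.|}( x_k - g_k * grad f_i(x_k) ),
# with one random sample i per step and diminishing steps g_k = c/(k+1).
# (Data, lam, and the step schedule are illustrative choices.)
import random

def soft_threshold(v, t):
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def stochastic_prox_grad(data, lam=0.1, c=0.5, iters=5000, seed=0):
    rng = random.Random(seed)
    x = 0.0
    for k in range(iters):
        a, b = rng.choice(data)
        step = c / (k + 1)
        grad = a * (a * x - b)           # gradient of the sampled smooth loss
        x = soft_threshold(x - step * grad, step * lam)
    return x

data = [(1.0, 2.0), (2.0, 3.0), (1.0, 1.0)]
x = stochastic_prox_grad(data)
print(x)   # hovers near the composite minimizer of the full objective
```

Note the iterate itself is returned with no averaging, in the spirit of the abstract's claim that averaging can be avoided.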

Journal ArticleDOI
Lin Li1, Xin Yao2, Rustam Stolkin2, Maoguo Gong1, Shan He2 
TL;DR: A new soft-thresholding evolutionary multiobjective algorithm (StEMO) is presented, which uses a soft-thresholding technique to incorporate two additional heuristics: one with a greater chance to increase the speed of convergence toward the PF, and another with a higher probability to improve the spread of solutions along the PF, enabling an optimal solution to be found in the knee region.
Abstract: This paper addresses the problem of finding sparse solutions to linear systems. Although this problem involves two competing cost function terms (measurement error and a sparsity-inducing term), previous approaches combine these into a single cost term and solve the problem using conventional numerical optimization methods. In contrast, the main contribution of this paper is to use a multiobjective approach. The paper begins by investigating the sparse reconstruction problem, and presents data to show that knee regions do exist on the Pareto front (PF) for this problem and that optimal solutions can be found in these knee regions. Another contribution of the paper, a new soft-thresholding evolutionary multiobjective algorithm (StEMO), is then presented, which uses a soft-thresholding technique to incorporate two additional heuristics: one with greater chance to increase speed of convergence toward the PF, and another with higher probability to improve the spread of solutions along the PF, enabling an optimal solution to be found in the knee region. Experiments are presented, which show that StEMO significantly outperforms five other well known techniques that are commonly used for sparse reconstruction. Practical applications are also demonstrated to fundamental problems of recovering signals and images from noisy data.
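The soft-thresholding operation that StEMO's heuristics build on can be sketched elementwise (the vector form below is a generic sparsity-inducing operator, assumed rather than taken from the paper):

```python
# Elementwise soft-thresholding, the basic sparsity-inducing operation:
#   S_t(v) = sign(v) * max(|v| - t, 0)
# It shrinks every entry toward zero and zeroes out entries of magnitude <= t,
# which is how sparse candidate solutions are produced.

def soft_threshold(v, t):
    return [(abs(x) - t) * (1 if x >= 0 else -1) if abs(x) > t else 0.0
            for x in v]

print(soft_threshold([3.0, -0.5, 1.0, -2.0], 1.0))   # [2.0, 0.0, 0.0, -1.0]
```

Varying the threshold t traces out different trade-offs between measurement error and sparsity, which is exactly the two-objective tension the multiobjective formulation exposes.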

Journal ArticleDOI
TL;DR: The Douglas-Rachford splitting algorithm is shown to converge strongly to the projection of the starting point onto the intersection of the two sets; if the sum of the two subspaces is closed, the convergence is linear, with rate the cosine of the Friedrichs angle between the subspaces.
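The subspace case can be sketched concretely for two lines through the origin in the plane. The angle and starting point below are illustrative; the shadow sequence contracts linearly at roughly the cosine of the angle between the lines:

```python
# Douglas-Rachford splitting for two lines (1-D subspaces) through the origin
# in R^2.  proj(u, x) projects x onto the line spanned by unit vector u.  The
# shadow sequence P_A(x_k) converges to the projection of the starting point
# onto A ∩ B = {(0,0)}, linearly at rate cos(theta) for angle theta between
# the lines.  Angle and starting point are illustrative choices.
import math

def proj(u, x):
    d = u[0] * x[0] + u[1] * x[1]
    return (d * u[0], d * u[1])

def drs(uA, uB, x, iters=100):
    for _ in range(iters):
        pa = proj(uA, x)
        reflA = (2 * pa[0] - x[0], 2 * pa[1] - x[1])   # reflect across A
        pb = proj(uB, reflA)
        # governing iteration: x <- x + P_B(2 P_A x - x) - P_A x
        x = (x[0] + pb[0] - pa[0], x[1] + pb[1] - pa[1])
    return proj(uA, x)   # shadow point

uA = (1.0, 0.0)                       # the x-axis
t = math.radians(45)
uB = (math.cos(t), math.sin(t))       # a line at 45 degrees
shadow = drs(uA, uB, (3.0, 1.0))
print(shadow)   # approaches (0, 0), the only point in the intersection
```

Shrinking the angle toward zero pushes cos(theta) toward 1 and visibly slows the iteration, matching the Friedrichs-angle rate in the TL;DR.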

Journal ArticleDOI
TL;DR: Value iteration-based approximate/adaptive dynamic programming (ADP) as an approximate solution to infinite-horizon optimal control problems with deterministic dynamics and continuous state and action spaces is investigated, and a relatively simple proof for the convergence of the outer-loop iterations to the optimal solution is provided.
Abstract: Value iteration-based approximate/adaptive dynamic programming (ADP) as an approximate solution to infinite-horizon optimal control problems with deterministic dynamics and continuous state and action spaces is investigated. The learning iterations are decomposed into an outer loop and an inner loop. A relatively simple proof for the convergence of the outer-loop iterations to the optimal solution is provided using a novel idea with some new features. It presents an analogy between the value function during the iterations and the value function of a fixed-final-time optimal control problem. The inner loop is utilized to avoid the need for solving a set of nonlinear equations or a nonlinear optimization problem numerically, at each iteration of ADP for the policy update. Sufficient conditions for the uniqueness of the solution to the policy update equation and for the convergence of the inner-loop iterations to the solution are obtained. Afterwards, the results are formed as a learning algorithm for training a neurocontroller or creating a look-up table to be used for optimal control of nonlinear systems with different initial conditions. Finally, some of the features of the investigated method are numerically analyzed.
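The value-iteration backbone of the outer loop can be illustrated on a tiny discrete problem. This is a deterministic shortest-path stand-in for the continuous-space ADP iterations, with states, actions, and costs chosen purely for illustration:

```python
# Value iteration  V_{k+1}(s) = min_a [ cost(s,a) + V_k(next(s,a)) ]  on a
# tiny deterministic shortest-path problem: 3 states, state 2 is an absorbing
# goal.  A discrete stand-in for the continuous-space ADP outer loop.

actions = {                # state -> list of (cost, next_state)
    0: [(1.0, 1), (5.0, 2)],
    1: [(1.0, 2)],
    2: [(0.0, 2)],         # goal: stay put at zero cost
}

V = {s: 0.0 for s in actions}   # initial value function
for _ in range(10):             # enough sweeps to converge on this graph
    V = {s: min(c + V[t] for c, t in actions[s]) for s in actions}

print(V)   # optimal costs-to-go: {0: 2.0, 1: 1.0, 2: 0.0}
```

The first sweep gives V(0) = 1 (the myopic one-step cost); the second corrects it to the true cost-to-go 2.0 via state 1, mirroring the fixed-final-time analogy the abstract mentions: after k sweeps, V is exactly the optimal k-step cost.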

Journal ArticleDOI
TL;DR: This note aims to develop the nonnegative matrix theory, in particular the properties of infinite products of row-stochastic matrices, to deal with the convergence analysis of general discrete-time linear multi-agent systems (MASs).
Abstract: This note aims to develop the nonnegative matrix theory, in particular the properties of infinite products of row-stochastic matrices, which are widely used for multiple-integrator agents, to deal with the convergence analysis of general discrete-time linear multi-agent systems (MASs). With the proposed approach, it is finally shown both theoretically and by simulation that consensus for all the agents can be reached exponentially fast under relaxed conditions, i.e., the individual uncoupled system is allowed to be strictly unstable (in the discrete-time sense) and it is only required that the joint of the communication topologies has a spanning tree frequently enough. Moreover, a least convergence rate as well as an upper bound for the strictly unstable mode, both independent of the switching mode of the system, are specified.

Posted Content
TL;DR: In this paper, a comprehensive convergence rate analysis of the Douglas-Rachford splitting (DRS), Peaceman-Rachford splitting (PRS), and alternating direction method of multipliers (ADMM) algorithms is presented under various regularity assumptions, including strong convexity, Lipschitz differentiability, and bounded linear regularity.
Abstract: Splitting schemes are a class of powerful algorithms that solve complicated monotone inclusion and convex optimization problems that are built from many simpler pieces. They give rise to algorithms in which the simple pieces of the decomposition are processed individually. This leads to easily implementable and highly parallelizable algorithms, which often obtain nearly state-of-the-art performance. In this paper, we provide a comprehensive convergence rate analysis of the Douglas-Rachford splitting (DRS), Peaceman-Rachford splitting (PRS), and alternating direction method of multipliers (ADMM) algorithms under various regularity assumptions including strong convexity, Lipschitz differentiability, and bounded linear regularity. The main consequence of this work is that relaxed PRS and ADMM automatically adapt to the regularity of the problem and achieve convergence rates that improve upon the (tight) worst-case rates that hold in the absence of such regularity. All of the results are obtained using simple techniques.

Journal ArticleDOI
TL;DR: For discrete memoryless channels, it is proved that a moderate deviation principle holds for all convergence rates between the large deviation and the central limit theorem regimes.
Abstract: We consider block codes whose rate converges to the channel capacity with increasing blocklength at a certain speed and examine the best possible decay of the probability of error. For discrete memoryless channels, we prove that a moderate deviation principle holds for all convergence rates between the large deviation and the central limit theorem regimes.

Journal ArticleDOI
TL;DR: It is proved that the original system has convergence and boundedness properties similar to those of the new one, and the comparison with the barrier Lyapunov function-based algorithm reveals the advantages of the NM algorithm.
Abstract: In this paper, nonlinear mapping (NM)-based backstepping control design is presented for a class of strict-feedback nonlinear systems with output constraint. By mapping the output value set onto the set of all real numbers, the constrained system is transformed into a new strict-feedback unconstrained system to employ the traditional backstepping control while simultaneously preventing the constraint from being violated. It is proved that the original system has convergence and boundedness properties similar to those of the new one. Besides the nominal case where full knowledge of the plant is available, we also tackle scenarios wherein parametric uncertainties are present. Furthermore, the comparison with the barrier Lyapunov function-based algorithm reveals the advantages of the NM algorithm. The closed-loop system is guaranteed to be stable in the sense that all signals involved remain bounded, and the tracking error converges to zero asymptotically. Simulation studies illustrate the performance of the proposed control.

Journal ArticleDOI
TL;DR: This work analyzes adaptive mesh-refining algorithms for conforming finite element discretizations of certain nonlinear second-order partial differential equations and proves convergence even with optimal algebraic convergence rates.
Abstract: We analyze adaptive mesh-refining algorithms for conforming finite element discretizations of certain nonlinear second-order partial differential equations. We allow continuous polynomials of arbitrary but fixed polynomial order. The adaptivity is driven by the residual error estimator. We prove convergence even with optimal algebraic convergence rates. In particular, our analysis covers general linear second-order elliptic operators. Unlike prior works for linear nonsymmetric operators, our analysis avoids the interior node property for the refinement, and the differential operator has to satisfy a Gårding inequality only. If the differential operator is uniformly elliptic, no additional assumption on the initial mesh is posed.

Journal ArticleDOI
TL;DR: In this paper, the optimal boundary control of a time-discrete Cahn--Hilliard--Navier--Stokes system is studied and a general class of free energy potentials is considered which, in particular, includes the double-obstacle potential.
Abstract: In this paper, the optimal boundary control of a time-discrete Cahn--Hilliard--Navier--Stokes system is studied. A general class of free energy potentials is considered which, in particular, includes the double-obstacle potential. The latter homogeneous free energy density yields an optimal control problem for a family of coupled systems, which result from a time discretization of a variational inequality of fourth order and the Navier--Stokes equation. The existence of an optimal solution to the time-discrete control problem as well as an approximate version is established. The latter approximation is obtained by mollifying the Moreau--Yosida approximation of the double-obstacle potential. First order optimality conditions for the mollified problems are given, and in addition to the convergence of optimal controls of the mollified problems to an optimal control of the original problem, first order optimality conditions for the original problem are derived through a limit process. The newly derived statio...

Proceedings ArticleDOI
11 Aug 2014
TL;DR: In this paper, the authors identify the reason for the non-convergence of approximate message passing for measurement matrices with i.i.d. entries and non-zero mean in the context of Bayes-optimal inference, and demonstrate numerically that when the iterative update is changed from parallel to sequential, convergence is restored.
Abstract: Approximate message passing is an iterative algorithm for compressed sensing and related applications. A solid theory about the performance and convergence of the algorithm exists for measurement matrices having iid entries of zero mean. However, it was observed by several authors that for more general matrices the algorithm often encounters convergence problems. In this paper we identify the reason of the non-convergence for measurement matrices with iid entries and non-zero mean in the context of Bayes optimal inference. Finally we demonstrate numerically that when the iterative update is changed from parallel to sequential the convergence is restored.
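The parallel-versus-sequential effect can be shown on a deliberately simple two-variable fixed-point map. This is a toy stand-in for the AMP update schedules, not the AMP equations themselves:

```python
# Two-variable fixed-point map with updates  x <- 2 - y,  y <- 2 - x.
# Every point with x + y = 2 is a fixed point.  The parallel (Jacobi-style)
# update oscillates between (0,0) and (2,2) forever, while the sequential
# (Gauss-Seidel-style) update lands on a fixed point in one sweep.

def parallel_step(x, y):
    return 2.0 - y, 2.0 - x          # both updates use the old values

def sequential_step(x, y):
    x = 2.0 - y                      # update x first...
    y = 2.0 - x                      # ...then use the new x for y
    return x, y

p, s = (0.0, 0.0), (0.0, 0.0)
for _ in range(7):
    p = parallel_step(*p)
    s = sequential_step(*s)
print(p)   # (2.0, 2.0): still oscillating after an odd number of steps
print(s)   # (2.0, 0.0): a fixed point, reached after the first sweep
```

The oscillation of the parallel schedule here mirrors, in miniature, the period-2 behaviour parallel AMP can exhibit on non-zero-mean matrices; serializing the updates breaks the oscillation mode.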

Journal ArticleDOI
TL;DR: In MODE-RMO, a ranking-based mutation operator is integrated into the MODE algorithm to accelerate convergence and thus enhance performance; this variant can generate Pareto-optimal fronts with satisfactory convergence and diversity.

Journal ArticleDOI
TL;DR: This paper studies leader selection in order to minimize convergence errors experienced by the follower agents, and introduces a novel connection to random walks on the network graph that shows that the convergence error has an inherent supermodular structure as a function of the leader set.
Abstract: In a leader-follower multi-agent system (MAS), the leader agents act as control inputs and influence the states of the remaining follower agents. The rate at which the follower agents converge to their desired states, as well as the errors in the follower agent states prior to convergence, are determined by the choice of leader agents. In this paper, we study leader selection in order to minimize convergence errors experienced by the follower agents, which we define as a norm of the distance between the follower agents' intermediate states and the convex hull of the leader agent states. By introducing a novel connection to random walks on the network graph, we show that the convergence error has an inherent supermodular structure as a function of the leader set. Supermodularity enables development of efficient discrete optimization algorithms that directly approximate the optimal leader set, provide provable performance guarantees, and do not rely on continuous relaxations. We formulate two leader selection problems within the supermodular optimization framework, namely, the problem of selecting a fixed number of leader agents in order to minimize the convergence error, as well as the problem of selecting the minimum-size set of leader agents to achieve a given bound on the convergence error. We introduce algorithms for approximating the optimal solution to both problems in static networks, dynamic networks with known topology distributions, and dynamic networks with unknown and unpredictable topology distributions. Our approach is shown to provide significantly lower convergence errors than existing random and degree-based leader selection methods in a numerical study.
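The greedy selection enabled by supermodularity can be sketched on a surrogate cost. The facility-location-style "error" below (total distance from each agent to its nearest leader) is an illustrative stand-in with the same diminishing-returns structure, not the paper's convergence-error norm; the agent positions and k are also illustrative:

```python
# Greedy leader selection sketch.  The surrogate "convergence error" of a
# leader set S is the total distance from each agent to its nearest leader.
# Greedy addition of the element with the best marginal improvement is the
# kind of direct discrete optimization that supermodular structure justifies.

positions = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]   # agents on a line (toy data)

def error(leaders):
    return sum(min(abs(p - positions[l]) for l in leaders) for p in positions)

def greedy_leaders(k):
    chosen = []
    candidates = list(range(len(positions)))
    for _ in range(k):
        # add the candidate whose inclusion reduces the error the most
        best = min((c for c in candidates if c not in chosen),
                   key=lambda c: error(chosen + [c]))
        chosen.append(best)
    return chosen

leaders = greedy_leaders(2)
print(leaders, error(leaders))   # picks one leader per cluster
```

On this toy input the greedy rule selects one leader inside each of the two clusters, which is also the globally optimal 2-leader choice; the supermodularity results in the paper are what turn this kind of observation into a provable approximation guarantee.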