
Showing papers on "Convergence (routing) published in 2016"


Journal ArticleDOI
Zongyu Zuo, Lin Tie
TL;DR: It is shown that the finite settling time of the proposed general framework for robust consensus design is upper bounded for any initial condition, which makes it possible, in network consensus problems, to design and estimate the convergence time offline for a multi-agent team with a given undirected information flow.
Abstract: This paper investigates the robust finite-time consensus problem of multi-agent systems in networks with undirected topology. Global nonlinear consensus protocols augmented with a variable structure are constructed with the aid of Lyapunov functions for each single-integrator agent dynamics in the presence of external disturbances. In particular, it is shown that the finite settling time of the proposed general framework for robust consensus design is upper bounded for any initial condition. This makes it possible, in network consensus problems, to design and estimate the convergence time offline for a multi-agent team with a given undirected information flow. Finally, simulation results are presented to demonstrate the performance and effectiveness of our finite-time protocols.
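As a concrete illustration of the kind of protocol the abstract describes (not the paper's exact design), the following minimal Python sketch runs a fractional-power consensus law with a variable-structure (signum) term for four single-integrator agents on an undirected ring under a bounded disturbance; the graph, gains, and exponent are illustrative assumptions.

```python
import numpy as np

def sig(x, alpha):
    """Signed fractional power: |x|^alpha * sign(x)."""
    return np.sign(x) * np.abs(x) ** alpha

# Undirected ring of 4 agents (adjacency matrix); values are illustrative.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
alpha, beta, dt = 0.5, 0.8, 1e-3          # fractional power, sign-term gain, step
x = np.array([2.0, -1.0, 0.5, 3.0])       # initial states

for _ in range(20000):
    d = 0.1 * np.sin(np.arange(4))        # bounded external disturbance
    u = np.zeros(4)
    for i in range(4):
        rel = x - x[i]
        # fractional-power consensus term + variable-structure (sign) term
        u[i] = np.sum(A[i] * sig(rel, alpha)) + beta * np.sign(np.sum(A[i] * rel))
    x = x + dt * (u + d)

print(x)   # states approximately agree in finite time despite the disturbance
```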

496 citations


Journal ArticleDOI
TL;DR: The proofs of convergence for a first-order primal–dual algorithm for convex optimization are revisited, with simpler proofs and more complete results that can deal with explicit terms and nonlinear proximity operators in spaces with quite general norms.
Abstract: We revisit the proofs of convergence for a first-order primal–dual algorithm for convex optimization which we studied a few years ago. In particular, we prove rates of convergence for a more general version, with simpler proofs and more complete results. The new results can deal with explicit terms and nonlinear proximity operators in spaces with quite general norms.
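For reference, the first-order primal–dual iteration in question is commonly written, for problems of the form min_x G(x) + F(Kx), as alternating proximal steps on the primal and dual variables with an extrapolation step. Below is a minimal NumPy sketch on an illustrative l1-regularized least-squares instance; the problem, step sizes, and proximal operators are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))     # K = A, F(z) = 0.5*||z - b||^2, G = lam*||.||_1
b = rng.standard_normal(30)
lam = 0.1

L = np.linalg.norm(A, 2)              # operator norm of K
tau = sigma = 0.9 / L                 # step sizes with tau*sigma*L^2 < 1
theta = 1.0                           # extrapolation parameter

x = np.zeros(10); x_bar = x.copy(); y = np.zeros(30)

def soft(v, t):                       # prox of t*||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(500):
    # dual step: prox of sigma*F* for F(z) = 0.5*||z - b||^2
    y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
    # primal step: prox of tau*G for G = lam*||.||_1
    x_new = soft(x - tau * (A.T @ y), tau * lam)
    x_bar = x_new + theta * (x_new - x)   # extrapolation
    x = x_new

print(0.5 * np.linalg.norm(A @ x - b)**2 + lam * np.abs(x).sum())
```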

429 citations




01 Jan 2016
TL;DR: Liberal conditions on the steps of a "descent" method for finding extrema of a function are given; most known results are special cases.
Abstract: Liberal conditions on the steps of a "descent" method for finding extrema of a function are given; most known results are special cases.
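One classical special case of such conditions is a gradient method with diminishing, non-summable steps (alpha_k -> 0 and sum of alpha_k = infinity); a minimal sketch on an assumed convex quadratic:

```python
import numpy as np

# Gradient descent with diminishing, non-summable steps: one classical
# special case of "liberal" step conditions for a descent method.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # illustrative convex quadratic
grad = lambda x: Q @ x

x = np.array([5.0, -4.0])
for k in range(1, 5001):
    x = x - (1.0 / k) * grad(x)          # alpha_k = 1/k

print(x)   # approaches the minimizer at the origin
```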

309 citations


Proceedings ArticleDOI
Guannan Qu, Na Li
01 Dec 2016
TL;DR: This paper proposes a distributed algorithm that, despite using the same amount of communication per iteration as DGD, can effectively harness the function smoothness and converge to the optimum at a rate of O(1/t) when the objective function is convex and smooth, and at a linear rate when it is additionally strongly convex.
Abstract: There has been a growing effort in studying the distributed optimization problem over a network. The objective is to optimize a global function formed by a sum of local functions, using only local computation and communication. Literature has developed consensus-based distributed (sub)gradient descent (DGD) methods and has shown that they have the same convergence rate O(log t/√t) as the centralized (sub)gradient methods (CGD) when the function is convex but possibly nonsmooth. However, when the function is convex and smooth, under the framework of DGD, it is unclear how to harness the smoothness to obtain a faster convergence rate comparable to CGD's convergence rate. In this paper, we propose a distributed algorithm that, despite using the same amount of communication per iteration as DGD, can effectively harness the function smoothness and converge to the optimum with a rate of O(1/t). If the objective function is further strongly convex, our algorithm has a linear convergence rate. Both rates match the convergence rate of CGD. The key step in our algorithm is a novel gradient estimation scheme that uses history information to achieve fast and accurate estimation of the average gradient. To motivate the necessity of history information, we also show that it is impossible for a class of distributed algorithms like DGD to achieve a linear convergence rate without using history information even if the objective function is strongly convex and smooth.
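The "gradient estimation scheme that uses history information" is, in its commonly presented form, a gradient-tracking recursion: an auxiliary variable is mixed over the network and corrected by the difference of consecutive local gradients so that it tracks the network-average gradient. A minimal NumPy sketch on an illustrative distributed least-squares problem follows; the mixing matrix W and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 3                                   # 4 nodes, 3 variables
A = [rng.standard_normal((10, d)) for _ in range(n)]
b = [rng.standard_normal(10) for _ in range(n)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])    # local gradient

# Doubly stochastic mixing matrix for a 4-node ring (illustrative).
W = np.array([[.5, .25, 0, .25],
              [.25, .5, .25, 0],
              [0, .25, .5, .25],
              [.25, 0, .25, .5]])
eta = 0.01

X = np.zeros((n, d))
S = np.array([grad(i, X[i]) for i in range(n)])   # tracks the average gradient
G_old = S.copy()

for _ in range(2000):
    X_new = W @ X - eta * S                   # consensus + tracked-gradient step
    G_new = np.array([grad(i, X_new[i]) for i in range(n)])
    S = W @ S + G_new - G_old                 # gradient-tracking correction
    X, G_old = X_new, G_new

print(X)   # rows approximately agree on the global least-squares solution
```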

285 citations


Journal ArticleDOI
TL;DR: This paper aims to solve the model-free optimal tracking control problem of nonaffine nonlinear discrete-time systems with a critic-only Q-learning (CoQL) method, which avoids solving the tracking Hamilton-Jacobi-Bellman equation.
Abstract: Model-free control is an important and promising topic in control fields, which has attracted extensive attention in the past few years. In this paper, we aim to solve the model-free optimal tracking control problem of nonaffine nonlinear discrete-time systems. A critic-only Q-learning (CoQL) method is developed, which learns the optimal tracking control from real system data, and thus avoids solving the tracking Hamilton–Jacobi–Bellman equation. First, the Q-learning algorithm is proposed based on the augmented system, and its convergence is established. Using only one neural network for approximating the Q-function, the CoQL method is developed to implement the Q-learning algorithm. Furthermore, the convergence of the CoQL method is proved with the consideration of neural network approximation error. With the convergent Q-function obtained from the CoQL method, the adaptive optimal tracking control is designed based on the gradient descent scheme. Finally, the effectiveness of the developed CoQL method is demonstrated through simulation studies. The developed CoQL method learns from off-policy data and is implemented with a critic-only structure, so it is easy to realize and it overcomes the inadequate-exploration problem.
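A heavily simplified sketch of the critic-only idea follows, with a linear critic over quadratic features and a discretized control set standing in for the paper's neural network and gradient-descent controller; the plant, gains, and features are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, lr = 0.95, 0.02
U = np.linspace(-2, 2, 21)                     # discretized control set

def phi(e, u):                                 # quadratic critic features
    return np.array([e*e, e*u, u*u, e, u, 1.0])

w = np.zeros(6)                                # critic weights
f = lambda x, u: 0.8*x + 0.5*np.tanh(u)        # "unknown" nonaffine plant (sim only)
r = 1.0                                        # constant reference to track

for episode in range(300):                     # learn from off-policy data
    x = rng.uniform(-2, 2)
    for _ in range(50):
        u = rng.choice(U)                      # random exploratory action
        x_next = f(x, u)
        e, e_next = x - r, x_next - r
        cost = e*e + 0.1*u*u
        q_next = min(phi(e_next, v) @ w for v in U)
        td = cost + gamma*q_next - phi(e, u) @ w
        w += lr * td * phi(e, u)               # critic-only TD update
        x = x_next

x = -1.5                                       # greedy tracking from the learned Q
for _ in range(40):
    u = min(U, key=lambda v: phi(x - r, v) @ w)
    x = f(x, u)
print(x)                                       # ideally settles near the reference r
```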

245 citations


Journal ArticleDOI
Weiye Zheng, Wenchuan Wu, Boming Zhang, Hongbin Sun, Liu Yibing
TL;DR: In this paper, a fully distributed reactive power optimization algorithm that can obtain the global optimum solution of nonconvex problems for distribution networks (DNs) without requiring a central coordinator is presented.
Abstract: This paper presents a fully distributed reactive power optimization algorithm that can obtain the global optimum solution of nonconvex problems for distribution networks (DNs) without requiring a central coordinator. Second-order conic relaxation is used to achieve exact convexification. A fully distributed second-order cone programming solver (D-SOCP) is formulated corresponding to the given division of areas based on the alternating direction method of multipliers (ADMM) algorithm, which is greatly simplified by exploiting the structure of active DNs. The problem is solved for each area with very little interchange of boundary information between neighboring areas. D-SOCP is extended by using a varying penalty parameter to improve convergence. A proof of its convergence is also given. The effectiveness of the method is demonstrated via numerical simulations using the IEEE 69-bus, 123-bus DNs, and a real 1066-bus distribution system.
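For illustration only, the sketch below shows generic two-area consensus ADMM with a residual-balancing varying penalty parameter, the standard mechanism behind such convergence improvements; it is not the D-SOCP formulation itself, and the quadratic local objectives are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two "areas" coupled through a shared variable: min f1(x) + f2(x) with
# f_i(x) = 0.5*||A_i x - b_i||^2, solved by consensus ADMM.
A1, b1 = rng.standard_normal((8, 3)), rng.standard_normal(8)
A2, b2 = rng.standard_normal((8, 3)), rng.standard_normal(8)

rho = 1.0
x1 = x2 = z = np.zeros(3)
u1, u2 = np.zeros(3), np.zeros(3)              # scaled dual variables

for _ in range(200):
    # local solves, exchanging only boundary information (z, u_i)
    x1 = np.linalg.solve(A1.T @ A1 + rho*np.eye(3), A1.T @ b1 + rho*(z - u1))
    x2 = np.linalg.solve(A2.T @ A2 + rho*np.eye(3), A2.T @ b2 + rho*(z - u2))
    z_old = z
    z = (x1 + u1 + x2 + u2) / 2.0
    u1 = u1 + x1 - z
    u2 = u2 + x2 - z
    # varying penalty: keep primal and dual residuals balanced
    r_pri = np.linalg.norm(np.concatenate([x1 - z, x2 - z]))
    r_dua = rho * np.linalg.norm(z - z_old)
    if r_pri > 10*r_dua:
        rho *= 2.0; u1 /= 2.0; u2 /= 2.0       # rescale scaled duals with rho
    elif r_dua > 10*r_pri:
        rho /= 2.0; u1 *= 2.0; u2 *= 2.0

print(z)                                       # consensus minimizer of f1 + f2
```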

243 citations


Journal ArticleDOI
TL;DR: In this paper, a distributed event-triggered algorithm is proposed to solve the multi-agent average consensus problem for networks whose communication topology is described by weight-balanced, strongly connected digraphs, and the resulting network executions provably converge to the average of the initial agents' states exponentially fast.
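A minimal sketch of the event-triggered idea (illustrative, not the paper's algorithm): each agent re-broadcasts its state only when it drifts from its last broadcast value by more than a decaying threshold, and all agents run the consensus law on broadcast values.

```python
import numpy as np

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # weight-balanced graph
x = np.array([1.0, -2.0, 4.0])
xb = x.copy()                        # last broadcast states
dt, c, lam = 1e-3, 0.5, 0.3          # step and threshold parameters (illustrative)
events = 0

for k in range(20000):
    t = k * dt
    trig = np.abs(x - xb) > c * np.exp(-lam * t)   # trigger condition
    xb[trig] = x[trig]; events += trig.sum()       # broadcast only on events
    u = A @ xb - A.sum(1) * xb                     # -L @ xb: consensus on broadcasts
    x = x + dt * u

print(x, events)   # states near the initial average, with few broadcast events
```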

236 citations


Journal ArticleDOI
TL;DR: This work proposes a fluid model for a large class of MP-TCP algorithms and identifies design criteria that guarantee the existence, uniqueness, and stability of system equilibrium and motivates the algorithm Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation.
Abstract: Multipath TCP (MP-TCP) has the potential to greatly improve application performance by using multiple paths transparently. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate our algorithm Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel. We use our prototype to compare the new algorithm to existing MP-TCP algorithms.

225 citations


Journal ArticleDOI
01 Jan 2016
TL;DR: In the HS/CS method, the pitch adjustment operation in harmony search (HS), which can be considered a mutation operator, is added to the cuckoo-updating process so as to speed up convergence.
Abstract: For the purpose of enhancing the search ability of the cuckoo search (CS) algorithm, an improved robust approach, called HS/CS, is put forward to address optimization problems. In the HS/CS method, the pitch adjustment operation in harmony search (HS), which can be considered a mutation operator, is added to the cuckoo-updating process so as to speed up convergence. Several benchmarks are applied to verify the proposed method, and it is demonstrated that, in most cases, HS/CS performs better than the standard CS and other comparative methods. The parameters used in HS/CS are also investigated by various simulations.
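The hybridization can be sketched in a few lines: a standard cuckoo-search loop with Levy-flight updates, where an HS-style pitch adjustment perturbs each new solution as a mutation operator. All parameter values and the sphere benchmark below are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sum(x**2)                 # illustrative benchmark (sphere)
n, dim, iters = 20, 5, 500
pa, PAR, bw = 0.25, 0.3, 0.05              # abandon rate, pitch rate, bandwidth

def levy(size, beta=1.5):                  # Mantegna's algorithm for Levy steps
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

nests = rng.uniform(-5, 5, (n, dim))
fit = np.array([f(x) for x in nests])
best = nests[fit.argmin()].copy()

for _ in range(iters):
    for i in range(n):
        new = nests[i] + 0.01 * levy(dim) * (nests[i] - best)  # cuckoo Levy update
        mask = rng.random(dim) < PAR       # HS pitch adjustment as mutation
        new[mask] += bw * rng.uniform(-1, 1, mask.sum())
        if f(new) < fit[i]:
            nests[i], fit[i] = new, f(new)
    worst = fit.argsort()[-int(pa * n):]   # abandon a fraction pa of worst nests
    nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
    fit[worst] = [f(x) for x in nests[worst]]
    best = nests[fit.argmin()].copy()

print(f(best))                             # near 0 on the sphere function
```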

221 citations


Journal ArticleDOI
TL;DR: A novel technique coined composite learning is developed to guarantee parameter convergence without the PE condition, where online recorded data together with instantaneous data are applied to generate prediction errors, and both tracking errors and prediction errors are utilized to update parametric estimates.
Abstract: In the conventional adaptive control, a stringent condition named persistent excitation (PE) must be satisfied to guarantee parameter convergence. This technical note focuses on adaptive dynamic surface control for a class of strict-feedback nonlinear systems with parametric uncertainties, where a novel technique coined composite learning is developed to guarantee parameter convergence without the PE condition. In the composite learning, online recorded data together with instantaneous data are applied to generate prediction errors, and both tracking errors and prediction errors are utilized to update parametric estimates. The proposed approach is also extended to an output-feedback case by using a nonlinear separation principle. The distinctive feature of the composite learning is that parameter convergence can be guaranteed by an interval-excitation condition which is much weaker than the PE condition such that the control performance can be improved from practical asymptotic stability to practical exponential stability. An illustrative example is used for verifying effectiveness of the proposed approach.
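A highly simplified, scalar sketch of the composite-learning idea follows (not the paper's dynamic surface design): the parameter estimate is driven both by the tracking error and by prediction errors formed from a stack of recorded regressor data, so convergence needs only the recorded data to be exciting over an interval. The plant, gains, and recording rule are illustrative assumptions.

```python
import numpy as np

# Plant (Euler-simulated): x_{k+1} = x_k + dt*(theta*phi(x_k) + u_k), theta unknown.
theta, dt = 2.0, 1e-3
phi = lambda x: np.tanh(x)
x, th_hat = 1.5, 0.0
k_c, g_e, g_p = 5.0, 2.0, 20.0      # control / learning gains (illustrative)
stack = []                           # recorded data: (regressor, measured theta*phi)

for k in range(30000):
    e = x                            # track the origin
    u = -k_c*e - th_hat*phi(x)       # certainty-equivalence control
    x_new = x + dt*(theta*phi(x) + u)
    if k % 500 == 0 and len(stack) < 20:
        # record excitation data: (x_new - x - dt*u)/dt equals theta*phi(x)
        stack.append((phi(x), (x_new - x - dt*u) / dt))
    # composite update: tracking-error term + recorded prediction errors
    pred = sum(p * (y - th_hat*p) for p, y in stack)
    th_hat += dt * (g_e*e*phi(x) + g_p*pred)
    x = x_new

print(th_hat)   # approaches the true theta = 2.0 without persistent excitation
```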

Posted Content
TL;DR: This work proposes an algorithm for the optimization of continuous hyperparameters using inexact gradient information and gives sufficient conditions for the global convergence of this method, based on regularity conditions of the involved functions and summability of errors.
Abstract: Most models in machine learning contain at least one hyperparameter to control for model complexity. Choosing an appropriate set of hyperparameters is both crucial in terms of model accuracy and computationally challenging. In this work we propose an algorithm for the optimization of continuous hyperparameters using inexact gradient information. An advantage of this method is that hyperparameters can be updated before model parameters have fully converged. We also give sufficient conditions for the global convergence of this method, based on regularity conditions of the involved functions and summability of errors. Finally, we validate the empirical performance of this method on the estimation of regularization constants of L2-regularized logistic regression and kernel ridge regression. Empirical benchmarks indicate that our approach is highly competitive with respect to state-of-the-art methods.
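A minimal sketch of the scheme under stated assumptions (a ridge-regression regularization constant, with the hypergradient obtained by implicit differentiation at a deliberately non-converged inner iterate):

```python
import numpy as np

rng = np.random.default_rng(5)
Xtr, Xval = rng.standard_normal((80, 10)), rng.standard_normal((40, 10))
w_true = rng.standard_normal(10)
ytr = Xtr @ w_true + 0.5*rng.standard_normal(80)
yval = Xval @ w_true + 0.5*rng.standard_normal(40)

lam, w = 1.0, np.zeros(10)
for outer in range(50):
    # Inner problem: a FEW gradient steps on the regularized training
    # loss; w is deliberately not converged when lam is updated.
    for _ in range(20):
        w -= 0.01 * (Xtr.T @ (Xtr @ w - ytr) / 80 + lam * w)
    # Inexact hypergradient via implicit differentiation at the current w:
    # dw*/dlam = -H^{-1} w, with H the inner-loss Hessian.
    H = Xtr.T @ Xtr / 80 + lam * np.eye(10)
    gval = Xval.T @ (Xval @ w - yval) / 40
    hypergrad = -gval @ np.linalg.solve(H, w)
    lam = max(lam - 0.1 * hypergrad, 1e-6)    # keep the constant positive

print(lam)   # regularization constant tuned against the validation loss
```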

Journal ArticleDOI
TL;DR: In this article, a concurrent learning (CL)-based implementation of model-based RL to solve approximate optimal regulation problems online under a PE-like rank condition was developed, based on the observation that, given a model of the system, RL can be implemented by evaluating the Bellman error at any number of desired points in the state space.

Journal Article
TL;DR: In this article, a decentralized double stochastic averaging gradient (DSA) algorithm is proposed to solve large scale machine learning problems where elements of the training set are distributed to multiple computational elements.
Abstract: This paper considers optimization problems where nodes of a network have access to summands of a global objective. Each of these local objectives is further assumed to be an average of a finite set of functions. The motivation for this setup is to solve large scale machine learning problems where elements of the training set are distributed to multiple computational elements. The decentralized double stochastic averaging gradient (DSA) algorithm is proposed as a solution alternative that relies on: (i) the use of local stochastic averaging gradients; (ii) determination of descent steps as differences of consecutive stochastic averaging gradients. Strong convexity of local functions and Lipschitz continuity of local gradients are shown to guarantee linear convergence of the sequence generated by DSA in expectation. Local iterates are further shown to approach the optimal argument for almost all realizations. The expected linear convergence of DSA is in contrast to the sublinear rate characteristic of existing methods for decentralized stochastic optimization. Numerical experiments on a logistic regression problem illustrate reductions in convergence time and number of feature vectors processed until convergence relative to these other alternatives.

Journal ArticleDOI
TL;DR: The convergence of the proposed continuous homogeneous sliding-mode control algorithm is proved via a homogeneous, continuously differentiable and strict Lyapunov function.

Journal ArticleDOI
TL;DR: In this article, the authors consider the discrete two-dimensional Gaussian free field with Dirichlet boundary data and prove the convergence of the law of the centered maximum of the field.
Abstract: We consider the discrete two-dimensional Gaussian free field on a box of side length $N$, with Dirichlet boundary data, and prove the convergence of the law of the centered maximum of the field.

Journal ArticleDOI
TL;DR: This paper considers designing adaptive finite-time controllers for a class of SISO strict feedback nonlinear plants with parametric uncertainties based on given specifications with requirements on the transient response in terms of convergence time and convergence rate.

Proceedings ArticleDOI
TL;DR: Several safeguarding techniques are incorporated into the algorithm, namely virtual control and trust regions, which add another layer of algorithmic robustness; the convergence analysis is carried out in a continuous-time setting, so the results are independent of any numerical scheme used for discretization.
Abstract: This paper presents an algorithm to solve non-convex optimal control problems, where non-convexity can arise from nonlinear dynamics, and non-convex state and control constraints. Assuming that the state and control constraints are already convex or convexified, the proposed algorithm convexifies the nonlinear dynamics, via a linearization, in a successive manner. Thus at each succession, a convex optimal control subproblem is solved. Since the dynamics are linearized and other constraints are convex, after a discretization, the subproblem can be expressed as a finite-dimensional convex programming subproblem. Since convex optimization problems can be solved very efficiently, especially with custom solvers, this subproblem can be solved in time-critical applications, such as real-time path planning for autonomous vehicles. Several safeguarding techniques are incorporated into the algorithm, namely virtual control and trust regions, which add another layer of algorithmic robustness. A convergence analysis is presented in a continuous-time setting. By doing so, our convergence results will be independent of any numerical schemes used for discretization. Numerical simulations are performed for an illustrative trajectory optimization example.
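A toy sketch of the successive-convexification loop using the cvxpy modeling library (the scalar dynamics, weights, and trust-region radius are illustrative assumptions, not the paper's formulation): each succession linearizes the dynamics about the previous trajectory, adds a virtual control to keep the subproblem feasible, and bounds the step with a trust region.

```python
import numpy as np
import cvxpy as cp

# Steer x_{k+1} = x_k + dt*sin(x_k) + dt*u_k from x = 2 to the origin.
N, dt, r = 30, 0.1, 0.5
xbar = np.linspace(2.0, 0.0, N + 1)            # initial trajectory guess

for succession in range(10):
    x = cp.Variable(N + 1); u = cp.Variable(N); v = cp.Variable(N)
    cons = [x[0] == 2.0, x[N] == 0.0,
            cp.abs(x - xbar) <= r]             # trust region about xbar
    for k in range(N):
        fk = xbar[k] + dt * np.sin(xbar[k])    # dynamics at the reference
        Ak = 1.0 + dt * np.cos(xbar[k])        # Jacobian at the reference
        # linearized dynamics + virtual control v[k] for guaranteed feasibility
        cons += [x[k+1] == fk + Ak * (x[k] - xbar[k]) + dt * u[k] + v[k]]
    cost = cp.sum_squares(u) + 1e4 * cp.norm(v, 1)   # heavily penalize v
    cp.Problem(cp.Minimize(cost), cons).solve()
    xbar = x.value                             # next linearization point

print(np.max(np.abs(v.value)))                 # virtual control ~ 0 at convergence
```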

Journal ArticleDOI
TL;DR: By discussing $\dot{t}(V)=\mu^{-1}(V)$, a general approach is provided to reveal the essence of finite-time stability and fixed-time convergence for the system $\dot{V}(t)=\mu(V(t))$.

Journal ArticleDOI
01 Apr 2016
TL;DR: An Improved Harmony Search Based Energy Efficient Routing Algorithm for WSNs, based on the harmony search (HS) meta-heuristic, is proposed, and an objective function model that considers both energy consumption and path length is developed.
Abstract: Highlights: a new encoding of harmony memory for routing in WSNs; a new method for generating a new harmony for routing in WSNs; dynamic adaptation of the parameter HMCR to improve the performance of the proposed routing algorithm; an effective local search strategy to improve the convergence speed and accuracy of the proposed routing algorithm; and an energy-efficient objective function model. Wireless sensor networks (WSNs) are among the most important technologies of this century. Because sensor nodes have limited energy resources, designing energy-efficient routing algorithms for WSNs has become a research focus. Since routing in WSNs to maximize network lifetime is an NP-hard problem, many researchers try to optimize it with meta-heuristics. However, due to the variable number of decision variables and the strong constraints of the WSN routing problem, most meta-heuristics are ill-suited to designing routing algorithms for WSNs. This paper proposes an Improved Harmony Search Based Energy Efficient Routing Algorithm (IHSBEER) for WSNs, based on the harmony search (HS) algorithm, a meta-heuristic. To address the WSN routing problem with the HS algorithm, several key improvements are put forward. First, the encoding of harmony memory is improved based on the characteristics of routing in WSNs. Second, the improvisation of a new harmony is also improved: dynamic adaptation is introduced for the parameter HMCR to avoid prematurity in early generations and to strengthen local search ability in late generations, while the adjustment process of the HS algorithm is discarded so that the proposed routing algorithm contains fewer parameters. Third, an effective local search strategy is proposed to enhance local search ability, improving the convergence speed and accuracy of the routing algorithm. In addition, an objective function model that considers both energy consumption and path length is developed. Detailed descriptions and performance test results of the proposed approach are included. The experimental results clearly show the advantages of the proposed routing algorithm for WSNs.

Journal ArticleDOI
TL;DR: This paper identifies boundaries of coefficients for this algorithm that ensure particles converge to their equilibrium, investigates the local convergence property of the algorithm, and proves that the original standard PSO algorithm is not sensitive to rotation, scaling, and translation of the search space.
Abstract: In this paper, we investigate three important properties (stability, local convergence, and transformation invariance) of a variant of particle swarm optimization (PSO) called standard PSO 2011 (SPSO2011). Through some experiments, we identify boundaries of coefficients for this algorithm that ensure particles converge to their equilibrium. Our experiments show that these convergence boundaries for this algorithm are: 1) dependent on the number of dimensions of the problem; 2) different from that of some other PSO variants; and 3) not affected by the stagnation assumption. We also determine boundaries for coefficients associated with different behaviors, e.g., nonoscillatory and zigzagging, of particles before convergence through analysis of particle positions in the frequency domain. In addition, we investigate the local convergence property of this algorithm and we prove that it is not locally convergent. We provide a sufficient condition and related proofs for local convergence for a formulation that represents updating rules of a large class of PSO variants. We modify the SPSO2011 in such a way that it satisfies that sufficient condition; hence, the modified algorithm is locally convergent. Also, we prove that the original standard PSO algorithm is not sensitive to rotation, scaling, and translation of the search space.

Journal ArticleDOI
TL;DR: A new iterative algorithm to approximate fixed points of Suzuki's generalized nonexpansive mappings is proposed, and some weak and strong convergence theorems in a uniformly convex Banach space are established.

Journal ArticleDOI
TL;DR: This paper discusses generalized controllers for distance-based rigid formation shape stabilization and aims to provide a unified approach for the convergence analysis by proving the local exponential stability for rigid formation systems when using a general form of shape controllers with certain properties.

Journal ArticleDOI
TL;DR: NSync as discussed by the authors is a parallel coordinate descent method in which at each iteration a random subset of coordinates is updated, in parallel, allowing for the subsets to be chosen using an arbitrary probability law.
Abstract: We propose and analyze a new parallel coordinate descent method—NSync—in which at each iteration a random subset of coordinates is updated, in parallel, allowing for the subsets to be chosen using an arbitrary probability law. This is the first method of this type. We derive convergence rates under a strong convexity assumption, and comment on how to assign probabilities to the sets to optimize the bound. The complexity and practical performance of the method can outperform its uniform variant by an order of magnitude. Surprisingly, the strategy of updating a single randomly selected coordinate per iteration—with optimal probabilities—may require less iterations, both in theory and practice, than the strategy of updating all coordinates at every iteration.
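The method's per-iteration structure is simple to sketch: draw a random subset of coordinates from an arbitrary law and update each selected coordinate with its own stepsize constant v_i. In the illustrative NumPy sketch below, the subset is drawn by independent coin flips and v is a diagonally dominant bound on the Hessian, a simple sufficient choice standing in for the paper's more refined expected separable overapproximation conditions; a real implementation would evaluate only the selected partial derivatives.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8
Q = rng.standard_normal((n, n)); Q = Q.T @ Q + np.eye(n)   # strongly convex quadratic
b = rng.standard_normal(n)
f_grad = lambda x: Q @ x - b

# Arbitrary sampling law: coordinate i is selected with probability p[i]
# (nonuniform, illustrative). v[i]: per-coordinate stepsize constants;
# row sums of |Q| give Q <= diag(v), a safe (conservative) choice.
p = np.linspace(0.2, 0.9, n)
v = np.abs(Q).sum(axis=1)

x = np.zeros(n)
for _ in range(5000):
    S = rng.random(n) < p                  # random subset, updated "in parallel"
    g = f_grad(x)                          # full gradient only for brevity
    x[S] -= g[S] / v[S]                    # NSync-style coordinate update
print(0.5 * x @ Q @ x - b @ x)             # near the optimal value
```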

Journal ArticleDOI
23 Jul 2016 - Filomat
TL;DR: In this article, the authors proposed a rapid and effective way of working out the optimum convergence control parameter in the homotopy analysis method (HAM) for solving algebraic, highly nonlinear differential-difference, integro-differential, and ordinary or partial differential equations or systems.
Abstract: A rapid and effective way of working out the optimum convergence control parameter in the homotopy analysis method (HAM) is introduced in this paper. As compared with the already known ways of evaluating the convergence control parameter in HAM, either through the classical constant $h$-curves ($h$ is the convergence control parameter) or from the classical squared residual error as frequently used in the literature, a novel description is proposed to find an optimal value for the convergence control parameter yielding the same optimum values. In most cases, the new method is shown to perform quicker and better than the residual error method when integrations are much harder to evaluate. Examples involving the solution of algebraic, highly nonlinear differential-difference, integro-differential, and ordinary or partial differential equations or systems, all taken from the literature, demonstrate the validity and usefulness of the introduced technique.

Journal ArticleDOI
TL;DR: An upper estimate of the convergence (settling) time is calculated for the finite-time convergent control algorithm that drives the state of a series of integrators to the origin, and a novel fixed-time continuous control law is proposed for a chain of integrators of an arbitrary dimension.
Abstract: The contribution of this paper is twofold. First, an upper estimate of the convergence (settling) time is calculated for the finite-time convergent control algorithm that drives the state of a series of integrators to the origin. To the best of our knowledge, such an estimate is obtained for the first time. Second, a novel fixed-time continuous control law is proposed for a chain of integrators of an arbitrary dimension. Its fixed-time convergence is established and the uniform upper bound of the settling time is computed. The theoretical developments are applied to a case study of controlling a DC motor.
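The flavor of the fixed-time result can be seen on a single integrator (the paper handles an arbitrary chain): with u = -k1*sig(x)^a - k2*sig(x)^b and 0 < a < 1 < b, integrating dV/mu(V) gives the initial-condition-independent bound T <= 1/(k1*(1-a)) + 1/(k2*(b-1)). A minimal sketch with illustrative gains:

```python
import numpy as np

def sig(x, a):
    return np.sign(x) * np.abs(x) ** a

k1, k2, a, b, dt = 2.0, 2.0, 0.5, 1.5, 1e-4
T_bound = 1/(k1*(1-a)) + 1/(k2*(b-1))          # settling-time bound, = 2.0 here

for x0 in (0.1, 10.0, 1e4):                    # wildly different initial states
    x, t = x0, 0.0
    while abs(x) > 1e-6:
        x += dt * (-k1*sig(x, a) - k2*sig(x, b))
        t += dt
    print(f"x0={x0:>8}: settled at t={t:.3f} (bound {T_bound:.3f})")
```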

Journal ArticleDOI
TL;DR: The optimum design problem of steel space frames is formulated according to the provisions of LRFD-AISC, and its solution is obtained using an enhanced artificial bee colony algorithm that adds Lévy flight distribution to the search of scout bees.

Posted Content
TL;DR: The empirical results for optimizing deep neural networks demonstrate that the stochastic variant of Nesterov's accelerated gradient method achieves a good tradeoff (between speed of convergence in training error and robustness of convergence in testing error) among the three stochastic methods.
Abstract: Recently, stochastic momentum methods have been widely adopted in training deep neural networks. However, their convergence analysis is still underexplored at the moment, in particular for non-convex optimization. This paper fills the gap between practice and theory by developing a basic convergence analysis of two stochastic momentum methods, namely the stochastic heavy-ball method and the stochastic variant of Nesterov's accelerated gradient method. We hope that the basic convergence results developed in this paper can serve as a reference for the convergence of stochastic momentum methods and also as baselines for comparison in future development of stochastic momentum methods. The novelty of the convergence analysis presented in this paper is a unified framework, revealing more insights about the similarities and differences between different stochastic momentum methods and the stochastic gradient method. The unified framework exhibits a continuous change from the gradient method to Nesterov's accelerated gradient method and finally the heavy-ball method, incurred by a free parameter, which can help explain a similar change observed in the testing error convergence behavior for deep learning. Furthermore, our empirical results for optimizing deep neural networks demonstrate that the stochastic variant of Nesterov's accelerated gradient method achieves a good tradeoff (between speed of convergence in training error and robustness of convergence in testing error) among the three stochastic methods.
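The two stochastic momentum methods analyzed can be sketched side by side on an assumed least-squares objective (step sizes and batch size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((200, 20))
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(200)

def sgrad(w, batch):                           # mini-batch least-squares gradient
    Xb, yb = X[batch], y[batch]
    return Xb.T @ (Xb @ w - yb) / len(batch)

alpha, beta = 0.01, 0.9
w_hb = np.zeros(20); w_hb_prev = w_hb.copy()   # stochastic heavy ball (SHB)
w_nag = np.zeros(20); m = np.zeros(20)         # stochastic NAG variant

for _ in range(2000):
    batch = rng.choice(200, 16, replace=False)
    # SHB: x_{k+1} = x_k - alpha*g(x_k) + beta*(x_k - x_{k-1})
    g = sgrad(w_hb, batch)
    w_hb, w_hb_prev = w_hb - alpha*g + beta*(w_hb - w_hb_prev), w_hb
    # SNAG: gradient evaluated at the extrapolated ("look-ahead") point
    g = sgrad(w_nag + beta*m, batch)
    m = beta*m - alpha*g
    w_nag = w_nag + m

print(np.linalg.norm(X @ w_hb - y), np.linalg.norm(X @ w_nag - y))
```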

Proceedings Article
01 Jan 2016
TL;DR: This paper provides competitive convergence guarantees for without-replacement sampling under several scenarios, focusing on the natural regime of few passes over the data, yielding a nearly-optimal algorithm for regularized least squares under broad parameter regimes.
Abstract: Stochastic gradient methods for machine learning and optimization problems are usually analyzed assuming data points are sampled *with* replacement. In contrast, sampling *without* replacement is far less understood, yet in practice it is very common, often easier to implement, and usually performs better. In this paper, we provide competitive convergence guarantees for without-replacement sampling under several scenarios, focusing on the natural regime of few passes over the data. Moreover, we describe a useful application of these results in the context of distributed optimization with randomly-partitioned data, yielding a nearly-optimal algorithm for regularized least squares (in terms of both communication complexity and runtime complexity) under broad parameter regimes. Our proof techniques combine ideas from stochastic optimization, adversarial online learning and transductive learning theory, and can potentially be applied to other stochastic optimization and learning problems.
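The two sampling schemes differ by one line, as the sketch below shows on an assumed least-squares problem: without-replacement SGD reshuffles the data each pass, while with-replacement SGD draws i.i.d. indices.

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.standard_normal((500, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(500)
loss = lambda w: 0.5 * np.mean((X @ w - y) ** 2)

def sgd(without_replacement, epochs=20, lr=0.01):
    w = np.zeros(10)
    for _ in range(epochs):
        if without_replacement:
            order = rng.permutation(500)       # each point seen once per pass
        else:
            order = rng.integers(0, 500, 500)  # i.i.d. sampling with replacement
        for i in order:
            w -= lr * (X[i] @ w - y[i]) * X[i]
    return w

print(loss(sgd(True)), loss(sgd(False)))       # shuffling is typically no worse
```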

Journal ArticleDOI
TL;DR: In this paper, an adaptive isogeometric method (AIGM) for solving elliptic second-order partial differential equations with truncated hierarchical B-splines of arbitrary degree and different order of continuity is addressed.
Abstract: The problem of developing an adaptive isogeometric method (AIGM) for solving elliptic second-order partial differential equations with truncated hierarchical B-splines of arbitrary degree and different order of continuity is addressed. The adaptivity analysis holds in any number of space dimensions. We consider a simple residual-type error estimator for which we provide a posteriori upper and lower bounds in terms of local error indicators, taking also into account the critical role of oscillations as in a standard adaptive finite element setting. The error estimates are properly combined with a simple marking strategy to define a sequence of admissible locally refined meshes and corresponding approximate solutions. The design of a refine module that preserves the admissibility of the hierarchical mesh configuration between two consecutive steps of the adaptive loop is presented. The contraction property of the quasi-error, given by the sum of the energy error and the scaled error estimator, leads to the convergence proof of the AIGM.