Optimal Scaling of a Gradient Method for Distributed Resource Allocation
References
[MRR+53] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21(6):1087–1092, 1953.
S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
Frequently Asked Questions (17)
Q2. What class of algorithms are considered in this paper?
The center-free algorithm considered in this paper belongs to a more general class of gradient-like algorithms studied in [TBA86].
Q3. What is the common method to select the weight matrix?
The simplest and most commonly used method is to put a constant weight on every edge of the graph and obtain the self-weights W_ii from the equality constraint W1 = 0:

W_ij = α for (i, j) ∈ E,  W_ii = −d_i α,  and W_ij = 0 otherwise,

where d_i = |N_i| is the degree of node i.
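As an illustration of this constant-weight rule, the following sketch builds W for a small assumed graph; the 4-node path and the value of α are illustrative choices, not data from the paper (α is taken negative, consistent with the weight range discussed in Q15):

```python
# Hedged sketch of the constant-edge-weight rule in Q3.
# The 4-node path graph and alpha = -0.25 are illustrative assumptions.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]        # undirected edge list E
alpha = -0.25                           # common edge weight (assumed, negative)

W = [[0.0] * n for _ in range(n)]
for i, j in edges:
    W[i][j] = alpha                     # W_ij = alpha for (i, j) in E
    W[j][i] = alpha
for i in range(n):
    d_i = sum(1 for e in edges if i in e)   # degree d_i = |N_i|
    W[i][i] = -d_i * alpha                  # self-weight from W1 = 0

row_sums = [sum(row) for row in W]      # each row sums to zero, i.e. W1 = 0
```

Every row of W sums to zero by construction, so the equality constraint W1 = 0 holds regardless of the graph chosen.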
Q4. What is the way to minimize the convergence rate of the algorithm?
When the lower and upper bounds L and U are the only information available, it is reasonable to choose the weight matrix to minimize the guaranteed convergence rate η established in theorem 1.
Q5. What is the way to determine the weight of a graph?
For symmetric weight matrices, each edge of the graph is bidirectional and has the same weight in both directions, so each can be considered as an undirected edge with a single weight.
Q6. What is the way to minimize the rate of convergence?
In the special case of choosing the constant edge weight to minimize the guaranteed rate, the authors show that with appropriate scaling of the objective functions, the solution can be directly given in terms of the eigenvalues of the Laplacian matrix of the graph.
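Q6's closed-form answer is stated in terms of the Laplacian spectrum; the paper's exact formula is not reproduced in this summary, but the quantities it depends on are easy to compute. A hedged sketch for an assumed 4-node path graph:

```python
import numpy as np

# Hedged sketch: compute the Laplacian eigenvalues that, per Q6, determine
# the optimal constant edge weight. The 4-node path graph is an illustrative
# assumption; the paper's closed-form expression itself is not reproduced here.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
L = np.zeros((n, n))                    # graph Laplacian (degrees minus adjacency)
for i, j in edges:
    L[i, i] += 1.0
    L[j, j] += 1.0
    L[i, j] -= 1.0
    L[j, i] -= 1.0

eigs = np.sort(np.linalg.eigvalsh(L))   # 0 = eigs[0] <= eigs[1] <= ... <= eigs[-1]
lambda_2, lambda_n = eigs[1], eigs[-1]  # algebraic connectivity and largest eigenvalue
```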
Q7. How can the authors find the optimal weights for the algorithm?
The authors observe that the optimal weights (in the sense of minimizing the guaranteed convergence rate) can be found by solving a semidefinite program (SDP).
Q8. What is the meaning of center-free algorithms?
They considered an undirected graph with symmetric weights on the edges, and called algorithms of this form center-free algorithms: no central coordinating node is required, since each node updates using only information from its neighbors.
Q9. What is the LMI for symmetric W?
When the weight matrix W is symmetric, the convergence conditions reduce to

W = W^T,  W1 = 0,  (20)
2W + (1/n)11^T ≻ 0,  (21)
2U^-1 − W ≻ 0.  (22)

To see this, the authors first rewrite the LMI (14) for symmetric W:

[ 2W + (1/n)11^T    W    ]
[ W                 U^-1 ]  ≻ 0.
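A numeric sanity check of this block LMI, under assumed data: W is the constant-weight matrix of a 4-node path with α = −0.25, and U = I (neither choice is from the paper).

```python
import numpy as np

# Hedged numeric check of the symmetric-W block LMI from Q9.
# W (constant weights on a 4-node path, alpha = -0.25) and U = I are
# illustrative assumptions, not quantities from the paper.
n = 4
W = np.array([[ 0.25, -0.25,  0.00,  0.00],
              [-0.25,  0.50, -0.25,  0.00],
              [ 0.00, -0.25,  0.50, -0.25],
              [ 0.00,  0.00, -0.25,  0.25]])
U_inv = np.eye(n)                              # U = I, hence U^-1 = I
ones = np.ones((n, 1))

top_left = 2 * W + ones @ ones.T / n           # 2W + (1/n)11^T
M = np.block([[top_left, W], [W, U_inv]])      # the 2x2 block matrix of the LMI
min_eig = np.linalg.eigvalsh(M).min()          # > 0: the LMI holds strictly here
```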
Q10. What is the cost of computing the optimal weights in the SDP-based weight selection method?
While these weights evidently yield faster convergence of the method (compared to, say, a maximum-degree or Metropolis choice of weights), computing them requires real computation, i.e., the solution of an SDP.
Q11. What is the way to select the weight matrix?
Unless all the nodes have the same value of d_i u_i, the authors can always set

W_ij = −min{1/(d_i u_i), 1/(d_j u_j)},  (i, j) ∈ E.  (26)

The authors call these the Metropolis weights, because the main idea of this method relates to the Metropolis algorithm for choosing transition probabilities on a graph to make the associated Markov chain mix rapidly ([MRR+53]; see also, e.g., [DSC98, BDX03]).
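A hedged sketch of rule (26); the 4-node cycle graph and the bounds u_i are illustrative assumptions, not data from the paper:

```python
# Hedged sketch of the Metropolis weights (26) from Q11.
# The 4-node cycle graph and the bounds u_i are illustrative assumptions.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
u = [1.0, 2.0, 1.5, 1.0]                       # assumed upper bounds u_i
n = len(u)
deg = [sum(1 for e in edges if i in e) for i in range(n)]   # degrees d_i

W = [[0.0] * n for _ in range(n)]
for i, j in edges:
    w_ij = -min(1.0 / (deg[i] * u[i]), 1.0 / (deg[j] * u[j]))   # rule (26)
    W[i][j] = w_ij
    W[j][i] = w_ij
for i in range(n):
    W[i][i] = -sum(W[i][j] for j in range(n) if j != i)         # enforce W1 = 0
```

Note that no central coordination is needed: each edge weight uses only the two endpoints' degrees and bounds.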
Q12. What is the smallest eigenvalue of V?
The derivation of equation (19) holds without assuming the convergence condition (11), so long as the authors interpret λ_{n-1}(V) as the smallest eigenvalue of V excluding the zero eigenvalue associated with the eigenvector 1.
Q13. What is the weight matrix W found by solving the SDP?
(The nonsymmetric weight matrix W found by solving the SDP (31) has two positive off-diagonal entries; see the comment in the paragraph after equation (9).)
Q14. What is the way to maximize the eigenvalue of the matrix V?
This is equivalent to maximizing the second smallest eigenvalue of the matrix V, i.e.,

maximize    λ_{n-1}( L^1/2 (W + W^T − W^T U W) L^1/2 )
subject to  W ∈ S,  1^T W = 0,  W1 = 0,  (27)

where the optimization variable is W.
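Evaluating the objective of (27) for one candidate W is straightforward; in this hedged sketch the 4-node path graph, the weight matrix, and L = U = I are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: evaluate the objective of problem (27) in Q14 for one
# candidate W. The 4-node path, the weight matrix W, and L = U = I are
# illustrative assumptions, not data from the paper.
n = 4
Lap = np.array([[ 1., -1.,  0.,  0.],
                [-1.,  2., -1.,  0.],
                [ 0., -1.,  2., -1.],
                [ 0.,  0., -1.,  1.]])     # graph Laplacian of the path
W = 0.25 * Lap                             # a symmetric candidate weight matrix
L_half = np.eye(n)                         # L = I (assumed), so L^1/2 = I
U = np.eye(n)                              # U = I (assumed)

V = L_half @ (W + W.T - W.T @ U @ W) @ L_half
eigs = np.sort(np.linalg.eigvalsh(V))      # ascending order
second_smallest = eigs[1]                  # lambda_{n-1}(V) in the paper's ordering
```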
Q15. What is the condition to obtain the weights of the graph?
From the condition (23), it is straightforward to obtain the following range for the edge weights that guarantees convergence of the algorithm:

−min{1/(d_i u_i), 1/(d_j u_j)} < W_ij < 0,  (i, j) ∈ E.
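A small checker for this range; the 3-node path graph, the bounds u_i, and the candidate weights below are illustrative assumptions:

```python
# Hedged sketch: a checker for the edge-weight range of Q15. The 3-node
# path graph, the bounds u_i, and the candidate weights are illustrative
# assumptions, not data from the paper.
def in_convergence_range(w_ij, i, j, deg, u):
    """True iff w_ij lies strictly inside (-min{1/(d_i u_i), 1/(d_j u_j)}, 0)."""
    lower = -min(1.0 / (deg[i] * u[i]), 1.0 / (deg[j] * u[j]))
    return lower < w_ij < 0.0

deg = [1, 2, 1]                  # degrees on the path 0-1-2 (assumed)
u = [1.0, 1.0, 1.0]              # assumed bounds u_i
ok = in_convergence_range(-0.4, 0, 1, deg, u)    # -min{1, 1/2} = -0.5 < -0.4 < 0
bad = in_convergence_range(-0.6, 0, 1, deg, u)   # -0.6 falls below the range
```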
Q16. What is the way to solve a similar class of SDPs?
The authors discuss how to exploit sparsity in interior-point methods, as well as a simple subgradient method, for solving a similar class of SDPs in [XB03].
Q17. What is the way to solve the eigenvalue optimization problem?
Using Schur complements, the quadratic matrix inequality (28) is equivalent to the LMI

[ W + W^T − s( L^-1 − (L^-1 1 1^T L^-1)/(1^T L^-1 1) )    W^T  ]
[ W                                                       U^-1 ]  ⪰ 0.  (29)

Therefore the eigenvalue optimization problem (27) is equivalent to the SDP

maximize    s
subject to  W ∈ S,  1^T W = 0,  W1 = 0,
            [ W + W^T − s( L^-1 − (L^-1 1 1^T L^-1)/(1^T L^-1 1) )    W^T  ]
            [ W                                                       U^-1 ]  ⪰ 0,  (30)

with optimization variables s and W.
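The role of s can be checked numerically: the LMI holds for small s and fails for large s, and the SDP (30) maximizes s. In this hedged sketch the 4-node path, the symmetric W, and L = U = I are illustrative assumptions:

```python
import numpy as np

# Hedged numeric check of LMI (29) from Q17. The 4-node path, the symmetric
# W, and L = U = I are illustrative assumptions. With L = I, the term
# L^-1 - (L^-1 1 1^T L^-1)/(1^T L^-1 1) reduces to the projection I - (1/n)11^T.
n = 4
Lap = np.array([[ 1., -1.,  0.,  0.],
                [-1.,  2., -1.,  0.],
                [ 0., -1.,  2., -1.],
                [ 0.,  0., -1.,  1.]])
W = 0.25 * Lap                              # symmetric candidate weights (assumed)
ones = np.ones((n, 1))
P = np.eye(n) - ones @ ones.T / n           # projection term for L = I
U_inv = np.eye(n)                           # U = I

def lmi_min_eig(s):
    """Smallest eigenvalue of the block matrix in (29) for a given s."""
    M = np.block([[W + W.T - s * P, W.T], [W, U_inv]])
    return np.linalg.eigvalsh(M).min()

holds = lmi_min_eig(0.25) >= -1e-9          # LMI satisfied for this small s
fails = lmi_min_eig(0.50) < -1e-9           # LMI violated: s too large
```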