Regularized Jacobi Iteration for Decentralized Convex Quadratic Optimization With Separable Constraints
References
Parallel and Distributed Computation: Numerical Methods
Convex Analysis and Monotone Operator Theory in Hilbert Spaces
Proximal Algorithms
Weak convergence of the sequence of successive approximations for nonexpansive mappings
Constrained Consensus and Optimization in Multi-Agent Networks
Frequently Asked Questions (14)
Q2. What is the objective function in the PEV charging problem (1)?
The objective function in (1) encodes the total electricity cost, given by the demand (of both PEVs and non-PEVs) multiplied by the price of electricity; the price in turn depends linearly on the total demand through p(t), thus giving rise to the quadratic function in (1).
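As a small numerical illustration of this structure, the snippet below builds a cost of the form price × demand with an affine price and checks that it is quadratic in the total demand; the coefficients a and b, the non-PEV demand profile d, and the charging schedules x are invented for the example, not taken from the paper.

```python
import numpy as np

# Affine price in total demand (a, b are assumed, illustrative coefficients)
a, b = 0.1, 5.0
d = np.array([2.0, 3.0, 1.5])          # assumed non-PEV demand per time slot
x = np.array([[0.5, 0.0, 1.0],         # assumed PEV charging rates:
              [0.2, 0.8, 0.3]])        # m = 2 vehicles, 3 time slots

total = d + x.sum(axis=0)              # total demand per time slot
price = a * total + b                  # p(t) depends linearly on total demand
cost = np.sum(price * total)           # demand * price per slot, summed
```

Since cost = a·Σ_t total(t)² + b·Σ_t total(t) and total is affine in x, the cost is a convex quadratic in the charging rates, mirroring the quadratic objective in (1).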
Q3. What is the function that can be used to define the constraint set?
Letting y^i = [x^{i⊤}, h^i]^⊤ be the decision vector of agent i, the local constraint set can then be defined as Y^i = X^i ∩ {g^i(x^i) ≤ h^i}, while the objective function can be rewritten as x^⊤ Q x + q^⊤ x + ∑_{i=1}^{m} h^i, which is quadratic in y = [y^{1⊤}, . . . , y^{m⊤}]^⊤.
Q4. what is the performance criterion in this local problem?
The performance criterion in this local problem is a linear combination of the objective f(z^i, x^{−i}_k), where the variables of all agents other than the i-th are fixed to their values at iteration k, and of a quadratic regularization term penalizing the difference between z^i and agent i's own variable at iteration k, i.e., x^i_k.
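A minimal sketch of this agent-wise regularized minimization for a toy quadratic f(x) = 0.5·x^⊤Qx + q^⊤x with scalar agents and box constraints; the data Q and q, the bounds, and the weighting c·(z − x^i_k)² of the regularizer are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def jacobi_step(Q, q, x, c, lb, ub):
    """One synchronous sweep: each agent minimizes its regularized local problem."""
    m = len(x)
    x_new = np.empty(m)
    for i in range(m):
        # linear coefficient induced by the other agents' variables, fixed at x_k
        lin = Q[i] @ x - Q[i, i] * x[i] + q[i]
        # unconstrained minimizer of 0.5*Q_ii*z^2 + lin*z + c*(z - x_i)^2
        z = (2 * c * x[i] - lin) / (Q[i, i] + 2 * c)
        x_new[i] = np.clip(z, lb[i], ub[i])   # project onto X^i = [lb_i, ub_i]
    return x_new

Q = np.array([[2.0, 0.5], [0.5, 2.0]])        # assumed positive definite data
q = np.array([-1.0, -1.0])
lb, ub = np.zeros(2), np.ones(2)

x = np.zeros(2)
for _ in range(200):
    x = jacobi_step(Q, q, x, c=2.0, lb=lb, ub=ub)
# x approaches the minimizer of 0.5*x@Q@x + q@x over [0, 1]^2, here (0.4, 0.4)
```

For a scalar box-constrained quadratic, clipping the unconstrained minimizer to the box is exact, which is what makes each local step closed-form here.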
Q5. What is the performance criterion in Algorithm 1?
To implement Algorithm 1, at iteration k + 1 some central authority needs to collect the current solution of each agent and broadcast it to all others, so that each of them can compute f(·, x^{−i}_k).
Q6. What is the solution for the constraint set?
Under Assumption 1, the function f is convex and hence continuous, while the constraint set X = X^1 × · · · × X^m is non-empty and compact; as a result, by Weierstrass' theorem [12, Proposition A.8, p. 625], P admits at least one optimal solution.
Q7. How many iterations of the Jacobi algorithm are there?
Fig. 4 shows the evolution of the iterates x^i_k(t) generated by Algorithm 1 at t = 12 as a function of the iteration index k, for i = 1, . . . , 10, i.e., the first 10 vehicles of the 1000-vehicle fleet.
Q8. What is the condition Q_d + cI ≻ 0?
The condition cI − Q_z ⪰ 0 can be satisfied by choosing c > λ_max(Q_z). Recalling the formulation in (18) and (19), the update x^i_{k+1} = T̃_i(x_k), i = 1, . . . , m, in step 6 of Algorithm 1 can be equivalently written as a scaled projected gradient step, x_{k+1} = [ξ(x_k)]_X, i.e., the projection of ξ(x_k) onto the constraint set X.
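Under the same toy assumptions as above (quadratic f(x) = 0.5·x^⊤Qx + q^⊤x, scalar box-constrained agents, regularizer c·(z − x^i_k)²), this equivalence can be checked numerically: the agent-wise regularized update coincides with a projected gradient step scaled by D⁻¹ with D = diag(Q_ii + 2c). The data and the exact scaling convention are assumptions of this sketch, not the paper's.

```python
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 2.0]])   # assumed problem data
q = np.array([-1.0, -1.0])
lb, ub = np.zeros(2), np.ones(2)
c = 2.0
x = np.array([0.1, 0.9])                 # current iterate x_k

# Scaled projected gradient step: x_{k+1} = Proj_X(x_k - D^{-1} grad f(x_k))
grad = Q @ x + q
D = np.diag(Q) + 2 * c                   # diagonal scaling Q_ii + 2c
x_grad = np.clip(x - grad / D, lb, ub)

# Same point via the agent-wise regularized minimization (Jacobi step)
x_jac = np.empty(2)
for i in range(2):
    lin = Q[i] @ x - Q[i, i] * x[i] + q[i]
    x_jac[i] = np.clip((2 * c * x[i] - lin) / (Q[i, i] + 2 * c), lb[i], ub[i])
```

Both routes produce the same next iterate, which is the content of the gradient-step interpretation.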
Q9. What is the importance of the two terms in Algorithm 1?
The relative importance of these two terms is dictated by the regularization coefficient c ∈ R+, which plays a key role in determining the convergence properties of Algorithm 1.
Q10. How does the central authority collect the solution of each agent?
At iteration k + 1 of Algorithm 1, the central authority needs to collect the solution of each agent, but it only has to broadcast the aggregate quantity x̄_k = d + …
Q11. What is the optimal charging strategy for a fleet of m plug-in electric vehicles?
Here x^i(t) ∈ R is the charging rate of vehicle i at time t, γ^i ∈ R represents a prescribed charging level to be reached by each vehicle i at the end of the considered time horizon, and x̲^i(t), x̄^i(t) ∈ R are bounds on the minimum and maximum value of x^i(t), respectively.
Q12. how is the convergence to some minimizer of P guaranteed?
In that case, geometric convergence to some minimizer of P is guaranteed by means of Proposition 3.5 in [12]. The conditions on the regularization coefficient compared in the paper are c > λ_max(Q_z), c > λ_max(Q) − λ_min(Q_d), c > λ_max(Q), and c > 2(m − 1)/(2m − 1) λ_max(Q_z).
Q13. How does the algorithm converge to the optimal value of P?
By Theorem 3 of [33] it can be shown that, as far as the optimal value is concerned, Algorithm 1 converges for c > 2(m − 1)/(2m − 1) λ_max(Q_z).
Q14. What justifies the last equality in (28)?
This implies that λ_max(Q_z) ≤ v^⊤Qv / (v^⊤v) − λ_min(Q_d) ≤ max_{z≠0} z^⊤Qz / (z^⊤z) − λ_min(Q_d) = λ_max(Q) − λ_min(Q_d), (28) where the last equality follows by recalling the definition of the induced 2-norm of a symmetric square matrix.
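The chain of inequalities in (28) can be sanity-checked numerically. Below, Q_d is taken as the diagonal part of Q (i.e., scalar blocks, an assumption of this sketch) and Q_z = Q − Q_d; the outer inequality λ_max(Q_z) ≤ λ_max(Q) − λ_min(Q_d) is verified on a random symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A @ A.T                              # random symmetric (PSD) test matrix

Qd = np.diag(np.diag(Q))                 # diagonal part (scalar blocks assumed)
Qz = Q - Qd                              # off-diagonal ("coupling") part

eig = np.linalg.eigvalsh                 # ascending eigenvalues, symmetric input
lhs = eig(Qz)[-1]                        # lam_max(Q_z)
rhs = eig(Q)[-1] - eig(Qd)[0]            # lam_max(Q) - lam_min(Q_d)
```

The inequality holds because for any unit vector v, v^⊤Q_z v = v^⊤Qv − v^⊤Q_d v ≤ λ_max(Q) − λ_min(Q_d).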