On decentralized convex optimization in a multi-agent setting with separable constraints and its application to optimal charging of electric vehicles

Luca Deori, Kostas Margellos, Maria Prandini

L. Deori and M. Prandini are with the Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy; e-mail: {luca.deori, maria.prandini}@polimi.it. K. Margellos is with the Department of Engineering Science, University of Oxford, Parks Road, OX1 3PJ, Oxford, United Kingdom; e-mail: kostas.margellos@eng.ox.ac.uk. Research was supported by the European Commission, H2020, under the project UnCoVerCPS, grant number 643921.
Abstract: We develop a decentralized algorithm for multi-agent, convex optimization programs, subject to separable constraints, where the constraint function of each agent involves only its local decision vector, while the decision vectors of all agents are coupled via a common objective function. We construct a variant of the so-called Jacobi algorithm and show that, when the objective function is quadratic, convergence to some minimizer of the centralized problem counterpart is achieved. Our algorithm then serves as an effective alternative to gradient-based methodologies. We illustrate its efficacy by applying it to the problem of optimal charging of electric vehicles, where, as opposed to earlier approaches, we show convergence to an optimal charging scheme for a finite, possibly large, number of vehicles.
I. INTRODUCTION
Optimization in multi-agent systems has attracted significant attention in the control and operations research communities, due to its applicability to different domains, e.g., energy systems, mobility systems, robotic networks, etc. In this paper we focus on a specific class of multi-agent optimization programs that are convex and are subject to constraints that are separable, i.e., the constraint function of each agent involves only its local decision vector. The agents' decision vectors are, however, coupled by means of a common objective function. The considered structure, although specific, captures a wide class of engineering problems, like the electric vehicle optimal charging problem studied in this paper. Solving such problems in a centralized fashion would require agents to share their local constraint functions with each other, which would raise information privacy issues. Even if this were not an issue, solving the problem in one shot, without exploiting the separable structure of the constraints, would lead to an optimization program of larger size, involving the decision variables and constraints of all agents, and would possibly pose computational challenges.
To allow for a computationally tractable solution, while accounting for information privacy, we adopt a decentralized perspective, where agents cooperate to obtain an optimal solution of the centralized problem. We follow an iterative algorithm, where at every iteration each agent solves a local optimization problem with respect to its own local decision vector, using the tentative solutions computed by the other agents at the previous iteration. Agents then exchange with each other their new tentative solutions, or broadcast them to some central authority that sends an update to each agent; the process is repeated on the basis of the received information.
Algorithms for the decentralized solution of convex optimization problems with separable constraints can be found in [1], [2], and references therein. Two main algorithmic directions can be distinguished, both of them relying on an iterative process. The first one is based on each agent performing at every iteration a local gradient descent step, while keeping the decision variables of all other agents fixed to the values communicated at the previous iteration [3]-[5]. Under certain structural assumptions (differentiability of the objective function and Lipschitz continuity of its gradient), it is shown that this scheme converges to some minimizer of the centralized problem, for an appropriate gradient step-size. The second direction involves mainly the so-called Jacobi algorithm, which serves as an alternative to gradient algorithms. Within this framework, one can also consider the Gauss-Seidel algorithm, which, however, is not parallelizable unless a coloring scheme is adopted (see [1]), as well as block coordinate descent methods [6]. Under this set-up, at every iteration, instead of performing a gradient step, each agent minimizes the common objective function subject to its local constraints, while keeping the decision vectors of all other agents fixed to their values at the previous iteration. It is shown that the Jacobi algorithm converges under certain contractiveness requirements, which are typically satisfied only if the objective function is jointly strictly convex with respect to the decision vectors of all agents.
An alternative research direction, with notable activity, involves a non-cooperative treatment of the problem, using tools from mean-field and aggregative game theory. A complete theoretical characterization for the stochastic, continuous-time variant of the problem, but in the absence of constraints, is provided in [7], [8]. The deterministic, discrete-time problem variant, accounting for the presence of separable constraints, is treated using fixed-point theoretic tools in [9]. In all cases, the considered algorithm is shown to converge not to a minimizer, but to a Nash equilibrium of a related game, in the limiting case where the number of agents tends to infinity. Several applications in this context have been provided, e.g., optimal power flow type problems [10], optimal charging of electric vehicles [11], [12], etc.
In this paper we adopt a cooperative point of view, and construct a Jacobi-like algorithm. In contrast to the standard Jacobi algorithm, the local minimization that each agent solves at every iteration of the algorithm also includes an inertial term that encompasses the solution of that agent at the previous iteration. Our contributions can be summarized as follows: 1) We establish an equivalence between the set of minimizers of the problem under study and the set of fixed-points of the mapping induced by our algorithm, for any convex objective function. 2) For the case where the objective function is quadratic, we show that our algorithm converges to some minimizer of the centralized problem, thus constituting an alternative to gradient methods, without requiring strict convexity of the objective function as in the standard Jacobi algorithm. This result extends the equivalence between proximal operators and gradient algorithms observed in [2] for the single-agent case to the multi-agent setting. 3) We apply the proposed algorithm to the problem of optimal charging of electric vehicles, extending the results of [9], [11], [12], and achieving convergence to an optimal charging scheme with a finite number of vehicles.
Notation
For any $a \in \mathbb{R}^n$, $\|a\|$ denotes the Euclidean norm of $a$, and $\|a\|_Q$ denotes the $Q$-weighted Euclidean norm of $a$. For a vector $a$, we denote by $a_i$ the $i$-th block component of $a$, whereas $a_{-i}$ denotes the vector emanating from $a$ by removing $a_i$. Similarly, for a matrix $A$, $A_{i,i}$ denotes the $i$-th diagonal block of $A$, whereas $A_{i,-i}$ denotes the matrix composed of the $i$-th block column of $A$ and all but the $i$-th block row. For a continuously differentiable function $J(\cdot) : \mathbb{R}^n \to \mathbb{R}$, $\nabla J(a)$ is the gradient of $J(\cdot)$ evaluated at $a \in \mathbb{R}^n$, and $\nabla_i J(a)$ is its $i$-th component, $i = 1, \ldots, n$. $[a]^U_Q$ denotes the projection of a vector $a$ onto the set $U$ with respect to the $Q$-weighted Euclidean norm. $\mathbf{1}_{n \times m}$ denotes the matrix of dimension $n \times m$ with all entries equal to 1, and $I_c$ denotes the identity matrix of appropriate dimension, multiplied by the scalar $c \in \mathbb{R}$.
II. PROBLEM STATEMENT
Consider the following optimization problem:
$$\mathcal{P}: \ \min_{\{u_i \in \mathbb{R}^{n_i}\}_{i=1}^{m}} \ J(u_1, \ldots, u_m) \quad (1)$$
$$\text{subject to } u_i \in U_i, \ \text{for all } i = 1, \ldots, m, \quad (2)$$
where $J(\cdot, \ldots, \cdot) : \mathbb{R}^{n_1} \times \ldots \times \mathbb{R}^{n_m} \to \mathbb{R}$, and $U_i \subseteq \mathbb{R}^{n_i}$, for all $i = 1, \ldots, m$. Let $n = \sum_{i=1}^{m} n_i$ and $U = U_1 \times \ldots \times U_m$. We impose the following assumption throughout the paper.

Assumption 1: The function $J(\cdot, \ldots, \cdot) : \mathbb{R}^{n_1} \times \ldots \times \mathbb{R}^{n_m} \to \mathbb{R}$ is continuously differentiable, and jointly convex with respect to all arguments. Moreover, the sets $U_i \subseteq \mathbb{R}^{n_i}$, $i = 1, \ldots, m$, are non-empty, compact and convex.
Under Assumption 1, by the Weierstrass theorem (Proposition A8, p. 625 in [1]), $\mathcal{P}$ admits at least one optimal solution. Note, however, that $\mathcal{P}$ does not necessarily admit a unique minimizer. With a slight abuse of notation, for each $i$, $i = 1, \ldots, m$, let $J(\cdot, u_{-i}) : \mathbb{R}^{n_i} \to \mathbb{R}$ be the objective function in (1) as a function of the decision vector $u_i$ of agent $i$, when the decision vectors of all other agents are fixed to $u_{-i} \in \mathbb{R}^{n - n_i}$. We will occasionally also write $J(u)$ instead of $J(u_1, \ldots, u_m)$, for $u = (u_1, \ldots, u_m)$; the interpretation will always be clear from the context. Problem $\mathcal{P}$ can be thought of as a multi-agent problem, where agents have a local decision vector $u_i$ and a local constraint set $U_i$, and cooperate to determine a minimizer of $J$, which couples the individual decision vectors. Motivated by the particular structure of $\mathcal{P}$, with separable constraint sets, we follow the decentralized, iterative approach described in Algorithm 1, which allows us to cope with privacy and computational issues.

Algorithm 1 Decentralized algorithm
1: Initialization
2: $k = 0$.
3: Consider $u_i(0) \in U_i$, for all $i = 1, \ldots, m$.
4: For $i = 1, \ldots, m$ repeat until convergence
5: Agent $i$ receives $u_{-i}(k)$ from the central authority.
6: $u_i(k+1) = \lambda u_i(k) + (1-\lambda) \arg\min_{z_i \in U_i} \big( J(z_i, u_{-i}(k)) + c \|z_i - u_i(k)\|^2 \big)$.
7: $k \leftarrow k + 1$.

Initially, each agent $i$, $i = 1, \ldots, m$, starts with some tentative value $u_i(0) \in U_i$, such that $(u_1(0), \ldots, u_m(0))$ is feasible and constitutes an estimate of what the minimizer of $\mathcal{P}$ might be (step 3, Algorithm 1). At iteration $k+1$, each agent $i$ receives the values of all other agents $u_{-i}(k)$ from the central authority (step 5, Algorithm 1), and updates its estimate by averaging, with weight $\lambda \in (0, 1)$, the previous estimate and the solution of a local minimization problem (step 6, Algorithm 1). The performance criterion in this local problem is a linear combination of the objective $J(z_i, u_{-i}(k))$, where the variables of all agents apart from the $i$-th one are fixed to their values at iteration $k$, and a quadratic term penalizing the difference between the decision variables and the value of agent $i$'s own variable at iteration $k$, i.e., $u_i(k)$. The relative importance of these two terms is dictated by $c \in \mathbb{R}_+$; we defer the discussion on the importance of the penalty term to Section IV. Note that, under Assumption 1 and due to the presence of the quadratic penalty term, the resulting problem is strictly convex with respect to $z_i$, and hence admits a unique minimizer.
III. PRELIMINARY DEFINITIONS AND RESULTS
A. Definitions
1) Minimizers: By (1)-(2), the set of minimizers of $\mathcal{P}$ is
$$M = \Big\{ u \in U : u \in \arg\min_{\{z_i \in U_i\}_{i=1}^{m}} J(z_1, \ldots, z_m) \Big\}. \quad (3)$$
Following the discussion below Assumption 1, $M$ is non-empty. Note that $M$ is not necessarily a singleton; this will be the case if $J$ is jointly strictly convex with respect to its arguments.
2) Fixed-points: For each $i$, $i = 1, \ldots, m$, consider the mappings $T_i(\cdot) : U \to U_i$ and $\tilde{T}_i(\cdot) : U \to U_i$, defined such that, for any $u = (u_1, \ldots, u_m) \in U$,
$$T_i(u) = \arg\min_{z_i \in U_i} \|z_i - u_i\|^2 \quad (4)$$
$$\text{subject to } J(z_i, u_{-i}) \leq \min_{\zeta_i \in U_i} J(\zeta_i, u_{-i}),$$
$$\tilde{T}_i(u) = \arg\min_{z_i \in U_i} J(z_i, u_{-i}) + c\|z_i - u_i\|^2. \quad (5)$$
The mapping in (4) serves as a tie-break rule to select, in case $J(\cdot, u_{-i})$ admits multiple minimizers, the one closest to $u_i$ with respect to the Euclidean norm. Note that the minimizers in both (4) and (5) are unique, so that both mappings are well defined. Note also that, with $u(k)$ in place of $u$, (5) implies that the update in step 6 of Algorithm 1 can be equivalently written as $u_i(k+1) = \lambda u_i(k) + (1 - \lambda)\tilde{T}_i(u(k))$.
Define also the mappings $T(\cdot) : U \to U$ and $\tilde{T}(\cdot) : U \to U$, such that their components are given by $T_i(\cdot)$ and $\tilde{T}_i(\cdot)$, respectively, for $i = 1, \ldots, m$, i.e., $T(\cdot) = (T_1(\cdot), \ldots, T_m(\cdot))$ and $\tilde{T}(\cdot) = (\tilde{T}_1(\cdot), \ldots, \tilde{T}_m(\cdot))$. The mappings $T(\cdot)$ and $\tilde{T}(\cdot)$ can be equivalently written as
$$T(u) = \arg\min_{z \in U} \sum_{i=1}^{m} \|z_i - u_i\|^2 \quad (6)$$
$$\text{subject to } J(z_i, u_{-i}) \leq \min_{\zeta_i \in U_i} J(\zeta_i, u_{-i}), \ i = 1, \ldots, m,$$
$$\tilde{T}(u) = \arg\min_{z \in U} \sum_{i=1}^{m} J(z_i, u_{-i}) + c\|z_i - u_i\|^2, \quad (7)$$
where the terms inside the summations are decoupled. The sets of fixed-points of $T(\cdot)$ and $\tilde{T}(\cdot)$ are, respectively, given by $F_T = \{u \in U : u = T(u)\}$ and $F_{\tilde{T}} = \{u \in U : u = \tilde{T}(u)\}$.
B. Connections between minimizers and fixed-points
We report here a fundamental optimality result (e.g., see Proposition 3.1 in [1]) that we will often use in the sequel.

Proposition 1 (Proposition 3.1 in [1]): Consider any $n \in \mathbb{N}_+$, and assume that $J(\cdot) : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function, and $U \subseteq \mathbb{R}^n$ is non-empty, closed and convex. It holds that: i) if $u \in U$ minimizes $J(\cdot)$ over $U$, then $(z - u)^\top \nabla J(u) \geq 0$ for all $z \in U$; ii) if $J(\cdot)$ is also convex on $U$, then the condition of the previous part is also sufficient for $u$ to minimize $J(\cdot)$ over $U$, i.e., $u \in \arg\min_{z \in U} J(z)$.
The following propositions show that the set of minimizers $M$ of $\mathcal{P}$ in (3) and the set of fixed-points $F_T$ of the mapping $T(\cdot)$ in (6) coincide, and that the set of fixed-points $F_T$ of $T(\cdot)$ and the set of fixed-points $F_{\tilde{T}}$ of $\tilde{T}(\cdot)$ coincide.

Proposition 2: Under Assumption 1, $M = F_T$.

Proposition 3: Under Assumption 1, $F_T = F_{\tilde{T}}$.

Proof: The proofs are omitted due to space limitations; they can be found in [13, Propositions 2 and 3].

Note that a connection between minimizers and fixed-points, similar to the one in Proposition 2, has also been investigated in [14], in the context of Nash equilibria in non-cooperative games.
IV. MAIN CONVERGENCE RESULT

In this section we strengthen Assumption 1, and focus on convex optimization problems with a convex, quadratic objective function.

Assumption 2: For any $u \in U$, $J(u) = u^\top Q u + q^\top u$, where $Q \succeq 0$ and $q \in \mathbb{R}^n$.

Note that $Q$ can be assumed to be symmetric (i.e., $Q = Q^\top$) without loss of generality. Moreover, if additional terms that depend on the local decision vectors $u_i$, $i = 1, \ldots, m$, and encode the utility function of each agent were present in the objective function, they could be incorporated in the local constraint sets $U_i$, $i = 1, \ldots, m$, by means of an epigraphic reformulation, thus bringing the cost back to a quadratic form.
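As a minimal illustration of this epigraphic reformulation (the local utilities $\ell_i$ and epigraph variables $t_i$ below are hypothetical notation, not from the paper), suppose each agent adds a convex utility term $\ell_i(u_i)$ to the quadratic cost. Then
$$\min_{\{u_i \in U_i\}_{i=1}^{m}} u^\top Q u + q^\top u + \sum_{i=1}^{m} \ell_i(u_i) \ \equiv \ \min_{\{(u_i, t_i) \in \tilde{U}_i\}_{i=1}^{m}} u^\top Q u + q^\top u + \sum_{i=1}^{m} t_i,$$
with $\tilde{U}_i = \{(u_i, t_i) : u_i \in U_i, \ \ell_i(u_i) \leq t_i \leq \bar{t}_i\}$ for any $\bar{t}_i \geq \max_{u_i \in U_i} \ell_i(u_i)$; the upper bound keeps $\tilde{U}_i$ compact, the set remains convex, and the objective is again quadratic (plus a linear term) in the augmented decision vector, so Assumptions 1 and 2 continue to hold.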
Under Assumption 2, the mapping $\tilde{T}(\cdot)$ in (7) is given by
$$\tilde{T}(u) = \arg\min_{z \in U} \sum_{i=1}^{m} J(z_i, u_{-i}) + c\|z_i - u_i\|^2 \quad (8)$$
$$= \arg\min_{z \in U} \sum_{i=1}^{m} z_i^\top (Q_{i,i} + I_c) z_i + (2u_{-i}^\top Q_{i,-i} - 2u_i^\top I_c + q_i^\top) z_i$$
$$= \arg\min_{z \in U} z^\top (Q_d + I_c) z + (2u^\top Q_z - 2u^\top I_c + q^\top) z,$$
where, for all $i = 1, \ldots, m$, $Q_{i,i}$ is the $i$-th diagonal block of $Q$, corresponding to the decision vector $z_i$, $Q_d$ is a block diagonal matrix whose $i$-th block is $Q_{i,i}$, and $Q_z = Q - Q_d$. Notice the slight abuse of notation in (8), where the weighted identity matrices $I_c$ in the second and third equalities are not of the same dimension. Let $\xi(u) = (Q_d + I_c)^{-1}(I_c u - Q_z u - q/2)$ denote the unconstrained minimizer of (8). Then
$$\tilde{T}(u) = \arg\min_{z \in U} (z - \xi(u))^\top (Q_d + I_c)(z - \xi(u)) = [\xi(u)]^U_{Q_d + I_c}.$$
Note that $Q_d + I_c$ is always positive definite for $c \in \mathbb{R}_+$, so that the projection $[\xi(u)]^U_{Q_d + I_c}$ is well defined.
In the next proposition the non-expansiveness of the mapping $\tilde{T}(\cdot)$ is proven. This property will be exploited in Theorem 1 to establish convergence of Algorithm 1.

Proposition 4: Consider Assumptions 1 and 2. If
$$\begin{bmatrix} 2Q & Q \\ Q & Q_d + I_c \end{bmatrix} \succeq 0, \quad (9)$$
the mapping $\tilde{T}(u) = [\xi(u)]^U_{Q_d + I_c}$ is non-expansive with respect to $\|\cdot\|_{Q_d + I_c}$, namely $\|\tilde{T}(u) - \tilde{T}(v)\|_{Q_d + I_c} \leq \|u - v\|_{Q_d + I_c}$, for all $u, v \in U$.

Proof: Any projection mapping is non-expansive (see Proposition 3.2 in [1]). Therefore, we have that
$$\|\tilde{T}(u) - \tilde{T}(v)\|_{Q_d + I_c} = \big\| [\xi(u)]^U_{Q_d + I_c} - [\xi(v)]^U_{Q_d + I_c} \big\|_{Q_d + I_c} \leq \|\xi(u) - \xi(v)\|_{Q_d + I_c}. \quad (10)$$
We will show that, if (9) holds,
$$\|\xi(u) - \xi(v)\|_{Q_d + I_c} \leq \|u - v\|_{Q_d + I_c}, \quad (11)$$
proving that the mapping $\tilde{T}(\cdot)$ is non-expansive. Replacing in (11) the expression of $\xi(\cdot)$, and squaring both sides, yields
$$\|(Q_d + I_c)^{-1}(I_c - Q_z)(u - v)\|^2_{Q_d + I_c} \leq \|u - v\|^2_{Q_d + I_c}$$
$$\Leftrightarrow \ \|(I - (Q_d + I_c)^{-1} Q)(u - v)\|^2_{Q_d + I_c} \leq \|u - v\|^2_{Q_d + I_c}, \quad (12)$$
where $I$ is an identity matrix of appropriate dimension, and the equivalence follows from the fact that $Q_z = Q - Q_d$. By bringing both terms in (12) to the left-hand side and using the definition of $\|\cdot\|_{Q_d + I_c}$, (12) is satisfied if
$$(I - (Q_d + I_c)^{-1} Q)^\top (Q_d + I_c)(I - (Q_d + I_c)^{-1} Q) - (Q_d + I_c) \preceq 0 \ \Leftrightarrow \ 2Q - Q(Q_d + I_c)^{-1} Q \succeq 0, \quad (13)$$
where the last inequality follows after some algebraic calculations and the fact that $Q$ and $Q_d + I_c$ are symmetric. Condition (13) can be rewritten by means of the Schur complement, finally obtaining (9).

Notice that if $\bar{c}$ satisfies (9), then any $c \geq \bar{c}$ satisfies (9) as well. To see this, take any $c \geq \bar{c}$ and rewrite (9) as
$$\begin{bmatrix} 2Q & Q \\ Q & Q_d + I_{\bar{c}} \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & I_{\tilde{c}} \end{bmatrix} \succeq 0, \quad (14)$$
where $\tilde{c} = c - \bar{c}$. The matrices in (14) are both positive semi-definite and, hence, their sum is also positive semi-definite.

Condition (9) can be easily checked by means of standard LMI solvers. In fact, it can be shown that for any $Q \succeq 0$ there exists $c$ such that (9) is satisfied. Indeed, condition (9) can be equivalently written as
$$\begin{bmatrix} u^\top & v^\top \end{bmatrix} \begin{bmatrix} 2Q & Q \\ Q & Q_d + I_c \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} \geq 0, \ \text{for all } u, v \in \mathbb{R}^n$$
$$\Leftrightarrow \ 2u^\top Qu + v^\top (Q_d + I_c)v + 2v^\top Qu \geq 0. \quad (15)$$
From the fact that $(u + v)^\top Q(u + v) \geq 0$ it follows that $2v^\top Qu \geq -u^\top Qu - v^\top Qv$. Replacing the latter in (15), the following sufficient condition for (9) is obtained:
$$u^\top Qu + v^\top (Q_d + I_c - Q)v \geq 0, \ \text{for all } u, v \in \mathbb{R}^n. \quad (16)$$
The first term in (16) is non-negative because $Q \succeq 0$, while the second term can be made non-negative by exploiting the matrix $I_c$, which shifts the eigenvalues of $Q_d - Q$ by $c$. Indeed, letting $\lambda_{I_c + Q_d - Q}$ and $\lambda_{Q_d - Q}$ denote the eigenvalues of $I_c + Q_d - Q$ and $Q_d - Q$, respectively, we have
$$\lambda_{I_c + Q_d - Q} = \lambda_{Q_d - Q} + c. \quad (17)$$
Hence, (16) can be satisfied by choosing $c$ such that $c \geq \lambda^{\max}_{Q - Q_d}$, where $\lambda^{\max}_{Q - Q_d}$ denotes the maximum eigenvalue of the matrix $Q - Q_d$. Note that, since $Q - Q_d$ is symmetric with zero trace, its eigenvalues are real and at least one of them is non-negative. As a result, $c \geq \lambda^{\max}_{Q - Q_d} \geq 0$.
Theorem 1: Consider Assumptions 1 and 2. If $c \in \mathbb{R}_+$ is chosen so that (9) is satisfied, then Algorithm 1 converges to a minimizer of $\mathcal{P}$.

Proof: Step 6 of Algorithm 1 corresponds to the so-called Krasnoselskij iteration [15] (referred to as an averaged operator in [2]),
$$u(k + 1) = \lambda u(k) + (1 - \lambda)\tilde{T}(u(k)), \quad (18)$$
of the mapping $\tilde{T}(\cdot)$, which, according to Proposition 4, is non-expansive if (9) is satisfied. By Theorem 3.2 in [15], for any non-expansive mapping $\tilde{T}(\cdot) : U \to U$, with $U$ compact and convex, the Krasnoselskij iteration (18) converges to a fixed-point of $\tilde{T}(\cdot)$ for any $\lambda \in (0, 1)$ and any initial condition $u(0) \in U$. Under Assumptions 1 and 2, the mapping $\tilde{T}(\cdot)$ defined in (8) satisfies the aforementioned requirements; hence, Algorithm 1 leads to a fixed-point of $\tilde{T}(\cdot)$. By Propositions 2 and 3, this fixed-point is also a minimizer of $\mathcal{P}$.
It should be noted that the condition on $c$ in Proposition 4 is related to the requirement imposed in [1] (Proposition 3.4, p. 214) on the step-size of a gradient-based approach. This is due to the fact that, when $J(\cdot)$ satisfies Assumption 2, step 6 of Algorithm 1 can be shown to be equivalent to a scaled gradient projection algorithm (see Section 3.3.3 in [1]), with $1/c$ playing the role of the gradient step-size and with $(Q_d + I_c)^{-1}$ (the inverse of the Hessian of the objective function in step 6 of Algorithm 1) being the scaling matrix. For such algorithms, convergence results exist for an appropriate step-size, which is in turn related to $c$. In particular, the step-size for which convergence is guaranteed is related to the Lipschitz constant of the gradient of the objective function and, under Assumption 2, can be shown to be equivalent to choosing $c$ so that (9) is satisfied.
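To make this connection concrete, a short derivation from the definitions of $\xi(\cdot)$ and $Q_z$ above shows that the unconstrained minimizer in step 6 is a scaled gradient step:
$$\xi(u) = (Q_d + I_c)^{-1}\big(I_c u - Q_z u - \tfrac{q}{2}\big) = (Q_d + I_c)^{-1}\big((Q_d + I_c)u - Qu - \tfrac{q}{2}\big) = u - \tfrac{1}{2}(Q_d + I_c)^{-1} \nabla J(u),$$
since $Q_z = Q - Q_d$ and $\nabla J(u) = 2Qu + q$ under Assumption 2. Hence $\tilde{T}(u) = \big[u - \tfrac{1}{2}(Q_d + I_c)^{-1} \nabla J(u)\big]^U_{Q_d + I_c}$, i.e., a projected gradient step with scaling matrix $(Q_d + I_c)^{-1}$.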
Hence, in the case of a convex quadratic objective function, both the proposed algorithm and a gradient-based approach are applicable. Our analysis, however, not only complements the scaled gradient projection algorithm by constituting its Jacobi-like equivalent, without requiring strict convexity of the objective function as is usually the case for the standard Jacobi algorithm, but also follows a different analysis from that in [1], motivated by the fixed-point theoretic results of [9]. Moreover, the results of Section III are valid for any convex objective function, not necessarily quadratic, thus opening the road to extending the convergence results of this section by relaxing Assumption 2; see [13] for preliminary results.
A. Alternative implementations

We investigate the convergence properties of Algorithm 1 when the so-called Krasnoselskij iteration (step 6 of Algorithm 1) is replaced by the simpler Picard-Banach iteration:
$$u(k + 1) = \tilde{T}(u(k)). \quad (19)$$
Note that this corresponds to setting $\lambda = 0$ in step 6 of Algorithm 1. As in the proof of Theorem 1, once convergence is proven, it is easily shown that the resulting solution is a minimizer of $\mathcal{P}$, due to Propositions 2 and 3.

For Algorithm 1 to converge when step 6 is replaced by (19), the mapping $\tilde{T}(\cdot)$ has to be either contractive or firmly non-expansive [15]. We first investigate conditions under which $\tilde{T}(\cdot)$ is contractive. Following a reasoning similar to that in the proof of Proposition 4, it can be seen that if
$$\begin{bmatrix} 2Q - (1 - \alpha^2)(Q_d + I_c) & Q \\ Q & Q_d + I_c \end{bmatrix} \succeq 0 \quad (20)$$
is satisfied for some $\alpha \in [0, 1)$, then $\|\tilde{T}(u) - \tilde{T}(v)\|_{Q_d + I_c} \leq \alpha \|u - v\|_{Q_d + I_c}$, for all $u, v \in U$, which in turn implies that the mapping $\tilde{T}(\cdot)$ is contractive with respect to $\|\cdot\|_{Q_d + I_c}$ [15]. Condition (20), however, cannot always be satisfied by appropriately choosing $c$; indeed, it is necessary that $Q$ is positive definite for (20) to be satisfied. The latter is equivalent to requiring that the objective function in Assumption 2 is strictly convex.
We now investigate conditions under which $\tilde{T}(\cdot)$ is firmly non-expansive. Motivated by the analysis of [9], we have that
$$\|\tilde{T}(u) - \tilde{T}(v)\|^2_{Q_d + I_c} = \big\| [\xi(u)]^U_{Q_d + I_c} - [\xi(v)]^U_{Q_d + I_c} \big\|^2_{Q_d + I_c}$$
$$\leq (\xi(u) - \xi(v))^\top (Q_d + I_c)\big([\xi(u)]^U_{Q_d + I_c} - [\xi(v)]^U_{Q_d + I_c}\big)$$
$$= (u - v)^\top (I - Q(Q_d + I_c)^{-1})(Q_d + I_c)\big([\xi(u)]^U_{Q_d + I_c} - [\xi(v)]^U_{Q_d + I_c}\big)$$
$$= (u - v)^\top (Q_d + I_c - Q)\big([\xi(u)]^U_{Q_d + I_c} - [\xi(v)]^U_{Q_d + I_c}\big), \quad (21)$$
where the first inequality follows from the definition of a firmly non-expansive mapping and the fact that any projection mapping is firmly non-expansive (see Proposition 4.8 in [16]). The second equality is due to the definition of $\xi(\cdot)$, and the last one follows after some computations.

Since $Q \succeq 0$, we have that $\|\tilde{T}(u) - \tilde{T}(v)\|^2_{Q_d + I_c - Q} \leq \|\tilde{T}(u) - \tilde{T}(v)\|^2_{Q_d + I_c}$. This, together with (21), implies that
$$\big\| [\xi(u)]^U_{Q_d + I_c} - [\xi(v)]^U_{Q_d + I_c} \big\|^2_{Q_d + I_c - Q} \leq (u - v)^\top (Q_d + I_c - Q)\big([\xi(u)]^U_{Q_d + I_c} - [\xi(v)]^U_{Q_d + I_c}\big). \quad (22)$$
By the definition of a firmly non-expansive mapping [15], (22) implies that, if $Q_d + I_c - Q \succ 0$, $\tilde{T}(\cdot)$ is firmly non-expansive with respect to $\|\cdot\|_{Q_d + I_c - Q}$. The condition $Q_d + I_c - Q \succ 0$ can be satisfied by choosing $c$ as in (15)-(17), rendering (19) an alternative to step 6 of Algorithm 1.
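Continuing the illustrative checker sketched earlier (the helper name is hypothetical), the condition enabling this Picard-Banach variant can be verified numerically:

```python
import numpy as np

def picard_banach_ok(Q, Qd, c, tol=1e-9):
    """Check Qd + c*I - Q > 0, under which T~ is firmly non-expansive
    and the Picard-Banach iteration u(k+1) = T~(u(k)) converges."""
    n = Q.shape[0]
    return np.linalg.eigvalsh(Qd + c * np.eye(n) - Q).min() > tol
```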
B. Information exchange

To implement Algorithm 1, at iteration $k+1$ some central authority, or common processing node, needs to collect and broadcast the tentative solution of each agent to all others, and the agents need to know the common objective function $J(\cdot)$ so that each of them can compute $J(\cdot, u_{-i}(k))$ (alternatively, the central authority can broadcast it to each agent $i$, $i = 1, \ldots, m$). However, the amount of information that needs to be exchanged can be significantly reduced when the objective function exhibits some particular structure. This is the case, e.g., for objective functions that couple the agents only through the average of some variables. The central authority then needs to collect the solutions of all agents, but it only has to broadcast their average value. Each agent is then able to compute $J(\cdot, u_{-i}(k))$ by subtracting from the average the value of its own local decision vector at iteration $k$, i.e., $u_i(k)$.
V. OPTIMAL CHARGING OF ELECTRIC VEHICLES

We consider the problem of optimizing the charging strategy for a fleet of $m$ plug-in electric vehicles (PEVs) over a finite horizon $T$. We follow the formulation of [9], [11], [12]; however, our algorithm converges to a minimizer of the centralized problem counterpart with a finite number of agents/vehicles, as opposed to the aforementioned references, where convergence to a Nash equilibrium in the limiting case of an infinite population of agents is established. The PEV charging problem is formulated as follows:
$$\min_{\{u_{i,t}\}_{i=1,\ldots,m}^{t=0,\ldots,T}} \ \frac{1}{m}\sum_{t=0}^{T} p_t \Big(d_t + \sum_{i=1}^{m} u_{i,t}\Big)^2 \quad (23)$$
$$\text{subject to } \sum_{t=0}^{T} u_{i,t} = \gamma_i, \ i = 1, \ldots, m,$$
$$\underline{u}_{i,t} \leq u_{i,t} \leq \overline{u}_{i,t}, \ i = 1, \ldots, m, \ t = 0, \ldots, T,$$
where $p_t \in \mathbb{R}$ is an electricity price coefficient at time $t$, $d_t \in \mathbb{R}$ represents the non-PEV demand at time $t$, $u_{i,t} \in \mathbb{R}$ is the charging rate of vehicle $i$ at time $t$, $\gamma_i \in \mathbb{R}$ represents a prescribed charging level to be reached by each vehicle $i$ at the end of the considered time horizon, and $\underline{u}_{i,t}, \overline{u}_{i,t} \in \mathbb{R}$ are bounds on the minimum and maximum value of $u_{i,t}$, respectively. The objective function in (23) encodes the total electricity cost, given by the demand (both PEV and non-PEV) multiplied by the price of electricity, which in turn depends linearly on the total demand through $p_t$, thus giving rise to the quadratic function in (23). This linear dependency of the price on the total demand models the fact that agents/vehicles are price-anticipating authorities, anticipating that their consumption has an effect on the electricity price (see [17]). Problem (23) can be written in compact form as
$$\min_{u \in \mathbb{R}^{m(T+1)}} \ (d + Au)^\top P (d + Au) \quad (24)$$
$$\text{subject to } u_i \in U_i, \ \text{for all } i = 1, \ldots, m,$$
where $P = (1/m)\,\mathrm{diag}(p) \in \mathbb{R}^{(T+1) \times (T+1)}$, $\mathrm{diag}(p)$ being the matrix with $p = (p_0, \ldots, p_T) \in \mathbb{R}^{T+1}$ on its diagonal, and $A = \mathbf{1}_{1 \times m} \otimes I \in \mathbb{R}^{(T+1) \times m(T+1)}$, where $\otimes$ denotes the Kronecker product. Moreover, $d = (d_0, \ldots, d_T) \in \mathbb{R}^{T+1}$, $u = (u_1, \ldots, u_m) \in \mathbb{R}^{m(T+1)}$ with $u_i = (u_{i,0}, \ldots, u_{i,T}) \in \mathbb{R}^{T+1}$, and $U_i$ encodes the constraints of each vehicle $i$, $i = 1, \ldots, m$, in (23). Problem (24) can be solved in a decentralized fashion by means of Algorithm 1. We compute the value of $c$ so that (9) is satisfied and the mapping $\tilde{T}(\cdot)$ associated with problem (24) is non-expansive. Note that the objective function in (24) is not strictly convex, since $A^\top P A = \mathbf{1}_{m \times m} \otimes P$, and that it exhibits a structure that allows for reduced information exchange, as described in Section IV-B. Indeed, at iteration $k+1$ of Algorithm 1, the central authority needs to collect the solution of each agent, but it only has to broadcast $V(k) = d + Au(k)$. Each agent $i$ can then compute its tentative objective as $J(z_i, u_{-i}(k)) = (V(k) - u_i(k) + z_i)^\top P (V(k) - u_i(k) + z_i)$.
Step 6 in Algorithm 1 for problem (24) then reduces to
$$u_i(k+1) = \lambda u_i(k) + (1 - \lambda)\tilde{T}_i(u(k)) = \lambda u_i(k) + (1 - \lambda) \arg\min_{z_i \in U_i} (V(k) - u_i(k) + z_i)^\top P (V(k) - u_i(k) + z_i) + c\|z_i - u_i(k)\|^2.$$
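As an illustration of this reduced-communication update, here is a hedged Python sketch of Algorithm 1 specialized to problem (24); it relies on numpy and cvxpy, and all names, including the feasible initialization, are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np
import cvxpy as cp

def pev_jacobi(p, d, gamma, lb, ub, c, lam, iters=50):
    """Sketch of Algorithm 1 for the PEV charging problem (24).

    Each agent i keeps a charging profile u[i] of length T+1; the central
    authority only broadcasts V(k) = d + A u(k) (the aggregate demand)."""
    T1, m = len(p), len(gamma)
    P = np.diag(p) / m                       # P = (1/m) diag(p)
    # Feasible start: spread the required charge evenly over the horizon
    # (assumes lb <= gamma_i/(T+1) <= ub, which holds in the example below).
    u = np.outer(gamma, np.ones(T1)) / T1
    for _ in range(iters):
        V = d + u.sum(axis=0)                # broadcast by central authority
        new_u = np.empty_like(u)
        for i in range(m):                   # local problems (parallelizable)
            z = cp.Variable(T1)
            obj = (cp.quad_form(V - u[i] + z, P)
                   + c * cp.sum_squares(z - u[i]))
            cons = [cp.sum(z) == gamma[i], z >= lb, z <= ub]
            cp.Problem(cp.Minimize(obj), cons).solve()
            new_u[i] = lam * u[i] + (1 - lam) * z.value
        u = new_u                            # synchronous Jacobi update
    return u
```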
A. Simulation results

We first consider a fleet of $m = 100$ PEVs, each of them having to reach a different level of charge $\gamma_i \in [0.1, 0.3]$, $i = 1, \ldots, m$, at the end of a time horizon $T = 24$, corresponding to hourly steps. The bounds on $u_{i,t}$ are taken to be $\underline{u}_{i,t} = 0$ and $\overline{u}_{i,t} = 0.02$, for all $i = 1, \ldots, m$, $t = 0, \ldots, T$. The non-PEV demand profile is retrieved from [11], whereas the price coefficient is $p_t = 0.15$, $t = 0, \ldots, T$. Note that, as in [12], $u_{i,t}$ corresponds to a normalized charging rate, which is then rescaled to yield reasonable power values. All optimization problems are solved using CPLEX [18].
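Using the sketch above, this setup can be reproduced in spirit; since the non-PEV demand profile of [11] is not available here, a synthetic profile is substituted (an assumption), so the resulting numbers will differ from those reported below:

```python
import numpy as np

rng = np.random.default_rng(0)
T1, m = 25, 100                                   # T = 24 hourly steps
p = 0.15 * np.ones(T1)                            # price coefficients p_t
d = 8.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, T1))  # synthetic demand
gamma = rng.uniform(0.1, 0.3, size=m)             # charging requirements
u = pev_jacobi(p, d, gamma, lb=0.0, ub=0.02, c=0.1, lam=0.4)
print(u.sum(axis=1)[:5], gamma[:5])               # each row meets its gamma_i
```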
For comparison purposes, problem (24) is first solved in a centralized fashion, achieving an optimal objective value $J^\star = 2.67$. It is then solved in a decentralized fashion by means of Algorithm 1, setting $c = 0.1$ and $\lambda = 0.4$. Note that for (9) to be satisfied, according to the analysis of Section IV, we should have $c \geq 0.0735$. After 30 iterations the difference between the decentralized and the centralized objective is $J(u(30)) - J^\star = 1.36 \cdot 10^{-6}$, thus achieving numerical convergence.

We also perform a parametric analysis, running Algorithm 1 for different values of $\lambda$ and $c$. Table I reports the number of iterations needed to achieve a relative error $(J(u(k)) - J^\star)/J^\star < 10^{-6}$ between the decentralized and the centralized objective value. Note that if we choose a value of $c$ that does not satisfy (9), Algorithm 1 does not always converge (see the first row of Table I, for $\lambda = 0.01, 0.1$). As $c$ and $\lambda$ increase, numerical convergence requires more iterations.

References (partial list, as recoverable from the extraction)

[1] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Prentice Hall, 1989.
[2] N. Parikh and S. Boyd, "Proximal algorithms," Foundations and Trends in Optimization, vol. 1, no. 3, pp. 127-239, 2013.
[7] J.-M. Lasry and P.-L. Lions, "Mean field games," Japanese Journal of Mathematics, vol. 2, no. 1, pp. 229-260, 2007.
[8] M. Huang, P. E. Caines, and R. P. Malhamé, "Large-population cost-coupled LQG problems with nonuniform agents: individual-mass behavior and decentralized ε-Nash equilibria," IEEE Transactions on Automatic Control, vol. 52, no. 9, pp. 1560-1571, 2007.
[16] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.