
ECOS: An SOCP Solver for Embedded Systems

Alexander Domahidi¹, Eric Chu² and Stephen Boyd²

¹ Automatic Control Laboratory, ETH Zurich, Zurich 8092, Switzerland. Email: domahidi@control.ee.ethz.ch
² Department of Electrical Engineering, Stanford University, Stanford CA 94305, USA. Email: echu508|boyd@stanford.edu
Abstract: In this paper, we describe the embedded conic solver (ECOS), an interior-point solver for second-order cone programming (SOCP) designed specifically for embedded applications. ECOS is written in low footprint, single-threaded, library-free ANSI-C and so runs on most embedded platforms. The main interior-point algorithm is a standard primal-dual Mehrotra predictor-corrector method with Nesterov-Todd scaling and self-dual embedding, with search directions found via a symmetric indefinite KKT system, chosen to allow stable factorization with a fixed pivoting order. The indefinite system is solved using Davis' SparseLDL package, which we modify by adding dynamic regularization and iterative refinement for stability and reliability, as is done in the CVXGEN code generation system, allowing us to avoid all numerical pivoting; the elimination ordering is found entirely symbolically. This keeps the solver simple, only 750 lines of code, with virtually no variation in run time. For small problems, ECOS is faster than most existing SOCP solvers; it is still competitive for medium-sized problems up to tens of thousands of variables.
I. INTRODUCTION
Convex optimization [1] is used in fields as diverse as
control and estimation [2], finance (see [1, §4.4] for numer-
ous examples), signal processing [3] and image reconstruc-
tion [4]. Methods for solving linear, quadratic, second-order
cone or semi-definite programs (LPs, QPs, SOCPs, SDPs)
are well understood, and many solver implementations have
been developed and released, both in the public domain [5],
[6] and as commercial codes [7], [8].
Almost all existing codes are designed for desktop com-
puters. In many applications, however, the convex solver is
to be run on a computing platform that is embedded in
a physical device, or that forms a small part of a larger
software system. In these applications, the same problem is
solved but with different data and often with a hard real-
time constraint. This requires that the solver be split into two
parts: (1) An initialization routine that symbolically analyzes
the problem structure and carries out other setup tasks, and
(2) a real-time routine that solves instances of the problem.
In these applications, high reliability is required: When the
solver does not attain the standard exit tolerance, it must at
least return reasonable results, even when called with poor
data (such as with rank deficient equality constraints). In
such cases, currently available solvers cannot be used due to
code size, memory requirements, dependencies on external
libraries, availability of source code, and other complexities.
In this paper, we describe the implementation of a solver for second-order cone programming that targets embedded systems. We focus on SOCPs since many potential real-time applications can be transformed into this problem class, including LPs, QPs and quadratically constrained QPs (QCQPs). See [9] or [1, Chap. 6-8] for some potential applications of SOCPs; a recent example of real-time SOCP is minimum-fuel powered descent for landing spacecraft [10].
Our embedded conic solver (ECOS) is written in single-threaded ANSI-C with no library dependence (other than sqrt from the standard math library). It can thus be used on any platform for which a C compiler is available. The algorithm implemented is the same as in Vandenberghe's CVXOPT [11], [12]: a standard primal-dual Mehrotra predictor-corrector interior-point method with self-dual embedding, which allows for the detection of infeasibility and unboundedness. CVXOPT obtains the search directions via a Cholesky factorization of the reduced system; in contrast, we use a sparse LDL solver on a variation of the standard KKT system. All standard sparse LDL solvers for indefinite linear systems use on-the-fly pivoting for numerical stability. This numerical pivoting step, however, is disadvantageous on embedded platforms due to code complexity and the need for dynamic memory allocation. Instead, we use a fixed elimination ordering, chosen once on the basis of the problem's sparsity structure; consequently, no dynamic pivoting is performed. To ensure that the factorization exists, and to enhance numerical stability, we use dynamic regularization and iterative refinement, as is done in CVXGEN [13]. Our current implementation uses a fill-in reducing ordering based on the approximate minimum degree heuristic [14]. We also handle the diagonal-plus-rank-one blocks appearing in the linear system by expanding them into larger but sparse blocks, improving solver efficiency when optimizing over large second-order cones (SOCs). The expansion is chosen in such a way that the lifted KKT matrix can be factored in a numerically stable way, despite the fixed pivoting sequence.

With these techniques, ECOS is a low footprint, high performance SOCP solver that is numerically reliable for accuracies beyond what is typically needed in embedded applications. Despite its simplicity, ECOS solves small problems more quickly than most existing SOCP solvers, and is competitive for medium-sized problems with up to about 20 000 variables. A Python interface, a native Matlab MEX interface, and an integration with CVX, a Matlab package for specifying and solving convex programs [15], are provided. ECOS is available from github.com/ifa-ethz/ecos.
A. Related work
Several solvers that target embedded systems have been made available recently. With the exception of first-order methods (cf. [16], [17]), all existing implementations are limited to QPs, including efficient primal-barrier methods for model predictive control [18], primal-dual interior-point methods via C code generation for small QPs with micro- to millisecond solution times (CVXGEN, [13]), and active-set strategies with good warm-starting capabilities (qpOASES, [19]). QPs, QCQPs as well as LPs with multistage structure can be solved efficiently by FORCES [20], [21]. There are currently two implementations of C-code generators based on first-order methods: µAO-MPC [22], [23] for linear MPC problems, and FiOrdOs [24] for general convex problems, which also supports SOCPs. First-order methods can be slow if the problem is not well conditioned or if its feasible set does not allow for an efficient projection, whereas interior-point methods have a convergence rate that is independent of the problem data and the particular feasible set.
B. Outline
In §II, we define an SOCP. In §III, we outline an interior-
point method (implemented in CVXOPT [11]). §IV describes
modifications to the interior-point method in order to target
embedded systems. §V presents a numerical example and
run time comparison to existing solvers. §VI concludes the
paper.
II. SECOND-ORDER CONE PROBLEM
ECOS solves SOCPs in the standard form [11]:

\[
\begin{array}{ll}
\text{minimize}   & c^T x \\
\text{subject to} & Ax = b \\
                  & Gx + s = h, \quad s \in \mathcal{K},
\end{array}
\tag{P}
\]

where x are the primal variables, s denotes the slack variables, c ∈ R^n, G ∈ R^{M×n}, h ∈ R^M, A ∈ R^{p×n}, b ∈ R^p are the problem data, and K is the cone

\[
\mathcal{K} = \mathcal{Q}^{m_1} \times \mathcal{Q}^{m_2} \times \cdots \times \mathcal{Q}^{m_N},
\tag{1}
\]

where

\[
\mathcal{Q}^m \triangleq \left\{ (u_0, u_1) \in \mathbb{R} \times \mathbb{R}^{m-1} \;\middle|\; u_0 \ge \|u_1\|_2 \right\}
\]

for m > 1, and we define Q¹ ≜ R₊. The order of the cone K is denoted by M ≜ ∑_{i=1}^{N} m_i. Without loss of generality, we assume that the first l ≥ 0 cones in (1) correspond to the positive orthant Q¹. The dual problem associated with (P) is (see [1, Chap. 5])

\[
\begin{array}{ll}
\text{maximize}   & -b^T y - h^T z \\
\text{subject to} & G^T z + A^T y + c = 0 \\
                  & z \in \mathcal{K}.
\end{array}
\tag{D}
\]
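
To make the cone constraint concrete, the following C sketch (our own illustration, not code from ECOS) tests membership of a dense vector u ∈ R^m in Q^m; note that, like ECOS itself, it needs nothing beyond sqrt from the standard math library:

```c
#include <math.h>

/* Returns 1 if u = (u[0], ..., u[m-1]) lies in the second-order cone
 * Q^m, i.e. u[0] >= ||(u[1], ..., u[m-1])||_2.  For m = 1 the loop is
 * empty and the test reduces to u[0] >= 0, i.e. Q^1 = R_+. */
static int in_soc(const double *u, int m)
{
    double sq = 0.0;
    int i;
    for (i = 1; i < m; i++)
        sq += u[i] * u[i];
    return u[0] >= sqrt(sq);
}
```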
III. INTERIOR-POINT METHOD
Interior point methods (IPMs) are widely used to solve
convex optimization problems, see e.g. [1], [8], [25], [26].
Conceptually, a barrier function, e.g.

\[
\Phi(u) \triangleq \sum_{i=1}^{N}
\begin{cases}
-\ln u_i, & u_i \in \mathcal{Q}^1, \\[2pt]
-\tfrac{1}{2}\ln\!\left(u_{i,0}^2 - u_{i,1}^T u_{i,1}\right), & u_i \in \mathcal{Q}^{m_i},
\end{cases}
\tag{2}
\]

is employed for K to replace the constrained optimization problems (P) and (D) by a series of smooth convex unconstrained problems, each approximately solved by Newton's method in one step. Thereby a sequence

\[
\chi^{k+1} = \chi^{k} + \alpha^{k} \Delta\chi^{k}, \quad k = 0, 1, 2, \ldots
\tag{3}
\]

is generated, where χ^k ≜ (x^k, y^k, s^k, z^k), α^k > 0 is a step length found by line search, and Δχ^k is a particular search direction found by solving one or more linear systems. In path-following algorithms, the iterates (3) loosely track the central path, which ends in the solution set [25]. Details on the particular algorithm implemented by ECOS can be found in [27]; we give the most important aspects in the following.
A. Extended self-dual homogeneous embedding
To detect and handle infeasibility or unboundedness, we embed (P) and (D) into a single, self-dual problem that readily provides certificates of infeasibility, cf. [8], [11], [26]:

\[
\begin{array}{ll}
\text{minimize}   & (M+1)\,\theta \\
\text{subject to} & 0 = A^T y + G^T z + c\tau + q_x \theta \\
                  & 0 = -Ax + b\tau + q_y \theta \\
                  & s = -Gx + h\tau + q_z \theta \\
                  & \kappa = -c^T x - b^T y - h^T z + q_\tau \theta \\
                  & 0 = -q_x^T x - q_y^T y - q_z^T z - q_\tau \tau + M + 1 \\
                  & (s, z) \in \mathcal{K}, \quad (\tau, \kappa) \ge 0,
\end{array}
\tag{ESD}
\]
with additional variables τ, κ and θ, and with

\[
(q_x, q_y, q_z, q_\tau) = \frac{M+1}{s^T z + \tau\kappa}\,(r_x, r_y, r_z, r_\tau),
\]

where

\[
r_x \triangleq -A^T y - G^T z - c\tau, \qquad r_y \triangleq Ax - b\tau,
\tag{4}
\]
\[
r_\tau \triangleq \kappa + c^T x + b^T y + h^T z, \qquad r_z \triangleq s + Gx - h\tau
\]

denote the residuals at a given point (x, y, z, s, τ, κ). Using this self-dual embedding, the following certificates can be provided when θ = 0:
1) τ > 0, κ = 0: Optimality,
2) τ = 0, κ > 0 and hᵀz + bᵀy < 0: Primal infeasibility,
3) τ = 0, κ > 0 and cᵀx < 0: Dual infeasibility,
4) τ = 0, κ = 0: None.
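
These four cases translate directly into a termination classification; the C sketch below (with our own status names and tolerance handling, not the actual ECOS exit codes) mirrors them:

```c
typedef enum { CERT_OPTIMAL, CERT_PRIMAL_INFEAS,
               CERT_DUAL_INFEAS, CERT_NONE } cert_t;

/* Classify the limit point of (ESD) once theta is (numerically) zero.
 * hTz_bTy = h^T z + b^T y, cTx = c^T x; tol guards the strict
 * inequalities against round-off. */
static cert_t classify(double tau, double kappa,
                       double hTz_bTy, double cTx, double tol)
{
    if (tau > tol && kappa <= tol)
        return CERT_OPTIMAL;
    if (tau <= tol && kappa > tol) {
        if (hTz_bTy < -tol) return CERT_PRIMAL_INFEAS;
        if (cTx < -tol)     return CERT_DUAL_INFEAS;
    }
    return CERT_NONE;
}
```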
B. Central path
The central path of problem (ESD) is defined by the set of points (x, y, s, z, τ, κ) satisfying

\[
\begin{bmatrix} 0 \\ 0 \\ s \\ \kappa \end{bmatrix}
=
\begin{bmatrix}
0 & A^T & G^T & c \\
-A & 0 & 0 & b \\
-G & 0 & 0 & h \\
-c^T & -b^T & -h^T & 0
\end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ \tau \end{bmatrix}
+ \mu
\begin{bmatrix} q_x \\ q_y \\ q_z \\ q_\tau \end{bmatrix},
\]
\[
(s, z) \in \mathcal{K}, \quad (\kappa, \tau) \ge 0,
\]
\[
\left( W^{-T} s \right) \circ \left( W z \right) = \mu e, \qquad \tau\kappa = \mu,
\tag{CP}
\]

where μ ≜ (sᵀz + κτ)/(M + 1) = θ and W is a (symmetric) scaling matrix (cf. §III-C). Search directions are generated by linearizing (CP) and choosing different values of μ, cf. §III-D. For μ → 0, (CP) is equivalent to the Karush-Kuhn-Tucker (KKT) optimality conditions, which are necessary and sufficient conditions for optimality for convex problems.
In (CP), the operator ∘ denotes the cone product of two vectors u, v ∈ Q^m, and e is the unit element such that u ∘ e = u for all u ∈ Q^m. For m = 1, this is the standard multiplication (and e = 1), while for m > 1 we have

\[
u \circ v = \begin{bmatrix} u^T v \\ u_0 v_1 + v_0 u_1 \end{bmatrix}
\quad \text{and} \quad
e = \begin{bmatrix} 1 & 0_{m-1}^T \end{bmatrix}^T.
\tag{5}
\]

The operator ∘ stems from the unifying treatment of symmetric cones using real Jordan algebra; the interested reader is referred to [28] and the references therein.
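
For illustration, the cone product (5) for a single SOC can be implemented in a few lines of C (a sketch with our own naming, not taken from ECOS):

```c
/* w = u o v for u, v in Q^m (m > 1), per (5):
 * w[0] = u^T v, and w_1 = u[0]*v_1 + v[0]*u_1.
 * w must be a separate array from u and v. */
static void cone_product(const double *u, const double *v,
                         double *w, int m)
{
    double dot = 0.0;
    int i;
    for (i = 0; i < m; i++)
        dot += u[i] * v[i];
    for (i = 1; i < m; i++)
        w[i] = u[0] * v[i] + v[0] * u[i];
    w[0] = dot;
}
```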
C. Symmetric scalings
Nesterov and Todd showed that the self-scaled property of the barrier function Φ in (2) can be exploited to construct interior-point methods that tend to make long steps along the search direction [29]. A primal-dual scaling W is a linear transformation [11]

\[
\tilde{s} = W^{-T} s, \qquad \tilde{z} = W z
\]

that leaves the cone and the central path invariant, i.e.,

\[
s \in \mathcal{K} \Leftrightarrow \tilde{s} \in \mathcal{K}, \qquad
z \in \mathcal{K} \Leftrightarrow \tilde{z} \in \mathcal{K}, \qquad
s \circ z = \mu e \Leftrightarrow \tilde{s} \circ \tilde{z} = \mu e.
\]

We use the symmetric Nesterov-Todd (NT) scaling, defined by the scaling point w ∈ K such that

\[
\nabla^2 \Phi(w)\, s = z,
\]

i.e., w is the point at which the Hessian of the barrier function maps s onto z. It can be shown that such a w always exists and is unique [29]. The NT scaling W is then obtained from a symmetric factorization of ∇²Φ(w)⁻¹ = WᵀW [11]:

1) NT scaling for the LP cone Q¹: For s, z ∈ Q¹,

\[
W_{+} = \sqrt{s/z}.
\tag{6}
\]

This is the standard scaling used in primal-dual interior-point methods for LPs and QPs.
2) NT scaling for an SOC Q^m, m > 1: For s, z ∈ Q^m, define the normalized points

\[
\bar{z} = \frac{z}{(z_0^2 - z_1^T z_1)^{1/2}}, \qquad
\bar{s} = \frac{s}{(s_0^2 - s_1^T s_1)^{1/2}},
\]
\[
\gamma = \left( \frac{1 + \bar{z}^T \bar{s}}{2} \right)^{1/2}, \qquad
\bar{w} = \frac{1}{2\gamma}\left( \bar{s} + \begin{bmatrix} \bar{z}_0 \\ -\bar{z}_1 \end{bmatrix} \right).
\]

Then the NT scaling is given by

\[
W_{\mathrm{SOC}} \triangleq \eta \begin{bmatrix}
\bar{w}_0 & \bar{w}_1^T \\[2pt]
\bar{w}_1 & I_{m-1} + \dfrac{\bar{w}_1 \bar{w}_1^T}{1 + \bar{w}_0}
\end{bmatrix}, \qquad
\eta \triangleq \left( \frac{s_0^2 - s_1^T s_1}{z_0^2 - z_1^T z_1} \right)^{1/4},
\tag{7}
\]

with I_{m−1} being the identity matrix of size m − 1.
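
For reference, the quantities γ, w̄ and η in (7) can be computed in O(m) from s and z; the following C sketch (our own, assuming s and z are strictly interior so the square roots are well defined) follows the formulas above:

```c
#include <math.h>

/* Compute the NT scaling data (eta, wbar) for one SOC of size m from
 * strictly interior points s, z in Q^m, per the formulas before (7). */
static void nt_scaling_soc(const double *s, const double *z, int m,
                           double *eta, double *wbar)
{
    double ress = s[0]*s[0], resz = z[0]*z[0], dot = 0.0;
    double snorm, znorm, gamma;
    int i;
    for (i = 1; i < m; i++) {
        ress -= s[i]*s[i];
        resz -= z[i]*z[i];
        dot  += s[i]*z[i];
    }
    snorm = sqrt(ress);                       /* (s0^2 - s1^T s1)^(1/2) */
    znorm = sqrt(resz);                       /* (z0^2 - z1^T z1)^(1/2) */
    dot   = (s[0]*z[0] + dot) / (snorm*znorm);/* zbar^T sbar            */
    gamma = sqrt(0.5 * (1.0 + dot));
    wbar[0] = (s[0]/snorm + z[0]/znorm) / (2.0*gamma);
    for (i = 1; i < m; i++)
        wbar[i] = (s[i]/snorm - z[i]/znorm) / (2.0*gamma);
    *eta = sqrt(snorm/znorm);                 /* (ress/resz)^(1/4)      */
}
```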
3) Composition of scalings: The final scaling matrix W is a block-diagonal matrix composed of the scalings for the individual cones Q^{m_i},

\[
W = \mathrm{blkdiag}\left( W_1, W_2, \ldots, W_N \right),
\]

where the first l blocks W_{1:l} are defined by (6) and the remaining N − l blocks by (7). Note that W = Wᵀ for the scalings considered in this paper.
D. Search directions
In predictor-corrector methods, the overall search direction
is the sum of three directions [25]: The affine search direction
aims at directly satisfying the KKT conditions, i.e. µ = 0
in (CP), and allows one to make progress (in terms of
distance to the optimal solution) at the current step. The
centering direction does not make progress in the current
step, but brings the current iterate closer to the central path,
such that larger progress can be made at the next affine step.
The amount of centering is determined by a parameter σ ∈ [0, 1]. Since the affine direction is based on linearization, the
corrector direction aims at compensating the nonlinearities
at the predicted affine step. The centering and the corrector
direction can be pooled into a combined direction.
As suggested in [11], search directions can be computed as follows. Define the vectors

\[
\Delta\xi_1 \triangleq \begin{bmatrix} \Delta x_1^T & \Delta y_1^T & \Delta z_1^T \end{bmatrix}^T, \qquad
\Delta\xi_2 \triangleq \begin{bmatrix} \Delta x_2^T & \Delta y_2^T & \Delta z_2^T \end{bmatrix}^T,
\]
\[
\beta_1 \triangleq \begin{bmatrix} -c^T & b^T & h^T \end{bmatrix}^T, \qquad
\beta_2 \triangleq \begin{bmatrix} d_x^T & d_y^T & d_z^T \end{bmatrix}^T,
\]

and the matrix

\[
K \triangleq \begin{bmatrix}
0 & A^T & G^T \\
A & 0 & 0 \\
G & 0 & -W^2
\end{bmatrix},
\tag{8}
\]

which we refer to as the KKT matrix in this paper, as it follows from the coefficient matrix in the central path equation (CP) after simple algebraic manipulations. A search direction can now be computed by solving the two linear systems

\[
K \Delta\xi_1 = \beta_1 \quad \text{and} \quad K \Delta\xi_2 = \beta_2,
\tag{9}
\]

and combining the results to

\[
\Delta\tau = \frac{d_\tau - d_\kappa/\tau + \begin{bmatrix} c^T & b^T & h^T \end{bmatrix} \Delta\xi_1}
{\kappa/\tau - \begin{bmatrix} c^T & b^T & h^T \end{bmatrix} \Delta\xi_2},
\tag{10a}
\]
\[
\begin{bmatrix} \Delta x^T & \Delta y^T & \Delta z^T \end{bmatrix}^T = \Delta\xi_1 + \Delta\tau\, \Delta\xi_2,
\tag{10b}
\]
\[
\Delta s = -W \left( \lambda \backslash d_s + W \Delta z \right),
\tag{10c}
\]
\[
\Delta\kappa = -\left( d_\kappa + \kappa\Delta\tau \right) / \tau,
\tag{10d}
\]

where u \ w is the inverse operator of the conic product ∘, such that if u ∘ v = w then u \ w = v (we give an efficient formula in (16)), and

\[
\lambda \triangleq W^{-T} s = W z.
\]

TABLE I
Right-hand sides for the affine and combined search directions; Δs_a and Δz_a denote the affine search directions.

RHS | affine    | combined (centering & corrector)
d_x | r_x       | (1 − σ) r_x
d_y | −r_y      | −(1 − σ) r_y
d_z | −r_z + s  | −(1 − σ) r_z + W(λ \ d_s)
d_s | λ ∘ λ     | λ ∘ λ + (W⁻ᵀΔs_a) ∘ (WΔz_a) − σμe
d_τ | r_τ       | (1 − σ) r_τ
d_κ | τκ        | τκ + Δτ_a Δκ_a − σμ

The corresponding right-hand sides for the affine and centering-corrector directions are summarized in Table I. For the former, (10c) simplifies to

\[
\Delta s_a = -W \left( \lambda + W \Delta z_a \right).
\]

Since β₁ is the same for both directions, only three linear systems have to be solved per iteration. This is the computational bottleneck of interior-point methods.
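
Schematically, one direction computation per (9), (10a) and (10b) amounts to two solves with the factored K plus a scalar combination; the C sketch below uses a hypothetical ldl_solve standing in for the modified SparseLDL routine (in the actual iteration, the solve for the fixed β₁ is of course reused across the affine and combined directions):

```c
/* Hypothetical stand-in for a solve with the factored KKT matrix. */
extern void ldl_solve(const void *Kfac, const double *b, double *x, int n);

/* Compute Delta-tau and (Dx, Dy, Dz) from the two solves (9) and the
 * combination formulas (10a)-(10b).  cbh holds (c, b, h) stacked to
 * match the KKT ordering; dtau_rhs = d_tau, dkap = d_kappa. */
static void search_direction(const void *Kfac, int n,
                             const double *beta1, const double *beta2,
                             const double *cbh, double tau, double kappa,
                             double dtau_rhs, double dkap,
                             double *xi1, double *xi2,
                             double *delta, double *dtau)
{
    double num = dtau_rhs - dkap / tau, den = kappa / tau;
    int i;
    ldl_solve(Kfac, beta1, xi1, n);            /* K xi1 = beta1       */
    ldl_solve(Kfac, beta2, xi2, n);            /* K xi2 = beta2       */
    for (i = 0; i < n; i++) {
        num += cbh[i] * xi1[i];                /* + (c,b,h)^T xi1     */
        den -= cbh[i] * xi2[i];                /* - (c,b,h)^T xi2     */
    }
    *dtau = num / den;                         /* (10a)               */
    for (i = 0; i < n; i++)
        delta[i] = xi1[i] + (*dtau) * xi2[i];  /* (10b)               */
}
```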
IV. IMPLEMENTATION FOR EMBEDDED SYSTEMS
In this section, we give implementation details which
differ from CVXOPT. These are important for the solver
to be run on embedded platforms. Some of the techniques
described here have already been successfully employed in
CVXGEN [13].
A. Efficient and stable sparse LDL factorization
Each iteration of our method solves the indefinite linear system (9) with different right-hand sides. Typically, this is done by factorizing K or an equivalent linear system. The computational cost thus depends on the number of nonzeros in this factorization.
We use the following strategies to minimize this overhead:
(1) we work with the indefinite matrix K and avoid potential
fill-in when performing a block reduction of K (as is done
in CVXOPT), (2) we use a fill-in reducing ordering on
K, (3) we perform dynamic regularization to ensure the
existence of a factorization for any static ordering, and (4)
we use iterative refinement to undo the effects of dynamic
regularization. These strategies allow us to compute the
permutation and symbolic factorization in an initialization
stage. We modify Tim Davis’ SparseLDL [30] to perform
dynamic regularization and iterative refinement.
1) LDL factorization of indefinite K: We perform a (permuted) LDL factorization of K,

\[
K = P L D L^T P^T,
\]

where P is a permutation matrix of appropriate size, chosen to minimize the fill-in of L; L is a lower triangular matrix; and D is a diagonal matrix. Note that this differs from a general LDL factorization, in which D may contain 2 × 2 blocks [31]. Since K is indefinite, this (simpler) factorization is not guaranteed to exist; hence, we introduce dynamic regularization to turn K into a quasi-definite matrix [32].
2) Permutations to reduce fill-in: The factor L has at
least as many non-zero elements as the lower triangular part
of K. We use the approximate minimum degree (AMD)
code [14] to compute fill-in reducing orderings of K. AMD
is a heuristic that performs well in practice and has low
computational complexity.
3) Dynamic regularization: We turn K into a quasi-definite matrix by perturbing its diagonal entries. We do this only when needed during the numerical factorization, hence the term dynamic regularization. Specifically, we alter D_ii during the factorization whenever it becomes too small or has the wrong sign:

\[
D_{ii} \leftarrow S_i\, \delta \quad \text{if} \quad S_i\, D_{ii} \le \epsilon,
\]

with parameters ε ≈ 10⁻¹⁴ and δ ≈ 10⁻⁷. The signs S_i of the diagonal elements are known in advance,

\[
S = \begin{bmatrix} +\mathbf{1}_n^T & -\mathbf{1}_p^T & -\mathbf{1}_l^T & \rho^T \end{bmatrix}^T,
\]

where ρ is the sign vector associated with the SOCs. For K as defined in (8), ρ = −1_{M−l}. For the sparse representation of K introduced in §IV-B.1 and used in the current implementation of ECOS,

\[
\rho = \begin{bmatrix} \rho_{l+1} & \cdots & \rho_N \end{bmatrix}
\quad \text{with} \quad
\rho_i = \begin{bmatrix} -\mathbf{1}_{m_i}^T & -1 & 1 \end{bmatrix}
\]

for each SOC.
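
The regularization test is a one-line guard on each freshly computed pivot inside the factorization loop; a C sketch (with ε and δ hard-coded to the values given above, and S_i ∈ {−1, +1}) might read:

```c
#define REG_EPS   1e-14
#define REG_DELTA 1e-7

/* Dynamic regularization of a just-computed diagonal entry D_ii
 * during the LDL factorization; Si is its precomputed sign. */
static double regularize(double Dii, int Si)
{
    if (Si * Dii <= REG_EPS)    /* too small, or the wrong sign */
        Dii = Si * REG_DELTA;   /* replace by a signed delta    */
    return Dii;
}
```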
4) Iterative refinement (IR): Due to the regularization, we solve a slightly perturbed system (K + ΔK) ξ̃_i = β_i, and thus make an error β_i − K ξ̃_i with respect to (9). To compensate for this error, we solve

\[
(K + \Delta K)\, \Delta\tilde{\xi}_i = \beta_i - K \tilde{\xi}_i
\]

for a corrective term Δξ̃_i, and set ξ̃_i ← ξ̃_i + Δξ̃_i. This refinement requires only a forward and a backward solve, since the factorization of (K + ΔK) has already been computed. Successive application of iterative refinement converges to the true solution of (9).
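
One refinement step thus costs a matrix-vector product with the unregularized K plus one forward/backward solve with the existing factorization; a C sketch with hypothetical helper routines:

```c
/* Hypothetical stand-ins: y = K x with the unregularized K, and a
 * forward/backward solve with the existing factorization of K + dK. */
extern void kkt_mult(const void *K, const double *x, double *y, int n);
extern void ldl_solve(const void *Kfac, const double *b, double *x, int n);

/* One iterative-refinement step: xi <- xi + corr, where
 * (K + dK) corr = beta - K xi.  err and corr are scratch arrays. */
static void refine_once(const void *K, const void *Kfac,
                        const double *beta, double *xi,
                        double *err, double *corr, int n)
{
    int i;
    kkt_mult(K, xi, err, n);             /* err = K xi                */
    for (i = 0; i < n; i++)
        err[i] = beta[i] - err[i];       /* residual w.r.t. (9)       */
    ldl_solve(Kfac, err, corr, n);       /* one fwd/bwd solve         */
    for (i = 0; i < n; i++)
        xi[i] += corr[i];                /* corrected solution        */
}
```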
B. Structure exploitation of SOC scalings
The Nesterov-Todd scalings (7), which propagate to the (3,3) block of the KKT matrix K as −W²_SOC, are dense matrices with a diagonal-plus-rank-one structure. We exploit this to obtain a sparse KKT system and to accelerate the scaling operations. This is crucial for efficiently solving problems with cones of large dimension.
1) Sparse KKT system: We consider a single cone; for multiple cones, this construction can be carried out for each SOC separately. Based on the observation that W²_SOC can be rewritten as

\[
W^2_{\mathrm{SOC}} = \eta^2 \left( D + u u^T - v v^T \right)
\tag{11}
\]

for some diagonal matrix D and vectors u, v, we introduce two scalar variables q and t to obtain a sparse representation of the last block equation in (9):

\[
\begin{bmatrix} G \\ 0 \\ 0 \end{bmatrix} \Delta x
- \underbrace{\eta^2 \begin{bmatrix}
D & v & u \\
v^T & 1 & 0 \\
u^T & 0 & -1
\end{bmatrix}}_{\tilde{W}^2_{\mathrm{SOC}}}
\begin{bmatrix} \Delta z \\ q \\ t \end{bmatrix}
=
\begin{bmatrix} d_z \\ 0 \\ 0 \end{bmatrix}.
\tag{12}
\]
As a result, the KKT matrix is sparse (given that A and G are sparse), which is likely to yield sparse Cholesky factors and thereby significant computational savings if large cones are involved.

For any fixed permutation P, the condition

\[
D - v v^T \succ 0
\tag{13}
\]

is sufficient for the factorization of the expanded KKT matrix to exist, because (13) implies that the coefficient matrix W̃²_SOC in (12) is quasi-definite. Consequently, the expanded KKT matrix is also quasi-definite (if regularized as described in §IV-A.3). It can be shown that the following choice of D, u, and v satisfies both (11) and (13), and that it is always well defined:

\[
D \triangleq \begin{bmatrix} d & 0 \\ 0 & I_{m-1} \end{bmatrix}, \qquad
u \triangleq \begin{bmatrix} u_0 \\ u_1 \bar{w}_1 \end{bmatrix}, \qquad
v \triangleq \begin{bmatrix} 0 \\ v_1 \bar{w}_1 \end{bmatrix},
\]
\[
d \triangleq \tfrac{1}{2}\bar{w}_0^2 + \tfrac{1}{2}\,\bar{w}_1^T\bar{w}_1
\left( 1 - \frac{\alpha^2}{1 + (\bar{w}_1^T\bar{w}_1)\,\beta} \right),
\]
\[
u_0 \triangleq \sqrt{\bar{w}_0^2 + \bar{w}_1^T\bar{w}_1 - d}, \qquad
u_1 \triangleq \frac{\alpha}{u_0}, \qquad
v_1 \triangleq \sqrt{u_1^2 - \beta},
\]
\[
\alpha \triangleq 1 + \bar{w}_0 + \frac{\bar{w}_1^T\bar{w}_1}{1 + \bar{w}_0}, \qquad
\beta \triangleq 1 + \frac{2}{1 + \bar{w}_0} + \frac{\bar{w}_1^T\bar{w}_1}{(1 + \bar{w}_0)^2}.
\]
2) Fast scaling operations: With the definitions in (7), we can apply the NT scaling W_SOC via

\[
W_{\mathrm{SOC}}\, z = \eta \begin{bmatrix}
\bar{w}_0 z_0 + \zeta \\
z_1 + \left( z_0 + \zeta/(1 + \bar{w}_0) \right) \bar{w}_1
\end{bmatrix},
\tag{14}
\]

where ζ ≜ w̄₁ᵀ z₁. Similarly, we can apply the inverse scaling W⁻¹_SOC via

\[
W^{-1}_{\mathrm{SOC}}\, z = \frac{1}{\eta} \begin{bmatrix}
\bar{w}_0 z_0 - \zeta \\
z_1 + \left( -z_0 + \zeta/(1 + \bar{w}_0) \right) \bar{w}_1
\end{bmatrix}.
\tag{15}
\]
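
In C, applying (14) is a single pass over the cone (a sketch with our own naming; (15) is analogous, with the two sign changes shown above):

```c
/* out = W_SOC z in O(m) per (14); out must not alias z. */
static void scale_soc(const double *wbar, double eta,
                      const double *z, double *out, int m)
{
    double zeta = 0.0, coef;
    int i;
    for (i = 1; i < m; i++)
        zeta += wbar[i] * z[i];          /* zeta = wbar_1^T z_1 */
    out[0] = eta * (wbar[0]*z[0] + zeta);
    coef = z[0] + zeta / (1.0 + wbar[0]);
    for (i = 1; i < m; i++)
        out[i] = eta * (z[i] + coef * wbar[i]);
}
```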
3) Fast inverse conic product: An efficient formulation of the inverse operator to ∘ in (5) for second-order cones is given by

\[
u \backslash w \triangleq \frac{1}{\varrho} \begin{bmatrix}
u_0 w_0 - \nu \\
(\nu/u_0 - w_0)\, u_1 + (\varrho/u_0)\, w_1
\end{bmatrix},
\quad \text{with} \quad
\varrho = u_0^2 - u_1^T u_1, \quad \nu = u_1^T w_1.
\tag{16}
\]

For Q¹, u \ w ≜ w/u, i.e., the usual scalar division.

By using (14), (15) and (16), a multiplication or left-division by W and the inverse conic product require O(m) instead of O(m²) operations, as in the LP case. This is significant if the dimension of the cone is large.
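
A corresponding C sketch of (16) (our own naming; u must lie in the interior of the cone, so that ϱ > 0 and u₀ > 0):

```c
/* out = u \ w for a single SOC per (16), in O(m) operations. */
static void conic_division(const double *u, const double *w,
                           double *out, int m)
{
    double rho = u[0]*u[0], nu = 0.0, c1, c2;
    int i;
    for (i = 1; i < m; i++) {
        rho -= u[i]*u[i];                /* rho = u0^2 - u1^T u1 */
        nu  += u[i]*w[i];                /* nu  = u1^T w1        */
    }
    out[0] = (u[0]*w[0] - nu) / rho;
    c1 = (nu/u[0] - w[0]) / rho;         /* coefficient of u1    */
    c2 = 1.0 / u[0];                     /* (rho/u0)/rho = 1/u0  */
    for (i = 1; i < m; i++)
        out[i] = c1*u[i] + c2*w[i];
}
```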
C. Setup & solve: Dynamic and static memory
ECOS implements three functions: Setup, Solve and Cleanup. Setup allocates memory and creates the upper triangular part of the expanded sparse KKT matrix K̃, as well as the sign vector S needed for dynamic regularization. Then a permutation of K̃ is computed using AMD, and K̃ and S are permuted accordingly. Last, a symbolic factorization of PᵀK̃P is carried out to determine the data structures needed for the numerical factorization in Solve.

Our Solve routine implements the same interior-point algorithm as CVXOPT [27, §7], but the search directions are found as described above in this section. We use the same initialization method and stopping criteria as CVXOPT [27, §7.3]. If the data structure for ECOS is no longer needed, the user can call the Cleanup routine to free the memory that was allocated during Setup.

Only Setup and Cleanup call malloc and free, while Solve uses only static memory allocation. Once the problem has been set up (possibly accomplished by a code generator), different problem instances can be solved without the need for dynamic memory allocation (and without the Setup overhead).
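
The resulting calling pattern for a real-time application might look as follows; this is a hypothetical sketch of the three-phase structure described above, with illustrative names and signatures, not the actual ECOS interface:

```c
/* Hypothetical three-phase API mirroring the Setup/Solve/Cleanup
 * split; all names here are illustrative only. */
typedef struct solver_work solver_work;    /* opaque workspace          */

extern solver_work *solver_setup(void);    /* malloc, AMD, symbolic LDL */
extern void solver_update(solver_work *w); /* overwrite numerical data  */
extern int  solver_solve(solver_work *w);  /* static memory only        */
extern void solver_cleanup(solver_work *w);/* free Setup's memory       */

void control_loop(void)
{
    solver_work *w = solver_setup();       /* once, offline if desired  */
    for (;;) {
        solver_update(w);                  /* same sparsity pattern     */
        if (solver_solve(w) != 0)          /* hard real-time part       */
            break;                         /* handle non-optimal exit   */
        /* apply the computed input, wait for the next sample, ...     */
    }
    solver_cleanup(w);
}
```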
V. EXAMPLE: PORTFOLIO OPTIMIZATION
We consider a simple long-only portfolio optimization problem [1, p. 185-186], where we choose the relative weights of assets to maximize the risk-adjusted return. The problem is

\[
\begin{array}{ll}
\text{maximize}   & \mu^T x - \gamma\, (x^T \Sigma x) \\
\text{subject to} & \mathbf{1}^T x = 1, \quad x \succeq 0,
\end{array}
\]

where the variable x ∈ Rⁿ represents the portfolio, μ ∈ Rⁿ is the vector of expected returns, γ > 0 is the risk-aversion parameter, and Σ ∈ R^{n×n} is the asset-return covariance, given in factor-model form

\[
\Sigma = F F^T + D.
\]

Here F ∈ R^{n×m} is the factor-loading matrix, and D ∈ R^{n×n} is a diagonal matrix (of idiosyncratic risk). The number of factors in the risk model is m, which we assume is substantially smaller than n, the number of assets.

This problem can be converted in the standard way into an SOCP,

\[
\begin{array}{ll}
\text{minimize}   & -\mu^T x + \gamma\,(t + s) \\
\text{subject to} & \mathbf{1}^T x = 1, \quad x \succeq 0, \\
                  & \|D^{1/2} x\|_2 \le u, \quad \|F^T x\|_2 \le v, \\
                  & \|(1 - t,\; 2u)\|_2 \le 1 + t, \\
                  & \|(1 - s,\; 2v)\|_2 \le 1 + s,
\end{array}
\tag{17}
\]

with variables x ∈ Rⁿ and (t, s, u, v) ∈ R⁴.
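
The last two constraints are standard SOC reformulations of hyperbolic constraints; for example,

\[
\|(1 - t,\; 2u)\|_2 \le 1 + t
\;\Longleftrightarrow\;
(1 - t)^2 + 4u^2 \le (1 + t)^2
\;\Longleftrightarrow\;
u^2 \le t, \quad t \ge 0,
\]

so at optimality t = u² = xᵀDx and s = v² = xᵀFFᵀx, and the objective term γ(t + s) reproduces the risk term γ xᵀΣx.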
A run time comparison of existing SOCP solvers and ECOS for obtaining solutions to random instances of (17) to an accuracy of 10⁻⁶ is given in Fig. 1. The solvers MOSEK
and Gurobi were run in single-threaded mode (the problems
were solved more quickly than with multiple threads), while
the current versions of SDPT3 and SeDuMi do not have the
option to change the number of threads. For these solvers, we
observed a CPU usage of more than 100%, which indicates
that SDPT3 and SeDuMi make use of multiple threads. We
switched off printing and used standard values for all other
solver options. The number of iterative refinement steps for
ECOS was limited to one.
As Fig. 1 shows, ECOS outperforms most established
solvers for small problems, and is overall competitive for
medium-sized problems up to several thousands of variables.
References

S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.

E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, 2006.

J. F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones," Optimization Methods and Software, vol. 11-12, 1999.

S. J. Wright, Primal-Dual Interior-Point Methods. SIAM, 1997.

M. S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret, "Applications of second-order cone programming," Linear Algebra and its Applications, vol. 284, 1998.