Distributed Maximum Likelihood Sensor Network Localization

Andrea Simonetto and Geert Leus, Fellow, IEEE
Abstract—We propose a class of convex relaxations to solve the sensor network localization problem, based on a maximum likelihood (ML) formulation. This class, as well as the tightness of the relaxations, depends on the noise probability density function (PDF) of the collected measurements. We derive a computationally efficient edge-based version of this ML convex relaxation class and we design a distributed algorithm that enables the sensor nodes to solve these edge-based convex programs locally by communicating only with their close neighbors. This algorithm relies on the alternating direction method of multipliers (ADMM), it converges to the centralized solution, it can run asynchronously, and it is computation error-resilient. Finally, we compare our proposed distributed scheme with other available methods, both analytically and numerically, and we argue the added value of ADMM, especially for large-scale networks.
Index Terms—Distributed optimization, convex relaxations, sensor network localization, distributed algorithms, ADMM, distributed localization, sensor networks, maximum likelihood.
I. INTRODUCTION

Nowadays, wireless sensor networks are developed to provide fast, cheap, reliable, and scalable hardware solutions to a large number of industrial applications, ranging from surveillance [1], [2] and tracking [3], [4] to exploration [5], [6], monitoring [7], [8], robotics [9], and other sensing tasks [10]. From the software perspective, an increasing effort is spent on designing distributed algorithms that can be embedded in these sensor networks, providing high reliability with limited computation and communication requirements for the sensor nodes. Estimating the location of the nodes based on pair-wise distance measurements is regarded as a key enabling technology in many of the aforementioned scenarios, where GPS is often not employable.
From a strictly mathematical standpoint, this sensor network localization problem can be formulated as determining the node positions in $\mathbb{R}^d$, or ensuring their consistency with the given inter-sensor distance measurements and (in some cases) with the locations of known anchors. As is well known, such a fixed-dimensional problem (often phrased as a polynomial optimization) is NP-hard in general. Consequently, there have been significant research efforts in developing algorithms and heuristics that can accurately and efficiently localize the nodes in a given dimension [11]–[13]. Besides heuristic geometric schemes, such as multi-lateration, typical methods encompass multi-dimensional scaling [14], [15], belief propagation techniques [16], and standard non-linear filtering [17].

Manuscript received April 24, 2013; revised October 19, 2013 and December 26, 2013; accepted January 14, 2014. Date of publication January 27, 2014; date of current version February 26, 2014. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Tongtong Li. This research was supported in part by STW under the D2S2 project from the ASSYS program (project 10561). (Corresponding author: A. Simonetto.)

The authors are with the Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, Delft 2628 CD, The Netherlands (e-mail: a.simonetto@tudelft.nl; g.j.t.leus@tudelft.nl).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TSP.2014.2302746
A very powerful approach to the sensor network localization problem is to use convex relaxation techniques to massage the non-convex problem into a more tractable yet approximate formulation. First adopted in [18], this modus operandi has since been extensively developed in the literature (see, for example, [19] for a comprehensive survey in the field of signal processing). Semidefinite programming (SDP) relaxations for the localization problem have been proposed in [20]–[27]. Theoretical properties of these methods have been discussed in [28]–[30], while their efficient implementation has been presented in [31]–[35]. Further convex relaxations, namely second-order cone programming (SOCP) relaxations, have been proposed in [36] to alleviate the computational load of standard SDP relaxations, at the price of some performance degradation. Highly accurate but highly computationally demanding sum of squares (SOS) convex relaxations have instead been employed in [37].
Despite the richness of the convex relaxation literature, two main aspects have been overlooked. First of all, a comprehensive characterization of these convex relaxations based on the maximum likelihood (ML) formulation is missing. In [21], [25], [38], [39] ML-based relaxations are explored, but only for specific noise models (mainly Gaussian noise), without a proper understanding of how different noise models would affect performance.

The second overlooked aspect regards the lack of distributed optimization algorithms to solve convex relaxation problems with certificates of convergence to the centralized optimizer, convergence rate, and proven robustness when applied to real sensor networks bounded by asynchronous communication and limited computation capabilities.
Contributions. First, we generalize the current state-of-the-art convex relaxations by formulating the sensor network localization problem in a maximum likelihood framework and then relaxing it. This class of relaxations (which depends on the choice of the probability density function (PDF) of the noise) is represented by the convex program (6). We show that this program is a rank relaxation of the original non-convex ML estimation problem, and at least for two widely used cases (Gaussian noise and Gaussian quantized measurements), it is a rank-$d$ relaxation ($d$ being the dimension of the space where the sensor nodes live, Proposition 1). The relaxed convex program is then further massaged into the edge-based ML relaxation (12) to lessen the computation requirements and to facilitate the distribution of the problem among the nodes. Furthermore, we show numerically that the tightness of the relaxation (in particular, the property of being derived from a rank-$d$ relaxation or not) can affect the performance of the convex program (12) more than the correctness of the noise model.
As a second contribution, we demonstrate how the edge-based ML convex relaxation can be handled via the alternating direction method of multipliers (ADMM), which gives us a powerful leverage for the analysis of the resulting algorithm. The proposed algorithm, Algorithm 1, is distributed in nature: the sensor nodes are able to locate themselves and the neighboring nodes without knowledge of the whole network. This algorithm converges with a rate of $O(1/k)$ ($k$ being the number of iterations) to the solution of (12) (Theorem 1). Using Algorithm 1, each sensor node has a total communication cost to reach a certain average local accuracy of the solution that is independent of the network size (Proposition 2 and Corollary 1). The proposed algorithm is then proven to converge even when run asynchronously (Theorem 2) and when the nodes are affected by computation errors (Theorem 3). These features, along with guaranteed convergence, are very important in real-life sensor network applications. Finally, we compare the usage of Algorithm 1 with some other available possibilities, in particular the methods suggested in [40] and [41], both in terms of theoretical performance and simulation results. These analyses support our proposed distributed algorithm, especially for large-scale settings.
Organization. The remainder of the paper is organized as follows. Section II details the problem formulation. Section III presents the proposed maximum likelihood convex relaxation (6) along with some examples. Section IV introduces the edge-based relaxation (12), which is the building block for our distributed algorithm. Section V briefly surveys distributed techniques to solve the localization problem, while, in Section VI, we focus on the development of our distributed algorithm and its analysis. Numerical simulations and comparisons are displayed in Section VII, while our conclusions are drawn in Section VIII.
II. PRELIMINARIES AND PROBLEM STATEMENT
We consider a network of $n$ static wireless sensor nodes with computation and communication capabilities, living in a $d$-dimensional space (typically, this will be the standard 2-dimensional or 3-dimensional Euclidean space). We denote the set of all nodes by $\mathcal{V} = \{1, \dots, n\}$. Let $\boldsymbol{x}_i \in \mathbb{R}^d$ be the position vector of the $i$-th sensor node, or equivalently, let $X = [\boldsymbol{x}_1, \dots, \boldsymbol{x}_n]$ be the $d \times n$ matrix collecting the position vectors. We consider an environment with line-of-sight conditions between the nodes and we assume that some pairs of sensor nodes $(i,j)$ have access to noisy range measurements as

$$r_{ij} = d_{ij}(X) + \nu_{ij} \quad\quad (1)$$

where $d_{ij}(X) = \|\boldsymbol{x}_i - \boldsymbol{x}_j\|$ is the noise-free Euclidean distance and $\nu_{ij}$ is an additive noise term with known probability distribution. We call $f_{ij}(r_{ij}|X)$ the inter-sensor sensing PDF, where we have indicated explicitly the dependence of $r_{ij}$ on the sensor node positions $X$.
In addition, we consider that some sensors also have access to noisy range measurements with some fixed anchor nodes (whose position $\boldsymbol{a}_k \in \mathbb{R}^d$, for $k \in \{1, \dots, m\}$, is known by all the neighboring sensor nodes of each anchor $k$) as

$$e_{ik} = \delta_{ik}(X) + \omega_{ik} \quad\quad (2)$$

where $\delta_{ik}(X) = \|\boldsymbol{x}_i - \boldsymbol{a}_k\|$ is the noise-free Euclidean distance and $\omega_{ik}$ is an additive noise term with known probability distribution. We denote as $g_{ik}(e_{ik}|X)$ the anchor-sensor sensing PDF.
We use graph theory terminology to characterize the set of sensor nodes $\mathcal{V}$ and the measurements $r_{ij}$ and $e_{ik}$. In particular, we say that the measurements $r_{ij}$ induce a graph with $\mathcal{V}$ as vertex set, i.e., for each sensor node pair $(i,j)$ for which there exists a measurement $r_{ij}$, there exists an edge connecting $i$ and $j$. The set of all edges is $\mathcal{E}$ and its cardinality is $E$. We denote this undirected graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E})$. The neighbors of sensor node $i$ are the sensor nodes that are connected to $i$ with an edge. The set of these neighboring nodes is indicated with $\mathcal{N}_i$, that is $\mathcal{N}_i = \{j \,|\, (i,j) \in \mathcal{E}\}$. Since the sensor nodes are assumed to have communication capabilities, we implicitly assume that each sensor node $i$ can communicate with all the sensors in $\mathcal{N}_i$, and with these only. In a similar fashion, we collect the anchors in the vertex set $\mathcal{V}_a = \{1, \dots, m\}$ and we say that the measurements $e_{ik}$ induce an edge set $\mathcal{E}_a$, composed of the pairs $(i,k)$ for which there exists a measurement $e_{ik}$. Also, we denote with $\mathcal{N}_{a,i}$ the neighboring anchors for sensor node $i$, i.e., $\mathcal{N}_{a,i} = \{k \,|\, (i,k) \in \mathcal{E}_a\}$.
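To make the sensing model concrete, the following is a minimal simulation sketch of (1)–(2) and of the induced graph. The network sizes, connectivity radius, Gaussian noise level, and all variable names are illustrative assumptions of this sketch, not specifications from the paper.

```python
import numpy as np

# Minimal simulation of the sensing model (1)-(2); sizes, radius, and
# the Gaussian noise level are illustrative assumptions.
rng = np.random.default_rng(0)
d, n, m = 2, 20, 4                           # dimension, sensors, anchors
X_true = rng.uniform(0.0, 1.0, size=(d, n))  # true (unknown) positions
A = rng.uniform(0.0, 1.0, size=(d, m))       # known anchor positions
radius, sigma = 0.35, 0.02                   # sensing range, noise std

# Inter-sensor edges E and noisy ranges r_ij, as in (1)
E, r = [], {}
for i in range(n):
    for j in range(i + 1, n):
        dist = np.linalg.norm(X_true[:, i] - X_true[:, j])
        if dist <= radius:
            E.append((i, j))
            r[(i, j)] = dist + sigma * rng.standard_normal()

# Anchor-sensor edges E_a and noisy ranges e_ik, as in (2)
Ea, e = [], {}
for i in range(n):
    for k in range(m):
        dist = np.linalg.norm(X_true[:, i] - A[:, k])
        if dist <= radius:
            Ea.append((i, k))
            e[(i, k)] = dist + sigma * rng.standard_normal()

# Neighborhoods N_i induced by E: node i communicates only with N[i]
N = {i: set() for i in range(n)}
for (i, j) in E:
    N[i].add(j)
    N[j].add(i)
```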
Problem Statement. The sensor network localization problem is formulated as estimating the position matrix $X$ (in some cases, up to an orthogonal transformation) given the measurements $r_{ij}$ and $e_{ik}$ for all $(i,j) \in \mathcal{E}$ and $(i,k) \in \mathcal{E}_a$, and the anchor positions $\boldsymbol{a}_k$. When $\mathcal{E}_a = \emptyset$ we call the problem anchor-free localization. The sensor network localization problem can be written in terms of maximizing the likelihood, leading to the following optimization problem

$$\hat{X} = \arg\max_{X} \; \sum_{(i,j)\in\mathcal{E}} \ln f_{ij}(r_{ij}|X) \; + \sum_{(i,k)\in\mathcal{E}_a} \ln g_{ik}(e_{ik}|X) \quad\quad (3)$$

This optimization problem is in general non-convex and it is also NP-hard to find any global solution. In this paper, under the sole assumptions that:

Assumption 1: The sensing PDFs $f_{ij}(r_{ij}|X)$ and $g_{ik}(e_{ik}|X)$ are log-concave functions of the unknown distances $d_{ij}$ and $\delta_{ik}$,

Assumption 2: The graph induced by the inter-sensor range measurements $r_{ij}$ is connected,

we will propose a convex relaxation to transform the ML estimator (3) into a more tractable problem, which we will then solve using ADMM in a distributed setting, where each of the sensor nodes, by communicating only with the neighboring nodes, will determine its own position.
III. CONVEX RELAXATIONS
A. Maximum Likelihood Relaxation
To derive the mentioned convex relaxation of the ML estimator (3), several steps are needed. First of all, we introduce the new variables $d_{ij}$ for each $(i,j) \in \mathcal{E}$ and $\delta_{ik}$ for each $(i,k) \in \mathcal{E}_a$, and we collect the scalar variables into the stacked vectors $\boldsymbol{d}$ and $\boldsymbol{\delta}$. Second, we rewrite the cost function of the ML estimator as dependent only on the pair $(\boldsymbol{d}, \boldsymbol{\delta})$ as

$$F(\boldsymbol{d}, \boldsymbol{\delta}) = -\sum_{(i,j)\in\mathcal{E}} \ln f_{ij}(r_{ij}|d_{ij}) \; - \sum_{(i,k)\in\mathcal{E}_a} \ln g_{ik}(e_{ik}|\delta_{ik}) \quad\quad (4)$$

Third, we re-introduce the dependencies of $d_{ij}$ on $X$ and of $\delta_{ik}$ on $X$ by considering the following constrained optimization, with $Y = X^{\mathsf{T}} X$ and $y_{ij} = [Y]_{ij}$:

$$\min_{X, Y, \boldsymbol{d}, \boldsymbol{\delta}} \; F(\boldsymbol{d}, \boldsymbol{\delta}) \quad\quad (5a)$$
$$\text{subject to} \quad d_{ij}^2 = y_{ii} - 2 y_{ij} + y_{jj}, \;\; (i,j) \in \mathcal{E} \quad\quad (5b)$$
$$\delta_{ik}^2 = y_{ii} - 2\,\boldsymbol{a}_k^{\mathsf{T}}\boldsymbol{x}_i + \|\boldsymbol{a}_k\|^2, \;\; (i,k) \in \mathcal{E}_a \quad\quad (5c)$$
$$Y = X^{\mathsf{T}} X \quad\quad (5d)$$
The problem (5) is equivalent to (3): the constraints in the problem (5) have both the scope of imposing the pair-wise distance relations and of enforcing the chosen change of variables (in fact, without the constraints, all the variables would be independent of each other). In the new variables and under Assumption 1, $F(\boldsymbol{d}, \boldsymbol{\delta})$ is a convex function; however, the constraints of (5) still define a non-convex set. Nonetheless, we can massage the constraints by using Schur complements and propose the following convex relaxation

$$\min_{X, Y, \boldsymbol{d}, \boldsymbol{\delta}} \; F(\boldsymbol{d}, \boldsymbol{\delta}) \quad\quad (6a)$$
$$\text{subject to} \quad \begin{bmatrix} 1 & d_{ij} \\ d_{ij} & y_{ii} - 2 y_{ij} + y_{jj} \end{bmatrix} \succeq 0, \;\; (i,j) \in \mathcal{E} \quad\quad (6b)$$
$$\begin{bmatrix} 1 & \delta_{ik} \\ \delta_{ik} & y_{ii} - 2\,\boldsymbol{a}_k^{\mathsf{T}}\boldsymbol{x}_i + \|\boldsymbol{a}_k\|^2 \end{bmatrix} \succeq 0, \;\; (i,k) \in \mathcal{E}_a \quad\quad (6c)$$
$$\begin{bmatrix} I_d & X \\ X^{\mathsf{T}} & Y \end{bmatrix} \succeq 0 \quad\quad (6d)$$
The problem (6) is now convex (specifically, it is a convex optimization problem with generalized inequality constraints [42]) and its optimal solution represents a lower bound for the original non-convex ML estimator (3).

In the problem (6), all the three constraints (6b) till (6d) are rank relaxed versions of (5b) till (5d), which makes problem (6) a rank relaxation. Usually, convex relaxations for sensor network localization are formulated directly on the squared distance variables $y_{ii} - 2y_{ij} + y_{jj}$ using a cost function (not ML) and eliminating the variables $d_{ij}$ and $\delta_{ik}$. This way of formulating the problem does not capture the noise distribution, but renders the resulting relaxation a rank-$d$ relaxation, since (6d) is the only relaxed constraint [21]. Problem (6) both models the noise distribution correctly, being derived from an ML formulation, and for some commonly used noise PDFs can be transformed into a rank-$d$ relaxation, in which case it is equivalent in tightness to relaxations based on squared distances alone.
In the next subsections, we specify the convex relaxation (6) for different noise distributions (satisfying Assumption 1) and prove that (6) can be expressed as a rank-$d$ relaxation for two particular yet widely used cases. In Section VII, while presenting simulation results, we discuss how this aspect can affect the quality of the position estimation. In particular, it appears that tighter relaxations may have a lower estimation error, even when they employ less accurate noise models.
B. Example 1—Gaussian Noise Relaxation

In the case of Gaussian noise, we assume that the noises $\nu_{ij}$ and $\omega_{ik}$ in the sensing equations (1) and (2) are drawn from a white zero-mean Gaussian PDF, i.e., $\nu_{ij} \sim \mathcal{N}(0, \sigma^2)$ and $\omega_{ik} \sim \mathcal{N}(0, \sigma^2)$. The cost function then is

$$F(\boldsymbol{d}, \boldsymbol{\delta}) = \frac{1}{2\sigma^2}\Big(\sum_{(i,j)\in\mathcal{E}} (r_{ij} - d_{ij})^2 + \sum_{(i,k)\in\mathcal{E}_a} (e_{ik} - \delta_{ik})^2\Big)$$

A natural way to rewrite this cost is to enforce the change of variables $d_{ij}^2 \to y_{ii} - 2y_{ij} + y_{jj}$ and $\delta_{ik}^2 \to y_{ii} - 2\,\boldsymbol{a}_k^{\mathsf{T}}\boldsymbol{x}_i + \|\boldsymbol{a}_k\|^2$, yielding

$$\hat{F} = \frac{1}{2\sigma^2}\Big(\sum_{(i,j)\in\mathcal{E}} \big(r_{ij}^2 - 2 r_{ij} d_{ij} + y_{ii} - 2y_{ij} + y_{jj}\big) + \sum_{(i,k)\in\mathcal{E}_a} \big(e_{ik}^2 - 2 e_{ik} \delta_{ik} + y_{ii} - 2\,\boldsymbol{a}_k^{\mathsf{T}}\boldsymbol{x}_i + \|\boldsymbol{a}_k\|^2\big)\Big)$$

With the cost $\hat{F}$, the optimization problem reads¹

$$\min_{X, Y, \boldsymbol{d}, \boldsymbol{\delta}} \; \hat{F} \quad \text{subject to (6b), (6c), (6d)} \quad\quad (7)$$

This relaxation is not only convex but also a semidefinite program (SDP), i.e., it has a linear cost function and generalized linear constraints [42]. Some of its constraints are linear matrix inequalities (LMIs). For the semidefinite program (7), the following proposition holds true.
¹ A similar formulation for this relaxation can be found in [21]. We note that problem (7) is not equivalent to (6) with cost function $F$, since for (6), $d_{ij}^2 \le y_{ii} - 2y_{ij} + y_{jj}$ and $\delta_{ik}^2 \le y_{ii} - 2\,\boldsymbol{a}_k^{\mathsf{T}}\boldsymbol{x}_i + \|\boldsymbol{a}_k\|^2$.

Proposition 1: Under the assumption of Gaussian noise, the semidefinite program (7) is a rank-$d$ relaxation of the original non-convex optimization problem (3).

Proof: We need to show that at optimality the relaxed constraints (6b) and (6c) are equivalent to the original constraints (5b) and (5c). In other words, we need to show that any optimal solution of the semidefinite program (7), say $(X^\star, Y^\star, \boldsymbol{d}^\star, \boldsymbol{\delta}^\star)$, satisfies the following

$$(d_{ij}^\star)^2 = y_{ii}^\star - 2 y_{ij}^\star + y_{jj}^\star, \qquad (\delta_{ik}^\star)^2 = y_{ii}^\star - 2\,\boldsymbol{a}_k^{\mathsf{T}}\boldsymbol{x}_i^\star + \|\boldsymbol{a}_k\|^2$$

for all $(i,j) \in \mathcal{E}$ and for all $(i,k) \in \mathcal{E}_a$. To see this, note that the LMIs in the constraints (6b) and (6c) can be rewritten as

$$d_{ij}^2 \le y_{ii} - 2 y_{ij} + y_{jj}, \qquad \delta_{ik}^2 \le y_{ii} - 2\,\boldsymbol{a}_k^{\mathsf{T}}\boldsymbol{x}_i + \|\boldsymbol{a}_k\|^2 \quad\quad (8)$$

The cost function (7a) maximizes the scalar variables $d_{ij}$ and $\delta_{ik}$, which are constrained only by (8). Therefore at optimality, we will always have equality in (8), and thus the claim holds. ∎
C. Example 2—Quantized Observation Relaxation
An interesting, and realistic, elaboration of the ML estimator is when, due to limited sensing capabilities, the sensors produce a quantized version of $r_{ij}$ and $e_{ik}$ (see the discussion in [43], [44] for its relevance in sensor networks). Consider an $L$-element convex tessellation of $\mathbb{R}_{\ge 0}$, comprised of the convex sets $Q_1, \dots, Q_L$. A quantization of $r_{ij}$ and $e_{ik}$ produces the observations $q_{ij,l}$ and $p_{ik,l}$, which are unitary if $r_{ij} \in Q_l$ and $e_{ik} \in Q_l$, respectively. Otherwise $q_{ij,l}$ and $p_{ik,l}$ are zero. The resulting cost function for the convex relaxation (6) is now

$$F_{\rm q}(\boldsymbol{d}, \boldsymbol{\delta}) = -\sum_{(i,j)\in\mathcal{E}} \sum_{l=1}^{L} q_{ij,l} \ln \int_{Q_l} f_{ij}(r|d_{ij})\,\mathrm{d}r \; - \sum_{(i,k)\in\mathcal{E}_a} \sum_{l=1}^{L} p_{ik,l} \ln \int_{Q_l} g_{ik}(e|\delta_{ik})\,\mathrm{d}e$$

which is convex, since the integral of a log-concave function over a convex set is also log-concave. The resulting convex relaxation reads

$$\min_{X, Y, \boldsymbol{d}, \boldsymbol{\delta}} \; F_{\rm q}(\boldsymbol{d}, \boldsymbol{\delta}) \quad \text{subject to (6b), (6c), (6d)} \quad\quad (9)$$

which is a rank relaxation of (3), but in general not a rank-$d$ relaxation. We can specify (9a) for Gaussian noise (using the same variable enforcing of $d_{ij}^2$ and $\delta_{ik}^2$) as done in the equation at the bottom of the page. It is not difficult to show that the convex relaxation (9) equipped with this quantized-Gaussian cost is now a rank-$d$ relaxation, by using similar arguments as in Proposition 1.
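As a small numerical illustration of the quantized-Gaussian case: the cell integral in the cost reduces to a difference of normal CDFs, so each per-edge term can be evaluated directly. The function name and cell boundaries below are illustrative assumptions of this sketch.

```python
import numpy as np
from scipy.stats import norm

def neg_log_cell_prob(d_ij, cell, sigma):
    """-ln P(r in Q_l | d_ij) for Gaussian noise: one per-edge term of the
    quantized ML cost. 'cell' holds the boundaries (lo, hi) of the set Q_l."""
    lo, hi = cell
    p = norm.cdf((hi - d_ij) / sigma) - norm.cdf((lo - d_ij) / sigma)
    return -np.log(p)

# The cell probability is the integral of a log-concave density over a
# convex set, hence log-concave in d_ij; the term above is convex in d_ij.
cells = [(0.0, 0.1), (0.1, 0.2), (0.2, np.inf)]   # illustrative tessellation
print(neg_log_cell_prob(0.15, cells[1], sigma=0.02))
```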
D. Example 3—Laplacian Noise Relaxation

Laplacian noise is used, for example, to model outliers in range measurements [45] and to model errors coming from signal interference, e.g., in UWB localization systems [46]. In the Laplacian noise case (with zero mean and scale parameter $b$), the cost function can be specified as

$$F(\boldsymbol{d}, \boldsymbol{\delta}) = \frac{1}{b}\Big(\sum_{(i,j)\in\mathcal{E}} |r_{ij} - d_{ij}| + \sum_{(i,k)\in\mathcal{E}_a} |e_{ik} - \delta_{ik}|\Big)$$

and the ML convex relaxation reads

$$\min_{X, Y, \boldsymbol{d}, \boldsymbol{\delta}} \; F(\boldsymbol{d}, \boldsymbol{\delta}) \quad \text{subject to (6b), (6c), (6d)} \quad\quad (10)$$

This ML convex relaxation is neither a rank-$d$ relaxation, nor can it be transformed into one by some variable enforcing in the cost function, yet it correctly models Laplacian PDFs.
E. Example 4—Uniform Noise Relaxation

Uniform noise distributions are used when the source of error is not known a priori and only a bound on the noise level is available. For example, this is the case when we are aware of a lower bound on the pair-wise distances and of an upper bound dictated by connectivity [47], [48]. Considering uniform noise PDFs in the range $[-\epsilon_r, \epsilon_r]$ for $\nu_{ij}$ and $[-\epsilon_e, \epsilon_e]$ for $\omega_{ik}$, the convex relaxation (6) becomes the following feasibility problem

$$\text{find } X, Y, \boldsymbol{d}, \boldsymbol{\delta} \;\text{ such that }\; |r_{ij} - d_{ij}| \le \epsilon_r \;\; \forall (i,j)\in\mathcal{E}, \quad |e_{ik} - \delta_{ik}| \le \epsilon_e \;\; \forall (i,k)\in\mathcal{E}_a, \;\text{ and (6b)–(6d) hold} \quad\quad (11)$$

Also in this case, the ML convex relaxation is neither a rank-$d$ relaxation, nor can it be transformed into one by some variable enforcing in the cost function, yet it correctly models uniform noise distributions.

IV. EDGE-BASED CONVEX RELAXATIONS
The convex relaxations derived from (6) couple arbitrarily far away sensor nodes through the LMI constraint (6d). This complicates the design of a distributed optimization algorithm. In addition, due to (6d), the complexity of solving the semidefinite program (6) scales at least as $O(n^3)$, i.e., it is at least cubic in the number of sensor nodes [42], and it could become unfeasible for large-scale networks. In order to massage this coupling constraint, we introduce a further relaxation of (6), which will be called the edge-based ML (E-ML) relaxation. We consider the following relaxation of (6)

$$\min_{X, Y, \boldsymbol{d}, \boldsymbol{\delta}} \; F(\boldsymbol{d}, \boldsymbol{\delta}) \quad\quad (12a)$$
$$\text{subject to} \quad \text{(6b) and (6c)} \quad\quad (12b)$$
$$\begin{bmatrix} I_d & \boldsymbol{x}_i & \boldsymbol{x}_j \\ \boldsymbol{x}_i^{\mathsf{T}} & y_{ii} & y_{ij} \\ \boldsymbol{x}_j^{\mathsf{T}} & y_{ij} & y_{jj} \end{bmatrix} \succeq 0, \;\; (i,j) \in \mathcal{E} \quad\quad (12c)$$

This relaxation employs the same idea as the edge-based semidefinite program (ESDP) relaxation of [24], [25] of considering the coupling constraint (6d) to be valid on the edges only. Since the constraint (6d) implies (12c) but not the contrary, the relaxation (12) is not a rank-$d$ relaxation. However, it is straightforward to see that, if the original convex relaxation (6) was a rank-$d$ relaxation, then for the derived (12), the edge-wise distance equalities of (5b) and (5c) would still hold at optimality. For example, this is the case for Gaussian noise, and we show how this can play an important role for the accuracy in Section VII.

The convex relaxation (12) is now ready to be distributed among the sensor nodes.
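Concretely, under the reconstructed notation above, the single $(d+n)$-dimensional LMI (6d) is traded for one small $(d+2)$-dimensional LMI per edge. A hedged CVXPY sketch of this substitution, reusing the variables `X`, `Y`, `d`, and `E` from the sketch in Section III-B, could read:

```python
import cvxpy as cp
import numpy as np

# Replaces the global constraint (6d) of the earlier sketch by the per-edge
# constraints (12c): each edge (i, j) gets its own small PSD block.
edge_cons = []
for (i, j) in E:
    M = cp.Variable((d + 2, d + 2), PSD=True)   # [[I_d, x_i, x_j]; ...]
    edge_cons += [
        M[:d, :d] == np.eye(d),
        M[:d, d] == X[:, i],
        M[:d, d + 1] == X[:, j],
        M[d, d] == Y[i, i],
        M[d, d + 1] == Y[i, j],
        M[d + 1, d + 1] == Y[j, j],
    ]
# Now only variables of neighboring nodes appear in a common constraint,
# which is what makes the edge-based program separable across the network.
```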
V. DISTRIBUTED ALGORITHMS FOR SENSOR NETWORK LOCALIZATION
Different distributed methods for sensor network localization have been proposed in recent years. A first group consists of heuristic algorithms, which are typically based on the paradigm of dividing the nodes into arbitrarily selected clusters, solving the localization problem within every cluster, and then patching together the different solutions. Methods that belong to this group are [49]–[51], while heuristic approaches to SDP relaxations are discussed in [47]. Among the disadvantages of the heuristic approaches is that we introduce arbitrariness into the problem and we typically lose all the performance guarantees of the “father” centralized approach. Furthermore, very often these heuristic methods are ad hoc and problem-dependent, which makes their theoretical characterization difficult (in contrast with the usage of well-established decomposition methods [52]).
The second group of methods employs decomposition techniques to guarantee that the distributed scheme converges to the centralized formulation asymptotically. In this group, under the Gaussian noise assumption, we can find methods that tackle directly the non-convex optimization problem (3) with parallel gradient-descent iterative schemes [53], [54] or (very recently) a work that uses a majorization-minimization technique to massage (3) sequentially and then employs the alternating direction method of multipliers (ADMM) to distribute the computations among the sensor nodes [55]. These approaches have certificates of convergence to a local minimum of the original non-convex problem.² Other methods encompass algorithms that tackle multi-dimensional scaling with a communication-intensive distributed spectral decomposition [56], and algorithms that tackle instead the convex SOCP/SDP relaxations [40], [41], [57]. In particular, [57] proposes a parallel distributed version of an SOCP relaxation (similar to the ESDP in [24]), whose convergence properties are however not analyzed.³ In [40], the authors propose a further improvement of [57] based on the Gauss–Seidel algorithm, which is sequential in nature (meaning that sensors have to wait for each other before running their own local algorithm) and offers convergence guarantees to the ESDP of [24]. However, due to the sequential nature, the convergence rate depends on the number of sensor nodes, which makes the approach impractical for large-scale networks. Finally, in [41] duality is exploited to design inexact primal-dual iterative algorithms based on the convex relaxation of [22], [23], [33]. This last approach has the advantage of being parallel rather than sequential; nonetheless, it is based on consensus algorithms whose convergence rate also depends on the size of the network, and it is thus less practical for a large number of sensor nodes.
In the next section, we propose a distributed algorithm based on ADMM to solve the edge-based convex relaxation (12). The algorithm is proven to converge to the centralized optimizer as $O(1/k)$, where $k$ is the number of iterations. Furthermore, the computation and communication per iteration and per node do not depend on the size of the network, but only on the size of each node's neighborhood. Finally, we prove that the algorithm converges also in the case of asynchronous communication protocols and computation errors, making it robust to these two common issues in sensor networks.
VI. PROPOSED DISTRIBUTED APPROACH

A. Preliminaries and Background on ADMM

In order to present our distributed algorithm, first of all, we rewrite the convex program (12) in a more compact way. Define, for each edge $(i,j) \in \mathcal{E}$, a shared vector collecting the variables that nodes $i$ and $j$ have in common, and call $\boldsymbol{z}$ the stacked vector comprised of all these shared vectors. In a similar fashion, define for each node $i$ a local vector collecting $\boldsymbol{x}_i$ and the concatenated local copies of the edge variables for all $j \in \mathcal{N}_i$, and call $\boldsymbol{\lambda}$ the stacked vector of all the local vectors.
² This may not be sufficient for a reasonable localization; thus the need for a good starting condition, which can be provided by convex relaxations; see [33] for some interesting numerical examples.

³ As a matter of fact, the proposed Jacobi-like algorithm is very hard to prove convergent to the centralized solution, since the constraints are coupled and not Cartesian; see [52] for a detailed discussion.
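Algorithm 1 itself, and its local subproblem, lie beyond this excerpt, so the following is only a generic edge-based consensus ADMM skeleton on a toy scalar problem, meant to show the message pattern the text describes: local updates that use neighbor data only, per-edge shared variables, and dual updates. All names and the quadratic local costs are illustrative assumptions, not the paper's Algorithm 1.

```python
import numpy as np

def edge_admm(N, b, rho=1.0, iters=200):
    """Generic edge-based consensus ADMM on toy costs f_i(t) = 0.5*(t - b[i])^2.
    N: dict of neighbor lists; b: local data; returns the local estimates."""
    n = len(b)
    edges = [(i, j) for i in range(n) for j in N[i] if i < j]
    theta = np.zeros(n)              # local variables, one per node
    z = {ed: 0.0 for ed in edges}    # shared per-edge variables
    u = {}                           # per-(node, edge) dual variables
    for (i, j) in edges:
        u[(i, j)] = 0.0
        u[(j, i)] = 0.0
    for _ in range(iters):
        # 1) Local update: each node solves its own small problem using only
        #    quantities associated with its incident edges.
        for i in range(n):
            acc = 0.0
            for j in N[i]:
                ed = (i, j) if i < j else (j, i)
                acc += z[ed] - u[(i, j)]
            theta[i] = (b[i] + rho * acc) / (1.0 + rho * len(N[i]))
        # 2) Edge update: neighbors agree by averaging their two copies.
        for (i, j) in edges:
            z[(i, j)] = 0.5 * (theta[i] + u[(i, j)] + theta[j] + u[(j, i)])
        # 3) Dual update: integrate the remaining disagreement.
        for (i, j) in edges:
            u[(i, j)] += theta[i] - z[(i, j)]
            u[(j, i)] += theta[j] - z[(i, j)]
    return theta

# On a connected path graph, all estimates converge to the network average.
print(edge_admm({0: [1], 1: [0, 2], 2: [1]}, b=np.array([1.0, 2.0, 3.0])))
```

Per iteration, each node exchanges only its edge variables with its neighbors, so the per-node communication cost depends on the neighborhood size rather than on the network size, which is the property the text emphasizes.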

References

S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge University Press, 2004.

S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.

J. F. Sturm, “Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones,” Optimization Methods and Software, vol. 11–12, pp. 625–653, 1999.

D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Englewood Cliffs, NJ: Prentice-Hall, 1989.

N. Patwari, J. N. Ash, S. Kyperountas, A. O. Hero, R. L. Moses, and N. S. Correal, “Locating the nodes: Cooperative localization in wireless sensor networks,” IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 54–69, 2005.