Distributed Maximum Likelihood Sensor Network Localization
References
Convex Optimization
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones
Parallel and Distributed Computation: Numerical Methods
Locating the nodes: cooperative localization in wireless sensor networks
Frequently Asked Questions (17)
Q2. What are the future works in "Distributed maximum likelihood sensor network localization" ?
Among future research plans, the authors are interested in studying mobile sensor network localization problems by using convex relaxations based on a maximum a posteriori formulation of the estimation problem.
Q3. What is the second group of methods?
The second group of methods employs decomposition techniques that guarantee the distributed scheme converges asymptotically to the solution of the centralized formulation.
Q4. What is the convergence rate of the Gauss-Seidel algorithm?
The convergence rate depends on two factors: the convergence of the decentralized spectral decomposition algorithm, whose number of sub-iterations scales with the mixing time of a random walk on the communication graph, and the convergence rate of the primal-dual scheme, which is proven in the ergodic sense.
Q5. What is the disadvantage of the heuristic approach?
Among the disadvantages of heuristic approaches is that they introduce arbitrariness into the problem and typically lose all the performance guarantees of the "father" centralized approach.
Q6. What is the convergence rate for Gauss-Seidel?
For the convergence rate, the best that can be expected from a Gauss-Seidel algorithm (under some strong assumptions on the constraints and cost function) is linear [52], i.e., the error decreases as O(c^k) for a certain problem-dependent and a priori unknown constant c in (0, 1).
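Linear convergence of a Gauss-Seidel (block-coordinate) scheme can be seen on a toy problem. The sketch below (illustrative only, not the paper's localization problem) minimizes a strongly convex quadratic by cyclic exact coordinate minimization; the successive error ratios settle near a constant c in (0, 1), which is exactly the O(c^k) behavior described above.

```python
import numpy as np

# Toy illustration (not the paper's problem): Gauss-Seidel / coordinate
# minimization of the strongly convex quadratic f(x) = 0.5 x^T A x - b^T x,
# which converges linearly: err_k ~ c^k for a problem-dependent c in (0, 1).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)      # symmetric positive definite
b = rng.standard_normal(4)
x_star = np.linalg.solve(A, b)   # exact minimizer, used only as a reference

x = np.zeros(4)
errs = []
for k in range(30):
    for i in range(4):
        # Exact minimization over coordinate i, the others held fixed:
        # solve A[i] @ x = b[i] for x[i].
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    errs.append(np.linalg.norm(x - x_star))

# Successive error ratios settle near a constant c < 1: linear rate.
ratios = [errs[k + 1] / errs[k] for k in range(2, 8)]
print(ratios)
```

The constant c is problem-dependent (here it is governed by the spectrum of A), matching the "a priori unknown" caveat in the answer above.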
Q7. How much is the communication cost per iteration for the sensor?
The communication cost per iteration for the one active sensor is proportional to the number of scalar variables that have to be sent (the updated local variables) multiplied by the number of sensor nodes they have to be sent to (the neighbors), yielding a cost given by this product.
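As a worked instance of this accounting, the snippet below uses hypothetical numbers (not taken from the paper) to evaluate the per-iteration message count for one active node:

```python
# Hypothetical numbers (not from the paper): the per-iteration communication
# cost for the single active sensor is
#   (#scalar variables sent) x (#neighbors they are sent to).
num_scalar_vars = 5   # e.g., updated local estimate plus auxiliary variables
num_neighbors = 4     # degree of the active node in the communication graph
cost = num_scalar_vars * num_neighbors
print(cost)  # 20 scalars transmitted in this iteration
```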
Q8. What is the main reason of ADMM's use in this paper?
The strength of ADMM, and the main reason for its use in this paper, lies in its resilience to noise and computation errors, as well as in the very loose assumptions required to guarantee its convergence (in contrast with typical dual or primal-dual decomposition schemes).
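The error resilience can be illustrated on a minimal consensus-averaging problem. The sketch below is generic consensus ADMM (not the paper's Algorithm 1; all symbols are illustrative): each agent i holds a datum a_i and a local copy x_i, and ADMM enforces x_i = z. Small perturbations are deliberately injected into the local x-updates, yet the consensus variable still lands near the exact average.

```python
import numpy as np

# Generic consensus ADMM sketch (illustrative, not the paper's Algorithm 1).
# minimize sum_i 0.5*(x_i - a_i)^2  subject to  x_i = z for all i.
rng = np.random.default_rng(1)
a = np.array([1.0, 3.0, 5.0, 7.0, 9.0])  # local data; consensus target = 5.0
n, rho = len(a), 1.0
x, u, z = np.zeros(n), np.zeros(n), 0.0  # primal copies, scaled duals, consensus

for _ in range(100):
    # x-update: argmin 0.5*(x_i - a_i)^2 + (rho/2)*(x_i - z + u_i)^2,
    # deliberately corrupted by small "computation noise".
    x = (a + rho * (z - u)) / (1 + rho) + 1e-3 * rng.standard_normal(n)
    z = np.mean(x + u)   # consensus (z) update
    u = u + x - z        # scaled dual (multiplier) update

print(z)  # close to mean(a) = 5.0 despite the injected noise
```

This mirrors, in miniature, the property the answer highlights: the iterates absorb bounded per-step errors instead of diverging.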
Q9. What is the way to solve the sensor network localization problem?
Convex relaxations for sensor network localization are formulated directly on the squared-distance variables, using a (non-ML) cost function and eliminating the original position variables.
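The distinction between the two cost models can be made concrete. In the sketch below (notation assumed, not from the paper), the Gaussian ML cost penalizes range errors, while the classic squared-distance formulations penalize errors in the squared ranges; both vanish at the true configuration, but they weight the same deviation very differently.

```python
import numpy as np

# Illustrative comparison (notation assumed, not from the paper):
# ML-style cost:            sum (||x_i - x_j|| - d_ij)^2
# squared-distance cost:    sum (||x_i - x_j||^2 - d_ij^2)^2
def ml_cost(X, pairs, d):
    return sum((np.linalg.norm(X[i] - X[j]) - d[k]) ** 2
               for k, (i, j) in enumerate(pairs))

def sq_dist_cost(X, pairs, d):
    return sum((np.linalg.norm(X[i] - X[j]) ** 2 - d[k] ** 2) ** 2
               for k, (i, j) in enumerate(pairs))

X_true = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pairs = [(0, 1), (0, 2), (1, 2)]
d = [np.linalg.norm(X_true[i] - X_true[j]) for i, j in pairs]

# Both costs vanish at the true configuration ...
assert ml_cost(X_true, pairs, d) < 1e-12
# ... but they weight the same geometric error very differently, which is one
# way the squared-distance cost departs from the Gaussian ML noise model.
X_far = X_true * 3.0
print(ml_cost(X_far, pairs, d), sq_dist_cost(X_far, pairs, d))
```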
Q10. How can the authors take full advantage of this aspect?
In order to take full advantage of this aspect, the authors have shown that the relaxation has to be as tight as possible to the original non-convex problem (in some cases, disregarding the noise model).
Q11. What is the effect of tighter relaxations on the estimation error?
In particular, it appears that tighter relaxations may have a lower estimation error, even when they employ less accurate noise models.
Q12. What is the way to solve the semidefinite program?
Proposition 1: Under the assumption of Gaussian noise, the semidefinite program (7) is a rank relaxation (obtained by dropping a rank constraint) of the original non-convex optimization problem (3).
Q13. What does Proposition 2 say about the number of iterations for a given average local?
Proposition 2 says that the number of iterations for a given average local accuracy does not depend on the network size, but only on the worst local initial error.
Q14. What is the complex operation for each sensor node?
In E-ML with ADMM (Algorithm 1), at each iteration the most complex operation for each sensor node is solving the convex program (19).
Q15. What is the proof of the convex problem?
Consider ([63], Theorem 3): Assumption 1 is valid since, in problem (15), the constraint sets are closed and convex, and the cost functions are proper and convex.
Q16. What is the disadvantage of the heuristic methods?
Very often these heuristic methods are ad hoc and problem-dependent, which makes their theoretical characterization difficult (in contrast with the use of well-established decomposition methods [52]).
Q17. What is the shorthand notation for the vector of multipliers?
The first step in deriving the ADMM algorithm is, given a scalar regularization parameter, defining the regularized Lagrangian of problem (15) as in (16), where shorthand notation is used both for the stacked vector of primal variables and for the stacked vector of multipliers.
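For reference, a regularized (augmented) Lagrangian in ADMM typically takes the following standard form; the symbols below are generic and only illustrate the construction behind (16), not the paper's exact notation:

```latex
% Standard augmented Lagrangian used by ADMM (generic symbols, assumed here):
% rho > 0 is the regularization (penalty) parameter, lambda the multipliers.
L_{\rho}(x, z, \lambda)
  = f(x) + g(z)
  + \lambda^{\top}\!\left(Ax + Bz - c\right)
  + \frac{\rho}{2}\,\bigl\lVert Ax + Bz - c \bigr\rVert_2^{2}
```

The quadratic penalty term is what distinguishes this "regularized" Lagrangian from the ordinary one and is the source of ADMM's robustness properties discussed in Q8.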