scispace - formally typeset

Showing papers on "Particle filter published in 2016"


Journal ArticleDOI
TL;DR: A useful definition of ‘big data’ is data that is too big to process comfortably on a single machine, either because of processor, memory, or disk bottlenecks, so there is a need for algorithms that perform distributed approximate Bayesian analyses with minimal communication.
Abstract: A useful definition of ‘big data’ is data that is too big to process comfortably on a single machine, either because of processor, memory, or disk bottlenecks. Graphics processing units can allevia...

418 citations


Journal ArticleDOI
TL;DR: The R package pomp as mentioned in this paper provides a very flexible framework for Monte Carlo statistical investigations using nonlinear, non-Gaussian POMP models, including iterated filtering, particle Markov chain Monte Carlo, approximate Bayesian computation, maximum synthetic likelihood estimation and trajectory matching.
Abstract: Partially observed Markov process (POMP) models, also known as hidden Markov models or state space models, are ubiquitous tools for time series analysis. The R package pomp provides a very flexible framework for Monte Carlo statistical investigations using nonlinear, non-Gaussian POMP models. A range of modern statistical methods for POMP models have been implemented in this framework including sequential Monte Carlo, iterated filtering, particle Markov chain Monte Carlo, approximate Bayesian computation, maximum synthetic likelihood estimation, nonlinear forecasting, and trajectory matching. In this paper, we demonstrate the application of these methodologies using some simple toy problems. We also illustrate the specification of more complex POMP models, using a nonlinear epidemiological model with a discrete population, seasonality, and extra-demographic stochasticity. We discuss the specification of user-defined models and the development of additional methods within the programming environment provided by pomp.
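The bootstrap particle filter at the core of such SMC frameworks fits in a few lines. A minimal sketch in Python (not pomp's implementation; the linear-Gaussian toy model, the 0.9 autoregression coefficient, and all noise scales are illustrative assumptions):

```python
import numpy as np

def bootstrap_filter(y, n_particles=500, sigma_x=1.0, sigma_y=1.0, rng=None):
    """Bootstrap particle filter for the toy state-space model
    x_t = 0.9 x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.normal(0.0, 1.0, n_particles)                    # initial particle cloud
    means = []
    for yt in y:
        x = 0.9 * x + rng.normal(0.0, sigma_x, n_particles)  # propagate through the process model
        logw = -0.5 * ((yt - x) / sigma_y) ** 2              # log-likelihood weights
        w = np.exp(logw - logw.max())                        # stabilized in log space
        w /= w.sum()
        means.append(np.sum(w * x))                          # filtering mean
        x = x[rng.choice(n_particles, n_particles, p=w)]     # multinomial resampling
    return np.array(means)
```

In pomp, the analogous ingredients (a process simulator and a measurement density) are supplied by the user and then reused across methods such as iterated filtering and particle MCMC.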

242 citations


Journal ArticleDOI
TL;DR: An improved particle filter based on the Pearson correlation coefficient (PPC) is proposed to reduce the disadvantages of particle degeneracy and sample impoverishment, and it performs better than the generic PF.

220 citations


Journal ArticleDOI
TL;DR: This article provides a general overview of the concepts involved in Bayesian experimental design, describing some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms used to search over the design space to find the Bayesian optimal design.
Abstract: Bayesian experimental design is a fast growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which involve a number of algorithms, such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayes methods, facilitating more complex design problems to be solved. The Bayesian framework provides a unified approach for incorporating prior information and/or uncertainties regarding the statistical model with a utility function which describes the experimental aims. In this paper, we provide a general overview on the concepts involved in Bayesian experimental design, and focus on describing some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms that are used to search over the design space to find the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.
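The most common utility in this literature, expected information gain, can be estimated with a nested Monte Carlo loop. A hedged toy sketch (the linear-Gaussian design problem, the sample sizes, and the function name `eig_nested_mc` are all illustrative assumptions, not from the paper):

```python
import numpy as np

def eig_nested_mc(d, n_outer=1000, n_inner=2000, rng=None):
    """Nested Monte Carlo estimate of expected information gain for the toy
    design problem: theta ~ N(0,1), y | theta, d ~ N(d*theta, 1)."""
    rng = np.random.default_rng(0) if rng is None else rng
    theta = rng.normal(size=n_outer)
    y = d * theta + rng.normal(size=n_outer)
    # conditional log-density of each y under the theta that generated it
    log_lik = -0.5 * (y - d * theta) ** 2 - 0.5 * np.log(2 * np.pi)
    # marginal log p(y | d), estimated with a fresh inner prior sample
    theta_in = rng.normal(size=n_inner)
    log_marg = np.empty(n_outer)
    for i in range(n_outer):
        li = -0.5 * (y[i] - d * theta_in) ** 2 - 0.5 * np.log(2 * np.pi)
        log_marg[i] = np.logaddexp.reduce(li) - np.log(n_inner)
    return np.mean(log_lik - log_marg)
```

For this conjugate toy the exact answer is 0.5·log(1 + d²), which makes the sketch checkable; real design problems replace the closed-form densities with simulator output.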

199 citations


Journal ArticleDOI
TL;DR: In this paper, a nonlinear state-space model for nonlinearity mitigation, carrier recovery, and nanoscale device characterization is proposed, which allows for tracking and compensation of the XPM induced impairments by employing approximate stochastic filtering methods such as extended Kalman or particle filtering.
Abstract: Machine learning techniques relevant for nonlinearity mitigation, carrier recovery, and nanoscale device characterization are reviewed and employed. Markov chain Monte Carlo in combination with Bayesian filtering is employed within the nonlinear state-space framework and demonstrated for parameter estimation. It is shown that the time-varying effects of cross-phase modulation (XPM) induced polarization scattering and phase noise can be formulated within the nonlinear state-space model (SSM). This allows for tracking and compensation of the XPM-induced impairments by employing approximate stochastic filtering methods such as extended Kalman or particle filtering. The achievable gains depend on the autocorrelation (AC) function properties of the impairments under consideration, which are strongly dependent on the transmission scenario. The gains of the compensation method are therefore investigated by varying the parameters of the AC function describing XPM-induced polarization scattering and phase noise. It is shown that an increase in the nonlinear tolerance of more than 2 dB is achievable for 32 Gbaud QPSK and 16-quadrature-amplitude modulation (QAM). It is also reviewed how laser rate equations can be formulated within the nonlinear state-space framework, which allows for tracking of non-Lorentzian laser phase noise lineshapes. It is experimentally demonstrated for 28 Gbaud 16-QAM signals that if the laser phase noise shape strongly deviates from the Lorentzian, phase noise tracking algorithms employing a rate equation-based SSM result in a significant performance improvement (> 8 dB) compared to traditional approaches using a digital phase-locked loop. Finally, the Gaussian mixture model is reviewed and employed for nonlinear phase noise compensation and characterization of nanoscale device structure variations.
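The phase-tracking EKF mentioned above can be illustrated on a toy random-walk phase model. A sketch under stated assumptions (unit-amplitude carrier, white complex measurement noise; linearizing the measurement around the predicted phase reduces the update to the innovation Im(z·e^{-jφ}), and all noise values are illustrative, not the paper's):

```python
import numpy as np

def ekf_phase_track(z, q=1e-3, r=1e-2):
    """Scalar EKF for a random-walk carrier phase observed through
    z_t = exp(j*phi_t) + complex noise, in its reduced phase-detector form."""
    phi, P = 0.0, 1.0
    est = []
    for zt in z:
        P = P + q                             # predict: random-walk process noise
        e = np.imag(zt * np.exp(-1j * phi))   # linearized phase innovation
        K = P / (P + r)                       # Kalman gain of the reduced scalar model
        phi = phi + K * e                     # update phase estimate
        P = (1.0 - K) * P                     # update error variance
        est.append(phi)
    return np.array(est)
```

A particle filter version would replace the linearization with weighted samples of the phase, at higher cost but without the small-angle approximation.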

199 citations


Journal ArticleDOI
01 Jul 2016
TL;DR: The proposed hybrid/fusion prognostics framework was successfully applied on lithium-ion battery remaining useful life prediction and achieved a significantly better accuracy compared to the classical particle filter approach.
Abstract: Highlights: a hybrid/fusion prognostics framework to predict remaining useful life by combining data-driven and model-based methods; introduces a data-driven method to estimate the measurement model in a model-based particle filter framework; introduces a data-driven method to predict future measurements for long-term prediction in a model-based particle filter framework; shows improved prediction accuracy using a battery as a case study. Remaining useful life prediction is one of the key requirements in prognostics and health management. While a system or component exhibits degradation during its life cycle, there are various methods to predict its future performance and assess the time frame until it no longer performs its desired functionality. The proposed data-driven and model-based hybrid/fusion prognostics framework interfaces a classical Bayesian model-based prognostics approach, namely the particle filter, with two data-driven methods for the purpose of improving prediction accuracy. The first data-driven method establishes the measurement model (inferring the measurements from the internal system state) to account for situations where the internal system state is not accessible through direct measurements. The second data-driven method extrapolates the measurements beyond the range of actually available measurements to feed them back to the model-based method, which further updates the particles and their weights during the long-term prediction phase. By leveraging the strengths of the data-driven and model-based methods, the proposed fusion prognostics framework can bridge the gap between data-driven prognostics and model-based prognostics when both abundant historical data and knowledge of the physical degradation process are available.
The proposed framework was successfully applied on lithium-ion battery remaining useful life prediction and achieved a significantly better accuracy compared to the classical particle filter approach.

163 citations


Journal ArticleDOI
TL;DR: In this paper, three model-based filtering algorithms, namely the extended Kalman filter, the unscented Kalman filter, and the particle filter, are used to estimate state-of-charge (SOC), and their performance regarding tracking accuracy, computation time, and robustness against uncertainty in the initial SOC value and battery degradation is compared.

162 citations


Journal ArticleDOI
TL;DR: In this article, a new data assimilation approach based on the particle filter (PF) was proposed for nonlinear/non-Gaussian applications in geoscience, denoted the local PF, which extends the particle weights into vector quantities to reduce the influence of distant observations on the weight calculations via a localization function.
Abstract: This paper presents a new data assimilation approach based on the particle filter (PF) that has potential for nonlinear/non-Gaussian applications in geoscience. Particle filters provide a Monte Carlo approximation of a system’s probability density, while making no assumptions regarding the underlying error distribution. The proposed method is similar to the PF in that particles—also referred to as ensemble members—are weighted based on the likelihood of observations in order to approximate posterior probabilities of the system state. The new approach, denoted the local PF, extends the particle weights into vector quantities to reduce the influence of distant observations on the weight calculations via a localization function. While the number of particles required for standard PFs scales exponentially with the dimension of the system, the local PF provides accurate results using relatively few particles. In sensitivity experiments performed with a 40-variable dynamical system, the local PF require...
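The vector-weight idea can be sketched schematically: each state variable gets its own particle weights, with each observation's contribution tapered toward a neutral factor of 1 as distance grows. A toy sketch (the linear taper stands in for the Gaspari-Cohn function used in practice, and this is only the localized weighting step, not the paper's full update):

```python
import numpy as np

def taper(d, radius):
    """A simple localization taper: 1 at distance 0, decaying to 0 at `radius`
    (a placeholder for the Gaspari-Cohn function)."""
    return np.clip(1.0 - d / radius, 0.0, 1.0)

def local_vector_weights(lik, dist, radius):
    """Schematic localized particle weights.

    lik  : (n_obs, n_particles) likelihoods p(y_j | x_n), scaled so a neutral
           observation contributes a factor of 1 (row mean of 1)
    dist : (n_obs, n_state) distances between observation j and state variable i
    Returns (n_state, n_particles) weights: distant observations are tapered
    toward the neutral factor, so they barely influence that variable."""
    n_state, n_part = dist.shape[1], lik.shape[1]
    w = np.ones((n_state, n_part))
    for j in range(lik.shape[0]):
        loc = taper(dist[j], radius)                       # (n_state,)
        w *= 1.0 + loc[:, None] * (lik[j][None, :] - 1.0)  # blend toward 1
    return w / w.sum(axis=1, keepdims=True)                # normalize per variable
```

With the taper at zero, every particle gets an equal weight for that variable, which is exactly how the local PF avoids collapse from distant observations.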

159 citations


Journal ArticleDOI
Hongwei Xie, Tao Gu, Xianping Tao, Haibo Ye, Jian Lu
TL;DR: This paper proposes a dynamic step length estimation algorithm and a heuristic particle resampling algorithm to minimize errors in motion estimation and improve the robustness of the basic particle filter, and proposes an adaptive sampling algorithm to reduce computation overhead.
Abstract: Using magnetic field data as fingerprints for smartphone indoor positioning has become popular in recent years. Particle filter is often used to improve accuracy. However, most of existing particle filter based approaches either are heavily affected by motion estimation errors, which result in unreliable systems, or impose strong restrictions on smartphone such as fixed phone orientation, which are not practical for real-life use. In this paper, we present a novel indoor positioning system for smartphones, which is built on our proposed reliability-augmented particle filter. We create several innovations on the motion model, the measurement model, and the resampling model to enhance the basic particle filter. To minimize errors in motion estimation and improve the robustness of the basic particle filter, we propose a dynamic step length estimation algorithm and a heuristic particle resampling algorithm. We use a hybrid measurement model, combining a new magnetic fingerprinting model and the existing magnitude fingerprinting model, to improve system performance, and importantly avoid calibrating magnetometers for different smartphones. In addition, we propose an adaptive sampling algorithm to reduce computation overhead, which in turn improves overall usability tremendously. Finally, we also analyze the “Kidnapped Robot Problem” and present a practical solution. We conduct comprehensive experimental studies, and the results show that our system achieves an accuracy of 1–2 m on average in a large building.

148 citations


Journal ArticleDOI
TL;DR: This work analyzes two alternative schemes that do not involve a collective operation, and compares them to standard schemes, finding that, in certain circumstances, the alternative resamplers can perform significantly faster on a GPU, and to a lesser extent on a CPU, than the standard approaches.
Abstract: Modern parallel computing devices, such as the graphics processing unit (GPU), have gained significant traction in scientific and statistical computing. They are particularly well-suited to data-parallel algorithms such as the particle filter, or more generally sequential Monte Carlo (SMC), which are increasingly used in statistical inference. SMC methods carry a set of weighted particles through repeated propagation, weighting, and resampling steps. The propagation and weighting steps are straightforward to parallelize, as they require only independent operations on each particle. The resampling step is more difficult, as standard schemes require a collective operation, such as a sum, across particle weights. Focusing on this resampling step, we analyze two alternative schemes that do not involve a collective operation (Metropolis and rejection resamplers), and compare them to standard schemes (multinomial, stratified, and systematic resamplers). We find that, in certain circumstances, the alternative re...
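The Metropolis resampler referenced above selects each ancestor through pairwise weight ratios only, avoiding the collective prefix-sum over weights. A serial toy sketch (on a GPU each particle's chain would run in its own thread; the chain length of 20 steps is an illustrative assumption):

```python
import numpy as np

def metropolis_resample(w, n_steps=20, rng=None):
    """Metropolis resampler: each particle's ancestor index evolves by a short
    Metropolis chain over weight ratios, so no normalization or collective sum
    across particles is needed."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(w)
    anc = np.arange(n)                          # each particle starts as its own ancestor
    for _ in range(n_steps):
        prop = rng.integers(0, n, size=n)       # uniform ancestor proposals
        accept = rng.random(n) < w[prop] / w[anc]  # accept with prob min(1, w_j / w_k)
        anc = np.where(accept, prop, anc)
    return anc
```

The trade-off noted in the paper is bias from finite chain length against the parallelism gained by dropping the collective operation.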

137 citations


Journal ArticleDOI
TL;DR: A novel particle filtering technique named sequential evolutionary filter (SEF) is introduced, by which the particle impoverishment problem can be effectively mitigated, and the particle diversity can be maintained.
Abstract: Particle impoverishment, a commonly encountered problem in particle filters (PFs), is caused partially by the reduction of particle diversity after resampling. In this paper, a novel particle filtering technique named sequential evolutionary filter (SEF) is introduced, by which the particle impoverishment problem can be effectively mitigated. SEF is proposed based on the genetic algorithm (GA). A GA-inspired strategy is designed and incorporated in SEF. With this strategy, the resampling used in most of the existing PFs is not necessary, and the particle diversity can be maintained. The experimental results also demonstrate the effectiveness of SEF.

Journal ArticleDOI
TL;DR: It is shown that a phrase-based statistical machine translation (SMT) system produces translations of higher quality when using word alignments from EFMARAL than from fast_align, and that translation quality is on par with what is obtained using GIZA++, a tool requiring orders of magnitude more processing time.
Abstract: We present efmaral, a new system for efficient and accurate word alignment using a Bayesian model with Markov Chain Monte Carlo (MCMC) inference. Through careful selection of data structures and mo ...

Posted Content
01 Feb 2016-viXra
TL;DR: In this article, the effective sample size (ESS) is defined as the inverse of the sum of the squares of the normalized importance weights, which is a measure of efficiency of Markov Chain Monte Carlo (MCMC) and Importance Sampling (IS) techniques.
Abstract: The Effective Sample Size (ESS) is an important measure of efficiency of Monte Carlo methods such as Markov Chain Monte Carlo (MCMC) and Importance Sampling (IS) techniques. In the IS context, an approximation $\widehat{ESS}$ of the theoretical ESS definition is widely applied, involving the inverse of the sum of the squares of the normalized importance weights. This formula, $\widehat{ESS}$, has become an essential piece within Sequential Monte Carlo (SMC) methods, to assess the convenience of a resampling step. From another perspective, the expression $\widehat{ESS}$ is related to the Euclidean distance between the probability mass described by the normalized weights and the discrete uniform probability mass function (pmf). In this work, we derive other possible ESS functions based on different discrepancy measures between these two pmfs. Several examples are provided involving, for instance, the geometric mean of the weights, the discrete entropy (including the perplexity measure, already proposed in literature) and the Gini coefficient among others. We list five theoretical requirements which a generic ESS function should satisfy, allowing us to classify different ESS measures. We also compare the most promising ones by means of numerical simulations.
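The standard approximation and the perplexity variant are both one-liners over the normalized weights. A minimal sketch (function names are illustrative):

```python
import numpy as np

def ess_inverse_sum_sq(w):
    """The standard ESS approximation: 1 / sum(w_i^2) over normalized weights."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def ess_perplexity(w):
    """An entropy-based alternative: exp(discrete entropy) of the weights."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    nz = w[w > 0]                       # 0 * log(0) is taken as 0
    return np.exp(-np.sum(nz * np.log(nz)))
```

Both measures equal N for uniform weights and 1 for a degenerate (one-hot) weight vector, which is the behavior the paper's five requirements formalize.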

Journal ArticleDOI
TL;DR: In this article, adaptive sequential Monte Carlo (SMC) sampling strategies are presented to characterize the posterior distribution of a collection of models, as well as the parameters of those models; the performance of the proposed strategies is demonstrated via an extensive empirical study.
Abstract: Model comparison for the purposes of selection, averaging, and validation is a problem found throughout statistics. Within the Bayesian paradigm, these problems all require the calculation of the posterior probabilities of models within a particular class. Substantial progress has been made in recent years, but difficulties remain in the implementation of existing schemes. This article presents adaptive sequential Monte Carlo (SMC) sampling strategies to characterize the posterior distribution of a collection of models, as well as the parameters of those models. Both a simple product estimator and a combination of SMC and a path sampling estimator are considered, and existing theoretical results are extended to include the path sampling variant. A novel approach to the automatic specification of distributions within SMC algorithms is presented and shown to outperform the state of the art in this area. The performance of the proposed strategies is demonstrated via an extensive empirical study. Comparisons w...

Journal ArticleDOI
TL;DR: An original algorithm for state-of-charge estimation using the pseudo two-dimensional model that circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a ‘tether’ particle.

Journal ArticleDOI
TL;DR: A fully parallel and unbiased implementation framework for SMC-PHD filtering is proposed, based on a centralized distributed system consisting of one central unit (CU) and several independent processing elements (PEs).

Journal ArticleDOI
TL;DR: In this article, a gradient descent method is proposed to learn state-feedback controllers for non-linear stochastic control problems using the path integral cross-entropy (PICE) method.
Abstract: Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman–Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers, and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows learning feedback controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.

Journal ArticleDOI
TL;DR: This article provides a carefully formulated asymptotic theory for a class of adaptive sequential Monte Carlo (SMC) methods and, under assumptions, establishes a weak law of large numbers and a central limit theorem (CLT) for some of these algorithms.
Abstract: In several implementations of Sequential Monte Carlo (SMC) methods it is natural and important, in terms of algorithmic efficiency, to exploit the information of the history of the samples to optimally tune their subsequent propagations. In this article we provide a carefully formulated asymptotic theory for a class of such adaptive SMC methods. The theoretical framework developed here will cover, under assumptions, several commonly used SMC algorithms [Chopin, Biometrika 89 (2002) 539-551; Jasra et al., Scand. J. Stat. 38 (2011) 1-22; Schafer and Chopin, Stat. Comput. 23 (2013) 163-184]. There are only limited results about the theoretical underpinning of such adaptive methods: We will bridge this gap by providing a weak law of large numbers (WLLN) and a central limit theorem (CLT) for some of these algorithms. The latter seems to be the first result of its kind in the literature and provides a formal justification of algorithms used in many real data contexts [Jasra et al. (2011); Schafer and Chopin (2013)]. We establish that for a general class of adaptive SMC algorithms [Chopin (2002)], the asymptotic variance of the estimators from the adaptive SMC method is identical to a "limiting" SMC algorithm which uses ideal proposal kernels. Our results are supported by application on a complex high-dimensional posterior distribution associated with the Navier-Stokes model, where adapting high-dimensional parameters of the proposal kernels is critical for the efficiency of the algorithm.

Proceedings Article
19 Jun 2016
TL;DR: A procedure for constructing and learning a structured neural network which represents an inverse factorization of the graphical model, resulting in a conditional density estimator that takes as input particular values of the observed random variables, and returns an approximation to the distribution of the latent variables.
Abstract: We introduce a new approach for amortizing inference in directed graphical models by learning heuristic approximations to stochastic inverses, designed specifically for use as proposal distributions in sequential Monte Carlo methods. We describe a procedure for constructing and learning a structured neural network which represents an inverse factorization of the graphical model, resulting in a conditional density estimator that takes as input particular values of the observed random variables, and returns an approximation to the distribution of the latent variables. This recognition model can be learned offline, independent from any particular dataset, prior to performing inference. The output of these networks can be used as automatically-learned high-quality proposal distributions to accelerate sequential Monte Carlo across a diverse range of problem settings.

Journal ArticleDOI
TL;DR: This tutorial aims to provide an accessible introduction to particle filters, and sequential Monte Carlo (SMC) more generally, which allow for Bayesian inference in complex dynamic state-space models and have become increasingly popular over the last decades.

Journal ArticleDOI
TL;DR: GP emulators are a practical way to implement sophisticated parameter retrieval schemes in an era of increasing data volumes; the variational and particle filter approaches appear more successful than the MCMC approach as a result of using temporal regularisation.
Abstract: There is an increasing need to consistently combine observations from different sensors to monitor the state of the land surface. In order to achieve this, robust methods based on the inversion of radiative transfer (RT) models can be used to interpret the satellite observations. This typically results in an inverse problem, but a major drawback of these methods is the computational complexity. We introduce the concept of Gaussian Process (GP) emulators: surrogate functions that accurately approximate RT models using a small set of input (e.g., leaf area index, leaf chlorophyll, etc.) and output (e.g., top-of-canopy reflectances or at sensor radiances) pairs. The emulators quantify the uncertainty of their approximation, and provide a fast and easy route to estimating the Jacobian of the original model, enabling the use of e.g., efficient gradient descent methods. We demonstrate the emulation of widely used RT models (PROSAIL and SEMIDISCRETE) and the coupling of vegetation and atmospheric (6S) RT models targeting particular sensor bands. A comparison with the full original model outputs shows that the emulators are a viable option to replace the original model, with negligible bias and discrepancies which are much smaller than the typical uncertainty in the observations. We also extend the theory of GP to cope with models with multivariate outputs (e.g., over the full solar reflective domain), and apply this to the emulation of PROSAIL, coupled 6S and PROSAIL and to the emulation of individual spectral components of 6S. In all cases, emulators successfully predict the full model output as well as accurately predict the gradient of the model calculated by finite differences, and produce speed-ups between 10,000 and 50,000 times that of the original model.
Finally, we use emulators to invert leaf area index (LAI), leaf chlorophyll content (Cab) and equivalent leaf water thickness (Cw) from a time series of observations from Sentinel-2/MSI, Sentinel-3/SLSTR and Proba-V observations. We use sophisticated Hamiltonian Markov Chain Monte Carlo (MCMC) methods that exploit the speed of the emulators as well as the gradient estimation, a variational data assimilation (DA) method that extends the problem with temporal regularisation, and a particle filter using a regularisation model. The variational and particle filter approaches appear more successful (meaning parameters closer to the truth, and smaller uncertainties) than the MCMC approach as a result of using the temporal regularisation mode. This work therefore suggests that GP emulators are a practical way to implement sophisticated parameter retrieval schemes in an era of increasing data volumes.
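The conditioning step of such a GP emulator is compact. A minimal sketch (plain RBF-kernel GP regression on a toy one-dimensional function standing in for an RT model; the lengthscale and jitter values are illustrative assumptions):

```python
import numpy as np

def rbf(a, b, length=0.3, var=1.0):
    """Squared-exponential (RBF) covariance between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_emulate(x_train, y_train, x_test, jitter=1e-6):
    """Minimal GP emulator: condition a zero-mean GP on (input, output) pairs
    from the expensive model, then predict mean and variance at new inputs."""
    K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                                   # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(x_test, x_test)) - np.sum(v ** 2, axis=0)  # posterior variance
    return mean, var
```

The predictive variance is what lets an emulator report where it approximates the original RT model poorly; the gradient of the posterior mean is available in closed form, which is the property the paper exploits for gradient-based inversion.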

Journal ArticleDOI
TL;DR: A technique for modeling propagation of ultrawideband signals in indoor or outdoor environments is proposed, supporting the design of a positioning systems based on round-trip-time (RTT) measurements and on a particle filter, and it is shown that the particle filter solution may be a competitive solution, at a price of increased computational complexity.
Abstract: In this paper, a technique for modeling propagation of ultrawideband (UWB) signals in indoor or outdoor environments is proposed, supporting the design of a positioning system based on round-trip-time (RTT) measurements and on a particle filter. By assuming that nonlinear pulses are transmitted in an additive white Gaussian noise channel and are detected using a threshold-based receiver, it is shown that RTT measurements may be affected by non-Gaussian noise. RTT noise properties are analyzed, and the effects of non-Gaussian noise on the performance of an RTT-based positioning system are investigated. To this aim, a classical least-squares estimator, an extended Kalman filter, and a particle filter are compared when used to detect a slowly moving target in the presence of the modeled noise. It is shown that, in a realistic indoor environment, the particle filter solution may be a competitive solution, at a price of increased computational complexity. Experimental verifications validate the presented approach.

Journal ArticleDOI
TL;DR: Simulation results prove that the proposed particle filter-based matching algorithm with gravity sample vector is robust to the changes of gravity anomaly in the matching areas, with more accurate and reliable matching results.
Abstract: Gravity matching algorithm is a key technique of gravity aided navigation for underwater vehicles. The reliability of traditional single point matching algorithm can be easily affected by environmental disturbance, which results in mismatching and decrease of navigation accuracy. Therefore, a particle filter (PF)-based matching algorithm with gravity sample vector is proposed. The correlation between adjacent sample points of inertial navigation system is considered in the vector matching algorithm in order to solve the mismatching problem. The current sampling point matching result is rectified by the vectors composed of the selected sampling points and matching point. The amount of selected sampling points is determined by the gravity field distribution and the real-time performance of the algorithm. A PF based on Bayesian estimation is introduced in the proposed method to overcome the divergence disadvantage of the traditional point matching algorithm in some matching areas with obvious gravity variation. Simulation results prove that compared with the traditional methods, the proposed method is robust to the changes of gravity anomaly in the matching areas, with more accurate and reliable matching results.

Journal ArticleDOI
TL;DR: This paper uses object recognition to obtain semantic information from the robot’s sensors and considers the task of localizing the robot within a prior map of landmarks, which are annotated with semantic labels.
Abstract: Most approaches to robot localization rely on low-level geometric features such as points, lines, and planes. In this paper, we use object recognition to obtain semantic information from the robot's sensors and consider the task of localizing the robot within a prior map of landmarks, which are annotated with semantic labels. As object recognition algorithms miss detections and produce false alarms, correct data association between the detections and the landmarks on the map is central to the semantic localization problem. Instead of the traditional vector-based representation, we propose a sensor model, which encodes the semantic observations via random finite sets and enables a unified treatment of missed detections, false alarms, and data association. Our second contribution is to reduce the problem of computing the likelihood of a set-valued observation to the problem of computing a matrix permanent. It is this crucial transformation that allows us to solve the semantic localization problem with a polynomial-time approximation to the set-based Bayes filter. Finally, we address the active semantic localization problem, in which the observer's trajectory is planned in order to improve the accuracy and efficiency of the localization process. The performance of our approach is demonstrated in simulation and in real environments using deformable-part-model-based object detectors. Robust global localization from semantic observations is demonstrated for a mobile robot, for the Project Tango phone, and on the KITTI visual odometry dataset. Comparisons are made with the traditional lidar-based geometric Monte Carlo localization.
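The matrix permanent at the heart of the set-valued likelihood is #P-hard in general, which is why the paper resorts to a polynomial-time approximation; for small matrices it can be computed exactly with Ryser's formula. A sketch:

```python
import numpy as np

def permanent_ryser(A):
    """Matrix permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2).
    Exact but only practical for small n, motivating approximations."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for mask in range(1, 1 << n):               # every nonempty column subset S
        cols = [j for j in range(n) if (mask >> j) & 1]
        row_sums = A[:, cols].sum(axis=1)       # sum over S in each row
        total += (-1) ** (n - len(cols)) * np.prod(row_sums)
    return total
```

Here the permanent sums products over all assignments of detections to landmarks, which is exactly the data-association marginalization the paper maps it to.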

Journal ArticleDOI
TL;DR: New representations and algorithms for the FPF in the general multivariable nonlinear non-Gaussian setting are described, including a Galerkin finite-element algorithm for approximation of the gain.

Journal ArticleDOI
TL;DR: Novel algorithms are suggested which are, in senses to be made precise, provably stable and yet designed to avoid the degree of interaction which hinders parallelization of standard algorithms.
Abstract: We introduce a general form of sequential Monte Carlo algorithm defined in terms of a parameterized resampling mechanism. We find that a suitably generalized notion of the Effective Sample Size (ESS), widely used to monitor algorithm degeneracy, appears naturally in a study of its convergence properties. We are then able to phrase sufficient conditions for time-uniform convergence in terms of algorithmic control of the ESS, in turn achievable by adaptively modulating the interaction between particles. This leads us to suggest novel algorithms which are, in senses to be made precise, provably stable and yet designed to avoid the degree of interaction which hinders parallelization of standard algorithms. As a byproduct, we prove time-uniform convergence of the popular adaptive resampling particle filter.
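Adaptive resampling modulated by the ESS, as analyzed in the paper, reduces in practice to a threshold test. A minimal sketch (the 0.5 threshold and multinomial scheme are common conventions, not prescriptions from the paper):

```python
import numpy as np

def maybe_resample(particles, w, threshold=0.5, rng=None):
    """Adaptive resampling: resample only when the ESS drops below a fraction
    of the particle count; otherwise keep the weighted ensemble untouched."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(w)
    w = w / w.sum()
    ess = 1.0 / np.sum(w ** 2)
    if ess < threshold * n:
        idx = rng.choice(n, n, p=w)        # multinomial; stratified is also common
        return particles[idx], np.full(n, 1.0 / n)
    return particles, w
```

Resampling only when needed limits the particle interaction per step, which is the quantity the paper's stability conditions control.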

Journal ArticleDOI
TL;DR: Most of this work was undertaken at the University of Bath, where M.F. was a Ph.D. student, and it was supported in part by EPSRC Grants EP/I000917 and EP/K005251/1.
Abstract: Highly nonlinear, chaotic or near chaotic, dynamic models are important in fields such as ecology and epidemiology: for example, pest species and diseases often display highly nonlinear dynamics. However, such models are problematic from the point of view of statistical inference. The defining feature of chaotic and near chaotic systems is extreme sensitivity to small changes in system states and parameters, and this can interfere with inference. There are two main classes of methods for circumventing these difficulties: information reduction approaches, such as Approximate Bayesian Computation or Synthetic Likelihood, and state space methods, such as Particle Markov chain Monte Carlo, Iterated Filtering or Parameter Cascading. The purpose of this article is to compare the methods in order to reach conclusions about how to approach inference with such models in practice. We show that neither class of methods is universally superior to the other. We show that state space methods can suffer multimodality problems in settings with low process noise or model misspecification, leading to bias toward stable dynamics and high process noise. Information reduction methods avoid this problem, but, under the correct model and with sufficient process noise, state space methods lead to substantially sharper inference than information reduction methods. More practically, there are also differences in the tuning requirements of different methods. Our overall conclusion is that model development and checking should probably be performed using an information reduction method with low tuning requirements, while for final inference it is likely to be better to switch to a state space method, checking results against the information reduction approach.

Journal ArticleDOI
TL;DR: In this article, a local particle filter (LPF) is introduced that outperforms traditional ensemble Kalman filters in highly nonlinear/non-Gaussian scenarios, both in accuracy and computational cost.
Abstract: A local particle filter (LPF) is introduced that outperforms traditional ensemble Kalman filters in highly nonlinear/non-Gaussian scenarios, both in accuracy and computational cost. The standard sampling importance resampling (SIR) particle filter is augmented with an observation-space localization approach, for which an independent analysis is computed locally at each grid point. The deterministic resampling approach of Kitagawa is adapted for application locally and combined with interpolation of the analysis weights to smooth the transition between neighboring points. Gaussian noise is applied with magnitude equal to the local analysis spread to prevent particle degeneracy while maintaining the estimate of the growing dynamical instabilities. The approach is validated against the local ensemble transform Kalman filter (LETKF) using the 40-variable Lorenz-96 (L96) model. The results show that (1) the accuracy of LPF surpasses LETKF as the forecast length increases (thus increasing the degree of nonlinearity), (2) the cost of LPF is significantly lower than LETKF as the ensemble size increases, and (3) LPF prevents filter divergence experienced by LETKF in cases with non-Gaussian observation error distributions.
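The core resampling step described above (deterministic Kitagawa-style resampling followed by Gaussian jitter scaled to the ensemble spread) can be sketched generically. This toy version omits the observation-space localization and weight interpolation that make the LPF local; names and structure are assumptions, not the authors' code:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Deterministic (systematic) resampling: one random offset,
    then equally spaced positions through the weight CDF."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def lpf_resample_step(particles, weights, rng=None):
    """Resample an (N, d) ensemble deterministically, then add
    Gaussian noise with magnitude equal to the post-resampling
    spread to prevent particle degeneracy."""
    if rng is None:
        rng = np.random.default_rng(42)
    idx = systematic_resample(weights, rng)
    resampled = particles[idx]
    spread = resampled.std(axis=0)
    return resampled + rng.normal(scale=spread, size=resampled.shape)
```

In the actual LPF this step runs independently at each grid point, with the analysis weights interpolated between neighboring points.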

Journal ArticleDOI
TL;DR: In this article, the authors presented a factor graph representation of the joint localization and time synchronization problem based on TOA measurements, in which the non-line-of-sight measurements are also taken into consideration.
Abstract: Localization and synchronization are very important in many wireless applications such as monitoring and vehicle tracking. Utilizing the same time of arrival (TOA) measurements for simultaneous localization and synchronization is challenging. In this paper, we present a factor graph (FG) representation of the joint localization and time synchronization problem based on TOA measurements, in which the non-line-of-sight measurements are also taken into consideration. On this FG, belief propagation (BP) message passing and variational message passing (VMP) are applied to derive two fully distributed cooperative algorithms with low computational requirements. Due to the nonlinearity in the observation function, it is intractable to compute the messages in closed form and most existing solutions rely on Monte Carlo methods, e.g., particle filtering. We linearize a specific nonlinear term in the expressions of messages, which enables us to use a Gaussian representation for all messages. Accordingly, only the mean and variance have to be updated and transmitted between neighboring nodes, which significantly reduces the communication overhead and computational complexity. A message passing schedule scheme is proposed to trade off between estimation performance and communication overhead. Simulation results show that the proposed algorithms perform very close to particle-based methods with much lower complexity especially in densely connected networks.
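The linearization trick described above, expanding the nonlinear range function to first order so that every message stays Gaussian, amounts to a Kalman-style measurement update. Here is a minimal 2-D sketch under assumed notation (a single range update, not the paper's factor-graph message-passing implementation):

```python
import numpy as np

def linearized_toa_update(mu, P, anchor, r_meas, r_var):
    """Update a 2-D position belief N(mu, P) with a TOA-derived
    range measurement by linearizing r(x) = ||x - anchor|| around
    the current mean, so the posterior remains Gaussian."""
    d = mu - anchor
    r_pred = np.linalg.norm(d)
    H = (d / r_pred).reshape(1, 2)   # Jacobian of the range function
    S = H @ P @ H.T + r_var          # innovation variance
    K = P @ H.T / S                  # Kalman gain
    mu_new = mu + (K * (r_meas - r_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return mu_new, P_new
```

Because only `mu_new` and `P_new` need to be exchanged, neighboring nodes transmit a mean and a covariance rather than a particle cloud, which is the source of the communication savings the abstract reports.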

Journal ArticleDOI
TL;DR: A new sensor-selection solution within a multi-Bernoulli-based multi-target tracking framework, requiring no prior knowledge of the clutter distribution or the probability of detection, is presented; it uses a new task-driven objective function for this purpose.