
Showing papers on "Particle filter" published in 2018


Journal ArticleDOI
TL;DR: This paper implements battery remaining available energy prediction and state-of-charge (SOC) estimation against testing temperature uncertainties, as well as inaccurate initial SOC values, using a double-scale particle filtering method.
Abstract: In order for the battery management system (BMS) in an electric vehicle to function properly, accurate and robust indication of the energy state of the lithium-ion batteries is necessary. This robustness requires that the energy state can be estimated accurately even when the working conditions of batteries change dramatically. This paper implements battery remaining available energy prediction and state-of-charge (SOC) estimation against testing temperature uncertainties, as well as inaccurate initial SOC values. A double-scale particle filtering method has been developed to estimate or predict the system state and parameters on two different time scales. The developed method considers the slow time-varying characteristics of the battery parameter set and the quick time-varying characteristics of the battery state set. In order to select the preferred battery model, the Akaike information criterion (AIC) is used to make a tradeoff between the model prediction accuracy and complexity. To validate the developed double-scale particle filtering method, two different kinds of lithium-ion batteries were tested at three temperatures. The experimental results show that, with 20% initial SOC deviation, the maximum remaining available energy prediction and SOC estimation errors are both within 2%, even when the wrong temperature is indicated. In this case, the developed double-scale particle filtering method is expected to be robust in practice.

193 citations
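
The double-scale estimator itself is not reproduced here, but the single-scale building block it rests on is a standard bootstrap particle filter over the SOC. The sketch below is a minimal illustration of that building block on a hypothetical first-order battery model (coulomb-counting dynamics and a made-up linear OCV curve); the constants, the ocv function and the pf_soc helper are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical battery model (not the paper's): coulomb-counting dynamics
# and a linear open-circuit-voltage curve used as the measurement model.
CAPACITY_AS = 2.0 * 3600.0      # cell capacity [A*s]
DT = 1.0                        # sample time [s]
PROCESS_STD = 1e-3              # SOC process noise
VOLT_STD = 0.01                 # voltage measurement noise [V]

def ocv(soc):
    return 3.0 + 1.2 * soc      # assumed OCV(SOC) relation

def pf_soc(currents, voltages, n_particles=1000, soc0_guess=0.5):
    """Bootstrap particle filter for SOC; returns the SOC estimate per step."""
    particles = rng.normal(soc0_guess, 0.25, n_particles).clip(0.0, 1.0)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for i_k, v_k in zip(currents, voltages):
        # predict: coulomb counting plus process noise
        particles = particles - i_k * DT / CAPACITY_AS \
                    + rng.normal(0.0, PROCESS_STD, n_particles)
        particles = particles.clip(0.0, 1.0)
        # update: weight by the voltage likelihood
        weights *= np.exp(-0.5 * ((v_k - ocv(particles)) / VOLT_STD) ** 2)
        weights += 1e-300
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # resample when the effective sample size drops
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# Synthetic demo: constant 1 A discharge, starting from a wrong initial SOC guess.
true_soc = 0.9 - np.arange(200) * 1.0 * DT / CAPACITY_AS
volts = ocv(true_soc) + rng.normal(0.0, VOLT_STD, 200)
print(pf_soc(np.ones(200), volts, soc0_guess=0.5)[-5:])
```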


Journal ArticleDOI
TL;DR: An improved PF algorithm, the unscented particle filter based on linear optimizing combination resampling (U-LOCR-PF), is proposed to improve prediction accuracy; it shows higher accuracy in the RUL prediction of lithium-ion batteries compared with existing PF-based and UPF-based prognostic methods.

165 citations


Journal ArticleDOI
TL;DR: The correlated pseudomarginal method (CPM), as discussed by the authors, is a modification of the pseudomarginal method that uses a likelihood ratio estimator computed from two correlated likelihood estimators.
Abstract: The pseudomarginal algorithm is a Metropolis–Hastings‐type scheme which samples asymptotically from a target probability density when we can only estimate unbiasedly an unnormalized version of it. In a Bayesian context, it is a state of the art posterior simulation technique when the likelihood function is intractable but can be estimated unbiasedly by using Monte Carlo samples. However, for the performance of this scheme not to degrade as the number T of data points increases, it is typically necessary for the number N of Monte Carlo samples to be proportional to T to control the relative variance of the likelihood ratio estimator appearing in the acceptance probability of this algorithm. The correlated pseudomarginal method is a modification of the pseudomarginal method using a likelihood ratio estimator computed by using two correlated likelihood estimators. For random‐effects models, we show under regularity conditions that the parameters of this scheme can be selected such that the relative variance of this likelihood ratio estimator is controlled when N increases sublinearly with T and we provide guidelines on how to optimize the algorithm on the basis of a non‐standard weak convergence analysis. The efficiency of computations for Bayesian inference relative to the pseudomarginal method empirically increases with T and exceeds two orders of magnitude in some examples.

124 citations
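
As a minimal sketch of the correlated pseudomarginal idea, the snippet below runs a Metropolis–Hastings chain in which the likelihood is only estimated by importance sampling, and the auxiliary standard-normal variables driving that estimate are refreshed with a Crank–Nicolson move so that successive likelihood estimates are positively correlated. The toy latent-variable model, the flat prior and the constants are assumptions chosen for brevity; they are not the random-effects setting analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy latent-variable model (an assumption for illustration):
#   x ~ N(theta, 1),  y | x ~ N(x, 1), so the exact marginal is N(y; theta, 2).
y_obs = 1.3
N = 64              # Monte Carlo samples per likelihood estimate
RHO = 0.99          # correlation between successive auxiliary variables

def loglik_hat(theta, u):
    """Unbiased importance-sampling estimate of p(y | theta), driven by u ~ N(0, I)."""
    x = theta + u                                   # prior samples generated from u
    log_w = -0.5 * (y_obs - x) ** 2 - 0.5 * np.log(2 * np.pi)
    return np.log(np.mean(np.exp(log_w)))

def correlated_pm_mh(n_iters=5000, step=0.5):
    theta, u = 0.0, rng.standard_normal(N)
    ll = loglik_hat(theta, u)
    chain = []
    for _ in range(n_iters):
        theta_p = theta + step * rng.standard_normal()
        # Correlated PM move: refresh the auxiliary variables only partially,
        # so the two likelihood estimates in the ratio are positively correlated.
        u_p = RHO * u + np.sqrt(1 - RHO ** 2) * rng.standard_normal(N)
        ll_p = loglik_hat(theta_p, u_p)
        if np.log(rng.uniform()) < ll_p - ll:       # flat prior, symmetric proposal
            theta, u, ll = theta_p, u_p, ll_p
        chain.append(theta)
    return np.array(chain)

print(correlated_pm_mh().mean())   # posterior mean should sit near y_obs
```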


Proceedings Article
01 Jan 2018
TL;DR: Two new algorithms, POMCPOW and PFT-DPW, are proposed and evaluated that overcome this deficiency by using weighted particle filtering; simulation results show that these modifications allow the algorithms to succeed where previous approaches fail.
Abstract: Online solvers for partially observable Markov decision processes have been applied to problems with large discrete state spaces, but continuous state, action, and observation spaces remain a challenge. This paper begins by investigating double progressive widening (DPW) as a solution to this challenge. However, we prove that this modification alone is not sufficient because the belief representations in the search tree collapse to a single particle causing the algorithm to converge to a policy that is suboptimal regardless of the computation time. This paper proposes and evaluates two new algorithms, POMCPOW and PFT-DPW, that overcome this deficiency by using weighted particle filtering. Simulation results show that these modifications allow the algorithms to be successful where previous approaches fail.

120 citations
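
A toy illustration of the weighted-particle belief that distinguishes these algorithms from unweighted POMCP: in a continuous observation space each observation node would otherwise hold a single unweighted state, so the belief collapses. The class and the obs_likelihood function below are hypothetical and only show the weighting step, not the full POMCPOW tree search.

```python
import numpy as np

rng = np.random.default_rng(2)

class WeightedBelief:
    """Weighted particle set stored at an (action, observation) node of the
    search tree. Keeping likelihood weights avoids the single-particle
    collapse that occurs with unweighted beliefs under continuous
    observations (a toy illustration, not the full POMCPOW algorithm)."""

    def __init__(self):
        self.states, self.weights = [], []

    def insert(self, next_state, obs, obs_likelihood):
        # Weight the simulated next state by how well it explains the
        # observation that labels this node.
        self.states.append(next_state)
        self.weights.append(obs_likelihood(obs, next_state))

    def sample(self):
        w = np.asarray(self.weights)
        return self.states[rng.choice(len(self.states), p=w / w.sum())]

# Example: 1-D position states, noisy position observations (assumed model).
def obs_likelihood(obs, state, sigma=0.5):
    return float(np.exp(-0.5 * ((obs - state) / sigma) ** 2))

belief = WeightedBelief()
for s_next in rng.normal(0.0, 1.0, 100):      # simulated successor states
    belief.insert(s_next, obs=0.8, obs_likelihood=obs_likelihood)
print(belief.sample())                        # states near 0.8 dominate
```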


Journal ArticleDOI
TL;DR: Comparative studies of the tracking accuracy and speed of the Hybrid SCA-PSO based tracking framework and other trackers, viz. Particle filter, Mean-shift, Particle swarm optimization, Bat algorithm, Sine Cosine Algorithm (SCA) and Hybrid Gravitational Search Algorithm (HGSA), are presented.
Abstract: Due to its simplicity and efficiency, a recently proposed optimization algorithm, the Sine Cosine Algorithm (SCA), has gained the interest of researchers from various fields for solving optimization problems. However, it is prone to premature convergence at local minima as it lacks internal memory. To overcome this drawback, a novel Hybrid SCA-PSO algorithm for solving optimization problems and object tracking is proposed. The Pbest and Gbest components of PSO (Particle Swarm Optimization) are added to the traditional SCA to guide the search process for potential candidate solutions, and PSO is then initialized with the Pbest of SCA to exploit the search space further. The proposed algorithm combines the exploitation capability of PSO and the exploration capability of SCA to achieve optimal global solutions. The effectiveness of this algorithm is evaluated using 23 classical, CEC 2005 and CEC 2014 benchmark functions. Statistical parameters are employed to observe the efficiency of the Hybrid SCA-PSO qualitatively, and the results prove that the proposed algorithm is very competitive compared to the state-of-the-art metaheuristic algorithms. The Hybrid SCA-PSO algorithm is applied to object tracking as a challenging real-world case study. Experimental results show that the Hybrid SCA-PSO-based tracker can robustly track an arbitrary target in various challenging conditions. To reveal the capability of the proposed algorithm, comparative studies of tracking accuracy and speed of the Hybrid SCA-PSO based tracking framework and other trackers, viz. Particle filter, Mean-shift, Particle swarm optimization, Bat algorithm, Sine Cosine Algorithm (SCA) and Hybrid Gravitational Search Algorithm (HGSA), are presented.

120 citations
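
The abstract describes the SCA/PSO coupling only at a high level, so the sketch below is one plausible, simplified reading: each agent takes an SCA-style move towards the best-known solution, followed by a PSO velocity update driven by personal (Pbest) and global (Gbest) memory. The sphere objective, the bounds and the coefficient values are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):                       # benchmark objective to minimise
    return float(np.sum(x ** 2))

def hybrid_sca_pso(dim=10, n_agents=30, iters=300, a=2.0, w=0.7, c1=1.5, c2=1.5):
    X = rng.uniform(-5.0, 5.0, (n_agents, dim))
    V = np.zeros_like(X)
    pbest = X.copy()
    pbest_f = np.array([sphere(x) for x in X])
    gbest = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        r1 = a - t * a / iters                        # SCA exploration factor decays
        for i in range(n_agents):
            # SCA move: oscillate towards the best-known solution (exploration).
            r2 = rng.uniform(0.0, 2 * np.pi, dim)
            r3 = rng.uniform(0.0, 2.0, dim)
            osc = np.where(rng.uniform(size=dim) < 0.5, np.sin(r2), np.cos(r2))
            X[i] = X[i] + r1 * osc * np.abs(r3 * gbest - X[i])
            # PSO move: velocity update from personal and global memory (exploitation).
            V[i] = (w * V[i]
                    + c1 * rng.uniform(size=dim) * (pbest[i] - X[i])
                    + c2 * rng.uniform(size=dim) * (gbest - X[i]))
            X[i] = np.clip(X[i] + V[i], -5.0, 5.0)
            f = sphere(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i].copy(), f
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

print(hybrid_sca_pso()[1])           # best objective found; should be near 0
```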


Journal ArticleDOI
TL;DR: Compared with existing tracking methods based on correlation filters and particle filters, the proposed CPF tracking algorithm has four major advantages, among them robustness to partial and total occlusions and the ability to recover from lost tracks by maintaining multiple hypotheses.
Abstract: In this paper, we propose a novel correlation particle filter (CPF) for robust visual tracking. Instead of a simple combination of a correlation filter and a particle filter, we exploit and complement the strength of each one. Compared with existing tracking methods based on correlation filters and particle filters, the proposed tracker has four major advantages: 1) it is robust to partial and total occlusions, and can recover from lost tracks by maintaining multiple hypotheses; 2) it can effectively handle large-scale variation via a particle sampling strategy; 3) it can efficiently maintain multiple modes in the posterior density using fewer particles than conventional particle filters, resulting in low computational cost; and 4) it can shepherd the sampled particles toward the modes of the target state distribution using a mixture of correlation filters, resulting in robust tracking performance. Extensive experimental results on challenging benchmark data sets demonstrate that the proposed CPF tracking algorithm performs favorably against the state-of-the-art methods.

116 citations


Journal ArticleDOI
TL;DR: A systematic introduction to the Bayesian state estimation framework is offered and various Kalman filtering (KF) techniques are reviewed, progressing from the standard KF for linear systems to the extended KF, unscented KF and ensemble KF for nonlinear systems.
Abstract: This article presents an up-to-date tutorial review of nonlinear Bayesian estimation. State estimation for nonlinear systems has been a challenge encountered in a wide range of engineering fields, attracting decades of research effort. To date, one of the most promising and popular approaches is to view and address the problem from a Bayesian probabilistic perspective, which enables estimation of the unknown state variables by tracking their probabilistic distribution or statistics (e.g., mean and covariance) conditioned on a system's measurement data. This article offers a systematic introduction to the Bayesian state estimation framework and reviews various Kalman filtering (KF) techniques, progressively from the standard KF for linear systems to the extended KF, unscented KF and ensemble KF for nonlinear systems. It also overviews other prominent or emerging Bayesian estimation methods including Gaussian filtering, Gaussian-sum filtering, particle filtering and moving horizon estimation, and extends the discussion of state estimation to more complicated problems such as simultaneous state and parameter/input estimation.

115 citations
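
Since the review builds everything on the standard linear Kalman filter before moving to its nonlinear extensions, a minimal predict/update recursion is sketched below for a constant-velocity model with noisy position measurements; the model matrices and noise levels are arbitrary illustrative choices.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of the standard linear Kalman filter."""
    # Predict: propagate the mean and covariance through the linear dynamics.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct with the measurement via the Kalman gain.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Example: constant-velocity model observed through noisy position readings.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])

x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2, 5.1]:          # synthetic position measurements
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
print(x)                                     # position near 5, velocity near 1
```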


Proceedings Article
31 Mar 2018
TL;DR: The VSMC family is a variational family that can approximate the posterior arbitrarily well while still allowing efficient optimization of its parameters; its utility is demonstrated on state space models, stochastic volatility models for financial data, and deep Markov models of brain neural circuits.
Abstract: Many recent advances in large scale probabilistic inference rely on variational methods. The success of variational approaches depends on (i) formulating a flexible parametric family of distributions ...

115 citations


Proceedings Article
Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, Frank Wood
15 Feb 2018
TL;DR: In this article, auto-encoding sequential Monte Carlo (AESMC) is used for model and proposal learning based on maximizing the lower bound to the log marginal likelihood in a broad family of structured probabilistic models.
Abstract: We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing the lower bound to the log marginal likelihood in a broad family of structured probabilistic models. Our approach relies on the efficiency of sequential Monte Carlo (SMC) for performing inference in structured probabilistic models and the flexibility of deep neural networks to model complex conditional probability distributions. We develop additional theoretical insights and introduce a new training procedure which improves both model and proposal learning. We demonstrate that our approach provides a fast, easy-to-implement and scalable means for simultaneous model learning and proposal adaptation in deep generative models.

110 citations
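
The quantity such methods optimise is the SMC estimate of the log marginal likelihood, which lower-bounds log p(y_{1:T}) in expectation. The sketch below computes that estimate for an assumed linear-Gaussian state-space model with the bootstrap proposal; in AESMC the proposal would be a learned neural network and the estimate would be differentiated through, which this NumPy version does not attempt.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed linear-Gaussian state-space model (chosen only to keep the sketch simple):
#   x_1 ~ N(0, 1),  x_t = 0.9 x_{t-1} + N(0, 1),  y_t = x_t + N(0, 0.5^2)
A, Q_STD, R_STD = 0.9, 1.0, 0.5

def smc_log_evidence(ys, n_particles=200):
    """Bootstrap-proposal SMC estimate of log p(y_{1:T}).

    exp(result) is an unbiased estimate of p(y_{1:T}), so by Jensen's
    inequality E[result] <= log p(y_{1:T}): this is the lower bound that
    AESMC-style methods maximise over model and proposal parameters
    (here there is nothing learnable -- the sketch only shows the estimator)."""
    x = np.zeros(n_particles)                   # so x_1 = A*0 + noise ~ N(0, 1)
    log_z = 0.0
    for y in ys:
        x = A * x + rng.normal(0.0, Q_STD, n_particles)             # propose
        log_w = -0.5 * ((y - x) / R_STD) ** 2 - np.log(R_STD * np.sqrt(2 * np.pi))
        log_z += np.log(np.mean(np.exp(log_w)))                     # accumulate log Z_t
        w = np.exp(log_w - log_w.max())
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]  # resample
    return log_z

print(smc_log_evidence(np.array([0.3, -0.1, 0.8, 1.2, 0.5])))
```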


Journal ArticleDOI
01 Feb 2018-Energy
TL;DR: A double-scale dual adaptive particle filter is proposed for online parameter and SOC estimation of lithium-ion batteries, and the effectiveness and applicability of the two algorithms are verified on lithium nickel manganese cobalt oxide batteries of different ages.

107 citations


Journal ArticleDOI
TL;DR: A new kinematic calibration method based on the extended Kalman filter (EKF) and particle filter (PF) algorithms that can significantly improve the positioning accuracy of the robot.
Abstract: Precise positioning of a robot plays a very important role in advanced industrial applications, and this paper presents a new kinematic calibration method based on the extended Kalman filter (EKF) and particle filter (PF) algorithms that can significantly improve the positioning accuracy of the robot. The kinematic model of a robot and its error model are established, and the kinematic parameters are first identified using the EKF algorithm. However, the EKF algorithm suffers from linearization truncation error and is, in general, suited to Gaussian noise systems, so its identification accuracy degrades for a highly nonlinear robot kinematic system with non-Gaussian noise. The PF algorithm can handle non-Gaussian noise and strong nonlinearity well, but its calibration accuracy and efficiency depend on the prior distribution of the initial values. Therefore, this paper proposes to use the calibration result of the EKF algorithm as the prior value for the PF algorithm, which is then used to further calibrate the kinematic parameters of the robot. Extensive experiments have been carried out, and the experimental results validate the viability of the proposed method, with the robot positioning accuracy improved significantly.
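
A minimal sketch of the two-stage idea (EKF estimate used as the prior for a particle-based refinement) on a deliberately simple scalar problem: the toy measurement function g, the heavy-tailed noise and all constants are assumptions for illustration, and the second stage is a single importance-sampling pass rather than the paper's full PF over robot kinematic parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy nonlinear "calibration" problem (an illustration, not the paper's robot
# kinematics): identify a scalar parameter theta from y_k = g(theta) + noise,
# where the noise is heavy-tailed (non-Gaussian).
def g(theta):
    return np.sin(theta) + 0.1 * theta ** 2

def g_jac(theta):
    return np.cos(theta) + 0.2 * theta

theta_true = 1.4
ys = g(theta_true) + 0.05 * rng.standard_t(df=3, size=200)   # heavy-tailed noise

# Stage 1: EKF over the static parameter (state = theta, identity dynamics).
theta_ekf, P = 0.0, 4.0
for y in ys:
    P += 1e-6                                   # small process noise keeps P > 0
    H = g_jac(theta_ekf)
    S = H * P * H + 0.05 ** 2
    K = P * H / S
    theta_ekf += K * (y - g(theta_ekf))
    P *= (1.0 - K * H)

# Stage 2: importance sampling seeded with the (slightly widened) EKF result as
# the prior; samples are re-weighted under a Student-t likelihood that better
# matches the heavy-tailed noise.
n_particles = 2000
particles = rng.normal(theta_ekf, np.sqrt(P) + 0.05, n_particles)
resid = (ys[None, :] - g(particles)[:, None]) / 0.05
log_w = np.sum(-2.0 * np.log1p(resid ** 2 / 3.0), axis=1)    # t(3) log-density kernel
w = np.exp(log_w - log_w.max())
w /= w.sum()
print(theta_ekf, np.sum(w * particles))        # EKF estimate, particle-refined estimate
```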

Journal ArticleDOI
TL;DR: The usefulness and effectiveness of the proposed EPFM are investigated by applying the technique to a conceptual and highly nonlinear hydrologic model over four river basins located in different climate and geographical regions of the United States.

Proceedings ArticleDOI
26 Jun 2018
TL;DR: Differentiable particle filters (DPFs), as mentioned in this paper, encode the structure of recursive state estimation, with prediction and measurement updates that operate on a probability distribution over states; because DPFs are end-to-end differentiable, their models can be trained efficiently by optimizing end-to-end state estimation performance.
Abstract: We present differentiable particle filters (DPFs): a differentiable implementation of the particle filter algorithm with learnable motion and measurement models. Since DPFs are end-to-end differentiable, we can efficiently train their models by optimizing end-to-end state estimation performance, rather than proxy objectives such as model accuracy. DPFs encode the structure of recursive state estimation with prediction and measurement update that operate on a probability distribution over states. This structure represents an algorithmic prior that improves learning performance in state estimation problems while enabling explainability of the learned model. Our experiments on simulated and real data show substantial benefits from end-to-end learning with algorithmic priors, e.g. reducing error rates by ~80%. Our experiments also show that, unlike long short-term memory networks, DPFs learn localization in a policy-agnostic way and thus greatly improve generalization. Source code is available at this https URL .
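
The DPF paper handles resampling in its own way; one widely cited device for letting gradients pass through the resampling step in this family of differentiable filters (used in the related PF-net line of work) is soft resampling, sketched below in NumPy. Only the weight arithmetic is shown; in an actual differentiable filter the same computation would run inside an autodiff framework.

```python
import numpy as np

rng = np.random.default_rng(6)

def soft_resample(particles, weights, alpha=0.5):
    """Soft resampling: draw indices from a mixture of the particle weights and
    a uniform distribution, then correct with importance weights. For alpha < 1
    the returned weights remain a nonconstant function of the input weights, so
    in an autodiff framework gradients can flow through resampling; alpha = 1
    reduces to standard multinomial resampling, which cuts gradients."""
    n = len(weights)
    mix = alpha * weights + (1.0 - alpha) / n        # sampling distribution q
    idx = rng.choice(n, n, p=mix)
    new_weights = weights[idx] / mix[idx]            # importance correction w / q
    return particles[idx], new_weights / new_weights.sum()

# Tiny demo with dummy particles and skewed weights.
particles = np.linspace(-1.0, 1.0, 8)
weights = np.array([0.30, 0.25, 0.15, 0.10, 0.08, 0.06, 0.04, 0.02])
print(soft_resample(particles, weights))
```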

Journal ArticleDOI
21 Mar 2018
TL;DR: This paper compares several commonly used state-of-the-art ensemble-based data assimilation methods in a coherent mathematical notation to provide a unique overview and new insight into the workings and relative advantages of each method, theoretically and algorithmically.
Abstract: This paper compares several commonly used state-of-the-art ensemble-based data assimilation methods in a coherent mathematical notation. The study encompasses different methods that are applicable ...

Journal ArticleDOI
TL;DR: An informal introduction to piecewise deterministic Markov processes is given, covering the aspects relevant to these new Monte Carlo algorithms, with a view to making the development of new continuous-time Monte Carlo more accessible.
Abstract: Recently, there have been conceptually new developments in Monte Carlo methods through the introduction of new MCMC and sequential Monte Carlo (SMC) algorithms which are based on continuous-time, rather than discrete-time, Markov processes. This has led to some fundamentally new Monte Carlo algorithms which can be used to sample from, say, a posterior distribution. Interestingly, continuous-time algorithms seem particularly well suited to Bayesian analysis in big-data settings as they need only access a small sub-set of data points at each iteration, and yet are still guaranteed to target the true posterior distribution. Whilst continuous-time MCMC and SMC methods have been developed independently we show here that they are related by the fact that both involve simulating a piecewise deterministic Markov process. Furthermore, we show that the methods developed to date are just specific cases of a potentially much wider class of continuous-time Monte Carlo algorithms. We give an informal introduction to piecewise deterministic Markov processes, covering the aspects relevant to these new Monte Carlo algorithms, with a view to making the development of new continuous-time Monte Carlo more accessible. We focus on how and why sub-sampling ideas can be used with these algorithms, and aim to give insight into how these new algorithms can be implemented, and what are some of the issues that affect their efficiency.
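
As a concrete instance of such a piecewise deterministic Markov process, the sketch below runs a one-dimensional zig-zag sampler targeting a standard normal, for which the switching rate max(0, v*x) lets the next event time be simulated exactly by inverting the integrated rate. No sub-sampling is used, and the target and tuning choices are assumptions for illustration rather than anything taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def zigzag_std_normal(n_samples=20000, dt=0.1):
    """1-D zig-zag sampler targeting N(0, 1). The process moves with velocity
    v in {-1, +1} and flips v at events of rate lambda(x, v) = max(0, v * x),
    i.e. the positive part of v * dU/dx with U(x) = x^2 / 2. For this rate the
    next event time is drawn exactly by inverting the integrated rate. Samples
    are read off the continuous trajectory on a regular time grid."""
    x, v = 0.0, 1.0
    samples, t_to_next_tick = [], dt
    while len(samples) < n_samples:
        a = v * x
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * rng.exponential())
        # record every sampling tick that falls before the next event
        while t_to_next_tick <= tau and len(samples) < n_samples:
            samples.append(x + v * t_to_next_tick)
            tau -= t_to_next_tick
            x += v * t_to_next_tick
            t_to_next_tick = dt
        # jump to the event and flip the velocity
        x += v * tau
        t_to_next_tick -= tau
        v = -v
    return np.array(samples)

s = zigzag_std_normal()
print(s.mean(), s.var())    # should be close to 0 and 1
```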

Journal ArticleDOI
TL;DR: In this article, the posterior probability distribution of the fixed parameters of a state-space dynamical system is approximated using a sequential Monte Carlo method in a purely recursive manner, in which the computational complexity of the recursive steps of the method introduced herein is constant over time.
Abstract: We address the problem of approximating the posterior probability distribution of the fixed parameters of a state-space dynamical system using a sequential Monte Carlo method. The proposed approach relies on a nested structure that employs two layers of particle filters to approximate the posterior probability measure of the static parameters and the dynamic state variables of the system of interest, in a vein similar to the recent “sequential Monte Carlo square” (SMC$^{2}$) algorithm. However, unlike the SMC$^{2}$ scheme, the proposed technique operates in a purely recursive manner. In particular, the computational complexity of the recursive steps of the method introduced herein is constant over time. We analyse the approximation of integrals of real bounded functions with respect to the posterior distribution of the system parameters computed via the proposed scheme. As a result, we prove, under regularity assumptions, that the approximation errors vanish asymptotically in $L_{p}$ ($p\ge1$) with convergence rate proportional to $\frac{1}{\sqrt{N}}+\frac{1}{\sqrt{M}}$, where $N$ is the number of Monte Carlo samples in the parameter space and $N\times M$ is the number of samples in the state space. This result also holds for the approximation of the joint posterior distribution of the parameters and the state variables. We discuss the relationship between the SMC$^{2}$ algorithm and the new recursive method and present a simple example in order to illustrate some of the theoretical findings with computer simulations.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed enhanced sequential Monte Carlo probability hypothesis density filter-based multiple human tracking system achieves the best performance amongst state-of-the-art random finite set-based methods and is the second-best online tracker on the leaderboard of the latest MOT17 challenge.
Abstract: An enhanced sequential Monte Carlo probability hypothesis density (PHD) filter-based multiple human tracking system is presented. The proposed system mainly exploits two concepts: a novel adaptive gating technique and an online group-structured dictionary learning strategy. Conventional PHD filtering methods preset the target birth intensity and the gating threshold for selecting real observations for the PHD update. This often yields inefficiency in false positives and missed detections in a cluttered environment. To address this issue, a measurement-driven mechanism based on a novel adaptive gating method is proposed to adaptively update the gating sizes. This yields an accurate approach to discriminate between survival and residual measurements by reducing the clutter inferences. In addition, online group-structured dictionary learning with a maximum voting method is used to robustly estimate the target birth intensity. It enables the new-born targets to be automatically detected from noisy sensor measurements. To improve the adaptability of our group-structured dictionary to appearance and illumination changes, we employ the simultaneous code word optimization algorithm for the dictionary update stage. Experimental results demonstrate that our proposed method achieves the best performance amongst state-of-the-art random finite set-based methods and is the second-best online tracker on the leaderboard of the latest MOT17 challenge.

Journal ArticleDOI
TL;DR: This work investigates online nonlinear regression and introduces novel regression structures based on long short-term memory (LSTM) networks, together with a gated recurrent unit (GRU)-based variant obtained by directly replacing the LSTM architecture with the GRU architecture; the superiority of the introduced algorithms with respect to conventional methods is illustrated over several different benchmark real-life data sets.
Abstract: We investigate online nonlinear regression and introduce novel regression structures based on the long short term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimation in the mean square error sense provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity in the order of the first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, where we demonstrate the superiority of our LSTM-based approach in the sequential prediction task via different real life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms with respect to the conventional methods over several different benchmark real life data sets.

Journal ArticleDOI
TL;DR: It is shown that the CNN solution is able to automatically learn location patterns, significantly lowering the workforce burden of designing a localization system, and achieves an accuracy of about 1 m under different smartphone orientations, users, and use patterns.
Abstract: Wi-Fi and magnetic field fingerprinting have been a hot topic in indoor positioning research because of their ubiquity and location-related features. Wi-Fi signals can provide rough initial positions, and magnetic fields can further improve the positioning accuracy; therefore, many researchers have tried to combine the two signals for high-accuracy indoor localization. Currently, state-of-the-art solutions design separate algorithms to process different indoor signals. Outputs of these algorithms are generally used as inputs of data fusion strategies. These methods rely on computationally expensive particle filters, labor-intensive feature analysis, and time-consuming parameter tuning to achieve better accuracies. Besides, particle filters need to estimate the moving directions of particles, requiring the smartphone orientation to be stable and aligned with the user's moving direction. In this paper, we adopted a convolutional neural network (CNN) to implement an accurate and orientation-free positioning system. Inspired by the state-of-the-art image classification methods, we design a novel hybrid location image using Wi-Fi and magnetic field fingerprints, and then a CNN is employed to classify the locations of the fingerprint images. In order to prevent the overfitting problem of the positioning CNN on limited training datasets, we also propose to divide the learning process into two steps to adopt proper learning strategies for different network branches. We show that the CNN solution is able to automatically learn location patterns, significantly lowering the workforce burden of designing a localization system. Our experimental results convincingly reveal that the proposed positioning method achieves an accuracy of about 1 m under different smartphone orientations, users, and use patterns.

Journal ArticleDOI
TL;DR: The challenges posed by models with high-dimensional states, joint estimation of parameters and the state, and inference for the history of the state process are discussed, and Monte Carlo methods based on the particle filter and the ensemble Kalman filter are reviewed.
Abstract: State-space models can be used to incorporate subject knowledge on the underlying dynamics of a time series by the introduction of a latent Markov state process. A user can specify the dynamics of this process together with how the state relates to partial and noisy observations that have been made. Inference and prediction then involve solving a challenging inverse problem: calculating the conditional distribution of quantities of interest given the observations. This article reviews Monte Carlo algorithms for solving this inverse problem, covering methods based on the particle filter and the ensemble Kalman filter. We discuss the challenges posed by models with high-dimensional states, joint estimation of parameters and the state, and inference for the history of the state process. We also point out some potential new developments that will be important for tackling cutting-edge filtering applications.

Journal ArticleDOI
TL;DR: This study introduces the Heuristic Kalman algorithm, a metaheuristic optimization approach, in combination with particle filtering to tackle sample degeneracy and impoverishment.

23 Oct 2018
TL;DR: The Particle Filter Network (PF-net) as mentioned in this paper encodes both a system model and a particle filter algorithm in a single neural network, which is trained end-to-end from data.
Abstract: Particle filtering is a powerful approach to sequential state estimation and finds application in many domains, including robot localization, object tracking, etc. To apply particle filtering in practice, a critical challenge is to construct probabilistic system models, especially for systems with complex dynamics or rich sensory inputs such as camera images. This paper introduces the Particle Filter Network (PF-net), which encodes both a system model and a particle filter algorithm in a single neural network. The PF-net is fully differentiable and trained end-to-end from data. Instead of learning a generic system model, it learns a model optimized for the particle filter algorithm. We apply the PF-net to a visual localization task, in which a robot must localize in a rich 3-D world, using only a schematic 2-D floor map. In simulation experiments, PF-net consistently outperforms alternative learning architectures, as well as a traditional model-based method, under a variety of sensor inputs. Further, PF-net generalizes well to new, unseen environments.

Journal ArticleDOI
TL;DR: This work introduces approximate MMSE filtering and smoothing algorithms based on the auxiliary particle filter (APF) method, called APF-BKF and APF-BKS, respectively, for joint state and parameter estimation in POBDS models.

Journal ArticleDOI
TL;DR: In this article, a Markov Chain Monte Carlo (MCMC) algorithm based on Group Importance Sampling (GIS) is proposed for the sequential importance sampling (SIS) problem.

Journal ArticleDOI
TL;DR: This paper compares the performance and search behaviour of Entrotaxis with the popular Infotaxis algorithm for searching in sparse and turbulent conditions, where typical gradient-based approaches become inefficient or fail, and shows that Entrotaxis achieves a faster mean search time.

Journal ArticleDOI
TL;DR: The results show that the extended Kalman filter is the least sensitive to model degradation and has the lowest computational effort; the particle filter shows the fastest convergence speed but has the highest computational effort; and the least-squares-based filter has an intermediate behavior in both long-term performance and computational effort.

Journal ArticleDOI
TL;DR: In this article, the essential boundedness of potential functions associated with the i-cSMC algorithm is shown to provide necessary and sufficient conditions for the uniform ergodicity of the i-cSMC Markov chain, as well as quantitative bounds on its geometric rate of convergence.
Abstract: We establish quantitative bounds for rates of convergence and asymptotic variances for iterated conditional sequential Monte Carlo (i-cSMC) Markov chains and associated particle Gibbs samplers [J. R. Stat. Soc. Ser. B. Stat. Methodol. 72 (2010) 269–342]. Our main findings are that the essential boundedness of potential functions associated with the i-cSMC algorithm provide necessary and sufficient conditions for the uniform ergodicity of the i-cSMC Markov chain, as well as quantitative bounds on its (uniformly geometric) rate of convergence. Furthermore, we show that the i-cSMC Markov chain cannot even be geometrically ergodic if this essential boundedness does not hold in many applications of interest. Our sufficiency and quantitative bounds rely on a novel non-asymptotic analysis of the expectation of a standard normalizing constant estimate with respect to a “doubly conditional” SMC algorithm. In addition, our results for i-cSMC imply that the rate of convergence can be improved arbitrarily by increasing $N$, the number of particles in the algorithm, and that in the presence of mixing assumptions, the rate of convergence can be kept constant by increasing $N$ linearly with the time horizon. We translate the sufficiency of the boundedness condition for i-cSMC into sufficient conditions for the particle Gibbs Markov chain to be geometrically ergodic and quantitative bounds on its geometric rate of convergence, which imply convergence of properties of the particle Gibbs Markov chain to those of its corresponding Gibbs sampler. These results complement recently discovered, and related, conditions for the particle marginal Metropolis–Hastings (PMMH) Markov chain.

Journal ArticleDOI
Thomas Lux
TL;DR: In this article, a particle filter is used to numerically approximate the conditional densities that enter into the likelihood function of the problem and obtain parameter estimates and filtered state probabilities for the unobservable variables.

Proceedings ArticleDOI
05 Sep 2018
TL;DR: In this article, the authors presented a method for scalable and fully 3D magnetic field simultaneous localization and mapping (SLAM) using local anomalies in the magnetic field as a source of position information.
Abstract: We present a method for scalable and fully 3D magnetic field simultaneous localisation and mapping (SLAM) using local anomalies in the magnetic field as a source of position information. These anomalies are due to the presence of ferromagnetic material in the structure of buildings and in objects such as furniture. We represent the magnetic field map using a Gaussian process model and take well-known physical properties of the magnetic field into account. We build local maps using three-dimensional hexagonal block tiling. To make our approach computationally tractable we use reduced-rank Gaussian process regression in combination with a Rao-Blackwellised particle filter. We show that it is possible to obtain accurate position and orientation estimates using measurements from a smartphone, and that our approach provides a scalable magnetic field SLAM algorithm in terms of both computational complexity and map storage.
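
The paper's field model and Rao-Blackwellised particle filter are not reproduced here; the sketch below only illustrates the reduced-rank Gaussian process regression ingredient, in one dimension with a squared-exponential kernel, using a sinusoidal basis on [-L, L] weighted by the kernel's spectral density. Domain size, lengthscale and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def reduced_rank_gp(x_train, y_train, x_test, m=32, L=4.0,
                    ell=0.5, sf=1.0, sn=0.1):
    """Reduced-rank GP regression (Hilbert-space basis on [-L, L]) with a
    squared-exponential kernel: the kernel is approximated by m sinusoidal
    basis functions weighted by its spectral density, so training costs
    O(n m^2) instead of O(n^3)."""
    j = np.arange(1, m + 1)
    sqrt_lam = np.pi * j / (2.0 * L)                      # basis frequencies
    spec = sf ** 2 * np.sqrt(2.0 * np.pi) * ell * np.exp(-0.5 * (ell * sqrt_lam) ** 2)

    def phi(x):                                           # basis functions
        return np.sin(np.outer(x + L, sqrt_lam)) / np.sqrt(L)

    P, Ps = phi(x_train), phi(x_test)
    A = P.T @ P + sn ** 2 * np.diag(1.0 / spec)           # m x m linear system
    return Ps @ np.linalg.solve(A, P.T @ y_train)         # posterior mean at x_test

# Demo: recover a smooth 1-D "field" from noisy point measurements.
x = rng.uniform(-3.0, 3.0, 200)
y = np.sin(2.0 * x) + 0.1 * rng.standard_normal(200)
x_star = np.linspace(-3.0, 3.0, 5)
print(reduced_rank_gp(x, y, x_star))
```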

Journal ArticleDOI
01 Jun 2018
TL;DR: A new dynamic grid mapping approach uses an evidential representation in the Dempster–Shafer framework to model hypotheses for static occupancy, dynamic occupancy, free space, and their combined hypotheses, which are consistently estimated and accumulated in a dynamic grid map.
Abstract: Modeling and estimating the current local environment by processing sensor measurement data is essential for intelligent vehicles. Static obstacles, dynamic objects, and free space have to be appropriately represented, classified, and filtered. Occupancy grids, known for mapping static environments, provide a common low-level representation using occupancy probabilities with an implicit data association through the discrete grid structure. Extending this idea toward dynamic environments with moving objects requires a static/dynamic classification of measured occupancy and a tracking of the dynamic state of grid cells. In this paper, we propose a new dynamic grid mapping approach. An evidential representation using the Dempster–Shafer framework is used to model hypotheses for static occupancy, dynamic occupancy, free space, and their combined hypotheses. These hypotheses are consistently estimated and accumulated in a dynamic grid map by an adapted evidential filtering, allowing one to distinguish static and dynamic occupancy. The evidential grid mapping is combined with a low-level particle filter tracking that is used to estimate cell velocity distributions and predict dynamic occupancy of the grid map. Static occupancy is directly modeled in the grid map without requiring particles, increasing efficiency, and improving the static/dynamic classification due to the persistent map accumulation. Experimental results with real sensor data show the effectiveness of the proposed approach in challenging scenarios with occlusions and dense traffic.
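
The abstract does not state which combination rule the adapted evidential filtering uses, so the sketch below is only a generic illustration of Dempster–Shafer evidence combination over the hypotheses it names (static, dynamic, free and their unions); the example masses are invented.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic belief assignments given as
    dicts {frozenset(hypotheses): mass}. Conflicting mass (empty intersections)
    is discarded and the remaining mass renormalised."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame of discernment as in the abstract: static (S), dynamic (D), free (F).
S, D, F = "S", "D", "F"
OMEGA = frozenset({S, D, F})
OCCUPIED = frozenset({S, D})           # combined static-or-dynamic hypothesis

# Hypothetical cell evidence: the accumulated map believes the cell is static;
# the new measurement only says "occupied" without resolving static vs dynamic.
m_map = {frozenset({S}): 0.6, frozenset({F}): 0.1, OMEGA: 0.3}
m_meas = {OCCUPIED: 0.7, OMEGA: 0.3}
print(dempster_combine(m_map, m_meas))   # mass shifts further towards {S}
```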