Author

Matti Vihola

Bio: Matti Vihola is an academic researcher from the University of Jyväskylä. The author has contributed to research topics including Markov chain Monte Carlo and particle filters. The author has an h-index of 18 and has co-authored 63 publications receiving 1330 citations. Previous affiliations of Matti Vihola include Tampere University of Technology and École Polytechnique.


Papers
Journal ArticleDOI
TL;DR: A new robust adaptive Metropolis algorithm is introduced that estimates the shape of the target distribution while simultaneously coercing the acceptance rate, and shows promising behaviour in an example with a Student target distribution having no finite second moment.
Abstract: The adaptive Metropolis (AM) algorithm of Haario, Saksman and Tamminen (Bernoulli 7(2):223–242, 2001) uses the estimated covariance of the target distribution in the proposal distribution. This paper introduces a new robust adaptive Metropolis algorithm that estimates the shape of the target distribution and simultaneously coerces the acceptance rate. The adaptation rule is computationally simple, adding no extra cost compared with the AM algorithm. The adaptation strategy can be seen as a multidimensional extension of the previously proposed method of adapting the scale of the proposal distribution in order to attain a given acceptance rate. The empirical results show promising behaviour of the new algorithm in an example with a Student target distribution having no finite second moment, where the AM covariance estimate is unstable. In the examples with finite second moments, the performance of the new approach seems to be competitive with the AM algorithm combined with scale adaptation.
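To make the adaptation idea concrete, here is a minimal sketch of a shape-adapting random-walk Metropolis in the spirit of the abstract: an isotropic increment is mapped through a factor S of the proposal covariance, and S is corrected by a rank-one update that pushes the acceptance probability toward a target value. The update formula, gain sequence, target rate of 0.234 and all function and argument names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def ram_sketch(log_target, x0, n_iters=10000, target_acc=0.234, gamma=0.66):
    """Shape-adapting random-walk Metropolis sketch: adapt the proposal
    factor S while coercing the acceptance rate toward target_acc."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    S = np.eye(d)                                 # initial proposal shape factor
    log_px = log_target(x)
    samples = np.empty((n_iters, d))
    for n in range(1, n_iters + 1):
        u = np.random.randn(d)                    # isotropic increment
        y = x + S @ u                             # proposal
        log_py = log_target(y)
        acc = np.exp(min(0.0, log_py - log_px))   # acceptance probability
        if np.random.rand() < acc:
            x, log_px = y, log_py
        # Rank-one correction of S S^T, nudging the acceptance rate
        eta = min(1.0, d * n ** (-gamma))         # diminishing adaptation gain
        uu = np.outer(u, u) / max(u @ u, 1e-12)
        M = S @ (np.eye(d) + eta * (acc - target_acc) * uu) @ S.T
        S = np.linalg.cholesky(M)
        samples[n - 1] = x
    return samples
```

Under these assumptions, `ram_sketch(lambda z: -0.5 * float(z @ z), np.zeros(2))` would target a bivariate standard normal while steering the acceptance rate toward 0.234.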

267 citations

Journal ArticleDOI
TL;DR: The asymptotic variance of the pseudo-marginal algorithm (in the sense of Andrieu and Roberts) is shown to be always at least as large as that of the corresponding marginal algorithm, and spectral gap and convergence rate results implying central limit theorems are established for bounded and unbounded weights.
Abstract: We study convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms (Andrieu and Roberts [Ann. Statist. 37 (2009) 697–725]). We find that the asymptotic variance of the pseudo-marginal algorithm is always at least as large as that of the marginal algorithm. We show that if the marginal chain admits a (right) spectral gap and the weights (normalised estimates of the target density) are uniformly bounded, then the pseudo-marginal chain has a spectral gap. In many cases, a similar result holds for the absolute spectral gap, which is equivalent to geometric ergodicity. We consider also unbounded weight distributions and recover polynomial convergence rates in more specific cases, when the marginal algorithm is uniformly ergodic or an independent Metropolis–Hastings or a random-walk Metropolis targeting a super-exponential density with regular contours. Our results on geometric and polynomial convergence rates imply central limit theorems. We also prove that under general conditions, the asymptotic variance of the pseudo-marginal algorithm converges to the asymptotic variance of the marginal algorithm if the accuracy of the estimators is increased.
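A minimal sketch of the pseudo-marginal mechanism discussed above: the target density enters only through a noisy, non-negative, unbiased estimate, and the estimate computed at the currently accepted state is reused until the next accepted move. The random-walk proposal and the hypothetical `log_target_est` callback (returning the log of such an estimate) are assumptions for illustration.

```python
import numpy as np

def pseudo_marginal_mh(log_target_est, x0, n_iters=10000, prop_scale=1.0):
    """Pseudo-marginal Metropolis-Hastings sketch: accept/reject uses noisy
    unbiased estimates of the target density in place of exact evaluations."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    log_px = log_target_est(x)          # noisy log-estimate at the current state
    chain = np.empty((n_iters, d))
    for n in range(n_iters):
        y = x + prop_scale * np.random.randn(d)   # random-walk proposal
        log_py = log_target_est(y)                # fresh estimate at the proposal
        if np.log(np.random.rand()) < log_py - log_px:
            x, log_px = y, log_py                 # recycle the accepted estimate
        chain[n] = x
    return chain
```

The key point the sketch illustrates is that the estimate at the current state is carried along rather than refreshed, which is what preserves the exact target distribution despite the noise and what makes the noisy weights matter for the convergence results above.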

134 citations

Journal ArticleDOI
TL;DR: In this paper, an adaptive parallel tempering algorithm with a fixed number of temperatures is proposed, which tunes both the temperature schedule and the parameters of the random-walk Metropolis kernel automatically; the convergence of the adaptation and a strong law of large numbers for the algorithm are proved under general conditions.
Abstract: Parallel tempering is a generic Markov chain Monte Carlo sampling method which allows good mixing with multimodal target distributions, where conventional Metropolis-Hastings algorithms often fail. The mixing properties of the sampler depend strongly on the choice of tuning parameters, such as the temperature schedule and the proposal distribution used for local exploration. We propose an adaptive algorithm with a fixed number of temperatures which tunes both the temperature schedule and the parameters of the random-walk Metropolis kernel automatically. We prove the convergence of the adaptation and a strong law of large numbers for the algorithm under general conditions. We also prove as a side result the geometric ergodicity of the parallel tempering algorithm. We illustrate the performance of our method with examples. Our empirical findings indicate that the algorithm can cope well with different kinds of scenarios without prior tuning. Supplementary materials including the proofs and the Matlab implementation are available online.
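For orientation, the sketch below shows plain (non-adaptive) parallel tempering: one random-walk Metropolis move per temperature followed by a swap attempt between adjacent levels. The adaptive tuning of the temperature schedule and proposal scales developed in the paper is omitted; the inverse-temperature vector `betas` (assumed decreasing with `betas[0] == 1`), the common step size, and the function names are illustrative assumptions.

```python
import numpy as np

def parallel_tempering(log_target, x0, betas, n_iters=5000, step=1.0):
    """Plain parallel tempering sketch: local random-walk Metropolis moves at
    tempered targets pi(x)**beta, plus swaps between adjacent temperatures."""
    K = len(betas)
    x0 = np.asarray(x0, dtype=float)
    d = x0.size
    xs = np.tile(x0, (K, 1))                       # one state per temperature
    logp = np.array([log_target(x) for x in xs])   # untempered log-densities
    cold_chain = np.empty((n_iters, d))
    for n in range(n_iters):
        # Local moves: accept with the tempered ratio pi(y)**beta / pi(x)**beta
        for k in range(K):
            y = xs[k] + step * np.random.randn(d)
            ly = log_target(y)
            if np.log(np.random.rand()) < betas[k] * (ly - logp[k]):
                xs[k], logp[k] = y, ly
        # Swap attempt between a random pair of adjacent temperatures
        k = np.random.randint(K - 1)
        if np.log(np.random.rand()) < (betas[k] - betas[k + 1]) * (logp[k + 1] - logp[k]):
            xs[[k, k + 1]] = xs[[k + 1, k]]
            logp[[k, k + 1]] = logp[[k + 1, k]]
        cold_chain[n] = xs[0]                      # betas[0] == 1 is the target
    return cold_chain
```

Swaps let states that have found distant modes at high temperatures percolate down to the cold chain, which is what gives parallel tempering its advantage on multimodal targets.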

105 citations

Journal ArticleDOI
TL;DR: In this article, Rao-Blackwellised particle filtering (RBPF) is proposed for finite set statistics (FISST) multitarget tracking, where each sensor is assumed to produce a sequence of detection reports, each containing either a single-target measurement or a "no detection" report.
Abstract: This article introduces a Rao-Blackwellised particle filtering (RBPF) approach in the finite set statistics (FISST) multitarget tracking framework. The RBPF approach is proposed for the case where each sensor is assumed to produce a sequence of detection reports, each containing either a single-target measurement or a "no detection" report. The tests cover two different measurement models: a linear-Gaussian measurement model, and a nonlinear model linearised in the extended Kalman filter (EKF) scheme. In the tests, Rao-Blackwellisation resulted in a significant reduction of the errors of the FISST estimators when compared with a previously proposed direct particle implementation. In addition, the RBPF approach was shown to be applicable in nonlinear bearings-only multitarget tracking.
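To illustrate the Rao-Blackwellisation idea in isolation, the sketch below performs one RBPF step for a generic conditionally linear-Gaussian state-space model: the nonlinear part is propagated with particles, while the linear part is marginalised exactly by a per-particle Kalman filter whose predictive likelihood supplies the particle weight. This is a textbook-style illustration, not the FISST multitarget tracker of the article; the model callbacks `sample_u`, `A`, `H`, the noise covariances `Q`, `R`, and the multinomial resampling choice are assumptions.

```python
import numpy as np

def rbpf_step(particles, means, covs, weights, y, sample_u, A, H, Q, R):
    """One Rao-Blackwellised particle filter step: particles (array of shape
    (N, du)) carry the nonlinear state u; a Kalman filter handles the linear
    state (means (N, d), covs (N, d, d)) conditionally on each particle."""
    N, d = means.shape
    log_w = np.empty(N)
    for i in range(N):
        particles[i] = sample_u(particles[i])          # propagate nonlinear part
        Ai, Hi = A(particles[i]), H(particles[i])
        # Kalman prediction for the linear part, conditional on this particle
        m_pred = Ai @ means[i]
        P_pred = Ai @ covs[i] @ Ai.T + Q
        # Kalman update; the predictive likelihood of y becomes the weight
        S = Hi @ P_pred @ Hi.T + R
        K = P_pred @ Hi.T @ np.linalg.inv(S)
        v = y - Hi @ m_pred
        means[i] = m_pred + K @ v
        covs[i] = P_pred - K @ Hi @ P_pred
        log_w[i] = np.log(weights[i]) - 0.5 * (
            v @ np.linalg.solve(S, v) + np.log(np.linalg.det(2 * np.pi * S)))
    w = np.exp(log_w - log_w.max())                    # normalise stably
    w /= w.sum()
    idx = np.random.choice(N, size=N, p=w)             # multinomial resampling
    return particles[idx], means[idx], covs[idx], np.full(N, 1.0 / N)
```

Because the linear state is integrated out analytically, the particles only have to cover the nonlinear part of the state space, which is the variance reduction that Rao-Blackwellisation buys.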

73 citations

Journal ArticleDOI
TL;DR: In this article, the authors describe sufficient conditions to ensure the correct ergodicity of the Adaptive Metropolis (AM) algorithm for target distributions with a noncompact support.
Abstract: This paper describes sufficient conditions to ensure the correct ergodicity of the Adaptive Metropolis (AM) algorithm of Haario, Saksman and Tamminen [Bernoulli 7 (2001) 223–242] for target distributions with a noncompact support. The conditions ensuring a strong law of large numbers require that the tails of the target density decay super-exponentially and have regular contours. The result is based on the ergodicity of an auxiliary process that is sequentially constrained to feasible adaptation sets, independent estimates of the growth rate of the AM chain and the corresponding geometric drift constants. The ergodicity result of the constrained process is obtained through a modification of the approach due to Andrieu and Moulines [Ann. Appl. Probab. 16 (2006) 1462–1505].
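For reference, here is a minimal sketch of the covariance-adapting random-walk Metropolis that the analysis concerns, in its commonly stated form: the proposal covariance tracks a regularised running covariance of the chain. The recursive mean/covariance updates, the classical scaling 2.38²/d and the regularisation `eps` are standard textbook choices assumed here, not details taken from the paper.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iters=10000, eps=1e-6, sd=None):
    """Covariance-adapting random-walk Metropolis sketch: the proposal
    covariance follows the empirical covariance of the chain plus eps*I."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    sd = sd if sd is not None else 2.38 ** 2 / d   # classical scaling factor
    log_px = log_target(x)
    mean, cov = x.copy(), np.eye(d)
    chain = np.empty((n_iters, d))
    for n in range(1, n_iters + 1):
        C = sd * (cov + eps * np.eye(d))           # regularised proposal covariance
        y = x + np.linalg.cholesky(C) @ np.random.randn(d)
        log_py = log_target(y)
        if np.log(np.random.rand()) < log_py - log_px:
            x, log_px = y, log_py
        # Recursive estimates of the chain's mean and covariance
        g = 1.0 / (n + 1)
        diff = x - mean
        mean = mean + g * diff
        cov = cov + g * (np.outer(diff, diff) - cov)
        chain[n - 1] = x
    return chain
```

The sufficient conditions in the paper concern exactly this kind of scheme on targets with unbounded support, where the growth of the chain and of the covariance estimate must be controlled for the strong law of large numbers to hold.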

67 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: A textbook treatment of probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and combining models.
Abstract: Contents: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

DOI
31 May 2023
TL;DR: This compact, informal introduction for graduate students and advanced undergraduates presents current state-of-the-art filtering and smoothing methods in a unified Bayesian framework, showing what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages.
Abstract: Now in its second edition, this accessible text presents a unified Bayesian treatment of state-of-the-art filtering, smoothing, and parameter estimation algorithms for non-linear state space models. The book focuses on discrete-time state space models and carefully introduces fundamental aspects related to optimal filtering and smoothing. In particular, it covers a range of efficient non-linear Gaussian filtering and smoothing algorithms, as well as Monte Carlo-based algorithms. This updated edition features new chapters on constructing state space models of practical systems, the discretization of continuous-time state space models, Gaussian filtering by enabling approximations, posterior linearization filtering, and the corresponding smoothers. Coverage of key topics is expanded, including extended Kalman filtering and smoothing, and parameter estimation. The book's practical, algorithmic approach assumes only modest mathematical prerequisites, making it suitable for graduate and advanced undergraduate students. Many examples are included, with Matlab and Python code available online, enabling readers to implement algorithms in their own projects.
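As a flavour of the algorithms the book treats, the following is a generic single predict/update cycle of the linear-Gaussian Kalman filter, the basic building block of Bayesian filtering. It is a standard textbook sketch, not code from the book; the matrix names `A`, `Q`, `H`, `R` follow the usual state-space conventions assumed here.

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One Kalman filter cycle for x_k = A x_{k-1} + w, y_k = H x_k + v,
    with w ~ N(0, Q) and v ~ N(0, R)."""
    # Prediction: push the Gaussian belief through the linear dynamics
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update: condition the predicted belief on the new measurement y
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new
```

Non-linear Kalman filters (such as the extended Kalman filter and the posterior linearization filters covered in the book) replace the exact linear prediction and update above with approximations, while particle filters replace the Gaussian belief itself with weighted samples.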

1,373 citations

01 Jan 2015
TL;DR: This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework, showing what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages.
Abstract: Filtering and smoothing methods are used to produce an accurate estimate of the state of a time-varying system based on multiple observational inputs (data). Interest in these methods has exploded in recent years, with numerous applications emerging in fields such as navigation, aerospace engineering, telecommunications, and medicine. This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework. Readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages. They also discover how state-of-the-art Bayesian parameter estimation methods can be combined with state-of-the-art filtering and smoothing algorithms. The book’s practical and algorithmic approach assumes only modest mathematical prerequisites. Examples include MATLAB computations, and the numerous end-of-chapter exercises include computational assignments. MATLAB/GNU Octave source code is available for download at www.cambridge.org/sarkka, promoting hands-on work with the methods.

1,102 citations

Journal ArticleDOI
TL;DR: This work reviews adaptation criteria and the framework of stochastic approximation, which allows commonly used criteria to be optimised systematically, and proposes a series of novel adaptive algorithms that prove robust and reliable in practice.
Abstract: We review adaptive Markov chain Monte Carlo (MCMC) algorithms as a means to optimise their performance. Using simple toy examples we review their theoretical underpinnings, and in particular show why adaptive MCMC algorithms might fail when some fundamental properties are not satisfied. This leads to guidelines concerning the design of correct algorithms. We then review criteria and the useful framework of stochastic approximation, which allows one to systematically optimise generally used criteria, but also to analyse the properties of adaptive MCMC algorithms. We then propose a series of novel adaptive algorithms which prove to be robust and reliable in practice. These algorithms are applied to artificial and high-dimensional scenarios, but also to the classic mine disaster dataset inference problem.
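The stochastic approximation viewpoint mentioned above can be made concrete with a minimal sketch: a random-walk Metropolis whose log proposal scale follows a Robbins-Monro recursion driving the average acceptance rate toward a target value. The target rate 0.234, the gain sequence n^(-0.6) and the function names are illustrative assumptions, not algorithms from the review.

```python
import numpy as np

def adaptive_scale_rwm(log_target, x0, n_iters=10000, target_acc=0.234):
    """Random-walk Metropolis with a stochastic-approximation-tuned scale:
    log_scale is nudged so the acceptance rate approaches target_acc."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    log_px = log_target(x)
    log_scale = 0.0
    chain = np.empty((n_iters, d))
    for n in range(1, n_iters + 1):
        y = x + np.exp(log_scale) * np.random.randn(d)
        log_py = log_target(y)
        acc = np.exp(min(0.0, log_py - log_px))      # acceptance probability
        if np.random.rand() < acc:
            x, log_px = y, log_py
        # Robbins-Monro step with a diminishing gain, so adaptation vanishes
        log_scale += n ** (-0.6) * (acc - target_acc)
        chain[n - 1] = x
    return chain
```

Working on the log scale keeps the proposal standard deviation positive, and the diminishing gain is one simple way of making the adaptation fade over time, in line with the guidelines on designing correct adaptive algorithms.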

957 citations

Book
Simo Särkkä
01 Sep 2013
TL;DR: This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework, showing what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages.
Abstract: Filtering and smoothing methods are used to produce an accurate estimate of the state of a time-varying system based on multiple observational inputs (data). Interest in these methods has exploded in recent years, with numerous applications emerging in fields such as navigation, aerospace engineering, telecommunications and medicine. This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework. Readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages. They also discover how state-of-the-art Bayesian parameter estimation methods can be combined with state-of-the-art filtering and smoothing algorithms. The book's practical and algorithmic approach assumes only modest mathematical prerequisites. Examples include MATLAB computations, and the numerous end-of-chapter exercises include computational assignments. MATLAB/GNU Octave source code is available for download at www.cambridge.org/sarkka, promoting hands-on work with the methods.

879 citations