
Showing papers on "Maximum a posteriori estimation published in 2017"


Journal ArticleDOI
TL;DR: In this paper, a preintegrated inertial measurement unit model is integrated into a visual-inertial pipeline under the unifying framework of factor graphs, which enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation.
Abstract: Current approaches for visual-inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at a high rate, leading to fast growth in the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual-inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to an accurate state estimation in real time, outperforming state-of-the-art approaches.
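Concretely, under the Gaussian noise assumptions discussed in the abstract, the MAP estimator reduces to a nonlinear least-squares problem over the factor graph; a schematic form (our notation, not the paper's) is

$\hat{\mathcal{X}} = \arg\min_{\mathcal{X}} \sum_{(i,j)\in\mathcal{K}} \big\| r_{\mathcal{I}_{ij}}(\mathcal{X}) \big\|^2_{\Sigma_{ij}} + \sum_{k} \big\| r_{\mathcal{C}_{k}}(\mathcal{X}) \big\|^2_{\Sigma_{\mathcal{C}}}$

where $\mathcal{X}$ collects the keyframe states (pose, velocity, IMU biases), $r_{\mathcal{I}_{ij}}$ are the preintegrated IMU residuals between consecutive keyframes, $r_{\mathcal{C}_k}$ are the (structureless) visual residuals, and each term is weighted by the corresponding measurement covariance.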

524 citations


Book ChapterDOI
TL;DR: In this paper, the authors highlight the mathematical and computational structure relating to the formulation of, and development of algorithms for, the Bayesian approach to inverse problems in differential equations, and describe measure-preserving dynamics on the underlying infinite dimensional space.
Abstract: These lecture notes highlight the mathematical and computational structure relating to the formulation of, and development of algorithms for, the Bayesian approach to inverse problems in differential equations. This approach is fundamental in the quantification of uncertainty within applications involving the blending of mathematical models with data. The finite dimensional situation is described first, along with some motivational examples. Then the development of probability measures on separable Banach space is undertaken, using a random series over an infinite set of functions to construct draws; these probability measures are used as priors in the Bayesian approach to inverse problems. Regularity of draws from the priors is studied in the natural Sobolev or Besov spaces implied by the choice of functions in the random series construction, and the Kolmogorov continuity theorem is used to extend regularity considerations to the space of Hölder continuous functions. Bayes’ theorem is derived in this prior setting, and here interpreted as finding conditions under which the posterior is absolutely continuous with respect to the prior, and determining a formula for the Radon-Nikodym derivative in terms of the likelihood of the data. Having established the form of the posterior, we then describe various properties common to it in the infinite dimensional setting. These properties include well-posedness, approximation theory, and the existence of maximum a posteriori estimators. We then describe measure-preserving dynamics, again on the infinite dimensional space, including Markov chain Monte Carlo and sequential Monte Carlo methods, and measure-preserving reversible stochastic differential equations. By formulating the theory and algorithms on the underlying infinite dimensional space, we obtain a framework suitable for rigorous analysis of the accuracy of reconstructions, of computational complexity, as well as naturally constructing algorithms which perform well under mesh refinement, since they are inherently well-defined in infinite dimensions.

520 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel framework for learning/estimating graphs from data, which includes formulation of various graph learning problems, their probabilistic interpretations, and associated algorithms.
Abstract: Graphs are fundamental mathematical structures used in various fields to represent data, signals, and processes. In this paper, we propose a novel framework for learning/estimating graphs from data. The proposed framework includes (i) formulation of various graph learning problems, (ii) their probabilistic interpretations, and (iii) associated algorithms. Specifically, graph learning problems are posed as the estimation of graph Laplacian matrices from some observed data under given structural constraints (e.g., graph connectivity and sparsity level). From a probabilistic perspective, the problems of interest correspond to maximum a posteriori parameter estimation of Gaussian–Markov random field models, whose precision (inverse covariance) is a graph Laplacian matrix. For the proposed graph learning problems, specialized algorithms are developed by incorporating the graph Laplacian and structural constraints. The experimental results demonstrate that the proposed algorithms outperform the current state-of-the-art methods in terms of accuracy and computational efficiency.
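In this GMRF setting, the MAP estimate of the precision matrix takes a penalized log-likelihood form; a schematic version (our notation, the exact constraints differ across the problem variants in the paper) is

$\hat{\Theta} = \arg\max_{\Theta \in \mathcal{L}} \; \log {\det}^{*}(\Theta) - \operatorname{tr}(S\Theta) - \alpha \, \|\Theta\|_{1,\mathrm{off}}$

where $S$ is the empirical covariance of the observed data, $\mathcal{L}$ is the set of admissible graph Laplacians (the pseudo-determinant ${\det}^{*}$ handles the Laplacian's zero eigenvalue), and $\alpha$ controls the sparsity of the learned graph.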

310 citations


Posted Content
TL;DR: The metric normalized validation error (NVE) is introduced in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity.
Abstract: We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity.
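The abstract does not spell the metric out; one form consistent with published descriptions of the NVE (our paraphrase, to be checked against the paper) averages, over a set of validation SNRs, the ratio of the neural decoder's bit error rate to that of the MAP decoder:

$\mathrm{NVE}(\rho_t) = \frac{1}{S} \sum_{s=1}^{S} \frac{\mathrm{BER}_{\mathrm{NN}}(\rho_t, \rho_{v,s})}{\mathrm{BER}_{\mathrm{MAP}}(\rho_{v,s})}$

where $\rho_t$ is the SNR used for training and $\rho_{v,1},\dots,\rho_{v,S}$ are the validation SNRs; values close to one indicate near-MAP performance.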

267 citations


Journal ArticleDOI
TL;DR: The study demonstrates that the model selection can greatly benefit from using cross-validation outside the searching process both for guiding the model size selection and assessing the predictive performance of the finally selected model.
Abstract: The goal of this paper is to compare several widely used Bayesian model selection methods in practical model selection problems, highlight their differences and give recommendations about the preferred approaches. We focus on the variable subset selection for regression and classification and perform several numerical experiments using both simulated and real world data. The results show that the optimization of a utility estimate such as the cross-validation (CV) score is liable to finding overfitted models due to relatively high variance in the utility estimates when the data is scarce. This can also lead to substantial selection induced bias and optimism in the performance evaluation for the selected model. From a predictive viewpoint, best results are obtained by accounting for model uncertainty by forming the full encompassing model, such as the Bayesian model averaging solution over the candidate models. If the encompassing model is too complex, it can be robustly simplified by the projection method, in which the information of the full model is projected onto the submodels. This approach is substantially less prone to overfitting than selection based on CV-score. Overall, the projection method appears to outperform also the maximum a posteriori model and the selection of the most probable variables. The study also demonstrates that the model selection can greatly benefit from using cross-validation outside the searching process both for guiding the model size selection and assessing the predictive performance of the finally selected model.
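As a minimal, generic illustration of keeping cross-validation outside the search (this is plain nested cross-validation with scikit-learn, not the paper's Bayesian projection method), the outer folds score the entire selection procedure rather than the already-selected model:

from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# toy data standing in for a scarce-data variable selection problem
X, y = make_classification(n_samples=200, n_features=30, n_informative=5, random_state=0)

# the variable search lives *inside* the pipeline ...
selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000), n_features_to_select=5, cv=5)
model = make_pipeline(selector, LogisticRegression(max_iter=1000))

# ... so the outer CV score is an honest estimate of the whole procedure,
# avoiding the selection-induced optimism discussed above
outer_scores = cross_val_score(model, X, y, cv=10)
print(outer_scores.mean())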

207 citations


Journal ArticleDOI
TL;DR: A new post-classification method with iterative slow feature analysis (ISFA) and Bayesian soft fusion is proposed to obtain reliable and accurate change detection maps; it achieves clearly higher change detection accuracy than current state-of-the-art methods.

189 citations


Journal ArticleDOI
TL;DR: A NILM algorithm based on the joint use of active and reactive power in the Additive Factorial Hidden Markov Models framework is proposed, which outperforms AFAMAP, Hart's algorithm, and Hart's algorithm with MAP.

165 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider Bayesian inference techniques for agent-based (AB) models, as an alternative to simulated minimum distance (SMD), and apply them to estimate the behavioural macroeconomic model of De Grauwe.

122 citations


Journal ArticleDOI
TL;DR: This paper applies Bayesian techniques to develop appropriate point estimates and credible sets to summarize the posterior of the clustering structure based on decision and information theoretic techniques.
Abstract: Clustering is widely studied in statistics and machine learning, with applications in a variety of fields. As opposed to popular algorithms such as agglomerative hierarchical clustering or k-means which return a single clustering solution, Bayesian nonparametric models provide a posterior over the entire space of partitions, allowing one to assess statistical properties, such as uncertainty on the number of clusters. However, an important problem is how to summarize the posterior; the huge dimension of partition space and difficulties in visualizing it add to this problem. In a Bayesian analysis, the posterior of a real-valued parameter of interest is often summarized by reporting a point estimate such as the posterior mean along with 95% credible intervals to characterize uncertainty. In this paper, we extend these ideas to develop appropriate point estimates and credible sets to summarize the posterior of the clustering structure based on decision and information theoretic techniques.
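A minimal sketch of one such decision-theoretic summary, assuming MCMC samples of partitions are available (this uses a Binder-type loss over the sampled partitions purely as an illustration; it is not the paper's exact estimator):

import numpy as np

def posterior_similarity(partitions):
    # partitions: array of shape (n_samples, n_items); entry [s, i] is the
    # cluster label of item i in MCMC sample s
    partitions = np.asarray(partitions)
    S, n = partitions.shape
    co = np.zeros((n, n))
    for z in partitions:
        co += (z[:, None] == z[None, :])
    return co / S  # estimated P(items i and j are clustered together)

def binder_point_estimate(partitions):
    # return the sampled partition minimizing an (unnormalized) Binder-type loss
    P = posterior_similarity(partitions)
    best, best_loss = None, np.inf
    for z in np.asarray(partitions):
        A = (z[:, None] == z[None, :]).astype(float)
        loss = np.abs(A - P).sum()
        if loss < best_loss:
            best, best_loss = z, loss
    return best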

115 citations


Journal ArticleDOI
TL;DR: Experiments demonstrate that the proposed image-deblocking algorithm combining SSR and QC outperforms the current state-of-the-art methods in both peak signal-to-noise ratio and visual perception.
Abstract: The block discrete cosine transform (BDCT) has been widely used in current image and video coding standards, owing to its good energy compaction and decorrelation properties. However, because of independent quantization of DCT coefficients in each block, BDCT usually gives rise to visually annoying blocking compression artifacts, especially at low bit rates. In this paper, to reduce blocking artifacts and obtain high-quality images, image deblocking is cast as an optimization problem within maximum a posteriori framework, and a novel algorithm for image deblocking by using structural sparse representation (SSR) prior and quantization constraint (QC) prior is proposed. The SSR prior is utilized to simultaneously enforce the intrinsic local sparsity and the nonlocal self-similarity of natural images, while QC is explicitly incorporated to ensure a more reliable and robust estimation. A new split Bregman iteration-based method with an adaptively adjusted regularization parameter is developed to solve the proposed optimization problem, which makes the entire algorithm more practical. Experiments demonstrate that the proposed image-deblocking algorithm combining SSR and QC outperforms the current state-of-the-art methods in both peak signal-to-noise ratio and visual perception.
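Schematically (our notation), the deblocking problem described above can be written as

$\hat{x} = \arg\min_{x} \tfrac{1}{2}\|x - y\|_2^2 + \lambda \, \Psi_{\mathrm{SSR}}(x) \quad \text{subject to} \quad x \in \mathcal{Q}(y),$

where $y$ is the decoded BDCT image, $\Psi_{\mathrm{SSR}}$ is the structural sparse representation prior, and $\mathcal{Q}(y)$ is the quantization-constraint set of images whose DCT coefficients are consistent with the observed quantized ones; the split Bregman iterations alternate between these terms.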

103 citations


Posted Content
TL;DR: This paper proposes a novel algorithm to greatly accelerate the greedy MAP inference for DPP, and shows that this algorithm is significantly faster than state-of-the-art competitors, and provides a better relevance-diversity trade-off on several public datasets.
Abstract: The determinantal point process (DPP) is an elegant probabilistic model of repulsion with applications in various machine learning tasks including summarization and search. However, the maximum a posteriori (MAP) inference for DPP which plays an important role in many applications is NP-hard, and even the popular greedy algorithm can still be too computationally expensive to be used in large-scale real-time scenarios. To overcome the computational challenge, in this paper, we propose a novel algorithm to greatly accelerate the greedy MAP inference for DPP. In addition, our algorithm also adapts to scenarios where the repulsion is only required among nearby few items in the result sequence. We apply the proposed algorithm to generate relevant and diverse recommendations. Experimental results show that our proposed algorithm is significantly faster than state-of-the-art competitors, and provides a better relevance-diversity trade-off on several public datasets, which is also confirmed in an online A/B test.
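For context, the baseline greedy MAP routine that the paper accelerates (shown here in its plain, unaccelerated form as an illustrative sketch) repeatedly adds the item giving the largest log-determinant gain:

import numpy as np

def greedy_dpp_map(L, k):
    # L: n x n positive semidefinite DPP kernel; select up to k items greedily
    # by maximizing log det of the kernel restricted to the selected set
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best_item, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_gain:
                best_gain, best_item = logdet, i
        if best_item is None:
            break
        selected.append(best_item)
    return selected

# usage sketch: L = B @ B.T + 1e-6 * np.eye(n) for an n x d feature matrix B

Each naive step recomputes a determinant from scratch; the paper's contribution is to make this greedy loop far cheaper so it scales to large candidate sets.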

Journal ArticleDOI
TL;DR: In this article, a method and numerical code, lensit, were developed to efficiently find the most probable lensing map, introducing no significant approximations to the lensed CMB likelihood, and applicable to beamed and masked data with inhomogeneous noise.
Abstract: Gravitational lensing of the cosmic microwave background (CMB) is a valuable cosmological signal that correlates to tracers of large-scale structure and acts as an important source of confusion for primordial $B$-mode polarization. State-of-the-art lensing reconstruction analyses use quadratic estimators, which are easily applicable to data. However, these estimators are known to be suboptimal, in particular for polarization, and large improvements are expected to be possible for high signal-to-noise polarization experiments. We develop a method and numerical code, lensit, that is able to efficiently find the most probable lensing map, introducing no significant approximations to the lensed CMB likelihood, and applicable to beamed and masked data with inhomogeneous noise. It works by iteratively reconstructing the primordial unlensed CMB using a deflection estimate and its inverse, and removing residual lensing from these maps with quadratic estimator techniques. Roughly linear computational cost is maintained due to fast convergence of iterative searches, combined with the local nature of lensing. The method achieves the maximal improvement in signal to noise expected from analytical considerations on the unmasked parts of the sky. Delensing with this optimal map leads to forecast tensor-to-scalar ratio parameter errors improved by a factor $\simeq 2$ compared to the quadratic estimator in a CMB stage IV configuration.

Journal ArticleDOI
TL;DR: Through a combination of local identifiability, Bayesian estimation, and maximum a posteriori simplex optimization, the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data is shown.
Abstract: Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second application to a cohort of four single-ventricle patients with Norwood physiology. Copyright © 2016 John Wiley & Sons, Ltd.
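As a minimal, generic illustration of the "maximum a posteriori simplex optimization" step (a sketch under assumed Gaussian likelihood and prior, with a hypothetical model(theta) standing in for the LPN simulation; this is not the paper's tuning pipeline):

import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta, model, targets, target_sd, prior_mean, prior_sd):
    # model(theta): hypothetical LPN simulation returning the quantities
    # matched against the clinical targets (pressures, flows, ...)
    residuals = (model(theta) - targets) / target_sd
    nll = 0.5 * np.sum(residuals ** 2)                          # Gaussian likelihood (assumed)
    nlp = 0.5 * np.sum(((theta - prior_mean) / prior_sd) ** 2)  # Gaussian prior (assumed)
    return nll + nlp

# result = minimize(neg_log_posterior, x0=prior_mean, method="Nelder-Mead",
#                   args=(model, targets, target_sd, prior_mean, prior_sd))
# theta_map = result.x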

Journal ArticleDOI
TL;DR: This paper investigates the channel estimation issue when using an APD at the receiver and proposes an ML channel estimator based on the expectation–maximization (EM) algorithm which has a low implementation complexity, making it suitable for high data-rate FSO communications.

Journal ArticleDOI
TL;DR: A new general methodology for approximating Bayesian high-posterior-density credibility regions in inverse problems that are convex and potentially very high-dimensional; the approximations can be computed very efficiently, even in large-scale problems, by using standard convex optimisation techniques.
Abstract: Solutions to inverse problems that are ill-conditioned or ill-posed may have significant intrinsic uncertainty. Unfortunately, analyzing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems. As a result, while most modern mathematical imaging methods produce impressive point estimation results, they are generally unable to quantify the uncertainty in the solutions delivered. This paper presents a new general methodology for approximating Bayesian high-posterior-density credibility regions in inverse problems that are convex and potentially very high-dimensional. The approximations are derived by using recent concentration of measure results related to information theory for log-concave random vectors. A remarkable property of the approximations is that they can be computed very efficiently, even in large-scale problems, by using standard convex optimization techniques. In particular, they are available as a by-product in problems solved by maximum-a-posteriori estimation.

Journal ArticleDOI
TL;DR: A Bayesian maximum a posteriori (MAP) framework is formulated to optimize the NLF estimation, and a method for image splicing detection according to noise level inconsistency in image blocks taken from different origins is developed.
Abstract: In a spliced image, areas from different origins contain different noise features, which may be exploited as evidence for forgery detection. In this paper, we propose a noise level evaluation method for digital photos, and use the method to detect image splicing. Unlike most noise-based forensic techniques in which an AWGN model is assumed, the noise distribution used in the present work is intensity-dependent. This model can be described with a noise level function (NLF) that better fits the actual noise characteristics. NLF reveals variation in the standard deviation of noise with respect to image intensity. In contrast to denoising problems, noise in forensic applications is generally weak and content-related, and estimation of noise characteristics must be done in small areas. By exploring the relationship between NLF and the camera response function (CRF), we fit the NLF curve under the CRF constraints. We then formulate a Bayesian maximum a posteriori (MAP) framework to optimize the NLF estimation, and develop a method for image splicing detection according to noise level inconsistency in image blocks taken from different origins. Experimental results are presented to show the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: Bayesian maximum a posteriori estimation can be employed to solve the inverse problem, where morphological and relevant biomedical knowledge are used as priors and solutions can be robustly computed using a gradient-based optimization algorithm.
Abstract: Quantitative susceptibility mapping (QSM) solves the magnetic field-to-magnetization (tissue susceptibility) inverse problem under conditions of noisy and incomplete field data acquired using magnetic resonance imaging. Therefore, sophisticated algorithms are necessary to treat the ill-posed nature of the problem and are reviewed here. The forward problem is typically presented as an integral form, where the field is the convolution of the dipole kernel and tissue susceptibility distribution. This integral form can be equivalently written as a partial differential equation (PDE). Algorithmic challenges are to reduce streaking and shadow artifacts characterized by the fundamental solution of the PDE. Bayesian maximum a posteriori estimation can be employed to solve the inverse problem, where morphological and relevant biomedical knowledge (specific to the imaging situation) are used as priors. As the cost functions in Bayesian QSM framework are typically convex, solutions can be robustly computed using a gradient-based optimization algorithm. Moreover, one can not only accelerate Bayesian QSM, but also increase its effectiveness at reducing shadows using prior knowledge based preconditioners. Improving the efficiency of QSM is under active development, and a rigorous analysis of preconditioning needs to be carried out for further investigation.
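A representative form of such a convex Bayesian QSM cost function (a MEDI-style sketch in our notation; the exact weighting and priors vary across methods) is

$\hat{\chi} = \arg\min_{\chi} \tfrac{1}{2} \| w \, (b - d * \chi) \|_2^2 + \lambda \, \| M_G \nabla \chi \|_1$

where $b$ is the measured local field, $d$ the dipole kernel, $w$ a noise weighting, and $M_G$ a binary mask derived from the anatomical magnitude image that encodes the morphological prior; gradient-based solvers handle this convex objective, and preconditioning accelerates them as noted above.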

Journal ArticleDOI
TL;DR: This paper formulates the low-dose CT sinogram preprocessing as a standard maximum a posteriori (MAP) estimation problem, which takes full consideration of the statistical properties of the two intrinsic noise sources in low-dose CT, i.e., the X-ray photon statistics and the electronic noise background.
Abstract: Computed tomography (CT) image recovery from low-mAs acquisitions without adequate treatment is always severely degraded due to a number of physical factors. In this paper, we formulate the low-dose CT sinogram preprocessing as a standard maximum a posteriori (MAP) estimation, which takes full consideration of the statistical properties of the two intrinsic noise sources in low-dose CT, i.e., the X-ray photon statistics and the electronic noise background. In addition, instead of using a general image prior as found in the traditional sinogram recovery models, we design a new prior formulation to more rationally encode the piecewise-linear configurations underlying a sinogram than previously used ones, like the TV prior term. As compared with the previous methods, especially the MAP-based ones, both the likelihood/loss and prior/regularization terms in the proposed model are ameliorated in a more accurate manner and better comply with the statistical essence of the generation mechanism of a practical sinogram. We further construct an efficient alternating direction method of multipliers algorithm to solve the proposed MAP framework. Experiments on simulated and real low-dose CT data demonstrate the superiority of the proposed method according to both visual inspection and comprehensive quantitative performance evaluation.

Journal Article
TL;DR: A generic Bayesian mixed-effects model to estimate the temporal progression of a biological phenomenon from observations obtained at multiple time points for a group of individuals and shows that the estimated spatiotemporal transformations effectively put into correspondence significant events in the progression of individuals.
Abstract: We propose a generic Bayesian mixed-effects model to estimate the temporal progression of a biological phenomenon from observations obtained at multiple time points for a group of individuals. The progression is modeled by continuous trajectories in the space of measurements. Individual trajectories of progression result from spatiotemporal transformations of an average trajectory. These transformations allow us to quantify the changes in direction and pace at which the trajectories are followed. The framework of Riemannian geometry allows the model to be used with any kind of measurements with smooth constraints. A stochastic version of the Expectation-Maximization algorithm is used to produce maximum a posteriori estimates of the parameters. We evaluate our method using series of neuropsychological test scores from patients with mild cognitive impairments later diagnosed with Alzheimer's disease, and simulated evolutions of symmetric positive definite matrices. The data-driven model of the impairment of cognitive functions shows the variability in the ordering and timing of the decline of these functions in the population. We also show that the estimated spatiotemporal transformations effectively put into correspondence significant events in the progression of individuals.

Journal ArticleDOI
TL;DR: The LSNSGR exploits both the natural and learned priors of HR images, thus integrating the merits of conventional reconstruction-based and learning-based SISR algorithms and produces better HR estimations than many state-of-the-art works.
Abstract: Single image super-resolution (SISR) is a challenging task, which aims to recover the missing information in an observed low-resolution (LR) image and generate the corresponding high-resolution (HR) version. As the SISR problem is severely ill-conditioned, effective prior knowledge of HR images is necessary to well pose the HR estimation. In this paper, an effective SISR method is proposed via the local structure-adaptive transform-based nonlocal self-similarity modeling and learning-based gradient regularization (LSNSGR). The LSNSGR exploits both the natural and learned priors of HR images, thus integrating the merits of conventional reconstruction-based and learning-based SISR algorithms. More specifically, on the one hand, we characterize nonlocal self-similarity prior (natural prior) in transform domain by using the designed local structure-adaptive transform; on the other hand, the gradient prior (learned prior) is learned via the jointly optimized regression model. The former prior is effective in suppressing visual artifacts, while the latter performs well in recovering sharp edges and fine structures. By incorporating the two complementary priors into the maximum a posteriori-based reconstruction framework, we optimize a hybrid L1- and L2-regularized minimization problem to achieve an estimation of the desired HR image. Extensive experimental results suggest that the proposed LSNSGR produces better HR estimations than many state-of-the-art works in terms of both perceptual and quantitative evaluations.

Journal ArticleDOI
TL;DR: Two approaches, an adjoint-based method and a stochastic spectral method, are investigated and used to estimate the maximum a posteriori point of the parameters and their variance, which quantifies their uncertainty in the solution of power grid inverse problems.
Abstract: We address the problem of estimating the uncertainty in the solution of power grid inverse problems within the framework of Bayesian inference. We investigate two approaches, an adjoint-based method and a stochastic spectral method. These methods are used to estimate the maximum a posteriori point of the parameters and their variance, which quantifies their uncertainty. Within this framework, we estimate several parameters of the dynamic power system, such as generator inertias, which are not quantifiable in steady-state models. We illustrate the performance of these approaches on a 9-bus power grid example and analyze the dependence on measurement frequency, estimation horizon, perturbation size, and measurement noise. We assess the computational efficiency, and discuss the expected performance when these methods are applied to large systems.

Journal ArticleDOI
TL;DR: Bayesian estimation results were found similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework.
Abstract: The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias is close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included.

Journal ArticleDOI
TL;DR: This paper develops a model-based iterative reconstruction algorithm that computes the maximum a posteriori estimate of the phase and the speckle-free object reflectance and shows that the algorithm is robust against high noise and strong phase errors.
Abstract: The estimation of phase errors from digital-holography data is critical for applications such as imaging or wavefront sensing. Conventional techniques require multiple i.i.d. data and perform poorly in the presence of high noise or large phase errors. In this paper, we propose a method to estimate isoplanatic phase errors from a single data realization. We develop a model-based iterative reconstruction algorithm that computes the maximum a posteriori estimate of the phase and the speckle-free object reflectance. Using simulated data, we show that the algorithm is robust against high noise and strong phase errors.

Journal ArticleDOI
TL;DR: This work proposes the use of a statistical shape model (SSM) as a prior for surface reconstruction, compares the method to the widely used Iterative Closest Point algorithm on several different anatomical datasets/SSMs, and demonstrates superior accuracy and robustness on sparse data.

Journal ArticleDOI
TL;DR: This work considers Bayesian empirical likelihood estimation and develops an efficient Hamiltonian Monte Carlo method for sampling from the posterior distribution of the parameters of interest and uses hitherto unknown properties of the gradient of the underlying log‐empirical‐likelihood function to show its utility.
Abstract: We consider Bayesian empirical likelihood estimation and develop an efficient Hamiltonian Monte Carlo method for sampling from the posterior distribution of the parameters of interest. The method proposed uses hitherto unknown properties of the gradient of the underlying log-empirical-likelihood function. We use results from convex analysis to show that these properties hold under minimal assumptions on the parameter space, prior density and the functions used in the estimating equations determining the empirical likelihood. Our method employs a finite number of estimating equations and observations but produces valid semiparametric inference for a large class of statistical models including mixed effects models, generalized linear models and hierarchical Bayes models. We overcome major challenges posed by complex, non-convex boundaries of the support routinely observed for empirical likelihood which prevent efficient implementation of traditional Markov chain Monte Carlo methods such as random-walk Metropolis–Hastings sampling, with or without parallel tempering. A simulation study confirms that our method converges quickly and draws samples from the posterior support efficiently. We further illustrate its utility through an analysis of a discrete data set in small area estimation.

Journal ArticleDOI
TL;DR: An orthogonal frequency-division multiplexing (OFDM) system based on the long-term evolution (LTE) railway standard is considered and a maximum a posteriori estimator (MAPE) is proposed to provide an accurate estimation of Doppler shift.
Abstract: Due to the high mobility of high-speed trains (HSTs), Doppler shift estimation has been a big challenge for HSTs. In this paper, we consider an orthogonal frequency-division multiplexing (OFDM) system based on the long-term evolution (LTE) railway standard and design the novel Doppler shift estimation algorithm. By exploiting features of HSTs, i.e., regular and repetitive routes and timetables, resulting in a predictable Doppler shift curve, a radio environment map (REM) including the Doppler shift information can be constructed via field tests. Based on REM, a maximum a posteriori estimator (MAPE) is proposed to provide an accurate estimation of Doppler shift. It uses the estimation from REM (REME) as a priori knowledge and exploits the cyclic prefix (CP) structure of OFDM to provide a maximum a posteriori estimation. The Cramer–Rao lower bounds (CRLBs) are derived. The performance of MAPE is evaluated via simulations and compared to that of REME, the classical CP-based estimator, and other existing methods. It is shown that MAPE significantly outperforms the existing methods in terms of both estimation mean square error (MSE) and bit error rate.
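Schematically, the MAPE combines the CP-based likelihood with the REM prior (our notation):

$\hat{f}_d = \arg\max_{f_d} \; \big[ \log p(\mathbf{r} \mid f_d) + \log p_{\mathrm{REM}}(f_d) \big]$

where $\mathbf{r}$ is the received OFDM signal, $p(\mathbf{r} \mid f_d)$ follows from the cyclic-prefix correlation structure, and $p_{\mathrm{REM}}(f_d)$ is the prior read from the radio environment map at the train's current position along the route.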

Journal ArticleDOI
TL;DR: This letter presents a multi-fault diagnosis scheme for bearings using hybrid features extracted from their acoustic emissions and a Bayesian inference-based one-against-all support vector machine (Bayesian OAASVM) for multi-class classification.
Abstract: This letter presents a multi-fault diagnosis scheme for bearings using hybrid features extracted from their acoustic emissions and a Bayesian inference-based one-against-all support vector machine (Bayesian OAASVM) for multi-class classification. The standard OAASVM, a multi-class extension of the binary support vector machine, results in ambiguously labeled regions in the input space that degrade its classification performance. The proposed Bayesian OAASVM formulates the feature space as an appropriate Gaussian process prior, interprets the OAASVM decision value as a maximum a posteriori evidence function, and uses Bayesian inference to label unknown samples.

Journal ArticleDOI
TL;DR: A Maximum A Posteriori estimator for handling complex data, which adopts Markov Random Fields for modeling the images, is proposed; first results and comparisons with other widely adopted denoising filters confirm the validity of the method.

Journal ArticleDOI
TL;DR: A novel activity class representation using a single sequence for training is presented, useful in new scenarios where capturing and labeling sequences is expensive or impractical; the discriminative properties of the representation and the validity of its application in recognition systems are demonstrated.
Abstract: This paper presents a novel activity class representation using a single sequence for training. The contribution of this representation lies in the ability to train a one-shot learning recognition system, useful in new scenarios where capturing and labeling sequences is expensive or impractical. The method uses a universal background model of local descriptors obtained from source databases available on-line and adapts it to a new sequence in the target scenario through a maximum a posteriori adaptation. Each activity sample is encoded in a sequence of normalized bag of features and modeled by a new hidden Markov model formulation, where the expectation-maximization algorithm for training is modified to deal with observations consisting of vectors on a unit simplex. Extensive experiments in recognition have been performed using one-shot learning over the public datasets Weizmann, KTH, and IXMAS. These experiments demonstrate the discriminative properties of the representation and the validity of application in recognition systems, achieving state-of-the-art results.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the maximum a-posteriori (MAP) estimation of autoregressive model parameters when the innovations (errors) follow a finite mixture of distributions that, in turn, are scale-mixtures of skew-normal distributions.
Abstract: This article investigates maximum a-posteriori (MAP) estimation of autoregressive model parameters when the innovations (errors) follow a finite mixture of distributions that, in turn, are scale-mixtures of skew-normal distributions (SMSN), an attractive and extremely flexible family of probabilistic distributions. The proposed model allows fitting different types of data, which can be associated with different noise levels, and provides robust modelling with great flexibility to accommodate skewness, heavy tails, multimodality and stationarity simultaneously. Also, the existence of convenient hierarchical representations of the SMSN random variables allows us to develop an EM-type algorithm to compute the MAP estimates. A comprehensive simulation study is then conducted to illustrate the superior performance of the proposed method. The new methodology is also applied to annual barley yields data.
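In symbols, the model under study is an AR($p$) process whose innovations follow a finite mixture of SMSN components (our notation):

$y_t = \phi_0 + \sum_{i=1}^{p} \phi_i \, y_{t-i} + \varepsilon_t, \qquad \varepsilon_t \sim \sum_{j=1}^{G} \pi_j \, \mathrm{SMSN}(\mu_j, \sigma_j^2, \lambda_j; \nu_j),$

with mixing weights $\pi_j$ summing to one; the hierarchical representation of the SMSN family is what makes the EM-type MAP estimation described above tractable.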