
Showing papers on "Maximum a posteriori estimation published in 2019"


Book
02 Dec 2019
TL;DR: This book develops Bayesian models for regression, factor analysis, and source separation, presenting for each the likelihood, conjugate and generalized priors, and the corresponding posterior estimation and inference, including maximum a posteriori estimation via ICM and its trade-offs against Gibbs sampling.
Abstract: Introduction Part I: FUNDAMENTALS STATISTICAL DISTRIBUTIONS Scalar Distributions Vector Distributions Matrix Distributions INTRODUCTORY BAYESIAN STATISTICS Discrete Scalar Variables Continuous Scalar Variables Continuous Vector Variables Continuous Matrix Variables PRIOR DISTRIBUTIONS Vague Priors Conjugate Priors Generalized Priors Correlation Priors HYPERPARAMETER ASSESSMENT Introduction Binomial Likelihood Scalar Normal Likelihood Multivariate Normal Likelihood Matrix Normal Likelihood BAYESIAN ESTIMATION METHODS Marginal Posterior Mean Maximum a Posteriori Advantages of ICM over Gibbs Sampling Advantages of Gibbs Sampling over ICM REGRESSION Introduction Normal Samples Simple Linear Regression Multiple Linear Regression Multivariate Linear Regression Part II: MODELS BAYESIAN REGRESSION Introduction The Bayesian Regression Model Likelihood Conjugate Priors and Posterior Conjugate Estimation and Inference Generalized Priors and Posterior Generalized Estimation and Inference Interpretation Discussion BAYESIAN FACTOR ANALYSIS Introduction The Bayesian Factor Analysis Model Likelihood Conjugate Priors and Posterior Conjugate Estimation and Inference Generalized Priors and Posterior Generalized Estimation and Inference Interpretation Discussion BAYESIAN SOURCE SEPARATION Introduction Source Separation Model Source Separation Likelihood Conjugate Priors and Posterior Conjugate Estimation and Inference Generalized Priors and Posterior Generalized Estimation and Inference Interpretation Discussion UNOBSERVABLE AND OBSERVABLE SOURCE SEPARATION Introduction Model Likelihood Conjugate Priors and Posterior Conjugate Estimation and Inference Generalized Priors and Posterior Generalized Estimation and Inference Interpretation Discussion FMRI CASE STUDY Introduction Model Priors and Posterior Estimation and Inference Simulated FMRI Experiment Real FMRI Experiment FMRI Conclusion Part III: GENERALIZATIONS DELAYED SOURCES AND DYNAMIC COEFFICIENTS Introduction Model Delayed Constant Mixing Delayed Nonconstant Mixing Instantaneous Nonconstant Mixing Likelihood Conjugate Priors and Posterior Conjugate Estimation and Inference Generalized Priors and Posterior Generalized Estimation and Inference Interpretation Discussion CORRELATED OBSERVATION AND SOURCE VECTORS Introduction Model Likelihood Conjugate Priors and Posterior Conjugate Estimation and Inference Posterior Conditionals Generalized Priors and Posterior Generalized Estimation and Inference Interpretation Discussion CONCLUSION Appendix A FMRI Activation Determination Appendix B FMRI Hyperparameter Assessment Bibliography Index

114 citations


Journal ArticleDOI
TL;DR: The presented approach accounts for all sources of uncertainty involved in hydrologic predictions, uses a small ensemble size, and precludes particle degeneracy and sample impoverishment; its effectiveness, robustness, and reliability are demonstrated for several river basins across the United States.
Abstract: This article presents a novel approach to couple a deterministic four-dimensional variational (4DVAR) assimilation method with the particle filter (PF) ensemble data assimilation system, to produce a robust approach for dual-state-parameter estimation. In our proposed method, the Hybrid Ensemble and Variational Data Assimilation framework for Environmental systems (HEAVEN), we characterize the model structural uncertainty in addition to model parameter and input uncertainties. The sequential PF is formulated within the 4DVAR system to design a computationally efficient feedback mechanism throughout the assimilation period. In this framework, the 4DVAR optimization produces the maximum a posteriori estimate of state variables at the beginning of the assimilation window without the need to develop the adjoint of the forecast model. The 4DVAR solution is then perturbed by a newly defined prior error covariance matrix to generate an initial condition ensemble for the PF system to provide more accurate and reliable posterior distributions within the same assimilation window. The prior error covariance matrix is updated from one cycle to another over the main assimilation period to account for model structural uncertainty, resulting in an improved estimate of the posterior distribution. The premise of the presented approach is that it (1) accounts for all sources of uncertainty involved in hydrologic predictions, (2) uses a small ensemble size, and (3) precludes particle degeneracy and sample impoverishment. The proposed method is applied to a nonlinear hydrologic model, and the effectiveness, robustness, and reliability of the method are demonstrated for several river basins across the United States.
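The degeneracy and impoverishment safeguards referenced above are easiest to see in the particle filter's reweighting step. Below is a minimal, generic sketch of one PF analysis step with an effective-sample-size resampling trigger; it is not the HEAVEN implementation, and the function and parameter names are mine.

```python
import numpy as np

def pf_update(particles, weights, likelihood, resample_frac=0.5):
    """One generic particle-filter analysis step: reweight particles by the
    observation likelihood, then resample when the effective sample size
    collapses (the 'particle degeneracy' the framework is designed to avoid)."""
    weights = weights * likelihood(particles)
    weights /= weights.sum()
    ess = 1.0 / np.sum(weights ** 2)            # effective sample size
    if ess < resample_frac * len(weights):      # few particles carry all the mass
        idx = np.random.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```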

69 citations


Journal ArticleDOI
TL;DR: This paper provides a counterexample showing that the common claim that maximum a posteriori estimators are a limiting case of Bayes estimators with 0–1 loss is, in general, false, and then repairs the claim by providing a level-set condition on posterior densities under which the result does hold.
Abstract: Maximum a posteriori and Bayes estimators are two common methods of point estimation in Bayesian statistics. It is commonly accepted that maximum a posteriori estimators are a limiting case of Bayes estimators with 0–1 loss. In this paper, we provide a counterexample which shows that in general this claim is false. We then correct the claim by providing a level-set condition for posterior densities under which the result holds. Since both estimators are defined in terms of optimization problems, the tools of variational analysis find a natural application in Bayesian point estimation.
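The claim under discussion can be made concrete numerically: under 0–1 loss with tolerance delta, the Bayes estimator maximizes the posterior mass of a ball of radius delta, and for well-behaved densities it approaches the MAP estimate (the mode) as delta shrinks. The sketch below is my own illustration on a skewed density, not the paper's counterexample.

```python
import numpy as np
from scipy import stats

# A skewed posterior, discretized on a grid (illustrative only).
grid = np.linspace(-5.0, 5.0, 20001)
dx = grid[1] - grid[0]
post = stats.skewnorm(5).pdf(grid)
post /= post.sum() * dx                      # normalize the discretized density

map_est = grid[np.argmax(post)]              # MAP estimate: posterior mode

def bayes_01(delta):
    """Bayes estimator under 0-1 loss with tolerance delta: the center c that
    maximizes the posterior mass of the ball [c - delta, c + delta]."""
    k = max(1, int(round(delta / dx)))
    cdf = np.concatenate(([0.0], np.cumsum(post) * dx))
    mass = cdf[2 * k:] - cdf[:-2 * k]        # mass of each width-2*delta window
    return grid[k + np.argmax(mass)]

for d in (1.0, 0.3, 0.05):
    print(f"delta={d}: Bayes={bayes_01(d):.4f}  MAP={map_est:.4f}")
```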

56 citations


Journal ArticleDOI
TL;DR: In this paper, a free-form curve registration method is applied to an efficient RGB-D visual odometry system called Canny-VO, as it efficiently tracks all Canny edge features extracted from the images.
Abstract: This paper reviews the classical problem of free-form curve registration and applies it to an efficient RGB-D visual odometry system called Canny-VO, as it efficiently tracks all Canny edge features extracted from the images. Two replacements for the distance transformation commonly used in edge registration are proposed: approximate nearest neighbor fields and oriented nearest neighbor fields. 3-D–2-D edge alignment benefits from these alternative formulations in terms of both efficiency and accuracy. It removes the need for the more computationally demanding paradigms of data-to-model registration, bilinear interpolation, and subgradient computation. To ensure robustness of the system in the presence of outliers and sensor noise, the registration is formulated as a maximum a posteriori problem and the resulting weighted least-squares objective is solved by the iteratively reweighted least-squares method. A variety of robust weight functions are investigated and the optimal choice is made based on the statistics of the residual errors. Efficiency is furthermore boosted by an adaptively sampled definition of the nearest neighbor fields. Extensive evaluations on public SLAM benchmark sequences demonstrate state-of-the-art performance and an advantage over classical Euclidean distance fields.
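The MAP-plus-IRLS machinery described above reduces, in the linear case, to a few lines. A minimal sketch assuming Huber weights (the paper evaluates several robust weight functions and picks one from the residual statistics; the names here are mine):

```python
import numpy as np

def irls(A, b, delta=1.0, iters=20):
    """Iteratively reweighted least squares for a robust (MAP-style) objective:
    residuals beyond `delta` are down-weighted by a Huber influence function so
    outlying correspondences contribute less. Linear sketch; Canny-VO applies
    the same idea to a nonlinear pose objective."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]                 # ordinary LS start
    for _ in range(iters):
        r = A @ x - b
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))   # Huber weights
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x
```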

53 citations


Journal ArticleDOI
TL;DR: A colour mixture analysis (CMA) method based on the Hue-Saturation-Value (HSV) colour space is proposed, improving the accuracy and efficiency of FVC estimation from UAV-captured RGB images; its accuracy is shown to be superior to that of all three benchmark algorithms.
Abstract: Remote sensing via unmanned aerial vehicles (UAVs) is becoming a very important tool for augmenting traditional spaceborne and airborne remote sensing techniques. Commercial RGB cameras are often the payload on UAVs, because they are inexpensive, easy to operate and require little data processing. RGB images are increasingly being used for mapping of fractional vegetation cover (FVC). However, the presence of significantly mixed pixels in close-range RGB images prevents the accurate estimation of FVC. Even where pixel unmixing is applied, limited quantitative spectral information and colour variability within these images could lead to profound errors and uncertainties. This paper proposes a colour mixture analysis (CMA) method based on the Hue-Saturation-Value (HSV) colour space to alleviate the above-mentioned concerns, thereby improving the accuracy and efficiency of FVC estimation from UAV-captured RGB images. First, the a priori colour information of the pure vegetation and background endmembers is extracted from the Hue channel of the UAV proximal sensing images, obviating ground-based image capture and the attendant cost and inconvenience. Second, the relationship between the probability distribution of mixed pixels and that of the two endmembers is estimated. Finally, we estimate FVC from UAV remote sensing images with a maximum a posteriori (MAP) parameter estimator. Two UAV-captured RGB image datasets and a synthetic RGB image dataset were used to test the new method. CMA was compared with three other FVC estimation algorithms, namely, FCLS, HAGFVC and LAB2. The FVC estimates by CMA were found to be highly accurate, with root mean squared errors (RMSE) of less than 0.007 and mean absolute error (MAE) of less than 0.01 for both field datasets. The accuracy was shown to be superior to that of all three algorithms. A comprehensive analysis of the estimation accuracy under various spatial resolutions and vegetation cover levels was conducted using both field and synthetic datasets. Results show that the CMA method can robustly and accurately estimate FVC across the full range of vegetation coverage and various resolutions. Uncertainty and sensitivity analysis of colour variability due to heterogeneity and shadow were also tested. Overall, CMA was shown to be robust to variation in colour and illumination.

50 citations


Journal ArticleDOI
TL;DR: An extensible and comprehensive model to solve the load disaggregation problem is proposed, which employs the additive factorial approximate maximum a posteriori (AFAMAP) algorithm based on iterative fuzzy $c$-means (IFCM).
Abstract: With the promotion and application of the smart grid, the technology of non-intrusive load monitoring (NILM) has gained more and more attention in recent years. Different from direct device monitoring, it identifies the type of appliances using the aggregated load profile measured at a single metering point, which is more convenient and flexible, and thus has the potential to be extended to all households equipped with smart meters. This paper proposes an extensible and comprehensive model to solve the load disaggregation problem, which employs the additive factorial approximate maximum a posteriori (AFAMAP) algorithm based on iterative fuzzy $c$-means (IFCM). To make sure that the model is adaptive to other households, hidden Markov models (HMMs) are applied to obtain the independent load model of each appliance, and the IFCM is used to determine the number of hidden states adaptively. Finally, the AFAMAP is utilized to decompose the aggregated power consumption based on the independent load models built by the HMMs. Simulation studies are conducted on the open Almanac of Minutely Power dataset (AMPds), and the results demonstrate that the proposed model is more accurate than the other models.

48 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian uncertainty quantification method for large-scale imaging inverse problems is proposed, which applies to all Bayesian models that are log-concave, where maximum a posteriori (MAP) is used.
Abstract: We propose a Bayesian uncertainty quantification method for large-scale imaging inverse problems. Our method applies to all Bayesian models that are log-concave, where maximum a posteriori (MAP) es...

47 citations


Journal ArticleDOI
TL;DR: The experimental results provide evidence that an a priori classifier such as the Bayes classifier, which performs well in terms of OA, does not necessarily perform well as an a posteriori classifier in terms of PR, the only criterion that can be used as an a posteriori classification measure to evaluate how well a classifier performs.
Abstract: This paper presents a statistical detection theory approach to hyperspectral image (HSI) classification which is quite different from many conventional approaches reported in the HSI classification literature. It translates a multi-target detection problem into a multi-class classification problem so that the well-established statistical detection theory can be readily applied to solving classification problems. In particular, two types of classification, a priori classification and a posteriori classification, are developed, corresponding to Bayes detection and maximum a posteriori (MAP) detection in detection theory, respectively. As a result, detection probability and false alarm probability can also be translated to classification rate and false classification rate derived from a confusion classification matrix. To evaluate the effectiveness of a posteriori classification, a new a posteriori classification measure, called precision rate (PR), is also introduced via MAP classification, in contrast to overall accuracy (OA), which can be considered an a priori classification measure and has been used for Bayes classification. The experimental results provide evidence that an a priori classifier such as the Bayes classifier that performs well in terms of OA does not necessarily perform well as an a posteriori classifier in terms of PR. That is, PR is the only criterion that can be used as an a posteriori classification measure to evaluate how well a classifier performs.
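Assuming PR is the per-class precision read off the confusion matrix (as the abstract suggests) and OA is its normalized trace, the two measures can disagree; a small sketch with made-up counts:

```python
import numpy as np

# C[i, j] = number of samples of true class i assigned to class j
# (counts are invented for illustration).
C = np.array([[50,  5,  0],
              [10, 30,  5],
              [ 0, 20, 80]])

oa = np.trace(C) / C.sum()        # overall accuracy: a priori measure
pr = np.diag(C) / C.sum(axis=0)   # precision rate per class: a posteriori,
                                  # correct / all samples assigned to that class
print(f"OA = {oa:.3f}, PR per class = {np.round(pr, 3)}")
```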

46 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed an algorithm to jointly estimate the cosmological information of the CMB temperature and polarization fields, the gravitational potential by which they are lensed, and the tensor-to-scalar ratio, which is an important step towards sampling from the joint posterior probability function of these quantities.
Abstract: We develop the first algorithm able to jointly compute the maximum a posteriori estimate of the Cosmic Microwave Background (CMB) temperature and polarization fields, the gravitational potential by which they are lensed, and the tensor-to-scalar ratio, $r$. This is an important step towards sampling from the joint posterior probability function of these quantities, which, assuming Gaussianity of the CMB fields and lensing potential, contains all available cosmological information and would yield theoretically optimal constraints. Attaining such optimal constraints will be crucial for next-generation CMB surveys like CMB-S4, where limits on $r$ could be improved by factors of a few over currently used suboptimal quadratic estimators. The maximization procedure described here depends on a newly developed lensing algorithm, which we term LenseFlow, and which lenses a map by solving a system of ordinary differential equations. This description has conceptual advantages, such as allowing us to give a simple nonperturbative proof that the determinant of LenseFlow on pixelized maps is equal to unity, which is crucial for our purposes and unique to LenseFlow as compared to other lensing algorithms we have tested. It also has other useful properties such as that it can be trivially inverted (i.e., delensing) for the same computational cost as the forward operation, and can be used to compute lensing adjoint, Jacobian, and Hessian operators. We test and validate the maximization procedure on flat-sky simulations covering up to $600\,\mathrm{deg}^2$ with nonwhite noise and masking.

42 citations


Journal ArticleDOI
TL;DR: In this paper, a new channel estimation algorithm is proposed that exploits channel sparsity in the time domain for orthogonal frequency division multiplexing (OFDM)-based underwater acoustic (UWA) communication systems in the presence of Rician fading.
Abstract: In this paper, a new channel estimation algorithm is proposed that exploits channel sparsity in the time domain for orthogonal frequency division multiplexing (OFDM)-based underwater acoustic (UWA) communication systems in the presence of Rician fading. A path-based channel model is used, in which the channel is described by a limited number of paths, each characterized by a delay, Doppler scale, and attenuation factor. The resulting algorithm initially estimates the overall sparse channel tap delays and Doppler shifts using a compressed sensing approach, in the form of the orthogonal matching pursuit (OMP) algorithm. Then, a computationally efficient and novel channel estimation algorithm is developed by combining the OMP and maximum a posteriori probability (MAP) techniques for estimating the sparse complex channel path gains, whose prior densities have complex Gaussian distributions with unknown mean and variance vectors; a computationally efficient maximum likelihood algorithm is proposed for their estimation. Monte Carlo simulation results show that the OMP–MAP algorithm uniformly outperforms the conventional OMP-based channel estimation algorithm in terms of mean square error and symbol error rate for uncoded OFDM-based UWA communication systems.
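The compressed-sensing front end used here is standard orthogonal matching pursuit. A minimal complex-valued sketch, with the dictionary columns standing in for delay-Doppler atoms (function and variable names are mine):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the k dictionary columns most
    correlated with the residual, re-fitting the gains on the growing support
    by least squares at each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))   # best-matching atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x, support
```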

41 citations


Posted Content
TL;DR: This paper proposes a Bayesian tensorized neural network that performs automatic model compression via an adaptive tensor rank determination and presents approaches for posterior density calculation and maximum a posteriori (MAP) estimation for the end-to-end training of the network.
Abstract: Tensor decomposition is an effective approach to compress over-parameterized neural networks and to enable their deployment on resource-constrained hardware platforms. However, directly applying tensor compression in the training process is a challenging task due to the difficulty of choosing a proper tensor rank. In order to achieve this goal, this paper proposes a Bayesian tensorized neural network. Our Bayesian method performs automatic model compression via an adaptive tensor rank determination. We also present approaches for posterior density calculation and maximum a posteriori (MAP) estimation for the end-to-end training of our tensorized neural network. We provide experimental validation on a fully connected neural network, a CNN and a residual neural network where our work produces $7.4\times$ to $137\times$ more compact neural networks directly from the training.

Journal ArticleDOI
TL;DR: Simulation results with practical channel models demonstrate that the proposed semiblind scheme significantly outperforms the training-based and data-aided schemes.
Abstract: This paper considers and analyzes the performance of semiblind, training, and data-aided channel estimation schemes for multiple-input multiple-output (MIMO) filter bank multicarrier (FBMC) systems with offset quadrature amplitude modulation. A semiblind MIMO-FBMC (SB-MF) channel estimator is developed that exploits both the training symbols and second-order statistical properties of the data symbols, which leads to a significant decrease in the mean squared error (MSE) with respect to its conventional training-based counterpart. Its performance is compared with that of the interference approximation method-based least squares MIMO-FBMC (LS-MF) channel estimator, wherein the channel is estimated using exclusively training symbols. The Cramer–Rao lower bounds are derived to characterize the MSE of the proposed and LS-MF estimators, which interestingly demonstrate that while the MSE per parameter of the proposed scheme decreases with the number of receive antennas, it remains constant for the training-based scheme. The resulting bit error rates are derived for the proposed SB-MF and LS-MF channel estimators. An expectation maximization-based data-aided MIMO-FBMC channel estimator is also investigated that performs iterative maximum a posteriori channel estimation in the E-step followed by data detection in the M-step. A comparative analysis is presented for the computational complexities of the various schemes. Simulation results with practical channel models demonstrate that the proposed semiblind scheme significantly outperforms the training-based and data-aided schemes.

Journal ArticleDOI
TL;DR: Bayesian Representational Similarity Analysis (BRSA) is proposed, an alternative method for computing representational similarity, in which the covariance structure of neural activity patterns is treated as a hyper-parameter in a generative model of the neural data.
Abstract: The activity of neural populations in the brains of humans and animals can exhibit vastly different spatial patterns when faced with different tasks or environmental stimuli. The degrees of similarity between these neural activity patterns in response to different events are used to characterize the representational structure of cognitive states in a neural population. The dominant methods of investigating this similarity structure first estimate neural activity patterns from noisy neural imaging data using linear regression, and then examine the similarity between the estimated patterns. Here, we show that this approach introduces spurious bias structure in the resulting similarity matrix, in particular when applied to fMRI data. This problem is especially severe when the signal-to-noise ratio is low and in cases where experimental conditions cannot be fully randomized in a task. We propose Bayesian Representational Similarity Analysis (BRSA), an alternative method for computing representational similarity, in which we treat the covariance structure of neural activity patterns as a hyper-parameter in a generative model of the neural data. By marginalizing over the unknown activity patterns, we can directly estimate this covariance structure from imaging data. This method offers significant reductions in bias and allows estimation of neural representational similarity with previously unattained levels of precision at low signal-to-noise ratio, without losing the possibility of deriving an interpretable distance measure from the estimated similarity. The method is closely related to Pattern Component Model (PCM), but instead of modeling the estimated neural patterns as in PCM, BRSA models the imaging data directly and is suited for analyzing data in which the order of task conditions is not fully counterbalanced. The probabilistic framework allows for jointly analyzing data from a group of participants. The method can also simultaneously estimate a signal-to-noise ratio map that shows where the learned representational structure is supported more strongly. Both this map and the learned covariance matrix can be used as a structured prior for maximum a posteriori estimation of neural activity patterns, which can be further used for fMRI decoding. Our method therefore paves the way towards a more unified and principled analysis of neural representations underlying fMRI signals. We make our tool freely available in Brain Imaging Analysis Kit (BrainIAK).

Journal ArticleDOI
TL;DR: This work reformulates the question of sparse recovery as an inverse problem in the Bayesian framework, expresses the sparsity belief by means of a hierarchical prior model, and shows that the maximum a posteriori (MAP) solution computed by a recently proposed iterative alternating sequential (IAS) algorithm converges linearly to the unique minimizer for any matrix, and quadratically on the complement of the support of the minimizer.
Abstract: Sparse recovery seeks to estimate the support and the non-zero entries of a sparse signal $x$ from possibly incomplete noisy observations $b = Ax + e$, with $A \in \mathbb{R}^{m \times n}$, $m < n$. It has been shown that under various restrictive conditions on the matrix $A$, the problem can be reduced to the $\ell_1$-regularized problem of minimizing $\|x\|_1$ subject to $\|Ax - b\| \le \eta$, where $\eta$ is the size of the error $e$, and the approximation error is well controlled by $\eta$. A popular method for solving the above minimization problem is the iteratively reweighted least squares algorithm. Here we reformulate the question of sparse recovery as an inverse problem in the Bayesian framework, express the sparsity belief by means of a hierarchical prior model and show that the maximum a posteriori (MAP) solution computed by a recently proposed iterative alternating sequential (IAS) algorithm, requiring only the solution of linear systems in the least squares sense, converges linearly to the unique minimizer for any matrix $A$, and quadratically on the complement of the support of the minimizer. The values of the parameters of the hierarchical model are assigned from an estimate of the signal-to-noise ratio and an a priori belief of the degree of sparsity of the underlying signal, and automatically take into account the sensitivity of the data to the different components of $x$. The approach gives a solid Bayesian interpretation for the commonly used sensitivity weighting in geophysics and biomedical applications. Moreover, since for a suitable choice of sequences of parameters of the hyperprior the IAS solution converges to the $\ell_1$-regularized solution, the Bayesian framework for inverse problems makes the $\ell_1$-magic happen in the $\ell_2$ framework.
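A compact sketch of the IAS iteration for the conditionally Gaussian hierarchical model, as I read it from the IAS literature; the gamma-hyperprior variance update and the parameter values below are my assumptions, not taken from this abstract.

```python
import numpy as np

def ias(A, b, sigma=0.01, theta_bar=1e-3, beta=1.6, iters=30):
    """Iterative alternating sequential (IAS) sketch: model x_j ~ N(0, theta_j)
    with theta_j ~ Gamma(beta, theta_bar); alternate a least-squares update of
    x with a closed-form update of the variances theta."""
    m, n = A.shape
    theta = np.full(n, theta_bar)
    eta = beta - 1.5                              # assumed hyperprior exponent
    for _ in range(iters):
        # x-step: minimize ||Ax - b||^2 / sigma^2 + sum_j x_j^2 / theta_j,
        # rescaled to a standard least-squares problem.
        D = np.sqrt(theta)
        K = np.vstack([A * D / sigma, np.eye(n)])
        rhs = np.concatenate([b / sigma, np.zeros(n)])
        w, *_ = np.linalg.lstsq(K, rhs, rcond=None)
        x = D * w
        # theta-step: closed-form MAP update under the gamma hyperprior.
        theta = theta_bar * (eta / 2 + np.sqrt(eta**2 / 4 + x**2 / (2 * theta_bar)))
    return x
```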

Journal ArticleDOI
TL;DR: A Bayesian framework, via a two-stage maximum a posteriori optimization routine, is employed in order to locate the most probable source of a forced oscillation given an uncertain prior model.
Abstract: Since forced oscillations are exogenous to dynamic power system models, the models by themselves cannot predict when or where a forced oscillation will occur. Locating the sources of these oscillations, therefore, is a challenging problem which requires analytical methods capable of using real time power system data to trace an observed oscillation back to its source. The difficulty of this problem is exacerbated by the fact that the parameters associated with a given power system model can range from slightly uncertain to entirely unknown. In this paper, a Bayesian framework, via a two-stage maximum a posteriori optimization routine, is employed in order to locate the most probable source of a forced oscillation given an uncertain prior model. The approach leverages an equivalent circuit representation of the system in the frequency domain and employs a numerical procedure, which makes the problem suitable for real time application. The derived framework lends itself to successful performance in the presence of phasor measurement unit measurement noise, high generator parameter uncertainty, and multiple forced oscillations occurring simultaneously. The approach is tested on a four-bus system with a single forced oscillation source and on the WECC 179-bus system with multiple oscillation sources.

Journal ArticleDOI
TL;DR: A maximum a posteriori principle-based adaptive fractional central difference Kalman filter is derived that can estimate the noise statistics and system state simultaneously, and the unbiasedness of the proposed algorithm is analyzed.

Journal ArticleDOI
TL;DR: The experimental results showed that the proposed hybrid despeckling method leads to a better speckle reduction in homogeneous areas while preserving details.
Abstract: In this paper, a new hybrid despeckling method is proposed, based on the Undecimated Dual-Tree Complex Wavelet Transform (UDT-CWT) with a maximum a posteriori (MAP) estimator and non-local Principal Component Analysis (PCA)-based filtering with local pixel grouping (LPG-PCA). To achieve heterogeneity-adaptive speckle reduction, the SAR image is classified into three classes: point targets, details, and homogeneous areas. Despeckling is then performed for each pixel according to its class. A logarithm transform is applied to the SAR image to convert the multiplicative speckle into additive noise. The proposed method consists of two principal steps. In the first step, denoising is done in the complex wavelet domain via a MAP estimator. After performing the UDT-CWT, the noise-free complex wavelet coefficients of the log-transformed SAR image are modeled as a two-state Gaussian mixture, while the additive noise in the complex wavelet domain is modeled as a zero-mean Gaussian distribution. In the second step, after applying the inverse UDT-CWT, an iterative LPG-PCA method is used to smooth the homogeneous areas and enhance the details. The proposed method was compared with several state-of-the-art despeckling methods. The experimental results showed that the proposed method leads to better speckle reduction in homogeneous areas while preserving details.
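The log-transform step generalizes to any additive-noise denoiser. A minimal homomorphic skeleton, where the `denoise` argument stands in for the paper's UDT-CWT MAP stage (names are mine):

```python
import numpy as np

def homomorphic_despeckle(img, denoise):
    """Multiplicative speckle I = R * n becomes additive in the log domain,
    log I = log R + log n, so an additive-noise denoiser can be applied
    before exponentiating back to intensities."""
    log_img = np.log(np.maximum(img, 1e-6))   # guard against log(0)
    return np.exp(denoise(log_img))

# Crude stand-in for the wavelet MAP stage, e.g.:
#   from scipy.ndimage import gaussian_filter
#   clean = homomorphic_despeckle(sar_image, lambda x: gaussian_filter(x, 2.0))
```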

Journal ArticleDOI
TL;DR: The proposed approach to estimating the maximum queue lengths of vehicles at signalized intersections using high-frequency trajectory data of probe vehicles is shown to produce more accurate and robust estimates than two benchmark estimation methods.
Abstract: A novel Bayesian approach is proposed for estimating the maximum queue lengths of vehicles at signalized intersections using high-frequency trajectory data of probe vehicles. The queue length estimates are obtained from a distribution estimated over several neighboring cycles via a maximum a posteriori method. An expectation maximization algorithm is proposed for efficiently solving the estimation problem. Through a battery of simulation experiments and a real-world case study, the proposed approach is shown to produce more accurate and robust estimates than two benchmark estimation methods. Fairly good accuracy is achieved even when the probe vehicle penetration rate is 2%.

Journal ArticleDOI
13 Jul 2019-Sensors
TL;DR: This paper describes both sparsity-driven regularization and CS-based radar imaging methods, along with other approaches in a unified mathematical framework, to provide readers with a systematic overview of radar imaging theories and methods from a clear mathematical viewpoint.
Abstract: In recent years, sparsity-driven regularization and compressed sensing (CS)-based radar imaging methods have attracted significant attention. This paper provides an introduction to the fundamental concepts of this area. In addition, we describe both sparsity-driven regularization and CS-based radar imaging methods, along with other approaches, in a unified mathematical framework. This will provide readers with a systematic overview of radar imaging theories and methods from a clear mathematical viewpoint. The methods presented in this paper include minimum variance unbiased estimation, least squares (LS) estimation, Bayesian maximum a posteriori (MAP) estimation, matched filtering, regularization, and CS reconstruction. The characteristics of these methods and their connections are also analyzed. Sparsity-driven regularization and CS-based radar imaging methods represent an active research area; there are still many unsolved or open problems, such as the sampling scheme, computational complexity, sparse representation, influence of clutter, and model error compensation. We summarize the challenges as well as recent advances related to these issues.
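One connection the survey draws can be stated in a few lines: for a linear measurement model with Gaussian noise, a Gaussian prior turns MAP estimation into Tikhonov-regularized least squares, while a Laplace prior yields the sparsity-driven l1 form. A minimal sketch of the Gaussian case (my own illustration):

```python
import numpy as np

def ls_and_map(A, y, sigma2=0.1, tau2=1.0):
    """For y = Ax + n with n ~ N(0, sigma2*I) and prior x ~ N(0, tau2*I), the
    MAP estimate is ridge/Tikhonov-regularized LS; plain LS is the no-prior
    limit tau2 -> infinity."""
    n = A.shape[1]
    x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
    x_map = np.linalg.solve(A.T @ A + (sigma2 / tau2) * np.eye(n), A.T @ y)
    return x_ls, x_map
```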

Journal ArticleDOI
11 Mar 2019-ACS Nano
TL;DR: A Bayesian method that is called maximum a posteriori nanoparticle tracking analysis (MApNTA) for estimating the size distributions of nanoparticle samples from high-throughput single-particle tracking experiments and demonstrates particular utility for characterizing minority components and impurity populations.
Abstract: The rapid and efficient characterization of polydisperse nanoparticle dispersions remains a challenge within nanotechnology and biopharmaceuticals. Current methods for particle sizing, such as dynamic light scattering, analytical ultracentrifugation, and field-flow fractionation, can suffer from a combination of statistical biases, difficult sample preparation, insufficient sampling, and ill-posed data analysis. As an alternative, we introduce a Bayesian method that we call maximum a posteriori nanoparticle tracking analysis (MApNTA) for estimating the size distributions of nanoparticle samples from high-throughput single-particle tracking experiments. We derive unbiased statistical models for two observable quantities in a typical nanoparticle trajectory—the mean square displacement and the trajectory length—as a function of the particle size and calculate size distributions using maximum a posteriori (MAP) estimation with cross validation to mildly regularize solutions. We show that this approach infers...
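For orientation, the point inversion that a MAP treatment generalizes is the Stokes-Einstein relation applied to a trajectory's mean square displacement. The sketch below assumes 2-D in-plane tracking and water at 25 °C; it is my own illustration, not the paper's statistical model.

```python
import numpy as np

def diameter_from_msd(msd, dt, T=298.15, eta=8.9e-4):
    """Per-step mean square displacement (m^2) at lag dt (s) -> diffusion
    coefficient via MSD = 4*D*dt (2-D tracking) -> hydrodynamic diameter via
    Stokes-Einstein, d = kB*T / (3*pi*eta*D)."""
    kB = 1.380649e-23                    # Boltzmann constant, J/K
    D = msd / (4.0 * dt)                 # diffusion coefficient, m^2/s
    return kB * T / (3.0 * np.pi * eta * D)

# A ~100 nm particle diffuses with D ~ 4.9e-12 m^2/s, i.e. msd ~ 6.5e-13 m^2
# over a 33 ms frame:
print(diameter_from_msd(6.5e-13, 0.033))   # ~1e-7 m
```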

Journal ArticleDOI
TL;DR: The results indicate that the proposed method outperforms existing methods in both noise reduction and edge preservation.
Abstract: Medical ultrasound images are used in clinical diagnosis and are generally degraded by speckle noise. This makes automatic interpretation of diseases in ultrasound images difficult. This paper presents a speckle removal algorithm based on modeling the wavelet coefficients. A Bayesian approach is implemented to find the noise-free coefficients. A Cauchy prior and a Gaussian Probability Density Function (PDF) are used to model the true wavelet coefficients and the noisy coefficients, respectively. A Maximum a Posteriori (MAP) estimator is used to estimate the noise-free wavelet coefficients. A Median Absolute Deviation (MAD) estimator is used to find the variance of the affected wavelet coefficients in the finest scale. The proposed method is compared with existing denoising methods. The experimental results show that the method offers up to 21.48% enhancement in Peak Signal to Noise Ratio (PSNR), 1.82% enhancement in Structural Similarity Index (SSIM), 1% enhancement in Correlation coefficient (ρ) and 7.68% enhancement in Edge Preserving Index (EPI) over the best existing wavelet modeling method. The results indicate that the proposed method outperforms existing methods in both noise reduction and edge preservation.
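The single-coefficient MAP rule follows directly from the stated model: with Gaussian noise and a Cauchy prior, the negative log-posterior is minimized at a root of a cubic. A sketch of that computation (my own, following the model as described):

```python
import numpy as np

def cauchy_map_shrink(y, sigma, gamma):
    """MAP shrinkage of one noisy wavelet coefficient y, with noise N(x, sigma^2)
    and Cauchy prior p(x) proportional to gamma / (gamma^2 + x^2): minimize
    (y - x)^2 / (2 sigma^2) + log(gamma^2 + x^2). Stationarity gives the cubic
    (x - y)(gamma^2 + x^2) + 2 sigma^2 x = 0; pick the real root with the
    smallest objective."""
    coeffs = [1.0, -y, gamma**2 + 2 * sigma**2, -y * gamma**2]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    obj = (y - real) ** 2 / (2 * sigma**2) + np.log(gamma**2 + real**2)
    return real[np.argmin(obj)]

print(cauchy_map_shrink(0.3, sigma=1.0, gamma=0.5))  # small |y| is shrunk toward 0
```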

Journal ArticleDOI
TL;DR: It is shown that the event-triggered risk-sensitive maximum a posteriori probability estimates can be obtained based on a newly defined unnormalized information state, which has a linear recursive form.
Abstract: An event-triggered risk-sensitive state estimation problem for hidden Markov models is investigated in this work. The event-triggered scheme considered is fairly general, which covers most existing event-triggered conditions. By utilizing the reference probability measure approach, this estimation problem is reformulated as an equivalent one and solved. We show that the event-triggered risk-sensitive maximum a posteriori probability estimates can be obtained based on a newly defined unnormalized information state, which has a linear recursive form. Furthermore, the explicit solutions for two major classes of event-triggered conditions are derived if the measurement noise is Gaussian. A numerical comparison is provided to illustrate the effectiveness of the proposed results.

Journal ArticleDOI
TL;DR: This paper used the complex Gaussian distribution and Laplace distribution to describe the distribution of noise and targets, respectively, and transformed the superresolution problem into a convex optimization problem by maximum a posteriori estimation in the Bayesian framework to achieve azimuth superresolution of forward-looking radar imaging.
Abstract: Forward-looking radar plays an important role in many military and civilian fields. However, the problem of low azimuth resolution has seriously restricted its applications. Although many methods have been used to achieve azimuth superresolution, the traditional methods suffer from noise amplification or limited resolution under low signal-to-noise ratio (SNR) conditions. In this paper, we propose a Bayesian deconvolution method which relies on the linearized Bregman algorithm to achieve azimuth superresolution of forward-looking radar imaging. We first use the complex Gaussian distribution and the Laplace distribution to describe the distributions of the noise and targets, respectively, and transform the superresolution problem into a convex optimization problem by maximum a posteriori estimation in the Bayesian framework. Second, the linearized Bregman algorithm is used to solve the convex optimization problem. The proposed method introduces prior information about the noise and targets, and overcomes the ill-posedness of deconvolution. As a result, the azimuth resolution is remarkably enhanced. Moreover, the proposed method achieves high computational efficiency by linearizing the objective function, so it can take both time cost and resolution improvement into consideration. Finally, the superior performance is verified on simulated and experimental data.
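For reference, the linearized Bregman iteration that the deconvolution method builds on alternates residual accumulation with soft-thresholding. A generic sketch (not the paper's radar-specific implementation; the step-size condition is the standard one, stated as an assumption):

```python
import numpy as np

def linearized_bregman(A, b, lam=0.1, delta=1.0, iters=500):
    """Linearized Bregman for l1-regularized recovery from b = Ax: accumulate
    the back-projected residual in a dual variable v, then soft-threshold.
    Convergence typically requires delta < 2 / ||A||_2^2."""
    x = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (b - A @ x)                                   # dual update
        x = delta * np.sign(v) * np.maximum(np.abs(v) - lam, 0)  # shrink
    return x
```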

Journal ArticleDOI
TL;DR: A graphical model for probabilistic SE is described that captures both the uncertainties and the power grid physics by embedding physical laws, i.e., KCL and KVL.
Abstract: Due to a high penetration of renewable energy, power systems operational planning today needs to capture unprecedented uncertainties in a short period. Fast probabilistic state estimation (SE), which creates probabilistic load flow estimates, represents one such planning tool. This paper describes a graphical model for probabilistic SE that captures both the uncertainties and the power grid physics by embedding physical laws, i.e., KCL and KVL. With this model, the resulting maximum a posteriori (MAP) SE problem is formulated by measuring state variables and their interactions. To resolve the computational difficulty in calculating the marginal distribution for quantities of interest, a distributed message passing method is proposed to compute MAP estimates using increasingly available cyber resources, i.e., computational and communication intelligence. A modified message passing algorithm is then introduced to improve the convergence and optimality. Simulation results illustrate the probabilistic SE and demonstrate the improved performance over traditional deterministic approaches via: 1) a more accurate mean estimate; 2) confidence intervals that cover the true state; and 3) reduced computational time.

Posted Content
TL;DR: V-MPO is introduced, an on-policy adaptation of Maximum a Posteriori Policy Optimization that performs policy iteration based on a learned state-value function and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters.
Abstract: Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting. However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse. As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function. We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters. On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported. V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.

Journal ArticleDOI
TL;DR: Simulation results and complexity analysis show that the proposed RGSM-SCMA system delivers the same SE with significant savings in the number of transmit antennas, while maintaining a bit error rate close to that of SM-SCMA and incurring only a negligible increase in decoding complexity.
Abstract: Spatial modulation (SM) sparse code multiple access (SCMA) systems provide high spectral efficiency (SE) at the expense of using a high number of transmit antennas. To overcome this drawback, this paper proposes a novel SM-SCMA system operating in uplink transmission, referred to as rotational generalized SM-SCMA (RGSM-SCMA). For the proposed system, the following are introduced: first, the transmitter design and its formulation; second, maximum likelihood and maximum a posteriori probability decoders; and finally, a practical low-complexity message passing algorithm and its complexity analysis. Simulation results and complexity analysis show that the proposed RGSM-SCMA system delivers the same SE with significant savings in the number of transmit antennas, while maintaining a bit error rate close to that of SM-SCMA and incurring only a negligible increase in decoding complexity.

Proceedings ArticleDOI
20 May 2019
TL;DR: This paper uses a deep neural network to automatically learn features from the data itself, developing a data-driven detection approach that outperforms the conventional maximum eigenvalue detection method; the cost function for offline training in the spectrum sensing model is derived so as to guarantee the optimality of the designed test statistic.
Abstract: The existing spectrum sensing methods mostly make decisions using model-driven test statistics, such as energy and eigenvalues. A weakness of these model-driven methods is the difficulty of accurately modeling practical environments. In contrast to the model-driven approach, in this paper, we use a deep neural network to automatically learn features from the data itself, and develop a data-driven detection approach. Inspired by the powerful capability of the convolutional neural network (CNN) in extracting features of matrix-shaped data, we use the sample covariance matrix as the input of the CNN, proposing a novel covariance matrix-aware CNN-based detection scheme, which consists of offline training and online detection. Different from the existing deep learning-based detection methods, which replace the whole detection system by an end-to-end neural network, in this work we use the CNN for offline test statistic design and develop a practical threshold-based online detection mechanism. Specifically, according to the maximum a posteriori probability (MAP) criterion, we derive the cost function for offline training in the spectrum sensing model, which guarantees the optimality of the designed test statistic. Simulation results have shown that whether the PU signals are independent or correlated, the detection performance of the proposed method is close to the optimal bound of the estimator-correlator detector. Particularly, when the PU signals are correlated with a correlation coefficient of 0.7, the probability of detection of the proposed method is nearly 7.5 times that of the conventional maximum eigenvalue detection method at SNR = −14 dB.

Journal ArticleDOI
TL;DR: Maximum a posteriori (MAP) estimation is presented as the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave.
Abstract: Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and...

Journal ArticleDOI
TL;DR: This paper proposes a vision-based fire and smoke segmentation system which uses spatial, temporal and motion information to extract the desired regions from the video frames and achieves a frame-wise fire detection rate of 95.39%.
Abstract: This paper proposes a vision-based fire and smoke segmentation system which uses spatial, temporal and motion information to extract the desired regions from the video frames. The fusion of information is done using multiple features such as optical flow, divergence and intensity values. These features extracted from the images are used to segment the pixels into different classes in an unsupervised way. A comparative analysis is done by using multiple clustering algorithms for segmentation. Here the Markov Random Field performs more accurately than other segmentation algorithms since it characterizes the spatial interactions of pixels using a finite number of parameters. It builds a probabilistic image model that selects the most likely labeling using the maximum a posteriori (MAP) estimation. This unsupervised approach is tested on various images and achieves a frame-wise fire detection rate of 95.39%. Hence this method can be used for early detection of fire in real-time and it can be incorporated into an indoor or outdoor surveillance system.
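As a concrete instance of MAP labeling in an MRF, here is a minimal iterated-conditional-modes sweep over a Potts model; ICM is a common approximate MAP solver, used here for illustration since the abstract does not name the paper's inference routine.

```python
import numpy as np

def icm(unary, beta=1.0, sweeps=5):
    """Approximate MAP labeling: unary[h, w, k] is the per-pixel negative
    log-likelihood of class k (e.g. fire/smoke/background from the feature
    model); the Potts pairwise term charges beta per 4-neighbor disagreement,
    encoding spatial smoothness."""
    labels = unary.argmin(axis=2)
    H, W, K = unary.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        cost += beta * (np.arange(K) != labels[ni, nj])
                labels[i, j] = int(np.argmin(cost))
    return labels
```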

Journal ArticleDOI
TL;DR: A superpixel-based normal guided scale invariant deep convolutional field is proposed that encourages neighboring superpixels with similar appearance to lie on the same 3D plane of the scene; the proposed network can be efficiently trained in an end-to-end manner.
Abstract: Estimating scene depth from a single image can be widely applied to understand 3D environments due to the easy availability of images captured by consumer-level cameras. Previous works exploit conditional random fields (CRFs) to estimate image depth, where neighboring pixels (superpixels) with similar appearances are constrained to share the same depth. However, the depth may vary significantly on slanted surfaces, thus leading to severe estimation errors. In order to eliminate those errors, we propose a superpixel-based normal guided scale invariant deep convolutional field by encouraging the neighboring superpixels with similar appearance to lie on the same 3D plane of the scene. In doing so, a depth-normal multitask CNN is introduced to produce the superpixel-wise depth and surface normal predictions simultaneously. To correct the errors of the roughly estimated superpixel-wise depth, we develop a normal guided scale invariant CRF (NGSI-CRF). NGSI-CRF consists of a scale invariant unary potential that is able to measure the relative depth between superpixels as well as the absolute depth of superpixels, and a normal guided pairwise potential that constrains spatial relationships between superpixels in accordance with the 3D layout of the scene. In other words, the normal guided pairwise potential is designed to smooth the depth prediction without deteriorating the 3D structure of the depth prediction. The superpixel-wise depth maps estimated by NGSI-CRF will be fed into a pixel-wise refinement module to produce a smooth fine-grained depth prediction. Furthermore, we derive a closed-form solution for the maximum a posteriori (MAP) inference of NGSI-CRF. Thus, our proposed network can be efficiently trained in an end-to-end manner. We conduct our experiments on various datasets, such as NYU-D2, KITTI, and Make3D. As demonstrated in the experimental results, our method achieves superior performance in both indoor and outdoor scenes.