
Showing papers on "Maximum a posteriori estimation published in 2022"


Journal ArticleDOI
TL;DR: In this paper, maximum a posteriori (MAP) estimation for Bayesian models with PnP priors is studied, and a convergence proof for MAP computation is presented under realistic assumptions on the denoiser used.
Abstract: Bayesian methods to solve imaging inverse problems usually combine an explicit data likelihood function with a prior distribution that explicitly models expected properties of the solution. Many kinds of priors have been explored in the literature, from simple ones expressing local properties to more involved ones exploiting image redundancy at a non-local scale. In a departure from explicit modelling, several recent works have proposed and studied the use of implicit priors defined by an image denoising algorithm. This approach, commonly known as Plug & Play (PnP) regularization, can deliver remarkably accurate results, particularly when combined with state-of-the-art denoisers based on convolutional neural networks. However, the theoretical analysis of PnP Bayesian models and algorithms is difficult and works on the topic often rely on unrealistic assumptions on the properties of the image denoiser. This paper studies maximum a posteriori (MAP) estimation for Bayesian models with PnP priors. We first consider questions related to existence, stability and well-posedness and then present a convergence proof for MAP computation by PnP stochastic gradient descent (PnP-SGD) under realistic assumptions on the denoiser used. We report a range of imaging experiments demonstrating PnP-SGD as well as comparisons with other PnP schemes.
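
As a rough illustration of the kind of iteration analysed here, the sketch below shows a generic PnP stochastic-gradient MAP update in which the implicit prior gradient is approximated from the denoiser residual. The step size, the residual scaling eps, and the callables grad_datafit and denoiser are placeholder assumptions of mine, not the authors' implementation.

```python
import numpy as np

def pnp_sgd(y, grad_datafit, denoiser, x0, step=1e-3, eps=1e-2, n_iter=500):
    """Illustrative PnP-SGD loop for MAP estimation (sketch, not the paper's code).

    grad_datafit(x, y): (possibly stochastic) gradient of the negative log-likelihood.
    denoiser(x):        image denoiser defining the implicit prior; its residual
                        (denoiser(x) - x) / eps stands in for the prior score.
    """
    x = x0.copy()
    for _ in range(n_iter):
        prior_score = (denoiser(x) - x) / eps            # implicit-prior gradient surrogate
        x = x - step * (grad_datafit(x, y) - prior_score)
    return x
```

In practice the data-fidelity gradient would be evaluated on a random subset of the measurements, which is what makes the scheme stochastic.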

11 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented statistical inference of the Gompertz distribution based on unified hybrid censored data under the constant-stress partially accelerated life test (CSPALT) model and applied the stochastic expectation-maximization algorithm to estimate the CSPALT parameters and to reduce computational complexity.
Abstract: Accelerated life testing is a key methodology for rapidly evaluating product reliability. This paper presents statistical inference of the Gompertz distribution based on unified hybrid censored data under the constant-stress partially accelerated life test (CSPALT) model. We apply the stochastic expectation-maximization algorithm to estimate the CSPALT parameters and to reduce computational complexity. It is shown that the maximum likelihood estimates exist uniquely. Asymptotic confidence intervals and confidence intervals using bootstrap-p and bootstrap-t methods are constructed. Moreover, the maximum product of spacing (MPS) and maximum a posteriori (MAP) estimates of the model parameters and the acceleration factor are discussed. The performances of the various estimators of the CSPALT parameters are compared through a simulation study. In summary, the MAP estimates outperform the MLEs (and MPS estimates), yielding the smallest MSE values.

8 citations


Journal ArticleDOI
TL;DR: XGBoost as discussed by the authors was first trained on 508 patient interdose AUCs estimated using MAP-BE, and then on 500-10,000 rich interdose PK profiles simulated using previously published population PK parameters.
Abstract: Everolimus is an immunosuppressant with a small therapeutic index and large between‐patient variability. The area under the concentration versus time curve (AUC) is the best marker of exposure but measuring it requires collecting many blood samples. The objective of this study was to train machine learning (ML) algorithms using pharmacokinetic (PK) profiles from kidney transplant recipients, simulated profiles, or both types, and compare their performance for everolimus AUC0‐12h estimation using a limited number of predictors, as compared to an independent set of full PK profiles from patients, as well as to the corresponding maximum a posteriori Bayesian estimates (MAP‐BE). XGBoost was first trained on 508 patient interdose AUCs estimated using MAP‐BE, and then on 500–10,000 rich interdose PK profiles simulated using previously published population PK parameters. The predictors used were: predose, ~1 h, and ~2 h whole blood concentrations, differences between these concentrations, relative deviations from theoretical sampling times, morning dose, patient age, and time elapsed since transplantation. The best results were obtained with XGBoost trained on 5016 simulated profiles. AUC estimation achieved in an external dataset of 114 full‐PK profiles was excellent (root mean squared error [RMSE] = 10.8 μg*h/L) and slightly better than MAP‐BE (RMSE = 11.9 μg*h/L). Using more profiles (n = 10,035) did not improve the ML algorithm performance. The contribution of mixing patient and simulated profiles was significant only when they were in balanced numbers, with ~500 for each (RMSE = 12.5 μg*h/L), compared with patient data alone (RMSE = 18.0 μg*h/L).
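
A minimal sketch of the kind of limited-sampling regression described above, using the public xgboost API; the feature layout, hyperparameters, and synthetic data below are illustrative assumptions of mine, not the study's actual design or dataset.

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
# Stand-in for simulated profiles: e.g. [C0, C1h, C2h, C1h-C0, C2h-C1h,
# sampling-time deviations, morning dose, age, months post-transplant]
X = rng.normal(size=(5000, 9))
auc = X @ rng.normal(size=9) + rng.normal(scale=0.1, size=5000)  # toy AUC0-12h target

model = XGBRegressor(n_estimators=500, max_depth=4, learning_rate=0.05)
model.fit(X, auc)
auc_hat = model.predict(X[:5])   # limited-sampling AUC estimates for new profiles
```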

8 citations


Journal ArticleDOI
TL;DR: An a posteriori estimator of the energy error between the solutions of the exact and the defeatured geometries in [Formula: see text] is derived that is simple, reliable, and efficient up to oscillations.
Abstract: Defeaturing consists in simplifying geometrical models by removing the geometrical features that are considered not relevant for a given simulation. Feature removal and simplification of computer-aided design models enable faster simulations for engineering analysis problems and simplify the meshing problem, which is otherwise often unfeasible. The effects of defeaturing on the analysis are typically neglected, and to date there are very few strategies to quantitatively evaluate such an impact. A good understanding of the effects of this process is an important step toward the automatic integration of design and analysis. We formalize the process of defeaturing by studying its effect on the solution of the Poisson equation defined on the geometrical model of interest containing a single feature, with Neumann boundary conditions on the feature itself. We derive an a posteriori estimator of the energy error between the solutions of the exact and the defeatured geometries in [Formula: see text], [Formula: see text], that is simple, reliable, and efficient up to oscillations. The dependence of the estimator upon the size of the features is explicit.

8 citations


Journal ArticleDOI
TL;DR: In this article, a deep learning-based linear minimum mean-squared error (LMMSE) estimator for the basis expansion model (BEM) coefficients is proposed.
Abstract: This letter investigates joint estimation of the channel, phase noise (PN), and in-phase (I) and quadrature-phase (Q) imbalance in multicarrier MIMO full-duplex wireless systems. We approximate the time-varying channels with a basis expansion model (BEM) to reduce the number of unknowns. We then propose a pilot-aided linear minimum mean-squared error (LMMSE) estimator for the BEM coefficients. To improve its performance, we develop a deep learning (DL) network. The DL network is trained offline by using simulation data and then deployed for online estimation. The numerical results illustrate that the proposed DL-LMMSE estimator outperforms conventional estimators, such as the maximum a posteriori (MAP) estimator, in terms of mean-squared error (MSE).
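
For context, the classical pilot-aided LMMSE estimate that the DL network refines has the standard closed form sketched below; the symbols (observation matrix A, coefficient covariance R_c, noise variance) are my own generic notation, not the letter's.

```python
import numpy as np

def lmmse_estimate(y, A, R_c, noise_var):
    """Classical LMMSE estimate of BEM coefficients c from pilots y = A c + n (sketch)."""
    S = A @ R_c @ A.conj().T + noise_var * np.eye(A.shape[0])   # covariance of the pilot observations
    return R_c @ A.conj().T @ np.linalg.solve(S, y)
```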

8 citations


Journal ArticleDOI
TL;DR: In this paper, a framework that enables efficient sampling from learned probability distributions for MRI reconstruction is introduced.
Abstract: We introduce a framework that enables efficient sampling from learned probability distributions for MRI reconstruction.

7 citations


Journal ArticleDOI
TL;DR: In this paper, the authors develop theory, methods, and provably convergent algorithms for performing Bayesian inference with PnP priors, which are demonstrated on several canonical problems such as image deblurring, inpainting, and denoising, where they are used for point estimation and uncertainty quantification.
Abstract: Since the seminal work of Venkatakrishnan et al. in 2013, Plug & Play (PnP) methods have become ubiquitous in Bayesian imaging. These methods derive Minimum Mean Square Error (MMSE) or Maximum A Posteriori (MAP) estimators for inverse problems in imaging by combining an explicit likelihood function with a prior that is implicitly defined by an image denoising algorithm. The PnP algorithms proposed in the literature mainly differ in the iterative schemes they use for optimisation or for sampling. In the case of optimisation schemes, some recent works guarantee the convergence to a fixed point, albeit not necessarily a MAP estimate. In the case of sampling schemes, to the best of our knowledge, there is no known proof of convergence. There also remain important open questions regarding whether the underlying Bayesian models and estimators are well defined, well-posed, and have the basic regularity properties required to support these numerical schemes. To address these limitations, this paper develops theory, methods, and provably convergent algorithms for performing Bayesian inference with PnP priors. We introduce two algorithms: 1) PnP-ULA (Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference; and 2) PnP-SGD (Stochastic Gradient Descent) for MAP inference. Using recent results on the quantitative convergence of Markov chains, we establish detailed convergence guarantees for these two algorithms under realistic assumptions on the denoising operators used, with special attention to denoisers based on deep neural networks. We also show that these algorithms approximately target a decision-theoretically optimal Bayesian model that is well-posed. The proposed algorithms are demonstrated on several canonical problems such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualisation and quantification.
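
A minimal sketch of an unadjusted Langevin step with a plug-and-play prior, in the spirit of PnP-ULA; the step size delta, the residual scaling eps, and the two callables are placeholder assumptions of mine, and the MMSE estimate is simply the sample mean.

```python
import numpy as np

def pnp_ula(y, grad_datafit, denoiser, x0, delta=1e-4, eps=1e-2, n_samples=1000, seed=0):
    """Illustrative PnP-ULA sampler (sketch); returns samples from the approximate posterior."""
    rng = np.random.default_rng(seed)
    x, samples = x0.copy(), []
    for _ in range(n_samples):
        score = -grad_datafit(x, y) + (denoiser(x) - x) / eps   # approximate posterior score
        x = x + delta * score + np.sqrt(2.0 * delta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return samples

# MMSE estimate: np.mean(samples, axis=0); pixel-wise sample variances give uncertainty maps.
```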

7 citations


Journal ArticleDOI
TL;DR: In this article, a two-stage load profile super-resolution (LPSR) framework, ProfileSR-GAN, is proposed to upsample low-resolution load profiles (LRLPs) to high-resolution load profiles (HRLPs).
Abstract: This paper presents a novel two-stage load profile super-resolution (LPSR) framework, ProfileSR-GAN, to upsample the low-resolution load profiles (LRLPs) to high-resolution load profiles (HRLPs). The LPSR problem is formulated as a Maximum-a-Posteriori problem. In the first stage, a GAN-based model is adopted to restore high-frequency components from the LRLPs. To reflect the load-weather dependency, aside from the LRLPs, the weather data is added as an input to the GAN-based model. In the second stage, a polishing network guided by outline loss and switching loss is introduced to remove the unrealistic power fluctuations in the generated HRLPs and improve the point-to-point matching accuracy. To evaluate the realism of the generated HRLPs, a new set of load shape evaluation metrics is developed. Simulation results show that: i) ProfileSR-GAN outperforms the state-of-the-art methods in all shape-based metrics and can achieve comparable performance with those methods in point-to-point matching accuracy, and ii) after applying ProfileSR-GAN to convert LRLPs to HRLPs, the performance of a downstream task, non-intrusive load monitoring, can be significantly improved. This demonstrates that ProfileSR-GAN is an effective new mechanism for restoring high-frequency components in downsampled time-series data sets and improves the performance of downstream tasks that require HR load profiles as inputs.

7 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed the theoretical foundation for time-domain, phase-based, joint maximum likelihood estimation of the unknown carrier frequency and the initial carrier phase, with simultaneous maximum a posteriori probability (MAP) estimation of time-varying carrier phase noise.
Abstract: We address here the issue of jointly estimating the angle parameters of a single sinusoid with Wiener carrier phase noise and observed in additive, white, Gaussian noise (AWGN). We develop the theoretical foundation for time-domain, phase-based, joint maximum likelihood (ML) estimation of the unknown carrier frequency and the initial carrier phase, with simultaneous maximum a posteriori probability (MAP) estimation of the time-varying carrier phase noise. The derivation is based on the amplitude and phase-form of the noisy received signal model together with the use of the best, linearized, additive observation phase noise model due to AWGN. Our newly derived estimators are closed-form expressions, involving both the phase and the magnitude of all the received signal samples. More importantly, they all have a low-complexity, sample-by-sample iterative processing structure, which can be implemented iteratively in real-time. As a basis for comparison, the Cramér-Rao lower bound (CRLB) for the ML estimators and the Bayesian CRLB (BCRLB) for the MAP estimator are derived in the presence of carrier phase noise, and the results simply depend on the signal-to-noise ratio (SNR), the observation length and the phase noise variance. It is theoretically shown that the estimates obtained are unbiased, and the mean-square error (MSE) of the estimators attains the CRLB/BCRLB at high SNR. The MSE performance as a function of the SNR, the observation length and the phase noise variance is verified using Monte Carlo simulation, which shows a remarkable improvement in estimation accuracy under large phase noise.
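
In generic notation (my own, not necessarily the paper's), the linearized phase observation model and the joint ML/MAP criterion described above take the form

\theta[n] = \phi_0 + 2\pi f_0 n T_s + \psi[n] + v[n], \qquad \psi[n] = \psi[n-1] + w[n], \; w[n] \sim \mathcal{N}(0, \sigma_w^2),

(\hat f_0, \hat\phi_0, \hat\psi) = \arg\max_{f_0, \phi_0, \psi} \left[ \log p(\theta \mid f_0, \phi_0, \psi) + \log p(\psi) \right],

where v[n] is the additive observation phase noise induced by the AWGN after linearization; the frequency and initial phase enter with a flat prior (ML terms), while the Wiener phase-noise trajectory ψ enters through its random-walk prior (MAP term).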

7 citations


Journal ArticleDOI
TL;DR: This work develops a new, general, formal and computationally efficient Bayesian Poisson denoising algorithm, based on the Nonlocal Means framework and replacing the Euclidean distance by stochastic distances, which are more appropriate for the denoising problem.

6 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a Bayesian approach for estimating the rank of the block-term decomposition (BTD) tensor model, which is based on sparse Bayesian learning (SBL).
Abstract: The so-called block-term decomposition (BTD) tensor model, especially in its rank-(Lr, Lr, 1) version, has recently been receiving increasing attention due to its enhanced ability to represent systems and signals that are composed of blocks of rank higher than one, a scenario encountered in numerous and diverse applications. Uniqueness conditions and fitting methods have thus been thoroughly studied. Nevertheless, the challenging problem of estimating the BTD model structure, namely the number of block terms, R, and their individual ranks, Lr, has only recently started to attract significant attention, mainly through regularization-based approaches which entail the need to tune the regularization parameter(s). In this work, we build on ideas of sparse Bayesian learning (SBL) and put forward a fully automated Bayesian approach. Through a suitably crafted multi-level hierarchical probabilistic model, which gives rise to heavy-tailed prior distributions for the BTD factors, structured sparsity is jointly imposed. Ranks are then estimated from the numbers of blocks (R) and columns (Lr) of non-negligible energy. Approximate posterior inference is implemented within the variational inference framework. The resulting iterative algorithm completely avoids hyperparameter tuning, which is a significant defect of regularization-based methods. Alternative probabilistic models are also explored and the connections with their regularization-based counterparts are brought to light with the aid of the associated maximum a posteriori (MAP) estimators. We report simulation results with both synthetic and real-world data, which demonstrate the merits of the proposed method in terms of both rank estimation and model fitting as compared to state-of-the-art relevant methods.
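
The rank read-out step described above ("numbers of blocks and columns of non-negligible energy") can be illustrated with a small sketch; the factor layout and the energy threshold are assumptions of mine, not the paper's algorithm.

```python
import numpy as np

def estimate_btd_ranks(block_factors, tol=1e-6):
    """Count surviving block terms R and per-block column ranks L_r by column energy (sketch).

    block_factors: list of factor matrices, one per candidate block term.
    """
    L = [int(np.sum(np.linalg.norm(F, axis=0) ** 2 > tol)) for F in block_factors]
    surviving = [l for l in L if l > 0]
    return len(surviving), surviving   # estimated R and the list of L_r
```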

Journal ArticleDOI
TL;DR: In this article, a super-resolution imaging method that relies on the Weibull distribution was proposed to realize high azimuth resolution for sea-surface targets; the proposed method introduces the generalized Gaussian distribution and the Weibull distribution to represent the statistical distribution functions of the target prior information and the sea clutter, respectively.
Abstract: To realize high azimuth resolution for sea-surface targets, this article proposes a superresolution imaging method that relies on the Weibull distribution. The proposed method introduces the generalized Gaussian distribution and the Weibull distribution to represent the statistical distribution functions of the target prior information and the sea clutter, respectively. The corresponding objective function is derived under the maximum a posteriori (MAP) criterion. To address the nonlinearity of the objective function, this article adopts the Newton–Raphson iterative method to solve it. Simulations and experimental data assessments indicate that the proposed method has superior superresolution imaging performance compared with other traditional superresolution methods for sea-surface target imaging.

Journal ArticleDOI
TL;DR: In this article, a boosted difference of convex functions algorithm (BDCA) was proposed to solve the problem of image deblurring under Rician noise in medical imaging, based on the maximum a posteriori (MAP) estimation approach.
Abstract: Image deblurring under Rician noise has attracted considerable attention in imaging science. Frequently appearing in medical imaging, Rician noise leads to an interesting nonconvex optimization problem, termed as the MAP-Rician model, which is based on the Maximum a Posteriori (MAP) estimation approach. As the MAP-Rician model is deeply rooted in Bayesian analysis, we want to understand its mathematical analysis carefully. Moreover, one needs to properly select a suitable algorithm for tackling this nonconvex problem to get the best performance. This paper investigates both issues. Indeed, we first present a theoretical result about the existence of a minimizer for the MAP-Rician model under mild conditions. Next, we aim to adopt an efficient boosted difference of convex functions algorithm (BDCA) to handle this challenging problem. Basically, BDCA combines the classical difference of convex functions algorithm (DCA) with a backtracking line search, which utilizes the point generated by DCA to define a search direction. In particular, we apply a smoothing scheme to handle the nonsmooth total variation (TV) regularization term in the discrete MAP-Rician model. Theoretically, using the Kurdyka–Łojasiewicz (KL) property, the convergence of the numerical algorithm can be guaranteed. We also prove that the sequence generated by the proposed algorithm converges to a stationary point with the objective function values decreasing monotonically. Numerical simulations are then reported to clearly illustrate that our BDCA approach outperforms some state-of-the-art methods for both medical and natural images in terms of image recovery capability and CPU-time cost.
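
As a generic template (not the authors' smoothed MAP-Rician solver), a boosted DCA iteration performs the usual DCA step and then a backtracking line search along the direction it produced; the callables dca_step and objective and the constants below are placeholders of mine.

```python
import numpy as np

def bdca(x0, dca_step, objective, alpha=0.05, beta=0.5, n_iter=100, t_min=1e-8):
    """Generic boosted DCA template (sketch): DCA update, then backtracking along d = y - x."""
    x = x0
    for _ in range(n_iter):
        y = dca_step(x)                        # solve the convex DCA subproblem
        d, f_y, t = y - x, objective(y), 1.0
        while objective(y + t * d) > f_y - alpha * (t ** 2) * float(np.sum(d ** 2)):
            t *= beta                          # shrink the step until sufficient decrease
            if t < t_min:
                t = 0.0
                break
        x = y + t * d                          # boosted point (t = 0 recovers plain DCA)
    return x
```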

Journal ArticleDOI
Yinxuan Li, Hongche Yin, Jian Yao, Hanyun Wang, Li Li 
TL;DR: Li et al. as discussed by the authors proposed a unified probabilistic framework that formulates global color correction as a maximum a posteriori (MAP) probability estimation, which is flexible enough to allow for any assumption on the residual distribution.
Abstract: The task of color consistency correction for multiple images mainly arises from applications like orthoimage producing, panoramic image stitching and 3D reconstruction. In these applications, images usually have been geometrically aligned, so correspondences can be easily extracted and used to solve color correction models. Almost all previous methods assume that the color residuals of correspondences follow a Gaussian distribution and solve color models based on least squares. However, correspondences often contain unreliable ones due to altered areas and misalignments, which results in unusually large color residuals, namely, outliers. Imposing color consistency constraints on unreliable correspondences significantly affects the performance of color correction since the Gaussian is highly sensitive to outliers. In this paper, to solve this problem theoretically, we first propose a unified probabilistic framework that formulates global color correction as a maximum a posteriori (MAP) probability estimation. It is flexible enough to allow for any assumption on the residual distribution, and most color correction methods can be explained in this unified framework. Then, to be robust against outliers, we use a t-distribution with heavier tails than the Gaussian to fit the color residuals. It is more robust because higher probabilities can be assigned to outliers. We show that the MAP formulation based on the t-distribution actually leads to weighted least squares, which downweights outliers adaptively. Besides, our framework requires no user-defined robustness parameter, because all parameters of the color models and the t-distribution are optimized jointly. In addition, to decrease the huge computational cost of large-scale datasets, we extend the proposed framework to a parallel version which can achieve efficiency and global optimality at the same time. In the experiments, we compare our approach with the state-of-the-art approaches of Shen et al., Xia et al., etc. on several challenging datasets with outliers. The results demonstrate that our approach achieves the best robustness (average color consistency scores CD=5.4, DeltaE2000=5.7 and PSNR=24.0) and the best efficiency (given 100 images, non-parallel/parallel runs more than 5/50 times faster than others). The implementation is available at https://github.com/yinxuanLi/ColorConsistencyCorrectionForMultipleImages.
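
The observation that a t-distributed residual model turns MAP estimation into adaptively weighted least squares can be illustrated by a small IRLS sketch; the weight formula (for an assumed ν degrees of freedom) and the crude scale estimate are my own simplifications, not the paper's solver.

```python
import numpy as np

def t_irls(A, b, nu=4.0, n_iter=20):
    """Robust linear fit A x ≈ b via IRLS with Student-t weights (illustrative sketch)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        r = b - A @ x
        s2 = np.median(r ** 2) + 1e-12           # crude scale of the residuals
        w = (nu + 1.0) / (nu + r ** 2 / s2)      # heavy tails -> outliers get small weights
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)  # weighted normal equations
    return x
```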

Proceedings ArticleDOI
23 May 2022
TL;DR: In this article, a neural network is trained to map a noisy spectrogram to the Wiener filter and its associated variance, which quantifies uncertainty, based on the maximum a posteriori (MAP) inference of spectral coefficients.
Abstract: Speech enhancement in the time-frequency domain is often performed by estimating a multiplicative mask to extract clean speech. However, most neural network-based methods perform point estimation, i.e., their output consists of a single mask. In this paper, we study the benefits of modeling uncertainty in neural network-based speech enhancement. For this, our neural network is trained to map a noisy spectrogram to the Wiener filter and its associated variance, which quantifies uncertainty, based on the maximum a posteriori (MAP) inference of spectral coefficients. By estimating the distribution instead of the point estimate, one can model the uncertainty associated with each estimate. We further propose to use the estimated Wiener filter and its uncertainty to build an approximate MAP (A-MAP) estimator of spectral magnitudes, which in turn is combined with the MAP inference of spectral coefficients to form a hybrid loss function to jointly reinforce the estimation. Experimental results on different datasets show that the proposed method can not only capture the uncertainty associated with the estimated filters, but also yield a higher enhancement performance over comparable models that do not take uncertainty into account.

Journal ArticleDOI
Shamik Dey
TL;DR: The Patient-Reported Outcome Measurement Information System (PROMIS) as mentioned in this paper was developed to reliably measure health-related quality of life using the patient's voice, with Item Response Theory methods employed in its development, validation and implementation.
Abstract: Background: The Patient-Reported Outcome Measurement Information System® (PROMIS®) was developed to reliably measure health-related quality of life using the patient's voice. To achieve these aims, PROMIS utilized Item Response Theory methods in its development, validation and implementation. PROMIS measures are typically scored using a specific method to calculate scores, called Expected A Posteriori estimation. Body: Expected A Posteriori scoring methods are flexible, produce accurate scores and can be efficiently calculated by statistical software. This work seeks to make Expected A Posteriori scoring methods transparent and accessible to a larger audience through description, graphical demonstration and examples. Further applications and practical considerations of Expected A Posteriori scoring are presented and discussed. All materials used in this paper are made available through the R Markdown reproducibility framework and are intended to be reviewed and reused. Commented statistical code for the calculation of Expected A Posteriori scores is included. Conclusion: This work seeks to provide the reader with a summary and visualization of the operation of Expected A Posteriori scoring, as implemented in PROMIS. As PROMIS is increasingly adopted and implemented, this work will provide a basis for making psychometric methods more accessible to the PROMIS user base.
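
The paper's own materials are provided as commented R code; purely as an illustration of the quadrature idea behind Expected A Posteriori scoring, here is a small sketch in Python with made-up 2PL item parameters and a standard-normal prior (not the PROMIS scoring tables).

```python
import numpy as np

def eap_score(responses, a, b, grid=None):
    """EAP estimate of theta for dichotomous 2PL items with a N(0, 1) prior (sketch)."""
    grid = np.linspace(-4, 4, 81) if grid is None else grid
    prior = np.exp(-0.5 * grid ** 2)
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (grid[None, :] - b[:, None])))   # items x grid
    resp = np.asarray(responses)[:, None]
    lik = np.prod(np.where(resp == 1, p, 1.0 - p), axis=0)
    post = prior * lik
    post /= post.sum()
    theta = float(np.sum(grid * post))                    # posterior mean = EAP score
    se = float(np.sqrt(np.sum((grid - theta) ** 2 * post)))
    return theta, se

# Toy usage with three hypothetical items and the response pattern [1, 0, 1]:
theta, se = eap_score([1, 0, 1], a=np.array([1.2, 0.8, 1.5]), b=np.array([-0.5, 0.0, 0.7]))
```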

Journal ArticleDOI
TL;DR: In this paper, a physics-informed machine learning approach for large-scale data assimilation and parameter estimation was developed and applied for estimating transmissivity and hydraulic head in the two-dimensional steady-state subsurface flow model of the Hanford Site given synthetic measurements of said variables.
Abstract: We develop a physics-informed machine learning approach for large-scale data assimilation and parameter estimation and apply it for estimating transmissivity and hydraulic head in the two-dimensional steady-state subsurface flow model of the Hanford Site given synthetic measurements of said variables. In our approach, we extend the physics-informed conditional Karhunen-Loève expansion (PICKLE) method to modeling subsurface flow with unknown flux (Neumann) and varying head (time-dependent Dirichlet) boundary conditions. We demonstrate that the PICKLE method is comparable in accuracy with the standard maximum a posteriori (MAP) method, but is significantly faster than MAP for large-scale problems. Both methods use a mesh to discretize the computational domain. In MAP, the parameters and states are discretized on the mesh; therefore, the size of the MAP parameter estimation problem directly depends on the mesh size. In PICKLE, the mesh is used to evaluate the residuals of the governing equation, while the parameters and states are approximated by the truncated conditional Karhunen-Loève expansions with the number of parameters controlled by the smoothness of the parameter and state fields, and not by the mesh size. For a considered example, we demonstrate that the computational cost of PICKLE increases nearly linearly (as N^1.15) with the number of grid nodes N, while that of MAP increases much faster (as N^3.28). We also show that once trained for one set of Dirichlet boundary conditions (i.e., one river stage), the PICKLE method provides accurate estimates of the hydraulic head for any value of the Dirichlet boundary conditions (i.e., for any river stage).

Journal ArticleDOI
01 Feb 2022
TL;DR: In this paper , the authors use amortized neural posterior estimation to produce a model that approximates the high-dimensional posterior distribution for spectroscopic observations of selected spectral ranges sampled at arbitrary rotation phases.
Abstract: Aims. The non-uniform surface temperature distribution of rotating active stars is routinely mapped with the Doppler imaging technique. Inhomogeneities in the surface produce features in high-resolution spectroscopic observations that shift in wavelength because of the Doppler effect, depending on their position on the visible hemisphere. The inversion problem has been systematically solved using maximum a posteriori regularized methods assuming smoothness or maximum entropy. Our aim in this work is to solve the full Bayesian inference problem by providing access to the posterior distribution of the surface temperature in the star compatible with the observations. Methods. We use amortized neural posterior estimation to produce a model that approximates the high-dimensional posterior distribution for spectroscopic observations of selected spectral ranges sampled at arbitrary rotation phases. The posterior distribution is approximated with conditional normalizing flows, which are flexible, tractable, and easy-to-sample approximations to arbitrary distributions. When conditioned on the spectroscopic observations, these normalizing flows provide a very efficient way of obtaining samples from the posterior distribution. The conditioning on observations is achieved through the use of Transformer encoders, which can deal with arbitrary wavelength sampling and rotation phases. Results. Our model can produce thousands of posterior samples per second, each one accompanied by an estimation of the log-probability. Our exhaustive validation of the model for very high signal-to-noise observations shows that it correctly approximates the posterior, albeit with some overestimation of the broadening. We apply the model to the moderately fast rotator II Peg, producing the first Bayesian map of its temperature inhomogeneities. We conclude that conditional normalizing flows are a very promising tool for carrying out approximate Bayesian inference in more complex problems in stellar physics, such as constraining the magnetic properties using polarimetry.

Journal ArticleDOI
TL;DR: In this article, the authors showed that the maximum a posteriori estimators are well defined for diagonal Gaussian priors µ on ℓ^p under common assumptions on the potential Φ.
Abstract: We prove that maximum a posteriori estimators are well-defined for diagonal Gaussian priors µ on ℓ^p under common assumptions on the potential Φ. Further, we show connections to the Onsager–Machlup functional and provide a corrected and strongly simplified proof in the Hilbert space case p = 2, previously established by Dashti et al (2013 Inverse Problems 29 095017); Kretschmann (2019 PhD Thesis). These corrections do not generalize to the setting 1 ⩽ p < ∞, which requires a novel convexification result for the difference between the Cameron–Martin norm and the p-norm.
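
For the Hilbert-space case referenced above, the connection is that MAP estimates minimize the Onsager–Machlup functional; in generic notation (mine, following Dashti et al.),

I(u) = \Phi(u) + \tfrac{1}{2} \, \| u \|_{E}^{2}, \qquad u_{\mathrm{MAP}} \in \arg\min_{u \in E} I(u),

where E is the Cameron–Martin space of the Gaussian prior µ and Φ is the potential (negative log-likelihood). The paper's contribution concerns the analogous statement for diagonal Gaussian priors on ℓ^p with 1 ⩽ p < ∞.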

Journal ArticleDOI
TL;DR: In this article, a Bayesian Poisson denoising algorithm based on the Nonlocal Means framework and replacing the Euclidean distance by stochastic distances was proposed for low-dose CT images.

Posted ContentDOI
20 May 2022-bioRxiv
TL;DR: This work develops a novel method for approximate maximum a posteriori (MAP) reconstruction by combining a generalized linear model of light responses in retinal neurons and their dependence on spike history and spikes of neighboring cells, with an image prior implicitly embedded in a deep convolutional neural network trained for image denoising.
Abstract: A fraction of the visual information arriving at the retina is transmitted to the brain by signals in the optic nerve, and the brain must rely solely on these signals to make inferences about the visual world. Previous work has probed the visual information contained in retinal signals by reconstructing images from retinal activity using linear regression and nonlinear regression with neural networks. Maximum a posteriori (MAP) reconstruction offers a more general and principled approach. We develop a novel method for approximate MAP reconstruction by combining a generalized linear model of light responses in retinal neurons and their dependence on spike history and spikes of neighboring cells, with an image prior implicitly embedded in a deep convolutional neural network trained for image denoising. We use this method to reconstruct natural images from ex vivo simultaneously-recorded spikes of hundreds of ganglion cells uniformly sampling a region of the retina. The method produces reconstructions that match or exceed the state-of-the-art in perceptual similarity and exhibit additional fine detail, while using substantially fewer model parameters than previous approaches. The use of more rudimentary encoding models (a linear-nonlinear-Poisson cascade) or image priors (a 1/F spectral model) significantly reduces reconstruction performance, indicating the essential role of both components in achieving high-quality reconstructed images from the retinal signal.


Journal ArticleDOI
TL;DR: Posologyr as mentioned in this paper is a free and open-source R package developed to enable Bayesian individual parameter estimation and dose individualization, and it has been evaluated on a wide variety of models and pharmacokinetic profiles.
Abstract: Model-informed precision dosing is being increasingly used to improve therapeutic drug monitoring. To meet this need, several tools have been developed, but open-source software remains uncommon. Posologyr is a free and open-source R package developed to enable Bayesian individual parameter estimation and dose individualization. Before using it for clinical practice, performance validation is mandatory. The estimation functions implemented in posologyr were benchmarked against reference software products on a wide variety of models and pharmacokinetic profiles: 35 population pharmacokinetic models, with 4,000 simulated subjects per model. Maximum A Posteriori (MAP) estimates were compared to NONMEM post hoc estimates, and full posterior distributions were compared to Monolix conditional distribution estimates. The performance of MAP estimation was excellent in 98.7% of the cases. Considering the full posterior distributions of individual parameters, the bias on dosage adjustment proposals was acceptable in 97% of cases with a median bias of 0.65%. These results confirmed the ability of posologyr to serve as a basis for the development of future Bayesian dose individualization tools.
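
A minimal sketch of the individual MAP objective that such Bayesian dose-individualization tools typically minimize; this is a generic formulation, not posologyr's actual code, and the function and variable names are placeholders of mine.

```python
import numpy as np
from scipy.optimize import minimize

def map_individual(t, y, predict, eta0, omega, sigma2):
    """MAP estimate of individual random effects eta given observations y at times t (sketch).

    predict(t, eta): model-predicted concentrations for individual parameters eta.
    omega:           population variance-covariance matrix of the random effects.
    sigma2:          residual (additive) error variance.
    """
    omega_inv = np.linalg.inv(omega)

    def neg_log_posterior(eta):
        resid = y - predict(t, eta)
        return 0.5 * np.sum(resid ** 2) / sigma2 + 0.5 * eta @ omega_inv @ eta

    return minimize(neg_log_posterior, x0=eta0, method="Nelder-Mead").x
```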

Journal ArticleDOI
TL;DR: In this article , a Bayesian estimation method of land deformation combining PSs and DSs is proposed. But the phase quality of a large number of DSs was far inferior to that of PSs, which deteriorated the deformation measurement accuracy.
Abstract: Persistent Scatterer Interferometry (PSI) has been widely used for monitoring land deformation in urban areas with millimeter accuracy. In natural terrain, combining persistent scatterers (PSs) and distributed scatterers (DSs) to jointly estimate deformation, such as SqueeSAR, can enhance PSI results for denser and better coverage. However, the phase quality of a large number of DSs is far inferior to that of PSs, which deteriorates the deformation measurement accuracy. To solve the contradiction between measurement accuracy and coverage, a Bayesian estimation method of land deformation combining PSs and DSs is proposed in this paper. First, a two-level network is introduced into the traditional PSI to deal with PSs and DSs. In the first-level network, the Maximum Likelihood Estimation (MLE) of deformation parameters at PSs and high-quality DSs is obtained accurately. In the second-level network, the remaining DSs are connected to the nearest PSs or high-quality DSs, and the deformation parameters are estimated by Maximum A Posteriori (MAP) based on Bayesian theory. Due to the poor phase quality of the remaining DSs, MAP can achieve better estimation results than the MLE based on the spatial correlation of the deformation field. Simulation and Sentinel-1A satellite data results verified the feasibility and reliability of the proposed method. Regularized by the spatial deformation field derived from the high-quality PSs and DSs, the proposed method is expected to achieve robust results even in low-coherence areas, such as rural areas, vegetation coverage areas, or deserts.

Journal ArticleDOI
TL;DR: In this paper, a probability-aided maximum-likelihood sequence detector (PMLSD) is experimentally investigated through a 64-GBaud probabilistic shaped 16-ary quadrature amplitude modulation (PS-16QAM) transmission experiment.
Abstract: In this paper, for the first time, a probability-aided maximum-likelihood sequence detector (PMLSD) is experimentally investigated through a 64-GBaud probabilistic shaped 16-ary quadrature amplitude modulation (PS-16QAM) transmission experiment. In order to relax the impacts of PS technology on the decision module, a PMLSD decision scheme is investigated by appropriately modifying the decision criterion of the maximum-likelihood sequence detector (MLSD). Meanwhile, a symbol-wise probability-aided maximum a posteriori probability (PMAP) scheme is also demonstrated for comparison. The results show that the PMLSD scheme outperforms the direct decision scheme by about 1.0 dB in optical signal-to-noise ratio (OSNR) sensitivity. Compared with the symbol-wise PMAP scheme, the PMLSD scheme can effectively relax the impacts of PS technology on the decision module, and a more than 0.8-dB improvement in terms of OSNR sensitivity in the back-to-back (B2B) case is obtained. Finally, we successfully transmit the PS-16QAM signals over a 2400-km fiber link with a bit error ratio (BER) lower than 1.00×10^-3 by adopting the PMLSD scheme.
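
The generic way a prior-aided decision rule modifies the usual ML metric, written in my own notation (the paper's exact branch metric may differ), is

\hat{x} = \arg\max_{x \in \mathcal{X}} \big[ \log p(y \mid x) + \log P(x) \big] = \arg\min_{x \in \mathcal{X}} \Big[ \frac{|y - x|^{2}}{2\sigma^{2}} - \log P(x) \Big]

for Gaussian-like residual noise, where P(x) is the non-uniform symbol probability imposed by probabilistic shaping; the sequence detector applies the analogous prior-corrected metric accumulated along candidate symbol sequences.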


Journal ArticleDOI
TL;DR: In this paper, three levels of prior information are developed: (1) an uninformative prior and an informative prior, which can be implemented as (2a) a less strict prior or (2b) a strict prior.
Abstract: SAR interferometry has developed rapidly in recent years and now allows measurements of subtle deformation of the Earth's surface with millimeter accuracy. All state-of-the-art processing methods require a precise coherence estimate. However, this estimate is a random variable and biased toward higher values. Up to now, little has been published on the Bayesian estimation of the degree of coherence. The objective of the article is to develop empirical Bayesian estimators and to assess their characteristics by simulations. Bayesian estimation is understood as a regularization of the maximum likelihood estimation. The more information is used and the stricter the general prior, the more accurate the estimate will be. Three levels of prior information are developed: (1) an uninformative prior and an informative prior which can be implemented as (2a) a less strict prior and (2b) a strict prior. The informative priors are described by a single parameter only, i.e., the maximum underlying coherence. The article reports on the bias, the standard deviation and the root-mean-square error of the developed estimators. It was found that all empirical Bayes estimators improve the coherence estimation from small samples and for small underlying coherences compared to the conventional sample estimator; e.g., a zero underlying coherence is estimated by the expected a posteriori estimator without additional information with a 33.3% reduced bias using only three samples. Assuming the maximum underlying coherence is 0.6, the bias is reduced by 51.3% for the strict prior and by 36.6% for the less strict prior. In addition, it was found that the methods work very well even for the extremely small sample size of only two values.

Journal ArticleDOI
TL;DR: As discussed in this article, the relationship between the boundary problem and irregular standard errors has not been analytically explored, and prior research has not shown how maximum a posteriori estimates avoid the boundary problem and affect the standard errors of estimates.
Abstract: In diagnostic classification models, parameter estimation sometimes provides estimates that stick to the boundaries of the parameter space, which is called the boundary problem and may lead to extreme values of standard errors. However, the relationship between the boundary problem and irregular standard errors has not been analytically explored. In addition, prior research has not shown how maximum-a-posteriori estimates avoid the boundary problem and affect the standard errors of estimates. To analyze these relationships, the expectation–maximization algorithm for maximum-a-posteriori estimates and a complete data Fisher information matrix are explicitly derived for a mixture formulation of saturated diagnostic classification models. Theoretical considerations show that the emptiness of attribute mastery patterns causes both the boundary problem and the inaccurate standard error estimates. Furthermore, a boundary problem that occurs without emptiness causes shorter standard errors. A simulation study shows that the maximum-a-posteriori method prevents boundary problems. Moreover, this method with monotonicity constraint estimation improves standard error estimates more than unconstrained maximum likelihood estimates do.

Journal ArticleDOI
TL;DR: In this article, the authors use the Expectation Propagation (EP) framework to approximate minimum mean squared error (MMSE) estimates and marginal (pixel-wise) variances, without resorting to Monte Carlo sampling.
Abstract: This paper presents a scalable approximate Bayesian method for image restoration using Total Variation (TV) priors, with the ability to offer uncertainty quantification. In contrast to most optimization methods based on maximum a posteriori estimation, we use the Expectation Propagation (EP) framework to approximate minimum mean squared error (MMSE) estimates and marginal (pixel-wise) variances, without resorting to Monte Carlo sampling. For the classical anisotropic TV-based prior, we also propose an iterative scheme to automatically adjust the regularization parameter via Expectation Maximization (EM). Using Gaussian approximating densities with diagonal covariance matrices, the resulting method allows highly parallelizable steps and can scale to large images for denoising, deconvolution, and compressive sensing (CS) problems. The simulation results illustrate that such EP methods can provide a posteriori estimates on par with those obtained via sampling methods but at a fraction of the computational cost. Moreover, EP does not exhibit strong underestimation of the posterior variances, in contrast to variational Bayes alternatives.

Proceedings ArticleDOI
28 Aug 2022
TL;DR: In this paper , a maximum a-posteriori probability-based terahertz parameter extraction method was developed to obtain tissue characteristics of paraffin-embedded murine pancreatic ductal adenocarcinoma samples.
Abstract: We developed a maximum a posteriori probability-based terahertz parameter extraction method to obtain tissue characteristics of paraffin-embedded murine pancreatic ductal adenocarcinoma samples. This estimation algorithm can produce conservative estimates of tissue parameters without resorting to frequency-domain analysis. We report well-resolved and statistically significant differences in tumor tissue regions as compared to their healthy counterparts using our parameter extraction algorithm. We propose to use the index of refraction and the absorption as imaging biomarker parameters for biological tissue characterization.