
Showing papers on "Maximum a posteriori estimation published in 2021"


Proceedings ArticleDOI
20 Jun 2021
TL;DR: In this paper, a model-based denoising method is proposed to improve the interpretability of deep networks by introducing a non-linear filtering operator, a reliability matrix, and a high-dimensional feature transformation function.
Abstract: Recent studies have shown that deep networks can achieve promising results for image denoising. However, how to simultaneously incorporate the valuable achievements of traditional methods into the network design and improve network interpretability is still an open problem. To solve this problem, we propose a novel model-based denoising method to inform the design of our denoising network. First, by introducing a non-linear filtering operator, a reliability matrix, and a high-dimensional feature transformation function into the traditional consistency prior, we propose a novel adaptive consistency prior (ACP). Second, by incorporating the ACP term into the maximum a posteriori framework, a model-based denoising method is proposed. This method is further used to inform the network design, leading to a novel end-to-end trainable and interpretable deep denoising network, called DeamNet. Notably, the unfolding process leads to a promising module called the dual element-wise attention mechanism (DEAM). To the best of our knowledge, neither our ACP constraint nor our DEAM module has been reported in the previous literature. Extensive experiments verify the superiority of DeamNet on both synthetic and real noisy image datasets.
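
The abstract gives no equations; as a rough, generic illustration of the kind of MAP denoising objective that gets unfolded into a trainable network, the sketch below runs plain gradient descent on x* = argmin_x 0.5||x - y||^2 + lam * R(x), with a simple Tikhonov-style smoothness prior standing in for the paper's ACP term (all names and the prior choice are assumptions, not DeamNet's actual design). Each iteration corresponds to one stage of an unfolded network.

```python
import numpy as np

def map_denoise(y, prior_grad, lam=2.0, step=0.2, n_iter=50):
    """Gradient-descent unfolding of the generic MAP denoising objective
        x* = argmin_x  0.5 * ||x - y||^2 + lam * R(x),
    where prior_grad computes the gradient of the regularizer R.
    Each iteration mirrors one 'stage' of an unfolded denoising network."""
    x = y.copy()
    for _ in range(n_iter):
        grad = (x - y) + lam * prior_grad(x)   # data term + prior term
        x -= step * grad
    return x

# Illustrative smoothness prior R(x) = 0.5 * ||Dx||^2 on a periodic 1-D signal;
# its gradient is the negative discrete Laplacian.
def smooth_prior_grad(x):
    return -(np.roll(x, 1) - 2 * x + np.roll(x, -1))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = map_denoise(noisy, smooth_prior_grad)
```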

89 citations


Journal ArticleDOI
TL;DR: In this paper, a Bayesian retinex algorithm for underwater image enhancement with multi-order gradient priors on reflectance and illumination is proposed, which can be used for color correction, naturalness preservation, structure and detail enhancement, and artifact or noise suppression.

85 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a predictive beamforming scheme in the context of dual-functional radar-communication (DFRC) systems, where the road-side unit estimates and predicts the motion parameters of vehicles based on the echoes of the DFRC signal.
Abstract: The development of dual-functional radar-communication (DFRC) systems, where vehicle localization and tracking can be combined with vehicular communication, will lead to more efficient future vehicular networks. In this paper, we develop a predictive beamforming scheme in the context of DFRC systems. We consider a system model where the road-side unit estimates and predicts the motion parameters of vehicles based on the echoes of the DFRC signal. Compared to conventional feedback-based beam tracking approaches, the proposed method can reduce the signaling overhead and improve the accuracy of the angle estimation. To accurately estimate the motion parameters of vehicles in real time, we propose a novel message passing algorithm based on a factor graph, which approaches the near-optimal performance of maximum a posteriori estimation. The beamformers are then designed based on the predicted angles for establishing the communication links. With the employment of appropriate approximations, all messages on the factor graph can be derived in closed form, thus reducing the complexity. Simulation results show that the proposed DFRC-based beamforming scheme is superior to the feedback-based approach in terms of both estimation and communication performance. Moreover, the proposed message passing algorithm achieves performance similar to that of high-complexity particle-filtering-based methods.

64 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed hybrid detection algorithm can not only approach the performance of the near-optimal symbol-wise maximum a posteriori (MAP) algorithm, but also offer a substantial performance gain compared with existing algorithms.
Abstract: Orthogonal time frequency space (OTFS) modulation has attracted substantial attention recently due to its great potential for providing reliable communications in high-mobility scenarios. In this article, we propose a novel hybrid signal detection algorithm for OTFS modulation. Based on the system model, we first derive the near-optimal symbol-wise maximum a posteriori (MAP) detection algorithm for OTFS modulation. Then, in order to reduce the detection complexity, we propose a partitioning rule that separates the related received symbols into two subsets for detecting each transmitted symbol, according to the corresponding path gains. Based on this rule, we design the hybrid detection algorithm to exploit the power discrepancy of the subsets, applying MAP detection to the subset with larger channel gains and parallel interference cancellation (PIC) detection to the subset with smaller channel gains. We also provide an error performance analysis of the proposed hybrid detection algorithm. Simulation results show that the proposed hybrid detection algorithm can not only approach the performance of the near-optimal symbol-wise MAP algorithm, but also offer a substantial performance gain compared with existing algorithms.
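
For intuition on the symbol-wise MAP criterion that the hybrid detector approximates, here is a brute-force sketch (not the paper's algorithm; the linear model, uniform symbol priors, and AWGN are assumptions) that enumerates all candidate vectors to compute each posterior marginal. The exponential cost of this enumeration is exactly what the partitioning/PIC hybrid is designed to avoid.

```python
import numpy as np

def symbolwise_map(y, H, constellation, noise_var):
    """Brute-force symbol-wise MAP detection for a small linear model
    y = H x + n with uniform priors and complex AWGN. For each symbol index
    k, it maximizes the posterior marginal p(x_k | y) by summing over all
    candidate vectors. Cost is M^K, so only tiny K is feasible."""
    K = H.shape[1]
    cands = np.array(np.meshgrid(*([constellation] * K))).reshape(K, -1).T
    loglik = np.array([-np.linalg.norm(y - H @ x) ** 2 / noise_var
                       for x in cands])
    w = np.exp(loglik - loglik.max())       # unnormalized posteriors
    x_hat = np.empty(K, dtype=complex)
    for k in range(K):
        marg = [w[np.isclose(cands[:, k], s)].sum() for s in constellation]
        x_hat[k] = constellation[int(np.argmax(marg))]
    return x_hat

# QPSK toy example
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
```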

59 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new class of Bayesian neural networks (BNNs) that can be trained using noisy data of variable fidelity, and they apply them to learn function approximations as well as to solve inverse problems based on partial differential equations.

51 citations


Journal ArticleDOI
TL;DR: This paper provides a thorough review of MC methods for the estimation of static parameters in signal processing applications, covering a historical note on the development of MC schemes, the basic MC method, a brief description of the rejection sampling (RS) algorithm, and many of the most relevant MCMC and IS algorithms, together with their combined use.
Abstract: Statistical signal processing applications usually require the estimation of some parameters of interest given a set of observed data. These estimates are typically obtained either by solving a multi-variate optimization problem, as in the maximum likelihood (ML) or maximum a posteriori (MAP) estimators, or by performing a multi-dimensional integration, as in the minimum mean squared error (MMSE) estimators. Unfortunately, analytical expressions for these estimators cannot be found in most real-world applications, and the Monte Carlo (MC) methodology is one feasible approach. MC methods proceed by drawing random samples, either from the desired distribution or from a simpler one, and using them to compute consistent estimators. The most important families of MC algorithms are Markov chain MC (MCMC) and importance sampling (IS). On the one hand, MCMC methods draw samples from a proposal density, building then an ergodic Markov chain whose stationary distribution is the desired distribution by accepting or rejecting those candidate samples as the new state of the chain. On the other hand, IS techniques draw samples from a simple proposal density, and then assign them suitable weights that measure their quality in some appropriate way. In this paper, we perform a thorough review of MC methods for the estimation of static parameters in signal processing applications. A historical note on the development of MC schemes is also provided, followed by the basic MC method and a brief description of the rejection sampling (RS) algorithm, as well as three sections describing many of the most relevant MCMC and IS algorithms, and their combined use.
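
As a minimal concrete instance of the MCMC family reviewed here, the following random-walk Metropolis-Hastings sketch draws samples from an unnormalized posterior and reads off an MMSE estimate (posterior mean) and a crude sample-based MAP estimate. The toy Gaussian-likelihood/Laplace-prior posterior is illustrative, not from the paper.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples=10000, prop_std=0.5, rng=None):
    """Random-walk Metropolis-Hastings: builds a Markov chain whose
    stationary distribution is proportional to exp(log_post)."""
    rng = rng or np.random.default_rng(0)
    x, lp = x0, log_post(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_new = x + prop_std * rng.standard_normal()
        lp_new = log_post(x_new)
        if np.log(rng.uniform()) < lp_new - lp:     # accept/reject step
            x, lp = x_new, lp_new
        samples[i] = x
    return samples

# Toy scalar problem: Gaussian likelihood, Laplace prior.
data = np.array([1.2, 0.8, 1.5, 0.9])
log_post = lambda t: -0.5 * np.sum((data - t) ** 2) - abs(t)
s = metropolis_hastings(log_post, x0=0.0)
mmse_est = s.mean()                                  # posterior mean
map_est = s[np.argmax([log_post(t) for t in s])]     # best visited sample
```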

34 citations


Journal ArticleDOI
Niall Jeffrey1, Niall Jeffrey2, M Gatti3, Chihway Chang4, L. Whiteway2, U. Demirbozan, A. Kovács5, A. Kovács6, G. Pollina7, David Bacon8, Nico Hamaus7, T. Kacprzak9, Ofer Lahav2, Francois Lanusse10, B. Mawdsley8, Seshadri Nadathur8, Jean-Luc Starck10, P. Vielzeuf, D. Zeurcher9, A. Alarcon11, Alexandra Amon12, Keith Bechtol13, Gary Bernstein3, A. Campos14, A. Carnero Rosell5, A. Carnero Rosell6, M. Carrasco Kind15, R. Cawthon13, R. Chen16, Ami Choi17, J. Cordero18, C. Davis12, J. DeRose19, J. DeRose20, C. Doux3, Alex Drlica-Wagner4, Alex Drlica-Wagner21, K. D. Eckert3, F. Elsner2, Jack Elvin-Poole17, S. Everett19, Agnès Ferté22, G. Giannini, Daniel Gruen12, Robert A. Gruendl15, I. Harrison18, I. Harrison23, W. G. Hartley24, K. Herner21, E. M. Huff22, Dragan Huterer25, N. Kuropatkin21, Matt J. Jarvis3, P. F. Leget12, Niall MacCrann26, J. McCullough12, J. Muir12, J. Myles12, A. Navarro-Alsina27, S. Pandey3, J. Prat4, Marco Raveri4, R. P. Rollins18, Ashley J. Ross17, Eli S. Rykoff12, Carlos Solans Sanchez3, L. F. Secco3, I. Sevilla-Noarbe, Erin Sheldon28, T. Shin3, Michael Troxel16, I. Tutusaus6, T. N. Varga7, T. N. Varga29, Brian Yanny21, B. Yin14, Yanxi Zhang21, Joe Zuntz30, T. M. C. Abbott, Michel Aguena31, S. Allam21, F. Andrade-Oliveira32, Matthew R. Becker11, E. Bertin33, E. Bertin34, Sunayana Bhargava35, David J. Brooks2, D. L. Burke12, J. Carretero, F. J. Castander6, Christopher J. Conselice18, Christopher J. Conselice36, M. Costanzi37, Martin Crocce6, L. N. da Costa, Maria E. S. Pereira25, J. De Vicente, S. Desai38, H. T. Diehl21, J. P. Dietrich7, Peter Doel2, I. Ferrero39, B. Flaugher21, Pablo Fosalba6, Juan Garcia-Bellido6, Enrique Gaztanaga6, D. W. Gerdes25, Tommaso Giannantonio26, J. Gschwend, G. Gutierrez21, Samuel Hinton40, D. L. Hollowood19, Ben Hoyle29, Ben Hoyle7, Bhuvnesh Jain3, David J. James41, Marcos Lima31, M. A. G. Maia, M. March3, Jennifer L. Marshall42, Peter Melchior, Felipe Menanteau15, Ramon Miquel, Joseph J. Mohr7, Joseph J. Mohr29, Robert Morgan13, R. L. C. Ogando, Antonella Palmese21, Antonella Palmese4, F. Paz-Chinchón43, F. Paz-Chinchón26, A. A. Plazas44, M. Rodriguez-Monroy, A. Roodman12, E. J. Sanchez, V. Scarpine21, S. Serrano6, M. Smith45, M. Soares-Santos25, E. Suchyta46, G. Tarle25, Daniel Thomas8, Chun-Hao To12, Jochen Weller29, Jochen Weller7 
TL;DR: In this article, the authors reconstructed convergence maps (mass maps) from the Dark Energy Survey (DES) third year (Y3) weak gravitational lensing data set; these maps are weighted projections of the density field (primarily dark matter) in the foreground of the observed galaxies.
Abstract: We present reconstructed convergence maps ('mass maps') from the Dark Energy Survey (DES) third year (Y3) weak gravitational lensing data set. The mass maps are weighted projections of the density field (primarily dark matter) in the foreground of the observed galaxies. We use four reconstruction methods, each of which is a maximum a posteriori estimate with a different model for the prior probability of the map: Kaiser-Squires, null B-mode prior, Gaussian prior, and a sparsity prior. All methods are implemented on the celestial sphere to accommodate the large sky coverage of the DES Y3 data. We compare the methods using realistic ΛCDM simulations with mock data that are closely matched to the DES Y3 data. We quantify the performance of the methods at the map level and then apply the reconstruction methods to the DES Y3 data, performing tests for systematic error effects. The maps are compared with optical foreground cosmic-web structures and are used to evaluate the lensing signal from cosmic-void profiles. The recovered dark matter map covers the largest sky fraction of any galaxy weak lensing map to date.
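
A flat-sky toy version of the Gaussian-prior case makes the "MAP estimate with a prior on the map" idea concrete: with a Gaussian prior power spectrum S(l) and white noise power N, the MAP solution per Fourier mode is the Wiener filter S/(S+N). The sketch below is illustrative only; the paper works on the celestial sphere, and the power-law spectrum and parameter names here are assumptions.

```python
import numpy as np

def wiener_map(kappa_noisy, noise_power, amp=1.0, slope=-2.0):
    """Flat-sky toy of Gaussian-prior MAP mass-map reconstruction:
    per Fourier mode, the MAP estimate shrinks the observed mode by
    the Wiener weight S(l) / (S(l) + N)."""
    n = kappa_noisy.shape[0]
    lx = np.fft.fftfreq(n)[:, None]
    ly = np.fft.fftfreq(n)[None, :]
    l = np.sqrt(lx ** 2 + ly ** 2)
    l[0, 0] = l[0, 1]                  # avoid division by zero at l = 0
    S = amp * l ** slope               # assumed prior power spectrum
    W = S / (S + noise_power)          # Wiener weight = MAP shrinkage
    return np.real(np.fft.ifft2(W * np.fft.fft2(kappa_noisy)))
```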

31 citations


Journal ArticleDOI
TL;DR: This work proposes a multi-objective optimization algorithm for PET image reconstruction that identifies a set of images optimal for more than one task, generating solutions whose objective function values are comparable to or better than those of conventional approaches for trading off performance among different tasks.
Abstract: In many diagnostic imaging settings, including positron emission tomography (PET), images are typically used for multiple tasks such as detecting disease and quantifying disease. Unlike conventional image reconstruction that optimizes a single objective, this work proposes a multi-objective optimization algorithm for PET image reconstruction to identify a set of images that are optimal for more than one task. This work is reliant on a genetic algorithm to evolve a set of solutions that satisfies two distinct objectives. In this paper, we defined the objectives as the commonly used Poisson log-likelihood function, typically reflective of quantitative accuracy, and a variant of the generalized scan-statistic model, to reflect detection performance. The genetic algorithm uses new mutation and crossover operations at each iteration. After each iteration, the child population is selected with non-dominated sorting to identify the set of solutions along the dominant front or fronts. After multiple iterations, these fronts approach a single non-dominated optimal front, defined as the set of PET images for which none of the objective function values can be improved without reducing the opposing objective function. This method was applied to simulated 2D PET data of the heart and liver with hot features. We compared this approach to conventional, single-objective approaches for trading off performance: maximum likelihood estimation with increasing explicit regularization and maximum a posteriori estimation with varying penalty strength. Results demonstrate that the proposed method generates solutions with objective function values comparable to or better than those of the conventional approaches for trading off performance amongst different tasks. In addition, this approach identifies a diverse set of solutions in the multi-objective function space which can be challenging to estimate with single-objective formulations.
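
The selection step the abstract describes, keeping the non-dominated front of the child population, can be sketched in a few lines. This is illustrative code, not the paper's implementation; both objectives are taken as to-be-maximized scores.

```python
import numpy as np

def nondominated_front(objs):
    """Indices of solutions on the first non-dominated (Pareto) front.
    objs is (n_solutions, n_objectives), all objectives maximized:
    a solution is dropped if some other solution is >= in every
    objective and > in at least one."""
    n = objs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(objs[j] >= objs[i]) and np.any(objs[j] > objs[i]):
                keep[i] = False
                break
    return np.where(keep)[0]

# Toy trade-off between a likelihood-like and a detection-like score.
rng = np.random.default_rng(1)
scores = rng.random((50, 2))
front = nondominated_front(scores)
```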

23 citations


Journal ArticleDOI
TL;DR: In this paper, a statistical model based on stochastic differential equations (SDEs) is proposed for retinal optical coherence tomography (OCT) images, in which pixel intensities are treated as discrete realizations of a Lévy stable process.
Abstract: In this paper a statistical model, based on stochastic differential equations (SDEs), is proposed for retinal Optical Coherence Tomography (OCT) images. In this method, pixel intensities of the image are considered as discrete realizations of a Lévy stable process. This process has independent increments and can be expressed as the response of an SDE to white symmetric α-stable (SαS) noise. Based on this assumption, applying an appropriate differential operator makes the intensities statistically independent. The white stable noise can be regenerated by applying the fractional Laplacian operator to the image intensities. In this way, we modeled OCT images with an SαS distribution. We applied the fractional Laplacian operator to the image and fitted an SαS distribution to its histogram. Statistical tests were used to evaluate the goodness of fit of the stable distribution and its heavy-tailed and stability characteristics. We used the modeled SαS distribution as prior information in a maximum a posteriori (MAP) estimator in order to reduce the speckle noise of OCT images. Such a statistically independent prior distribution simplified the denoising optimization problem to a regularization algorithm with an adjustable shrinkage operator for each image. The Alternating Direction Method of Multipliers (ADMM) algorithm was utilized to solve the denoising problem. We present visual and quantitative evaluations of the performance of this modeling and denoising method for normal and abnormal images. Applying the model parameters in a classification task, as well as the improvement in layer segmentation after denoising, illustrates that the proposed method describes OCT data more accurately than other models that do not remove statistical dependencies between pixel intensities.
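
A minimal sketch of the whitening step, assuming the fractional Laplacian is implemented as the Fourier multiplier |ω|^α (a standard definition; the paper's discretization may differ), followed by an optional SαS fit via SciPy's levy_stable distribution:

```python
import numpy as np
from scipy.stats import levy_stable

def fractional_laplacian(img, alpha):
    """Applies (-Delta)^(alpha/2) via the Fourier multiplier |omega|^alpha.
    Per the model, this approximately whitens the OCT intensities into
    i.i.d. symmetric alpha-stable samples."""
    n0, n1 = img.shape
    w0 = np.fft.fftfreq(n0)[:, None]
    w1 = np.fft.fftfreq(n1)[None, :]
    mult = (2 * np.pi * np.sqrt(w0 ** 2 + w1 ** 2)) ** alpha
    return np.real(np.fft.ifft2(mult * np.fft.fft2(img)))

# Fit an SaS law to the whitened coefficients (generic MLE fit; can be
# slow for large images):
# a, b, loc, scale = levy_stable.fit(fractional_laplacian(img, 1.2).ravel())
```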

17 citations


Journal ArticleDOI
TL;DR: In this article, the shape and scale parameter of Poisson-exponential distribution for complete sample is estimated using Markov Chain Monte Carlo (MCMCMC) technique. And the proposed Bayes estimators have been studied and compared with their maximum likelihood estimators on the basis of Monte Carlo study of simulated samples in terms of their risks.
Abstract: The present paper deals with maximum likelihood and Bayes estimation procedures for the shape and scale parameters of the Poisson-exponential distribution under complete sampling. Bayes estimators under symmetric and asymmetric loss functions are obtained using the Markov chain Monte Carlo (MCMC) technique. The performance of the proposed Bayes estimators is studied and compared with that of the maximum likelihood estimators, in terms of their risks, through a Monte Carlo study of simulated samples. The methodology is also illustrated on a real data set.
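
Once MCMC draws of a parameter are available, the Bayes estimators under the two kinds of loss reduce to simple sample functionals. The sketch below uses squared-error loss and the LINEX loss as a representative symmetric/asymmetric pair; the paper's exact loss functions may differ.

```python
import numpy as np

def bayes_estimates(samples, a=1.0):
    """Bayes estimators from posterior MCMC draws of a parameter theta:
      - squared-error (symmetric) loss  -> posterior mean,
      - LINEX (asymmetric) loss, shape a -> -(1/a) * log E[exp(-a*theta)]."""
    sq_err = samples.mean()
    linex = -np.log(np.mean(np.exp(-a * samples))) / a
    return sq_err, linex
```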

17 citations


Journal ArticleDOI
TL;DR: A novel variational convex optimization model, one of the first works in this field, is proposed for single SAR image SR reconstruction under speckle noise, and the split Bregman algorithm is employed to solve it efficiently.
Abstract: Super resolution (SR) is an attractive issue in image processing. In synthetic aperture radar (SAR) images, speckle noise is a crucial problem and is multiplicative. Therefore, many standard SR methods, which assume additive Gaussian noise, cannot handle this image degradation model. The main contribution of this paper is a novel variational convex optimization model for single SAR image SR reconstruction under speckle noise, one of the first works in this field. The main idea is to employ a maximum a posteriori (MAP) estimator and propose an effective regularization that combines sparse representation, total variation (TV), and a novel feature-space-based soft projection tool to exploit the merits of each. To solve the proposed model, the split Bregman algorithm is employed efficiently. Experimental results on multiple synthetic and real SAR images show the effectiveness of the proposed method in terms of both fidelity and visual perception.

Journal ArticleDOI
TL;DR: In this article, the Bayesian approximation error (BAE) approach is employed to premarginalize simultaneously over both the noise in measurements and the uncertainty in the forward model, within a Bayesian framework that provides a systematic means of quantifying uncertainty in the solution.
Abstract: . We consider the problem of inferring the basal sliding coefficient field for an uncertain Stokes ice sheet forward model from synthetic surface velocity measurements. The uncertainty in the forward model stems from unknown (or uncertain) auxiliary parameters (e.g., rheology parameters). This inverse problem is posed within the Bayesian framework, which provides a systematic means of quantifying uncertainty in the solution. To account for the associated model uncertainty (error), we employ the Bayesian approximation error (BAE) approach to approximately premarginalize simultaneously over both the noise in measurements and uncertainty in the forward model. We also carry out approximative posterior uncertainty quantification based on a linearization of the parameter-to-observable map centered at the maximum a posteriori (MAP) basal sliding coefficient estimate, i.e., by taking the Laplace approximation. The MAP estimate is found by minimizing the negative log posterior using an inexact Newton conjugate gradient method. The gradient and Hessian actions to vectors are efficiently computed using adjoints. Sampling from the approximate covariance is made tractable by invoking a low-rank approximation of the data misfit component of the Hessian. We study the performance of the BAE approach in the context of three numerical examples in two and three dimensions. For each example, the basal sliding coefficient field is the parameter of primary interest which we seek to infer, and the rheology parameters (e.g., the flow rate factor or the Glen's flow law exponent coefficient field) represent so-called nuisance (secondary uncertain) parameters. Our results indicate that accounting for model uncertainty stemming from the presence of nuisance parameters is crucial. Namely our findings suggest that using nominal values for these parameters, as is often done in practice, without taking into account the resulting modeling error, can lead to overconfident and heavily biased results. We also show that the BAE approach can be used to account for the additional model uncertainty at no additional cost at the online stage.

Proceedings ArticleDOI
20 Jun 2021
TL;DR: In this article, a simple and effective approach was proposed to jointly learn data and regularization terms, embedding deep neural networks within the constraints of the MAP framework, trained in an end-to-end manner.
Abstract: The classical maximum a-posteriori (MAP) framework for non-blind image deblurring requires defining suitable data and regularization terms, whose interplay yields the desired clear image through optimization. The vast majority of prior work focuses on advancing one of these two crucial ingredients, while keeping the other one standard. Considering the indispensable roles and interplay of both data and regularization terms, we propose a simple and effective approach to jointly learn these two terms, embedding deep neural networks within the constraints of the MAP framework, trained in an end-to-end manner. The neural networks not only yield suitable image-adaptive features for both terms, but actually predict per-pixel spatially-variant features instead of the commonly used spatially-uniform ones. The resulting spatially-variant data and regularization terms particularly improve the restoration of fine-scale structures and detail. Quantitative and qualitative results underline the effectiveness of our approach, substantially outperforming the current state of the art.

Journal ArticleDOI
Joon-Ha Kim1, Seungwoo Hong1, Gwanghyeon Ji1, Seunghun Jeon1, Jemin Hwangbo1, Jun-Ho Oh1, Hae-Won Park1 
30 Jun 2021
TL;DR: In this article, a state estimation algorithm for legged robots is presented that defines the problem as a Maximum A Posteriori (MAP) estimation problem and solves it with the Gauss-Newton algorithm.
Abstract: This letter presents a state estimation algorithm for legged robots that defines the problem as a Maximum A Posteriori (MAP) estimation problem and solves it with the Gauss-Newton algorithm. Moreover, marginalization by the Schur complement method is adopted to keep the problem size fixed. Each component of the cost function and its Jacobian are derived utilizing the SO(3) manifold structure, while the state is reparameterized with a nominal state and a variation so that linear algebra and vector calculus can be applied properly. Furthermore, a slip rejection method is proposed to reduce the erroneous effect of faulty kinematic modeling. The proposed algorithm is verified by comparison with the Invariant Extended Kalman Filter (IEKF) in real robot experiments in various environments.
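
The core numerical loop, Gauss-Newton on a MAP problem written in nonlinear least-squares form, looks roughly as follows. This is a Euclidean sketch with assumed callables; the paper's SO(3) retractions, Schur-complement marginalization, and slip rejection are omitted.

```python
import numpy as np

def gauss_newton_map(residual, jacobian, x0, n_iter=20, tol=1e-9):
    """Gauss-Newton for a MAP problem in least-squares form,
        x* = argmin_x 0.5 * ||r(x)||^2,
    where the stacked residual r contains whitened prior, motion-model,
    and measurement terms."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton step
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```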

Journal ArticleDOI
TL;DR: In this paper, the maximum a posteriori estimate under Gauss–Markov priors, computable with an iterated extended Kalman smoother, is studied; subject to mild conditions on the vector field, convergence rates are obtained via nonlinear analysis and scattered data approximation.
Abstract: There is a growing interest in probabilistic numerical solutions to ordinary differential equations. In this paper, the maximum a posteriori estimate is studied under the class of ν times differentiable linear time-invariant Gauss–Markov priors, which can be computed with an iterated extended Kalman smoother. The maximum a posteriori estimate corresponds to an optimal interpolant in the reproducing kernel Hilbert space associated with the prior, which in the present case is equivalent to a Sobolev space of smoothness ν+1. Subject to mild conditions on the vector field, convergence rates of the maximum a posteriori estimate are then obtained via methods from nonlinear analysis and scattered data approximation. These results closely resemble classical convergence results in the sense that a ν times differentiable prior process obtains a global order of ν, which is demonstrated in numerical examples.


Proceedings Article
18 Aug 2021
TL;DR: In this paper, a deep reparametrization of the maximum a posteriori formulation commonly employed in multi-frame image restoration tasks is derived by introducing a learned error metric and a latent representation of the target image.
Abstract: We propose a deep reparametrization of the maximum a posteriori formulation commonly employed in multi-frame image restoration tasks. Our approach is derived by introducing a learned error metric and a latent representation of the target image, which transforms the MAP objective to a deep feature space. The deep reparametrization allows us to directly model the image formation process in the latent space, and to integrate learned image priors into the prediction. Our approach thereby leverages the advantages of deep learning, while also benefiting from the principled multi-frame fusion provided by the classical MAP formulation. We validate our approach through comprehensive experiments on burst denoising and burst super-resolution datasets. Our approach sets a new state-of-the-art for both tasks, demonstrating the generality and effectiveness of the proposed formulation.

Journal ArticleDOI
TL;DR: Better visual and metric results, as well as fast test-time performance, support the claim of improved denoising capability relative to the majority of benchmarks for MRI noise removal.

Journal ArticleDOI
TL;DR: In this article, a Bayesian framework for quantifying parameter uncertainty while simultaneously estimating the internal state of a dynamical system is proposed, based on maximum a posteriori estimation and the numerical Newton's method.
Abstract: Parameters of the mathematical model describing many practical dynamical systems are prone to vary due to aging or renewal, wear and tear, as well as changes in environmental or service conditions. These variabilities will adversely affect the accuracy of state estimation. In this paper, we introduce SSUE: Simultaneous State and Uncertainty Estimation for quantifying parameter uncertainty while simultaneously estimating the internal state of a system. Our approach involves the development of a Bayesian framework that recursively updates the posterior joint density of the unknown state vector and parameter uncertainty. To execute the framework for practical implementation, we develop a computational algorithm based on maximum a posteriori estimation and the numerical Newton's method. Observability analysis is conducted for linear systems, and its relation with the consistency of the estimation of the uncertainty's location is unveiled. Additional simulation results are provided to demonstrate the effectiveness of the proposed SSUE approach.

Journal ArticleDOI
01 Oct 2021
TL;DR: In this article, the authors evaluated the predictive performance of flattened priors (FPs) and when to apply them in model-informed precision dosing (MIPD), using a data set of 4679 adult patients treated with vancomycin.
Abstract: Model-informed precision dosing (MIPD) approaches typically apply maximum a posteriori (MAP) Bayesian estimation to determine individual pharmacokinetic (PK) parameters with the goal of optimizing future dosing regimens. This process combines knowledge about the individual, in the form of drug levels or pharmacodynamic biomarkers, with prior knowledge of the drug PK in the general population. Use of "flattened priors" (FPs), in which the weight of the model priors is reduced relative to observations about the patient, has been previously proposed to estimate individual PK parameters in instances where the patient is poorly described by the PK model. However, little is known about the predictive performance of FPs and when to apply FPs in MIPD. Here, FP is evaluated in a data set of 4679 adult patients treated with vancomycin. Depending on the PK model, prediction error could be reduced by applying FPs in 42-55% of PK parameter estimations. Machine learning (ML) models could identify instances where FPs would outperform MAPs with a specificity of 81-86%, reducing overall root mean squared error (RMSE) of PK model predictions by 12-22% (0.5-1.2 mg/L) relative to MAP alone. The factors most indicative of the use of FPs were past prediction residuals and bias in past PK predictions. A more clinically practical minimal model was developed using only these two features, reducing RMSE by 5-18% (0.20-0.93 mg/L) relative to MAP. This hybrid ML/PK approach advances the precision dosing toolkit by leveraging the power of ML while maintaining the mechanistic insight and interpretability of PK models.
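
A minimal sketch of how a flattened prior changes the MAP objective, assuming a toy one-compartment steady-state infusion model and a log-normal prior on clearance. The model, the variable names, and the single flatten knob are illustrative, not the paper's vancomycin models.

```python
import numpy as np
from scipy.optimize import minimize

def map_cl_estimate(obs_conc, dose_rate, cl_pop, omega2, sigma2, flatten=1.0):
    """MAP estimate of clearance CL for a toy steady-state infusion model,
    C = dose_rate / CL, with a log-normal population prior on CL.
    Setting flatten > 1 inflates the prior variance omega2, i.e., a
    'flattened prior' that downweights population information relative
    to the patient's own drug levels."""
    def neg_log_post(log_cl):
        pred = dose_rate / np.exp(log_cl[0])
        data = np.sum((obs_conc - pred) ** 2) / sigma2
        prior = (log_cl[0] - np.log(cl_pop)) ** 2 / (flatten * omega2)
        return 0.5 * (data + prior)
    res = minimize(neg_log_post, x0=[np.log(cl_pop)])
    return float(np.exp(res.x[0]))
```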

Journal ArticleDOI
TL;DR: In this article, the authors proposed a simple yet effective method to build a model for novel categories from few samples, in which each category in the base set is assumed to follow a Gaussian distribution, so that Maximum A Posteriori (MAP) estimation can be used to estimate the distribution of a novel category from even one example.
Abstract: Few-shot learning aims to train an effective classifier in a small data regime. Due to the scarcity of training samples (usually as small as 1 or 5), traditional deep learning solutions often suffer from overfitting. To address this issue, an intuitive idea is to augment or hallucinate sufficient training data. For this purpose, in this paper, we propose a simple yet effective method to build a model for novel categories with few samples. Specifically, we assume that each category in the base set follows a Gaussian distribution, so that we can employ Maximum A Posteriori (MAP) to estimate the distribution of a novel category with even one example. To achieve this goal, we first transform each base category into Gaussian form with power transformation for MAP estimation. Then, we estimate the Gaussian mean of the novel category under the Gaussian prior given few samples from it. Finally, each novel category is represented by a unique Gaussian distribution, where sufficient trainable features can be sampled to obtain a highly accurate classifier for final predictions. Experimental results on four few-shot benchmarks show that it significantly outperforms the baseline methods on both 1- and 5-shot tasks. Extensive results on cross-domain tasks and visualization of estimated feature distribution also demonstrate its effectiveness.
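
The MAP step itself is the standard conjugate-Gaussian shrinkage of a few-shot sample mean toward a prior mean; a minimal sketch follows, where kappa is an assumed prior-strength parameter, not necessarily the paper's parameterization.

```python
import numpy as np

def map_gaussian_mean(x_novel, mu0, kappa=1.0):
    """Conjugate MAP estimate of a novel class mean under a Gaussian prior
    N(mu0, sigma^2 / kappa) and a Gaussian likelihood with shared covariance:
        mu_MAP = (n * x_bar + kappa * mu0) / (n + kappa).
    With n = 1 (one-shot), the single example is shrunk toward the
    base-class-derived prior mean mu0."""
    x_novel = np.atleast_2d(x_novel)          # (n, d) few-shot features
    n = x_novel.shape[0]
    return (n * x_novel.mean(axis=0) + kappa * mu0) / (n + kappa)
```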

Journal ArticleDOI
TL;DR: In this paper, a blind identification method based on a two-stage maximum a posteriori probability estimation was proposed, where the first stage measures the parity-check relationship between the received vectors and the rows of the parity check matrices in the candidate set, and the second stage deals with the effect of row weights.
Abstract: Blind identification of encoders has received increasing attention in recent years. In this letter, we focus on LDPC coded communication systems and study the problem of blind identification over a candidate set. We propose a blind identification method based on a two-stage maximum a posteriori probability estimation. The first stage measures the parity-check relationship between the received vectors and the rows of the parity-check matrices in the candidate set, and the second stage deals with the effect of row weights. Moreover, we theoretically explain the mechanism by which row weight affects identification results and prove that the proposed method can considerably suppress the preference of existing methods for low row weights. Simulation results show that the proposed method always has a higher probability of correct identification for parity-check matrices with high row weights in the candidate set than existing methods.

Journal ArticleDOI
TL;DR: It is proved that with (a variant of) Poisson noise and any prior probability on the unknowns, MMSE estimation can again be expressed as the solution of a penalized least squares optimization problem.

Proceedings ArticleDOI
25 May 2021
TL;DR: In this paper, a partially-observed discrete dynamical systems (PODDS) model is introduced, where the state is a vector containing the information of different components of the system, and each component takes its value from a finite real-valued set.
Abstract: This paper introduces a new signal model called partially-observed discrete dynamical systems (PODDS). This signal model is a special case of the hidden Markov model (HMM), where the state is a vector containing the information of different components of the system, and each component takes its value from a finite real-valued set. This signal model is currently treated as a finite-state HMM, where the maximum a posteriori (MAP) criterion is used for state estimation. This paper takes advantage of the discrete structure of the state variables in PODDS and develops the optimal componentwise MAP (CMAP) state estimator, which yields the MAP solution in each state variable. A fully recursive process is provided for the computation of this optimal estimator, followed by a specific instance of the PODDS model suitable for regulatory networks observed through noisy time series data. The high performance of the proposed estimator is demonstrated by numerical experiments with a PODDS model of random regulatory networks.
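
The criterion difference is easiest to see on a scalar-state HMM: the joint MAP sequence comes from Viterbi, while the componentwise MAP takes the argmax of each forward-backward marginal. The sketch below shows both on an unnormalized toy (short sequences, strictly positive probabilities assumed); the paper's CMAP operates per component of a vector state, which this scalar version does not capture.

```python
import numpy as np

def cmap_vs_map(pi, A, B, obs):
    """pi: initial probs (S,), A: transitions (S,S), B: emissions (S,O),
    obs: observation indices (T,). Returns (componentwise MAP path via
    forward-backward marginals, joint MAP path via Viterbi).
    No scaling is applied, so this is for short toy sequences only."""
    S, T = len(pi), len(obs)
    alpha = np.zeros((T, S)); beta = np.zeros((T, S))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    cmap_path = np.argmax(alpha * beta, axis=1)   # per-step marginal argmax

    delta = np.log(pi * B[:, obs[0]])             # Viterbi in log domain
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + np.log(B[:, obs[t]])
    map_path = np.zeros(T, dtype=int)
    map_path[-1] = int(np.argmax(delta))
    for t in range(T - 2, -1, -1):
        map_path[t] = back[t + 1][map_path[t + 1]]
    return cmap_path, map_path

# Toy 2-state chain
pi = np.array([0.6, 0.4]); A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
cmap, vit = cmap_vs_map(pi, A, B, obs=np.array([0, 1, 1, 0, 1]))
```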

Journal ArticleDOI
TL;DR: A deep learning-based camera management system is proposed as a substitute for the academic filming crew; its pose-detection accuracy is improved by a Markov model and MAP estimator to reach as high as 95.5%.
Abstract: The Internet of Things is making objects smarter and more autonomous. At the same time, online education is gaining momentum, and many universities now offer online degrees. Content preparation for such programs usually involves recording the classes. In this article, we introduce a deep learning-based camera management system as a substitute for the academic filming crew. The solution mainly consists of two cameras and a wearable gadget for the instructor. The fixed camera is used for the instructor's position and pose detection, and the pan–tilt–zoom (PTZ) camera does the filming. In the proposed solution, image processing and deep learning techniques are merged. Face recognition and skeleton detection algorithms are used to detect the position of the instructor, but the main contribution lies in the application of deep learning for the instructor's skeleton detection and in the postprocessing of the deep network output, which corrects the pose detection results with a Bayesian Maximum A Posteriori (MAP) estimator defined on a Markov state machine. The pose detection result, along with the position information, is then used by the PTZ camera controller for filming. The proposed solution is implemented using OpenPose, a convolutional neural network for the detection of body parts. Feeding a neural network pose classifier with 12 features extracted from the output of the deep network yields an accuracy of 89%. However, as we show, the accuracy can be improved by the Markov model and MAP estimator to reach as high as 95.5%.
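
A filtering-style sketch of the MAP correction idea: combine per-frame classifier probabilities with a Markov transition prior and take the argmax of the running belief. This is illustrative; the paper's exact estimator on its state machine may differ.

```python
import numpy as np

def map_pose_correct(classifier_probs, A, pi):
    """Recursive MAP correction of per-frame pose-classifier outputs.
    classifier_probs: (T, S) per-frame class probabilities used as
    likelihoods, A: (S, S) Markov transition matrix, pi: (S,) initial
    distribution. Returns the argmax of the filtered belief per frame."""
    belief = pi * classifier_probs[0]
    belief /= belief.sum()
    poses = [int(np.argmax(belief))]
    for probs in classifier_probs[1:]:
        belief = (belief @ A) * probs      # predict, then update
        belief /= belief.sum()
        poses.append(int(np.argmax(belief)))
    return poses
```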

Journal ArticleDOI
TL;DR: In this paper, a low-rank Bayesian tensorized neural network is proposed to directly apply tensor compression in the training process, which is a challenging task due to the difficulty of choosing a proper tensor rank.

Journal ArticleDOI
TL;DR: The maximum a posteriori (MAP) estimate and Markov chain Monte Carlo (MCMC) are used to estimate the unknown input-impedance quantities of a developed system.

Journal ArticleDOI
TL;DR: Zhang et al. formulated a constrained maximum a posteriori (MAP) problem with likelihood and prior terms defined using three information sources, and solved it alternately using a hybrid iterative reweighted least squares (IRLS) and Frank-Wolfe (FW) optimization strategy.
Abstract: Ultra-high definition (UHD) 360 videos encoded in fine quality are typically too large to stream in its entirety over bandwidth (BW)-constrained networks. One popular approach is to interactively extract and send a spatial sub-region corresponding to a viewer’s current field-of-view (FoV) in a head-mounted display (HMD) for more BW-efficient streaming. Due to the non-negligible round-trip-time (RTT) delay between server and client, accurate head movement prediction foretelling a viewer’s future FoVs is essential. In this paper, we cast the head movement prediction task as a sparse directed graph learning problem: three sources of relevant information—collected viewers’ head movement traces, a 360 image saliency map, and a biological human head model—are distilled into a view transition Markov model. Specifically, we formulate a constrained maximum a posteriori (MAP) problem with likelihood and prior terms defined using the three information sources. We solve the MAP problem alternately using a hybrid iterative reweighted least square (IRLS) and Frank-Wolfe (FW) optimization strategy. In each FW iteration, a linear program (LP) is solved, whose runtime is reduced thanks to warm start initialization. Having estimated a Markov model from data, we employ it to optimize a tile-based 360 video streaming system. Extensive experiments show that our head movement prediction scheme noticeably outperformed existing proposals, and our optimized tile-based streaming scheme outperformed competitors in rate-distortion performance.

Journal ArticleDOI
TL;DR: In this article, a non-data-aided, expectation-maximization (EM)-based maximum a posteriori probability sparse channel estimation was proposed for underwater acoustic (UWA) communications.
Abstract: In this paper, a new channel estimation and equalization algorithm for underwater acoustic (UWA) communications is presented. The proposed algorithm is developed to meet the requirements of underwater time-varying sparse channels that undergo Rayleigh fading. In addition, the algorithm takes into consideration a path-based channel model which describes each received path with significant power by an attenuation factor, a Doppler scale, and a delay. Transmit diversity enabled by Alamouti space-frequency block coding coupled with orthogonal frequency division multiplexing is employed in the form of two transmitters and multiple receivers. The proposed, non-data-aided, expectation-maximization (EM)-based maximum a posteriori probability sparse channel estimation first estimates the channel transfer functions from each transmit antenna to the receiver. Then, the estimation performance is greatly improved by taking into account the sparseness of the UWA channel and refining the estimation based on the sparse solution that best matches the frequency-domain channel estimates obtained during the first phase of the estimation process. Sparse channel path delays and Doppler scaling factors are estimated by a novel technique called delay focusing. After that, slow time-varying, complex-valued channel path gains are estimated using a basis expansion model based on the discrete Legendre polynomial expansion. Computer simulation results show that the resulting channel estimation algorithm can achieve excellent mean-square error and symbol error rate for both generated data and semi-experimental data taken at Sapanca Lake in Turkey and is capable of handling some mismatch due to different fading models.

Posted Content
TL;DR: A new derivative-free approach to Bayesian inversion is proposed, which may be employed for posterior sampling or for maximum a posteriori estimation, and may be systematically refined.
Abstract: Inverse problems are ubiquitous because they formalize the integration of data with mathematical models. In many scientific applications the forward model is expensive to evaluate, and adjoint computations are difficult to employ; in this setting, derivative-free methods which involve a small number of forward model evaluations are an attractive proposition. Ensemble Kalman based interacting particle systems (and variants such as consensus based and unscented Kalman approaches) have proven empirically successful in this context, but suffer from the fact that they cannot be systematically refined to return the true solution, except in the setting of linear forward models. In this paper, we propose a new derivative-free approach to Bayesian inversion, which may be employed for posterior sampling or for maximum a posteriori estimation, and may be systematically refined. The method relies on a fast/slow system of stochastic differential equations for the local approximation of the gradient of the log-likelihood appearing in a Langevin diffusion. Furthermore, the method may be preconditioned by use of information from ensemble Kalman based methods (and variants), providing a methodology which leverages the documented advantages of those methods, whilst also being provably refinable. We define the methodology, highlighting its flexibility and many variants, provide a theoretical analysis of the proposed approach, and demonstrate its efficacy by means of numerical experiments.
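
In skeletal form, the Langevin diffusion at the heart of the method looks as follows. Here the log-posterior gradient is supplied directly rather than approximated by the paper's fast/slow SDE system, and the optional preconditioner is an assumption standing in for the ensemble-Kalman-derived one; annealing the noise to zero would drive the iterates toward the MAP point instead of sampling.

```python
import numpy as np

def langevin_sampler(grad_log_post, x0, step=1e-3, n_steps=5000,
                     precond=None, rng=None):
    """Unadjusted, optionally preconditioned Langevin dynamics:
        x <- x + step * P grad log pi(x) + sqrt(2 * step) * P^{1/2} xi,
    with xi ~ N(0, I). For small steps the iterates approximately sample
    the posterior pi."""
    rng = rng or np.random.default_rng(0)
    d = len(x0)
    P = np.eye(d) if precond is None else precond
    L = np.linalg.cholesky(P)          # P^{1/2} for correlated noise
    x = np.array(x0, dtype=float)
    out = np.empty((n_steps, d))
    for i in range(n_steps):
        noise = L @ rng.standard_normal(d)
        x = x + step * (P @ grad_log_post(x)) + np.sqrt(2 * step) * noise
        out[i] = x
    return out
```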