Maximum a posteriori estimation

About: Maximum a posteriori estimation is a research topic. Over the lifetime, 7486 publications have been published within this topic receiving 222291 citations. The topic is also known as: Maximum a posteriori, MAP & maximum a posteriori probability.
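For reference, the MAP estimate is the mode of the posterior distribution, i.e. the likelihood weighted by the prior; a minimal statement of the textbook definition (not tied to any particular paper listed below):

```latex
% MAP estimate of a parameter \theta given data x: the posterior mode, i.e. the
% likelihood weighted by the prior (the evidence p(x) does not affect the argmax).
\hat{\theta}_{\mathrm{MAP}}
  = \arg\max_{\theta}\, p(\theta \mid x)
  = \arg\max_{\theta}\, p(x \mid \theta)\, p(\theta)
```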


Papers
Journal ArticleDOI
TL;DR: The paper exploits the low user-activity factor in CDMA as a priori information to improve multiuser detection: user inactivity makes the symbol vector sparse, with entries drawn from a finite alphabet augmented by the zero symbol.
Abstract: The number of active users in code-division multiple access (CDMA) systems is often much lower than the spreading gain. The present paper fruitfully exploits this a priori information to improve the performance of multiuser detectors. A low activity factor manifests itself in a sparse symbol vector whose entries are drawn from a finite alphabet augmented by the zero symbol to capture user inactivity. The non-equiprobable symbols of the augmented alphabet motivate a sparsity-exploiting maximum a posteriori probability (S-MAP) criterion, which is shown to yield a cost comprising the ℓ2 least-squares error penalized by the ℓp-norm of the wanted symbol vector (p = 0, 1, 2). Related optimization problems appear in variable-selection (shrinkage) schemes developed for linear regression, as well as in the emerging field of compressive sampling (CS). The contribution of this work to such sparse CDMA systems is a gamut of sparsity-exploiting multiuser detectors that trade off performance against complexity. From the vantage point of CS and the least-absolute shrinkage and selection operator (Lasso) spectrum of applications, the contribution amounts to sparsity-exploiting algorithms for the case where the entries of the wanted signal vector adhere to finite-alphabet constraints.
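A minimal sketch of the kind of ℓ1-penalized (p = 1) detector the abstract describes, not the paper's algorithm as published: the relaxed cost 0.5*||y - Hs||^2 + lam*||s||_1 is minimized by iterative soft-thresholding, and the result is then projected onto the augmented alphabet {-1, 0, +1}. The function names, lam value, and toy dimensions are illustrative assumptions.

```python
# Illustrative sketch: l1-penalized detection solved by iterative soft-thresholding
# (ISTA), followed by projection onto the augmented alphabet {-1, 0, +1}, where the
# zero symbol models inactive users.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def s_map_detect(H, y, lam=0.1, n_iter=300):
    """Minimize 0.5*||y - H s||^2 + lam*||s||_1, then round to the finite alphabet."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2        # step size from the Lipschitz constant
    s = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ s - y)                  # gradient of the least-squares term
        s = soft_threshold(s - step * grad, step * lam)
    alphabet = np.array([-1.0, 0.0, 1.0])         # BPSK symbols plus the zero symbol
    return alphabet[np.argmin(np.abs(s[:, None] - alphabet[None, :]), axis=1)]

# Toy example: 8 potential users, spreading gain 16, only 2 of them active.
rng = np.random.default_rng(0)
H = rng.choice([-1.0, 1.0], size=(16, 8)) / np.sqrt(16)   # spreading matrix
s_true = np.zeros(8); s_true[[1, 5]] = [1.0, -1.0]
y = H @ s_true + 0.05 * rng.standard_normal(16)
print(s_map_detect(H, y))
```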

280 citations

Book
30 Sep 1999
TL;DR: This book develops optimal estimation of signals and system states, covering maximum likelihood, maximum a posteriori, and minimum mean-square error estimation, the Wiener filter, the Kalman filter and its extensions, and nonlinear estimation with applications such as target tracking.
Abstract (table of contents):
1 Introduction: 1.1 Signal Estimation. 1.2 State Estimation. 1.3 Least Squares Estimation. Problems.
2 Random Signals and Systems with Random Inputs: 2.1 Random Variables. 2.2 Random Discrete-Time Signals. 2.3 Discrete-Time Systems with Random Inputs. Problems.
3 Optimal Estimation: 3.1 Formulating the Problem. 3.2 Maximum Likelihood and Maximum a posteriori Estimation. 3.3 Minimum Mean-Square Error Estimation. 3.4 Linear MMSE Estimation. 3.5 Comparison of Estimation Methods. Problems.
4 The Wiener Filter: 4.1 Linear Time-Invariant MMSE Filters. 4.2 The FIR Wiener Filter. 4.3 The Noncausal Wiener Filter. 4.4 Toward the Causal Wiener Filter. 4.5 Derivation of the Causal Wiener Filter. 4.6 Summary of Wiener Filters. Problems.
5 Recursive Estimation and the Kalman Filter: 5.1 Estimation with Growing Memory. 5.2 Estimation of a Constant Signal. 5.3 The Recursive Estimation Problem. 5.4 The Signal/Measurement Model. 5.5 Derivation of the Kalman Filter. 5.6 Summary of Kalman Filter Equations. 5.7 Kalman Filter Properties. 5.8 The Steady-state Kalman Filter. 5.9 The SSKF as an Unbiased Estimator. 5.10 Summary. Problems.
6 Further Development of the Kalman Filter: 6.1 The Innovations. 6.2 Derivation of the Kalman Filter from the Innovations. 6.3 Time-varying State Model and Nonstationary Noises. 6.4 Modeling Errors. 6.5 Multistep Kalman Prediction. 6.6 Kalman Smoothing. Problems.
7 Kalman Filter Applications: 7.1 Target Tracking. 7.2 Colored Process Noise. 7.3 Correlated Noises. 7.4 Colored Measurement Noise. 7.5 Target Tracking with Polar Measurements. 7.6 System Identification. Problems.
8 Nonlinear Estimation: 8.1 The Extended Kalman Filter. 8.2 An Alternate Measurement Update. 8.3 Nonlinear System Identification Using Neural Networks. 8.4 Frequency Demodulation. 8.5 Target Tracking Using the EKF. 8.6 Multiple Target Tracking. Problems.
A The State Representation: A.1 Discrete-Time Case. A.2 Construction of State Models. A.3 Dynamical Properties. A.4 Discretization of Noise Covariance Matrices.
B The z-transform: B.1 Region of Convergence. B.2 z-transform Pairs and Properties. B.3 The Inverse z-transform.
C Stability of the Kalman Filter: C.1 Observability. C.2 Controllability. C.3 Types of Stability. C.4 Positive-Definiteness of P(n). C.5 An Upper Bound for P(n). C.6 A Lower Bound for P(n). C.7 A Useful Control Lemma. C.8 A Kalman Filter Stability Theorem. C.9 Bounds for P(n).
D The Steady-State Kalman Filter: D.2 A Stabilizability Lemma. D.3 Preservation of Ordering. D.5 Existence and Stability.
E Modeling Errors: E.1 Inaccurate Initial Conditions. E.2 Nonlinearities and Neglected States.
References.
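Since most of the book builds toward the Kalman filter, here is a minimal sketch of a single textbook predict/update step in standard notation (the book's own notation and derivation may differ). The assumed state model is x_k = A x_{k-1} + w_k, y_k = C x_k + v_k with noise covariances Q and R; the toy tracking values are illustrative.

```python
# Minimal textbook Kalman filter step (standard equations; assumed state model
# x_k = A x_{k-1} + w_k,  y_k = C x_k + v_k  with cov(w) = Q, cov(v) = R).
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    # Predict: propagate the state estimate and its covariance
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: correct using the innovation y - C x_pred
    S = C @ P_pred @ C.T + R                    # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Toy constant-velocity tracking example (illustrative values).
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])           # position/velocity dynamics
C = np.array([[1.0, 0.0]])                      # only position is measured
Q, R = 0.01 * np.eye(2), np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([1.2]), A, C, Q, R)
print(x)
```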

280 citations

Journal ArticleDOI
TL;DR: RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) time series; it allows users to float or fix parameters, impose priors, and perform Bayesian model comparison.
Abstract: RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) time series. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.
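Below is a minimal sketch of the kind of MAP fit RadVel automates; it does not use RadVel's API. It fits a simplified circular-orbit RV model by minimizing the negative log-posterior (Gaussian likelihood plus a Gaussian prior on the period) with scipy; all parameter values, priors, and function names here are illustrative assumptions.

```python
# Illustrative MAP fit of a circular-orbit radial-velocity model (not RadVel code).
import numpy as np
from scipy.optimize import minimize

def rv_model(t, P, K, t0, gamma):
    """Circular Keplerian RV: period P, semi-amplitude K, phase t0, offset gamma."""
    return K * np.sin(2 * np.pi * (t - t0) / P) + gamma

def neg_log_posterior(theta, t, rv, rv_err, P_mean=10.0, P_sigma=0.1):
    P, K, t0, gamma = theta
    if P <= 0 or K < 0:
        return np.inf                              # hard bounds act as flat priors
    resid = rv - rv_model(t, P, K, t0, gamma)
    nll = 0.5 * np.sum((resid / rv_err) ** 2)      # Gaussian likelihood
    nlp = 0.5 * ((P - P_mean) / P_sigma) ** 2      # Gaussian prior on the period
    return nll + nlp

# Synthetic data and MAP optimization (illustrative values).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 40))
rv = rv_model(t, 10.0, 5.0, 2.0, 0.3) + rng.normal(0, 1.0, t.size)
fit = minimize(neg_log_posterior, x0=[10.2, 4.0, 0.0, 0.0],
               args=(t, rv, np.ones_like(rv)), method="Nelder-Mead")
print(fit.x)   # MAP estimates of (P, K, t0, gamma)
```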

279 citations

Journal ArticleDOI
Philip H. S. Torr
TL;DR: This paper explores ways of automating the model selection process with specific emphasis on the least squares problem of fitting manifolds to data points, illustrated with respect to epipolar geometry.
Abstract: Computer vision often involves estimating models from visual input. Sometimes it is possible to fit several different models or hypotheses to a set of data, and a decision must be made as to which is most appropriate. This paper explores ways of automating the model-selection process, with specific emphasis on the least-squares problem of fitting manifolds (in particular algebraic varieties, e.g. lines, algebraic curves, planes) to data points, illustrated with respect to epipolar geometry. The approach is Bayesian and the contribution is threefold. First, a new Bayesian description of the problem is laid out that supersedes the author's previous maximum likelihood formulations; this formulation reveals some hidden elements of the problem. Second, an algorithm, 'MAPSAC', is provided to obtain the robust MAP estimate of an arbitrary manifold. Third, a Bayesian model-selection paradigm is proposed: the Bayesian formulation of the manifold-fitting problem uncovers an elegant solution, for which a new method, 'GRIC', for approximating the posterior probability of each putative model is derived. This approximation bears some similarity to the penalized likelihoods used by AIC, BIC and MDL; however, it is far more accurate in situations involving large numbers of latent variables whose number increases with the data. This is demonstrated both empirically and theoretically.
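A much-simplified sketch of the MAPSAC idea, applied here to 2-D line fitting rather than the paper's general manifolds: candidate models from random minimal samples are scored by a truncated quadratic cost (a robust MAP-style cost) instead of RANSAC's inlier count, and the lowest-cost hypothesis is kept. The threshold T, trial count, and helper names are illustrative assumptions.

```python
# Simplified MAPSAC-style robust line fit: score hypotheses by a truncated
# quadratic cost rather than by counting inliers; keep the lowest-cost model.
import numpy as np

def mapsac_line(points, n_trials=500, T=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_cost, best_line = np.inf, None
    for _ in range(n_trials):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        # Line through p and q in normal form n . x = d, with |n| = 1
        n = np.array([q[1] - p[1], p[0] - q[0]])
        norm = np.linalg.norm(n)
        if norm == 0:
            continue
        n, d = n / norm, (n / norm) @ p
        err2 = (points @ n - d) ** 2
        cost = np.minimum(err2, T).sum()        # truncated quadratic (robust MAP-style) cost
        if cost < best_cost:
            best_cost, best_line = cost, (n, d)
    return best_line, best_cost

# Toy data: noisy points on y = 2x + 1 plus gross outliers.
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
inliers = np.column_stack([x, 2 * x + 1]) + rng.normal(0, 0.05, (50, 2))
outliers = rng.uniform(0, 10, (20, 2))
print(mapsac_line(np.vstack([inliers, outliers])))
```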

277 citations

Journal ArticleDOI
TL;DR: It is shown that a linear-domain model represents prior information for the estimation of reflectance and illumination better than a logarithmic-domain model.
Abstract: In this paper, a new probabilistic method for image enhancement is presented based on a simultaneous estimation of illumination and reflectance in the linear domain. We show that the linear-domain model can represent prior information for the estimation of reflectance and illumination better than the logarithmic domain. A maximum a posteriori (MAP) formulation is employed with priors on both illumination and reflectance. To estimate illumination and reflectance effectively, an alternating direction method of multipliers (ADMM) is adopted to solve the MAP problem. The experimental results show the satisfactory performance of the proposed method, which obtains reflectance and illumination with visually pleasing enhanced results and a promising convergence rate. Compared with other tested methods, the proposed method yields comparable or better results on both subjective and objective assessments.
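As a rough illustration of the kind of linear-domain MAP objective such a method minimizes (the specific priors below are generic assumptions, not necessarily the paper's), with I the observed image, L the illumination, R the reflectance, and the product taken elementwise:

```latex
% Generic linear-domain MAP objective for illumination/reflectance decomposition.
% The smoothness and sparsity priors shown here are illustrative choices only.
\min_{R,\,L}\;\; \|I - R \circ L\|_2^2
  \;+\; \alpha \,\|\nabla L\|_2^2   % prior: spatially smooth illumination
  \;+\; \beta  \,\|\nabla R\|_1     % prior: piecewise-smooth reflectance
```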

276 citations


Network Information
Related Topics (5)
Estimator
97.3K papers, 2.6M citations
86% related
Deep learning
79.8K papers, 2.1M citations
85% related
Convolutional neural network
74.7K papers, 2M citations
85% related
Feature extraction
111.8K papers, 2.1M citations
85% related
Image processing
229.9K papers, 3.5M citations
84% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    64
2022    125
2021    211
2020    244
2019    250
2018    236