
Showing papers on "Maximum a posteriori estimation published in 1996"


Journal ArticleDOI
T.K. Moon1
TL;DR: The EM (expectation-maximization) algorithm is ideally suited to problems of parameter estimation, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation.
Abstract: A common task in signal processing is the estimation of the parameters of a probability distribution function. Perhaps the most frequently encountered estimation problem is the estimation of the mean of a signal in noise. In many parameter estimation problems the situation is more complicated because direct access to the data necessary to estimate the parameters is impossible, or some of the data are missing. Such difficulties arise when an outcome is a result of an accumulation of simpler outcomes, or when outcomes are clumped together, for example, in a binning or histogram operation. There may also be data dropouts or clustering in such a way that the number of underlying data points is unknown (censoring and/or truncation). The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observation. The EM algorithm is presented at a level suitable for signal processing practitioners who have had some exposure to estimation theory.

2,573 citations
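
As an illustration of the many-to-one mapping described above, here is a minimal EM sketch for a two-component one-dimensional Gaussian mixture, where the hidden component labels play the role of the missing data. All names and initializations are illustrative choices, not taken from the paper:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    # Crude initialization from the data range.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability of each component for each sample.
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = gamma.sum(axis=0)
        pi = nk / len(x)
        mu = (gamma * x[:, None]).sum(axis=0) / nk
        var = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
print(em_gmm_1d(x))
```

Each iteration provably does not decrease the likelihood, which is the property that makes EM attractive for this class of problems.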


Journal ArticleDOI
TL;DR: In this paper, the authors present a functional formulation of the groundwater flow inverse problem that is sufficiently general to accommodate most commonly used inverse algorithms, including the Gaussian maximum a posteriori (GAP) algorithm.
Abstract: This paper presents a functional formulation of the groundwater flow inverse problem that is sufficiently general to accommodate most commonly used inverse algorithms. Unknown hydrogeological properties are assumed to be spatial functions that can be represented in terms of a (possibly infinite) basis function expansion with random coefficients. The unknown parameter function is related to the measurements used for estimation by a "forward operator" which describes the measurement process. In the particular case considered here, the parameter of interest is the large-scale log hydraulic conductivity, the measurements are point values of log conductivity and piezometric head, and the forward operator is derived from an upscaled groundwater flow equation. The inverse algorithm seeks the "most probable" or maximum a posteriori estimate of the unknown parameter function. When the measurement errors and parameter function are Gaussian and independent, the maximum a posteriori estimate may be obtained by minimizing a least squares performance index which can be partitioned into goodness-of-fit and prior terms. When the parameter is a stationary random function the prior portion of the performance index is equivalent to a regularization term which imposes a smoothness constraint on the estimate. This constraint tends to make the problem well-posed by limiting the range of admissible solutions. The Gaussian maximum a posteriori problem may be solved with variational methods, using functional generalizations of Gauss-Newton or gradient-based search techniques. Several popular groundwater inverse algorithms are either special cases of, or variants on, the functional maximum a posteriori algorithm. These algorithms differ primarily with respect to the way they describe spatial variability and the type of search technique they use (linear versus nonlinear). The accuracy of estimates produced by both linear and nonlinear inverse algorithms may be measured in terms of a Bayesian extension of the Cramer-Rao lower bound on the estimation error covariance. This bound suggests how parameter identifiability can be improved by modifying the problem structure and adding new measurements.

564 citations
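
For the linear Gaussian case the abstract describes, the MAP estimate minimizes a least squares index whose normal equations combine the forward operator, the noise variance, and the prior covariance. A minimal numerical sketch, with a random matrix standing in for the (linearized) groundwater flow forward operator:

```python
import numpy as np

# Illustrative linear forward model d = G m + noise; G is a stand-in
# for the (linearized) groundwater flow forward operator.
rng = np.random.default_rng(1)
n_param, n_data = 50, 20
G = rng.normal(size=(n_data, n_param))
m_true = rng.normal(size=n_param)
sigma = 0.1
d = G @ m_true + sigma * rng.normal(size=n_data)

# Prior: zero-mean Gaussian with covariance C.  An identity is used here;
# a stationary spatial covariance would take its place in practice.
C_inv = np.eye(n_param)

# MAP estimate minimizes ||d - G m||^2 / sigma^2 + m' C^{-1} m, whose
# normal equations are (G'G / sigma^2 + C^{-1}) m = G'd / sigma^2.
A = G.T @ G / sigma**2 + C_inv
b = G.T @ d / sigma**2
m_map = np.linalg.solve(A, b)
print(np.linalg.norm(m_map - m_true))
```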


Journal ArticleDOI
TL;DR: This work proposes a new approach to statistically optimal image reconstruction based on direct optimization of the MAP criterion, which requires approximately the same amount of computation per iteration as EM-based approaches, but the new method converges much more rapidly.
Abstract: Over the past years there has been considerable interest in statistically optimal reconstruction of cross-sectional images from tomographic data. In particular, a variety of such algorithms have been proposed for maximum a posteriori (MAP) reconstruction from emission tomographic data. While MAP estimation requires the solution of an optimization problem, most existing reconstruction algorithms take an indirect approach based on the expectation maximization (EM) algorithm. We propose a new approach to statistically optimal image reconstruction based on direct optimization of the MAP criterion. The key to this direct optimization approach is a greedy pixel-wise computation known as iterative coordinate descent (ICD). We propose a novel method for computing the ICD updates, which we call ICD/Newton-Raphson. We show that ICD/Newton-Raphson requires approximately the same amount of computation per iteration as EM-based approaches, but the new method converges much more rapidly (in our experiments, typically five to ten iterations). Other advantages of the ICD/Newton-Raphson method are that it is easily applied to MAP estimation of transmission tomograms, and typical convex constraints, such as positivity, are easily incorporated.

493 citations
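
A hedged sketch of the coordinate descent idea: each pixel is visited in turn and updated with a one-dimensional Newton-Raphson step, with positivity enforced by clipping. The quadratic objective below is a toy stand-in for the tomographic log-posterior, not the paper's actual criterion:

```python
import numpy as np

def icd_newton(grad, hess_diag, x0, n_sweeps=10):
    """Coordinate descent with 1-D Newton-Raphson updates (sketch).

    grad(x, j) and hess_diag(x, j) return the first and second partial
    derivatives of the objective with respect to coordinate j.
    """
    x = x0.copy()
    for _ in range(n_sweeps):
        for j in range(len(x)):            # greedy pixel-wise sweep
            step = grad(x, j) / hess_diag(x, j)
            x[j] = max(x[j] - step, 0.0)   # positivity constraint via clipping
    return x

# Toy quadratic objective 0.5 x'Ax - b'x standing in for the MAP criterion.
rng = np.random.default_rng(2)
M = rng.normal(size=(30, 30))
A = M.T @ M + np.eye(30)                   # symmetric positive definite
b = rng.normal(size=30)
x = icd_newton(lambda x, j: A[j] @ x - b[j],
               lambda x, j: A[j, j],
               np.zeros(30))
print(x[:5])
```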


Journal ArticleDOI
TL;DR: In this paper, approximate expressions for the mean and variance of implicitly defined estimators of unconstrained continuous parameters are derived using the implicit function theorem, the Taylor expansion, and the chain rule.
Abstract: Many estimators in signal processing problems are defined implicitly as the maximum of some objective function. Examples of implicitly defined estimators include maximum likelihood, penalized likelihood, maximum a posteriori, and nonlinear least squares estimation. For such estimators, exact analytical expressions for the mean and variance are usually unavailable. Therefore, investigators usually resort to numerical simulations to examine the properties of the mean and variance of such estimators. This paper describes approximate expressions for the mean and variance of implicitly defined estimators of unconstrained continuous parameters. We derive the approximations using the implicit function theorem, the Taylor expansion, and the chain rule. The expressions are defined solely in terms of the partial derivatives of whatever objective function one uses for estimation. As illustrations, we demonstrate that the approximations work well in two tomographic imaging applications with Poisson statistics. We also describe a "plug-in" approximation that provides a remarkably accurate estimate of variability even from a single noisy Poisson sinogram measurement. The approximations should be useful in a wide range of estimation problems.

426 citations
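
In outline, the approximations come from differentiating the stationarity condition of the objective through the implicit function theorem. A hedged reconstruction in generic notation (the paper's symbols may differ): writing the estimator as the maximizer of an objective Phi(theta, Y), with check-theta the estimate at the mean data bar-Y,

```latex
% Hedged reconstruction in generic notation; all derivatives are evaluated
% at (\check{\theta}, \bar{Y}).  \nabla^{20}\Phi is the Hessian in \theta,
% and \nabla^{11}\Phi the matrix of mixed partials in \theta and Y.
\hat{\theta}(Y) \approx \check{\theta}
  + \left[-\nabla^{20}\Phi\right]^{-1} \nabla^{11}\Phi \, (Y - \bar{Y}),
\qquad
\operatorname{Cov}\{\hat{\theta}\} \approx
  \left[-\nabla^{20}\Phi\right]^{-1} \nabla^{11}\Phi \,
  \operatorname{Cov}\{Y\} \,
  \left(\nabla^{11}\Phi\right)^{\top}
  \left[-\nabla^{20}\Phi\right]^{-1}
```

The expressions involve only partial derivatives of the objective, which is what makes them applicable to ML, penalized likelihood, MAP, and nonlinear least squares alike.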


Journal ArticleDOI
TL;DR: In this article, the authors investigate the performance of enumeration and several sampling based techniques such as a Gibbs' sampler, PGS and several multiple maximum a posteriori (MAP) algorithms for a simple geophysical problem of inversion of resistivity sounding data.
Abstract: The posterior probability density function (PPD), σ(m|d obs ), of earth model m, where d obs are the measured data, describes the solution of a geophysical inverse problem, when a Bayesian inference model is used to describe the problem. In many applications, the PPD is neither analytically tractable nor easily approximated and simple analytic expressions for the mean and variance of the PPD are not available. Since the complete description of the PPD is impossible in the highly multi-dimensional model space of many geophysical applications, several measures such as the highest posterior density regions, marginal PPD and several orders of moments are often used to describe the solutions. Calculation of such quantities requires evaluation of multidimensional integrals. A faster alternative to enumeration and blind Monte-Carlo integration is importance sampling which may be useful in several applications. Thus how to draw samples of m from the PPD becomes an important aspect of geophysical inversion such that importance sampling can be used in the evaluation of these multi-dimensional integrals. Importance sampling can be carried out most efficiently by a Gibbs' sampler (GS). We also introduce a method which we called parallel Gibbs' sampler (PGS) based on genetic algorithms (GA) and show numerically that the results from the two samplers are nearly identical. We first investigate the performance of enumeration and several sampling based techniques such as a GS, PGS and several multiple maximum a posteriori (MAP) algorithms for a simple geophysical problem of inversion of resistivity sounding data. Several non-linear optimization methods based on simulated annealing (SA), GA and some of their variants can be devised which can be made to reach very close to the maximum of the PPD. Such MAP estimation algorithms also sample different points in the model space. By repeating these MAP inversions several times, it is possible to sample adequately the most significant portion(s) of the PPD and all these models can be used to construct the marginal PPD, mean) covariance, etc. We observe that the GS and PGS results are identical and indistinguishable from the enumeration scheme. Multiple MAP algorithms slightly underestimate the posterior variances although the correlation values obtained by all the methods agree very well. Multiple MAP estimation required 0.3% of the computational effort of enumeration and 40% of the effort of a GS or PGS for this problem. Next, we apply GS to the inversion of a marine seismic data set to quantify uncertainties in the derived model, given the prior distribution determined from several common midpoint gathers.

305 citations
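
A minimal Gibbs sampler sketch over a discretized model space: each parameter is resampled in turn from its conditional PPD with the others held fixed. The two-parameter Gaussian "posterior" below is a toy stand-in for a resistivity-inversion PPD, and no burn-in handling is shown:

```python
import numpy as np

def gibbs_sample(log_ppd, grids, n_samples=2000, rng=None):
    """Gibbs sampler over a discretized model space (illustrative sketch).

    log_ppd(m) returns the unnormalized log posterior of model vector m;
    grids[i] is the array of admissible values for parameter i.
    """
    rng = rng or np.random.default_rng(0)
    m = np.array([g[0] for g in grids], dtype=float)
    samples = []
    for _ in range(n_samples):
        for i, g in enumerate(grids):      # resample each parameter in turn
            trial = np.repeat(m[None, :], len(g), axis=0)
            trial[:, i] = g
            logp = np.array([log_ppd(t) for t in trial])
            p = np.exp(logp - logp.max())  # conditional PPD on the grid
            m[i] = rng.choice(g, p=p / p.sum())
        samples.append(m.copy())
    return np.array(samples)

# Toy 2-D Gaussian "PPD" standing in for an inversion posterior.
log_ppd = lambda m: -0.5 * (m[0] ** 2 + (m[1] - 1.0) ** 2 / 0.25)
grids = [np.linspace(-3, 3, 61), np.linspace(-2, 4, 61)]
s = gibbs_sample(log_ppd, grids)
print(s.mean(axis=0), s.var(axis=0))  # compare with (0, 1) and (1, 0.25)
```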


Journal ArticleDOI
TL;DR: The approach builds geometric-probabilistic models for road image generation using Gibbs distributions, and produces two boundaries for each road, or four boundaries when a mid-road barrier is present.
Abstract: This paper presents an automated approach to finding main roads in aerial images. The approach is to build geometric-probabilistic models for road image generation, using Gibbs distributions. Then, given an image, roads are found by MAP (maximum a posteriori probability) estimation. The MAP estimation is handled by partitioning an image into windows, realizing the estimation in each window through the use of dynamic programming, and then, starting with the windows containing high-confidence estimates, using dynamic programming again to obtain optimal global estimates of the roads present. The approach is model-based from the outset and is completely different from those appearing in the published literature. It produces two boundaries for each road, or four boundaries when a mid-road barrier is present.

292 citations
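
The per-window dynamic programming step can be sketched as a maximum-score path search through an array of column scores. This is a generic DP illustration under the assumption of one boundary row per image column, not a reproduction of the paper's Gibbs model:

```python
import numpy as np

def best_path(score, max_jump=1):
    """Max-score path through the columns of a score image via DP (sketch).

    score[r, c] is the evidence for the boundary passing through row r in
    column c; the path may move at most max_jump rows per column.
    """
    n_rows, n_cols = score.shape
    total = score.copy()
    back = np.zeros((n_rows, n_cols), dtype=int)
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - max_jump), min(n_rows, r + max_jump + 1)
            prev = np.argmax(total[lo:hi, c - 1]) + lo
            back[r, c] = prev
            total[r, c] += total[prev, c - 1]
    # Trace the optimal path back from the best final row.
    path = [int(np.argmax(total[:, -1]))]
    for c in range(n_cols - 1, 0, -1):
        path.append(back[path[-1], c])
    return path[::-1]

rng = np.random.default_rng(3)
img = rng.normal(size=(40, 60))
img[20] += 5.0                       # a bright horizontal "road edge"
print(best_path(img)[:10])           # should hug row 20
```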


Journal ArticleDOI
TL;DR: A maximum a posteriori (MAP) approach to linearized image reconstruction using knowledge of the noise variance of the measurements and the covariance of the conductivity distribution has the advantage of an intuitive interpretation of the algorithm parameters as well as fast image reconstruction.
Abstract: Dynamic electrical impedance tomography (EIT) images changes in the conductivity distribution of a medium from low frequency electrical measurements made at electrodes on the medium surface. Reconstruction of the conductivity distribution is an under-determined and ill-posed problem, typically requiring either simplifying assumptions or regularization based on a priori knowledge. This paper presents a maximum a posteriori (MAP) approach to linearized image reconstruction using knowledge of the noise variance of the measurements and the covariance of the conductivity distribution. This approach has the advantage of an intuitive interpretation of the algorithm parameters as well as fast (near real time) image reconstruction. In order to compare this approach to existing algorithms, the authors develop figures of merit to measure the reconstructed image resolution, the noise amplification of the image reconstruction, and the fidelity of positioning in the image. Finally, the authors develop a communications systems approach to calculate the probability of detection of a conductivity contrast in the reconstructed image as a function of the measurement noise and the reconstruction algorithm used.

273 citations
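
The linearized MAP reconstruction described above reduces to a single regularized solve combining the sensitivity matrix, the measurement noise covariance, and the prior conductivity covariance. A sketch with made-up matrices:

```python
import numpy as np

# Illustrative stand-ins: J is the linearized sensitivity (Jacobian) of
# boundary voltages w.r.t. conductivity, R the measurement noise
# covariance, and C the prior covariance of the conductivity change.
rng = np.random.default_rng(4)
n_meas, n_pix = 104, 256
J = rng.normal(size=(n_meas, n_pix))
R_inv = np.eye(n_meas) / 0.01           # noise variance 0.01
C_inv = np.eye(n_pix)

dv = rng.normal(size=n_meas)            # measured voltage change

# MAP estimate of the conductivity change (linear-Gaussian case):
# argmin (dv - J ds)' R^{-1} (dv - J ds) + ds' C^{-1} ds.
ds = np.linalg.solve(J.T @ R_inv @ J + C_inv, J.T @ R_inv @ dv)
print(ds.shape)
```

Because J, R, and C are fixed in advance, the solve can be folded into one precomputed reconstruction matrix, which is what makes near-real-time imaging feasible.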


Journal ArticleDOI
TL;DR: An algorithm herein named iterative multigrid dynamic programming (IMDP) is introduced, a fully data-driven scheme with no ad-hoc parameters, leading to computation times compatible with operational use.
Abstract: Presents a new method for endocardial (inner) and epicardial (outer) contour estimation from sequences of echocardiographic images. The framework herein introduced is fine-tuned for parasternal short axis views at the papillary muscle level. The underlying model is probabilistic; it captures the relevant features of the physical mechanisms of image generation and of the heart morphology. Contour sequences are assumed to be two-dimensional noncausal first-order Markov random processes; each variable has a spatial index and a temporal index. The image pixels are modeled as Rayleigh distributed random variables with means depending on their positions (inside endocardium, between endocardium and pericardium, or outside pericardium). The complete probabilistic model is built under the Bayesian framework. As estimation criterion the maximum a posteriori (MAP) is adopted. To solve the optimization problem to which one is led (joint estimation of contours and distribution parameters), the authors introduce an algorithm herein named iterative multigrid dynamic programming (IMDP). It is a fully data-driven scheme with no ad-hoc parameters. The method is implemented on an ordinary workstation, leading to computation times compatible with operational use. Experiments with simulated and real images are presented.

170 citations


Journal ArticleDOI
TL;DR: In this paper, the geostatistical approach to the inverse problem is discussed with emphasis on the importance of structural analysis, where instead of adjusting a grid-dependent and potentially large number of block conductivities (or other distributed parameters), a small number of structural parameters are fitted to the data.

117 citations


Journal ArticleDOI
TL;DR: Four different decoder structures are characterized and their error rate performance compared in both additive white Gaussian noise (AWGN) and flat Rayleigh-fading channels, based on extensive simulations of the short frames used for speech transmission in the uplink of a digital mobile radio system.
Abstract: A novel class of binary parallel concatenated recursive systematic convolutional codes termed turbo-codes, having remarkable error correcting capabilities, has previously been proposed. However, the decoding of turbo-codes relies on the application of soft input/soft output decoders. Such decoders can be realized either using maximum a posteriori (MAP) symbol estimators or MAP sequence estimators, e.g., the a priori soft output Viterbi algorithm (APRI-SOVA). In this paper, the structure of turbo-code encoders as well as of turbo-code decoders is described. In particular, four different decoder structures are characterized, and their error rate performance is compared in both additive white Gaussian noise (AWGN) and flat Rayleigh-fading channels, based on extensive simulation results for short frames used for speech transmission in the uplink of a digital mobile radio system applying code division multiple access and joint detection. The decoders are investigated as follows: 1) the MAP symbol estimator-based approach used by Berrou et al. [1993], 2) the MAP symbol estimator-based approach used by Robertson [1994], 3) a new reduced complexity MAP symbol estimator-based approach [Jung 1995], and 4) an APRI-SOVA based approach used by Hagenauer et al. [1994].

101 citations


Journal ArticleDOI
TL;DR: The authors represent the standard ramp filter operator of the filtered-back-projection (FBP) reconstruction in different bases composed of Haar and Daubechies compactly supported wavelets to formulate a multiscale tomographic reconstruction technique in which the object is reconstructed at multiple scales or resolutions.
Abstract: The authors represent the standard ramp filter operator of the filtered-back-projection (FBP) reconstruction in different bases composed of Haar and Daubechies compactly supported wavelets. The resulting multiscale representation of the ramp-filter matrix operator is approximately diagonal. The accuracy of this diagonal approximation becomes better as wavelets with larger numbers of vanishing moments are used. This wavelet-based representation enables the authors to formulate a multiscale tomographic reconstruction technique in which the object is reconstructed at multiple scales or resolutions. A complete reconstruction is obtained by combining the reconstructions at different scales. The authors' multiscale reconstruction technique has the same computational complexity as the FBP reconstruction method. It differs from other multiscale reconstruction techniques in that (1) the object is defined through a one-dimensional multiscale transformation of the projection domain, and (2) the authors explicitly account for noise in the projection data by calculating maximum a posteriori probability (MAP) multiscale reconstruction estimates based on a chosen fractal prior on the multiscale object coefficients. The computational complexity of this MAP solution is also the same as that of the FBP reconstruction. This result is in contrast to commonly used methods of statistical regularization, which result in computationally intensive optimization algorithms.

Journal ArticleDOI
TL;DR: A supervised texture segmentation scheme is proposed in this article that results in an optimal segmentation of the textured image including images from remote sensing.
Abstract: A supervised texture segmentation scheme is proposed in this article. The texture features are extracted by filtering the given image using a filter bank consisting of a number of Gabor filters with different frequencies, resolutions, and orientations. The segmentation model consists of feature formation, partition, and competition processes. In the feature formation process, the texture features from the Gabor filter bank are modeled as a Gaussian distribution. The image partition is represented as a noncausal Markov random field (MRF) by means of the partition process. The competition process constrains the overall system to have a single label for each pixel. Using these three random processes, the a posteriori probability of each pixel label is expressed as a Gibbs distribution. The corresponding Gibbs energy function is implemented as a set of constraints on each pixel by using a neural network model based on the Hopfield network. A deterministic relaxation strategy is used to evolve the minimum energy state of the network, corresponding to a maximum a posteriori (MAP) probability. This results in an optimal segmentation of the textured image. The performance of the scheme is demonstrated on a variety of images, including images from remote sensing.

Journal ArticleDOI
TL;DR: A fast, robust, and completely data-driven Bayesian solution to the problem of locating two straight and parallel road edges in images that are acquired from a stationary millimeter-wave radar platform positioned near ground-level is developed.
Abstract: This paper addresses the problem of locating two straight and parallel road edges in images that are acquired from a stationary millimeter-wave radar platform positioned near ground-level. A fast, robust, and completely data-driven Bayesian solution to this problem is developed, and it has applications in automotive vision enhancement. The method employed in this paper makes use of a deformable template model of the expected road edges, a two-parameter log-normal model of the ground-level millimeter-wave (GLEM) radar imaging process, a maximum a posteriori (MAP) formulation of the straight edge detection problem, and a Monte Carlo algorithm to maximize the posterior density. Experimental results are presented by applying the method on GLEM radar images of actual roads. The performance of the method is assessed against ground truth for a variety of road scenes.

Book ChapterDOI
01 Jan 1996
TL;DR: In this paper, a simple modification to the treatment of inlier observations was proposed to reduce the excess kurtosis in the distribution of the observation disturbances and improve the performance of the quasi-maximum likelihood procedure.
Abstract: Jacquier, Polson and Rossi (1994, Journal of Business and Economic Statistics) have proposed a Bayesian hierarchical model and Markov Chain Monte Carlo methodology for parameter estimation and smoothing in a stochastic volatility model, where the logarithm of the conditional variance follows an autoregressive process. In sampling experiments, their estimators perform particularly well relative to a quasi-maximum likelihood approach, in which the nonlinear stochastic volatility model is linearized via a logarithmic transformation and the resulting linear state-space model is treated as Gaussian. In this paper, we explore a simple modification to the treatment of inlier observations which reduces the excess kurtosis in the distribution of the observation disturbances and improves the performance of the quasi-maximum likelihood procedure. The method we propose can be carried out with commercial software.

Patent
Minoru Namekata1
09 Aug 1996
TL;DR: In this article, an adaptive maximum likelihood sequence estimation apparatus includes a first estimation unit for estimating a transmission signal sequence from received signals on the basis of an estimated transmission path impulse response, and a second estimation unit estimates an estimated received signal at time k.
Abstract: An adaptive maximum likelihood sequence estimation apparatus includes a first estimation unit for estimating a transmission signal sequence from received signals on the basis of an estimated transmission path impulse response. A second estimation unit estimates an estimated received signal at time k on the basis of a known signal sequence or the transmission signal sequence estimated by the first estimation unit, and a transmission path impulse response estimated at time k-1. An error signal generation unit generates an error signal on the basis of a received signal at time k and the estimated received signal at time k. A third estimation unit estimates a transmission path impulse response at time k using a predetermined adaptive algorithm on the basis of the error signal. Furthermore, the third estimation unit estimates a transmission path impulse response by a non-recursive calculation during the reception period of a known signal sequence of the received signal, and estimates a transmission path impulse response by a recursive calculation during the reception period of an unknown data signal sequence of the received signals following the known signal sequence period.
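
The recursive estimation in the third estimation unit can be illustrated with a standard LMS-style adaptive filter: the estimated received signal is formed from the decided symbols and the current impulse-response estimate, and the error signal drives the update. This is a generic textbook sketch, not the patented apparatus:

```python
import numpy as np

def lms_channel_estimate(symbols, received, n_taps=3, mu=0.05):
    """Recursive (LMS) estimation of a channel impulse response (sketch).

    symbols:  transmitted (or decided) symbol sequence
    received: corresponding received samples
    """
    h = np.zeros(n_taps)                         # impulse-response estimate
    for k in range(n_taps - 1, len(symbols)):
        x = symbols[k - n_taps + 1:k + 1][::-1]  # most recent symbol first
        y_hat = h @ x                            # estimated received signal
        err = received[k] - y_hat                # error signal at time k
        h = h + mu * err * x                     # adaptive update
    return h

rng = np.random.default_rng(5)
h_true = np.array([1.0, 0.5, -0.2])
s = rng.choice([-1.0, 1.0], size=5000)           # BPSK training symbols
r = np.convolve(s, h_true)[:len(s)] + 0.05 * rng.normal(size=len(s))
print(lms_channel_estimate(s, r))                # close to h_true
```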

Journal ArticleDOI
TL;DR: A new maximum a posteriori (MAP) criterion for determination of the number of classes is proposed and its performance is compared to other approaches by computer simulations.
Abstract: In recent years, many image segmentation approaches have been based on Markov random fields (MRFs). The main assumption of the MRF approaches is that the class parameters are known or can be obtained from training data. In this paper the authors propose a novel method that relaxes this assumption and allows for simultaneous parameter estimation and vector image segmentation. The method is based on a tree structure (TS) algorithm which is combined with Besag's iterated conditional modes (ICM) procedure. The TS algorithm provides a mechanism for choosing initial cluster centers needed for initialization of the ICM. The authors' method has been tested on various one-dimensional (1-D) and multidimensional medical images and shows excellent performance. In this paper the authors also address the problem of cluster validation. They propose a new maximum a posteriori (MAP) criterion for determination of the number of classes and compare its performance to other approaches by computer simulations.

Journal ArticleDOI
TL;DR: In this article, a modified maximum likelihood estimate of the scale parameter of the Rayleigh distribution is proposed, using a hyperbolic approximation instead of a linear approximation for a function which appears in the maximum likelihood equation.
Abstract: For defining a modified maximum likelihood estimate of the scale parameter of the Rayleigh distribution, a hyperbolic approximation is used instead of a linear approximation for a function which appears in the maximum likelihood equation. This estimate is shown to perform better, in the sense of accuracy and simplicity of calculation, than the one based on a linear approximation for the same function. The estimate of the scale parameter so obtained is also shown to be asymptotically unbiased. Numerical computations for random samples of different sizes from the Rayleigh distribution, using type II censoring, are carried out and shown to be better than those obtained by Lee et al. (1980).
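
For reference, in the uncensored case the maximum likelihood equation for the Rayleigh scale parameter has a closed form and needs no approximation; the hyperbolic approximation above concerns the censored-sample likelihood equation. A sketch of the uncensored estimator:

```python
import numpy as np

def rayleigh_mle_scale(x):
    """Closed-form ML estimate of the Rayleigh scale (uncensored case).

    Rayleigh pdf: f(x) = (x / s^2) exp(-x^2 / (2 s^2)),  x > 0.
    Setting d(log-likelihood)/ds = 0 gives s^2 = sum(x^2) / (2 n).
    """
    return np.sqrt(np.sum(x ** 2) / (2 * len(x)))

rng = np.random.default_rng(6)
x = rng.rayleigh(scale=2.0, size=10000)
print(rayleigh_mle_scale(x))   # should be close to 2.0
```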

Journal ArticleDOI
TL;DR: This paper formulates the problem of image interpretation as the maximum a posteriori (MAP) estimate of a properly defined probability distribution function (PDF) and shows that a Bayesian network can be used to represent this PDF as well as the domain knowledge needed for interpretation.
Abstract: The problem of image interpretation is one of inference with the help of domain knowledge. In this paper, we formulate the problem as the maximum a posteriori (MAP) estimate of a properly defined probability distribution function (PDF). We show that a Bayesian network can be used to represent this PDF as well as the domain knowledge needed for interpretation. The Bayesian network may be relaxed to obtain the set of optimum interpretations.

Journal ArticleDOI
TL;DR: Analytical as well as simulation results show the existence of a "mismatch" between the source and the channel (the performance degrades as the channel noise becomes more correlated), which is reduced by the use of a simple rate-one convolutional encoder.
Abstract: We consider maximum a posteriori (MAP) detection of a binary asymmetric Markov source transmitted over a binary Markov channel. The MAP detector observes a long (but finite) sequence of channel outputs and determines the most probable source sequence. In some cases, the MAP detector can be implemented by simple rules such as the "believe what you see" rule or the "guess zero (or one) regardless of what you see" rule. We provide necessary and sufficient conditions under which this is true. When these conditions are satisfied, the exact bit error probability of the sequence MAP detector can be determined. We examine in detail two special cases of the above source: (i) binary independent and identically distributed (i.i.d.) source and (ii) binary symmetric Markov source. In case (i), our simulations show that the performance of the MAP detector improves as the channel noise becomes more correlated. Furthermore, a comparison of the proposed system with a (substantially more complex) traditional tandem source-channel coding scheme portrays superior performance for the proposed scheme at relatively high channel bit error rates. In case (ii), analytical as well as simulation results show the existence of a "mismatch" between the source and the channel (the performance degrades as the channel noise becomes more correlated). This mismatch is reduced by the use of a simple rate-one convolutional encoder.
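
For case (i) with a memoryless (uncorrelated) channel, the per-bit MAP rule reduces to a posterior comparison, which makes the "believe what you see" regime explicit. A sketch for an i.i.d. binary source over a binary symmetric channel; the paper's correlated Markov channel requires sequence detection instead:

```python
import numpy as np

def map_bit_detect(y, p1, eps):
    """Per-bit MAP detection of an i.i.d. source over a BSC (sketch).

    p1:  source probability of a 1;  eps: channel crossover probability.
    Decides x_hat = argmax_x P(x) P(y | x) for each received bit.
    """
    like1 = np.where(y == 1, 1 - eps, eps)   # P(y | x = 1)
    like0 = np.where(y == 0, 1 - eps, eps)   # P(y | x = 0)
    return (p1 * like1 > (1 - p1) * like0).astype(int)

rng = np.random.default_rng(7)
p1, eps = 0.3, 0.1
x = (rng.random(100000) < p1).astype(int)
y = x ^ (rng.random(100000) < eps).astype(int)
x_hat = map_bit_detect(y, p1, eps)
print((x_hat != x).mean())   # empirical bit error rate, about eps here
# With eps < min(p1, 1 - p1) this is the "believe what you see" rule;
# otherwise it degenerates to guessing the more probable source bit.
```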

Journal ArticleDOI
TL;DR: This paper deals with the radar detection of targets embedded in K-distributed clutter with partially correlated texture, using a recursive implementation that exploits the correlation properties of both the texture and speckle components.

Journal ArticleDOI
TL;DR: A retrieval technique for estimating rainfall rate and precipitating cloud parameters from spaceborne multifrequency microwave radiometers based on the maximum a posteriori probability criterion applied to a simulated data base of cloud structures and related upward brightness temperatures is described.
Abstract: A retrieval technique for estimating rainfall rate and precipitating cloud parameters from spaceborne multifrequency microwave radiometers is described. The algorithm is based on the maximum a posteriori probability criterion (MAP) applied to a simulated data base of cloud structures and related upward brightness temperatures. The cloud data base is randomly generated by imposing the mean values, the variances, and the correlations among the hydrometeor contents at each layer of the cloud vertical structure, derived from the outputs of a time-dependent microphysical cloud model. The simulated upward brightness temperatures are computed by applying a plane-parallel radiative transfer scheme. Given a multifrequency brightness temperature measurement, the MAP criterion is used to select the most probable cloud structure within the cloud-radiation data base. The algorithm is computationally efficient and has been numerically tested and compared against other methods. Its potential to retrieve rainfall over land has been explored by means of Special Sensor Microwave/Imager measurements for a rainfall event over Central Italy. The comparison of estimated rain rates with available raingauge measurements is also shown.
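
The retrieval step can be sketched as a MAP search over the precomputed database: choose the stored cloud structure whose simulated brightness temperatures best match the measurement, with the misfit weighted by the measurement noise and combined with the structure's prior probability. All numbers below are made up for illustration:

```python
import numpy as np

def map_retrieve(t_meas, t_db, log_prior, noise_var):
    """Select the most probable database entry given a measurement (sketch).

    t_db[i] holds the simulated multifrequency brightness temperatures of
    cloud structure i; log_prior[i] its prior log probability.
    """
    # Gaussian measurement error: log P(t_meas | i), up to a constant.
    log_like = -0.5 * np.sum((t_meas - t_db) ** 2 / noise_var, axis=1)
    return np.argmax(log_like + log_prior)      # MAP criterion

rng = np.random.default_rng(8)
n_entries, n_freq = 1000, 4
t_db = 150 + 100 * rng.random((n_entries, n_freq))   # toy database (kelvin)
log_prior = np.full(n_entries, -np.log(n_entries))   # uniform prior here
truth = 42
t_meas = t_db[truth] + 2.0 * rng.normal(size=n_freq) # noisy measurement
print(map_retrieve(t_meas, t_db, log_prior, noise_var=4.0))  # expect 42
```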

Proceedings ArticleDOI
16 Sep 1996
TL;DR: A Bayesian approach is described for concealing errors in digital video encoded with the MPEG1 or MPEG2 compression scheme; a maximum a posteriori estimate of the missing macroblocks and motion vectors is derived from the underlying model.
Abstract: In ATM networks cell loss causes data to be dropped in the channel. When digital video is transmitted over these networks one must be able to reconstruct the missing data so that the impact of these errors is minimized. In this paper we describe a Bayesian approach to conceal these errors. Assuming that the digital video has been encoded using the MPEG1 or MPEG2 compression scheme, each frame is modeled as a Markov random field. A maximum a posteriori estimate of the missing macroblocks and motion vectors is described based on the model.

Book ChapterDOI
01 Jan 1996
TL;DR: Maximum a posteriori (MAP) estimation algorithms are developed for hidden Markov models and for a number of useful parametric densities commonly used in automatic speech recognition and natural language processing.
Abstract: A mathematical framework for Bayesian adaptive learning of the parameters of stochastic models is presented. Maximum a posteriori (MAP) estimation algorithms are then developed for hidden Markov models and for a number of useful parametric densities commonly used in automatic speech recognition and natural language processing. The MAP formulation offers a way to combine existing prior knowledge and a small set of newly acquired task-specific data in an optimal manner. Other techniques can also be combined with Bayesian learning to improve adaptation efficiency and effectiveness.
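
The flavor of the MAP formulation for a single Gaussian mean fits in a few lines: with a conjugate prior, the MAP estimate interpolates between the prior mean and the sample mean of the newly acquired data, with the prior weight controlling how quickly the new data take over. Notation is illustrative, not the chapter's:

```python
import numpy as np

def map_adapt_mean(x_new, mu_prior, tau=10.0):
    """MAP re-estimate of a Gaussian mean from adaptation data (sketch).

    tau plays the role of the prior's equivalent sample count: with n new
    observations, mu_map = (tau * mu_prior + n * mean(x)) / (tau + n).
    """
    n = len(x_new)
    return (tau * mu_prior + n * np.mean(x_new)) / (tau + n)

mu_prior = 0.0                             # mean from the prior model
x_new = np.array([1.2, 0.8, 1.1, 0.9])     # small task-specific sample
print(map_adapt_mean(x_new, mu_prior))     # pulled toward 1.0, not all the way
```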

Proceedings ArticleDOI
07 May 1996
TL;DR: A framework for maximum a posteriori (MAP) adaptation of large scale HMM recognizers is presented and each of the HMM models is adapted based on an interpolation of MAP estimates obtained under varying degrees of sharing.
Abstract: We present a framework for maximum a posteriori (MAP) adaptation of large scale HMM recognizers. First we review the standard MAP adaptation for Gaussian mixtures. We then show how MAP can be used to estimate transformations which are shared across many parameters. Finally, we combine both techniques: each of the HMM models is adapted based on an interpolation of MAP estimates obtained under varying degrees of sharing. We evaluate this algorithm for adaptation of a continuous density HMM with 96K Gaussians and show that very satisfactory improvements can be achieved, especially for adaptation of non-native speakers of American English.

Journal ArticleDOI
TL;DR: In this article, the authors present an approach to the nonlinear inverse scattering problem using the extended Born approximation (EBA) on the basis of methods from the fields of multiscale and statistical signal processing.
Abstract: In this paper, we present an approach to the nonlinear inverse scattering problem using the extended Born approximation (EBA) on the basis of methods from the fields of multiscale and statistical signal processing. By posing the problem directly in the wavelet transform domain, regularization is provided through the use of a multiscale prior statistical model. Using the maximum a posteriori (MAP) framework, we introduce the relative Cramer-Rao bound (RCRB) as a tool for analyzing the level of detail in a reconstruction supported by a data set as a function of the physics, the source-receiver geometry, and the nature of our prior information. The MAP estimate is determined using a novel implementation of the Levenberg-Marquardt algorithm in which the RCRB is used to achieve a substantial reduction in the effective dimensionality of the inversion problem with minimal degradation in performance. Additional reduction in complexity is achieved by taking advantage of the sparse structure of the matrices defining the EBA in scale space. An inverse electrical conductivity problem arising in geophysical prospecting applications provides the vehicle for demonstrating the analysis and algorithmic techniques developed in this paper.

Proceedings ArticleDOI
05 Nov 1996
TL;DR: In this paper, a generalization of restoration theory for the problem of super-resolution reconstruction (SRR) of an image is presented, where a set of low quality images is given, and a single improved quality image which fuses their information is required.
Abstract: This paper presents a generalization of restoration theory for the problem of super-resolution reconstruction (SRR) of an image. In the SRR problem, a set of low quality images is given, and a single improved quality image which fuses their information is required. We present a model for this problem, and show how the classic restoration theory tools (the maximum likelihood (ML) estimator, the maximum a posteriori (MAP) probability estimator, and projection onto convex sets (POCS)) can be applied as a solution. A hybrid algorithm which joins the benefits of POCS and ML is suggested.
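
A hedged sketch of the ML ingredient in one dimension: each low-quality frame is modeled as a shifted, decimated copy of the high-resolution signal, and gradient descent on the summed data misfit fuses the frames. The circular shift-and-decimate model is an assumption made for illustration, not the paper's imaging model:

```python
import numpy as np

def sr_ml_gradient(frames, shifts, factor, n_hr, n_iter=200, step=0.1):
    """ML super-resolution by gradient descent on sum ||y_k - W_k x||^2.

    Each low-res frame y_k is modeled as the high-res signal x circularly
    shifted by shifts[k] samples and then decimated by `factor`.
    """
    def forward(x, s):
        return np.roll(x, -s)[::factor]        # shift, then downsample

    def adjoint(r, s):
        up = np.zeros(n_hr)
        up[::factor] = r                       # zero-fill upsample
        return np.roll(up, s)                  # shift back

    x = np.zeros(n_hr)
    for _ in range(n_iter):
        grad = sum(adjoint(forward(x, s) - y, s)
                   for y, s in zip(frames, shifts))
        x -= step * grad                       # gradient step on the misfit
    return x

rng = np.random.default_rng(9)
x_true = np.sin(np.linspace(0, 4 * np.pi, 64))
shifts = [0, 1, 2, 3]
frames = [np.roll(x_true, -s)[::4] + 0.01 * rng.normal(size=16)
          for s in shifts]
x_hat = sr_ml_gradient(frames, shifts, factor=4, n_hr=64)
print(np.abs(x_hat - x_true).max())            # small: frames fused
```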

Book ChapterDOI
23 Oct 1996
TL;DR: This work applies the information-theoretic Minimum Message Length principle to the problem of estimating the concentration parameter, κ, of spherical Fisher distributions, and shows that the MML estimator compares quite favourably against alternative Bayesian methods.
Abstract: The information-theoretic Minimum Message Length (MML) principle leads to a general invariant Bayesian technique for point estimation. We apply MML to the problem of estimating the concentration parameter, κ, of spherical Fisher distributions. (Assuming a uniform prior on the field direction, μ, MML simply returns the Maximum Likelihood estimate for μ.) In earlier work, we dealt with the von Mises circular case, d=2. We say something about the general case for arbitrary d ≥ 2 and how to derive the MML estimator, but here we only carry out a complete calculation for the spherical distribution, with d=3. Our simulation results show that the MML estimator compares very favourably against the classical methods of Maximum Likelihood and marginal Maximum Likelihood (R.A. Fisher (1953), Schou (1978)). Our simulation results also show that the MML estimator compares quite favourably against alternative Bayesian methods.
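
For the spherical case d = 3, the classical ML estimate of κ that MML is compared against solves coth(κ) − 1/κ = R̄, where R̄ is the mean resultant length of the sample. A sketch with a numerical root finder; the sampling scheme below is a crude stand-in for a proper Fisher sampler:

```python
import numpy as np
from scipy.optimize import brentq

def fisher_kappa_ml(samples):
    """ML estimate of the Fisher concentration kappa for d = 3 (sketch).

    samples: (n, 3) array of unit vectors.  Solves A_3(kappa) = R_bar,
    where A_3(kappa) = coth(kappa) - 1/kappa is the mean resultant
    length implied by the model.
    """
    r_bar = np.linalg.norm(samples.mean(axis=0))
    a3 = lambda k: 1.0 / np.tanh(k) - 1.0 / k - r_bar
    return brentq(a3, 1e-6, 1e6)   # A_3 increases from 0 to 1 on (0, inf)

# Crude concentrated samples: normalized Gaussian perturbations of a mean
# direction (good enough to sanity-check the estimator, not a true Fisher draw).
rng = np.random.default_rng(10)
mu = np.array([0.0, 0.0, 1.0])
v = mu + 0.25 * rng.normal(size=(5000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
print(fisher_kappa_ml(v))
```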

Proceedings ArticleDOI
07 May 1996
TL;DR: A method of updating a hidden Markov model for speaker verification using a small amount of new data for each speaker is described; the model parameters are adapted to the new data by maximum a posteriori (MAP) estimation.
Abstract: We describe a method of updating a hidden Markov model (HMM) for speaker verification using a small amount of new data for each speaker. The HMM is updated by adapting the model parameters to the new data by maximum a posteriori (MAP) estimation. The initial values of the a priori parameters in MAP estimation are set using training speech used for first creating a speaker HMM. We also present a method of resetting the a priori threshold as the updating of the model proceeds. Evaluation of the performance of the two methods using 10 male speakers showed that the verification error rate was about 42% of that without updating.

Journal ArticleDOI
TL;DR: An algorithm for the computation of the maximum likelihood and the maximum a posteriori estimates of the parameters of PMD models is presented and is a special case of a more general algorithm that can be used for the whole class of LRMs.
Abstract: In this paper, we consider a class of models for two-way matrices with binary entries of 0 and 1. First, we consider Boolean matrix decomposition, conceptualize it as a latent response model (LRM) and, by making use of this conceptualization, generalize it to a larger class of matrix decomposition models. Second, probability matrix decomposition (PMD) models are introduced as a probabilistic version of this larger class of deterministic matrix decomposition models. Third, an algorithm for the computation of the maximum likelihood (ML) and the maximum a posteriori (MAP) estimates of the parameters of PMD models is presented. This algorithm is an EM-algorithm, and is a special case of a more general algorithm that can be used for the whole class of LRMs. And fourth, as an example, a PMD model is applied to data on decision making in psychiatric diagnosis.

Journal ArticleDOI
TL;DR: In this article, it is shown that the maximum likelihood estimator in a model used in the statistical analysis of computer experiments is asymptotically efficient.
Abstract: It is shown that the maximum likelihood estimator in a model used in the statistical analysis of computer experiments is asymptotically efficient.