
Showing papers on "Maximum a posteriori estimation published in 1991"


Journal ArticleDOI
TL;DR: The algorithms are evaluated with respect to improving automatic recognition of speech in the presence of additive noise and shown to outperform other enhancement methods in this application.
Abstract: The basis of an improved form of iterative speech enhancement for single-channel inputs is sequential maximum a posteriori estimation of the speech waveform and its all-pole parameters, followed by imposition of constraints upon the sequence of speech spectra. The approaches impose intraframe and interframe constraints on the input speech signal. Properties of the line spectral pair representation of speech allow for an efficient and direct procedure for application of many of the constraint requirements. Substantial improvement over the unconstrained method is observed in a variety of domains. Informed listener quality evaluation tests and objective speech quality measures demonstrate the technique's effectiveness for additive white Gaussian noise. A consistent terminating point of the iterative technique is shown. The current systems result in substantially improved speech quality and linear predictive coding (LPC) parameter estimation with only a minor increase in computational requirements. The algorithms are evaluated with respect to improving automatic recognition of speech in the presence of additive noise and shown to outperform other enhancement methods in this application.
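As a rough illustration of the unconstrained iterative scheme this paper builds on (fit all-pole parameters to the current speech estimate, Wiener-filter the noisy input, repeat), here is a minimal sketch. The paper's intraframe/interframe and line-spectral-pair constraints are not implemented, and the function names, model order, and assumption of a known noise variance are choices made for the sketch.

```python
# Sketch of the unconstrained iterative step: fit all-pole (LPC) parameters to
# the current speech estimate, Wiener-filter the noisy frame with the implied
# AR spectrum, and repeat. Assumes the noise variance is known and the frame
# length does not exceed nfft; names and defaults are illustrative.
import numpy as np

def lpc(x, order):
    """All-pole coefficients via the autocorrelation (normal-equation) method."""
    r = np.correlate(x, x, mode="full")[len(x) - 1: len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return a, r[0] - a @ r[1:order + 1]        # coefficients, residual energy

def enhance_frame(noisy, noise_var, order=10, iters=4, nfft=512):
    s_hat = noisy.astype(float).copy()
    for _ in range(iters):
        a, g = lpc(s_hat, order)
        A = np.fft.rfft(np.concatenate(([1.0], -a)), nfft)
        p_speech = (g / len(noisy)) / np.abs(A) ** 2   # AR power spectrum
        gain = p_speech / (p_speech + noise_var)       # noncausal Wiener gain
        s_hat = np.fft.irfft(np.fft.rfft(noisy, nfft) * gain, nfft)[:len(noisy)]
    return s_hat
```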

263 citations


Journal ArticleDOI
TL;DR: The authors consider both the maximum a posteriori probability (MAP) estimate and the minimum mean-squared error (MMSE) estimate for image estimation and image restoration for images modeled as compound Gauss-Markov random fields.
Abstract: Algorithms for obtaining approximations to statistically optimal estimates for images modeled as compound Gauss-Markov random fields are discussed. The authors consider both the maximum a posteriori probability (MAP) estimate and the minimum mean-squared error (MMSE) estimate for image estimation and image restoration. Compound image models consist of several submodels having different characteristics along with an underlying structure model which governs transitions between these image submodels. Two different compound random field models are employed, the doubly stochastic Gaussian (DSG) random field and a compound Gauss-Markov (CGM) random field. The authors present MAP estimators for DSG and CGM random fields using simulated annealing. A fast-converging algorithm called deterministic relaxation, which, however, converges to only a locally optimal MAP estimate, is also presented as an alternative for reducing computational loading on sequential machines. For comparison purposes, the authors include results on the fixed-lag smoothing MMSE estimator for the DSG field and its suboptimal M-algorithm approximation.
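For intuition about the deterministic-relaxation alternative mentioned above, the sketch below runs a synchronous, locally optimal relaxation for a plain (non-compound) Gauss-Markov prior with Gaussian noise. The compound DSG/CGM structure process and the simulated-annealing estimator are not reproduced; all names, the synchronous update order, and the periodic border handling are simplifications.

```python
# Jacobi-style deterministic relaxation toward a local MAP estimate under a
# simple (non-compound) Gauss-Markov prior: each pixel is repeatedly replaced by
# the minimiser of (x - y)^2 / noise_var + (x - m)^2 / prior_var,
# where m is the current mean of its 4-connected neighbours (periodic borders
# via np.roll).
import numpy as np

def relax_restore(y, noise_var, prior_var, iters=30):
    x = y.astype(float).copy()
    for _ in range(iters):
        m = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
             + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
        x = (prior_var * y + noise_var * m) / (prior_var + noise_var)
    return x
```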

203 citations


Book ChapterDOI
01 Jan 1991
TL;DR: A number of proofs are presented that equate the outputs of a Multi-Layer Perceptron (MLP) classifier and the optimal Bayesian discriminant function for asymptotically large sets of statistically independent training samples.
Abstract: This paper presents a number of proofs that equate the outputs of a Multi-Layer Perceptron (MLP) classifier and the optimal Bayesian discriminant function for asymptotically large sets of statistically independent training samples. Two broad classes of objective functions are shown to yield Bayesian discriminant performance. The first class comprises “reasonable error measures,” which achieve Bayesian discriminant performance by engendering classifier outputs that asymptotically equate to a posteriori probabilities. This class includes the mean-squared error (MSE) objective function as well as a number of information-theoretic objective functions. The second class comprises classification figures of merit (CFM_mono), which yield a qualified approximation to Bayesian discriminant performance by engendering classifier outputs that asymptotically identify the maximum a posteriori probability for a given input. Conditions and relationships for Bayesian discriminant functional equivalence are given for both classes of objective functions. Differences between the two classes are then discussed briefly in the context of how they might affect MLP classifier generalization, given relatively small training sets.
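A quick numerical check of the first claim (MSE-trained outputs approach posterior probabilities) can be done with a plain least-squares fit; the two-Gaussian data, the small polynomial basis, and the sample size below are arbitrary choices for the sketch, not anything taken from the paper.

```python
# Tiny numerical check: a least-squares (MSE) fit of 0/1 class labels
# approaches the posterior P(class 1 | x) as the sample grows, up to the
# approximation power of the chosen basis.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
labels = rng.integers(0, 2, n)                        # equal priors
x = rng.normal(loc=np.where(labels == 1, 1.0, -1.0))  # class-conditional N(+-1, 1)

Phi = np.column_stack([np.ones(n), x, x**2, x**3])
w, *_ = np.linalg.lstsq(Phi, labels.astype(float), rcond=None)

xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
mse_out = np.column_stack([np.ones_like(xs), xs, xs**2, xs**3]) @ w
true_post = 1.0 / (1.0 + np.exp(-2.0 * xs))           # exact P(1|x) for this setup
print(np.round(mse_out, 3), np.round(true_post, 3))   # roughly comparable values
```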

127 citations


Journal ArticleDOI
TL;DR: In this article, an image reconstruction method motivated by positron emission tomography (PET) is discussed and an iterative approach which requires the solution of simple quadratic equations is proposed.
Abstract: An image reconstruction method motivated by positron emission tomography (PET) is discussed. The measurements tend to be noisy and so the reconstruction method should incorporate the statistical nature of the noise. The authors set up a discrete model to represent the physical situation and arrive at a nonlinear maximum a posteriori probability (MAP) formulation of the problem. An iterative approach which requires the solution of simple quadratic equations is proposed. The authors also present a methodology which allows them to experimentally optimize an image reconstruction method for a specific medical task and to evaluate the relative efficacy of two reconstruction methods for a particular task in a manner which meets the high standards set by the methodology of statistical hypothesis testing. The new MAP algorithm is compared with a method which maximizes likelihood and with two variants of the filtered backprojection method.

99 citations


Journal ArticleDOI
TL;DR: Simulations show that in some cases, it is possible to avoid data association and directly compute the maximum a posteriori mixed track.
Abstract: The authors consider the application of hidden Markov models (HMMs) to the problem of multitarget tracking, specifically, to the problem of tracking multiple frequency lines. The idea of a mixed track is introduced, a multitrack Viterbi algorithm is described, and a detailed analysis of the underlying Markov model is presented. Simulations show that in some cases it is possible to avoid data association and directly compute the maximum a posteriori mixed track. Some practical aspects of the algorithm are discussed and simulation results are presented.
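The single-line building block of the multitrack algorithm is the standard Viterbi recursion over frequency bins. A minimal log-domain sketch is given below; the transition model (an arbitrary K x K matrix) and the array names are assumptions rather than the paper's exact formulation.

```python
# Log-domain Viterbi for tracking a single frequency line through a spectrogram:
# returns the MAP sequence of frequency-bin indices, one per frame.
import numpy as np

def viterbi_track(log_lik, log_trans):
    """log_lik: (T, K) per-frame log-likelihood of each frequency bin;
       log_trans: (K, K) log transition probabilities between bins."""
    T, K = log_lik.shape
    delta = log_lik[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (previous bin, next bin)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(K)] + log_lik[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```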

88 citations


Proceedings ArticleDOI
14 Apr 1991
TL;DR: Signal parameter estimators which are less sensitive to perturbations in the array manifold are presented and a compact expression for the MAP Cramer-Rao bound (CRB) on the signal and array parameter estimates is derived.
Abstract: Signal parameter estimators which are less sensitive to perturbations in the array manifold are presented. A parametrized stochastic model for the array uncertainties is introduced. The unknown array parameters can include the individual gain and phase responses of the sensors as well as their positions. Based on this model, a maximum a posteriori (MAP) estimator is formulated. This results in a fairly complex optimization problem which is computationally expensive. The MAP estimator is simplified by exploiting properties of the weighted subspace fitting method. An approximate method that further reduces the complexity is also presented, assuming small array perturbations. A compact expression for the MAP Cramer-Rao bound (CRB) on the signal and array parameter estimates is derived. A simulation study indicates that the proposed robust estimation procedures achieve the MAP-CRB even for moderate sample sizes. >

62 citations


Journal ArticleDOI
TL;DR: This paper outlines the empirical Bayes approach, covering development and comparison of approaches based on parametric and non-parametric priors, discussion of the importance of accounting for uncertainty in the estimated prior, comparison of the output and interpretation of fixed and random effects approaches to estimating population values, estimation of histograms, and identification of key considerations.
Abstract: A compound sampling model, where a unit-specific parameter is sampled from a prior distribution and then observations are generated by a sampling distribution depending on the parameter, underlies a wide variety of biopharmaceutical data. For example, in a multi-centre clinical trial the true treatment effect varies from centre to centre. Observed treatment effects deviate from these true effects through sampling variation. Knowledge of the prior distribution allows use of Bayesian analysis to compute the posterior distribution of clinic-specific treatment effects (frequently summarized by the posterior mean and variance). More commonly, with the prior not completely specified, observed data can be used to estimate the prior and use it to produce the posterior distribution: an empirical Bayes (or variance component) analysis. In the empirical Bayes model the estimated prior mean gives the typical treatment effect and the estimated prior standard deviation indicates the heterogeneity of treatment effects. In both the Bayes and empirical Bayes approaches, estimated clinic effects are shrunken towards a common value from estimates based on single clinics. This shrinkage produces more efficient estimates. In addition, the compound model helps structure approaches to ranking and selection, provides adjustments for multiplicity, allows estimation of the histogram of clinic-specific effects, and structures incorporation of external information. This paper outlines the empirical Bayes approach. Coverage includes development and comparison of approaches based on parametric priors (for example, a Gaussian prior with unknown mean and variance) and non-parametric priors, discussion of the importance of accounting for uncertainty in the estimated prior, comparison of the output and interpretation of fixed and random effects approaches to estimating population values, estimating histograms, and identification of key considerations in the use and interpretation of empirical Bayes methods.
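A minimal sketch of the parametric (Gaussian prior) empirical Bayes shrinkage described above: estimate the prior mean and between-centre variance from the centre-level estimates by the method of moments, then pull each centre's effect toward the overall mean. The function and variable names, and the use of a simple moment estimator rather than the paper's full machinery, are choices made for the sketch.

```python
# Gaussian empirical Bayes shrinkage of centre-specific treatment effects.
import numpy as np

def eb_shrink(effects, se):
    """effects: observed treatment effect per centre; se: its standard error."""
    mu_hat = np.mean(effects)
    # method-of-moments estimate of the between-centre (prior) variance
    tau2_hat = max(np.var(effects, ddof=1) - np.mean(se**2), 0.0)
    # posterior mean: precision-weighted blend of the centre estimate and prior mean
    shrink = tau2_hat / (tau2_hat + se**2)
    return mu_hat + shrink * (effects - mu_hat), mu_hat, tau2_hat

# illustrative use on made-up numbers
effects = np.array([0.8, 0.2, 1.5, -0.3, 0.6])
se = np.full(5, 0.5)
post_mean, mu, tau2 = eb_shrink(effects, se)
```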

57 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to present a specific but natural signal model, called a changing regression model, and to point out a method for computing an optimal estimate for the segmentation problem in linear time.

49 citations


Journal ArticleDOI
Neri Merhav, Yariv Ephraim
TL;DR: A Bayesian approach to classification of parametric information sources whose statistics are not explicitly given is studied and applied to recognition of speech signals based upon Markov modeling, and a classifier based on generalized likelihood ratios is developed.
Abstract: A Bayesian approach to classification of parametric information sources whose statistics are not explicitly given is studied and applied to recognition of speech signals based upon Markov modeling. A classifier based on generalized likelihood ratios, which depends only on the available training and testing data, is developed and shown to be optimal in the sense of achieving the highest asymptotic exponential rate of decay of the error probability. The proposed approach is compared to the standard classification approach used in speech recognition, in which the parameters for the sources are first estimated from the given training data, and then the maximum a posteriori decision rule is applied using the estimated statistics.

43 citations


Journal ArticleDOI
TL;DR: An object extraction problem based on the Gibbs random field model is discussed; a neural network, which is a modified version of Hopfield's, is suggested for solving the problem and is found to be highly robust and immune to noise.
Abstract: An object extraction problem based on the Gibbs random field model is discussed. The maximum a posteriori probability (MAP) estimate of a scene based on a noise-corrupted realization is found to be computationally exponential in nature. A neural network, which is a modified version of that of Hopfield, is suggested for solving the problem. A single neuron is assigned to every pixel. Each neuron is supposed to be connected only to all of its nearest neighbours. The energy function of the network is designed in such a way that its minimum value corresponds to the MAP estimate of the scene. The dynamics of the network are described. A possible hardware realization of a neuron is also suggested. The technique is implemented on a set of noisy images and found to be highly robust and immune to noise.
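The network update amounts to an asynchronous, energy-decreasing sweep over binary pixel neurons. The sketch below shows one such update rule for an Ising-type smoothness prior with a Gaussian data term under unit noise variance; the specific energy, the beta value, and the crude border handling are simplifications, not the paper's exact network or hardware formulation.

```python
# ICM-style sweep for binary object extraction: one neuron per pixel,
# 4-nearest-neighbour connections, each update lowers (or keeps) the energy
#   sum_i (y_i - v_i)^2 / 2  +  beta' * sum_<ij> [v_i != v_j]
# (unit noise variance assumed; beta below absorbs the constant factors).
import numpy as np

def extract_object(y, beta=0.75, sweeps=10):
    """y: noisy image roughly in [0, 1]; returns a binary label field v."""
    v = (y > 0.5).astype(int)
    H, W = y.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                # sum of the 4-connected neighbours (borders handled by clamping)
                s = (v[max(i - 1, 0), j] + v[min(i + 1, H - 1), j]
                     + v[i, max(j - 1, 0)] + v[i, min(j + 1, W - 1)])
                # positive gain means labelling this pixel 1 lowers the energy
                gain = (y[i, j] - 0.5) + beta * (s - 2)
                v[i, j] = 1 if gain > 0 else 0
    return v
```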

42 citations


Dissertation
01 Nov 1991
TL;DR: A new approach to image segmentation is presented that integrates region and boundary information within a multiresolution framework and is effective at extracting boundary orientations from data with low signal-to-noise ratios.
Abstract: Image segmentation is an important area in the general field of image processing and computer vision. It is a fundamental part of the `low level' aspects of computer vision and has many practical applications such as in medical imaging, industrial automation and satellite imagery. Traditional methods for image segmentation have approached the problem either from localisation in class space using region information, or from localisation in position, using edge or boundary information. More recently, however, attempts have been made to combine both region and boundary information in order to overcome the inherent limitations of using either approach alone. In this thesis, a new approach to image segmentation is presented that integrates region and boundary information within a multiresolution framework. The role of uncertainty is described, which imposes a limit on the simultaneous localisation in both class and position space. It is shown how a multiresolution approach allows the trade-off between position and class resolution and ensures both robustness in noise and efficiency of computation. The segmentation is based on an image model derived from a general class of multiresolution signal models, which incorporates both region and boundary features. A four stage algorithm is described consisting of: generation of a low-pass pyramid, separate region and boundary estimation processes and an integration strategy. Both the region and boundary processes consist of scale-selection, creation of adjacency graphs, and iterative estimation within a general framework of maximum a posteriori (MAP) estimation and decision theory. Parameter estimation is performed in situ, and the decision processes are both flexible and spatially local, thus avoiding assumptions about global homogeneity or size and number of regions which characterise some of the earlier algorithms. A method for robust estimation of edge orientation and position is described which addresses the problem in the form of a multiresolution minimum mean square error (MMSE) estimation. The method effectively uses the spatial consistency of output of small kernel gradient operators from different scales to produce more reliable edge position and orientation and is effective at extracting boundary orientations from data with low signal-to-noise ratios. Segmentation results are presented for a number of synthetic and natural images which show the cooperative method to give accurate segmentations at low signal-to-noise ratios (0 dB) and to be more effective than previous methods at capturing complex region shapes.

Journal ArticleDOI
TL;DR: A study of block-oriented motion estimation algorithms is presented, and their application to motion-compensated temporal interpolation is described and the use of multiresolution techniques, essential for satisfactory performance, is discussed.
Abstract: A study of block-oriented motion estimation algorithms is presented, and their application to motion-compensated temporal interpolation is described. In the proposed approach, the motion field within each block is described by a function of a few parameters that can represent typical local motion vector fields. A probabilistic formulation is then used to develop maximum-likelihood (ML) and maximum a posteriori probability (MAP) estimation criteria. The MAP criterion takes into account the dependence of the motion fields in adjacent blocks. A procedure for minimizing the resulting objective function based on the Gauss-Newton algorithm is presented. The use of multiresolution techniques, essential for satisfactory performance, is discussed. Experimental results evaluating the algorithms for the task of motion-compensated temporal interpolation are presented. The relative complexity of the algorithms is also discussed.
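A crude way to see the MAP idea at work for block motion is an exhaustive search that scores each candidate displacement by a data (SSD) term plus a quadratic penalty toward an already-estimated neighbouring block's vector. The sketch below does exactly that; the block size, search range, left-neighbour prior, and weight lam are stand-ins rather than the paper's parametric motion model or Gauss-Newton minimization.

```python
# Exhaustive block motion search with an SSD data term and a quadratic penalty
# toward the previously estimated left-neighbour vector (a crude MAP criterion).
import numpy as np

def block_map_motion(prev, curr, block=8, search=4, lam=0.1):
    H, W = curr.shape
    motion = np.zeros((H // block, W // block, 2))
    for bi in range(H // block):
        for bj in range(W // block):
            i0, j0 = bi * block, bj * block
            target = curr[i0:i0 + block, j0:j0 + block]
            prior = motion[bi, bj - 1] if bj > 0 else np.zeros(2)
            best, best_cost = np.zeros(2), np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ri, rj = i0 + di, j0 + dj
                    if ri < 0 or rj < 0 or ri + block > H or rj + block > W:
                        continue
                    ref = prev[ri:ri + block, rj:rj + block]
                    cost = (np.sum((target - ref) ** 2)
                            + lam * np.sum((np.array([di, dj]) - prior) ** 2))
                    if cost < best_cost:
                        best, best_cost = np.array([di, dj]), cost
            motion[bi, bj] = best
    return motion
```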

Journal ArticleDOI
TL;DR: In this article, the reliability of a multi-component stress-strength system when both stress and strength are independent and identically distributed (iid) Burr random variables is investigated using both maximum likelihood and Bayes estimators.

Journal ArticleDOI
TL;DR: Compared with maximum likelihood and filtered-backprojection approaches, the results obtained using the maximum a posteriori probability with the intensity-level information demonstrated qualitative and quantitative improvement in localizing the regions of varying intensities.
Abstract: A multinomial image model is proposed which uses intensity-level information for reconstruction of contiguous image regions. The intensity-level information assumes that image intensities are relatively constant within contiguous regions over the image-pixel array and that intensity levels of these regions are determined either empirically or theoretically by information criteria. These conditions may be valid, for example, for cardiac blood-pool imaging, where the intensity levels (or radionuclide activities) of myocardium, blood-pool, and background regions are distinct and the activities within each region of muscle, blood, or background are relatively uniform. To test the model, a mathematical phantom over a 64 x 64 array was constructed. The phantom had three contiguous regions. Each region had a different intensity level. Measurements from the phantom were simulated using an emission-tomography geometry. Fifty projections were generated over 180 degrees, with 64 equally spaced parallel rays per projection. Projection data were randomized to contain Poisson noise. Image reconstructions were performed using an iterative maximum a posteriori probability procedure. The contiguous regions corresponding to the three intensity levels were automatically segmented. Simultaneously, the edges of the regions were sharpened. Noise in the reconstructed images was significantly suppressed. Convergence of the iterative procedure to the phantom was observed. Compared with maximum likelihood and filtered-backprojection approaches, the results obtained using the maximum a posteriori probability with the intensity-level information demonstrated qualitative and quantitative improvement in localizing the regions of varying intensities.

Book ChapterDOI
07 Jul 1991
TL;DR: Compared with maximum likelihood and a Bayesian approach using a Gibbs prior, the results obtained using the image model demonstrated the improvement in identifying the contiguous regions and the associated activities.
Abstract: We evaluate an image model for simultaneous reconstruction and segmentation of piecewise continuous images. The model assumes that the intensities of the piecewise continuous image are relatively constant within contiguous regions and that the intensity levels of these regions can be determined either empirically or theoretically before reconstruction. The assumptions might be valid, for example, in cardiac blood-pool imaging or in transmission tomography of the thorax for non-uniform attenuation correction of emission tomography. In the former imaging situation, the intensities or radionuclide activities within the regions of myocardium, blood-pool and background may be relatively constant and the three activity levels can be distinct. For the latter case, the attenuation coefficients of bone, lungs and soft tissues can be determined prior to reconstructing the attenuation map. The contiguous image regions are expected to be simultaneously segmented during image reconstruction. We tested the image model with experimental phantom studies. The phantom consisted of a plastic cylinder having an elliptical cross section and containing five contiguous regions. There were three distinct activity levels within the phantom. Projection data were acquired using a SPECT system. Reconstructions were performed using an iterative maximum a posteriori probability procedure. As expected, the reconstructed image consisted of contiguous regions and the activities within the regions were relatively constant. Compared with maximum likelihood and a Bayesian approach using a Gibbs prior, the results obtained using the image model demonstrated the improvement in identifying the contiguous regions and the associated activities.

Journal ArticleDOI
TL;DR: All the usual tools of statistical signal analysis, for example statistical decision theory for object recognition, can be brought to bear; the information extraction appears to be robust and computationally reasonable, the concepts are geometric and simple, and essentially optimal accuracy should result.
Abstract: A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A 3D surface of interest here is modeled as a function known up to the values of a few parameters. Surface estimation is then treated as the general problem of maximum-likelihood parameter estimation based on two or more functionally related data sets. In our case, these data sets constitute a sequence of images taken at different locations and orientations. Experiments are run to illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. In order to extract all the usable information from the sequence of images, all the images should be available simultaneously for the parameter estimation. We introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and the amount of processing required. This leads to a sequential Bayesian estimator for the surface parameters, where the information extracted from previous images is summarized in a quadratic form. The attractiveness of our approach is that now all the usual tools of statistical signal analysis, for example, statistical decision theory for object recognition, can be brought to bear; the information extraction appears to be robust and computationally reasonable; the concepts are geometric and simple; and essentially optimal accuracy should result. Experimental results are shown for extending this approach in two ways. One is to model a highly variable surface as a collection of small patches jointly constituting a stochastic process (e.g., a Markov random field) and to reconstruct this surface using maximum a posteriori probability (MAP) estimation. The other is to cluster together those patches constituting the same primitive object through the use of MAP segmentation. This provides a simultaneous estimation and segmentation of a surface into primitive constituent surfaces.
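The "summarize the information from previous images in a quadratic form" idea corresponds, under a Gaussian/linearised approximation, to accumulating an information (precision) matrix and vector per image and solving once at the end. A minimal sketch follows; the class name and the assumption of an already-linearised measurement model per image are mine.

```python
# Sequential Bayesian accumulation of image evidence in a quadratic form:
# each image contributes H^T H / noise_var and H^T z / noise_var, so the
# images themselves need not be stored to estimate the surface parameters.
import numpy as np

class QuadraticSummary:
    def __init__(self, dim):
        self.J = np.zeros((dim, dim))   # accumulated information matrix
        self.h = np.zeros(dim)          # accumulated information vector

    def add_image(self, H, z, noise_var):
        """Linearised measurement z ~ H @ theta + noise for one image."""
        self.J += H.T @ H / noise_var
        self.h += H.T @ z / noise_var

    def estimate(self):
        return np.linalg.solve(self.J, self.h)
```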

Journal ArticleDOI
TL;DR: This paper develops the Scale-Invariant algorithms, which incorporate prior shape information by defining prior probabilities on support vectors, where a support vector is a vector formed from the lateral displacements of a particular set of support lines of an object.

Book ChapterDOI
01 Jan 1991
TL;DR: This paper presents practical remarks and numerical results on maximum likelihood estimation algorithms for perfectly and imperfectly observed Gibbsian fields on a finite lattice, and consistency of maximum likelihood estimation is proved.
Abstract: This paper presents practical remarks and numerical results on maximum likelihood estimation algorithms for perfectly and imperfectly observed Gibbsian fields on a finite lattice. These remarks are preceded by the definition of the algorithms. In the appendix, consistency of maximum likelihood estimation is proved. Math. Reviews Classification: Primary: 62F10; Secondary: 60G80.

Book ChapterDOI
01 Jan 1991
TL;DR: Four tomographic reconstruction algorithms employing a nonnegativity constraint are compared, including maximum a posteriori (MAP) estimation based on the Bayesian method with entropy and Gaussian priors as well as the additive and multiplicative versions of the algebraic reconstruction technique (ART and MART).
Abstract: We evaluate several tomographic reconstruction algorithms on the basis of how well one can perform the Rayleigh discrimination task using the reconstructed images. The Rayleigh task is defined here as deciding whether a perceived object is either a pair of neighboring points or a line, both convolved with a 2D Gaussian. The method of evaluation is based on the results of a numerical testing procedure in which the stated discrimination task is carried out on reconstructions of a randomly generated sequence of images. The ability to perform the Rayleigh task is summarized in terms of a discriminability index that is derived from the area under the receiver-operating characteristic (ROC) curve. Reconstruction algorithms employing a nonnegativity constraint are compared, including maximum a posteriori (MAP) estimation based on the Bayesian method with entropy and Gaussian priors as well as the additive and multiplicative versions of the algebraic reconstruction technique (ART and MART). The performance of all four algorithms tested is found to be similar for complete noisy data. However, for sparse noiseless data, the MAP algorithm based on the Gaussian prior does not perform as well as the others.
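For reference, a common way to turn the area under the ROC curve into a single discriminability index is the equal-variance Gaussian relation AUC = Φ(d/√2); the chapter's exact index may be defined differently, so the snippet below is only the standard conversion.

```python
# Discriminability index from ROC area under the equal-variance Gaussian model:
# AUC = Phi(d / sqrt(2))  =>  d = sqrt(2) * Phi^{-1}(AUC).
import numpy as np
from scipy.stats import norm

def discriminability_from_auc(auc):
    return np.sqrt(2.0) * norm.ppf(auc)

print(discriminability_from_auc(0.85))   # approximately 1.47
```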

01 Jan 1991
TL;DR: The EM algorithm yields a simple identification procedure that facilitates maximum likelihood estimation for state-space models; the accuracy of the dynamic model is greatly improved by allowing for the possibility that the observed data contain outliers.
Abstract: An iterative identification method for a linear state-space model with outliers and/or missing data is proposed by applying the Expectation-Maximization (EM) algorithm. The EM algorithm yields a simple identification procedure that facilitates maximum likelihood (ML) estimation for state-space models. The missing data case is easily handled by the EM algorithm. The outliers are treated as missing data and are detected by the maximum a posteriori (MAP) estimate of the occurrence of an outlier, which is modeled by a Bernoulli sequence; the EM algorithm is also applied to the MAP estimation. The fixed-interval smoothed estimate of the state vector is simultaneously obtained, since it is used for the parameter identification. The present algorithm is applied to real data to show that the accuracy of the dynamic model is greatly improved by introducing the possibility that the observed data contain outliers.

Proceedings ArticleDOI
14 Apr 1991
TL;DR: A new segmentation algorithm for still black and white images is introduced that forms the basis of a region-oriented sequence coding technique, currently under development, and suboptimal versions of the algorithm are proposed.
Abstract: A new segmentation algorithm for still black and white images is introduced. This algorithm forms the basis of a region-oriented sequence coding technique, currently under development. The algorithm models the human mechanism of selecting regions both by their interior characteristics and their boundaries. This is carried out in two different stages: with a preprocessing that takes into account only gray level information, and with a stochastic model for segmented images that uses both region interior and boundary information. In the stochastic model, the gray level information within the regions is modeled by stationary Gaussian processes, and the boundary information by a Gibbs-Markov random field (GMRF). The segmentation is carried out by finding the most likely realization of the joint process (maximum a posteriori criterion), given the preprocessed image. For decreasing the computational load while avoiding local maxima in the probability function, suboptimal versions of the algorithm are proposed.

Proceedings ArticleDOI
02 Nov 1991
TL;DR: Single photon emission computed tomography (SPECT) reconstructions were performed using maximum a posteriori (penalized likelihood) estimation via the expectation maximization algorithm on a massively parallel single-instruction multiple-data computer.
Abstract: Single photon emission computed tomography (SPECT) reconstructions were performed using maximum a posteriori (penalized likelihood) estimation via the expectation maximization algorithm. Due to the large number of computations, the algorithms were performed on a massively parallel single-instruction multiple-data computer. Computation times for 200 iterations using Good's rotationally invariant roughness penalty were on the order of 5 min for a 64*64 image with 96 view angles on a 4096-processor machine, and 40 s on a 16384-processor machine. Computer simulations were performed using parameters for the Siemens gamma camera and clinical brain scan parameters, comparing two regularization techniques to conventional reconstructions. Regularization by kernel sieves and penalized likelihood with Good's rotationally invariant roughness measure are compared to filtered backprojection. Twenty-five independent sets of data are reconstructed for the pie and Hoffman brain phantoms.
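One widely used way to run a penalized-likelihood (MAP) EM reconstruction of the kind described is Green's one-step-late update, sketched below on a toy one-dimensional problem. The abstract does not say which penalized EM variant the authors used, so the random system matrix, the simple quadratic roughness penalty, and the beta value are all stand-ins (the paper itself uses Good's rotationally invariant roughness measure).

```python
# One-step-late (OSL) penalized-likelihood EM: the ML-EM ratio backprojection is
# divided by the sensitivity plus the penalty gradient at the current image.
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_bins = 64, 96
A = rng.uniform(0.0, 1.0, (n_bins, n_pix))               # toy system matrix
counts = rng.poisson(A @ rng.uniform(1.0, 5.0, n_pix))    # simulated projections

def penalty_grad(lam):
    # gradient of sum_i (lam_i - lam_{i+1})^2, a simple roughness penalty
    g = np.zeros_like(lam)
    g[:-1] += 2.0 * (lam[:-1] - lam[1:])
    g[1:] += 2.0 * (lam[1:] - lam[:-1])
    return g

lam, beta = np.ones(n_pix), 0.01
sens = A.sum(axis=0)                                       # backprojection of ones
for _ in range(200):
    ratio = counts / np.maximum(A @ lam, 1e-12)
    lam = lam * (A.T @ ratio) / np.maximum(sens + beta * penalty_grad(lam), 1e-12)
```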

Proceedings ArticleDOI
14 Apr 1991
TL;DR: A parallel simulated annealing algorithm is presented for image restoration using a compound Gauss-Markov field model for the image, and the total time required for restoring a monochrome blurred and noisy image with continuous range of intensities is reduced.
Abstract: A parallel simulated annealing algorithm is presented for image restoration using a compound Gauss-Markov field model for the image. Results are provided for its implementation on a distributed array processor (DAP510) which is a single instruction multiple data (SIMD) machine with 1024 (32*32) mesh-connected processor elements, and a clock rate of 10 MHz. The total time required for restoring a monochrome blurred and noisy image with continuous range of intensities is reduced to about 10 minutes as compared to 20 hours for its sequential implementation (VAX11/785). Both the maximum a posteriori (MAP) and minimum mean square error (MMSE) estimates of the original image are obtained. The parallel estimates are shown, as well as the sequential estimate and the classical Wiener filter estimate.

01 May 1991
TL;DR: A practical system for boundary finding of natural objects in images has been developed, based on a new general probabilistic method of boundary finding that allows the incorporation of prior information about the global shape of the target object.
Abstract: A practical system for boundary finding of natural objects in images has been developed. It is based on a new general probabilistic method of boundary finding that allows the incorporation of prior information about the global shape of the target object. Determining the boundaries of objects, and thereby their shape and location, is an important task in computer vision. Segmentation using boundary finding is enhanced both by considering the boundary as a whole and by using model-based global shape information. Previous boundary finding methods have either not used global shape or have designed individual shape models specific to particular shapes. Imperfect image data can be augmented by exploiting the extrinsic information that a model provides. Flexible constraints in the form of a probabilistic deformable model are applied to the problem of segmenting natural objects whose diversity and irregularity of shape makes them poorly represented in terms of fixed features or form. The objects being considered are expected, however, to have a tendency toward some average shape. The parametric model is based on the elliptic Fourier decomposition of the boundary. This is augmented with probability distributions defined on the parameters, which bias the model to a particular overall shape while allowing for deformations. Boundary finding is formulated as an optimization problem using a maximum a posteriori objective function. The best match is found between the boundary, as defined by the parameter vector, and a measure of image boundary strength derived from the image, as biased by the shape prior probability. A computer implementation was constructed and applied to object delineation problems from a variety of two-dimensional images. Results of the method applied to real and synthetic images are presented. Extensions of this method to three dimensions and temporal sequences are outlined.

Journal ArticleDOI
02 Nov 1991
TL;DR: In this paper, the relationship between the choice of parameters for a generalized Gibbs prior for the MAP-EM (maximum a posteriori, expectation maximization) algorithm and the model of the projection/backprojection process used in SPECT (single photon emission computed tomography) reconstruction is studied.
Abstract: The relationship between the choice of parameters for a generalized Gibbs prior for the MAP-EM (maximum a posteriori, expectation maximization) algorithm and the model of the projection/backprojection process used in SPECT (single photon emission computed tomography) reconstruction is studied. A realistic phantom, derived from an X-ray CT study and average Tl-201 uptake distributions in patients, was used. Simulated projection data, including nonuniform attenuation, detector response, scatter, and Poisson noise, were generated. From this data set, reconstructions were created using a MAP-EM technique with a generalized Gibbs prior, which is designed to smooth noise with minimal smoothing of edge information. Reconstructions were performed over several different values of the prior parameters for three projector/backprojector models: one with no compensations at all, one incorporating only nonuniform attenuation compensation, and one incorporating both nonuniform attenuation and detector response compensations. Analysis of several measures of image quality in a region of interest surrounding the myocardium shows that, for each projection model, there is an optimal value of the weighting parameter which decreases as the projection process is modeled more accurately.

Journal ArticleDOI
TL;DR: In this article, the Onsager-Machlup functional is defined for solutions of semilinear elliptic type PDEs driven by white noise and the existence of this functional is proved by applying a general theorem of Ramer on the equivalence of measures on Wiener space.

Book ChapterDOI
01 Jan 1991
TL;DR: It is found that, in spite of the very noisy appearance of the reconstructed images, the maximum likelihood method outperforms the others from the point of view of estimating average activity in individual neurological structures of interest.
Abstract: We present a methodology which allows us to experimentally optimize an image reconstruction method for a specific medical task and to evaluate the relative efficacy of two reconstruction methods for a particular task in a manner which meets the high standards set by the methodology of statistical hypothesis testing. We illustrate this by comparing, in the area of Positron Emission Tomography (PET), a Maximum A posteriori Probability (MAP) algorithm with a method which maximizes likelihood and with two variants of the filtered backprojection method. We find that the relative performance of techniques is extremely task dependent, with the MAP method superior to the others from the point of view of pointwise accuracy, but not from the points of view of two other PET-related figures of merit. In particular, we find that, in spite of the very noisy appearance of the reconstructed images, the maximum likelihood method outperforms the others from the point of view of estimating average activity in individual neurological structures of interest.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the detection of differential item functioning (DIF) on items intentionally constructed to favor one group over another in two item response theory-based computer programs, LOGIST and BILOG.
Abstract: Detection of differential item functioning (DIF) on items intentionally constructed to favor one group over another was investigated on item parameter estimates obtained from two item response theory-based computer programs, LOGIST and BILOG. Signed- and unsigned-area measures based on joint maximum likelihood estimation, marginal maximum likelihood estimation, and two marginal maximum a posteriori estimation procedures were compared with each other to determine whether detection of DIF could be improved using prior distributions. Results indicated that item parameter estimates obtained using either prior condition were less deviant than when priors were not used. Differences in detection of DIF appeared to be related to item parameter estimation condition and to some extent to sample size.

Journal ArticleDOI
TL;DR: Comparing the performance of a linear space-invariant (LSI) maximum a posteriori (MAP) filter, an LSI reduced update Kalman filter (RUKF), an edge-adaptive RUKF, and an adaptive convex-type constraint-based restoration implemented via the method of projection onto convex sets (POCS), it is found that the space-variant restoration methods, which are adaptive to local image properties, obtain the best results.

Journal ArticleDOI
TL;DR: Experimental results comparing the new method with other popular AR spectrum estimation methods indicate the new method achieves low-bias and low-variance AR parameter estimates comparable with the existing methods.
Abstract: A method for obtaining an exact maximum likelihood estimate (MLE) of the autoregressive (AR) parameters is proposed. The method is called the forward-backward maximum likelihood algorithm. Based on a new form of the log likelihood function for a Gaussian AR process, an iterative maximization is used to obtain an MLE of the inverse covariance matrix. The AR parameters are then determined via the normal equations. Experimental results comparing the new method with other popular AR spectrum estimation methods indicate the new method achieves low bias and low variance AR parameter estimates comparable with the existing methods.
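The paper's exact forward-backward maximum likelihood algorithm is not reproduced here. As a simpler baseline in the same spirit, the sketch below computes the common forward-backward (modified covariance) least-squares AR estimate, one of the popular methods such results are typically compared against; the function name and the AR(1) test signal are illustrative.

```python
# Forward-backward (modified covariance) least-squares AR parameter estimation.
import numpy as np

def ar_forward_backward_ls(x, p):
    N = len(x)
    fwd_rows = [[x[n - k] for k in range(1, p + 1)] for n in range(p, N)]
    fwd_tgts = [x[n] for n in range(p, N)]
    bwd_rows = [[x[n + k] for k in range(1, p + 1)] for n in range(0, N - p)]
    bwd_tgts = [x[n] for n in range(0, N - p)]
    A = np.array(fwd_rows + bwd_rows)
    b = np.array(fwd_tgts + bwd_tgts)
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    noise_var = np.mean((b - A @ a) ** 2)
    return a, noise_var

# quick check on a synthetic AR(1) process x[n] = 0.75 x[n-1] + e[n]
rng = np.random.default_rng(0)
e = rng.normal(size=10_000)
x = np.zeros_like(e)
for n in range(1, len(e)):
    x[n] = 0.75 * x[n - 1] + e[n]
print(ar_forward_backward_ls(x, 1))   # first coefficient should be near 0.75
```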