
Showing papers on "Maximum a posteriori estimation published in 1993"


Journal ArticleDOI
TL;DR: In many cases, complete-data maximum likelihood estimation becomes relatively simple when performed conditional on some function of the parameters being estimated, and the resulting algorithm retains EM's stable convergence, with each iteration increasing the likelihood.
Abstract: Two major reasons for the popularity of the EM algorithm are that its maximization step involves only complete-data maximum likelihood estimation, which is often computationally simple, and that its convergence is stable, with each iteration increasing the likelihood. When the associated complete-data maximum likelihood estimation itself is complicated, EM is less attractive because the M-step is computationally burdensome. In many cases, however, complete-data maximum likelihood estimation is relatively simple when performed conditional on some function of the parameters being estimated.

1,816 citations
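
To make the mechanics concrete, here is a minimal sketch of an EM iteration for a two-component Gaussian mixture with known, equal variances; the closed-form M-step illustrates why complete-data maximum likelihood is often computationally simple. The function name, parameter values, and toy data are illustrative, not from the paper.

    import numpy as np

    def em_gmm2(x, mu=(0.0, 1.0), p=0.5, sigma=1.0, iters=50):
        """EM for a two-component 1-D Gaussian mixture (known common sigma)."""
        mu1, mu2 = mu
        for _ in range(iters):
            # E-step: posterior responsibility of component 1 for each point.
            d1 = p * np.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            d2 = (1 - p) * np.exp(-0.5 * ((x - mu2) / sigma) ** 2)
            r = d1 / (d1 + d2)
            # M-step: closed-form complete-data ML updates; each iteration
            # cannot decrease the observed-data likelihood.
            p = r.mean()
            mu1 = (r * x).sum() / r.sum()
            mu2 = ((1 - r) * x).sum() / (1 - r).sum()
        return p, mu1, mu2

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 700)])
    print(em_gmm2(x, mu=(-1.0, 1.0)))  # recovers roughly (0.3, -2, 2)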


Journal ArticleDOI
TL;DR: It is shown that Bayesian segmentation using Gauss-Seidel iteration produces useful estimates at much lower signal-to-noise ratios than required for continuously valued reconstruction.
Abstract: A method for Bayesian reconstruction which relies on updates of single pixel values, rather than the entire image, at each iteration is presented. The technique is similar to Gauss-Seidel (GS) iteration for the solution of differential equations on finite grids. The computational cost per iteration of the GS approach is found to be approximately equal to that of gradient methods. For continuously valued images, GS is found to have significantly better convergence at modes representing high spatial frequencies. In addition, GS is well suited to segmentation when the image is constrained to be discretely valued. It is shown that Bayesian segmentation using GS iteration produces useful estimates at much lower signal-to-noise ratios than required for continuously valued reconstruction. The convergence properties of gradient ascent and GS for reconstruction from integral projections are analyzed, and simulations of both maximum-likelihood and maximum a posteriori cases are included.

543 citations
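
As a concrete illustration of single-pixel updates, the following sketch applies Gauss-Seidel sweeps to a MAP denoising problem with a Gaussian likelihood and a quadratic smoothness prior; the paper's data term is a tomographic projection model, so this is an analogy rather than the authors' algorithm. Each pixel update is the closed-form minimizer of the local posterior energy and uses already-updated neighbors in place, which is what makes it Gauss-Seidel rather than Jacobi iteration.

    import numpy as np

    def gs_map_denoise(y, beta=1.0, sigma=1.0, sweeps=20):
        """Gauss-Seidel MAP denoising with a quadratic smoothness prior."""
        x = y.astype(float).copy()
        H, W = x.shape
        for _ in range(sweeps):
            for i in range(H):
                for j in range(W):
                    nbrs = [x[u, v]
                            for u, v in ((i - 1, j), (i + 1, j),
                                         (i, j - 1), (i, j + 1))
                            if 0 <= u < H and 0 <= v < W]
                    # Closed-form minimizer over x[i, j] of
                    # (x - y_ij)^2 / (2 sigma^2) + beta * sum_nbrs (x - x_nbr)^2.
                    x[i, j] = ((y[i, j] / sigma**2 + 2 * beta * sum(nbrs))
                               / (1 / sigma**2 + 2 * beta * len(nbrs)))
        return x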


Journal ArticleDOI
TL;DR: A multimodal approach to the problem of motion estimation is presented in which the computation of visual motion is based on several complementary constraints, and it is shown that multiple constraints can provide more accurate flow estimation in a wide range of circumstances.
Abstract: The estimation of dense velocity fields from image sequences is basically an ill-posed problem, primarily because the data only partially constrain the solution. It is rendered especially difficult by the presence of motion boundaries and occlusion regions which are not taken into account by standard regularization approaches. In this paper, the authors present a multimodal approach to the problem of motion estimation in which the computation of visual motion is based on several complementary constraints. It is shown that multiple constraints can provide more accurate flow estimation in a wide range of circumstances. The theoretical framework relies on Bayesian estimation associated with global statistical models, namely, Markov random fields. The constraints introduced here aim to address the following issues: optical flow estimation while preserving motion boundaries, processing of occlusion regions, and fusion between gradient-based and feature-based motion constraint equations. Deterministic relaxation algorithms are used to merge information and to provide a solution to the maximum a posteriori estimation of the unknown dense motion field. The algorithm is well suited to a multiresolution implementation which brings an appreciable speed-up as well as a significant improvement of estimation when large displacements are present in the scene. Experiments on synthetic and real world image sequences are reported.

322 citations


Journal ArticleDOI
TL;DR: The simulations show that the inclusion of position-dependent anatomical prior information leads to further improvement relative to Bayesian reconstructions without the anatomical prior, and the algorithm exhibits a certain degree of robustness with respect to errors in the location of anatomical boundaries.
Abstract: Proposes a Bayesian method whereby maximum a posteriori (MAP) estimates of functional (PET and SPECT) images may be reconstructed with the aid of prior information derived from registered anatomical MR images of the same slice. The prior information consists of significant anatomical boundaries that are likely to correspond to discontinuities in an otherwise spatially smooth radionuclide distribution. The authors' algorithm, like others proposed recently, seeks smooth solutions with occasional discontinuities; the contribution here is the inclusion of a coupling term that influences the creation of discontinuities in the vicinity of the significant anatomical boundaries. Simulations on anatomically derived mathematical phantoms are presented. Although the current implementation is computationally intense, the reconstructions are improved (in ROI-RMS error) relative to filtered backprojection and EM-ML reconstructions. The simulations show that the inclusion of position-dependent anatomical prior information leads to further improvement relative to Bayesian reconstructions without the anatomical prior. The algorithm exhibits a certain degree of robustness with respect to errors in the location of anatomical boundaries.

238 citations


Journal ArticleDOI
TL;DR: It is shown that the Bayesian equalizer has a structure equivalent to that of the radial basis function network, the latter being a one-hidden-layer artificial neural network widely used in pattern classification and many other areas of signal processing.
Abstract: A Bayesian solution is derived for digital communication channel equalization with decision feedback. This is an extension of the maximum a posteriori probability symbol-decision equalizer to include decision feedback. A novel scheme utilizing decision feedback that not only improves equalization performance but also greatly reduces computational complexity is proposed. It is shown that the Bayesian equalizer has a structure equivalent to that of the radial basis function network, the latter being a one-hidden-layer artificial neural network widely used in pattern classification and many other areas of signal processing. Two adaptive approaches are developed to realize the Bayesian solution. The maximum-likelihood Viterbi algorithm and the conventional decision feedback equalizer are used as two benchmarks to assess the performance of the Bayesian decision feedback equalizer.

218 citations
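
A minimal sketch of the feedforward Bayesian symbol-decision rule for BPSK may help here: with a known channel and Gaussian noise, the decision statistic is a sum of Gaussian kernels centered at the noiseless channel states, which is exactly a one-hidden-layer radial basis function network. The two-tap channel, window length, and noise variance below are illustrative; the paper's decision-feedback extension additionally uses past decisions to prune the state set.

    import numpy as np
    from itertools import product

    h = np.array([1.0, 0.5])   # known channel taps (assumed for illustration)
    m = 2                      # observation window length
    sigma2 = 0.2               # Gaussian noise variance

    # One RBF center per combination of symbols affecting the window; with
    # symbols ordered newest-first, r_{n-i} = sum_k h_k * s_{i+k}.
    L = m + len(h) - 1
    states = [np.array(s) for s in product([-1.0, 1.0], repeat=L)]
    centers = np.array([[np.dot(h, s[i:i + len(h)]) for i in range(m)]
                        for s in states])
    labels = np.array([s[0] for s in states])  # zero-delay decided symbol

    def bayesian_decision(r):
        # Kernel weight of each channel state, then a sign decision on the
        # label-weighted sum: the RBF-network structure the paper identifies.
        k = np.exp(-np.sum((centers - r) ** 2, axis=1) / (2 * sigma2))
        return np.sign(np.dot(labels, k))

    print(bayesian_decision(np.array([1.4, -0.6])))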


Proceedings ArticleDOI
27 Apr 1993
TL;DR: It was found that supervised speaker adaptation based on two gender-dependent models gave a better result than that obtained with a single SI seed, and that speaker adaptation achieved equal or better performance than speaker-dependent training with the same amount of data.
Abstract: A number of issues related to the application of Bayesian learning techniques to speaker adaptation are investigated. It is shown that the seed models required to construct prior densities to obtain the MAP (maximum a posteriori) estimate can be a speaker-independent (SI) model, a set of female and male models, or even a task-independent acoustic model. Speaker-adaptive training algorithms are shown to be effective in improving the performance of both speaker-dependent and speaker-independent speech recognition systems. The segmental MAP estimation formulation is used to perform adaptive acoustic modeling for speaker adaptation applications. Tested on an RM (resource management) task, it was found that supervised speaker adaptation based on two gender-dependent models gave a better result than that obtained with a single SI seed. Compared with speaker-dependent training, speaker adaptation achieved an equal or better performance with the same amount of training/adaptation data.

170 citations


Journal ArticleDOI
17 Jan 1993
TL;DR: It is shown that the residual redundancy can be used by the MAP detectors to combat channel errors, and that the instantaneous MAP detector can be combined with the VQ decoder to form an approximate minimum mean-squared error decoder.
Abstract: The authors consider the problem of detecting a discrete Markov source which is transmitted across a discrete memoryless channel. Two maximum a posteriori (MAP) formulations are considered: (i) a sequence MAP detection in which the objective is to determine the most probable transmitted sequence given the observed sequence and (ii) an instantaneous MAP detection which is to determine the most probable transmitted symbol at time n given all the observations prior to and including time n. The solution to the first problem results in a "Viterbi-like" implementation of the MAP detector (with large delay) while the latter problem results in a recursive implementation (with no delay). For the special case of the binary symmetric Markov source and binary symmetric channel, simulation results are presented and an analysis of these two systems yields explicit critical channel bit error rates above which the MAP detectors become useful. Applications of the MAP detection problem in a combined source-channel coding system are considered. Here, it is assumed that the source is highly correlated and that the source encoder (a vector quantizer (VQ)) fails to remove all of the source redundancy. The remaining redundancy at the output of the source encoder is referred to as the "residual" redundancy. It is shown, through simulation, that the residual redundancy can be used by the MAP detectors to combat channel errors. For small block sizes, the proposed system beats Farvardin and Vaishampayan's channel-optimized VQ by wide margins. Finally, it is shown that the instantaneous MAP detector can be combined with the VQ decoder to form an approximate minimum mean-squared error decoder.

147 citations
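
The recursive, zero-delay detector of formulation (ii) admits a compact sketch: propagate the state posterior through the Markov chain, reweight by the channel likelihood, and take the argmax at each step. The transition and crossover probabilities below are illustrative values, not those of the paper.

    import numpy as np

    q, eps = 0.1, 0.2                               # illustrative parameters
    T = np.array([[1 - q, q], [q, 1 - q]])          # source transition matrix
    E = np.array([[1 - eps, eps], [eps, 1 - eps]])  # BSC: E[x, y] = P(y | x)

    def instantaneous_map(y):
        alpha = np.array([0.5, 0.5])   # posterior over the current symbol
        decisions = []
        for yn in y:
            # Predict through the chain, then weight by the likelihood of yn.
            alpha = E[:, yn] * (T.T @ alpha)
            alpha /= alpha.sum()       # normalize for numerical stability
            decisions.append(int(np.argmax(alpha)))
        return decisions

    print(instantaneous_map([0, 0, 1, 0, 0, 1, 1, 1]))

The sequence formulation (i) replaces this forward recursion with a Viterbi search for the jointly most probable path, at the cost of decoding delay.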


Proceedings ArticleDOI
27 Apr 1993
TL;DR: The problem of image decompression is cast as an ill-posed inverse problem, and a stochastic regularization technique is used to form a well-posed reconstruction algorithm whose reconstructed images greatly reduce the noticeable artifacts produced by standard techniques.
Abstract: The problem of image decompression is cast as an ill-posed inverse problem, and a stochastic regularization technique is used to form a well-posed reconstruction algorithm. A statistical model for the image which incorporates the convex Huber minimax function is proposed. The use of the Huber minimax function ρ_T(·) helps to maintain the discontinuities from the original image, producing high-resolution edge boundaries. Since ρ_T(·) is convex, the resulting multidimensional minimization problem is a constrained convex optimization problem. The maximum a posteriori (MAP) estimation technique that is proposed results in the constrained optimization of a convex functional. The proposed image decompression algorithm produces reconstructed images that greatly reduce the noticeable artifacts produced by standard techniques.

105 citations
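
For reference, the convex Huber minimax function has the standard form below (threshold T), and a MAP estimate of the kind described minimizes a data-fidelity term plus a Huber penalty on neighboring-pixel differences; the pairing shown is a generic MAP formulation in my notation, not necessarily the paper's exact functional.

    \rho_T(u) =
    \begin{cases}
      u^2, & |u| \le T, \\
      T^2 + 2T\,(|u| - T), & |u| > T,
    \end{cases}
    \qquad
    \hat{x} = \arg\min_{x \in \mathcal{C}} \; \|y - Hx\|^2
      + \lambda \sum_{\{i,j\} \in \mathcal{N}} \rho_T(x_i - x_j).

Because ρ_T grows only linearly beyond T, large differences (edges) are penalized far less than under a quadratic prior, which is what preserves discontinuities.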


Journal ArticleDOI
TL;DR: In this article, the authors studied the asymptotic properties of maximum likelihood estimators of parameters when observations are taken from a two-dimensional Gaussian random field with a multiplicative Ornstein-Uhlenbeck covariance function.
Abstract: We study in detail asymptotic properties of maximum likelihood estimators of parameters when observations are taken from a two-dimensional Gaussian random field with a multiplicative Ornstein-Uhlenbeck covariance function. Under the complete lattice sampling plan, it is shown that the maximum likelihood estimators are strongly consistent and asymptotically normal. The asymptotic normality here is normalized by the fourth root of the sample size and is obtained through higher order expansions of the likelihood score equations. Extensions of these results to higher-dimensional processes are also obtained, showing that the convergence rate becomes better as the dimension gets higher.

98 citations


Journal ArticleDOI
TL;DR: It is proven that the sequence of iterates that is generated by using the expectation maximization algorithm is monotonically increasing in posterior probability, with stable points of the iteration satisfying the necessary maximizer conditions of the maximum a posteriori solution.
Abstract: The three-dimensional image-reconstruction problem solved here for optical-sectioning microscopy is to estimate the fluorescence intensity λ(x), where x ∈ ℝ³, given a series of Poisson counting process measurements {M_j(dx)}, j = 1, …, J, each with intensity s_j(y) ∫_{ℝ³} p_j(y|x) λ(x) dx, with p_j(y|x) being the point spread of the optics focused to the jth plane and s_j(y) the detection probability for detector point y at focal depth j. A maximum a posteriori reconstruction is generated by inducing a prior distribution on the space of images via Good's three-dimensional rotationally invariant roughness penalty ∫_{ℝ³} [|∇λ(x)|²/λ(x)] dx. It is proven that the sequence of iterates that is generated by using the expectation maximization algorithm is monotonically increasing in posterior probability, with stable points of the iteration satisfying the necessary maximizer conditions of the maximum a posteriori solution. The algorithms were implemented on the DECmpp-SX, a 64 × 64 parallel processor, running at under 2 s per 64³ 3-D iteration. Results are demonstrated from simulated as well as amoebae and Volvox data. We study performance comparisons of the algorithms for the missing-data problems corresponding to fast data collection for rapid motion studies, in which every other focal plane is removed, and for imaging with limited detector areas and efficiency.

84 citations
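
Written out, the log-posterior being maximized combines the Poisson-process log-likelihood of the measurements with Good's roughness penalty; the penalty weight γ and the constant term are my notation, added for readability.

    \log p(\lambda \mid \{M_j\}) =
      \sum_{j=1}^{J} \Big[ \int \log \mu_j(y)\, M_j(dy) - \int \mu_j(y)\, dy \Big]
      - \gamma \int_{\mathbb{R}^3} \frac{|\nabla \lambda(x)|^2}{\lambda(x)}\, dx
      + \mathrm{const},
    \qquad
    \mu_j(y) = s_j(y) \int_{\mathbb{R}^3} p_j(y \mid x)\, \lambda(x)\, dx.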


Journal ArticleDOI
TL;DR: A deterministic relaxation method based on mean field annealing with a compound Gauss-Markov random field (CGMRF) model is proposed, and a set of iterative equations for the mean values of the intensity and of both the horizontal and vertical line processes, with or without interaction between them, is presented.
Abstract: The authors consider the problem of edge detection and image estimation in nonstationary images corrupted by additive Gaussian noise. The noise-free image is represented using the compound Gauss-Markov random field developed by F.C. Jeng and J.W. Woods (1990), and the problem of image estimation and edge detection is posed as a maximum a posteriori estimation problem. Since the a posteriori probability function is nonconvex, computationally intensive stochastic relaxation algorithms are normally required. A deterministic relaxation method based on mean field annealing with a compound Gauss-Markov random field (CGMRF) model is proposed. The authors present a set of iterative equations for the mean values of the intensity and both the horizontal and vertical line processes, with or without taking into account some interaction between them. The relationship between this technique and two other methods is considered. Edge detection and image estimation results on several noisy images are included.

Journal ArticleDOI
TL;DR: A Gibbs prior with three parameters is proposed for maximum a posteriori reconstruction in SPECT; it is able to approximate the results of previously proposed two-parameter priors, as well as a continuum of others.
Abstract: The authors introduce a Gibbs prior for use in MAP (maximum a posteriori) reconstruction in SPECT. This new prior, with three parameters, is able to approximate the results of previously proposed priors with two parameters, as well as a continuum of others. Also, it allows the user increased flexibility in selecting the properties to be emphasized in the final reconstructed image estimate. The additional flexibility offered by the new prior is important in addressing the problem of selecting a prior and its associated parameters in a clinical situation. The paper demonstrates the importance of the derivative potential function (DPF) of the Gibbs distribution in determining which properties will be emphasized in the iterated image estimates. The effects of each of the three parameters are demonstrated on reconstructions from acquired SPECT data. The authors conclude that the parameters must be chosen carefully with consideration for the object distribution and the relative requirements for low-contrast detail, smoothing and edge sharpness in the reconstructed image.

Proceedings ArticleDOI
27 Apr 1993
TL;DR: In this paper, sequence estimation and symbol detection algorithms for the demodulation of co-channel narrowband signals in additive noise are proposed based on the maximum likelihood (ML) and maximum a posteriori (MAP) criteria for the joint recovery of both cochannel signals.
Abstract: Sequence estimation and symbol detection algorithms for the demodulation of cochannel narrowband signals in additive noise are proposed. These algorithms are based on the maximum likelihood (ML) and maximum a posteriori (MAP) criteria for the joint recovery of both cochannel signals. The error rate performance characteristics of these nonlinear algorithms were investigated through computer simulations. The results are presented.

Journal ArticleDOI
TL;DR: Here, the deconvolution of hormone time series to reconstruct the instantaneous secretion rate of glands is considered and various techniques are discussed and compared in order to overcome the ill-conditioning of the problem and reduce the computational burden.
Abstract: Pulsatile hormone secretion is usually investigated by measuring hormone concentration in samples of peripheral plasma. Here, the deconvolution of hormone time series to reconstruct the instantaneous secretion rate of glands is considered. Various techniques are discussed and compared in order to overcome the ill-conditioning of the problem and reduce the computational burden. In particular, linear techniques based on least squares, maximum a posteriori (MAP) estimation, and Wiener filtering are compared. A new nonlinear MAP estimator that takes into account the non-Gaussian distribution of the unknown signal is worked out and shown to yield the best results. The performances of the algorithms are tested on simulated time series as well as on series of luteinizing hormone.
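
Under Gaussian assumptions, the linear MAP estimator compared in the abstract reduces to regularized least squares with a smoothness prior; a minimal numpy sketch follows, in which the impulse response, the first-difference roughness matrix, and all parameter values are illustrative stand-ins for the hormone-kinetics model.

    import numpy as np

    n = 100
    h = np.exp(-np.arange(n) / 10.0)            # illustrative impulse response
    H = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])           # causal convolution matrix
    D = np.eye(n) - np.eye(n, k=1)              # first-difference roughness
    gamma, sigma2 = 1.0, 0.01                   # prior weight, noise variance

    rng = np.random.default_rng(0)
    u_true = np.zeros(n)
    u_true[[20, 50, 80]] = 1.0                  # toy secretion bursts
    y = H @ u_true + rng.normal(0.0, np.sqrt(sigma2), n)

    # Linear MAP / Tikhonov solution of y = H u + noise:
    #   (H'H / sigma2 + gamma D'D) u = H'y / sigma2
    u_map = np.linalg.solve(H.T @ H / sigma2 + gamma * D.T @ D,
                            H.T @ y / sigma2)

The nonlinear MAP estimator favored by the authors replaces the Gaussian prior implicit in this solve with a non-Gaussian one, so it no longer reduces to a single linear system.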

Book ChapterDOI
01 Jan 1993
TL;DR: The goal for motion estimation is to propose a general formulation that incorporates object acceleration, nonlinear motion trajectories, occlusion effects, and multichannel (vector) observations; Gibbs-Markov models linked together by the maximum a posteriori probability criterion result in the minimization of a multiple-term cost function.
Abstract: In this chapter we are concerned with the estimation of 2-D motion from time-varying images and with the application of the computed motion to image sequence processing. Our goal for motion estimation is to propose a general formulation that incorporates object acceleration, nonlinear motion trajectories, occlusion effects and multichannel (vector) observations. To achieve this objective we use Gibbs-Markov models linked together by the maximum a posteriori probability criterion, which results in the minimization of a multiple-term cost function. The specific applications of motion-compensated processing of image sequences are prediction, noise reduction and spatiotemporal interpolation.

Proceedings ArticleDOI
23 May 1993
TL;DR: Nonlinear algorithms for the joint recovery of cochannel narrowband signals are proposed and maximum likelihood and maximum a posteriori criteria are employed to derive cochannel demodulators of varying complexities and degrees of performance.
Abstract: Nonlinear algorithms for the joint recovery of cochannel narrowband signals are proposed. For finite impulse response channel characteristics, maximum likelihood and maximum a posteriori criteria are employed to derive cochannel demodulators of varying complexities and degrees of performance. The error rate performance of these joint estimation algorithms is examined through computer simulations.

Journal ArticleDOI
TL;DR: A fully three-dimensional (3-D) implementation of the maximum a posteriori (MAP) method for single photon emission computed tomography (SPECT) is demonstrated, and the 3-D reconstruction exhibits a major increase in resolution when compared to the generation of the series of separate 2-D slice reconstructions.
Abstract: A fully three-dimensional (3-D) implementation of the maximum a posteriori (MAP) method for single photon emission computed tomography (SPECT) is demonstrated. The 3-D reconstruction exhibits a major increase in resolution when compared to the generation of the series of separate 2-D slice reconstructions. As has been noted, the iterative EM algorithm for 2-D reconstruction is highly computational; the 3-D algorithm is far worse. To accommodate the computational complexity, previous work in the 2-D arena is extended, and an implementation on the class of massively parallel processors of the 3-D algorithm is demonstrated. Using a 16000- (4000-) processor MasPar/DECmpp-Sx machine, the algorithm is demonstrated to execute at 2.5

Journal ArticleDOI
TL;DR: Single photon emission computed tomography (SPECT) reconstructions were performed using maximum a posteriori (penalized likelihood) estimation via the expectation maximization algorithm on a massively parallel single-instruction multiple-data computer.
Abstract: Single photon emission computed tomography (SPECT) reconstructions performed using maximum a posteriori (penalized likelihood) estimation with the expectation maximization algorithm are discussed. Due to the large number of computations, the algorithms were performed on a massively parallel single-instruction multiple-data computer. Computation times for 200 iterations, using I.J. Good and R.A. Gaskins's (1971) roughness as a rotationally invariant roughness penalty, are shown to be on the order of 5 min for a 64 × 64 image with 96 view angles on an AMT-DAP 4096-processor machine and 1 min on a MasPar 4096-processor machine. Computer simulations performed using parameters for the Siemens gamma camera and clinical brain scan parameters are presented to compare two regularization techniques, regularization by kernel sieves and penalized likelihood with Good's rotationally invariant roughness measure, to filtered backprojection. Twenty-five independent sets of data are reconstructed for the pie and Hoffman brain phantoms. The average variance and average deviation are examined in various areas of the brain phantom. It is shown that while the geometry of the area examined greatly affects the observed results, in all cases the reconstructions using Good's roughness give superior variance and bias results to the two alternative methods.

Journal ArticleDOI
TL;DR: In this article, the relative error covariance matrix (RECM) is introduced as a tool for quantitatively evaluating the manner in which data contribute to the structure of a reconstruction.

Journal ArticleDOI
TL;DR: The maximum a posteriori (MAP) classifier is extended to the case in which the radar backscatter from the remotely sensed surface varies within the SAR image because of incidence angle effects, illustrating the practicality of the method for combining SAR intensity observations acquired at two different frequencies and for improving classification accuracy of SAR data.
Abstract: We present a maximum a posteriori (MAP) classifier for classifying multifrequency, multilook, single polarization SAR intensity data into regions or ensembles of pixels of homogeneous and similar radar backscatter characteristics. A model for the prior joint distribution of the multifrequency SAR intensity data is combined with a Markov random field for representing the interactions between region labels to obtain an expression for the posterior distribution of the region labels given the multifrequency SAR observations. The maximization of the posterior distribution yields Bayes's optimum region labeling or classification of the SAR data or its MAP estimate. The performance of the MAP classifier is evaluated by using computer-simulated multilook SAR intensity data as a function of the parameters in the classification process. Multilook SAR intensity data are shown to yield higher classification accuracies than one-look SAR complex amplitude data. The MAP classifier is extended to the case in which the radar backscatter from the remotely sensed surface varies within the SAR image because of incidence angle effects. The results obtained illustrate the practicality of the method for combining SAR intensity observations acquired at two different frequencies and for improving classification accuracy of SAR data.
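
Schematically, a MAP-MRF classifier of this kind maximizes a posterior of the following form; the Potts-style label prior shown here is a generic illustration in my notation, and the paper's region model and multifrequency likelihood are more detailed.

    \hat{\ell} = \arg\max_{\ell}\;
      \prod_{s} p(y_s \mid \ell_s)\,
      \exp\!\Big( -\beta \sum_{\{s,t\} \in \mathcal{N}} \mathbf{1}[\ell_s \neq \ell_t] \Big),

equivalently, minimizing the energy U(ℓ) = -\sum_s \log p(y_s \mid \ell_s) + \beta \sum_{\{s,t\}} \mathbf{1}[\ell_s \neq \ell_t], where the first term ties labels to the observed SAR intensities and the second rewards spatially coherent regions.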

Journal ArticleDOI
TL;DR: In this paper, discontinuities of the motion field are taken into account by using a Markov random field (MRF) model, which leads to solutions less sensitive to noise than an all-or-nothing Boolean line process.
Abstract: A motion field estimation method for image sequence coding is presented. The motion vector field is estimated to remove the temporal redundancy between two successive images of a sequence. Motion estimation is an ill-posed inverse problem. Usually, the solution has been stabilized by regularization, as proposed by Tikhonov in 1963, i.e., by assuming a priori the smoothness of the solution. Here, discontinuities of the motion field are taken into account by using a Markov random field (MRF) model. Discontinuities, which unavoidably appear at the edges of a moving object, can be modeled by a continuous line process, as introduced by Geman and Reynolds in 1992, via a regularization function that belongs to the Φ function family. This line process leads to solutions less sensitive to noise than an all-or-nothing Boolean line process. Taking discontinuities into account leads to the minimization of a nonconvex functional to get the maximum a posteriori (MAP) optimal solution. We derive a new deterministic relaxation algorithm associated with the Φ function, to minimize the nonconvex criterion. We apply this algorithm in a coarse-to-fine multiresolution scheme, leading to more accurate results. We show results on synthetic and real-life sequences.

Proceedings ArticleDOI
27 Apr 1993
TL;DR: The performance of the proposed SDIE algorithm was shown to be superior to that of the Wiener-based PR algorithm and the 2-D Kalman filter in estimating the DVF and intensity field, respectively, from a noisy-blurred image sequence.
Abstract: A recursive model-based maximum a posteriori (MAP) estimator that simultaneously estimates the displacement vector field (DVF) and intensity field from a noisy-blurred image sequence is developed. By simultaneously estimating these two fields, information is made available to each filter regarding the reliability of estimates that they are dependent upon. Nonstationary models are used for the DVF and the intensity field in the proposed estimator, thus avoiding the smoothing of boundaries present in both. The advantage of the proposed SDIE (simultaneous displacement and intensity field estimation) algorithm is that the error inherent in estimating the DVF is taken into account in the filtering of the intensity field. A second advantage is that, through the use of the nonstationary VCGM (vector coupled Gauss-Markov) and STCGM (spatiotemporal coupled Gauss-Markov) models, boundaries in both the DVF and the intensity fields are preserved. The performance of the proposed SDIE algorithm was shown to be superior to that of the Wiener-based PR algorithm and the 2-D Kalman filter in estimating the DVF and intensity field, respectively, from a noisy-blurred image sequence.

Posted Content
TL;DR: In this article, it is shown that the exact Bayesian calculation is simpler to perform than the evidence approximation and is closed form, and that for neural networks the evidence procedure's MAP estimate is entirely approximation error; the analysis applies to any Bayesian interpolation problem.
Abstract: The Bayesian "evidence" approximation, which is closely related to generalized maximum likelihood, has recently been employed to determine the noise and weight-penalty terms for training neural nets. This paper shows that it is far simpler to perform the exact calculation than it is to set up the evidence approximation. Moreover, unlike that approximation, the exact result does not have to be re-calculated for every new data set. Nor does it require the running of complex numerical computer code (the exact result is closed form). In addition, it turns out that for neural nets, the evidence procedure's MAP estimate is in toto approximation error. Another advantage of the exact analysis is that it does not lead to incorrect intuition, like the claim that one can "evaluate different priors in light of the data." This paper ends by discussing sufficiency conditions for the evidence approximation to hold, along with the implications of those conditions. Although couched in terms of neural nets, the analysis of this paper holds for any Bayesian interpolation problem.

Journal ArticleDOI
TL;DR: The maximum a posteriori (MAP) approach usually applied as an estimation criterion for single-level GMRFs is shown to be a special case of the most probable explanation (MPE) criterion, which is valid for multilevel GMRFs.

Patent
22 Mar 1993
TL;DR: In this article, a maximum a posteriori (MAP) algorithm provides a track output of the signal, which is used as a guide or template to provide optimal spectral integration on an unstable or frequency-varying line.
Abstract: A system and method for automating signal tracking, estimation of signal parameters, and extraction of signals from sonar data to detect weaker signals. A maximum a posteriori (MAP) algorithm processing provides a track output of the signal which is used as a guide or template to provide optimal spectral integration on an unstable or frequency varying line. The present invention includes track integration, parameter estimation, signal track normalization, and sequential signal detection. The present invention partitions the input band into frequency subwindows. For each subwindow, the strongest signal is tracked, its parameters are estimated, and then the signal is normalized (removed) from the subwindow. This is repeated until the entire subwindow set is processed. Then the subwindows, now with their strongest signals removed, are recombined to form one input band. This aggregated procedure represents one processing pass. In the next pass, the entire above procedure is repeated with either the same or new subwindow boundaries. This continues until a predetermined number of passes is completed. Sequential signal detection is provided from one data frame to the next, a problem that is beyond the capability of conventional systems and methods for tracking frequency lines of unknown frequency modulation and amplitude.

Proceedings ArticleDOI
27 Apr 1993
TL;DR: A novel combined speech extrapolation and error detection algorithm is presented which can improve the speech significantly in the case of residual bit errors and is superior to the MAP (maximum a posteriori) estimator in terms of perceptual performance.
Abstract: In digital mobile radio systems the speech quality can be degraded severely if the channel decoder produces residual bit errors, e.g., due to heavy burst errors on the channel. A novel combined speech extrapolation and error detection algorithm is presented which can improve the speech significantly in the case of residual bit errors. This algorithm, which is part of the speech decoding process, uses a posteriori probabilities of speech parameters. With the extracted a posteriori probability, optimum estimators adapted to human perception can be applied and soft decision information can be exploited fully. In terms of perceptual performance, the MS (mean-square) estimator is superior to the MAP (maximum a posteriori) estimator. The method was tested under realistic conditions using an 8-kbit/s CELP (code excited linear prediction) codec. A significant improvement of subjective speech quality can be achieved.
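
In standard notation, the two estimators being compared are the posterior mean (which minimizes mean-square error) and the posterior mode:

    \hat{x}_{\mathrm{MS}} = \mathbb{E}[x \mid y] = \int x\, p(x \mid y)\, dx,
    \qquad
    \hat{x}_{\mathrm{MAP}} = \arg\max_{x}\, p(x \mid y).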

Proceedings ArticleDOI
27 Apr 1993
TL;DR: The authors propose a general formulation for adaptive, maximum a posteriori probability (MAP) segmentation of image sequences on the basis of interframe displacement and gray level information, and two methods for characterizing the conditional probability distribution of the data given the segmentation process are proposed.
Abstract: The authors propose a general formulation for adaptive, maximum a posteriori probability (MAP) segmentation of image sequences on the basis of interframe displacement and gray level information. The segmentation assigns pixel sites to independently moving objects in the scene. In this formulation, two methods for characterizing the conditional probability distribution of the data given the segmentation process are proposed. The a priori probability distribution is characterized on the basis of a Gibbsian model of the segmentation process, where a novel motion-compensated spatiotemporal neighborhood system is defined. The proposed formulation adapts to the displacement field accuracy by appropriately adjusting the relative emphasis on the estimated displacement field, gray level information, and prior knowledge implied by the Gibbsian model. Experiments have been performed with a five-frame simulated sequence containing translation and rotation.

Patent
22 Mar 1993
TL;DR: In this article, the Short and Toomey algorithm is applied to a frequency and beam direction windowed and time segmented set of Fast Fourier Transform (FFT) magnitude detected data to determine the presence or absence of narrowband lines indicative of target tracks.
Abstract: An automatic detection apparatus and method applies the Short and Toomey algorithmic processing procedure to a frequency and beam direction windowed and time segmented set of Fast Fourier Transform (FFT) magnitude detected data, comprising time, frequency and beam direction data, to determine the presence or absence of narrowband lines indicative of target tracks. This is achieved by storing the time, frequency and beam direction data, and then processing this data using a predetermined three-dimensional maximum a posteriori procedure whereby individual target tracks associated with each beam direction are concurrently processed, and whereby transitions are made between adjacent beam directions in order to process target tracks having a maximum signal to noise ratio to provide for detection of a target. An output target track is generated by combining the high signal to noise ratio portions of the processed individual target tracks into a single output target track. The present invention extends the tracking capabilities that are obtained using the Short and Toomey processing procedure from two dimensions to three dimensions, and this added dimension provides spatial tracking in addition to spectral tracking. The spatial tracking is performed concurrently with the spectral tracking in an array processor.

Journal ArticleDOI
TL;DR: In this article, the concentration parameters of the Fisher matrix distribution for rotations or orientations in three dimensions were estimated using a 1-dimensional integral representation of the normalising constant.
Abstract: Two procedures are considered for estimating the concentration parameters of the Fisher matrix distribution for rotations or orientations in three dimensions. The first is maximum likelihood. The use of a convenient 1-dimensional integral representation of the normalising constant, which greatly simplifies the computation, is suggested. The second approach exploits the equivalence of the Fisher distribution for rotations in three dimensions, and the Bingham distribution for axes in four dimensions. We describe a pseudo likelihood procedure which works for the Bingham distribution in any dimension. This alternative approach does not require numerical integration. Results on the asymptotic efficiency of the pseudo likelihood estimator relative to the maximum likelihood estimator are given, and the two estimators are compared in the analysis of a well-known vectorcardiography dataset.

Proceedings ArticleDOI
08 Apr 1993
TL;DR: A Bayesian approach for segmentation of three-dimensional (3-D) magnetic resonance imaging (MRI) data of the human brain is presented, based on the maximum a posteriori probability (MAP) criterion, with a 3-D Gibbs random field used to model the a priori probability distribution of the segmentation.
Abstract: A Bayesian approach for segmentation of three-dimensional (3-D) magnetic resonance imaging (MRI) data of the human brain is presented. Connectivity and smoothness constraints are imposed on the segmentation in 3 dimensions. The resulting segmentation is suitable for 3-D display and for volumetric analysis of structures. The algorithm is based on the maximum a posteriori probability (MAP) criterion, where a 3-D Gibbs random field (GRF) is used to model the a priori probability distribution of the segmentation. The proposed method can be applied to a spatial sequence of 2-D images (cross-sections through a volume), as well as 3-D sampled data. We discuss the optimization methods for obtaining the MAP estimate. Experimental results obtained using clinical data are included.