
Showing papers on "Maximum a posteriori estimation published in 2004"


Journal ArticleDOI
TL;DR: An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation that provides more accurate and more consistent estimates of the parameters of the diffusion term.

351 citations


Journal ArticleDOI
TL;DR: An online (recursive) algorithm is proposed that estimates the parameters of the mixture and simultaneously selects the number of components; a stochastic approximation recursive learning scheme searches for the maximum a posteriori (MAP) solution and discards the irrelevant components.
Abstract: There are two open problems when finite mixture densities are used to model multivariate data: the selection of the number of components and the initialization. In this paper, we propose an online (recursive) algorithm that estimates the parameters of the mixture and that simultaneously selects the number of components. The new algorithm starts with a large number of randomly initialized components. A prior is used as a bias for maximally structured models. A stochastic approximation recursive learning algorithm is proposed to search for the maximum a posteriori (MAP) solution and to discard the irrelevant components.

269 citations
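
A minimal sketch of the flavor of such an online MAP mixture update, not the authors' exact stochastic-approximation rule: each incoming sample updates responsibilities and parameters with a small step, and a Dirichlet-type penalty shrinks the weights of unsupported components so they can be pruned. The step size, penalty strength, and pruning threshold below are illustrative assumptions.

```python
import numpy as np

def online_gmm_map(stream, k_init=10, dim=2, step=0.05, penalty=0.5, prune_tol=1e-3, seed=0):
    """Toy online (recursive) GMM estimation with MAP-style component pruning.

    A sketch only: responsibilities are computed per sample, parameters are
    moved by a small stochastic-approximation step, and a Dirichlet-type
    penalty (strength `penalty`) shrinks the weights of unsupported
    components, which are discarded once they fall below `prune_tol`.
    """
    rng = np.random.default_rng(seed)
    means = rng.normal(size=(k_init, dim))
    variances = np.ones((k_init, dim))
    weights = np.full(k_init, 1.0 / k_init)

    n_seen = 0
    for x in stream:
        n_seen += 1
        # Responsibilities under diagonal Gaussians (log densities plus log weights).
        log_p = -0.5 * np.sum((x - means) ** 2 / variances + np.log(variances), axis=1)
        log_p += np.log(weights)
        r = np.exp(log_p - log_p.max())
        r /= r.sum()

        # Stochastic-approximation updates of means, variances, and weights.
        means += step * r[:, None] * (x - means)
        variances += step * r[:, None] * ((x - means) ** 2 - variances)
        weights += step * (r - weights)

        # MAP-style shrinkage: subtract a small Dirichlet penalty, prune
        # collapsed components, and renormalize the surviving weights.
        weights = np.maximum(weights - step * penalty / n_seen, 0.0)
        keep = weights > prune_tol
        means, variances, weights = means[keep], variances[keep], weights[keep]
        weights /= weights.sum()

    return weights, means, variances

# Example: a shuffled stream drawn from two well-separated Gaussians.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(-3, 1, size=(500, 2)), rng.normal(3, 1, size=(500, 2))])
rng.shuffle(data)
w, mu, var = online_gmm_map(data)
print(len(w), "components kept; weights:", np.round(w, 2))
```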


Journal ArticleDOI
TL;DR: In this article, the authors describe extensions to the conventional Bayesian treatment that assign uncertainty to the parameters defining the prior distribution and the distribution of the measurement errors, known as empirical and hierarchical Bayes.
Abstract: A common way to account for uncertainty in inverse problems is to apply Bayes' rule and obtain a posterior distribution of the quantities of interest given a set of measurements. A conventional Bayesian treatment, however, requires assuming specific values for parameters of the prior distribution and of the distribution of the measurement errors (e.g., the standard deviation of the errors). In practice, these parameters are often poorly known a priori, and choosing a particular value is often problematic. Moreover, the posterior uncertainty is computed assuming that these parameters are fixed; if they are not well known a priori, the posterior uncertainties have dubious value.This paper describes extensions to the conventional Bayesian treatment that assign uncertainty to the parameters defining the prior distribution and the distribution of the measurement errors. These extensions are known in the statistical literature as “empirical Bayes” and “hierarchical Bayes.” We demonstrate the practical applicati...

238 citations
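
As a hedged illustration of the hierarchical-Bayes idea (a toy construction, not the authors' formulation), the sketch below treats the noise standard deviation of a small linear-Gaussian inverse problem as uncertain, places a discrete grid prior on it, and averages the conditional posteriors of the unknowns over that grid.

```python
import numpy as np

# Toy linear inverse problem y = G m + noise, with a Gaussian prior on m.
# Instead of fixing the noise standard deviation sigma, we put a discrete
# (grid) prior on it and average over it -- a crude stand-in for the
# empirical/hierarchical Bayes extensions described above.
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 5))
m_true = rng.normal(size=5)
sigma_true = 0.3
y = G @ m_true + sigma_true * rng.normal(size=20)

prior_var = 1.0                      # fixed Gaussian prior variance on m (assumption)
sigma_grid = np.linspace(0.05, 1.0, 40)
log_evidence = np.empty_like(sigma_grid)
cond_means = np.empty((sigma_grid.size, 5))

for i, s in enumerate(sigma_grid):
    # Conditional posterior of m given sigma: standard Gaussian linear algebra.
    A = G.T @ G / s**2 + np.eye(5) / prior_var
    cov = np.linalg.inv(A)
    cond_means[i] = cov @ G.T @ y / s**2
    # Log marginal likelihood of y given sigma (up to a constant),
    # used to weight each grid value of sigma.
    C = s**2 * np.eye(20) + prior_var * G @ G.T
    sign, logdet = np.linalg.slogdet(C)
    log_evidence[i] = -0.5 * (logdet + y @ np.linalg.solve(C, y))

w = np.exp(log_evidence - log_evidence.max())
w /= w.sum()
m_hb = w @ cond_means                # hierarchical-Bayes posterior mean of m
print("posterior mean of sigma:", round(float(w @ sigma_grid), 3))
print("recovered m:", np.round(m_hb, 2), " true m:", np.round(m_true, 2))
```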


Journal ArticleDOI
TL;DR: This work optimizes the received user signal-to-noise ratio (SNR) distribution in order to maximize the system spectral efficiency for given user channel codes, channel load, and target user bit-error rate.
Abstract: We consider a canonical model for coded code-division multiple access (CDMA) with random spreading, where the receiver makes use of iterative belief-propagation (BP) joint decoding. We provide simple density-evolution analysis in the large-system limit (large number of users) of the performance of the BP decoder and of some suboptimal approximations based on interference cancellation (IC). Based on this analysis, we optimize the received user signal-to-noise ratio (SNR) distribution in order to maximize the system spectral efficiency for given user channel codes, channel load (users per chip), and target user bit-error rate (BER). The optimization of the received SNR distribution is obtained by solving a simple linear program and can be easily incorporated into practical power control algorithms. Remarkably, under the optimized SNR assignment, the suboptimal minimum mean-square error (MMSE) IC-based decoder performs almost as well as the more complex BP decoder. Moreover, for a large class of commonly used convolutional codes, we observe that the optimized SNR distribution consists of a finite number of discrete SNR levels. Based on this observation, we provide a low-complexity approximation of the MMSE-IC decoder that suffers from very small performance degradation while attaining considerable savings in complexity. As by-products of this work, we obtain a closed-form expression of the multiuser efficiency (ME) of power-mismatched MMSE filters in the large-system limit, and we extend the analysis of the symbol-by-symbol maximum a posteriori probability (MAP) multiuser detector in the large-system limit to the case of nonconstant user powers and nonuniform symbol prior probabilities.

202 citations


Journal ArticleDOI
TL;DR: An expectation-maximization algorithm is derived to efficiently compute a maximum a posteriori point estimate of the various parameters; experiments demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.
Abstract: This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation-maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.

164 citations


Journal ArticleDOI
TL;DR: A novel method for the segmentation of multiple objects from three-dimensional (3-D) medical images using interobject constraints is presented, motivated by the observation that neighboring structures have consistent locations and shapes that provide configurations and context that aid in segmentation.
Abstract: A novel method for the segmentation of multiple objects from three-dimensional (3-D) medical images using interobject constraints is presented. Our method is motivated by the observation that neighboring structures have consistent locations and shapes that provide configurations and context that aid in segmentation. We define a maximum a posteriori (MAP) estimation framework using the constraining information provided by neighboring objects to segment several objects simultaneously. We introduce a representation for the joint density function of the neighbor objects, and define joint probability distributions over the variations of the neighboring shape and position relationships of a set of training images. In order to estimate the MAP shapes of the objects, we formulate the model in terms of level set functions, and compute the associated Euler-Lagrange equations. The contours evolve both according to the neighbor prior information and the image gray level information. This method is useful in situations where there is limited interobject information as opposed to robust global atlases. In addition, we compare our level set representation of the object shape to the point distribution model. Results and validation from experiments on synthetic data and medical imagery in two-dimensional and 3-D are demonstrated.

163 citations


Journal ArticleDOI
TL;DR: A novel perspective on the max-product algorithm is provided, based on the idea of reparameterizing the distribution in terms of so-called pseudo-max-marginals on nodes and edges of the graph, to provide conceptual insight into the algorithm in application to graphs with cycles.
Abstract: Finding the maximum a posteriori (MAP) assignment of a discrete-state distribution specified by a graphical model requires solving an integer program. The max-product algorithm, also known as the max-plus or min-sum algorithm, is an iterative method for (approximately) solving such a problem on graphs with cycles. We provide a novel perspective on the algorithm, which is based on the idea of reparameterizing the distribution in terms of so-called pseudo-max-marginals on nodes and edges of the graph. This viewpoint provides conceptual insight into the max-product algorithm in application to graphs with cycles. First, we prove the existence of max-product fixed points for positive distributions on arbitrary graphs. Next, we show that the approximate max-marginals computed by max-product are guaranteed to be consistent, in a suitable sense to be defined, over every tree of the graph. We then turn to characterizing the nature of the approximation to the MAP assignment computed by max-product. We generalize previous work by showing that for any graph, the max-product assignment satisfies a particular optimality condition with respect to any subgraph containing at most one cycle per connected component. We use this optimality condition to derive upper bounds on the difference between the log probability of the true MAP assignment, and the log probability of a max-product assignment. Finally, we consider extensions of the max-product algorithm that operate over higher-order cliques, and show how our reparameterization analysis extends in a natural manner.

155 citations
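
A minimal sketch of max-product message passing on a small pairwise model: on a chain (a tree) the messages yield exact max-marginals and the MAP assignment, and it is the same local update that gets iterated on graphs with cycles, as analyzed above. The chain model and potentials are illustrative assumptions.

```python
import numpy as np

def max_product_chain(node_pot, edge_pot):
    """Max-product messages on a chain MRF.

    node_pot : (n, k) unary potentials,
    edge_pot : (n-1, k, k) pairwise potentials.
    Returns per-node (pseudo-)max-marginals and the MAP assignment.
    Exact on a chain; on loopy graphs the same update is simply iterated.
    """
    n, k = node_pot.shape
    fwd = np.ones((n, k))    # messages passed left-to-right
    bwd = np.ones((n, k))    # messages passed right-to-left
    for i in range(1, n):
        fwd[i] = np.max(edge_pot[i - 1] * (node_pot[i - 1] * fwd[i - 1])[:, None], axis=0)
        fwd[i] /= fwd[i].sum()          # normalize for numerical stability
    for i in range(n - 2, -1, -1):
        bwd[i] = np.max(edge_pot[i] * (node_pot[i + 1] * bwd[i + 1])[None, :], axis=1)
        bwd[i] /= bwd[i].sum()
    max_marginals = node_pot * fwd * bwd     # pseudo-max-marginals at each node
    return max_marginals, np.argmax(max_marginals, axis=1)

# Tiny example: 4 binary nodes with attractive pairwise potentials.
node_pot = np.array([[0.9, 0.1], [0.4, 0.6], [0.5, 0.5], [0.2, 0.8]])
edge_pot = np.tile(np.array([[2.0, 1.0], [1.0, 2.0]]), (3, 1, 1))
mm, assignment = max_product_chain(node_pot, edge_pot)
print("MAP assignment:", assignment)
```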


Journal ArticleDOI
TL;DR: It is found that the MAP/SMM method is able to reconstruct subpixel information in several principal components of the high-resolution hyperspectral image estimate, while the enhancement for conventional methods, like those based on least squares estimation, is limited primarily to the first principal component.
Abstract: A maximum a posteriori (MAP) estimation method is described for enhancing the spatial resolution of a hyperspectral image using a higher resolution coincident panchromatic image. The approach makes use of a stochastic mixing model (SMM) of the underlying spectral scene content to develop a cost function that simultaneously optimizes the estimated hyperspectral scene relative to the observed hyperspectral and panchromatic imagery, as well as the local statistics of the spectral mixing model. The incorporation of the stochastic mixing model is found to be the key ingredient for reconstructing subpixel spectral information in that it provides the necessary constraints that lead to a well-conditioned linear system of equations for the high-resolution hyperspectral image estimate. Here, the mathematical formulation of the proposed MAP method is described. Also, enhancement results using various hyperspectral image datasets are provided. In general, it is found that the MAP/SMM method is able to reconstruct subpixel information in several principal components of the high-resolution hyperspectral image estimate, while the enhancement for conventional methods, like those based on least squares estimation, is limited primarily to the first principal component (i.e., the intensity component).

148 citations


Journal ArticleDOI
TL;DR: Experimental results on simulated and real-world data sets indicate that the approach works well even on large data sets, and has the advantages of Bayesian methods for model adaptation and error bars of its predictions.
Abstract: In this paper, we use a unified loss function, called the soft insensitive loss function, for Bayesian support vector regression. We follow standard Gaussian processes for regression to set up the Bayesian framework, in which the unified loss function is used in the likelihood evaluation. Under this framework, the maximum a posteriori estimate of the function values corresponds to the solution of an extended support vector regression problem. The overall approach has the merits of support vector regression such as convex quadratic programming and sparsity in solution representation. It also has the advantages of Bayesian methods for model adaptation and error bars of its predictions. Experimental results on simulated and real-world data sets indicate that the approach works well even on large data sets.

147 citations


Journal ArticleDOI
TL;DR: In this article, a nonparametric prior on the spectral density is described through Bernstein polynomials, and a pseudoposterior distribution is obtained by updating the prior using the Whittle likelihood.
Abstract: This article describes a Bayesian approach to estimating the spectral density of a stationary time series. A nonparametric prior on the spectral density is described through Bernstein polynomials. Because the actual likelihood is very complicated, a pseudoposterior distribution is obtained by updating the prior using the Whittle likelihood. A Markov chain Monte Carlo algorithm for sampling from this posterior distribution is described that is used for computing the posterior mean, variance, and other statistics. A consistency result is established for this pseudoposterior distribution that holds for a short-memory Gaussian time series and under some conditions on the prior. To prove this asymptotic result, a general consistency theorem of Schwartz is extended for a triangular array of independent, nonidentically distributed observations. This extension is also of independent interest. A simulation study is conducted to compare the proposed method with some existing methods. The method is illustrated with ...

144 citations
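
The Whittle approximation that replaces the exact Gaussian likelihood is easy to state in code: periodogram ordinates at the Fourier frequencies are treated as approximately independent exponentials with means given by the spectral density. The sketch below only evaluates that pseudo-likelihood for a candidate spectral density; the Bernstein-polynomial prior and the MCMC sampler of the paper are not reproduced.

```python
import numpy as np

def whittle_log_likelihood(x, spec_fn):
    """Whittle pseudo log-likelihood of a time series under spectral density spec_fn.

    Periodogram ordinates I(omega_j) at Fourier frequencies are treated as
    approximately independent Exponential(f(omega_j)) variables, giving
    sum_j [ -log f(omega_j) - I(omega_j) / f(omega_j) ].
    """
    n = len(x)
    x = np.asarray(x) - np.mean(x)
    freqs = np.arange(1, (n - 1) // 2 + 1) * 2 * np.pi / n   # frequencies in (0, pi)
    dft = np.fft.fft(x)[1:(n - 1) // 2 + 1]
    periodogram = np.abs(dft) ** 2 / (2 * np.pi * n)
    f = np.array([spec_fn(w) for w in freqs])
    return float(np.sum(-np.log(f) - periodogram / f))

# Example: an AR(1) series scored under its true spectrum versus white noise.
rng = np.random.default_rng(0)
phi, n = 0.7, 512
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

ar1_spec = lambda w: 1.0 / (2 * np.pi * (1 - 2 * phi * np.cos(w) + phi ** 2))
white_spec = lambda w: 1.0 / (2 * np.pi)
print("AR(1) spectrum :", whittle_log_likelihood(x, ar1_spec))
print("white noise    :", whittle_log_likelihood(x, white_spec))
```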


Journal ArticleDOI
TL;DR: A statistical method is presented that is based on the use of multifrequency SAR raw datasets, obtained by partitioning the available raw data spectrum into subbands, and on a Bayesian estimator using Markov random fields to model the a priori distribution of the unknown images.
Abstract: We present a statistical method to solve the height estimation problem in interferometric synthetic aperture radar (InSAR) applications. It is based on the use of multifrequency SAR raw datasets obtained by partitioning the available raw data spectrum into subbands, and on a Bayesian estimator using Markov random fields to model the a priori distribution of the unknown images. The method allows recovering topographic profiles affected by strong height discontinuities and performs efficient noise rejection.

Journal ArticleDOI
TL;DR: This work defines a maximum a posteriori (MAP) estimation model using the joint prior information of the object shape and the image gray levels to realize image segmentation, and finds the algorithm to be robust to noise, able to handle multidimensional data, and able to avoid the need for explicit point correspondences during the training phase.

Journal ArticleDOI
TL;DR: Novel speech feature enhancement technique based on a probabilistic, nonlinear acoustic environment model that effectively incorporates the phase relationship (hence phase sensitive) between the clean speech and the corrupting noise in the acoustic distortion process is presented.
Abstract: This paper presents a novel speech feature enhancement technique based on a probabilistic, nonlinear acoustic environment model that effectively incorporates the phase relationship (hence phase sensitive) between the clean speech and the corrupting noise in the acoustic distortion process. The core of the enhancement algorithm is the MMSE (minimum mean square error) estimator for the log Mel power spectra of clean speech based on the phase-sensitive environment model, using highly efficient single-point, second-order Taylor series expansion to approximate the joint probability of clean and noisy speech modeled as a multivariate Gaussian. Since a noise estimate is required by the MMSE estimator, a high-quality, sequential noise estimation algorithm is also developed and presented. Both the noise estimation and speech feature enhancement algorithms are evaluated on the Aurora2 task of connected digit recognition. Noise-robust speech recognition results demonstrate that the new acoustic environment model which takes into account the relative phase in speech and noise mixing is superior to the earlier environment model which discards the phase under otherwise identical experimental conditions. The results also show that the sequential MAP (maximum a posteriori) learning for noise estimation is better than the sequential ML (maximum likelihood) learning, both evaluated under the identical phase-sensitive MMSE enhancement condition.

Journal ArticleDOI
TL;DR: This paper discusses a set of possible estimation procedures that are based on the Prony and the Pencil methods, relates them to one another, and compares them through simulations; it then presents an improvement over these methods based on the direct use of the maximum-likelihood estimator, exploiting the above methods as initialization.
Abstract: This paper discusses the problem of recovering a planar polygon from its measured complex moments. These moments correspond to an indicator function defined over the polygon's support. Previous work on this problem gave necessary and sufficient conditions for such a successful recovery process and focused mainly on the case of exact measurements being given. In this paper, we extend these results and treat the same problem in the case where a longer than necessary series of noise-corrupted moments is given. Similar to methods found in array processing, system identification, and signal processing, we discuss a set of possible estimation procedures that are based on the Prony and the Pencil methods, relate them one to the other, and compare them through simulations. We then present an improvement over these methods based on the direct use of the maximum-likelihood estimator, exploiting the above methods as initialization. Finally, we show how regularization and, thus, a maximum a posteriori probability estimator could be applied to reflect prior knowledge about the recovered polygon.

Proceedings ArticleDOI
19 Jul 2004
TL;DR: This paper proposes a multiple object tracking algorithm that seeks the optimal state sequence which maximizes the joint state-observation probability and names this algorithm trajectory tracking since it estimates the state sequence or "trajectory" instead of the current state.
Abstract: Most tracking algorithms are based on the maximum a posteriori (MAP) solution of a probabilistic framework called the hidden Markov model, where the distribution of the object state at the current time instance is estimated based on current and previous observations. However, this approach is prone to errors caused by temporal distractions such as occlusion, background clutter, and multi-object confusion. In this paper we propose a multiple object tracking algorithm that seeks the optimal state sequence which maximizes the joint state-observation probability. We name this algorithm trajectory tracking since it estimates the state sequence, or "trajectory", instead of the current state. The algorithm is capable of tracking multiple objects whose number is unknown and varies during tracking. We introduce an observation model which is composed of the original image, the foreground mask given by background subtraction, and the object detection map generated by an object detector. The image provides the object appearance information. The foreground mask enables the likelihood computation to consider the multi-object configuration in its entirety. The detection map consists of pixel-wise object detection scores, which drive the tracking algorithm to perform joint inference on both the number of objects and their configurations efficiently.
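
The "trajectory" view, taking the MAP estimate over the whole state sequence rather than filtering the current state, is exactly what Viterbi-style dynamic programming computes on a discretized state space. A minimal single-object sketch follows; the paper's joint multi-object inference and its richer observation model are not reproduced, and the toy transition and observation scores are assumptions.

```python
import numpy as np

def map_trajectory(log_obs, log_trans, log_init):
    """Viterbi dynamic programming: MAP state sequence argmax_s p(s | observations).

    log_obs   : (T, S) log-likelihood of each observation under each state,
    log_trans : (S, S) log transition probabilities,
    log_init  : (S,)  log initial state probabilities.
    """
    T, S = log_obs.shape
    delta = np.empty((T, S))
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_init + log_obs[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans          # (previous, next) state
        back[t] = np.argmax(scores, axis=0)
        delta[t] = scores[back[t], np.arange(S)] + log_obs[t]
    # Backtrack the best trajectory.
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy example: a target drifting over 5 discrete positions with noisy readings.
S = 5
trans = np.full((S, S), 1e-3) + 0.6 * np.eye(S) + 0.2 * np.eye(S, k=1) + 0.2 * np.eye(S, k=-1)
log_trans = np.log(trans / trans.sum(axis=1, keepdims=True))
log_init = np.log(np.full(S, 1.0 / S))
observations = [0, 1, 1, 2, 4, 3, 3, 4]      # noisy position readings
log_obs = np.array([[-0.5 * (o - s) ** 2 for s in range(S)] for o in observations])
print("MAP trajectory:", map_trajectory(log_obs, log_trans, log_init))
```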

Journal ArticleDOI
TL;DR: A model-based algorithm for the automatic reconstruction of building areas from single-observation meter-resolution SAR intensity data is introduced, based on the maximum a posteriori estimation by Monte Carlo methods of an optimal scene that is modeled as a set of mutually interacting Poisson-distributed marked points describing parametric building objects.
Abstract: To investigate the limits and merits of information extraction from a single high-resolution synthetic aperture radar (SAR) backscatter image, we introduce a model-based algorithm for the automatic reconstruction of building areas from single-observation meter-resolution SAR intensity data. The reconstruction is based on the maximum a posteriori estimation by Monte Carlo methods of an optimal scene that is modeled as a set of mutually interacting Poisson-distributed marked points describing parametric building objects. Each of the objects can be hierarchically decomposed into a collection of radiometrically and geometrically specified object facets that in turn get mapped into data features by ground-to-range projection and inverse Gaussian statistics. The detection of the facets is based on a likelihood ratio. Results are presented for airborne data with resolutions in the range of 0.5-2 m on urban scenes covering agglomerations of buildings. To achieve robust results for building reconstruction, the integration with data from other sources is needed.

Journal ArticleDOI
TL;DR: This paper presents an optimal estimation procedure for sound signals (such as speech) that are modeled by harmonic sources, and achieves more robust and accurate estimation of voiced speech parameters.
Abstract: Modern speech processing applications require operation on a signal of interest that is contaminated by a high level of noise. This situation calls for greater robustness in the estimation of the speech parameters, a task which is hard to achieve using standard speech models. In this paper, we present an optimal estimation procedure for sound signals (such as speech) that are modeled by harmonic sources. The harmonic model achieves more robust and accurate estimation of voiced speech parameters. Using a maximum a posteriori probability framework, successful tracking of pitch parameters is possible in ultra-low signal-to-noise conditions (as low as -15 dB). The performance of the method is evaluated using the Keele pitch detection database with realistic background noise. The results show the best performance in comparison to other state-of-the-art pitch detectors. Application of the proposed algorithm in a simple speaker identification system shows significant improvement in performance.

Journal ArticleDOI
TL;DR: The research described in this paper establishes a foundation for future development of a (four-dimensional) space-time reconstruction framework for image sequences in which a built-in deformable mesh model is used to track the image motion.
Abstract: In this paper, we explore the use of a content-adaptive mesh model (CAMM) for tomographic image reconstruction. In the proposed framework, the image to be reconstructed is first represented by a mesh model, an efficient image description based on nonuniform sampling. In the CAMM, image samples (represented as mesh nodes) are placed most densely in image regions having fine detail. Tomographic image reconstruction in the mesh domain is performed by maximum-likelihood (ML) or maximum a posteriori (MAP) estimation of the nodal values from the measured data. A CAMM greatly reduces the number of unknown parameters to be determined, leading to improved image quality and reduced computation time. We demonstrated the method in our experiments using simulated gated single photon emission computed tomography (SPECT) cardiac-perfusion images. A channelized Hotelling observer (CHO) was used to evaluate the detectability of perfusion defects in the reconstructed images, a task-based measure of image quality. A minimum description length (MDL) criterion was also used to evaluate the effect of the representation size. In our application, both MDL and CHO suggested that the optimal number of mesh nodes is roughly five to seven times smaller than the number of projection bins. When compared to several commonly used methods for image reconstruction, the proposed approach achieved the best performance, in terms of defect detection and computation time. The research described in this paper establishes a foundation for future development of a (four-dimensional) space-time reconstruction framework for image sequences in which a built-in deformable mesh model is used to track the image motion.

Journal ArticleDOI
TL;DR: This paper applies the maximum-likelihood and maximum a posteriori (MAP) techniques to simultaneously estimate the layer conductivity ratios and source signal using EEG data, and uses the classical 4-sphere model to approximate the head geometry, and assumes a known dipole source position.
Abstract: Techniques based on electroencephalography (EEG) measure the electric potentials on the scalp and process them to infer the location, distribution, and intensity of underlying neural activity. Accuracy in estimating these parameters is highly sensitive to uncertainty in the conductivities of the head tissues. Furthermore, dissimilarities among individuals are ignored when standardized values are used. In this paper, we apply the maximum-likelihood and maximum a posteriori (MAP) techniques to simultaneously estimate the layer conductivity ratios and source signal using EEG data. We use the classical 4-sphere model to approximate the head geometry, and assume a known dipole source position. The accuracy of our estimates is evaluated by comparing their standard deviations with the Cramér-Rao bound (CRB). The applicability of these techniques is illustrated with numerical examples on simulated EEG data. Our results show that the estimates have low bias and attain the CRB for a sufficiently large number of experiments. We also present numerical examples evaluating the sensitivity to imprecise assumptions on the source position and skull thickness. Finally, we propose extensions to the case of unknown source position and present examples for real data.

Journal ArticleDOI
TL;DR: The results show that the proposed PC-MRA segmentation method can segment normal vessels and vascular regions with relatively low flow rate and low signal-to-noise ratio, e.g., aneurysms and veins.
Abstract: In this paper, we present an approach to segmenting the brain vasculature in phase contrast magnetic resonance angiography (PC-MRA). According to our prior work, we can describe the overall probability density function of a PC-MRA speed image as either a Maxwell-uniform (MU) or Maxwell-Gaussian-uniform (MGU) mixture model. An automatic mechanism based on Kullback-Leibler divergence is proposed for selecting between the MGU and MU models given a speed image volume. A coherence measure, namely local phase coherence (LPC), which incorporates information about the spatial relationships between neighboring flow vectors, is defined and shown to be more robust to noise than previously described coherence measures. A statistical measure from the speed images and the LPC measure from the phase images are combined in a probabilistic framework, based on the maximum a posteriori method and Markov random fields, to estimate the posterior probabilities of vessel and background for classification. It is shown that segmentation based on both measures gives a more accurate segmentation than using either speed or flow coherence information alone. The proposed method is tested on synthetic, flow phantom and clinical datasets. The results show that the method can segment normal vessels and vascular regions with relatively low flow rate and low signal-to-noise ratio, e.g., aneurysms and veins.

Journal ArticleDOI
TL;DR: An algorithm, E-COSEM (enhanced complete-data ordered subsets expectation-maximization), is proposed for fast maximum likelihood (ML) reconstruction in emission tomography; it is founded on an incremental EM approach and is shown to converge to the ML solution.
Abstract: We propose an algorithm, E-COSEM (enhanced complete-data ordered subsets expectation-maximization), for fast maximum likelihood (ML) reconstruction in emission tomography. E-COSEM is founded on an incremental EM approach. Unlike the familiar OSEM (ordered subsets EM) algorithm, which is not convergent, we show that E-COSEM converges to the ML solution. Alternatives to OSEM include RAMLA and, for the related maximum a posteriori (MAP) problem, the BSREM and OS-SPS algorithms. These are fast and convergent, but require a judicious choice of a user-specified relaxation schedule. E-COSEM itself uses a sequence of iteration-dependent parameters (very roughly akin to relaxation parameters) to control a tradeoff between a greedy, fast but non-convergent update and a slower but convergent update. These parameters are computed automatically at each iteration and require no user specification. For the ML case, our simulations show that E-COSEM is nearly as fast as RAMLA.
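
For orientation, the classical ML-EM update that OSEM, RAMLA, and E-COSEM all accelerate or modify is a multiplicative ratio-backprojection step. Below is a hedged sketch of plain ML-EM on a toy one-dimensional system matrix, not E-COSEM itself; the scanner model and hot-spot phantom are assumptions.

```python
import numpy as np

def ml_em(A, y, n_iter=100):
    """Classical ML-EM for emission tomography: y ~ Poisson(A @ x), x >= 0.

    This is the baseline update that OSEM splits into subsets and that
    E-COSEM modifies to regain convergence; it is shown here only to fix ideas.
    """
    x = np.ones(A.shape[1])                  # positive initial image
    sens = A.sum(axis=0)                     # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        ratio = y / np.maximum(proj, 1e-12)  # measured / predicted counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update
    return x

# Toy 1D "scanner": smeared projections of a two-hot-spot activity profile.
rng = np.random.default_rng(0)
n_pix, n_det = 32, 64
A = np.exp(-0.5 * (np.arange(n_det)[:, None] / 2 - np.arange(n_pix)[None, :]) ** 2 / 1.5)
x_true = np.zeros(n_pix)
x_true[8], x_true[20] = 50, 80
y = rng.poisson(A @ x_true)
x_hat = ml_em(A, y.astype(float))
print("estimates at hot-spot pixels:", round(x_hat[8], 1), round(x_hat[20], 1))
```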

Proceedings ArticleDOI
17 May 2004
TL;DR: An iterative algorithm, based on a local Fisher scoring method and an iterative information-sharing procedure, is given for finding the maximum likelihood estimator of a commonly observed model; it relaxes the requirement of sharing all the data.
Abstract: The problem of finding the maximum likelihood estimator of a commonly observed model, based on data collected by a sensor network under power and bandwidth constraints, is considered. In particular, a case where the sensors cannot fully share their data is treated. An iterative algorithm that relaxes the requirement of sharing all the data is given. The algorithm is based on a local Fisher scoring method and an iterative information sharing procedure. The case where the sensors share sub-optimal estimates is also analyzed. The asymptotic distribution of the estimates is derived and used to provide a means of discrimination between estimates that are associated with different local maxima of the log-likelihood function. The results are validated by a simulation.
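
A toy illustration of the Fisher-scoring-plus-information-sharing idea; the model (Poisson counts with a shared log-rate), the single scalar parameter, and the exchange pattern are all assumptions made for the sketch, not the paper's algorithm. Each sensor computes a local score and local Fisher information from its own data, only those two scalars are shared, and every node applies the same global scoring update.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 4.0
# Each sensor holds its own batch of Poisson(true_rate) counts; they never
# exchange raw data, only their local score and Fisher information.
sensor_data = [rng.poisson(true_rate, size=n) for n in (15, 40, 25)]

theta = 0.0                       # shared parameter: log of the Poisson rate
for _ in range(20):               # Fisher scoring iterations
    rate = np.exp(theta)
    # Local quantities computed independently at each sensor ...
    local_scores = [x.sum() - len(x) * rate for x in sensor_data]
    local_infos = [len(x) * rate for x in sensor_data]
    # ... and only these scalars are shared and summed across the network.
    theta += sum(local_scores) / sum(local_infos)

print("estimated rate:", round(float(np.exp(theta)), 3), " true rate:", true_rate)
```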

Journal ArticleDOI
TL;DR: In this paper, the authors address the problem of estimating exponential parameters on the basis of a general progressive Type-II censored sample, from both classical and Bayesian viewpoints.
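
A minimal illustration of the classical and Bayesian estimates for an exponential failure rate under censoring, using ordinary Type-II censoring rather than the general progressive scheme of the paper: the ML estimate uses the total time on test, and a conjugate gamma prior gives the Bayes and MAP counterparts. The prior hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 30, 20                 # n units on test, stop after d failures (Type-II censoring)
true_rate = 0.5
times = np.sort(rng.exponential(1 / true_rate, size=n))
observed = times[:d]          # first d failure times
ttt = observed.sum() + (n - d) * observed[-1]     # total time on test

# Classical (maximum likelihood) estimate of the failure rate.
rate_ml = d / ttt

# Bayesian estimate with a conjugate Gamma(a, b) prior on the rate:
# the posterior is Gamma(a + d, b + ttt).
a, b = 2.0, 2.0               # illustrative prior hyperparameters
rate_post_mean = (a + d) / (b + ttt)
rate_map = (a + d - 1) / (b + ttt)

print(f"true rate {true_rate},  ML {rate_ml:.3f},  "
      f"posterior mean {rate_post_mean:.3f},  MAP {rate_map:.3f}")
```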

Proceedings ArticleDOI
17 May 2004
TL;DR: This paper proposes to increase the performance of the GMM approach (without sacrificing its simplicity) through the use of local features with embedded positional information and shows that the performance obtained is comparable to 1D HMMs.
Abstract: It has been shown previously that systems based on local features and relatively complex generative models, namely 1D hidden Markov models (HMMs) and pseudo-2D HMMs, are suitable for face recognition (here we mean both identification and verification). Recently a simpler generative model, namely the Gaussian mixture model (GMM), was also shown to perform well. In this paper we first propose to increase the performance of the GMM approach (without sacrificing its simplicity) through the use of local features with embedded positional information; we show that the performance obtained is comparable to 1D HMMs. Secondly, we evaluate different training techniques for both GMM and HMM based systems. We show that the traditionally used maximum likelihood (ML) training approach has problems estimating robust model parameters when only a few training images are available; we propose to tackle this problem through the use of maximum a posteriori (MAP) training, where the lack-of-data problem can be effectively circumvented; we show that models estimated with MAP are significantly more robust and are able to generalize to adverse conditions present in the BANCA database.
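
MAP training of a GMM from scarce data is, in its most common form, relevance-factor adaptation of a prior (world) model toward the few available client images. The sketch below adapts only the means under that scheme; the relevance factor and toy data are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def map_adapt_means(prior_means, prior_covs, prior_weights, data, relevance=16.0):
    """MAP adaptation of GMM means toward a small adaptation set.

    Each mean is pulled from its prior value toward the data mean of its
    component, with strength governed by the soft count n_k and the
    relevance factor r:  alpha_k = n_k / (n_k + r).  Weights and covariances
    are left at their prior values, as is common when data are scarce.
    """
    k, dim = prior_means.shape
    # Responsibilities of each sample under the prior (diagonal-covariance) GMM.
    log_p = np.empty((len(data), k))
    for j in range(k):
        diff = data - prior_means[j]
        log_p[:, j] = (np.log(prior_weights[j])
                       - 0.5 * np.sum(diff ** 2 / prior_covs[j] + np.log(prior_covs[j]), axis=1))
    resp = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)

    n_k = resp.sum(axis=0)                                  # soft counts per component
    data_means = (resp.T @ data) / np.maximum(n_k[:, None], 1e-12)
    alpha = (n_k / (n_k + relevance))[:, None]
    return alpha * data_means + (1 - alpha) * prior_means   # adapted means

# Toy usage: a 2-component prior model adapted with only 5 "client" samples.
prior_means = np.array([[0.0, 0.0], [5.0, 5.0]])
prior_covs = np.ones((2, 2))
prior_weights = np.array([0.5, 0.5])
rng = np.random.default_rng(0)
client = rng.normal([1.0, 0.5], 0.3, size=(5, 2))
print(map_adapt_means(prior_means, prior_covs, prior_weights, client))
```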

Journal ArticleDOI
TL;DR: This paper models the super-resolution image as a Markov random field (MRF), and a maximum a posteriori (MAP) estimation method is used to derive a cost function which is then optimized to recover the high-resolution field.

Journal ArticleDOI
TL;DR: Experimental results show that, using this method, foreground (vehicles) and nonforeground regions including the shadows of moving vehicles can be discriminated with high accuracy.
Abstract: Shadows of moving objects often obstruct robust visual tracking. In this paper, we present a car tracker based on a hidden Markov model/Markov random field (HMM/MRF)-based segmentation method that is capable of classifying each small region of an image into three different categories: vehicles, shadows of vehicles, and background from a traffic-monitoring movie. The temporal continuity of the different categories for one small region location is modeled as a single HMM along the time axis, independently of the neighboring regions. In order to incorporate spatial-dependent information among neighboring regions into the tracking process, at the state-estimation stage, the output from the HMMs is regarded as an MRF and the maximum a posteriori criterion is employed in conjunction with the MRF for optimization. At each time step, the state estimation for the image is equivalent to the optimal configuration of the MRF generated through a stochastic relaxation process. Experimental results show that, using this method, foreground (vehicles) and nonforeground regions including the shadows of moving vehicles can be discriminated with high accuracy.

Journal ArticleDOI
TL;DR: A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure, showing that the convergence problems are no longer present.
Abstract: This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can be assumed neither Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the G0 law. This paper deals with amplitude data, so the GA0 distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments and on order statistics) of the parameters of the GA0 distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternate optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.

Book ChapterDOI
26 Sep 2004
TL;DR: In this paper, a maximum a posteriori (MAP) model was used for segmentation and registration of brain MR images, and an additional hidden Markov random vector field was incorporated into the model to solve both rigid and non-rigid registration.
Abstract: Although segmentation and registration are usually considered separately in medical image analysis, they can obviously benefit a great deal from each other. In this paper, we propose a novel scheme for simultaneously solving for segmentation and registration. This is achieved by a maximum a posteriori (MAP) model. The key idea is to introduce an additional hidden Markov random vector field into the model. Both rigid and non-rigid registration have been incorporated. We have used a B-spline-based free-form deformation for the non-rigid registration case. The method has been applied to the segmentation and registration of brain MR images.

Proceedings Article
01 Dec 2004
TL;DR: It is shown that many simple Bayesian algorithms (such as Gaussian linear regression and Bayesian logistic regression) perform favorably when compared, in retrospect, to the single best model in the model class.
Abstract: We present a competitive analysis of Bayesian learning algorithms in the online learning setting and show that many simple Bayesian algorithms (such as Gaussian linear regression and Bayesian logistic regression) perform favorably when compared, in retrospect, to the single best model in the model class. The analysis does not assume that the Bayesian algorithms' modeling assumptions are "correct," and our bounds hold even if the data is adversarially chosen. For Gaussian linear regression (using logloss), our error bounds are comparable to the best bounds in the online learning literature, and we also provide a lower bound showing that Gaussian linear regression is optimal in a certain worst case sense. We also give bounds for some widely used maximum a posteriori (MAP) estimation algorithms, including regularized logistic regression.
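
The regularized logistic regression covered by those MAP bounds is simply logistic regression with a Gaussian prior on the weights, and its MAP estimate can be found by gradient ascent on the log-posterior. A minimal sketch under that assumption (the learning rate, prior variance, and toy data are illustrative):

```python
import numpy as np

def map_logistic_regression(X, y, prior_var=1.0, lr=0.1, n_iter=2000):
    """MAP estimate for logistic regression with a zero-mean Gaussian prior.

    Maximizes  sum_i log p(y_i | x_i, w) - ||w||^2 / (2 * prior_var)
    by plain gradient ascent, with labels y in {0, 1}.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        grad = X.T @ (y - p) - w / prior_var    # log-likelihood + log-prior gradient
        w += lr * grad / len(y)
    return w

# Toy example: a nearly separable 2D problem plus a bias column.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X @ np.array([2.0, -1.0]) + 0.3 * rng.normal(size=200) > 0).astype(float)
X = np.hstack([X, np.ones((200, 1))])
print("MAP weights:", np.round(map_logistic_regression(X, y), 2))
```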

Proceedings Article
01 Jan 2004
TL;DR: This paper shows that a simple probabilistic technique, namely maximum likelihood estimation, can reduce these two problems substantially when employed as the feedback aggregation strategy, and concludes that no complex exploration of the feedback is necessary.
Abstract: The problem of encouraging trustworthy behavior in P2P online communities by managing peers' reputations has drawn a lot of attention recently. However, most of the proposed solutions exhibit the following two problems: huge implementation overhead and unclear trust-related model semantics. In this paper we show that a simple probabilistic technique, namely maximum likelihood estimation, can reduce these two problems substantially when employed as the feedback aggregation strategy. Thus, no complex exploration of the feedback is necessary. Instead, simple, intuitive and efficient probabilistic estimation methods suffice.
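
As a hedged illustration of maximum-likelihood feedback aggregation (a toy formulation, not the paper's exact model): each witness files binary feedback and is assumed to misreport with a known probability, so the likelihood of a peer's underlying probability of honest behavior is a product of Bernoullis, maximized here by a simple grid search.

```python
import numpy as np

def ml_reputation(reports, lie_prob):
    """Maximum-likelihood estimate of a peer's probability of honest behavior.

    reports  : binary feedback values (1 = transaction reported as honest),
    lie_prob : probability that each reporting witness misreports.
    Each report equals 1 with probability theta*(1-l) + (1-theta)*l, so the
    likelihood is a product of Bernoullis; a coarse grid search suffices.
    """
    reports = np.asarray(reports, dtype=float)
    lie_prob = np.asarray(lie_prob, dtype=float)
    best_theta, best_ll = 0.0, -np.inf
    for theta in np.linspace(0.0, 1.0, 1001):
        p1 = theta * (1 - lie_prob) + (1 - theta) * lie_prob
        ll = np.sum(reports * np.log(p1 + 1e-12) + (1 - reports) * np.log(1 - p1 + 1e-12))
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

# Example: 8 positive and 2 negative reports, with one less reliable witness.
reports = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
lie_prob = [0.1] * 9 + [0.4]
print("estimated probability of honest behavior:", ml_reputation(reports, lie_prob))
```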