
Showing papers on "Expectation–maximization algorithm published in 2023"


Journal ArticleDOI
TL;DR: In this article, a Monte Carlo simulation was performed to compare methods for handling missing data in growth mixture models, and it was found that the Bayesian approach and two-stage multiple imputation methods generally produce less biased parameter estimates compared to maximum likelihood or single imputation methods, although key differences were observed.
Abstract: A Monte Carlo simulation was performed to compare methods for handling missing data in growth mixture models. The methods considered in the current study were (a) a fully Bayesian approach using a Gibbs sampler, (b) full information maximum likelihood using the expectation–maximization algorithm, (c) multiple imputation, (d) a two-stage multiple imputation method, and (e) listwise deletion. Of the five methods, it was found that the Bayesian approach and two-stage multiple imputation methods generally produce less biased parameter estimates compared to maximum likelihood or single imputation methods, although key differences were observed. Similarities and disparities among methods are highlighted and general recommendations articulated.

3 citations


Journal ArticleDOI
TL;DR: In this paper , a robust estimation approach based on the EM algorithm was proposed to handle various forms of heterogeneity in latent variable models, which might be a reason for their underutilization.
Abstract: Individuals routinely differ in how they present with psychiatric illnesses and in how they respond to treatment. This heterogeneity, when overlooked in data analysis, can lead to misspecified models and distorted inferences. While several methods exist to handle various forms of heterogeneity in latent variable models, their implementation in applied research requires additional layers of model crafting, which might be a reason for their underutilization. In response, we present a robust estimation approach based on the expectation-maximization (EM) algorithm. Our method makes minor adjustments to EM to enable automatic detection of population heterogeneity and to recognize individuals who are inadequately explained by the assumed model. Each individual is associated with a probability that reflects how likely their data were to have been generated from the assumed model. The individual-level probabilities are simultaneously estimated and used to weight each individual's contribution in parameter estimation. We examine the utility of our approach for Gaussian mixture models and linear factor models through several simulation studies, drawing contrasts with the EM algorithm. We demonstrate that our method yields inferences more robust to population heterogeneity or other model misspecifications than EM does. We hope that the proposed approach can be incorporated into the model-building process to improve population-level estimates and to shed light on subsets of the population that demand further attention.
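
One classical way to realize this idea of down-weighting poorly explained individuals is to augment the mixture with a uniform "background" component, so that each individual's posterior probability of belonging to the model weights its contribution to the M-step. The sketch below illustrates that general device for a Gaussian mixture; it is not the authors' exact adjustment, and the background density, component count, and data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def robust_gmm_em(X, k, n_iter=100, bg_weight=0.05, seed=0):
    """EM for a k-component GMM plus a uniform 'background' component.

    Each point's posterior probability of the background acts as an
    automatic down-weight on individuals the model explains poorly.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Uniform background density over the bounding box of the data.
    vol = np.prod(X.max(axis=0) - X.min(axis=0))
    bg_pdf = 1.0 / vol
    means = X[rng.choice(n, k, replace=False)]
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * k)
    weights = np.full(k, (1 - bg_weight) / k)
    pi_bg = bg_weight
    for _ in range(n_iter):
        # E-step: responsibilities for the k Gaussians and the background.
        dens = np.column_stack(
            [w * multivariate_normal.pdf(X, m, c)
             for w, m, c in zip(weights, means, covs)]
            + [np.full(n, pi_bg * bg_pdf)]
        )
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: the background responsibility down-weights each point.
        for j in range(k):
            r = resp[:, j]
            nj = r.sum()
            means[j] = r @ X / nj
            diff = X - means[j]
            covs[j] = (r[:, None] * diff).T @ diff / nj + 1e-6 * np.eye(d)
            weights[j] = nj / n
        pi_bg = resp[:, k].mean()
    # resp[:, :k].sum(axis=1) is each individual's "explained by the model" probability.
    return means, covs, weights, resp

X = np.vstack([np.random.default_rng(1).normal(0, 1, (200, 2)),
               np.random.default_rng(2).uniform(-8, 8, (20, 2))])  # ~10% outliers
means, covs, w, resp = robust_gmm_em(X, k=1)
print("model probability of first 5 points:", resp[:5, 0].round(3))
```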

1 citation


Journal ArticleDOI
TL;DR: In this article, a parameter estimation method for the trapezoidal Kumaraswamy model using the stochastic expectation-maximization algorithm is proposed, which effectively tackles the challenges commonly encountered in the traditional expectation-maximization algorithm.
Abstract: Extensive research has been conducted on models that utilize the Kumaraswamy distribution to describe continuous variables with bounded support. In this study, we examine the trapezoidal Kumaraswamy model. Our objective is to propose a parameter estimation method for this model using the stochastic expectation maximization algorithm, which effectively tackles the challenges commonly encountered in the traditional expectation maximization algorithm. We then apply our results to the modeling of daily COVID-19 cases in Chile.
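
The stochastic EM recipe is generic: replace the E-step's expected sufficient statistics with a single random draw of the latent labels, then run the usual M-step on the completed data. A minimal sketch follows, using a two-component Gaussian mixture as a stand-in since the trapezoidal Kumaraswamy M-step is model-specific; all settings are illustrative.

```python
import numpy as np
from scipy.stats import norm

def sem_two_gaussians(x, n_iter=200, seed=0):
    """Stochastic EM for a two-component Gaussian mixture.

    S-step: sample each label from its posterior instead of averaging,
    which helps the chain escape the flat regions and poor stationary
    points that can stall deterministic EM.
    """
    rng = np.random.default_rng(seed)
    pi, mu = 0.5, np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior probability of component 1.
        p1 = pi * norm.pdf(x, mu[1], sigma[1])
        p0 = (1 - pi) * norm.pdf(x, mu[0], sigma[0])
        post = p1 / (p0 + p1)
        # S-step: draw hard labels from the posterior.
        z = rng.random(x.size) < post
        # M-step: closed-form MLE given the sampled labels.
        for j, mask in enumerate([~z, z]):
            if mask.any():
                mu[j], sigma[j] = x[mask].mean(), max(x[mask].std(), 1e-3)
        pi = z.mean()
    return pi, mu, sigma

x = np.concatenate([np.random.default_rng(1).normal(-2, 1, 300),
                    np.random.default_rng(2).normal(3, 0.5, 200)])
print(sem_two_gaussians(x))
```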

1 citation


Journal ArticleDOI
TL;DR: In this article, an extension of multivariate nonlinear mixed-effects models (MNLMMs) is proposed that takes censored and non-ignorable missing responses into account simultaneously.
Abstract: Multivariate nonlinear mixed-effects models (MNLMMs) have become a promising tool for analyzing multi-outcome longitudinal data following nonlinear trajectory patterns. However, such a classical analysis can be challenging due to censorship induced by detection limits of the quantification assay or non-response occurring when participants missed scheduled visits intermittently or discontinued participation. This article proposes an extension of the MNLMM approach, called the MNLMM-CM, by taking the censored and non-ignorable missing responses into account simultaneously. The non-ignorable missingness is described by the selection-modeling factorization to tackle the missing not at random mechanism. A Monte Carlo expectation conditional maximization algorithm coupled with the first-order Taylor approximation is developed for parameter estimation. The techniques for the calculation of standard errors of fixed effects, estimation of unobservable random effects, imputation of censored and missing responses and prediction of future values are also provided. The proposed methodology is motivated and illustrated by the analysis of a clinical HIV/AIDS dataset with censored RNA viral loads and the presence of missing CD4 and CD8 cell counts. The superiority of our method on the provision of more adequate estimation is validated by a simulation study.

1 citation


Journal ArticleDOI
TL;DR: In this paper, a family of mixtures of multivariate skewed power exponential distributions is proposed that combines the flexibility of the MPE distribution with the ability to model skewness; a generalized expectation-maximization approach, which combines minorization-maximization and optimization based on accelerated line search algorithms on the Stiefel manifold, is used for parameter estimation.
Abstract: Families of mixtures of multivariate power exponential (MPE) distributions have already been introduced and shown to be competitive for cluster analysis in comparison to other mixtures of elliptical distributions, including mixtures of Gaussian distributions. A family of mixtures of multivariate skewed power exponential distributions is proposed that combines the flexibility of the MPE distribution with the ability to model skewness. These mixtures are more robust to variations from normality and can account for skewness, varying tail weight, and peakedness of data. A generalized expectation-maximization approach, which combines minorization-maximization and optimization based on accelerated line search algorithms on the Stiefel manifold, is used for parameter estimation. These mixtures are implemented both in the unsupervised and semi-supervised classification frameworks. Both simulated and real data are used for illustration and comparison to other mixture families.

1 citation


Journal ArticleDOI
TL;DR: In this paper, a new class of models, referred to as the Markov-switching bilinear GARCH (MS-BLGARCH) models, is proposed for modeling nonlinear and stationary time series.
Abstract: In this paper a new class of models is proposed for modeling nonlinear and stationary time series. This new class of models is referred to as the Markov-switching bilinear GARCH (MS-BLGARCH) models. In these models, the parameters are allowed to depend on an unobservable time-homogeneous and stationary Markov chain with finite state space. The statistical inference for these models is rather difficult due to the dependence on the whole regime path. We propose a recursive algorithm for parameter estimation in MS-BLGARCH models. The proposed method is useful for long time series as well as for data available in real time. The main idea is to use the maximum likelihood estimation (MLE) method and from this develop a recursive Expectation-Maximization (EM) algorithm.

1 citation


Journal ArticleDOI
TL;DR: In this article, the authors compare simulation-based approaches for estimating both the state and unknown parameters in nonlinear state-space models, and show that the combination of a CPF with Backward Simulation (BS) smoother and a Stochastic Expectation-Maximization (SEM) algorithm is a promising approach.
Abstract: This study aims at comparing simulation-based approaches for estimating both the state and unknown parameters in nonlinear state-space models. Numerical results on different toy models show that the combination of a Conditional Particle Filter (CPF) with Backward Simulation (BS) smoother and a Stochastic Expectation-Maximization (SEM) algorithm is a promising approach. The CPFBS smoother, run with a small number of particles, efficiently explores the state space and simulates relevant trajectories of the state conditionally on the observations. When combined with the SEM algorithm, it provides accurate estimates of the state and the parameters in nonlinear models where the application of EM algorithms combined with a standard particle smoother or an ensemble Kalman smoother is limited.

1 citation


Posted ContentDOI
23 May 2023
TL;DR: In this article, an end-to-end learning framework for blind single image super resolution (SISR) is proposed, which enables image restoration within a unified Bayesian framework with either full- or semi-supervision.
Abstract: Learning-based methods for blind single image super resolution (SISR) conduct the restoration by a learned mapping between high-resolution (HR) images and their low-resolution (LR) counterparts degraded with arbitrary blur kernels. However, these methods mostly require an independent step to estimate the blur kernel, leading to error accumulation between steps. We propose an end-to-end learning framework for the blind SISR problem, which enables image restoration within a unified Bayesian framework with either full- or semi-supervision. The proposed method, namely SREMN, integrates learning techniques into the generalized expectation-maximization (GEM) algorithm and infers HR images from the maximum likelihood estimation (MLE). Extensive experiments show the superiority of the proposed method in comparison to existing work and its novelty in semi-supervised learning.

Journal ArticleDOI
TL;DR: In this article, a nonparametric mixture is considered in which the component distributions are mixed together with a mixing distribution that is completely unspecified and needs to be determined from data; a two-parameter distribution family can be used, with one parameter as the mixing variable and the other controlling the smoothness of the density estimator.

Journal ArticleDOI
TL;DR: In this paper, the authors formulate the standard mixture learning problem as a Markov Decision Process (MDP) and theoretically show that the objective value of the MDP is equivalent to the log-likelihood of the observed data with a slightly different parameter space constrained by the policy.

Journal ArticleDOI
TL;DR: In this article, a gradient-based variant of the EM algorithm is proposed for the case of a truncated mixture of two balanced d-dimensional Gaussians, with global convergence guarantees when d=1 and local convergence to the true means for d>1.
Abstract: Even though data is abundant, it is often subjected to some form of censoring or truncation which inherently creates biases. Removing such biases and performing parameter estimation is a classical challenge in Statistics. In this paper, we focus on the problem of estimating the means of a mixture of two balanced d-dimensional Gaussians when the samples are prone to truncation. A recent theoretical study on the performance of the Expectation-Maximization (EM) algorithm for the aforementioned problem showed EM almost surely converges for d=1 and exhibits local convergence for d>1 to the true means. Nevertheless, the EM algorithm for the case of a truncated mixture of two Gaussians is not easy to implement, as it requires solving a set of nonlinear equations at every iteration, which makes the algorithm impractical. In this work, we propose a gradient-based variant of the EM algorithm that has global convergence guarantees when d=1 and local convergence for d>1 to the true means. Moreover, the update rule at every iteration is easy to compute, which makes the proposed method practical. We also provide numerous experiments to obtain more insights into the effect of truncation on the convergence to the true parameters in high dimensions.
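
In the untruncated balanced case 0.5 N(mu, I) + 0.5 N(-mu, I), the gradient EM update has a clean closed form: step along the M-step gradient evaluated at the current iterate, which recovers classical EM at step size 1. The sketch below shows that update in the untruncated setting (truncation, the paper's actual subject, adds correction terms not reproduced here); the step size and data are illustrative.

```python
import numpy as np

def gradient_em_symmetric_mixture(X, eta=0.5, n_iter=500, seed=0):
    """Gradient EM for 0.5*N(mu, I) + 0.5*N(-mu, I) (no truncation).

    The M-step gradient at the current iterate is
        grad = mean_i (2*w_i - 1) * x_i - mu,
    where w_i = sigmoid(2 <x_i, mu>) is the responsibility of the +mu
    component; a step of size eta=1 recovers the classical EM update.
    """
    rng = np.random.default_rng(seed)
    mu = rng.normal(size=X.shape[1])
    for _ in range(n_iter):
        w = 1.0 / (1.0 + np.exp(-2.0 * X @ mu))   # E-step responsibilities
        grad = ((2 * w - 1)[:, None] * X).mean(axis=0) - mu
        mu = mu + eta * grad                      # gradient M-step
    return mu

rng = np.random.default_rng(3)
true_mu = np.array([2.0, -1.0])
signs = rng.choice([-1.0, 1.0], size=1000)
X = signs[:, None] * true_mu + rng.normal(size=(1000, 2))
print(gradient_em_symmetric_mixture(X))  # close to +/- true_mu
```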


Posted ContentDOI
08 Apr 2023
TL;DR: In this article, a modified version of a logistic regression EM algorithm is proposed, which can substantially improve computational efficiency while preserving the monotonicity of EM and the simplicity of the EM parameter updates.
Abstract: Parameter estimation in logistic regression is a well-studied problem with the Newton-Raphson method being one of the most prominent optimization techniques used in practice. A number of monotone optimization methods including minorization-maximization (MM) algorithms, expectation-maximization (EM) algorithms and related variational Bayes approaches offer a family of useful alternatives guaranteed to increase the logistic regression likelihood at every iteration. In this article, we propose a modified version of a logistic regression EM algorithm which can substantially improve computational efficiency while preserving the monotonicity of EM and the simplicity of the EM parameter updates. By introducing an additional latent parameter and selecting this parameter to maximize the penalized observed-data log-likelihood at every iteration, our iterative algorithm can be interpreted as a parameter-expanded expectation-condition maximization either (ECME) algorithm, and we demonstrate how to use the parameter-expanded ECME with an arbitrary choice of weights and penalty function. In addition, we describe a generalized version of our parameter-expanded ECME algorithm that can be tailored to the challenges encountered in specific high-dimensional problems, and we study several interesting connections between this generalized algorithm and other well-known methods. Performance comparisons between our method, the EM algorithm, and several other optimization methods are presented using a series of simulation studies based upon both real and synthetic datasets.
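
The monotone EM family this paper builds on can be illustrated with the classical Polya-Gamma (equivalently, Jaakkola-Jordan) construction for logistic regression, in which each iteration solves a weighted least-squares system and the likelihood provably increases. This is a sketch of that baseline EM, not the authors' parameter-expanded ECME; the data and iteration count are illustrative.

```python
import numpy as np

def logistic_em(X, y, n_iter=200):
    """Monotone EM for logistic regression (Polya-Gamma latent variables).

    Each iteration solves the weighted least-squares system
        (X^T W X) beta = X^T (y - 1/2),
    with w_i = tanh(|eta_i| / 2) / (2 |eta_i|) and eta_i = x_i^T beta.
    The observed-data log-likelihood increases at every step.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        # tanh(t/2)/(2t) -> 1/4 as t -> 0; guard the removable singularity.
        w = np.where(np.abs(eta) < 1e-8, 0.25,
                     np.tanh(np.abs(eta) / 2) / (2 * np.abs(eta) + 1e-300))
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - 0.5))
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true_beta = np.array([-0.5, 1.0, 2.0])
y = (rng.random(500) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
print(logistic_em(X, y))  # approximately recovers true_beta
```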

Posted ContentDOI
24 Mar 2023
TL;DR: In this paper, a Bayesian parameter estimation scheme for mixtures of shifted asymmetric Laplace distributions is proposed, which gives better parameter estimates compared to the expectation-maximization based scheme.
Abstract: Mixtures of shifted asymmetric Laplace distributions were introduced as a tool for model-based clustering that allowed for the direct parameterization of skewness in addition to location and scale. Following common practices, an expectation-maximization algorithm was developed to fit these mixtures. However, adaptations to account for the 'infinite likelihood problem' led to fits that gave good classification performance at the expense of parameter recovery. In this paper, we propose a more valuable solution to this problem by developing a novel Bayesian parameter estimation scheme for mixtures of shifted asymmetric Laplace distributions. Through simulation studies, we show that the proposed parameter estimation scheme gives better parameter estimates compared to the expectation-maximization based scheme. In addition, we also show that the classification performance is as good, and in some cases better, than the expectation-maximization based scheme. The performance of both schemes is also assessed using well-known real data sets.

Journal ArticleDOI
TL;DR: In this article, a filtering method based on the Expectation-Maximization (EM) algorithm, employing a sliding window and polynomial fitting, is proposed that can detect outliers in different orbital elements and space events.

Journal ArticleDOI
TL;DR: In this article , the authors proposed a mixture of warped Gaussian processes (MWGP) model as well as its classification expectation-maximization (CEM) algorithm to solve general non-stationary probabilistic regression.
Abstract: The mixture of experts (ME) model is effective for multimodal data in statistics and machine learning. To treat non-stationary probabilistic regression, the mixture of Gaussian processes (MGP) model has been proposed, but it may not perform well in some cases due to the limited ability of each Gaussian process (GP) expert. Although the MGP and warped Gaussian process (WGP) models are dominant and effective for non-stationary probabilistic regression, they may not be able to handle general non-stationary probabilistic regression in practice. In this paper, we first propose the mixture of warped Gaussian processes (MWGP) model, as well as its classification expectation-maximization (CEM) algorithm, to address this problem. To overcome the local optima of the CEM algorithm, we then propose the split-and-merge CEM (SMCEM) algorithm for MWGP. Experiments on synthetic and real-world datasets show that our proposed MWGP is more effective than the models used for comparison, and that the SMCEM algorithm can overcome local optima for MWGP.

Posted ContentDOI
16 Jan 2023
TL;DR: In this paper, a Gaussian mixture model (GMM)-based channel estimator is proposed that is learned on imperfect training data, i.e., training data comprised solely of noisy and sparsely allocated pilot observations.
Abstract: In this letter, we propose a Gaussian mixture model (GMM)-based channel estimator which is learned on imperfect training data, i.e., the training data are solely comprised of noisy and sparsely allocated pilot observations. In a practical application, recent pilot observations at the base station (BS) can be utilized for training. This is in sharp contrast to state-of-the-art machine learning (ML) techniques where a training dataset consisting of perfect channel state information (CSI) samples is a prerequisite, which is generally unaffordable. In particular, we propose an adapted training procedure for fitting the GMM which is a generative model that represents the distribution of all potential channels associated with a specific BS cell. To this end, the necessary modifications of the underlying expectation-maximization (EM) algorithm are derived. Numerical results show that the proposed estimator performs close to the case where perfect CSI is available for the training and exhibits a higher robustness against imperfections in the training data as compared to state-of-the-art ML techniques.

Journal ArticleDOI
TL;DR: In this paper, a fully unsupervised network-based methodology for estimating Gaussian Mixture Models on financial time series by maximum likelihood using the Expectation-Maximization algorithm is proposed.
Abstract: We propose a fully unsupervised network-based methodology for estimating Gaussian Mixture Models on financial time series by maximum likelihood using the Expectation-Maximization algorithm. Visibility graph-structured information of observed data is used to initialize the algorithm. The proposed methodology is applied to the US wholesale electricity market. We will demonstrate that encoding time series through Visibility Graphs allows us to capture the behavior of the time series and the nonlinear interactions between observations well. The results reveal that the proposed methodology outperforms more established approaches.
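
A natural visibility graph connects two time points when the straight line between them passes above every intermediate observation; community structure in that graph can then seed the EM initialization. The following is a minimal sketch of this general pipeline under illustrative assumptions (the community-based initialization rule here is a plausible stand-in, not necessarily the paper's exact procedure):

```python
import numpy as np
from networkx import Graph
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.mixture import GaussianMixture

def natural_visibility_graph(y):
    """Natural visibility graph: i and j see each other if every point
    between them lies strictly below the line segment joining them."""
    n = len(y)
    G = Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)
            line = y[i] + (y[j] - y[i]) * (ks - i) / (j - i)
            if ks.size == 0 or np.all(y[ks] < line):
                G.add_edge(i, j)
    return G

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 150), rng.normal(5, 2, 150)])

G = natural_visibility_graph(y)
communities = list(greedy_modularity_communities(G))

# Initialize the EM means from the two largest graph communities (illustrative rule).
means_init = np.array([[y[list(c)].mean()] for c in communities[:2]])
gmm = GaussianMixture(n_components=2, means_init=means_init).fit(y.reshape(-1, 1))
print(gmm.means_.ravel())
```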

Journal ArticleDOI
01 Feb 2023-Axioms
TL;DR: In this paper, a statistical approach to combine incomplete and partially-overlapping pieces of covariance matrices that come from independent experiments is proposed, and an expectation-maximization algorithm for parameter estimation is derived.
Abstract: The generation of unprecedented amounts of data brings new challenges in data management, but also an opportunity to accelerate the identification of processes of multiple science disciplines. One of these challenges is the harmonization of high-dimensional unbalanced and heterogeneous data. In this manuscript, we propose a statistical approach to combine incomplete and partially-overlapping pieces of covariance matrices that come from independent experiments. We assume that the data are a random sample of partial covariance matrices sampled from Wishart distributions and we derive an expectation-maximization algorithm for parameter estimation. We demonstrate the properties of our method by (i) using simulation studies and (ii) using empirical datasets. In general, being able to make inferences about the covariance of variables not observed in the same experiment is a valuable tool for data analysis since covariance estimation is an important step in many statistical applications, such as multivariate analysis, principal component analysis, factor analysis, and structural equation modeling.

Journal ArticleDOI
TL;DR: In this paper, a novel thresholding approach combining Expectation Maximization (EM) and the Salp Swarm Algorithm (SSA) is developed, in which the SSA suggests potential points that help the EM algorithm escape to better positions.
Abstract: Multilevel image thresholding using Expectation Maximization (EM) is an efficient method for image segmentation. However, it has two weaknesses: 1) EM is a greedy algorithm and cannot jump out of local optima; 2) it cannot guarantee the number of required classes while estimating the histogram by Gaussian Mixture Models (GMM). In this paper, to overcome these shortcomings, a novel thresholding approach combining EM and the Salp Swarm Algorithm (SSA) is developed. The SSA suggests potential points to the EM algorithm so that it can move to a better position. Moreover, a new mechanism is considered to maintain the number of desired clusters. Twenty-four medical test images are selected and examined by standard metrics such as PSNR and FSIM. The proposed method is compared with the traditional EM algorithm, and an average improvement of 5.27% in PSNR values and 2.01% in FSIM values was recorded. Also, the proposed approach is compared with four existing segmentation techniques using CT scan images collected by Qatar University. Experimental results show that the proposed method obtains the first rank in terms of PSNR and the second rank in terms of FSIM, and that it achieves better segmentation results than the other considered state-of-the-art methods.
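
The EM half of the method can be sketched without the Salp Swarm layer: fit a GMM to the grey-level distribution and place thresholds where the most probable component changes. This is an illustrative plain-EM baseline, not the proposed hybrid; the synthetic "image" and class count are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def em_multilevel_thresholds(image, n_classes=3, seed=0):
    """Fit a GMM to pixel intensities and read thresholds off the grey
    levels where the most probable component switches (plain EM baseline;
    the paper adds Salp Swarm search on top to escape local optima)."""
    x = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_classes, random_state=seed).fit(x)
    grid = np.arange(256).reshape(-1, 1)
    labels = gmm.predict(grid)
    # A threshold sits wherever the predicted component changes.
    return [int(g) for g in grid.ravel()[1:][labels[1:] != labels[:-1]]]

rng = np.random.default_rng(1)
image = np.concatenate([rng.normal(60, 10, 4000),
                        rng.normal(130, 12, 4000),
                        rng.normal(200, 8, 2000)]).clip(0, 255)
print(em_multilevel_thresholds(image, n_classes=3))
```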

Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: In this paper, an attack-detection algorithm based on statistical learning, using the different characteristic parameters of measurement error before and after tampering, is proposed to detect and classify false data in the measurement data; it can decrease the detection time to less than 0.011883 s and correctly locate the false data with a probability of more than 95%.
Abstract: The secure operation of smart grids is closely linked to state estimates that accurately reflect the physical characteristics of the grid. However, well-designed false data injection attacks (FDIAs) can manipulate the process of state estimation by injecting malicious data into the measurement data while bypassing the detection of the security system, ultimately causing the results of state estimation to deviate from secure values. Since FDIAs tampering with the measurement data of some buses will lead to error offset, this paper proposes an attack-detection algorithm based on statistical learning according to the different characteristic parameters of measurement error before and after tampering. In order to detect and classify false data from the measurement data, in this paper, we report the model establishment and estimation of error parameters for the tampered measurement data by combining the k-means++ algorithm with the expectation maximization (EM) algorithm. At the same time, we located and recorded the bus that the attacker attempted to tamper with. In order to verify the feasibility of the algorithm proposed in this paper, the IEEE 5-bus standard test system and the IEEE 14-bus standard test system were used for simulation analysis. Numerical examples demonstrate that the combined use of the two algorithms can decrease the detection time to less than 0.011883 s and correctly locate the false data with a probability of more than 95%.
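
The core detection mechanics, fitting a two-component mixture to measurement-error statistics with k-means++ seeding and flagging measurements assigned to the offset component, can be sketched generically. The residual data below are hypothetical, and scikit-learn's built-in 'k-means++' initialization (available since version 1.1) stands in for the paper's combined k-means++/EM procedure:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical measurement residuals: honest noise plus a tampered subset
# whose error distribution is shifted by the injected false data.
honest = rng.normal(0.0, 0.02, size=450)
tampered = rng.normal(0.15, 0.05, size=50)
residuals = np.concatenate([honest, tampered]).reshape(-1, 1)

# k-means++ seeding followed by EM, mirroring the two-stage combination.
gmm = GaussianMixture(n_components=2, init_params="k-means++",
                      random_state=0).fit(residuals)

# Flag measurements assigned to the component with the larger mean offset.
attack_component = int(np.argmax(np.abs(gmm.means_.ravel())))
flags = gmm.predict(residuals) == attack_component
print(f"flagged {flags.sum()} of {len(residuals)} measurements")
```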

Posted ContentDOI
11 Jan 2023
TL;DR: In this paper, the authors propose a novel frailty model with change points that applies random effects to a Cox proportional hazards model to adjust for heterogeneity between clusters, and which can be easily analyzed using existing R packages.
Abstract: We propose a novel frailty model with change points, applying random effects to a Cox proportional hazards model to adjust for the heterogeneity between clusters. Because the frailty model includes random effects, the parameters are estimated using the expectation-maximization (EM) algorithm. Additionally, our model needs to estimate change points; we thus propose a new algorithm that extends the conventional estimation algorithm to the frailty model with change points. We show a practical example to demonstrate how to estimate the change point and random effect. Our proposed model can be easily analyzed using existing R packages. We conducted simulation studies with three scenarios to confirm the performance of our proposed model. We re-analyzed data from two clinical trials to show the difference in analysis results with and without the random effect. In conclusion, we confirmed that the frailty model with change points has a higher accuracy than the model without the random effect. Our proposed model is useful when heterogeneity needs to be taken into account. Additionally, the absence of heterogeneity did not affect the estimation of the regression coefficient parameters.

Journal ArticleDOI
TL;DR: In this paper, a COVID-19 dataset is analyzed using a combination of K-Means and Expectation-Maximization (EM) algorithms to cluster the data.
Abstract: In this paper, a COVID-19 dataset is analyzed using a combination of K-Means and Expectation-Maximization (EM) algorithms to cluster the data. The purpose of this method is to gain insight into and interpret the various components of the data. The study focuses on tracking the evolution of confirmed, death, and recovered cases from March to October 2020, using a two-dimensional dataset approach. K-Means is used to group the data into three categories: “Confirmed-Recovered”, “Confirmed-Death”, and “Recovered-Death”, and each category is modeled using a bivariate Gaussian density. The optimal value for k, which represents the number of groups, is determined using the Elbow method. The results indicate that the clusters generated by K-Means provide limited information, whereas the EM algorithm reveals the correlation between “Confirmed-Recovered”, “Confirmed-Death”, and “Recovered-Death”. The advantages of using the EM algorithm include stability in computation and improved clustering through the Gaussian Mixture Model (GMM).

Journal ArticleDOI
TL;DR: In this article, the photon counting histogram expectation-maximization (PCH-EM) algorithm is applied to a DSERN-capable quanta image sensor; the per-pixel characterization results are combined with the proposed Photon Counting Distribution (PCD) model to predict the ensemble distribution of the device, and the agreement between experimental observations and model predictions demonstrates both the applicability of the PCD model in the DSERN regime and the ability of the PCH-EM algorithm to accurately estimate the underlying model parameters.
Abstract: The Photon Counting Histogram Expectation Maximization (PCH-EM) algorithm has recently been reported as a candidate method for the characterization of Deep Sub-Electron Read Noise (DSERN) image sensors. This work describes a comprehensive demonstration of the PCH-EM algorithm applied to a DSERN capable quanta image sensor. The results show that PCH-EM is able to characterize DSERN pixels for a large span of quanta exposure and read noise values. The per-pixel characterization results of the sensor are combined with the proposed Photon Counting Distribution (PCD) model to demonstrate the ability of PCH-EM to predict the ensemble distribution of the device. The agreement between experimental observations and model predictions demonstrates both the applicability of the PCD model in the DSERN regime as well as the ability of the PCH-EM algorithm to accurately estimate the underlying model parameters.
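
At its core, the PCD is a Poisson-weighted comb of Gaussians: in electron units with unit conversion gain, an observed pixel value is x = mu + k + read noise with k ~ Poisson(H), and EM alternates between posterior probabilities over the electron count k and closed-form updates of (H, mu, sigma). The following simplified sketch makes those assumptions explicit and is not the published PCH-EM implementation:

```python
import numpy as np
from scipy.stats import norm, poisson

def pch_em(x, k_max=20, n_iter=200):
    """EM for the simplified photon counting distribution
        x_i = mu + k_i + noise,  k_i ~ Poisson(H),  noise ~ N(0, sigma^2),
    assuming unit conversion gain. Estimates quanta exposure H, dark
    offset mu, and read noise sigma from a pixel's sample values.
    """
    H, mu, sigma = 1.0, x.min(), x.std()
    ks = np.arange(k_max + 1)
    for _ in range(n_iter):
        # E-step: responsibility of each electron count k for each sample.
        g = poisson.pmf(ks, H) * norm.pdf(x[:, None], mu + ks, sigma)
        g /= g.sum(axis=1, keepdims=True)
        # M-step: closed-form updates.
        ek = g @ ks                        # E[k_i | x_i]
        H = ek.mean()
        mu = (x - ek).mean()
        sigma = np.sqrt((g * (x[:, None] - mu - ks) ** 2).sum(axis=1).mean())
    return H, mu, sigma

rng = np.random.default_rng(0)
x = 5.0 + rng.poisson(2.0, 20000) + rng.normal(0, 0.3, 20000)
print(pch_em(x))  # approximately (2.0, 5.0, 0.3)
```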

Journal ArticleDOI
TL;DR: In this article, a bag-of-models method based on a mixture of SHMMs is proposed to describe dynamic textures for dynamic texture classification, in which the codebook is constructed with SHMMs.

Posted ContentDOI
04 Jul 2023
TL;DR: In this article, a regularized expectation-maximization (EM) algorithm was proposed for Gaussian Mixture Models (GMMs), which aims to maximize a penalized GMM likelihood where regularized estimation may ensure positive definiteness of covariance matrix updates.
Abstract: The Expectation-Maximization (EM) algorithm is a widely used iterative algorithm for computing the maximum likelihood estimate when dealing with the Gaussian Mixture Model (GMM). When the sample size is smaller than the data dimension, this could lead to a singular or poorly conditioned covariance matrix and, thus, to performance reduction. This paper presents a regularized version of the EM algorithm that efficiently uses prior knowledge to cope with a small sample size. This method aims to maximize a penalized GMM likelihood where regularized estimation may ensure positive definiteness of covariance matrix updates by shrinking the estimators towards some structured target covariance matrices. Finally, experiments on real data highlight the good performance of the proposed algorithm for clustering purposes.
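
The modification is confined to the M-step: each cluster's sample covariance S_k is replaced by the shrinkage estimate (1 - rho) S_k + rho T toward a structured target T, which keeps the update positive definite even when a cluster holds fewer points than dimensions. A minimal sketch of that idea follows; the shrinkage weight and the scaled-identity target are illustrative choices, not the paper's exact penalty.

```python
import numpy as np
from scipy.stats import multivariate_normal

def regularized_gmm_em(X, k, rho=0.3, n_iter=100, seed=0):
    """EM for a GMM whose covariance M-step is shrunk toward a scaled
    identity target, keeping updates positive definite when n < d."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, k, replace=False)]
    covs = np.array([np.eye(d)] * k)
    pis = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        dens = np.column_stack([p * multivariate_normal.pdf(X, m, c)
                                for p, m, c in zip(pis, means, covs)])
        resp = dens / dens.sum(axis=1, keepdims=True)
        for j in range(k):
            r = resp[:, j]
            nj = r.sum()
            means[j] = r @ X / nj
            diff = X - means[j]
            S = (r[:, None] * diff).T @ diff / nj
            target = (np.trace(S) / d) * np.eye(d)   # structured target T
            covs[j] = (1 - rho) * S + rho * target   # shrinkage M-step
            pis[j] = nj / n
    return means, covs, pis

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (8, 20)), rng.normal(2, 1, (8, 20))])  # n << d
means, covs, pis = regularized_gmm_em(X, k=2)
print(pis.round(2), np.linalg.eigvalsh(covs[0]).min() > 0)  # updates stay PD
```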

Journal ArticleDOI
TL;DR: This paper proposes a hidden Markov model for multivariate continuous longitudinal responses with covariates that accounts for three different types of missing pattern: (i) partially missing outcomes at a given time occasion, (ii) completely missing outcomes at a given time occasion (intermittent pattern), and (iii) dropout before the end of the period of observation (monotone pattern).
Abstract: We propose a hidden Markov model for multivariate continuous longitudinal responses with covariates that accounts for three different types of missing pattern: (I) partially missing outcomes at a given time occasion, (II) completely missing outcomes at a given time occasion (intermittent pattern), and (III) dropout before the end of the period of observation (monotone pattern). The missing‐at‐random (MAR) assumption is formulated to deal with the first two types of missingness, while to account for the informative dropout, we rely on an extra absorbing state. Estimation of the model parameters is based on the maximum likelihood method that is implemented by an expectation‐maximization (EM) algorithm relying on suitable recursions. The proposal is illustrated by a Monte Carlo simulation study and an application based on historical data on primary biliary cholangitis.
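
Under MAR, type (I) missingness is handled inside the E-step simply by evaluating each hidden state's Gaussian emission density over the observed coordinates only, while a completely missing occasion contributes a constant. A small sketch of that single ingredient follows (the full EM with its recursions and the extra dropout state is beyond this snippet); the parameter values are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def emission_density(y, mu, Sigma):
    """Gaussian emission density for a possibly partially missing outcome.

    Missing coordinates (NaN) are marginalized out under MAR: the density
    is evaluated on the observed sub-vector with the corresponding
    sub-mean and sub-covariance. A fully missing occasion contributes 1.
    """
    obs = ~np.isnan(y)
    if not obs.any():
        return 1.0  # type (II): completely missing occasion
    return multivariate_normal.pdf(y[obs], mu[obs], Sigma[np.ix_(obs, obs)])

mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[1.0, 0.3, 0.0],
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 1.0]])
print(emission_density(np.array([0.5, np.nan, -0.8]), mu, Sigma))  # type (I)
print(emission_density(np.array([np.nan] * 3), mu, Sigma))         # type (II)
```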

Journal ArticleDOI
TL;DR: In this paper, the authors examined the response characteristics of the Metis™ PET/CT system by acquiring a 22Na point source at different locations in the field of view (FOV) of the scanner and reconstructing images with a small pixel size to obtain their radial, tangential, and axial full width at half maximum (FWHM).
Abstract: Positron emission tomography (PET) is a popular research topic. People are becoming more interested in PET images as they become more widely available. However, the partial volume effect (PVE) in PET images remains one of the most influential factors causing the resolution of PET images to degrade. It is possible to reduce this PVE and achieve better image quality by measuring and modeling the point spread function (PSF) and then accounting for it inside the reconstruction algorithm. In this work, we examined the response characteristics of the Metis™ PET/CT system by acquiring a 22Na point source at different locations in the field of view (FOV) of the scanner and reconstructing images with a small pixel size to obtain their radial, tangential, and axial full width at half maximum (FWHM). An image-based PSF model was then obtained by fitting asymmetric two-dimensional Gaussians to the 22Na images. This PSF model, determined by the FWHM in three directions, was integrated into a three-dimensional ordered subsets expectation maximization (3D-OSEM) algorithm based on a list-mode format to form a new PSF-OSEM algorithm. We used both algorithms to reconstruct point source, Derenzo phantom, and mouse PET images and performed qualitative and quantitative analyses. In the point source study, the PSF-OSEM algorithm reduced the FWHM of the point source PET image in three directions to about 0.67 mm, and in the phantom study, the PET image reconstructed by the PSF-OSEM algorithm had better visual quality. At the same time, the quantitative analysis results of the Derenzo phantom were better than those of the original 3D-OSEM algorithm. In the mouse experiment, the results of qualitative and quantitative analyses showed that the imaging quality of the PSF-OSEM algorithm was better than that of the 3D-OSEM algorithm. Our results show that adding the PSF model to the 3D-OSEM algorithm in the Metis™ PET/CT system helps to improve the resolution of the image and satisfies the qualitative and quantitative analysis criteria.
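
How a PSF model enters an EM-type reconstruction can be illustrated in image space with the Richardson-Lucy iteration, which is exactly the EM algorithm for Poisson data blurred by a known PSF: x <- x * H^T(y / (H x)), with H the PSF convolution operator. The sketch below uses a Gaussian PSF and is a building block, not the list-mode 3D-OSEM implementation described here; the FWHM and phantom are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(y, fwhm_px, n_iter=100):
    """Richardson-Lucy deconvolution: the EM algorithm for a Poisson
    imaging model with a known Gaussian PSF (symmetric, so H^T = H).
        x <- x * H^T( y / (H x) )
    """
    sigma = fwhm_px / 2.355          # FWHM = 2*sqrt(2*ln 2)*sigma
    x = np.full_like(y, y.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = gaussian_filter(x, sigma)
        ratio = y / np.maximum(blurred, 1e-12)
        x *= gaussian_filter(ratio, sigma)   # apply H^T to the ratio
    return x

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[20, 20] = truth[40, 44] = 1000.0       # two point sources
y = rng.poisson(gaussian_filter(truth, 2.0) + 0.1).astype(float)
x = richardson_lucy(y, fwhm_px=2.0 * 2.355)
print(np.unravel_index(x.argmax(), x.shape))  # recovers a source location
```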

Proceedings ArticleDOI
04 Jun 2023
TL;DR: In this article, a regularized expectation-maximization (EM) algorithm for Gaussian mixture models (GMMs) is proposed to ensure positive definiteness of covariance matrix updates and shrink the estimators towards some structured target covariance matrices.
Abstract: The Expectation-Maximization (EM) algorithm is a widely used iterative algorithm for computing the (local) maximum likelihood estimate (MLE). It can be used in an extensive range of problems, including the clustering of data based on the Gaussian mixture model (GMM). Numerical instability and convergence problems may arise in situations where the sample size is not much larger than the data dimensionality. In such low sample support (LSS) settings, the covariance matrix update in the EM-GMM algorithm may become singular or poorly conditioned, causing the algorithm to crash. On the other hand, in many signal processing problems, a priori information can be available indicating certain structures for different cluster covariance matrices. In this paper, we present a regularized EM algorithm for GMMs that can make efficient use of such prior knowledge as well as cope with LSS situations. The method aims to maximize a penalized GMM likelihood where regularized estimation may be used to ensure positive definiteness of covariance matrix updates and shrink the estimators towards some structured target covariance matrices. We show that the theoretical guarantees of convergence hold, leading to a better-performing EM algorithm for structured covariance matrix models or with low sample settings.