Showing papers on "Expectation–maximization algorithm published in 2021"


Journal ArticleDOI
TL;DR: In this article, the authors relax the normality assumption on the random effects and model errors of the linear mixed model by letting them follow a skew-normal distribution, which includes normality as a special case and provides flexibility in capturing a broad range of non-normal behavior.
Abstract: Normality (symmetry) of the random effects and the within-subject errors is a routine assumption for the linear mixed model, but it may be unrealistic, obscuring important features of among- and within-subject variation. We relax this assumption by considering that the random effects and model errors follow skew-normal distributions, which include normality as a special case and provide flexibility in capturing a broad range of non-normal behavior. The marginal distribution of the observed quantity is derived in closed form, so inference may be carried out using existing statistical software and standard optimization techniques. We also implement an EM-type algorithm, which seems to provide some advantages over a direct maximization of the likelihood. Results of simulation studies and applications to real data sets are reported.
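As a point of reference, the univariate skew-normal density has the following standard form (background notation, not quoted from the paper); setting the skewness parameter λ to zero recovers the normal density:

```latex
f(y \mid \mu, \sigma^2, \lambda)
  = \frac{2}{\sigma}\,
    \phi\!\left(\frac{y-\mu}{\sigma}\right)
    \Phi\!\left(\lambda\,\frac{y-\mu}{\sigma}\right),
```

where φ and Φ denote the standard normal density and distribution function, respectively.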

193 citations


Book
19 Nov 2021
TL;DR: This book presents statistical methods for handling incomplete data, covering likelihood-based approaches, the EM algorithm and Monte Carlo computation, imputation and variance estimation after imputation, propensity scoring, nonignorable missing data, applications to survey sampling, and statistical matching.
Abstract: Table of contents: Introduction (Introduction; Outline; How to Use This Book). Likelihood-Based Approach (Introduction; Observed Likelihood; Mean Score Approach; Observed Information). Computation (Introduction; Factoring Likelihood Approach; EM Algorithm; Monte Carlo Computation; Monte Carlo EM; Data Augmentation). Imputation (Introduction; Basic Theory for Imputation; Variance Estimation after Imputation; Replication Variance Estimation; Multiple Imputation; Fractional Imputation). Propensity Scoring Approach (Introduction; Regression Weighting Method; Propensity Score Method; Optimal Estimation; Doubly Robust Method; Empirical Likelihood Method; Nonparametric Method). Nonignorable Missing Data (Nonresponse Instrument; Conditional Likelihood Approach; Generalized Method of Moments (GMM) Approach; Pseudo Likelihood Approach; Exponential Tilting (ET) Model; Latent Variable Approach; Callbacks; Capture-Recapture (CR) Experiment). Longitudinal and Clustered Data (Ignorable Missing Data; Nonignorable Monotone Missing Data; Past-Value-Dependent Missing Data; Random-Effect-Dependent Missing Data). Application to Survey Sampling (Introduction; Calibration Estimation; Propensity Score Weighting Method; Fractional Imputation; Fractional Hot Deck Imputation; Imputation for Two-Phase Sampling; Synthetic Imputation). Statistical Matching (Introduction; Instrumental Variable Approach; Measurement Error Models; Causal Inference). Bibliography. Index.

133 citations


Journal ArticleDOI
TL;DR: A lightweight single image super-resolution network with an expectation-maximization attention mechanism (EMASRN) for better balancing performance and applicability and the experimental results demonstrate the superiority of the EMASRN over state-of-the-art lightweight SISR methods in terms of both quantitative metrics and visual quality.
Abstract: In recent years, with the rapid development of deep learning, super-resolution methods based on convolutional neural networks (CNNs) have made great progress. However, the parameters and the required consumption of computing resources of these methods are also increasing to the point that such methods are difficult to implement on devices with low computing power. To address this issue, we propose a lightweight single image super-resolution network with an expectation-maximization attention mechanism (EMASRN) for better balancing performance and applicability. Specifically, a progressive multi-scale feature extraction block (PMSFE) is proposed to extract feature maps of different sizes. Furthermore, we propose an HR-size expectation-maximization attention block (HREMAB) that directly captures the long-range dependencies of HR-size feature maps. We also utilize a feedback network to feed the high-level features of each generation into the next generation's shallow network. Compared with the existing lightweight single image super-resolution (SISR) methods, our EMASRN reduces the number of parameters by almost one-third. The experimental results demonstrate the superiority of our EMASRN over state-of-the-art lightweight SISR methods in terms of both quantitative metrics and visual quality. The source code can be downloaded at https://github.com/xyzhu1/EMASRN.
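As a rough illustration of the expectation-maximization attention idea, here is a generic NumPy sketch (not the paper's HREMAB block; the shapes, number of bases, and iteration count are assumptions): the E-step computes softmax responsibilities of feature vectors to a small set of bases, and the M-step re-estimates the bases as responsibility-weighted averages.

```python
import numpy as np

def em_attention(x, num_bases=8, num_iters=3, temperature=1.0):
    """Generic EM-attention sketch: x has shape (n_pixels, channels)."""
    n, c = x.shape
    rng = np.random.default_rng(0)
    mu = rng.standard_normal((num_bases, c))            # initial bases
    mu /= np.linalg.norm(mu, axis=1, keepdims=True)
    for _ in range(num_iters):
        # E-step: responsibilities of each pixel for each basis.
        logits = x @ mu.T / temperature                  # (n, num_bases)
        logits -= logits.max(axis=1, keepdims=True)
        z = np.exp(logits)
        z /= z.sum(axis=1, keepdims=True)
        # M-step: bases as responsibility-weighted averages of the features.
        mu = (z.T @ x) / (z.sum(axis=0)[:, None] + 1e-8)
        mu /= np.linalg.norm(mu, axis=1, keepdims=True) + 1e-8
    # Low-rank reconstruction of the feature map from the learned bases.
    return z @ mu                                        # (n, channels)

features = np.random.rand(64 * 64, 32)
attended = em_attention(features)
print(attended.shape)  # (4096, 32)
```

Because attention is computed against a handful of bases rather than all pixel pairs, the cost grows linearly in the number of pixels, which is what makes this mechanism attractive for lightweight networks.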

68 citations


Journal ArticleDOI
01 Jan 2021
TL;DR: A forward–backward splitting algorithm is proposed to integrate deep learning into maximum-a-posteriori (MAP) positron emission tomography (PET) image reconstruction; the studied U-Net denoising method achieved a comparable performance to a representative implementation of the FBSEM net.
Abstract: We propose a forward–backward splitting algorithm to integrate deep learning into maximum-a-posteriori (MAP) positron emission tomography (PET) image reconstruction. The MAP reconstruction is split into regularization, expectation–maximization (EM), and a weighted fusion. For regularization, the use of either a Bowsher prior (using Markov random fields) or a residual learning unit (using convolutional neural networks) was considered. For the latter, our proposed forward–backward splitting EM (FBSEM), accelerated with ordered subsets (OS), was unrolled into a recurrent neural network in which network parameters (including regularization strength) are shared across all states and learned during PET reconstruction. Our network was trained and evaluated using PET-only (FBSEM-p) and PET-MR (FBSEM-pm) datasets for low-dose simulations and short-duration in-vivo brain imaging. It was compared to OSEM, Bowsher MAPEM, and a post-reconstruction U-Net denoising trained on the same PET-only (Unet-p) or PET-MR (Unet-pm) datasets. For simulations, FBSEM-p(m) and Unet-p(m) nets achieved a comparable performance of, on average, 144% and 134% normalized root-mean-square error (NRMSE), respectively; and both outperformed the OSEM and MAPEM methods (with 207% and 177% NRMSE, respectively). For in-vivo datasets, the FBSEM-p(m), Unet-p(m), MAPEM, and OSEM methods achieved average root-sum-of-squared errors of 39%, 57%, 59%, and 78% in different brain regions, respectively. In conclusion, the studied U-Net denoising method achieved a comparable performance to a representative implementation of the FBSEM net.
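For background, the classical ML-EM image update that OSEM and MAP-EM reconstructions build on has the standard multiplicative form (generic notation, not specific to this paper):

```latex
x_j^{(k+1)}
  \;=\;
  \frac{x_j^{(k)}}{\sum_i a_{ij}}
  \sum_i a_{ij}\,
  \frac{y_i}{\sum_{j'} a_{ij'}\, x_{j'}^{(k)}},
```

where y is the measured data, x^(k) the current image estimate, and a_ij the system matrix. Ordered subsets apply this update to subsets of the data, and MAP variants incorporate the prior, for instance through a separate regularization step fused with the EM step as in the splitting described above.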

62 citations


Journal ArticleDOI
TL;DR: Based on the doubly interval-censored data model, Wang et al., as discussed by the authors, estimate the parameters of the incubation period of COVID-19 using maximum likelihood estimation, the expectation maximization algorithm, and a newly proposed algorithm (the expectation mostly conditional maximization algorithm, referred to as ECIMM).
Abstract: With the spread of the novel coronavirus disease 2019 (COVID-19) around the world, the estimation of the incubation period of COVID-19 has become a hot issue. Based on the doubly interval-censored data model, we assume that the incubation period follows lognormal and Gamma distributions, and estimate the parameters of the incubation period of COVID-19 by adopting maximum likelihood estimation, the expectation maximization algorithm, and a newly proposed algorithm (the expectation mostly conditional maximization algorithm, referred to as ECIMM). The main innovation of this paper lies in two aspects: firstly, we regard the sample data on the incubation period as doubly interval-censored data without unnecessary data simplification, to improve the accuracy and credibility of the results; secondly, our new ECIMM algorithm enjoys better convergence and universality compared with others. Within the framework of this paper, we conclude that a 14-day quarantine period can largely interrupt the transmission of COVID-19; however, people who need special monitoring should be isolated for about 20 days for the sake of safety. The results provide some suggestions for the prevention and control of COVID-19. The newly proposed ECIMM algorithm can also be used to deal with the doubly interval-censored data model appearing in various fields.
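A minimal sketch of direct maximum likelihood for doubly interval-censored incubation data (assuming a lognormal incubation period and an exposure time uniform on its reporting window; the records and starting values below are made up, and this is plain numerical MLE, not the paper's ECIMM algorithm):

```python
import numpy as np
from scipy import stats, integrate, optimize

# Hypothetical doubly interval-censored records:
# (exposure_left, exposure_right, onset_left, onset_right), in days.
records = [
    (0.0, 2.0, 4.0, 6.0),
    (0.0, 1.0, 5.0, 7.0),
    (1.0, 3.0, 6.0, 8.0),
    (0.0, 4.0, 7.0, 9.0),
]

def neg_log_likelihood(params, data):
    """Lognormal incubation period; exposure assumed uniform on its window."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    nll = 0.0
    for e_l, e_r, s_l, s_r in data:
        # P(onset falls in its window | exposure at e), averaged over the exposure window.
        def integrand(e):
            return dist.cdf(max(s_r - e, 0.0)) - dist.cdf(max(s_l - e, 0.0))
        lik, _ = integrate.quad(integrand, e_l, e_r)
        nll -= np.log(max(lik / (e_r - e_l), 1e-300))
    return nll

res = optimize.minimize(neg_log_likelihood, x0=[np.log(5.0), np.log(0.5)],
                        args=(records,), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print("median incubation:", np.exp(mu_hat), "days; sigma:", sigma_hat)
```

An EM-style treatment of the same problem would instead introduce the unobserved exact exposure and onset times as latent variables rather than integrating them out numerically.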

59 citations


Journal ArticleDOI
TL;DR: The results indicate that the proposed Bayesian dynamic linear model-based approach for detecting anomalies in structural health monitoring data exhibits good accuracy and high computational efficiency and also allows for reconstructing the strain measurements to replace anomalies.
Abstract: Enormous data are continuously collected by the structural health monitoring system of civil infrastructures. The structural health monitoring data inevitably involve anomalies caused by sensors, t...

43 citations


Journal ArticleDOI
TL;DR: In this article, multi-speaker tracking is cast as the problem of maximizing the posterior joint distribution of a set of continuous and discrete latent variables given the past and current observations; a variational inference model is proposed that approximates this intractable joint distribution with a factorized distribution, and the solution takes the form of a closed-form expectation maximization procedure.
Abstract: In this article, we address the problem of tracking multiple speakers via the fusion of visual and auditory information. We propose to exploit the complementary nature and roles of these two modalities in order to accurately estimate smooth trajectories of the tracked persons, to deal with the partial or total absence of one of the modalities over short periods of time, and to estimate the acoustic status–either speaking or silent–of each tracked person over time. We propose to cast the problem at hand into a generative audio-visual fusion (or association) model formulated as a latent-variable temporal graphical model. This may well be viewed as the problem of maximizing the posterior joint distribution of a set of continuous and discrete latent variables given the past and current observations, which is intractable. We propose a variational inference model which amounts to approximate the joint distribution with a factorized distribution. The solution takes the form of a closed-form expectation maximization procedure. We describe in detail the inference algorithm, we evaluate its performance and we compare it with several baseline methods. These experiments show that the proposed audio-visual tracker performs well in informal meetings involving a time-varying number of people.

41 citations


Journal ArticleDOI
TL;DR: A multi-phase degradation model with jumps based on the Wiener process is formulated to describe the multi-phase degradation pattern, and a simple yet effective algorithm is proposed for obtaining the change-point locations, which are critical for remaining useful life prediction.

39 citations


Journal ArticleDOI
TL;DR: A novel Bayesian framework based on the Kalman filter is proposed that does not need a predefined model and can adapt itself to different ECG morphologies; it is compared with several popular ECG denoising methods such as wavelet transform and empirical mode decomposition.
Abstract: Model-based Bayesian frameworks have proved their effectiveness in the field of ECG processing. However, their performance relies heavily on the pre-defined models extracted from ECG signals. Furthermore, their performance decreases substantially when ECG signals do not comply with their models, a situation that generally occurs in the case of arrhythmia. In this paper, we propose a novel Bayesian framework based on the Kalman filter, which does not need a predefined model and can adapt itself to different ECG morphologies. Compared with previous Bayesian techniques, the proposed method requires much less preprocessing and only needs to know the locations of R-peaks to start ECG processing. Our method uses a filter bank comprised of two adaptive Kalman filters, one for denoising the QRS complex (high-frequency section) and another for denoising the P and T waves (low-frequency section). The parameters of these filters are estimated and iteratively updated using the expectation maximization (EM) algorithm. In order to deal with nonstationary noises such as muscle artifact (MA) noise, we used Bryson and Henrikson's technique for the prediction and update steps inside the Kalman filter bank. We evaluated the performance of the proposed method on different ECG databases containing signals having morphological changes and abnormalities such as atrial premature complex (APC), premature ventricular contractions (PVC), ventricular tachyarrhythmia (VT), and sudden cardiac death (SCD). The proposed algorithm was compared with several popular ECG denoising methods such as wavelet transform (WT) and empirical mode decomposition (EMD). The comparison results showed that the proposed method performs well in the presence of various ECG morphologies in both stationary and non-stationary environments, especially at low input SNRs.
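As general background on EM for linear-Gaussian state-space models (textbook-style updates under x_t = F x_{t-1} + w_t, y_t = H x_t + v_t, not the paper's specific ECG dynamics), the M-step re-estimates the noise covariances from smoothed state moments:

```latex
\hat{Q} = \frac{1}{T} \sum_{t=1}^{T}
  \mathbb{E}\!\left[(x_t - F x_{t-1})(x_t - F x_{t-1})^{\top} \mid y_{1:T}\right],
\qquad
\hat{R} = \frac{1}{T} \sum_{t=1}^{T}
  \mathbb{E}\!\left[(y_t - H x_t)(y_t - H x_t)^{\top} \mid y_{1:T}\right],
```

where the expectations are evaluated using the smoothed means and covariances produced in the E-step by a Kalman (RTS) smoother; alternating these two steps is the general pattern behind "estimated and iteratively updated using the EM algorithm".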

37 citations


Journal ArticleDOI
TL;DR: An FD method based on the expectation–maximization (EM) algorithm and a Bayesian network (BN), called the EM-BN method, is proposed; it significantly reduces model complexity and improves computational efficiency, particularly under missing multivariate data.

33 citations


Journal ArticleDOI
TL;DR: A neural-network architecture is introduced that solves convolutional dictionary learning problems, thus establishing a link between dictionary learning and neural networks; it is demonstrated in an image-denoising task that CRsAE learns Gabor-like filters and that the EM-inspired approach for learning biases is superior to the conventional approach.
Abstract: We introduce a neural-network architecture, termed the constrained recurrent sparse autoencoder (CRsAE), that solves convolutional dictionary learning problems, thus establishing a link between dictionary learning and neural networks. Specifically, we leverage the interpretation of the alternating-minimization algorithm for dictionary learning as an approximate expectation-maximization algorithm to develop autoencoders that enable the simultaneous training of the dictionary and regularization parameter (ReLU bias). The forward pass of the encoder approximates the sufficient statistics of the E-step as the solution to a sparse coding problem, using an iterative proximal gradient algorithm called FISTA. The encoder can be interpreted either as a recurrent neural network or as a deep residual network, with two-sided ReLU nonlinearities in both cases. The M-step is implemented via a two-stage backpropagation. The first stage relies on a linear decoder applied to the encoder and a norm-squared loss. It parallels the dictionary update step in dictionary learning. The second stage updates the regularization parameter by applying a loss function to the encoder that includes a prior on the parameter motivated by Bayesian statistics. We demonstrate in an image-denoising task that CRsAE learns Gabor-like filters and that the EM-inspired approach for learning biases is superior to the conventional approach. In an application to recordings of electrical activity from the brain, we demonstrate that CRsAE learns realistic spike templates and speeds up the process of identifying spike times by 900× compared with algorithms based on convex optimization.
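A minimal sketch of the sparse coding step that the encoder's forward pass approximates (plain ISTA with soft-thresholding rather than the paper's FISTA-based CRsAE implementation; the dictionary, signal, and regularization weight below are made up):

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of the l1 norm (realizable with two-sided ReLUs)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(D, y, lam, num_iters=200):
    """Minimize 0.5*||y - D z||^2 + lam*||z||_1 by proximal gradient descent."""
    step = 1.0 / np.linalg.norm(D, ord=2) ** 2       # 1 / Lipschitz constant
    z = np.zeros(D.shape[1])
    for _ in range(num_iters):
        grad = D.T @ (D @ z - y)                      # gradient of the data term
        z = soft_threshold(z - step * grad, step * lam)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                        # unit-norm dictionary atoms
z_true = np.zeros(128); z_true[[3, 40, 77]] = [1.5, -2.0, 0.8]
y = D @ z_true + 0.01 * rng.standard_normal(64)
z_hat = ista(D, y, lam=0.1)
print("nonzeros recovered:", np.flatnonzero(np.abs(z_hat) > 0.1))
```

In CRsAE, the analogous iterations are unrolled into the encoder, with the thresholding realized by two-sided ReLU nonlinearities and a linear decoder handling the dictionary update during backpropagation.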

Journal ArticleDOI
TL;DR: In this paper, the authors employ a Bayesian approach by specifying a prior distribution for the variances of unique factors and derive a model selection criterion for evaluating a Bayesian factor analysis model.
Abstract: In maximum likelihood exploratory factor analysis, the estimates of unique variances can often turn out to be zero or negative, which makes no sense from a statistical point of view. In order to overcome this difficulty, we employ a Bayesian approach by specifying a prior distribution for the variances of unique factors. The factor analysis model is estimated by an EM algorithm, for which we provide the expectation and maximization steps within a general framework of EM algorithms. Crucial issues in the Bayesian factor analysis model are the choice of adjusted parameters, including the number of factors, and also the hyper-parameters for the prior distribution. The choice of these parameters can be viewed as a model selection and evaluation problem. We derive a model selection criterion for evaluating a Bayesian factor analysis model. Monte Carlo simulations are conducted to investigate the effectiveness of the proposed procedure. A real data example is also given to illustrate our procedure. We observe that our modeling procedure prevents the occurrence of improper solutions and also chooses the appropriate number of factors objectively.
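For context, a standard statement of the exploratory factor analysis model behind this EM scheme (generic notation, not quoted from the paper) is:

```latex
x_i = \mu + \Lambda f_i + \varepsilon_i, \qquad
f_i \sim N_m(0, I_m), \quad
\varepsilon_i \sim N_p(0, \Psi), \quad
\Psi = \operatorname{diag}(\psi_1, \dots, \psi_p),
```

where Λ holds the factor loadings and the ψ_j are the unique variances. It is these ψ_j that can collapse to zero or turn negative (improper, "Heywood-type" solutions) under unpenalized maximum likelihood, which is what placing a prior on them is meant to prevent.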

Journal ArticleDOI
TL;DR: This work shows that, thanks to the low-rank structure of the common component, the factors and factor loadings can be estimated consistently with the missing values replaced by zeros, and proposes a cross-validation-based method to determine the number of factors in factor models with or without missing values.

Journal ArticleDOI
TL;DR: In this paper, a latent-class approach is used to jointly model wave and wind data by a mixture of conditionally independent Gamma and von Mises distributions, which is validated on hourly marine data obtained from a buoy and two tide gauges in the Adriatic Sea.
Abstract: Identification of representative regimes of wave height and direction under different wind conditions is complicated by issues that relate to the specification of the joint distribution of variables that are defined on linear and circular supports and the occurrence of missing values. We take a latent-class approach and jointly model wave and wind data by a finite mixture of conditionally independent Gamma and von Mises distributions. Maximum-likelihood estimates of parameters are obtained by exploiting a suitable EM algorithm that allows for missing data. The proposed model is validated on hourly marine data obtained from a buoy and two tide gauges in the Adriatic Sea.

Journal ArticleDOI
TL;DR: Modern scientific studies often collect data sets in the form of tensors, which call for innovative statistical analysis methods; in particular, there is a pressing need for tensor cluster analysis.
Abstract: Modern scientific studies often collect datasets in the form of tensors. These datasets call for innovative statistical analysis methods. In particular, there is a pressing need for tensor clusteri...

Journal ArticleDOI
TL;DR: In this paper, a message passing based algorithm, termed temporal-structure-assisted gradient aggregation (TSA-GA), is proposed to solve the model aggregation problem in federated edge learning.
Abstract: In this paper, we investigate over-the-air model aggregation in a federated edge learning (FEEL) system. We introduce a Markovian probability model to characterize the intrinsic temporal structure of the model aggregation series. With this temporal probability model, we formulate the model aggregation problem, from a Bayesian perspective, as inferring the desired aggregated update given all the past observations. We develop a message passing based algorithm, termed temporal-structure-assisted gradient aggregation (TSA-GA), to fulfil this estimation task with low complexity and near-optimal performance. We further establish the state evolution (SE) analysis to characterize the behaviour of the proposed TSA-GA algorithm, and derive an explicit bound on the expected loss reduction of the FEEL system under certain standard regularity conditions. In addition, we develop an expectation maximization (EM) strategy to learn the unknown parameters in the Markovian model. We show that the proposed TSA-GA significantly outperforms the state-of-the-art analog compression scheme, and is able to achieve learning performance comparable to the error-free benchmark in terms of final test accuracy.

Journal ArticleDOI
TL;DR: A generic Gaussian Bayesian network based soft-sensor framework is developed, which can account for multiple hidden states and multirate/missing data and will allow users to integrate prior knowledge into the BN structure.

Journal ArticleDOI
TL;DR: This paper investigates a novel mixture of probabilistic PCA with three clustering approaches for process monitoring; the effectiveness of the proposed approach is demonstrated on a practical coal pulverizing system.

Journal ArticleDOI
TL;DR: A novel robust Gaussian approximation smoother based on expectation–maximization (EM) algorithm is proposed for cooperative localization (CL) with faulty Doppler velocity log (DVL) and heavy-tailed measurement noise that can be used as a backup algorithm for CL in cases where DVL is not available due to failure.
Abstract: In this article, a novel robust Gaussian approximation smoother based on the expectation–maximization (EM) algorithm is proposed for cooperative localization (CL) with a faulty Doppler velocity log (DVL) and heavy-tailed measurement noise. In our model, the autonomous underwater vehicle (AUV) velocity information that is not available due to DVL failure and the bias in the underwater acoustic modem are considered as unknown inputs. Then, the Student's t distribution is used to model the heavy-tailed measurement noise. An EM algorithm is also developed for the state-space model with heavy-tailed measurement noise. The state, noise covariance matrices, and auxiliary random variables are regarded as hidden variables to obtain the maximum likelihood estimate of the unknown inputs. A Gaussian smoother, in which the modified process and measurement noise covariances are inferred by a variational Bayesian (VB) approach, is applied to estimate the state. The experimental results illustrate that, given the heavy-tailed noise, the proposed method estimates the unknown input and the state with a high level of accuracy. The proposed algorithm can be used as a backup algorithm for CL in cases where the DVL is not available due to failure.
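For reference, the Student's t noise model used here admits a Gaussian scale-mixture representation (a standard identity, which is what lets the auxiliary precisions be treated as hidden variables in the EM/VB scheme):

```latex
\mathrm{St}(v_t \mid 0, R, \nu)
  \;=\; \int_0^{\infty}
  N\!\left(v_t \,\middle|\, 0, \tfrac{R}{\lambda_t}\right)
  \mathrm{Gamma}\!\left(\lambda_t \,\middle|\, \tfrac{\nu}{2}, \tfrac{\nu}{2}\right) \mathrm{d}\lambda_t,
```

so each measurement carries its own latent precision λ_t; small inferred λ_t effectively downweights outlying measurements in the smoother.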

Journal ArticleDOI
Zan Li, Fengming Wang, Chengjie Wang, Qingpei Hu, Dan Yu
TL;DR: General reliability inference approaches involving the joint likelihood function are developed for the LDDP with repeated measurements, based on the expectation maximization (EM) and stochastic EM algorithms, along with numerical simulations and a practical application based on real data.

Journal ArticleDOI
TL;DR: The authors proposed a pseudo-likelihood to estimate the covariate effects on the marginal probabilities of the outcomes, in addition to the association parameters and missingness parameters for longitudinal binary data with non-monotone non-ignorable missing outcomes over time.
Abstract: For longitudinal binary data with non-monotone non-ignorable missing outcomes over time, a full likelihood approach is complicated algebraically, and maximum likelihood estimation can be computationally prohibitive with many times of follow-up. We propose pseudo-likelihoods to estimate the covariate effects on the marginal probabilities of the outcomes, in addition to the association parameters and missingness parameters. The pseudo-likelihood requires specification of the distribution for the data at all pairs of times on the same subject, but makes no assumptions about the joint distribution of the data at three or more times on the same subject, so the method can be considered semi-parametric. If using maximum likelihood, the full likelihood must be correctly specified in order to obtain consistent estimates. We show in simulations that our proposed pseudo-likelihood produces a more efficient estimate of the regression parameters than the pseudo-likelihood for non-ignorable missingness proposed by Troxel et al. (1998). Application to data from the Six Cities study (Ware et al., 1984), a longitudinal study of the health effects of air pollution, is discussed.
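In generic form (standard pairwise pseudo-likelihood notation rather than the paper's exact expression), the objective sums log bivariate densities over all pairs of time points for each subject:

```latex
\ell_{\mathrm{pair}}(\theta)
  \;=\; \sum_{i=1}^{n} \sum_{j<k}
  \log f\!\left(y_{ij}, y_{ik}, r_{ij}, r_{ik}; \theta\right),
```

where the y's are the binary outcomes and the r's the corresponding missingness indicators, so only bivariate distributions ever need to be specified rather than the full joint distribution over all follow-up times.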

Journal ArticleDOI
TL;DR: In this article, a Gamma mixture based channel modeling for the terahertz (THz) band via the expectation-maximization (EM) algorithm is proposed, where maximum likelihood estimation (MLE) is applied to characterize the Gamma mixture model parameters, and then EM algorithm is used to compute MLEs of the unknown parameters of the measurement data.
Abstract: With the recent developments on opening the terahertz (THz) spectrum for experimental purposes by the Federal Communications Commission, transceivers operating in the range of 0.1 THz to 10 THz, which are known as THz bands, will enable ultra-high-throughput wireless communications. However, actual implementation of high-speed and high-reliability THz band communication systems should start with providing extensive knowledge of the propagation channel characteristics. Considering the huge bandwidth and the rapid changes in the characteristics of THz wireless channels, ray tracing and one-shot statistical modeling are not adequate to define an accurate channel model. In this work, we propose Gamma mixture based channel modeling for the THz band via the expectation-maximization (EM) algorithm. First, maximum likelihood estimation (MLE) is applied to characterize the Gamma mixture model parameters, and then the EM algorithm is used to compute MLEs of the unknown parameters of the measurement data. The accuracy of the proposed model is investigated by using the weighted relative mean difference (WMRD) error metric, Kullback-Leibler (KL) divergence, and the Kolmogorov-Smirnov (KS) test to show the difference between the proposed model and the actual probability density functions (PDFs) that are obtained via the designed test environment. To efficiently evaluate the performance of the proposed method in more realistic scenarios, all the analysis is done by examining measurement data from a measurement campaign in the 240 GHz to 300 GHz frequency range, using a well-isolated anechoic chamber. According to the WMRD error metric, KL divergence, and KS test results, PDFs generated by the mixture of Gamma distributions fit the actual histogram of the measurement data. It is shown that instead of taking pseudo-average characteristics of sub-bands in the wide band, using the mixture models allows for determining channel parameters more precisely.
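A compact sketch of EM for a two-component Gamma mixture (using a weighted method-of-moments M-step as a simplification of the full MLE update for the shape parameter; the data below are synthetic, not the paper's channel measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic "channel gain" samples drawn from two Gamma components.
x = np.concatenate([rng.gamma(2.0, 1.0, 600), rng.gamma(9.0, 0.5, 400)])

K = 2
pi = np.full(K, 1.0 / K)
shape = np.array([1.0, 5.0])
scale = np.array([np.mean(x), np.mean(x) / 5.0])

for _ in range(200):
    # E-step: responsibilities under the current Gamma components.
    dens = np.stack([stats.gamma.pdf(x, a=shape[k], scale=scale[k]) for k in range(K)])
    weighted = pi[:, None] * dens
    resp = weighted / weighted.sum(axis=0, keepdims=True)
    # M-step: mixing weights plus weighted method-of-moments for (shape, scale).
    nk = resp.sum(axis=1)
    pi = nk / len(x)
    mean_k = (resp * x).sum(axis=1) / nk
    var_k = (resp * (x - mean_k[:, None]) ** 2).sum(axis=1) / nk
    shape = mean_k ** 2 / var_k
    scale = var_k / mean_k

print("weights:", pi.round(3), "shapes:", shape.round(2), "scales:", scale.round(2))
```

The moment-matching update keeps the sketch short; a full MLE M-step would solve for each shape parameter numerically, for example through the digamma equation.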


Journal ArticleDOI
TL;DR: This work proposes a new class of max-linear regression models to take advantage of the easily interpretable features embedded in linear regression models, and develops an EM-algorithm-based maximum likelihood estimation procedure.

Journal ArticleDOI
TL;DR: This work presents a new Gaussian prior model, inspired by sparse Bayesian learning (SBL), which incorporates parameters to capture the channel correlation in addition to sparsity, and develops the Corr-SBL algorithm, which uses an expectation maximization procedure to learn the parameters of the prior and update the posterior channel estimates.
Abstract: In this work, we address the problem of multiple-input multiple-output mmWave channel estimation in a hybrid analog-digital architecture, by exploiting both the underlying spatial sparsity as well as the spatial correlation in the channel. We accomplish this via compressive covariance estimation, where we estimate the channel covariance matrix from noisy low dimensional projections of the channel obtained in the pilot transmission phase. We use the estimated covariance matrix as a plug-in to the linear minimum mean square estimator to obtain the channel estimate. We present a new Gaussian prior model, inspired by sparse Bayesian learning (SBL), which incorporates parameters to capture the channel correlation in addition to sparsity. Based on this prior, we develop the Corr-SBL algorithm, which uses an expectation maximization procedure to learn the parameters of the prior and update the posterior channel estimates. A closed form solution is obtained for the maximization step based on fixed-point iterations. To facilitate practical implementation, an online version of the algorithm is developed which significantly reduces the latency at a marginal loss in performance. The efficacy of the prior model is studied by analyzing the normalized mean squared error in the channel estimate. Our results show that, when compared to a genie-aided estimator and other existing sparse recovery algorithms, exploiting both sparsity and correlation results in significant performance gains, even under imperfect covariance estimates obtained using a limited number of samples.
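As background, the core sparse Bayesian learning ingredients referenced here are the zero-mean Gaussian prior with per-entry hyperparameters and the EM hyperparameter update (standard SBL expressions; the correlation-aware Corr-SBL prior extends this basic form):

```latex
p(h \mid \gamma) = \mathcal{CN}\!\left(h \,\middle|\, 0, \operatorname{diag}(\gamma)\right),
\qquad
\gamma_i \leftarrow |\mu_i|^2 + \Sigma_{ii},
```

where μ and Σ are the posterior mean and covariance of the channel given the current hyperparameters and the noisy pilot projections; entries whose γ_i shrink toward zero are pruned, which is how sparsity emerges.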

Journal ArticleDOI
TL;DR: This paper compares random starts and partitional and model-based strategies for choosing the initial values for the EM algorithm in the case of multivariate Gaussian emission distributions (EDs) and assesses the performance of each strategy with different assessment criteria.
Abstract: The expectation–maximization (EM) algorithm is a familiar tool for computing the maximum likelihood estimate of the parameters in hidden Markov and semi-Markov models. This paper carries out a detailed study of the influence that the initial values of the parameters have on the results produced by the algorithm. We compare random starts and partitional and model-based strategies for choosing the initial values for the EM algorithm in the case of multivariate Gaussian emission distributions (EDs) and assess the performance of each strategy with different assessment criteria. Several data generation settings are considered with varying numbers of latent states and variables, as well as varying levels of fuzziness in the data, and a discussion of how each factor influences the obtained results is provided. Simulation results show that different initialization strategies may lead to different log-likelihood values and, accordingly, to different estimated partitions. A clear indication of which strategies should be preferred is given. We further include two real-data examples, widely analysed in the hidden semi-Markov model literature.
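To illustrate the general phenomenon (using a plain Gaussian mixture fitted with scikit-learn as a stand-in for the hidden semi-Markov models studied in the paper), different EM initializations can converge to different log-likelihood values and hence different partitions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Three overlapping 2-D clusters (made-up data).
X = np.vstack([
    rng.normal([0.0, 0.0], 1.0, size=(200, 2)),
    rng.normal([2.5, 2.5], 1.0, size=(200, 2)),
    rng.normal([0.0, 4.0], 1.0, size=(200, 2)),
])

for init in ["random", "kmeans"]:
    bounds = []
    for seed in range(5):                       # several restarts per strategy
        gm = GaussianMixture(n_components=3, init_params=init,
                             n_init=1, max_iter=200, random_state=seed).fit(X)
        bounds.append(gm.lower_bound_)          # converged log-likelihood bound per sample
    print(f"{init:>7}: min={min(bounds):.3f}  max={max(bounds):.3f}")
```

The spread between the minimum and maximum converged bound across restarts is the kind of initialization sensitivity that the paper quantifies in the hidden semi-Markov setting.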

Journal ArticleDOI
TL;DR: A novel image deblurring method that does not need to estimate blur kernels and outperforms state-of-the-art techniques in terms of robustness, visual quality, and quantitative metrics.
Abstract: Complex blur such as the mixup of space-variant and space-invariant blur, which is hard to model mathematically, widely exists in real images. In this article, we propose a novel image deblurring method that does not need to estimate blur kernels. We utilize a pair of images that can be easily acquired in low-light situations: (1) a blurred image taken with low shutter speed and low ISO noise; and (2) a noisy image captured with high shutter speed and high ISO noise. Slicing the blurred image into patches, we extend the Gaussian mixture model (GMM) to model the underlying intensity distribution of each patch using the corresponding patches in the noisy image. We compute patch correspondences by analyzing the optical flow between the two images. The Expectation Maximization (EM) algorithm is utilized to estimate the parameters of GMM. To preserve sharp features, we add an additional bilateral term to the objective function in the M-step. We eventually add a detail layer to the deblurred image for refinement. Extensive experiments on both synthetic and real-world data demonstrate that our method outperforms state-of-the-art techniques, in terms of robustness, visual quality, and quantitative metrics.

Journal ArticleDOI
Tao Fang, Songzuo Liu, Lu Ma, Lanyue Zhang, Imran Ullah Khan
TL;DR: A novel Expectation Maximization-Block-Quasi Hybrid Likelihood Ratio Test (EM-Block-QHLRT) method is proposed, which effectively improves the identification rate of orthogonal frequency division multiplexing based subcarrier modulation in underwater acoustic multipath channels.

Journal ArticleDOI
TL;DR: A two-dimensional warranty analysis on data with heterogeneity in terms of both age and usage finds that more than 85% of the claims are classified as normal failures, and that local dependence structures might vary from symmetric to asymmetric.

Journal ArticleDOI
TL;DR: A method is presented to adjust for mismatches between predictors and responses under “partial shuffling,” in which a sufficiently large fraction of (predictor, response) pairs are observed in their correct correspondence; it is based on a pseudo-likelihood in which each term takes the form of a two-component mixture density.
Abstract: Recently, there has been significant interest in linear regression in the situation where predictors and responses are not observed in matching pairs corresponding to the same statistical unit as a...