Showing papers on "Mixture model published in 2022"


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed to use unsupervised ensemble autoencoders connected to the Gaussian mixture model (GMM) to adapt to multiple domains regardless of the skewness of each domain.
Abstract: Previous studies have adopted unsupervised machine learning with dimension-reduction functions for cyberattack detection, but these are limited in performing robust anomaly detection on high-dimensional and sparse data. Most of them assume homogeneous parameters with a specific Gaussian distribution for each domain, ignoring robust testing of data skewness. This paper proposes to use unsupervised ensemble autoencoders connected to the Gaussian mixture model (GMM) to adapt to multiple domains regardless of the skewness of each domain. In the hidden space of the ensemble autoencoder, the attention-based latent representation and the reconstructed features with minimum error are utilized. The expectation-maximization (EM) algorithm is used to estimate the sample density in the GMM. When the estimated sample density exceeds the learning threshold obtained in the training phase, the sample is identified as an outlier related to an attack anomaly. Finally, the ensemble autoencoder and the GMM are jointly optimized, which transforms the optimization of the objective function into a Lagrangian dual problem. Experiments conducted on three public data sets validate that the performance of the proposed model is significantly competitive with the selected anomaly detection baselines.
• An ensemble framework for multichannel network anomaly detection that combines deep autoencoders and the GMM.
• A robust optimization version of EM for multiple domains, which transforms the optimization problem of the objective function into a Lagrangian dual.
• We derive the formulas, analyze the convergence of the proposed method, and prove that our model is stable and robust.
• To the best of our knowledge, this is the first work that applies these algorithms to both differentiated data domains and data distributions.
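
A minimal sketch of the density-thresholding step described above, using scikit-learn's GaussianMixture in place of the paper's jointly optimized ensemble autoencoder and GMM; the latent dimensions, component count, and 1% threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_latent = rng.normal(size=(5000, 8))   # stand-in for autoencoder latents

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(train_latent)

# Learning threshold from the training phase: flag the lowest-density tail.
log_density = gmm.score_samples(train_latent)
threshold = np.percentile(log_density, 1.0)  # assumed 1% contamination

test_latent = rng.normal(size=(100, 8))
is_anomaly = gmm.score_samples(test_latent) < threshold
```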

158 citations


Journal ArticleDOI
TL;DR: In this paper, an action-independent Gaussian mixture model (AIGMM) is trained on the extracted features of all fine-grained actions to analyze spatio-temporal information and preserve the local similarities among fine-grained actions.

29 citations


Journal ArticleDOI
TL;DR: In this paper, a soft clustering method based on the Gaussian mixture model (GMM) using electrochemical impedance spectroscopy (EIS) is proposed to address these issues.

25 citations


Journal ArticleDOI
01 Jan 2022
TL;DR: In this paper, the Gaussian Mixture Model-based Second-Order Mean-Value Saddlepoint Approximation (GMM-SOMVSA) is introduced to tackle the problem of uncertain factors.
Abstract: Actual engineering systems are inevitably affected by uncertain factors. Thus, Reliability-Based Multidisciplinary Design Optimization (RBMDO) has become a hotspot of recent research and application in complex engineering system design. The Second-Order/First-Order Mean-Value Saddlepoint Approximation (SOMVSA/FOMVSA) methods are two popular reliability analysis strategies widely used in RBMDO. However, the SOMVSA method can only be used efficiently when the input variables follow Gaussian distributions, which significantly limits its application. In this study, the Gaussian Mixture Model-based Second-Order Mean-Value Saddlepoint Approximation (GMM-SOMVSA) is introduced to tackle the above problem. It is integrated with the Collaborative Optimization (CO) method to solve RBMDO problems. Furthermore, the formula and procedure of RBMDO using GMM-SOMVSA-based CO (GMM-SOMVSA-CO) are proposed. Finally, an engineering example is given to show the application of the GMM-SOMVSA-CO method.
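
As a hedged illustration of the enabling idea (approximating a non-Gaussian input distribution with a Gaussian mixture so that Gaussian-only reliability machinery such as SOMVSA can be applied per component), here is a small sketch; the lognormal input and the three-component fit are assumptions for illustration, not the paper's example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.5, size=(10000, 1))  # non-Gaussian input

# Fit a GMM so each component can be treated as a Gaussian sub-problem.
gmm = GaussianMixture(n_components=3, random_state=0).fit(x)
for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(),
                      gmm.covariances_.ravel()):
    print(f"weight={w:.3f}  mean={mu:.3f}  std={np.sqrt(var):.3f}")
```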

22 citations


Journal ArticleDOI
01 Jan 2022
TL;DR: GMM-Det as discussed by the authors is a real-time method for extracting epistemic uncertainty from object detectors to identify and reject open-set errors, where the detector produces a structured logit space that is modelled with class-specific Gaussian Mixture Models.
Abstract: Deployed into an open world, object detectors are prone to open-set errors: false positive detections of object classes not present in the training dataset. We propose GMM-Det, a real-time method for extracting epistemic uncertainty from object detectors to identify and reject open-set errors. GMM-Det trains the detector to produce a structured logit space that is modelled with class-specific Gaussian Mixture Models. At test time, open-set errors are identified by their low log-probability under all Gaussian Mixture Models. We test two common detector architectures, Faster R-CNN and RetinaNet, across three varied datasets spanning robotics and computer vision. Our results show that GMM-Det consistently outperforms existing uncertainty techniques for identifying and rejecting open-set detections, especially at the low-error-rate operating point required for safety-critical applications. GMM-Det maintains object detection performance, and introduces only minimal computational overhead. We also introduce a methodology for converting existing object detection datasets into specific open-set datasets to evaluate open-set performance in object detection.
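
A minimal sketch of the test-time rule described above, assuming per-class GMMs fit on training logits and a low-percentile rejection threshold per class; the logit dimension, component count, diagonal covariances, and threshold choice are illustrative, not GMM-Det's exact settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(logits, labels, n_classes, n_components=2):
    # One GMM per known class, fit on that class's training logits.
    return [GaussianMixture(n_components=n_components,
                            covariance_type="diag", random_state=0)
            .fit(logits[labels == c]) for c in range(n_classes)]

def is_open_set(x, gmms, thresholds):
    # Reject only if x is unlikely under *all* class-specific GMMs.
    scores = np.array([g.score_samples(x[None, :])[0] for g in gmms])
    return bool(np.all(scores < thresholds))

rng = np.random.default_rng(0)
logits = rng.normal(size=(600, 16))
labels = rng.integers(0, 3, size=600)
gmms = fit_class_gmms(logits, labels, n_classes=3)
# Per-class thresholds from a low percentile of training log-probabilities.
thresholds = np.array([np.percentile(g.score_samples(logits[labels == c]), 5)
                       for c, g in enumerate(gmms)])
print(is_open_set(rng.normal(size=16) + 8.0, gmms, thresholds))  # likely True
```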

21 citations


Journal ArticleDOI
TL;DR: In this paper, two harmonization methods are proposed: a sequential method that harmonizes radiomic features by multiple imaging parameters (Nested ComBat), and a Gaussian Mixture Model (GMM)-based method in which scans are split into groupings based on distribution shape, harmonized using the grouping as a batch effect, and subsequently harmonized by a known imaging parameter.
Abstract: Radiomic features have a wide range of clinical applications, but variability due to image acquisition factors can affect their performance. The harmonization tool ComBat is a promising solution but is limited by its inability to harmonize multimodal distributions, unknown imaging parameters, and multiple imaging parameters. In this study, we propose two methods for addressing these limitations. We propose a sequential method that allows for harmonization of radiomic features by multiple imaging parameters (Nested ComBat). We also employ a Gaussian Mixture Model (GMM)-based method (GMM ComBat) where scans are split into groupings based on the shape of the distribution, which are used as a batch effect for harmonization, followed by harmonization by a known imaging parameter. These two methods were evaluated on features extracted with CapTK and PyRadiomics from two public lung computed tomography datasets. We found that Nested ComBat exhibited similar performance to standard ComBat in reducing the percentage of features with statistically significant differences in distribution attributable to imaging parameters. GMM ComBat improved harmonization performance over standard ComBat (-11%, -10% for Lung3/CapTK and Lung3/PyRadiomics when harmonizing by kernel resolution). Features harmonized with a variant of the Nested method and the GMM split method demonstrated similar c-statistics and Kaplan-Meier curves when used in survival analyses.
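
A small sketch of the "GMM split" step, assuming a bimodal radiomic feature and using scikit-learn; the component assignments stand in as the pseudo-batch label, and the subsequent ComBat harmonization call is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Illustrative bimodal feature distribution across scans.
feature = np.concatenate([rng.normal(-2, 1, 300),
                          rng.normal(3, 1, 300)])[:, None]

gmm = GaussianMixture(n_components=2, random_state=0).fit(feature)
pseudo_batch = gmm.predict(feature)   # grouping used as the batch effect
```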

19 citations


Posted ContentDOI
TL;DR: A novel sequence-to-sequence predictive model based on a variational autoencoder (VAE) trained with generative adversarial networks (GANs) is proposed; the results demonstrate that significant performance improvements can be achieved in long-term degradation progress and RUL prediction tasks.
Abstract: Prognostics predicts the future performance progression and remaining useful life (RUL) of in-service systems based on historical and contemporary data. One of the challenges in prognostics is the development of methods that are capable of handling real-world uncertainties that typically lead to inaccurate predictions. To alleviate the impacts of uncertainties and to achieve accurate degradation trajectory and RUL predictions, a novel sequence-to-sequence predictive model is proposed based on a variational autoencoder that is trained with generative adversarial networks. A long short-term memory network and a Gaussian mixture model are utilized as building blocks so that the model is capable of providing probabilistic predictions. Correlative and monotonic metrics are applied to identify sensitive features in the degradation progress, in order to reduce the uncertainty induced from raw data. Then, the selected features are concatenated with one-hot health state indicators as training data for the model to learn end of life without the need for prior knowledge of failure thresholds. Performance of the proposed model is validated by health monitoring data collected from real-world aeroengines, wind turbines, and lithium-ion batteries. The results demonstrate that significant performance improvement can be achieved in long-term degradation progress and RUL prediction tasks.

16 citations


Proceedings ArticleDOI
14 Feb 2022
TL;DR: Experimental results show that the proposed approach outperforms the conventional approach in terms of diarization error rate (DER), especially by substantially reducing speaker confusion errors, which indeed reflects the effectiveness of the proposed iGMM integration.
Abstract: Speaker diarization has been investigated extensively as an important central task for meeting analysis. A recent trend shows that the integration of end-to-end neural (EEND)- and clustering-based diarization is a promising approach to handle realistic conversational data containing overlapped speech with an arbitrarily large number of speakers, and has achieved state-of-the-art results on various tasks. However, the approaches proposed so far have not realized tight integration, because the clustering employed therein was not optimal in any sense for clustering the speaker embeddings estimated by the EEND module. To address this problem, this paper introduces a trainable clustering algorithm into the integration framework by deep-unfolding a non-parametric Bayesian model called the infinite Gaussian mixture model (iGMM). Specifically, the speaker embeddings are optimized during training such that they better fit iGMM clustering, using a novel clustering loss based on the Adjusted Rand Index (ARI). Experimental results on CALLHOME data show that the proposed approach outperforms the conventional approach in terms of diarization error rate (DER), especially by substantially reducing speaker confusion errors, which indeed reflects the effectiveness of the proposed iGMM integration.
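
Since the clustering loss above builds on the Adjusted Rand Index, here is a minimal check of the metric itself (not the deep-unfolded iGMM) using scikit-learn; the toy labels are illustrative.

```python
from sklearn.metrics import adjusted_rand_score

ref = [0, 0, 1, 1, 2, 2]               # oracle speaker labels
hyp = [1, 1, 0, 0, 2, 2]               # cluster IDs from diarization
print(adjusted_rand_score(ref, hyp))   # 1.0: ARI is permutation-invariant
```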

14 citations


Journal ArticleDOI
TL;DR: The Gaussian Mixture Model Association (GaMMA) as discussed by the authors combines the Gaussian mixture model for phase measurements (both time and amplitude) with earthquake location, origin time, and magnitude estimation.
Abstract: Earthquake phase association algorithms aggregate picked seismic phases from a network of seismometers into individual earthquakes and play an important role in earthquake monitoring. Dense seismic networks and improved phase picking methods produce massive earthquake phase data sets, particularly for earthquake swarms and aftershocks occurring closely in time and space, making phase association a challenging problem. We present a new association method, the Gaussian Mixture Model Association (GaMMA), that combines the Gaussian mixture model for phase measurements (both time and amplitude) with earthquake location, origin time, and magnitude estimation. We treat earthquake phase association as an unsupervised clustering problem in a probabilistic framework, where each earthquake corresponds to a cluster of P and S phases with hyperbolic moveout of arrival times and a decay of amplitude with distance. We use a multivariate Gaussian distribution to model the collection of phase picks for an event, the mean of which is given by the predicted arrival time and amplitude from the causative event. We carry out the pick assignment for each earthquake and determine earthquake parameters (i.e., earthquake location, origin time, and magnitude) under the maximum likelihood criterion using the Expectation-Maximization (EM) algorithm. The GaMMA method does not require the typical association steps of other algorithms, such as grid search or supervised training. The results on both a synthetic test and the 2019 Ridgecrest earthquake sequence show that GaMMA effectively associates phases from a temporally and spatially dense earthquake sequence while producing useful estimates of earthquake location and magnitude.
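
A heavily simplified sketch of the association idea: treat phase picks as points in (arrival time, amplitude) space and let EM assign them to event clusters. Real GaMMA constrains each cluster mean with a travel-time and amplitude-decay model; the plain GaussianMixture and synthetic picks below are stand-in assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic events, each producing 40 picks: (arrival time, log-amplitude).
event1 = np.column_stack([rng.normal(10.0, 0.5, 40), rng.normal(2.0, 0.2, 40)])
event2 = np.column_stack([rng.normal(25.0, 0.5, 40), rng.normal(1.2, 0.2, 40)])
picks = np.vstack([event1, event2])

assoc = GaussianMixture(n_components=2, random_state=0).fit(picks)
labels = assoc.predict(picks)          # pick-to-event assignment via EM
```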

14 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a novel multi-module integrated intrusion detection system, namely GMM-WGAN-IDS, which consists of three parts: feature extraction, imbalance processing, and classification.
Abstract: The high dimensionality, complexity, and imbalance of network data are hot issues in the field of intrusion detection. Nowadays, intrusion detection systems face challenges in improving the detection accuracy for minority classes, detecting unknown attacks, and reducing false alarm rates. To address the above problems, we propose a novel multi-module integrated intrusion detection system, namely GMM-WGAN-IDS. The system consists of three parts: feature extraction, imbalance processing, and classification. Firstly, the stacked autoencoder-based feature extraction module (SAE module) is proposed to obtain a deeper representation of the data. Secondly, the imbalance processing module (GMM-WGAN) is proposed, combining a clustering algorithm based on the Gaussian mixture model with a Wasserstein generative adversarial network based on the Gaussian mixture model. Thirdly, the classification module (CNN-LSTM) is designed based on a convolutional neural network (CNN) and long short-term memory (LSTM). We evaluate the performance of GMM-WGAN-IDS on the NSL-KDD and UNSW-NB15 datasets, comparing it with other intrusion detection methods. Finally, the experimental results show that our proposed GMM-WGAN-IDS outperforms the state-of-the-art methods and achieves better performance.

13 citations


Journal ArticleDOI
TL;DR: A new unsupervised learning-based probabilistic registration algorithm to reconstruct the unified GMM and solve the registration problem simultaneously is proposed, which achieves better registration accuracy and efficiency than the state-of-the-art supervised and semi-supervised methods in handling noisy and density variant point clouds.
Abstract: Sampling noise and density variation widely exist in the point cloud acquisition process, leading to few accurate point-to-point correspondences. Since they rely on point-to-point correspondence search, existing state-of-the-art point cloud registration methods have difficulty overcoming sampling noise and density variation accurately or efficiently. Moreover, recent state-of-the-art learning-based methods require ground-truth transformations as supervision, which leads to large labeling costs in real scenes. In this paper, our motivation is that two point clouds can be considered as two samples from a unified Gaussian Mixture Model (UGMM). We leverage this statistical model to overcome noise and density variation, and use the alignment score under the UGMM to supervise the network training. To realize this motivation, we propose a new unsupervised learning-based probabilistic registration algorithm that reconstructs the unified GMM and solves the registration problem simultaneously. The proposed method formulates the registration problem as a clustering problem, which estimates the posterior probability that assigns the points of the two input point clouds to components of the unified GMM. A new feature interaction module is designed to learn the posterior probability using both self and cross point cloud information. Then, two differentiable modules are proposed to calculate the GMM parameters and transformation matrices. Experimental results on synthetic and real-world point cloud datasets demonstrate that our unsupervised method achieves better registration accuracy and efficiency than state-of-the-art supervised and semi-supervised methods in handling noisy and density-variant point clouds.
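
A sketch of the core E-step in the unified-GMM view: the posterior probability that each point, from either cloud, belongs to each shared component. The parameters below are illustrative stand-ins; in the paper they are produced by learned modules.

```python
import numpy as np
from scipy.stats import multivariate_normal

def responsibilities(points, weights, means, covs):
    # points: (N, 3); weights: (K,); means: (K, 3); covs: (K, 3, 3)
    dens = np.stack([w * multivariate_normal.pdf(points, m, c)
                     for w, m, c in zip(weights, means, covs)], axis=1)
    return dens / dens.sum(axis=1, keepdims=True)   # (N, K) posteriors

pts = np.random.default_rng(0).normal(size=(100, 3))
w = np.array([0.5, 0.5])
mu = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
cov = np.stack([np.eye(3)] * 2)
gamma = responsibilities(pts, w, mu, cov)   # soft point-to-component labels
```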

Journal ArticleDOI
TL;DR: Comparative experiments with non-transfer methods indicate that the proposed framework obtains a higher accuracy in recognizing BIL in the car following scenario, especially when sufficient data are not available.
Abstract: Accurately recognizing braking intensity levels (BIL) of drivers is important for guaranteeing safety and avoiding traffic accidents in intelligent transportation systems. In this paper, an instance-level transfer learning (TL) framework is proposed to recognize BIL for a new driver with insufficient driving data by combining a Gaussian Mixture Model (GMM) and an importance-weighted least squares probabilistic classifier (IWLSPC). Considering the statistical distribution, the GMM is applied to cluster the braking behavior data into three levels of different intensities. With the density ratio calculated by unconstrained least-squares importance fitting (ULSIF), LSPC is modified into IWLSPC to transfer knowledge from one driver to another and recognize BIL for a new driver with insufficient driving data. Comparative experiments with non-transfer methods indicate that the proposed framework obtains a higher accuracy in recognizing BIL in the car-following scenario, especially when sufficient data are not available.
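
A minimal sketch of the first stage only: clustering braking samples into three intensity levels with a GMM. The deceleration feature and synthetic data are assumptions, and the ULSIF/IWLSPC transfer stage is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Illustrative decelerations (m/s^2) drawn around three braking intensities.
decel = np.abs(rng.normal(loc=[1.0, 3.0, 6.0], scale=0.5,
                          size=(200, 3))).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(decel)
order = np.argsort(gmm.means_.ravel())        # sort components by mean decel
level = order.argsort()[gmm.predict(decel)]   # 0=light, 1=moderate, 2=hard
```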

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a supervised multiclass deep autoencoding Gaussian mixture model (S-DAGMM) algorithm, which is an ensemble model of individual unsupervised DAGMMs.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a Gaussian mixture model (GMM) to preprocess historical data to obtain steady-state measurements under various operating conditions, and a variational Bayes expectation maximization (VBEM) algorithm was applied to the GMM to increase its clustering accuracy and remove human intervention.

Journal ArticleDOI
TL;DR: In this article, a target search problem in a curve-shape area using multiple UAVs was studied, with the aim of maximizing the cumulative detection reward under maneuverability and obstacle-avoidance constraints.
Abstract: This article focuses on the target search problem in a curve-shape area using multiple unmanned aerial vehicles (UAVs), with the aim of maximizing the cumulative detection reward under maneuverability and obstacle-avoidance constraints. First, the prior target probability map of the curve-shape area, generated by Parzen windows with Gaussian kernels, is approximated by a 1-D Gaussian mixture model (GMM) in order to extract high-value curve segments corresponding to Gaussian components. Based on the parameterized curve segments from the GMM, a self-organizing map (SOM) neural network is then established to achieve the coverage search. The winner-neuron selection step of the SOM prioritizes and allocates the curve segments to UAVs, with comprehensive consideration of multiple evaluation factors and allocation balance. The subsequent neuron weight update step plans the UAV paths under the maneuverability and obstacle-avoidance constraints, using the modified Dubins guidance vector field. Finally, the performance of GMM-SOM is evaluated on a coastline map.
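
A small sketch of the map-approximation step: fit a 1-D GMM to samples of the prior target density along the curve and read each component as a high-value segment. The density samples and the mean ± 2-std segment rule are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Arc-length positions in [0, 1] sampled from an assumed bimodal target prior.
s = np.concatenate([rng.normal(0.2, 0.03, 500),
                    rng.normal(0.7, 0.05, 500)])[:, None]

gmm = GaussianMixture(n_components=2, random_state=0).fit(s)
for mu, var in zip(gmm.means_.ravel(), gmm.covariances_.ravel()):
    sd = np.sqrt(var)
    print(f"high-value segment: [{mu - 2*sd:.2f}, {mu + 2*sd:.2f}]")
```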

Journal ArticleDOI
TL;DR: In this article, the accelerating generalized autoregressive score (aGAS) technique was introduced into the Gaussian-Cauchy mixture model, and a novel time-varying mixture aGAS (TVM-aGAS) model was proposed, which is suitable for capturing the fat-tailed and extreme-volatility characteristics of cryptocurrency returns.

Journal ArticleDOI
TL;DR: An Autoencoder (AE)-based feature construction approach that removes the dependency on manually correlating commands and generates an efficient representation by automatically learning the semantic similarity between input features extracted from command data, resulting in meaningful clustering interpretations.

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed a center-oriented margin-free triplet loss (COM-Triplet) enforced on image embeddings from a convolutional neural network backbone.
Abstract: Melanoma is a fatal skin cancer, but it is curable, and the survival rate increases dramatically when it is diagnosed at an early stage. Learning-based methods hold significant promise for the detection of melanoma from dermoscopic images. However, since melanoma is a rare disease, existing databases of skin lesions predominantly contain highly imbalanced numbers of benign versus malignant samples. In turn, this imbalance introduces substantial bias in classification models due to the statistical dominance of the majority class. To address this issue, we introduce a deep clustering approach based on the latent-space embedding of dermoscopic images. Clustering is achieved using a novel center-oriented margin-free triplet loss (COM-Triplet) enforced on image embeddings from a convolutional neural network backbone. The proposed method aims to form maximally separated cluster centers, as opposed to minimizing classification error, so it is less sensitive to class imbalance. To avoid the need for labeled data, we further propose to implement COM-Triplet based on pseudo-labels generated by a Gaussian mixture model (GMM). Comprehensive experiments show that deep clustering with the COM-Triplet loss outperforms clustering with triplet loss, as well as competing classifiers, in both supervised and unsupervised settings.
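
A minimal sketch of the pseudo-labeling step, assuming a two-component GMM fit to CNN embeddings (random stand-ins here) whose hard assignments serve as pseudo-labels for the COM-Triplet loss; the embedding dimension and component count are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))    # stand-in for CNN latents

pseudo_labels = GaussianMixture(n_components=2, random_state=0) \
    .fit_predict(embeddings)                # 0/1 cluster IDs used as labels
```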

Journal ArticleDOI
TL;DR: This work focuses on consistency for the unknown number of clusters when the observed data are generated from a finite mixture, and considers the situation where a prior is placed on the concentration parameter of the underlying Dirichlet process.
Abstract: Dirichlet process mixtures are flexible nonparametric models, particularly suited to density estimation and probabilistic clustering. In this work we study the posterior distribution induced by Dirichlet process mixtures as the sample size increases, and more specifically focus on consistency for the unknown number of clusters when the observed data are generated from a finite mixture. Crucially, we consider the situation where a prior is placed on the concentration parameter of the underlying Dirichlet process. Previous findings in the literature suggest that Dirichlet process mixtures are typically not consistent for the number of clusters if the concentration parameter is held fixed and data come from a finite mixture. Here we show that consistency for the number of clusters can be achieved if the concentration parameter is adapted in a fully Bayesian way, as commonly done in practice. Our results are derived for data coming from a class of finite mixtures, with mild assumptions on the prior for the concentration parameter and for a variety of choices of likelihood kernels for the mixture.
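
For orientation, the model class under study can be written in standard notation as follows; the Gamma hyperprior shown is a common choice for the concentration parameter, not necessarily the paper's exact specification.

```latex
% DP mixture with a prior on the concentration parameter.
\begin{align*}
\alpha &\sim \pi(\alpha), \quad \text{e.g. } \alpha \sim \mathrm{Gamma}(a, b),\\
G \mid \alpha &\sim \mathrm{DP}(\alpha, G_0),\\
\theta_i \mid G &\overset{\text{iid}}{\sim} G, \qquad
x_i \mid \theta_i \sim f(\cdot \mid \theta_i), \quad i = 1, \dots, n.
\end{align*}
```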

Journal ArticleDOI
TL;DR: In this paper, a Gaussian Mixture Model (GMM) and a Deep Belief Network (DBN) are used to extract feature vectors from identified voice data, and a SoftMax (SM) classifier is used to detect the presence of MCI and AD disorders in the speech signals.
Abstract: Early detection of Moderate Cognitive Impairment (MCI) and Alzheimer's disease (AD) is critical for increasing survival rates. Speech feature extraction is used in prior MCI and AD detection algorithms during neuropsychological assessments by medical specialists. The study's goal is to create an MCI and AD detection model using Automatic Speech Recognition (ASR) and a deep learning model. The suggested approach employs the Gaussian Mixture Model (GMM) for ASR on the patient's spontaneous speech; the GMM describes the distribution of data observations as a weighted average of parameterized Gaussian distributions. Furthermore, the Deep Belief Network (DBN) model is used to extract feature vectors from the recognized voice data. Finally, the SoftMax (SM) classifier is used to detect the presence of MCI and AD disorders in the speech signals. A series of simulations were run to assess the performance of the GMM-DBN (Gaussian Mixture Model-Deep Belief Network) model. The testing results indicated the GMM-DBN model's superior performance, with maximum accuracies of 90.28% and 86.76% on the binary and multiclass classifications, respectively. The GMM-DBN methodology was successful in multiclass classification, as evidenced by its F1-score reaching a maximum of 90.19% and its accuracy reaching 90.28%.

Journal ArticleDOI
TL;DR: A novel statistics pooling method that can produce more descriptive statistics through a mixture representation and is inspired by the expectation–maximization algorithm in Gaussian mixture models (GMMs).
Abstract: How to effectively convert a sequence of variable-length acoustic features into a fixed-dimension representation has always been a research focus in speaker recognition. In state-of-the-art speaker recognition systems, the conversion is implemented by concatenating the mean and the standard deviation of a sequence of frame-level features. However, a single mean and a single standard deviation are limited descriptive statistics for an acoustic sequence, even with powerful feature extractors such as convolutional neural networks. In this paper, we propose a novel statistics pooling method that can produce more descriptive statistics through a mixture representation. Our approach is inspired by the expectation–maximization (EM) algorithm in Gaussian mixture models (GMMs). Instead of using traditional GMM-style alignment, we leverage modern deep learning tools to produce a more powerful mixture representation. The novelty includes: (1) unlike GMMs, the mixture assignments are determined by an attention network instead of the Euclidean distances between the frame-level features and explicit centers; (2) instead of using a single frame as input to the attention network, contextual frames are included to smooth out attention transitions; and (3) soft-attention assignments are replaced by hard-attention assignments via the Gumbel-Softmax with straight-through estimators. With the proposed attention mechanism, we obtained a 13.7% relative improvement over vanilla mean and standard deviation pooling on the VOiCES19-eval set.
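
A numpy sketch of the pooling idea: soft per-frame component assignments (random stand-ins below for the attention network's output) weight per-component means and standard deviations, which are concatenated into the utterance embedding. Dimensions and the soft (rather than Gumbel hard) assignments are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, C = 200, 32, 4                       # frames, feature dim, components
frames = rng.normal(size=(T, D))
attn = rng.normal(size=(T, C))             # stand-in attention scores
assign = np.exp(attn) / np.exp(attn).sum(axis=1, keepdims=True)  # (T, C)

stats = []
for c in range(C):
    w = assign[:, c:c + 1] / assign[:, c].sum()    # normalized frame weights
    mean = (w * frames).sum(axis=0)                # component-weighted mean
    std = np.sqrt((w * (frames - mean) ** 2).sum(axis=0) + 1e-8)
    stats.extend([mean, std])
embedding = np.concatenate(stats)                  # shape (2 * C * D,)
```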

Journal ArticleDOI
TL;DR: In this article, the finite mixture model literature is reviewed via bibliometric analysis, focusing on trends in and links between finite mixture model studies; the results show an increasing trend of annual publications on FMM studies.
Abstract: The finite mixture model is well known in statistics due to its versatility and is being actively researched. This paper reviews the finite mixture model literature via bibliometric analysis, focusing on the trends in and links between finite mixture model studies. The bibliometric analysis consists of four main phases: formulating research questions; locating research; selecting and evaluating research; and analyzing and synthesizing the selected papers. A total of 667 journal articles published between 1988 and 2020 were extracted from the Web of Science (WoS) database. Biblioshiny (with R packages) and VOSviewer were used as analytical tools. The findings show an increasing trend of annual publications on the finite mixture model. The results also outline key journals and the most highly cited articles. Network analysis was also conducted to explore scientific cooperation in finite mixture model research. This study proposes a research agenda for the finite mixture model by identifying its current state and publication trends.

Journal ArticleDOI
TL;DR: In this paper, the authors provide an overview of and introduction to the development of non-ergodic ground-motion models (GMMs), with an emphasis on Gaussian process regression.
Abstract: This paper provides an overview of and introduction to the development of non-ergodic ground-motion models (GMMs). It is intended for a reader who is familiar with the standard approach for developing ergodic GMMs. It starts with a brief summary of the development of ergodic GMMs and then describes different methods that are used in the development of non-ergodic GMMs, with an emphasis on Gaussian process (GP) regression, as that is currently the method preferred by most researchers contributing to this special issue. Non-ergodic modeling requires the definition of locations for the source and site, characterizing the systematic source and site effects; the non-ergodic domain is divided into cells for describing the systematic path effects. Modeling the cell-specific anelastic attenuation as a GP and considerations on constraints for extrapolating non-ergodic GMMs are also discussed. An updated unifying notation for non-ergodic GMMs is also presented, which has been adopted by the authors of this issue.

Journal ArticleDOI
TL;DR: MR-GMMapping, a Multi-Robot GMM-based mapping system in which robots transfer only GMM submaps, is presented, together with an adaptive model selection method that can dynamically select the appropriate Gaussian model during exploration.
Abstract: Collaborative perception in unknown environments is a critical task for multi-robot systems. Without external positioning, multi-robot mapping systems have relied on transferring place recognition (PR) descriptors and sensor data for relative pose estimation (RelPose), and on sharing their local maps for collaborative mapping. Thus, in a communication-limited environment, data transmission can become a significant bottleneck in a multi-robot mapping system. Although a Gaussian Mixture Model (GMM) map and a submap-based framework have been proposed to reduce map data transmissions, the PR descriptors and sensor data for RelPose consume much of the communication bandwidth. Furthermore, previous GMM submap construction methods may cause multi-agent RelPose to fail due to inconsistent weights. With a fixed number of Gaussian components, GMM submaps also have a limited ability to adapt to drastic changes in environmental characteristics during exploration. To address these limitations, this paper presents MR-GMMapping, a Multi-Robot GMM-based mapping system in which robots transfer only GMM submaps. We propose a novel GMM submap construction strategy with an adaptive model selection method, which can dynamically select the appropriate Gaussian model during exploration. Experiments on both simulators and real robots show that MR-GMMapping improves the accuracy of RelPose by 11% in average translation error and 30% in average rotation error in comparison with a non-GMM-submap-based method. In addition, data transmissions between robots are reduced by 98% in comparison with point cloud maps. MR-GMMapping is published as an open-source ROS project at https://github.com/efc-robot/gmm_map_python.git.

Journal ArticleDOI
TL;DR: In this article, the authors used the metaheuristic Aquila Optimizer (AO) method to estimate the parameters of the proposed original and mixture PDFs in order to model wind speed characteristics.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated both the practical and theoretical aspects of the two-component mixture of Lindley model (2-CMLM) and showed that it is a good candidate distribution for modelling COVID-19 and other related data sets.
Abstract: The mathematical characteristics of the two-component mixture of Lindley model (2-CMLM) are discussed. In this paper, we investigate both the practical and theoretical aspects of the 2-CMLM. We investigate several statistical features of the mixed model, such as the probability generating function, cumulants, characteristic function, factorial moment generating function, mean time to failure, Mills ratio, and mean residual life. The density and hazard rate functions, mean, coefficient of variation, skewness, and kurtosis are all shown graphically. Furthermore, we use appropriate approaches such as the maximum likelihood, least squares, and weighted least squares methods to estimate the pertinent parameters of the mixture model. We use a simulation study to assess the performance of the suggested methods. Eventually, modelling COVID-19 patient data demonstrates the effectiveness and utility of the 2-CMLM. The proposed model outperformed the two-component exponential mixture model as well as the two-component Weibull mixture model in practical applications, indicating that it is a good candidate distribution for modelling COVID-19 and other related data sets.
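
For reference, a two-component Lindley mixture density in standard notation; the Lindley pdf below is the standard one, and the parameterization is assumed to match the paper's.

```latex
% Two-component mixture of Lindley densities (2-CMLM); p is the mixing weight.
f(x; p, \theta_1, \theta_2)
  = p\,\frac{\theta_1^2}{1+\theta_1}(1+x)e^{-\theta_1 x}
  + (1-p)\,\frac{\theta_2^2}{1+\theta_2}(1+x)e^{-\theta_2 x},
  \qquad x > 0,\; 0 \le p \le 1.
```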

Journal ArticleDOI
TL;DR: Jiang et al. as mentioned in this paper proposed a new denoising method for hyperspectral images (HSIs) corrupted by mixtures of stripe noise, Gaussian noise, and impulsive noise.
Abstract: This article proposes a new denoising method for hyperspectral images (HSIs) corrupted by mixtures (in a statistical sense) of stripe noise, Gaussian noise, and impulsive noise. The proposed method has three distinctive features: 1) it exploits the intrinsic characteristics of HSIs, namely, low-rank and self-similarity; 2) the observation noise is assumed to be additive and modeled by a mixture of Gaussian (MoG) densities; 3) the inference is performed with an expectation maximization (EM) algorithm, which, in addition to the clean HSI, also estimates the mixture parameters (posterior probability of each mode and variances). Comparisons of the proposed method with state-of-the-art algorithms provide experimental evidence of the effectiveness of the proposed denoising algorithm. A MATLAB demo of this work will be available at https://github.com/TaiXiangJiang for the sake of reproducibility.
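
The mixture-of-Gaussians noise assumption in feature 2) can be written as follows, where K, the weights, and the variances are estimated by the EM algorithm; the zero-mean form is the usual convention for additive noise.

```latex
% MoG model for the additive observation noise n.
p(n) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}\!\left(n \mid 0, \sigma_k^2\right),
\qquad \sum_{k=1}^{K} \pi_k = 1 .
```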

Journal ArticleDOI
TL;DR: In this article, a compute-in-memory (CIM)-based ultralow-power framework for probabilistic localization of insect-scale drones is proposed, where the likelihood function useful for drone localization can be efficiently implemented by connecting many multi-input inverters in parallel.
Abstract: We propose a novel compute-in-memory (CIM)-based ultralow-power framework for probabilistic localization of insect-scale drones. Localization is a critical subroutine for path planning and rotor control in drones, where a drone is required to continuously estimate its pose (position and orientation) in flying space. Conventional probabilistic localization approaches rely on a 3-D Gaussian mixture model (GMM)-based representation of a 3-D map. A GMM with hundreds of mixture functions is typically needed to adequately learn and represent the intricacies of the map. Meanwhile, localization using complex GMM map models is computationally intensive. Since insect-scale drones operate under extremely limited area/power budgets, continuous localization using GMM models entails much higher operating energy, thereby limiting flying duration and/or drone size due to the larger battery required. Addressing the computational challenges of localization in an insect-scale drone using a CIM approach, we propose a novel framework of 3-D map representation using a harmonic mean of "Gaussian-like" mixture (HMGM) models. We show that the short-circuit current of a multi-input floating-gate CMOS-based inverter follows the harmonic mean of a Gaussian-like function. Therefore, the likelihood function useful for drone localization can be efficiently implemented by connecting many multi-input inverters in parallel, each programmed with the parameters of the 3-D map model represented as an HMGM. When the depth measurements are projected to the input of the implementation, the summed current of the inverters emulates the likelihood of the measurement. We have characterized our approach on an RGB-D scenes dataset. The proposed localization framework is ~25× more energy-efficient than a traditional 8-bit digital GMM-based processor, paving the way for tiny autonomous drones.
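
A schematic of the HMGM construction, written as a weighted harmonic mean of Gaussian-like components g_k; this is a sketch of the general form implied by the abstract, not the paper's exact circuit-level expression.

```latex
% Harmonic mean of K Gaussian-like components (schematic form).
L(x) = \left( \sum_{k=1}^{K} \frac{w_k}{g_k(x)} \right)^{-1},
\qquad g_k(x) \approx \mathcal{N}\!\left(x \mid \mu_k, \Sigma_k\right),
\qquad \sum_{k=1}^{K} w_k = 1 .
```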