
Showing papers on "Gaussian process published in 2008"


Journal Article
TL;DR: It is proved that the problem of finding the configuration that maximizes mutual information is NP-complete, and a polynomial-time approximation is described that is within (1-1/e) of the optimum by exploiting the submodularity of mutual information.
Abstract: When monitoring spatial phenomena, which can often be modeled as Gaussian processes (GPs), choosing sensor locations is a fundamental task. There are several common strategies to address this task, for example, geometry or disk models, placing sensors at the points of highest entropy (variance) in the GP model, and A-, D-, or E-optimal design. In this paper, we tackle the combinatorial optimization problem of maximizing the mutual information between the chosen locations and the locations which are not selected. We prove that the problem of finding the configuration that maximizes mutual information is NP-complete. To address this issue, we describe a polynomial-time approximation that is within (1-1/e) of the optimum by exploiting the submodularity of mutual information. We also show how submodularity can be used to obtain online bounds, and design branch and bound search procedures. We then extend our algorithm to exploit lazy evaluations and local structure in the GP, yielding significant speedups. We also extend our approach to find placements which are robust against node failures and uncertainties in the model. These extensions are again associated with rigorous theoretical approximation guarantees, exploiting the submodularity of the objective function. We demonstrate the advantages of our approach towards optimizing mutual information in a very extensive empirical study on two real-world data sets.
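
As a rough illustration of the greedy selection behind the (1-1/e)-style guarantee, the sketch below assumes a precomputed GP covariance matrix K over a discretized set of candidate locations; all names and the toy covariance are illustrative, not the paper's code.

```python
import numpy as np

def conditional_variance(K, y, S):
    """GP prior variance at index y conditioned on observations at indices S."""
    if len(S) == 0:
        return K[y, y]
    K_SS = K[np.ix_(S, S)]
    k_yS = K[y, S]
    return K[y, y] - k_yS @ np.linalg.solve(K_SS, k_yS)

def greedy_mi_placement(K, k):
    """Greedily add the location with the largest mutual-information gain,
    i.e. the largest ratio var(y | selected) / var(y | all unselected others)."""
    n = K.shape[0]
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for y in range(n):
            if y in selected:
                continue
            rest = [v for v in range(n) if v != y and v not in selected]
            gain = conditional_variance(K, y, selected) / conditional_variance(K, y, rest)
            if gain > best_gain:
                best, best_gain = y, gain
        selected.append(best)
    return selected

# toy example: squared-exponential covariance over 30 points on a line,
# with a small nugget for numerical stability
x = np.linspace(0.0, 1.0, 30)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2) + 1e-4 * np.eye(30)
print(greedy_mi_placement(K, 5))
```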

1,593 citations


Book
25 Aug 2008
TL;DR: General methods and algorithms for stochastic simulation, including generating random objects, output analysis, variance-reduction methods, and stochastic optimization.
Abstract: General Methods and Algorithms.- Generating Random Objects.- Output Analysis.- Steady-State Simulation.- Variance-Reduction Methods.- Rare-Event Simulation.- Derivative Estimation.- Stochastic Optimization.- Algorithms for Special Models.- Numerical Integration.- Stochastic Differential Equations.- Gaussian Processes.- Levy Processes.- Markov Chain Monte Carlo Methods.- Selected Topics and Extended Examples.- What This Book Is About.

1,265 citations


Book
25 Aug 2008
TL;DR: This book develops exponential, information, concentration, and maximal inequalities for Gaussian processes and applies them to Gaussian model selection, density estimation via model selection, and statistical learning.
Abstract: Exponential and Information Inequalities- Gaussian Processes- Gaussian Model Selection- Concentration Inequalities- Maximal Inequalities- Density Estimation via Model Selection- Statistical Learning

1,115 citations


Journal ArticleDOI
TL;DR: This work achieves the flexibility to accommodate non-stationary, non-Gaussian, possibly multivariate, possibly spatiotemporal processes in the context of large data sets, and provides a computational template encompassing these diverse settings.
Abstract: With scientific data available at geocoded locations, investigators are increasingly turning to spatial process models for carrying out statistical inference. Over the last decade, hierarchical models implemented through Markov chain Monte Carlo methods have become especially popular for spatial modelling, given their flexibility and power to fit models that would be infeasible with classical methods as well as their avoidance of possibly inappropriate asymptotics. However, fitting hierarchical spatial models often involves expensive matrix decompositions whose computational complexity increases in cubic order with the number of spatial locations, rendering such models infeasible for large spatial data sets. This computational burden is exacerbated in multivariate settings with several spatially dependent response variables. It is also aggravated when data are collected at frequent time points and spatiotemporal process models are used. With regard to this challenge, our contribution is to work with what we call predictive process models for spatial and spatiotemporal data. Every spatial (or spatiotemporal) process induces a predictive process model (in fact, arbitrarily many of them). The latter models project process realizations of the former to a lower dimensional subspace, thereby reducing the computational burden. Hence, we achieve the flexibility to accommodate non-stationary, non-Gaussian, possibly multivariate, possibly spatiotemporal processes in the context of large data sets. We discuss attractive theoretical properties of these predictive processes. We also provide a computational template encompassing these diverse settings. Finally, we illustrate the approach with simulated and real data sets.
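
A minimal numerical sketch of the knot-based predictive process idea, assuming an exponential parent covariance and illustrative knot locations; the point is that all expensive linear algebra happens on the m x m knot covariance rather than the n x n data covariance.

```python
import numpy as np

def expo_cov(a, b, phi=6.0):
    """Exponential covariance (unit variance) between 1-D location sets a and b."""
    return np.exp(-phi * np.abs(a[:, None] - b[None, :]))

n, m = 2000, 25
s = np.random.default_rng(0).random(n)   # data locations
s_star = np.linspace(0.0, 1.0, m)        # knots

C_ss = expo_cov(s_star, s_star) + 1e-8 * np.eye(m)   # m x m knot covariance
C_ns = expo_cov(s, s_star)                           # n x m cross-covariance

# predictive process: w~(s) = Cov(w(s), w*) Cov(w*, w*)^{-1} w*, where w* are the
# knot values; its covariance is the rank-m matrix C_ns C_ss^{-1} C_ns^T, so all
# expensive solves involve the m x m matrix rather than the n x n one.
L = np.linalg.cholesky(C_ss)
A = np.linalg.solve(L, C_ns.T)            # m x n
pp_var = np.sum(A ** 2, axis=0)           # marginal variances of the predictive process

# the projection underestimates the parent marginal variance (here 1.0); the gap
# shrinks as knots are added
print(float(np.max(1.0 - pp_var)))
```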

1,083 citations


Journal ArticleDOI
01 Feb 2008
TL;DR: This work marginalizes out the model parameters in closed form by using Gaussian process priors for both the dynamical and the observation mappings, which results in a nonparametric model for dynamical systems that accounts for uncertainty in the model.
Abstract: We introduce Gaussian process dynamical models (GPDMs) for nonlinear time series analysis, with applications to learning models of human pose and motion from high-dimensional motion capture data. A GPDM is a latent variable model. It comprises a low-dimensional latent space with associated dynamics, as well as a map from the latent space to an observation space. We marginalize out the model parameters in closed form by using Gaussian process priors for both the dynamical and the observation mappings. This results in a nonparametric model for dynamical systems that accounts for uncertainty in the model. We demonstrate the approach and compare four learning algorithms on human motion capture data, in which each pose is 50-dimensional. Despite the use of small data sets, the GPDM learns an effective representation of the nonlinear dynamics in these spaces.

1,026 citations


Journal ArticleDOI
TL;DR: This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space and is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions.
Abstract: Many engineering applications are characterized by implicit response functions that are expensive to evaluate and sometimes nonlinear in their behavior, making reliability analysis difficult. This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space. The method begins with a Gaussian process model built from a very small number of samples, and then adaptively chooses where to generate subsequent samples to ensure that the model is accurate in the vicinity of the limit state. The resulting Gaussian process model is then sampled using multimodal adaptive importance sampling to calculate the probability of exceeding (or failing to exceed) the response level of interest. By locating multiple points on or near the limit state, more complex and nonlinear limit states can be modeled, leading to more accurate probability integration. By concentrating the samples in the area where accuracy is important (i.e., in the vicinity of the limit state), only a small number of true function evaluations are required to build a quality surrogate model. The resulting method is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions. This new method is applied to a collection of example problems including one that analyzes the reliability of a microelectromechanical system device that current available methods have difficulty solving either accurately or efficiently.
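
A schematic of the fit-then-adaptively-sample loop, hedged: it uses scikit-learn's GaussianProcessRegressor and a simplified acquisition (the candidate whose prediction is most ambiguous about the sign of the limit state), not the paper's own expected-feasibility criterion, and plain Monte Carlo rather than its multimodal adaptive importance sampling; the limit-state function is a toy stand-in.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):
    """Toy stand-in for an expensive limit-state function; failure when g(x) < 0."""
    return x[:, 0] ** 3 + x[:, 1] + 1.5

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))                      # very small initial design
y = g(X)
candidates = rng.normal(size=(2000, 2))          # candidates drawn from the input density
available = np.ones(len(candidates), dtype=bool)

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-8, normalize_y=True).fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    # sample next where the surrogate is most ambiguous about the sign of g,
    # i.e. close to the limit state relative to its predictive uncertainty
    score = np.abs(mu) / (sd + 1e-12)
    score[~available] = np.inf
    idx = int(np.argmin(score))
    available[idx] = False
    x_new = candidates[idx][None, :]
    X = np.vstack([X, x_new])
    y = np.append(y, g(x_new))

# plain Monte Carlo on the final surrogate for the probability of failure
mc = rng.normal(size=(200_000, 2))
print(float(np.mean(gp.predict(mc) < 0)))
```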

804 citations


Journal ArticleDOI
TL;DR: Motivated by a computer experiment for the design of a rocket booster, this paper presents nonstationary modeling methodologies that couple stationary Gaussian processes with treed partitioning.
Abstract: Motivated by a computer experiment for the design of a rocket booster, this article explores nonstationary modeling methodologies that couple stationary Gaussian processes with treed partitioning. Partitioning is a simple but effective method for dealing with nonstationarity. The methodological developments and statistical computing details that make this approach efficient are described in detail. In addition to providing an analysis of the rocket booster simulator, we show that our approach is effective in other arenas as well.

540 citations


Journal ArticleDOI
TL;DR: The rate of contraction of the posterior distribution based on sampling from a smooth density model when the prior models the log density as a (fractionally integrated) Brownian motion is shown to depend on the position of the true parameter relative to the reproducing kernel Hilbert space of the Gaussian process.
Abstract: We derive rates of contraction of posterior distributions on nonparametric or semiparametric models based on Gaussian processes. The rate of contraction is shown to depend on the position of the true parameter relative to the reproducing kernel Hilbert space of the Gaussian process and the small ball probabilities of the Gaussian process. We determine these quantities for a range of examples of Gaussian priors and in several statistical settings. For instance, we consider the rate of contraction of the posterior distribution based on sampling from a smooth density model when the prior models the log density as a (fractionally integrated) Brownian motion. We also consider regression with Gaussian errors and smooth classification under a logistic or probit link function combined with various priors.
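
The central object in this line of results is the concentration function of the Gaussian prior; stated loosely from this body of work (notation simplified, constants omitted):

$$
\varphi_{w_0}(\varepsilon) \;=\; \inf_{h \in \mathbb{H} : \,\|h - w_0\| \le \varepsilon} \tfrac{1}{2}\|h\|_{\mathbb{H}}^{2} \;-\; \log \Pr\big(\|W\| \le \varepsilon\big),
$$

where $\mathbb{H}$ is the reproducing kernel Hilbert space of the Gaussian prior $W$ and $w_0$ the true parameter; the posterior then contracts at any rate $\varepsilon_n$ satisfying roughly $\varphi_{w_0}(\varepsilon_n) \le n \varepsilon_n^2$, which is how both the position of $w_0$ relative to the RKHS and the small ball probability enter.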

423 citations


Journal Article
TL;DR: A comprehensive overview of many recent algorithms for approximate inference in Gaussian process models for probabilistic binary classification is provided; the relationships between several approaches are elucidated theoretically, and the properties of the different algorithms are corroborated by experimental results.
Abstract: We provide a comprehensive overview of many recent algorithms for approximate inference in Gaussian process models for probabilistic binary classification. The relationships between several approaches are elucidated theoretically, and the properties of the different algorithms are corroborated by experimental results. We examine both 1) the quality of the predictive distributions and 2) the suitability of the different marginal likelihood approximations for model selection (selecting hyperparameters) and compare to a gold standard based on MCMC. Interestingly, some methods produce good predictive distributions although their marginal likelihood approximations are poor. Strong conclusions are drawn about the methods: The Expectation Propagation algorithm is almost always the method of choice unless the computational budget is very tight. We also extend existing methods in various ways, and provide unifying code implementing all approaches.

392 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: The proposed method adds a new higher-level layer to the traditional surveillance pipeline for anomalous event detection and scene model feedback, and the resulting scene model is successfully used to detect local as well as global anomalies in object tracks.
Abstract: We present a novel framework for learning patterns of motion and sizes of objects in static camera surveillance. The proposed method provides a new higher-level layer to the traditional surveillance pipeline for anomalous event detection and scene model feedback. Pixel level probability density functions (pdfs) of appearance have been used for background modelling in the past, but modelling pixel level pdfs of object speed and size from the tracks is novel. Each pdf is modelled as a multivariate Gaussian mixture model (GMM) of the motion (destination location & transition time) and the size (width & height) parameters of the objects at that location. Output of the tracking module is used to perform unsupervised EM-based learning of every GMM. We have successfully used the proposed scene model to detect local as well as global anomalies in object tracks. We also show the use of this scene model to improve object detection through pixel-level parameter feedback of the minimum object size and background learning rate. Most object path modelling approaches first cluster the tracks into major paths in the scene, which can be a source of error. We avoid this by building local pdfs that capture a variety of tracks which are passing through them. Qualitative and quantitative analysis of actual surveillance videos proved the effectiveness of the proposed approach.
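
A minimal sketch of the per-location mixture-modelling step, assuming the five-dimensional feature layout (destination, transition time, width, height) described above and using scikit-learn's EM-based GaussianMixture as a stand-in for the paper's unsupervised EM learning; the data and thresholds are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# toy track observations accumulated at one pixel/grid cell:
# columns = [dest_x, dest_y, transition_time, width, height]
rng = np.random.default_rng(1)
normal_obs = rng.normal([120, 80, 2.0, 15, 40], [5, 5, 0.3, 2, 4], size=(500, 5))

gmm = GaussianMixture(n_components=3, covariance_type='full').fit(normal_obs)

# flag anomalous motion/size at this location via low log-likelihood under the GMM
test = np.array([[120, 80, 2.1, 14, 41],    # ordinary track
                 [120, 80, 0.4, 60, 30]])   # too fast and too wide: anomaly
scores = gmm.score_samples(test)
threshold = np.percentile(gmm.score_samples(normal_obs), 1)
print(scores, scores < threshold)
```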

389 citations


Journal ArticleDOI
TL;DR: A variance stabilizing transform (VST) is applied to a filtered discrete Poisson process, yielding a near-Gaussian process with asymptotically constant variance, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes.
Abstract: In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near Gaussian process with asymptotic constant variance. This new transform, which can be deemed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme reconstructs properly the final estimate. A range of examples show the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
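
The building block being extended is the classical Anscombe transform; the quick numerical check below (illustrative only) shows the variance stabilization it provides, which the MS-VST generalizes to filtered and multiscale coefficients and to the very-low-count regime where the plain transform degrades.

```python
import numpy as np

def anscombe(x):
    """Classical Anscombe VST: maps Poisson(lambda) counts to approximately
    Gaussian values with variance close to 1 for moderately large lambda."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

rng = np.random.default_rng(0)
for lam in (2, 5, 20, 100):
    counts = rng.poisson(lam, size=200_000)
    # raw variance is about lam; stabilized variance is about 1
    print(lam, round(counts.var(), 2), round(anscombe(counts).var(), 3))
```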

Journal ArticleDOI
TL;DR: In this article, the authors determine the rate region of the quadratic Gaussian two-encoder source-coding problem, which is achieved by a simple architecture that separates the analog and digital aspects of the compression.
Abstract: We determine the rate region of the quadratic Gaussian two-encoder source-coding problem. This rate region is achieved by a simple architecture that separates the analog and digital aspects of the compression. Furthermore, this architecture requires higher rates to send a Gaussian source than it does to send any other source with the same covariance. Our techniques can also be used to determine the sum-rate of some generalizations of this classical problem. Our approach involves coupling the problem to a quadratic Gaussian "CEO problem."

01 Dec 2008
TL;DR: In this paper, a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a spatial or temporal field, endowed with a hierarchical Gaussian process prior, is proposed, where truncated Karhunen-Loeve expansions are introduced to efficiently parameterize the unknown field and specify a stochastic forward problem whose solution captures that of the deterministic forward model over the support of the prior.
Abstract: We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a spatial or temporal field, endowed with a hierarchical Gaussian process prior. Computational challenges in this construction arise from the need for repeated evaluations of the forward model (e.g., in the context of Markov chain Monte Carlo) and are compounded by high dimensionality of the posterior. We address these challenges by introducing truncated Karhunen-Loeve expansions, based on the prior distribution, to efficiently parameterize the unknown field and to specify a stochastic forward problem whose solution captures that of the deterministic forward model over the support of the prior. We seek a solution of this problem using Galerkin projection on a polynomial chaos basis, and use the solution to construct a reduced-dimensionality surrogate posterior density that is inexpensive to evaluate. We demonstrate the formulation on a transient diffusion equation with prescribed source terms, inferring the spatially-varying diffusivity of the medium from limited and noisy data.
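
A bare-bones, discretized illustration of the truncated Karhunen-Loeve parameterization (the paper works with the continuous expansion and couples it to a polynomial chaos surrogate of the forward model; the grid, covariance, and truncation level here are arbitrary).

```python
import numpy as np

n, n_terms = 200, 10
s = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.2)      # prior covariance on the grid

eigvals, eigvecs = np.linalg.eigh(C)
idx = np.argsort(eigvals)[::-1][:n_terms]               # keep the leading modes
lam, phi = eigvals[idx], eigvecs[:, idx]

def sample_field(xi):
    """Mean-zero field realization parameterized by the low-dimensional vector xi."""
    return phi @ (np.sqrt(lam) * xi)

# the Bayesian inverse problem is then posed over the n_terms coefficients xi
# instead of the full n-dimensional discretized field
draw = sample_field(np.random.default_rng(0).standard_normal(n_terms))
print(draw.shape, (lam[:3] / lam.sum()).round(3))       # energy captured by leading modes
```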

Journal ArticleDOI
TL;DR: A framework for building Gaussian process models that incorporate both qualitative and quantitative factors is proposed, and modern optimization techniques are used in the estimation to ensure the validity of the constructed correlation functions.
Abstract: Modeling experiments with qualitative and quantitative factors is an important issue in computer modeling. We propose a framework for building Gaussian process models that incorporate both types of factors. The key to the development of these new models is an approach for constructing correlation functions with qualitative and quantitative factors. An iterative estimation procedure is developed for the proposed models. Modern optimization techniques are used in the estimation to ensure the validity of the constructed correlation functions. The proposed method is illustrated with an example involving a known function and a real example for modeling the thermal distribution of a data center.
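
One common construction for such a correlation function is the product of a Gaussian correlation in the quantitative inputs and a positive-definite cross-correlation matrix over the qualitative levels; the parameterization below is illustrative and not necessarily the paper's exact construction, whose validity is instead enforced during estimation.

```python
import numpy as np

def mixed_corr(x1, z1, x2, z2, theta, T):
    """Correlation between runs (x1, z1) and (x2, z2): Gaussian correlation in the
    quantitative inputs x times the (z1, z2) entry of a positive-definite
    cross-correlation matrix T over the qualitative levels z."""
    return np.exp(-np.sum(theta * (x1 - x2) ** 2)) * T[z1, z2]

# a simple exchangeable T (unit diagonal, common off-diagonal correlation 0.6);
# its eigenvalues are 0.4 and 2.2, so it is positive definite
levels = 3
T = 0.6 * np.ones((levels, levels)) + 0.4 * np.eye(levels)

theta = np.array([4.0, 1.0])
print(mixed_corr(np.array([0.2, 0.5]), 0, np.array([0.3, 0.4]), 2, theta, T))
```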

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the MLE-based mapping with dynamic features can significantly improve the mapping performance compared with the MMSE-based mapping in both the articulatory-to-acoustic mapping and the inversion mapping.

Journal ArticleDOI
TL;DR: The number of samples needed to have a globally accurate surface stays generally out of reach for problems considering more than four design variables.
Abstract: In this paper, we compare the global accuracy of different strategies to build response surfaces by varying sampling methods and modeling techniques. The aerodynamic test functions are obtained by deforming the shape of a transonic airfoil. For comparisons, a robust strategy for model fit using a new efficient initialization technique followed by a gradient optimization was applied. First, a study of different sampling methods proves that including a posteriori information on the function to sample distribution can improve accuracy over classical space-filling methods such as Latin hypercube sampling. Second, comparing kriging and gradient-enhanced kriging on two- to six-dimensional test cases shows that interpolating gradient vectors drastically improves response-surface accuracy. Although direct and indirect cokriging have equivalent formulations, the indirect cokriging outperforms the direct approach. The slow linear phase of error convergence when increasing sample size is not avoided by cokriging. Thus, the number of samples needed to have a globally accurate surface stays generally out of reach for problems considering more than four design variables.

Proceedings Article
08 Dec 2008
TL;DR: A sparse approximation approach for dependent-output Gaussian processes (GPs) is presented, using the convolution process formalism to establish dependencies between output variables, where each latent function is represented as a GP.
Abstract: We present a sparse approximation approach for dependent output Gaussian processes (GP). Employing a latent function framework, we apply the convolution process formalism to establish dependencies between output variables, where each latent function is represented as a GP. Based on these latent functions, we establish an approximation scheme using a conditional independence assumption between the output processes, leading to an approximation of the full covariance which is determined by the locations at which the latent functions are evaluated. We show results of the proposed methodology for synthetic data and real world applications on pollution prediction and a sensor network.

Journal ArticleDOI
TL;DR: In this paper, a general likelihood approximation for correlated Gaussian fields observed on part of the sky is proposed that is fast to evaluate using a precomputed covariance matrix and a set of power spectrum estimators.
Abstract: Microwave background temperature and polarization observations are a powerful way to constrain cosmological parameters if the likelihood function can be calculated accurately. The temperature and polarization fields are correlated, partial-sky coverage correlates power spectrum estimators at different l, and the likelihood function for a theory spectrum given a set of observed estimators is non-Gaussian. An accurate analysis must model all these properties. Most existing likelihood approximations are good enough for a temperature-only analysis, however they cannot reliably handle temperature-polarization correlations. We give a new general approximation applicable for correlated Gaussian fields observed on part of the sky. The approximation models the non-Gaussian form exactly in the ideal full-sky limit and is fast to evaluate using a precomputed covariance matrix and set of power spectrum estimators. We show with simulations that it is good enough to obtain correct results at l ≳ 30 where an exact calculation becomes impossible. We also show that some Gaussian approximations give reliable parameter constraints even though they do not capture the shape of the likelihood function at each l accurately. Finally we test the approximations on simulations with realistically anisotropic noise and asymmetric foreground mask.

Journal ArticleDOI
TL;DR: It is rigorously established that the mutual information of correlated multiple-input multiple-output (MIMO) Rayleigh channels when properly centered and rescaled converges to a standard Gaussian random variable.
Abstract: This paper addresses the behavior of the mutual information of correlated multiple-input multiple-output (MIMO) Rayleigh channels when the numbers of transmit and receive antennas converge to +∞ at the same rate. Using a new and simple approach based on the Poincare-Nash inequality and on an integration by parts formula, it is rigorously established that the mutual information when properly centered and rescaled converges to a standard Gaussian random variable. Simple expressions for the centering and scaling parameters are provided. These results confirm previous evaluations based on the powerful but nonrigorous replica method. It is believed that the tools that are used in this paper are simple, robust, and of interest for the communications engineering community.
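
In generic notation (the paper additionally gives explicit deterministic equivalents for the centering and scaling under antenna correlation), the statement concerns

$$
I_n \;=\; \log\det\!\Big(\mathbf{I} + \tfrac{1}{\sigma^{2}}\,\mathbf{H}\mathbf{H}^{*}\Big),
\qquad
\frac{I_n - \mathbb{E}[I_n]}{\sqrt{\operatorname{Var}(I_n)}} \;\xrightarrow{d}\; \mathcal{N}(0,1),
$$

as the numbers of transmit and receive antennas grow to infinity at the same rate.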

Journal ArticleDOI
TL;DR: This paper proposes a robust postprocessing model to infer the latent heart rate time series and applies the method to a wide range of heart rate data and obtains convincing predictions along with uncertainty estimates.
Abstract: Heart rate data collected during nonlaboratory conditions present several data-modeling challenges. First, the noise in such data is often poorly described by a simple Gaussian; it has outliers and errors come in bursts. Second, in large-scale studies the ECG waveform is usually not recorded in full, so one has to deal with missing information. In this paper, we propose a robust postprocessing model for such applications. Our model to infer the latent heart rate time series consists of two main components: unsupervised clustering followed by Bayesian regression. The clustering component uses auxiliary data to learn the structure of outliers and noise bursts. The subsequent Gaussian process regression model uses the cluster assignments as prior information and incorporates expert knowledge about the physiology of the heart. We apply the method to a wide range of heart rate data and obtain convincing predictions along with uncertainty estimates. In a quantitative comparison with existing postprocessing methodology, our model achieves a significant increase in performance.

Journal ArticleDOI
TL;DR: The experimental results show that GPR models have an advantage over other regression models in terms of model accuracy, feature scaling, and probabilistic variance, and demonstrate the effectiveness of controlling the optimization process to acquire more reliable optimum predictive solutions.
Abstract: The paper discusses the development of reliable multi-objective optimization based on Gaussian process regression (GPR) to optimize the high-speed wire-cut electrical discharge machining (WEDM-HS) process, considering mean current, on-time and off-time as input features and material removal rate (MRR) and surface roughness (SR) as output responses. In order to achieve an accurate estimation of the nonlinear electrical discharging and thermal erosion process, multiple GPR models, chosen for their simplicity and flexibility, are used to identify the WEDM-HS process in the presence of measurement noise. Objective functions for the reliable multi-objective optimization are built from the responses of the GPR models and from the probabilistic variance of the predictive response, which serves as an empirical reliability measurement. Finally, the cluster centers of the Pareto front are provided as candidate solutions to choose from. Experiments on WEDM-HS (DK7732C2) are conducted to evaluate the proposed intelligent approach in terms of optimization process accuracy and reliability. The experimental results show that GPR models have an advantage over other regression models in terms of model accuracy, feature scaling, and probabilistic variance. Given the adjustable coefficient parameters, the experimental optimization and the candidate solutions show the effectiveness of controlling the optimization process to acquire more reliable optimum predictive solutions.

Journal ArticleDOI
TL;DR: A two-cycle algorithm is proposed to approximate level-set-based curve evolution without the need to solve partial differential equations (PDEs), applicable to a broad class of evolution speeds that can be viewed as composed of a data-dependent term and a curve smoothness regularization term.
Abstract: In this paper, we present a complete and practical algorithm for the approximation of level-set-based curve evolution suitable for real-time implementation. In particular, we propose a two-cycle algorithm to approximate level-set-based curve evolution without the need of solving partial differential equations (PDEs). Our algorithm is applicable to a broad class of evolution speeds that can be viewed as composed of a data-dependent term and a curve smoothness regularization term. We achieve curve evolution corresponding to such evolution speeds by separating the evolution process into two different cycles: one cycle for the data-dependent term and a second cycle for the smoothness regularization. The smoothing term is derived from a Gaussian filtering process. In both cycles, the evolution is realized through a simple element switching mechanism between two linked lists, that implicitly represents the curve using an integer valued level-set function. By careful construction, all the key evolution steps require only integer operations. A consequence is that we obtain significant computation speedups compared to exact PDE-based approaches while obtaining excellent agreement with these methods for problems of practical engineering interest. In particular, the resulting algorithm is fast enough for use in real-time video processing applications, which we demonstrate through several image segmentation and video tracking experiments.

Posted Content
TL;DR: In this paper, the Gaussian process model which gives analytical expressions of Sobol indices is discussed, and the techniques are finally applied to a real case of hydrogeological modeling.
Abstract: Global sensitivity analysis of complex numerical models can be performed by calculating variance-based importance measures of the input variables, such as the Sobol indices. However, these techniques, requiring a large number of model evaluations, are often unacceptable for time-expensive computer codes. A well-known and widely used approach consists in replacing the computer code by a metamodel, which predicts the model responses with a negligible computation time and renders straightforward the estimation of Sobol indices. In this paper, we discuss the Gaussian process model, which gives analytical expressions of the Sobol indices. Two approaches are studied to compute the Sobol indices: the first based on the predictor of the Gaussian process model and the second based on the global stochastic process model. Comparisons between the two estimates, made on analytical examples, show the superiority of the second approach in terms of convergence and robustness. Moreover, the second approach makes it possible to integrate the modeling error of the Gaussian process model by directly giving confidence intervals on the Sobol indices. These techniques are finally applied to a real case of hydrogeological modeling.
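
A minimal pick-freeze Monte Carlo estimate of a first-order Sobol index computed on a cheap predictor standing in for the GP posterior mean; this corresponds to the first of the two approaches discussed, and the test function, input ranges, and sample sizes below are illustrative.

```python
import numpy as np

def predictor(X):
    """Cheap stand-in for the GP posterior mean of an expensive code (Ishigami-like)."""
    return np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

def first_order_sobol(f, d, i, n=200_000, rng=None):
    """Pick-freeze estimator of S_i = Var(E[Y|X_i]) / Var(Y) for inputs ~ U(-pi, pi)."""
    rng = rng or np.random.default_rng(0)
    A = rng.uniform(-np.pi, np.pi, size=(n, d))
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    AB = B.copy()
    AB[:, i] = A[:, i]                         # freeze coordinate i from A
    yA, yB, yAB = f(A), f(B), f(AB)
    return np.mean(yA * (yAB - yB)) / np.var(np.concatenate([yA, yB]))

print([round(first_order_sobol(predictor, 3, i), 3) for i in range(3)])
```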

Journal ArticleDOI
TL;DR: It is shown that, with non-Gaussian data, causal inference is possible even in the presence of hidden variables (unobserved confounders), even when the existence of such variables is unknown a priori.

Journal ArticleDOI
TL;DR: A specific estimation procedure is developed to adjust a Gaussian process model that is characterized by its mean and covariance functions in complex cases (non-linear relations, highly dispersed or discontinuous output, high-dimensional input, inadequate sampling designs, etc.).

Journal ArticleDOI
TL;DR: This paper introduces a multivariate Bayesian scheme to decode or recognise brain states from neuroimages, and reduces the problem to the same form used in Gaussian process modelling, which affords a generic and efficient scheme for model optimisation and evaluating model evidence.

Proceedings ArticleDOI
15 Aug 2008
TL;DR: A low-complexity recursive procedure is presented for minimum mean squared error (MMSE) estimation in linear regression models and a Gaussian mixture is chosen as the prior on the unknown parameter vector.
Abstract: A low-complexity recursive procedure is presented for minimum mean squared error (MMSE) estimation in linear regression models. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both an approximate MMSE estimate of the parameter vector and a set of high posterior probability mixing parameters. Emphasis is given to the case of a sparse parameter vector. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and MAP model selection. The set of high probability mixing parameters not only provides MAP basis selection, but also yields relative probabilities that reveal potential ambiguity in the sparse model.
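
In generic notation (an illustrative restatement of the setup, not the paper's exact recursion): with prior $p(\theta)=\sum_k w_k\,\mathcal{N}(\theta;\mu_k,\Sigma_k)$ and observation $y = A\theta + n$, $n\sim\mathcal{N}(0,\sigma^2 I)$, the posterior is again a Gaussian mixture,

$$
p(\theta\mid y)=\sum_k \tilde{w}_k\,\mathcal{N}(\theta;\tilde{\mu}_k,\tilde{\Sigma}_k),
\qquad
\tilde{w}_k \propto w_k\,\mathcal{N}\big(y;\,A\mu_k,\;A\Sigma_k A^{\top} + \sigma^{2} I\big),
$$

with the usual Gaussian updates $\tilde{\Sigma}_k=(\Sigma_k^{-1}+A^{\top}A/\sigma^{2})^{-1}$ and $\tilde{\mu}_k=\tilde{\Sigma}_k(\Sigma_k^{-1}\mu_k+A^{\top}y/\sigma^{2})$. The MMSE estimate is the weighted average $\sum_k \tilde{w}_k\tilde{\mu}_k$, whereas MAP model selection keeps only the highest-weight component, which is the distinction the simulations illustrate.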

Proceedings Article
08 Dec 2008
TL;DR: This work presents an accelerated sampling procedure which enables Bayesian inference of parameters in nonlinear ordinary and delay differential equations via the novel use of Gaussian processes (GP).
Abstract: Identification and comparison of nonlinear dynamical system models using noisy and sparse experimental data is a vital task in many fields, however current methods are computationally expensive and prone to error due in part to the nonlinear nature of the likelihood surfaces induced. We present an accelerated sampling procedure which enables Bayesian inference of parameters in nonlinear ordinary and delay differential equations via the novel use of Gaussian processes (GP). Our method involves GP regression over time-series data, and the resulting derivative and time delay estimates make parameter inference possible without solving the dynamical system explicitly, resulting in dramatic savings of computational time. We demonstrate the speed and statistical accuracy of our approach using examples of both ordinary and delay differential equations, and provide a comprehensive comparison with current state of the art methods.
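
A bare-bones gradient-matching sketch in the spirit of this approach (not the paper's accelerated Bayesian sampler): fit a GP to the time series, differentiate its posterior mean analytically, and fit the ODE parameter by matching that derivative to the ODE right-hand side; the logistic ODE, kernel, and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, ell, noise = 1.5, 0.8, 0.02

# noisy observations of the logistic ODE dx/dt = theta * x * (1 - x), x(0) = 0.1
t = np.linspace(0.0, 5.0, 30)
x_true = 1.0 / (1.0 + 9.0 * np.exp(-theta_true * t))
y = x_true + noise * rng.standard_normal(t.size)

def k(a, b):
    """Squared-exponential kernel."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = k(t, t) + noise ** 2 * np.eye(t.size)
alpha = np.linalg.solve(K, y)

t_star = np.linspace(0.2, 4.8, 100)
m = k(t_star, t) @ alpha                                   # GP posterior mean of x(t)
dk = -(t_star[:, None] - t[None, :]) / ell ** 2 * k(t_star, t)
dm = dk @ alpha                                            # posterior mean of dx/dt

# least-squares match of the GP derivative to the ODE right-hand side
rhs = m * (1.0 - m)
theta_hat = float(rhs @ dm / (rhs @ rhs))
print(theta_hat)   # lands near theta_true without ever integrating the ODE
```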

Journal ArticleDOI
TL;DR: A Gaussian-mixture-model approach is proposed for accurate uncertainty propagation through a general nonlinear system and is argued to be an excellent candidate for higher-dimensional uncertainty-propagation problems.
Abstract: A Gaussian-mixture-model approach is proposed for accurate uncertainty propagation through a general nonlinear system. The transition probability density function is approximated by a finite sum of Gaussian density functions for which the parameters (mean and covariance) are propagated using linear propagation theory. Two different approaches are introduced to update the weights of different components of a Gaussian-mixture model for uncertainty propagation through a nonlinear system. The first method updates the weights such that they minimize the integral square difference between the true forecast probability density function and its Gaussian-sum approximation. The second method uses the Fokker-Planck-Kolmogorov equation error as feedback to adapt the amplitudes of different Gaussian components while solving a quadratic programming problem. The proposed methods are applied to a variety of problems in the open literature and are argued to be an excellent candidate for higher-dimensional uncertainty-propagation problems.
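
In generic notation, the component-wise linear propagation step referred to above is

$$
p(\mathbf{x}) \approx \sum_{i=1}^{N} w_i\,\mathcal{N}(\mathbf{x};\boldsymbol{\mu}_i,\mathbf{P}_i),
\qquad
\boldsymbol{\mu}_i^{+}=\mathbf{f}(\boldsymbol{\mu}_i),
\quad
\mathbf{P}_i^{+}=\mathbf{A}_i\mathbf{P}_i\mathbf{A}_i^{\top}+\mathbf{Q},
\quad
\mathbf{A}_i=\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\Big|_{\boldsymbol{\mu}_i},
$$

after which the weights $w_i \ge 0$, $\sum_i w_i = 1$ are re-optimized as described above, either by minimizing the integral squared difference to the forecast density or by using the Fokker-Planck-Kolmogorov equation error as feedback via a quadratic program.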

Journal ArticleDOI
TL;DR: A statistical study of the main covariance matrix estimates used in the literature is performed through bias analysis, consistency, and asymptotic distribution to compare the performance of the estimates and to establish simple relationships between them.
Abstract: This paper deals with covariance matrix estimates in impulsive noise environments. Physical models based on compound noise modeling [spherically invariant random vectors (SIRV), compound Gaussian processes] make it possible to correctly describe reality (e.g., range power variations or clutter transition areas in radar problems). However, these models depend on several unknown parameters (covariance matrix, statistical distribution of the texture, disturbance parameters) that have to be estimated. Based on these noise models, this paper presents a complete analysis of the main covariance matrix estimates used in the literature. Four estimates are studied: the well-known sample covariance matrix M_SCM and a normalized version M_N, the fixed-point (FP) estimate M_FP, and a theoretical benchmark M_TFP. Among these estimates, the only one of practical interest in impulsive noise is the FP. The three others, which could be used in a Gaussian context, are, in this paper, only of academic interest, i.e., for comparison with the FP. A statistical study of these estimates is performed through bias analysis, consistency, and asymptotic distribution. This study makes it possible to compare the performance of the estimates and to establish simple relationships between them. Finally, theoretical results are emphasized by several simulations corresponding to real situations.
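
A minimal sketch of the fixed-point (FP) iteration under the compound-Gaussian model; the data model, texture distribution, and stopping rule below are illustrative, not the paper's experimental setup.

```python
import numpy as np

def fixed_point_estimate(X, n_iter=50, tol=1e-8):
    """Fixed-point covariance estimate: M = (p/N) * sum_i x_i x_i^H / (x_i^H M^{-1} x_i),
    iterated to convergence; insensitive to the per-snapshot SIRV texture."""
    N, p = X.shape
    M = np.eye(p, dtype=complex)
    for _ in range(n_iter):
        Minv = np.linalg.inv(M)
        q = np.real(np.einsum('ij,jk,ik->i', X.conj(), Minv, X))   # x_i^H M^{-1} x_i
        M_new = (p / N) * (X.conj().T * (1.0 / q)) @ X
        if np.linalg.norm(M_new - M) / np.linalg.norm(M) < tol:
            return M_new
        M = M_new
    return M

# toy compound-Gaussian clutter: correlated Gaussian speckle scaled by a random
# texture per snapshot, so the sample covariance matrix is distorted but the FP
# estimate recovers the speckle covariance structure (up to an overall scale)
rng = np.random.default_rng(0)
p, N = 4, 2000
true_cov = 0.9 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
L = np.linalg.cholesky(true_cov)
speckle = (rng.standard_normal((N, p)) + 1j * rng.standard_normal((N, p))) @ L.T / np.sqrt(2)
tau = rng.gamma(0.5, 2.0, size=(N, 1))          # heavy-tailed texture
X = np.sqrt(tau) * speckle

M_fp = fixed_point_estimate(X)
print(np.real(M_fp * p / np.trace(M_fp)).round(2))   # trace-normalized; compare to true_cov
```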