
Showing papers on "Gaussian process published in 2006"


Journal ArticleDOI
TL;DR: Under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture, and closed-form recursions for propagating the means, covariances, and weights of the constituent Gaussian components of the posterior intensity are derived.
Abstract: A new recursive algorithm is proposed for jointly estimating the time-varying number of targets and their states from a sequence of observation sets in the presence of data association uncertainty, detection uncertainty, noise, and false alarms. The approach involves modelling the respective collections of targets and measurements as random finite sets and applying the probability hypothesis density (PHD) recursion to propagate the posterior intensity, which is a first-order statistic of the random finite set of targets, in time. At present, there is no closed-form solution to the PHD recursion. This paper shows that under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture. More importantly, closed-form recursions for propagating the means, covariances, and weights of the constituent Gaussian components of the posterior intensity are derived. The proposed algorithm combines these recursions with a strategy for managing the number of Gaussian components to increase efficiency. This algorithm is extended to accommodate mildly nonlinear target dynamics using approximation strategies from the extended and unscented Kalman filters.
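
Because the recursion is closed-form, one prediction-update cycle is short to prototype. Below is a minimal, hedged sketch of the Gaussian-mixture PHD step under the paper's linear-Gaussian assumptions; the survival and detection probabilities, the constant clutter intensity `kappa`, and the model matrices are illustrative placeholders, and the component pruning/merging strategy is left as a stub.

```python
import numpy as np

def gm_phd_step(comps, Z, F, Q, H, R, births, p_s=0.99, p_d=0.9, kappa=1e-4):
    """One prediction-update cycle of the GM-PHD filter.

    comps and births are lists of (weight, mean, cov) Gaussian components;
    Z is the list of measurement vectors received this scan."""
    # Prediction: surviving components pushed through the dynamics, plus births.
    pred = [(p_s * w, F @ m, F @ P @ F.T + Q) for (w, m, P) in comps]
    pred += list(births)

    # Missed-detection terms: every predicted component survives, down-weighted.
    updated = [((1.0 - p_d) * w, m, P) for (w, m, P) in pred]

    # Measurement-update terms: one Kalman update of every component per z.
    for z in Z:
        group = []
        for (w, m, P) in pred:
            S = H @ P @ H.T + R                        # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
            nu = z - H @ m                             # innovation
            q = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) \
                / np.sqrt(np.linalg.det(2.0 * np.pi * S))
            group.append((p_d * w * q, m + K @ nu,
                          (np.eye(len(m)) - K @ H) @ P))
        norm = kappa + sum(w for (w, _, _) in group)
        updated += [(w / norm, m, P) for (w, m, P) in group]
    return updated  # prune and merge here to keep the mixture size bounded
```

The expected number of targets is the sum of the component weights, and state estimates are read from components whose weight exceeds a threshold.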

1,805 citations


Book ChapterDOI
28 May 2006
TL;DR: In this paper, a distributed protocol for generating shares of random noise, secure against malicious participants, was proposed, where the purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers.
Abstract: In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form ∑ᵢ f(dᵢ), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.

1,567 citations


Journal ArticleDOI
TL;DR: This study reveals the highly non-Gaussian marginal statistics and strong interlocation, interscale, and interdirection dependencies of contourlet coefficients and finds that conditioned on the magnitudes of their generalized neighborhood coefficients, contourlet coefficients can be approximately modeled as Gaussian random variables.
Abstract: The contourlet transform is a new two-dimensional extension of the wavelet transform using multiscale and directional filter banks. The contourlet expansion is composed of basis images oriented at various directions in multiple scales, with flexible aspect ratios. Given this rich set of basis images, the contourlet transform effectively captures smooth contours that are the dominant feature in natural images. We begin with a detailed study on the statistics of the contourlet coefficients of natural images: using histograms to estimate the marginal and joint distributions and mutual information to measure the dependencies between coefficients. This study reveals the highly non-Gaussian marginal statistics and strong interlocation, interscale, and interdirection dependencies of contourlet coefficients. We also find that conditioned on the magnitudes of their generalized neighborhood coefficients, contourlet coefficients can be approximately modeled as Gaussian random variables. Based on these findings, we model contourlet coefficients using a hidden Markov tree (HMT) model with Gaussian mixtures that can capture all interscale, interdirection, and interlocation dependencies. We present experimental results using this model in image denoising and texture retrieval applications. In denoising, the contourlet HMT outperforms other wavelet methods in terms of visual quality, especially around edges. In texture retrieval, it shows improvements in performance for various oriented textures.

583 citations


Journal ArticleDOI
TL;DR: A class of maximum-likelihood estimators is introduced that requires transmitting just one bit per sensor to achieve an estimation variance close to that of the sample mean estimator, for deterministic mean-location parameter estimation when only quantized versions of the original observations are available.
Abstract: We study deterministic mean-location parameter estimation when only quantized versions of the original observations are available, due to bandwidth constraints. When the dynamic range of the parameter is small or comparable with the noise variance, we introduce a class of maximum-likelihood estimators that require transmitting just one bit per sensor to achieve an estimation variance close to that of the (clairvoyant) sample mean estimator. When the dynamic range is comparable or larger than the noise standard deviation, we show that an optimum quantization step exists to achieve the best possible variance for a given bandwidth constraint. We will also establish that in certain cases the sample mean estimator formed by quantized observations is preferable for complexity reasons. We finally touch upon algorithm implementation issues and guarantee that all the numerical maximizations required by the proposed estimators are concave.
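
When the noise is Gaussian with known standard deviation, the one-bit estimator has a closed form: each sensor transmits b_i = 1{x_i > τ} and the fusion center inverts the Gaussian CDF. A small sketch (threshold, noise level, and sample size are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta, sigma, tau, n = 1.3, 1.0, 0.0, 10_000   # true mean, noise std, threshold

x = theta + sigma * rng.standard_normal(n)      # analog sensor observations
b = (x > tau).astype(float)                     # one bit per sensor

# P(b = 1) = Phi((theta - tau) / sigma), so the MLE inverts the Gaussian CDF.
p_hat = b.mean()
theta_hat = tau + sigma * norm.ppf(p_hat)

print(theta_hat)                                # close to the true mean 1.3
```

Near τ = θ the variance of this estimator is only a factor of π/2 larger than that of the clairvoyant sample mean, which is the small-dynamic-range regime the paper exploits.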

578 citations


Proceedings ArticleDOI
17 Jun 2006
TL;DR: This work modifies the GPDM to permit learning from motions with significant stylistic variation, and the resulting priors are effective for tracking a range of human walking styles, despite weak and noisy image measurements and significant occlusions.
Abstract: We advocate the use of Gaussian Process Dynamical Models (GPDMs) for learning human pose and motion priors for 3D people tracking. A GPDM provides a low-dimensional embedding of human motion data, with a density function that gives higher probability to poses and motions close to the training data. With Bayesian model averaging, a GPDM can be learned from relatively small amounts of data, and it generalizes gracefully to motions outside the training set. Here we modify the GPDM to permit learning from motions with significant stylistic variation. The resulting priors are effective for tracking a range of human walking styles, despite weak and noisy image measurements and significant occlusions.

526 citations


Journal ArticleDOI
TL;DR: A new class of nonstationary covariance functions for spatial modelling is introduced, which includes a nonstationary version of the Matérn stationary covariance, in which the differentiability of the spatial surface is controlled by a parameter, freeing one from fixing the differentiability in advance.
Abstract: We introduce a new class of nonstationary covariance functions for spatial modelling. Nonstationary covariance functions allow the model to adapt to spatial surfaces whose variability changes with location. The class includes a nonstationary version of the Matérn stationary covariance, in which the differentiability of the spatial surface is controlled by a parameter, freeing one from fixing the differentiability in advance. The class allows one to knit together local covariance parameters into a valid global nonstationary covariance, regardless of how the local covariance structure is estimated. We employ this new nonstationary covariance in a fully Bayesian model in which the unknown spatial process has a Gaussian process (GP) prior distribution with a nonstationary covariance function from the class. We model the nonstationary structure in a computationally efficient way that creates nearly stationary local behavior and for which stationarity is a special case. We also suggest non-Bayesian approaches to nonstationary kriging. To assess the method, we use real climate data to compare the Bayesian nonstationary GP model with a Bayesian stationary GP model, various standard spatial smoothing approaches, and nonstationary models that can adapt to function heterogeneity. The GP models outperform the competitors, but while the nonstationary GP gives qualitatively more sensible results, it shows little advantage over the stationary GP on held-out data, illustrating the difficulty in fitting complicated spatial data.
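
A concrete one-dimensional member of this family is the Gibbs covariance, where a location-dependent lengthscale ℓ(x) is knit into a valid global kernel. A hedged sketch (the lengthscale function below is an arbitrary illustration, not one from the paper):

```python
import numpy as np

def ell(x):
    """Spatially varying lengthscale; illustrative choice only."""
    return 0.2 + 0.5 * np.abs(np.sin(x))

def gibbs_cov(x1, x2, var=1.0):
    """Nonstationary covariance with an input-dependent lengthscale."""
    l1, l2 = ell(x1)[:, None], ell(x2)[None, :]
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return var * np.sqrt(2 * l1 * l2 / (l1**2 + l2**2)) \
               * np.exp(-d2 / (l1**2 + l2**2))

x = np.linspace(0, 5, 200)
K = gibbs_cov(x, x)
assert np.linalg.eigvalsh(K).min() > -1e-8   # numerically PSD: a valid covariance
```

When ℓ is constant, the expression collapses to the stationary squared exponential, matching the stationarity-as-special-case property described above.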

487 citations


Proceedings ArticleDOI
16 Aug 2006
TL;DR: Gaussian processes can be used to generate a likelihood model for signal strength measurements, and parameters of the model, such as signal noise and spatial correlation between measurements, can be learned from data via hyperparameter estimation.
Abstract: Estimating the location of a mobile device or a robot from wireless signal strength has become an area of highly active research. The key problem in this context stems from the complexity of how signals propagate through space, especially in the presence of obstacles such as buildings, walls or people. In this paper we show how Gaussian processes can be used to generate a likelihood model for signal strength measurements. We also show how parameters of the model, such as signal noise and spatial correlation between measurements, can be learned from data via hyperparameter estimation. Experiments using WiFi indoor data and GSM cellphone connectivity demonstrate the superior performance of our approach.
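
Concretely, the baseline likelihood model is GP regression from 2-D position to received signal strength, with signal variance, correlation length, and noise variance learned by maximizing the marginal likelihood. A sketch using scikit-learn on synthetic stand-ins for WiFi readings (the log-distance generator below is an assumption for illustration):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(1)
X = rng.uniform(0, 20, size=(150, 2))                 # measurement positions (m)
rss = -40 - 20 * np.log10(np.linalg.norm(X - [10, 10], axis=1) + 1.0) \
      + 2.0 * rng.standard_normal(150)                # synthetic dBm readings

# Signal variance, spatial correlation length, and noise level are the
# hyperparameters learned from data, as in the paper.
kernel = ConstantKernel(10.0) * RBF(length_scale=3.0) + WhiteKernel(4.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, rss)

mu, std = gp.predict(np.array([[12.0, 9.0]]), return_std=True)
```

The predictive mean and standard deviation at a candidate position define the Gaussian likelihood p(signal strength | position) that a localization filter then inverts.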

423 citations


Journal ArticleDOI
TL;DR: A common base is provided for the first time to analyze and compare Gaussian filters with respect to accuracy, efficiency, and stability, and to help design more efficient filters by employing better numerical integration methods.
Abstract: This paper proposes a numerical-integration perspective on Gaussian filters. A Gaussian filter is an approximation of Bayesian inference under the assumption that the posterior probability density is Gaussian. There exist various Gaussian filters in the literature, derived from very different backgrounds. From the numerical-integration viewpoint, the various versions of Gaussian filters differ from each other only in their specific treatments of approximating the multiple statistical integrations. A common base is provided for the first time to analyze and compare Gaussian filters with respect to accuracy, efficiency, and stability. This study is expected to facilitate the selection of appropriate Gaussian filters in practice and to help design more efficient filters by employing better numerical integration methods.
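
From this viewpoint, each Gaussian filter is just a quadrature rule for integrals of the form E[f(x)] with x Gaussian. A minimal one-dimensional sketch of the Gauss-Hermite choice (the nonlinearity and the moments are arbitrary examples):

```python
import numpy as np

def gauss_hermite_expectation(f, mean, var, order=5):
    """Approximate E[f(x)] for x ~ N(mean, var) by Gauss-Hermite quadrature.
    Different Gaussian filters differ only in this integration rule."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)  # probabilists'
    x = mean + np.sqrt(var) * nodes
    return weights @ f(x) / np.sqrt(2 * np.pi)

# Example: propagate N(1, 0.25) through a nonlinearity.
print(gauss_hermite_expectation(np.sin, 1.0, 0.25))
# Exact value is sin(1) * exp(-0.25 / 2) ~ 0.7426, matched closely here.
```

Swapping in a different node-weight rule (unscented sigma points, cubature points, Monte Carlo) yields the other members of the family; that substitution is the design space the paper maps out.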

284 citations


Proceedings ArticleDOI
22 Mar 2006
TL;DR: In this paper, the best known guarantees for exact reconstruction of a sparse signal f from few nonadaptive universal linear measurements are proved; previously known guarantees involved huge constants, in spite of very good performance of the algorithms in practice.
Abstract: This paper proves best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements. We consider Fourier measurements (random sample of frequencies of f) and random Gaussian measurements. The method for reconstruction that has recently gained momentum in the sparse approximation theory is to relax this highly non-convex problem to a convex problem, and then solve it as a linear program. It is an open question what the best guarantees are for the reconstruction problem to be equivalent to its convex relaxation. Recent work shows that the number of measurements k(r,n) needed to exactly reconstruct any r-sparse signal f of length n from its linear measurements with convex relaxation is usually O(r polylog(n)). However, known guarantees involve huge constants, in spite of very good performance of the algorithms in practice. In an attempt to reconcile theory with practice, we prove the first guarantees for universal measurements (i.e., which work for all sparse functions) with reasonable constants. For Gaussian measurements, k(r,n) ≲ 11.7 r [1.5 + log(n/r)], which is optimal up to constants. For Fourier measurements, we prove the best known bound k(r,n) = O(r log(n) · log²(r) · log(r log n)), which is optimal within the log log n and log³ r factors. Our arguments are based on the technique of geometric functional analysis and probability in Banach spaces.
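
The convex relaxation in question is basis pursuit, min ‖x‖₁ subject to Ax = b, which becomes a linear program after splitting x into its magnitude bounds. A small sketch with Gaussian measurements (dimensions are illustrative and chosen so that recovery typically succeeds in practice):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, r, k = 128, 4, 40                      # signal length, sparsity, measurements

f = np.zeros(n)
f[rng.choice(n, r, replace=False)] = rng.standard_normal(r)  # r-sparse signal
A = rng.standard_normal((k, n)) / np.sqrt(k)                 # Gaussian measurements
b = A @ f

# min ||x||_1 s.t. Ax = b, as an LP in variables (x, t) with -t <= x <= t.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((k, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)

print(np.max(np.abs(res.x[:n] - f)))      # near zero when recovery succeeds
```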

276 citations


Journal ArticleDOI
TL;DR: This paper examines the asymptotic performance of MUSIC-like algorithms for estimating directions of arrival (DOA) of narrowband complex noncircular sources, using closed-form expressions of the covariance of the asymptotic distribution of different projection matrices to provide a unifying framework for investigating the asymptotic performance of arbitrary subspace-based algorithms.
Abstract: This paper examines the asymptotic performance of MUSIC-like algorithms for estimating directions of arrival (DOA) of narrowband complex noncircular sources. Using closed-form expressions of the covariance of the asymptotic distribution of different projection matrices, it provides a unifying framework for investigating the asymptotic performance of arbitrary subspace-based algorithms valid for Gaussian or non-Gaussian and complex circular or noncircular sources. We also derive different robustness properties from the asymptotic covariance of the estimated DOA given by such algorithms. These results are successively applied to four algorithms: to two attractive MUSIC-like algorithms previously introduced in the literature, to an extension of these algorithms, and to an optimally weighted MUSIC algorithm proposed in this paper. Numerical examples illustrate the performance of the studied algorithms compared to the asymptotically minimum variance (AMV) algorithms introduced as benchmarks.
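
For orientation, the subspace machinery common to all these algorithms is conventional MUSIC: project steering vectors onto the noise subspace of the sample covariance and locate pseudospectrum peaks. The sketch below is the standard circular-source version, not the noncircular extensions analyzed in the paper; the array geometry, SNR, and snapshot count are illustrative:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(3)
M, N = 8, 200                                        # sensors, snapshots
doas = np.deg2rad([-20.0, 35.0])                     # true directions of arrival

def steer(theta):
    """Steering vectors of a half-wavelength-spaced uniform linear array."""
    return np.exp(-1j * np.pi * np.arange(M)[:, None] * np.sin(theta))

S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))   # sources
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = steer(doas) @ S + noise                          # array snapshots

R = X @ X.conj().T / N                               # sample covariance
_, V = np.linalg.eigh(R)                             # eigenvalues ascending
En = V[:, : M - 2]                                   # noise subspace

grid = np.deg2rad(np.linspace(-90, 90, 1801))
p = 1.0 / np.linalg.norm(En.conj().T @ steer(grid), axis=0) ** 2  # pseudospectrum
peaks, _ = find_peaks(p)
print(np.sort(np.rad2deg(grid[peaks[np.argsort(p[peaks])[-2:]]])))  # ~ -20, 35
```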

269 citations


Journal ArticleDOI
TL;DR: An automated algorithm for tissue segmentation of noisy, low-contrast magnetic resonance (MR) images of the brain is presented and the applicability of the framework can be extended to diseased brains and neonatal brains.
Abstract: An automated algorithm for tissue segmentation of noisy, low-contrast magnetic resonance (MR) images of the brain is presented. A mixture model composed of a large number of Gaussians is used to represent the brain image. Each tissue is represented by a large number of Gaussian components to capture the complex tissue spatial layout. The intensity of a tissue is considered a global feature and is incorporated into the model through tying of all the related Gaussian parameters. The expectation-maximization (EM) algorithm is utilized to learn the parameter-tied, constrained Gaussian mixture model. An elaborate initialization scheme is suggested to link the set of Gaussians per tissue type, such that each Gaussian in the set has similar intensity characteristics with minimal overlapping spatial supports. Segmentation of the brain image is achieved by the affiliation of each voxel to the component of the model that maximizes the a posteriori probability. The presented algorithm is used to segment three-dimensional, T1-weighted, simulated and real MR images of the brain into three different tissues, under varying noise conditions. Results are compared with state-of-the-art algorithms in the literature. The algorithm does not use an atlas for initialization or parameter learning. Registration processes are therefore not required and the applicability of the framework can be extended to diseased brains and neonatal brains.
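
Stripped of the parameter tying, spatial features, and initialization scheme, the core mechanism is an EM-fitted Gaussian mixture over intensities with voxels assigned by maximum posterior probability. A toy sketch on synthetic 1-D intensities (the three tissue means below are made up):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Synthetic intensities for three "tissues" (e.g. CSF, gray, white matter).
intensities = np.concatenate([
    rng.normal(60, 8, 3000),     # tissue 1
    rng.normal(110, 10, 5000),   # tissue 2
    rng.normal(160, 9, 4000),    # tissue 3
])[:, None]

gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)  # EM
labels = gmm.predict(intensities)           # MAP affiliation of each voxel

print(np.sort(gmm.means_.ravel()))          # recovered tissue means ~60/110/160
```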

Journal ArticleDOI
TL;DR: Two distinct explicit descriptions of the RKHSs corresponding to Gaussian RBF kernels are given, some consequences are discussed, and an orthonormal basis for these spaces is presented.
Abstract: Although Gaussian radial basis function (RBF) kernels are one of the most often used kernels in modern machine learning methods such as support vector machines (SVMs), little is known about the structure of their reproducing kernel Hilbert spaces (RKHSs). In this work, two distinct explicit descriptions of the RKHSs corresponding to Gaussian RBF kernels are given and some consequences are discussed. Furthermore, an orthonormal basis for these spaces is presented. Finally, it is discussed how the results can be used for analyzing the learning performance of SVMs.
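
One way to see where an explicit description can come from is to expand the kernel itself: on the real line, factoring the Gaussian envelopes out of k(x, y) = exp(−σ²(x − y)²) and expanding exp(2σ²xy) gives a Mercer-type series whose normalized terms are natural candidates for an orthonormal basis. A sketch of that computation (the notation eₙ is chosen here, not taken from the paper):

```latex
e^{-\sigma^2 (x-y)^2}
  = e^{-\sigma^2 x^2}\, e^{2\sigma^2 x y}\, e^{-\sigma^2 y^2}
  = \sum_{n=0}^{\infty} \frac{(2\sigma^2)^n}{n!}\, x^n y^n\,
      e^{-\sigma^2 x^2} e^{-\sigma^2 y^2}
  = \sum_{n=0}^{\infty} e_n(x)\, e_n(y),
\qquad
e_n(t) := \sqrt{\frac{(2\sigma^2)^n}{n!}}\; t^n e^{-\sigma^2 t^2}.
```

Since k(x, y) = Σₙ eₙ(x)eₙ(y), functions of the form Σₙ aₙeₙ with Σₙ aₙ² < ∞ span the RKHS; this is the flavor of explicit description (including the multivariate case) that the paper develops rigorously.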

Journal ArticleDOI
TL;DR: This paper proposes a class of VAD algorithms based on several statistical models: in addition to the Gaussian model, it incorporates the complex Laplacian and Gamma probability density functions into the analysis of statistical properties.
Abstract: One of the key issues in practical speech processing is to achieve robust voice activity detection (VAD) against the background noise. Most of the statistical model-based approaches have tried to employ the Gaussian assumption in the discrete Fourier transform (DFT) domain, which, however, deviates from the real observation. In this paper, we propose a class of VAD algorithms based on several statistical models. In addition to the Gaussian model, we also incorporate the complex Laplacian and Gamma probability density functions into our analysis of statistical properties. With goodness-of-fit tests, we analyze the statistical properties of the DFT spectra of the noisy speech under various noise conditions. Based on the statistical analysis, the likelihood ratio test under the given statistical models is established for the purpose of VAD. Since the statistical characteristics of the speech signal are differently affected by the noise types and levels, to cope with the time-varying environments, our approach is aimed at finding adaptively an appropriate statistical model in an online fashion. The performance of the proposed VAD approaches in both the stationary and nonstationary noise environments is evaluated with the aid of an objective measure.
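
Under the Gaussian model, the per-bin log-likelihood ratio has the classical closed form log Λₖ = γₖξₖ/(1 + ξₖ) − log(1 + ξₖ), with γₖ the a posteriori and ξₖ the a priori SNR. A sketch of that Gaussian branch only (variances and threshold are illustrative; the paper's contribution is to swap in Laplacian or Gamma models and select among them online):

```python
import numpy as np

def gaussian_vad(frame_fft, noise_var, speech_var, eta=0.2):
    """Frame-level likelihood-ratio VAD under the Gaussian DFT model."""
    gamma = np.abs(frame_fft) ** 2 / noise_var        # a posteriori SNR per bin
    xi = speech_var / noise_var                       # a priori SNR per bin
    log_lr = gamma * xi / (1.0 + xi) - np.log1p(xi)   # per-bin log-likelihood ratio
    return log_lr.mean() > eta                        # geometric-mean decision

# Toy frames: unit-variance complex noise, and "speech" with inflated variance.
rng = np.random.default_rng(5)
n_bins = 128
noise_var = np.full(n_bins, 1.0)
speech_var = np.full(n_bins, 4.0)
unit = (rng.standard_normal(n_bins) + 1j * rng.standard_normal(n_bins)) / np.sqrt(2)
print(gaussian_vad(np.sqrt(1 + speech_var) * unit, noise_var, speech_var),  # True
      gaussian_vad(unit, noise_var, speech_var))                            # False
```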

Journal ArticleDOI
TL;DR: In this paper, the authors show that the Hausdorff distance between the estimator and the population identification region, when properly normalized by √n, converges in distribution to the supremum of a Gaussian process whose covariance kernel depends on parameters of the population identification region.
Abstract: We propose inference procedures for partially identified population features for which the population identification region can be written as a transformation of the Aumann expectation of a properly defined set valued random variable (SVRV). An SVRV is a mapping that associates a set (rather than a real number) with each element of the sample space. Examples of population features in this class include sample means and best linear predictors with interval outcome data, and parameters of semiparametric binary models with interval regressor data. We extend the analogy principle to SVRVs, and show that the sample analog estimator of the population identification region is given by a transformation of a Minkowski average of SVRVs. Using the results of the mathematics literature on SVRVs, we show that this estimator converges in probability to the identification region of the model with respect to the Hausdorff distance. We then show that the Hausdorff distance between the estimator and the population identification region, when properly normalized by √n, converges in distribution to the supremum of a Gaussian process whose covariance kernel depends on parameters of the population identification region. We provide consistent bootstrap procedures to approximate this limiting distribution. Using similar arguments as those applied for vector valued random variables, we develop a methodology to test assumptions about the true identification region and to calculate the power of the test. We show that these results can be used to construct a confidence collection, that is, a collection of sets that, when specified as null hypothesis for the true value of the population identification region, cannot be rejected by our test.

Journal ArticleDOI
TL;DR: This is the first time that a fully variational Bayesian treatment for multiclass GP classification has been developed without having to resort to additional explicit approximations to the nongaussian likelihood term.
Abstract: It is well known in the statistics literature that augmenting binary and polychotomous response models with gaussian latent variables enables exact Bayesian analysis via Gibbs sampling from the parameter posterior. By adopting such a data augmentation strategy, dispensing with priors over regression coefficients in favor of gaussian process (GP) priors over functions, and employing variational approximations to the full posterior, we obtain efficient computational methods for GP classification in the multiclass setting. The model augmentation with additional latent variables ensures full a posteriori class coupling while retaining the simple a priori independent GP covariance structure from which sparse approximations, such as multiclass informative vector machines (IVM), emerge in a natural and straightforward manner. This is the first time that a fully variational Bayesian treatment for multiclass GP classification has been developed without having to resort to additional explicit approximations to the nongaussian likelihood term. Empirical comparisons with exact analysis via Markov chain Monte Carlo (MCMC) and with the Laplace approximation illustrate the utility of the variational approximation as a computationally economic alternative to full MCMC; it is also shown to be more accurate than the Laplace approximation.

MonographDOI
01 Jul 2006
TL;DR: This monograph develops the interplay between Markov processes, their local times, and associated Gaussian processes, covering isomorphism theorems and sample path properties of local times.
Abstract: Contents: 1. Introduction; 2. Brownian motion and Ray-Knight theorems; 3. Markov processes and local times; 4. Constructing Markov processes; 5. Basic properties of Gaussian processes; 6. Continuity and boundedness; 7. Moduli of continuity; 8. Isomorphism theorems; 9. Sample path properties of local times; 10. p-Variation; 11. Most visited site; 12. Local times of diffusions; 13. Associated Gaussian processes. Appendices: A. Kolmogorov's theorem for path continuity; B. Bessel processes; C. Analytic sets and the projection theorem; D. Hille-Yosida theorem; E. Stone-Weierstrass theorems; F. Independent random variables; G. Regularly varying functions; H. Some useful inequalities; I. Some linear algebra. References; Index.

Journal ArticleDOI
TL;DR: In this article, the authors discuss the nonlinear propagation of spacecraft trajectory uncertainties via solutions of the Fokker-Planck equation and derive an analytic expression of a nonlinear trajectory solution using a higher-order Taylor series approach.
Abstract: This paper discusses the nonlinear propagation of spacecraft trajectory uncertainties via solutions of the Fokker–Planck equation. We first discuss the solutions of the Fokker–Planck equation for a deterministic system with a Gaussian boundary condition. Next, we derive an analytic expression of a nonlinear trajectory solution using a higher-order Taylor series approach, discuss the region of convergence for the solutions, and apply the result to spacecraft applications. Such applications consist of nonlinear propagation of the mean and covariance matrix, design of statistically correct trajectories, and nonlinear statistical targeting. The two-body and Hill three-body problems are chosen as examples and realistic initial uncertainty models are considered. The results show that the nonlinear map of the trajectory uncertainties can be approximated in an analytic form, and there exists an optimal place to perform a correction maneuver, which is not found using the linear method.
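
The motivation for going beyond linear propagation is visible even on a scalar toy map: push a Gaussian through a cubic nonlinearity by Monte Carlo and compare with the first-order prediction. This is a conceptual illustration only, not the paper's higher-order Taylor-series machinery:

```python
import numpy as np

rng = np.random.default_rng(9)
mu0, var0 = 1.0, 0.2**2

def f(x):
    """Toy nonlinear map standing in for a trajectory flow map."""
    return x + 0.1 * x**3

# Monte Carlo ("truth") propagation of the uncertainty.
samples = f(mu0 + np.sqrt(var0) * rng.standard_normal(200_000))
mc_mean, mc_var = samples.mean(), samples.var()

# First-order (linear) propagation: mean through f, variance scaled by f'(mu)^2.
lin_mean = f(mu0)
lin_var = (1 + 0.3 * mu0**2) ** 2 * var0

print(mc_mean - lin_mean, mc_var - lin_var)   # nonzero: linearization bias
```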

Journal ArticleDOI
TL;DR: This paper addresses the problem of audio source separation with a single sensor, using a statistical model of the sources based on a learning step from samples of each source separately, during which Gaussian scaled mixture models (GSMM) are trained.
Abstract: In this paper, we address the problem of audio source separation with a single sensor, using a statistical model of the sources. The approach is based on a learning step from samples of each source separately, during which we train Gaussian scaled mixture models (GSMM). During the separation step, we derive maximum a posteriori (MAP) and/or posterior mean (PM) estimates of the sources, given the observed audio mixture (Bayesian framework). From the experimental point of view, we test and evaluate the method on real audio examples.

Journal ArticleDOI
TL;DR: In this paper, the authors show that the sample analog estimator of the population identification region is given by a transformation of a Minkowski average of set valued random variables (SVRVs), which is a mapping that associates a set (rather than a real number) with each element of the sample space.
Abstract: We propose inference procedures for partially identified population features for which the population identification region can be written as a transformation of the Aumann expectation of a properly defined set valued random variable (SVRV). An SVRV is a mapping that associates a set (rather than a real number) with each element of the sample space. Examples of population features in this class include interval-identified scalar parameters, best linear predictors with interval outcome data, and parameters of semiparametric binary models with interval regressor data. We extend the analogy principle to SVRVs and show that the sample analog estimator of the population identification region is given by a transformation of a Minkowski average of SVRVs. Using the results of the mathematics literature on SVRVs, we show that this estimator converges in probability to the population identification region with respect to the Hausdorff distance. We then show that the Hausdorff distance and the directed Hausdorff distance between the population identification region and the estimator, when properly normalized by √n, converge in distribution to functions of a Gaussian process whose covariance kernel depends on parameters of the population identification region. We provide consistent bootstrap procedures to approximate these limiting distributions. Using similar arguments as those applied for vector valued random variables, we develop a methodology to test assumptions about the true identification region and its subsets. We show that these results can be used to construct a confidence collection and a directed confidence collection. Those are (respectively) collections of sets that, when specified as a null hypothesis for the true value (a subset of values) of the population identification region, cannot be rejected by our tests.
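
For the simplest case of an interval-identified scalar parameter, the Hausdorff distance is elementary: for nonempty closed intervals [a, b] and [c, d] it equals max(|a − c|, |b − d|), and the test statistic scales it by √n. A tiny sketch with made-up numbers:

```python
import numpy as np

def hausdorff_intervals(i1, i2):
    """Hausdorff distance between closed intervals [a, b] and [c, d]."""
    (a, b), (c, d) = i1, i2
    return max(abs(a - c), abs(b - d))

n = 400
estimate, hypothesized = (0.21, 0.58), (0.20, 0.55)   # identification regions
stat = np.sqrt(n) * hausdorff_intervals(estimate, hypothesized)
# Compare `stat` against bootstrap quantiles of the limiting distribution,
# as in the paper's testing procedure.
print(stat)
```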

Proceedings ArticleDOI
09 Jul 2006
TL;DR: This work presents a method for secrecy extraction from jointly Gaussian random sources that has applications in enhancing security for wireless communications and is closely related to some well known lossy source coding problems.
Abstract: We present a method for secrecy extraction from jointly Gaussian random sources. The approach is motivated by and has applications in enhancing security for wireless communications. The problem is also found to be closely related to some well known lossy source coding problems.

Journal ArticleDOI
TL;DR: A new generalized expectation maximization (GEM) algorithm is proposed, where the missing variables are the scale factors of the GSM densities, and the maximization step of the underlying expectation maximization algorithm is replaced with a linear stationary second-order iterative method.
Abstract: Image deconvolution is formulated in the wavelet domain under the Bayesian framework. The well-known sparsity of the wavelet coefficients of real-world images is modeled by heavy-tailed priors belonging to the Gaussian scale mixture (GSM) class; i.e., priors given by a linear (finite or infinite) combination of Gaussian densities. This class includes, among others, the generalized Gaussian, the Jeffreys, and the Gaussian mixture priors. Necessary and sufficient conditions are stated under which the prior induced by a thresholding/shrinking denoising rule is a GSM. This result is then used to show that the prior induced by the "nonnegative garrote" thresholding/shrinking rule, herein termed the garrote prior, is a GSM. To compute the maximum a posteriori estimate, we propose a new generalized expectation maximization (GEM) algorithm, where the missing variables are the scale factors of the GSM densities. The maximization step of the underlying expectation maximization algorithm is replaced with a linear stationary second-order iterative method. The result is a GEM algorithm of O(NlogN) computational complexity. In a series of benchmark tests, the proposed approach outperforms or performs similarly to state-of-the-art methods, demanding comparable (in some cases, much less) computational complexity.

Journal ArticleDOI
TL;DR: In this paper, the conditional density for the Sharpe ratio optimal weights was derived and the asymptotic distributions of the estimated weights were determined under the assumption that the returns follow a multivariate stationary Gaussian process.
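
For context, the weights in question are the tangency-portfolio weights, proportional to Σ⁻¹(μ − r_f 1); their plug-in estimates inherit sampling error from the estimated moments, which is what the derived distributions quantify. A small sketch of the plug-in estimator on simulated Gaussian returns (all numbers are placeholders):

```python
import numpy as np

rng = np.random.default_rng(6)
mu_true = np.array([0.08, 0.05, 0.10])
Sigma_true = np.array([[0.04, 0.01, 0.00],
                       [0.01, 0.02, 0.01],
                       [0.00, 0.01, 0.05]])
rf = 0.02

R = rng.multivariate_normal(mu_true, Sigma_true, size=240)  # simulated returns

mu_hat, Sigma_hat = R.mean(axis=0), np.cov(R, rowvar=False)
w = np.linalg.solve(Sigma_hat, mu_hat - rf)                 # Sharpe-ratio optimal
w /= w.sum()                                                # fully invested

print(w)   # estimated weights; their sampling distribution is what is derived
```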

Journal ArticleDOI
TL;DR: A Bayesian method for mixture model training is presented that simultaneously treats the feature selection and the model selection problem and can optimize over the number of components, the saliency of the features, and the parameters of the mixture model.
Abstract: We present a Bayesian method for mixture model training that simultaneously treats the feature selection and the model selection problem. The method is based on the integration of a mixture model formulation that takes into account the saliency of the features and a Bayesian approach to mixture learning that can be used to estimate the number of mixture components. The proposed learning algorithm follows the variational framework and can simultaneously optimize over the number of components, the saliency of the features, and the parameters of the mixture model. Experimental results using high-dimensional artificial and real data illustrate the effectiveness of the method.

Journal ArticleDOI
TL;DR: In this paper, a spatial sampling design for prediction of stationary isotropic Gaussian processes with estimated parameters of the covariance function is studied, and several possible design criteria are discussed that incorporate the parameter uncertainty.
Abstract: We study spatial sampling design for prediction of stationary isotropic Gaussian processes with estimated parameters of the covariance function. The key issue is how to incorporate the parameter uncertainty into design criteria to correctly represent the uncertainty in prediction. Several possible design criteria are discussed that incorporate the parameter uncertainty. A simulated annealing algorithm is employed to search for the optimal design of small sample size and a two-step algorithm is proposed for moderately large sample sizes. Simulation results are presented for the Matérn class of covariance functions. An example of redesigning the air monitoring network in EPA Region 5 for monitoring sulfur dioxide is given to illustrate the possible differences our proposed design criterion can make in practice.

Journal ArticleDOI
TL;DR: If the covariance kernel has derivatives up to a desired order and the bandwidth parameter of the kernel is allowed to take arbitrarily small values, it is shown that the posterior distribution is consistent in the L1-distance.
Abstract: Consider binary observations whose response probability is an unknown smooth function of a set of covariates. Suppose that a prior on the response probability function is induced by a Gaussian process mapped to the unit interval through a link function. In this paper we study consistency of the resulting posterior distribution. If the covariance kernel has derivatives up to a desired order and the bandwidth parameter of the kernel is allowed to take arbitrarily small values, we show that the posterior distribution is consistent in the L1-distance. As an auxiliary result to our proofs, we show that, under certain conditions, a Gaussian process assigns positive probabilities to the uniform neighborhoods of a continuous function. This result may be of independent interest in the literature for small ball probabilities of Gaussian processes.

Journal ArticleDOI
TL;DR: A new, simple method for identifying active factors in computer screening experiments is introduced that only requires the generation of a new inert variable in the analysis and uses the posterior distribution of the inert factor as a reference distribution against which the importance of the experimental factors can be assessed.
Abstract: In many situations, simulation of complex phenomena requires a large number of inputs and is computationally expensive. Identifying the inputs that most impact the system so that these factors can be further investigated can be a critical step in the scientific endeavor. In computer experiments, it is common to use a Gaussian spatial process to model the output of the simulator. In this article we introduce a new, simple method for identifying active factors in computer screening experiments. The approach is Bayesian and only requires the generation of a new inert variable in the analysis; however, in the spirit of frequentist hypothesis testing, the posterior distribution of the inert factor is used as a reference distribution against which the importance of the experimental factors can be assessed. The methodology is demonstrated on an application in material science, a computer experiment from the literature, and simulated examples.
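
The same screening idea can be mimicked outside the Bayesian setting: append a column of pure noise to the design, fit a GP with one lengthscale per input, and flag inputs whose fitted relevance clearly beats the inert reference. The sketch below substitutes a maximum-likelihood ARD comparison for the paper's posterior reference distribution, and the test function is invented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(7)
n, d = 60, 4
X = rng.uniform(size=(n, d))
y = np.sin(6 * X[:, 0]) + 2.0 * X[:, 1] ** 2        # only inputs 0 and 1 matter

X_aug = np.hstack([X, rng.uniform(size=(n, 1))])    # append the inert variable

kernel = RBF(length_scale=np.ones(d + 1), length_scale_bounds=(1e-2, 1e3))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_aug, y)

relevance = 1.0 / gp.kernel_.length_scale           # large = active input
print(relevance)  # inputs 0 and 1 stand out; 2 and 3 resemble the inert column
```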

Journal ArticleDOI
TL;DR: An irregularly spaced sampling raster formed from a sequence of low-resolution frames is the input to an image sequence superresolution algorithm whose output is the set of image intensity values at the desired high-resolution image grid.
Abstract: An irregularly spaced sampling raster formed from a sequence of low-resolution frames is the input to an image sequence superresolution algorithm whose output is the set of image intensity values at the desired high-resolution image grid. The method of moving least squares (MLS) in polynomial space has proved to be useful in filtering the noise and approximating scattered data by minimizing a weighted mean-square error norm, but introducing blur in the process. Starting with the continuous version of the MLS, an explicit expression for the filter bandwidth is obtained as a function of the polynomial order of approximation and the standard deviation (scale) of the Gaussian weight function. A discrete implementation of the MLS is performed on images, and the effect of the choice of the two dependent parameters, scale and order, on noise filtering and reduction of blur introduced during the MLS process is studied.
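
At each output location, MLS is a weighted least-squares polynomial fit over the scattered samples, with a Gaussian weight of scale h; the scale and the polynomial order are exactly the two parameters whose trade-off the paper analyzes. A 1-D sketch (data and parameter values are illustrative):

```python
import numpy as np

def mls_eval(x_out, x_s, y_s, order=2, h=0.05):
    """Moving least squares: local weighted polynomial fit at each x_out."""
    out = np.empty_like(x_out)
    for i, x0 in enumerate(x_out):
        w = np.exp(-((x_s - x0) ** 2) / (2 * h**2))          # Gaussian weights
        V = np.vander(x_s - x0, order + 1, increasing=True)  # local poly basis
        coef, *_ = np.linalg.lstsq(V * np.sqrt(w)[:, None],
                                   y_s * np.sqrt(w), rcond=None)
        out[i] = coef[0]              # polynomial value at the expansion point
    return out

rng = np.random.default_rng(8)
x_s = np.sort(rng.uniform(0, 1, 300))                 # irregular sampling raster
y_s = np.sin(2 * np.pi * x_s) + 0.1 * rng.standard_normal(300)
x_grid = np.linspace(0, 1, 100)                       # high-resolution grid
y_hat = mls_eval(x_grid, x_s, y_s)                    # denoised, slightly blurred
```

Shrinking h or raising the order reduces blur but filters less noise, which is the dependence the closed-form bandwidth expression quantifies.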

Journal ArticleDOI
TL;DR: In this paper, fatigue analysis of broad-band Gaussian random processes is discussed, with attention focused on the distribution of rainflow cycles and the fatigue damage under the linear rule, and several spectral methods are reviewed: the narrowband approximation, the Wirsching-Light formula, Dirlik's amplitude density, the Zhao-Baker technique, together with an approach recently developed by the authors, as well as a new (completely empirical) method.
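
The reference point for all the spectral methods is the narrow-band approximation, which needs only the spectral moments λⱼ = ∫ωʲS(ω)dω of the stress PSD: the damage rate under the Palmgren-Miner linear rule is ν₀(2√(2λ₀))^m Γ(1 + m/2)/C with mean upcrossing rate ν₀ = √(λ₂/λ₀)/(2π). A hedged numerical sketch (the PSD shape and the S-N parameters m, C are illustrative):

```python
import numpy as np
from scipy.special import gamma as Gamma
from scipy.integrate import simpson

# Illustrative one-sided stress PSD S(omega) and S-N curve N = C * s**(-m).
omega = np.linspace(0.01, 50, 5000)
S = 1.0 / (1.0 + ((omega - 10.0) / 3.0) ** 4)     # broad-band bump near 10 rad/s
m, C = 3.0, 1e12

lam = [simpson(S * omega**j, x=omega) for j in (0, 1, 2)]  # spectral moments
nu0 = np.sqrt(lam[2] / lam[0]) / (2 * np.pi)               # mean upcrossing rate

# Narrow-band approximation: Rayleigh amplitudes, stress range = 2 * amplitude.
damage_rate = nu0 * (2 * np.sqrt(2 * lam[0])) ** m * Gamma(1 + m / 2) / C
print(damage_rate)   # damage per second; broad-band corrections reduce this
```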

Journal ArticleDOI
TL;DR: An innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian model for the complex SAR backscattered signal, and experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena.
Abstract: In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena.

Dissertation
Malte Kuß
07 Apr 2006
TL;DR: Gaussian process models constitute a class of probabilistic statistical models in which a Gaussian process is used to describe the Bayesian a priori uncertainty about a latent function, and it will be shown how this can be used to estimate value functions.
Abstract: Gaussian process models constitute a class of probabilistic statistical models in which a Gaussian process (GP) is used to describe the Bayesian a priori uncertainty about a latent function. After a brief introduction of Bayesian analysis, Chapter 3 describes the general construction of GP models with the conjugate model for regression as a special case (O'Hagan 1978). Furthermore, it will be discussed how GPs can be interpreted as priors over functions and what beliefs are implicitly represented by this. The conceptual clarity of the Bayesian approach is often in contrast with the practical difficulties that result from its analytically intractable computations. Therefore approximation techniques are of central importance for applied Bayesian analysis. Chapter 4 describes Laplace's method, the Expectation Propagation approximation, and Markov chain Monte Carlo sampling for approximate inference in GP models. The most common and successful application of GP models is in regression problems where the noise is assumed to be homoscedastic and distributed according to a normal distribution. In practical data analysis this assumption is often inappropriate and inference is sensitive to the occurrence of more extreme errors (so-called outliers). Chapter 5 proposes several variants of GP models for robust regression and describes how Bayesian inference can be approximated in each. Experiments on several data sets are presented in which the proposed models are compared with respect to their predictive performance and practical applicability. Gaussian process priors can also be used to define flexible, probabilistic classification models. Again, exact Bayesian inference is analytically intractable and various approximation techniques have been proposed, but no clear picture has yet emerged as to when and why which algorithm should be preferred. Chapter 6 presents a detailed examination of the model, focusing on the question of which approximation technique is most appropriate by investigating the structure of the posterior distribution. An experimental study is presented which corroborates the theoretical insights. Reinforcement learning deals with the problem of how an agent can optimise its behaviour in a sequential decision process such that its utility over time is maximised. Chapter 7 addresses applications of GPs for model-based reinforcement learning in continuous domains. If the environment's response to the agent's actions can be predicted using GP regression models, probabilistic planning and an approximate policy iteration algorithm can be implemented. A core concept in reinforcement learning is the value function, which describes the long-term strategic value of a state. Using GP models we are able to solve an approximate continuous equivalent of the Bellman equations, and it will be shown how this can be used to estimate value functions.