Topic

Gaussian process

About: Gaussian process is a research topic. Over its lifetime, 18,944 publications have been published within this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.


Papers
Journal ArticleDOI
01 Jun 1997
TL;DR: This paper presents a detailed analysis of experimental data, collected at the Osborne Head Gunnery Range with the McMaster University IPIX radar, to test theoretical models developed in the literature.
Abstract: The paper is devoted to a detailed analysis of experimental data, collected at the Osborne Head Gunnery Range with the McMaster University IPIX radar, to test theoretical models developed in the literature. The validity of the compound model has been proven for VV polarisation, both for amplitude and correlation properties. Cross-polarised data also exhibit a compound behaviour but require an additional Gaussian component due to thermal noise. HH data deviate from the K model and seem to be better approximated by a log-normal distribution. These results have been obtained by a correlation test that allows separation of the short and long correlation terms, a modified Kolmogorov-Smirnov test to verify the fitting, and a cumulants-domain analysis to quantify the Gaussian component. The interest of the work lies in its application to successful radar design.
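As a rough illustration of the kind of distribution-fit checking described above (not the paper's modified Kolmogorov-Smirnov procedure), the sketch below fits log-normal and Rayleigh models to synthetic amplitude samples and runs plain KS tests with scipy; the data and parameters are made up.

```python
# Rough illustration only: plain Kolmogorov-Smirnov fit checks with scipy,
# on synthetic amplitudes standing in for sea-clutter data (not IPIX data,
# and not the paper's modified test).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
amplitude = rng.lognormal(mean=0.0, sigma=0.5, size=5000)  # made-up HH-like amplitudes

# Fit a log-normal model (location pinned at 0) and test the fit.
shape, loc, scale = stats.lognorm.fit(amplitude, floc=0.0)
ks_ln = stats.kstest(amplitude, "lognorm", args=(shape, loc, scale))

# Rayleigh amplitudes correspond to purely Gaussian clutter; fit for comparison.
r_loc, r_scale = stats.rayleigh.fit(amplitude, floc=0.0)
ks_ray = stats.kstest(amplitude, "rayleigh", args=(r_loc, r_scale))

# Caveat: estimating parameters from the same data biases these p-values;
# the paper's modified Kolmogorov-Smirnov test is not reproduced here.
print("log-normal fit:", ks_ln.statistic, ks_ln.pvalue)
print("Rayleigh fit:  ", ks_ray.statistic, ks_ray.pvalue)
```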

279 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide large-sample simultaneous confidence bands for S_0, centred at Ŝ_N, using the weak convergence of N^{1/2}{Ŝ_N(t) - S_0(t)}, on a finite interval, to a Gaussian process (a theorem of Breslow & Crowley, 1974), and transform both the time and space axes of the limiting process to achieve a Brownian bridge limit.
Abstract: For arbitrarily right-censored data, the Kaplan-Meier product-limit estimator Ŝ_N provides a nonparametric estimate of the survival function S_0 = 1 - F_0. We provide large-sample simultaneous confidence bands for S_0, centred at Ŝ_N. The derivation uses the weak convergence of N^{1/2}{Ŝ_N(t) - S_0(t)}, on a finite interval, to a Gaussian process, a theorem of Breslow & Crowley (1974), and transforms both the time and space axes of the limiting process to achieve a Brownian bridge limit. Parameters in the transformation are replaced by uniformly consistent estimates to form the bands. The new bands reduce to the well-known Kolmogorov bands in the absence of censoring. Comparisons are made with recent bands of Gillespie & Fisher (1979) and V. N. Nair. The bands are illustrated by imposing some different kinds of random censorship on a set of uncensored data.
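To make the objects involved concrete, here is a minimal sketch, on a made-up right-censored sample, of the Kaplan-Meier estimate Ŝ_N and its Greenwood variance, the pointwise ingredients such band constructions start from; the simultaneous Brownian-bridge critical values of the paper's bands are not reproduced here.

```python
# Minimal sketch: Kaplan-Meier estimate and Greenwood variance for a made-up
# right-censored sample. Pointwise quantities only; the simultaneous bands
# described in the paper need an additional Brownian-bridge critical value.
import numpy as np

times  = np.array([2.0, 3.0, 3.0, 5.0, 7.0, 8.0, 9.0, 12.0])  # observed times
events = np.array([1,   1,   0,   1,   1,   0,   1,   0])     # 1 = event, 0 = censored

# Sort by time; at tied times, process events before censorings (usual convention).
order = np.lexsort((-events, times))
times, events = times[order], events[order]

n = len(times)
at_risk = n - np.arange(n)               # number at risk just before each observation
km = np.cumprod(1.0 - events / at_risk)  # Kaplan-Meier product-limit estimate

# Greenwood's formula for the variance of the Kaplan-Meier estimate.
greenwood = np.cumsum(events / (at_risk * (at_risk - events)))
se = km * np.sqrt(greenwood)

for t, s, e in zip(times, km, se):
    print(f"t = {t:5.1f}   KM = {s:.3f}   se = {e:.3f}")
```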

278 citations

Journal ArticleDOI
TL;DR: This paper revisits the problem of sampling and reconstruction of signals with finite rate of innovation and proposes improved, more robust methods that have better numerical conditioning in the presence of noise and yield more accurate reconstruction.
Abstract: Recently, it was shown that it is possible to develop exact sampling schemes for a large class of parametric nonbandlimited signals, namely certain signals of finite rate of innovation. A common feature of such signals is that they have a finite number of degrees of freedom per unit of time and can be reconstructed from a finite number of uniform samples. In order to prove sampling theorems, Vetterli et al. considered the case of deterministic, noiseless signals and developed algebraic methods that lead to perfect reconstruction. However, when noise is present, many of those schemes can become ill-conditioned. In this paper, we revisit the problem of sampling and reconstruction of signals with finite rate of innovation and propose improved, more robust methods that have better numerical conditioning in the presence of noise and yield more accurate reconstruction. We analyze, in detail, a signal made up of a stream of Diracs and develop algorithmic tools that will be used as a basis in all constructions. While some of the techniques have been already encountered in the spectral estimation framework, we further explore preconditioning methods that lead to improved resolution performance in the case when the signal contains closely spaced components. For classes of periodic signals, such as piecewise polynomials and nonuniform splines, we propose novel algebraic approaches that solve the sampling problem in the Laplace domain, after appropriate windowing. Building on the results for periodic signals, we extend our analysis to finite-length signals and develop schemes based on a Gaussian kernel, which avoid the problem of ill-conditioning by proper weighting of the data matrix. Our methods use structured linear systems and robust algorithmic solutions, which we show through simulation results.
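The sketch below shows the textbook annihilating-filter (Prony-type) step for a periodic stream of K Diracs, with the filter taken from an SVD null vector, one of the standard noise-robust variants; it is not necessarily the paper's exact algorithm, and the signal parameters are invented.

```python
# Textbook annihilating-filter sketch for a periodic stream of K Diracs,
# with an SVD (total-least-squares style) solve for mild noise robustness.
# Not necessarily the paper's exact algorithm; all parameters are made up.
import numpy as np

K = 2                            # number of Diracs
t_true = np.array([0.21, 0.64])  # locations in [0, 1)
a_true = np.array([1.0, 0.5])    # amplitudes

# Low-pass Fourier coefficients X[m] = sum_k a_k exp(-2j*pi*m*t_k), m = -M..M.
M = 4
m = np.arange(-M, M + 1)
X = (a_true[None, :] * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)
X += 1e-4 * np.random.default_rng(1).standard_normal(X.shape)  # small perturbation

# The annihilating filter h (length K+1) satisfies sum_l h[l] X[m-l] = 0.
# Stack those equations and take h as the right singular vector belonging
# to the smallest singular value.
T = np.array([X[i:i + K + 1][::-1] for i in range(len(X) - K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()

# The roots of h are z_k = exp(-2j*pi*t_k), which give the Dirac locations.
z = np.roots(h)
t_est = np.sort(np.mod(-np.angle(z) / (2 * np.pi), 1.0))
print("true locations:", t_true, " estimated:", t_est)
```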

278 citations

Book
09 Jan 1992
TL;DR: A survey of the normal distribution can be found in this book, where the author considers, among others, the following classes of processes: (1) stationary Gaussian processes on a finite interval and (2) processes with stationary independent increments.
Abstract: Sojourn time distributions. Survey of the normal distribution. Stationary Gaussian processes on a finite interval. Processes with stationary independent increments. Diffusion processes. Random walk and birth-and-death processes. Stationary Gaussian processes on a long interval. Central limit theorems. Extremes of Gaussian sequences and diffusion processes. Maximum of a Gaussian process. Other Gaussian sequences and Markov random fields. Processes (X, f(t)) with orthogonally invariant X.
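To make two of the book's central quantities concrete, the following Monte Carlo sketch simulates a stationary Ornstein-Uhlenbeck process (a simple stationary Gaussian process on a finite interval) and estimates its sojourn time above a level and the tail of its maximum; the level, grid, and parameters are illustrative choices only.

```python
# Monte Carlo sketch: sojourn time above a level u and the maximum of a
# stationary Ornstein-Uhlenbeck process on [0, 1], simulated by its exact
# AR(1) recursion. Level, grid and parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 2000, 1000
dt, theta = 1.0 / n_steps, 2.0       # grid step and mean-reversion rate
u = 1.0                              # threshold level

phi = np.exp(-theta * dt)            # exact one-step autocorrelation
sigma_step = np.sqrt(1.0 - phi**2)   # keeps the stationary variance at 1

x = rng.standard_normal(n_paths)     # start in the stationary N(0, 1) law
sojourn = np.zeros(n_paths)
running_max = x.copy()
for _ in range(n_steps):
    x = phi * x + sigma_step * rng.standard_normal(n_paths)
    sojourn += dt * (x > u)
    running_max = np.maximum(running_max, x)

# E[sojourn above u] should be close to 1 - Phi(u), about 0.159 for u = 1.
print("mean sojourn time above u:", sojourn.mean())
print("P(max over [0, 1] exceeds u):", (running_max > u).mean())
```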

277 citations

Proceedings ArticleDOI
22 Mar 2006
TL;DR: This paper proves the best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements; previously known guarantees involved huge constants, in spite of the very good performance of the algorithms in practice.
Abstract: This paper proves the best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements. We consider Fourier measurements (random samples of frequencies of f) and random Gaussian measurements. The method for reconstruction that has recently gained momentum in sparse approximation theory is to relax this highly non-convex problem to a convex problem, and then solve it as a linear program. What the best guarantees are for the reconstruction problem to be equivalent to its convex relaxation is an open question. Recent work shows that the number of measurements k(r,n) needed to exactly reconstruct any r-sparse signal f of length n from its linear measurements with convex relaxation is usually O(r poly log(n)). However, known guarantees involve huge constants, in spite of the very good performance of the algorithms in practice. In an attempt to reconcile theory with practice, we prove the first guarantees for universal measurements (i.e. which work for all sparse functions) with reasonable constants. For Gaussian measurements, k(r,n) ≲ 11.7 r [1.5 + log(n/r)], which is optimal up to constants. For Fourier measurements, we prove the best known bound k(r,n) = O(r log(n) · log^2(r) · log(r log n)), which is optimal within the log log n and log^3 r factors. Our arguments are based on the technique of geometric functional analysis and probability in Banach spaces.
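As a small, self-contained illustration of the convex relaxation mentioned above, the sketch below recovers an r-sparse vector from random Gaussian measurements by solving the L1 problem as a linear program with scipy; the sizes are convenient toy choices, not the k(r,n) bounds proved in the paper.

```python
# Toy sketch of the convex relaxation: recover an r-sparse signal f from
# k random Gaussian measurements by solving
#     min ||x||_1   subject to   A x = A f
# as a linear program with the split x = x_plus - x_minus.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, r, k = 200, 5, 60   # illustrative sizes, not the paper's k(r, n) bound

f = np.zeros(n)
support = rng.choice(n, size=r, replace=False)
f[support] = rng.standard_normal(r)

A = rng.standard_normal((k, n)) / np.sqrt(k)   # Gaussian measurement matrix
b = A @ f

# LP over [x_plus, x_minus] >= 0: minimize 1^T x_plus + 1^T x_minus,
# subject to A x_plus - A x_minus = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("exact recovery (up to solver tolerance):", np.allclose(x_hat, f, atol=1e-4))
```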

276 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (87% related)
Optimization problem: 96.4K papers, 2.1M citations (85% related)
Artificial neural network: 207K papers, 4.5M citations (84% related)
Support vector machine: 73.6K papers, 1.7M citations (82% related)
Deep learning: 79.8K papers, 2.1M citations (82% related)
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    502
2022    1,181
2021    1,132
2020    1,220
2019    1,119
2018    978