
Showing papers on "Gaussian process published in 1970"



Journal ArticleDOI
TL;DR: The probability density functions of products of independent beta, gamma and central Gaussian random variables are shown to be Meijer G-functions.
Abstract: The probability density functions of products of independent beta, gamma and central Gaussian random variables are shown to be Meijer G-functions. The density function of products of random beta va...
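As an illustration (ours, not the paper's), one well-known special case: the product of two independent standard Gaussians has density K_0(|z|)/pi, a Bessel function that is itself a particular Meijer G-function. A minimal numerical check:

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import trapezoid

# Known special case: Z = X*Y with X, Y independent N(0,1) has density
#   f(z) = K_0(|z|) / pi,
# expressible as a Meijer G-function. K_0 has an integrable logarithmic
# singularity at 0, so integration starts slightly away from it.
def product_normal_pdf(z):
    return k0(np.abs(z)) / np.pi

x = np.linspace(1e-8, 30.0, 400_000)
total = 2.0 * trapezoid(product_normal_pdf(x), x)   # should be ~1

# Monte Carlo cross-check of the mass on |z| < 1
rng = np.random.default_rng(0)
z = rng.standard_normal(500_000) * rng.standard_normal(500_000)
xs = np.linspace(1e-8, 1.0, 50_000)
mass = 2.0 * trapezoid(product_normal_pdf(xs), xs)
print(round(total, 2), round(abs(np.mean(np.abs(z) < 1.0) - mass), 2))
```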

244 citations


Journal ArticleDOI
01 May 1970
TL;DR: Seven applications to linear and nonlinear least-squares estimation, Gaussian and non-Gaussian detection problems, solution of Fredholm integral equations, and the calculation of mutual information, will be described.
Abstract: Given a stochastic process, its innovations process will be defined as a white Gaussian noise process obtained from the original process by a causal and causally invertible transformation. The significance of such a representation, when it exists, is that statistical inference problems based on observation of the original process can be replaced by simpler problems based on white noise observations. Seven applications, to linear and nonlinear least-squares estimation, Gaussian and non-Gaussian detection problems, solution of Fredholm integral equations, and the calculation of mutual information, will be described. The major new results are summarized in seven theorems. Some powerful mathematical tools will be introduced, but emphasis will be placed on the considerable physical significance of the results.
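A minimal discrete-time sketch of the innovations idea, using an AR(1) process (our example, not the paper's): the one-step prediction errors form the white innovations sequence, obtained by a causal and causally invertible transformation of the observed process.

```python
import numpy as np

# For x[t] = a*x[t-1] + w[t], the one-step prediction errors
# e[t] = x[t] - a*x[t-1] recover the white driving noise.
rng = np.random.default_rng(1)
a, n = 0.8, 50_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.standard_normal()

e = x[1:] - a * x[:-1]                       # the innovations
r_x = np.corrcoef(x[:-1], x[1:])[0, 1]       # lag-1 correlation of x (~a)
r_e = np.corrcoef(e[:-1], e[1:])[0, 1]       # lag-1 correlation of e (~0)
print(round(r_x, 2), round(abs(r_e), 2))
```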

229 citations


20 Jan 1970
TL;DR: The aim of this review is to outline the unifying role of reproducing kernel Hilbert spaces (RKHS) in the theory of time series.
Abstract: The theory of time series is studied by probabilists (under such names as Gaussian processes and generalized processes), by statisticians (who are mainly concerned with modelling discrete parameter time series by finite parameter schemes), and by communication and control engineers (who are mainly concerned with the extraction and detection of signals in noise). The aim of this review is to outline the unifying role of reproducing kernel Hilbert spaces (RKHS) in the theory of time series. There are 13 sections, divided into an introduction and four chapters. The chapter headings are the following: Time series and RKHS; Parameter estimation and optimization; Examples of RKHS; and Probability density functionals of normal processes.

117 citations


Proceedings ArticleDOI
01 Dec 1970
TL;DR: In this article, the optimal structure and parameter adaptive estimators have been obtained for continuous as well as discrete data gaussian process models with linear dynamics, and the conditional-error-covariance matrix of the estimator is also obtained in a form suitable for on-line performance evaluation.
Abstract: Optimal structure and parameter adaptive estimators have been obtained for continuous as well as discrete data gaussian process models with linear dynamics. Specifically, the essentially nonlinear adaptive estimators are shown to be decomposable (partition theorem) into two parts, a linear non-adaptive part consisting of a bank of Kalman-Bucy filters, and a nonlinear part that incorporates the learning or adaptive nature of the estimator. The conditional-error-covariance matrix of the estimator is also obtained in a form suitable for on-line performance evaluation. The adaptive estimators are applied to the problem of state-estimation with nongaussian initial state and also to estimation under measurement uncertainty (joint detection-estimation). Examples are given of the application of the proposed adaptive estimators to structure and parameter adaptation indicating their applicability to practical engineering problems.
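A toy sketch of the partition idea (our construction; the scalar model and two-point parameter set are assumptions, not the paper's): a bank of Kalman filters, one per candidate parameter value, plus a nonlinear part that turns each filter's innovations into Bayesian posterior weights.

```python
import numpy as np

# True scalar system: x' = a*x + w, y = x + v, with a unknown to the
# estimator but restricted to a finite candidate set.
rng = np.random.default_rng(2)
a_true, q, r, n = 0.9, 0.1, 0.1, 400
candidates = [0.5, 0.9]

x, ys = 0.0, []
for _ in range(n):
    x = a_true * x + np.sqrt(q) * rng.standard_normal()
    ys.append(x + np.sqrt(r) * rng.standard_normal())

log_w = np.zeros(len(candidates))   # log posterior weights (uniform prior)
xhat = np.zeros(len(candidates))    # one Kalman filter state per candidate
P = np.ones(len(candidates))
for y in ys:
    for i, a in enumerate(candidates):
        xp = a * xhat[i]                 # time update
        Pp = a * a * P[i] + q
        s = Pp + r                       # innovation variance
        nu = y - xp                      # innovation
        log_w[i] += -0.5 * (np.log(2 * np.pi * s) + nu * nu / s)
        g = Pp / s                       # Kalman gain
        xhat[i] = xp + g * nu            # measurement update
        P[i] = (1 - g) * Pp

w = np.exp(log_w - log_w.max())
w /= w.sum()
print(candidates[int(np.argmax(w))])     # posterior concentrates on 0.9
```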

101 citations


Journal ArticleDOI
TL;DR: This is a concise critical survey of the theory and practice relating to the ordered Gaussian elimination on sparse systems and a new method of renumbering by clusters is developed, and its properties described.
Abstract: This is a concise critical survey of the theory and practice relating to the ordered Gaussian elimination on sparse systems. A new method of renumbering by clusters is developed, and its properties described. By establishing a correspondence between matrix patterns and directed graphs, a sequential binary partition is used to decompose the nodes of a graph into clusters. By appropriate ordering of the nodes within each cluster and by selecting clusters, one at a time, both optimal ordering and a useful form of matrix banding are achieved. Some results pertaining to the compatibility between optimal ordering for sparsity and the usual pivoting for numerical accuracy are included.
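A rough modern analogue (reverse Cuthill-McKee stands in for the paper's cluster renumbering): renumbering the nodes of the matrix graph shrinks the bandwidth, which bounds fill-in during Gaussian elimination.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(a):
    # largest distance of any nonzero from the diagonal
    i, j = a.nonzero()
    return int(np.max(np.abs(i - j)))

n = 40
rng = np.random.default_rng(3)
# random sparse symmetric pattern with a nonzero diagonal
m = (rng.random((n, n)) < 0.05).astype(float)
m = np.minimum(m + m.T + np.eye(n), 1.0)
a = csr_matrix(m)

perm = reverse_cuthill_mckee(a, symmetric_mode=True)
a_perm = a[perm][:, perm]              # symmetrically permuted matrix
print(bandwidth(a), '->', bandwidth(a_perm))
```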

90 citations


Journal ArticleDOI
TL;DR: Some zero-one laws are proved for Gaussian processes defined on linear spaces of functions, generalizations of a result for Wiener measure due to R. H. Cameron and R. E. Graves.
Abstract: Some zero-one laws are proved for Gaussian processes defined on linear spaces of functions. They are generalizations of a result for Wiener measure due to R. H. Cameron and R. E. Graves. The proofs exploit an interesting relationship between a Gaussian process and its reproducing kernel Hilbert space. Applications are discussed.

67 citations


Journal ArticleDOI
TL;DR: It is shown that nonsingular detection problems of this form can always be interpreted as problems of the apparently more special "signal-in-noise" type, where the cross-covariance function of the signal and noise must be of a special "one-sided" form.
Abstract: We give a comprehensive discussion of the structure of the likelihood ratio (LR) for discrimination between two Gaussian processes, one of which is white. Several more general problems can be reduced, usually by differentiation, to this form. We shall show that nonsingular detection problems of this form can always be interpreted as problems of the apparently more special "signal-in-noise" type, where the cross-covariance function of the signal and noise must be of a special "one-sided" form. Moreover, the LR for this equivalent problem can be written in the same form as that for known signals in white Gaussian noise, with the causal estimate of the signal process replacing the known signal. This single formula will be shown to be equivalent to a variety of other formulas, including all those previously known. The proofs are based on a resolvent identity and on a representation theorem for second-order processes, both of which have other applications. This paper also contains a discussion of the various stochastic integrals and infinite determinants that arise in Gaussian detection problems
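For orientation, the classical benchmark the paper's formula reduces to (our discrete sketch; the random-signal case replaces s by its causal estimate): for a known signal s in white Gaussian noise of variance r, ln LR(y) = (s.y - 0.5*||s||^2)/r.

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 200, 1.0
s = np.sin(2 * np.pi * np.arange(n) / 20)      # an assumed known signal

def log_lr(y):
    # log likelihood ratio for "signal present" vs "noise only"
    return (s @ y - 0.5 * (s @ s)) / r

y1 = s + np.sqrt(r) * rng.standard_normal(n)   # signal present
y0 = np.sqrt(r) * rng.standard_normal(n)       # noise only
print(log_lr(y1) > 0.0, log_lr(y0) < 0.0)
```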

60 citations


Journal ArticleDOI
TL;DR: The comparison shows that spherically invariant processes are slightly more general than Gaussian compound processes, and a simple expression for the probability distribution is given and some expectation values are calculated.
Abstract: This correspondence discusses the comparison between the class of spherically invariant processes and a particular class of Gaussian compound processes. We give a simple expression for the probability distribution and calculate some expectation values. The comparison shows that spherically invariant processes are slightly more general.
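A small sketch of the compound-Gaussian construction (the chi-square scale law below is an arbitrary choice of ours): multiplying a Gaussian vector by an independent positive random scale yields a spherically invariant vector, whose distribution is unchanged by rotations.

```python
import numpy as np

rng = np.random.default_rng(7)
d, n = 3, 200_000
g = rng.standard_normal((n, d))
s = np.sqrt(rng.chisquare(df=4, size=n) / 4.0)[:, None]   # E[s^2] = 1
x = s * g                                     # compound-Gaussian samples

theta = 0.7                                   # rotate in the first two axes
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
xr = x @ R.T
# rotation-invariant check: component standard deviations agree (~1)
print(round(x[:, 0].std(), 2), round(xr[:, 0].std(), 2))
```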

53 citations


Journal ArticleDOI
TL;DR: In this article, the authors give a proof of Fernique's theorem that a stationary Gaussian process X with sigma^2(h) = E[X(h) - X(0)]^2 has continuous sample paths, with probability one, provided sigma(h) tends to 0 logarithmically fast as h tends to 0.
Abstract: We give a proof of Fernique's theorem that if X is a stationary Gaussian process and sigma^2(h) = E[X(h) - X(0)]^2, then X has continuous sample paths, with probability one, provided that, for some epsilon > 0, sigma(h) <= C[log(1/h)]^(-(1+epsilon)/2) for all sufficiently small h.

52 citations



Journal ArticleDOI
TL;DR: A finite multivariate autoregression is fitted to pre-stimulus data using a step-wise procedure with tests of significance to detect change in biological multivariate stationary processes of a single subject after stimulation.
Abstract: A method is presented for detecting change in biological multivariate stationary processes of a single subject after stimulation. A finite multivariate autoregression is fitted to pre-stimulus data using a step-wise procedure with tests of significance. The fit of the model is also checked by comparing the estimated spectra, phase, and coherence with fitted curves. The statistic which tests for change at a given time point is a quadratic form involving the one-step prediction error vector and the inverse of the one-step prediction error covariance matrix. Under the hypothesis of no change, and for a Gaussian process, these statistics have independent chi-square distributions. The technique has been applied to the detection of change in the brain waves of two human newborn infants following stimulation.
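A minimal sketch of the test statistic (our toy bivariate VAR(1) example with the model assumed known rather than fitted, not the infant data): the quadratic form on the one-step prediction error is chi-square with dim(e) degrees of freedom under the no-change hypothesis.

```python
import numpy as np

# e[t] = x[t] - A x[t-1] is the one-step prediction error of a VAR(1)
# process; d[t] = e[t]' C^{-1} e[t] is chi-square(p) under no change.
rng = np.random.default_rng(5)
p, n = 2, 5000
A = np.array([[0.5, 0.1], [0.0, 0.4]])   # VAR(1) coefficients (assumed known)
C = 0.2 * np.eye(p)                      # innovation covariance (assumed known)
Cinv = np.linalg.inv(C)

noise = rng.multivariate_normal(np.zeros(p), C, size=n)
x = np.zeros((n, p))
for t in range(1, n):
    x[t] = A @ x[t - 1] + noise[t]

e = x[1:] - x[:-1] @ A.T                  # one-step prediction errors
d = np.einsum('ti,ij,tj->t', e, Cinv, e)  # the quadratic-form statistic
print(round(d.mean(), 1))                 # mean of a chi-square(2) is 2
```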

Journal ArticleDOI
TL;DR: A new expression for average mutual information of two Gaussian processes is obtained, extending earlier work of Gel’fand and Yaglom in terms of the covariance and cross-covariance operators of the two processes.
Abstract: A new expression for average mutual information (AMI) of two Gaussian processes is obtained, extending earlier work of Gel’fand and Yaglom. This result is expressed in terms of the covariance and cross-covariance operators of the two processes, while the previous results were stated in terms of projection operators in random variable space. Relations are also obtained between the occurrence of finite or infinite AMI and nonsingular or singular detection for a Gaussian signal imbedded in additive Gaussian noise.



Journal ArticleDOI
TL;DR: Gaussian probability paper is used for estimating the normal range and the conditions discussed under which this method may be expected to give a valid estimate and the chi-square test to evaluate the long-term constancy of clinical laboratory data distribution is considered.
Abstract: Distribution of a quantity, e.g., concentration of a serum constituent, in a typical general hospital population is considered. It is assumed that the distribution within any subpopulation is Gaussian and that adjacent subpopulations overlap somewhat, presenting overlapping Gaussian distributions. Bhattacharya's procedure for resolving such overlapping distributions, based upon differentiation of the Gaussian distribution equation, is applied to the determination of the apparent normal range as well as, in some cases, an abnormal range. Gaussian probability paper is used for estimating the normal range, and the conditions are discussed under which this method may be expected to give a valid estimate. Use of the chi-square test to evaluate the long-term constancy of clinical laboratory data distribution, normal and abnormal, is also considered.
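A sketch of the core of Bhattacharya's procedure on a single simulated Gaussian component (the data, bin width, and fitting range are our choices): successive differences of log bin counts are approximately linear in x, and the fitted line yields the component's mean and standard deviation.

```python
import numpy as np

# For bin counts y_i at centers x_i with width h, the differences
# ln y_{i+1} - ln y_i are approximately linear in x with slope -h/sigma^2
# and intercept h*mu/sigma^2, by differentiating the Gaussian log density.
rng = np.random.default_rng(6)
mu, sigma = 5.0, 1.0
data = rng.normal(mu, sigma, 200_000)
h = 0.2
edges = np.arange(2.0, 8.0 + h / 2, h)
y, edges = np.histogram(data, bins=edges)
centers = (edges[:-1] + edges[1:]) / 2

delta = np.diff(np.log(y))                # successive log-count differences
xmid = (centers[:-1] + centers[1:]) / 2
slope, intercept = np.polyfit(xmid, delta, 1)
sigma_hat = np.sqrt(-h / slope)
mu_hat = -intercept / slope
print(round(mu_hat, 1), round(sigma_hat, 1))
```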


Journal ArticleDOI
TL;DR: Bayes optimal sequential structure and parameter-adaptive pattern-recognition systems for continuous data are derived and adaptive pattern- Recognition systems are shown to be decomposable ("partition theorem") into a linear nonadaptive part consisting of recursive matched Kalman filters.
Abstract: Bayes optimal sequential structure and parameter-adaptive pattern-recognition systems for continuous data are derived. Both off-line (or prior to actual operation) and on-line (while in operation) supervised learning is considered. The concept of structure adaptation is introduced and both structure as well as parameter-adaptive optimal pattern-recognition systems are obtained. Specifically, for the class of supervised-learning pattern-recognition problems with Gaussian process models and linear dynamics, the adaptive pattern-recognition systems are shown to be decomposable ("partition theorem") into a linear nonadaptive part consisting of recursive matched Kalman filters, a nonlinear part--a set of probability computers--that incorporates the adaptive nature of the system, and finally a part of the correlator-estimator (Kailath) form. Extensions of the above results to the M -ary hypotheses case where M \geq 2 are given.

Journal ArticleDOI
01 Apr 1970
TL;DR: In this article, the authors generalized this result to Gaussian processes with continuous paths and obtained such expansions for a Gaussian random variable taking values in an arbitrary separable Banach space.
Abstract: Several authors have recently shown that Brownian motion with continuous paths on (0, 1) can be expanded into a uniformly convergent (a.s.) orthogonal series in terms of a given complete orthonormal system (CONS) in its reproducing kernel Hilbert space (RKHS). In an earlier paper we generalized this result to Gaussian processes with continuous paths. Here we obtain such expansions for a Gaussian random variable taking values in an arbitrary separable Banach space. A related problem is also considered in which, starting from a separable Hilbert space H with a measurable norm ||.|| defined on it, it is shown that the corresponding abstract Wiener process has a ||.||-convergent orthogonal expansion in terms of a CONS chosen from H:

(1.1)  X(t, omega) = sum_{j=1}^infinity xi_j(omega) e_j(t).

Journal ArticleDOI
TL;DR: A method is given for obtaining a closed-form expression for the characteristic function of the average power of a zero-mean Gaussian random process using the class of covariance kernels whose corresponding homogeneous Fredholm integral equations admit reduction to an equivalent linear differential system.
Abstract: A method is given for obtaining a closed-form expression for the characteristic function of the average power, in a T -second interval, of a zero-mean Gaussian random process. The technique, which is constructive in nature, is applicable to the class of covariance kernels whose corresponding homogeneous Fredholm integral equations admit reduction to an equivalent linear differential system. Eigenvalue evaluations are not required.
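For contrast with the paper's eigenvalue-free method, a brute-force sketch (ours) using the eigenvalue form for sampled data: if x ~ N(0, Sigma), the average power P = x'x/n has characteristic function phi(w) = prod_j (1 - 2i*w*lambda_j/n)^(-1/2), with lambda_j the eigenvalues of Sigma.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50
t = np.arange(n)
Sigma = np.exp(-np.abs(t[:, None] - t[None, :]) / 5.0)  # an assumed covariance
lam = np.linalg.eigvalsh(Sigma)

def phi(w):
    # characteristic function of P = x'x/n via the eigenvalues of Sigma
    return np.prod((1.0 - 2j * w * lam / n) ** -0.5)

# Monte Carlo cross-check of phi at one frequency
x = rng.multivariate_normal(np.zeros(n), Sigma, size=200_000)
P = np.mean(x ** 2, axis=1)
w = 0.3
print(round(abs(phi(w) - np.mean(np.exp(1j * w * P))), 3))
```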

Journal ArticleDOI
TL;DR: Gaussian elimination and subsequent back substitution is presented for its implementation to minimize the amount of central computer memory required; provide a more flexible means of manipulating large matrices; and dramatically reduce computer time.
Abstract: The most efficient means of solving most systems of linear equations arising in structural analysis is by Gaussian elimination and subsequent back substitution. A method is presented for its implementation. Direct solutions are obtained with sparse matrix factors which preserve the operations of the Gaussian elimination for repeat solutions. The method together with techniques for its application are graphically described. Its use in many types of engineering problems requiring solutions to systems of linear equations will: (1) Minimize the amount of central computer memory required; (2) provide a more flexible means of manipulating large matrices; and (3) dramatically reduce computer time.
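The same factor-once, reuse-for-repeat-solutions idea in modern sparse-library terms (a sketch, not the paper's implementation): the elimination is performed once, and each repeat solution costs only forward and back substitution.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

n = 1000
# sparse tridiagonal system (a stand-in for a structural-analysis matrix)
a = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
lu = splu(a)                      # Gaussian elimination done once

residuals = []
for k in range(3):                # three "repeat solutions"
    b = np.zeros(n); b[k] = 1.0
    x = lu.solve(b)               # substitution with the stored factors
    residuals.append(np.max(np.abs(a @ x - b)))
print(max(residuals) < 1e-8)
```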

Journal ArticleDOI
W.M. Hubbard1
01 Sep 1970
TL;DR: This letter describes a modified Gaussian approximation to a Poisson distribution which, unlike the usualGaussian approximation, gives good agreement on the tails of the distribution and is therefore useful in error-rate calculations where the usual Gaussian approximation often is not.
Abstract: This letter describes a modified Gaussian approximation to a Poisson distribution which, unlike the usual Gaussian approximation, gives good agreement on the tails of the distribution. It is therefore useful in error-rate calculations where the usual Gaussian approximation often is not.
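The letter's premise can be checked directly (the modified approximation itself is not reproduced here): the usual Gaussian N(m, m) approximation to Poisson(m) badly underestimates the upper tail, which is exactly where error-rate calculations live.

```python
import numpy as np
from scipy.stats import norm, poisson

m = 100
k = 170                              # roughly 7 standard deviations out
exact = poisson.sf(k, m)             # true upper-tail probability
gauss = norm.sf(k, loc=m, scale=np.sqrt(m))
print(exact > 10 * gauss)            # the true tail is far heavier
```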

Proceedings ArticleDOI
01 Dec 1970
TL;DR: In this article, the problem of simultaneous estimation of state and parameters in linear discrete-time dynamic systems is formulated under the assumption that the system parameters are unknown constants and the solution is obtained by iterating between two systems of linear equations.
Abstract: The topic of this paper is the simultaneous estimation of state and parameters in linear discrete-time dynamic systems. The system is subject to a known arbitrary input (control), a random input (additive driving noise), and the output observation is also contaminated by noise. The noises are Gaussian, zero-mean, independent and with known variances. The problem is formulated under the assumption that the system parameters are unknown constants. Previous works in the literature treated this problem assuming that each parameter can take values over a finite set with known a priori probabilities. The proposed scheme yields the maximum a posteriori and maximum likelihood estimates for the system's state and parameters, respectively. They are obtained by solving the likelihood equations, a system of nonlinear equations with the state and parameters as unknowns. Use is made of the fact that the dynamical system considered is linear, and the problem is separated into two interconnected linear problems: one for the state, the other for the parameters. The solution is obtained by iterating between two systems of linear equations. The estimation technique presented is optimal in the following sense: no approximations are involved, and the estimates of the parameters converge to the true values at the fastest possible rate, as given by the Cramér-Rao lower bound, i.e., they are asymptotically efficient. This is proved using a theorem which states that, under certain general conditions, the maximum likelihood estimate with dependent observations is consistent and asymptotically efficient. The problem of uniqueness of the solution is discussed for the case of a scalar unknown parameter. Use is made of a theorem due to Perlman, generalized for the case of dependent observations. Because the estimation-identification is done in the presence of input and output noise and an arbitrary known input, the procedure can be considered an on-line technique.
Since estimates are available after each measurement, this estimation-identification procedure is suited for use in the adaptive control of unknown (or partially known) linear plants.


Journal ArticleDOI
TL;DR: It is found that the effects of dependence on ARE with respect to a parametric test can be offset to some extent by appropriately grouping sample values, and either the form of the dependence must be known or some learning scheme must be applied.
Abstract: This paper investigates the effects of dependence on rank tests, in particular on a class of recently defined nonparametric tests called "mixed" statistical tests. It is shown that the mixed test statistic is asymptotically normal for Gaussian processes with mild regularity properties justifying the use of asymptotic relative efficiency (ARE) as a figure of merit. Results are presented in terms of variations on three well-known statistics--the one-sample Wilcoxon, the two-sample Mann-Whitney, and the Kendall \tau . It is found that the effects of dependence on ARE with respect to a parametric test can be offset to some extent by appropriately grouping sample values. If, however, a constant false-alarm rate is to be attained, either the form of the dependence must be known or some learning scheme must be applied.

Journal ArticleDOI
TL;DR: In this article, the quantum statistics of continuous space time dependent electromagnetic fields are analyzed by means of functionals and a masterequation is derived for the density operator which is a functional of the field operators.
Abstract: The quantum statistics of continuous space time dependent electromagnetic fields is analyzed by means of functionals. The case of a field propagating in a thermal reservoir serves as a simple example to illustrate the succeeding steps: a masterequation is derived for the density operator which is a functional of the field operators. By means of the coherent state representation for continuous fields the masterequation is transformed into a functional differential equation in the function space, spanned by the coherent state amplitudes. This equation is of the Fokker-Planck type and determines a Gaussian process for a continuum of variables or a field. It is solved by determining the characteristics in function space of the associated equation of motion for the characteristic functional and subsequent functional integration. The solution is used to calculate some correlation functions and the spectral function of the field.

Journal ArticleDOI
01 May 1970
TL;DR: This paper forms a basic state-variable model for real and complex random processes and introduces distributed state variables in order to study the detection of doubly spread targets.
Abstract: The central issue in detection theory is that of detecting known or random signals in the presence of random noise. The detailed problem description depends on the physical situation of interest. In most of the original work on detection theory the random processes were modeled as Gaussian processes and characterized by their covariance function. In many cases the solution for the optimum detector is expressed in terms of an integral equation which is difficult to solve. In this paper we demonstrate how to formulate and solve detection theory problems using state-variable techniques. These techniques enable us to obtain complete solutions to almost all problems of interest. In addition, they offer new insights into the problems. We first formulate our basic state-variable model for real and complex random processes. We then study five applications of state-variable techniques. The first application is in the solution of linear homogeneous Fredholm equations. This problem arises whenever we want to find the eigenvalues and eigenfunction of a random process. The second application is the detection of a slowly fluctuating point target in the presence of colored noise. The third application is the detection of Gaussian processes in Gaussian noise. The fourth application is detection of Doppler spread targets and communication over Doppler spread channels. In the final application, we introduce distributed state variables in order to study the detection of doubly spread targets. The goal of the paper is to demonstrate the importance of state-variable techniques in a wide variety of detection theory problems.
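A sketch of the first application, the homogeneous Fredholm eigenproblem, solved here by brute-force discretization rather than by state-variable techniques (our check): the Wiener-process covariance K(s,t) = min(s,t) on [0,1] has known eigenvalues lambda_k = ((k - 1/2)*pi)^(-2).

```python
import numpy as np

# Discretize the integral operator with kernel K(s,t) = min(s,t) on a
# midpoint grid; eigenvalues of K/n approximate the Fredholm eigenvalues.
n = 1000
t = (np.arange(n) + 0.5) / n
K = np.minimum.outer(t, t)
lam = np.sort(np.linalg.eigvalsh(K / n))[::-1]   # descending

analytic = 1.0 / (((np.arange(1, 4) - 0.5) * np.pi) ** 2)
print(np.round(lam[:3], 4), np.round(analytic, 4))
```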



Journal ArticleDOI
TL;DR: In this article, the authors consider a stationary Gaussian process with zero mean, unit variance and continuous covariance function r(t) satisfying, for some ε > 0, a condition ensuring that there is a version of the process whose sample functions are continuous.
Abstract: Let X(t), t ≧ 0, be a stationary Gaussian process with zero mean, unit variance and continuous covariance function r(t). Suppose that r(t) satisfies, for some ε > 0, a condition ensuring that there is a version of the process whose sample functions are continuous [1].