
Showing papers on "Gaussian process published in 2000"


Proceedings ArticleDOI
05 Jun 2000
TL;DR: A large improvement in word recognition performance is shown by combining neural-net discriminative feature processing with Gaussian-mixture distribution modeling.
Abstract: Hidden Markov model speech recognition systems typically use Gaussian mixture models to estimate the distributions of decorrelated acoustic feature vectors that correspond to individual subword units. By contrast, hybrid connectionist-HMM systems use discriminatively-trained neural networks to estimate the probability distribution among subword units given the acoustic observations. In this work we show a large improvement in word recognition performance by combining neural-net discriminative feature processing with Gaussian-mixture distribution modeling. By training the network to generate the subword probability posteriors, then using transformations of these estimates as the base features for a conventionally-trained Gaussian-mixture based system, we achieve relative error rate reductions of 35% or more on the multicondition Aurora noisy continuous digits task.

803 citations
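
The tandem pipeline described in the abstract above is easy to prototype. Below is a minimal sketch assuming scikit-learn, with a synthetic four-class dataset standing in for the Aurora acoustic features; taking log-posteriors and decorrelating them with PCA is a stand-in for the paper's exact transformations, not the authors' implementation.

```python
# Sketch of a tandem connectionist/GMM pipeline (toy data stands in for
# acoustic features; the exact post-processing is an assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Step 1: discriminatively trained network estimating class posteriors.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, y)
log_post = np.log(net.predict_proba(X) + 1e-10)   # log-posteriors as base features

# Step 2: decorrelate (PCA here) and model each class with a Gaussian mixture.
feats = PCA(n_components=3, whiten=True).fit_transform(log_post)
gmms = {c: GaussianMixture(2, random_state=0).fit(feats[y == c]) for c in np.unique(y)}

# Classify by maximum per-class GMM log-likelihood.
scores = np.stack([gmms[c].score_samples(feats) for c in sorted(gmms)], axis=1)
print("train accuracy:", (scores.argmax(1) == y).mean())
```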


Journal ArticleDOI
TL;DR: In this paper, a simple and powerful two-step alternative to smoothing spline and kernel methods, implemented via local polynomial smoothing, is proposed for estimating the coefficient functions of functional linear models in longitudinal data analysis.
Abstract: Functional linear models are useful in longitudinal data analysis. They include many classical and recently proposed statistical models for longitudinal data and other functional data. Recently, smoothing spline and kernel methods have been proposed for estimating their coefficient functions nonparametrically but these methods are either intensive in computation or inefficient in performance. To overcome these drawbacks, in this paper, a simple and powerful two-step alternative is proposed. In particular, the implementation of the proposed approach via local polynomial smoothing is discussed. Methods for estimating standard deviations of estimated coefficient functions are also proposed. Some asymptotic results for the local polynomial estimators are established. Two longitudinal data sets, one of which involves time-dependent covariates, are used to demonstrate the approach proposed. Simulation studies show that our two-step approach improves the kernel method proposed by Hoover and co-workers in several aspects such as accuracy, computational time and visual appeal of the estimators.

419 citations
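
The two-step idea above lends itself to a compact illustration: raw cross-sectional estimates at each time point, then a smoothing pass over time. A minimal sketch on simulated data, assuming a common time grid and a hypothetical bandwidth h = 0.1; this is not the authors' implementation.

```python
# Two-step estimator sketch for y_ij = b0(t_j) + b1(t_j) * x_ij + noise.
import numpy as np

rng = np.random.default_rng(0)
tgrid = np.linspace(0, 1, 50)                     # common observation times
n = 100                                           # subjects
x = rng.normal(size=(n, tgrid.size))
b0, b1 = np.sin(2 * np.pi * tgrid), 1 + tgrid**2  # true coefficient functions
y = b0 + b1 * x + rng.normal(scale=0.5, size=x.shape)

# Step 1: raw estimates by cross-sectional least squares at each time point.
raw = np.array([np.polyfit(x[:, j], y[:, j], 1) for j in range(tgrid.size)])
raw_b1 = raw[:, 0]                                # slopes; raw[:, 1] are intercepts

# Step 2: local linear smoothing of the raw estimates over time.
def local_linear(t0, t, z, h=0.1):
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)        # Gaussian kernel weights
    c = np.polyfit(t - t0, z, 1, w=np.sqrt(w))    # weighted local line
    return c[1]                                   # intercept = fitted value at t0

b1_hat = np.array([local_linear(t0, tgrid, raw_b1) for t0 in tgrid])
print("max abs error in b1:", np.abs(b1_hat - b1).max())
```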


Proceedings Article
01 Jan 2000
TL;DR: A simple sparse greedy technique to approximate the maximum a posteriori estimate of Gaussian Processes with much improved scaling behaviour in the sample size m, and shows applications to large scale problems.
Abstract: We present a simple sparse greedy technique to approximate the maximum a posteriori estimate of Gaussian Processes with much improved scaling behaviour in the sample size m. In particular, computational requirements are O(n²m), storage is O(nm), the cost for prediction is O(n) and the cost to compute confidence bounds is O(nm), where n ≪ m. We show how to compute a stopping criterion, give bounds on the approximation error, and show applications to large scale problems.

411 citations
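
A sketch of the sparse greedy idea: build the predictor from an active set of n ≪ m points, growing it greedily. The largest-residual selection rule and RBF kernel below are illustrative assumptions; the paper derives its own greedy criterion, stopping rule, and error bounds.

```python
# Sparse greedy GP approximation sketch (subset-of-regressors form).
import numpy as np

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 500)                        # m = 500 inputs
y = np.sin(6 * X) + 0.1 * rng.normal(size=X.size)
noise = 0.01

def fit(active):
    """Subset-of-regressors MAP weights for the current active set."""
    Kam = rbf(X[active], X)                       # n x m
    A = Kam @ Kam.T + noise * rbf(X[active], X[active])
    return np.linalg.solve(A + 1e-10 * np.eye(len(active)), Kam @ y)

active = [int(np.argmax(np.abs(y)))]
for _ in range(19):                               # grow the active set to n = 20
    resid = np.abs(y - rbf(X, X[active]) @ fit(active))
    resid[active] = -np.inf                       # never re-select a point
    active.append(int(np.argmax(resid)))

pred = rbf(X, X[active]) @ fit(active)
print("n=%d of m=%d, RMSE %.3f" % (len(active), X.size,
                                   np.sqrt(np.mean((y - pred) ** 2))))
```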


01 Jan 2000
TL;DR: This is a tutorial for how to use the MATLAB toolbox WAFO for analysis and simulation of random waves and random fatigue, which represents a considerable development of two earlier toolboxes, the FAT and WAT, for fatigue and wave analysis, respectively.
Abstract: Foreword: This is a tutorial for how to use the MATLAB toolbox WAFO for analysis and simulation of random waves and random fatigue. The toolbox consists of a number of MATLAB m-files together with executable routines from FORTRAN or C++ source, and it requires only a standard MATLAB setup, with no additional toolboxes. A main and unique feature of WAFO is the module of routines for computation of the exact statistical distributions of wave and cycle characteristics in a Gaussian wave or load process. The routines are described in a series of examples on wave data from sea surface measurements and other load sequences. There are also sections for fatigue analysis and for general extreme value analysis. Although the main applications at hand are from marine and reliability engineering, the routines are useful for many other applications of Gaussian and related stochastic processes. The routines are based on algorithms for extreme value and crossing analysis, developed over many years by the authors, as well as many results available in the literature. References are given to the source of the algorithms whenever possible. These references are given in the MATLAB code for all the routines and are also listed in the last section of this tutorial. If a reference is not cited explicitly in the tutorial, it is referred to in one of the MATLAB m-files. Besides the dedicated wave and fatigue analysis routines, the toolbox contains many statistical simulation and estimation routines for general use, and it can therefore be used as a toolbox for statistical work. These routines are listed, but not explicitly explained, in this tutorial. The present toolbox represents a considerable development of two earlier toolboxes, the FAT and WAT toolboxes, for fatigue and wave analysis, respectively. These toolboxes were both Version 1; therefore WAFO has been named Version 2. The routines in the tutorial are tested on WAFO version 2.5, which was made available in beta version in January 2009 and in a stable version in February 2011. The persons who took an active part in creating this tutorial are (in alphabetical order): Per … Many other people have contributed to our understanding of the problems dealt with in this text, first of all Professor Ross Leadbetter at the University of North Carolina at Chapel Hill and Professor Krzysztof Podgórski, Mathematical Statistics, Lund University. We would also like to particularly thank Michel …

403 citations


Posted Content
TL;DR: The spectrum and coherency are useful quantities for characterizing the temporal correlations and functional relations within and between point processes and a known statistical test applies with little modification to point process spectra and is of utility in studying a point process driven by a continuous stimulus.
Abstract: The spectrum and coherency are useful quantities for characterizing the temporal correlations and functional relations within and between point processes. This paper begins with a review of these quantities, their interpretation and how they may be estimated. A discussion of how to assess the statistical significance of features in these measures is included. In addition, new work is presented which builds on the framework established in the review section. This work investigates how the estimates and their error bars are modified by finite sample sizes. Finite sample corrections are derived based on a doubly stochastic inhomogeneous Poisson process model in which the rate functions are drawn from a low variance Gaussian process. It is found that, in contrast to continuous processes, the variance of the estimators cannot be reduced by smoothing beyond a scale which is set by the number of point events in the interval. Alternatively, the degrees of freedom of the estimators can be thought of as bounded from above by the expected number of point events in the interval. Further new work describing and illustrating a method for detecting the presence of a line in a point process spectrum is also presented, corresponding to the detection of a periodic modulation of the underlying rate. This work demonstrates that a known statistical test, applicable to continuous processes, applies, with little modification, to point process spectra, and is of utility in studying a point process driven by a continuous stimulus. While the material discussed is of general applicability to point processes attention will be confined to sequences of neuronal action potentials (spike trains) which were the motivation for this work.

367 citations
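
As a quick illustration of the point-process spectra reviewed above, the sketch below bins a doubly stochastic (inhomogeneous Poisson) spike train and estimates its spectrum with Welch's method from scipy; a 10 Hz rate modulation appears as a spectral line. The multitaper machinery and finite-size corrections from the paper are omitted, so this is only a toy version.

```python
# Spike-train spectrum sketch: a periodic rate modulation shows up as a line.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs, T = 1000.0, 200.0                             # 1 ms bins, 200 s of data
t = np.arange(0, T, 1 / fs)
rate = 20 * (1 + 0.5 * np.sin(2 * np.pi * 10 * t))  # modulated rate (spikes/s)
spikes = rng.uniform(size=t.size) < rate / fs       # Bernoulli approximation

f, Pxx = welch(spikes.astype(float) - spikes.mean(), fs=fs, nperseg=4096)
mask = f > 1                                      # skip the DC region
print("spectral peak at %.2f Hz" % f[mask][np.argmax(Pxx[mask])])
```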


Proceedings ArticleDOI
25 Jun 2000
TL;DR: The real, discrete-time Gaussian parallel relay network is introduced; upper and lower bounds on capacity are presented, and it is explained where they coincide.
Abstract: We introduce the real, discrete-time Gaussian parallel relay network. This simple network is theoretically important in the context of network information theory. We present upper and lower bounds to capacity and explain where they coincide.

362 citations


Journal ArticleDOI
TL;DR: The aim of this work is to obtain the analytical expressions for the output correlation function of a nonlinear device and for the BER performance.
Abstract: Orthogonal frequency-division multiplexing (OFDM) baseband signals may be modeled by complex Gaussian processes with Rayleigh envelope distribution and uniform phase distribution, if the number of carriers is sufficiently large. The output correlation function of instantaneous nonlinear amplifiers and the signal-to-distortion ratio can be derived and expressed in an easy way. As a consequence, the output spectrum and the bit-error rate (BER) performance of OFDM systems in nonlinear additive white Gaussian noise channels are predictable both for uncompensated amplitude modulation/amplitude modulation (AM/AM) and amplitude modulation/phase modulation (AM/PM) distortions and for ideal predistortion. The aim of this work is to obtain the analytical expressions for the output correlation function of a nonlinear device and for the BER performance. The results in closed-form solutions are derived for AM/AM and AM/PM curves approximated by Bessel series expansion and for the ideal predistortion case.

319 citations
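
The Gaussian model underlying the abstract above is easy to verify numerically: with many carriers, OFDM time samples are close to complex Gaussian, so the envelope is close to Rayleigh. A minimal sketch; the soft-limiter AM/AM curve and clipping level are illustrative choices, not the paper's Bessel-series model.

```python
# Numerical check of the Gaussian/Rayleigh model for OFDM, plus a simple
# instantaneous nonlinearity and its Bussgang-style distortion power.
import numpy as np

rng = np.random.default_rng(0)
N = 1024                                          # number of carriers
qpsk = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=(200, N)) / np.sqrt(2)
x = np.fft.ifft(qpsk, axis=1) * np.sqrt(N)        # unit average power per sample

r = np.abs(x).ravel()
sigma2 = np.mean(r**2) / 2
print("E[r]   empirical %.4f  Rayleigh %.4f" % (r.mean(), np.sqrt(np.pi * sigma2 / 2)))
print("E[r^2] empirical %.4f  Rayleigh %.4f" % ((r**2).mean(), 2 * sigma2))

A = 1.5                                           # soft-limiter clipping level
y = np.where(np.abs(x) <= A, x, A * x / np.abs(x))
gain = np.vdot(x, y) / np.vdot(x, x)              # Bussgang linear gain
d = y - gain * x                                  # uncorrelated distortion term
sdr = np.abs(gain)**2 * np.mean(np.abs(x)**2) / np.mean(np.abs(d)**2)
print("signal-to-distortion ratio: %.1f dB" % (10 * np.log10(sdr)))
```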


Journal ArticleDOI
TL;DR: A mean-field algorithm for binary classification with gaussian processes that is based on the TAP approach originally proposed in statistical physics of disordered systems is derived and an approximate leave-one-out estimator for the generalization error is computed.
Abstract: We derive a mean-field algorithm for binary classification with gaussian processes that is based on the TAP approach originally proposed in statistical physics of disordered systems. The theory also yields an approximate leave-one-out estimator for the generalization error, which is computed with no extra computational cost. We show that from the TAP approach, it is possible to derive both a simpler "naive" mean-field theory and support vector machines (SVMs) as limiting cases. For both mean-field algorithms and support vector machines, simulation results for three small benchmark data sets are presented. They show that one may get state-of-the-art performance by using the leave-one-out estimator for model selection, and that the built-in leave-one-out estimators are extremely precise when compared to the exact leave-one-out estimate. The second result is taken as strong support for the internal consistency of the mean-field approach.

268 citations


Journal ArticleDOI
TL;DR: Bayes-optimal binary quantization for the detection of a shift in mean in a pair of dependent Gaussian random variables is studied, and it is seen that in certain situations, an XOR fusion rule is optimal, and in these cases, the implied decision rule is bizarre.
Abstract: Most results about quantized detection rely strongly on an assumption of independence among random variables. With this assumption removed, little is known. Thus, in this paper, Bayes-optimal binary quantization for the detection of a shift in mean in a pair of dependent Gaussian random variables is studied. This is arguably the simplest meaningful problem one could consider. If results and rules are to be found, they ought to make themselves plain in this problem. For certain problem parametrizations (meaning the signals and correlation coefficient), optimal quantization is achievable via a single threshold applied to each observation-the same as under independence. In other cases, one observation is best ignored or is quantized with two thresholds; neither behavior is seen under independence. Further, and again in distinction from the case of independence, it is seen that in certain situations, an XOR fusion rule is optimal, and in these cases, the implied decision rule is bizarre. The analysis is extended to the multivariate Gaussian problem.

251 citations
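
The problem in the abstract above is small enough to study by brute force. The sketch below grids over one threshold per observation and fuses the two bits optimally, computing the Bayes error from bivariate Gaussian cell probabilities; the signal and correlation values are illustrative, and for some settings the optimal decide-H1 cells reproduce the XOR rule the abstract mentions.

```python
# Brute-force binary quantization for a mean-shift test on correlated
# bivariate Gaussian data (equal priors assumed).
import numpy as np
from itertools import product
from scipy.stats import norm, multivariate_normal

rho, s = 0.9, np.array([1.0, 0.25])               # correlation, mean shift under H1
cov = np.array([[1.0, rho], [rho, 1.0]])

def cell_probs(t, mean):
    """Probabilities of the four quantizer cells (b1, b2) given the mean."""
    p00 = multivariate_normal(mean, cov).cdf(t)
    m1, m2 = norm.cdf(t[0] - mean[0]), norm.cdf(t[1] - mean[1])
    return {(0, 0): p00, (0, 1): m1 - p00, (1, 0): m2 - p00,
            (1, 1): 1.0 - m1 - m2 + p00}

best = (1.0, None, None)
for t1, t2 in product(np.linspace(-3, 3, 25), repeat=2):
    p0 = cell_probs(np.array([t1, t2]), np.zeros(2))
    p1 = cell_probs(np.array([t1, t2]), s)
    pe = 0.5 * sum(min(p0[c], p1[c]) for c in p0)  # Bayes error, equal priors
    if pe < best[0]:
        best = (pe, (t1, t2), {c for c in p0 if p1[c] > p0[c]})

print("min error %.4f, thresholds %s, decide-H1 cells %s" % best)
```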


Journal ArticleDOI
TL;DR: Nonparametric statistics for comparing two mean frequency functions and for combining data on recurrent events and death, together with consistent variance estimators, are developed and an application to a cancer clinical trial is provided.
Abstract: This article is concerned with the analysis of recurrent events in the presence of a terminal event such as death. We consider the mean frequency function, defined as the marginal mean of the cumulative number of recurrent events over time. A simple nonparametric estimator for this quantity is presented. It is shown that the estimator, properly normalized, converges weakly to a zero-mean Gaussian process with an easily estimable covariance function. Nonparametric statistics for comparing two mean frequency functions and for combining data on recurrent events and death are also developed. The asymptotic null distributions of these statistics, together with consistent variance estimators, are derived. The small-sample properties of the proposed estimators and test statistics are examined through simulation studies. An application to a cancer clinical trial is provided.

241 citations


Proceedings Article
Volker Tresp
01 Jan 2000
TL;DR: How Gaussian processes - in particular in the form of Gaussian process classification, the support vector machine, and the MGP model - can be used for quantifying the dependencies in graphical models is discussed.
Abstract: We introduce the mixture of Gaussian processes (MGP) model which is useful for applications in which the optimal bandwidth of a map is input dependent. The MGP is derived from the mixture of experts model and can also be used for modeling general conditional probability densities. We discuss how Gaussian processes - in particular in the form of Gaussian process classification, the support vector machine and the MGP model - can be used for quantifying the dependencies in graphical models.

Journal ArticleDOI
TL;DR: The variational methods of Jaakkola and Jordan are applied to Gaussian processes to produce an efficient Bayesian binary classifier.
Abstract: Gaussian processes are a promising nonlinear regression tool, but it is not straightforward to solve classification problems with them. In the paper the variational methods of Jaakkola and Jordan (2000) are applied to Gaussian processes to produce an efficient Bayesian binary classifier.

Proceedings ArticleDOI
05 Jun 2000
TL;DR: A new approach to HDA is presented by defining an objective function which maximizes the class discrimination in the projected subspace while ignoring the rejected dimensions, and it is shown that, under diagonal covariance Gaussian modeling constraints, applying a diagonalizing linear transformation to the HDA space results in increased classification accuracy even though HDA alone actually degrades the recognition performance.
Abstract: Linear discriminant analysis (LDA) is known to be inappropriate for the case of classes with unequal sample covariances. There has been an interest in generalizing LDA to heteroscedastic discriminant analysis (HDA) by removing the equal within-class covariance constraint. This paper presents a new approach to HDA by defining an objective function which maximizes the class discrimination in the projected subspace while ignoring the rejected dimensions. Moreover, we investigate the link between discrimination and the likelihood of the projected samples and show that HDA can be viewed as a constrained ML projection for a full covariance Gaussian model, the constraint being given by the maximization of the projected between-class scatter volume. It is shown that, under diagonal covariance Gaussian modeling constraints, applying a diagonalizing linear transformation (MLLT) to the HDA space results in increased classification accuracy even though HDA alone actually degrades the recognition performance. Experiments performed on the Switchboard and Voicemail databases show a 10%-13% relative improvement in the word error rate over standard cepstral processing.
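A compact numerical illustration of the HDA objective described above: maximize sum_j n_j * (log|T B T'| - log|T W_j T'|) over the projection T, where B is the between-class scatter and W_j the per-class covariance. This objective form is my reading of the abstract, stated as an assumption, and the toy optimization below is not the authors' implementation.

```python
# HDA-style 1-D projection for two classes with unequal covariances.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n1, n2 = 300, 300
X1 = rng.normal(size=(n1, 3)) @ np.diag([2.0, 0.3, 1.0]) + np.array([1, 0, 0])
X2 = rng.normal(size=(n2, 3)) @ np.diag([0.3, 2.0, 1.0]) + np.array([-1, 0, 0])

mu = np.vstack([X1, X2]).mean(0)
B = sum(n * np.outer(m - mu, m - mu)
        for n, m in [(n1, X1.mean(0)), (n2, X2.mean(0))])  # between-class scatter
Ws, ns = [np.cov(X1.T), np.cov(X2.T)], [n1, n2]            # per-class covariances

def neg_hda(theta):
    T = theta.reshape(1, -1)                      # 1-D projection
    ld = lambda M: np.linalg.slogdet(M)[1]
    return -sum(n * (ld(T @ B @ T.T) - ld(T @ W @ T.T)) for n, W in zip(ns, Ws))

res = minimize(neg_hda, rng.normal(size=3), method="Nelder-Mead")
print("HDA direction:", np.round(res.x / np.linalg.norm(res.x), 3))
```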

Journal ArticleDOI
TL;DR: In this article, the geometry of random vibration problems in the space of standard normal random variables obtained from discretization of the input process is investigated and an approximate method for their solution is presented.

Journal ArticleDOI
TL;DR: In this article, a new approximation to the Gaussian likelihood of a multivariate locally stationary process is introduced, based on an approximation of the inverse of the covariance matrix of such processes.
Abstract: A new approximation to the Gaussian likelihood of a multivariate locally stationary process is introduced. It is based on an approximation of the inverse of the covariance matrix of such processes. The new quasi-likelihood is a generalisation of the classical Whittle-likelihood for stationary processes. For parametric models asymptotic normality and efficiency of the resulting estimator are proved. Since the likelihood has a special local structure it can be used for nonparametric inference as well. This is briefly sketched for different estimates.

Journal ArticleDOI
TL;DR: The Cramer-Rao bound on the variance of angle-of-arrival estimates for arbitrary additive, independent, identically distributed, symmetric, non-Gaussian noise is presented, and an expectation-maximization algorithm based on a Gaussian mixture noise model is developed that improves over initial robust estimates and is valid for a wide SNR range.
Abstract: Many approaches have been studied for the array processing problem when the additive noise is modeled with a Gaussian distribution, but these schemes typically perform poorly when the noise is non-Gaussian and/or impulsive. This paper is concerned with maximum likelihood array processing in non-Gaussian noise. We present the Cramer-Rao bound on the variance of angle-of-arrival estimates for arbitrary additive, independent, identically distributed (iid), symmetric, non-Gaussian noise. Then, we focus on non-Gaussian noise modeling with a finite Gaussian mixture distribution, which is capable of representing a broad class of non-Gaussian distributions that include heavy tailed, impulsive cases arising in wireless communications and other applications. Based on the Gaussian mixture model, we develop an expectation-maximization (EM) algorithm for estimating the source locations, the signal waveforms, and the noise distribution parameters. The important problems of detecting the number of sources and obtaining initial parameter estimates for the iterative EM algorithm are discussed in detail. The initialization procedure by itself is an effective algorithm for array processing in impulsive noise. Novel features of the EM algorithm and the associated maximum likelihood formulation include a nonlinear beamformer that separates multiple source signals in non-Gaussian noise and a robust covariance matrix estimate that suppresses impulsive noise while also performing a model-based interpolation to restore the low-rank signal subspace. The EM approach yields improvement over initial robust estimates and is valid for a wide SNR range. The results are also robust to PDF model mismatch and work well with infinite variance cases such as the symmetric stable distributions. Simulations confirm the optimality of the EM estimation procedure in a variety of cases, including a multiuser communications scenario. We also compare with existing array processing algorithms for non-Gaussian noise.
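One ingredient of the approach above, isolated for illustration: EM fitting of a two-component zero-mean Gaussian mixture to impulsive noise samples. The full algorithm jointly estimates source locations and waveforms; this sketch covers only the noise-model E/M steps, on made-up data.

```python
# EM for a two-component zero-mean Gaussian mixture noise model.
import numpy as np

rng = np.random.default_rng(0)
# Impulsive noise: 90% background (std 1), 10% high-variance outliers (std 10).
x = np.where(rng.uniform(size=5000) < 0.9,
             rng.normal(0, 1, 5000), rng.normal(0, 10, 5000))

w, v = np.array([0.5, 0.5]), np.array([0.5, 50.0])  # initial weights, variances
for _ in range(100):
    # E-step: responsibilities of each component for each sample.
    dens = w / np.sqrt(2 * np.pi * v) * np.exp(-0.5 * x[:, None]**2 / v)
    r = dens / dens.sum(1, keepdims=True)
    # M-step: update weights and (zero-mean) component variances.
    w = r.mean(0)
    v = (r * x[:, None]**2).sum(0) / r.sum(0)

print("weights:", np.round(w, 3), " std devs:", np.round(np.sqrt(v), 2))
```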

Journal ArticleDOI
TL;DR: An adaptive algorithm for blind source separation is derived, which is called the multiuser kurtosis (MUK) algorithm, which combines a stochastic gradient update and a Gram-Schmidt orthogonalization procedure in order to satisfy the criterion's whiteness constraints.
Abstract: We consider the problem of recovering blindly (i.e., without the use of training sequences) a number of independent and identically distributed source (user) signals that are transmitted simultaneously through a linear instantaneous mixing channel. The received signals are, hence, corrupted by interuser interference (IUI), and we can model them as the outputs of a linear multiple-input-multiple-output (MIMO) memoryless system. Assuming the transmitted signals to be mutually independent, i.i.d., and to share the same non-Gaussian distribution, a set of necessary and sufficient conditions for the perfect blind recovery (up to scalar phase ambiguities) of all the signals exists and involves the kurtosis as well as the covariance of the output signals. We focus on a straightforward blind constrained criterion stemming from these conditions. From this criterion, we derive an adaptive algorithm for blind source separation, which we call the multiuser kurtosis (MUK) algorithm. At each iteration, the algorithm combines a stochastic gradient update and a Gram-Schmidt orthogonalization procedure in order to satisfy the criterion's whiteness constraints. A performance analysis of its stationary points reveals that the MUK algorithm is free of any stable undesired local stationary points for any number of sources; hence, it is globally convergent to a setting that recovers them all.
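A toy version of the MUK-style iteration described above: kurtosis-gradient updates on prewhitened mixtures, followed by a Gram-Schmidt (QR) re-orthonormalization that enforces the whiteness constraints. The batch form, step size, and BPSK sources are illustrative choices, not the paper's exact per-sample recursion.

```python
# Kurtosis-based blind source separation sketch (sub-Gaussian sources).
import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 20000
S = rng.choice([-1.0, 1.0], size=(n, T))          # i.i.d. BPSK sources (kurt < 0)
A = rng.normal(size=(n, n))                       # unknown instantaneous MIMO channel
X = A @ S

d, E = np.linalg.eigh(np.cov(X))                  # prewhitening transform
M = np.diag(d ** -0.5) @ E.T
Z = M @ X

W = np.linalg.qr(rng.normal(size=(n, n)))[0]      # orthonormal initial separator
mu = 0.05
for _ in range(500):
    Y = W @ Z
    W = W - mu * (Y ** 3) @ Z.T / T               # descend E[y^4] (sources sub-Gaussian)
    W = np.linalg.qr(W.T)[0].T                    # Gram-Schmidt: re-orthonormalize rows

print(np.round(W @ M @ A, 2))                     # global response: ~ signed permutation
```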

Journal ArticleDOI
TL;DR: In this paper, the method-of-moments and maximum likelihood were used to estimate the variograms of data simulated from stationary Gaussian processes and of actual metal concentrations in topsoil in the Swiss Jura, and the resulting variograms were used for kriging.
Abstract: Variograms of soil properties are usually obtained by estimating the variogram for distinct lag classes by the method-of-moments and fitting an appropriate model to the estimates. An alternative is to fit a model by maximum likelihood to data on the assumption that they are a realization of a multivariate Gaussian process. This paper compares the two using both simulation and real data. The method-of-moments and maximum likelihood were used to estimate the variograms of data simulated from stationary Gaussian processes. In one example, where the simulated field was sampled at different intensities, maximum likelihood estimation was consistently more efficient than the method-of-moments, but this result was not general and the relative performance of the methods depends on the form of the variogram. Where the nugget variance was relatively small and the correlation range of the data was large the method-of-moments was at an advantage and likewise in the presence of data from a contaminating distribution. When fields were simulated with positive skew this affected the results of both the method-of-moments and maximum likelihood. The two methods were used to estimate variograms from actual metal concentrations in topsoil in the Swiss Jura, and the variograms were used for kriging. Both estimators were susceptible to sampling problems which resulted in over- or underestimation of the variance of three of the metals by kriging. For four other metals the results for kriging using the variogram obtained by maximum likelihood were consistently closer to the theoretical expectation than the results for kriging with the variogram obtained by the method-of-moments, although the differences between the results using the two approaches were not significantly different from each other or from expectation. Soil scientists should use both procedures in their analysis and compare the results.
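The method-of-moments (Matheron) estimator discussed above is a one-liner per lag class. A minimal sketch on a simulated 1-D Gaussian process; the exponential covariance and lag-class edges are illustrative.

```python
# Method-of-moments semivariogram estimate vs. the true model.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))              # sample locations
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 2)  # exponential covariance, range 2
z = np.linalg.cholesky(C + 1e-10 * np.eye(x.size)) @ rng.normal(size=x.size)

h = np.abs(x[:, None] - x[None, :])               # pairwise lags
g = 0.5 * (z[:, None] - z[None, :]) ** 2          # semivariance of each pair
iu = np.triu_indices(x.size, k=1)
edges = np.linspace(0, 5, 11)                     # lag classes
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (h[iu] >= lo) & (h[iu] < hi)
    if m.any():
        mid = 0.5 * (lo + hi)
        # True semivariogram of C(h) = exp(-h/2): gamma(h) = 1 - exp(-h/2).
        print("lag %4.1f-%4.1f  gamma_hat %.3f  (true %.3f)"
              % (lo, hi, g[iu][m].mean(), 1 - np.exp(-mid / 2)))
```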

Journal ArticleDOI
TL;DR: It is found that an optimal single-stage VQ can operate at approximately 3 bits less than a state-of-the-art LSF-based 2-split VQ.
Abstract: We model the underlying probability density function of vectors in a database as a Gaussian mixture (GM) model. The model is employed for high rate vector quantization analysis and for design of vector quantizers. It is shown that the high rate formulas accurately predict the performance of model-based quantizers. We propose a novel method for optimizing GM model parameters for high rate performance, and an extension to the EM algorithm for densities having bounded support is also presented. The methods are applied to quantization of LPC parameters in speech coding and we present new high rate analysis results for band-limited spectral distortion and outlier statistics. In practical terms, we find that an optimal single-stage VQ can operate at approximately 3 bits less than a state-of-the-art LSF-based 2-split VQ.

Posted Content
TL;DR: In this paper, a scaling limit of the height function on the domino tiling model (dimer model) on simply-connected regions in Z^2 was defined and shown to be a Gaussian process with independent coefficients when expanded in the eigenbasis of the Laplacian.
Abstract: We define a scaling limit of the height function on the domino tiling model (dimer model) on simply-connected regions in Z^2 and show that it is the "massless free field", a Gaussian process with independent coefficients when expanded in the eigenbasis of the Laplacian.

Proceedings Article
29 Jun 2000
Abstract: Keywords: Gaussian process; Nyström approximation.
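
The record above is indexed only by its keywords, but the Nyström approximation it names is standard and easy to sketch: approximate an m x m kernel matrix from n ≪ m sampled columns via K ≈ K_mn K_nn^+ K_nm, which speeds up GP and other kernel computations. The RBF kernel and uniform landmark choice below are illustrative.

```python
# Rank-n Nyström approximation of a kernel matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
rbf = lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :])**2).sum(-1))

idx = rng.choice(len(X), size=50, replace=False)  # n = 50 landmark points
Knn = rbf(X[idx], X[idx])
Kmn = rbf(X, X[idx])
K_approx = Kmn @ np.linalg.pinv(Knn) @ Kmn.T      # Nystroem approximation

K = rbf(X, X)
print("relative Frobenius error: %.4f"
      % (np.linalg.norm(K - K_approx) / np.linalg.norm(K)))
```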

Journal ArticleDOI
TL;DR: In this article, an estimator of the wavelet variance based on the maximal-overlap (undecimated) discrete wavelet transform is derived for a wide class of stochastic processes.
Abstract: Many physical processes are an amalgam of components operating on different scales, and scientific questions about observed data are often inherently linked to understanding the behavior at different scales. We explore time-scale properties of time series through the variance at different scales derived using wavelet methods. The great advantage of wavelet methods over ad hoc modifications of existing techniques is that wavelets provide exact scale-based decomposition results. We consider processes that are stationary, nonstationary but with stationary dth order differences, and nonstationary but with local stationarity. We study an estimator of the wavelet variance based on the maximal-overlap (undecimated) discrete wavelet transform. The asymptotic distribution of this wavelet variance estimator is derived for a wide class of stochastic processes, not necessarily Gaussian or linear. The variance of this distribution is estimated using spectral methods. Simulations confirm the theoretical result...
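The wavelet variance described above is simple to compute for the Haar case. The sketch below implements a Haar maximal-overlap (undecimated) transform by circular filtering and checks the estimates on white noise, where the wavelet variance halves with each level; this is the biased version that keeps boundary coefficients, whereas the paper analyzes an unbiased variant that discards them.

```python
# Haar MODWT wavelet variance by circular filtering.
import numpy as np

def haar_modwt_variance(x, levels):
    v = x.astype(float)
    out = {}
    for j in range(1, levels + 1):
        shift = 2 ** (j - 1)
        w = (v - np.roll(v, shift)) / 2.0         # level-j wavelet coefficients
        v = (v + np.roll(v, shift)) / 2.0         # level-j scaling coefficients
        out[j] = np.mean(w ** 2)                  # wavelet variance at scale 2^(j-1)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=2**14)                        # unit-variance white noise
for j, nu2 in haar_modwt_variance(x, 5).items():
    print("level %d: estimate %.4f  theory %.4f" % (j, nu2, 0.5 ** j))
```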

Journal ArticleDOI
Pierre Siohan, C. Roche
TL;DR: This analytical design method can be used to produce, with a controlled accuracy, filterbanks with practically no upper limitations in the number of coefficients and subbands.
Abstract: A new family of cosine-modulated filterbanks based on functions called extended Gaussian functions (EGFs) is obtained. The design is particularly simple since it is mainly based on a closed-form expression. Nearly perfect reconstruction cosine-modulated filterbanks are obtained as well as guidelines to estimate the filterbank parameters. This analytical design method can be used to produce, with a controlled accuracy, filterbanks with practically no upper limitations in the number of coefficients and subbands. Furthermore, a slight modification of the prototype filter coefficients is sufficient to satisfy exactly the perfect reconstruction constraints. An analysis of the time-frequency localization of the discrete prototype filters also shows that under certain conditions, EGF prototypes are at less than 0.3% from the optimal upper bound.

Proceedings Article
01 Jan 2000
TL;DR: An approach for a sparse representation for Gaussian Process models in order to overcome the limitations of GPs caused by large data sets is developed based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the model.
Abstract: We develop an approach for a sparse representation for Gaussian Process (GP) models in order to overcome the limitations of GPs caused by large data sets. The method is based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the model. Experimental results on toy examples and large real-world data sets indicate the efficiency of the approach.

Journal ArticleDOI
TL;DR: A bound on the rate of convergence in Hellinger distance for density estimation is established using the Gaussian mixture sieve, assuming that the true density is itself a mixture of Gaussians; the underlying mixing measure of the true density is not necessarily assumed to have finite support.
Abstract: Gaussian mixtures provide a convenient method of density estimation that lies somewhere between parametric models and kernel density estimators. When the number of components of the mixture is allowed to increase as sample size increases, the model is called a mixture sieve. We establish a bound on the rate of convergence in Hellinger distance for density estimation using the Gaussian mixture sieve, assuming that the true density is itself a mixture of Gaussians; the underlying mixing measure of the true density is not necessarily assumed to have finite support. Computing the rate involves some delicate calculations since the size of the sieve - as measured by bracketing entropy - and the saturation rate cannot be found using standard methods. When the mixing measure has compact support, using k_n ~ n^(2/3)/(log n)^(1/3) components in the mixture yields a rate of order (log n)^((1+η)/6)/n^(1/6) for every η > 0. The rates depend heavily on the tail behavior of the true density. The sensitivity to the tail behavior is diminished by using a robust sieve which includes a long-tailed component in the mixture.

Journal ArticleDOI
TL;DR: By introducing a direct-sum decomposition principle and determining the statistical mapping between the correlated Nakagami process and a set of Gaussian vectors for its generation, a simple general procedure is derived for the generation of correlated Nakagami channels with arbitrary parameters.
Abstract: Correlated Nakagami m-fading is commonly encountered in wireless communications. Its generation in a laboratory environment is therefore of theoretical and practical importance. However, no generic technique for this purpose is available in the literature. Correlated Rayleigh fading is easy to simulate since it has a simple relationship with a complex Gaussian process. Unfortunately, this is not the case for Nakagami fading. The difficulty lies in that the fading parameter can be a real number and there is no general theory linking a Nakagami vector to a finite set of correlated Gaussian vectors. In this paper, by introducing a direct-sum decomposition principle and determining the statistical mapping between the correlated Nakagami process and a set of Gaussian vectors for its generation, a simple general procedure is derived for the generation of correlated Nakagami channels with arbitrary parameters. A key parameter in the statistical mapping can be determined by using an iterative method. The validity of the new technique is examined through the generation of a correlated Nakagami sequence, as encountered in U.S. digital cellular, and a multibranch vector channel as encountered in diversity reception.
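For integer m the Gaussian-based construction above reduces to a direct sum: the squared envelope is the sum of m squared correlated complex Gaussian branches. A minimal sketch; the AR(1) branch correlation is an illustrative choice, and the paper's statistical mapping for arbitrary real m is not implemented here.

```python
# Correlated Nakagami-m fading from m correlated complex Gaussian branches.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
m, omega, T, rho = 3, 1.0, 100000, 0.95           # integer m, power, length, lag-1 corr

def ar1_cgauss(T):
    """Unit-variance complex Gaussian AR(1) sequence with lag-1 correlation rho."""
    e = (rng.normal(size=T + 500) + 1j * rng.normal(size=T + 500)) \
        * np.sqrt((1 - rho**2) / 2)
    return lfilter([1.0], [1.0, -rho], e)[500:]   # drop start-up transient

# Direct sum of m i.i.d. correlated branches; E[R^2] = omega.
r = np.sqrt(sum(np.abs(ar1_cgauss(T)) ** 2 for _ in range(m)) * omega / m)

print("E[R^2] = %.3f (target %.2f)" % ((r**2).mean(), omega))
print("estimated m = %.2f (target %d)" % ((r**2).mean()**2 / (r**2).var(), m))
print("lag-1 envelope-power correlation: %.2f"
      % np.corrcoef(r[:-1]**2, r[1:]**2)[0, 1])
```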

Proceedings ArticleDOI
05 Jun 2000
TL;DR: It is shown that the increments of mBm exhibit long range dependence under general conditions, and an explicit formula for the covariance structure of mBm is presented, which provides a full characterization of its stochastic properties.
Abstract: Multifractional Brownian motion (mBm) was introduced to overcome certain limitations of the classical fractional Brownian motion (fBm). The major difference between the two processes is that, contrary to fBm, the almost sure Hölder exponent of mBm is allowed to vary along the trajectory, a useful feature when one needs to model processes whose regularity evolves in time, such as Internet traffic or images. Various properties of mBm have been studied in the literature, related to its dimensions or the statistical estimation of its pointwise Hölder regularity. However, the covariance structure of mBm has not been investigated so far. We present in this work an explicit formula for this covariance. Since mBm is a zero mean Gaussian process, this provides a full characterization of its stochastic properties. We report on some applications, including the synthesis problem and the long term structure: in particular, we show that the increments of mBm exhibit long range dependence under general conditions.

Journal ArticleDOI
TL;DR: In this paper, a Gaussian process generalizing the MBM and having a Hölder exponent that can be a "very irregular" function is constructed.
Abstract: It is well known that the fractional Brownian motion (FBM) is of great interest in modeling. However, its Hölder exponent is the same all along its path, and this restricts its field of application. Therefore, it would be useful to construct a Gaussian process extending the FBM and having a Hölder exponent that is allowed to change. A partial answer to this problem is supplied by the multifractional Brownian motion (MBM); but the Hölder exponent of the MBM must necessarily be continuous, and this may be a drawback in some situations. In this paper we construct a Gaussian process generalizing the MBM and having a Hölder exponent that can be a "very irregular" function.

Proceedings ArticleDOI
01 Jul 2000
TL;DR: It is found that, for both a two-dimensional toy problem and a real-world benchmark problem, the variance is a reasonable criterion for both active data selection and test point rejection.
Abstract: We consider active data selection and test point rejection strategies for Gaussian process regression based on the variance of the posterior over target values. Gaussian process regression is viewed as transductive regression that provides target distributions for given points rather than selecting an explicit regression function. Since not only the posterior mean but also the posterior variance are easily calculated we use this additional information to two ends: active data selection is performed by either querying at points of high estimated posterior variance or at points that minimize the estimated posterior variance averaged over the input distribution of interest or (in a transductive manner) averaged over the test set. Test point rejection is performed using the estimated posterior variance as a confidence measure. We find that, for both a two-dimensional toy problem and a real-world benchmark problem, the variance is a reasonable criterion for both active data selection and test point rejection.
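
The variance-based selection strategy above takes only a few lines with a standard GP posterior: compute the posterior variance at candidate points and query its maximizer. A minimal sketch; the RBF kernel, noise level, and toy target function are illustrative choices.

```python
# Active data selection by maximum GP posterior variance.
import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(0)
X = list(rng.uniform(0, 1, 3))                    # small initial design
cand, noise = np.linspace(0, 1, 200), 1e-4

for step in range(10):
    Xa = np.array(X)
    K = rbf(Xa, Xa) + noise * np.eye(len(Xa))
    Ks = rbf(cand, Xa)
    # Posterior variance: k(x,x) - k(x,X) K^{-1} k(X,x).
    var = 1.0 - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
    X.append(float(cand[np.argmax(var)]))         # query the most uncertain point

print("selected inputs:", np.round(sorted(X), 2))
```

As expected for a stationary kernel, the queries spread out to fill the input space, which is the space-filling behavior variance-based selection typically produces when observations are noiseless or nearly so.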