
Showing papers on "Gaussian process published in 1995"


Journal ArticleDOI
TL;DR: The individual Gaussian components of a GMM are shown to represent some general speaker-dependent spectral shapes that are effective for modeling speaker identity, and the GMM is shown to outperform the other speaker modeling techniques on an identical 16 speaker telephone speech task.
Abstract: This paper introduces and motivates the use of Gaussian mixture models (GMM) for robust text-independent speaker identification. The individual Gaussian components of a GMM are shown to represent some general speaker-dependent spectral shapes that are effective for modeling speaker identity. The focus of this work is on applications which require high identification rates using short utterances from unconstrained conversational speech and robustness to degradations produced by transmission over a telephone channel. A complete experimental evaluation of the Gaussian mixture speaker model is conducted on a 49 speaker, conversational telephone speech database. The experiments examine algorithmic issues (initialization, variance limiting, model order selection), spectral variability robustness techniques, large population performance, and comparisons to other speaker modeling techniques (uni-modal Gaussian, VQ codebook, tied Gaussian mixture, and radial basis functions). The Gaussian mixture speaker model attains 96.8% identification accuracy using 5 second clean speech utterances and 80.8% accuracy using 15 second telephone speech utterances with a 49 speaker population and is shown to outperform the other speaker modeling techniques on an identical 16 speaker telephone speech task.

3,134 citations
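The scoring step of such a GMM speaker model can be sketched in a few lines. This is a minimal illustration with diagonal covariances and hypothetical toy speaker models (`gmm_loglik`, `identify`, and all parameter values are ours, not the paper's): each test utterance is assigned to the speaker whose mixture gives the highest average frame log-likelihood.

```python
import numpy as np

def gmm_loglik(X, weights, means, variances):
    """Average per-frame log-likelihood of frames X under a diagonal-covariance GMM.

    X: (T, D) feature frames; weights: (M,); means, variances: (M, D).
    """
    diff = X[:, None, :] - means[None, :, :]                  # (T, M, D)
    log_comp = -0.5 * (np.sum(diff**2 / variances, axis=2)    # Mahalanobis part
                       + np.sum(np.log(2 * np.pi * variances), axis=1))
    log_mix = log_comp + np.log(weights)                      # add log mixture weights
    m = log_mix.max(axis=1, keepdims=True)                    # log-sum-exp for stability
    return float(np.mean(m[:, 0] + np.log(np.exp(log_mix - m).sum(axis=1))))

def identify(X, speaker_models):
    """Pick the speaker whose GMM explains the utterance best."""
    return max(speaker_models, key=lambda s: gmm_loglik(X, *speaker_models[s]))
```

In the paper the mixtures themselves are trained (with safeguards such as variance limiting); here the models are simply given.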


MonographDOI
TL;DR: In this monograph, the authors introduce sample path properties such as boundedness, continuity, and oscillations, as well as measurability, integrability, and absolute continuity, for stable processes on the real line.
Abstract: Stable random variables on the real line Multivariate stable distributions Stable stochastic integrals Dependence structures of multivariate stable distributions Non-linear regression Complex stable stochastic integrals and harmonizable processes Self-similar processes Chentsov random fields Introduction to sample path properties Boundedness, continuity and oscillations Measurability, integrability and absolute continuity Boundedness and continuity via metric entropy Integral representation Historical notes and extensions.

2,611 citations


Proceedings Article
27 Nov 1995
TL;DR: This paper investigates the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations.
Abstract: The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions. In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging problems and have produced excellent results.

1,225 citations
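The "exact predictive analysis via matrix operations" amounts to the standard Gaussian process regression formulas. A minimal numpy sketch for one-dimensional inputs with an RBF covariance (the kernel choice and all hyperparameter values below are illustrative assumptions; the paper's optimization and Hybrid Monte Carlo averaging over hyperparameters are omitted):

```python
import numpy as np

def gp_predict(X, y, Xstar, length_scale=1.0, signal_var=1.0, noise_var=0.1):
    """Exact GP regression predictions with an RBF kernel (names illustrative).

    Returns the predictive mean and variance at test inputs Xstar.
    """
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return signal_var * np.exp(-0.5 * d2 / length_scale**2)

    K = k(X, X) + noise_var * np.eye(len(X))   # noisy training covariance
    Ks = k(Xstar, X)                           # test/train cross-covariance
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    var = signal_var - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var
```

Note how the predictive variance shrinks near the data and reverts to the prior far away, with no iterative training involved.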


Journal ArticleDOI
TL;DR: An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented, and the notion of ideal free traffic is introduced.
Abstract: An abstract model for aggregated connectionless traffic, based on the fractional Brownian motion, is presented. Insight into the parameters is obtained by relating the model to an equivalent burst model. Results on a corresponding storage process are presented. The buffer occupancy distribution is approximated by a Weibull distribution. The model is compared with publicly available samples of real Ethernet traffic. The degree of the short-term predictability of the traffic model is studied through an exact formula for the conditional variance of a future value given the past. The applicability and interpretation of the self-similar model are discussed extensively, and the notion of ideal free traffic is introduced.

800 citations
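Exact fractional Brownian motion sample paths, the building block of such traffic models, can be generated directly from the fBm covariance by a Cholesky factorization. The paper itself is analytical; this O(n³) simulation sketch is our illustration:

```python
import numpy as np

def fbm_cholesky(n, H, rng):
    """Sample n steps of fractional Brownian motion at t = 1..n (Cholesky method).

    Covariance: E[B_H(s) B_H(t)] = 0.5 * (s^{2H} + t^{2H} - |s - t|^{2H}).
    """
    t = np.arange(1, n + 1, dtype=float)
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    L = np.linalg.cholesky(cov)            # exact, but cubic in n
    return L @ rng.standard_normal(n)
```

With Hurst parameter H > 0.5 the increments are positively correlated, giving the long-range dependence observed in Ethernet traces.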


Book
01 Dec 1995
TL;DR: In this book, the comparison method, the double sum method, and the method of moments are used to study limit theorems for the number of high excursions and for maxima of Gaussian processes and fields.
Abstract: Introduction The method of comparison The double sum method The method of moments Limit theorems for the number of high excursions and for maxima of Gaussian processes and fields References.

600 citations


Journal ArticleDOI
TL;DR: The simulation smoother is introduced, which draws from the multivariate posterior distribution of the disturbances of the model, so avoiding the degeneracies inherent in state samplers.
Abstract: Recently suggested procedures for simulating from the posterior density of states given a Gaussian state space time series are refined and extended. We introduce and study the simulation smoother, which draws from the multivariate posterior distribution of the disturbances of the model, so avoiding the degeneracies inherent in state samplers. The technique is important in Gibbs sampling with non-Gaussian time series models, and for performing Bayesian analysis of Gaussian time series.

587 citations


BookDOI
01 Jan 1995
TL;DR: This book discusses Gaussian distributions and random variables, the functional law of the iterated logarithm, and several open problems.
Abstract: Preface. 1: Gaussian distributions and random variables. 2: Multi-dimensional Gaussian distributions. 3: Covariances. 4: Random functions. 5: Examples of Gaussian random functions. 6: Modelling the covariances. 7: Oscillations. 8: Infinite-dimensional Gaussian distributions. 9: Linear functionals, admissible shifts, and the kernel. 10: The most important Gaussian distributions. 11: Convexity and the isoperimetric inequality. 12: The large deviations principle. 13: Exact asymptotics of large deviations. 14: Metric entropy and the comparison principle. 15: Continuity and boundedness. 16: Majorizing measures. 17: The functional law of the iterated logarithm. 18: Small deviations. 19: Several open problems. Comments. References. Subject index. List of basic notations.

459 citations


01 Jan 1995
TL;DR: In this article, the authors generalize the definition of fractional Brownian motion of exponent $H$ to the case where the exponent is no longer a constant, but a function of the time index of the process.
Abstract: We generalize the definition of the fractional Brownian motion of exponent $H$ to the case where $H$ is no longer a constant, but a function of the time index of the process. This allows us to model non-stationary continuous processes, and we show that $H(t)$ and $2-H(t)$ are indeed respectively the local Hölder exponent and the local box and Hausdorff dimension at point $t$. Finally, we propose a simulation method and an estimation procedure for $H(t)$ for our model.

410 citations


Journal ArticleDOI
TL;DR: Standard techniques for improved generalization from neural networks include weight decay and pruning; a comparison is made with results of MacKay using the evidence framework and a gaussian regularizer.
Abstract: Standard techniques for improved generalization from neural networks include weight decay and pruning. Weight decay has a Bayesian interpretation with the decay function corresponding to a prior over weights. The method of transformation groups and maximum entropy suggests a Laplace rather than a gaussian prior. After training, the weights then arrange themselves into two classes: (1) those with a common sensitivity to the data error and (2) those failing to achieve this sensitivity and that therefore vanish. Since the critical value is determined adaptively during training, pruning---in the sense of setting weights to exact zeros---becomes an automatic consequence of regularization alone. The count of free parameters is also reduced automatically as weights are pruned. A comparison is made with results of MacKay using the evidence framework and a gaussian regularizer.

362 citations
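The mechanism by which a Laplace prior produces exact zeros can be illustrated with the associated proximal (soft-threshold) operator: any weight whose magnitude falls below the threshold is set to exactly zero, so pruning is a side effect of regularization. This is a schematic of the general L1 phenomenon, not the paper's training procedure:

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal step for a Laplace (L1) prior with strength lam.

    Weights with |w| <= lam are set to exactly zero; the rest shrink by lam.
    """
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
```

Iterating gradient steps on the data error with this proximal step drives insensitive weights to exact zeros while leaving the important ones (shrunk) in place.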


Journal ArticleDOI
17 Sep 1995
TL;DR: The rate-distortion region is determined in the special case in which one source plays the role of partial side information to reproduce sequences emitted from the other source with a prescribed average distortion level.
Abstract: We consider the problem of separate coding for two correlated memoryless Gaussian sources. We determine the rate-distortion region in the special case in which one source plays the role of partial side information to reproduce sequences emitted from the other source with a prescribed average distortion level. We also derive an explicit outer bound of the rate-distortion region, demonstrating that the inner bound obtained by Berger (1978) partially coincides with the rate-distortion region.

341 citations


Journal ArticleDOI
17 Sep 1995
TL;DR: There is a significant loss between the cases when the agents are allowed to convene and when they are not, and it is established that the distortion decays asymptotically only as R⁻¹.
Abstract: A firm's CEO employs a team of L agents who observe independently corrupted versions of a data sequence {X(t)}, t = 1, 2, .... Let R be the total data rate at which the agents may communicate information about their observations to the CEO. The agents are not allowed to convene. Berger, Zhang and Viswanathan (see ibid., vol.42, no.5, p.887-902, 1996) determined the asymptotic behavior of the minimal error frequency in the limit as L and R tend to infinity for the case in which the source and observations are discrete and memoryless. We consider the same multiterminal source coding problem when {X(t)} is an independent and identically distributed (i.i.d.) Gaussian sequence corrupted by independent Gaussian noise. We study, under quadratic distortion, the rate-distortion tradeoff in the limit as L and R tend to infinity. As in the discrete case, there is a significant loss between the cases when the agents are allowed to convene and when they are not. As L → ∞, if the agents may pool their data before communicating with the CEO, the distortion decays exponentially with the total rate R; this corresponds to the distortion-rate function for an i.i.d. Gaussian source. However, for the case in which they are not permitted to convene, we establish that the distortion decays asymptotically only as R⁻¹.

Book
13 Nov 1995
TL;DR: In this paper, the authors present an approach to Kinetic theory models, including Stochastic Processes, Polymer Dynamics, and Fluid Mechanics, based on the CONNFFESSIT idea.
Abstract: 1 Stochastic Processes, Polymer Dynamics, and Fluid Mechanics.- 1.1 Approach to Kinetic Theory Models.- 1.2 Flow Calculation and Material Functions.- 1.2.1 Shear Flows.- 1.2.2 General Extensional Flows.- 1.2.3 The CONNFFESSIT Idea.- References.- I Stochastic Processes.- 2 Basic Concepts from Stochastics.- 2.1 Events and Probabilities.- 2.1.1 Events and ?-Algebras.- 2.1.2 Probability Axioms.- 2.1.3 Gaussian Probability Measures.- 2.2 Random Variables.- 2.2.1 Definitions and Examples.- 2.2.2 Expectations and Moments.- 2.2.3 Joint Distributions and Independence.- 2.2.4 Conditional Expectations and Probabilities.- 2.2.5 Gaussian Random Variables.- 2.2.6 Convergence of Random Variables.- 2.3 Basic Theory of Stochastic Processes.- 2.3.1 Definitions and Distributions.- 2.3.2 Gaussian Processes.- 2.3.3 Markov Processes.- 2.3.4 Martingales.- References.- 3 Stochastic Calculus.- 3.1 Motivation.- 3.1.1 Naive Approach to Stochastic Differential Equations.- 3.1.2 Criticism of the Naive Approach.- 3.2 Stochastic Integration.- 3.2.1 Definition of the Ito Integral.- 3.2.2 Properties of the Ito Integral.- 3.2.3 Ito's Formula.- 3.3 Stochastic Differential Equations.- 3.3.1 Definitions and Basic Theorems.- 3.3.2 Linear Stochastic Differential Equations.- 3.3.3 Fokker-Planck Equations.- 3.3.4 Mean Field Interactions.- 3.3.5 Boundary Conditions.- 3.3.6 Stratonovich's Stochastic Calculus.- 3.4 Numerical Integration Schemes.- 3.4.1 Euler's Method.- 3.4.2 Mil'shtein's Method.- 3.4.3 Weak Approximation Schemes.- 3.4.4 More Sophisticated Methods.- References.- II Polymer Dynamics.- 4 Bead-Spring Models for Dilute Solutions.- 4.1 Rouse Model.- 4.1.1 Analytical Solution for the Equations of Motion.- 4.1.2 Stress Tensor.- 4.1.3 Material Functions in Shear and Extensional Flows.- 4.1.4 A Primer in Brownian Dynamics Simulations.- 4.1.5 Variance Reduced Simulations.- 4.2 Hydrodynamic Interaction.- 4.2.1 Description of Hydrodynamic Interaction.- 4.2.2 Zimm Model.- 4.2.3 Long Chain Limit 
and Universal Behavior.- 4.2.4 Gaussian Approximation.- 4.2.5 Simulation of Dumbbells.- 4.3 Nonlinear Forces.- 4.3.1 Excluded Volume.- 4.3.2 Finite Polymer Extensibility.- References.- 5 Models with Constraints.- 5.1 General Bead-Rod-Spring Models.- 5.1.1 Philosophy of Constraints.- 5.1.2 Formulation of Stochastic Differential Equations.- 5.1.3 Generalized Coordinates Versus Constraint Conditions.- 5.1.4 Numerical Integration Schemes.- 5.1.5 Stress Tensor.- 5.2 Rigid Rod Models.- 5.2.1 Dilute Solutions of Rod-like Molecules.- 5.2.2 Liquid Crystal Polymers.- References.- 6 Reptation Models for Concentrated Solutions and Melts.- 6.1 Doi-Edwards and Curtiss-Bird Models.- 6.1.1 Polymer Dynamics.- 6.1.2 Stress Tensor.- 6.1.3 Simulations in Steady Shear Flow.- 6.1.4 Efficiency of Simulations.- 6.2 Reptating-Rope Model.- 6.2.1 Basic Model Equations.- 6.2.2 Results for Steady Shear Flow.- 6.3 Modified Reptation Models.- 6.3.1 A Model Related to "Double Reptation".- 6.3.2 Doi-Edwards Model Without Independent Alignment.- References.- Landmark Papers and Books.- Solutions to Exercises.- References.- Author Index.

Proceedings ArticleDOI
09 May 1995
TL;DR: In this paper, a robust variable step size LMS-type algorithm with the attractive property of achieving a small final misadjustment while providing fast convergence at early stages of adaptation is presented.
Abstract: The paper presents a robust variable step size LMS-type algorithm with the attractive property of achieving a small final misadjustment while providing fast convergence at early stages of adaptation. The performance of the algorithm is not affected by the presence of noise. Approximate analysis of convergence and steady state performance for zero-mean stationary Gaussian inputs and a nonstationary optimal weight vector is provided. Simulation results clearly indicate its superior performance for stationary cases. For the nonstationary environment, the algorithm provides performance equivalent to that of the regular LMS algorithm.
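A generic variable step-size LMS sketch conveys the idea; the specific update rule below (step size tracking a smoothed error power, with a normalized update) is our assumption, not the algorithm analyzed in the paper. Adaptation is fast while the error is large, and the misadjustment shrinks as the filter converges:

```python
import numpy as np

def vss_lms(x, d, taps=4, mu_max=0.5, alpha=0.97):
    """Variable step-size LMS sketch (generic rule, illustrative only).

    x: input samples; d: desired samples. Returns final weights and errors.
    """
    w = np.zeros(taps)
    p = 1.0                                    # smoothed squared-error estimate
    errs = []
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]        # current regressor, newest first
        e = d[n] - w @ u
        errs.append(e)
        p = alpha * p + (1 - alpha) * e * e    # track error power
        mu = mu_max * p / (p + 0.01)           # big step early, small near convergence
        w += mu * e * u / (1e-6 + u @ u)       # normalized update for stability
    return w, np.array(errs)
```

Identifying a known FIR channel from noise-free data shows both fast initial convergence and a small final error.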

Book
01 Jan 1995
TL;DR: Non-Gaussian data and probabilistic methods, classes of non-Gaussian processes, simulation of non-Gaussian processes, and the response of linear systems to non-Gaussian inputs.
Abstract: Non-Gaussian data and probabilistic methods, classes of non-Gaussian processes, simulation of non-Gaussian processes, response of linear systems to non-Gaussian inputs.


Journal ArticleDOI
TL;DR: In this article, an L2 integration theory of bounded sure processes with fractional Brownian motion integrators is developed from K. Bichteler's integral extension theory; ordinary semimartingale stochastic calculus is not applicable to fBm's, which have zero quadratic variation.
Abstract: In this article some of the important ideas in ordinary stochastic analysis are applied to fractional Brownian motions (fBm's). First we give a simple and elementary proof of the fact that any fBm has zero quadratic variation. This fact leads to the non-semimartingale structure of fBm's. Another consequence is that we can integrate (in probability) the functionals of fBm's with fBm differentials. With the same integrator, we then develop the L2 integration theory of bounded sure processes based on K. Bichteler's integral extension theory. Finally, we investigate the corresponding stochastic differential equations with fractional Brownian noise.

Journal ArticleDOI
TL;DR: Image subband and discrete cosine transform coefficients are modeled for efficient quantization and noiseless coding and pyramid codes for transform and subband image coding are selected.
Abstract: Image subband and discrete cosine transform coefficients are modeled for efficient quantization and noiseless coding. Quantizers and codes are selected based on Laplacian, fixed generalized Gaussian, and adaptive generalized Gaussian models. The quantizers and codes based on the adaptive generalized Gaussian models are always superior in mean-squared error distortion performance but, generally, by no more than 0.08 bit/pixel, compared with the much simpler Laplacian model-based quantizers and noiseless codes. This provides strong motivation for the selection of pyramid codes for transform and subband image coding.

Journal ArticleDOI
TL;DR: The novel feature of this work is that the width of the signal is also unknown and the test is based on maximising a Gaussian random field in N + 1 dimensions, N dimensions for the location plus one dimension for the width.
Abstract: We suppose that our observations can be decomposed into a fixed signal plus random noise, where the noise is modelled as a particular stationary Gaussian random field in $N$-dimensional Euclidean space. The signal has the form of a known function centered at an unknown location and multiplied by an unknown amplitude, and we are primarily interested in a test to detect such a signal. There are many examples where the signal scale or width is assumed known, and the test is based on maximising a Gaussian random field over all locations in a subset of $N$-dimensional Euclidean space. The novel feature of this work is that the width of the signal is also unknown and the test is based on maximising a Gaussian random field in $N + 1$ dimensions, $N$ dimensions for the location plus one dimension for the width. Two convergent approaches are used to approximate the null distribution: one based on the method of Knowles and Siegmund, which uses a version of Weyl's tube formula for manifolds with boundaries, and the other based on some recent work by Worsley, which uses the Hadwiger characteristic of excursion sets as introduced by Adler. Finally we compare the power of our method with one based on a fixed but perhaps incorrect signal width.

Journal ArticleDOI
TL;DR: Generalized Gaussian and Laplacian source models are compared in discrete cosine transform (DCT) image coding, and with block classification based on AC energy, the densities of the DCT coefficients are much closer to the Laplacian or even the Gaussian.
Abstract: Generalized Gaussian and Laplacian source models are compared in discrete cosine transform (DCT) image coding. A difference in peak signal to noise ratio (PSNR) of at most 0.5 dB is observed for encoding different images. We also compare maximum likelihood estimation of the generalized Gaussian density parameters with a simpler method proposed by Mallat (1989). With block classification based on AC energy, the densities of the DCT coefficients are much closer to the Laplacian or even the Gaussian.
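Mallat's simpler estimation method referred to above is moment matching: for a generalized Gaussian, the ratio E|X|/√(E X²) is a monotone function of the shape parameter, so the shape can be recovered by inverting that ratio. A sketch in this spirit (function names and the bisection bracket are our choices):

```python
import numpy as np
from math import exp, lgamma

def gg_ratio(nu):
    """E|X| / sqrt(E X^2) for a generalized Gaussian with shape nu.

    Equals Gamma(2/nu) / sqrt(Gamma(1/nu) * Gamma(3/nu)); increasing in nu.
    """
    return exp(lgamma(2 / nu) - 0.5 * (lgamma(1 / nu) + lgamma(3 / nu)))

def estimate_shape(x, lo=0.1, hi=4.0):
    """Moment-matching shape estimate: bisection on the monotone ratio."""
    target = np.mean(np.abs(x)) / np.sqrt(np.mean(x**2))
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gg_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The Laplacian corresponds to shape 1 (ratio 1/√2) and the Gaussian to shape 2 (ratio √(2/π)), the two special cases compared in the paper.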

Journal Article
TL;DR: In this paper, three explicit estimators of the bivariate survival function for three types of data are analyzed by proving the analytical and probabilistic conditions necessary for application of the functional delta method.
Abstract: Three explicit estimators of the bivariate survival function for three types of data are analyzed by proving the analytical and probabilistic conditions necessary for application of the functional delta method. It tells us that the estimators converge weakly at √n-rate to a Gaussian process and that the bootstrap works well for estimating their asymptotic distribution. We also prove efficiency of the Dabrowska and Prentice-Cai estimators in the bivariate censoring model under independence.

Journal ArticleDOI
TL;DR: In this paper, small ball probabilities for locally non-deterministic Gaussian processes with stationary increments, a class of processes that includes the fractional Brownian motions, were used to prove Chung type laws of the iterated logarithm.
Abstract: We estimate small ball probabilities for locally nondeterministic Gaussian processes with stationary increments, a class of processes that includes the fractional Brownian motions. These estimates are used to prove Chung type laws of the iterated logarithm.

Journal ArticleDOI
TL;DR: In this article, a new dynamical method with the help of brownian motions and continuous martingales indexed by the square root of the inverse temperature as parameter is introduced, thus formulating the thermodynamic formalism in terms of random processes.
Abstract: We study the fluctuations of free energy, energy and entropy in the high temperature regime for the Sherrington-Kirkpatrick model of spin glasses. We introduce here a new dynamical method with the help of Brownian motions and continuous martingales indexed by the square root of the inverse temperature as parameter, thus formulating the thermodynamic formalism in terms of random processes. The well established technique of stochastic calculus leads us naturally to prove that these fluctuations are simple Gaussian processes with independent increments, a generalization of a result proved by Aizenman, Lebowitz and Ruelle [1].

Journal ArticleDOI
TL;DR: This paper extends Bennett's (1948) integral from scalar to vector quantizers, giving a simple formula that expresses the rth-power distortion of a many-point vector quantizer in terms of the number of points, point density function, inertial profile, and the distribution of the source.
Abstract: This paper extends Bennett's (1948) integral from scalar to vector quantizers, giving a simple formula that expresses the rth-power distortion of a many-point vector quantizer in terms of the number of points, point density function, inertial profile, and the distribution of the source. The inertial profile specifies the normalized moment of inertia of quantization cells as a function of location. The extension is formulated in terms of a sequence of quantizers whose point density and inertial profile approach known functions as the number of points increase. Precise conditions are given for the convergence of distortion (suitably normalized) to Bennett's integral. Previous extensions did not include the inertial profile and, consequently, provided only bounds or applied only to quantizers with congruent cells, such as lattice and optimal quantizers. The new version of Bennett's integral provides a framework for the analysis of suboptimal structured vector quantizers. It is shown how the loss in performance of such quantizers, relative to optimal unstructured ones, can be decomposed into point density and cell shape losses. As examples, these losses are computed for product quantizers and used to gain further understanding of the performance of scalar quantizers applied to stationary, memoryless sources and of transform codes applied to Gaussian sources with memory. It is shown that the short-coming of such quantizers is that they must compromise between point density and cell shapes.
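Bennett-style high-resolution analysis can be sanity-checked in its simplest instance: a uniform scalar quantizer with cell width Δ on a smooth source has mean-squared distortion close to Δ²/12, the normalized moment of inertia of an interval. This numerical sketch illustrates the classical scalar case, not the paper's vector extension:

```python
import numpy as np

def uniform_quantize(x, delta):
    """Midpoint uniform quantizer with cell width delta."""
    return delta * (np.floor(x / delta) + 0.5)
```

Quantizing a uniform source with a fine cell width and comparing the empirical mean-squared error to Δ²/12 shows the high-resolution approximation at work.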

Journal ArticleDOI
17 Sep 1995
TL;DR: It is shown that the use of a Gaussian codebook to compress any ergodic source results in an average distortion which depends on the source via its second moment only, and a sequence of bounds is constructed which are shown to converge to the least distortion achievable in this setup.
Abstract: Using a codebook C, a source sequence is described by the codeword that is closest to it according to the distortion measure d/sub 0/(x,x/spl circ//sub 0/). Based on this description, the source sequence is reconstructed to minimize the reconstruction distortion as measured by d/sub 1/(x,x/spl circ//sub 1/), where, in general, d/sub 1/(x,x/spl circ//sub 1/)/spl ne/d/sub 0/(x,x/spl circ//sub 0/). We study the minimum resulting d/sub 1/(x,x/spl circ//sub 1/)-distortion between the reconstructed sequence and the source sequence as we optimize over the codebook subject to a rate constraint. Using a random coding argument we derive an upper bound on the resulting distortion. Applying this bound to blocks of source symbols we construct a sequence of bounds which are shown to converge to the least distortion achievable in this setup. This solves the rate distortion dual of an open problem related to the capacity of channels with a given decoding rule-the mismatch capacity. Addressing a different kind of mismatch, we also study the mean-squared error description of non-Gaussian sources with random Gaussian codebooks. It is shown that the use of a Gaussian codebook to compress any ergodic source results in an average distortion which depends on the source via its second moment only. The source with a given second moment that is most difficult to describe is the memoryless zero-mean Gaussian source, and it is best described using a Gaussian codebook. Once a Gaussian codebook is used, we show that all sources of a given second moment become equally hard to describe.

Journal ArticleDOI
TL;DR: In this paper, the authors address the empirical bandwidth choice problem in cases where the range of dependence may be virtually arbitrarily long and provide surprising evidence that, even for some strongly dependent data sequences, the asymptotically optimal bandwidth for independent data is a good choice.
Abstract: We address the empirical bandwidth choice problem in cases where the range of dependence may be virtually arbitrarily long. Assuming that the observed data derive from an unknown function of a Gaussian process, it is argued that, unlike more traditional contexts of statistical inference, in density estimation there is no clear role for the classical distinction between short- and long-range dependence. Indeed, the "boundaries" that separate different modes of behaviour for optimal bandwidths and mean squared errors are determined more by kernel order than by traditional notions of strength of dependence, for example, by whether or not the sum of the covariances converges. We provide surprising evidence that, even for some strongly dependent data sequences, the asymptotically optimal bandwidth for independent data is a good choice. A plug-in empirical bandwidth selector based on this observation is suggested. We determine the properties of this choice for a wide range of different strengths of dependence. Properties of cross-validation are also addressed.
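The "asymptotically optimal bandwidth for independent data" that the authors find to remain a good choice can be illustrated with the classical normal-reference plug-in rule and a Gaussian kernel density estimate. This is a textbook sketch, not the paper's selector:

```python
import numpy as np

def silverman_bandwidth(x):
    """Normal-reference plug-in bandwidth for a Gaussian kernel (i.i.d. theory)."""
    n = len(x)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    sigma = min(x.std(ddof=1), iqr / 1.349)    # robust scale estimate
    return 0.9 * sigma * n ** (-0.2)           # h ~ n^{-1/5}

def kde(x, grid, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
```

The paper's point is that the n^(-1/5) rate built into this rule can remain appropriate even for some strongly dependent Gaussian-subordinated sequences.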

Journal ArticleDOI
TL;DR: The authors show that only a discrete subset of filters gives rise to an evolution which can be characterized by means of a partial differential equation.
Abstract: Explores how the functional form of scale space filters is determined by a number of a priori conditions. In particular, if one assumes scale space filters to be linear, isotropic convolution filters, then two conditions (viz. recursivity and scale-invariance) suffice to narrow down the collection of possible filters to a family that essentially depends on one parameter which determines the qualitative shape of the filter. Gaussian filters correspond to one particular value of this shape-parameter. For other values the filters exhibit a more complicated pattern of excitatory and inhibitory regions. This might well be relevant to the study of the neurophysiology of biological visual systems, for recent research shows the existence of extensive disinhibitory regions outside the periphery of the classical center-surround receptive field of LGN and retinal ganglion cells (in cats). Such regions cannot be accounted for by models based on the second order derivative of the Gaussian. Finally, the authors investigate how this work ties in with another axiomatic approach of scale space operators which focuses on the semigroup properties of the operator family. The authors show that only a discrete subset of filters gives rise to an evolution which can be characterized by means of a partial differential equation.
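The recursivity (semigroup) condition that singles out the Gaussian is easy to check numerically: blurring at scale σ₁ and then σ₂ must equal a single blur at scale √(σ₁² + σ₂²). A 1-D sketch with direct convolution and an illustrative truncation radius:

```python
import numpy as np

def gaussian_blur(signal, sigma):
    """1-D Gaussian scale-space filtering by direct convolution.

    The kernel is truncated at roughly 4 sigma, which is ample for this check.
    """
    radius = int(4 * sigma) + 1
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()                      # normalize to preserve the mean
    return np.convolve(signal, kernel, mode="same")
```

Away from the boundaries, two successive unit-scale blurs agree with one blur at scale √2 to numerical precision, which is the semigroup property in action.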

Journal ArticleDOI
TL;DR: It is shown that for up-and-down pulses with random moments of birth τ and random lifetime w determined by a Poisson random measure, when the pulse amplitude e → 0 while the pulse density δ increases to infinity, one obtains a process that is a fractal sum of micropulses.

Journal ArticleDOI
TL;DR: The asymptotic distribution for a mixture of two binomial distributions is studied, and the Kullback-Leibler information is shown to grow stochastically as log k.

Journal ArticleDOI
TL;DR: A theoretical framework for Bayesian adaptive training of the parameters of a discrete hidden Markov model and a semi-continuous HMM with Gaussian mixture state observation densities is presented and the proposed MAP algorithms are shown to be effective especially in the cases in which the training or adaptation data are limited.
Abstract: A theoretical framework for Bayesian adaptive training of the parameters of a discrete hidden Markov model (DHMM) and of a semi-continuous HMM (SCHMM) with Gaussian mixture state observation densities is presented. In addition to formulating the forward-backward MAP (maximum a posteriori) and the segmental MAP algorithms for estimating the above HMM parameters, a computationally efficient segmental quasi-Bayes algorithm for estimating the state-specific mixture coefficients in SCHMM is developed. For estimating the parameters of the prior densities, a new empirical Bayes method based on the moment estimates is also proposed. The MAP algorithms and the prior parameter specification are directly applicable to training speaker adaptive HMMs. Practical issues related to the use of the proposed techniques for HMM-based speaker adaptation are studied. The proposed MAP algorithms are shown to be effective especially in the cases in which the training or adaptation data are limited.
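The essence of MAP adaptation with limited data can be seen in the conjugate-prior update for a single Gaussian mean: the estimate interpolates between the prior mean and the sample mean, weighted by a prior "relevance" count τ. This toy formula is our illustration of the principle, not the paper's forward-backward or segmental MAP algorithms:

```python
import numpy as np

def map_mean(x, mu0, tau):
    """MAP estimate of a Gaussian mean under a conjugate Gaussian prior.

    tau acts as a pseudo-count: with little data the estimate stays near the
    prior mean mu0; with abundant data it approaches the sample mean.
    """
    n = len(x)
    return (tau * mu0 + n * np.mean(x)) / (tau + n)
```

With four adaptation frames and τ = 4, the estimate sits halfway between prior and data, exactly the graceful behavior wanted when adaptation data are scarce.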

Journal ArticleDOI
TL;DR: Two stochastic models are presented that describe the relationship between biomarker process values at random time points, event times, and a vector of covariates that represent the decay of systems over time.
Abstract: We present two stochastic models that describe the relationship between biomarker process values at random time points, event times, and a vector of covariates. In both models the biomarker processes are degradation processes that represent the decay of systems over time. In the first model the biomarker process is a Wiener process whose drift is a function of the covariate vector. In the second model the biomarker process is taken to be the difference between a stationary Gaussian process and a time drift whose drift parameter is a function of the covariates. For both models we present statistical methods for estimation of the regression coefficients. The first model is useful for predicting the residual time from study entry to the time a critical boundary is reached while the second model is useful for predicting the latency time from the infection until the time the presence of the infection is detected. We present our methods principally in the context of conducting inference in a population of HIV infected individuals.