Journal ArticleDOI

Simulating Bessel random variables

Luc Devroye
15 Apr 2002-Statistics & Probability Letters (North-Holland)-Vol. 57, Iss: 3, pp 249-257
TL;DR: In this article, an exact random variate generation algorithm for the Bessel distribution is presented. The expected time of the algorithm is uniformly bounded over all choices of the parameters, and the algorithm avoids any computation of Bessel functions or Bessel ratios.
About: This article is published in Statistics & Probability Letters. The article was published on 2002-04-15 and has received 58 citations to date. The article focuses on the topics: Bessel polynomials & Bessel process.
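The Bessel(ν, a) distribution referenced throughout this page has probability mass function p_n = (a/2)^(2n+ν) / (I_ν(a) n! Γ(n+ν+1)) for n ≥ 0, where I_ν is the modified Bessel function of the first kind. The sketch below is a naive inversion sampler, not Devroye's algorithm (which deliberately avoids Bessel function evaluations and is uniformly fast); the truncation bound `nmax` is an illustrative choice that suffices only for moderate a:

```python
import math

def bessel_pmf(nu, a, nmax=200):
    """Probabilities of the Bessel(nu, a) distribution, with
    p_n proportional to (a/2)^(2n+nu) / (n! * Gamma(n+nu+1)).
    The unnormalized weights sum to I_nu(a) (the series definition of the
    modified Bessel function), so normalizing avoids an explicit I_nu call."""
    log_half_a = math.log(a / 2.0)
    w = [math.exp((2 * n + nu) * log_half_a
                  - math.lgamma(n + 1) - math.lgamma(n + nu + 1))
         for n in range(nmax)]
    total = sum(w)
    return [x / total for x in w]

def sample_bessel(nu, a, u):
    """Inversion: return the smallest n whose CDF value is >= u, u in (0,1)."""
    cum = 0.0
    for n, p in enumerate(bessel_pmf(nu, a)):
        cum += p
        if cum >= u:
            return n
    raise RuntimeError("increase nmax")
```

The sequential search above has expected time growing with the mean of the distribution, which is precisely the cost that the paper's uniformly fast rejection algorithms avoid.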
Citations
Journal ArticleDOI
TL;DR: An explicit representation of the transitions of the Heston stochastic volatility model is derived and used for fast and accurate simulation of the model; the integral of the variance process over an interval is given in terms of infinite sums and mixtures of gamma random variables.
Abstract: We derive an explicit representation of the transitions of the Heston stochastic volatility model and use it for fast and accurate simulation of the model. Of particular interest is the integral of the variance process over an interval, conditional on the level of the variance at the endpoints. We give an explicit representation of this quantity in terms of infinite sums and mixtures of gamma random variables. The increments of the variance process are themselves mixtures of gamma random variables. The representation of the integrated conditional variance applies the Pitman-Yor decomposition of Bessel bridges. We combine this representation with the Broadie-Kaya exact simulation method and use it to circumvent the most time-consuming step in that method.
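The "mixtures of gamma random variables" in this representation are sampled with the standard two-stage recipe: draw a component index from the mixture weights, then draw from the corresponding gamma distribution. The finite mixture below only illustrates that pattern (the paper's representation involves infinite sums, which must be truncated in practice); the weights and shapes are hypothetical:

```python
import random

def sample_gamma_mixture(weights, shapes, scale):
    """Two-stage draw from a finite mixture of gamma distributions:
    pick component j with probability weights[j], then draw
    Gamma(shape=shapes[j], scale=scale)."""
    j = random.choices(range(len(weights)), weights=weights)[0]
    return random.gammavariate(shapes[j], scale)
```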

81 citations


Cites methods from "Simulating bessel random variables"

  • ...Devroye [13] proposed and analyzed fast acceptance-rejection algorithms for sampling Bessel random variables; Iliopoulos et al....


Journal ArticleDOI
TL;DR: In this paper, the basic properties of the Bessel distribution are investigated and links with well-known distributions such as the von Mises-Fisher distribution are described; a simulation scheme for generating Bessel random samples, useful in Bayesian inference and Monte Carlo computation, is also proposed.
Abstract: This article investigates basic properties of the Bessel distribution, a power series distribution which has not been fully explored before. Links with some well-known distributions such as the von Mises-Fisher distribution are described. A simulation scheme is also proposed to generate random samples from the Bessel distribution. This scheme is useful in Bayesian inferences and Monte Carlo computation.

80 citations

Journal Article
TL;DR: An augmentable gamma belief network (GBN) is proposed that factorizes each of its hidden layers into the product of a sparse connection weight matrix and the nonnegative real hidden units of the next layer, in order to infer multilayer deep representations of high-dimensional discrete and nonnegative real vectors.
Abstract: To infer multilayer deep representations of high-dimensional discrete and nonnegative real vectors, we propose an augmentable gamma belief network (GBN) that factorizes each of its hidden layers into the product of a sparse connection weight matrix and the nonnegative real hidden units of the next layer. The GBN's hidden layers are jointly trained with an upward-downward Gibbs sampler that solves each layer with the same subroutine. The gamma-negative binomial process combined with a layer-wise training strategy allows inferring the width of each layer given a fixed budget on the width of the first layer. Example results illustrate interesting relationships between the width of the first layer and the inferred network structure, and demonstrate that the GBN can add more layers to improve its performance in both unsupervisedly extracting features and predicting heldout data. For exploratory data analysis, we extract trees and subnetworks from the learned deep network to visualize how the very specific factors discovered at the first hidden layer and the increasingly more general factors discovered at deeper hidden layers are related to each other, and we generate synthetic data by propagating random variables through the deep network from the top hidden layer back to the bottom data layer.

70 citations

References
Journal ArticleDOI
TL;DR: This chapter reviews the main methods for generating random variables, vectors and processes in non-uniform random variate generation, and provides information on the expected time complexity of various algorithms before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.

3,304 citations


"Simulating Bessel random variables" refers background in this paper

  • ...Poisson variates can be generated in expected time uniformly bounded in the parameters (see Devroye (1986), Hörmann (1993, 1994), Stadlober (1990), and Ahrens and Dieter (1991))....


  • ...See pages 489–493 of Devroye (1986) on how to avoid computing the gamma function in rejection algorithms....


Book
16 Apr 1986
TL;DR: A survey of the main methods in non-uniform random variate generation can be found in this article, where the authors provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes and Markov chain methods.
Abstract: This is a survey of the main methods in non-uniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.

Authors’ address: School of Computer Science, McGill University, 3480 University Street, Montreal, Canada H3A 2K6. The authors’ research was sponsored by NSERC Grant A3456 and FCAR Grant 90-ER-0291.

1. The main paradigms

The purpose of this chapter is to review the main methods for generating random variables, vectors and processes. Classical workhorses such as the inversion method, the rejection method and table methods are reviewed in section 1. In section 2, we discuss the expected time complexity of various algorithms, and give a few examples of the design of generators that are uniformly fast over entire families of distributions. In section 3, we develop a few universal generators, such as generators for all log concave distributions on the real line. Section 4 deals with random variate generation when distributions are indirectly specified, e.g., via Fourier coefficients, characteristic functions, the moments, the moment generating function, distributional identities, infinite series or Kolmogorov measures. Random processes are briefly touched upon in section 5. Finally, the latest developments in Markov chain methods are discussed in section 6. Some of this work grew from Devroye (1986a), and we are carefully documenting work that was done since 1986. More recent references can be found in the book by Hörmann, Leydold and Derflinger (2004).

Non-uniform random variate generation is concerned with the generation of random variables with certain distributions. Such random variables are often discrete, taking values in a countable set, or absolutely continuous, and thus described by a density. The methods used for generating them depend upon the computational model one is working with, and upon the demands on the part of the output. For example, in a RAM (random access memory) model, one accepts that real numbers can be stored and operated upon (compared, added, multiplied, and so forth) in one time unit. Furthermore, this model assumes that a source capable of producing an i.i.d. (independent identically distributed) sequence of uniform [0, 1] random variables is available. This model is of course unrealistic, but designing random variate generators based on it has several advantages: first of all, it allows one to disconnect the theory of non-uniform random variate generation from that of uniform random variate generation, and secondly, it permits one to plan for the future, as more powerful computers will be developed that permit ever better approximations of the model. Algorithms designed under finite approximation limitations will have to be redesigned when the next generation of computers arrives.

For the generation of discrete or integer-valued random variables, which includes the vast area of the generation of random combinatorial structures, one can adhere to a clean model, the pure bit model, in which each bit operation takes one time unit, and storage can be reported in terms of bits. Typically, one now assumes that an i.i.d. sequence of independent perfect bits is available. In this model, an elegant information-theoretic theory can be derived. For example, Knuth and Yao (1976) showed that to generate a random integer X described by the probability distribution P{X = n} = pn, n ≥ 1, any method must use an expected number of bits greater than the binary entropy of the distribution, ∑
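The rejection method named among the classical paradigms can be sketched generically: given a constant c with target(x) ≤ c·proposal(x) everywhere, accept a proposal draw x with probability target(x)/(c·proposal(x)); the expected number of iterations is c. The half-normal target with an Exp(1) envelope below is a standard textbook illustration, not code from the survey:

```python
import math
import random

def rejection_sample(target_pdf, proposal_sampler, proposal_pdf, c):
    """Rejection method: requires target_pdf(x) <= c * proposal_pdf(x)
    for all x; the expected number of iterations is exactly c."""
    while True:
        x = proposal_sampler()
        if random.random() * c * proposal_pdf(x) <= target_pdf(x):
            return x

# Example: half-normal target with an Exp(1) envelope.
half_normal = lambda x: math.sqrt(2.0 / math.pi) * math.exp(-0.5 * x * x)
expo_pdf = lambda x: math.exp(-x)
c = math.sqrt(2.0 * math.e / math.pi)  # sup of half_normal/expo_pdf, at x = 1

def sample_half_normal():
    return rejection_sample(half_normal, lambda: random.expovariate(1.0),
                            expo_pdf, c)
```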

3,217 citations

Journal ArticleDOI

469 citations


"Simulating bessel random variables" refers background or methods in this paper

  • ...1991 Mathematics Subject Classifications: Primary 65C10....


  • ...It arises in a natural way in the theory of stochastic processes (Pitman and Yor, 1982), and is related to many other distributions, including multivariate and randomized gamma distributions and the von Mises-Fisher distribution (Yuan and Kalbfleisch, 2000)....


  • ...At that point, the random variate Y is distributed as a Bessel (ν, a) random variate (Yuan and Kalbfleisch, 2000, p. 439)....


  • ...Generator for the standard squared Bessel bridge process This process on [0, 1], denoted by ξ(t), conditional on ξ(0) = a, ξ(1) = b, and with parameter ν > −1, is studied by Pitman and Yor (1982)....


01 Jan 2000
TL;DR: This paper reviews statistical methods for analyzing output data from computer simulations, focusing on the estimation of steady-state system parameters and on procedures for finding the best system among a set of competing alternatives.
Abstract: This paper reviews statistical methods for analyzing output data from computer simulations. First, it focuses on the estimation of steady-state system parameters. The estimation techniques include the replication/deletion approach, the regenerative method, the batch means method, and methods based on standardized time series. Second, it reviews recent statistical procedures to find the best system among a set of competing alternatives.
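The batch means method mentioned in this abstract splits one long steady-state output series into contiguous batches and treats the batch averages as approximately i.i.d. observations; a minimal sketch (the batch count is an illustrative choice left to the caller):

```python
def batch_means(data, num_batches):
    """Batch means: split a steady-state output series into contiguous
    batches, then use the spread of the batch averages to estimate the
    variance of the overall sample mean."""
    b = len(data) // num_batches
    means = [sum(data[i * b:(i + 1) * b]) / b for i in range(num_batches)]
    grand = sum(means) / num_batches
    var_of_mean = (sum((m - grand) ** 2 for m in means)
                   / (num_batches - 1) / num_batches)
    return grand, var_of_mean
```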

332 citations

Journal ArticleDOI

207 citations


"Simulating Bessel random variables" refers methods in this paper

  • ...A uniformly fast algorithm for this distribution was derived by Best and Fisher (1979), with alternate methods proposed later by Dagpunar (1990), Barabesi (1993) and Wood (1994)....


  • ...In our case, three non-standard functions are involved, Γ (in the computation of pn), Iν(a) (in the computation of pn), and Rν(a) (in the computation of µ and σ²)....

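The Best-Fisher (1979) von Mises sampler cited above is a rejection algorithm built on a wrapped Cauchy envelope, with expected cost uniformly bounded in the concentration κ. A sketch following the published description (zero mean direction assumed; this is a reconstruction, not the authors' code):

```python
import math
import random

def vonmises_best_fisher(kappa):
    """Best-Fisher acceptance-rejection sampler for the von Mises(0, kappa)
    distribution on (-pi, pi], using a wrapped Cauchy envelope; the expected
    number of iterations is uniformly bounded in kappa."""
    tau = 1.0 + math.sqrt(1.0 + 4.0 * kappa * kappa)
    rho = (tau - math.sqrt(2.0 * tau)) / (2.0 * kappa)
    r = (1.0 + rho * rho) / (2.0 * rho)
    while True:
        z = math.cos(math.pi * random.random())
        f = (1.0 + r * z) / (r + z)
        c = kappa * (r - f)
        u = 1.0 - random.random()  # in (0, 1], keeps log(c / u) finite
        if c * (2.0 - c) - u > 0.0 or math.log(c / u) + 1.0 - c >= 0.0:
            f = max(-1.0, min(1.0, f))  # guard acos against rounding
            theta = math.acos(f)
            return theta if random.random() < 0.5 else -theta
```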