
Showing papers in "Annals of Applied Probability in 2007"


Journal ArticleDOI
TL;DR: It is proved in this article that there exists a deterministic equivalent to the empirical Stieltjes transform of the distribution of the eigenvalues of $\Sigma_n \Sigma_n^T$, which is itself the Stieltjes transform of a probability measure.
Abstract: Consider an N×n random matrix Yn=(Ynij) where the entries are given by $Y^{n}_{ij}=\frac{\sigma_{ij}(n)}{\sqrt{n}}X^{n}_{ij}$, the Xnij being independent and identically distributed, centered with unit variance and satisfying some mild moment assumption. Consider now a deterministic N×n matrix An whose columns and rows are uniformly bounded in the Euclidean norm. Let Σn=Yn+An. We prove in this article that there exists a deterministic N×N matrix-valued function Tn(z) analytic in ℂ−ℝ+ such that, almost surely, $$\lim_{n\rightarrow+\infty,N/n\rightarrow c}\biggl(\frac{1}{N}\operatorname{Trace}(\Sigma_{n}\Sigma_{n}^{T}-zI_{N})^{-1}-\frac{1}{N}\operatorname{Trace}T_{n}(z)\biggr )=0.$$ Otherwise stated, there exists a deterministic equivalent to the empirical Stieltjes transform of the distribution of the eigenvalues of ΣnΣnT. For each n, the entries of matrix Tn(z) are defined as the unique solutions of a certain system of nonlinear functional equations. It is also proved that $\frac{1}{N}\operatorname{Trace}\ T_{n}(z)$ is the Stieltjes transform of a probability measure πn(dλ), and that for every bounded continuous function f, the following convergence holds almost surely $$\frac{1}{N}\sum_{k=1}^{N}f(\lambda_{k})-\int_{0}^{\infty}f(\lambda)\pi _{n}(d\lambda)\mathop{\longrightarrow}_{n\rightarrow\infty}0,$$ where the (λk)1≤k≤N are the eigenvalues of ΣnΣnT. This work is motivated by the context of performance evaluation of multiple inputs/multiple output (MIMO) wireless digital communication channels. As an application, we derive a deterministic equivalent to the mutual information: $$C_{n}(\sigma^{2})=\frac{1}{N}\mathbb{E}\log \det\biggl(I_{N}+\frac{\Sigma_{n}\Sigma_{n}^{T}}{\sigma^{2}}\biggr),$$ where σ2 is a known parameter.
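As a quick numerical illustration of the quantities involved (not the paper's construction of $T_n(z)$, which solves a system of nonlinear functional equations), the sketch below computes the empirical Stieltjes transform and the mutual-information functional for one synthetic realization; the dimensions, the variance profile and the choice $A_n=0$ are made-up assumptions.

```python
import numpy as np

# Empirical Stieltjes transform (1/N) Tr (Sigma Sigma^T - zI)^{-1} and the
# mutual-information functional for one synthetic realization.  The variance
# profile sigma_ij(n), the dimensions and A_n = 0 are illustrative choices.
rng = np.random.default_rng(0)
N, n = 200, 400                          # aspect ratio c = N/n = 0.5
sigma = 0.5 + rng.random((N, n))         # bounded variance profile
A = np.zeros((N, n))                     # deterministic part (taken to be zero here)

X = rng.standard_normal((N, n))          # i.i.d. entries, mean 0, variance 1
Y = sigma * X / np.sqrt(n)
Sigma = Y + A

z = -1.0                                 # a point in C \ R_+
G = np.linalg.inv(Sigma @ Sigma.T - z * np.eye(N))
print("empirical Stieltjes transform at z = -1:", np.trace(G) / N)

sigma2 = 1.0
sign, logdet = np.linalg.slogdet(np.eye(N) + Sigma @ Sigma.T / sigma2)
print("empirical (1/N) log det(I + Sigma Sigma^T / sigma^2):", logdet / N)
```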

338 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the optimal dividend problem for an insurance company whose risk process evolves as a spectrally negative Levy process in the absence of dividend payments and give an explicit analytical description of the optimal strategy in the set of barrier strategies and the corresponding value function.
Abstract: In this paper we consider the optimal dividend problem for an insurance company whose risk process evolves as a spectrally negative Levy process in the absence of dividend payments. The classical dividend problem for an insurance company consists in finding a dividend payment policy that maximizes the total expected discounted dividends. Related is the problem where we impose the restriction that ruin be prevented: the beneficiaries of the dividends must then keep the insurance company solvent by bail-out loans. Drawing on the fluctuation theory of spectrally negative Levy processes we give an explicit analytical description of the optimal strategy in the set of barrier strategies and the corresponding value function, for either of the problems. Subsequently we investigate when the dividend policy that is optimal among all admissible ones takes the form of a barrier strategy.
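A toy Monte Carlo sketch of a barrier dividend strategy for a compound Poisson (Cramer-Lundberg) surplus process, a special case of a spectrally negative Levy process: premium income received while at the barrier is paid out as dividends until ruin. The parameters, the barrier level and the finite horizon are made up; the paper's optimal barrier is characterized analytically via fluctuation theory and is not computed here.

```python
import numpy as np

# Toy Monte Carlo: expected discounted dividends under a barrier strategy at
# level b for a Cramer-Lundberg surplus process (premium rate c_rate, Poisson
# claims).  All parameters, the barrier and the finite horizon are illustrative.
rng = np.random.default_rng(1)

def discounted_dividends(x0=5.0, b=8.0, c_rate=2.0, lam=1.0, claim_mean=1.5,
                         q=0.05, horizon=200.0):
    t, x, total = 0.0, x0, 0.0
    while t < horizon:                               # crude truncation of the infinite horizon
        dt = rng.exponential(1.0 / lam)              # time until the next claim
        reach = min(dt, max(b - x, 0.0) / c_rate)    # time needed to reach the barrier
        if x + c_rate * dt > b:
            # premiums received while at the barrier are paid out as dividends
            pay_start, pay_end = t + reach, t + dt
            total += c_rate * (np.exp(-q * pay_start) - np.exp(-q * pay_end)) / q
            x = b
        else:
            x += c_rate * dt
        t += dt
        x -= rng.exponential(claim_mean)             # claim arrives
        if x < 0:                                    # ruin: dividend payments stop
            break
    return total

print("estimated expected discounted dividends:",
      np.mean([discounted_dividends() for _ in range(2000)]))
```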

306 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide a thorough mathematical examination of the limiting arguments building on the orientation of Heffernan and Tawn [J. R. Stat. Soc. Ser. B Stat. Methodol. 66 (2004) 497–546], which allows examination of distributional tails other than the joint tail.
Abstract: Models based on assumptions of multivariate regular variation and hidden regular variation provide ways to describe a broad range of extremal dependence structures when marginal distributions are heavy tailed. Multivariate regular variation provides a rich description of extremal dependence in the case of asymptotic dependence, but fails to distinguish between exact independence and asymptotic independence. Hidden regular variation addresses this problem by requiring components of the random vector to be simultaneously large but on a smaller scale than the scale for the marginal distributions. In doing so, hidden regular variation typically restricts attention to that part of the probability space where all variables are simultaneously large. However, since under asymptotic independence the largest values do not occur in the same observation, the region where variables are simultaneously large may not be of primary interest. A different philosophy was offered in the paper of Heffernan and Tawn [J. R. Stat. Soc. Ser. B Stat. Methodol. 66 (2004) 497–546] which allows examination of distributional tails other than the joint tail. This approach used an asymptotic argument which conditions on one component of the random vector and finds the limiting conditional distribution of the remaining components as the conditioning variable becomes large. In this paper, we provide a thorough mathematical examination of the limiting arguments building on the orientation of Heffernan and Tawn [J. R. Stat. Soc. Ser. B Stat. Methodol. 66 (2004) 497–546]. We examine the conditions required for the assumptions made by the conditioning approach to hold, and highlight similarities and differences between the new and established methods.

165 citations


Journal ArticleDOI
TL;DR: In this article, the authors provided a novel characterization of the proportionally fair bandwidth allocation of network capacities, in terms of the Fenchel-Legendre transform of the network capacity region.
Abstract: In this article we provide a novel characterization of the proportionally fair bandwidth allocation of network capacities, in terms of the Fenchel–Legendre transform of the network capacity region. We use this characterization to prove stability (i.e., ergodicity) of network dynamics under proportionally fair sharing, by exhibiting a suitable Lyapunov function. Our stability result extends previously known results to a more general model including Markovian users routing. In particular, it implies that the stability condition previously known under exponential service time distributions remains valid under so-called phase-type service time distributions. We then exhibit a modification of proportional fairness, which coincides with it in some asymptotic sense, is reversible (and thus insensitive), and has explicit stationary distribution. Finally we show that the stationary distributions under modified proportional fairness and balanced fairness, a sharing criterion proposed because of its insensitivity properties, admit the same large deviations characteristics. These results show that proportional fairness is an attractive bandwidth allocation criterion, combining the desirable properties of ease of implementation with performance and insensitivity.
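The proportionally fair allocation itself is the maximizer of $\sum_i \log x_i$ over the capacity region. Below is a minimal sketch for a made-up three-route, two-link linear network, solved with a generic convex optimizer (SciPy) rather than with anything from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Proportionally fair rates maximize sum_i log(x_i) over the capacity region.
# Toy linear network: route 0 uses links A and B, route 1 uses only A,
# route 2 uses only B; both links have capacity 1 (illustrative numbers).
routes = np.array([[1, 1, 0],     # link A usage by each route
                   [1, 0, 1]])    # link B usage by each route
capacity = np.array([1.0, 1.0])

res = minimize(lambda x: -np.sum(np.log(x)),
               x0=np.full(3, 0.3),
               method="SLSQP",
               bounds=[(1e-6, None)] * 3,
               constraints=[{"type": "ineq",
                             "fun": lambda x: capacity - routes @ x}])
print("proportionally fair rates:", res.x)   # approximately [1/3, 2/3, 2/3]
```

The long route that uses both links gets the smaller rate, which is the characteristic behavior of proportional fairness on a linear network.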

156 citations


Journal ArticleDOI
TL;DR: In this paper, a theory of existence, uniqueness and ergodicity is developed in sufficient generality to subsume the sampling problems of interest to us, and a class of preconditioned SPDEs is studied, found by applying a Green's operator to the SPDE in such a way that the invariant measure remains unchanged; such infinite dimensional evolution equations are important for the development of practical algorithms for sampling infinite dimensional problems.
Abstract: In many applications, it is important to be able to sample paths of SDEs conditional on observations of various kinds. This paper studies SPDEs which solve such sampling problems. The SPDE may be viewed as an infinite-dimensional analogue of the Langevin equation used in finite-dimensional sampling. In this paper, conditioned nonlinear SDEs, leading to nonlinear SPDEs for the sampling, are studied. In addition, a class of preconditioned SPDEs is studied, found by applying a Green's operator to the SPDE in such a way that the invariant measure remains unchanged; such infinite-dimensional evolution equations are important for the development of practical algorithms for sampling infinite-dimensional problems. The resulting SPDEs provide several significant challenges in the theory of SPDEs. The two primary ones are the presence of nonlinear boundary conditions, involving first-order derivatives, and a loss of the smoothing property in the case of the preconditioned SPDEs. These challenges are overcome and a theory of existence, uniqueness and ergodicity is developed in sufficient generality to subsume the sampling problems of interest to us. The Gaussian theory developed in Part I of this paper considers Gaussian SDEs, leading to linear Gaussian SPDEs for sampling. This Gaussian theory is used as the basis for deriving nonlinear SPDEs which effect the desired sampling in the nonlinear case, via a change of measure.

140 citations


Journal ArticleDOI
TL;DR: In this article, the existence of independent random matching of a large population in both static and dynamic systems has been proved via non-standard analysis, and the proof for the dynamic setting relies on a new Fubini type theorem for an infinite product of Loeb transition probabilities, based on which a continuum of independent Markov chains is derived from random mutation, random partial matching and random type changing.
Abstract: This paper shows the existence of independent random matching of a large (continuum) population in both static and dynamic systems, which has been popular in the economics and genetics literatures. We construct a joint agent-probability space, and randomized mutation, partial matching and match-induced type-changing functions that satisfy appropriate independence conditions. The proofs are achieved via nonstandard analysis. The proof for the dynamic setting relies on a new Fubini-type theorem for an infinite product of Loeb transition probabilities, based on which a continuum of independent Markov chains is derived from random mutation, random partial matching and random type changing.

130 citations


Journal ArticleDOI
TL;DR: In this article, a general method to study dependent data in a binary tree was proposed, where an individual in one generation gives rise to two different offspring, one of type 0 and another of type 1, in the next generation.
Abstract: We propose a general method to study dependent data in a binary tree, where an individual in one generation gives rise to two different offspring, one of type 0 and one of type 1, in the next generation. For any specific characteristic of these individuals, we assume that the characteristic is stochastic and depends on its ancestors only through the mother's characteristic. The dependency structure may be described by a transition probability P(x, dy dz) which gives the probability that the pair of daughters' characteristics is around (y, z), given that the mother's characteristic is x. Note that y, the characteristic of the daughter of type 0, and z, that of the daughter of type 1, may be conditionally dependent given x, and their respective conditional distributions may differ. We then speak of bifurcating Markov chains. We derive laws of large numbers and central limit theorems for such stochastic processes. We then apply these results to detect cellular aging in Escherichia coli, using the data of Stewart et al. and a bifurcating autoregressive model.
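A minimal simulation sketch of a bifurcating autoregressive model of the kind mentioned at the end of the abstract: each daughter's characteristic is an affine function of the mother's plus Gaussian noise, with type-dependent coefficients. The coefficients and tree depth below are illustrative, not estimates from the E. coli data.

```python
import numpy as np

# Bifurcating autoregressive sketch: individual i has daughters 2i (type 0)
# and 2i+1 (type 1); each daughter's characteristic is an affine function of
# the mother's plus Gaussian noise, with type-dependent coefficients.
rng = np.random.default_rng(2)
a0, b0 = 0.5, 1.0          # type-0 daughter: X_{2i} = a0 * X_i + b0 + noise
a1, b1 = 0.4, 1.2          # type-1 daughter: X_{2i+1} = a1 * X_i + b1 + noise
generations = 12

X = {1: 0.0}               # the root individual has index 1
for i in range(1, 2 ** generations):
    X[2 * i] = a0 * X[i] + b0 + rng.normal()
    X[2 * i + 1] = a1 * X[i] + b1 + rng.normal()

leaves = np.array([X[i] for i in range(2 ** generations, 2 ** (generations + 1))])
# Law of large numbers over the tree: empirical averages stabilize; a
# systematic difference between even (type-0) and odd (type-1) indices is the
# kind of asymmetry used to detect cellular aging.
print("mean over last generation:", leaves.mean())
print("type-0 vs type-1 means   :", leaves[::2].mean(), leaves[1::2].mean())
```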

116 citations


Journal ArticleDOI
TL;DR: A method for determining the appropriate form for the scaling of the proposal distribution as a function of the dimension is proposed, which leads to the proof of an asymptotic diffusion theorem.
Abstract: In this paper, we shall optimize the efficiency of Metropolis algorithms for multidimensional target distributions with scaling terms possibly depending on the dimension. We propose a method for determining the appropriate form for the scaling of the proposal distribution as a function of the dimension, which leads to the proof of an asymptotic diffusion theorem. We show that when there does not exist any component with a scaling term significantly smaller than the others, the asymptotically optimal acceptance rate is the well-known 0.234.
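An illustrative experiment (not from the paper): random-walk Metropolis on a d-dimensional standard Gaussian target with proposal standard deviation l/sqrt(d). Tuning l near the commonly quoted value 2.38 gives an acceptance rate close to the asymptotically optimal 0.234, while much smaller or larger l does not.

```python
import numpy as np

# Random-walk Metropolis on a d-dimensional standard Gaussian target with
# proposal standard deviation l / sqrt(d); acceptance rates for several l.
rng = np.random.default_rng(3)

def acceptance_rate(d, ell, n_iter=20000):
    log_pi = lambda y: -0.5 * y @ y          # N(0, I_d) target, up to a constant
    x, accepted = np.zeros(d), 0
    for _ in range(n_iter):
        prop = x + (ell / np.sqrt(d)) * rng.standard_normal(d)
        if np.log(rng.random()) < log_pi(prop) - log_pi(x):
            x, accepted = prop, accepted + 1
    return accepted / n_iter

for ell in (1.0, 2.38, 5.0):
    print(f"l = {ell:4.2f}   acceptance rate = {acceptance_rate(50, ell):.3f}")
```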

110 citations


Journal ArticleDOI
TL;DR: In this article, Tracy and Widom showed that the distribution functions for the top and bottom curves of $n$ nonintersecting Brownian excursions are equal to Fredholm determinants which, in the simplest case, are expressible in terms of Painlevé V functions.
Abstract: We consider the process of $n$ Brownian excursions conditioned to be nonintersecting. We show the distribution functions for the top curve and the bottom curve are equal to Fredholm determinants whose kernel we give explicitly. In the simplest case, these determinants are expressible in terms of Painlevé V functions. We prove that as $n\to \infty$, the distributional limit of the bottom curve is the Bessel process with parameter 1/2. (This is the Bessel process associated with Dyson's Brownian motion.) We apply these results to study the expected area under the bottom and top curves.

100 citations


Journal ArticleDOI
TL;DR: In this article, the authors exploit connections between importance sampling, differential games, and classical subsolutions of the corresponding Isaacs equation to design and analyze simple and efficient dynamic importance sampling schemes for general classes of networks.
Abstract: Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network setting (e.g., a two-node tandem network). Exploiting connections between importance sampling, differential games, and classical subsolutions of the corresponding Isaacs equation, we show how to design and analyze simple and efficient dynamic importance sampling schemes for general classes of networks. The models used to illustrate the approach include d-node tandem Jackson networks and a two-node network with feedback, and the rare events studied are those of large queueing backlogs, including total population overflow and the overflow of individual buffers.
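For intuition, here is the classical "static" exponential change of measure for a single random walk with negative drift — the kind of a priori fixed change of measure the abstract refers to, not the paper's dynamic subsolution-based schemes for networks. The drift, level and sample size are illustrative.

```python
import numpy as np

# Estimate P(random walk with increments N(-mu, 1) ever exceeds level b) by
# exponential tilting: theta = 2*mu makes E[exp(theta * X)] = 1, the tilted
# walk drifts upward, and each sample is reweighted by exp(-theta * S_tau).
rng = np.random.default_rng(4)
mu, b, n_samples = 0.5, 10.0, 5000
theta = 2.0 * mu

weights = []
for _ in range(n_samples):
    s = 0.0
    while s < b:                        # under the tilted law, increments are N(+mu, 1)
        s += rng.normal(mu, 1.0)
    weights.append(np.exp(-theta * s))  # likelihood ratio at the crossing time

print("importance sampling estimate:", np.mean(weights))
print("Lundberg-type upper bound exp(-theta*b):", np.exp(-theta * b))
```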

92 citations


Journal ArticleDOI
TL;DR: Here, in order to include the effect of stochasticity (genetic drift), self-regulated randomly fluctuating populations subject to mutation are considered, so that the number of coexisting types may fluctuate.
Abstract: The biological theory of adaptive dynamics proposes a description of the long-term evolution of a structured asexual population. It is based on the assumptions of large population, rare mutations and small mutation steps, that lead to a deterministic ODE describing the evolution of the dominant type, called the 'canonical equation of adaptive dynamics'. Here, in order to include the effect of stochasticity (genetic drift), we consider self-regulated randomly fluctuating populations subject to mutation, so that the number of coexisting types may fluctuate. We apply a limit of rare mutations to these populations, while keeping the population size finite. This leads to a jump process, the so-called 'trait substitution sequence', where evolution proceeds by successive invasions and fixations of mutant types. Then we apply a limit of small mutation steps (weak selection) to this jump process, that leads to a diffusion process that we call the 'canonical diffusion of adaptive dynamics', in which genetic drift is combined with directional selection driven by the gradient of the fixation probability, also interpreted as an invasion fitness. Finally, we study in detail the particular case of multitype logistic branching populations and seek explicit formulae for the invasion fitness of a mutant deviating slightly from the resident type. In particular, second-order terms of the fixation probability are products of functions of the initial mutant frequency, times functions of the initial total population size, called the invasibility coefficients of the resident by increased fertility, defence, aggressiveness, isolation, or survival.

Journal ArticleDOI
TL;DR: In this paper, a class of random spanning trees built on a realization of a homogeneous Poisson point process of the plane is analyzed, including nonlocal properties such as the shape and structure of its semi-infinite paths and the shape of the set of its vertices less than $k$ generations away from the origin.
Abstract: We analyze a class of random spanning trees built on a realization of a homogeneous Poisson point process of the plane. This tree has a local construction rule and a radial structure with the origin as its root. We first use stochastic geometry arguments to analyze local functionals of the random tree such as the distribution of the length of the edges or the mean degree of the vertices. Far away from the origin, these local properties are shown to be close to those of the directed spanning tree introduced by Bhatt and Roy. We then use the theory of continuous state space Markov chains to analyze some nonlocal properties of the tree such as the shape and structure of its semi-infinite paths or the shape of the set of its vertices less than $k$ generations away from the origin. This class of spanning trees has applications in many fields and in particular in communications.
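A small construction sketch, under my reading of the local rule: each Poisson point is attached to its nearest neighbour among the points strictly closer to the origin, with the origin as root; the intensity and observation window are arbitrary choices.

```python
import numpy as np

# Radial spanning tree sketch: root at the origin; every other point attaches
# to its nearest neighbour among the points strictly closer to the origin.
rng = np.random.default_rng(5)
intensity, radius = 1.0, 10.0
n_points = rng.poisson(intensity * np.pi * radius ** 2)
r = radius * np.sqrt(rng.random(n_points))           # uniform points in a disc
phi = 2 * np.pi * rng.random(n_points)
pts = np.vstack([[0.0, 0.0],                          # index 0 is the root
                 np.column_stack([r * np.cos(phi), r * np.sin(phi)])])

norms = np.linalg.norm(pts, axis=1)
parent = np.zeros(len(pts), dtype=int)
for i in range(1, len(pts)):
    closer = np.flatnonzero(norms < norms[i])         # candidates nearer the origin
    dists = np.linalg.norm(pts[closer] - pts[i], axis=1)
    parent[i] = closer[np.argmin(dists)]

edge_lengths = np.linalg.norm(pts[1:] - pts[parent[1:]], axis=1)
print("points:", len(pts), "  mean edge length:", edge_lengths.mean())
```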

Journal ArticleDOI
TL;DR: In this paper, it was shown that exchangeable pairs for normal approximation can effectively be used for translated Poisson approximation for the anti-voter model on finite graphs, and the same rate of convergence was obtained for a stronger metric.
Abstract: It is shown that the method of exchangeable pairs introduced by Stein [Approximate Computation of Expectations (1986) IMS, Hayward, CA] for normal approximation can effectively be used for translated Poisson approximation. Introducing an additional smoothness condition, one can obtain approximation results in total variation and also in a local limit metric. The result is applied, in particular, to the anti-voter model on finite graphs as analyzed by Rinott and Rotar [Ann. Appl. Probab. 7 (1997) 1080–1105], obtaining the same rate of convergence, but now for a stronger metric.

Journal ArticleDOI
TL;DR: In this article, the question of which measure one should choose for valuation or pricing of non-hedgeable payoffs is raised, and a measure is chosen which minimizes a particular functional over the set M.
Abstract: Levy models are very popular in finance due to their tractability and their good fitting properties. However, Levy models typically yield incomplete markets. This raises the question of which measure one should choose for valuation or pricing of nonhedgeable payoffs. Very often, a measure is chosen which minimizes a particular functional over the set M

Journal ArticleDOI
TL;DR: In this paper, a new adaptive simulation based algorithm for the numerical solution of optimal stopping problems in discrete time is proposed, which recursively computes the so-called continuation values.
Abstract: Under the assumption of no-arbitrage, the pricing of American and Bermudan options can be cast as optimal stopping problems. We propose a new adaptive, simulation-based algorithm for the numerical solution of optimal stopping problems in discrete time. Our approach is to recursively compute the so-called continuation values. They are defined as regression functions of the cash flow that would occur over a series of subsequent time periods if the approximated optimal exercise strategy is applied. We use nonparametric least squares regression estimates to approximate the continuation values from a set of sample paths which we simulate from the underlying stochastic process. The parameters of the regression estimates and the regression problems are chosen in a data-dependent manner. We present results concerning the consistency and rate of convergence of the new algorithm. Finally, we illustrate its performance by pricing high-dimensional Bermudan basket options with strangle-spread payoff based on the average of the underlying assets.
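A compact sketch of the regression idea on a single-asset Bermudan put, using plain polynomial least squares in the style of Longstaff-Schwartz rather than the paper's nonparametric, data-dependent estimates; all market parameters below are illustrative.

```python
import numpy as np

# Regression-based optimal stopping for a Bermudan put on a single asset:
# continuation values are approximated by least-squares regression of the
# discounted future cash flow on a cubic polynomial in the current price.
rng = np.random.default_rng(6)
S0, K, r, sigma, T, n_ex, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000
dt = T / n_ex
disc = np.exp(-r * dt)

z = rng.standard_normal((n_paths, n_ex))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)
cash = payoff(S[:, -1])                        # cash flow if held to the last date
for t in range(n_ex - 2, -1, -1):
    cash *= disc                               # discount one exercise period back
    itm = payoff(S[:, t]) > 0                  # regress on in-the-money paths only
    basis = np.vander(S[itm, t], 4)            # cubic polynomial basis
    coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
    continuation = basis @ coef
    exercise = payoff(S[itm, t])
    cash[itm] = np.where(exercise > continuation, exercise, cash[itm])

print("Bermudan put price estimate:", disc * cash.mean())
```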

Journal ArticleDOI
Abstract: This paper deals with discrete-time Markov control processes on a general state space. A long-run risk-sensitive average cost criterion is used as a performance measure. The one-step cost function is nonnegative and possibly unbounded. Using the vanishing discount factor approach, the optimality inequality and an optimal stationary strategy for the decision maker are established.

Journal ArticleDOI
TL;DR: In this paper, it was shown that with probability 1, SIR1 tends, as N→∞, to the limit $\sum_{l,l'=1}^{L}\alpha_{1}(l)\alpha_{1}(l')a_{l,l'}$, where $A=(a_{l,l'})$ is nonrandom, Hermitian positive definite, and $\alpha\in\mathbb{C}^{L}$ has distribution H.
Abstract: Let $\{s_{ij}\colon i,j=1,2,\ldots\}$ consist of i.i.d. random variables in $\mathbb{C}$ with $\mathsf{E}s_{11}=0$, $\mathsf{E}|s_{11}|^{2}=1$. For each positive integer $N$, let $s_k=s_k(N)=(s_{1k},s_{2k},\ldots,s_{Nk})^{T}$, $1\le k\le K$, with $K=K(N)$ and $K/N\to c>0$ as $N\to\infty$. Assume, for fixed positive integer $L$, that for each $N$ and $k\le K$, $\alpha_k=(\alpha_k(1),\ldots,\alpha_k(L))^{T}$ is random, independent of the $s_{ij}$, and that the empirical distribution of $(\alpha_1,\ldots,\alpha_K)$ converges weakly, with probability one, to a probability distribution $H$ on $\mathbb{C}^{L}$. Let $\beta_k=\beta_k(N)=(\alpha_k(1)s_k^{T},\ldots,\alpha_k(L)s_k^{T})^{T}$ and set $C=C(N)=(1/N)\sum_{k=2}^{K}\beta_k\beta_k^{*}$. Let $\sigma^{2}>0$ be arbitrary. Then define $\mathrm{SIR}_1=(1/N)\beta_1^{*}(C+\sigma^{2}I)^{-1}\beta_1$, which represents the best signal-to-interference ratio for user 1 with respect to the other $K-1$ users in a direct-sequence code-division multiple-access system in wireless communications. In this paper it is proven that, with probability 1, $\mathrm{SIR}_1$ tends, as $N\to\infty$, to the limit $\sum_{l,l'=1}^{L}\alpha_{1}(l)\alpha_{1}(l')a_{l,l'}$, where $A=(a_{l,l'})$ is nonrandom, Hermitian positive definite, and is the unique matrix of such type satisfying $A=\bigl(c\,\mathsf{E}\frac{\mathbf{\alpha}\mathbf{\alpha}^{*}}{1+\mathbf{\alpha}^{*}A\mathbf{\alpha}}+\sigma^{2}I_{L}\bigr)^{-1}$, where $\alpha\in\mathbb{C}^{L}$ has distribution $H$. The result generalizes those previously derived under more restricted assumptions.
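A small numerical sketch: solve the fixed-point equation for $A$ by damped iteration for a toy two-atom distribution $H$ (an illustrative choice, not from the paper), then evaluate the limiting SIR expression for user 1.

```python
import numpy as np

# Solve  A = ( c * E[ alpha alpha^* / (1 + alpha^* A alpha) ] + sigma^2 I_L )^(-1)
# by damped fixed-point iteration for a toy two-atom distribution H, then
# evaluate the limiting SIR expression for user 1.
c, sigma2, L = 0.5, 1.0, 2
support = np.array([[1.0, 0.5], [0.3, 1.2]])     # two atoms of H, equal weights
weights = np.array([0.5, 0.5])

A = np.eye(L)
for _ in range(500):
    E = sum(w * np.outer(a, a.conj()) / (1.0 + a.conj() @ A @ a)
            for w, a in zip(weights, support))
    A = 0.5 * A + 0.5 * np.linalg.inv(c * E + sigma2 * np.eye(L))

alpha1 = support[0]
# quadratic form of A at alpha_1 (entries are real here, so conjugation is immaterial)
print("limiting SIR for user 1:", float(alpha1 @ A @ alpha1))
```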

Journal ArticleDOI
TL;DR: In this article, a Malliavin type differential calculus for sensitivity computations in a model driven by a Levy process is presented. But the authors assume that sensitivity computation is piecewise differentiable and they do not consider the case where sensitivity is independent of the parameters of the differential operator.
Abstract: We consider random variables of the form $F=f(V_1,\ldots,V_n)$, where $f$ is a smooth function and $V_i$, $i\in\mathbb{N}$, are random variables with absolutely continuous law $p_i(y)$. We assume that the $p_i$, $i=1,\ldots,n$, are piecewise differentiable and we develop a differential calculus of Malliavin type based on $\partial\ln p_i$. This allows us to establish an integration by parts formula $E(\partial_i\phi(F)G)=E(\phi(F)H_i(F,G))$, where $H_i(F,G)$ is a random variable constructed using the differential operators acting on $F$ and $G$. We use this formula in order to give numerical algorithms for sensitivity computations in a model driven by a Levy process.

Journal ArticleDOI
TL;DR: In this paper, independent variables X1, X2, … are considered, each having a normal distribution with negative mean -s.
Abstract: Let X1, X2, … be independent variables, each having a normal distribution with negative mean -s

Journal ArticleDOI
TL;DR: For a class of stationary Markov-dependent sequences (An, Bn)∈ℝ2, this paper considers the random linear recursion Sn=An+BnSn−1, n∈ℤ, and shows that the distribution tail of its stationary solution has a power-law decay.
Abstract: For a class of stationary Markov-dependent sequences (An, Bn)∈ℝ2, we consider the random linear recursion Sn=An+BnSn−1, n∈ℤ, and show that the distribution tail of its stationary solution has a power law decay.
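A quick simulation sketch of the recursion with i.i.d. coefficients (a simplification of the Markov-dependent setting above), choosing the law of Bn so that the Kesten-type tail index equals 1; the tail frequencies then roughly halve each time the level doubles.

```python
import numpy as np

# Simulate S_n = A_n + B_n * S_{n-1} with i.i.d. coefficients: B_n lognormal
# with E[log B] = -0.25 < 0 (so a stationary solution exists) and E[B] = 1,
# which puts the Kesten-type tail index at 1, i.e. P(S > x) ~ const / x.
rng = np.random.default_rng(7)
n, burn_in = 200000, 1000
B = rng.lognormal(mean=-0.25, sigma=np.sqrt(0.5), size=n)
A = rng.exponential(1.0, size=n)

S = np.empty(n)
s = 0.0
for i in range(n):
    s = A[i] + B[i] * s
    S[i] = s
S = S[burn_in:]

for x in (10.0, 20.0, 40.0, 80.0):
    print(f"P(S > {x:5.1f}) ~ {np.mean(S > x):.4f}")   # roughly halves as x doubles
```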

Journal ArticleDOI
TL;DR: In this paper, it was shown that any Markov chain that performs local, reversible updates on randomly chosen vertices of a bounded-degree graph necessarily has mixing time at least Ω(n log n), where n is the number of vertices.
Abstract: We prove that any Markov chain that performs local, reversible updates on randomly chosen vertices of a bounded-degree graph necessarily has mixing time at least Ω(n log n), where n is the number of vertices. Our bound applies to the so-called “Glauber dynamics” that has been used extensively in algorithms for the Ising model, independent sets, graph colorings and other structures in computer science and statistical physics, and demonstrates that many of these algorithms are optimal up to constant factors within their class. Previously, no superlinear lower bound was known for this class of algorithms. Though widely conjectured, such a bound had been proved previously only in very restricted circumstances, such as for the empty graph and the path. We also show that the assumption of bounded degree is necessary by giving a family of dynamics on graphs of unbounded degree with mixing time O(n).

Journal ArticleDOI
TL;DR: In this article, the authors propose two models of the evolution of a pair of competing populations, one is a compromise between fully spatial models and interacting particle system models, which do not, at present, incorporate all of the competitive strategies that a population might adopt, and the second is a simplification of the first, in which competition is only supposed to act within lattice sites and the total population size within each lattice point is a constant.
Abstract: We propose two models of the evolution of a pair of competing populations. Both are lattice based. The first is a compromise between fully spatial models, which do not appear amenable to analytic results, and interacting particle system models, which do not, at present, incorporate all of the competitive strategies that a population might adopt. The second is a simplification of the first, in which competition is only supposed to act within lattice sites and the total population size within each lattice point is a constant. In a special case, this second model is dual to a branching annihilating random walk. For each model, using a comparison with oriented percolation, we show that for certain parameter values, both populations will coexist for all time with positive probability. As a corollary, we deduce survival for all time of branching annihilating random walk for sufficiently large branching rates. We also present a number of conjectures relating to the role of space in the survival probabilities for the two populations.

Journal ArticleDOI
TL;DR: In this article, it was shown that the basic integral representation of transition rates for the Λ-coalescent is forced by sampling consistency under more general assumptions on the coalescent process.
Abstract: Kingman derived the Ewens sampling formula for random partitions describing the genetic variation in a neutral mutation model defined by a Poisson process of mutations along lines of descent governed by a simple coalescent process, and observed that similar methods could be applied to more complex models. Möhle described the recursion which determines the generalization of the Ewens sampling formula in the situation when the lines of descent are governed by a Λ-coalescent, which allows multiple mergers. Here we show that the basic integral representation of transition rates for the Λ-coalescent is forced by sampling consistency under more general assumptions on the coalescent process. Exploiting an analogy with the theory of regenerative partition structures, we provide various characterizations of the associated partition structures in terms of discrete-time Markov chains.

Journal ArticleDOI
TL;DR: For a class of processes modeling the evolution of a spatially structured population with migration and a logistic local regulation of the reproduction dynamics, the authors show convergence to an upper invariant measure from a suitable class of initial distributions.
Abstract: For a class of processes modeling the evolution of a spatially structured population with migration and a logistic local regulation of the reproduction dynamics, we show convergence to an upper invariant measure from a suitable class of initial distributions. It follows from recent work of Alison Etheridge that this upper invariant measure is nontrivial for sufficiently large super-criticality in the reproduction. For sufficiently small super-criticality, we prove local extinction by comparison with a mean field model. This latter result extends also to more general local reproduction regulations.

Journal ArticleDOI
TL;DR: In this article, the authors compare convergence rates of Metropolis-Hastings chains to multi-modal target distributions when the proposal distributions can be of "local" and "small world" type.
Abstract: We compare convergence rates of Metropolis–Hastings chains to multi-modal target distributions when the proposal distributions can be of “local” and “small world” type. In particular, we show that by adding occasional long-range jumps to a given local proposal distribution, one can turn a chain that is “slowly mixing” (in the complexity of the problem) into a chain that is “rapidly mixing”. To do this, we obtain spectral gap estimates via a new state decomposition theorem and apply an isoperimetric inequality for log-concave probability measures. We discuss potential applicability of our result to Metropolis-coupled Markov chain Monte Carlo schemes.
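An illustrative one-dimensional experiment (the target, step sizes and jump probability are made up): mixing occasional long-range proposals into a local random-walk proposal lets the chain hop between two well-separated modes, which the purely local chain essentially never does.

```python
import numpy as np

# Target: equal mixture of N(-10, 1) and N(10, 1).  A local random-walk
# proposal (std 0.5) is mixed with occasional long-range jumps (std 20).
# Mode switches (sign changes of the state) are a crude proxy for mixing.
rng = np.random.default_rng(8)
log_pi = lambda x: np.logaddexp(-0.5 * (x + 10) ** 2, -0.5 * (x - 10) ** 2)

def mode_switches(p_long, n_iter=50000):
    x, switches, prev_side = 10.0, 0, None
    for _ in range(n_iter):
        step = rng.normal(0.0, 20.0) if rng.random() < p_long else rng.normal(0.0, 0.5)
        prop = x + step                       # symmetric mixture proposal
        if np.log(rng.random()) < log_pi(prop) - log_pi(x):
            x = prop
        side = x > 0
        if prev_side is not None and side != prev_side:
            switches += 1
        prev_side = side
    return switches

print("mode switches, local proposal only   :", mode_switches(p_long=0.0))
print("mode switches, with long-range jumps :", mode_switches(p_long=0.1))
```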

Journal ArticleDOI
TL;DR: It is shown that for words of length 6, the average waiting time is 100,000 years, while for words of length 8, the waiting time has mean 375,000 years when there is a 7 out of 8 letter match in the population consensus sequence and has mean 650 million years when there is not.
Abstract: One possible explanation for the substantial organismal differences between humans and chimpanzees is that there have been changes in gene regulation. Given what is known about transcription factor binding sites, this motivates the following probability question: given a 1000 nucleotide region in our genome, how long does it take for a specified six to nine letter word to appear in that region in some individual? Stone and Wray [Mol. Biol. Evol. 18 (2001) 1764–1770] computed 5,950 years as the answer for six letter words. Here, we will show that for words of length 6, the average waiting time is 100,000 years, while for words of length 8, the waiting time has mean 375,000 years when there is a 7 out of 8 letter match in the population consensus sequence (an event of probability roughly 5/16) and has mean 650 million years when there is not. Fortunately, in biological reality, the match to the target word does not have to be perfect for binding to occur. If we model this by saying that a 7 out of 8 letter match is good enough, the mean reduces to about 60,000 years.

Journal ArticleDOI
TL;DR: In this article, an autoregressive model on ℝ defined by the recurrence equation Xn=AnXn−1+Bn, where {(Bn, An)} are i.i.d. random variables valued in ℝ×ℝ+, is considered in the critical case $\mathbb{E}[\log A_{1}]=0$, and the behavior at infinity of the unique invariant Radon measure of the process {Xn} is investigated.
Abstract: We consider an autoregressive model on ℝ defined by the recurrence equation Xn=AnXn−1+Bn, where {(Bn, An)} are i.i.d. random variables valued in ℝ×ℝ+ and $\mathbb{E}[\log A_{1}]=0$ (critical case). It was proved by Babillot, Bougerol and Elie that there exists a unique invariant Radon measure of the process {Xn}. The aim of the paper is to investigate its behavior at infinity. We describe also stationary measures of two other stochastic recursions, including one arising in queuing theory.

Journal ArticleDOI
TL;DR: In this paper, the causal estimation error of a Gaussian nonstationary filtering problem and a multidimensional extension of the Yovits-Jackson formula were derived under a wide class of causality patterns.
Abstract: The model considered is that of “signal plus white noise.” Known connections between the noncausal filtering error and mutual information are combined with new ones involving the causal estimation error, in a general abstract setup. The results are shown to be invariant under a wide class of causality patterns; they are applied to the derivation of the causal estimation error of a Gaussian nonstationary filtering problem and to a multidimensional extension of the Yovits–Jackson formula.

Journal ArticleDOI
TL;DR: In this article, a perturbation result for Semi-Artingale reflecting Brownian motions (SRBMs) is proved, assuming certain conditions on the domains and directions of reflection.
Abstract: Semimartingale reflecting Brownian motions (SRBMs) living in the closures of domains with piecewise smooth boundaries are of interest in applied probability because of their role as heavy traffic approximations for some stochastic networks. In this paper, assuming certain conditions on the domains and directions of reflection, a perturbation result, or invariance principle, for SRBMs is proved. This provides sufficient conditions for a process that satisfies the definition of an SRBM, except for small random perturbations in the defining conditions, to be close in distribution to an SRBM. A crucial ingredient in the proof of this result is an oscillation inequality for solutions of a perturbed Skorokhod problem. We use the invariance principle to show weak existence of SRBMs under mild conditions. We also use the invariance principle, in conjunction with known uniqueness results for SRBMs, to give some sufficient conditions for validating approximations involving (i) SRBMs in convex polyhedrons with a constant reflection vector field on each face of the polyhedron, and (ii) SRBMs in bounded domains with piecewise smooth boundaries and possibly nonconstant reflection vector fields on the boundary surfaces.

Journal ArticleDOI
TL;DR: In this article, the authors generalize the work of Kendall [Electron. Comm. Probab. 9 (2004) 140–151] and explore when perfect simulation is possible for positive recurrent Markov chains.
Abstract: This paper generalizes the work of Kendall [Electron. Comm. Probab. 9 (2004) 140–151], which showed that perfect simulation, in the form of dominated coupling from the past, is always possible (although not necessarily practical) for geometrically ergodic Markov chains. Here, we consider the more general situation of positive recurrent chains and explore when it is possible to produce such a simulation algorithm for these chains. We introduce a class of chains which we name tame, for which we show that perfect simulation is possible.
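For background, a minimal sketch of plain monotone coupling from the past on a toy birth-death chain, illustrating the notion of perfect simulation; the dominated-CFTP construction for general positive recurrent chains studied in the paper is not reproduced here, and the chain parameters are arbitrary.

```python
import numpy as np

# Monotone coupling from the past for a birth-death chain on {0, ..., m}:
# run coupled chains from the minimal and maximal states with shared
# randomness from further and further in the past; when they coalesce by
# time 0, the common value is an exact draw from the stationary distribution.
rng = np.random.default_rng(9)
m, p_up = 10, 0.4

def step(x, u):
    return min(x + 1, m) if u < p_up else max(x - 1, 0)

def cftp_sample():
    past = []                                  # shared randomness, earliest time first
    new = 1
    while True:
        past = list(rng.random(new)) + past    # extend further into the past, reusing old draws
        lo, hi = 0, m
        for u in past:                         # evolve both bounding chains up to time 0
            lo, hi = step(lo, u), step(hi, u)
        if lo == hi:                           # coalescence: exact stationary sample
            return lo
        new = len(past)                        # double the depth of the past

samples = [cftp_sample() for _ in range(5000)]
print("empirical mean under the stationary distribution:", np.mean(samples))
```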