A Comparison of Random Walks in Dependent Random Environments

01 Mar 2016-Advances in Applied Probability (Applied Probability Trust)-Vol. 48, Iss: 1, pp 199-214

TL;DR: By comparing random walks in various dependent environments, the authors demonstrate that the drift can exhibit interesting behavior that depends significantly on the dependency structure of the random environment.

Abstract: Although the theoretical behavior of one-dimensional random walks in random environments is well understood, the actual evaluation of various characteristics of such processes has received relatively little attention. This paper develops new methodology for the exact computation of the drift in such models. Focusing on random walks in dependent random environments, including $k$-dependent and moving average environments, we show how the drift can be characterized and found using Perron-Frobenius theory. We compare random walks in various dependent environments and show that their drift behavior can differ significantly.

Topics: Random walk (67%), Random field (64%)

Summary (2 min read)

1 Introduction

  • Random walks in random environments are well-known mathematical models for motion through disorganized media.
  • They generalize ordinary random walks whereby the transition probabilities from any position are determined by the random state of the environment at that position.
  • From an applied and computational point of view significant gaps in their understanding remain.
  • Exact drift computations and comparisons (as opposed to comparisons using simulation) between dependent random environments seem to be entirely missing from the literature.
  • In Section 3 the authors prove explicit results for the drift for each of these models, and compare their behaviors.

2.1 General theory

  • (2.1) The theoretical behavior of {Xn} is well understood, as set out in the seminal work of Solomon [14].
  • In particular, Theorems 2.1 and 2.2 below completely describe the transience/recurrence behavior and the Law of Large Numbers behavior of {Xn}.
  • The authors follow the notation of Alili [1] and first give the key quantities that appear in these theorems.
  • These follow directly from the stationarity of U.
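In the iid special case the quantities above reduce to Solomon's classical formula: the drift equals (1 − E[σ₀])/(1 + E[σ₀]) when E[σ₀] < 1, with a mirrored negative expression when E[1/σ₀] < 1, and is zero otherwise. A minimal Python sketch for the swap model of Section 2.2 (the function name and code are ours, not the paper's):

```python
def iid_swap_drift(p, alpha):
    """Drift V of the swap-model RWRE in an iid environment, via
    Solomon's theorem.  A site is '+1' with probability alpha, and
    there the walker steps right w.p. p; on a '-1' site the step
    probabilities are swapped, so sigma = beta/alpha is (1-p)/p
    on a '+1' site and p/(1-p) on a '-1' site."""
    E_sigma = alpha * (1 - p) / p + (1 - alpha) * p / (1 - p)
    E_inv = alpha * p / (1 - p) + (1 - alpha) * (1 - p) / p
    if E_sigma < 1:            # transient to +infinity with positive speed
        return (1 - E_sigma) / (1 + E_sigma)
    if E_inv < 1:              # transient to -infinity with negative speed
        return -(1 - E_inv) / (1 + E_inv)
    return 0.0                 # zero drift (the walk may still be transient)
```

For p = 0.5 every site is symmetric, so the drift vanishes (up to floating point) regardless of α.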

3 Evaluating the drift

  • The authors first give the general solution approach for the Markov-based swap model, and then further specify the transience/recurrence and drift results to the Markov environment, the 2-dependent environment, and the moving average environment.
  • The authors omit a separate derivation for the i.i.d. environment, which can be viewed as a special case of the Markovian environment, see Remark 3.1.

3.1 General solution for swap models

  • Consider now the RWRE swap model with a random environment generated by a Markov chain {Yi, i ∈ Z}, as specified in Section 2.2.
  • The authors summarize these findings in the following theorem.

3.2.1 Comparison with the iid environment

  • Substitution into the expression for V (here in the case of positive drift only; see (3.7)) and rewriting yields a closed-form expression for V.
  • This enables us not only to obtain the drift for the iid case immediately (take ̺ = 0), but also to study the dependence of the drift V on ̺.
  • Figure 2 illustrates various aspects of the difference between the iid and Markov cases.
  • Clearly, compared to the iid case (for the same value of α), the Markov case with positive correlation coefficient has lower drift, but also a lower ‘cutoff value’ of p at which the drift becomes zero.
  • For negative correlation coefficients the authors see a higher cutoff value, but not all values of α are possible (since a < 1 must hold).
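The role of the correlation coefficient ̺ can be made concrete: for a two-state environment chain with stationary probability α of a +1 site, α and ̺ determine the transition matrix via the standard two-state Markov chain identities a = (1 − ̺)α and b = (1 − ̺)(1 − α). A Python sketch (the function name is ours):

```python
import numpy as np

def env_chain(alpha, rho):
    """Transition matrix of the {-1,+1}-valued environment chain with
    stationary distribution (1-alpha, alpha) and one-step
    autocorrelation rho.  States are ordered (-1, +1)."""
    a = (1 - rho) * alpha        # P(-1 -> +1)
    b = (1 - rho) * (1 - alpha)  # P(+1 -> -1)
    assert 0 <= a <= 1 and 0 <= b <= 1, "infeasible (alpha, rho) pair"
    return np.array([[1 - a, a],
                     [b, 1 - b]])
```

The eigenvalues of this matrix are 1 and ̺, which is why ̺ plays the role of the correlation coefficient; feasibility (a, b ∈ [0, 1]) is what restricts the admissible values of α when ̺ < 0.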

3.3 2-dependent environment

  • Unfortunately, the eigenvalues of PD are now the roots of a degree-4 polynomial, which are hard to find explicitly.
  • Using Perron–Frobenius theory and the implicit function theorem it is possible to prove the following lemma, which has the same structure as in the Markovian case.
  • Now, moving σ from 1 to any other positive value, λ0(σ) must continue to play the role of the Perron–Frobenius eigenvalue; i.e., none of the other λi(σ) can at some point take over this role.
  • Including the transience/recurrence result from the first part of this section, and including the cases with negative drift, the authors obtain the following analogue of Proposition 3.1.
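The numerical route is generic: the Perron–Frobenius eigenvalue of a nonnegative primitive matrix can be approximated by power iteration even when, as for PD here, the characteristic polynomial has no tractable roots. A minimal sketch (our own illustration; the test matrix is an arbitrary nonnegative example, not the paper's PD):

```python
import numpy as np

def perron_eigenvalue(M, iters=1000):
    """Approximate the Perron-Frobenius (largest) eigenvalue of a
    nonnegative primitive matrix M by power iteration: starting from
    a strictly positive vector, ||M x|| / ||x|| converges to it."""
    M = np.asarray(M, dtype=float)
    x = np.ones(M.shape[0])
    lam = 0.0
    for _ in range(iters):
        y = M @ x
        lam = np.linalg.norm(y) / np.linalg.norm(x)
        x = y / np.linalg.norm(y)   # renormalize to avoid overflow
    return lam
```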

3.4 Moving average environment

  • The proof is similar to that of Lemma 3.2; the authors only give an outline, leaving details for the reader to verify.
  • The cutoff value for p is now easily found as (1 + σ_cutoff)^(−1), which can be numerically evaluated.
  • It is interesting to note that the cutoff points (where V becomes 0) are significantly lower in the moving average case than in the iid case, for the same α, while at the same time the maximal drift that can be achieved is higher for the moving average case than for the iid case.
  • This is different behavior from the Markovian case; see also Figure 2.

4 Conclusions

  • Random walks in random environments can exhibit interesting and unusual behavior due to the trapping phenomenon.
  • The dependency structure of the random environment can significantly affect the drift of the process.
  • For the well-known swap RWRE model, this approach allows for easy computation of the drift, as well as explicit conditions under which the drift is positive, negative, or zero.
  • The cutoff values where the drift becomes zero are determined via Perron–Frobenius theory.
  • Various generalizations of the above environments can be considered in the same (swap model) framework, and can be analyzed along the same lines, e.g., replacing iid by Markovian {Ûi} in the moving average model, or taking moving averages of more than 3 neighboring states.


A Comparison of Random Walks in Dependent Random Environments
Werner R.W. Scheinhardt
University of Twente
w.r.w.scheinhardt@utwente.nl
Dirk P. Kroese
The University of Queensland
kroese@maths.uq.edu.au
April 13, 2015
Abstract
We provide exact computations for the drift of random walks in dependent random environments, including k-dependent and moving average environments. We show how the drift can be characterized and evaluated using Perron–Frobenius theory. Comparing random walks in various dependent environments, we demonstrate that their drifts can exhibit interesting behavior that depends significantly on the dependency structure of the random environment.
MSC subject classifications: Primary 60K37, 60G50; secondary 82B41
Keywords: random walk, dependent random environment, drift, Perron–Frobenius eigenvalue
1 Introduction
Random walks in random environments (RWREs) are well-known mathematical models for motion through disorganized (random) media. They generalize ordinary random walks whereby the transition probabilities from any position are determined by the random state of the environment at that position. RWREs exhibit interesting and unusual behavior that is not seen in ordinary random walks. For example, the walk can tend to infinity almost surely, while its overall drift is 0. The reason for such surprising behavior is that RWREs can spend a long time in (rare) regions from which it is difficult to escape; in effect, the walker becomes "trapped" for a long time.

Since the late 1960s a vast body of knowledge has been built up on the behavior of RWREs. Early applications can be found in Chernov [4] and Temkin [17]; see also Kozlov [9] and references therein. Recent applications to charge transport in designed materials are given in Brereton et al. [3] and Stenzel et al. [15]. The mathematical framework for one-dimensional RWREs in independent environments was laid by Solomon [14], and was further extended by Kesten et al. [8], Sinai [13], and Greven and Den Hollander [6]. Markovian environments were investigated in Dolgopyat [5] and Mayer-Wolf et al. [10]. Alili [1] showed that in the one-dimensional case much of the theory for independent environments could be generalized to the case where the environment process is stationary and ergodic. Overviews of the current state of the art, with a focus on higher-dimensional RWREs, can be found, for example, in Hughes [7], Sznitman [16], Zeitouni [18, 19], and Révész [11].

Although from a theoretical perspective the behavior of one-dimensional RWREs is well understood, from an applied and computational point of view significant gaps in our understanding remain. For example, exact drift computations and comparisons (as opposed to comparisons using simulation) between dependent random environments seem to be entirely missing from the literature. The reason is that such exact computations are not trivial and require additional insights.

The contribution of this paper is twofold. First, we provide new methodology and explicit expressions for the computation of the drift of one-dimensional random walks in various dependent environments, focusing on so-called 'swap models'. In particular, our approach is based on Perron–Frobenius theory, which allows easy computation of the drift as well as various cutoff points for transient/recurrent behavior. Second, we compare the drift behavior between various dependent environments, including moving average and k-dependent environments. We show that this behavior can deviate considerably from that of the (known) independent case.

The rest of the paper is organized as follows. In Section 2 we formulate the model for a one-dimensional RWRE in a stationary and ergodic environment and review some of the key results from [1]. We then formulate a flexible mechanism for constructing dependent random environments that includes the iid, Markovian, k-dependent, and moving average environments. In Section 3 we prove explicit (computable) results for the drift for each of these models, and compare their behaviors. Conclusions and directions for future research are given in Section 4.
2 Model and preliminaries
In this section we review some key results on one-dimensional RWREs and introduce the class of 'swap models' that we will study in more detail.

2.1 General theory
Consider a stochastic process $\{X_n,\, n = 0, 1, 2, \ldots\}$ with state space $\mathbb{Z}$, and a stochastic "underlying environment" $U$ taking values in some set $\mathcal{U}^{\mathbb{Z}}$, where $\mathcal{U}$ is the set of possible environment states for each site in $\mathbb{Z}$. We assume that $U$ is stationary (under $\mathbb{P}$) as well as ergodic (under the natural shift operator on $\mathbb{Z}$). The evolution of $\{X_n\}$ depends on the realization of $U$, which is random but fixed in time. For any realization $u$ of $U$ the process $\{X_n\}$ behaves as a simple random walk with transition probabilities
\[
\begin{aligned}
\mathbb{P}(X_{n+1} = i + 1 \mid X_n = i, U = u) &= \alpha_i(u), \\
\mathbb{P}(X_{n+1} = i - 1 \mid X_n = i, U = u) &= \beta_i(u) = 1 - \alpha_i(u).
\end{aligned} \tag{2.1}
\]
The theoretical behavior of $\{X_n\}$ is well understood, as set out in the seminal work of Solomon [14]. In particular, Theorems 2.1 and 2.2 below completely describe the transience/recurrence behavior and the Law of Large Numbers behavior of $\{X_n\}$. We follow the notation of Alili [1] and first give the key quantities that appear in these theorems. Define
\[
\sigma_i = \sigma_i(u) = \frac{\beta_i(u)}{\alpha_i(u)}, \quad i \in \mathbb{Z}, \tag{2.2}
\]
and let
\[
S = 1 + \sigma_1 + \sigma_1 \sigma_2 + \sigma_1 \sigma_2 \sigma_3 + \cdots \tag{2.3}
\]
and
\[
F = 1 + \frac{1}{\sigma_{-1}} + \frac{1}{\sigma_{-1}\sigma_{-2}} + \frac{1}{\sigma_{-1}\sigma_{-2}\sigma_{-3}} + \cdots. \tag{2.4}
\]
Theorem 2.1. (Theorem 2.1 in [1])
1. If $\mathbb{E}[\ln \sigma_0] < 0$, then almost surely $\lim_{n\to\infty} X_n = \infty$.
2. If $\mathbb{E}[\ln \sigma_0] > 0$, then almost surely $\lim_{n\to\infty} X_n = -\infty$.
3. If $\mathbb{E}[\ln \sigma_0] = 0$, then almost surely $\liminf_{n\to\infty} X_n = -\infty$ and $\limsup_{n\to\infty} X_n = \infty$.

Theorem 2.2. (Theorem 4.1 in [1])
1. If $\mathbb{E}[S] < \infty$, then almost surely
\[
\lim_{n\to\infty} \frac{X_n}{n} = \frac{1}{\mathbb{E}[(1 + \sigma_0)S]} = \frac{1}{2\mathbb{E}[S] - 1}.
\]
2. If $\mathbb{E}[F] < \infty$, then almost surely
\[
\lim_{n\to\infty} \frac{X_n}{n} = -\frac{1}{\mathbb{E}[(1 + \sigma_0^{-1})F]} = -\frac{1}{2\mathbb{E}[F] - 1}.
\]
3. If $\mathbb{E}[S] = \infty$ and $\mathbb{E}[F] = \infty$, then almost surely $\lim_{n\to\infty} X_n/n = 0$.

Note that we have added the second equalities in statements 1. and 2. of Theorem 2.2. These follow directly from the stationarity of $U$.

We will call $\lim_{n\to\infty} X_n/n$ the drift of the process $\{X_n\}$, and denote it by $V$. Note that, as mentioned in the introduction, it is possible for the chain to be transient with drift 0 (namely when $\mathbb{E}[\ln \sigma_0] \neq 0$, $\mathbb{E}[S] = \infty$ and $\mathbb{E}[F] = \infty$).

2.2 Swap model
We next focus on what we will call swap models, as studied by Sinai [13]. Here, $\mathcal{U} = \{-1, 1\}$; that is, we assume that all elements $U_i$ of the process $U$ take value either $-1$ or $+1$. We assume that the transition probabilities in state $i$ only depend on $U_i$ and not on other elements of $U$, as follows. When $U_i = -1$, the transition probabilities of $\{X_n\}$ from state $i$ to states $i + 1$ and $i - 1$ are swapped with respect to the values they have when $U_i = +1$. Thus, for some fixed value $p$ in $(0, 1)$ we let $\alpha_i(u) = p$ (and $\beta_i(u) = 1 - p$) if $u_i = 1$, and $\alpha_i(u) = 1 - p$ (and $\beta_i(u) = p$) if $u_i = -1$. Thus, (2.1) becomes
\[
\mathbb{P}(X_{n+1} = i + 1 \mid X_n = i, U = u) =
\begin{cases} p & \text{if } u_i = 1 \\ 1 - p & \text{if } u_i = -1 \end{cases}
\]
and
\[
\mathbb{P}(X_{n+1} = i - 1 \mid X_n = i, U = u) =
\begin{cases} 1 - p & \text{if } u_i = 1 \\ p & \text{if } u_i = -1. \end{cases}
\]
Next, we choose a dependence structure for $U$ using the following simple, but novel, construction. Let $\{Y_i,\, i \in \mathbb{Z}\}$ be a stationary and ergodic Markov chain taking values in some finite set $M$ and let $g : M \to \{-1, 1\}$ be a given function. Now define the environment at state $i$ as $U_i = g(Y_i)$, $i \in \mathbb{Z}$. Despite its simplicity, this formalism covers a number of interesting dependence structures on $U$, discussed next.
iid environment. In this case the $\{U_i\}$ are i.i.d. random variables, with $\alpha \stackrel{\text{def}}{=} \mathbb{P}(U_i = 1) = 1 - \mathbb{P}(U_i = -1)$. Formally, this fits the framework above by choosing $g$ the identity function on $M = \{-1, 1\}$ and $\{Y_i\}$ the Markov chain with one-step transition probabilities $\mathbb{P}(Y_i = 1 \mid Y_{i-1} = 1) = \mathbb{P}(Y_i = 1 \mid Y_{i-1} = -1) = \alpha$ for all $i$.
k-dependent environment. Define a $k$-dependent environment as an environment $\{U_i\}$ for which
\[
\begin{aligned}
&\mathbb{P}(U_i = u_i \mid U_{i-1} = u_{i-1}, U_{i-2} = u_{i-2}, \ldots) \\
&\quad = \mathbb{P}(U_i = u_i \mid U_{i-1} = u_{i-1}, \ldots, U_{i-k} = u_{i-k}), \quad u_j \in \{-1, 1\}.
\end{aligned} \tag{2.5--2.6}
\]
Special cases are the independent environment ($k = 0$; see above) and the so-called Markovian environment ($k = 1$). For $k > 1$, let $\{Y_i,\, i \in \mathbb{Z}\}$ be a Markov chain that takes values in $M = \{-1, 1\}^k$ such that from any state $(u_{i-k}, \ldots, u_{i-1})$ only two possible transitions can take place, given by
\[
(u_{i-k}, \ldots, u_{i-1}) \to (u_{i-k+1}, \ldots, u_{i-1}, u_i), \quad u_i \in \{-1, 1\},
\]
with corresponding probabilities $1 - a_{(u_{i-k},\ldots,u_{i-2})}$, $a_{(u_{i-k},\ldots,u_{i-2})}$, $b_{(u_{i-k},\ldots,u_{i-2})}$, and $1 - b_{(u_{i-k},\ldots,u_{i-2})}$, for $(u_{i-1}, u_i)$ equal to $(-1, -1)$, $(-1, 1)$, $(1, -1)$, and $(1, 1)$, respectively. Now define $U_i$ as the last component of $Y_i$. Then $\{U_i,\, i \in \mathbb{Z}\}$ is a $k$-dependent environment, and $Y_i = (U_{i-k+1}, \ldots, U_i)$. In the special case $k = 1$ (Markovian environment), we omit the subindices of $a$ (transition probability from $U_{i-1} = -1$ to $U_i = +1$) and $b$ (from $U_{i-1} = +1$ to $U_i = -1$).
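For $k = 2$ this construction gives a four-state chain on the pairs $(U_{i-1}, U_i)$, whose transition matrix can be assembled directly from the parameters $a$ and $b$ above. A Python sketch (the dictionary-based encoding of $a$ and $b$ is ours):

```python
import numpy as np

def two_dependent_chain(a, b):
    """4x4 transition matrix of Y_i = (U_{i-1}, U_i) for a 2-dependent
    environment.  a[v] = P(U_i = +1 | U_{i-1} = -1, U_{i-2} = v) and
    b[v] = P(U_i = -1 | U_{i-1} = +1, U_{i-2} = v), with v in {-1, +1}."""
    states = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    idx = {s: k for k, s in enumerate(states)}
    P = np.zeros((4, 4))
    for (v, w) in states:
        if w == -1:                                 # previous site was -1
            P[idx[(v, w)], idx[(w, 1)]] = a[v]
            P[idx[(v, w)], idx[(w, -1)]] = 1 - a[v]
        else:                                       # previous site was +1
            P[idx[(v, w)], idx[(w, -1)]] = b[v]
            P[idx[(v, w)], idx[(w, 1)]] = 1 - b[v]
    return P
```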
Moving average environment. Consider a "moving average" environment, which is built up in two phases as follows. First, start with an iid environment $\{\hat{U}_i\}$ as in the iid case, with $\mathbb{P}(\hat{U}_i = 1) = \alpha$. Let $Y_i = (\hat{U}_i, \hat{U}_{i+1}, \hat{U}_{i+2})$. Hence, $\{Y_i\}$ is a Markov process with states $1 = (-1, -1, -1)$, $2 = (-1, -1, 1)$, \ldots, $8 = (1, 1, 1)$ (lexicographical order). The corresponding transition matrix is given by
\[
P =
\begin{pmatrix}
1-\alpha & \alpha & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1-\alpha & \alpha & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1-\alpha & \alpha & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1-\alpha & \alpha \\
1-\alpha & \alpha & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1-\alpha & \alpha & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1-\alpha & \alpha & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1-\alpha & \alpha
\end{pmatrix}. \tag{2.7}
\]
Now define $U_i = g(Y_i)$, where $g(Y_i) = 1$ if at least two of the three random variables $\hat{U}_i$, $\hat{U}_{i+1}$ and $\hat{U}_{i+2}$ are 1, and $g(Y_i) = -1$ otherwise. Thus,
\[
(g(1), \ldots, g(8)) = (-1, -1, -1, 1, -1, 1, 1, 1), \tag{2.8}
\]
and we see that each $U_i$ is obtained by taking the moving average of $\hat{U}_i$, $\hat{U}_{i+1}$ and $\hat{U}_{i+2}$, as illustrated in Figure 1.
Figure 1: Moving average environment.
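Rather than typing the 8 × 8 matrix (2.7) by hand, both P and the majority map g of (2.8) can be generated programmatically. A Python sketch (the state encoding follows the lexicographical order in the text; the function name is ours):

```python
import numpy as np
from itertools import product

def moving_average_env(alpha):
    """Transition matrix P of Y_i = (U^_i, U^_{i+1}, U^_{i+2}) and the
    majority map g for the moving average environment.  States are the
    8 triples over {-1, +1} in lexicographical order."""
    states = list(product([-1, 1], repeat=3))   # state 1 = (-1,-1,-1), ...
    idx = {s: k for k, s in enumerate(states)}
    P = np.zeros((8, 8))
    for (u1, u2, u3) in states:
        # the window shifts by one; a fresh site is +1 with prob. alpha
        P[idx[(u1, u2, u3)], idx[(u2, u3, -1)]] = 1 - alpha
        P[idx[(u1, u2, u3)], idx[(u2, u3, 1)]] = alpha
    # majority vote over the triple, as in (2.8)
    g = np.array([1 if sum(s) > 0 else -1 for s in states])
    return P, g
```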
3 Evaluating the drift
In this section we first give the general solution approach for the Markov-based swap model, and then further specify the transience/recurrence and drift results to the Markov environment, the 2-dependent environment, and the moving average environment. We omit a separate derivation for the i.i.d. environment, which can be viewed as a special case of the Markovian environment; see Remark 3.1.
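One way to operationalize this general solution approach (our own sketch, not the authors' closed-form derivation) is to evaluate E[S] and E[F] of Theorem 2.2 as matrix geometric series over the environment chain: with D = diag(σ), E[S] = 1 + πᵀ(I − DP)⁻¹D1 whenever the Perron–Frobenius eigenvalue of DP is below 1, and analogously for E[F] with 1/σ in place of σ:

```python
import numpy as np

def drift_markov_env(P, sigma, pi=None):
    """Drift V of a swap-model RWRE whose environment is generated by a
    finite ergodic Markov chain with transition matrix P; sigma[y] is
    the per-state ratio beta(y)/alpha(y).  E[S] and E[F] of Theorem 2.2
    are summed as matrix geometric series, each converging exactly when
    the Perron-Frobenius eigenvalue of the corresponding D P is < 1."""
    P = np.asarray(P, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    m = len(sigma)
    if pi is None:
        # stationary distribution: left Perron eigenvector of P
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmax(np.real(w))])
        pi = pi / pi.sum()
    ones = np.ones(m)

    def series_mean(s):
        # E[1 + s_1 + s_1 s_2 + ...] = 1 + pi (I - D P)^{-1} D 1
        D = np.diag(s)
        if np.max(np.abs(np.linalg.eigvals(D @ P))) >= 1:
            return np.inf
        return 1.0 + pi @ np.linalg.solve(np.eye(m) - D @ P, D @ ones)

    ES = series_mean(sigma)        # governs transience to +infinity
    EF = series_mean(1.0 / sigma)  # governs transience to -infinity
    if np.isfinite(ES):
        return 1.0 / (2.0 * ES - 1.0)
    if np.isfinite(EF):
        return -1.0 / (2.0 * EF - 1.0)
    return 0.0
```

For the swap models above, σ equals (1 − p)/p on states y with g(y) = 1 and p/(1 − p) otherwise; taking P with identical rows recovers the iid environment and Solomon's drift (1 − E[σ])/(1 + E[σ]).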


