Posted Content

A Consistent Test of Stationary Ergodicity

01 Feb 1993 - Research Papers in Economics (California Institute of Technology, Division of the Humanities and Social Sciences)
TL;DR: In this article, a statistical test of stationary-ergodicity is presented for known Markovian processes, making it applicable to testing models and algorithms, as well as to estimated time series processes when estimation error is ignored.
Abstract: A formal statistical test of stationary-ergodicity is developed for known Markovian processes. This makes it applicable to testing models and algorithms, as well as estimated time series processes ignoring the estimation error. The analysis is conducted by examining the asymptotic properties of the Markov operator on density space generated by the transition in the state space. The test is developed under the null of stationary-ergodicity, and it is shown to be consistent against the alternative of nonstationary-ergodicity. The test can be easily performed using any of a number of standard statistical and mathematical computer packages.
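The reference excerpts further down this page indicate that the original implementation relied on the two-sample Kolmogorov-Smirnov routines kstwo(), probks(), and sort() from Numerical Recipes [17]. As a rough sketch of the underlying idea, one can simulate a known Markov transition from widely separated initial conditions and compare the resulting long-run samples; the AR(1) transition, sample sizes, and the use of scipy.stats.ks_2samp below are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged illustration of a stationarity-ergodicity check: simulate a known
# Markov transition from two far-apart initial conditions and compare the
# long-run samples with a two-sample Kolmogorov-Smirnov test. This is a
# sketch of the general approach, not the statistic of the paper itself.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def simulate(x0, n, burn=500):
    """Simulate the toy Markov transition x' = 0.5*x + e, e ~ N(0, 1)."""
    x = x0
    path = np.empty(n + burn)
    for t in range(n + burn):
        x = 0.5 * x + rng.standard_normal()
        path[t] = x
    return path[burn:]  # discard the burn-in

a = simulate(x0=-50.0, n=2000)  # start far below the stationary mean
b = simulate(x0=+50.0, n=2000)  # start far above it

# Under stationary-ergodicity both paths sample the same invariant
# distribution, so the KS test should not reject. (Serial dependence makes
# nominal p-values only indicative in this crude form.)
stat, pval = ks_2samp(a, b)
print(f"KS statistic = {stat:.3f}, p-value = {pval:.3f}")
```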
Citations
Posted Content
TL;DR: In this article, the authors study a one-sector growth model with an externality in the production function and identify an income tax-subsidy schedule that supports the efficient allocation as the unique equilibrium outcome.
Abstract: We study a one-sector growth model which is standard except for the presence of an externality in the production function. The set of competitive equilibria is large. It includes constant equilibria, sunspot equilibria, cyclical and chaotic equilibria, and equilibria with deterministic or stochastic regime switching. The efficient allocation is characterized by constant employment and a constant growth rate. We identify an income tax-subsidy schedule that supports the efficient allocation as the unique equilibrium outcome. That schedule has two properties: (i) it specifies the tax rate to be an increasing function of aggregate employment, and (ii) earnings are subsidized when aggregate employment is at its efficient level. The first feature eliminates inefficient, fluctuating equilibria, while the second induces agents to internalize the externality.

174 citations

Journal ArticleDOI
TL;DR: In this article, the authors show how to consistently estimate ergodic models by simulated minimum distance techniques, both in a long-run equilibrium and during an adjustment phase, under a variety of conditions.
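A minimal sketch of simulated minimum distance estimation, the technique named in this TL;DR, is given below; the AR(1) "model", the matched moments, and the identity weighting matrix are toy assumptions rather than the authors' specification.

```python
# Minimal sketch of simulated minimum distance (SMD): pick the parameter
# whose simulated moments are closest to the observed moments. All modeling
# choices here are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def moments(x):
    """Mean, variance, and first autocovariance of a series."""
    m = x.mean()
    return np.array([m, x.var(), np.mean((x[1:] - m) * (x[:-1] - m))])

def simulate(theta, n=5000, seed=2):
    g = np.random.default_rng(seed)  # fixed seed: common random numbers
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + g.standard_normal()
    return x

observed = simulate(0.7, seed=3)     # stand-in for real data
m_obs = moments(observed)

def distance(theta):
    d = moments(simulate(theta)) - m_obs
    return d @ d                     # identity weighting matrix

res = minimize_scalar(distance, bounds=(-0.99, 0.99), method="bounded")
print("estimated theta:", res.x)     # should land near 0.7
```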

122 citations


Cites methods from "A Consistent Test of Stationary Ergodicity"

  • ...When an AB model is represented as a Markov process, the tests described in Domowitz and El-Gamal (1993) and Domowitz and El-Gamal (2001) can also be applied....


Posted Content
TL;DR: This work discusses practical issues in applying the Parameterized Expectations Approach (PEA) to solving non-linear stochastic dynamic models with rational expectations, and describes a Fortran program for implementing the algorithm.
Abstract: We discuss some practical issues related to the use of the Parameterized Expectations Approach (PEA) for solving non-linear stochastic dynamic models with rational expectations. This approach has been applied in models of macroeconomics, financial economics, economic growth, contract theory, etc. It turns out to be a convenient algorithm, especially when there is a large number of state variables and stochastic shocks in the conditional expectations. We discuss some practical issues having to do with the application of the algorithm, and we discuss a Fortran program for implementing the algorithm that is available through the internet. We discuss these issues in a battery of six examples.
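As a rough illustration of how PEA works (the cited Fortran program is the authors' reference implementation and is not reproduced here), the sketch below solves a stochastic growth model with log utility and full depreciation, a case chosen because a closed-form solution exists for checking. The exponentiated log-linear parameterization of the conditional expectation, the parameter values, and the damping scheme are all illustrative assumptions.

```python
# PEA sketch: parameterize the Euler-equation conditional expectation as
# exp(b0 + b1*log k + b2*log theta), simulate, regress the realized
# integrand on the states, and update the coefficients with damping.
import numpy as np

alpha, delta, rho, sigma = 0.33, 0.95, 0.90, 0.01   # illustrative values
T, damp = 2_000, 0.5
rng = np.random.default_rng(0)

log_theta = np.zeros(T)
for t in range(1, T):                      # fixed AR(1) productivity draws
    log_theta[t] = rho * log_theta[t - 1] + sigma * rng.standard_normal()
theta = np.exp(log_theta)

b = np.array([0.0, 0.0, 0.0])              # expectation coefficients
for it in range(200):
    k = np.empty(T + 1)
    c = np.empty(T)
    k[0] = (alpha * delta) ** (1.0 / (1.0 - alpha))  # steady-state start
    for t in range(T):
        psi = np.exp(b[0] + b[1] * np.log(k[t]) + b[2] * log_theta[t])
        y = theta[t] * k[t] ** alpha       # output (full depreciation)
        c[t] = min(1.0 / (delta * psi), 0.99 * y)    # Euler eq. + feasibility
        k[t + 1] = y - c[t]
    # realized one-period-ahead integrand of the conditional expectation
    e = alpha * theta[1:] * k[1:T] ** (alpha - 1.0) / c[1:]
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), log_theta[:T - 1]])
    b_new, *_ = np.linalg.lstsq(X, np.log(e), rcond=None)
    if np.max(np.abs(b_new - b)) < 1e-6:
        break
    b = (1.0 - damp) * b + damp * b_new

print("PEA coefficients:", b)
# With log utility and full depreciation the fixed point is known exactly:
print("closed form     :", np.array([-np.log((1 - alpha * delta) * delta),
                                     -alpha, -1.0]))
```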

74 citations

Posted Content
TL;DR: This paper illustrates the use of the nonparametric Wald-Wolfowitz test to detect stationarity and ergodicity in agent-based models and shows that with appropriate settings the tests can detect non-stationarity and non-ergodicity.
Abstract: This paper illustrates the use of the nonparametric Wald-Wolfowitz test to detect stationarity and ergodicity in agent-based models. A nonparametric test is needed because of the practical impossibility of understanding how the random component influences the emergent properties of many agent-based models. Nonparametric tests on real data often lack power; this problem is addressed by applying the Wald-Wolfowitz test to simulated data. The performance of the tests is evaluated using Monte Carlo simulations of a stochastic process with known properties, and it is shown that with appropriate settings the tests can detect non-stationarity and non-ergodicity. Knowing whether a model is ergodic and stationary is essential in order to understand its behavior and the real system it is intended to represent; quantitative analysis of the artificial data helps to acquire such knowledge.
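A minimal sketch of the two-sample Wald-Wolfowitz runs test applied to simulation output follows; the AR(1) data-generating processes and the early-window versus late-window comparison are illustrative assumptions, not the paper's experimental design.

```python
# Two-sample Wald-Wolfowitz runs test: pool and sort both samples, count
# runs of consecutive same-sample labels, and compare to the count expected
# when both samples come from the same distribution.
import numpy as np
from scipy.stats import norm

def runs_test(a, b):
    """Return (z, p-value) for the two-sample runs test."""
    labels = np.concatenate([np.zeros(len(a)), np.ones(len(b))])
    order = np.argsort(np.concatenate([a, b]), kind="stable")
    s = labels[order]
    runs = 1 + np.count_nonzero(s[1:] != s[:-1])
    n, m = len(a), len(b)
    mu = 2 * n * m / (n + m) + 1
    var = 2 * n * m * (2 * n * m - n - m) / ((n + m) ** 2 * (n + m - 1))
    z = (runs - mu) / np.sqrt(var)       # too few runs => samples differ
    return z, 2 * norm.sf(abs(z))

def simulate(n, phi=0.5, trend=0.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = trend * t + phi * x[t - 1] + rng.standard_normal()
    return x

stationary = simulate(2000)                 # stationary AR(1)
trending = simulate(2000, trend=0.001)      # slowly drifting alternative

for name, y in [("stationary", stationary), ("trending", trending)]:
    z, p = runs_test(y[:500], y[-500:])     # early window vs late window
    print(f"{name}: z = {z:.2f}, p = {p:.3f}")
```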

58 citations

Journal ArticleDOI
TL;DR: In this paper, the authors introduce a class of nonlinear data generating processes (DGPs) that are first-order Markov and can be represented as the sum of a linear and a bounded nonlinear component, corresponding to the linear concepts of integratedness and cointegratedness.
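A toy simulation of such a process may help fix ideas; the choice of tanh() as the bounded nonlinear component and the coefficients below are assumptions made purely for illustration.

```python
# Toy first-order Markov DGP that is the sum of a linear (random-walk)
# part and a bounded nonlinear part. The bounded term contributes at most
# +/-0.5 per step, so long-horizon behavior is dominated by the stochastic
# trend, mimicking an integrated process.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
x = np.zeros(T)
for t in range(1, T):
    linear = x[t - 1]                    # unit-root (integrated) component
    bounded = 0.5 * np.tanh(x[t - 1])    # bounded nonlinear component
    x[t] = linear + bounded + rng.standard_normal()

print("variance of increments:", np.var(np.diff(x)))
print("range of bounded term :", 0.5 * np.tanh(x).min(), 0.5 * np.tanh(x).max())
```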

57 citations


Cites background from "A Consistent Test of Stationary Ergodicity"

  • ...Recently, Domowitz and El-Gamal (1993, 1997) have proposed a test for the null hypothesis of ergodicity....


References
Journal ArticleDOI

11,285 citations


"A Consistent Test of Stationary Erg..." refers background in this paper

  • ...'s [17] subroutine kstwo(), with its accompanying subroutines probks() and sort()....


  • ...'s [17], subroutine ran1(), and we initialized...


Book
01 Jan 1953

10,512 citations

Journal ArticleDOI
TL;DR: This chapter reviews the main methods for generating random variables, vectors and processes in non-uniform random variate generation, and provides information on the expected time complexity of various algorithms before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.

3,304 citations


"A Consistent Test of Stationary Erg..." refers methods in this paper

  • ...This has been shown by Ahrens and Dieter [1], and used also in [8] to generate numbers from a particular f(x) = Σ_{i=0}^{k} α_i... Their algorithm generates a discrete random variable Z from the multinomial distribution with the probability vector p_0, ..., p_k, and then generates x as...


Book
16 Apr 1986
TL;DR: A survey of the main methods in non-uniform random variate generation can be found in this article, where the authors provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes and Markov chain methods.
Abstract: This is a survey of the main methods in non-uniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.

1. The main paradigms

The purpose of this chapter is to review the main methods for generating random variables, vectors and processes. Classical workhorses such as the inversion method, the rejection method and table methods are reviewed in section 1. In section 2, we discuss the expected time complexity of various algorithms, and give a few examples of the design of generators that are uniformly fast over entire families of distributions. In section 3, we develop a few universal generators, such as generators for all log concave distributions on the real line. Section 4 deals with random variate generation when distributions are indirectly specified, e.g., via Fourier coefficients, characteristic functions, the moments, the moment generating function, distributional identities, infinite series or Kolmogorov measures. Random processes are briefly touched upon in section 5. Finally, the latest developments in Markov chain methods are discussed in section 6. Some of this work grew from Devroye (1986a), and we are carefully documenting work that was done since 1986. More recent references can be found in the book by Hörmann, Leydold and Derflinger (2004).

Non-uniform random variate generation is concerned with the generation of random variables with certain distributions. Such random variables are often discrete, taking values in a countable set, or absolutely continuous, and thus described by a density. The methods used for generating them depend upon the computational model one is working with, and upon the demands on the part of the output. For example, in a RAM (random access memory) model, one accepts that real numbers can be stored and operated upon (compared, added, multiplied, and so forth) in one time unit. Furthermore, this model assumes that a source capable of producing an i.i.d. (independent identically distributed) sequence of uniform [0, 1] random variables is available. This model is of course unrealistic, but designing random variate generators based on it has several advantages: first of all, it allows one to disconnect the theory of non-uniform random variate generation from that of uniform random variate generation, and secondly, it permits one to plan for the future, as more powerful computers will be developed that permit ever better approximations of the model. Algorithms designed under finite approximation limitations will have to be redesigned when the next generation of computers arrives.

For the generation of discrete or integer-valued random variables, which includes the vast area of the generation of random combinatorial structures, one can adhere to a clean model, the pure bit model, in which each bit operation takes one time unit, and storage can be reported in terms of bits. Typically, one now assumes that an i.i.d. sequence of independent perfect bits is available. In this model, an elegant information-theoretic theory can be derived.
For example, Knuth and Yao (1976) showed that to generate a random integer X described by the probability distribution P{X = n} = p_n, n ≥ 1, any method must use an expected number of bits greater than the binary entropy of the distribution, ∑_n p_n log₂(1/p_n).
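The first two classical paradigms named in the survey, inversion and rejection, can be illustrated in a few lines; the target density f(x) = 2x on [0, 1] is chosen purely for the example.

```python
# Inversion and rejection sampling for the density f(x) = 2x on [0, 1].
import numpy as np

rng = np.random.default_rng(0)

# Inversion: F(x) = x^2, so F^{-1}(u) = sqrt(u).
u = rng.random(100_000)
x_inv = np.sqrt(u)

# Rejection: dominate f(x) = 2x with the constant envelope c*g(x), where
# g is Uniform(0, 1) and c = 2; accept a proposal y when u*c*g(y) <= f(y),
# i.e. with probability f(y)/(c*g(y)) = y.
def rejection_sample(n):
    out = []
    while len(out) < n:
        y = rng.random(n)            # proposals from g = Uniform(0, 1)
        u = rng.random(n)
        out.extend(y[u <= y])        # accept with probability y
    return np.array(out[:n])

x_rej = rejection_sample(100_000)
print("inversion mean:", x_inv.mean(), "(theory: 2/3)")
print("rejection mean:", x_rej.mean(), "(theory: 2/3)")
```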

3,217 citations