
Showing papers on "Central limit theorem published in 1968"



BookDOI
01 Jan 1968
TL;DR: This book covers the foundations of probability, laws of large numbers and random series, limiting distributions and the central limit problem, and the Brownian motion process.
Abstract: Foundations; Laws of Large Numbers and Random Series; Limiting Distributions and the Central Limit Problem; The Brownian Motion Process; Appendix; Bibliography; Index; List of Series Titles.

103 citations


Journal ArticleDOI
TL;DR: It is shown that the asymptotic almost sure equivalence of the standardized forms of a sample quantile and the empirical distribution function at the corresponding population quantile, known for a stationary independent process, extends to an $m$-dependent process that is not necessarily stationary.
Abstract: The usual technique of deriving the asymptotic normality of a quantile of a sample in which the random variables are all independent and identically distributed [cf. Cramer (1946), pp. 367-369] fails to provide the same result for an $m$-dependent (and possibly non-stationary) process, where the successive observations are not independent and the (marginal) distributions are not necessarily all identical. For this reason, the derivation of the asymptotic normality is approached here indirectly. It is shown that under certain mild restrictions, the asymptotic almost sure equivalence of the standardized forms of a sample quantile and the empirical distribution function at the corresponding population quantile, studied by Bahadur (1966) [see also Kiefer (1967)] for a stationary independent process, extends to an $m$-dependent process, not necessarily stationary. Conclusions about the asymptotic normality of sample quantiles then follow by utilizing this equivalence in conjunction with the asymptotic normality of the empirical distribution function under suitable restrictions. For this purpose, the results of Hoeffding (1963) and Hoeffding and Robbins (1948) are extensively used. Useful applications of the derived results are also indicated.
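
A quick way to see the Bahadur-type equivalence at work is a small simulation. The sketch below is not from the paper; the m-dependent process (moving sums of iid uniforms), the sample size, and the quantile level are our own choices. It checks that the standardized sample median and the standardized empirical distribution function at the population median are almost perfectly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

def m_dependent_sample(n, m=2):
    # moving sums of m+1 iid uniforms give a stationary m-dependent sequence
    z = rng.random(n + m)
    return np.convolve(z, np.ones(m + 1), mode="valid")

p, xi_p = 0.5, 1.5            # quantile level and population median of a sum of 3 U(0,1)
n, reps = 2000, 500
quantile_term, edf_term = [], []
for _ in range(reps):
    x = m_dependent_sample(n)
    quantile_term.append(np.sqrt(n) * (np.quantile(x, p) - xi_p))
    edf_term.append(np.sqrt(n) * (p - np.mean(x <= xi_p)))   # empirical d.f. at xi_p

# Bahadur-type equivalence: up to the density factor f(xi_p), the two standardized
# quantities should nearly coincide, so their correlation is close to 1.
print("correlation:", round(np.corrcoef(quantile_term, edf_term)[0, 1], 3))
```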

91 citations


Journal ArticleDOI
TL;DR: Recent work by N. Bleistein and F. Ursell on uniform asymptotic expansions is combined and extended to obtain an asymptotic series for the noncentral chi-square distribution that apparently holds over its entire range when the number of degrees of freedom becomes large.
Abstract: The noncentral chi-square distribution occurs in noise interference problems. When the number of degrees of freedom becomes large, the middle portion of the distribution is given by the central limit theorem, and the tails by a classical saddle point expansion. Here recent work by N. Bleistein and F. Ursell on “uniform” asymptotic expansions is combined and extended to obtain an asymptotic series which apparently holds over the entire range of the distribution. General methods for expanding saddle point integrals in uniform asymptotic series are discussed. Recurrence relations are given for the coefficients in two typical cases, (i) when there are two saddle points and (ii) when there is only one saddle point but it lies near a pole or a branch point.
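
As a rough numerical illustration of the regime the paper addresses (our own example, not taken from it), one can compare the exact noncentral chi-square upper-tail probability with the central-limit normal approximation when the number of degrees of freedom is large; the approximation is adequate near the mean and deteriorates in the tail, which is where saddle-point and uniform expansions are needed. The parameter values below are arbitrary.

```python
import numpy as np
from scipy.stats import ncx2, norm

# Noncentral chi-square with k degrees of freedom and noncentrality lam:
# mean = k + lam, variance = 2*(k + 2*lam).
k, lam = 200, 50.0
mean, var = k + lam, 2 * (k + 2 * lam)

for x in [mean, mean + 2 * np.sqrt(var), mean + 5 * np.sqrt(var)]:
    exact = ncx2.sf(x, k, lam)                         # exact upper-tail probability
    clt = norm.sf(x, loc=mean, scale=np.sqrt(var))     # central-limit approximation
    print(f"x={x:9.1f}  exact={exact:.3e}  CLT={clt:.3e}")
```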

50 citations



Journal ArticleDOI
James R. Bell
TL;DR: This algorithm belongs to a class of normal deviate generators the author calls "chi-squared projections"; it uses von Neumann rejection to generate the sine and cosine of a random angle without generating the angle explicitly, which significantly enhances speed by eliminating calls to the sin and cos functions.
Abstract: procedure norm (D1, D2); real D1, D2; comment This procedure generates pairs of independent normal random deviates with mean zero and standard deviation one. The output parameters D1 and D2 are normally distributed on the interval (−∞, +∞). The method is exact even in the tails. This algorithm is one of a class of normal deviate generators, which we shall call "chi-squared projections" [1, 2]. An algorithm of this class has two stages. The first stage selects a random number L from a χ₂²-distribution. The second stage calculates the sine and cosine of a random angle θ. The generated normal deviates are given by √L sin(θ) and √L cos(θ). The two stages can be altered independently. In particular, as better χ₂² random generators are developed, they can replace the first stage. (The negative exponential distribution is the same as that of χ₂².) The fastest exact method previously published is Algorithm 267 [4], which includes a comparison with earlier algorithms; it is a straight chi-squared projection. Our algorithm differs from it by using von Neumann rejection to generate the sine and cosine of the random angle without generating the angle explicitly [3]. This significantly enhances speed by eliminating the calls to the sin and cos functions. The author wishes to express his gratitude to Professor George Forsythe for his help in developing the algorithm. REFERENCES 1. Box, G., and Muller, M. A note on the generation of normal deviates. Ann. Math. Stat. 28 (1958), 610. 2. Muller, M. E. A comparison of methods for generating normal deviates on digital computers. J. ACM 6 (July 1959), 376-383. 3. von Neumann, J. Various techniques used in connection with random digits. In Nat. Bur. of Standards Appl. Math. Ser. 12, 1959, p. 36. 4. Pike, M. C. Algorithm 267, Random Normal Deviate. Comm. ACM 8 (Oct. 1965), 606.; comment R is any parameterless procedure returning a random number uniformly distributed on the interval from zero to one. A suitable procedure is given by Algorithm 266, Pseudo-Random Numbers [Comm. ACM 8 (Oct. 1965), 605] if one chooses a = 0, b = 1, and initializes y to some large odd number, such as y = 13421773.; begin real X, Y, XX, YY, S, L;
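
For readers who want the idea without the Algol, the following is a minimal Python sketch of a "chi-squared projection" generator in the spirit of the algorithm (our own transcription, not the published code): L is drawn from a chi-square distribution with 2 degrees of freedom as −2 ln U, and the sine and cosine of a random angle come from von Neumann rejection of points in the unit disc, so no trigonometric functions are called.

```python
import math
import random

def norm_pair(rand=random.random):
    """Return two independent N(0,1) deviates via a 'chi-squared projection'."""
    while True:
        x = 2.0 * rand() - 1.0
        y = 2.0 * rand() - 1.0
        s = x * x + y * y
        if 0.0 < s < 1.0:               # von Neumann rejection: keep points in the unit disc
            break
    L = -2.0 * math.log(1.0 - rand())   # chi-square with 2 degrees of freedom
    r = math.sqrt(L / s)
    # (x/sqrt(s), y/sqrt(s)) is (cos theta, sin theta) for a uniformly distributed angle
    return r * x, r * y

d1, d2 = norm_pair()
print(d1, d2)
```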

41 citations


Journal ArticleDOI
TL;DR: This paper presents a Monte Carlo study of the small-sample behavior of several distributed-lag estimators under conditions in which the disturbances of relation (2) are serially correlated.
Abstract: where u_t is a random disturbance with zero mean. Koyck pointed out that subtracting λy_{t-1} from (1) produced the equation y_t = A' + λy_{t-1} + bx_t + u_t' (2), where A' = A(1 − λ) and u_t' = u_t − λu_{t-1}. Thus instead of being forced to deal with the model in its distributed lag form (1), which involves the seemingly intractable task of estimating a relation with an infinite number of explanatory variables from a finite amount of data, we can estimate the parameters of the autoregressive form of the model given in equation (2). However, this apparent simplification is purchased only at a cost, for consistent estimation of relation (2) requires that we face several estimation problems associated with equations in which lagged dependent variables appear as explanatory variables. While ordinary least squares estimates of the parameters of (2) are consistent provided that the disturbances u_t' are serially independent and follow a distribution which satisfies the assumptions of the central limit theorem, even in this case a small sample bias exists. If the disturbances are serially dependent, an asymptotic bias exists. Moreover, the transformation from (1) to (2) has changed both the variance and the serial correlations of the disturbances. Hence, if the disturbances in (1) are serially independent, those in (2) are necessarily autocorrelated, which means that applying least squares to equation (2) yields inconsistent estimates of the parameters. In addition to ordinary least squares (OLS), several techniques for estimating such distributed lag relations are available. Generally these techniques have been recommended on the basis of their desirable asymptotic properties. However, for economists, who are forced to work in a world where data are scarce, asymptotic properties are frequently of little relevance. What is more often required is knowledge of the properties of the estimators in small samples. Unfortunately, it has proved difficult to investigate these properties analytically. In the absence of such results, sampling or "Monte Carlo" experiments provide an alternative, if less elegant, source of information. Accordingly, this paper presents the results of a "Monte Carlo" study of several lag estimators under conditions in which the disturbances u_t' of relation (2) are serially correlated. In addition to ordinary least squares, the following five methods were studied. 1) Two Stage Regression (TSLS): This is an application of Liviatan's instrumental variable approach. Liviatan [12] has suggested that x_{t-1} be used as an instrument for y_{t-1} in estimating relation (2). In order to increase the efficiency of the technique, we employed a linear combination of lagged x's as the instrument. The linear combination was determined by first estimating the equation
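
The estimation problem the paper studies can be reproduced in a few lines of simulation. The sketch below is our own illustration, not the paper's experiment; the parameter values, the AR(1) process for x_t, and the sample size are arbitrary assumptions. It generates data from the autoregressive form (2) with the MA(1) disturbances implied by the Koyck transformation and compares ordinary least squares with a Liviatan-type instrumental-variable estimator that instruments y_{t-1} with x_{t-1}.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n=100, lam=0.6, b=1.0, A=1.0, rho_x=0.7):
    """Koyck model in autoregressive form with MA(1) disturbances u'_t = u_t - lam*u_{t-1}."""
    u = rng.standard_normal(n + 1)
    u_prime = u[1:] - lam * u[:-1]
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho_x * x[t - 1] + rng.standard_normal()
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = A * (1 - lam) + lam * y[t - 1] + b * x[t] + u_prime[t]
    return y, x

def estimates(y, x):
    Y, ylag, xt, xlag = y[2:], y[1:-1], x[2:], x[1:-1]
    ones = np.ones_like(Y)
    Z = np.column_stack([ones, ylag, xt])          # regressors of equation (2)
    ols = np.linalg.lstsq(Z, Y, rcond=None)[0][1]
    W = np.column_stack([ones, xlag, xt])          # instruments: x_{t-1} replaces y_{t-1}
    iv = np.linalg.solve(W.T @ Z, W.T @ Y)[1]
    return ols, iv

res = np.array([estimates(*simulate()) for _ in range(2000)])
print("true lambda = 0.6   mean OLS =", res[:, 0].mean().round(3),
      "  mean IV =", res[:, 1].mean().round(3))
```

In repeated samples the OLS estimate of λ drifts away from the true value because y_{t-1} is correlated with the MA(1) disturbance, while the instrumental-variable estimate centres much closer to it; this is the kind of inconsistency the abstract describes.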

23 citations


Journal ArticleDOI
TL;DR: Bounds on $|P_n(B) - \Phi(B)|$ are sought that hold uniformly over classes of Borel sets whose boundaries have small $\Phi$-measure neighborhoods, the natural setting for rates of convergence in the multidimensional central limit theorem.
Abstract: where c is a universal constant. Consider now a sequence $\{X^{(n)} = (X_1^{(n)}, \cdots, X_k^{(n)})\}$ of independent and identically distributed random vectors in $R^k$, each with mean vector $(0, \cdots, 0)$ and covariance matrix $I$, the $k \times k$ identity matrix. If $P_n$ denotes the probability distribution of $(X^{(1)} + \cdots + X^{(n)})/n^{1/2}$ and $\Phi$ is the standard $k$-dimensional normal distribution, then it is well known that $P_n$ converges weakly to $\Phi$ as $n \rightarrow \infty$. Bergström [1] has extended (1) to this case, assuming finiteness of absolute third moments of the components of $X^{(1)}$. Since weak convergence of a sequence $Q_n$ of probability measures to $\Phi$ means that $Q_n(B) \rightarrow \Phi(B)$ for every Borel set $B$ satisfying $\Phi(\partial B) = 0$, $\partial B$ being the boundary of $B$, it seems natural to seek bounds on $|P_n(B) - \Phi(B)|$ for such sets $B$ (called $\Phi$-continuity sets). Let $\mathscr{C}$ be a class of Borel sets such that, whatever be the sequence $Q_n$ converging weakly to $\Phi$, $Q_n(B) \rightarrow \Phi(B)$ as $n \rightarrow \infty$ uniformly for all $B \in \mathscr{C}$. Such a class is called a $\Phi$-uniformity class. By a theorem of Billingsley and Topsøe [3], a class $\mathscr{C}$ is a $\Phi$-uniformity class if and only if $\sup\{\Phi((\partial B)^\epsilon) : B \in \mathscr{C}\} \downarrow 0$ as $\epsilon \downarrow 0$, where $(\partial B)^\epsilon$ is the $\epsilon$-neighborhood of $\partial B$. This leads one naturally to consider the class $\mathscr{C}(d, \epsilon_0)$ of all Borel sets $B$ for which $\Phi((\partial B)^\epsilon) \leqq d\epsilon$ for $0 < \epsilon \leqq \epsilon_0$.
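
A toy Monte Carlo illustration of the uniformity idea (ours, not the paper's; the paper's classes of sets are far richer than the finite family of centred balls used here): for standardized sums of iid random vectors with identity covariance in $R^2$, the worst error $|P_n(B) - \Phi(B)|$ over a small family of $\Phi$-continuity sets shrinks as $n$ grows.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
k, reps = 2, 100000
radii = [0.5, 1.0, 1.5, 2.0]    # a small family of centred balls, all Phi-continuity sets

for n in [2, 8, 32]:
    # standardized sums of iid (uniform - 1/2)*sqrt(12) vectors: mean 0, identity covariance
    x = (rng.random((reps, n, k)) - 0.5) * np.sqrt(12)
    s = x.sum(axis=1) / np.sqrt(n)
    r2 = (s ** 2).sum(axis=1)
    # for the standard bivariate normal, Phi(ball of radius r) = chi2(2).cdf(r^2)
    sup_err = max(abs(np.mean(r2 <= r * r) - chi2.cdf(r * r, df=k)) for r in radii)
    print(f"n={n:3d}  sup over the ball family of |P_n(B) - Phi(B)| ~ {sup_err:.4f}")
```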

20 citations



Journal ArticleDOI
TL;DR: It is argued that the Gaussian central limit theorem cannot apply to the instants of transaction, and that if the interdependence between addends is sufficiently strong (but not unrealistically strong), their weighted sum will not converge to a Gaussian limit.
Abstract: REGRETTABLY, Mr. C. W. J. Granger's argument (see Section 3 of the preceding paper) does not suffice to "save" the central limit theorem with a Gaussian limit. It only shifts the point at which a non-Gaussian limit theorem is required. To represent price data correctly, the instants of transaction must indeed be such that Granger's function N(t) remains a widely scattered random variable even when t is large. To account for my observation that Z(t + T) − Z(t) has infinite variance, N(t) must even have an infinite expectation. But N(t)/E[N(t)] rapidly tends to one whenever the Gaussian central limit theorem applies to the instants of transaction. Thus, those instants cannot satisfy the Gaussian central limit theorem. A process for which N(t) remains widely scattered even when t is large, studied by Howard M. Taylor and myself [6], is no less "strange" than the infinite-variance random processes. Economists interested in applying the Gaussian central limit theorem to dependent variables may ponder the following, still timely, quotation from U. Grenander and M. Rosenblatt [1, p. 181]: "... the experimentalist would argue that in most physically realizable situations where a stationary process has been observed during a time interval long compared to time lags for which correlation is appreciable, the average of the sample would be asymptotically normally distributed. ... Unfortunately none of the extensions of the central limit theorem to dependent variables seems to answer this problem in terms well adapted for practical interpretation." I may add that, if the interdependence between addends is sufficiently strong (but not unrealistically strong), their weighted sum will not converge to a Gaussian limit. See for example my paper [5], where I describe a process with a Gaussian marginal distribution whose long-term average is not Gaussian but stable Paretian. Such was shown, in [2], to be the case for prices.
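
The contrast Mandelbrot draws can be seen in a few lines of simulation (our own illustration, with arbitrary choices of distribution and sample size): averages of finite-variance increments behave as the Gaussian central limit theorem predicts, while averages of infinite-variance increments (Student t with 1.5 degrees of freedom here, standing in for a stable Paretian-type tail) keep producing extreme outliers no matter how long the averaging window is.

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(3)
n, reps = 1000, 5000

# finite-variance increments: the classical Gaussian central limit theorem applies
light = rng.standard_normal((reps, n)).mean(axis=1)
# infinite-variance increments: Student t with 1.5 df (finite mean, infinite variance)
heavy = student_t.rvs(df=1.5, size=(reps, n), random_state=rng).mean(axis=1)

for name, avg in [("finite-variance increments", light),
                  ("infinite-variance increments", heavy)]:
    iqr = np.percentile(avg, 75) - np.percentile(avg, 25)
    far = np.mean(np.abs(avg - np.median(avg)) > 5 * iqr)
    print(f"{name}: share of averages more than 5 IQRs from the median = {far:.4f}")
```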

5 citations



Journal ArticleDOI
TL;DR: Conditions are given under which p_n(θ) is approximately normally distributed when n is large and θ, a measure of the average step size of the process, is small; the result is applied to learning models in which θ is a learning rate parameter and p_n(θ) is the probability that the subject makes a certain response on the nth experimental trial.
Abstract: Let θ > 0 be a measure of the average step size of a stochastic process {p_n(θ)}_{n≥1}. Conditions are given under which p_n(θ) is approximately normally distributed when n is large and θ is small. This result is applied to a number of learning models where θ is a learning rate parameter and p_n(θ) is the probability that the subject makes a certain response on the nth experimental trial. Both linear and stimulus sampling models are considered.
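
A minimal simulation of the linear-operator case (our own parameter choices; the paper's conditions and models are more general) shows the effect: with a small learning rate θ and many trials, the response probabilities p_n across replicated subjects look approximately normal around the reinforcement rate.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(4)
theta, pi_reinforce, n, reps = 0.02, 0.7, 2000, 500

p = np.full(reps, 0.5)                        # initial response probability for each subject
for _ in range(n):
    reinforce = rng.random(reps) < pi_reinforce
    p = (1 - theta) * p + theta * reinforce   # linear-operator learning update

print("mean of p_n:", p.mean().round(3), " (reinforcement rate:", pi_reinforce, ")")
print("Shapiro-Wilk p-value (normality not rejected if large):", round(shapiro(p).pvalue, 3))
```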

Journal ArticleDOI
TL;DR: For n totally independent random variables x_1, ..., x_n, not necessarily identically distributed, with var(x_i) = M_i, a probability inequality is derived whose bound is an explicit function of k (k < 1) and of two quantities r and B formed from the M_i, and a table is provided for easy numerical calculation of the bound.
Abstract: SUMMARY This paper shows that if x_1, x_2, ..., x_n are n totally independent random variables, not necessarily identically distributed, with var(x_i) = M_i, then a deviation probability involving k is less than or equal to an explicit function of k (k < 1), of a ratio r formed from the M_i, and of B = ΣM_i / max(M_i). A table is provided which allows easy calculation of the numerical value of the inequality.



Journal ArticleDOI
TL;DR: Two further extensions of Rosen's theorem to independent but non-identically distributed random variables are given, under hypotheses different from Heyde's.
Abstract: In [5], B. Rosen showed that if $\{X_k: k = 1,2, \cdots\}$ is an independent sequence of identically distributed random variables with $EX_k = 0$ and $\operatorname{Var} X_k = \sigma^2, 0 < \sigma^2 < \infty$ and if $S_n = X_1 + \cdots + X_n$, then the series $\sum^\infty_{n=1} n^{-1} (P(S_n < 0) - \frac{1}{2})$ is absolutely convergent. This theorem was motivated by a result of Spitzer [6] who, under the same conditions, established the convergence of this series as a corollary to a result in the theory of random walks. Rosen's theorem was generalized by Baum and Katz [1] who showed that if $EX_k = 0$ and $E|X_k|^{2+\alpha} < \infty$ for $0 \leqq \alpha < 1$ then $\sum^\infty_{n=1}n^{-(1-\alpha/2)} |P(S_n < 0) - \frac{1}{2}| < \infty.$ These results led to the study of series convergence rate criteria for the central limit theorem and a partial solution of this problem was obtained for the case of identically distributed random variables in [2]. A more complete solution has been recently obtained by Heyde [4]. The first study of series convergence rates for $P(S_n < 0)$ in the case of independent but non-identically distributed random variables was made by Heyde [3]. Based on an extension of Rosen's theorem utilizing certain uniform bounds on the characteristic function of the $X_k$'s he concluded the absolute convergence of the series $\sum^\infty_{n=1}n^{-(1-\alpha/2)} (P(S_n < n^px) - \frac{1}{2})$ for $- \infty < x < \infty$ and $0 \leqq p < \frac{1}{2}(1 - \alpha), 0 \leqq \alpha < 1,$ thus obtaining what he termed small deviation convergence rates. In the present paper two more extensions of Rosen's theorem to independent but non-identically distributed random variables are given under different hypotheses than Heyde's. The first (Theorem 1) reduces to Rosen's theorem in the case of identically distributed random variables. The second (Theorem 2) results in a theorem similar to that of Baum and Katz [1] as required in Heyde's small deviation result. This will make it possible to obtain his conclusion by simply carrying out the last step in his proof. These results are obtained in Section 3. In Section 2 some preliminary results are stated and examples are given in Section 4 to show that the first two hypotheses of Theorem 1 cannot, in general, be relaxed.
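
The quantity whose series behaviour these theorems control, $P(S_n < 0) - \frac{1}{2}$, is easy to estimate by simulation. The sketch below is ours; the centred exponential summands and the grid of n are arbitrary. For skewed summands the deviation shrinks roughly like $n^{-1/2}$, so the terms $n^{-1}(P(S_n < 0) - \frac{1}{2})$ are summable, which is the pattern behind Spitzer- and Rosen-type results.

```python
import numpy as np

rng = np.random.default_rng(5)
reps = 100000

# centred, skewed summands: exponential(1) minus its mean, so E X = 0 and Var X = 1
for n in [1, 4, 16, 64]:
    s = (rng.exponential(1.0, size=(reps, n)) - 1.0).sum(axis=1)
    dev = np.mean(s < 0) - 0.5
    print(f"n={n:3d}   P(S_n < 0) - 1/2  ~  {dev:+.4f}")
```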

Journal ArticleDOI
TL;DR: Treating the displacement in a generalized random walk of isotropically scattering neutrons as a random variable, the probability density function φ_n(x) after n steps is obtained by means of the statistical characteristic function, and the total neutron distribution φ(x) = Σ_{n=0}^∞ φ_n(x) is expressed as a Fourier transform of the power series of this characteristic function.
Abstract: Considering the displacement in the generalized random walk process of isotropically scattering neutrons as a random variable, the probability density function φ_n(x) after n steps is obtained through the use of the statistical characteristic function. The total neutron distribution φ(x) = Σ_{n=0}^∞ φ_n(x) is expressed as a Fourier transform of the power series of this characteristic function. Using this statistical method, the total distributions φ(x) are shown by means of concrete examples to agree with the solutions of the relevant linear Boltzmann equations which have already been solved. The one-dimensional space-angle dependent case is examined for a system containing random empty gaps. To evaluate φ_n(x) for large n (n ≫ 1), the central limit theorem is used.
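
The characteristic-function construction can be mimicked in one dimension (a schematic sketch of ours, not the paper's neutron model; the step density, grid, and number of steps are arbitrary): the n-step density φ_n is recovered as an inverse Fourier transform of the nth power of the single-step characteristic function, and for large n it is close to the normal density predicted by the central limit theorem.

```python
import numpy as np

# One-dimensional sketch: single-step density f (uniform on [-1, 1]);
# phi_n, the n-fold convolution of f, is computed via the characteristic function
# as an inverse FFT of its nth power.
N, L = 2 ** 14, 200.0                        # grid points and spatial half-width
x = np.linspace(-L, L, N, endpoint=False)
dx = x[1] - x[0]
f = np.where(np.abs(x) <= 1.0, 0.5, 0.0)     # step density, variance 1/3

f_hat = np.fft.fft(np.fft.ifftshift(f)) * dx          # characteristic function on the grid
n = 30
phi_n = np.fft.fftshift(np.fft.ifft(f_hat ** n)).real / dx

sigma2 = n / 3.0                             # variance of the walk after n steps
gauss = np.exp(-x ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
print("max |phi_n - normal density| =", np.abs(phi_n - gauss).max())
```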

Journal ArticleDOI
Lionel Weiss
TL;DR: It is shown that the asymptotic joint distribution of homogeneous functions of independent exponential random variables can be derived directly from their joint probability density function.
Abstract: $X_1, \ldots, X_n$ are independent random variables, identically distributed over the unit interval, with common probability density function $1 + r(x)/n^\delta$ for all sufficiently large $n$, where $\delta$ is a positive constant, $\int_0^1 r(x)\,dx = 0$ and $|r''(x)|$