
Showing papers on "Section (fiber bundle) published in 1970"


Journal ArticleDOI
TL;DR: In this paper, the authors derive the "ergodic" definition of "near a maximum of height u" and analyze the sample functions near local maxima of height u, especially as u → -∞, mainly using methods similar to those of [4] and [11].
Abstract: Consider a stationary normal process ξ(t) with mean zero and the covariance function r(t). Properties of the sample functions in the neighborhood of zeros, upcrossings of very high levels, etc. have been studied by, among others, Kac and Slepian, 1959 [4] and Slepian, 1962 [11]. In this paper we shall study the sample functions near local maxima of height u, especially as u → -∞, and mainly use similar methods as [4] and [11]. Then it is necessary to analyse carefully what is meant by "near a maximum of height u." In Section 2 we derive the "ergodic" definition, i.e. the definition which is possible to interpret by the aid of relative frequencies in a single realisation. This definition has been treated previously by Leadbetter, 1966 [5], and it turns out to be related to Kac and Slepian's horizontal window definition. In Section 3 we give a representation of ξ(t) near a maximum as the difference between a non-stationary normal process and a deterministic process, and in Section 4 we examine these processes as u → -∞. We have then to distinguish between two cases. A: Regular case. r(t) = 1 - λ₂t²/2 + λ₄t⁴/4! - λ₆t⁶/6! + o(t⁶) as t → 0, where the positive λ₂ₖ are the spectral moments. Then it is proved that if ξ(t) has a maximum of height u at t = 0 then, as u → -∞, \begin{align*}(\lambda_2\lambda_6 - \lambda_4^2)(\lambda_4 - \lambda_2^2)^{-1}\{\xi((\lambda_2\lambda_6 - \lambda_4^2)^{-\frac{1}{2}}(\lambda_4 - \lambda_2^2)^{\frac{1}{2}}t|u|^{-1}) - u\} \\ \sim |u|^{-3}\{t^4/4! + \omega(\lambda_4 - \lambda_2^2)^{\frac{1}{2}}\lambda_2^{-\frac{1}{2}}t^3/3! - \zeta(\lambda_4 - \lambda_2^2)\lambda_2^{-1}t^2/2\}\end{align*} where ω and ζ are independent random variables (rv), ω has a standard normal distribution and ζ has the density z exp(-z), z > 0. Thus, in the neighborhood of a very low maximum the sample functions are fourth degree polynomials with positive t⁴-term, symmetrically distributed t³-term, and a negatively distributed t²-term but without t-term. B: Irregular case.
r(t) = 1 - λ₂t²/2 + λ₄t⁴/4! - λ₅|t|⁵/5! + o(t⁵) as t → 0, where λ₅ > 0. Now ξ(tu⁻²) - u ∼ |u|⁻⁵{λ₂λ₅(λ₄ - λ₂²)⁻¹|t|³/3! + (2λ₅)^{1/2}ω(t) - ζ(λ₄ - λ₂²)λ₂⁻¹t²/2} where ω(t) is a non-stationary normal process whose second derivative is a Wiener process, independent of ζ which has the density z exp(-z), z > 0. The term λ₅|t|⁵/5! "disturbs" the process in such a way that the order of the distance which can be surveyed is reduced from 1/|u| (in Case A) to 1/|u|². The results are used in Section 5 to examine the distribution of the wave-length and the crest-to-trough wave-height, i.e., the amplitude, discussed by, among others, Cartwright and Longuet-Higgins, 1956 [1]. One hypothesis, sometimes found in the literature, [10], states that the amplitude has a Rayleigh distribution and is independent of the mean level. According to this hypothesis the amplitude is of the order 1/|u| as u → -∞ while the results of this paper show that it is of the order 1/|u|³.
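The Case A limit can be sampled directly. A minimal Python sketch (the covariance r(t) = exp(-t²/2) with spectral moments λ₂ = 1, λ₄ = 3 is an assumed concrete choice, not taken from the paper) draws the random coefficients ω and ζ of the limiting fourth-degree polynomial:

```python
import math, random

# Sketch (not the paper's derivation): sample the limiting polynomial
# t -> t^4/4! + omega*c*t^3/3! - zeta*d*t^2/2 of Case A, for the assumed
# covariance r(t) = exp(-t^2/2), i.e. lambda_2 = 1, lambda_4 = 3.
lam2, lam4 = 1.0, 3.0
c = math.sqrt(lam4 - lam2**2) / math.sqrt(lam2)  # coefficient of the t^3 term
d = (lam4 - lam2**2) / lam2                      # coefficient of the t^2 term

rng = random.Random(0)

def limit_polynomial():
    """One draw of the limiting fourth-degree polynomial."""
    omega = rng.gauss(0.0, 1.0)                  # standard normal
    # zeta has density z*exp(-z): a sum of two independent Exp(1) variables
    zeta = -math.log(rng.random()) - math.log(rng.random())
    return omega, zeta, lambda t: t**4/24 + omega*c*t**3/6 - zeta*d*t**2/2

draws = [limit_polynomial() for _ in range(10000)]
# No t-term: p(0) = p'(0) = 0 and p''(0) = -zeta*d < 0, so t = 0 is a
# local maximum of every realisation, as the representation requires.
assert all(z > 0 for _, z, _ in draws)
_, _, p = draws[0]
assert p(0.0) == 0.0
mean_zeta = sum(z for _, z, _ in draws) / len(draws)  # density z*exp(-z) has mean 2
```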

246 citations


Journal ArticleDOI
TL;DR: In this paper, given a complex analytic manifold X and a submanifold M of codimension r ≥ 2, the authors form the monoidal transform X̃ of X with centre M; the restriction of the projection π to S = π⁻¹(M) makes π: S → M an analytic fibre bundle with projective (r − 1)-space as the standard fibre.
Abstract: When we have a complex analytic manifold X and a complex analytic submanifold M of codimension r ≥ 2 of X, we can form the monoidal transform X̃ of X with centre M. (By a manifold, we shall understand a paracompact connected one throughout this paper.) X̃ is a complex analytic manifold with the same dimension n as X, there exists a holomorphic mapping π from X̃ onto X, and π is an analytic homeomorphism between X̃ − S and X − M, where S = π⁻¹(M). (More properly, we should say (X̃, π) is the monoidal transform of X.) S is an analytic submanifold of X̃ of codimension 1, and is in a peculiar position in X̃: the restriction of π to S makes π: S → M an analytic fibre bundle with projective (r − 1)-space as the standard fibre. (More specifically, S is the normal bundle of M in X, with the zero cross section deleted and "divided" by the group C* operating as multiplication by constants on each fibre.) If we denote the fibre π⁻¹(a) by L_a (a ∈ M), then we have [S]_{L_a} = [e]⁻¹, where [S] and [e] denote the complex line bundles defined by the divisor S of X̃ and the hyperplane e of P^{r−1} = L_a respectively, and [S]_{L_a} denotes the restriction of [S] to L_a. Now the inverse problem of the monoidal transformation is the following: Suppose we have a complex analytic manifold X̃ and a submanifold S of X̃ of codimension 1. Let S have a structure of a holomorphic fibre bundle over an analytic manifold M with projective (r − 1)-space as a standard fibre (m + r = n). Then under what conditions…

137 citations


Journal ArticleDOI
TL;DR: In this paper, the authors defined a structure on the closed left ideals of an arbitrary C∗-algebra which was analogous to the structure topology on the maximal ideals which exists for the abelian case.

115 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider quadratic estimability and procedures for obtaining unbiased quadratic estimators of parametric functions of the form $\sum_{i\leqq j}\gamma_{ij}\beta_i\beta_j + \sum_k\gamma_k\nu_k$ in mixed linear models, with special emphasis on linear combinations of variance components.
Abstract: Exemplification of the theory developed in [9] using a linear space of random variables other than linear combinations of the components of a random vector, and unbiased estimation for the parameters of a mixed linear model using quadratic estimators are the primary reasons for the considerations in this paper. For a random vector $Y$ with expectation $X\beta$ and covariance matrix $\sum_i \nu_iV_i$ ($\nu_1, \cdots, \nu_m$, and $\beta$ denote the parameters), interest centers upon quadratic estimability for parametric functions of the form $\sum_{i\leqq j}\gamma_{ij}\beta_i\beta_j + \sum_k\gamma_k\nu_k$ and procedures for obtaining quadratic estimators for such parametric functions. Special emphasis is given to parametric functions of the form $\sum_k\gamma_k\nu_k$. Unbiased estimation of variance components is the main reason for quadratic estimability considerations regarding parametric functions of the form $\sum_k\gamma_k\nu_k$. Concerning variance component models, Airy, in 1861 (Scheffe [6]), appears to have been the first to introduce a model with more than one source of variation. Such a model is also implied (Scheffe [6]) by Chauvenet in 1863. Fisher [1], [2] reintroduced variance component models and discussed, apparently for the first time, unbiased estimation in such models. Since Fisher's introduction and discussion of unbiased estimation in models with more than one source of variation, there has been considerable literature published on the subject. One of these papers is a description by Henderson [5] which popularized three methods (now known as Henderson's Methods I, II, and III) for obtaining unbiased estimates of variance components. We mention these methods since they seem to be commonly used in the estimation of variance components. For a review as well as a matrix formulation of the methods see Searle [7].
Among the several pieces of work which have dealt with Henderson's methods, only that of Harville [4] seems to have been concerned with consistency of the equations leading to the estimators and with the existence of unbiased (quadratic) estimators under various conditions. Harville, however, only treats a completely random two-way classification model with interaction. One other result which deals with existence of unbiased quadratic estimators in a completely random model is given by Graybill and Hultquist [3]. In Section 2 the form we assume for a mixed linear model is introduced and the pertinent quantities needed for the application of the results in [9] are obtained. Definitions, terminology, and notation are consistent with the usage in [9]. Section 3 considers parametric functions of the form $\sum_{i\leqq j}\gamma_{ij}\beta_i\beta_j + \sum_k\gamma_k\nu_k$ and Section 4 concerns parametric functions of the form $\sum_k\gamma_k\nu_k$. One particular method for obtaining unbiased estimators for linear combinations of variance components is given in Section 4 that is computationally simpler than the Henderson Method III procedure, which is the most widely used general approach applicable to any mixed linear model. The method described in Section 4 has the added advantage of giving necessary and sufficient conditions for the existence of unbiased quadratic estimators, which is not always the case with the Henderson Method III. In the last section an example is given which illustrates the Henderson Method III procedure from the viewpoint of this paper.
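The unbiasedness conditions behind quadratic estimation of variance components can be checked mechanically. A hedged Python sketch (the one-way layout with 3 groups of size 2 is invented for illustration and is not the paper's general model):

```python
import numpy as np

# Illustrative sketch (not this paper's method): for a mixed model
# Y = X beta + Z u + e with Cov(Y) = nu1 * Z Z' + nu2 * I, a quadratic form
# Y'AY is unbiased for g1*nu1 + g2*nu2 iff X'AX = 0, tr(A Z Z') = g1 and
# tr(A) = g2.  We verify this for the within-group mean square of a
# one-way random model with a = 3 groups of size m = 2 (a toy layout).
a, m = 3, 2
N = a * m
Z = np.kron(np.eye(a), np.ones((m, 1)))      # group-membership indicators
X = np.ones((N, 1))                          # overall mean only
P = Z @ np.linalg.inv(Z.T @ Z) @ Z.T         # projection onto group means
A = (np.eye(N) - P) / (N - a)                # within-group mean square

assert np.allclose(X.T @ A @ X, 0)           # no bias from the fixed part
assert np.isclose(np.trace(A @ Z @ Z.T), 0)  # coefficient of nu1 is 0
assert np.isclose(np.trace(A), 1)            # coefficient of nu2 is 1
# Hence E[Y'AY] = nu2: the within mean square is an unbiased quadratic
# estimator of the error variance component.
```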

88 citations


Journal ArticleDOI
TL;DR: In this paper, a conjecture due to A. G. Constantine is verified: it is shown that the hypergeometric function $_2F_1(a, b; c; R)$ of matrix argument defined by (1.1) satisfies a system of partial differential equations which reduces, when $m = 1$, to the classical hypergeometric equation.
Abstract: Many distributions in multivariate analysis can be expressed in a form involving hypergeometric functions $_pF_q$ of matrix argument e.g. the noncentral Wishart $(_0F_1)$ and the noncentral multivariate $F(_1F_1)$. For an exposition of distributions in this form see James [9]. The hypergeometric function $_pF_q$ has been defined by Constantine [1] as the power series representation \begin{equation*}\tag{1.1} _pF_q(a_1,\cdots, a_p; b_1,\cdots, b_q; R) = \sum^\infty_{k=0} \sum_\kappa \frac{(a_1)_\kappa\cdots(a_p)_\kappa}{(b_1)_\kappa\cdots (b_q)_\kappa} \frac{C_\kappa (R)}{k!}\end{equation*} where $a_1,\cdots, a_p, b_1,\cdots, b_q$ are real or complex constants, $(a)_\kappa = \prod^m_{i=1}(a - \frac{1}{2}(i - 1))_{k_i},\quad (a)_n = a(a + 1)\cdots (a + n - 1)$ and $C_\kappa(R)$ is the zonal polynomial of the $m \times m$ symmetric matrix $R$ corresponding to the partition $\kappa = (k_1, k_2,\cdots, k_m), k_1 \geqq k_2 \geqq \cdots \geqq k_m$, of the integer $k$ into not more than $m$ parts. The functions defined by (1.1) are identical with the hypergeometric functions defined by Herz [5] by means of Laplace and inverse Laplace transforms. For a detailed discussion of hypergeometric functions and zonal polynomials, the reader is referred to the papers [1] of Constantine and [7], [8], [9] of James. From a practical point of view, however, the series (1.1) may not be of great value. Although computer programs have been developed for calculating zonal polynomials up to quite high order, the series (1.1) may converge very slowly. It appears that some asymptotic expansions for such functions must be obtained. It is well known that asymptotic expansions for a function can in many cases be derived using a differential equation satisfied by the function (see e.g. Erdelyi [4]), and so, with this in mind, a study of differential equations satisfied by certain hypergeometric functions certainly seems justified. In this paper a conjecture due to A. G.
Constantine is verified i.e. it is shown that the function \begin{equation*}\tag{1.2} _2F_1(a, b; c; R) = \sum^\infty_{k=0} \sum_\kappa \frac{(a)_\kappa(b)_\kappa}{(c)_\kappa} \frac{C_\kappa(R)}{k!}\end{equation*} satisfies the system of partial differential equations \begin{align*} \tag{1.3} R_i(1 &- R_i)\partial^2F/\partial R_i^2 + \{ c - \frac{1}{2}(m - 1) - (a + b + 1 - \frac{1}{2}(m - 1))R_i \\ &+\frac{1}{2} \sum^m_{j=1,j \neq i}\lbrack R_i(1 - R_i)/(R_i - R_j) \rbrack\}\partial F/\partial R_i \\ &-\frac{1}{2} \sum^m_{j=1,j \neq i} \lbrack R_j(1 - R_j)/(R_i - R_j) \rbrack\partial F/\partial R_j = abF \quad (i = 1,2, \cdots, m)\end{align*} where $R_1, R_2,\cdots, R_m$ are the latent roots of the complex symmetric $m \times m$ matrix $R$. When $m = 1$, the system (1.3) clearly reduces to the classical hypergeometric equation. It appears difficult to establish this conjecture directly, and the method used has necessitated a section devoted to a summary of the argument involved (Section 3). The main result in the paper is summarized in Theorem 3.1 of this section. Section 4 contains proofs referred to in Section 3. Using the fact that $C_\kappa (R)$ satisfies the partial differential equation (James [10])\begin{equation*}\tag{1.4} \sum^m_{i=1} R_i^2\partial^2y/\partial R_i^2 + \sum^m_{i=1}\sum^m_{j=1,j \neq i}\lbrack R_i^2/(R_i - R_j) \rbrack\partial y/\partial R_i = \sum^m_{i=1}k_i(k_i + m - i - 1)y,\end{equation*} James and Constantine [11] have obtained the effects of certain differential operators on $C_\kappa(R)$. These results are given in Section 2 and are used in many proofs in Section 4. Section 5 is probably of most interest statistically, for here systems of partial differential equations similar to (1.3) are given for $_1F_1(a; c; R)$ and $_0F_1(c; R)$. These two functions occur often in multivariate distributions.
The differential equations for $_1F_1(a; c; R)$ have been used by Constantine [3] to obtain an asymptotic expansion for the noncentral likelihood ratio criterion, and by the author [12] to obtain asymptotic distributions of Hotelling's generalized $T_0^2$ statistic, Pillai's $V^{(m)}$ criterion, and for the largest latent root of the covariance matrix. The system for $_0F_1(c; R)$ is a generalization of that given by James [6] for $_0F_1(m/2; R)$.
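For $m = 1$ the zonal polynomial $C_\kappa(R)$ is just $R^k$ and $(a)_\kappa$ is the ordinary Pochhammer symbol, so (1.1) collapses to the classical series. A minimal Python sketch checking this scalar case against the standard identity $_2F_1(1, 1; 2; x) = -\log(1-x)/x$ (the identity is textbook material, not from this paper):

```python
import math

def poch(a, n):
    """Pochhammer symbol (a)_n = a(a+1)...(a+n-1)."""
    out = 1.0
    for i in range(n):
        out *= a + i
    return out

def hyp2f1_scalar(a, b, c, x, terms=80):
    """The m = 1 case of series (1.1): the classical 2F1 power series."""
    return sum(poch(a, k) * poch(b, k) / poch(c, k) * x**k / math.factorial(k)
               for k in range(terms))

# (1)_k (1)_k / (2)_k * x^k / k! = x^k / (k+1), which sums to -log(1-x)/x.
x = 0.5
assert abs(hyp2f1_scalar(1, 1, 2, x) - (-math.log(1 - x) / x)) < 1e-12
```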

75 citations


Journal ArticleDOI
TL;DR: In this paper, the authors prove (Theorem 1.1) that if $T$ is a positive linear contraction on $L_1$ such that $T^nf$ and $T^{n+1}f$ intersect slightly, uniformly in $f$ in the unit sphere of $L_1$, then $T^nf - T^{n+1}f$ converges to zero in norm; condition (C) — $f \epsilon L_1, \int f = 0$ implies that $T^n f$ converges to zero in norm — follows when $T$ is conservative and ergodic.
Abstract: A recent theorem of Orey [12] (see also [1], [6], [7], [13]) asserts that if $T$ is an $L_1$ operator induced on a discrete measure space by an irreducible recurrent aperiodic Markov matrix, then the condition (C) holds: $f \epsilon L_1, \int f = 0$ implies that $T^n f$ converges to zero in $L_1$. In an attempt to determine when (C) holds for more general operators, we at first prove the following (Theorem 1.1): Let $T$ be a positive linear contraction operator on $L_1$; if $T^nf$ and $T^{n+1}f$ intersect slightly, but uniformly in $f$ in the unit sphere of $L_1$, then $T^nf - T^{n+1}f$ converges to zero in norm. (C) follows if $T$ is conservative and ergodic (Corollary 1.3). In Section 2 we derive from this a simple proof of Orey's theorem. The main result of the paper is in Section 3 and could be called a "zero-two" theorem: Let $P(x, A)$ be a Markov kernel, and assume that there is a $\sigma$-finite measure $m$ such that for each $A, m(A) = 0$ implies $P(x, A) = 0$ a.e. and $m(A) > 0$ implies $\sum^\infty_{n=0} P^{(n)}(x, A) = \infty$ a.e. Then the total variation of the measure $P^{(n)}(x, \cdot) - P^{(n+1)}(x, \cdot)$ is either a.e. 2 for all $n$ or it converges a.e. to 0 as $n \rightarrow \infty$. In Section 4 it is shown that a version of the zero-two theorem essentially contains the Jamison-Orey generalization of Orey's theorem to Harris processes. Section 1 and Section 2 of this paper do not assume any knowledge of either operator ergodic theory or probability. Some known results in ergodic theory are applied in Section 3, but the proof of the main theorem does not depend on them.
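The zero-two dichotomy is easy to watch on finite state spaces. A Python sketch (the two 2-state kernels are toy examples, not from the paper): the total variation of $P^{(n)}(x,\cdot) - P^{(n+1)}(x,\cdot)$ stays at 2 for a period-2 chain and tends to 0 for an irreducible aperiodic chain.

```python
# Toy finite-state illustration of the "zero-two" dichotomy.
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def tv_gap(P, n):
    """Total variation sum_j |P^n(0,j) - P^(n+1)(0,j)| from state 0."""
    Q = P
    for _ in range(n - 1):
        Q = mat_mul(Q, P)
    R = mat_mul(Q, P)
    return sum(abs(Q[0][j] - R[0][j]) for j in range(len(P)))

periodic = [[0.0, 1.0], [1.0, 0.0]]    # period 2: the gap is 2 for every n
aperiodic = [[0.5, 0.5], [0.3, 0.7]]   # irreducible aperiodic: the gap -> 0

assert all(abs(tv_gap(periodic, n) - 2.0) < 1e-12 for n in (1, 2, 5, 10))
assert tv_gap(aperiodic, 20) < 1e-10
```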

74 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that if a GM estimator exists for the mean $\mu \in \Omega$ of a multivariate linear model whose rows are uncorrelated with common covariance matrix $\Sigma > 0$, then each $\mu \in \Omega$ can be written as $\mu = ZB$ with $Z$ fixed and $B$ ranging over all real coefficient matrices.
Abstract: The coordinate free (geometric) approach to univariate linear models has added both insight and understanding to the problems of Gauss Markov (GM) estimation and hypothesis testing. One of the initial papers emphasizing the geometric aspects of univariate linear models is Kruskal's (1961). The coordinate free approach is used in this paper to treat GM estimation in a multivariate analysis context. In contrast to the univariate situation, a central question for multivariate linear models is the existence of GM estimates. Of course, it is the more complicated covariance structure in the multivariate case that creates the concern over the existence of GM estimates. As the emphasis is on GM estimation, first and second moment assumptions (as opposed to distributional assumptions) play the key role. Classical results for the univariate linear model are outlined in Section 1. In addition, a recent theorem due to Kruskal (1968) concerning the equality of GM and Least Squares (LS) estimates is discussed. A minor modification of Kruskal's result gives a very useful necessary and sufficient condition for the existence of GM estimators for arbitrary covariance structures and a fixed regression manifold. In Section 2, the outer product of two vectors and the Kronecker product of linear transformations are discussed and applied to describe the covariance structure of a random matrix. This application includes the case of a random sample from a multivariate population with covariance matrix $\Sigma > 0$ ("$\Sigma > 0$" means that $\Sigma$ is positive definite). The question of GM estimation in the standard multivariate linear model is taken up in Section 3. This model is described as follows: a random matrix $Y: n \times p$, whose rows are uncorrelated and each row has covariance matrix $\Sigma > 0$, is observed. The mean matrix of $Y, \mu$, is assumed to have the form $\mu = ZB$ where $Z: n \times q$ is known and of rank $q$, and $B: q \times p$ is a matrix of regression coefficients. For this model, GM estimators for $\mu$ and $B$ exist and are well known (see Anderson (1958), Chapter 8). The main result in Section 3 establishes a converse to this classical result. Explicitly, let $Y$ have the covariance structure as above and assume $\Omega$ is a fixed regression manifold. It is shown that if a GM estimator for $\mu\in\Omega$ exists, then each element $\mu\in\Omega$ can be written as $\mu = ZB$ where $Z: n \times q$ is fixed and $B: q \times p$ ranges over all $q \times p$ real matrices. The results in Section 4 and Section 5 are similar to the main result of Section 3. A complete description of all regression manifolds for which GM estimators exist is given for two different kinds of covariance assumptions concerning $\Sigma$ ($\Sigma$ as above). In Section 4, it is assumed that $\Sigma$ has a block diagonal form with two blocks. Section 5 is concerned with the case when $\Sigma$ has the so-called intra-class correlation form.
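In the standard model of Section 3, the GM and LS estimators coincide, with $\hat{B} = (Z'Z)^{-1}Z'Y$. A minimal Python sketch (Z and B are arbitrary illustrative values; the noiseless choice Y = ZB makes the recovery check exact):

```python
import numpy as np

# Sketch of the standard multivariate linear model: Y = Z B + E with
# uncorrelated rows.  GM and least squares coincide here, and
# B_hat = (Z'Z)^{-1} Z' Y.  Z and B below are made-up illustrative values.
rng = np.random.default_rng(0)
n, q, p = 8, 2, 3
Z = rng.standard_normal((n, q))          # known design matrix, rank q
B = np.array([[1.0, -2.0, 0.5],
              [0.0,  3.0, 1.5]])         # true regression coefficients
Y = Z @ B                                # noiseless case for a clean check

B_hat = np.linalg.solve(Z.T @ Z, Z.T @ Y)
assert np.allclose(B_hat, B)             # LS recovers B exactly
assert np.allclose(Z @ B_hat, Y)         # fitted mean matrix mu = Z B
```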

31 citations


Journal ArticleDOI
TL;DR: In this paper, a nonparametric procedure is developed for testing the null hypothesis $H: F = G$ against a class of one-sided alternatives under which $G$ is shifted from $F$ towards large values of $x$ and $y$.
Abstract: Let $Z_i = (X_i, Y_i), i = 1,2,\cdots, m$ be a random sample from a bivariate population with continuous $\operatorname{cdf} F(x, y)$ and let $Z_i = (X_i, Y_i), i = m + 1, \cdots, m + n$ be an independent sample from the population with continuous $\operatorname{cdf} G(x, y)$. The present paper is concerned with the development of a nonparametric procedure for testing the null hypothesis $H:F = G$ against a class of one-sided alternatives which are, generally, those $\operatorname{cdf's}$ which are shifted from $F$ towards large values of $x$ and $y$. That is, large values of $x$ and $y$ are more likely under $G$ than $F$. A formal definition of the alternatives, tied to a concept of two-dimensional ordering, is introduced in Section 2. A simple example is given by the one-sided translation alternatives \begin{equation*}\tag{1.1}K_0 = \{(F, G): G(x, y) = F(x - \theta_1, y - \theta_2), \mathbf{\theta} \geqq \mathbf{0} \text{ and } \mathbf{\theta} \neq \mathbf{0}\}\end{equation*} where inequality between vectors means coordinate-wise inequality. Another interesting class of directed shifts, which has not been previously studied, is composed of the bivariate Lehmann-type alternatives \begin{equation*}\tag{1.2}K_1 = \{(F, G):G(x, y) = F^\theta(x, y), \theta > 1\}.\end{equation*} All $\operatorname{cdf's}$ are assumed to be continuous unless otherwise stated. Among other areas, problems involving ordered alternatives arise in reliability studies of two-component systems when the components are interdependent and prior information on the mechanism involved ensures that the average lifetimes of the components in one system are no smaller than those of the other. Another example would be the comparison of a new with a standard method for treating a disease when two responses are measured.
The usefulness of incorporating prior information in the form of an ordered parameter space was first recognized by Bartholomew [2], [3], [4] in the one-way analysis of variance model. He derived the likelihood ratio test for the hypothesis that the means of several normal distributions are equal against the alternative that they are ordered. Maximum likelihood estimation of ordered parameters in other distributions has been studied by Brunk [5] and Van Eeden [17]. Chacko [6] and Shorack [16] proposed rank analogs of Bartholomew's test and showed that the asymptotic Pitman efficiency equals that of the Wilcoxon test relative to the $t$ test. Employing some results from nonlinear programming theory, Kudo [9] and Nuesch [12] derived the likelihood ratio procedure for testing $\mathbf{\mu} = \mathbf{0}$ against $\mathbf{\mu} \geqq \mathbf{0}$ on the basis of a single sample from a multivariate normal $\mathcal{N}(\mathbf{\mu}, \mathbf{\Sigma})$ with $\mathbf{\Sigma}$ known. The case of unknown $\mathbf{\Sigma}$ has recently been investigated by Perlman [13]. Other tests when $\mathbf{\Sigma}$ is known, namely most stringent somewhere most powerful tests, have been studied by Schaafsma and Smid [15] and these have been compared with most stringent tests by Schaafsma [14]. No distribution-free or even asymptotically distribution-free test is yet available in the literature for detecting ordered shifts in bivariate and multivariate distributions. As groundwork for the bivariate two-sample problem, we propose and study a nonparametric test based on the concept of two-dimensional layer ranks which were introduced by Barndorff-Nielsen and Sobel [1].
Let $N = m + n$ and define \begin{align*}L(i, j) &= 1 \quad\text{if } \mathbf{Z}_i \geqq \mathbf{Z}_j; \\ &= 0\quad \text{otherwise}.\end{align*}\begin{equation*}\tag{1.3}\begin{align*}L_i &= \sum^N_{j = 1} L(i, j),\quad L_j^\ast = \sum^N_{i = 1} L(i, j);\quad 1 \leqq i \leqq N, 1 \leqq j \leqq N, \\ \mathbf{L} &= (L_1, \cdots, L_N),\quad \mathbf{L}^\ast = (L_1^\ast, \cdots, L_N^\ast).\end{align*}\end{equation*} Then $L_i(L_i^\ast)$ is called the 3rd (1st) quadrant layer rank of $\mathbf{Z}_i$ in the combined sample $\{\mathbf{Z}_1, \cdots, \mathbf{Z}_N\}$. Geometrically, $L_i$ and $L_i^\ast$ are the number of points $\mathbf{Z}_j$ in the closed 3rd and 1st quadrant, respectively, with reference to Cartesian coordinates having origin $\mathbf{Z}_i$. For intuitive motivation of our test statistic, consider plotting the first sample as dots and the second as crosses in the same diagram [Fig. 1]. Under $H$, the $\mathbf{Z}_i$ are independent identically distributed and the dots and crosses are expected to be well mixed. Under an ordered alternative, $G$ has more mass in the upper right-hand corner than $F$ and the second sample layer ranks $L_{m + 1}, \cdots, L_N$ are expected to be larger than $L_1, \cdots, L_m$ on the average. High values of the statistic $\lbrack m\sum^N_{i = m + 1} L_i - n \sum^m_{i = 1} L_i\rbrack$ should then lead to rejection of $H$ in favor of an ordered shift. Similarly, a small value of $\lbrack m \sum^N_{i = m + 1} L_i^\ast - n \sum^m_{i = 1} L_i^\ast\rbrack$ indicates a one-sided shift. Incidentally, for the univariate two sample problem the diagram would be one of dots and crosses on a line and the layer ranks would reduce to ordinary ranks, making each of the two statistics equivalent to the one-sided Wilcoxon statistic.
Although $\mathbf{L}$ and $\mathbf{L}^\ast$ are invariant under a natural group of transformations which leave the problem invariant, it is difficult to characterize a maximal invariant under this group in a manageable form. A permutation test based upon the 3rd quadrant layer ranks $\mathbf{L}$ is proposed in Section 2 and is shown to be unbiased. A large sample unconditional test is proposed in Section 3 as an approximation to the permutation test and it is shown to be consistent. Section 4 contains results on the asymptotic distribution under local ordered shift alternatives and the Pitman efficacy of the test. Modifications of the test for some variants of the basic problem and a parametric test under normal theory are discussed in Section 5.
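The 3rd-quadrant layer ranks of (1.3) and the proposed statistic are simple to compute. A Python sketch on made-up points (not data from the paper):

```python
# Sketch of the 3rd-quadrant layer ranks L_i and the statistic
# m * sum_{second sample} L_i - n * sum_{first sample} L_i.
def layer_ranks(points):
    """L_i = #{j : Z_j <= Z_i coordinate-wise}, the closed 3rd quadrant at Z_i."""
    return [sum(1 for (xj, yj) in points if xj <= xi and yj <= yi)
            for (xi, yi) in points]

first = [(0.1, 0.2), (0.4, 0.1), (0.2, 0.5)]    # sample from F (m = 3)
second = [(0.6, 0.7), (0.9, 0.4), (0.8, 0.9)]   # sample from G (n = 3)
m, n = len(first), len(second)
L = layer_ranks(first + second)

stat = m * sum(L[m:]) - n * sum(L[:m])
# The second sample sits to the upper right, so its layer ranks are larger
# and the statistic is positive, pointing towards the ordered alternative.
assert L == [1, 1, 2, 4, 3, 5]
assert stat == 24
```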

22 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce a set of random variables and give interpretations of them in terms of coupon collection; the bonus sum of a collection of coupons is obtained by adding the bonus values of the different colors represented in the collection.
Abstract: We shall introduce a set of random variables and give interpretations of them in terms of coupon collection. A person collects coupons with different colors. Let there be in all $N$ different colors, which we label $1,2, \cdots, N$. The different colors may occur with different frequencies. The colors of successive coupons are independent. Let $J_\nu$ be the color of the $\nu$th coupon. Our formal assumptions are: $J_1, J_2, \cdots$ are independent random variables, all with the following distribution \begin{equation*}\tag{1.1} P(J = s) = p_s,\quad s = 1,2, \cdots, N\end{equation*} where \begin{equation*}\tag{1.2} p_s \geqq 0,\quad p_1 + p_2 + \cdots + p_N = 1.\end{equation*} Thus, $p_s$ is the probability that a coupon has color $s$. Let \begin{equation*}\tag{1.3} M_n = {\tt\#} \text{ different elements among } (J_1, J_2, \cdots, J_n),\quad n = 1,2, \cdots.\end{equation*} Thus, $M_n$ is the number of different colors in the collection after $n$ coupons. Let \begin{equation*}\tag{1.4} T_n = \min \{\nu: M_\nu = n\},\quad n = 1,2, \cdots, N.\end{equation*} $T_n$ is the number of coupons needed in order to get a collection with $n$ different colors in it. Define \begin{align*} \tag{1.5} D_\nu &= 1\quad\text{if } J_\nu \notin (J_1, J_2, \cdots, J_{\nu-1}),\quad \nu = 1,2, \cdots \\ &= 0\quad \text{otherwise}.\end{align*} Thus, $D_\nu$ tells whether the $\nu$th coupon adds a new color to the collection or not. We shall assume that the coupons also carry a bonus value, which is a real number. All coupons with the same color have the same bonus value, while the bonus value may differ from color to color. Let $a_s$ be the bonus value of coupons with color $s, s = 1,2, \cdots, N$. The bonus sum of a collection of coupons is obtained by adding the bonus values of the different colors which are represented in the collection. Thus, duplicates do not count. Formally we define the bonus sum as follows.
\begin{equation*}\tag{1.6} Q_n = a_{J_1}D_1 + a_{J_2}D_2 + \cdots + a_{J_n}D_n,\quad n = 1,2, \cdots.\end{equation*} The random variable $Q_n$ will be referred to as the bonus sum after $n$ coupons for a collector in the situation $\Omega = ((p_1, a_1), (p_2, a_2), \cdots, (p_N, a_N))$. We define for $B > 0$ \begin{equation*}\tag{1.7} W(B) = \min \{n: Q_n \geqq B\}.\end{equation*} $W(B)$ will be referred to as the waiting time to obtain bonus sum $B$ for a coupon collector in the situation $\Omega$. The following lemma, which is obvious, states that we have introduced a slight abundance of terminology and notation. LEMMA 1.1. The random variables $M_n$ and $T_n$ in (1.3) and (1.4) are respectively the bonus sum after $n$ coupons and the waiting time to obtain bonus sum $n$ for a coupon collector in the situation $((p_1, 1), (p_2, 1), \cdots, (p_N, 1))$. Our main concern will be to study the random variable $W(B)$ and its particular case $T_n$. We confine ourselves to the case when all bonus values, $a_s$, are positive. The main result is that $W(B)$, under general conditions, is asymptotically (as $n$ and $N$ increase simultaneously) normally distributed. We give a brief sketch of the idea of proof, which is well known. When all $a$'s are positive, the distributions of the random variables $W(B)$ and $Q_n$ are related according to the formula \begin{equation*}\tag{1.8} P(W(B) > x) = P(Q_{\lbrack x \rbrack} < B),\quad x > 0.\end{equation*} With the aid of formula (1.8) one can "invert" results concerning either of the random variables $Q$ or $W$ to yield results concerning the other variable. In [5] we showed that $Q_n$, under general conditions, is asymptotically normally distributed. The asymptotic normality of $W$ will be derived by inversion of the results in [5]. The asymptotic behavior of the collector's waiting time has, to the best of our knowledge, earlier only been considered in the classical case, i.e. $p_s = 1/N$ and $a_s = 1, s = 1,2, \cdots, N$.
In [4] Section 3, Renyi derives results about $M$ by first deriving results about $T$ and then "inverting." His basic tool is the representation \begin{equation*}\tag{1.9} T_n = U_1 + U_2 + \cdots + U_n\end{equation*} where $U_\nu$ is the waiting time from bonus sum $\nu - 1$ to bonus sum $\nu$. In the classical case $U_1, U_2, \cdots$ are independent random variables and $P(U_\nu = k) = ((\nu - 1)/N)^{k-1}(N - \nu + 1)/N, k = 1,2, \cdots$. Thus, results concerning the asymptotic behavior of sums of independent random variables can be applied. A complete investigation along these lines is given by Baum and Billingsley in [1]. A generalized version of the problem is considered by Ivchenko and Medvedev in [2]. In their problem, as in our problem here, a representation of the type (1.9) no longer holds. They proceed along the path we shall follow here, i.e. to obtain results about the waiting time by "inverting" results concerning the bonus sum. The following notation will be used throughout the paper. $E$ and $\sigma^2$ stand for expectation and variance. $^c$ denotes centering at expectation, i.e. $X^c = X - EX$. $X = _\mathscr{L} Y$ means that the random variables $X$ and $Y$ have the same distribution. $\Rightarrow$ denotes convergence in law. The normal distribution with mean $\mu$ and variance $\sigma^2$ is denoted by $N(\mu, \sigma^2)$. The integral part of a real number is denoted by $\lbrack\ \rbrack$.
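The definitions (1.6) and (1.7) can be exercised on a handmade coupon sequence. A Python sketch (colors and bonus values are invented for illustration):

```python
# Sketch of the bonus sum Q_n of (1.6) and the waiting time W(B) of (1.7).
def bonus_sums(colors, bonus):
    """Q_n for n = 1, 2, ...: duplicate colors add nothing (D_nu = 0)."""
    seen, total, out = set(), 0.0, []
    for c in colors:
        if c not in seen:            # D_nu = 1: a new color
            seen.add(c)
            total += bonus[c]
        out.append(total)
    return out

def waiting_time(colors, bonus, B):
    """W(B) = min{n : Q_n >= B}."""
    for n, q in enumerate(bonus_sums(colors, bonus), start=1):
        if q >= B:
            return n
    return None

bonus = {1: 2.0, 2: 1.0, 3: 3.0}
colors = [1, 1, 2, 3, 2]
assert bonus_sums(colors, bonus) == [2.0, 2.0, 3.0, 6.0, 6.0]
assert waiting_time(colors, bonus, 3.0) == 3
assert waiting_time(colors, bonus, 6.0) == 4
# With all bonus values equal to 1, Q_n reduces to M_n, the number of
# distinct colors, as Lemma 1.1 states.
assert bonus_sums(colors, {1: 1, 2: 1, 3: 1}) == [1, 1, 2, 3, 3]
```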

22 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the Koopman-Darmois class of exponential densities and develop a method for obtaining the U.M.V.U.E. of a parametric function $g(\theta)$ without explicit knowledge of any unbiased estimator of $g(\theta)$.
Abstract: Consider a sample $(x_1, x_2, \cdots, x_N)$ from a population with a distribution function $F_\theta(x), (\theta \epsilon \mathbf{\Omega})$ for which a complete sufficient statistic, $s(x)$, exists. Then any parametric function $g(\theta)$ possesses a unique minimum variance unbiased estimator U.M.V.U.E., which may be obtained by the Rao-Blackwell theorem provided an unbiased estimator of $g(\theta)$ with finite variance for each $\theta \epsilon \mathbf{\Omega}$ is available. In this paper we will consider the Koopman-Darmois class of exponential densities and develop a method for obtaining the U.M.V.U.E., $t_g$, of $g(\theta)$ without explicit knowledge of any unbiased estimator of $g(\theta)$. The U.M.V.U.E. $t_g$ is given as the limit in the mean (l.i.m.) of a series and a convergent series is also given for the variance. For any arbitrary but fixed $\theta_0 \epsilon \mathbf{\Omega}$, it can be verified that the complete sufficient statistic $s(x)$ has moments of all orders and that these moments determine its distribution function. Hence the set of polynomials in $s(x)$ is dense in the Hilbert space, $V$ (with the usual inner product), of Borel measurable functions of $s(x)$. Since $t_g$ is an element of $V$, we may obtain a generalized Fourier series for it by constructing a complete orthonormal set $\{\varphi_n\}$ for $V$. Such a set $\{\varphi_n\}$ may be obtained from the density function and its derivatives with respect to $\theta$. For a subclass of the exponential family, Seth [18] has obtained $\{\varphi_n\}$ in a form which is convenient for our purposes. We will study this case in Section 3 and use Seth's results to give an explicit construction of $t_g$. Criteria for the pointwise convergence of the series will also be given. In Section 4 examples illustrating the use of the method are given and some related results are discussed. 
The general theory for the representation of minimum variance unbiased estimates, both local and uniform, has been developed in depth, for example in [5], [18], [19], [16], [3], and [4]. The present remarks, though founded in the general theory (in particular [3] and [4]), are tailored specifically to the exponential family.
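The Rao-Blackwell construction underlying this abstract can be illustrated on a classical textbook example (not taken from the paper itself): for a Poisson sample, the complete sufficient statistic is $S = \sum x_i$, and the U.M.V.U.E. of $g(\lambda) = e^{-\lambda}$ is $((N-1)/N)^S$. A minimal Python sketch, with exact unbiasedness verified through the probability generating function of $S$:

```python
import math

def umvue_exp_neg_lambda(sample):
    """UMVUE of g(lambda) = exp(-lambda) for a Poisson(lambda) sample.
    Based on the complete sufficient statistic S = sum(sample):
    t_g = ((N-1)/N) ** S  (a classical example, not from the paper)."""
    N = len(sample)
    S = sum(sample)
    return ((N - 1) / N) ** S

def expected_value(N, lam):
    """E[r**S] with S ~ Poisson(N*lam) equals exp(N*lam*(r-1)) by the pgf,
    which for r = (N-1)/N simplifies to exp(-lam): exact unbiasedness."""
    r = (N - 1) / N
    return math.exp(N * lam * (r - 1))
```

The pgf identity shows the estimator is unbiased for every $\lambda$, not just asymptotically, which is the defining property the series construction of the paper generalizes.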

17 citations


Journal ArticleDOI
TL;DR: For hypotheses testing problems, Theorem 6.1 of this article allows one to restrict oneself to the class of tests depending on the statistic $\Delta_n(\theta_0)$ alone, at least as far as the asymptotic power of the test under alternatives of the form $P_{n,\theta_n}$ is concerned.
Abstract: Let $(\mathscr{X}, \mathscr{A})$ be a measurable space and let $\Theta$ be an open subset of the $k$-dimensional Euclidean space $\mathscr{E}_k$. For each $\theta\in\Theta$, let $P_\theta$ be a probability measure on $\mathscr{A}$. Let $\{ X_n, n \geqq 0\}$ be a discrete parameter Markov process defined on $(\mathscr{X}, \mathscr{A}, P_\theta), X_n$ taking values in the Borel real line $(R, \mathscr{B})$. Finally, let $\mathscr{A}_n$ be the $\sigma$-field induced by the first $n + 1$ random variables $X_0, X_1,\cdots, X_n$ from the process and let $P_{n,\theta}$ be the restriction of $P_\theta$ to the $\sigma$-field $\mathscr{A}_n$. Under suitable conditions on the process, the following results are derived. Let $\theta_0$ be an arbitrary but fixed point in $\Theta$ and let $\Delta_n(\theta_0)$ be a $k$-dimensional vector defined in terms of the random variables $X_0, X_1,\cdots, X_n; \Delta_n^\ast(\theta_0)$ stands for a certain truncated version of $\Delta_n(\theta_0)$. By means of $\Delta_n^\ast(\theta_0)$ and $h\in\mathscr{E}_k$, one defines a probability measure $R_{n,h}, n \geqq 0$. The first main result is that the sequences $\{P_{n,\theta}\}$ and $\{R_{n,h}\}$ of probability measures with $h = n^{\frac{1}{2}}(\theta - \theta_0), \theta\in\Theta$, are differentially equivalent at the point $\theta_0$. (See Definition 5.1.) This is shown in Corollary 5.1. It is also proved in Corollary 5.2 that the sequence $\{\Delta_n^\ast(\theta_0)\}$ is differentially sufficient at $\theta_0$ (see Definition 5.2) for the family $\{P_{n,\theta}; \theta\in\Theta\}$ of probability measures. Next, let $\{h_n\}$ be a bounded sequence of $h$'s in $\mathscr{E}_k$ and set $\theta_n = \theta_0 + h_nn^{-\frac{1}{2}}$. Then for hypotheses testing problems, Theorem 6.1 allows one to restrict oneself to the class of tests depending on $\Delta_n(\theta_0)$ alone, at least as far as the asymptotic power of the test under alternatives of the form $P_{n,\theta_n}$ is concerned. 
In Section 7, these results are applied to the case of testing hypotheses about a real-valued parameter. More specifically, asymptotically most powerful tests for testing the hypothesis $\theta = \theta_0$ against one-sided alternatives are constructed. This is covered in Theorem 7.1.1. Also an asymptotically most powerful unbiased test for testing the same hypothesis as above against two-sided alternatives is constructed in Theorem 7.1.2. The first of these problems was also dealt with in Johnson and Roussas [2] but the approach is different here. The second problem is solved in Wald [8] for the independent identically distributed case. However, both the assumptions and approach are different here in addition to the Markovian character of the random variables involved. Section 6 treats the general situation where $\Theta$ is an open subset of $\mathscr{E}_k$. Theorem 6.1 together with Theorem 6.3 provide a way for studying the corresponding hypothesis testing problem in the $k$-dimensional parameter case. Finally, at the end of the last section, an outline is presented of forthcoming results for that case. These results extend, under substantially weaker conditions, the work of Wald [8], [9] to Markov processes. The method of proof relies heavily on the development in LeCam [3]. Unless otherwise stated, limits will be taken as the sequence $\{ n\}$, or subsequences thereof, tends to $\infty$. Integrals without limits will extend over the entire appropriate space. For $h\in\mathscr{E}_k, h'$ stands for its transpose. All bounding constants will be finite numbers.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the bound on the convergence rate of a distribution function to the central limit of an infinitely divisible distribution (F(x)-approximation of the distribution function of a set of independent random variables is bounded.
Abstract: Let $\{F_n\}$ be a sequence of distribution functions defined on the real line, and suppose $\{F_n(x)\}$ converges to some limiting distribution function $F(x)$. It is of interest to investigate the error involved in using $F(x)$ as an approximation to $F_n(x)$, that is to investigate the rate of convergence of $\{F_n\}$ to $F$. This leads to the problem of finding bounds on $M_n = \sup_{-\infty < x < \infty} |F_n(x) - F(x)|$.
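The quantity $M_n$ can be computed directly in simple cases. As an illustration (a standard Berry-Esseen-type example, not taken from the paper), the sketch below evaluates $\sup_x |F_n(x) - \Phi(x)|$ for a standardized binomial against its normal limit, checking both sides of each jump of the step function $F_n$:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def binom_pmf(n, p, k):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def sup_distance(n, p=0.5):
    """M_n = sup_x |F_n(x) - Phi(x)| for the standardized Binomial(n, p) CDF.
    The supremum over x is attained at the jump points of F_n, so we check
    the left limit and the right value of F_n at each jump."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    cdf, m = 0.0, 0.0
    for k in range(n + 1):
        x = (k - mu) / sigma
        m = max(m, abs(cdf - normal_cdf(x)))   # left limit at the jump
        cdf += binom_pmf(n, p, k)
        m = max(m, abs(cdf - normal_cdf(x)))   # right value at the jump
    return m
```

Running `sup_distance` for increasing $n$ exhibits the roughly $n^{-1/2}$ decay that rate-of-convergence bounds of this type quantify.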

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of selecting the population with the largest translation parameter, define selection procedures based on two-sample estimates of shift, and show that if the underlying distribution is absolutely continuous the procedures of Section 2 select a unique population.
Abstract: Let $X_{it} (t = 1,\cdots, n; i = 1,\cdots, k)$ be independent observations from $k$ populations with respective distribution functions $F(x - \theta_i)$, where the translation parameters $\theta_i$ are unknown. Consider the problem of selecting one population, the objective being to select the population with largest translation parameter. Procedures based on the joint ranking of all $nk$ observations have been considered by Lehmann [5], Bartlett and Govindarajulu [1], and Puri and Puri [9]. Robust procedures for related problems have been considered by Sobel [11] and McDonald and Gupta [7], among others. Define the $i$th population to be good if $\theta_i > \theta_{\max} - \Delta$ where $\theta_{\max} = \max \{\theta_1,\cdots, \theta_k\}$ and where $\Delta$ is a specified positive constant. The asymptotic relative efficiency (A.R.E.) of two procedures is then the limiting ratio of the sample sizes required to achieve a preassigned minimum probability of selecting a good population. It was hoped that procedures based on ranks would be more robust in terms of A.R.E. than corresponding parametric procedures. However, it has recently been shown that the slippage configuration used to find the A.R.E. in references [5] and [1] was not least favorable for the selection of a good population (See reference [10].). Puri and Puri [9] avoided this difficulty by restricting consideration to parameter points $\mathbf{\theta}^{(n)} = (\theta_1^{(n)},\cdots, \theta^{(n)}_k)$ for which $\theta^{(n)}_{\max} - \theta^{(n)}_i = b_i/n^{\frac{1}{2}} + 0(1/n ^{\frac{1}{2}})$ for $i = 1,\cdots, k$, where the $b_i$ are nonnegative constants. In Section 2, selection procedures are defined which are based on two-sample estimates of shift. It is shown in Section 3 that if the underlying distribution $F(x)$ is absolutely continuous then the procedures defined in Section 2 will select a unique population. 
Conditions are given under which the slippage configuration is the least favorable parameter point for the selection of a good population. This result does not require restrictions on the set of translation parameters comprising the parameter space. The A.R.E. of these procedures is defined in Section 4. If we consider the procedure based on the Hodges-Lehmann estimates of shift corresponding to the two-sample $F_0$-scores test, it is shown that the A.R.E. of this procedure relative to the normal theory procedure of Bechhofer [2] is simply the Pitman efficiency of the two-sample $F_0$-scores test relative to the $t$-test. Hence this approach yields efficiency results which are similar to those in references [5], [1], and [9]. However, the use of estimates in the definition of the selection procedure has the advantage of eliminating the difficulties concerning the least favorable parameter point.
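As a sketch of the kind of procedure described here (the details below are illustrative assumptions, not the paper's exact definitions), selection can be based on two-sample Hodges-Lehmann estimates of shift, i.e. the median of all pairwise differences, which corresponds to Wilcoxon scores:

```python
import statistics

def hl_shift(xs, ys):
    """Two-sample Hodges-Lehmann estimate of the shift theta_x - theta_y:
    the median of all pairwise differences x - y.  This is the standard
    estimate corresponding to the two-sample Wilcoxon scores test, used
    here as one concrete choice of 'two-sample estimate of shift'."""
    return statistics.median(x - y for x in xs for y in ys)

def select_population(samples):
    """Select the index of the population with the largest estimated
    translation parameter, measuring every shift against sample 0
    (an illustrative convention; any common reference gives the same
    ordering of the estimated shifts)."""
    shifts = [hl_shift(s, samples[0]) for s in samples]
    return max(range(len(samples)), key=lambda i: shifts[i])
```

Basing selection on estimates, rather than on a joint ranking of all observations, is what eliminates the least-favorable-configuration difficulty described in the abstract.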

Journal ArticleDOI
TL;DR: In this paper, the authors considered a test for the existence of ties in the presence of a null hypothesis (H_0), where the null hypothesis is defined as a class of observations which are formed by selecting one observation from each of the samples.
Abstract: Let $x_{i1}, x_{i2}, \cdots, x_{ini}$ be a random sample of real observations from the $i$th population with cumulative distribution function (cdf) $F_i(x), i = 1,2, \cdots, c$. Let the $c$ samples be independent and the $F$'s continuous. In this paper we shall consider tests for the null hypothesis $H_0:F_1(x) = F_2(x) = \cdots = F_c(x) = F(x), \text{say}$. The statistics and tests, proposed in this paper, are based upon $c$-plets of observations which are formed by selecting one observation from each of the $c$ samples. The total number of distinct $c$-plets that can be formed in this way is $\prod^c_{i=1}n_i$. In each $c$-plet we compare and rank observations appearing therein. Let $v_{ij}$ be the number of $c$-plets in which the observation selected from the $i$th sample is larger than exactly $(j - 1)$ observations and smaller than the other $(c - j)$ observations. Since the distributions are assumed to be continuous the probability of the existence of ties is zero. Let us define $u_{ij} = v_{ij}/\prod^c_{i=1}n_i$; it is the proportion of $c$-plets which give rank $j$ to the observation from the $i$th sample. Let us have $N = \sum^c_{i=1}n_i, p_i = n_i/N, L_i = \sum^c_{j=1} a_ju_{ij}$, where the $a$'s are real constants such that they are not all equal and \begin{equation*} \tag{1.1} A = \sum^c_{j=1} \sum^c_{l=1} a_ja_l \big\{\frac{\binom{c-1}{j-1}\binom{c-1}{l-1}}{(2c - 1)\binom{2c-2}{j+l-2}} - \frac{1}{c^2}\big\}.\end{equation*} Then we define a class of statistics $\mathscr{L}$ as \begin{equation*} \tag{1.2} \mathscr{L} = \frac{N(c - 1)^2}{Ac^2} \big\lbrack \sum^c_{i=1} p_iL_i^2 - \big(\sum^c_{i=1} p_i L_i\big)^2 \big\rbrack.\end{equation*} A particular member of the class is found by specifying the real constants $a$'s. With each member of this class we associate a test of $H_0$: Reject $H_0$ at a significance level $\alpha$ if $\mathscr{L}$ exceeds some predetermined constant $\mathscr{L}_\alpha$.
We, later in this paper, show that under $H_0, \mathscr{L}$ is distributed as a $\chi^2$ variate with $c - 1$ degrees of freedom, in the limit as $N \rightarrow \infty$. Hence for sufficiently large $N, \mathscr{L}_\alpha$ may be approximated by the corresponding significance point of the $\chi^2$ distribution with requisite degrees of freedom. Tests proposed by Bhapkar [2], [3], Sugiura [13], and the author [5], [6] may be seen to belong to this class. In this paper it is attempted to provide a unified treatment of statistics and tests based on $c$-plets--particularly those based on linear combinations of the $u$'s. The detailed properties of statistics belonging to this class are discussed under the null hypothesis and the following two alternative hypotheses. (I) the alternative of different locations or shift, the distributions being equal in all other respects and, (II) the alternative of different scales, the distributions again being equal in all other respects. Haller [7] has discussed the use and the properties of some statistics belonging to this class for testing $H_0$ against an alternative of stochastically ordered variables and for selection and ranking procedures. In the fourth section we give a condition on the distributions under which these tests are consistent against specified alternatives. In the fifth section $\mathscr{L}$ is shown to have a limiting noncentral $\chi^2$ distribution with $c - 1$ degrees of freedom under the pertinent alternative hypotheses. The noncentrality parameter is seen to be a quadratic form in the constants $a$'s, involving $F$. The earlier test statistics, mentioned above, were constructed taking into account the relative magnitudes of the $u$'s under the null and under the alternative hypotheses. The idea was to emphasize the difference between the two magnitudes. 
This "difference" is, in some sense, maximized if we are able to obtain the statistics, from the class, which has the largest noncentrality parameter under the alternative hypothesis of interest. This statistic would then be recommended to test $H_0$ whenever the particular alternative is suspected as likely. Also, for this particular alternative hypothesis, this test shall have maximum asymptotic relative efficiency (in the Pitman sense) among the class of statistics proposed. In the sixth section we show how to obtain the statistics with the above property and do so for certain specified alternatives. In the same section we compute the ARE of these tests with respect to certain of their competitors.
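A direct, brute-force rendering of the statistic (1.2) can make the definitions concrete. The sketch below (function name is ours; it enumerates all $\prod n_i$ $c$-plets, so it is suitable only for small samples) computes $v_{ij}$, $u_{ij}$, $L_i$, the constant $A$ of (1.1), and $\mathscr{L}$:

```python
import itertools
import math

def c_plet_stat(samples, a):
    """Compute the class statistic L of (1.2).  v[i][j] counts the c-plets
    in which the observation from sample i has rank j+1 (0-based j), and
    u = v / prod(n_i).  Indices j, l run 0..c-1, shifting the binomials
    of (1.1) accordingly."""
    c = len(samples)
    n = [len(s) for s in samples]
    prod_n = math.prod(n)
    v = [[0] * c for _ in range(c)]
    for plet in itertools.product(*samples):
        order = sorted(range(c), key=lambda i: plet[i])  # no ties: F continuous
        for rank0, i in enumerate(order):
            v[i][rank0] += 1
    u = [[v[i][j] / prod_n for j in range(c)] for i in range(c)]
    N = sum(n)
    p = [ni / N for ni in n]
    L = [sum(a[j] * u[i][j] for j in range(c)) for i in range(c)]
    A = sum(a[j] * a[l] * (math.comb(c - 1, j) * math.comb(c - 1, l)
            / ((2 * c - 1) * math.comb(2 * c - 2, j + l)) - 1 / c**2)
            for j in range(c) for l in range(c))
    Lbar = sum(p[i] * L[i] for i in range(c))
    return (N * (c - 1)**2 / (A * c**2)) * sum(p[i] * (L[i] - Lbar)**2
                                               for i in range(c))
```

Under $H_0$ and large $N$ the returned value would be referred to a $\chi^2$ distribution with $c - 1$ degrees of freedom, as shown later in the paper.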

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of finding an optimal stopping, scheduling, and terminal action rule for sequentially choosing between two simple hypotheses when there is a time delay, assumed to have a known exponential distribution, in obtaining observations.
Abstract: This paper reconsiders the usual sequential decision problem of choosing between two simple hypotheses $H_0$ and $H_1$, in terms of iirv when there is a time delay, assumed to have a known exponential distribution, in obtaining observations. A basic assumption underlying much of the current analysis, is that results of taking observations are considered as immediately available. In other words, it is assumed that there is no time delay between the decision to take an observation and obtaining the result of the observation. This, of course, can be a tremendous limitation to the applicability of the theory. In actuality, such time lags can be substantial and taking an observation often involves experimentation. One important example, where this time delay often considerably inhibits the use of sequential analysis is medical experimentation. Here, a long time may elapse between the application and the result of a treatment. The theory of sequential analysis is considered, explicitly taking account of time lags. At any point in time the decision maker must decide whether to stop and take action now or to continue and schedule more experiments. If he continues he must also decide how many more experiments to schedule. The problem basically then is to find an optimal procedure incorporating a stopping, scheduling and terminal action rule. There is an interesting interplay among these three; and optimal stopping rules, currently used for some problems, may not be optimal when scheduling factors are considered. The usual losses related to decision errors are specified and linear cost assumptions, with regard to amount of experimentation and time until a final decision, are made. Time until a terminal decision is an important variable. If it is decided to continue observation then scheduling many experiments will result in a small expected waiting time until the next result. However, this advantage must be balanced with the cost of scheduling these experiments. 
Finally, all the previous must be weighed with the loss of taking immediate action with only the currently available information. Bayes procedures are derived. The information state, at any time, because of the exponential assumption, will be described by $(n, \pi)$ where $\pi$ is the current posterior probability of $H_0$ and $n$ the number of results outstanding from tests already scheduled. As indicated in Section 2, when a known exponential time delay distribution is assumed, possible decision changes should be made only when test results are obtained. In Section 3, the usually used Sequential Probability Ratio Test (SPRT) stopping rule is studied. Here there are two values $0 < \pi' < \pi'' < 1$ such that experimentation stops as soon as the posterior probability $\pi$ leaves the interval $(\pi', \pi'')$; the associated optimal scheduling rule is characterized by a bounded function $z(\pi)$: if $n > z(\pi)$ we schedule no experiments. On the other hand, if $n \leqq z(\pi)$ then $z(\pi) - n$ more experiments are scheduled. The functional equation approach of Dynamic Programming is used and provides a computational method for approximating $z(\pi)$. The general case, where the problem is to find an optimal stopping and scheduling rule, is studied in Section 4. Various results about the optimal stopping region, in the $(n, \pi)$ plane, are derived and it is shown that the optimal procedure stops with probability one. The optimal stopping region is a kind of generalized SPRT described by functions $0 < \pi_1(n) \leqq \pi_2(n) < 1$ such that $(n, \pi)$ is a continuation state as long as $\pi_1(n) < \pi < \pi_2(n)$. Also, it is shown that there exist two bounded functions $z_1(\pi) \leqq z_2(\pi) \leqq \mathbf{M} < \infty$ such that if $(n, \pi)$ is a continuation point the optimal scheduling quantity is $y(n, \pi) = z_1(\pi) - n$ if $n \leqq z_1(\pi)$, and $y(n, \pi) = 0$ if $n \geqq z_2(\pi)$. When a continuous approximation to the problem is used, allowing $n$ to take on a continuous range of values, a stronger result is proven. Here, the two functions $z_1(\pi)$ and $z_2(\pi)$ may be taken as equal.
The results for optimal scheduling rules have similarities to some problems studied in Inventory theory, see [5].
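The interaction between the posterior $\pi$ and an SPRT-type stopping region can be sketched with a generic Bayes update and two-threshold rule (the densities and thresholds below are illustrative assumptions, not the paper's derived boundaries):

```python
import math

def posterior_update(pi, x, f0, f1):
    """Bayes update of the posterior probability of H0 when a delayed
    test result x finally arrives (densities f0 under H0, f1 under H1)."""
    num = pi * f0(x)
    return num / (num + (1 - pi) * f1(x))

def sprt_action(pi, lo, hi):
    """SPRT-type stopping rule in terms of the posterior: stop for H1 if
    pi <= lo, stop for H0 if pi >= hi, otherwise continue (a scheduling
    rule would then decide how many further experiments to order)."""
    if pi <= lo:
        return "accept H1"
    if pi >= hi:
        return "accept H0"
    return "continue"

# Illustrative densities (kernels suffice, constants cancel in the update):
f0 = lambda x: math.exp(-x * x / 2)          # H0: N(0, 1)
f1 = lambda x: math.exp(-(x - 1) ** 2 / 2)   # H1: N(1, 1)
```

Because of the exponential delay assumption, such updates (and hence possible decision changes) occur only at the arrival times of outstanding results, which is what makes $(n, \pi)$ a sufficient information state.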

Patent
03 Jun 1970
TL;DR: A matched pair of fiber optics image encoding-decoding bundles is fabricated by the steps of coiling a continuous optical fiber into a toroidal or similar shaped bundle of fiber convolutions as discussed by the authors.
Abstract: A matched pair of fiber optics image encoding-decoding bundles is fabricated by the steps of coiling a continuous optical fiber into a toroidal or similar shaped bundle of fiber convolutions; securing together a first section of each fiber convolution to form a first portion of the bundle, the remainder of each fiber convolution comprising a second section thereof; scrambling at least one second section of the fiber convolutions; securing together the second section of each fiber convolution so as to form a second portion of the bundle; and transversely cutting each fiber convolution at the first and second portions to effect two bundles with opposed end faces thereof defined by respective ones of the fiber endings. Although the opposite end faces of each bundle are arranged in different geometrical patterns of fiber endings, the corresponding end faces of the pair of bundles are arranged in substantially identical geometrical patterns of fiber endings.

Journal ArticleDOI
TL;DR: In this article, the authors analyze the results of coolant pressure drop measurements conducted on a test section comprising 16 rods in a square arrangement.
Abstract: This paper analyzes the results obtained from coolant pressure drop measurements, conducted on a test section comprising 16 rods in a square arrangement.The rod bundle, mechanically assembled by me...

Journal ArticleDOI
TL;DR: In this paper, the Neyman-Pearson theory of hypothesis testing is applied to the problem of finding uniformly most powerful (UMP) level $\alpha$ tests relative to a family of a priori distributions.
Abstract: Let $X$ be a random variable with a family of possible distributions for $X$ indexed by $\lambda\in\Omega. \lambda$ is the realization of a random variable $\Lambda$ taking values in the space $\Omega$. For each $\lambda$, let $f_\lambda$ denote the conditional density of $X$ given $\Lambda = \lambda$ with respect to some $\sigma$-finite measure $\mu$. Let $\mathscr{G}$ be a family of possible a priori distributions $G$ for $\Lambda$. After observing $X$, we wish to test $H: \lambda\in\omega$ against $K: \lambda\in\omega'$ where $\omega$ is a subset of $\Omega$ and $\omega'$ its complement. To determine good tests for this problem, we use an analysis similar to the one of the Neyman-Pearson theory of hypothesis testing. Analogous to the type I and type II errors of the Neyman-Pearson theory are: type (i) error: $\Lambda\in\omega'$ decided and $\Lambda\in\omega$ occurs, type (ii) error: $\Lambda\in\omega$ decided and $\Lambda\in\omega'$ occurs. Analogous to the problem of finding uniformly most powerful level $\alpha$ tests is the problem: subject to: $P_G$(type (i) error) $\leqq \alpha$ for all $G\in\mathscr{G}$ minimize $P_G$(type (ii) error) uniformly for $G\in\mathscr{G}$. A test which achieves this is called a uniformly most powerful (UMP) level $\alpha$ test relative to $\mathscr{G}$. The existence of such UMP level $\alpha$ tests is proved for this hypotheses testing problem for various choices of the family of a priori distributions $\mathscr{G}$. As might be expected these results are closely related to the Neyman-Pearson theory of hypotheses testing. The second section gives four simple situations where the problem of finding UMP level $\alpha$ tests relative to a family of a priori distributions $\mathscr{G}$ reduces to an ordinary testing problem. In the third section, Theorem 1 gives for this testing problem an analogue of the concept of a least favorable distribution from the classical theory of hypotheses testing. 
Theorem 1 is used to prove Theorem 2 which gives the existence of a UMP level $\alpha$ test when $X$ is real-valued, $\Omega$ is a subset of the real numbers, the family of distributions indexed by $\lambda\in\Omega$ has a monotone likelihood ratio in $x$, and the family $\mathscr{G}$ satisfies a certain condition. The two theorems are applied to several examples. In the following, as always, a test (randomized) is a function $\delta$ defined on the range of $X$ which takes on values in the interval $\lbrack 0, 1\rbrack$. If $X = x$ is observed, $K$ is decided to be true with probability $\delta(x)$ and $H$ with probability $1 -\delta(x)$. For any test $\delta$ and $G\in\mathscr{G}$ we have \begin{equation*}\tag{1} P_G(\text{type (i) error of} \delta) = \int \int_\omega \delta(x)f_ \lambda(x) dG(\lambda) d \mu \quad\text{and}\end{equation*} \begin{equation*}\tag{2} P_G(\text{type (ii) error of} \delta) = \int \int_{\omega'} (1 - \delta(x))f_\lambda(x) dG(\lambda) d\mu\end{equation*} where the integral involving $X$ is over the entire space of $X$. It will often be convenient to think of $\lambda$ as a fixed but unknown parameter and the test $\delta$ as a test for the classical testing problem $H: \lambda\in\omega$ against $K: \lambda\in\omega'$. Changing the order of integration in (1) by Fubini's theorem, we have for the test $\delta$ the following relationship between the type I error of $\delta$, considered as a test for the classical problem, and the type (i) error of $\delta$, considered as a test for the problem of this paper: \begin{equation*}\tag{3} P_G(\text{type (i) error of} \delta) = \int_\omega P_\lambda (\text{type I error of} \delta) dG(\lambda).\end{equation*} In the same way, we have \begin{equation*}\tag{4} P_G(\text{type (ii) error of} \delta) = \int_{\omega'} P_\lambda (\text{type II error of} \delta) dG(\lambda).\end{equation*} We will now prove the existence of UMP level $\alpha$ tests for various families of a priori distributions.
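Relation (3) can be evaluated numerically for a simple monotone-likelihood-ratio family. The sketch below uses our own illustrative example (not one of the paper's): $X \sim N(\lambda, 1)$, threshold test $\delta(x) = 1\{x > c\}$, and a discrete prior supported on $\omega = \{\lambda \leq 0\}$. It shows the type (i) error as a prior-weighted average of classical type I errors, maximized by priors concentrating at the boundary of $\omega$ (the least favorable behavior of Theorem 1):

```python
import math

def normal_sf(x):
    """Standard normal survival function P(Z > x)."""
    return 0.5 * (1 - math.erf(x / math.sqrt(2)))

def type_i_error_G(c, prior):
    """P_G(type (i) error) for the threshold test delta(x) = 1{x > c} with
    X ~ N(lambda, 1): by relation (3), the prior-weighted average over
    omega of the classical type I errors P_lambda(X > c).
    `prior` is a list of (lambda, weight) atoms supported on omega."""
    return sum(w * normal_sf(c - lam) for lam, w in prior)
```

Since $P_\lambda(X > c)$ is increasing in $\lambda$ (monotone likelihood ratio), any prior that moves mass away from the boundary point $\lambda = 0$ can only decrease the type (i) error, so controlling it at the degenerate boundary prior controls it for the whole family $\mathscr{G}$.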

Journal ArticleDOI
TL;DR: In this paper, the authors study the asymptotic behavior of the mean, the variance, and the limit distribution of $S(t)$, the number of different sizes of mutant populations at time $t$, for a class of growing processes which, suitably normalized, converge weakly to a Bessel diffusion.
Abstract: Motivated by ecological and genetic phenomena, Karlin and McGregor [3] introduced the following model to describe the continued formation and growth of mutant biological populations. It is assumed that a new mutant population arises at each event time of a stochastic process (referred to as the input process) $\{v(t),t > 0\}$ whose state space is the non-negative integers. Each new mutant population begins its evolution with a fixed number of elements and evolves according to the laws of a continuous time Markov chain $\mathscr{P}$ with stationary transition probability function $P_{i,j}(t)\quad i,j = 0, 1, 2, \cdots; t \geqq 0.$ We assume that all populations evolve according to the same Markov Chain and independent of one another. In terms of this structure, the basic question which we consider in this work can be formulated in the following manner: (A) Given an input process $\{v(t),t > 0\}$ and the individual growing process $\mathscr{P}$, determine the asymptotic behavior as $t \rightarrow \infty$ of the mean and variance of $S(t) = \{$number of different sizes of mutant populations at time $t\}$ and determine the limit distribution as $t \rightarrow \infty$ of $S(t)$ appropriately normalized. $S(t)$ is a special functional of the vector process $N(t) = \{N_0(t), N_1(t), N_2(t), \cdots\}\quad t > 0$ where $N_k(t) = \text{number of mutant populations with exactly} k \text{members at time} t$ and may be interpreted as a measure of the fluctuations in population size. We have restricted our considerations to this special case because it serves as a model problem for more general situations and possesses all the subtle difficulties of the general case. The random variable $S(t)$ can also be identified as the number of distinct occupied states at time $t$ among all Markov Chains which have begun their evolution up to that time. 
In the subsequent discussion we will refer to $\{S(t), t > 0\}$ as the "occupied states" process generated by the input process $\{v(t),t > 0\}$ and the Markov Chain $\mathscr{P}$. Without loss of generality we identify the state 0 as the initial state of all evolving Markov Chains and $-1$ as an absorbing state if absorption is possible. In this paper we introduce "occupied states" processes generated by a class of null recurrent, transient, and absorbing barrier Birth and Death processes and a Poisson input process. The special feature of this class is that with the normalization $Y(t;u) = t^{-1} X(t^\alpha u),\quad\alpha > 0$ $(\{X(t), t > 0\}$ is the growing process $\mathscr{P}$), the process $Y(t;u)$ converges weakly in the Markov sense as $t \rightarrow \infty$ to a Bessel diffusion (see C. Stone [8]). The main idea (also applicable to more general growing processes $\mathscr{P}$ is that one requires local limit theorems, and under some circumstances, specification of the rate of convergence of the transition density of $Y(t;u)$ to the density of the limiting diffusion in order to prescribe exact asymptotic formulas for $ES(t)$ and $\operatorname{Var} S(t)$ and to prove a central limit theorem for $S(t)$. The results of this paper are in sharp contrast with the asymptotic formulas for $ES(t)$ and $\operatorname{Var} S(t)$ which appear in the companion paper [6], where the growing process $\mathscr{P}$ is a general positive recurrent Markov chain and the input process remains Poisson. Section 2 contains basic definitions, some intuitive discussion, and precise statements of the main results on asymptotic behavior of $ES(t)$ and $\operatorname{Var} S(t)$. In Section 3, we present detailed proofs of the theorems of Section 2, and we conclude with a central limit theorem for $S(t)$ in Section 4. The appendix contains some technical lemmas which are essential for asymptotic formulas that incorporate speed of convergence theorems.
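The occupied-states functional $S(t)$ is easy to simulate for a concrete choice of input and growing process. In this sketch the choices are our own illustrative assumptions (Poisson input; Yule pure-birth growth, for which the size of a population of age $a$ is geometric with parameter $e^{-\lambda a}$), not the specific Birth and Death processes treated in the paper:

```python
import math
import random

def simulate_S(T, input_rate, birth_rate, rng):
    """Simulate the 'occupied states' count S(T): the number of distinct
    sizes among all mutant populations born by a Poisson input process on
    [0, T], each growing as a Yule (pure-birth) process from one member.
    Size at age a is Geometric(p) on {1, 2, ...} with p = exp(-birth_rate*a),
    sampled here by inversion."""
    t, sizes = 0.0, set()
    while True:
        t += rng.expovariate(input_rate)        # next mutant arrival time
        if t >= T:
            break
        p = math.exp(-birth_rate * (T - t))
        u = 1.0 - rng.random()                  # u in (0, 1]
        size = 1 if p == 1.0 else int(math.log(u) / math.log(1 - p)) + 1
        sizes.add(size)
    return len(sizes)
```

Averaging `simulate_S` over many runs gives Monte Carlo estimates of $ES(T)$ and $\operatorname{Var} S(T)$, the quantities whose exact asymptotics the paper derives via local limit theorems.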

Book ChapterDOI
01 Jan 1970
TL;DR: In this paper, a generalization of Darcy's law to the case of orthogonal cartesian coordinates x, y, z is presented, and the principle of conservation of mass leading to the continuity equation is discussed.
Abstract: In the preceding chapter Darcy’s law has been formulated in the form $$\upsilon = - k\frac{{d\varphi }}{{ds}}$$ (3.1) where s is the direction of flow. Only in very special cases, such as the flow through a cylindrical tube filled with soil, is the direction s known beforehand. Generally, the direction of flow is different in different points of the field, and the determination of the direction of flow constitutes part of the problem. It would thus be preferable if eqn. (3.1) could be rewritten in a form involving only differentiations with respect to coordinates that are fixed beforehand: for instance, orthogonal cartesian coordinates x, y, z (Fig. 3.1). Such a generalization is presented in this chapter in section 3.1. In section 3.2. the principle of conservation of mass, leading to the continuity equation, is discussed.
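In coordinate form the generalization reads $v_x = -k\,\partial\varphi/\partial x$, and similarly for $y$ and $z$. A minimal numeric sketch (the helper names are ours), using a central-difference gradient of the head field $\varphi$:

```python
def darcy_velocity(k, grad_phi):
    """Generalized Darcy's law, eqn (3.1) written per fixed cartesian axis:
    v_i = -k * dphi/dx_i for i in (x, y, z)."""
    return tuple(-k * g for g in grad_phi)

def numeric_grad(phi, x, y, z, h=1e-6):
    """Central-difference approximation of the gradient of phi(x, y, z)."""
    return ((phi(x + h, y, z) - phi(x - h, y, z)) / (2 * h),
            (phi(x, y + h, z) - phi(x, y - h, z)) / (2 * h),
            (phi(x, y, z + h) - phi(x, y, z - h)) / (2 * h))
```

The point of the coordinate form is exactly what the chapter stresses: the direction of flow need not be known beforehand, since it emerges from the gradient of $\varphi$ at each point of the field.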


Book ChapterDOI
01 Jan 1970
TL;DR: This long, very intensive, and partially very confused discussion has been rearranged in six sections: (1) Direct and Indirect Evidence of X-rays and Low-Energy Cosmic Rays; (2) The Boundary Layer between the Stable Gas Phases; (3) Theoretical Aspects of Interstellar Gas Dynamics and the Formation of Clouds; (4) Observational Aspects of Interstellar Gas Dynamics and the Formation of Clouds; (5) Observations of the Rarefied, Neutral Intercloud Medium and of the Interstellar Electron Density; (6) The Dynamical Theory of HII Regions.
Abstract: This long, very intensive, and partially very confused discussion has been rearranged in six sections: (1) Direct and Indirect Evidence of X-rays and Low-Energy Cosmic Rays; (2) The Boundary Layer between the Stable Gas Phases; (3) Theoretical Aspects of Interstellar Gas Dynamics and the Formation of Clouds; (4) Observational Aspects of Interstellar Gas Dynamics and the Formation of Clouds; (5) Observations of the Rarefied, Neutral Intercloud Medium and of the Interstellar Electron Density; (6) The Dynamical Theory of HII Regions. Section 2 has been transferred from the Discussion on Monday, September 8 (Chapter 2). To Section 6 have been added remarks made during various discussions. A couple of remarks have been transferred to other Discussions. Part of the Discussion (in Section 5) was very confused; an attempt has been made to condense and to make as much sense as possible out of what was said. For the convenience of the reader I recapitulate a few concepts, the (mis)-use of which led partially to the confusion: i. The hydrogen surface density or column density \(N_\text H = \int {n_\text H \text dl}\) (\(N_\text H\) is sometimes called the hydrogen measure HM). ii. The dispersion measure \(\text D\text M = \int {n_e \text dl}\) (DM is often called the electron surface density \(N_e\)). iii. The rotation measure \(\text R\text M = c_1 \int {n_e B_{||} \,\text dl}\). iv. The emission measure \(\text E\text M = \int {n_e^2 \text dl}\).
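All four recapitulated measures are line-of-sight integrals and can be sketched with a single trapezoidal helper. The constant $c_1 = 0.81$ used for RM below is our assumption (its value in the conventional pc cm$^{-3}$ $\mu$G units); the text leaves $c_1$ unspecified:

```python
def line_of_sight_measures(l, n_H, n_e, B_par, c1=0.81):
    """Trapezoidal line-of-sight integrals for the four measures:
    NH = int n_H dl, DM = int n_e dl, RM = c1 * int n_e B_par dl,
    EM = int n_e**2 dl.  The lists give values sampled at positions l
    along the line of sight.  c1 = 0.81 is an assumed unit convention."""
    def trapz(f):
        return sum((f[i] + f[i + 1]) * (l[i + 1] - l[i]) / 2
                   for i in range(len(l) - 1))
    NH = trapz(n_H)
    DM = trapz(n_e)
    RM = c1 * trapz([ne * b for ne, b in zip(n_e, B_par)])
    EM = trapz([ne * ne for ne in n_e])
    return NH, DM, RM, EM
```

The distinction matters for the confusion the text mentions: DM weights the electron density linearly while EM weights it quadratically, so the two respond very differently to clumping along the line of sight.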

Book ChapterDOI
01 Jan 1970
TL;DR: In this paper, the Burgers vector is separated from the dislocation and placed in a separate space: the L-space contains the dislocation lines, while the b-space contains those Burgers vector configurations which arise from the fulfillment of the node condition.
Abstract: In Section 7.4 it was shown that reactions take place between dislocations, i.e. they form nodes and coalesce if the energy of the configuration is thus lowered. At the nodes, Frank’s node condition for the Burgers vectors (continuity of Burgers vectors) has to be fulfilled [Eq. (7.4-1)]: $$ \sum {b_{in} } = \sum {b_{out} } . $$ (9.1-1) The Burgers vector is attributed to a dislocation but it is not localized on it. Here we choose a representation in which the Burgers vector is completely separated from the dislocation and placed into a separate space (Bollmann, 1962, 1964). The space containing the dislocation lines—the real crystal—is termed the L-space, and that of the Burgers vectors the b-space. While the L-space contains the dislocation lines together with their nodes, the b-space contains those Burgers vector configurations which arise from the fulfillment of the node condition. It will be shown that a strict duality exists between the two configurations. The situation is comparable to the force polygons of structural frameworks—the so-called Cremona plans.
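Frank's node condition (9.1-1) is a componentwise conservation law for the Burgers vectors at a node, and is trivial to check numerically (the helper name is ours):

```python
def node_condition_holds(b_in, b_out, tol=1e-9):
    """Frank's node condition (9.1-1): the Burgers vectors of dislocations
    entering a node must sum, componentwise, to those of the dislocations
    leaving it (continuity of Burgers vectors).  b_in and b_out are lists
    of 3-component vectors."""
    sums = [sum(b[i] for b in b_in) - sum(b[i] for b in b_out)
            for i in range(3)]
    return all(abs(s) < tol for s in sums)
```

For example, two dislocations with Burgers vectors $(1,0,0)$ and $(0,1,0)$ may coalesce into one with $(1,1,0)$; this closure of the vectors is exactly what produces the closed polygons of the dual b-space configuration.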

Journal ArticleDOI
TL;DR: An answer is given in the special case where \phi_n = \phi and M_n = M , and a theorem is proved that gives deeper insight into the condition under which the theorem of Section II holds.
Abstract: Given a probability space (\Omega, S, \mu) ; X_1, \{ V_n \}(n = 1, 2, \cdots), K -component complex-valued random variables on (\Omega, S, \mu), K a positive integer, whose components have finite variance and mean zero; and two sequences of matrices \{ \phi_n \}, \{ M_n \} where the \{ \phi_n \} are K \times K and the \{M_n\} are K_n \times K, K_n positive integers, matrices whose elements are complex constants. We consider the stochastic process \{X_n\} where \begin{equation} X_{n+1} = \phi_n X_n + V_n \qquad n \geq 1 \end{equation} and the associated sampling procedure \begin{equation} Y_n = M_n X_n \qquad n \geq 1. \end{equation} We pose the following question. If in the "prediction problem" and the "filtering problem" we let \bar{X}_{n+1} and X{\prime}_{n+1} , respectively, be the "best estimates," under what circumstances do the differences X_{n+1} - \bar{X}_{n+1} and X_{n+1} - X{\prime}_{n+1} "approach 0 as n \rightarrow \infty ". In Section I we give definitions and some properties. In Section II, we give an answer in the special case where \phi_n = \phi and M_n = M (i.e., they are independent of n ). In Section III we prove a theorem that gives a deeper insight into the condition under which the theorem proved in Section II holds.
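For the scalar case K = 1 the prediction/filtering recursion for this model is the familiar Kalman predict/update step. The sketch below is a generic version under our own assumptions: we add an observation-noise variance r to keep the filter well defined in general, whereas the paper's sampling Y_n = M X_n is exact; setting r = 0 recovers that exact-observation case.

```python
def kalman_step(x_hat, P, phi, q, m, r, y):
    """One predict/update cycle for the scalar model
    X_{n+1} = phi * X_n + V_n (Var V_n = q), observed as Y_n = m * X_n + noise
    (noise variance r; r = 0 gives the exact sampling of the paper).
    Returns the filtered estimate and its error variance."""
    # predict the next state and its error variance
    x_pred = phi * x_hat
    P_pred = phi * phi * P + q
    # update with the new measurement y
    S = m * m * P_pred + r          # innovation variance
    K = P_pred * m / S              # Kalman gain
    x_new = x_pred + K * (y - m * x_pred)
    P_new = (1 - K * m) * P_pred
    return x_new, P_new
```

With r = 0 and m invertible the update reproduces the state exactly from the sample, so the question the abstract poses, whether the estimation error vanishes as n grows, turns on the interplay of \phi, the sampling matrices, and the noise, which is what Sections II and III analyze.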