
Showing papers on "Section (fiber bundle) published in 1972"


Journal ArticleDOI
TL;DR: In this article, the authors studied mathematical models for the unidirectional propagation of long waves in systems that manifest nonlinear and dispersive effects of a particular but common kind.
Abstract: Several topics are studied concerning mathematical models for the unidirectional propagation of long waves in systems that manifest nonlinear and dispersive effects of a particular but common kind. Most of the new material presented relates to the initial-value problem for the equation $u_t + u_x + uu_x - u_{xxt} = 0$, (a) whose solution $u(x,t)$ is considered in a class of real nonperiodic functions defined for $-\infty < x < \infty$, $t \geq 0$. As an approximation derived for moderately long waves of small but finite amplitude in particular physical systems, this equation has the same formal justification as the Korteweg-de Vries equation $u_t + u_x + uu_x + u_{xxx} = 0$, (b) with which (a) is to be compared in various ways. It is contended that (a) is in important respects the preferable model, obviating certain problematical aspects of (b) and generally having more expedient mathematical properties. The paper divides into two parts where respectively the emphasis is on descriptive and on rigorous mathematics. In section 2 the origins and immediate properties of equations (a) and (b) are discussed in general terms, and the comparative shortcomings of (b) are reviewed. In the remainder of the paper (sections 3, 4) - which can be read independently of the preceding discussion - an exact theory of (a) is developed. In section 3 the existence of classical solutions is proved; and following our main result, theorem 1, several extensions and sidelights are presented. In section 4 solutions are shown to be unique, to depend continuously on their initial values, and also to depend continuously on forcing functions added to the right-hand side of (a). Thus the initial-value problem is confirmed to be classically well set in the Hadamard sense. In appendix 1 a generalization of (a) is considered, in which dispersive effects within a wide class are represented by an abstract pseudo-differential operator. The physical origins of such an equation are explained in the style of section 2, two examples are given deriving from definite physical problems, and an existence theory is outlined. In appendix 2 a technical fact used in section 3 is established.
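A quick way to see the contrast between (a) and (b) is to linearize both about $u = 0$; the dispersion relations below are standard and are added here for orientation rather than taken from the abstract.

$$\text{(a)}\; u_t + u_x - u_{xxt} = 0:\;\; \omega(k) = \frac{k}{1+k^{2}}, \qquad \text{(b)}\; u_t + u_x + u_{xxx} = 0:\;\; \omega(k) = k - k^{3}.$$

For (a) the phase velocity $\omega/k = 1/(1+k^{2})$ stays bounded for all wavenumbers, whereas for (b) it equals $1 - k^{2}$ and is unbounded below as $k \rightarrow \infty$; this unbounded response to short waves is one of the problematical aspects of (b) referred to above.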

1,856 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of estimating the number of trials of a multinomial distribution, from an incomplete observation of the cell totals, under constraints on the cell probabilities, is investigated.
Abstract: This paper deals with the problem of estimating the number of trials of a multinomial distribution, from an incomplete observation of the cell totals, under constraints on the cell probabilities. More specifically let $(n_1, \cdots, n_k)$ be distributed according to the multinomial law $M(N; p_1, \cdots, p_k)$ where $N$ is the number of trials and the $p_i$'s are the cell probabilities, $\sum^k_{i=1}p_i$ being equal to 1. Suppose that only a proper subset of $(n_1, \cdots, n_k)$ is observable, that $N, p_1, \cdots, p_k$ are unknown and that $N$ is to be estimated. Without loss of generality, $(n_1, \cdots, n_{l-1}), l \leqq k$ may be taken to be the observable random vector. For fixed $N, (n_1, \cdots, n_{l-1}, N - n)$ has the multinomial distribution $M(N; p_1, \cdots, p_l)$ where $n$ denotes $\sum^{l-1}_{i=1}n_i$ and $p_l$ denotes $1 - \sum^{l-1}_{i=1}p_i$. If the parameter space is such that $N$ can take any nonnegative integral value and each $p_i$ can take any value between 0 and 1 such that $\sum^{l-1}_{i=1}p_i < 1$, then $N$ is not estimable from the observed $(n_1, \cdots, n_{l-1})$ alone. In specific situations, it might, however, be possible to postulate constraints of the type \begin{equation*}\tag{1.1} p_i = f_i(\theta),\quad i = 1, \cdots, l\end{equation*} where $\theta = (\theta_1, \cdots, \theta_r)$ is a vector of $r$ independent parameters and $f_i$ are known functions. This may lead to estimability of $N$. The problem of estimating $N$ in such a situation is studied here. The present investigation is motivated by the following problem. Experiments in particle physics often involve visual scanning of film containing photographs of particles (occurring, for instance, inside a bubble chamber). The scanning is done with a view to counting the number $N$ of particles of a predetermined type (these particles will be referred to as events). But owing to poor visibility caused by such characteristics as low momentum, the distribution and configuration of nearby track patterns, etc., some events are likely to be missed during the scanning process. The question, then, is: How does one get an estimate of $N$? The usual procedure of estimating $N$ is as follows. Film containing the $N$ (unknown) events is scanned separately by $w$ scanners (ordered in some specific way) using the same instructions. For each event $E$ let a $w$-vector $Z(E)$ be defined, such that the $j$th component $Z_j$ of $Z(E)$ is 1 if $E$ is detected by the $j$th scanner and is 0 otherwise. Let $\mathscr{J}$ be the set of $2^w w$-vectors of 1's and 0's and let $I_0$ be the vector of 0's. Let $x_I$ be the number of events $E$ whose $Z(E) = I$. For $I \in \mathscr{J} - \{I_0\}$, the $x_I$'s are observed. A probability model is assumed for the results of the scanning process. That is, it is assumed that there is a probability $p_I$ that $Z(E)$ assumes the value $I$ and that these $p_I$'s are constrained by equations of the type (1.1). (These constraints vary according to the assumptions made about the scanners and events, thus giving rise to different models. An example of $p_I(\theta)$ would be $E(v^{\Sigma^w_{j=1}I_j}(1 - v)^{w-\Sigma^w_{j=1}I_j})$ where $I_j$ is the $j$th component of $I$ and the expectation is taken with respect to the two-parameter beta density for $v$. This is the result of assuming that all scanners are equally efficient in detecting events, that the probability $v$ that an event is seen by any scanner is a random variable and that the results of the different scans are locally independent. For a discussion of various models, see Sanathanan (1969), Chapter III.)
$N$ is then estimated using the observed $x_I$'s and the constraints on the $p_I$'s, provided certain conditions (e.g., the minimum number of scans required) are met. The following formulation of the problem of estimating $N$, however, leads to some systematic study including a development of the relevant asymptotic distribution theory for the estimators. The $Z(E)$'s may be regarded as realizations of $N$ independent identically distributed random variables whose common distribution is discrete with probabilities $p_I$ at $I$ (In particle counting problems, it is usually true that the particles of interest are sparsely distributed throughout the film on account of their Poisson distribution with low intensity. Thus in spite of the factors affecting their visibility outlined earlier, the events can be assumed to be independent.). The joint distribution of the $x_I$'s is, then, multinomial $M(N; p_I, I \in \mathscr{J})$. The problem of estimating $N$ is now in the form stated at the beginning of this section. Since the estimate depends on the constraints provided for the $p_I$'s, it is important to test the "fit" of the model selected. The conditional distribution of the $x_I$'s $(I \neq I_0)$ given $x$ is multinomial $M(x; p_I/p, I \neq I_0)$ where $x$ is defined as $\sum_{I \neq I_0} x_I$ and $p$ as $\sum_{I \neq I_0}p_I$. The corresponding $\chi^2$ goodness of fit test may therefore be used to test the adequacy of a model in question. Various estimators of $N$ are considered in this paper and among them is, of course, the maximum likelihood estimator of $N$. Asymptotic theory for maximum likelihood estimation of the parameters of a multinomial distribution has been developed before for the case where $N$ is known but not for the case where $N$ is unknown. Asymptotic theory related to the latter case is developed in Section 4. The result on the asymptotic joint distribution of the relevant maximum likelihood estimators is stated in Theorem 2. A second method of estimation considered is that of maximizing the likelihood based on the conditional probability of observing $(n_1,\cdots, n_{l-1})$, given $n$. This method is called the conditional maximum likelihood (C.M.L.) method. The C.M.L. estimator of $N$ is shown (Theorem 2) to be asymptotically equivalent to the maximum likelihood estimator. Section 5 contains an extension of these results to the situation involving several multinomial distributions. This situation arises in the particle scanning context when the detected events are classified into groups based on some factor like momentum which is related to visibility of an event, and a separate scanning record is available for each group. A third method of estimation considered is that of equating certain linear combinations of the cell totals (presumably chosen on the basis of some criterion) to their respective expected values. Asymptotic theory for this method is given in Section 6. This discussion is motivated by a particular case which is applicable to some models in the particle scanning problem, using a criterion based on the method of moments for the unobservable random variable, given by the number of scanners detecting an event (Discussion of the particular case can be found in Sanathanan (1969) Chapter III.). In the next section we give some definitions and a preliminary lemma.
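As an illustration of the conditional (C.M.L.) idea described above, the sketch below fits the simplest possible detection model, in which all $w$ scanners are assumed equally efficient with a common, nonrandom detection probability; the function name, the grid search and the toy data are invented for this example and are not taken from the paper.

import math
import numpy as np

def cml_estimate_N(detection_counts, w):
    """Rough conditional-ML sketch for the particle-scanning problem.

    detection_counts[s-1] = number of observed events detected by exactly s
    of the w scanners (s = 1, ..., w); events detected by no scanner are
    unobservable.  Illustrative model only: each scanner detects each event
    independently with the same probability p.
    """
    x = np.asarray(detection_counts, dtype=float)
    s = np.arange(1, w + 1)
    n = x.sum()                                    # total observed events
    binom = np.array([math.comb(w, k) for k in s])

    def cond_loglik(p):
        # P(exactly s detections | at least one detection)
        cell = binom * p**s * (1.0 - p)**(w - s)
        cell /= 1.0 - (1.0 - p)**w
        return float(np.sum(x * np.log(cell)))

    grid = np.linspace(0.01, 0.99, 981)            # crude 1-D maximization
    p_hat = grid[np.argmax([cond_loglik(p) for p in grid])]

    # estimate N from E[n] = N * P(detected at least once)
    N_hat = n / (1.0 - (1.0 - p_hat)**w)
    return N_hat, p_hat

# toy usage: 3 scanners; 50/30/10 events seen by exactly 1/2/3 scanners
print(cml_estimate_N([50, 30, 10], w=3))

The models treated in the paper are richer (for example, detection probabilities mixed over a beta density), and this sketch is meant only to convey the general shape of a conditional estimate of $N$: estimate the cell probabilities from the likelihood conditional on being observed, then divide the observed count by the estimated probability of being observed at all.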

195 citations


Journal ArticleDOI
TL;DR: In this paper, it is proved that the set $G_n^r$ of special linear series of degree $n$ and projective dimension at least $r$ on a curve of genus $g$ is nonempty whenever $d = (r+1)(n-r) - rg \geqq 0$.
Abstract: Let X be a complete nonsingular curve of genus g defined over an algebraically closed field k of any characteristic, and let J be the Jacobian variety. Denote by $G_n^r$ the set of points of J which are images of linear series of degree n and projective dimension at least r, and put $d = (r + 1)(n - r) - rg$. Assume that $G_n^r$ is formed by special series, that is, $(n - r) < g$. It is repeatedly asserted in the classical literature that $G_n^r$ depends upon at least d parameters; in particular, if $d \geqq 0$, then $G_n^r$ is nonempty. For r = 1 the matter is treated in section 4 of Riemann's "Theorie der Abel'schen Functionen" [11] and in lecture 31 of Hensel-Landsberg [2]; the general case is treated in Brill-Noether [1] and in lecture 57 and appendix G of Severi [13]. More recently, Martens [7] proved that $G_n^r$ is an algebraic set whose components each have dimension at least d and at most (n - 2r), and that if $1 \leqq r \leqq (n - r) < (g - 2)$, then the maximum occurs if and only if X is hyperelliptic, provided $G_n^r$ is nonempty. Meis [9] gave an analytic proof that $G_n^1$ is nonempty if $d \geqq 0$, or equivalently that X can be displayed as a branched covering of the sphere with at most (g + 3)/2 sheets. We offer a proof of the existence assertion in full generality. In fact, we prove that $G_n^r$ meets any (closed) subvariety V of J with dimension at least (g - d). It follows formally that, given automorphisms $f_i$ of J (such as translations), an intersection $(f_1(G_{n_1}^{r_1}) \cap \cdots \cap f_p(G_{n_p}^{r_p}) \cap V)$ is a nonempty variety of dimension $e = \dim(V) - \sum_i (r_i + 1)(g - n_i + r_i)$ whenever $e \geqq 0$. The proof involves constructing a vector bundle E on J, which algebraically deforms to the trivial bundle, and a section $\sigma$ over J of the Grassmann bundle B of rank-g quotients of E such that a translate of $G_n^r$ is the preimage of a certain special Schubert cycle. The cohomology ring of B is the tensor product of the rings of J and of a Grassmann variety, so by classical Schubert calculus the class of this preimage is given by a certain polynomial in the g basic Schubert cycle classes, which themselves induce the classes ...
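A worked instance of the dimension count, added here for orientation and not part of the abstract: take $g = 4$, $r = 1$, $n = 3$, so that

$$d = (r+1)(n-r) - rg = 2\cdot 2 - 1\cdot 4 = 0 \geqq 0.$$

The existence theorem then says that every complete nonsingular curve of genus 4 carries a $g^1_3$, i.e. can be displayed as a branched covering of the sphere with 3 sheets, consistent with Meis's bound of at most $(g+3)/2 = 3.5$ sheets.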

132 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that for any right continuous martingale $M(t)$ there is a right continuous family of minimal stopping times $T(t)$ for the Wiener process such that $W(T(t))$ has the same finite joint distributions as $M(t)$.
Abstract: A stopping time $T$ for the Wiener process $W(t)$ is called minimal if there is no stopping time $S \leqq T$ such that $W(S)$ and $W(T)$ have the same distribution. In the first section, it is shown that if $E\{W(T)\} = 0$, then $T$ is minimal if and only if the process $W(t \wedge T)$ is uniformly integrable. Also, if $T$ is minimal and $E\{W(T)\} = 0$ then $E\{T\} = E\{W(T)^2\}$. In the second section, these ideas are used to show that for any right continuous martingale $M(t)$, there is a right continuous family of minimal stopping times $T(t)$ such that $W(T(t))$ has the same finite joint distributions as $M(t)$. In the last section it is shown that if $T$ is defined in the manner proposed by Skorokhod (and therefore minimal) such that $W(T)$ has a stable distribution of index $\alpha > 1$ then $T$ is in the domain of attraction of a stable distribution of index $\alpha/2$.
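The identity $E\{T\} = E\{W(T)^2\}$ for minimal stopping times with $E\{W(T)\} = 0$ can be checked numerically on the simplest embedding, the first exit of $W$ from an interval $(-a, b)$, which stops at a two-point distribution with mean zero; the discretization below is a rough sketch (the step size introduces a small overshoot bias) and the parameter choices are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def exit_time_stats(a=1.0, b=2.0, dt=1e-3, n_paths=2000):
    """Simulate W until it first leaves (-a, b); return (mean T, mean W(T)^2).

    For this stopping time both quantities equal a*b, illustrating
    E{T} = E{W(T)^2} for minimal stopping times with E{W(T)} = 0.
    """
    sqrt_dt = np.sqrt(dt)
    T, WT = np.empty(n_paths), np.empty(n_paths)
    for i in range(n_paths):
        w, t = 0.0, 0.0
        while -a < w < b:
            w += sqrt_dt * rng.standard_normal()
            t += dt
        T[i], WT[i] = t, w
    return T.mean(), (WT**2).mean()

print(exit_time_stats())   # both estimates should be close to a*b = 2.0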

121 citations



Journal ArticleDOI
TL;DR: In this article, the authors studied the behavior of the Frobenius map F*: H 1(X, E) → H 1 (X, F*E) for a vector bundle E.
Abstract: Let k be an algebraically closed field of characteristic p > 0, and let X be a curve defined over k. The aim of this paper is to study the behavior of the Frobenius map F*: H1(X, E) → H1(X, F*E) for a vector bundle E.
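A remark added here for orientation (it is not part of the abstract): because F is the Frobenius, the induced map on cohomology is not k-linear but p-linear,

$$F^{*}(\lambda\,\xi) = \lambda^{p}\,F^{*}(\xi), \qquad \lambda \in k,$$

and in the most classical special case $E = \mathcal{O}_X$ (so that $F^{*}E \cong \mathcal{O}_X$) the matrix of $F^{*}$ on the $g$-dimensional space $H^1(X, \mathcal{O}_X)$ is the Hasse-Witt matrix of the curve.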

82 citations



Journal ArticleDOI
TL;DR: One-sided analogs of Scheffe's two-sided confidence bounds are developed in this article for general 2- and 3-parameter functions; the resulting upper bounds can be sharpened, often to great practical advantage.
Abstract: The paper develops one-sided analogs to Scheffe's two-sided confidence bounds for a function $f(\mathbf{x}), \mathbf{x} \in R^n$. If the domain $X\ast$ of $f$ is a subset of $R_+^n = \{\mathbf{x}: x_i \geqq 0, \forall i\}$, then the upper Scheffe bounds are conservative upper confidence bounds, which can be sharpened, often to great practical advantage. This sharpening, accomplished by a non-trivial extension of Scheffe's method, is developed by the geometry-probability argument of Section 2. Section 3 derives coverage probabilities for general 2- and 3-parameter functions and illustrates savings by the sharp bounds in two examples.
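For reference, in the familiar linear-model special case (notation added here, not taken from the paper: least-squares estimate $\hat{\beta}$ of $\beta \in R^n$, residual variance estimate $\hat{\sigma}^2$ on $\nu$ degrees of freedom, design matrix $X$), the two-sided Scheffe bounds that the one-sided bounds refine are

$$\mathbf{x}'\hat{\beta} \pm \sqrt{n\,F_{n,\nu;1-\alpha}}\;\hat{\sigma}\,\sqrt{\mathbf{x}'(X'X)^{-1}\mathbf{x}} \quad \text{simultaneously for all } \mathbf{x} \in R^n.$$

Restricting attention to $\mathbf{x}$ in a subset of $R_+^n$ and asking only for an upper bound is what leaves room for the sharpening described in the abstract.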

36 citations


Journal ArticleDOI
TL;DR: This paper presents a method for determining the eigenvalues lying in a "section" of the eigenvalue spectrum together with the corresponding eigenvectors, and finds that the sectioning method works particularly well for clustered eigenvalues.
Abstract: When relatively few eigenvalues are desired for a very large symmetric matrix eigenvalue problem, direct methods such as Householder reduction tend to be inefficient. Inverse iteration works reasonably well but runs into difficulties when eigenvalues are clustered. This paper presents a method for determining the eigenvalues lying in a "section" $\alpha < \lambda < \beta $ of the eigenvalue spectrum together with the corresponding eigenvectors. In contrast with inverse iteration, the sectioning method works particularly well for clustered eigenvalues. The sectioning method proceeds in three phases: first, a basis for the invariant subspace corresponding to the spectral section $\alpha < \lambda < \beta $ is computed, next this basis is used to reduce the eigenproblem by the Ritz process, and finally, the reduced problem is solved in high precision by a fairly standard Householder technique.
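The second and third phases are straightforward to sketch; in the toy code below the basis for the invariant subspace is produced by shifted inverse subspace iteration, which is only a stand-in for the paper's first phase, and the matrix, section and block size are invented for the example.

import numpy as np

def ritz_section(A, alpha, beta, k, iters=50, seed=0):
    """Toy sketch of a sectioning computation for a symmetric matrix A.

    Phase 1 (stand-in): build an orthonormal basis Q approximating the
    invariant subspace for eigenvalues near the section (alpha, beta),
    using shifted inverse subspace iteration at the midpoint.
    Phase 2: Rayleigh-Ritz reduction S = Q^T A Q.
    Phase 3: solve the small dense problem; keep Ritz pairs in the section.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    M = A - 0.5 * (alpha + beta) * np.eye(n)
    for _ in range(iters):                       # phase 1 (illustrative only)
        Q, _ = np.linalg.qr(np.linalg.solve(M, Q))
    S = Q.T @ A @ Q                              # phase 2: Ritz process
    theta, Y = np.linalg.eigh(S)                 # phase 3: reduced problem
    keep = (theta > alpha) & (theta < beta)
    return theta[keep], Q @ Y[:, keep]           # Ritz values and vectors

# toy usage: a cluster of four eigenvalues near 1.0 inside the section (0.9, 1.1)
A = np.diag(np.concatenate([np.linspace(0.99, 1.01, 4), np.arange(2.0, 50.0)]))
vals, vecs = ritz_section(A, 0.9, 1.1, k=6)
print(vals)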

33 citations


Journal ArticleDOI
TL;DR: In this paper, a counterexample (testing the mean of a normal distribution with unknown variance) shows that Wald's claimed uniform convergence in $t$ of the distributions of the likelihood ratio and Wald statistics to a noncentral chi-square law fails when the alternative $\theta$ is held fixed as $n \rightarrow \infty$.
Abstract: Let $X$ be a random vector, taking values in $p$-dimensional Euclidean space $\mathscr{E}^p$ with density $f(x; \theta)$. The parameter $\theta$ belongs to a subset $\Theta$ of a Euclidean space $\mathscr{E}^q$ and is unknown. Let $g$ be a function over the parameter space having continuous first partial derivatives and taking values in $\mathscr{E}^r (r \leqq q)$. To test the hypothesis $g(\theta) = 0$ against the alternative $g(\theta) \neq 0$ using a sample of $n$ independent observations of $X$, one frequently uses the Neyman-Pearson generalized likelihood ratio test statistic $\lambda_n$. The limiting distribution of $-2\ln\lambda_n$ under the null hypothesis, as $n \rightarrow \infty$, was shown by Wilks (1938) to be chi-square with $r$ degrees of freedom (assuming regularity conditions). If $\{\theta_n\}$ is a sequence of alternatives converging to a point of the null hypothesis at the rate $n^{-\frac{1}{2}}$, the limiting distribution is noncentral chi-square with noncentrality parameter equal to the limit of $n\lbrack g(\theta_n)\rbrack' \Sigma^{-1}_g (\theta_n)\lbrack g(\theta_n)\rbrack$, where $\Sigma_g(\theta)$ is the asymptotic covariance matrix of the quantity $n^{\frac{1}{2}}\lbrack g(\hat{\theta}) - g(\theta)\rbrack$ as $n \rightarrow \infty$ with $\theta$ fixed ($\hat{\theta}$ denoting the maximum-likelihood estimator of $\theta$ based on sample size $n$). This noncentral convergence was first proved by Wald (1943), along with a number of other results, on the basis of some rather severe uniformity conditions. Davidson and Lever (1970) have proved the result using more intuitive assumptions. Feder (1968) has obtained asymptotic noncentral chi-square for the case where both the hypothesis and alternative regions are cones in $\Theta$; this is essentially a generalization of $g(\theta) = 0$ versus $g(\theta) \neq 0$, since the hypothesis $g(\theta) = 0$ is locally equivalent to a hyperplane and $g(\theta) \neq 0$ to its complement. Despite the generality, Feder's assumptions are quite mild compared with Wald's. The result appears in Wald's paper as a special case of a more general statement entitled "Theorem IX." This theorem states that for $\theta \in \Theta$ and $-\infty < t < \infty$ the relationship \begin{equation*}\tag{1.1}P_\theta\lbrack -2 \ln \lambda_n < t\rbrack - P_\theta\lbrack K_n < t\rbrack \rightarrow 0\end{equation*} holds uniformly in $t$ and $\theta$, where $K_n$ has a noncentral chi-square distribution with $r$ degrees of freedom and noncentrality parameter equal to $n\lbrack g(\theta)\rbrack' \Sigma^{-1}_g (\theta)\lbrack g(\theta)\rbrack$. This formulation of Wald is too strong. It will be shown by counterexample that, if $\theta$ is held fixed while $n \rightarrow \infty$, relationship (1.1) fails to hold uniformly in $t$. The counterexample is that of testing the value of the mean of a normal distribution with unknown mean and variance. Wald's proof of Theorem IX treats two cases separately, case (i) where $\theta_n$ approaches the null hypothesis set at the rate $n^{-\frac{1}{2}}$ or faster, and case (ii) where it does not. The proof of (1.1) in case (i) requires convergence of $\theta_n$ at the rate $n^{-\frac{1}{2}}$ in order that the Taylor series expansion of the logarithm behave nicely. In case (ii) there is no reason at all to believe the distribution of $K_n$ to be a good approximation to that of $-2\ln\lambda_n$.
From Wald's paper (page 480, line following (212)) one gets the impression that Wald felt that the statement of uniform convergence of (1.1) in case (ii) was trivial, since pointwise convergence is trivial (because both terms tend to zero for fixed $t$). But, since $K_n$ does not converge in distribution to a random variable in case (ii), there is really no reason why pointwise convergence should imply uniform convergence. In the same paper, Wald (1943) also described a test procedure based only on the unrestricted maximum-likelihood estimator $\hat{\theta}_n$. This procedure rejects for large values of the statistic $Q_n = n\lbrack g(\hat{\theta}_n)\rbrack' \Sigma^{-1}_g (\hat{\theta}_n)\lbrack g(\hat{\theta}_n)\rbrack.$ Wald claimed in his paper that (1.1) again holds uniformly in $t$ and $\theta$ if $-2\ln\lambda_n$ is replaced by $Q_n$. This claim too is false, in the stated generality, as the same counterexample will demonstrate. Keeping $\theta$ as a fixed alternative while $n \rightarrow \infty$ has the disadvantage that the limiting behavior of each of the quantities $-2\ln\lambda_n, Q_n$ and $K_n$ is degenerate in the sense that the probability mass moves out to infinity with increasing $n$. However, statement (1.1), uniform in $t$ for fixed $\theta$, has meaning here since both $-2\ln\lambda_n$ (or $Q_n$) and $K_n$ may be related to quantities with genuine limiting normal distributions which must be identical or at least very similar in order for (1.1) to be uniform in $t$. The precise result is embodied in a theorem presented in Section 2 of this paper. In Sections 3 and 4 we consider the case of $X$ normally distributed with mean $\mu$ and variance $\sigma^2$, where $-\infty < \mu < \infty, 0 < \sigma_1 < \sigma < \sigma_2$, and the hypothesis to be tested is $\mu = 0$. It is shown in Sections 3 and 4, respectively, that for this problem the relationships $P_\theta\lbrack Q_n < t\rbrack - P_\theta\lbrack K_n < t\rbrack \rightarrow 0$ and $P_\theta\lbrack -2\ln\lambda_n < t\rbrack - P_\theta\lbrack K_n < t\rbrack \rightarrow 0$ fail to be uniform in $t$ when $\theta = (\mu, \sigma)$ is fixed and satisfies $\mu \neq 0, \sigma_1^2 < \sigma^2 < \sigma_2^2 - \mu^2$. The space of values of $\sigma$ has been truncated in order to satisfy Wald's regularity conditions. In the following section boldface letters denote vectors and matrices. The law of the random vector $\mathbf{x}$ is denoted throughout by $\mathscr{L}(\mathbf{x})$. In particular, $\mathscr{N}(\mathbf{\mu}, \mathbf{\Sigma})$ refers to a normal law with mean vector $\mathbf{\mu}$ and covariance matrix $\mathbf{\Sigma}$. By $\mathscr{L}(\mathbf{x}_n) \rightarrow \mathscr{L}(\mathbf{y})$ or $\mathscr{L}(\mathbf{x}_n) \rightarrow \mathscr{N}(\mathbf{\mu}, \mathbf{\Sigma})$ is meant, respectively, that the law of $\mathbf{x}_n$ converges to the law of $\mathbf{y}$ or to the stated normal law, as $n \rightarrow \infty$. The definitions of the Mann-Wald symbols $O_p$ and $o_p$ may be found in Chernoff ((1956), Section 2), as may the statements of some basic results of large-sample theory which are used freely in the proof of the theorem.
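To fix ideas about the counterexample (the formulas below are the standard untruncated ones, written out here for orientation; the paper itself works with the truncated space $\sigma_1 < \sigma < \sigma_2$, which the regularity conditions require), for testing $\mu = 0$ on a sample of size $n$ from $\mathscr{N}(\mu, \sigma^2)$ one has

$$-2\ln\lambda_n = n\,\ln\Big(1 + \frac{n\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\Big), \qquad Q_n = \frac{n\bar{x}^2}{n^{-1}\sum_{i=1}^n (x_i - \bar{x})^2},$$

so that $-2\ln\lambda_n = n\ln(1 + Q_n/n)$ and both statistics are increasing functions of the squared $t$-statistic; for fixed $\mu \neq 0$ their probability mass drifts to infinity with $n$, which is exactly the regime in which the uniformity in $t$ of (1.1) is being examined.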

31 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare the Bahadur efficiencies of several tests of uniformity on the circle (Ajne's A, Watson's W, Rayleigh's R, Ajne's N, Kuiper's V and a spacings test U) against circular normal alternatives.
Abstract: In this paper we compare the asymptotic efficiencies of several tests that are available for testing uniformity on the circle. Since the problem of testing goodness of fit on the circle can be reduced to testing uniformity by a simple probability transformation, these comparisons are applicable also to the goodness of fit situation. The alternatives to uniformity considered here are the familiar circular normal distributions (CND's) with density \begin{equation*}\tag{1.1}g(\alpha) = \lbrack 2\pi I_0(\kappa)\rbrack^{-1} \exp\lbrack\kappa \cos \alpha\rbrack,\quad - \pi \leqq \alpha < \pi.\end{equation*} Here $0 \leqq \kappa < \infty$ is a parameter of concentration, larger values of $\kappa$ corresponding to more concentration towards the mean direction zero, and $I_0(\kappa)$ is the Bessel function of purely imaginary argument. When $\kappa = 0$, (1.1) is the uniform density, so the null hypothesis is $H_0: \kappa = 0$. The tests compared here are (i) Ajne's test A, (ii) Watson's test W, (iii) Rayleigh's test R, (iv) Ajne's test N, (v) Kuiper's test V, (vi) the spacings test U. In subsequent sections each of these tests is briefly described and its Bahadur efficiency [4], [5] is computed, using large deviation results. We compare the local slopes of the test statistics, i.e. the slopes in the neighborhood of the hypothesis. On the basis of these comparisons, we find that the limiting efficiencies of the first three tests, viz. Ajne's test A, Watson's W and Rayleigh's test based on R, are identical, while the other tests have lower asymptotic efficiencies. Further conclusions are given in Section 7. Finally in Section 8 a simple inequality between Ajne's N and Kuiper's V, whose asymptotic performances are identical, is noted.
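As a small illustration of one of the statistics compared above, the Rayleigh statistic is just the normalized squared length of the resultant of the sample unit vectors; the sketch below uses the common large-sample approximation in which $2n\bar{R}^2$ is referred to a chi-square distribution with two degrees of freedom under uniformity, and the data are simulated for the example.

import numpy as np

def rayleigh_test(angles):
    """Rayleigh test of circular uniformity (large-sample sketch).

    angles : sample of directions in radians.
    Returns the mean resultant length R_bar, the statistic 2*n*R_bar**2,
    and its approximate p-value from the chi-square(2) tail.
    """
    a = np.asarray(angles, dtype=float)
    n = a.size
    C, S = np.cos(a).sum(), np.sin(a).sum()
    R_bar = np.hypot(C, S) / n              # mean resultant length
    stat = 2.0 * n * R_bar**2               # approx. chi-square(2) under H0
    p_value = np.exp(-0.5 * stat)           # chi-square(2) survival function
    return R_bar, stat, p_value

# toy usage: concentrated data should reject uniformity, uniform data should not
rng = np.random.default_rng(1)
print(rayleigh_test(rng.uniform(-np.pi, np.pi, 200)))   # roughly uniform
print(rayleigh_test(rng.normal(0.0, 0.5, 200)))         # concentrated near 0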


Book ChapterDOI
TL;DR: In this article, the authors consider how the moments of a sequence of independent and identically distributed random variables govern the rate at which the distribution of the normalized sum converges to a given portion of the corresponding Chebyshev series.
Abstract: Let $X_i$, $i = 1, 2, 3, \ldots$ be a sequence of independent and identically distributed random variables with $EX_i = 0$ and $\mathrm{var}\,X_i = 1$. Write $F(x)$ for the distribution function and $f(t)$ for the characteristic function of $X_i$ and put $S_n = \sum\limits_{i = 1}^n X_i$. Then, $$F_n(x) = P(S_n \leqq x\sqrt{n})\rightarrow \Phi(x) = \frac{1}{\sqrt{2\pi}} \int\limits^x_{-\infty}e^{-\frac{1}{2}u^2}du$$ as $n \rightarrow \infty$. We shall herein be concerned with the influence of moments of $X_i$ on the rate of convergence to zero of $$A_{kn} = \sup_x |F_n(x) - G_{kn}(x)|$$ where $$G_{kn}(x) = \Phi(x) + \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2} \sum\limits_{s=1}^k Q_s(x)n^{-\frac{1}{2}s}$$ is a given portion of the Chebyshev series corresponding to the $X_i$ (see for example Gnedenko and Kolmogorov [2], Section 38), the $Q_j(x)$ being polynomials of degree $3j - 1$ whose coefficients depend on the first $(j + 2)$ moments of $X_i$. Now, Cramer (see [2], Section 45) has shown that for distributions satisfying the condition (C) (that is, $\lim\sup_{t\rightarrow \infty}|f(t)|<1$), if $E|X_i|^{k + 2} < \infty$ $(k \geqq 1)$, then $A_{kn} = o(n^{-k/2})$ as $n \rightarrow \infty$. Furthermore, Ibragimov [4] has produced necessary and sufficient conditions, under (C), for (i) $A_{kn} = o(n^{-k/2})$ $(k\geqq 1)$ and (ii) $A_{kn} = O(n^{-(k+\delta)/2})$, $0 < \delta \leqq 1$, $k\geqq 1$, but these conditions are not in general moment conditions. We shall provide, also under (C), some necessary and sufficient conditions in terms of moments on the rate of convergence of $A_{kn}$ to zero.
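For concreteness, the leading polynomial in the series above takes a simple form in this normalization ($EX_i = 0$, $\mathrm{var}\,X_i = 1$); this standard expression is added here for orientation:

$$Q_1(x) = \frac{EX_i^3}{6}\,(1 - x^2),$$

a polynomial of degree $3\cdot 1 - 1 = 2$ whose coefficient depends on the first $1 + 2 = 3$ moments, so that $G_{1n}(x) = \Phi(x) + \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\,\frac{EX_i^3}{6}(1 - x^2)\,n^{-\frac{1}{2}}$.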

Journal ArticleDOI
TL;DR: In this paper, solutions of the Ball-Zachariasen equation for high-energy scattering are calculated numerically; for small values of the production-strength parameter they coincide with those obtained in the existence proof of paper I, but they disagree qualitatively with experiment.
Abstract: Solutions of the Ball-Zachariasen equation for high-energy scattering are calculated numerically. For small values of the parameter $c$, which measures the strength of particle production, the solutions coincide with those obtained in the existence proof of paper I. When $c\sigma_{\mathrm{el}}$ ($\sigma_{\mathrm{el}} =$ elastic cross section) is much smaller than the physical value, the solutions are obtained successfully by either plain iteration or Newton-Kantorovich iteration, but they disagree qualitatively with experiment. A continuation to larger $c$ is attempted by using the result of a successful iteration as the starting point for an iteration with slightly larger $c$. The continuation proceeds only a short distance before coming to a halt, because of a singularity of the Fréchet derivative of the nonlinear integral operator. The singularity is circumvented by an excursion into the complex $c$ plane. On return to the real axis the solutions are complex, however, which is not allowed physically. A proposed approximate solution of Ball and Zachariasen, quite different from those of paper I, is also investigated. Plain or Newton-Kantorovich iteration starting with this function fails to converge. The failure is explained again in terms of a nearby singularity of the Fréchet derivative. The singularity makes it difficult to decide whether the Ball-Zachariasen function is close to a true solution.
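The Newton-Kantorovich iteration referred to above is easy to demonstrate on a generic discretized nonlinear integral equation; the kernel, forcing term and quadratic nonlinearity below are invented for the illustration and have nothing to do with the actual Ball-Zachariasen equation.

import numpy as np

def newton_kantorovich(c=0.2, n=200, tol=1e-12, max_iter=50):
    """Solve the toy equation  u(x) = g(x) + c * integral_0^1 K(x,y) u(y)^2 dy
    on a uniform grid by Newton-Kantorovich iteration."""
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / n)                          # simple quadrature weights
    K = np.exp(-np.abs(x[:, None] - x[None, :]))     # toy kernel
    g = np.sin(np.pi * x)                            # toy forcing term

    def F(u):                                        # residual of the equation
        return u - g - c * (K * w) @ (u**2)

    def J(u):                                        # discretized Frechet derivative
        return np.eye(n) - 2.0 * c * (K * w) * u[None, :]

    u = g.copy()                                     # starting point, as in plain iteration
    for it in range(max_iter):
        step = np.linalg.solve(J(u), F(u))
        u -= step
        if np.linalg.norm(step) < tol:
            break
    return x, u, it + 1

x, u, n_steps = newton_kantorovich()
print("converged in", n_steps, "Newton steps")

When the Jacobian J(u), the discrete analogue of the Fréchet derivative, becomes nearly singular along a continuation in c, the iteration stalls, which is the kind of phenomenon the abstract describes.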


Journal ArticleDOI
Juraj Virsik
TL;DR: In this paper, it is shown that under some circumstances the main theorem of [1] or [2] applies also to skew connections, which can be prolonged, following the pattern of [1], to higher order semi-holonomic and non-holonomic pseudo-connections.
Abstract: The paper is closely related to [1] and [2]. A skew connection in a vector bundle E as defined here is a pseudo-connection (in the sense of [1]) which can be changed into a connection by transforming separately the bundle E itself and the bundle of its differentials, i.e. one-forms on the base with values in E. The properties of skew connections are thus expected to be only “algebraically” more complicated than those of connections; in particular, one can follow the pattern of [1] and prolong them to obtain higher order semi-holonomic and non-holonomic pseudo-connections. It is shown in this paper that under some circumstances the main theorem of [1] or [2] applies also to skew connections.