
Showing papers on "Probability distribution published in 1970"


Journal ArticleDOI
TL;DR: Tests are grouped together primarily according to general type of mathematical derivation or type of statistical "information" used in conducting the test, and mathematical interrelationships among the tests are indicated.
Abstract: As a result of an extensive survey of the literature, a large number of distribution-free statistical tests are examined. Tests are grouped together primarily according to general type of mathematical derivation or type of statistical "information" used in conducting the test. Each of the more important tests is treated under the headings: Rationale, Null Hypothesis, Assumptions, Treatment of Ties, Efficiency, Application, Discussion, Tables, and Sources. Derivations are given and mathematical interrelationships among the tests are indicated. Strengths and weaknesses of individual tests, and of distribution-free tests as a class compared to parametric tests, are discussed.

2,104 citations


Journal ArticleDOI
TL;DR: In this article, a generalization of the Kruskal-Wallis test for testing the equality of K continuous distribution functions when observations are subject to arbitrary right censorship is proposed, where the distribution of the censoring variables is allowed to differ for different populations.
Abstract: SUMMARY A generalization of the Kruskal-Wallis test, which extends Gehan's generalization of Wilcoxon's test, is proposed for testing the equality of K continuous distribution functions when observations are subject to arbitrary right censorship. The distribution of the censoring variables is allowed to differ for different populations. An alternative statistic is proposed for use when the censoring distributions may be assumed equal. These statistics have asymptotic chi-squared distributions under their respective null hypotheses, whether the censoring variables are regarded as random or as fixed numbers. Asymptotic power and efficiency calculations are made and numerical examples provided. A generalization of Wilcoxon's statistic for comparing two populations has been proposed by Gehan (1965a) for use when the observations are subject to arbitrary right censorship. Mantel (1967), as well as Gehan (1965b), has considered a further generalization to the case of arbitrarily restricted observation, or left and right censorship. Both of these authors base their calculations on the permutation distribution of the statistic, conditional on the observed censoring pattern for the combined sample. However, this model is inapplicable when there are differences in the distribution of the censoring variables for the two populations. For instance, in medical follow-up studies, where Gehan's procedure has so far found its widest application, this would happen if the two populations had been under study for different lengths of time. This paper extends Gehan's procedure for right censored observations to the comparison of K populations. The probability distributions of the relevant statistics are here considered in a large sample framework under two models: Model I, corresponding to random or unconditional censorship; and Model II, which considers the observed censoring times as fixed numbers. 
Since the distributions of the censoring variables are allowed to vary with the population, Gehan's procedure is also extended to the case of unequal censorship. For Model I these distributions are theoretical distributions; for Model II they are empirical. Besides providing chi-squared statistics for use in testing the hypothesis of equality of the K populations against general alternatives, the paper shows how single degrees of freedom may be partitioned for use in discriminating specific alternative hypotheses. Several investigators (Efron, 1967) have pointed out that Gehan's test is not the most efficient against certain parametric alternatives and have proposed modifications to increase its power. Asymptotic power and efficiency calculations made below demonstrate that their criticisms would apply equally well to the test proposed here. Hopefully some of the modifications they suggest can likewise eventually be generalized to the case of K populations.
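Gehan's pairwise scoring rule for right-censored observations, which this paper extends to K samples, can be sketched as follows. This is a minimal two-sample illustration with invented toy data, not the paper's K-sample chi-squared machinery:

```python
def gehan_statistic(sample_x, sample_y):
    """Gehan's generalized Wilcoxon statistic for two right-censored samples.

    Each observation is a pair (time, event): event == 1 means the time is an
    observed event; event == 0 means the observation is right-censored there.
    A pair (x, y) scores +1 when x is known to exceed y, -1 when y is known
    to exceed x, and 0 when censoring leaves the ordering ambiguous.
    """
    total = 0
    for a, da in sample_x:
        for b, db in sample_y:
            if db == 1 and a > b:        # y observed; x (even if censored) lies beyond it
                total += 1
            elif da == 1 and b > a:      # x observed; y lies beyond it
                total -= 1
    return total

# Toy data: (time, event flag); 0 marks a right-censored observation.
x = [(5, 1), (7, 1)]
y = [(2, 1), (3, 0)]
print(gehan_statistic(x, y))  # 2: exactly two pairs are unambiguously ordered
```

The pair (5, censored-at-3) scores 0 because the censored value could still exceed 5; that ambiguity is exactly what the permutation and large-sample treatments in the paper have to account for.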

1,351 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of making inference about the point in a sequence of zero-one variables at which the binomial parameter changes is discussed, and the asymptotic distribution of the maximum likelihood estimate of the change-point is derived in computable form using random walk results.
Abstract: : The report discusses the problem of making inference about the point in a sequence of zero-one variables at which the binomial parameter changes. The asymptotic distribution of the maximum likelihood estimate of the change-point is derived in computable form using random walk results. The asymptotic distributions of likelihood ratio statistics are obtained for testing hypotheses about the change-point. Some exact numerical results for these asymptotic distributions are given and their accuracy as finite sample approximations is discussed. (Author)
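The maximum likelihood estimate analyzed in this report can be computed by direct search over candidate change-points; this brute-force sketch (my own illustration, not the report's random-walk asymptotics) maximizes the two-segment binomial likelihood:

```python
import math

def loglik(successes, trials):
    """Binomial log-likelihood evaluated at its MLE p = successes / trials."""
    if trials == 0 or successes == 0 or successes == trials:
        return 0.0  # terms with p in {0, 1} contribute zero
    p = successes / trials
    return successes * math.log(p) + (trials - successes) * math.log(1 - p)

def change_point_mle(xs):
    """Return k maximizing the likelihood that the parameter changes after x_k."""
    n = len(xs)
    best_k, best_ll = None, -float("inf")
    for k in range(1, n):                      # candidate: change after observation k
        s1, s2 = sum(xs[:k]), sum(xs[k:])
        ll = loglik(s1, k) + loglik(s2, n - k)
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

xs = [0] * 10 + [1] * 10
print(change_point_mle(xs))  # 10
```

The interesting question, which the report answers, is how this estimate behaves asymptotically when the change in p is small and the sequence is long.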

766 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an alternative way to relate the expected utility and mean-variance approaches, and present proofs of the usefulness of mean and variance in situations involving less and less risk.
Abstract: Publisher Summary The chapter presents an alternative way to relate the expected utility and mean-variance approaches. It presents proofs of the two general theorems involved in an aspect of the mean-variance model—namely, the usefulness of mean and variance in situations involving less and less risk. It describes a defense of mean-variance analysis and highlights its exact limitations along with those for any r-moment model. The mean and variance of wealth are approximately sufficient parameters for the portfolio selection model when the probability distribution of wealth is compact. In the compact case, moments of order 3 and higher are small in magnitude relative to the first two moments of the portfolio return; hence, a limiting approximation indicates that only the first two moments are relevant for optimal portfolio selection.
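The compactness argument can be summarized by a Taylor expansion of expected utility about mean wealth; a minimal rendering of the reasoning (notation mine, not the chapter's):

```latex
E\,U(W) \;=\; U(\mu) \;+\; \tfrac{1}{2}\,U''(\mu)\,\sigma^2
\;+\; \sum_{k \ge 3} \frac{U^{(k)}(\mu)}{k!}\,\mu_k ,
\qquad \mu_k = E\,(W-\mu)^k .
```

When the distribution of wealth is compact, the central moments satisfy μ_k = o(σ²) for k ≥ 3, so in the limiting approximation only the mean and variance remain relevant for ranking portfolios.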

747 citations


Journal ArticleDOI
TL;DR: In this article, the authors derived asymptotic relations for the convolution U*z(t), t → ∞, for a large class of integrable functions z.
Abstract: Let F be a nonarithmetic probability distribution on (0, ∞) and suppose 1 − F(t) is regularly varying at ∞ with exponent α, 0 < α ≤ 1. Next we derive asymptotic relations for the convolution U*z(t), t → ∞, for a large class of integrable functions z. All of these asymptotic relations are expressed in terms of the truncated mean function m(t) = ∫_0^t [1 − F(x)] dx, t large, and appear as the natural extension of the classical strong renewal theorem for distributions with finite mean. Finally, in the last sections of the paper we apply the special case α = 1 to derive some limit theorems for the distributions of certain waiting times associated with a renewal process. 1. Principal theorems. Let F be a probability measure concentrated on [0, ∞) and let U be the associated renewal measure defined for any measurable set I by (1.1) U{I} = Σ_{n≥0} F^{n*}{I}, where F^{n*} denotes the n-fold convolution of F with itself (F^{0*} is the probability measure concentrated at the origin). The series (1.1) converges to a finite number for every bounded I. (For this and other elementary properties of U see [3, VI.6]; for a probabilistic interpretation of U see §9 in this paper.) We write U(x) for U{[0, x]} and we shall henceforth ignore the distinction between U the measure and U the function. (This convention applies to other measures as well.) The main results of this paper deal primarily with the differences U(t+h) − U(t) for h > 0 fixed and t → ∞. The principal assumption is that F has the form (1.2) 1 − F(t) = t^{−α} L(t), t > 0. Received by the editors October 4, 1969. AMS Subject Classifications. Primary 6070, 6020, 6030; Secondary 4042, 4252.

159 citations


Journal ArticleDOI
01 Oct 1970
TL;DR: In this paper, an exact derivation of the probability distribution of the conventional and high-resolution estimators for the frequency-wavenumber power spectrum is given, subject to certain assumptions, and compared with those derived previously using approximations recommended by Blackman and Tukey.
Abstract: An exact derivation, subject to certain assumptions, is given for the probability distribution of the conventional and high-resolution estimators for the frequency-wavenumber power spectrum. These results are compared with those derived previously using approximations recommended by Blackman and Tukey.

156 citations


Journal ArticleDOI
TL;DR: In this paper, the probability distribution of the point sought on the real line is not known to the searcher; the situation is treated as a game and minimax-type solutions are obtained.
Abstract: The linear search problem has been discussed previously by one of the present authors. In this paper, the probability distribution of the point sought on the real line is not known to the searcher. Since there is no a priori choice of distribution which recommends itself above all others, we treat the situation as a game and obtain minimax-type solutions. Different minimaxima apply depending on the factors which one wishes to minimize (resp. maximize). Certain criteria are developed which help the reader judge whether the results obtained can be considered "good advice" in the solution of real problems analogous to this one.

147 citations


Journal ArticleDOI
TL;DR: In this article, the first two time derivatives of the streamwise velocity fluctuation in a mixing layer are presented, and the probability distributions of squared derivatives are found to be nearly log normal.
Abstract: Measurements of probability densities and distributions, moments, and spectra obtained from the first two time derivatives of the streamwise velocity fluctuation in a mixing layer are presented. Probability distributions of squared derivatives are found to be nearly log normal. The skewness (S) and kurtosis (K) of the first derivative are compared with new atmospheric data at much higher Reynolds numbers. Reynolds-number dependence is indicated, in conflict with universal equilibrium theory. A modified theory using fluctuating dissipation predicts that S and K have power-law behavior with turbulent Reynolds number and are related by S ∝ K3/8, in general agreement with the trends of the limited data available.

134 citations


Journal ArticleDOI
TL;DR: In this article, the authors present optimal efficiency criteria for portfolio selection when the utility function is quadratic in money returns and for a variety of kinds of information about the distribution of returns.
Abstract: Decisions about investment, or portfolio selection, are regarded as choices among alternative probability distributions of returns, where the optimal choice is determined by maximization of the expected value of an investor's utility function. In the real world, investors' utility functions and investment probability distributions of returns may assume highly complex or irregular forms. However, most theoretical discussions of choice under risk have dealt with relatively simple forms, for example, quadratic utility functions and normal probability distributions, in order to make more manageable the description and testing of investment decision rules. This paper presents optimal efficiency criteria for portfolio selection when the utility function is quadratic in money returns and for a variety of kinds of information about the distribution of returns. In addition, it provides an optimal criterion for cubic utility. By efficiency criteria we mean conditions for dominance, or preference among risks, which apply to all investors whose utility functions are of a given general class (e.g., quadratic), independent of specific individual tastes or specific parameters of the utility function. Our main conclusions are: first, that the common procedures and criteria for quadratic utility, of which the simple mean-variance criterion is the best known and most widely used, are insufficient and may be improved considerably. The criteria given in the following section are all weaker sufficient conditions for dominance, relative to the mean-variance criterion, and thus they are more effective. Second, we claim that a cubic utility may be preferable, in some respects, to the quadratic form and is also amenable to a complete efficiency analysis, with some interesting implications. The authors wish to thank Merton Miller and A. Beja for valuable comments and criticism on a first draft.

131 citations



Journal ArticleDOI
TL;DR: In this article, individual decisions about investment may be regarded as choices among alternative probability distributions of net returns, assuming that these distributions are completely known and independent of initial wealth positions, and that individuals determine the preferred portfolio of investment in accordance with a given, consistent set of preferences.
Abstract: Individual decisions about investment may be regarded as choices among alternative probability distributions of net returns. It is assumed that these distributions are completely known and independent of initial wealth positions, and that individuals determine the preferred portfolio of investment in accordance with a given, consistent set of preferences.

Book
01 Jan 1970
TL;DR: In this paper, the authors present a survey of the study of statistics and its applications, showing that the standard deviation of a probability distribution can be used as a measure of variation.
Abstract: (NOTE: Each chapter ends with Solutions to the Practice Exercises.) 1. Introduction. Numerical Data and Categorical Data. Nominal, Ordinal, Interval, and Ratio Data. Sample Data and Populations. Biased Data. Statistics, Past and Present. The Study of Statistics. Statistics, What Lies Ahead. 2. Summarizing Data: Listing and Grouping. Dot Diagrams. Stem-and-Leaf Displays. Frequency Distributions. Graphical Presentations. 3. Summarizing Data: Statistical Descriptions. Measures of Location: The Mean. Measures of Location: The Weighted Mean. Measures of Location: The Median and Other Fractiles. Measures of Location: The Mode. Measures of Variation: The Range. Measures of Variation: The Standard Deviation. Some Applications of the Standard Deviation. The Description of Grouped Data. Some Further Descriptions. Technical Note: Summations. Review: Chapters 1, 2, & 3. 4. Possibilities and Probabilities. Counting. Permutations. Combinations. Probability. Mathematical Expectation. A Decision Problem. 5. Some Rules of Probability. The Sample Space. Events. Some Basic Rules of Probability. Probabilities and Odds. Addition Rules. Conditional Probability. Independent Events. Multiplication Rules. Bayes' Theorem. Review: Chapters 4 & 5. 6. Probability Distributions. Probability Distributions. The Binomial Distribution. The Hypergeometric Distribution. The Poisson Distribution. The Multinomial Distribution. The Mean of a Probability Distribution. The Standard Deviation of a Probability Distribution. Chebyshev's Theorem. 7. The Normal Distribution. Continuous Distributions. The Normal Distribution. Some Applications. The Normal Approximation to the Binomial Distribution. 8. Sampling and Sampling Distributions. Random Sampling. Sampling Distributions. The Standard Error of the Mean. The Central Limit Theorem. Review: Chapters 6, 7, & 8. 9. Problems of Estimation. The Estimation of Means. Confidence Intervals for Means. Confidence Intervals for Means (Small Samples). Confidence Intervals for Standard Deviations. The Estimation of Proportions. 10. Tests Concerning Means. Tests of Hypotheses. Significance Tests. Tests Concerning Means. Tests Concerning Means (Small Samples). Differences Between Means. Differences Between Means (Small Samples). Differences Between Means (Paired Data). Differences Among k Means. Analysis of Variance. 11. Tests Based on Count Data. Tests Concerning Proportions. Tests Concerning Proportions (Large Samples). Differences Between Proportions. Differences Among Proportions. Contingency Tables. Goodness of Fit. Review: Chapters 9, 10, & 11. 12. Regression and Correlation. Curve Fitting. The Method of Least Squares. Regression Analysis. The Coefficient of Correlation. The Interpretation of r. A Significance Test for r. 13. Nonparametric Tests. The One-Sample Sign Test. The Paired-Sample Sign Test. The Sign Test (Large Samples). Rank Sums: The U Test. Rank Sums: The U Test (Large Samples). Rank Sums: The H Test. Tests of Randomness: Runs. Tests of Randomness: Runs (Large Samples). Tests of Randomness: Runs Above and Below the Median. Rank Correlation. Review: Chapters 12 & 13. Appendix A: TI-83 Tips. Statistical Tables. Answers to Odd-Numbered Exercises. Index.

Book
01 Jan 1970
TL;DR: In this paper, the appraisal of events that have uncertain outcomes is discussed with particular reference to a feasible method for evaluating the riskiness of investment projects, which is best characterized in terms of a decision agent's subjective beliefs about probabilities.
Abstract: The appraisal of events that have uncertain outcomes is discussed, with particular reference to a feasible method for evaluating the riskiness of investment projects. The essence of the uncertainty problem is that many of the variables affecting the outcome of a particular plan are outside of the planner's control. Uncertainty, which is relevant for most decisions, is best characterized in terms of a decision agent's subjective beliefs about probabilities. The probabilistic approach lends itself best to an appraisal of possible outcomes of a project that is affected by uncertainties from many sources. Probability judgments about many basic variables and parameters affecting the final outcome can be aggregated into an estimate of the probability distribution of that final outcome. This aggregation method is demonstrated for calculation of the economic returns of a project. The method of approximation by a simulated sample is described, and its application to probability distributions of rates of return from actual projects is explained. The preparation of a mathematical model is detailed, emphasizing the usefulness of computerized calculations. Fourteen tables and nine figures are provided.
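The "approximation by a simulated sample" described here reduces to a Monte Carlo loop: draw each uncertain input from its judgmental distribution and record the resulting outcome. A minimal sketch; the price, volume, and cost ranges are invented purely for illustration:

```python
import random

def simulate_returns(n_draws=10000, seed=42):
    """Monte Carlo aggregation: sample uncertain inputs, record the outcome."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_draws):
        price = rng.uniform(8.0, 12.0)      # judgmental range for unit price
        volume = rng.uniform(900, 1100)     # judgmental range for sales volume
        cost = rng.uniform(7000, 9000)      # judgmental range for total cost
        outcomes.append(price * volume - cost)
    return outcomes

returns = simulate_returns()
mean = sum(returns) / len(returns)
p_loss = sum(r < 0 for r in returns) / len(returns)  # estimated P(project loses money)
```

The list of simulated outcomes is itself the estimate of the probability distribution of the final result; summaries such as the mean return or the probability of a loss are read off directly from it.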

Journal ArticleDOI
TL;DR: In this paper, a consistent estimator for the nonparametric discrimination problem was proposed, which is consistent at all points at which the two above estimators are consistent and allows the investigator to estimate the density at every point of the Euclidean space from one construction.
Abstract: Let $x_1, x_2, \cdots, x_m$ be a random sample from a $p$-dimensional random variable $X = (X_1, X_2, \cdots, X_p)$ with probability distribution $P$. It is assumed that $P$ is absolutely continuous with respect to Lebesgue measure, and that the corresponding probability density function is denoted by $f$. If $z = (z_1, z_2, \cdots, z_p)$ is a point at which $f$ is both continuous and positive, an estimator for $f(z)$ based on statistically equivalent blocks is suggested and its consistency is shown. This estimator grew out of work on the nonparametric discrimination problem. Fix and Hodges [2] showed how density estimation could be used in this problem and demonstrated a consistent estimator at points such as $z$. Loftsgaarden and Quesenberry [4] proposed another estimator which is consistent at points such as $z$; their estimator was based on statistically equivalent blocks. Although this estimator is easier to use in practice than that suggested by Fix and Hodges, it does require separate calculations if the sample is to be used to estimate the density at two or more points, and gives complex regions on which the estimate is constant if it is desired to estimate $f$ on some subset of the entire space. The estimator suggested in this paper is consistent at all points at which the two above estimators are consistent and allows the investigator to estimate the density at every point of $p$-dimensional Euclidean space from one construction, as well as providing rectangular regions on which the estimate is constant.
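The Loftsgaarden–Quesenberry idea referenced above — estimate the density from the volume needed to capture the k nearest sample points — looks like this in one dimension. This is a sketch of that earlier nearest-neighbour estimator, not the blocks-based construction this paper itself proposes:

```python
def knn_density(samples, z, k):
    """1-D nearest-neighbour density estimate: f(z) ~ (k/n) / (2 * r_k),
    where r_k is the distance from z to its k-th nearest sample point."""
    n = len(samples)
    r_k = sorted(abs(x - z) for x in samples)[k - 1]
    return (k / n) / (2 * r_k)

# Evenly spread points on [0, 1): the density estimate near 0.5 should be ~1.
samples = [i / 100 for i in range(100)]
print(round(knn_density(samples, 0.5, k=10), 3))  # ≈ 1.0
```

The drawback the paper addresses is visible here: each evaluation point z requires its own sort over the sample, whereas the blocks construction yields the estimate everywhere from one partition of the space.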

Journal ArticleDOI
TL;DR: In this paper, a technique is presented for predicting response statistics of a nonlinear hysteretic system by seeking an approximate equivalence between the hysteretic system and a nonlinear nonhysteretic system for which certain statistics of stationary response can be computed by an existing analytical method.
Abstract: A technique is presented for predicting response statistics of a nonlinear hysteretic system by seeking an approximate equivalence between the hysteretic system and a nonlinear nonhysteretic system for which certain statistics of stationary response can be computed by an existing analytical method. The response statistics that can be computed include stationary displacement and velocity levels, and probability distributions. The technique is applied to the bilinear hysteretic oscillator, and the results are compared with experimental results and with the results of the equivalent linearization approximate technique of Krylov and Bogoliubov.

Journal ArticleDOI
J. Ian Collins1
29 Jan 1970
TL;DR: In this article, a procedure is developed to transform an arbitrary probability density of wave characteristics in deep water into the corresponding breaking characteristics in shallow water, using hydrodynamic relationships for shoaling and refraction of waves approaching a shoreline over parallel bottom contours.
Abstract: Utilizing the hydrodynamic relationships for shoaling and refraction of waves approaching a shoreline over parallel bottom contours, a procedure is developed to transform an arbitrary probability density of wave characteristics in deep water into the corresponding breaking characteristics in shallow water. A number of probability distributions for breaking wave characteristics are derived in terms of assumed deep-water probability densities of wave heights, wave lengths, and angles of approach. Some probability densities for wave heights at specific locations in the surf zone are computed for a Rayleigh distribution in deep water. The probability computations are used to derive the expectation of energy flux and its distribution.
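The core transformation step — pushing a deep-water density through a deterministic shoaling relation — is an ordinary change of variables. A minimal sketch assuming a constant shoaling coefficient K_s (the paper's actual relation depends on depth, period, and refraction angle):

```python
import math

def rayleigh_pdf(h, sigma):
    """Rayleigh density, a common model for deep-water wave heights."""
    return (h / sigma**2) * math.exp(-h**2 / (2 * sigma**2))

def shoaled_pdf(h, sigma, ks):
    """Density of H_s = K_s * H_0 by change of variables:
    f_s(h) = f_0(h / K_s) / K_s."""
    return rayleigh_pdf(h / ks, sigma) / ks

# Sanity check: the transformed density still integrates to ~1 (rectangle rule).
ks, sigma, dh = 1.3, 1.0, 0.001
total = sum(shoaled_pdf(i * dh, sigma, ks) * dh for i in range(1, 20000))
```

With a depth-dependent K_s the same substitution applies pointwise, which is how the paper obtains breaking-height densities at specific surf-zone locations.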

Journal ArticleDOI
TL;DR: In this article, the probability of a randomly excited structure to survive a service time interval without suffering a first-excursion failure is determined analytically, based on the Stratonovich-Kuznetsov theory of random points.
Abstract: The probability for a randomly excited structure to survive a service time interval without suffering a first-excursion failure is determined analytically. The first-excursion failure occurs when, for the first time, the structural response passes out of a prescribed safety domain. The problem is formulated from the viewpoint of the Stratonovich-Kuznetsov theory of random points. The exact solution is expressed in two equivalent series forms, one reducible to Rice's "inclusion and exclusion" series. The first-order truncation of the second series corresponds to Poisson random points and the second-order truncation to random points with "pseudo" Gaussian arrival rate. Numerical results are presented for a single-degree-of-freedom linear oscillator under Gaussian white noise excitation based on these truncations and the model of nonapproaching random points suggested by Stratonovich.
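The Poisson (first-order) truncation mentioned above has a familiar closed form: excursions are treated as arrivals at Rice's mean crossing rate. A sketch for a stationary Gaussian response with a symmetric barrier at ±b; the parameter values are invented:

```python
import math

def survival_probability(sigma_x, sigma_v, b, T):
    """Poisson approximation to first-excursion reliability.

    nu is Rice's mean rate of up-crossings of level b for a stationary
    Gaussian process with displacement std sigma_x and velocity std sigma_v;
    the symmetric barrier doubles the out-crossing rate.
    """
    nu = (sigma_v / (2 * math.pi * sigma_x)) * math.exp(-b**2 / (2 * sigma_x**2))
    return math.exp(-2 * nu * T)

# With b = 0 every zero crossing counts as an "excursion": rate 2*nu exactly.
print(survival_probability(1.0, 2 * math.pi, 0.0, 1.0))  # exp(-2) ≈ 0.135
```

The Poisson assumption ignores the clumping of crossings at narrow-band resonance, which is precisely what the paper's second-order truncation and the nonapproaching-points model correct for.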

Journal ArticleDOI
TL;DR: The comparison shows that spherically invariant processes are slightly more general than Gaussian compound processes, and a simple expression for the probability distribution is given and some expectation values are calculated.
Abstract: This correspondence discusses the comparison between the class of spherically invariant processes and a particular class of Gaussian compound processes. We give a simple expression for the probability distribution and calculate some expectation values. The comparison shows that spherically invariant processes are slightly more general.

Journal ArticleDOI
TL;DR: In this article, the authors used the chi-squared test and a comparison of the skewness-kurtosis relation to determine the goodness-of-fit of five distributions, i.e., the gamma distribution, the log-normal, the square root normal, the normal, and Gumbel's extreme value distribution.
Abstract: The object herein is to establish a suitable probability distribution for annual droughts of 14-day periods. The approach used was to fit certain theoretical distributions to the observed data and to select, by suitable criteria, the distribution which best described the data. The relative adequacy of five distributions was studied. They were the gamma, the log-normal, the square root normal, the normal, and Gumbel's extreme value (Weibull) distributions. Two techniques were used to determine the goodness-of-fit: (1) the chi-squared test; and (2) a comparison of the skewness-kurtosis relation. Tests were performed using the data of 37 stations in the Missouri River Basin whose annual droughts in succeeding years were found to be randomly distributed with respect to time. The results of either test reveal that the gamma distribution is the first choice, the Weibull being next, and the log-normal third.
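The first of the two criteria is Pearson's chi-squared statistic over binned drought data; a minimal sketch (the bin counts below are placeholders, not the study's 37-station data):

```python
def chi_squared(observed, expected):
    """Pearson's goodness-of-fit statistic: sum of (O - E)^2 / E over the bins."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Observed bin counts vs. counts predicted by a fitted candidate distribution.
observed = [18, 30, 32, 20]
expected = [20, 30, 30, 20]
stat = chi_squared(observed, expected)
print(round(stat, 3))  # 0.333; compare with a chi-squared quantile
```

The statistic is referred to a chi-squared distribution whose degrees of freedom equal the number of bins minus one, minus the number of parameters fitted — which is how each of the five candidate distributions is ranked.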

Journal ArticleDOI
TL;DR: In this article, a statistical-dynamical theory is described which can be applied to the problem of atmospheric predictions from synoptic charts, where the m variables and constants whose measurement is needed to characterize an initial state are regarded as coordinates of an m dimensional phase space.
Abstract: By analogy with Gibbs' statistical mechanics, a statistical-dynamical theory is described which can be applied to the problem of atmospheric predictions from synoptic charts. The m variables and constants whose measurement is needed to characterize an initial state are regarded as coordinates of an m dimensional phase space. In this space, the probability density ψ of the ensemble of possible initial measurements is used to define the “true” values of these quantities and to furnish probabilities of initial measurements lying within prescribed limits of true values. Dynamical equations provide standard deterministic predictions here, while a general continuity equation for ψ transforms initial probability distributions into final ones which, in turn, yield probability forecasts. This continuity equation is resolvable into component equations of probability diffusion for all coordinates of the phase space. It is found that predictability increases (decreases) with time if the atmosphere is diverge...

Journal ArticleDOI
TL;DR: This paper presents a method for dealing with parameter uncertainty in system design which is based on the study of the statistical properties of an ensemble of systems defined by a given structure and by a priori parameter distributions rather than point parameter estimates.
Abstract: This paper presents a method for dealing with parameter uncertainty in system design which is based on the study of the statistical properties of an ensemble of systems defined by a given structure and by a priori parameter distributions rather than point parameter estimates. It is assumed that the model of the actual system is a random member of the ensemble. The object of the analysis is to design or modify the properties of the ensemble to ensure a high probability of adequate performance of the actual system. The primary statistical function employed is the sample distribution function. This function is used to estimate the true population distribution of a scalar variable chosen to measure the system property of interest. The sample distribution function is constructed from random samples of this figure of merit generated by a suitable digital computer programme. The accuracy of the estimation of the population distribution by the sample distribution is determined by application of statistical result...
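The "sample distribution function" at the heart of this method is the empirical CDF of the simulated figures of merit, and the Dvoretzky–Kiefer–Wolfowitz inequality is one standard form of the accuracy statement alluded to at the end. A sketch with invented numbers:

```python
import math

def ecdf(samples):
    """Return the empirical CDF of the Monte Carlo figures of merit."""
    xs = sorted(samples)
    n = len(xs)
    def F_hat(x):
        return sum(v <= x for v in xs) / n   # fraction of samples <= x
    return F_hat

def dkw_sample_size(eps, delta):
    """Smallest n with P(sup |F_hat - F| > eps) <= delta, from the DKW bound
    2 * exp(-2 * n * eps^2) <= delta."""
    return math.ceil(math.log(2 / delta) / (2 * eps**2))

F = ecdf([0.2, 0.4, 0.6, 0.8])
print(F(0.5))                       # 0.5: half the sample lies at or below 0.5
print(dkw_sample_size(0.05, 0.05))  # 738 simulated systems suffice
```

The bound is distribution-free, which is why the ensemble approach can certify the population distribution of the figure of merit without assuming a parametric form for it.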

Journal ArticleDOI
TL;DR: A recursive estimation scheme suitable for real-time implementation is derived for a class of nonlinear systems and observations expressed as nonlinear functions in discrete time, corrupted by a non-Gaussian mutually correlated random white noise sequence.
Abstract: A recursive estimation scheme suitable for real-time implementation is derived for a class of nonlinear systems and observations expressed as nonlinear functions in discrete time, corrupted by a non-Gaussian mutually correlated random white noise sequence. The probability densities are expanded as a Gram-Charlier series, and a Gauss-Hermite quadrature formula is used for computing the expectations. In the multidimensional case an expansion about a density of mutually independent Gaussian variables is used instead of a general multidimensional Gaussian density, which may result in a poorer performance in linear systems with Gaussian noise. However, in the case of nonlinear systems and non-Gaussian noise, the computational simplifications which result outweigh the impairment in performance, if any. A computational example is included.
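The Gauss–Hermite step — replacing an expectation under a Gaussian density by a weighted sum at fixed nodes — can be illustrated with the classical 3-point rule. This is a generic sketch of the quadrature, not the paper's filter, which applies it inside a Gram–Charlier expansion:

```python
import math

# 3-point Gauss-Hermite rule for integrals of f(x) * exp(-x^2):
# nodes 0, ±sqrt(3/2) with weights 2*sqrt(pi)/3 and sqrt(pi)/6.
GH_NODES = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
GH_WEIGHTS = [math.sqrt(math.pi) / 6, 2 * math.sqrt(math.pi) / 3,
              math.sqrt(math.pi) / 6]

def gaussian_expectation(f):
    """E[f(Z)] for Z ~ N(0,1), via the substitution z = sqrt(2) * x."""
    return sum(w * f(math.sqrt(2) * x)
               for x, w in zip(GH_NODES, GH_WEIGHTS)) / math.sqrt(math.pi)

print(gaussian_expectation(lambda z: z * z))  # ≈ 1.0 (the rule is exact here)
```

With n nodes the rule is exact for polynomials of degree up to 2n − 1, which is what makes it a natural companion to a truncated Gram–Charlier density.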


01 Jan 1970
TL;DR: In this article, the authors examined in detail the nature of the probability distribution of the independent and identically distributed random variables (i.i.d.r.v.'s) X_1, X_2, ..., which possess the property E(a_1 X_1 + a_2 X_2 + ... | b_1 X_1 + b_2 X_2 + ...) = 0 a.s.
Abstract: The paper examines in some detail the nature of the probability distribution of the independent and identically distributed random variables (i.i.d.r.v.'s) X_1, X_2, ..., which possess the property E(a_1 X_1 + a_2 X_2 + ... | b_1 X_1 + b_2 X_2 + ...) = 0 a.s. Both the cases of a finite and an infinite number of variables are considered. The distribution depends on the nature of the coefficients a_i, b_i, and their relationships.

Journal ArticleDOI
TL;DR: In this paper, an optimal statistical control problem of finding the best way of search is formulated, where the search already carried out by a searcher is taken into account by a straightforward application of Bayes' formula.
Abstract: We shall assume that the motion of a target in a given region is a diffusion process and that a searcher tries to find the target. If the searcher is able to remember the track he has travelled, then the probability density--from the searcher's point of view--of the target will be changing also because of the search, since the searcher knows that the target was not in the immediate proximity of his track. In the case of a stationary target the search already carried out would be taken into account by a straightforward application of Bayes' formula [4]. We shall assume the customary form for the instantaneous probability density of detection [3] and then modify the diffusion equation of the target to include the above Bayesian effect. The properties of this equation are discussed. An optimal statistical control problem of finding the best way of search is formulated.
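For a stationary target, the Bayes step the authors refer to is simple: an unsuccessful look in a region multiplies the prior there by the non-detection probability exp(−effort) and renormalizes. A discrete-cell sketch (the diffusion term for a moving target is omitted):

```python
import math

def bayes_update_after_miss(prior, searched_cell, effort):
    """Posterior target location given an unsuccessful search.

    Detection in the searched cell succeeds with probability 1 - exp(-effort),
    the customary exponential detection law; other cells are unaffected.
    """
    post = list(prior)
    post[searched_cell] *= math.exp(-effort)   # P(miss | target there)
    total = sum(post)
    return [p / total for p in post]

prior = [0.5, 0.5]
post = bayes_update_after_miss(prior, searched_cell=0, effort=1.0)
# Probability mass shifts toward the unsearched cell after a miss.
```

The paper's contribution is to fold exactly this effect into the diffusion equation of a moving target, so the density both spreads dynamically and is depleted along the searcher's track.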

Journal ArticleDOI
TL;DR: For statistical problems based on groups, it was shown in this article that there is convergence in probability to an invariant posterior distribution if and only if the posterior corresponds to right-invariant Haar prior and the group can support an asymptotically right-inverse sequence of proper prior distributions.
Abstract: For statistical problems based on groups, it is shown that there is convergence in probability to an invariant posterior distribution if and only if the posterior corresponds to right-invariant Haar prior and the group can support an asymptotically right-invariant sequence of proper prior distributions.

Journal ArticleDOI
TL;DR: An algebraic language, called genex, for pedigree probability calculus is introduced, which allows the user to refer to smaller or larger pieces of information by a symbolic name, and the assumption concept is introduced.
Abstract: An algebraic language, called genex, for pedigree probability calculus is introduced, and the basic techniques are presented in an informal way (cf. Section 1.1). Probability distributions are represented by so-called generating expressions (Section 2.1), and the biological phenomena responsible for random transmission of genes, etc. are represented by operators manipulating such generating expressions (Section 4). For the purpose of unambiguous notation and correct handling of pedigree knowledge, the assumption concept is introduced (Sections 5.2, 6.1), which allows the user to refer to smaller or larger pieces of information by a symbolic name.

Journal ArticleDOI
TL;DR: In this article, the entropy of the zeta distribution on the positive integers, P(n) = n^{-s}/ζ(s), is shown to be log ζ(s) − s ζ′(s)/ζ(s), and divisibility by m_1 is shown to be statistically independent of divisibility by m_2 if and only if (m_1, m_2) = 1.
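Assuming the distribution in question is the zeta distribution P(n) = n^{-s}/ζ(s), the entropy identity log ζ(s) − s ζ′(s)/ζ(s) can be checked numerically with truncated sums:

```python
import math

def zeta_entropy_check(s, N=100000):
    """Compare the direct entropy of P(n) = n^-s / Z with log Z - s * Z'/Z,
    where Z = sum n^-s and Z' = -sum n^-s * log n (both truncated at N)."""
    Z = sum(n ** -s for n in range(1, N + 1))
    Zp = -sum(n ** -s * math.log(n) for n in range(1, N + 1))
    direct = -sum((n ** -s / Z) * math.log(n ** -s / Z) for n in range(1, N + 1))
    formula = math.log(Z) - s * Zp / Z
    return direct, formula

d, f = zeta_entropy_check(2.0)
# The two quantities agree: the formula is an algebraic rewriting of the entropy.
```

The independence statement has the same flavor: under this law, P(m divides N) = m^{-s}, so the events "m_1 | N" and "m_2 | N" are independent exactly when m_1 and m_2 are coprime.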

Journal ArticleDOI
TL;DR: In this article, the authors investigated the one-dimensional Burgers' model of turbulence by computing the functional integral expression for the correlation function, based on the Hopf theory of statistical hydromechanics, with the aid of a high speed computer.
Abstract: The one‐dimensional Burgers' model of turbulence is investigated by computing the functional integral expression for the correlation function, based on the Hopf theory of statistical hydromechanics, with the aid of a high‐speed computer. The initial probability distribution of the velocity is assumed to be normal with zero mean and with a Gaussian covariance function. The manner in which the energy decay curve changes under variation of the Reynolds number R implies the existence of a certain asymptotic curve for R → ∞. The values obtained for the correlation function at some instants indicate that the inverse‐square law for the energy spectrum holds in some wavenumber range for high values of R.