
Showing papers on "Probability distribution published in 1974"


Journal ArticleDOI
TL;DR: In this article, the authors consider a group of individuals who must act together as a team or committee, and assume that each individual in the group has his own subjective probability distribution for the unknown value of some parameter.
Abstract: Consider a group of individuals who must act together as a team or committee, and suppose that each individual in the group has his own subjective probability distribution for the unknown value of some parameter. A model is presented which describes how the group might reach agreement on a common subjective probability distribution for the parameter by pooling their individual opinions. The process leading to the consensus is explicitly described and the common distribution that is reached is explicitly determined. The model can also be applied to problems of reaching a consensus when the opinion of each member of the group is represented simply as a point estimate of the parameter rather than as a probability distribution.

3,527 citations
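
The pooling process described above lends itself to a small numerical illustration. The sketch below shows iterative linear opinion pooling: each member repeatedly replaces their distribution by a weighted average of the group's current distributions until all rows agree. The three discretized distributions and the weight matrix are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Each row: one member's subjective distribution over four parameter values (illustrative numbers).
F = np.array([
    [0.10, 0.20, 0.40, 0.30],
    [0.25, 0.25, 0.25, 0.25],
    [0.05, 0.15, 0.30, 0.50],
])

# W[i, j]: weight member i gives to member j's opinion; rows sum to 1 (assumed for this sketch).
W = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

for _ in range(50):   # repeated pooling rounds
    F = W @ F         # each member adopts a weighted average of the current opinions

print(F.round(4))     # all rows agree: the common consensus distribution
```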


Book
01 Jan 1974
TL;DR: This book presents introductory biostatistics, including the chi-square distribution and the analysis of frequencies, nonparametric and distribution-free statistics, and vital statistics.
Abstract: Introduction to Biostatistics; Descriptive Statistics; Some Basic Probability Concepts; Probability Distributions; Some Important Sampling Distributions; Estimation; Hypothesis Testing; Analysis of Variance; Simple Linear Regression and Correlation; Multiple Regression and Correlation; Regression Analysis - Some Additional Techniques; The Chi-Square Distribution and the Analysis of Frequencies; Nonparametric and Distribution-Free Statistics; Vital Statistics.

2,833 citations


Book
01 Jan 1974
TL;DR: This book covers nonparametric methods, including goodness-of-fit tests and the analysis of ranked data, as well as statistical quality control.
Abstract: What is statistics?; Summarizing data - frequency distributions and graphic presentation; Describing data - measures of central tendency; Measures of dispersion and skewness; A survey of probability concepts; Discrete probability distributions; The normal probability distribution; Sampling methods and sampling distributions; Tests of hypotheses - large samples; Tests of hypotheses - small samples; Analysis of variance; Linear regression and correlation; Multiple regression and correlation; Nonparametric methods - chi-square applications; Nonparametric methods - analysis of ranked data; Statistical quality control; Index numbers; Time series and forecasting; An introduction to decision making under uncertainty.

616 citations



Journal ArticleDOI
TL;DR: The testing of binary hypotheses is developed from an information-theoretic point of view, and the asymptotic performance of optimum hypothesis testers is developed in exact analogy to the asymptotic performance of optimum channel codes.
Abstract: The testing of binary hypotheses is developed from an information-theoretic point of view, and the asymptotic performance of optimum hypothesis testers is developed in exact analogy to the asymptotic performance of optimum channel codes. The discrimination, introduced by Kullback, is developed in a role analogous to that of mutual information in channel coding theory. Based on the discrimination, an error-exponent function e(r) is defined. This function is found to describe the behavior of optimum hypothesis testers asymptotically with block length. Next, mutual information is introduced as a minimum of a set of discriminations. This approach has later coding significance. The channel reliability-rate function E(R) is defined in terms of discrimination, and a number of its mathematical properties developed. Sphere-packing-like bounds are developed in a relatively straightforward and intuitive manner by relating e(r) and E(R). This ties together the aforementioned developments and gives a lower bound in terms of a hypothesis testing model. The result is valid for discrete or continuous probability distributions. The discrimination function is also used to define a source code reliability-rate function. This function allows a simpler proof of the source coding theorem and also bounds the code performance as a function of block length, thereby providing the source coding analog of E(R).

358 citations
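
As a small illustration of the quantities involved, the sketch below computes Kullback's discrimination (the Kullback-Leibler divergence) between two hypotheses and a Chernoff-type error exponent obtained by minimizing the log of an s-divergence. The two distributions are invented for illustration, and the exponent shown is the standard Chernoff exponent rather than the paper's full error-exponent function e(r).

```python
import numpy as np
from scipy.optimize import minimize_scalar

P = np.array([0.5, 0.3, 0.2])   # distribution under hypothesis H1 (illustrative)
Q = np.array([0.2, 0.3, 0.5])   # distribution under hypothesis H0 (illustrative)

def discrimination(p, q):
    """Kullback's discrimination (Kullback-Leibler divergence), in nats."""
    return float(np.sum(p * np.log(p / q)))

def chernoff_exponent(p, q):
    """Best exponent when both error probabilities decay at the same asymptotic rate."""
    f = lambda s: float(np.log(np.sum(p ** s * q ** (1 - s))))
    res = minimize_scalar(f, bounds=(1e-6, 1 - 1e-6), method="bounded")
    return -res.fun

print("D(P||Q)           =", discrimination(P, Q))
print("Chernoff exponent =", chernoff_exponent(P, Q))
```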


Journal ArticleDOI
TL;DR: In this article, the microcanonical ensemble for two-dimensional interacting line vortices is explored for the regime of total positive interaction energy, which should be above the Onsager negative temperature threshold.
Abstract: The dynamics of two-dimensional interacting line vortices is identical to that of the two-dimensional electrostatic guiding center plasma. Both are Hamiltonian systems and are therefore susceptible to statistical mechanical treatments. The predictions of the microcanonical ensemble are explored for this system. Interest focuses primarily on the regime of total positive interaction energy, which should be above the Onsager negative temperature threshold. Calculations of the probability distribution for a component by means of the central limit theorem are carried out in the manner of Khinchin. The probability distribution of a component reduces to the usual Gibbs distribution in the regime of positive temperatures, and is still explicitly calculable for negative temperatures. The negative temperature states are neither quiescent nor spatially uniform. Expressions for the temperature are explicitly provided in terms of the total particle energy and particle number. A BBGKY hierarchy can be derived for both temperature regimes. Numerical simulations involving solutions of the equations of motion of 4008 particles are presented.

276 citations


Journal ArticleDOI
TL;DR: In this article, Batchelor's (1959) constant-strain dissipation spectrum and a joint probability distribution of the scalar field and its space derivatives were analyzed for convection of a sparse distribution of sheets of passive scalar in a random straining field whose correlation scale is large compared with the sheet size.
Abstract: The stretching of line elements, surface elements and wave vectors by a random, isotropic, solenoidal velocity field in D dimensions is studied. The rates of growth of line elements and (D – 1)-dimensional surface elements are found to be equal if the statistics are invariant to velocity reversal. The analysis is applied to convection of a sparse distribution of sheets of passive scalar in a random straining field whose correlation scale is large compared with the sheet size. This is Batchelor's (1959) κ−1 spectral regime. Some exact analytical solutions are found when the velocity field varies rapidly in time. These include the dissipation spectrum and a joint probability distribution that describes the simultaneous effect of stretching and molecular diffusivity κ on the amplitude profile of a sheet. The latter leads to probability distributions of the scalar field and its space derivatives. For a growing κ−1 range at zero κ, these derivatives have essentially lognormal statistics. In the steady-state κ−1 regime at κ > 0, intermittencies measured by moment ratios are much smaller than for lognormal statistics, and they increase less rapidly with the order of the derivative than in the κ = 0 case. The κ > 0 distributions have singularities at zero amplitude, due to a background of highly diffused sheets. The results do not depend strongly on D. But as D → ∞, temporal fluctuations in the stretching rates become negligible and Batchelor's (1959) constant-strain dissipation spectrum is recovered.

230 citations


Journal ArticleDOI
TL;DR: In this paper, the FKG inequalities are generalized to two probability distributions and a theorem is proved which shows how one distribution dominates the other and makes it clear why expectation values of increasing functions with respect to one distribution are larger than with respect to the other.
Abstract: The FKG inequalities are generalized to two probability distributions. A theorem is proved which shows how one distribution dominates the other and makes it clear why expectation values of increasing functions with respect to one distribution are larger than with respect to the other.

230 citations
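
A brute-force check of this kind of dominance statement is easy on a small lattice. The sketch below verifies a Holley-type condition on {0,1}^3 for two illustrative distributions that weight configurations by exp(beta × number of ones); whether this is exactly the condition used in the paper is an assumption, so the code is only a plausibility check of the general idea.

```python
from itertools import product
from math import exp

n = 3
configs = list(product((0, 1), repeat=n))

def join(x, y): return tuple(max(a, b) for a, b in zip(x, y))
def meet(x, y): return tuple(min(a, b) for a, b in zip(x, y))

def dist(beta):
    """Distribution on {0,1}^n weighting a configuration by exp(beta * number of ones)."""
    w = {x: exp(beta * sum(x)) for x in configs}
    z = sum(w.values())
    return {x: v / z for x, v in w.items()}

mu1, mu2 = dist(1.0), dist(0.2)          # mu1 puts more weight on configurations with many ones

# Holley-type condition: mu1(x v y) * mu2(x ^ y) >= mu1(x) * mu2(y) for all x, y.
condition = all(mu1[join(x, y)] * mu2[meet(x, y)] >= mu1[x] * mu2[y] - 1e-12
                for x in configs for y in configs)

# Expected consequence: increasing functions have larger expectation under mu1 than under mu2.
f = sum                                   # f(x) = number of ones, an increasing function
e1 = sum(mu1[x] * f(x) for x in configs)
e2 = sum(mu2[x] * f(x) for x in configs)
print(condition, e1 >= e2)                # True True
```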


Journal ArticleDOI
TL;DR: In this paper, the authors present empirical evidence that attempts to represent the probability distribution of the rates of return on common stocks by a member of the stable Paretian family of distributions, with 1 < α < 2, may be misleading and in fact may not produce an adequate fit to observed rate of return.
Abstract: In this article we present some empirical evidence which indicates that attempts to represent the probability distribution of the rates of return on common stocks by a member of the stable Paretian family of distributions, with 1 < α < 2, may be misleading and in fact may not produce an adequate fit to observed rates of return. We offer an alternative probability model for describing rates of return based on the hypothesized phenomenon of a changing variance. We test the “goodness of fit” of our model vis-a-vis a stable Paretian model for several series of rates of return. Finally we propose an extension of the stability test of Fama and Roll [6].

221 citations
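
The "changing variance" idea can be illustrated with a toy simulation: a normal return series whose standard deviation switches between regimes has much fatter tails than an i.i.d. normal series with the same overall variance. The regime values and probabilities below are made-up numbers, and this is neither the authors' model nor their goodness-of-fit test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 2500                                         # roughly ten years of daily returns (illustrative)

# "Changing variance" model: normal returns whose standard deviation switches between regimes.
sigma = rng.choice([0.005, 0.02], size=n, p=[0.8, 0.2])
mix = rng.normal(0.0, sigma)
iid = rng.normal(0.0, mix.std(), size=n)         # i.i.d. normal with the same overall variance

print(stats.kurtosis(mix), stats.kurtosis(iid))  # the variance mixture has much fatter tails
```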


Book ChapterDOI
TL;DR: The formal theories that have been developed for rational decision-making under uncertainty by Ramsey (1951), de Finetti (1931, 1937), Koopman (1940a, b), Savage (1954) and subsequent authors have almost uniformly tended to yield a result that guarantees a unique probability distribution on states of nature or whatever other collection of entities is used for the expression of prior beliefs as mentioned in this paper.
Abstract: Almost everyone who has thought about the problems of measuring beliefs in the tradition of subjective probability or Bayesian statistical procedures concedes some uneasiness with the problem of always asking for the next decimal of accuracy in the prior estimation of a probability or of asking for the parameter of a distribution that determines the probabilities of events. On the other hand, the formal theories that have been developed for rational decision-making under uncertainty by Ramsey (1951), de Finetti (1931, 1937), Koopman (1940a, b), Savage (1954) and subsequent authors have almost uniformly tended to yield a result that guarantees a unique probability distribution on states of nature or whatever other collection of entities is used for the expression of prior beliefs.

201 citations


Journal ArticleDOI
TL;DR: In this article, a stress-strength model is formulated for s of k systems consisting of identical components, and minimum variance unbiased estimation of system reliability is considered for data consisting of a random sample from the stress distribution and one from the strength distribution when the two distributions are exponential with unknown scale parameters.
Abstract: A stress-strength model is formulated for s of k systems consisting of identical components. We consider minimum variance unbiased estimation of system reliability for data consisting of a random sample from the stress distribution and one from the strength distribution when the two distributions are exponential with unknown scale parameters. The asymptotic distribution is obtained by expanding the unbiased estimate about the maximum likelihood value and establishing their equivalence. Performance of the two estimates for moderate samples is studied by Monte Carlo simulation. Uniformly most accurate unbiased confidence intervals are also obtained for system reliability.
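
For intuition about the quantity being estimated, here is a rough Monte Carlo sketch of s-of-k stress-strength reliability with exponential stress and strength. It is not the paper's minimum variance unbiased estimator, and the scale parameters are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def s_of_k_reliability(s, k, stress_scale, strength_scale, n=200_000):
    """Monte Carlo estimate of P(at least s of k identical components withstand a common stress)."""
    stress = rng.exponential(stress_scale, size=n)
    strengths = rng.exponential(strength_scale, size=(n, k))
    survivors = (strengths > stress[:, None]).sum(axis=1)
    return (survivors >= s).mean()

# Example: a 2-of-3 system whose components are, on average, twice as strong as the stress.
print(s_of_k_reliability(s=2, k=3, stress_scale=1.0, strength_scale=2.0))
```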

Journal ArticleDOI
TL;DR: In this article, a technique is presented which enables the recovery of the probability distribution for single scattering from plural-scattering electron energy loss data; neither the scattering parameter t/λ nor details of the component processes need be known.
Abstract: A technique is presented which enables the recovery of the probability distribution for single scattering from plural-scattering electron energy loss data. Neither the scattering parameter t/λ nor details of the component processes need be known. The computational method uses Fourier series in order to overcome a number of practical problems in the application of convolution series methods, to include instrumental effects and to permit the processing of data with large values of the scattering parameter. The effects of noise, specimen oxidation and the accuracy of the technique are considered.
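
The flavour of such a Fourier-based recovery can be seen on synthetic, noise-free data: if the plural-scattering spectrum is a Poisson-weighted convolution series of the single-scattering profile, taking the logarithm in Fourier space returns the single-scattering term without knowing t/λ in advance. The profile, grid, and the absence of noise and instrumental broadening below are all simplifying assumptions; this is a sketch of the general idea, not the paper's published algorithm.

```python
import numpy as np

# Synthetic single-scattering profile S(E) on a uniform energy grid (purely illustrative).
E = np.arange(256)
S = np.exp(-0.5 * ((E - 40.0) / 8.0) ** 2)
S /= S.sum()

t_over_lam = 1.5    # scattering parameter t/lambda, used here only to build the synthetic data

# Plural scattering as a Poisson-weighted convolution series of S, written directly in Fourier space.
FS = np.fft.fft(S)
P = np.real(np.fft.ifft(np.exp(t_over_lam * (FS - 1.0))))    # normalised plural-scattering spectrum

# Recovery: the log of the Fourier transform gives (t/lambda)*F[S] minus a constant, so the
# single-scattering distribution comes back without knowing t/lambda beforehand.
recovered = np.real(np.fft.ifft(np.log(np.fft.fft(P))))
print(np.allclose(recovered[1:], t_over_lam * S[1:]))        # True: single scattering recovered
print("estimated t/lambda:", -recovered[0])                  # close to 1.5 (channel 0 holds the zero-loss term)
```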

Journal ArticleDOI
TL;DR: In this correspondence, an extension of the theorem is given which shows that the theorem "almost" holds for an " almost" continuous input distribution.
Abstract: A gray level transformation which maps a given empirical distribution function of gray level values in an image into a uniform distribution has been used both for image enhancement and as a normalization procedure. This transformation produces a discrete variable whose empirical distribution might be expected to be approximately uniform since it is related to the well-known distribution transformation. In this correspondence, an extension of the theorem is given which shows that the theorem "almost" holds for an "almost" continuous input distribution. The application of the discrete distribution transformation to computer image enhancement is considered.
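
The discrete distribution transformation in question is essentially histogram equalization, which is easy to sketch. The synthetic image, the 256-level range, and the rounding convention below are ordinary choices made for illustration rather than anything taken from this correspondence.

```python
import numpy as np

def equalize(image, levels=256):
    """Map gray levels through the empirical distribution function (histogram equalization)."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / image.size              # empirical distribution function of gray levels
    return np.round(cdf[image] * (levels - 1)).astype(np.uint8)

rng = np.random.default_rng(1)
img = rng.integers(60, 120, size=(64, 64), dtype=np.uint8)   # low-contrast test image
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())      # the output spreads over almost the full range
```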

Journal ArticleDOI
TL;DR: This note develops a simple policy characterized by two stock levels and an optimal price line in the price-inventory plane, gives an algorithm to compute such a policy and check conditions for its optimality, and presents a counterexample.
Abstract: This note studies the problem of setting price and production levels simultaneously in a series of N periods, where price is a parameter in the probability distribution of demand. It develops a simple policy characterized by two stock levels and an optimal price line in the price-inventory plane, gives an algorithm to compute such a policy and check conditions for its optimality, and presents a counterexample to the simple policy, using a small set of allowable prices, along with a discussion of both the counterexample and some characteristics of the model.

Journal ArticleDOI
TL;DR: A new distance is proposed which permits tighter bounds to be set on the error probability of the Bayesian decision rule and which is shown to be closely related to several certainty or separability measures.
Abstract: An important measure concerning the use of statistical decision schemes is the error probability associated with the decision rule. Several methods giving bounds on the error probability are presently available, but, most often, the bounds are loose. Those methods generally make use of so-called distances between statistical distributions. In this paper a new distance is proposed which permits tighter bounds to be set on the error probability of the Bayesian decision rule and which is shown to be closely related to several certainty or separability measures. Among these are the nearest neighbor error rate and the average conditional quadratic entropy of Vajda. Moreover, our distance bears much resemblance to the information theoretic concept of equivocation. This relationship is discussed. Comparison is made between the bounds on the Bayes risk obtained with the Bhattacharyya coefficient, the equivocation, and the new measure which we have named the Bayesian distance.
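
For a sense of how such quantities compare, the sketch below computes the exact Bayes error for a toy two-class problem on a discrete feature, together with two classical quantities the paper relates its measure to: the asymptotic nearest-neighbor error rate and a Bhattacharyya-coefficient bound. The distributions and priors are invented, and the paper's Bayesian distance itself is not implemented here.

```python
import numpy as np

# Two class-conditional distributions over a four-valued feature, equal priors (all invented).
p1 = np.array([0.40, 0.30, 0.20, 0.10])
p2 = np.array([0.10, 0.20, 0.30, 0.40])
prior = 0.5

px = prior * p1 + prior * p2                 # marginal distribution of the feature
post1 = prior * p1 / px                      # posterior probability of class 1 given the feature

bayes_error = np.sum(px * np.minimum(post1, 1 - post1))   # exact Bayes risk
nn_rate = np.sum(px * 2 * post1 * (1 - post1))            # asymptotic nearest-neighbor error rate
bhattacharyya = prior * np.sum(np.sqrt(p1 * p2))          # classical upper bound on the Bayes risk

print(f"Bayes error {bayes_error:.3f}, NN rate {nn_rate:.3f}, Bhattacharyya bound {bhattacharyya:.3f}")
```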


Journal ArticleDOI
TL;DR: The algorithm calculates the exact cumulative distribution of the two-sided Kolmogorov-Smirnov statistic for samples with few observations for data sampling and discrete system simulation.
Abstract: The algorithm calculates the exact cumulative distribution of the two-sided Kolmogorov-Smirnov statistic for samples with few observations. The general problem for which the formula is needed is to assess the probability that a particular sample comes from a proposed distribution. The problem arises specifically in data sampling and in discrete system simulation. Typically, some finite number of observations are available, and some underlying distribution is being considered as characterizing the source of the observations.
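
A minimal sketch of the computation this algorithm supports: form the two-sided Kolmogorov-Smirnov statistic for a small sample against a proposed distribution and look up an exact small-sample tail probability. The sample values and the standard normal null are illustrative, and SciPy's kstwo distribution stands in for the article's own exact algorithm.

```python
import numpy as np
from scipy import stats

# A small sample and a proposed distribution (standard normal); values are illustrative.
x = np.sort(np.array([-1.2, -0.4, 0.1, 0.3, 0.9, 1.7]))
n = len(x)

F = stats.norm.cdf(x)                           # proposed CDF evaluated at the order statistics
d_plus = np.max(np.arange(1, n + 1) / n - F)
d_minus = np.max(F - np.arange(0, n) / n)
D = max(d_plus, d_minus)                        # two-sided Kolmogorov-Smirnov statistic

print(D, stats.kstwo.sf(D, n))                  # exact small-sample tail probability
print(stats.kstest(x, "norm"))                  # cross-check against SciPy's own test
```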

Journal ArticleDOI
TL;DR: The authors generalize the one-sector model by transforming the basic differential equation on the capital-labor ratio into a stochastic differential equation, whose probability distributions vary with time, and focus on the existence of a steady state defined by the (probabilistic) stationarity of these variables.

01 Jan 1974
TL;DR: In this paper, an admissible scoring system for a continuous distribution is presented: a collection of possible bets is postulated on a continuous variable, and an admissible scoring system is constructed as the net pay-off to a forecaster who takes all bets (and only those bets) which appear favorable on the basis of his reported distribution.
Abstract: The defining property of an admissible scoring system is that any individual perceives himself as maximizing his expected score by reporting his true subjective distribution. The use of admissible scoring systems as a measure of probabilistic forecasts is becoming increasingly well-known in those cases where the forecast is a discrete distribution over a finite number of alternatives. Most serious forecasts which are made in the real world seem to be forecasts of quantities rather than choices between a finite number of alternatives. In such cases, it seems much more natural to ask the forecaster to specify a continuous probability distribution which represents his expectations rather than trying to re-cast a basically continuous process into a discrete one. To construct an admissible scoring system for a continuous distribution, a collection of possible bets can be postulated on a continuous variable, and an admissible scoring system can be constructed as the net pay-off to a forecaster who takes all bets (and only those bets) which appear favorable on the basis of his reported distribution. Mathematical models for this and alternative systems are presented. (Author/BW)
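
One widely used admissible (strictly proper) score for a continuous forecast is the continuous ranked probability score, shown below for a normal forecast using its standard closed form. It is offered only as a familiar example of the general idea, not as the betting-based construction the paper describes.

```python
import numpy as np
from scipy import stats

def crps_normal(mu, sigma, x):
    """Continuous ranked probability score of a N(mu, sigma^2) forecast at outcome x (lower is better)."""
    z = (x - mu) / sigma
    return sigma * (z * (2 * stats.norm.cdf(z) - 1) + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))

# Propriety check by simulation: when outcomes really are N(0, 1), reporting the true
# distribution achieves the best (smallest) average score.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=100_000)
print(crps_normal(0.0, 1.0, x).mean())   # truthful forecast
print(crps_normal(0.5, 1.0, x).mean())   # biased forecast - worse on average
print(crps_normal(0.0, 0.5, x).mean())   # overconfident forecast - worse on average
```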

Journal ArticleDOI
TL;DR: In this article, the problem of determining which of a number of possible models best describes a set of data is considered, and the multinomial model is used as the base model.
Abstract: Often, more than one probability distribution is theoretically feasible when considering statistical models for an experiment. The problem of determination of the more plausible distribution using likelihood procedures (see, for example, Sprott and Kalbfleisch, 1969) will be discussed for the simple case where all observations are made under the same response conditions. (Lindsey, 1974, will consider this problem when independent variables are present.) To do this using likelihood inference, a base statistical model must be introduced with which all other distributions under consideration may be compared. The derivation which follows yields the multinomial model as the base model. Several approaches have been suggested in the literature to the problem of determining which of a number of possible models best describes a set of data. Cox (1961, 1962) develops asymptotic Neyman-Pearson likelihood ratio tests and suggests an alternative approach involving a combination, either additive or multiplicative, of the density functions, with estimation of additional parameters. This approach is further developed by Atkinson (1970). When prior probabilities, both for each model and for the parameters within the models, are available, Lindley (1961, p. 456) gives a posterior odds ratio of the two models using Bayes's theorem. When applicable (i.e. when prior probabilities are available), this approach may be used with the methods developed below.


Journal ArticleDOI
Peter Kall
TL;DR: In this paper, it was proved that the convergence of the objective functions of the approximating problems to that of the original problem can be achieved by choosing the discrete distributions in quite a natural way.
Abstract: The probability distribution of the data entering a recourse problem is replaced by finite discrete distributions. It is proved that the convergence of the objective functions of the approximating problems to that of the original problem can be achieved by choosing the discrete distributions in quite a natural way. For bounded feasible sets this implies the convergence of the optimal values. Finally some error bounds are derived.
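
The idea of replacing the underlying distribution by finite discrete distributions can be illustrated on a toy recourse problem. The newsvendor-style cost, the normal demand, and the quantile-midpoint discretization below are all assumptions chosen for illustration; the paper's construction and error bounds are not reproduced.

```python
import numpy as np
from scipy import stats

# Toy recourse objective: choose q, then pay overage/underage costs once demand D is observed.
co, cu = 1.0, 3.0                        # unit overage and underage costs (illustrative)
demand = stats.norm(100, 20)             # continuous demand distribution (illustrative)
q = 110.0

def expected_cost(q, points, probs):
    return float(np.sum(probs * (co * np.maximum(q - points, 0) + cu * np.maximum(points - q, 0))))

def discretize(dist, k):
    """Equal-probability discrete approximation supported on the k quantile midpoints."""
    return dist.ppf((np.arange(k) + 0.5) / k), np.full(k, 1.0 / k)

for k in (2, 5, 20, 500):
    pts, pr = discretize(demand, k)
    print(k, round(expected_cost(q, pts, pr), 3))
# the approximate objective values settle down as the discrete distribution is refined
```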

01 Apr 1974
TL;DR: In this article, a general statistical-physical model of man-made radio noise processes appearing in the input stages of a typical receiver is described analytically, and the first-order statistics of the se random processes are developed in detail for narrow-band reception.
Abstract: A general statistical-physical model of man-made radio noise processes appearing in the input stages of a typical receiver is described analytically. The first-order statistics of these random processes are developed in detail for narrow-band reception. These include, principally, the first-order probability densities and probability distributions for a) a purely impulsive (Poisson) process, and b) an additive mixture of a Gaussian background noise and impulsive sources. Particular attention is given to the basic waveforms of the emissions in the course of propagation, including such critical geometric and kinematic factors as the beam patterns of source and receiver, mutual location, Doppler, far-field conditions, and the physical density of the sources, which are assumed independent and Poisson distributed in space over a domain A. Apart from specific analytic relations, the most important general results are that these first-order distributions are analytically tractable and canonical. They are not so complex as to be unusable in communication theory applications; they incorporate in an explicit way the controlling physical parameters and mechanisms which determine the actual radiated and received processes; and finally, they are formally invariant of the particular source location and density, waveform emission, propagation mode, etc., as long as the received disturbance is narrow-band, at least as it is passed by the initial stages of the typical receiver. The desired first-order distributions are represented by an asymptotic development, with additional terms dependent on the fourth and higher moments of the basic interference waveform, which in turn progressively affect the behavior at the larger amplitudes. This first report constitutes an initial step in a program to provide workable analytical models of the general non-Gaussian channel ubiquitous in practical communications applications. Specifically treated here are the important classes of interference with bandwidths comparable to (or less than) the effective aperture-RF-IF bandwidth of the receiver, the common situation in the case of communication interference.

Journal ArticleDOI
TL;DR: In this article, the problem of Brownian motion in nonlinear dynamic systems, including a linear oscillator acted upon by random forces, parametric resonance in an oscillating system with random parameters, turbulent diffusion of particles in a random-velocity field, and diffusion of rays in a medium with random inhomogeneities of the refractive index, is considered.
Abstract: The review considers, on the basis of a unified approach, the problem of Brownian motion in nonlinear dynamic systems, including a linear oscillator acted upon by random forces, parametric resonance in an oscillating system with random parameters, turbulent diffusion of particles in a random-velocity field, and diffusion of rays in a medium with random inhomogeneities of the refractive index. The same method is used to consider also more complicated problems such as equilibrium hydrodynamic fluctuations in an ideal gas, description of hydrodynamic turbulence by the method of random forces, and propagation of light in a medium with random inhomogeneities. The method used to treat these problems consists of constructing equations for the probability density of the system or for its statistical moments, using as the small parameter the ratio of the characteristic time of the random actions to the time constant of the system (in many problems, the role of the time is played by one of the spatial coordinates). The first-order approximation of the method is equivalent to replacement of the real correlation function of the action by a δ function; this yields equations for the characteristics in closed form. The method makes it possible to determine also higher approximations in terms of the aforementioned first-order small parameter.

Journal ArticleDOI
TL;DR: In this article, equivalence classes of regularly varying functions (in the extended sense) are studied, which is relevant to the problems considered in the paper; the method involves an extended theory of regularly varying functions.

Journal ArticleDOI
TL;DR: In this article, a review of results from a continuing series of experiments designed to investigate the external accuracy of subjectively assessed probability distributions is presented, and the impact of extended assessor training and hypotheses regarding the effects of variation in the assessor's information level and the complexity of the assessment task are explored.
Abstract: Despite the key role that subjective probabilities play in decisions made under conditions of uncertainty, little is known about the ability of probability assessors in developing these estimates. A literature survey is followed by a review of results from a continuing series of experiments designed to investigate the external accuracy of subjectively assessed probability distributions. Initial findings confirm that probability assessments provided by untrained assessors are of questionable value in predicting the distribution of actual outcomes of uncertain events. Particular difficulty is encountered when subjects attempt to quantify the extremes of their subjective distributions. The impact of extended assessor training and hypotheses regarding the effects of variation in the assessor's information level and the complexity of the assessment task are explored. Implications for applied decision making are drawn, and directions for future investigations are suggested.

Journal ArticleDOI
TL;DR: The minimal data required for a reasonable estimation of probability distributions is investigated through a Monte Carlo study of a rule for smoothing sparse data into cumulative distribution functions.
Abstract: The minimal data required for a reasonable estimation of probability distributions is investigated through a Monte Carlo study of a rule for smoothing sparse data into cumulative distribution functions. In a set of estimated distributions, risky prospects not preferred by risk-averse decision makers can be identified and discarded.
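
The screening step can be sketched with plain empirical distribution functions (the paper's smoothing rule is not reproduced): build CDFs from two small samples and test second-order stochastic dominance, the condition under which risk-averse decision makers can discard the dominated prospect. The payoff samples below are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def ecdf(sample, grid):
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

# Two risky prospects observed through small samples (illustrative): same mean, different spread.
a = rng.normal(10.0, 2.0, size=8)
b = rng.normal(10.0, 6.0, size=8)

grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), 400)
Fa, Fb = ecdf(a, grid), ecdf(b, grid)

# Second-order stochastic dominance: A dominates B if the running integral of A's CDF
# never exceeds that of B; risk-averse decision makers would then never prefer B.
step = grid[1] - grid[0]
print("A second-order dominates B:", bool(np.all(np.cumsum(Fa) * step <= np.cumsum(Fb) * step + 1e-9)))
```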

Journal ArticleDOI
TL;DR: In this article, an exact and an approximate first-order probability density function are computed in terms of the eigenvalues of the autocorrelation function of the speckle pattern obtained in coherent light.

Journal ArticleDOI
TL;DR: In this article, the authors derived the spectral density of a sinusoidal carrier phase modulated by a random baseband pulse train in which the signaling pulse duration is finite and the signaling pulses may have different shapes.
Abstract: We derive the spectral density of a sinusoidal carrier phase modulated by a random baseband pulse train in which the signaling pulse duration is finite and the signaling pulses may have different shapes. The spectral density is expressed as a compact Hermitian form in which the Hermitian matrix is a function of only the symbol probability distribution, and the associated column vector is a function of only the signal pulse shapes. If the baseband pulse duration is longer than one signaling interval, we assume that the symbols transmitted during different time slots are statistically independent. The applicability of the method to compute the spectral density is illustrated by examples for binary, quaternary, octonary, and 16-ary PSK systems with different pulse overlap. Similar methods yield the spectral density of the output of a nonlinear device whose input is a random baseband pulse train with overlapping pulses.
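
A numerical check of such a spectrum is straightforward with a periodogram estimate. The sketch below builds the complex envelope of a quaternary PSK train with rectangular, non-overlapping pulses and independent equiprobable symbols (all simplifying assumptions; the paper's Hermitian-form expression and its overlapping-pulse cases are not implemented) and estimates its spectral density, which follows the familiar sinc-squared shape.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)

n_sym, sps = 4000, 8                                            # symbols and samples per symbol
phases = (2 * rng.integers(0, 4, size=n_sym) + 1) * np.pi / 4   # QPSK phases at 45, 135, 225, 315 degrees
envelope = np.repeat(np.exp(1j * phases), sps)                  # rectangular pulses, one symbol per interval

# Welch estimate of the two-sided spectral density; frequency is in multiples of the symbol rate.
f, Pxx = signal.welch(envelope, fs=float(sps), nperseg=1024, return_onesided=False)
order = np.argsort(f)
print(f[order][np.argmax(Pxx[order])])                          # peak at 0: main lobe of the sinc^2 spectrum
```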

Journal ArticleDOI
TL;DR: A new form of the probability distribution of the phase of a sine wave in narrow-band normal noise is derived, which leads to several different simple expressions that render the previously required numerical integration of the probability density function unnecessary.
Abstract: A new form of the probability distribution of the phase of a sine wave in narrow-band normal noise is derived. It leads to several different simple expressions for the probability distribution which render the previously required numerical integration of the probability density function unnecessary.
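
The classical closed form for this distribution is easy to state and check numerically. The sketch below uses the textbook expression for the phase density of a sine wave plus narrow-band Gaussian noise, with SNR rho = A^2/(2 sigma^2), rather than the new expressions derived in the paper, and compares it against a direct simulation of the quadrature components.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def phase_pdf(phi, rho):
    """Classical phase density of a sine wave plus narrow-band Gaussian noise, SNR rho = A^2/(2*sigma^2)."""
    c = np.cos(phi)
    return (np.exp(-rho) / (2 * np.pi)
            + 0.5 * np.sqrt(rho / np.pi) * c * np.exp(-rho * np.sin(phi) ** 2) * (1 + erf(np.sqrt(rho) * c)))

rho = 2.0
print(quad(phase_pdf, -np.pi, np.pi, args=(rho,))[0])    # integrates to 1

# Monte Carlo check: phase of (A + in-phase noise, quadrature noise).
rng = np.random.default_rng(5)
A, sigma = np.sqrt(2 * rho), 1.0
phi = np.arctan2(rng.normal(0, sigma, 500_000), A + rng.normal(0, sigma, 500_000))
hist, edges = np.histogram(phi, bins=60, range=(-np.pi, np.pi), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - phase_pdf(centers, rho))))    # small: simulation matches the closed form
```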