
Showing papers on "Probability distribution published in 1983"


Journal ArticleDOI
TL;DR: A procedure for quantifying movement sequences in terms of move length and turning angle probability distributions is developed, and the resulting displacement formula is shown to highlight the consequences of different searching behaviors.
Abstract: This paper develops a procedure for quantifying movement sequences in terms of move length and turning angle probability distributions. By assuming that movement is a correlated random walk, we derive a formula that relates expected square displacements to the number of consecutive moves. We show that this displacement formula can be used to highlight the consequences of different searching behaviors (i.e. different probability distributions of turning angles or move lengths). Observations of Pieris rapae (cabbage white butterfly) flight and Battus philenor (pipe-vine swallowtail) crawling are analyzed as a correlated random walk. The formula that we derive aptly predicts the net displacements of ovipositing cabbage white butterflies. In other circumstances, however, net displacements are not well-described by our correlated random walk formula; in these examples movement must represent a more complicated process than a simple correlated random walk. We suggest that progress might be made by analyzing these more complicated cases in terms of higher-order Markov processes.
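A minimal simulation sketch of the displacement formula described above, assuming a fixed step length and normally distributed turning angles (the paper treats general move-length and turning-angle distributions); all parameter values here are hypothetical:

```python
import math
import random

def simulate_crw(n_steps, step_len, turn_sd, n_walks=2000, seed=1):
    """Mean squared net displacement of a correlated random walk with
    fixed step length and N(0, turn_sd^2) turning angles (radians)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x = y = 0.0
        heading = rng.uniform(0, 2 * math.pi)
        for _ in range(n_steps):
            heading += rng.gauss(0, turn_sd)
            x += step_len * math.cos(heading)
            y += step_len * math.sin(heading)
        total += x * x + y * y
    return total / n_walks

def expected_r2(n, step_len, turn_sd):
    """Expected squared displacement for symmetric turning angles:
    E[R_n^2] = n*m2 + 2*m1^2 * (c/(1-c)) * (n - (1-c^n)/(1-c)),
    with c = E[cos(theta)] and m1, m2 the move-length moments."""
    c = math.exp(-turn_sd ** 2 / 2)  # E[cos(theta)] for normal angles
    return n * step_len ** 2 + 2 * step_len ** 2 * (c / (1 - c)) * (n - (1 - c ** n) / (1 - c))

sim = simulate_crw(50, 1.0, 0.8)
theory = expected_r2(50, 1.0, 0.8)
print(sim, theory)  # the two agree within Monte Carlo error
```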

773 citations


Journal ArticleDOI
TL;DR: In this paper, an order parameter for spin-glasses is defined in a clear physical way: it is a function on the interval 0-1 and it is related to the probability distribution of the overlap of the magnetization in different states of the system.
Abstract: An order parameter for spin-glasses is defined in a clear physical way: It is a function on the interval 0-1 and it is related to the probability distribution of the overlap of the magnetization in different states of the system. It is shown to coincide with the order parameter introduced by use of the broken replica-symmetry approach.

624 citations


Journal ArticleDOI
TL;DR: In this paper, the number of photons emitted in a short time interval by a single atom in the process of resonance fluorescence is measured, and it is shown that the probability distribution of this number is sub-Poissonian.
Abstract: The number of photons emitted in a short time interval by a single atom in the process of resonance fluorescence is measured, and it is shown that the probability distribution of this number is sub-Poissonian.

493 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that higher order product moments yield important structural information when the distribution of variables is arbitrary, and some asymptotically distribution-free efficient estimators for such arbitrary structural models are developed.
Abstract: Current practice in structural modeling of observed continuous random variables is limited to representation systems for first and second moments (e.g., means and covariances), and to distribution theory based on multivariate normality. In psychometrics the multinormality assumption is often incorrect, so that statistical tests on parameters, or model goodness of fit, will frequently be incorrect as well. It is shown that higher order product moments yield important structural information when the distribution of variables is arbitrary. Structural representations are developed for generalizations of the Bentler-Weeks, Joreskog-Keesling-Wiley, and factor analytic models. Some asymptotically distribution-free efficient estimators for such arbitrary structural models are developed. Limited information estimators are obtained as well. The special case of elliptical distributions that allow nonzero but equal kurtoses for variables is discussed in some detail. The argument is made that multivariate normal theory for covariance structure models should be abandoned in favor of elliptical theory, which is only slightly more difficult to apply in practice but specializes to the traditional case when normality holds. Many open research areas are described.

453 citations


Journal ArticleDOI
TL;DR: In this article, it is demonstrated that commonly used discrete approximations of probability distributions systematically underestimate the moments of the original distribution, and a new procedure based on Gaussian quadrature is developed that can decrease the approximation error to any desired level.
Abstract: Practical limits on the size of most probabilistic models require that probability distributions be approximated by a few representative values and associated probabilities. This paper demonstrates that methods commonly used to determine discrete approximations of probability distributions systematically underestimate the moments of the original distribution. A new procedure based on Gaussian quadrature is developed in this paper. It can be used to decrease the error in the approximation to any desired level.
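The Gaussian-quadrature idea can be sketched with NumPy's Gauss-Hermite routine: a three-point discrete approximation of a standard normal that matches the low-order moments exactly (a sketch of the technique, not the paper's own procedure):

```python
import numpy as np

# Three-point discrete approximation of a standard normal built from
# Gauss-Hermite nodes/weights (weight exp(-x^2), rescaled for N(0,1)).
# An n-point rule reproduces moments up to order 2n-1 exactly, which
# is what conventional discrete approximations fail to do.
nodes, weights = np.polynomial.hermite.hermgauss(3)
points = np.sqrt(2.0) * nodes      # abscissas for N(0, 1)
probs = weights / np.sqrt(np.pi)   # probabilities summing to 1

for k in (1, 2, 3, 4):
    print(k, np.sum(probs * points ** k))  # approximately 0, 1, 0, 3
```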

311 citations


Journal ArticleDOI
TL;DR: In this paper, a flexible representation of a firm's stochastic technology is developed based on the moments of the probability distribution of output, which are a unique representation of the technology and are functions of inputs.
Abstract: Conventional production function specifications are shown to impose restrictions on the probability distribution of output that cannot be tested with the conventional models. These restrictions have important implications for firm behavior under uncertainty. A flexible representation of a firm's stochastic technology is developed based on the moments of the probability distribution of output. These moments are a unique representation of the technology and are functions of inputs. Large-sample estimators are developed for a linear moment model that is sufficiently flexible to test the restrictions implied by conventional production function specifications. The flexible moment-based approach is applied to milk production data. The first three moments of output are statistically significant functions of inputs. The cross-moment restrictions implied by conventional models are rejected.
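One simplified way to sketch a linear moment model of this kind (not the paper's estimator): fit the mean of output by least squares, then regress squared residuals on the same inputs to get the second moment as a function of inputs. Data are simulated and all coefficients hypothetical:

```python
import numpy as np

# Moment-based sketch: output mean and variance both depend on input x.
# First moment by least squares; second moment by regressing squared
# residuals on the same inputs.
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 4, n)
y = 2.0 + 1.5 * x + rng.normal(0, np.sqrt(0.5 + 0.8 * x))

X = np.column_stack([np.ones(n), x])

beta1, *_ = np.linalg.lstsq(X, y, rcond=None)           # mean function
resid = y - X @ beta1
beta2, *_ = np.linalg.lstsq(X, resid ** 2, rcond=None)  # variance function
print(beta1, beta2)  # should roughly recover (2.0, 1.5) and (0.5, 0.8)
```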

298 citations


Book
01 Nov 1983
TL;DR: The authors develop the theoretical background for robust estimation (metrics for probability distributions; robustness, breakdown point, and influence function) and treat M, L, R, and MM type estimators, together with quantile estimators and confidence intervals.
Abstract: 1. Introduction and Summary.- 1.1. History and main contributions.- 1.2. Why robust estimations?.- 1.3. Summary.- A The Theoretical Background.- 2. Sample spaces, distributions, estimators.- 2.1. Introduction.- 2.2. Example.- 2.3. Metrics for probability distributions.- 2.4. Estimators seen as functionals of distributions.- 3. Robustness, breakdown point and influence function.- 3.1. Definition of robustness.- 3.2. Definition of breakdown point.- 3.3. The influence function.- 4. The jackknife method.- 4.1. Introduction.- 4.2. The jackknife advanced theory.- 4.3. Case study.- 4.4. Comments.- 5. Bootstrap methods, sampling distributions.- 5.1. Bootstrap methods.- 5.2. Sampling distribution of estimators.- B.- 6. Type M estimators.- 6.1. Definition.- 6.2. Influence function and variance.- 6.3. Robust M estimators.- 6.4. Robustness, quasi-robustness and non-robustness.- 6.4.1. Statement of the location problem.- 6.4.2. Least powers.- 6.4.3. Huber's function.- 6.4.4. Modification to Huber's proposal.- 6.4.5. Function "Fair".- 6.4.6. Cauchy's function.- 6.4.7. Welsch's function.- 6.4.8. "Bisquare" function.- 6.4.9. Andrews's function.- 6.4.10. Selection of the ψ-function.- 7. Type L estimators.- 7.1. Definition.- 7.2. Influence function and variance.- 7.3. The median and related estimators.- 8. Type R estimator.- 8.1. Definition.- 8.2. Influence function and variance.- 9. Type MM estimators.- 9.1. Definition.- 9.2. Influence function and variance.- 9.3. Linear model and robustness - Generalities.- 9.4. Scale of residuals.- 9.5. Robust linear regression.- 9.6. Robust estimation of multivariate location and scatter.- 9.7. Robust non-linear regression.- 9.8. Numerical methods.- 9.8.1. Relaxation methods.- 9.8.2. Simultaneous solutions.- 9.8.3. Solution of fixed-point and non-linear equations.- 10. Quantile estimators and confidence intervals.- 10.1. Quantile estimators.- 10.2. Confidence intervals.- 11. Miscellaneous.- 11.1. Outliers and their treatment.- 11.2. Analysis of variance, constraints on minimization.- 11.3. Adaptive estimators.- 11.4. Recursive estimators.- 11.5. Concluding remark.- 12. References.- 13. Subject index.
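As a flavor of the type M estimators covered in part B, a minimal sketch of a Huber-type location estimate computed by iteratively reweighted averaging (data, tuning constant, and the crude median/MAD scale are illustrative choices, not taken from the book):

```python
# Huber-type M estimate of location (cf. section 6.4.3 of the outline)
# computed by iteratively reweighted averaging.
data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 9.0]  # one gross outlier
k = 1.345                                        # Huber tuning constant

med = sorted(data)[len(data) // 2]               # simple median
mad = sorted(abs(x - med) for x in data)[len(data) // 2] / 0.6745

mu = med
for _ in range(100):
    w = [1.0 if abs(x - mu) <= k * mad else k * mad / abs(x - mu) for x in data]
    mu_new = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    if abs(mu_new - mu) < 1e-10:
        break
    mu = mu_new
print(mu)  # about 2.16, while the sample mean (~2.96) chases the outlier
```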

262 citations


Book
01 Jan 1983
TL;DR: This text is a revised version of the author's thesis for the University of Leiden and is mainly concerned with the theory of finite Markov decision problems.
Abstract: This text is a revised version of the author's thesis for the University of Leiden and is mainly concerned with the theory of finite Markov decision problems. Such problems are those where a decision maker observes the state of a system at discrete time-points and chooses an action at each point, earning a reward and changing the state of the system according to a probability distribution. If the action chosen depends only on the current state, then the sequence of states forms a Markov chain.
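The finite Markov decision problem described above can be sketched with value iteration; the two-state example below, with hypothetical transition probabilities and rewards, is not from the book:

```python
import numpy as np

# Value iteration for a tiny finite Markov decision problem: two
# states, two actions; P[a][s][s'] are transition probabilities and
# R[s][a] immediate rewards (all numbers hypothetical), discount g.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],   # action 0
              [[0.5, 0.5],
               [0.4, 0.6]]])  # action 1
R = np.array([[1.0, 0.0],     # rewards in state 0 for actions 0, 1
              [0.0, 2.0]])    # rewards in state 1 for actions 0, 1
g = 0.9

V = np.zeros(2)
for _ in range(500):
    # Q[s, a] = R[s, a] + g * sum_s' P[a, s, s'] * V[s']
    Q = R + g * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
policy = Q.argmax(axis=1)
print(V, policy)  # optimal values and greedy policy
```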

218 citations


Journal ArticleDOI
TL;DR: In this paper, a stochastic model for mass transport in a network of discrete fractures is developed, based on the repetitive generation of realizations of a fracture network from probability distributions describing the fracture geometry and on a particle-tracking solution for mass transport within each network.
Abstract: A stochastic modeling technique has been developed to investigate mass transport in a network of discrete fractures. The model is based on the repetitive generation of realizations of a fracture network from probability distributions, describing the fracture geometry, and on a solution for mass transport within each network, using a particle-tracking technique. The system we work with consists of two orthogonal fracture sets of finite length, oriented at various angles with respect to the direction of the mean hydraulic gradient. Emphasis is placed on describing the character of dispersion, which develops as a consequence of fracture interconnectivity, and on testing the validity of the conventional diffusion-based model of dispersion in describing transport in fractured media. Results show that mass distributions have a complex form. Marked longitudinal dispersion can develop even a short distance from a source. The distribution of mass in the direction of flow has a consistent negative skew. This pattern of dispersion arises from the limited number of pathways for mass to migrate through the network. Controlling factors in the transport process are the orientation of the fracture sets with respect to the mean hydraulic gradient, the difference in the mean flow velocity in the two fracture sets, and the standard deviation in velocity for fracture set 1. Transport patterns can change greatly as the orientation of the hydraulic gradient changes with respect to the two fracture sets. A conventional diffusion-based model of dispersion cannot characterize transport in these fracture networks. A skewed spatial distribution of mass is observed much more frequently than a Gaussian distribution. When the mean velocities in the two fracture sets are not equal, the form of mass spreading is described by a more general, skewed distribution that accounts for the bias in the probability of mass moving along one fracture set over another. 
There is a tendency for mass to form a more symmetric distribution as the orientation of the two fracture sets is rotated toward a 45° angle with respect to the direction of the mean hydraulic gradient. Furthermore, constant dispersivity values or simple dispersivity functions are not definable because of the sensitivity of transport to the local velocity field in the fracture network.

174 citations


Journal ArticleDOI
TL;DR: The paper illustrates the advantages of using t-distributions, instead of least squares, and clarifies the role of the information provided by the standard deviations.
Abstract: Several people each express their opinion of an uncertain quantity by providing their mean and standard deviation of it. In this paper we discuss the problem of incorporating all these opinions into a single, reconciled, probability distribution for the quantity. The usual procedure is least squares, based on normality. The paper illustrates the advantages of using t-distributions, instead, and clarifies the role of the information provided by the standard deviations. Suitable approximations are provided for many cases.
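For the normal-theory baseline that the paper improves on, the least-squares reconciliation reduces to a precision-weighted average; a minimal sketch with hypothetical expert opinions:

```python
# Normal-theory (least squares) reconciliation: each expert i reports
# mean m_i and standard deviation s_i; under normality the reconciled
# distribution has precision-weighted mean and summed precision.
# Expert numbers below are hypothetical.
means = [10.0, 12.0, 11.0]
sds = [1.0, 2.0, 0.5]

precisions = [1.0 / s ** 2 for s in sds]
pooled_mean = sum(p * m for p, m in zip(precisions, means)) / sum(precisions)
pooled_sd = (1.0 / sum(precisions)) ** 0.5
print(pooled_mean, pooled_sd)  # 10.857..., 0.436...
```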

164 citations


Journal ArticleDOI
TL;DR: In this article, a new mathematical theory is proposed to analyze the propagation of fatigue cracks based on the concepts of fracture mechanics and random processes, and the time-dependent crack size is approximated by a Markov process.
Abstract: A new mathematical theory is proposed to analyze the propagation of fatigue cracks based on the concepts of fracture mechanics and random processes. The time-dependent crack size is approximated by a Markov process. Analytical expressions are obtained for the probability distribution of crack size at any given time and the probability distribution of the random time at which a given crack size is reached, conditional on the knowledge of the initial crack size. Examples are given to illustrate the application of the theory, and the results are compared with available experimental data.
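A minimal discrete-state sketch of the Markov-process idea (the paper works with analytical expressions; the states, transition probabilities, and time scale here are hypothetical):

```python
import numpy as np

# Crack growth as a discrete-state Markov chain: states are crack-size
# bins; per load block the crack stays or grows one bin (transition
# probabilities hypothetical). The failure size, state 3, is absorbing.
P = np.array([
    [0.7, 0.3, 0.0, 0.0],
    [0.0, 0.8, 0.2, 0.0],
    [0.0, 0.0, 0.9, 0.1],
    [0.0, 0.0, 0.0, 1.0],
])
p0 = np.array([1.0, 0.0, 0.0, 0.0])  # known initial crack size

# Probability distribution of crack size after 30 load blocks.
pt = p0 @ np.linalg.matrix_power(P, 30)

# Distribution of the random time to reach the failure size:
# F[t] = P(absorbed by time t); differencing F gives its pmf.
F = [(p0 @ np.linalg.matrix_power(P, t))[3] for t in range(80)]
print(pt, F[30])
```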

Journal ArticleDOI
TL;DR: In this article, the authors developed a statistical model for the time dependent failure of a unidirectional composite material under tensile loads and derived the probability distributions for lifetime in both stress-rupture and cyclic-fatigue and for strength under a linearly increasing load.

Journal ArticleDOI
TL;DR: In this article, a path-integral solution is derived for processes described by nonlinear Fokker-Planck equations together with externally imposed boundary conditions; it is written in the form of a path sum for small time steps and contains, in addition to a conventional volume integral, a surface integral which incorporates the boundary conditions.
Abstract: A path-integral solution is derived for processes described by nonlinear Fokker-Planck equations together with externally imposed boundary conditions. This path-integral solution is written in the form of a path sum for small time steps and contains, in addition to the conventional volume integral, a surface integral which incorporates the boundary conditions. A previously developed numerical method, based on a histogram representation of the probability distribution, is extended to a trapezoidal representation. This improved numerical approach is combined with the present path-integral formalism for restricted processes and is shown to give accurate results.

Journal ArticleDOI
TL;DR: The consistency and asymptotic expressions for the bias and covariance of discrete-time estimates f_{n}(x) for the marginal probability density function f(x) of continuous-time processes X(t) are established.
Abstract: For broad classes of deterministic and random sampling schemes \{t_{k}\} we establish the consistency and asymptotic expressions for the bias and covariance of discrete-time estimates f_{n}(x) for the marginal probability density function f(x) of continuous-time processes X(t) . The effect of the sampling scheme and the sampling rate on the performance of the estimates is studied. The results are established for continuous-time processes X(t) satisfying various asymptotic independence-uncorrelatedness conditions.
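A sketch of such a discrete-time density estimate: a Gaussian-kernel estimator f_n(x) applied to samples of a discretized Ornstein-Uhlenbeck process, whose stationary marginal is known (the sampling scheme and bandwidth here are illustrative, not the paper's):

```python
import math
import random

# Kernel estimate f_n(x) of the marginal density from discrete-time
# samples X(t_k) of a continuous-time process; here a discretized
# Ornstein-Uhlenbeck process whose stationary marginal is close to
# N(0, 1/2).
rng = random.Random(7)
n, dt = 20000, 0.05
xs, x = [], 0.0
for _ in range(n):
    x += -x * dt + math.sqrt(dt) * rng.gauss(0, 1)
    xs.append(x)

h = 1.06 * (0.5 ** 0.5) * n ** (-0.2)  # rule-of-thumb bandwidth

def f_n(x0):
    """Gaussian-kernel density estimate at x0."""
    return sum(math.exp(-((x0 - xi) / h) ** 2 / 2) for xi in xs) / (n * h * math.sqrt(2 * math.pi))

print(f_n(0.0))  # the true stationary density at 0 is about 0.56
```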

Book
01 Jan 1983
TL;DR: An introductory statistics text covering descriptive statistics, probability, the binomial, Poisson, and normal probability distributions, sampling methods and sampling distributions, the central limit theorem, confidence intervals, and hypothesis testing.
Abstract: Part 1 An introduction to statistics: Who Else Uses Statistics? Graphic Presentation of Data. Types of Statistics. Types of Variables. Levels of Measurement. Uses and Abuses of Statistics. A Word of Encouragement. Computer Applications. Part 2 Summarizing data - frequency distributions and graphic presentations: Frequency Distributions. Stem-and-Leaf Charts. Portraying the Frequency Distributions Graphically. Misuses of Charts and Graphs. Part 3 Descriptive statistics - measures of central tendency: The Sample Mean. The Population Mean. Properties of the Arithmetic Mean. The Weighted Arithmetic Mean. The Median. The Mode. Choosing an Average. Estimating the Arithmetic Mean from Grouped Data. Estimating the Median from Grouped Data. Estimating the Mode from Grouped Data. Choosing an Average for Data in a Frequency Distribution. Part 4 Descriptive statistics - measures of dispersion and skewness. Measures of Dispersion for Raw Data. Measures of Dispersion for Grouped Data. Interpreting and Using the Standard Deviation. Box Plots. Relative Dispersion. The Coefficient of Skewness. Software Example. Part 5 An introduction to probability: Concepts of Probability. Types of Probability. Classical Concept of Probablity. Probability Rules. Some Counting Principles. Part 6 Probability distributions: What is a Probability Distribution? Discrete and Continuous Random Variables. The Mean and Variance of a Probability Distribution. The Binomial Probability Distribution. The Poisson Probability Distribution. Part 7 The normal probability distribution: Characteristics of a Normal Probability Distribution. The "Family" of Normal Distributions. The Standard Normal Probability Distribution. The Normal Approximation to the Binomial. The Normal Approximation to the Poisson Distribution. Part 8 Sampling methods and sampling distributions: Designing the Sample Survey or Experiment. Methods of Probability Sampling. The Sampling Error. The Sampling Distribution of the Sample Mean. 
Part 9 The central limit theorem and confidence intervals: The Central Limit Theorem. Confidence Intervals for Means. The Standard Error of the Sample Mean. The Standard Error of the Sample Proportion. Confidence Intervals for Proportions. The Finite Population Correction Factor. Choosing an Appropriate Sample Size. Part 10 Hypothesis tests - large-sample methods: The General Idea of Hypothesis Testing. A Test Involving the Population Mean (Large Samples.) A Test for Two Population Means (Large Samples.) A Test Involving the Population Proportion (Large Samples). A Test Involving Two Population Proportions (Large Samples). Part 11 Hypothesis tests - small-sample method: Characteristics of the T Distribution. Testing a Hypothesis about a Population Mean. Comparing Two Population Means. Testing with Dependent Observations. Comparing Dependent and Independent Samples.

Proceedings ArticleDOI
01 May 1983
TL;DR: Estimates of the number of sequential and random block accesses required for retrieving a number of records of a file when the distribution of records in blocks of secondary storage is not uniform are provided.
Abstract: In this paper we provide estimates of the number of sequential and random block accesses required for retrieving a number of records of a file when the distribution of records in blocks of secondary storage is not uniform. We show how these results apply to estimating sizes of joins and semi-joins. We prove that when the uniformity of placement assumption is not satisfied it often leads to pessimistic estimates of performance. Finally we show a recursive estimation of the probability distribution of the number of blocks containing a given number of records.
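For contrast with the nonuniform analysis in the paper, the uniform-placement baseline is Yao's classical estimate of the expected number of blocks touched; a sketch:

```python
from math import comb

def yao_blocks(n_records, n_blocks, k_selected):
    """Expected number of blocks touched when k of n records are
    retrieved, records placed uniformly with n/n_blocks per block
    (Yao's formula, the uniform baseline the paper generalizes)."""
    m = n_records // n_blocks  # records per block (assumes divisibility)
    # P(a given block holds none of the k selected records)
    p_empty = comb(n_records - m, k_selected) / comb(n_records, k_selected)
    return n_blocks * (1 - p_empty)

print(yao_blocks(10000, 100, 50))  # roughly 40 of the 100 blocks
```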

Journal ArticleDOI
TL;DR: In this paper, the problem of image segmentation is considered in the context of a mixture of probability distributions, where segments fall into classes and a probability distribution is associated with each class of segment.
Abstract: The problem of image segmentation is considered in the context of a mixture of probability distributions. The segments fall into classes. A probability distribution is associated with each class of segment. Parametric families of distributions are considered, a set of parameter values being associated with each class. With each observation is associated an unobservable label, indicating from which class the observation arose. Segmentation algorithms are obtained by applying a method of iterated maximum likelihood to the resulting likelihood function. A numerical example is given. Choice of the number of classes, using Akaike's information criterion (AIC) for model identification, is illustrated.
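The iterated-maximum-likelihood idea can be sketched with an EM-style fit of a two-class mixture of normals (one-dimensional data and all numbers illustrative; the paper treats general parametric families and model choice via AIC):

```python
import math
import random

# Two-class mixture of normals fitted by iterated maximum likelihood
# (EM): each observation's class label is unobservable and enters
# through posterior class probabilities. Data are simulated; the
# class means (0 and 4) and all settings are illustrative.
rng = random.Random(3)
data = [rng.gauss(0, 1) for _ in range(400)] + [rng.gauss(4, 1) for _ in range(400)]

def norm_pdf(x, mu, sd):
    return math.exp(-((x - mu) / sd) ** 2 / 2) / (sd * math.sqrt(2 * math.pi))

w, mu, sd = [0.5, 0.5], [min(data), max(data)], [1.0, 1.0]
for _ in range(100):
    # E-step: posterior probability that each observation came from class j
    post = []
    for x in data:
        p = [w[j] * norm_pdf(x, mu[j], sd[j]) for j in range(2)]
        s = p[0] + p[1]
        post.append([p[0] / s, p[1] / s])
    # M-step: reestimate mixing weights, means and standard deviations
    for j in range(2):
        nj = sum(q[j] for q in post)
        w[j] = nj / len(data)
        mu[j] = sum(q[j] * x for q, x in zip(post, data)) / nj
        sd[j] = math.sqrt(sum(q[j] * (x - mu[j]) ** 2 for q, x in zip(post, data)) / nj)
print(w, mu, sd)  # means should come out near 0 and 4
```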

Journal ArticleDOI
TL;DR: A process is described for analyzing failure data in which Bayes' theorem is used twice, firstly to develop a "prior" or "generic" probability distribution and secondly to specialize this distribution to the specific machine or system in question.
Abstract: A process is described for analyzing failure data in which Bayes' theorem is used twice, firstly to develop a "prior" or "generic" probability distribution and secondly to specialize this distribution to the specific machine or system in question. The process is shown by examples to be workable in practice as well as simple and elegant in concept.
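A minimal numerical sketch of the double use of Bayes' theorem, on a discrete grid of failure probabilities and with hypothetical counts (the paper's actual procedure and data differ):

```python
# Bayes' theorem applied twice on a discrete grid of failure
# probabilities p: pooled "generic" data first build the prior, then
# the specific machine's record specializes it. Counts are hypothetical.
grid = [i / 100 for i in range(1, 100)]          # candidate values of p

def binom_like(p, k, n):
    return p ** k * (1 - p) ** (n - k)           # likelihood up to a constant

def bayes_update(prior, k, n):
    post = [pr * binom_like(p, k, n) for p, pr in zip(grid, prior)]
    s = sum(post)
    return [q / s for q in post]

flat = [1.0 / len(grid)] * len(grid)
generic = bayes_update(flat, 12, 300)    # industry-wide: 12 failures in 300 demands
specific = bayes_update(generic, 0, 40)  # this machine: 0 failures in 40 demands

mean = lambda dist: sum(p * pr for p, pr in zip(grid, dist))
print(mean(generic), mean(specific))  # machine-specific mean drops below generic
```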

Journal ArticleDOI
TL;DR: It is shown that the envelopes of deterministic autocorrelations have essentially a cosine-like behavior but with jump discontinuities at points where the normalized relative displacement is the reciprocal of an integer.
Abstract: The auto- and cross-correlations of L2 functions are constrained by certain bounds which may often be used to advantage. These bounds apply to all the common cross-correlation functions used for registration purposes (called "deterministic" correlation functions in this paper, as opposed to stochastic correlation based on non-L2 functions). It is shown that the envelopes of deterministic autocorrelations have essentially a cosine-like behavior, but with jump discontinuities at points where the normalized relative displacement is the reciprocal of an integer. Several inequalities extending these results are given. It is shown how these can be applied toward obtaining improved registration algorithms.

Journal ArticleDOI
TL;DR: In this paper, the use of a scoring rule for the elicitation of forecasts in the form of probability distributions and for the subsequent evaluation of such forecasts is studied, and two simple, well-known rules, the spherical and the quadratic, are shown to be effective with respect to suitable metrics.
Abstract: This paper studies the use of a scoring rule for the elicitation of forecasts in the form of probability distributions and for the subsequent evaluation of such forecasts. Given a metric distance function on a space of probability distributions, a scoring rule is said to be effective if the forecaster's expected score is a strictly decreasing function of the distance between the elicited and "true" distributions. Two simple, well-known rules, the spherical and the quadratic, are shown to be effective with respect to suitable metrics. Examples and a practical application in Foreign Exchange rate forecasting are also provided.
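The two rules can be written down directly; a sketch checking their propriety (expected score maximized by reporting the "true" distribution) on a hypothetical three-outcome forecast:

```python
import math

# Quadratic and spherical scoring rules for a discrete forecast
# r = (r_1,...,r_n) scored on the realized outcome i; numbers below
# are hypothetical. Both rules are proper: the expected score under
# the "true" distribution is maximized by reporting that distribution.
def quadratic_score(r, i):
    return 2 * r[i] - sum(p * p for p in r)

def spherical_score(r, i):
    return r[i] / math.sqrt(sum(p * p for p in r))

forecast = [0.6, 0.3, 0.1]
truth = [0.7, 0.2, 0.1]

def expected(score, r):
    return sum(truth[i] * score(r, i) for i in range(len(r)))

print(expected(quadratic_score, forecast), expected(quadratic_score, truth))
# reporting the truth earns the higher expected score
```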

Book
01 Jan 1983
TL;DR: An introductory econometrics text covering descriptive statistics, the specification and estimation of regression models, probability distributions, inference in regression, and selected topics in econometrics and statistics.
Abstract: 1. Introduction. I. DATA AND DESCRIPTION. 2. Economic Data. 3. Descriptive Statistics. 4. Frequency Distributions. II. SPECIFICATION AND ESTIMATION OF REGRESSION MODELS. 5. Simple Regression: Theory. 6. Simple Regression: Application. 7. Multiple Regression: Theory and Application. III. PROBABILITY DISTRIBUTIONS. 8. Probability Theory. 9. Random Variables and Probability Distributions. 10. The Normal and t Distributions. IV. INFERENCE IN REGRESSION. 11. Sampling Theory in Regression. 12. Hypothesis Testing. 13. Estimation and Regression Problems. V. TOPICS IN ECONOMETRICS. 14. F Tests and Dummy Variable Outcomes. 15. Heteroscedasticity and Autocorrelation. 16. Regression and Time Series. 17. Simultaneous-Equation Models. VI. TOPICS IN STATISTICS. 18. Inference for the Mean and Variance. 19. Chi-Square Tests and Analysis of Variance. Statistical Tables. Answers to Selected Problems. Bibliography. Index.

Journal ArticleDOI
TL;DR: It is shown that at a given point in time theJust reward function and the just reward distribution are linked in a mathematically precise relationship that includes a third element, viz., the probability distributions of the reward-relevant characteristics.
Abstract: Suppose that each individual member of a society is judged to be perfectly justly rewarded in the distribution of a socially valued good (such as income). Does it necessarily follow that the observer making the judgment will also judge the resultant probability distribution of that good to be a just distribution? This paper answers that question by specifying the conditions under which justice judgments about individuals and justice judgments about collectivities are and are not mutually entailed. I show that at a given point in time the just reward function (or micro principle of justice) and the just reward distribution (or macro principle of justice) are linked in a mathematically precise relationship that includes a third element, viz., the probability distributions of the reward-relevant characteristics. Given the latter probability distributions, the micro and macro principles imply each other, and hence there is no inherent inconsistency between them. However, intertemporal changes in the probability distributions of the reward-relevant characteristics destroy the fit between the just reward function and the just reward distribution, rendering mutually inconsistent a previously consistent set of micro and macro principles of justice. If such changes in the probability distributions of the reward-relevant characteristics are not uncommon, then conflict between the micro and macro principles of justice would appear to be inevitable over time. Such inconsistency, however, can be avoided if the reward-relevant characteristics in the just reward function are expressed as ranks rather than as magnitudes, provided that individuals' ranks remain constant over time (even though the corresponding magnitudes might change).

Journal ArticleDOI
TL;DR: In this article, a stochastic description of an exothermic reaction leading to adiabatic explosion is set up, and the numerical solution of the master equation reveals the appearance of a long tail and of multiple humps of the probability distribution, which subsist for a certain period of time.
Abstract: A stochastic description of an exothermic reaction leading to adiabatic explosion is set up. The numerical solution of the master equation reveals the appearance of a long tail and of multiple humps of the probability distribution, which subsist for a certain period of time. During this interval the system displays a markedly chaotic behavior, reflecting the random character of the ignition process. An analytical description of this transient evolution is developed, using a piecewise linear approximation of the transition rates. A comparison with other transient phenomena observed in stochastic theory is carried out.

Journal ArticleDOI
TL;DR: In this article, a comprehensive framework for power system security assessment which incorporates probabilistic aspects of disturbances and system dynamic responses to disturbances is presented, where a linear vector differential equation is derived whose solution gives the probability distribution of the time to insecurity.
Abstract: A comprehensive framework for power system security assessment which incorporates probabilistic aspects of disturbances and system dynamic responses to disturbances is presented. Standard mathematical models for power system (steady-state) power flow analysis and transient stability (dynamic) analysis are used. A linear vector differential equation is derived whose solution gives the probability distribution of the time to insecurity. The coefficients of the differential equation contain the transition rates of system structural changes and a set of transition probabilities defined in terms of the steady-state and the dynamic security regions. These regions are defined in the space of power injections. Upper and lower bounds on the time to insecurity distribution are obtained.

Journal ArticleDOI
TL;DR: In this article, the authors propose robust estimates for the parameters in the general linear model, based on the minimization of a dispersion function defined by a weighted Gini's mean difference; the asymptotic distribution of the estimate is derived with an asymptotic linearity result.
Abstract: Robust estimates for the parameters in the general linear model are proposed which are based on weighted rank statistics. The method is based on the minimization of a dispersion function defined by a weighted Gini's mean difference. The asymptotic distribution of the estimate is derived with an asymptotic linearity result. An influence function is determined to measure how the weights can reduce the influence of high-leverage points. The weights can also be used to base the ranking on a restricted set of comparisons. This is illustrated in several examples with stratified samples, treatment vs control groups and ordered alternatives.

Journal ArticleDOI
TL;DR: In this article, Monte Carlo calculations are used to investigate statistical properties of random walks on fractal structures: the range of the walk, renewal theory (probability of return P0(N) and mean number of returns νN), and the probability distribution of the walker position P(N,R) after N steps.
Abstract: Monte Carlo calculations are used to investigate some statistical properties of random walks on fractal structures. Two kinds of lattices are used: the Sierpinski gasket and the infinite percolation cluster, in two dimensions. Among other problems, the authors study: (i) the range RN of the walker (number of distinct visited sites during N steps): average value SN, variance σN and asymptotic distribution; (ii) renewal theory (return to the original site): probability of return P0(N), mean number of returns νN. The probability distribution of the walker position P(N,R) after N steps is discussed. The asymptotic behaviour (N>>1) of these quantities exhibits power laws, with associated exponents. The numerical values of these exponents are in good agreement with recent theoretical predictions (Alexander and Orbach, 1982; Rammal and Toulouse, 1982).
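The range of the walker can be sketched in a few lines; for simplicity the walk below runs on the ordinary square lattice rather than a fractal lattice, so it shows only the bookkeeping, not the fractal exponents studied in the paper:

```python
import random

# Range R_N of a random walk: the number of distinct sites visited in
# N steps, estimated by Monte Carlo on the square lattice.
rng = random.Random(2)

def walk_range(n_steps):
    x = y = 0
    seen = {(0, 0)}
    for _ in range(n_steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        seen.add((x, y))
    return len(seen)

samples = [walk_range(2000) for _ in range(200)]
mean_range = sum(samples) / len(samples)
print(mean_range)  # on Z^2 the mean range grows like pi*N/log(N)
```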

Journal ArticleDOI
TL;DR: Segmentation algorithms are obtained by applying a relaxation method to maximize the resulting likelihood function and special attention is given to the situation in which the observations are conditionally independent, given the labels.

Journal ArticleDOI
TL;DR: By simulation, the possibility of generating a process with a specified spectral density and a specified first-order probability distribution by passing a Gaussian process with an appropriately chosen spectral density through an appropriately chosen zero-memory nonlinearity is explored.
Abstract: The procedure for generating a Gaussian process with a specified spectral density is well known. It is harder to generate a process with a specified spectral density and a specified first-order probability distribution. In this paper we explore, by simulation, the possibility of generating a process with such a dual specification by passing a Gaussian process with an appropriately chosen spectral density through an appropriately chosen zero-memory nonlinearity. Several applications are cited where such a dual specification is desirable.
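The transformation idea can be sketched as follows: a Gaussian AR(1) process is pushed through the zero-memory nonlinearity x -> -log(1 - Phi(x)), producing an exponential first-order distribution (pre-distorting the Gaussian spectrum to hit a target output spectrum, the harder part studied in the paper, is omitted here):

```python
import math
import random

# Gaussian AR(1) with lag-1 correlation rho, pushed through the
# zero-memory nonlinearity x -> -log(1 - Phi(x)); the output has a
# unit-exponential first-order distribution while staying correlated.
rho = 0.8
rng = random.Random(11)

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

g, out = 0.0, []
for _ in range(50000):
    g = rho * g + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    out.append(-math.log(1 - phi(g)))

m = sum(out) / len(out)
v = sum((x - m) ** 2 for x in out) / len(out)
print(m, v)  # both near 1 for a unit exponential
```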

Journal ArticleDOI
TL;DR: In this paper, a method is derived which estimates the probability distribution of the critical gaps of those drivers entering a main road at a priority junction who have rejected the initial lag offered to them, using observations of the sizes of the gaps refused and that eventually accepted by the driver.
Abstract: A method is derived which estimates the probability distribution of the critical gaps of those drivers entering a main road at a priority junction who have rejected the initial lag offered to them, using observations of the sizes of the gaps refused and that eventually accepted by the driver. An extension of the method estimates the probability distribution of the critical times of all drivers, on the assumption that each driver’s critical lag and critical gap are equal, from observations of the sizes of the initial lags accepted and rejected by the driver, and the sizes of subsequent gaps considered if the initial lag is rejected. Both methods have been tested using data obtained by computer simulation. Estimates of the mean and standard deviation obtained using each method are substantially unbiased and have a fairly small standard error. Estimates of the critical gap and critical time distributions obtained using the methods are subject to considerable errors in individual cells of the distribution, ho...

Journal ArticleDOI
TL;DR: The method of arbitrary functions is a mathematical technique for the determination of probability distributions that has been used to justify a unique and objectively interpreted concept of probability in games of chance of a mechanical type, and in the theory of dynamical systems of mathematical physics.
Abstract: The method of arbitrary functions is a mathematical technique for the determination of probability distributions. It has been used to justify a unique and objectively interpreted concept of probability in games of chance of a mechanical type, and in the theory of dynamical systems of mathematical physics. Relatively recent results in the ergodic theory of classical dynamical systems will be useful for a re-evaluation of the method of arbitrary or random functions. Towards the end of this paper, we will try to show how, and to what degree, these new results justify the hopes of the pioneers of the method. The literature on probability in dynamical systems, which goes back almost one hundred years, has been unjustly forgotten. It forms an important chapter in the foundations of probability, and includes technical and interpretative insights of great relevance. It may be surprising to find out that probabilities in games of chance which are supposed to be mechanical systems are given an objective interpretation. Nowadays many a philosopher of probability advocates the view that a theory of genuine chance phenomena has to be based on a quantum-mechanically motivated notion of indeterminism. The possibility of 'a causal theory of probability' shows that this propensity point of view, whenever it is applied on a macroscopic level, can be rivalled by a view in which a non-classical notion of indeterminism is not needed. The literature on the method of arbitrary functions and ergodic theory contains studies of specific systems with given equations of motion. Beginning with the work of Ya. Sinai in 1963, the existence of dynamical systems which are measure-theoretically isomorphic to certain stochastic processes has been established. These rigorous results of mathematical physics, and their relevance for the interpretation of probability, have so far been left largely unnoticed by philosophers. Now
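The method of arbitrary functions can be illustrated with the classic spinning-wheel example: for a long spin, the stopping angle is nearly uniform for essentially any smooth density of the initial velocity (all parameters hypothetical):

```python
import math
import random

# A wheel spun with random initial angular velocity v stops at angle
# (v * T) mod 2*pi. For large T the stopping angle is nearly uniform
# for (almost) any smooth density of v; here v is normal with
# arbitrary-looking, purely illustrative parameters.
rng = random.Random(5)
T = 200.0
angles = [(rng.gauss(7.3, 1.9) * T) % (2 * math.pi) for _ in range(100000)]

# Compare the empirical frequency over 8 equal sectors with 1/8.
counts = [0] * 8
for a in angles:
    counts[min(7, int(a / (2 * math.pi / 8)))] += 1
freqs = [c / len(angles) for c in counts]
print(freqs)  # each entry near 0.125
```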