Showing papers on "Probability distribution published in 1994"


Journal ArticleDOI
TL;DR: In this article, a new method for inferring risk-neutral probabilities (or state-contingent prices) from the simultaneously observed prices of European options is developed. Unlike the standard binomial option pricing model, which implies a limiting risk-neutral lognormal distribution for the underlying asset, the method accommodates arbitrary ending risk-neutral probability distributions.
Abstract: This article develops a new method for inferring risk-neutral probabilities (or state-contingent prices) from the simultaneously observed prices of European options. These probabilities are then used to infer a unique fully specified recombining binomial tree that is consistent with these probabilities (and, hence, consistent with all the observed option prices). A simple backwards recursive procedure solves for the entire tree. From the standpoint of the standard binomial option pricing model, which implies a limiting risk-neutral lognormal distribution for the underlying asset, the approach here provides the natural (and probably the simplest) way to generalize to arbitrary ending risk-neutral probability distributions.
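The backward recursion is simple enough to sketch. Below is a minimal, hypothetical illustration of the idea: given ending risk-neutral nodal probabilities and prices, and assuming every path into an ending node is equally likely, the whole recombining tree follows by stepping backwards. The function name, argument conventions, and constant per-step riskless return are our assumptions, not the article's notation.

```python
from math import comb

def implied_binomial_tree(terminal_prices, terminal_probs, r_step):
    """Recover a recombining binomial tree from ending risk-neutral
    probabilities by backward recursion (sketch).

    terminal_prices -- ascending asset prices at the n+1 ending nodes
    terminal_probs  -- risk-neutral nodal probabilities (sum to 1)
    r_step          -- one plus the riskless rate per step
    """
    n = len(terminal_prices) - 1
    # Probability of any single path into ending node j, assuming all
    # paths into a node are equally likely.
    P = [q / comb(n, j) for j, q in enumerate(terminal_probs)]
    S = list(terminal_prices)
    levels = [S[:]]
    for m in range(n, 0, -1):
        P_new = [P[j] + P[j + 1] for j in range(m)]      # P = P' + P''
        S_new = [(P[j] * S[j] + P[j + 1] * S[j + 1]) / (P_new[j] * r_step)
                 for j in range(m)]                      # discounted mean
        P, S = P_new, S_new
        levels.append(S[:])
    return levels[::-1]      # levels[0] holds the root price
```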

1,858 citations


Proceedings ArticleDOI
01 Jan 1994
TL;DR: In this article, the authors estimate the mean and variance of the probability distribution of the target as a function of the input, given an assumed target error-distribution model through the activation of an auxiliary output unit, which provides a measure of the uncertainty of the usual network output for each input pattern.
Abstract: Introduces a method that estimates the mean and the variance of the probability distribution of the target as a function of the input, given an assumed target error-distribution model. Through the activation of an auxiliary output unit, this method provides a measure of the uncertainty of the usual network output for each input pattern. The authors derive the cost function and weight-update equations for the example of a Gaussian target error distribution, and demonstrate the feasibility of the network on a synthetic problem where the true input-dependent noise level is known.
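For a Gaussian target error model, the cost being minimized reduces to a short expression; the sketch below is our illustration, with the auxiliary output parameterized as a log-variance (a common positivity trick, and an assumption on our part).

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Per-pattern cost under a Gaussian target-error model: mu(x) is the
    usual network output and log_var(x) comes from the auxiliary unit.
    Minimizing the sum of these terms over patterns is maximum likelihood
    for the assumed noise model."""
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var))
```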

579 citations


Book
31 Oct 1994
TL;DR: In this paper, the Gauss and Feynman distributions on infinite-dimensional spaces over non-Archimedean fields are presented, and p-adic valued probability distributions (generalized functions) are discussed.
Abstract: Introduction. I. First Steps to Non-Archimedean Fields. II. The Gauss, Lebesgue and Feynman Distributions over Non-Archimedean Fields. III. The Gauss and Feynman Distributions on Infinite-Dimensional Spaces over Non-Archimedean Fields. IV. Quantum Mechanics for Non-Archimedean Wave Functions. V. Functional Integrals and the Quantization of Non-Archimedean Models with an Infinite Number of Degrees of Freedom. VI. The p-Adic-Valued Probability Measures. VII. Statistical Stabilization with Respect to p-Adic and Real Metrics. VIII. The p-Adic Valued Probability Distributions (Generalized Functions). IX. p-Adic Superanalysis. Bibliographical Remarks. Open Problems. Appendix: 1. Expansion of Numbers on a Given Scale. 2. An Analogue of Newton's Method. 3. Non-Existence of Differential Maps from Qp to R. Bibliography. Index.

400 citations


Proceedings ArticleDOI
23 May 1994
TL;DR: A new model of learning probability distributions from independent draws is introduced, inspired by the popular Probably Approximately Correct (PAC) model for learning boolean functions from labeled examples, in the sense that it emphasizes efficient and approximate learning, and it studies the learnability of restricted classes of target distributions.
Abstract: We introduce and investigate a new model of learning probability distributions from independent draws. Our model is inspired by the popular Probably Approximately Correct (PAC) model for learning boolean functions from labeled examples [24], in the sense that we emphasize efficient and approximate learning, and we study the learnability of restricted classes of target distributions. The distribution classes we examine are often defined by some simple computational mechanism for transforming a truly random string of input bits (which is not visible to the learning algorithm) into the stochastic observation (output) seen by the learning algorithm. In this paper, we concentrate on discrete distributions over {0,1}^n. The problem of inferring an approximation to an unknown probability distribution on the basis of independent draws has a long and complex history in the pattern recognition and statistics literature. For instance, the problem of estimating the parameters of a Gaussian density in high-dimensional space is one of the most studied statistical problems. Distribution learning problems have often been investigated in the context of unsupervised learning, in which a linear mixture of two or more distributions is generating the observations, and the final goal is not to model the distributions themselves, but to predict from which distribution each observation was drawn. Data clustering methods are a common tool here. There is also a large literature on nonparametric density estimation, in which no assumptions are made on the unknown target density. Nearest-neighbor approaches to the unsupervised learning problem often arise in the nonparametric setting. While we obviously cannot do justice to these areas here, the books of Duda and Hart [9] and Vapnik [25] provide excellent overviews and introductions to the pattern recognition work, as well as many pointers for further reading. See also Izenman's recent survey article [16]. Roughly speaking, our work departs from the traditional statistical and pattern recognition approaches in two ways. First, we place explicit emphasis on the computational complexity of distribution learning. It seems fair to say that while previous research has provided an excellent understanding of the information-theoretic issues involved in distribution learning...
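A toy instance of the generation model may help: a hidden, truly random bit string is pushed through a simple transformation to produce the observed draw. The XOR mechanism below is purely our illustrative choice.

```python
import random

def draw(n=8, rng=random.Random(0)):
    """Sample from a distribution over {0,1}^n defined by a simple
    computational mechanism: the hidden bits are invisible to the
    learner, which sees only the transformed output."""
    hidden = [rng.getrandbits(1) for _ in range(n + 1)]
    return tuple(hidden[i] ^ hidden[i + 1] for i in range(n))
```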

339 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of detecting radar targets against a background of coherent, correlated, non-Gaussian clutter is studied with a two-step procedure, where in the first step, the structure of the amplitude and the multivariate probability density functions (pdfs) describing the statistical properties of the clutter is derived.
Abstract: The problem of detecting radar targets against a background of coherent, correlated, non-Gaussian clutter is studied with a two-step procedure. In the first step, the structure of the amplitude and the multivariate probability density functions (pdfs) describing the statistical properties of the clutter is derived. The starting point for this derivation is the basic scattering problem, and the statistics are obtained from an extension of the central limit theorem (CLT). This extension leads to modeling the clutter amplitude statistics by a mixture of Rayleigh distributions. The end product of the first step is a multidimensional pdf in the form of a Gaussian mixture, which is then used in step 2. The aim of step 2 is to derive both the optimal and a suboptimal detection structure for detecting radar targets in this type of clutter. Some performance results for the new detection processor are also given.
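Step 1's amplitude model is easy to sample from; the following sketch draws clutter amplitudes from a Rayleigh mixture (the weights and scales are illustrative inputs, not values from the paper).

```python
import numpy as np

def rayleigh_mixture(weights, scales, size, rng=np.random.default_rng(0)):
    """Draw clutter amplitudes from a mixture of Rayleigh distributions:
    pick a mixture component for each sample, then draw from that
    component's Rayleigh distribution."""
    comp = rng.choice(len(weights), size=size, p=weights)
    return rng.rayleigh(scale=np.asarray(scales)[comp])
```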

255 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method for estimating substitution rates for DNA sequence data using likelihood techniques using a recursion satisfied by the sampling probabilities to construct a Markov chain with absorbing states in such a way that the required sampling distribution is the mean of a functional of the process up to the absorption time.

247 citations


Journal ArticleDOI
TL;DR: In this article, a detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out, highlighting important similarities and differences between the data and the random cascade theory.
Abstract: Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets.
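For intuition, a discrete random cascade with a one-parameter generator can be simulated in a few lines; the lognormal generator below is our illustrative choice (the paper estimates the generator's distribution from radar data).

```python
import numpy as np

def random_cascade(levels, sigma, rng=np.random.default_rng(0)):
    """Discrete 2-D multiplicative cascade: each cell is split 2x2 at
    every level and multiplied by an iid generator W with E[W] = 1.
    The single parameter sigma controls the intermittency."""
    field = np.ones((1, 1))
    for _ in range(levels):
        field = np.kron(field, np.ones((2, 2)))   # refine the grid
        W = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=field.shape)
        field = field * W                          # apply the generator
    return field
```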

225 citations


Journal ArticleDOI
TL;DR: Statistical properties of mobile-to-mobile land communication channels have been developed, including the level-crossing rate and duration of fades of the envelope, the probability distribution of random FM, and the expected number of crossings of the random phase and random FM of the channel.
Abstract: Statistical properties of mobile-to-mobile land communication channels have been developed. In particular, the level-crossing rate and duration of fades of the envelope, the probability distribution of random FM, the expected number of crossings of the random phase and random FM of the channel, and the power spectrum of random FM of the channel have been considered.

210 citations


Book
01 Nov 1994
TL;DR: In this article, asymptotic approximations are derived in the original space of the random variables, together with simple formulas for the sensitivity of the failure probability to changes in the distribution parameters.
Abstract: This paper considers the asymptotic evaluation of probability integrals. The usual methods require that all random variables are transformed into standard normal variables. The method described here does not use such transformations. Asymptotic approximations are derived in the original space of the random variables. In this way it is also possible to obtain simple formulas for the sensitivity of the failure probability to changes in the distribution parameters.

196 citations


Journal ArticleDOI
TL;DR: The imbedding of the scale similarity of random fields into the theory of infinitely divisible probability distributions is considered and the general probability distribution for the breakdown coefficients of turbulent energy dissipation is obtained along with corresponding similarity exponents.
Abstract: The imbedding of the scale similarity of random fields into the theory of infinitely divisible probability distributions is considered. The general probability distribution for the breakdown coefficients of turbulent energy dissipation is obtained along with corresponding similarity exponents. Related issues of self-similarity and asymptotic behavior of statistical characteristics are also considered.

183 citations


Journal ArticleDOI
TL;DR: A deterministic fluid model and two stochastic traffic models for wireless networks are presented, together with numerical examples illustrating how the models can be used to investigate various aspects of time and space dynamics in wireless networks.
Abstract: Introduces a deterministic fluid model and two stochastic traffic models for wireless networks. The setting is a highway with multiple entrances and exits. Vehicles are classified as calling or noncalling, depending upon whether or not they have calls in progress. The main interest is in the calling vehicles; but noncalling vehicles are important because they can become calling vehicles if they initiate (place or receive) a call. The deterministic model ignores the behavior of individual vehicles and treats them as a continuous fluid, whereas the stochastic traffic models consider the random behavior of each vehicle. However, all three models use the same two coupled partial differential equations (PDEs) or ordinary differential equations (ODEs) to describe the evolution of the system. The call density and call handoff rate (or their expected values in the stochastic models) are readily computable by solving these equations. Since no capacity constraints are imposed in the models, these computed quantities can be regarded as offered traffic loads. The models complement each other, because the fluid model can be extended to include additional features such as capacity constraints and the interdependence between velocity and vehicular density, while the stochastic traffic model can provide probability distributions. Numerical examples are presented to illustrate how the models can be used to investigate various aspects of time and space dynamics in wireless networks.
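As a rough sketch of the fluid view, one explicit upwind time step for a pair of coupled advection equations is shown below; the exact PDEs in the paper also handle entrances, exits, and space-dependent rates, so this simplified form is our assumption.

```python
import numpy as np

# Assumed simplified form (not the paper's exact equations):
#   d(rho)/dt + v d(rho)/dx = lam*sig - mu*rho   (calling density)
#   d(sig)/dt + v d(sig)/dx = mu*rho - lam*sig   (noncalling density)
def step(rho, sig, v, lam, mu, dx, dt):
    """One first-order upwind time step; the left boundary cell is held
    fixed as an inflow condition."""
    adv_r = np.zeros_like(rho)
    adv_s = np.zeros_like(sig)
    adv_r[1:] = -v * (rho[1:] - rho[:-1]) / dx   # upwind advection
    adv_s[1:] = -v * (sig[1:] - sig[:-1]) / dx
    return (rho + dt * (adv_r + lam * sig - mu * rho),
            sig + dt * (adv_s + mu * rho - lam * sig))
```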

Journal ArticleDOI
Jonathan R. M. Hosking
TL;DR: Some of the properties of the four-parameter kappa distribution are described, and an example in which it is applied to modeling the distribution of annual maximum precipitation data is given.
Abstract: Many common probability distributions, including some that have attracted recent interest for flood-frequency analysis, may be regarded as special cases of a four-parameter distribution that generalizes the three-parameter kappa distribution of P.W. Mielke. This four-parameter kappa distribution can be fitted to experimental data or used as a source of artificial data in simulation studies. This paper describes some of the properties of the four-parameter kappa distribution, and gives an example in which it is applied to modeling the distribution of annual maximum precipitation data.
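For reference, a sketch of the quantile function in what we take to be the standard four-parameter form (valid for nonzero shape parameters; the limits k → 0 and h → 0 recover familiar three-parameter cases such as the GEV):

```python
def kappa_quantile(F, xi, alpha, k, h):
    """Quantile function of the four-parameter kappa distribution,
        x(F) = xi + (alpha / k) * (1 - ((1 - F**h) / h) ** k),
    for k != 0 and h != 0. For example, h = 1 gives the generalized
    Pareto quantile function."""
    return xi + (alpha / k) * (1.0 - ((1.0 - F ** h) / h) ** k)
```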

Journal ArticleDOI
TL;DR: The authors employ a stochastic model developed by G. Udny Yule and Herbert A. Simon as the probability mechanism underlying the consumer's choice of artistic products and predict that artistic outputs will be concentrated among a few lucky individuals.
Abstract: This study employs a stochastic model developed by G. Udny Yule and Herbert A. Simon as the probability mechanism underlying the consumer's choice of artistic products and predicts that artistic outputs will be concentrated among a few lucky individuals. We find that the probability distribution implied by the stochastic model provides an excellent description of the empirical data in the popular music industry, suggesting that the stochastic model may represent the process generating the superstar phenomenon. Because the stochastic model does not require differential talents among individuals, our empirical results support the notion that the superstar phenomenon could exist among individuals with equal talent. Copyright 1994 by MIT Press.
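The Yule-Simon mechanism itself is a few lines of simulation; the sketch below (names ours) shows how output concentrates on a few lucky individuals with no talent differences at all.

```python
import random
from collections import Counter

def simon_process(n, alpha, rng=random.Random(0)):
    """Simon's urn scheme: each new purchase founds a brand-new artist
    with probability alpha, otherwise it goes to a past artist chosen in
    proportion to prior sales (drawing uniformly from the list of past
    purchases does exactly that)."""
    purchases = [0]                        # first purchase founds artist 0
    for _ in range(n - 1):
        if rng.random() < alpha:
            purchases.append(max(purchases) + 1)   # new artist
        else:
            purchases.append(rng.choice(purchases))
    return Counter(purchases)              # sales per artist
```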

Journal ArticleDOI
TL;DR: In this article, the authors relax the assumption that the cumulative distribution function of the lead time demand is completely known and merely assume that the first two moments of F are known and finite.
Abstract: Stochastic inventory models, such as continuous review models and periodic review models, require information on the lead time demand. However, information about the form of the probability distribution of the lead time demand is often limited in practice. We relax the assumption that the cumulative distribution function, say F, of the lead time demand is completely known and merely assume that the first two moments of F are known and finite. The minmax distribution free approach for the inventory model consists of finding the most unfavourable distribution for each decision variable and then minimizing over the decision variable. We solve both the continuous review model and the periodic review model with a mixture of backorders and lost sales using the minmax distribution free approach.
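The key device in the minmax distribution free approach is a worst-case bound over all distributions with given mean and variance; a sketch of the classic Scarf-type shortage bound follows (the models then minimize cost against such bounds).

```python
import math

def worst_case_shortage(q, mu, sigma):
    """Largest possible expected shortage E[(D - q)+] over all lead time
    demand distributions with mean mu and standard deviation sigma:
        ( sqrt(sigma^2 + (q - mu)^2) - (q - mu) ) / 2."""
    return (math.sqrt(sigma ** 2 + (q - mu) ** 2) - (q - mu)) / 2.0
```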

Journal ArticleDOI
01 Jan 1994
TL;DR: In this article, the authors consider the problem of tracking a subset of a domain (called the target) which changes gradually over time and evaluate algorithms based on how much movement of the target can be tolerated between examples while predicting with accuracy.
Abstract: In this paper we consider the problem of tracking a subset of a domain (called the target) which changes gradually over time. A single (unknown) probability distribution over the domain is used to generate random examples for the learning algorithm and to measure the speed at which the target changes. Clearly, the more rapidly the target moves, the harder it is for the algorithm to maintain a good approximation of the target. Therefore we evaluate algorithms based on how much movement of the target can be tolerated between examples while predicting with accuracy ε. Furthermore, the complexity of the class H of possible targets, as measured by its VC-dimension d, also affects the difficulty of tracking the target concept. We show that if the problem of minimizing the number of disagreements with a sample from among concepts in a class H can be approximated to within a factor k, then there is a simple tracking algorithm for H which can achieve a probability ε of making a mistake if the target movement rate is at most a constant times ε²/(k(d+k) ln(1/ε)), where d is the Vapnik-Chervonenkis dimension of H. Also, we show that if H is properly PAC-learnable, then there is an efficient (randomized) algorithm that with high probability approximately minimizes disagreements to within a factor of 7d + 1, yielding an efficient tracking algorithm for H which tolerates drift rates up to a constant times ε²/(d² ln(1/ε)). In addition, we prove complementary results for the classes of halfspaces and axis-aligned hyperrectangles showing that the maximum rate of drift that any algorithm (even with unlimited computational power) can tolerate is a constant times ε²/d.

Journal ArticleDOI
TL;DR: In this article, the probability distributions of field differences were calculated from time series of Helios data obtained in 1976 at heliocentric distances near 0.3 AU, and the relevance of these observations to the interpretation and understanding of the nature of solar wind magnetohydrodynamic (MHD) turbulence was pointed out.
Abstract: The probability distributions of field differences δx(τ) = x(t+τ) − x(t), where the variable x(t) may denote any solar wind scalar field or vector field component at time t, have been calculated from time series of Helios data obtained in 1976 at heliocentric distances near 0.3 AU. It is found that for comparatively long time lags τ, ranging from a few hours to 1 day, the differences are normally distributed according to a Gaussian. For shorter time lags, of less than ten minutes, significant changes in shape are observed. The distributions are often spikier and narrower than the equivalent Gaussian distribution with the same standard deviation, and they are enhanced for large, reduced for intermediate and enhanced for very small values of δx. This result is in accordance with fluid observations and numerical simulations. Hence statistical properties are dominated at small scale by large fluctuation amplitudes that are sparsely distributed, which is direct evidence for spatial intermittency of the fluctuations. This is in agreement with results from earlier analyses of the structure functions of δx. The non-Gaussian features are differently developed for the various types of fluctuations. The relevance of these observations to the interpretation and understanding of the nature of solar wind magnetohydrodynamic (MHD) turbulence is pointed out, and contact is made with existing theoretical concepts of intermittency in fluid turbulence.
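The analysis reduces to histogramming standardized increments at each lag; a minimal sketch (names ours) follows.

```python
import numpy as np

def increment_pdf(x, lag, bins=101):
    """Empirical PDF of increments dx(tau) = x(t + tau) - x(t) from a
    uniformly sampled series, standardized so that different lags can be
    compared against a unit-variance Gaussian."""
    dx = x[lag:] - x[:-lag]
    dx = (dx - dx.mean()) / dx.std()
    density, edges = np.histogram(dx, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density
```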

Journal ArticleDOI
TL;DR: A simple one-parameter model is developed that can be used to estimate a probability distribution for future projections of energy forecasts and can be applied to any field where a history of forecasting is available.

Journal ArticleDOI
TL;DR: A computer program founded upon several fast, robust numerical procedures based on a number of statistical-estimation methods is presented; it was found that the least-square minimization method provided better-quality fits in general, compared to the other two approaches.
Abstract: Construction operations are subject to a wide variety of fluctuations and interruptions. Varying weather conditions, learning development on repetitive operations, equipment breakdowns, management interference, and other external factors may impact the production process in construction. As a result of such interferences, the behavior of construction processes becomes subject to random variations. This necessitates modeling construction operations as random processes during simulation. Random processes in simulation include activity and processing times, arrival processes (e.g., weather patterns) and disruptions. In the context of construction simulation studies, modeling a random input process is usually performed by selecting and fitting a sufficiently flexible probability distribution to that process based on sample data. To fit a generalized beta distribution in this context, a computer program founded upon several fast, robust numerical procedures based on a number of statistical-estimation methods is presented. In particular, the following methods were derived and implemented: moment matching, maximum likelihood, and least-square minimization. It was found that the least-square minimization method provided better-quality fits in general, compared to the other two approaches. The adopted fitting procedures have been implemented in BetaFit, an interactive, microcomputer-based software package, which is in the public domain. The operation of BetaFit is discussed, and some applications of this package to the simulation of construction projects are presented.
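Of the three estimation methods, moment matching is compact enough to sketch here (this is our illustration, not BetaFit's code):

```python
import numpy as np

def beta_moments_fit(data, a, b):
    """Moment matching for a generalized beta distribution on [a, b]:
    rescale the sample to [0, 1], then solve the first two moment
    equations for the shape parameters."""
    x = (np.asarray(data, dtype=float) - a) / (b - a)
    m, v = x.mean(), x.var()
    c = m * (1.0 - m) / v - 1.0        # common factor from the moments
    return m * c, (1.0 - m) * c        # (alpha, beta)
```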

Journal ArticleDOI
TL;DR: In this article, new analytical results on the statistical distributions of queue lengths and delays at traffic signals are derived from the probability generating function of the queue length, from which the Laplace transform of the delay distribution is obtained, yielding a unified theory for both signalized and unsignalized intersections.
Abstract: Some new analytical results on statistical distributions of queue lengths and delays at traffic signals are derived. For this purpose, the probability generating function of the queue length distribution is developed, from which the Laplace transform of the delay distribution is obtained. A Poisson arrival process and fixed-time control are assumed. Similar techniques have successfully been employed to obtain the queue-length and delay distributions at priority intersections. Thus, a unified theory for both signalized and unsignalized intersections results.

Journal ArticleDOI
TL;DR: In this paper, the authors consider systems of particles hopping stochastically on d-dimensional lattices with space-dependent probabilities and derive duality relations, expressing the time evolution of a given initial configuration in terms of correlation functions of simpler dual processes.
Abstract: We consider systems of particles hopping stochastically on d-dimensional lattices with space-dependent probabilities. We map the master equation onto an evolution equation in a Fock space where the dynamics are given by a quantum Hamiltonian (continuous time) or a transfer matrix (discrete time). Using non-Abelian symmetries of these operators we derive duality relations, expressing the time evolution of a given initial configuration in terms of correlation functions of simpler dual processes. Particularly simple results are obtained for the time evolution of the density profile. As a special case we show that for any SU(2) symmetric system the two-point and three-point density correlation functions in the N-particle steady state can be computed from the probability distribution of a single particle moving in the same environment. We apply our results to various models, among them partial exclusion, a simple diffusion-reaction system, and the two-dimensional six-vertex model with space-dependent vertex weights. For a random distribution of the vertex weights one obtains a version of the random-barrier model describing diffusion of particles in disordered media. We derive exact expressions for the averaged two-point density correlation functions in the presence of weak, correlated disorder.

Journal ArticleDOI
TL;DR: This short communication examines the different forms that have been presented in the literature for the log-normal distribution, properly interprets the parameters that appear in these functions, and provides the appropriate equations required to transform between these different distributions and properly evaluate the appropriate statistical parameters.
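As an example of the transformations such a note collects, converting the mean and variance of the log-normal variate itself into the parameters of the underlying normal is a standard identity (sketch ours):

```python
import math

def normal_params_from_lognormal_moments(mean, var):
    """Given the mean and variance of a log-normal variate, return the
    (mu, sigma) of the underlying normal:
        sigma^2 = ln(1 + var/mean^2),  mu = ln(mean) - sigma^2/2."""
    sigma2 = math.log(1.0 + var / mean ** 2)
    return math.log(mean) - sigma2 / 2.0, math.sqrt(sigma2)
```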

Journal ArticleDOI
TL;DR: In this paper, a family of Lagrangian stochastic models for the joint motion of particle pairs in isotropic homogeneous stationary turbulence is considered, and two constraints are derived which formally require that the correct one-particle statistics are obtained by the models.
Abstract: A family of Lagrangian stochastic models for the joint motion of particle pairs in isotropic homogeneous stationary turbulence is considered. The Markov assumption and well-mixed criterion of Thomson (1990) are used, and the models have quadratic-form functions of velocity for the particle accelerations. Two constraints are derived which formally require that the correct one-particle statistics are obtained by the models. These constraints involve the Eulerian expectation of the ‘acceleration’ of a fluid particle with conditioned instantaneous velocity, given either at the particle, or at some other particle's position. The Navier-Stokes equations, with Gaussian Eulerian probability distributions, are shown to give quadratic-form conditional accelerations, and models which satisfy these two constraints are found. Dispersion calculations show that the constraints do not always guarantee good one-particle statistics, but it is possible to select a constrained model that does. Thomson's model has good one-particle statistics, but is shown to have unphysical conditional accelerations. Comparisons of relative dispersion for the models are made.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of identifying the probability distributions of real-valued random variables based on some Funcions of them, and they show that some models for Reliability and Survival Analysis can be found in some Econometric Models.
Abstract: Introduction. Identifiability of Probability Distributions of Real-Valued Random Variables Based on Some Functions of Them. Identifiability of Probability Measures on Abstract Spaces. Identifiability for Some Types of Stochastic Processes. Generalized Convolutions. Identifiability in Some Econometric Models. Identifiability in Some Models for Reliability and Survival Analysis. Identifiability for Mixtures of Distributions. Chapter References. Index.

Journal ArticleDOI
TL;DR: In this article, several conditions are established under which a family of elliptical probability density functions possesses a desirable consistency property, which ensures that any marginal distribution of a random vector whose distribution belongs to a specific elliptical family also belongs to that family.

Proceedings Article
01 Aug 1994
TL;DR: A probabilistic semantics for planning under uncertainty is described, and a fully implemented algorithm that generates plans that succeed with probability no less than a user-supplied probability threshold is presented.
Abstract: We define the probabilistic planning problem in terms of a probability distribution over initial world states, a boolean combination of goal propositions, a probability threshold, and actions whose effects depend on the execution-time state of the world and on random chance. Adopting a probabilistic model complicates the definition of plan success: instead of demanding a plan that provably achieves the goal, we seek plans whose probability of success exceeds the threshold. This paper describes a probabilistic semantics for planning under uncertainty, and presents a fully implemented algorithm that generates plans that succeed with probability no less than a user-supplied probability threshold. The algorithm is sound (if it terminates then the generated plan is sufficiently likely to achieve the goal) and complete (the algorithm will generate a solution if one exists).
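The assessment half of the problem can be sketched compactly: push the state distribution through each action's stochastic effects and compare the goal mass with the threshold. The representation choices below are ours; the paper's algorithm also constructs the plan.

```python
def assess(plan, state_dist, goal, threshold):
    """Probabilistic plan assessment sketch. state_dist maps hashable
    world states to probabilities; each action maps a state to a list of
    (probability, next_state) outcomes; goal is a predicate on states."""
    for action in plan:
        new_dist = {}
        for state, p in state_dist.items():
            for q, nxt in action(state):
                new_dist[nxt] = new_dist.get(nxt, 0.0) + p * q
        state_dist = new_dist
    # Plan succeeds if the goal mass reaches the user-supplied threshold.
    return sum(p for s, p in state_dist.items() if goal(s)) >= threshold
```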

Journal ArticleDOI
TL;DR: In this article, the authors examined the probability density functions (PDFs) of the strain-rate tensor eigenvalues and found that the accepted normalization used to bound the intermediate eigenvalue between ±1 leads to a PDF that must vanish at the end points for a non-singular distribution of strain states.
Abstract: Probability density functions (PDFs) of the strain-rate tensor eigenvalues are examined. It is found that the accepted normalization used to bound the intermediate eigenvalue between ±1 leads to a PDF that must vanish at the end points for a non-singular distribution of strain states. This purely kinematic constraint has led previous investigators to conclude incorrectly that locally axisymmetric deformations do not exist in turbulent flows. An alternative normalization is presented that does not bias the probability distribution near the axisymmetric limits. This alternative normalization is shown to lead to the expected flat PDF in a Gaussian velocity field and to a PDF that indicates the presence of axisymmetric strain states in a turbulent field. Extension of the new measure to compressible flow is discussed. Several earlier results concerning the likelihood of various strain states and the correlation of these with elevated kinetic energy dissipation rate are reinterpreted in terms of the new normalization.
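A sketch of what we take the alternative normalization to be (an assumption on our part; the constant makes the axisymmetric limits land at ±1):

```python
import numpy as np

def s_star(eigvals):
    """For strain-rate eigenvalues (a, b, g) with a + b + g = 0
    (incompressible flow), the normalized strain-state measure
        s* = -3*sqrt(6) * a*b*g / (a*a + b*b + g*g)**1.5
    equals +1 for axisymmetric extension, -1 for axisymmetric
    compression, and 0 for plane shear."""
    a, b, g = eigvals
    return -3.0 * np.sqrt(6.0) * a * b * g / (a*a + b*b + g*g) ** 1.5

# e.g. s_star((1.0, 1.0, -2.0)) == 1.0  (axisymmetric extension)
```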

Journal ArticleDOI
TL;DR: In this paper, it is shown that entropy calculations are seriously affected by systematic errors due to the finite size of the samples, and that these difficulties can be dealt with by assuming simple probability distributions underlying the generating process (e.g. equidistribution, power-law distribution, exponential distribution).
Abstract: This paper is devoted to the statistical analysis of symbol sequences, such as Markov strings, DNA sequences, or texts from natural languages. It is shown that entropy calculations are seriously affected by systematic errors due to the finite size of the samples. These difficulties can be dealt with by assuming simple probability distributions underlying the generating process (e.g. equidistribution, power-law distribution, exponential distribution). Analytical expressions for the dominant correction terms are derived and tested.
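As a concrete illustration of a finite-size effect, the classic leading-order (Miller-Madow) term can be bolted onto the plug-in entropy estimate; the paper's correction terms, which depend on the assumed underlying distribution, go beyond this.

```python
import math
from collections import Counter

def entropy_bits(symbols, corrected=True):
    """Plug-in entropy of a symbol sequence in bits, optionally adding
    the leading-order finite-sample correction (M - 1)/(2 N ln 2), where
    M is the number of distinct observed symbols and N the sample size."""
    N = len(symbols)
    counts = Counter(symbols)
    h = -sum(c / N * math.log2(c / N) for c in counts.values())
    if corrected:
        h += (len(counts) - 1) / (2.0 * N * math.log(2))
    return h
```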

Journal ArticleDOI
TL;DR: In this article, the Poisson model for rainfall occurrences in which storm intensity and duration are represented by two independent random variables is extended to treat intensity and duration as bivariate random variables, each with a marginal exponential distribution, and a numerical optimization method using annual maxima is adopted for parameter estimation.

Book ChapterDOI
01 Jan 1994
TL;DR: In this paper, the authors focus on sequentially assigning treatment levels to subjects in a manner that describes a random walk, with transition probabilities that depend on the prior response as well as the prior treatment.
Abstract: Quantile estimation is an important problem in many areas of application, such as toxicology, item response analysis, and material stress analysis. In these experiments, a treatment or stimulus is given or a stress is applied at a finite number of levels or dosages, and the number of responses at each level is observed. This paper focuses on sequentially assigning treatment levels to subjects in a manner that describes a random walk, with transition probabilities that depend on the prior response as well as the prior treatment. Criteria are given for random walk rules such that resulting stationary treatment distributions will center around an unknown, but prespecified quantile. It is shown how, when a parametric form for the response function is assumed, the stationary treatment distribution may be further characterized. Using the logistic response function as an example, a mechanism for generating new discrete probability distribution functions is revealed. In this example, three different estimates of the unknown quantile arise naturally.
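One member of this family of rules is easy to state in code; the biased-coin sketch below (our construction of a random walk rule for a target quantile γ ≤ 0.5) steps down after a response and up with probability γ/(1−γ) after a non-response.

```python
import random

def biased_coin_walk(levels, response_prob, gamma, n, rng=random.Random(0)):
    """Sequentially assign treatment levels as a random walk whose
    stationary distribution centers near the level with response
    probability gamma. response_prob maps a level to its (unknown in
    practice) response probability; here it is supplied for simulation."""
    b = gamma / (1.0 - gamma)
    i, assigned = len(levels) // 2, []
    for _ in range(n):
        assigned.append(levels[i])
        if rng.random() < response_prob(levels[i]):
            i = max(i - 1, 0)                      # response: step down
        elif rng.random() < b:
            i = min(i + 1, len(levels) - 1)        # biased coin: step up
    return assigned
```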

Journal ArticleDOI
TL;DR: In this paper, the authors present a methodology to develop the customer's structure function and to calculate the probability of each system state, based on which the customer defines and evaluates reliability.
Abstract: Reliability models for multistate coherent systems require customer interaction. The customer defines the distinctive component and system states. Knowing the component states, the authors estimate the probability distribution for each component. The customer specifies when a change in the state of any component forces a change in the state of the system. From this, the authors present a methodology to develop the customer's structure function and to calculate the probability of each system state. The customer defines and evaluates reliability. Using the customer's definition, one can summarize the probability distribution of the system.
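With independent components (an assumption for this sketch), the system-state probabilities follow from the component distributions and the customer's structure function by direct enumeration:

```python
from itertools import product

def system_state_probs(component_dists, phi):
    """component_dists is a list of dicts mapping each component's states
    to probabilities; phi is the customer's structure function, mapping a
    component-state vector to a system state. Enumerate all vectors,
    multiply the (independent) component probabilities, and accumulate
    the probability of each system state."""
    out = {}
    for combo in product(*(d.items() for d in component_dists)):
        states = tuple(s for s, _ in combo)
        p = 1.0
        for _, q in combo:
            p *= q
        sys_state = phi(states)
        out[sys_state] = out.get(sys_state, 0.0) + p
    return out
```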