
Showing papers on "Probability distribution published in 1993"


Journal ArticleDOI
TL;DR: In this article, a polynomial approximation of the actual limit states is used in reliability analysis, providing closed-form mechanical models that predict the behavior of complex structural systems and reduce the number of analyses required.

694 citations


Journal ArticleDOI
TL;DR: The product partition model as discussed by the authors assumes that the probability of any partition is proportional to a product of prior cohesions, one for each block in the partition, and that, given the blocks, the parameters in different blocks have independent prior distributions.
Abstract: A sequence of observations undergoes sudden changes at unknown times. We model the process by supposing that there is an underlying sequence of parameters partitioned into contiguous blocks of equal parameter values; the beginning of each block is said to be a change point. Observations are then assumed to be independent in different blocks given the sequence of parameters. In a Bayesian analysis it is necessary to give probability distributions to both the change points and the parameters. We use product partition models (Barry and Hartigan 1992), which assume that the probability of any partition is proportional to a product of prior cohesions, one for each block in the partition, and that given the blocks the parameters in different blocks have independent prior distributions. Given the observations a new product partition model holds, with posterior cohesions for the blocks and new independent block posterior distributions for parameters. The product model thus provides a convenient machinery for allo...
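As a rough illustration of the prior just described, the following sketch computes the (unnormalized) prior probability of a partition into contiguous blocks as a product of one cohesion per block; the geometric-in-length cohesion used here is only an assumed example, not the specific cohesion family of the paper.

import numpy as np

def prior_cohesion(block_len, p=0.2):
    # Illustrative cohesion: geometric in block length (an assumption,
    # not the cohesions actually used by Barry and Hartigan).
    return p * (1.0 - p) ** (block_len - 1)

def partition_prior(change_points, n):
    # change_points: sorted indices where new blocks begin (always includes 0).
    # The prior probability of the partition is proportional to the product
    # of one cohesion per contiguous block.
    bounds = list(change_points) + [n]
    return np.prod([prior_cohesion(bounds[i + 1] - bounds[i])
                    for i in range(len(bounds) - 1)])

# Example: a sequence of length 10 split into blocks [0..3] and [4..9].
print(partition_prior([0, 4], n=10))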

624 citations


Journal ArticleDOI
TL;DR: The authors show the existence of effective bandwidths for multiclass Markov fluids and other types of sources that are used to model ATM traffic and show that when such sources share a buffer with deterministic service rate, a constraint on the tail of the buffer occupancy distribution is a linear constraint on the number of sources.
Abstract: The authors show the existence of effective bandwidths for multiclass Markov fluids and other types of sources that are used to model ATM traffic. More precisely, it is shown that when such sources share a buffer with deterministic service rate, a constraint on the tail of the buffer occupancy distribution is a linear constraint on the number of sources. That is, for a small loss probability one can assume that each source transmits at a fixed rate called its effective bandwidth. When traffic parameters are known, effective bandwidths can be calculated and may be used to obtain a circuit-switched style call acceptance and routing algorithm for ATM networks. The important feature of the effective bandwidth of a source is that it is a characteristic of that source and the acceptable loss probability only. Thus, the effective bandwidth of a source does not depend on the number of sources sharing the buffer or the model parameters of other types of sources sharing the buffer.
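A minimal admission-control sketch of the linearity described above: each source type contributes a fixed effective bandwidth, and a traffic mix satisfies the tail constraint when the weighted sum of counts stays below the service rate. The numeric bandwidths and rates are invented for illustration.

def admissible(counts, effective_bw, service_rate):
    """counts[k]: number of sources of type k; effective_bw[k]: its effective
    bandwidth for the target loss probability. The tail constraint reduces to
    a single linear constraint on the counts."""
    load = sum(n * a for n, a in zip(counts, effective_bw))
    return load <= service_rate

# Two hypothetical traffic classes sharing a 150 Mb/s buffered link.
print(admissible(counts=[40, 10], effective_bw=[1.8, 6.5], service_rate=150.0))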

592 citations


Journal ArticleDOI
TL;DR: Variable-rate data transmission schemes in which constellation points are selected according to a nonuniform probability distribution are studied and prefix codes can be designed that approach the optimal performance.
Abstract: Variable-rate data transmission schemes in which constellation points are selected according to a nonuniform probability distribution are studied. When the criterion is one of minimizing the average transmitted energy for a given average bit rate, the best possible distribution with which to select constellation points is a Maxwell-Boltzmann distribution. In principle, when constellation points are selected according to a Maxwell-Boltzmann distribution, the ultimate shaping gain (πe/6, or 1.53 dB) can be achieved in any dimension. Nonuniform signaling schemes can be designed by mapping simple variable-length prefix codes onto the constellation. Using the Huffman procedure, prefix codes can be designed that approach the optimal performance. These schemes provide a fixed-rate primary channel and a variable-rate secondary channel, and are easily incorporated into standard lattice-type coded modulation schemes.
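A small numerical sketch of the shaping idea for a one-dimensional 8-PAM constellation: selecting points with Maxwell-Boltzmann probabilities p(x) proportional to exp(-λx²) lowers the average energy at a given entropy (rate). The constellation levels and the value of λ are illustrative assumptions; in practice λ would be chosen to hit a target rate.

import numpy as np

points = np.array([-7, -5, -3, -1, 1, 3, 5, 7], dtype=float)  # 8-PAM levels

def mb_distribution(lam):
    # Maxwell-Boltzmann selection probabilities over the constellation.
    w = np.exp(-lam * points ** 2)
    return w / w.sum()

def entropy_bits(p):
    return -np.sum(p * np.log2(p))

def avg_energy(p):
    return np.sum(p * points ** 2)

uniform = np.full(len(points), 1.0 / len(points))
shaped = mb_distribution(lam=0.03)   # illustrative lambda
print(entropy_bits(uniform), avg_energy(uniform))
print(entropy_bits(shaped), avg_energy(shaped))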

478 citations


Book ChapterDOI
01 Jan 1993
TL;DR: The problem of converting possibility measures into probability measures has received attention in the past, though from relatively few scholars; it has roots at least as much in the possibility/probability consistency principle that Zadeh (1978) proposed in the paper founding possibility theory.
Abstract: The problem of converting possibility measures into probability measures has received attention in the past, though from relatively few scholars. This question is philosophically interesting as part of the debate between probability and fuzzy sets. The imbedding of fuzzy sets into random set theory, as done by Goodman and Nguyen (1985) and Wang Peizhuang (1983), among others, has solved this question in principle. However, the conversion problem has roots at least as much in the possibility/probability consistency principle of Zadeh (1978), which he proposed in the paper founding possibility theory.

398 citations


Proceedings ArticleDOI
01 Jan 1993
TL;DR: The limited-independence result implies that a reduced amount of randomness, and weaker sources of it, suffice for randomized algorithms whose analyses use the CH bounds, e.g., the analysis of randomized algorithms for random sampling and oblivious packet routing.
Abstract: Chernoff-Hoeffding bounds are fundamental tools used in bounding the tail probabilities of the sums of bounded and independent random variables. We present a simple technique which gives slightly better bounds than these and which, more importantly, requires only limited independence among the random variables, thereby importing a variety of standard results to the case of limited independence for free. Additional methods are also presented, and the aggregate results are very sharp and provide a better understanding of the proof techniques behind these bounds. They also yield improved bounds for various tail probability distributions and enable improved approximation algorithms for jobshop scheduling. The "limited independence" result implies that weaker sources of randomness are sufficient for randomized algorithms whose analyses use the Chernoff-Hoeffding bounds; further, it leads to algorithms that require a reduced amount of randomness for any analysis which uses the Chernoff-Hoeffding bounds, e.g., the analysis of randomized algorithms for random sampling and oblivious packet routing.
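For orientation, a quick numerical look at the kind of Chernoff-Hoeffding tail bound being sharpened here, using the standard form P(X ≥ (1+δ)μ) ≤ exp(-μδ²/3) for a sum X of independent Bernoulli variables with mean μ and 0 < δ ≤ 1; all parameters below are arbitrary.

import math, random

n, p, delta = 1000, 0.1, 0.5
mu = n * p
bound = math.exp(-mu * delta ** 2 / 3.0)   # standard CH-style bound for 0 < delta <= 1

# Crude empirical check by simulation of the tail probability.
trials = 5000
exceed = sum(
    sum(random.random() < p for _ in range(n)) >= (1 + delta) * mu
    for _ in range(trials)
)
print(f"bound={bound:.4g}  empirical={exceed / trials:.4g}")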

372 citations


Book
13 Dec 1993
TL;DR: Only a library catalogue record is available for this book; the record was created on 2004-09-07 and modified on 2016-08-08, as mentioned in this paper.
Abstract: Keywords: Statistics; Probability. Reference record created on 2004-09-07, modified on 2016-08-08.

350 citations


Journal ArticleDOI
TL;DR: It is shown that a fixed-limit booking policy that maximizes expected revenue can be characterized by a simple set of conditions on the subdifferential of the expected revenue function.
Abstract: This paper addresses the problem of determining optimal booking policies for multiple fare classes that share the same seating pool on one leg of an airline flight when seats are booked in a nested fashion and when lower fare classes book before higher ones. We show that a fixed-limit booking policy that maximizes expected revenue can be characterized by a simple set of conditions on the subdifferential of the expected revenue function. These conditions are appropriate for either the discrete or continuous demand cases. These conditions are further simplified to a set of conditions that relate the probability distributions of demand for the various fare classes to their respective fares. The latter conditions are guaranteed to have a solution when the joint probability distribution of demand is continuous. Characterization of the problem as a series of monotone optimal stopping problems proves optimality of the fixed-limit policy over all admissible policies. A comparison is made of the optimal solutions ...
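A two-fare-class illustration of the kind of condition relating demand distributions to fares: under Littlewood's rule, a special case of the nested-booking setting described, the protection level y* for the high fare satisfies P(D_high > y*) = f_low / f_high. The normal demand model and all numbers below are assumptions made for the example.

from scipy.stats import norm

f_high, f_low = 400.0, 150.0       # fares
mu, sigma = 60.0, 20.0             # high-fare demand, assumed normally distributed
capacity = 120

# Littlewood's rule: protect y* seats for the high fare,
# where P(D_high > y*) = f_low / f_high.
protection = norm.ppf(1.0 - f_low / f_high, loc=mu, scale=sigma)
booking_limit_low = capacity - protection
print(f"protect {protection:.1f} seats; low-fare booking limit {booking_limit_low:.1f}")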

346 citations


Journal ArticleDOI
01 Sep 1993-Ecology
TL;DR: Using data regarding sea-surface temperature, wave-induced hydrodynamic forces, wind speeds, and human life-spans, it is shown that accurate long-term predictions can at times be made from a surprisingly small number of measurements if appropriate care is taken in the application of the statistics.
Abstract: Biostatistics channels ecologists into thinking primarily about the mean and variance of a probability distribution. But many problems of biological interest concern the extremes in a variable (e.g., highest temperature, largest force, longest drought, maximum lifespan) rather than its central tendency. Such extremes are not adequately addressed by standard biostatistics. In these cases an alternative approach, the statistics of extremes, can be of value. In the limit of a large number of measurements, the probability structure of extreme values conforms to a generalized distribution described by three parameters. In practice these parameters are estimated using maximum likelihood techniques. Using this estimate of the probability distribution of extreme values, one can predict the expected time between the imposition of extremes of a given magnitude (a return time) and can place confidence limits on this prediction. Using data regarding sea-surface temperature, wave-induced hydrodynamic forces, wind speeds, and human life-spans, we show that accurate long-term predictions can at times be made from a surprisingly small number of measurements if appropriate care is taken in the application of the statistics. For example, accurate long-term prediction of sea-surface temperatures can be derived from short-term data that are anomalous in that they contain the effects of an extreme El Niño. In the cases of wave-induced forces and wind speeds, the probability distribution of extreme values is similar among years and diverse sites, indicating the possible existence of unifying principles governing these phenomena. Limitations and possible misuse of the method are discussed.
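A brief sketch of the workflow described above: fit the three-parameter generalized extreme value distribution to a sample of maxima by maximum likelihood, then read off the value expected to be exceeded once per return time T as the (1 - 1/T) quantile. The synthetic data below stand in for real annual maxima.

import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic "annual maxima" drawn from a GEV with known parameters.
annual_maxima = genextreme.rvs(c=-0.1, loc=10.0, scale=2.0, size=30, random_state=rng)

# Maximum likelihood fit of the three GEV parameters.
shape, loc, scale = genextreme.fit(annual_maxima)

# Return level for a 50-year return time: the (1 - 1/T) quantile.
T = 50
return_level = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
print(shape, loc, scale, return_level)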

287 citations


Journal ArticleDOI
TL;DR: A method for evaluating the performance of packet switching communication networks under a fixed, session-based, routing strategy is proposed, based on properly bounding the probability distribution functions of the system input processes.
Abstract: A method for evaluating the performance of packet switching communication networks under a fixed, session-based, routing strategy is proposed. The approach is based on properly bounding the probability distribution functions of the system input processes. The suggested bounds, which are decaying exponentials, possess three convenient properties. When the inputs to an isolated network element are all bounded, they result in bounded outputs and assure that the delays and queues in this element have exponentially decaying distributions. In some network settings, bounded inputs result in bounded outputs. Natural traffic processes can be shown to satisfy such bounds. Consequently, this method enables the analysis of various previously intractable setups. Sufficient conditions are provided for the stability of such networks, and upper bounds for the parameters of network performance are derived.

284 citations


Book
01 Jan 1993
TL;DR: This book covers probability theory, stochastic processes, and queueing networks, including BCMP networks and switching/interconnection network models for parallel processing systems, together with exact and approximate algorithms for their analysis.
Abstract: (Each chapter concludes with Exercises.) Preface. 1. Essentials of Probability Theory. Sample space, events and probability. Conditional probability. Independence. 2. Random Variables and Distributions. Probability distribution functions. Discrete random variable. Continuous random variables. Joint random variables. Conditional distributions. Independence and sums. 3. Expected Values and Moments. Expectation. Generating functions and transforms. Asymptotic properties. 4. Stochastic Processes. Random walks. Markov chains. Markov processes. Reversibility. Renewal theory. 5. Queues. Simple Markovian queues. The M/G/1 queue. The G/G/1 queue. 6. Single Class Queueing Networks. Introduction. Open queueing networks. Closed queueing networks. Mean value analysis. Performance measures for the state-dependent case. The flow equivalent server method. 7. Multi-class Queueing Networks. Service time distributions. Types of service centre. Multi-class traffic model. BCMP theorem. Properties and extensions of BCMP networks. Computational algorithms for BCMP networks. Priority disciplines. Quasi-reversibility. 8. Approximate Methods. Decomposition. Fixed point methods. Diffusion approximation. Maximum entropy methods. 9. Time Delays. Time delays in the single server queue. Time delays in networks of queues. Inversion of the Laplace transforms. Approximate methods. 10. Blocking in Queueing Networks. Introduction. Types of blocking. Two finite queues in a closed network. Aggregating Markovian states. BAS blocking. BBS blocking. Repetitive service blocking. 11. Switching Network Models. Telephone networks. Interconnection networks for parallel processing systems. Models of the full crossbar switch. Multi-stage interconnection networks. Models of synchronous MINs. Models of asynchronous MINs. Interconnection networks in a queueing model. Appendix: Outline Solutions. References. Index.
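As a small taste of the material in the "Simple Markovian queues" chapter, a sketch of the standard steady-state M/M/1 measures obtained from the geometric queue-length distribution and Little's law; the arrival and service rates are arbitrary.

def mm1_measures(lam, mu):
    """Steady-state measures of an M/M/1 queue with arrival rate lam
    and service rate mu (requires lam < mu for stability)."""
    rho = lam / mu                     # utilisation
    L = rho / (1.0 - rho)              # mean number in system
    W = L / lam                        # mean time in system (Little's law)
    Wq = W - 1.0 / mu                  # mean waiting time in queue
    Lq = lam * Wq                      # mean queue length
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

print(mm1_measures(lam=8.0, mu=10.0))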

Journal ArticleDOI
TL;DR: In this paper, the finite element method is applied to the solution of the transient Fokker-Planck equation for several often cited nonlinear stochastic systems accurately giving, for the first time, the joint probability density function of the response for a given initial distribution.
Abstract: The finite element method is applied to the solution of the transient Fokker-Planck equation for several often cited nonlinear stochastic systems accurately giving, for the first time, the joint probability density function of the response for a given initial distribution. The method accommodates nonlinearity in both stiffness and damping as well as both additive and multiplicative excitation, although only the former is considered herein. In contrast to the usual approach of directly solving the backward Kolmogorov equation, when appropriate boundary conditions are prescribed, the probability density function associated with the first passage problem can be directly obtained. Standard numerical methods are employed, and results are shown to be highly accurate. Several systems are examined, including linear, Duffing, and Van der Pol oscillators.

Journal ArticleDOI
TL;DR: In this article, a simple renormalization group argument leads to a direct understanding of this universality; it is a consequence of the attractive nature of a Gaussian fixed point.

Journal Article
TL;DR: In this article, the authors formulate the general theory of random walks in continuum, essential for treating a collision rate which depends smoothly upon direction of motion, and also consider a smooth probability distribution of correlations between the directions of motion before and after collisions.
Abstract: We formulate the general theory of random walks in continuum, essential for treating a collision rate which depends smoothly upon direction of motion. We also consider a smooth probability distribution of correlations between the directions of motion before and after collisions, as well as orientational Brownian motion between collisions. These features lead to an effective Smoluchowski equation. Such random walks involving an infinite number of distinct directions of motion cannot be treated on a lattice, which permits only a finite number of directions of motion, nor by Langevin methods, which make no reference to individual collisions. The effective Smoluchowski equation enables a description of the biased random walk of the bacterium Escherichia coli during chemotaxis, its search for food.

Journal ArticleDOI
TL;DR: The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints and is shown that the algorithm is a generalization of the classical Lloyd-Max results.
Abstract: An algorithm is developed for the design of a nonlinear, n-sensor, distributed estimation system subject to communication and computation constraints. The algorithm uses only bivariate probability distributions and yields locally optimal estimators that satisfy the required system constraints. It is shown that the algorithm is a generalization of the classical Lloyd-Max results.
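For reference, the classical Lloyd-Max iteration that the paper generalizes, sketched for a scalar source: alternate between placing thresholds midway between representation levels and moving each level to the centroid of its cell. Here the centroids are estimated from samples of a standard normal source rather than computed analytically.

import numpy as np

def lloyd_max(samples, n_levels=4, iters=100):
    # Initial representation levels spread over the sample range.
    levels = np.linspace(samples.min(), samples.max(), n_levels)
    for _ in range(iters):
        # Thresholds midway between adjacent levels.
        thresholds = 0.5 * (levels[:-1] + levels[1:])
        cells = np.digitize(samples, thresholds)
        # Each level becomes the centroid (conditional mean) of its cell.
        for k in range(n_levels):
            if np.any(cells == k):
                levels[k] = samples[cells == k].mean()
    return levels

rng = np.random.default_rng(1)
print(lloyd_max(rng.standard_normal(100_000)))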

Journal ArticleDOI
TL;DR: A general class of Hit-and-Run algorithms is introduced for generating essentially arbitrary absolutely continuous distributions on Rd; it includes the Hypersphere Directions algorithm and the Coordinate Directions algorithm, which have been proposed for identifying nonredundant linear constraints and for generating uniform distributions over subsets of Rd.
Abstract: We introduce a general class of Hit-and-Run algorithms for generating essentially arbitrary absolutely continuous distributions on Rd. They include the Hypersphere Directions algorithm and the Coordinate Directions algorithm that have been proposed for identifying nonredundant linear constraints and for generating uniform distributions over subsets of Rd. Given a bounded open set S in Rd, an absolutely continuous probability distribution π on S (the target distribution) and an arbitrary probability distribution ν on the boundary of the d-dimensional unit sphere centered at the origin (the direction distribution), the (ν, π)-Hit-and-Run algorithm produces a sequence of iteration points as follows. Given the nth iteration point x, choose a direction θ according to the distribution ν and then choose the (n + 1)st iteration point according to the conditionalization of the distribution π along the line defined by x and x + θ. Under mild conditions, we show that this sequence of points is a Harris recurrent reversible Markov chain converging in total variation to the target distribution π.
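A minimal Hit-and-Run sketch with the Hypersphere Directions choice of ν (uniform on the sphere), for the special case where the target π is the uniform distribution on the unit ball, so the conditionalization along each chord is itself uniform. The choice of S and all other details below are assumptions made only to keep the example self-contained.

import numpy as np

rng = np.random.default_rng(0)

def hit_and_run_unit_ball(x0, n_steps=1000):
    """Hypersphere-Directions Hit-and-Run targeting the uniform
    distribution on the unit ball in R^d (a simple bounded open S)."""
    x = np.asarray(x0, dtype=float)
    d_dim = x.size
    samples = []
    for _ in range(n_steps):
        # Direction uniform on the unit sphere.
        d = rng.standard_normal(d_dim)
        d /= np.linalg.norm(d)
        # Chord of the ball along x + t*d: solve |x + t d|^2 = 1 for t.
        b = x @ d
        disc = np.sqrt(b * b - (x @ x - 1.0))
        # For a uniform target, the conditionalization on the chord is uniform.
        t = rng.uniform(-b - disc, -b + disc)
        x = x + t * d
        samples.append(x.copy())
    return np.array(samples)

chain = hit_and_run_unit_ball(np.zeros(5))
print(chain.mean(axis=0))   # should be near the origin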

Proceedings ArticleDOI
E. Bocchieri
27 Apr 1993
TL;DR: The author presents an efficient method for the computation of the likelihoods defined by weighted sums (mixtures) of Gaussians, which uses vector quantization of the input feature vector to identify a subset of Gaussian neighbors.
Abstract: In speech recognition systems based on continuous observation density hidden Markov models, the computation of the state likelihoods is an intensive task. The author presents an efficient method for the computation of the likelihoods defined by weighted sums (mixtures) of Gaussians. This method uses vector quantization of the input feature vector to identify a subset of Gaussian neighbors. It is shown that, under certain conditions, instead of computing the likelihoods of all the Gaussians, one needs to compute the likelihoods of only the Gaussian neighbors. Significant (up to a factor of nine) likelihood computation reductions have been obtained on various databases, with only a small loss of recognition accuracy.
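A rough sketch of the neighbor-selection idea (not the author's exact procedure): quantize the input vector against a small codebook and evaluate only the Gaussians associated with the winning codeword instead of the full mixture. The codebook, neighbor lists, and mixture parameters below are invented for illustration.

import numpy as np

def log_gauss(x, mean, var):
    # Diagonal-covariance Gaussian log-density.
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def mixture_loglik(x, weights, means, variances, codebook, neighbors):
    """Evaluate a Gaussian mixture using only the Gaussians listed as
    neighbors of the nearest codeword (an illustrative shortlist)."""
    codeword = np.argmin(np.sum((codebook - x) ** 2, axis=1))
    idx = neighbors[codeword]
    logs = [np.log(weights[i]) + log_gauss(x, means[i], variances[i]) for i in idx]
    m = max(logs)
    return m + np.log(sum(np.exp(l - m) for l in logs))   # log-sum-exp

# Tiny illustrative mixture in 2-D with two codewords.
means = np.array([[0.0, 0.0], [3.0, 3.0], [0.5, -0.5], [3.5, 2.5]])
variances = np.ones_like(means)
weights = np.array([0.3, 0.3, 0.2, 0.2])
codebook = np.array([[0.0, 0.0], [3.0, 3.0]])
neighbors = {0: [0, 2], 1: [1, 3]}          # Gaussians kept for each codeword
print(mixture_loglik(np.array([0.2, 0.1]), weights, means, variances, codebook, neighbors))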

Journal ArticleDOI
TL;DR: In this article, random partitions of integers are treated in the case where all partitions of an integer are assumed to have the same probability, and the focus is on limit theorems as the number being partitioned approaches ∞.
Abstract: Random partitions of integers are treated in the case where all partitions of an integer are assumed to have the same probability. The focus is on limit theorems as the number being partitioned approaches ∞. The limiting probability distribution of the appropriately normalized number of parts of some small size is exponential. The large parts are described by a particular Markov chain. A central limit theorem and a law of large numbers hold for the numbers of intermediate parts of certain sizes. The major tool is a simple construction of random partitions that treats the number being partitioned as a random variable. The same technique is useful when some restriction is placed on partitions, such as the requirement that all parts must be distinct.

Journal ArticleDOI
TL;DR: It is argued that a possible solution is to forgo reliance on theoretical distributions for expectations and quantiles of goodness-of-fit statistics and use Monte Carlo sampling to arrive at an empirical central or noncentral distribution.
Abstract: Latent class models with sparse contingency tables can present problems for model comparison and selection, because under these conditions the distributions of goodness-of-fit indices are often unknown. This causes inaccuracies both in hypothesis testing and in model comparisons based on normed indices. In order to assess the extent of this problem, we carried out a simulation investigating the distributions of the likelihood ratio statistic G², the Pearson statistic χ², and a new goodness-of-fit index suggested by Read and Cressie (1988). There were substantial deviations between the expectation of the chi-squared distribution and the means of the G² and Read and Cressie distributions. In general, the mean of the distribution of a statistic was closer to the expectation of the chi-squared distribution when the average cell expectation was large, there were fewer indicator items, and the latent class measurement parameters were less extreme. It was found that the mean of the χ² distribution is generally closer to the expectation of the chi-squared distribution than are the means of the other two indices we examined, but the standard deviation of the χ² distribution is considerably larger than that of the other two indices and larger than the standard deviation of the chi-squared distribution. We argue that a possible solution is to forgo reliance on theoretical distributions for expectations and quantiles of goodness-of-fit statistics. Instead, Monte Carlo sampling (Noreen, 1989) can be used to arrive at an empirical central or noncentral distribution.
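A compact illustration of the suggested Monte Carlo remedy in a deliberately simplified setting: simulate the null distribution of G² by parametric bootstrap from fitted multinomial cell probabilities instead of relying on the asymptotic chi-squared reference. In a real latent class analysis the model would be refit to every simulated table; here a smoothed plug-in estimate stands in for that step, and the table is invented.

import numpy as np

rng = np.random.default_rng(0)

def g2(observed, expected):
    # Likelihood ratio statistic; empty cells contribute zero.
    mask = observed > 0
    return 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))

# Sparse observed table and fitted (plug-in) cell probabilities.
observed = np.array([5, 1, 0, 2, 0, 1, 0, 1], dtype=float)
n = observed.sum()
p_hat = (observed + 0.5) / (n + 0.5 * observed.size)   # smoothed plug-in estimate
p_hat /= p_hat.sum()

g2_obs = g2(observed, n * p_hat)

# Empirical null distribution of G2 under the fitted model.
sims = np.array([g2(rng.multinomial(int(n), p_hat).astype(float), n * p_hat)
                 for _ in range(5000)])
print("observed G2:", g2_obs, "Monte Carlo p-value:", np.mean(sims >= g2_obs))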

Journal ArticleDOI
TL;DR: The existence of a unique asymptotic probability distribution (stationary distribution) for the Markov chain is proved when the mutation probability is held at any constant nonzero value, and a Cramer's Rule representation is developed to show that the stationary distribution possesses a zero mutation probability limit.
Abstract: This paper develops a theoretical framework for the simple genetic algorithm (combinations of the reproduction, mutation, and crossover operators) based on the asymptotic state behavior of a nonstationary Markov chain algorithm model. The methodology borrows heavily from that of simulated annealing. We prove the existence of a unique asymptotic probability distribution (stationary distribution) for the Markov chain when the mutation probability is used with any constant nonzero probability value. We develop a Cramer's Rule representation of the stationary distribution components for all nonzero mutation probability values and then extend the representation to show that the stationary distribution possesses a zero mutation probability limit. Finally, we present a strong ergodicity bound on the mutation probability sequence that ensures that the nonstationary algorithm (which results from varying mutation probability during algorithm execution) achieves the limit distribution asymptotically. Although the focus of this work is on a nonstationary algorithm in which mutation probability is reduced asymptotically to zero via a schedule (in a fashion analogous to simulated annealing), the stationary distribution results (existence, Cramer's Rule representation, and zero mutation probability limit) are directly applicable to conventional, simple genetic algorithm implementations as well.

Journal ArticleDOI
TL;DR: A nearest-neighbor estimator was developed that produced not the traditional "one-state" prediction (alpha-helix, beta-strand, or coil) but rather a probability distribution over the three states; these probability triplets were found to possess 58% more information than the one-state predictions.

Journal ArticleDOI
TL;DR: The method of invariants as discussed by the authors is a technique in the field of molecular evolution for inferring phylogenetic relations among a number of species on the basis of nucleotide sequence data.
Abstract: The so-called method of invariants is a technique in the field of molecular evolution for inferring phylogenetic relations among a number of species on the basis of nucleotide sequence data. An invariant is a polynomial function of the probability distribution defined by a stochastic model for the observed nucleotide sequence. This function has the special property that it is identically zero for one possible phylogeny and typically nonzero for another possible phylogeny. Thus it is possible to discriminate statistically between two competing phylogenies using an estimate of the invariant. The advantage of this technique is that it enables such inferences to be made without the need for estimating nuisance parameters that are related to the specific mechanisms by which the molecular evolution occurs. For a wide class of models found in the literature, we present a simple algebraic formalism for recognising whether or not a function is an invariant and for generating all possible invariants. Our work is based on recognising an underlying group structure and using discrete Fourier analysis.

Journal ArticleDOI
TL;DR: In this paper, a Lyapunov function for the phase-locked state of the Kuramoto model of non-linearly coupled oscillators is presented, which allows the introduction of thermodynamic formalism such as ground states and universality classes.
Abstract: A Lyapunov function for the phase-locked state of the Kuramoto model of non-linearly coupled oscillators is presented. It is also valid for finite-range interactions and allows the introduction of thermodynamic formalism such as ground states and universality classes. For the Kuramoto model, a minimum of the Lyapunov function corresponds to a ground state of a system with frustration: the interaction between the oscillators, XY spins, is ferromagnetic, whereas the random frequencies induce random fields which try to break the ferromagnetic order, i.e., global phase locking. The ensuing arguments imply asymptotic stability of the phase-locked state (up to degeneracy) and hold for any probability distribution of the frequencies. Special attention is given to discrete distribution functions. We argue that in this case a perfect locking on each of the sublattices which correspond to the frequencies results, but that a partial locking of some but not all sublattices is not to be expected. The order parameter of the phase-locked state is shown to have a strictly positive lower bound (r ⩾ 1/2), so that a continuous transition to a nonlocked state with vanishing order parameter is to be excluded.

Journal ArticleDOI
TL;DR: Optimality properties of likelihood-ratio quantizers are established for a very broad class of quantization problems, including problems involving the maximization of an Ali-Silvey (1966) distance measure and the Neyman-Pearson variant of the decentralized detection problem.
Abstract: M hypotheses and a random variable Y with a different probability distribution under each hypothesis are considered. A quantizer γ is applied to form a quantized random variable γ(Y). The extreme points of the set of possible probability distributions of γ(Y), as γ ranges over all quantizers, are characterized. Optimality properties of likelihood-ratio quantizers are established for a very broad class of quantization problems, including problems involving the maximization of an Ali-Silvey (1966) distance measure and the Neyman-Pearson variant of the decentralized detection problem.

Journal ArticleDOI
TL;DR: A maximum likelihood estimation method is applied directly to the K distribution to investigate the accuracy and uncertainty of maximum likelihood parameter estimates as functions of sample size and the parameters themselves; the estimates prove to be at least as accurate as those from the other estimators in all cases tested, and more accurate in most cases.
Abstract: The K distribution has proven to be a promising and useful model for backscattering statistics in synthetic aperture radar (SAR) imagery. However, most studies to date have relied on a method of moments technique involving second and fourth moments to estimate the parameters of the K distribution. The variance of these parameter estimates is large in cases where the sample size is small and/or the true distribution of backscattered amplitude is highly non-Rayleigh. The present authors apply a maximum likelihood estimation method directly to the K distribution. They consider the situation for single-look SAR data as well as a simplified model for multilook data. They investigate the accuracy and uncertainties in maximum likelihood parameter estimates as functions of sample size and the parameters themselves. They also compare their results with those from a new method given by Raghavan (1991) and from a nonstandard method of moments technique; maximum likelihood parameter estimates prove to be at least as accurate as those from the other estimators in all cases tested, and are more accurate in most cases. Finally, they compare the simplified multilook model with nominally four-look SAR data acquired by the Jet Propulsion Laboratory AIRSAR over sea ice in the Beaufort Sea during March 1988. They find that the model fits data from both first-year and multiyear ice well and that backscattering statistics from each ice type are moderately non-Rayleigh. They note that the distributions for the data set differ too little between ice types to allow discrimination based on differing distribution parameters.

Journal ArticleDOI
TL;DR: The use of the beta-binomial distribution is discussed for describing plant disease incidence data, collected by scoring plants as either «diseased» or «healthy», and maximum likelihood estimates of the parameters p and θ are obtained.
Abstract: We discuss the use of the beta-binomial distribution for the description of plant disease incidence data, collected on the basis of scoring plants as either «diseased» or «healthy». The beta-binomial is a discrete probability distribution derived by regarding the probability of a plant being diseased (a constant in the binomial distribution) as a beta-distributed variable. An important characteristic of the beta-binomial is that its variance is larger than that of the binomial distribution with the same mean. The beta-binomial distribution, therefore, may serve to describe aggregated disease incidence data. Using maximum likelihood, we estimated beta-binomial parameters p (mean disease incidence) and θ (an index of aggregation) for four previously published sets of disease incidence data in which there were some indications of aggregation [...]
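A hedged sketch of the maximum likelihood step: fit the mean incidence p and aggregation index θ of a beta-binomial to counts of diseased plants out of n per sampling unit. The counts are invented, and the parameterization α = p/θ, β = (1 − p)/θ is one common convention that may differ in detail from the paper's.

import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

counts = np.array([0, 2, 5, 1, 0, 8, 3, 0, 7, 2])   # diseased plants per quadrat (invented)
n = 10                                               # plants assessed per quadrat

def neg_loglik(params):
    p, theta = params
    a, b = p / theta, (1.0 - p) / theta              # assumed (p, theta) parameterization
    log_comb = gammaln(n + 1) - gammaln(counts + 1) - gammaln(n - counts + 1)
    ll = log_comb + betaln(counts + a, n - counts + b) - betaln(a, b)
    return -np.sum(ll)

res = minimize(neg_loglik, x0=[0.3, 0.2],
               bounds=[(1e-4, 1 - 1e-4), (1e-4, 10.0)])
p_hat, theta_hat = res.x
print(p_hat, theta_hat)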

Journal ArticleDOI
TL;DR: In this paper, new contingent claim formulas are proposed: one based on the log-negative-binomial distribution, whose option price does not depend on the jump probability, and one based on the log-gamma distribution, whose option price does not depend on the scale of the stock return.
Abstract: This paper characterizes contingent claim formulas that are independent of parameters governing the probability distribution of asset returns. While these parameters may affect stock, bond, and option values, they are "invisible" because they do not appear in the option formulas. For example, the Black-Scholes (1973) formula is independent of the mean of the stock return. This paper presents a new formula based on the log-negative-binomial distribution. In analogy with Cox, Ross, and Rubinstein's (1979) log-binomial formula, the log-negative-binomial option price does not depend on the jump probability. This paper also presents a new formula based on the log-gamma distribution. In this formula, the option price does not depend on the scale of the stock return, but does depend on the mean of the stock return. This paper extends the log-gamma formula to continuous time by defining a gamma process. The gamma process is a jump process with independent increments that generalizes the Wiener process. Unlike the Poisson process, the gamma process can instantaneously jump to a continuum of values. Hence, it is fundamentally "unhedgeable." If the gamma process jumps upward, then stock returns are positively skewed, and if the gamma process jumps downward, then stock returns are negatively skewed. The gamma process has one more parameter than a Wiener process; this parameter controls the jump intensity and skewness of the process. The skewness of the log-gamma process generates strike biases in options. In contrast to the results of diffusion models, these biases increase for short maturity options. Thus, the log-gamma model produces a parsimonious option-pricing formula that is consistent with empirical biases in the Black-Scholes formula.

Journal ArticleDOI
TL;DR: In this paper, the cosmological evolution of the 1-point probability distribution function (PDF) was calculated using an analytic approximation that combines gravitational perturbation theory with the Edgeworth expansion of the PDF.
Abstract: We calculate the cosmological evolution of the 1-point probability distribution function (PDF), using an analytic approximation that combines gravitational perturbation theory with the Edgeworth expansion of the PDF. Our method applies directly to a smoothed mass density field or to the divergence of a smoothed peculiar velocity field, provided that rms fluctuations are small compared to unity on the smoothing scale, and that the primordial fluctuations that seed the growth of structure are Gaussian. We use this `Edgeworth approximation' to compute the evolution of two measures that are similar to the skewness and kurtosis of the density field, but are less sensitive to tails of the probability distribution, so they may be more accurately estimated from surveys of limited volume. We compare our analytic calculations to cosmological N-body simulations in order to assess their range of validity. When $\sigma \ll 1$, the numerical simulations and perturbation theory agree precisely, demonstrating that the N-body method can yield accurate results in the regime of weakly non-linear clustering. We show analytically that `biased' galaxy formation preserves the relation $\langle\delta^3\rangle \propto \langle\delta^2\rangle^2$ predicted by second-order perturbation theory, provided that the galaxy density is a local function of the underlying mass density.
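For context, a generic Edgeworth-type expansion of a standardized density about the Gaussian, keeping the lowest-order skewness and kurtosis corrections. This is the textbook form of the expansion, not the specific cosmological implementation of the paper, and the skewness/kurtosis values are arbitrary.

import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.stats import norm

def edgeworth_pdf(x, skew, exkurt):
    """Edgeworth expansion of a standardized density about the Gaussian,
    keeping the skewness and excess-kurtosis terms (generic form)."""
    he3 = hermeval(x, [0, 0, 0, 1])                    # probabilists' Hermite He3
    he4 = hermeval(x, [0, 0, 0, 0, 1])                 # He4
    he6 = hermeval(x, [0, 0, 0, 0, 0, 0, 1])           # He6
    correction = 1.0 + skew / 6.0 * he3 + exkurt / 24.0 * he4 + skew ** 2 / 72.0 * he6
    return norm.pdf(x) * correction

x = np.linspace(-4, 4, 9)
print(edgeworth_pdf(x, skew=0.3, exkurt=0.2))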

Journal ArticleDOI
TL;DR: In this article, the authors analyzed various estimators for characterizing synthetic aperture radar clutter textures and compared their predicted performance with the maximum likelihood estimates in a search for robust, optimum texture estimators.
Abstract: This paper analyses various estimators for characterizing synthetic aperture radar clutter textures. First, we consider maximum likelihood estimators, which require specific knowledge of the form of the probability distribution of the data but would be expected to yield the best performance. Both K- and Weibull-distributed clutter models, which are often applied to characterize natural SAR clutter, are considered. Though a full maximum likelihood solution is impossible for the K distribution, we derive an approximate one for the multi-look case. We next derive expressions for limiting errors in a variety of direct texture estimators and compare their predicted performance with the maximum likelihood estimates in a search for robust, optimum texture estimators.

Journal ArticleDOI
James E. Smith
TL;DR: A moment-based approach is suggested that offers potential improvements in accuracy and efficiency over the usual approach; the methods supporting this moment approach are discussed and their performance is evaluated in the context of the example.
Abstract: Decision models involving continuous probability distributions almost always require some form of approximation. The usual approach to evaluating these kinds of models is to construct a discrete approximation for each continuous distribution and compute value lotteries and certain equivalents using these discrete approximations. Although decision analysts are quite comfortable with this approach, there has been relatively little consideration of how these discrete approximations affect the results of the analysis. In the first part of this paper, we review three common methods of constructing discrete approximations and compare their performance in a simple example. The results of the example suggest a different approach that offers potential improvements in accuracy and efficiency over the usual approach. The basic idea is that given discrete approximations that accurately represent the moments of assessed "input" distributions, we may easily and accurately compute the moments of the "output" distribution or value lotteries. These moments then summarize what we know about the value lottery and certain equivalent, and provide the basis for computing approximate value lotteries and certain equivalents. In this paper, we discuss the methods supporting this moment approach and evaluate their performance in the context of the example.
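One discretization widely used in decision analysis (of the general kind compared in this literature, though not necessarily one of the three methods the paper reviews) is the extended Pearson-Tukey three-point approximation, which puts probabilities 0.185, 0.630, 0.185 on the 5th, 50th, and 95th percentiles. The sketch below compares its first two moments with those of an arbitrarily chosen lognormal "input" distribution.

from scipy.stats import lognorm

dist = lognorm(s=0.5, scale=10.0)     # an arbitrary continuous "input" distribution

# Extended Pearson-Tukey: 5th/50th/95th percentiles with weights .185/.630/.185.
points = dist.ppf([0.05, 0.50, 0.95])
weights = [0.185, 0.630, 0.185]

approx_mean = sum(w * x for w, x in zip(weights, points))
approx_var = sum(w * x ** 2 for w, x in zip(weights, points)) - approx_mean ** 2
print("true mean/var:", dist.mean(), dist.var())
print("3-point mean/var:", approx_mean, approx_var)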