
Showing papers on "Probability distribution published in 1997"


Journal ArticleDOI
TL;DR: A novel fast algorithm for independent component analysis is introduced, which can be used for blind source separation and feature extraction, and the convergence speed is shown to be cubic.
Abstract: We introduce a novel fast algorithm for independent component analysis, which can be used for blind source separation and feature extraction. We show how a neural network learning rule can be transformed into a fixed-point iteration, which provides an algorithm that is very simple, does not depend on any user-defined parameters, and is fast to converge to the most accurate solution allowed by the data. The algorithm finds, one at a time, all nongaussian independent components, regardless of their probability distributions. The computations can be performed in either batch mode or a semiadaptive manner. The convergence of the algorithm is rigorously proved, and the convergence speed is shown to be cubic. Some comparisons to gradient-based algorithms are made, showing that the new algorithm is usually 10 to 100 times faster, sometimes giving the solution in just a few iterations.

3,215 citations
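The fixed-point idea can be illustrated in a few lines. The sketch below, a minimal one-unit version assuming whitened data and the kurtosis nonlinearity (no deflation for several components, no semiadaptive mode), iterates the update w ← E[z (wᵀz)³] − 3w with renormalization; the function name and the toy mixing example are illustrative, not from the paper.

```python
import numpy as np

def fastica_one_unit(x, n_iter=100, tol=1e-8, seed=0):
    """One-unit fixed-point ICA sketch: center and whiten the mixtures, then
    iterate the kurtosis-based update w <- E[z (w'z)^3] - 3w with
    renormalization until the direction stabilizes (up to sign)."""
    rng = np.random.default_rng(seed)

    # center and whiten (rows = mixed signals, columns = samples)
    x = x - x.mean(axis=1, keepdims=True)
    vals, vecs = np.linalg.eigh(np.cov(x))
    whitening = vecs @ np.diag(vals ** -0.5) @ vecs.T
    z = whitening @ x

    w = rng.normal(size=z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        w_new = (z * (w @ z) ** 3).mean(axis=1) - 3 * w   # fixed-point update
        w_new /= np.linalg.norm(w_new)
        converged = 1 - abs(w_new @ w) < tol              # compare up to sign
        w = w_new
        if converged:
            break
    return w @ whitening   # one row of the unmixing matrix in the original space

# toy usage: two nongaussian sources mixed by a random matrix (made-up data)
rng = np.random.default_rng(1)
sources = np.vstack([rng.uniform(-1, 1, 5000), rng.laplace(size=5000)])
mixed = rng.normal(size=(2, 2)) @ sources
print(fastica_one_unit(mixed))
```

With nongaussian sources the iterate typically stabilizes within a handful of steps, which is the fast (cubic) convergence the abstract refers to.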


Journal ArticleDOI
TL;DR: Two methods for separating mixtures of independent sources without any precise knowledge of their probability distribution are proposed by considering a maximum likelihood (ML) solution corresponding to some given distributions of the sources and relaxing this assumption afterward.
Abstract: We propose two methods for separating mixtures of independent sources without any precise knowledge of their probability distribution. They are obtained by considering a maximum likelihood (ML) solution corresponding to some given distributions of the sources and relaxing this assumption afterward. The first method is specially adapted to temporally independent non-Gaussian sources and is based on the use of nonlinear separating functions. The second method is specially adapted to correlated sources with distinct spectra and is based on the use of linear separating filters. A theoretical analysis of the performance of the methods has been made. A simple procedure for optimally choosing the separating functions is proposed. Further, in the second method, a simple implementation based on the simultaneous diagonalization of two symmetric matrices is provided. Finally, some numerical and simulation results are given, illustrating the performance of the method and the good agreement between the experiments and the theory.

500 citations


Journal ArticleDOI
TL;DR: In this paper, two new classes of probability distributions are introduced that radically simplify the process of developing variance components structures for extreme-value and logistic distributions, and they are shown to be computationally simpler and far more tractable than alternatives such as estimation by simulated moments.
Abstract: Two new classes of probability distributions are introduced that radically simplify the process of developing variance components structures for extreme-value and logistic distributions. When one of these new variates is added to an extreme-value (logistic) variate, the resulting distribution is also extreme value (logistic). Thus, quite complicated variance structures can be generated by recursively adding components having this new distribution, and the result will retain a marginal extreme-value (logistic) distribution. It is demonstrated that the computational simplicity of extreme-value error structures extends to the introduction of heterogeneity in duration, selection bias, limited-dependent- and qualitative-variable models. The usefulness of these new classes of distributions is illustrated with the examples of nested logit, multivariate risk, and competing risk models, where important generalizations to conventional stochastic structures are developed. The new models are shown to be computationally simpler and far more tractable than alternatives such as estimation by simulated moments. These results will be of considerable use to applied microeconomic researchers who have been hampered by computational difficulties in constructing more sophisticated estimators.

431 citations


Proceedings ArticleDOI
Anja Feldmann, Ward Whitt
09 Apr 1997
TL;DR: This work develops an algorithm for approximating a long-tail distribution by a finite mixture of exponentials, where an exponential component is fit in the largest remaining time scale and then the fitted exponential component is subtracted from the distribution.
Abstract: Traffic measurements from communication networks have shown that many quantities characterizing network performance have long-tail probability distributions, i.e., with tails that decay more slowly than exponentially. Long-tail distributions can have a dramatic effect upon performance, but it is often difficult to describe this effect in detail, because performance models with component long-tail distributions tend to be difficult to analyze. We address this problem by developing an algorithm for approximating a long-tail distribution by a finite mixture of exponentials. The fitting algorithm is recursive over time scales. At each stage, an exponential component is fit in the largest remaining time scale and then the fitted exponential component is subtracted from the distribution. Even though a mixture of exponentials has an exponential tail, it can match a long-tail distribution in the regions of primary interest when there are enough exponential components.

351 citations
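A minimal sketch of the recursive fitting idea, assuming the long-tail distribution is given as a complementary cdf and that the chosen time scales are spaced more widely apart than the matching factor b; the function name, the Pareto example, and the numbers are illustrative, not taken from the paper (which also treats the final component and the total probability mass more carefully).

```python
import numpy as np

def fit_hyperexponential(ccdf, scales, b=10.0):
    """Fit a finite mixture of exponentials to a long-tail complementary cdf,
    working recursively from the largest time scale down: at each scale c the
    residual ccdf is matched at the two points c and b*c, and the fitted
    exponential component is then subtracted."""
    components = []                      # list of (weight, rate) pairs
    residual = ccdf                      # residual ccdf after subtractions

    for c in sorted(scales, reverse=True):
        x1, x2 = b * c, c                # matching points for this time scale
        y1, y2 = residual(x1), residual(x2)
        if y1 <= 0.0 or y2 <= y1:
            continue                     # nothing usable left at this scale
        rate = np.log(y2 / y1) / (x1 - x2)     # solve w*exp(-rate*x) = y at both points
        weight = y2 * np.exp(rate * x2)
        components.append((weight, rate))
        prev = residual
        residual = lambda x, prev=prev, w=weight, r=rate: max(prev(x) - w * np.exp(-r * x), 0.0)
    return components

# example: a Pareto-like tail P(X > x) = min(1, x**-1.5), illustrative numbers only
pareto_ccdf = lambda x: min(1.0, x ** -1.5)
print(fit_hyperexponential(pareto_ccdf, scales=[1e6, 1e4, 1e2, 1.0]))
```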


Patent
16 Jun 1997
TL;DR: In this paper, the significance of terms is determined assuming a standard normal probability distribution, and terms are determined to be significant to a cluster if their probability of occurrence being due to chance is low.
Abstract: Documents are classified into one or more clusters corresponding to predefined classification categories by building a knowledge base comprising matrices of vectors which indicate the significance of terms within a corpus of text formed by the documents and classified in the knowledge base to each cluster. The significance of terms is determined assuming a standard normal probability distribution, and terms are determined to be significant to a cluster if their probability of occurrence being due to chance is low. For each cluster, statistical signatures comprising sums of weighted products and intersections of cluster terms to corpus terms are generated and used as discriminators for classifying documents. The knowledge base is built using prefix and suffix lexical rules which are context-sensitive and applied selectively to improve the accuracy and precision of classification.

315 citations


Journal ArticleDOI
TL;DR: A new sampling technique is presented that generates and inverts the Hammersley points to provide a representative sample for multivariate probability distributions; its performance is compared with that of a sample obtained from a Latin hypercube design by propagating both samples through a set of nonlinear functions.
Abstract: The basic setting of this article is that of parameter-design studies using data from computer models. A general approach to parameter design is introduced by coupling an optimizer directly with the computer simulation model using stochastic descriptions of the noise factors. The computational burden of these approaches can be extreme, however, and depends on the sample size used for characterizing the parametric uncertainties. In this article, we present a new sampling technique that generates and inverts the Hammersley points (a low-discrepancy design for placing n points uniformly in a k-dimensional cube) to provide a representative sample for multivariate probability distributions. We compare the performance of this to a sample obtained from a Latin hypercube design by propagating it through a set of nonlinear functions. The number of samples required to converge to the mean and variance is used as a measure of performance. The sampling technique based on the Hammersley points requires far fewer samples...

309 citations
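The generate-and-invert step can be sketched as below, assuming independent Gaussian inputs so that the inversion reduces to one-dimensional inverse normal cdfs; the Hammersley construction uses the radical inverse in successive prime bases. Names and numbers are illustrative, not from the paper, and correlated or non-Gaussian inputs would need their own (conditional) inverse cdfs.

```python
import numpy as np
from statistics import NormalDist

def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += (i % base) * f
        i //= base
        f /= base
    return result

def hammersley_normal_sample(n, means, stds):
    """Hammersley points in the k-dimensional unit cube, pushed through
    inverse normal cdfs to give a low-discrepancy sample of k independent
    Gaussian inputs (generate-and-invert)."""
    k = len(means)
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][: k - 1]
    eps = 1e-12                                    # keep inv_cdf away from 0 and 1
    sample = np.empty((n, k))
    for i in range(n):
        u = [(i + 0.5) / n] + [radical_inverse(i, p) for p in primes]
        sample[i] = [m + s * NormalDist().inv_cdf(min(max(ui, eps), 1 - eps))
                     for ui, m, s in zip(u, means, stds)]
    return sample

# example: a 100-point design for three uncertain inputs (made-up moments)
pts = hammersley_normal_sample(100, means=[1.0, 0.0, 5.0], stds=[0.1, 1.0, 0.5])
print(pts.mean(axis=0), pts.std(axis=0))
```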


01 Jan 1997
TL;DR: An algorithm which incrementally learns pairwise probability distributions from good solutions seen so far, uses these statistics to generate optimal (in terms of maximum likelihood) dependency trees to model these distributions, and then stochastically generates new candidate solutions from these trees.
Abstract: Many combinatorial optimization algorithms have no mechanism for capturing inter-parameter dependencies. However, modeling such dependencies may allow an algorithm to concentrate its sampling more effectively on regions of the search space which have appeared promising in the past. We present an algorithm which incrementally learns pairwise probability distributions from good solutions seen so far, uses these statistics to generate optimal (in terms of maximum likelihood) dependency trees to model these distributions, and then stochastically generates new candidate solutions from these trees. We test this algorithm on a variety of optimization problems. Our results indicate superior performance over other tested algorithms that either (1) do not explicitly use these dependencies, or (2) use these dependencies to generate a more restricted class of dependency graphs.

298 citations
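A compact sketch of the learn-and-sample loop, assuming binary solution vectors: pairwise mutual information is estimated from the current set of good solutions, a maximum-weight spanning tree (the maximum-likelihood dependency tree) is built greedily, and new candidates are drawn from the tree. The incremental updating and the surrounding optimization loop described in the abstract are omitted; names and the toy data are illustrative.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information between two binary columns."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def dependency_tree(solutions):
    """Prim-style maximum-spanning tree over pairwise mutual information,
    i.e. the maximum-likelihood dependency tree for the good solutions."""
    n_vars = solutions.shape[1]
    in_tree, edges = {0}, []
    while len(in_tree) < n_vars:
        best = None
        for i in in_tree:
            for j in range(n_vars):
                if j not in in_tree:
                    mi = mutual_information(solutions[:, i], solutions[:, j])
                    if best is None or mi > best[0]:
                        best = (mi, i, j)
        edges.append((best[1], best[2]))        # (parent already in tree, new child)
        in_tree.add(best[2])
    return edges

def sample_from_tree(solutions, edges, rng):
    """Draw one new candidate bit string from the tree model."""
    n_vars = solutions.shape[1]
    new = np.empty(n_vars, dtype=int)
    new[0] = rng.random() < np.mean(solutions[:, 0])          # root marginal
    for parent, child in edges:                               # parents come first
        rows = solutions[:, parent] == new[parent]
        p_child = np.mean(solutions[rows, child]) if rows.any() else 0.5
        new[child] = rng.random() < p_child
    return new

# toy usage: learn a tree from random "good" solutions and sample a new candidate
rng = np.random.default_rng(0)
good = rng.integers(0, 2, size=(50, 6))
edges = dependency_tree(good)
print(edges, sample_from_tree(good, edges, rng))
```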


Patent
15 Aug 1997
TL;DR: In this article, a path-integral approach based on the probability distribution of the complete histories of an underlying security is presented for the pricing of financial instruments such as derivative securities.
Abstract: A Monte Carlo system and method are presented for the pricing of financial instruments such as derivative securities. A path-integral approach is described that relies upon the probability distribution of the complete histories of an underlying security. A Metropolis algorithm is used to generate samples of a probability distribution of the paths (histories) of the security. Complete information on the derivative security is obtained in a single simulation, including parameter sensitivities. Multiple values of parameters are also obtained in a single simulation. The method is applied in a plurality of systems, including a parallel computing environment and an online real-time valuation service. The method and system also have the capability of evaluating American options using Monte Carlo methods.

297 citations
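A toy sketch of the path-sampling idea, assuming a discretized geometric Brownian motion under the risk-neutral measure and a European call payoff: the Metropolis rule perturbs one time slice of the log-price path at a time, and the discounted payoff is averaged over the sampled paths. Parameter values and function names are illustrative and not taken from the patent, which also covers parameter sensitivities and American options.

```python
import numpy as np

def metropolis_call_price(s0=100.0, strike=100.0, r=0.05, sigma=0.2,
                          maturity=1.0, n_steps=50, n_updates=200_000,
                          burn_in=20_000, seed=0):
    """Metropolis sampling over discretized log-price paths of a risk-neutral
    geometric Brownian motion, averaging the discounted European call payoff
    over the sampled paths. One randomly chosen time slice is perturbed per
    update."""
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    sd = sigma * np.sqrt(dt)

    def log_prob(path):
        """Log-density of a path: independent Gaussian log-price increments."""
        inc = np.diff(np.concatenate(([np.log(s0)], path)))
        return -0.5 * np.sum((inc - drift) ** 2) / sd ** 2

    path = np.log(s0) + drift * np.arange(1, n_steps + 1)   # start on the drift path
    lp = log_prob(path)
    payoffs = []
    for step in range(n_updates):
        i = rng.integers(n_steps)
        proposal = path.copy()
        proposal[i] += rng.normal(0.0, sd)
        lp_new = log_prob(proposal)
        if np.log(rng.random()) < lp_new - lp:               # Metropolis accept/reject
            path, lp = proposal, lp_new
        if step >= burn_in:
            payoffs.append(max(np.exp(path[-1]) - strike, 0.0))
    return np.exp(-r * maturity) * np.mean(payoffs)

# long chains should approach the Black-Scholes value (about 10.45 for these inputs)
print(metropolis_call_price())
```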


Journal ArticleDOI
TL;DR: In this paper, the authors derived the exact expected empirical spectral distribution of the complex eigenvalues for finite n, from which convergence in the expected distribution to the circular law for normally distributed matrices may be derived.

271 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an alternative approach to generate realizations that are conditional to pressure data, focusing on the distribution of realizations and on the efficiency of the method.
Abstract: Generating one realization of a random permeability field that is consistent with observed pressure data and a known variogram model is not a difficult problem. If, however, one wants to investigate the uncertainty of reservoir behavior, one must generate a large number of realizations and ensure that the distribution of realizations properly reflects the uncertainty in reservoir properties. The most widely used method for conditioning permeability fields to production data has been the method of simulated annealing, in which practitioners attempt to minimize the difference between the "true" and simulated production data, and the "true" and simulated variograms. Unfortunately, the meaning of the resulting realization is not clear and the method can be extremely slow. In this paper, we present an alternative approach to generating realizations that are conditional to pressure data, focusing on the distribution of realizations and on the efficiency of the method. Under certain conditions that can be verified easily, the Markov chain Monte Carlo method is known to produce states whose frequencies of appearance correspond to a given probability distribution, so we use this method to generate the realizations. To make the method more efficient, we perturb the states in such a way that the variogram is satisfied automatically and the pressure data are approximately matched at every step. These perturbations make use of sensitivity coefficients calculated from the reservoir simulator.

248 citations


Journal ArticleDOI
TL;DR: In this paper, a sampling technique is presented that generates and inverts the Hammersley points (an optimal design for placing n points uniformly on a k-dimensional cube) to provide a representative sample for multivariate probability distributions.
Abstract: The concept of robust design involves identification of design settings that make the product performance less sensitive to the effects of seasonal and environmental variations. This concept is discussed in this article in the context of batch distillation column design with feed stock variations, and internal and external uncertainties. Stochastic optimization methods provide a general approach to robust/parameter design as compared to conventional techniques. However, the computational burden of these approaches can be extreme and depends on the sample size used for characterizing the parametric variations and uncertainties. A novel sampling technique is presented that generates and inverts the Hammersley points (an optimal design for placing n points uniformly on a k-dimensional cube) to provide a representative sample for multivariate probability distributions. The example of robust batch-distillation column design illustrates that the new sampling technique offers significant computational savings and better accuracy.

Journal ArticleDOI
TL;DR: In this paper, a method of extracting the risk-neutral probability distribution of future exchange rates from option prices is described, which provides investors and market analysts with an important tool for gauging market sentiment.
Abstract: This article describes a method of extracting the risk-neutral probability distribution of future exchange rates from option prices. In foreign exchange markets, interbank option pricing conventions facilitate reliable inferences about risk-neutral probability distributions with a small amount of readily available information. The risk-neutral probability distribution of the future exchange rate provides investors and market analysts with an important tool for gauging market sentiment.
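For a concrete, generic illustration of extracting a risk-neutral density from option prices, the sketch below applies the textbook Breeden-Litzenberger relation on an evenly spaced strike grid; this is a stand-in for the idea, not the interbank delta-convention method the article actually describes.

```python
import numpy as np

def risk_neutral_density(strikes, call_prices, rate, maturity):
    """Approximate the risk-neutral density from call prices via the
    Breeden-Litzenberger relation f(K) = exp(r*T) * d2C/dK2, using a
    second-order central difference on an evenly spaced strike grid."""
    strikes = np.asarray(strikes, dtype=float)
    calls = np.asarray(call_prices, dtype=float)
    dk = np.diff(strikes)
    assert np.allclose(dk, dk[0]), "sketch assumes evenly spaced strikes"
    second_deriv = (calls[2:] - 2 * calls[1:-1] + calls[:-2]) / dk[0] ** 2
    return strikes[1:-1], np.exp(rate * maturity) * second_deriv
```

On real quotes the second difference amplifies noise, so in practice the call curve (or the implied volatility smile) is usually smoothed before differentiating.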

Journal ArticleDOI
TL;DR: This work compares PIPE to GP on a function regression problem and the 6-bit parity problem, and uses it to solve tasks in partially observable mazes, where the best programs have minimal runtime.
Abstract: Probabilistic incremental program evolution (PIPE) is a novel technique for automatic program synthesis. We combine probability vector coding of program instructions, population-based incremental learning, and tree-coded programs like those used in some variants of genetic programming (GP). PIPE iteratively generates successive populations of functional programs according to an adaptive probability distribution over all possible programs. Each iteration, it uses the best program to refine the distribution. Thus, it stochastically generates better and better programs. Since distribution refinements depend only on the best program of the current population, PIPE can evaluate program populations efficiently when the goal is to discover a program with minimal runtime. We compare PIPE to GP on a function regression problem and the 6-bit parity problem. We also use PIPE to solve tasks in partially observable mazes, where the best programs have minimal runtime.

Journal ArticleDOI
TL;DR: In this article, a nonparametric method for the synthesis of streamflow that is data-driven and avoids prior assumptions as to the form of dependence (e.g., linear or nonlinear) and the shape of the probability density functions (i.e., Gaussian) is presented.
Abstract: In this paper kernel estimates of the joint and conditional probability density functions are used to generate synthetic streamflow sequences. Streamflow is assumed to be a Markov process with time dependence characterized by a multivariate probability density function. Kernel methods are used to estimate this multivariate density function. Simulation proceeds by sequentially resampling from the conditional density function derived from the kernel estimate of the underlying multivariate probability density function. This is a nonparametric method for the synthesis of streamflow that is data-driven and avoids prior assumptions as to the form of dependence (e.g., linear or nonlinear) and the form of the probability density functions (e.g., Gaussian). We show, using synthetic examples with known underlying models, that the nonparametric method presented is more flexible than the conventional models used in stochastic hydrology and is capable of reproducing both linear and nonlinear dependence. The effectiveness of this model is illustrated through its application to simulation of monthly streamflow from the Beaver River in Utah.
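A simplified one-dimensional sketch of the conditional resampling step, assuming a scalar Markov series and a Gaussian kernel with a reference bandwidth; the multivariate, monthly structure of the paper is omitted, and the synthetic record below is made up.

```python
import numpy as np

def simulate_streamflow(record, n_steps, bandwidth=None, seed=0):
    """Generate a synthetic sequence by resampling from a kernel estimate of
    the conditional density f(x_t | x_{t-1}) built from historical
    (previous, next) pairs, then smearing the resampled value with the kernel
    (a smoothed bootstrap)."""
    rng = np.random.default_rng(seed)
    prev, nxt = record[:-1], record[1:]
    if bandwidth is None:                                  # reference bandwidth
        bandwidth = 1.06 * np.std(prev) * len(prev) ** (-1 / 5)

    x, out = record[-1], []
    for _ in range(n_steps):
        # Gaussian kernel weights: how close each historical predecessor is to x
        w = np.exp(-0.5 * ((prev - x) / bandwidth) ** 2) + 1e-12
        w /= w.sum()
        j = rng.choice(len(nxt), p=w)
        x = nxt[j] + bandwidth * rng.normal()
        out.append(x)
    return np.array(out)

# toy usage with a synthetic AR(1)-like record (made-up data)
rng = np.random.default_rng(2)
record = np.empty(200)
record[0] = 10.0
for t in range(1, 200):
    record[t] = 2.0 + 0.8 * record[t - 1] + rng.normal(0.0, 1.0)
print(simulate_streamflow(record, n_steps=12))
```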

Journal ArticleDOI
TL;DR: In this paper, an alternative methodology for extreme values of univariate time series was developed, by assuming that the time series is Markovian and using bivariate extreme value theory to suggest appropriate models for the transition distributions.
Abstract: In recent research on extreme value statistics, there has been an extensive development of threshold methods, first in the univariate case and subsequently in the multivariate case as well. In this paper, an alternative methodology for extreme values of univariate time series is developed, by assuming that the time series is Markovian and using bivariate extreme value theory to suggest appropriate models for the transition distributions. A new likelihood representation for threshold methods is presented which we apply to a Markovian time series. An important motivation for developing this kind of theory is the possibility of calculating probability distributions for functionals of extreme events. We address this issue by showing how a theory of compound Poisson limits for additive functionals can be combined with simulation to obtain numerical solutions for problems of practical interest. The methods are illustrated by application to temperature data.

Journal ArticleDOI
TL;DR: A precise understanding of how Occam's razor, the principle that simpler models should be preferred until the data justify more complex models, is automatically embodied by probability theory is arrived at.
Abstract: The task of parametric model selection is cast in terms of a statistical mechanics on the space of probability distributions. Using the techniques of low-temperature expansions, I arrive at a systematic series for the Bayesian posterior probability of a model family that significantly extends known results in the literature. In particular, I arrive at a precise understanding of how Occam’s razor, the principle that simpler models should be preferred until the data justify more complex models, is automatically embodied by probability theory. These results require a measure on the space of model parameters and I derive and discuss an interpretation of Jeffreys’ prior distribution as a uniform prior over the distributions indexed by a family. Finally, I derive a theoretical index of the complexity of a parametric family relative to some true distribution that I call the razor of the model. The form of the razor immediately suggests several interesting questions in the theory of learning that can be studied using the techniques of statistical mechanics.
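For orientation, the leading terms of the standard Laplace (low-temperature) expansion of the log marginal likelihood, which the paper extends to higher orders: the terms of order ln N and order 1 that penalize dimension and the volume of the model family are what embody Occam's razor. This is a sketch of the generic known result, not the paper's full series.

```latex
% Laplace (low-temperature) expansion of the log marginal likelihood for a
% d-parameter family fit to N observations, with Jeffreys prior
% w(\theta) \propto \sqrt{\det I(\theta)}; \hat\theta is the maximum-likelihood
% point and \hat{J} the per-observation observed information.
\ln P(\text{data} \mid \mathcal{M})
  \approx \ln P(\text{data} \mid \hat\theta, \mathcal{M})
        - \frac{d}{2}\ln\frac{N}{2\pi}
        - \frac{1}{2}\ln\det \hat{J}(\hat\theta)
        + \ln w(\hat\theta),
\qquad
w(\theta) = \frac{\sqrt{\det I(\theta)}}{\int \sqrt{\det I(\theta')}\, d\theta'}
```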

Book
30 Nov 1997
TL;DR: This chapter discusses Bayesian Probabilities, Bayesian Methods, and the Foundations of Statistical Analysis, which focuses on Bayesian Estimation of Parameters.
Abstract: Introduction: Introduction. - Types of Uncertainty. - Taylor Series Expansion. - Applications. - Problems. - Data Description and Treatment: Introduction.- Classification of Data. - Graphical Description of Data. - Histograms and Frequency Diagrams. - Descriptive Measures. - Applications. - Problems. - Fundamentals Of Probability: Introduction. - Sample Spaces, Sets, and Events. - Mathematics of Probability. - Random Variables and Their Probability Distributions. - Moment.- Common Discrete Probability Distributions. - Common Continuous Probability Distributions. - Applications. - Problems. - Multiple Random Variables: Introduction. - Joint Random Variables and Their Probability Distributions. - Functions of Random Variables. - Applications. - Problems. - Fundamentals of Statistical Analysis: Introduction. - Estimation of Parameters. - Sampling Distributions. - Hypothesis Testing: Procedure. - Hypothesis Tests of Means. - Hypothesis Tests of Variances. - Confidence Intervals. - Sample-Size Determination. - Selection of Model Probability Distributions. - Applications. Problems. - Curve Fitting and Regression Analysis: Introduction. - Correlation Analysis. - Introduction to Regression. - Principle of Least Squares. - Reliability of the Regression Equation. - Reliability of Point Estimates of the Regression Coefficients. - Confidence Intervals of the Regression Equation. - Correlation Versus Regression. - Applications of Bivariate Regression Analysis. - Multiple Regression Analysis. - Regression Analysis of Nonlinear Models. - Applications. Problems. - Simulation: Introduction. - Monte Carlo Simulation. - Random Numbers. - Generation of Random Variables. - Generation of Selected Discrete Random Variables. - Generation of Selected Continuous Random Variables. - Applications. - Problems. - Reliability and Risk Analysis: Introduction. - Time to Failure. - Reliability of Components. - Reliability of Systems. - Risk-Based Decision Analysis. - Applications. - Problems. - Bayesian Methods: Introduction. - Bayesian Probabilities. - Bayesian Estimation of Parameters. - Bayesian Statistics. - Applications. - Problems. - Appendix A: Probability and Statistics Tables. - Appendix B: Values of the Gamma Function. - Subject Index.

Journal ArticleDOI
15 Oct 1997-EPL
TL;DR: The stochastic differential equations for a model of dissipative particle dynamics with both total energy and total momentum conservation in the particle-particle interactions are presented in this paper, together with corresponding fluctuation-dissipation theorems ensuring that the ab initio chosen equilibrium probability distribution for the relevant variables is a stationary solution.
Abstract: The stochastic differential equations for a model of dissipative particle dynamics with both total energy and total momentum conservation in the particle-particle interactions are presented. The corresponding Fokker-Planck equation for the evolution of the probability distribution for the system is deduced together with the corresponding fluctuation-dissipation theorems ensuring that the ab initio chosen equilibrium probability distribution for the relevant variables is a stationary solution. When energy conservation is included, the system can sustain temperature gradients and heat flow can be modeled.

Journal ArticleDOI
TL;DR: In this paper, a class of extensions of the univariate quantile function to the multivariate case (M-quantiles) were developed, related in a certain way to M-parameters of a probability distribution and their M-estimators.
Abstract: The paper develops a class of extensions of the univariate quantile function to the multivariate case (M-quantiles), related in a certain way to M-parameters of a probability distribution and their M-estimators. The spatial (geometric) quantiles, recently introduced by Koltchinskii and Dudley and by Chaudhuri, as well as the regression quantiles of Koenker and Bassett, are examples of the M-quantile function discussed in the paper. We study the main properties of M-quantiles and develop the asymptotic theory of empirical M-quantiles. We use M-quantiles to extend L-parameters and L-estimators to the multivariate case; to introduce a bootstrap test for spherical symmetry of a multivariate distribution, and to extend the notion of regression quantiles to multiresponse linear regression models.

Journal ArticleDOI
TL;DR: In this article, a family of trimmed regions is introduced for a probability distribution in Euclidean d-space, and a trimming transform is constructed that injectively maps a given distribution to a distribution having a unique median.
Abstract: A family of trimmed regions is introduced for a probability distribution in Euclidean d-space. The regions decrease with their parameter $\alpha$, from the closed convex hull of support (at $\alpha = 0$) to the expectation vector (at $\alpha = 1$). The family determines the underlying distribution uniquely. For every $\alpha$ the region is affine equivariant and continuous with respect to weak convergence of distributions. The behavior under mixture and dilation is studied. A new concept of data depth is introduced and investigated. Finally, a trimming transform is constructed that injectively maps a given distribution to a distribution having a unique median.

Journal ArticleDOI
TL;DR: A family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial, is described and the small sample properties of this rule compare favorably to those of the continual reassessment method, determined by simulation.
Abstract: We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
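One member of the family, the biased-coin random walk rule targeting a quantile at or below the median, can be simulated in a few lines; the dose-toxicity curve and all numbers below are made up for illustration and are not from the paper.

```python
import random

def biased_coin_walk(tox_probs, target=1/3, n_patients=30, start=0, seed=1):
    """Simulate a biased-coin random walk allocation rule targeting the dose
    whose toxicity probability is near `target` (<= 0.5): step down after a
    toxic response; after a non-toxic response step up with probability
    target/(1-target), otherwise repeat the same dose."""
    rng = random.Random(seed)
    up_prob = target / (1 - target)
    level, assignments = start, []
    for _ in range(n_patients):
        assignments.append(level)
        toxic = rng.random() < tox_probs[level]              # simulated response
        if toxic:
            level = max(level - 1, 0)
        elif rng.random() < up_prob:
            level = min(level + 1, len(tox_probs) - 1)
    return assignments

# made-up dose-toxicity curve with five levels; allocations cluster near level 2
print(biased_coin_walk([0.05, 0.15, 0.30, 0.50, 0.70]))
```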

Journal ArticleDOI
TL;DR: It is shown that distances in proteins are predicted more accurately by neural networks than by probability density functions, and that the accuracy of the predictions can be further increased by using sequence profiles.
Abstract: We predict interatomic Cα distances by two independent data-driven methods. The first method uses statistically derived probability distributions of the pairwise distance between two amino acids, whilst the latter method consists of a neural network prediction approach equipped with windows taking the context of the two residues into account. These two methods are used to predict whether distances in independent test sets were above or below given thresholds. We investigate which distance thresholds produce the most information-rich constraints and, in turn, the optimal performance of the two methods. The predictions are based on a data set derived using a new threshold which defines when sequence similarity implies structural similarity. We show that distances in proteins are predicted more accurately by neural networks than by probability density functions. We show that the accuracy of the predictions can be further increased by using sequence profiles. A threading method based on the predicted distances is presented. A homepage with software, predictions and data related to this paper is available at http://www.cbs.dtu.dk/services/CPHmodels/.

Journal ArticleDOI
TL;DR: An efficient and very simple algorithm based on the successive refinement of partitions of the unit interval (0, 1), which is called the interval algorithm, is proposed and a fairly tight evaluation on the efficiency is given.
Abstract: The problem of generating a random number with an arbitrary probability distribution by using a general biased M-coin is studied. An efficient and very simple algorithm based on the successive refinement of partitions of the unit interval (0, 1), which we call the interval algorithm, is proposed. A fairly tight evaluation on the efficiency is given. Generalizations of the interval algorithm to the following cases are investigated: (1) output sequence is independent and identically distributed (i.i.d.); (2) output sequence is Markov; (3) input sequence is Markov; (4) input sequence and output sequence are both subject to arbitrary stochastic processes.
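A sketch of the basic single-output form of the interval algorithm, assuming the target and coin distributions are given as probability vectors; the i.i.d.-sequence and Markov variants analyzed in the paper are not covered, and the example probabilities are made up.

```python
import random

def interval_algorithm(target_probs, coin_probs, rng=random):
    """Generate one symbol with distribution `target_probs` from flips of a
    biased M-sided coin with distribution `coin_probs`: keep refining a
    subinterval of [0, 1) in proportion to the coin probabilities until it
    fits inside a single cell of the target partition, then output that cell."""
    # cumulative boundaries of the target partition of [0, 1)
    bounds = [0.0]
    for p in target_probs:
        bounds.append(bounds[-1] + p)

    lo, hi = 0.0, 1.0
    while True:
        for k in range(len(target_probs)):           # fully inside one target cell?
            if bounds[k] <= lo and hi <= bounds[k + 1]:
                return k
        c = rng.choices(range(len(coin_probs)), weights=coin_probs)[0]   # flip the coin
        width, offset = hi - lo, sum(coin_probs[:c])
        lo, hi = lo + width * offset, lo + width * (offset + coin_probs[c])

# example: fair bits from a coin with P(heads) = 0.7 (made-up probabilities)
print([interval_algorithm([0.5, 0.5], [0.7, 0.3]) for _ in range(20)])
```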

Journal ArticleDOI
TL;DR: In this paper, a generalization of the Pólya-urn scheme is introduced which characterizes the discrete beta-Stacy process, a process that is shown to be neutral to the right and a generalisation of the Dirichlet process.
Abstract: The beta-Stacy process is defined. It is shown to be neutral to the right and a generalization of the Dirichlet process. The posterior distribution is also a beta-Stacy process given independent and identically distributed (iid) observations, possibly with right censoring, from F. A generalization of the Pólya-urn scheme is introduced which characterizes the discrete beta-Stacy process. 1. Introduction. Let F be the space of cumulative distribution functions (cdfs) on [0, ∞). This paper considers placing a probability distribution on F by defining a stochastic process F on ([0, ∞), A), where A is the Borel field of subsets, such that F(0) = 0 a.s., F is a.s. nondecreasing, a.s. right continuous, and lim_{t→∞} F(t) = 1 a.s. Thus, with probability 1, the sample paths of F are cdf's. Previous work includes the Dirichlet process [Ferguson (1973, 1974)], neutral to the right processes [Doksum (1974)], the extended gamma process [Dykstra and Laud (1981)], the beta process [Hjort (1990)] and Pólya trees [Lavine (1992, 1994), Mauldin, Sudderth and Williams (1992)]. The purpose of this paper is twofold: (1) to introduce a new stochastic process which generalizes the Dirichlet process, in that more flexible prior beliefs are able to be represented, and, unlike the Dirichlet process, is conjugate to right-censored observations, and (2) to introduce a generalization of the Pólya-urn scheme in order to characterize the discrete-time version of the process. The property of conjugacy to right-censored observations is also a feature of the beta process; however, with the beta process the statistician is required to consider hazard rates and cumulative hazards when constructing the prior. The beta-Stacy process only requires considerations on the distribution of the observations. The process is shown to be neutral to the right. The present paper is restricted to considering the estimation of an unknown cdf on [0, ∞), although it is trivially extended to include (−∞, ∞). Finally, for ease of notation, F is written to mean either the cdf or the corresponding probability measure. The organization of the paper is as follows. In Section 2 the process is defined and its connections with other processes given. We also provide an...

Journal ArticleDOI
TL;DR: Alignments and approximations for finite sequences, called Pólya frequency sequences, which follow from their probabilistic representation, are reviewed, and a number of improvements of known estimates are obtained.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate branch independence and distribution independence of choices between gambles, and show that for both properties, patterns of violations are opposite those predicted by the inverse-S weighting function used in the model of cumulative prospect theory by Tversky and Kahneman.

Journal ArticleDOI
TL;DR: In this article, the role of non-Gaussian fluctuations in primordial black hole (PBH) formation is explored and shown that the standard Gaussian assumption, used in all PBH formation papers to date, is not justified.
Abstract: We explore the role of non-Gaussian fluctuations in primordial black hole (PBH) formation and show that the standard Gaussian assumption, used in all PBH formation papers to date, is not justified. Since large spikes in power are usually associated with flat regions of the inflaton potential, quantum fluctuations become more important in the field dynamics, leading to mode-mode coupling and non-Gaussian statistics. Moreover, PBH production requires several-σ (rare) fluctuations in order to prevent premature matter dominance of the universe, so we are necessarily concerned with distribution tails, where any intrinsic skewness will be especially important. We quantify this argument by using the stochastic slow-roll equation and a relatively simple analytic method to obtain the final distribution of fluctuations. We work out several examples with toy models that produce PBHs, and test the results with numerical simulations. Our examples show that the naive Gaussian assumption can result in errors of many orders of magnitude. For models with spikes in power, our calculations give sharp cutoffs in the probability of large positive fluctuations, meaning that Gaussian distributions would vastly overproduce PBHs. The standard results that link inflation-produced power spectra and PBH number densities must then be reconsidered, since they rely quite heavily on the Gaussian assumption. We point out that since the probability distributions depend strongly on the nature of the potential, it is impossible to obtain results for general models. However, calculating the distribution of fluctuations for any specific model seems to be relatively straightforward, at least in the single inflaton case.

Journal ArticleDOI
TL;DR: In this paper, the chi-square family of signal fluctuation distributions is defined, and probability of detection curves for signal fluctuations not belonging to the chi-square family are discussed.
Abstract: The chi-square family of signal fluctuation distributions is defined. Rules are given for embedding the Swerling cases, and other cases of interest, in this family. Probability of detection curves are presented for the chi-square family of fluctuations, including cases whose probability of detection curves cannot be bracketed by the Swerling cases and the non-fluctuating case. The work of Weinstock has indicated such cases to be of practical interest; the fluctuation loss, for probability of detection exceeding 0.50, can be much larger than that for Swerling Case I. A discussion and partial analysis, accompanied by examples, is devoted to the question: when can the detection probability curves for signal fluctuations not belonging to the chi-square family be well approximated by curves resulting from chi-square fluctuations, and what methods can be used to choose adequately fitting chi-square fluctuation models when such a fit is possible?
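A Monte Carlo sketch of how detection probabilities arise for the chi-square family, assuming a single-pulse square-law detector in exponentially distributed noise power; degrees of freedom 2 and 4 reproduce Swerling I and III, and large values approach the non-fluctuating case. The numbers are illustrative, not the paper's curves.

```python
import numpy as np

def detection_probability(mean_snr, dof, p_fa=1e-6, n_draws=200_000, seed=0):
    """Monte Carlo probability of detection for a single-pulse square-law
    detector in exponential noise, when the target SNR fluctuates with a
    chi-square (gamma) law with `dof` degrees of freedom and mean `mean_snr`."""
    rng = np.random.default_rng(seed)
    threshold = -np.log(p_fa)                     # threshold for exponential noise power
    shape = dof / 2.0                             # gamma shape parameter
    snr = rng.gamma(shape, scale=mean_snr / shape, size=n_draws)
    return np.mean(np.exp(-threshold / (1.0 + snr)))

for dof in (2, 4, 1000):                          # Swerling I, Swerling III, ~steady target
    print(dof, detection_probability(mean_snr=10.0, dof=dof))
```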

Journal ArticleDOI
TL;DR: In this article, a new and direct approach for analyzing the scaling properties of the various distribution functions for the random forced Burgers equation is proposed, and the authors consider the problem of the growth of random surfaces.
Abstract: Statistical properties of solutions of the random forced Burgers equation have been a subject of intensive studies recently (see Refs. [1–6]). Of particular interest are the asymptotic properties of probability distribution functions associated with velocity gradients and velocity increments. Aside from the fact that such issues are of direct interest to a large number of problems such as the growth of random surfaces [1], it is also hoped that the field-theoretic techniques developed for the Burgers equation will eventually be useful for understanding more complex phenomena such as turbulence. In this paper, we propose a new and direct approach for analyzing the scaling properties of the various distribution functions for the random forced Burgers equation. We will consider the problem

Posted Content
TL;DR: This paper surveys four measures of distinguishability for quantum-mechanical states from the point of view of the cryptographer with a particular eye on applications in quantum cryptography, and obtains several inequalities that relate the quantum distinguishability measures to each other.
Abstract: This paper, mostly expository in nature, surveys four measures of distinguishability for quantum-mechanical states. This is done from the point of view of the cryptographer with a particular eye on applications in quantum cryptography. Each of the measures considered is rooted in an analogous classical measure of distinguishability for probability distributions: namely, the probability of an identification error, the Kolmogorov distance, the Bhattacharyya coefficient, and the Shannon distinguishability (as defined through mutual information). These measures have a long history of use in statistical pattern recognition and classical cryptography. We obtain several inequalities that relate the quantum distinguishability measures to each other, one of which may be crucial for proving the security of quantum cryptographic key distribution. In another vein, these measures and their connecting inequalities are used to define a single notion of cryptographic exponential indistinguishability for two families of quantum states. This is a tool that may prove useful in the analysis of various quantum cryptographic protocols.
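The four classical measures that the quantum quantities are rooted in can be computed directly for a pair of discrete distributions; a small sketch follows (equal priors assumed by default, example numbers made up).

```python
import numpy as np

def classical_distinguishability(p, q, prior=0.5):
    """The four classical measures underlying the quantum ones in the paper,
    for two discrete distributions p and q: probability of identification
    error, Kolmogorov (total variation) distance, Bhattacharyya coefficient,
    and Shannon distinguishability (mutual information between the label and
    the observation), with the given prior weight on p."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    error = np.sum(np.minimum(prior * p, (1 - prior) * q))
    kolmogorov = 0.5 * np.sum(np.abs(p - q))
    bhattacharyya = np.sum(np.sqrt(p * q))

    def entropy(d):                               # Shannon entropy in bits
        d = d[d > 0]
        return -np.sum(d * np.log2(d))

    mix = prior * p + (1 - prior) * q
    shannon = entropy(mix) - prior * entropy(p) - (1 - prior) * entropy(q)
    return error, kolmogorov, bhattacharyya, shannon

# example with made-up distributions
print(classical_distinguishability([0.9, 0.1], [0.5, 0.5]))
```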