
Showing papers on "Symmetric probability distribution published in 1996"


Journal ArticleDOI
TL;DR: In one dimension, free field theory with a normalization constraint provides a tractable formulation of the problem, and generalizations to higher dimensions are discussed.
Abstract: Imagine being shown $N$ samples of random variables drawn independently from the same distribution. What can you say about the distribution? In general, of course, the answer is nothing, unless you have some prior notions about what to expect. From a Bayesian point of view one needs an a priori distribution on the space of possible probability distributions, which defines a scalar field theory. In one dimension, free field theory with a normalization constraint provides a tractable formulation of the problem, and we discuss generalizations to higher dimensions.

106 citations


Posted Content
TL;DR: In this paper, an unbiased random walk along the E axis with long-range, power-law decaying tails was proposed to measure the degeneracy g(E) directly, without thermodynamic constraints; the method is exact for the 1D Ising ferromagnet.
Abstract: The Ferrenberg-Swendsen histogram method is based on the Boltzmann probability distribution, which has exponentially decaying tails. Thus, it gives accurate measurements only within a narrow window around the simulated temperature. The larger the system, the narrower this window, and the worse the performance of this method. We present a quite different approach, defining a non-biased random walk along the E axis with long-range, power-law decaying tails, and measuring directly the degeneracy g(E), without thermodynamic constraints. Our arguments are general (model independent), and the method is shown to be exact for the 1D Ising ferromagnet. Also for the 2D Ising ferromagnet, our numerical results for different thermodynamic quantities agree quite well with exact expressions.
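The paper's random walk along E is closely related to later flat-histogram schemes. As a rough illustration of measuring g(E) directly, here is a minimal Wang-Landau-style sketch for the 1D Ising ferromagnet; the update schedule and all parameters are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
N = 32                                   # spins, periodic boundary, J = 1
spins = rng.choice([-1, 1], size=N)
E = -int(np.sum(spins * np.roll(spins, 1)))

# allowed energies: E = -N + 2b with an even number b of broken bonds
levels = list(range(-N, N + 1, 4))
log_g = {e: 0.0 for e in levels}         # running estimate of ln g(E)

f = 1.0                                  # modification factor for ln g
while f > 1e-4:
    for _ in range(200 * N):
        i = rng.integers(N)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        # accept with probability min(1, g(E)/g(E + dE)): a flat walk in E
        if np.log(rng.random()) < log_g[E] - log_g[E + dE]:
            spins[i] *= -1
            E += dE
        log_g[E] += f                    # penalize the visited level
    f /= 2                               # crude schedule (no flatness test)

# compare with the exact degeneracy g(E) = 2*C(N, b), b broken bonds
for b in range(0, N + 1, 2):
    e = -N + 2 * b
    exact = np.log(2 * comb(N, b))
    est = log_g[e] - log_g[-N] + np.log(2.0)   # normalize so g(-N) = 2
    print(e, round(exact, 2), round(est, 2))
```

For the periodic 1D chain the exact degeneracy is available in closed form, which is the sense in which random-walk methods of this kind can be validated exactly on this model.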

89 citations


Journal ArticleDOI
TL;DR: In this article, a procedure was developed to generate a non-Gaussian stationary stochastic process from knowledge of its first-order probability density and its spectral density; it is applicable to an arbitrary probability density if the spectral density is of a low-pass type, and to a large class of probability densities if the spectral density is of a narrow-band type with its peak located at a nonzero frequency.
Abstract: A procedure is developed to generate a non-Gaussian stationary stochastic process with the knowledge of its first-order probability density and the spectral density. The procedure is applicable to an arbitrary probability density if the spectral density is of a low-pass type, and to a large class of probability densities if the spectral density is of a narrow-band type, with its peak located at a nonzero frequency. © 1996 The American Physical Society.
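A standard route to such processes is a memoryless translation of a Gaussian process, which imposes the first-order density exactly but distorts the spectrum; the spectral restrictions discussed above govern when that distortion can be compensated. A minimal sketch, with an assumed low-pass spectrum and an exponential target marginal (both chosen only for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, dt = 2**14, 1.0

# assumed low-pass spectrum for the underlying Gaussian process
freqs = np.fft.rfftfreq(n, dt)
S = np.exp(-(freqs / 0.05) ** 2)

# spectral synthesis: fixed amplitudes, random phases -> approximately Gaussian
phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
g = np.fft.irfft(np.sqrt(S) * np.exp(1j * phases), n)
g /= g.std()                              # unit-variance Gaussian signal

# memoryless translation u = F^{-1}(Phi(g)) gives the target marginal exactly
# (here exponential); matching the *output* spectrum to a prescribed target is
# the harder step the paper's procedure addresses
u = stats.expon.ppf(stats.norm.cdf(g))
```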

81 citations


Journal ArticleDOI
TL;DR: In this article, a general formula is derived for the probability density function (PDF) of fluctuating physical quantities measured in any stationary or statistically homogeneous process, relating the PDF to two conditional averages taken when the fluctuating quantity is at a given value.
Abstract: A general formula is derived for the probability density function (PDF) of fluctuating physical quantities measured in any stationary or statistically homogeneous process. For stationary processes, the formula relates the PDF to two conditional averages involving a general function of the quantity and its time derivatives: the average of the time derivative of this function, and the average of the time derivative of the quantity, both taken when the fluctuating quantity is at a given value. A previous result by Pope and Ching [Phys. Fluids A 5, 1529 (1993)] is a special case of this general formula when the function is chosen to be the time derivative of the fluctuating quantity. An analogous formula is obtained for the PDF of fluctuating physical quantities measured in statistically homogeneous processes, with spatial derivatives in place of time derivatives. © 1996 The American Physical Society.
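For reference, the Pope-Ching special case cited above takes, as best I can reconstruct it, the form

$$ P(x) \;\propto\; \frac{1}{\langle \dot{x}^{2} \,|\, x \rangle}\, \exp\!\left( \int^{x} \frac{\langle \ddot{x} \,|\, x' \rangle}{\langle \dot{x}^{2} \,|\, x' \rangle}\, dx' \right), $$

where $\langle \cdot \,|\, x \rangle$ denotes a conditional average taken when the fluctuating quantity equals $x$; the general formula of the paper replaces $\dot{x}$ by the time derivative of an arbitrary function of the quantity.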

31 citations


Journal Article
TL;DR: The main result of the present paper ensures that, for every $p \in (1, \infty)$, the square root of the corresponding divergence defines a distance on the set of probability distributions.
Abstract: The class $I_{f_p}$, $p \in (1, \infty]$, of $f$-divergences investigated in this paper generalizes an $f$-divergence introduced by the author in [9] and applied there, and by Reschenhofer and Bomze [11], in different areas of hypothesis testing. The main result of the present paper ensures that, for every $p \in (1, \infty)$, the square root of the corresponding divergence defines a distance on the set of probability distributions. It thus generalizes the corresponding statement for $p = 2$ made in connection with Example 4 by Kafka, Österreicher and Vincze in [6]. From the earlier literature on the subject, the maximal powers of $f$-divergences defining a distance are known for the following classes. For the class of Hellinger divergences, given in terms of $f_s(u) = 1 + u - (u^s + u^{1-s})$, $s \in (0,1)$, Csiszár and Fischer [3] already showed that the maximal power is $\min(s, 1-s)$. For the following two classes the maximal power coincides with their parameter. The class given in terms of $f_{(\alpha)}(u) = |1 - u^{\alpha}|^{1/\alpha}$, $\alpha \in (0,1]$, was investigated by Boekee [2]. This class and the previous one share the special case $s = \alpha = 1/2$, the famous case attributed to Matusita [8]. The class given by ...
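The shared special case $s = \alpha = 1/2$ is the Matusita (Hellinger) distance, the simplest instance of the "square root of a divergence is a metric" statement above. A small numerical sketch with made-up distributions:

```python
import numpy as np

def matusita_distance(p, q):
    """Hellinger/Matusita distance: the square root of the f-divergence
    with f(u) = (sqrt(u) - 1)^2, i.e. the s = alpha = 1/2 case."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.3, 0.3, 0.4])
r = np.array([0.6, 0.2, 0.2])

# metric properties: symmetry and the triangle inequality
assert np.isclose(matusita_distance(p, q), matusita_distance(q, p))
assert matusita_distance(p, r) <= matusita_distance(p, q) + matusita_distance(q, r)
```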

29 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the common practice of using the residual vector to estimate the variance can be preferable to using the known value of the variance, and sufficient conditions are given on the spherical distributions under which this paradox occurs.

28 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide representations for geometric stable probability densities in terms of fast converging integrals.

16 citations


Journal ArticleDOI
TL;DR: In this article, single and double compressed-limit sequential probability ratio tests (SPRT) and cumulative sum (CUSUM) control charts were designed to detect one-sided mean shifts in a symmetric probability distribution.
Abstract: Methodology is presented for the design of single and double compressed-limit sequential probability ratio tests (SPRT) and cumulative sum (CUSUM) control charts to detect one-sided mean shifts in a symmetric probability distribution. We also show how to evaluate the average run length properties with the fast initial response (FIR) feature. The resulting CUSUM plans have a simple scoring procedure, and are extremely simple to derive and implement. The use of two compressed-limit gauges is more efficient than a single compressed-limit gauge. In the case of SPRTs, the use of two compressed limit gauges minimizes the average sampling number required for specified operating characteristics. In the case of CUSUM, the gain in efficiency reduces the out-of-control average run length for a given in-control average run length.
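As a rough illustration of the idea (not the paper's optimized designs), a one-sided CUSUM driven by a single compressed-limit gauge might look like the following sketch, where the gauge position t_gauge, reference value k, and decision interval h are all assumed values:

```python
import numpy as np

rng = np.random.default_rng(2)

def cusum_run_length(mu_shift, t_gauge=0.5, k=0.4, h=4.0):
    """One-sided CUSUM scored from a single compressed-limit (pass/fail) gauge.
    t_gauge, k, h are illustrative choices, not the paper's optimized designs."""
    s, n = 0.0, 0
    while s < h:
        x = rng.normal(mu_shift, 1.0)        # symmetric (normal) observations
        score = 1.0 if x > t_gauge else 0.0  # the gauge records only exceedance
        s = max(0.0, s + score - k)          # standard one-sided CUSUM update
        n += 1
    return n

# average run lengths: long in control, short after a one-sided mean shift
arl0 = np.mean([cusum_run_length(0.0) for _ in range(200)])
arl1 = np.mean([cusum_run_length(1.0) for _ in range(200)])
print(f"in-control ARL ~ {arl0:.0f}, shifted ARL ~ {arl1:.0f}")
```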

14 citations



Journal ArticleDOI
01 Aug 1996-EPL
TL;DR: In this paper, the authors showed that an additive Poissonian white shot noise can induce a macroscopic current of a dissipative particle in a periodic potential, even in the absence of spatial asymmetry of the potential.
Abstract: In the paper by J. Łuczka et al. (Europhys. Lett., 31 (1995) 431), the authors showed by a rigorous calculation that an additive Poissonian white shot noise can induce a macroscopic current of a dissipative particle in a periodic potential, even in the absence of spatial asymmetry of the potential. We argue that their central result can easily be attributed to the spatially broken symmetry of the probability distribution of the additive noise, unlike the similar result caused by chaotic noise, which has a symmetric probability distribution (J. Phys. Soc. Jpn., 63 (1994) 2014).

6 citations



Book ChapterDOI
01 Jan 1996
TL;DR: In this article, the authors provide estimates of the closeness between probability measures defined on $\mathbb{R}^n$ that have the same marginals in a finite number of arbitrary directions, and show that the probability laws get closer, in a metric that metrizes the weak topology, as the number of coinciding marginals increases.
Abstract: We provide estimates of the closeness between probability measures defined on $\mathbb{R}^n$ which have the same marginals in a finite number of arbitrary directions. Our estimates show that the probability laws get closer, in a metric which metrizes the weak topology, as the number of coinciding marginals increases. Our results offer a solution to the computer tomography paradox stated by Gutmann, Kemperman, Reeds, and Shepp (1991).

Journal ArticleDOI
TL;DR: In this paper, a stochastic theory of charge exchange fluctuations in ion transport is introduced based on a master equation for the probability of finding ions in appropriate phase space locations, which is solved exactly in the binary charge state case.
Abstract: A stochastic theory of charge exchange fluctuations in ion transport is introduced based on a master equation for the probability of finding ions in appropriate phase space locations. When energy losses are absent, a forward master equation is obtained which is solved exactly in the binary charge state case. The equilibrium probability distribution is shown to be a binomial distribution and differs from that at small depths. In the more general case a backward master equation is derived for the multi-point probability distribution function, from which an equation for the average charge state is obtained.

Journal ArticleDOI
TL;DR: In this article, the correlation integral for arbitrarily distributed random numbers is investigated, and a normalization procedure is proposed that improves the scaling behaviour; the procedure can also be extended to the analysis of data from chaotic systems.
Abstract: In calculating the correlation dimension of chaotic time series, frequently the scaling behaviour is unsatisfactory. This is due to the influence of inhomogeneities in the probability distribution, such as boundary and lacunarity effects. To quantify these influences, the correlation integral for arbitrarily distributed random numbers is investigated. An analytical expression for the correlation integral is derived from a histogram of the probability. Different types of inhomogeneities in the probability distribution are then investigated. As an application, a normalization procedure is proposed that improves the scaling behaviour. It can also be extended to the analysis of data from chaotic systems.
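A minimal sketch of the object being studied: the correlation integral C(r) for i.i.d. uniform random numbers, where the local slope falls below 1 at large r, exhibiting the boundary effect the paper quantifies (the sample size and r-range are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.random(2000)        # arbitrarily distributed random numbers (here uniform)

def correlation_integral(x, r):
    """C(r): fraction of distinct pairs with distance below r."""
    d = np.abs(x[:, None] - x[None, :])
    n = len(x)
    return (np.count_nonzero(d < r) - n) / (n * (n - 1))   # exclude self-pairs

rs = np.logspace(-3, -0.3, 15)
C = np.array([correlation_integral(x, r) for r in rs])
slopes = np.diff(np.log(C)) / np.diff(np.log(rs))
# for uniform data C(r) = 2r - r^2, so the local slope drops below the true
# dimension 1 as r approaches the size of the bounded support
print(np.round(slopes, 3))
```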

Journal ArticleDOI
Mark R. Bell
TL;DR: An analytical procedure is presented that gives the exact expression for the probability of interest for any particular case of m contiguous detections, and the implications for binary integration in radar and electronic warfare problems are considered.
Abstract: The probability of detecting m or more pulses contiguously (that is, in a row) from a pulse train of n pulses is determined when the detection of each pulse is an independent Bernoulli trial with probability p. While a general closed-form expression for this probability is not known, we present an analytical procedure that gives the exact expression for the probability of interest in any particular case. We also present simple asymptotic expressions for these probabilities and develop bounds on the probability that the number of pulses that must be observed before m contiguous detections is greater than or less than some particular number. We consider the implications for binary integration in radar and electronic warfare problems.
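Although no general closed form is known, the probability itself is easy to compute exactly by dynamic programming over the length of the current detection streak; this sketch (with a brute-force check) illustrates the quantity, not the authors' procedure:

```python
import itertools

def prob_m_contiguous(n, m, p):
    """P(at least m contiguous detections among n independent Bernoulli(p)
    pulses), by dynamic programming over the current detection streak."""
    state = [0.0] * m          # state[k]: P(no m-run yet, trailing streak = k)
    state[0] = 1.0
    for _ in range(n):
        new = [0.0] * m
        new[0] = (1.0 - p) * sum(state)    # a missed detection resets the streak
        for k in range(m - 1):
            new[k + 1] = p * state[k]      # a detection extends the streak
        state = new                        # a streak reaching m is absorbed
    return 1.0 - sum(state)

def has_run(w, m):
    streak = 0
    for b in w:
        streak = streak + 1 if b else 0
        if streak >= m:
            return True
    return False

# brute-force check over all 2^n pulse trains for a small case
n, m, p = 10, 3, 0.6
brute = sum(p ** sum(w) * (1 - p) ** (n - sum(w))
            for w in itertools.product((0, 1), repeat=n) if has_run(w, m))
assert abs(prob_m_contiguous(n, m, p) - brute) < 1e-12
```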


Proceedings ArticleDOI
TL;DR: A class of Bayesian, binary hypothesis-testing problems relevant to the classification of targets in the presence of pose uncertainty is analyzed, and an approximation is devised based on the observation that both the numerator and denominator of the likelihood ratio test statistic consist of sums of lognormal random variables.
Abstract: We analyze a class of Bayesian, binary hypothesis-testing problems relevant to the classification of targets in the presence of pose uncertainty. When hypothesis H1 is true, we observe one of N1 possible complex-valued signal vectors, immersed in additive, white complex Gaussian noise; when hypothesis H2 occurs, we observe one of N2 other possible signal vectors, again immersed in noise. Given prior probabilities for H1 and H2, and also prior conditional probabilities for the presence of each of the signal vectors, the problem is to determine both a decision rule that minimizes the error probability and the associated minimal error probability. The optimal decision rule is well known to be a likelihood ratio test having a straightforward analytical form; however, the performance of this optimal test is intractable analytically, and thus approximations are required to calculate the probability of error. We devise an approximation based on the observation that both the numerator and denominator of the likelihood ratio test statistic consist of sums of lognormal random variables. Previous work has shown that such sums are well approximated as themselves having a lognormal distribution; we exploit this fact to obtain a simple, approximate error probability expression. For a specific problem, we then compare the resulting error probability numbers with ones obtained via Monte Carlo simulation, demonstrating good agreement between the two methods. © 1996 SPIE, The International Society for Optical Engineering.
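The "sum of lognormals is approximately lognormal" step can be made concrete by moment matching; below is a Fenton-Wilkinson-style sketch (the paper may use a different fit, and the parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

def fenton_wilkinson(mu, sigma):
    """Match the mean and variance of a sum of independent lognormals
    exp(N(mu_i, sigma_i^2)) with a single lognormal exp(N(mu_s, sigma_s^2))."""
    mean = np.sum(np.exp(mu + sigma ** 2 / 2.0))
    var = np.sum((np.exp(sigma ** 2) - 1.0) * np.exp(2.0 * mu + sigma ** 2))
    sigma_s2 = np.log(1.0 + var / mean ** 2)
    mu_s = np.log(mean) - sigma_s2 / 2.0
    return mu_s, np.sqrt(sigma_s2)

mu = np.array([0.0, 0.2, -0.1])       # illustrative component parameters
sigma = np.array([0.5, 0.4, 0.6])
mu_s, sigma_s = fenton_wilkinson(mu, sigma)

# Monte Carlo check: the matched lognormal reproduces the mean of the sum
samples = np.exp(rng.normal(mu, sigma, size=(200_000, 3))).sum(axis=1)
print(samples.mean(), np.exp(mu_s + sigma_s ** 2 / 2.0))
```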