
Showing papers on "Probability density function published in 1968"


Journal ArticleDOI
TL;DR: It is shown that uniform quantizing yields an output entropy which asymptotically is smaller than that for any other quantizer, independent of the density function or the error criterion, and the discrepancy between the entropy of the uniform quantizer and the rate distortion function apparently lies with the inability of the optimal quantizing shapes to cover large dimensional spaces without overlap.
Abstract: It is shown, under weak assumptions on the density function of a random variable and under weak assumptions on the error criterion, that uniform quantizing yields an output entropy which asymptotically is smaller than that for any other quantizer, independent of the density function or the error criterion. The asymptotic behavior of the rate distortion function is determined for the class of νth-law loss functions, and the entropy of the uniform quantizer is compared with the rate distortion function for this class of loss functions. The extension of these results to the quantizing of sequences is also given. It is shown that the discrepancy between the entropy of the uniform quantizer and the rate distortion function apparently lies with the inability of the optimal quantizing shapes to cover large dimensional spaces without overlap. A comparison of the entropies of the uniform quantizer and of the minimum-alphabet quantizer is also given.

522 citations
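The asymptotic claim in this entry is the classical high-rate relation H ≈ h(X) − log2(Δ) between the output entropy of a uniform quantizer with step Δ and the differential entropy h(X) of the source. A minimal numerical sketch, assuming a Gaussian source and Δ = 0.1 (both choices are illustrative, not taken from the paper):

```python
import math
import random
from collections import Counter

# Illustrative check of the high-rate relation H ~ h(X) - log2(delta)
# for a uniform quantizer applied to a Gaussian source (assumed example).
random.seed(1)
delta = 0.1                       # quantizer step size (assumed)
n = 200_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]

# Output entropy of the uniformly quantized source, in bits
counts = Counter(math.floor(x / delta) for x in samples)
entropy_bits = -sum((c / n) * math.log2(c / n) for c in counts.values())

# Differential entropy of N(0,1) in bits, minus log2 of the step size
h_gauss = 0.5 * math.log2(2 * math.pi * math.e)
asymptotic = h_gauss - math.log2(delta)

print(entropy_bits, asymptotic)   # the two values should be close
```

For small Δ the empirical output entropy and the asymptotic formula agree to well within a hundredth of a bit.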


Journal ArticleDOI
TL;DR: In this paper, the authors modify the usual rectangular lattice by allowing each row of vertical bonds to vary randomly from row to row with a prescribed probability function, and they find that the logarithmic singularity of Onsager's lattice is smoothed out into a function which at T c is infinitely differentiable but not analytic.
Abstract: Recent experiments demonstrate that at the Curie temperature the specific heat may be a smooth function of the temperature. We propose that this effect can be due to random impurities and substantiate our proposal by a study of an Ising model containing such impurities. We modify the usual rectangular lattice by allowing each row of vertical bonds to vary randomly from row to row with a prescribed probability function. In the case that this probability is a particular distribution with a narrow width, we find that the logarithmic singularity of Onsager's lattice is smoothed out into a function which at ${T}_{c}$ is infinitely differentiable but not analytic. This function is expressible in terms of an integral involving Bessel functions and is computed numerically.

271 citations


Journal ArticleDOI
TL;DR: In this paper, linear and nonlinear optimal filters with limited memory length are developed, where the filter output is the conditional probability density function and the conditional mean and covariance matrix where the conditioning is only on a fixed amount of most recent data.
Abstract: Linear and nonlinear optimal filters with limited memory length are developed. The filter output is the conditional probability density function and, in the linear Gaussian case, is the conditional mean and covariance matrix where the conditioning is only on a fixed amount of most recent data. This is related to maximum-likelihood least-squares estimation. These filters have application in problems where standard filters diverge due to dynamical model errors. This is demonstrated via numerical simulations.

212 citations
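A minimal sketch of the limited-memory idea in the linear Gaussian case: for a constant scalar state with a flat prior, conditioning only on the L most recent measurements makes the conditional mean the window average, with variance σ²/L. All numbers below are illustrative assumptions, not the paper's example:

```python
import random
from collections import deque

# Limited-memory estimate sketch (assumed setup): estimate a constant
# scalar state from noisy measurements, conditioning only on the most
# recent window_len samples.
random.seed(2)
true_state = 3.0       # assumed constant state
sigma = 0.5            # measurement noise std (assumed)
window_len = 25        # fixed memory length

window = deque(maxlen=window_len)   # old data falls out automatically
estimates = []
for k in range(500):
    z = true_state + random.gauss(0.0, sigma)   # noisy measurement
    window.append(z)
    estimates.append(sum(window) / len(window)) # conditional mean on window

final_estimate = estimates[-1]
print(final_estimate)
```

Because the estimate forgets data beyond the window, it would keep tracking even if the "constant" state slowly drifted, which is exactly the divergence-avoidance motivation given in the abstract.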


Journal ArticleDOI
TL;DR: In this paper, a general strength theory of unidimensional absolute and comparative judgments is described in detail, with particular emphasis on criterion variance in an absolute-judgment task and its relation to criterion variance for a comparative task.

143 citations


Journal ArticleDOI
TL;DR: The problem of estimating from noisy measurement data the state of a dynamical system described by non-linear difference equations is considered and a Bayesian approach is suggested in which the density function for the state conditioned upon the available measurement data is computed recursively.
Abstract: The problem of estimating from noisy measurement data the state of a dynamical system described by non-linear difference equations is considered. The measurement data have a non-linear relation with the state and are assumed to be available at discrete instants of time. A Bayesian approach to the problem is suggested in which the density function for the state conditioned upon the available measurement data is computed recursively. The evolution of the a posteriori density function cannot be described in a closed form for most systems; the class of linear systems with additive, white gaussian noise provides the major exception. Thus, the problem of non-linear filtering can be viewed as essentially a problem of approximating this density function. For linear systems with additive, white gaussian noise, the a posteriori density is gaussian. The results for linear systems are frequently applied to non-linear systems by introducing linear perturbation theory. Then, the linear equations and gaussian a posterio...

130 citations
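The recursion described in this abstract can be sketched with a point-mass (grid) approximation of the conditional density: a prediction step convolves the density with the process-noise law, and a Bayes step multiplies by the measurement likelihood. The nonlinear dynamics f(x) = x + 0.5 sin x and all noise levels below are made-up illustrations, not the paper's system:

```python
import math
import random

# Grid-based (point-mass) sketch of recursive Bayesian filtering for a
# scalar nonlinear system; dynamics and noise values are assumed examples.
random.seed(3)

def f(x):                      # assumed nonlinear state transition
    return x + 0.5 * math.sin(x)

def gauss_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

q, r = 0.2, 0.3                        # process / measurement noise std
grid = [i * 0.1 for i in range(-80, 81)]
density = [gauss_pdf(x, 0.5, 1.0) for x in grid]
total = sum(density)
density = [p / total for p in density]  # normalized point masses

x_true = 0.5
for _ in range(20):
    x_true = f(x_true) + random.gauss(0.0, q)
    z = x_true + random.gauss(0.0, r)
    # prediction: propagate each point mass through the dynamics + noise
    pred = [sum(pj * gauss_pdf(xi, f(xj), q) for xj, pj in zip(grid, density))
            for xi in grid]
    # measurement update: multiply by the likelihood and renormalize
    post = [pi * gauss_pdf(z, xi, r) for xi, pi in zip(grid, pred)]
    total = sum(post)
    density = [p / total for p in post]

mean_est = sum(x * p for x, p in zip(grid, density))
print(mean_est, x_true)
```

The grid recursion is exactly the approximation problem the abstract describes: the a posteriori density has no closed form, so it is carried numerically.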


Journal ArticleDOI
TL;DR: In this paper, it is shown that suitable linear and nonlinear operations on the output of a Gaussian signal generator can produce signals with a desired probability density function and power density spectrum, with acceptable accuracy for a range of useful cases, and that the necessary operations may be realized using fairly simple analog computer components.
Abstract: In the testing of physical systems, a random signal having a desired probability density function and power density spectrum may be required. A method is presented for generation of signals which, for a range of useful cases, can meet such specifications with acceptable accuracy. It is shown that suitable linear and nonlinear operations on the output of a Gaussian signal generator will, in many cases, give the desired signals. The necessary operations may be realized using fairly simple analog computer components. A number of examples are given of the type of signals which can be generated by this method. Some examples which demonstrate the limitations of the method are also given.

70 citations
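One standard realization of the density-shaping part of this method is the static nonlinearity y = F⁻¹(Φ(x)) applied to a Gaussian signal, which yields the desired first-order density F′. The exponential target below is an assumed example; shaping the power spectrum would be handled by a preceding linear filter, which is not shown:

```python
import math
import random
import statistics

# Memoryless-transformation sketch: pass Gaussian samples through
# y = F_inv(Phi(x)) so the output has the target density (unit-mean
# exponential here, as an assumed example).
random.seed(4)

def phi(x):                               # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exp_inv_cdf(u, lam=1.0):              # inverse CDF of Exp(lam)
    return -math.log(1.0 - u) / lam

gaussian_signal = [random.gauss(0.0, 1.0) for _ in range(100_000)]
shaped_signal = [exp_inv_cdf(phi(x)) for x in gaussian_signal]

mean_out = statistics.mean(shaped_signal)
print(mean_out)                           # should be near 1.0 for Exp(1)
```

The transformation is monotone, so it shapes the amplitude density without reordering the signal in time.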


Journal ArticleDOI
01 Oct 1968
TL;DR: In this paper, the integral for the average scattered power from a rough surface obtained from physical optics is shown to be proportional to the joint probability density function for the surface slopes in the high-frequency limit.
Abstract: The integral for the average scattered power from a rough surface obtained from physical optics is shown to be proportional to the joint probability density function for the surface slopes in the high-frequency limit.

66 citations


01 Jan 1968
TL;DR: Abstract : Contents: Preprocessing of data; Digital filtering; Fourier series and Fourier transform computations; Correlation function computations'; Spectral density function computation; Frequency response function and coherence function Computations; Probability density function computation; Nonstationary processes; and Test case and examples.
Abstract: : Contents: Preprocessing of data; Digital filtering; Fourier series and Fourier transform computations; Correlation function computations; Spectral density function computations; Frequency response function and coherence function computations; Probability density function computations; Nonstationary processes; and Test case and examples.

65 citations


Journal ArticleDOI
TL;DR: It is shown that, over a wide range of detection and false-alarm probabilities, the performances of the two tests do not differ significantly and the new results obtained here for the distribution of the sum of two or more Pareto-distributed variables are of considerable general interest.
Abstract: In a coherent search radar, the pulse-to-pulse Doppler shift of a signal is generally not known a priori. Given the distribution of this parameter, the best test variable for detection is the average of the likelihood ratio with respect to target Doppler frequency. Most coherent search radars employ a maximum-likelihood ratio detector, that is, a bank of independent Doppler filters, for detection. The average-likelihood and maximum-likelihood tests are compared here for a target with the Rayleigh amplitude distribution. It is shown that, over a wide range of detection and false-alarm probabilities, the performances of the two tests do not differ significantly. For this target model, the likelihood ratio has the Pareto distribution, which arises in some statistical problems in economics. The new results obtained here for the distribution of the sum of two or more Pareto-distributed variables are of considerable general interest.

49 citations
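The Pareto facts invoked here are easy to probe by Monte Carlo. A sketch assuming unit-scale Pareto variables with shape α = 2 (illustrative values, not the paper's): samples are drawn by inverse-CDF sampling, and the tail of the sum of two independent copies is compared with the classical heavy-tail approximation P(X₁ + X₂ > t) ≈ 2 t^(−α):

```python
import random
import statistics

# Monte Carlo sketch of Pareto sampling and the heavy tail of Pareto sums.
# alpha and the threshold t are assumed illustrative values.
random.seed(5)
alpha = 2.0
n = 400_000

# inverse-CDF sampling; 1 - random() lies in (0, 1], avoiding a zero base
x1 = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]
x2 = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

sample_median = statistics.median(x1)      # theory: 2**(1/alpha) ~ 1.414

t = 50.0
tail_sum = sum(1 for a, b in zip(x1, x2) if a + b > t) / n
tail_approx = 2.0 * t ** (-alpha)          # one-big-jump approximation
print(sample_median, tail_sum, tail_approx)
```

The empirical tail of the sum sits close to twice the single-variable tail, the signature heavy-tail behaviour that makes the exact distribution of Pareto sums worth working out.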



Journal ArticleDOI
TL;DR: In this article, a series solution to the first-passage problem in random vibration is derived which is valid for any type of response process and for both single and double-sided barriers.

Journal ArticleDOI
M. Konrad1
TL;DR: It is shown that the weighting function characterizes both time variant and invariant networks with respect to the determination of the errors and special attention has been given to base line error reduction.
Abstract: Pulse shaping is considered with the aim of minimizing the random errors in detector pulse measurements. Some properties of time-variant and time-invariant linear networks are considered. It is shown that the weighting function characterizes both time-variant and time-invariant networks with respect to the determination of the errors. The errors are classified according to their origin and character. Errors due to noise, pile-up and base-line shift are considered. Equations for their variances and average values are derived using the weighting function and the statistical properties of random impulses. The weighting function characteristics, such as shape and time scale, are considered with respect to the resulting errors. The characteristics of the weighting function required to minimize or reduce particular errors are discussed. Particular attention has been given to base-line error reduction.

Journal ArticleDOI
James R. Bell1
TL;DR: This algorithm is one of a class of normal deviate generators, which the authors call "chi-squared projections"; it uses von Neumann rejection to generate sin(θ) and cos(θ) without generating θ explicitly [3], which significantly enhances speed by eliminating the calls to the sin and cos functions.
Abstract: procedure norm (D1, D2); real D1, D2; comment This procedure generates pairs of independent normal random deviates with mean zero and standard deviation one. The output parameters D1 and D2 are normally distributed on the interval (−∞, +∞). The method is exact even in the tails. This algorithm is one of a class of normal deviate generators, which we shall call "chi-squared projections" [1, 2]. An algorithm of this class has two stages. The first stage selects a random number L from a χ₂²-distribution. The second stage calculates the sine and cosine of a random angle θ. The generated normal deviates are given by L sin(θ) and L cos(θ). The two stages can be altered independently. In particular, as better χ₂² random generators are developed, they can replace the first stage. (The negative exponential distribution is the same as that of χ₂².) The fastest exact method previously published is Algorithm 267 [4], which includes a comparison with earlier algorithms. It is a straight chi-squared projection. Our algorithm differs from it by using von Neumann rejection to generate sin(θ) and cos(θ) without generating θ explicitly [3]. This significantly enhances speed by eliminating the calls to the sin and cos functions. The author wishes to express his gratitude to Professor George Forsythe for his help in developing the algorithm. REFERENCES 1. Box, G. E. P., and Muller, M. E. A note on the generation of random normal deviates. Ann. Math. Stat. 29 (1958), 610. 2. Muller, M. E. A comparison of methods for generating normal deviates on digital computers. J. ACM 6 (July 1959), 376-383. 3. von Neumann, J. Various techniques used in connection with random digits. In Nat. Bur. of Standards Appl. Math. Ser. 12, 1951, p. 36. 4. Pike, M. C. Algorithm 267, Random Normal Deviate. Comm. ACM 8 (Oct. 1965), 606.; comment R is any parameterless procedure returning a random number uniformly distributed on the interval from zero to one.
A suitable procedure is given by Algorithm 266, Pseudo-Random Numbers [Comm. ACM, 8 (Oct. 1965), 605] if one chooses a = 0, b = 1, and initializes y to some large odd number, such as y = 13421773.; begin real X, Y, XX, YY, S, L;
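The ALGOL listing above is OCR-damaged, but the construction survives in what is now usually called the Marsaglia polar method, which is closely related: a point (X, Y) is drawn uniformly in the unit disc by rejection, so X/√S and Y/√S play the roles of cos θ and sin θ without any trigonometric calls, while the logarithm supplies the χ₂² (negative exponential) stage. A hedged modern sketch:

```python
import math
import random
import statistics

# Polar-method sketch of the "chi-squared projection" idea above:
# rejection replaces explicit sin/cos, log supplies the radial stage.
random.seed(6)

def norm_pair():
    while True:
        x = 2.0 * random.random() - 1.0
        y = 2.0 * random.random() - 1.0
        s = x * x + y * y
        if 0.0 < s < 1.0:                    # von Neumann rejection step
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return x * factor, y * factor    # two independent N(0,1) deviates

deviates = []
for _ in range(50_000):
    d1, d2 = norm_pair()
    deviates.extend((d1, d2))

mean_d = statistics.mean(deviates)
std_d = statistics.pstdev(deviates)
print(mean_d, std_d)
```

The method is exact (not an approximation in the tails), and the rejection loop accepts a fraction π/4 of candidate points.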

Journal ArticleDOI
TL;DR: In particular, the A.R.E. of this test to best tests for gamma distributions is shown to be maximized when the gamma distribution shape parameter is between five and six.
Abstract: Limiting Pitman efficiencies of the sum of squared ranks test relative to best tests for scale alternative detection are obtained for some specific continuous asymmetrical distributions which have mass confined to the positive axis. In particular the A.R.E. of this test to best tests for gamma distributions is maximized (.9372) when the gamma distribution shape parameter is between five and six. Also this test is an asymptotic locally most powerful rank test for detecting scale alternatives with respect to the distribution formed by doubling the density function of the t distribution with two degrees of freedom over its positive domain.

Journal ArticleDOI
TL;DR: In this paper, the Fokker-Planck equations governing the stationary probability density function for two-degree-of-freedom systems were solved by representing the density function by a multiple series of Hermite polynomials, and the constants in the series expansion were determined by Galerkin's method.
Abstract: A one-degree-of-freedom system and a two-degree-of-freedom system containing displacement- and velocity-dependent nonlinearities subjected to stationary gaussian white noise excitation have been studied by the method of the Fokker-Planck equation. Nonlinearities have been represented by suitable polynomials. The Fokker-Planck equations governing the stationary probability density function for these systems have been solved by representing the density function by a multiple series of Hermite polynomials. The constants in the series expansion were determined by Galerkin's method. The analysis is developed for systems containing nonlinearities described by suitable polynomials in the velocity- and displacement-dependent forces. Comparisons were made between series and exact solutions for those special cases for which exact solutions are known.

Journal ArticleDOI
01 Jun 1968
TL;DR: A solution for the probability of double errors in differentially coherent phase shift keyed (PSK) communication systems is presented which is simpler than previous analyses and which focuses on the cause of double errors.
Abstract: A solution for the probability of double errors in differentially coherent phase shift keyed (PSK) communication systems is presented which is simpler than previous analyses and which focuses on the cause of double errors. Results of a digital computer simulation are given which substantiate the theory.

Journal ArticleDOI
TL;DR: In this paper, the Laplace transformed Fokker-planck equation is used to obtain the transformed transition probability density and moments of first order systems governed by stochastic differential equations of the form dx dt + f(x) = n(t), where f is piecewise linear and n is stationary Gaussian white noise.
Abstract: The Fokker-Planck equation is used to develop a general method for finding the spectral density and other properties of first order systems governed by stochastic differential equations of the form dx dt + f(x) = n(t), where f(x) is piecewise linear and n(t) is stationary Gaussian white noise. For such systems, it is shown how the Laplace transformed Fokker-Planck equation can be solved to obtain the transformed transition probability density. By manipulation of this equation and its adjoint, an expression is derived for the spectral density and moments in terms of the transformed transition density and its derivatives. The results in several special cases are presented.
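For the simplest piecewise-linear case f(x) = a·x the system is the Ornstein-Uhlenbeck process, whose stationary variance D/(2a) and Lorentzian spectral density D/(a² + ω²) are classical. An Euler-Maruyama simulation sketch (all parameter values are assumed illustrations) checks the stationary variance:

```python
import math
import random
import statistics

# Euler-Maruyama sketch for dx/dt + a*x = n(t) with white gaussian noise
# of intensity D: stationary variance should approach D / (2a).
random.seed(7)
a, D = 1.0, 2.0        # assumed illustrative parameters
dt = 0.01
x = 0.0
xs = []
for step in range(200_000):
    x += (-a * x) * dt + math.sqrt(D * dt) * random.gauss(0.0, 1.0)
    if step > 1000:                  # discard the initial transient
        xs.append(x)

var_est = statistics.pvariance(xs)
print(var_est)                       # theory: D / (2a) = 1.0
```

A piecewise-linear f(x) with several slopes only changes the drift line in the loop; the transform-domain machinery in the paper is what delivers the spectral density analytically rather than by simulation.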

01 Sep 1968
TL;DR: First, a general expression of multi-dimensional probability density function in the form of orthogonal series such as statistical Hermite expansion is introduced, which is more general than the well-known expansion expression due to Gram Charlier, because it includes the latter.
Abstract: When many correlative random physical proccesses are passed through nonlinear circuit elements such as detectors or rectifiers and the output random fluctuations are considered, the probability variables defined over positive region are fundamental quantities of engineering interest. First, a general expression of multi-dimensional probability density function in the form of orthogonal series such as statistical Hermite expansion is introduced. This probability expression is more general than the well-known expansion expression due to Gram Charlier, because it includes the latter. Then, as the special case of interest when many correlative physical quantities fluctuating only in positive region are treated, an explicit representation of joint probability density function in the form of statistical Laguerre expansion series is also -presented. Further, it has been pointed out that the latter method using a statistical Laguerre expansion is closely related with the statistical Hermite expansion method under a quadratic nonlinear transformation. We must call our attention to the fact that the statistical meaning 0. e., the random property) is reflected in each expansion coefficient. Finally, the detailed experimental considerations enough to corroborate the above theories are given for the following two cases : (a) a quadratic rectifier with non-Gaussian random input, (b) a non-quadratic rectifier with Gaussian random input.

Journal ArticleDOI
01 Nov 1968
TL;DR: Instead of a histogram construction for the evaluation of an unknown PDF, the use of the uniform distribution of coverages of order statistics is proposed, which makes it possible to evaluate the differential entropy of a continuous random variable directly through the distances between neighboring ordered observations.
Abstract: Instead of a histogram construction for the evaluation of an unknown PDF, the use of the uniform distribution of coverages of order statistics is proposed. This estimation makes it possible to evaluate the differential entropy of a continuous random variable directly through the distances between neighboring ordered observations. On the basis of this evaluation, a new nonparametric test can be constructed.
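The spacing idea above can be sketched with the simplest one-spacing entropy estimator: the differential entropy is read off from the logarithms of distances between neighbouring ordered observations, with the Euler constant γ added as the known bias correction for log-spacings. The uniform test case below is an assumed example (its true differential entropy is 0 nats):

```python
import math
import random

# One-spacing entropy estimator sketch: h is estimated directly from
# distances between neighbouring ordered observations.
random.seed(8)
EULER_GAMMA = 0.5772156649015329

n = 20_000
samples = sorted(random.random() for _ in range(n))   # Uniform(0,1): h = 0
spacings = [samples[i + 1] - samples[i] for i in range(n - 1)]
h_est = (sum(math.log((n + 1) * s) for s in spacings) / (n - 1)) + EULER_GAMMA

print(h_est)    # true differential entropy of Uniform(0,1) is 0 nats
```

No histogram bin width is chosen anywhere, which is the point of the construction; m-spacing variants trade the γ correction for lower variance.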

Journal ArticleDOI
TL;DR: First- and second-order stochastic gradient algorithms are developed for suitably approximating the unknown density and distribution functions of a random vector from a sequence of independent samples.
Abstract: First- and second-order stochastic gradient algorithms are developed for suitably approximating the unknown density and distribution functions of a random vector from a sequence of independent samples. The mean-square-error criterion and the integral-square-error criterion are used in the approximations. The rates of convergence and the approximation error are also evaluated.
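A first-order stochastic-gradient recursion of the kind described can be sketched at a single point: the distribution function F(t) of a random variable is approximated from a stream of independent samples by the Robbins-Monro style update F ← F + aₙ(1{x ≤ t} − F). The scalar fixed-point version below is an assumed simplification of the paper's function-approximation algorithms:

```python
import random

# Stochastic-gradient sketch: approximate F(t) = P(X <= t) at t = 0 for
# X ~ N(0,1) from independent samples, with gains a_n = 1/n.
random.seed(9)
t = 0.0
F = 0.0
for n in range(1, 100_001):
    x = random.gauss(0.0, 1.0)
    indicator = 1.0 if x <= t else 0.0
    F += (1.0 / n) * (indicator - F)    # first-order recursion

print(F)    # should approach Phi(0) = 0.5
```

With gains 1/n the recursion reproduces the running empirical frequency exactly; other gain sequences trade convergence rate against tracking ability.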

Journal ArticleDOI
TL;DR: An upper bound is obtained on the probability density of the estimate of the parameter m when a nonlinear function s(t, m) is transmitted over a channel that adds Gaussian noise, and maximum likelihood or maximum a posteriori estimation is used.
Abstract: An upper bound is obtained on the probability density of the estimate of the parameter m when a nonlinear function s(t, m) is transmitted over a channel that adds Gaussian noise, and maximum likelihood or maximum a posteriori estimation is used. If this bound is integrated with a loss function, an upper bound on the average error is obtained. Nonlinear (below threshold) effects are included. The problem is viewed in a Euclidean space. Evaluation of the probability density can be reduced to integrating the probability density of the observation over part of a hyperplane. By bounding the integrand, and using a larger part of the hyperplane, an upper bound is obtained. The resulting bound on mean-square error is quite close for the cases calculated.

Journal ArticleDOI
TL;DR: In this article, the authors derived exact Bayesian confidence limits for the reliability of a redundant system of exponential subsystems, when subsystem tests are terminated at first failure, and derived the posterior probability density function of system reliability using the Mellin integral transform.
Abstract: Exact Bayesian confidence limits are derived for the reliability of a redundant system of exponential subsystems, when subsystem tests are terminated at first failure. The application is conceived for the treatment of a redundant system having extremely high reliability, and allows a monomial family of prior density functions which is conjugate when tests are terminated at first failure. The posterior probability density function of system reliability is derived using the Mellin integral transform. The inversion is accomplished by the method of residues. From the density function the distribution function is obtained which yields confidence limits on reliability by numerical inversion.

Journal ArticleDOI
TL;DR: This paper investigates the probability distribution of time to first emptiness in the spare parts problem for repairable components, where “r” spares are provided initially.
Abstract: This paper investigates the probability distribution of time to first emptiness in the spare parts problem for repairable components, where “r” spares are provided initially. Two cases are considered, namely, (i) when the component is constantly used, and (ii) when it is intermittently used. Expressions for the Laplace Transforms and the first two moments of the probability density functions of time to first emptiness are derived.

04 Oct 1968
TL;DR: In this article, the authors developed the theory for the detection of a steady signal in log-normal clutter by first using a single pulse and then by using the sum of N pulses integrated non-coherently.
Abstract: : Measurements of sea clutter using high-resolution radar indicate that the clutter-cross-section returns follow a log-normal probability density function more closely than the usually assumed Rayleigh law. This report develops the theory for the detection of a steady signal in log-normal clutter by first using a single pulse and then by using the sum of N pulses integrated noncoherently. Plots of the probability density of the envelope of the signal plus clutter show the function to be bimodal, an unexpected result. Curves are presented for the threshold bias, normalized to the median clutter voltage, versus the probability of false alarm for several values of the standard deviation sigma and for various values of N. Probability of detection curves are presented for sigma = 3, 6, and 9 dB, for N = 1, 3, 10, and 30 pulses, and for false alarm probabilities from 0.01 to 10^-8. The ratio of signal to median clutter required for detection increases markedly as sigma increases because of the highly skewed clutter density.
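The single-pulse threshold computation can be sketched directly: if the clutter is log-normal with spread σ (dB), the threshold T normalized to the median satisfies P_fa = Q(ln(T/median)/σ_ln), so T/median = exp(σ_ln · Q⁻¹(P_fa)). The dB-to-natural-log conversion and the parameter values below are illustrative assumptions, and a Monte Carlo check follows:

```python
import math
import random
import statistics

# Threshold-versus-false-alarm sketch for log-normal clutter, single pulse.
random.seed(10)
sigma_db = 6.0
sigma_ln = sigma_db * math.log(10.0) / 10.0   # power dB -> natural-log spread
p_fa = 1e-2                                    # assumed false-alarm probability

q_inv = statistics.NormalDist().inv_cdf(1.0 - p_fa)
threshold_over_median = math.exp(sigma_ln * q_inv)

# Monte Carlo check: fraction of clutter samples crossing the threshold
n = 200_000
hits = sum(1 for _ in range(n)
           if math.exp(random.gauss(0.0, sigma_ln)) > threshold_over_median)
fa_rate = hits / n
print(threshold_over_median, fa_rate)
```

The threshold normalized to the median grows exponentially with σ_ln, which is why the report finds the required signal-to-median-clutter ratio rising so sharply with sigma.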

Journal ArticleDOI
TL;DR: In this article, the authors proposed a probabilistic model which is a constantly applied stress and a Rayleigh distributed part strength, where the parameter of the Rayleigh distribution is allowed to increase in an exponential fashion with time, which produces the strength deterioration effect.
Abstract: The generalized stress-strength model which is prevalent in current literature is perhaps the closest that analysts have come to a general physical model. To obtain a failure density function and associated hazard function one must assume a certain probability distribution for the part strength and a particular amplitude distribution and frequency of occurrence distribution for the part stress. If one assumes a normal strength distribution and Poisson distributed stress occurrence times with normally distributed amplitudes, then this leads to an exponential failure density function and a constant hazard. Such a model is probably best suited for situations in which the part generally lasts a long time and only seems to fail when on occasion a large stress occurs. In many situations the failure of parts seems to fit a different pattern. The part is operated at nearly a constant stress level; however, the part strength gradually deteriorates with time. As time goes on the rate of deterioration should increase sharply as wear-out is reached and cause an increase in hazard. A probabilistic model which fits this hypothesis is a constantly applied stress and a Rayleigh distributed part strength. The parameter of the Rayleigh distribution is allowed to increase in an exponential fashion with time which produces the strength deterioration effect. Basically the failure rate turns out to depend on the square of the applied stress; however, if the strength deterioration rate is allowed to be a function of the input stress, other behaviors are predicted.

Journal ArticleDOI
TL;DR: Stochastic approximation and related learning techniques, and the abstraction problem in pattern classification, are discussed.
Abstract: REFERENCES [1] A. Dvoretzky, "On stochastic approximation," Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, vol. 1. Berkeley, Calif.: University of California Press, 1956, pp. 39-55. [2] C. Derman, "Stochastic approximation," Ann. Math. Statistics, vol. 27, pp. 879-886, 1956. [3] K. S. Fu, Z. J. Nikolic, Y. T. Chien, and W. G. Wee, "On the stochastic approximation and related learning techniques," School of Elec. Engrg., Purdue University, Lafayette, Ind., Rept. TR-EE66-6, 1966. [4] C. Blaydon, "On a pattern classification result of Aizerman, Braverman and Rozonoer," IEEE Trans. Information Theory (Correspondence), vol. IT-12, pp. 82-83, January 1966. [5] K. S. Fu and Y. T. Chien, "On Bayesian learning in stochastic approximation," Proc. 4th Annual Allerton Conf. on Circuit and System Theory, University of Illinois, Urbana, 1966. [6] Yu-Chi Ho and C. Blaydon, "On the abstraction problem in pattern classification," Cruft Lab., Harvard University, Cambridge, Mass., Tech. Rept. 476, October 1965.

Journal ArticleDOI
TL;DR: In this paper, the lower limit of the integral in the definition of ψξ(η) can be replaced by zero if the range of x is restricted to the right semi-plane.
Abstract: Let the random variable ξ be distributed with distribution function G(ξ) of the continuous, discontinuous or mixed type, and the random variables x_ξ be distributed with the distribution functions V_ξ(x), with the corresponding characteristic functions ψ_ξ(η) = ∫_{−∞}^{∞} e^{iηx} dV_ξ(x), η being a real variable and i the imaginary unit, and with the generating functions defined by ψ_ξ(−i log z). If the range of x is restricted to the right semi-plane, the lower limit of the integral in the definition of ψ_ξ(η) can be replaced by zero.

Proceedings ArticleDOI
E. Henrichon1, K. Fu1
01 Dec 1968
TL;DR: In this paper, a procedure for determining the modes of a continuous univariate probability density function (pdf) is proposed, which is based on the use of a new nonparametric estimate of the pdf.
Abstract: A procedure for determining the modes of a continuous univariate probability density function (pdf) is suggested. Inherent in this procedure is the use of a new nonparametric estimate of the pdf. An extension of this method to the multidimensional case is posed, and some results of the procedure applied to real data problems are presented.
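An illustrative sketch of the general approach (not the authors' particular estimator): form a nonparametric density estimate, here a Gaussian kernel estimate on a grid, and take its local maxima as the modes. The bimodal mixture, bandwidth, and grid are all assumed values:

```python
import math
import random

# Mode detection sketch: kernel density estimate on a grid, modes taken
# as local maxima.  Data is a bimodal mixture with modes near -2 and +2.
random.seed(11)
data = ([random.gauss(-2.0, 0.5) for _ in range(1000)]
        + [random.gauss(2.0, 0.5) for _ in range(1000)])
h = 0.3                                        # kernel bandwidth (assumed)

def kde(x):
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / (
        len(data) * h * math.sqrt(2.0 * math.pi))

grid = [i * 0.1 for i in range(-50, 51)]
dens = [kde(x) for x in grid]
modes = [grid[i] for i in range(1, len(grid) - 1)
         if dens[i] > dens[i - 1] and dens[i] > dens[i + 1]]

print(modes)    # expect modes near -2 and +2
```

The bandwidth controls the trade-off the paper is concerned with: too small and spurious modes appear, too large and genuine modes merge.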

Journal ArticleDOI
TL;DR: Stochastic computation, as described in this paper, is described, in which continuous quantities are represented as a sequence of one-bit binary words in which the probability of an ON logic level is a measure of the quantity.
Abstract: In conventional digital computation, continuous quantities are quantized and represented by binary words in which the number of bits determines the precision of representation. In stochastic computation, as described in this paper, continuous quantities are represented as a sequence of one-bit binary words (i.e., as a pulse stream or sequence of logic levels) in which the probability of an ON logic level is a measure of the quantity. Since probability is a continuous variable in the range 0 ≤ p ≤ 1, this removes the effects of quantization. However, a probability cannot be measured precisely, only estimated subject to random variance, and hence there is an effective random noise in the output of the computer.
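The representation described makes arithmetic strikingly cheap: with values encoded as ON probabilities of independent bit streams, a single AND gate multiplies. A minimal sketch, with the operand values and stream length as assumed examples:

```python
import random

# Stochastic computing sketch: quantities are Bernoulli bit streams whose
# ON probability encodes the value; an AND gate multiplies them.
random.seed(12)
p1, p2 = 0.8, 0.5          # assumed quantities to multiply
n = 100_000                # stream length

stream1 = [random.random() < p1 for _ in range(n)]
stream2 = [random.random() < p2 for _ in range(n)]
and_stream = [a and b for a, b in zip(stream1, stream2)]

product_est = sum(and_stream) / n
print(product_est)          # estimates p1 * p2 = 0.4, with random variance
```

The residual error shrinks only as 1/√n, which is exactly the "effective random noise" the abstract notes in place of quantization error.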

Journal ArticleDOI
TL;DR: Several aspects of stochastic models as they relate to the monthly probability of conception are explored, and the Weibull distribution is shown to illustrate considerable flexibility in describing these differing patterns of intercourse quantitatively.
Abstract: Several aspects of stochastic models as they relate to the monthly probability of conception are explored. In particular a method of obtaining the required probability distributions proceeding from a rather general description of human behaviour is presented. The number of acts of intercourse per month is considered as a random variable rather than being taken as a fixed constant. The pattern of intercourse is characterized mathematically. The construction of the probability function is presented and illustrated numerically. The Weibull distribution is shown to provide considerable flexibility in describing these differing patterns of intercourse quantitatively. The resulting probabilities of conception vary considerably under changing parameter values even when the expected number of acts of intercourse is taken to be nearly equal. The relevance of these considerations is discussed.
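The core construction can be sketched numerically: with per-act conception probability p and a random number of acts N per cycle, the monthly probability of conception is 1 − E[(1 − p)^N]. Taking N Poisson below is purely an illustration (the paper works from a Weibull-based description of behaviour), and p and the mean are assumed values:

```python
import math
import random

# Monthly conception probability sketch: 1 - E[(1-p)^N] with random N.
random.seed(13)
p = 0.03           # assumed per-act probability
mean_acts = 8.0    # assumed mean number of acts per cycle

# Closed form when N ~ Poisson(mean_acts): E[s^N] = exp(mean_acts*(s-1))
closed_form = 1.0 - math.exp(-mean_acts * p)

def poisson(lam):
    # Knuth's multiplication method (adequate for small lam)
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

trials = 100_000
conceptions = sum(1 for _ in range(trials)
                  if random.random() < 1.0 - (1.0 - p) ** poisson(mean_acts))
mc_rate = conceptions / trials
print(mc_rate, closed_form)
```

Holding the expected number of acts fixed while changing the shape of the distribution of N changes E[(1 − p)^N], which is the sensitivity the abstract emphasizes.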