Showing papers on "Probability density function published in 1989"


Journal ArticleDOI
TL;DR: In this article, various ways of removing the discrete nature of the general shape optimization problem are described, based on the introduction of a density function that is a continuous design variable; domains of high density then define the shape of the mechanical element.
Abstract: Shape optimization in a general setting requires the determination of the optimal spatial material distribution for given loads and boundary conditions. Every point in space is thus a material point or a void and the optimization problem is a discrete variable one. This paper describes various ways of removing this discrete nature of the problem by the introduction of a density function that is a continuous design variable. Domains of high density then define the shape of the mechanical element. For intermediate densities, material parameters given by an artificial material law can be used. Alternatively, the density can arise naturally through the introduction of periodically distributed, microscopic voids, so that effective material parameters for intermediate density values can be computed through homogenization. Several examples in two-dimensional elasticity illustrate that these methods allow a determination of the topology of a mechanical element, as required for a boundary variations shape optimization technique.

3,434 citations
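
A minimal sketch of the density-interpolation idea described above, assuming a simple power-law penalization of stiffness; the exponent, bounds, and grid are illustrative choices, not the paper's artificial material law or its homogenization-based variant.

```python
import numpy as np

def interpolate_stiffness(rho, E0=1.0, p=3.0):
    """Artificial-material-law sketch: intermediate densities rho in [0, 1]
    are mapped to an effective Young's modulus E(rho) = rho**p * E0.
    A penalization exponent p > 1 discourages intermediate densities,
    pushing the optimized density field toward 0/1 (void/material)."""
    rho = np.clip(rho, 1e-3, 1.0)   # lower bound avoids singular stiffness
    return rho**p * E0

# Example: a coarse density field over a hypothetical 4 x 8 design domain
rho = np.random.rand(4, 8)
E = interpolate_stiffness(rho)
print(E.round(3))
```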


Journal ArticleDOI
TL;DR: In this article, a two-dimensional graphical display of all possible relative sizes of the three principal moments is presented, together with a method of representing the probability density of these relative sizes deduced from a given set of data.
Abstract: Seismic signals provide information about the underlying moment tensor which, in turn, may be interpreted in terms of source mechanism. This paper is concerned with a two-dimensional graphical display of all possible relative sizes of the three principal moments; it provides a method of representing the probability density of these relative sizes deduced from a given set of data. Information provided by such a display, together with that relating to the orientation of the principal moments, provides as full a picture of the moment tensor as possible apart from an indication of its absolute magnitude. As with the compatibility plot, which was previously introduced to portray probability measures for different forms of P wave seismogram given a presumed source type, this "source type plot" for display of the principal moments is constructed to be "equal area" in the sense that the a priori probability density of the moment ratios is uniform over the whole plot. This a priori probability is based on the assumption that, with no information whatsoever concerning the source mechanism, each principal moment may independently take any value up to some arbitrary upper limit of magnitude, with equal likelihood. Although we have in mind the study of teleseismic relative amplitude data, the ideas can, in principle, be applied quite generally. The aim is to be able to display the degree of constraint imposed on the moment tensor by any set of observed data; estimates of the sizes of the principal moments together with their errors, when displayed on the source type plot, show directly the range of moment tensors compatible with the data.

311 citations


Journal ArticleDOI
TL;DR: In this article, a random walk model is developed based on the approach of Thomson (1987, J. Fluid Mech. 180, 529-556) which satisfies the well-mixed condition, incorporates skewness in the vertical velocity and has Gaussian random forcing.

233 citations


Journal ArticleDOI
TL;DR: In this article, a multivariate K-distribution is proposed to model the statistics of fully polarimetric radar data from earth terrain with polarizations HH, HV, VH, and VV.
Abstract: A multivariate K-distribution is proposed to model the statistics of fully polarimetric radar data from earth terrain with polarizations HH, HV, VH, and VV. In this approach, correlated polarizations of radar signals, as characterized by a covariance matrix, are treated as the sum of N n-dimensional random vectors; N obeys the negative binomial distribution with a parameter alpha and mean N-bar. Subsequently, an n-dimensional K-distribution, with either zero or nonzero mean, is developed in the limit of infinite N-bar or illuminated area. The probability density function (PDF) of the K-distributed vector normalized by its Euclidean norm is independent of the parameter alpha and is the same as that derived from a zero-mean Gaussian-distributed random vector.

185 citations
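
A sketch of drawing zero-mean K-distributed polarimetric vectors, using the common gamma-texture compound (product) representation rather than the paper's finite-sum construction; the covariance matrix and alpha below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_k_vectors(n_samples, C, alpha):
    """Zero-mean K-distributed complex vectors via the compound model:
    a gamma-distributed texture multiplying Gaussian speckle with covariance C."""
    n = C.shape[0]
    L = np.linalg.cholesky(C)
    # circular complex Gaussian speckle with covariance C
    z = (rng.standard_normal((n_samples, n)) +
         1j * rng.standard_normal((n_samples, n))) / np.sqrt(2.0)
    z = z @ L.T
    tau = rng.gamma(shape=alpha, scale=1.0 / alpha, size=(n_samples, 1))  # unit-mean texture
    return np.sqrt(tau) * z

# Example: a 3-channel (e.g. HH, HV, VV) covariance and a heavy-tailed alpha
C = np.array([[1.0, 0.3, 0.5],
              [0.3, 0.2, 0.1],
              [0.5, 0.1, 0.8]], dtype=complex)
x = sample_k_vectors(100_000, C, alpha=2.0)
print(np.cov(x, rowvar=False).real.round(2))  # sample covariance ~ C
```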


Posted Content
TL;DR: In this article, a series of theorems relating log-concavity and/or log-convexity of probability density functions, distribution functions, reliability functions, and their integrals is presented.
Abstract: In many applications, assumptions about the log-concavity of a probability distribution allow just enough special structure to yield a workable theory. This paper catalogs a series of theorems relating log-concavity and/or log-convexity of probability density functions, distribution functions, reliability functions, and their integrals. We list a large number of commonly-used probability distributions and report the log-concavity or log-convexity of their density functions and their integrals. We also discuss a variety of applications of log-concavity that have appeared in the literature.

140 citations
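
A small numerical check of log-concavity of a density, in the spirit of the catalog described above: the second difference of the log-density on a fine grid should be nonpositive. The grid, tolerance, and example distributions are illustrative.

```python
import numpy as np
from scipy import stats

def is_log_concave(logpdf, grid):
    """Crude numerical check: a density is log-concave when the second
    difference of its log on a fine grid is nonpositive (up to noise)."""
    second_diff = np.diff(logpdf(grid), n=2)
    return bool(np.all(second_diff <= 1e-8))

x = np.linspace(0.05, 10, 2000)
print(is_log_concave(stats.norm(loc=3).logpdf, x))      # True: normal is log-concave
print(is_log_concave(stats.lognorm(s=1.0).logpdf, x))   # False: lognormal is not
```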


Journal ArticleDOI
TL;DR: In this article, the estimation of a density and a hazard rate function based on censored data by the kernel smoothing method is studied, which is facilitated by a recent result of Lo and Singh (1986) which establishes a strong uniform approximation of the Kaplan-Meier estimator by an average of independent random variables.
Abstract: We study the estimation of a density and a hazard rate function based on censored data by the kernel smoothing method. Our technique is facilitated by a recent result of Lo and Singh (1986) which establishes a strong uniform approximation of the Kaplan-Meier estimator by an average of independent random variables. (Note that the approximation is carried out on the original probability space, which should be distinguished from the Hungarian embedding approach.) Pointwise strong consistency and a law of iterated logarithm are derived, as well as the mean squared error expression and asymptotic normality, which is obtained using a more traditional method, as compared with the Hajek projection employed by Tanner and Wong (1983).

125 citations
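
A rough sketch of the kernel smoothing idea: compute the Kaplan-Meier estimator, then smooth its jumps with a kernel to estimate the density (the hazard rate can be handled analogously). The bandwidth, kernel, and simulated censoring setup are assumptions; this is not the authors' exact construction.

```python
import numpy as np

def km_jumps(times, events):
    """Kaplan-Meier estimate of the survival function; returns the event
    times and the jump sizes of the corresponding distribution function."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(events)[order]
    n = len(t)
    surv, jumps, jump_times = 1.0, [], []
    for i in range(n):
        at_risk = n - i
        if d[i] == 1:                      # uncensored observation
            new_surv = surv * (1.0 - 1.0 / at_risk)
            jump_times.append(t[i])
            jumps.append(surv - new_surv)  # mass placed at this event time
            surv = new_surv
    return np.array(jump_times), np.array(jumps)

def kernel_density_censored(x, times, events, bandwidth):
    """Kernel estimate f_hat(x) = sum_j w_j K_h(x - t_j), with the weights
    w_j taken as the Kaplan-Meier jumps (a sketch of the smoothing idea)."""
    t_j, w_j = km_jumps(times, events)
    u = (x[:, None] - t_j[None, :]) / bandwidth
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return (K * w_j).sum(axis=1) / bandwidth

# Example: exponential lifetimes subject to uniform censoring
rng = np.random.default_rng(1)
life = rng.exponential(1.0, 500)
cens = rng.uniform(0, 2.5, 500)
obs, evt = np.minimum(life, cens), (life <= cens).astype(int)
grid = np.linspace(0, 3, 200)
fhat = kernel_density_censored(grid, obs, evt, bandwidth=0.2)
print((fhat * (grid[1] - grid[0])).sum())  # total mass; below 1 when the largest times are censored
```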


Journal ArticleDOI
TL;DR: In this article, a simple method (here called the RS-method) for solving renewal-type integral equations, based on direct numerical Riemann-Stieltjes integration, is presented and evaluated.
Abstract: A simple method (here called the RS-method) for solving renewal-type integral equations, based on direct numerical Riemann-Stieltjes integration, is presented and evaluated. In almost all situations it has shown surprisingly good results in terms of simplicity, convergence and applicability compared with the other known methods. The RS-method is particularly useful when the probability density function has singularities.

116 citations
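
A simple sketch of solving a renewal-type equation M(t) = F(t) + int_0^t M(t - x) dF(x) by direct summation against the Stieltjes increments of F; the exact discretization used by the paper's RS-method may differ in detail, and the grid and example distribution here are illustrative.

```python
import numpy as np
from scipy import stats

def renewal_function(F, t_max, h):
    """Solve M(t) = F(t) + int_0^t M(t - x) dF(x) on a grid of step h by
    summing against the Stieltjes increments dF (a simple explicit variant)."""
    t = np.arange(0.0, t_max + h, h)
    Fv = F(t)
    dF = np.diff(Fv, prepend=Fv[0])         # Stieltjes increments dF_j
    M = np.zeros_like(t)
    for k in range(1, len(t)):
        # M_k = F_k + sum_{j=1..k} M_{k-j} dF_j  (explicit recursion, M_0 = 0)
        M[k] = Fv[k] + np.sum(M[k - 1::-1] * dF[1:k + 1])
    return t, M

# Example: Gamma(2, 1) interarrival times
t, M = renewal_function(stats.gamma(a=2.0).cdf, t_max=20.0, h=0.02)
print(M[-1], t[-1] / 2.0 - 0.25)  # compare with the asymptote t/mu + (var - mu^2)/(2*mu^2)
```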


14 Dec 1989
TL;DR: In this article, the authors developed a tracking filter based on the assumption that the number of mixture components should be minimized without modifying the "structure" of the distribution beyond a specified limit.
Abstract: The paper is concerned with the development of practical filters for tracking a target when the origin of sensor measurements is uncertain. The full Bayesian solution to this problem gives rise to mixture distributions. From knowledge of the mixture distribution, in principle, an optimal estimate of the state vector for any criterion may be obtained. Also, if the problem is linear and Gaussian, the distribution becomes a Gaussian mixture in which each component probability density function is given by a Kalman filter. Only this case is considered here. The methods presented are based on the premise that the number of mixture components should be minimized without modifying the 'structure' of the distribution beyond a specified limit. The techniques operate by merging similar components in such a way that the approximation preserves the mean and covariance of the original mixture. Also, to allow the tracking filter to be implemented as a bank of Kalman filters, it is required that the approximated distribution is itself a Gaussian mixture.

109 citations
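
A sketch of the moment-preserving merge step mentioned above: two Gaussian components are replaced by one whose weight, mean, and covariance match those of the sub-mixture being merged. The similarity criterion used to decide which components to merge is not shown.

```python
import numpy as np

def merge_components(w1, m1, P1, w2, m2, P2):
    """Moment-preserving merge of two Gaussian mixture components:
    the merged weight, mean and covariance match the mean and covariance
    of the two-component sub-mixture being replaced."""
    w = w1 + w2
    a1, a2 = w1 / w, w2 / w
    m = a1 * m1 + a2 * m2
    d1, d2 = (m1 - m)[:, None], (m2 - m)[:, None]
    P = a1 * (P1 + d1 @ d1.T) + a2 * (P2 + d2 @ d2.T)
    return w, m, P

# Example: merge two nearby 2-D components (e.g. from a bank of Kalman filters)
w, m, P = merge_components(0.6, np.array([0.0, 1.0]), np.eye(2),
                           0.4, np.array([0.5, 1.2]), 0.5 * np.eye(2))
print(w, m, P)
```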


Journal ArticleDOI
TL;DR: In this paper, two approaches to characterizing transport by groundwater are compared: the common one, in which solute movement is represented in terms of concentration as a function of space and time, and one based on a travel-time probability distribution function (p.d.f.), defined as the probability that a solute particle crosses a compliance surface.

107 citations


Journal ArticleDOI
TL;DR: In this article, a class of nonparametric estimators of conditional quantiles of Y for a given value of X, based on a random sample from the underlying joint distribution, is proposed.

105 citations


Journal ArticleDOI
TL;DR: The authors develop the mathematical tools for the computation of the average (or expected) error due to quantization, as well as the analytic expression for the probability density of the error of a function of an arbitrarily large number of independently quantized variables.
Abstract: Due to the important role that digitization error plays in the field of computer vision, a careful analysis of its impact on the computational approaches used in the field is necessary. The authors develop the mathematical tools for the computation of the average (or expected) error due to quantization. They can be used in estimating the actual error occurring in the implementation of a method. Also derived is the analytic expression for the probability density of the error of a function of an arbitrarily large number of independently quantized variables. The probability that the error of the function will be within a given range can thus be obtained accurately. The tools developed can be used in the analysis of the applicability of a given algorithm.
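
The paper derives analytic expressions; as an empirical counterpart, here is a quick Monte Carlo sketch of the error distribution of a function of independently quantized variables. The function, quantization step, and input ranges below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, step):
    """Uniform quantizer with cell width `step` (round to nearest level)."""
    return step * np.round(x / step)

# Error of a function of independently quantized inputs; the illustrative
# function is f(x, y) = atan2(y, x), an angle computed from pixel coordinates.
step = 1.0
x = rng.uniform(5, 50, 1_000_000)
y = rng.uniform(5, 50, 1_000_000)
err = np.arctan2(y, x) - np.arctan2(quantize(y, step), quantize(x, step))

# Empirical density of the error and the probability it stays within a range
hist, edges = np.histogram(err, bins=200, density=True)
print("P(|error| < 0.01) ~", np.mean(np.abs(err) < 0.01))
```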

Journal ArticleDOI
TL;DR: A new approach to the computational treatment of polyreaction kinetics is presented, which is characterized by a Galerkin method based on orthogonal polynomials of a discrete variable, the polymer degree, to avoid the disadvantages and preserve the advantages of either of them.
Abstract: This paper presents a new approach to the computational treatment of polyreaction kinetics, which is characterized by a Galerkin method based on orthogonal polynomials of a discrete variable, the polymer degree (or chain length). In comparison with known competing approaches (statistical moment treatment, Galerkin methods for continuous polymer models), the suggested method is shown to avoid the disadvantages and preserve the advantages of either of them. The basic idea of the method is the construction of a discrete inner product associated with a reasonably chosen probability density function. For the so-called Schulz-Flory distribution one thus obtains the discrete Laguerre polynomials, whereas the Poisson distribution leads to the Charlier polynomials. Numerical experiments for selected polyreaction mechanisms illustrate the efficiency of the proposed method.

Journal ArticleDOI
TL;DR: Comparisons of the a priori uniform and nonuniform Bayesian algorithms to the maximum-likelihood algorithm are carried out using computer-generated noise-free and Poisson randomized projections.
Abstract: A method that incorporates a priori uniform or nonuniform source distribution probabilistic information and data fluctuations of a Poisson nature is presented. The source distributions are modeled in terms of a priori source probability density functions. Maximum a posteriori probability solutions, as determined by a system of equations, are given. Iterative Bayesian imaging algorithms for the solutions are derived using an expectation maximization technique. Comparisons of the a priori uniform and nonuniform Bayesian algorithms to the maximum-likelihood algorithm are carried out using computer-generated noise-free and Poisson randomized projections. Improvement in image reconstruction from projections with the Bayesian algorithm is demonstrated. Superior results are obtained using the a priori nonuniform source distribution.
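
A sketch of the maximum-likelihood EM core that the Bayesian algorithms build on, for Poisson projection data; the a priori source-density terms of the MAP variants are not included, and the system matrix and phantom below are illustrative.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM iteration for Poisson projection data
    y ~ Poisson(A @ lam).  The paper's Bayesian variants modify this
    update with an a priori source density; only the ML core is sketched."""
    lam = np.ones(A.shape[1])
    sens = A.sum(axis=0)                          # sensitivity image
    for _ in range(n_iter):
        ratio = y / np.clip(A @ lam, 1e-12, None)
        lam *= (A.T @ ratio) / np.clip(sens, 1e-12, None)
    return lam

# Toy example: random system matrix, known source, Poisson-randomized data
rng = np.random.default_rng(2)
A = rng.uniform(0, 1, size=(120, 30))
truth = rng.uniform(1, 5, size=30)
y = rng.poisson(A @ truth)
est = mlem(A, y, n_iter=200)
print(np.corrcoef(est, truth)[0, 1])
```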

Book ChapterDOI
01 Jan 1989
TL;DR: In this paper, the authors discuss the conditions for the approximate normality of the distribution of the number of local maxima of a random function on the set of vertices of a graph when the values of the random function are independently identically distributed with a continuous distribution function.
Abstract: This chapter discusses the normal approximation for the number of local maxima of a random function on a graph. It discusses the conditions for the approximate normality of the distribution of the number of local maxima of a random function on the set of vertices of a graph when the values of the random function are independently identically distributed with a continuous distribution function. For a regular graph, the distribution of the number of local maxima is approximately normal if its variance is large. The basic idea of a normal approximation theorem is to exploit a sum of indicator random variables. The chapter discusses a basic lemma on normal approximation for sums of indicator random variables.
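
A quick Monte Carlo illustration of the quantity studied above: counting local maxima of i.i.d. continuous values on a regular graph (a cycle here, chosen for convenience) and inspecting the mean and variance of the count.

```python
import numpy as np

rng = np.random.default_rng(3)

def count_local_maxima(adjacency, values):
    """Number of vertices whose value exceeds the values of all neighbours."""
    count = 0
    for v in range(len(values)):
        nbrs = np.flatnonzero(adjacency[v])
        if np.all(values[v] > values[nbrs]):
            count += 1
    return count

# Regular graph example: a cycle of n vertices (each vertex has degree 2)
n, trials = 200, 5000
A = np.zeros((n, n), dtype=int)
idx = np.arange(n)
A[idx, (idx + 1) % n] = 1
A[(idx + 1) % n, idx] = 1

counts = np.array([count_local_maxima(A, rng.random(n)) for _ in range(trials)])
print(counts.mean(), counts.var())   # the mean should be close to n/3 for a cycle
```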

Journal ArticleDOI
TL;DR: In this article, the authors examined the effect of noise on a general system at a saddle-node bifurcation and revealed that the probability density of the time to pass through the saddle node has a universal shape, the specific kinetics of the particular system serving only to set the time scale.
Abstract: An examination of the effect of noise on a general system at a saddle-node bifurcation has revealed that, in the limit of weak noise, the probability density of the time to pass through the saddle-node has a universal shape, the specific kinetics of the particular system serving only to set the time scale. This probability density is displayed and its salient features are explicated. In the case of a saddle-node bifurcation leading to relaxation oscillations, this analysis leads to the prediction of the existence of noise-induced oscillations which appear much less random than might at first be expected. The period of these oscillations has a well-defined, nonzero most probable value, the inverse of which is a noise-induced frequency. This frequency can be detected as a peak in power spectra from numerical simulations of such a system. This is the first case of the prediction and detection of a noise-induced frequency of which the authors are aware.
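
A sketch of the passage-time experiment suggested by the abstract: the saddle-node normal form dx = (a + x^2) dt + sigma dW is simulated with Euler-Maruyama and the first passage times are histogrammed. The drift form, parameters, and thresholds are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def passage_times(a, sigma, n_paths, x0=-2.0, x_end=2.0, dt=1e-3, t_max=100.0):
    """First times at which dx = (a + x**2) dt + sigma dW reaches x_end,
    starting from x0 (Euler-Maruyama over many paths at once)."""
    x = np.full(n_paths, x0)
    t = np.zeros(n_paths)
    done = np.zeros(n_paths, dtype=bool)
    sqdt = np.sqrt(dt)
    for _ in range(int(t_max / dt)):
        active = ~done
        if not active.any():
            break
        x[active] += (a + x[active]**2) * dt + sigma * sqdt * rng.standard_normal(active.sum())
        t[active] += dt
        done |= x >= x_end
    return t[done]

# Distribution of passage times through the saddle-node bottleneck
times = passage_times(a=0.05, sigma=0.2, n_paths=2000)
hist, edges = np.histogram(times, bins=60, density=True)
print("most probable passage time ~", edges[np.argmax(hist)])
```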

Journal ArticleDOI
TL;DR: In this paper, a method of determining the magnitude of the sum of random harmonic vectors of arbitrary probability characteristics is presented to evaluate net harmonic magnitudes due to distributed sources in both deterministic and stochastic networks.
Abstract: A method of determining the magnitude of the sum of random harmonic vectors of arbitrary probability characteristics is presented. Utilization of the summation technique, in conjunction with harmonic load flow, to evaluate net harmonic magnitudes due to distributed sources in both deterministic and stochastic networks is discussed. It is demonstrated that the widely used form of probability density function of the magnitude of the sum of random vectors arises from simplification of the general expressions developed here. To assess its validity, a comparative study between the method developed and Monte Carlo simulation is carried out, showing good agreement. However, the analytical method is much faster and provides closed-form expressions for the probability density characteristics of the sum of random vectors.
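
A Monte Carlo sketch of the quantity being modeled: the magnitude of a sum of harmonic vectors with random amplitudes and uniform random phases. The amplitude law and the number of sources are assumptions; with many independent phasors the magnitude approaches the Rayleigh form that the paper identifies as a simplified special case.

```python
import numpy as np

rng = np.random.default_rng(5)

def resultant_magnitude(n_sources, n_trials):
    """Magnitude of the sum of harmonic vectors (phasors) with random
    amplitudes and uniformly random phases, estimated by Monte Carlo."""
    amp = rng.uniform(0.5, 1.5, size=(n_trials, n_sources))     # illustrative amplitude law
    phase = rng.uniform(0.0, 2 * np.pi, size=(n_trials, n_sources))
    resultant = np.sum(amp * np.exp(1j * phase), axis=1)
    return np.abs(resultant)

mags = resultant_magnitude(n_sources=20, n_trials=200_000)
# Rayleigh check: for a Rayleigh magnitude, E[M] = sqrt(pi * E[M^2] / 4)
print(mags.mean(), np.sqrt(np.pi / 4 * np.mean(mags**2)))
```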

Journal ArticleDOI
TL;DR: In this article, the distribution of the ratio of the absolute values of two correlated normal random variables is computed; the problem is a two-dimensional integral with one integral over an infinite interval, but a linear transformation reduces it to a single integral over a finite interval.
Abstract: Our objective in this paper is to propose an efficient method to compute the distribution of the ratio of the absolute values of the two correlated normal random variables. The problem can be solved as a two-dimensional integral where one integral is over an infinite interval. However, by a linear transformation the problem can be reduced to a single integral over a finite interval. The integral can be evaluated by numerical integration. It is easy to program and the program is computationally efficient.
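
A sketch of computing P(|X|/|Y| <= r) for a bivariate normal by conditioning on Y, which leaves one numerical integral over an infinite interval, cross-checked by Monte Carlo; the paper's linear transformation to a finite interval is not reproduced, and the parameters are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def cdf_abs_ratio(r, mu, Sigma):
    """P(|X| / |Y| <= r) for (X, Y) bivariate normal with mean mu and
    covariance Sigma, via one numerical integral over y (conditioning on Y)."""
    mx, my = mu
    sx, sy = np.sqrt(Sigma[0, 0]), np.sqrt(Sigma[1, 1])
    rho = Sigma[0, 1] / (sx * sy)
    s_cond = sx * np.sqrt(1.0 - rho**2)

    def integrand(y):
        m_cond = mx + rho * sx / sy * (y - my)          # conditional mean of X given Y=y
        p_inner = (stats.norm.cdf(( r * abs(y) - m_cond) / s_cond) -
                   stats.norm.cdf((-r * abs(y) - m_cond) / s_cond))
        return p_inner * stats.norm.pdf(y, loc=my, scale=sy)

    return quad(integrand, -np.inf, 0.0)[0] + quad(integrand, 0.0, np.inf)[0]

mu = np.array([0.5, 1.0])
Sigma = np.array([[1.0, 0.6], [0.6, 2.0]])
rng = np.random.default_rng(6)
xy = rng.multivariate_normal(mu, Sigma, size=500_000)
r = 1.2
print(cdf_abs_ratio(r, mu, Sigma),
      np.mean(np.abs(xy[:, 0]) / np.abs(xy[:, 1]) <= r))   # Monte Carlo check
```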

Journal ArticleDOI
TL;DR: Analysis of the proposed method, leading to a measure of the gain in using this biasing scheme, shows that in all optimal systems considered, fewer than 100 trials are needed to achieve estimates with 45% confidence, even for extremely small error probabilities.
Abstract: Detection systems are designed to operate with optimal or nearly optimal probability of a wrong decision. Analytical solutions for the performance of these systems have been very difficult to obtain. Monte Carlo simulations are often the most tractable method of estimating performance. However, in systems with small probability of error, this technique requires very large amounts of computer time. A technique known as importance sampling substantially reduces the number of simulation trials needed, for a given accuracy, over the standard Monte Carlo method. The theory and application of the importance sampling method in Monte Carlo simulation are considered in a signal detection context. A general method of applying this technique to the optimal detection problem is given. Results show that in the cases examined, the gain is approximately proportional to the inverse of the error probability. Applications of the proposed method are not limited to optimum detection systems; analysis, leading to a measure of the gain in using this biasing scheme, shows that in all optimal systems considered, fewer than 100 trials are needed to achieve estimates with 45% confidence, even for extremely small error probabilities.
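
A toy importance-sampling sketch in the spirit of the paper: a very small exceedance probability is estimated by sampling from a biased (mean-shifted) density and reweighting by the likelihood ratio. The Gaussian setup and the choice of bias are illustrative, not the paper's detection problem.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def prob_error_is(threshold, n_trials, shift):
    """Importance-sampling estimate of P(N(0,1) > threshold): sample from a
    biased density N(shift, 1) concentrated near the rare event and reweight
    each sample by the likelihood ratio of the true to the biased density."""
    x = rng.normal(loc=shift, scale=1.0, size=n_trials)
    w = stats.norm.pdf(x) / stats.norm.pdf(x, loc=shift)   # likelihood ratios
    return np.mean(w * (x > threshold))

threshold = 5.0                     # true probability ~ 2.9e-7
print("importance sampling :", prob_error_is(threshold, 1000, shift=threshold))
print("exact               :", stats.norm.sf(threshold))
# A plain Monte Carlo run of the same size would almost surely return 0 here.
```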

Journal ArticleDOI
TL;DR: The theory is applied to Abel's equation and the estimation of particle size densities in stereology and rates of convergence of regularized histogram estimates of the particle size density are given.
Abstract: Given data $y_i = (Kg)(u_i) + \varepsilon_i$ where the $\varepsilon$'s are random errors, the $u$'s are known, $g$ is an unknown function in a reproducing kernel space with kernel $r$ and $K$ is a known integral operator, it is shown how to calculate convergence rates for the regularized solution of the equation as the evaluation points $\{u_i\}$ become dense in the interval of interest. These rates are shown to depend on the eigenvalue asymptotics of $KRK^\ast$, where $R$ is the integral operator with kernel $r$. The theory is applied to Abel's equation and the estimation of particle size densities in stereology. Rates of convergence of regularized histogram estimates of the particle size density are given.

Journal ArticleDOI
TL;DR: In this paper, an empirical scaling model for Pseudo Relative Velocity (PSV) spectrum amplitudes has been refined by introducing the frequency dependent attenuation of amplitudes with distance.

Journal ArticleDOI
TL;DR: In this article, the authors describe a novel cumulant method of probabilistic power system simulation using the Laguerre polynomial expansion, which enables practically any number of cumulants to be used in the simulation.
Abstract: The authors describe a novel cumulant method of probabilistic power system simulation using the Laguerre polynomial expansion. In this method, Laguerre polynomials are used for obtaining the equivalent load duration probability density function from cumulants representing the load and generator outage probability density functions. A recursive algorithm is used for calculating moments and cumulants which enables practically any number of cumulants to be used in the simulation. The results of testing the method on the IEEE Reliability Test System are presented. The accuracy and computational efficiency of the method are compared with those of the conventional cumulant method based on the Hermite polynomial expansion.
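
A sketch of the kind of recursive moment/cumulant conversion the abstract refers to, using the standard recursion between raw moments and cumulants; the Laguerre expansion of the equivalent load duration curve itself is not shown.

```python
import numpy as np
from math import comb

def cumulants_from_moments(m):
    """Raw moments m[1..n] (with m[0] = 1) -> cumulants k[1..n] via the
    standard recursion m_n = sum_{j=1..n} C(n-1, j-1) k_j m_{n-j}."""
    n = len(m) - 1
    k = np.zeros(n + 1)
    for i in range(1, n + 1):
        k[i] = m[i] - sum(comb(i - 1, j - 1) * k[j] * m[i - j] for j in range(1, i))
    return k

def moments_from_cumulants(k):
    """Inverse recursion: cumulants k[1..n] -> raw moments m[1..n]."""
    n = len(k) - 1
    m = np.zeros(n + 1)
    m[0] = 1.0
    for i in range(1, n + 1):
        m[i] = sum(comb(i - 1, j - 1) * k[j] * m[i - j] for j in range(1, i + 1))
    return m

# Round trip for a normal N(1, 2) load: its cumulants are (1, 2, 0, 0, ...)
k = np.array([0.0, 1.0, 2.0, 0.0, 0.0, 0.0])
m = moments_from_cumulants(k)
print(cumulants_from_moments(m).round(10))   # recovers (0, 1, 2, 0, 0, 0)
```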

Journal ArticleDOI
TL;DR: In this article, a higher order technique is presented to compute the probability distribution function of the forced response of a mistuned bladed disk assembly, where the modal stiffness of each blade is assumed to be a random variable with a Gaussian distribution.

Journal ArticleDOI
TL;DR: The joint probability density function of the weight vector in least-mean-square (LMS) adaptation is studied for Gaussian data models and the weights are shown to be jointly Gaussian with time-varying mean vector and covariance matrix given as the solution to well-known difference equations for the weight vector's mean and covariance matrix.
Abstract: The joint probability density function of the weight vector in least-mean-square (LMS) adaptation is studied for Gaussian data models. An exact expression is derived for the characteristic function of the weight vector at time n+1, conditioned on the weight vector at time n. The conditional characteristic function is expanded in a Taylor series and averaged over the unknown weight density to yield a first-order partial differential-difference equation in the unconditioned characteristic function of the weight vector. The equation is approximately solved for small values of the adaptation parameter, and the weights are shown to be jointly Gaussian with time-varying mean vector and covariance matrix given as the solution to well-known difference equations for the weight vector's mean and covariance matrix. The theoretical results are applied to analyzing the use of the weights in detection and time delay estimation. Simulations that support the theoretical results are also presented.

Book ChapterDOI
01 Jan 1989
TL;DR: In this article, the cumulative distribution function of a parabolic function of independent standard normal random variables is computed by inversion of the corresponding characteristic function using the saddle-point method in conjunction with the trapezoidal rule.
Abstract: The probability density function and the cumulative distribution function of a parabolic function of independent standard normal random variables are computed by inversion of the corresponding characteristic function. The method uses the saddle-point method in conjunction with the trapezoidal rule. The result is useful in second order reliability analysis.
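
A sketch of characteristic-function inversion with the trapezoidal rule, applied to a quadratic form of independent standard normals and checked against the chi-square case; the saddle-point refinement described in the paper is omitted, and the truncation and grid are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

def cf_quadratic(t, lam):
    """Characteristic function of Q = sum_j lam_j * Z_j**2, Z_j iid N(0,1)."""
    t = np.atleast_1d(t).astype(complex)
    return np.prod(1.0 / np.sqrt(1.0 - 2.0j * np.outer(t, lam)), axis=1)

def cdf_by_inversion(x, lam, t_max=200.0, n=20_000):
    """Gil-Pelaez inversion with the trapezoidal rule:
    F(x) = 1/2 - (1/pi) * int_0^inf Im(exp(-i t x) phi(t)) / t dt."""
    t = np.linspace(1e-6, t_max, n)
    integrand = np.imag(np.exp(-1j * t * x) * cf_quadratic(t, lam)) / t
    return 0.5 - trapezoid(integrand, t) / np.pi

lam = np.array([1.0, 1.0, 1.0])      # Q is then chi-square with 3 degrees of freedom
x = 4.0
print(cdf_by_inversion(x, lam), stats.chi2.cdf(x, df=3))   # the two should agree closely
```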

Journal ArticleDOI
TL;DR: Using a generalized likelihood ratio test, it is proven that, for a symmetric noise probability density function, the detection performance is asymptotically equivalent to that obtained for a detector designed with a priori knowledge of the noise parameters.
Abstract: The problem of detecting a signal known except for amplitude in non-Gaussian noise is addressed. The noise samples are assumed to be independent and identically distributed with a probability density function known except for a few parameters. Using a generalized likelihood ratio test, it is proven that, for a symmetric noise probability density function, the detection performance is asymptotically equivalent to that obtained for a detector designed with a priori knowledge of the noise parameters. A computationally more efficient but equivalent test is proposed, and a computer simulation performed to illustrate the theory is described.

Journal ArticleDOI
TL;DR: In this paper, a modified Monte Carlo simulation is used to determine the reliability of rock slopes, including possible correlations between the variables entering into the design equation, and a computer program has been developed to perform all the necessary calculations.

Journal ArticleDOI
01 May 1989
TL;DR: A visual, interactive method for specifying a bounded Johnson (SB) probability distribution when little or no data are available for formally identifying and fitting an input process, packaged into a public-domain microcomputer-based software system called VISIFIT.
Abstract: We present a visual, interactive method for specifying a bounded Johnson (SB) probability distribution when little or no data are available for formally identifying and fitting an input process. Using subjective information, the modeler provides values for familiar characteristics of an envisioned target distribution. These numerical characteristics are transformed into parameter values for the probability density function. The parameters can then be indirectly manipulated, either by revising the desired numerical values of the function's specifiable characteristics or by directly altering the shape of the displayed curve. Interaction with a visual display of the fitted density permits the modeler to conveniently obtain a more realistic representation of an input process than was previously possible. The techniques involved have been packaged into a public-domain microcomputer-based software system called VISIFIT.
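
A sketch of the bounded Johnson SB family underlying the system described above: given Z ~ N(0,1), X = xi + lambda / (1 + exp(-(Z - gamma)/delta)) is bounded on (xi, xi + lambda). The parameter values below are illustrative; VISIFIT's interactive elicitation of them is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_johnson_sb(n, gamma, delta, xi, lam):
    """Draw from a bounded Johnson SB distribution: if Z ~ N(0,1), then
    X = xi + lam / (1 + exp(-(Z - gamma) / delta)) lies in (xi, xi + lam)."""
    z = rng.standard_normal(n)
    return xi + lam / (1.0 + np.exp(-(z - gamma) / delta))

# Example: an envisioned service-time distribution bounded on (2, 10)
x = sample_johnson_sb(100_000, gamma=0.5, delta=1.2, xi=2.0, lam=8.0)
print(x.min(), x.max())                         # stays inside the (2, 10) bounds
print(np.mean(x), np.percentile(x, [10, 50, 90]))
```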

Journal ArticleDOI
TL;DR: It is shown that such a minimum norm solution is the maximum-likelihood estimate of the system function parameters and that such an estimate is unbiased, with the lower bound of the variance of the error equal to the Cramer-Rao lower bound, and the upper bound derived from the concept of a generalized inverse.
Abstract: The properties of the maximum likelihood estimator of the generalized p-Gaussian (GPG) probability density function from N independent identically distributed samples are investigated, especially in the context of the deconvolution problem under GPG white noise. Specifically, the properties of the estimator are first described independently of the application. Then the solution of the above-mentioned deconvolution problem is obtained as the solution of a minimum norm problem in an l_p normed space. It is shown that such a minimum norm solution is the maximum-likelihood estimate of the system function parameters and that such an estimate is unbiased, with the lower bound of the variance of the error equal to the Cramer-Rao lower bound, and the upper bound derived from the concept of a generalized inverse. The results are illustrated by computer simulations.

Journal ArticleDOI
TL;DR: The algorithm is based on the kernels used in the nonparametric estimation of probability density and regression functions and possesses tracking properties as the sample size grows large; conditions for mean square error convergence and almost sure convergence are given.

Journal ArticleDOI
TL;DR: In this paper, the breakdown probability distribution function for DC and impulse voltages was determined for conditions of gap pressure of 10, 0.1, and 10⁻⁴ Pa and gap length of 0.1 mm.
Abstract: The breakdown probability distribution function was determined for DC and impulse voltages, for conditions of gap pressure of 10, 0.1, and 10⁻⁴ Pa and gap length of 0.1 mm. It was found that DC and impulse voltages are associated with different types of probability distribution function due to different initiation mechanisms. The statistical influence of the number of previous breakdowns on the probability distribution of the breakdown voltage was investigated. By applying the U test for analysis of measured data, it was found that at lower gap pressure, the breakdown voltage probability changes after a smaller number of breakdowns than it does at higher gap pressure.