
Showing papers on "Cumulative distribution function published in 2001"


Journal ArticleDOI
01 Sep 2001-Geoderma
TL;DR: Two new criteria (exceedence probability plot and narrowness of probability intervals that include the true values) are presented to assess the accuracy and precision of local uncertainty models using cross-validation.

514 citations


Proceedings ArticleDOI
09 Dec 2001
TL;DR: This paper presents three methods for incorporating the error due to input distributions estimated from finite samples when calculating confidence intervals for output parameters.
Abstract: Stochastic simulation models are used to predict the behavior of real systems whose components have random variation. The simulation model generates artificial random quantities based on the nature of the random variation in the real system. Very often, the probability distributions occurring in the real system are unknown, and must be estimated using finite samples. This paper shows three methods for incorporating the error due to input distributions that are based on finite samples, when calculating confidence intervals for output parameters.
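The general bootstrap idea behind such methods can be sketched as follows: resample the finite input sample, rerun the simulation with each resampled input distribution, and take percentiles of the outputs. This is an illustration of the principle, not necessarily one of the paper's three methods; the toy queue model and all parameter values are invented for the example.

```python
import random
import statistics

def mean_wait(service_sample, arrival_rate, n_customers, rng):
    """Toy single-server queue: service times are drawn from the finite
    empirical sample, i.e. the estimated input distribution."""
    t_arrive = t_free = 0.0
    waits = []
    for _ in range(n_customers):
        t_arrive += rng.expovariate(arrival_rate)
        start = max(t_arrive, t_free)
        waits.append(start - t_arrive)
        t_free = start + rng.choice(service_sample)
    return statistics.mean(waits)

def bootstrap_output_ci(sample, arrival_rate, n_boot=200, alpha=0.10, seed=1):
    """Resample the *input* data, rerun the simulation each time, and take
    output percentiles, so the interval reflects both simulation noise and
    input-distribution error."""
    rng = random.Random(seed)
    outputs = sorted(
        mean_wait([rng.choice(sample) for _ in sample], arrival_rate, 500, rng)
        for _ in range(n_boot)
    )
    return outputs[int(alpha / 2 * n_boot)], outputs[int((1 - alpha / 2) * n_boot) - 1]

rng = random.Random(0)
observed = [rng.expovariate(2.0) for _ in range(50)]  # finite service-time sample
lo, hi = bootstrap_output_ci(observed, arrival_rate=1.0)
print(lo, hi)
```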

101 citations


Journal ArticleDOI
TL;DR: In this article, the performance of an ultrawideband (UWB) random-noise radar operating in the 1-2 GHz frequency band has been investigated from a statistical point of view by developing the theoretical basis for the system's receiver operating characteristics.
Abstract: An ultrawideband (UWB) random-noise radar operating in the 1-2 GHz frequency band has been developed and field-tested at a 200 m range at the University of Nebraska. A unique heterodyne correlation technique based on a delayed transmitted waveform using a photonic delay line has been used to inject coherence within this system. The performance of this radar, assuming a point target, has been investigated from a statistical point of view by developing the theoretical basis for the system's receiver operating characteristics (ROC). Explicit analytical expressions for the joint probability density function (pdf) of the in-phase (I) and quadrature (Q) components of the receiver output have been derived under the assumption that the input signals are partially correlated Gaussian processes. The pdf and the complementary cumulative distribution function (cdf) for the envelope of the receiver output are also derived. These expressions are used to relate the probability of detection (P/sub d/) to the probability of false alarm (P/sub f/) for different numbers of integrated samples, and the results are analyzed.

86 citations


Journal ArticleDOI
TL;DR: The introduction of random phase shifts into the functional transformations, proposed by previous investigators to disrupt the repetitiveness of search curves, does not necessarily improve the sensitivity analysis results because it destroys the orthogonality of the trigonometric functions, which is required for Fourier analysis.

70 citations


Journal ArticleDOI
TL;DR: An expression for the cumulative distribution function evaluated at zero of the difference of two chi-square variates with different number of degrees of freedom is derived and applied to the outage probability computation of cellular mobile radio systems in fading channels.
Abstract: An expression for the cumulative distribution function evaluated at zero of the difference of two chi-square variates with different numbers of degrees of freedom is derived and applied to the outage probability computation of cellular mobile radio systems in fading channels. In particular, a generic result is developed for this probability which takes on several forms, the simplest of which is a single integral with finite limits and an integrand composed of elementary (exponential and trigonometric) functions. The results are applicable to cellular systems that are subject to independent identically distributed (i.i.d.) interfering signals and that employ maximal-ratio combining reception over i.i.d. diversity paths. Various fading channel models are assumed for the desired signal and interferers. For each desired signal/interferer combination, the outage probability is expressed in closed form in terms of a set of parameters that characterize the particular scenario. A tabulation of these various scenarios and their associated parameters for channels of practical interest is included.

60 citations


Journal ArticleDOI
TL;DR: A double-bounded density function is used to approximate the distributions of systems with design parameters which are random variables distributed with various general and possibly non-symmetrical distributions.
Abstract: This paper presents a new method for finding optimal solutions of systems with design parameters which are random variables distributed with various general and possibly non-symmetrical distributions. A double-bounded density function is used to approximate the distributions. Specifications may require tracking constraints in time domain and stability conditions in frequency domain. Using sensitivity information, the proposed method first finds a linearized feasible region. Afterwards it attempts to place a tolerance box of the design parameters such that the region with higher yield lies in the feasible region. The yield is estimated by the joint cumulative density function over a portion of the tolerance box contained in the feasible region. Optimal designs are found for a fourth-order servomechanism and actual yields are evaluated by Monte-Carlo simulation. Copyright © 2001 John Wiley & Sons, Ltd.

60 citations


Journal ArticleDOI
TL;DR: In this paper, the Kolmogorov-Smirnov test is applied to the specific problem of fatigue crack detection and it is shown that this test not only successfully identifies the presence of the fatigue cracks but also gives an indication related to the advancement of the crack.

59 citations


Patent
02 Oct 2001
TL;DR: The Analysis of Variables Through Analog Representation (AVATAR) as mentioned in this paper is a method, process, and apparatus for the measurement and analysis of variables of different types and origins, which can be implemented through various physical means as well as through digital means or computer calculations.
Abstract: Various components of the present invention are collectively designated as Analysis of Variables Through Analog Representation (AVATAR). It is a method, processes, and apparatus for measurement and analysis of variables of different type and origin. AVATAR offers an analog solution to those problems of the analysis of variables which are normally handled by digital means. The invention allows (a) the improved perception of the measurements through geometrical analogies, (b) effective solutions of the existing computational problems of the order statistic methods, and (c) extended applicability of these methods to analysis of variables. The invention employs transformation of discrete or continuous variables into normalized continuous scalar fields, that is, into objects with mathematical properties of density and/or cumulative distribution functions. In addition to dependence on the displacement coordinates (thresholds), these objects can also depend on other parameters, including spatial coordinates (e.g., if the incoming variables are themselves scalar or vector fields), and/or time (if the variables depend on time). Moreover, this transformation of the measured variables may be implemented with respect to any reference variable. Thus, the values of the reference variable provide a common unit, or standard, for measuring and comparison of variables of different natures, for assessment of mutual dependence of these variables, and for evaluation of changes in the variables and their dependence with time.The invention enables, on a consistent general basis, a variety of new techniques for analysis of variables, which can be implemented through various physical means in continuous action machines as well as through digital means or computer calculations. Several of the elements of these new techniques do have digital counterparts, such as some rank order techniques in digital signal and image processing. 
However, this invention significantly extends the scope and applicability of these techniques and enables their analog implementation. The invention also introduces a wide range of signal analysis tools which do not exist, and cannot be defined, in the digital domain. In addition, by the present invention, all existing techniques for statistical processing of data, and for studying probability fluxes, are made applicable to analysis of any variable.

57 citations


Journal Article
TL;DR: This paper analyzes the characteristics of the tail part of packet delay distributions by a statistical analytic approach and shows that the Pareto distribution is most appropriate in the 95–99.9% region of the cumulative distribution of packet transmission delays.
Abstract: A packet transmission delay is an important quality characteristic for various applications, including real-time and data applications. In particular, it is necessary to investigate not only the whole distribution of the packet transmission delay, but also the tail part of the distribution, in order to detect packet loss. In this paper, we analyze the characteristics of the tail part of packet delay distributions by a statistical analytic approach. Our analytic results show that the Pareto distribution is most appropriate in the 95–99.9% region of the cumulative distribution of packet transmission delays. Based on our statistical analysis, we next propose an adaptive playout control algorithm suitable for real-time applications. Numerical examples show that our algorithm provides a stable packet loss ratio independent of traffic fluctuations. key words: packet transmission delay, one-way delay, distribution function, Pareto distribution, packet loss ratio

51 citations


Proceedings ArticleDOI
25 Nov 2001
TL;DR: An efficient method for generating correlated Nakagami-m fading envelope samples is presented and an accurate approximation to the inverse NakagAMI-m cumulative distribution function, valid for all values of m, is derived.
Abstract: An efficient method for generating correlated Nakagami-m fading envelope samples is presented. The new method is compared to other methods used to generate Nakagami-m random variates. An accurate approximation to the inverse Nakagami-m cumulative distribution function, valid for all values of m, is derived. Uncertainties regarding the phase distribution and the autocorrelation of the Nakagami-m fading process are discussed. The fading envelope autocorrelation is determined by simulation and asymptotic analysis.
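As an illustration of Nakagami-m variate generation (not the paper's inverse-CDF approximation, which is what enables *correlated* sequences), one can use the exact relation between the Nakagami-m and gamma distributions:

```python
import math
import random

def nakagami_sample(m, omega, rng):
    """Draw one Nakagami-m envelope sample.

    Uses the exact relation: if G ~ Gamma(shape=m, scale=omega/m), then
    sqrt(G) is Nakagami-m with spread omega = E[X^2]. This yields
    uncorrelated variates only; imposing correlation is where an
    inverse-CDF approximation such as the paper's is needed.
    """
    g = rng.gammavariate(m, omega / m)
    return math.sqrt(g)

rng = random.Random(42)
samples = [nakagami_sample(m=2.0, omega=1.0, rng=rng) for _ in range(20000)]
mean_power = sum(x * x for x in samples) / len(samples)
print(mean_power)  # sample mean power should be close to omega = 1.0
```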

46 citations


Journal ArticleDOI
TL;DR: In this paper, the consequences of three types of specification error in cumulative conditional distribution functions F(y|a) are analyzed: measurement error in y, measurement error in a, and omitted conditioning variables; conditions are obtained under which the effect of the misspecification on the computed cumulative distribution function can be signed.
Abstract: We analyze the consequences of three types of specification error in cumulative conditional distribution functions F(y|a): measurement error in y, measurement error in a, and omitted conditioning variables. The paper uses exact results to obtain conditions under which the effect of the misspecification on the computed cumulative distribution function can be signed. The effects are shown to depend on both the curvature of the true distribution and the properties of the error distribution. We illustrate our findings using a model of intergenerational mobility.

Journal ArticleDOI
01 Jun 2001-Fractals
TL;DR: In this article, the authors derived the relationship between the scaling exponents of non-cumulative and cumulative number-size distributions for linearly binned and logarithmically binned data.
Abstract: Power law cumulative number-size distributions are widely used to describe the scaling properties of data sets and to establish scale invariance. We derive the relationships between the scaling exponents of non-cumulative and cumulative number-size distributions for linearly binned and logarithmically binned data. Cumulative number-size distributions for data sets of many natural phenomena exhibit a "fall-off" from a power law at the largest object sizes. Previous work has often either ignored the fall-off region or described this region with a different function. We demonstrate that when a data set is abruptly truncated at large object size, fall-off from a power law is expected for the cumulative distribution. Functions to describe this fall-off are derived for both linearly and logarithmically binned data. These functions lead to a generalized function, the upper-truncated power law, that is independent of binning method. Fitting the upper-truncated power law to a cumulative number-size distribution determines the parameters of the power law, thus providing the scaling exponent of the data. Unlike previous approaches that employ alternate functions to describe the fall-off region, an upper-truncated power law describes the data set, including the fall-off, with a single function.
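A minimal sketch of fitting such an upper-truncated power law to cumulative number-size data, assuming the common parameterization N(r) = C (r^-α − r_T^-α); the synthetic data and all parameter values are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def upper_truncated_power_law(r, c, alpha, r_t):
    """Cumulative number of objects of size >= r for a power law abruptly
    truncated at r_t (assumed form N(r) = c * (r**-alpha - r_t**-alpha))."""
    return c * (r ** -alpha - r_t ** -alpha)

# Synthetic, noiseless cumulative number-size data with known parameters.
r = np.linspace(1.0, 9.0, 40)
n_cum = upper_truncated_power_law(r, 100.0, 1.5, 10.0)

# Bounds keep r_t above the largest observed size so the model stays positive.
popt, _ = curve_fit(
    upper_truncated_power_law, r, n_cum,
    p0=[50.0, 1.0, 20.0],
    bounds=([1e-6, 0.1, 9.0], [1e4, 5.0, 1e3]),
)
c_fit, alpha_fit, rt_fit = popt
print(alpha_fit)  # the fit recovers the scaling exponent of the data
```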

Journal ArticleDOI
TL;DR: In this paper, the authors derived an exact analytical expression for the cumulative distribution function (CDF) of the generalized McDaniel model and then compared it with numerical inversion of the characteristic function.
Abstract: Reverberation in low-frequency active sonar systems operating in shallow water has often been observed to follow non-Rayleigh statistical distributions. McDaniel's model, generalized to allow noninteger valued parameters, has shown promise as being capable of accurately representing real data with a minimal parameterization. This paper first derives an exact analytical expression for the cumulative distribution function (CDF) of the generalized McDaniel model and then compares it with numerical inversion of the characteristic function. Both methods are seen to provide adequate and equivalent precision; however the characteristic function inversion method is significantly faster. The latter CDF evaluation technique is then applied to the analysis of simulated and real data to show that, when minimal data are available, McDaniel's model can more accurately represent a wide variety of non-Rayleigh reverberation than the K or Rayleigh mixture models. This result arises from the generality of McDaniel's model with respect to the K-distribution (i.e., the K-distribution P/sub fa/ estimate can be dominated by model mismatch error) and to its compact parameterization with respect to the Rayleigh mixture (i.e., the Rayleigh mixture model P/sub fa/ estimate is usually dominated by parameter estimation error).

Journal ArticleDOI
TL;DR: A two-dimensional analog of the probability integral transform for bivariate distribution functions H1 and H2 is discussed, showing that the distribution function of H1(X,Y) depends only on the copulas C1 and C2 associated with H1 and H2.

Journal ArticleDOI
TL;DR: In this paper, asymptotic properties are studied of the hazard function estimator obtained by convolution smoothing of Beran's conditional cumulative hazard estimator; the main idea is to use smoothing in the covariates.
Abstract: Consider a regression model in which the responses are subject to random right censoring. In this model, Beran studied the nonparametric estimation of the conditional cumulative hazard function and the corresponding cumulative distribution function. The main idea is to use smoothing in the covariates. Here we study asymptotic properties of the corresponding hazard function estimator obtained by convolution smoothing of Beran's cumulative hazard estimator. We establish asymptotic expressions for the bias and the variance of the estimator, which together with an asymptotic representation lead to a weak convergence result. Also, the uniform strong consistency of the estimator is obtained.

Journal ArticleDOI
TL;DR: A novel diversity metric for use in the design of combinatorial chemistry and high-throughput screening experiments that allows meaningful comparison of data sets of different cardinality and is not affected by the curse of dimensionality, which plagues many other diversity indices.
Abstract: We describe a novel diversity metric for use in the design of combinatorial chemistry and high-throughput screening experiments. The method estimates the cumulative probability distribution of intermolecular dissimilarities in the collection of interest and then measures the deviation of that distribution from the respective distribution of a uniform sample using the Kolmogorov-Smirnov statistic. The distinct advantage of this approach is that the cumulative distribution can be easily estimated using probability sampling and does not require exhaustive enumeration of all pairwise distances in the data set. The function is intuitive, very fast to compute, does not depend on the size of the collection, and can be used to perform diversity estimates on both global and local scale. More importantly, it allows meaningful comparison of data sets of different cardinality and is not affected by the curse of dimensionality, which plagues many other diversity indices. The advantages of this approach are demonstrated using examples from the combinatorial chemistry literature.
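The core computation can be sketched as follows, under stated assumptions: Euclidean dissimilarity on descriptor vectors, probability sampling of pairs rather than exhaustive enumeration, and a uniform reference distribution on [0, 1] (the paper's actual reference is the dissimilarity distribution of a uniform sample of the space):

```python
import numpy as np
from scipy.stats import kstest

def ks_diversity(points, n_pairs=2000, seed=0):
    """KS distance between the empirical CDF of sampled pairwise
    dissimilarities (normalized to [0, 1]) and the uniform CDF.

    Pairs are probability-sampled, so no exhaustive enumeration of all
    pairwise distances is needed and cost does not grow with collection
    size."""
    rng = np.random.default_rng(seed)
    n = len(points)
    i = rng.integers(0, n, n_pairs)
    j = rng.integers(0, n, n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    d = d / d.max()                       # normalized dissimilarities
    return kstest(d, "uniform").statistic

rng = np.random.default_rng(1)
clustered = rng.normal(0.0, 0.01, size=(500, 8))  # one tight cluster
spread = rng.uniform(-1.0, 1.0, size=(500, 8))    # space-filling sample
s_clustered = ks_diversity(clustered)
s_spread = ks_diversity(spread)
print(s_clustered, s_spread)
```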

Journal ArticleDOI
TL;DR: In this article, an efficient method is presented for approximate computation of extreme value characteristics of the response of a linear structure subjected to nonstationary Gaussian excitation, where the characteristics considered are the mean and standard deviation of the extreme value and fractile levels having specific probabilities of not being exceeded by the random process within a specified time interval.
Abstract: An efficient method is presented for approximate computation of extreme value characteristics of the response of a linear structure subjected to nonstationary Gaussian excitation. The characteristics considered are the mean and standard deviation of the extreme value and fractile levels having specific probabilities of not being exceeded by the random process within a specified time interval. The approximate procedure can significantly facilitate the utilization of nonstationary models in engineering practice, since it avoids computational difficulties associated with direct application of extreme value theory. The method is based on the approximation of the cumulative distribution function (CDF) of the extreme value of a nonstationary process by the CDF of a corresponding “equivalent” stationary process. Approximate procedures are developed for both the Poisson and Vanmarcke approaches to the extreme value problem, and numerical results are obtained for an example problem. These results demonstrate that ...
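The Poisson approach mentioned above treats level upcrossings as independent events; for a zero-mean stationary Gaussian process with standard deviations σ (process) and σ̇ (derivative), Rice's mean upcrossing rate then gives a closed-form extreme-value CDF for the "equivalent" stationary process. A sketch with invented parameter values:

```python
import math

def poisson_extreme_cdf(a, duration, sigma, sigma_dot):
    """P(max of the process over [0, duration] <= a) under the Poisson
    (independent upcrossings) approximation, using Rice's mean upcrossing
    rate nu(a) for a zero-mean stationary Gaussian process."""
    rate = (sigma_dot / (2.0 * math.pi * sigma)) * math.exp(
        -(a * a) / (2.0 * sigma * sigma)
    )
    return math.exp(-rate * duration)

# CDF of the 10 s extreme of a unit-variance process, at three levels.
cdf_2, cdf_3, cdf_4 = (
    poisson_extreme_cdf(a, 10.0, 1.0, 6.283) for a in (2.0, 3.0, 4.0)
)
print(cdf_2, cdf_3, cdf_4)  # higher levels are exceeded less often
```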

Journal ArticleDOI
TL;DR: In this paper, empirical cumulative distribution functions (CDFs) are used to infer probability distribution functions for orbital periods and eccentricities, and a joint probability density for period-eccentricity pairs in each population is derived.
Abstract: We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases the EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and a standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between the EPC and SB populations must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
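Comparisons of this kind are commonly made with the two-sample Kolmogorov-Smirnov test on the empirical CDFs; the sketch below uses invented eccentricity samples drawn from the Gaussian form described above, not the paper's data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def draw_eccentricities(n):
    """Hypothetical eccentricities: Gaussian with mean 0.35 and standard
    deviation 0.2 (the form reported above), truncated to [0, 1)."""
    e = rng.normal(0.35, 0.2, size=3 * n)
    return e[(e >= 0.0) & (e < 1.0)][:n]

epc = draw_eccentricities(80)    # stand-in for the planet candidates
sb = draw_eccentricities(150)    # stand-in for the spectroscopic binaries
stat, p_value = ks_2samp(epc, sb)
print(stat, p_value)  # same parent distribution: expect a small KS distance
```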

Journal ArticleDOI
TL;DR: In this paper, a general method for generating correlated survival rates is proposed, which consists of applying the transformation Φ, the cumulative distribution function of the standard normal distribution, to suitably correlated normal variates X1, X2, …

Journal ArticleDOI
TL;DR: A one-dimensional lattice random walk with an absorbing boundary at the origin and a movable partial reflector is studied, suggesting a mechanism for nonuniversal kinetic critical behavior, observed in models with an infinite number of absorbing configurations.
Abstract: We study a one-dimensional lattice random walk with an absorbing boundary at the origin and a movable partial reflector. On encountering the reflector at site x, the walker is reflected (with probability r) to x-1 and the reflector is simultaneously pushed to x+1. Iteration of the transition matrix, and asymptotic analysis of the probability generating function show that the critical exponent delta governing the survival probability varies continuously between 1/2 and 1 as r varies between 0 and 1. Our study suggests a mechanism for nonuniversal kinetic critical behavior, observed in models with an infinite number of absorbing configurations.
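A direct Monte Carlo simulation of the walk (one plausible reading of the reflection rule; the paper itself works with the transition matrix and generating functions rather than simulation) illustrates that stronger reflection lowers the survival probability:

```python
import random

def survives(t_max, r, rng):
    """Run one walk for t_max steps. Absorbing wall at the origin; on
    stepping onto the reflector's site, the reflector is pushed one site
    outward and, with probability r, the walker is bounced back (our
    reading of the rule in the abstract)."""
    x, reflector = 1, 2
    for _ in range(t_max):
        nx = x + rng.choice((-1, 1))
        if nx == 0:
            return False            # absorbed at the origin
        if nx == reflector:
            reflector += 1          # reflector pushed outward
            if rng.random() < r:
                nx = x              # walker bounced back
        x = nx
    return True

rng = random.Random(0)
n_walks = 4000
p_r0 = sum(survives(200, 0.0, rng) for _ in range(n_walks)) / n_walks
p_r1 = sum(survives(200, 1.0, rng) for _ in range(n_walks)) / n_walks
print(p_r0, p_r1)  # survival decays faster with full reflection (delta -> 1)
```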

Journal ArticleDOI
TL;DR: This paper compares the proposed approach to the one based on a mixture of kernels and shows through computer simulations that comparable results may be obtained with limited expense in computational efforts.
Abstract: In this paper we deal with the problem of approximating the probability density function of a signal by means of adaptive activation function neurons. We compare the proposed approach to the one based on a mixture of kernels and show through computer simulations that comparable results may be obtained with limited expense in computational efforts.

Journal ArticleDOI
TL;DR: An alternative, unified, semi-analytical approach for the evaluation of the cumulative distribution function (cdf) of the weighted sum of L independent Rician and m-Nakagami envelopes with or without the presence of Additive White Gaussian Noise (AWGN) is presented.
Abstract: An alternative, unified, semi-analytical approach for the evaluation of the cumulative distribution function (cdf) of the weighted sum of L independent Rician (or Rayleigh as a special case) and m-Nakagami envelopes, with or without the presence of Additive White Gaussian Noise (AWGN), is presented. The cdf is evaluated directly in a nested mode via the Hermite numerical integration technique. The proposed formulation avoids the calculation of complex functions and can be efficiently applied to practical wireless applications when L ≤ 3, using arbitrary statistical characteristics for the modeling parameters. Moreover, it can also be used to control the accuracy of other techniques when L > 3. Comments, comparison with other existing techniques, and useful curves for several practical wireless applications, such as the calculation of error bounds for coding on fading channels in mobile satellite applications and Equal Gain Combining (EGC), are also presented. Finally, the relation between the distributions of the sums of m-Nakagami and Rice envelopes is investigated and discussed.

Journal ArticleDOI
TL;DR: It is shown that combiner errors affect the mean combined SNR negligibly in comparison to their effect on the deep fades, and closed-form expressions for the probability density function, cumulative distribution function and moment generating function of the combiner output SNR statistic are derived.
Abstract: Selection diversity combining (SDC) is one of the simplest and most commonly implemented diversity mechanisms for mitigating the detrimental effects of deep fades experienced on wireless channels. While SDC improves the mean combined signal-to-noise ratio (SNR) over that of a single branch with increasing diversity order, its main advantage is the reduction of the probability of deep fades. The effect of Gaussian errors in the branch gain estimates on the SDC receiver performance is investigated by deriving new closed-form expressions for the probability density function, cumulative distribution function and the moment generating function of the combiner output SNR statistic. Mathematical expressions for quantifying the degradation in the mean combined SNR, outage probability and the average symbol error rate of a broad class of binary and multilevel modulation schemes owing to imperfect branch SNR estimates in Rayleigh fading are also derived. It is shown that combiner errors affect the mean combined SNR negligibly in comparison to their effect on the deep fades. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
Paolo Fabbri1
TL;DR: In this paper, a nonparametric geostatistical procedure, indicator kriging, was used to identify zones in the geothermal Euganean aquifer with a high probability that the temperature is more than 80°C and zones with a high probability that it is less than 70°C.
Abstract: In the geothermal Euganean area (Veneto region, NE Italy) water temperatures range from 60 to 86°C. The aquifer considered is rocky and the production wells in this study have a depth ranging from 300 to 500 m. For exploitation purposes, it is important to identify zones with a high probability that the temperature is more than 80°C and zones with a high probability that the temperature is less than 70°C. First, variographic analysis was conducted from 186 temperature data of thermal ground waters. This analysis gave results that are consistent with the main regional tectonic structure, the NW-SE trending “Schio-Vicenza” fault system. Then indicator variograms of the second, fifth, and eighth decile were compared to identify the spatial continuity at different thresholds. The unacceptability of a multigaussian hypothesis of the random function and the necessity to know the cumulative distribution function in any location, suggested the use of a nonparametric geostatistical procedure such as indicator kriging. Thus, indicator variograms at the cutoffs of 65, 70, 73, 75, 78, 80, 82, and 84°C were analyzed, fitted, and used during the indicator kriging procedure. Finally, probability maps were derived from postprocessing indicator kriging results. These maps identified scarcely exploited areas with a high probability of the temperature being higher than 80°C, between 70 and 80°C and areas with high probability of the temperature being below 70°C.

Patent
31 Aug 2001
TL;DR: The posterior cumulative distribution function permits a determination, to a desired probability, of whether or not the bit-error rate is less than a desired bit-error-rate limit; testing stops once such a determination is made or a maximal test time has been reached.
Abstract: A bit-error rate is tested in a minimal necessary time period. A block of bits is measured and a cumulative number of bit errors is counted in parallel with calculation of a posterior cumulative distribution function. The posterior cumulative distribution function permits a determination to a desired probability whether or not the bit-error rate is less than a desired bit-error-rate limit. The measurement of blocks of bits and accumulation of bit errors relating thereto and calculation of the posterior cumulative distribution function and making of determinations based thereon continue in parallel until one of three events is detected. The three events are: 1) the bit-error rate is less than the desired bit-error rate limit to the desired probability; 2) the bit-error rate is greater than or equal to the desired bit-error rate limit to the desired probability; and 3) a maximal test time has been reached. Upon detection of any of these three conditions, the test is stopped.
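One way to realize such a posterior CDF is a Beta posterior for the bit-error rate; the uniform prior and the thresholds below are illustration choices, not taken from the patent:

```python
from scipy.stats import beta

def ber_test_decision(n_bits, n_errors, ber_limit, confidence=0.99):
    """Decision for one accumulation step of the sequential BER test.

    Assumes a uniform Beta(1, 1) prior on the BER (an assumption for
    illustration): after n_errors errors in n_bits measured bits, the
    posterior is Beta(1 + n_errors, 1 + n_bits - n_errors), and its CDF
    at ber_limit is the posterior probability that the true BER is below
    the limit."""
    posterior = beta(1 + n_errors, 1 + n_bits - n_errors)
    p_below = posterior.cdf(ber_limit)
    if p_below >= confidence:
        return "pass"        # BER < limit to the desired probability
    if 1.0 - p_below >= confidence:
        return "fail"        # BER >= limit to the desired probability
    return "continue"        # keep measuring blocks (until max test time)

d_pass = ber_test_decision(10_000_000, 2, 1e-5)   # few errors, many bits
d_fail = ber_test_decision(1_000_000, 200, 1e-5)  # error rate well above limit
d_more = ber_test_decision(100_000, 0, 1e-5)      # not enough bits yet
print(d_pass, d_fail, d_more)
```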

Journal ArticleDOI
TL;DR: It is shown that there does not exist a stable mass distribution if the cells grow exponentially, and one can consider the cell mass growth as a linear dynamical system with a stochastic perturbation.

Journal ArticleDOI
TL;DR: In this article, a physically-based hemispheric hydrologic model, PBHHM, is used to model the hydrological system response to an El Nino/Southern Oscillation (ENSO) forcing event.

Journal ArticleDOI
TL;DR: In this article, the authors derived closed-form expressions for the mean and variance of advective solute travel time under such non-uniform flow conditions, based on an analytical direct evaluation method (DEM).
Abstract: Groundwater recharge from the unsaturated zone results in a nonuniform mean groundwater flow. We report closed-form expressions for the mean and variance of advective solute travel time under such nonuniform flow conditions. These expressions are derived directly from velocity statistics, based on an analytical direct evaluation method (DEM). An alternative, indirect evaluation method (IEM) has previously been used for deriving and evaluating the cumulative distribution function of solute travel time in nonuniform mean flow, through its relation to an assumed Gaussian probability density function of solute displacement. For negative groundwater recharge we argue that neither of the two analytical methods DEM and IEM can be considered reliable. For positive groundwater recharge we show that (1) the DEM results reported here remain as reliable for nonuniform as for uniform mean groundwater flow; (2) an assumed lognormal cumulative distribution function based on the DEM-derived travel time moments is consistent with the IEM-derived solute travel time cumulative distribution function (cdf), which is in turn consistent with numerical simulation results reported in the literature; and (3) the detailed differences between the DEM-based lognormal cdf and the IEM-derived cdf for nonuniform flow are such that the former (DEM) may exhibit even better agreement with reported numerical simulation results than the latter (IEM).
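The moment-matching step behind a DEM-based lognormal travel-time cdf can be sketched as follows; the numerical values are invented for illustration:

```python
import math

def lognormal_cdf_from_moments(t, mean, variance):
    """CDF at time t of a lognormal travel time whose arithmetic mean and
    variance are matched to given (e.g. DEM-derived) moments."""
    sigma2 = math.log(1.0 + variance / mean**2)   # log-variance
    mu = math.log(mean) - 0.5 * sigma2            # log-mean
    z = (math.log(t) - mu) / math.sqrt(sigma2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Invented example: mean travel time 100 days, variance 2500 days^2.
cdf_50 = lognormal_cdf_from_moments(50.0, 100.0, 2500.0)
cdf_100 = lognormal_cdf_from_moments(100.0, 100.0, 2500.0)
cdf_200 = lognormal_cdf_from_moments(200.0, 100.0, 2500.0)
print(cdf_50, cdf_100, cdf_200)
```

Because the lognormal median lies below its mean, the cdf evaluated at the mean travel time exceeds 0.5.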


Patent
Dean M. Ford1
06 Jun 2001
TL;DR: In this article, confidence levels on a probability distribution are analyzed based on data sets of varying size using a curve such as a Weibull distribution curve, computations can be made such as the odds that a percentage of a plurality of parts will fail after a prescribed time.
Abstract: Confidence levels on a probability distribution are analyzed based on data sets of varying size. Using a curve such as a Weibull distribution curve, computations can be made such as the odds that a percentage of a plurality of parts will fail after a prescribed time.