
Showing papers on "Cumulative distribution function published in 1994"


Journal ArticleDOI
TL;DR: In this paper, the authors consider kernel estimation of a univariate density whose support is a compact interval and propose to transform the data to a density that has its first derivative equal to 0 at both boundaries.
Abstract: We consider kernel estimation of a univariate density whose support is a compact interval. If the density is non-zero at either boundary, then the usual kernel estimator can be seriously biased. 'Reflection' at a boundary removes some bias, but unless the first derivative of the density is 0 at the boundary the estimator with reflection can still be much more severely biased at the boundary than in the interior. We propose to transform the data to a density that has its first derivative equal to 0 at both boundaries. The density of the transformed data is estimated, and an estimate of the density of the original data is obtained by a change of variables. The transformation is selected from a parametric family, which is allowed to be quite general in our theoretical study. We propose algorithms where the transformation is either a quartic polynomial, a beta cumulative distribution function (CDF) or a linear combination of a polynomial and a beta CDF. The last two types of transformation are designed to accommodate possible poles at the boundaries. The first two algorithms are tested on simulated data and compared with an adjusted kernel method of Rice. We find that our proposal performs similarly to Rice's for densities with one-sided derivatives at the boundaries. Unlike Rice's method, our proposal is guaranteed to produce non-negative estimates. This can be a distinct advantage when the density is 0 at either boundary. Our algorithm for densities with poles outperforms the Rice adjustment when the density has a pole at a boundary.
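A minimal Python sketch of the transform-then-reflect mechanics, assuming a fixed Beta(2, 2) transformation and a normal-reference bandwidth; the paper instead selects the transformation from a parametric family so that the transformed density has zero derivative at the boundaries:

```python
import numpy as np
from scipy import stats

def reflect_kde(t, grid, bw):
    """Gaussian kernel density estimate on [0, 1] with reflection at 0 and 1."""
    data = np.concatenate([t, -t, 2.0 - t])               # reflected copies
    k = stats.norm.pdf(grid[:, None], loc=data[None, :], scale=bw)
    return k.sum(axis=1) / len(t)                         # normalize by original n

def transform_kde(x, grid, a=2.0, b=2.0):
    """Transform x by a Beta(a, b) CDF, estimate the transformed density
    with reflection, then back-transform by a change of variables."""
    g = stats.beta(a, b)
    t = g.cdf(x)
    bw = 1.06 * t.std() * len(t) ** -0.2                  # normal-reference rule
    f_t = reflect_kde(t, g.cdf(grid), bw)
    return f_t * g.pdf(grid)                              # f_X(x) = f_T(G(x)) G'(x)

x = stats.beta(0.7, 1.0).rvs(500, random_state=1)         # pole at the 0 boundary
grid = np.linspace(0.001, 0.999, 200)
f_hat = transform_kde(x, grid)                            # nonnegative everywhere
```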

213 citations


Journal ArticleDOI
TL;DR: In this article, the authors relax the assumption that the cumulative distribution function of the lead time demand is completely known and merely assume that the first two moments of F are known and finite.
Abstract: Stochastic inventory models, such as continuous review models and periodic review models, require information on the lead time demand. However, information about the form of the probability distribution of the lead time demand is often limited in practice. We relax the assumption that the cumulative distribution function, say F, of the lead time demand is completely known and merely assume that the first two moments of F are known and finite. The minmax distribution free approach for the inventory model consists of finding the most unfavourable distribution for each decision variable and then minimizing over the decision variable. We solve both the continuous review model and the periodic review model with a mixture of backorders and lost sales using the minmax distribution free approach.
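The worst-case expected shortage over all distributions with given mean and variance has a closed form (a Scarf-type bound), which makes the minmax approach easy to sketch. A hedged Python illustration for a pure-backorder continuous review model, with all cost parameters hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: lead-time demand moments (mu, sigma), annual demand D,
# ordering cost A, holding cost h, backorder penalty p.
mu, sigma, D, A, h, p = 100.0, 30.0, 1200.0, 50.0, 2.0, 10.0

def worst_case_shortage(r):
    # Scarf-type bound: max over all F with the given first two moments
    # of the expected shortage E[(X - r)^+]
    return 0.5 * (np.sqrt(sigma**2 + (r - mu)**2) - (r - mu))

def cost(v):
    Q, r = v
    s = worst_case_shortage(r)          # most unfavourable distribution
    return A * D / Q + h * (Q / 2.0 + r - mu) + p * D * s / Q

res = minimize(cost, x0=[200.0, mu + sigma], method="Nelder-Mead")
Q_star, r_star = res.x                  # minmax order quantity and reorder point
```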

153 citations


Journal ArticleDOI
TL;DR: In this article, a probability approach for life prediction is developed and illustrated through a simplified model for the pitting corrosion and corrosion fatigue crack growth in aluminum alloys in aqueous environments.
Abstract: A probability approach for life prediction is developed and illustrated through a simplified model for the pitting corrosion and corrosion fatigue crack growth in aluminum alloys in aqueous environments. A method for estimation of the cumulative distribution function (CDF) for the lifetime is demonstrated by using an assumed CDF for each key random variable (RV). The basic aim of this approach is to make predictions for the lifetime, reliability, and durability beyond the range of typical data by integrating the CDFs of the individual RVs into a mechanistically based model. The contribution of each key RV is considered, and its significance is assessed. Thus, the usefulness of probability-based modeling is demonstrated. It is noted that physically realistic parameters were assumed for the illustrations. As such, the results from analysis of the model qualitatively agree quite well with experimental observations. However, these results should not be construed to represent behavior in actual systems. Because of these assumptions, confidence levels for the predictions are not addressed. 9 refs.
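A sketch of the general recipe in Python: assume a CDF for each key random variable, sample them, push the samples through a simplified lifetime model, and read off the empirical lifetime CDF. All distributions and parameter values below are illustrative placeholders, not the paper's calibrated inputs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# Illustrative CDFs for the key random variables of a simplified model:
# a pit grows to a critical size, after which a fatigue crack grows to failure.
pit_rate   = stats.lognorm(s=0.4, scale=1e-3).rvs(n, random_state=rng)      # mm/h
a_crit     = stats.weibull_min(c=2.0, scale=0.05).rvs(n, random_state=rng)  # mm
crack_life = stats.lognorm(s=0.3, scale=5e3).rvs(n, random_state=rng)       # h

life = a_crit / pit_rate + crack_life   # pit-growth time plus crack-growth time

t = np.sort(life)
cdf = np.arange(1, n + 1) / n           # empirical CDF of the lifetime
t10 = np.quantile(life, 0.10)           # e.g. a 10th-percentile life estimate
```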

88 citations


Journal ArticleDOI
TL;DR: In this article, the link function is modeled as a strictly increasing cumulative distribution function, represented as a mixture of Beta cumulative distribution functions (a family dense within the collection of all continuous densities on [0, 1]); the model is fitted using sampling-based methods, in particular a tailored Metropolis-within-Gibbs algorithm.
Abstract: We generalize the model by incorporating the link function as an unknown in the model. Since the link function is usually taken to be strictly increasing, by a strictly increasing transformation of its range to the unit interval we can model it as a strictly increasing cumulative distribution function. The transformation results in a domain which is [0, 1]. We model the cumulative distribution function as a mixture of Beta cumulative distribution functions, noting that the latter family is dense within the collection of all continuous densities on [0, 1]. For the fitting of the model we take a Bayesian approach, encouraging vague priors, to focus upon the likelihood. We discuss choices of such priors as well as the integrability of the resultant posteriors. Implementation of the Bayesian approach is carried out using sampling based methods, in particular, a tailored Metropolis-within-Gibbs algorithm. An illustrative example utilising data involving wave damage to cargo ships is provided.
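A small Python illustration of the modeling idea: a finite mixture of Beta CDFs is itself a strictly increasing CDF on [0, 1], so it can serve as a flexible link. The weights and shape pairs here are fixed for illustration; in the paper they are unknowns sampled by MCMC:

```python
import numpy as np
from scipy import stats

# Fixed weights and shapes for illustration; in the paper these are unknowns.
weights = np.array([0.5, 0.3, 0.2])
shapes = [(1.0, 1.0), (2.0, 5.0), (5.0, 2.0)]

def link(u):
    """Mixture of Beta CDFs: strictly increasing from 0 to 1 on [0, 1]."""
    u = np.asarray(u, dtype=float)
    return sum(w * stats.beta(a, b).cdf(u) for w, (a, b) in zip(weights, shapes))

print(link(np.linspace(0.0, 1.0, 5)))   # monotone, as a CDF must be
```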

83 citations


Journal ArticleDOI
TL;DR: In this article, the performance of four algorithms (full indicator cokriging, adjacent cutoffs indicator cokriging, multiple indicator kriging, and median indicator kriging) for modeling conditional cumulative distribution functions (ccdf) is compared; in practice the simpler approximations, which involve less variogram modeling effort and smaller computational cost, perform as well as full cokriging.
Abstract: This paper compares the performance of four algorithms (full indicator cokriging, adjacent cutoffs indicator cokriging, multiple indicator kriging, median indicator kriging) for modeling conditional cumulative distribution functions (ccdf). The latter three algorithms are approximations to the theoretically better full indicator cokriging in the sense that they disregard cross-covariances between some indicator variables or they consider that all covariances are proportional to the same function. Comparative performance is assessed using a reference soil data set that includes 2649 locations at which both topsoil copper and cobalt were measured. For all practical purposes, indicator cokriging does not perform better than the other simpler algorithms, which involve less variogram modeling effort and smaller computational cost. Furthermore, the number of order relation deviations is found to be higher for cokriging algorithms, especially when constraints on the kriging weights are applied.

83 citations


Journal ArticleDOI
In Choi
TL;DR: In this paper, the authors proposed residual-based tests for the null of level and trend-stationarity, which are analogs of the LM test for an MA unit root, and the tests display stable size when the lag truncation number for the long-run variance estimation is chosen appropriately.
Abstract: This paper proposes residual-based tests for the null of level- and trend-stationarity, which are analogs of the LM test for an MA unit root. Asymptotic distributions of the tests are nonstandard, but they are expressed in a unified manner in terms of stochastic integrals. In addition, the tests are shown to be consistent. By expressing the distributions as a function of a chi-square variable with one degree of freedom, the exact limiting probability density and cumulative distribution functions are obtained, and the exact limiting cumulative distribution functions are tabulated. Finite sample performance of the proposed tests is studied by simulation. The tests display stable size when the lag truncation number for the long-run variance estimation is chosen appropriately, but their power is generally not high at the selected sample sizes. The test for the null of trend-stationarity is applied to U.S. macroeconomic time series along with the Phillips-Perron Z(⋯) test. For some monthly and annual series, the two tests provide consistent inferential results. But for most series, neither of the two contradictory nulls of trend-stationarity and a unit root can be rejected at conventional significance levels.

65 citations


Journal ArticleDOI
01 Sep 1994
TL;DR: In this paper, the authors relax the assumption that the cumulative distribution function, say F of the lead time demand is completely known and merely assume that the first two moments of F are known and finite.
Abstract: Stochastic inventory models require information on the lead time demand. However, the distributional information of the lead time demand is often limited in practice. We relax the assumption that the cumulative distribution function, say F, of the lead time demand is completely known and merely assume that the first two moments of F are known and finite. The distribution free approach for the inventory model consists of finding the most unfavorable distribution for each decision variable and then minimizing over the decision variable. We apply the distribution free approach to the continuous review inventory system with a service level constraint. We develop an iterative procedure to find the optimal order quantity and reorder level.
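One plausible way to code the distribution-free service-level model (not necessarily the paper's exact iterative procedure): set the Scarf-type worst-case shortage equal to the allowed shortage per cycle (1 − β)Q, solve for the reorder point in closed form, and minimize the remaining cost over Q. All parameter values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical inputs: lead-time demand moments, annual demand, costs,
# and a fill-rate-style service level beta.
mu, sigma, D, A, h, beta = 100.0, 30.0, 1200.0, 50.0, 2.0, 0.98

def r_of_Q(Q):
    # Worst-case expected shortage per cycle equals the allowance (1 - beta) Q:
    # 0.5 * (sqrt(sigma^2 + d^2) - d) = (1 - beta) * Q, with d = r - mu.
    k = 2.0 * (1.0 - beta) * Q
    return mu + (sigma**2 - k**2) / (2.0 * k)

def cost(Q):
    # ordering plus holding cost; the service constraint is already built in
    return A * D / Q + h * (Q / 2.0 + r_of_Q(Q) - mu)

res = minimize_scalar(cost, bounds=(10.0, 2000.0), method="bounded")
Q_star, r_star = res.x, r_of_Q(res.x)
```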

65 citations


Journal ArticleDOI
TL;DR: In this article, the time-dependent moments of the workload process in the M/G/1 queue were characterized in terms of a differential equation involving lower moment functions and the timedependent server-occupation probability.
Abstract: In this paper we describe the time-dependent moments of the workload process in the M/G/1 queue. The kth moment as a function of time can be characterized in terms of a differential equation involving lower moment functions and the time-dependent server-occupation probability. For general initial conditions, we show that the first two moment functions can be represented as the difference of two nondecreasing functions, one of which is the moment function starting at zero. The two nondecreasing components can be regarded as probability cumulative distribution function (cdf's) after appropriate normalization. The normalized moment functions starting empty are called moment cdf's; the other normalized components are called moment-difference cdf's. We establish relations among these cdf's using stationary-excess relations. We apply these relations to calculate moments and derivatives at the origin of these cdf's. We also obtain results for the covariance function of the stationary workload process. It is inte...
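The time-dependent moments are easy to approximate by simulation, which is a useful cross-check on the differential-equation characterization. A sketch for the initially empty case, using exponential service times purely for brevity (any service distribution could be substituted):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t_grid, n_rep = 0.8, np.linspace(0.0, 20.0, 41), 5000

def workload_at(times, lam, rng):
    """Workload V(t) of an initially empty single-server queue: V decreases
    at rate 1 between arrivals and jumps by the service time at each arrival."""
    horizon = times[-1]
    n = rng.poisson(lam * horizon)
    arrivals = np.sort(rng.uniform(0.0, horizon, n))
    services = rng.exponential(1.0, n)          # exponential service for brevity
    v, last, out, i = 0.0, 0.0, [], 0
    for t in times:
        while i < n and arrivals[i] <= t:
            v = max(0.0, v - (arrivals[i] - last)) + services[i]
            last = arrivals[i]
            i += 1
        out.append(max(0.0, v - (t - last)))
    return np.array(out)

sample = np.array([workload_at(t_grid, lam, rng) for _ in range(n_rep)])
m1 = sample.mean(axis=0)          # E[V(t)]: nondecreasing from 0, a "moment cdf"
m2 = (sample**2).mean(axis=0)     # second moment function, before normalization
```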

64 citations


Journal ArticleDOI
TL;DR: An upper bound for the rate of convergence is given in terms of the objective functions of the associated deterministic problems, and it is shown how it can be applied to the derivation of the Law of Iterated Logarithm for the optimal solutions.
Abstract: In this paper we study stability of optimal solutions of stochastic programming problems with fixed recourse. An upper bound for the rate of convergence is given in terms of the objective functions of the associated deterministic problems. As an example it is shown how this bound can be applied to derive the Law of Iterated Logarithm for the optimal solutions. It is also shown that in the case of simple recourse this upper bound implies upper Lipschitz continuity of the optimal solutions with respect to the Kolmogorov-Smirnov distance between the corresponding cumulative probability distribution functions.

60 citations


Journal ArticleDOI
TL;DR: The presently developed probability density function shows excellent agreement with the histogram constructed from a record exhibiting strong non-Gaussian characteristics.

59 citations


Journal ArticleDOI
TL;DR: In this paper, the cumulative probability distributions for stream order, stream length, contributing area, and energy dissipation per unit length of channel are derived, for an ordered drainage system, from Horton's laws of network composition.
Abstract: The cumulative probability distributions for stream order, stream length, contributing area, and energy dissipation per unit length of channel are derived, for an ordered drainage system, from Horton's laws of network composition. It is shown how these distributions can be related to the fractal nature of single rivers and river networks. Finally, it is shown that the structure proposed here for these probability distributions is able to fit the observed frequency distributions, and their deviations from straight lines in a log-log plot.
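A small numeric illustration, assuming typical Horton ratios: the exceedance probability of stream order is geometric in the bifurcation ratio, which translates into a power law in stream length through the length ratio:

```python
import numpy as np

# Typical field values: bifurcation ratio ~4, length ratio ~2; basin order 8.
R_B, R_L, omega = 4.0, 2.0, 8

w = np.arange(1, omega + 1)
N = R_B ** (omega - w)                       # Horton: number of order-w streams
P_exc = N[::-1].cumsum()[::-1] / N.sum()     # P(order >= w): geometric decay

L = R_L ** (w - 1)                           # mean length of order-w streams
# In a log-log plot, P(length >= L) against L is a straight line with slope
# -log(R_B) / log(R_L), the fractal scaling related to Horton's laws.
slope = -np.log(R_B) / np.log(R_L)
```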

Journal ArticleDOI
TL;DR: In this article, an algorithm is presented to post process simulated realizations or any spatial distribution to reproduce the target marginal cumulative distribution function (cdf) in the case of continuous variables or target proportions of categorical variables, yet honoring the conditioning data.
Abstract: Stochastic simulation techniques which do not depend on a back transform step to reproduce a prior marginal cumulative distribution function (cdf) may lead to deviations from that distribution which are deemed unacceptable. This paper presents an algorithm to post-process simulated realizations or any spatial distribution to reproduce the target cdf in the case of continuous variables or target proportions in the case of categorical variables, yet honoring the conditioning data. Validations conducted for both continuous and categorical cases show that, by adjusting the value of a correction level parameter ω, the target cdf or proportions can be well reproduced without significant modification of the spatial correlation patterns of the original simulated realizations.
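A stripped-down sketch of such a post-processing step: a rank-preserving quantile correction with a level parameter ω blending the original values and the target quantiles. The paper's algorithm additionally honors conditioning data, which this global version ignores:

```python
import numpy as np
from scipy import stats

def match_cdf(z, target_ppf, omega=1.0):
    """Rank-preserving correction of the values z toward a target cdf.

    omega = 1 maps the empirical quantiles fully onto the target; smaller
    omega trades reproduction of the target cdf against preservation of the
    original values (and hence of their spatial correlation patterns)."""
    p = (stats.rankdata(z) - 0.5) / len(z)       # empirical quantile of each value
    return (1.0 - omega) * z + omega * target_ppf(p)

rng = np.random.default_rng(3)
z = rng.gamma(2.0, 1.0, 1000)                    # stand-in simulated realization
z_corr = match_cdf(z, stats.norm(0.0, 1.0).ppf, omega=0.8)
```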


Journal ArticleDOI
TL;DR: In this article, it is shown that consistent estimation is not possible from a single record-breaking sample and that replication is required for global results; the proposed distribution function and density estimators are shown to be strongly consistent and asymptotically normal as m → ∞.
Abstract: In some experiments, such as destructive stress testing and industrial quality control experiments, only values smaller than all previous ones are observed. Here, for such record-breaking data, kernel estimation of the cumulative distribution function and smooth density estimation is considered. For a single record-breaking sample, consistent estimation is not possible, and replication is required for global results. For m independent record-breaking samples, the proposed distribution function and density estimators are shown to be strongly consistent and asymptotically normal as m → ∞. Also, for small m, the mean squared errors and biases of the estimators and their smoothing parameters are investigated through computer simulations.

Book
26 Nov 1994
TL;DR: An introductory statistics text covering the organization and description of data, probability, random variables and their distributions, sampling distributions, estimation and hypothesis testing for one and two populations, chi-square tests, analysis of variance, regression, and time series analysis.
Abstract: Organizing Data; Numerical Descriptive Measures; Probability; Discrete Random Variables and Their Probability Distributions; Continuous Random Variables and Their Probability Distributions; Sampling Distributions; Estimation of the Mean and Proportion; Hypothesis Tests About the Mean and Proportion; Estimation and Hypothesis Testing: Two Populations; Chi-Square Tests; Analysis of Variance; Simple Linear Regression; Multiple Regression; Time Series Analysis.

Journal ArticleDOI
TL;DR: The S-distribution as discussed by the authors is defined by the ordinary differential equation dF/dX = α(F^g − F^h), F0 = F(X0), where F is the cumulative distribution of the random variable X, and α, g, h, and F0 are parameters.
Abstract: The S-distribution is defined by the ordinary differential equation dF/dX = α(F^g − F^h), F0 = F(X0), where F is the cumulative distribution of the random variable X, and α, g, h, and F0 are parameters. The S-distribution was recently described in this journal as a tool for the approximation and classification of univariate, unimodal continuous probability distributions. This article shows that the S-distribution rather accurately models the commonly used univariate discrete distributions.
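Because the S-distribution is defined through an ODE, its CDF is conveniently evaluated by numerical integration. A sketch with illustrative parameter values (g < h keeps F increasing on (0, 1)):

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, g, h = 1.0, 0.75, 2.0          # illustrative parameters, g < h
X0, F0 = 0.0, 0.5                     # anchor the CDF, e.g. at its median

rhs = lambda x, F: alpha * (F**g - F**h)

xs = np.linspace(X0, 6.0, 200)        # right tail: integrate forwards
right = solve_ivp(rhs, (X0, xs[-1]), [F0], t_eval=xs, rtol=1e-8)

xs_l = np.linspace(X0, -6.0, 200)     # left tail: integrate backwards
left = solve_ivp(rhs, (X0, xs_l[-1]), [F0], t_eval=xs_l, rtol=1e-8)

F = np.concatenate([left.y[0][::-1], right.y[0][1:]])   # full CDF on a grid
```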

Journal ArticleDOI
F.T. Hsu, James S. Lehnert
TL;DR: In this paper, the conditional density functions and the conditional cumulative distribution functions of random variables that characterize the MAI were derived for generalized quadri-phase spread-spectrum systems.
Abstract: A proposition that is useful for studying multiple-access interference (MAI) in generalized quadriphase spread-spectrum systems is established. The proposition provides the conditional density functions and the conditional cumulative distribution functions of random variables that characterize the MAI. It is shown that these characterizing random variables converge almost surely to the true MAI. The proposition provides a method to study the effect of the chip waveform on the performance of the system and provides a means for the design and analysis of spectrally efficient systems with continuous-phase or M-ary phase-shift-keyed modulation. The proposition allows the evaluation of the average error probability with arbitrary accuracy without making a Gaussian approximation for the MAI and is also useful for studying systems that send data in packets, where the delays of the interferers are often fixed for the duration of a packet. For illustration, the proposition is applied to computing the average error probability of generalized quadriphase direct-sequence spread-spectrum multiple-access communication systems with various chip waveforms and offset parameters corresponding to commonly used modulation schemes.

Journal ArticleDOI
TL;DR: In this article, the exact and limiting cumulative distribution and probability density functions of the Durbin-Watson (DW) statistic are derived for a unit root in time series regression.

Journal ArticleDOI
TL;DR: In this article, a Fortran algorithm that can be used to compute the cdf of the product of two normally distributed random variables is presented.
Abstract: This paper provides a Fortran algorithm that can be used to compute the cdf of the product of two normally distributed random variables. We also give references that provide mathematical properties, tables, and applications of this distribution.
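The same cdf is straightforward to compute in any language by conditioning on one of the factors; a Python sketch (not the paper's Fortran algorithm) with a Monte Carlo sanity check:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def product_normal_cdf(z, mx=0.0, sx=1.0, my=0.0, sy=1.0):
    """P(X * Y <= z) for independent X ~ N(mx, sx^2), Y ~ N(my, sy^2),
    computed by conditioning on X and integrating numerically."""
    fX = stats.norm(mx, sx).pdf
    FY = stats.norm(my, sy).cdf
    pos = quad(lambda x: fX(x) * FY(z / x), 0.0, np.inf)[0]           # X > 0
    neg = quad(lambda x: fX(x) * (1.0 - FY(z / x)), -np.inf, 0.0)[0]  # X < 0
    return pos + neg

rng = np.random.default_rng(4)                    # Monte Carlo sanity check
xy = rng.normal(size=10**6) * rng.normal(size=10**6)
print(product_normal_cdf(0.5), (xy <= 0.5).mean())
```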

Journal ArticleDOI
TL;DR: In this article, Gaussian-type curves are fit to estimates of the cumulative distribution function (cdf) at data quantiles to yield smoothed estimates of the cdf and to correct for violations of order relations (i.e., situations wherein the estimate of the cdf for a larger quantile is less than that for a smaller quantile).
Abstract: Consideration of order relations is key to indicator kriging, indicator cokriging, and probability kriging, especially for the latter two methods, wherein the additional modeling of cross-covariance contributes to an increased chance of violating order relations. Herein, Gaussian-type curves are fit to estimates of the cumulative distribution function (cdf) at data quantiles to: (1) yield smoothed estimates of the cdf; and (2) correct for violations of order relations (i.e., situations wherein the estimate of the cdf for a larger quantile is less than that for a smaller quantile). Smoothed estimates of the cdf are sought as a means to improve the approximation to the integral equation for the expected value of the regionalized variable in probability kriging. Experiments show that this smoothing yields slightly improved estimation of the expected value (in probability kriging). Another experiment, one that uses the same variogram for all indicator functions, does not yield improved estimates.
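A toy Python version of the smoothing step: fitting a Gaussian CDF to (possibly order-violating) cdf estimates at a few cutoffs yields a smoothed estimate that is monotone by construction. The cutoffs and estimates below are made up for illustration:

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

# Made-up cdf estimates at a few cutoffs; 0.22 < 0.25 violates order relations.
cutoffs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
cdf_est = np.array([0.08, 0.25, 0.22, 0.55, 0.78, 0.95])

def gauss_cdf(x, mu, sigma):
    return stats.norm.cdf(x, mu, sigma)

(mu, sigma), _ = curve_fit(gauss_cdf, cutoffs, cdf_est, p0=[3.0, 1.0])
smoothed = gauss_cdf(cutoffs, mu, sigma)   # monotone by construction, so the
                                           # order-relation violation is gone
```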

Journal ArticleDOI
TL;DR: In this paper, the authors assume independent and identically distributed environments and use the special properties of fractional linear generating functions to derive some explicit distributions, which may be singular or absolutely continuous, depending on the values of certain parameters.
Abstract: In a branching process with random environments, the probability of ultimate extinction is a function of the environment sequence, and is therefore a random variable. Explicit results about the distribution of this random variable are difficult to obtain in general. Here we assume independent and identically distributed environments and use the special properties of fractional linear generating functions to derive some explicit distributions, which may be singular or absolutely continuous, depending on the values of certain parameters. We also consider briefly tail behaviour close to 1, and provide an extension to cases where probability generating functions are not fractional linear.
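For a concrete fractional linear case, take geometric offspring laws: the extinction probability given an environment sequence is the limit of composed pgfs, which is easy to simulate. A sketch with an illustrative uniform environment law, truncating the composition at a finite depth:

```python
import numpy as np

rng = np.random.default_rng(5)

def geom_pgf(s, p):
    """pgf of geometric offspring P(k) = p (1 - p)^k: fractional linear,
    with offspring mean (1 - p) / p."""
    return p / (1.0 - (1.0 - p) * s)

def extinction_prob(ps):
    """q = lim f_{p1}(f_{p2}(... f_{pn}(0))): compose from the inside out."""
    q = 0.0
    for p in ps[::-1]:
        q = geom_pgf(q, p)
    return q

# iid environments: offspring parameter uniform on (0.3, 0.6), so individual
# environments can be super- or subcritical; truncate at 400 generations.
qs = np.array([extinction_prob(rng.uniform(0.3, 0.6, 400))
               for _ in range(20_000)])
print(np.quantile(qs, [0.1, 0.5, 0.9]))   # empirical distribution of q
```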

Journal ArticleDOI
Attila Csenki
TL;DR: In this paper, the cumulative operational time CO(t) of a repairable system is defined as the time spent by the system's state process Y in the set of up states U during [0, t], and a method is described for computing the cumulative distribution function of CO(t).

Journal ArticleDOI
TL;DR: In this article, a linear approximation of the groundwater flow equation is used to approximate the ranks of the piezometric heads, based on the conjecture that first-order approximations are more robust for computing the ranks than for computing the heads themselves.
Abstract: Cumulative distribution functions (cdf) of groundwater model responses are generally determined using Monte Carlo analysis. The procedure consists of (1) generating a number of realizations of the parameters controlling groundwater flow, (2) solving the groundwater flow equation in each of the realizations to obtain the model responses, (3) ranking the model responses, and (4) assigning a probability to each model response as a function of its rank and the total number of realizations. When one is only interested in determining one of the tails of the cdf, e.g., to determine model responses with a small probability of being exceeded, it would be more appropriate to try to reverse steps 2 and 3 above, so that the realizations are ranked first and then the groundwater flow equation is solved only for those realizations leading to responses in the tail of the cdf. Because the ranking of the realizations must be done in terms of their model responses, which calls for the solution of the groundwater flow equation, we propose to use a linear approximation of the flow equation to approximate the ranks. The proposal is based on the conjecture that first-order approximations are more robust for computing the ranks of the piezometric heads than for computing the heads themselves. The conjecture is demonstrated in two two-dimensional confined flow problems comparing the results of the approximation to the results of full Monte Carlo analyses on several sets of 200 realizations with varying standard deviations of log10 T.
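A toy version of the proposal in Python, with a cheap analytic stand-in for the expensive groundwater model: rank all realizations with a first-order (finite-difference) approximation, then run the full model only on the realizations ranked into the tail:

```python
import numpy as np

rng = np.random.default_rng(6)
n_real, dim = 200, 20

def expensive_response(x):
    """Stand-in for a full groundwater-flow solve (purely hypothetical)."""
    return np.exp(0.3 * x.sum()) + 0.1 * (x**2).sum()

# Step 1: generate realizations of the controlling parameters.
X = rng.normal(0.0, 1.0, (n_real, dim))

# Steps 2'-3' (reversed): rank with a cheap first-order approximation ...
x0 = np.zeros(dim)
eps = 1e-4
grad = np.array([(expensive_response(x0 + eps * e) - expensive_response(x0)) / eps
                 for e in np.eye(dim)])
approx = expensive_response(x0) + X @ grad

# ... then run the expensive model only on the realizations ranked in the tail.
tail_idx = np.argsort(approx)[-int(0.05 * n_real):]
tail_responses = np.array([expensive_response(X[i]) for i in tail_idx])
```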

Journal ArticleDOI
TL;DR: In this paper, a method to estimate continuous univariate distributions is proposed, where data are represented by quantiles; the principle yields a reference distribution that is identical with the maximum product of spacings estimate.
Abstract: SUMMARY A principle of least information is proposed for estimating continuous univariate distributions. Data are represented by quantiles; the principle yields a reference distribution that is identical with the maximum product of spacings estimate. A method to estimate continuous univariate distributions is proposed. It is suitable when the distribution type is uncertain or when the data are only a small random sample. In 1957 Jaynes presented the maximum entropy approach to estimation (Jaynes, 1983), which is valid for discrete random variables. The product is the 'least-biased' estimate possible for the information given. Data in Jaynes's analysis are in the form of a finite set of moments. The analysis was extended to continuous random variables by Kullback and Leibler and axiomatized by Shore and Johnson (1980); it uses moments and requires an unspecified 'prior' distribution. A development of Jaynes's entropy theory is presented here. The analysis is derived from the Kullback-Leibler relative entropy but uses quantiles instead of moments and gives a minimum information principle for selecting the prior (i.e. the reference) distribution p. In the univariate case p is identical with the maximum product of spacings (MPS) distribution due to Cheng and Amin (1983) and Ranneby (1984). The MPS method thus finds a rationale in information theory. Conversely, these and more recent works (Titterington, 1985; Cheng and Stephens, 1989) provide information on some of the properties of the method proposed. In general the MPS distribution p does not satisfy the constraints of the sample rule stated here. It is not a minimum information model, but an approximation. There is, however, a unique model, called the posterior q, that is closest to p in that it minimizes the relative entropy with respect to p and satisfies the constraints of the data. The two minimum principles for p and q are unified into a principle of least information for selection of the reference distribution and determination of the posterior distribution. Cheng and Amin (1983) considered a candidate set in the form of distributions with density p(x, θ) and cumulative distribution function P(x, θ), where θ is a parameter vector. They mapped the sample {x} and the end points of the domain of X into the unit interval (0, 1) by using the transformation y = P(x, θ), subdividing the unit interval into m spacings. Their MPS method chooses θ to maximize the geometric mean of the spacings.
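A compact Python sketch of the MPS estimate itself, here fitting a Weibull model by maximizing the sum of log spacings (equivalently, the geometric mean of the spacings); the model family and starting values are illustrative:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = np.sort(stats.weibull_min(c=1.5, scale=2.0).rvs(50, random_state=rng))

def neg_log_spacings(theta):
    """Negative sum of log spacings D_i = P(x_(i)) - P(x_(i-1)) for a
    Weibull(c, scale) candidate; MPS maximizes the product of the D_i."""
    c, scale = theta
    if c <= 0.0 or scale <= 0.0:
        return np.inf
    u = np.concatenate(([0.0], stats.weibull_min.cdf(x, c, scale=scale), [1.0]))
    d = np.diff(u)
    return np.inf if np.any(d <= 0.0) else -np.log(d).sum()

res = minimize(neg_log_spacings, x0=[1.0, 1.0], method="Nelder-Mead")
c_hat, scale_hat = res.x              # the MPS estimate of the parameters
```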

Journal ArticleDOI
TL;DR: In this article, the problem of generating closed form expressions for the cumulative distribution and probability density functions of products of independent beta variates is examined, and recursive analytical procedures for constructing the equational forms of these functions from their Mellin inversion integral representations, via the Cauchy residue theorem, are described.
Abstract: This paper concerns an inquiry into the problem of generating closed form expressions for the cumulative distribution and probability density functions of products of independent beta variates. Recursive analytical procedures for constructing the equational forms of these functions, from their Mellin inversion integral representations via the Cauchy residue theorem, are described. A numerical example illustrating details of the construction of a computable form of the distribution function of the product of three independent beta variates is also included.

Proceedings ArticleDOI
23 May 1994
TL;DR: A novel approach to statistical modeling is presented in which the statistical model is directly extracted by fitting the cumulative probability distributions of the model responses to those of the measured data; the technique should prove more reliable and robust than existing methods.
Abstract: A novel approach to statistical modeling is presented. The statistical model is directly extracted by fitting the cumulative probability distributions (CPDs) of the model responses to those of the measured data. This new technique is based on a solid mathematical foundation and, therefore, should prove more reliable and robust than the existing methods. The approach is illustrated by statistical MESFET modeling based on a physics-oriented model which combines the modified Khatibzadeh and Trew model and the Ladbrooke model (KTL). The approach is compared with the established parameter extraction/postprocessing approach (PEP) in the context of yield verification.

Proceedings ArticleDOI
08 Aug 1994
TL;DR: It is shown that, mainly for forest data, the fit with the multilook K distribution is superior to that of other distributions that frequently appear in the literature.
Abstract: The K distribution has been used as a flexible tool for the modelling of SAR data over non-homogeneous areas. It is characterized by three real-valued parameters; one of these parameters, the number of looks, is related to the kind of processing the raw data undergo in order to become an image. This distribution has mostly been used for one-look data. In this paper the multilook case is considered for both quadratic and linear detections. A closed (recursive) computational form is provided for the K cumulative distribution function, as well as the estimators derived from the substitution method. The sensitivity of the cumulative distribution function with respect to possible discretizations of the parameters, due to limitations imposed by the recursive form, is discussed. The recursive form of the cumulative distribution function of multilook K random variables is used to perform the Kolmogorov-Smirnov (KS) goodness-of-fit test on SAREX data. It is shown that, mainly for forest data, the fit with the multilook K distribution is superior to that of other distributions that frequently appear in the literature. Specifically, the use of the normal distribution for this kind of data is systematically discarded.
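Rather than the paper's recursive closed form, the multilook K cdf can also be evaluated from its compound (gamma-mixture-of-gamma) representation, which is enough to run a KS test. A sketch, assuming one common parameterization with L looks, order ν, and mean intensity μ:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

L, nu, mu = 4.0, 3.0, 1.0   # looks, order parameter, mean intensity (illustrative)

def k_cdf(i):
    """Multilook K cdf via the compound representation:
    I | tau ~ Gamma(L, scale=tau/L), tau ~ Gamma(nu, scale=mu/nu)."""
    f = lambda tau: (stats.gamma.cdf(i, L, scale=tau / L)
                     * stats.gamma.pdf(tau, nu, scale=mu / nu))
    return quad(f, 0.0, np.inf)[0]

rng = np.random.default_rng(8)
tau = rng.gamma(nu, mu / nu, 2000)
I = rng.gamma(L, tau / L)                        # K-distributed intensities
D, pval = stats.kstest(I, np.vectorize(k_cdf))   # KS goodness-of-fit test
```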

ReportDOI
17 Jan 1994
TL;DR: In this paper, the authors considered somewhat analogous quadratic forms in normal variables when the dimensionality is infinite and distributed infinite weighted SUMS Of X 2 -variables. And they gave tables of the distribution of the criterion for testing the hypothesis that a stationary stochastic process is a given moving average process order 1.
Abstract: In this paper we consider somewhat analogous quadratic forms in normal variables when the dimensionality is infinite. Then the quadratic forms are distributed as infinite weighted sums of χ²-variables. These come about as goodness-of-fit criteria for the hypothesis that a cumulative distribution function is a specified one or that two cdf's are the same. Such criteria also arise in goodness-of-fit tests for standardized spectral distributions. As examples, we give tables of the distribution of the criterion for testing the hypothesis that a stationary stochastic process is a given moving average process of order 1 and for testing the hypothesis that it is a specified autoregressive process of order 1. Two methods are described for calculating the distribution. Either method is appropriate for calculating the distribution of the criterion for testing the hypothesis that a process is a stationary process whose standardized spectral density or distribution is a specified one. Keywords: goodness of fit, time series, Mahalanobis distance, stationary stochastic process, spectral distributions.
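The distribution of a (truncated) infinite weighted sum of χ²-variables can also be computed by Gil-Pelaez inversion of its characteristic function, one generic alternative to the report's two methods. A sketch using Cramér-von Mises-type weights λ_j = 1/(j²π²):

```python
import numpy as np
from scipy.integrate import trapezoid

def weighted_chisq_cdf(x, lams, t_max=300.0, n=30001):
    """P(sum_j lams[j] Z_j^2 <= x) for iid Z_j ~ N(0, 1), by Gil-Pelaez
    inversion of phi(t) = prod_j (1 - 2i lams[j] t)^(-1/2)."""
    t = np.linspace(1e-8, t_max, n)
    phi = np.ones_like(t, dtype=complex)
    for lam in lams:
        phi *= (1.0 - 2j * lam * t) ** -0.5
    integrand = np.imag(np.exp(-1j * t * x) * phi) / t
    return 0.5 - trapezoid(integrand, t) / np.pi

# Truncated Cramer-von Mises weights; F(0.461) should come out near 0.95,
# 0.461 being roughly the classical 5% critical value.
lams = 1.0 / (np.pi**2 * np.arange(1, 200) ** 2)
print(weighted_chisq_cdf(0.461, lams))
```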

Proceedings ArticleDOI
26 Oct 1994
TL;DR: In this article, a method for detecting and estimating the parameters of jump-like changes in the expectation of optical data for ocean/sea investigations, based on approximating the changes by cumulative distribution functions, is proposed.
Abstract: A method for detection and estimation of the parameters of jump-like changes in the expectation of optical data for ocean/sea investigations, based on approximating the changes by cumulative distribution functions, is proposed. Algorithms and programs for change-point detection and for estimation of the expectation of a stochastic process, based on approximations of the changes by cumulative distribution functions, are constructed and tested.
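A minimal version of the idea: approximate a jump in the expectation by a scaled Gaussian CDF and recover the change point as its location parameter. The signal below is synthetic:

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)
t = np.linspace(0.0, 10.0, 500)
signal = 1.0 + 0.8 * (t > 6.2) + rng.normal(0.0, 0.3, t.size)  # jump at t = 6.2

def smooth_step(t, base, jump, loc, width):
    """A scaled Gaussian CDF approximates a jump-like change in the mean;
    loc is the change point, width the sharpness of the transition."""
    return base + jump * stats.norm.cdf(t, loc, width)

p0 = [signal.min(), np.ptp(signal), t.mean(), 1.0]
(base, jump, loc, width), _ = curve_fit(smooth_step, t, signal, p0=p0)
print(loc)   # estimated change point, close to 6.2
```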

Posted Content
TL;DR: In this article, the authors describe the cumulative distribution function of excess returns conditional on a broad set of predictors that summarize the state of the economy by estimating a sequence of conditional logit models over a grid of values of the response variable.
Abstract: In this paper we describe the cumulative distribution function of excess returns conditional on a broad set of predictors that summarize the state of the economy. We do so by estimating a sequence of conditional logit models over a grid of values of the response variable. Our method uncovers higher-order multidimensional structure that cannot be found by modeling only the first two moments of the distribution. We compare two approaches to modeling: one based on a conventional linear logit model, the other an additive logit. The second approach avoids the â¬Scurse of dimensionalityâ¬? problem of fully nonparametric methods while retaining both interpretability and the ability to let the data determine the shape of the relationship between the response variable and the predictors. We find that additive logit fits better and reveals aspects of the data that remain undetected by the linear logit. The additive model retains its superiority even in out-of-sample prediction and portfolio selection performance, suggesting that this model captures genuine features of the data which seem to be important to guide investorsâ¬" optimal portfolio choices.