
Showing papers on "Probability density function published in 1997"


Journal ArticleDOI
TL;DR: In this paper, the least squares estimation of a change point in multiple regressions is studied and the analytical density function and the cumulative distribution function for the general skewed distribution are derived.
Abstract: This paper studies the least squares estimation of a change point in multiple regressions. Consistency, rate of convergence, and asymptotic distributions are obtained. The model allows for lagged dependent variables and trending regressors. The error process can be dependent and heteroskedastic. For nonstationary regressors or disturbances, the asymptotic distribution is shown to be skewed. The analytical density function and the cumulative distribution function for the general skewed distribution are derived. The analysis applies to both pure and partial changes. The method is used to analyze the response of market interest rates to discount rate changes.
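
The change-point setup described above lends itself to a brief numerical illustration. The sketch below, in Python with synthetic data, estimates a single break date by minimizing the total sum of squared residuals over candidate split points; the one-regressor model, sample size, and trimming choices are illustrative assumptions, not the paper's empirical application.

```python
import numpy as np

# Minimal sketch of least-squares change-point estimation (single break);
# model and data are illustrative, not the paper's application.
rng = np.random.default_rng(0)
n = 200
true_break = 120
x = rng.normal(size=n)
beta_pre, beta_post = 1.0, 2.5
y = np.where(np.arange(n) < true_break, beta_pre * x, beta_post * x) + rng.normal(scale=0.5, size=n)

def ssr(xs, ys):
    """Sum of squared residuals from an OLS fit of ys on xs."""
    beta = np.linalg.lstsq(xs.reshape(-1, 1), ys, rcond=None)[0]
    resid = ys - xs * beta[0]
    return float(resid @ resid)

# Search over admissible break dates, trimming the sample ends.
candidates = range(20, n - 20)
total_ssr = [ssr(x[:k], y[:k]) + ssr(x[k:], y[k:]) for k in candidates]
k_hat = list(candidates)[int(np.argmin(total_ssr))]
print("estimated break date:", k_hat)
```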

801 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider analysis of variance type models for a regression function or for the logarithm of a probability function, conditional probability function, density function, hazard function, or spectral density function.
Abstract: Analysis of variance type models are considered for a regression function or for the logarithm of a probability function, conditional probability function, density function, conditional density function, hazard function, conditional hazard function or spectral density function. Polynomial splines are used to model the main effects, and their tensor products are used to model any interaction components that are included. In the special context of survival analysis, the baseline hazard function is modeled and nonproportionality is allowed. In general, the theory involves the $L_2$ rate of convergence for the fitted model and its components. The methodology involves least squares and maximum likelihood estimation, stepwise addition of basis functions using Rao statistics, stepwise deletion using Wald statistics and model selection using the Bayesian information criterion, cross-validation or an independent test set. Publicly available software, written in C and interfaced to S/S-PLUS, is used to apply this methodology to real data.

387 citations


Journal ArticleDOI
TL;DR: A method to estimate the probability that a conflict will occur, given a pair of predicted trajectories and their levels of uncertainty, is presented.
Abstract: The safety and efficiency of free flight will benefit from automated conflict prediction and resolution advisories. Conflict prediction is based on trajectory prediction and is less certain the farther in advance the prediction, however. An estimate is therefore needed of the probability that a conflict will occur, given a pair of predicted trajectories and their levels of uncertainty. This paper presents a method to estimate that conflict probability. The trajectory prediction errors are modeled as normally distributed, and the two error covariances for an aircraft pair are combined into a single, equivalent covariance of the relative position. A coordinate transformation is then used to derive an analytical solution. Numerical examples and a Monte Carlo validation are presented.
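
A minimal numerical sketch of the idea (not the paper's analytical solution): the two prediction-error covariances are added into a single covariance of the relative position, and the conflict probability is then estimated here by Monte Carlo rather than by the coordinate-transformation formula. All numbers (covariances, separation radius, relative position) are illustrative.

```python
import numpy as np

# Hedged sketch: probability that two aircraft come within a separation
# radius at a fixed prediction time, given Gaussian prediction errors.
rng = np.random.default_rng(1)

mean_rel = np.array([4.0, 3.0])          # predicted relative position (nmi)
cov_a = np.diag([1.0, 0.25])             # prediction-error covariance, aircraft A
cov_b = np.diag([1.5, 0.30])             # prediction-error covariance, aircraft B
cov_rel = cov_a + cov_b                  # single equivalent covariance of the relative position
sep = 5.0                                # required separation (nmi)

samples = rng.multivariate_normal(mean_rel, cov_rel, size=200_000)
p_conflict = np.mean(np.linalg.norm(samples, axis=1) < sep)
print(f"estimated conflict probability ~ {p_conflict:.3f}")
```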

384 citations


Proceedings Article
01 Jan 1997
TL;DR: The shape variation displayed by a class of objects can be represented as a probability density function, allowing us to determine plausible and implausible examples of the class; the authors show how this distribution can be used in image search to locate examples of the modelled object in new images.
Abstract: The shape variation displayed by a class of objects can be represented as a probability density function, allowing us to determine plausible and implausible examples of the class. Given a training set of example shapes we can align them into a common co-ordinate frame and use kernel-based density estimation techniques to represent this distribution. Such an estimate is complex and expensive, so we generate a simpler approximation using a mixture of Gaussians. We show how to calculate the distribution, and how it can be used in image search to locate examples of the modelled object in new images.
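
A small sketch of the two-stage idea, assuming scipy and scikit-learn are available and using synthetic two-dimensional "shape parameters" in place of aligned shapes: a kernel density estimate represents the distribution, and a mixture of Gaussians provides the cheaper approximation used to score how plausible a new example is.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.mixture import GaussianMixture

# Synthetic 2-D "shape parameters" stand in for aligned shape vectors.
rng = np.random.default_rng(2)
params = np.vstack([rng.normal([0, 0], 0.3, (200, 2)),
                    rng.normal([2, 1], 0.5, (200, 2))])

kde = gaussian_kde(params.T)                         # full kernel density estimate
gmm = GaussianMixture(n_components=2, random_state=0).fit(params)  # cheaper approximation

query = np.array([[0.1, 0.0], [2.1, 0.9], [5.0, 5.0]])
print("KDE log-density:", kde.logpdf(query.T))
print("GMM log-density:", gmm.score_samples(query))
# Low density flags an implausible example of the class; high density a plausible one.
```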

326 citations


Journal ArticleDOI
TL;DR: In this article, a probabilistic approach for robustness analysis of control systems affected by bounded uncertainty is presented. But the authors focus on the problem of estimating the number of samples required to estimate the probability that a certain level of robustness is attained.

236 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the form of the one-point probability distribution function (pdf) for the density field of the interstellar medium using numerical simulations that successively reduce the number of physical processes included.
Abstract: We investigate the form of the one-point probability distribution function (pdf) for the density field of the interstellar medium using numerical simulations that successively reduce the number of physical processes included. Two-dimensional simulations of self-gravitating supersonic MHD and hydrodynamic turbulence, and of decaying Burgers turbulence, produce in all cases filamentary density structures and a power-law density pdf with logarithmic slope around -1.7. This suggests that the functional form of the pdf and the general filamentary morphology are the signature of the nonlinear advection operator. These results do not support previous claims that the pdf is lognormal. A series of 1D simulations of forced supersonic polytropic turbulence is used to resolve the discrepancy. They suggest that the pdf is lognormal only for effective polytropic indices $\gamma=1$ (or nearly lognormal for $\gamma \neq 1$ if the Mach number is sufficiently small), while power laws develop at high densities if $\gamma<1$. We evaluate the polytropic index for conditions relevant to the cool interstellar medium using published cooling functions and different heating sources, finding that a lognormal pdf may occur at densities between 10$^3$ and at least 10$^4$ cm$^{-3}$. Several applications are examined. First, we question a recent derivation of the IMF from the density pdf by Padoan, Nordlund & Jones because a) the pdf does not contain spatial information, and b) their derivation produces the most massive stars in the voids of the density distribution. Second, we illustrate how a distribution of ambient densities can alter the predicted form of the size distribution of expanding shells. Finally, a brief comparison is made with the density pdfs found in cosmological simulations.

207 citations


Journal ArticleDOI
Kenichi Nanbu
TL;DR: In this article, it is shown that the probability density function for a deflection angle depends on the time spent by a charged particle while engaged in the cumulative collision, and a simple analytic expression for this function is proposed which is easy to use together with the Monte Carlo method.
Abstract: A succession of small-angle binary collisions can be grouped into a unique binary collision with a large scattering angle. The latter is called a cumulative collision. This makes it possible to treat the cumulative collision like a collision between neutral molecules. A significant feature of the cumulative collision is that the probability density function for a deflection angle depends on the time spent by a charged particle while engaged in the cumulative collision. Here a simple analytic expression for the function is proposed which is easy to use together with the Monte Carlo method. The validity of the present theory is ascertained by calculating various relaxation phenomena in plasmas. The theory is best suited to particle simulation of plasmas.

195 citations


Journal ArticleDOI
TL;DR: In this article, a theory of extreme deviations is developed, devoted to the far tail of the pdf of the sum X of a finite number n of independent random variables with a common pdf $e^{-f(x)}$.
Abstract: Stretched exponential probability density functions (pdf), having the form of the exponential of minus a fractional power of the argument, are commonly found in turbulence and other areas. They can arise because of an underlying random multiplicative process. For this, a theory of extreme deviations is developed, devoted to the far tail of the pdf of the sum X of a finite number n of independent random variables with a common pdf $e^{-f(x)}$. The function f(x) is chosen (i) such that the pdf is normalized and (ii) with a strong convexity condition that $f''(x)>0$ and that $x^2 f''(x)\rightarrow +\infty$ for $|x|\rightarrow\infty$. Additional technical conditions ensure the control of the variations of $f''(x)$. The tail behavior of the sum then comes mostly from individual variables in the sum all close to X/n, and the tail of the pdf is $\sim e^{-nf(X/n)}$. This theory is then applied to products of independent random variables, such that their logarithms are in the above class, yielding usually stretched exponential tails. An application to fragmentation is developed and compared to data from fault gouges. The pdf by mass is obtained as a weighted superposition of stretched exponentials, reflecting the coexistence of different fragmentation generations. For sizes near and above the peak size, the pdf is approximately log-normal, while it is a power law for the smaller fragments, with an exponent which is a decreasing function of the peak fragment size. The anomalous relaxation of glasses can also be rationalized using our result together with a simple multiplicative model of local atom configurations. Finally, we indicate the possible relevance to the distribution of small-scale velocity increments in turbulent flow.
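
The core tail estimate can be made plausible by a short heuristic (a sketch of the steepest-descent reasoning under the stated convexity condition, not the paper's proof):

```latex
% Heuristic sketch of the extreme-deviation estimate (not the paper's proof).
\[
  p_n(X) \;\propto\; \int \delta\!\Big(X - \textstyle\sum_{i=1}^{n} x_i\Big)
  \prod_{i=1}^{n} e^{-f(x_i)}\, dx_i .
\]
% With f strictly convex, \sum_i f(x_i) under the constraint \sum_i x_i = X is
% minimized at x_1 = \dots = x_n = X/n (Lagrange multiplier argument), so the
% integrand is dominated by that configuration and, up to slowly varying
% prefactors controlled by f'',
\[
  p_n(X) \;\sim\; e^{-n f(X/n)} \qquad (|X| \to \infty).
\]
```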

174 citations


Journal ArticleDOI
TL;DR: It is shown that distances in proteins are predicted more accurately by neural networks than by probability density functions, and that the accuracy of the predictions can be further increased by using sequence profiles.
Abstract: We predict interatomic Cα distances by two independent data driven methods. The first method uses statistically derived probability distributions of the pairwise distance between two amino acids, whilst the latter method consists of a neural network prediction approach equipped with windows taking the context of the two residues into account. These two methods are used to predict whether distances in independent test sets were above or below given thresholds. We investigate which distance thresholds produce the most information-rich constraints and, in turn, the optimal performance of the two methods. The predictions are based on a data set derived using a new threshold which defines when sequence similarity implies structural similarity. We show that distances in proteins are predicted more accurately by neural networks than by probability density functions. We show that the accuracy of the predictions can be further increased by using sequence profiles. A threading method based on the predicted distances is presented. A homepage with software, predictions and data related to this paper is available at http://www.cbs.dtu.dk/services/CPHmodels/.

152 citations


Journal ArticleDOI
TL;DR: In this article, a new and direct approach for analyzing the scaling properties of the various distribution functions for the random forced Burgers equation is proposed, and the authors consider the problem of the growth of random surfaces.
Abstract: Statistical properties of solutions of the random forced Burgers equation have been a subject of intensive studies recently (see Refs. [1–6]). Of particular interest are the asymptotic properties of probability distribution functions associated with velocity gradients and velocity increments. Aside from the fact that such issues are of direct interest to a large number of problems such as the growth of random surfaces [1], it is also hoped that the field-theoretic techniques developed for the Burgers equation will eventually be useful for understanding more complex phenomena such as turbulence. In this paper, we propose a new and direct approach for analyzing the scaling properties of the various distribution functions for the random forced Burgers equation. We will consider the problem

134 citations


Journal ArticleDOI
TL;DR: In this article, an asymptotically efficient data-driven estimator that is adaptive to both the smoothness of the estimated density and the distribution of the measurement error is proposed. This estimator is universal in the sense that its derivatives and integral are sharp estimators of the corresponding derivatives and the cumulative distribution function, and these estimators are sharp both globally and pointwise.
Abstract: The problem is to estimate the probability density of a random variable contaminated by an independent measurement error. I explore one of the worst-case scenarios, when the characteristic function of this measurement error decreases exponentially and thus optimal estimators converge only with logarithmic rate. A particular example of such measurement error is any random variable contaminated by a normal, Cauchy, or another stable random variable. For this setting and circular data, I suggest an asymptotically efficient data-driven estimator that is adaptive to both smoothness of the estimated density and distribution of the measurement error. Moreover, this estimator is universal in the sense that its derivatives and integral are sharp estimators of the corresponding derivatives and the cumulative distribution function, and these estimators are sharp both globally and pointwise. For the case of small sample sizes, I suggest a modified estimator that mimics an optimal linear pseudoestimator. I explore this est...
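
A hedged sketch of the classical Fourier-inversion deconvolution estimator that underlies this setting, for data contaminated by a known normal error: the empirical characteristic function is divided by the error characteristic function and inverted with a hard frequency cutoff. The fixed cutoff and all numbers are ad hoc choices for illustration, not the adaptive, data-driven rule the paper develops.

```python
import numpy as np

rng = np.random.default_rng(9)
n, sigma_err = 2000, 0.3
x_true = rng.choice([-1.0, 1.0], size=n) + 0.3 * rng.normal(size=n)   # bimodal target density
y = x_true + sigma_err * rng.normal(size=n)                           # contaminated observations

t = np.linspace(-6, 6, 601)                       # frequency grid, hard cutoff |t| <= 6
dt = t[1] - t[0]
ecf = np.exp(1j * np.outer(t, y)).mean(axis=1)    # empirical characteristic function of Y
phi_err = np.exp(-0.5 * (sigma_err * t) ** 2)     # characteristic function of the normal error
ratio = ecf / phi_err                             # estimate of the characteristic function of X

grid = np.linspace(-3, 3, 241)
# f_hat(x) = (1 / 2 pi) * integral of exp(-i t x) * ratio(t) dt, by a simple Riemann sum
f_hat = np.real(np.exp(-1j * np.outer(grid, t)) @ ratio) * dt / (2 * np.pi)
print("estimated density near the modes at -1 and +1:",
      f_hat[np.argmin(np.abs(grid + 1))], f_hat[np.argmin(np.abs(grid - 1))])
```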

Journal ArticleDOI
TL;DR: In this article, a pdf near-wall model is developed in which the generalized Langevin model is combined with an exact model for viscous transport, and the method of elliptic relaxation is used to incorporate the wall effects without the use of wall functions or damping functions.
Abstract: Probability density function (pdf) methods are extended to include modeling of wall‐bounded turbulent flows. A pdf near‐wall model is developed in which the generalized Langevin model is combined with an exact model for viscous transport. Then the method of elliptic relaxation is used to incorporate the wall effects without the use of wall functions or damping functions. Information about the proximity of the wall is provided only in the boundary conditions so that the model can be implemented without ad hoc assumptions about the geometry of the flow. A Reynolds‐stress closure is derived from this pdf model, and its predictions are compared with DNS and experimental results for fully developed turbulent channel flow.

Journal ArticleDOI
TL;DR: In this paper, a new path integration scheme based on the Gauss-Legendre quadrature integration rule is proposed for calculating the probability density of the response of a dynamical system under Gaussian white-noise excitation.
Abstract: A new path integration scheme based on the Gauss-Legendre quadrature integration rule is proposed for calculating the probability density of the response of a dynamical system under Gaussian white-noise excitation. The new scheme is capable of producing accurate results of probability density as it evolves with time, including the tail region where the probability level is very low. This low probability region is important for the system reliability estimation.
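
The scheme can be illustrated schematically: over a short time step the transition density of the response is approximately Gaussian, and the pdf is propagated by a quadrature of that kernel against the current pdf. The sketch below uses Gauss-Legendre nodes for a one-dimensional linear-drift example; the drift, noise level, grid, and time step are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

# Sketch of one path-integration step: evolve the response pdf of
# dx = a(x) dt + sqrt(2D) dW using the short-time Gaussian transition kernel,
# with the integral over the previous pdf done by Gauss-Legendre quadrature.
a = lambda x: -x            # linear (Ornstein-Uhlenbeck) drift, for illustration only
D, dt = 0.5, 0.05
L = 6.0                     # truncate the state space to [-L, L]

nodes, weights = np.polynomial.legendre.leggauss(200)
y = L * nodes               # map quadrature nodes from [-1, 1] to [-L, L]
w = L * weights

def step(p):
    """One step: p_{k+1}(x_i) = sum_j w_j q(x_i | y_j) p_k(y_j)."""
    mean = y + a(y) * dt                      # Euler drift over one time step
    var = 2.0 * D * dt
    q = np.exp(-(y[:, None] - mean[None, :])**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return q @ (w * p)

p = np.exp(-(y - 2.0)**2)                     # unnormalized initial pdf away from the origin
p /= np.sum(w * p)
for _ in range(100):                          # integrate to t = 5
    p = step(p)
print("mean:", np.sum(w * y * p), " variance:", np.sum(w * y**2 * p))
# For this example the variance should approach the stationary value D = 0.5,
# up to time-discretization error.
```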

Journal ArticleDOI
TL;DR: In this article, a method for the evaluation of the stationary and non-stationary probability density function of non-linear oscillators subjected to random input is presented, which requires the approximation of the probability density functions of the response in terms of C-type Gram-Charlier series expansion.
Abstract: A method for the evaluation of the stationary and non-stationary probability density function of non-linear oscillators subjected to random input is presented. The method requires the approximation of the probability density function of the response in terms of C-type Gram-Charlier series expansion. By applying the weighted residual method, the Fokker-Planck equation is reduced to a system of non-linear first order ordinary differential equations, where the unknowns are the coefficients of the series expansion. Furthermore, the relationships between the A-type and C-type Gram-Charlier series coefficients are derived.

Journal ArticleDOI
TL;DR: In this article, a nonparametric approach for estimating optimal transformations of petrophysical data to obtain the maximum correlation between observed variables is proposed, which does not require a priori assumptions of a functional form and the optimal transformations are derived solely based on the data set.
Abstract: Conventional multiple regression for permeability estimation from well logs requires a functional relationship to be presumed. Due to the inexact nature of the relationship between petrophysical variables, it is not always possible to identify the underlying functional form between dependent and independent variables in advance. When large variations in petrological properties are exhibited, parametric regression often fails or leads to unstable and erroneous results, especially for multivariate cases. In this paper we describe a nonparametric approach for estimating optimal transformations of petrophysical data to obtain the maximum correlation between observed variables. The approach does not require a priori assumptions of a functional form, and the optimal transformations are derived solely based on the data set. An iterative procedure involving the alternating conditional expectation (ACE) forms the basis of our approach. The power of ACE is illustrated using synthetic as well as field examples. The results clearly demonstrate improved permeability estimation by ACE compared to conventional parametric regression methods. Introduction: A critical aspect of reservoir description involves estimating permeability in uncored wells based on well logs and other known petrophysical attributes. A common approach is to develop a permeability-porosity relationship by regressing on data from cored wells and then to predict permeability in uncored wells from well logs. Multiple regression is used when large variations in petrological properties exist (e.g., a wide range in grain sizes, high degree of cementation, diagenetic alteration, etc.) and a simple permeability-porosity relationship no longer holds good. However, there are several limitations to such an approach. Many of these arise from the inexact nature of the relationship between petrophysical variables and a priori assumptions regarding functional forms used to model the data, all leading to biased estimates. When prediction of permeability extremes is a major concern, the high and low values are enhanced through a weighting scheme in the regression. Besides being subjective in nature, such weighting can cause the prediction to become unstable, which leads to erroneous results. Most importantly, conventional regression assumes independent variables to be free of error, which is highly optimistic for geologic and petrophysical data. Jensen and Lake introduced power transformations for optimization of regression-based permeability-porosity predictions. The underlying theory is that if the joint probability distribution function (jpdf) of two variables is binormal, the relationship will be linear. Several methods exist to estimate the exponents for power transformation. One method, described by Emerson and Stoto and adopted by Jensen and Lake, is based on symmetrizing the probability distribution function (pdf). Another method is a trial-and-error approach based on a normal probability plot of the data. By power transforming permeability and porosity separately the authors are able to improve permeability-porosity correlations. However, using a trial-and-error method for selecting exponents for power transformation is time consuming, and symmetrizing the pdf does not necessarily guarantee a binormal distribution of transformed variables. In addition, there are no indications as to whether power transformations will work for multivariate cases.
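
A minimal single-predictor sketch of the ACE iteration (not the authors' implementation): the response and predictor transformations are alternately replaced by smoothed conditional expectations of each other, here with a crude binned-mean smoother and synthetic "porosity-permeability" data.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0.05, 0.3, 1000)                          # synthetic "porosity"
y = np.exp(15 * x) * rng.lognormal(sigma=0.3, size=1000)  # synthetic "permeability", nonlinear in x

def smooth(target, cond, bins=25):
    """Conditional expectation E[target | cond] estimated by binned means."""
    edges = np.quantile(cond, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, cond, side="right") - 1, 0, bins - 1)
    means = np.array([target[idx == b].mean() for b in range(bins)])
    return means[idx]

theta = (y - y.mean()) / y.std()                   # transformation of the response
for _ in range(20):
    phi = smooth(theta, x)                         # phi(x)   <- E[theta(y) | x]
    theta = smooth(phi, y)                         # theta(y) <- E[phi(x) | y]
    theta = (theta - theta.mean()) / theta.std()   # keep unit variance

print(f"correlation raw: {np.corrcoef(x, y)[0, 1]:.3f}  "
      f"after ACE transforms: {np.corrcoef(phi, theta)[0, 1]:.3f}")
```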

Journal ArticleDOI
TL;DR: A novel technique for adaptive scalar quantization is introduced that uses previously quantized samples to estimate the distribution of the source and does not require that side information be sent in order to adapt to changing source statistics.
Abstract: In this paper, we introduce a novel technique for adaptive scalar quantization. Adaptivity is useful in applications, including image compression, where the statistics of the source are either not known a priori or will change over time. Our algorithm uses previously quantized samples to estimate the distribution of the source, and does not require that side information be sent in order to adapt to changing source statistics. Our quantization scheme is thus backward adaptive. We propose that an adaptive quantizer can be separated into two building blocks, namely, model estimation and quantizer design. The model estimation produces an estimate of the changing source probability density function, which is then used to redesign the quantizer using standard techniques. We introduce nonparametric estimation techniques that only assume smoothness of the input distribution. We discuss the various sources of error in our estimation and argue that, for a wide class of sources with a smooth probability density function (pdf), we provide a good approximation to a "universal" quantizer, with the approximation becoming better as the rate increases. We study the performance of our scheme and show how the loss due to adaptivity is minimal in typical scenarios. In particular, we provide examples and show how our technique can achieve signal-to-noise ratios within 0.05 dB of the optimal Lloyd-Max quantizer for a memoryless source, while achieving over 1.5 dB gain over a fixed quantizer for a bimodal source.
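
A compact sketch of the two building blocks named above, with stand-ins for the paper's components: a plain histogram (rather than the nonparametric smoothness-based estimator) plays the role of model estimation, and a Lloyd-Max iteration on that estimate plays the role of quantizer design. Past raw samples stand in for previously quantized ones.

```python
import numpy as np

rng = np.random.default_rng(5)
past = rng.normal(size=5000)                        # samples already available to the decoder

# (1) Model estimation: discretized pdf estimate on a grid of bin centers.
counts, edges = np.histogram(past, bins=1024, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# (2) Quantizer design: Lloyd-Max iteration for an 8-level scalar quantizer.
levels = np.quantile(past, np.linspace(0.05, 0.95, 8))   # initial reproduction levels
for _ in range(100):
    bounds = 0.5 * (levels[:-1] + levels[1:])             # nearest-neighbor cell boundaries
    idx = np.searchsorted(bounds, centers)                 # cell index of each grid point
    levels = np.array([np.average(centers[idx == k], weights=counts[idx == k] + 1e-12)
                       for k in range(8)])                 # centroid update on the pdf estimate

test = rng.normal(size=5000)
quantized = levels[np.searchsorted(0.5 * (levels[:-1] + levels[1:]), test)]
snr = 10 * np.log10(np.var(test) / np.mean((test - quantized) ** 2))
print(f"SNR of the redesigned 8-level quantizer: {snr:.2f} dB")
```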

Journal ArticleDOI
TL;DR: In this paper, simple closure hypotheses for the conditional pseudo-diffusion and pseudo-dissipation terms given a fixed concentration level in a dispersing plume are formulated, and used to obtain a simple functional form for the one-point probability density function (PDF) of concentration.

Journal ArticleDOI
TL;DR: In this article, the probability distribution function of the velocity derivatives and the corresponding hyperflatness factors, up to order 6, were measured in a flow produced between counter-rotating disks using low-temperature helium gas as the working fluid, in a range of microscale Reynolds numbers lying between 150 and 2300.
Abstract: We report measurements of the probability distribution function of the velocity derivatives, and the corresponding hyperflatness factors, up to order 6, as a function of the microscale Reynolds number Rλ. The measurements are performed in a flow produced between counter-rotating disks, using low-temperature helium gas as the working fluid, in a range of microscale Reynolds numbers lying between 150 and 2300. Consistently with previous studies, a transitional behavior is found around Rλ≈700. We determine a simple scaling law, in terms of Rλ, which allows the collapse of the tails of the pdf of the velocity derivatives onto a single curve, below the transition. We find well-defined relative power laws for the hyperflatness factors $H_p$ and $H_p^*$, throughout the entire range of variation of Rλ: $H_4=F=(0.99\pm0.05)H_6^{0.376\pm0.015}$ and $H_5^*=(0.95\pm0.05)H_6^{0.67\pm0.022}$. These results are compared to those of previous investigators and to various theoretical approaches both statistical (multifractal model) and structural (i.e., based on a model of fine scales).

Journal ArticleDOI
TL;DR: In this paper, the authors applied self-similar random fields to the statistical description and simulation of rainfall and obtained a simple procedure for the parameterization and modeling of the experimentally measured probability density function, which was then used for rainfall simulation by multiplicative cascades.
Abstract: The theory of self-similar random fields is applied to the statistical description and simulation of rainfall. Fluctuations in rainfields measured by a high resolution weather radar covering a 15 km square are shown to satisfy the condition of self-similarity. The probability density function of the logarithm of the breakdown coefficients (defined as the ratio of two field means, each computed at different resolutions) of the rainfall fluctuation field generally belongs to the class of infinitely divisible distributions. The theoretical framework for scaling self-similar fields is presented and related to results from alternate frameworks, presented in the literature. A simple procedure for the parameterization and modeling of the experimentally measured probability density function is presented. The obtained generator is then used for rainfall simulation by multiplicative cascades. Simulated results exhibit a good statistical and visual agreement with the measured data.
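
A toy generator in the spirit of the multiplicative-cascade simulation step, under the assumption of mean-one lognormal breakdown weights (an illustrative member of the infinitely divisible class, not the generator fitted to the radar data):

```python
import numpy as np

rng = np.random.default_rng(6)

def cascade(levels, sigma=0.4):
    """1-D discrete multiplicative cascade with mean-one lognormal weights."""
    field = np.ones(1)
    for _ in range(levels):
        w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=2 * field.size)
        field = np.repeat(field, 2) * w          # each cell splits into two weighted children
    return field

rain = cascade(levels=12)                        # 4096-cell rainfall-like field
print("mean (should stay near 1):", rain.mean())
print("coefficient of variation:", rain.std() / rain.mean())
# Coarse-graining the field and taking ratios of means at successive resolutions
# recovers the distribution of breakdown coefficients.
```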

Journal ArticleDOI
TL;DR: The analysis, modeling and simulation of measured full-scale wind pressure and velocity data are addressed, followed by the simulation of pressure data through new static transformation techniques.

Journal ArticleDOI
TL;DR: It is shown that, unlike the SMI method, the eigencanceler yields a conditional SNR distribution that is dependent on the covariance matrix, and it is further shown that simpler, covariance matrix-independent approximations can be found for the large interference-to-noise case.
Abstract: The statistical characterization of the conditioned signal-to-noise ratio (SNR) of the sample matrix inversion (SMI) method has been known for some time. An eigenanalysis-based detection method, referred to as the eigencanceler, has been shown to be a useful alternative to SMI, when the interference has low rank. In this work, the density function of the conditioned SNR is developed for the eigencanceler. The development is based on the asymptotic expansion of the distribution of the principal components of the covariance matrix. It is shown that, unlike the SMI method, the eigencanceler yields a conditional SNR distribution that is dependent on the covariance matrix. It is further shown that simpler, covariance matrix-independent approximations can be found for the large interference-to-noise case. The new distribution is shown to be in good agreement with the numerical data obtained from simulations.

Journal Article
TL;DR: In this paper, the effect of weak lensing on the shape of the CMB temperature power spectrum has been investigated, and it is shown that the effect can be detected with the four-point correlation function.
Abstract: The weak lensing effects are known to change only weakly the shape of the power spectrum of the Cosmic Microwave Background (CMB) temperature fluctuations. I show here that they nonetheless induce specific non-Gaussian effects that can be detectable with the four-point correlation function of the CMB anisotropies. The magnitude and geometrical dependences of this correlation function are investigated in detail. It is thus found to scale as the square of the derivative of the two-point correlation function and as the angular correlation function of the gravitational displacement field. It also contains specific dependences on the shape of the quadrangle formed by the four directions. When averaged at a given scale, the four-point function, which identifies with the connected part of the fourth moment of the probability distribution function of the local filtered temperature, scales as the square of the logarithmic slope of its second moment, and as the variance of the gravitational magnification at the same angular scale. All these effects have been computed for specific cosmological models. It is worth noting that, as the amplitude of the gravitational lens effects has a specific dependence on the cosmological parameters, the detection of the four-point correlation function could provide precious complementary constraints to those brought by the temperature power spectrum.

Journal ArticleDOI
TL;DR: In case the probability density function belongs to the exponential family, the EM algorithm is one particular case of the ICE algorithm, which was formally introduced in the field of statistical segmentation of images.
Abstract: We compare the expectation maximization (EM) algorithm with another iterative approach, namely, the iterative conditional estimation (ICE) algorithm, which was formally introduced in the field of statistical segmentation of images. We show that in case the probability density function (PDF) belongs to the exponential family, the EM algorithm is one particular case of the ICE algorithm.

Journal ArticleDOI
TL;DR: In this article, a simple semiparametric estimator of the moments of the density function of the latent variable's unobserved random component is proposed. But the results can be used as starting values for parametric estimators, for specification testing including tests of latent error skewness and kurtosis, and to estimate coefficients of discrete explanatory variables in the model.
Abstract: Latent variable discrete choice model estimation and interpretation depend on the density function of the latent variable's unobserved random component. This paper provides a simple semiparametric estimator of the moments of this density. The results can be used as starting values for parametric estimators, to estimate the appropriate location and scaling for semiparametric estimators, for specification testing including tests of latent error skewness and kurtosis, and to estimate coefficients of discrete explanatory variables in the model.

Journal ArticleDOI
TL;DR: In this paper, two new probability density functions (generalized beta and quadratic elasticity) are considered as models for the size distribution of income, and they are fit to five sets of US family income data for 1970, 1975, 1980, 1985 and 1990.

Journal ArticleDOI
TL;DR: An analytical technique and a Monte Carlo simulation approach are described for determining the reliability indices of distribution systems; the analytical method is based on the 'device of stages' technique so that the impact of equipment aging can be incorporated into the model.
Abstract: This paper describes an analytical technique and a Monte Carlo simulation approach for determining the reliability indices of distribution systems. The proposed analytical method is based on the 'device of stages' technique so that the impact of equipment aging can be incorporated into the model. Two types of stage configurations are described. For each configuration, the parameters of their probability density functions are estimated using the method of matching moments. The Monte Carlo simulation approach utilizes the component state duration sampling method. A computer program has been developed to implement these methods. The application of the concepts presented in the paper are illustrated by the analysis of a practical configuration.
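
The "method of matching moments" step can be sketched for an Erlang representation (identical exponential stages in series) of a component's time to failure: the number of stages and the stage rate are chosen so the model reproduces the sample mean and variance. The data and numbers below are synthetic, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
ttf = rng.gamma(shape=3.4, scale=2.0, size=2000)   # observed times to failure (years), synthetic

mean, var = ttf.mean(), ttf.var()
k = max(1, round(mean**2 / var))                   # number of stages (Erlang shape parameter)
lam = k / mean                                     # rate of each exponential stage
print(f"fitted stages k = {k}, stage rate = {lam:.3f} per year")

# Quick check: compare the fitted Erlang moments with the sample moments.
print("sample mean/var :", mean, var)
print("Erlang  mean/var:", k / lam, k / lam**2)
```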

Journal ArticleDOI
TL;DR: In this article, a super-matrix model of disorder with a direction (constant imaginary vector potential) is considered and the density of complex eigenvalues is calculated in zero dimension for both the unitary and orthogonal ensembles.
Abstract: Models of disorder with a direction (constant imaginary vector potential) are considered. These non-Hermitian models can appear as a result of computation for models of statistical physics using a transfer-matrix technique, or they can describe nonequilibrium processes. Eigenenergies of non-Hermitian Hamiltonians are not necessarily real, and a joint probability density function of complex eigenvalues can characterize basic properties of the systems. This function is studied using the supersymmetry technique, and a supermatrix $\sigma$ model is derived. The $\sigma$ model differs from that already known by a new term. The zero-dimensional version of the $\sigma$ model turns out to be the same as the one obtained recently for ensembles of random weakly non-Hermitian or asymmetric real matrices. Using a new parametrization for the supermatrix $Q$, the density of complex eigenvalues is calculated in zero dimension for both the unitary and orthogonal ensembles. The function is drastically different in these two cases. It is everywhere smooth for the unitary ensemble but has a $\delta$-functional contribution for the orthogonal one. This anomalous part means that a finite portion of eigenvalues remains real at any degree of the non-Hermiticity. All details of the calculations are presented.

Journal ArticleDOI
TL;DR: For the Gaussian and Laguerre random matrix ensembles, the probability density function (p.d.f.) for the linear statistic $\sum_{j=1}^{N} (x_j - \langle x \rangle)$ is computed exactly and shown to satisfy a central limit theorem as $N \to \infty$ as mentioned in this paper.
Abstract: For the Gaussian and Laguerre random matrix ensembles, the probability density function (p.d.f.) for the linear statistic $\sum_{j=1}^{N} (x_j - \langle x \rangle)$ is computed exactly and shown to satisfy a central limit theorem as $N \to \infty$. For the circular random matrix ensemble the p.d.f.'s for the statistics $\tfrac{1}{2}\sum_{j=1}^{N} (\theta_j - \pi)$ and $-\sum_{j=1}^{N} \log 2|\sin(\theta_j/2)|$ are calculated exactly by using a constant term identity from the theory of the Selberg integral, and are also shown to satisfy a central limit theorem as $N \to \infty$.
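
A numerical illustration of the central-limit behaviour (not the exact finite-N calculation in the paper), using scaled GOE matrices and the illustrative smooth statistic formed from the squared eigenvalues: the fluctuations of the centred statistic stay O(1) and look roughly Gaussian as N grows.

```python
import numpy as np

rng = np.random.default_rng(8)

def goe_eigs(N):
    """Eigenvalues of a scaled GOE matrix with spectrum on roughly [-2, 2]."""
    X = rng.normal(size=(N, N))
    H = (X + X.T) / np.sqrt(2 * N)
    return np.linalg.eigvalsh(H)

def centred_stat(N, reps=400):
    """Centred linear eigenvalue statistic sum_j f(lambda_j) with f(x) = x**2."""
    vals = np.array([np.sum(goe_eigs(N) ** 2) for _ in range(reps)])
    return vals - vals.mean()

for N in (40, 160):
    s = centred_stat(N)
    print(f"N = {N:4d}: std of linear statistic = {s.std():.3f}, "
          f"skewness = {float(((s / s.std()) ** 3).mean()):.3f}")
# The standard deviation stays O(1) as N increases and the skewness is small,
# consistent with a central limit theorem for smooth linear eigenvalue statistics.
```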

Journal ArticleDOI
TL;DR: Based on Longuet-Higgins's theory of the probability distribution of wave amplitude and wave period and on some observations, a new probability density function (PDF) of ocean surface slopes is derived.
Abstract: Based on Longuet-Higgins's theory of the probability distribution of wave amplitude and wave period and on some observations, a new probability density function (PDF) of ocean surface slopes is derived. It is expressed in terms of ζx and ζy, the slope components in upwind and crosswind directions, respectively, and of σu² and σc², the corresponding mean-square slopes. The peakedness of slopes is generated by nonlinear wave–wave interactions in the range of gravity waves. The skewness of slopes is generated by nonlinear coupling between the short waves and the underlying long waves. The peakedness coefficient n of the detectable surface slopes is determined by both the spectral width of the gravity waves and the ratio between the gravity wave mean-square slope and the detectable short wave mean-square slope. When n equals 10, the proposed PDF fits the Gram-Charlier distribution, given by Cox and Munk, very well in the range of small slopes. When n → ∞, it is very close to the Gaussian distribution. Radar backscat...

Journal ArticleDOI
TL;DR: This paper provides solutions to the partial differential equations associated with the components of the steady-state probability density function of the buffer levels for two part-type, single-machine flexible manufacturing systems under a linear switching curve (LSC) policy.
Abstract: Quadratic approximations to the differential cost-to-go function, which yield linear switching curves, have been extensively studied in the literature. In this paper, we provide solutions to the partial differential equations associated with the components of the steady-state probability density function of the buffer levels for two part-type, single-machine flexible manufacturing systems under a linear switching curve (LSC) policy. When there are more than two part-types, we derive the probability density function, under a prioritized hedging point (PHP) policy by decomposing the multiple part-type problem into a sequence of two part-type problems. The analytic expression for the steady-state probability density function is independent of the cost function. Therefore, for average cost functions, we can compute the optimal PHP policy or the more general optimal LSC policy for two part-type problems.