
Showing papers on "Probability density function published in 1979"


Journal ArticleDOI
TL;DR: In this article, the transport equation for the probability density function (pdf) P of a scalar variable in a turbulent field is derived and various closure approximations for the turbulent convective and the molecular transport term are discussed.
Abstract: The transport equation for the probability density function (pdf) P of a scalar variable in a turbulent field is derived and various closure approximations for the turbulent convective and the molecular transport term are discussed. For the special case of a turbulent diffusion flame, for which the density, temperature and composition can be considered as a function of a scalar variable f, the transport equation for Pf(z) is closed using the conditional mean of the velocity for the turbulent convective term and an integral model for the molecular transport term. Preliminary results are presented and compared with a two-parameter form of the pdf Pf(z). Introduction: Probability density functions (pdfs) play an increasingly significant role in the theoretical treatment of turbulent flows. Lundgren [1] devised a method to derive a hierarchy of transport equations for N-point probability density functions (pdfs) for fluctuating variables in a turbulent incompressible flow. Fox [12] and Dopazo and O'Brien [5], [7] extended this method to compressible and reacting flows. Dopazo and O'Brien suggested closure assumptions for the one-point pdf equation based on a quasi-Gaussian property of conditional moments, which was successful for nearly Gaussian problems [6]. The Lundgren approach has been applied to turbulent flames by Pope [4]. This author constructed the transport equation for the pdf of a scalar quantity f describing the instantaneous composition of a turbulent diffusion flame under certain simplifying assumptions. Besides, he suggested an interesting closure procedure for the unknown term arising from molecular transport that will be discussed below. Paper presented at the Colloquium on "Probability Distribution Function Methods for Turbulent Flows" in Aachen, Germany, August 29-30, 1977. In this paper the transport equation for the pdf of a scalar quantity f in a turbulent field is considered. Closure assumptions for two different unknown terms occurring in this equation are discussed in the special case of a turbulent diffusion flame, the composition, density and temperature of which can be related to the scalar quantity f [9]. For each term a new closure approximation is suggested. The physical significance of all approximations involved is investigated. Results of test calculations are presented for both the homogeneous case and the turbulent diffusion flame. 1. Transport equation for the probability density of a scalar. For turbulent flows, a hierarchy of transport equations for a probability density function (pdf) f(x, t) at a finite number of points in x-space can be derived using a method devised by Lundgren [1]. The structure of this hierarchy is similar to the well-known BBGKY hierarchy in statistical mechanics. The closure approximations for the BBGKY hierarchy, such as the Kirkwood closure for simple liquids [2], do not work satisfactorily for the turbulent case because essential conditions are not fulfilled there (for details cp. [3]). Therefore, a different approach towards constructing closure approximations is suggested. It is based mainly on physical assumptions about the turbulent transport mechanism.
Consider a normalized scalar function f(x, t) of the space coordinates x = (x_α, α = 1, 2, 3) and time t that satisfies a convection-diffusion equation with a source term. Here ρ is the density, which is a function of f(x, t), Γ is the molecular transport coefficient and S_0 is the source term. The velocity v = (v_α, α = 1, 2, 3) satisfies the Navier-Stokes equations. [Display equations (1)-(6), including the definition (5) of the simultaneous one-point pdf P* of the conserved scalar quantity f and the velocity v_α at (x, t), are not legible in this extract.] To derive a transport equation for P* we only need elementary properties of the averaging process (and avoid delicate mathematical questions connected with the probability measure of a turbulent flow field). Differentiation of eq. (5) and insertion of (1) and (2) then yield the pdf transport equation.

374 citations
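The abstract above compares pdf-transport results with a two-parameter form of the pdf Pf(z). One common two-parameter presumed shape for a conserved scalar on [0, 1] is a beta density fixed by its mean and variance; whether this coincides with the paper's form is an assumption, so the Python sketch below, with made-up moments, is only an illustration of the idea.

```python
import numpy as np
from scipy import stats

def presumed_pdf(mean, var):
    """Beta density on [0, 1] with prescribed mean and variance,
    one common two-parameter presumed shape for a conserved scalar."""
    g = mean * (1.0 - mean) / var - 1.0     # requires 0 < var < mean * (1 - mean)
    return stats.beta(mean * g, (1.0 - mean) * g)

pdf = presumed_pdf(mean=0.3, var=0.03)      # hypothetical scalar moments
z = np.linspace(0.0, 1.0, 6)
print("Pf(z) at", z, "->", np.round(pdf.pdf(z), 3))
print("check: mean =", round(pdf.mean(), 3), ", variance =", round(pdf.var(), 4))
```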


Journal ArticleDOI
Tor Aulin
TL;DR: In this article, a generalization of an existing model for the fading signal at a mobile radio antenna has been made, which lies in letting the scattering waves not necessarily be traveling horizontally, and the effects of this generalization are investigated concerning probability density function (pdf), correlational properties, and power spectra of the phase and envelope.
Abstract: A generalization of an existing model for the fading signal at a mobile radio antenna has been made. The generalization lies in letting the scattering waves not necessarily be traveling horizontally. The effects of this generalization are investigated concerning probability density function (pdf), correlational properties, and power spectra of the phase and envelope. The pdf is not affected, but the power spectrum of the envelope is significantly affected. This generalized spectrum is smoother than the original and always finite, which the latter is not. Thus it is assumed that the generalized model is more consistent with measured spectra, especially in urban environments.

336 citations
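A rough feel for why non-horizontal arrivals smooth the envelope spectrum while leaving the envelope pdf unchanged can be had from a minimal sum-of-scatterers Monte Carlo sketch. The Doppler limit, the uniform elevation spread and all other parameters below are assumptions, not the paper's model, and a single realisation only suggests the trend.

```python
import numpy as np

rng = np.random.default_rng(1)
fm = 100.0                        # maximum Doppler shift in Hz (assumed value)
fs, T, N = 2000.0, 4.0, 200       # sample rate, duration, number of scatterers
t = np.arange(0.0, T, 1.0 / fs)

def fading_envelope(elev_spread):
    """Sum-of-scatterers fading sample path; elev_spread is the elevation spread
    in radians (0 reproduces the classical horizontal-arrival model)."""
    az = rng.uniform(0, 2 * np.pi, N)
    el = rng.uniform(-elev_spread, elev_spread, N) if elev_spread > 0 else np.zeros(N)
    ph = rng.uniform(0, 2 * np.pi, N)
    fd = fm * np.cos(az) * np.cos(el)                 # per-wave Doppler shift
    r = np.exp(1j * (2 * np.pi * fd[:, None] * t + ph[:, None])).sum(0) / np.sqrt(N)
    return np.abs(r)

for beta in (0.0, 0.5):
    env = fading_envelope(beta)
    spec = np.abs(np.fft.rfft(env - env.mean())) ** 2
    print(f"elevation spread {beta:.1f} rad: envelope mean {env.mean():.2f}, "
          f"spectral peak/mean ratio {spec.max() / spec.mean():.1f}")
```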



Journal ArticleDOI
TL;DR: The Nakagami fading distribution is shown to fit empirical results more generally than other distributions, and the dependence of error probability on number of paths, amount of fading and spread of path delays is shown.
Abstract: The Nakagami fading distribution is shown to fit empirical results more generally than other distributions. A statistical model for a noisy, Nakagami fading multipath channel is given, following Turin's delay-line model. Optimal receivers are derived for two states of knowledge of the channel-known path delays and random path delays. Upper bounds on the probability of error are computed, for binary, equal-energy, equiprobable signals, which are uniformly orthogonal and have equal, triangular, autocorrelation moduli. Results are graphically displayed and show the dependence of error probability on number of paths, amount of fading and spread of path delays.

308 citations
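For readers unfamiliar with the Nakagami-m distribution, the sketch below (with assumed parameters m = 2 and Ω = 1) checks the standard construction of a Nakagami envelope as the square root of a gamma-distributed power against scipy's parameterisation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m, omega = 2.0, 1.0                          # fading figure and mean power (assumed)
# Nakagami-m envelope: square root of a Gamma(m, omega/m) distributed power
power = rng.gamma(shape=m, scale=omega / m, size=50000)
envelope = np.sqrt(power)

# scipy's nakagami uses shape nu = m and scale = sqrt(omega)
x = np.linspace(0.01, 3, 300)
pdf = stats.nakagami.pdf(x, m, scale=np.sqrt(omega))
hist, edges = np.histogram(envelope, bins=80, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
print("max abs deviation between histogram and pdf:",
      np.max(np.abs(np.interp(centres, x, pdf) - hist)))
```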


Journal ArticleDOI
TL;DR: In this paper, the evolution of an initially binary (zero unity) scalar field undergoing turbulent and molecular mixing is studied in terms of conservation equations for the probability density function of the scalar property.
Abstract: The evolution of an initially binary (zero unity) scalar field undergoing turbulent and molecular mixing is studied in terms of conservation equations for the probability density function of the scalar property. Attention is focused on the relaxation of the dynamic system to a state independent of the initial conditions. A few existing methods are discussed and evaluated and a new mechanistic model is proposed. Classical iteration techniques are used to obtain an equation for the single point probability density and the unperturbed Green’s function. It is suggested that use of the true Green’s function or perturbed propagator of the system might be necessary in order to obtain the correct evolution of the probability density function.

169 citations
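The relaxation of an initially binary scalar pdf can be mimicked with a crude particle Monte Carlo in which randomly chosen pairs mix to their common mean (a Curl-type interaction). This is only an illustration of pdf relaxation toward a spike at the mean value, not the closure or Green's-function analysis discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
phi = (rng.uniform(size=n) < 0.5).astype(float)   # initially binary (zero-unity) scalar field

for sweep in range(6):
    idx = rng.permutation(n)
    a, b = idx[:n // 2], idx[n // 2:]
    pair_mean = 0.5 * (phi[a] + phi[b])           # each pair relaxes to its mean
    phi[a] = phi[b] = pair_mean
    print(f"after sweep {sweep + 1}: mean = {phi.mean():.3f}, variance = {phi.var():.4f}")
```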


Journal ArticleDOI
TL;DR: In this article, a method of density estimation based on "delta sequences" is studied and mean square rates established, and a necessary and sufficient condition for asymptotic unbiasedness for continuous densities is given.
Abstract: Let $X_1, X_2, \cdots, X_n$ be i.i.d. random variables with common density function $f$. A method of density estimation based on "delta sequences" is studied and mean square rates established. This method generalizes certain others including kernel estimators, orthogonal series estimators, Fourier transform estimators, and the histogram. Rates are obtained for densities in Sobolev spaces and for densities satisfying Lipschitz conditions. The former generalizes some results of Wahba who also showed the rates obtained are the best possible. The rates obtained in the latter case have been shown to be the best possible by Farrell. This is shown independently by giving examples for which the rates are exact. Finally, a necessary and sufficient condition for asymptotic unbiasedness for continuous densities is given.

140 citations
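A kernel estimator is the most familiar member of the delta-sequence family studied here. The sketch below uses a Gaussian bump as the delta sequence (one choice among many, not the paper's general framework) and checks that the estimate integrates to about one.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)                       # i.i.d. sample with unknown density f

def delta_sequence_estimate(t, data, h):
    """f_n(t) = (1/n) * sum_i delta_h(t - X_i), with delta_h a Gaussian bump of
    bandwidth h: one concrete delta sequence (the usual kernel estimator)."""
    u = (t[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

t = np.linspace(-4, 4, 401)
dt = t[1] - t[0]
for h in (0.2, 0.5, 1.0):                      # h plays the role of the sequence index
    f_hat = delta_sequence_estimate(t, x, h)
    print(f"h = {h}: estimate integrates to about {(f_hat * dt).sum():.3f}")
```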


Journal ArticleDOI
TL;DR: In this paper, a model of the reflection of radar impulses from the sea at near-vertical incidence is used to account for non-Gaussian ocean wave statistics, and the joint probability density function (pdf) of wave height and slope is calculated according to the theory of Longuet-Higgins (1963) on the distribution of variables in a 'weakly nonlinear' random sea.
Abstract: A model of the reflection of radar impulses from the sea at near-vertical incidence is used to account for non-Gaussian ocean wave statistics. The joint probability density function (pdf) of wave height and slope is calculated according to the theory of Longuet-Higgins (1963) on the distribution of variables in a 'weakly nonlinear' random sea. The long-crested approximation is made, a Phillips wave spectrum is assumed, and the Gram-Charlier series is truncated after skewness terms. It is found that the height and height-slope skewness coefficients bear the ratio 1:2 and that the derived impulse response and conditional cross section versus wave height are in excellent agreement with previous observations. Finally, it is suggested that the empirically determined and theoretically predicted sea state bias be corrected for in the routine processing of satellite radar altimeter data.

105 citations
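The Gram-Charlier device of correcting a Gaussian with Hermite-polynomial skewness terms, used here for the joint height-slope pdf, is easiest to see in one dimension. The sketch below, with an assumed skewness of 0.3, checks that the truncated series keeps unit area, zero mean, unit variance and the prescribed third moment.

```python
import numpy as np
from scipy.stats import norm

def gram_charlier_skew(z, skew):
    """Gram-Charlier series truncated after the skewness term:
    p(z) = phi(z) * [1 + (skew / 6) * He3(z)], with He3(z) = z**3 - 3*z."""
    he3 = z**3 - 3 * z
    return norm.pdf(z) * (1 + skew / 6.0 * he3)

z = np.linspace(-4, 4, 401)
dz = z[1] - z[0]
p = gram_charlier_skew(z, skew=0.3)
print("area          :", round((p * dz).sum(), 3))          # ~1, correction integrates to 0
print("mean, variance:", round((z * p * dz).sum(), 3), round((z**2 * p * dz).sum(), 3))
print("third moment  :", round((z**3 * p * dz).sum(), 3))   # ~0.3, the prescribed skewness
```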


Journal ArticleDOI
TL;DR: In this paper, it is shown that the properties at any point in the flame can be determined from the transport equations for the velocity U and a set of scalars representing the species mass fractions and enthalpy.
Abstract: The theory stemming from the statistical representation of turbulent flames is presented and developed, the major aim being to describe the basic processes in relatively simple flames. Starting from conservation equations, with the assumption of low Mach number and high Reynolds number, it is shown that the properties at any point in the flame can be determined from the transport equations for the velocity U and a set of scalars representing the species mass fractions and enthalpy. However, the solution of these equations with initial conditions and boundary conditions appropriate to turbulent flames is prohibitively difficult. Statistical theories attempt to describe the behaviour of averaged quantities in terms of averaged quantities. This requires the introduction of closure approximations, but renders a more readily soluble set of equations. A closure of the Reynolds-stress equations and the equation for the joint probability density function of the scalars is considered. The use of the joint probability density function (p.d.f.) equation removes the difficulties that are otherwise encountered due to non-linear functions of the scalars (such as reaction rates). While the transport equation for the joint p.d.f. provides a useful description of the physics, its solution is feasible only for simple cases. As a practical alternative, a general method is presented for estimating the joint p.d.f. from its first and second moments; transport equations for these quantities are therefore also considered. Modelled transport equations for the Reynolds stresses, the dissipation rate, scalar moments and scalar fluxes are discussed, including the effects of reaction and density variations. A physical interpretation of the joint p.d.f. equation is given and the modelling of the unknown terms is considered. A general method for estimating the joint p.d.f. is presented. It assumes that the joint p.d.f. is the statistically most likely distribution with the same first and second moments. This distribution is determined for any number of reactive or non-reactive scalars.

102 citations
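The "statistically most likely" construction, choosing the maximum-entropy pdf subject to prescribed first and second moments, can be sketched for a single bounded scalar by solving for the Lagrange multipliers of an exponential-of-a-quadratic density. The target moments below are assumptions, and this is not the paper's multivariate formulation.

```python
import numpy as np
from scipy.optimize import fsolve

# Most likely pdf on [0, 1] with given mean and variance: p(f) ~ exp(l1*f + l2*f**2).
f = np.linspace(0.0, 1.0, 2001)
df = f[1] - f[0]

def moments(lams):
    l1, l2 = lams
    w = np.exp(l1 * f + l2 * f**2)
    Z = (w * df).sum()
    m1 = (f * w * df).sum() / Z
    m2 = (f**2 * w * df).sum() / Z
    return m1, m2

target_mean, target_var = 0.3, 0.03          # hypothetical scalar moments

def residual(lams):
    m1, m2 = moments(lams)
    return [m1 - target_mean, (m2 - m1**2) - target_var]

l1, l2 = fsolve(residual, x0=[0.0, 0.0])
print("Lagrange multipliers:", round(l1, 3), round(l2, 3))
print("recovered (mean, m2):", moments((l1, l2)))
```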


Journal ArticleDOI
TL;DR: In this article, the authors examined the implications of penalties for output not equalling demand by employing a general stochastic model for a firm facing an uncertain demand with a known probability density function.
Abstract: This paper examines some of the implications of introducing penalties for output not equalling demand by employing a general stochastic model for a firm facing an uncertain demand with a known probability density function. Several alternative objectives of the firm are considered: (1) maximization of expected profits; (2) maximization of the probability of achieving a particular target level of profits; and (3) maximization of target profits, given a target level of the probability of their being achieved. It is shown that the resulting probability density function of profits is not well defined. The shape and location of the function depend on the relative magnitudes of the model parameters and the output decision. Several important implications of this result for cost-volume-profit analysis are discussed.

83 citations
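A Monte Carlo sketch makes the trade-offs concrete: for an assumed demand density and assumed price, cost and penalty parameters (all hypothetical, not taken from the paper), each candidate output level yields an expected profit and a probability of reaching a target profit.

```python
import numpy as np

rng = np.random.default_rng(0)
price, unit_cost, fixed_cost = 10.0, 6.0, 500.0     # assumed firm parameters
pen_short, pen_excess = 2.0, 1.0                    # penalties per unit short / excess

demand = rng.gamma(shape=4.0, scale=50.0, size=100000)   # assumed demand pdf (mean 200)

def profit(q, d):
    sales = np.minimum(q, d)
    return (price * sales - unit_cost * q - fixed_cost
            - pen_short * np.maximum(d - q, 0.0)
            - pen_excess * np.maximum(q - d, 0.0))

target = 100.0
for q in (150, 200, 250):
    pi = profit(q, demand)
    print(f"q = {q}: E[profit] = {pi.mean():7.1f}, "
          f"P(profit >= target) = {np.mean(pi >= target):.3f}")
```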


Journal ArticleDOI
TL;DR: In this paper, the strain-energy density function surface for the rubber tested by L. R. G. Treloar (1944a) is determined from his stress-strain data.
Abstract: The strain-energy density function surface for the rubber tested by L. R. G. Treloar (1944a) is determined from his stress-strain data. The data were given for the three pure homogeneous strain paths of simple elongation, pure shear, and equi-biaxial extension of a thin sheet. The surface is formed by plotting calculated points of the strain-energy function above a plane having the first and second strain invariants as rectangular cartesian coordinates. The strain-energy function is expressed as a double power series in the invariants expanded about the zero energy state which is the origin of coordinates. An analysis of this experimentally derived surface provides the information required for the rational selection of terms and the determination of the coefficients in the series expansion, thus defining a function within the Rivlin-type formulation. The function so determined is tested by employing it in the appropriate constitutive formulae to compute stresses for comparison with experimental values. Another test is made by utilizing the function to predict shapes of an inflated membrane for comparison with experimentally observed shapes. Excellent agreement between prediction and experiment is found. A second demonstration is given for another rubber tested by D. F. Jones and L. R. G. Treloar (1975). Again, excellent results are obtained.

72 citations
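As a minimal sketch of fitting a Rivlin-type strain-energy expansion to stress-strain data, the code below fits the two-term (Mooney-Rivlin) special case to synthetic simple-elongation data. The constants and the noise level are assumptions; Treloar's actual data are not used.

```python
import numpy as np

# Nominal stress in simple elongation of an incompressible Mooney-Rivlin sheet:
#   s(l) = 2 * (l - l**-2) * (C1 + C2 / l), with l the stretch ratio.
lam = np.linspace(1.1, 4.0, 30)
C1_true, C2_true = 0.20, 0.05                 # MPa, hypothetical constants
rng = np.random.default_rng(0)
noise = 1.0 + 0.02 * rng.normal(size=lam.size)
s_meas = 2 * (lam - lam**-2) * (C1_true + C2_true / lam) * noise

# The model is linear in (C1, C2), so ordinary least squares recovers the coefficients.
A = np.column_stack([2 * (lam - lam**-2), 2 * (lam - lam**-2) / lam])
C1_fit, C2_fit = np.linalg.lstsq(A, s_meas, rcond=None)[0]
print(f"fitted C1 = {C1_fit:.3f}, C2 = {C2_fit:.3f}")
```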


Journal ArticleDOI
TL;DR: In this article, the authors studied the dynamics of a chemostat in which two microbial populations grow and compete for a common substrate and showed that the two populations cannot coexist in a spatially uniform environment which is subject to time invariant external influences unless the dilution rate takes on one of a discrete set of special values.
Abstract: The dynamics of a chemostat in which two microbial populations grow and compete for a common substrate is examined. It is shown that the two populations cannot coexist in a spatially uniform environment which is subject to time invariant external influences unless the dilution rate takes on one of a discrete set of special values. The dynamics of the same system are next considered in the stochastic environment created by random fluctuations of the dilution rate about a value that allows coexistence. The information needed for the description of the random process of the state of the chemostat is obtained from the transition probability density function. By modeling the system as a Markov process continuous in time and space, the transition probability density is obtained as solution of the Fokker-Planck equation. Analytical and numerical solutions of this equation show that extinction of either one population or the other will ultimately take place. The time required for extinction, the evolution of the mean composition with time, the steady states of the latter and the dependence of all the above on the intensity of the random noise are also calculated using constants appropriate to the competition of E. coli and Spirillum sp. The question of making predictions as to which population is the more likely to become extinct is treated finally, and the probabilities of extinction are calculated as solutions of the steady state version of the backward Fokker-Planck (Kolmogorov) equation.
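A direct simulation of the two-species chemostat with a randomly fluctuating dilution rate gives a feel for the slow drift toward extinction described above. Monod kinetics with hypothetical constants are used (not the E. coli / Spirillum values), D0 is chosen so the two break-even substrate concentrations coincide, and this Euler-Maruyama sketch replaces, rather than reproduces, the paper's Fokker-Planck analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.5, 0.4])              # maximum growth rates, 1/h (assumed)
K = np.array([0.05, 0.02])             # saturation constants, g/l (assumed)
Y = np.array([0.5, 0.5])               # yields (assumed)
Sf, D0, sigma = 1.0, 1.0 / 3.0, 0.03   # feed substrate, mean dilution rate, noise intensity
dt, T = 0.01, 2000.0                   # D0 = 1/3 makes the break-even substrate levels equal

x = np.array([0.1, 0.1])
s = 0.5
first_extinction = None
for step in range(int(T / dt)):
    D = D0 + sigma * rng.normal() / np.sqrt(dt)        # white-noise dilution rate
    growth = mu * s / (K + s)
    x = np.maximum(x + dt * (growth - D) * x, 0.0)
    s = max(s + dt * (D * (Sf - s) - np.sum(growth * x / Y)), 0.0)
    if first_extinction is None and x.min() < 1e-6:
        first_extinction = step * dt
print("final biomasses:", np.round(x, 4),
      "| first time a population fell below 1e-6:", first_extinction)
```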

Journal ArticleDOI
TL;DR: In this paper, two graphical procedures for analysing distributions of survival time are compared, one based on the survivor function, and the other based on estimates of log hazard, which is designed to give points with independent errors of constant known variance.
Abstract: Two graphical procedures for analysing distributions of survival time are compared. One works with the survivor function, or with the order statistics, and the other is based on estimates of log hazard and is designed to give points with independent errors of constant known variance. The distribution of survival time, T, may be described equivalently by a probability density function f(t), by a survivor function F(t) = pr(T > t), and by a hazard function p(t) = f(t)/F(t). One advantage of p(.) is that often, although not always, it varies slowly over all or most of its range. Constant p(.) corresponds to an exponential distribution and, even when it is not intended to base the analysis on an assumption of exponential form, that distribution often gives a natural base against which to judge distributional shape. There are many ways of comparing data graphically with the exponential distribution; for a cryptic list of ten such, see Cox (1978, Table 2). Most are transformations of one another so that choice is partly a matter of taste. The present note contrasts two simple graphical techniques.
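The two ingredients being compared, the survivor function from the order statistics and a log-hazard estimate, can be computed directly. The sketch below uses simulated exponential data and a crude binned hazard, not the paper's variance-stabilising construction.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.exponential(scale=2.0, size=200))     # uncensored survival times

# Survivor-function plotting positions: F_hat(t_(i)) = 1 - i / (n + 1)
n = t.size
surv = 1.0 - np.arange(1, n + 1) / (n + 1)
print("first survivor-plot points:\n", np.round(np.c_[t[:5], surv[:5]], 3))

# Crude binned log-hazard: deaths / (bin width * average number at risk)
edges = np.quantile(t, np.linspace(0, 1, 11))
deaths, _ = np.histogram(t, bins=edges)
at_risk_start = n - np.concatenate(([0], np.cumsum(deaths)[:-1]))
hazard = deaths / (np.diff(edges) * (at_risk_start - deaths / 2.0))
print("log-hazard per bin:", np.round(np.log(hazard), 2))
# For exponential data with mean 2 the log hazard should be roughly constant near log(0.5).
```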

Journal ArticleDOI
TL;DR: In this article, the frequency distribution of the volume of water above a given threshold discharge is developed using basic and accessible information like the joint probability density function of rainfall intensity and duration.
Abstract: The frequency distribution of the volume of water above a given threshold discharge is developed. This is done using basic and accessible information like the joint probability density function of rainfall intensity and duration together with expressions, to be derived, relating the volume of interest to rainfall intensity and duration. The resulting distribution function is in a closed analytical form containing only a few climatological and physical parameters of a catchment. This distribution function will be of great value in the design of storage devices, flood control systems, and storm water treatment facilities in urban areas.
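Even before deriving a closed-form distribution, the mapping from a joint intensity-duration pdf to the volume above a threshold can be explored by Monte Carlo. The independent exponential marginals, the threshold value and the rectangular-hyetograph runoff translation below are all simplifying assumptions, not the paper's derived expressions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
duration = rng.exponential(scale=6.0, size=n)       # h, assumed marginal
intensity = rng.exponential(scale=4.0, size=n)      # mm/h, assumed marginal (independent here)

q_threshold = 3.0                                   # threshold discharge expressed in mm/h
# Rectangular-hyetograph translation: volume above threshold = (i - q)+ * d
volume = np.maximum(intensity - q_threshold, 0.0) * duration

print(f"P(V > 0) = {np.mean(volume > 0):.3f}")
print(f"mean volume given exceedance = {volume[volume > 0].mean():.1f} mm")
print(f"99th percentile of V = {np.quantile(volume, 0.99):.1f} mm")
```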

Journal ArticleDOI
TL;DR: The random walk procedure is intended mainly for the texture discrimination problem, and its possible application to the edge detection problem (as shown in this paper) is just a by-product.
Abstract: We consider the problem of texture discrimination. Random walks are performed in a plane domain D bounded by an absorbing boundary, and the absorption distribution is calculated. Measurements derived from such distributions are the features used for discrimination. Both problems of texture discrimination and edge segment detection can be solved using the same random walk approach. The border distributions and their differences with respect to a homogeneous image can classify two different images as having similar or dissimilar textures. The existence of an edge segment is concluded if the boundary distribution for a given window (subimage) differs significantly from the boundary distribution for a homogeneous (uniform grey level) window. The random walk procedure has been implemented and results of texture discrimination are shown. A comparison is made between results obtained using the random walk approach and the first- or second-order statistics, respectively. The random walk procedure is intended mainly for the texture discrimination problem, and its possible application to the edge detection problem (as shown in this paper) is just a by-product.
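A toy version of the boundary-absorption feature is easy to code: walks start at the window centre, steps are biased towards neighbours of similar grey level (an arbitrary transition rule, not necessarily the paper's), and the absorption histogram on the boundary is the feature vector compared between windows.

```python
import numpy as np

rng = np.random.default_rng(0)

def boundary_absorption(window, n_walks=1000, beta=20.0):
    """Random walks start at the window centre and stop when absorbed at the boundary.
    Steps favour neighbours with similar grey level (one simple, assumed rule)."""
    h, w = window.shape
    counts = np.zeros(2 * (h + w), dtype=float)
    moves = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])
    for _ in range(n_walks):
        r, c = h // 2, w // 2
        while 0 < r < h - 1 and 0 < c < w - 1:
            nbrs = moves + (r, c)
            diff = np.abs(window[nbrs[:, 0], nbrs[:, 1]] - window[r, c])
            p = np.exp(-diff / beta)
            r, c = nbrs[rng.choice(4, p=p / p.sum())]
        # flatten the absorbing boundary pixel to a single index (top, bottom, left, right)
        idx = c if r == 0 else w + c if r == h - 1 else 2 * w + r if c == 0 else 2 * w + h + r
        counts[idx] += 1
    return counts / n_walks

smooth = np.add.outer(np.arange(17.0), np.arange(17.0))      # gradient-like "texture"
rough = rng.uniform(0, 32, size=(17, 17))                    # noise-like "texture"
d = np.abs(boundary_absorption(smooth) - boundary_absorption(rough)).sum()
print("L1 distance between boundary absorption distributions:", round(d, 3))
```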

Journal ArticleDOI
TL;DR: This paper presents the results of an experimental investigation of three types of quantizer mismatch in digital coding systems; the gamma, Laplacian, and Gaussian distributions are used to model the source statistics.
Abstract: Quantizers for digital coding systems are usually optimized with respect to a model of the probability density function of the random variable to be quantized. Thus a mismatch of the quantizer relative to the actual statistics of the random variable may be unavoidable. This paper presents the results of an experimental investigation of three types of mismatch. For the modeling of the source statistics, the gamma-, Laplacian-, and Gaussian-distribution are used. The optimization of the quantizers is carried out with respect to the minimum mean-square error criterion.
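The mismatch experiment can be reproduced in miniature: design a minimum-MSE (Lloyd-Max) quantizer from samples of one density and apply it to another. The 8-level size and unit-variance sources below are assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def lloyd_max(samples, levels=8, iters=50):
    """Lloyd's algorithm: alternately update decision thresholds and reproduction levels."""
    reps = np.quantile(samples, (np.arange(levels) + 0.5) / levels)   # initial codebook
    for _ in range(iters):
        thresholds = 0.5 * (reps[:-1] + reps[1:])
        idx = np.digitize(samples, thresholds)
        reps = np.array([samples[idx == k].mean() for k in range(levels)])
    return reps, thresholds

def mse(samples, reps, thresholds):
    return np.mean((samples - reps[np.digitize(samples, thresholds)]) ** 2)

gauss = rng.normal(size=100_000)
lap = rng.laplace(scale=1 / np.sqrt(2), size=100_000)     # unit-variance Laplacian
reps_g, thr_g = lloyd_max(gauss)
reps_l, thr_l = lloyd_max(lap)
print("Laplacian source, matched quantizer  :", round(mse(lap, reps_l, thr_l), 4))
print("Laplacian source, Gaussian quantizer :", round(mse(lap, reps_g, thr_g), 4))
```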

Journal ArticleDOI
TL;DR: In this paper, the authors obtain the limiting distribution of the uncovered proportion of the circle, which has a natural interpretation as a noncentral chi-square distribution with zero degrees of freedom by expressing it as a Poisson mixture of mass at zero with central Chi-square deviates having even degree of freedom.
Abstract: Place $n$ arcs, each of length $a_n$, uniformly at random on the circumference of a circle, choosing the arc length sequence $a_n$ so that the probability of completely covering the circle remains constant. We obtain the limiting distribution of the uncovered proportion of the circle. We show that this distribution has a natural interpretation as a noncentral chi-square distribution with zero degrees of freedom by expressing it as a Poisson mixture of mass at zero with central chi-square deviates having even degrees of freedom. We also treat the case of proportionately smaller arcs and obtain a limiting normal distribution. Potential applications include immunology, genetics, and time series analysis.
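The limiting regime is easy to probe by simulation: with arc length $a_n = (\log n + c)/n$ the coverage probability stays roughly constant as $n$ grows. The exact gap computation below uses the fact that the uncovered set is the union of inter-start gaps exceeding $a_n$; the constant c = 1 is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def uncovered_proportion(n, a):
    """Exact uncovered length when n arcs of length a start at uniform points
    on a circle of unit circumference."""
    starts = np.sort(rng.uniform(0.0, 1.0, n))
    gaps = np.diff(np.append(starts, starts[0] + 1.0))   # circular gaps between starts
    return np.maximum(gaps - a, 0.0).sum()

c = 1.0
for n in (100, 1000, 10000):
    a_n = (np.log(n) + c) / n        # keeps the coverage probability roughly constant
    u = np.array([uncovered_proportion(n, a_n) for _ in range(2000)])
    print(f"n = {n:5d}: P(fully covered) = {(u == 0).mean():.3f}, "
          f"mean uncovered proportion = {u.mean():.2e}")
```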

Journal ArticleDOI
TL;DR: In this article, the pore size distribution is derived from the moisture characteristic, based on an empirical law relating the pore suction to a characteristic size referred to as the effective pore radius.
Abstract: Most pF curves are shaped as cumulative distribution functions. From this observation, restricted to nonshrinking or swelling soils, a close relationship is established between the moisture characteristic and the pore size distribution, based on an empirical law relating the pore suction to a characteristic size referred to as the effective pore radius. For many soils showing a moisture characteristic similar to a normal cumulative distribution function, the derived pore size distribution is log normal. In this case, a direct identification technique is developed, yielding the parameters of the density function from the experimental pF-θ relationship. Practical applications demonstrate the validity of this probabilistic model, which gives good agreement with morphometric pore size data, available in the literature. The model also gives rise to an analytical expression for the pF curve in terms of two structural parameters of the porous system, the mean and the standard deviation of the effective pore radius.

Journal ArticleDOI
TL;DR: The spatial and temporal dependencies of this uncertainty are dependent upon the variance of the velocity and dispersivity as discussed by the authors, the magnitude of the dispersivity, the functional form of the probability distribution function, and the number of uncertain parameters considered in the analysis.
Abstract: The coefficients of the mass transport equation are often characterized by considerable uncertainty. Where the functional form of this uncertainty is known the transport equation can be solved to yield a solution also characterized by uncertainty. The spatial and temporal dependencies of this uncertainty are dependent upon the variance of the velocity and dispersivity, the magnitude of the dispersivity, the functional form of the probability distribution function, and the number of uncertain parameters considered in the analysis. In general the uncertainty, as measured by the coefficient of variation, is considerably smaller for the solution than for the input parameters. While any finite variance theoretically can be accommodated by the method employed for solution of the stochastic partial differential equations, it is nevertheless essential to select numerical parameters (Δx and Δt) such that certain constraints on the form of the coefficient matrix are not violated.

Journal ArticleDOI
TL;DR: In this paper, a general theorem is given which simplifies and extends the techniques of Prakasa Rao (1966) and Brunk (1970), expressing sufficient conditions for a specified limit distribution to obtain are expressed in terms of local and global conditions.
Abstract: Isotonic estimation involves the estimation of a function which is known to be increasing with respect to a specified partial order. For the case of a linear order, a general theorem is given which simplifies and extends the techniques of Prakasa Rao (1966) and Brunk (1970). Sufficient conditions for a specified limit distribution to obtain are expressed in terms of a local condition and a global condition. The theorem is applied to several examples. The first example is estimation of a monotone function mu on (0,1) based on observations (i/n, X sub ni), where EX sub ni = mu (i/n). In the second example, i/n is replaced by random T sub ni. Robust estimators for this problem are described. Estimation of a monotone density function is also discussed. It is shown that the rate of convergence depends on the order of first non-zero derivative and that this result can obtain even if the function is not monotone over its entire domain. (Author)
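The linear-order case of isotonic estimation is computed by the pool-adjacent-violators algorithm. The sketch below implements it for observations (i/n, X_ni) with an assumed monotone mean, purely to show the estimator whose limit distribution the theorem addresses.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares fit of a nondecreasing sequence to y."""
    level, weight, blocks = [], [], []
    for i, v in enumerate(np.asarray(y, dtype=float)):
        level.append(v); weight.append(1.0); blocks.append([i])
        # merge backwards while monotonicity is violated
        while len(level) > 1 and level[-2] > level[-1]:
            w = weight[-2] + weight[-1]
            lv = (weight[-2] * level[-2] + weight[-1] * level[-1]) / w
            level[-2:] = [lv]
            weight[-2:] = [w]
            blocks[-2:] = [blocks[-2] + blocks[-1]]
    fit = np.empty(len(y))
    for lv, idx in zip(level, blocks):
        fit[idx] = lv
    return fit

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.sqrt(x) + 0.2 * rng.normal(size=x.size)     # noisy observations of a monotone mean
fit = pava(y)
print("every 10th fitted value:", np.round(fit[::10], 2))
print("nondecreasing:", bool(np.all(np.diff(fit) >= -1e-12)))
```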

Journal ArticleDOI
TL;DR: In this paper, a smooth density function over a geographical region from data aggregated over irregular subregions is obtained by minimizing a family of roughness criteria given volume data, which leads to smooth multivariate functions.
Abstract: This work was motivated by the problem of obtaining a smooth density function over a geographical region from data aggregated over irregular subregions. Minimization of a family of roughness criteria given “volume” data leads to smooth multivariate functions—Laplacian histosplines, having a certain order of the iterated Laplacian of constant value in each of the subregions and satisfying natural boundary conditions on the boundary of the region. For inexact data, e.g., in case of estimating an underlying density given counts of events by subregions, Laplacian smoothing histosplines are constructed, analogous to smoothing splines in the univariate case, and a method for choosing the smoothing parameter is presented.For both cases of exact and inexact data, modified roughness criteria, independent of the region, are discussed, and results known for point-evaluation data are extended to the case of aggregated data.

Journal ArticleDOI
TL;DR: In this paper, a new approximation based on the saddlepoint method of approximating integrals is derived for the probability density of the k-class estimator in the case of the equation with two endogenous variables.
Abstract: A new approximation based on the saddlepoint method of approximating integrals is derived for the probability density of the k-class estimator in the case of the equation with two endogenous variables. The two tails of the density are approximated by different functions, each of which bears a close relationship with the exact density in the same region of the distribution. Corresponding approximations are also derived for the distribution function and the method of derivation should be useful in other applications. Some brief numerical results are reported which illustrate the performance of the new approximation.
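The saddlepoint recipe itself, solve K'(t) = x and evaluate sqrt(n / (2*pi*K''(t))) * exp(n*(K(t) - t*x)), is easiest to check on a case with a known answer. The sketch below approximates the density of the mean of n standard exponentials (not the k-class estimator treated in the paper).

```python
import numpy as np
from scipy import stats

# Saddlepoint approximation for the mean of n Exp(1) variables.
# CGF K(t) = -log(1 - t); the saddlepoint solving K'(t) = x is t_hat = 1 - 1/x.
def saddlepoint_density(x, n):
    t_hat = 1.0 - 1.0 / x
    K = -np.log(1.0 - t_hat)
    Kpp = 1.0 / (1.0 - t_hat) ** 2
    return np.sqrt(n / (2 * np.pi * Kpp)) * np.exp(n * (K - t_hat * x))

n = 5
x = np.linspace(0.2, 3.0, 6)
exact = stats.gamma.pdf(x, a=n, scale=1.0 / n)     # exact density of the mean
print(np.round(np.c_[x, saddlepoint_density(x, n), exact], 4))
```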

Journal ArticleDOI
TL;DR: In this article, a new representation of a probability density function on the three dimensional rotation group is presented, which generalizes the exponential Fourier densities on the circle, and an error criterion which is compatible with a Riemannian metric is introduced and discussed.
Abstract: A new representation of a probability density function on the three dimensional rotation group, $SO( 3 )$ is presented, which generalizes the exponential Fourier densities on the circle. As in the circle case, this class of densities on $SO( 3 )$ is closed under the operation of taking conditional distributions. Several simple multistage estimation and detection models are considered. The closure property enables us to determine the sequential conditional densities by recursively updating a finite and fixed number of coefficients. It also enables us to express the likelihood ratio for signal detection explicitly in terms of the conditional densities.An error criterion, which is compatible with a Riemannian metric, is introduced and discussed. The optimal orientation estimates with respect to this error criterion are derived for a given probability distribution, illustrating how the updated conditional densities can be used to recursively determine the optimal estimates on $SO( 3 )$.
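The closure property is easiest to see in the circle case that the paper generalizes: an exponential Fourier density exp(a cos θ + b sin θ), multiplied by a likelihood of the same form, stays in the family, so conditioning only updates the coefficients. The prior and likelihood coefficients below are arbitrary choices.

```python
import numpy as np

def normalised_density(a, b, grid=10000):
    """Exponential Fourier density p(theta) proportional to exp(a*cos + b*sin) on the circle."""
    th = np.linspace(-np.pi, np.pi, grid, endpoint=False)
    w = np.exp(a * np.cos(th) + b * np.sin(th))
    return th, w / (w.sum() * (th[1] - th[0]))

a_prior, b_prior = 2.0, 0.0        # prior concentrated near theta = 0 (assumed)
a_like, b_like = 1.0, 1.5          # likelihood of the same exponential Fourier form (assumed)

# Conditioning = coefficient addition, which is the recursive update exploited in the paper's setting.
th, post = normalised_density(a_prior + a_like, b_prior + b_like)
print("posterior mode:", round(th[np.argmax(post)], 4),
      "| analytic atan2(b, a):", round(np.arctan2(b_prior + b_like, a_prior + a_like), 4))
```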

Journal ArticleDOI
TL;DR: This paper extends the dynamic programming technique for computing the distance between two finite sequences into two dimensions, presenting an algorithm for finding the distance between two finite areas and suggesting applications.
Abstract: The problems of speech recognition and orthographic word correction have been greatly mitigated by the use of dynamic programming techniques for finding the distance between two finite sequences. This paper extends the technique into two dimensions, and presents an algorithm for finding the distance between two finite areas. Applications of the algorithm are suggested.
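For reference, the one-dimensional dynamic-programming distance that the paper extends to two dimensions is the familiar edit distance between sequences; a minimal implementation follows.

```python
import numpy as np

def edit_distance(a, b):
    """Classic dynamic-programming distance between two finite sequences
    (the one-dimensional technique that the paper extends to areas)."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[m, n]

print(edit_distance("probability", "probably"))   # 3
```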

Journal ArticleDOI
TL;DR: For equally spaced observations from a one-dimensional, stationary, Gaussian random function, the characteristic function of the usual variogram estimator for a fixed lag k is derived in this paper.
Abstract: For equally spaced observations from a one-dimensional, stationary, Gaussian random function, the characteristic function of the usual variogram estimator $\hat{\gamma}_k$ for a fixed lag k is derived. Because the characteristic function and the probability density function form a Fourier integral pair, it is possible to tabulate the sampling distribution of a function of $\hat{\gamma}_k$ using either analytic or numerical methods. An example of one such tabulation is given for an underlying model that is simple transitive.
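The estimator whose characteristic function is derived is simply the average of squared lag-k differences; a quick numerical check on a stationary Gaussian AR(1) sequence (an arbitrary choice of model, not the paper's simple transitive example) is below.

```python
import numpy as np

rng = np.random.default_rng(0)
# Equally spaced observations of a stationary Gaussian sequence: a unit-variance AR(1).
n, phi = 500, 0.7
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):
    z[t] = phi * z[t - 1] + rng.normal() * np.sqrt(1 - phi**2)

def variogram(z, k):
    """Usual estimator: gamma_hat(k) = sum_t (z_{t+k} - z_t)^2 / (2 * (n - k))."""
    d = z[k:] - z[:-k]
    return 0.5 * (d @ d) / d.size

for k in (1, 2, 5, 10):
    print(f"lag {k}: gamma_hat = {variogram(z, k):.3f}  (model value {1 - phi**k:.3f})")
```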

Journal ArticleDOI
01 Jan 1979
TL;DR: In this article, autocorrelation and probability analysis are applied to investigate the concentration field of a turbulent free jet of natural gas having a Reynolds number of 16000; the probability density function of concentration was found to be approximately gaussian on the centre line but highly non-gaussian in off-axis intermittent regions.
Abstract: This paper describes the measurement of turbulent concentration parameters using a laser Raman spectrometer. Autocorrelation and probability analysis are applied to investigate the concentration field of a turbulent free jet of natural gas having a Reynolds number of 16000. Results of mean and rms concentration are presented along the jet axis; the asymptotic value of volume fraction fluctuation intensity was found to be ∼30%. A method for obtaining the probability density function is described; this was found to be approximately gaussian on the centre line but highly non-gaussian in off-axis intermittent regions. This information is combined with simplified chemical kinetics to calculate factors relating to ignition probability in such turbulent mixing flows.

Journal ArticleDOI
TL;DR: In this article, the authors studied the probability distribution for the density of primary cosmic rays at a given position when the spacetime source volume and number of sources increase without limit, assuming supernova sources.
Abstract: Assuming that the sources of primary cosmic radiation are discrete and distributed at random but according to a given probability distribution throughout the Galaxy, statistical quantities of interest (e.g., means, variances, and two-point correlations) are calculated analytically for the density, flux (anisotropy), and age distribution of both secondaries and primaries in terms of a Green's function describing galactic propagation from a monoenergetic discrete source. Also presented is the probability distribution for the density of primary cosmic rays at a given position when the spacetime source volume and number of sources increase without limit. For illustrative purposes, detailed results are presented for one species of primary and secondary nuclei within a version of the standard ''leaky box'' propagation model assuming supernova sources. The importance of the theory lies in its unification of all models of galactic cosmic ray propagation involving discrete sources within a single framework.

Journal ArticleDOI
TL;DR: In this article, the effect of turbulence in the atmosphere on the motion stability of a helicopter blade is investigated, where only flapping and torsional motions are considered; the results are reported in two parts.
Abstract: The effect of turbulence in the atmosphere on the motion stability of a helicopter blade is investigated. Modeling turbulence as a random field, statistically stationary in time and homogeneous in space, the stochastic averaging method of Stratonovich is used to obtain equivalent Ito stochastic equations, from which the Fokker-Planck equation for the transition probability density and the equations for various stochastic moments can be derived. As an exploratory study, only flapping and torsional motions are considered; the results are reported in two parts. In Part I, the present paper, equations of motion are derived which are reducible to those obtained previously by Sissingh and Kuczynski when the turbulence terms are removed. The in-plane turbulence components appear in the coefficients of these equations; thus, they affect the stability of the flapping and torsional motions. On the other hand, the normal turbulence component appears in the inhomogeneous terms in the equations; its statistical properties, while affecting the level of system response, do not change a stable solution to an unstable solution. Then detailed discussions are given for the reduced case of uncoupled flapping in a hovering flight. This simple case is theoretically interesting, since closed-form solutions can be obtained and considerable insight can be gained from the analysis. Certain mathematical tools involving stochastic processes which may be foreign to engineers are explained in the Appendix. Solutions for the case of forward flights for both coupled and uncoupled motions are presented in Part II.

Journal ArticleDOI
TL;DR: In this article, the maximum entropy method of spectral analysis is examined in terms of the probability density of a finite random sequence, and the distribution is found which has maximum entropy within constraints imposed by the stationarity of the sequence.
Abstract: The maximum entropy method of spectral analysis is examined in terms of the probability density of a finite random sequence. The distribution is found which has maximum entropy within constraints imposed by the stationarity of the sequence. The population power spectral density of the sequence is expressed in terms of the parameters of this distribution. The distribution is shown to be related to a linear regression model for which the parameters may be estimated by the established methods of regression theory. Confidence limits for the spectral estimator are derived, infinite confidence limits being interpreted in terms of cyclic behavior of the ensemble means of the sequence. The null hypothesis that the members of a subset of regression coefficients are all zero is used to estimate the order of the autoregression. Estimated spectra of a sequence of annual mean sunspot numbers are computed using this method and found to have a single significant peak.
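Because the maximum-entropy spectrum is the spectrum of a fitted autoregression, it can be sketched with Yule-Walker estimates; Burg's recursion, or the regression formulation described in the abstract, would differ only in how the AR coefficients are estimated. The synthetic 11-sample-period series and the order 10 below are assumptions, not the paper's sunspot analysis.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(0)
# Synthetic "sunspot-like" series: a noisy oscillation with an 11-sample period.
t = np.arange(300)
x = 40 * np.sin(2 * np.pi * t / 11) + 15 * rng.normal(size=t.size)
x = x - x.mean()

p = 10                                               # autoregression order (assumed)
r = np.array([x[:len(x) - k] @ x[k:] for k in range(p + 1)]) / len(x)
a = solve_toeplitz(r[:p], r[1:p + 1])                # Yule-Walker AR coefficients
sigma2 = r[0] - a @ r[1:p + 1]                       # prediction-error variance

freqs = np.linspace(0.0, 0.5, 500)
denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, p + 1))) @ a) ** 2
spectrum = sigma2 / denom
print("spectral peak at frequency", round(freqs[np.argmax(spectrum)], 3),
      "(expect about 1/11 = 0.091)")
```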

Book ChapterDOI
01 Jan 1979
TL;DR: Adopting a technique developed by Farrell (1972) and Wahba (1975), the optimal rate of convergence of the MSE is obtained for non-parametric estimators of derivatives of a density, with attention restricted to kernels with compact support.
Abstract: We consider kernel estimates for the derivatives of a probability density which satisfies certain smoothness conditions. We derive the rate of convergence of the local and of the integrated mean square error (MSE and IMSE), restricting ourselves to kernels with compact support. Optimal kernel functions for estimating the first three derivatives are given. Adopting a technique developed by Farrell (1972) and Wahba (1975), we obtain the optimal rate of convergence of the MSE for non-parametric estimators of derivatives of a density. Kernel estimates attain this optimal rate.
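The object being studied, a kernel estimate of a density derivative, looks like this in code. A Gaussian kernel is used for simplicity even though the paper restricts attention to kernels with compact support, and the bandwidth below is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2000)                       # sample from a standard normal density

def kernel_density_derivative(t, data, h):
    """Estimate f'(t) with the derivative of a Gaussian kernel:
    f_n'(t) = (1 / (n h^2)) * sum_i K'((t - X_i) / h), with K'(u) = -u * phi(u)."""
    u = (t[:, None] - data[None, :]) / h
    kprime = -u * np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return kprime.sum(axis=1) / (len(data) * h**2)

t = np.linspace(-3, 3, 7)
est = kernel_density_derivative(t, x, h=0.4)
true = -t * np.exp(-0.5 * t**2) / np.sqrt(2 * np.pi)   # f'(t) for the standard normal
print(np.round(np.c_[t, est, true], 3))
```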

Journal ArticleDOI
TL;DR: In this paper, it is shown that machining economic calculations based on the deterministic tool life concept, when in fact the tool life is a statistical quantity, yield incorrect results.