
Showing papers on "Probability density function published in 1995"


Journal ArticleDOI
TL;DR: In inverse problems, obtaining a maximum likelihood model is usually not sufficient, as the theory linking data with model parameters is nonlinear and the a posteriori probability in the model space may not be easy to describe.
Abstract: Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines a priori information with new information obtained by measuring some observable parameters (data). As, in the general case, the theory linking data with model parameters is nonlinear, the a posteriori probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.). When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have information …

1,124 citations


Journal ArticleDOI
TL;DR: A simple alternative method to estimate the shape parameter for the generalized Gaussian PDF is proposed that significantly reduces the number of computations by eliminating the need for any statistical goodness-of-fit test.
Abstract: A subband decomposition scheme for video signals, in which the original or difference frames are each decomposed into 16 equal-size frequency subbands, is considered. Westerink et al. (1991) have shown that the distribution of the sample values in each subband can be modeled with a "generalized Gaussian" probability density function (PDF), where three parameters, mean, variance, and shape, are required to uniquely determine the PDF. To estimate the shape parameter, a series of statistical goodness-of-fit tests such as Kolmogorov-Smirnov or chi-squared tests have been used. A simple alternative method to estimate the shape parameter for the generalized Gaussian PDF is proposed that significantly reduces the number of computations by eliminating the need for any statistical goodness-of-fit test.

565 citations
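As a quick illustration of the kind of fit-free shape estimation this abstract describes, the sketch below matches the ratio (E|X|)^2 / E[X^2], which for a zero-mean generalized Gaussian with shape alpha equals Gamma(2/alpha)^2 / (Gamma(1/alpha) Gamma(3/alpha)), and inverts it by bisection. This is a generic moment-matching estimator, not necessarily the exact estimator of the paper; the bracketing interval and sample size are arbitrary choices.

```python
import math, random

def ggd_ratio(alpha):
    # (E|X|)^2 / E[X^2] for a zero-mean generalized Gaussian with shape alpha;
    # monotone increasing in alpha (0.5 for Laplace, 2/pi for Gaussian)
    return math.gamma(2 / alpha) ** 2 / (math.gamma(1 / alpha) * math.gamma(3 / alpha))

def estimate_shape(samples, lo=0.1, hi=10.0, iters=60):
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m1 * m1 / m2
    for _ in range(iters):           # bisection: no goodness-of-fit test needed
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
gauss = [random.gauss(0, 1) for _ in range(200_000)]
print(round(estimate_shape(gauss), 2))   # close to 2 (the Gaussian case is alpha = 2)
```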


Journal ArticleDOI
TL;DR: In this paper, a high-resolution procedure to reconstruct common-midpoint (CMP) gathers is presented, in which the target is the artifact-free, aperture-compensated velocity gather.
Abstract: We present a high-resolution procedure to reconstruct common-midpoint (CMP) gathers. First, we describe the forward and inverse transformations between offset and velocity space. Then, we formulate an underdetermined linear inverse problem in which the target is the artifact-free, aperture-compensated velocity gather. We show that a sparse inversion leads to a solution that resembles the infinite-aperture velocity gather. The latter is the velocity gather that should have been estimated with a simple conjugate operator designed from an infinite-aperture seismic array. This high-resolution velocity gather is then used to reconstruct the offset space. The algorithm is formally derived using two basic principles. First, we use the principle of maximum entropy to translate prior information about the unknown parameters into a probabilistic framework, in other words, to assign a probability density function to our model. Second, we apply Bayes’s rule to relate the a priori probability density function (pdf) with the pdf corresponding to the experimental uncertainties (likelihood function) to construct the a posteriori distribution of the unknown parameters. Finally, the model is evaluated by maximizing the a posteriori distribution. When the problem is correctly regularized, the algorithm converges to a solution characterized by different degrees of sparseness depending on the required resolution. The solutions exhibit minimum entropy when the entropy is measured in terms of Burg’s definition. We emphasize two crucial differences between our approach and the familiar Burg method of maximum entropy spectral analysis. First, Burg’s entropy is minimized rather than maximized, which is equivalent to inferring as much as possible about the model from the data. Second, our approach uses the data as constraints, in contrast with the classic maximum entropy spectral analysis approach where the autocorrelation function is the constraint.
This implies that we recover not only amplitude information but also phase information, which serves to extrapolate the data outside the original aperture of the array. The tradeoff is controlled by a single parameter that under asymptotic conditions reduces the method to a damped least-squares solution. Finally, the high-resolution or aperture-compensated velocity gather is used to extrapolate near- and far-offset traces.

395 citations


Journal ArticleDOI
TL;DR: The Rice Clock Model (RCM) as mentioned in this paper was introduced to describe the effect of temperature on the rate of crop development, and the model accurately described the response to temperature of several developmental processes, and was superior to two widely used thermal time approaches.

337 citations


Posted Content
TL;DR: In this paper, the authors provide an elementary geometric derivation of the Kac integral formula for the expected number of real zeros of a random polynomial with independent standard normally distributed coefficients.
Abstract: We provide an elementary geometric derivation of the Kac integral formula for the expected number of real zeros of a random polynomial with independent standard normally distributed coefficients. We show that the expected number of real zeros is simply the length of the moment curve $(1,t,\ldots,t^n)$ projected onto the surface of the unit sphere, divided by $\pi$. The probability density of the real zeros is proportional to how fast this curve is traced out. We then relax Kac's assumptions by considering a variety of random sums, series, and distributions, and we also illustrate such ideas as integral geometry and the Fubini-Study metric.

308 citations
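The Kac result in this abstract is easy to probe numerically: for a degree-n polynomial with i.i.d. standard normal coefficients, the expected number of real zeros grows like (2/pi) ln n (plus a constant near 0.63). The Monte Carlo sketch below is an illustration of that claim, not the paper's geometric derivation; the degree and trial count are arbitrary.

```python
import numpy as np

def mean_real_zeros(degree, trials=2000, seed=1):
    # Monte Carlo estimate of the expected number of real roots of a random
    # polynomial whose coefficients are i.i.d. standard normal (Kac ensemble)
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(trials):
        coeffs = rng.standard_normal(degree + 1)
        roots = np.roots(coeffs)                 # companion-matrix eigenvalues
        total += np.sum(np.abs(roots.imag) < 1e-9)
    return total / trials

est = mean_real_zeros(20)
kac = (2 / np.pi) * np.log(20)                   # leading-order Kac asymptotic
print(est, kac)                                  # est exceeds kac by roughly 0.6
```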


Journal ArticleDOI
TL;DR: The performance of optimum receivers, designed to detect signals embedded in impulsive noise which is modeled as an infinite variance symmetric /spl alpha/-stable process, is examined, and it is compared against the performance of several suboptimum receivers.
Abstract: Impulsive noise bursts in communication systems are traditionally handled by incorporating in the receiver a limiter which clips the received signal before integration. An empirical justification for this procedure is that it generally causes the signal-to-noise ratio to increase. Recently, very accurate models of impulsive noise were presented, based on the theory of symmetric /spl alpha/-stable probability density functions. We examine the performance of optimum receivers, designed to detect signals embedded in impulsive noise which is modeled as an infinite variance symmetric /spl alpha/-stable process, and compare it against the performance of several suboptimum receivers. As a measure of receiver performance, we compute an asymptotic expression for the probability of error for each receiver and compare it to the probability of error calculated by extensive Monte-Carlo simulation.

249 citations
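A minimal sketch of the setting in this abstract: symmetric alpha-stable noise generated with the Chambers-Mallows-Stuck method, and a clipping (limiter) front end compared against a plain linear correlator for antipodal signaling. This is an illustration of why the limiter helps in infinite-variance noise, not the paper's optimum receiver; alpha = 1.5, the clip level, and the block length are arbitrary assumptions.

```python
import numpy as np

def sas_noise(alpha, size, rng):
    # Chambers-Mallows-Stuck generator for symmetric alpha-stable samples (alpha != 1)
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(7)
trials, N, clip_level = 5000, 50, 3.0
bits = rng.choice([-1.0, 1.0], trials)
noise = sas_noise(1.5, (trials, N), rng)
r = bits[:, None] + noise                          # antipodal signal in impulsive noise
linear = np.sign(r.sum(axis=1))                    # correlator (optimal only for Gaussian noise)
limited = np.sign(np.clip(r, -clip_level, clip_level).sum(axis=1))
print((linear != bits).mean(), (limited != bits).mean())   # limiter error rate is far lower
```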


Journal ArticleDOI
TL;DR: In this article, the statistical data of fifty days' wind speed measurements at the MERC-solar site are used to find out the wind energy density and other wind characteristics with the help of the Weibull probability distribution function.

231 citations
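The wind-energy quantity in this abstract has a simple closed form: for Weibull-distributed speeds with shape k and scale c, the mean power density per unit area is 0.5 * rho * c^3 * Gamma(1 + 3/k). The sketch below checks that identity by Monte Carlo; the air density and the (k, c) values are hypothetical, not the MERC-solar site's fitted parameters.

```python
import math, random

def weibull_power_density(k, c, rho=1.225):
    # Mean wind power density (W/m^2): 0.5 * rho * E[v^3], with
    # E[v^3] = c^3 * Gamma(1 + 3/k) for Weibull(k, c) wind speeds
    return 0.5 * rho * c ** 3 * math.gamma(1 + 3 / k)

random.seed(3)
k, c = 2.0, 6.0       # hypothetical shape (-) and scale (m/s)
# inverse-CDF sampling: v = c * (-ln U)^(1/k), U uniform on (0, 1]
speeds = [c * (-math.log(1.0 - random.random())) ** (1 / k) for _ in range(200_000)]
mc = 0.5 * 1.225 * sum(v ** 3 for v in speeds) / len(speeds)
print(round(weibull_power_density(k, c)), round(mc))   # the two agree to Monte Carlo error
```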


Journal ArticleDOI
TL;DR: In this paper, the authors considered the ensemble of random symmetric n×n matrices specified by an orthogonal invariant probability distribution and showed that the normalized eigenvalue counting function of this ensemble converges in probability to a nonrandom limit as n→∞ and that this limiting distribution is the solution of a certain self-consistent equation.
Abstract: We consider the ensemble of random symmetric n×n matrices specified by an orthogonal invariant probability distribution. We treat this distribution as a Gibbs measure of a mean-field-type model. This allows us to show that the normalized eigenvalue counting function of this ensemble converges in probability to a nonrandom limit as n→∞ and that this limiting distribution is the solution of a certain self-consistent equation.

169 citations


Patent
01 Nov 1995
TL;DR: In this paper, a system and method for fusing independent measures of a physiological parameter uses a Kalman filter for each possible combination of sensor measurements, and a confidence calculator uses Bayesian statistical analysis to determine a confidence level for each of the Kalman filter outputs and selects a fused estimate for the physiological parameter based on the confidence level.
Abstract: A system and method for fusing independent measures of a physiological parameter uses a Kalman filter for each possible combination of sensor measurements. The Kalman filters utilize probability density functions of a nominal error contamination model and a prediction error model, as well as past estimates of the physiological parameter, to produce the Kalman filter outputs. A confidence calculator uses Bayesian statistical analysis to determine a confidence level for each of the Kalman filter outputs, and selects a fused estimate for the physiological parameter based on the confidence level. The fused estimate and the confidence level are used to dynamically update the nominal error contamination model and prediction error model to create an adaptive measurement system. The confidence calculator also takes into account the probability of artifactual error contamination in any or all of the sensor measurements. The system assumes a worst case analysis of the artifactual error contamination, thus producing a robust model able to adapt to any probability density function of the artifactual error and a priori probability of artifact.

163 citations
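The core update behind a fusion scheme like this patent's reduces, in the scalar case, to the standard Kalman/Bayesian combination of a Gaussian prior with a Gaussian measurement. The sketch below is only that textbook building block, with hypothetical heart-rate numbers; the patent's confidence calculator and artifact models are not reproduced.

```python
def kalman_update(mean, var, z, r):
    # Scalar Kalman (equivalently, conjugate Bayesian) update:
    # fuse prior N(mean, var) with a measurement z of variance r
    k = var / (var + r)              # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

# hypothetical physiological parameter (heart rate) fused from two sensors
prior_mean, prior_var = 70.0, 25.0
m, v = kalman_update(prior_mean, prior_var, 76.0, 4.0)    # accurate sensor dominates
m, v = kalman_update(m, v, 60.0, 100.0)                   # noisy sensor, small weight
print(round(m, 2), round(v, 2))   # 74.67 3.33 -- fused variance below either sensor's
```

Note how the fused variance (3.33) is smaller than both measurement variances (4 and 100): each sensor tightens the estimate in proportion to its confidence, which is the property the patent's Bayesian confidence weighting exploits.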


Journal ArticleDOI
TL;DR: In this paper, the mean area of isoconcentration surface per unit volume is studied through its transport equation derived by using the probability density function (pdf) formalism, which allows the value of the reactive scalar used to define the level surface to be treated as an independent variable.
Abstract: To characterize the dynamics and the physical properties of isoconcentration surfaces of a random reactive scalar field, an instantaneous isosurface quantity and its transport equation are introduced. For turbulent flows, the mean area of isoconcentration surface per unit volume is studied through its transport equation derived by using the probability density function (pdf) formalism. This approach allows the value of the reactive scalar used to define the level surface to be treated as an independent variable. It also leads to the definition of a surface density function. The developed formalism is applied to premixed turbulent combustion and a bridge is built between modeling approaches based on pdf, and others based on the flame surface concept.

159 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied the properties of the probability distribution function of the cosmological continuous density field and compared dynamically motivated methods to derive the PDF, based on the regularization of integrals.
Abstract: The properties of the probability distribution function of the cosmological continuous density field are studied. We present further developments and compare dynamically motivated methods to derive the PDF. One of them is based on the Zel'dovich approximation (ZA). We extend this method for arbitrary initial conditions, regardless of whether they are Gaussian or not. The other approach is based on perturbation theory with Gaussian initial fluctuations. We include the smoothing effects in the PDFs. We examine the relationships between the shapes of the PDFs and the moments. It is found that formally there are no moments in the ZA, but a way to resolve this issue is proposed, based on the regularization of integrals. A closed form for the generating function of the moments in the ZA is also presented, including the smoothing effects. We suggest methods to build PDFs out of the whole series of the moments, or out of a limited number of moments -- the Edgeworth expansion. The last approach gives us an alternative method to evaluate the skewness and kurtosis by measuring the PDF around its peak. We note a general connection between the generating function of moments for small r.m.s. $\sigma$ and the non-linear evolution of the overdense spherical fluctuation in the dynamical models. All these approaches have been applied in the 1D case where the ZA is exact, and simple analytical results are obtained. The 3D case is analyzed in the same manner and we find mutual agreement in the PDFs derived by different methods in the quasi-linear regime. A numerical CDM simulation was used to validate the accuracy of the considered approximations. We explain the successful log-normal fit of the PDF from that simulation at moderate $\sigma$ as mere fortune, but not as a universal form of the density PDF in general.

Proceedings ArticleDOI
06 Nov 1995
TL;DR: In this paper, a selection combining scheme for a RAKE receiver operating over a multipath fading channel is introduced, by which the m largest channel outputs are selected instead of only the largest one, as in the conventional selection combining receiver.
Abstract: A selection combining scheme for a RAKE receiver operating over a multipath fading channel is introduced, by which the m largest channel outputs are selected instead of only the largest one, as in the conventional selection combining receiver. Expressions for the error probability of this scheme for an exponential multipath intensity profile (MIP) with arbitrary decay constant are found by first deriving the joint density function of the m ordered channel outputs, then averaging the conditional error probability over the joint density function. The performance is compared with that of maximal ratio combining in both an interference-limited and a noise-limited environment. The interference-limited environment chosen is a multicell CDMA system. Numerical results show that the performance of the selection combining scheme is superior to that of conventional selection combining, and can be very close to that of maximal ratio combining, depending upon the value of m and the rate of decay of the MIP.
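The ordering this abstract reports (conventional selection combining below generalized m-branch selection, which in turn approaches maximal ratio combining) can be seen directly in the mean combined SNR. The sketch below assumes Rayleigh fading, so per-path SNRs are exponential with means following the exponential MIP; the decay constant, branch count, and m = 2 are arbitrary, and error probabilities are not computed.

```python
import numpy as np

rng = np.random.default_rng(11)
L, delta, trials = 6, 0.5, 100_000
mip = np.exp(-delta * np.arange(L))              # exponential multipath intensity profile
snr = rng.exponential(1.0, (trials, L)) * mip    # per-path SNRs under Rayleigh fading
ordered = np.sort(snr, axis=1)[:, ::-1]          # order the channel outputs
sc1 = ordered[:, 0].mean()                       # conventional selection combining (m = 1)
sc2 = ordered[:, :2].sum(axis=1).mean()          # generalized selection combining, m = 2
mrc = snr.sum(axis=1).mean()                     # maximal ratio combining (all paths)
print(sc1, sc2, mrc)                             # sc1 < sc2 < mrc
```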

Journal ArticleDOI
TL;DR: In this paper, a spatial correlation model is presented for modeling hydraulic conductivity in sand-shale or sand-clay formations or in fractured rocks, where the conductivities of the fractured and nonfractured rocks display dramatically different spatial structures.
Abstract: A spatial correlation model is presented for the case of a spatially distributed, bimodal attribute. This model can be used for modeling the hydraulic conductivity in sand-shale or sand-clay formations or in fractured rocks, where the conductivities of the fractured and nonfractured rocks display dramatically different spatial structures. In the proposed model each of the modes is defined by a different multivariate probability density function and correlation scale. A length scale other than the one specified for each mode is used to characterize the relative distribution of the modes in space. Effective conductivity and transport parameters are then defined and analyzed. In developing the transport parameters our goal is to see the effects of the different scales and the different modes on transport. Unlike the case of a unimodal distribution, the macrodispersion is not a linear function of the total variance of the population, and the relative contributions of the variabilities of the different modes are determined by the ratios between the various length scales. We found that the effects of the large-scale variability on longitudinal spread become significant only after a large travel distance, but that its contribution to lateral spread occurs at a relatively early travel time.

Journal ArticleDOI
TL;DR: Statistical models of the partial volume effect are incorporated into finite mixture densities in order to more accurately model the distribution of image pixel values; such modeling is shown to be useful when one of the materials is present in images mainly as a pixel component.
Abstract: Statistical models of partial volume effect for systems with various types of noise or pixel value distributions are developed and probability density functions are derived. The models assume either Gaussian system sampling noise or intrinsic material variances with Gaussian or Poisson statistics. In particular, a material can be viewed as having a distinct value that has been corrupted by additive noise either before or after partial volume mixing, or the material could have nondistinct values with a Poisson distribution as might be the case in nuclear medicine images. General forms of the probability density functions are presented for the N material cases and particular forms for two- and three-material cases are derived. These models are incorporated into finite mixture densities in order to more accurately model the distribution of image pixel values. Examples are presented using simulated histograms to demonstrate the efficacy of the models for quantification. Modeling of partial volume effect is shown to be useful when one of the materials is present in images mainly as a pixel component.

Journal ArticleDOI
TL;DR: The probability of the MUSIC algorithm resolving two spatially separated signal sources in the context of array processing is analyzed by formulating the resolution problem in the framework of statistical decision theory and directly determining the probability density function of the indefinite and singular quadratic form that defines the resolution event.
Abstract: The MUSIC algorithm is well known for its high-resolution capability, and various aspects of its statistical performance have been investigated. However, rigorous asymptotic analysis of one of its most important performance measures, the probability of resolution, is not available yet. We analyze the probability of the MUSIC algorithm resolving two spatially separated signal sources in the context of array processing. By formulating the resolution problem in the framework of statistical decision theory and directly determining the probability density function (PDF) of the indefinite and singular quadratic form that defines the resolution event, we arrive at an exact asymptotic formula for the probability of resolution. This is accomplished by a multistep procedure. Computer simulations have been performed to confirm the validity of the theory.

Journal ArticleDOI
TL;DR: It is shown that the Cauchy beamformer greatly outperforms the Gaussian beamformer in a wide variety of non-Gaussian noise environments, and performs comparably to the Gaussian beamformer when the additive noise is Gaussian.
Abstract: This paper introduces a new class of robust beamformers which perform optimally over a wide range of non-Gaussian additive noise environments. The maximum likelihood approach is used to estimate the bearing of multiple sources from a set of snapshots when the additive interference is impulsive in nature. The analysis is based on the assumption that the additive noise can be modeled as a complex symmetric /spl alpha/-stable (S/spl alpha/S) process. Transform-based approximations of the likelihood estimation are used for the general S/spl alpha/S class of distributions while the exact probability density function is used for the Cauchy case. It is shown that the Cauchy beamformer greatly outperforms the Gaussian beamformer in a wide variety of non-Gaussian noise environments, and performs comparably to the Gaussian beamformer when the additive noise is Gaussian. The Cramer-Rao bound for the estimation error variance is derived for the Cauchy case, and the robustness of the S/spl alpha/S beamformers in a wide range of impulsive interference environments is demonstrated via simulation experiments.

Journal ArticleDOI
TL;DR: In this paper, the authors address the empirical bandwidth choice problem in cases where the range of dependence may be virtually arbitrarily long and provide surprising evidence that, even for some strongly dependent data sequences, the asymptotically optimal bandwidth for independent data is a good choice.
Abstract: We address the empirical bandwidth choice problem in cases where the range of dependence may be virtually arbitrarily long. Assuming that the observed data derive from an unknown function of a Gaussian process, it is argued that, unlike more traditional contexts of statistical inference, in density estimation there is no clear role for the classical distinction between short- and long-range dependence. Indeed, the "boundaries" that separate different modes of behaviour for optimal bandwidths and mean squared errors are determined more by kernel order than by traditional notions of strength of dependence, for example, by whether or not the sum of the covariances converges. We provide surprising evidence that, even for some strongly dependent data sequences, the asymptotically optimal bandwidth for independent data is a good choice. A plug-in empirical bandwidth selector based on this observation is suggested. We determine the properties of this choice for a wide range of different strengths of dependence. Properties of cross-validation are also addressed.

Journal ArticleDOI
TL;DR: In this article, the role and limitations of geographical information systems (GISs) in scaling hydrological models over heterogeneous land surfaces are outlined, where the authors define scaling as the extension of small-scale process models, which may be directly parameterized and validated, to larger spatial extents.
Abstract: The roles and limitations of geographical information systems (GISs) in scaling hydrological models over heterogeneous land surfaces are outlined. Scaling is defined here as the extension of small-scale process models, which may be directly parameterized and validated, to larger spatial extents. A process computation can be successfully scaled if this extension can be carried out with minimal bias. Much of our understanding of land surface hydrological processes as currently applied within distributed models has been derived in conjunction with 'point' or 'plot' experiments, in which spatial variations and patterns of the controlling soil, canopy and meteorological factors are not defined. In these cases, prescription of model input parameters can be accomplished by direct observation. As the spatial extent is expanded beyond these point experiments to catchment or larger watershed regions, the direct extension of the point models requires an estimation of the distribution of the model parameters and process computations over the heterogeneous land surface. If the distribution of the set of spatial variables required for a given hydrological model (e.g. surface slope, soil hydraulic conductivity) can be described by a joint density function, f(x), where x = x_1, x_2, x_3, ... are the model variables, then a GIS may be evaluated as a tool for estimating this function. In terms of the scaling procedure, the GIS is used to replace direct measurement or sampling of f(x) as the area of simulation is increased beyond the extent over which direct sampling of the distribution is feasible. The question to be asked is whether current GISs and currently available spatial data sets are sufficient to adequately estimate these density functions.

Journal ArticleDOI
TL;DR: As special cases of Ricean fading, error probability for Rayleigh fading and non-fading channels are obtained which either match the results or complete the approximate derivations formerly known from the literature.
Abstract: The method used in Aghamohammadi and Meyr (1990) for finding the error probability of linearly modulated signals on Rayleigh frequency-flat fading channels has been applied to the more general case of Ricean fading. A signal received on a fading channel is subject to a multiplicative distortion (MD) and to the usual additive noise. Following a compensation of the MD, the signal provided to the detector may be thought to include only a single additive distortion term ("final noise"), which comprises the effects of the original additive noise, the MD, and the error in MD compensation. An exact expression for the probability density function of the final noise is derived. This allows calculation of error probability for arbitrary types of linear modulations. Results for many cases of interest are presented. Furthermore, as special cases of Ricean fading, error probability for Rayleigh fading and non-fading channels are obtained which either match the results or complete the approximate derivations formerly known from the literature.
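For the Rayleigh special case mentioned at the end of this abstract, coherent BPSK with ideal MD compensation has the well-known closed form Pb = 0.5 * (1 - sqrt(g / (1 + g))), with g the average SNR. The sketch below checks that formula by simulation; it assumes perfect channel knowledge at the detector (ideal MD compensation), which is the idealized limit rather than the paper's general Ricean analysis.

```python
import numpy as np

def rayleigh_bpsk_ber(snr_lin):
    # Closed-form average BPSK error probability on Rayleigh flat fading,
    # assuming ideal compensation of the multiplicative distortion (MD)
    return 0.5 * (1 - np.sqrt(snr_lin / (1 + snr_lin)))

rng = np.random.default_rng(5)
snr, n = 10.0, 1_000_000
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)   # E|h|^2 = 1
bits = rng.choice([-1.0, 1.0], n)
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
r = h * bits * np.sqrt(snr) + noise
decisions = np.sign((np.conj(h) * r).real)        # coherent detection after MD compensation
mc = (decisions != bits).mean()
print(mc, rayleigh_bpsk_ber(snr))                 # simulated vs closed form, both ~0.023
```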

Journal ArticleDOI
TL;DR: This paper characterizes situations where the fast rate is valid, and also gives rates for a variety of cases where they are slower; a modification of the usual variable window width estimator is proposed, which does have the earlier claimed rates of convergence.
Abstract: Variable window width kernel density estimators, with the width varying proportionally to the square root of the density, have been thought to have superior asymptotic properties. The rate of convergence has been claimed to be as good as those typical for higher-order kernels, which makes the variable width estimators more attractive because no adjustment is needed to handle the negativity usually entailed by the latter. However, in a recent paper, Terrell and Scott show that these results can fail in important cases. In this paper, we characterize situations where the fast rate is valid, and also give rates for a variety of cases where they are slower. In addition, a modification of the usual variable window width estimator is proposed, which does have the earlier claimed rates of convergence.
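The estimator class under discussion is the Abramson square-root-law variable-bandwidth KDE: a pilot fixed-bandwidth estimate supplies per-point widths h_i proportional to f_pilot(x_i)^(-1/2). The sketch below is a minimal implementation of that construction (not the paper's modified estimator); the pilot bandwidth and grid are arbitrary.

```python
import numpy as np

def adaptive_kde(data, grid, h0):
    # Abramson square-root-law KDE: pilot fixed-bandwidth estimate, then
    # per-point widths h_i = h0 * (g / f_pilot(x_i))^(1/2), g the geometric mean
    def gauss(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    pilot = gauss((data[:, None] - data[None, :]) / h0).mean(axis=1) / h0
    h = h0 * np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)
    return (gauss((grid[:, None] - data[None, :]) / h) / h).mean(axis=1)

rng = np.random.default_rng(2)
data = rng.standard_normal(800)
grid = np.linspace(-4, 4, 401)
f = adaptive_kde(data, grid, h0=0.4)
mass = f.sum() * (grid[1] - grid[0])   # Riemann sum of the estimate
print(mass)                            # ~1: each adaptive kernel still integrates to one
```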

Journal ArticleDOI
TL;DR: In this paper, the authors used a conditional statistical analysis technique to analyze floating potential fluctuations in the scrape-off layer and plasma edge of the floating potential signal and found that the fluctuations had a nearly Gaussian probability density function with the largest deviation from a Gaussian at the shear layer.
Abstract: Fluctuations in floating potential in the scrape‐off layer and plasma edge were analyzed using a conditional statistical analysis technique. The floating potential fluctuations had a nearly Gaussian probability density function with the largest deviation from a Gaussian at the shear layer. The conditional averaging technique followed the statistical evolution of selected conditions in the floating potential signal. The decay rate of a conditional feature in time or space showed a small systematic variation with the amplitude of condition chosen. Either long‐lived coherent structures are not present in statistically significant numbers, or the fluctuations are dominated by a large number of coherent structures with a nearly Gaussian distribution of fluctuation amplitudes, or conditional analysis using the amplitude of the floating potential as a condition is not a sensitive technique for identifying coherent structures.

Patent
23 Jun 1995
TL;DR: In this article, a Bayesian updating rule is employed to build a local posterior distribution for the primary variable at each simulated location, where the posterior distribution is the product of a Gaussian kernel function obtained by simple kriging of the primary variables and a secondary probability function obtained directly from a scatter diagram between primary and secondary variables.
Abstract: A multivariate stochastic simulation method maps a primary variable from a combination of sparse primary data and more densely sampled secondary data. The method is applicable when the relationship between the simulated primary variable and one or more secondary variables is non-linear. The method employs a Bayesian updating rule to build a local posterior distribution for the primary variable at each simulated location. The posterior distribution is the product of a Gaussian kernel function obtained by simple kriging of the primary variable and a secondary probability function obtained directly from a scatter diagram between primary and secondary variables.

Journal ArticleDOI
TL;DR: In this article, a second-order Lagrangian stochastic model for particle trajectories in low Reynolds number turbulence is presented, which satisfies a well-mixed constraint for the (hypothetical) case of stationary, homogeneous, isotropic turbulence in which the joint probability density function for the fixed point velocity and acceleration is Gaussian.
Abstract: We review Sawford’s [Phys. Fluids A 3, 1577 (1991)] second‐order Lagrangian stochastic model for particle trajectories in low Reynolds number turbulence, showing that it satisfies a well‐mixed constraint for the (hypothetical) case of stationary, homogeneous, isotropic turbulence in which the joint probability density function for the fixed‐point velocity and acceleration is Gaussian. We then extend the model to decaying homogeneous turbulence and, by optimizing model agreement with the measured spread of tracers in grid turbulence, estimate that Kolmogorov’s universal constant (C0) for the Lagrangian velocity structure function has the value of 3.0±0.5.

Journal ArticleDOI
TL;DR: In this paper, a probabilistic model of a dynamical system in terms of a measure instead of a density function is presented and the equation of motion for the cumulative function of this measure is derived and stationary solutions are constructed with the aid of deRham-type functional equations.
Abstract: Nonequilibrium stationary states are studied for a multibaker map, a simple reversible chaotic dynamical system. The probabilistic description is extended by representing a dynamical state in terms of a measure instead of a density function. The equation of motion for the cumulative function of this measure is derived and stationary solutions are constructed with the aid of de Rham-type functional equations. To select the physical states, the time evolution of the distribution under a fixed boundary condition is investigated for an open multibaker chain of scattering type. This system corresponds to a diffusive flow experiment through a slab of material. For long times, any initial distribution approaches the stationary one obeying Fick's law. At stationarity, the intracell distribution is singular in the stable direction and expressed by the Takagi function, which is continuous but has no finite derivatives. The result suggests that singular measures play an important role in the dynamical description of non-equilibrium states.

Journal ArticleDOI
TL;DR: In this paper, the exact stationary solution in terms of probability density function for a restricted class of nonlinear systems under both external and parametric non-normal delta-correlated processes is presented.
Abstract: In this paper the exact stationary solution in terms of probability density function for a restricted class of non-linear systems under both external and parametric non-normal delta-correlated processes is presented. This class has been obtained by imposing a given probability distribution and finding the corresponding dynamical system which satisfies the modified Fokker-Planck equation. The effectiveness of the results has been verified by means of a Monte Carlo simulation.

Journal ArticleDOI
TL;DR: In this paper, a stochastic analysis procedure is developed to examine the properties of chaotic roll motion and the capsize of ships subjected to periodic excitation with a random noise disturbance.

Journal ArticleDOI
TL;DR: In this paper, the authors derived closed-form expressions for the probability density function (pdf) of the scattered signal intensity for one, two, and three scatterers having arbitrary amplitudes.
Abstract: Complex radar targets are often modeled as a number of individual scattering elements randomly distributed throughout the spatial region containing the target. While it is known that as the number of scatterers grows large the distribution of the scattered signal power or intensity is asymptotically exponential, this is not true for a small number of scatterers. The authors study the statistics of measured power or intensity, and hence scattering cross section, resulting from a small number of constant amplitude scatterers each having a random phase. They derive closed-form expressions for the probability density function (pdf) of the scattered signal intensity for one, two, and three scatterers having arbitrary amplitudes. For n>3 scatterers, they derive expressions for the pdf when the individual scatterers have identical constant amplitudes and independent random phases; these expressions are Gram-Charlier type expansions with weighting functions determined by the asymptotic form of the intensity pdf for a large number of scatterers n. The Kolmogorov-Smirnov goodness-of-fit test is used to show that the series expansions are a good fit to empirical pdfs computed using Monte-Carlo simulation of targets made up of a small number of constant amplitude scatterers with random phase.
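The smallest nontrivial case in this abstract has a fully explicit answer: for two unit-amplitude scatterers with independent uniform phases, the intensity I = 2 + 2cos(phi1 - phi2) has pdf f(I) = 1 / (pi * sqrt(I * (4 - I))) on (0, 4), with CDF (2/pi) * arcsin(sqrt(I/4)) and mean n = 2. The Monte Carlo sketch below checks those two facts; it is an illustration of the two-scatterer pdf, not the paper's Gram-Charlier expansions for n > 3.

```python
import numpy as np

rng = np.random.default_rng(9)
n_scat, trials = 2, 500_000
phases = rng.uniform(0, 2 * np.pi, (trials, n_scat))
# coherent sum of unit-amplitude scatterers with independent random phases
intensity = np.abs(np.exp(1j * phases).sum(axis=1)) ** 2
# For two unit scatterers: f(I) = 1/(pi*sqrt(I*(4-I))) on (0,4), so the
# median is I = 2 (CDF there is (2/pi)*arcsin(sqrt(1/2)) = 1/2), and E[I] = 2.
print(intensity.mean(), (intensity < 2.0).mean())   # ~2 and ~0.5
```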

Journal ArticleDOI
TL;DR: In this paper, the interaction between partial discharge (PD) phenomena occurring in insulating systems is investigated by means of a 5-parameter additive Weibull distribution, which is fitted to PD height distributions obtained from measurements performed on specimens of stator bars and windings of ac rotating machines.
Abstract: The interaction between partial discharge (PD) phenomena occurring in insulating systems is investigated in this paper. In particular, the recognition of two PD phenomena simultaneously active is approached by means of a 5-parameter additive Weibull distribution. Various shapes of the PD height distribution, obtained from measurements performed on specimens of stator bars and windings of ac rotating machines, are considered. It is shown that the proposed probability function fits the partial discharge height distributions well. In this way the probability distribution relevant to each concurring PD phenomenon can be derived, analyzed and identified. Moreover, the standard average quantities are estimated for each phenomenon.

Journal ArticleDOI
TL;DR: In this paper, the refined similarity hypothesis of Kolmogorov [J. 13, 82 (1962)] is extended to a scalar field using measurements in a circular jet and the atmospheric surface layer.
Abstract: The refined similarity hypothesis of Kolmogorov [J. Fluid Mech. 13, 82 (1962)] is extended to a scalar field. These hypotheses are tested using measurements in a circular jet and the atmospheric surface layer. Over a significant part of the inertial range, statistics of the normalized stochastic variables for velocity and temperature indicate a dependence on the separation r. This dependence is also quantified through the probability density functions of the stochastic variables and the correlation between the velocity (or temperature) increment and the local energy (or temperature) dissipation rates. Probability density functions of the stochastic variables are conditioned on the local Reynolds number Rer based on r and the local energy dissipation rate. These functions depend on Rer when the latter is small and are approximately universal when Rer is very large. This behaviour is consistent with the refined similarity hypothesis. There is however a slight difference between the shapes of the conditional...