
Showing papers on "Probability density function published in 1996"


Journal ArticleDOI
TL;DR: The motivation for this work comes from a desire to preserve the dependence structure of the time series while bootstrapping (resampling it with replacement), and the method is data driven and is preferred where the investigator is uncomfortable with prior assumptions.
Abstract: A nonparametric method for resampling scalar or vector-valued time series is introduced. Multivariate nearest neighbor probability density estimation provides the basis for the resampling scheme developed. The motivation for this work comes from a desire to preserve the dependence structure of the time series while bootstrapping (resampling it with replacement). The method is data driven and is preferred where the investigator is uncomfortable with prior assumptions as to the form (e.g., linear or nonlinear) of dependence and the form of the probability density function (e.g., Gaussian). Such prior assumptions are often made in an ad hoc manner for analyzing hydrologic data. Connections of the nearest neighbor bootstrap to Markov processes as well as its utility in a general Monte Carlo setting are discussed. Applications to resampling monthly streamflow and some synthetic data are presented. The method is shown to be effective with time series generated by linear and nonlinear autoregressive models. The utility of the method for resampling monthly streamflow sequences with asymmetric and bimodal marginal probability densities is also demonstrated.
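
As a rough illustration of the resampling scheme, the sketch below resamples a scalar series by repeatedly jumping to the observed successor of one of the k nearest neighbors of the current value (a minimal lag-1 sketch; the function name, the choice of k, and the 1/j resampling kernel are illustrative assumptions rather than the paper's exact specification).

```python
import numpy as np

def nn_bootstrap(x, length, k=5, rng=None):
    """Nearest-neighbor bootstrap of a scalar time series (lag-1 sketch).

    From the current value, find its k nearest neighbors among the observed
    states, pick one with probability proportional to 1/j (j = neighbor rank),
    and append that neighbor's observed successor.  This preserves local
    dependence structure without assuming a parametric model.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    states, successors = x[:-1], x[1:]
    weights = 1.0 / np.arange(1, k + 1)
    weights /= weights.sum()

    current = rng.choice(states)
    out = [current]
    for _ in range(length - 1):
        nearest = np.argsort(np.abs(states - current))[:k]  # k nearest neighbors
        pick = rng.choice(nearest, p=weights)               # discrete resampling kernel
        current = successors[pick]
        out.append(current)
    return np.array(out)

# Example: resample a synthetic AR(1) series
rng = np.random.default_rng(0)
series = np.zeros(500)
for i in range(1, series.size):
    series[i] = 0.8 * series[i - 1] + rng.normal()
replicate = nn_bootstrap(series, length=series.size, k=7, rng=1)
```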

713 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian technique is proposed for retrieving the precipitation and vertical hydrometeor profiles from downward viewing radiometers, based on a prior probability density function of rainfall profiles; the technique is computationally much less expensive than previous profiling schemes and has been designed specifically to allow for tractability of assumptions.
Abstract: Presents a computationally simple technique for retrieving the precipitation and vertical hydrometeor profiles from downward viewing radiometers. The technique is computationally much less expensive than previous profiling schemes and has been designed specifically to allow for tractability of assumptions. In this paper, the emphasis is placed upon passive microwave applications, but the combination of passive with active microwave sensors, infrared sensors, or other a priori information can be adapted easily to the framework described. The technique is based upon a Bayesian approach. The authors use many realizations of the Goddard Cumulus Ensemble model to establish a prior probability density function of rainfall profiles. Detailed three-dimensional radiative transfer calculations are used to determine the upwelling brightness temperatures from the cloud model to establish the similarity of radiative signatures and thus the probability that a given profile is actually observed. In this study, the authors show that good results may be obtained by weighting profiles from the prior probability density function according to their deviation from the observed brightness temperatures. Examples of the retrieval results are shown for oceanic as well as land situations. Microwave data from the Advanced Microwave Precipitation Radiometer (AMPR) instrument are used to illustrate the retrieval structure results for high-resolution data while SSM/I is used to illustrate satellite applications. Simulations are performed to compare the expected retrieval performance of the SSM/I instrument with that of the upcoming TMI instrument aboard the Tropical Rainfall Measuring Mission (TRMM) to be launched in August 1997. These simulations show that correlations of ~0.77 may be obtained for 10-km retrievals of the integrated liquid water content based upon SSM/I channels. This correlation increases to ~0.90 for the same retrievals using the TMI channels and resolution. Due to the lack of quantitative validation data, hydrometeor profiles cannot be compared directly but are instead converted to an equivalent reflectivity structure and compared to existing radar observations where possible.
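
The weighting step described in the abstract can be sketched as follows (a minimal illustration only: it assumes a database of candidate hydrometeor profiles with pre-computed brightness temperatures and independent Gaussian observation errors per channel; the variable names and the diagonal error model are assumptions, not the paper's configuration).

```python
import numpy as np

def bayesian_profile_retrieval(tb_obs, tb_db, profiles_db, sigma):
    """Weight database profiles by their fit to the observed brightness temperatures.

    tb_obs      : (n_channels,) observed brightness temperatures
    tb_db       : (n_profiles, n_channels) simulated Tb for each database profile
    profiles_db : (n_profiles, n_levels) candidate hydrometeor profiles
    sigma       : (n_channels,) assumed observation-plus-modelling error (K)
    Returns the posterior-mean profile under the discrete prior database.
    """
    chi2 = np.sum(((tb_db - tb_obs) / sigma) ** 2, axis=1)
    w = np.exp(-0.5 * (chi2 - chi2.min()))   # subtract min(chi2) for numerical stability
    w /= w.sum()
    return w @ profiles_db
```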

484 citations


Journal ArticleDOI
TL;DR: In this paper, a detailed statistical study of the evolution of structure in a photoionized intergalactic medium (IGM) is performed, using analytical simulations to extend the calculation into the mildly non-linear density regime found to prevail at z = 3.
Abstract: We have performed a detailed statistical study of the evolution of structure in a photoionized intergalactic medium (IGM) using analytical simulations to extend the calculation into the mildly non-linear density regime found to prevail at z = 3. Our work is based on a simple fundamental conjecture: that the probability distribution function of the density of baryonic diffuse matter in the universe is described by a lognormal (LN) random field. The LN field has several attractive features and follows plausibly from the assumption of initial linear Gaussian density and velocity fluctuations at arbitrarily early times. Starting with a suitably normalized power spectrum of primordial fluctuations in a universe dominated by cold dark matter (CDM), we compute the behavior of the baryonic matter, which moves slowly toward minima in the dark matter potential on scales larger than the Jeans length. We have computed two models that succeed in matching observations. One is a non-standard CDM model with Omega=1, h=0.5 and \Gamma=0.3, and the other is a low density flat model with a cosmological constant (LCDM), with Omega=0.4, Omega_Lambda=0.6 and h=0.65. In both models, the variance of the density distribution function grows with time, reaching unity at about z=4, where the simulation yields spectra that closely resemble the Ly-alpha forest absorption seen in the spectra of high z quasars. The calculations also successfully predict the observed properties of the Ly-alpha forest clouds and their evolution from z=4 down to at least z=2, assuming a constant intensity for the metagalactic UV background over this redshift range. However, in our model the forest is not due to discrete clouds, but rather to fluctuations in a continuous intergalactic medium. (This is an abbreviated abstract; the complete abstract is included with the manuscript.)
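
The lognormal conjecture itself is easy to illustrate numerically: the sketch below builds a 1-D lognormal overdensity field from a Gaussian random field (the power-law spectrum, box size, and normalization are illustrative assumptions, not the CDM spectra used in the paper).

```python
import numpy as np

def lognormal_density_field(n=256, boxsize=1.0, spectral_index=-1.0, sigma=1.0, seed=0):
    """1-D lognormal overdensity field rho/rho_bar = exp(delta_G - sigma^2/2).

    A Gaussian field delta_G with the requested variance is generated in Fourier
    space from a power-law spectrum; exponentiating it gives a field that is
    positive by construction and has unit mean, as in the lognormal conjecture.
    """
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    power = np.zeros_like(k)
    power[1:] = k[1:] ** spectral_index                      # illustrative power-law spectrum
    modes = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.sqrt(power / 2)
    delta_g = np.fft.irfft(modes, n=n)
    delta_g *= sigma / delta_g.std()                          # set the requested variance
    return np.exp(delta_g - 0.5 * sigma ** 2)
```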

313 citations


Journal ArticleDOI
TL;DR: A method for defining a probability density function over a set of luminaires is presented that allows the direct lighting calculation to be carried out with a number of sample points that is independent of the number of luminaires.
Abstract: In a distributed ray tracer, the sampling strategy is the crucial part of the direct lighting calculation. Monte Carlo integration with importance sampling is used to carry out this calculation. Importance sampling involves the design of integrand-specific probability density functions that are used to generate sample points for the numerical quadrature. Probability density functions are presented that aid in the direct lighting calculation from luminaires of various simple shapes. A method for defining a probability density function over a set of luminaires is presented that allows the direct lighting calculation to be carried out with a number of sample points that is independent of the number of luminaires.
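
The key idea, one density over the whole set of luminaires, can be sketched as below (a minimal illustration: each luminaire's weight is a crude unoccluded point-light estimate of its contribution, which is an assumption for illustration and not one of the paper's shape-specific densities).

```python
import random

def sample_luminaire(luminaires, shading_point):
    """Pick one luminaire according to a PDF over the whole set.

    Each luminaire gets a weight proportional to a rough estimate of its
    contribution at the shading point; the direct-lighting estimate is then
    divided by the probability of the chosen luminaire, so the estimator stays
    unbiased while its cost is independent of the number of luminaires.
    Luminaires are dicts with "position" (x, y, z) and scalar "power".
    """
    def contribution(lum):
        d2 = sum((a - b) ** 2 for a, b in zip(lum["position"], shading_point))
        return lum["power"] / max(d2, 1e-8)          # crude 1/r^2 importance

    weights = [contribution(l) for l in luminaires]
    total = sum(weights)
    pdf = [w / total for w in weights]
    idx = random.choices(range(len(luminaires)), weights=pdf)[0]
    return luminaires[idx], pdf[idx]                 # caller divides its estimate by pdf[idx]
```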

251 citations


Patent
27 Sep 1996
TL;DR: In this paper, a statistical simulation of a semiconductor fabrication process is performed in parallel with the actual process using a Monte Carlo technique, a trial-and-error method using repeated calculations to determine a best solution to a problem.
Abstract: A statistical simulation of a semiconductor fabrication process is performed in parallel with the actual process. Input parameters derived from a probability density function are applied to the simulator which, in turn, simulates an actual fabrication process which is modeled as a probability density function. Each simulation step is repeated with a random seed value using a Monte Carlo technique, a trial-and-error method using repeated calculations to determine a best solution to a problem. The simulator generates an output in the form of a probability distribution. The statistical simulation uses single-step feedback in which a simulation run uses input parameters that are supplied or derived from actual in-line measured data. Output data generated by the simulator, both intermediate output structure data and WET data, are matched to actual in-line measured data in circumstances for which measured data is available. The probability density structure of the simulator is adjusted after each simulation step so that simulated data more closely matches in-line measured data.

229 citations


Proceedings ArticleDOI
03 Sep 1996
TL;DR: In this article, it is shown that generating realizations of the permeability field drawn from a probability density function conditioned on inaccurate pressure or saturation data is difficult, even if the unconditional realizations are Gaussian random fields, because the problem is highly nonlinear.
Abstract: Generating realizations of the permeability field drawn from a probability density function conditioned on inaccurate pressure or saturation data is difficult, even if the unconditional realizations are Gaussian random fields, because the problem is highly nonlinear. Inefficient methods that generate large numbers of rejected images, such as simulated annealing, must be ruled out as impractical because of the repeated need for reservoir flow simulation.

179 citations


Journal ArticleDOI
TL;DR: The instanton solution for the forced Burgers equation is found that describes the exponential tail of the probability distribution function of velocity differences in the region where shock waves are absent; that is, for large positive velocity differences.
Abstract: The instanton solution for the forced Burgers equation is found. This solution describes the exponential tail of the probability distribution function of velocity differences in the region where shock waves are absent; that is, for large positive velocity differences. The results agree with those found recently by Polyakov, who used the operator product conjecture. If this conjecture is true, then our WKB asymptotics of the Wyld functional integral should be exact to all orders of perturbation expansion around the instanton solution. We also generalize our solution to an arbitrary dimension of the Burgers (KPZ) equation. As a result, we find the asymptotics of the angular dependence of the velocity difference probability distribution function. © 1996 The American Physical Society.

158 citations


Journal ArticleDOI
TL;DR: The method for finding the non-Gaussian tails of the probability distribution function for solutions of a stochastic differential equation, such as the convection equation for a passive scalar, the random driven Navier-Stokes equation, etc, is described.
Abstract: We describe the method for finding the non-Gaussian tails of the probability distribution function (PDF) for solutions of a stochastic differential equation, such as the convection equation for a passive scalar, the random driven Navier-Stokes equation, etc. The existence of such tails is generally regarded as a manifestation of the intermittency phenomenon. Our formalism is based on the WKB approximation in the functional integral for the conditional probability of large fluctuation. We argue that the main contribution to the functional integral is given by a coupled field-force configuration: the instanton. As an example, we examine the correlation functions of the passive scalar u advected by a large-scale velocity field δ-correlated in time. We find the instanton determining the tails of the generating functional, and show that it is different from the instanton that determines the probability distribution function of high powers of u. We discuss the simplest instantons for the Navier-Stokes equation. © 1996 The American Physical Society.

153 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce two new methods to obtain reliable velocity field statistics from N-body simulations, or indeed from any general density and velocity fluctuation field sampled by discrete points, which are based on the use of the Voronoi and Delaunay tessellations of the point distribution defined by the locations at which the velocity field is sampled.
Abstract: We introduce two new methods to obtain reliable velocity field statistics from N-body simulations, or indeed from any general density and velocity fluctuation field sampled by discrete points. These methods, the Voronoi tessellation method and the Delaunay tessellation method, are based on the use of the Voronoi and Delaunay tessellations of the point distribution defined by the locations at which the velocity field is sampled. In the Voronoi method the velocity is supposed to be uniform within the Voronoi polyhedra, whereas the Delaunay method constructs a velocity field by linear interpolation between the four velocities at the locations defining each Delaunay tetrahedron. The most important advantage of these methods is that they provide an optimal estimator for determining the statistics of volume-averaged quantities, as opposed to the available numerical methods that mainly concern mass-averaged quantities. As the major share of the related analytical work on velocity field statistics has focused on volume-averaged quantities, the availability of appropriate numerical estimators is of crucial importance for checking the validity of the analytical perturbation calculations. In addition, it allows us to study the statistics of the velocity field in the highly non-linear clustering regime. Specifically, we describe in this paper how to estimate, in both the Voronoi and the Delaunay methods, the value of the volume-averaged expansion scalar theta = H^{-1} div(v) (the divergence of the peculiar velocity, expressed in units of the Hubble constant H), as well as the value of the shear and the vorticity of the velocity field, at an arbitrary position. The evaluation of these quantities on a regular grid leads to an optimal estimator for determining the probability distribution function (PDF) of the volume-averaged expansion scalar, shear and vorticity. Although in most cases both the Voronoi and the Delaunay methods lead to equally good velocity field estimates, the Delaunay method may be slightly preferable. In particular, it performs considerably better at small radii; it is more CPU-time intensive, but its memory requirement is almost a factor of 8 lower than that of the Voronoi method. As a test, we apply our estimator to an N-body simulation of structure formation. The PDFs determined from the simulations agree very well with the analytical predictions. An important benefit of the present method is that, unlike previous methods, it is capable of probing accurately the velocity field statistics in regions of very low density, which in N-body simulations are typically sparsely sampled. In a forthcoming paper we will apply the newly developed tool to a plethora of structure formation scenarios, of both Gaussian and non-Gaussian initial conditions, in order to see to what extent the velocity field PDFs are sensitive discriminators, highlighting fundamental physical differences between the scenarios.
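
The Delaunay step lends itself to a compact sketch with standard library tools (a minimal 2-D illustration: scipy's Delaunay-based linear interpolation of the sampled velocities onto a regular grid, followed by a finite-difference divergence; the 2-D restriction, grid size, and boundary handling are simplifications of the 3-D method in the paper).

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def delaunay_divergence(points, velocities, ngrid=64):
    """Volume-sampled velocity divergence from scattered (point, velocity) samples.

    points     : (n, 2) sample positions
    velocities : (n, 2) velocities at those positions
    The velocity field is interpolated linearly inside each Delaunay triangle
    (LinearNDInterpolator triangulates the points internally), evaluated on a
    regular grid, and differentiated by central differences, so sampling the
    grid gives volume-weighted rather than mass-weighted statistics.
    Grid points outside the convex hull of the samples evaluate to NaN.
    """
    xi = np.linspace(points[:, 0].min(), points[:, 0].max(), ngrid)
    yi = np.linspace(points[:, 1].min(), points[:, 1].max(), ngrid)
    gx, gy = np.meshgrid(xi, yi, indexing="ij")

    vx = LinearNDInterpolator(points, velocities[:, 0])(gx, gy)
    vy = LinearNDInterpolator(points, velocities[:, 1])(gx, gy)

    dvx_dx = np.gradient(vx, xi, axis=0)
    dvy_dy = np.gradient(vy, yi, axis=1)
    return dvx_dx + dvy_dy   # histogram these grid values to estimate the PDF of theta
```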

126 citations


Proceedings ArticleDOI
11 Dec 1996
TL;DR: In this article, a probabilistic approach to the robustness analysis of control systems affected by bounded uncertainty is presented, focusing on the problem of bounding the number of samples required to estimate, with a prescribed accuracy and confidence, the probability that a given performance level is attained.
Abstract: In this paper, we study robustness analysis of control systems affected by bounded uncertainty. Motivated by the difficulty of performing this analysis when the uncertainty enters the plant coefficients in a nonlinear fashion, we study a probabilistic approach. In this setting, the uncertain parameters q are random variables bounded in a set Q and described by a multivariate density function f(q). We then ask the following question: Given a performance level, what is the probability that this level is attained? The main content of this paper is to derive explicit bounds for the number of samples required to estimate this probability with a certain accuracy and confidence specified a priori. It is shown that the number obtained is inversely proportional to these thresholds and is much smaller than that of classical results. Finally, we remark that the same approach can be used to study several problems in a control system context. For example, we can evaluate the worst-case H∞ norm of the sensitivity function or compute μ when the robustness margin is of concern.
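
The flavor of such explicit sample bounds can be conveyed with the standard additive Chernoff (Hoeffding) bound, N ≥ ln(2/δ) / (2ε²), which does not depend on the number of uncertain parameters (a generic bound of this type offered for illustration; the constants in the paper's own bound may differ).

```python
import math

def chernoff_sample_size(epsilon, delta):
    """Samples of q needed so that the empirical probability of meeting the
    performance level is within +/- epsilon of the true probability with
    confidence at least 1 - delta (additive Chernoff/Hoeffding bound)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

# Accuracy 0.01 with 99.9% confidence needs about 38,000 samples,
# independently of how many uncertain parameters enter the plant.
print(chernoff_sample_size(0.01, 0.001))
```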

112 citations


Journal ArticleDOI
TL;DR: A probabilistic-analysis methodology is described that provides quantitative measures of alerting-system performance, including the probabilities of a false alarm and missed detection, and can be used in a variety of vehicle, transportation-system, and process-control applications.
Abstract: A probabilistic-analysis methodology is described that provides quantitative measures of alerting-system performance, including the probabilities of a false alarm and missed detection. As part of the approach, the alerting decision is recast as a signal-detection problem, and system operating-characteristic curves are introduced to describe the tradeoffs between alerting-threshold placement and system performance. The methodology fills the need for a means to determine appropriate alerting thresholds and to quantify the potential benefits that are possible through changes in the design of the system. Because the methodology is developed in a generalized manner, it can be used in a variety of vehicle, transportation-system, and process-control applications. The methodology is demonstrated through an example application to the Traffic Alert and Collision Avoidance System (TCAS). Recent changes in TCAS alerting thresholds are shown to reduce the probability of a false alarm in situations known to produce frequent nuisance alerts in actual operations. Nomenclature: A, N, T = probabilistic-state trajectories; E = event of encountering a hazard; f_x(x) = probability density function for the random variable x; h = estimated relative altitude; ḣ = estimated relative-altitude rate; I = incident event; P_malf = probability of alerting-system malfunction; P(x) = probability of event x; P_T(x) = probability of event x evaluated along trajectory T; r = estimated relative range; ṙ = estimated relative-range rate
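
The system operating-characteristic idea can be sketched by simple Monte Carlo (a minimal illustration: measurement values for hazardous and non-hazardous encounters are drawn from two placeholder Gaussian distributions and the alerting threshold is swept; those distributions are assumptions for illustration and are unrelated to the TCAS models in the paper).

```python
import numpy as np

def soc_curve(hazard_samples, safe_samples, thresholds):
    """Probability of false alarm vs. missed detection as the threshold is swept.

    hazard_samples : alerting-metric values from encounters that lead to the hazard
    safe_samples   : alerting-metric values from encounters that do not
    An alert is issued whenever the metric exceeds the threshold.
    """
    p_fa = np.array([(safe_samples > t).mean() for t in thresholds])     # false alarms
    p_md = np.array([(hazard_samples <= t).mean() for t in thresholds])  # missed detections
    return p_fa, p_md

rng = np.random.default_rng(0)
hazard = rng.normal(3.0, 1.0, 100_000)   # placeholder hazard-metric distributions
safe = rng.normal(0.0, 1.0, 100_000)
p_fa, p_md = soc_curve(hazard, safe, thresholds=np.linspace(-2.0, 6.0, 81))
```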

Journal ArticleDOI
TL;DR: In this paper, a two-dimensional (x, z) Lagrangian stochastic dispersion model is presented that is correct (i.e. fulfils the well-mixed condition) for neutral to convective conditions.
Abstract: A two-dimensional (x, z) Lagrangian stochastic dispersion model is presented that is correct (i.e. fulfils the well-mixed condition) for neutral to convective conditions. The probability density function (pdf) of the particle velocities is constructed as a weighted sum of a neutral pdf (u and w jointly Gaussian) and a convective pdf (w skewed, u and w uncorrelated). The transition function ℱ varies continuously with stability and therefore ensures that the model results are not confined to a finite number of stability classes. The model is described in full detail and some sensitivity tests are presented. In particular, the role of Co, the universal constant in the Lagrangian structure function for the inertial subrange, in determining average plume characteristics is discussed. Furthermore, the evolution of average plume-height and -width is investigated for different boundary-layer stabilities ranging from ideally neutral to fully convective. Finally, the model is applied to the situation of a tracer experiment in Copenhagen, and it is shown that the measured surface-concentrations can be well simulated.
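
The weighted-sum construction of the velocity pdf can be written down directly (a minimal sketch: the neutral part is a correlated bivariate Gaussian in (u, w) and the convective part uses a two-Gaussian skewed pdf for w with u independent; the transition value F and every numerical parameter below are placeholders, not the stability-dependent forms derived in the paper).

```python
import numpy as np

def mixture_velocity_pdf(u, w, F, sig_u=1.0, sig_w=0.5, rho=-0.3,
                         w1=-0.4, s1=0.4, w2=0.8, s2=0.7, a=0.67):
    """pdf(u, w) = (1 - F) * p_neutral(u, w) + F * p_convective(u, w).

    p_neutral    : u and w jointly Gaussian with correlation rho
    p_convective : w skewed (two-Gaussian mixture), u Gaussian and uncorrelated with w
    F in [0, 1]  : transition weight (0 = neutral, 1 = fully convective)
    """
    def gauss(x, m, s):
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

    # Neutral part: correlated bivariate Gaussian.
    q = (u ** 2 / sig_u ** 2 - 2 * rho * u * w / (sig_u * sig_w)
         + w ** 2 / sig_w ** 2) / (1.0 - rho ** 2)
    p_neutral = np.exp(-0.5 * q) / (2.0 * np.pi * sig_u * sig_w * np.sqrt(1.0 - rho ** 2))

    # Convective part: skewed w, independent Gaussian u.
    p_w = a * gauss(w, w1, s1) + (1.0 - a) * gauss(w, w2, s2)
    p_convective = gauss(u, 0.0, sig_u) * p_w

    return (1.0 - F) * p_neutral + F * p_convective
```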

PatentDOI
TL;DR: The invention provides for an automatic selection of the best model in real time, wherein the best fit is determined by a voting process, and the memory requirements for the speech recognition apparatus and method are significantly reduced.
Abstract: Apparatus and method for improving the speed and accuracy of recognition of speech dialects, or speech transferred via dissimilar channels, is described. The invention provides multiple models tailored to specific segments or dialects, and/or speech channels, of the population. However, there is not a proportional increase in recognition time or computing power or computing resources needed. Probability density functions for the acoustic descriptors are provided which are shared among the various models. Since there is a common pool of probability density functions which are mapped or pointed to for the different acoustic descriptors for each different dialect or speech channel model, the memory requirements for the speech recognition apparatus and method are significantly reduced. Each model is comprised of triphonemes which are modelled by discrete probability distribution functions forming hidden Markov models or statistical word models. Any one probability density function is assigned or mapped to many different triphonemes in many different dialects or different models. The invention provides for an automatic selection of the best model in real time wherein the best fit is determined by a voting process.

Journal ArticleDOI
TL;DR: In this article, the first and second moments of the storage state distribution are derived in terms of the moments of the inflow distribution for specified drafts, using a non-linear solver.

Journal ArticleDOI
TL;DR: This paper discusses the spanning percolation probability function for three different spanning rules, in general dimensions, with both free and periodic boundary conditions, and finds strong relations among different derivatives of the spanning function with respect to the scaling variables, thus yielding several universal amplitude ratios.
Abstract: We discuss the spanning percolation probability function for three different spanning rules, in general dimensions, with both free and periodic boundary conditions. Our discussion relies on the renormalization group theory, which indicates that, apart from a few scale factors, the scaling functions for the spanning probability are determined by the fixed point and therefore are universal for every system with the same dimensionality, spanning rule, aspect ratio, and boundary conditions. For square and rectangular systems, we combine this theory with simple relations between the spanning rules and with duality arguments, and find strong relations among different derivatives of the spanning function with respect to the scaling variables, thus yielding several universal amplitude ratios and allowing a systematic study of the corrections to scaling, both singular and analytic, in the system size. The theoretical predictions are numerically confirmed with excellent accuracy. © 1996 The American Physical Society.

Journal ArticleDOI
TL;DR: An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights.
Abstract: An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
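
In one dimension the construction reduces to the familiar infomax recipe: maximizing the output entropy of y = sigmoid(w x + b) pushes sigmoid(w x + b) toward the CDF of x, so |w| sigmoid'(w x + b) is the implied density estimate (a minimal 1-D sketch with plain gradient ascent; the multilayer network, Gaussian nonlinearity, and Newton-based estimators described in the paper are not reproduced here).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def infomax_density_1d(x, steps=2000, lr=0.05):
    """Fit y = sigmoid(w*x + b) by maximizing the output entropy.

    H(y) = H(x) + E[log(|w| * sigmoid'(w*x + b))], so gradient ascent on the
    second term drives sigmoid(w*x + b) toward the CDF of x; the returned
    function p_hat(t) = |w| * sigmoid'(w*t + b) is the density estimate.
    """
    w, b = 1.0, 0.0
    for _ in range(steps):
        s = sigmoid(w * x + b)
        grad_w = 1.0 / w + np.mean((1.0 - 2.0 * s) * x)   # d/dw of log|w| + E[log sigmoid'(wx+b)]
        grad_b = np.mean(1.0 - 2.0 * s)
        w += lr * grad_w
        b += lr * grad_b
    return lambda t: np.abs(w) * sigmoid(w * t + b) * (1.0 - sigmoid(w * t + b))

# Example: logistic-distributed data, for which the sigmoid CDF is exact
data = np.random.default_rng(0).logistic(loc=2.0, scale=0.5, size=5000)
p_hat = infomax_density_1d(data)
```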

Journal ArticleDOI
TL;DR: In this paper, the influence of the tail weight of the error distribution is addressed in the setting of choosing threshold and truncation parameters, and different approaches to correction for stochastic design are suggested.
Abstract: Concise asymptotic theory is developed for non-linear wavelet estimators of regression means, in the context of general error distributions, general designs, general normalizations in the case of stochastic design, and non-structural assumptions about the mean. The influence of the tail weight of the error distribution is addressed in the setting of choosing threshold and truncation parameters. Mainly, the tail weight is described in an extremely simple way, by a moment condition; previous work on this topic has generally imposed the much more stringent assumption that the error distribution be normal. Different approaches to correction for stochastic design are suggested. These include conventional kernel estimation of the design density, in which case the interaction between the smoothing parameters of the non-linear wavelet estimator and the linear kernel method is described.

Journal ArticleDOI
TL;DR: The nonlinear subsystem of a Hammerstein system is identified, i.e., its characteristic is recovered from input-output observations of the whole system; the identified characteristic satisfies a piecewise Lipschitz condition only.
Abstract: The nonlinear subsystem of a Hammerstein system is identified, i.e., its characteristic is recovered from input-output observations of the whole system. The input and disturbance are white stochastic processes. The identified characteristic satisfies a piecewise Lipschitz condition only. Algorithms presented in the paper are calculated from ordered input-output observations, i.e., from pairs of observations arranged in a sequence in which input measurements increase in value. The mean integrated square error converges to zero as the number of observations tends to infinity. Convergence rates are insensitive to the shape of the probability density of the input signal. Results of numerical simulation are also shown.

Journal ArticleDOI
TL;DR: In this article, four nonparametric estimates of the mode of a density function are investigated: two are defined from a global (respectively, a local) kernel density estimate, while the other two are defined from the local kernel estimate of the first derivative of the density function.
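
Two of these estimator types can be sketched directly from a kernel density estimate (a hedged sketch: one takes the maximizer of a Gaussian KDE on a grid, the other locates a downward sign change of the estimated derivative; the bandwidth rule and kernel are simple defaults, not the choices analyzed in the paper).

```python
import numpy as np

def kde_mode(x, bandwidth=None, grid_size=512):
    """Mode estimate: maximizer of a Gaussian kernel density estimate on a grid."""
    x = np.asarray(x, dtype=float)
    h = bandwidth or 1.06 * x.std() * x.size ** (-1 / 5)      # Silverman-type default
    grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, grid_size)
    dens = np.exp(-0.5 * ((grid[:, None] - x) / h) ** 2).sum(axis=1)
    return grid[np.argmax(dens)]

def derivative_mode(x, bandwidth=None, grid_size=512):
    """Mode estimate: first downward zero crossing of the estimated density derivative."""
    x = np.asarray(x, dtype=float)
    h = bandwidth or 1.06 * x.std() * x.size ** (-1 / 5)
    grid = np.linspace(x.min(), x.max(), grid_size)
    u = (grid[:, None] - x) / h
    d_dens = (-u * np.exp(-0.5 * u ** 2)).sum(axis=1)          # proportional to KDE derivative
    crossings = np.where(np.diff(np.sign(d_dens)) < 0)[0]      # + to - : local maximum
    return grid[crossings[0]] if crossings.size else grid[np.argmin(np.abs(d_dens))]
```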

Journal ArticleDOI
TL;DR: In this article, the authors proposed a modification to the slotting technique that results in a much lower statistical variance at small lag times, enabling a direct estimation of the Taylor microscale from the curvature of the acf at zero lag time.
Abstract: The autocorrelation function of turbulent velocity fluctuations can be estimated from randomly sampled laser Doppler anemometry (LDA) data using the slotting technique. However, the autocorrelation function (acf) obtained in this way suffers from a relatively high statistical variance. This paper proposes a modification to the slotting technique that results in a much lower statistical variance at small lag times. The modification enables a direct estimation of the Taylor microscale from the curvature of the acf at zero lag time. The potential of the modified slotting technique for the estimation of spectral density functions is also explored. It is shown that the modified slotting technique in conjunction with a variable window forms a powerful spectral estimator for low data density flows.
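
The basic (unmodified) slotting estimator underlying the paper is easy to state in code (a minimal sketch: lagged products are accumulated in discrete lag-time slots and normalized by the slot counts; the variance-reducing modification and the variable spectral window proposed in the paper are not reproduced here).

```python
import numpy as np

def slotted_acf(t, u, max_lag, slot_width):
    """Slotting estimate of the autocorrelation function from randomly sampled data.

    t : sample arrival times (irregular, e.g. LDA burst arrivals)
    u : velocity fluctuations at those times (mean already removed)
    Every pair (i, j) with 0 <= t_j - t_i < max_lag contributes u_i * u_j to the
    slot containing its lag; each slot is normalized by its pair count, and the
    result is normalized by the zero-lag slot to give rho(k * slot_width).
    """
    n_slots = int(np.ceil(max_lag / slot_width))
    sums = np.zeros(n_slots)
    counts = np.zeros(n_slots)
    for i in range(len(t)):
        lags = t[i:] - t[i]
        mask = lags < max_lag
        k = (lags[mask] / slot_width).astype(int)
        np.add.at(sums, k, u[i] * u[i:][mask])
        np.add.at(counts, k, 1)
    acf = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return acf / acf[0]
```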

BookDOI
01 Jan 1996
Abstract: Problems in the Foundation of the Path Probability Method: Path Probability Function and Entropy Production (R. Kikuchi). Ising Model and Kinetic Mean Field Theories (F. Ducastelle). Kinetic Path and Fluctuation Calculated by the PPM (T. Mohri). Time Development of Fluctuations in the PPM (K. Wada). Universal Dynamic Response in Solid Electrolytes: Formalism of the Path Probability Method as Applied to Transport Properties (H. Sato). Cluster Variational Approach to Nonequilibrium Lattice Models (M. Katori). Coherent-Anomaly Method and Its Applications (M. Suzuki). Cluster Variation and Cluster Static (D. de Fontaine). The Cluster Variation Method, the Effective Field Method and the Method of Integral Equation on the Regular and Random Ising Model (S. Katsura). CVM, CCM, and QCVM (T. Morita). The Cluster Expansion Method (J.M. Sanchez). Lattice Models and Cluster Expansions for the Prediction of Oxide Phase Diagrams and Defect Arrangements (G. Ceder). Diffuse Scattering of Neutrons in Ni3V and Pt3V: Test of the Gamma Expansion Method Approximation in a Degenerate Case (R. Caudron). Cluster Variation Method Applications to Large Ising Aggregates (F. Aguilera-Granja). Thermodynamic Properties of Coherent Interphase Boundaries in fcc Substitutional Alloys (M. Asta). 8 additional articles. Index.

Journal ArticleDOI
TL;DR: In this paper, the exact density of the difference of two linear combinations of independent noncentral chi-square variables is obtained in terms of Whittaker's function and expressed in closed forms.
Abstract: The exact density of the difference of two linear combinations of independent noncentral chi-square variables is obtained in terms of Whittaker's function and expressed in closed forms. Two distinct representations are required in order to cover all the possible cases. The corresponding expressions for the exact distribution function are also given.

Journal ArticleDOI
TL;DR: New upper and lower bounds on the minimum probability of error of Bayesian decision systems for the two-class problem are presented, making them tighter than any previously known bounds.
Abstract: This paper presents new upper and lower bounds on the minimum probability of error of Bayesian decision systems for the two-class problem. These bounds can be made arbitrarily close to the exact minimum probability of error, making them tighter than any previously known bounds.
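
For a concrete two-class check, the exact minimum probability of error that such bounds must bracket can be computed numerically (a minimal sketch for two 1-D Gaussian class-conditional densities; the paper's own bounds are not reproduced here).

```python
import numpy as np

def bayes_error_two_gaussians(mu0, s0, mu1, s1, prior0=0.5, n=200_001, span=10.0):
    """Exact minimum probability of error for two 1-D Gaussian classes.

    P_e = integral of min(prior0 * p0(x), (1 - prior0) * p1(x)) dx,
    evaluated here by a fine Riemann sum.  Any valid upper or lower bound on
    the Bayes error must bracket this value.
    """
    lo = min(mu0 - span * s0, mu1 - span * s1)
    hi = max(mu0 + span * s0, mu1 + span * s1)
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    p0 = np.exp(-0.5 * ((x - mu0) / s0) ** 2) / (s0 * np.sqrt(2 * np.pi))
    p1 = np.exp(-0.5 * ((x - mu1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
    return np.minimum(prior0 * p0, (1 - prior0) * p1).sum() * dx

print(bayes_error_two_gaussians(0.0, 1.0, 2.0, 1.0))   # ~0.1587 for unit-variance classes at 0 and 2
```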

Journal ArticleDOI
TL;DR: Log compression of A lines to produce B-scan images in clinical ultrasound imaging systems is a standard procedure for controlling the dynamic range of the images; the analysis shows that the mean of the log-compressed envelope is an increasing function of both the backscattered energy and the effective scatterer density.
Abstract: Log compression of A lines to produce B‐scan images in clinical ultrasound imaging systems is a standard procedure to control the dynamic range of the images. The statistics of such compressed images in terms of underlying scatterer statistics have not been derived. The statistics are analyzed for partially formed speckle using a general K distribution model of envelope statistics to derive the density function for the log‐compressed envelope. This density function is used to elucidate the relation between the moments of the compressed envelope, the compression parameters, and the statistics of the scatterers. The analysis shows that the mean of the log‐compressed envelope is an increasing function of both the backscattered energy and the effective scatterer density. The variance of the log‐compressed envelope is a decreasing function of the effective scatterer density and is independent of the backscattered energy.

Journal ArticleDOI
TL;DR: In this article, necessary and sufficient conditions are established for the density function to be an increasing function of a density function, and for the log-concavity of integrals of this density function.

Journal ArticleDOI
TL;DR: In this paper, it is argued that an interpretation in terms of collapse of nonlinear lower hybrid waves seems inapplicable, at least in its simplest form, because it provides a timescale and a length scale which are not in agreement with observations.
Abstract: Localized electrostatic wave packets in the frequency region of lower hybrid waves have been detected by the instruments on the FREJA satellite. These waves are often associated with local density depletions, indicating that the structures can be interpreted as wave-filled cavities. The basic features of the observations are discussed. On the basis of simple statistical arguments, an attempt is made to present some characteristics that have to be accommodated within an ultimate theory describing the observed wave phenomena. An interpretation in terms of collapse of nonlinear lower hybrid waves is discussed in particular. It is argued that such a model seems inapplicable, at least in its simplest form, because it provides a timescale and a length scale which are not in agreement with observations. Alternatives to this model are presented.

Journal ArticleDOI
TL;DR: In this article, a stochastic approach is presented for the analysis of nonchaotic, chaotic, random and nonchaotic, random and chaotic, and random dynamics of a nonlinear system, utilizing a Markov process approximation, direct numerical simulations, and a generalized stochastic Melnikov process.
Abstract: This study presents a stochastic approach for the analysis of nonchaotic, chaotic, random and nonchaotic, random and chaotic, and random dynamics of a nonlinear system. The analysis utilizes a Markov process approximation, direct numerical simulations, and a generalized stochastic Melnikov process. The Fokker-Planck equation along with a path integral solution procedure are developed and implemented to illustrate the evolution of probability density functions. Numerical integration is employed to simulate the noise effects on nonlinear responses. In regard to the presence of additive ideal white noise, the generalized stochastic Melnikov process is developed to identify the boundary for noisy chaos. A mathematical representation encompassing all possible dynamical responses is provided. Numerical results indicate that noisy chaos is a possible intermediate state between deterministic and random dynamics. A global picture of the system behavior is demonstrated via the transition of probability density function over its entire evolution. It is observed that the presence of external noise has significant effects over the transition between different response states and between co-existing attractors.
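
The evolution of the response probability density can also be followed by direct simulation, the 'direct numerical simulations' ingredient mentioned in the abstract (a minimal sketch: an Euler-Maruyama ensemble for a noisy, periodically forced Duffing-type oscillator, with the displacement PDF estimated by histogram at a few output times; all coefficients are illustrative, not those of the study).

```python
import numpy as np

def duffing_pdf_evolution(n_paths=5000, t_end=40.0, dt=0.005,
                          delta=0.25, alpha=-1.0, beta=1.0,
                          gamma=0.3, omega=1.0, noise=0.2,
                          snapshots=(0.0, 10.0, 20.0, 40.0)):
    """Ensemble Euler-Maruyama integration of
        x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t) + noise*xi(t),
    returning histogram estimates of the displacement PDF at the requested times."""
    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 0.1, n_paths)          # initial ensemble of displacements
    v = np.zeros(n_paths)
    bins = np.linspace(-3.0, 3.0, 121)
    snap_steps = {int(round(s / dt)): s for s in snapshots}
    pdfs = {}
    for step in range(int(t_end / dt) + 1):
        if step in snap_steps:
            pdfs[snap_steps[step]] = np.histogram(x, bins=bins, density=True)[0]
        a = (-delta * v - alpha * x - beta * x ** 3
             + gamma * np.cos(omega * step * dt))
        v = v + a * dt + noise * np.sqrt(dt) * rng.normal(size=n_paths)
        x = x + v * dt
    return bins, pdfs

bins, pdfs = duffing_pdf_evolution()   # pdfs[t] is the estimated displacement PDF at time t
```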

Journal ArticleDOI
TL;DR: In this paper, the shape and topology of a structural component are optimized with the objective of minimizing the compliance subject to a constraint on the total mass of the structure; the boundary of the shape is defined by the contour of a shape density function corresponding to a threshold value.
Abstract: In this paper, a method is proposed for the design optimization of structural components where both shape and topology are optimized. The boundaries of the shape of the structure are represented using contours of a shape density function. The contour of the density function corresponding to a threshold value is defined as the boundary of the shape. The shape density function is defined over a feasible domain and is represented by a continuous piece-wise interpolation over the finite elements used for structural analysis. The values of the density function at the nodes serve as the design variables of the optimization problem. The advantage of this shape representation is that both shape and topology of the structure can be modified and optimized by the optimization algorithm. Unlike previous methods for shape and topology optimization, the material is not modeled as porous or composite using the homogenization method. Instead the material properties of the structure are assumed to depend on the density function and many approximate material property-density relations have been studied. The shape and topology of structural components are optimized with the objective of minimizing the compliance subject to a constraint on the total mass of the structure.

Journal ArticleDOI
TL;DR: In this article, the performance of coherent Doppler lidar in the weak-signal regime is investigated by computer simulations of velocity estimators that accumulate the signal from N pulses of zero-mean complex Gaussian stationary lidar data described by a Gaussian covariance function.
Abstract: The performance of coherent Doppler lidar in the weak-signal regime is investigated by computer simulations of velocity estimators that accumulate the signal from N pulses of zero-mean complex Gaussian stationary lidar data described by a Gaussian covariance function. The probability density function of the resulting estimates is modeled as a fraction b of uniformly distributed bad estimates or random outliers and a localized distribution of good estimates with standard deviation g. Results are presented for various velocity estimators and for typical boundary layer measurements of 2-µm coherent lidars and also for proposed space-based measurements with 2- and 10-µm lidars. For weak signals and insufficient pulse accumulation, the fraction of bad estimates is high and g ≈ W_v, the spectral width of the signal in velocity space. For a large number of accumulated pulses N, there are few bad estimates and g ∝ W_v N^{-1/2}. The threshold signal energy or average number of coherent photoelectrons per pulse ...

Posted Content
TL;DR: In this paper, it is shown that weak gravitational lensing induces specific non-Gaussian effects detectable with the four-point correlation function of the CMB anisotropies, and the magnitude and geometrical dependences of this correlation function are investigated in detail.
Abstract: The weak lensing effects are known to change only weakly the shape of the power spectrum of the Cosmic Microwave Background (CMB) temperature fluctuations. I show here that they nonetheless induce specific non-Gaussian effects that can be detectable with the four-point correlation function of the CMB anisotropies. The magnitude and geometrical dependences of this correlation function are investigated in detail. It is thus found to scale as the square of the derivative of the two-point correlation function and as the angular correlation function of the gravitational displacement field. It also contains specific dependences on the shape of the quadrangle formed by the four directions. When averaged at a given scale, the four-point function, that identifies with the connected part of the fourth moment of the probability distribution function of the local filtered temperature, scales as the square of logarithmic slope of its second moment, and as the variance of the gravitational magnification at the same angular scale. All these effects have been computed for specific cosmological models. It is worth noting that, as the amplitude of the gravitational lens effects has a specific dependence on the cosmological parameters, the detection of the four-point correlation function could provide precious complementary constraints to those brought by the temperature power spectrum.