
Showing papers on "Cumulative distribution function published in 2000"


Journal ArticleDOI
TL;DR: In this article, the authors introduce a class of distortion operators, g_a(u) = Φ[Φ^(-1)(u) + a], where Φ is the standard normal cumulative distribution. For any loss (or asset) variable X with a decumulative distribution function S_X(x) = 1 - F_X(x), g_a[S_X(x)] defines a distorted probability distribution whose mean value yields a risk-adjusted premium (or an asset price). The distortion operator g_a can be applied to both assets and liabilities, with opposite signs in the parameter a. Based on CAPM, the author establishes that the parameter a should correspond to the systematic risk of X.
Abstract: This article introduces a class of distortion operators, g_a(u) = Φ[Φ^(-1)(u) + a], where Φ is the standard normal cumulative distribution. For any loss (or asset) variable X with a decumulative distribution function S_X(x) = 1 - F_X(x), g_a[S_X(x)] defines a distorted probability distribution whose mean value yields a risk-adjusted premium (or an asset price). The distortion operator g_a can be applied to both assets and liabilities, with opposite signs in the parameter a. Based on CAPM, the author establishes that the parameter a should correspond to the systematic risk of X. For a normal(μ, σ²) distribution, the distorted distribution is also normal, with μ' = μ + aσ and σ' = σ. For a lognormal distribution, the distorted distribution is also lognormal.
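As a rough illustration of the distortion operator described above (not the authors' code), the sketch below computes a risk-adjusted premium for a hypothetical lognormal loss by applying g_a(u) = Φ[Φ^(-1)(u) + a] to the survival function and integrating; the loss parameters and the value a = 0.25 are made up.

```python
# Sketch of the distortion-operator idea (illustrative parameters, not the paper's code).
# g_a(u) = Phi(Phi^{-1}(u) + a), applied to the decumulative (survival) function S_X.
import numpy as np
from scipy.stats import norm, lognorm

def distorted_premium(survival, a, grid):
    """Risk-adjusted premium: integral of g_a(S_X(x)) dx over a grid of x >= 0."""
    s = survival(grid)
    g = norm.cdf(norm.ppf(s) + a)          # distortion operator g_a applied pointwise
    dx = grid[1] - grid[0]
    return float(np.sum(0.5 * (g[1:] + g[:-1])) * dx)   # trapezoidal integration

# Example: a lognormal loss and distortion parameter a = 0.25 (made-up values).
loss = lognorm(s=0.5, scale=np.exp(1.0))
x = np.linspace(0.0, 200.0, 20001)
print("expected loss        :", loss.mean())
print("risk-adjusted premium:", distorted_premium(loss.sf, 0.25, x))
```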

618 citations


Journal ArticleDOI
TL;DR: A moment generating function-based numerical technique for the outage probability evaluation of maximal-ratio combining (MRC) and postdetection equal-gain combining (EGC) in generalized fading channels for which the fading in each diversity path need not be independent, identically distributed, nor even distributed according to the same family of distributions.
Abstract: Outage probability is an important performance measure of communication systems operating over fading channels. Relying on a simple and accurate algorithm for the numerical inversion of the Laplace transforms of cumulative distribution functions, we develop a moment generating function-based numerical technique for the outage probability evaluation of maximal-ratio combining (MRC) and postdetection equal-gain combining (EGC) in generalized fading channels for which the fading in each diversity path need not be independent, identically distributed, nor even distributed according to the same family of distributions. The method is then extended to coherent EGC but only for the case of Nakagami-m fading channels. The mathematical formalism is illustrated by applying the method to some selected numerical examples of interest showing the impact of the power delay profile and the fading correlation on the outage probability of MRC and EGC systems.
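The abstract's general MGF/Laplace-inversion machinery is not reproduced here; as a hedged sanity check, the snippet below uses the well-known closed-form special case in which the combined SNR of L-branch MRC over i.i.d. Rayleigh fading is gamma distributed, so the outage probability reduces to a gamma CDF. Threshold and average SNR values are illustrative.

```python
# Closed-form special case (i.i.d. Rayleigh MRC), useful as a check for a general
# MGF/Laplace-inversion routine of the kind described in the abstract.
from scipy.stats import gamma

def outage_mrc_rayleigh_iid(gamma_th, gamma_bar, L):
    """P(combined SNR < gamma_th) for L-branch MRC over i.i.d. Rayleigh fading.
    The combined SNR is Gamma(shape=L, scale=gamma_bar) distributed."""
    return gamma.cdf(gamma_th, a=L, scale=gamma_bar)

# Example: 10 dB threshold, 12 dB average branch SNR, 3 branches (illustrative values).
print(outage_mrc_rayleigh_iid(10.0, 10**(12 / 10), 3))
```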

211 citations


Journal ArticleDOI
TL;DR: In this paper, nine of the most important estimators known for the two-point correlation function are compared using a predetermined, rigorous criterion, which takes into account bias and variance, and it is independent of the possibly non-Gaussian nature of the error statistics.
Abstract: Nine of the most important estimators known for the two-point correlation function are compared using a predetermined, rigorous criterion. The indicators were extracted from over 500 subsamples of the Virgo Hubble volume simulation cluster catalog. The "real" correlation function was determined from the full survey in a 3000 h(-1) Mpc periodic cube. The estimators were ranked by the cumulative probability of returning a value within a certain tolerance of the real correlation function. This criterion takes into account bias and variance, and it is independent of the possibly non-Gaussian nature of the error statistics. As a result, for astrophysical applications, a clear recommendation has emerged: the Landy & Szalay estimator, in its original or grid version (Szapudi & Szalay), is preferred in comparison with the other indicators examined, with a performance almost indistinguishable from the Hamilton estimator.
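A minimal sketch of the Landy & Szalay estimator, xi = (DD - 2DR + RR)/RR, computed from normalized pair counts; the brute-force pair counting and the uniform random catalogs are purely illustrative (real analyses use tree-based counting and survey masks).

```python
# Landy & Szalay estimator from normalized pair counts (naive counting, illustrative only).
import numpy as np
from scipy.spatial.distance import pdist

def pair_counts(points, bins):
    return np.histogram(pdist(points), bins=bins)[0].astype(float)

def landy_szalay(data, randoms, bins):
    nd, nr = len(data), len(randoms)
    dd = pair_counts(data, bins) / (nd * (nd - 1) / 2)
    rr = pair_counts(randoms, bins) / (nr * (nr - 1) / 2)
    dr = np.histogram(
        np.linalg.norm(data[:, None, :] - randoms[None, :, :], axis=-1), bins=bins
    )[0] / (nd * nr)
    return (dd - 2 * dr + rr) / rr            # xi_LS(r) per separation bin

rng = np.random.default_rng(0)
bins = np.linspace(0.05, 0.5, 10)
xi = landy_szalay(rng.random((500, 3)), rng.random((2000, 3)), bins)
print(xi)   # roughly zero for an unclustered (Poisson) field, up to noise
```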

208 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present statistical analysis of data obtained by measuring narrowband path loss at DECT (digital enhanced cordless telecommunications) frequency (1.89 GHz) in an indoor environment.
Abstract: This paper presents statistical analysis of data obtained by measuring narrowband path loss at DECT (digital enhanced cordless telecommunications) frequency (1.89 GHz) in an indoor environment. Specific goodness-of-fit tests are applied to the data. The tests assess whether the data generating source belongs to a known family of random variables. Results obtained support that local path-loss distribution can be represented as Weibull or Nakagami in most environments. The close resemblance between such distributions and Rice distribution (with Rayleigh as a special case) confirms that Rice/Rayleigh description of fading can be applied to indoor environments. That similarity leads to a general, simple, approximate expression for the cumulative distribution function of the signal-to-interference ratio. The expression can be used without limitations to the number and the parameters of the Rice interferers.
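A hedged sketch of the kind of goodness-of-fit exercise described: fit Weibull and Nakagami models to synthetic envelope samples and apply a Kolmogorov-Smirnov test. The stand-in data, and the use of a plain KS test with estimated parameters (which makes the p-values only indicative), are assumptions, not the authors' exact procedure.

```python
# Fit Weibull and Nakagami models to stand-in envelope data and run a KS test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
samples = stats.rice.rvs(b=1.5, scale=1.0, size=2000, random_state=rng)  # stand-in data

for name, dist in [("weibull", stats.weibull_min), ("nakagami", stats.nakagami)]:
    params = dist.fit(samples, floc=0)                  # fix the location at zero
    d, p = stats.kstest(samples, dist.cdf, args=params)
    # Note: p-values are only indicative because the parameters were estimated from the data.
    print(f"{name}: KS D={d:.3f}, p={p:.3f}")
```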

197 citations


Journal ArticleDOI
TL;DR: Comparisons of the present results with those obtained by existing uncertainty importance measures show that the metric-distance measure is a useful tool for expressing uncertainty importance in terms of the relative impact of distributional changes of the inputs on the change of the output distribution.

180 citations


22 Sep 2000
TL;DR: In this paper, it is shown that the assumption of a zero-mean Normal error distribution can be replaced by a requirement that the error distribution be symmetric and unimodal, with a cumulative distribution function that is bounded by a Normal distribution.
Abstract: One of the most critical requirements for aviation applications is integrity. For Category I precision approach, the International Civil Aviation Organization (ICAO) has defined an integrity requirement of 10^-7 per approach on the probability that the system fails in a way that causes misleading information. Since the performance of GNSS can vary dramatically depending on satellite geometry, mathematical bounds on the position error have been defined to evaluate this requirement for a particular approach. These bounds are the horizontal and vertical protection levels (HPL and VPL, respectively), and are defined in RTCA and ICAO documents. The protection level equations are constructed on the assumption that the individual error components (pseudorange errors) are exactly characterized by a zero-mean Normal distribution whose variance is known. Unfortunately, this assumption has not been validated. In fact, testing has indicated that there can be small residual means and that the tails of the error distribution are not necessarily characterized by a Normal distribution. In order to generalize the integrity requirements for individual pseudorange error components, these effects must be accommodated. This paper proves that the assumption of a zero-mean Normal error distribution can be replaced by a requirement that the error distribution be symmetric and unimodal, with a cumulative distribution function (cdf) that is bounded by a Normal error distribution (overbounded for errors less than the mean, underbounded for errors greater than the mean). This result is extended to accommodate non-zero means, which can be accounted for by inflating the variance of the assumed Normal error model. Three methods of inflating this variance are described.
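A small numerical sketch of the overbounding condition stated above (not the paper's proof): for a symmetric, unimodal, zero-mean error CDF F, a Normal overbound with standard deviation sigma_ob must satisfy Phi(x/sigma_ob) >= F(x) for x <= 0 and Phi(x/sigma_ob) <= F(x) for x >= 0. The triangular error model and the sigma_ob value are made up for illustration.

```python
# Numerical check of the CDF-overbounding condition (illustrative, not the paper's method).
import numpy as np
from scipy import stats

def is_cdf_overbound(actual_cdf, sigma_ob, grid):
    """Check Phi(x/sigma_ob) >= F(x) for x <= 0 and <= F(x) for x >= 0 on a grid."""
    bound = stats.norm.cdf(grid / sigma_ob)
    left = grid <= 0
    return bool(np.all(bound[left] >= actual_cdf(grid[left])) and
                np.all(bound[~left] <= actual_cdf(grid[~left])))

# Made-up candidate error model: symmetric triangular on [-1, 1] (bounded tails).
tri = stats.triang(c=0.5, loc=-1.0, scale=2.0)
x = np.linspace(-1.5, 1.5, 3001)
print(is_cdf_overbound(tri.cdf, sigma_ob=0.5, grid=x))   # expected to print True here
```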

152 citations


Journal ArticleDOI
TL;DR: This paper presents another data transformation method using cumulative distribution functions, referred to simply as distribution transformation, which can transform a stream of random data distributed over any range to data points uniformly distributed on the interval [0,1].
Abstract: The applications of neural networks (NNs) in construction cover a large range of topics, including estimating construction costs and markup estimation, predicting construction productivity, predicting settlements during tunneling, and predicting the outcome of construction litigation. The primary purpose of data transformation is to modify the distribution of input variables so that they can better match outputs. The performance of a NN is often improved through data transformations. There are three existing data transformation methods: linear transformation, statistical standardization, and mathematical functions. This paper presents another data transformation method using cumulative distribution functions, referred to simply as distribution transformation. This method can transform a stream of random data distributed over any range to data points uniformly distributed on the interval [0,1]. Therefore, all neural input variables can be transformed to the same ground: uniform distributions on [0,1]. The transformation can also serve the specific need of neural computation that requires all input data to be scaled to the range [-1,1] or [0,1]. The paper applies distribution transformation to two examples. Example 1 fits a cowboy-hat surface because it provides a controlled environment for generating accurate input and output data patterns. The results show that distribution transformation improves the network performance by 50% over linear transformation. Example 2 is a real tunneling project, the Brasilia Tunnel, in which distribution transformation has reduced the prediction error by more than 13% compared with linear transformation.
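A minimal sketch of the distribution-transformation idea, assuming a rank-based empirical CDF as the transforming function (the paper may use fitted CDFs); it maps arbitrarily distributed inputs to approximately uniform values on [0,1].

```python
# Rank-based empirical-CDF transform to [0, 1] (a stand-in for the paper's CDF transformation).
import numpy as np

def distribution_transform(x):
    """Return empirical-CDF values at each observation (midpoint convention)."""
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x))
    return (ranks + 0.5) / x.size

raw = np.random.default_rng(2).lognormal(mean=0.0, sigma=1.0, size=1000)
u = distribution_transform(raw)
print(u.min(), u.max())    # all transformed values lie strictly inside (0, 1)
# For test data, the CDF estimated on the training inputs would be applied instead.
```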

77 citations


Journal ArticleDOI
TL;DR: In this article, a structural reliability analysis method that includes random variables with unknown cumulative distribution functions is suggested, and an accurate third-moment standardization function is proposed.
Abstract: First- and second-order reliability methods are generally considered to be among the most useful for computing structural reliability. In these methods, the uncertainties included in resistances and loads are generally expressed as continuous random variables that have a known cumulative distribution function. The Rosenblatt transformation is a fundamental requirement for structural reliability analysis. However, in practical applications, the cumulative distribution functions of some random variables are unknown, and the probabilistic characteristics of these variables may be expressed using only statistical moments. In the present study, a structural reliability analysis method with inclusion of random variables with unknown cumulative distribution functions is suggested. Normal transformation methods that make use of high-order moments are investigated, and an accurate third-moment standardization function is proposed. Using the proposed method, the normal transformation for random variables with unknown cumulative distribution functions can be realized without using the Rosenblatt transformation. Through the numerical examples presented, the proposed method is found to be sufficiently accurate to include the random variables with unknown cumulative distribution functions in the first- and second-order reliability analyses with little extra computational effort.

76 citations


Journal ArticleDOI
Sheng Yue
TL;DR: In this article, a procedure is presented for using the bivariate normal distribution to describe the joint distribution of storm peaks (maximum rainfall intensities) and amounts which are mutually correlated.
Abstract: A procedure is presented for using the bivariate normal distribution to describe the joint distribution of storm peaks (maximum rainfall intensities) and amounts which are mutually correlated. The Box-Cox transformation method is used to normalize original marginal distributions of storm peaks and amounts regardless of the original forms of these distributions. The transformation parameter is estimated using the maximum likelihood method. The joint cumulative distribution function, the conditional cumulative distribution function, and the associated return periods can be readily obtained based on the bivariate normal distribution. The method is tested and validated using two rainfall data sets from two meteorological stations that are located in different climatic regions of Japan. The theoretical distributions show a good fit to observed ones.
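A hedged sketch of the procedure on synthetic data: Box-Cox-normalize the two margins, fit a bivariate normal to the transformed pairs, and evaluate a joint non-exceedance probability. The synthetic "peaks" and "amounts" and their dependence are assumptions for illustration only.

```python
# Box-Cox normalization of two correlated, skewed variables, then a bivariate normal joint CDF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
amounts = rng.gamma(shape=2.0, scale=10.0, size=300)                 # made-up storm amounts
peaks = 0.3 * amounts + rng.gamma(shape=1.5, scale=2.0, size=300)    # correlated, skewed peaks

yp, lam_p = stats.boxcox(peaks)      # maximum-likelihood Box-Cox transform of each margin
ya, lam_a = stats.boxcox(amounts)

bvn = stats.multivariate_normal(mean=[yp.mean(), ya.mean()], cov=np.cov(yp, ya))

# Joint non-exceedance probability P(peak <= p0, amount <= a0) in original units.
t = lambda x, lam: (x**lam - 1) / lam if lam != 0 else np.log(x)     # Box-Cox map (monotone)
p0, a0 = np.median(peaks), np.median(amounts)
print(bvn.cdf([t(p0, lam_p), t(a0, lam_a)]))
```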

70 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an algorithm for computing the cumulative distribution function of the Kolmogorov-Smirnov test statistic D n in the all-parameters-known case.
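Not the authors' algorithm, but a convenient reference point: SciPy ships the exact finite-n distribution of the two-sided one-sample Kolmogorov-Smirnov statistic (all parameters known) as scipy.stats.kstwo, alongside the classical asymptotic limit kstwobign.

```python
# Exact CDF of the two-sided one-sample KS statistic D_n (all-parameters-known case),
# compared with the classical asymptotic approximation; values are illustrative.
from scipy.stats import kstwo, kstwobign

n = 20
print(kstwo.cdf(0.25, n))               # exact P(D_n <= 0.25) for n = 20
print(kstwobign.cdf(0.25 * n**0.5))     # asymptotic approximation via sqrt(n) * D_n
```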

67 citations


Journal ArticleDOI
TL;DR: This paper presents a general method for maximizing manufacturing yield when the realizations of system components are independent random variables with arbitrary distributions.
Abstract: This paper presents a general method for maximizing manufacturing yield when the realizations of system components are independent random variables with arbitrary distributions. Design specifications define a feasible region which, in the nonlinear case, is linearized using a first-order approximation. The method attempts to place the given tolerance hypercube of the uncertain parameters such that the area with higher yield lies in the feasible region. The yield is estimated by using the joint cumulative density function over the portion of the tolerance hypercube that is contained in the feasible region. A double-bounded density function is used to approximate various bounded distributions for which optimal designs are demonstrated on a tutorial example. Monte Carlo simulation is used to evaluate the actual yields of optimal designs.

Journal ArticleDOI
TL;DR: In this article, the authors evaluated the application of probability density functions (PDFs) and curve-fitting methods to approximate particle size distributions emitted from four pharmaceutical aerosol systems characterized using standard methods.

Journal ArticleDOI
TL;DR: Sets of mental processes are analyzed under the assumption that the processes are partially ordered, that is, arranged in a directed acyclic network; for serial-parallel networks, some previous results for means are shown to follow from results for cumulative distribution functions.

Journal ArticleDOI
TL;DR: A new technique is presented for computing the exact overall duration of a project when task durations have independent distributions; graph reduction techniques by Hopcroft and Tarjan and by Valdes allow the problem to be broken into a series of smaller subproblems, improving computational efficiency.

Journal ArticleDOI
TL;DR: A new quantification algorithm in the GO methodology permits the direct calculation of the cumulative probabilities of all signal states and is practical for the development and application of the GO method.

Journal ArticleDOI
TL;DR: This work rederives an infinite series for the CDF of a sum of RVs using the Gil-Pelaez (1951) inversion formula and the Poisson sum formula and shows that the PDF and CDF can be computed directly using a discrete Fourier transform.
Abstract: A frequent problem in digital communications is the computation of the probability density function (PDF) and cumulative distribution function (CDF), given the characteristic function (CHF) of a random variable (RV). This problem arises in signal detection, equalizer performance, equal-gain diversity combining, intersymbol interference, and elsewhere. Often, it is impossible to analytically invert the CHF to get the PDF and CDF in closed form. Beaulieu (1990, 1991) has derived an infinite series for the CDF of a sum of RVs that has been widely used. We rederive his series using the Gil-Pelaez (1951) inversion formula and the Poisson sum formula. This derivation has several advantages including both the bridging of the well-known sampling theorem with Beaulieu's series and yielding a simple expression for calculating the truncation error term. It is also shown that the PDF and CDF can be computed directly using a discrete Fourier transform.
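A short numerical sketch of Gil-Pelaez inversion of a characteristic function to a CDF, checked against a case whose answer can be computed another way; the quadrature settings and the example CHF (standard normal plus unit exponential) are illustrative, and plain adaptive quadrature is used rather than the DFT-based evaluation discussed in the paper.

```python
# Gil-Pelaez inversion: F(x) = 1/2 - (1/pi) * Int_0^inf Im[phi(t) e^{-itx}] / t dt.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def cdf_from_chf(phi, x):
    integrand = lambda t: np.imag(phi(t) * np.exp(-1j * t * x)) / t
    val, _ = quad(integrand, 0.0, np.inf, limit=400)
    return 0.5 - val / np.pi

# CHF of a sum of independent RVs: standard normal + exponential(1) (illustrative example).
phi_sum = lambda t: np.exp(-0.5 * t**2) * (1.0 / (1.0 - 1j * t))
x0 = 1.0
approx = cdf_from_chf(phi_sum, x0)
exact = quad(lambda s: stats.norm.cdf(x0 - s) * np.exp(-s), 0, np.inf)[0]  # direct convolution
print(approx, exact)   # the two values should agree to several decimal places
```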

Journal ArticleDOI
TL;DR: The study results show that the simulation method gives acceptable reliability indices and can also be used to provide information on the probability distributions associated with the predicted indices for different distributions of TTR and TTS.

Journal ArticleDOI
TL;DR: An analytical study evaluates the power-control error statistics in wireless direct-sequence code-division multiple-access (DS-CDMA) cellular systems based on an ideal variable-step closed-loop power-control scheme; a novel power-control scheme that exploits the autocorrelation properties of the fading is also proposed.
Abstract: This paper proposes an analytical study that aims at evaluating the power-control error statistics in wireless direct-sequence code-division multiple-access (DS-CDMA) cellular systems based on an ideal variable step closed-loop power-control scheme. In particular, the cumulative distribution function and the correlation coefficient of the power-control error are derived through a first-order Taylor expansion of the received signal envelope. A novel power-control scheme that exploits the autocorrelation properties of the fading is also proposed, and its performance is analyzed in terms of power-control error statistics. Rayleigh and Rice frequency-selective channel models, which involve the use of a diversity RAKE receiver at the base station, have been taken into account. The proposed analytical approach specifically applies to CDMA systems. A method that aims at estimating the capacity of a DS-CDMA cellular network is also given.

Journal ArticleDOI
TL;DR: An attempt is made to model the cumulative distribution of the time period between publication of an article and the time it receives its first citation, using a classical Lotka function and a simple decreasing exponential model.
Abstract: The first-citation distribution, i.e. the cumulative distribution of the time period between publication of an article and the time it receives its first citation, has never been modelled by using well-known informetric distributions. An attempt at this is given in this paper. For the diachronous aging distribution we use a simple decreasing exponential model. For the distribution of the total number of received citations we use a classical Lotka function. The combination of these two tools yields new first-citation distributions. The model is then tested by applying nonlinear regression techniques. The obtained fits are very good and comparable with older experimental results of Rousseau and of Gupta and Rousseau. However, our single model is capable of fitting all first-citation graphs, concave as well as S-shaped; in the older results one needed two different models for this. Our model is the function ... Here γ is the fraction of the papers that eventually get cited, t1 is the time of the first citation, a is...

Journal ArticleDOI
TL;DR: In this article, a distribution free method for estimating the quantile function of a non-negative random variable using the principle of maximum entropy (MaxEnt) subject to constraints specified in terms of the probability-weighted moments estimated from observed data is presented.

Journal ArticleDOI
TL;DR: This work considers a doubly-truncated gamma random variable restricted by both a lower (l) and upper (u) truncation point, both of which are considered known.
Abstract: The truncated gamma distribution has been widely studied, primarily in life-testing and reliability settings. Most work has assumed an upper bound on the support of the random variable, i.e. the space of the distribution is (0,u). We consider a doubly-truncated gamma random variable restricted by both a lower (l) and upper (u) truncation point, both of which are considered known. We provide simple forms for the density, cumulative distribution function (CDF), moment generating function, cumulant generating function, characteristic function, and moments. We extend the results to describe the density, CDF, and moments of a doubly-truncated noncentral chi-square variable.
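A minimal sketch (not the paper's closed-form results) that obtains the doubly-truncated gamma density, CDF, and mean directly from the untruncated gamma CDF; the shape, scale, and truncation points are arbitrary illustrative values.

```python
# Density, CDF, and mean of a gamma distribution truncated to (l, u), built from the full gamma CDF.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def trunc_gamma(a, scale, l, u):
    g = stats.gamma(a, scale=scale)
    z = g.cdf(u) - g.cdf(l)                                   # normalizing constant
    pdf = lambda x: np.where((x > l) & (x < u), g.pdf(x) / z, 0.0)
    cdf = lambda x: np.clip((g.cdf(np.clip(x, l, u)) - g.cdf(l)) / z, 0.0, 1.0)
    mean = quad(lambda x: x * g.pdf(x) / z, l, u)[0]
    return pdf, cdf, mean

pdf, cdf, mean = trunc_gamma(a=2.0, scale=3.0, l=1.0, u=10.0)  # made-up parameters
print(cdf(1.0), cdf(10.0), mean)                               # 0.0, 1.0, and the truncated mean
```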

Proceedings ArticleDOI
S.G. Gedam, S.T. Beaudet
24 Jan 2000
TL;DR: A technique for performing Monte Carlo simulation using an Excel spreadsheet is developed; it exploits the mathematical and statistical capabilities of Excel to obtain results such as reliability estimates, the mean and variance of failures, and confidence intervals.
Abstract: A technique for performing Monte-Carlo simulation using an Excel spreadsheet has been developed. This technique utilizes the powerful mathematical and statistical capabilities of Excel. The functional reliability block diagram (RBD) of the system under investigation is first transformed into a table in an Excel spreadsheet. Each cell within the table corresponds to a specific block in the RBD. Formulae for failure times entered into these cells are in accordance with the failure time distribution of the corresponding block and can follow exponential, normal, lognormal or Weibull distribution. The Excel pseudo random number generator is used to simulate failure times of individual units or modules in the system. Logical expressions are then used to determine system success or failure. Excel's macro feature enables repetition of the scenario thousands of times while automatically recording the failure data. Excel's graphical capabilities are later used for plotting the failure probability density function (PDF) and cumulative distribution function (CDF) of the overall system. The paper discusses the results obtainable from this method such as reliability estimate, mean and variance of failures and confidence intervals. Simulation time is dependent on the complexity of the system, computer speed, and the accuracy desired, and may range from a few minutes to a few hours.
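A Python stand-in for the spreadsheet workflow described above, with a made-up reliability block diagram (unit A in series with a parallel pair B1/B2) and made-up failure-time distributions; it estimates mission reliability and a binomial confidence half-width from simulated failure times.

```python
# Monte Carlo reliability estimate for a small, made-up RBD (Python analogue of the Excel approach).
import numpy as np

rng = np.random.default_rng(4)
n, mission_time = 100_000, 1000.0

t_a = rng.weibull(1.5, n) * 2000.0        # Weibull failure times for unit A (made-up parameters)
t_b1 = rng.exponential(1500.0, n)         # exponential failure times for B1 and B2
t_b2 = rng.exponential(1500.0, n)

# Series of A with the parallel pair (B1, B2): system lives until A fails or both Bs fail.
system_failure = np.minimum(t_a, np.maximum(t_b1, t_b2))
reliability = np.mean(system_failure > mission_time)
half_width = 1.96 * np.sqrt(reliability * (1 - reliability) / n)

print(f"R({mission_time:.0f} h) = {reliability:.4f} +/- {half_width:.4f} (95% CI)")
```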

Proceedings ArticleDOI
01 Dec 2000
TL;DR: A "classical" probability density function (PDF)-based approach, relying on the cumulative distribution function (CDF) of the combined output signal-to-noise ratio (SNR) as well as the joint PDF of the combined output SNR and its time derivative, is used to obtain exact closed-form expressions for the average outage duration (AOD) of maximal-ratio combiners (MRC) over independent and identically distributed channels.
Abstract: A "classical" probability density function (PDF)-based approach relying on the cumulative distribution function (CDF) of the combined output signal-to-noise ratio (SNR) as well as the joint PDF of the combined output SNR and its time derivative, is used to obtain exact closed form expressions for average outage duration (AOD) of maximal-ratio combiners (MRC) over independent and identically distributed (i.i.d.) Rayleigh and Rice fading channels. On the other hand, relying on numerical techniques for inverting Laplace transforms of CDFs and for the computation of the joint characteristic function (CF) of the combined output SNR random process and its time derivative, a CF-based approach is adopted to calculate the AOD of MRC over non-i.i.d. Rician diversity paths. The mathematical formalism is illustrated by presenting some numerical results/plots showing the impact of the power delay profile and the distribution of the angle of arrivals on the AOD of diversity systems operating over typical fading channels of practical interest.

Journal ArticleDOI
TL;DR: In this paper, three nonparametric kriging methods (indicator, probability, and cumulative distribution function of order statistics) were used to estimate the probability of heavy-metal concentrations lower than a cutoff value.
Abstract: The probability of pollutant concentrations greater than a cutoff value is useful for delineating hazardous areas in contaminated soils. It is essential for risk assessment and reclamation. In this study, three nonparametric kriging methods [indicator kriging, probability kriging, and kriging with the cumulative distribution function (CDF) of order statistics (CDF kriging)] were used to estimate the probability of heavy-metal concentrations lower than a cutoff value. In terms of methodology, the probability kriging estimator and CDF kriging estimator take into account the information of the order relation, which is not considered in indicator kriging. Since probability kriging has been shown to be better than indicator kriging for delineating contaminated soils, the performance of CDF kriging, which we propose, was compared with that of probability kriging in this study. A data set of soil Cd and Pb concentrations obtained from a 10-ha heavy-metal contaminated site in Taoyuan, Taiwan, was used. The results demonstrated that the probability kriging and CDF kriging estimations were more accurate than the indicator kriging estimation. On the other hand, because the probability kriging was based on the cokriging estimator, some unreliable estimates occurred in the probability kriging estimation. This indicated that probability kriging was not as robust as CDF kriging. Therefore, CDF kriging is more suitable than probability kriging for estimating the probability of heavy-metal concentrations lower than a cutoff value.

Journal ArticleDOI
TL;DR: A closed-form BEP expression is derived that unifies previously published BEP results for 2/4 DPSK and NCFSK with multichannel reception in Rician fading; as a byproduct, a new and general finite-series expression for the BEP in arbitrarily correlated Rayleigh fading is obtained.
Abstract: In this paper, we analyze the bit error probability (BEP) of binary and quaternary differential phase shift keying (2/4 DPSK) and noncoherent frequency shift keying (NCFSK) with postdetection diversity combining in arbitrary Rician fading channels. The model is quite general in that it accommodates fading correlation and noise correlation between different diversity branches as well as between adjacent symbol intervals. We show that the relevant decision statistic can be expressed in a noncentral Gaussian quadratic form, and its moment generating function (MGF) is derived. Using the MGF and the saddle point technique, we give an efficient numerical quadrature scheme to compute the BEP. The most significant contribution of the paper, however, lies in the derivation of a closed-form cumulative distribution function (cdf) for the decision statistic. As a result, a closed-form BEP expression in the form of an infinite series of elementary functions is developed, which is general and unifies previous published BEP results for 2/4 DPSK and NCFSK for multichannel reception in Rician fading. Specialization to some important cases are discussed and, as a byproduct, a new and general finite-series expression for the BEP in arbitrarily correlated Rayleigh fading is obtained. The theory is applied to study 2/4 DPSK and NCFSK performance for independent and correlated Rician fading channels; and some interesting findings are presented.

Journal Article
TL;DR: In this article, the authors extend Burg's method for recursive modeling of univariate autoregressions on a full set of lags, to multivariate modeling on a subset of Lags.
Abstract: We devise an algorithm that extends Burg's original method for recursive modeling of univariate autoregressions on a full set of lags, to multivariate modeling on a subset of lags. The key step in the algorithm involves minimizing the sum of the norm of the forward and backward prediction error residual vectors, as a function of the reflection coefficient matrices. We show that this sum has a global minimum, and give an explicit expression for the minimizer. By modifying the manner in which the reflection coefficients are calculated, this algorithm will also give the well-known Yule-Walker estimates. Based on recently proposed subset extensions to existing full set counterparts, two other algorithms that involve modifying the reflection coefficient calculation are also presented. Using simulated data, all four algorithms are compared with respect to the size of the Gaussian likelihood produced by each respective model. We find that the Burg and Vieira-Morf algorithms tend to perform better than the others for all configurations of roots of the autoregressive polynomial, averaging higher likelihoods with smaller variability across a large number of realizations. We extend existing asymptotic central limit type results for three common vector autoregressive process estimators, to the subset case. First, consistency and asymptotic normality are established for the least squares estimator. This is extended to Yule-Walker, by virtue of the similarity in the closed forms for the two estimators. Taking advantage of the fact that the Yule-Walker and Burg estimates can be calculated recursively via nearly identical algorithms, we then show these two differ by terms of order at most Op(1/n). In this way the Burg estimator inherits the same asymptotics as both Yule-Walker and least squares. Saddlepoint approximations to the distributions of the Yule-Walker and Burg autoregressive coefficient estimators, when sampling from a subset Gaussian AR(p) with only one non-zero lag, are given. In this context, each estimator can be written as a ratio of quadratic forms in normal random variables. The larger bias and variance in the distribution of the Yule-Walker estimator, particularly evident at low sample sizes and when the AR coefficient is close to ±1, agree with its tendency to give lower likelihoods, as noted earlier. Empirical probability density histograms of the sampling distributions, for these two as well as the maximum likelihood estimator, provide further confirmation of the superiority of Burg over Yule-Walker in the vicinity of ±1. Relative error comparisons between the saddlepoint approximations and the empirical cumulative distribution functions show close agreement.
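For background, the classical univariate, full-lag-set Burg recursion is sketched below; the subset and multivariate extensions developed in the thesis are not implemented here, and the AR(2) test coefficients are made up.

```python
# Classical univariate Burg recursion (full lag set) as background for the subset/VAR extensions.
import numpy as np

def burg_ar(x, p):
    """Return phi such that x_t ~ phi[0]*x_{t-1} + ... + phi[p-1]*x_{t-p} + e_t."""
    x = np.asarray(x, dtype=float)
    a = np.array([])              # prediction-error filter coefficients (excluding the leading 1)
    f = x.copy()                  # forward prediction errors
    b = x.copy()                  # backward prediction errors
    for k in range(p):
        ef, eb = f[k + 1:], b[k:-1]
        # Reflection coefficient minimizing the summed forward + backward error power.
        kk = -2.0 * np.dot(ef, eb) / (np.dot(ef, ef) + np.dot(eb, eb))
        a = np.concatenate([a + kk * a[::-1], [kk]])          # Levinson-type coefficient update
        f[k + 1:], b[k + 1:] = ef + kk * eb, eb + kk * ef     # update error sequences
    return -a

# Quick check on a simulated AR(2) with made-up coefficients phi = (0.6, -0.3).
rng = np.random.default_rng(5)
x = np.zeros(5000)
for t in range(2, x.size):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
print(burg_ar(x, 2))      # should be close to [0.6, -0.3]
```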

Book
01 Feb 2000
TL;DR: This book covers the basic characteristics of error-distribution histograms, random variables and probability distributions in one, two, and three dimensions, and functions of independent random variables, with applications to error analysis and the calculus of tolerance limits.
Abstract: Table of contents:
1 Basic characteristics of error distribution histograms
  1.1 Introductory remarks; histograms
  1.2 The average of a sample of measurements
  1.3 Dispersion measures in error analysis
  1.4 Cumulative frequency distribution
  1.5 Examples of empirical distributions
  1.6 Parameters obtained from the measured data and their theoretical values
  Problems. References
2 Random variables and probability; normal distribution
  2.1 Probability and random variables
  2.2 The cumulative distribution function; the probability density function
  2.3 Moments
  2.4 The normal probability distribution
  2.5 Two-dimensional gravity flow of granular material
  Problems. References
3 Probability distributions and their characterizations
  3.1 The characteristic function of a distribution
  3.2 Constants characterizing the random variables
  3.3 Deterministic functions of random variables
  3.4 Some other one-dimensional distributions
    3.4.1 Discrete probability distributions
    3.4.2 Continuous probability distributions
    3.4.3 Remarks on other probability distributions
    3.4.4 Measures of deviation from the normal distribution
  3.5 Approximate methods for constructing a probability density function
  3.6 Multi-dimensional probability distributions
  Problems. References
4 Functions of independent random variables
  4.1 Basic relations
  4.2 Simple examples of applications
  4.3 Examples of applications in non-direct measurements
  4.4 Remarks on applications in the calculus of tolerance limits
  4.5 Statical analogy in the analysis of complex dimension nets
  Problems. References
5 Two-dimensional distributions
  5.1 Introductory remarks
  5.2 Linear regression of experimental observations
    5.2.1 Nonparametric regression
    5.2.2 The method of least squares for determining the linear regression line
    5.2.3 The method of moments for determining the linear regression line
  5.3 Linear correlation between experimentally determined quantities
  5.4 Two-dimensional continuous random variables
  5.5 The two-dimensional normal distribution
    5.5.1 The case of independent random variables
    5.5.2 The circular normal distribution
    5.5.3 Three-dimensional gravity flow of granular media
    5.5.4 The case of dependent random variables
  Problems. References
6 Two-dimensional functions of independent random variables
  6.1 Basic relations
  6.2 The rectangular distribution of independent random variables
    6.2.1 Analytical method for determining two-dimensional tolerance limit polygons
    6.2.2 Statical analogy method for determining two-dimensional tolerance limit polygons
    6.2.3 Graphical method for determining two-dimensional tolerance limit polygons; Williot's diagram
  6.3 The normal distribution of independent random variables
  6.4 Indirect determination of the ellipses of probability concentration
  Problems. References
7 Three-dimensional distributions
  7.1 General remarks
  7.2 Continuous three-dimensional random variables
  7.3 The three-dimensional normal distribution
    7.3.1 Independent random variables
    7.3.2 The spherical normal distribution
    7.3.3 The case of dependent random variables
  Problems. References
8 Three-dimensional functions of independent random variables
  8.1 Basic relations
  8.2 The rectangular distribution of independent random variables
  8.3 The normal distribution of independent random variables
  8.4 Indirect determination of the ellipsoids of probability concentration
  Problems. References
9 Problems described by implicit equations
  9.1 Introduction
  9.2 Statistically independent random variables
    9.2.1 Two independent random variables
    9.2.2 A function of independent random variables
  9.3 Statistically dependent random variables
    9.3.1 Two dependent random variables
    9.3.2 The case of Gaussian random variables
    9.3.3 More random variables: the Rosenblatt transformation
  9.4 Computational problems
  References
10 Useful definitions an...

Journal ArticleDOI
TL;DR: An optimization algorithm based on simulated annealing is proposed, in which the domain of the search is successively reduced based on a probability concept until the stopping criteria are satisfied, by introducing the ideas of probability cumulative distribution function and stable energy.
Abstract: An optimization algorithm based on simulated annealing is proposed, in which the domain of the search is successively reduced based on a probability concept until the stopping criteria are satisfied. By introducing the ideas of probability cumulative distribution function and stable energy, the selection of initial temperature and equilibrium criterion in the process of simulated annealing becomes easy and effective. Numerical studies using a set of standard test functions and an example of a 10-bar truss show that the approach is effective and robust in solving both functional and structural optimization problems. The achievement of optimum design is a goal naturally attractive to the designer. The field of structural design is generally characterized by a large number of variables, which are usually discrete in dimensions or properties, and these variables should satisfy all of the constraints to be a feasible structural design. Some structural optimization problems are fairly amenable to a mathematical programming approach. In such instances, the design space is continuous and convex. The search may be deterministic, and methods are employed using, for example, gradient concepts. However, there is a large class of structural optimization problems with nonconvexities in the design space and with a mix of continuous and discrete variables. Under such circumstances, standard mathematical programming techniques are usually inefficient because they are computationally expensive and are almost assured of locating the relative optimum close to the starting design. To overcome these difficulties, the stochastic search in structural optimization is considered. Many methods have become possible with the powerful computing facilities available in recent years. Among the stochastic algorithms, pure random search is the simplest strategy for optimal design. Some modified versions have been suggested, such as single start, multistart, and random directions. Methods in this class generally are quite simple to implement, but the appropriate stopping rules are very difficult to derive. Recently, two classes of powerful search methods, which have their philosophical basis in processes found in nature, have been widely used in structural optimization. The first class of methods, including genetic algorithms and simulated evolution, is based on the spirit of the Darwinian theory of evolution. The second class of methods is generally referred to as simulated annealing techniques because they are qualitatively derived from a simulation of the behavior of particles in thermal equilibrium at a given temperature. Because of their global capabilities, research on the utilization of these search methods in design optimization has been undertaken. Some hybrid techniques have also been developed by combining features of these two algorithms, such as using a genetic algorithm to determine better annealing schedules and introducing a Boltzmann-type mutation or selection process into simulated evolution. In this paper, we propose a method based on simulated annealing that searches from a population, as in the method of simulated evolution, instead of from a single point. The algorithm is called the region-reduction simulated annealing (RRSA) method because it locates the optimum by successively eliminating the regions with low probability of containing the optimum. A brief review of basic simulated annealing is given in Sec. II, which is helpful for development and explanation of the proposed...
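A generic simulated-annealing sketch for continuous minimization, included as background for the region-reduction variant described above; the Gaussian proposal, geometric cooling schedule, parameter values, and Rosenbrock test function are all illustrative choices, not the RRSA algorithm itself.

```python
# Generic simulated annealing for continuous minimization (illustrative schedule and parameters).
import numpy as np

def simulated_annealing(f, x0, lower, upper, t0=1.0, alpha=0.95,
                        n_outer=200, n_inner=50, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, t = f(x), t0
    best_x, best_f = x.copy(), fx
    for _ in range(n_outer):
        for _ in range(n_inner):
            cand = np.clip(x + rng.normal(scale=0.1 * (upper - lower)), lower, upper)
            fc = f(cand)
            if fc < fx or rng.random() < np.exp(-(fc - fx) / t):   # Metropolis acceptance
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = x.copy(), fx
        t *= alpha                                                  # geometric cooling
    return best_x, best_f

# Example: 2-D Rosenbrock function on [-2, 2]^2; the search should move toward (1, 1).
rosen = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
print(simulated_annealing(rosen, [0.0, 0.0], np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
```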

Journal ArticleDOI
TL;DR: New closed-form expressions for the probability density function and the cumulative distribution function of the ratio of L_S Rician signals to L_I Rician interferers, the so-called signal-to-interference (SIR) power ratio, are derived.
Abstract: New closed-form expressions for the probability density function and the cumulative distribution function of the ratio of L_S Rician signals to L_I Rician interferers, the so-called signal-to-interference (SIR) power ratio, are derived. These SIR distributions are then used to evaluate the error probability and capture probability of direct-sequence code-division multiple-access systems operating in Rician faded channels with lognormal shadowing. The influence of various system parameters, such as the Rice K factor, shadowing spread, number of interferers, and spread-spectrum processing gain, on the system performance is analyzed and discussed.
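The closed-form SIR distributions themselves are not reproduced here; the sketch below is a Monte Carlo stand-in that builds the empirical CDF of the ratio of L_S Rician signal powers to L_I Rician interferer powers, with Rician samples generated as |LOS + complex Gaussian|. All parameter values (Rice factors, mean powers, L_S, L_I) are made up.

```python
# Monte Carlo empirical CDF of the SIR of summed Rician powers (illustrative parameters).
import numpy as np

def rician_power(k_factor, mean_power, size, rng):
    """Squared Rician envelope with Rice factor K and the given mean power."""
    s2 = mean_power / (k_factor + 1.0)           # total scattered power (2*sigma^2)
    los = np.sqrt(k_factor * s2)                 # line-of-sight amplitude
    x = los + rng.normal(scale=np.sqrt(s2 / 2), size=size)
    y = rng.normal(scale=np.sqrt(s2 / 2), size=size)
    return x**2 + y**2

rng = np.random.default_rng(6)
n, L_S, L_I = 200_000, 2, 3
sig = sum(rician_power(4.0, 1.0, n, rng) for _ in range(L_S))    # signal branches
intf = sum(rician_power(1.0, 0.1, n, rng) for _ in range(L_I))   # interferer branches
sir = sig / intf

print("P(SIR < 0 dB) approx", np.mean(sir < 1.0))   # one point of the empirical SIR CDF
```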

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new method of integrating soft data, which is large-scale and has locally variable precision, using simulated annealing to construct stochastic realizations that reflect the uncertainty in the soft data by constraining the cumulative probability values of the block average values to follow a specified distribution.
Abstract: Interpretation of geophysical data or other indirect measurements provides large-scale soft secondary data for modeling hard primary data variables. Calibration allows such soft data to be expressed as prior probability distributions of nonlinear block averages of the primary variable; poorer quality soft data leads to prior distributions with large variance, better quality soft data leads to prior distributions with low variance. Another important feature of most soft data is that the quality is spatially variable; soft data may be very good in some areas while poorer in other areas. The main aim of this paper is to propose a new method of integrating such soft data, which is large-scale and has locally variable precision. The technique of simulated annealing is used to construct stochastic realizations that reflect the uncertainty in the soft data. This is done by constraining the cumulative probability values of the block average values to follow a specified distribution. These probability values are determined by the local soft prior distribution and a nonlinear average of the small-scale simulated values within the block, which are all known. For each realization to accurately capture the information contained in the soft data distributions, we show that the probability values should be uniformly distributed between 0 and 1. An objective function is then proposed for a simulated annealing based approach to enforce this uniform probability constraint. The theoretical justification of this approach is discussed, implementation details are considered, and an example is presented.