
Showing papers on "Monte Carlo method published in 1989"


Journal ArticleDOI
TL;DR: A new reproducibility index is developed and studied; it is simple to use, possesses desirable properties, and its statistical properties can be satisfactorily evaluated using an inverse hyperbolic tangent transformation.
Abstract: A new reproducibility index is developed and studied. This index is the correlation between the two readings that fall on the 45 degree line through the origin. It is simple to use and possesses desirable properties. The statistical properties of this estimate can be satisfactorily evaluated using an inverse hyperbolic tangent transformation. A Monte Carlo experiment with 5,000 runs was performed to confirm the estimate's validity. An application using actual data is given.

6,916 citations
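The reproducibility index described above is straightforward to reproduce numerically. A minimal sketch in Python/NumPy (the sample version of the index, with the variance-stabilizing arctanh transform the abstract mentions; variable names are illustrative):

```python
import numpy as np

def concordance(x, y):
    """Concordance-type reproducibility index: agreement of paired readings with
    the 45-degree line through the origin (sample version; ddof=0 moments)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(0)
a = rng.normal(size=1000)
ccc_same = concordance(a, a)         # identical readings: index is 1
ccc_shift = concordance(a, a + 1.0)  # perfectly correlated but biased: index < 1
z = np.arctanh(ccc_shift)            # inverse hyperbolic tangent transform for inference
```

Unlike the ordinary correlation, the index penalizes the constant shift in the second comparison, which is exactly what makes it a measure of reproducibility rather than mere association.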


Journal ArticleDOI
TL;DR: A simple model is developed, based on the diffusion approximation to radiative transfer theory, which yields analytic expressions for the pulse shape in terms of the interaction coefficients of a homogeneous slab.
Abstract: When a picosecond light pulse is incident on biological tissue, the temporal characteristics of the light backscattered from, or transmitted through, the sample carry information about the optical absorption and scattering coefficients of the tissue. We develop a simple model, based on the diffusion approximation to radiative transfer theory, which yields analytic expressions for the pulse shape in terms of the interaction coefficients of a homogeneous slab. The model predictions are in good agreement with the results of preliminary in vivo experiments and Monte Carlo simulations.

2,242 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a method for optimizing the analysis of data from multiple Monte Carlo computer simulations over wide ranges of parameter values, which is applicable to simulations in lattice gauge theories, chemistry, and biology, as well as statistical mechanics.
Abstract: We present a new method for optimizing the analysis of data from multiple Monte Carlo computer simulations over wide ranges of parameter values. Explicit error estimates allow objective planning of the lengths of runs and the parameter values to be simulated. The method is applicable to simulations in lattice gauge theories, chemistry, and biology, as well as statistical mechanics.

2,198 citations
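The multiple-run optimization above builds on histogram reweighting. A minimal single-run sketch of the underlying idea (reweighting samples drawn at one parameter value to estimate an average at a nearby value; the Gaussian toy model is an illustrative choice, not from the paper):

```python
import numpy as np

def reweight_mean(samples, beta0, beta):
    """Reweight energy samples drawn at inverse temperature beta0 to estimate the
    mean energy at a nearby beta (single-run core of histogram reweighting)."""
    e = np.asarray(samples, float)
    logw = -(beta - beta0) * e
    logw -= logw.max()                  # stabilize the exponentials
    w = np.exp(logw)
    return np.sum(w * e) / np.sum(w)

# toy model where the answer is known: E ~ N(0, 1) at beta0 implies the
# reweighted mean at beta is exactly -(beta - beta0)
rng = np.random.default_rng(1)
e_samples = rng.normal(size=200_000)
est = reweight_mean(e_samples, beta0=1.0, beta=1.2)
```

The paper's contribution is combining several such runs optimally, with explicit error estimates that tell you where to place the next run; the reweighting step itself is the piece sketched here.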


Journal ArticleDOI
Ulli Wolff1
TL;DR: A Monte Carlo algorithm is presented that updates large clusters of spins simultaneously in systems at and near criticality and its efficiency is demonstrated in the two-dimensional $\mathrm{O}(n)$ $\sigma$ models.
Abstract: A Monte Carlo algorithm is presented that updates large clusters of spins simultaneously in systems at and near criticality. We demonstrate its efficiency in the two-dimensional $\mathrm{O}(n)$ $\sigma$ models for $n=1$ (Ising) and $n=2$ ($XY$) at their critical temperatures, and for $n=3$ (Heisenberg) with correlation lengths around 10 and 20. On lattices up to ${128}^{2}$ no sign of critical slowing down is visible with autocorrelation times of 1-2 steps per spin for estimators of long-range quantities.

1,965 citations
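A compact sketch of a cluster update of this kind for the $n=1$ (Ising) case, assuming unit coupling and periodic boundaries (a teaching-scale implementation, not the paper's code):

```python
import numpy as np

def cluster_step(spin, beta, rng):
    """One single-cluster update for the 2D Ising model (J = 1, periodic boundaries):
    grow a cluster of aligned spins with bond probability 1 - exp(-2*beta), flip it."""
    L = spin.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    i, j = rng.integers(L), rng.integers(L)
    seed = spin[i, j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nx % L, ny % L
            if (nx, ny) not in cluster and spin[nx, ny] == seed and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:                 # flip the whole cluster at once
        spin[x, y] = -seed
    return len(cluster)

rng = np.random.default_rng(2)
L, beta = 16, 0.4406868                  # near the 2D Ising critical coupling
spin = rng.choice([-1, 1], size=(L, L))
sizes = [cluster_step(spin, beta, rng) for _ in range(200)]
```

Near criticality the clusters span a large fraction of the lattice, which is why a single step decorrelates the configuration far faster than spin-by-spin Metropolis updates.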


Journal ArticleDOI
TL;DR: In this article, conditions under which the numerical approximation of a posterior moment converges almost surely to the true value as the number of Monte Carlo replications increases, and the numerical accuracy of this approximation may be assessed reliably, are set forth.
Abstract: Methods for the systematic application of Monte Carlo integration with importance sampling to Bayesian inference in econometric models are developed. Conditions under which the numerical approximation of a posterior moment converges almost surely to the true value as the number of Monte Carlo replications increases, and the numerical accuracy of this approximation may be assessed reliably, are set forth. Methods for the analytical verification of these conditions are discussed.

1,649 citations
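The core computation — a posterior moment by importance sampling, with a numerical standard error for assessing accuracy — can be sketched as follows. The delta-method error estimate and the toy densities are illustrative choices, not taken from the paper:

```python
import numpy as np

def is_posterior_mean(draws, log_post, log_imp, g):
    """Importance-sampling estimate of E[g(theta) | data], plus a numerical
    standard error (delta-method approximation for the ratio estimator)."""
    logw = log_post(draws) - log_imp(draws)    # unnormalized log weights
    w = np.exp(logw - logw.max())
    gv = g(draws)
    est = np.sum(w * gv) / np.sum(w)
    wn = w / np.sum(w)
    nse = np.sqrt(np.sum(wn**2 * (gv - est) ** 2))
    return est, nse

# toy check: "posterior" N(2, 1), importance density a heavier-tailed scaled t(5),
# so the weight function is bounded (the kind of condition the paper formalizes)
rng = np.random.default_rng(3)
th = rng.standard_t(df=5, size=100_000) * 3.0
log_post = lambda t: -0.5 * (t - 2.0) ** 2                  # up to a constant
log_imp = lambda t: -3.0 * np.log1p((t / 3.0) ** 2 / 5.0)   # scaled t(5), up to a constant
est, nse = is_posterior_mean(th, log_post, log_imp, lambda t: t)
```

Normalizing constants cancel in the ratio, which is why both log densities can be left unnormalized — a practical convenience central to Bayesian applications of the method.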


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a simple modification of a conventional method of moments estimator for a discrete response model, replacing response probabilities that require numerical integration with estimators obtained by Monte Carlo simulation.
Abstract: This paper proposes a simple modification of a conventional method of moments estimator for a discrete response model, replacing response probabilities that require numerical integration with estimators obtained by Monte Carlo simulation. This method of simulated moments (MSM) does not require precise estimates of these probabilities for consistency and asymptotic normality, relying instead on the law of large numbers operating across observations to control simulation error, and hence can use simulations of practical size. The method is useful for models such as high-dimensional multinomial probit (MNP), where computation has restricted applications.

1,621 citations


Posted Content
TL;DR: In this article, the authors show that the effects of data snooping can be substantial when applied to financial asset pricing models, where properties of the data are used to construct the test statistics.
Abstract: Tests of financial asset pricing models may yield misleading inferences when properties of the data are used to construct the test statistics. In particular, such tests are often based on returns to portfolios of common stock, where portfolios are constructed by sorting on some empirically motivated characteristic of the securities such as market value of equity. Analytical calculations, Monte Carlo simulations, and two empirical examples show that the effects of this type of data snooping can be substantial. Article published by Oxford University Press on behalf of the Society for Financial Studies in its journal, The Review of Financial Studies.

1,117 citations


Book
30 Oct 1989
TL;DR: A book-length treatment of the Monte Carlo method for charge transport in semiconductors, including a review of semiconductor devices and Monte Carlo simulation of the two-dimensional electron gas (2DEG) and of complete devices.
Abstract: 1 Introduction.- References.- 2 Charge Transport in Semiconductors.- 2.1 Electron Dynamics.- 2.2 Energy Bands.- 2.2.1 Relationship of Energy to Wavevector.- 2.2.2 Effective Masses.- 2.2.3 Nonparabolicity.- 2.2.4 Herring and Vogt Transformation.- 2.2.5 Actual Bands of Real Semiconductors.- 2.3 Scattering Mechanisms.- 2.3.1 Classification and Physical Discussion.- 2.3.2 Fundamentals of Scattering.- 2.4 Scattering Probabilities.- 2.4.1 Phonon Scattering, Deformation-Potential Interaction.- 2.4.2 Phonon Scattering, Electrostatic Interaction.- 2.4.3 Ionized Impurity Scattering.- 2.4.4 Carrier-Carrier Scattering.- 2.5 Transport Equation.- 2.6 Linear Response and the Relaxation Time Approximation.- 2.6.1 Relaxation Times for the Various Scattering Mechanisms.- 2.6.2 Carrier Mobilities in Various Materials.- 2.7 Diffusion, Noise, and Velocity Autocorrelation Function.- 2.7.1 Basic Macroscopic Equations of Diffusion.- 2.7.2 Diffusion, Autocorrelation Function, and Noise.- 2.7.3 Electron Lifetime and Diffusion Length.- 2.8 Hot Electrons.- 2.9 Transient Transport.- 2.10 The Two-dimensional Electron Gas.- 2.10.1 Subband Levels and Wavefunctions.- 2.10.2 Scattering Rates.- References.- 3 The Monte Carlo Simulation.- 3.1 Fundamentals.- 3.2 Definition of the Physical System.- 3.3 Initial Conditions.- 3.4 The Free Flight, Self Scattering.- 3.5 The Scattering Process.- 3.6 The Choice of the State After Scattering.- 3.6.1 Phonon Scattering, Deformation-Potential Interaction.- 3.6.2 Phonon Scattering, Electrostatic Interaction.- 3.6.3 Ionized Impurity Scattering.- 3.6.4 Carrier-Carrier Scattering.- 3.7 Collection of Results for Steady-State Phenomena.- 3.7.1 Time Averages.- 3.7.2 Synchronous Ensemble.- 3.7.3 Statistical Uncertainty.- 3.8 The Ensemble Monte Carlo (EMC).- 3.9 Many Particle Effects.- 3.9.1 Carrier-Carrier Scattering.- 3.9.2 Molecular Dynamics and Monte Carlo Method.- 3.9.3 Degeneracy in Monte Carlo Calculations.- 3.10 Monte Carlo Simulation of the 2DEG.- 3.11 
Special Topics.- 3.11.1 Periodic Fields.- 3.11.2 Diffusion, Autocorrelation Function, and Noise.- 3.11.3 Ohmic Mobility.- 3.11.4 Impact Ionization.- 3.11.5 Magnetic Fields.- 3.11.6 Optical Excitation.- 3.11.7 Quantum Mechanical Corrections.- 3.12 Variance-reducing Techniques.- 3.12.1 Variance Due to Thermal Fluctuations.- 3.12.2 Variance Due to Valley Repopulation.- 3.12.3 Variance Related to Improbable Electron States.- 3.13 Comparison with Other Techniques.- 3.13.1 Analytical Techniques.- 3.13.2 The Iterative Technique.- 3.13.3 Comparison of the Different Techniques.- References.- 4 Review of Semiconductor Devices.- 4.1 Introduction.- 4.2 Historical Evolution of Semiconductor Devices.- 4.2.1 Evolution of Si Devices.- 4.2.2 Evolution of GaAs Devices.- 4.2.3 Technological Features.- 4.2.4 Scaling and Miniaturization.- 4.3 Physical Basis of Semiconductor Devices.- 4.3.1 p-n Junction.- 4.3.2 Bipolar Transistors.- 4.3.3 Heterojunction Bipolar Transistor.- 4.3.4 Metal-Semiconductor Contacts.- 4.3.5 Metal-Semiconductor Field-Effect Transistor.- 4.3.6 Metal-Oxide-Semiconductor Field-Effect Transistor.- 4.3.7 High Electron Mobility Transistor.- 4.3.8 Hot Electron Transistors.- 4.3.9 Permeable Base Transistor.- 4.4 Comparison of Semiconductor Devices.- 4.4.1 Device Parameters.- 4.4.2 Comparison of Semiconductor Devices.- References.- 5 Monte Carlo Simulation of Semiconductor Devices.- 5.1 Introduction.- 5.2 Geometry of the System.- 5.2.1 Boundary Conditions.- 5.2.2 Grid Definition.- 5.2.3 Superparticles.- 5.3 Particle-Mesh Force Calculation.- 5.3.1 Particle-Mesh Calculation in One Dimension.- 5.3.2 Charge Assignment Schemes in Two Dimensions.- 5.4 Poisson Solver and Field Distribution.- 5.4.1 Finite Difference Scheme.- 5.4.2 Matrix Methods.- 5.4.3 Rapid Elliptic Solvers (RES).- 5.4.4 Iterative Methods.- 5.4.5 Calculation of the Electric Field.- 5.4.6 The Collocation Method.- 5.5 The Monte Carlo Simulation of Semiconductor Devices.- 5.5.1 Initial Conditions.- 5.5.2 Time 
Cycles.- 5.5.3 Free Flight.- 5.5.4 Scattering.- 5.5.5 Carrier-Carrier Scattering.- 5.5.6 Degenerate Statistics.- 5.5.7 Statistics.- 5.5.8 Static Characteristics.- 5.5.9 A.C. Characteristics.- 5.5.10 Noise.- References.- 6 Applications.- 6.1 Introduction.- 6.2 Diodes.- 6.2.1 n+-n-n+ Diodes.- 6.2.2 Schottky Diode.- 6.3 MESFET.- 6.3.1 Short Channel Effects.- 6.3.2 Geometry Effects.- 6.3.3 Space-Charge Injection FET.- 6.3.4 Conclusions.- 6.4 HEMT and Heterojunction Real Space Transfer Devices.- 6.4.1 HEMT.- 6.4.2 Real-Space Transfer Devices.- 6.4.3 Velocity-Modulation Field Effect Transistor.- 6.5 Bipolar Transistor.- 6.6 HBT.- 6.7 MOSFET and MISFET.- 6.7.1 MOSFET.- 6.7.2 GaAs Injection-modulated MISFET.- 6.7.3 Conclusions.- 6.8 Hot Electron Transistors.- 6.8.1 The THETA Device.- 6.8.2 GaAs FET with Hot-Electron Injection Structure.- 6.8.3 Planar-doped-Barrier Transistors.- 6.9 Permeable Base Transistor.- 6.10 Comparison with Traditional Simulators.- References.- Appendix A. Numerical Evaluation of Some Integrals of Interest.- References.- Appendix B. Generation of Random Numbers.- References.

1,056 citations


Journal ArticleDOI
TL;DR: Weighted likelihood estimation (WLE), as discussed by the authors, removes the first-order bias term from MLE and is proved to be less biased than MLE with the same asymptotic variance and normal distribution.
Abstract: Applications of item response theory, which depend upon its parameter invariance property, require that parameter estimates be unbiased. A new method, weighted likelihood estimation (WLE), is derived, and proved to be less biased than maximum likelihood estimation (MLE) with the same asymptotic variance and normal distribution. WLE removes the first order bias term from MLE. Two Monte Carlo studies compare WLE with MLE and Bayesian modal estimation (BME) of ability in conventional tests and tailored tests, assuming the item parameters are known constants. The Monte Carlo studies favor WLE over MLE and BME on several criteria over a wide range of the ability scale.

965 citations


Journal ArticleDOI
TL;DR: In Monte Carlo simulations, two two-stage designs are found to provide reduced bias in maximum likelihood estimation of the MTD in less than ideal dose-response settings and several designs to be nearly as conservative as the standard design in terms of the proportion of patients entered at higher dose levels.
Abstract: The Phase I clinical trial is a study intended to estimate the so-called maximum tolerable dose (MTD) of a new drug. Although there exists more or less a standard type of design for such trials, its development has been largely ad hoc. As usually implemented, the trial design has no intrinsic property that provides a generally satisfactory basis for estimation of the MTD. In this paper, the standard design and several simple alternatives are compared with regard to the conservativeness of the design and with regard to point and interval estimation of an MTD (33rd percentile) with small sample sizes. Using a Markov chain representation, we found several designs to be nearly as conservative as the standard design in terms of the proportion of patients entered at higher dose levels. In Monte Carlo simulations, two two-stage designs are found to provide reduced bias in maximum likelihood estimation of the MTD in less than ideal dose-response settings. Of the three methods considered for determining confidence intervals--the delta method, a method based on Fieller's theorem, and a likelihood ratio method--none was able to provide both usefully narrow intervals and coverage probabilities close to nominal.

816 citations


Proceedings ArticleDOI
TL;DR: This paper provides all the details necessary for implementation of a Monte Carlo program, and discusses variance reduction schemes that improve the efficiency of the Monte Carlo method.
Abstract: The Monte Carlo method is rapidly becoming the model of choice for simulating light transport in tissue. This paper provides all the details necessary for implementation of a Monte Carlo program. Variance reduction schemes that improve the efficiency of the Monte Carlo method are discussed. Analytic expressions facilitating convolution calculations for finite flat and Gaussian beams are included. Useful validation benchmarks are presented.
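A heavily simplified illustration of the kind of program the paper describes — an isotropically scattering semi-infinite medium, survival-weight absorption, and Russian-roulette termination. Real tissue codes add anisotropic phase functions, refractive-index mismatch, and geometry; all values here are illustrative:

```python
import numpy as np

def mc_reflectance(n_photons, mu_a, mu_s, rng):
    """Toy photon Monte Carlo: semi-infinite medium below z = 0, photons launched
    straight down, isotropic scattering, survival-weight absorption, Russian roulette.
    Returns the fraction of launched weight that escapes back (diffuse reflectance)."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    refl = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0
        while True:
            z += uz * (-np.log(1.0 - rng.random()) / mu_t)  # sample a free path length
            if z < 0.0:                                     # photon escapes the surface
                refl += w
                break
            w *= albedo                                     # deposit (1 - albedo) of the weight
            if w < 1e-3:                                    # Russian roulette: unbiased kill
                if rng.random() < 0.1:
                    w /= 0.1
                else:
                    break
            uz = 2.0 * rng.random() - 1.0                   # isotropic: cos(theta) ~ U(-1, 1)
    return refl / n_photons

R = mc_reflectance(2000, mu_a=1.0, mu_s=10.0, rng=np.random.default_rng(6))
```

The survival-weight and roulette devices are examples of the variance reduction schemes the paper discusses: they keep every launched photon contributing to the tally without biasing the estimate.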

Book
01 Jan 1989
TL;DR: Approximate Randomization Tests.
Abstract: Approximate Randomization Tests. Monte Carlo Sampling. Bootstrap Resampling. Conclusion. Appendices. References. Index.

Journal ArticleDOI
TL;DR: Computation of the theoretical moments arising in importance sampling is discussed and some numerical examples are given, with applications to a GI/G/1 queueing problem and response surface estimation.
Abstract: Importance sampling is one of the classical variance reduction techniques for increasing the efficiency of Monte Carlo algorithms for estimating integrals. The basic idea is to replace the original...

Journal ArticleDOI
TL;DR: A new technique is described for investigating phase transitions and dynamics in interacting electron systems based on the derivation and self-consistent solution of infinite-order conserving approximations that provides a new approach to the study of two-particle correlations with strong frequency and momentum dependence.
Abstract: A semianalytical approach is described for strongly correlated electronic systems which satisfies microscopic conservation laws, treats strong frequency and momentum dependences, and provides information on both static and dynamic properties. This approach may be used to treat large systems and temperatures lower than those currently accessible to finite-temperature quantum Monte Carlo techniques. Examples of such systems include heavy-electron compounds, organic Bechgaard salts, bis-(ethylenedithiolo)-TTF superconductors, and the oxide superconductors. The technique is based on the derivation and self-consistent solution of infinite-order conserving approximations. The technique is used to derive a low-temperature phase diagram and dynamic correlation functions for the two-dimensional Hubbard lattice model.

Journal ArticleDOI
TL;DR: Importance sampling as a technique to improve the Monte Carlo method for probability integration can be shown to be extremely efficient and versatile.

Journal ArticleDOI
TL;DR: Calculations of fluence-depth distributions, effective penetration depths and diffuse reflectance from two models of radiative transfer, diffusion theory, and Monte Carlo simulation are compared for a semi-infinite medium.
Abstract: Using optical interaction coefficients typical of mammalian soft tissues in the red and near infrared regions of the spectrum, calculations of fluence-depth distributions, effective penetration depths and diffuse reflectance from two models of radiative transfer, diffusion theory, and Monte Carlo simulation are compared for a semi-infinite medium. The predictions from diffusion theory are shown to be increasingly inaccurate as the albedo tends to zero and/or the average cosine of scatter tends to unity.

Journal ArticleDOI
TL;DR: In this paper, a new method for the solution of problems involving material variability is proposed, which makes use of the Karhunen-Loeve expansion to represent the random material property.
Abstract: A new method for the solution of problems involving material variability is proposed. The material property is modeled as a stochastic process. The method makes use of the Karhunen‐Loeve expansion to represent the random material property. The expansion is a representation of the process in terms of a finite set of uncorrelated random variables. The resulting formulation is compatible with the finite element method. A Neumann expansion scheme is subsequently employed to obtain a convergent expansion of the response process. The response is thus obtained as a homogeneous multivariate polynomial in the uncorrelated random variables. From this representation various statistical quantities may be derived. The usefulness of the proposed method, in terms of accuracy and efficiency, is exemplified by considering a cantilever beam with random rigidity. The derived results pertaining to the second‐order statistics of the response are found in good agreement with those obtained by a Monte Carlo simulation solution ...
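A discrete sketch of the Karhunen-Loeve step described above: eigendecomposition of an assumed exponential covariance, truncation, and synthesis of one realization from uncorrelated standard normals (grid size, correlation length, and truncation order are illustrative choices):

```python
import numpy as np

# Discrete Karhunen-Loeve sketch for a 1-D random material property with an
# assumed exponential covariance C(x, x') = sigma^2 * exp(-|x - x'| / ell) on [0, 1].
n, sigma, ell = 200, 1.0, 0.3
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)
vals, vecs = np.linalg.eigh(C)                  # eigenpairs of the covariance matrix
vals, vecs = vals[::-1], vecs[:, ::-1]          # order by decreasing eigenvalue
m = 10                                          # truncation order of the expansion
rng = np.random.default_rng(4)
xi = rng.normal(size=m)                         # uncorrelated standard normal variables
field = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)  # one realization of the truncated field
captured = vals[:m].sum() / vals.sum()          # fraction of total variance retained
```

The rapid eigenvalue decay is what makes the method efficient: a handful of uncorrelated random variables represents the process well, which is the representation the Neumann-expansion response calculation then builds on.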

Journal ArticleDOI
TL;DR: The usefulness of the Monte Carlo code for the accurate simulation of important parameters in scintillation camera systems, stationary as well as SPECT (single-photon emission computed tomography) systems, has been demonstrated.

ReportDOI
TL;DR: In this article, the authors examined the finite-sample properties of the variance ratio test of the random walk hypothesis via Monte Carlo simulations under two null and three alternative hypotheses, and compared the performance of the Dickey-Fuller t and the Box-Pierce Q statistics.

Journal ArticleDOI
TL;DR: In this paper, the authors show that the use of larger numbers of randomly oriented lines (100) can enhance the performance of the three-dimensional turning bands method, and use of a large number of lines will also reduce the presence of a distortion effect manifested as linelike patterns in the field.
Abstract: Numerical techniques to generate replicates of spatially correlated random fields are often used to synthesize sets of highly variable physical quantities in stochastic models of naturally heterogeneous systems. Within the realm of hydrologic research, for example, such tools are widely used to develop hypothetical rainfall distributions, hydraulic conductivity fields, fracture set properties, and other surface or subsurface flow parameters. The turning bands method is one such algorithm which generates two- and three-dimensional fields by combining values found from a series of one-dimensional simulations along lines radiating outward from a coordinate origin. Previous work with two-dimensional algorithms indicates that radial lines evenly spaced about the unit circle lead to enhanced convergence properties. The same can be said for the three-dimensional models, but it is more difficult to choose an arbitrary number of evenly spaced lines about the unit sphere. The current investigation shows that the use of larger numbers of randomly oriented lines (100) can enhance the performance of the three-dimensional algorithm. This improved performance is needed to effectively simulate problems characterized by full three dimensionality and/or anisotropy in either Monte Carlo or single-realization applications. Use of a large number of lines will also reduce the presence of a distortion effect manifested as linelike patterns in the field. Increased computational costs can be reduced by employing a fast Fourier transform technique to generate the line processes.


Journal ArticleDOI
TL;DR: Finite‐width light distributions in arterial tissue during Argon laser irradiation are simulated using the Monte Carlo method and the diverging light from a fiber penetrates tissue in a manner similar to collimated light.
Abstract: Finite-width light distributions in arterial tissue during argon laser irradiation (476 nm) are simulated using the Monte Carlo method. Edge effects caused by radial diffusion of the light extend +/- 1.5 mm inward from the perimeter of a uniform incident beam. For beam diameters exceeding 3 mm, the light distribution along the central axis can be described by the one-dimensional solution for an infinitely wide beam. The overlapping edge effects for beam diameters smaller than 3 mm reduce the penetration of the irradiance in the tissue. The beam profile influences the light distribution significantly. The fluence rates near the surface for a Gaussian beam are two times higher on the central axis and decrease faster radially than for a flat profile. The diverging light from a fiber penetrates tissue in a manner similar to collimated light.

Journal ArticleDOI
TL;DR: The parallel analysis method for determining the number of components to retain in a principal components analysis has received a recent resurgence of support and interest, but researchers and practitioners desiring to use this criterion have been hampered by the Monte Carlo analyses needed to develop the criteria.
Abstract: The parallel analysis method for determining the number of components to retain in a principal components analysis has received a recent resurgence of support and interest. However, researchers and practitioners desiring to use this criterion have been hampered by the Monte Carlo analyses needed to develop the criteria. Two recent attempts at presenting regression estimation methods to determine eigenvalues were found to be deficient in several respects, and less accurate in general, than a simple linear interpolation of tabled random data eigenvalues generated through Monte Carlo simulation. Other methods for determining the parallel analysis criteria are discussed.

Journal ArticleDOI
TL;DR: In this article, simple Monte Carlo significance testing has many applications, particularly in the preliminary analysis of spatial data, where the value of the test statistic is ranked among a random sample of values generated according to the null hypothesis.
Abstract: SUMMARY Simple Monte Carlo significance testing has many applications, particularly in the preliminary analysis of spatial data. The method requires the value of the test statistic to be ranked among a random sample of values generated according to the null hypothesis. However, there are situations in which a sample of values can only be conveniently generated using a Markov chain, initiated by the observed data, so that independence is violated. This paper describes two methods that overcome the problem of dependence and allow exact tests to be carried out. The methods are applied to the Rasch model, to the finite lattice Ising model and to the testing of association between spatial processes. Power is discussed in a simple case.
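The simple (independent-sample) Monte Carlo test that the paper generalizes can be sketched in a few lines; the add-one rank convention makes the test exact under the null. The test statistic and sample sizes below are illustrative:

```python
import numpy as np

def mc_pvalue(t_obs, simulate_null, n_sim, rng):
    """Rank the observed statistic among n_sim independent draws simulated under
    the null; the add-one convention gives an exact level-k/(n_sim+1) test."""
    t_sim = np.array([simulate_null(rng) for _ in range(n_sim)])
    return (1 + np.sum(t_sim >= t_obs)) / (n_sim + 1)

rng = np.random.default_rng(5)
data = rng.normal(loc=1.0, size=50)     # illustrative sample; the null says mean 0
t_obs = abs(data.mean())
p = mc_pvalue(t_obs, lambda r: abs(r.normal(size=50).mean()), 999, rng)
```

The paper's contribution addresses the case where such independent null draws are unavailable and the simulated values must instead come from a Markov chain started at the observed data, which violates the independence this simple version relies on.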

Journal ArticleDOI
TL;DR: An iterative maximum-likelihood method for simultaneously estimating directions of arrival (DOA) and sensor locations is developed and a distinctive feature of the algorithm is its ability to locate the sensors accurately without deploying calibration sources at known locations.
Abstract: Sensor location uncertainty can severely degrade the performance of direction-finding systems. An iterative maximum-likelihood method for simultaneously estimating directions of arrival (DOA) and sensor locations is developed to alleviate this problem. The case of nondisjoint sources, i.e., sources observed in the same frequency cell and at the same time, is emphasized. The algorithm converges to the global maximum of the likelihood function if the initial conditions are sufficiently good. Numerical examples are presented, illustrating the performance of the proposed technique. A distinctive feature of the algorithm is its ability to locate the sensors accurately without deploying calibration sources at known locations.

Journal ArticleDOI
TL;DR: In this article, Ashworth's rapid lifetime determination method (RLD) was compared to the weighted linear least-squares (WLLS) method for a single exponential decay.
Abstract: The precision and speed of Ashworth's rapid lifetime determination method (RLD) for a single exponential decay is evaluated. The RLD is compared to the weighted linear least-squares (WLLS) method. Results are presented as a function of integration range and signal noise level. For both the lifetime and the preexponential factor, optimum fitting regions exist, yet the errors increase rather slowly on either side of the optimum. The optimum conditions for determination of the preexponential factor and the lifetime are similar, so both can be determined with good precision even at low total counts (10^4). In the optimum region, the relative standard deviations for the RLD are only 30-40% worse than for WLLS, but the calculations are tens to hundreds of times faster, depending on how the data are taken. The speed and precision of the RLD coupled with the ease of data acquisition make it an attractive data reduction tool for real-time analyses.
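The RLD estimator itself is two closed-form expressions over a pair of adjacent integration windows. A noise-free sketch (window width, amplitude, and lifetime are arbitrary illustrative values):

```python
import numpy as np

# RLD sketch for y(t) = A * exp(-t / tau): integrate the decay over two equal
# adjacent windows, D0 over [0, dt] and D1 over [dt, 2*dt]; then
#   tau = dt / ln(D0 / D1)   and   A = D0 / (tau * (1 - exp(-dt / tau)))
A_true, tau_true, dt = 1000.0, 2.0, 1.5
t = np.linspace(0.0, 2.0 * dt, 2001)
y = A_true * np.exp(-t / tau_true)
half = len(t) // 2
h = t[1] - t[0]
D0 = h * (y[: half + 1][:-1] + y[: half + 1][1:]).sum() / 2.0  # trapezoid over [0, dt]
D1 = h * (y[half:][:-1] + y[half:][1:]).sum() / 2.0            # trapezoid over [dt, 2*dt]
tau_est = dt / np.log(D0 / D1)
A_est = D0 / (tau_est * (1.0 - np.exp(-dt / tau_est)))
```

With no fitting loop at all, the speed advantage over iterative least squares is clear; the paper's Monte Carlo evaluation quantifies how much precision that shortcut costs under noise.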

Journal ArticleDOI
TL;DR: In this paper, a new generalized Flory equation of state for fluids containing athermal chain molecules is developed and compared to simulation results and existing theories in three, two, and one dimensions.
Abstract: A new equation of state for fluids containing athermal chain molecules is developed and compared to simulation results and existing theories in three, two, and one dimensions. The new expression, which builds upon the generalized Flory theory, contains no adjustable parameters and relates the compressibility factor of an n-mer fluid to the compressibility factors of monomer and dimer fluids at the same volume fraction. Comparisons with Monte Carlo results for three- and two-dimensional freely jointed chains show very good agreement, and the overall accuracy of the new expression appears comparable to Wertheim's thermodynamic perturbation theory of polymerization. In one dimension the new expression reduces to the exact result. Application of the equation to chain models with internal constraints and overlapping hard sites is discussed and illustrated through comparisons with Monte Carlo results for rigid trimers. An extension of our approach to arbitrary reference fluids shows that the generalized Flory and new equations are the first two members of a family of increasingly accurate equations of state for chains.

Journal ArticleDOI
TL;DR: In this paper, the authors apply stochastic methods to the analysis and prediction of solute transport in heterogeneous saturated porous media and derive partial differential equations for three unconditional ensemble moments (the concentration mean, concentration covariance, and velocity concentration cross covariance) for a conservative solute.
Abstract: This paper applies stochastic methods to the analysis and prediction of solute transport in heterogeneous saturated porous media. Partial differential equations for three unconditional ensemble moments (the concentration mean, concentration covariance, and velocity concentration cross covariance) are derived by applying perturbation techniques to the governing transport equation for a conservative solute. Concentration uncertainty is assumed to be the result of unmodeled small-scale fluctuations in a steady state velocity field. The moment expressions, which describe how each moment evolves over time and space, resemble the classic deterministic advection-dispersion equation and can be solved using similar methods. A solution procedure based on a Galerkin finite element algorithm is illustrated with a hypothetical two-dimensional example. For this example the required steady state velocity statistics are obtained from an infinite domain spectral solution of the stochastic groundwater flow equation. The perturbation solution is shown to reproduce the statistics obtained from a Monte Carlo simulation quite well for a natural log conductivity standard deviation of 0.5 and moderately well for a natural log conductivity standard deviation of 1.0. The computational effort required for a perturbation solution is significantly less than that required for a Monte Carlo solution of acceptable accuracy. Sensitivity analyses conducted with the perturbation approach provide qualitative confirmation of a number of results obtained by other investigators for more restrictive special cases.

Journal ArticleDOI
TL;DR: A regression equation is presented for predicting parallel analysis values used to decide the number of principal components to retain and is appropriate for predicting criterion mean eigenvalues and was derived from random data sets containing between 5 and 50 variables and between 50 and 500 subjects.
Abstract: Monte Carlo research increasingly seems to favor the use of parallel analysis as a method for determining the "correct" number of factors in factor analysis or components in principal components analysis. We present a regression equation for predicting parallel analysis values used to decide the number of principal components to retain. This equation is appropriate for predicting criterion mean eigenvalues and was derived from random data sets containing between 5 and 50 variables and between 50 and 500 subjects. This relatively simple equation is more accurate for predicting mean eigenvalues from random data matrices with unities in the diagonals than a previously published equation. Moreover, given that the parallel analysis decision rule may be too dependent on chance, our equation is also used to predict the 95th percentile point in distributions of eigenvalues generated from random data matrices. Multiple correlations for all analyses were at least .95. Regression weights for predicting the first 33 mean and 95th percentile eigenvalues are given in easy-to-use tables.