
Showing papers on "Monte Carlo method" published in 1968


Journal ArticleDOI
TL;DR: In this article, the authors made a Monte Carlo determination of the pressure and absolute entropy of the hard-sphere solid, and used these solid-phase thermodynamic properties, coupled with known fluid-phase data, to confirm the existence of a first-order melting transition for a classical many-body system of hard spheres and to discover the densities of the coexisting phases.
Abstract: In order to confirm the existence of a first‐order melting transition for a classical many‐body system of hard spheres and to discover the densities of the coexisting phases, we have made a Monte Carlo determination of the pressure and absolute entropy of the hard‐sphere solid. We use these solid‐phase thermodynamic properties, coupled with known fluid‐phase data, to show that the hard‐sphere solid, at a density of 0.74 relative to close packing, and the hard‐sphere fluid, at a density of 0.67 relative to close packing, satisfy the thermodynamic equilibrium conditions of equal pressure and chemical potential at constant temperature. To get the solid‐phase entropy, we integrated the Monte Carlo pressure–volume equation of state for a “single‐occupancy” system in which the center of each hard sphere was constrained to occupy its own private cell. Such a system is no different from the ordinary solid at high density, but at low density its entropy and pressure are both lower. The difference in entropy between an unconstrained system of particles and a constrained one, with one particle per cell, is the so‐called “communal entropy,” the determination of which has been a fundamental problem in the theory of liquids. Our Monte Carlo measurements show that communal entropy is nearly a linear function of density.

1,167 citations
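
To make the single-occupancy construction concrete, here is a minimal sketch (not the authors' code) of a hard-sphere Metropolis step in which a trial move is rejected if it creates an overlap or, in the constrained system, carries a sphere outside its own cell. The box size, cell grid, sphere diameter, and step size below are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): a small periodic box with each
# sphere assigned its own cubic "single-occupancy" cell.
n_side = 3                                   # cells per box edge
n      = n_side ** 3                         # number of spheres
box    = 1.0                                 # box edge length
cell   = box / n_side                        # edge of each private cell
sigma  = 0.8 * cell                          # hard-sphere diameter (fits in a cell)

idx     = np.indices((n_side,) * 3).reshape(3, -1).T
pos     = (idx + 0.5) * cell                 # start every sphere at its cell centre
cell_lo = idx * cell                         # lower corner of each sphere's cell

def overlaps(i, trial):
    """True if sphere i placed at 'trial' overlaps any other sphere (periodic box)."""
    d = pos - trial
    d -= box * np.rint(d / box)              # minimum-image convention
    r2 = np.einsum('ij,ij->i', d, d)
    r2[i] = np.inf                           # ignore self
    return np.any(r2 < sigma ** 2)

def sweep(step, constrained=True):
    """One attempted Metropolis move per sphere; returns the acceptance fraction."""
    accepted = 0
    for i in range(n):
        trial = (pos[i] + rng.uniform(-step, step, 3)) % box
        if constrained and not np.all((trial >= cell_lo[i]) & (trial < cell_lo[i] + cell)):
            continue                         # single-occupancy constraint violated
        if not overlaps(i, trial):
            pos[i] = trial
            accepted += 1
    return accepted / n

for _ in range(200):
    acc = sweep(step=0.1 * cell)
print("acceptance fraction in final sweep:", acc)
```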


Journal ArticleDOI
TL;DR: In this paper, it was shown that for given α = 1/n, n a positive integer, the power of the Monte Carlo test procedure is a monotone increasing function of the size of the reference set.
Abstract: The use of Monte Carlo test procedures for significance testing, with smaller reference sets than are now generally used, is advocated. It is shown that, for given α = 1/n, n a positive integer, the power of the Monte Carlo test procedure is a monotone increasing function of the size of the reference set, the limit of which is the power of the corresponding uniformly most powerful test. The power functions and the efficiency of the Monte Carlo test relative to the uniformly most powerful test are discussed in detail for the case where the test criterion is N(y, 1). The cases where the test criterion is Student's t-statistic and where the test statistic is exponentially distributed are also considered.

881 citations
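
As a concrete illustration of the procedure being advocated, the sketch below (a toy of my own, not the paper's computation) runs a Monte Carlo test at level α = 1/n by simulating n − 1 reference values under the null and rejecting when the observed criterion exceeds them all, then compares its estimated power with that of the uniformly most powerful test for a normally distributed, unit-variance criterion. The shift θ = 2, reference-set size, and repetition count are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def monte_carlo_test(observed, simulate_null, n=20):
    """One-sided Monte Carlo test at level alpha = 1/n: reject H0 when the
    observed statistic exceeds all n - 1 values simulated under H0."""
    return observed > simulate_null(n - 1).max()

# Toy worked case: criterion distributed N(theta, 1),
# H0: theta = 0 against H1: theta > 0; estimate power at theta = 2.
theta, n, reps = 2.0, 20, 20_000
rejections = sum(
    monte_carlo_test(rng.normal(theta, 1.0),
                     lambda m: rng.normal(0.0, 1.0, m), n=n)
    for _ in range(reps)
)
print(f"Monte Carlo test power at alpha = 1/{n}: {rejections / reps:.3f}")
# Uniformly most powerful test at the same level, for comparison.
print(f"UMP test power: {1.0 - norm.cdf(norm.ppf(1.0 - 1.0 / n) - theta):.3f}")
```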


Journal ArticleDOI
TL;DR: The paper gives details of the degree of regularity of congruential random number generators in terms of sets of relatively few parallel hyperplanes which contain all of the points produced by the generator.
Abstract: Most of the world's computer centers use congruential random number generators. This note points out that such random number generators produce points in 2,3,4,... dimensions which are too regular for many Monte Carlo calculations. The trouble is that the points fall exactly on a lattice with quite a gross structure. The paper gives details of the degree of regularity of such generators in terms of sets of relatively few parallel hyperplanes which contain all of the points produced by the generator.

492 citations
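
A quick way to see the lattice structure described here is to check a known identity for one offending generator. The sketch below uses the RANDU-style constants (a standard textbook example, not something taken from this paper) and verifies that every consecutive triple from the multiplier-65539, modulus-2^31 generator satisfies x_{k+2} − 6·x_{k+1} + 9·x_k ≡ 0 (mod 2^31), so the 3-D points (x_k, x_{k+1}, x_{k+2}) fall on a handful of parallel planes rather than filling the unit cube.

```python
import numpy as np

M, A = 2 ** 31, 65539          # RANDU-style multiplicative congruential generator

def lcg(seed, count):
    x, out = seed, []
    for _ in range(count):
        x = (A * x) % M
        out.append(x)
    return np.array(out, dtype=np.int64)

x = lcg(seed=1, count=10_000)

# Because 65539**2 = 6*65539 - 9 (mod 2**31), consecutive triples obey
# x_{k+2} - 6*x_{k+1} + 9*x_k = 0 (mod 2**31).
combo = x[2:] - 6 * x[1:-1] + 9 * x[:-2]
print("triples violating the plane relation:", np.count_nonzero(combo % M))  # 0
print("distinct parallel planes occupied   :", np.unique(combo // M).size)   # ~15
```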


Journal ArticleDOI
TL;DR: In this article, a Monte Carlo study of truncated means as estimates of location is presented, where it is shown that, in every case but the Gaussian, some truncated mean has smaller sampling dispersion than the full mean.
Abstract: This paper takes a few steps toward alleviating problems of data analysis that arise from the fact that elementary expressions for density and cumulative distribution functions (c.d.f.'s) for most stable distributions are unknown. In section 2 results of Bergstrom [3] are used to develop numerical approximations for the c.d.f.'s and the inverse functions of the c.d.f.'s of symmetric stable distributions. Tables of the c.d.f.'s and their inverse functions are presented for twelve values of the characteristic exponent. In section 3 the usefulness of the numerical c.d.f.'s and their inverse functions in estimating the parameters of stable distributions and testing linear models involving stable variables is discussed. Finally, section 4 presents a Monte Carlo study of truncated means as estimates of location. In every case but the Gaussian, some truncated mean is shown to have smaller sampling dispersion than the full mean.

443 citations
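
In the spirit of the Section 4 study, the following sketch (my own toy comparison, with arbitrary sample sizes and a 25% trimming fraction) contrasts the sampling dispersion of the full mean with that of a trimmed (truncated) mean for a heavy-tailed stable law, here the Cauchy case, and for the Gaussian case; dispersion is measured by the interquartile range, since the Cauchy mean has no variance.

```python
import numpy as np

rng = np.random.default_rng(2)

def trimmed_mean(x, trim):
    """Mean after discarding the 'trim' fraction of smallest and of largest values."""
    x = np.sort(x)
    k = int(trim * len(x))
    return x[k:len(x) - k].mean() if k else x.mean()

# Monte Carlo comparison of sampling dispersion: Cauchy (stable, alpha = 1)
# versus Gaussian (stable, alpha = 2).
n, reps = 100, 2_000
for name, sampler in [("Cauchy", rng.standard_cauchy), ("Gaussian", rng.standard_normal)]:
    full, trimmed = [], []
    for _ in range(reps):
        sample = sampler(n)
        full.append(sample.mean())
        trimmed.append(trimmed_mean(sample, trim=0.25))
    print(f"{name:8s}  IQR of full mean = {np.subtract(*np.percentile(full, [75, 25])):7.3f}"
          f"   IQR of 25%-trimmed mean = {np.subtract(*np.percentile(trimmed, [75, 25])):7.3f}")
```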


Journal ArticleDOI
TL;DR: In this article, the absolute differential efficiency of liquid organic scintillators was determined for nearly monoenergetic neutrons at 20 energies between 0.2 and 22 MeV incident on the curved side of the detector.

314 citations


Journal ArticleDOI
TL;DR: The model dependence of the Monte Carlo simulation of intranuclear cascades generated by nucleons up to ∼380 MeV incident on complex nuclei has been investigated in this article. The specific effects investigated are those attendant upon the introduction of refraction of cascade particles passing through regions of varying potential energy, and upon the change of the nuclear density distribution from a uniform-density sphere to one with a diffuse surface similar to that consistent with electron-scattering experiments.
Abstract: The model dependence of the Monte Carlo simulation of intranuclear cascades generated by nucleons up to ∼380 MeV incident on complex nuclei has been investigated. Differences in the details of the Monte Carlo procedure between this work and previous intranuclear-cascade calculations are discussed. The specific effects that were investigated are those attendant upon the introduction of refraction of cascade particles when going through regions of varying potential energy, and upon the change in the nuclear density distribution from that of a uniform-density sphere to one with a diffuse surface similar to that consistent with electron-scattering experiments. Among the calculated quantities discussed are reaction cross sections, excitation energies of cascade products, spallation cross sections, energy and angular distributions of emitted particles, and linear and angular momentum transfers. The introduction of the diffuse-surface-density distribution improves agreement with available experimental data. At incident energies below ∼200 MeV and for medium and heavy nuclei, best agreement with experimental data is obtained when refraction and reflection are neglected. Possible reasons for this result are discussed.

218 citations


Journal ArticleDOI
TL;DR: In this article, low-frequency component electric microfield distributions in a plasma are calculated at both a neutral and a charged point, and a detailed analysis of all approximations is included, together with a Monte Carlo study.
Abstract: Low-frequency component electric microfield distributions in a plasma are calculated at both a neutral and a charged point. It is shown that this calculation allows for the inclusion of all correlations to a high degree of accuracy. The theory is compared with the Holtsmark and Baranger-Mozer theories. A detailed analysis of all approximations is included, together with a Monte Carlo study. Numerical results are shown both graphically and in tabulated form.

207 citations


Journal ArticleDOI
TL;DR: The numerous small-angle scatterings of the photon in the direction of the incident beam are followed accurately and produce a greater penetration into the cloud than is obtained with a more isotropic and less realistic phase function.
Abstract: Visible-light scattering by clouds is calculated with a Monte Carlo code that follows photons through multiply scattered paths, using Mie theory.

175 citations
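
The penetration effect described here is easy to reproduce with a stripped-down photon random walk. The sketch below assumes a plane-parallel, purely scattering slab and stands in for the Mie forward peak with a Henyey-Greenstein phase function of asymmetry g = 0.85 (all of these are my simplifications, not the paper's cloud models); the forward-peaked case transmits noticeably more photons through the slab than the isotropic case.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_hg(g):
    """Scattering-angle cosine drawn from the Henyey-Greenstein phase function."""
    if g == 0.0:
        return rng.uniform(-1.0, 1.0)                     # isotropic scattering
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def transmitted_fraction(g, tau_star=5.0, n_photons=5_000, max_scatter=500):
    """Fraction of photons emerging from the far side of a purely scattering,
    plane-parallel slab of optical thickness tau_star."""
    transmitted = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                                # enter at the top, heading in
        for _ in range(max_scatter):
            tau += mu * rng.exponential(1.0)              # free path in optical depth
            if tau >= tau_star:
                transmitted += 1
                break
            if tau < 0.0:                                 # escaped back out of the top
                break
            cos_t, phi = sample_hg(g), rng.uniform(0.0, 2.0 * np.pi)
            sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
            mu = mu * cos_t + np.sqrt(max(0.0, 1.0 - mu * mu)) * sin_t * np.cos(phi)
    return transmitted / n_photons

print("isotropic phase function  :", transmitted_fraction(g=0.0))
print("forward-peaked (g = 0.85) :", transmitted_fraction(g=0.85))
```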



Journal ArticleDOI
TL;DR: In this article, the Percus-Yevick, hypernetted chain, and pressure-consistent integral equations have been solved, using numerical Hankel transforms, for a fluid of two-dimensional hard cores.
Abstract: The Percus–Yevick, hypernetted‐chain, and “pressure‐consistent” integral equations have been solved, using numerical Hankel transforms, for a fluid of two‐dimensional hard cores. The thermodynamic quantities obtained from these solutions are presented and compared among themselves and with the results of other theories; a comparison of computed pair distribution functions g with a Monte Carlo g is also presented. The Percus–Yevick equation is found to give the best over‐all results.

137 citations


Journal ArticleDOI
TL;DR: In this article, an unambiguous method of specification of the critical value of the reaction coordinate was proposed, and an improved way of treating rotational state densities, closely related to that of the Rice-Ramsperger-Kassel-Marcus theory was used.
Abstract: Previously obtained Monte Carlo rate constants for unimolecular decomposition of model molecules [J. Chem. Phys. 40, 1946 (1964)] are compared with the predictions of a modified version of the Rice‐Ramsperger‐Kassel‐Marcus theory. The principal modification is an unambiguous method of specification of the critical value of the reaction coordinate. Anharmonicity corrections are accurately calculated, and an improved way of treating rotational state densities, closely related to that of Marcus [J. Chem. Phys. 43, 2658 (1965)], is used. The agreement between theory and Monte Carlo results is drastically improved; remaining deviations are about ± 50% for bent molecules and undetectable (within 20%) for linear ones.

Journal ArticleDOI
TL;DR: In this paper, non-self-intersecting walks on the simple cubic and face-centered cubic lattices are used as a model for the linear polymer chain with excluded volume and nearest-neighbor interactions between the chain elements.
Abstract: Non‐self‐intersecting walks on the simple cubic and face‐centered cubic lattices are used as a model for the linear polymer chain with excluded volume and nearest‐neighbor interactions between the chain elements. The statistical properties of this model are investigated using the modified Monte Carlo technique for inversely restricted sampling. The following properties are investigated: the limiting distribution function of chain dimensions, the dependence of mean square length of the chain on the number of chain elements, and the thermodynamic properties of the chain. The results of these investigations are presented by a set of parametric representations. Each of these representations includes a parameter which is descriptive of long‐range interactions in the polymer chain. These parameters are investigated for their dependence on the nearest‐neighbor interaction parameter. A particular value of the nearest‐neighbor interaction parameter is found at which the long‐range interaction parameters reduce to the values they would attain were the chain simulated by an equivalent Markovian chain model. Thus, the conditions are found which uniquely define Flory's theta point of the single chain. It is also found that an infinitely long single chain undergoes a phase transition, associated with abrupt changes in the thermodynamic properties of the chain, at a critical range of the nearest‐neighbor interaction parameter.
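
The "modified Monte Carlo technique for inversely restricted sampling" is what is now usually called Rosenbluth sampling. A minimal sketch of the athermal case (excluded volume only, with the paper's nearest-neighbor interaction energy omitted for brevity) on the simple cubic lattice is given below; the chain length and sample count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

NEIGHBORS = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]  # simple cubic

def rosenbluth_walk(n_steps):
    """Grow one self-avoiding walk by inversely restricted (Rosenbluth) sampling.
    Returns (weight, squared end-to-end distance); weight 0 means the walk trapped."""
    pos = (0, 0, 0)
    visited = {pos}
    weight = 1.0
    for _ in range(n_steps):
        free = [tuple(p + d for p, d in zip(pos, step)) for step in NEIGHBORS]
        free = [q for q in free if q not in visited]
        if not free:
            return 0.0, 0.0                    # trapped: dead-end configuration
        weight *= len(free)                    # Rosenbluth weight correction
        pos = free[rng.integers(len(free))]
        visited.add(pos)
    return weight, float(sum(c * c for c in pos))

# The weighted average of R^2 over many sampled chains estimates <R^2> for the
# excluded-volume chain; the chain length here is illustrative.
n_steps, n_chains = 40, 5_000
w, r2 = np.array([rosenbluth_walk(n_steps) for _ in range(n_chains)]).T
print("estimated <R^2> for N =", n_steps, ":", (w * r2).sum() / w.sum())
print("ideal random-walk value for comparison:", n_steps)
```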

Journal ArticleDOI
TL;DR: The first section of this paper is a mathematical construction of a certain Monte Carlo procedure for sampling from the distribution by defining a particular random variable.
Abstract: The first section of this paper is a mathematical construction of a certain Monte Carlo procedure for sampling from the distribution. The construction begins by defining a particular random variable ...


Journal ArticleDOI
TL;DR: The proper justification of Monte Carlo integration must be based not on the randomness of the procedure, which is spurious, but on equidistribution properties of the sets of points at which the integrand values are computed as discussed by the authors.
Abstract: The proper justification of the normal practice of Monte Carlo integration must be based not on the randomness of the procedure, which is spurious, but on equidistribution properties of the sets of points at which the integrand values are computed. Besides the discrepancy, which it is proposed to call henceforth extreme discrepancy, another concept, that of mean square discrepancy, can be regarded as a measure of the lack of equidistribution of a sequence of points in a multidimensional cube. Determinate upper bounds can be obtained, in terms of either discrepancy, for the absolute value of the error in the computation of the integral. There exist sequences of points yielding, for sufficiently smooth functions, errors of a much smaller order of magnitude than that which is claimed by the Monte Carlo method. In the case of two dimensions, sequences with optimum properties can be generated with the help of Fibonacci numbers. The previous arguments do not apply to domains of integration which cannot be reduced to multidimensional intervals. Difficult questions arising in this connection still await an answer.
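
As a small illustration of the two-dimensional case mentioned at the end, the sketch below (my own example function and point counts, not the author's) integrates a smooth periodic function over the unit square using a Fibonacci lattice and compares the error with plain pseudorandom Monte Carlo at the same number of points.

```python
import numpy as np
from scipy.special import i0

def fibonacci_lattice(m):
    """F_m points (k/F_m, {k*F_{m-1}/F_m}) in the unit square, k = 0..F_m-1."""
    fib = [1, 1]
    while len(fib) <= m:
        fib.append(fib[-1] + fib[-2])
    Fm, Fm1 = fib[m], fib[m - 1]
    k = np.arange(Fm)
    return np.column_stack((k / Fm, (k * Fm1 / Fm) % 1.0)), Fm

def f(x, y):
    """Smooth, periodic test integrand; its exact integral over [0,1]^2 is I0(1)**2."""
    return np.exp(np.sin(2 * np.pi * x) + np.cos(2 * np.pi * y))

exact = i0(1.0) ** 2
pts, n = fibonacci_lattice(16)                     # 1597 lattice points
quasi = f(pts[:, 0], pts[:, 1]).mean()

rng = np.random.default_rng(5)
xr, yr = rng.random(n), rng.random(n)              # same number of pseudorandom points
plain = f(xr, yr).mean()

print(f"n = {n}")
print(f"Fibonacci-lattice error : {abs(quasi - exact):.2e}")
print(f"plain Monte Carlo error : {abs(plain - exact):.2e}")
```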


Journal ArticleDOI
TL;DR: In this paper, the electron distribution function in the (000 and 100) valleys of gallium arsenide and the resulting velocity-field relationship have been calculated using a Monte Carlo method.

Journal ArticleDOI
TL;DR: In this paper, the authors explored the effect of the level of detail in the description of the radiation properties of surfaces and showed that under some conditions the choice of the model for radiation surface characteristics can be very critical for both the local radiant heat flux and for overall radiant interchange calculations.

Journal ArticleDOI
TL;DR: In this paper, a variational calculation of the curve of ground-state energy versus density for solid helium-3 and solid helium-4 at absolute zero is presented, using a two-parameter "localized" Jastrow × Gaussian trial wave function.
Abstract: A variational calculation of the curve of ground-state energy versus density for solid helium-3 and solid helium-4 at absolute zero is presented, using a two-parameter "localized" Jastrow × Gaussian trial wave function. The energy expectation value is calculated by a Monte Carlo method taking advantage of the formal analogy between the quantum-variational integral and a classical canonical-ensemble configuration integral. Results are in good agreement with experiment. The solid-liquid phase transition for both He3 and He4 is predicted to take place at densities close to the experimental ones. The Monte Carlo method has also been used to test the validity of a variational calculation on a truncated cluster expansion of the energy expectation value due to Nosanow.
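
The "formal analogy" invoked here is that |ψ|² is sampled with Metropolis moves exactly as a classical Boltzmann factor would be. The toy sketch below applies the same idea to a one-dimensional harmonic oscillator with a Gaussian trial function (my own simplification, not the paper's three-dimensional Jastrow × Gaussian helium calculation): the variational energy is the |ψ|²-weighted average of the local energy, and it is minimized at the exact ground state.

```python
import numpy as np

rng = np.random.default_rng(6)

# 1-D harmonic oscillator (hbar = m = omega = 1) with trial psi_alpha(x) = exp(-alpha x^2).
def local_energy(x, alpha):
    """E_L = (H psi)/psi for the Gaussian trial function."""
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc_energy(alpha, n_samples=100_000, step=1.0):
    """Metropolis sampling of |psi|^2, treated like a classical Boltzmann weight."""
    x, energies, accepted = 0.0, [], 0
    for _ in range(n_samples):
        trial = x + rng.uniform(-step, step)
        # acceptance ratio |psi(trial)|^2 / |psi(x)|^2 = exp(-2 alpha (trial^2 - x^2))
        if rng.random() < np.exp(-2.0 * alpha * (trial * trial - x * x)):
            x, accepted = trial, accepted + 1
        energies.append(local_energy(x, alpha))
    return np.mean(energies), accepted / n_samples

for alpha in (0.3, 0.5, 0.7):
    e, acc = vmc_energy(alpha)
    print(f"alpha = {alpha:.1f}:  <E> = {e:.4f}   (acceptance {acc:.2f})")
# The minimum, <E> = 0.5 at alpha = 0.5, is the exact ground-state energy.
```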


Journal ArticleDOI
TL;DR: In this article, the effects of N and communality on the variability of zero and nonzero factor loadings were assessed using a Monte Carlo approach, and it was found that increasing N or communality resulted in decreased sampling error of individual factor loads, but for zero loadings N was found to have the greatest influence.
Abstract: The effects of N and communality on the variability of zero and nonzero factor loadings were assessed using a Monte Carlo approach. It was found that increasing N or communality resulted in decreased sampling error of individual factor loadings, but for zero loadings N was found to have the greatest influence. It was also found that distributions of factor loadings become relatively elongated as communality increases.

Journal ArticleDOI
TL;DR: In this article, an analysis is presented of the dynamical properties of a model meant to represent the electron jump mechanism that has been proposed to describe the alkali-metal-halogen-molecule reactions.
Abstract: An analysis is presented of the dynamical properties of a model meant to represent the “harpooning,” or electron‐jump, mechanism that has been proposed to describe the alkali‐metal–halogen‐molecule reactions (M + X2 → MX + X). Individual trajectories are computed from classical equations of motion with the starting conditions chosen by Monte Carlo procedures. With 500 or so trajectories for each trial, a comparison can be made with available observations from molecular‐beam experiments; the way in which the reaction energy is distributed between kinetic energy of translation and the internal modes of the product, along with a measure of the differential cross section, are of particular concern. The trajectories include a sudden crossing from an initial homopolar surface to a final surface with the long‐range forces required of an M+X− bond to simulate electron transfer. Two different potential functions are used as the final surface: one has a phenomenological form to include various types of X–X forces after transition, and the other has a simple induced‐dipole term as an interaction of the departing X with the resulting charges (M+, X−). Except for an extreme trial closely approximating pure stripping, nine trials of the first potential failed to agree with experiment. The last potential gave good results for trials of K + Br2, K + I2, Rb + Br2, Rb + I2, and Cs + Br2.

Journal ArticleDOI
TL;DR: In this article, the pulse height distribution of a silicon semiconductor counter was measured for monoenergetic electrons in the energy range from 300 to 1200 keV at angles of incidence from 15° to 90°.

Journal ArticleDOI
01 Jan 1968
TL;DR: The paper describes a method for laying out networks by computer so that the number of crossings between the network connections is close to a minimum, relevant to the design of printed circuits, where special wiring arrangements have to be made when crossings occur.
Abstract: The paper describes a method for laying out networks by computer so that the number of crossings between the network connections is close to a minimum. The problem is relevant to the design of printed circuits, where special wiring arrangements have to be made when crossings occur. The network is expressed in the form of a permutation, which is convenient for manipulation, by deforming the network so that the node points lie on a straight line with the connections drawn as semicircles above and below the node line. Locally optimal networks are defined so that no gain can result from moving an individual node to a new position, and a 2-stage method of construction is proposed. The formulas used to calculate the number of crossings consist primarily of summations, so that the procedure is quickly performed on a computer. The method has been tested on some trial networks for which the minimum number of crossings is known, and it has also been compared with Monte Carlo methods on random networks. The results are encouraging in all cases.
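
A small sketch of the crossing count that drives such a method: with the nodes deformed onto a line and each connection drawn as a semicircle above or below it, two same-side connections cross exactly when their endpoints interleave. The node names, edges, and side assignments below are made up for illustration and are not taken from the paper.

```python
from itertools import combinations

def crossings(order, edges, sides):
    """Count crossings when nodes are placed on a line in the given order and
    each edge is drawn as a semicircle on its assigned side of the node line."""
    pos = {node: i for i, node in enumerate(order)}
    spans = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
    count = 0
    for (e1, s1), (e2, s2) in combinations(zip(spans, sides), 2):
        if s1 != s2:
            continue                       # opposite sides of the line never cross
        (a, b), (c, d) = sorted((e1, e2))  # ensure a <= c
        if a < c < b < d:                  # interleaved endpoints on the same side
            count += 1
    return count

# Illustrative 4-node network.
order = ["n1", "n2", "n3", "n4"]
edges = [("n1", "n3"), ("n2", "n4"), ("n1", "n4")]
sides = ["above", "above", "below"]
print("crossings:", crossings(order, edges, sides))   # (n1,n3) and (n2,n4) cross above
```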

Journal ArticleDOI
TL;DR: The reflected and transmitted radiances for the isotropic and Rayleigh models tend to be similar, as are those for the various haze and cloud models, and the downward flux, cloud albedo, and mean optical path are discussed.
Abstract: The influence of the particle size distribution on light reflected and transmitted by clouds is calculated by a Monte Carlo technique.



Journal ArticleDOI
TL;DR: In this paper, the authors compared the integral formulation with a Monte Carlo calculation of the solid angle, and concluded that the Monte Carlo method is less advantageous but more easily adaptable to complex geometries.


Journal ArticleDOI
TL;DR: The Monte Carlo model for electron scattering described in an earlier paper has been used to calculate the absorption and back-scattering corrections met in electron-probe x-ray microanalysis as mentioned in this paper.
Abstract: The Monte Carlo model for electron scattering described in an earlier paper has been used to calculate the absorption and back-scattering corrections met in electron-probe x-ray microanalysis. Although agreement with experimental data is on the whole good, the calculated values for the correction factors are not sufficiently accurate for general use. However, in the case of light element analysis where very high absorption corrections are needed, the corrections calculated from Monte Carlo data are the best available at present. Values for the back-scattering correction factor R calculated for incident beam angles of 22.5° and 45° to the surface normal are given.