
Showing papers in "Computers in Physics in 1998"


Journal ArticleDOI
TL;DR: The implementation of various types of turbulence modeling in a FOAM computational-fluid-dynamics code is discussed, and calculations performed on a standard test case, that of flow around a square prism, are presented.
Abstract: In this article the principles of the field operation and manipulation (FOAM) C++ class library for continuum mechanics are outlined. Our intention is to make it as easy as possible to develop reliable and efficient computational continuum-mechanics codes: this is achieved by making the top-level syntax of the code as close as possible to conventional mathematical notation for tensors and partial differential equations. Object-orientation techniques enable the creation of data types that closely mimic those of continuum mechanics, and the operator overloading possible in C++ allows normal mathematical symbols to be used for the basic operations. As an example, the implementation of various types of turbulence modeling in a FOAM computational-fluid-dynamics code is discussed, and calculations performed on a standard test case, that of flow around a square prism, are presented. To demonstrate the flexibility of the FOAM library, codes for solving structures and magnetohydrodynamics are also presented with appropriate test case results given. © 1998 American Institute of Physics.
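The role operator overloading plays here can be illustrated with a toy sketch — in Python rather than FOAM's C++, and with invented names (`Field`, `laplacian`) that are not part of the FOAM API — showing how overloaded arithmetic lets a discretized equation read almost like the mathematics:

```python
# Toy illustration of operator overloading for field equations.
# `Field` and `laplacian` are invented for this sketch; they are NOT the FOAM API.
class Field:
    def __init__(self, values):
        self.values = list(values)
    def __add__(self, other):
        return Field(a + b for a, b in zip(self.values, other.values))
    def __mul__(self, scalar):
        return Field(scalar * a for a in self.values)
    __rmul__ = __mul__          # allow "scalar * Field" as well

def laplacian(f, dx=1.0):
    """Second-order central difference with zero-gradient end conditions."""
    v = f.values
    n = len(v)
    out = []
    for i in range(n):
        left = v[max(i - 1, 0)]
        right = v[min(i + 1, n - 1)]
        out.append((left - 2 * v[i] + right) / dx**2)
    return Field(out)

# An explicit diffusion step written the way the equation is written:
#   T_new = T + dt * alpha * laplacian(T)
dt, alpha = 0.1, 1.0
T = Field([0.0, 0.0, 1.0, 0.0, 0.0])
T = T + dt * alpha * laplacian(T)
print(T.values)
```

The overloads keep the solver's top-level line close to the continuum notation, which is the design goal the abstract describes for FOAM's tensor and PDE classes.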

3,987 citations


Journal ArticleDOI
David L. Windt1
TL;DR: IMD includes a full graphical user interface and affords modeling with up to eight simultaneous independent variables, as well as parameter estimation using nonlinear, least-squares curve fitting to user-supplied experimental optical data.
Abstract: A computer program called IMD is described. IMD is used for modeling the optical properties (reflectance, transmittance, electric-field intensities, etc.) of multilayer films, i.e., films consisting of any number of layers of any thickness. IMD includes a full graphical user interface and affords modeling with up to eight simultaneous independent variables, as well as parameter estimation (including confidence interval generation) using nonlinear, least-squares curve fitting to user-supplied experimental optical data. The computation methods and user interface are described, and numerous examples are presented that illustrate some of IMD’s unique modeling, fitting, and visualization capabilities. © 1998 American Institute of Physics.

892 citations


Journal ArticleDOI
TL;DR: This work focuses specifically on the coupling of length scales for the three mechanics describing materials phenomena: quantum mechanics, atomistic mechanics, and continuum mechanics.
Abstract: couple different length and time scales in serial fashion. By this we mean that one set of calculations at a fundamental level, and of high computational complexity, is used to evaluate constants for use in a more approximate or phenomenological computational methodology at longer length or time scales. In pioneering work of this sort in the 1980s, Clementi and coworkers1 used high-quality quantum-mechanical methods to evaluate the interaction of several water molecules. From this database they parameterized an empirical potential for use in molecular-dynamics atomistic simulations. Such a simulation was then used to evaluate the viscosity of water from the atomic autocorrelation function. Finally, the computed viscosity was used in a computational-fluid-dynamics calculation to predict tidal circulation in Buzzards Bay, MA. This tour de force of computational physics is a powerful example of the sequential coupling of length and time scales: one series of calculations is used as input to the next up the length and time hierarchy. There are many other examples in the literature. But what underlies all these schemes is that an appropriate computational methodology is used for a given scale or task, whether it be the accuracy of quantum mechanics at the shortest scales or fluid dynamics at the longest scales. In contrast, there has been comparatively little effort devoted to the parallel coupling of different computational schemes for a simultaneous attack on a given problem; in our case, our interest dictates specific attention toward issues in materials or solid-state physics. We will focus specifically on the coupling of length scales for the three mechanics describing materials phenomena: quantum mechanics, atomistic mechanics, and continuum mechanics.

285 citations



Journal ArticleDOI
TL;DR: In this paper, an explicit algorithm is given to calculate the first point of the solution of second-order ordinary differential equations of the Numerov type, with an accuracy appropriate to that obtained with the general algorithm.
Abstract: Second-order ordinary differential equations of the Numerov type (no first derivative and the given function linear in the solution) are common in physics, but little discussion is devoted to the special first step that is needed before one can apply the general algorithm. We give an explicit algorithm to calculate the first point of the solution with an accuracy appropriate to that obtained with the general algorithm. © 1997 American Institute of Physics.
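The Numerov recurrence and the first-step problem can be sketched as follows. The Taylor-series starter used for the first point is a common textbook choice, not necessarily the specific algorithm proposed in the paper:

```python
import math

def numerov(f, x0, y0, dydx0, h, nsteps):
    """Integrate y'' = f(x)*y with the Numerov recurrence.

    The first point y1 comes from a Taylor-series starter that uses
    y'' = f*y (a common textbook choice, not necessarily the paper's
    algorithm)."""
    f0 = f(x0)
    fp0 = (f(x0 + h) - f(x0 - h)) / (2 * h)      # numerical f'(x0)
    y2 = f0 * y0                                 # y''(x0)
    y3 = fp0 * y0 + f0 * dydx0                   # y'''(x0) = (f*y)'(x0)
    ys = [y0, y0 + h * dydx0 + h**2 / 2 * y2 + h**3 / 6 * y3]
    c = h**2 / 12.0
    for n in range(1, nsteps):
        xm, xc, xp = x0 + (n - 1) * h, x0 + n * h, x0 + (n + 1) * h
        ynext = (2 * ys[n] * (1 + 5 * c * f(xc))
                 - ys[n - 1] * (1 - c * f(xm))) / (1 - c * f(xp))
        ys.append(ynext)
    return ys

# y'' = -y, y(0) = 0, y'(0) = 1  ->  y(x) = sin(x)
ys = numerov(lambda x: -1.0, 0.0, 0.0, 1.0, 0.01, 100)
print(abs(ys[-1] - math.sin(1.0)))
```

Because the general recurrence needs both y0 and y1, any starter less accurate than O(h^4) would degrade the whole solution — which is exactly the issue the abstract addresses.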

134 citations


Journal ArticleDOI
TL;DR: It is shown that the procedure for preservation of molecular rigidity can be realized particularly simply within the Verlet algorithm in velocity form and it is demonstrated that the method presented leads to an improved numerical stability with respect to the usual quaternion rescaling scheme.
Abstract: A revised version of the quaternion approach for numerical integration of the equations of motion for rigid polyatomic molecules is proposed. The modified approach is based on a formulation of the quaternion dynamics with constraints. This allows one to resolve the rigidity problem rigorously using constraint forces. It is shown that the procedure for preservation of molecular rigidity can be realized particularly simply within the Verlet algorithm in velocity form. We demonstrate that the method presented leads to an improved numerical stability with respect to the usual quaternion rescaling scheme and it is roughly as good as the cumbersome atomic-constraint technique. © 1998 American Institute of Physics.
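For reference, the "Verlet algorithm in velocity form" within which the constraint scheme is formulated looks like this for the translational degrees of freedom (a minimal sketch; the paper's quaternion-constraint forces are not reproduced here):

```python
import math

def velocity_verlet(pos, vel, accel, dt, nsteps):
    """Velocity form of the Verlet algorithm, the integrator inside which
    the paper formulates its quaternion-constraint scheme (the constraint
    forces themselves are omitted in this sketch)."""
    a = accel(pos)
    for _ in range(nsteps):
        pos = pos + vel * dt + 0.5 * a * dt * dt   # position update
        a_new = accel(pos)                         # force at the new position
        vel = vel + 0.5 * (a + a_new) * dt         # velocity from averaged acceleration
        a = a_new
    return pos, vel

# harmonic oscillator x'' = -x, x(0) = 1, v(0) = 0  ->  x(t) = cos t
x, v = velocity_verlet(1.0, 0.0, lambda q: -q, 0.001, 1000)
print(x, v)
```

The appeal of this form is that positions and velocities are available at the same time level, which is what makes it convenient to impose the rigidity constraints exactly at each step.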

109 citations


Journal ArticleDOI
TL;DR: In this paper, correlations in the generalized feedback shift-register random-number generator are shown to be greatly reduced when the number of feedback taps is increased from two to four (or more) and the tap offsets are made large.
Abstract: Correlations in the generalized feedback shift-register random-number generator are shown to be greatly reduced when the number of feedback taps is increased from two to four (or more) and the tap offsets are made large. Simple formulas for producing maximal-cycle four-tap rules from available primitive trinomials are given, and explicit three- and four-point correlations are found for some of those rules. Several generators are also tested using a simple but sensitive random-walk simulation that relates to a problem in percolation theory. While virtually all two-tap generators fail this test, four-tap generators with offsets greater than about 500 pass it, have passed tests carried out by others, and appear to be good multipurpose high-quality random-number generators. © 1998 American Institute of Physics.
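The structure of a four-tap generator can be sketched as below. Note the small lags are purely illustrative (the paper recommends offsets larger than about 500, taken from maximal-cycle rules derived from primitive trinomials), and the bootstrap seeding by Python's `random` is likewise just a convenience for the sketch:

```python
import random

def gfsr4(lags=(5, 11, 17, 23), nbits=32, seed=12345):
    """Four-tap generalized feedback shift register:
        x[n] = x[n-a] ^ x[n-b] ^ x[n-c] ^ x[n-d]   (bitwise XOR).
    The tiny lags here are illustrative only; production use requires
    large offsets from a maximal-cycle rule, per the paper."""
    a, b, c, d = sorted(lags)                       # d is the largest lag
    boot = random.Random(seed)                      # bootstrap the seed history
    buf = [boot.getrandbits(nbits) for _ in range(d)]
    n = 0
    while True:
        # buf[k % d] holds x[k] for the last d values; buf[n % d] is x[n-d]
        w = buf[(n - a) % d] ^ buf[(n - b) % d] ^ buf[(n - c) % d] ^ buf[n % d]
        buf[n % d] = w                              # overwrite x[n-d] with x[n]
        n += 1
        yield w

g = gfsr4()
sample = [next(g) for _ in range(5)]
print(sample)
```

The ring buffer of length d is the entire generator state, which is why such generators are fast: one XOR per tap and a store per word.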

91 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the low-lying excitations of the Heisenberg antiferromagnet in a magnetic field and interpreted these collective states as composites of quasi-particles from a different species.
Abstract: Having introduced the magnon in part I and the spinon in part II as the relevant quasi-particles for the interpretation of the spectrum of low-lying excitations in the one-dimensional (1D) s=1/2 Heisenberg ferromagnet and antiferromagnet, respectively, we now study the low-lying excitations of the Heisenberg antiferromagnet in a magnetic field and interpret these collective states as composites of quasi-particles from a different species. We employ the Bethe ansatz to calculate matrix elements and show how the results of such a calculation can be used to predict lineshapes for neutron scattering experiments on quasi-1D antiferromagnetic compounds. The paper is designed as a tutorial for beginning graduate students. It includes 11 problems for further study.

78 citations


Journal ArticleDOI
TL;DR: Methods of web-based testing are described, the positive attributes of the WWW for testing are highlighted, and the dangers that threaten its effectiveness are marked.
Abstract: The World Wide Web is impacting education in profound ways. So far, the predominant use of the WWW in teaching has been for finding and distributing information, much like an online library. However, as information technology evolves, the Web is increasingly used for more interactive applications like testing. But as with any technological development or media revolution, there is potential for both success and failure. To use the WWW effectively for testing in physics education, we must identify our goals and determine the best way to meet those goals. This paper describes methods of web-based testing, highlights the positive attributes of the WWW for testing, and marks the dangers that threaten its effectiveness.

47 citations


Journal ArticleDOI
TL;DR: An introduction to quantum mechanics based on the sum-over-paths method originated by Richard P. Feynman is outlined, which has promise for the education of physicists and other scientists, as well as for distribution to a wider audience.
Abstract: We outline an introduction to quantum mechanics based on the sum-over-paths method originated by Richard P. Feynman. Students use software with a graphics interface to model sums associated with multiple paths for photons and electrons, leading to the concepts of electron wavefunction, the propagator, bound states, and stationary states. Material in the first portion of this outline has been tried with an audience of high-school science teachers. These students were enthusiastic about the treatment, and we feel that it has promise for the education of physicists and other scientists, as well as for distribution to a wider audience. © 1998 American Institute of Physics.

46 citations



Journal ArticleDOI
TL;DR: In this article, the authors used lifted interpolating wavelets to solve Poisson's equation for a 3D Uranium dimer, where the length scales of the charge distribution vary by 4 orders of magnitude.
Abstract: Wavelets are a powerful new mathematical tool that makes it possible to treat in a natural way quantities characterized by several length scales. In this article we show how wavelets can be used to solve partial differential equations which exhibit widely varying length scales and which are therefore hardly accessible by other numerical methods. As a benchmark calculation we solve Poisson's equation for a three-dimensional uranium dimer. The length scales of the charge distribution vary by four orders of magnitude in this case. Using lifted interpolating wavelets, the number of iterations is independent of the maximal resolution, and the computational effort therefore scales strictly linearly with the size of the system.

Journal ArticleDOI
TL;DR: A method to approximate continuous multidimensional probability-density functions (PDFs) using their projections and correlations is described, particularly useful for event classification when estimates of systematic uncertainties are required and for the application of an unbinned maximum-likelihood analysis when an analytic model is not available.
Abstract: A method to approximate continuous multidimensional probability-density functions (PDFs) using their projections and correlations is described. The method is particularly useful for event classification when estimates of systematic uncertainties are required and for the application of an unbinned maximum-likelihood analysis when an analytic model is not available. A simple goodness-of-fit test of the approximation can be used, and simulated event samples that follow the approximate PDFs can be generated efficiently. The source code for a Fortran-77 implementation of this method is available. © 1998 American Institute of Physics.

Journal ArticleDOI
TL;DR: A simple yet robust algorithm is presented to approximate data containing noise by smooth spline functions, which performs well even in the presence of correlated noise.
Abstract: A simple yet robust algorithm is presented to approximate data containing noise by smooth spline functions. The method is easy to implement and fast. Experience has shown that in the great majority of cases no user decisions are required to obtain adequate approximants, which is a useful property for routine processing of experimental data. The statistical basis of the algorithm is a generalized version of the Durbin–Watson statistic, which performs well even in the presence of correlated noise. Examples are given to illustrate the power of the algorithm. © 1998 American Institute of Physics.
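The statistic underlying the stopping rule can be illustrated with its classical form (the paper uses a generalized version; this textbook form is shown only to fix ideas):

```python
def durbin_watson(residuals):
    """Classical Durbin-Watson statistic of a residual sequence.

    Values near 2 indicate uncorrelated residuals; values well below 2
    indicate positive serial correlation (e.g., an oversmoothed spline
    leaves systematically correlated residuals). The paper employs a
    generalized version of this statistic."""
    num = sum((residuals[i] - residuals[i - 1]) ** 2
              for i in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

# alternating residuals are maximally negatively correlated -> value near 4
print(durbin_watson([1, -1] * 50))
```

In a smoothing context the statistic gives an objective target: increase the spline's flexibility until the residuals look like uncorrelated noise, i.e., until the statistic is close to 2.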


Journal ArticleDOI
TL;DR: A symbolic Mathematica package for analysis and control of chaos in discrete and continuous nonlinear systems is presented and some commands with which to obtain qualitative and quantitative measures of chaos are described.
Abstract: In this article a symbolic Mathematica package for analysis and control of chaos in discrete and continuous nonlinear systems is presented. We start by presenting the main properties of chaos and describing some commands with which to obtain qualitative and quantitative measures of chaos, such as the bifurcation diagram and the Lyapunov exponents, respectively. Then we analyze the problem of chaos control and suppression, illustrating the different methodologies proposed in the literature by means of two representative algorithms (linear feedback control and suppression by perturbing the system variables). A novel analytical treatment of these algorithms using the symbolic capabilities of Mathematica is also presented. Well known one- and two-dimensional maps (the logistic and Henon maps) and flows (the Duffing and Rossler systems) are used throughout the article to illustrate the concepts and algorithms. © 1998 American Institute of Physics.
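One of the quantitative measures mentioned, the Lyapunov exponent, can be estimated numerically by averaging ln|f'(x)| along an orbit — sketched here in Python rather than the paper's Mathematica:

```python
import math

def lyapunov_logistic(r, x0=0.3, n_transient=1000, n_iter=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the orbit average of ln|f'(x)| (a standard numerical recipe,
    shown in Python rather than the paper's Mathematica package)."""
    x = x0
    for _ in range(n_transient):           # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        d = abs(r * (1.0 - 2.0 * x))       # |f'(x)| for f(x) = r*x*(1-x)
        total += math.log(max(d, 1e-300))  # guard the measure-zero point x = 1/2
        x = r * x * (1.0 - x)
    return total / n_iter

print(lyapunov_logistic(4.0))   # analytically ln 2 ~ 0.693 at r = 4
```

A positive exponent (as at r = 4) signals chaos; a negative one (as on the stable 2-cycle at r = 3.2) signals a periodic attractor — which is how the exponent complements the bifurcation diagram as a diagnostic.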

Journal ArticleDOI
TL;DR: A predictor–corrector pair of multistep algorithms adapted to the accurate and efficient numerical integration of perturbed oscillators are introduced, since the truncation error comes exclusively from the perturbation terms.
Abstract: In this article we introduce a predictor–corrector pair of multistep algorithms adapted to the accurate and efficient numerical integration of perturbed oscillators, since the truncation error comes exclusively from the perturbation terms. They are the first such methods that can achieve a high order while allowing step-size and order variations. © 1998 American Institute of Physics.

Journal ArticleDOI
TL;DR: In this article, large-scale molecular-dynamics simulations of the classical Rayleigh-Taylor (RT) phenomenon in a Lennard-Jones molecular liquid were carried out, and the development of hydrodynamic instabilities from two different kinds of interacting particles.
Abstract: We carried out large-scale molecular-dynamics simulations of the classical Rayleigh–Taylor (RT) phenomenon in a Lennard-Jones molecular liquid. We have observed from these simulations, involving 10^6–10^7 particles, the development of hydrodynamic instabilities from two different kinds of interacting particles. A free surface is introduced by deploying an overlying void. For a box with a dimension up to about 1 μm and two layers having different particle sizes, the fingering type of instability is observed as a result of oscillations caused by the gravitational field. In this gridless scheme, surface waves can be captured self-consistently. For equally sized particles, a spontaneous “fluctuation driven” mixing with a long start-up time is observed. These molecular-dynamics results suggest the possibilities of upscaling the RT phenomenon. For conducting these numerical experiments, which require at least ∼10^5 time steps, a single simulation would require 100–200 Tflops of massively parallel computer power. © 1998 American Institute of Physics.

Journal ArticleDOI
TL;DR: The Absorption module was created because it is useful for students to see that light can be both emitted and absorbed by atoms when electrons make transitions from one energy level to another, even though students do not do an absorption experiment in the authors' laboratory.
Abstract: The goal of the Visual Quantum Mechanics project is to develop materials that help high school and undergraduate students to learn quantum physics without having a background in higher-level mathematics. To reach these students we are concentrating on activities that integrate hands-on experiments and multimedia materials with interactive software packages. One of the packages, Spectroscopy Lab Suite, is designed to help students learn how quantum-mechanical concepts such as energy levels and energy bands can explain the spectra of light emitted by different light sources. Spectroscopy Lab Suite is incorporated into the Learning Cycle model of instruction, which makes extensive use of hands-on activities. The opening screen of Spectroscopy Lab Suite shows the hierarchy of the various activities (see Fig. 1). This hierarchy is designed to reflect the pedagogical application of the package as envisaged by us for use in our instructional units. Students first observe the spectra of gas lamps and construct an empirical model of discrete energy levels in an atom using the components of the Gas Lamps module. Students generally use the Emission module after observing the spectra emitted by gas-discharge tubes in the laboratory. We created the Absorption module because we believe that it is useful for students to see that light can be both emitted and absorbed by atoms when electrons make transitions from one energy level to another, even though they do not do an absorption experiment in our laboratory.

Journal ArticleDOI
TL;DR: In this article, models and algorithms are presented for simulating strike-slip earthquake faults, such as the San Andreas fault, with nearest-neighbor and long-range interactions; the focus is on quakes whose initial rupture sites lie near the earth's surface.
Abstract: Understanding the physics of complex phenomena such as earthquakes is a formidable challenge. Because we are constrained by our inability to do experiments on earthquake faults, we turn to computers as our earthquake laboratories. Computer simulations of earthquakes help us to test theoretical models and allow us to generate catalogs of synthetic earthquake events. In this column, we discuss several models and algorithms for simulating strike-slip faults with nearest-neighbor and long-range interactions. We begin by briefly reviewing the essential physics of earthquakes (see Ref. 1 for a more complete introduction and Ref. 2 for a more advanced exposition). A strike-slip or transform fault, such as the San Andreas fault in California, delineates the boundary between two crustal regions that move primarily along the earth's surface and opposite to each other. Although earthquakes occur at other types of faults, we shall concentrate solely on strike-slip earthquakes, which have their initial rupture sites near the earth's surface, because their mainly one-dimensional motion makes them easier to simulate than other types of faults. Without the earth's relatively rigid upper layer, called the lithosphere, these shallow-focus quakes would not occur. The lithosphere, an approximately 100-km-thick layer made of the crust and a portion of the upper mantle, consists of two principal parts that are coupled. Although the upper part can sustain tremendous shear stresses (about 10 MPa in faults and about 100 MPa in the surrounding rock) and undergoes brittle fracture and seismic slip, the lower part behaves ductilely and experiences aseismic slip. Seismic slip is associated with earthquakes and refers to very rapid motion due to a frictional instability between the two sides of a fault. After undergoing seismic slip, the formerly sliding rock experiences an interval of little or no motion during which the stress on the rock recharges. This phenomenon is called a stick-slip instability. In contrast, aseismic slip refers to continuous, usually slow, stable sliding and is not associated with earthquakes. Consequently, the lower part constrains earthquakes to the upper region of the lithosphere and prevents large earthquakes from expanding to below the border between the upper and lower regions. This constraint may lead to differences in the distribution of occurrence frequency versus size for large and small earthquakes.

Journal ArticleDOI
TL;DR: In a clear and concise manner, the title leads the reader from simple modeling calculations for small molecules to the complex modeling systems used to simulate protein and other relevant biomolecules.
Abstract: From the Publisher: With the growing speed of today's computers, molecular modeling is becoming an increasingly popular method for conducting experiments on the computer before applying the results in the laboratory. These techniques allow the computer-aided generation of molecular structures as well as the computation of molecular properties. Prediction of three-dimensional structures, visualizations of molecular surface properties, and optimization of drug-receptor interactions by visual inspection can all be achieved by molecular modeling. Written by experienced experts in molecular modeling, this book describes the various pitfalls to molecular modeling and provides the information necessary to reliably judge the modeling results. In a clear and concise manner, the title leads the reader from simple modeling calculations for small molecules to the complex modeling systems used to simulate protein and other relevant biomolecules. Both practitioners in industry and academia will find this introduction to be an invaluable reference for the daily use of molecular modeling.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss a set of long-standing problems in the behavior of fluids at low Reynolds numbers, where inertial forces are tiny compared to viscous forces, for which a computational approach yields qualitatively new information that is inaccessible using conventional methods.
Abstract: techniques has opened a new window on diverse problems in physics. Our focus, in this article, is on a set of issues pertaining to the behavior of fluids, at low Reynolds numbers, for which the inertial forces are tiny compared to viscous forces. We shall discuss classes of long-standing problems for which a computational approach yields qualitatively new information that is inaccessible using conventional methods. Broadly speaking, these problems involve questions of behavior at the subcontinuum level for which experiments are unable to provide the requisite answers. In these cases, computers provide a bridge between microscopic and macroscopic scales and are beginning to yield new insights into technologically important issues. The computational method is molecular-dynamics (MD) simulation, which entails the integration of Newton’s laws of motion for a set of interacting molecules (see the flow diagram in Fig. 1). At the simplest level, molecular simulations are important for the interpretation of nanoscale experiments. A detailed understanding of the behavior of materials at very small scales is necessary for the fabrication of miniature devices and the manipulation of materials at the molecular scale. The behavior of materials at the nanoscale is often quite different from that in the bulk. For example, it has been found recently that very small systems may exhibit solid-liquid coexistence over a range of temperatures, quite distinct from the macroscopic behavior described by equilibrium statistical mechanics.1 Another example occurs in studies of friction in narrow systems,2 in which a combination of experiment and simulation has elucidated the relationship between the stick-slip behavior observed in experiments and the local freezing and melting first discovered in MD simulations by Thompson and Robbins.3 Microscopic studies are needed to assess the range of

Journal ArticleDOI
TL;DR: The POOMA Framework is an integrated collection of C++ classes designed to increase simulation lifetime and agility, ease data-parallel interfaces, and improve portability across rapidly evolving high-performance computing architectures.
Abstract: The Parallel Object-Oriented Methods and Applications (POOMA) Framework is described. The POOMA Framework is an integrated collection of C++ classes designed to increase simulation lifetime and agility, ease data-parallel interfaces, and improve portability across rapidly evolving high-performance computing architectures. (AIP)


Journal ArticleDOI
TL;DR: In this article, the authors generalize the nonstandard finite-difference methodology to two and three dimensions, give example algorithms, and discuss practical applications, and give examples of two-dimensional algorithms.
Abstract: Nonstandard finite differences can be used to construct exact algorithms to solve some differential equations of physical interest, such as the wave equation and Schrödinger's equation. Even where exact algorithms do not exist, nonstandard finite differences can greatly improve the accuracy of low-order finite-difference algorithms at a computational cost low compared to higher-order schemes or finer gridding. While nonstandard finite differences have been applied successfully to a variety of one-dimensional problems, they cannot be directly extended to higher dimensions without modification. In this article we generalize the nonstandard finite-difference methodology to two and three dimensions, give example algorithms, and discuss practical applications. © 1998 American Institute of Physics.
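The simplest example of an exact scheme of the kind the abstract mentions is the 1D wave equation: at Courant number c·dt/dx = 1, the ordinary leapfrog stencil propagates the solution exactly. This sketch demonstrates that special case only; the paper's two- and three-dimensional construction is more involved and is not reproduced here:

```python
import math

# One-dimensional wave equation u_tt = c^2 u_xx on a periodic grid.
# At Courant number c*dt/dx = 1 the leapfrog stencil
#     u[i,n+1] = u[i+1,n] + u[i-1,n] - u[i,n-1]
# advances the exact solution (d'Alembert traveling waves) to rounding error.
c, dx = 1.0, 0.01
dt = dx / c                                    # Courant number exactly 1
N = 200                                        # grid covers x in [0, 2)
f = lambda x: math.sin(2 * math.pi * x)        # right-moving profile u = f(x - c*t)
x = [i * dx for i in range(N)]

u_old = [f(xi) for xi in x]                    # u at t = 0
u_cur = [f(x[(i - 1) % N]) for i in range(N)]  # u at t = dt: profile shifted one cell
for n in range(100):                           # advance 100 more steps
    u_new = [u_cur[(i + 1) % N] + u_cur[(i - 1) % N] - u_old[i] for i in range(N)]
    u_old, u_cur = u_cur, u_new

# after 101 time levels the profile should be f shifted by 101 cells, to rounding
err = max(abs(u_cur[i] - f(x[(i - 101) % N])) for i in range(N))
print(err)
```

Off the magic time step the standard scheme is only second-order accurate; the nonstandard finite-difference viewpoint is to choose the discrete denominators so that exactness (or much higher accuracy) is retained more generally.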

Journal ArticleDOI
TL;DR: Hardware and software for a high-speed, pulsed-data-acquisition system that can be obtained via any device attachable to a VXI crate, GPIB controller, or directly via serial or parallel ports are described.
Abstract: Hardware and software for a high-speed, pulsed-data-acquisition system are described. Data can be obtained via any device attachable to a VXI crate, GPIB controller, or directly via serial or parallel ports. Stepper motors controlled via serial port automate probe movement. LabVIEW and custom C++ modules are used to handle setup, data gathering and processing, and user interface. Experimental parameters can be controlled at each point via any GPIB-ready device. © 1997 American Institute of Physics.



Journal ArticleDOI
TL;DR: The editors/authors of the Numerical Recipes column are invited back to offer some observations about the past of scientific computing, including the educational niche occupied by the books, and to make some prognostications about where the field is going.
Abstract: Longtime readers of Computers in Physics may remember us as the editors/authors of the Numerical Recipes column that ran from 1988 through 1992. At that time, with the publication of the second edition of our Numerical Recipes (NR) books in C and Fortran, we took a sabbatical leave from column writing, a leave that became inadvertently permanent. Now, as a part of CIP's Tenth Anniversary celebration, we have been invited back to offer some observations about the past of scientific computing, including the educational niche occupied by our books, and to make some prognostications about where the field is going.

Journal ArticleDOI
TL;DR: The growth of streamer trees in insulating fluids (a submicrosecond process that triggers high-voltage breakdown) has been simulated with a combination of parallel-coding tools, enabling the solution of memory-intensive problems on a group of limited-memory processors.
Abstract: The growth of streamer trees in insulating fluids (a submicrosecond process that triggers high-voltage breakdown) has been simulated with a combination of parallel-coding tools. Large grids and arrays display well the multifractal, self-avoiding character of the streamer trees. Three physical cases have been approximated by different power-law weightings of the statistical growth filter: dense anode trees, in the uniform field; sparse cathode trees (a rarer experimental case); and ultrasparse anode trees (seen in some fluids of higher viscosity). The model is contained in a software package that is written in Fortran 90 with data parallel extensions for distributed execution. These extensions encapsulate an underlying, invisible message-passing environment, thus enabling the solution of memory-intensive problems on a group of limited-memory processors. Block partitioning creates processes of reasonable size, which operate in parallel like small copies of the original code. The user needs only to express his model in transparent array-directed commands; parallel interfacing between blocks is handled invisibly. Breakdown is performed in parallel in each of the local blocks. Results are presented for experiments run on eight and nine nodes of the IBM SP2, and four and eight nodes of the SGI Onyx and Origin, three examples of multiple-processor machines. Display is carried out in three dimensions. Timing of the growth can be shown by color banding or by frame animation of the results. The adequacy of the growth rules and size scaling are tested by comparing the simulations against snapshots from high-voltage discharge events.