Showing papers by "Steven J. Plimpton published in 2001"
TL;DR: A systematic, large-scale simulation study of granular media in two and three dimensions, investigating the rheology of cohesionless granular particles in inclined plane geometries, finds that a steady-state flow regime exists in which the energy input from gravity balances that dissipated by friction and inelastic collisions.
Abstract: We have performed a systematic, large-scale simulation study of granular media in two and three dimensions, investigating the rheology of cohesionless granular particles in inclined plane geometries, i.e., chute flows. We find that over a wide range of the parameter space of interaction coefficients and inclination angles, a steady-state flow regime exists in which the energy input from gravity balances that dissipated by friction and inelastic collisions. In this regime, the bulk packing fraction (away from the top free surface and the bottom plate boundary) remains constant as a function of depth z of the pile. The velocity profile in the direction of flow, v_x(z), scales with the height H of the pile as v_x(z) ∝ H^α, with α = 1.52 ± 0.05. However, the behavior of the normal stresses indicates that existing simple theories of granular flow do not capture all of the features evidenced in the simulations.
853 citations
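The scaling law reported in the abstract, v_x(z) ∝ H^α, can be recovered from flow data by a fit in log-log space. The sketch below uses synthetic, made-up heights and velocities (only the exponent α ≈ 1.52 comes from the paper) to illustrate how such an exponent would be extracted:

```python
import numpy as np

# Hypothetical illustration: recovering the exponent alpha in the scaling
# law v_x ∝ H^alpha. The heights and prefactor below are invented; only
# the exponent reflects the paper's reported value alpha ≈ 1.52.
alpha_true = 1.52
H = np.array([20.0, 40.0, 80.0, 160.0])  # pile heights (arbitrary units)
v = 0.3 * H**alpha_true                  # synthetic mean flow velocities

# A power law is linear in log-log space: log v = alpha * log H + const,
# so the fitted slope is the exponent.
alpha_fit, _ = np.polyfit(np.log(H), np.log(v), 1)
print(f"fitted alpha = {alpha_fit:.2f}")  # prints "fitted alpha = 1.52"
```

Because the synthetic data follow the power law exactly, the fit returns the exponent to machine precision; with real simulation data the ±0.05 uncertainty would come from the scatter about this line.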
TL;DR: In this paper, the authors examined size-scale and strain-rate effects on single-crystal face-centered cubic (fcc) metals and found that dislocations nucleating at free surfaces are critical to causing microyield and macroyield in pristine material.
271 citations
TL;DR: In this paper, simple-shear molecular dynamics simulations using embedded atom method (EAM) potentials were performed on single crystals, and various parametric effects on the stress state and kinematics were quantified.
103 citations
01 Oct 2001
TL;DR: Icarus is a 2D Direct Simulation Monte Carlo code which has been optimized for the parallel computing environment and models flowfields from free-molecular to continuum in either Cartesian (x, y) or axisymmetric (z, r) coordinates.
Abstract: Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird[11.1] and models flowfields from free-molecular to continuum in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, each representing a given number of molecules or atoms, are tracked as they collide with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modeled. A new trace-species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas-phase chemistry is modeled using steric factors derived from Arrhenius reaction rates or in a manner similar to continuum modeling. Surface chemistry is modeled with surface reaction probabilities; an optional site-density, energy-dependent coverage model is included. Electrons are modeled either by a local charge-neutrality assumption or as discrete simulation particles. Ion chemistry is modeled with electron-impact chemistry rates and charge-exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can be externally input, computed from a Langmuir-Tonks model, or obtained from a Green's function (boundary element) based Poisson solver. Icarus has been used for subsonic to hypersonic, chemically reacting, and plasma flows. The Icarus software package includes the grid generation, parallel processor decomposition, post-processing, and restart software. The commercial graphics package Tecplot is used for graphics display. All of the software packages are written in standard Fortran.
38 citations
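The core of the DSMC method the abstract describes is stochastic: within each cell, candidate particle pairs are tested for collision with a probability proportional to their relative speed, and accepted pairs scatter while conserving momentum and energy. The following is a minimal, hypothetical single-cell sketch in the spirit of Bird's hard-sphere model; the function name and parameters are illustrative, not Icarus's actual API:

```python
import numpy as np

def collide_cell(vel, n_candidates, rng):
    """Hard-sphere DSMC collisions among the particles of one cell.

    vel : (N, 3) array of particle velocities, modified in place.
    n_candidates : number of candidate pairs to test this timestep
                   (in a full code this comes from the NTC formula).
    """
    n = len(vel)
    vr_max = 1.0  # running estimate of the maximum relative speed
    for _ in range(n_candidates):
        i, j = rng.choice(n, size=2, replace=False)
        vr = np.linalg.norm(vel[i] - vel[j])
        vr_max = max(vr_max, vr)
        if rng.random() < vr / vr_max:  # acceptance-rejection on rel. speed
            # Isotropic post-collision relative velocity, same magnitude,
            # so kinetic energy is conserved.
            cos_t = 2.0 * rng.random() - 1.0
            sin_t = np.sqrt(1.0 - cos_t**2)
            phi = 2.0 * np.pi * rng.random()
            vr_new = vr * np.array([cos_t,
                                    sin_t * np.cos(phi),
                                    sin_t * np.sin(phi)])
            vcm = 0.5 * (vel[i] + vel[j])  # center-of-mass velocity (equal masses)
            vel[i] = vcm + 0.5 * vr_new
            vel[j] = vcm - 0.5 * vr_new

rng = np.random.default_rng(0)
vel = rng.normal(size=(100, 3))
p_before = vel.sum(axis=0).copy()
collide_cell(vel, n_candidates=50, rng=rng)
print(np.allclose(vel.sum(axis=0), p_before))  # momentum conserved: True
```

Splitting the domain into cells and looping this kernel over them, plus particle advection and surface interactions, is what a full DSMC code adds on top of this step.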
17 Jun 2001
TL;DR: An efficient algorithm has been developed for dynamically rebalancing the particle workload on a timestep-by-timestep basis; the strategies and algorithms used to parallelize and dynamically load-balance the code are described.
Abstract: Summary form only given. QUICKSILVER is a 3-D electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged-particle transport. It was originally written for shared-memory, multi-processor supercomputers such as the Cray X-MP. A new parallel version of QUICKSILVER has been developed to enable large-scale simulations to be efficiently run on massively parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and Cplant machines at Sandia. The new parallel code implements all the features of the original QUICKSILVER and can be run on any platform that supports the message-passing interface (MPI) standard, as well as on single-processor workstations. The original QUICKSILVER code was based on a multiple-block grid, which provided a natural strategy for extending the code to partition a simulation among multiple processors. By adding the automated capability to divide QUICKSILVER's existing blocks into sub-blocks and then distribute those sub-blocks among processors, a simulation's spatial domain can be easily and efficiently partitioned. Based upon this partitioning scheme, as well as QUICKSILVER's existing particle-handling infrastructure, an algorithm has been developed for dynamically rebalancing the particle workload on a timestep-by-timestep basis that has proven to be very efficient. This paper elaborates on the strategies used and describes the algorithms developed to parallelize and dynamically load-balance the code. Results of several benchmark simulations are presented that illustrate the code's performance and parallel efficiency for a wide variety of simulation conditions. These calculations have as many as 10^8 grid cells and 10^9 particles and were run on thousands of processors.
8 citations
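The rebalancing idea the abstract describes, sub-blocks with varying particle counts redistributed among processors each timestep, can be illustrated with a simple greedy heuristic: assign the heaviest remaining sub-block to the currently least-loaded processor. This is a hypothetical sketch of the concept, not QUICKSILVER's actual algorithm, and all names are invented:

```python
import heapq

def balance(subblock_particles, n_procs):
    """Greedily assign sub-blocks (weighted by particle count) to processors.

    Returns (assignment, loads): the processor id for each sub-block and
    the resulting total particle count per processor.
    """
    heap = [(0, p) for p in range(n_procs)]  # (current load, proc id)
    heapq.heapify(heap)
    assignment = [None] * len(subblock_particles)
    # Place heavy sub-blocks first so the final loads come out even
    # (the longest-processing-time heuristic).
    order = sorted(range(len(subblock_particles)),
                   key=lambda b: -subblock_particles[b])
    for b in order:
        load, p = heapq.heappop(heap)
        assignment[b] = p
        heapq.heappush(heap, (load + subblock_particles[b], p))
    loads = [0] * n_procs
    for b, p in enumerate(assignment):
        loads[p] += subblock_particles[b]
    return assignment, loads

# Six sub-blocks with unequal particle counts, spread over 3 processors.
assignment, loads = balance([90, 10, 40, 60, 30, 70], n_procs=3)
print(loads)  # total particles per processor
```

Rerunning such an assignment every timestep, as the paper's particle counts shift, keeps per-processor work roughly uniform; a production code would also weigh the cost of migrating particle data between processors.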