Author
R. A. Gingold
Bio: R. A. Gingold is an academic researcher from the University of Illinois at Urbana–Champaign. The author has contributed to research on topics including stellar structure and angular momentum, has an h-index of 4, and has co-authored 7 publications receiving 6,585 citations.
Papers
6,206 citations
TL;DR: In this article, the particle method SPH is applied to one-dimensional shock tube problems by incorporating an artificial viscosity into the equations of motion; with a standard bulk or von Neumann–Richtmyer viscosity the results show either excessive oscillation or excessive smearing of the shock front, so a new form of artificial viscosity is proposed that yields sharp, oscillation-free shocks.
Abstract: The particle method SPH is applied to one-dimensional shock tube problems by incorporating an artificial viscosity into the equations of motion. When the artificial viscosity is either a bulk viscosity or the von Neumann–Richtmyer viscosity, in a form analogous to that for finite differences, the results show either excessive oscillation or excessive smearing of the shock front. The reason for the excessive particle oscillation is that, in the standard form, the artificial viscosity cannot damp irregular motion on the scale of the particle separation, since that scale is usually less than the resolution of the interpolating kernel. We propose a new form of artificial viscosity which eliminates this problem. The resulting shock simulation has negligible oscillation and satisfactorily sharp discontinuities. Results with a Gaussian interpolating kernel (with second-order errors) are shown to be greatly inferior to those with a super-Gaussian kernel (with fourth-order errors).
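The pairwise artificial viscosity described in this abstract lends itself to a short illustration. Below is a minimal 1D sketch, assuming a Gaussian kernel, illustrative constants alpha, beta and eps, and the standard SPH momentum equation; it is a sketch of the general technique, not the paper's exact configuration.

```python
# Minimal 1D SPH sketch of a pairwise artificial viscosity of the kind discussed
# above. Kernel choice, particle setup and the constants alpha/beta/eps are
# illustrative assumptions, not the paper's exact values.
import numpy as np

def gaussian_kernel_grad(dx, h):
    """Gradient of a 1D Gaussian kernel W(dx, h) with respect to dx."""
    w = np.exp(-(dx / h) ** 2) / (h * np.sqrt(np.pi))
    return -2.0 * dx / h**2 * w

def artificial_viscosity(dx, dv, rho_i, rho_j, c_i, c_j, h,
                         alpha=1.0, beta=2.0, eps=0.01):
    """Pairwise viscous term Pi_ij, switched on only for approaching particles."""
    if dv * dx >= 0.0:                               # receding pair: no dissipation
        return 0.0
    mu = h * dv * dx / (dx * dx + eps * h * h)       # pairwise compression estimate
    c_bar = 0.5 * (c_i + c_j)
    rho_bar = 0.5 * (rho_i + rho_j)
    return (-alpha * c_bar * mu + beta * mu * mu) / rho_bar

def momentum_rhs(x, v, m, rho, P, c, h):
    """dv/dt for each particle: symmetric pressure gradient plus artificial viscosity."""
    n = len(x)
    a = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx, dv = x[i] - x[j], v[i] - v[j]
            pi_ij = artificial_viscosity(dx, dv, rho[i], rho[j], c[i], c[j], h)
            a[i] -= m[j] * (P[i] / rho[i]**2 + P[j] / rho[j]**2 + pi_ij) \
                    * gaussian_kernel_grad(dx, h)
    return a
```

The point of the sketch matches the abstract: because the viscous term is built from the pairwise velocity difference and acts only on approaching pairs, it can damp relative motion on the scale of the particle separation rather than only on the scale of the kernel.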
1,119 citations
55 citations
14 citations
TL;DR: In this article, the authors show that spurious angular momentum transport can seriously affect the evolution of a rotating, self-gravitating cloud and may determine whether a ring-like density enhancement occurs.
Abstract: Recent numerical experiments (Norman et al. 1980), which simulate the axisymmetric collapse of a rotating, self-gravitating cloud, show that spurious angular momentum transport can seriously affect the evolution of the cloud. In particular, it may determine whether a ring-like density enhancement will occur. The spurious angular momentum transport can arise either from an explicit artificial viscosity, which might be required if shocks occur, or from an implicit viscosity due to truncation errors in the difference-equation approximation to the exact equations. In donor cell schemes like those used by Tohline (1980) and Boss (1980), spurious angular momentum transport is due to truncation errors in the difference equations. For axisymmetric problems the errors are usually not serious, since the typical length of a cell in the computational grid is very much less than the length scale of the cloud. We would expect the errors to be much greater when fragmentation occurs, because the length scale of a fragment may be comparable to that of only three or four cells.
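The truncation-error mechanism blamed in this abstract can be made concrete with a first-order donor-cell (upwind) advection step: its leading error term behaves like a diffusion of the advected quantity, which is how angular momentum can be transported spuriously once a fragment spans only a few cells. The sketch below is a generic 1D illustration with an assumed grid, velocity and initial profile; it is not the scheme of Tohline (1980) or Boss (1980).

```python
# Illustrative 1D donor-cell (first-order upwind) advection step, showing how
# truncation error acts like a numerical diffusion on the advected quantity.
# Grid, velocity and profile are assumptions for demonstration only.
import numpy as np

def donor_cell_step(q, u, dx, dt):
    """Advance q_t + u q_x = 0 one step with donor-cell fluxes (u > 0 assumed)."""
    flux = u * q                                   # flux taken from the donor (upwind) cell
    qn = q.copy()
    qn[1:] -= dt / dx * (flux[1:] - flux[:-1])
    return qn

dx, u = 1.0 / 200, 1.0
dt = 0.4 * dx / u                                  # CFL-limited time step
x = np.arange(200) * dx
q = np.where(np.abs(x - 0.25) < 0.01, 1.0, 0.0)    # sharp pulse a few cells wide
for _ in range(100):
    q = donor_cell_step(q, u, dx, dt)
# The pulse arrives smeared: the leading truncation error of donor cell acts as
# a diffusion with coefficient ~ 0.5 * u * dx * (1 - u*dt/dx).
print(q.max())   # noticeably below 1.0 after crossing only a fifth of the domain
```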
4 citations
Cited by
TL;DR: GADGET-2, as presented in this paper, is a massively parallel TreeSPH code capable of following a collisionless fluid with the N-body method and an ideal gas by means of smoothed particle hydrodynamics.
Abstract: We discuss the cosmological simulation code GADGET-2, a new massively parallel TreeSPH code, capable of following a collisionless fluid with the N-body method, and an ideal gas by means of smoothed particle hydrodynamics (SPH). Our implementation of SPH manifestly conserves energy and entropy in regions free of dissipation, while allowing for fully adaptive smoothing lengths. Gravitational forces are computed with a hierarchical multipole expansion, which can optionally be applied in the form of a TreePM algorithm, where only short-range forces are computed with the 'tree' method while long-range forces are determined with Fourier techniques. Time integration is based on a quasi-symplectic scheme where long-range and short-range forces can be integrated with different time-steps. Individual and adaptive short-range time-steps may also be employed. The domain decomposition used in the parallelization algorithm is based on a space-filling curve, resulting in high flexibility and tree force errors that do not depend on the way the domains are cut. The code is efficient in terms of memory consumption and required communication bandwidth. It has been used to compute the first cosmological N-body simulation with more than 10^10 dark matter particles, reaching a homogeneous spatial dynamic range of 10^5 per dimension in a three-dimensional box. It has also been used to carry out very large cosmological SPH simulations that account for radiative cooling and star formation, reaching total particle numbers of more than 250 million. We present the algorithms used by the code and discuss their accuracy and performance using a number of test problems. GADGET-2 is publicly released to the research community. Keywords: methods: numerical - galaxies: interactions - dark matter.
6,196 citations
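The TreePM split mentioned in the abstract can be sketched in a few lines: the 1/r^2 force is divided into a short-range piece handled pairwise via the tree walk and a smooth long-range piece handled on a mesh with FFTs. The snippet below shows only the short-range tapering, using the standard Ewald-style Gaussian split commonly used for TreePM; the split scale r_split and the units are assumptions for illustration, not GADGET-2's actual parameters.

```python
# Sketch of a TreePM-style force split: the short-range part decays quickly
# beyond r_split and is computed by the tree, while the long-range remainder
# is smooth and is left to the mesh/FFT part (not shown). r_split is assumed.
import numpy as np
from math import erfc, sqrt, pi, exp

def short_range_force(dx, m_j, r_split, G=1.0):
    """Short-range part of the gravitational force from particle j (dx = x_i - x_j)."""
    r = np.linalg.norm(dx)
    x = r / (2.0 * r_split)
    # Tapering factor -> 1 for r << r_split and -> 0 for r >> r_split,
    # so distant interactions are left entirely to the mesh.
    taper = erfc(x) + (2.0 * x / sqrt(pi)) * exp(-x * x)
    return -G * m_j * taper * dx / r**3

# Example: beyond a few split lengths the tree contribution is negligible.
for r in (0.5, 1.0, 2.0, 5.0):
    f = short_range_force(np.array([r, 0.0, 0.0]), m_j=1.0, r_split=1.0)
    print(r, f[0])
```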
TL;DR: In this article, an element-free Galerkin method which is applicable to arbitrary shapes but requires only nodal data is applied to elasticity and heat conduction problems, where moving least-squares interpolants are used to construct the trial and test functions for the variational principle.
Abstract: An element-free Galerkin method which is applicable to arbitrary shapes but requires only nodal data is applied to elasticity and heat conduction problems. In this method, moving least-squares interpolants are used to construct the trial and test functions for the variational principle (weak form); the dependent variable and its gradient are continuous in the entire domain. In contrast to an earlier formulation by Nayroles and coworkers, certain key differences are introduced in the implementation to increase its accuracy. The numerical examples in this paper show that with these modifications, the method does not exhibit any volumetric locking, the rate of convergence can exceed that of finite elements significantly and a high resolution of localized steep gradients can be achieved. The moving least-squares interpolants and the choices of the weight function are also discussed in this paper.
5,324 citations
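The moving least-squares interpolants at the heart of the element-free Galerkin method are easy to sketch in 1D. The snippet below builds the shape functions for a linear basis with an assumed compact weight function and nodal support size; it illustrates the construction, not the authors' implementation.

```python
# Minimal 1D moving least-squares (MLS) approximation of the kind the
# element-free Galerkin method builds its trial and test functions from.
# The cubic weight and linear basis are illustrative choices.
import numpy as np

def weight(r):
    """Compact cubic weight on normalized distance r = |x - x_I| / d_I."""
    r = np.abs(r)
    return np.where(r <= 1.0, 1.0 - 3.0 * r**2 + 2.0 * r**3, 0.0)

def mls_shape_functions(x, nodes, support):
    """Shape functions phi_I(x) for a linear basis p = [1, x]."""
    p_x = np.array([1.0, x])
    P = np.vstack([np.ones_like(nodes), nodes]).T        # basis evaluated at the nodes
    w = weight((x - nodes) / support)                     # nodal weights at x
    A = (P * w[:, None]).T @ P                            # moment matrix
    B = (P * w[:, None]).T                                # weighted basis
    return p_x @ np.linalg.solve(A, B)                    # phi(x), one value per node

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape_functions(0.37, nodes, support=0.25)
# Linear fields are reproduced exactly (partition of unity and linear consistency).
print(phi.sum(), phi @ nodes)   # ~1.0 and ~0.37
```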
TL;DR: In this article, the theory and application of Smoothed particle hydrodynamics (SPH) since its inception in 1977 are discussed, focusing on the strengths and weaknesses, the analogy with particle dynamics and the numerous areas where SPH has been successfully applied.
Abstract: In this review the theory and application of Smoothed particle hydrodynamics (SPH) since its inception in 1977 are discussed. Emphasis is placed on the strengths and weaknesses, the analogy with particle dynamics and the numerous areas where SPH has been successfully applied.
4,070 citations
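For readers unfamiliar with SPH, the kernel interpolation the review is built around can be written down directly: any field is estimated as a smoothing-kernel sum over particles, and the density follows from the same sum. The 1D cubic-spline kernel and particle layout below are common choices used purely for illustration.

```python
# Minimal sketch of the kernel interpolation at the core of SPH.
import numpy as np

def cubic_spline_1d(q, h):
    """Standard 1D cubic spline kernel W(q, h), q = |dx|/h, compact support 2h."""
    sigma = 2.0 / (3.0 * h)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                            np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def sph_density(x_eval, x_p, m_p, h):
    """rho(x_eval) = sum_j m_j W(x_eval - x_j, h)."""
    q = np.abs(x_eval - x_p) / h
    return np.sum(m_p * cubic_spline_1d(q, h))

def sph_interpolate(x_eval, x_p, m_p, rho_p, A_p, h):
    """<A>(x_eval) = sum_j (m_j A_j / rho_j) W(x_eval - x_j, h)."""
    q = np.abs(x_eval - x_p) / h
    return np.sum(m_p * A_p / rho_p * cubic_spline_1d(q, h))

# Assumed uniform particles of total mass 1 on the unit interval.
x_p = np.linspace(0.0, 1.0, 51)
m_p = np.full_like(x_p, 1.0 / len(x_p))
h = 0.04
rho_p = np.array([sph_density(x, x_p, m_p, h) for x in x_p])
print(sph_interpolate(0.5, x_p, m_p, rho_p, x_p**2, h))   # ~0.25, i.e. x^2 at x = 0.5
```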
TL;DR: A new continuous reproducing kernel interpolation function which explores the attractive features of the flexible time-frequency and space-wave number localization of a window function is developed and is called the reproducing kernel particle method (RKPM).
Abstract: A new continuous reproducing kernel interpolation function which explores the attractive features of the flexible time-frequency and space-wave number localization of a window function is developed. This method is motivated by the theory of wavelets and also has the desirable attributes of the recently proposed smooth particle hydrodynamics (SPH) methods, moving least squares methods (MLSM), diffuse element methods (DEM) and element-free Galerkin methods (EFGM). The proposed method maintains the advantages of the free Lagrange or SPH methods; however, because of the addition of a correction function, it gives much more accurate results. Therefore it is called the reproducing kernel particle method (RKPM). In computer implementation RKPM is shown to be more efficient than DEM and EFGM. Moreover, if the window function is C∞, the solution and its derivatives are also C∞ in the entire domain. Theoretical analysis and numerical experiments on the 1D diffusion equation reveal the stability conditions and the effect of the dilation parameter on the unusually high convergence rates of the proposed method. Two-dimensional examples of advection-diffusion equations and compressible Euler equations are also presented together with 2D multiple-scale decompositions.
2,682 citations
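The correction function that distinguishes RKPM from plain SPH-style kernels can be illustrated in 1D: the window values are multiplied by a low-order polynomial whose coefficients are fixed by discrete moment (reproduction) conditions, so constants and linear fields are recovered exactly. The Gaussian window, uniform particle spacing and linear basis below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the RKPM correction idea in 1D: corrected kernel values
# N_j(x) = C(x; x_j) * W(x - x_j) * dV_j, with C chosen so the discrete sum
# reproduces constants and linear functions exactly.
import numpy as np

def window(dx, a):
    """Gaussian window function with dilation parameter a (assumed choice)."""
    return np.exp(-(dx / a) ** 2) / (a * np.sqrt(np.pi))

def rkpm_shape_functions(x, x_p, dV, a):
    dx = x - x_p
    w = window(dx, a)
    # Discrete moments m_k = sum_j dx^k W dV for k = 0, 1, 2.
    m = np.array([np.sum(dx**k * w * dV) for k in range(3)])
    M = np.array([[m[0], m[1]],
                  [m[1], m[2]]])                  # moment matrix for a linear basis
    b = np.linalg.solve(M, np.array([1.0, 0.0]))  # reproduction conditions
    C = b[0] + b[1] * dx                          # correction function
    return C * w * dV

x_p = np.linspace(0.0, 1.0, 21)
dV = np.full_like(x_p, x_p[1] - x_p[0])
N = rkpm_shape_functions(0.43, x_p, dV, a=0.1)
print(N.sum(), N @ x_p)   # ~1.0 and ~0.43: constants and x are reproduced exactly
```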
18 Dec 2006
TL;DR: The parallel landscape is framed with seven questions, and the report recommends, among other things, that the overarching goal should be to make it easy to write programs that execute efficiently on highly parallel computing systems, and that the target should be 1000s of cores per chip, as these chips are built from processing elements that are the most efficient in MIPS (Million Instructions per Second) per watt, MIPS per area of silicon, and MIPS per development dollar.
Abstract (Asanovic, K; Bodik, R; Catanzaro, B; Gebis, J; Husbands, P; Keutzer, K; Patterson, D; Plishker, W; Shalf, J; Williams, SW): The recent switch to parallel microprocessors is a milestone in the history of computing. Industry has laid out a roadmap for multicore designs that preserves the programming paradigm of the past via binary compatibility and cache coherence. Conventional wisdom is now to double the number of cores on a chip with each silicon generation. A multidisciplinary group of Berkeley researchers met for nearly two years to discuss this change. Our view is that this evolutionary approach to parallel hardware and software may work from 2 or 8 processor systems, but is likely to face diminishing returns as 16 and 32 processor systems are realized, just as returns fell with greater instruction-level parallelism. We believe that much can be learned by examining the success of parallelism at the extremes of the computing spectrum, namely embedded computing and high performance computing. This led us to frame the parallel landscape with seven questions, and to recommend the following:
• The overarching goal should be to make it easy to write programs that execute efficiently on highly parallel computing systems.
• The target should be 1000s of cores per chip, as these chips are built from processing elements that are the most efficient in MIPS (Million Instructions per Second) per watt, MIPS per area of silicon, and MIPS per development dollar.
• Instead of traditional benchmarks, use 13 "Dwarfs" to design and evaluate parallel programming models and architectures. (A dwarf is an algorithmic method that captures a pattern of computation and communication.)
• "Autotuners" should play a larger role than conventional compilers in translating parallel programs.
• To maximize programmer productivity, future programming models must be more human-centric than the conventional focus on hardware or applications.
• To be successful, programming models should be independent of the number of processors.
• To maximize application efficiency, programming models should support a wide range of data types and successful models of parallelism: task-level parallelism, word-level parallelism, and bit-level parallelism.
• Architects should not include features that significantly affect performance or energy if programmers cannot accurately measure their impact via performance counters and energy counters.
• Traditional operating systems will be deconstructed and operating system functionality will be orchestrated using libraries and virtual machines.
• To explore the design space rapidly, use system emulators based on Field Programmable Gate Arrays (FPGAs) that are highly scalable and low cost.
Since real world applications are naturally parallel and hardware is naturally parallel, what we need is a programming model, system software, and a supporting architecture that are naturally parallel. Researchers have the rare opportunity to re-invent these cornerstones of computing, provided they simplify the efficient programming of highly parallel systems.
2,262 citations