Topic

Global Arrays

About: Global Arrays is a research topic. Over its lifetime, 97 publications have been published on this topic, receiving 6,671 citations. The topic is also known as: GA.


Papers
Journal Article
TL;DR: An overview of NWChem is provided, focusing primarily on the core theoretical modules provided by the code and their parallel performance; scalable parallel implementations and modular software design enable efficient utilization of current computational architectures.

4,666 citations

Journal Article
TL;DR: The design and some implementation details of the overall NWChem architecture are presented; the architecture facilitates rapid development and portability of fully distributed application modules, and the performance of a few of the modules within NWChem is shown.

726 citations

Journal Article
TL;DR: The key concept of GAs is that they provide a portable interface through which each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed matrices, with no need for explicit cooperation by other processes.
Abstract: Portability, efficiency, and ease of coding are all important considerations in choosing the programming model for a scalable parallel application. The message-passing programming model is widely used because of its portability, yet some applications are too complex to code in it while also trying to maintain a balanced computation load and avoid redundant computations. The shared-memory programming model simplifies coding, but it is not portable and often provides little control over interprocessor data transfer costs. This paper describes an approach, called Global Arrays (GAs), that combines the better features of both other models, leading to both simple coding and efficient execution. The key concept of GAs is that they provide a portable interface through which each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed matrices, with no need for explicit cooperation by other processes. We have implemented the GA library on a variety of computer systems, including the Intel Delta and Paragon, the IBM SP-1 and SP-2 (all message passers), the Kendall Square Research KSR-1/2 and the Convex SPP-1200 (nonuniform access shared-memory machines), the CRAY T3D (a globally addressable distributed-memory computer), and networks of UNIX workstations. We discuss the design and implementation of these libraries, report their performance, illustrate the use of GAs in the context of computational chemistry applications, and describe the use of a GA performance visualization tool.

354 citations
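The one-sided access model described in the abstract above can be made concrete with a short example. Below is a minimal sketch using the C bindings of the Global Arrays toolkit (GA_Initialize, NGA_Create, NGA_Put, NGA_Get); the array name, its dimensions, and the particular block written by process 0 are illustrative choices, not taken from the paper.

#include <stdio.h>
#include <mpi.h>
#include "ga.h"
#include "macdecls.h"

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    GA_Initialize();
    MA_init(C_DBL, 1000000, 1000000);   /* memory allocator used internally by GA */

    /* Create a 100x100 double-precision global array;
       chunk = -1 lets GA choose the data distribution. */
    int dims[2] = {100, 100}, chunk[2] = {-1, -1};
    int g_a = NGA_Create(C_DBL, 2, dims, "A", chunk);
    GA_Zero(g_a);

    int me = GA_Nodeid();

    /* One-sided put: process 0 writes a 10x10 logical block without any
       cooperation from the processes that physically own that data. */
    if (me == 0) {
        double buf[100];
        for (int i = 0; i < 100; i++) buf[i] = (double)i;
        int lo[2] = {0, 0}, hi[2] = {9, 9}, ld[1] = {10};
        NGA_Put(g_a, lo, hi, buf, ld);
    }
    GA_Sync();   /* make the update visible everywhere */

    /* One-sided get: any process may read any logical block, wherever it lives. */
    double recv[100];
    int lo[2] = {0, 0}, hi[2] = {9, 9}, ld[1] = {10};
    NGA_Get(g_a, lo, hi, recv, ld);
    if (me == GA_Nnodes() - 1)
        printf("element (3,4) = %g\n", recv[3 * 10 + 4]);

    GA_Destroy(g_a);
    GA_Terminate();
    MPI_Finalize();
    return 0;
}

The put/get pair is the essence of the model: data movement is explicit at the level of logical array blocks, but the owning processes never post a matching receive.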

Journal Article
01 May 2006
TL;DR: Compatibility of GA with MPI enables the programmer to take advantage of existing MPI software/libraries when available and appropriate, demonstrating the attractiveness of using higher-level abstractions to write parallel code.
Abstract: This paper describes the capabilities, evolution, performance, and applications of the Global Arrays (GA) toolkit. GA was created to provide application programmers with an interface that allows them to distribute data while maintaining a global index space and programming syntax similar to what is available when programming on a single processor. The goal of GA is to free the programmer from low-level management of communication and allow them to deal with their problems at the level at which they were originally formulated. At the same time, compatibility of GA with MPI enables the programmer to take advantage of existing MPI software/libraries when available and appropriate. The variety of applications that have been implemented using Global Arrays attests to the attractiveness of using higher-level abstractions to write parallel code.

341 citations
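The MPI compatibility mentioned in the abstract above means GA calls and plain MPI calls can be mixed freely in one program, since GA processes are the MPI ranks. The sketch below is my own illustration, assuming the GA C bindings: each process walks its locally owned block of a distributed vector via NGA_Distribution/NGA_Access, then combines partial sums with an ordinary MPI_Allreduce.

#include <stdio.h>
#include <mpi.h>
#include "ga.h"
#include "macdecls.h"

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);          /* MPI owns process startup...            */
    GA_Initialize();                 /* ...and GA attaches to the same ranks   */
    MA_init(C_DBL, 100000, 100000);

    int dims[1] = {1000}, chunk[1] = {-1};
    int g_v = NGA_Create(C_DBL, 1, dims, "v", chunk);
    double one = 1.0;
    GA_Fill(g_v, &one);
    GA_Sync();

    /* GA side: locate and read the locally owned block directly in place. */
    int me = GA_Nodeid(), lo[1], hi[1], ld[1];
    double sum = 0.0;
    NGA_Distribution(g_v, me, lo, hi);
    if (lo[0] >= 0) {                /* some ranks may own no elements */
        double *local;
        NGA_Access(g_v, lo, hi, &local, ld);
        for (int i = 0; i <= hi[0] - lo[0]; i++) sum += local[i];
        NGA_Release(g_v, lo, hi);
    }

    /* MPI side: combine the partial sums with a plain MPI collective. */
    double total;
    MPI_Allreduce(&sum, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    if (me == 0) printf("global sum = %g (expected 1000)\n", total);

    GA_Destroy(g_v);
    GA_Terminate();
    MPI_Finalize();
    return 0;
}

Because the two models share one process group, existing MPI libraries (solvers, I/O layers) can be called on data extracted from global arrays without any translation layer.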

Proceedings Article
14 Nov 1994
TL;DR: The key concept of GA is that it provides a portable interface through which each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed matrices, with no need for explicit cooperation by other processes.
Abstract: Portability, efficiency, and ease of coding are all important considerations in choosing the programming model for a scalable parallel application. The message-passing programming model is widely used because of its portability, yet some applications are too complex to code in it while also trying to maintain a balanced computation load and avoid redundant computations. The shared-memory programming model simplifies coding, but it is not portable and often provides little control over interprocessor data transfer costs. This paper describes a new approach, called Global Arrays (GA), that combines the better features of both other models, leading to both simple coding and efficient execution. The key concept of GA is that it provides a portable interface through which each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed matrices, with no need for explicit cooperation by other processes. We have implemented GA libraries on a variety of computer systems, including the Intel DELTA and Paragon, the IBM SP-1 (all message-passers), the Kendall Square KSR-2 (a nonuniform access shared-memory machine), and networks of Unix workstations. We discuss the design and implementation of these libraries, report their performance, illustrate the use of GA in the context of computational chemistry applications, and describe the use of a GA performance visualization tool.

224 citations

Network Information
Related Topics (5)

Parallel algorithm: 23.6K papers, 452.6K citations, 79% related
Virtual machine: 43.9K papers, 718.3K citations, 76% related
Scalability: 50.9K papers, 931.6K citations, 73% related
Load balancing (computing): 27.3K papers, 415.5K citations, 73% related
Compiler: 26.3K papers, 578.5K citations, 73% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2021    1
2019    1
2017    1
2016    6
2015    7
2014    1