Author

Bryan Carpenter

Bio: Bryan Carpenter is an academic researcher from Indiana University. The author has contributed to research in topics: Java & Scala. The author has an h-index of 4 and has co-authored 7 publications receiving 43 citations.

Papers
Journal ArticleDOI
TL;DR: HPJava, as described in this paper, is presented as an environment for high-performance grid-enabled computing; the paper covers its run-time communication library, compilation strategies, and optimization schemes, and argues that HPJava can be used not only for parallel computing but also for grid-enabled applications.
Abstract: The paper begins by considering what a grid computing environment might be, why it is in demand, and how the authors' HPspmd programming model fits into this picture. We then review our HPJava environment as a contribution towards programming support for high-performance grid-enabled environments. Future grid computing systems will need to provide programming models; in a proper programming model for grid-enabled environments and applications, high performance on multi-processor systems is a critical issue. We describe the features of HPJava, including its run-time communication library, compilation strategies and optimization schemes. Through experiments, we compare HPJava programs against Fortran and ordinary Java programs. We aim to demonstrate that HPJava can be used “anywhere”: not only for high-performance parallel computing, but also for grid-enabled applications.

14 citations

01 Jan 2003
TL;DR: The paper describes the novel issues in the implementation of the device-level library on different platforms, and gives comprehensive benchmark results on a parallel platform.
Abstract: Two characteristic run-time communication libraries of HPJava are developed: an application-level library and a device-level library. A high-level communication API, Adlib, is developed as the application-level communication library; it supports collective operations on distributed arrays. The mpjdev API is the device-level communication library underlying HPJava, developed to perform the actual communication between processes. The paper describes the novel issues in the implementation of the device-level library on different platforms, and gives comprehensive benchmark results on a parallel platform. All software developed in this project is available for free download from www.hpjava.org. An example HPJava fragment from the abstract:

    Procs2 p = new Procs2(P, P) ;
    on(p) {
        Range x = new BlockRange(M, p.dim(0)) ;
        Range y = new BlockRange(N, p.dim(1)) ;

        float [[-,-]] a = new float [[x, y]],
                      b = new float [[x, y]],
                      c = new float [[x, y]] ;

        // ... initialize values in `a', `b'

        overall(i = x for :)
            overall(j = y for :)
                c[i, j] = a[i, j] + b[i, j] ;
    }
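As a further, purely illustrative sketch of the application-level API, the lines below continue the fragment above (they would sit inside the on(p) block, before its closing brace). They assume the Adlib collectives remap and maxval and the CyclicRange distribution format as described in the HPJava project documentation; the names and signatures are assumptions, not code from the paper.

        // Sketch only (assumed Adlib API): redistribute `c' into a cyclic layout,
        // then compute a global reduction over its elements.
        Range xc = new CyclicRange(M, p.dim(0)) ;      // a different distribution format
        float [[-,-]] d = new float [[xc, y]] ;

        Adlib.remap(d, c) ;                            // collective copy c -> d across layouts
        float top = Adlib.maxval(c) ;                  // collective max over all elements of `c'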

11 citations

Journal ArticleDOI
01 May 2003
TL;DR: The HPJava project aims to support scientific and parallel computing in a modern, object-oriented, Internet-friendly environment - the Java platform, and introduces a slightly unusual parallel programming model somewhere in between the classical HPF and message-passing interface (MPI) extremes.
Abstract: We consider a project that's ongoing at our Pervasive Technology Lab at Indiana University. The HPJava (high-performance Java) project aims to support scientific and parallel computing in a modern, object-oriented, Internet-friendly environment - the Java platform. HPJava leverages popular high-performance Fortran (HPF) language and library features such as "scientific" multidimensional array syntax and distributed arrays, while at a more language-independent level, it introduces a slightly unusual parallel programming model, somewhere in between the classical HPF and message-passing interface (MPI) extremes.

8 citations

Book ChapterDOI
02 Oct 2003
TL;DR: Two applications of the HPJava language for parallel computing are described, one a multigrid solver for a Poisson equation, and the second a CFD application that solves the Euler equations for inviscid flow.
Abstract: We describe two applications of our HPJava language for parallel computing. The first is a multigrid solver for a Poisson equation, and the second is a CFD application that solves the Euler equations for inviscid flow. We illustrate how the features of the HPJava language allow these algorithms to be expressed in a straightforward and convenient way. Performance results on an IBM SP3 are presented.
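To give a flavour of how such a solver looks in HPJava, here is a minimal sketch of a single relaxation sweep with ghost regions. It is not code from the paper: it assumes the ExtBlockRange range class and the Adlib.writeHalo collective described in the HPJava documentation, and the grid size, halo width, and variable names are illustrative.

    // Sketch: one relaxation sweep for a Poisson problem on an N x N grid,
    // block-distributed over a 2 x 2 process grid with halo (ghost) regions of width 1.
    final int N = 128 ;
    final double h = 1.0 / (N - 1) ;                    // mesh spacing

    Procs2 q = new Procs2(2, 2) ;
    on(q) {
        Range x = new ExtBlockRange(N, q.dim(0), 1) ;   // distributed range with halo width 1
        Range y = new ExtBlockRange(N, q.dim(1), 1) ;

        double [[-,-]] u = new double [[x, y]] ;        // solution estimate
        double [[-,-]] f = new double [[x, y]] ;        // right-hand side

        // ... initialize `u' and `f'

        Adlib.writeHalo(u) ;                            // refresh ghost cells from neighbours

        overall(i = x for 1 : N - 2)
            overall(j = y for 1 : N - 2)
                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                  u[i, j - 1] + u[i, j + 1] - h * h * f[i, j]) ;
    }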

4 citations

Journal Article
TL;DR: In this article, the authors describe two applications of the HPJava language for parallel computing, one is a multigrid solver for a Poisson equation, and the other is a CFD application that solves the Euler equations for inviscid flow.
Abstract: We describe two applications of our HPJava language for parallel computing. The first is a multigrid solver for a Poisson equation, and the second is a CFD application that solves the Euler equations for inviscid flow. We illustrate how the features of the HPJava language allow these algorithms to be expressed in a straightforward and convenient way. Performance results on an IBM SP3 are presented.

3 citations


Cited by
Proceedings ArticleDOI
25 Sep 2006
TL;DR: The implementation of MPJ Express is described and a performance comparison against various other C and Java messaging systems is presented.
Abstract: MPJ Express is a thread-safe Java messaging library that provides a full implementation of the mpiJava 1.2 API specification. This specification defines MPI-like bindings for the Java language. We have implemented two communication devices as part of our library: the first, called niodev, is based on the Java New I/O package, and the second, called mxdev, is based on the Myrinet eXpress library. MPJ Express comes with an experimental runtime, which allows portable bootstrapping of Java Virtual Machines across a cluster or network of computers. In this paper we describe the implementation of MPJ Express and present a performance comparison against various other C and Java messaging systems. A beta version of MPJ Express was released in September 2005.
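For readers unfamiliar with the mpiJava 1.2 API that MPJ Express implements, the sketch below shows a minimal point-to-point program in that style. The MPI.Init/Finalize, COMM_WORLD.Rank()/Size() and Send/Recv calls follow the mpiJava 1.2 specification; the class name, message tag, and launch details are illustrative assumptions.

    import mpi.MPI;

    // Minimal mpiJava-1.2-style program: process 0 sends one int to every other process.
    public class HelloMPJ {
        public static void main(String[] args) throws Exception {
            MPI.Init(args);

            int rank = MPI.COMM_WORLD.Rank();
            int size = MPI.COMM_WORLD.Size();
            int[] buf = new int[1];

            if (rank == 0) {
                for (int dest = 1; dest < size; dest++) {
                    buf[0] = dest;
                    MPI.COMM_WORLD.Send(buf, 0, 1, MPI.INT, dest, 99);
                }
            } else {
                MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, 99);
                System.out.println("Process " + rank + " of " + size + " got " + buf[0]);
            }

            MPI.Finalize();
        }
    }

Such a program would typically be launched across a cluster with the experimental runtime mentioned in the abstract (for example via the mpjrun script shipped with MPJ Express), which bootstraps a JVM on each node.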

100 citations

Journal ArticleDOI
TL;DR: This article introduces a system called Satin that simplifies the development of parallel grid applications by providing a rich high-level programming model that completely hides communication, and shows that the divide-and-conquer model scales better on large systems than the master-worker approach, since it has no single central bottleneck.
Abstract: Computational grids have an enormous potential to provide compute power. However, this power remains largely unexploited today for most applications, except trivially parallel programs. Developing parallel grid applications simply is too difficult. Grids introduce several problems not encountered before, mainly due to the highly heterogeneous and dynamic computing and networking environment. Furthermore, failures occur frequently, and resources may be claimed by higher-priority jobs at any time. In this article, we solve these problems for an important class of applications: divide-and-conquer. We introduce a system called Satin that simplifies the development of parallel grid applications by providing a rich high-level programming model that completely hides communication. All grid issues are transparently handled in the runtime system, not by the programmer. Satin's programming model is based on Java, features spawn-sync primitives and shared objects, and uses asynchronous exceptions and an abort mechanism to support speculative parallelism. To allow an efficient implementation, Satin consistently exploits the idea that grids are hierarchically structured. Dynamic load-balancing is done with a novel cluster-aware scheduling algorithm that hides the long wide-area latencies by overlapping them with useful local work. Satin's shared object model lets the application define the consistency model it needs. If an application needs only loose consistency, it does not have to pay high performance penalties for wide-area communication and synchronization. We demonstrate how grid problems such as resource changes and failures can be handled transparently and efficiently. Finally, we show that adaptivity is important in grids. Satin can increase performance considerably by adding and removing compute resources automatically, based on the application's requirements and the utilization of the machines and networks in the grid. Using an extensive evaluation on real grids with up to 960 cores, we demonstrate that it is possible to provide a simple high-level programming model for divide-and-conquer applications, while achieving excellent performance on grids. At the same time, we show that the divide-and-conquer model scales better on large systems than the master-worker approach, since it has no single central bottleneck.
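As an illustration of the spawn-sync model mentioned above, here is the canonical divide-and-conquer example in Satin's style. It assumes the ibis.satin package names (Spawnable, SatinObject, sync()) used in the Satin papers; treat it as a sketch rather than verbatim Satin code, since the bytecode rewriter and runtime configuration steps are omitted.

    import ibis.satin.SatinObject;
    import ibis.satin.Spawnable;

    // Methods declared in this marker interface are spawned as parallel tasks.
    interface FibSpawns extends Spawnable {
        long fib(int n);
    }

    // Divide-and-conquer Fibonacci: recursive calls are spawned and may be stolen
    // by other machines in the grid; sync() blocks until their results are ready.
    public class Fib extends SatinObject implements FibSpawns {
        public long fib(int n) {
            if (n < 2) {
                return n;
            }
            long x = fib(n - 1);   // spawned
            long y = fib(n - 2);   // spawned
            sync();                // x and y are valid only after sync()
            return x + y;
        }

        public static void main(String[] args) {
            Fib f = new Fib();
            long result = f.fib(30);
            f.sync();
            System.out.println("fib(30) = " + result);
        }
    }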

70 citations

Journal IssueDOI
TL;DR: This paper evaluates and compares the performance of the Java and C versions of these two scientific applications, and demonstrates that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages.
Abstract: In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java. These include portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes, and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both of these applications are parallelized using our thread-safe Java messaging system, MPJ Express. The first application is the Gadget-2 code, which is a massively parallel structure formation code for cosmological simulations. The second application uses the finite-difference time-domain (FDTD) method for simulations in the area of computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages. Copyright © 2009 John Wiley & Sons, Ltd.

32 citations

Journal ArticleDOI
01 Aug 2007
TL;DR: This work addresses Titanium's partitioned global address space model, single program multiple data parallelism support, multi-dimensional arrays and array-index calculus, memory management, immutable classes, operator overloading, and generic programming.
Abstract: We describe the rationale behind the design of key features of Titanium - an explicitly parallel dialect of Java for high-performance scientific programming - and our experiences in building applications with the language. Specifically, we address Titanium's partitioned global address space model, single program multiple data parallelism support, multi-dimensional arrays and array-index calculus, memory management, immutable classes (class-like types that are value types rather than reference types), operator overloading, and generic programming. We provide an overview of the Titanium compiler implementation, covering various parallel analyses and optimizations, Titanium runtime technology and the GASNet network communication layer. We summarize results and lessons learned from implementing the NAS parallel benchmarks, elliptic and hyperbolic solvers using adaptive mesh refinement, and several applications of the immersed boundary method.
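To make the dialect concrete, the fragment below sketches Titanium's SPMD style: every process runs main(), Ti.thisProc()/Ti.numProcs()/Ti.barrier() identify and synchronize processes, and foreach iterates over the rectangular domain of a multi-dimensional array. The constructs are taken from the Titanium language documentation, but the specific example is an illustrative assumption, not code from the article.

    // Titanium (a Java dialect for SPMD programming), not plain Java.
    class TouchGrid {
        public static void main(String[] args) {
            int n = 64;

            // A 2-D Titanium array over the rectangular domain [0,0]..[n,n].
            double [2d] grid = new double[[0, 0] : [n, n]];

            // Iterate over every point of the array's domain on this process.
            foreach (p in grid.domain()) {
                grid[p] = Ti.thisProc();
            }

            Ti.barrier();                  // global synchronization across all processes
            if (Ti.thisProc() == 0) {
                System.out.println("done on " + Ti.numProcs() + " processes");
            }
        }
    }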

31 citations