Book ChapterDOI

MPJava: High-Performance Message Passing in Java Using Java.nio

02 Oct 2003, pp. 323-339
TL;DR: Advances in Java Virtual Machine technology, together with the new high-performance I/O libraries in Java 1.4, are explored, and Java is found to be an increasingly attractive platform for scientific cluster-based message-passing codes.
Abstract: We explore advances in Java Virtual Machine (JVM) technology along with new high performance I/O libraries in Java 1.4, and find that Java is increasingly an attractive platform for scientific cluster-based message passing codes.
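To make the abstract concrete, the sketch below shows the kind of java.nio primitive the paper builds on: packing a double array into a direct ByteBuffer and writing it through a SocketChannel. It is a minimal illustration only, not MPJava's actual API; the host name, port, and array size are made-up values.

```java
// Illustrative sketch of java.nio-based data transfer (not MPJava code).
// The peer host "peer-host" and port 9000 are hypothetical.
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NioSendExample {
    public static void main(String[] args) throws Exception {
        double[] data = new double[1024];                      // payload to send
        SocketChannel channel = SocketChannel.open(
                new InetSocketAddress("peer-host", 9000));     // hypothetical peer

        // Pack the doubles into a direct buffer so the runtime can hand the
        // memory to the OS without an extra copy (8 bytes per double).
        ByteBuffer buf = ByteBuffer.allocateDirect(data.length * 8);
        buf.asDoubleBuffer().put(data);

        while (buf.hasRemaining()) {                           // a write may be partial
            channel.write(buf);
        }
        channel.close();
    }
}
```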
Citations
Journal ArticleDOI
TL;DR: This paper analyzes the current state of Java for HPC, both for shared and distributed memory programming, presents related research projects, and evaluates the performance of current Java HPC solutions and research developments on two shared memory environments and two InfiniBand multi-core clusters.

100 citations


Cites background from "MPJava: High-Performance Message Pa..."

  • ...In this case, the higher programming effort required by the lower-level API allows for higher throughput, key in HPC....


Journal ArticleDOI
TL;DR: F-MPJ significantly improves the scalability of current MPJ implementations by providing efficient non-blocking communication, taking advantage of shared memory systems and high-performance networks, and optimizing MPJ collective primitives.
Abstract: This paper presents F-MPJ (Fast MPJ), a scalable and efficient Message-Passing in Java (MPJ) communication middleware for parallel computing. The increasing interest in Java as the programming language of the multi-core era demands scalable performance on hybrid architectures (with both shared and distributed memory spaces). However, current Java communication middleware lacks efficient communication support. F-MPJ boosts this situation by: (1) providing efficient non-blocking communication, which allows communication overlapping and thus scalable performance; (2) taking advantage of shared memory systems and high-performance networks through the use of our high-performance Java sockets implementation (named JFS, Java Fast Sockets); (3) avoiding the use of communication buffers; and (4) optimizing MPJ collective primitives. Thus, F-MPJ significantly improves the scalability of current MPJ implementations. A performance evaluation on an InfiniBand multi-core cluster has shown that F-MPJ communication primitives outperform representative MPJ libraries up to 60 times. Furthermore, the use of F-MPJ in communication-intensive MPJ codes has increased their performance up to seven times.
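As a rough illustration of point (1), non-blocking communication that overlaps with computation, the sketch below interleaves local work with a non-blocking java.nio write. It is our own sketch of the general technique, not F-MPJ code; the method names are invented for the example.

```java
// Sketch of communication/computation overlap with plain java.nio
// (not the F-MPJ API).
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class OverlapSketch {
    // Start a send and keep computing while the OS drains the socket buffer.
    static void sendWithOverlap(SocketChannel ch, ByteBuffer msg, Runnable localWork)
            throws Exception {
        ch.configureBlocking(false);       // writes now return immediately
        while (msg.hasRemaining()) {
            ch.write(msg);                 // push whatever the socket accepts
            localWork.run();               // overlap useful local computation
        }
    }
}
```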

51 citations

Proceedings ArticleDOI
27 Aug 2009
TL;DR: This paper analyzes the current state of Java for HPC, both for shared and distributed memory programming, presents related research projects, and evaluates the performance of current Java HPC solutions and research developments on a multi-core cluster with a high-speed network, InfiniBand, and a 24-core shared memory machine.
Abstract: The rising interest in Java for High Performance Computing (HPC) is based on the appealing features of this language for programming multi-core cluster architectures, particularly the built-in networking and multithreading support, and the continuous increase in Java Virtual Machine (JVM) performance. However, its adoption in this area is being delayed by the lack of analysis of the existing programming options in Java for HPC and evaluations of their performance, as well as the unawareness of the current research projects in this field, whose solutions are needed in order to boost the embracement of Java in HPC. This paper analyzes the current state of Java for HPC, both for shared and distributed memory programming, presents related research projects, and finally, evaluates the performance of current Java HPC solutions and research developments on a multi-core cluster with a high-speed network, InfiniBand, and a 24-core shared memory machine. The main conclusions are that: (1) the significant interest in Java for HPC has led to the development of numerous projects, although usually quite modest, which may have prevented a higher development of Java in this field; and (2) Java can achieve almost similar performance to native languages, both for sequential and parallel applications, being an alternative for HPC programming. Thus, the good prospects of Java in this area are attracting the attention of both industry and academia, which can take significant advantage of Java adoption in HPC.

34 citations


Cites background from "MPJava: High-Performance Message Pa..."

  • ...• MPJava [24] is the first Java message-passing library implemented on Java NIO sockets, taking advantage of their scalability and high performance communications....


Book ChapterDOI
02 Oct 2003
TL;DR: It is found that language features can make parallel programs easier to write, but cannot hide the underlying communication costs for the target parallel architecture.
Abstract: We evaluate the impact of programming language features on the performance of parallel applications on modern parallel architectures, particularly for the demanding case of sparse integer codes. We compare a number of programming languages (Pthreads, OpenMP, MPI, UPC) on both shared and distributed-memory architectures. We find that language features can make parallel programs easier to write, but cannot hide the underlying communication costs for the target parallel architecture. Powerful compiler analysis and optimization can help reduce software overhead, but features such as fine-grain remote accesses are inherently expensive on clusters. To avoid large reductions in performance, language features must avoid degrading the performance of local computations.

31 citations


Cites methods from "MPJava: High-Performance Message Pa..."

  • ...Pugh and Spacco use similar benchmarks to evaluate MPJava, a method for developing high-performance parallel computations in Java [PS03]....


Proceedings ArticleDOI
18 Feb 2009
TL;DR: NPB-MPJ is presented, the first extensive implementation of the NAS Parallel Benchmarks (NPB), the standard parallel benchmark suite, for Message-Passing in Java (MPJ) libraries, whose comparative analysis of current Java and native parallel solutions confirms that MPJ is an alternative for parallel programming multi-core systems.
Abstract: Java is a valuable and emerging alternative for the development of parallel applications, thanks to the availability of several Java message-passing libraries and its full multithreading support. The combination of both shared and distributed memory programming is an interesting option for parallel programming multi-core systems. However, the concerns about Java performance are hindering its adoption in this field, although it is difficult to evaluate its performance accurately due to the lack of standard benchmarks in Java. This paper presents NPB-MPJ, the first extensive implementation of the NAS Parallel Benchmarks (NPB), the standard parallel benchmark suite, for Message-Passing in Java (MPJ) libraries. Together with the design and implementation details of NPB-MPJ, this paper gathers several optimization techniques that can serve as a guide for the development of more efficient Java applications for High Performance Computing (HPC). NPB-MPJ has been used in the performance evaluation of Java against C/Fortran parallel libraries on two representative multi-core clusters. Thus, NPB-MPJ provides an up-to-date snapshot of MPJ performance, whose comparative analysis of current Java and native parallel solutions confirms that MPJ is an alternative for parallel programming multi-core systems.
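One example of the kind of optimization such a guide typically covers is replacing Java's nested double[][] arrays with a flat double[] and manual indexing, which removes a pointer dereference per row access. The sketch below is our illustration of that general technique, not code taken from NPB-MPJ.

```java
// Illustrative sketch: flattening a 2D array into a 1D backing store
// (a common Java HPC optimization; not NPB-MPJ source code).
public class FlatArrayExample {
    public static void main(String[] args) {
        int n = 512;
        double[] a = new double[n * n];    // flat backing store: a[i][j] -> a[i * n + j]
        for (int i = 0; i < n; i++) {
            int row = i * n;               // hoist the row offset out of the inner loop
            for (int j = 0; j < n; j++) {
                a[row + j] = i + 0.5 * j;
            }
        }
        System.out.println(a[3 * n + 7]);  // element (3, 7)
    }
}
```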

26 citations


Additional excerpts

  • ...Finally, Section 6 concludes the paper....


References
01 Apr 1994
TL;DR: This document contains all the technical features proposed for the interface; the goal of the Message Passing Interface, simply stated, is to develop a widely used standard for writing message-passing programs.
Abstract: The Message Passing Interface Forum (MPIF), with participation from over 40 organizations, has been meeting since November 1992 to discuss and define a set of library standards for message passing. MPIF is not sanctioned or supported by any official standards organization. The goal of the Message Passing Interface, simply stated, is to develop a widely used standard for writing message-passing programs. As such, the interface should establish a practical, portable, efficient and flexible standard for message passing. This is the final report, Version 1.0, of the Message Passing Interface Forum, and it contains all the technical features proposed for the interface. This copy of the draft was processed by LaTeX on April 21, 1994. Please send comments on MPI to mpi-comments@cs.utk.edu; they will be forwarded to MPIF committee members who will attempt to respond.

3,181 citations

Journal ArticleDOI
01 Sep 1991
TL;DR: A new set of benchmarks, mimicking the computation and data-movement characteristics of large-scale computational fluid dynamics applications, has been developed for the performance evaluation of highly parallel supercomputers.
Abstract: A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers. These consist of five "parallel kernel" benchmarks and three "simulated application" benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their "pencil and paper" specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

2,246 citations

Journal ArticleDOI
TL;DR: This work discusses the main additions to Java (immutable classes, multidimensional arrays, an explicitly parallel SPMD model of computation with a global address space, and zone-based memory management) and reports progress on the development of Titanium.
Abstract: Titanium is a language and system for high-performance parallel scientific computing. Titanium uses Java as its base, thereby leveraging the advantages of that language and allowing us to focus attention on parallel computing issues. The main additions to Java are immutable classes, multidimensional arrays, an explicitly parallel SPMD model of computation with a global address space, and zone-based memory management. We discuss these features and our design approach, and report progress on the development of Titanium, including our current driving application: a three-dimensional adaptive mesh refinement parallel Poisson solver. © 1998 John Wiley & Sons, Ltd.
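For readers unfamiliar with the feature list, the sketch below shows the plain-Java analogue of Titanium's immutable classes: a final class with final fields whose operations return new values. Titanium additionally gives such types value semantics; the Complex type here is only our illustrative example, not Titanium code.

```java
// Plain-Java sketch of an immutable value-like class (not Titanium syntax).
public final class Complex {
    public final double re;
    public final double im;

    public Complex(double re, double im) {
        this.re = re;
        this.im = im;
    }

    // Operations return new instances instead of mutating state.
    public Complex plus(Complex other) {
        return new Complex(re + other.re, im + other.im);
    }
}
```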

433 citations

01 Jan 1999
TL;DR: This work discusses the main additions to Java (immutable classes, multidimensional arrays, an explicitly parallel SPMD model of computation with a global address space, and zone-based memory management) and reports on the development of Titanium.
Abstract: Titanium is a language and system for high-performance parallel scientific computing. Titanium uses Java as its base, thereby leveraging the advantages of that language and allowing us to focus attention on parallel computing issues. The main additions to Java are immutable classes, multidimensional arrays, an explicitly parallel SPMD model of computation with a global address space, and zone-based memory management. We discuss these features and our design approach, and report progress on the development of Titanium, including our current driving application: a three-dimensional adaptive mesh refinement parallel Poisson solver.

374 citations


"MPJava: High-Performance Message Pa..." refers background in this paper

  • ...The communications are thus subject to the inefficiencies of the older java.io package....


Proceedings ArticleDOI
01 Jun 1999
TL;DR: It is demonstrated that a much faster drop-in RMI and an efficient serialization can be designed and implemented completely in Java without any native code, and the re-designed RMI supports non-TCP/IP communication networks, even with heterogeneous transport protocols.
Abstract: In current Java implementations, Remote Method Invocation (RMI) is too slow, especially for high performance computing. RMI is designed for wide-area and high-latency networks; it is based on a slow object serialization, and it does not support high-performance communication networks. The paper demonstrates that a much faster drop-in RMI and an efficient serialization can be designed and implemented completely in Java without any native code. Moreover, the re-designed RMI supports non-TCP/IP communication networks, even with heterogeneous transport protocols. As a by-product, a benchmark collection for RMI is presented. This collection, asked for by the Java Grande Forum from its first meeting, can guide JVM vendors in their performance optimizations. On PCs connected through Ethernet, the better serialization and the improved RMI save a median of 45% (maximum of 71%) of the runtime for some set of arguments. On our Myrinet-based ParaStation network (a cluster of DEC Alphas) we save a median of 85% (maximum of 96%), compared to standard RMI, standard serialization, and Fast Ethernet; a remote method invocation runs as fast as 115 μs round trip time, compared to about 1.5 ms.
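The serialization cost this paper targets can be seen in a few lines of standard Java: default ObjectOutputStream encoding of a double[] carries class descriptors and object headers, whereas writing the raw values into a ByteBuffer carries only the payload. The sketch below is our illustration of that contrast; the sizes it prints are not measurements from the paper.

```java
// Sketch contrasting default Java serialization with manual buffer packing.
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.nio.ByteBuffer;

public class SerializationSketch {
    public static void main(String[] args) throws Exception {
        double[] data = new double[1000];

        // Default serialization: stream header, class descriptor, then the values.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(data);
        }
        System.out.println("ObjectOutputStream bytes: " + bos.size());

        // Manual encoding: just the payload, 8 bytes per element.
        ByteBuffer buf = ByteBuffer.allocate(data.length * 8);
        buf.asDoubleBuffer().put(data);
        System.out.println("Raw ByteBuffer bytes:     " + buf.capacity());
    }
}
```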

163 citations


"MPJava: High-Performance Message Pa..." refers background in this paper

  • ...However, they have yet to deliver a product; all we have are their design goals [2]....
