Proceedings ArticleDOI

A ubiquitous message passing interface implementation in Java:jmpi

12 Apr 1999, pp. 203-207
TL;DR: jmpi is a 100% Java-based implementation of the message-passing interface (MPI-1) standard and supports a user-friendly Java application programming interface (API) for MPI.
Abstract: jmpi is a 100% Java-based implementation of the message-passing interface (MPI-1) standard. jmpi comes with an efficient and effective MPI implementation in Java and supports a user-friendly Java application programming interface (API) for MPI. We present the implementation details and give some early communication benchmark performance results on a cluster of SUN UltraSparc workstations.
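To make the programming model concrete, here is a minimal two-process exchange written in the style of Java MPI bindings. The names used (MPI.Init, MPI.COMM_WORLD, Rank, Size, Send, Recv, MPI.INT) follow the mpiJava convention and are assumptions for illustration only; jmpi's actual class and method names are not reproduced on this page and may differ.

```java
// Minimal MPI-style exchange in Java: rank 0 sends one int to rank 1.
// Identifiers follow mpiJava-style conventions and are illustrative only;
// jmpi's own API may use different names or signatures.
import mpi.MPI;

public class PingExample {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);                        // start the MPI runtime
        int rank = MPI.COMM_WORLD.Rank();      // this process's id
        int size = MPI.COMM_WORLD.Size();      // total number of processes

        int[] buf = new int[1];
        if (rank == 0) {
            buf[0] = 42;
            MPI.COMM_WORLD.Send(buf, 0, 1, MPI.INT, 1, 0);   // dest=1, tag=0
        } else if (rank == 1) {
            MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, 0);   // src=0, tag=0
            System.out.println("rank 1 of " + size + " received " + buf[0]);
        }
        MPI.Finalize();                        // shut down the MPI runtime
    }
}
```

Such a program would typically be launched with one JVM per process by the binding's own startup tool rather than with plain java.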
Citations
Book ChapterDOI
01 Jan 2006
TL;DR: This article proposes a solution to those challenges which takes the form of a programming and deployment framework featuring parallel, mobile, secure and distributed objects and components.
Abstract: In summary, the essence of our proposition, presented in this chapter, is as follows: a distributed object-oriented programming model, smoothly extended into a component-based programming model (in the form of a 100% Java library); moreover, this model is “grid-aware” in the sense that it incorporates from the very beginning adequate mechanisms to further help in the deployment and runtime phases on all possible kinds of infrastructures, notably secure grid systems. This programming framework is intended to be used for large-scale grid applications. For instance, we have successfully applied it to a numerical simulation of electromagnetic wave propagation, a non-embarrassingly-parallel application [21], featuring visualization and monitoring capabilities for the user. To date, this simulation has successfully been deployed on various infrastructures, ranging from interconnected clusters to an intranet grid composed of approximately 300 desktop machines. Performance competes with a previously existing version of the application written in Fortran MPI. The proposed object-oriented approach is more generic and features reusability (the component-oriented version is under development, which may further add dynamicity to the application), and the deployment is very flexible.

141 citations


Additional excerpts

  • ...If libraries for parallel and distributed application development exist (RMI in Java, jmpi [19] for MPI programming, etc....


Proceedings ArticleDOI
31 Jan 2001
TL;DR: Neko as discussed by the authors is a Java platform that provides a uniform and extensible environment for the various phases of algorithm design and performance evaluation: prototyping, tuning, simulation, deployment, etc.
Abstract: Designing, tuning, and analyzing the performance of distributed algorithms and protocols are complex tasks. A major factor that contributes to this complexity is the fact that there is no single environment to support all phases of the development of a distributed algorithm. This paper presents Neko, an easy to use Java platform that provides a uniform and extensible environment for the various phases of algorithm design and performance evaluation: prototyping, tuning, simulation, deployment, etc.

115 citations

Proceedings ArticleDOI
24 Jul 2002
TL;DR: Describes the use of an XML-based descriptor for the deployment of a distributed application and of IC2D (Interactive Control and Debugging of Distribution) for monitoring and steering the running application, as a contribution towards the construction of integrated environments for component-based grid programming.
Abstract: The increasing complexity of distributed applications and the commoditization of resources through grids are making the task of deploying those applications harder. There is a clear need for standard tools allowing versatile deployment and analysis of distributed applications. We present here a solution for the deployment and monitoring of applications written using ProActive, an experimental Java-based library for concurrent, distributed and mobile computing. We describe the use of an XML-based descriptor for the deployment part of a distributed application and the use of IC2D (Interactive Control and Debugging of Distribution) for the monitoring and steering of the running application. These ideas, concepts, and experiments are a contribution towards the construction of integrated environments for component-based grid programming.

89 citations


Cites methods from "A ubiquitous message passing interf..."

  • ...We present here a solution for the deployment and monitoring of applications written using ProActive, an experimental Java-based library for concurrent, distributed and mobile computing....


Proceedings ArticleDOI
21 May 2002
TL;DR: Presents M-JavaMPI, a middleware layer that runs on top of the standard JVM to support transparent Java process migration and communication redirection, allowing Java processes to migrate freely and transparently between machines to achieve load balancing.
Abstract: Several Java bindings to the Message Passing Interface (MPI) have been developed in the past for high-performance parallel Java computing with message passing. None of them, however, addressed the issue of supporting transparent Java process migration for achieving dynamic load distribution and balancing. This paper presents M-JavaMPI, a middleware layer that runs on top of the standard JVM to support transparent Java process migration and communication redirection. The middleware allows Java processes to migrate freely and transparently between machines to achieve load balancing, and migrated processes can continue communicating with other processes using MPI. Process migration is achieved by capturing and restoring the execution context at the Java bytecode level using the Java Virtual Machine Debugger Interface (JVMDI). Post-migration interprocess communication is enabled via a Restorable Java-MPI API. Tests on a 16-node cluster have shown that our mechanism yields considerable performance gain through migration.
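The bytecode-level context capture via JVMDI described above cannot be reproduced in a short fragment, but the underlying checkpoint-and-restore idea can be illustrated with plain Java serialization. The sketch below is a deliberately simplified stand-in, not M-JavaMPI's mechanism: it captures only hypothetical application-level state (the SolverState class is invented for this example), whereas M-JavaMPI captures and restores the full execution context.

```java
// Simplified illustration of checkpoint-and-restore for process migration.
// M-JavaMPI works at the bytecode level via JVMDI; this sketch only
// serializes application-level state, as a conceptual stand-in.
import java.io.*;

public class CheckpointDemo {
    // Hypothetical application state; any Serializable object would do.
    static class SolverState implements Serializable {
        int iteration;
        double[] localData;
    }

    // Capture state into a byte image that could be shipped to another node.
    static byte[] checkpoint(SolverState s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(s);
        }
        return bos.toByteArray();
    }

    // Rebuild the state on the receiving side and resume from there.
    static SolverState restore(byte[] image) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(image))) {
            return (SolverState) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        SolverState s = new SolverState();
        s.iteration = 10;
        s.localData = new double[]{1.0, 2.0};
        SolverState resumed = restore(checkpoint(s));   // local round trip stands in for migration
        System.out.println("resumed at iteration " + resumed.iteration);
    }
}
```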

27 citations


Cites background from "A ubiquitous message passing interf..."

  • ...Keywords: process migration, MPI, JVMDI, message passing, M-JavaMPI, load balancing, Java, cluster computing, parallel computing...


Proceedings ArticleDOI
01 Jan 2003
TL;DR: This paper evaluates, models and compares the performance of MPI-like point-to-point and collective communication primitives from selected Java message-passing implementations on clusters with different interconnection networks.
Abstract: The use of Java for parallel programming on clusters according to the message-passing paradigm is an attractive choice. In this case, the overall application performance will largely depend on the performance of the underlying Java message-passing library. This paper evaluates, models and compares the performance of MPI-like point-to-point and collective communication primitives from selected Java message-passing implementations on clusters with different interconnection networks. We have developed our own micro-benchmark suite to characterize the message-passing communication overhead and thus derive analytical latency models.
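Such micro-benchmarks typically time a ping-pong exchange over a range of message sizes and fit a linear model of the form T(n) = t0 + n/B, where t0 is the start-up latency and B the asymptotic bandwidth. The paper's own suite is not reproduced here; the self-contained sketch below measures round trips over a loopback TCP socket instead of a Java MPI library, so only the methodology, not the absolute numbers, carries over. All class and variable names are mine.

```java
// Ping-pong micro-benchmark sketch: measure round-trip time for several
// message sizes over a loopback TCP connection and report estimated
// one-way latency. Illustrates the methodology only; the cited paper
// benchmarks Java MPI libraries, not raw sockets.
import java.io.*;
import java.net.*;

public class PingPongBench {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept();
                     InputStream in = s.getInputStream();
                     OutputStream out = s.getOutputStream()) {
                    byte[] buf = new byte[1 << 16];
                    int n;
                    while ((n = in.read(buf)) > 0) {   // echo everything back
                        out.write(buf, 0, n);
                        out.flush();
                    }
                } catch (IOException ignored) { /* client closed: done */ }
            });
            echo.start();

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                client.setTcpNoDelay(true);
                InputStream in = client.getInputStream();
                OutputStream out = client.getOutputStream();
                int reps = 200;
                for (int size : new int[]{1, 1024, 65536}) {
                    byte[] msg = new byte[size];
                    long start = System.nanoTime();
                    for (int i = 0; i < reps; i++) {
                        out.write(msg);
                        out.flush();
                        int got = 0;                   // wait for the full echo
                        while (got < size) {
                            got += in.read(msg, got, size - got);
                        }
                    }
                    double oneWayUs = (System.nanoTime() - start) / (2.0 * reps * 1e3);
                    System.out.printf("%7d bytes: %8.1f us one-way (estimated)%n",
                                      size, oneWayUs);
                }
            }
            echo.join();
        }
    }
}
```

Fitting t0 to the small-message points and B to the large-message points yields the kind of analytical latency model the paper derives.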

27 citations


Cites methods from "A ubiquitous message passing interf..."

  • ...• jmpi [5] is another pure Java implementation of MPI built on top of JPVM (see below)....


References
Book
01 Jan 1994
TL;DR: Using MPI as mentioned in this paper provides a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers, including a comparison of MPI with sockets.
Abstract: This book offers a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. Since the publication of the previous edition of Using MPI, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common. The MPI-3 Forum recently brought the MPI standard up to date with respect to developments in hardware capabilities, core language evolution, the needs of applications, and experience gained over the years by vendors, implementers, and users. This third edition of Using MPI reflects these changes in both text and example code. The book takes an informal, tutorial approach, introducing each concept through easy-to-understand examples, including actual code in C and Fortran. Topics include using MPI in simple programs, virtual topologies, MPI datatypes, parallel libraries, and a comparison of MPI with sockets. For the third edition, example code has been brought up to date; applications have been updated; and references reflect the recent attention MPI has received in the literature. A companion volume, Using Advanced MPI, covers more advanced topics, including hybrid programming and coping with large data.

2,666 citations

Journal ArticleDOI
01 Sep 1996
TL;DR: MPI (Message Passing Interface), as described in this paper, is a specification for a standard library for message passing defined by the MPI Forum, a broadly based group of parallel computer vendors, library writers, and applications specialists.
Abstract: MPI (Message Passing Interface) is a specification for a standard library for message passing that was defined by the MPI Forum, a broadly based group of parallel computer vendors, library writers, and applications specialists. Multiple implementations of MPI have been developed. In this paper, we describe MPICH, unique among existing implementations in its design goal of combining portability with high performance. We document its portability and performance and describe the architecture by which these features are simultaneously achieved. We also discuss the set of tools that accompany the free distribution of MPICH, which constitute the beginnings of a portable parallel programming environment. A project of this scope inevitably imparts lessons about parallel computing, the specification being followed, the current hardware and software environment for parallel computing, and project management; we describe those we have learned. Finally, we discuss future developments for MPICH, including those necessary to accommodate extensions to the MPI Standard now being contemplated by the MPI Forum.

2,082 citations


Journal ArticleDOI
TL;DR: Initial applications performance results achieved with a prototype JPVM system indicate that the Java-implemented approach can offer good performance at appropriately coarse granularities.
Abstract: The JPVM library is a software system for explicit message-passing based distributed memory MIMD parallel programming in Java. The library supports an interface similar to the C and Fortran interface provided by the Parallel Virtual Machine (PVM) system, but with syntax and semantics modifications afforded by Java and better matched to Java programming styles. The similarity between JPVM and the widely used PVM system supports a quick learning curve for experienced PVM programmers, thus making the JPVM system an accessible, low-investment target for migrating parallel applications to the Java platform. At the same time, JPVM offers novel features not found in standard PVM such as thread safety, multiple communication end-points per task, and default-case direct message routing. JPVM is implemented entirely in Java, and is thus highly portable among platforms supporting some version of the Java Virtual Machine. This feature opens up the possibility of utilizing resources commonly excluded from network parallel computing systems such as Macintosh and Windows-NT based systems. Initial applications performance results achieved with a prototype JPVM system indicate that the Java-implemented approach can offer good performance at appropriately coarse granularities.

121 citations

Book Chapter
01 Jan 1999

3 citations


"A ubiquitous message passing interf..." refers background in this paper

  • ...Basic MPI datatypes include byte, char, short, int, long, float, double and boolean primitive Java datatypes....

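The excerpt above lists the basic datatypes the jmpi paper maps onto Java primitives. As a purely illustrative sketch (not jmpi's actual code), such a mapping could be tabulated as follows, pairing each basic type with its Java primitive and an assumed marshalled size in bytes.

```java
// Illustrative table of basic message-passing datatypes and their Java
// primitives. The byte sizes are the usual marshalled widths and are an
// assumption for this sketch, not taken from the jmpi implementation.
public enum BasicType {
    BYTE(byte.class, 1),
    CHAR(char.class, 2),        // Java char is a 16-bit UTF-16 code unit
    SHORT(short.class, 2),
    INT(int.class, 4),
    LONG(long.class, 8),
    FLOAT(float.class, 4),
    DOUBLE(double.class, 8),
    BOOLEAN(boolean.class, 1);  // assuming booleans marshal as a single byte

    public final Class<?> javaType;
    public final int bytes;     // bytes occupied per element in a message

    BasicType(Class<?> javaType, int bytes) {
        this.javaType = javaType;
        this.bytes = bytes;
    }

    /** Buffer space needed for count elements of this basic type. */
    public int extent(int count) {
        return count * bytes;
    }
}
```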