scispace - formally typeset
Open Access

MPI: A Message-Passing Interface Standard

TLDR
This document contains all the technical features proposed for the interface. The goal of the Message Passing Interface, simply stated, is to develop a widely used standard for writing message-passing programs.
Abstract
The Message Passing Interface Forum (MPIF), with participation from over 40 organizations, has been meeting since November 1992 to discuss and define a set of library standards for message passing. MPIF is not sanctioned or supported by any official standards organization. The goal of the Message Passing Interface, simply stated, is to develop a widely used standard for writing message-passing programs. As such, the interface should establish a practical, portable, efficient, and flexible standard for message passing. This is the final report, Version 1.0, of the Message Passing Interface Forum. This document contains all the technical features proposed for the interface. This copy of the draft was processed by LaTeX on April 21, 1994. Please send comments on MPI to mpi-comments@cs.utk.edu. Your comment will be forwarded to MPIF committee members, who will attempt to respond.
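The programs the standard targets follow a rank-based send/receive model. A minimal sketch of that pattern, using Python threads and per-rank queues as a stand-in for MPI processes (the `Comm`, `send`, `recv`, and `worker` names here are illustrative, not part of the MPI API):

```python
import threading
import queue

class Comm:
    """Toy stand-in for an MPI communicator: one inbox per rank.
    Illustrative only; a real MPI program would call MPI_Send/MPI_Recv
    (or mpi4py's comm.send/comm.recv) instead."""

    def __init__(self, size):
        self.size = size
        self.inboxes = [queue.Queue() for _ in range(size)]

    def send(self, data, dest):
        self.inboxes[dest].put(data)        # like MPI_Send: deliver to dest

    def recv(self, rank):
        return self.inboxes[rank].get()     # like MPI_Recv: blocking receive

def worker(comm, rank, results):
    """Rank 0 greets every other rank and gathers the replies."""
    if rank == 0:
        for dest in range(1, comm.size):
            comm.send(("greeting", 0), dest)
        for _ in range(1, comm.size):
            results.append(comm.recv(0))
    else:
        msg, src = comm.recv(rank)
        comm.send((msg + "-ack", rank), src)

comm = Comm(3)
replies = []
threads = [threading.Thread(target=worker, args=(comm, r, replies))
           for r in range(comm.size)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(replies, key=lambda r: r[1]))
# [('greeting-ack', 1), ('greeting-ack', 2)]
```

The blocking `recv` mirrors MPI's standard-mode point-to-point semantics; collective operations (broadcast, gather, reduce) in the standard are built conceptually on the same rank-addressed delivery.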


Citations
Book

Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers

TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
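To make the method concrete: a minimal sketch of ADMM applied to the scalar lasso problem, minimize (1/2)(x - a)^2 + lam*|x|, whose known solution is the soft-thresholding of a. The splitting x = z and the update order are the textbook choices; this is a sketch in plain Python, not the paper's implementation:

```python
def soft_threshold(v, k):
    """Proximal operator of k*|.|: shrink v toward zero by k."""
    if v > k:
        return v - k
    if v < -k:
        return v + k
    return 0.0

def admm_lasso_scalar(a, lam, rho=1.0, iters=200):
    """ADMM for: minimize (1/2)(x - a)^2 + lam*|z|  subject to x == z.
    x-update: minimize (1/2)(x - a)^2 + (rho/2)(x - z + u)^2 in closed form;
    z-update: soft-thresholding; u-update: scaled dual ascent."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

# Converges to soft_threshold(3.0, 1.0) = 2.0 (up to floating point):
print(admm_lasso_scalar(3.0, 1.0))
```

Each subproblem here is trivial, which is the point of the splitting: in the distributed setting the x-update decomposes across data partitions while the z-update enforces consensus.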
Journal ArticleDOI

A neural probabilistic language model

TL;DR: The authors propose to learn a distributed representation for words that allows each training sentence to inform the model about an exponential number of semantically neighboring sentences, each of which can be expressed in terms of these representations.
Proceedings ArticleDOI

Cloud Computing and Grid Computing 360-Degree Compared

TL;DR: In this article, the authors compare and contrast cloud computing with grid computing from various angles, giving insight into the essential characteristics and relative advantages of the two technologies.

PETSc Users Manual

TL;DR: The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations; it supports MPI, GPUs through CUDA or OpenCL, and hybrid MPI-GPU parallelism.
Journal ArticleDOI

A high-performance, portable implementation of the MPI message passing interface standard

TL;DR: The Message Passing Interface (MPI) is a standard library for message passing that was defined by the MPI Forum, a broadly based group of parallel computer vendors, library writers, and applications specialists.
References
Book

Lapack Users' Guide

Ed Anderson
TL;DR: The third edition of the LAPACK Users' Guide provides a guide to troubleshooting and installation of the routines, as well as examples of how to convert from LINPACK or EISPACK to BLAS.
Journal ArticleDOI

PVM: Parallel virtual machine: a users' guide and tutorial for networked parallel computing

TL;DR: Covers the PVM system; heterogeneous network computing; trends in distributed computing; an overview of PVM and other packages; and troubleshooting: getting PVM installed, getting PVM running, compiling applications, running applications, debugging and tracing, and debugging the system.
Book

The High Performance Fortran Handbook

TL;DR: High Performance Fortran is a set of extensions to Fortran expressing parallel execution at a relatively high level that brings the convenience of sequential Fortran a step closer to today's complex parallel machines.
Journal ArticleDOI

Disk-directed I/O for MIMD multiprocessors

TL;DR: In this article, the authors proposed a disk-directed I/O technique, which allows the disk servers to determine the flow of data for maximum performance, and demonstrated that this technique provided consistent high performance that was largely independent of data distribution.
Journal ArticleDOI

Improved parallel I/O via a two-phase run-time access strategy

TL;DR: This work proposes a two-phase access strategy, to be implemented in a runtime system, in which the data distribution on computational nodes is decoupled from the storage distribution; experimental results show that performance improvements of several orders of magnitude over direct-access-based data distribution methods can be obtained.
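The core of the two-phase idea can be sketched in a few lines: a cyclic compute-node distribution is first reshuffled so each process holds a contiguous block of the global array (phase one), and only then does each process issue a single large request instead of many small strided ones (phase two). A small in-memory simulation in plain Python; the helper names are illustrative, not from any particular I/O library:

```python
def cyclic_partition(data, nprocs):
    """Compute-node view: element i lives on process i % nprocs."""
    return [data[p::nprocs] for p in range(nprocs)]

def two_phase_write(local_chunks, nprocs):
    """Phase 1: exchange elements so each process holds a contiguous block
    of the global array. Phase 2: one large contiguous 'write' per process
    (simulated here by appending each block whole)."""
    n = sum(len(chunk) for chunk in local_chunks)
    block = (n + nprocs - 1) // nprocs
    # Phase 1: redistribute (an in-memory stand-in for the data exchange).
    global_view = [None] * n
    for p, chunk in enumerate(local_chunks):
        for j, value in enumerate(chunk):
            global_view[p + j * nprocs] = value
    blocks = [global_view[p * block:(p + 1) * block] for p in range(nprocs)]
    # Phase 2: each process issues a single large request.
    storage = []
    for b in blocks:
        storage.extend(b)   # stands in for one contiguous write call
    return storage

data = list(range(8))
chunks = cyclic_partition(data, 4)   # [[0, 4], [1, 5], [2, 6], [3, 7]]
print(two_phase_write(chunks, 4))    # [0, 1, 2, 3, 4, 5, 6, 7]
```

The exchange in phase one costs interconnect bandwidth, but the paper's argument is that this is far cheaper than issuing the many small, strided disk requests that the cyclic layout would otherwise force.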