Conference

Parallel Computing 

About: Parallel Computing is an academic conference. The conference publishes mainly in the areas of parallel algorithms and distributed memory. Over its lifetime, the conference has published 5,690 papers, which have received 99,895 citations.


Papers
Journal Article
01 Sep 1996
TL;DR: Describes MPICH, an implementation of MPI (Message Passing Interface), the standard message-passing library specification defined by the MPI Forum, a broadly based group of parallel computer vendors, library writers, and applications specialists; MPICH is unique among existing implementations in combining portability with high performance.
Abstract: MPI (Message Passing Interface) is a specification for a standard library for message passing that was defined by the MPI Forum, a broadly based group of parallel computer vendors, library writers, and applications specialists. Multiple implementations of MPI have been developed. In this paper, we describe MPICH, unique among existing implementations in its design goal of combining portability with high performance. We document its portability and performance and describe the architecture by which these features are simultaneously achieved. We also discuss the set of tools that accompany the free distribution of MPICH, which constitute the beginnings of a portable parallel programming environment. A project of this scope inevitably imparts lessons about parallel computing, the specification being followed, the current hardware and software environment for parallel computing, and project management; we describe those we have learned. Finally, we discuss future developments for MPICH, including those necessary to accommodate extensions to the MPI Standard now being contemplated by the MPI Forum.
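MPI standardizes a message-passing model built around matched send and receive operations between ranked processes. As a rough analogy only (this is not MPI or MPICH; real programs would call MPI_Send/MPI_Recv through an implementation such as MPICH), the pattern can be sketched with Python threads and queues:

```python
# Sketch of the message-passing model that MPI standardizes, using Python
# threads and queues as a stand-in. This is NOT MPI; it only illustrates
# the send/receive pairing between a "rank 0" (main) and a "rank 1" worker.
import queue
import threading

def worker(rank, inbox, outbox):
    # Analogous to MPI_Recv followed by MPI_Send on a worker rank.
    msg = inbox.get()
    outbox.put(f"rank {rank} got: {msg}")

def main():
    to_worker, from_worker = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=(1, to_worker, from_worker))
    t.start()
    to_worker.put("hello")        # analogous to MPI_Send to rank 1
    reply = from_worker.get()     # analogous to MPI_Recv from rank 1
    t.join()
    return reply

print(main())  # rank 1 got: hello
```

In real MPI the two sides run as separate processes, possibly on different machines, and the library handles transport; the queue stands in for that channel here.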

2,082 citations

Journal Article
01 Jul 2004
TL;DR: The design, implementation, and evaluation of Ganglia are presented along with experience gained through real world deployments on systems of widely varying scale, configurations, and target application domains over the last two and a half years.
Abstract: Ganglia is a scalable distributed monitoring system for high performance computing systems such as clusters and Grids. It is based on a hierarchical design targeted at federations of clusters. It relies on a multicast-based listen/announce protocol to monitor state within clusters and uses a tree of point-to-point connections amongst representative cluster nodes to federate clusters and aggregate their state. It leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualization. It uses carefully engineered data structures and algorithms to achieve very low per-node overheads and high concurrency. The implementation is robust, has been ported to an extensive set of operating systems and processor architectures, and is currently in use on over 500 clusters around the world. This paper presents the design, implementation, and evaluation of Ganglia along with experience gained through real world deployments on systems of widely varying scale, configurations, and target application domains over the last two and a half years.
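The core idea in Ganglia's hierarchy is that per-node state is summarized within each cluster and then federated up a tree. A toy sketch of that aggregation step follows; all function names and the dict-based metric format are illustrative assumptions, not Ganglia's actual code (which uses XDR/XML over multicast and point-to-point connections):

```python
# Toy sketch of hierarchical metric aggregation in the spirit of Ganglia's
# design: summarize per-node metrics within a cluster, then merge cluster
# summaries at a higher level of the tree. Names are illustrative only.

def aggregate_cluster(node_metrics):
    """Summarize per-node metrics (dicts of name -> value) for one cluster."""
    summary = {}
    for metrics in node_metrics.values():
        for name, value in metrics.items():
            summary[name] = summary.get(name, 0.0) + value
    summary["num_nodes"] = len(node_metrics)
    return summary

def federate(cluster_summaries):
    """Merge cluster summaries at a federation point of the tree."""
    grid = {}
    for summary in cluster_summaries:
        for name, value in summary.items():
            grid[name] = grid.get(name, 0.0) + value
    return grid

cluster_a = aggregate_cluster({"n1": {"load": 0.5}, "n2": {"load": 1.5}})
cluster_b = aggregate_cluster({"n3": {"load": 2.0}})
grid = federate([cluster_a, cluster_b])
print(grid)  # {'load': 4.0, 'num_nodes': 3.0}
```

The tree shape is what keeps per-node overhead low: each node reports once into its cluster, and only representative nodes talk across clusters.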

1,401 citations

Journal Article
01 Feb 2006
TL;DR: This work considers the problem of designing a dynamic scheduling strategy that takes into account both workload and memory information in the context of parallel multifrontal factorization, and shows that the new scheduling algorithm significantly improves both the memory behaviour and the factorization time.
Abstract: We consider the problem of designing a dynamic scheduling strategy that takes into account both workload and memory information in the context of the parallel multifrontal factorization. The originality of our approach is that we base our estimations (work and memory) on a static optimistic scenario during the analysis phase. This scenario is then used during the factorization phase to constrain the dynamic decisions that compute fully irregular partitions in order to better balance the workload. We show that our new scheduling algorithm significantly improves both the memory behaviour and the factorization time. We give experimental results for large challenging real-life 3D problems on 64 and 128 processors.
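The key ingredient of such a strategy is a dynamic decision that balances workload subject to a memory constraint. The following is an illustrative greedy sketch of that idea only; the function names, task format, and selection rule are assumptions, not the paper's multifrontal scheduler:

```python
# Illustrative sketch of a dynamic scheduling decision that balances
# workload under a per-processor memory bound, in the spirit of (but not
# identical to) memory-aware multifrontal scheduling. Names are assumed.

def schedule(tasks, num_procs, mem_limit):
    """Assign tasks {id: (work, mem)} to the least-loaded processor that fits."""
    load = [0.0] * num_procs
    mem = [0.0] * num_procs
    assignment = {}
    for tid, (work, need) in tasks.items():
        # Memory information constrains the candidate set...
        candidates = [p for p in range(num_procs) if mem[p] + need <= mem_limit]
        if not candidates:
            raise RuntimeError(f"task {tid} does not fit on any processor")
        # ...and workload information drives the dynamic choice among them.
        p = min(candidates, key=lambda q: load[q])
        load[p] += work
        mem[p] += need
        assignment[tid] = p
    return assignment, load, mem

tasks = {"t1": (4.0, 3.0), "t2": (2.0, 2.0), "t3": (3.0, 2.0)}
assignment, load, mem = schedule(tasks, num_procs=2, mem_limit=4.0)
print(assignment)  # {'t1': 0, 't2': 1, 't3': 1}
```

The paper's approach additionally uses a static optimistic scenario from the analysis phase to constrain these dynamic decisions, which the sketch does not model.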

1,072 citations

Journal Article
01 Jul 1991
TL;DR: This paper discusses an adaptation of taboo search to the quadratic assignment problem that is efficient and robust, requiring less complexity and fewer parameters than earlier adaptations.
Abstract: An adaptation of taboo search to the quadratic assignment problem is discussed in this paper. This adaptation is efficient and robust, requiring less complexity and fewer parameters than earlier adaptations. In order to improve the speed of our taboo search, two parallelization methods are proposed and their efficiencies shown for a number of processors proportional to the size of the problem. The best published solutions to many of the biggest problems have been improved and every previously best solution (probably optimal) of smaller problems has been found. In addition, an easy way of generating random problems is proposed and good solutions of these problems, whose sizes are between 5 and 100, are given.

839 citations

Journal Article
01 Aug 1990
TL;DR: An overview of several experiments applying genetic algorithms to neural network problems is presented, including optimizing the weighted connections in feed-forward neural networks using both binary and real-valued representations, and using a genetic algorithm to discover novel architectures for neural networks that learn using error propagation.
Abstract: Genetic algorithms are a robust adaptive optimization method based on biological principles. A population of strings representing possible problem solutions is maintained. Search proceeds by recombining strings in the population. The theoretical foundations of genetic algorithms are based on the notion that selective reproduction and recombination of binary strings changes the sampling rate of hyperplanes in the search space so as to reflect the average fitness of strings that reside in any particular hyperplane. Thus, genetic algorithms need not search along the contours of the function being optimized and tend not to become trapped in local minima. This paper is an overview of several different experiments applying genetic algorithms to neural network problems. These problems include (1) optimizing the weighted connections in feed-forward neural networks using both binary and real-valued representations, and (2) using a genetic algorithm to discover novel architectures in the form of connectivity patterns for neural networks that learn using error propagation. Future applications in neural network optimization in which genetic algorithms can perhaps play a significant role are also presented.
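For experiment type (1), a real-valued representation evolves weight vectors directly. The following is a minimal sketch under assumed settings (one-input, three-hidden-unit tanh network, truncation selection, blend crossover, Gaussian mutation); it is not the paper's experimental setup:

```python
# Sketch of a genetic algorithm over real-valued weight vectors for a tiny
# feed-forward network (one hidden tanh layer). Network shape, operators,
# and parameters are illustrative assumptions, not the paper's setup.
import math
import random

def forward(w, x, hidden=3):
    # Layout: per hidden unit (input weight, bias), then output weights, bias.
    acts = [math.tanh(w[2 * h] * x + w[2 * h + 1]) for h in range(hidden)]
    return sum(w[2 * hidden + h] * a for h, a in enumerate(acts)) + w[3 * hidden]

def mse(w, data):
    return sum((forward(w, x) - y) ** 2 for x, y in data) / len(data)

def evolve(data, pop_size=30, gens=40, sigma=0.3, seed=1):
    rng = random.Random(seed)
    dim = 10  # 3 hidden units: 3*(weight, bias) + 3 output weights + 1 bias
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    history = []
    for _ in range(gens):
        pop.sort(key=lambda w: mse(w, data))
        history.append(mse(pop[0], data))
        survivors = pop[: pop_size // 2]       # truncation selection + elitism
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]       # blend crossover
            child = [g + rng.gauss(0, sigma) for g in child]  # Gaussian mutation
            children.append(child)
        pop = survivors + children
    pop.sort(key=lambda w: mse(w, data))
    return pop[0], history

data = [(x / 5, (x / 5) ** 2) for x in range(-5, 6)]  # fit y = x^2 on [-1, 1]
best, history = evolve(data)
print(history[0], history[-1])  # elitism makes best-of-generation non-increasing
```

Because survivors are carried over unchanged (elitism), the best fitness per generation never worsens, which mirrors the abstract's point that search proceeds by recombining strings in the population rather than following the error surface.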

754 citations

Performance Metrics
No. of papers from the Conference in previous years
Year    Papers
2022    5
2021    68
2020    75
2019    145
2018    89
2017    119