
Showing papers by "Bernard J. Pope published in 2011"


Journal ArticleDOI
Bernard J. Pope, Blake G. Fitch, Mike Pitman, J. Jeremy Rice, Matthias Reumann
TL;DR: The results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity.
Abstract: Future multiscale and multiphysics models that support research into human disease, translational medical science, and treatment can utilize the power of high-performance computing (HPC) systems. We anticipate that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message-passing processes [e.g., the message-passing interface (MPI)] with multithreading (e.g., OpenMP, Pthreads). The objective of this study is to compare the performance of such hybrid programming models when applied to the simulation of a realistic physiological multiscale model of the heart. Our results show that the hybrid models perform favorably when compared to an implementation using only the MPI and, furthermore, that OpenMP in combination with the MPI provides a satisfactory compromise between performance and code complexity. Having the ability to use threads within MPI processes enables the sophisticated use of all processor cores for both computation and communication phases. Considering that HPC systems in 2012 will have two orders of magnitude more cores than what was used in this study, we believe that faster than real-time multiscale cardiac simulations can be achieved on these systems.
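The hybrid pattern this abstract describes is distributed message-passing processes (MPI ranks) that each run multiple worker threads (OpenMP). A minimal runnable analogue of that structure can be sketched in Python, with `multiprocessing` standing in for MPI ranks and a thread pool standing in for OpenMP threads; this is purely illustrative and is not the authors' code, and the names and thread counts are assumptions for the sketch:

```python
# Illustrative sketch of the hybrid programming pattern: process-level
# parallelism (analogous to MPI ranks) combined with thread-level
# parallelism inside each process (analogous to OpenMP). Not the
# authors' implementation; a toy reduction stands in for the cardiac model.
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pool

THREADS_PER_PROCESS = 4  # assumed, analogous to OMP_NUM_THREADS


def partial_sum(chunk):
    # Thread-level work inside one "rank": each thread sums a slice.
    def thread_work(slice_):
        return sum(slice_)

    n = max(1, len(chunk) // THREADS_PER_PROCESS)
    slices = [chunk[i:i + n] for i in range(0, len(chunk), n)]
    with ThreadPoolExecutor(max_workers=THREADS_PER_PROCESS) as pool:
        return sum(pool.map(thread_work, slices))


def hybrid_sum(data, n_procs=2):
    # Process-level work: scatter the data across "ranks", let each rank
    # compute with its threads, then reduce -- mimicking an MPI
    # scatter / threaded compute / reduce cycle.
    n = max(1, len(data) // n_procs)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with Pool(processes=n_procs) as pool:
        return sum(pool.map(partial_sum, chunks))


if __name__ == "__main__":
    print(hybrid_sum(list(range(1000))))  # 499500
```

In a real MPI+OpenMP code the same shape appears as `MPI_Scatter`, an OpenMP-parallel compute loop, and `MPI_Reduce`; the paper's point is that threads inside each rank let all cores participate in both the computation and communication phases.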

21 citations


Proceedings ArticleDOI
Bernard J. Pope, Blake G. Fitch, Michael C. Pitman, J. Jeremy Rice, Matthias Reumann
01 Dec 2011
TL;DR: The results show that the hybrid models do not perform favourably when compared to an implementation using only MPI, in contrast to the results using complex physiological models; thus the user may not need to increase programming complexity by using a hybrid programming approach.
Abstract: Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI which is in contrast to our results using complex physiological models. Thus, with regards to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase as well as the HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster than real time multiscale cardiac simulations on these systems using hybrid programming models.

3 citations


01 Jan 2011
TL;DR: This research attacked the mode confusion problem by developing a modeling framework that automates the labor-intensive, time-consuming, and expensive process of manually cataloging and modeling the response of the immune system.
Abstract: 1 IBM Research Collaboratory for Life Sciences-Melbourne, Carlton, Australia, {reumann; John.Wagner; mattdton; stevemoore}@au1.ibm.com 2 Dept. Microbiology and Immunology, University of Melbourne, Carlton, Australia, {kholt; tstinear; sjturner}@unimelb.edu.au 3 Walter and Eliza Hall Institute, Melbourne, Australia, inouye@wehi.edu.au 4 Dept. Computer Science and Software Engineering, University of Melbourne, Carlton, Australia, {bwgoudey; gabraham}@csse.unimelb.edu.au; {qwan; adrianrp; jz}@unimelb.edu.au 5 National ICT Australia, Victoria Research Laboratories, Carlton, Australia, {Fan.Shi; adam.kowalczyk}@nicta.com.au 6 Victorian Life Sciences Computation Initiative, Carlton, Australia, {aisaac; bjpope}@unimelb.edu.au 7 Dept. of Medicine, University of Melbourne, Carlton, Australia, {butz; slavep; obrientj}@unimelb.edu.au 8 Deakin University, Science and Technology, pcc@deakin.edu.au 9 Howard Florey Institute, Carlton, Australia, Judith.field@florey.edu.au 10 Dept. of Pathology, University of Melbourne, msouthey@unimelb.edu.au 11 Peter MacCallum Cancer Centre, Melbourne, David.Bowtell@petermac.org 12 Melbourne School of Population Health, University of Melbourne, Carlton, Australia