
A Component Architecture for High-Performance Scientific Computing


Benjamin A. Allan^2, Robert Armstrong^2, David E. Bernholdt^1, Felipe Bertrand^3, Kenneth Chiu^4, Tamara L. Dahlgren^5, Kostadin Damevski^6, Wael R. Elwasif^1, Thomas G. W. Epperly^5, Madhusudhan Govindaraju^4, Daniel S. Katz^7, James A. Kohl^1, Manoj Krishnan^8, Gary Kumfert^5, J. Walter Larson^9, Sophia Lefantzi^10, Michael J. Lewis^4, Allen D. Malony^11, Lois C. McInnes^9, Jarek Nieplocha^8, Boyana Norris^9, Steven G. Parker^6, Jaideep Ray^12, Sameer Shende^11, Theresa L. Windus^13, Shujia Zhou^14
Abstract
The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.
Key words: component architecture, combustion modeling, climate modeling, quantum chemistry, parallel computing
The International Journal of High Performance Computing Applications,
Volume 20, No. 2, Summer 2006, pp. 163–202
DOI: 10.1177/1094342006064488
© 2006 SAGE Publications
^1 Computer Science and Mathematics Division, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831
^2 Scalable Computing R&D, MS 9915, P.O. Box 969, Sandia National Laboratories, Livermore, CA 94551-0969
^3 Computer Science Department, 215 Lindley Hall, Indiana University, Bloomington, IN 47405
^4 Department of Computer Science, State University of New York (SUNY) at Binghamton, Binghamton, NY 13902
^5 Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, P.O. Box 808, L-365, Livermore, CA 94551
^6 Scientific Computing and Imaging Institute, University of Utah, 50 S. Central Campus Dr., Room 3490, Salt Lake City, UT 84112
^7 Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109
^8 Computational Sciences and Mathematics, Pacific Northwest National Laboratory, Richland, WA 99352
^9 Mathematics and Computer Science Division, Argonne National Laboratory, 9700 South Cass Ave., Argonne, IL 60439-4844
^10 Reacting Flow Research, MS 9051, P.O. Box 969, Sandia National Laboratories, Livermore, CA 94551-0969
^11 Department of Computer and Information Science, University of Oregon, Eugene, OR 97403
^12 Advanced Software R&D, MS 9051, P.O. Box 969, Sandia National Laboratories, Livermore, CA 94551-0969
^13 Pacific Northwest National Laboratory, Environmental Molecular Sciences Laboratory, P.O. Box 999, MS-IN: K8-91, Richland, WA 99352
^14 Northrop Grumman Corporation, Information Technology Sector, 4801 Stonecroft Blvd, Chantilly, VA 20151

1 Introduction
Historically, the principal concerns of software developers
for high-performance scientific computing have centered
on increasing the scope and fidelity of their simulations,
and then increasing the performance and efficiency to
address the exceedingly long execution times that can
accompany these goals. Initial successes with computa-
tional simulations have led to the desire for solutions to
larger, more sophisticated problems and the improvement
of models to reflect greater levels of detail and accuracy.
Efforts to address these new demands necessarily have
included improvements to scientific methodology, algo-
rithms, and programming models, and virtually always
each advance has been accompanied by increases in the
complexity of the underlying software. At the same time,
the computer industry has continued to create ever larger
and more complex hardware in an attempt to satisfy the
increasing demand for simulation capabilities. These archi-
tectures tend to exacerbate the complexity of software
running on these systems, as in nearly all cases, the increased
complexity is exposed to the programmer at some level
and must be explicitly managed to extract the maximum
possible performance. In scientific high-performance com-
puting, relatively little attention has been paid to improv-
ing the fundamental software development process and
finding ways to manage the ballooning complexity of the
software and operating environment.
Simultaneously, in other domains of software develop-
ment, complexity rather than runtime performance has
been a primary concern. For example, in the business area
the push has been less for increasing the size of “the prob-
lem” than for interconnecting and integrating an ever-
increasing number of applications that share and manipu-
late business-related information, such as word processors,
spreadsheets, databases, and web servers. More recently,
with the fast pace of the internet boom, the flood of new
software technology used in business and commercial
applications has increased both the degree of interopera-
bility desired and the number of applications and tools to
be integrated. This situation has led to extreme software
complexity, which at several levels is not unlike the com-
plexity now being seen in high-performance scientific
computing. For example, scientific codes regularly attempt
the integration of multiple numerical libraries and/or pro-
gramming models into a single application. Recently, efforts
have increased to couple multiple stand-alone simulations
together into multi-physics and multi-scale applications
for models with better overall physical fidelity.
One of the approaches that the business/internet soft-
ware community has found invaluable in helping to address
their complexity conundrum is the concept of compo-
nent-based software engineering (CBSE). The basic tenet
of CBSE is the encapsulation of useful units of software
functionality into components. Components are defined
by the interfaces that they present to the outside world
(i.e. to other components), while their internal implemen-
tations remain opaque. Components interact only through
these well-defined interfaces, and based on these inter-
faces can be composed into full applications. Using this
methodology enables use of the “plug-and-play” software
approach for creating complex applications. The smaller,
task-specific units of software are more easily managed
and understood than a complete application software struc-
ture. Logically, many components can provide functionality
that is generally useful in a variety of different applica-
tions. In such cases, suitably well-designed components
might be reusable across multiple applications with little
or no modification. It is also possible that a number of dif-
ferent components can export the same interface and pro-
vide the same essential functionality, but via different
implementations, thereby allowing these interoperable com-
ponents to be swapped within an application in a plug-
and-play fashion.
These ideas have spawned a number of component archi-
tectures, some of which have become so widely used as to
have reached “commodity” status: Microsoft’s Component
Object Model (COM) (Microsoft Corporation 1999), the
Object Management Group’s Common Object Request
Broker Architecture (CORBA) Component Model (Object
Management Group 2002), and Sun’s Enterprise JavaBeans
(Sun Microsystems 2004a).
Although CBSE has proven quite popular and successful
in the areas in which it originated, it has made few
inroads into the scientific computing community. Part of
the reason is that serious scientific applications tend to be
large, evolving, and long-lived codes whose lifetimes often
extend over decades. In addition to a natural inertia that
slows the adoption of new software engineering paradigms
by scientific software developers, current commodity com-
ponent models present a variety of issues ranging from
the amount of code that must be changed and added to
adapt existing code to the component environment, to
performance overheads, to support for languages, data
types, and even operating systems widely used in scien-
tific computing.
The Common Component Architecture (CCA) Forum
was launched in 1998 as a grass-roots effort to create a
component model specifically tailored to the needs of
high-performance scientific computing. The group’s goals
are both to facilitate scientific application development
and to gain a deeper understanding of the requirements
for and use of CBSE in this community so that they can
feed back into the development of future component
models, in the hope that eventually this community too
may be adequately served by “commodity” tools. In the
intervening years, the Forum has developed a specifica-
tion for the CCA as well as prototype implementations of

many associated tools, and the CCA is experiencing
increasing adoption by applications developers in a number
of scientific disciplines. This paper updates our previous
overview paper in 1999 (Armstrong et al. 1999) to reflect
the evolution of the CCA and our much more detailed under-
standing of the role of CBSE in scientific computing.
The first half of the paper presents the Common Com-
ponent Architecture itself in some detail. We will discuss
the special needs of this community (Section 2), and we
will describe the CCA model and how it addresses these
needs (Section 3). Section 4 presents the CCA’s approach
to language interoperability, and Sections 5–7 describe in
detail how the Common Component Architecture handles
local, high-performance parallel, and distributed compo-
nent interactions, respectively. Section 8 summarizes the
currently available tools implementing the CCA, and Sec-
tion 9 describes related work.
The remainder of the paper provides an overview of
the CCA in use. Section 10 outlines the typical way in
which software is designed in a component-based envi-
ronment, and the process of componentizing existing soft-
ware. Section 11 discusses some of the existing work
towards the development of common interfaces and com-
ponentized software in the CCA environment. Section 12
describes progress on tools to simplify data exchange in
coupled simulations. Section 13 presents an overview of
CCA-based applications in the fields of combustion
research, global climate modeling, and quantum chemis-
try. We conclude the paper with a brief look at the people
and groups associated with the Common Component
Architecture effort (Section 14) and a few ideas for future
work in the field (Section 15).
2 Components for Scientific Computing
Component-based software engineering is a natural
approach for modern scientific computing. The ability to
easily reuse interoperable components in multiple appli-
cations, and the plug-and-play assembly of those applica-
tions, has significant benefits in terms of productivity in
the creation of simulation software. This is especially so
when the component “ecosystem” is rich enough that a
large portion of the components needed by any given
application will be available “off the shelf” from a com-
ponent repository. The simplicity of plug-and-play com-
position of applications and the fact that components
hide implementation details and provide more managea-
ble units of software development, testing, and distribu-
tion all help to deal with the complexity inherent in modern
scientific software. Once the overall architecture of a soft-
ware system and interfaces between its elements (compo-
nents) have been defined, software developers can then
focus on the creation of the components of particular sci-
entific interest to them, while reusing software developed
by others (often experts in their own domains) for other
needed components of the system. Componentization also
indirectly assists with the performance issues that are so
critical in this area by providing the ability to swap
components with different implementations that are tailored
to the platform of interest. Finally, components are a nat-
ural approach to handle the coupling of codes involving
different physical phenomena or different time and length
scales, which is becoming increasingly important as a
means to improve the fidelity of simulations.
In the scientific software world, CBSE is perhaps most
easily understood as an evolution of the widespread prac-
tice of using a variety of software libraries as the founda-
tion on which to build applications. Traditional libraries
already offer some of the advantages of components, but
a component-based approach extends and magnifies these
benefits. While it is possible for users to discover and use
library routines that were not meant to be exposed as part
of the library’s public interface, component environments
can enforce the public interface rigorously. It is possible
to have multiple instances (versions) of a component in a
component-based application, whereas with plain librar-
ies this is not generally possible. Finally, in a library-
based environment, there are often conflicts of resources,
programming models, or other dependencies, which are
more likely as numbers of libraries increase. With com-
ponents, these concerns can often be handled by hooking
up each “library” component to components providing
the resource management or programming model func-
tionality.
The use of domain-specific computational frameworks
is another point of contact between current practice in sci-
entific computing and CBSE. Domain-specific frameworks
have become increasingly popular as environments in
which a variety of applications in a given scientific domain
can be constructed. Typically, the “framework” provides
a deep computational infrastructure to support calcula-
tions in the domain of interest. Applications are then con-
structed as relatively high-level code utilizing the domain-
specific and more general capabilities provided by the
framework. Many frameworks support modular construc-
tion of applications in a fashion very similar to that pro-
vided by component architectures, but this is typically
limited to the higher-level parts of the code. However,
domain-specific frameworks have their limitations as well.
Their domain focus tends to lead to assumptions about the
architecture and workflow of the application becoming
embodied in the design of the framework, making it much
harder to generalize them to other domains. Similarly,
since their development is generally driven by a small
group of domain experts, it is rare to find interfaces or
code that can be easily shared across multiple domain-
specific frameworks. Unfortunately, this situation is often
true even with important cross-cutting infrastructure, such

as linear algebra software, which could in principle be
used across many scientific domains. Generic component
models, on the other hand, provide all the benefits of
domain-specific frameworks. Moreover, by casting the
computational infrastructure as well as the high-level
physics of the applications as components, they also provide
easier extension to new areas, easier coupling of applica-
tions to create multi-scale and multi-physics simulations,
and significantly more opportunities to reuse elements of
the software infrastructure.
However, scientific computing also places certain
demands on a CBSE environment, some of which are not
easily satisfied by most of the commodity component
models currently available. As noted above, performance
is a paramount issue in modern scientific computing, so
that a component model for scientific computing would
have to be able to maintain the performance of traditional
applications without imposing undue overheads. (A widely
used rule of thumb is that environments that impose a
performance penalty in excess of ten percent will be sum-
marily rejected by high-performance computing (HPC)
software developers.) In contrast, commodity component
models have been designed (primarily or exclusively) for
distributed computing and tend to use protocols that
assume all method invocations between components will
be made over the network, in environments where net-
work latencies are often measured in tens and hundreds
of milliseconds. This situation is in stark contrast to HPC
scientific computing, where latencies on parallel inter-
connects are measured in microseconds; traditional pro-
gramming practices assume that on a given process in a
parallel application, data can be “transferred” between
methods by direct reference to the memory location in
which it lives. The performance overheads of the com-
modity component models are often too high for scien-
tific computing.
In scientific computing, it is common to have large
codes that evolve over the course of many years or even
decades. Therefore, the ease with which “legacy” code
bases can be incorporated into a component-based envi-
ronment, and the cost of doing so, are also important con-
siderations. Many of the commodity component models
may require significant restructuring of code and the addi-
tion of new code to satisfy the requirements of the model.
At least partial automation of this process is supported for
some languages, such as Java and, more recently, C++; to
our knowledge, however, there is no industry support for
generating components from Fortran, which is used in the
majority of legacy scientific software. Furthermore, automated
analysis of parallel code adds another dimension to the
complexity of automating component generation.
Finally, also important to high-performance scientific
computing are considerations including support for lan-
guages, data types, and computing platforms. The various
commodity component models available may not support
Fortran, may require extensive use of Java, and may not
support arrays or complex numbers as first-class data types.
They may even be effectively limited to Windows-based
operating systems, which are not widely used in scientific
HPC.
The Common Component Architecture (CCA) was con-
ceived to remedy the fact that a suitable component envi-
ronment could not be found to satisfy the special needs of
this community. Since its inception, the CCA effort has
grown to encompass a community of researchers, several
funded projects, an increasing understanding of the role of
CBSE in high-performance scientific computing, a matur-
ing specification for the CCA component model, and
practical implementations of tools and applications con-
forming to that specification.
3 The Common Component Architecture
Formally, the Common Component Architecture is a spec-
ification of an HPC-friendly component model. This spec-
ification provides a focus for an extensive research and
development effort. The research effort emphasizes under-
standing how best to utilize and implement component-
based software engineering practices in the high-perform-
ance scientific computing arena, and feeding back that
information into the broader component software field. In
addition to defining the specification, the development
effort creates practical reference implementations and
helps scientific software developers use them to create
CCA-compliant software. Ultimately, a rich marketplace
of scientific components will allow new component-
based applications to be built from predominantly off-the-
shelf scientific components.
3.1 Philosophy and Objectives
The purpose of the CCA is to facilitate and promote
the more productive development of high-performance,
high-quality scientific software in a way that is simple
and natural for scientific software developers. The CCA
intentionally has much in common with commodity com-
ponent models but does not hesitate to do things differ-
ently where the needs of HPC dictate (see Section 9 for a
more detailed comparison). High performance and ease
of use are more strongly emphasized in the CCA effort
than in commodity component models. However, in prin-
ciple no barriers exist to providing an HPC component
framework based on commodity models, or to creating
bridges between CCA components and other component
models, e.g. Web Services (Christensen et al. 2001; Fos-
ter et al. 2002).
The specific objectives that have guided the develop-
ment of the CCA are:

1. Component Characteristics. The CCA is used pri-
marily for high-performance components imple-
mented in the Single Program Multiple Data (SPMD)
or Multiple Program Multiple Data (MPMD) par-
adigms. Issues that must be resolved to build appli-
cations of such components include interacting
with multiple communicating processes, the coex-
istence of multiple sophisticated run-time systems,
message-passing libraries, threads, and efficient
transfers of large data sets.
2. Heterogeneity. Whenever technically possible, the
CCA must be able to combine within one applica-
tion components executing on multiple architec-
tures, implemented in different languages, and using
different run-time systems. Furthermore, design
priorities must be geared toward addressing the
software needs most common in HPC environments;
ment; for example, interoperability with languages
popular in scientific programming, such as For-
tran, C, and C++, should be given priority.
3. Local and Remote Components. Components
are local if they live in a single application address
space (referred to as in-process components in some
other component models) and remote otherwise.
The interaction between local components should
cost no more than a virtual function call; the inter-
action of remote components must be able to
exploit zero-copy protocols and other advantages
offered by state-of-the-art networking. Whenever
possible local and remote components must be
interoperable and be able to change interactions
from local to remote seamlessly. The CCA will
address the needs of remote components running
over a local area network and wide area network;
distributed component applications must be able
to satisfy real-time constraints and interact with
diverse supercomputing schedulers.
4. Integration. The integration of components into
the CCA environment must be as smooth as possi-
ble. Existing components or well-structured
non-component code should not have to be
rewritten substantially to work in CCA frame-
works. In general, components should be usable
in multiple CCA-compliant frameworks without
modification (at the source code level).
5. High Performance. It is essential that the set of
standard features contain mechanisms for support-
ing high-performance interactions. Whenever pos-
sible, the component environment should avoid
requiring extra copies, extra communication, and
synchronization, and should encourage efficient
implementations, such as parallel data transfers.
The CCA should not impose a particular parallel
programming model on users, but rather allow users
to continue using the approaches with which they
are most familiar and comfortable.
6. Openness and Simplicity. The CCA specification
should be open and usable with open software. In
HPC this flexibility is needed to keep pace with the
ever-changing demands of the scientific program-
ming world. Related and possibly more important
is simplicity. For the target audience of computa-
tional scientists, computer science is not a primary
concern. An HPC component architecture must be
simple to adopt, use, and reuse; otherwise, any
other objectives will be moot.
3.2 CCA Concepts
The central task of the CCA Forum (http://www.cca-forum.org)
is the development of a component model
specification that satisfies the objectives above. That spec-
ification defines the rights and responsibilities of individual
elements and the relationships among the elements of the
CCA’s component model, including the interfaces and
methods that control their interactions. Briefly, the ele-
ments of the CCA model are as follows:
Components are units of software functionality and
deployment that can be composed together to form
applications. Components encapsulate much of the com-
plexity of the software inside a black box and expose
only well-defined interfaces to other components.
Ports are the abstract interfaces through which compo-
nents interact. Specifically, CCA ports provide proce-
dural interfaces that can be thought of as a class or an
interface in object-oriented languages, or a collection
of subroutines; in a language such as Fortran 90, they
can be related to a collection of subroutines or a module.
Components may provide ports, meaning they imple-
ment the functionality expressed in the port (called pro-
vides ports), or they may use ports, meaning they make
calls on a port provided by another component (called
uses ports). It is important to recognize that the CCA
working group does not claim responsibility for defin-
ing all possible ports. It is hoped that the most impor-
tant ports will be defined by domain computational
scientists and be standardized by common consent or
de facto use.
Frameworks manage CCA components as they are
assembled into applications and executed. The frame-
work is responsible for connecting uses and provides
ports without exposing the components’ implementa-
tion details. The framework also provides a small set of
standard services that are available to all components.
In order to reuse concepts within the CCA, services are
cast as ports that are available to all components at all
times.
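The framework's connection role can be caricatured in a few lines of C++. The names Services, addProvidesPort, and getPort below echo the flavor of CCA framework services but are simplified inventions for this sketch: a registry maps port names to providers, so a using component obtains an abstract port without ever seeing the provider's implementation class.

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Base type for all ports (illustrative only).
struct Port {
    virtual ~Port() = default;
};

// A toy stand-in for a framework's services object: components register
// the ports they provide, and other components look them up by name.
class Services {
    std::map<std::string, Port*> provided_;
public:
    void addProvidesPort(const std::string& name, Port* p) {
        provided_[name] = p;
    }
    Port* getPort(const std::string& name) {
        auto it = provided_.find(name);
        if (it == provided_.end())
            throw std::runtime_error("port not connected: " + name);
        return it->second;
    }
};

// Example port and a component providing it.
struct GreetingPort : Port {
    virtual std::string greet() = 0;
};
struct HelloComponent : GreetingPort {
    std::string greet() override { return "hello"; }
};
```

A consumer retrieves the port through getPort and casts to the port type it expects; its only dependence is on GreetingPort, never on HelloComponent.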
