Showing papers in "Scientific Programming in 2006"


Journal ArticleDOI
TL;DR: A genetic algorithm approach is presented to address scheduling optimization problems in workflow applications, based on two QoS constraints: deadline and budget.
Abstract: Grid technologies have progressed towards a service-oriented paradigm that enables a new way of service provisioning based on utility computing models, which are capable of supporting diverse computing services. This enables scientific applications to take advantage of computing resources distributed worldwide to enhance their capability and performance. Many scientific applications in areas such as bioinformatics and astronomy require workflow processing in which tasks are executed based on their control or data dependencies. Scheduling such interdependent tasks on utility Grid environments needs to consider users' QoS requirements. In this paper, we present a genetic algorithm approach to address scheduling optimization problems in workflow applications, based on two QoS constraints: deadline and budget.
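
The abstract does not give the paper's encoding or fitness function; as a rough illustration of what deadline- and budget-constrained scheduling fitness looks like, here is a minimal Python sketch in which the task/resource tables, the serial makespan, and the penalty weights are all invented for the example:

```python
import random

# Invented illustration data -- not the paper's tasks, resources, or weights.
EXEC_TIME = {("t1", "r1"): 4, ("t1", "r2"): 2, ("t2", "r1"): 3, ("t2", "r2"): 5}
EXEC_COST = {("t1", "r1"): 1, ("t1", "r2"): 3, ("t2", "r1"): 2, ("t2", "r2"): 1}
TASKS, RESOURCES = ["t1", "t2"], ["r1", "r2"]
DEADLINE, BUDGET = 8, 5

def random_individual():
    # Chromosome: each task is assigned to one Grid resource.
    return {t: random.choice(RESOURCES) for t in TASKS}

def fitness(ind):
    # Serial makespan for brevity; a real workflow schedule would respect
    # task dependencies and resource concurrency.
    makespan = sum(EXEC_TIME[t, r] for t, r in ind.items())
    cost = sum(EXEC_COST[t, r] for t, r in ind.items())
    # Penalize violations of the two QoS constraints: deadline and budget.
    penalty = max(0, makespan - DEADLINE) + max(0, cost - BUDGET)
    return -(makespan + cost) - 10 * penalty  # higher is better

best = max((random_individual() for _ in range(200)), key=fitness)
print(best, fitness(best))
```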

392 citations


Journal ArticleDOI
TL;DR: A domain specific embedded language in C++ that can be used in various contexts such as numerical projection onto a functional space, numerical integration, variational formulations and automatic differentiation is presented.
Abstract: In this article, we present a domain specific embedded language in C++ that can be used in various contexts such as numerical projection onto a functional space, numerical integration, variational formulations and automatic differentiation. Although these tools operate in different ways, the language overcomes this difficulty by decoupling expression construction from evaluation. The language is implemented using expression templates and meta-programming techniques and uses various Boost libraries. The language is exercised on a number of non-trivial examples, and a benchmark presents the performance behavior on a few test problems.
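
The language itself is built in C++ with expression templates; the central idea of decoupling expression construction from evaluation can be miniaturized with operator overloading. The sketch below (Python for brevity) mirrors the principle only, not the library's API:

```python
class Expr:
    """Node of an expression tree; building the tree performs no arithmetic."""
    def __add__(self, other): return Add(self, other)
    def __mul__(self, other): return Mul(self, other)

class Var(Expr):
    def __init__(self, name): self.name = name
    def eval(self, env): return env[self.name]

class Add(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, env): return self.a.eval(env) + self.b.eval(env)

class Mul(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, env): return self.a.eval(env) * self.b.eval(env)

x, y = Var("x"), Var("y")
e = x * y + x                     # construction: only a tree is built
print(e.eval({"x": 3, "y": 4}))   # evaluation happens separately: prints 15
```

Because evaluation is a separate traversal, the same tree could equally be walked by an integration or differentiation visitor, which is how a single expression language can serve projection, integration, variational formulations and automatic differentiation.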

75 citations


Journal Article
TL;DR: In this paper, an implementation of the standard multi-objective evolutionary algorithm NSGA II for a bi-objective vehicle routing problem is proposed; to improve its efficiency, an island-model parallelization and an elitist diversification mechanism adapted to NSGA II are added.
Abstract: In this paper, we address a bi-objective vehicle routing problem in which the total length of routes is minimized as well as the balance of routes, i.e. the difference between the maximal route length and the minimal route length. For this problem, we propose an implementation of the standard multi-objective evolutionary algorithm NSGA II. To improve its efficiency, two mechanisms have been added. First, a parallelization of NSGA II by means of an island model is proposed. Second, an elitist diversification mechanism is adapted to be used with NSGA II. Our method is tested on standard benchmarks for the vehicle routing problem. The contribution of the introduced mechanisms is evaluated by different performance metrics. All the experiments indicate a strict improvement of the generated Pareto set.
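
The two objectives, and the Pareto dominance relation NSGA II uses to rank solutions, are easy to state concretely; in this sketch the distance matrix and routes are toy data, not the paper's benchmarks:

```python
def route_length(route, dist):
    path = [0] + route + [0]          # depot (node 0) at both ends
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def objectives(solution, dist):
    lengths = [route_length(r, dist) for r in solution]
    # Objective 1: total length; objective 2: balance (max - min route length).
    return sum(lengths), max(lengths) - min(lengths)

def dominates(f, g):
    # Pareto dominance for minimization, as used to build NSGA II fronts.
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

dist = [[0, 2, 4], [2, 0, 3], [4, 3, 0]]
sol = [[1], [2]]                      # two routes, one customer each
print(objectives(sol, dist))          # (12, 4)
print(dominates((12, 3), (12, 4)))    # True: same total, better balance
```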

64 citations


Proceedings ArticleDOI
TL;DR: This paper reflects on seven issues pertaining to the interplay of crosscutting concerns and architectural connection abstractions and examines the design of existing aspect-oriented and non-AO ADLs with respect to these issues.
Abstract: Abstractions to express architectural connection play a central role in architecture design, especially in Architecture Description Languages (ADLs). With the emergence of aspect-oriented software development (AOSD), there is a need to understand the adequacy of ADLs' conventional connection abstractions to capture the crosscutting nature of architectural concerns. This paper reflects on seven issues pertaining to the interplay of crosscutting concerns and architectural connection abstractions. We review and assess the design of existing aspect-oriented (AO) and non-AO ADLs with respect to these issues. A case study is used to illustrate our viewpoints, claims, and proposals.

58 citations


Journal Article
TL;DR: In this article, an immunological algorithm for continuous global optimization problems, called OPT-IA, has been proposed and evaluated on a suite of 23 widely used benchmark problems, and the experimental results show that OPT-IA is a suitable numerical optimization technique that, in terms of accuracy, generally outperforms the other algorithms analyzed in this comparative study.
Abstract: Numerical optimization of given objective functions is a crucial task in many real-life problems. The present article introduces an immunological algorithm for continuous global optimization problems, called OPT-IA. Several biologically inspired algorithms have been designed during the last few years and have been shown to perform very well on standard test beds for numerical optimization. In this paper we assess and evaluate the performance of OPT-IA, FEP, IFEP, DIRECT, CEP, PSO, and EO with respect to their general applicability as numerical optimization algorithms. The experimental protocol has been performed on a suite of 23 widely used benchmark problems. The experimental results show that OPT-IA is a suitable numerical optimization technique that, in terms of accuracy, generally outperforms the other algorithms analyzed in this comparative study. OPT-IA is also shown to be able to solve large-scale problems.
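
The abstract does not spell out OPT-IA's operators; a generic clonal-selection step of the kind immunological algorithms build on might look like the following, where the objective, the hypermutation law, and all parameters are illustrative assumptions:

```python
import math
import random

def sphere(x):
    # A standard continuous benchmark objective (minimize).
    return sum(v * v for v in x)

def clonal_step(pop, n_clones=5, alpha=0.5):
    new_pop = []
    for cell in pop:
        f = sphere(cell)
        # Illustrative hypermutation law: better cells mutate less.
        scale = alpha * math.exp(-1.0 / (1.0 + f))
        clones = [[v + random.gauss(0, scale) for v in cell]
                  for _ in range(n_clones)]
        new_pop.append(min(clones + [cell], key=sphere))  # elitist selection
    return new_pop

pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(100):
    pop = clonal_step(pop)
print(min(sphere(c) for c in pop))
```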

48 citations


Journal Article
TL;DR: Using linear component analysis, the tongue was found to possess five degrees of freedom, including one for jaw movement, and the velum and nasopharyngeal port two degrees of freedom; these parameters can be interpreted in phonetic/biomechanical terms and control a linear articulatory model of speech production.
Abstract: Volume images of tongue, jaw, velum, nasopharyngeal wall, etc., were acquired by MRI on one French subject uttering a corpus of sustained articulations. Supplementary images of jaw, hard palate, and nasal cavities were acquired by CT. The three-dimensional outlines of these organs are represented by the vertices of triangular surface meshes. Using linear component analysis, the tongue was found to possess five degrees of freedom, including one for jaw movement, and the velum and nasopharyngeal port two degrees of freedom. These parameters can be interpreted in phonetic/biomechanical terms and control a linear articulatory model of speech production.
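
Extracting degrees of freedom from mesh vertex coordinates amounts to a linear component analysis; here is a minimal NumPy sketch on synthetic data (the paper's analysis is guided, e.g. the jaw contribution is factored out of the tongue data first, which a plain SVD like this does not capture):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in: 46 articulations x 300 stacked mesh coordinates.
data = rng.normal(size=(46, 300))

X = data - data.mean(axis=0)               # center each coordinate
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var = s**2 / (s**2).sum()                  # variance explained per component
n_dof = int(np.searchsorted(np.cumsum(var), 0.9)) + 1
print(f"{n_dof} components explain 90% of the variance")
```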

44 citations


Proceedings ArticleDOI
TL;DR: In this paper, a control system for pneumatic variable valve actuation has been designed, implemented and tested in a single cylinder test engine with valve actuators provided by Cargine Engineering AB.
Abstract: A control system for pneumatic variable valve actuation has been designed, implemented and tested in a single-cylinder test engine with valve actuators provided by Cargine Engineering AB. The design goal for the valve control system was to achieve valve lifts between 2 and 12 mm over an engine speed interval of 300 to 2500 rpm. The control system was developed using LabVIEW and implemented on the PCI 7831. The design goals were fulfilled with some limitations. Due to physical limitations in the actuators, stable operation with valve lifts below 2.6 mm was not possible. During the engine testing the valve lift was limited to 7 mm to guarantee piston clearance. Different valve strategies for residual gas HCCI combustion were generated on a single-cylinder test engine.

41 citations


Proceedings ArticleDOI
TL;DR: In this paper, the authors use trace relations to define crosscutting and apply this approach to the identification of crosscutting across early phases in the software life cycle, based on the transitivity of trace relations.
Abstract: Traceability of requirements and concerns enhances the quality of software development. We use trace relations to define crosscutting. As starting point, we set up a dependency matrix to capture the relationship between elements at two levels, e.g. concerns and representations of concerns. The definition of crosscutting is formalized in terms of linear algebra, and represented with matrices and matrix operations. In this way, crosscutting can be clearly distinguished from scattering and tangling. We apply this approach to the identification of crosscutting across early phases in the software life cycle, based on the transitivity of trace relations. We describe an illustrative case study to demonstrate the applicability of the analysis.
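
The linear-algebra formulation invites a toy example: with a 0/1 dependency matrix from source elements (e.g. concerns) to target elements (e.g. requirements), scattering and tangling are row and column properties, and one plausible reading of the crosscutting definition can be computed directly (the matrix below is ours, not the paper's case study):

```python
import numpy as np

# Rows: source elements (concerns); columns: target elements (requirements).
M = np.array([[1, 1, 0],   # c0 maps to r0 and r1 -> scattered
              [1, 0, 0],   # c1 maps to r0 only
              [0, 0, 1]])  # c2 maps to r2 only
B = M.astype(bool)

scattered = B.sum(axis=1) > 1   # source mapped to more than one target
tangled   = B.sum(axis=0) > 1   # target reached by more than one source

# One plausible reading of the definition: source i crosscuts source j
# when i is scattered and both meet in a tangled target.
n = B.shape[0]
crosscut = np.array([[i != j and scattered[i] and (B[i] & B[j] & tangled).any()
                      for j in range(n)] for i in range(n)])
print(crosscut.astype(int))     # c0 crosscuts c1 via the tangled target r0
```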

35 citations


Journal ArticleDOI
TL;DR: This paper presents an approach for high level workflow specification that considers a comprehensive set of QoS requirements, which includes economical, legal and security aspects, and introduces a QoS-aware workflow reduction technique.
Abstract: Many important scientific and engineering problems may be solved by combining multiple applications in the form of a Grid workflow. We consider that for the wide acceptance of Grid technology it is important that the user has the possibility to express requirements on Quality of Service (QoS) at workflow specification time. However, most of the existing workflow languages lack constructs for QoS specification. In this paper we present an approach for high level workflow specification that considers a comprehensive set of QoS requirements. Besides performance related QoS, it includes economical, legal and security aspects. For instance, for security or legal reasons the user may express the location affinity regarding Grid resources on which certain workflow tasks may be executed. Our QoS-aware workflow system provides support for the whole workflow life cycle from specification to execution. Workflow is specified graphically, in an intuitive manner, based on a standard visual modeling language. A set of QoS-aware service-oriented components is provided for workflow planning to support automatic constraint-based service negotiation and workflow optimization. For reducing the complexity of workflow planning, we introduce a QoS-aware workflow reduction technique. We illustrate our approach with a real-world workflow for maxillo-facial surgery simulation.

29 citations


Journal ArticleDOI
TL;DR: Object-oriented platforms developed for the numerical solution of PDEs must combine flexibility and reusability in order to ease the integration of new functionalities and algorithms; two design patterns, Perspective and Method-Command-Strategy, that support extensibility and run-time flexibility are presented.
Abstract: Object-oriented platforms developed for the numerical solution of PDEs must combine flexibility and reusability, in order to ease the integration of new functionalities and algorithms. While designing similar frameworks, a built-in support for high performance should be provided and enforced transparently, especially in parallel simulations. The paper presents solutions developed to effectively tackle these and other more specific problems (data handling and storage, implementation of physical models and numerical methods) that have arisen in the development of COOLFluiD, an environment for PDE solvers. Particular attention is devoted to describe a data storage facility, highly suitable for both serial and parallel computing, and to discuss the application of two design patterns, Perspective and Method-Command-Strategy, that support extensibility and run-time flexibility in the implementation of physical models and generic numerical algorithms respectively.
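
The Perspective and Method-Command-Strategy patterns are specific to COOLFluiD's C++ design; the kind of run-time flexibility they provide can be suggested with a plain strategy-plus-command sketch (Python here, and every name and kernel below is a hypothetical stand-in, not COOLFluiD code):

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Strategies: interchangeable numerical kernels selected at run time.
def upwind_flux(left: float, right: float) -> float:
    return left                      # placeholder first-order scheme

def central_flux(left: float, right: float) -> float:
    return 0.5 * (left + right)      # placeholder central scheme

STRATEGIES: Dict[str, Callable[[float, float], float]] = {
    "upwind": upwind_flux,
    "central": central_flux,
}

@dataclass
class ComputeFluxCommand:
    """Command object: binds a strategy to its data and defers execution."""
    strategy: Callable[[float, float], float]
    left: float
    right: float

    def execute(self) -> float:
        return self.strategy(self.left, self.right)

cmd = ComputeFluxCommand(STRATEGIES["central"], 1.0, 3.0)
print(cmd.execute())  # 2.0; switching to "upwind" requires no code change
```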

27 citations


Proceedings ArticleDOI
TL;DR: A novel use of Repertory Grid Technique, with roots in the psychology of personal constructs, as a systematic and effective way to support analysts in viewing and manipulating requirements models to expose how entities relate to one another, thereby facilitating aspectual requirements identification and conflict detection.
Abstract: The dominant decomposition at the requirements level relies on how requirements are represented and modeled. An aspectual requirement is a broadly scoped concern that cuts across and has impacts on other requirements-level concerns or artifacts. This paper presents a novel use of Repertory Grid Technique, with roots in the psychology of personal constructs, as a systematic and effective way to support analysts in viewing and manipulating requirements models to expose how entities relate to one another, thereby facilitating aspectual requirements identification and conflict detection. We illustrate the approach with a proof-of-concept example adapted from the literature; in particular, we show how early aspects can be discovered in goal models, and how interference can be detected in viewpoints-based models.

Proceedings ArticleDOI
TL;DR: This paper presents a new method for representing and composing aspect-oriented models that is both scalable and expressive and validated on an air traffic control example based on a NASA application.
Abstract: This paper presents a new method for composing aspect models. The method is based on the use of a UML-based aspect modeling language to precisely and graphically specify model-level aspects, and the use of graph transformations to define how aspects should be composed and to apply those compositions. The result is a method for representing and composing aspect-oriented models that is both scalable and expressive. The work is validated on an air traffic control example based on a NASA application.

Proceedings ArticleDOI
TL;DR: A tool suite to support AORE in a scalable fashion is presented and it is shown that its output is comparable to that of a requirements engineer carrying out the same tasks manually.
Abstract: Aspect-Oriented Requirements Engineering (AORE) supports identification of crosscutting, aspectual requirements as well as analysis of their influence on other requirements of the system. Identifying and analyzing aspectual requirements manually is very resource intensive due to their broadly scoped nature and the large volumes and ambiguity of input information from the stakeholders. In this paper we present a tool suite to support AORE in a scalable fashion. The tools support identification of aspectual requirements and their influences on other requirements, conflict detection and resolution between aspectual requirements, as well as requirements representation and requirements document structuring. A number of case studies, including two in an industrial setting, demonstrate the scalability and efficiency of the tool suite. They also show that its output is comparable to that of a requirements engineer carrying out the same tasks manually.

Journal Article
TL;DR: In this article, the authors proposed an adaptive generalized spectral subtraction (GSS) algorithm, which adaptively adjusts the spectral order @b according to the local SNR in each critical band frame by frame as in a sigmoid function.
Abstract: The performance degradation of speech communication systems in noisy environments inspired increasing research on speech enhancement and noise reduction. As a well-known single-channel noise reduction technique, spectral subtraction (SS) has widely been used for speech enhancement. However, the spectral order @b set in SS is always fixed to some constants, resulting in performance limitation to a certain degree. In this paper, we first analyze the performance of the @b-order generalized spectral subtraction (GSS) in terms of the gain function to highlight its dependence on the value of spectral order @b. A data-driven optimization scheme is then introduced to quantitatively determine the change of @b with the change of the input signal-to-noise ratio (SNR). Based on the analysis results and considering the non-uniform effect of real-world noise on speech signal, we propose an adaptive @b-order GSS in which the spectral order @b is adaptively updated according to the local SNR in each critical band frame by frame as in a sigmoid function. The performance of the proposed adaptive @b-order GSS is finally evaluated objectively by segmental SNR (SEGSNR) and log-spectral distance (LSD), and subjectively by spectrograms and mean opinion score (MOS), using comprehensive experiments in various noise conditions. Experimental results show that the proposed algorithm yields an average SEGSNR increase of 2.99dB and an average LSD reduction of 2.71dB, which are much larger improvement than that obtained with the competing SS algorithms. The superiority of the proposed algorithm is also demonstrated by the highest MOS ratings obtained from the listening tests.
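
The sigmoid schedule from local SNR to β can be sketched numerically; the β range, sigmoid constants, and the one-band subtraction below are illustrative stand-ins, not the paper's fitted values:

```python
import numpy as np

def beta_of_snr(snr_db, beta_min=1.0, beta_max=2.0, k=0.5, snr0=0.0):
    """Sigmoid schedule: lower local SNR -> larger beta (more subtraction)."""
    s = 1.0 / (1.0 + np.exp(-k * (snr_db - snr0)))
    return beta_max - (beta_max - beta_min) * s

def gss_band(speech_mag, noise_mag, snr_db):
    """beta-order spectral subtraction within one critical band."""
    b = beta_of_snr(snr_db)
    diff = np.maximum(speech_mag**b - noise_mag**b, 1e-10)  # spectral floor
    return diff ** (1.0 / b)

print(beta_of_snr(np.array([-10.0, 0.0, 10.0])))  # beta falls as SNR rises
print(gss_band(np.array([2.0]), np.array([1.0]), -5.0))
```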

Journal ArticleDOI
TL;DR: Preliminary diagnostics show that the results of the LACES simulation for the tropical stage of Hurricane Earl's lifecycle compare well with available observations for the storm, and further studies involving advanced diagnostics have commenced, taking advantage of the uniquely large spatial extent of the high resolution LACES simulation to investigate multiscale interactions in the hurricane and its environment.
Abstract: The Large Atmospheric Computation on the Earth Simulator (LACES) project is a joint initiative between Canadian and Japanese meteorological services and academic institutions that focuses on the high resolution simulation of Hurricane Earl (1998). The unique aspect of this effort is the extent of the computational domain, which covers all of North America and Europe with a grid spacing of 1 km. The Canadian Mesoscale Compressible Community (MC2) model is shown to parallelize effectively on the Japanese Earth Simulator (ES) supercomputer; however, even using the extensive computing resources of the ES Center (ESC), the full simulation for the majority of Hurricane Earl's lifecycle takes over eight days to perform and produces over 5.2 TB of raw data. Preliminary diagnostics show that the results of the LACES simulation for the tropical stage of Hurricane Earl's lifecycle compare well with available observations for the storm. Further studies involving advanced diagnostics have commenced, taking advantage of the uniquely large spatial extent of the high resolution LACES simulation to investigate multiscale interactions in the hurricane and its environment. It is hoped that these studies will enhance our understanding of processes occurring within the hurricane and between the hurricane and its planetary-scale environment.

Journal ArticleDOI
TL;DR: A stochastic model is proposed to evaluate the QoS (make-span, reliability and cost) of workflow systems based on QWF-net, which extends the traditional WF-net by associating tasks with a firing-rate, failure-rate and cost-coefficient.
Abstract: Quality of Service (QoS) prediction is one of the most important research topics in workflow. In this paper, we propose a stochastic model to evaluate the QoS (make-span, reliability and cost) of workflow systems based on QWF-net, which extends the traditional WF-net by associating tasks with a firing-rate, failure-rate and cost-coefficient. Through a case study, we show that our framework is capable of modeling real-world workflow-based applications. Monte Carlo simulation in the case study also indicates that our analytical methods are consistent with simulation. We further present a sensitivity analysis technique to identify QoS bottlenecks. The paper concludes with a comparison between our approach and related work.
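
The QWF-net allows general routing, but the Monte Carlo side of the study can be illustrated with a sequential chain, where the (firing-rate, failure-rate, cost-coefficient) triples and the one-retry failure model are invented for the sketch:

```python
import random

def simulate_once(tasks):
    """One run of a sequential workflow with exponential task durations."""
    makespan = cost = 0.0
    for rate, fail_p, coeff in tasks:
        t = random.expovariate(rate)        # firing-rate -> duration
        if random.random() < fail_p:        # failure-rate -> one retry
            t += random.expovariate(rate)
        makespan += t
        cost += coeff * t                   # cost-coefficient -> cost
    return makespan, cost

tasks = [(1.0, 0.1, 2.0), (0.5, 0.05, 1.0), (2.0, 0.2, 3.0)]  # made-up values
runs = [simulate_once(tasks) for _ in range(10000)]
print(sum(m for m, _ in runs) / len(runs),   # estimated mean make-span
      sum(c for _, c in runs) / len(runs))   # estimated mean cost
```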

Journal Article
TL;DR: In the research field of Artificial Life, the concepts of emergence and adaptation form the basis of a class of models which describe reproducing individuals whose characteristics evolve over time; these models make it possible to investigate the laws of evolution, to observe emergent phenomena at the individual and population level, and additionally yield new design techniques for the computer animation and robotics industries.
Abstract: In the research field of Artificial Life, the concepts of emergence and adaptation form the basis of a class of models which describe reproducing individuals whose characteristics evolve over time. These models make it possible to investigate the laws of evolution, to observe emergent phenomena at the individual and population level, and additionally yield new design techniques for the computer animation and robotics industries. This paper presents an introductory, non-exhaustive survey of the constitutive work of the last twenty years. When examining the history of development of these models, different periods can be distinguished. Each one incorporated new modeling concepts; however, to this day all the models have failed to exhibit long-lasting, let alone open-ended, evolution. A particular look at the richness of dynamics of the modeled environments reveals that only little attention has been paid to their design, which could account for the experienced evolutionary barrier.

Proceedings ArticleDOI
TL;DR: It is proposed to extend the architectural description with slices and composition mechanisms to prevent this crosscutting, and an initial exploration of these concepts in an Online Auction system shows they can better separate concerns that would otherwise crosscut the views.
Abstract: Architectural views are at the foundation of software architecture and are used to describe the system from different perspectives. However, some architectural concerns crosscut the decomposition of the architecture into views. The drawbacks of crosscutting with respect to architectural views are similar to the drawbacks with respect to code, i.e. hampering reuse, maintenance and evolution of the architecture. This paper investigates the relations between architectural concerns, architectural drivers and views to identify why crosscutting manifests itself. We propose to extend the architectural description with slices and composition mechanisms to prevent this crosscutting and perform an initial exploration of these concepts in an Online Auction system. Within this limited setting, the first results look promising for better separating concerns that would otherwise crosscut the views.

Journal ArticleDOI
TL;DR: The results of the first prototype of a XMM-Newton pipeline processing task, parallelized at a CCD level, which can be run in a Grid system are presented.
Abstract: We present the results of the first prototype of a XMM-Newton pipeline processing task, parallelized at the CCD level, which can be run in a Grid system. By using the GridWay application and the XMM-Newton Science Archive system, the processing of the XMM-Newton data is distributed across the Virtual Organization (VO) constituted by three different research centres: ESAC (European Space Astronomy Centre), ESTEC (European Space Research and Technology Centre) and UCM (Complutense University of Madrid). The proposed application workflow adjusts well to the Grid environment, making use of the massive parallel resources in a flexible and adaptive fashion.

Journal ArticleDOI
TL;DR: A reducing set concurrent simplex (RSCS) variant of the Nelder-Mead algorithm compared favourably with the original algorithm, and also with the inherently parallel multidirectional search algorithm (MDS).
Abstract: This paper describes a method of parallelisation of the popular Nelder-Mead simplex optimization algorithm that can lead to enhanced performance on parallel and distributed computing resources. A reducing set of simplex vertices is used to derive search directions generally closely aligned with the local gradient. When tested on a range of problems drawn from real-world applications in science and engineering, this reducing set concurrent simplex (RSCS) variant of the Nelder-Mead algorithm compared favourably with the original algorithm, and also with the inherently parallel multidirectional search algorithm (MDS). All algorithms were implemented and tested in a general-purpose, grid-enabled optimization toolset.
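
The abstract describes reflecting a reducing set of the worst vertices through the centroid of the rest; a serial sketch of one such iteration follows, with the reflection rule and the fixed set size being simplified guesses at the RSCS variant rather than its published details:

```python
import numpy as np

def rscs_step(simplex, f, k):
    """Reflect the k worst vertices through the centroid of the others."""
    order = np.argsort([f(v) for v in simplex])
    simplex = simplex[order]
    centroid = simplex[:-k].mean(axis=0)
    # The k candidate evaluations are independent, so they could be
    # dispatched to parallel workers; done serially here.
    for i in range(1, k + 1):
        cand = 2 * centroid - simplex[-i]
        if f(cand) < f(simplex[-i]):
            simplex[-i] = cand
    return simplex

f = lambda v: ((v - 1.0) ** 2).sum()     # toy quadratic objective
simplex = np.random.default_rng(1).normal(size=(5, 4))  # n+1 = 5 points in 4-D
for _ in range(200):
    simplex = rscs_step(simplex, f, k=2)
print(min(f(v) for v in simplex))
```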

Journal Article
TL;DR: In this article, a modified GA is proposed to improve the efficiency of the beam angle optimization (BAO) problem in intensity-modulated radiotherapy (IMRT), and two types of expert knowledge are employed, i.e., beam orientation constraints and beam configuration templates.
Abstract: In this paper, a modified genetic algorithm (GA) is proposed to improve the efficiency of the beam angle optimization (BAO) problem in intensity-modulated radiotherapy (IMRT). Two modifications are made to the GA in this study: (1) a new operation, named the sorting operation, is introduced to sort the genes in each chromosome before the crossover operation, and (2) expert knowledge about tumor treatment is employed to guide the GA evolution. Two types of expert knowledge are employed, i.e., beam orientation constraints and beam configuration templates. The user-defined knowledge is used to reduce the search space and guide the optimization process. The sorting operation is introduced to inherently improve the evolution performance for the specified BAO problem. The beam angles are selected using the GA, and the intensity maps of the corresponding beams are optimized using a conjugate gradient (CG) method. Comparisons of the preliminary optimization results on a clinical prostate case show that the proposed optimization algorithm can improve the computational efficiency, in some cases substantially.
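
The sorting operation is simple to state in code; in this sketch the rationale (making one-point crossover exchange contiguous angular sectors) and the forbidden-sector constraint are our illustrative guesses at how the expert knowledge might be encoded:

```python
import random

def sort_genes(chromosome):
    """Proposed sorting operation: order beam angles before crossover."""
    return sorted(chromosome)

def crossover(p1, p2):
    p1, p2 = sort_genes(p1), sort_genes(p2)
    cut = random.randrange(1, len(p1))
    # After sorting, a one-point cut swaps contiguous angular sectors.
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def feasible(ch, forbidden=(170, 190)):
    """Beam orientation constraint, e.g. avoid posterior angles (invented)."""
    return all(not (forbidden[0] <= a <= forbidden[1]) for a in ch)

c1, c2 = crossover([300, 20, 120], [200, 60, 340])
print(c1, c2, feasible(c1), feasible(c2))
```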

Journal Article
TL;DR: A new Memetic Algorithm designed to compute near-optimal solutions for the MinLA problem is presented, which incorporates a highly specialized crossover operator, a fast MinLA heuristic used to create the initial population and a local search operator based on a fine-tuned Simulated Annealing algorithm.
Abstract: This paper presents a new Memetic Algorithm designed to compute near-optimal solutions for the MinLA problem. It incorporates a highly specialized crossover operator, a fast MinLA heuristic used to create the initial population, and a local search operator based on a finely tuned Simulated Annealing algorithm. Its performance is investigated through extensive experimentation over well-known benchmarks and compared with other state-of-the-art algorithms.
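
For context, MinLA (minimum linear arrangement) asks for a vertex ordering that minimizes the total edge length; this objective, which the crossover and the Simulated Annealing local search both optimize, is a one-liner:

```python
def minla_cost(order, edges):
    """Sum over edges of |position(u) - position(v)| under labeling 'order'."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]        # a toy graph
print(minla_cost([0, 1, 2, 3], edges))          # 1 + 1 + 2 + 1 = 5
print(minla_cost([0, 2, 1, 3], edges))          # 2 + 1 + 1 + 2 = 6
```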

Journal ArticleDOI
TL;DR: It is demonstrated that systematic run-time assertion checking inspired by the formal constraints facilitated the pinpointing of an exceptionally hard-to-reproduce compiler bug and that the run- time assertion checking has a negligible impact on performance.
Abstract: The memory management rules for abstract data type calculus presented by Rouson, Morris & Xu [15] are recast as formal statements in the Object Constraint Language (OCL) and applied to the design of a thermal energy equation solver. One set of constraints eliminates memory leaks observed in composite overloaded expressions with three current Fortran 95/2003 compilers. A second set of constraints ensures economical memory recycling. The constraints are preconditions, postconditions and invariants on overloaded operators and the objects they receive and return. It is demonstrated that systematic run-time assertion checking inspired by the formal constraints facilitated the pinpointing of an exceptionally hard-to-reproduce compiler bug. It is further demonstrated that the interplay between OCL's modeling capabilities and Fortran's programming capabilities led to a conceptual breakthrough that greatly improved the readability of our code by facilitating operator overloading. The advantages and disadvantages of our memory management rules are discussed in light of other published solutions [11,19]. Finally, it is demonstrated that the run-time assertion checking has a negligible impact on performance.
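
The paper's constraints are OCL pre- and postconditions on overloaded Fortran operators; the flavor of systematic run-time assertion checking can be suggested with a small design-by-contract decorator, where the "temporary" flag is an invented stand-in for the memory-recycling invariants:

```python
import functools

def checked(pre, post):
    """Run-time assertion checking of pre/postconditions on each call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args):
            assert pre(*args), f"precondition of {fn.__name__} violated"
            result = fn(*args)
            assert post(result, *args), f"postcondition of {fn.__name__} violated"
            return result
        return inner
    return wrap

class Field:
    """Toy ADT; 'temporary' mimics an intermediate of an overloaded expression."""
    def __init__(self, data, temporary=False):
        self.data, self.temporary = data, temporary

# Invented invariant: operands carry data, and the result is marked temporary
# so that a later assignment knows its memory may be recycled.
@checked(pre=lambda a, b: a.data is not None and b.data is not None,
         post=lambda r, a, b: r.temporary)
def add(a, b):
    return Field([x + y for x, y in zip(a.data, b.data)], temporary=True)

s = add(Field([1, 2]), Field([3, 4]))
print(s.data, s.temporary)  # [4, 6] True
```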

Journal Article
TL;DR: In this paper, the RBF-Gene algorithm is proposed to evolve both the structure and the numerical parameters of Radial Basis Function Neural Networks, i.e., the number of neurons and their weights.
Abstract: We propose here a new evolutionary algorithm, the RBF-Gene algorithm, to optimize Radial Basis Function Neural Networks. Unlike other works on this subject, our algorithm can evolve both the structure and the numerical parameters of the network: it is able to evolve the number of neurons and their weights. The RBF-Gene algorithm's behavior is shown on a simple toy problem, the 2D sine wave. Results on a classical benchmark are then presented. They show that our algorithm is able to fit the data very well while keeping the structure simple, so that the solution generalizes well.
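
What a variable-size individual encodes can be made concrete: an RBF network whose neuron count differs per individual; the data and parameters below are toy choices (a 1-D sine rather than the paper's 2D one):

```python
import numpy as np

def rbf_predict(x, centers, widths, weights, bias):
    """RBF network output; the GA is free to evolve len(centers)."""
    act = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * widths**2))
    return act @ weights + bias

# One individual might decode to a variable number of (center, width, weight)
# triples; here we fix 6 neurons for the demonstration.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
centers = rng.uniform(0, 2 * np.pi, 6)
widths = np.full(6, 0.8)
weights = rng.normal(size=6)
pred = rbf_predict(x, centers, widths, weights, bias=0.0)
print(np.mean((pred - np.sin(x)) ** 2))   # fitness: error on the sine task
```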

Journal ArticleDOI
TL;DR: The Styx Grid Service is presented, a simple system that wraps command-line programs and allows them to be run over the Internet exactly as if they were local programs, significantly increasing the efficiency of workflows that use large data volumes.
Abstract: The service-oriented approach to performing distributed scientific research is potentially very powerful but is not yet widely used in many scientific fields. This is partly due to the technical difficulties involved in creating services and workflows and the inefficiency of many workflow systems with regard to handling large datasets. We present the Styx Grid Service, a simple system that wraps command-line programs and allows them to be run over the Internet exactly as if they were local programs. Styx Grid Services are very easy to create and use and can be composed into powerful workflows with simple shell scripts or more sophisticated graphical tools. An important feature of the system is that data can be streamed directly from service to service, significantly increasing the efficiency of workflows that use large data volumes. The status and progress of Styx Grid Services can be monitored asynchronously using a mechanism that places very few demands on firewalls. We show how Styx Grid Services can interoperate with Web Services and WS-Resources using suitable adapters.
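
Styx Grid Services use their own protocol, which is not reproduced here; the underlying idea of wrapping command-line programs so that data streams directly from stage to stage can be shown with plain subprocess piping (generic Python on a Unix-like system, nothing Styx-specific):

```python
import subprocess

# Two stand-in command-line "services": a producer piped into a consumer.
# Streaming stdout to stdin avoids an intermediate file -- the property
# the paper exploits for workflows over large data volumes.
producer = subprocess.Popen(["printf", "3\n1\n2\n"], stdout=subprocess.PIPE)
consumer = subprocess.Popen(["sort"], stdin=producer.stdout,
                            stdout=subprocess.PIPE, text=True)
producer.stdout.close()   # let the producer see a broken pipe if sort exits
out, _ = consumer.communicate()
print(out.splitlines())   # ['1', '2', '3']
```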

BookDOI
TL;DR: It is shown that MHC does not induce any specific biases in the distribution of sizes, allowing size control during evolution, and that an accurate control of the size is possible while improving the performance of MHC.
Abstract: Most of the Evolutionary Algorithms handling variable-sized structures, like Genetic Programming, tend to produce overly long solutions, and the recombination operator used is often considered to be partly responsible for this phenomenon, called bloat. The Maximum Homologous Crossover (MHC) preserves similar structures from parents by aligning them according to their homology. This operator has already demonstrated interesting abilities in bloat reduction, but also some weaknesses in the exploration of the size of programs during evolution. In this paper, we show that MHC does not induce any specific biases in the distribution of sizes, allowing size control during evolution. Two different methods for size control based on MHC are presented and tested on a symbolic regression problem. Results show that an accurate control of the size is possible while improving the performance of MHC.

Journal Article
TL;DR: In this paper, a multi-agent algorithm for clustering distributed data in a peer-to-peer environment is proposed based on the biology-inspired paradigm of a flock of birds, where agents are used to discover clusters using a density-based approach.
Abstract: Clustering can be defined as the process of partitioning a set of patterns into disjoint and homogeneous meaningful groups, called clusters. Traditional clustering methods require that all data be located at the site where they are analyzed, and cannot be applied in the case of multiple distributed datasets. This paper describes a multi-agent algorithm for clustering distributed data in a peer-to-peer environment. The algorithm proposed is based on the biology-inspired paradigm of a flock of birds. Agents, in this context, are used to discover clusters using a density-based approach. Swarm-based algorithms have attractive features that include adaptation, robustness and a distributed, decentralized nature, making them well suited for clustering in P2P networks, in which it is difficult to implement centralized network control. We have applied this algorithm to synthetic and real-world datasets and we have measured the impact of the flocking search strategy on performance in terms of accuracy and scalability.

Journal ArticleDOI
TL;DR: A methodology that analyzes the source code of the distributed system and constructs a graph that uniquely represents each possible partial order of the system, termed the partial order graph (POG), which enables run-time evaluation of assert statements without relying on traces or additional messages.
Abstract: Capturing and examining the causal and concurrent relationships of a distributed system is essential to a wide range of distributed systems applications. Many approaches to gathering this information rely on trace files of executions. The information obtained through tracing is limited to those executions observed. We present a methodology that analyzes the source code of the distributed system. Our analysis considers each process's source code and produces a single comprehensive graph of the system's possible behaviors. The graph, termed the partial order graph (POG), uniquely represents each possible partial order of the system. Causal and concurrent relationships can be extracted relative either to a particular partial order, which is synonymous with a single execution, or to a collection of partial orders. The graph provides a means of reasoning about the system in terms of relationships that will definitely occur, may possibly occur, and will never occur. Distributed assert statements provide a means to monitor distributed system executions. By constructing the POG prior to system execution, the causality information provided by the POG enables run-time evaluation of the assert statement without relying on traces or additional messages.

Journal Article
TL;DR: In this article, the authors extend the EM methodology to combinatorial optimization problems and illustrate its effectiveness on the well-known resource-constrained project scheduling problem (RCPSP).
Abstract: Recently, an electromagnetism (EM) heuristic has been introduced by Birbil and Fang (2003) to solve unconstrained optimization problems. In this paper, we extend the EM methodology to combinatorial optimization problems and illustrate its effectiveness on the well-known resource-constrained project scheduling problem (RCPSP). We present computational experiments on a standard benchmark dataset, compare the results of the different modifications on the original EM framework with current state-of-the-art heuristics, and show that the procedure is capable of producing consistently good results for challenging instances of the problem under study. We also give directions for future research in order to further explore the potential of this new technique.
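
In Birbil and Fang's EM heuristic, each point receives a charge based on its objective value and moves along the resultant attraction-repulsion force; a continuous-space sketch follows (the combinatorial mapping onto RCPSP schedules, which is the paper's actual contribution, is not shown, and the step-size rule is simplified):

```python
import numpy as np

def em_forces(points, f):
    vals = np.array([f(p) for p in points])
    best = vals.min()
    # Charge: points closer to the best value get larger charge.
    q = np.exp(-len(points) * (vals - best) / ((vals - best).sum() + 1e-12))
    F = np.zeros_like(points)
    for i in range(len(points)):
        for j in range(len(points)):
            if i == j:
                continue
            d = points[j] - points[i]
            r = np.linalg.norm(d) + 1e-12
            sign = 1.0 if vals[j] < vals[i] else -1.0  # attract toward better
            F[i] += sign * d * q[i] * q[j] / r**2
    return F

f = lambda p: (p**2).sum()                       # toy objective
pts = np.random.default_rng(2).uniform(-5, 5, size=(8, 2))
for _ in range(100):
    F = em_forces(pts, f)
    pts += 0.1 * F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
print(min(f(p) for p in pts))
```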