Journal ArticleDOI

A component-based framework for certification of components in a cloud of HPC services

TL;DR: A Verification-as-a-Service (VaaS) framework for component certification on HPC Shelf is presented, aimed at providing higher confidence that components of parallel computing systems of HPC Shelf behave as expected according to one or more requirements expressed in their contracts.
About: This article is published in Science of Computer Programming. The article was published on 2020-06-01 and is currently open access. It has received 4 citations to date. The article focuses on the topics: Cloud computing & Certification.

Summary (8 min read)

1. Introduction

  • HPC Shelf is a cloud computing platform aimed at addressing domain-specific, computationally intensive problems typically emerging from computational science and engineering domains.
  • In HPC Shelf, applications must be able to identify and combine components to form parallel computing systems.
  • Through the proposed framework, components called certifiers may use a set of different certification tools to certify that the components of parallel computing systems meet a certain set of requirements.
  • The case studies used to demonstrate the proposed certification framework are particularly focused on functional and behavioral requirements that can be verified through automated verification methods and tools, such as theorem provers and model checkers.
  • From this assessment, the following outstanding features and contributions have been identified in favor of the certification framework of HPC Shelf:

2. HPC Shelf

  • HPC Shelf is a cloud computing platform that provides HPC services for providers of domain-specific applications.
  • An application is a problem-solving environment through which specialist users, the end users of HPC Shelf, specify problems and obtain computational solutions for them.
  • It is assumed that these solutions are computationally intensive, thus demanding the use of large-scale parallel computing infrastructure, i.e. comprising multiple parallel computing platforms engaged in a single computational task.
  • Applications generate computational solutions as component-oriented parallel computing systems.
  • To do so, these components comply with Hash [8], a parallel component model whose components may exploit parallel processing in distributed-memory parallel computing platforms.

2.1. Component kinds of parallel computing systems

  • Component platforms that comply with the Hash component model distinguish components according to a set of component kinds.
  • Action bindings connect a set of action ports belonging to computation and connector components.
  • It may be programmed by using a general-purpose programming language (currently, C#) or SAFeSWL (SAFe Scientific Workflow Language), an XML-based orchestration language designed for activating the computational tasks of the solution components in a prescribed order [1].
  • In a MapReduce parallel computing system, mappers, reducers, splitters, and shufflers may be connected through bindings between their compatible collect_pairs and feed_pairs ports.
  • In turn, the connectors have only the read_chunk/finish_chunk and chunk_ready action names, distributed in their facets.
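The pair-flow wiring described in this section can be sketched as follows; the Python classes, the bind helper and the pair format are illustrative assumptions, not the actual Hash model API.

```python
# Hypothetical sketch of the MapReduce wiring: components expose a
# feed_pairs (output) side and a collect_pairs (input) side, and a
# binding connects compatible ports. All names here are illustrative.

class Component:
    def __init__(self, name):
        self.name = name
        self.feed_pairs = []      # key/value pairs this component emits
        self.collect_pairs = []   # key/value pairs this component receives

def bind(producer, consumer):
    """Model a binding between compatible feed_pairs/collect_pairs ports."""
    consumer.collect_pairs.extend(producer.feed_pairs)

splitter = Component("splitter")
mapper = Component("mapper")
splitter.feed_pairs = [("doc1", "a b a")]
bind(splitter, mapper)
print(mapper.collect_pairs)  # [('doc1', 'a b a')]
```

In the actual framework these bindings are architectural connections between component ports, not in-memory lists; the sketch only illustrates the direction of the pair flow.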

2.3. Stakeholders

  • The following stakeholders work around HPC Shelf: The specialists (end users) use applications for specifying problems using a domain-specific interface.
  • They do not deal directly with components, which are hidden behind the domain-specific abstractions of the application interface.
  • The providers create and deploy applications, by designing their high-level interfaces and by programming the generation of parallel computing systems.
  • For that, they are experts in parallel computer architectures and parallel programming.
  • Through contextual contracts, they may specify the architectural features of the virtual platforms they support.

2.4. Architecture

  • The multilayer cloud architecture of HPC Shelf for servicing applications comprises the three elements in Fig. 5: Frontend, Core and Backend.
  • The Frontend is SAFe (Shelf Application Framework) [1], a collection of classes and design patterns used by providers to build applications.
  • It supports SAFeSWL as a language for specifying parallel computing systems.
  • In turn, using the orchestration subset, the provider may program its workflow by specifying the order in which action names must be activated.
  • Applications access the services of the Core for resolving contextual contracts and deploying the selected components on virtual platforms.

2.5. Contextual contracts

  • Since both abstract components named Mapper and Reducer are derived from MRComputation, function is used to specify the custom map and reduce functions that they will execute in particular MapReduce computations.
  • A contextual contract is an abstract component whose context parameters have particular execution context assumptions associated with each of them.
  • In the classification phase, the list of candidate system components is ordered taking into account the best fulfillment of the contract requirements and the resource allocation policies of HPC Shelf.
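The classification phase described above, ordering candidate components by how well they fulfill the contract, can be sketched as a simple ranking; the dictionary-based contract encoding and the scoring rule are assumptions for illustration only.

```python
# Illustrative ranking (not the actual HPC Shelf resolver) of candidate
# system components by fulfillment of contextual contract requirements.

def fulfillment(candidate, contract):
    """Count how many contract parameters the candidate satisfies."""
    return sum(1 for key, value in contract.items()
               if candidate.get(key) == value)

contract = {"function": "wordcount_map", "multicore": True}
candidates = [
    {"name": "MapperA", "function": "wordcount_map", "multicore": False},
    {"name": "MapperB", "function": "wordcount_map", "multicore": True},
]
ranked = sorted(candidates,
                key=lambda c: fulfillment(c, contract),
                reverse=True)
print(ranked[0]["name"])  # MapperB
```

The real resolution procedure also weighs HPC Shelf's resource allocation policies, which this sketch omits.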

3. The certification framework

  • For the purpose of leveraging component certification in HPC Shelf, a certification framework is introduced in this section.
  • It encompasses a set of component kinds, composition rules and design patterns.
  • They provide an environment where certification tools can be encapsulated into components to provide some level of assurance to application providers and component developers that components of parallel computing systems meet a predetermined set of requirements prior to their instantiation.
  • Each certifier associated with a certifiable component may impose its own set of obligations on compatible certifiable components, such as the use of certain programming languages, design patterns, code conventions, annotations, etc.
  • The service interface determines which kind of ad hoc properties are supported and how they are specified.

3.1. Parallel certification systems

  • Certifier components are implemented as parallel certification systems, comprising the following architectural elements, as depicted in Fig. 6: A set of tactical components; A certification-workflow component that orchestrates the tactical ones; A set of bindings, connecting the tactical components to the certification-workflow component.
  • The certification-workflow component performs a certification procedure on the certifiable components connected to the certifier.
  • Parallel certification systems are analogous to parallel computing systems, but aimed at certification purposes.
  • In turn, tactical components play the role of solution components.
  • For this reason, they must be seen as special kinds of virtual platforms on which the proof infrastructure is installed and ready to run verification tasks.

3.2. Tactical components

  • As stated earlier, a tactical component encapsulates a certification infrastructure comprising one or more certification tools.
  • An action port with the action names perform, conclusive and inconclusive, where the latter two are alternatives.
  • When the certification subroutine terminates, either conclusive or inconclusive may be activated by the tactical component.
  • In the current implementation, the contextual signature of Tactical, the component type from which specific tactical components are derived, is similar to that of EnvironmentPortCertification.
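The action protocol above, where perform starts the certification subroutine and exactly one of conclusive or inconclusive is then activated, can be sketched as follows; the function names and the verdict encoding are assumptions.

```python
# Minimal sketch of a tactical component's action protocol: perform runs
# the encapsulated certification tool, then activates either conclusive
# or inconclusive (the two are alternatives, never both).

def perform(run_tool):
    """run_tool returns True/False for a definite verdict, None otherwise."""
    verdict = run_tool()
    if verdict is None:
        return "inconclusive"
    return "conclusive"

print(perform(lambda: True))   # conclusive
print(perform(lambda: None))   # inconclusive
```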

3.3. Certifier components

  • In the orchestration code, an activation of the action certify will instantiate a parallel certification system for each certifier, which will certify the certifiable component in parallel.
  • Each one may be reused to certify all certifiable components associated with the same certifier, when their certify actions are activated.
  • After the certification procedure, a certifiable component is considered certified if all default, contractual, component and ad hoc properties have been checked by the certifier.
  • Regarding the action ports, the certification-workflow component has ports to be connected to the action ports of its tactical components.
  • The units may run on different data partitions and synchronize by exchanging messages in a discipline akin to the MPI programming model [15].

4.1. Tactical components for C4

  • The verification of computation components may resort to two different classes of methods and tools.
  • The first class is based on deductive program verification, which partially automates axiomatic program verification based on some variant of Floyd-Hoare logic.
  • The alternative approach explores the space of reachable states of a system through model checking.

4.1.1. Deductive tactical components

  • Tactical components for deductive verification require the target component programs to be annotated with assertions in the style of the Floyd-Hoare logic or its extensions, namely, separation logic [16], for mutable data structures, Owicki-Gries reasoning [17], for shared-memory parallel programs, and Apt’s reasoning [18], for distributed-memory components.
  • Actually, only ParTypes [19] can verify C/MPI programs, annotated in the syntax of VCC [20], against a high-level communication protocol stored by the certifier as an ad hoc property.
  • In such a case, they are equipped with reasonably complex interfaces for editing, searching and choosing suitable proof procedures and heuristics.
  • Using this approach, the application may either automatically interact with the tactical component or require some intervention of the specialist user to proceed with the verification subroutine.

4.1.2. Model checking tactical components

  • Model checking provides a powerful alternative to deductive verification tools to establish properties of MPI programs.
  • In the context of the certification framework discussed in this article, the following tools were explored: ISP (In-situ Partial Order) [34] and CIVL (Concurrency Intermediate Verification Language) [35].
  • Both verify a fixed, although sufficiently expressive, set of safety properties.
  • The former checks for deadlocks, assertion violations, MPI object leaks, and communication races (i.e. unexpected communication matches) in components written in C, C++ or C#, carrying MPI/OpenMP directives.
  • CIVL, in turn, is also able to establish functional equivalence between programs and can discharge verification conditions to the provers Z3, CVC3 and CVC4.

4.2. Contextual contracts and architecture

  • Thus, C4 certifiers may prescribe the host programming language on which the computation component is written, as well as the message passing library for communication between the units of the computation component.
  • Also, certifiers can determine whether or not the ad hoc properties are supported.
  • The tactical components of C4MPIComplex are ISP, JWFA and CZ, whose abstract components restrict the bounds of the context parameters of Tactical to define the interface types through which they talk to the certification-workflow component.
  • Like in most scientific workflow management systems, such as Askalon [37], BPEL Sedna [38], Kepler [39], Pegasus [40], Taverna [41] and Triana [42], SAFeSWL workflows are usually represented by components and execution dependencies among them, usually adopting abstract descriptions of components and abstracting away from the computing platforms on which they run.
  • At an appropriate time of the workflow execution, a resolution procedure may be triggered for discovering an appropriate component implementation. It is therefore relevant to ensure that the computational actions of components are activated only after their effective resolution.

5.2. Architecture and contextual contracts

  • SWC2 prescribes two default properties: Deadlock absence; Obedience to the protocol in which lifecycle actions must be activated, for each component, presented in Section 2.1.
  • It supports all the default properties prescribed for SWC2 certifiers, as well as ad hoc properties.
  • There are two concrete components of mCRL2Certifier.
  • mCRL2Certifier certifiers may be able to exploit parallelism by initiating different verification tasks on distinct processing nodes of the tactical component.
  • For simplifying formulas, mCRL2 allows the use of regular expressions over the set of actions as possible labels of both necessity and eventuality modalities.

5.3.1. The translation process

  • The translation process follows directly the operational rules (Fig. 10) defined for an abstract grammar of a formal specification of the orchestration subset of SAFeSWL (Fig. 9).
  • Rule finish indicates that an asynchronously activated action can actually occur, having its handle registered in F and emitting the action (a, h) to the system.
  • Rules select-left and select-right indicate the need for the creation of mCRL2 processes that control the state (enabled/disabled) of actions.
  • Rules repeat, continue and break indicate, respectively, the need for the creation of an mCRL2 process that manages a repetition task in order to detect the need for a new iteration, the return to the beginning of the iteration, or the end of the iteration.
  • The first part of the conjunction states that a deploy may not be performed before a resolve.
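The safety property stated in the last bullet, that a deploy may not be performed before a resolve, can be checked over concrete action traces; the following Python trace checker is a hedged stand-in for the modal formula that mCRL2 would verify over the whole state space.

```python
# Sketch: check one action trace against the property that no 'deploy'
# occurs before a 'resolve'. mCRL2 verifies this exhaustively over all
# reachable traces; here we test a single trace for illustration.

def deploy_after_resolve(trace):
    resolved = False
    for action in trace:
        if action == "resolve":
            resolved = True
        elif action == "deploy" and not resolved:
            return False  # property violated
    return True

print(deploy_after_resolve(["resolve", "deploy", "instantiate", "run"]))  # True
print(deploy_after_resolve(["deploy", "resolve"]))                        # False
```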

6. Case studies

  • Three case studies demonstrate the certification framework of HPC Shelf, as well as the use of C4 and SWC2 certifiers.
  • The workflow maintains three versions of each image, which overlap the center of the Pleiades cluster, each corresponding to a different color band: red, infrared and blue.
  • Fig. 12(a) presents the architecture of a single subworkflow.
  • To do this, the application provider must configure certification bindings between these component instances and one or more instances of C4MPISimple.
  • Fig. 15 reports execution times for this certification case study, varying the number of processing nodes and cores per node involved in the execution of the tactical component ISP.

6.2.1. The non-iterative system with three stages

  • They are the intermediate stages of a pipeline.
  • An intermediate connector is required for communication between reducer_2 and application, since they are placed on distinct virtual platforms.
  • The workflow of the non-iterative system initially performs the lifecycle action activation sequence (resolve, deploy, instantiate, and run) for all components, because, in a pipeline pattern, they are required from the beginning of the computation.
  • Then, it does the same for the computations and connectors that will be placed on these virtual platforms.
  • After all the iterations are terminated, the parallel activation completes and all components are released.
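A hedged sketch of the lifecycle activation described above, assuming the sequence is performed action by action for all components (the exact interleaving in SAFeSWL may differ):

```python
# Illustrative orchestration of the lifecycle action sequence
# (resolve, deploy, instantiate, run) for every pipeline component.

LIFECYCLE = ["resolve", "deploy", "instantiate", "run"]

def start_pipeline(components):
    trace = []
    for action in LIFECYCLE:       # each lifecycle action...
        for comp in components:    # ...activated for all components
            trace.append((comp, action))
    return trace

trace = start_pipeline(["mapper", "reducer_1"])
print(trace[0])   # ('mapper', 'resolve')
print(trace[-1])  # ('reducer_1', 'run')
```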

6.2.2. The iterative system with a single stage

  • In the iterative workflow (Fig. 17), the single stage consists of a shuffler and a pair of parallel reducers.
  • In turn, before the next iterations, a SAFeSWL fragment (truncated in this summary) is executed to enable the collector facets that receive pairs from the reducers and disable the collector facet that receives pairs from source; it nests <invoke> activations such as port="task_shuffle_collector_active_status_0" action="CHANGE_STATUS_BEGIN" inside <parallel> and <sequence> constructs.
  • Mappers and reducers receive chunks of input pairs in the first iteration (read_chunk/finish_chunk activation) and process them (invocation to the mapping or reduction function) when the action perform is activated.
  • The former describes precedences of execution between two distinct components or component actions.
  • Fig. 18 depicts the average certification times for both workflows by varying the number of units (processing nodes) of the tactical component from 1 to 16.

6.3. Parallel sorting

  • Parallel sorting is often used in HPC systems when dealing with huge amounts of data [49].
  • The contextual signature of Sorting declares a set of context parameters that may guide the choice of a sorting component that implements the supposedly best algorithm according to the contextual contract.
  • sorting_place states whether internal or external sorting must be employed. The context parameters are described below.
  • Otherwise, it may employ a non-comparison-based algorithm.
  • In turn, number_nodes, multicore, and accelerator_type are so-called platform parameters, since they describe properties of the underlying parallel computing platform that must be taken into account in the component implementation.
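The way such context parameters could guide implementation choice can be sketched as below; the mapping from parameters to algorithms is a hypothetical example, not the rules used by actual Sorting implementations.

```python
# Illustrative selection over the Sorting contract parameters named above
# (sorting_place, number_nodes, multicore, accelerator_type). The
# algorithm choices are hypothetical.

def choose_algorithm(contract):
    if contract.get("sorting_place") == "external":
        return "external-mergesort"      # data does not fit in memory
    if contract.get("number_nodes", 1) > 1:
        return "parallel-samplesort"     # distributed-memory platform
    return "quicksort"                   # single-node default

print(choose_algorithm({"sorting_place": "internal", "number_nodes": 4}))
# parallel-samplesort
```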

6.3.1. Certifying parallel sorting components

  • Let QuickSortImpl and MergeSortImpl be two concrete components of Sorting that implement parallel versions of the well-known Quicksort and Mergesort algorithms, respectively.
  • Also, virtual platforms containing 2 processing nodes have been chosen for all tactical components.
  • For both QuickSortImpl and MergeSortImpl, all default properties of C4MPIComplex have been proved successfully.
  • The parallel times calculated for this case study make it possible to conclude that, in general, the smallest times occur for tactical components with a single unit running on a processing node with many cores.

6.4. Discussion

  • The case studies with Montage, MapReduce and Integer Sorting are primarily aimed at demonstrating the feasibility of certifying components of distinct kinds using the certification framework of HPC Shelf.
  • As all the certification processes involved in the case studies completed successfully, the experiments demonstrated this feasibility.
  • Therefore, it is important to emphasize that the experiments whose results are evaluated in this article do not have the ambition to constitute a definitive validation study of the certification framework of HPC Shelf.
  • The case studies have also shown how the inherent parallelism supported by the certification framework, using the parallel computing infrastructure of HPC Shelf itself, may be used to accelerate certification tasks, even if the underlying certification tools have not been developed with parallelism in mind, which is the case of the theorem provers and model checkers used in the experiments.
  • To reinforce this expectation, it is worth noting that, although the current implementation of the certification framework is not optimized for performance, the certification times achieved in the experiments, varying between 20 seconds and 12 minutes, are not influenced by possible implementation overheads.

7.1. Certification of software components

  • The certification of software components has been an active research area in component-based software engineering (CBSE) since the 1990s [3–5,7].
  • From the current literature, the authors define the certification of software components as the study and application of methods and techniques intended to provide a well-defined level of confidence that the components of a system meet a given set of requirements.
  • The literature does not mention other proposals of general-purpose certification artifacts in the context of CBHPC research, which could be directly compared to the certification framework of HPC Shelf.
  • In such applications, incorrect results and execution failures may cause unsustainable increases in project costs and schedules.

7.2. Verification-as-a-service (VaaS)

  • As pointed out earlier, the kind of certification focused on this paper is the verification of functional and behavioral properties of components of parallel computing systems in a cloud environment through formal methods, automated by deductive and model-checking tools.
  • The authors have systematically searched for related work on VaaS in the most comprehensive databases of scientific literature in computer science: IEEE, Scopus, ACM and Science Direct, applying the search string “(platform OR framework) AND service AND formal AND verification AND component AND cloud” to title, abstract and keywords fields.
  • Most of the discarded papers do not propose platforms or frameworks for the intended purpose.
  • The column Total 1 represents the number of distinct papers found, i.e. after removing redundancies.
  • The following sections (7.2.1 and 7.2.2) describe the papers classified in these two groups, respectively.

7.2.1. Verification of cloud administration concerns

  • Evangelidis et al. propose a probabilistic verification scheme aimed at dynamically evaluating auto-scaling policies of IaaS and PaaS virtual machines in Amazon EC2 and Microsoft Azure [59].
  • For that, it applies a Markov model implemented in the PRISM model checker [60].
  • Zhou et al. propose a formal framework for resource provisioning as a service [61].
  • Di Cosmo et al. propose the Aeolus component model [63].
  • The architecture and methodology for enabling SDV to operate in Azure, as well as the results of SDV on single drivers and driver suites using various configurations of the cloud relative to a local machine are reported.

7.2.2. Verification of functional requirements

  • Nezhad et al. propose COOL, a framework for provider-side design of cloud solutions based on formal methods and model-driven engineering [70].
  • Klai and Ochi address the problem of abstracting and verifying the correctness of integrating service-based business processes (SBPs) [74].
  • Skowyra et al. present Verificare, a verification platform for applications based on Software-Defined Networks (SDN) [82].
  • Three layers compose the framework: graphical layer, which uses sequence diagrams for system modeling; formal specification layer, which uses π-calculus to formalize the UML sequence diagram; and verification layer, in which π-calculus processes are verified by NuSMV.
  • It parallelizes symbolic execution, a popular program verification technique, to run on large shared-nothing clusters of computers, such as Amazon EC2.

7.3. Discussion

  • In comparison with the related work described above, the certification framework of HPC Shelf has the following distinguishing characteristics:
  • It is a general-purpose framework that can be used for automatic certification of a wide range of requirements, including both functional and non-functional, while other works address a particular requirement.
  • For that, it may employ the same parallel computing infrastructure where certifiable components perform their tasks.
  • In turn, certifier selection is a component developer responsibility, using contextual contracts.
  • Also, when designing certifier components, certification authorities may provide high-level interfaces to facilitate the interaction of application providers and component developers with the underlying verification tools.


Citations
Journal ArticleDOI
TL;DR: In this paper , a multi-dimensional certification scheme for service selection in cloud computing is presented, where additional dimensions model relevant aspects (e.g., programming languages and development processes) that significantly contribute to the quality of the results.
Abstract: The cloud computing has deeply changed how distributed systems are engineered, leading to the proliferation of ever-evolving and complex environments, where legacy systems, microservices, and nanoservices coexist. These services can severely impact on individuals’ security and safety, introducing the need of solutions that properly assess and verify their correct behavior. Security assurance stands out as the way to address such pressing needs, with certification techniques being used to certify that a given service holds some non-functional properties. However, existing techniques build their evaluation on software artifacts only, falling short in providing a thorough evaluation of the non-functional properties under certification. In this paper, we present a multi-dimensional certification scheme where additional dimensions model relevant aspects (e.g., programming languages and development processes) that significantly contribute to the quality of the certification results. Our multi-dimensional certification enables a new generation of service selection approaches capable to handle a variety of user's requirements on the full system life cycle, from system development to its operation and maintenance. The performance and the quality of our approach are thoroughly evaluated in several experiments.

1 citation

Journal ArticleDOI
10 Jun 2020
TL;DR: The need to incorporate custom components into enterprise platforms, whose freely available editions lack features reserved for the premium or paid editions, is a hot topic of discussion among programmers, as mentioned in this paper.
Abstract: There is a need to incorporate custom components into enterprise platforms whose editions available to the general public are free but limited in features that only the premium or paid editions provide. This has become an uncomfortable issue for software developers in public and private companies, because when building their applications on these IT solutions they are restricted in extending functionality according to the business model. Consequently, with the worldwide economic crisis, organizations choose the free versions to avoid high licensing costs, but this forces programmers to develop software in unconventional ways in order to extend functionality.

1 citation

Proceedings Article
01 Jan 2015
TL;DR: In this article, the authors propose a bottom-up approach to check the correct interaction between different service-based business processes distributed over a cloud environment and which may be provided by various organizations.
Abstract: Cloud environments are being increasingly used for deploying and executing business processes and particularly service-based business processes (SBPs). In this paper, we propose a bottom-up approach to check the correct interaction between different SBPs distributed over a Cloud environment and which may be provided by various organizations. The whole system's model being unavailable, an up-down analysis approach is not appropriate. To check the correctness of the composition of several SBPs communicating asynchronously and sharing resources (hardware, platform, and software), we consider temporal properties that can be expressed with the LTL logic. Each part of the whole composite SBP exposes its abstract model, represented by a Symbolic Observation Graph (SOG), to allow the correct collaboration with possible partners in the Cloud. The SOG is adapted in order to reduce the verification of the entire composite model to the verification of the composition of the SOG-based abstractions.
References
Journal Article
TL;DR: The NuSMV tool as mentioned in this paper is a symbolic model checker originated from the reengineering of SMV, the BDD-based model checker developed at CMU, and designed to be applicable in technology transfer projects; it is a well structured, open, flexible and documented platform for model checking, and is robust and close to industrial systems standards.
Abstract: This paper describes version 2 of the NuSMV tool. NuSMV is a symbolic model checker originated from the reengineering, reimplementation and extension of SMV, the original BDD-based model checker developed at CMU [15]. The NuSMV project aims at the development of a state-of-the-art symbolic model checker, designed to be applicable in technology transfer projects: it is a well structured, open, flexible and documented platform for model checking, and is robust and close to industrial systems standards [6].

1,377 citations

Journal ArticleDOI
TL;DR: The results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities are presented.
Abstract: This paper describes the Pegasus framework that can be used to map complex scientific workflows onto distributed resources. Pegasus enables users to represent the workflows at an abstract level without needing to worry about the particulars of the target execution systems. The paper describes general issues in mapping applications and the functionality of Pegasus. We present the results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities. A real-life astronomy application is used as the basis for the study.

1,324 citations

Journal ArticleDOI
TL;DR: This paper presents the Alloy language in its entirety, and explains its motivation, contributions and deficiencies.
Abstract: Alloy is a little language for describing structural properties. It offers a declaration syntax compatible with graphical object models, and a set-based formula syntax powerful enough to express complex constraints and yet amenable to a fully automatic semantic analysis. Its meaning is given by translation to an even smaller (formally defined) kernel. This paper presents the language in its entirety, and explains its motivation, contributions and deficiencies.

1,280 citations


"A component-based framework for cer..." refers methods in this paper

  • ...SDN components, safety and security requirements, can be specified from a variety of formal libraries and automatically translated and verified through a variety of tools, such as PRISM [60], SPIN [81] and Alloy [83]....


Journal ArticleDOI
TL;DR: Hoare's deductive system for proving partial correctness of sequential programs is extended to include the parallelism described by the language, and the proof method lends insight into how one should understand and present parallel programs.
Abstract: A language for parallel programming, with a primitive construct for synchronization and mutual exclusion, is presented. Hoare's deductive system for proving partial correctness of sequential programs is extended to include the parallelism described by the language. The proof method lends insight into how one should understand and present parallel programs. Examples are given using several of the standard problems in the literature. Methods for proving termination and the absence of deadlock are also given.

1,050 citations


"A component-based framework for cer..." refers methods in this paper

  • ...Tactical components for deductive verification require the target component programs to be annotated with assertions in the style of the Floyd-Hoare logic or its extensions, namely, separation logic [16], for mutable data structures, Owicki-Gries reasoning [17], for shared-memory parallel programs, and Apt’s reasoning [18], for distributed-memory components....


Book
20 Jul 2007
TL;DR: This book documents Core Maude and Full Maude, covering functional and system modules, parameterized data structures, object-based programming, and LTL model checking, together with applications and tools.
Abstract: I: Core Maude.- Using Maude.- Syntax and Basic Parsing.- Functional Modules.- A Hierarchy of Data Types: From Trees to Sets.- System Modules.- Playing with Maude.- Module Operations.- Predefined Data Modules.- Specifying Parameterized Data Structures in Maude.- Object-Based Programming.- Model Checking Invariants Through Search.- LTL Model Checking.- Reflection, Metalevel Computation, and Strategies.- Metaprogramming Applications.- Mobile Maude.- User Interfaces and Metalanguage Applications.- II: Full Maude.- Full Maude: Extending Core Maude.- Object-Oriented Modules.- III: Applications and Tools.- A Sampler of Application Areas.- Some Tools.- IV: Reference.- Debugging and Troubleshooting.- Complete List of Maude Commands.- Core Maude Grammar.

900 citations


Additional excerpts

  • ...propose a semantic framework based on bigraphical reactive systems (BRS) [67] and Maude language [68] for modeling both structural and behavioral aspects of cloud-based systems, aimed at verifying elasticity properties inherent to these systems through model checking [69]....


Frequently Asked Questions (1)
Q1. What are the contributions in "A component-based framework for certification of components in a cloud of hpc services" ?

In this paper, the authors propose a general-purpose certification framework for certifying the components of parallel computing systems in HPC Shelf, a cloud computing platform of HPC services.