Journal ArticleDOI

A component-based framework for certification of components in a cloud of HPC services

TL;DR: A Verification-as-a-Service (VaaS) framework for component certification on HPC Shelf is presented, aimed at providing higher confidence that components of parallel computing systems of HPC Shelf behave as expected, according to one or more requirements expressed in their contracts.
About: This article is published in Science of Computer Programming. The article was published on 2020-06-01 and is currently open access. It has received 4 citations to date. The article focuses on the topics: Cloud computing & Certification.

Summary (8 min read)

1. Introduction

  • HPC Shelf is a cloud computing platform aimed at addressing domain-specific, computationally intensive problems typically emerging from computational science and engineering domains.
  • In HPC Shelf, applications must be able to identify and combine components to form parallel computing systems.
  • Through the proposed framework, components called certifiers may use a set of different certification tools to certify that the components of parallel computing systems meet a certain set of requirements.
  • The case studies used to demonstrate the proposed certification framework are particularly focused on functional and behavioral requirements that can be verified through automated verification methods and tools, such as theorem provers and model checkers.
  • From this assessment, a number of outstanding features and contributions have been identified in favor of the certification framework of HPC Shelf.

2. HPC Shelf

  • HPC Shelf is a cloud computing platform that provides HPC services for providers of domain-specific applications.
  • An application is a problem-solving environment through which specialist users, the end users of HPC Shelf, specify problems and obtain computational solutions for them.
  • It is assumed that these solutions are computationally intensive, thus demanding the use of large-scale parallel computing infrastructure, i.e. comprising multiple parallel computing platforms engaged in a single computational task.
  • Applications generate computational solutions as component-oriented parallel computing systems.
  • To this end, these components comply with Hash [8], a parallel component model whose components may exploit parallel processing on distributed-memory parallel computing platforms.

2.1. Component kinds of parallel computing systems

  • Component platforms that comply with the Hash component model distinguish components according to a set of component kinds.
  • Action bindings connect a set of action ports belonging to computation and connector components.
  • The workflow component may be programmed using a general-purpose programming language (currently, C#) or SAFeSWL (SAFe Scientific Workflow Language), an XML-based orchestration language designed for activating the computational tasks of the solution components in a prescribed order [1].
  • In a MapReduce parallel computing system, mappers, reducers, splitters, and shufflers may be connected through bindings between their compatible collect_pairs and feed_pairs ports.
  • In turn, the connectors have only the read_chunk/finish_chunk and chunk_ready action names, distributed across their facets.
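
As an illustration of this port-and-binding vocabulary, the following C# sketch shows how an orchestration might drive components through their action ports. All type and member names here (IActionPort, Activate, MapReduceStep) are hypothetical, introduced only for illustration; they are not the actual Hash/HPC Shelf API.

    // Hypothetical sketch of action-port activation; names are illustrative.
    public interface IActionPort
    {
        // Hypothetical semantics: blocks until the counterpart of the
        // action binding also activates the given action name.
        void Activate(string actionName);
    }

    public sealed class MapReduceStep
    {
        private readonly IActionPort mapper, shuffler;

        public MapReduceStep(IActionPort mapper, IActionPort shuffler)
        {
            this.mapper = mapper;
            this.shuffler = shuffler;
        }

        public void RunOnce()
        {
            mapper.Activate("read_chunk");    // fetch a chunk of input pairs
            mapper.Activate("perform");       // apply the map function
            shuffler.Activate("chunk_ready"); // signal the connector
            mapper.Activate("finish_chunk");  // close the chunk
        }
    }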

2.3. Stakeholders

  • The following stakeholders work around HPC Shelf. The specialists (end users) use applications to specify problems through a domain-specific interface.
  • They do not deal directly with components, which are hidden behind the domain-specific abstractions of the application interface.
  • The providers create and deploy applications, by designing their high-level interfaces and by programming the generation of parallel computing systems.
  • Component developers, in turn, are experts in parallel computer architectures and parallel programming.
  • Through contextual contracts, platform maintainers may specify the architectural features of the virtual platforms they support.

2.4. Architecture

  • The multilayer cloud architecture of HPC Shelf for servicing applications comprises the three elements in Fig. 5: Frontend, Core and Backend.
  • The Frontend is SAFe (Shelf Application Framework) [1], a collection of classes and design patterns used by providers to build applications.
  • It supports SAFeSWL as a language for specifying parallel computing systems.
  • In turn, using the orchestration subset, the provider may program the workflow of a parallel computing system by specifying the order in which action names must be activated.
  • Applications access the services of the Core for resolving contextual contracts and deploying the selected components on virtual platforms.

2.5. Contextual contracts

  • Since both abstract components named Mapper and Reducer are derived from MRComputation, the context parameter function is used to specify the custom map and reduce functions that they will execute in particular MapReduce computations.
  • A contextual contract is an abstract component whose context parameters have particular execution context assumptions associated to each one of them.
  • In the classification phase, the list of candidate system components is ordered taking into account the best fulfillment of the contract requirements and the resource allocation policies of HPC Shelf.
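
To make the notion concrete, here is a minimal C# sketch of how a contextual contract might be represented. The type names (ContextParameter, ContextualContract) and the parameter values are hypothetical illustrations; the actual HPC Shelf representation differs.

    using System.Collections.Generic;

    // Hypothetical model: a contract is an abstract component plus
    // execution-context assumptions bound to its context parameters.
    public record ContextParameter(string Name, string Assumption);

    public record ContextualContract(
        string AbstractComponent,
        IReadOnlyList<ContextParameter> Parameters);

    public static class Contracts
    {
        // A Mapper contract: since Mapper derives from MRComputation,
        // "function" fixes the custom map function it will execute.
        public static ContextualContract WordCountMapper() =>
            new("Mapper", new List<ContextParameter>
            {
                new("function", "WordCountMap"),       // illustrative value
                new("platform", "cluster, nodes >= 4") // illustrative value
            });
    }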

3. The certification framework

  • For the purpose of leveraging component certification in HPC Shelf, a certification framework is introduced in this section.
  • It encompasses a set of component kinds, composition rules and design patterns.
  • They provide an environment where certification tools can be encapsulated into components, giving application providers and component developers some level of assurance that the components of parallel computing systems meet a predetermined set of requirements prior to their instantiation.
  • Each certifier associated with a certifiable component may impose its own set of obligations on compatible certifiable components, such as the use of certain programming languages, design patterns, code conventions, annotations, etc.
  • The service interface determines which kind of ad hoc properties are supported and how they are specified.
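
The sketch below captures, in hypothetical C# terms, the two ideas in the last two bullets: a certifier publishes obligations on compatible certifiable components, and a service interface through which ad hoc properties are supplied. None of these names come from the paper.

    using System.Collections.Generic;

    // Hypothetical service interface of a certifier (illustrative only).
    public interface ICertifierService
    {
        // Obligations imposed on compatible certifiable components:
        // host language, annotation style, code conventions, etc.
        IReadOnlyCollection<string> Obligations { get; }

        // Registers an ad hoc property in whatever notation this
        // certifier accepts (e.g., a temporal-logic formula or a
        // communication protocol description).
        void AddAdHocProperty(string name, string specification);
    }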

3.1. Parallel certification systems

  • Certifier components are implemented as parallel certification systems, comprising the following architectural elements, as depicted in Fig. 6: A set of tactical components; A certification-workflow component that orchestrates the tactical ones; A set of bindings, connecting the tactical components to the certification-workflow component.
  • The certification-workflow component performs a certification procedure on the certifiable components connected to the certifier.
  • Parallel certification systems are analogous to parallel computing systems, but aimed at certification purposes.
  • In turn, tactical components play the role of solution components.
  • For this reason, they must be seen as special kinds of virtual platforms on which the proof infrastructure is installed and ready to run verification tasks.

3.2. Tactical components

  • As stated earlier, a tactical component encapsulates a certification infrastructure comprising one or more certification tools.
  • Each tactical component exposes an action port with the action names perform, conclusive and inconclusive, the latter two being alternatives.
  • When the certification subroutine terminates, either conclusive or inconclusive may be activated by the tactical component.
  • In the current implementation, the contextual signature of Tactical, the component type from which specific tactical components are derived, is similar to that of EnvironmentPortCertification.
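
A minimal C# sketch of that action protocol follows, under the assumption (not stated in the paper) that the verdict is reported through a return value rather than through real action ports; all names are hypothetical.

    // Illustrative only: Perform runs the encapsulated tools; exactly one
    // of the alternative action names (conclusive/inconclusive) follows.
    public enum Verdict { Conclusive, Inconclusive }

    public abstract class TacticalComponentSketch
    {
        // Returns true/false when the tool decides the property,
        // or null when it times out or cannot decide.
        protected abstract bool? Verify(string componentArtifact);

        public Verdict Perform(string componentArtifact) =>
            Verify(componentArtifact) is null
                ? Verdict.Inconclusive
                : Verdict.Conclusive;
    }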

3.3. Certifier components

  • In the orchestration code, an activation of the action certify will instantiate a parallel certification system for each certifier, and these will certify the certifiable component in parallel.
  • Each one may be reused to certify all certifiable components associated with the same certifier, when their certify actions are activated.
  • After the certification procedure, a certifiable component is considered certified if all default, contractual, component and ad hoc properties have been checked by the certifier.
  • Regarding the action ports, the certification-workflow component has ports to be connected to the action ports of its tactical components.
  • The units may run on different data partitions and synchronize by exchanging messages in a discipline akin to the MPI programming model [15].
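The fan-out described above can be pictured with a short C# sketch, assuming a hypothetical ICertifier abstraction; the real certification-workflow component orchestrates action ports rather than tasks.

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;

    public interface ICertifier
    {
        // True if all default, contractual, component and ad hoc
        // properties were checked successfully.
        Task<bool> CertifyAsync(string component);
    }

    public static class CertifyAction
    {
        // Activating "certify" launches one parallel certification
        // system per certifier; the certifiers run concurrently.
        public static async Task<bool> RunAsync(
            string component, IEnumerable<ICertifier> certifiers)
        {
            bool[] verdicts = await Task.WhenAll(
                certifiers.Select(c => c.CertifyAsync(component)));
            return verdicts.All(ok => ok);
        }
    }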

4.1. Tactical components for C4

  • The verification of computation components may resort to two different classes of methods and tools.
  • The first class is based on deductive program verification, which partially automates axiomatic program verification based on some variant of Floyd-Hoare logic.
  • The alternative approach explores the space of reachable states of a system through model checking.

4.1.1. Deductive tactical components

  • Tactical components for deductive verification require the target component programs to be annotated with assertions in the style of the Floyd-Hoare logic or its extensions, namely, separation logic [16], for mutable data structures, Owicki-Gries reasoning [17], for shared-memory parallel programs, and Apt’s reasoning [18], for distributed-memory components.
  • Currently, only ParTypes [19] can verify C/MPI programs, annotated in the syntax of VCC [20], against a high-level communication protocol stored by the certifier as an ad hoc property.
  • In such a case, they are equipped with reasonably complex interfaces for editing, searching and choosing suitable proof procedures and heuristics.
  • Using this approach, the application may either automatically interact with the tactical component or require some intervention from the specialist user to proceed with the verification subroutine.
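
The paper's deductive tactical components consume VCC-annotated C/MPI code; as a loose C# analogy of the Floyd-Hoare annotation style, the sketch below uses the .NET Code Contracts API (System.Diagnostics.Contracts) to state a precondition and a postcondition of the kind a deductive verifier would discharge. The SafeMath/Divide names are hypothetical.

    using System.Diagnostics.Contracts;

    public static class SafeMath
    {
        public static int Divide(int dividend, int divisor)
        {
            // Precondition: {divisor != 0}
            Contract.Requires(divisor != 0);
            // Postcondition: result * divisor + dividend % divisor == dividend
            Contract.Ensures(Contract.Result<int>() * divisor
                             + dividend % divisor == dividend);
            return dividend / divisor;
        }
    }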

4.1.2. Model checking tactical components

  • Model checking provides a powerful alternative to deductive verification tools to establish properties of MPI programs.
  • In the context of the certification framework discussed in this article, the following tools were explored: ISP (In-situ Partial Order) [34] and CIVL (Concurrency Intermediate Verification Language) [35].
  • Both verify a fixed, although sufficiently expressive, set of safety properties.
  • The former handles deadlock absence, assertion violations, MPI object leaks, and communication races (i.e. unexpected communication matches) in components written in C, C++ or C#, carrying MPI/OpenMP directives.
  • CIVL, in turn, can also establish functional equivalence between programs and discharge verification conditions to the provers Z3, CVC3 and CVC4.
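
To illustrate the kind of safety violation these tools hunt for, the self-contained C# sketch below reproduces the classic "both sides receive first" deadlock. It is only an analogy: ISP and CIVL analyze MPI programs, not thread code like this.

    using System.Collections.Concurrent;
    using System.Threading;

    public static class DeadlockDemo
    {
        static readonly BlockingCollection<int> toA = new(), toB = new();

        public static void Main()
        {
            // Each thread posts a blocking receive before sending,
            // mirroring two MPI ranks whose MPI_Recv calls precede
            // any matching MPI_Send: a guaranteed deadlock.
            var a = new Thread(() => { int m = toA.Take(); toB.Add(m + 1); });
            var b = new Thread(() => { int m = toB.Take(); toA.Add(m + 1); });
            a.Start(); b.Start();
            a.Join(); b.Join(); // never returns: both threads block in Take()
        }
    }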

4.2. Contextual contracts and architecture

  • Thus, C4 certifiers may prescribe the host programming language on which the computation component is written, as well as the message passing library for communication between the units of the computation component.
  • Also, certifiers can determine whether or not the ad hoc properties are supported.
  • The tactical components of C4MPIComplex are ISP, JWFA and CZ, whose abstract components restrict the bounds of the context parameters of Tactical to define the interface types through which they talk to the certification-workflow component.
  • As in most scientific workflow management systems, such as Askalon [37], BPEL Sedna [38], Kepler [39], Pegasus [40], Taverna [41] and Triana [42], SAFeSWL workflows are represented by components and the execution dependencies among them, usually adopting abstract descriptions of components and abstracting away from the computing platforms on which they run.
  • At an appropriate time of the workflow execution, a resolution procedure may be triggered for discovering an appropriate component implementation, making it relevant to ensure that the activation of computational actions of components is made after their effective resolution.

5.2. Architecture and contextual contracts

  • SWC2 prescribes two default properties: deadlock absence; and obedience, for each component, to the protocol in which lifecycle actions must be activated, presented in Section 2.1.
  • It supports all the default properties prescribed for SWC2 certifiers, as well as ad hoc properties.
  • There are two concrete components of mCRL2Certifier.
  • mCRL2Certifier certifiers may be able to exploit parallelism by initiating different verification tasks on distinct processing nodes of the tactical component.
  • For simplifying formulas, mCRL2 allows the use of regular expressions over the set of actions as possible labels of both necessity and eventuality modalities.

5.3.1. The translation process

  • The translation process follows directly the operational rules (Fig. 10) defined for an abstract grammar of a formal specification of the orchestration subset of SAFeSWL (Fig. 9).
  • Rule finish indicates that an asynchronously activated action can actually occur, having its handle registered in F and emitting the action (a, h) to the system.
  • Rules select-left and select-right indicate the need for the creation of mCRL2 processes that control the state (enabled/disabled) of actions.
  • Rules repeat, continue and break indicate, respectively, the need to create an mCRL2 process that manages a repetition task in order to detect the need for a new iteration, the return to the beginning of the iteration, or the end of the iteration.
  • The first part of the conjunction states that a deploy may not be performed before a resolve.
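
This "no deploy before resolve" conjunct has a standard rendering as an mCRL2 regular formula; the following is a plausible reconstruction in LaTeX notation, not necessarily the exact formula used in the paper:

    % Hypothetical reconstruction; mCRL2 ASCII syntax: [ !resolve* . deploy ] false
    [\, (\overline{\mathit{resolve}})^{\ast} \cdot \mathit{deploy} \,]\, \mathit{false}

Read: every path that performs deploy after any number of non-resolve actions must lead to false, so no such path may exist.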

6. Case studies

  • Three case studies demonstrate the certification framework of HPC Shelf, as well as the use of C4 and SWC2 certifiers.
  • The workflow maintains three versions of each image, overlapping the center of the Pleiades cluster, each corresponding to a different color band: red, infrared and blue.
  • Fig. 12(a) presents the architecture of a single subworkflow.
  • To do this, the application provider must configure certification bindings between these component instances and one or more instances of C4MPISimple.
  • Fig. 15 reports execution times for this certification case study, varying the number of processing nodes and cores per node involved in the execution of the tactical component ISP.

6.2.1. The non-iterative system with three stages

  • They are the intermediate stages of a pipeline.
  • An intermediate component is required for communication between reducer_2 and application, since they are placed on distinct virtual platforms.
  • The workflow of the non-iterative system initially performs the lifecycle action activation sequence (resolve, deploy, instantiate, and run) for all components, because, in a pipeline pattern, they are required from the beginning of the computation.
  • The same applies to the computations and connectors that will be placed on these virtual platforms.
  • After all the iterations are terminated, the parallel activation completes and all components are released.

6.2.2. The iterative system with a single stage

  • In the iterative workflow (Fig. 17), the single stage consists of a shuffler and a pair of parallel reducers.
  • In turn, before the next iterations, the following code is executed to enable the collector facets that receive pairs from the reducers and disable the collector facet that receives pairs from source:

        <parallel>
          <sequence>
            <invoke port="task_shuffle_collector_active_status_0" action="CHANGE_STATUS_BEGIN" ...
  • Mappers and reducers receive chunks of input pairs in the first iteration (read_chunk/finish_chunk activation) and process them (invocation to the mapping or reduction function) when the action perform is activated.
  • The former describes precedences of execution between two distinct components or component actions.
  • Fig. 18 depicts the average certification times for both workflows by varying the number of units (processing nodes) of the tactical component from 1 to 16.

6.3. Parallel sorting

  • Parallel sorting is often used in HPC systems when dealing with huge amounts of data [49].
  • The contextual signature of Sorting declares a set of context parameters that may guide the choice of a sorting component that implements the supposedly best algorithm according to the contextual contract.
  • Sorting_place states whether internal or external sorting must be employed. The context parameters are described below.
  • Otherwise, it may employ a noncomparison-based algorithm.
  • In turn, number_nodes, multicore, and accelerator_type are so-called platform parameters, since they describe properties of the underlying parallel computing platform that must be taken into account in the component implementation.
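
An illustrative C# encoding of that contextual signature is sketched below, with parameter names taken from the text but hypothetical types and members.

    // Illustrative encoding of Sorting's contextual signature.
    public enum SortingPlace { Internal, External }
    public enum AcceleratorType { None, Gpu }

    public record SortingContract(
        SortingPlace SortingPlace,        // internal vs. external sorting
        bool ComparisonBased,             // comparison- vs. noncomparison-based
        int NumberNodes,                  // platform parameter: processing nodes
        bool Multicore,                   // platform parameter: multicore nodes
        AcceleratorType AcceleratorType); // platform parameter: accelerator

    // Example contract: internal, comparison-based sorting on four
    // multicore nodes without accelerators (values are illustrative):
    //   new SortingContract(SortingPlace.Internal, true, 4, true,
    //                       AcceleratorType.None)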

6.3.1. Certifying parallel sorting components

  • Let QuickSortImpl and MergeSortImpl be two concrete components of Sorting that implement parallel versions of the well-known Quicksort and Mergesort algorithms, respectively.
  • Also, virtual platforms containing 2 processing nodes have been chosen for all tactical components.
  • For both QuickSortImpl and MergeSortImpl, all default properties of C4MPIComplex have been proved successfully.
  • The parallel times calculated for this case study make it possible to conclude that, in general, the smallest times occur for tactical components with a single unit running on a processing node with many cores.

6.4. Discussion

  • The case studies with Montage, MapReduce and Integer Sorting are primarily aimed at demonstrating the feasibility of certifying components of distinct kinds using the certification framework of HPC Shelf.
  • Since all the certification processes involved in the case studies completed successfully, the experiment demonstrates this feasibility.
  • Therefore, it is important to emphasize that the experiments whose results are evaluated in this article do not have the ambition to constitute a definitive validation study of the certification framework of HPC Shelf.
  • The case studies have also shown how the inherent parallelism supported by the certification framework, using the parallel computing infrastructure of HPC Shelf itself, may be used to accelerate certification tasks, even if the underlying certification tools have not been developed with parallelism in mind, which is the case of the theorem provers and model checkers used in the experiments.
  • To reinforce this expectation, it is worth noting that, although the current implementation of the certification framework is not optimal with respect to performance, the certification times achieved in the experiments, ranging from 20 seconds to 12 minutes, are not influenced by possible implementation overheads.

7.1. Certification of software components

  • The certification of software components has been an active research area in component-based software engineering (CBSE) since the 1990s [3–5,7].
  • From the current literature, the authors define the certification of software components as the study and application of methods and techniques intended to provide a well-defined level of confidence that the components of a system meet a given set of requirements.
  • The literature does not mention other proposals of general-purpose certification artifacts in the context of CBHPC research that could be directly compared to the certification framework of HPC Shelf.
  • In such applications, incorrect results and execution failures may cause unsustainable increases in project costs and schedules.

7.2. Verification-as-a-service (VaaS)

  • As pointed out earlier, the kind of certification this paper focuses on is the verification of functional and behavioral properties of components of parallel computing systems in a cloud environment through formal methods, automated by deductive and model-checking tools.
  • The authors have systematically searched for related work on VaaS in the most comprehensive databases of scientific literature in computer science (IEEE, Scopus, ACM and Science Direct), applying the search string “(platform OR framework) AND service AND formal AND verification AND component AND cloud” to the title, abstract and keywords fields.
  • Most of the discarded papers do not propose platforms or frameworks for the intended purpose.
  • The column Total 1 represents the number of distinct papers found, i.e. after removing redundancies.
  • The following sections (7.2.1 and 7.2.2) describe the papers classified in these two groups, respectively.

7.2.1. Verification of cloud administration concerns

  • Evangelidis et al. propose a probabilistic verification scheme aimed at dynamically evaluating auto-scaling policies of IaaS and PaaS virtual machines in Amazon EC2 and Microsoft Azure [59].
  • For that, it applies a Markov model implemented in the PRISM model checker [60].
  • Zhou et al. propose a formal framework for resource provisioning as a service [61].
  • Di Cosmo et al. propose the Aeolus component model [63].
  • The architecture and methodology for enabling SDV (Static Driver Verifier) to operate in Azure, as well as the results of running SDV on single drivers and driver suites using various configurations of the cloud relative to a local machine, are reported.

7.2.2. Verification of functional requirements

  • Nezhad et al. propose COOL, a framework for provider-side design of cloud solutions based on formal methods and model-driven engineering [70].
  • Klai and Ochi address the problem of abstracting and verifying the correctness of integrating service-based business processes (SBPs) [74].
  • Skowyra et al. present Verificare, a verification platform for applications based on Software-Defined Networks (SDN) [82].
  • Three layers compose the framework: the graphical layer, which uses sequence diagrams for system modeling; the formal specification layer, which uses π-calculus to formalize the UML sequence diagrams; and the verification layer, in which π-calculus processes are verified by NuSMV.
  • It parallelizes symbolic execution, a popular model checking technique, to run on large shared-nothing clusters of computers, such as Amazon EC2.

7.3. Discussion

  • In comparison with the related work described above, the certification framework of HPC Shelf has the following distinguishing characteristics:
  • It is a general-purpose framework that can be used for automatic certification of a wide range of requirements, both functional and non-functional, whereas the other works each address a particular requirement.
  • For that, it may employ the same parallel computing infrastructure where certifiable components perform their tasks.
  • In turn, certifier selection is a component developer responsibility, using contextual contracts.
  • Also, when designing certifier components, certification authorities may provide high-level interfaces to facilitate the interaction of application providers and component developers with the underlying verification tools.


Citations
Journal ArticleDOI
TL;DR: In this paper , a multi-dimensional certification scheme for service selection in cloud computing is presented, where additional dimensions model relevant aspects (e.g., programming languages and development processes) that significantly contribute to the quality of the results.
Abstract: Cloud computing has deeply changed how distributed systems are engineered, leading to the proliferation of ever-evolving and complex environments where legacy systems, microservices, and nanoservices coexist. These services can severely impact individuals’ security and safety, introducing the need for solutions that properly assess and verify their correct behavior. Security assurance stands out as the way to address such pressing needs, with certification techniques being used to certify that a given service holds some non-functional properties. However, existing techniques build their evaluation on software artifacts only, falling short of providing a thorough evaluation of the non-functional properties under certification. In this paper, we present a multi-dimensional certification scheme where additional dimensions model relevant aspects (e.g., programming languages and development processes) that significantly contribute to the quality of the certification results. Our multi-dimensional certification enables a new generation of service selection approaches capable of handling a variety of user requirements over the full system life cycle, from system development to its operation and maintenance. The performance and the quality of our approach are thoroughly evaluated in several experiments.

1 citation

Journal ArticleDOI
10 Jun 2020
TL;DR: The need to incorporate custom components into enterprise platforms whose freely available editions lack features reserved for premium or paid editions is a hot topic of discussion among programmers, as mentioned in this paper.
Abstract: There is a need to incorporate custom components into enterprise platforms whose general-public editions are free and limited in features that only the premium or paid editions provide. This has become a source of discomfort for software developers in public and private companies, because developing their applications with these IT solutions leaves them restricted when extending functionality according to the business model. Consequently, with the worldwide economic crisis, organizations opt for free versions to avoid high licensing costs, but this forces programmers to develop software in unconventional ways to extend its functionality.

1 citation

Proceedings Article
01 Jan 2015
TL;DR: In this article, the authors propose a bottom-up approach to check the correct interaction between different service-based business processes distributed over a cloud environment and which may be provided by various organizations.
Abstract: Cloud environments are being increasingly used for deploying and executing business processes, particularly service-based business processes (SBPs). In this paper, we propose a bottom-up approach to check the correct interaction between different SBPs distributed over a cloud environment, which may be provided by various organizations. The whole system's model being unavailable, a top-down analysis approach is not appropriate. To check the correctness of the composition of several SBPs communicating asynchronously and sharing resources (hardware, platform, and software), we consider temporal properties that can be expressed in LTL logic. Each part of the whole composite SBP exposes its abstract model, represented by a Symbolic Observation Graph (SOG), to allow correct collaboration with possible partners in the cloud. The SOG is adapted in order to reduce the verification of the entire composite model to the verification of the composition of the SOG-based abstractions.
References
Journal ArticleDOI
TL;DR: A distributed approach to verification of computation tree logic formulas on very large state spaces is described, which exploits and integrates the authors' parametric state-space builder, designed to ease the adoption of 'big data' platforms.
Abstract: The recent extensive availability of 'cloud' computing platforms is very appealing for the formal verification community. In fact, these platforms represent a great opportunity to run massively parallel jobs and analyze 'big data' problems, although classical formal verification tools and techniques must undergo a deep technological transformation to take advantage of the available powerful architectures. A distributed approach to verification of computation tree logic formulas on very large state spaces is described. The approach exploits and integrates our parametric state-space builder, designed to ease the adoption of 'big data' platforms. The whole framework adopts a MapReduce approach as the core computational model and can be tailored to different modeling formalisms. This paper includes proofs of correctness, a short theoretical discussion about complexity, and reports a practical experience with some benchmark Petri net models. The outcomes of several tests are presented, thus showing the convenience of the proposed approach. Copyright © 2015 John Wiley & Sons, Ltd.

8 citations


"A component-based framework for cer..." refers methods in this paper

  • ...propose a distributed framework for verifying CTL formulas on a cloud, based on a MapReduce algorithm [88]....

    [...]

Journal ArticleDOI
TL;DR: Strong evidence is presented about the efficacy and efficiency of HPE (Hash Programming Environment), a CBHPC platform that provides full support for parallel programming, in the development, deployment and execution of numerical simulation code on cluster computing platforms.

8 citations


"A component-based framework for cer..." refers methods in this paper

  • ...HTS (Hash Type System) [12] is a type system firstly introduced by HPE (Hash Programming Environment) [8,13], the first reference implementation of the Hash component model, for the following purposes: • The separation between specification (interface) and implementation of components, for promoting modularity and safety; • The support of alternative implementations of a given component specification for different execution contexts, where an execution context is defined by the requirements of the host application and the architectural characteristics of the target parallel computing platform; • The dynamic selection among a set of alternative component implementations according to the execution context....

    [...]


Proceedings ArticleDOI
01 Nov 2015
TL;DR: This research designs a formal procedure for a service broker to present the worst scenario for users nonfunctional besides functional requirements and presents the end-result analysis of QoS measures such as the probability of success, price and average service time by implementing the formal framework.
Abstract: Web services have recently emerged as the technology of choice for realizing Service-Oriented Computing (SOC), a significant computing paradigm. Achieving customer satisfaction and trust is challenging for web service providers, and the attainment of non-functional requirements (aka QoS measures) remains a critical research challenge in realizing Web Service Composition (WSC). The purpose of this research is to design a formal procedure for a service broker to present the worst scenario for users' non-functional as well as functional requirements. We formally address the workflow-based abstract-level description of web services coordination through a formal framework of a service broker by composing the functional and non-functional requirements. The syntax of the formal framework is defined and analyzed using π-calculus, while the semantic analysis of the framework is carried out by considering a case study of a Travel Agent (TA) system. Finally, we present the end-result analysis of QoS measures such as the probability of success, price and average service time by implementing the formal framework.

6 citations


"A component-based framework for cer..." refers methods in this paper

  • ...produced a formal framework for a service broker [76], helping to compose formally described QoS metrics by following the workflow-based nature of web services composition....

    [...]

Journal ArticleDOI
TL;DR: A new specification for the observational determinism security property in linear temporal logic is proposed and a general method to create the appropriate program model using the self-composition approach is presented.
Abstract: Observational determinism is a property that ensures confidentiality in concurrent programs. It conveys that public variables are independent of private variables during the execution of programs, and of the scheduling policy of threads. Different definitions for observational determinism have been proposed. On the other hand, observational determinism is not a standard property and it should be checked over two or more executions of a program. The self-composition approach allows comparing two different copies of a program using a single formula. In this paper, we propose a new specification of the observational determinism security property in linear temporal logic. We also present a general method to create the appropriate program model using the self-composition approach. Both the program model and the observational determinism property are encoded as embedded C code in PROMELA using the SPIN model checker. The paper also discusses a method for the instrumentation of PROMELA code in order to encode the program model for specifying the observational determinism security property.

6 citations

Book ChapterDOI
01 Jan 2017
TL;DR: It is demonstrated that parallel deterministic sample sort for GPU (GPU Bucket Sort) is not only considerably faster than the best comparison-based sorting algorithm for GPUs (Thrust Merge) but also as fast as randomized samplesort forGPU (GPU Sample Sort).
Abstract: Selim Akl has been a groundbreaking pioneer in the field of parallel sorting algorithms. His ‘Parallel Sorting Algorithms’ book [12], published in 1985, has been a standard text for researchers and students. Here we discuss recent advances in parallel sorting methods for many-core GPUs. We demonstrate that parallel deterministic sample sort for GPUs (GPU Bucket Sort) is not only considerably faster than the best comparison-based sorting algorithm for GPUs (Thrust Merge) but also as fast as randomized sample sort for GPUs (GPU Sample Sort). However, deterministic sample sort has the advantage that bucket sizes are guaranteed, and therefore its running time does not have the input-data-dependent fluctuations that can occur for randomized sample sort.

6 citations


Additional excerpts

  • ...Consider the following contextual signature of an abstract component called Sorting, representing a family of concrete components, each one representing a particular parallel implementation of a well-known sorting algorithm, such as Quicksort, Mergesort, Bitonic Sort, Heapsort, Radix Sort, and so on [49,53,54]:...

    [...]

Frequently Asked Questions (1)
Q1. What are the contributions in "A component-based framework for certification of components in a cloud of hpc services" ?

In this paper, the authors propose a general-purpose, component-based certification framework for HPC Shelf, a cloud of HPC services, through which certifier components provide higher confidence that the components of parallel computing systems behave as expected according to the requirements expressed in their contracts.