
Showing papers on "Software portability published in 2017"


Journal ArticleDOI
TL;DR: This work introduces a framework for creating, testing, versioning and archiving portable applications for analyzing neuroimaging data organized and described in compliance with the Brain Imaging Data Structure (BIDS).
Abstract: The rate of progress in human neurosciences is limited by the inability to easily apply a wide range of analysis methods to the plethora of different datasets acquired in labs around the world. In this work, we introduce a framework for creating, testing, versioning and archiving portable applications for analyzing neuroimaging data organized and described in compliance with the Brain Imaging Data Structure (BIDS). The portability of these applications (BIDS Apps) is achieved by using container technologies that encapsulate all binary and other dependencies in one convenient package. BIDS Apps run on all three major operating systems with no need for complex setup and configuration and, thanks to the comprehensiveness of the BIDS standard, they require little manual user input. Previous containerized data processing solutions were limited to single-user environments and were not compatible with most multi-tenant High Performance Computing systems. BIDS Apps overcome this limitation by taking advantage of the Singularity container technology. As a proof of concept, this work is accompanied by 22 ready-to-use BIDS Apps, packaging a diverse set of commonly used neuroimaging algorithms.

235 citations
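A hedged illustration of the invocation convention described in the abstract above: BIDS Apps are published as container images and take a BIDS dataset directory, an output directory, and an analysis level as positional arguments. The sketch below drives such a container from Python; the image name and local paths are placeholders, not taken from the paper.

```python
# Hypothetical sketch: invoking a BIDS App container from Python.
# Assumes Docker is installed; the image name and the host paths are
# placeholders. The positional arguments (dataset dir, output dir,
# analysis level) follow the BIDS Apps command-line convention.
import subprocess

bids_dir = "/data/ds001"                 # BIDS-formatted dataset (assumed path)
out_dir = "/data/ds001/derivatives"      # where the app writes results
image = "bids/example:latest"            # placeholder BIDS App image

subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{bids_dir}:/bids_dataset:ro",
        "-v", f"{out_dir}:/outputs",
        image,
        "/bids_dataset", "/outputs", "participant",
    ],
    check=True,
)
```

On a multi-tenant HPC system the same image would typically be executed through Singularity rather than Docker, which is the scenario the paper targets.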


Journal ArticleDOI
TL;DR: The aim of this article is to propose a first systematic interpretation of this new right by suggesting a pragmatic and extensive approach, in particular exploiting as far as possible the interrelationship between this new legal provision, the Digital Single Market, and the fundamental rights of digital users.

137 citations


Proceedings ArticleDOI
04 Feb 2017
TL;DR: This paper describes how programs in the Lift IR, a new data-parallel IR that encodes OpenCL-specific constructs as functional patterns, are compiled into efficient OpenCL code; the IR is flexible enough to express GPU programs with complex optimizations, achieving performance on par with manually optimized code.
Abstract: Parallel patterns (e.g., map, reduce) have gained traction as an abstraction for targeting parallel accelerators and are a promising answer to the performance portability problem. However, compiling high-level programs into efficient low-level parallel code is challenging. Current approaches start from a high-level parallel IR and proceed to emit GPU code directly in one big step. Fixed strategies are used to optimize and map parallelism by exploiting properties of a particular GPU generation, leading to performance portability issues. We introduce the Lift IR, a new data-parallel IR which encodes OpenCL-specific constructs as functional patterns. Our prior work has shown that this functional nature simplifies the exploration of optimizations and mapping of parallelism from portable high-level programs using rewrite rules. This paper describes how Lift IR programs are compiled into efficient OpenCL code. This is non-trivial as many performance-sensitive details such as memory allocation, array accesses or synchronization are not explicitly represented in the Lift IR. We present techniques which overcome this challenge by exploiting the patterns' high-level semantics. Our evaluation shows that the Lift IR is flexible enough to express GPU programs with complex optimizations achieving performance on par with manually optimized code.

123 citations
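As a conceptual aside, the parallel patterns the Lift work builds on can be written in ordinary Python; the snippet below only illustrates map/zip/reduce composition and is not Lift syntax, nor does it capture the OpenCL-specific patterns (work-group mapping, memory spaces) that the actual IR encodes.

```python
# Conceptual sketch only: a dot product expressed with the map and reduce
# patterns. A pattern-based compiler such as Lift is free to map these
# patterns onto GPU parallelism and memory hierarchies when lowering to OpenCL.
from functools import reduce
from operator import add, mul

def dot(xs, ys):
    # zip + map expresses the elementwise multiply, reduce the summation.
    return reduce(add, map(lambda pair: mul(*pair), zip(xs, ys)), 0.0)

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```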


Journal ArticleDOI
TL;DR: A new programming language for image processing pipelines, called Halide, that separates the algorithm from its schedule, and is expressive enough to describe organizations that match or outperform state-of-the-art hand-written implementations of many computational photography and computer vision algorithms.
Abstract: Writing high-performance code on modern machines requires not just locally optimizing inner loops, but globally reorganizing computations to exploit parallelism and locality---doing things such as tiling and blocking whole pipelines to fit in cache. This is especially true for image processing pipelines, where individual stages do much too little work to amortize the cost of loading and storing results to and from off-chip memory. As a result, the performance difference between a naive implementation of a pipeline and one globally optimized for parallelism and locality is often an order of magnitude. However, using existing programming tools, writing high-performance image processing code requires sacrificing simplicity, portability, and modularity. We argue that this is because traditional programming models conflate the computations defining the algorithm with decisions about intermediate storage and the order of computation, which we call the schedule. We propose a new programming language for image processing pipelines, called Halide, that separates the algorithm from its schedule. Programmers can change the schedule to express many possible organizations of a single algorithm. The Halide compiler then synthesizes a globally combined loop nest for an entire algorithm, given a schedule. Halide models a space of schedules which is expressive enough to describe organizations that match or outperform state-of-the-art hand-written implementations of many computational photography and computer vision algorithms. Its model is simple enough to do so often in only a few lines of code, and small changes generate efficient implementations for x86, ARM, Graphics Processors (GPUs), and specialized image processors, all from a single algorithm. Halide has been public and open source for over four years, during which it has been used by hundreds of programmers to deploy code to tens of thousands of servers and hundreds of millions of phones, processing billions of images every day.

123 citations
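Halide itself is a language embedded in C++, but the algorithm/schedule separation described above can be hinted at with a plain-Python sketch: the same pointwise definition of a 1D blur is executed under two different traversal orders. This is only an analogy under stated assumptions, not Halide code.

```python
# Conceptual sketch (not Halide): the "algorithm" is a pure function of the
# output index; the "schedules" differ only in how the output is traversed.
def blur_at(inp, x):
    # Three-point blur with clamped boundaries; defines *what* is computed.
    return (inp[max(x - 1, 0)] + inp[x] + inp[min(x + 1, len(inp) - 1)]) / 3.0

def schedule_naive(inp):
    # Straightforward left-to-right traversal.
    return [blur_at(inp, x) for x in range(len(inp))]

def schedule_tiled(inp, tile=4):
    # Same results, different traversal: tile-by-tile, the kind of reordering
    # (tiling, fusion, vectorization) a Halide schedule expresses.
    out = [0.0] * len(inp)
    for start in range(0, len(inp), tile):
        for x in range(start, min(start + tile, len(inp))):
            out[x] = blur_at(inp, x)
    return out

data = [float(i) for i in range(10)]
assert schedule_naive(data) == schedule_tiled(data)
```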


Proceedings ArticleDOI
12 Nov 2017
TL;DR: Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources by using the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources.
Abstract: Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in just 800 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.

105 citations


Journal ArticleDOI
TL;DR: A subjective evaluation of AEPS’s effectiveness as an educational tool shows that the proposed platform not only promotes the students’ learning interest and practical ability but also consolidates their understanding and impression of theoretical concepts.
Abstract: With the purpose of further mastering and grasping the course of speech signal processing, a novel Android-based, mobile-assisted educational platform (AEPS) is proposed in this paper. The goal of this work was to design AEPS as an educational signal-processing auxiliary system by simulating signal analysis methods commonly used in speech signal processing and bridging the gap for the transition from undergraduate study to industry practice or academic research. The educational platform is presented in a highly intuitive, easy-to-interpret and strongly maneuverable graphical user interface. It also has the characteristics of high portability, strong affordability, and easy adoptability for application extension and popularization. Through an intuitive user interface, rich visual information, and extensive hands-on experiences, it greatly facilitates authentic, interactive, and creative learning by students. This paper details a subjective evaluation of AEPS's effectiveness as an educational tool. The re...

91 citations


Proceedings ArticleDOI
18 Apr 2017
TL;DR: This paper argues why new solutions to performance engineering for microservices are needed, identifies open issues, and outlines possible research directions with regard to performance-aware testing, monitoring, and modeling of microservices.
Abstract: Microservices complement approaches like DevOps and continuous delivery in terms of software architecture. Along with this architectural style, several important deployment technologies, such as container-based virtualization and container orchestration solutions, have emerged. These technologies make it possible to efficiently exploit cloud platforms, providing a high degree of scalability, availability, and portability for microservices. Despite the obvious importance of a sufficient level of performance, there is still a lack of performance engineering approaches explicitly taking into account the particularities of microservices. In this paper, we argue why new solutions to performance engineering for microservices are needed. Furthermore, we identify open issues and outline possible research directions with regard to performance-aware testing, monitoring, and modeling of microservices.

87 citations


Proceedings ArticleDOI
30 Oct 2017
TL;DR: Using the SUPERCOP framework, this work evaluates the Jasmin compiler on representative cryptographic routines and concludes that the code generated by the compiler is as efficient as fast, hand-crafted implementations.
Abstract: Jasmin is a framework for developing high-speed and high-assurance cryptographic software. The framework is structured around the Jasmin programming language and its compiler. The language is designed for enhancing portability of programs and for simplifying verification tasks. The compiler is designed to achieve predictability and efficiency of the output code (currently limited to x64 platforms), and is formally verified in the Coq proof assistant. Using the SUPERCOP framework, we evaluate the Jasmin compiler on representative cryptographic routines and conclude that the code generated by the compiler is as efficient as fast, hand-crafted implementations. Moreover, the framework includes highly automated tools for proving memory safety and constant-time security (for protecting against cache-based timing attacks). We also demonstrate the effectiveness of the verification tools on a large set of cryptographic routines.

85 citations


Journal ArticleDOI
TL;DR: Ongoing efforts to integrate vision-based measurement and control, augmented reality (AR), and multi-touch interaction on mobile devices in the development of Mixed-Reality Learning Environments (MRLE) that enhance interactions with laboratory test-beds for science and engineering education are discussed.
Abstract: Even as mobile devices have become increasingly powerful and popular among learners and instructors alike, research involving their comprehensive integration into educational laboratory activities remains largely unexplored. This paper discusses efforts to integrate vision-based measurement and control, augmented reality (AR), and multi-touch interaction on mobile devices in the development of Mixed-Reality Learning Environments (MRLE) that enhance interactions with laboratory test-beds for science and engineering education. A learner points her device at a laboratory test-bed fitted with visual markers while a mobile application supplies a live view of the experiment augmented with interactive media that aid in the visualization of concepts and promote learner engagement. As the learner manipulates the augmented media, her gestures are mapped to commands that alter the behavior of the test-bed on the fly. Running in the background of the mobile application are algorithms performing vision-based estimation and wireless control of the test-bed. In this way, the sensing, storage, computation, and communication (SSCC) capabilities of mobile devices are leveraged to relieve the need for laboratory-grade equipment, improving the cost-effectiveness and portability of platforms to conduct hands-on laboratories. We hypothesize that students using the MRLE platform demonstrate improvement in their knowledge of dynamic systems and control concepts and have generally favorable experiences using the platform. To validate the hypotheses concerning the educational effectiveness and user experience of the MRLEs, an evaluation was conducted with two classes of undergraduate students using an illustrative platform incorporating a tablet computer and motor test-bed to teach concepts of dynamic systems and control. Results of the evaluation validate the hypotheses. The benefits and drawbacks of the MRLEs observed throughout the study are discussed with respect to the traditional hands-on, virtual, and remote laboratory formats. Highlights: mobile devices and test-beds can be integrated according to a novel lab education paradigm; vision-based measurement and control, AR, and touchscreen enhance lab interactions; the proposed paradigm can offer the benefits of hands-on, virtual, and remote labs; an implementation is developed using an iPad and a motor test-bed to teach control; evaluation with students validates the implementation's educational effectiveness.

77 citations


Journal ArticleDOI
Asif Khan1
TL;DR: This paper addresses a set of capabilities required of a container orchestration platform to embody the design principles illustrated by twelve-factor app design, and provides a non-exhaustive and prescriptive guide to identifying and implementing key mechanisms required in a container orchestration platform.
Abstract: As compute evolves from bare metal to virtualized environments to containers towards serverless, the efficiency gains have enabled a wide variety of use cases. Organizations have used containers to run long-running services, batch processing at scale, control planes, Internet of Things, and Artificial Intelligence workloads. Further, methodologies for software as a service, such as the twelve-factor app, emphasize a clean contract with the underlying operating system and maximum portability between execution environments. In this paper, we address a set of capabilities required of a container orchestration platform to embody the design principles illustrated by twelve-factor app design. This paper also provides a non-exhaustive and prescriptive guide to identifying and implementing key mechanisms required in a container orchestration platform. We will cover capabilities such as cluster state management and scheduling, high availability and fault tolerance, security, networking, service discovery, continuous deployment, monitoring, and governance.

71 citations


Journal ArticleDOI
TL;DR: An Augmented and Virtual Reality (AR/VR) based IoT prototype system is presented to improve safety, maintain availability, reduce errors and decrease the time needed for scheduled or ad hoc interventions in extreme work environments.

Journal ArticleDOI
05 Sep 2017
TL;DR: A systematic overview of AR in engineering analysis and simulation is provided with respect to its pros and cons, as well as its suitability to particular types of applications.
Abstract: Augmented reality (AR) has recently become a worldwide research topic. AR technology renders intuitive computer-generated contents on users' physical surroundings. To improve process efficiency and productivity, researchers and developers have paid increasing attention to AR applications in engineering analysis and simulation. The integration of AR with numerical simulation, such as the finite element method, provides a cognitive and scientific way for users to analyze practical problems. By incorporating scientific visualization technologies, an AR-based system superimposes engineering analysis and simulation results directly on real-world objects. Engineering analysis and simulation involving diverse types of data are normally processed using specific computer software. Correct and effective visualization of these data using an AR platform can reduce misinterpretation in spatial and logical aspects. Moreover, the tracking performance of AR platforms in engineering analysis and simulation is crucial, as it influences the overall user experience. The operating environment of the AR platforms requires robust tracking performance to deliver stable and accurate information to the users. In addition, over the past several decades, AR has undergone a transition from desktop to mobile computing. The portability and propagation of mobile platforms have provided engineers with convenient access to relevant information in situ. However, the on-site working environment imposes constraints on the development of mobile AR-based systems. This paper aims to provide a systematic overview of AR in engineering analysis and simulation. The visualization and tracking techniques, as well as the implementation on mobile platforms, are discussed. Each technique is analyzed with respect to its pros and cons, as well as its suitability to particular types of applications.

Book ChapterDOI
01 Jan 2017
TL;DR: The technical features of the Satellite CCRMA platform are described, and the platform is compared with personal computer-based systems used in the past as well as emerging smart phone-based platforms.
Abstract: This paper describes a new Beagle Board-based platform for teaching and practicing interaction design for musical applications. The migration from desktop and laptop computer-based sound synthesis to a compact and integrated control, computation and sound generation platform has enormous potential to widen the range of computer music instruments and installations that can be designed, and improves the portability, autonomy, extensibility and longevity of designed systems. We describe the technical features of the Satellite CCRMA platform and contrast it with personal computer-based systems used in the past as well as emerging smart phone-based platforms. The advantages and trade-offs of the new platform are considered, and some project work is described.

Proceedings ArticleDOI
18 Apr 2017
TL;DR: This paper provides a general formulation of the Elastic provisioning of Virtual machines for Container Deployment (for short, EVCD) as an Integer Linear Programming problem, which takes explicitly into account the heterogeneity of container requirements and virtual machine resources.
Abstract: Docker containers make it possible to package an application together with all its dependencies and easily run it in any environment. Thanks to their ease of use and portability, containers are gaining increasing interest and promise to change the way Cloud platforms are designed and managed. For their execution in the Cloud, we need to solve the container deployment problem, which deals with the identification of an elastic set of computing machines that can host and execute those containers, while considering the diversity of their requirements. In this paper, we provide a general formulation of the Elastic provisioning of Virtual machines for Container Deployment (for short, EVCD) as an Integer Linear Programming problem, which takes explicitly into account the heterogeneity of container requirements and virtual machine resources. Besides optimizing multiple QoS metrics, EVCD can reallocate containers at runtime, when a QoS improvement can be achieved. Using the proposed formulation as a benchmark, we evaluate two well-known heuristics, i.e., greedy first-fit and round-robin, that are usually adopted for solving the container deployment problem.
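For readers unfamiliar with this style of formulation, the sketch below states a deliberately tiny container-to-VM assignment problem as an integer program using the PuLP Python package. The variables, capacities, and the objective (minimize the number of active VMs) are invented simplifications; the actual EVCD formulation optimizes multiple QoS metrics and supports runtime reallocation.

```python
# Hypothetical, simplified sketch in the spirit of EVCD: place each container
# on exactly one VM, respect CPU/memory capacities, minimize VMs switched on.
# Requires the PuLP package; all numbers are made up for illustration.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

containers = {"c1": (1.0, 512), "c2": (2.0, 1024), "c3": (0.5, 256)}  # (vCPU, MB)
vms = {"vm1": (2.0, 2048), "vm2": (4.0, 4096)}                        # (vCPU, MB)

x = {(c, v): LpVariable(f"x_{c}_{v}", cat=LpBinary) for c in containers for v in vms}
y = {v: LpVariable(f"y_{v}", cat=LpBinary) for v in vms}              # VM powered on?

prob = LpProblem("evcd_sketch", LpMinimize)
prob += lpSum(y[v] for v in vms)                         # objective: fewest active VMs
for c in containers:                                     # each container placed exactly once
    prob += lpSum(x[c, v] for v in vms) == 1
for v, (cpu_cap, mem_cap) in vms.items():                # capacity constraints per VM
    prob += lpSum(containers[c][0] * x[c, v] for c in containers) <= cpu_cap * y[v]
    prob += lpSum(containers[c][1] * x[c, v] for c in containers) <= mem_cap * y[v]

prob.solve()
print({cv: var.value() for cv, var in x.items() if var.value() == 1})
```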

Journal ArticleDOI
TL;DR: A methodology based on large community efforts in engineering and standardisation is outlined, which will depend on identifying a taxonomy of key activities (perhaps based on existing efforts to develop domain-specific languages, identify common patterns in weather and climate codes, and develop community approaches to commonly needed tools and libraries) and then collaboratively building up those key components.
Abstract: Weather and climate models are complex pieces of software which include many individual components, each of which is evolving under pressure to exploit advances in computing to enhance some combination of a range of possible improvements (higher spatio-temporal resolution, increased fidelity in terms of resolved processes, more quantification of uncertainty, etc.). However, after many years of a relatively stable computing environment with little choice in processing architecture or programming paradigm (basically X86 processors using MPI for parallelism), the existing menu of processor choices includes significant diversity, and more is on the horizon. This computational diversity, coupled with ever increasing software complexity, leads to the very real possibility that weather and climate modelling will arrive at a chasm which will separate scientific aspiration from our ability to develop and/or rapidly adapt codes to the available hardware. In this paper we review the hardware and software trends which are leading us towards this chasm, before describing current progress in addressing some of the tools which we may be able to use to bridge the chasm. This brief introduction to current tools and plans is followed by a discussion outlining the scientific requirements for quality model codes which have satisfactory performance and portability, while simultaneously supporting productive scientific evolution. We assert that the existing method of incremental model improvements employing small steps which adjust to the changing hardware environment is likely to be inadequate for crossing the chasm between aspiration and hardware at a satisfactory pace, in part because institutions cannot have all the relevant expertise in house. Instead, we outline a methodology based on large community efforts in engineering and standardisation, which will depend on identifying a taxonomy of key activities – perhaps based on existing efforts to develop domain-specific languages, identify common patterns in weather and climate codes, and develop community approaches to commonly needed tools and libraries – and then collaboratively building up those key components. Such a collaborative approach will depend on institutions, projects, and individuals adopting new interdependencies and ways of working.

Proceedings ArticleDOI
18 Jun 2017
TL;DR: FlexCL is presented, an analytical performance model for OpenCL workloads on flexible FPGAs that estimates the overall performance by tightly coupling the off-chip global memory and on-chip computation models based on the communication mode.
Abstract: The recent adoption of the OpenCL programming model by FPGA vendors has realized the function portability of OpenCL workloads on FPGAs. However, the poor performance portability prevents its wide adoption. To harness the power of FPGAs using the OpenCL programming model, it is advantageous to design an analytical performance model that estimates the performance of OpenCL workloads on FPGAs and provides insights into the performance bottlenecks of the OpenCL model on the FPGA architecture. To this end, this paper presents FlexCL, an analytical performance model for OpenCL workloads on flexible FPGAs. FlexCL estimates the overall performance by tightly coupling the off-chip global memory and on-chip computation models based on the communication mode. Experiments demonstrate that, with respect to RTL-based implementations, the average absolute error of FlexCL is 9.5% and 8.7% for the Rodinia and PolyBench suites, respectively. Moreover, FlexCL enables rapid exploration of the design space within seconds instead of hours or days.
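The core idea of coupling an off-chip memory model with an on-chip computation model can be conveyed in a few lines. The sketch below is a roofline-style simplification written for illustration; it is not the FlexCL model, whose treatment of communication modes and pipelining is far more detailed.

```python
# Hypothetical sketch of an analytical kernel-time estimate: take the larger
# of compute time and memory-transfer time when they overlap, or their sum
# when the communication mode serializes them.
def estimate_kernel_time(ops, bytes_moved, compute_ops_per_s, mem_bytes_per_s,
                         overlapped=True):
    t_compute = ops / compute_ops_per_s          # on-chip computation model
    t_memory = bytes_moved / mem_bytes_per_s     # off-chip global memory model
    return max(t_compute, t_memory) if overlapped else t_compute + t_memory

# Illustrative numbers only: 1e9 ops, 400 MB of traffic, 100 GOP/s, 10 GB/s.
print(estimate_kernel_time(1e9, 400e6, 100e9, 10e9))   # memory-bound: 0.04 s
```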

Proceedings ArticleDOI
01 Sep 2017
TL;DR: An FPGA implementation of CNN designed for addressing portability and power efficiency is presented, showing that the proposed implementation is as efficient as a general-purpose 16-core CPU, and almost 15 times faster than a SoC GPU for mobile applications.
Abstract: Convolutional Neural Networks (CNNs) allow fast and precise image recognition. Nowadays this capability is highly requested in the embedded system domain for video processing applications such as video surveillance and homeland security. Moreover, with the increasing requirement of portable and ubiquitous processing, power consumption is a key issue to be accounted for. In this paper, we present an FPGA implementation of CNN designed for addressing portability and power efficiency. Performance characterization results show that the proposed implementation is as efficient as a general-purpose 16-core CPU, and almost 15 times faster than a SoC GPU for mobile applications. Moreover, external memory footprint is reduced by 84% with respect to a standard CNN software application.

Journal ArticleDOI
08 Mar 2017
TL;DR: Boris Groysberg, in his book Chasing Stars, tries to come up with an answer for the key question: ‘Are employees’ talents and skills portable across employers so that the employee performance would remain constant even after a change of firms?
Abstract: In the current scenario, companies are progressing by employing the best talent for competing and succeeding in business. Several organizations are trying to poach star performers by enticing them. While this may not be the best practice, there is also a risk for the company that instead of continuing to excel, the star employee might turn out to be a comet, fading out in a new setting. However, star performers believe that they are creative independent resources whose abilities and skills can be easily transferred from one firm to another. Boris Groysberg, in his book Chasing Stars, tries to come up with an answer for the key question: ‘Are employees’ talents and skills portable across employers so that the employee performance would remain constant even after a change of firms?’

Journal ArticleDOI
TL;DR: In this article, the authors present a remote monitoring platform (RMP) to monitor an experimental smart microgrid (SMG) that integrates renewable energy sources (RESs) (solar and wind) and hydrogen to operate in an isolated regime.


Journal ArticleDOI
TL;DR: A uniform, integrated, machine-readable, semantic representation of cloud services, patterns, appliances and their compositions is proposed, using semantic models and automatic reasoning to enhance portability and interoperability when multiple platforms are involved.
Abstract: During the past years the Cloud Computing offer has exponentially grown, with new Cloud providers, platforms and services being introduced in the IT market. The extreme variety of services, often providing non-uniform and incompatible interfaces, makes it hard for customers to decide how to develop, or even worse to migrate, their own application into the Cloud. This situation can only get worse when customers want to exploit services from different providers, because of the portability and interoperability issues that often arise. In this paper we propose a uniform, integrated, machine-readable, semantic representation of cloud services, patterns, appliances and their compositions. Our approach aims at supporting the development of new applications for the Cloud environment, using semantic models and automatic reasoning to enhance portability and interoperability when multiple platforms are involved. In particular, the proposed reasoning procedure makes it possible to perform automatic discovery of Cloud services and Appliances; map between agnostic and vendor-dependent Cloud Patterns and Services; and automatically enrich the semantic knowledge base.

Journal ArticleDOI
TL;DR: SoSoC is proposed, a service-oriented system-on-chip framework that integrates both embedded processors and software-defined hardware accelerators as computing services on a single chip and outperforms the state-of-the-art literature with great flexibility.
Abstract: The integration of software services-oriented architecture (SOA) and hardware multiprocessor system-on-chip (MPSoC) has been pursued for several years. However, designing and implementing a service-oriented system for diverse applications on a single chip has posed significant challenges due to the heterogeneous architectures, programming interfaces, and software tool chains. To solve the problem, this paper proposes SoSoC, a service-oriented system-on-chip framework that integrates both embedded processors and software-defined hardware accelerators as computing services on a single chip. Modeling and realizing the SOA design principles, SoSoC provides well-defined programming interfaces for programmers to utilize diverse computing resources efficiently. Furthermore, SoSoC can provide task-level parallelization and significant speedup to MPSoC chip design paradigms by providing an out-of-order execution scheme with hardware accelerators. To evaluate the performance of SoSoC, we implemented a hardware prototype on a Xilinx Virtex5 FPGA board with EEMBC benchmarks. Experimental results demonstrate that the overhead of service componentization over the original version is less than 3 percent, while the speedup for typical software benchmarks is up to 372x. To show the portability of SoSoC, we implement the convolutional neural network as a case study on both Xilinx Zynq and Altera DE5 FPGA boards. Results show that SoSoC outperforms the state-of-the-art literature with great flexibility.

Proceedings ArticleDOI
12 Nov 2017
TL;DR: REFINE, a novel framework that performs FI in a compiler backend, is proposed to address the limitations of current practices in compiler-based FI and their impact on the interpretation of results in resilience studies, providing the portability and efficiency of compiler-based FI while keeping accuracy comparable to binary-level FI methods.
Abstract: Compiler-based fault injection (FI) has become a popular technique for resilience studies to understand the impact of soft errors in supercomputing systems. Compiler-based FI frameworks inject faults at a high intermediate-representation level. However, they are less accurate than machine-code, binary-level FI because they lack access to all dynamic instructions, and thus they fail to mimic certain fault manifestations. In this paper, we study the limitations of current practices in compiler-based FI and how they impact the interpretation of results in resilience studies. We propose REFINE, a novel framework that addresses these limitations, performing FI in a compiler backend. Our approach provides the portability and efficiency of compiler-based FI, while keeping accuracy comparable to binary-level FI methods. We demonstrate our approach on 14 HPC programs and show that, due to our unique design, its runtime overhead is significantly smaller than state-of-the-art compiler-based FI frameworks, reducing the time for large FI experiments.
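The fault model that such FI frameworks emulate is typically a single bit flip in program state. The sketch below shows that model in plain Python for a 32-bit value; it is purely illustrative and does not reproduce REFINE's compiler-backend injection mechanism.

```python
# Illustrative single-bit-flip fault model: flip one randomly chosen bit of a
# 32-bit integer, the kind of corruption an FI campaign injects into program
# state to study soft-error resilience.
import random

def flip_random_bit(value, width=32):
    bit = random.randrange(width)
    return (value ^ (1 << bit)) & ((1 << width) - 1), bit

random.seed(0)
corrupted, bit = flip_random_bit(1000)
print(f"1000 with bit {bit} flipped -> {corrupted}")
```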

BookDOI
02 Jan 2017
TL;DR: A flexible core is described that allows the addition of various schedulers, each with a different feature set as required by applications, and that provides the wrapping mechanism for offering integration facilities such as the job notification API.
Abstract: This book is open access under a CC BY 4.0 license. This book summarizes work being undertaken within the collaborative MODAClouds research project, which aims to facilitate interoperability between heterogeneous Cloud platforms and remove the constraints of deployment, portability, and reversibility for end users of Cloud services. Experts involved in the project provide a clear overview of the MODAClouds approach and explain how it operates in a variety of applications. While the wide spectrum of available Clouds constitutes a vibrant technical environment, many early-stage issues pose specific challenges from a software engineering perspective. MODAClouds will provide methods, a decision support system, and an open source IDE and run-time environment for the high-level design, early prototyping, semiautomatic code generation, and automatic deployment of applications on multiple Clouds. It will free developers from the need to commit to a fixed Cloud technology stack during software design and offer benefits in terms of cost savings, portability of applications and data between Clouds, reversibility (moving applications and data from Cloud to non-Cloud environments), risk management, quality assurance, and flexibility in the development process.

Journal ArticleDOI
TL;DR: This article considers the vendor lock-in problem, which is a direct consequence of the lack of interoperability and portability of inter-connected clouds.
Abstract: Inter-connected cloud computing is an inherent evolution of Cloud Computing. Numerous benefits provided by connecting clouds have garnered attention from academia as well as industry. Just as every new evolution faces challenges, inter-connected clouds have their own set of challenges, such as security, monitoring, authorization and identity management, vendor lock-in, and so forth. This article considers the vendor lock-in problem, which is a direct consequence of the lack of interoperability and portability. An extensive literature review surveying more than 120 papers has been done to analyze and categorize the various solutions suggested in the literature for solving the interoperability and portability issues of inter-connected clouds. After categorizing the solutions, the literature has been mapped to a specific solution and a comparative analysis of the papers under the same solution has been done. The term “inter-connected clouds” has been used generically in this article to refer to any collaboration of clouds, which may be from the user side (Multi-clouds or Aggregated service by Broker) or the provider side (Federated clouds or Hybrid clouds). Lastly, two closely related issues (Brokers and Meta-scheduling) and the remaining challenges of inter-connected clouds are discussed.

Proceedings ArticleDOI
01 Dec 2017
TL;DR: An industrial machine condition-monitoring open-source software database, equipped with a dictionary and small enough to fit into the memory of edge data-analytic devices is created, which will prevent excessive industrial and smart grid machine data from being sent to the cloud.
Abstract: The Industrial Internet of Things (IIoT) is quite different from the general IoT in terms of latency, bandwidth, cost, security and connectivity. Most existing IoT platforms are designed for general IoT needs, and thus cannot handle the specificities of IIoT. With the anticipated big data generation in IIoT, an open source platform is sorely needed that minimizes the amount of data sent from the edge while effectively monitoring and communicating the condition of large-scale engineering systems through efficient real-time edge analytics. In this work, an industrial machine condition-monitoring open-source software database, equipped with a dictionary and small enough to fit into the memory of edge data-analytic devices, is created. The database-dictionary system will prevent excessive industrial and smart grid machine data from being sent to the cloud, since only fault reports and requisite recommendations, sourced from the edge dictionary and database, will be sent. Open source software (Python SQLite) on a Linux operating system is used to create the edge database and the dictionary so that inter-platform portability is achieved and most IIoT machines can use the platform. Statistical analysis at the network edge using well-known industrial methods such as kurtosis and skewness reveals significant differences between the generated machine signal and a reference signal. This database-dictionary approach is a new paradigm, since it differs from legacy methods in which databases are situated only in the cloud with huge memory and servers. The open source deployment will also help to satisfy the criteria of the Industrial IoT Consortium and the Open Fog Architecture.
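Since the abstract names Python, SQLite, kurtosis, and skewness explicitly, a minimal edge-side sketch along those lines is shown below, assuming NumPy and SciPy are available; the table schema, machine identifier, and fault thresholds are invented for illustration and are not from the paper.

```python
# Hedged sketch of an edge pipeline: compute kurtosis and skewness of a
# machine signal locally and store only the summary report (not raw samples)
# in an on-device SQLite database.
import sqlite3
import numpy as np
from scipy.stats import kurtosis, skew

signal = np.random.default_rng(0).normal(size=4096)   # stand-in for a vibration signal

conn = sqlite3.connect("edge_monitor.db")
conn.execute("""CREATE TABLE IF NOT EXISTS condition_report (
                    machine_id TEXT, kurtosis REAL, skewness REAL, status TEXT)""")

k, s = float(kurtosis(signal)), float(skew(signal))
status = "fault_suspected" if abs(k) > 3.0 or abs(s) > 1.0 else "normal"  # invented rule
conn.execute("INSERT INTO condition_report VALUES (?, ?, ?, ?)",
             ("motor-01", k, s, status))
conn.commit()
conn.close()
```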

Journal ArticleDOI
25 May 2017
TL;DR: In this article, the authors examine Linux container technology for the distribution of a nontrivial scientific computing software stack and its execution on a spectrum of platforms from laptop computers through high-performance computing systems.
Abstract: Containers are an emerging technology that holds promise for improving productivity and code portability in scientific computing. The authors examine Linux container technology for the distribution of a nontrivial scientific computing software stack and its execution on a spectrum of platforms from laptop computers through high-performance computing systems. For Python code run on large parallel computers, the runtime is reduced inside a container due to faster library imports. The software distribution approach and data that the authors present will help developers and users decide whether container technology is appropriate for them. The article also offers guidance to vendors of HPC systems that rely on proprietary libraries for performance on how to make containers work seamlessly and without a performance penalty.
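A rough way to observe the import-time effect reported above is to time heavyweight imports once on the host and once inside the container and compare; the snippet below is only a sketch of that measurement, with NumPy and SciPy standing in for any large scientific stack.

```python
# Minimal import-timing probe: run on the host file system and inside the
# container image, then compare the elapsed times.
import time

start = time.perf_counter()
import numpy    # heavyweight imports dominate start-up on parallel file systems
import scipy
print(f"import time: {time.perf_counter() - start:.3f} s")
```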

Journal ArticleDOI
TL;DR: This work explores the use of OpenCL to develop a portable code that can take advantage of the many parallel processor architectures now available and presents a program called SeisCL for 2D and 3D viscoelastic FWI in the time domain.

Journal ArticleDOI
TL;DR: The Divide‐Expand‐Consolidate scheme is a linear‐scaling and massively parallel framework for high accuracy coupled cluster calculations on large molecular systems designed as a black‐box method, which ensures error control in the correlation energy and molecular properties.
Abstract: The Divide-Expand-Consolidate (DEC) scheme is a linear-scaling and massively parallel framework for high accuracy coupled cluster (CC) calculations on large molecular systems. It is designed as a black-box method, which ensures error control in the correlation energy and molecular properties. DEC is combined with a massively parallel implementation to fully utilize modern manycore architectures providing a fast time to solution. The implementation ensures performance portability and will straightforwardly benefit from new hardware developments. The DEC scheme has been applied to several levels of CC theory and extended the range of application of those methods. For further resources related to this article, please visit the WIREs website.

Journal ArticleDOI
Steve G. Langer1
TL;DR: Recent increases in the complexity of computer criminal applications (and defensive countermeasures) and the pervasiveness of Internet-connected devices have raised the bar, and this work examines how a medical center can adapt to these evolving threats.
Abstract: In 1999–2003, SIIM (then SCAR) sponsored the creation of several special topic Primers, one of which was concerned with computer security. About the same time, a multi-society collaboration authored an ACR Guideline with a similar plot; the latter has recently been updated. The motivation for these efforts was the launch of the Health Insurance Portability and Accountability Act (HIPAA). That legislation directed care providers to enable the portability of patient medical records across authorized medical centers, while simultaneously protecting patient confidentiality from unauthorized agents. These policy requirements resulted in the creation of numerous technical solutions, which the above documents described. While the mathematical concepts and algorithms in those papers are as valid today as they were then, recent increases in the complexity of computer criminal applications (and defensive countermeasures) and the pervasiveness of Internet-connected devices have raised the bar. This work examines how a medical center can adapt to these evolving threats.