
Showing papers on "Software portability published in 2020"


Proceedings ArticleDOI
Michael Wolfe
29 Jun 2020
TL;DR: This work will focus a great deal on the importance of compilers in supercomputing, and compare and contrast the advantages and impacts of compiler solutions to the "Performance + Portability + Productivity" problem with language and runtime solutions.
Abstract: Between a problem statement and its solution as a computer simulation are several steps, from choosing a method, writing a program, compiling to machine code, making runtime decisions, and hardware execution. Here we will look at the middle three decision points. What decisions should be and must be left to the programmer? What decisions should be and must be relegated to a compiler? What decisions should be and must be left until runtime? Given my background, I will focus a great deal on the importance of compilers in supercomputing, and compare and contrast the advantages and impacts of compiler solutions to the "Performance + Portability + Productivity" problem with language and runtime solutions.

729 citations


Journal ArticleDOI
TL;DR: An extensive survey of the EN technology and its wide range of application fields is provided, through a comprehensive analysis of algorithms proposed in the literature, while exploring related domains with possible future suggestions for this research topic.
Abstract: In the last two decades, improvements in materials, sensors and machine learning technologies have led to a rapid extension of electronic nose (EN) related research topics with diverse applications. The food and beverage industry, agriculture and forestry, medicine and health-care, indoor and outdoor monitoring, military and civilian security systems are the leading fields which take great advantage of the rapidity, stability, portability and compactness of ENs. Although the EN technology provides numerous benefits, further enhancements in both hardware and software components are necessary for utilizing ENs in practice. This paper provides an extensive survey of the EN technology and its wide range of application fields, through a comprehensive analysis of algorithms proposed in the literature, while exploring related domains with possible future suggestions for this research topic.

176 citations


Journal ArticleDOI
14 Oct 2020-Nature
TL;DR: This study proposes 'neuromorphic completeness', which relaxes the requirement for hardware completeness, and proposes a corresponding system hierarchy, which consists of a Turing-complete software-abstraction model and a versatile abstract neuromorphic architecture.
Abstract: Neuromorphic computing draws inspiration from the brain to provide computing technology and architecture with the potential to drive the next wave of computer engineering [1-13]. Such brain-inspired computing also provides a promising platform for the development of artificial general intelligence [14,15]. However, unlike conventional computing systems, which have a well established computer hierarchy built around the concept of Turing completeness and the von Neumann architecture [16-18], there is currently no generalized system hierarchy or understanding of completeness for brain-inspired computing. This affects the compatibility between software and hardware, impairing the programming flexibility and development productivity of brain-inspired computing. Here we propose 'neuromorphic completeness', which relaxes the requirement for hardware completeness, and a corresponding system hierarchy, which consists of a Turing-complete software-abstraction model and a versatile abstract neuromorphic architecture. Using this hierarchy, various programs can be described as uniform representations and transformed into the equivalent executable on any neuromorphic complete hardware; that is, it ensures programming-language portability, hardware completeness and compilation feasibility. We implement toolchain software to support the execution of different types of program on various typical hardware platforms, demonstrating the advantage of our system hierarchy, including a new system-design dimension introduced by the neuromorphic completeness. We expect that our study will enable efficient and compatible progress in all aspects of brain-inspired computing systems, facilitating the development of various applications, including artificial general intelligence.

92 citations


Journal ArticleDOI
TL;DR: DolphinNext is a flexible, intuitive, web-based data processing and analysis platform that enables creating, deploying, sharing, and executing complex Nextflow pipelines with extensive revisioning and interactive reporting to enhance reproducible results.
Abstract: The emergence of high-throughput technologies that produce vast amounts of genomic data, such as next-generation sequencing (NGS), is transforming biological research. The dramatic increase in the volume of data and the variety and continuous change of data processing tools, algorithms and databases make analysis the main bottleneck for scientific discovery. The processing of high-throughput datasets typically involves many different computational programs, each of which performs a specific step in a pipeline. Given the wide range of applications and organizational infrastructures, there is a great need for highly parallel, flexible, portable, and reproducible data processing frameworks. Several platforms currently exist for the design and execution of complex pipelines. Unfortunately, current platforms lack the necessary combination of parallelism, portability, flexibility and/or reproducibility that are required by the current research environment. To address these shortcomings, workflow frameworks that provide a platform to develop and share portable pipelines have recently arisen. We complement these new platforms by providing a graphical user interface to create, maintain, and execute complex pipelines. Such a platform simplifies robust and reproducible workflow creation for non-technical users as well as providing a robust platform to maintain pipelines for large organizations. To simplify development, maintenance, and execution of complex pipelines we created DolphinNext. DolphinNext facilitates building and deployment of complex pipelines using a modular approach implemented in a graphical interface that relies on the powerful Nextflow workflow framework, by providing:
1. A drag-and-drop user interface that visualizes pipelines and allows users to create pipelines without familiarity with the underlying programming languages.
2. Modules to execute and monitor pipelines in distributed computing environments such as high-performance clusters and/or the cloud.
3. Reproducible pipelines with version tracking and stand-alone versions that can be run independently.
4. Modular process design with process revisioning support to increase reusability and pipeline development efficiency.
5. Pipeline sharing with GitHub and automated testing.
6. Extensive reports with R Markdown and Shiny support for interactive data visualization and analysis.
DolphinNext is a flexible, intuitive, web-based data processing and analysis platform that enables creating, deploying, sharing, and executing complex Nextflow pipelines with extensive revisioning and interactive reporting to enhance reproducible results.

59 citations


Journal ArticleDOI
TL;DR: The proposed query generation model takes into account the objects’ major features in terms of typology and associated functionalities, and the characteristics of the applications, and generates a dataset, which is made available to the research community to study the navigability of the SIoT network.

56 citations


Proceedings ArticleDOI
01 Jan 2020
TL;DR: This paper proposes a new model called the Multiple Device Model (MDM) that formally incorporates device-to-device variation during a profiled side-channel attack and demonstrates how MDM can improve the performance of an attack by an order of magnitude, completely negating the influence of portability.

48 citations



Journal ArticleDOI
TL;DR: This study revisits the portability of performance paradox, the common finding that external hires fail to replicate prior performance after switching firms, by examining how the nature of an employee can affect an ex-employee's performance after switching companies.
Abstract: This study revisits the portability of performance paradox—the common finding that external hires fail to replicate prior performance after switching firms—by examining how the nature of an employe...

41 citations


Journal ArticleDOI
TL;DR: This paper opens a discourse on the security attacks that future SDN-based VANETs will confront and examines how SDNs could be advantageous in building new countermeasures.
Abstract: Vehicular ad-hoc networks (VANETs) are a specific sort of ad-hoc network used in intelligent transportation systems (ITS). VANETs have become one of the most promising and fastest-developing subsets of mobile ad-hoc networks (MANETs). They include smart vehicles, roadside units (RSUs), and on-board units (OBUs), which communicate over intermittent wireless networks. Recent research in the vehicle industry and telecommunication technologies, alongside emerging multimodal mobility services, has accelerated interest in ITS, and in VANETs in particular. The defining characteristic of software-defined networks (SDNs), a centralized controller with a complete view of the network, can benefit vehicular systems. Security is an important issue in SDN-based VANETs because of the effect that threats and vulnerabilities can have on driver behavior and quality of life. This paper opens a discourse on the security attacks that future SDN-based VANETs will confront and examines how SDNs could be advantageous in building new countermeasures. SDN-based VANETs help to remove the limitations and difficulties present in traditional VANETs, and reduce the overall burden on the system by managing the whole network through a single wireless controller. While SDN-based VANETs provide benefits in terms of applications and services, they also pose important challenges that need to be solved. In this study we discuss and elaborate on these challenges, along with the applications and future directions of SDN-based VANETs, and close with the conclusions of the whole study.

38 citations


Journal ArticleDOI
TL;DR: By highlighting advances in usability, this work aims to encourage thoughtful and rigorous design at the academic prototyping stage to address one outstanding hurdle that limits the number of PADs that make it from the benchtop to the point-of-care.

37 citations


Proceedings ArticleDOI
09 Mar 2020
TL;DR: LeapIO is a new cloud storage stack that leverages ARM-based co-processors to offload complex storage services and uses a set of OS/software techniques and new hardware properties that provide a uniform address space across the x86 and ARM cores and expose virtual NVMe storage to unmodified guest VMs, at a performance that is competitive with bare-metal servers.
Abstract: Today's cloud storage stack is extremely resource hungry, burning 10-20% of datacenter x86 cores, a major "storage tax" that cloud providers must pay. Yet, the complex cloud storage stack is not completely offload-ready to today's IO accelerators. We present LeapIO, a new cloud storage stack that leverages ARM-based co-processors to offload complex storage services. LeapIO addresses many deployment challenges, such as hardware fungibility, software portability, virtualizability, composability, and efficiency. It uses a set of OS/software techniques and new hardware properties that provide a uniform address space across the x86 and ARM cores and expose virtual NVMe storage to unmodified guest VMs, at a performance that is competitive with bare-metal servers.

Book ChapterDOI
03 Dec 2020
TL;DR: ROSMonitoring as discussed by the authors is a framework to support Runtime Verification (RV) of robotic applications developed using the Robot Operating System (ROS); its main advantages are portability across multiple ROS distributions and agnosticism w.r.t. the specification formalism, and the authors show how it can be used in a traditional ROS example.
Abstract: Recently, robotic applications have been seeing widespread use across industry, often tackling safety-critical scenarios where software reliability is paramount. These scenarios often have unpredictable environments and, therefore, it is crucial to be able to provide assurances about the system at runtime. In this paper, we introduce ROSMonitoring, a framework to support Runtime Verification (RV) of robotic applications developed using the Robot Operating System (ROS). The main advantages of ROSMonitoring compared to the state of the art are its portability across multiple ROS distributions and its agnosticism w.r.t. the specification formalism. We describe the architecture behind ROSMonitoring and show how it can be used in a traditional ROS example. To better evaluate our approach, we apply it to a practical example using a simulation of the Mars curiosity rover. Finally, we report the results of some experiments to check how well our framework scales.
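The core idea of a formalism-agnostic runtime monitor can be sketched in a few lines. This is a conceptual illustration only, not ROSMonitoring's actual API: the property oracle (here a hypothetical speed bound on `cmd_vel` messages) is pluggable, which is what makes the monitor agnostic to the specification formalism.

```python
# A minimal, formalism-agnostic runtime-verification monitor sketch:
# messages from a topic stream are checked one by one against a pluggable
# property; violations are collected instead of crashing the system.

def make_monitor(property_check):
    """Wrap a property oracle; property_check(msg, state) -> (ok, new_state)."""
    def monitor(messages):
        state = {}
        verdicts = []
        for msg in messages:
            ok, state = property_check(msg, state)
            verdicts.append((msg, ok))
        return verdicts
    return monitor

# Example (hypothetical) safety property: commanded speed never exceeds 2.0 m/s.
def speed_bound(msg, state):
    return (msg.get("cmd_vel", 0.0) <= 2.0, state)

monitor = make_monitor(speed_bound)
stream = [{"cmd_vel": 1.0}, {"cmd_vel": 2.5}, {"cmd_vel": 0.5}]
verdicts = monitor(stream)
violations = [m for m, ok in verdicts if not ok]
# violations == [{"cmd_vel": 2.5}]
```

Because the oracle is a plain function of (message, state), it can carry state across messages, which is how temporal properties ("B must follow A") would be expressed in the same shape.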

Journal ArticleDOI
TL;DR: A low-cost portable system is presented, assembling an ad hoc-designed analog front end (AFE) and a development board equipped with a system on chip integrating a microcontroller and a Wi-Fi network processor; the wireless module enables the transmission of measurements directly to a cloud service for sharing device outcomes with users.
Abstract: The measurement of the analyte concentration in electrochemical biosensors traditionally requires costly laboratory equipment to obtain accurate results. Innovative portable solutions have recently been proposed, but they usually lean on personal computers (PCs) or smartphones for data elaboration and exhibit poor resolution or portability and proprietary software. This paper presents a low-cost portable system, assembling an ad hoc-designed analog front end (AFE) and a development board equipped with a system on chip integrating a microcontroller and a Wi-Fi network processor. The wireless module enables the transmission of measurements directly to a cloud service for sharing device outcomes with users (physicians, caregivers, and so on). In doing so, the system requires neither customized software nor other devices for data acquisition. Furthermore, when the Internet connection is lost, the data are stored on board for subsequent transmission once a Wi-Fi connection is available. The noise output voltage spectrum has been characterized. Since the designed device is intended to be battery-powered to enhance portability, investigations of battery lifetime were carried out. Finally, data acquired with a conventional benchtop Autolab PGSTAT-204 electrochemical workstation are compared with the outcome of our developed device to validate the effectiveness of our proposal. To this end, we selected ferri/ferrocyanide as the redox probe, obtaining the calibration curves for both platforms. The final outcomes are shown to be feasible, accurate, and repeatable.

Proceedings ArticleDOI
01 Jan 2020
TL;DR: A continuous integration and continuous deployment system that validates the security of Docker images throughout the software development life cycle is described, and dynamic analysis is used to assess the security of Docker containers based on their behavior, showing that it complements the static analyses typically used for security assessments.
Abstract: Docker is popular within the software development community due to the versatility, portability, and scalability of containers. However, concerns over vulnerabilities have grown as the security of applications become increasingly dependent on the security of the images that serve as the applications' building blocks. As more development processes migrate to the cloud, validating the security of images that are pulled from various repositories is paramount. In this paper, we describe a continuous integration and continuous deployment (CI/CD) system that validates the security of Docker images throughout the software development life cycle. We introduce images with vulnerabilities and measure the effectiveness of our approach at identifying the vulnerabilities. In addition, we use dynamic analysis to assess the security of Docker containers based on their behavior and show that it complements the static analyses typically used for security assessments.
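The validation step described above usually boils down to a gate in the pipeline that parses a scanner's report and fails the build when blocking findings appear. The sketch below assumes a hypothetical JSON report format (real scanners such as Trivy or Clair each have their own schemas); only the gating logic is the point.

```python
import json

# Hypothetical scanner report for an image; field names are illustrative.
report_json = '''
{"image": "myapp:1.2", "vulnerabilities": [
  {"id": "CVE-2020-0001", "severity": "LOW"},
  {"id": "CVE-2020-0002", "severity": "CRITICAL"}
]}
'''

# Severities that should block promotion to the deployment stage.
BLOCKING = {"HIGH", "CRITICAL"}

def gate(report):
    """Return the list of findings that should fail the pipeline."""
    return [v for v in report["vulnerabilities"] if v["severity"] in BLOCKING]

blocking = gate(json.loads(report_json))
passed = not blocking
# One CRITICAL finding, so the gate fails the build (passed == False).
```

In a CI/CD system the gate would simply exit non-zero when `blocking` is non-empty, stopping the image from being pushed to the registry.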

Journal ArticleDOI
TL;DR: The results obtained show that it is feasible to directly transfer predictive models, or apply them to different courses with acceptable accuracy and without losing portability, only under certain circumstances.
Abstract: Predicting students’ academic performance is one of the oldest challenges faced by the educational scientific community. However, most of the research carried out in this area has focused on obtaining the best accuracy models for specific single courses, and only a few works have tried to discover under which circumstances a prediction model built on a source course can be used in other different but similar courses. Our motivation in this work is to study the portability of models obtained directly from the Moodle logs of 24 university courses. The proposed method checks whether grouping similar courses by degree or by similar levels of activity usage in the Moodle logs, and whether the use of numerical or categorical attributes, affect the portability of the prediction models. We carried out two experiments by executing a well-known classification algorithm over all the course datasets in order to obtain decision tree models and to test their portability to the other courses by comparing the accuracy obtained and the loss of accuracy. The results show that it is feasible to directly transfer predictive models, or apply them to different courses with acceptable accuracy and without losing portability, only under certain circumstances.
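The cross-course transfer experiment can be illustrated with a toy sketch (hypothetical data, not the paper's Moodle datasets): a one-feature threshold rule is learned on a source course's logs and then applied unchanged to a target course, and the accuracies are compared.

```python
# Toy model-portability sketch: learn a pass/fail threshold on activity
# counts from a source course, apply it unchanged to a target course.

def learn_threshold(data):
    """Pick the activity-count threshold that best separates pass/fail."""
    best_t, best_acc = None, -1.0
    for t, _ in data:  # candidate thresholds drawn from observed counts
        acc = sum((logins >= t) == passed for logins, passed in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# (logins, passed) pairs; entirely made up for illustration.
source = [(5, False), (12, False), (30, True), (42, True)]   # course A logs
target = [(8, False), (25, True), (35, True), (3, False)]    # course B logs

t = learn_threshold(source)  # perfectly separates course A at t = 30
target_acc = sum((logins >= t) == passed for logins, passed in target) / len(target)
# target_acc == 0.75: the transferred model loses some accuracy on course B.
```

The gap between source accuracy (1.0 here) and target accuracy (0.75) is exactly the "loss of accuracy" measure the study compares across course groupings.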

Proceedings ArticleDOI
01 Dec 2020
TL;DR: A modeling extension for imperative workflow languages is introduced to enable the integration of quantum computations and ease the orchestration of classical applications and quantum circuits and it is shown how the extension can be mapped to native modeling constructs of extended workflow languages to retain the portability of the workflows.
Abstract: Quantum computing has the potential to significantly impact many application domains, as several quantum algorithms are promising to solve problems more efficiently than possible on classical computers. However, various complex pre- and post-processing tasks have to be performed when executing a quantum circuit, which require immense mathematical and technical knowledge. For example, calculations on today's quantum computers are noisy and require an error mitigation task after the execution. Hence, integrating classical applications with quantum circuits is a difficult challenge. In this paper, we introduce a modeling extension for imperative workflow languages to enable the integration of quantum computations and ease the orchestration of classical applications and quantum circuits. Further, we show how the extension can be mapped to native modeling constructs of extended workflow languages to retain the portability of the workflows. We validate the practical feasibility of our approach by applying our proposed extension to BPMN and introduce Quantum4BPMN.

Proceedings ArticleDOI
23 Feb 2020
TL;DR: In this paper, the authors present a model to optimize matrix multiplication for FPGA platforms, simultaneously targeting maximum performance and minimum off-chip data movement, within constraints set by the hardware.
Abstract: Data movement is the dominating factor affecting performance and energy in modern computing systems. Consequently, many algorithms have been developed to minimize the number of I/O operations for common computing patterns. Matrix multiplication is no exception, and lower bounds have been proven and implemented both for shared and distributed memory systems. Reconfigurable hardware platforms are a lucrative target for I/O minimizing algorithms, as they offer full control of memory accesses to the programmer. While bounds developed in the context of fixed architectures still apply to these platforms, the spatially distributed nature of their computational and memory resources requires a decentralized approach to optimize algorithms for maximum hardware utilization. We present a model to optimize matrix multiplication for FPGA platforms, simultaneously targeting maximum performance and minimum off-chip data movement, within constraints set by the hardware. We map the model to a concrete architecture using a high-level synthesis tool, maintaining a high level of abstraction, allowing us to support arbitrary data types, and enables maintainability and portability across FPGA devices. Kernels generated from our architecture are shown to offer competitive performance in practice, scaling with both compute and memory resources. We offer our design as an open source project to encourage the open development of linear algebra and I/O minimizing algorithms on reconfigurable hardware platforms.
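The I/O-minimization argument can be made concrete with a back-of-the-envelope counting model (illustrative only; the paper's actual model accounts for FPGA buffer sizes and hardware constraints). With an on-chip buffer holding a TxT tile of each operand, every element is loaded once per tile pass instead of once per output element.

```python
# Counting off-chip element loads for n x n matrix multiplication.

def naive_loads(n):
    # Each of the n*n outputs streams a full row of A and column of B.
    return 2 * n * n * n

def tiled_loads(n, t):
    # Each of the (n/t)^2 output tiles streams n/t tiles of A and n/t
    # tiles of B, each of size t*t elements.
    tiles = (n // t) ** 2
    return tiles * (n // t) * 2 * t * t

n, t = 64, 8
ratio = naive_loads(n) / tiled_loads(n, t)
# Off-chip traffic drops from 2*n^3 to 2*n^3/t, i.e. by a factor of t (= 8).
```

This factor-of-t saving is why the paper treats on-chip buffer capacity as the scarce resource: the larger the tile the hardware can hold, the less data crosses the off-chip boundary, up to the proven lower bounds.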

Journal ArticleDOI
01 Mar 2020
TL;DR: A Systematic Mapping Study in the context of the European COST Action cHiPSet revealed that the desired features of programming languages for data-intensive HPC applications are portability, performance and usability.
Abstract: A major challenge in modelling and simulation is the need to combine expertise in both software technologies and a given scientific domain. When High-Performance Computing (HPC) is required to solve a scientific problem, software development becomes a problematic issue. Considering the complexity of software for HPC, it is useful to identify programming languages that can be used to alleviate this issue. Because the existing literature on the topic of HPC is very dispersed, we performed a Systematic Mapping Study (SMS) in the context of the European COST Action cHiPSet. This literature study maps characteristics of various programming languages for data-intensive HPC applications, including category, typical user profiles, effectiveness, and type of articles. We organised the SMS in two phases. In the first phase, relevant articles were identified using an automated keyword-based search in eight digital libraries. This led to an initial sample of 420 papers, which was then narrowed down in a second phase by human inspection of article abstracts, titles and keywords to 152 relevant articles published in the period 2006–2018. The analysis of these articles enabled us to identify 26 programming languages referred to in 33 of the relevant articles. We compared the outcome of the mapping study with the results of our questionnaire-based survey that involved 57 HPC experts. The mapping study and the survey revealed that the desired features of programming languages for data-intensive HPC applications are portability, performance and usability. Furthermore, we observed that the majority of the programming languages used in the context of data-intensive HPC applications are text-based general-purpose programming languages. Typically these have a steep learning curve, which makes them difficult to adopt. We believe that the outcome of this study will inspire future research and development in programming languages for data-intensive HPC applications.

Posted Content
TL;DR: This paper presents a high-level, preliminary report on the onnx-mlir compiler, which generates code for the inference of deep neural network models described in the ONNX format using the Multi-Level Intermediate Representation (MLIR) infrastructure recently integrated in the LLVM project.
Abstract: Deep neural network models are becoming increasingly popular and have been used in various tasks such as computer vision, speech recognition, and natural language processing. Machine learning models are commonly trained in a resource-rich environment and then deployed in a distinct environment such as high availability machines or edge devices. To assist the portability of models, the open-source community has proposed the Open Neural Network Exchange (ONNX) standard. In this paper, we present a high-level, preliminary report on our onnx-mlir compiler, which generates code for the inference of deep neural network models described in the ONNX format. Onnx-mlir is an open-source compiler implemented using the Multi-Level Intermediate Representation (MLIR) infrastructure recently integrated in the LLVM project. Onnx-mlir relies on the MLIR concept of dialects to implement its functionality. We propose here two new dialects: (1) an ONNX specific dialect that encodes the ONNX standard semantics, and (2) a loop-based dialect to provide for a common lowering point for all ONNX dialect operations. Each intermediate representation facilitates its own characteristic set of graph-level and loop-based optimizations respectively. We illustrate our approach by following several models through the proposed representations and we include some early optimization work and performance results.

Proceedings ArticleDOI
01 Feb 2020
TL;DR: P4Knocking can provide a more transparent and efficient way to deploy the port knocking service compared to a host-based port knocking implementation, and requires no special-purpose externs apart from registers, hence its higher portability and flexibility with local or remote control planes.
Abstract: The introduction of Software-Defined Networks (SDN) and the evolution towards programmable data planes bring the opportunity to offload several functions to the data plane. In this context, the P4 programming language opens the door to the customization of data planes. It can provide packet processing functionalities that can be applied to improve network security among other areas. This paper presents P4Knocking, a P4-based port knocking implementation that can externally open ports that appear to be closed. The goal of bringing port knocking capabilities to the network is to seamlessly deploy firewall functions in the data plane, relieving hosts from dealing with unintended traffic. Our work presents a total of four implementations that involve the data and control planes in different degrees. P4Knocking can provide a more transparent and efficient way to deploy the port knocking service compared to a host-based port knocking implementation. In fact, it requires no special-purpose externs apart from registers, hence its higher portability and flexibility with local or remote control planes.
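Port knocking itself is a small per-source state machine: a service port opens only after a source sends packets to a secret sequence of closed ports. The sketch below shows the concept in Python (P4Knocking realizes the equivalent logic inside the switch using P4 registers; the sequence and addresses here are made up).

```python
# Minimal port-knocking state machine: each source IP advances through the
# secret sequence; a wrong knock resets its progress.

SECRET = [7000, 8000, 9000]   # required knock sequence (illustrative)

def knock(state, src, port, open_for):
    """Advance src's progress through SECRET; grant access on completion."""
    progress = state.get(src, 0)
    if port == SECRET[progress]:
        progress += 1
    else:
        # A wrong port resets, but may itself restart the sequence.
        progress = 1 if port == SECRET[0] else 0
    if progress == len(SECRET):
        open_for.add(src)
        progress = 0
    state[src] = progress

state, open_for = {}, set()
for p in [7000, 8000, 9000]:          # correct sequence from 10.0.0.1
    knock(state, "10.0.0.1", p, open_for)
for p in [7000, 9000]:                # wrong sequence from 10.0.0.2
    knock(state, "10.0.0.2", p, open_for)
# open_for == {"10.0.0.1"}: only the correct knocker gains access.
```

In the data-plane versions, `state` maps naturally onto per-source register entries, which is why the paper notes that registers are the only extern the implementation needs.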

Journal ArticleDOI
TL;DR: An overview of the state-of-the-art for delivering performance, portability, and productivity to CFD applications, ranging from high-level libraries that allow the symbolic description of PDEs to low-level techniques that target individual algorithmic patterns are given.

Journal ArticleDOI
TL;DR: The Kernel Tuning Toolkit as discussed by the authors enables applications to re-tune performance-critical kernels at runtime whenever needed, for example, when input data changes, which is key to performance portability.

Journal ArticleDOI
TL;DR: EngineCL is a new OpenCL-based runtime system that greatly simplifies the co-execution of a single massive data-parallel kernel on all the devices of a heterogeneous system.

Journal ArticleDOI
11 Apr 2020
TL;DR: A comprehensive survey of parallel programming models for heterogeneous many-core architectures is provided, the compiling techniques for improving programmability and portability are reviewed, and various software optimization techniques for minimizing the communication overhead are examined.
Abstract: Heterogeneous many-cores are now an integral part of modern computing systems ranging from embedded systems to supercomputers. While heterogeneous many-core design offers the potential for energy-efficient high performance, such potential can only be unlocked if the application programs are suitably parallel and can be made to match the underlying heterogeneous platform. In this article, we provide a comprehensive survey of parallel programming models for heterogeneous many-core architectures and review the compiling techniques for improving programmability and portability. We examine various software optimization techniques for minimizing the communication overhead between heterogeneous computing devices. We provide a road map for a wide variety of different research areas. We conclude with a discussion of open issues in the area and potential research directions. This article provides both an accessible introduction to the fast-moving area of heterogeneous programming and a detailed bibliography of its main achievements.

Posted Content
TL;DR: The PETSc library enables application developers to use their preferred programming model, such as Kokkos, RAJA, SYCL, HIP, CUDA, or OpenCL, on upcoming exascale systems.
Abstract: The Portable Extensible Toolkit for Scientific computation (PETSc) library delivers scalable solvers for nonlinear time-dependent differential and algebraic equations and for numerical optimization.The PETSc design for performance portability addresses fundamental GPU accelerator challenges and stresses flexibility and extensibility by separating the programming model used by the application from that used by the library, and it enables application developers to use their preferred programming model, such as Kokkos, RAJA, SYCL, HIP, CUDA, or OpenCL, on upcoming exascale systems. A blueprint for using GPUs from PETSc-based codes is provided, and case studies emphasize the flexibility and high performance achieved on current GPU-based systems.

Journal ArticleDOI
TL;DR: To understand how portable current ONT analysis methods are, several tools, from base-calling to genome assembly, were ported and benchmarked on an Android smartphone; the portability scenario is not favorable.
Abstract: Motivation Oxford Nanopore technologies (ONT) add miniaturization and real time to high-throughput sequencing. All available software for ONT data analytics runs on cloud/clusters or personal computers. Instead, a linchpin to true portability is software that works on mobile devices independent of internet connections. Smartphones' and tablets' chipsets/memory/operating systems differ from desktop computers, but software can be recompiled. We sought to understand how portable current ONT analysis methods are. Results Several tools, from base-calling to genome assembly, were ported and benchmarked on an Android smartphone. Out of 23 programs, 11 succeeded. Recompilation failures included lack of standard headers and unsupported instruction sets. Only DSK, BCALM2 and Kraken were able to process files up to 16 GB, with linearly scaling CPU times. However, peak CPU temperatures were high. In conclusion, the portability scenario is not favorable. Given the fast market growth, attention of developers to ARM chipsets and Android/iOS is warranted, as are initiatives to implement mobile-specific libraries. Availability and implementation The source code is freely available at: https://github.com/marco-oliva/portable-nanopore-analytics.

Journal ArticleDOI
TL;DR: This study conducts measurement studies on the Amazon AWS, Alibaba, and Huawei clouds over more than one year, shows that tenants experience severe performance-cost imbalance on FPGA IaaS platforms, and sheds some light on how cloud providers could improve the performance of FPGA clouds.
Abstract: Cloud service providers promote their new field programmable gate array (FPGA) infrastructure as a service (IaaS) as the new era of cloud product. This FPGA IaaS wraps virtualized compute resources with FPGA boards, e.g., Amazon AWS F1, and reserves acceleration capability for specific applications. Though this acceleration technique sounds promising, questions like real-world performance, best-fit scenarios, portability, etc., still need further clarification. In this paper, we present one of the first few empirical studies that take a close look at FPGA clouds from the tenants' perspective. We have conducted measurement studies on the Amazon AWS, Alibaba, and Huawei clouds for over one year. The experimental results show that: (1) Tenants experience severe performance-cost anomaly on FPGA IaaS platforms; (2) The inter-communication performance in FPGA clouds is tightly constrained by hardware drivers, e.g., small optimization of DMA drivers for PCIe can harvest significant performance gain; (3) The virtualized FPGA clouds are far from mature, e.g., small-sized jobs can greatly degrade the performance of FPGA clouds due to underutilized PCIe bandwidth. Our study not only provides useful hints to help tenants with FPGA service selection, but also sheds some light for cloud providers on improving the performance of FPGA clouds.

Journal ArticleDOI
TL;DR: This work proposes a comprehensive conceptual model, portraying the most important actors, mechanisms, data types, and external influences in cross-platform reputation portability, and deduces the need for clear regulatory guidance and identifies a large gap in empirical research.
Abstract: Establishing and curating online reputation is becoming more important and inherent in day-to-day life. Until now, a plethora of research has focused on either a) the role of reputation within given (but enclosed) platform environments or b) the general idea of data portability between platforms. However, little scholarly attention has been paid to the question of cross-platform reputation portability. With this work, we introduce reputation portability as one aspect of a broader dialogue on digital identity management. We propose a comprehensive conceptual model, portraying the most important actors, mechanisms, data types, and external influences. By detailing these dimensions, we deduce the need for clear regulatory guidance and identify a large gap in empirical research. Where today’s leading platforms currently forgo implementing adequate mechanisms for users, Personal Information Management Systems (PIMS) and blockchain technology may provide means to factually establish reputation portability. To that end, we derive future scenarios, implications and critical assessments for platforms, PIMS, and governing bodies to inform the ongoing debate among researchers and practitioners.

Book ChapterDOI
01 Apr 2020
TL;DR: This paper explores the Singular Vector Canonical Correlation Analysis (SVCCA) tool to interpret what neural networks learn while training on different side-channel datasets, by concentrating on deep layers of the network.
Abstract: While several works have explored the application of deep learning for efficient profiled side-channel analysis, explainability, or, in other words, what neural networks learn, remains a rather untouched topic. As a first step, this paper explores the Singular Vector Canonical Correlation Analysis (SVCCA) tool to interpret what neural networks learn while training on different side-channel datasets, concentrating on deep layers of the network. Information from SVCCA can help, to an extent, with several practical problems in profiled side-channel analysis: the portability issue, criteria for choosing the number of layers/neurons to counter portability effects, insight into the correct training-dataset size, and detection of deceptive conditions such as over-specialization of the network.
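For readers unfamiliar with SVCCA, its two stages (SVD-based dimensionality reduction of per-layer activations, followed by canonical correlation analysis) can be sketched as follows. This is a minimal sketch under simplifying assumptions, not the exact procedure used in the paper:

```python
import numpy as np

def svcca_similarity(acts1: np.ndarray, acts2: np.ndarray,
                     keep: float = 0.99) -> float:
    """Minimal SVCCA sketch: SVD-reduce each activation matrix
    (neurons x datapoints) to the directions explaining `keep` of the
    variance, then return the mean canonical correlation between them."""
    def svd_reduce(a):
        a = a - a.mean(axis=1, keepdims=True)   # center each neuron
        u, s, vt = np.linalg.svd(a, full_matrices=False)
        dims = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), keep) + 1
        return np.diag(s[:dims]) @ vt[:dims]    # reduced representation
    x, y = svd_reduce(acts1), svd_reduce(acts2)
    # CCA via QR: canonical correlations are the singular values
    # of Qx^T Qy, where X^T = Qx Rx and Y^T = Qy Ry.
    qx, _ = np.linalg.qr(x.T)
    qy, _ = np.linalg.qr(y.T)
    corrs = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return float(corrs.mean())
```

Two networks whose deep layers encode the same information score near 1 even if their neuron bases differ, which is what makes the measure useful for comparing models trained on different side-channel datasets.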

Journal ArticleDOI
TL;DR: By providing high-level interfaces for allocating computing and networking resources, the FDK abstracts the complexities of fog computing from developers and enables the rapid development of fog systems.
Abstract: With the rise of the Internet of Things (IoT), fog computing has emerged to help traditional cloud computing meet scalability demands. Fog computing makes it possible to fulfill the real-time requirements of applications by bringing more processing, storage, and control power geographically closer to end devices. However, since fog computing is a relatively new field, there is no standard platform for research and development in a realistic environment, and this dramatically inhibits innovation and development of fog-based applications. In response to these challenges, we propose the Fog Development Kit (FDK). By providing high-level interfaces for allocating computing and networking resources, the FDK abstracts the complexities of fog computing from developers and enables the rapid development of fog systems. In addition to supporting application development on a physical deployment, the FDK supports the use of emulation tools (e.g., GNS3 and Mininet) to create realistic environments, allowing fog application prototypes to be built with zero additional costs and enabling seamless portability to a physical infrastructure. Using a physical testbed and various kinds of applications running on it, we verify the operation and study the performance of the FDK. Specifically, we demonstrate that resource allocations are appropriately enforced and guaranteed, even amidst extreme network congestion. We also present a simulation-based scalability analysis of the FDK with respect to the numbers of switches, end devices, and fog devices.
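The guaranteed-allocation behavior verified above can be caricatured as a toy admission controller: requests are granted only while the remaining budget covers them, so granted allocations stay protected even under later congestion. All class and parameter names below are illustrative assumptions, not the FDK's actual interfaces:

```python
class FogResourcePool:
    """Toy admission controller for a single fog node: grant a request
    only if the remaining CPU and bandwidth budgets can cover it,
    rejecting instead of oversubscribing."""

    def __init__(self, cpu_cores: float, bandwidth_mbps: float):
        self.cpu_free = cpu_cores
        self.bw_free = bandwidth_mbps

    def allocate(self, cpu: float, bw: float) -> bool:
        if cpu <= self.cpu_free and bw <= self.bw_free:
            self.cpu_free -= cpu
            self.bw_free -= bw
            return True
        return False  # request would oversubscribe the node

pool = FogResourcePool(cpu_cores=8, bandwidth_mbps=1000)
print(pool.allocate(cpu=4, bw=600))  # True
print(pool.allocate(cpu=6, bw=200))  # False: only 4 cores remain
```

A real system would additionally enforce the granted bandwidth in the network fabric (as the FDK does via its networking interfaces); this sketch only captures the admission-control invariant.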