
Showing papers on "Software" published in 2020


Journal ArticleDOI
TL;DR: The toolkit incorporates over 130 functions, which are designed to meet the increasing demand for big-data analyses, ranging from bulk sequence processing to interactive data visualization, along with a new plotting engine developed to maximize their interactivity.

5,173 citations


Journal ArticleDOI
TL;DR: A rewrite of the top-level computation driver, and concomitant adoption of the MolSSI QCARCHIVE INFRASTRUCTURE project, makes the latest version of PSI4 well suited to distributed computation of large numbers of independent tasks.
Abstract: PSI4 is a free and open-source ab initio electronic structure program providing implementations of Hartree-Fock, density functional theory, many-body perturbation theory, configuration interaction, density cumulant theory, symmetry-adapted perturbation theory, and coupled-cluster theory. Most of the methods are quite efficient, thanks to density fitting and multi-core parallelism. The program is a hybrid of C++ and Python, and calculations may be run with very simple text files or using the Python API, facilitating post-processing and complex workflows; method developers also have access to most of PSI4's core functionalities via Python. Job specification may be passed using The Molecular Sciences Software Institute (MolSSI) QCSCHEMA data format, facilitating interoperability. A rewrite of our top-level computation driver, and concomitant adoption of the MolSSI QCARCHIVE INFRASTRUCTURE project, makes the latest version of PSI4 well suited to distributed computation of large numbers of independent tasks. The project has fostered the development of independent software components that may be reused in other quantum chemistry programs.

387 citations
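The Python-driven workflow mentioned in the abstract can be illustrated with a minimal single-point calculation. This is only a sketch assuming a standard PSI4 installation; the geometry, basis set and method are arbitrary examples, not taken from the paper.

```python
import psi4

# A small example molecule given as a Z-matrix (values are illustrative).
h2o = psi4.geometry("""
O
H 1 0.96
H 1 0.96 2 104.5
""")

psi4.set_options({"basis": "cc-pvdz"})

# Single-point Hartree-Fock (SCF) energy; the result is returned in Hartree.
scf_energy = psi4.energy("scf")
print(scf_energy)
```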


Posted Content
TL;DR: A novel test case selection technique is proposed that derives new test cases from the successful ones and helps uncover software errors in the production phase and can be used in the absence of test oracles.
Abstract: In software testing, a set of test cases is constructed according to some predefined selection criteria. The software is then examined against these test cases. Three interesting observations have been made on the current artifacts of software testing. Firstly, an error-revealing test case is considered useful while a successful test case which does not reveal software errors is usually not further investigated. Whether these successful test cases still contain useful information for revealing software errors has not been properly studied. Secondly, no matter how extensively the testing has been conducted in the development phase, errors may still exist in the software [5]. These errors, if left undetected, may eventually cause damage to the production system. The study of techniques for uncovering software errors in the production phase is seldom addressed in the literature. Thirdly, as indicated by Weyuker in [6], the availability of test oracles is pragmatically unattainable in most situations. However, the availability of test oracles is generally assumed in conventional software testing techniques. In this paper, we propose a novel test case selection technique that derives new test cases from the successful ones. The selection aims at revealing software errors that are possibly left undetected in successful test cases which may be generated using some existing strategies. As such, the proposed technique augments the effectiveness of existing test selection strategies. The technique also helps uncover software errors in the production phase and can be used in the absence of test oracles.

341 citations
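The idea of deriving follow-up test cases from successful ones, and checking them without an oracle, can be sketched as below. The program under test and the relations are hypothetical stand-ins; the paper itself defines the selection technique in general terms.

```python
import math

def program_under_test(x):
    # Hypothetical implementation being tested; imagine its correctness
    # cannot be checked directly because no oracle is available.
    return math.sin(x)

def follow_up_inputs(x):
    # New test cases derived from a successful one, using known relations
    # of the intended function, e.g. sin(pi - x) == sin(x).
    return [math.pi - x, x + 2 * math.pi]

def relation_holds(x, tol=1e-9):
    reference = program_under_test(x)
    # Only the relation between outputs is checked, not the outputs themselves.
    return all(abs(program_under_test(y) - reference) <= tol
               for y in follow_up_inputs(x))

print(relation_holds(0.3))  # False here would reveal an error without an oracle
```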


Journal ArticleDOI
30 Apr 2020
TL;DR: OpenFermion as mentioned in this paper is an open-source software library written largely in Python under an Apache 2.0 license, aimed at enabling the simulation of fermionic and bosonic models and quantum chemistry problems on quantum hardware.
Abstract: Quantum simulation of chemistry and materials is predicted to be an important application for both near-term and fault-tolerant quantum devices. However, at present, developing and studying algorithms for these problems can be difficult due to the prohibitive amount of domain knowledge required in both the area of chemistry and quantum algorithms. To help bridge this gap and open the field to more researchers, we have developed the OpenFermion software package (www.openfermion.org). OpenFermion is an open-source software library written largely in Python under an Apache 2.0 license, aimed at enabling the simulation of fermionic and bosonic models and quantum chemistry problems on quantum hardware. Beginning with an interface to common electronic structure packages, it simplifies the translation between a molecular specification and a quantum circuit for solving or studying the electronic structure problem on a quantum computer, minimizing the amount of domain expertise required to enter the field. The package is designed to be extensible and robust, maintaining high software standards in documentation and testing. This release paper outlines the key motivations behind design choices in OpenFermion and discusses some basic OpenFermion functionality which we believe will aid the community in the development of better quantum algorithms and tools for this exciting area of research.

258 citations
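A small usage sketch of the library described above, assuming a recent OpenFermion release that exports these names at the package top level:

```python
from openfermion import FermionOperator, jordan_wigner

# A simple one-body hopping term between spin-orbitals 0 and 1 (illustrative).
hopping = FermionOperator("0^ 1", 1.0) + FermionOperator("1^ 0", 1.0)

# Map the fermionic operator to a qubit operator suitable for circuit construction.
qubit_op = jordan_wigner(hopping)
print(qubit_op)
```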


Journal ArticleDOI
04 Jun 2020
TL;DR: This survey reviews the current literature adopting deep-learning-/neural-network-based approaches for detecting software vulnerabilities, aiming at investigating how the state-of-the-art research leverages neural techniques for learning and understanding code semantics to facilitate vulnerability discovery.
Abstract: The constantly increasing number of disclosed security vulnerabilities has become an important concern in the software industry and in the field of cybersecurity, suggesting that the current approaches for vulnerability detection demand further improvement. The booming of the open-source software community has made vast amounts of software code available, which allows machine learning and data mining techniques to exploit abundant patterns within software code. Particularly, the recent breakthrough application of deep learning to speech recognition and machine translation has demonstrated the great potential of neural models’ capability of understanding natural languages. This has motivated researchers in the software engineering and cybersecurity communities to apply deep learning for learning and understanding vulnerable code patterns and semantics indicative of the characteristics of vulnerable code. In this survey, we review the current literature adopting deep-learning-/neural-network-based approaches for detecting software vulnerabilities, aiming at investigating how the state-of-the-art research leverages neural techniques for learning and understanding code semantics to facilitate vulnerability discovery. We also identify the challenges in this new field and share our views of potential research directions.

231 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the method Hybrid-DeepCom outperforms the state-of-the-art by a substantial margin and the results show that reducing the out-of-vocabulary tokens improves the accuracy effectively.
Abstract: During software maintenance, developers spend a lot of time understanding the source code. Existing studies show that code comments help developers comprehend programs and reduce additional time spent on reading and navigating source code. Unfortunately, these comments are often mismatched, missing or outdated in software projects. Developers have to infer the functionality from the source code. This paper proposes a new approach named Hybrid-DeepCom to automatically generate code comments for the functional units of Java language, namely, Java methods. The generated comments aim to help developers understand the functionality of Java methods. Hybrid-DeepCom applies Natural Language Processing (NLP) techniques to learn from a large code corpus and generates comments from learned features. It formulates the comment generation task as the machine translation problem. Hybrid-DeepCom exploits a deep neural network that combines the lexical and structure information of Java methods for better comment generation. We conduct experiments on a large-scale Java corpus built from 9,714 open source projects on GitHub. We evaluate the experimental results on both machine translation metrics and information retrieval metrics. Experimental results demonstrate that our method Hybrid-DeepCom outperforms the state-of-the-art by a substantial margin. In addition, we evaluate the influence of out-of-vocabulary tokens on comment generation. The results show that reducing the out-of-vocabulary tokens improves the accuracy effectively.

175 citations
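One common way to reduce out-of-vocabulary tokens in code corpora is to split identifiers into sub-tokens; the sketch below shows that idea, though the exact tokenisation rules used by Hybrid-DeepCom may differ.

```python
import re

def split_identifier(token):
    """Split a Java identifier into sub-tokens at underscores and
    camelCase boundaries, shrinking the vocabulary seen by the model."""
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])", token)
    return [p.lower() for p in parts if p]

print(split_identifier("getUserNameById"))  # ['get', 'user', 'name', 'by', 'id']
```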


Journal ArticleDOI
TL;DR: The performance of 14 different bagging and boosting based ensembles, including XGBoost, LightGBM and Random Forest, is empirically analyzed in terms of predictive capability and efficiency.

165 citations
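The kind of comparison described in the TL;DR can be sketched with scikit-learn; the data below is synthetic and XGBoost/LightGBM are left out to keep the example dependency-free, so this is not the paper's experimental protocol.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a defect dataset: rows are modules, columns are
# static code metrics, and the label marks defective modules.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

for model in (RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(type(model).__name__, round(auc, 3))
```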


Journal ArticleDOI
12 Jun 2020
TL;DR: This position paper summarised and developed a basis for community discussion about what makes software different from data concerning the application of the FAIR principles, and which desired characteristics of research software go beyond FAIR.
Abstract: The FAIR Guiding Principles, published in 2016, aim to improve the findability, accessibility, interoperability and reusability of digital research objects for both humans and machines. The FAIR principles are also directly relevant to research software. In this position paper, “Towards FAIR principles for research software”, we summarised and developed a basis for community discussion. At the start, we discussed what makes software different from data concerning the application of the FAIR principles, and which desired characteristics of research software go beyond FAIR. Then, we presented an analysis of where the existing principles can directly apply to software, and where they need to be adapted or reinterpreted. Our next step after the position paper is to prompt for community-agreed identifiers for FAIR research software.

Acknowledgments: To all the authors of Towards FAIR principles for research software (https://doi.org/10.3233/DS-190026), and the numerous people who contributed to the discussions around FAIR research software on different occasions preceding the work on this paper.

References: Lamprecht, Anna-Lena, et al. (2019) Towards FAIR principles for research software. Data Science. https://doi.org/10.3233/DS-190026

About the author(s): Dr Paula Andrea Martinez has been leading the National Training Program for the Characterisation Community in Australia since 2019. She works for the National Image Facility (NIF). Last year she worked at ELIXIR Europe, coordinating the Bioinformatics and Data Science training program in Belgium, and collaborated with multiple ELIXIR nodes on the development of software best practices. Her career, spanning Sweden, Australia and Belgium, nurtured her experience in Bioinformatics and Research Software development for complex and data-intensive science. She started her career in Computer Science, later became interested in research methods development, and now focuses on outreach and advocacy in data and software best practices.

160 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the data reduction pipeline of the MUSE integral field spectrograph operated at the ESO Paranal Observatory, and demonstrate that the pipeline provides datacubes ready for scientific analysis.
Abstract: The processing of raw data from modern astronomical instruments is often carried out nowadays using dedicated software, known as pipelines, largely run in automated operation. In this paper we describe the data reduction pipeline of the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph operated at the ESO Paranal Observatory. This spectrograph is a complex machine: it records data of 1152 separate spatial elements on detectors in its 24 integral field units. Efficiently handling such data requires sophisticated software with a high degree of automation and parallelization. We describe the algorithms of all processing steps that operate on calibrations and science data in detail, and explain how the raw science data is transformed into calibrated datacubes. We finally check the quality of selected procedures and output data products, and demonstrate that the pipeline provides datacubes ready for scientific analysis.

156 citations


Book
11 Dec 2020
TL;DR: The 5th edition of Computer Organization and Design moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud, with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures.
Abstract: The 5th edition of Computer Organization and Design moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures. Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is discussion of the "Eight Great Ideas" of computer architecture. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies and I/O. Instructors looking for 4th Edition teaching materials should e-mail textbook@elsevier.com. Includes new examples, exercises, and material highlighting the emergence of mobile computing and the Cloud. Covers parallelism in depth with examples and content highlighting parallel hardware and software topics. Features the Intel Core i7, ARM Cortex-A8 and NVIDIA Fermi GPU as real-world examples throughout the book. Adds a new concrete example, "Going Faster," to demonstrate how understanding hardware can inspire software optimizations that improve performance by 200 times. Discusses and highlights the "Eight Great Ideas" of computer architecture: Performance via Parallelism; Performance via Pipelining; Performance via Prediction; Design for Moore's Law; Hierarchy of Memories; Abstraction to Simplify Design; Make the Common Case Fast; and Dependability via Redundancy. Includes a full set of updated and improved exercises.

154 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an approach that enables accurate prototyping of graphical user interface (GUI) via three tasks: detection, classification, and assembly, where logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mockup metadata.
Abstract: It is common practice for developers of user-facing software to transform a mock-up of a graphical user interface (GUI) into code. This process takes place both at an application's inception and in an evolutionary context as GUI changes keep pace with evolving features. Unfortunately, this practice is challenging and time-consuming. In this paper, we present an approach that automates this process by enabling accurate prototyping of GUIs via three tasks: detection, classification, and assembly. First, logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mock-up metadata. Then, software repository mining, automated dynamic analysis, and deep convolutional neural networks are utilized to accurately classify GUI-components into domain-specific types (e.g., toggle-button). Finally, a data-driven, K-nearest-neighbors algorithm generates a suitable hierarchical GUI structure from which a prototype application can be automatically assembled. We implemented this approach for Android in a system called ReDraw. Our evaluation illustrates that ReDraw achieves an average GUI-component classification accuracy of 91 percent and assembles prototype applications that closely mirror target mock-ups in terms of visual affinity while exhibiting reasonable code structure. Interviews with industrial practitioners illustrate ReDraw's potential to improve real development workflows.
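A toy sketch of the classification step on hand-made component features. In ReDraw itself the classifier is a deep convolutional network over component images and K-nearest-neighbors is used for hierarchy assembly; the features and labels here are purely hypothetical.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical features for detected GUI components: width, height, aspect ratio.
X_train = np.array([[40, 40, 1.0], [200, 30, 6.7], [300, 50, 6.0]])
y_train = ["toggle-button", "text-field", "button"]

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(clf.predict([[210, 32, 6.6]]))  # -> ['text-field']
```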

Journal ArticleDOI
TL;DR: An overview of the new features of the finite element library deal.II, version 9.2, is provided.
Abstract: Abstract This paper provides an overview of the new features of the finite element library deal.II, version 9.2.

Journal ArticleDOI
TL;DR: This work plans to develop an efficient approach for software defect prediction using soft-computing-based machine learning techniques, which help to optimize the features and learn them efficiently.
Abstract: Recent advancements in technology have increased the requirements of hardware and software applications. Along with this technical growth, software industries have also faced drastic growth in the demand for software for several applications. For any software industry, developing good-quality software and maintaining its quality for end users is considered one of the most important tasks for software industrial growth. In order to achieve this, software engineering plays an important role for software industries. Software applications are developed with the help of computer programming, where code is written for the desired task. Generally, this code contains some faulty instances which may lead to buggy software caused by software defects. In the field of software engineering, software defect prediction is considered one of the most important tasks, as it can be used for maintaining the quality of software. Defect prediction results provide the list of defect-prone source code artefacts so that quality assurance teams can effectively allocate limited resources for validating software products by putting more effort on the defect-prone source code. As the size of software projects becomes larger, defect prediction techniques will play an important role to support developers as well as to speed up time to market with more reliable software products. One of the most exhaustive and costly parts of embedded software development is the process of finding and fixing defects. Due to complex infrastructure, magnitude, cost and time limitations, monitoring and fulfilling quality is a big challenge, especially in automotive embedded systems. However, meeting superior product quality and reliability is mandatory. Hence, higher importance is given to V&V (Verification & Validation). Software testing is an integral part of software V&V, which is focused on ensuring accurate functionality and long-term reliability of software systems. At the same time, software testing requires as much effort, cost, infrastructure and expertise as development. The costs and effort are even higher for safety-critical software systems. Therefore, it is essential to have a good testing strategy for any industry with high software development costs. In this work, we plan to develop an efficient approach for software defect prediction using soft-computing-based machine learning techniques, which help to optimize the features and learn them efficiently.

Journal ArticleDOI
TL;DR: Proline is presented, a robust software suite for analysis of MS-based proteomics data, which collects, processes and allows visualization and publication of proteomics datasets, and its ease of use for various steps in the validation and quantification workflow, its data curation capabilities and its computational efficiency are illustrated.
Abstract: Motivation: The proteomics field requires the production and publication of reliable mass spectrometry-based identification and quantification results. Although many tools or algorithms exist, very few consider the importance of combining, in a unique software environment, efficient processing algorithms and a data management system to process and curate hundreds of datasets associated with a single proteomics study. Results: Here, we present Proline, a robust software suite for analysis of MS-based proteomics data, which collects, processes and allows visualization and publication of proteomics datasets. We illustrate its ease of use for various steps in the validation and quantification workflow, its data curation capabilities and its computational efficiency. The DDA label-free quantification workflow efficiency was assessed by comparing results obtained with Proline to those obtained with a widely used software package using a spiked-in sample. This assessment demonstrated Proline's ability to provide high quantification accuracy in a user-friendly interface for datasets of any size. Availability and implementation: Proline is available for Windows and Linux under CECILL open-source license. It can be deployed in client-server mode or in standalone mode at http://proline.profiproteomics.fr/#downloads. Supplementary information: Supplementary data are available at Bioinformatics online.

Journal ArticleDOI
Hongyang Jia, Hossein Valavi, Yinqi Tang, Jintao Zhang, Naveen Verma
TL;DR: This paper presents a programmable in-memory-computing processor, demonstrated in a 65nm CMOS technology, and takes the approach of tight coupling with an embedded CPU, through accelerator interfaces enabling integration in the standard processor memory space.
Abstract: In-memory computing (IMC) addresses the cost of accessing data from memory in a manner that introduces a tradeoff between energy/throughput and computation signal-to-noise ratio (SNR). However, low SNR posed a primary restriction to integrating IMC in larger, heterogeneous architectures required for practical workloads due to the challenges with creating robust abstractions necessary for the hardware and software stack. This work exploits recent progress in high-SNR IMC to achieve a programmable heterogeneous microprocessor architecture implemented in 65-nm CMOS and corresponding interfaces to the software that enables mapping of application workloads. The architecture consists of a 590-Kb IMC accelerator, configurable digital near-memory-computing (NMC) accelerator, RISC-V CPU, and other peripherals. To enable programmability, microarchitectural design of the IMC accelerator provides the integration in the standard processor memory space, area- and energy-efficient analog-to-digital conversion for interfacing to NMC, bit-scalable computation (1–8 b), and input-vector sparsity-proportional energy consumption. The IMC accelerator demonstrates excellent matching between computed outputs and idealized software-modeled outputs, at 1b-TOPS/W of 192|400 and 1b-TOPS/mm2 of 0.60|0.24 for MAC hardware, at VDD of 1.2|0.85 V, both of which scale directly with the bit precision of the input vector and matrix elements. Software libraries developed for application mapping are used to demonstrate CIFAR-10 image classification with a ten-layer CNN, achieving accuracy, throughput, and energy of 89.3%|92.4%, 176|23 images/s, and 5.31|105.2 μJ/image, for 1|4 b quantization levels.

Journal ArticleDOI
TL;DR: The open-source AiZynthFinder software that can be readily used in retrosynthetic planning is presented, based on a Monte Carlo tree search that recursively breaks down a molecule to purchasable precursors.
Abstract: We present the open-source AiZynthFinder software that can be readily used in retrosynthetic planning. The algorithm is based on a Monte Carlo tree search that recursively breaks down a molecule to purchasable precursors. The tree search is guided by an artificial neural network policy that suggests possible precursors by utilizing a library of known reaction templates. The software is fast and can typically find a solution in less than 10 s and perform a complete search in less than 1 min. Moreover, the development of the code was guided by a range of software engineering principles such as automatic testing, system design and continuous integration leading to robust software with high maintainability. Finally, the software is well documented to make it suitable for beginners. The software is available at http://www.github.com/MolecularAI/aizynthfinder .
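The recursive breakdown guided by a policy can be sketched as below; this is not the AiZynthFinder API, and plain strings stand in for molecules, stock checks and reaction templates.

```python
import random

def purchasable(mol):
    # Placeholder stock check; the real software queries a precursor database.
    return len(mol) <= 2

def policy(mol):
    # Placeholder for the neural-network policy over reaction templates:
    # returns candidate precursor sets for one retrosynthetic step.
    return [[mol[: len(mol) // 2], mol[len(mol) // 2:]]]

def solvable(mol, depth=0, max_depth=6):
    """Depth-limited recursive breakdown of a 'molecule' into purchasable
    precursors -- a greatly simplified stand-in for the tree search."""
    if purchasable(mol) or depth >= max_depth:
        return purchasable(mol)
    precursors = random.choice(policy(mol))
    return all(solvable(p, depth + 1, max_depth) for p in precursors)

print(solvable("CCOC(=O)C1=CC=CC=C1"))
```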

Journal ArticleDOI
TL;DR: This work proposes – in a holistic and structured way – four traits that characterize RPA, providing orientation as well as a focus for further research.
Abstract: Within digital transformation, which is continuously progressing, robotic process automation (RPA) is drawing much corporate attention. While RPA is a popular topic in the corporate world, the academic research lacks a theoretical and synoptic analysis of RPA. Conducting a literature review and tool analysis, we propose – in a holistic and structured way – four traits that characterize RPA, providing orientation as well as a focus for further research. Software robots automate processes originally performed by human workers. Thus, software robots follow a choreography of technological modules and control flow operators while operating within IT ecosystems and using established applications. Ease-of-use and adaptability allow companies to conceive and implement software robots through (agile) projects. Organizational and IT strategy, governance structures, and management systems therefore must address both the direct effects of software robots automating processes and their indirect impacts on firms.

Journal ArticleDOI
TL;DR: This paper proposes a novel blind zero-code-based watermark detection approach named KeySplitWatermark, for the protection of software against cyber-attacks, and shows that the proposed approach reports promising results against cyber-attacks that are powerful and viable.
Abstract: Cyber-attacks are evolving at a disturbing rate. Data breaches, ransomware attacks, crypto-jacking, malware and phishing attacks are now rampant. In this era of cyber warfare, the software industry is also growing with an increasing number of software being used in all domains of life. This evolution has added to the problems of software vendors and users where they have to prevent a wide range of attacks. Existing watermark detection solutions have a low detection rate in the software. In order to address this issue, this paper proposes a novel blind zero-code-based watermark detection approach named KeySplitWatermark, for the protection of software against cyber-attacks. The algorithm adds the watermark logically into the code, utilizing the inherent properties of the code, and gives a robust solution. The embedding algorithm uses keywords to make segments of the code to produce a key dependent on the watermark. The extraction algorithms use this key to remove the watermark and detect tampering. When tampering increases to a certain user-defined threshold, the original software code is restored, making it resilient against attacks. KeySplitWatermark is evaluated on tampering attacks on three unique samples with two distinct watermarks. The outcomes show that the proposed approach reports promising results against cyber-attacks that are powerful and viable. We compared the performance of our proposal with state-of-the-art works using two different software codes. Our results show that KeySplitWatermark correctly detects watermarks, resulting in up to 15.95 and 17.43 percent reduction in execution time on given code samples with no increase in program size and independent of watermark size.
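A rough sketch of keyword-based segmentation and a watermark-dependent key, in the spirit of what the abstract describes; the keyword set, key construction and tamper-threshold logic of the actual scheme are not reproduced here.

```python
import hashlib

KEYWORDS = {"if", "for", "while", "return"}  # illustrative subset only

def split_segments(code):
    """Split tokenised source code into segments at language keywords."""
    segments, current = [], []
    for token in code.split():
        current.append(token)
        if token in KEYWORDS:
            segments.append(" ".join(current))
            current = []
    if current:
        segments.append(" ".join(current))
    return segments

def derive_key(segments, watermark):
    # A key that depends on both the code segments and the watermark.
    return hashlib.sha256((watermark + "|".join(segments)).encode()).hexdigest()

segments = split_segments("int x = 0 ; if ( x > 0 ) return x ;")
print(len(segments), derive_key(segments, "owner-42")[:16])
```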

Journal ArticleDOI
TL;DR: It is concluded that model-agnostic techniques are needed to explain individual predictions of defect models, and that more than half of the practitioners perceive that the contrastive explanations are necessary and useful to understand the predictions of defect models.
Abstract: Software analytics have empowered software organisations to support a wide range of improved decision-making and policy-making. However, such predictions made by software analytics to date have not been explained and justified. Specifically, current defect prediction models still fail to explain why models make such a prediction and fail to uphold the privacy laws in terms of the requirement to explain any decision made by an algorithm. In this paper, we empirically evaluate three model-agnostic techniques: two state-of-the-art techniques, Local Interpretable Model-agnostic Explanations (LIME) and BreakDown, and our improvement of LIME with Hyper Parameter Optimisation (LIME-HPO). Through a case study of 32 highly-curated defect datasets that span across 9 open-source software systems, we conclude that (1) model-agnostic techniques are needed to explain individual predictions of defect models; (2) instance explanations generated by model-agnostic techniques are mostly overlapping (but not exactly the same) with the global explanation of defect models and reliable when they are re-generated; (3) model-agnostic techniques take less than a minute to generate instance explanations; and (4) more than half of the practitioners perceive that the contrastive explanations are necessary and useful to understand the predictions of defect models. Since the implementation of the studied model-agnostic techniques is available in both Python and R, we recommend model-agnostic techniques be used in the future.
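An instance explanation of the kind the study evaluates can be produced with the open-source lime package roughly as follows; the data and model are synthetic placeholders, and the BreakDown and LIME-HPO variants studied in the paper are not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for a defect dataset: rows are modules, columns are metrics.
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = (X[:, 0] + X[:, 3] > 1).astype(int)
features = ["loc", "cc", "churn", "age", "ndev"]

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=features,
                                 class_names=["clean", "defective"],
                                 mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())  # per-feature contributions for this one prediction
```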

Journal ArticleDOI
14 Oct 2020-Nature
TL;DR: This study proposes 'neuromorphic completeness', which relaxes the requirement for hardware completeness, and proposes a corresponding system hierarchy, which consists of a Turing-complete software-abstraction model and a versatile abstract neuromorphic architecture.
Abstract: Neuromorphic computing draws inspiration from the brain to provide computing technology and architecture with the potential to drive the next wave of computer engineering1-13. Such brain-inspired computing also provides a promising platform for the development of artificial general intelligence14,15. However, unlike conventional computing systems, which have a well established computer hierarchy built around the concept of Turing completeness and the von Neumann architecture16-18, there is currently no generalized system hierarchy or understanding of completeness for brain-inspired computing. This affects the compatibility between software and hardware, impairing the programming flexibility and development productivity of brain-inspired computing. Here we propose 'neuromorphic completeness', which relaxes the requirement for hardware completeness, and a corresponding system hierarchy, which consists of a Turing-complete software-abstraction model and a versatile abstract neuromorphic architecture. Using this hierarchy, various programs can be described as uniform representations and transformed into the equivalent executable on any neuromorphic complete hardware-that is, it ensures programming-language portability, hardware completeness and compilation feasibility. We implement toolchain software to support the execution of different types of program on various typical hardware platforms, demonstrating the advantage of our system hierarchy, including a new system-design dimension introduced by the neuromorphic completeness. We expect that our study will enable efficient and compatible progress in all aspects of brain-inspired computing systems, facilitating the development of various applications, including artificial general intelligence.

Journal ArticleDOI
TL;DR: In this paper, a many-objective search-based approach using NSGA-III is proposed to find the optimal remodularization solutions that improve the structure of packages, minimize the number of changes, preserve semantics coherence, and re-use the history of changes.
Abstract: Software systems nowadays are complex and difficult to maintain due to continuous changes and bad design choices. To handle the complexity of systems, software products are, in general, decomposed in terms of packages/modules containing classes that are dependent. However, it is challenging to automatically remodularize systems to improve their maintainability. The majority of existing remodularization work mainly satisfies one objective, which is improving the structure of packages by optimizing coupling and cohesion. In addition, most existing studies are limited to only a few operation types, such as move class and split package. Many other objectives, such as the design semantics, reducing the number of changes and maximizing the consistency with the development change history, are important to improve the quality of the software by remodularizing it. In this paper, we propose a novel many-objective search-based approach using NSGA-III. The process aims at finding the optimal remodularization solutions that improve the structure of packages, minimize the number of changes, preserve semantics coherence, and re-use the history of changes. We evaluate the efficiency of our approach using four different open-source systems and one automotive industry project, provided by our industrial partner, through a quantitative and qualitative study conducted with software engineers.
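Two of the objectives named above (package structure and number of changes) can be made concrete as below; such objective vectors would then be handed to a many-objective optimizer such as NSGA-III. The dependency and assignment data are hypothetical, and the semantics and history objectives are omitted.

```python
def objectives(assignment, deps, original):
    """Score a candidate remodularization. `assignment` maps classes to
    packages, `deps` is a set of (caller, callee) pairs, and `original`
    is the current class-to-package assignment."""
    inter = sum(1 for a, b in deps if assignment[a] != assignment[b])
    intra = len(deps) - inter
    coupling = inter                     # minimize inter-package dependencies
    cohesion = -intra                    # minimize the negative => maximize intra
    changes = sum(1 for c in assignment if assignment[c] != original[c])
    return coupling, cohesion, changes

deps = {("A", "B"), ("B", "C"), ("C", "D")}
original = {"A": "p1", "B": "p1", "C": "p2", "D": "p2"}
candidate = {"A": "p1", "B": "p1", "C": "p1", "D": "p2"}
print(objectives(candidate, deps, original))  # (1, -2, 1)
```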

Journal ArticleDOI
TL;DR: An original hardware/software architecture, the Motion Analysis System (MAS), aimed at digitalizing and analyzing the human body during the execution of manufacturing/assembly tasks within a common industrial workstation, is presented.

Journal ArticleDOI
TL;DR: Flash as mentioned in this paper is a sequential model-based method that sequentially explores the configuration space by reflecting on the configurations evaluated so far to determine the next best configuration to explore, which reduces the effort required to find the better configuration.
Abstract: Finding good configurations of a software system is often challenging since the number of configuration options can be large. Software engineers often make poor choices about configuration or, even worse, they usually use a sub-optimal configuration in production, which leads to inadequate performance. To assist engineers in finding the better configuration, this article introduces Flash , a sequential model-based method that sequentially explores the configuration space by reflecting on the configurations evaluated so far to determine the next best configuration to explore. Flash scales up to software systems that defeat the prior state-of-the-art model-based methods in this area. Flash runs much faster than existing methods and can solve both single-objective and multi-objective optimization problems. The central insight of this article is to use the prior knowledge of the configuration space (gained from prior runs) to choose the next promising configuration. This strategy reduces the effort (i.e., number of measurements) required to find the better configuration. We evaluate Flash using 30 scenarios based on 7 software systems to demonstrate that Flash saves effort in 100 and 80 percent of cases in single-objective and multi-objective problems respectively by up to several orders of magnitude compared to state-of-the-art techniques.
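A Flash-style sequential loop can be sketched as follows: a cheap surrogate model is refit on the configurations measured so far and picks the next one to evaluate. The option encoding, benchmark function and surrogate choice here are illustrative, not the paper's setup.

```python
import random
from sklearn.tree import DecisionTreeRegressor

def measure(config):
    # Placeholder for one expensive benchmark run of a configuration.
    return sum(v * w for v, w in zip(config, (3.0, -1.0, 2.0, 0.5)))

random.seed(0)
pool = [[random.randint(0, 1) for _ in range(4)] for _ in range(200)]
evaluated = [(c, measure(c)) for c in [pool.pop() for _ in range(10)]]

for _ in range(20):                                  # sequential model-based loop
    X, y = zip(*evaluated)
    surrogate = DecisionTreeRegressor().fit(X, y)    # cheap model of the space
    best = min(pool, key=lambda c: surrogate.predict([c])[0])
    pool.remove(best)
    evaluated.append((best, measure(best)))          # only this config is measured

print(min(evaluated, key=lambda item: item[1]))
```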

Posted Content
Jianjun Zhao1
TL;DR: The survey summarizes the technology available in the various phases of the quantum software life cycle, including quantum software requirements analysis, design, implementation, test, and maintenance and covers the crucial issue of quantum software reuse.
Abstract: Quantum software plays a critical role in exploiting the full potential of quantum computing systems. As a result, it is drawing increasing attention recently. This paper defines the term "quantum software engineering" and introduces a quantum software life cycle. Based on these, the paper provides a comprehensive survey of the current state of the art in the field and presents the challenges and opportunities that we face. The survey summarizes the technology available in the various phases of the quantum software life cycle, including quantum software requirements analysis, design, implementation, test, and maintenance. It also covers the crucial issue of quantum software reuse.

Journal ArticleDOI
TL;DR: This work proposes a novel approach that leverages deep learning techniques to predict the number of defects in software systems by preprocessing a publicly available dataset and passing the modeled data to a specially designed deep neural network-based model.
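A minimal sketch in the spirit of that approach: a small neural-network regressor trained on module metrics to predict defect counts. The data is synthetic and the network is far shallower than what the paper proposes.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for a public defect dataset: features are code metrics,
# the target approximates the number of defects per module.
X, y = make_regression(n_samples=400, n_features=12, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out modules:", round(net.score(X_te, y_te), 3))
```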

Posted Content
TL;DR: This work quantifies the carbon output of computer systems to show that most emissions related to modern mobile and data-center equipment come from hardware manufacturing and infrastructure, and outlines future directions for minimizing the environmental impact of computing systems.
Abstract: Given recent algorithm, software, and hardware innovation, computing has enabled a plethora of new applications. As computing becomes increasingly ubiquitous, however, so does its environmental impact. This paper brings the issue to the attention of computer-systems researchers. Our analysis, built on industry-reported characterization, quantifies the environmental effects of computing in terms of carbon emissions. Broadly, carbon emissions have two sources: operational energy consumption, and hardware manufacturing and infrastructure. Although carbon emissions from the former are decreasing thanks to algorithmic, software, and hardware innovations that boost performance and power efficiency, the overall carbon footprint of computer systems continues to grow. This work quantifies the carbon output of computer systems to show that most emissions related to modern mobile and data-center equipment come from hardware manufacturing and infrastructure. We therefore outline future directions for minimizing the environmental impact of computing systems.
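The operational-versus-embodied split discussed above amounts to simple bookkeeping; the numbers below are invented solely to show the arithmetic, not taken from the paper.

```python
# Back-of-the-envelope carbon accounting for a hypothetical device (kg CO2e).
years = 4
annual_energy_kwh = 10            # operational electricity use per year
grid_intensity = 0.4              # kg CO2e per kWh of electricity
embodied = 60                     # manufacturing and infrastructure share

operational = years * annual_energy_kwh * grid_intensity   # 16 kg CO2e
total = operational + embodied
print(f"embodied fraction: {embodied / total:.0%}")        # ~79% in this example
```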

Journal ArticleDOI
04 Feb 2020
TL;DR: XACC as discussed by the authors is a system-level software infrastructure for quantum-classical computing that promotes a service-oriented architecture to expose interfaces for core quantum programming, compilation, and execution tasks.
Abstract: Quantum programming techniques and software have advanced significantly over the past five years, with a majority focusing on high-level language frameworks targeting remote REST library APIs. As quantum computing architectures advance and become more widely available, lower-level, system software infrastructures will be needed to enable tighter, co-processor programming and access models. Here we present XACC, a system-level software infrastructure for quantum-classical computing that promotes a service-oriented architecture to expose interfaces for core quantum programming, compilation, and execution tasks. We detail XACC's interfaces, their interactions, and its implementation as a hardware-agnostic framework for both near-term and future quantum-classical architectures. We provide concrete examples demonstrating the utility of this framework with paradigmatic tasks. Our approach lays the foundation for the development of compilers, associated runtimes, and low-level system tools tightly integrating quantum and classical workflows.

Journal ArticleDOI
TL;DR: The data reduction pipeline of the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph operated at ESO's Paranal observatory is described and how the raw science data gets transformed into calibrated datacubes is explained.
Abstract: Processing of raw data from modern astronomical instruments is nowadays often carried out using dedicated software, so-called "pipelines" which are largely run in automated operation. In this paper we describe the data reduction pipeline of the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph operated at ESO's Paranal observatory. This spectrograph is a complex machine: it records data of 1152 separate spatial elements on detectors in its 24 integral field units. Efficiently handling such data requires sophisticated software, a high degree of automation and parallelization. We describe the algorithms of all processing steps that operate on calibrations and science data in detail, and explain how the raw science data gets transformed into calibrated datacubes. We finally check the quality of selected procedures and output data products, and demonstrate that the pipeline provides datacubes ready for scientific analysis.

Journal ArticleDOI
01 Feb 2020
TL;DR: The Activity Browser provides a graphical user interface (GUI) to the brightway LCA framework and makes common tasks such as managing projects and databases, modeling life cycle inventories, and analyzing LCA results easier and more intuitive.
Abstract: The Activity Browser is an open source software for advanced Life Cycle Assessment (LCA). The Activity Browser provides a graphical user interface (GUI) to the brightway LCA framework and makes common tasks such as managing projects and databases, modeling life cycle inventories, and analyzing LCA results easier and more intuitive. In addition, it provides advanced features for LCA modeling and data analyses and thus facilitates state-of-the-art LCA research. It can be extended to implement novel LCA modeling approaches and analyses as needed.
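Since the Activity Browser sits on top of the brightway framework, the calculation it drives looks roughly like the following; the project, database and method names are placeholders and assume an already configured brightway2 setup.

```python
import brightway2 as bw

bw.projects.set_current("my_project")          # placeholder project name
db = bw.Database("my_foreground")              # placeholder inventory database
activity = db.random()                         # any activity as the functional unit
method = ("IPCC 2013", "climate change", "GWP 100a")  # example LCIA method key

lca = bw.LCA({activity: 1.0}, method)
lca.lci()       # build and solve the life cycle inventory
lca.lcia()      # apply the characterisation factors
print(lca.score)
```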

Proceedings ArticleDOI
30 May 2020
TL;DR: The insight is that many prior accelerator architectures can be approximated by composing a small number of hardware primitives, specifically those from spatial architectures; this insight is used to develop the DSAGEN framework, which automates the hardware/software co-design process for reconfigurable accelerators.
Abstract: Domain-specific hardware accelerators can provide orders of magnitude speedup and energy efficiency over general purpose processors. However, they require extensive manual effort in hardware design and software stack development. Automated ASIC generation (e.g., HLS) can be insufficient, because the hardware becomes inflexible. An ideal accelerator generation framework would be automatable, enable deep specialization to the domain, and maintain a uniform programming interface. Our insight is that many prior accelerator architectures can be approximated by composing a small number of hardware primitives, specifically those from spatial architectures. With careful design, a compiler can understand how to use the available primitives, with modular and composable transformations, to take advantage of the features of a given program. This suggests a paradigm where accelerators can be generated by searching within such a rich accelerator design space, guided by the affinity of input programs for hardware primitives and their interactions. We use this approach to develop the DSAGEN framework, which automates the hardware/software co-design process for reconfigurable accelerators. For several existing accelerators, our evaluation demonstrates that the compiler can achieve 89% of the performance of manually tuned versions. For automated design space exploration, we target multiple sets of workloads which prior accelerators are designed for; the generated hardware has a mean 1.3x perf^2/mm^2 advantage over prior programmable accelerators.