
Showing papers in "ACM Sigsoft Software Engineering Notes in 2019"


Journal ArticleDOI
TL;DR: With "grey literature" the authors identify materials and research produced outside of the traditional academic publishing and distribution channels, which varies considerably in type, quality and publication means.
Abstract: With "grey literature" we identify materials and research produced outside of the traditional academic publishing and distribution channels. Currently available grey literature spans from industrial whitepapers and technical reports to blog posts and videos published on the web. It hence varies considerably in type, quality and publication means, even if the common point is that no independent peer review occurs before the content is made available to the public.

43 citations


Journal ArticleDOI
TL;DR: This paper reports on the results of the 6th International Workshop on Engineering Multi-Agent Systems (EMAS 2018), where participants discussed the issues above focusing on the state of affairs and the road ahead for researchers and engineers in this area.
Abstract: The continuous integration of software-intensive systems together with the ever-increasing computing power offer a breeding ground for intelligent agents and multi-agent systems (MAS) more than ever before. Over the past two decades, a wide variety of languages, models, techniques and methodologies have been proposed to engineer agents and MAS. Despite this substantial body of knowledge and expertise, the systematic engineering of large-scale and open MAS still poses many challenges. Researchers and engineers still face fundamental questions regarding theories, architectures, languages, processes, and platforms for designing, implementing, running, maintaining, and evolving MAS. This paper reports on the results of the 6th International Workshop on Engineering Multi-Agent Systems (EMAS 2018, 14th-15th of July, 2018, Stockholm, Sweden), where participants discussed the issues above, focusing on the state of affairs and the road ahead for researchers and engineers in this area.

33 citations


Journal ArticleDOI
TL;DR: Saffron is proposed, an adaptive grammar-based fuzzing approach to effectively and efficiently generate inputs that expose expensive executions in programs and outperforms state-of-the-art baselines.
Abstract: Fuzz testing has been gaining ground recently, with substantial efforts devoted to the area. Typically, fuzzers take a set of seed inputs and leverage random mutations to continually improve the inputs with respect to a cost, e.g. program code coverage, to discover vulnerabilities or bugs. Following this methodology, fuzzers are very good at generating unstructured inputs that achieve high coverage. However, fuzzers are less effective when the inputs are structured, say they conform to an input grammar. Due to the nature of random mutations, the overwhelming abundance of inputs generated by this common fuzzing practice often hinders the effectiveness and efficiency of fuzzers on grammar-aware applications. The problem of testing becomes even harder when the goal is not only to achieve increased code coverage, but also to find complex vulnerabilities related to other cost measures, say high resource consumption in an application. We propose Saffron, an adaptive grammar-based fuzzing approach to effectively and efficiently generate inputs that expose expensive executions in programs. Saffron takes as input a user-provided grammar, which describes the input space of the program under analysis, and uses it to generate test inputs. Saffron assumes that the grammar description is approximate, since precisely describing the input space of a program is often difficult: a program may accept unintended inputs due to, e.g., errors in parsing. Yet these inputs may reveal worst-case complexity vulnerabilities. The novelty of Saffron is then twofold: (1) Given the user-provided grammar, Saffron attempts to discover whether the program accepts unexpected inputs outside of the provided grammar, and if so, it repairs the grammar via grammar mutations. The repaired grammar serves as a specification of the actual inputs accepted by the application. (2) Based on the refined grammar, it generates concrete test inputs.
It starts by treating every production rule in the grammar with equal probability of being used for generating concrete inputs. It then adaptively refines the probabilities along the way by increasing the probabilities of rules that have been used to generate inputs that improve a cost, e.g., code coverage or an arbitrary user-defined cost. Evaluation results show that Saffron significantly outperforms state-of-the-art baselines.
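The adaptive step described above (start with uniform rule probabilities, then boost rules that produced cost-improving inputs) can be sketched roughly as follows; the toy grammar, the multiplicative weight update, and the length-based cost are illustrative assumptions, not Saffron's actual implementation:

```python
import random

# Toy grammar: each nonterminal maps to alternative expansions,
# with one selection weight per alternative (initially equal).
GRAMMAR = {
    "<expr>": [["<num>"], ["<expr>", "+", "<num>"], ["(", "<expr>", ")"]],
    "<num>":  [["0"], ["1"], ["2"]],
}
weights = {nt: [1.0] * len(alts) for nt, alts in GRAMMAR.items()}

def generate(symbol="<expr>", depth=0):
    """Expand a symbol, recording which (nonterminal, rule) pairs were used."""
    if symbol not in GRAMMAR:                 # terminal
        return symbol, []
    if depth > 8:                             # depth cap: fall back to a literal
        return "0", []
    idx = random.choices(range(len(GRAMMAR[symbol])), weights[symbol])[0]
    out, used = "", [(symbol, idx)]
    for s in GRAMMAR[symbol][idx]:
        text, sub = generate(s, depth + 1)
        out += text
        used += sub
    return out, used

def fuzz(cost, rounds=200):
    """Boost the weights of rules that generated cost-improving inputs."""
    best = float("-inf")
    for _ in range(rounds):
        inp, used = generate()
        c = cost(inp)
        if c > best:
            best = c
            for nt, idx in used:
                weights[nt][idx] *= 1.5       # reward productive rules
    return best

# Illustrative cost: input length stands in for "more expensive" executions.
print(fuzz(len))
```

Over the rounds, the recursive `<expr>` rules accumulate weight because they generate longer (here: "costlier") inputs, which is the adaptive refinement the abstract describes.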

15 citations


Journal ArticleDOI
TL;DR: This research proposal investigates the extent to which code analysis tools can be used as a step towards continuous security inspection in software engineering projects, which could greatly reduce the cost of fixing flaws and help build more secure software.
Abstract: Security is a non-functional requirement that is difficult to handle during software development. However, it appears to be common in software engineering that security is taken care of during the design and test phases only. If security is neglected during the implementation phase, flaws will be introduced. Those may be found (if at all) during testing, where the cost to fix is higher than if they were found during the implementation phase. Hence, this research proposal suggests investigating the extent to which code analysis tools can be used as a step towards continuous security inspection in software engineering projects. By automating security testing during development, flaws can be found as soon as they are introduced. This could greatly reduce the cost of fixing flaws and help build more secure software.

10 citations


Journal ArticleDOI
TL;DR: This paper reports on the results of the joint 5th International Workshop on Rapid Continuous Software Engineering (RCoSE 2019) and the 1st International workshop on Data-Driven Decisions, Experimentation and Evolution (DDrEE 2019), which focuses on the challenges and potential solutions in the area of continuous data-driven software engineering.
Abstract: The rapid pace with which software needs to be built, together with the increasing need to evaluate changes for end users both quantitatively and qualitatively calls for novel software engineering approaches that focus on short release cycles, continuous deployment and delivery, experiment-driven feature development, feedback from users, and rapid tool-assisted feedback to developers. To realize these approaches there is a need for research and innovation with respect to automation and tooling, and furthermore for research into the organizational changes that support flexible data-driven decision-making in the development lifecycle. Most importantly, deep synergies are needed between software engineers, managers, and data scientists. This paper reports on the results of the joint 5th International Workshop on Rapid Continuous Software Engineering (RCoSE 2019) and the 1st International Workshop on Data-Driven Decisions, Experimentation and Evolution (DDrEE 2019), which focuses on the challenges and potential solutions in the area of continuous data-driven software engineering.

8 citations


Journal ArticleDOI
TL;DR: This paper presents an incremental approach to attack synthesis that reuses model counting results from prior iterations in each attack step to improve efficiency and drastically improves performance, reducing the attack synthesis time by an order of magnitude.
Abstract: Information leakage is a significant problem in modern software systems. Information leaks due to side channels are especially hard to detect and analyze. In recent years, techniques have been developed for automated synthesis of adaptive side-channel attacks that recover secret values by iteratively generating inputs to reveal partial information about the secret based on the side-channel observations. Prominent approaches to attack synthesis use symbolic execution, model counting, and meta-heuristics to maximize information gain. These approaches could benefit from reusing results from prior steps in each step. In this paper, we present an incremental approach to attack synthesis that reuses model counting results from prior iterations in each attack step to improve efficiency. Experimental evaluation demonstrates that our approach drastically improves performance, reducing the attack synthesis time by an order of magnitude.

6 citations


Journal ArticleDOI
TL;DR: Software development is prone to software faults due to the involvement of multiple stakeholders especially during the fuzzy phases (requirements and design).
Abstract: Software development is prone to software faults due to the involvement of multiple stakeholders, especially during the fuzzy phases (requirements and design). Software inspections are commonly used in industry to detect and fix problems in requirements and design artifacts, thereby mitigating fault propagation to later phases. Requirements documented in natural language (NL) are prone to contain faults because of the different vocabularies among stakeholders. This research applies NL processing with semantic analysis (SA) and mining solutions from graph theory to NL requirements, in order to develop inter-related requirements (IRRs) that can help identify requirements that may need similar fixes. Additionally, our approach aims at pointing requirements engineers to fault-prone regions both pre- and post-inspection. Pre-inspection, our approach using IRRs helps remove redundant and extraneous faults within related requirements, while post-inspection it aids engineers in analysing the impact of a change in one requirement on other related requirements. Thus, this research aims at developing a graph of inter-related requirements, using natural language processing and semantic analysis approaches on a given requirements document, that can be used to aid various decisions pre- and post-inspection.

6 citations


Journal ArticleDOI
TL;DR: The insights from the workshop highlight a number of interesting directions for research on the interplay between software warranties and cybersecurity.
Abstract: This workshop focused on bringing software developers and legal professionals together to understand the shared challenges they face in promoting the development of secure software on the one hand, and software at all on the other. This report summarizes current scientific research on the topics and challenges discussed in the workshop breakout sessions. The insights from the workshop highlight a number of interesting directions for further research on the interplay between software warranties and cybersecurity.

6 citations


Journal ArticleDOI
TL;DR: This work uses symbolic execution to extract path constraints, automata-based model counting to estimate the probability of execution paths, and meta-heuristic methods to maximize information gain based on entropy for synthesizing adaptive attack steps.
Abstract: Information leaks are a significant problem in modern computer systems and string manipulation is prevalent in modern software. We present techniques for automated synthesis of side-channel attacks that recover secret string values based on timing observations on string manipulating code. Our attack synthesis techniques iteratively generate inputs which, when fed to code that accesses the secret, reveal partial information about the secret based on the timing observations, leading to recovery of the secret at the end of the attack sequence. We use symbolic execution to extract path constraints, automata-based model counting to estimate the probability of execution paths, and meta-heuristic methods to maximize information gain based on entropy for synthesizing adaptive attack steps.
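The attack loop described above (generate an input, observe the side channel, prune the secrets consistent with the observation, repeat with the most informative input) can be sketched as follows; the prefix-comparison timing model, the small candidate secret set, and the greedy entropy criterion are simplifying assumptions, with explicit enumeration standing in for symbolic execution and model counting:

```python
import math
from collections import Counter

# Toy timing model: the "observation" for a guess is the length of the
# common prefix with the secret, standing in for early-exit string
# comparison timing.
def observation(secret, guess):
    n = 0
    for a, b in zip(secret, guess):
        if a != b:
            break
        n += 1
    return n

def expected_info_gain(candidates, guess):
    """Entropy of the partition that `guess` induces on the secret set.
    In the paper's setting the partition sizes come from model counting;
    here we enumerate the toy domain directly."""
    counts = Counter(observation(s, guess) for s in candidates)
    total = len(candidates)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def attack(candidates, secret, guesses):
    """Greedy adaptive attack: repeatedly pick the most informative guess."""
    while len(candidates) > 1:
        best = max(guesses, key=lambda g: expected_info_gain(candidates, g))
        obs = observation(secret, best)
        remaining = [s for s in candidates if observation(s, best) == obs]
        if len(remaining) == len(candidates):
            break                      # no guess distinguishes further
        candidates = remaining
    return candidates[0]

secrets = ["abc", "abd", "axy", "bcd"]
print(attack(secrets, "abd", guesses=secrets))
```

Each iteration maximizes expected information gain (entropy) over the remaining candidate secrets, mirroring the meta-heuristic objective in the abstract.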

6 citations


Journal ArticleDOI
TL;DR: This paper maps the discussions and results of the Fourth International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS 2018), which focuses on challenges and promising solutions in the area of software engineering for sCPS.
Abstract: Smart Cyber-Physical Systems (sCPS) are a novel kind of Cyber-Physical System engineered to take advantage of large-scale cooperation between devices, users and environment to achieve added value in the face of uncertainty and changing environments. Examples of sCPS include modern traffic systems, Industry 4.0 systems, systems for smart buildings, and smart energy grids. The uniting aspect of all these systems is that to achieve their high level of intelligence, adaptivity and ability to optimize and learn, they rely heavily on software. This makes them software-intensive systems, where software becomes their most complex part. Engineering sCPS thus becomes a recognized software engineering discipline, which, due to the specifics of sCPS, can only partially rely on the existing body of knowledge in software engineering. In fact, it turns out that many of the traditional approaches to architecture modeling and software development fall short in their ability to cope with the high dynamicity and uncertainty of sCPS. This calls for innovative approaches that jointly reflect and address the specifics of such systems. This paper maps the discussions and results of the Fourth International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS 2018), which focuses on challenges and promising solutions in the area of software engineering for sCPS.

5 citations


Journal ArticleDOI
TL;DR: A prototype, called STARFIX, is developed to repair invalid Java data structures violating given specifications in separation logic; preliminary results show that the tool can efficiently detect and repair inconsistent data structures including lists and trees.
Abstract: Software systems are often shipped and deployed with both known and unknown bugs. On-the-fly program repairs, which handle runtime errors and allow programs to continue successfully, can help software reliability, e.g., by dealing with inconsistent or corrupted data without interrupting the running program. We report on our work in progress that repairs data structures using separation logic. Our technique, inspired by existing work on specification-based repair, takes as input a specification written as a separation logic formula and a concrete data structure that fails that specification, and performs on-the-fly repair to make the data conform to the specification. The use of separation logic allows us to compactly and precisely represent desired properties of data structures and to use existing analyses in separation logic to detect and repair bugs in complex data structures. We have developed a prototype, called STARFIX, to repair invalid Java data structures violating given specifications in separation logic. Preliminary results show that the tool can efficiently detect and repair inconsistent data structures including lists and trees.
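The general shape of specification-based, on-the-fly repair (check a structure against a specification; if it fails, mutate pointers until it conforms) can be illustrated without the separation-logic machinery. In this sketch the "specification" is simply an acyclicity invariant for a linked list, checked and repaired directly; it is a hand-rolled illustration, not STARFIX's algorithm:

```python
# Illustrative specification-based repair for a singly-linked list: the
# "specification" here is an acyclicity invariant checked directly,
# standing in for the separation-logic formulas used by the tool.

class Node:
    def __init__(self, val):
        self.val, self.next = val, None

def check_acyclic(head):
    """Floyd's cycle detection: True iff the list satisfies the spec."""
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            return False
    return True

def repair(head):
    """If a cycle exists, cut the back-edge so the structure conforms
    to the acyclicity specification again."""
    seen, node = set(), head
    while node and node.next:
        seen.add(id(node))
        if id(node.next) in seen:
            node.next = None           # remove the offending pointer
            break
        node = node.next
    return head

a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, a       # corrupted: c points back to a
repair(a)
print(check_acyclic(a))                # True after repair
```

The repair happens in place on the live structure, which is the "on-the-fly" aspect: the program could continue running with the now-consistent list.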

Journal ArticleDOI
TL;DR: A set of guidelines to help SE researchers to conduct a Grey Literature Review (GLR) that is more in line with practitioners' needs are proposed and evaluated.
Abstract: Context: In recent years, diverse research areas have increased their interest in Grey Literature (GL). In Software Engineering (SE), practitioners have become heavy consumers of GL, in contrast to traditional research papers. Problem: Despite the increase in published Systematic Literature Reviews (SLRs), researchers point to their lack of connection to practice. Goal: Propose and evaluate a set of guidelines to help SE researchers conduct a Grey Literature Review (GLR) that is more in line with practitioners' needs. Method: First, we are conducting a tertiary study to understand how secondary studies use GL. Second, we plan to employ qualitative research with researchers of SLRs and SE practitioners. Third, we plan to review and analyze the use of GL sources according to the context in SE. Fourth, we plan to conduct a GLR. Finally, we plan to apply and evaluate our guidelines. Preliminary Results: The tertiary study retrieved a total of 14,043 papers. We removed duplicate studies and those that were not peer-reviewed articles. Currently, we are resolving disagreements in the selection process. Conclusions: We present preliminary findings, show our proposed approach and the next steps.

Journal ArticleDOI
TL;DR: One particular topic dominated the discussion - the resurgence of artificial intelligence and machine learning algorithms in software engineering research and industry practice, and its implications for the collaboration between these two communities are presented.
Abstract: Challenges of implementing successful research collaborations between industry and academia in software engineering are varied and many. Differing timelines, metrics, expectations, and perceptions of these two communities are some common obstacles, which need to be analyzed and discussed to discover synergies and strengthen collaborations between researchers and practitioners. In this report, we present insights from the 6th International Workshop on Software Engineering Research and Industrial Practice, held at the International Conference on Software Engineering 2019. One particular topic dominated the discussion: the resurgence of artificial intelligence and machine learning algorithms in software engineering research and industry practice, and its implications for the collaboration between these two communities. We present takeaways from keynote talks on this subject, insights from paper presentations, and findings from the discussion session.

Journal ArticleDOI
TL;DR: A detailed description of the rules for benchmark verification tasks, the integration of new tools into SV-COMP's benchmarking framework and experimental results of a benchmarking run on three state-of-the-art Java verification tools are given.
Abstract: Empirical evaluation of verification tools by benchmarking is a common method in software verification research. The Competition on Software Verification (SV-COMP) aims at standardization and reproducibility of benchmarking within the software verification community on an annual basis, through comparative evaluation of fully automatic software verifiers for C programs. Building upon this success, we describe here how to reuse the ecosystem developed around SV-COMP for benchmarking Java verification tools. We provide a detailed description of the rules for benchmark verification tasks and the integration of new tools into SV-COMP's benchmarking framework, and we give experimental results of a benchmarking run on three state-of-the-art Java verification tools: JPF-SE, JayHorn and JBMC.

Journal ArticleDOI
TL;DR: In this paper, the authors define a methodology for applying observational studies in empirical software engineering, providing guidelines on how to conduct such studies, how to analyze the data, and how to report the studies themselves.
Abstract: Background: Starting from the 1960s, practitioners and researchers have looked for ways to empirically investigate new technologies, such as inspecting the effectiveness of new methods, tools, or practices. With this purpose, the empirical software engineering domain started to identify different empirical methods, borrowing them from various domains such as medicine, biology, and psychology. Nowadays, a variety of empirical methods are commonly applied in software engineering, ranging from controlled and quasi-controlled experiments to case studies, from systematic literature reviews to the newly introduced multivocal literature reviews. However, to date, the only available method for proving a cause-effect relationship is the controlled experiment. Objectives: The goal of the thesis is to introduce new methodologies for studying causality in empirical software engineering. Methods: Other fields use observational studies for proving causality. They allow observing the effect of a risk factor and testing this without trying to change who is or is not exposed to it. As an example, with an observational study it is possible to observe the effect of pollution on the growth of a forest, or the effect of different factors on development productivity, without needing to wait years for the forest to grow or to expose developers to a specific treatment. Conclusion: In this thesis, we aim at defining a methodology for applying observational studies in empirical software engineering, providing guidelines on how to conduct such studies, how to analyze the data, and how to report the studies themselves.

Journal ArticleDOI
TL;DR: The idea of shadow symbolic execution (SSE) is adapted, combining complete/standard symbolic execution with four-way forking to expose diverging behavior and to comprehensively test the new behaviors introduced by a change.
Abstract: Regression testing ensures the correctness of the software during its evolution, with special attention to the absence of unintended side-effects that might be introduced by changes. However, the manual creation of regression test cases which expose divergent behavior requires a lot of effort. A solution is the idea of shadow symbolic execution, which takes a unified version of the old and the new programs and performs symbolic execution guided by concrete values to explore the changed behavior. In this work, we adapt the idea of shadow symbolic execution (SSE) and combine complete/standard symbolic execution with the idea of four-way forking to expose diverging behavior. Therefore, our approach attempts to comprehensively test the new behaviors introduced by a change. We implemented our approach in the tool ShadowJPF+, which performs complete shadow symbolic execution on Java bytecode. It is an extension of the tool ShadowJPF, which is based on Symbolic PathFinder. We applied our tool to 79 examples, for which it was able to reveal more diverging behaviors than common shadow symbolic execution. Additionally, the approach has been applied to a real-world patch for the Joda-Time library, for which it successfully generated test cases that expose a regression error.
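The four-way forking idea can be illustrated at a single patched branch: the unified program tracks the old and the new branch condition together, and the two cases where they disagree are exactly the divergence-exposing inputs. The sketch below uses exhaustive enumeration of a small integer domain instead of symbolic constraint solving, and the two branch conditions are invented for illustration:

```python
# At a patched branch, shadow symbolic execution forks into four cases
# over (old condition, new condition); the two "mixed" cells are the
# inputs on which the old and new programs take different branches.

def old_cond(x):
    return x > 10        # branch condition before the (hypothetical) patch

def new_cond(x):
    return x >= 10       # branch condition after the patch

def four_way_fork(domain):
    cells = {(True, True): [], (True, False): [],
             (False, True): [], (False, False): []}
    for x in domain:
        cells[(old_cond(x), new_cond(x))].append(x)
    return cells

cells = four_way_fork(range(0, 21))
diverging = cells[(True, False)] + cells[(False, True)]
print(diverging)  # [10] -- the only input where the branches disagree
```

A symbolic engine would instead solve the four conjunctions (old ∧ new, old ∧ ¬new, ¬old ∧ new, ¬old ∧ ¬new) over path constraints, but the classification of inputs is the same.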

Journal ArticleDOI
TL;DR: This paper maps the discussions and results of the Third International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS 2017), which specifically focuses on challenges and promising solutions in the area of software engineering for sCPS.
Abstract: Smart Cyber-Physical Systems (sCPS) are a novel kind of Cyber-Physical System engineered to take advantage of large-scale cooperation between devices, users and environment to achieve added value in the face of uncertainty and varying situations in their environment. Examples of sCPS include modern traffic systems, Industry 4.0 systems, systems for smart buildings, smart energy grids, etc. The uniting aspect of all these systems is that to achieve their high level of intelligence, adaptivity and ability to optimize and learn, they rely heavily on software. This makes them software-intensive systems, where software becomes their most complex part. Engineering sCPS thus becomes a recognized software engineering discipline, which, however, due to the specifics of sCPS, can only partially rely on the existing body of knowledge in software engineering. In fact, it turns out that many of the traditional approaches to architecture modeling and software development fail to cope with the high dynamicity and uncertainty of sCPS. This calls for innovative approaches that jointly reflect and address the specifics of such systems. This paper maps the discussions and results of the Third International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS 2017), which specifically focuses on challenges and promising solutions in the area of software engineering for sCPS.

Journal ArticleDOI
TL;DR: This study aims to understand the relationship between TD decisions and the success or failure of software startups, and explore the best practices related to TD decisions that would better contribute to the startup success.
Abstract: Context: Technical Debt (TD) is a metaphor used to describe outstanding software maintenance tasks or shortcuts made in software development to achieve short-term benefits (i.e. time to market) that negatively impact software quality in the long term. TD is quite common in a software startup, which is characterized as a young company with low resources and a small client base, aiming to accelerate time to market. Decisions related to TD can be critical for startup success. Objective: I aim to understand the relationship between TD decisions and the success or failure of software startups, and to explore the best practices related to TD decisions that would better contribute to startup success. Method: I plan to apply multiple retrospective case studies in different software startups that succeeded or failed to pass the startup period and become a mature organization. Semi-structured interviews will be used to collect data from the team members who were involved in software development during the startup era. Contribution: The outcome of this study will help software founders/entrepreneurs make effective TD decisions during the startup timeframe that can better contribute to startup success and decrease the risk of failure.

Journal ArticleDOI
TL;DR: The theme of ICSSP 2018 was studying "Demands on Processes, Processes on Demand" by recognizing the demands on processes that include the need for both well-developed plans and incremental deliveries, utilization of increased automation (model-based engineering and DevOps), higher degrees of customer collaboration, and performance requirements of enterprise-level architectures.
Abstract: The International Conference on Software and System Processes (ICSSP), continuing the success of the Software Process Workshop (SPW), the Software Process Modeling and Simulation Workshop (ProSim) and the International Conference on Software Process (ICSP) conference series, has become the established premier event in the field of software and systems engineering processes. It provides a leading forum for the exchange of research outcomes and industrial best practices in process development from the software and systems disciplines. ICSSP 2018 was held in Gothenburg, Sweden, 26-27 May 2018, co-located with the 40th International Conference on Software Engineering (ICSE). The theme of ICSSP 2018 was "Demands on Processes, Processes on Demand", recognizing the demands on processes that include the need for both well-developed plans and incremental deliveries (agile and hybrid processes), utilization of increased automation (model-based engineering and DevOps), higher degrees of customer collaboration, comprehensive analysis of existing products for reuse (open source and COTS), and performance requirements of enterprise-level architectures. Papers presented at ICSSP discussed these issues across different domains, providing concepts, evidence, and experiences.

Journal ArticleDOI
TL;DR: This work proposes to develop a framework to provide guidance for engineering IoT software systems considering their characteristics and multidisciplinarity, developed based on evidence collected both from the literature and from real case scenarios in practice.
Abstract: The Internet of Things represents a promising paradigm for the development of systems that has been largely explored in academia and industry. One of the recognized features of IoT is its large multidisciplinarity, through the integration of different devices and technologies. However, the studies presented in the literature have sought to explore particular issues, not addressing IoT multidisciplinarity as a whole. We argue that to better support the development of these systems, it is essential to have a more holistic view of their engineering. To that end, we propose to develop a framework to provide guidance for engineering IoT software systems considering their characteristics and multidisciplinarity. The research follows (1) a Conceptual Phase, with a set of studies conducted to present the proposed technology foundations; (2) a Development Phase, which aims to operationalize the framework; and (3) an Evaluation Phase, which aims at its assessment in real cases. In this way, the framework is developed based on evidence collected both from the literature and from real case scenarios in practice. Keywords: Internet of Things; Software Development; Empirical Software Engineering.

Journal ArticleDOI
TL;DR: At XP2017 in Köln, a panel was convened to discuss the classic 1987 IEEE Software paper by Frederick P. Brooks, "No Silver Bullet: Essence and Accidents of Software Engineering."
Abstract: At XP2017 in Köln, a panel was convened to discuss the classic 1987 IEEE Software paper by Frederick P. Brooks, "No Silver Bullet: Essence and Accidents of Software Engineering." The ideas presented in his paper have influenced several generations of software developers. Brooks emphasized the notions of essential complexity and accidental complexity, and he offered suggestions for promising approaches to software development. While his approaches are linked to what we now recognize as "agile practices," panelists offered an implicit caveat that they must be done with discipline to avoid increased accidental complexity. Panelists also observed that agile development itself is not a "silver bullet".

Journal ArticleDOI
TL;DR: In this paper, the authors investigate whether negotiation theories can be effective to aid software engineers in defending their estimates and, therefore, reduce distortions, and apply the Design Science Research (DSR) approach by defining and evaluating negotiation guidelines adapted to the context of software project estimation.
Abstract: Software estimation is a critical task in software projects, and the accuracy of software estimates has been a concern for researchers and practitioners. Researchers have already identified some factors that impact estimates accuracy, like cognitive biases, that ultimately lead to unintentional distortions of software estimates. However, there is evidence that intentional distortions of software estimates are also part of the reality of estimation in software projects. The goal of this research project is to investigate whether negotiation theories can be effective to aid software engineers in defending their estimates and, therefore, reduce distortions. To achieve this, we apply the Design Science Research (DSR) approach by defining and evaluating negotiation guidelines adapted to the context of software project estimation. The problem addressed in this project is investigated through a systematic literature mapping (SLM) and a case study. Additionally, the guidelines are going to be applied in real scenarios in software companies. The expected contributions in this project are (i) the negotiation guidelines aiming at reducing software estimate distortions, (ii) empirical evidence about whether negotiation theories are effective to aid software engineers to deal with estimate distortions, and (iii) a set of technological rules about the use of these guidelines.

Journal ArticleDOI
TL;DR: The idea is to build a core structure which allows the configuration of experiment elements based on commonalities with previous replications and desired variabilities to fit the specific replication purposes, as well as the current research progress and expected future contributions.
Abstract: Replication is essential to build knowledge in empirical science. Experiment replications reported in the software engineering context present variabilities in their experiment elements, e.g., variables and materials. Further understanding these variabilities could help in planning replications. However, there is a lack of strategy to support the representation of experiment variabilities and commonalities. In addition, there is also a gap related to effective reuse and traceability of experiment elements. These problems are likely to hamper replication understanding and planning. In order to overcome these gaps, we intend to create a conceptual model and a tool to support replication planning. To develop these solutions, we will use concepts from experimentation and software product lines. Our idea is to build a core structure which allows the configuration of experiment elements based on commonalities with previous replications and desired variabilities, to fit the specific replication purposes. In this paper we describe related work, our research methodology, as well as the current research progress and expected future contributions.

Journal ArticleDOI
TL;DR: The discussion session at the 6th International Genetic Improvement Workshop (GI-2019 @ ICSE) as mentioned in this paper was held as part of the 41st ACM/IEEE International Conference on Software Engineering on Tuesday 28th May 2019.
Abstract: We report the discussion session at the sixth international Genetic Improvement workshop, GI-2019 @ ICSE, which was held as part of the 41st ACM/IEEE International Conference on Software Engineering on Tuesday 28th May 2019. Topics included GI representations, the maintainability of evolved code, automated software testing, future areas of GI research, such as co-evolution, and existing GI tools and benchmarks.

Journal ArticleDOI
TL;DR: The goal of the workshop was to bring together participants from diverse backgrounds to formulate ideas for software design of future systems and related research opportunities and challenges.
Abstract: We report here on the Future of Software Design Workshop that was held on Jan 12-14, 2018 in Pittsburgh, PA under the sponsorship of the Carnegie Mellon University Software Engineering Institute. The software industry is awash in modern trends that involve artificial intelligence (AI), autonomy, data everywhere, etc. These trends affect the structure of software-intensive systems and their designs. The goal of the workshop was to bring together participants from diverse backgrounds to formulate ideas for software design of future systems and related research opportunities and challenges. In this report we summarize the outcomes of the workshop.

Journal ArticleDOI
TL;DR: This paper introduces JPFBar, a novel technique to estimate the percentage of work done by the JPF search by computing weights for the execution paths it explores and summing up the weights.
Abstract: Software model checkers, such as JPF, are routinely used to explore executions of programs that have very large state spaces. Sometimes the exploration can take a significant amount of time before a bug is found or the checking is complete, in which case the user must patiently wait, possibly for quite some time, to learn the result of checking. A progress bar that accurately shows the status of the search provides the user useful feedback about the time expected for the search to complete. This paper introduces JPFBar, a novel technique to estimate the percentage of work done by the JPF search by computing weights for the execution paths it explores and summing up the weights. JPFBar is embodied into a listener that prints a progress bar during JPF execution. An experimental evaluation using a variety of Java subjects shows that JPFBar provides accurate information about the search's progress and fares well in comparison with a state-based progress estimator that is part of the standard JPF distribution.
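The abstract does not spell out JPFBar's exact weighting formula, but a classical way to estimate progress of a backtracking search by weighting explored paths (in the spirit of Knuth's estimator for search trees) is to give each fully explored path the product of the reciprocals of the branching degrees along it; the weights of all completed paths then sum to the fraction of the tree covered. The sketch below illustrates only that general idea, not JPFBar's implementation, and the function names are hypothetical:

```python
def path_weight(branch_degrees):
    """Weight of one fully explored path: the product of 1/k over
    the branching degrees k encountered at each choice point.
    A path through two binary branches gets weight 1/4."""
    weight = 1.0
    for k in branch_degrees:
        weight *= 1.0 / k
    return weight

def progress(completed_paths):
    """Estimated fraction of the search space explored so far:
    the sum of the weights of all completed paths. Reaches 1.0
    exactly when every path has been explored."""
    return sum(path_weight(p) for p in completed_paths)

# A uniform binary tree of depth 2 has 4 leaves; after exploring
# 2 of its 4 paths, the estimate is 0.5.
print(progress([[2, 2], [2, 2]]))
```

The estimate is exact for paths already completed; a progress-bar listener would recompute it each time the underlying search backtracks past the end of a path.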

Journal ArticleDOI
TL;DR: A technique to quantify the exploration of Korat, a well-known tool that explores the bounded space of all candidate inputs and enumerates desired inputs that satisfy given constraints, to provide automated test input generation for systematic testing.
Abstract: Tools that explore very large state spaces to find bugs, e.g., when model checking, or to find solutions, e.g., when constraint solving, can take a considerable amount of time before the search terminates, and the user may not get useful feedback on the state of the search during that time. Our focus is a tool that solves imperative constraints to provide automated test input generation for systematic testing. Specifically, we introduce a technique to quantify the exploration of Korat, a well-known tool that explores the bounded space of all candidate inputs and enumerates desired inputs that satisfy given constraints. Our technique quantifies the size of the input space as it is explored by the Korat search, and provides the user exact information on the size of the remaining input space. In addition, it allows studying key characteristics of the search, such as the distribution of solutions as the search finds them. We implement the technique as a listener for the Korat search, and present initial experimental results using it.
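Korat enumerates candidate inputs as vectors of field values drawn from finite domains, visited in lexicographic order. Under that assumption, the current candidate vector can be read as a mixed-radix number whose value is exactly the number of candidates that precede it, which yields the exact remaining count the abstract describes. The sketch below illustrates this counting idea only; it is not Korat's code, and the function names are hypothetical:

```python
def explored_count(candidate, domain_sizes):
    """Number of candidate vectors that precede `candidate` in
    lexicographic order, treating it as a mixed-radix number whose
    i-th digit ranges over domain_sizes[i] values."""
    count = 0
    for value, size in zip(candidate, domain_sizes):
        count = count * size + value
    return count

def remaining(candidate, domain_sizes):
    """Exact number of candidate vectors not yet reached,
    including the current one."""
    total = 1
    for size in domain_sizes:
        total *= size
    return total - explored_count(candidate, domain_sizes)

# Two fields with domains of sizes 2 and 3: six candidates total,
# enumerated [0,0], [0,1], [0,2], [1,0], [1,1], [1,2].
# At candidate [1,0], three candidates have been passed and
# three (including [1,0] itself) remain.
print(explored_count([1, 0], [2, 3]), remaining([1, 0], [2, 3]))
```

Because the count is derived directly from the candidate vector, a search listener can report exact (not estimated) progress whenever Korat moves to the next candidate, and pruned regions are accounted for automatically by the jump in the mixed-radix value.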

Journal ArticleDOI
TL;DR: Topics included were: distributed teams, methods and processes, business strategies, technologies supporting distributed cooperative work, education, and emerging technologies to support/improve/enhance GSE.
Abstract: The International Conference on Global Software Engineering, in its 14th iteration, continues to provide researchers and practitioners with a leading forum to share their research findings, experiences, and new ideas on diverse topics related to global software engineering. ICGSE 2019 was held in Montreal, Canada on May 25-26, in conjunction with the 41st International Conference on Software Engineering under the theme "Succeeding in the Global Software Industry". Topics included were: distributed teams, methods and processes, business strategies, technologies supporting distributed cooperative work, education, and emerging technologies to support/improve/enhance GSE. Contributions presented at ICGSE tackled these topics providing concepts, evidence, and experiences.