
Showing papers in "ACM Sigsoft Software Engineering Notes in 2017"


Journal ArticleDOI
TL;DR: The research methodology comprises studies that combine results from a systematic literature review with empirical data collected from qualitative and quantitative studies to identify difficulty patterns related to learning how to program, a crucial part of software engineers' training.
Abstract: New software engineers and casual developers are needed in many different areas. However, students face many difficulties while learning the logic of computer programming, frequently failing in university courses. This Ph.D. research aims to identify difficulty patterns related to learning how to program, a crucial part of software engineers' training. The research methodology comprises studies that combine results from a systematic literature review with empirical data collected from qualitative and quantitative studies. The difficulties identified will be compiled into a model, which may assist students in sharpening their focus, teachers in preparing their lessons and teaching material, and researchers in employing methods and tools to support learning.

79 citations


Journal ArticleDOI
TL;DR: This paper reports on the results of the Second International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS 2016), which specifically focuses on challenges and promising solutions in the area of software engineering for sCPS.
Abstract: Smart Cyber-Physical Systems (sCPS) are modern CPS systems that are engineered to seamlessly integrate a large number of computational and physical components; they need to control entities in their environment in a smart and collective way to achieve a high degree of effectiveness and efficiency. At the same time, these systems are supposed to be safe and secure, deal with environment dynamicity and uncertainty, cope with external threats, and optimize their behavior to achieve the best possible outcome. This "smartness" typically stems from highly cooperative behavior, self-awareness, self-adaptation, and self-optimization. Most of the "smartness" is implemented in software, which makes the software one of the most complex and most critical constituents of sCPS. As the specifics of sCPS render traditional software engineering approaches not directly applicable, new and innovative approaches to the software engineering of sCPS need to be sought. This paper reports on the results of the Second International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS 2016), which specifically focuses on challenges and promising solutions in the area of software engineering for sCPS.

52 citations


Journal ArticleDOI
TL;DR: This year's workshop indicates convergence on a common definition of technical debt and its elements, which drives the maturation of a research roadmap and demonstrates that managing technical debt is a mainstream topic in software engineering research.
Abstract: We report here on the Eighth International Workshop on Managing Technical Debt, collocated with the International Conference on Software Maintenance and Evolution (ICSME 2016). The technical debt research community continues to expand through collaborations of industry, tool vendors, and academia. The major themes of discussion this year indicate convergence on a common definition of technical debt and its elements, which drives the maturation of a research roadmap and demonstrates that managing technical debt is a mainstream topic in software engineering research, one that brings together empirical analysis, data science, software design and architecture analysis, and automation, among other challenges.

21 citations


Journal ArticleDOI
TL;DR: This paper reports results of the First International Workshop on Variability and Complexity in Software Design, which brought together researchers and engineers interested in the topics of complexity and variability, and outlines directions in which the field might move in the future.
Abstract: Many of today's software systems accommodate different usage and deployment scenarios. Intentional and unintentional variability in the functionality or quality attributes (e.g., performance) of software significantly increases the complexity of the problem and design space of those systems. The complexity caused by variability becomes increasingly difficult to handle due to the increasing size of software systems, new and emerging application domains, the dynamic operating conditions under which software systems have to operate, fast-moving and highly competitive markets, and more powerful and versatile hardware. This paper reports results of the First International Workshop on Variability and Complexity in Software Design, which brought together researchers and engineers interested in the topics of complexity and variability. It also outlines directions in which the field might move in the future.

18 citations


Journal ArticleDOI
TL;DR: A novel approach and implementation in Symbolic PathFinder for handling symbolic arrays in Java is presented, which enables analyzing a broader class of programs that manipulate arrays.
Abstract: Symbolic execution is a program analysis technique used to increase software reliability. Modern software often manipulates complex data structures, many of which are similar to arrays. We present a novel approach and implementation in Symbolic PathFinder for handling symbolic arrays in Java. It enables analyzing a broader class of programs that manipulate arrays. We also extend the Symbolic PathFinder test-case generation to support numeric arrays.
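
To make concrete what handling symbolic arrays enables, here is a toy Java method of the kind such an analysis targets. This is our own illustration, not an example from the paper: under Symbolic PathFinder the array argument would be marked symbolic so that both branch outcomes are explored for arbitrary contents and lengths, whereas here it is merely run on one concrete input.

// Illustrative only: behavior depends on the contents of an input array.
public class ArrayBranch {
    // Returns the index of the first negative element, or -1 if none exists.
    static int firstNegative(int[] a) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] < 0) {   // branch condition over a (potentially symbolic) element
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Under symbolic execution the argument would be symbolic;
        // here we simply exercise the method concretely.
        System.out.println(firstNegative(new int[] {3, 1, -2, 5})); // prints 2
    }
}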

12 citations


Journal ArticleDOI
TL;DR: In this article, a machine learning based approach for cross-project change-proneness prediction is proposed, which consists of training a model on one project and testing it on a dataset belonging to a different project.
Abstract: Change-prone classes or modules are defined as regions of the source code that are more likely to change as a result of a software development or maintenance activity. Automatic identification of change-prone classes is useful for the software development team, as they can focus their testing efforts on areas within the source code that are more likely to change. Several machine learning techniques have been proposed for predicting change-prone classes based on the application of source code metrics as indicators. However, most of the work has focused on within-project training and model building. There are several real-world scenarios in which a sufficient training dataset is not available for model building, such as in the case of a new project. Cross-project prediction is an approach that consists of training a model on a dataset belonging to one project and testing it on a dataset belonging to a different project. Cross-project change-proneness prediction is relatively unexplored. We propose a machine learning based approach for cross-project change-proneness prediction. We conduct experiments on 10 open-source Eclipse plug-ins and demonstrate the effectiveness of our approach. We frame several research questions comparing the performance of within-project and cross-project prediction, and also propose a Genetic Algorithm (GA) based approach for identifying the best set of source code metrics. We conclude that for the within-project experimental setting, the Random Forest (RF) technique results in the best precision. In the case of cross-project change-proneness prediction, our analysis reveals that the NDTF ensemble method performs better than other individual classifiers (such as decision tree and logistic regression) and ensemble methods on the experimental dataset. We compare within-project, cross-project without GA, and cross-project with GA, and our analysis reveals that cross-project with GA performs best, followed by within-project and then cross-project without GA.
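
As an illustration of the GA-based metric selection mentioned above, the following minimal Java sketch evolves a bitstring over candidate metrics. Everything in it is an assumption made for illustration: the fitness function is a deliberate stand-in (the paper's fitness would instead score a classifier trained on one project and tested on another), and the population size, generation count, and mutation rate are arbitrary.

import java.util.Random;

// Minimal genetic-algorithm sketch for metric-subset selection.
// A bitstring decides which source code metrics are included.
public class MetricGA {
    static final int NUM_METRICS = 20;
    static final int POP = 30, GENERATIONS = 100;
    static final double MUTATION = 0.05;
    static final Random RND = new Random(42);

    // Fake stand-in fitness: pretend metrics 0..7 are predictive and
    // penalize large subsets. A real fitness would evaluate a
    // change-proneness classifier on held-out project data.
    static double fitness(boolean[] genes) {
        double score = 0;
        for (int i = 0; i < genes.length; i++) {
            if (genes[i]) score += (i < 8) ? 1.0 : -0.3;
        }
        return score;
    }

    static boolean[] randomIndividual() {
        boolean[] g = new boolean[NUM_METRICS];
        for (int i = 0; i < g.length; i++) g[i] = RND.nextBoolean();
        return g;
    }

    // Binary tournament selection.
    static boolean[] tournament(boolean[][] pop) {
        boolean[] a = pop[RND.nextInt(POP)], b = pop[RND.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][];
        for (int i = 0; i < POP; i++) pop[i] = randomIndividual();

        for (int gen = 0; gen < GENERATIONS; gen++) {
            boolean[][] next = new boolean[POP][];
            for (int i = 0; i < POP; i++) {
                boolean[] p1 = tournament(pop), p2 = tournament(pop);
                boolean[] child = new boolean[NUM_METRICS];
                int cut = RND.nextInt(NUM_METRICS);  // one-point crossover
                for (int j = 0; j < NUM_METRICS; j++) {
                    child[j] = (j < cut) ? p1[j] : p2[j];
                    if (RND.nextDouble() < MUTATION) child[j] = !child[j]; // bit-flip mutation
                }
                next[i] = child;
            }
            pop = next;
        }

        boolean[] best = pop[0];
        for (boolean[] ind : pop) if (fitness(ind) > fitness(best)) best = ind;
        System.out.println("Best fitness found: " + fitness(best));
    }
}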

10 citations


Journal ArticleDOI
TL;DR: This work extends Psyco with symbolic search, with the plan of eventually using symbolic search to compute a termination criterion for Psyco that guarantees the correctness of learned interfaces.
Abstract: The Java PathFinder extension Psyco generates interfaces of Java components using a combination of dynamic symbolic execution and automata learning to explore different combinations of method invocations on a component. Such interfaces are useful in contract-based compositional verification of component-based systems. Psyco relies on testing for validating learned interfaces and currently cannot guarantee that a generated interface is correct. Instead, it simply returns the most recently learned interface once a user-defined time limit is exceeded. In this paper, we report on work that was performed during the 2016 Google Summer of Code. The aim of this work is to extend Psyco with symbolic search. During symbolic search, Psyco uses fully symbolic method summaries for exploring the state space of a component symbolically. We plan to eventually use symbolic search to compute a termination criterion for Psyco that guarantees the correctness of learned interfaces (e.g., by using symbolic search as a basis for symbolically model checking a component against a learned interface).

9 citations


Journal ArticleDOI
TL;DR: A dynamic slicing algorithm for feature-oriented programs, named the Execution Trace File Based Feature-Oriented Dynamic Slicing (ETBFODS) algorithm, which uses a dependence-based representation called the Dynamic Feature Composition Dependence Graph (DFCDG) and an execution trace file to store the execution history of the program for a given input.
Abstract: Feature-Oriented Programming (FOP) is a general paradigm for synthesizing programs in software product lines. A family of software systems constitutes a software product line (SPL). The unique characteristics of feature-oriented programs, such as mixin layers and refinements of classes, constructors, and constants, pose special difficulties in the slicing of these programs. This paper proposes a dynamic slicing algorithm for feature-oriented programs, named the Execution Trace File Based Feature-Oriented Dynamic Slicing (ETBFODS) algorithm. The ETBFODS algorithm uses a dependence-based representation called the Dynamic Feature Composition Dependence Graph (DFCDG) and an execution trace file to store the execution history of the program for a given input. The dynamic slice is computed by traversing the DFCDG in breadth-first or depth-first order and then mapping the resultant traversed vertices to the program statements.
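
The traversal step of such an algorithm is easy to picture: once the DFCDG has been built from the execution trace file, the slice is the set of vertices reachable from the slicing criterion along dependence edges. The Java sketch below shows only that breadth-first walk over a tiny hard-coded graph; the DFCDG construction and the mapping back to feature-oriented program statements are not modeled, and all statement ids are invented.

import java.util.*;

// Breadth-first reachability over a dependence graph: the essence of the
// slice-computation step. Vertices stand for executed statements; edges
// point from a statement to the statements it depends on.
public class SliceTraversal {
    public static void main(String[] args) {
        Map<Integer, List<Integer>> deps = new HashMap<>();
        deps.put(5, List.of(3, 4));   // statement 5 depends on 3 and 4
        deps.put(4, List.of(2));
        deps.put(3, List.of(1));
        deps.put(2, List.of(1));

        int criterion = 5;            // slice with respect to statement 5
        Set<Integer> slice = new TreeSet<>();
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(criterion);
        while (!queue.isEmpty()) {
            int v = queue.poll();
            if (slice.add(v)) {       // visit each vertex once
                queue.addAll(deps.getOrDefault(v, List.of()));
            }
        }
        System.out.println("Dynamic slice (statement ids): " + slice); // [1, 2, 3, 4, 5]
    }
}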

8 citations


Journal ArticleDOI
TL;DR: The aim of this research is to explore work characteristics in software engineering and to investigate the relationship between these characteristics and work outcomes; the results are expected to lead to a proposal of an instrument for measuring work characteristics in software engineering and an understanding of how to use such measures to improve software development practice.
Abstract: Context: Work design refers to how work is conceived, assigned across organizational levels, and structured into tasks performed by individuals or teams. Recent studies have argued that work characteristics need further investigation to improve our understanding of how to design work and tasks in software engineering practice. Goal: The aim of this research is to explore work characteristics in software engineering research and to investigate the relationship between these characteristics and work outcomes. We expect that these results can lead us toward a proposal of an instrument for measuring work characteristics in software engineering and an understanding of how to use such measures to improve software development practice. Method: The methodological strategy of this research includes a non-exact replication of two surveys performed in other fields. Further, we plan to execute the same survey, enlarging the sample and enhancing the procedures. We also intend to perform qualitative research with a sample of the participants of the previous survey. Current status: This thesis proposal is in an early stage of development. We divided this study into three phases. First, we carried out a non-exact replication of two surveys performed in other areas. Professionals of Brazilian software organizations composed our sample, and this phase aimed to identify the work characteristics of software development and assess the potential relationships of these characteristics with work outcomes. The second phase of this study aims to enlarge our sample by including international software professionals. We also intend to enhance the variability of software development roles of the professionals in our sample. Finally, we will perform qualitative research, which aims to triangulate the data collected in the previous phases and deepen our understanding of work characteristics in software engineering.

7 citations


Journal ArticleDOI
TL;DR: The results show that SSSP approaches solving identical challenges can differ in their computational time and preciseness of results, that the proposed approach is capable of quantifying these differences, and that focused approaches generally outperform more sophisticated approaches for identical SSSP problems.
Abstract: For Staffing and Scheduling a Software Project (SSSP), one has to find an allocation of resources to tasks, while considering parameters such as skills and availability, that identifies the optimal delivery of the project. Many approaches have been proposed that solve SSSP tasks by representing them as optimization problems and applying optimization techniques and heuristics. However, these approaches tend to vary in the parameters they consider, such as skill and availability, as well as in the optimization techniques, which means their accuracy, performance, and applicability can vastly differ, making it difficult to select the most suitable approach for the problem at hand. The fundamental reason for this lack of comparative material lies in the absence of a systematic evaluation method that uses a validation dataset to benchmark SSSP approaches. We introduce an evaluation process for SSSP approaches together with benchmark data to address this problem. In addition, we present an initial evaluation of five SSSP approaches. The results show that SSSP approaches solving identical challenges can differ in their computational time and preciseness of results, and that our approach is capable of quantifying these differences. In addition, the results highlight that focused approaches generally outperform more sophisticated approaches for identical SSSP problems.

6 citations


Journal ArticleDOI
TL;DR: This paper focuses on improving the cohesion of different classes of object-oriented software using a newly proposed similarity metric based on Frequent Usage Patterns (FUP) and performs hierarchical agglomerative clustering with a complete-linkage strategy to cluster member functions.
Abstract: Due to the wide adoption of object-oriented programming in software development, there is always a requirement to produce well-designed software systems, so that the overall software maintenance cost is reduced and the reusability of components is increased. However, due to prolonged maintenance activities, the internal structure of a software system deteriorates. In this situation, restructuring is a widely used solution to improve the overall internal structure of the system without changing its external behavior. One known technique for restructuring is to apply refactoring to the existing source code to alter its internal structure without modifying its external functionality. However, refactoring depends solely on our ability to identify the various code smells present in the system. Refactoring aims at improving cohesion and reducing coupling in the software system. In this paper, a restructuring approach based on refactoring is therefore proposed through improvement in cohesion. The paper focuses on improving the cohesion of different classes of object-oriented software using a newly proposed similarity metric based on Frequent Usage Patterns (FUP). The proposed similarity metric measures the relatedness among member functions of the classes, making use of the FUPs used by member functions. A FUP consists of unordered sequences of member variables accessed by a member function in performing its task. The usage pattern includes both direct and indirect usages based on sub-function calls within a member function. Based on the values of the similarity metric, we performed hierarchical agglomerative clustering with a complete-linkage strategy to cluster member functions. Finally, based on the clusters obtained, the source code of the software is refactored using the proposed refactoring algorithm. The applicability of the proposed approach is tested using two Java projects from different real-life domains. The results obtained support the applicability of the proposed approach to the restructuring of a software system.
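
A drastically simplified analogue of the similarity idea may help: represent each member function by the set of member variables it accesses (directly or through sub-function calls) and measure set overlap. The paper's metric is defined over frequent usage patterns rather than the plain Jaccard overlap used here, and the method and variable names are invented, so treat this Java sketch purely as intuition for what feeds the complete-linkage clustering.

import java.util.*;

// Set-overlap (Jaccard) similarity between member functions, each
// represented by the member variables it uses.
public class UsageSimilarity {
    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Map<String, Set<String>> usages = Map.of(
            "getBalance", Set.of("balance"),
            "deposit",    Set.of("balance", "log"),
            "renderIcon", Set.of("icon", "theme"));

        // Pairwise similarities like these would feed the clustering step.
        System.out.println(jaccard(usages.get("getBalance"), usages.get("deposit")));  // 0.5
        System.out.println(jaccard(usages.get("deposit"), usages.get("renderIcon"))); // 0.0
    }
}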

Journal ArticleDOI
TL;DR: This paper runs Java PathFinder as an Android application that executes Java bytecode, which gives us direct access to the Android environment and allows us to verify rich Android apps that rely on native calls.
Abstract: Because Android apps are written in Java and executed on a virtual machine (VM), there is an opportunity to employ Java PathFinder (JPF) for their verification. There already exist two JPF extensions, jpf-android and jpf-pathdroid. The former executes Java bytecode on the Java VM, while the latter executes Android applications in their original format. Neither supports native methods, and thus both depend on a model of the Android environment. This paper introduces an alternative approach: we run JPF as an Android application that executes Java bytecode, which gives us direct access to the Android environment. This approach allows us to verify rich Android apps that rely on native calls.

Journal ArticleDOI
TL;DR: The goal of the research is to define a systematic, empirically validated decision support system (DSS) for selecting a tool for software test automation.
Abstract: Context: Test automation is an investment with a high initial economic impact on software development. Utilization of test automation may positively affect the costs (e.g., by speeding up development iterations through repeatable tests and regression testing) and the quality of the software or system at large scale. Approaches to test automation may not always be appropriate or successful. The trade-off between manual and automated testing, and the tools to be used, have to be identified and justified. Deciding which tools to use to maximize the benefits is not a trivial task. There are numerous software testing and test automation tools available, both commercial and open source, and every development environment (context) has unique, multifaceted goals. The exact number of tools is unknown, and the chances or resources to try out different choices are very limited. Objective: Contextual factors are acknowledged as an issue that is well known and common to both practitioners in the field and consultation service providers. Selecting and utilizing the most effective and efficient tool(s) for specific purpose(s) in a specific context is essential for the success of the business. The goal of this research is to define a systematic, empirically validated decision support system (DSS) for selecting a tool for software test automation.

Journal ArticleDOI
TL;DR: Social and organizational impacts on the architect and the architecting process are often neglected and were the topics of the First International Workshop on the Social and Organizational Dimensions of Software Architecting.
Abstract: Software architecting is about making decisions that have system-wide impact and that shape software product and process alike. While researchers and practitioners have tried to define and scope the role of the architect, social and organizational impacts on the architect and the architecting process are often neglected. These impacts were the topics of the First International Workshop on the Social and Organizational Dimensions of Software Architecting. This report summarizes the workshop.

Journal ArticleDOI
TL;DR: A requirements analysis framework is proposed for the development of Service Oriented Systems (SOS), and a formal representation of business process requirements for SOS based on business scenarios and a Cause-Effect-Dependency (CED) graph in the dimensions of six aspects of services is presented.
Abstract: In Service Oriented Systems (SOS), the implementation of business processes is accomplished through services in a distributed, loosely coupled manner, based on the business process requirements of the users. Consequently, the importance of business process requirements analysis for the development of SOS is strongly highlighted in both academia and industry. Usually, traditional requirements engineering is competent enough to specify and analyze business requirements for the development of software systems efficiently. However, Service Oriented Requirements Engineering (SORE), emerging for SOS development, differs from traditional requirements engineering due to the complex nature of services. Yet a serious gap still exists between the early and detailed specification of business process requirements in SORE and the further mapping towards the design of SOS from a set of business processes. To address this issue, this paper proposes a requirements analysis framework for the development of SOS. The contribution of the proposed work is a formal representation of business process requirements for SOS based on business scenarios and a Cause-Effect-Dependency (CED) graph in the dimensions of six aspects of services: What, Why, How, Who, When, and Where (5W1H). Both early and detailed-level requirements analysis in the context of SORE is facilitated by the proposed approach. Besides, the traceability of the proposed approach towards the design of business processes for the development of SOS is also exhibited in this paper. Moreover, the practical utility of the proposed approach is demonstrated using a suitable case study.

Journal ArticleDOI
TL;DR: This paper models the process of software testing using the joint probability of both test cases and program faults, proposes two probability models for the software's old and new versions, respectively, and gives a theorem combining set operations and probability models to implement the approach.
Abstract: The paper presents a test suite reduction approach based on probability models in regression testing. First, we model the process of software testing using the joint probability of both test cases and program faults, and propose two probability models for the software's old and new versions, respectively. Then, a theorem combining set operations and probability models is given to implement our approach. Unlike traditional coverage-based test suite reduction methods, the reduced test suite constructed by our approach does not need to cover all test requirements, yet it retains the same fault detection capability as the original test suite.
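
The paper's construction is probabilistic, but the property it targets can be made concrete with a much simpler stand-in: given a hypothetical matrix of which tests detect which faults, a greedy reduction repeatedly picks the test covering the most still-undetected faults, ending with a smaller suite that detects exactly the faults the full suite detects, even though it no longer covers every test requirement. The Java sketch below is that stand-in, not the authors' method.

import java.util.*;

// Greedy test suite reduction that preserves fault detection capability.
// The detection matrix here is invented for illustration.
public class SuiteReduction {
    public static void main(String[] args) {
        Map<String, Set<String>> detects = new LinkedHashMap<>();
        detects.put("t1", Set.of("f1", "f2"));
        detects.put("t2", Set.of("f2"));
        detects.put("t3", Set.of("f3"));
        detects.put("t4", Set.of("f1", "f3"));

        Set<String> remaining = new HashSet<>();
        detects.values().forEach(remaining::addAll); // all faults the full suite detects

        List<String> reduced = new ArrayList<>();
        while (!remaining.isEmpty()) {
            String best = null;
            int bestGain = -1;
            for (String t : detects.keySet()) {      // pick the most useful test
                Set<String> gain = new HashSet<>(detects.get(t));
                gain.retainAll(remaining);
                if (gain.size() > bestGain) { bestGain = gain.size(); best = t; }
            }
            reduced.add(best);
            remaining.removeAll(detects.get(best));
        }
        System.out.println("Reduced suite: " + reduced); // [t1, t3]
    }
}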

Journal ArticleDOI
TL;DR: The continuous move towards reducing upfront architecture design efforts, and the popularity of practices such as test-driven development highlight the importance of enriching software implementation practices with new architecting notions, practices and tools.
Abstract: In software engineering there has traditionally been a distinction between high-level architecting and lower-level implementation activities (e.g., coding and testing). Those who are developing and maintaining the software are often not engaged in early design activities. For example, software programmers tend to lack design and architecture skills, and architects are often blamed for not knowing how to write good code and for not being involved in low-level implementation tasks. This results in software quality issues, software implementations that drift from the initial design, and incorrect or missing architectural decisions in the code. The continuous move towards reducing upfront architecture design efforts, and the popularity of practices such as test-driven development, highlight the importance of enriching software implementation practices with new architecting notions, practices, and tools. This was the topic of the First International Workshop on Bringing Architecture Design Thinking into Developers' Daily Activities. In this paper we summarize the workshop.

Journal ArticleDOI
TL;DR: The authors provide a more detailed analysis of the emerging evidence-based insights on Environmental Big Data by applying the well-defined method of systematic mapping; the analysis reveals the need for more empirical research able to provide new metrics measuring efficiency and effectiveness.
Abstract: Big data sets and analytics are increasingly being used by government agencies, non-governmental organizations, and private companies to forward environmental protection. Improving energy efficiency, promoting environmental justice, tracking climate change, and monitoring water quality are just a few of the objectives being furthered by the use of Big Data. The authors provide a more detailed analysis of the emerging evidence-based insights on Environmental Big Data (EBD) by applying the well-defined method of systematic mapping. The analysis of the results throws light on the current open issues of Environmental Big Data. Moreover, different facets of the study can be combined to answer more specific research questions. The report reveals the need for more empirical research able to provide new metrics measuring the efficiency and effectiveness of the proposed analytics, and new methods and tools supporting the data processing workflow in EBD.

Journal ArticleDOI
TL;DR: The goal is to improve the communication of requirements between the members of a development team, reducing the loss of requirements information during the execution of the software project.
Abstract: The effective communication of the requirements influences the success of software development projects. Achieving effective communication of the requirements is difficult due to the involvement of several people with different roles, skills, knowledge, and responsibilities. Although many studies analyze the communication between clients and system analysts, they do not focus on the communication within the development team. In this research, we propose the creation of a set of artifacts and models to support the communication of requirements. We will base our proposal on the different perspectives of the team members according to their experience with the artifacts and models adopted in the development process within their organization. We will follow a methodology based on Design Science Research guidelines, which will guide us through the creation and evaluation of the artifacts and models to solve problems with the communication of requirements. Our goal is to improve the communication of requirements between the members of a development team, reducing the loss of requirements information during the execution of the software project.

Journal ArticleDOI
TL;DR: The goal of this PhD research is to build a substantive theory of job rotation in software engineering, along with the construction and validation of a set of guidelines to improve the use of job rotation in software companies.
Abstract: Context: Job rotation is an organizational practice whereby individuals are moved among jobs or projects in the same organization. In software companies, job rotation is a common practice as well, especially to promote the movement of professionals among different software projects. For several years, researchers from different research areas have studied the effects of this practice on the work of employees; in software engineering research, however, studies regarding this practice are still scarce. Goal: The goal of this PhD research is to build a substantive theory of job rotation in software engineering, along with the construction and validation of a set of guidelines to improve the use of job rotation in software companies. Thus, we seek to provide instruments to plan, execute, and evaluate the effects of this practice on the work of software engineers. Method: Consistent with the nature of our problem and the investigated phenomenon, a multi-method approach, especially based on longitudinal and exploratory studies, is being performed to understand, interpret, and explain the effects of job rotation on software engineers and subsequently on the software development process. So far, we have concluded a systematic literature review and an industrial case study on this theme. Moreover, a cross-sectional survey is being concluded, and ethnographically-supported multiple case studies are being planned to improve and complement the current findings. Results: To this point, this PhD work has presented a set of contributions both to academic research and to industrial practice. Our initial theory contributes to raising awareness of the potential conflicts associated with the practice of job rotation and starts to prepare practitioners to deal with the negative impacts of this practice. However, further research is still needed to improve this theory and to construct guidelines for industry.

Journal ArticleDOI
TL;DR: This work presents NonDex for JPF, which includes JPF models for 11 methods from the Java standard library (i.e., all methods that JPF supports from the current methods in NonDex), and uses these models to systematically explore the state spaces of 46 tests from student homework submissions.
Abstract: Some Java libraries have underdetermined specifications that allow more than one correct output for the same input, e.g., an output array may have its elements in any order. While such specifications have a number of advantages (e.g., a library can change while still satisfying the specification), the non-determinism inherent in underdetermined specifications can lead to failures in client code that erroneously assumes behaviors based on the library implementation instead of only the specification. Our recent work introduced the NonDex approach for detecting such erroneous assumptions by checking client code against models of library methods, which encode all behaviors allowed by the specifications. We present NonDex for JPF, which includes JPF models for 11 methods from the Java standard library (i.e., all methods that JPF supports from the current methods in NonDex). We use these models to systematically explore the state spaces of 46 tests from student homework submissions. Our experiments show several interesting results, which provide new insights into the complexity of exploring the behaviors of code that uses underdetermined APIs and the structure of the state spaces that arise in the exploration, and provide a basis for future work on better detecting faults in tests that invoke underdetermined APIs, as well as on developing tool support for writing and maintaining more robust test suites.
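
The kind of erroneous assumption being detected is easy to reproduce: the Java specification leaves HashSet iteration order undetermined, yet tests often bake in the one order they happened to observe. The example below is our own illustration, not one of the 46 student tests; a NonDex-style exploration of the allowed iteration orders would expose the fragile check.

import java.util.*;

// A client-code assumption that only holds for one particular (unspecified)
// HashSet iteration order.
public class OrderAssumption {
    public static void main(String[] args) {
        Set<String> langs = new HashSet<>(List.of("java", "lua", "cobol"));
        String joined = String.join(",", langs);
        // Fragile: the specification permits any order, so this check may
        // pass on one JVM version and fail on another.
        if (joined.equals("java,lua,cobol")) {
            System.out.println("test passed (by luck)");
        } else {
            System.out.println("test failed: got " + joined);
        }
    }
}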

Journal ArticleDOI
TL;DR: This work is the first that applies bounded model checking to the verification of Lua programs, and shows that BMCLua produces an ANSI-C code that is more efficient for verification, when compared with other existing approaches.
Abstract: Lua is a programming language designed as a scripting language, which is fast, lightweight, and suitable for embedded applications. Due to its features, Lua is widely used in the development of games and interactive applications for digital TV. However, during the development phase of such applications, some errors may be introduced, such as deadlock, arithmetic overflow, and division by zero. This paper describes a novel verification approach for software written in Lua, using as backend the Efficient SMT-Based Context-Bounded Model Checker (ESBMC). Such an approach, called bounded model checking - Lua (BMCLua), consists of translating Lua programs into ANSI-C source code, which is then verified with ESBMC. Experimental results show that the proposed verification methodology is effective and efficient when verifying safety properties in Lua programs. The performed experiments have shown that BMCLua produces ANSI-C code that is more efficient for verification, when compared with other existing approaches. To the best of our knowledge, this work is the first that applies bounded model checking to the verification of Lua programs.

Journal ArticleDOI
TL;DR: The aim of this research is to analyze how to improve the MDD process with variability modeling in real industrial environments in order to enable systems following a Model Driven Development approach to manage variability.
Abstract: Software Product Lines (SPLs) have proven to be successful at reducing the costs and time to market of product development through the planned reuse of software components in products within the same scope. SPL adoption has typically been regarded as following a proactive approach, although recent surveys show that most SPLs are planned following reactive approaches. It seems necessary to refocus SPL engineering research, methodologies, and tools on re-engineering existing systems into SPLs. We believe that systems following a Model Driven Development (MDD) approach can highly benefit from these re-engineering efforts, in order to enable them to manage variability. The aim of this research is to analyze how to improve the MDD process with variability modeling in real industrial environments. So far, we have performed three empirical studies related to variability modeling in MDD approaches: (1) a usability evaluation of an MDD approach with variability modeling, (2) a study of the comprehensibility of variability in model fragments for product configuration, and (3) an evaluation of bug-fixing in an MDD-SPL tool.

Journal ArticleDOI
TL;DR: A historical overview of ACM SIGSOFT SEN is provided and the bibliometric analysis presented in this paper can provide insights on the extent to which the SEN is meeting its desired objectives.
Abstract: Bibliometric analysis is a commonly used technique to analyze scholarly publications to extract useful insights about research and scientific papers which can then be used for decision making by policy makers and administrators. Bibliometric analysis helps in understanding various aspects of scientific knowledge creation and dissemination, such as author and institute productivity, impact of articles in terms of citations, university and industry collaboration, geographical contributions, and ethnic and gender minority in authorship. ACM SIGSOFT Software Engineering Notes (SEN) is a non-refereed but reputed and edited publication for informal writings and reports about Software Engineering (SE). ACM SIGSOFT SEN publishes various types of submissions, such as papers, reports, columns, announcements, and book reviews. These submissions are published in the ACM Digital Library (DL). We conduct a bibliometric analysis of articles published in ACM SIGSOFT SEN during a ten-year period, from 2007 to 2016. Our objective is to provide a historical overview (one decade) of ACM SIGSOFT SEN and reflect on the past so that the ACM SIGSOFT community and contributors can assess the strengths and shortcomings of the SEN. We believe that the bibliometric analysis presented in this paper can provide insights on the extent to which the SEN is meeting its desired objectives.

Journal ArticleDOI
TL;DR: The 1st COMMitMDE workshop provided a forum to discuss the state of research and practice on collaborative MDE, to create new synergies between tool vendors, researchers, and practitioners, to inform the community about new means for collaborative MDE, and to reflect on the needs and research gaps in the collaborative MDE area.
Abstract: COMMitMDE was the 1st International Workshop on Collaborative Modelling in MDE, held on the 4th of October 2016 as a satellite event of the 19th International Conference on Model Driven Engineering Languages and Systems (MoDELS 2016) in St. Malo, France. The goal of the workshop was to bring together researchers and practitioners in order to investigate (i) the potential impact of collaborative software engineering methods and principles on Model-Driven Engineering (MDE) practices and (ii) how MDE methods and techniques can support collaborative software engineering activities. The 1st COMMitMDE workshop provided a forum to discuss the state of research and practice on collaborative MDE, to create new synergies between tool vendors, researchers, and practitioners, to inform the community about new means for collaborative MDE, and to reflect on the needs and research gaps in the collaborative MDE area.

Journal ArticleDOI
TL;DR: This summary has the purpose of leveraging the transfer of the findings of the workshop into future activities of the automatic vehicle control (AVC) community.
Abstract: In this report, we summarize topics, challenges, and research questions discussed in the workshop contributions and during the sessions of our workshop. This summary has the purpose of leveraging the transfer of our findings into future activities of the automatic vehicle control (AVC) community.

Journal ArticleDOI
TL;DR: A tool is described as an extension to Java PathFinder, called State-Comparator, which compares states in the state space to identify variables that should be abstracted, in order to increase state matching.
Abstract: Model checking software applications can result in exploring large or infinite state spaces. It is thus essential to identify and abstract variables that could potentially take on a large number of values, in order to increase state matching. In this paper we describe a tool we created as an extension to Java PathFinder, called State-Comparator, which compares states in the state space to identify variables that should be abstracted.
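
A sketch of the kind of variable such a tool is meant to flag, with invented names: a counter that takes a fresh value on every loop iteration makes every state distinct, so state matching never succeeds and the explored state space grows without bound, even though the rest of the state is tiny. Abstracting the counter away during state comparison would restore a finite state space.

// Illustrative only. Under a model checker, `ticks` defeats state matching
// because it never repeats; `lightOn` alone has just two states.
public class TickLoop {
    public static void main(String[] args) {
        int ticks = 0;            // unbounded: candidate for abstraction
        boolean lightOn = false;  // bounded: only two values
        while (true) {
            ticks++;              // every iteration produces a "new" state
            lightOn = !lightOn;
            if (ticks > 5) break; // bound added so this concrete sketch terminates
        }
        System.out.println("ticks=" + ticks + " lightOn=" + lightOn);
    }
}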

Journal ArticleDOI
TL;DR: RePa 2016 was part of the 24th IEEE International Requirements Engineering Conference (RE'16) held in Beijing, and consisted of an introduction, a keynote, paper presentations, and discussion sessions.
Abstract: RePa 2016 was part of the 24th IEEE International Requirements Engineering Conference (RE'16), held in Beijing. The all-day program consisted of an introduction, a keynote, paper presentations, and discussion sessions. All papers were presented, and attendance was around 15 people.

Journal ArticleDOI
TL;DR: The Y2K panic reminded us of how much code continued to be in use in the world long after the demise of its coders, its intellectual parents, and it seems important that software developers do what they can to prepare these intellectual offspring for their future.
Abstract: "Your children are not your children… their souls dwell in the house of tomorrow, which you cannot visit, not even in your dreams." (Khalil Gibran) This is an excerpt from a famous poem much loved by many parents. It reminds parents that their children, the products of their bodies, will go off to an unknowable future that they will inhabit and influence, eventually without any direct parental guidance and supervision. The wise parent must think about how to prepare his or her children for such a future. But the poet seems also to have something to say to software developers (amazingly enough). For our software, the products of our minds, will also go off to an unknowable future that it will have to inhabit, and might inevitably influence. But in this case too, eventually this will probably have to be without the direct guidance and supervision of the developer of the software. So, it seems important that we software developers also do what we can to prepare these intellectual offspring for their future as well. Like our children, we should expect that the software we produce will live on long after we do. The Y2K panic reminded us of how much code continued to be in use in the world long after the demise of its coders, its intellectual parents. But if anyone thinks that this situation, with its attendant problems, was cleaned up by the year 2000, they should think again. Millions of lines of Cobol written in the middle of the 20th century continued to be pivotally important to the operations of the US Social Security Administration well into the 21st century, and may still be in service today. Ancient Fortran code is still in use by various government agencies. And the US Federal Aviation Administration's management of the National Air Space continued to rely upon, for a period of at least 30 years and until only relatively recently, programs written in assembly language code, running on elaborate simulators of a long-extinct machine. We continue to hear reports about the central importance of ancient code to banks, airlines, and large corporations in all areas of endeavor, suggesting that government is not the only area of our economy that is centrally reliant upon the "orphans" of long-expired software developers. And, indeed, it is likely that few, if any, of these original developers expected that the code they wrote …