Showing papers in "Information & Software Technology in 2014"


Journal ArticleDOI
TL;DR: The results indicate that software engineering work practices are chosen opportunistically, adapted and configured to provide value under the constraints imposed by the startup context.
Abstract: Context: Software startups are newly created companies with no operating history that are fast in producing cutting-edge technologies. These companies develop software under highly uncertain conditions, tackling fast-growing markets under a severe lack of resources. Therefore, software startups present a unique combination of characteristics which pose several challenges to software development activities. Objective: This study aims to structure and analyze the literature on software development in startup companies, thereby determining the potential for technology transfer and identifying software development work practices reported by practitioners and researchers. Method: We conducted a systematic mapping study, developing a classification schema, ranking the selected primary studies according to their rigor and relevance, and analyzing reported software development work practices in startups. Results: A total of 43 primary studies were identified and mapped, synthesizing the available evidence on software development in startups. Only 16 studies are entirely dedicated to software development in startups, of which 10 result in a weak contribution (advice and implications (6); lessons learned (3); tool (1)). Nineteen studies focus on managerial and organizational factors. Moreover, only 9 studies exhibit high scientific rigor and relevance. From the reviewed primary studies, 213 software engineering work practices were extracted, categorized and analyzed. Conclusion: This mapping study provides the first systematic exploration of the state of the art in software startup research. The existing body of knowledge is limited to a few high-quality studies. Furthermore, the results indicate that software engineering work practices are chosen opportunistically, adapted and configured to provide value under the constraints imposed by the startup context.

359 citations


Journal ArticleDOI
TL;DR: It was generally discovered that existing prioritization techniques suffer from a number of limitations, which include lack of scalability, handling of rank updates during requirements evolution, coordination among stakeholders, and requirements dependency issues.
Abstract: Context: During requirements engineering, prioritization is performed to grade or rank requirements in their order of importance and subsequent implementation releases. It is a major step taken in making crucial decisions so as to increase the economic value of a system. Objective: The purpose of this study is to identify and analyze existing prioritization techniques in the context of the formulated research questions. Method: Search terms with relevant keywords were used to identify primary studies relating to requirements prioritization, classified under journal articles, conference papers, workshops, symposiums, book chapters and IEEE bulletins. Results: 73 primary studies were selected from the search processes. Of these, 13 were journal articles, 35 were conference papers and 8 were workshop papers. Furthermore, contributions from symposiums as well as IEEE bulletins were 2 each, while the total number of book chapters amounted to 13. Conclusion: Prioritization has been significantly discussed in the requirements engineering domain. However, it was generally discovered that existing prioritization techniques suffer from a number of limitations, which include lack of scalability, handling of rank updates during requirements evolution, coordination among stakeholders, and requirements dependency issues. Also, the applicability of existing techniques in complex and real settings has not yet been reported.

331 citations
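
To make the surveyed techniques concrete, here is a minimal sketch of one commonly reviewed prioritization technique, cumulative voting (the "100-dollar test"): each stakeholder distributes 100 points across requirements, which are then ranked by total points. Stakeholder names, requirements, and allocations below are hypothetical, and the scalability limitation noted in the conclusion shows up directly: hand-allocating points becomes impractical as the number of requirements grows.

```python
from collections import defaultdict

def prioritize(votes: dict[str, dict[str, int]]) -> list[tuple[str, int]]:
    """votes maps stakeholder -> {requirement: points}, each summing to 100."""
    totals: defaultdict[str, int] = defaultdict(int)
    for allocation in votes.values():
        assert sum(allocation.values()) == 100, "each stakeholder spends exactly 100 points"
        for requirement, points in allocation.items():
            totals[requirement] += points
    # Rank requirements by total points, highest first.
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

ranking = prioritize({
    "alice": {"login": 50, "search": 30, "export": 20},
    "bob":   {"login": 20, "search": 60, "export": 20},
})
print(ranking)  # [('search', 90), ('login', 70), ('export', 40)]
```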


Journal ArticleDOI
TL;DR: This paper introduces and details the MoDisco open source MDRE framework and presents the underlying MDRE global methodology and architecture accompanying this proposed tooling, intended to ease the design and building of model-based solutions dedicated to legacy system RE.
Abstract: Context: Most companies, independently of their size and activity type, are facing the problem of managing, maintaining and/or replacing (part of) their existing software systems. These legacy systems are often large applications playing a critical role in the company’s information system and with a non-negligible impact on its daily operations. Improving their comprehension (e.g., architecture, features, enforced rules, handled data) is a key point when dealing with their evolution/modernization. Objective: The process of obtaining useful higher-level representations of (legacy) systems is called reverse engineering (RE), and remains a complex goal to achieve. So-called Model Driven Reverse Engineering (MDRE) has been proposed to enhance more traditional RE processes. However, generic and extensible MDRE solutions potentially addressing several kinds of scenarios relying on different legacy technologies are still missing or incomplete. This paper proposes to make a step in this direction. Method: MDRE is the application of Model Driven Engineering (MDE) principles and techniques to RE in order to generate relevant model-based views on legacy systems, thus facilitating their understanding and manipulation. In this context, MDRE is practically used in order to 1) discover initial models from the legacy artifacts composing a given system and 2) understand (process) these models to generate relevant views (i.e., derived models) on this system. Results: Capitalizing on the different MDRE practices and our previous experience (e.g., in real modernization projects), this paper introduces and details the MoDisco open source MDRE framework. It also presents the underlying MDRE global methodology and architecture accompanying this proposed tooling. Conclusion: MoDisco is intended to ease the design and building of model-based solutions dedicated to legacy system RE. As empirical evidence of its relevance and usability, we report on its successful application in real industrial projects and on the concrete experience we gained from that.

198 citations


Journal ArticleDOI
TL;DR: The focus of the GSD SLR research is mapping the research rather than providing evidence-based guidance to industry, and empirical support for the majority of risks identified is moderate to low.
Abstract: Context: There is extensive interest in global software development (GSD) which has led to a large number of papers reporting on GSD. A number of systematic literature reviews (SLRs) have attempted to aggregate information from individual studies. Objective: We wish to investigate GSD SLR research with a focus on discovering what research has been conducted in the area and to determine if the SLRs furnish appropriate risk and risk mitigation advice to provide guidance to organizations involved with GSD. Method: We performed a broad automated search to identify GSD SLRs. Data extracted from each study included: (1) authors, their affiliation and publishing venue, (2) SLR quality, (3) research focus, (4) GSD risks, (5) risk mitigation strategies, and (6) for each SLR the number of primary studies reporting each risk and risk mitigation strategy. Results: We found a total of 37 papers reporting 24 unique GSD SLR studies. Major GSD topics covered include: (1) organizational environment, (2) project execution, (3) project planning and control and (4) project scope and requirements. We extracted 85 risks and 77 risk mitigation advice items and categorized them under four major headings: outsourcing rationale, software development, human resources, and project management. The largest group of risks was related to project management. GSD outsourcing rationale risks ranked highest in terms of primary study support but in many cases these risks were only identified by a single SLR. Conclusions: The focus of the GSD SLRs we identified is mapping the research rather than providing evidence-based guidance to industry. Empirical support for the majority of risks identified is moderate to low, both in terms of the number of SLRs identifying the risk, and in the number of primary studies providing empirical support. Risk mitigation advice is also limited, and empirical support for these items is low.

138 citations


Journal ArticleDOI
TL;DR: With the operationalization in hand, researchers no longer need to start from scratch when researching open source ecosystems’ health, and stakeholders can make better decisions on whether to invest in an ecosystem.
Abstract: Background: The livelihood of an open source ecosystem is important to different ecosystem participants: software developers, end-users, and investors all want to know whether their ecosystem is healthy and performing well. Currently, no working operationalization exists that can be used to determine the health of open source ecosystems. Health is typically looked at from a project scope, not from an ecosystem scope. Objectives: With such an operationalization, stakeholders can make better decisions on whether to invest in an ecosystem: developers can select the healthiest ecosystem to join, keystone organizers can establish which governance techniques are effective, and end-users can select ecosystems that are robust, will live long, and prosper. Method: Design research is used to create the health operationalization. The evaluation step is done using four ecosystem health projects from the literature. Results: The Open Source Ecosystem Health Operationalization is provided, which establishes the health of a complete software ecosystem using the data from collections of open source projects that belong to the ecosystem. Conclusion: The groundwork for further research in ecosystem health is done by providing a summary of research challenges. With the operationalization in hand, researchers no longer need to start from scratch when researching open source ecosystems’ health.

137 citations
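
The paper's actual operationalization is not reproduced in the abstract; the toy sketch below only illustrates the shift of scope it argues for, from project-level metrics to an ecosystem-level indicator, by aggregating per-project signals across all projects in an ecosystem. The metric names, saturation thresholds, and equal weighting are invented for the example.

```python
from statistics import mean

projects = [  # hypothetical per-project metrics for one ecosystem
    {"name": "core",   "active_devs": 40, "commits_last_quarter": 900, "open_bug_ratio": 0.10},
    {"name": "plugin", "active_devs": 6,  "commits_last_quarter": 120, "open_bug_ratio": 0.30},
]

def ecosystem_health(projects: list[dict]) -> float:
    """Average simple per-project scores; a real operationalization would
    choose, weight, and normalize its metrics far more carefully."""
    def project_score(p: dict) -> float:
        activity = min(p["commits_last_quarter"] / 500, 1.0)  # saturates at 500
        community = min(p["active_devs"] / 25, 1.0)           # saturates at 25
        robustness = 1.0 - p["open_bug_ratio"]
        return (activity + community + robustness) / 3

    return mean(project_score(p) for p in projects)

print(f"ecosystem health: {ecosystem_health(projects):.2f}")
```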


Journal ArticleDOI
TL;DR: In this article, root cause analysis of software project failures is conducted to understand which causes, so-called bridge causes, interconnect different process areas, and which causes were perceived as the most promising targets for process improvement.
Abstract: Context: Software project failures are common. Even though the reasons for failures have been widely studied, the analysis of their causal relationships is lacking. This creates an illusion that the causes of project failures are unrelated. Objective: The aim of this study is to conduct an in-depth analysis of software project failures in four software product companies in order to understand the causes of failures and their relationships. For each failure, we want to understand which causes, so-called bridge causes, interconnect different process areas, and which causes were perceived as the most promising targets for process improvement. Method: The causes of failures were detected by conducting root cause analysis. For each cause, we classified its type, process area, and interconnectedness to other causes. We quantitatively analyzed which type, process area, and interconnectedness categories (bridge, local) were common among the causes selected as the most feasible targets for process improvement activities. Finally, we qualitatively analyzed the bridge causes in order to find common denominators for the causal relationships interconnecting the process areas. Results: For each failure, our method identified causal relationship diagrams containing 130 to 185 causes each. All four cases were unique, although some similarities occurred. On average, 50% of the causes were bridge causes. Lack of cooperation, weak task backlog, and lack of software testing resources were common bridge causes. Bridge causes, and causes related to tasks, people, and methods, were common among the causes perceived as the most feasible targets for process improvement. The causes related to the project environment were frequent, but seldom perceived as feasible targets for process improvement. Conclusion: Prevention of a software project failure requires a case-specific analysis and controlling causes outside the process area where the failure surfaces. This calls for collaboration between the individuals and managers responsible for different process areas.

131 citations
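
The notion of a bridge cause can be made precise with a small sketch: model the detected causes as a graph whose nodes are labeled with process areas; a cause is a bridge cause if it participates in a causal edge that crosses process areas. The causes, areas, and edges below are hypothetical, loosely echoing the examples named in the abstract.

```python
causes = {  # cause -> process area it belongs to
    "lack of cooperation":    "management",
    "weak task backlog":      "requirements",
    "late integration":       "implementation",
    "missing test resources": "testing",
}
edges = [  # (cause, effect) causal relationships
    ("lack of cooperation", "weak task backlog"),
    ("weak task backlog", "late integration"),
    ("missing test resources", "late integration"),
]

def bridge_causes(causes: dict[str, str], edges: list[tuple[str, str]]) -> set[str]:
    """A cause is a bridge if any of its causal edges crosses process areas."""
    bridges: set[str] = set()
    for cause, effect in edges:
        if causes[cause] != causes[effect]:
            bridges.update((cause, effect))
    return bridges

# In this toy graph every edge crosses areas, so all four causes are bridges.
print(sorted(bridge_causes(causes, edges)))
```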


Journal ArticleDOI
TL;DR: A large organization within Ericsson with 400 persons in 40 Scrum teams at three sites adopted the use of Communities of Practice (CoP) as part of their transformation from a traditional plan-driven organization to lean and agile.
Abstract: Context: Communities of practice—groups of experts who share a common interest or topic and collectively want to deepen their knowledge—can be an important part of a successful lean and agile adoption, in particular in large organizations. Objective: In this paper, we present a study on how a large organization within Ericsson, with 400 persons in 40 Scrum teams at three sites, adopted the use of Communities of Practice (CoPs) as part of its transformation from a traditional plan-driven organization to lean and agile. Methods: We collected data through 52 semi-structured interviews at two sites, and through longitudinal non-participant observation of the transformation during more than 20 site visits over a period of two years. Results: The organization had over 20 CoPs, gathering weekly, bi-weekly or on an as-needed basis. CoPs had several purposes including knowledge sharing and learning, coordination, technical work, and organizational development. Examples of CoPs include Feature Coordination CoPs to coordinate between teams working on the same feature, a Coaching CoP to discuss agile implementation challenges and successes and to help lead the organizational continuous improvement, an end-to-end CoP to remove bottlenecks from the flow, and Developer CoPs to share good development practices. Success factors of well-functioning CoPs include having a good topic, a passionate leader, a proper agenda, decision-making authority, an open community, supporting tools, a suitable rhythm, and cross-site participation when needed. Organizational support includes creating a supportive atmosphere and providing a suitable infrastructure for CoPs. Conclusions: In the case organization, CoPs were initially used to support the agile transformation, and as part of the distributed Scrum implementation. As the transformation progressed, the CoPs also took on the role of supporting continuous organizational improvements. CoPs became a central mechanism behind the success of the large-scale agile implementation in the case organization, helping mitigate some of the most pressing problems of the agile transformation.

129 citations


Journal ArticleDOI
TL;DR: This article identifies specific challenges faced when testing scientific software, methods proposed to overcome these challenges together with their limitations, and unsolved problems that software engineering researchers and practitioners can help address.
Abstract: Context: Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective: This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method: We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results: We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software, such as oracle problems, and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community, such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally, we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions: Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of scientific software, make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software, such as oracle problems, when developing testing techniques.

119 citations
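
One concrete technique this literature proposes for the oracle problem is metamorphic testing: when the correct output of a scientific routine cannot be checked directly, test relations that must hold between the outputs of related runs. The sketch below uses a stand-in routine (kinetic energy of a unit mass) invented purely for illustration.

```python
import math

def simulate(velocity: float) -> float:
    """Stand-in for a scientific computation lacking a practical oracle:
    kinetic energy of a unit mass, E = 0.5 * v**2."""
    return 0.5 * velocity ** 2

def test_metamorphic_scaling() -> None:
    # Metamorphic relation: doubling the velocity must quadruple the energy,
    # even if we cannot independently verify any single absolute value.
    for v in (0.5, 1.3, 7.0):
        assert math.isclose(simulate(2 * v), 4 * simulate(v), rel_tol=1e-9)

test_metamorphic_scaling()
print("metamorphic relation holds")
```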


Journal ArticleDOI
TL;DR: A literature review of studies published from 1998 to the first half of 2013 identified a number of strategies that can support both the selection of products for testing and the actual testing of products.
Abstract: Context: Testing plays an important role in the quality assurance process for software product line engineering. There are many opportunities for economies of scope and scale in the testing activities, but techniques that can take advantage of these opportunities are still needed. Objective: The objective of this study is to identify testing strategies that have the potential to achieve these economies, and to provide a synthesis of available research on SPL testing strategies, to be applied towards reaching higher defect detection rates and reduced quality assurance effort. Method: We performed a literature review of 276 studies published from 1998 to the first half of 2013. We used several filters to focus the review on the most relevant studies, and we give detailed analyses of the core set of studies. Results: The analysis of the reported strategies comprised two fundamental aspects of software product line testing: the selection of products for testing, and the actual testing of products. Our findings indicate that the literature offers a large number of techniques to cope with these aspects. However, there is a lack of reports on realistic industrial experiences, which limits the inferences that can be drawn. Conclusion: This study identified a number of strategies that can support both the selection of products and the actual testing of products. Future research should also benefit from the problems and advantages identified in this study.

118 citations
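
For the first aspect, selecting products for testing, one family of strategies in this literature is combinatorial (e.g., pairwise) sampling: pick a small set of configurations that still covers every pair of feature settings. The greedy sketch below assumes a trivial three-feature model with no cross-tree constraints; the feature names are invented.

```python
from itertools import combinations, product

features = ["encryption", "logging", "compression"]  # each feature on/off

def pairs(config: tuple[bool, ...]) -> set:
    """All (feature=value, feature=value) pairs a configuration covers."""
    return set(combinations(zip(features, config), 2))

all_configs = list(product([False, True], repeat=len(features)))
uncovered = set().union(*(pairs(c) for c in all_configs))

selected = []
while uncovered:  # greedy set cover: take the config covering most new pairs
    best = max(all_configs, key=lambda c: len(pairs(c) & uncovered))
    selected.append(best)
    uncovered -= pairs(best)

print(f"{len(selected)} of {len(all_configs)} products cover all pairs:")
for cfg in selected:
    print(dict(zip(features, cfg)))
```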


Journal ArticleDOI
TL;DR: A classification of replications is proposed which provides experimenters in SE with guidance about what changes they can make in a replication and, based on these, what verification purposes such a replication can serve.
Abstract: Context Replication plays an important role in experimental disciplines. There are still many uncertainties about how to proceed with replications of SE experiments. Should replicators reuse the baseline experiment materials? How much liaison should there be among the original and replicating experimenters, if any? What elements of the experimental configuration can be changed for the experiment to be considered a replication rather than a new experiment? Objective To improve our understanding of SE experiment replication, in this work we propose a classification intended to provide experimenters with guidance about what types of replication they can perform. Method The research approach followed is structured according to the following activities: (1) a literature review of experiment replication in SE and in other disciplines, (2) identification of typical elements that compose an experimental configuration, (3) identification of different replication purposes and (4) development of a classification of experiment replications for SE. Results We propose a classification of replications which provides experimenters in SE with guidance about what changes they can make in a replication and, based on these, what verification purposes such a replication can serve. The proposed classification helped to accommodate opposing views within a broader framework, and it can account for replications ranging from less similar to more similar with respect to the baseline experiment. Conclusion The aim of replication is to verify results, but different types of replication serve special verification purposes and afford different degrees of change. Each replication type helps to discover particular experimental conditions that might influence the results. The proposed classification can be used to identify changes in a replication and, based on these, understand the level of verification.

116 citations


Journal ArticleDOI
TL;DR: Strong indications are obtained that TDD positively influences external quality, which has to be further substantiated by industry experiments and longitudinal case studies.
Abstract: Context: Test driven development (TDD) has been extensively researched and compared to traditional approaches (test last development, TLD). Existing literature reviews show varying results for TDD. Objective: This study investigates how the conclusions of existing literature reviews change when taking two study quality dimensions into account, namely rigor and relevance. Method: In this study a systematic literature review has been conducted and the results of the identified primary studies have been analyzed with respect to rigor and relevance scores using the assessment rubric proposed by Ivarsson and Gorschek (2011). Rigor and relevance are rated on a scale, which is explained in this paper. Four categories of studies were defined based on high/low rigor and relevance. Results: We found that studies in the four categories come to different conclusions. In particular, studies with high rigor and relevance scores show clear results for improvement in external quality, which seems to come with a loss of productivity. At the same time, high rigor and relevance studies only investigate a small set of variables. Other categories contain many studies showing no difference, hence biasing the results negatively for the overall set of primary studies. Given the classification, differences from previous literature reviews could be highlighted. Conclusion: Strong indications are obtained that external quality is positively influenced, which has to be further substantiated by industry experiments and longitudinal case studies. Future studies in the high rigor and relevance category would contribute largely by focusing on a wider set of outcome variables (e.g. internal code quality). We also conclude that considering rigor and relevance in TDD evaluation is important, given the differences in results between categories and in comparison to previous reviews.

Journal ArticleDOI
TL;DR: This paper develops a taxonomy that classifies the information and artefacts considered as evidence for safety; the results strongly suggest the need for more practitioner-oriented and industry-driven empirical studies in the area of safety certification.
Abstract: Context Critical systems in domains such as aviation, railway, and automotive are often subject to a formal process of safety certification. The goal of this process is to ensure that these systems will operate safely without posing undue risks to the user, the public, or the environment. Safety is typically ensured via complying with safety standards. Demonstrating compliance to these standards involves providing evidence to show that the safety criteria of the standards are met. Objective In order to cope with the complexity of large critical systems and subsequently the plethora of evidence information required for achieving compliance, safety professionals need in-depth knowledge to assist them in classifying different types of evidence, and in structuring and assessing the evidence. This paper is a step towards developing such a body of knowledge that is derived from a large-scale empirically rigorous literature review. Method We use a Systematic Literature Review (SLR) as the basis for our work. The SLR builds on 218 peer-reviewed studies, selected through a multi-stage process, from 4963 studies published between 1990 and 2012. Results We develop a taxonomy that classifies the information and artefacts considered as evidence for safety. We review the existing techniques for safety evidence structuring and assessment, and further study the relevant challenges that have been the target of investigation in the academic literature. We analyse commonalities in the results among different application domains and discuss implications of the results for both research and practice. Conclusion The paper is, to our knowledge, the largest existing study on the topic of safety evidence. The results are particularly relevant to practitioners seeking a better grasp on evidence requirements as well as to researchers in the area of system safety. As a major finding of the review, the results strongly suggest the need for more practitioner-oriented and industry-driven empirical studies in the area of safety certification.

Journal ArticleDOI
TL;DR: There is a need to use knowledge-based approaches to improve the quality attributes of software documents that receive less attention, especially credibility, conciseness, and unambiguity.
Abstract: Context: Software documents are core artifacts produced and consumed in the documentation activity of the software lifecycle. Although knowledge-based approaches have been extensively used in software development for decades, the software engineering community still lacks a comprehensive understanding of how knowledge-based approaches are used in software documentation, especially the documentation of software architecture design. Objective: The objective of this work is to explore how knowledge-based approaches are employed in software documentation, their influence on the quality of software documentation, and the costs and benefits of using these approaches. Method: We use a systematic literature review method to identify the primary studies on knowledge-based approaches in software documentation, following a pre-defined review protocol. Results: Sixty studies are finally selected, in which twelve quality attributes of software documents, four cost categories, and nine benefit categories of using knowledge-based approaches in software documentation are identified. Architecture understanding is the top benefit of using knowledge-based approaches in software documentation. The cost of retrieving information from documents is the major concern when using knowledge-based approaches in software documentation. Conclusions: The findings of this review suggest several future research directions that are critical and promising but underexplored in current research and practice: (1) using knowledge-based approaches to improve the quality attributes of software documents that receive less attention, especially credibility, conciseness, and unambiguity; (2) applying knowledge-based approaches to the knowledge content in software documents that gets less attention in current applications, to further improve the practice of the software documentation activity; (3) putting more focus on the application of software documents using knowledge-based approaches (knowledge reuse, retrieval, reasoning, and sharing) in order to make the most use of software documents; and (4) evaluating the costs and benefits of using knowledge-based approaches in software documentation qualitatively and quantitatively.

Journal ArticleDOI
TL;DR: Examining iteration objectives on agile teams and how they relate to the golden triangle of project management success factors shows whether these teams incorporate the golden triangle factors in their objectives and whether they include additional objectives in their iterations, providing important insight for the continuing effort to better assess project management success.
Abstract: Context: While project management success factors have long been established via the golden triangle, little is known about how project iteration objectives and critical decisions relate to these success factors. It seems logical that teams' iteration objectives would reflect project management success factors, but this may not always be the case. If not, how do teams' objectives for iterations differ from the golden triangle of project management success factors? Objective: This study identifies iteration objectives and the critical decisions that relate to the golden triangle of project management success factors in agile software development teams working in two-week iterations. Method: The author conducted semi-structured interviews with members across three different agile software development teams using a hybrid of XP and Scrum agile methodologies. Iteration Planning and Retrospective meetings were also observed. Interview data were transcribed, coded and reviewed by the researcher and two independently trained research assistants. Data analysis involved organizing the data to identify iteration objectives and critical decisions and to determine whether they relate to project management success factors. Results: Agile teams discussed four categories of iteration objectives: Functionality, Schedule, Quality and Team Satisfaction. Two of these objectives map directly to two aspects of the golden triangle: schedule and quality. The agile teams' critical decisions were also examined to understand the types of decisions the teams would have made differently to ensure success, which resulted in four categories of such decisions: Quality, Dividing Work, Iteration Amendments and Team Satisfaction. Conclusion: This research has contributed to the software development and project management literature by examining iteration objectives on agile teams and how they relate to the golden triangle of project management success factors, to see whether these teams incorporate the golden triangle factors in their objectives and whether they include additional objectives in their iterations. What's more, this research identified four critical decisions related to the golden triangle. These findings provide important insight into the continuing effort to better assess project management success, particularly for agile teams.

Journal ArticleDOI
TL;DR: Examining agile practices using shared mental models theory elucidates how agile practices improve collaboration during the software development process and demonstrates the value of agile practices in developing shared mental models among developers and customers in software development teams.
Abstract: Context Agile software development is an alternative software development methodology that originated from practice to encourage collaboration between developers and users, to leverage rapid development cycles, and to respond to changes in a dynamic environment. Although agile practices are widely used in organizations, academics call for more theoretical research to understand the value of agile software development methodologies. Objective This study uses shared mental models theory as a lens to examine practices from agile software methodologies to understand how agile practices enable software development teams to complete tasks and to work together effectively as a team. Method A conceptual analysis of specific agile practices was conducted using the lens of shared mental models theory. Three agile practices from Extreme Programming and Scrum (system metaphor, stand-up meeting, and on-site customer) are examined in detail using shared mental models theory. Results Examining agile practices using shared mental models theory elucidates how agile practices improve collaboration during the software development process. The results explain how agile practices contribute toward a shared understanding and enhanced collaboration within the software development team. Conclusions This conceptual analysis demonstrates the value of agile practices in developing shared mental models (i.e. shared understanding) among developers and customers in software development teams. Some agile practices are useful in developing a shared understanding about the tasks to be completed, while other agile practices create shared mental models about team processes and team interactions. To elicit the desired outcomes of agile software development methods, software development teams should consider whether or not agile practices are used in a manner that enhances the team’s shared understanding. Using three specific agile practices as examples, this research demonstrates how theory, such as shared mental models theory, can enhance our understanding regarding how agile practices are useful in enhancing collaboration in the workplace.

Journal ArticleDOI
TL;DR: This paper proposes and validates a framework to help requirements engineers select the most adequate elicitation techniques at any time; the framework selects techniques other than the open interview, offers a wider range of possible techniques, and captures more requirements information.
Abstract: Context: This research deals with requirements elicitation technique selection for software product requirements and the over-selection of open interviews. Objectives: This paper proposes and validates a framework to help requirements engineers select the most adequate elicitation techniques at any time. Method: We have explored both the existing underlying theory and the results of empirical research to build the framework. Based on this, we have deduced and put together justified proposals about the framework components. We have also had to add information not found in theoretical or empirical sources. In these cases, we drew on our own experience and expertise. Results: A new, validated approach for requirements elicitation technique selection. This new approach selects techniques other than the open interview, offers a wider range of possible techniques and captures more requirements information. Conclusions: The framework is easily extensible and changeable. Whenever any theoretical or empirical evidence for an attribute, technique or adequacy value is unearthed, the information can be easily added to the framework.
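
The framework's core idea, matching attributes of the current situation to per-technique adequacy values, can be sketched as a table-driven ranking. The attributes, techniques, and adequacy scores below are invented placeholders, not the framework's actual contents.

```python
adequacy = {  # technique -> {situation attribute: adequacy score 0..2}
    "open interview":    {"informant available": 2, "tacit knowledge": 1, "many stakeholders": 0},
    "questionnaire":     {"informant available": 1, "tacit knowledge": 0, "many stakeholders": 2},
    "protocol analysis": {"informant available": 2, "tacit knowledge": 2, "many stakeholders": 0},
}

def select_techniques(situation: list[str], top_n: int = 2) -> list[str]:
    """Rank techniques by summed adequacy over the attributes that hold."""
    scores = {
        technique: sum(attrs.get(attribute, 0) for attribute in situation)
        for technique, attrs in adequacy.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(select_techniques(["tacit knowledge", "informant available"]))
# -> ['protocol analysis', 'open interview']
```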

Journal ArticleDOI
TL;DR: A systematic mapping study covering studies published between January 2002 and January 2012 found that current research focuses on documenting architectural decisions, and that only a few studies describe architectural decisions from industry.
Abstract: Context The software architecture of a system is the result of a set of architectural decisions. The topic of architectural decisions in software engineering has received significant attention in recent years. However, no systematic overview exists on the state of research on architectural decisions. Objective The goal of this study is to provide a systematic overview of the state of research on architectural decisions. Such an overview helps researchers reflect on previous research and plan future research. Furthermore, such an overview helps practitioners understand the state of research, and how research results can help practitioners in their architectural decision-making. Method We conducted a systematic mapping study, covering studies published between January 2002 and January 2012. We defined six research questions. We queried six reference databases and obtained an initial result set of 28,895 papers. We followed a search and filtering process that resulted in 144 relevant papers. Results After classifying the 144 relevant papers for each research question, we found that current research focuses on documenting architectural decisions. We found that only a few studies describe architectural decisions from industry. We identified potential future research topics: domain-specific architectural decisions (such as mobile), achieving specific quality attributes (such as reliability or scalability), uncertainty in decision-making, and group architectural decisions. Regarding empirical evaluations, around half of the papers use systematic empirical evaluation approaches (such as surveys or case studies). Still, few papers on architectural decisions use experiments. Conclusion Our study confirms the increasing interest in the topic of architectural decisions. This study helps the community reflect on the past ten years of research on architectural decisions. Researchers are offered a number of promising future research directions, while practitioners learn what existing papers offer.

Journal ArticleDOI
TL;DR: In this paper, the authors conducted an operational replication study in which extensive psychometric data from 279 master's-level students were collected in a software engineering program at a Swedish university, finding connections between emotional intelligence and work preferences, while no associations were found for self-compassion.
Abstract: Context: There is an increasing awareness among Software Engineering (SE) researchers and practitioners that more focus is needed on understanding the engineers developing software. Previous studies show significant associations between the personalities of software engineers and their work preferences. Objective: Various studies on personality in SE have found large, small or no effects, and there is no consensus on the importance of psychometric measurements in SE. There is also a lack of studies employing other psychometric instruments or using larger datasets. We aim to evaluate our results in a larger sample, with software engineers at an earlier stage of their career, using advanced statistics. Method: An operational replication study where extensive psychometric data from 279 master's-level students have been collected in a SE program at a Swedish university. Personality data based on the Five-Factor Model, the Trait Emotional Intelligence Questionnaire and Self-compassion have been collected. Statistical analysis investigated associations between psychometrics and work preferences, and the results were compared to our previous findings from 47 SE professionals. Results: Analysis confirms the existence of two main clusters of software engineers; one with more "intense" personalities than the other. This corroborates our earlier results on SE professionals. The student data also show similar associations between personalities and work preferences. However, for other associations there are differences due to the different population of subjects. We also found connections between emotional intelligence and work preferences, while no associations were found for self-compassion. Conclusion: The associations can help managers to predict and adapt projects and tasks to available staff. The results also show that the Emotional Intelligence instrument can be predictive. The research methods and analytical tools we employ can detect subtle associations and reflect differences between different groups and populations, and thus can be important tools for future research as well as industrial practice.
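
The clustering step reported in the results can be sketched with standard tooling: k-means with k=2 over Five-Factor trait vectors separates a more "intense" profile cluster from a less intense one. The sketch assumes scikit-learn and NumPy are available; the trait vectors are fabricated solely so the example runs and are not the study's data.

```python
from sklearn.cluster import KMeans
import numpy as np

# Columns: openness, conscientiousness, extraversion, agreeableness, neuroticism
profiles = np.array([
    [3.1, 3.0, 2.8, 3.2, 2.9],
    [3.0, 3.2, 2.7, 3.1, 3.0],
    [4.5, 4.4, 4.6, 4.2, 4.3],
    [4.4, 4.6, 4.5, 4.3, 4.4],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
for cluster in (0, 1):
    print(f"cluster {cluster}: mean trait levels "
          f"{profiles[labels == cluster].mean(axis=0).round(2)}")
```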

Journal ArticleDOI
TL;DR: The use of mockups to guide the MDWE process helps reduce the development cycle and incorporate agile practices into the model-driven workflow, and this process appears to be more efficient, in terms of errors and effort, than traditional modeling in MDWE.
Abstract: Context: Agile software development approaches are currently becoming the industry standard for Web Application development. On the other hand, Model-Driven Web Engineering (MDWE) methodologies are known to improve productivity when building this kind of application. However, current MDWE methodologies tend to ignore important aspects of Web Application development supported by agile processes, such as constant customer feedback or early design of user interfaces. Objective: In this paper we analyze the difficulties of supporting agile features in MDWE methodologies. Then, we propose an approach that eases the incorporation of well-known agile practices into MDWE. Method: We propose using User Interface prototypes (usually known as mockups) as a way to start the modeling process in the context of a mixed agile-MDWE process. To assist this process, we defined a lightweight metamodel that allows modeling features over mockups, interacting with end-users and generating MDWE models. Then, we conducted a statistical evaluation of both approaches (traditional vs. mockup-based modeling). Results: First we comment on how agile features can be added to MDWE processes using mockups. Then, we show by means of a quantitative study that the proposed approach is faster, less error-prone and still as complete as traditional MDWE processes. Conclusion: The use of mockups to guide the MDWE process helps in the reduction of the development cycle as well as in the incorporation of agile practices into the model-driven workflow. Complete MDWE models can be built and generated by using lightweight modeling over User Interface mockups, and this process appears to be more efficient, in terms of errors and effort, than traditional modeling in MDWE.

Journal ArticleDOI
TL;DR: The survey shows that the state of the art provides a rich set of methods, such as access control models, but several open research challenges remain, and that security in PAIS is a challenging interdisciplinary research field that assembles research methods and principles from both security and PAIS.
Abstract: Highlights: Security in PAIS constitutes an interdisciplinary research area. Security is a multifaceted subject that covers technical and human aspects in PAIS. Twelve security controls are identified for Process-Aware Information Systems (PAIS). Security controls have centered on preventive countermeasures so far. Reaction, detection, and human orientation constitute promising research directions. Context: Security in Process-Aware Information Systems (PAIS) has gained increased attention in current research and practice. However, a common understanding and agreement on security is still missing. In addition, the proliferation of literature makes it cumbersome to overview and determine the state of the art, and further to identify research challenges and gaps. In summary, a comprehensive and systematic overview of the state of the art in research and practice in the area of security in PAIS is missing. Objective: This paper investigates research on security in PAIS and aims at establishing a common understanding of terminology in this context. Further, it investigates which security controls are currently applied in PAIS. Method: A systematic literature review is conducted in order to classify and define security and security controls in PAIS. From initially 424 papers, we selected in total 275 publications that related to security and PAIS between 1993 and 2012. Furthermore, we analyzed and categorized the papers using a systematic mapping approach, which resulted in 5 categories and 12 security controls. Results: In the literature, security in PAIS often centers on specific (security) aspects such as security policies, security requirements, authorization and access control mechanisms, or inter-organizational scenarios. In addition, we identified 12 security controls in the areas of security concepts, authorization and access control, applications, verification, and failure handling in PAIS. Based on the results, open research challenges and gaps are identified and discussed with respect to possible solutions. Conclusion: This survey provides a comprehensive review of current security practice in PAIS and shows that security in PAIS is a challenging interdisciplinary research field that assembles research methods and principles from security and PAIS. We show that the state of the art provides a rich set of methods, such as access control models, but several open research challenges remain.
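
The survey names authorization and access control among its 12 identified controls and highlights access control models as a well-developed, preventive area. As a minimal sketch of that kind of countermeasure, here is a role-based access control (RBAC) check for workflow task execution; the roles, users, and tasks are invented.

```python
role_permissions = {  # role -> workflow tasks it may execute (hypothetical)
    "clerk":   {"enter_claim"},
    "manager": {"enter_claim", "approve_claim"},
    "auditor": {"view_log"},
}
user_roles = {"ann": {"clerk"}, "bob": {"manager"}}

def can_execute(user: str, task: str) -> bool:
    """A user may execute a task if any of their roles grants it."""
    return any(task in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

assert can_execute("bob", "approve_claim")
assert not can_execute("ann", "approve_claim")  # clerks cannot approve claims
print("RBAC checks passed")
```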

Journal ArticleDOI
TL;DR: This review presents the different software process modeling languages that have been developed in the last ten years, showing that model-based SPMLs (Software Process Modeling Languages) are a current trend.
Abstract: Context: Organizations working in software development are aware that processes are very important assets, and they are very conscious of the need to deploy well-defined processes with the goal of improving software product development and, particularly, quality. Software process modeling languages are an important support for describing and managing software processes in software-intensive organizations. Objective: This paper seeks to identify what software process modeling languages have been defined in the last decade, the relationships and dependencies among them and, starting from the current state, to define directions for future research. Method: A systematic literature review was developed. 1929 papers were retrieved by a manual search in 9 databases and 46 primary studies were finally included. Results: Since 2000 more than 40 languages have been first reported, each with a concrete purpose. We show that different base technologies have been used to define software process modeling languages. We provide a scheme where each language is registered together with the year it was created, the base technology used to define it and whether it is considered a starting point for later languages. This scheme is used to illustrate the trend in software process modeling languages. Finally, we present directions for future research. Conclusion: This review presents the different software process modeling languages that have been developed in the last ten years, showing that model-based SPMLs (Software Process Modeling Languages) are a current trend. Each one of these languages has been designed with a particular motivation, to solve problems which had been detected. However, there are still several problems to face, which have become evident in this review. This lets us provide researchers with some guidelines for future research on this topic.

Journal ArticleDOI
TL;DR: It is found that variability models—while providing system-wide abstractions over code—work best in centralized variability management and are, thus, absent in ecosystems with large free markets.
Abstract: Context Software ecosystems are increasingly popular for their economic, strategic, and technical advantages. Application platforms such as Android or iOS allow users to highly customize a system by selecting desired functionality from a large variety of assets. This customization is achieved using variability mechanisms. Objective Variability mechanisms are well-researched in the context of software product lines. Although software ecosystems are often seen as conceptual successors, the technology that sustains their success and growth is much less understood. Our objective is to improve empirical understanding of variability mechanisms used in successful software ecosystems. Method We analyze five ecosystems, ranging from the Linux kernel through Eclipse to Android. A qualitative analysis identifies and characterizes variability mechanisms together with their organizational context. This analysis leads to a conceptual framework that unifies ecosystem-specific aspects using a common terminology. A quantitative analysis investigates scales, growth rates, and—most importantly—dependency structures of the ecosystems. Results In all the studied ecosystems, we identify rich dependency languages and variability descriptions that declare many direct and indirect dependencies. Indirect dependencies to abstract capabilities, as opposed to concrete variability units, are used predominantly in fast-growing ecosystems. We also find that variability models—while providing system-wide abstractions over code—work best in centralized variability management and are, thus, absent in ecosystems with large free markets. These latter ecosystems tend to emphasize maintaining capabilities and common vocabularies, dynamic discovery, and binding with strong encapsulation of contributions, together with uniform distribution channels. Conclusion The use of specialized mechanisms in software ecosystems with large free markets, as opposed to software product lines, calls for recognition of a new discipline—variability encouragement.
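
The distinction the results draw, direct dependencies on concrete variability units versus indirect dependencies on abstract capabilities, can be illustrated with a toy resolver. The package and capability names are invented and do not come from any of the five studied ecosystems.

```python
providers = {  # capability -> concrete units known to provide it
    "http-client": ["libcurl-wrapper", "pure-py-http"],
}
installed = {"libcurl-wrapper"}

def resolve(dependency: str) -> str:
    """Indirect dependencies name a capability: any provider satisfies them.
    Direct dependencies name exactly one concrete unit."""
    if dependency in providers:
        for unit in providers[dependency]:
            if unit in installed:
                return unit              # an already-installed provider wins
        return providers[dependency][0]  # otherwise pick any known provider
    return dependency

print(resolve("http-client"))   # capability   -> libcurl-wrapper
print(resolve("pure-py-http"))  # concrete unit -> pure-py-http
```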

Journal ArticleDOI
TL;DR: This paper analyzes the state of the art in the field of formal verification of models, restricting the analysis to those approaches applied over static software models, complemented or not with constraints expressed in textual languages, typically the Object Constraint Language (OCL).
Abstract: Context: Model-driven Engineering (MDE) promotes the utilization of models as primary artifacts in all software engineering activities. Therefore, mechanisms to ensure model correctness become crucial, especially when applying MDE to the development of software, where software is the result of a chain of (semi)automatic model transformations that refine initial abstract models into lower-level ones from which the final code is eventually generated. Clearly, in this context, an error in the model(s) is propagated to the code, endangering the soundness of the resulting software. Formal verification of software models is a promising approach that advocates the employment of formal methods to achieve model correctness, and it has received a considerable amount of attention in the last few years. Objective: The objective of this paper is to analyze the state of the art in the field of formal verification of models, restricting the analysis to those approaches applied over static software models, complemented or not with constraints expressed in textual languages, typically the Object Constraint Language (OCL). Method: We have conducted a Systematic Literature Review (SLR) of the published works in this field, describing their main characteristics. Results: The study is based on a set of 48 resources that have been grouped into 18 different approaches according to their affinity. For each of them we have analyzed, among other issues, the formalism used, the support given to OCL, the correctness properties addressed and the feedback yielded by the verification process. Conclusions: One of the most important conclusions obtained is that current model verification approaches are strongly influenced by the support given to OCL. Another important finding is that, in general, current verification tools present important flaws, such as the lack of integration into the model designer's tool chain or the lack of efficiency when verifying large, real-life models.

Journal ArticleDOI
TL;DR: It is observed that core developers’ attitudes and knowledge sharing behaviors were linked to their involvement in actual software development and the demands of their wider project teams, and core developers appeared to naturally possess high levels of insightful characteristics, which became evident very early during teamwork.
Abstract: Context Prior research has established that a few individuals generally dominate project communication and source code changes during software development. Moreover, this pattern has been found to exist irrespective of task assignments at project initiation. Objective While this phenomenon has been noted, prior research has not sought to understand these dominant individuals. Previous work considering the effect of team structures on team performance has found that core communicators are the gatekeepers of their teams’ knowledge, and the performance of these members was correlated with their teams’ success. Building on this work, we have employed a longitudinal approach to study the way core developers’ attitudes, knowledge sharing behaviors and task performance change over the course of their project, based on the analysis of repository data. Method We first used social network analysis (SNA) and standard statistical analysis techniques to identify and select artifacts from ten different software development teams. These procedures were also used to select central practitioners among these teams. We then applied psycholinguistic analysis and directed content analysis (CA) techniques to interpret the content of these practitioners’ messages. Finally, we inspected these core developers’ activities as recorded in system change logs at various points in time during systems’ development. Results Among our findings, we observe that core developers’ attitudes and knowledge sharing behaviors were linked to their involvement in actual software development and the demands of their wider project teams. However, core developers appeared to naturally possess high levels of insightful characteristics, which became evident very early during teamwork. Conclusions Project performance would likely benefit from strategies aimed at surrounding core developers with other competent communicators. Core developers should also be supported by a wider team who are willing to ask questions and challenge their ideas. Finally, the availability of adequate communication channels would help with maintaining positive team climate, and this is likely to mitigate the negative effects of distance during distributed developments.
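
The first step reported in the method, identifying central practitioners via social network analysis, can be sketched as a degree-centrality ranking over a message-reply graph. The contributor names and reply pairs below are hypothetical.

```python
from collections import Counter

replies = [  # (sender, receiver) pairs mined from, e.g., mailing list threads
    ("dana", "eli"), ("dana", "finn"), ("eli", "dana"),
    ("finn", "dana"), ("gus", "dana"), ("eli", "finn"),
]

degree: Counter[str] = Counter()
for sender, receiver in replies:
    degree[sender] += 1
    degree[receiver] += 1

# The highest-degree communicators are candidate core developers.
print(degree.most_common(3))  # [('dana', 5), ('eli', 3), ('finn', 3)]
```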

Journal ArticleDOI
TL;DR: The concept of software ecosystem architecture can be used analytically and constructively in the analysis and design of software ecosystems, respectively.
Abstract: Context Telemedicine, the provision of health care at a distance, is arguably an effective way of increasing access to, reducing the cost of, and improving the quality of care. However, the deployment of telemedicine is faced with standards that are hard to use, application-specific data models, and application stove-pipes that inhibit the adoption of telemedical solutions. To what extent can a software ecosystem approach to telemedicine alleviate this? Objective In this article, we define the concept of software ecosystem architecture as the structure(s) of a software ecosystem comprising elements, relations among them, and properties of both. Our objective is to show how this concept can be used (i) in the analysis of existing software ecosystems and (ii) in the design of new software ecosystems. Method We performed a mixed-method study that consisted of a case study and an experiment. For (i), we performed a descriptive, revelatory case study of the Danish telemedicine ecosystem and for (ii), we experimentally designed, implemented, and evaluated the architecture of 4S. Results We contribute in three areas. First, we define the software ecosystem architecture concept that captures organization, business, and software aspects of software ecosystems. Secondly, we apply this concept in our case study and demonstrate that it is a viable concept for software ecosystem analysis. Finally, based on our experiments, we discuss the practice of software engineering for software ecosystems, drawing from our experience in creating and evolving the 4S telemedicine ecosystem. Conclusion The concept of software ecosystem architecture can be used analytically and constructively in the analysis and design of software ecosystems, respectively.

Journal ArticleDOI
TL;DR: To improve the state of the art, a new dependency model is proposed that tackles the problems identified from the case study and the related literature, and suggests nine dependency types with precise definitions as its initial set.
Abstract: Context: The dependencies between individual requirements have an important influence on software engineering activities, e.g., project planning, architecture design, and change impact analysis. Although dozens of requirement dependency types were suggested in the literature from different points of interest, an evaluation of the applicability of these dependency types in requirements engineering is still lacking. Objective: Understanding the effect of these requirement dependencies on software engineering activities is useful but not trivial. In this study, we aimed to first investigate whether the existing dependency types are useful in practice, in particular for change propagation analysis, and then suggest improvements for dependency classification and definition. Method: We conducted a case study that evaluated the usefulness and applicability of two well-known generic dependency models covering 25 dependency types. The case study was conducted in a real-world industry project with three participants who offered different perspectives. Results: Our initial evaluation found that there exist a number of overlapping and/or ambiguous dependency types among the current models; five dependency types are particularly useful in change propagation analysis; and practitioners with different backgrounds possess various viewpoints on change propagation. To improve the state of the art, a new dependency model is proposed that tackles the problems identified from the case study and the related literature. The new model classifies dependencies into intrinsic and additional dependencies at the top level, and suggests nine dependency types with precise definitions as its initial set. Conclusions: Our case study provides insights into requirement dependencies and their effects on change propagation analysis for both research and practice. The resulting new dependency model needs further evaluation and improvement.
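
The change propagation analysis the case study evaluates can be sketched as reverse reachability over directed "depends on" links: a change to one requirement may propagate to everything that transitively depends on it. The requirement IDs and links below are hypothetical.

```python
from collections import deque

depends_on = {  # requirement -> requirements it depends on
    "R2": {"R1"},
    "R3": {"R1", "R2"},
    "R4": {"R3"},
}

def impacted_by(changed: str) -> set[str]:
    """All requirements transitively depending on the changed one."""
    dependents = {req for req, deps in depends_on.items() if changed in deps}
    queue, impacted = deque(dependents), set()
    while queue:
        req = queue.popleft()
        if req in impacted:
            continue
        impacted.add(req)
        queue.extend(r for r, deps in depends_on.items() if req in deps)
    return impacted

print(sorted(impacted_by("R1")))  # ['R2', 'R3', 'R4']
```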

Journal ArticleDOI
TL;DR: Agile developers have a higher sense of being able to impact the organisation than non-agile developers, and have information channels that differ significantly from those of non-agile developers.
Abstract: Context Empowerment of employees at work has been known to have a positive impact on job motivation and satisfaction. Software development is a field of knowledge work wherein one should also expect to see these effects, and the idea of empowerment has become particularly visible in agile methodologies, whose proponents emphasise team empowerment and individual control of the work activities as a central concern. Objective This research aims to get a better understanding of how empowerment is enabled in software development teams, both agile and non-agile, to identify differences in empowering practices and levels of individual empowerment. Method Twenty-five interviews with agile and non-agile developers from Norway and Canada on decision making and empowerment are analysed. The analysis is conducted using a conceptual model with categories for involvement, structural empowerment and psychological empowerment. Results Both kinds of development organisations are highly empowered and they are similar in most aspects relating to empowerment. However, there is a distinction in the sense that agile developers have more possibilities to select work tasks and influence the priorities in a development project due to team empowerment. Agile developers seem to put a higher emphasis on the value of information in decision making, and have more prescribed activities to enable low-cost information flow. Non-agile developers who show interest and are rich in initiative obtain more power by attaining managing roles. Conclusion Agile developers have a higher sense of being able to impact the organisation than non-agile developers, and have information channels that differ significantly from those of non-agile developers. For non-agile teams, higher empowerment can be obtained by systematically applying low-cost participative decision-making practices in the manager–developer relation and among peer developers. For agile teams, it is essential to follow the empowering practices already established more rigorously.

Journal ArticleDOI
TL;DR: A panoramic view on the anatomy of the quality models for web services may be a good reference for prospective researchers and practitioners in the field, and may especially help avoid defining new proposals that do not align with current research.
Abstract: Context: Quality of Service (QoS) is a major issue in various web-service-related activities. Quality models have been proposed as the engineering artefact to provide a common framework of understanding for QoS, by defining the quality factors that apply to web service usage. Objective: The goal of this study is to evaluate the current state of the art of the proposed quality models for web services, specifically: (1) which these proposals are and how they are related; (2) what their structural characteristics are; (3) which quality factors are the most and least addressed; and (4) what their most consolidated definitions are. Method: We conducted a systematic mapping by defining a robust protocol that combines automatic and manual searches from different sources. We used a rigorous method to elicit the keywords from the research questions and selection criteria to retrieve the final papers to evaluate. We adopted the ISO/IEC 25010 standard to articulate our analysis. Results: We evaluated 47 different quality models from 65 papers that fulfilled the selection criteria. By analyzing these quality models in depth, we have: (1) distributed the proposals along the time dimension and identified their relationships; (2) analyzed their size (visualizing the number of nodes and levels) and definition coverage (as an indicator of the quality of the proposals); (3) quantified the coverage of the different ISO/IEC 25010 quality factors by the proposals; (4) identified the quality factors that appeared in at least 30% of the surveyed proposals and provided the most consolidated definitions for them. Conclusions: We believe that this panoramic view on the anatomy of the quality models for web services may be a good reference for prospective researchers and practitioners in the field, and may especially help avoid defining new proposals that do not align with current research.
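The coverage analysis in steps (3) and (4) is essentially a counting exercise over proposal-to-factor mappings. Below is a minimal sketch; the proposal names and their coverage sets are invented placeholders, while the characteristic list follows the ISO/IEC 25010 product quality model and the 30% threshold comes from the abstract:

```python
# ISO/IEC 25010 product quality characteristics used as the analysis frame.
ISO_25010 = ["functional suitability", "performance efficiency", "compatibility",
             "usability", "reliability", "security", "maintainability", "portability"]

def factor_coverage(models, threshold=0.30):
    """Fraction of quality-model proposals that address each characteristic.

    models: dict mapping a proposal name to the set of characteristics it covers.
    Returns (coverage dict, list of characteristics at or above the threshold).
    """
    n = len(models)
    coverage = {c: sum(c in covered for covered in models.values()) / n
                for c in ISO_25010}
    consolidated = [c for c, share in coverage.items() if share >= threshold]
    return coverage, consolidated

# Hypothetical proposals; the 47 real models are in the paper, not shown here.
proposals = {
    "QM-A": {"reliability", "performance efficiency", "security"},
    "QM-B": {"reliability", "usability"},
    "QM-C": {"performance efficiency", "maintainability"},
}
cov, consolidated = factor_coverage(proposals)
print(consolidated)  # e.g. ['performance efficiency', 'usability', 'reliability', ...]
```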

Journal ArticleDOI
TL;DR: These algorithms are the first known techniques that are efficient enough to be used on dependencies extracted from real systems, opening new possibilities for creating reverse engineering and model management tools for variability models.
Abstract: Context Variability modeling, and in particular feature modeling, is a central element of model-driven software product line architectures. Such architectures often emerge from legacy code, but creating feature models from large legacy systems is a long and arduous task. We describe three synthesis scenarios that can benefit from the algorithms in this paper. Objective This paper addresses the problem of automatic synthesis of feature models from propositional constraints. We show that the decision version of the problem is NP-hard. We designed two efficient algorithms for synthesizing feature models from CNF and DNF formulas, respectively. Method We performed an experimental evaluation of the algorithms against a binary decision diagram (BDD)-based approach and a formal concept analysis (FCA)-based approach, using models derived from realistic models. Results Our evaluation shows a 10- to 1,000-fold performance improvement for our algorithms over the BDD-based approach. The performance of the DNF-based algorithm was similar to that of the FCA-based approach, with advantages for both techniques. We identified input properties that affect the runtimes of the CNF- and DNF-based algorithms. Conclusions Our algorithms are the first known techniques that are efficient enough to be used on dependencies extracted from real systems, opening new possibilities for creating reverse engineering and model management tools for variability models.
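To illustrate what synthesis from propositional constraints starts from, here is a toy sketch that recovers mandatory features and candidate parent edges by brute-force enumeration of satisfying assignments. The feature names and constraints are invented, and the exponential enumeration is only for illustration; the paper's CNF/DNF algorithms are precisely what makes this tractable on real inputs:

```python
from itertools import product

# Hypothetical features; real inputs are dependencies mined from legacy code.
FEATURES = ["root", "gui", "cli", "logging"]

def satisfies(a):
    # root is always present; gui XOR cli (an alternative group under root);
    # logging requires gui. Encoded directly as a Python predicate here.
    return a["root"] and (a["gui"] != a["cli"]) and (not a["logging"] or a["gui"])

# Enumerate all satisfying assignments (exponential -- illustration only).
configs = []
for bits in product([False, True], repeat=len(FEATURES)):
    a = dict(zip(FEATURES, bits))
    if satisfies(a):
        configs.append(a)

# A feature is mandatory if it is present in every valid configuration,
# and f -> g is an implied (candidate parent) edge if g holds whenever f does.
# (A dead feature would vacuously imply everything; this toy has none.)
mandatory = [f for f in FEATURES if all(c[f] for c in configs)]
implied = [(f, g) for f in FEATURES for g in FEATURES
           if f != g and all(c[g] for c in configs if c[f])]
print("mandatory:", mandatory)      # ['root']
print("implied edges:", implied)    # includes ('gui', 'root'), ('logging', 'gui')
```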

Journal ArticleDOI
TL;DR: The results indicate that the development performance and product quality achieved by following the Agile Process were superior to those achieved by following the Incremental Process in the projects compared.
Abstract: Context: Although Agile software development models have been widely used as a base for the software project life-cycle since the 1990s, studies that follow a sound empirical method and quantitatively reveal the effect of using these models over Traditional models are scarce. Objective: This article explains the empirical method of, and the results from, systematic analyses and comparison of the development performance and product quality of the Incremental Process and the Agile Process adopted in two projects of a middle-sized telecommunication software development company. The Incremental Process is an adaptation of the Waterfall Model, whereas the newly introduced Agile Process is a combination of the Unified Software Development Process, Extreme Programming, and Scrum. Method: The method followed to perform the analyses and comparison benefits from the combined use of qualitative and quantitative methods. It utilizes: the GQM approach to set measurement objectives; CMMI as the reference model to map the activities of the software development processes; and a pre-defined assessment approach to verify the consistency of process executions and evaluate measure characteristics prior to quantitative analysis. Results: The results of the comparison showed that the Agile Process performed better than the Incremental Process in terms of productivity (79%), defect density (57%), defect resolution effort ratio (26%), Test Execution V&V Effectiveness (21%), and effort prediction capability (4%). These results indicate that the development performance and product quality achieved by following the Agile Process were superior to those achieved by following the Incremental Process in the projects compared. Conclusion: The acts of measurement, analysis, and comparison enabled a comprehensive review of the two development processes, and resulted in an understanding of their strengths and weaknesses. The comparison results constituted objective evidence for organization-wide deployment of the Agile Process in the company.
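How such comparison percentages are derived can be shown with a small sketch. The raw measures below are invented placeholders, not the study's data, and the formulas (size per effort for productivity, defects per KLOC for defect density) are common conventions rather than the paper's exact GQM definitions:

```python
# Hypothetical raw measures for two projects -- illustrative values only.
incremental = {"size_loc": 50_000, "effort_ph": 10_000, "defects": 400}
agile       = {"size_loc": 60_000, "effort_ph": 8_000,  "defects": 300}

def productivity(p):
    return p["size_loc"] / p["effort_ph"]          # LOC per person-hour

def defect_density(p):
    return p["defects"] / (p["size_loc"] / 1000)   # defects per KLOC

def improvement(new, old):
    """Relative improvement of `new` over `old`, as a percentage."""
    return 100 * (new - old) / old

print(f"productivity: {improvement(productivity(agile), productivity(incremental)):+.0f}%")
# For defect density lower is better, so the improvement is the reduction:
print(f"defect density: {-improvement(defect_density(agile), defect_density(incremental)):+.0f}%")
```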