
Showing papers in "Information & Software Technology in 2012"


Journal ArticleDOI
TL;DR: The mapping study delivers the first systematic summary of maturity model research, and helps practitioners planning to use a maturity model to identify which maturity models are suitable for their domain and where limitations exist.
Abstract: Context: Maturity models offer organizations a simple but effective means of measuring the quality of their processes. Having emerged from software engineering, their fields of application have widened, and maturity model research is becoming more important. The number of publications has also risen steadily over the last two decades. Until now, no studies have been available that summarize the activities and results of the field of maturity model research. Objective: The objective of this paper is to structure and analyze the available literature of the field of maturity model research to identify the state-of-the-art research as well as research gaps. Method: A systematic mapping study was conducted. It included relevant publications in journals and IS conferences. Mapping studies are a suitable method for structuring a broad research field with respect to research questions about contents, methods, and trends in the available publications. Results: The mapping of 237 articles showed that current maturity model research spans more than 20 domains, heavily dominated by software development and software engineering. The study revealed that most publications deal with the development of maturity models and empirical studies. Theoretical, reflective publications are scarce. Furthermore, the relation between conceptual and design-oriented maturity model development was analyzed, indicating that there is still a gap in evaluating and validating developed maturity models. Finally, a comprehensive research framework was derived from the study results, and implications for further research are given. Conclusion: The mapping study delivers the first systematic summary of maturity model research. The categorization of available publications helps researchers gain an overview of the state-of-the-art research and current research gaps. The proposed research framework supports researchers in categorizing their own projects. In addition, practitioners planning to use a maturity model may use the study as a starting point to identify which maturity models are suitable for their domain and where limitations exist.

414 citations


Journal ArticleDOI
TL;DR: A systematic literature review of empirical studies on ML models published in the last two decades finds that eight types of ML techniques have been employed in SDEE models; overall, the estimation accuracy of these ML models is close to an acceptable level and better than that of non-ML models.
Abstract: Context: Software development effort estimation (SDEE) is the process of predicting the effort required to develop a software system. In order to improve estimation accuracy, many researchers have proposed machine learning (ML) based SDEE models (ML models) since the 1990s. However, there has been no attempt to analyze the empirical evidence on ML models in a systematic way. Objective: This research aims to systematically analyze ML models from four aspects: type of ML technique, estimation accuracy, model comparison, and estimation context. Method: We performed a systematic literature review of empirical studies on ML models published in the last two decades (1991-2010). Results: We identified 84 primary studies relevant to the objective of this research. After investigating these studies, we found that eight types of ML techniques have been employed in SDEE models. Overall, the estimation accuracy of these ML models is close to an acceptable level and better than that of non-ML models. Furthermore, different ML models have different strengths and weaknesses and thus favor different estimation contexts. Conclusion: ML models are promising in the field of SDEE. However, the application of ML models in industry is still limited, and more effort and incentives are needed to facilitate their application. To this end, based on the findings of this review, we provide recommendations for researchers as well as guidelines for practitioners.

403 citations


Journal ArticleDOI
TL;DR: This paper considers the cross-company defect prediction scenario, where source and target data are drawn from different companies, and proposes a novel algorithm called Transfer Naive Bayes (TNB), which is more accurate in terms of AUC and runs faster than state-of-the-art methods.
Abstract: Context: Software defect prediction studies usually build models using within-company data, but very few have focused on prediction models trained with cross-company data. Models built on within-company data are difficult to employ in practice when local data repositories are lacking. Recently, transfer learning has attracted more and more attention for building classifiers in a target domain using data from a related source domain. It is very useful in cases where the distributions of training and test instances differ, but is it appropriate for cross-company software defect prediction? Objective: In this paper, we consider the cross-company defect prediction scenario, where source and target data are drawn from different companies. In order to harness cross-company data, we exploit transfer learning to build a fast and highly effective prediction model. Method: Unlike prior work, which selects training data that are similar to the test data, we propose a novel algorithm called Transfer Naive Bayes (TNB) that uses the information of all the proper features in the training data. Our solution estimates the distribution of the test data and transfers cross-company data information into the weights of the training data. The defect prediction model is built on these weighted data. Results: This article presents a theoretical analysis of the comparative methods and shows experimental results on data sets from different organizations. They indicate that TNB is more accurate in terms of AUC (the area under the receiver operating characteristic curve) and runs in less time than state-of-the-art methods. Conclusion: It is concluded that when there are too few local training data to train good classifiers, useful knowledge transferred from differently distributed training data at the feature level may help. We are optimistic that our transfer learning method can guide optimal resource allocation strategies, which may reduce software testing cost and increase the effectiveness of the software testing process.
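The instance-weighting step is the heart of TNB. The sketch below illustrates the idea in Python; the gravitation-style weight follows the form reported for TNB, but the details (and all helper names) should be read as an approximation rather than the paper's exact implementation.

```python
import numpy as np

def tnb_weights(X_train, X_test):
    """Gravitation-style instance weights for TNB (a sketch; treat the
    exact formula as approximate). An attribute of a training instance
    'matches' if its value lies inside the [min, max] range of that
    attribute in the test (target-company) data; instances matching on
    more attributes get sharply higher weight."""
    lo, hi = X_test.min(axis=0), X_test.max(axis=0)
    k = X_train.shape[1]                                  # attribute count
    s = ((X_train >= lo) & (X_train <= hi)).sum(axis=1)   # matches per row
    return s / (k - s + 1) ** 2

# The weights then replace unit counts in a Naive Bayes fit; e.g. the
# (Laplace-smoothed) class priors become weighted sums:
def weighted_priors(y, w):
    classes = np.unique(y)
    return {c: (w[y == c].sum() + 1) / (w.sum() + len(classes))
            for c in classes}
```

The same weighting carries through to the conditional probability estimates, so cross-company instances that resemble the target data dominate the fitted model.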

369 citations


Journal ArticleDOI
TL;DR: A new framework is proposed for evaluating competing prediction systems based upon an unbiased statistic, Standardised Accuracy, testing the result likelihood relative to the baseline technique of random 'predictions', that is guessing, and calculation of effect sizes, which leads to meaningful results.
Abstract: Context: Software engineering has a problem in that when we empirically evaluate competing prediction systems we obtain conflicting results. Objective: To reduce the inconsistency amongst validation study results and provide a more formal foundation to interpret results with a particular focus on continuous prediction systems. Method: A new framework is proposed for evaluating competing prediction systems based upon (1) an unbiased statistic, Standardised Accuracy, (2) testing the result likelihood relative to the baseline technique of random 'predictions', that is guessing, and (3) calculation of effect sizes. Results: Previously published empirical evaluations of prediction systems are re-examined and the original conclusions shown to be unsafe. Additionally, even the strongest results are shown to have no more than a medium effect size relative to random guessing. Conclusions: Biased accuracy statistics such as MMRE are deprecated. By contrast this new empirical validation framework leads to meaningful results. Such steps will assist in performing future meta-analyses and in providing more robust and usable recommendations to practitioners.
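For reference, the framework's key quantities can be stated compactly. The definitions below are reconstructed from the framework's description, so treat them as a close paraphrase rather than a verbatim copy of the paper's notation; a prediction technique P_i is scored against random guessing P_0:

```latex
% Mean absolute residual of technique P_i over n cases:
\mathrm{MAR}_i = \frac{1}{n}\sum_{j=1}^{n} \lvert y_j - \hat{y}_j \rvert
% Standardised Accuracy: improvement over random guessing P_0, where
% \overline{\mathrm{MAR}}_{P_0} is the mean MAR over many runs of
% predicting each case with another randomly drawn case's actual value:
\mathrm{SA}_i = \left(1 - \frac{\mathrm{MAR}_i}{\overline{\mathrm{MAR}}_{P_0}}\right)\times 100
% Effect size relative to guessing (s_{P_0}: std. dev. of the guessing runs):
\Delta = \frac{\mathrm{MAR}_i - \overline{\mathrm{MAR}}_{P_0}}{s_{P_0}}
```

An SA value at or below zero means the prediction system does no better than guessing, which is what makes the statistic an interpretable baseline-anchored measure, unlike MMRE.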

341 citations


Journal ArticleDOI
TL;DR: The situational factor reference framework presented herein represents a sound initial reference framework for the key situational elements affecting the software process definition and optimisation and provides support for practitioners who are challenged with defining and maintaining software development processes.
Abstract: Context: An optimal software development process is regarded as being dependent on the situational characteristics of individual software development settings. Such characteristics include the nature of the application(s) under development, team size, requirements volatility and personnel experience. However, no comprehensive reference framework of the situational factors affecting the software development process is presently available. Objective: The absence of such a comprehensive reference framework of the situational factors affecting the software development process is problematic not just because it inhibits our ability to optimise the software development process, but perhaps more importantly, because it potentially undermines our capacity to ascertain the key constraints and characteristics of a software development setting. Method: To address this deficiency, we have consolidated a substantial body of related research into an initial reference framework of the situational factors affecting the software development process. To support the data consolidation, we have applied rigorous data coding techniques from Grounded Theory and we believe that the resulting framework represents an important contribution to the software engineering field of knowledge. Results: The resulting reference framework of situational factors consists of eight classifications and 44 factors that inform the software process. We believe that the situational factor reference framework presented herein represents a sound initial reference framework for the key situational elements affecting the software process definition. Conclusion: In addition to providing a useful reference listing for the research community and for committees engaged in the development of standards, the reference framework also provides support for practitioners who are challenged with defining and maintaining software development processes. Furthermore, this framework can be used to develop a profile of the situational characteristics of a software development setting, which in turn provides a sound foundation for software development process definition and optimisation.

315 citations


Journal ArticleDOI
TL;DR: A multiple case study consisting of four projects in two software product companies that recently adopted Scrum identified three main challenges to shared decision-making in agile software development: alignment of strategic product plans with iteration plans, allocation of development resources, and performing development and maintenance tasks in teams.
Abstract: Context: Agile software development changes the nature of collaboration, coordination, and communication in software projects. Objective: Our objective was to understand the challenges of shared decision-making in agile software development teams. Method: We designed a multiple case study consisting of four projects in two software product companies that recently adopted Scrum. We collected data in semi-structured interviews, through participant observations, and from process artifacts. Results: We identified three main challenges to shared decision-making in agile software development: alignment of strategic product plans with iteration plans, allocation of development resources, and performing development and maintenance tasks in teams. Conclusion: Agile software development requires alignment of decisions on the strategic, tactical, and operational levels in order to overcome these challenges. Agile development also requires a transition from specialized skills to redundancy of functions and from rational to naturalistic decision-making. This takes time; the case companies needed from one to two years to change from traditional, hierarchical decision-making to shared decision-making in software development projects.

178 citations


Journal ArticleDOI
TL;DR: A framework for congruent reference architectures is proposed, based on state-of-the-art results from literature and practice, and validated as an analytical tool for analyzing the congruence of software reference architectures.
Abstract: Context: A software reference architecture is a generic architecture for a class of systems that is used as a foundation for the design of concrete architectures from this class. The generic nature of reference architectures leads to less well-defined architecture designs and application contexts, which makes architecture goal definition and architecture design non-trivial steps, rooted in uncertainty. Objective: The paper presents a structured and comprehensive study on the congruence between context, goals, and design of software reference architectures. It proposes a tool for the design of congruent reference architectures and for the analysis of the level of congruence of existing reference architectures. Method: We define a framework for congruent reference architectures. The framework is based on state-of-the-art results from literature and practice. We validate our framework and its quality as an analytical tool by applying it to the analysis of 24 reference architectures. The conclusions from our analysis are compared with the opinions of experts on these reference architectures, documented in the literature and in dedicated communication. Results: Our framework consists of a multi-dimensional classification space and five types of reference architectures that are formed by combining specific values from the classification space. Reference architectures that can be classified as one of these types have a better chance of becoming a success. The validation of our framework confirms its quality as a tool for analyzing the congruence of software reference architectures. Conclusion: This paper supports software architects and scientists in the inception, design, and application of congruent software reference architectures. Applying the tool improves a reference architecture's chance of success.

161 citations


Journal ArticleDOI
TL;DR: This review aims to obtain an overview of the existing approaches in analyzing and improving software evolvability at architectural level, and investigate impacts on research and practice.
Abstract: Context: Software evolvability describes a software system's ability to easily accommodate future changes. It is a fundamental characteristic for making strategic decisions and for increasing the economic value of software. For long-lived systems, there is a need to address evolvability explicitly during the entire software lifecycle in order to prolong the productive lifetime of software systems. For this reason, many studies have been carried out in this area by both researchers and industry practitioners. These studies comprise a spectrum of particular techniques and practices, covering various activities in the software lifecycle. However, no systematic review has previously been conducted to provide an extensive overview of software architecture evolvability research. Objective: In this work, we present such a systematic review of architecting for software evolvability. The objective of this review is to obtain an overview of the existing approaches for analyzing and improving software evolvability at the architectural level, and to investigate impacts on research and practice. Method: The identification of the primary studies in this review was based on a pre-defined search strategy and a multi-step selection process. Results: Based on the research topics in these studies, we have identified five main categories of themes: (i) techniques supporting quality consideration during software architecture design, (ii) architectural quality evaluation, (iii) economic valuation, (iv) architectural knowledge management, and (v) modeling techniques. A comprehensive overview of these categories and related studies is presented. Conclusion: The findings of this review also reveal suggestions for further research and practice, such as: (i) it is necessary to establish a theoretical foundation for software evolution research, because expertise in this area is still built on case studies instead of generalized knowledge; (ii) it is necessary to combine appropriate techniques to address the multifaceted perspectives of software evolvability, because each technique has a specific focus and context for which it is appropriate within the software lifecycle.

155 citations


Journal ArticleDOI
TL;DR: It is still necessary to define a method with the guidelines needed to implement both software development processes and ITSM processes with reduced effort, especially because some processes in the two categories overlap.
Abstract: Context: In recent years, many software companies have considered Software Process Improvement (SPI) essential for successful software development. These companies have also shown special interest in IT Service Management (ITSM). SPI standards have evolved to incorporate ITSM best practices. Objective: This paper presents a systematic literature review of ITSM process improvement initiatives based on the ISO/IEC 15504 standard for process assessment and improvement. Method: A systematic literature review was performed, based on the guidelines proposed by Kitchenham and the review protocol template developed by Biolchini et al. Results: Twenty-eight relevant studies related to ITSM process improvement were found. From the analysis of these studies, nine different ITSM process improvement initiatives were identified. Seven of these initiatives use ISO/IEC 15504-conformant process assessment methods. Conclusion: During the last decade, in order to satisfy the ongoing demand of mature software development companies for assessing and improving ITSM processes, different models using the measurement framework of ISO/IEC 15504 have been developed. However, it is still necessary to define a method with the guidelines needed to implement both software development processes and ITSM processes with reduced effort, especially because some processes in the two categories overlap.

126 citations


Journal ArticleDOI
TL;DR: This paper elaborates the design, implementation, and evaluation of a novel variable-strength t-way strategy based on the harmony search algorithm, called the Harmony Search Strategy (HSS); benchmarking shows that AI-based strategies such as HSS tend to outperform pure computational strategies in terms of test size.
Abstract: Context: Although useful, AI-based variable-strength t-way strategies are lacking in terms of support for high interaction strengths. Additionally, most AI-based strategies do not address support for constraints. Addressing these issues, this paper elaborates the design, implementation, and evaluation of a novel variable-strength t-way strategy based on the harmony search algorithm, called the Harmony Search Strategy (HSS). Objective: The objective of this work is to investigate the adoption of the harmony search algorithm for constructing a variable-strength t-way strategy. Method: Implemented in Java, HSS integrates the harmony search algorithm as part of its search engine. Results: Benchmarking results demonstrate that HSS gives competitive results against most existing AI-based (and pure computational) counterparts. However, unlike other AI-based counterparts, HSS addresses support for high interaction strengths and permits support for constraints. Conclusion: AI-based t-way strategies tend to outperform pure computational strategies in terms of test size.
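To make the search mechanics concrete, here is a minimal harmony search loop for growing one test case of a covering array. This is a generic sketch of the metaheuristic as applied to t-way testing, not HSS itself (which is implemented in Java and also handles variable strength and constraints); all names and parameter values are illustrative.

```python
import random

def harmony_search_test_case(values, uncovered, iters=200,
                             hms=10, hmcr=0.9, par=0.3):
    """Evolve one test case covering as many still-uncovered t-way
    interactions as possible. `values[i]` lists the levels of parameter
    i; `uncovered` holds interactions as tuples of (param, level) pairs."""
    k = len(values)

    def fitness(tc):
        # Number of uncovered interactions this candidate would cover.
        return sum(all(tc[p] == v for p, v in inter) for inter in uncovered)

    # Harmony memory: a pool of random candidate test cases.
    hm = [[random.choice(values[i]) for i in range(k)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for i in range(k):
            if random.random() < hmcr:            # memory consideration
                x = random.choice(hm)[i]
                if random.random() < par:         # pitch adjustment:
                    x = random.choice(values[i])  # perturb the value
                new.append(x)
            else:                                 # random selection
                new.append(random.choice(values[i]))
        worst = min(range(hms), key=lambda j: fitness(hm[j]))
        if fitness(new) > fitness(hm[worst]):
            hm[worst] = new                       # replace worst harmony
    return max(hm, key=fitness)
```

A complete strategy would call this step repeatedly, removing newly covered interactions from `uncovered` until every interaction of every requested strength is covered; constraint support would additionally filter forbidden value combinations.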

122 citations


Journal ArticleDOI
TL;DR: It is concluded that approaches supporting MPLs need to consider both technical aspects, such as structuring large models and defining dependencies between product lines, and organizational aspects, such as distributed modeling and product derivation by multiple stakeholders.
Abstract: Context: Complex software-intensive systems comprise many subsystems that are often based on heterogeneous technological platforms and managed by different organizational units. Multi product lines (MPLs) are an emerging area of research addressing variability management for such large-scale or ultra-large-scale systems. Despite the increasing number of publications addressing MPLs, the research area is still quite fragmented. Objective: The aims of this paper are thus to identify, describe, and classify existing approaches supporting MPLs and to increase the understanding of the underlying research issues. Furthermore, the paper aims at defining success-critical capabilities of infrastructures supporting MPLs. Method: Using a systematic literature review, we identify and analyze existing approaches and research issues regarding MPLs. Approaches described in the literature support capabilities needed to define and operate MPLs. We derive capabilities supporting MPLs from the results of the systematic literature review, and we validate and refine these capabilities based on a survey among experts from academia and industry. Results: The paper discusses key research issues in MPLs and presents basic and advanced capabilities supporting MPLs. We also show examples from research approaches that demonstrate how these capabilities can be realized. Conclusions: We conclude that approaches supporting MPLs need to consider both technical aspects, such as structuring large models and defining dependencies between product lines, and organizational aspects, such as distributed modeling and product derivation by multiple stakeholders. The identified capabilities can help to build, enhance, and evaluate MPL approaches.

Journal ArticleDOI
TL;DR: The main result is a list of 132 tools, which, according to the literature, have been, or are intended to be, used in global software projects, and the classification of these tools includes lists of features for communication, coordination and control as well as how the tool has been validated in practice.
Abstract: Context: This systematic mapping review is set in a Global Software Engineering (GSE) context, characterized by a highly distributed environment in which project team members work separately in different countries. This geographic separation creates specific challenges associated with global communication, coordination and control. Objective: The main goal of this study is to discover all the available communication and coordination tools that can support highly distributed teams and how these tools have been applied in GSE, and then to describe and classify the tools so that both practitioners and researchers involved in GSE can make use of the available tool support. Method: We performed a systematic mapping review through a search for studies that answered our research question, ''Which software tools (commercial, free or research based) are available to support Global Software Engineering?'' Applying a range of related search terms to key electronic databases, selected journals, and conferences and workshops enabled us to extract relevant papers. We then used a data extraction template to classify, extract and record important information about the GSD tools from each paper. This information was synthesized and presented as a general map of types of GSD tools, each tool's main features, and how each tool was validated in practice. Results: The main result is a list of 132 tools which, according to the literature, have been, or are intended to be, used in global software projects. The classification of these tools includes lists of features for communication, coordination and control, as well as how each tool has been validated in practice. We found that, of the total of 132, the majority of tools were developed at research centers, and only a small percentage of tools (18.9%) are reported as having been tested outside the initial context in which they were developed. Conclusion: The most common features in the GSE tools included in this study are: team activity and social awareness, support for informal communication, support for distributed knowledge management, and interoperability with other tools. Finally, there is a need for an evaluation of these tools to verify their external validity, or usefulness in a wider global environment.

Journal ArticleDOI
TL;DR: A case study approach, building on research that identifies risk factors leading to the failure of outsourced strategic IT development projects, is used to investigate the BSkyB project, a strategic development project that was the subject of recent litigation in the British High Court.
Abstract: Context: IT plays an increasingly strategic role in the business performance of organizations; however, the development of strategic IT systems involves a high degree of risk, and outsourcing the development of such systems increases the risk. Objective: Using a case study approach, we build on research that identifies risk factors leading to the failure of outsourced strategic IT development projects. We investigate the BSkyB project, a strategic development project which was the subject of recent litigation in the British High Court. We wish to discover what factors led to the failure of such a high-profile project; in particular, we wish to identify which factors were under the control of the client. We also review the usefulness of the case study methodology when it is not possible to interview any of the people involved with a project. Method: Detailed step-by-step guidelines designed for multiple industrial case studies are used to investigate the failure factors of the BSkyB project. We use transcripts of court proceedings and media reports to determine the failure factors. We compare the factors identified with those in a conceptual risk framework developed in prior research, thus providing an initial validation of that framework. Results: The following factors were identified as problems in the BSkyB project: contract, requirements, project complexity, planning and control, execution, and team. A time-and-materials contract was a risk not originally included in the risk framework that we used. Conclusion: The BSkyB project failed because of problems that can be traced to both client and vendor. According to the judge's summing up, the major fault was with the vendor, although some problems did emanate from the client side. We found that many sections in the case study methodology we used were unnecessary for a single case study based on court proceedings and media reports. The risk framework helped with risk identification.

Journal ArticleDOI
TL;DR: The RE process seems to be well covered by current RE tools, but there is still a certain margin for amelioration, principally with regard to requirements modeling, open data model and data integration features.
Abstract: Context: There is a significant number of requirements engineering (RE) tools with different features and prices. However, existing RE tool lists do not provide detailed information about the features of the tools that they catalogue. It would therefore be interesting for both practitioners and tool developers to be aware of the state of the art as regards RE tools. Objective: This paper presents the results of a survey answered by RE tool vendors. The purpose of the survey was to gain an insight into how, and to what degree, current RE tools support the RE process by means of concrete capabilities. Method: The ISO/IEC TR 24766:2009 is a framework for assessing RE tools' capabilities. A 146-item questionnaire based principally on the features covered by this international guideline was sent to major tool vendors worldwide. A descriptive statistical study was then carried out to provide comparability, and bivariate correlation tests were also applied to measure the association between different variables. A sample of the tools was subjected to neutral assessment, and an interrater reliability analysis was performed to ensure the reliability of the results. Results: Thirty-eight participants sent back their answers. Most tools are delivered under a proprietary license, and their licenses are not free. A growing number of them facilitate Web access. Moreover, requirements elicitation is the best-supported category of features in this study, whereas requirements modeling and management are the most poorly supported categories. Conclusion: The RE process seems to be well covered by current RE tools, but there is still a certain margin for amelioration, principally with regard to requirements modeling, open data model and data integration features. These subjects represent areas for improvement for RE tool developers. Practitioners might also obtain useful ideas from the study to take into account when selecting an appropriate RE tool to apply successfully to their work.

Journal ArticleDOI
TL;DR: The GT process area and associated threats presented in this paper provide both a guide and motivation for software managers to better understand how to manage technical talent across the globe.
Abstract: Context: Global Software Engineering (GSE) continues to experience substantial growth and is fundamentally different to collocated development. As a result, software managers have a pressing need for support in how to successfully manage teams in a global environment. Unfortunately, de facto process frameworks such as the Capability Maturity Model Integration (CMMI(R)) do not explicitly cater for the complex and changing needs of global software management. Objective: To develop a Global Teaming (GT) process area to address specific problems relating to temporal, cultural, geographic and linguistic distance which will meet the complex and changing needs of global software management. Method: We carried out three in-depth case studies of GSE within industry from 1999 to 2007. To supplement these studies, we conducted three literature reviews. This allowed us to identify factors which are important to GSE. Based on a gap analysis between these GSE factors and the CMMI(R), we developed the GT process area. Finally, the literature and our empirical data were used to identify threats to software projects if these processes are not implemented. Results: Our new GT process area brings together practices drawn from the GSE literature and our previous empirical work, including many socio-technical factors important to global software development. The GT process area presented in this paper encompasses recommended practices that can be used independently or with existing models. We found that if managers are not proactive in implementing new GT practices they are putting their projects under threat of failure. We therefore include a list of threats that, if ignored, could have an adverse effect on an organization's competitive advantage, employee satisfaction, timescales, and software quality. Conclusion: The GT process area and associated threats presented in this paper provide both a guide and motivation for software managers to better understand how to manage technical talent across the globe.

Journal ArticleDOI
TL;DR: This work conducted semi-structured, open-ended interviews with 21 participants representing 11 different companies in Pakistan, and analyzed the data qualitatively using the Glaserian strand of grounded theory research procedures to obtain rich insight into SPI success factors for small and medium Web companies.
Abstract: Context: The context of this research is software process improvement (SPI) in small and medium Web companies. Objective: The primary objective of this paper is to identify software process improvement (SPI) success factors for small and medium Web companies. Method: To achieve this goal, we conducted semi-structured, open-ended interviews with 21 participants representing 11 different companies in Pakistan, and analyzed the data qualitatively using the Glaserian strand of grounded theory research procedures. The key steps of these procedures that were employed in this research included open coding, focused coding, theoretical coding, theoretical sampling, constant comparison, and scaling up. Results: An initial framework of key SPI success factors for small and medium Web companies was proposed, which can be of use for small and medium Web companies engaged in SPI. The paper also differentiates between small and medium Web companies and analyzes crucial SPI requirements for companies operating in the Web development domain. Conclusion: The results of this work, in particular the use of qualitative techniques, allowed us to obtain rich insight into SPI success factors for small and medium Web companies. Future work comprises the validation of the SPI success factors with small and medium Web companies.

Journal ArticleDOI
TL;DR: It is found that different functional application types have distinctly different levels of energy efficiency, with text and image editing and gaming applications being the most energy inefficient due to their intense use of the processor.
Abstract: Context: The energy efficiency of IT systems, also referred to as Green IT, is attracting more and more attention. While several researchers have focused on the energy efficiency of hardware and embedded systems, the role of application software in IT energy consumption still needs investigation. Objective: This paper aims to define a methodology for measuring software energy efficiency and to understand the consequences of abstraction layers and application development environments for the energy efficiency of software applications. Method: We first develop a measure of energy efficiency that is appropriate for software applications. We then examine how the use of application development environments relates to this measure of energy efficiency for a sample of 63 open source software applications. Results: Our findings indicate that a greater use of application development environments - specifically, frameworks and external libraries - is more detrimental in terms of energy efficiency for larger applications than for smaller applications. We also find that different functional application types have distinctly different levels of energy efficiency, with text and image editing and gaming applications being the most energy inefficient due to their intense use of the processor. Conclusion: We conclude that different designs can have a significant impact on the energy efficiency of software applications. We have related the use of software application development environments to software energy efficiency, suggesting that there may be a trade-off between development efficiency and energy efficiency. We propose new research to further investigate this topic.
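The abstract does not spell out the measure itself. As a rough orientation only, software energy-efficiency measures are typically ratios of useful work to energy drawn, along the lines of the assumed form below (our placeholder, not the paper's definition):

```latex
\text{energy efficiency} \;=\; \frac{\text{useful work completed (e.g., workload units processed)}}{\text{energy consumed (J)}}
```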

Journal ArticleDOI
TL;DR: The analyses reveal that subjective norm and training play a significant role in influencing software developers' use of agile processes and methods, while perceived benefits and perceived limitations are not primary drivers of agile use among adopters.
Abstract: Context: Agile software development with its emphasis on producing working code through frequent releases, extensive client interactions and iterative development has emerged as an alternative to traditional plan-based software development methods. While a number of case studies have provided insights into the use and consequences of agile, few empirical studies have examined the factors that drive the adoption and use of agile. Objective: We draw on intention-based theories and a dialectic perspective to identify factors driving the use of agile practices among adopters of this software development methodology. Method: Data for the study was gathered through an anonymous online survey of software development professionals. We requested participation from members of a selected list of online discussion groups, and received 98 responses. Results: Our analyses reveal that subjective norm and training play a significant role in influencing software developers' use of agile processes and methods, while perceived benefits and perceived limitations are not primary drivers of agile use among adopters. Interestingly, perceived benefit emerges as a significant predictor of agile use only if adopters face hindrances to their agile practices. Conclusion: We conclude that research in the adoption of software development innovations should examine the effects of both enabling and detracting factors and the interactions between them. Since training, subjective norm, and the interplay between perceived benefits and perceived hindrances appear to be key factors influencing the adoption of agile methods, researchers can focus on how to (a) perform training on agile methods more effectively, (b) facilitate the dialog between developers and managers about perceived benefits and hindrances, and (c) capitalize on subjective norm to publicize the benefits of agile methods within an organization. Further, when managing the transition to new software development methods, we recommend that practitioners adapt their strategies and tactics contingent on the extent of perceived hindrances to the change.

Journal ArticleDOI
TL;DR: In this paper, a literature survey and a comparison of existing (business process model) repository technology are performed, and the authors conclude that existing repositories focus on traditional functionality rather than exploiting the full potential of information management tools, showing that there is a strong basis for further research.
Abstract: Context: Large organizations often run hundreds or even thousands of different business processes. Managing such large collections of business process models is a challenging task. Software can assist in performing that task by supporting common management functions such as storage, search and version management of models. It can also provide advanced functions that are specific to managing collections of process models, such as managing the consistency of public and private processes. Software that supports the management of large collections of business process models is called business process model repository software. Objective: This paper contributes to the development of business process model repositories by analyzing the state of the art. Method: To perform the analysis, a literature survey and a comparison of existing (business process model) repository technology are performed. Results: The results of the state-of-the-art analysis are twofold. First, a framework for business process model repositories is presented, which consists of a management model and a reference architecture. The management model lists the functionality that can be provided, and the reference architecture presents the components that provide that functionality. Second, an analysis is presented of the extent to which existing business process model repositories implement the functionality from the framework. Conclusion: The results presented in the paper are valuable as a comprehensive overview of business process model repository functionality. In addition, they form a basis for a future research agenda. We conclude that existing repositories focus on traditional functionality rather than exploiting the full potential of information management tools; thus, we show that there is a strong basis for further research.

Journal ArticleDOI
TL;DR: The integration of inspection and testing techniques is a promising research direction for the exploitation of additional synergy effects and an overview of existing approaches and a suitable basis for identifying future research directions is provided.
Abstract: Context: A lot of different quality assurance techniques exist to ensure high-quality products. However, most often they are applied in isolation. A systematic combination of different static and dynamic quality assurance techniques promises to exploit synergy effects, such as higher defect detection rates or reduced quality assurance costs. However, a systematic overview of such combinations, and reported evidence about achieving synergy effects with such kinds of combinations, is missing. Objective: The main goal of this article is the classification and thematic analysis of existing approaches that combine different static and dynamic quality assurance techniques, including reported effects, characteristics, and constraints. The result is an overview of existing approaches and a suitable basis for identifying future research directions. Method: A systematic mapping study was performed by two researchers, focusing on four databases with an initial result set of 2498 articles, covering articles published between 1985 and 2010. Results: In total, 51 articles were selected and classified according to multiple criteria. The two main dimensions of a combination are integration (i.e., the output of one quality assurance technique is used for the second one) and compilation (i.e., different quality assurance techniques are applied to ensure a common goal, but in isolation). The combination of static and dynamic analyses is one of the most common approaches and is usually conducted in an integrated manner. With respect to the combination of inspection and testing techniques, this is done more often in a compiled way than in an integrated way. Conclusion: The results show an increased interest in this topic in recent years, especially with respect to the integration of static and dynamic analyses. Inspection and testing techniques are currently mostly performed in an isolated manner. The integration of inspection and testing techniques is a promising research direction for the exploitation of additional synergy effects.

Journal ArticleDOI
TL;DR: This paper identifies potential XSS vulnerabilities in program source code and secures them with appropriate escaping mechanisms that prevent input values from causing any script execution, and develops a tool, saferXSS, to implement the proposed approach.
Abstract: Context: Cross-site scripting (XSS) vulnerability is among the top web application vulnerabilities according to recent surveys. This vulnerability occurs when a web application uses inputs received from users in web pages without properly checking them. This allows an attacker to inject malicious scripts into web pages through such inputs, so that the scripts perform malicious actions when a client visits the exploited web pages. Such an attack may cause serious security violations such as account hijacking and cookie theft. Current approaches to mitigating this problem mainly focus on effective detection of XSS vulnerabilities in programs or prevention of real-time XSS attacks. As more sophisticated attack vectors are being discovered, vulnerabilities, if not removed, could be exploited at any time. Objective: To address this issue, this paper presents an approach for removing XSS vulnerabilities from web applications. Method: Based on static analysis and pattern matching techniques, our approach identifies potential XSS vulnerabilities in program source code and secures them with appropriate escaping mechanisms, which prevent input values from causing any script execution. Results: We developed a tool, saferXSS, to implement the proposed approach. Using the tool, we evaluated the applicability and effectiveness of the proposed approach through experiments on five Java-based web applications. Conclusion: Our evaluation has shown that the tool can be applied to real-world web applications, and it automatically removed all the real XSS vulnerabilities in the test subjects.
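The core fix pattern is small. saferXSS itself targets Java web applications and applies context-appropriate escaping; the snippet below is only a language-neutral illustration of the same before/after idea, written in Python with the standard library's html.escape (the function names are ours):

```python
import html

def render_comment_unsafe(user_input: str) -> str:
    # Vulnerable sink: tainted input flows into markup unescaped, so a
    # payload like <script>...</script> executes in the victim's browser.
    return f"<div class='comment'>{user_input}</div>"

def render_comment_safe(user_input: str) -> str:
    # The automated fix: wrap the tainted value in a context-appropriate
    # escaping call so injected markup renders as inert text.
    return f"<div class='comment'>{html.escape(user_input)}</div>"

print(render_comment_safe("<script>steal(document.cookie)</script>"))
# <div class='comment'>&lt;script&gt;steal(document.cookie)&lt;/script&gt;</div>
```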

Journal ArticleDOI
TL;DR: A path selection strategy for selecting test cases able to effectively kill mutants when performing weak mutation testing is presented and analysed; the results suggest that the strategy can play an important role in making the mutation testing method more appealing and practical.
Abstract: Context: Mutation analysis has generally been identified as a powerful testing method. Researchers have shown that its use as a testing criterion exercises the system under test quite thoroughly and reveals more faults than standard structural testing criteria. Despite its potential, mutation has not been adopted in widespread practical use, and its popularity falls significantly short of other structural methods. This can be attributed to the lack of thorough studies dealing with the practical problems introduced by mutation and the assessment of the effort needed when applying it. This situation masks the real cost involved and prevents the development of easy-to-use and effective strategies to circumvent the problem. Objective: In this paper, a path selection strategy for selecting test cases able to effectively kill mutants when performing weak mutation testing is presented and analysed. Method: The testing effort is highly correlated with the number of attempts the tester makes to generate adequate test cases. Therefore, the efficiency of a test case generation strategy depends greatly on the number of candidate paths that must be selected in order to achieve a predefined coverage goal. The effort can thus be related to the number of infeasible paths encountered during the test case generation process. Results: An experiment investigating well over 55 million program paths was conducted, based on a strategy that alleviates the effects of infeasible paths. Strategy details, along with a prototype implementation, are reported and analysed through the experimental results obtained by applying the strategy to a set of program units. Conclusion: The results obtained suggest that the strategy used can play an important role in making the mutation testing method more appealing and practical.
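For readers unfamiliar with the weak variant: a test weakly kills a mutant if the internal program state immediately after the mutated point differs from the original, with no requirement that the difference propagate to the program's output. A toy illustration (ours, not from the paper), assuming a mutant that replaces a + b with a - b:

```python
def weakly_killed(a: int, b: int) -> bool:
    """A test (a, b) weakly kills the mutant if the state immediately
    after the mutated expression differs from the original's state."""
    original = a + b           # original expression
    mutant = a - b             # mutated expression (+ replaced by -)
    return original != mutant  # differs exactly when b != 0

assert weakly_killed(3, 2)      # b != 0: mutant killed weakly
assert not weakly_killed(3, 0)  # b == 0: mutant survives this test
```

This is why path selection matters: the generator must steer execution down feasible paths that reach the mutated point with state-differentiating values.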

Journal ArticleDOI
TL;DR: An open working environment with only half-height glass barriers and communal space plays a major role in communication among team members, and the presence of status boards significantly helps in reducing unnecessary communication by providing the required information to individuals, which in turn reduces the distractions a team member may confront in their absence.
Abstract: Context: Communication, collaboration and coordination are key enablers of software development, and even more so in agile methods. The physical environment of the workspace plays a significant role in effective communication, collaboration, and coordination among people while developing software. Objective: In this paper, we have studied and empirically evaluated the effect of different constituents of the physical environment on communication, coordination, and collaboration. The study aims to provide a guideline for prospective agile software developers. Method: A survey was conducted among software developers at a software development organization. To collect data, the survey was carried out along with observations and interviews. Results: It was found that half-height cubicles are 'very effective' for the frequency of communication. Further, half-height cubicles were found to be 'effective' but not 'very effective' for the quality/effectiveness of communication. According to the survey, half-height cubicles and status boards are 'very effective' for coordination among team members. Communal/discussion space was found to be 'effective' but not 'very effective' for coordination among team members. Our analysis also reveals that half-height glass barriers are 'very effective' during individuals' problem-solving activities while working together as a team. In fact, such a physically open environment appears to improve communication, coordination, and collaboration. Conclusion: According to this study, an open working environment with only half-height glass barriers and communal space plays a major role in communication among team members. The presence of status boards significantly helps in reducing unnecessary communication by providing the required information to individuals and, in turn, reduces the distractions a team member may confront in their absence. As communication plays a significant role in improving coordination and collaboration, it is not surprising to find that an open working environment and status boards improve coordination and collaboration. An open working environment increases awareness among software developers, e.g., who is doing what, what is on the agenda, and what is taking place. That, in turn, improves coordination among them. A communal/discussion space helps collaboration immensely.

Journal ArticleDOI
TL;DR: An educational board game to reinforce and teach the application of EVM concepts in undergraduate computing programs, complementing expository lessons on EVM basics; the game can contribute to learning EVM at the cognitive levels of remembering, understanding and application.
Abstract: Context: To meet the growing need for education in Software Project Management, educational games have been introduced as a beneficial instructional strategy. However, there are no low-cost board games openly available to teach Earned Value Management (EVM) in computing programs. Objective: This paper presents an educational board game to reinforce and teach the application of EVM concepts in the context of undergraduate computing programs, complementing expository lessons on EVM basics. Method: The game has been developed based on project management fundamentals and teaching experience in this area. So far, it has been applied in two project management courses in undergraduate computing programs at the Federal University of Santa Catarina. We evaluated motivation, user experience and the game's contribution to learning through case studies at Kirkpatrick's level one, based on the perception of the students. Results: First results of the evaluation of the game indicate a perceived potential of the game to contribute to the learning of EVM concepts and their application. The results also point to a very positive effect of the game on social interaction, engagement, immersion, attention and relevance to the course objectives. Conclusion: We conclude that the game DELIVER! can contribute to the learning of EVM at the cognitive levels of remembering, understanding and application. Illustrating the application of EVM through the game can demonstrate its usefulness. The game has proven to be an engaging instructional strategy, keeping students on task and attentive. In this respect, the game offers a possibility to complement traditional instructional strategies for teaching EVM. In order to further generalize and strengthen the validity of the results, it is important to obtain further evaluations.
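The EVM quantities such a game drills are standard and easy to state. The following worked mini-example uses the textbook definitions; the numbers are ours, not taken from the game:

```python
# Standard EVM definitions, applied to a made-up mini-project.
BAC = 100_000   # Budget At Completion
PV = 40_000     # Planned Value: budgeted cost of work scheduled
EV = 30_000     # Earned Value: budgeted cost of work performed
AC = 35_000     # Actual Cost of work performed

CV, SV = EV - AC, EV - PV      # cost / schedule variance
CPI, SPI = EV / AC, EV / PV    # cost / schedule performance indices
EAC = BAC / CPI                # Estimate At Completion (CPI-based)

print(f"CV={CV} SV={SV} CPI={CPI:.2f} SPI={SPI:.2f} EAC={EAC:,.0f}")
# CV=-5000 (over budget), SV=-10000 (behind schedule),
# CPI=0.86, SPI=0.75, EAC=116,667
```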

Journal ArticleDOI
TL;DR: Findings from a case study aimed at understanding overscoping in large-scale, market-driven software development projects, and how agile requirements engineering practices may affect this situation provide an increased understanding of scoping as a complex and continuous activity.
Abstract: Context: Scope management is a core part of software release management and often a key factor in releasing successful software products to the market. In a market-driven case, when only a few requirements are known a priori, the risk of overscoping may increase. Objective: This paper reports on findings from a case study aimed at understanding overscoping in large-scale, market-driven software development projects, and how agile requirements engineering practices may affect this situation. Method: Based on a hypothesis about which factors may be involved in an overscoping situation, semi-structured interviews were performed with nine practitioners at a large, market-driven software company. The results from the interviews were validated by six (other) practitioners at the case company via a questionnaire. Results: The results provide a detailed picture of overscoping as a phenomenon, including a number of causes, root causes and effects, and indicate that overscoping is mainly caused by operating in a fast-moving, market-driven domain and by how the ever-changing inflow of requirements is managed. Weak awareness of overall goals, in combination with low development involvement in early phases, may contribute to 'biting off' more than a project can 'chew'. Furthermore, overscoping may lead to a number of potentially serious and expensive consequences, including quality issues, delays and failure to meet customer expectations. Finally, the study indicates that overscoping occurs even when agile requirements engineering practices are applied, though the overload is more manageable and perceived to result in less wasted effort when continuous scope prioritization is applied, in combination with gradual requirements detailing and close cooperation within cross-functional teams. Conclusion: The results provide an increased understanding of scoping as a complex and continuous activity, including an analysis of the causes and effects, and a discussion of the possible impact of agile requirements engineering practices on the issue of overscoping. The results presented in this paper can be used to identify potential factors to address in order to achieve a more realistic project scope.

Journal ArticleDOI
TL;DR: This paper proposes the definition of thresholds for gateway complexity measures based on the application of statistical techniques to empirical data, and concludes that the thresholds classify business process models into specific levels of understandability and modifiability, making them useful for decision-making.
Abstract: Context: Quality assurance of business process models has been recognized as an important factor for modeling success at an enterprise level. Since the quality of models might be subject to different interpretations, it should be addressed in the most objective way, by the application of measures. Assessment of measurement results, however, is not a straightforward task: it requires the identification of relevant threshold values, which are able to distinguish different levels of process model quality. Objective: Since there is no consensual technique for obtaining these values, this paper proposes the definition of thresholds for gateway complexity measures based on the application of statistical techniques to empirical data. Method: To this end, we conducted a controlled experiment that evaluates the quality characteristics of understandability and modifiability of process models in two different runs. The thresholds obtained were validated in a replication of the experiment. Results: The thresholds for gateway complexity measures are instrumental as guidelines for novice modelers. A tool for supporting business process model measurement and improvement is described, based on the automatic application of measurement and assessment, as well as the derivation of advice about how to improve the quality of the model. Conclusion: It is concluded that the thresholds classify business process models into specific levels of understandability and modifiability, so these thresholds are good and useful for decision-making.
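As an illustration of how such thresholds can be derived from experiment data, one common statistical route (a sketch of the general idea, not necessarily the exact technique applied in the paper) is to scan candidate cut points for the measure and keep the one that best separates models judged hard to understand or modify:

```python
import numpy as np

def derive_threshold(measure, hard_to_understand):
    """Return the cut point for a gateway complexity measure that best
    separates low-quality models from the rest, by maximizing the
    Youden index (TPR - FPR) over all observed candidate thresholds."""
    m = np.asarray(measure, dtype=float)
    y = np.asarray(hard_to_understand, dtype=bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(m):
        flagged = m >= t                                # models above the cut
        tpr = (flagged & y).sum() / max(y.sum(), 1)
        fpr = (flagged & ~y).sum() / max((~y).sum(), 1)
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t

# e.g. gateway counts of experimental models + subjects' judgments:
print(derive_threshold([2, 3, 5, 8, 9, 12], [0, 0, 0, 1, 1, 1]))  # -> 8.0
```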

Journal ArticleDOI
TL;DR: This article develops a framework for specifying and automatically extracting design aspects relevant to safety requirements through the combination of a methodology for establishing traceability between safety requirements and design and an algorithm that can extract for any given safety requirement a minimized fragment (slice) of the design that is sound, and yet easy to understand and inspect.
Abstract: Context: Traceability is one of the basic tenets of all safety standards and a key prerequisite for software safety certification. In the current state of practice, there is often a significant traceability gap between safety requirements and software design. Poor traceability, in addition to being a non-compliance issue on its own, makes it difficult to determine whether the design fulfills the safety requirements, mainly because the design aspects related to safety cannot be clearly identified. Objective: The goal of this article is to develop a framework for specifying and automatically extracting design aspects relevant to safety requirements. This goal is realized through the combination of two components: (1) a methodology for establishing traceability between safety requirements and design, and (2) an algorithm that can extract, for any given safety requirement, a minimized fragment (slice) of the design that is sound, and yet easy to understand and inspect. Method: We ground our framework on the Systems Modeling Language (SysML). The framework includes a traceability information model, a methodology to establish traceability, and mechanisms for model slicing based on the recorded traceability information. The framework is implemented in a tool, named SafeSlice. Results: We prove that our slicing algorithm is sound for temporal safety properties, and argue about the completeness of slices based on our practical experience. We report on the lessons learned from applying our approach to two case studies, one benchmark and one industrial case. Both studies indicate that our approach substantially reduces the amount of information that needs to be inspected for ensuring that a given (behavioral) safety requirement is met by the design.
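To give the flavor of the extraction step: once trace links exist, a requirement-driven slice can be computed as a transitive closure over design dependencies. The sketch below is our reading of the general idea, not SafeSlice's actual algorithm (which operates on SysML models and is proven sound for temporal safety properties); all names are illustrative.

```python
def slice_design(dep_graph, trace_links, safety_req):
    """Keep the design elements traced to the requirement plus everything
    they transitively depend on; the rest of the model is dropped."""
    frontier = set(trace_links[safety_req])   # elements traced to the req.
    kept = set()
    while frontier:
        elem = frontier.pop()
        if elem not in kept:
            kept.add(elem)
            frontier |= set(dep_graph.get(elem, ()))  # follow dependencies
    return kept

deps = {"Monitor": ["Sensor", "AlarmLogic"], "AlarmLogic": ["Sensor"]}
print(slice_design(deps, {"REQ-1": ["Monitor"]}, "REQ-1"))
# {'Monitor', 'Sensor', 'AlarmLogic'}
```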

Journal ArticleDOI
TL;DR: The work on unifying the code clone maintenance process by bridging the gap between clone detection and refactoring is described; it yields considerable increases in the instances of clone groups that may be suggested to the programmer for refactoring within Eclipse.
Abstract: Context: Clone detection tools provide an automated mechanism to discover clones in source code. On the other hand, refactoring capabilities within integrated development environments provide the necessary functionality to assist programmers in refactoring. However, we have observed a gap between the processes of clone detection and refactoring. Objective: In this paper, we describe our work on unifying the code clone maintenance process by bridging the gap between clone detection and refactoring. Method: Through an Eclipse plug-in called CeDAR (Clone Detection, Analysis, and Refactoring), we forward clone detection results to the refactoring engine in Eclipse. In this case, the refactoring engine is supplied with information about the detected clones, from which it can determine those clones that can be refactored. We describe the extensions to Eclipse's refactoring engine that allow clones with additional similarity properties to be refactored. Results: Our evaluation of open source artifacts shows that this process yields considerable increases in the instances of clone groups that may be suggested to the programmer for refactoring within Eclipse. Conclusion: By unifying the processes of clone detection and refactoring, in addition to providing extensions to the refactoring engine of an IDE, the strengths of both processes (i.e., more significant detection capabilities and an established framework for refactoring) can be combined.

Journal ArticleDOI
TL;DR: Evaluation of traceability management in Model-Driven Engineering shows that the most addressed operations are storage, CRUD and visualization, while the most immature operations are the exchange and analysis of traceability information.
Abstract: Context: Model-Driven Engineering provides a new landscape for dealing with traceability in software development. Objective: Our goal is to analyze the current state of the art in traceability management in the context of Model-Driven Engineering. Method: We use the systematic literature review method, based on the guidelines proposed by Kitchenham. We propose five research questions and six quality assessments. Results: Of the 157 relevant studies identified, 29 were considered primary studies. These studies have resulted in 17 proposals. Conclusion: The evaluation shows that the most addressed operations are storage, CRUD and visualization, while the most immature operations are the exchange and analysis of traceability information.

Journal ArticleDOI
TL;DR: An overview of existing approaches that are able to reduce testing effort is presented for both researchers and practitioners in order to identify, on the one hand, future research directions and, on the other hand, potential for improvements in practical environments.
Abstract: Context: Quality assurance effort, especially testing effort, is often a major cost factor during software development, which sometimes consumes more than 50% of the overall development effort. Consequently, one major goal is often to reduce testing effort. Objective: The main goal of the systematic mapping study is the identification of existing approaches that are able to reduce testing effort. Therefore, an overview should be presented both for researchers and practitioners in order to identify, on the one hand, future research directions and, on the other hand, potential for improvements in practical environments. Method: Two researchers performed a systematic mapping study, focusing on four databases with an initial result set of 4020 articles. Results: In total, we selected and categorized 144 articles. Five different areas were identified that exploit different ways to reduce testing effort: approaches that predict defect-prone parts or defect content, automation, test input reduction approaches, quality assurance techniques applied before testing, and test strategy approaches. Conclusion: The results reflect an increased interest in this topic in recent years. A lot of different approaches have been developed, refined, and evaluated in different environments. The highest attention was found with respect to automation and prediction approaches. In addition, some input reduction approaches were found. However, in terms of combining early quality assurance activities with testing to reduce test effort, only a small number of approaches were found. Due to the continuous challenge of reducing test effort, future research in this area is expected.