
Showing papers presented at "International Conference on Global Software Engineering in 2015"


Proceedings ArticleDOI
13 Jul 2015
TL;DR: Results show that the effort estimation techniques used within the AGSD and collocated contexts remained unchanged, with planning poker being the one employed the most.
Abstract: Effort estimation is a project management activity that is mandatory for the execution of software projects. Despite its importance, there have been just a few studies published on such activities within the Agile Global Software Development (AGSD) context. Their aggregated results were recently published as part of a secondary study that reported the state of the art on effort estimation in AGSD. This study aims to complement the above-mentioned secondary study by means of an empirical investigation into the state of the practice on effort estimation in AGSD. To do so, a survey was carried out using an on-line questionnaire as its instrument and a sample comprising software practitioners experienced in effort estimation within the AGSD context. Results show that the effort estimation techniques used within the AGSD and collocated contexts remained unchanged, with planning poker being the one employed the most. Sourcing strategies were found to have no or only a small influence upon the choice of estimation techniques. With regard to effort predictors, global challenges such as cultural and time zone differences were reported, in addition to factors that are commonly considered in the collocated context, such as team experience. Finally, the respondents reported many challenges that impact the accuracy of the effort estimates, such as problems with the software requirements and the fact that the communication effort between sites is not properly accounted for.
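Planning poker, the technique the respondents reported using most, can be illustrated with a minimal sketch of its consensus mechanic (the deck, the consensus rule, and all names here are hypothetical, not taken from the study; real teams discuss outliers between rounds):

```python
# Minimal sketch of a planning poker round: everyone votes with a card,
# and the team re-votes until the votes cluster closely enough.

FIB_DECK = [0, 1, 2, 3, 5, 8, 13, 21, 40, 100]  # common planning poker card values

def has_consensus(votes, max_spread=1):
    """Consensus if all votes fall within `max_spread` adjacent deck positions."""
    positions = sorted(FIB_DECK.index(v) for v in votes)
    return positions[-1] - positions[0] <= max_spread

def estimate(rounds):
    """Return the agreed estimate from the first round that reaches consensus,
    or None if the team never converges."""
    for votes in rounds:
        if has_consensus(votes):
            # Take the highest card in the consensus band as the estimate
            return max(votes)
    return None
```

For example, a first round of (3, 13, 5) has no consensus, while a re-vote of (5, 8, 8) does, yielding an estimate of 8.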

32 citations


Proceedings ArticleDOI
13 Jul 2015
TL;DR: This paper describes and discusses how one team leader coached and improved a global virtual agile team at a large savings and insurance company over a period of one year, and how the team members became highly motivated and self-managing.
Abstract: Virtual teams, with a high level of interdependence and cooperation among team members, are the building block of successful global software organizations. While becoming agile helps on communication and collaboration, such teams meet several challenges in the form of cultural differences, language barriers, national traditions, different values and norms, lack of face-to-face communication, time-zone differences, and difficulties in building and maintaining trust. A successful agile virtual team needs to have the right structure, but equally important is the ability to improve as a team, to become self-managing with shared decision-making and shared leadership. It takes a long time to form such a team, and expert coaching is needed. We describe and discuss how one team leader coached and improved a global virtual agile team at a large savings and insurance company over a period of one year. Because the team members had overlapping working hours the team was able to base coordination on mutual adjustment and frequent feedback. Social software and face-to-face meetings were important factors to achieve this. By involving the remote developers in the strategy of the product, enabling everyone to pick their own tasks, and focusing on continuous learning, knowledge sharing and team build activities, the team members became highly motivated and self-managing.

31 citations


Proceedings ArticleDOI
13 Jul 2015
TL;DR: This paper describes and discusses how continuous testing, based on continuous and frequent feedback, ensures knowledge sharing and safeguards the quality of the system under development, and identifies enablers for a successful virtual agile team.
Abstract: In globally distributed software projects the testing expertise may be scattered across multiple locations. We describe and discuss a globally distributed agile project at DNV GL Software, a multinational provider of software for a safer, smarter and greener future in the energy, process and maritime industries. DNV GL Software is headquartered in Norway. The project is distributed across two locations with 12 team members in Norway and three testers in China. In a distributed agile team with little overlap in working hours the challenge is to coordinate tasks and test activities in a way that makes coordination and communication efficient. DNV GL Software believes in including the remote testers as part of the agile team, enabling self-managing, cross-functional virtual teams that are capable of taking the full responsibility for implementing and verifying one entire feature. To support the communication between testers in China and the rest of the team in Norway, the team needs a shared understanding of the goal of a release and how to collaborate. We conducted interviews with the team and representatives from different roles in the organization, and we performed retrospectives with the team. In this article we describe how continuous testing based on continuous and frequent feedback ensures knowledge sharing and safeguarding the quality of the system under development. We found the following enablers for a successful virtual agile team: coordination by mutual adjustment, dedicated testers and low turnover, shifting working hours, and self-management and autonomy. Non-technical factors, such as socio-technical and organizational factors, have a significant influence on the way software testing is performed in an agile virtual team. To be successful the organization needs to invest in bringing the remote testers closer to the rest of the team, as part of the virtual team.

23 citations


Proceedings ArticleDOI
13 Jul 2015
TL;DR: Student evaluations & reflections on the "IT in Society" course are analyzed to unpack their perceptions of software engineering, the perceived relevance of a global learning experience and its role in reshaping their identities as global software engineers.
Abstract: With a goal of preparing software engineering students for practice in today's global settings, Uppsala University has for some years run courses involving global collaboration. The "IT in Society" course is one such course which applies an 'Open Ended Group Project' model, in partnership with a local health sector client and global educational partners. Within each iteration of the course, students across the partnering institutions are given a brief around an open-ended problem. They work in collaboration with their client and stakeholders to investigate options and produce a report with their findings and recommendations, informed by global perspectives. The report may or may not be supported by working software prototypes. We analyze student evaluations and reflections on the course to unpack their perceptions of software engineering, the perceived relevance of a global learning experience and its role in reshaping their identities as global software engineers.

21 citations


Proceedings ArticleDOI
13 Jul 2015
TL;DR: The results suggest that knowledge sharing across remote locations in distributed agile projects heavily relies on knowledge codification, i.e., technocratic KM strategies, even when the same knowledge is shared tacitly within the same location.
Abstract: Knowledge management (KM) is essential for success in any software project, but especially in global software development where team members are separated by time and space. Software organizations are managing knowledge in various ways to increase transparency and improve software team performance. One way to classify these strategies is proposed by Earl, who defined seven knowledge management schools. The objective of this research is to study knowledge creation and sharing practices in a number of distributed agile projects, map these practices to the knowledge management strategies and determine which strategies are most common, which are applied only locally and which are applied globally. This is done by conducting a series of semi-structured qualitative interviews between May 2012 and June 2013. Our results suggest that knowledge sharing across remote locations in distributed agile projects heavily relies on knowledge codification, i.e., technocratic KM strategies, even when the same knowledge is shared tacitly within the same location, i.e., through behavioral KM strategies.

19 citations


Proceedings ArticleDOI
13 Jul 2015
TL;DR: This paper presents an enhanced analogy-based model for the estimation of software development effort, together with a new approach using similarity functions and measures, which could be useful for early-stage effort estimation on distributed projects.
Abstract: Context: Software development has always been characterised by certain parameters. In the case of global software development, one of the important challenges for software developers is that of predicting the development effort of a software system on the basis of developer details, size, complexity, and other measures. Objective: The main research topics related to global software development effort estimation are the definition and empirical evaluation of a search-based approach with which to build new estimation models, and the definition and empirical evaluation of all available early data. Datasets have been used as a basis to carry out an analogy-based estimation using similarity functions and measures. Method: Many of the problems concerning the existing effort estimation challenges can be solved by creating an analogy. This paper describes an enhanced analogy-based model for the estimation of software development effort and proposes a new approach using similarity functions and measures for software effort estimation. Result: A new approach for analogy-based reasoning is presented, with which to enhance the performance of cost estimation in distributed or combined software projects dealing with numerical and categorical data. The proposed method will be validated empirically using the International Software Benchmarking Standards Group (ISBSG) dataset as a basis. Conclusion: The proposed estimation model could be a useful approach for early-stage effort estimation on distributed projects.
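As a rough illustration of analogy-based estimation with similarity functions over mixed numerical and categorical data (a generic sketch, not the paper's actual model, and the feature names and sample projects are invented, not the ISBSG feature set):

```python
# Generic analogy-based effort estimation: score similarity between the
# target project and completed projects, then average the effort of the
# k closest analogies.

def similarity(a, b, numeric_keys, categorical_keys, ranges):
    """Similarity in [0, 1]: normalized closeness for numeric features,
    exact match (1/0) for categorical ones, averaged over all features."""
    scores = []
    for k in numeric_keys:
        scores.append(1.0 - abs(a[k] - b[k]) / ranges[k])
    for k in categorical_keys:
        scores.append(1.0 if a[k] == b[k] else 0.0)
    return sum(scores) / len(scores)

def estimate_effort(target, history, k=2):
    """Mean effort (person-months) of the k most similar completed projects."""
    numeric, categorical = ["size_kloc"], ["language"]
    span = (max(p["size_kloc"] for p in history)
            - min(p["size_kloc"] for p in history)) or 1.0
    ranges = {"size_kloc": span}
    ranked = sorted(history,
                    key=lambda p: similarity(target, p, numeric, categorical, ranges),
                    reverse=True)
    return sum(p["effort_pm"] for p in ranked[:k]) / k

# Invented example data for illustration only
history = [
    {"size_kloc": 10, "language": "java",  "effort_pm": 20},
    {"size_kloc": 50, "language": "java",  "effort_pm": 90},
    {"size_kloc": 12, "language": "cobol", "effort_pm": 30},
]
target = {"size_kloc": 11, "language": "java"}
```

With this toy data, the two closest analogies to `target` are the two Java projects, giving an estimate of (20 + 90) / 2 = 55 person-months.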

18 citations


Proceedings ArticleDOI
13 Jul 2015
TL;DR: The degree of professionalism and systematization of software development is investigated in order to draw a map of strengths and weaknesses; the results suggest that the necessity for systematic software development is well recognized, while software development still follows an ad-hoc rather than a systematized style.
Abstract: The speed of innovation and the global allocation of resources to accelerate development or to reduce cost put pressure on the software industry. In the global competition, especially so-called high-price countries have to present arguments why the higher development cost is justified and what makes these countries an attractive host for software companies. Often, high-quality engineering and excellent quality of products, e.g., machinery and equipment, are mentioned. Yet, the question is: can such arguments also be found for the software industry? We aim at investigating the degree of professionalism and systematization of software development to draw a map of strengths and weaknesses. To this end, we conducted as a first step an exploratory survey in Germany, presented in this paper. In this survey, we focused on the perceived importance of the two general software engineering process areas, project and quality management, and their implementation in practice. So far, our results suggest that the necessity for a systematic software development is well recognized, while software development still follows an ad-hoc rather than a systematized style. Our results provide initial findings, which we finally use to elaborate a set of working hypotheses. Those hypotheses allow us to steer the adaptation of our instrument in the future to eventually facilitate replications toward a more comprehensive theory on systematic globally distributed software development in practice.

15 citations


Proceedings ArticleDOI
13 Jul 2015
TL;DR: This work identifies four key principles for global software engineering student projects: reconcile contrasting assessment demands between institutions, create a detailed joint timetable to reconcile teaching calendars, provide a project management framework to support phased delivery and carefully manage project scope.
Abstract: Universities face many challenges when creating opportunities for student experiences of global software engineering. We provide a model for introducing global software engineering into the computing curriculum. Our model is based on a three year collaboration between Robert Gordon University, UK and the International Institute for IT Bangalore, India. We provide evidence based on student feedback from three cohorts of virtual team who never met face to face. We found potential employers were supportive of global software engineering in university curricula. We identify four key principles for global software engineering student projects: reconcile contrasting assessment demands between institutions, create a detailed joint timetable to reconcile teaching calendars, provide a project management framework to support phased delivery and carefully manage project scope.

14 citations


Proceedings ArticleDOI
13 Jul 2015
TL;DR: This paper aims to give practitioners a snapshot of their current GSE process strengths and weaknesses that can act as a guide for future software process improvement activities through its lightweight Global Teaming Assessment (GTA).
Abstract: After more than a decade of research in Global Software Engineering (GSE), organisations have a wealth of practices that they can draw on to support them in their global development activities. However, practitioners are now asking, "What is the current status of my GSE practices?", "How prepared is my company for GSE?", "What practices need improving?" We aim to give practitioners the answers they need by giving them a snapshot of their current GSE process strengths and weaknesses that can act as a guide for future software process improvement activities. We do this through our lightweight Global Teaming Assessment (GTA), based on 70 recommended GSE practices, which we piloted in a small company.

12 citations


Proceedings ArticleDOI
13 Jul 2015
TL;DR: An ethnographically-informed investigation of a global software practice at a large vendor organization in India is conducted to uncover some cultural models embedded in the practice that appear to have influenced the organization of the behaviors of the participant members.
Abstract: Cultural dynamics play a significant role in the unfolding of the global software practice. Research in other disciplines has utilized the idea of cultural models to help researchers investigate cultural influence in their respective fields. Cultural models are defined as the taken-for-granted, pre-supposed models of the world that are shared widely by members of a society and that help the members understand their world and influence their behavior in that world. Utilizing this cultural models idea, we conducted an ethnographically-informed investigation of a global software practice at a large vendor organization in India to uncover some cultural models embedded in the practice. This paper presents an ethnographic account of an escalation situation that occurred in the field. Using this situation as our unit of analysis, we uncovered three cultural models -- Agreement, Flexibility, and Trust Cultural Models -- that appear to have influenced the organization of the behaviors of the participant members. The findings illustrate how the internalized cultural models influenced the members' behaviors to make decisions that conflicted with the business prospects (e.g., Agreement Model), how different members internalized different understandings of the cultural elements (e.g., Trust Model), and how the cultural models tacitly played the role of "cultural blind spots" (e.g., Flexibility Model). Thus, the research demonstrates how the technical system of global software engineering is complexly intertwined with the cultural system of its members.

9 citations


Proceedings ArticleDOI
13 Jul 2015
TL;DR: A technique of modeling a global software development project suitable for such analysis as a complex socio-technical system that consists of functional components connected with each other through output-input relationships is presented.
Abstract: Any global software development project needs to deal with distances -- geographical, cultural, time zone, etc. -- between the groups of developers engaged in the project. To successfully manage the risks caused by such distances, there is a need to explicate and present the distances in a form suitable for manual or semi-automatic analysis, the goal of which is to detect potential risks and find ways of mitigating them. The paper presents a technique of modeling a global software development project suitable for such analysis. The project is modeled as a complex socio-technical system that consists of functional components connected with each other through output-input relationships. The components do not coincide with the organizational units of the project and can be distributed through the geographical and organizational landscape of the project. The modeling technique helps to explicate and represent various kinds of distances between the functional components to determine which of them constitute risk factors. The technique was developed during two case studies, of which the second is used for presenting and demonstrating the new modeling technique in the paper.
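The modeling idea (functional components with output-input links, analyzed for distance-related risk) can be sketched roughly as follows; the component names, site attributes, and the time-zone threshold rule are illustrative assumptions, not the paper's actual technique:

```python
# Sketch: model a GSD project as functional components with site
# attributes and output-input links, then flag links spanning large
# time-zone distances as candidate risk factors.

components = {
    "requirements": {"site": "Oslo", "tz": 1.0},
    "development":  {"site": "Pune", "tz": 5.5},
    "testing":      {"site": "Pune", "tz": 5.5},
}

# output-input relationships: (producer, consumer)
links = [("requirements", "development"), ("development", "testing")]

def risky_links(components, links, max_tz_gap=3.0):
    """Return (producer, consumer, gap) for links whose endpoints sit in
    time zones more than `max_tz_gap` hours apart."""
    flagged = []
    for src, dst in links:
        gap = abs(components[src]["tz"] - components[dst]["tz"])
        if gap > max_tz_gap:
            flagged.append((src, dst, gap))
    return flagged
```

Here the requirements-to-development link crosses a 4.5-hour gap and is flagged, while the co-located development-to-testing link is not; other distance kinds (cultural, organizational) could be added as further attributes and rules.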

Proceedings ArticleDOI
13 Jul 2015
TL;DR: An extended taxonomy to classify empirical DSD evidence is presented and preliminary evaluation of the proposed taxonomy suggests that it can be used to synthesize existing knowledge, to identify gaps in literature, to identifies related work and to help researchers who will publish or review further empirical work, as well as practitioners who are interested in published empirical studies.
Abstract: Distributed Software Development (DSD) has been discussed by industry and academia for almost two decades now, and, as a consequence, there is a large number of empirical scientific papers and industrial reports on it. However, the description of the context in which the empirical study was conducted is not always clear or complete, making the process of searching for empirical evidence burdensome. It becomes difficult to understand or to judge the relevance of a study given that DSD scenarios are diverse. What works in one context might not apply to another. To reduce such difficulty, we need, as a research community, to have means to standardize how we report empirical studies and their findings, aiming to make them more readily available to practitioners and researchers. In this paper we present an extended taxonomy to classify empirical DSD evidence. We conducted an expert opinion survey with researchers and practitioners to identify elements to compose the taxonomy. Preliminary evaluation of the proposed taxonomy suggests that it can be used to synthesize existing knowledge, to identify gaps in literature, to identify related work and to help researchers who will publish or review further empirical work, as well as practitioners who are interested in published empirical studies.

Proceedings ArticleDOI
13 Jul 2015
TL;DR: The challenges in this context are outlined and a methodology for distributed software engineering in collaborative research projects is presented, which covers all major aspects of the software engineering process including requirements engineering, architecture, issue tracking, and social aspects of developer community building in collaborative projects.
Abstract: Collaborative research projects involve distributed construction of software prototypes as part of the project methodology. A major challenge thereby is the need to establish a developer community that shall effectively and efficiently align development efforts with requirements offered by researchers and other stakeholders. These projects are inherently different in nature compared to commercial software projects. The literature offers little research on this aspect of software engineering. In this paper, we outline the challenges in this context and present a methodology for distributed software engineering in collaborative research projects. The methodology covers all major aspects of the software engineering process including requirements engineering, architecture, issue tracking, and social aspects of developer community building in collaborative projects. The methodology can be tailored to different project contexts and may provide support in planning software engineering work in future projects.

Proceedings ArticleDOI
Utpal Samanta1, V. S. Mani1
13 Jul 2015
TL;DR: To achieve the primary goal and larger benefits of Lean, the mindset of the global development team needed to be changed, and approaches to enable this change were defined.
Abstract: This paper describes the experience of a global software engineering organization in transforming to Lean. Deploying Lean processes did result in several improvements, but the primary goal -- usable software at the end of each takt (iterations or increments, or single-piece flow) -- was not achieved. We recognized that to achieve the primary goal and larger benefits of Lean, we needed to change the mindset of the global development team, and we defined approaches to enable this change. We have described the steps taken to make this change. The results achieved are also presented.

Proceedings ArticleDOI
13 Jul 2015
TL;DR: A tool to analyze legacy systems in order to detect parts of the system with higher energy consumption and give the engineer evidence about candidates to be refactored in order to reduce energy consumption.
Abstract: Nowadays, software sustainability is growing in importance. Not only is IT infrastructure becoming greener, but also software. It is possible to find methods and methodologies intended to produce more sustainable software with lower power consumption. In spite of the slow evolution of software engineering towards "green software", there still exists a huge amount of legacy systems running in organizations. Is it then necessary to redevelop such systems from scratch in order to make them more sustainable? Probably, the most logical and appropriate answer to this question is no, since existing software can be refactored in order to improve its greenability quality characteristic. As a first step towards power consumption improvement, the authors propose a tool to analyze legacy systems in order to detect the parts of the system with higher energy consumption. Using the profiling technique, the proposed tool instruments legacy Java systems in order to keep track of their execution. This information, together with the energy consumption (logged by means of data-logger hardware), enables the engineer to analyze legacy system consumption, detecting energy peaks in the system (e.g., the PC). The analysis gives the engineer evidence about candidates to be refactored in order to reduce energy consumption.
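The analysis step the abstract describes, correlating an execution trace with a sampled power log to find refactoring candidates, might look roughly like this; the data shapes, names, and sample values are assumptions for illustration, not the tool's actual interface:

```python
# Sketch: align method execution intervals (from instrumentation) with a
# sampled power log to find the methods active during energy peaks.

def energy_hotspots(trace, power_log, threshold_w):
    """trace: list of (method, start_s, end_s) execution intervals;
    power_log: list of (timestamp_s, watts) samples from the data logger.
    Returns the methods that were running whenever power exceeded
    `threshold_w` watts, i.e. the refactoring candidates."""
    hot = set()
    for ts, watts in power_log:
        if watts > threshold_w:
            for method, start, end in trace:
                if start <= ts <= end:
                    hot.add(method)
    return sorted(hot)

# Invented example: "render" is active during both >30 W samples
trace = [("parse", 0, 2), ("render", 2, 5), ("idle", 5, 9)]
power_log = [(1, 10), (3, 35), (4, 40), (7, 8)]
```

With the toy data above, only "render" overlaps the high-power samples and is reported as a hotspot.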

Proceedings ArticleDOI
13 Jul 2015
TL;DR: Practices that improved communication and collaboration, helped the team overcome the key project challenges caused by the product owners and scrum master not being co-located, insufficiently collaborative environments, and the tendency to micromanage are presented.
Abstract: This paper describes a case study of scrum adaption in a legacy project referred here as Global Configurator Project (GCP) where key stakeholders are distributed across locations in Germany, India and the U.S. The paper presents practices that improved communication and collaboration, helped the team overcome the key project challenges caused by the product owners and scrum master not being co-located, insufficiently collaborative environments, and the tendency to micromanage. These practices have contributed significantly towards the success of the project. The paper also presents how the project team developed a high performance team and took on new roles and responsibilities. The paper is targeted at distributed scrum masters and product owners in global software development.

Proceedings ArticleDOI
13 Jul 2015
TL;DR: A new framework to assess survivability of software projects accounting for media capability details as introduced in Media Synchronicity Theory (MST) is proposed and an analytical model to assess how the project recovers from project disasters related to process and communication failures is proposed.
Abstract: In this paper we propose a new framework to assess survivability of software projects accounting for media capability details as introduced in Media Synchronicity Theory (MST). Specifically, we add to our global engineering framework the assessment of the impact of inadequate conveyance and convergence available in the communication infrastructure selected to be used by the project, on the system ability to recover from project disasters. We propose an analytical model to assess how the project recovers from project disasters related to process and communication failures. Our model is based on media synchronicity theory to account for how information exchange impacts recovery. Then, using the proposed model we evaluate how different interventions impact communication effectiveness. Finally, we parameterize and instantiate the proposed survivability model based on a data gathering campaign comprising thirty surveys collected from senior global software development experts at ICGSE'2014 and GSD'2015.

Proceedings ArticleDOI
Tulasi Anand1, V. S. Mani1
13 Jul 2015
TL;DR: This industrial practice paper presents the experience of a globally distributed test team of a software engineering organization spread across three countries while moving to agile methodology that had to test mission-critical software that needed to conform to regulatory requirements.
Abstract: This industrial practice paper presents the experience of a globally distributed test team of a software engineering organization, spread across three countries (two in Europe, plus India), while moving to an agile methodology. The team had to test mission-critical software that needed to conform to regulatory requirements. The paper describes the challenges faced by the team, such as creating the test infrastructure in time, test automation, and the documentation and reporting mandated by regulatory agencies. The practices defined by the team to overcome these challenges are also described. These practices also helped to improve test effectiveness and velocity.

Proceedings ArticleDOI
13 Jul 2015
TL;DR: This study carried out a longitudinal case study and utilized data from the 'Novo pay' project, which involved an outgoing New Zealand based vendor and incoming Australian based vendor, and showed that the demand for the same human resources, dependency upon cooperation and collaboration between vendors, reliance on each other system's configurations and utilizing similar strategies by the client generated a set of tensions which needed to be continuously managed throughout the project.
Abstract: This study is directed towards highlighting the tensions of incoming and outgoing vendors during outsourcing in a near-shore context. The incoming-and-outgoing of vendors generates a complex form of relationship in which the participating organizations cooperate and compete simultaneously. It is of great importance to develop knowledge about this kind of relationship, typically in the current GSE-related multi-sourcing environment. We carried out a longitudinal case study and utilized data from the 'Novo pay' project, which is available in the public domain. This project involved an outgoing New Zealand-based vendor and an incoming Australian-based vendor. The results show that the demand for the same human resources, dependency upon cooperation and collaboration between vendors, reliance on each other's system configurations, and the client's utilization of strategies that had worked for the previous vendor generated a set of tensions which needed to be continuously managed throughout the project.

Proceedings ArticleDOI
13 Jul 2015
TL;DR: This document describes the proposal for a doctoral thesis which aims to identify the potential bottlenecks and constraints in the software development process in a learning environment through empirical observation of the real activities carried out by groups of students engaged in the laboratory project.
Abstract: This document describes the proposal for a doctoral thesis which aims to identify the potential bottlenecks and constraints in the software development process in a learning environment. The research method is based on the empirical observation of the real activities carried out by groups of students engaged in the laboratory project, part of the assignments of a Software Engineering undergraduate course. In order to standardize the software development activities we adopted the Unified Process (UP), with some modifications and adjustments aligned with our research proposal, which was also used to define the set of artifacts produced in each phase. The principles of the Theory of Constraints (TOC) were used to model the problem and consequently identify the bottlenecks in the software development process. Currently this work is in the early experimental stage, with the first of the experiments finalized and a second one, aiming to obtain more detailed data, under way. Once this step is completed, we can analyze the results and validate or refute some hypotheses and perhaps answer the question: "what are the bottlenecks in the software production process?"

Proceedings ArticleDOI
13 Jul 2015
TL;DR: This paper presents a prototype that records the power demand of hardware components and called operations of running mobile apps, which can help developers determine the cause of the high power demand of their apps, along with an evaluation that includes an assessment of the accuracy of the total power demand measurements.
Abstract: Determining the power demand of mobile applications (apps) is becoming a key area of interest for both end-users and developers due to the limited battery lifetime of mobile devices. Addressing this issue requires tools that measure the power demand of a mobile app. This power demand depends on the hardware components and the called operations of the mobile app. Therefore, this paper presents a prototype that records the power demand of hardware components and called operations of running mobile apps. This data can help developers in determining the cause of the high power demand of their apps, which assists in power demand reduction. This paper covers an analysis of the overhead produced by the prototype and an evaluation that includes: firstly, the evaluation of the accuracy of the total power demand measurements; secondly, the evaluation of the allocation of the GPS power demand of the operations; and thirdly, the evaluation of the allocation of the CPU power demand of the operations.

Proceedings ArticleDOI
13 Jul 2015
TL;DR: A model used in the case company to measure the value for money of a software product during development process as well as a first application of this model within industry context are presented.
Abstract: GSD is a fact for internationally operating software engineering companies. The Economic Value Added (EVA) provides an overview of the cost and revenue of such projects, and the R&D teams influence the cost position and revenue to a huge extent. The measurement of the impact of R&D and its influence on "value for money", however, is a less ventured region. The actual outcome of a product in terms of value for money provides insights on the role of R&D (Research and Development) in increasing revenue and reducing the cost of the product. In any software engineering project this is difficult, but in a globally distributed setting measuring the value is even more important in order to make balanced decisions considering, for example, short- and long-term value creation or future development settings (e.g., how and where to distribute the development teams). This paper presents a model for measuring the value of a software product in software development projects. We have identified different factors which should be taken into account when looking for the value of a software product, based on a literature review, available data within the company and several stakeholder interviews. These factors are quantitative (e.g., number of defects) and qualitative (e.g., innovation). Next, we mapped these factors to a value curve to show the value performance within a globally distributed software product development project. Additionally, we have applied these factors to a globally distributed project in order to understand the value evolution. The outcome of this paper is a model used in the case company to measure the value for money of a software product during the development process, as well as a first application of this model within an industry context.

Proceedings ArticleDOI
13 Jul 2015
TL;DR: In this experience sharing paper, this team would like to share their journey of transformation, in a span of just ~8 years, from a “Single location, One Product” organization to an organization working with “10+ R&D locations across Asia, Europe and America on 26+ Global Products” with a clear goal of becoming the “Software Engineering Partner of choice” for the Siemens R&D organization.
Abstract: Our toughest challenge was changing the general perception about a typical low cost offshore supplier, in terms of its Quality of Deliverables, Timely Delivery and Transparency in Communication, when we aspired to become the Partner of Choice. We knew, however, that breaking that perception was crucial to achieving our primary goal. In this experience sharing paper, we would like to share our journey of transformation, in a span of just ~8 years, from a "Single location, One Product" organization to an organization working with "10+ R&D locations across Asia, Europe and America on 26+ Global Products" with a clear goal of becoming the "Software Engineering Partner of choice" for the Siemens R&D organization.

Proceedings ArticleDOI
13 Jul 2015
TL;DR: Approximator is presented, a system that estimates the interruptibility of a user based exclusively on the sensing ability of commodity laptops, and shows early but promising results that represent a starting point for designing tools with support for interruptibility capable of improving distributed awareness and cooperation to be used in global software development.
Abstract: Assessing the presence and availability of a remote colleague is key to coordination in global software development but is not easily done using existing computer-mediated channels. Previous research has shown that automated estimation of interruptibility is feasible and can achieve a precision close to, or even better than, human judgment. However, existing approaches to assess interruptibility have been designed to rely on external sensors. In this paper, we present Approximator, a system that estimates the interruptibility of a user based exclusively on the sensing ability of commodity laptops. Experimental results show that the information aggregated from several activity monitors (i.e., key logger, mouse logger, and face detection) provides useful data which, once combined with machine learning techniques, can automatically estimate the interruptibility of users with 78% accuracy. These early but promising results represent a starting point for designing tools with support for interruptibility capable of improving distributed awareness and cooperation to be used in global software development.
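The pipeline the abstract describes — aggregate raw events from commodity-laptop monitors into features, then classify interruptibility — can be sketched as follows. The feature names, thresholds, and the trivial rule-based classifier are invented for illustration; Approximator itself trains a machine learning model over these kinds of signals.

```python
# Hypothetical sketch of the Approximator idea: key-logger, mouse-logger
# and face-detection events are aggregated per time window, then fed to a
# classifier (here a hand-written rule standing in for the learned model).

def extract_features(window):
    """Aggregate raw monitor events in a time window into features."""
    return {
        "keys_per_min": window["key_events"] / window["minutes"],
        "mouse_per_min": window["mouse_events"] / window["minutes"],
        "face_present": 1.0 if window["face_detected"] else 0.0,
    }

def interruptible(features, key_thresh=40.0, mouse_thresh=30.0):
    """Stand-in for the learned model: sustained input activity while the
    user faces the screen suggests focused work, i.e. not interruptible."""
    busy = (features["keys_per_min"] > key_thresh
            or features["mouse_per_min"] > mouse_thresh)
    return not (busy and features["face_present"] == 1.0)

window = {"key_events": 300, "mouse_events": 10,
          "minutes": 5, "face_detected": True}
print(interruptible(extract_features(window)))  # False: user is typing heavily
```

In the real system the rule would be replaced by a classifier trained on labeled interruptibility data, which is how the reported 78% accuracy was obtained.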

Proceedings ArticleDOI
13 Jul 2015
TL;DR: This paper lays out an ethical foundation that emphasizes the need for green software engineering, then raises the question of responsibility and what the ethical foundation can contribute to it.
Abstract: Codes of ethics for software engineers already exist. However, none of them explicitly addresses principles of green or sustainable software engineering. This paper tries to address this shortcoming by laying out an ethical foundation that emphasizes the need for green software engineering. After that, the question of responsibility is raised, along with what the ethical foundation can contribute to it.

Proceedings ArticleDOI
13 Jul 2015
TL;DR: An understanding of how the transition of distributed projects is enacted needs to be constructed, contributing to an understanding of the complexity and nature of this key project phase.
Abstract: Context: This study is directed towards understanding the problems related to the transition phase carried out during the switching of outsourcing vendors in a near-shore context. Objective: Given the scarcity of such studies, an understanding of how the transition of distributed projects is enacted needs to be constructed. This study will contribute such an understanding of the complexity and nature of this key project phase. Method: This study demonstrates the use of secondary data in longitudinal case study research. The two data analysis techniques used are temporal bracketing and dilemma analysis, which enabled us, respectively, to understand the vendor transition process over time and to come to grips with the various stakeholders' views and tensions. Results: The results of this study will contribute across substantive, conceptual and methodological domains. Conclusion: In the constructed framework the aim is to identify archetypes and patterns that reveal problems before they manifest as major failures.

Proceedings ArticleDOI
13 Jul 2015
TL;DR: An enhanced version of a Queued Petri Nets model is presented whose simulation runs allow the performance and energy consumption of a distributed data management system to be predicted and studied, reducing both investments in time and hardware.
Abstract: In the last couple of years it has become more difficult for end users of data management systems to optimize existing systems for performance and energy consumption. The reasons are the large number of data management systems to choose from, extensive use cases and a variety of hardware configurations. The number of factors that influence performance and energy efficiency rises even more when it comes to distributed data management systems. For instance, in addition to vertical scale-out effects, the horizontal scale-out effects have to be considered. This paper introduces an enhanced version of a Queued Petri Nets model whose simulation runs allow us to predict and study the performance and energy consumption of a distributed data management system. In contrast to traditional approaches, the simulation runs can reduce both investments in time and hardware. The model's predictions in terms of performance and energy consumption were evaluated and compared with the actual experimental results. The predicted and experimental response times differ by nearly 20 percent on average.
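The abstract's premise — predicting response time and energy from a workload model instead of running real hardware experiments — can be illustrated in miniature. The paper builds a Queued Petri Nets simulation; the sketch below substitutes a far simpler analytic M/M/1 queue approximation, and all the numbers (arrival rate, service rate, power draw) are made up.

```python
# Minimal stand-in for model-based prediction: an M/M/1 queue gives the
# mean response time analytically, and energy per request follows from an
# assumed active power draw. The real paper uses Queued Petri Nets instead.

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "system must be stable"
    return 1.0 / (service_rate - arrival_rate)

def energy_per_request(response_time, active_power_watts):
    """Energy spent per request while it is in the system (joules)."""
    return response_time * active_power_watts

# Hypothetical workload: 80 req/s arriving at a node serving 100 req/s.
rt = mm1_response_time(arrival_rate=80.0, service_rate=100.0)
print(round(rt, 3))                  # 0.05 s mean response time
print(energy_per_request(rt, 60.0))  # joules per request at 60 W
```

A Queued Petri Nets model plays the same role but can additionally capture synchronization, distributed nodes, and horizontal scale-out effects that a single analytic queue cannot.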

Proceedings ArticleDOI
13 Jul 2015
TL;DR: The role of Static Analysis in reducing technical debt by identifying potential defects early in the software development life cycle (SDLC) is substantiated.
Abstract: Assessment of code and design quality using Software Code Assessment tools is important for continuous improvement and monitoring of code quality in the software industry in general and for global software development in particular. Static Analysis is believed to help identify issues at an early stage in the software development life cycle (SDLC); however, it is still underutilized in the industry. In this paper, we discuss our experiences in determining the importance of Static Analysis and the extent to which the defects (that could otherwise slip to later stages of the SDLC) could have been reduced with the continuous use of Static Analysis. Towards this end, we have analyzed defects reported from testing and the field by correlating the defects to Static Analysis rules, for projects that were developed across different regions. Our focus was to gather insight into the number and type of defects that could be identified in advance through Static Analysis rules. The purpose of gaining these insights is to justify the ROI of software quality checks, fine-tune and update existing software quality practices, and introduce new practices uniformly across the teams as necessary. The results substantiate the role of Static Analysis in reducing technical debt by identifying potential defects early in the SDLC.
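The correlation analysis the abstract describes — mapping each defect reported from testing or the field to the static-analysis rule that would have flagged it, then measuring the catchable share — can be sketched as below. The defect records, defect types, and rule identifiers are all invented for illustration; the paper works with real project defect data.

```python
# Hypothetical sketch of defect-to-rule correlation: count how many
# reported defects map to a static-analysis rule that could have caught
# them earlier in the SDLC. All data here is made up.

defects = [
    {"id": "D-101", "type": "null dereference"},
    {"id": "D-102", "type": "resource leak"},
    {"id": "D-103", "type": "requirement misunderstanding"},
    {"id": "D-104", "type": "buffer overflow"},
]

# Assumed mapping from defect types to detecting rules; defect types with
# no entry (e.g. requirement issues) are out of static analysis' reach.
rule_for_type = {
    "null dereference": "NULL_RETURNS",
    "resource leak": "RESOURCE_LEAK",
    "buffer overflow": "OVERRUN",
}

catchable = [d for d in defects if d["type"] in rule_for_type]
ratio = len(catchable) / len(defects)
print(f"{len(catchable)}/{len(defects)} defects map to a rule ({ratio:.0%})")
```

The resulting ratio is the kind of figure that supports the ROI argument the abstract makes: it quantifies how many escaped defects continuous Static Analysis could have caught in advance.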