Actionable Software Metrics: An Industrial Perspective
Summary
1 Introduction
- One of the benchmarks of a software metrics program’s effectiveness is how actionable it is [22], since software practitioners are not interested only in mining data for insights but also in using those insights to guide their actions [20, 24].
- Even a non-actionable metric can be useful; conversely, an actionable metric usually still requires interpretation.
- This exploration of industry’s views on actionable metrics helps characterize actionable metrics in practical terms.
- The goal of the EU H2020 Project, Q-Rapids, is to develop an agile-based, data-driven, and quality-aware rapid software development process [3].
3.1 Research context
- The following table characterizes the context of the four companies to which the participants of this study belong: CC1 used the Solution in the context of a modeling tool for model-driven development, which is part of a mature product line with multiple releases already in the market. (Little’s Law: Avg. Cycle Time = Avg. Work in Progress / Avg. Throughput.)
- A comparable use of the Solution at CC3 was hindered by several unforeseeable technical and managerial circumstances.
- Following a positive and formative experience in their pilot use case, CC4 continued to use the Solution in the next UC, albeit with customizations exclusive to this case company.
- The experience accumulated from these two UCs informs CC4’s views on actionable metrics.
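The Little’s Law relation noted above connects three flow metrics. A minimal sketch of the arithmetic, using hypothetical numbers rather than any figures from the study:

```python
def avg_cycle_time(avg_wip: float, avg_throughput: float) -> float:
    """Little's Law: Avg. Cycle Time = Avg. Work in Progress / Avg. Throughput."""
    return avg_wip / avg_throughput

# Hypothetical team: on average 12 issues in progress,
# resolving 4 issues per week.
print(avg_cycle_time(12, 4))  # 3.0 (weeks per issue)
```

The same identity can be rearranged to estimate any one of the three quantities from the other two, which is why it is a common sanity check on issue-tracker data.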
3.2 Data collection
- The case companies shared metrics data with the Project researchers on a monthly basis, along with a short report documenting their use of the Solution.
- These 15 statements and their sources of reference are shown in the table below. With statements Q1-Q5, the authors enquire about the general characteristics of an actionable metric.
- Both these sections were followed by an option for additional comments.
- The five data quality characteristics used in the questionnaire refer to the inherent data quality, as prescribed in ISO/IEC 25012:2008 [6].
- The aim was to elicit responses from the users that were involved in the metrics-driven improvement actions at their companies.
3.3 Data analysis
- For analyzing the questionnaire, the authors used the data analysis approach adopted in [5].
- Like the questionnaire in [5], theirs (N=15) is also characterized by a small sample size.
- The authors calculated the following indicators to interpret the responses:
  1. % Agree: percentage of participants responding with either ‘Agree’ or ‘Strongly agree’.
  2. Top-Box: percentage of participants responding with ‘Strongly agree’.
  3. Net-Top-2-Box: percentage of participants who chose the top two responses (‘Strongly agree’ and ‘Agree’) minus the percentage who chose the bottom two (‘Strongly disagree’ and ‘Disagree’).
  4. Coefficient of Variation (CV): standard deviation divided by the mean.
- The first three indicators are measures of central tendency, each providing a single value that identifies the central value in a set of data.
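The indicators above can be sketched as follows. This is an illustration with hypothetical 5-point Likert responses (1 = Strongly disagree … 5 = Strongly agree), not the study’s data, and `indicators` is an assumed helper name:

```python
from statistics import mean, stdev

def indicators(responses):
    """Compute % Agree, Top-Box, Net-Top-2-Box, and CV for Likert responses (1-5)."""
    n = len(responses)
    pct_agree = 100 * sum(r >= 4 for r in responses) / n      # 'Agree' or 'Strongly agree'
    top_box = 100 * sum(r == 5 for r in responses) / n        # 'Strongly agree' only
    bottom_2 = 100 * sum(r <= 2 for r in responses) / n       # 'Disagree' or 'Strongly disagree'
    net_top_2_box = pct_agree - bottom_2                      # top two minus bottom two
    cv = stdev(responses) / mean(responses)                   # sample std. dev. / mean
    return {"% Agree": pct_agree, "Top-Box": top_box,
            "Net-Top-2-Box": net_top_2_box, "CV": round(cv, 3)}

# Hypothetical responses from 8 participants
print(indicators([5, 4, 4, 3, 5, 2, 4, 5]))
# → {'% Agree': 75.0, 'Top-Box': 37.5, 'Net-Top-2-Box': 62.5, 'CV': 0.267}
```

A low CV indicates that the participants’ responses cluster around the mean, i.e. relative agreement across respondents.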
4 Results
- Results from the questionnaire help answer RQ2 and are complemented by inputs from the invited participants.
- Using the ‘well-defined issues jira’ metric, the UC Champion learned that the developers were not always following the practice of maintaining the ‘Definition of Done’ (DoD) field in Jira.
- Owing to a relatively short development period and small project size, CC4’s UC team has been able to contextualize its metrics without straining the available resources.
- In contrast, CC2 are neutral (Md=3), and their comment that an actionable metric should be “easy to understand what it means, quickly” provides a rationale for that perspective.
- CC1 exercises caution in labeling a metric actionable, since most metrics can only be one of several contributors to an action rather than its primary driver, an argument justified by their experience of metrics’ use in the Project so far.
5 Discussion
- First, the authors discuss the results of RQ1, as presented above, informed by the views and rationale provided by the invited participants.
- Choice of metrics and their utility can be dictated by company size and project characteristics [8].
- As at CC3, company size and project characteristics significantly influence a metric’s role at CC4.
- There is some support for both perspectives: that every metric has to be actionable and, conversely, that only certain metrics need to be actionable.
- The ‘complete’ data quality requirement can be classified as a contextual quality [10], because its relevance and importance depend on the context of the task at hand.
6 Threats to validity
- The authors address threats to their study’s validity based on the guidelines recommended by Runeson and Höst [18].
- This shortcoming can still pose a threat to their study’s construct validity.
- Combining the questionnaire responses with the empirical accounts of actionable metric use, along with further validation by a practitioner from each case company, helps mitigate this threat.
- The authors’ study involves only four companies, and the questionnaire participants were selected by convenience sampling.
- This affects the external validity of their study.
7 Conclusion
- Ideally, a metrics program should facilitate data-driven decision-making.
- In the context of the EU H2020 Project, Q-Rapids, the authors attempted to address this research gap by collaborating with the four industrial partners.
- Building upon the empirical accounts of metrics’ use for decision-making at the case companies, the authors administered an online questionnaire to document the involved practitioners’ views on actionable metrics.
- There is some support for both perspectives: that every metric has to be actionable and that only certain metrics need to be actionable.
- The evidence suggests that both perspectives can be valid.