Author

Milla Ahola

Bio: Milla Ahola is an academic researcher. The author has contributed to research in topics: Metric (unit) & Software development. The author has co-authored 1 publication.

Papers
Proceedings ArticleDOI
15 Apr 2020
TL;DR: An exploration of industry's views on actionable metrics helps characterize actionable metrics in practical terms and facilitates their definition and development right from the start of a software metrics program.
Abstract: Background: Practitioners would like to take action based on software metrics, as long as they find them reliable. Existing literature explores how metrics can be made reliable, but it remains unclear whether there are other conditions necessary for a metric to be actionable. Context & Method: In the context of a European H2020 project, we conducted a multiple-case study of metrics' use in four companies and identified instances where these metrics influenced actions. We used an online questionnaire to enquire about the project participants' views on actionable metrics. Next, we invited one participant from each company to elaborate on the identified metrics' use for taking actions and on the questionnaire responses (N=17). Result: We learned that a metric that is practical, contextual, and exhibits high data quality is actionable. Even a non-actionable metric can be useful, but an actionable metric mostly still requires interpretation. However, the simpler these metrics are and the more accurately they reflect the software development context, the less interpretation is required to infer actionable information from them. Company size and project characteristics can also influence the type of metric that can be actionable. Conclusion: This exploration of industry's views on actionable metrics helps characterize actionable metrics in practical terms. Awareness of the characteristics that constitute an actionable metric can facilitate their definition and development right from the start of a software metrics program.

3 citations


Cited by
Journal ArticleDOI
TL;DR: Tim, as discussed by the authors, is a tool implemented at a company developing embedded systems, where software development occurs in parallel branches and nightly testing is partitioned over software branches, test systems, and test cases.
Abstract: Software testing is key for quality assurance of embedded systems. However, with increased development pace, the amount of test results data risks growing to a level where exploration and visualization of the results are unmanageable. This paper covers a tool, Tim, implemented at a company developing embedded systems, where software development occurs in parallel branches and nightly testing is partitioned over software branches, test systems and test cases. Tim aims to replace a previous solution with problems of scalability, requirements and technological flora. Tim was implemented with a reference group over several months. For validation, data were collected both from reference group meetings and from logs of the tool's usage. Data were analyzed quantitatively and qualitatively. The main contributions from the study include the implementation of eight views for test results exploration and visualization, the identification of four solution patterns for these views (filtering, aggregation, previews and comparisons), as well as six challenges frequently discussed at reference group meetings (expectations, anomalies, navigation, integrations, hardware details and plots). Results are put in perspective with related work, and future work is proposed, e.g., enhanced anomaly detection and integrations with more systems such as risk management, source code and requirements repositories.

6 citations
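Two of the solution patterns named in the abstract above, filtering and aggregation (plus comparison across nights), can be illustrated with a minimal Python sketch. This is a hypothetical example only: the field names, the record layout, and the pass/fail representation are assumptions for illustration, not Tim's actual data model or implementation.

from collections import defaultdict

# Hypothetical nightly test results: one record per executed test case.
# Field names are illustrative assumptions, not Tim's actual data model.
results = [
    {"night": "2020-04-14", "branch": "main",   "test_system": "rig-1", "test_case": "boot",  "passed": True},
    {"night": "2020-04-14", "branch": "main",   "test_system": "rig-1", "test_case": "radio", "passed": False},
    {"night": "2020-04-14", "branch": "feat-x", "test_system": "rig-2", "test_case": "boot",  "passed": True},
    {"night": "2020-04-15", "branch": "main",   "test_system": "rig-1", "test_case": "radio", "passed": True},
]

def filter_results(records, **criteria):
    """Filtering pattern: keep only records matching all given field values."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

def pass_rate_by(records, *fields):
    """Aggregation pattern: pass rate per combination of the given fields."""
    totals = defaultdict(lambda: [0, 0])  # key -> [passed, executed]
    for r in records:
        key = tuple(r[f] for f in fields)
        totals[key][0] += int(r["passed"])
        totals[key][1] += 1
    return {key: passed / executed for key, (passed, executed) in totals.items()}

def compare_nights(records, night_a, night_b, *fields):
    """Comparison pattern: pass-rate delta between two nights per group."""
    a = pass_rate_by(filter_results(records, night=night_a), *fields)
    b = pass_rate_by(filter_results(records, night=night_b), *fields)
    return {key: b.get(key, 0.0) - a.get(key, 0.0) for key in set(a) | set(b)}

# Example usage: aggregate main-branch results per test system, then compare nights per branch.
print(pass_rate_by(filter_results(results, branch="main"), "branch", "test_system"))
print(compare_nights(results, "2020-04-14", "2020-04-15", "branch"))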

Proceedings ArticleDOI
09 Nov 2022
TL;DR: The methodology employed is based on a literature review, considering relevant articles from well-established academic databases such as ACM, IEEE, and ScienceDirect (Elsevier), as well as dissertations, theses, and reputable journals on the topic, as discussed by the authors.
Abstract: The introduction of Agile Methodologies has transformed the way organizations work: shorter time to market, lower costs, and continuous improvement, with teams given the autonomy to find the solution best suited to the moment. Agile metrics provide insights into the current process and the adjustments needed to improve the workflow. This helps assess the quality of a product, monitor the team's performance, and improve that performance over time. In this article, we discuss agile metrics that are relevant to organizations' success. We also cite some agile metrics that are essential for an organization's business development. The methodology employed is based on a literature review, considering relevant articles from well-established academic databases such as ACM, IEEE, and ScienceDirect (Elsevier), as well as dissertations, theses, and reputable journals on the topic.