Journal ArticleDOI

Quantifying the Effect of Code Smells on Maintenance Effort

TL;DR: To reduce maintenance effort, a focus on reducing code size and the work practices that limit the number of changes may be more beneficial than refactoring code smells.
Abstract
Context: Code smells are assumed to indicate bad design that leads to less maintainable code. However, this assumption has not been investigated in controlled studies with professional software developers.
Aim: This paper investigates the relationship between code smells and maintenance effort.
Method: Six developers were hired to perform three maintenance tasks each on four functionally equivalent Java systems originally implemented by different companies. Each developer spent three to four weeks. In total, they modified 298 Java files in the four systems. An Eclipse IDE plug-in measured the exact amount of time a developer spent maintaining each file. Regression analysis was used to explain the effort using file properties, including the number of smells.
Result: None of the 12 investigated smells was significantly associated with increased effort after we adjusted for file size and the number of changes; Refused Bequest was significantly associated with decreased effort. File size and the number of changes explained almost all of the modeled variation in effort.
Conclusion: The effects of the 12 smells on maintenance effort were limited. To reduce maintenance effort, a focus on reducing code size and the work practices that limit the number of changes may be more beneficial than refactoring code smells.
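The analysis described in the Method section can be sketched as an ordinary least squares regression of per-file effort on file properties. The sketch below is a minimal illustration under assumed, made-up data; the variable names (file size, number of changes, smell count) follow the abstract, but the numbers and the plain normal-equations solver are illustrative, not the authors' actual analysis.

```python
# Sketch of an OLS regression of maintenance effort on file properties.
# All data below are hypothetical; only the modeling idea follows the paper.

def ols(X, y):
    """Fit OLS by solving the normal equations (X^T X) beta = X^T y
    with Gaussian elimination and partial pivoting."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         for i in range(p)]                      # X^T X
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]  # X^T y
    for col in range(p):
        pivot = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, p))) / A[i][i]
    return beta

# Hypothetical per-file rows: [intercept, file size (LOC), changes, smells]
files = [
    [1, 120, 3, 2],
    [1, 450, 7, 5],
    [1, 200, 2, 0],
    [1, 800, 9, 6],
    [1, 300, 4, 1],
    [1, 650, 8, 3],
]
effort = [35, 120, 40, 210, 70, 170]  # maintenance effort in minutes (made up)

beta = ols(files, effort)
print({"intercept": round(beta[0], 2), "size": round(beta[1], 2),
       "changes": round(beta[2], 2), "smells": round(beta[3], 2)})
```

Adjusting for size and changes, as the paper does, corresponds to including those columns alongside the smell count so that the smell coefficient reflects only effort not already explained by them.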


Citations
Journal ArticleDOI

When and Why Your Code Starts to Smell Bad (and Whether the Smells Go Away)

TL;DR: The findings mostly contradict the common wisdom that smells are introduced during evolutionary tasks, and call for a new generation of recommendation systems aimed at properly planning smell refactoring activities.
Journal ArticleDOI

Comparing and experimenting machine learning techniques for code smell detection

TL;DR: In what is, to the best of the authors' knowledge, the largest experiment applying machine learning algorithms to code smell detection, the authors conclude that machine learning can detect these code smells with high accuracy (>96%), and that only a hundred training examples are needed to reach at least 95% accuracy.
Proceedings ArticleDOI

When and why your code starts to smell bad

TL;DR: The findings mostly contradict common wisdom, showing that most smell instances are introduced when an artifact is created rather than as a result of its evolution, and that 80 percent of smells survive in the system.
Journal ArticleDOI

On the diffuseness and the impact on maintainability of code smells: a large scale empirical investigation

TL;DR: The results show that smells characterized by long and/or complex code (e.g., Complex Class) are highly diffused, and that smelly classes have a higher change- and fault-proneness than smell-free classes.
Journal ArticleDOI

Evolution of software in automated production systems

TL;DR: In this article, the authors provide an interdisciplinary survey of the challenges and the state of the art in the evolution of automated production systems, and summarize future research directions for addressing those challenges.
References
Proceedings ArticleDOI

What works for whom, where, when, and why?: on the role of context in empirical software engineering

TL;DR: Provides an overview of how context affects empirical research and how empirical software engineering research can be better "contextualized" in order to yield a better understanding of what works for whom, where, when, and why.
Proceedings ArticleDOI

On the Impact of Design Flaws on Software Defects

TL;DR: It is found that, while some design flaws are more frequent, none of them can be considered more harmful with respect to software defects.
Proceedings ArticleDOI

Clones: What is that smell?

TL;DR: The great majority of bugs are not significantly associated with clones; the authors find that clones may be less defect-prone than non-cloned code, and that there is little evidence that clones with more copies are actually more error-prone.
Journal ArticleDOI

A systematic review of quasi-experiments in software engineering

TL;DR: Quasi-experimentation is useful in many settings in software engineering (SE), but the design and analysis of quasi-experiments must be improved to ensure that inferences drawn from this kind of experiment are valid.
Journal ArticleDOI

Variability and Reproducibility in Software Engineering: A Study of Four Companies that Developed the Same System

TL;DR: It is found that the observed outcomes of the four development projects matched the expectations, which were formulated partially on the basis of SE folklore. Nevertheless, achieving more reproducibility in SE remains a great challenge for SE research, education, and industry.