
Showing papers on "Halstead complexity measures published in 2005"


Journal ArticleDOI
TL;DR: The results of this investigation show that most of the spatial complexity metrics evaluated offer no substantially better information about program complexity than the number of lines of code, however, one metric shows more promise and is deemed to be a candidate for further use and investigation.
Abstract: Software comprehension is one of the largest costs in the software lifecycle. In an attempt to control the cost of comprehension, various complexity metrics have been proposed to characterize the difficulty of understanding a program and, thus, allow accurate estimation of the cost of a change. Such metrics are not always evaluated. This paper evaluates a group of metrics recently proposed to assess the "spatial complexity" of a program (spatial complexity is informally defined as the distance a maintainer must move within source code to build a mental model of that code). The evaluation takes the form of a large-scale empirical study of evolving source code drawn from a commercial organization. The results of this investigation show that most of the spatial complexity metrics evaluated offer no substantially better information about program complexity than the number of lines of code. However, one metric shows more promise and is thus deemed to be a candidate for further use and investigation.
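Spatial complexity, as informally defined above, can be illustrated with a toy measure: the total line distance a reader travels from each call site of a function to its definition. This is a hypothetical formulation for illustration only, not one of the paper's actual metrics.

```python
def spatial_distance(call_lines, def_line):
    """Toy 'spatial complexity': sum of line distances from each call
    site of a function to the line where that function is defined."""
    return sum(abs(c - def_line) for c in call_lines)

# function defined at line 10, called at lines 3, 42 and 97
print(spatial_distance([3, 42, 97], 10))  # 7 + 32 + 87 = 126
```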

25 citations


Journal ArticleDOI
TL;DR: This paper presents an approach to enhance managerial usage of software metrics programs that combines an organizational problem-solving process with a view on metrics programs as information media and offers comparisons to related approaches to software metrics.
Abstract: This paper presents an approach to enhance managerial usage of software metrics programs. The approach combines an organizational problem-solving process with a view on metrics programs as information media. A number of information-centric views are developed to engage key stakeholders in debating current metrics practices and identifying possible improvements. We present our experiences and results of using the approach at Software Inc., we offer comparisons to related approaches to software metrics, and we discuss how to use the information-centric approach for improvement purposes.

23 citations


Proceedings ArticleDOI
15 Dec 2005
TL;DR: A new set of metrics for analyzing the interaction between the modules of a large software system, based on the rationale that code partitioning should be based on the principle of similarity of service provided by the different functions encapsulated in a module is presented.
Abstract: We present a new set of metrics for analyzing the interaction between the modules of a large software system. We believe that these metrics would be important to any automatic or semi-automatic code modularization algorithm. The metrics are based on the rationale that code partitioning should be based on the principle of similarity of service provided by the different functions encapsulated in a module. Although module interaction metrics are necessary for code modularization, in practice they must be accompanied by metrics that measure other important attributes of how the code is partitioned into modules. These other metrics, dealing with code properties such as the approximate uniformity of module sizes, conformance to any size constraints on the modules, etc., are also included in the work presented here. To give the reader some insight into the workings of our metrics, this paper also includes some results obtained by applying the metrics to the body of code that constitutes the open-source Apache HTTP server. We apply our metrics to this code as packaged by the developers of the software and to partially and fully randomized versions of the code.
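The module-size uniformity property mentioned in the abstract can be given a simple formulation; a hypothetical sketch is one minus the coefficient of variation of module sizes (this particular formula is an assumption, not the paper's definition):

```python
import statistics

def size_uniformity(module_sizes):
    """Hypothetical uniformity score for module sizes: 1 minus the
    coefficient of variation, clamped to [0, 1].
    1.0 means all modules are the same size; 0.0 means highly skewed."""
    mean = statistics.mean(module_sizes)
    cv = statistics.pstdev(module_sizes) / mean  # coefficient of variation
    return max(0.0, 1.0 - cv)

print(size_uniformity([100, 100, 100]))  # 1.0 (perfectly uniform)
print(size_uniformity([10, 500, 40]))    # 0.0 (very skewed sizes)
```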

21 citations


Proceedings ArticleDOI
03 Jan 2005
TL;DR: Focusing on coupling metrics, this paper presents an empirical study to analyze the relationship between static and dynamic coupling metrics and proposes the concept of pseudo dynamic metrics to estimate the dynamic behavior early in the software development lifecycle.
Abstract: Summary form only given. Software metrics have become an integral part of software development and are used during every phase of the software development life cycle. Research in the area of software metrics tends to focus predominantly on static metrics that are obtained by static analysis of the software artifact. But software quality attributes such as performance and reliability depend on the dynamic behavior of the software artifact. Estimates of software quality attributes based on dynamic metrics are more accurate and realistic. The research presented in this paper attempts to narrow the gap between static metrics and dynamic metrics, and lay the foundation for a more systematic approach to estimate the dynamic behavior of a software system early in the software development cycle. Focusing on coupling metrics, we present an empirical study to analyze the relationship between static and dynamic coupling metrics and propose the concept of pseudo dynamic metrics to estimate the dynamic behavior early in the software development lifecycle.
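A toy illustration of a static coupling count, here at module level for Python source via import analysis. The paper's metrics are class-level and language-independent; this is only an assumption-laden sketch of what "static coupling" means in practice.

```python
import ast

def static_coupling(source, module_names):
    """Count static import-based coupling: how many of the given modules
    a Python source file references via import statements."""
    tree = ast.parse(source)
    used = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            used.update(a.name for a in node.names if a.name in module_names)
        elif isinstance(node, ast.ImportFrom) and node.module in module_names:
            used.add(node.module)
    return len(used)

src = "import os\nimport json\nfrom math import sqrt\n"
print(static_coupling(src, {"os", "math", "csv"}))  # 2 (os and math)
```

Dynamic coupling, by contrast, would require running the program and observing which modules actually interact at runtime, which is why the paper proposes pseudo dynamic metrics to estimate it earlier.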

20 citations


01 Jan 2005
TL;DR: This paper analyzes the design and definitions of Halstead’s metrics, the set of which is commonly referred to as ‘software science’, based on a measurement analysis framework to structure, compare, analyze and provide an understanding of the various measurement approaches presented in the software engineering measurement literature.
Abstract: Some software measures are still not widely used in industry, despite the fact that they were defined many years ago, and some additional insights might be gained by revisiting them today with the benefit of recent lessons learned about how to analyze their design. In this paper, we analyze the design and definitions of Halstead’s metrics, the set of which is commonly referred to as ‘software science’. This analysis is based on a measurement analysis framework defined to structure, compare, analyze and provide an understanding of the various measurement approaches presented in the software engineering measurement literature.
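The core "software science" measures whose design the paper analyzes can be sketched from operator/operand counts. A minimal Python sketch follows, with the token classification supplied by hand, since Halstead left the operator/operand counting rules language-specific:

```python
import math

def halstead(operators, operands):
    """Compute core Halstead measures from lists of the operator and
    operand tokens in a program fragment (with repetitions)."""
    n1, n2 = len(set(operators)), len(set(operands))  # distinct counts
    N1, N2 = len(operators), len(operands)            # total counts
    n = n1 + n2                   # vocabulary
    N = N1 + N2                   # program length
    V = N * math.log2(n)          # volume
    D = (n1 / 2) * (N2 / n2)      # difficulty
    E = D * V                     # effort
    return {"vocabulary": n, "length": N, "volume": V,
            "difficulty": D, "effort": E}

# Tokens of a toy fragment:  z = x + x * y
ops = ["=", "+", "*"]
opnds = ["z", "x", "x", "y"]
m = halstead(ops, opnds)
print(m["vocabulary"], m["length"])  # 6 7
```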

20 citations


Proceedings ArticleDOI
19 Sep 2005
TL;DR: This paper discusses the idea of change metrics that are modification aware, that is, metrics that evaluate the change itself and not just the change in a measurement of the system before and after the change.
Abstract: In this paper we propose the notion of change metrics, those that measure change in a project or its entities. In particular, we are interested in measuring fine-grained changes, such as those stored by version control systems (such as CVS). A framework for the classification of change metrics is provided. We discuss the idea of change metrics that are modification aware, that is, metrics that evaluate the change itself and not just the change in a measurement of the system before and after the change. We then provide examples of the use of these metrics on two mature projects.
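A minimal example of a change metric over two versions of a file, here just counting added and removed lines with Python's difflib. This is far coarser than the fine-grained, CVS-level changes the paper targets, but it shows the idea of measuring the change itself:

```python
import difflib

def change_size(before, after):
    """Minimal 'change metric' in the paper's sense: measure the change
    itself (lines added and removed between two versions) rather than a
    system-level metric computed before and after the change."""
    diff = list(difflib.ndiff(before.splitlines(), after.splitlines()))
    added = sum(1 for line in diff if line.startswith("+ "))
    removed = sum(1 for line in diff if line.startswith("- "))
    return added, removed

# one line removed ("b") and one added ("d") between the two versions
print(change_size("a\nb\nc", "a\nc\nd"))  # (1, 1)
```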

19 citations


Journal ArticleDOI
01 Jan 2005
TL;DR: The aim of this paper is the further development of the mathematical theory of algorithmic complexity measures for evaluating computer software design and utilization, and the development of more efficient software metrics.

Abstract: The aim of this paper is the further development of the mathematical theory of algorithmic complexity measures for evaluating computer software design and utilization. We consider some basic concepts and constructions from the theory of algorithmic complexity and develop a system structure for the theory. The paper contains the main concepts of axiomatic complexity theory, which are fundamental for the whole theory of algorithmic complexity. Important classes of dual complexity measures are studied. The research is oriented toward the development of software engineering and, in particular, the creation of more efficient software metrics.

5 citations


Journal ArticleDOI
TL;DR: A technique that facilitates cross-validation of software metrics for component-based development and a formalization for the metrics suite that combines the UML 2.0 metamodel with OCL is presented.
Abstract: The objective is to present a technique that facilitates cross-validation of software metrics for component-based development. The technique is illustrated with a cross-validation experiment for a suite of reusability metrics for component-based design published in the literature. These metrics were originally proposed using a semi-formal notation, namely a combination of mathematical formulae with natural language descriptions for their elementary parts. They were then computed using proprietary tools. By contrast, we present a formalization for the metrics suite that combines the UML 2.0 metamodel with OCL. This technique provides a formal, portable and executable definition of the metrics set that can be used to perform cross-validations of the metrics suite, such as the one presented in this paper. The ability to independently replicate metrics validation experiments is essential to the scientific progress of component-based software engineering.

4 citations


Journal Article
TL;DR: Representative methods of constructing the relation model between decision goals and quantitative software attributes (software metrics data) are introduced in this paper.
Abstract: The aim of software metrics is to support decision making. To make correct decisions, a relation model between the goal of the decision and quantitative software attributes, namely software metrics data, must be constructed. Representative methods of constructing this relation model, which have been widely used in the software metrics field, are introduced in this paper. For every method, the principle, approach, application in software metrics, and advantages and disadvantages are discussed. Finally, two potential methods are mentioned that are not yet widely used in software metrics.

1 citation


11 May 2005
TL;DR: The paper investigates the use of two of the most popular software complexity measurement theories, Halstead's Software Science metrics and McCabe's cyclomatic complexity, to analyze basic characteristics of multitasking systems implemented in the programming language C.

Abstract: The paper investigates the use of two of the most popular software complexity measurement theories, Halstead's Software Science metrics and McCabe's cyclomatic complexity, to analyze basic characteristics of multitasking systems implemented in the programming language C. Extensions of both metric systems are proposed to capture additional information about the typical characteristics of multitasking systems.
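For reference, McCabe's V(G) for a single-entry, single-exit routine equals the number of decision points plus one. A rough keyword-counting sketch for C source follows; this heuristic is an assumption for illustration (real tools build the control-flow graph and compute V(G) = E - N + 2P):

```python
import re

def cyclomatic_estimate(c_source):
    """Rough cyclomatic complexity of a single C function: count branch
    points (if, for, while, case, &&, ||, ?:) and add one."""
    pattern = r"\b(if|for|while|case)\b|&&|\|\||\?"
    return len(re.findall(pattern, c_source)) + 1

src = """
int classify(int x) {
    if (x > 0 && x < 10) return 1;
    for (int i = 0; i < x; i++) { }
    return 0;
}
"""
print(cyclomatic_estimate(src))  # if, &&, for -> 3 decision points + 1 = 4
```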


Proceedings ArticleDOI
21 Mar 2005
TL;DR: An approach that applies McCabe Complexity and Halstead Software Measures to create a hypothetical, language-independent representation of an algorithm, identifying the encapsulated, measurable components that compose that algorithm is illustrated.
Abstract: The inclusion of data hiding techniques in everything from consumer electronics to military systems is becoming more commonplace. This has resulted in a growing interest in benchmarks for embedding algorithms, which until now has focused primarily on the theoretical and product oriented aspects of algorithms (such as PSNR) rather than the factors that are often imposed by the system (e.g., size, execution speed, complexity). This paper takes an initial look at these latter issues through the application of some simple and well known software engineering metrics: McCabe Complexity and Halstead Software Measures. This paper illustrates an approach that applies these metrics to create a hypothetical, language-independent representation of an algorithm, identifying the encapsulated, measurable components that compose that algorithm. This is the first step in developing a representation that will not only allow for comparison between disparate algorithms, but describe and define algorithms in such a way as to remove language and platform dependency. Bringing these concepts to their logical conclusion highlights how such an approach would provide existing benchmarking systems a more in-depth and fair analysis of algorithms in the context of systems as a whole, and decrease variability which affects the accuracy of the theoretical and product measures used today.