
Showing papers on "Halstead complexity measures published in 1990"


Journal ArticleDOI
TL;DR: Predictive models that incorporate a functional relationship of program error measures with software complexity metrics and metrics based on factor analysis of empirical data are developed and suggest that predictive models are indeed possible for the determination of program errors from these orthogonal complexity domains.
Abstract: Predictive models that incorporate a functional relationship of program error measures with software complexity metrics and metrics based on factor analysis of empirical data are developed. Specific techniques for assessing and analyzing these regression models are presented. Within the framework of regression analysis, the authors examine two separate means of exploring the connection between complexity and errors. First, the regression models are formed from the raw complexity metrics. Essentially, these models confirm a known relationship between program lines of code and program errors. The second methodology involves the regression of complexity factor measures and measures of errors. These complexity factors are orthogonal measures of complexity from an underlying complexity domain model. From this more global perspective, it is believed that there is a relationship between program errors and complexity domains of program structure and size (volume). Further, the strength of this relationship suggests that predictive models are indeed possible for the determination of program errors from these orthogonal complexity domains.

246 citations
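The modeling approach described above can be illustrated with a small, hypothetical sketch (not the authors' code or data): error counts are regressed first on raw complexity metrics and then on orthogonal scores obtained from those metrics, with principal components standing in for the factor analysis. The metric names, synthetic data, and the scikit-learn library are assumptions made only for the example.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Columns: lines of code, cyclomatic complexity, Halstead volume (synthetic data).
X = rng.poisson(lam=[200, 12, 900], size=(50, 3)).astype(float)
errors = 0.02 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 2, 50)

# Model 1: regression on the raw complexity metrics.
raw_model = LinearRegression().fit(X, errors)
print("raw-metric R^2:", raw_model.score(X, errors))

# Model 2: regression on orthogonal scores (PCA stands in for the factor analysis).
scores = PCA(n_components=2).fit_transform((X - X.mean(0)) / X.std(0))
factor_model = LinearRegression().fit(scores, errors)
print("factor-score R^2:", factor_model.score(scores, errors))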


Journal ArticleDOI
TL;DR: Although one of the design metrics (informational fan-out) was able to identify change-prone, fault-prone and complex programs, code metrics (i.e. lines of code and number of branches) were better.
Abstract: Some software design metrics are evaluated using data from a communications system. The design metrics investigated were based on the information flow metrics proposed by S. Henry and D. Kafura (1981), and the problems encountered are discussed. The slightly simpler metrics used in this study are described. The ability of the design metrics to identify change-prone, error-prone and complex programs is contrasted with that of simple code metrics. Although one of the design metrics (informational fan-out) was able to identify change-prone, fault-prone and complex programs, code metrics (i.e. lines of code and number of branches) were better. In this context 'better' means correctly identifying a larger proportion of change-prone, error-prone and/or complex programs, while maintaining a relatively low false identification rate (i.e. rarely flagging a program which did not in fact exhibit any undesirable features).

103 citations
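As a rough, hypothetical illustration of the comparison above, the proportion of change-prone programs correctly identified and the false identification rate for a simple lines-of-code flag could be computed as follows; the programs, cutoff, and counts are invented for the example.

# Flag programs whose lines of code exceed a cutoff and score the flag against
# the programs known to be change-prone. All values here are assumptions.
programs = [
    # (name, lines_of_code, branches, actually_change_prone)
    ("a", 120, 10, False), ("b", 900, 85, True),
    ("c", 450, 40, True),  ("d", 200, 15, False),
]
LOC_CUTOFF = 400

flagged = {name for name, loc, _, _ in programs if loc > LOC_CUTOFF}
change_prone = {name for name, _, _, prone in programs if prone}

hit_rate = len(flagged & change_prone) / len(change_prone)
false_rate = len(flagged - change_prone) / (len(programs) - len(change_prone))
print(f"identified {hit_rate:.0%} of change-prone programs, "
      f"false identification rate {false_rate:.0%}")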


Journal ArticleDOI
TL;DR: This paper describes a method of software quality control based on the use of software metrics that is applied to software design metrics to illustrate how design metrics can be used constructively during the software production process.
Abstract: The paper describes a method of software quality control based on the use of software metrics. The method is applied to software design metrics to illustrate how design metrics can be used constructively during the software production process. The various types of design metrics and how they can be used to support module (procedure) quality control are discussed. This involves adapting conventional quality control methods such as control charts to the realities of software, by using: ‘robust’ summary statistics to construct ranges of acceptable metric values; scatterplots to detect modules with unusual combinations of metric values; and different types of metrics to help identify the underlying reasons for a module having unacceptable (anomalous) metric values. The approach is illustrated with examples of metrics from a number of existing software products.

40 citations
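One way to read the control-chart idea above is sketched below: a 'robust' acceptable range for a module metric is built from quartiles and the interquartile range, then used to flag modules with anomalous values. The specific statistics, the 1.5 multiplier, and the module data are assumptions for the example, not the paper's prescription.

import statistics

def robust_range(values, k=1.5):
    # Acceptable range from quartiles: [Q1 - k*IQR, Q3 + k*IQR].
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# Lines of code per module (assumed values).
module_loc = {"parse": 80, "report": 95, "init": 60, "legacy_io": 410}
low, high = robust_range(list(module_loc.values()))
anomalous = [m for m, v in module_loc.items() if not low <= v <= high]
print("acceptable range:", (low, high), "anomalous modules:", anomalous)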


Proceedings ArticleDOI
31 Oct 1990
TL;DR: The relationship between measures of software complexity and programming errors is explored and a factor analytic technique used to construct a linear compound of lines of code with control metrics was found to yield models of superior predictive quality.
Abstract: The relationship between measures of software complexity and programming errors is explored. Four distinct regression models were developed for an experimental set of data to create a predictive model relating software complexity metrics to program errors. The lines of code metric, traditionally associated with programming errors in predictive models, was found to be less valuable as a criterion measure in these models than measures of software control complexity. A factor analytic technique used to construct a linear compound of lines of code with control metrics was found to yield models of superior predictive quality.

26 citations


Journal ArticleDOI
TL;DR: A statistical procedure for validating conventional metrics based on Halstead's Software Science and McCabe's Cyclomatic Complexity is presented and a methodology to analyze large software projects with a single set of metrics is proposed.

20 citations
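Since the metrics validated above are Halstead's Software Science and McCabe's cyclomatic complexity, a simplified sketch of both may be useful: Halstead's vocabulary, length, and volume from a rough operator/operand split of Python tokens, and an approximate cyclomatic number from counted decision nodes. The classification rules below are assumptions made for the example; real measurement tools define them more carefully and per language.

import ast, io, math, token, tokenize

SOURCE = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""

# Rough operator/operand split over the token stream (simplified on purpose).
operators, operands = [], []
for tok in tokenize.generate_tokens(io.StringIO(SOURCE).readline):
    if tok.type == token.OP or (tok.type == token.NAME and tok.string in ("def", "if", "return")):
        operators.append(tok.string)
    elif tok.type in (token.NAME, token.NUMBER, token.STRING):
        operands.append(tok.string)

n1, n2 = len(set(operators)), len(set(operands))   # distinct operators / operands
N1, N2 = len(operators), len(operands)              # total occurrences
vocabulary, length = n1 + n2, N1 + N2
volume = length * math.log2(vocabulary)              # Halstead volume V = N * log2(n)
print(f"Halstead: vocabulary={vocabulary} length={length} volume={volume:.1f}")

# McCabe cyclomatic complexity ~ 1 + number of decision points.
decisions = sum(isinstance(node, (ast.If, ast.While, ast.For, ast.BoolOp))
                for node in ast.walk(ast.parse(SOURCE)))
print("cyclomatic complexity ~", 1 + decisions)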


Journal ArticleDOI
TL;DR: Guidelines for establishing a standard metrics program for organizations are presented; it is suggested that the collection effort be minimal, with the data to be processed already being collected, so that the total metrics effort is not viewed as a burden.
Abstract: Guidelines for establishing a standard metrics program for organizations are presented. Initially, it is suggested that the collection effort be minimal, meaning the data to be processed should already be being collected, so that the total metrics effort will not be viewed as a burden; the raw metric data must be such that it can be processed automatically; the initial metrics effort should rely on computer programs that already exist in some basic form; and the metrics must be viewed as worthwhile. Sources of data are discussed. The data needed for documentation metrics, source code metrics, problem-change report metrics, cost metrics, productivity metrics, and rework metrics are identified.

5 citations


Journal ArticleDOI
TL;DR: It is argued that there is a need to develop more relevant metrics and, in particular, to concentrate effort on stochastic-based metrics.
Abstract: The rapid development of software systems and their interaction with many human activities have already established that in-service software performance is not always reliable. Most software metrics developed to date fail to reflect the software's potential for operational malfunctions. A critical review of existing non-stochastic-based software metrics and their relevance to several well-defined concepts of reliability is given. The paucity of such metrics that provide any real information towards the quantification of operational reliability is shown. It is argued that there is a need to develop more relevant metrics and, in particular, to concentrate effort on stochastic-based metrics.

4 citations


Book ChapterDOI
13 Apr 1990
TL;DR: The extent to which current metrics can be used for software management and engineering is surveyed, with emphasis on metrics suited to quantifying maintainability characteristics in the different stages of software development.
Abstract: In the last 10-15 years, a multitude of software quality metrics has been developed. The most well-known of these metrics are Halstead's Software Science measures, McCabe's cyclomatic number, Gilb's logical complexity metrics, Henry and Kafura's information flow metrics, and Yau and Collofello's stability measures. In particular, metrics appropriate for quantifying characteristics of software maintainability drew widespread attention from both managers and engineers, since the rising cost of maintaining software systems is still an important concern for software developers and customers: the resources invested in software maintenance have been estimated to consume two thirds of the life cycle costs of software. Quality metrics are considered an effective aid in managing the software development and maintenance process. Their periodic application is viewed as a control system providing continuous feedback and allowing corrective actions throughout the development process. This paper surveys the state of the art in measuring and predicting the quality of software by means of metrics. Emphasis is placed on metrics which are suited to quantifying maintainability characteristics in the different stages of software development. Results of empirical validation studies are reported to show the analytical and predictive power of these metrics. The survey concludes with an assessment of how far current metrics can be used for software management and engineering.

4 citations
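Of the metrics named in the survey above, Henry and Kafura's information flow measure has a compact, commonly cited form, complexity = length x (fan-in x fan-out)^2, which a small sketch can illustrate; the procedure names and counts below are assumed purely for the example.

# Henry-Kafura information flow: complexity = length * (fan_in * fan_out) ** 2.
# The per-procedure counts here are invented for illustration.
procedures = {
    # name: (length in statements, fan_in, fan_out)
    "read_config": (40, 2, 5),
    "dispatch":    (25, 8, 12),
    "log_event":   (10, 15, 1),
}

for name, (length, fan_in, fan_out) in procedures.items():
    complexity = length * (fan_in * fan_out) ** 2
    print(f"{name:12s} information-flow complexity = {complexity}")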


Journal ArticleDOI
S. Bhide1
TL;DR: A software process-integrated metrics framework is presented that is derived from and driven by user satisfaction and profitability to the producer (USPP), and further defined by successive translations of USPP into lower-level metrics.

4 citations


01 Jan 1990
TL;DR: The difficulties in applying standard metrics to object-oriented code are described, and a set of metrics which are specifically geared toward the features which make the object-oriented approach unique is defined.
Abstract: Software metrics are in use to guide current software development practices. As commercial organizations make use of the benefits of the object-oriented paradigm, the desire to apply metrics to that paradigm has logically followed. However, standard procedural metrics are limited in their ability to describe true object-oriented designs and code, and in some aspects fail outright. This paper describes the difficulties in applying standard metrics to object-oriented code and defines a set of metrics which are specifically geared toward the features which make the object-oriented approach unique.

3 citations
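The abstract above does not list the object-oriented metrics the paper defines, so purely as a hypothetical illustration of the general kind of class-level counts such a suite might include (not the paper's metrics), one could measure methods per class and the number of direct base classes:

import ast

SOURCE = """
class Shape:
    def area(self): ...

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2
"""

# Count methods and direct base classes per class definition.
for cls in [n for n in ast.walk(ast.parse(SOURCE)) if isinstance(n, ast.ClassDef)]:
    methods = sum(isinstance(item, ast.FunctionDef) for item in cls.body)
    print(f"{cls.name}: {methods} methods, {len(cls.bases)} direct base class(es)")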


Journal ArticleDOI
TL;DR: In a direct comparison of two alternative modeling techniques, the reduced factor model was found to have better predictive quality than a model based on the raw complexity metrics.

01 Jun 1990
TL;DR: The results of the analysis of the programs using four metrics, cyclomatic complexity, bandwidth, nested complexity and the number of statements, show that control-structure metrics can be effectively used to detect the more fault-prone modules.
Abstract: The increasing cost and complexity of software in recent years is causing growing interest in the development of measurement technology to evaluate, predict and compare software complexity. Metrics can be used throughout the development cycle, providing valuable information that helps software developers enhance the final products. The goal of this thesis is to verify empirically the fault-predictive ability of some software complexity metrics, and specifically their usefulness during the testing phase. A set of eight programs, varying in length from 1,186 to 2,489 lines of Pascal code, with 157 faults traced to specific modules, provided the data for this study. The results of the analysis of the programs using four metrics, cyclomatic complexity, bandwidth, nested complexity and the number of statements, show that control-structure metrics can be effectively used to detect the more fault-prone modules. The nested complexity of the modules appears to bear some relation to the number of faults caused by incorrect use of variables and overly restrictive input checks. These observations can be particularly useful during the testing phase because testers can use control-structure metrics to predict not only the modules that may cause more problems but also the more frequent types of faults, and can use the metrics to guide the choice of testing techniques.
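In the spirit of the nested-complexity and bandwidth metrics discussed above, a small sketch can show how a maximum-nesting-depth measure might be computed per function and used to rank candidates for closer testing. The study analyzed Pascal programs; Python's ast module is used here purely for brevity, and the sample functions are invented.

import ast

SOURCE = """
def shallow(xs):
    return [x + 1 for x in xs]

def deep(grid):
    for row in grid:
        for cell in row:
            if cell:
                while cell > 0:
                    cell -= 1
"""

CONTROL = (ast.If, ast.For, ast.While, ast.Try, ast.With)

def max_depth(node, depth=0):
    # Deepest nesting of control structures beneath this node.
    deepest = depth
    for child in ast.iter_child_nodes(node):
        next_depth = depth + 1 if isinstance(child, CONTROL) else depth
        deepest = max(deepest, max_depth(child, next_depth))
    return deepest

tree = ast.parse(SOURCE)
ranking = sorted(((max_depth(f), f.name) for f in tree.body
                  if isinstance(f, ast.FunctionDef)), reverse=True)
for depth, name in ranking:
    print(f"{name}: max nesting depth {depth}")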