Author

Sallie M. Henry

Bio: Sallie M. Henry is an academic researcher from Virginia Tech. The author has contributed to research in the topics of Software construction and Software metrics. The author has an h-index of 15 and has co-authored 41 publications receiving 2,201 citations.

Papers
Journal ArticleDOI
Wei Li, Sallie M. Henry
TL;DR: This research concentrates on several object-oriented software metrics and the validation of these metrics with maintenance effort in two commercial systems.

1,111 citations

Proceedings ArticleDOI
Wei Li, Sallie M. Henry
21 May 1993
TL;DR: This research concentrates on several object oriented software metrics and the validation of these metrics with maintenance effort in two commercial systems.
Abstract: Software metrics have been studied in the procedural paradigm as a quantitative means of assessing the software development process as well as the quality of software products. Several studies have validated that various metrics are useful indicators of maintenance effort in the procedural paradigm. However, software metrics have rarely been studied in the object oriented paradigm. Very few metrics have been proposed to measure object oriented systems, and the proposed ones have not been validated. This research concentrates on several object oriented software metrics and the validation of these metrics with maintenance effort in two commercial systems.

277 citations
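The validation approach described in the abstract above is commonly operationalized by regressing a maintenance effort proxy (such as lines changed per class) on per-class metric values. The sketch below does this with ordinary least squares on synthetic data; the metric choices, coefficients, and counts are invented for illustration and are not the study's actual measurements.

```python
import numpy as np

# Sketch of validating metrics against maintenance effort: regress an
# effort proxy (lines changed per class) on per-class metric values.
# All numbers are synthetic; the study used two commercial systems.

rng = np.random.default_rng(1)
n_classes = 50
# Three hypothetical per-class metrics (e.g. size, coupling, method count)
metrics = rng.poisson(lam=(8, 3, 15), size=(n_classes, 3)).astype(float)
# Synthetic "ground truth": effort driven by the metrics plus noise
lines_changed = metrics @ np.array([2.0, 5.0, 1.5]) + rng.normal(0, 10, n_classes)

# Ordinary least squares fit: effort ~ intercept + metrics
X = np.column_stack([np.ones(n_classes), metrics])
coef, *_ = np.linalg.lstsq(X, lines_changed, rcond=None)
print("intercept and per-metric coefficients:", coef.round(2))
```

A significant, stable coefficient for a metric across systems is the kind of evidence such a study reads as validation of that metric.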

Journal ArticleDOI
TL;DR: This paper states a set of criteria that has guided the development of a metric system for measuring the quality of a large-scale software product, a system that uses the flow of information within the system as an index of system interconnectivity.

110 citations

Journal ArticleDOI
TL;DR: This paper reports on a positive experience with a set of quantitative measures of software structure, which were used to evaluate the design and implementation of a software system which exhibits the interconnectivity of components typical of large‐scale software systems.
Abstract: The design and analysis of the structure of software systems has typically been based on purely qualitative grounds. In this paper we report on our positive experience with a set of quantitative measures of software structure. These metrics, based on the number of possible paths of information flow through a given component, were used to evaluate the design and implementation of a software system (the UNIX operating system kernel) which exhibits the interconnectivity of components typical of large-scale software systems. Several examples are presented which show the power of this technique in locating a variety of both design and implementation defects. Suggested repairs, which agree with the commonly accepted principles of structured design and programming, are presented. The effect of these alterations on the structure of the system and the quantitative measurements of that structure lead to a convincing validation of the utility of information flow metrics.

100 citations
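The information flow measurement described above is widely cited in the form complexity = length × (fan-in × fan-out)². The sketch below applies that formulation to invented procedure data; the names and counts are hypothetical, not taken from the UNIX kernel study.

```python
# Minimal sketch of the Henry-Kafura information flow complexity,
# complexity = length * (fan_in * fan_out)^2, applied to toy data.

def information_flow_complexity(length: int, fan_in: int, fan_out: int) -> int:
    """Henry-Kafura complexity for a single procedure."""
    return length * (fan_in * fan_out) ** 2

# (length in lines, fan-in, fan-out) for hypothetical procedures
components = {
    "sched":  (120, 4, 3),
    "getblk": (80, 6, 2),
    "iput":   (45, 2, 1),
}

for name, (length, fan_in, fan_out) in components.items():
    c = information_flow_complexity(length, fan_in, fan_out)
    print(f"{name:8s} complexity = {c}")
```

Procedures whose scores stand far above their neighbors' are the candidates for the kind of design and implementation defects the abstract describes.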

Journal ArticleDOI
01 Jan 1981
TL;DR: The primary result of this study is that Halstead's and McCabe's metrics are highly correlated while the information flow metric appears to be an independent measure of complexity.
Abstract: Automatable metrics of software quality appear to have numerous advantages in the design, construction and maintenance of software systems. While numerous such metrics have been defined, and several of them have been validated on actual systems, significant work remains to be done to establish the relationships among these metrics. This paper reports the results of correlation studies made among three complexity metrics which were applied to the same software system. The three complexity metrics used were Halstead's effort, McCabe's cyclomatic complexity and Henry and Kafura's information flow complexity. The common software system was the UNIX operating system. The primary result of this study is that Halstead's and McCabe's metrics are highly correlated while the information flow metric appears to be an independent measure of complexity.

94 citations
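For readers unfamiliar with the three metrics being correlated, the sketch below computes each from toy counts using the standard published definitions (Halstead effort E = D × V, McCabe V(G) = E − N + 2P, and the information flow formulation above); the input numbers are made up.

```python
import math

# The three complexity metrics compared in the study, computed from
# made-up counts using their standard published definitions.

def halstead_effort(n1, n2, N1, N2):
    """Halstead effort E = D * V from distinct (n) and total (N)
    operator/operand counts."""
    volume = (N1 + N2) * math.log2(n1 + n2)
    difficulty = (n1 / 2) * (N2 / n2)
    return difficulty * volume

def mccabe(edges, nodes, components=1):
    """McCabe cyclomatic complexity V(G) = E - N + 2P."""
    return edges - nodes + 2 * components

def henry_kafura(length, fan_in, fan_out):
    """Henry-Kafura information flow complexity."""
    return length * (fan_in * fan_out) ** 2

# Hypothetical measurements for one procedure
print(halstead_effort(n1=12, n2=20, N1=50, N2=60))   # effort
print(mccabe(edges=18, nodes=14))                    # V(G) = 6
print(henry_kafura(length=80, fan_in=3, fan_out=2))  # 2880
```

A study of the kind reported here would compute these per procedure across a system and then rank-correlate the resulting columns.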


Cited by
Book
02 Sep 2011
TL;DR: This research addresses the need for software measures in object-oriented design through the development and implementation of a new suite of metrics for OO design, and suggests ways in which managers may use these metrics for process improvement.
Abstract: Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and/or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. This research addresses these needs through the development and implementation of a new suite of metrics for OO design. Metrics developed in previous research, while contributing to the field's understanding of software development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber (1989), the theoretical base chosen for the metrics was the ontology of Bunge (1977). Six design metrics are developed, and then analytically evaluated against Weyuker's (1988) proposed set of measurement principles. An automated data collection tool was then developed and implemented to collect an empirical sample of these metrics at two field sites in order to demonstrate their feasibility and suggest ways in which managers may use these metrics for process improvement.

5,476 citations
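The six metrics referred to are the Chidamber-Kemerer suite: weighted methods per class (WMC), depth of inheritance tree (DIT), number of children (NOC), coupling between objects (CBO), response for a class (RFC), and lack of cohesion in methods (LCOM). As a rough illustration, the sketch below computes DIT and NOC over an invented Python class hierarchy; conventions for counting the root vary across tools.

```python
# Two of the six Chidamber-Kemerer metrics (DIT and NOC) computed by
# introspection over a toy class hierarchy invented for illustration.

class Shape: ...
class Polygon(Shape): ...
class Circle(Shape): ...
class Triangle(Polygon): ...

def depth_of_inheritance_tree(cls) -> int:
    """DIT: longest inheritance path from the class to the root.
    Here object counts as depth 0; some tools count differently."""
    if cls is object:
        return 0
    return 1 + max(depth_of_inheritance_tree(base) for base in cls.__bases__)

def number_of_children(cls) -> int:
    """NOC: count of immediate subclasses."""
    return len(cls.__subclasses__())

for c in (Shape, Polygon, Circle, Triangle):
    print(c.__name__, depth_of_inheritance_tree(c), number_of_children(c))
```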

Journal ArticleDOI
TL;DR: Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle and are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.
Abstract: This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in (Chidamber and Kemerer, 1994). More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described in (Li and Henry, 1993) where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.

1,741 citations
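A typical way to operationalize this kind of validation is a logistic regression of a binary fault label on per-class metric values. The sketch below does so on synthetic data with scikit-learn; the generated metrics and the effects that drive faults are invented, not measurements from the study's eight systems.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of fault-proneness validation: regress a binary fault label on
# per-class CK metric values. All data below is synthetic.

rng = np.random.default_rng(0)
n = 200
# Columns: WMC, DIT, NOC, CBO, RFC, LCOM (illustrative Poisson draws)
X = rng.poisson(lam=(10, 2, 1, 5, 20, 8), size=(n, 6)).astype(float)
# Synthetic "truth": coupling (CBO) and response set (RFC) drive faults
logits = 0.4 * X[:, 3] + 0.1 * X[:, 4] - 5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)
names = ["WMC", "DIT", "NOC", "CBO", "RFC", "LCOM"]
print(dict(zip(names, model.coef_[0].round(2))))
```

The appeal the abstract notes is that design metrics like these are available before code exists, unlike "traditional" code metrics.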

Journal ArticleDOI
TL;DR: Holistic models for software defect prediction, using Bayesian belief networks, are recommended as alternative approaches to the single-issue models used at present, and research into a theory of "software decomposition" is argued for.
Abstract: Many organizations want to predict the number of defects (faults) in software systems, before they are deployed, to gauge the likely delivered quality and maintenance effort. To help in this, numerous software metrics and statistical models have been developed, with a correspondingly large literature. We provide a critical review of this literature and the state-of-the-art. Most of the wide range of prediction models use size and complexity metrics to predict defects. Others are based on testing data, the "quality" of the development process, or take a multivariate approach. The authors of the models have often made heroic contributions to a subject otherwise bereft of empirical studies. However, there are a number of serious theoretical and practical problems in many studies. The models are weak because of their inability to cope with the, as yet, unknown relationship between defects and failures. There are fundamental statistical and data quality problems that undermine model validity. More significantly many prediction models tend to model only part of the underlying problem and seriously misspecify it. To illustrate these points the Goldilocks Conjecture, that there is an optimum module size, is used to show the considerable problems inherent in current defect prediction approaches. Careful and considered analysis of past and new results shows that the conjecture lacks support and that some models are misleading. We recommend holistic models for software defect prediction, using Bayesian belief networks, as alternative approaches to the single-issue models used at present. We also argue for research into a theory of "software decomposition" in order to test hypotheses about defect introduction and help construct a better science of software engineering.

1,063 citations
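As a flavor of what a Bayesian-belief-network defect model computes, the sketch below works a deliberately tiny two-node case by hand; every probability is invented, and real models of the kind the authors recommend involve many interacting nodes (process quality, testing effort, size, and so on).

```python
# A deliberately tiny Bayesian-network-style defect calculation, worked
# by hand with invented probability tables.

p_high_complexity = 0.3      # prior P(module has high complexity)
p_defective = {              # CPT: P(module defective | complexity)
    "high": 0.6,
    "low": 0.1,
}

# Marginal probability that a module is defective
p_def = (p_high_complexity * p_defective["high"]
         + (1 - p_high_complexity) * p_defective["low"])

# Posterior P(high complexity | defective) by Bayes' rule
posterior = p_high_complexity * p_defective["high"] / p_def

print(f"P(defective) = {p_def:.2f}")                         # 0.25
print(f"P(high complexity | defective) = {posterior:.2f}")   # 0.72
```

The paper's point is that a network of such conditional dependencies can represent the whole causal picture that single-issue regression models misspecify.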

Journal ArticleDOI
TL;DR: An improved hierarchical model that relates design properties such as encapsulation, modularity, coupling, and cohesion to high-level quality attributes such as reusability, flexibility, and complexity using empirical and anecdotal information is described.
Abstract: The paper describes an improved hierarchical model for the assessment of high-level design quality attributes in object-oriented designs. In this model, structural and behavioral design properties of classes, objects, and their relationships are evaluated using a suite of object-oriented design metrics. This model relates design properties such as encapsulation, modularity, coupling, and cohesion to high-level quality attributes such as reusability, flexibility, and complexity using empirical and anecdotal information. The relationship or links from design properties to quality attributes are weighted in accordance with their influence and importance. The model is validated by using empirical and expert opinion to compare with the model results on several large commercial object-oriented systems. A key attribute of the model is that it can be easily modified to include different relationships and weights, thus providing a practical quality assessment tool adaptable to a variety of demands.

980 citations
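The weighted-link structure the abstract describes can be pictured as each quality attribute being a weighted sum of normalized design-property scores. The sketch below shows that shape with invented scores and weights; the published model derives its weights from empirical and expert input.

```python
# Sketch of the weighted-link idea: each quality attribute is a weighted
# sum of normalized design-property scores. Scores and weights below are
# invented for illustration.

design_properties = {  # normalized scores for one hypothetical design
    "encapsulation": 0.8,
    "modularity": 0.6,
    "coupling": 0.4,
    "cohesion": 0.7,
}

weights = {  # influence of each property on each attribute (illustrative)
    "reusability": {"coupling": -0.25, "cohesion": 0.25, "modularity": 0.5},
    "flexibility": {"encapsulation": 0.25, "coupling": -0.25, "modularity": 0.5},
}

def attribute_score(attribute: str) -> float:
    return sum(w * design_properties[prop]
               for prop, w in weights[attribute].items())

for attr in weights:
    print(f"{attr}: {attribute_score(attr):+.2f}")
```

The adaptability the abstract highlights falls out of this structure: changing the links or their weights requires no change to the evaluation machinery.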

01 Jan 1998
Abstract: Abstract data types; sorting and searching; parallel and distributed algorithms; computer architecture.

833 citations