Book

A metrics suite for object oriented design

TL;DR: This research addresses the need for software measures in object-oriented design through the development and implementation of a new suite of metrics for OO design, and suggests ways in which managers may use these metrics for process improvement.
Abstract: Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and/or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. This research addresses these needs through the development and implementation of a new suite of metrics for OO design. Metrics developed in previous research, while contributing to the field's understanding of software development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber (1989), the theoretical base chosen for the metrics was the ontology of Bunge (1977). Six design metrics are developed, and then analytically evaluated against Weyuker's (1988) proposed set of measurement principles. An automated data collection tool was then developed and implemented to collect an empirical sample of these metrics at two field sites in order to demonstrate their feasibility and suggest ways in which managers may use these metrics for process improvement.
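
The six metrics in the suite are WMC, DIT, NOC, CBO, RFC, and LCOM. As a concrete illustration (not the authors' own collection tool), the sketch below computes two of them, Depth of Inheritance Tree (DIT) and Number of Children (NOC), for Java classes via reflection; the class universe is a small made-up example.

```java
// Illustrative sketch (not the authors' collection tool): computing two of the
// six CK metrics, Depth of Inheritance Tree (DIT) and Number of Children (NOC),
// for already-loaded Java classes via reflection.
import java.util.*;

public class CkMetricsSketch {

    // DIT: number of superclass links from the class up to java.lang.Object.
    static int depthOfInheritanceTree(Class<?> c) {
        int depth = 0;
        for (Class<?> cur = c.getSuperclass(); cur != null; cur = cur.getSuperclass()) {
            depth++;
        }
        return depth;
    }

    // NOC: number of classes in the given universe whose direct superclass is c.
    static int numberOfChildren(Class<?> c, Collection<Class<?>> universe) {
        int children = 0;
        for (Class<?> candidate : universe) {
            if (candidate.getSuperclass() == c) {
                children++;
            }
        }
        return children;
    }

    public static void main(String[] args) {
        List<Class<?>> universe =
                List.of(Object.class, Number.class, Integer.class, Double.class);
        for (Class<?> c : universe) {
            System.out.printf("%-10s DIT=%d NOC=%d%n",
                    c.getSimpleName(), depthOfInheritanceTree(c), numberOfChildren(c, universe));
        }
    }
}
```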


Citations
Journal ArticleDOI
TL;DR: Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle and are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.
Abstract: This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in (Chidamber and Kemerer, 1994). More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described in (Li and Henry, 1993) where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.
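
To make the "early quality indicator" idea concrete, here is a minimal sketch of how class-level CK metrics might feed a logistic model of class fault-proneness. The coefficients are invented placeholders for illustration, not values estimated in this study.

```java
// Illustrative sketch only: scoring class fault-proneness with a logistic
// model over CK metrics. The coefficients below are invented placeholders,
// not values estimated in the cited study.
public class FaultPronenessSketch {

    // Hypothetical coefficients: intercept, then weights for WMC, DIT, NOC, CBO, RFC, LCOM.
    static final double[] BETA = { -3.0, 0.05, 0.30, 0.10, 0.20, 0.02, 0.01 };

    // Logistic (sigmoid) link: maps a linear score to a probability in (0, 1).
    static double faultProbability(double wmc, double dit, double noc,
                                   double cbo, double rfc, double lcom) {
        double z = BETA[0] + BETA[1] * wmc + BETA[2] * dit + BETA[3] * noc
                 + BETA[4] * cbo + BETA[5] * rfc + BETA[6] * lcom;
        return 1.0 / (1.0 + Math.exp(-z));
    }

    public static void main(String[] args) {
        // A class with high coupling and many methods scores as more fault-prone.
        System.out.printf("small, loosely coupled class: %.2f%n",
                faultProbability(5, 1, 0, 2, 10, 0));
        System.out.printf("large, highly coupled class:  %.2f%n",
                faultProbability(40, 4, 3, 15, 60, 30));
    }
}
```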

1,741 citations

Proceedings ArticleDOI
16 Oct 2006
TL;DR: This paper recommends benchmarking selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open source, client-side Java benchmarks that improve over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory system requirements.
Abstract: Since benchmarks drive computer science research and industry product development, which ones we use and how we evaluate them are key questions for the community. Despite complex runtime tradeoffs due to dynamic compilation and garbage collection required for Java programs, many evaluations still use methodologies developed for C, C++, and Fortran. SPEC, the dominant purveyor of benchmarks, compounded this problem by institutionalizing these methodologies for their Java benchmark suite. This paper recommends benchmarking selection and evaluation methodologies, and introduces the DaCapo benchmarks, a set of open source, client-side Java benchmarks. We demonstrate that the complex interactions of (1) architecture, (2) compiler, (3) virtual machine, (4) memory management, and (5) application require more extensive evaluation than C, C++, and Fortran which stress (4) much less, and do not require (3). We use and introduce new value, time-series, and statistical metrics for static and dynamic properties such as code complexity, code size, heap composition, and pointer mutations. No benchmark suite is definitive, but these metrics show that DaCapo improves over SPEC Java in a variety of ways, including more complex code, richer object behaviors, and more demanding memory system requirements. This paper takes a step towards improving methodologies for choosing and evaluating benchmarks to foster innovation in system design and implementation for Java and other managed languages.
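
As a small illustration of the kind of statistical reporting the paper argues Java evaluations need (this is not the DaCapo harness itself), the sketch below summarizes repeated run times with a mean and an approximate normal-theory confidence interval; the timing numbers are made up.

```java
// Illustrative sketch (not the DaCapo harness): summarizing repeated benchmark
// run times with a mean and an approximate 95% confidence interval.
public class RunTimeSummary {

    public static void main(String[] args) {
        double[] runMillis = { 812, 798, 805, 790, 821, 803, 799, 810 };

        double mean = 0;
        for (double t : runMillis) mean += t;
        mean /= runMillis.length;

        double variance = 0;
        for (double t : runMillis) variance += (t - mean) * (t - mean);
        variance /= (runMillis.length - 1);          // sample variance
        double stdErr = Math.sqrt(variance / runMillis.length);

        // 1.96 is the normal-approximation quantile; with only 8 runs a
        // Student-t quantile (~2.36) would be the more careful choice.
        double halfWidth = 1.96 * stdErr;

        System.out.printf("mean = %.1f ms, 95%% CI = [%.1f, %.1f] ms%n",
                mean, mean - halfWidth, mean + halfWidth);
    }
}
```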

1,561 citations


Cites background or methods from "A metrics suite for object oriented..."

  • ...Code Complexity: To measure the complexity of the benchmark code, we use the Chidamber and Kemerer object-oriented programming (CK) metrics [12] measured with the ckjm software package [36]....


  • ...For example, DaCapo benchmarks have much richer code complexity, class structures, and class hierarchies than SPEC according to the Chidamber and Kemerer metrics [12]....


  • ...We present Chidamber and Kemerer’s software complexity metrics [12] and a number of virtual machine and architecture independent dynamic metrics, such as, classes loaded and bytecodes compiled....


Journal ArticleDOI
TL;DR: Holistic models for software defect prediction, using Bayesian belief networks, are recommended as alternatives to the single-issue models used at present, and research into a theory of "software decomposition" is argued for.
Abstract: Many organizations want to predict the number of defects (faults) in software systems, before they are deployed, to gauge the likely delivered quality and maintenance effort. To help in this, numerous software metrics and statistical models have been developed, with a correspondingly large literature. We provide a critical review of this literature and the state-of-the-art. Most of the wide range of prediction models use size and complexity metrics to predict defects. Others are based on testing data, the "quality" of the development process, or take a multivariate approach. The authors of the models have often made heroic contributions to a subject otherwise bereft of empirical studies. However, there are a number of serious theoretical and practical problems in many studies. The models are weak because of their inability to cope with the, as yet, unknown relationship between defects and failures. There are fundamental statistical and data quality problems that undermine model validity. More significantly, many prediction models tend to model only part of the underlying problem and seriously misspecify it. To illustrate these points, the Goldilocks Conjecture, that there is an optimum module size, is used to show the considerable problems inherent in current defect prediction approaches. Careful and considered analysis of past and new results shows that the conjecture lacks support and that some models are misleading. We recommend holistic models for software defect prediction, using Bayesian belief networks, as alternative approaches to the single-issue models used at present. We also argue for research into a theory of "software decomposition" in order to test hypotheses about defect introduction and help construct a better science of software engineering.
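
A toy sketch of the Bayesian-belief-network approach the authors recommend is shown below: a two-node network linking design complexity to residual defects, with invented probability tables, illustrating both prediction and diagnostic updating. Real models of this kind include many more factors (testing effort, process quality, problem difficulty).

```java
// Toy sketch of the Bayesian-belief-network idea: a two-node net,
// DesignComplexity -> ResidualDefects, with invented probability tables.
public class TinyDefectBbn {

    // P(DesignComplexity = high)
    static final double P_HIGH_COMPLEXITY = 0.3;

    // P(ResidualDefects = many | complexity)
    static final double P_MANY_GIVEN_HIGH = 0.7;
    static final double P_MANY_GIVEN_LOW  = 0.2;

    public static void main(String[] args) {
        // Prior (marginal) probability of many residual defects, by enumeration.
        double pMany = P_HIGH_COMPLEXITY * P_MANY_GIVEN_HIGH
                     + (1 - P_HIGH_COMPLEXITY) * P_MANY_GIVEN_LOW;
        System.out.printf("P(many defects) = %.2f%n", pMany);

        // Diagnostic reasoning via Bayes' rule: observing many defects
        // raises the belief that the design was highly complex.
        double pHighGivenMany = P_HIGH_COMPLEXITY * P_MANY_GIVEN_HIGH / pMany;
        System.out.printf("P(high complexity | many defects) = %.2f%n", pHighGivenMany);
    }
}
```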

1,063 citations


Cites background from "A metrics suite for object oriented..."

  • ...Finally, in Section 8 we record our conclusions....


01 Jul 1997
TL;DR: It is argued that automatic code obfuscation is currently the most viable method for preventing reverse engineering, and the design of a code obfuscator is described: a tool which converts a program into an equivalent one that is more difficult to understand and reverse engineer.
Abstract: It has become more and more common to distribute software in forms that retain most or all of the information present in the original source code. An important example is Java bytecode. Since such codes are easy to decompile, they increase the risk of malicious reverse engineering attacks. In this paper we review several techniques for technical protection of software secrets. We will argue that automatic code obfuscation is currently the most viable method for preventing reverse engineering. We then describe the design of a code obfuscator, a tool which converts a program into an equivalent one that is more difficult to understand and reverse engineer. The obfuscator is based on the application of code transformations, in many cases similar to those used by compiler optimizers. We describe a large number of such transformations, classify them, and evaluate them with respect to their potency (To what degree is a human reader confused?), resilience (How well are automatic deobfuscation attacks resisted?), and cost (How much overhead is added to the application?). We finally discuss some possible deobfuscation techniques (such as program slicing) and possible countermeasures an obfuscator could employ against them.
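
One transformation family surveyed in the paper is control-flow obfuscation with opaque predicates. The sketch below is a simplified, hand-written illustration of the idea (not output from the described obfuscator): the guard is always true, so behavior is preserved, but an automatic deobfuscator must prove that before it can remove the decoy branch.

```java
// Simplified illustration of an opaquely true predicate. For any int x, the
// product x * (x + 1) is even (even after two's-complement overflow), so the
// guarded branch always runs, yet the fact is not syntactically obvious.
public class OpaquePredicateSketch {

    // Original method.
    static int original(int balance, int deposit) {
        return balance + deposit;
    }

    // Obfuscated variant with the same behavior.
    static int obfuscated(int balance, int deposit) {
        int x = balance ^ deposit;          // any value works; mixes live data in
        if ((x * (x + 1)) % 2 == 0) {       // opaquely true predicate
            return balance + deposit;       // real computation
        } else {
            return balance - deposit;       // decoy branch, never taken
        }
    }

    public static void main(String[] args) {
        System.out.println(original(100, 25));    // 125
        System.out.println(obfuscated(100, 25));  // 125
    }
}
```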

1,019 citations


Additional excerpts


  • ...Metric 7 (OO Metric, Chidamber [3]): E(C) increases with (a) the number of methods in C, (b) the depth (distance from the root) of C in the inheritance tree, (c) the number of direct subclasses of C, (d) the number of other classes to which C is coupled, (e) the number of methods that can be executed in response to a message sent to an object of C, and (f) the degree to which C's methods do not reference the same set of instance variables....


Journal ArticleDOI
TL;DR: An improved hierarchical model that relates design properties such as encapsulation, modularity, coupling, and cohesion to high-level quality attributes such as reusability, flexibility, and complexity using empirical and anecdotal information is described.
Abstract: The paper describes an improved hierarchical model for the assessment of high-level design quality attributes in object-oriented designs. In this model, structural and behavioral design properties of classes, objects, and their relationships are evaluated using a suite of object-oriented design metrics. This model relates design properties such as encapsulation, modularity, coupling, and cohesion to high-level quality attributes such as reusability, flexibility, and complexity using empirical and anecdotal information. The relationship or links from design properties to quality attributes are weighted in accordance with their influence and importance. The model is validated by using empirical and expert opinion to compare with the model results on several large commercial object-oriented systems. A key attribute of the model is that it can be easily modified to include different relationships and weights, thus providing a practical quality assessment tool adaptable to a variety of demands.
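
The weighted-link idea can be illustrated in a few lines of code: a quality attribute is scored as a weighted sum of normalized design-property measurements. The property names below echo the model, but the weights and input values are illustrative placeholders rather than the published coefficients.

```java
// Illustrative sketch of a weighted-link quality model: a high-level quality
// attribute is a weighted sum of normalized design-property scores.
// Weights and inputs are invented for illustration.
public class QualityAttributeSketch {

    // Hypothetical weights linking design properties to a "reusability" index.
    // Positive weights help the attribute, negative weights hurt it.
    static double reusability(double coupling, double cohesion,
                              double messaging, double designSize) {
        return -0.25 * coupling
             +  0.25 * cohesion
             +  0.50 * messaging
             +  0.50 * designSize;
    }

    public static void main(String[] args) {
        // Property values are assumed to be normalized to a common scale first.
        double score = reusability(0.8, 0.6, 0.4, 0.5);
        System.out.printf("reusability index = %.2f%n", score);
    }
}
```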

980 citations

References
Book
04 Oct 1993
TL;DR: In this paper, a graph-theoretic complexity measure for managing and controlling program complexity is presented. The measure is independent of physical size and depends only on the decision structure of a program.
Abstract: This paper describes a graph-theoretic complexity measure and illustrates how it can be used to manage and control program complexity. The paper first explains how the graph theory concepts apply and gives an intuitive explanation of the graph concepts in programming terms. The control graphs of several actual FORTRAN programs are then presented to illustrate the correlation between intuitive complexity and the graph theoretic complexity. Several properties of the graph-theoretic complexity are then proved which show, for example, that complexity is independent of physical size (adding or subtracting functional statements leaves complexity unchanged) and complexity depends only on the decision structure of a program.The issue of using non-structured control flow is also discussed. A characterization of non-structured control graphs is given and a method of measuring the “structuredness” of a program is developed. The relationship between structure and reducibility is illustrated with several examples.The last section of the paper deals with a testing methodology used in conjunction with the complexity measure; a testing strategy is defined that dictates that a program can either admit of a certain minimal testing level or the program can be structurally reduced.
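
The measure itself is easy to state: for a control-flow graph with E edges, N nodes, and P connected components, the cyclomatic complexity is V(G) = E - N + 2P (equivalently, decisions + 1 for a single structured routine). The sketch below, with a hand-built graph for a single if/else, is an illustration rather than a full parser-based tool.

```java
// Sketch of cyclomatic complexity V(G) = E - N + 2P for a control-flow graph
// with E edges, N nodes, and P connected components (P = 1 for one routine).
// The tiny graph models "if (c) A else B; C", which has V(G) = 2.
import java.util.*;

public class CyclomaticComplexity {

    static int cyclomatic(int nodes, int edges, int components) {
        return edges - nodes + 2 * components;
    }

    public static void main(String[] args) {
        // Nodes: entry, A, B, C (join/exit)   -> N = 4
        // Edges: entry->A, entry->B, A->C, B->C -> E = 4
        Map<String, List<String>> cfg = new LinkedHashMap<>();
        cfg.put("entry", List.of("A", "B"));
        cfg.put("A", List.of("C"));
        cfg.put("B", List.of("C"));
        cfg.put("C", List.of());

        int nodes = cfg.size();
        int edges = cfg.values().stream().mapToInt(List::size).sum();

        System.out.println("V(G) = " + cyclomatic(nodes, edges, 1)); // prints 2
    }
}
```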

5,171 citations

Book
01 May 1990
TL;DR: Booch gives practical guidance for the construction of complex object-oriented design methods and draws on his extensive experience in developing very large software systems to illuminate the complex challenges and potential problems developers often face.
Abstract: Booch gives practical guidance for the construction of complex object-oriented design methods. A pioneer in the area, he draws on his extensive experience in developing very large software systems to illuminate both the complex challenges and potential problems developers often face.

2,312 citations


Additional excerpts

  • ...While there are many object oriented design (OOD) methodologies, one that reflects the essential features of OOD is presented by Booch [4]....


Book
01 Jan 1977

2,146 citations

Book
01 Apr 1991
TL;DR: The book has been comprehensively re-written and re-designed to take account of the fast changing developments in software metrics, most notably their widespread penetration into industrial practice.
Abstract: From the Publisher: This book is the Second Edition of the highly successful Software Metrics: A Rigorous Approach. The book has been comprehensively re-written and re-designed to take account of the fast changing developments in software metrics, most notably their widespread penetration into industrial practice. Thus there are now extensive case studies, worked examples, and exercises. While every section of the book has been improved and updated, there are also entirely new sections dealing with process maturity and measurement, goal-question-metric, metrics plans, experimentation, empirical studies, object-oriented metrics, and metrics tools. The book continues to provide an accessible and comprehensive introduction to software metrics, now an essential component in the software engineering process. Software Metrics, 2/e is ideal for undergraduate and graduates studying a course in software metrics or software quality assurance. It also provides an excellent resource for practitioners in industry.

1,094 citations


"A metrics suite for object oriented..." refers background in this paper

  • ...Fenton suggests that Weyuker's properties are not predicated on a single consistent view of complexity [14]....
