Author

Frank McGarry

Bio: Frank McGarry is an academic researcher from Goddard Space Flight Center. The author has contributed to research in topics: Software development & Software construction. The author has an h-index of 12 and has co-authored 28 publications receiving 809 citations. Previous affiliations of Frank McGarry include Computer Sciences Corporation & University of Maryland, College Park.

Papers
Proceedings ArticleDOI
01 Jun 1992
TL;DR: The SEL is discussed as a functioning example of an operational software experience factory and the characteristics of and major lessons learned from 15 years of SEL operations are summarized.
Abstract: For 15 years, the Software Engineering Laboratory (SEL) has been carrying out studies and experiments for the purpose of understanding, assessing, and improving software and software processes within a production software development environment at the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). The SEL comprises three major organizations: NASA/GSFC, Flight Dynamics Division; University of Maryland, Department of Computer Science; and Computer Sciences Corporation, Flight Dynamics Technology Group. These organizations have jointly carried out several hundred software studies, producing hundreds of reports, papers, and documents, all of which describe some aspect of the software engineering technology that has been analyzed in the flight dynamics environment at NASA. The studies range from small, controlled experiments (such as analyzing the effectiveness of code reading versus that of functional testing) to large, multiple-project studies (such as assessing the impacts of Ada on a production environment). The organization's driving goal is to improve the software process continually, so that sustained improvement may be observed in the resulting products. This paper discusses the SEL as a functioning example of an operational software experience factory and summarizes the characteristics of and major lessons learned from 15 years of SEL operations.

183 citations

Proceedings ArticleDOI
19 May 2002
TL;DR: The history of the SEL is described, the research that was conducted, how the understanding of software process improvement evolved, and a set of lessons learned and hypotheses that should enable future groups to learn from and improve on their quarter century of experiences are described.
Abstract: For 25 years the NASA/GSFC Software Engineering Laboratory (SEL) has been a major resource in software process improvement activities. But due to a changing climate at NASA, agency reorganization, and budget cuts, the SEL has lost much of its impact. In this paper we describe the history of the SEL and give some lessons learned on what we did right, what we did wrong, and what others can learn from our experiences. We briefly describe the research that was conducted by the SEL, describe how we evolved our understanding of software process improvement, and provide a set of lessons learned and hypotheses that should enable future groups to learn from and improve on our quarter century of experiences.

151 citations

Journal ArticleDOI
TL;DR: Candidates for process change are selected on the basis of quantified Software Engineering Laboratory (SEL) experiences and clearly defined goals for the software; measured results are compared against a continually evolving baseline to decide whether to adopt, discard, or revise each process.
Abstract: We select candidates for process change on the basis of quantified Software Engineering Laboratory (SEL) experiences and clearly defined goals for the software. After we select the changes, we provide training and formulate experiment plans. We then apply the new process to one or more production projects and take detailed measurements. We assess process success by comparing these measures with the continually evolving baseline. Based upon the results of the analysis, we adopt, discard, or revise the process.
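
A minimal sketch of the measurement-based decision step described above, assuming a simple thresholded comparison against the baseline; the metric names, the 10% margin, and the decision rule are illustrative assumptions, not the SEL's actual criteria:

def assess_process(candidate: dict, baseline: dict, margin: float = 0.10) -> str:
    """Return 'adopt', 'revise', or 'discard' for a candidate process.

    Both arguments map metric names (lower is better, e.g. fault rate,
    cost per developed line) to values measured on production projects.
    """
    improved = sum(1 for m in baseline
                   if candidate[m] < baseline[m] * (1 - margin))
    worsened = sum(1 for m in baseline
                   if candidate[m] > baseline[m] * (1 + margin))
    if improved and not worsened:
        return "adopt"    # clear gain on some goal, no regressions
    if worsened and not improved:
        return "discard"  # clear loss, no compensating gain
    return "revise"       # mixed or inconclusive: refine and re-measure

baseline = {"faults_per_kloc": 7.0, "hours_per_kloc": 120.0}
candidate = {"faults_per_kloc": 5.5, "hours_per_kloc": 118.0}
print(assess_process(candidate, baseline))  # adopt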

88 citations

01 Dec 1994
TL;DR: The concept of process improvement within the SEL focuses on the continual understanding of both process and product as well as goal-driven experimentation and analysis of process change within a production environment.
Abstract: The Software Engineering Laboratory (SEL) was established in 1976 for the purpose of studying and measuring software processes with the intent of identifying improvements that could be applied to the production of ground support software within the Flight Dynamics Division (FDD) at the National Aeronautics and Space Administration (NASA)/Goddard Space Flight Center (GSFC). The SEL has three member organizations: NASA/GSFC, the University of Maryland, and Computer Sciences Corporation (CSC). The concept of process improvement within the SEL focuses on the continual understanding of both process and product as well as goal-driven experimentation and analysis of process change within a production environment.

64 citations

Proceedings ArticleDOI
01 Aug 1985
TL;DR: The results indicated that module strength is a good criterion with respect to fault rate, whereas arbitrary module size limitations inhibit programmer productivity.
Abstract: A central issue in programming practice involves determining the appropriate size and information content of a software module. This study attempted to determine the effectiveness of two widely used criteria for software modularization, strength and size, in reducing fault rate and development cost. Data from 453 FORTRAN modules developed by professional programmers were analyzed. The results indicated that module strength is a good criterion with respect to fault rate, whereas arbitrary module size limitations inhibit programmer productivity. This analysis is a first step toward defining empirically based standards for software modularization.
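
A small sketch of the kind of grouping behind such an analysis, using hypothetical module records (the original study analyzed 453 FORTRAN modules with the authors' own strength classification):

from collections import defaultdict

# (strength rating, faults found, executable statements) -- hypothetical records
modules = [
    ("high", 0, 120), ("high", 1, 340), ("medium", 2, 150),
    ("medium", 3, 280), ("low", 4, 90), ("low", 6, 310),
]

faults = defaultdict(int)
stmts = defaultdict(int)
for strength, n_faults, n_stmts in modules:
    faults[strength] += n_faults
    stmts[strength] += n_stmts

for strength in ("high", "medium", "low"):
    rate = 1000 * faults[strength] / stmts[strength]  # faults per 1000 statements
    print(f"{strength:>6} strength: {rate:.1f} faults per KSTMT")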

56 citations


Cited by
Journal ArticleDOI
TL;DR: Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle and are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.
Abstract: This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in (Chidamber and Kemerer, 1994). More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described in (Li and Henry, 1993) where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.
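
A hedged sketch of the validation idea: the paper used logistic regression over the six Chidamber-Kemerer metrics, but the toy dataset and class values below are invented for illustration, not taken from the study.

from sklearn.linear_model import LogisticRegression

# One row per class: [WMC, DIT, NOC, CBO, RFC, LCOM] -- the CK design metrics.
X = [
    [5, 1, 0, 2, 10, 0], [25, 3, 1, 9, 48, 30],
    [8, 2, 0, 3, 15, 2], [31, 4, 2, 12, 60, 45],
    [12, 1, 0, 4, 20, 5], [22, 3, 0, 8, 41, 28],
]
y = [0, 1, 0, 1, 0, 1]  # 1 = at least one fault recorded against the class

model = LogisticRegression().fit(X, y)

# Design metrics are available before coding starts, so the fault-proneness
# prediction can be made early in the life cycle.
print(model.predict_proba([[18, 2, 1, 7, 35, 20]])[0][1])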

1,741 citations

Journal ArticleDOI
TL;DR: It is shown that how static code attributes are used to build defect predictors is much more important than which particular attributes are used, and that, contrary to prior pessimism, such predictors are demonstrably useful, yielding a mean probability of detection of 71 percent and a mean false alarm rate of 25 percent.
Abstract: The value of using static code attributes to learn defect predictors has been widely debated. Prior work has explored issues like the merits of "McCabes versus Halstead versus lines of code counts" for generating defect predictors. We show here that such debates are irrelevant since how the attributes are used to build predictors is much more important than which particular attributes are used. Also, contrary to prior pessimism, we show that such defect predictors are demonstrably useful and, on the data studied here, yield predictors with a mean probability of detection of 71 percent and mean false alarms rates of 25 percent. These predictors would be useful for prioritizing a resource-bound exploration of code that has yet to be inspected.
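
The two figures quoted come from standard confusion-matrix definitions; a short sketch with counts chosen to reproduce the reported means (the counts themselves are not from the paper):

def pd_pf(tp: int, fn: int, fp: int, tn: int) -> tuple[float, float]:
    """Probability of detection (recall) and probability of false alarm."""
    pd = tp / (tp + fn)  # defective modules correctly flagged
    pf = fp / (fp + tn)  # defect-free modules incorrectly flagged
    return pd, pf

pd, pf = pd_pf(tp=71, fn=29, fp=25, tn=75)
print(f"pd = {pd:.0%}, pf = {pf:.0%}")  # pd = 71%, pf = 25%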

1,135 citations

01 Jan 1998
Abstract: Abstract data types; sorting and searching; parallel and distributed algorithms; computer architecture.

833 citations

Journal ArticleDOI
TL;DR: Whether or not many software organizations admit it, they face the challenge of sustaining the level of competence needed to win contracts and fulfill undertakings.
Abstract: Software organizations' main assets are not plants, buildings, or expensive machines. A software organization's main asset is its intellectual capital, as it is in sectors such as consulting, law, investment banking, and advertising. The major problem with intellectual capital is that it has legs and walks home every day. At the same rate experience walks out the door, inexperience walks in the door. Whether or not many software organizations admit it, they face the challenge of sustaining the level of competence needed to win contracts and fulfill undertakings.

753 citations

Journal ArticleDOI
TL;DR: It is demonstrated in this paper that the minimum number of data samples required to build effective defect predictors can be quite small and can be collected quickly within a few months.
Abstract: We propose a practical defect prediction approach for companies that do not track defect-related data. Specifically, we investigate the applicability of cross-company (CC) data for building localized defect predictors using static code features. First, we analyze the conditions where CC data can be used as is. These conditions turn out to be quite few. Then we apply principles of analogy-based learning (i.e. nearest neighbor (NN) filtering) to CC data, in order to fine-tune these models for localization. We compare the performance of these models with that of defect predictors learned from within-company (WC) data. As expected, we observe that defect predictors learned from WC data outperform the ones learned from CC data. However, our analyses also yield defect predictors learned from NN-filtered CC data, with performance close to, but still not better than, WC data. Therefore, we perform a final analysis to determine the minimum number of local defect reports needed to learn WC defect predictors. We demonstrate in this paper that the minimum number of data samples required to build effective defect predictors can be quite small and can be collected quickly, within a few months. Hence, for companies with no local defect data, we recommend a two-phase approach that allows them to employ the defect prediction process instantaneously. In phase one, companies should use NN-filtered CC data to initiate the defect prediction process and simultaneously start collecting WC (local) data. Once enough WC data is collected (i.e. after a few months), organizations should switch to phase two and use predictors learned from WC data.
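
A minimal sketch of the nearest-neighbor filtering step described above, assuming Euclidean distance over a generic static-feature matrix; k = 10 and the array shapes are illustrative assumptions, not the paper's exact setup:

import numpy as np

def nn_filter(cc: np.ndarray, local: np.ndarray, k: int = 10) -> np.ndarray:
    """Indices of cross-company rows kept as neighbors of any local row."""
    selected = set()
    for row in local:
        dists = np.linalg.norm(cc - row, axis=1)  # distance to every CC module
        selected.update(np.argsort(dists)[:k].tolist())
    return np.array(sorted(selected))

rng = np.random.default_rng(0)
cc = rng.random((500, 20))    # 500 cross-company modules, 20 static code features
local = rng.random((30, 20))  # 30 local modules; no local defect labels needed
kept = nn_filter(cc, local)
print(f"kept {kept.size} of {cc.shape[0]} cross-company modules for training")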

625 citations