Author

Steven P. Miller

Bio: Steven P. Miller is an academic researcher from Rockwell International. The author has contributed to research in the topics of formal verification and formal specification. The author has an h-index of 2 and has co-authored 2 publications receiving 527 citations.

Papers
Journal ArticleDOI
TL;DR: The paper describes the modified condition/decision coverage criterion, its properties and areas for further work.
Abstract: Modified condition/decision coverage is a structural coverage criterion requiring that each condition within a decision is shown by execution to independently and correctly affect the outcome of the decision. This criterion was developed to help meet the need for extensive testing of complex Boolean expressions in safety-critical applications. The paper describes the modified condition/decision coverage criterion, its properties and areas for further work.

521 citations
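
To make the criterion concrete, here is a minimal Python sketch (the decision a and (b or c) is a hypothetical example, not one taken from the paper) that enumerates, for each condition, the test-case pairs showing that the condition independently affects the decision's outcome:

from itertools import product

# Hypothetical decision under test; not an example from the paper.
def decision(a, b, c):
    return a and (b or c)

def independence_pairs(cond_index):
    """Pairs of test cases that differ only in condition `cond_index`
    and produce different decision outcomes: the pairs MC/DC requires
    for that condition."""
    pairs = []
    for case in product([False, True], repeat=3):
        flipped = list(case)
        flipped[cond_index] = not flipped[cond_index]
        flipped = tuple(flipped)
        # Keep each unordered pair only once.
        if case < flipped and decision(*case) != decision(*flipped):
            pairs.append((case, flipped))
    return pairs

for i, name in enumerate("abc"):
    print(f"condition {name}: {independence_pairs(i)}")

An MC/DC-adequate suite needs at least one such pair per condition, so this three-condition decision can be covered with as few as four test cases rather than all eight input combinations.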

Journal ArticleDOI
01 Mar 1996
TL;DR: This paper discusses a project undertaken to answer questions regarding the formal verification of the microcode in the AAMP5 microprocessor: formally specifying a Rockwell proprietary microprocessor in the PVS language at both the instruction-set and register-transfer levels, and using the PVS theorem prover to show that the microcode correctly implemented the instruction-level specification for a representative subset of instructions.
Abstract: Formal specification combined with mechanical verification is a promising approach for achieving the extremely high levels of assurance required of safety-critical digital systems. However, many questions remain regarding their use in practice: Can these techniques scale up to industrial systems, where are they likely to be useful, and how should industry go about incorporating them into practice? This paper discusses a project undertaken to answer some of these questions, the formal verification of the microcode in the AAMP5 microprocessor. This project consisted of formally specifying in the PVS language a Rockwell proprietary microprocessor at both the instruction-set and register-transfer levels and using the PVS theorem prover to show the microcode correctly implemented the instruction-level specification for a representative subset of instructions. Notable aspects of this project include the use of a formal specification language by practicing hardware and software engineers, the integration of traditional inspections with formal specifications, and the use of a mechanical theorem prover to verify a portion of a commercial, pipelined microprocessor that was not explicitly designed for formal verification.

33 citations
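
The AAMP5 proof itself was carried out symbolically in PVS; purely as a loose analogy (the toy instruction, register names, and 4-bit state below are invented, not the AAMP5's), the following Python sketch checks that a micro-operation sequence implements an instruction-set-level specification by exhaustive enumeration:

from itertools import product

WIDTH = 4
MASK = (1 << WIDTH) - 1

# Instruction-set-level specification of a toy ADD instruction.
def isa_add(state):
    return {**state, "a": (state["a"] + state["b"]) & MASK}

# Register-transfer-level "microcode" for the same instruction:
# a sequence of micro-operations through a temporary register.
def rtl_add(state):
    s = dict(state)
    s["tmp"] = s["b"]                    # micro-op 1: move operand to tmp
    s["a"] = (s["a"] + s["tmp"]) & MASK  # micro-op 2: ALU add into a
    return s

# Refinement check on the architecturally visible registers.
for a, b in product(range(1 << WIDTH), repeat=2):
    state = {"a": a, "b": b, "tmp": 0}
    spec, impl = isa_add(state), rtl_add(state)
    assert (spec["a"], spec["b"]) == (impl["a"], impl["b"])
print("toy refinement check passed")

A theorem prover such as PVS establishes the corresponding obligation symbolically, over all states and for pipelined implementations where enumeration is infeasible.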


Cited by
Journal ArticleDOI
TL;DR: The paper surveys test suite minimization, test case selection and test case prioritization techniques, and discusses open problems and potential directions for future research.
Abstract: Regression testing is a testing activity that is performed to provide confidence that changes do not harm the existing behaviour of the software. Test suites tend to grow in size as software evolves, often making it too costly to execute entire test suites. A number of different approaches have been studied to maximize the value of the accrued test suite: minimization, selection and prioritization. Test suite minimization seeks to eliminate redundant test cases in order to reduce the number of tests to run. Test case selection seeks to identify the test cases that are relevant to some set of recent changes. Test case prioritization seeks to order test cases in such a way that early fault detection is maximized. This paper surveys the techniques in each of these three areas and discusses open problems and potential directions for future research.

1,276 citations
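
As a rough illustration of two of the surveyed ideas (the test suite, coverage sets, and heuristics below are hypothetical and highly simplified), here is a Python sketch of coverage-based minimization and "additional"-coverage prioritization:

# Hypothetical mapping from each test to the coverage elements
# (e.g. branches or requirements) it exercises.
suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3"},
    "t3": {"r1", "r2", "r3"},
    "t4": {"r4"},
}

def greedy_minimize(suite):
    """Greedy set-cover heuristic for test suite minimization:
    repeatedly keep the test covering the most still-uncovered
    elements, dropping the redundant remainder."""
    uncovered = set().union(*suite.values())
    kept = []
    while uncovered:
        best = max(suite, key=lambda t: len(suite[t] & uncovered))
        kept.append(best)
        uncovered -= suite[best]
    return kept

def additional_prioritize(suite):
    """'Additional' coverage prioritization: order tests by how much
    new coverage each adds, resetting once nothing new remains."""
    remaining, covered, order = dict(suite), set(), []
    while remaining:
        gains = {t: len(remaining[t] - covered) for t in remaining}
        if max(gains.values()) == 0:
            covered = set()  # reset, then rank by raw coverage
            gains = {t: len(remaining[t]) for t in remaining}
        best = max(gains, key=gains.get)
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(greedy_minimize(suite))        # ['t3', 't4']
print(additional_prioritize(suite))  # ['t3', 't4', 't1', 't2']

Both are simple greedy heuristics of the kind the survey discusses alongside many alternatives.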

Book
01 Jan 2009
TL;DR: The authors define testing as the process of applying a few well-defined, general-purpose test criteria to a structure or model of the software, and present an innovative approach to explaining the testing process.
Abstract: Extensively class tested, this text takes an innovative approach to explaining the process of software testing: it defines testing as the process of applying a few well-defined, general-purpose test criteria to a structure or model of the software. The structure of the text directly reflects the pedagogical approach and incorporates the latest innovations in testing, including techniques to test modern types of software such as OO, web applications, and embedded software.

1,126 citations

MonographDOI
01 Jan 2008
TL;DR: The structure of the text directly reflects the pedagogical approach and incorporates the latest innovations in testing, including techniques to test modern types of software such as OO, web applications and embedded software.
Abstract: Extensively class tested, this text takes an innovative approach to explaining the process of software testing: it defines testing as the process of applying a few well-defined, general-purpose test criteria to a structure or model of the software. The structure of the text directly reflects the pedagogical approach and incorporates the latest innovations in testing, including techniques to test modern types of software such as OO, web applications, and embedded software.

1,079 citations

Book ChapterDOI
27 Oct 2008
TL;DR: Existing adequacy measures for conformance testing that only consider model coverage can be strengthened by combining them with rigorous requirements coverage metrics.
Abstract: Conformance testing in model-based development refers to the testing activity that verifies whether the code generated (manually or automatically) from the model is behaviorally equivalent to the model. Presently, the adequacy of conformance testing is inferred by measuring structural coverage achieved over the model. We hypothesize that adequacy metrics for conformance testing should consider structural coverage over the requirements, either in place of or in addition to structural coverage over the model. Measuring structural coverage over the requirements gives a notion of how well the conformance tests exercise the required behavior of the system. We conducted an experiment to investigate the hypothesis that structural coverage over formal requirements is more effective than structural coverage over the model as an adequacy measure for conformance testing. We found that the hypothesis was rejected at 5% statistical significance on three of the four case examples in our experiment. Nevertheless, we found that the tests providing requirements coverage found several faults that remained undetected by tests providing model coverage. We thus formed a second hypothesis, that complementing model coverage with requirements coverage will prove more effective as an adequacy measure than solely using model coverage for conformance testing. In our experiment, we found test suites providing both requirements coverage and model coverage to be more effective at finding faults than test suites providing model coverage alone, at 5% statistical significance. Based on our results, we believe existing adequacy measures for conformance testing that only consider model coverage can be strengthened by combining them with rigorous requirements coverage metrics.

631 citations
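
Schematically, the combined adequacy measure the authors argue for might be expressed as follows (a hedged sketch; the obligation and test names are invented, and the paper defines its coverage obligations over formal requirements and executable models rather than these toy sets):

def combined_adequacy(tests, req_obligations, model_obligations):
    """Judge a conformance test suite adequate only if it covers every
    requirements-level obligation AND every model-level obligation,
    rather than judging by model coverage alone."""
    req_covered = set().union(*(t["reqs"] for t in tests))
    model_covered = set().union(*(t["model"] for t in tests))
    return req_obligations <= req_covered and model_obligations <= model_covered

# Hypothetical coverage records for two conformance tests.
tests = [
    {"reqs": {"R1"}, "model": {"s1", "s2"}},
    {"reqs": {"R2"}, "model": {"s2", "s3"}},
]
print(combined_adequacy(tests, {"R1", "R2"}, {"s1", "s2", "s3"}))  # True
print(combined_adequacy(tests, {"R1", "R2", "R3"}, {"s1"}))        # False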

Journal ArticleDOI
TL;DR: The implementation of the GA-based system is described, and the effectiveness of this approach is examined on a number of programs, one of which is significantly larger than those for which results have previously been reported in the literature.
Abstract: This paper discusses the use of genetic algorithms (GAs) for automatic software test data generation. This research extends previous work on dynamic test data generation where the problem of test data generation is reduced to one of minimizing a function. In our work, the function is minimized by using one of two genetic algorithms in place of the local minimization techniques used in earlier research. We describe the implementation of our GA-based system and examine the effectiveness of this approach on a number of programs, one of which is significantly larger than those for which results have previously been reported in the literature. We also examine the effect of program complexity on the test data generation problem by executing our system on a number of synthetic programs that have varying complexities.

498 citations
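
To illustrate the reduction the paper describes (the program under test, the target branch predicate, and all GA parameters below are hypothetical), here is a minimal Python sketch that searches for test data reaching a branch by minimizing a branch-distance function with a simple genetic algorithm:

import random

def branch_distance(x, y):
    # Distance to taking the hypothetical branch `x * x + y == 1000`;
    # zero means the branch is taken.
    return abs(x * x + y - 1000)

def genetic_search(pop_size=50, generations=200, bounds=(-100, 100)):
    rand = lambda: random.randint(*bounds)
    pop = [(rand(), rand()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: branch_distance(*ind))
        if branch_distance(*pop[0]) == 0:
            return pop[0]                 # branch reached: test datum found
        survivors = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            (x1, y1), (x2, y2) = random.sample(survivors, 2)
            child = (x1, y2)              # one-point crossover
            if random.random() < 0.3:     # mutation: small random nudge
                child = (child[0] + random.randint(-5, 5),
                         child[1] + random.randint(-5, 5))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: branch_distance(*ind))  # best found

print(genetic_search())

The branch-distance function plays the role of the minimized objective from the earlier dynamic test data generation work; the GA takes the place of the local minimization techniques used in that research.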