Matt Staats

Researcher at Google

Publications: 34
Citations: 1729

Matt Staats is an academic researcher at Google whose work focuses on software testing, in particular test cases and test suites. He has an h-index of 19 and has co-authored 34 publications receiving 1632 citations. His previous affiliations include the University of Minnesota and the University of Luxembourg.

Papers
Book Chapter

Requirements Coverage as an Adequacy Measure for Conformance Testing

TL;DR: Existing adequacy measures for conformance testing that only consider model coverage can be strengthened by combining them with rigorous requirements coverage metrics.
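
The idea lends itself to a small sketch. Below, a requirement is modeled as a predicate over an execution trace, and a suite's requirements coverage is the fraction of requirements exercised by at least one trace. This is an illustrative simplification, not the paper's actual metric; all names here are hypothetical.

    import java.util.List;
    import java.util.function.Predicate;

    public class RequirementsCoverage {
        // Fraction of requirements exercised by at least one trace in the suite.
        static double coverage(List<Predicate<List<String>>> requirements,
                               List<List<String>> traces) {
            if (requirements.isEmpty()) {
                return 1.0; // vacuously adequate
            }
            long covered = requirements.stream()
                    .filter(req -> traces.stream().anyMatch(req::test))
                    .count();
            return (double) covered / requirements.size();
        }
    }
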
Proceedings Article

Parallel symbolic execution for structural test generation

TL;DR: This work proposes Simple Static Partitioning, a technique for parallelizing symbolic execution that uses a set of pre-conditions to partition the symbolic execution tree, making it possible to distribute the exploration across workers and reduce the time needed to explore the tree.
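
The distribution idea can be sketched briefly. In the hypothetical sketch below, preconditions are plain strings standing in for path constraints, and each worker explores only the subtree whose inputs satisfy its assigned precondition; the symbolicExecutor call is a placeholder, not a real API.

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class PartitionedExploration {
        static void explore(List<String> preconditions) throws InterruptedException {
            // One worker per partition of the symbolic execution tree.
            ExecutorService pool = Executors.newFixedThreadPool(preconditions.size());
            for (String pre : preconditions) {
                pool.submit(() -> {
                    // Placeholder: seed the executor's initial path condition
                    // with `pre`, pruning all paths outside this partition.
                    // symbolicExecutor.run(pre);
                    System.out.println("exploring partition: " + pre);
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }

        public static void main(String[] args) throws InterruptedException {
            explore(List.of("x < 0", "x >= 0 && y < 0", "x >= 0 && y >= 0"));
        }
    }
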
Proceedings Article

Programs, tests, and oracles: the foundations of testing revisited

TL;DR: This work extends Gourlay's functional description of testing with the notion of a test oracle, an aspect of testing largely overlooked in previous foundational work and only lightly explored in general.
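
The separation is easy to state in code. In this minimal sketch (names illustrative, not the paper's notation), a program is a function from inputs to outputs and an oracle is an independent predicate over (input, output) pairs; a test passes exactly when the oracle accepts the observed behavior.

    import java.util.function.BiPredicate;
    import java.util.function.Function;

    public class OracleSketch {
        // A test passes when the oracle accepts the (input, observed output) pair.
        static <I, O> boolean testPasses(Function<I, O> program,
                                         BiPredicate<I, O> oracle,
                                         I input) {
            return oracle.test(input, program.apply(input));
        }

        public static void main(String[] args) {
            Function<Integer, Integer> abs = Math::abs;
            // Oracle for abs: non-negative and equal in magnitude to the input.
            BiPredicate<Integer, Integer> oracle =
                    (in, out) -> out >= 0 && (out.intValue() == in || out.intValue() == -in);
            System.out.println(testPasses(abs, oracle, -5)); // true
        }
    }
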
Journal Article

The Risks of Coverage-Directed Test Case Generation

TL;DR: This work evaluates the effectiveness of test suites generated to satisfy four coverage criteria through counterexample-based test generation and through a random generation approach (where tests are randomly generated until coverage is achieved), contrasting both against purely random test suites of equal size.
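
The random baseline is simple enough to sketch. In the sketch below (an illustrative reading, not the paper's exact procedure), coverage obligations are abstracted as string ids and random tests are drawn until every obligation is met, retaining only tests that add new coverage.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Set;
    import java.util.function.Function;

    public class RandomUntilCoverage {
        // Draw random tests until all obligations are covered (or the source
        // of tests is exhausted); keep only tests that cover something new.
        static <T> List<T> generate(Set<String> obligations,
                                    Iterator<T> randomTests,
                                    Function<T, Set<String>> covers) {
            List<T> suite = new ArrayList<>();
            Set<String> remaining = new HashSet<>(obligations);
            while (!remaining.isEmpty() && randomTests.hasNext()) {
                T test = randomTests.next();
                if (remaining.removeAll(covers.apply(test))) {
                    suite.add(test);
                }
            }
            return suite;
        }
    }
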
Proceedings Article

Does automated white-box test generation really help software testers?

TL;DR: A controlled experiment with 49 subjects, split between writing tests manually and writing tests with the aid of the automated unit test generation tool EvoSuite, found that tool support leads to clear improvements in commonly applied quality metrics such as code coverage; however, there was no measurable improvement in the number of bugs actually found by developers.
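
The coverage-versus-bugs gap is visible even in miniature. The JUnit 4 test below is illustrative of the regression-style tests that generation tools tend to emit, not actual EvoSuite output: it exercises the code under test, which is good for coverage, while asserting only what the implementation currently does, which is one reason higher coverage need not translate into more bugs found.

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayDeque;
    import java.util.Deque;

    import org.junit.Test;

    public class GeneratedStyleTest {
        // High coverage, weak oracle: the assertion pins down current
        // behavior but would miss many plausible faults.
        @Test
        public void pushThenPopReturnsPushedValue() {
            Deque<Integer> stack = new ArrayDeque<>();
            stack.push(42);
            assertEquals(Integer.valueOf(42), stack.pop());
        }
    }
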