
Showing papers on "Test harness" published in 2017


Proceedings ArticleDOI
20 May 2017
TL;DR: Demonstrates challenges that need to be addressed in order to improve fault detection in test generation tools, as well as requirements for successful technology transfer, such as the need to integrate with popular build tools and to improve the readability of the generated tests.
Abstract: Automated unit test generation has been extensively studied in the literature in recent years. Previous studies on open source systems have shown that test generation tools are quite effective at detecting faults, but how effective and applicable are they in an industrial application? In this paper, we investigate this question using a life insurance and pension products calculator engine owned by SEB Life & Pension Holding AB Riga Branch. To study fault-finding effectiveness, we extracted 25 real faults from the version history of this software project, and applied two up-to-date unit test generation tools for Java, EVOSUITE and RANDOOP, which implement search-based and feedback-directed random test generation, respectively. Automatically generated test suites detected up to 56.40% (EVOSUITE) and 38.00% (RANDOOP) of these faults. The analysis of our results demonstrates challenges that need to be addressed in order to improve fault detection in test generation tools. In particular, classification of the undetected faults shows that 97.62% of them depend on either "specific primitive values" (50.00%) or the construction of "complex state configuration of objects" (47.62%). To study applicability, we surveyed the developers of the application under test on their experience and opinions about the test generation tools and the generated test cases. This leads to insights on requirements for academic prototypes for successful technology transfer from academic research to industrial practice, such as a need to integrate with popular build tools, and to improve the readability of the generated tests.

122 citations


Proceedings ArticleDOI
20 May 2017
TL;DR: Proposes the idea of learning to test, which learns the characteristics of bug-revealing test programs from previous test programs that triggered bugs, and LET, an approach to prioritizing test programs to accelerate compiler testing.
Abstract: Compiler testing is a crucial way of guaranteeing the reliability of compilers (and software systems in general). Many techniques have been proposed to facilitate automated compiler testing. These techniques rely on a large number of test programs (which are test inputs of compilers) generated by some test-generation tools (e.g., CSmith). However, these compiler testing techniques have serious efficiency problems as they usually take a long period of time to find compiler bugs. To accelerate compiler testing, it is desirable to prioritize the generated test programs so that the test programs that are more likely to trigger compiler bugs are executed earlier. In this paper, we propose the idea of learning to test, which learns the characteristics of bug-revealing test programs from previous test programs that triggered bugs. Based on the idea of learning to test, we propose LET, an approach to prioritizing test programs for compiler testing acceleration. LET consists of a learning process and a scheduling process. In the learning process, LET identifies a set of features of test programs, trains a capability model to predict the probability of a new test program for triggering compiler bugs and a time model to predict the execution time of a test program. In the scheduling process, LET prioritizes new test programs according to their bug-revealing probabilities in unit time, which is calculated based on the two trained models. Our extensive experiments show that LET significantly accelerates compiler testing. In particular, LET reduces more than 50% of the testing time in 24.64% of the cases, and reduces between 25% and 50% of the testing time in 36.23% of the cases.

68 citations
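The core of LET's scheduling is easy to sketch: score each new test program by its predicted bug-revealing probability divided by its predicted execution time, and run the highest-scoring programs first. Below is a minimal Python illustration of that idea using scikit-learn; the feature columns and training data are placeholders, not the features LET actually extracts.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Toy historical data: one row per previously executed test program.
# Columns are placeholder features (e.g., program size, loop depth, pointer use).
X_hist = np.array([[120, 3, 5], [40, 1, 0], [300, 6, 9], [80, 2, 1]])
bug_revealing = np.array([1, 0, 1, 0])      # did the program trigger a compiler bug?
exec_time = np.array([2.5, 0.4, 7.0, 1.1])  # observed execution time in seconds

capability_model = LogisticRegression().fit(X_hist, bug_revealing)
time_model = LinearRegression().fit(X_hist, exec_time)

def prioritize(new_programs):
    """Return indices of new test programs, most promising first."""
    p_bug = capability_model.predict_proba(new_programs)[:, 1]
    t_pred = np.clip(time_model.predict(new_programs), 1e-3, None)
    return list(np.argsort(-(p_bug / t_pred)))  # bug probability per unit time

X_new = np.array([[200, 4, 6], [50, 1, 1]])
print(prioritize(X_new))  # order in which to execute the new programs
```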


Journal ArticleDOI
TL;DR: To work more efficiently and effectively, test engineers must be aware of various automated-testing strategies and tools that assist test activities other than test execution.
Abstract: To work more efficiently and effectively, test engineers must be aware of various automated-testing strategies and tools that assist test activities other than test execution. However, automation doesn't come for free, so it must be carefully implemented.

56 citations


Journal ArticleDOI
TL;DR: This work designs a test system for testing CPSs, analyzes the variability needed to test different configurations, and proposes a methodology supported by a tool named ASTERYSCO that automatically generates simulation-based test system instances to test individual configurations of CPSs.
Abstract: Cyber-physical systems (CPSs) are ubiquitous systems that integrate digital technologies with physical processes. These systems are becoming configurable to respond to the different needs that users demand. As a consequence, their variability is increasing, and they can be configured in many system variants. To ensure a systematic test execution of CPSs, a test system must be elaborated encapsulating several sources such as test cases or test oracles. Manually building a test system for each configuration is a non-systematic, time-consuming, and error-prone process. To overcome these problems, we designed a test system for testing CPSs and we analyzed the variability that it needed to test different configurations. Based on this analysis, we propose a methodology supported by a tool named ASTERYSCO that automatically generates simulation-based test system instances to test individual configurations of CPSs. To evaluate the proposed methodology, we selected different configurations of a configurable Unmanned Aerial Vehicle, and measured the time required to generate their test systems. On average, around 119 s were needed by our tool to generate the test system for 38 configurations. In addition, we compared the process of generating test system instances between the method we propose and a manual approach. Based on this comparison, we believe that the proposed tool provides a systematic method for generating test system instances, and that our approach represents an important step toward the full automation of testing in the field of configurable CPSs.

35 citations


Proceedings ArticleDOI
20 May 2017
TL;DR: This paper proposes a greedy algorithm to reduce the number of test executions by suggesting test movements while considering historical build information and actual dependencies of tests, which can lead to a reduction of 21.66 million test executions across all subject projects.
Abstract: Modern build systems help increase developer productivity by performing incremental building and testing. These build systems view a software project as a group of interdependent modules and perform regression test selection at the module level. However, many large software projects have imprecise dependency graphs that lead to wasteful test executions. If a test belongs to a module that has more dependencies than the actual dependencies of the test, then it is executed unnecessarily whenever a code change impacts those additional dependencies. In this paper, we formulate the problem of wasteful test executions due to suboptimal placement of tests in modules. We propose a greedy algorithm to reduce the number of test executions by suggesting test movements while considering historical build information and actual dependencies of tests. We have implemented our technique, called TestOptimizer, on top of CloudBuild, the build system developed within Microsoft over the last few years. We have evaluated the technique on five large proprietary projects. Our results show that the suggested test movements can lead to a reduction of 21.66 million test executions (17.09%) across all our subject projects. We received encouraging feedback from the developers of these projects; they accepted and intend to implement about 80% of our reported suggestions.

32 citations
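A minimal sketch of the underlying idea: for each test, find a module whose dependencies cover the test's actual dependencies with the least waste, and order the suggested moves by estimated saved executions. All names, the change-frequency data, and the saving model below are illustrative, not TestOptimizer's exact formulation.

```python
# Historical change frequency of each dependency (how often builds touch it).
change_freq = {"core": 50, "ui": 30, "net": 20, "db": 10}

modules = {   # module -> dependencies whose changes trigger its tests
    "modA": {"core", "ui", "net"},
    "modB": {"core"},
    "modC": {"core", "db"},
}
tests = {     # test -> (current module, the test's actual dependencies)
    "t1": ("modA", {"core"}),
    "t2": ("modA", {"core", "net"}),
    "t3": ("modC", {"db"}),
}

def executions(dep_set):
    return sum(change_freq[d] for d in dep_set)

moves = []
for test, (cur, deps) in tests.items():
    # Candidate targets: modules whose dependencies cover the test's needs.
    candidates = [m for m, mdeps in modules.items() if deps <= mdeps]
    best = min(candidates, key=lambda m: executions(modules[m] - deps))
    saving = executions(modules[cur] - deps) - executions(modules[best] - deps)
    if best != cur and saving > 0:
        moves.append((saving, test, cur, best))

for saving, test, cur, best in sorted(moves, reverse=True):  # biggest saving first
    print(f"move {test}: {cur} -> {best} (saves ~{saving} wasteful executions)")
```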


Book ChapterDOI
15 Oct 2017
TL;DR: This work builds a test harness for driving an arbitrary AV’s code in a simulated world, using the game Grand Theft Auto V as world simulator for AV testing, and proposes and demonstrates necessary analyses to validate the simulation results relative to the real world.
Abstract: The testing of Autonomous Vehicles (AVs) requires driving the AV billions of miles under varied scenarios in order to find bugs, accidents and otherwise inappropriate behavior. Because driving a real AV that many miles is too slow and costly, this motivates the use of sophisticated ‘world simulators’, which present the AV’s perception pipeline with realistic input scenes, and present the AV’s control stack with realistic traffic and physics to which to react. Thus the simulator is a crucial piece of any CAD toolchain for AV testing. In this work, we build a test harness for driving an arbitrary AV’s code in a simulated world. We demonstrate this harness by using the game Grand Theft Auto V (GTA) as world simulator for AV testing. Namely, our AV code, for both perception and control, interacts in real-time with the game engine to drive our AV in the GTA world, and we search for weather conditions and AV operating conditions that lead to dangerous situations. This goes beyond the current state-of-the-art where AVs are tested under ideal weather conditions, and lays the groundwork for a more comprehensive testing effort. We also propose and demonstrate necessary analyses to validate the simulation results relative to the real world. The results of such analyses allow the designers and verification engineers to weigh the results of simulation-based testing.

31 citations
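Conceptually, the harness's search for dangerous situations is a loop over simulated conditions. The sketch below illustrates this with a stand-in `simulate` function; the real system drives the AV's perception and control code against the Grand Theft Auto V engine rather than a toy model.

```python
import itertools
import random

def simulate(weather, hour, speed_kmh):
    """Stand-in world simulator: returns minimum distance (m) to any obstacle."""
    random.seed(hash((weather, hour, speed_kmh)))  # deterministic toy model
    visibility = {"clear": 8.0, "rain": 5.0, "fog": 3.0}[weather]
    return max(0.0, visibility - speed_kmh / 40 + random.uniform(-1, 1))

DANGER_THRESHOLD_M = 2.0
dangerous = []
# Sweep weather and AV operating conditions, recording risky combinations.
for weather, hour, speed in itertools.product(
        ["clear", "rain", "fog"], [6, 12, 22], [30.0, 60.0, 90.0]):
    if simulate(weather, hour, speed) < DANGER_THRESHOLD_M:
        dangerous.append((weather, hour, speed))

print("dangerous scenarios found:", dangerous)
```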


Proceedings ArticleDOI
01 Mar 2017
TL;DR: It is shown that mutation analysis can be a useful tool, uncovering gaps in even well-tested modules like RCU, and argued that mutation testing can and should be more extensively used in practice.
Abstract: Mutation analysis is an established technique for measuring the completeness and quality of a test suite. Despite four decades of research on this technique, its use in large systems is still rare, in part due to computational requirements and high numbers of false positives. We present our experiences using mutation analysis on the Linux kernel's RCU (Read Copy Update) module, where we adapt existing techniques to constrain the complexity and computation requirements. We show that mutation analysis can be a useful tool, uncovering gaps in even well-tested modules like RCU. This experiment has so far led to the identification of 3 gaps in the RCU test harness, and 2 bugs in the RCU module masked by those gaps. We argue that mutation testing can and should be more extensively used in practice.

27 citations
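The core mutation-analysis loop is straightforward to sketch: apply an operator, rerun the tests, and record mutants the suite fails to kill. The toy example below mutates a small Python function; note that the surviving mutant is semantically equivalent to the original, illustrating the false positives the authors mention.

```python
# Unit under test, kept as source so operators can be applied textually.
SRC = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))\n"

# Simple mutation operators: replace the first occurrence of a token.
OPERATORS = [("max", "min"), ("min", "max"), ("min(x, hi)", "min(hi, x)")]

def run_tests(ns):
    clamp = ns["clamp"]
    return (clamp(5, 0, 10) == 5 and clamp(-1, 0, 10) == 0
            and clamp(99, 0, 10) == 10)

killed, survived = 0, []
for old, new in OPERATORS:
    mutant_src = SRC.replace(old, new, 1)
    ns = {}
    exec(mutant_src, ns)             # compile and load the mutant
    if run_tests(ns):
        survived.append((old, new))  # gap: suite cannot distinguish the mutant
    else:
        killed += 1

# The survivor swaps min's arguments -- an equivalent mutant, i.e. one of
# the false positives that make large-scale mutation analysis hard.
print(f"killed={killed}, surviving mutants={survived}")
```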


Journal ArticleDOI
TL;DR: A light-weight software-implemented fault injection (SWIFI) testing approach is introduced, focusing on technical process faults and system faults, and the execution of test cases against both simulation and the real aPS is enabled.

22 citations


Journal ArticleDOI
TL;DR: The algorithms and techniques adopted for addressing the input and oracle generation, dynamic scheduling, and session planning issues that support service functional test automation are illustrated; a planned evolution of the technology addresses the testing and troubleshooting of distributed systems that integrate connected objects.
Abstract: This paper presents the approach to functional test automation of services (black-box testing) and service architectures (grey-box testing) that has been developed within the MIDAS project and is accessible on the MIDAS SaaS. In particular, the algorithms and techniques adopted for addressing input and oracle generation, dynamic scheduling, and session planning issues supporting service functional test automation are illustrated. More specifically, the paper details: (i) the test input generation based on formal methods and temporal logic specifications, (ii) the test oracle generation based on service formal specifications, (iii) the dynamic scheduling of test cases based on probabilistic graphical reasoning, and (iv) the reactive, evidence-based planning of test sessions with on-the-fly generation of new test cases. Finally, the utilisation of the MIDAS prototype for the functional test of operational services and service architectures in the healthcare industry is reported and assessed. A planned evolution of the technology deals with the testing and troubleshooting of distributed systems that integrate connected objects.

20 citations


Proceedings ArticleDOI
10 Jul 2017
TL;DR: Presents Cloud Unit Testing (CUT), a tool for automatically executing unit tests in distributed execution environments; given a set of unit tests, CUT allocates appropriate computational resources and schedules the execution of tests over them.
Abstract: Unit tests can be significantly sped up by running them in parallel over distributed execution environments, such as the cloud. However, manually setting up such environments and configuring the testing frameworks to effectively use them is cumbersome and requires specialized expertise that developers might lack. We present Cloud Unit Testing (CUT), a tool for automatically executing unit tests in distributed execution environments. Given a set of unit tests, CUT allocates appropriate computational resources, i.e., virtual machines or containers, and schedules the execution of tests over them. Developers do not need to change existing unit test code, and can easily control relevant aspects of test execution, including resource allocation and test scheduling. Additionally, during the execution CUT monitors and publishes events about the running tests which enables stream analytics. CUT and videos showcasing its main features are freely available at: https://www.st.cs.uni-saarland.de/testing/cut/

18 citations
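A rough sketch of the scheduling part of this idea: partition tests across workers by predicted duration and execute the partitions in parallel. Thread workers below stand in for CUT's virtual machines or containers, and the durations and packing heuristic are illustrative.

```python
import time
from concurrent.futures import ThreadPoolExecutor

tests = {"t1": 0.3, "t2": 0.1, "t3": 0.4, "t4": 0.2}  # predicted durations (s)
N_WORKERS = 2

# Greedy longest-first packing to balance load across workers.
bins, loads = [[] for _ in range(N_WORKERS)], [0.0] * N_WORKERS
for name, dur in sorted(tests.items(), key=lambda kv: -kv[1]):
    i = loads.index(min(loads))
    bins[i].append(name)
    loads[i] += dur

def run_partition(names):
    for n in names:
        time.sleep(tests[n])   # stand-in for actually executing the test
    return names

with ThreadPoolExecutor(max_workers=N_WORKERS) as ex:
    for done in ex.map(run_partition, bins):
        print("finished partition:", done)
```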


Proceedings ArticleDOI
04 Sep 2017
TL;DR: This paper proposes to combine existing automated regression tests with random test generation to enhance existing GUI test cases with additional, randomly generated interactions, and conducts an experiment using a mature, widely-used open source application.
Abstract: Many software projects maintain automated GUI tests that are repeatedly executed for regression testing. Every test run executes exactly the same fixed sequence of steps confirming that the currently tested version shows precisely the same behavior as the last version. The confirmatory approach implemented by these tests limits their ability to find new defects. We therefore propose to combine existing automated regression tests with random test generation. Random test generation creates a rich variety of test steps that interact with the system under test in new, unexpected ways. Enhancing existing test cases with random test steps allows revealing new, hidden defects with little extra effort. In this paper we describe our implementation of a hybrid approach that enhances existing GUI test cases with additional, randomly generated interactions. We conducted an experiment using a mature, widely-used open source application. On average the added random interactions increased the number of visited application windows per test by 23.6% and code coverage by 12.9%. Running the enhanced tests revealed three new defects.
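A minimal sketch of the hybrid approach: replay each scripted step, then inject a few randomly chosen interactions before continuing. The `execute` function and action names are hypothetical stand-ins for a real GUI driver.

```python
import random

scripted_steps = ["open_file_dialog", "select_file", "click_ok", "save"]
random_actions = ["click_random_widget", "type_random_text", "open_menu"]

def execute(action):
    print("exec:", action)   # stand-in for driving the real GUI

random.seed(1)
for step in scripted_steps:
    execute(step)                          # scripted, confirmatory part
    for _ in range(random.randint(0, 2)):  # exploratory detour
        execute(random.choice(random_actions))
    # a real harness would restore preconditions before the next scripted step
```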

Journal ArticleDOI
TL;DR: A family of algorithms able to automatically enhance an existing test program, reducing the time required to run it and, as a side effect, its size is described.
Abstract: The compaction of test programs for processor-based systems is of utmost practical importance: Software-Based Self-Test (SBST) is nowadays increasingly adopted, especially for in-field test of safety-critical applications, and both the size and the execution time of the test are critical parameters. However, while compacting the size of binary test sequences has been thoroughly studied over the years, the reduction of the execution time of test programs is still a rather unexplored area of research. This paper describes a family of algorithms able to automatically enhance an existing test program, reducing the time required to run it and, as a side effect, its size. The proposed solutions are based on instruction removal and restoration, which is shown to be computationally more efficient than instruction removal alone. Experimental results demonstrate the compaction capabilities, and allow analyzing computational costs and effectiveness of the different algorithms.
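The removal-and-restoration idea can be sketched in a few lines: tentatively drop each instruction and restore it only if the reduced program no longer achieves its goal. The instruction list and the `still_effective` predicate below are placeholders for a real SBST program and its fault-simulation check.

```python
program = ["li r1,5", "nop", "add r2,r1,r1", "nop", "store r2"]

def still_effective(prog):
    """Stand-in for re-running fault simulation on the reduced program."""
    needed = {"li r1,5", "add r2,r1,r1", "store r2"}
    return needed <= set(prog)

compacted = []
for i, instr in enumerate(program):
    trial = compacted + program[i + 1:]  # program with this instruction removed
    if still_effective(trial):
        continue                 # removal accepted: instruction stays dropped
    compacted.append(instr)      # restoration: the instruction is needed

print(compacted)  # ['li r1,5', 'add r2,r1,r1', 'store r2']
```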

Proceedings ArticleDOI
25 Jul 2017
TL;DR: The paper describes the transition from manual exploratory testing to automated GUI test generation and describes the successful application of test generation in a real-world industry project, and highlights several open issues to be addressed by future research on test automation.
Abstract: Test automation is essential in fast-paced agile development environments. The main goal is to speed up test execution cycles and to reduce the effort involved in running tests manually. We took test automation one step further and applied test generation to a GUI-based application developed in a large industry project. The paper describes the transition from manual exploratory testing to automated GUI test generation. Key lessons to be learned are: (1) the test automation pyramid proposed for agile development tends to underestimate the need for high-level GUI testing, (2) automated test generation does not reduce test effort but shifts it to writing test adapters and checks, and (3) the effort for analyzing results produced by generated tests limits the practical application of automated test generation. The report describes the successful application of test generation in a real-world industry project, but it also highlights several open issues to be addressed by future research on test automation.

Journal ArticleDOI
TL;DR: A framework which automatically records selected tester actions in the system under test is presented; from these recordings, a model of the screen and action flows is reengineered and test cases are prepared. A case study shows that Exploratory Testing aided by this machine support is less resource-demanding than Exploratory Testing performed manually only.
Abstract: The Exploratory Testing technique is well suited to software development projects where a test basis is not available (or at least not complete and consistent enough to allow the creation of efficient test cases). The key factor for the efficiency of this technique is a structured process for recording the explored paths in the system under test. This approach also allows the creation of test cases during the exploratory testing process. These test cases can be used in subsequent re-testing of the system. If performed manually, the efficiency of such a process strongly depends on the team organization and the systematic work of the individuals in the team. This process can be aided by automated support. In the paper, a framework which automatically records selected tester actions in the system under test is presented. From these recordings, a model of the screen and action flows is reengineered and test cases are prepared. The tester is also able to define additional meta-data in the test cases during this process. The recorded model and defined test cases are then available for the next rounds of testing. The performed case study shows that Exploratory Testing aided by this machine support is less resource-demanding than Exploratory Testing performed manually only. Also, a larger part of the SUT was explored during the tests when this systematic support was available to testers.

Proceedings ArticleDOI
13 Mar 2017
TL;DR: O!Snap automatically maximizes reuse of existing virtual machines, and interleaves the creation of updated test images with the execution of tests to minimize overall test execution time and/or cost.
Abstract: Porting a testing environment to a cloud infrastructure is not straightforward. This paper presents O!Snap, an approach to generate test plans to cost-efficiently execute tests in the cloud. O!Snap automatically maximizes reuse of existing virtual machines, and interleaves the creation of updated test images with the execution of tests to minimize overall test execution time and/or cost. In an evaluation involving 2,600+ packages and 24,900+ test jobs of the Debian continuous integration environment, O!Snap reduces test setup time by up to 88% and test execution time by up to 43.3% without additional costs.

Journal ArticleDOI
TL;DR: Fujitsu researchers have developed a methodology to automate testing of industrial-strength embedded software implemented in C or C++ that generates unit-level tests, greatly reducing test generation time and cost while providing excellent test coverage.
Abstract: Fujitsu researchers have developed a methodology to automate testing of industrial-strength embedded software implemented in C or C++. The methodology’s core is a program analysis technique called symbolic execution, which the researchers have customized to automate testing. The methodology generates unit-level tests, greatly reducing test generation time and cost while providing excellent test coverage.
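The essence of test generation by symbolic execution is solving each path condition for concrete inputs. Below is a minimal sketch using the z3 solver (the `z3-solver` package) as a stand-in for Fujitsu's customized engine; the function under test and its two path conditions are illustrative.

```python
from z3 import Int, Solver, sat

x = Int("x")
# Function under test (conceptually):
#     def f(x): return 1 if x > 10 else 0
path_conditions = {"then-branch": x > 10, "else-branch": x <= 10}

for path, cond in path_conditions.items():
    s = Solver()
    s.add(cond)
    if s.check() == sat:   # path is feasible: extract a concrete test input
        print(f"{path}: generated test input x = {s.model()[x]}")
```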

Patent
08 Feb 2017
TL;DR: A software automated test method and system in which data of the software is stored in a database associated with the software, and a script file comprising functions for performing operations such as addition, deletion, modification, and query on the database is generated; the script file is loaded into a preset automated test framework, and the functions in the script file are packaged into user keywords, enabling the test framework to call the user keywords to directly execute the functions and perform the corresponding database operations during software test case writing.
Abstract: The invention provides a software automated test method and system. Data of software is stored in a database associated with the software, and a script file comprising functions for performing operations of addition, deletion, modification, query and the like on the database is generated; the script file is loaded to a preset automated test framework; and in the automated test framework, the functions in the script file are packaged into user keywords, thereby enabling the automated test framework to call the user keywords to directly execute the functions to perform corresponding operations of addition, deletion, modification, query and the like on the database during software test case writing. The method and the system are suitable for all relational databases; when a database server is changed, the business processing of the databases can be realized without new scripts; and therefore, the universality and the flexibility are high, and the burden of test personnel is greatly reduced.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A verification mechanism for checking conformance of IEC 61131-3 PLC software with a generalized test table, making use of a state-of-the-art model checker; the notation is inspired by widely-used paradigms found in spreadsheet applications.
Abstract: With recent trends in manufacturing automation, such as Industry 4.0, control software in automated production systems becomes more and more complex and volatile, complicating and increasing importance of quality assurance. Test tables are a widely used and generally accepted means to intuitively specify test cases for automation software. However, each table only specifies a single software trace, whereas the actual software behavior may cover multiple similar traces not covered by the table. Within this work, we present a generalization concept for test tables allowing for bounded and unbounded repetition of steps, “don't-care” values, as well as calculations with earlier observed values. We provide a verification mechanism for checking conformance of an IEC 61131-3 PLC software with a generalized test table, making use of a state-of-the-art model checker. Our notation is inspired by widely-used paradigms found in spreadsheet applications. By an empirical study with mechanical engineering students, we show that the notation matches user expectations. A real-world example extracted from an industrial automation plant illustrates our approach.
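To make the generalization concrete, the sketch below encodes a tiny test table with a don't-care value and bounded repetition and replays it against a single recorded trace. The paper's conformance check instead uses a model checker over all behaviors of the IEC 61131-3 program; this shows only the table semantics, in Python, under that simplification.

```python
DONT_CARE = "-"

# Each row: (expected inputs, expected outputs, min repetitions, max repetitions)
table = [
    ({"start": 1}, {"motor": 1}, 1, 1),
    ({"start": DONT_CARE}, {"motor": 1}, 1, 3),  # motor stays on for 1..3 steps
    ({"start": 0}, {"motor": 0}, 1, 1),
]

trace = [({"start": 1}, {"motor": 1}),
         ({"start": 1}, {"motor": 1}),
         ({"start": 0}, {"motor": 0})]

def matches(expected, actual):
    return all(v == DONT_CARE or actual[k] == v for k, v in expected.items())

def conforms(table, trace):
    i = 0
    for ins, outs, lo, hi in table:
        n = 0
        while (i < len(trace) and n < hi
               and matches(ins, trace[i][0]) and matches(outs, trace[i][1])):
            i, n = i + 1, n + 1
        if n < lo:
            return False        # row not matched often enough
    return i == len(trace)      # the whole trace must be consumed

print("trace conforms:", conforms(table, trace))
```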

Proceedings ArticleDOI
13 Mar 2017
TL;DR: A rule-based semi-automatic approach is proposed to derive the input space model elements from use case specifications and UML use case diagrams; the results are promising, and this approach will be of good use to the test designer.
Abstract: Combinatorial Testing is a test design methodology that aims to detect the interaction failures existing in the software under test. The combinatorial input space model comprises the parameters and the values they can take. Building this input space model is a domain-knowledge- and experience-intensive task. The objective of the paper is to assist the test designer in building this test model. A rule-based semi-automatic approach is proposed to derive the input space model elements from use case specifications and UML use case diagrams. A natural language processing based parser and an XMI based parser are implemented. The rules formulated are applied on synthetic case studies and the output model is evaluated using precision and recall metrics. The results are promising and this approach will be of good use to the test designer.
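Once the input space model is derived, test inputs are combinations of parameter values. Below is a minimal sketch with a placeholder model as it might be extracted from a use case; a real combinatorial tool would generate a covering array (e.g., pairwise) rather than the full product shown here.

```python
import itertools

# Parameters and values, e.g. extracted from "the user selects a payment
# method and a delivery option" -- placeholder model elements.
model = {
    "payment": ["card", "paypal", "invoice"],
    "delivery": ["standard", "express"],
    "gift_wrap": [True, False],
}

names = list(model)
for combo in itertools.product(*model.values()):
    print(dict(zip(names, combo)))   # one candidate test input per combination
```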

14 Aug 2017
TL;DR: In this paper, the authors use static template matching to find recurrences of fuzzer-discovered vulnerabilities and implement a simple yet effective match-ranking algorithm that uses test coverage data to focus attention on matches comprising untested code.
Abstract: Taint-style vulnerabilities comprise a majority of fuzzer-discovered program faults. These vulnerabilities usually manifest as memory access violations caused by tainted program input. Although fuzzers have helped uncover a majority of taint-style vulnerabilities in software to date, they are limited by (i) extent of test coverage; and (ii) the availability of fuzzable test cases. Therefore, fuzzing alone cannot provide a high assurance that all taint-style vulnerabilities have been uncovered. In this paper, we use static template matching to find recurrences of fuzzer-discovered vulnerabilities. To compensate for the inherent incompleteness of template matching, we implement a simple yet effective match-ranking algorithm that uses test coverage data to focus attention on matches comprising untested code. We prototype our approach using the Clang/LLVM compiler toolchain and use it in conjunction with afl-fuzz, a modern coverage-guided fuzzer. Using a case study carried out on the Open vSwitch codebase, we show that our prototype uncovers corner cases in modules that lack a fuzzable test harness. Our work demonstrates that static analysis can effectively complement fuzz testing, and is a useful addition to the security assessment tool-set. Furthermore, our techniques hold promise for increasing the effectiveness of program analysis and testing, and serve as a building block for a hybrid vulnerability discovery framework.
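The match-ranking step can be sketched independently of the template matching itself: order matches by how much of their code is untested. The files, line ranges, and coverage sets below are illustrative.

```python
matches = [                      # (file, matched line range)
    ("ofproto.c", range(100, 110)),
    ("parse.c", range(40, 48)),
    ("util.c", range(10, 14)),
]
covered = {                      # lines executed by the existing fuzz tests
    "ofproto.c": set(range(100, 106)),
    "parse.c": set(),
    "util.c": set(range(10, 14)),
}

def untested_fraction(match):
    fname, lines = match
    lines = set(lines)
    return len(lines - covered.get(fname, set())) / len(lines)

# Matches in entirely untested code rise to the top of the review queue.
for fname, lines in sorted(matches, key=untested_fraction, reverse=True):
    pct = untested_fraction((fname, lines))
    print(f"{fname}: {pct:.0%} of matched lines untested")
```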

Journal ArticleDOI
TL;DR: In this article, the authors propose a framework for requirement-driven test generation that combines contract-based interface theories with model-based testing, which is driven by a single requirement interface at a time.
Abstract: We propose a framework for requirement-driven test generation that combines contract-based interface theories with model-based testing. We design a specification language, requirement interfaces, for formalizing different views (aspects) of synchronous data-flow systems from informal requirements. Various views of a system, modeled as requirement interfaces, are naturally combined by conjunction. We develop an incremental test generation procedure with several advantages. The test generation is driven by a single requirement interface at a time. It follows that each test assesses a specific aspect or feature of the system, specified by its associated requirement interface. Since we do not explicitly compute the conjunction of all requirement interfaces of the system, we avoid state space explosion while generating tests. However, we incrementally complete a test for a specific feature with the constraints defined by other requirement interfaces. This allows catching violations of any other requirement during test execution, and not only of the one used to generate the test. This framework defines a natural association between informal requirements, their formal specifications, and the generated tests, thus facilitating traceability. Finally, we introduce a fault-based test-case generation technique, called model-based mutation testing, to requirement interfaces. It generates a test suite that covers a set of fault models, guaranteeing the detection of any corresponding faults in deterministic systems under test. We implemented a prototype test generation tool and demonstrate its applicability in two industrial use cases.

Patent
10 May 2017
TL;DR: A cloud computing-based software test system is described, which consists of a cloud computing platform, a test platform, and an interactive platform, where the cloud computing platform is used for constructing the cloud platform, creating a virtual machine cluster on each node, and simulating different test environments according to different test tasks.
Abstract: The invention discloses a cloud computing-based software test system, belongs to the technical field of tests, and solves the technical problem of how to combine a conventional software test system with cloud computing to form the cloud computing-based software test system, ensuring effective utilization of the dynamically extensible massive resources of a cloud platform, saving test time and reducing test costs. According to the adopted technical scheme, the system comprises a cloud computing platform, a test platform and an interactive platform, wherein the cloud computing platform is used for constructing the cloud platform, creating a virtual machine cluster on each node and simulating different test environments according to different test tasks; and the test platform is used for constructing a cloud test framework, constructing an automated test platform by utilizing an open-source automated test framework, or researching and developing new test tools or technologies according to demands.

Proceedings ArticleDOI
13 Mar 2017
TL;DR: A method to describe generic test scenarios by means of regular expressions whose symbols point to SUT operations; the expressions are annotated with a set of when clauses that are processed by the combinatorial algorithm to include the oracle in the generated test cases.
Abstract: A test case describes a specific execution scenario of the system under test (SUT). Its goal is to discover errors by means of its oracle, which emits a pass or fail verdict depending on the SUT behavior. The test case has a sequence of calls to the SUT's operations with specific test data, which may come from the application of a combinatorial algorithm. This paper presents a method to describe generic test scenarios by means of regular expressions, whose symbols each point to a SUT operation. The tester assigns values to each operation's parameters. A further step expands the regular expression and produces a set of operation sequences, which are then passed to a combinatorial algorithm to generate actual test cases. Regular expressions are annotated with a set of when clauses, which are processed by the combinatorial algorithm to include the oracle in the generated test cases.
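A minimal sketch of the expansion step: a scenario with sequence, alternation, and bounded repetition is unfolded into concrete operation sequences, which a combinatorial algorithm would then instantiate with test data. The expression encoding below is a simplification of the paper's notation, and when clauses are omitted.

```python
import itertools

# Scenario: login, then (browse | search) repeated 1..2 times, then logout.
scenario = ["login", ("repeat", ["browse", "search"], 1, 2), "logout"]

def expand(parts):
    if not parts:
        yield []
        return
    head, *rest = parts
    if isinstance(head, tuple):                  # ("repeat", options, lo, hi)
        _, options, lo, hi = head
        for n in range(lo, hi + 1):
            for choice in itertools.product(options, repeat=n):
                for tail in expand(rest):
                    yield list(choice) + tail
    else:                                        # literal operation symbol
        for tail in expand(rest):
            yield [head] + tail

for seq in expand(scenario):
    print(" -> ".join(seq))   # 6 operation sequences in total
```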

Journal ArticleDOI
16 Feb 2017
TL;DR: An automated approach to generating test data (test relational databases and test inputs for query parameters) for a set of SQL queries, with the aim of covering test requirements as obtained from said queries.
Abstract: Testing database applications is a complex task since it involves designing test databases with meaningful test data in order to reveal faults and, at the same time, with a small size in order to carry out the testing process in an efficient way. This paper presents an automated approach to generating test data (test relational databases and test inputs for query parameters) for a set of SQL queries, with the aim of covering test requirements as obtained from said queries. The test data generation follows an incremental approach where, in each increment, test data are generated to cover a test requirement by re-using test data previously generated for other test requirements. The test data generation for each test requirement is formulated as a constraint satisfaction problem, where constraints are derived from the test requirement, initial database states and previously generated test data. The generation process is fully automated and supports execution over complex queries and databases. Evaluation is carried out on a real-life application, and the results show that small-size generated test relational databases achieve high coverage scores for the queries under test in a short generation time.
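The constraint-satisfaction formulation can be sketched with an off-the-shelf solver: derive rows that make a query's predicate true and false, covering both test requirements. z3 (the `z3-solver` package) stands in for the paper's solver, and the query and domain constraints are illustrative.

```python
from z3 import And, Int, Not, Solver, sat

# Query under test (conceptually):
#     SELECT * FROM orders WHERE amount > 100 AND status = 1
amount, status = Int("amount"), Int("status")
predicate = And(amount > 100, status == 1)

requirements = [("row matching the WHERE clause", predicate),
                ("row not matching it", Not(predicate))]

for label, goal in requirements:
    s = Solver()
    s.add(goal, amount >= 0, status >= 0)   # simple domain constraints
    if s.check() == sat:
        m = s.model()
        print(label, "->", {"amount": m[amount], "status": m[status]})
```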

Proceedings ArticleDOI
01 Aug 2017
TL;DR: This work in progress paper presents a novel approach for predicting the execution time of test cases based on test specifications and available historical data on previously executed test cases by extracting timing information for various steps in manual test cases.
Abstract: Knowing the execution time of test cases is important to perform test scheduling, prioritization and progress monitoring. This work-in-progress paper presents a novel approach for predicting the execution time of test cases based on test specifications and available historical data on previously executed test cases. Our approach works by extracting timing information (measured and maximum execution time) for various steps in manual test cases. This information is then used to estimate the maximum time for test steps that have not previously been executed, but for which textual specifications exist. As part of our approach, natural language parsing of the specifications is performed to identify word combinations to check whether existing timing information on various test activities is already available or not. Finally, linear regression is used to predict the actual execution time for test cases. A proof-of-concept use case at Bombardier Transportation serves to evaluate the proposed approach.
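The final prediction step is plain linear regression over features extracted from test specifications. Below is a minimal sketch with placeholder features (step counts per activity type) and synthetic timing data; the paper's approach additionally uses natural language parsing to match step descriptions to known timings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: #setup steps, #interaction steps, #verification steps (placeholders).
X = np.array([[2, 5, 3], [1, 2, 1], [4, 8, 5], [0, 3, 2]])
y = np.array([310.0, 95.0, 540.0, 150.0])  # measured execution time (s)

model = LinearRegression().fit(X, y)

new_test = np.array([[3, 6, 4]])  # parsed from a not-yet-executed specification
print(f"predicted execution time: {model.predict(new_test)[0]:.0f} s")
```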

DOI
20 May 2017
TL;DR: It is argued that there may be deceiving local optima in the research landscape on search-based unit test generation, and specific challenges and opportunities to escape these local optima are outlined.
Abstract: Research in search-based unit test generation has seen steady development in recent years. New techniques and tools have been developed, and empirical evidence has been collected on the wide-ranging capabilities of search-based algorithms for unit test generation, and for many related software engineering practices. But why are developers not generating all their tests automatically yet in practice? In this paper, we argue that there may be deceiving local optima in the research landscape on search-based unit test generation, and we outline specific challenges and opportunities to escape these local optima.

Proceedings ArticleDOI
Satoshi Masuda
13 Mar 2017
TL;DR: The software architecture of automated vehicle simulation as the target software is discussed, and issues on software testing in these simulations are raised on the basis of related work.
Abstract: Research and development in the field of automated vehicles has increased, along with related work on its software. Software testing in automated vehicles is key to launching safe and reliable vehicles. Several issues in the software testing of automated vehicles have been raised, including the extremely large space of test inputs, the high cost of test executions in a physical environment, test oracles not being simple Boolean properties, and so on. Automated vehicle simulations are a solution for reducing the cost of test execution. However, the space of test inputs remains extremely large, since it stems from sensing data. In this paper, we discuss the software architecture of automated vehicle simulation as the target software. We raise issues on software testing in these simulations on the basis of the related work. We then discuss test design techniques used in automated vehicle simulations.

Proceedings ArticleDOI
01 Mar 2017
TL;DR: The Accelerating Test Automation Platform (ATAP) as discussed by the authors allows the creation of an automation test script through a domain specific language based on English, which is then converted to machine executable code using Selenium WebDriver.
Abstract: Test automation involves the automatic execution of test scripts instead of being manually run. This significantly reduces the amount of manual effort needed and thus is of great interest to the software testing industry. There are two key problems in the existing tools & methods for test automation: a) creating an automation test script is essentially a code development task, which most testers are not trained on, and b) the automation test script is seldom readable, making the task of maintenance an effort-intensive process. We present the Accelerating Test Automation Platform (ATAP), which is aimed at making test automation accessible to non-programmers. ATAP allows the creation of an automation test script through a domain specific language based on English. The English-like test scripts are automatically converted to machine executable code using Selenium WebDriver. ATAP's English-like test script makes it easy for non-programmers to author. The functional flow of an ATAP script is easy to understand as well, thus making maintenance simpler (you can understand the flow of the test script when you revisit it many months later). ATAP has been built around the Eclipse ecosystem and has been used in a real-life testing project. We present the details of the implementation of ATAP and the results from its usage in practice.
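The translation idea behind ATAP can be sketched as pattern matching from English-like steps to Selenium WebDriver calls. The three step shapes, the rule table, and locating elements by id are assumptions of this toy version; it emits Python Selenium code as text rather than executing it.

```python
import re

# Translation rules: English-like step pattern -> Selenium (Python) code.
RULES = [
    (r'open "(.*)"', 'driver.get("{0}")'),
    (r'type "(.*)" into (\w+)',
     'driver.find_element(By.ID, "{1}").send_keys("{0}")'),
    (r'click (\w+)', 'driver.find_element(By.ID, "{0}").click()'),
]

def translate(step):
    for pattern, template in RULES:
        m = re.fullmatch(pattern, step)
        if m:
            return template.format(*m.groups())
    raise ValueError(f"no rule for step: {step!r}")

script = ['open "https://example.com/login"',
          'type "alice" into username',
          'click submit']
for step in script:
    print(translate(step))   # emits executable Selenium WebDriver calls
```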

Proceedings ArticleDOI
10 Jul 2017
TL;DR: The results show that this method not only provides test isolation essentially for free, it also reduces testing time by 44% on average.
Abstract: Test isolation is a prerequisite for the correct execution of test suites on web applications. We present Test Execution Checkpointing, a method for efficient test isolation. Our method instruments web applications to support checkpointing and exploits this support to isolate and optimize tests. We have implemented and evaluated this method on five popular PHP web applications. The results show that our method not only provides test isolation essentially for free, it also reduces testing time by 44% on average.

Proceedings ArticleDOI
01 Mar 2017
TL;DR: The SAGA toolbox lets the user describe the test, and at the same time get immediate feedback on the test result based on a trace from the System Under Test (SUT), and enables an interactive feedback loop.
Abstract: This paper presents the SAGA toolbox. It centers around the development of tests, and the analysis of test results, in the Guarded Assertions (GA) format. Such a test defines when to test, and what to expect in such a state. The SAGA toolbox lets the user describe the test, and at the same time get immediate feedback on the test result based on a trace from the System Under Test (SUT). The feedback is visual, using plots of the trace. This enables the test engineer to play around with the data and use an agile development method, since the data is already there. Moreover, the SAGA toolbox also enables the test engineer to change test stimuli plots to study the effect they have on a test. It can later generate computer programs that can feed these test stimuli to the SUT. This enables an interactive feedback loop, where immediate feedback on changes to the test, or to the test stimuli, indicates whether the test is correct and whether it passed or failed.
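A guarded assertion is a pair (when to test, what to expect), evaluated over a recorded trace. The sketch below checks one such assertion against a toy trace; the signal names and the brake/brake-light example are illustrative, not from the SAGA toolbox.

```python
trace = [  # one sample per time step recorded from the System Under Test
    {"t": 0, "speed": 10, "brake": False, "brake_light": False},
    {"t": 1, "speed": 12, "brake": True, "brake_light": True},
    {"t": 2, "speed": 9, "brake": True, "brake_light": False},  # violation
]

guard = lambda s: s["brake"]              # when to test
expectation = lambda s: s["brake_light"]  # what to expect in that state

verdict = "PASS"
for sample in trace:
    if guard(sample) and not expectation(sample):
        print(f"FAIL at t={sample['t']}: brake applied but brake light off")
        verdict = "FAIL"
print("verdict:", verdict)
```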