scispace - formally typeset

Showing papers on "Test harness" published in 2011


Journal ArticleDOI
TL;DR: DieCast is presented, an approach to scaling network services that multiplexes all of the nodes in a given service configuration as virtual machines across a much smaller number of physical machines in a test harness, providing the illusion that each VM matches a machine in the original service in terms of both available computing resources and communication behavior.
Abstract: Large-scale network services can consist of tens of thousands of machines running thousands of unique software configurations spread across hundreds of physical networks. Testing such services for complex performance problems and configuration errors remains a difficult problem. Existing testing techniques, such as simulation or running smaller instances of a service, have limitations in predicting overall service behavior at such scales. Testing large services should ideally be done at the same scale and configuration as the target deployment, which can be technically and economically infeasible. We present DieCast, an approach to scaling network services in which we multiplex all of the nodes in a given service configuration as virtual machines across a much smaller number of physical machines in a test harness. We show how to accurately scale CPU, network, and disk to provide the illusion that each VM matches a machine in the original service in terms of both available computing resources and communication behavior. We present the architecture and evaluation of a system we built to support such experimentation and discuss its limitations. We show that for a variety of services---including a commercial high-performance cluster-based file system---and resource utilization levels, DieCast matches the behavior of the original service while using a fraction of the physical resources.
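The core scaling idea can be pictured in a few lines: if each physical host multiplexes N VMs, a time dilation factor of N makes each VM's fractional share of CPU, network, and disk look, in slowed virtual time, like a full machine. The following is only an illustrative sketch; the function and parameter names are invented here, not DieCast's actual interface.

```python
# Illustrative sketch (not DieCast's code): how a time dilation factor (TDF)
# equal to the multiplexing ratio restores each VM's apparent capacity.

def dilated_resources(num_vms_per_host, host_cpu_ghz, host_net_gbps, host_disk_mbps):
    """With TDF = num_vms_per_host, each VM's 1/N share of the host,
    observed through virtual time slowed by the TDF, appears full-size."""
    tdf = num_vms_per_host
    per_vm_share = {
        "cpu_ghz": host_cpu_ghz / num_vms_per_host,
        "net_gbps": host_net_gbps / num_vms_per_host,
        "disk_mbps": host_disk_mbps / num_vms_per_host,
    }
    # Apparent (virtual-time) capacity = real share * dilation factor.
    apparent = {k: v * tdf for k, v in per_vm_share.items()}
    return tdf, apparent

tdf, apparent = dilated_resources(10, host_cpu_ghz=3.0,
                                  host_net_gbps=10.0, host_disk_mbps=500.0)
```

Multiplexing 10 VMs per host thus needs a tenfold slowdown of virtual time for each VM to perceive a full 3.0 GHz CPU and 10 Gbps link.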

117 citations


Patent
21 Oct 2011
TL;DR: In this article, a cloud-based test system is described, which consists of several slave servers and a main server. Each slave server corresponds to one of the cloud systems for controlling the corresponding virtual test machines, and the main server transmits the test instruction and the corresponding test environment condition to the slave servers for testing.
Abstract: A cloud-based test system is disclosed. The cloud-based test system utilizes several cloud systems for testing. Each cloud system includes several cloud servers for providing a cloud resource to simulate several virtual test machines. The cloud-based test system includes several slave servers and a main server. Each slave server corresponds to one of the cloud systems for controlling the corresponding virtual test machines. The main server receives a test instruction, which is utilized to execute a target test item for a target electrical device, from a client, and generates a test environment condition corresponding to the test instruction. The main server determines the virtual test machines for executing the target test item and at least one slave server to control the virtual test machines. The main server transmits the test instruction and the corresponding test environment condition to the slave servers for testing.

97 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: A mixed symbolic execution based approach that is unique in how it favors program paths associated with a performance measure of interest, operates in an iterative-deepening beam-search fashion to discard paths that are unlikely to lead to high-load tests, and generates a test suite of a given size and level of diversity.
Abstract: Load tests aim to validate whether system performance is acceptable under peak conditions. Existing test generation techniques induce load by increasing the size or rate of the input. Ignoring the particular input values, however, may lead to test suites that grossly mischaracterize a system's performance. To address this limitation we introduce a mixed symbolic execution based approach that is unique in how it 1) favors program paths associated with a performance measure of interest, 2) operates in an iterative-deepening beam-search fashion to discard paths that are unlikely to lead to high-load tests, and 3) generates a test suite of a given size and level of diversity. An assessment of the approach shows it generates test suites that induce program response times and memory consumption several times worse than the compared alternatives, it scales to large and complex inputs, and it exposes a diversity of resource consuming program behavior.

89 citations


Patent
08 Aug 2011
TL;DR: In this article, a test case automation tool provides functionality for defining an automated test set and associated test cases within a testing user interface without the use of scripting languages or compiled programming.
Abstract: A computer implemented method and system including techniques for developing and executing automated test cases are described herein. In one embodiment, a test case automation tool provides functionality for defining an automated test set and associated test cases within a testing user interface without the use of scripting languages or compiled programming. The definition of each test case may occur within a testing user interface, including displaying and receiving user selection of available methods for testing; displaying user parameter fields and receiving user parameter values in response for testing; abstracting parameter types in the user parameter values; and generating XML-format definitions of the test case. The test case automation tool may then execute the selected methods of the software application using parameters provided in the XML-format definitions, and return testing results of the test case execution.

82 citations


Proceedings ArticleDOI
17 Jul 2011
TL;DR: Evaluated on five open source libraries, the generated parameterized unit tests are more expressive, characterizing general rather than concrete behavior; need fewer computation steps, making them easier to understand; and achieve a higher coverage than regular unit tests.
Abstract: State-of-the art techniques for automated test generation focus on generating executions that cover program behavior. As they do not generate oracles, it is up to the developer to figure out what a test does and how to check the correctness of the observed behavior. In this paper, we present an approach to generate parameterized unit tests---unit tests containing symbolic pre- and postconditions characterizing test input and test result. Starting from concrete inputs and results, we use test generation and mutation to systematically generalize pre- and postconditions while simplifying the computation steps. Evaluated on five open source libraries, the generated parameterized unit tests are (a) more expressive, characterizing general rather than concrete behavior; (b) need fewer computation steps, making them easier to understand; and (c) achieve a higher coverage than regular unit tests.

75 citations


Proceedings ArticleDOI
17 Jul 2011
TL;DR: A technique to automatically suggest repairs for web application test scripts based on differential testing, which compares the behavior of the test case on two successive versions of the web application and suggests repairs that can be applied to the scripts.
Abstract: Web applications tend to evolve quickly, resulting in errors and failures in test automation scripts that exercise them. Repairing such scripts to work on the updated application is essential for maintaining the quality of the test suite. Updating such scripts manually is a time consuming task, which is often difficult and is prone to errors if not performed carefully. In this paper, we propose a technique to automatically suggest repairs for such web application test scripts. Our technique is based on differential testing and compares the behavior of the test case on two successive versions of the web application: the first, in which the test script runs successfully, and the second, in which the script results in an error or failure. By analyzing the difference between these two executions, our technique suggests repairs that can be applied to the scripts. To evaluate our technique, we implemented it in a tool called WATER and exercised it on real web applications with test cases. Our experiments show that WATER can suggest meaningful repairs for practical test cases, many of which correspond to those made later by developers themselves.
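One simple repair heuristic in this spirit, shown as a toy illustration rather than WATER's actual algorithm, matches the attributes a broken locator used on the old version against the elements present in the new version and suggests the closest candidate:

```python
# Toy repair suggestion: pick the new-version element whose attributes
# best overlap the attributes of the element the old locator matched.

def suggest_repair(old_attrs, new_elements):
    """old_attrs: dict of the element the script matched in the old version.
    new_elements: list of attribute dicts scraped from the new version."""
    def overlap(attrs):
        return len(set(attrs.items()) & set(old_attrs.items()))
    return max(new_elements, key=overlap)

repaired = suggest_repair(
    {"id": "submit-btn", "class": "primary"},
    [{"id": "submit-button", "class": "primary"},
     {"id": "cancel", "class": "link"}],
)
```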

73 citations


Proceedings ArticleDOI
22 Sep 2011
TL;DR: A survey on existing concolic testing tools is conducted, discussing their strengths and limitations, and environments in which they can be applied, as well as the effectiveness and scalability of the publicly available tools.
Abstract: Automatic testing, in particular test input generation, has become increasingly popular in the research community over the past ten years. In this paper, we conduct a survey on existing concolic testing tools, discussing their strengths and limitations, and environments in which they can be applied. We also conduct a case study to determine the prevalence of the identified limitations in six large software systems (four from open-source and two from ABB), as well as the effectiveness and scalability of the publicly available tools. The results show that pointers and native calls are the most prevalent limitations, preventing tools from generating high branch coverage test cases, and variables of float type are the least prevalent. The scalability of the publicly available tools is also a limitation for industrial use, due to the large overhead of creating a test harness. Finally, we propose suggestions on how practitioners can use these tools and how researchers can improve concolic testing.

65 citations


Patent
Michal Matyjek1
15 Mar 2011
TL;DR: In this paper, a method and apparatus for generating automated test case scripts from natural language test cases is described, which may include parsing the received natural language test case to locate terms relevant to testing a software application within the natural language text, selecting one or more of the terms, and causing a search of a testing framework system for automated testing script commands based on the selected terms.
Abstract: A method and apparatus for generating automated test case scripts from natural language test cases is described. The method may include receiving a natural language test case for testing a software application. The method may also include parsing the received natural language test case to locate terms relevant to testing a software application within the natural language test case, selecting one or more of the terms, and causing a search of a testing framework system for automated testing script commands based on the selected terms. The method may also include generating an automated test case script that corresponds to the natural language test case based on results of the search.
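The parse-select-search pipeline could look roughly like this toy version, where the term table and command names are hypothetical stand-ins for the testing framework search the patent describes:

```python
# Hypothetical sketch: extract action terms from a natural language test
# step and map them to automated testing script commands via a lookup table.

COMMANDS = {  # invented term-to-command table, standing in for the framework search
    "click": "ClickElement",
    "type": "InputText",
    "open": "OpenBrowser",
}

def to_script(step):
    """Parse a natural language step, keep the terms relevant to testing,
    and return the corresponding script commands in order."""
    words = step.lower().split()
    return [COMMANDS[w] for w in words if w in COMMANDS]

commands = to_script("Open the browser and click Login")
```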

62 citations


Patent
14 Sep 2011
TL;DR: In this paper, a testing service receives a test execution request for executing test operations on a test target, and the testing service determines a computing capacity for executing the testing and appropriates a plurality of workers in a cloud computing service.
Abstract: In some implementations, a testing service receives a test execution request for executing test operations on a test target. The testing service may map the test execution request to a particular type of supported test framework from among a plurality of types of supported test frameworks. The testing service may obtain a test package provided by a user that requested the testing, such as from a cloud storage location. The testing service determines a computing capacity for executing the testing and appropriates a plurality of workers in a cloud computing service. The testing service configures the plurality of workers for executing the test operations based on at least one of the test framework, the test execution request or the test package. The testing service provides test execution chunks from the test package to the plurality of workers for executing the testing on the test target.

59 citations


Patent
12 May 2011
TL;DR: In this article, the authors present a testing framework that automates the querying, extraction and loading of test data into a test result database from a plurality of data sources and application interfaces using source specific adaptors.
Abstract: The present method and apparatus provide for automated testing of data integration and business intelligence projects using Extract, Load and Validate (ELV) architecture. The method and computer program product provide a testing framework that automates the querying, extraction and loading of test data into a test result database from a plurality of data sources and application interfaces using source specific adaptors. The test data available for extraction using the adaptors includes metadata, such as the database queries generated by OLAP tools, that is critical for validating changes in business intelligence systems. A validation module helps define validation rules for verifying the test data loaded into the test result database. The validation module further provides a framework for comparing the test data with previously archived test data as well as benchmark test data.

58 citations


Book ChapterDOI
20 Jun 2011
TL;DR: A tool, ISTA (Integration and System Test Automation), for automated test generation and execution by using high-level Petri nets as finite state test models, useful not only for function testing but also for security testing by using Petri nets as threat models.
Abstract: Automated software testing has gained much attention because it is expected to improve testing productivity and reduce testing cost. Automated generation and execution of tests, however, are still very limited. This paper presents a tool, ISTA (Integration and System Test Automation), for automated test generation and execution by using high-level Petri nets as finite state test models. ISTA has several unique features. It allows executable test code to be generated automatically from a MID (Model-Implementation Description) specification - including a high-level Petri net as the test model and a mapping from the Petri net elements to implementation constructs. The test code can be executed immediately against the system under test. It supports a variety of languages of test code, including Java, C/C++, C#, VB, and html/Selenium IDE (for web applications). It also supports automated test generation for various coverage criteria of Petri nets. ISTA is useful not only for function testing but also for security testing by using Petri nets as threat models. It has been applied to several industry-strength systems.
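The test-model side can be illustrated with a minimal place/transition net. ISTA itself uses high-level Petri nets plus a MID mapping to implementation constructs; this sketch only shows the underlying mechanic of markings gating transition firings, from which firing sequences become test sequences.

```python
# Minimal place/transition net sketch (a simplification of ISTA's
# high-level Petri net test models).

def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["consume"].items())

def fire(marking, transition):
    """Fire an enabled transition: remove input tokens, add output tokens."""
    m = dict(marking)
    for p, n in transition["consume"].items():
        m[p] -= n
    for p, n in transition["produce"].items():
        m[p] = m.get(p, 0) + n
    return m

# Invented example net: a login step consumes a "logged_out" token.
login = {"consume": {"logged_out": 1}, "produce": {"logged_in": 1}}
m0 = {"logged_out": 1}
```

A sequence of firings that satisfies a chosen coverage criterion over the net is then translated, via the model-to-implementation mapping, into executable test code.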

Proceedings ArticleDOI
06 Nov 2011
TL;DR: An automatic technique for generating maintainable regression unit tests for programs; the generated tests achieved good coverage and mutation kill scores, were readable by the product's developers, and required few edits as the system under test evolved.
Abstract: This paper presents an automatic technique for generating maintainable regression unit tests for programs. We found previous test generation techniques inadequate for two main reasons. First, they were designed for and evaluated upon libraries rather than applications. Second, they were designed to find bugs rather than to create maintainable regression test suites: the test suites that they generated were brittle and hard to understand. This paper presents a suite of techniques that address these problems by enhancing an existing unit test generation system. In experiments using an industrial system, the generated tests achieved good coverage and mutation kill score, were readable by the product's developers, and required few edits as the system under test evolved. While our evaluation is in the context of one test generator, we are aware of many research systems that suffer similar limitations, so our approach and observations are more generally relevant.

Patent
17 Oct 2011
TL;DR: In this article, the authors present a system for automatically converting a manual test case representation (in a natural language) into a machine-readable test-case representation using a methodical process of trial-and-error to resolve ambiguities.
Abstract: A computer system, method and computer program product for automatically converting, through automating-test-automation software, a manual test case representation (in a natural language), for testing a target software, into a machine-readable test case representation. In preferred embodiments, the machine-readable test case is in the form of a keyword-based test case that is made from action-target-data tuples. The automating-test-automation software uses a methodical process of trial-and-error to resolve ambiguities that are generally present (and generally resolvable by humans) in the manual test case representation.

Proceedings ArticleDOI
01 Dec 2011
TL;DR: This paper introduces an automated test case generation approach for industrial automation applications which are specified by UML state chart diagrams and presents a prototype application of the approach for a sorting machine.
Abstract: The need for increasing flexibility of industrial automation system products leads to the trend to shift functional behavior from hardware solutions to software components. This trend causes an increasing complexity of software components and the need for comprehensive and automated testing approaches to ensure a requested quality level. Nevertheless, a key task in software testing is identifying appropriate test cases, which typically requires high effort for test case generation and rework for adapting test cases when requirements change. Semi-automated derivation of test cases based on models, like UML, can support test case generation. In this paper we introduce an automated test case generation approach for industrial automation applications which are specified by UML state chart diagrams. In addition we present a prototype application of the presented approach for a sorting machine. Major results showed that state charts (a) can support efficient test case generation and (b) enable automated code generation of test cases and code for the industrial automation domain.

Proceedings ArticleDOI
21 Mar 2011
TL;DR: The main contribution of this paper is to present the integration of the TEMA model-based graphical user interface test generator with a keyword-driven test automation framework, Robot Framework, providing a base for future MBT utilization.
Abstract: Model-based testing (MBT) is a relatively new approach to software testing that extends test automation from test execution to test design using automatic test generation from models. The effective use of the new approach requires new skills and knowledge, such as test modeling skills, but also good tool support. This paper focuses upon the integration of the TEMA model-based graphical user interface test generator with a keyword-driven test automation framework, Robot Framework. Both of the tools are available as open source. The purpose of the integration was to enable the wide testing library support of Robot Framework to be used in online model-based testing. The main contribution of this paper is to present the integration providing a base for future MBT utilization, but we will also describe a short case study where we experimented with the integration in testing a Java Swing GUI application and discuss early experiences in using the framework in testing Web GUIs.

Patent
25 Jan 2011
TL;DR: In this article, a test management platform collects production data relating to execution of a prior release of an application within a production environment, and extracts unique messages from the collected production data to create a test bed including a plurality of test cases.
Abstract: An approach for enabling maintenance of a test bed for use in executing software testing is described. A test management platform collects production data relating to execution of a prior release of an application within a production environment. The test management platform extracts unique messages from the collected production data to create a test bed including a plurality of test cases. Input messages to be processed by the application are generated by determining which unique messages require a change for the current release of the application.

Proceedings ArticleDOI
Tuli Nivas1
17 Jul 2011
TL;DR: This paper provides a simple framework that can be easily used to complete an end-to-end testing process (pre-test, traffic generation, and post-test activities) and addresses the design and properties of such a harness.
Abstract: Scarcity of commercially available testing tools that could support all native or application specific message formats, as well as those that cater to non-GUI or non-web-based backend applications, leads to creating your own customized traffic generators or scripts. Also, the test environment setup may differ from one system to another -- some may use simulators or mocks to stub out complex software, others may just be a scaled down (in terms of number of servers) replica of the production environment. So what are the factors that need to be considered when creating scripts that can be used for native request formats and for non-GUI or web-based applications? How do we design a script that is easy to maintain and extend when new test scenarios are added to accurately assess the performance of an application? This paper provides (1) the general design principles for a test script that can be used to generate traffic for any request format as well as (2) specific factors to keep in mind when creating a script that will work in a test environment that uses a mock. In addition to this, the core activities of testing include not only traffic generation but also setting up the environment, verifying that both the hardware and software configurations are accurate prior to sending traffic, and creating a report at the end of the test. Therefore the test script needs to be part of a complete harness that accomplishes these tasks. The paper will address the (3) design and properties of such a harness. It provides a simple framework that can be easily used to complete an end-to-end testing process: pre-test, traffic generation, and post-test activities.
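The harness structure argued for here (verify the environment, generate traffic, report) can be sketched as a thin skeleton. The hook names below are invented for illustration; the paper describes design principles, not this API.

```python
# Hypothetical skeleton of a test harness wrapping the three phases the
# paper identifies: pre-test checks, traffic generation, post-test reporting.

class TestHarness:
    def __init__(self, checks, generator, reporter):
        self.checks = checks        # pre-test: hw/sw configuration checks
        self.generator = generator  # traffic generation (any request format)
        self.reporter = reporter    # post-test: build the report

    def run(self):
        for check in self.checks:
            if not check():
                raise RuntimeError("environment check failed before traffic generation")
        results = self.generator()
        return self.reporter(results)

harness = TestHarness(checks=[lambda: True],
                      generator=lambda: ["resp-1", "resp-2"],
                      reporter=len)
```

Keeping the traffic generator behind a single hook is what makes the script easy to extend to new message formats or to a mocked environment: only that callable changes.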

Patent
18 Mar 2011
TL;DR: In this paper, the authors provided mechanisms and methods for automated test case generation and scheduling, which can provide an automated manner of generating test cases and scheduling tests associated with such test cases.
Abstract: In accordance with embodiments, there are provided mechanisms and methods for automated test case generation and scheduling. These mechanisms and methods for automated test case generation and scheduling can provide an automated manner of generating test cases and scheduling tests associated with such test cases. The ability to provide this automation can improve efficiency in a testing environment.

Proceedings ArticleDOI
01 Nov 2011
TL;DR: This work proposes an automatic test case generation approach to verify the system behavior in erroneous situations using fault injection, simulating component (device) defects during runtime, and demonstrates its applicability on a laboratory plant.
Abstract: The development of PLC control software in machine and plant automation is facing increasing challenges, since more and more functionality and safety aspects fall within the control software's responsibility. Reliability and robustness of reactive systems in long-term operation are influenced by physical conditions. These aspects must be considered at an early development stage in order to reduce development costs and fulfill quality requirements at the same time. We propose an automatic test case generation approach to verify the system behavior in erroneous situations using fault injection, simulating component (device) defects during runtime. We focus on the generation of a reduced set of meaningful test cases to be executed in a simulated environment to increase reliability. The applicability is demonstrated on a laboratory plant.
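Fault injection of the kind described, simulating a device defect at runtime, can be illustrated by replacing a simulated device's method with one that fails, then observing how the control logic reacts. The names below are invented; this is a sketch of the idea, not the paper's tooling.

```python
# Toy fault-injection sketch: make a chosen operation of a simulated
# device start failing, so the controller's error handling can be tested.

def inject_fault(device, method_name):
    """Replace `method_name` on this device instance with a failing stub."""
    def failing(*args, **kwargs):
        raise IOError(f"injected fault in {method_name}")
    setattr(device, method_name, failing)

class SimulatedSensor:  # stand-in for a simulated plant component
    def read(self):
        return 42

sensor = SimulatedSensor()
inject_fault(sensor, "read")  # from now on, sensor.read() raises IOError
```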

Proceedings ArticleDOI
19 Dec 2011
TL;DR: This research aims to develop an integrated test automation framework by which implementations on multiple heterogeneous platforms can be tested efficiently, extending open source test frameworks to handle the common events of the mobile platforms.
Abstract: Implementations of mobile applications should be tested on the mobile platform. Since several mobile platforms compete in the marketplace, a lot of effort is needed to test the implementations on every platform. This research aims to develop an integrated test automation framework by which implementations on multiple heterogeneous platforms can be tested efficiently. Commonly used events in the mobile platforms are extracted and mapped into the functions of each testing framework. We extended open source test frameworks to handle the common events. By doing so, tests can be performed by describing test cases at a high level without generating test code manually. The proposed integrated framework was evaluated with the implementation of several mobile applications on the Android and iPhone platforms, and the framework was found to be effective and valid.

Patent
04 Feb 2011
TL;DR: In this article, an automated test tool interface is described for obtaining an accurate identification of a root element of a component and any sub elements within the root element on a web page.
Abstract: An automated test tool interface is described. A developer of a reusable web component provides an interface for obtaining an accurate identification of a root element of a component and any sub elements within the root element on a web page. An automated test framework uses this interface when recording automated tests to obtain a stable identification of the element that is independent of the rendering of the component on the web page. When the automated test is played back, the test framework again uses the interface to convert the stable identification of the element to a form that is dependent on the rendering of the component on the web page. Thus, changes in the rendering of a component will no longer cause an automated test tool to fail, as element identification in the testing framework is no longer tied to the specific rendering of the web page.

Journal ArticleDOI
TL;DR: A test case reusability analysis technique to identify reusable test cases of the original test suite based on graph analysis and a test suite augmentation technique to generate new test cases to cover the change‐related parts of the new model.
Abstract: Model-based testing helps test engineers automate their testing tasks so that they are more cost-effective. When the model is changed because of the evolution of the specification, it is important to maintain the test suites up to date for regression testing. A complete regeneration of the whole test suite from the new model, although inefficient, is still frequently used in the industry, including Microsoft. To handle specification evolution effectively, we propose a test case reusability analysis technique to identify reusable test cases of the original test suite based on graph analysis. We also develop a test suite augmentation technique to generate new test cases to cover the change-related parts of the new model. The experiment on four large protocol document testing projects shows that our technique can successfully identify a high percentage of reusable test cases and generate low-redundancy new test cases. When compared with a complete regeneration of the whole test suite, our technique significantly reduces regression testing time while maintaining the stability of requirement coverage over the evolution of requirements specifications. Copyright © 2011 John Wiley & Sons, Ltd.
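The reusability check can be pictured as a simple graph test. As a simplification of the paper's graph analysis, assume a test case is a path of model states and is reusable when every transition it uses still exists in the new model:

```python
# Simplified sketch of reusability analysis: a test case (path of states)
# is reusable iff all of its transitions survive in the new model's graph.

def is_reusable(test_path, new_edges):
    """test_path: list of states; new_edges: set of (src, dst) transitions."""
    return all((src, dst) in new_edges
               for src, dst in zip(test_path, test_path[1:]))

new_model = {("idle", "run"), ("run", "done")}
```

Test cases that fail this check mark the change-related parts of the model, which is where the augmentation step would generate new, low-redundancy test cases.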

Patent
27 Sep 2011
TL;DR: In this paper, a user can select at least one test script corresponding to a network service and the selected test script can be executed over a topology that can be generated by the user.
Abstract: Disclosed herein are methods, systems, and computer programs for providing an end-to-end solution in a test automation framework present in a communication network. A user can select at least one test script corresponding to a network service. The selected test script can be executed over a topology that can be generated by the user. The topology can be generated by a simple drag and drop function. Once the selected test script is executed, a log report can be generated that includes details associated with the executed test script. The method can also facilitate reserving of the topology so that it can be used at a later point in time. The scripts can be generated automatically without user intervention.

01 Jan 2011
TL;DR: Testing is the dominating method for quality assurance of industrial software and despite its importance and the vast amount of resources invested, there are surprisingly limited efforts spent on test efforts.
Abstract: Testing is the dominating method for quality assurance of industrial software. Despite its importance and the vast amount of resources invested, there are surprisingly limited efforts spent on test ...

Proceedings ArticleDOI
12 Jul 2011
TL;DR: An approach for search-based software testing for dynamically typed programming languages that can generate test scenarios and both simple and more complex test data and achieves full or higher statement coverage on more cases and does so faster than randomly generated test cases.
Abstract: Manually creating test cases is time consuming and error prone. Search-based software testing can help automate this process and thus reduce time and effort and increase quality by automatically generating relevant test cases. Previous research has mainly focused on static programming languages and simple test data inputs such as numbers. This is not practical for dynamic programming languages that are increasingly used by software developers. Here we present an approach for search-based software testing for dynamically typed programming languages that can generate test scenarios and both simple and more complex test data. The approach is implemented as a tool, RuTeG, in and for the dynamic programming language Ruby. It combines an evolutionary search for test cases that give structural code coverage with a learning component to restrict the space of possible types of inputs. The latter is called for in dynamic languages since we cannot always know statically which types of objects are valid inputs. Experiments on 14 cases taken from real-world Ruby projects show that RuTeG achieves full or higher statement coverage on more cases and does so faster than randomly generated test cases.

Patent
16 Aug 2011
TL;DR: In this paper, the authors describe a test automation management system, in which a request for initiating at least one test automation task is received by an electronic computing device from a mobile device.
Abstract: Systems, methods and computer program products relating to test automation management are described. In some aspects, a request for initiating at least one test automation task is received by an electronic computing device from a mobile device. A web service associated with the received request and at least one automation tool are identified. At least one automation tool is launched in response to the received request. The launched at least one automation tool executes at least one test script based on the received request, the at least one test script can include a sequence of instructions. Test data are loaded based on at least a portion of the executed sequence of instructions for the at least one test automation task, and one or more test results associated with the executed at least one test script are stored.

Proceedings ArticleDOI
18 Jul 2011
TL;DR: A model-based testing approach for web application black box testing is presented and a notation for web application control flow models augmented with data flow information is introduced.
Abstract: Model-based testing is a promising technique for test case design that is used in an increasing number of application domains. However, to fully gain efficiency advantages, intuitive domain-specific notations with comfortable tool support as well as a high degree of automation in the whole testing process are required. In this paper, a model-based testing approach for web application black box testing is presented. A notation for web application control flow models augmented with data flow information is introduced. The described research prototype demonstrates the fully automated generation of ready to use test case scripts for common test automation tools including test oracles from the model.
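As an illustration of deriving test cases from such a model (the notation below is hypothetical, not the paper's): nodes are pages, edges are user actions, and every bounded path through the graph becomes one test case script.

```python
# Hypothetical control flow model of a web application:
# page -> list of (action, target page) edges.
MODEL = {
    "home":      [("click_login", "login")],
    "login":     [("submit_valid", "dashboard"), ("submit_invalid", "login")],
    "dashboard": [],
}

def generate_paths(model, start, max_len=3):
    """Enumerate action sequences (test cases) up to a length bound."""
    paths = []
    def walk(page, path):
        if not model[page] or len(path) == max_len:
            paths.append(path)  # terminal page or bound reached: emit test case
            return
        for action, target in model[page]:
            walk(target, path + [action])
    walk(start, [])
    return paths

paths = generate_paths(MODEL, "home")
```

The paper's prototype additionally augments edges with data flow information (form inputs) and emits ready-to-run scripts for test automation tools; the sketch stops at path enumeration.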

Proceedings ArticleDOI
25 Sep 2011
TL;DR: This paper presents a tool that allows testers to easily collect, prioritize, and reduce user-session-based test cases in order to manage large test suites.
Abstract: Test suite prioritization and reduction are two approaches to managing large test suites. They play an important role in regression testing, where a large number of tests accumulate over time from previous versions of the system. Accumulation of tests is exacerbated in user-session-based testing of web applications, where field usage data is continually logged and converted into test cases. This paper presents a tool that allows testers to easily collect, prioritize, and reduce user-session-based test cases. Our tool provides four contributions: (1) guidance to users on how to configure their web server to log important usage information, (2) automated parsing of web logs into XML-formatted test cases that can be used by test replay tools, (3) automated prioritization of test cases by length-based and combinatorial-based criteria, and (4) automated reduction of test cases by combinatorial coverage.
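A minimal sketch of contributions (2) and (3), with a hypothetical log format and the simplest length-based criterion (longest session first):

```python
# Hypothetical pre-parsed web log: (user, request) pairs in arrival order.
LOG = [
    ("alice", "GET /"), ("bob", "GET /"),
    ("alice", "POST /login"), ("alice", "GET /cart"),
    ("bob", "GET /about"),
]

def sessions_to_tests(log):
    """Group logged requests by user session; each session is one test case."""
    tests = {}
    for user, request in log:
        tests.setdefault(user, []).append(request)
    return tests

def prioritize_by_length(tests):
    """Length-based prioritization: run the longest sessions first."""
    return sorted(tests.values(), key=len, reverse=True)

ordered = prioritize_by_length(sessions_to_tests(LOG))
```

The tool itself emits XML test cases for replay tools and also supports combinatorial criteria; both are omitted here for brevity.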

Patent
17 Nov 2011
TL;DR: In this paper, an integrated development environment incorporating a compliance test tool for analyzing system data generated during execution of the software application under test is presented. However, this tool is limited to a single application.
Abstract: An integrated development environment incorporates a mechanism to automatically test the execution of a software application for compliance with requirements set forth in a compliance test. The environment includes a compliance test tool for analyzing system data generated during execution of the software application under test. The execution of the software application is performed in a runtime environment having code markers that are used to identify the occurrence of events that are associated with the requirements of the compliance test.
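The code-marker mechanism can be sketched as follows (marker names and the compliance check are illustrative, not from the patent):

```python
events = []

def marker(name):
    events.append(name)  # code marker emitted by the runtime environment

def app():
    # Application under test, instrumented with code markers.
    marker("auth_checked")
    marker("data_encrypted")

# Requirements set forth in a hypothetical compliance test.
REQUIRED = {"auth_checked", "data_encrypted", "audit_logged"}

app()
missing = REQUIRED - set(events)  # analyze system data gathered at runtime
compliant = not missing
```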

Patent
22 Sep 2011
TL;DR: In this paper, an API testing component is provided that is configured to deploy test suites to one or more test virtual machine instances, and test results generated by the API tests are collected and stored.
Abstract: An API testing component is provided that is configured to deploy test suites to one or more test virtual machine instances. The test suites include an API test. The API tests are periodically executed on the test virtual machine instances, and test results generated by the API tests are collected and stored. The API testing component also provides a user interface for viewing the test results using a user interface specification that defines a visual layout for presenting test results generated by one or more test suites. The API testing component might also generate one or more alarm messages utilizing the test results and an alarm specification.
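A sketch of the execute/collect/alarm cycle (instance names and the alarm rule are hypothetical, and the periodic scheduling and UI are omitted):

```python
def api_test(instance):
    # Hypothetical API test: pretend one instance's endpoint is failing.
    return {"instance": instance, "passed": instance != "vm-2"}

def run_suite(instances):
    results = [api_test(i) for i in instances]      # collect and store results
    alarms = [f"ALARM: API test failed on {r['instance']}"
              for r in results if not r["passed"]]  # apply alarm specification
    return results, alarms

results, alarms = run_suite(["vm-1", "vm-2", "vm-3"])
```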