
Showing papers on "Test harness published in 2006"


Patent
11 Dec 2006
TL;DR: In this paper, a generic testing framework is proposed to automatically allocate, install and verify a given version of a system under test, to exercise the system against a series of tests in a "hands-off" objective manner, and then to export information about the tests to one or more developer repositories (such as a query-able database, an email list, a developer web server, a source code version control system, a defect tracking system).
Abstract: A generic testing framework to automatically allocate, install and verify a given version of a system under test, to exercise the system against a series of tests in a “hands-off” objective manner, and then to export information about the tests to one or more developer repositories (such as a query-able database, an email list, a developer web server, a source code version control system, a defect tracking system, or the like). The framework does not “care” or concern itself with the particular implementation language of the test as long as the test can issue directives via a command line or configuration file. During the automated testing of a given test suite having multiple tests, and after a particular test is run, the framework preferably generates an “image” of the system under test and makes that information available to developers, even while additional tests in the suite are being carried out. In this manner, the framework preserves the system “state” to facilitate concurrent or after-the-fact debugging. The framework also will re-install and verify a given version of the system between tests, which may be necessary in the event a given test is destructive or otherwise places the system in an unacceptable condition.
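
To make the patent's "hands-off" loop concrete, here is a minimal Python sketch of one plausible reading: install and verify a version, drive each test through its command line, snapshot system state for later debugging, export results, and re-install after destructive or failed tests. All helper names are hypothetical placeholders, not the patent's interfaces.

```python
# A minimal sketch, assuming invented helpers; not the patent's actual API.
import subprocess

def install_and_verify(host: str, version: str) -> None:
    print(f"installing {version} on {host} and verifying")  # placeholder

def snapshot(host: str) -> str:
    return f"image-of-{host}"  # placeholder for a full system-state capture

def run_suite(host: str, version: str, tests, exporters) -> None:
    install_and_verify(host, version)
    for command, destructive in tests:
        # The harness only needs a command line, so tests may be written
        # in any implementation language.
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        image = snapshot(host)  # preserve "state" for concurrent debugging
        for export in exporters:  # e.g. database, mailing list, defect tracker
            export(command, result.returncode, image)
        if destructive or result.returncode != 0:
            install_and_verify(host, version)  # restore a known-good system

run_suite("lab-host", "v1.2.3",
          tests=[("echo smoke-test", False)],
          exporters=[lambda cmd, rc, img: print(cmd, rc, img)])
```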

156 citations


Proceedings ArticleDOI
23 May 2006
TL;DR: This paper uses UML activity diagrams as design specifications and presents an automatic test case generation approach that randomly generates abundant test cases for a Java program under test; the approach can also be used to check the consistency between the program execution traces and the behavior of UML activity diagrams.
Abstract: The test case generation from design specifications is an important work in the testing phase. In this paper, we use UML activity diagrams as design specifications, and present an automatic test case generation approach. The approach first randomly generates abundant test cases for a Java program under test. Then, by running the program with the generated test cases, we can get the corresponding program execution traces. Last, by comparing these traces with the given activity diagram according to the specific coverage criteria, we can get a reduced test case set which meets the test adequacy criteria. The approach can also be used to check the consistency between the program execution traces and the behavior of UML activity diagrams.
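
The paper's pipeline can be illustrated with a small sketch, under simplifying assumptions: a "trace" is the sequence of activity-diagram nodes a run visits, the coverage criterion is transition (edge) coverage, and the instrumented program below is a stand-in rather than the paper's tooling.

```python
# A toy sketch: random generation, trace collection, greedy reduction.
import random

def program_under_test(x: int) -> list[str]:
    # Hypothetical instrumented program returning its execution trace.
    trace = ["start"]
    trace.append("even" if x % 2 == 0 else "odd")
    trace.append("end")
    return trace

def edges(trace):
    return set(zip(trace, trace[1:]))

# 1. Randomly generate abundant test cases.
candidates = [random.randint(-100, 100) for _ in range(1000)]
# 2. Run the program to collect traces, then reduce greedily: keep a test
#    only if it covers a transition no kept test has covered yet.
covered, reduced = set(), []
for x in candidates:
    new = edges(program_under_test(x)) - covered
    if new:
        reduced.append(x)
        covered |= new
print(f"{len(candidates)} random tests reduced to {len(reduced)}")
```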

156 citations


Proceedings ArticleDOI
17 Sep 2006
TL;DR: This paper presents a new approach to prioritize test cases based on the coverage requirements present in the relevant slices of the outputs of test cases, and presents experimental results comparing the effectiveness of the prioritization approach with existing techniques that only account for total requirement coverage.
Abstract: Software testing and retesting occurs continuously during the software development lifecycle to detect errors as early as possible. The sizes of test suites grow as software evolves. Due to resource constraints, it is important to prioritize the execution of test cases so as to increase chances of early detection of faults. Prior techniques for test case prioritization are based on the total number of coverage requirements exercised by the test cases. In this paper, we present a new approach to prioritize test cases based on the coverage requirements present in the relevant slices of the outputs of test cases. We present experimental results comparing the effectiveness of our prioritization approach with that of existing techniques that only account for total requirement coverage, in terms of ability to achieve high rate of fault detection. Our results present interesting insights into the effectiveness of using relevant slices for test case prioritization.
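
A toy sketch of the contrast the paper draws, with illustrative data rather than the paper's subjects: total-coverage prioritization orders tests by everything they execute, while the slice-based variant weights only the requirements in the relevant slice of each test's output.

```python
# Illustrative comparison of two prioritization keys; data is invented.
def prioritize(tests, key):
    # Greedy "total" strategy: order by descending requirement count.
    return sorted(tests, key=key, reverse=True)

tests = {
    # test -> (requirements executed, requirements in output's relevant slice)
    "t1": ({"r1", "r2", "r3", "r4"}, {"r1"}),
    "t2": ({"r2", "r3"},             {"r2", "r3"}),
    "t3": ({"r1", "r5"},             {"r1", "r5"}),
}
by_total = prioritize(tests, key=lambda t: len(tests[t][0]))
by_slice = prioritize(tests, key=lambda t: len(tests[t][1]))
print("total coverage order:", by_total)  # t1 first: executes the most
print("relevant-slice order:", by_slice)  # t2/t3 first: more output-affecting
```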

138 citations


Book ChapterDOI
03 Jul 2006
TL;DR: This paper presents Orstra, an automated approach and supporting tool for augmenting an automatically generated unit-test suite with regression oracle checking; the augmented suite has an improved capability of guarding against regression faults.
Abstract: A test case consists of two parts: a test input to exercise the program under test and a test oracle to check the correctness of the test execution. A test oracle is often in the form of executable assertions such as in the JUnit testing framework. Manually generated test cases are valuable in exposing program faults in the current program version or regression faults in future program versions. However, manually generated test cases are often insufficient for assuring high software quality. We can then use an existing test-generation tool to generate new test inputs to augment the existing test suite. However, without specifications these automatically generated test inputs often do not have test oracles for exposing faults. In this paper, we have developed an automatic approach and its supporting tool, called Orstra, for augmenting an automatically generated unit-test suite with regression oracle checking. The augmented test suite has an improved capability of guarding against regression faults. In our new approach, Orstra first executes the test suite and collects the class under test's object states exercised by the test suite. On collected object states, Orstra creates assertions for asserting behavior of the object states. On executed observer methods (public methods with non-void returns), Orstra also creates assertions for asserting their return values. Then later when the class is changed, the augmented test suite is executed to check whether assertion violations are reported. We have evaluated Orstra on augmenting automatically generated tests for eleven subjects taken from a variety of sources. The experimental results show that an automatically generated test suite's fault-detection capability can be effectively improved after being augmented by Orstra.
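
The oracle-augmentation step can be sketched in a few lines, hedged heavily: Orstra targets Java/JUnit and records real object states, whereas the Python toy below merely replays a test script and captures observer-method return values to turn into assertions. The Stack class and all names are invented.

```python
# A rough sketch of regression-oracle augmentation on an invented class.
class Stack:
    def __init__(self): self._items = []
    def push(self, x): self._items.append(x)
    def size(self): return len(self._items)   # observer method
    def top(self): return self._items[-1]     # observer method

def augment_with_oracles(factory, script, observers):
    obj = factory()
    for step in script:
        step(obj)
    # Record the object state observed now; violations on a changed version
    # of the class flag behavioral (regression) differences.
    return [(name, getattr(obj, name)()) for name in observers]

oracle = augment_with_oracles(Stack, [lambda s: s.push(7)], ["size", "top"])
print(oracle)  # [('size', 1), ('top', 7)] -> becomes assertEquals(...) checks
```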

99 citations


Journal Article
TL;DR: Results show that an automatically generated test suite's fault-detection capability can be effectively improved after being augmented by Orstra, and the augmented test suite has an improved capability of guarding against regression faults.
Abstract: A test case consists of two parts: a test input to exercise the program under test and a test oracle to check the correctness of the test execution. A test oracle is often in the form of executable assertions such as in the JUnit testing framework. Manually generated test cases are valuable in exposing program faults in the current program version or regression faults in future program versions. However, manually generated test cases are often insufficient for assuring high software quality. We can then use an existing test-generation tool to generate new test inputs to augment the existing test suite. However, without specifications these automatically generated test inputs often do not have test oracles for exposing faults. In this paper, we have developed an automatic approach and its supporting tool, called Orstra, for augmenting an automatically generated unit-test suite with regression oracle checking. The augmented test suite has an improved capability of guarding against regression faults. In our new approach, Orstra first executes the test suite and collects the class under test's object states exercised by the test suite. On collected object states, Orstra creates assertions for asserting behavior of the object states. On executed observer methods (public methods with non-void returns), Orstra also creates assertions for asserting their return values. Then later when the class is changed, the augmented test suite is executed to check whether assertion violations are reported. We have evaluated Orstra on augmenting automatically generated tests for eleven subjects taken from a variety of sources. The experimental results show that an automatically generated test suite's fault-detection capability can be effectively improved after being augmented by Orstra.

93 citations


Proceedings ArticleDOI
24 Sep 2006
TL;DR: This paper proposes a test case prioritization technique that takes advantage of user knowledge through a machine learning algorithm, Case-Based Ranking (CBR), which elicits only relative priority information from the user in the form of pairwise test case comparisons.
Abstract: The test case execution order affects the time at which the objectives of testing are met. If the objective is fault detection, an inappropriate execution order might reveal most faults late, thus delaying the bug fixing activity and eventually the delivery of the software. Prioritizing the test cases so as to optimize the achievement of the testing goal has potentially a positive impact on the testing costs, especially when the test execution time is long. Test engineers often possess relevant knowledge about the relative priority of the test cases. However, this knowledge can hardly be expressed in the form of a global ranking or scoring. In this paper, we propose a test case prioritization technique that takes advantage of user knowledge through a machine learning algorithm, Case-Based Ranking (CBR). CBR elicits just relative priority information from the user, in the form of pairwise test case comparisons. User input is integrated with multiple prioritization indexes, in an iterative process that successively refines the test case ordering. Preliminary results on a case study indicate that CBR outperforms previous approaches and, for moderate suite sizes, gets very close to the optimal solution.
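
A greatly simplified sketch of eliciting only pairwise judgments: a comparison sort whose comparator stands in for asking the engineer "which of these two tests should run first?". CBR additionally integrates multiple automated prioritization indexes in an iterative process, which this toy omits; the preference function is invented.

```python
# Pairwise-elicitation sketch: the comparator models a human judgment.
import functools

def engineer_prefers(a: str, b: str) -> int:
    # Hypothetical elicited preference: pretend shorter names run first.
    return -1 if len(a) < len(b) else 1

tests = ["login_smoke", "checkout_full_regression", "search", "payment_retry"]
ordered = sorted(tests, key=functools.cmp_to_key(engineer_prefers))
print(ordered)  # the ordering is refined one pairwise comparison at a time
```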

90 citations


Journal ArticleDOI
TL;DR: An innovative coverage-based program prioritization algorithm, a novel path selection algorithm that takes into consideration program priority and functional calling relationship, and a constraint solver for test data generation that derives constraints from bytecode and solves complex constraints involving strings and dynamic objects are presented.
Abstract: Most automatic test generation research focuses on generating test data from pre-selected program paths, input domains, or program specifications. This paper presents a methodology for a full solution to code-coverage-based test case generation, which includes code-coverage-based path selection, test data generation, and actual test case representation in the program's original language. We implemented this method in an automatic testing framework, eXVantage. Experimental results and industrial trials show that the framework is able to generate tests to achieve program line coverage from 20% to 98% with reduced overall testing effort. Our major contributions include an innovative coverage-based program prioritization algorithm, a novel path selection algorithm that takes into consideration program priority and functional calling relationships, and a constraint solver for test data generation that derives constraints from bytecode and solves complex constraints involving strings and dynamic objects.
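
A sketch of the coverage-driven path selection idea, under stated assumptions: candidate paths are scored by how many not-yet-covered lines they contain and chosen greedily, while the constraint-solving step that would derive concrete test data for each chosen path is stubbed out. The path and line data are illustrative.

```python
# Greedy coverage-based path selection; solving for inputs is assumed.
paths = {                      # path -> lines it would cover
    "p1": {1, 2, 3},
    "p2": {1, 4, 5, 6},
    "p3": {1, 2, 6},
}
covered: set[int] = set()
selected = []
while any(paths[p] - covered for p in paths):
    best = max(paths, key=lambda p: len(paths[p] - covered))  # most new lines
    selected.append(best)
    covered |= paths[best]     # assume the solver found inputs for this path
print(selected, "covers lines", sorted(covered))  # ['p2', 'p1'] covers 1..6
```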

74 citations


Journal ArticleDOI
01 Jul 2006
TL;DR: This paper presents the operational violation approach for unit-test generation and selection, a black-box approach that requires no a priori specifications and dynamically generates operational abstractions from executions of the existing unit test suite.
Abstract: Unit testing, a common step in software development, presents a challenge. When produced manually, unit test suites are often insufficient to identify defects. The main alternative is to use one of a variety of automatic unit-test generation tools: these are able to produce and execute a large number of test inputs that extensively exercise the unit under test. However, without a priori specifications, programmers need to manually verify the outputs of these test executions, which is generally impractical. To reduce this cost, unit-test selection techniques may be used to help select a subset of automatically generated test inputs. Then programmers can verify their outputs, equip them with test oracles, and put them into the existing test suite. In this paper, we present the operational violation approach for unit-test generation and selection, a black-box approach without requiring a priori specifications. The approach dynamically generates operational abstractions from executions of the existing unit test suite. These operational abstractions guide test generation tools to generate tests to violate them. The approach selects those generated tests violating operational abstractions for inspection. These selected tests exercise some new behavior that has not been exercised by the existing tests. We implemented this approach by integrating the use of Daikon (a dynamic invariant detection tool) and Parasoft Jtest (a commercial Java unit testing tool), and conducted several experiments to assess the approach.
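
A compact sketch of the selection step, with strong simplifications: the "operational abstractions" are reduced to two hand-written predicates (Daikon infers far richer invariants automatically), and any generated input that violates one is selected for human inspection.

```python
# Operational-violation selection sketch; unit and invariants are invented.
def abs_value(x):           # unit under test (illustrative)
    return x if x >= 0 else -x

# Invariants "inferred" from runs of the existing tests, e.g. "return >= 0"
# and "input != 0" (the existing suite never passed zero).
invariants = [lambda inp, out: out >= 0,
              lambda inp, out: inp != 0]

generated_inputs = [5, -3, 0, 42]
selected = []
for inp in generated_inputs:
    out = abs_value(inp)
    if any(not inv(inp, out) for inv in invariants):
        selected.append(inp)   # exercises behavior the old suite never did

print(selected)  # [0]: violates "input != 0", so worth a human look
```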

74 citations


Patent
11 Jan 2006
TL;DR: In this article, the authors propose an automated system that randomly generates test cases for use in hardware or software quality assurance testing, wherein a given test case comprises a sequence (or chain) of discrete, atomic steps (or building blocks).
Abstract: An automated system that randomly generates test cases for use in hardware or software quality assurance testing, wherein a given test case comprises a sequence (or “chain”) of discrete, atomic steps (or “building blocks”). A particular test case (i.e., a given sequence) has a variable number of building blocks. The system takes a set of test actions (or even test cases) and links them together in a relevant and useful manner to create a much larger library of test cases or “chains.” The chains comprise a large number of random sequence tests that facilitate “chaos-like” or exploratory testing of the overall system under test. Upon execution in the system under test, the test case is considered successful (i.e., a pass) if each building block in the chain executes successfully; if any building block fails, the test case, in its entirety, is considered a failure. The system adapts and dynamically generates new test cases as underlying data changes (e.g., a building block is added, deleted, modified) or new test cases themselves are generated. The system also is tunable to generate test sequences that have a given (e.g., higher) likelihood of finding bugs or generating errors from which the testing entity can then assess the system operation. Generated chains can be replayed easily to provide test reproducibility.
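
A sketch of chained random testing as described: variable-length sequences of atomic building blocks are generated, and a chain passes only if every block in it passes. The block names and the linking rule below are invented for illustration; seeding the random generator is one simple way to make chains replayable.

```python
# Random "chain" generation from atomic building blocks.
import random

BLOCKS = {
    "create_account": lambda st: st | {"account"},
    "login":          lambda st: st | {"session"} if "account" in st else None,
    "place_order":    lambda st: st | {"order"} if "session" in st else None,
    "logout":         lambda st: st - {"session"},
}

def random_chain(max_len=6):
    return [random.choice(list(BLOCKS)) for _ in range(random.randint(1, max_len))]

def run_chain(chain):
    state = set()
    for name in chain:
        state = BLOCKS[name](state)
        if state is None:          # any failing block fails the whole chain
            return False
    return True

chain = random_chain()
print(chain, "->", "pass" if run_chain(chain) else "fail")
```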

68 citations


Patent
21 Nov 2006
TL;DR: The authors describe a test framework suited for use with distributed business applications that allows developers to specify a test, or suite of tests, to be easily selected and executed; execution of a test suite instantiates objects such as a test runner and a test result object that set up, activate, and observe a test cycle.
Abstract: A test framework suited for use with distributed business applications allows developers to specify a test, or suite of tests, to be easily selected and executed. Execution of a test suite instantiates objects such as a test runner and a test result object that set up, activate, and observe a test cycle. Results may be forwarded to a variety of special-purpose listeners which evaluate variable and state changes and ultimately determine pass/fail metrics. Results from profilers may be used to determine code coverage for each of the tests performed by the suite. APIs allow integration of the test framework with other development processes such as a source code management system. In one embodiment, new or changed source code may not be checked in until successfully passing a test cycle.
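
A minimal sketch of the runner/listener shape the patent describes, with invented interfaces: a test runner activates each test while registered listeners observe outcomes and a result object accumulates pass/fail metrics.

```python
# Runner/result/listener sketch; class and method names are invented.
from typing import Callable

class TestResult:
    def __init__(self):
        self.passed, self.failed = 0, 0
        self.listeners: list[Callable[[str, bool], None]] = []

    def record(self, name: str, ok: bool):
        if ok:
            self.passed += 1
        else:
            self.failed += 1
        for listener in self.listeners:  # e.g. state-change or coverage checks
            listener(name, ok)

class TestRunner:
    def run(self, suite: dict[str, Callable[[], None]]) -> TestResult:
        result = TestResult()
        result.listeners.append(
            lambda name, ok: print(f"{name}: {'PASS' if ok else 'FAIL'}"))
        for name, test in suite.items():
            try:
                test()
                result.record(name, True)
            except AssertionError:
                result.record(name, False)
        return result

def failing_test():
    assert 1 + 1 == 3, "intentional failure"

res = TestRunner().run({"passes": lambda: None, "fails": failing_test})
print(f"{res.passed} passed, {res.failed} failed")
```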

59 citations


Patent
21 Apr 2006
TL;DR: In this article, a test engine for multimedia application programming interfaces (APIs) is presented, which is resident in memory on the wireless device and is operable to collect multimedia test data and, in some aspects, wireless device performance data.
Abstract: Apparatus and methods may include a multimedia test engine operable to exercise and test multimedia application programming interfaces (APIs) of a wireless device based upon execution of a test configuration comprising a test script downloadable to the wireless device. The test engine is resident in memory on the wireless device and is operable to collect multimedia test data and, in some aspects, wireless device performance data, based upon the test configuration and forward the collected data to another device operable to analyze the collected data and generate a multimedia API test report viewable by an authorized user.

Patent
29 Sep 2006
TL;DR: This article describes an automatic test system that can be configured to perform any of a number of test processes; the test system contains multiple functional modules that are interconnected by a network.
Abstract: An automatic test system that can be configured to perform any of a number of test processes. The test system contains multiple functional modules that are interconnected by a network. By using software to configure data flow between functional modules, combinations of modules can be made, thereby creating virtual instruments. As test requirements change, the test system can be reconfigured to contain other virtual instruments, eliminating or reducing the need to add instruments to meet changing test requirements. To ensure adequate performance of the test system, a proposed configuration may be simulated, and if a virtual instrument does not provide a required level of performance, the test system may be reconfigured.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper describes an approach to extend the functionalities of structural test techniques to the board and system level to improve the test accessibility, test time, and diagnostic capability in a large telecommunication company.
Abstract: The success of system test is measured by test quality and cost. System test quality and cost rely on several factors, such as component and board test quality, system test completeness, the support of system diagnostics, and a process that controls overall quality, resource and cost balances. Traditional structural test techniques used at the component level can achieve both high test quality and low test costs. This paper describes an approach to extend the functionalities of structural test techniques to the board and system level to improve the test accessibility, test time, and diagnostic capability. This approach has become practice in a large telecommunication company and the benefits received from this practice are tremendous. Examples will be given at the end of the paper.

Proceedings ArticleDOI
29 Aug 2006
TL;DR: This paper proposes an experimental framework for comparison of test techniques with respect to efficiency, effectiveness and applicability, and plans to evaluate ease of automation, which has not been addressed by previous studies.
Abstract: Software testing is expensive for the industry, and always constrained by time and effort. Although there is a multitude of test techniques, there are currently no scientifically based guidelines for the selection of appropriate techniques for different domains and contexts. For large complex systems, some techniques are more efficient in finding failures than others, and some are easier to apply than others. From an industrial perspective, it is important to find the most effective and efficient test design technique that is possible to automate and apply. In this paper, we propose an experimental framework for comparison of test techniques with respect to efficiency, effectiveness and applicability. We also plan to evaluate ease of automation, which has not been addressed by previous studies. We highlight some of the problems of evaluating or comparing test techniques in an objective manner. We describe our planned process for this multi-phase experimental study. This includes presentation of some of the important measurements to be collected, with the dual goals of analyzing the properties of the test technique and validating our experimental framework.

Book ChapterDOI
27 Mar 2006
TL;DR: In this article, the authors present a theory and technique, based on the notion of refinement, for generating fault-based test cases for concurrent systems: test purposes are generated from faults that have been injected into a model of the system under test, and each test purpose forms the specification of a more detailed test case that can detect the injected fault.
Abstract: Fault-based testing is a technique where testers anticipate errors in a system under test in order to assess or generate test cases. The idea is to have enough test cases capable of detecting these anticipated errors. This paper presents a theory and technique for generating fault-based test cases for concurrent systems. The novel idea is to generate test purposes from faults that have been injected into a model of the system under test. Such test purposes form a specification of a more detailed test case that can detect the injected fault. The theory is based on the notion of refinement. The technique is automated using the TGV test case generator and an equivalence checker of the CADP tools. A case study of testing web servers demonstrates the practicability of the approach.
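
A sketch of the fault-injection idea under simplifying assumptions: the model is a small Mealy machine, the injected fault yields a mutant machine, and a breadth-first search over the pair finds an input sequence on which their outputs differ; that sequence plays the role of a test purpose. The real technique works on concurrent models using the TGV test generator and CADP's equivalence checker.

```python
# Distinguishing-sequence sketch for an injected fault in a toy model.
from collections import deque

# state -> input -> (next_state, output); a toy protocol model
SPEC = {"s0": {"req": ("s1", "ack")}, "s1": {"data": ("s0", "ok")}}
MUTANT = {"s0": {"req": ("s1", "ack")}, "s1": {"data": ("s0", "err")}}  # fault

def distinguishing_sequence(spec, mut, start="s0"):
    queue, seen = deque([(start, start, [])]), set()
    while queue:
        s, m, seq = queue.popleft()
        if (s, m) in seen:
            continue
        seen.add((s, m))
        for inp in spec[s]:
            (s2, out_s), (m2, out_m) = spec[s][inp], mut[m][inp]
            if out_s != out_m:
                return seq + [inp]   # this input sequence detects the fault
            queue.append((s2, m2, seq + [inp]))
    return None  # mutant is equivalent; the fault is undetectable

print(distinguishing_sequence(SPEC, MUTANT))  # ['req', 'data']
```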

Patent
22 Feb 2006
TL;DR: In this paper, a test case template is created upon saving a change history entry; a subset of the actual and new test cases is determined, from which new actual test cases are generated and documented.
Abstract: Maintaining and testing a software application by performing regression testing uses standard reusable test cases from change history records to generate actual test cases. A new test case template is created upon saving a change history entry. A subset of the actual and new test cases is determined, and new actual test cases generated and documented from the subset. The new actual test cases are released after successful verification.

Patent
12 Jun 2006
TL;DR: In this article, a recording agent is implemented to capture the GUI interactions of one or more human software testers, which is used to improve automated testing of a software application's graphical user interface (GUI).
Abstract: A method, apparatus and computer-usable medium for the improved automated testing of a software application's graphical user interface (GUI) through implementation of a recording agent that allows the GUI interactions of one or more human software testers to be captured and incorporated into an error-tolerant and adaptive automated GUI test system. A recording agent is implemented to capture the GUI interactions of one or more human software testers. Testers enact a plurality of predetermined test cases or procedures, with known inputs compared against preconditions and expected outputs compared against the resulting postconditions, which are recorded and compiled into an aggregate test procedure. The resulting aggregate test procedure is amended and configured to correct and/or reconcile identified abnormalities to create a final test procedure that is implemented in an automated testing environment. The results of each test run are subsequently incorporated into the automated test procedure, making it more error-tolerant and adaptive as the number of test runs increases.

Patent
12 Oct 2006
TL;DR: In this article, a test library contains test elements: building blocks that codify all possible interactions with business processes in a business process application configuration and that interact with the application's user interface.
Abstract: Systems and methods are provided for automated testing of business process application configurations. A test library (101) contains test elements, which are building blocks that codify all possible interactions with business processes in a business process application (155) configuration. The elements interact with the business process application's user interface. A business process test (120) can be defined in a test development environment by adding data input elements (141) to the test to test specific business processes. The flow of execution in a business process test can be defined by adding control elements (154) to the test. The control elements interact with the application's user interface to submit or cancel business process operations. The business process test can be executed as a test script (151) to perform automated testing. The tests can continue to function properly when the application or its user interface changes, because the elements are independent of most details of the user interface.

Proceedings ArticleDOI
27 Apr 2006
TL;DR: An on-going project is introduced on a multiagent-based framework that coordinates distributed test agents to generate, plan, execute, monitor and communicate tests on WS.
Abstract: Web services (WS) is currently the major implementation of service-oriented architecture (SOA). It defines a framework for agile and flexible integration among autonomous services based on Internet open standards. However, testing has been a challenge due to the dynamic and collaborative nature of WS. This paper introduces an on-going project on a multiagent-based framework to coordinate distributed test agents to generate, plan, execute, monitor and communicate tests on WS. Test agents are classified into different roles which communicate through XML-based agent test protocols. The test master accepts test cases from the test generator, generates test plans and distributes them to various test groups. A set of test agents that implement a test plan are organized into a test group, which is coordinated by a test coordinator. Test runners execute the test scripts, collect test results and forward the results to the test analyzer for quality and reliability analysis. The status of the test agents is monitored by the test monitor. Test agents are dynamically created, deployed and organized. Through the monitoring and coordinating mechanism, the agents can re-adjust the test plan and their behavior at runtime to adapt to the changing environment.
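
A toy sketch of the role split described above, with message passing modeled as plain Python values rather than the XML-based agent test protocols: a test master distributes plans to groups, runners execute cases, and an analyzer aggregates the results. The plan-distribution rule is invented.

```python
# Master / runner / analyzer role sketch; names and rules are illustrative.
def test_master(test_cases, groups):
    # Naive round-robin plan distribution across test groups.
    return [{"group": g, "cases": test_cases[i::len(groups)]}
            for i, g in enumerate(groups)]

def test_runner(case):
    return {"case": case, "passed": "fail" not in case}

def analyzer(results):
    passed = sum(r["passed"] for r in results)
    return f"{passed}/{len(results)} passed"

plans = test_master(["login", "search", "fail_payment"], groups=["g1", "g2"])
results = [test_runner(c) for plan in plans for c in plan["cases"]]
print(analyzer(results))  # 2/3 passed
```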

Patent
13 Mar 2006
TL;DR: In this article, a framework for automating a script step in a modular script is provided, which comprises preparing a software environment for automation and performing one or more user actions, representative of the script step to be automated, on a software product executing on a computer system, while the computer system records the user actions such that the modularity of the modular script is retained and the script step is automated.
Abstract: An automation framework for automation of modular scripts based on a method for automating a script step in a modular script is provided. In accordance with one embodiment of the invention, the method comprises preparing a software environment for automation; performing one or more user actions on a software product executing on a computer system, the actions being representative of the script step that is to be automated, while the computer system records the user actions such that the modularity of the modular script is retained and the script step is automated; and providing user input to the computer system indicating that all the user actions have been performed. The modular script may be a modular test script prescribing test script steps that test a software product. Script steps that are shared by a large number of modular scripts and that are affected by a change to a corresponding software product need to be re-automated or updated only once, leading to automatically updated modular scripts that share the updated script steps.

Patent
31 Mar 2006
TL;DR: In this article, a method for testing a software program creates test data by simulating data exchange messages between a server and a client and stores test data in Comma Separated Value (CSV) files.
Abstract: A method for testing a software program creates test data by simulating data exchange messages between a server and a client and stores test data in Comma Separated Value (CSV) files. Data repository files stored in the CSV format can be edited by common tools, like a spreadsheet program, and can be maintained easily. The test automation method provides a data capturer tool so that the data repository could be created based on any existing test environment. The test automation method converts data repository files and simulates messages in order to load data to a mobile infrastructure system and set up data fixtures. The test automation method could be integrated in a build process so that data repository and test cases are validated against any program changes periodically.
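
A small sketch of CSV-driven fixtures in the spirit described: rows editable in any spreadsheet tool are replayed as simulated client/server exchanges and validated. The file layout and message fields are illustrative only.

```python
# CSV data repository sketch; columns and messages are invented.
import csv, io

CSV_REPOSITORY = """message_type,payload,expected_status
sync_request,device=42,OK
sync_request,device=none,ERROR
"""

def simulate_exchange(message_type: str, payload: str) -> str:
    # Stand-in for loading data into the mobile-infrastructure system.
    return "OK" if "none" not in payload else "ERROR"

for row in csv.DictReader(io.StringIO(CSV_REPOSITORY)):
    status = simulate_exchange(row["message_type"], row["payload"])
    assert status == row["expected_status"], f"fixture mismatch: {row}"
print("all CSV-driven fixtures validated")
```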

Patent
17 Feb 2006
TL;DR: In this paper, a test system for performing tests on devices under test (DUTs) includes a storage device storing test data for performing the tests on the DUTs, a shared processor for generating test data, storing the test data in the storage device and generating a test control signal including one or more test instructions for executing the tests.
Abstract: A test system for performing tests on devices under test (DUTs) includes a storage device storing test data for performing the tests on the DUTs, a shared processor for generating the test data, storing the test data in the storage device and generating a test control signal including one or more test instructions for executing the tests, and, for each DUT, a dedicated processor configured to receive a test control signal from the shared processor, and in response to the test control signal, transfer the test data for one of the test instructions to the DUT to execute that test instruction and verify the completion of that test instruction.

Patent
01 Sep 2006
TL;DR: Functional testing of application software through exercising its graphical user interface functions is automated and enhanced by providing one or more test data sets and classes of panels, each panel described according to a set of graphical user interface objects and corresponding methods.
Abstract: Functional testing of application software through exercising graphical user interface functions of the application software is automated and enhanced by providing one or more test data sets, one or more classes of panels in which each panel is described according to a set of graphical user interface objects and a set of corresponding methods, and one or more engines which encapsulate one or more test method calls or invocations. During testing, and in cooperation with a functional test system, the test data sets are parsed to obtain individual test operations, which are then acted upon by invoking one or more of the engines in order to subject the application program to one or more test conditions. Results are logged, summarized, and optionally emailed to test personnel.

Proceedings ArticleDOI
07 Nov 2006
TL;DR: An extensive experimental study examines usage-based customized test requirements for the test suite reduction problem in Web application testing and shows that the reduced suites' program coverage and fault detection effectiveness increase with the context or data associated with the reduction requirement.
Abstract: Test suite reduction uses test requirement coverage to determine if the reduced test suite maintains the original suite's requirement coverage. Based on observations from our previous experimental studies on test suite reduction, we believe there is a need for customized test requirements for web applications. In this paper, we examine usage-based customized test requirements for the test suite reduction problem in web application testing. We conduct an extensive experimental study to evaluate the tradeoffs between five classes of customized requirements with respect to reduced test suite size, program coverage and fault detection effectiveness. Our results show that the reduced suites' program coverage and fault detection effectiveness increase with the context or data associated with the reduction requirement. Based on our experimental results, we provide guidance to testers on the most useful test requirement for web applications in general and provide intuition on factors testers need to consider when selecting test requirements.
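
For context, classic greedy test-suite reduction can be sketched in a few lines; the paper's contribution lies in the choice of requirements (usage-based, context- and data-aware), which the toy below represents only as opaque labels.

```python
# Greedy reduction: repeatedly keep the test covering the most
# still-uncovered requirements. Requirement labels are illustrative.
def reduce_suite(suite: dict[str, set[str]]) -> list[str]:
    uncovered = set().union(*suite.values())
    kept = []
    while uncovered:
        best = max(suite, key=lambda t: len(suite[t] & uncovered))
        kept.append(best)
        uncovered -= suite[best]
    return kept

suite = {"t1": {"url:/cart", "param:item"},
         "t2": {"url:/cart"},
         "t3": {"url:/login", "param:user", "data:admin"}}
print(reduce_suite(suite))  # ['t3', 't1']: t2 adds no new requirement coverage
```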

Book ChapterDOI
23 Oct 2006
TL;DR: This work addresses the misalignment between artifacts developed in agile software development projects and those required by model-based test generation tools by introducing a coverage language and an algorithm for automatic test generation.
Abstract: We address the problem of misalignment of artifacts developed in agile software development projects and those required by model-based test generation tools. Our solution is domain specific and relies on the existence of domain experts to design the test models. The testers interface the test generation systems with use cases that are converted into sequences of so called action words corresponding to user events at a high level of abstraction. To support this scheme, we introduce a coverage language and an algorithm for automatic test generation.

Proceedings ArticleDOI
13 Mar 2006
TL;DR: The design space of tags within the Cantag system is explored, and the design parameters and performance characteristics which an application writer can use to select the best tag system for any given scenario are described.
Abstract: This paper presents Cantag, an open source software toolkit for building marker-based vision (MBV) systems that can identify and accurately locate printed markers in three dimensions. The extensibility of the system makes it ideal for dynamic location and pose determination in pervasive computing systems. Unlike prior MBV systems, Cantag supports multiple fiducial shapes, payload types, data sizes and image processing algorithms in one framework. It allows the application writer to generate a custom tag design and associated optimised executable for any given application. The system includes a test harness which can be used to quantify, compare and contrast the performance of different designs. This paper explores the design space of tags within the Cantag system, and describes the design parameters and performance characteristics which an application writer can use to select the best tag system for any given scenario. It presents quantitative analysis of different markers and processing algorithms, which are compared fairly for the first time.

Patent
15 May 2006
TL;DR: An adaptive test system includes one or more reconfigurable test boards, each including at least one re-configurable test processor; the processors can communicate with one another using an inter-processor communications controller associated with each re-configurable test processor.
Abstract: An adaptive test system includes one or more reconfigurable test boards, with each test board including at least one re-configurable test processor. The re-configurable test processors can communicate with one another using an inter-processor communications controller associated with each re-configurable test processor. The communications include configuration information, control information, communication protocols, stimulus data, and responses. Configuration information and stimulus data can also be read from a memory. Configuration information is used to configure one or more re-configurable test processors. Once configured, the re-configurable test processor or processors process the data in order to generate one or more test signals. The one or more test signals are then used to test a DUT.

Patent
28 Dec 2006
TL;DR: In this article, a finite state machine is produced from a device description to serve as a basis for a test script, and the test script is executed, with data being sent to and received from the device description.
Abstract: In a method for testing device descriptions for field devices of automation technology, a finite state machine is produced from a device description to serve as a basis for a test script. For testing the device description, the test script is executed, with data being sent to and received from the device description. In such case, it is tested whether desired values set in the test script agree with actual values delivered e.g. from the field device.
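
A sketch of the idea under stated assumptions: the device description is reduced to a finite state machine whose transitions pair a command with the desired response, and the test script walks the machine comparing desired values against actual ones. States, commands, and values are invented.

```python
# FSM-driven device-description test sketch; the FSM table is illustrative.
FSM = {
    ("idle",      "start_measure"): ("measuring", "ACK"),
    ("measuring", "read_value"):    ("measuring", "23.5"),
    ("measuring", "stop"):          ("idle",      "ACK"),
}

def device_respond(state, command):
    # Stand-in for the real field device / device-description interpreter.
    return FSM[(state, command)]

def run_script(script):
    state = "idle"
    for command, desired in script:
        state, actual = device_respond(state, command)
        assert actual == desired, f"{command}: desired {desired}, got {actual}"
    print("device description passed the script")

run_script([("start_measure", "ACK"), ("read_value", "23.5"), ("stop", "ACK")])
```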

Patent
30 Nov 2006
TL;DR: In this paper, a process server comprising process modeling tools creates workflows comprising activities linked together based on a set of rules, and a test script server connected to one or more activities in a workflow receives requests from the one and more activities to automate an activity.
Abstract: A computer implemented method, data processing system, and computer program product for automating processes using data driven pre-recorded transactions. A process server comprising process modeling tools creates workflows comprising activities linked together based on a set of rules. A test script server connected to one or more activities in a workflow receives requests from the one or more activities to automate an activity. A remote test script agent connected to the test script server receives instructions from the test script server to play back a robotic test script of the activity, wherein the robotic test script is driven by a set of input parameters obtained from recording the activity, and wherein the robotic test script interacts with an application under test to perform the activity as an automated task.

Patent
31 May 2006
TL;DR: In this paper, extensible user interface testing supports testing of a user interface of a program, and each test step describes at least a part of a test to be performed on the user interface.
Abstract: Automated extensible user interface testing supports testing of a user interface of a program. Test data is accessed, the test data including multiple test steps. Each test step describes at least a part of a test to be performed on the user interface. For each of the multiple test steps, one or more application program interface (API) methods to invoke to carry out the part of the test is determined. This determination is based at least in part on the test data and on an identification from the API of methods supported by the API. Each of the one or more API methods is then invoked to carry out the part of the test. Verification can be performed to ensure, for example, that specified files were created, or registry values were changed, or user interface elements appear and exist.
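
A short sketch of the dispatch idea, with an invented API surface: each declarative test step names an action, which is mapped to whichever automation-API method supports it by simple name lookup; a real implementation would also perform the verification checks the patent mentions.

```python
# Data-driven UI test dispatch sketch; the API methods are invented.
class UiAutomationApi:
    def click(self, target): print(f"clicking {target}")
    def type_text(self, target, text): print(f"typing '{text}' into {target}")
    def verify_exists(self, target): print(f"verifying {target} exists")

def run_steps(api, steps):
    for step in steps:
        method = getattr(api, step["action"], None)   # map step -> API method
        if method is None:
            raise ValueError(f"API does not support action {step['action']!r}")
        args = {k: v for k, v in step.items() if k != "action"}
        method(**args)

run_steps(UiAutomationApi(), [
    {"action": "type_text", "target": "user", "text": "alice"},
    {"action": "click", "target": "submit"},
    {"action": "verify_exists", "target": "welcome_banner"},
])
```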