
Showing papers on "Test harness" published in 2004


Proceedings ArticleDOI
Wang Linzhang1, Yuan Jiesong1, Yu Xiaofeng1, Hu Jun1, Li Xuandong1, Zheng Guoliang1 
30 Nov 2004
TL;DR: This paper proposes an approach to generate test cases directly from UML activity diagrams using a gray-box method, in which the design is reused to avoid the cost of test model creation.
Abstract: Test case generation is the most important part of the testing effort, and the automation of specification-based test case generation needs formal or semi-formal specifications. As a semi-formal modelling language, UML is widely used to describe analysis and design specifications by both academia and industry, so UML models naturally become sources for test generation. Test cases are usually generated from the requirements or the code, while the design is seldom considered; this paper proposes an approach to generate test cases directly from UML activity diagrams using a gray-box method, in which the design is reused to avoid the cost of test model creation. In this approach, test scenarios are derived directly from the activity diagram modelling an operation. Then all the information needed for test case generation (the input/output sequences and parameters, the constraint conditions, and the expected object method sequence) is extracted from each test scenario. Finally, the possible values of all the input/output parameters are generated by applying the category-partition method, and a test suite is systematically generated to find inconsistencies between the implementation and the design. A prototype tool named UMLTGF has been developed to support this process.

206 citations
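
The category-partition step in the entry above is easy to picture in code: each parameter is split into categories of representative choices, and the cross product of the choices yields candidate test inputs. The Java sketch below is a minimal, hypothetical illustration of that idea, not the authors' UMLTGF tool; the parameter names are invented.

import java.util.*;

// Minimal category-partition sketch: each parameter maps to its representative
// choices, and the cross product of choices forms candidate test frames.
public class CategoryPartition {
    public static List<Map<String, String>> frames(Map<String, List<String>> choices) {
        List<Map<String, String>> result = new ArrayList<>();
        result.add(new LinkedHashMap<>());
        for (Map.Entry<String, List<String>> cat : choices.entrySet()) {
            List<Map<String, String>> next = new ArrayList<>();
            for (Map<String, String> partial : result)
                for (String choice : cat.getValue()) {
                    Map<String, String> frame = new LinkedHashMap<>(partial);
                    frame.put(cat.getKey(), choice);
                    next.add(frame);
                }
            result = next;
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> choices = new LinkedHashMap<>();
        choices.put("amount", List.of("0", "1", "MAX"));   // boundary choices
        choices.put("account", List.of("open", "closed")); // state choices
        frames(choices).forEach(System.out::println);      // 6 candidate inputs
    }
}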


Patent
15 Apr 2004
TL;DR: In this patent, a computer-based testing system is described, consisting of a data administration system with centrally hosted data administration servers, a network, and an operational testing system built from test delivery, proctor, and student test subsystems.
Abstract: A computer-based testing system is disclosed comprising a data administration system including centrally hosted data administration servers, a network, and an operational testing system, the data administration system including a browser-capable workstation connectible via the network to the centrally hosted data administration servers. The operational testing system may include three subsystems connected to the network: a test delivery server running on a test delivery workstation and managing all aspects of a test session by acting as a data repository and hub for communication between the other subsystems; proctor software running on a proctor test workstation providing a user interface for managing a test session by communicating with the test delivery server; and student test software running on a student test workstation providing a user interface for displaying test items and recording responses.

183 citations


Proceedings ArticleDOI
01 Jul 2004
TL;DR: The tools and interfaces created by the AGEDIS project support a model based testing methodology that features a large degree of automation and also includes a feedback loop integrating coverage and defect analysis tools with the test generator and execution framework.
Abstract: We describe the tools and interfaces created by the AGEDIS project, a European Commission sponsored project for the creation of a methodology and tools for automated model driven test generation and execution for distributed systems. The project includes an integrated environment for modeling, test generation, test execution, and other test related activities. The tools support a model based testing methodology that features a large degree of automation and also includes a feedback loop integrating coverage and defect analysis tools with the test generator and execution framework. Prototypes of the tools have been tried in industrial settings providing important feedback for the creation of the next generation of tools in this area.

129 citations


Patent
14 Jan 2004
TL;DR: In this patent, a test administration system comprises a central computer and an associated database containing a plurality of tests that may be distributed to a test taker, and the test administrator uses the system to generate test identification codes for a chosen set of ordered tests.
Abstract: A test administration system comprises a central computer and associated database containing a plurality of tests that may be distributed to a test taker. The central computer provides a website to be accessed by the test administrator and the test taker at remote personal computers when using the test administration system. The website includes an administrator workspace for use by the test administrator and a testing workspace for use by the test taker. The administrator workspace provides the test administrator with the ability to order any number of the tests contained in the database. After ordering a number of tests, the test administrator uses the system to generate test identification codes for a chosen set of ordered tests. The system automatically provides the test identification codes to those test subjects taking a test, and provides the test subject with access information and instructions for using the system to take the test. The test administrator workspace also provides the test administrator with valuable test status information concerning the tests ordered by the administrator.

91 citations


Patent
22 Oct 2004
TL;DR: In this patent, a test tool running on a development computer communicates with an agent executing on the target device; the system can be used to test an application's rendering of output to individual controls of a GUI displayed on a target device executing the application.
Abstract: Methods and systems disclosed herein can be used for testing an application's rendering of output to individual controls of a Graphical User Interface (GUI), displayed on a target device executing the application. A system according to the present invention includes a test tool running on a development computer that communicates with an agent executing on the target device. Testing is performed by using the test tool to execute test scripts, which cause test input to be injected via the agent into the application on the target device. The test tool can validate whether actual output on the target device matches expected output known to the test tool. The present invention includes a variety of key components, such as a flexible trap manager for handling unexpected screens that appear during an automated test, and a configuration manager for testing against multiple languages and platform configurations.

86 citations
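
The trap manager mentioned above, which handles unexpected screens that appear during an automated test, can be sketched as a registry of screen matchers paired with recovery actions. The Java below is a hypothetical illustration, not the patented implementation; names such as ScreenTrap and TrapManager are invented for this example.

import java.util.*;

// Hypothetical trap manager: when an unexpected screen appears during an
// automated run, the first matching trap's recovery action is executed.
public class TrapManager {
    public interface ScreenTrap {
        boolean matches(String screenTitle); // does this trap handle the screen?
        void recover();                      // e.g., dismiss a dialog
    }

    private final List<ScreenTrap> traps = new ArrayList<>();

    public void register(ScreenTrap trap) { traps.add(trap); }

    // Returns true if some registered trap handled the unexpected screen.
    public boolean handle(String screenTitle) {
        for (ScreenTrap trap : traps) {
            if (trap.matches(screenTitle)) {
                trap.recover();
                return true;
            }
        }
        return false; // unhandled: the test script should fail with a diagnostic
    }
}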


Patent
29 Jan 2004
TL;DR: In this patent, a test harness is used for testing multiple low-end computing devices simultaneously: different tests are executed simultaneously on different platforms using a single instance of the test harness to which multiple devices are connected.
Abstract: In an arrangement for testing multiple low-end computing devices simultaneously, different tests are executed simultaneously on different platforms using a single instance of a test harness to which multiple devices are connected. A platform-specific API is provided for the independent components of the tests, and platform-specific components are implemented for each test according to the respective platform-specific API. At run time the test harness deploys each test together with a platform-specific execution agent, configured according to the components of the test. The agents execute the test suites and return test results to the test harness.

82 citations
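
The platform-specific API above can be pictured as a small interface that each platform implements, with one agent deployed per connected device. The Java sketch below is an invented illustration of that arrangement; the interface and class names (ExecutionAgent, J2meAgent) are assumptions, not the patent's actual API.

// Hypothetical platform abstraction: the harness drives every device through
// this interface and pairs each test with a platform-specific execution agent.
interface ExecutionAgent {
    void deploy(String testSuiteId); // push the test suite onto the device
    String run();                    // execute and collect raw results
}

class J2meAgent implements ExecutionAgent {
    public void deploy(String testSuiteId) { /* platform-specific download */ }
    public String run() { return "PASS"; }  // placeholder result
}

public class Harness {
    // One harness instance can drive many connected devices in turn.
    String execute(ExecutionAgent agent, String suiteId) {
        agent.deploy(suiteId);
        return agent.run();
    }

    public static void main(String[] args) {
        System.out.println(new Harness().execute(new J2meAgent(), "suite-1"));
    }
}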


Proceedings ArticleDOI
01 Jul 2004
TL;DR: This paper presents a new compile-time analysis that enables a testing methodology for white-box coverage testing of error recovery code in Java web services using compiler-directed fault injection; the analysis incorporates refinements that establish sufficient context sensitivity to ensure relatively precise def-use links.
Abstract: This paper presents a new compile-time analysis that enables a testing methodology for white-box coverage testing of error recovery code (i.e., exception handlers) in Java web services using compiler-directed fault injection. The analysis allows compiler-generated instrumentation to guide the fault injection and to record the recovery code exercised. (An injected fault is experienced as a Java exception.) The analysis (i) identifies the exception-flow 'def-uses' to be tested in this manner, (ii) determines the kind of fault to be requested at a program point, and (iii) finds appropriate locations for code instrumentation. The analysis incorporates refinements that establish sufficient context sensitivity to ensure relatively precise def-use links and to eliminate some spurious def-uses due to demonstrably infeasible control flow. A runtime test harness calculates test coverage of these links using an exception def-catch metric. Experiments with the methodology demonstrate the utility of the increased precision in obtaining good test coverage on a set of moderately sized Java web services benchmarks.

66 citations
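
The exception def-catch metric above can be illustrated with a tiny harness that injects a fault at an instrumented "def" site and records which catch block handles it. The Java sketch below is a simplified stand-in for the paper's compiler-directed instrumentation; all names and the coverage encoding are invented.

import java.io.IOException;
import java.util.*;

// Simplified illustration of exception def-catch coverage: inject a fault at a
// "def" site, observe which catch block ("use") handles it, record the link.
public class ExceptionCoverage {
    static final Set<String> coveredLinks = new HashSet<>();
    static boolean injectAtReadConfig = true; // toggled by the test harness

    static String readConfig() throws IOException {
        if (injectAtReadConfig) throw new IOException("injected fault"); // def site
        return "config";
    }

    public static void main(String[] args) {
        try {
            readConfig();
        } catch (IOException e) {                         // catch (use) site
            coveredLinks.add("readConfig -> main/catch"); // record def-catch link
        }
        System.out.println("covered links: " + coveredLinks);
    }
}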


Proceedings ArticleDOI
28 Sep 2004
TL;DR: This paper studies a path-oriented approach to automatic test data generation, based on the combination of symbolic execution and constraint solving; an implemented toolkit is described with some examples.
Abstract: Automatic test data generation is a challenging task in software engineering research. This paper studies a path-oriented approach to the problem, which is based on the combination of symbolic execution and constraint solving. Methods for representing expressions and path conditions are discussed. An implemented toolkit is described with some examples. The toolkit transforms an input program (possibly embedded with assertions) to an extended finite state machine and then performs depth-first or breadth-first search on it. The goal is to find values for input variables such that a terminal state can be reached. If successful, input test data are found (which might reveal a bug in the program).

58 citations
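
The search over path conditions described above can be shown with a toy example: accumulate the branch predicates along a path and then look for an input that satisfies their conjunction. The Java sketch below brute-forces a small integer domain in place of a real constraint solver; the program under test and all names are invented simplifications.

import java.util.function.IntPredicate;

// Toy path-oriented generation for:  if (x > 10) { if (x % 2 == 0) bug(); }
// The path condition is a conjunction of branch predicates; a real toolkit
// would hand it to a constraint solver, here we brute-force a small domain.
public class PathSearch {
    static Integer solve(IntPredicate pathCondition) {
        for (int x = -100; x <= 100; x++)
            if (pathCondition.test(x)) return x; // satisfying input found
        return null;                             // path infeasible in this domain
    }

    public static void main(String[] args) {
        IntPredicate buggyPath = x -> x > 10 && x % 2 == 0; // target path
        System.out.println("input reaching bug: " + solve(buggyPath)); // e.g. 12
    }
}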


Proceedings ArticleDOI
02 Apr 2004
TL;DR: The approach defined in this paper provides a language front-end to the Jemmy library to eliminate the programming usually needed to use this Java API and introduces a GUI-event test specification language from which an automated test engine is generated.
Abstract: This paper presents a specification-driven approach to test automation for GUI-based JAVA programs as an alternative to the use of capture/replay. The NetBeans Jemmy library provides the basic technology. We introduce a GUI-event test specification language from which an automated test engine is generated. The test engine uses the library and incorporates the generation of GUI events, the capture of event responses, and an oracle to verify successful completion of events. The engine, once generated, can be used to test multiple versions of the application. The approach defined in this paper provides a language front-end to the Jemmy library to eliminate the programming usually needed to use this Java API. Results from applying the specification-driven approach to automate the grading of student programs indicate the feasibility of this approach. The specification-driven approach is equally useful for testing during development and regression testing. The primary benefit is that testers can focus on test case design rather than building test harnesses. This approach supports N-version testing, where each version of the application is intended to satisfy the same specification, and where each version is tested in an identical manner.

53 citations
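
Jemmy drives Swing components through operator classes, and the engine generated from the specification language above would ultimately issue calls of this kind. The snippet below is a hand-written sketch of one generated test step, assuming the application under test is already running in the same JVM; the frame title, field indices, and button label are invented.

import org.netbeans.jemmy.operators.JButtonOperator;
import org.netbeans.jemmy.operators.JFrameOperator;
import org.netbeans.jemmy.operators.JTextFieldOperator;

// Hand-written equivalent of generated engine code: locate the application
// frame, inject a GUI event, and capture the response for the oracle.
public class GeneratedStep {
    public static void main(String[] args) {
        JFrameOperator frame = new JFrameOperator("Calculator"); // invented title
        new JTextFieldOperator(frame, 0).typeText("2+2");        // event injection
        new JButtonOperator(frame, "=").push();                  // trigger action
        String actual = new JTextFieldOperator(frame, 1).getText();
        if (!"4".equals(actual))                                 // oracle check
            throw new AssertionError("oracle mismatch: " + actual);
    }
}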


Proceedings ArticleDOI
16 Feb 2004
TL;DR: The test architecture design for the chiplet-based PNX8550, the most complex Nexperia™ SOC designed to date, is presented; significant savings in test time and TAM wires could be obtained with the help of TR-ARCHITECT, an in-house tool for automated design of SOC test architectures.
Abstract: Philips has adopted a modular manufacturing test strategy for its SOCs that are part of the Nexperia™ home platform. The on-chip infrastructure that enables modular testing consists of wrappers and test access mechanisms (TAMs). Optimizing that infrastructure minimizes the test application time and helps to fit the test data into the ATE vector memory. This paper presents the test architecture design for the chiplet-based PNX8550, the most complex Nexperia™ SOC designed to date. Significant savings in test time and TAM wires could be obtained with the help of TR-ARCHITECT, an in-house tool for automated design of SOC test architectures.

48 citations


Patent
23 Feb 2004
TL;DR: In this patent, a test system having a test executive software system for performing tests on units under test is described, which includes a test kernel component that provides control through a generic interface to the test executive software.
Abstract: The present invention provides for a test system having a test executive software system for performing tests on units under test. The test executive software system includes a test kernel component that provides control through a generic interface to the test executive software. Test components, instrument components, support objects and a test system interface component are communicatively coupled to the test kernel component. The instrument components can be written as a dynamically linked library (DLL) file so that the instrument component can be broken into basic functional modules associated with the particular instrument type. Each instrument component supports operation in both live mode and virtual mode, so that testing can be performed in both normal mode and simulation mode. Virtual mode allows instruments to be inserted and removed without impacting test applications that do not utilize them, thereby reducing tester downtime.
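
The live/virtual instrument modes above map naturally onto an interface with two interchangeable implementations, letting a test run in simulation mode when the physical instrument is absent. The Java sketch below is a hypothetical rendering of that pattern, not the patent's DLL-based design; the Voltmeter example is invented.

// Hypothetical instrument component with live and virtual (simulated) modes,
// so tests run unchanged whether or not the physical instrument is present.
interface Voltmeter {
    double measureVolts();
}

class LiveVoltmeter implements Voltmeter {
    public double measureVolts() { /* talk to the hardware */ return 0.0; }
}

class VirtualVoltmeter implements Voltmeter {
    public double measureVolts() { return 5.0; } // canned simulation value
}

public class TestKernel {
    // The kernel hands tests whichever mode is configured; tests are unaware.
    static Voltmeter instrument(boolean liveMode) {
        return liveMode ? new LiveVoltmeter() : new VirtualVoltmeter();
    }

    public static void main(String[] args) {
        System.out.println("reading: " + instrument(false).measureVolts());
    }
}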

Patent
Fred Nikolac1, Roy Liang Chua1
30 Jan 2004
TL;DR: In this patent, a computer-implemented method for generating a test plan for a communication device under test is described; the test plan defines the test tools, test methodologies, test configurations, and algorithms for executing the test.
Abstract: A computer implemented method for generating a test plan for a communication device under test. The test plan defines test tools, test methodologies, test configurations, and algorithms for executing the test. A user input is received to define the communication device under test. Next, a knowledge database is searched to identify test plan parameters for the communication device under test. Thereafter, the test plan parameters and the user input are analyzed to identify the test plan. A system and computer readable medium having program instructions for generating a test plan for a communication device under test are also described.

Proceedings ArticleDOI
06 Nov 2004
TL;DR: The Inca test harness and reporting framework is a generic system for the automated testing, data collection, verification, and monitoring of service agreements and is being used by the TeraGrid project to verify software installations, monitor service availability, and collect performance data.
Abstract: Virtual organizations (VOs), communities that enable coordinated resource sharing among multiple sites, are becoming more prevalent in the high-performance computing community. In order to promote cross-site resource usability, most VOs prepare service agreements that include a minimum set of common resource functionality, starting with a common software stack and evolving into more complicated service and interoperability agreements. VO service agreements are often difficult to verify and maintain, however, because the sites are dynamic and autonomous. Automated verification of service agreements is critical: manual and user tests are not practical on a large scale. The Inca test harness and reporting framework is a generic system for the automated testing, data collection, verification, and monitoring of service agreements. This paper describes Inca’s architecture, system impact, and performance. Inca is being used by the TeraGrid project to verify software installations, monitor service availability, and collect performance data.

Patent
29 Jan 2004
TL;DR: In this patent, a test execution system has a central repository that contains a management unit, the available test suites, and a single test execution harness, so that all necessary information is obtained from a single central location.
Abstract: A test execution system has a central repository that contains a management unit, available test suites and a single test execution harness. Using the management unit, a system administrator establishes active versions of the various test suites, and their individual configurations. End users install clients of the central repository, using a system-provided installer program. In the client, an execution script is created, which downloads the harness and a local configuration file. Then, when the harness is executed at the client, it loads with all designated test suites already installed, configured and ready for execution. The client always has the most current versions of all test suites. All necessary information is obtained from a single central location.
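
The client bootstrap described above, an execution script that fetches the current harness and a local configuration before launching, can be sketched briefly. The Java below is an invented illustration; the repository URL and file names are placeholders, not part of the patent.

import java.io.InputStream;
import java.net.URI;
import java.nio.file.*;

// Invented client bootstrap: fetch the current harness and local config from
// the central repository, then hand off to the harness launcher.
public class ClientBootstrap {
    static void fetch(String url, String dest) throws Exception {
        try (InputStream in = URI.create(url).toURL().openStream()) {
            Files.copy(in, Path.of(dest), StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws Exception {
        String repo = "http://central.example.com/testrepo"; // placeholder URL
        fetch(repo + "/harness.jar", "harness.jar");  // always the current version
        fetch(repo + "/config.xml", "config.xml");    // site-local configuration
        // Launch step (site-specific): java -jar harness.jar --config config.xml
    }
}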

Journal ArticleDOI
TL;DR: This paper demonstrates the potential use of data mining algorithms for automated modeling of tested systems and applies a state-of-the-art data mining algorithm called Info-Fuzzy Network (IFN) to execution data of a complex mathematical package.
Abstract: In today's software industry, the design of test cases is mostly based on human expertise, while test automation tools are limited to execution of pre-planned tests only. Evaluation of test outcomes is also associated with a considerable effort by human testers who often have imperfect knowledge of the requirements specification. Not surprisingly, this manual approach to software testing results in heavy losses to the world's economy. In this paper, we demonstrate the potential use of data mining algorithms for automated modeling of tested systems. The data mining models can be utilized for recovering system requirements, designing a minimal set of regression tests, and evaluating the correctness of software outputs. To study the feasibility of the proposed approach, we have applied a state-of-the-art data mining algorithm called Info-Fuzzy Network (IFN) to execution data of a complex mathematical package. The IFN method has shown a clear capability to identify faults in the tested program.

Proceedings ArticleDOI
23 May 2004
TL;DR: This paper complements the current research on automated specification-based testing by proposing a scheme that combines the setup process, test execution, and test validation into a single test program for testing the behavior of object-oriented classes.
Abstract: Most research on automated specification-based software testing has focused on the automated generation of test cases. Before a software system can be tested, it must be set up according to the input requirements of the test cases. This setup process is usually performed manually, especially when testing complex data structures and databases. After the system is properly set up, a test execution tool runs the system according to the test cases and pre-recorded test scripts to obtain the outputs, which are evaluated by a test evaluation tool. This paper complements the current research on automated specification-based testing by proposing a scheme that combines the setup process, test execution, and test validation into a single test program for testing the behavior of object-oriented classes. The test program can be generated automatically given the desired test cases and closed specifications of the classes. With closed specifications, every class method is defined in terms of other methods which are, in turn, defined in their own class specifications. The core of the test program generator is a partial-order planner which plans the sequence of instructions required in the test program. The planner is, in turn, implemented as a tree-search algorithm. It makes function calls to the Omega Calculator library, which solves the constraints given in the test cases. A first-cut implementation of the planner has been completed, which is able to handle simple arithmetic and existential quantification in the class specifications. A soundness and completeness proof sketch of the planner is also provided in this paper.

Patent
29 Oct 2004
TL;DR: A test case instance generator uses a permutation engine to generate test matrices from the test models and generates XML documents from the test matrices.
Abstract: A test case generator includes a test model generator for generating test models. A test case instance generator uses a permutation engine to generate test matrices from the test models and generates XML documents from the test matrices. The documents are applied to an XML-based application interface to test the interface.
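
A permutation engine of this kind essentially takes the cross product of the value lists in the test model and serializes each combination as a document. The Java below is an invented miniature of that idea, emitting one small XML test case per row of the matrix; the field names are assumptions.

import java.util.List;

// Invented permutation-engine sketch: cross the value lists of two model
// fields and emit one XML test document per combination.
public class PermutationEngine {
    public static void main(String[] args) {
        List<String> methods = List.of("GET", "POST");
        List<String> payloads = List.of("empty", "maxSize");
        int id = 0;
        for (String m : methods)
            for (String p : payloads)
                System.out.printf(
                    "<testCase id='%d'><method>%s</method><payload>%s</payload></testCase>%n",
                    id++, m, p);
    }
}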

Patent
29 Mar 2004
TL;DR: In this patent, the environments of the test automator and the test analyst are separated, relieving the test results analyst of the need to know the code that was used to test the computer program.
Abstract: Systems and methods for evaluating the testing of a computer program wherein a test automator generates code to test the computer program with respect to predetermined testing criteria. A test results analyst reviews test results generated by applying the code to test the computer program. The environments of a test automator and a test analyst are separated, thereby relieving a test results analyst from being required to have knowledge of the code that was used to test the computer program.

Patent
29 Sep 2004
TL;DR: Separating aspects of automated testing into architectural layers enables automated testing to begin sooner, run faster, and provide more comprehensive coverage; a physical layer provides an object model over the user interface of an application, and a logical layer provides an object model around its functions.
Abstract: Separation of aspects of automated testing into architectural layers enables automated testing to occur sooner and faster and to provide more comprehensive testing. A physical layer provides an object model over the user interface of an application. A logical layer provides an object model around the functions of an application. A test case executor may execute a test case. A data manager may ensure variability in test data. A behavior manager may determine execution details appropriate for a particular test case. A verification manager may perform the verification processing after the test case has executed.
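
The layering above is easy to picture in code: test cases call the logical layer's function-level operations, which delegate to the physical layer's control-level actions. The Java sketch below is a hypothetical illustration of that separation; the class and control names are invented.

// Hypothetical layered UI automation: the logical layer expresses intent
// ("log in"), while the physical layer knows which concrete controls to drive.
class PhysicalLayer {
    void type(String controlId, String text) { /* drive the real UI control */ }
    void click(String controlId)             { /* drive the real UI control */ }
}

class LogicalLayer {
    private final PhysicalLayer ui = new PhysicalLayer();

    // Function-level operation: test cases never mention concrete controls.
    void logIn(String user, String password) {
        ui.type("userField", user);
        ui.type("passwordField", password);
        ui.click("loginButton");
    }
}

public class LayeredTestCase {
    public static void main(String[] args) {
        new LogicalLayer().logIn("alice", "secret"); // the test reads as intent
    }
}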

Patent
02 Dec 2004
TL;DR: In this patent, a verification environment comprising a testbench and a test harness is used to automatically verify the operation of a processor device against the desired operation as specified by the instruction set architecture (ISA).
Abstract: A verification environment, comprising a testbench and a test harness, is used to automatically verify the operation of a processor device, as described by a hardware description language (HDL), against the desired operation as specified by the instruction set architecture (ISA). Also described is a method of generating test instructions for use in such a system, in which the verification environment selects an instruction from the processor specification in accordance with one or more first constraints, then configures and encodes this instruction in accordance with one or more second constraints.
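
The two-stage constrained generation above, selecting an instruction under a first set of constraints and then configuring and encoding it under a second, can be sketched simply. The Java below is an invented toy, not the patent's environment: a three-entry instruction table, a selection filter, and a trivial encoder.

import java.util.*;

// Toy two-stage test instruction generation: filter the instruction set by a
// first constraint, then configure and encode the choice under a second one.
public class InstructionGen {
    record Instr(String mnemonic, int opcode, boolean usesMemory) {}

    static final List<Instr> ISA = List.of(
        new Instr("ADD", 0x01, false),
        new Instr("LDR", 0x02, true),
        new Instr("STR", 0x03, true));

    public static void main(String[] args) {
        Random rnd = new Random(42);
        // First constraint: exercise only memory instructions in this run.
        List<Instr> pool = ISA.stream().filter(Instr::usesMemory).toList();
        Instr chosen = pool.get(rnd.nextInt(pool.size()));
        // Second constraint: register operand limited to r0..r7.
        int reg = rnd.nextInt(8);
        int word = (chosen.opcode() << 8) | reg; // trivial encoding
        System.out.printf("%s r%d -> 0x%04X%n", chosen.mnemonic(), reg, word);
    }
}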

Proceedings ArticleDOI
02 Nov 2004
TL;DR: This paper discusses the difficulties and opportunities encountered in developing Tefkat, whose engine was developed concurrently with a unit test suite, and draws implications for the broader problem of testing in a model-driven environment and of using models for testing.
Abstract: Tefkat is an implementation of a rule- and pattern-based engine for the transformation of models defined using the Object Management Group's (OMG) Model-Driven Architecture (MDA). The process for the development of the engine included the concurrent development of a unit test suite for the engine. The test suite is constructed as a number of models, whose elements comprise the test cases, and which are passed to a test harness for processing. The paper discusses the difficulties and opportunities encountered in the process, and draws implications for the broader problem of testing in a model-driven environment, and of using models for testing.

Patent
29 Jan 2004
TL;DR: In this patent, a mechanism is described for transforming different test suites, written for different test harnesses, into a common XML-type format that can be read by one test harness.
Abstract: A mechanism has been developed for transforming different test suites, written for different test harnesses, into a common XML-type format that can be read by one test harness. Thus differences in the structure of the test suites are transparent to the test harness. To implement this mechanism, a component has been developed that parses the XML descriptors and provides an API to the test harness.
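
The descriptor-parsing component above can be sketched with the JDK's built-in DOM parser: read the common-format XML and surface each test's attributes to the harness. The snippet below is an invented illustration; the descriptor schema (suite/test elements with name and cmd attributes) is an assumption.

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Parse an invented common-format test descriptor and surface each test's
// name and command to the harness (printed here in place of a real API).
public class DescriptorReader {
    public static void main(String[] args) throws Exception {
        String xml = "<suite><test name='smoke' cmd='run.sh'/>"
                   + "<test name='perf' cmd='bench.sh'/></suite>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList tests = doc.getElementsByTagName("test");
        for (int i = 0; i < tests.getLength(); i++) {
            Element t = (Element) tests.item(i);
            System.out.println(t.getAttribute("name") + " -> " + t.getAttribute("cmd"));
        }
    }
}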

Book
19 Feb 2004
Effective Software Test Automation: Developing an Automated Software Testing Tool (the original TL;DR and abstract for this entry were garbled in extraction; only the title is recoverable).

Patent
01 Oct 2004
TL;DR: In this patent, a test engineer writes source code modeling a software package's intended behavior; the source code is compiled into a model, and the model is automatically analyzed to generate numerous test scripts that exercise the behavior of the software package. When the tests are run, their results are compared against the intended behaviors, and discrepancies are used to correct the software package (or the source-code model).
Abstract: Disclosed is a method for using source code to create the models used in model-based testing. After exploring the intended behavior of a software package, a test engineer writes source code to model that intended behavior. The source code is compiled into a model, and the model is automatically analyzed to generate numerous test scripts that can exercise the behavior of the software package. When the tests are run, their results are compared against intended behaviors, and discrepancies are used to correct the software package (or to correct the source-code model if it was prepared incorrectly). The model coding, test generation, test execution, and comparison steps are repeated as often as necessary to thoroughly test the software package. In some embodiments, the test scripts generated by the model are written in XML (Extensible Markup Language), allowing the easy integration of the test scripts with a number of XML-based tools.
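
Model-based generation of the kind above can be pictured as walking a coded behavior model and emitting one script per bounded path. The Java sketch below is an invented miniature, not the patented method: a two-state login model explored to depth three, with each action sequence written out as an XML test script.

import java.util.*;

// Toy model-based test generation: explore a coded behavioral model (states
// and actions) and emit each bounded action sequence as an XML test script.
public class ModelExplorer {
    static final Map<String, Map<String, String>> MODEL = Map.of(
        "LoggedOut", Map.of("login", "LoggedIn"),
        "LoggedIn", Map.of("logout", "LoggedOut", "browse", "LoggedIn"));

    static void explore(String state, List<String> path, int depth) {
        if (depth == 0) {
            System.out.println("<script>" + String.join(",", path) + "</script>");
            return;
        }
        MODEL.get(state).forEach((action, next) -> {
            List<String> extended = new ArrayList<>(path);
            extended.add(action);
            explore(next, extended, depth - 1);
        });
    }

    public static void main(String[] args) {
        explore("LoggedOut", new ArrayList<>(), 3); // emit all length-3 scripts
    }
}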

Patent
Karthik Kalyanaraman1
15 Sep 2004
TL;DR: In this patent, a data-driven test pattern class library is proposed that generates concrete prioritized test cases dynamically, using the code document object model (code DOM), for substantially each data record by using a class decorated with known custom attributes.
Abstract: Systems and methods are provided for a test harness that allows effective control over both the data records and the test methods used in a software test run. Data records and/or test methods can be associated with a priority, such that a level of priority may be selected for a particular test run, and substantially only data records and test methods of the selected priority are used in the test. The invention may be implemented as a data-driven test pattern class library, which may generate concrete prioritized test cases dynamically, using the code document object model (code DOM), for substantially each data record by using a class decorated with known custom attributes. The invention can be used to help testers implement data-driven tests effectively and in an easily maintainable fashion with minimal code.
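
Custom attributes in that design play the role that annotations play in Java: each test method carries a priority, and the harness reflects over the test class to run only the methods at the selected level. The sketch below is a hypothetical Java analogue of the patented mechanism, not the patent's code-DOM implementation.

import java.lang.annotation.*;
import java.lang.reflect.Method;

// Hypothetical Java analogue of priority-decorated test methods: the harness
// reflects over a test class and runs only methods at the selected priority.
public class PriorityHarness {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Priority { int value(); }

    public static class SampleTests {
        @Priority(1) public void criticalPath() { System.out.println("critical"); }
        @Priority(2) public void edgeCases()    { System.out.println("edge"); }
    }

    public static void main(String[] args) throws Exception {
        int selected = 1; // priority level chosen for this test run
        SampleTests tests = new SampleTests();
        for (Method m : SampleTests.class.getDeclaredMethods()) {
            Priority p = m.getAnnotation(Priority.class);
            if (p != null && p.value() == selected) m.invoke(tests);
        }
    }
}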

Patent
21 Jun 2004
TL;DR: In this patent, a test driver controls delivery of a computer-based test to one or more test candidates and controls caching of test components during delivery of the test.
Abstract: A system and method for computer-based testing includes a test driver that controls delivery of a computer-based test to one or more test candidates and that controls caching of test components during delivery of the test. The system includes various monitoring components, including monitoring of candidate progress, candidate performance, network bandwidth, network latency and server response, during delivery of the test and adjusting the source of the test components or the volume of the test components being cached for delivery of the test. Based upon this monitoring of the system, for example, if network communication failure is detected, the test candidate is able to continue computer-based testing while connectivity is being reestablished.

Proceedings ArticleDOI
23 May 2004
TL;DR: The importance of considering the characteristics of the compression method when performing TAM design is illustrated, and it is shown how an existing TAM design method can be enhanced toward a compression-driven solution.
Abstract: Driven by the industrial need for low-cost test methodologies, the academic community and the industry alike have put forth a number of efficient test data compression (TDC) methods. In addition, the need for core-based System-on-a-Chip (SoC) test led to considerable research in test access mechanism (TAM) design. While most previous work has considered TAM design and TDC independently, this work analyzes the interrelations between the two, outlining that a minimum test time solution obtained using TAM design will not necessarily correspond to a minimum test time solution when compression is applied. This is due to the dependency of some TDC methods on test bus width and care bit density, both of which are related to test time, and hence to TAM design. Therefore, this paper illustrates the importance of considering the characteristics of the compression method when performing TAM design, and it also shows how an existing TAM design method can be enhanced toward a compression-driven solution.

Journal Article
TL;DR: Writing specifications using the Java Modeling Language (JML) has long been accepted as a practical approach to increasing the correctness and quality of Java programs, but the current JML testing system can only generate skeletons of the test fixture and test case classes.
Abstract: Writing specifications using the Java Modeling Language (JML) has long been accepted as a practical approach to increasing the correctness and quality of Java programs. However, the current JML testing system (the JML and JUnit framework) can only generate skeletons of the test fixture and test case classes; writing code to generate test cases, especially those with complicated data structures, remains a labor-intensive job when testing programs annotated with JML specifications. This paper presents JMLAutoTest, a novel framework for automated testing of Java programs annotated with JML specifications. First, given a method, three test classes (a skeleton of the test client class, a JUnit test class, and a test case class) can be generated. Second, JMLAutoTest can generate all non-isomorphic test cases that satisfy the requirements defined in the test client class. Third, JMLAutoTest avoids most meaningless cases by running the test in a double-phase way, which saves much of the time otherwise spent exploring meaningless cases; this method can be adopted in testing not only Java programs but also programs written in other languages. Finally, JMLAutoTest executes the method and uses the JML runtime assertion checker to decide whether its postcondition is violated, that is, whether the method works correctly.
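
The double-phase idea above, discarding inputs that violate a method's precondition before the expensive execution phase, can be shown in miniature. The Java below is an invented illustration of that filtering, not JMLAutoTest itself; a real run would rely on JML's runtime assertion checker rather than these hand-written checks.

import java.util.List;
import java.util.function.IntPredicate;

// Invented two-phase testing sketch: phase one filters candidate inputs by the
// precondition; phase two executes the method and checks the postcondition.
public class TwoPhaseTest {
    static int isqrt(int n) { return (int) Math.sqrt(n); } // method under test

    public static void main(String[] args) {
        IntPredicate pre = n -> n >= 0;                     // JML-style requires
        List<Integer> candidates = List.of(-4, -1, 0, 9, 10);

        for (int n : candidates) {
            if (!pre.test(n)) continue;                     // phase 1: skip meaningless
            int r = isqrt(n);                               // phase 2: execute
            boolean post = r * r <= n && (r + 1) * (r + 1) > n; // JML-style ensures
            System.out.println("n=" + n + " ok=" + post);
        }
    }
}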

Patent
08 Jun 2004
TL;DR: In this article, the authors describe techniques to automatically populate test algorithm data in creating the test program. But they do not specify a set of test language abstractions, only the test structure, header, and test algorithm catalogs.
Abstract: Automating techniques provide a way to create efficient test programs for characterizing semiconductor devices, such as those on a silicon die sample. Typically, test program creation is a drawn out process involving data entry for every test to be run as part of the test program. The described techniques improve test algorithm selection and automatically populate the test algorithm data in creating the test program. The automatic population may occur by accessing test structure, header, and test algorithm catalogs. The test structure catalog contains physical data for the test program, while the header catalog contains global parameter values. The test algorithm catalog has all of the various test algorithms that may be run in a given test, where these test algorithms may be in a template form and specific to any number of different test language abstractions. After test program creation, a validation process is executed to determine if the test program data is valid. Invalid data may be flagged, in an example. Once validated, techniques are described for converting the validated test program into an executable form, by formatting the various test algorithm data in the test program into a form compatible with the applicable test language abstraction selected by the user or the tester.

Patent
31 Mar 2004
TL;DR: In this patent, the authors present a system and method for providing a generic user interface testing framework that, together with the systems and methods embodying the technology, maps the native test development language and environment into arbitrary languages and environments.
Abstract: A system and method for providing a generic user interface testing framework. The test framework, together with the systems and methods embodying the technology, maps the native test development language and environment into arbitrary languages and environments. The generic UI test framework insulates test developers from learning the tool-specific scripting language and environment. In accordance with one embodiment, the UI test framework provides a set of function interfaces and implementations that cover all generic UI testing operations. New users need only map their testing logic to the supported library interface in order to use the test tool, or another test tool, without having to learn the details of the underlying test-tool-specific scripting language.