
Showing papers on "Integration testing published in 2007"


Proceedings ArticleDOI
20 Oct 2007
TL;DR: RANDOOP generates unit tests for Java code using feedback-directed random test generation, and provides an annotation-based interface for specifying configuration parameters that affect RANDOOP's behavior and output.
Abstract: RANDOOP for Java generates unit tests for Java code using feedback-directed random test generation. Below we describe RANDOOP's input, output, and test generation algorithm. We also give an overview of RANDOOP's annotation-based interface for specifying configuration parameters that affect RANDOOP's behavior and output.
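The feedback-directed loop can be sketched compactly. The Python below is a toy illustration only (RANDOOP itself targets Java): the `BoundedStack` class and all names are invented, and the real tool also checks API contracts rather than only uncaught exceptions.

```python
import random

# Hypothetical toy "class under test": popping an empty stack raises,
# the kind of defect feedback-directed random testing surfaces.
class BoundedStack:
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop()   # raises IndexError when empty

def feedback_directed_generation(n_sequences=200, seed=0):
    """Grow method sequences at random; execution feedback decides whether
    a sequence joins the reusable pool or the error-revealing set."""
    rng = random.Random(seed)
    pool = [[]]                       # executable prefixes worth extending
    error_revealing = []
    ops = [("push", lambda s: s.push(rng.randint(0, 9))),
           ("pop", lambda s: s.pop())]
    for _ in range(n_sequences):
        prefix = rng.choice(pool)
        name, op = rng.choice(ops)
        candidate = prefix + [(name, op)]
        stack = BoundedStack()
        try:
            for _, step in candidate:
                step(stack)
            pool.append(candidate)             # ran normally: reuse it
        except IndexError:
            error_revealing.append(candidate)  # feedback: save as failing test
    return pool, error_revealing
```

Sequences that execute normally are fed back into the pool for extension; sequences that raise become candidate error-revealing tests.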

438 citations


Proceedings ArticleDOI
05 Nov 2007
TL;DR: This tutorial describes concolic testing and some of its recent extensions, and reports on using the tools to find bugs in several real-world software systems, including SGLIB, a popular C data structure library used in a commercial tool, and Sun Microsystems' JDK 1.4 collection framework.
Abstract: Concolic testing automates test input generation by combining the concrete and symbolic (concolic) execution of the code under test. Traditional test input generation techniques use either (1) concrete execution or (2) symbolic execution that builds constraints and is followed by a generation of concrete test inputs from these constraints. In contrast, concolic testing tightly couples both concrete and symbolic executions: they run simultaneously, and each gets feedback from the other. We have implemented concolic testing in tools for testing both C and Java programs. We have used the tools to find bugs in several real-world software systems including SGLIB, a popular C data structure library used in a commercial tool, a third-party implementation of the Needham-Schroeder protocol and the TMN protocol, the scheduler of Honeywell's DEOS real-time operating system, and Sun Microsystems' JDK 1.4 collection framework. In this tutorial, we will describe concolic testing and some of its recent extensions.
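The interplay of concrete and symbolic execution can be sketched as follows. This is a deliberately tiny Python model, not one of the paper's tools: the instrumented program, its two predicates, and the hand-written `solve` stub are all invented, and a real concolic engine would conjoin the path-prefix constraints before negating a branch rather than solving each predicate in isolation.

```python
def record(trace, pred, outcome):
    """Instrumentation hook: log each branch predicate and its outcome."""
    trace.append((pred, outcome))
    return outcome

def program(x, trace):
    # Toy code under test with two nested branches.
    if record(trace, "x > 10", x > 10):
        if record(trace, "x == 42", x == 42):
            return "bug"
        return "big"
    return "small"

def solve(pred, want):
    # Stand-in for the constraint solver: the two toy predicates are
    # inverted by hand (invented values, not a general procedure).
    if pred == "x > 10":
        return 11 if want else 0
    return 42 if want else 11        # pred == "x == 42"

def concolic_explore(start=0, budget=10):
    """Concolic loop: run concretely, record the path condition, then
    negate branches along it to steer subsequent runs."""
    worklist, seen_paths, results = [start], set(), {}
    while worklist and budget:
        budget -= 1
        x = worklist.pop()
        trace = []
        results[x] = program(x, trace)
        path = tuple(trace)
        if path in seen_paths:
            continue
        seen_paths.add(path)
        for pred, outcome in trace:      # negate each recorded branch
            worklist.append(solve(pred, not outcome))
    return results
```

Starting from the single input `0`, the loop discovers inputs that exercise all three paths, including the one returning `"bug"`.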

185 citations


Journal ArticleDOI
TL;DR: This work presents a new approach for test suite reduction that uses additional coverage information to selectively keep, in the reduced suites, some test cases that are redundant with respect to the criteria used for minimization, with the goal of improving the FDE retention of the reduced suites.
Abstract: Software testing is a critical part of software development. As new test cases are generated over time due to software modifications, test suite sizes may grow significantly. Because of time and resource constraints for testing, test suite minimization techniques are needed to remove those test cases from a suite that, due to code modifications over time, have become redundant with respect to the coverage of testing requirements for which they were generated. Prior work has shown that test suite minimization with respect to a given testing criterion can significantly diminish the fault detection effectiveness (FDE) of suites. We present a new approach for test suite reduction that attempts to use additional coverage information of test cases to selectively keep some additional test cases in the reduced suites that are redundant with respect to the testing criteria used for suite minimization, with the goal of improving the FDE retention of the reduced suites. We implemented our approach by modifying an existing heuristic for test suite minimization. Our experiments show that our approach can significantly improve the FDE of reduced test suites without severely affecting the extent of suite size reduction.
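The idea of selective redundancy can be sketched as a small extension of a greedy minimizer. This Python sketch is illustrative only: the function names and the toy coverage relations are invented, and the paper modifies an existing, more sophisticated minimization heuristic rather than the plain greedy pass shown here.

```python
def reduce_with_secondary(tests, primary, secondary):
    """Greedy minimization on the primary criterion, then selective
    redundancy: re-admit dropped tests that add secondary coverage."""
    reduced, covered = [], set()
    remaining = set().union(*primary.values())
    # Classic greedy pass over the primary requirements.
    while covered != remaining:
        best = max(tests, key=lambda t: len(primary[t] - covered))
        if not primary[best] - covered:
            break                      # nothing left to gain
        reduced.append(best)
        covered |= primary[best]
    # A dropped test re-enters the suite if it covers secondary
    # requirements that no kept test covers.
    sec_covered = set().union(*(secondary[t] for t in reduced)) if reduced else set()
    for t in tests:
        if t not in reduced and secondary[t] - sec_covered:
            reduced.append(t)          # redundant for primary, kept for FDE
            sec_covered |= secondary[t]
    return reduced
```

A test that is redundant on both criteria is still dropped, so suite size reduction is only partially sacrificed.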

166 citations


Proceedings ArticleDOI
20 May 2007
TL;DR: A method that uses the model transformation technology of MDA to generate unit test cases from a platform-independent model of the system, using model-to-model transformations.
Abstract: In this paper, we demonstrate a method that uses the model transformation technology of MDA to generate unit test cases from a platform-independent model of the system. The method we propose is based on sequence diagrams. First we model the sequence diagram and then this model is automatically transformed into a general unit test case model (an xUnit model which is independent of a particular unit testing framework), using model-to-model transformations. Then model-to-text transformations are applied on the xUnit model to generate platform-specific (JUnit, SUnit etc.) test cases that are concrete and executable. We have implemented the transformations in a prototype tool based on the Tefkat transformation tool and MOFScript. The paper gives details of the tool and the transformations that we have developed. We have applied the method to a small example (ATM simulation).

119 citations


Journal ArticleDOI
TL;DR: The results show that the proposed technique effectively detects all the seeded integration faults when complying with the most demanding adequacy criterion and still achieves reasonably good results for less expensive adequacy criteria.
Abstract: Correct functioning of object-oriented software depends upon the successful integration of classes. While individual classes may function correctly, several new faults can arise when these classes are integrated together. In this paper, we present a technique to enhance testing of interactions among modal classes. The technique combines UML collaboration diagrams and statecharts to automatically generate an intermediate test model, called SCOTEM (State COllaboration TEst Model). The SCOTEM is then used to generate valid test paths. We also define various coverage criteria to generate test paths from the SCOTEM model. In order to assess our technique, we have developed a tool and applied it to a case study to investigate its fault detection capability. The results show that the proposed technique effectively detects all the seeded integration faults when complying with the most demanding adequacy criterion and still achieves reasonably good results for less expensive adequacy criteria.

105 citations


Proceedings ArticleDOI
09 Jul 2007
TL;DR: An intensive experimental analysis of the efficiency of random testing on an existing industrial-grade code base, using a large-scale cluster of computers, provides insights into the effectiveness of random testing and a number of lessons for testing researchers and practitioners.
Abstract: Progress in testing requires that we evaluate the effectiveness of testing strategies on the basis of hard experimental evidence, not just intuition or a priori arguments. Random testing, the use of randomly generated test data, is an example of a strategy that the literature often deprecates because of such preconceptions. This view is worth revisiting since random testing otherwise offers several attractive properties: simplicity of implementation, speed of execution, absence of human bias. We performed an intensive experimental analysis of the efficiency of random testing on an existing industrial-grade code base. The use of a large-scale cluster of computers, for a total of 1500 hours of CPU time, allowed a fine-grain analysis of the individual effect of the various parameters involved in the random testing strategy, such as the choice of seed for a random number generator. The results provide insights into the effectiveness of random testing and a number of lessons for testing researchers and practitioners.

86 citations


Book
01 Jul 2007
TL;DR: A practical textbook on software testing, covering the testing environment, automated testing tools, the analysis and interpretation of test results, and future directions in testing.
Abstract: Preface. Acknowledgments. 1. Overview of Testing. 2. The Software Development Lifecycle. 3. Overview of Structured Testing. 4. Testing Strategy. 5. Test Planning. 6. Static Testing. 7. Functional Testing. 8. Structural (Non-functional) Testing. 9. Performance Testing. 10. The Testing Environment. 11. Automated Testing Tools. 12. Analyzing and Interpreting Test Results. 13. A Full Software Development Lifecycle Testing Project. 14. Testing Complex Applications. 15. Future Directions in Testing. References. Index.

82 citations


Journal ArticleDOI
TL;DR: A comprehensive overview of various issues that can arise in component testing by the component user at the stage of its integration within the target system is discussed.
Abstract: Component-based development has emerged as a system engineering approach that promises rapid software development with fewer resources. Yet, improved reuse and reduced cost benefits from software components can only be achieved in practice if the components provide reliable services, thereby rendering component analysis and testing a key activity. This paper discusses various issues that can arise in component testing by the component user at the stage of its integration within the target system. The crucial problem is the lack of information for analysis and testing of externally developed components. Several testing techniques for component integration have recently been proposed. These techniques are surveyed here and classified according to a proposed set of relevant attributes. The paper thus provides a comprehensive overview which can be useful as introductory reading for newcomers in this research field, as well as to stimulate further investigation. Copyright © 2006 John Wiley & Sons, Ltd.

69 citations


Proceedings ArticleDOI
03 Jan 2007
TL;DR: AutoTest is a testing tool that provides a "best of both worlds" strategy: it integrates developers' test cases into an automated process of systematic contract-driven testing, treating the two types of tests in a unified fashion.
Abstract: Software can be tested either manually or automatically. The two approaches are complementary: automated testing can perform a large number of tests in little time, whereas manual testing uses the knowledge of the testing engineer to target testing to the parts of the system that are assumed to be more error-prone. Despite this complementarity, tools for manual and automatic testing are usually different, leading to decreased productivity and reliability of the testing process. AutoTest is a testing tool that provides a "best of both worlds" strategy: it integrates developers' test cases into an automated process of systematic contract-driven testing. This allows it to combine the benefits of both approaches while keeping a simple interface, and to treat the two types of tests in a unified fashion: evaluation of results is the same, coverage measures are added up, and both types of tests can be saved in the same format.
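Contract-driven testing of this kind can be sketched with pre- and postconditions serving as the oracle for both manual and random tests. The decorator, the names, and the `int_sqrt` example below are invented for illustration and are not AutoTest's API (AutoTest itself works with Eiffel contracts).

```python
import random

def contract(pre, post):
    """Attach a precondition and a postcondition oracle to a function
    (invented decorator, not AutoTest's actual interface)."""
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated by caller"
            result = f(*args)
            assert post(result, *args), "postcondition violated: defect"
            return result
        checked.pre = pre
        return checked
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: r * r <= x < (r + 1) ** 2)
def int_sqrt(x):
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

def run_unified(f, manual_cases, n_random=100, seed=1):
    """Manual and generated tests are treated alike: both are filtered by
    the precondition and judged by the postcondition oracle."""
    rng = random.Random(seed)
    inputs = list(manual_cases) + [rng.randint(-5, 100) for _ in range(n_random)]
    executed = 0
    for x in inputs:
        if not f.pre(x):        # contract-invalid input: skip, don't fail
            continue
        f(x)                    # the postcondition assert is the oracle
        executed += 1
    return executed
```

The same evaluation path handles a developer-supplied case like `int_sqrt(99)` and a randomly drawn one, which is the unification the paper argues for.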

65 citations


Proceedings ArticleDOI
03 Sep 2007
TL;DR: A new technique, differential testing, is presented that alleviates the test repair problem and detects more changes than regression testing alone; it also makes automated test generators more useful by abstracting away the interpretation and management of large volumes of tests, focusing instead on the changes between test suites.
Abstract: Regression testing, as it's commonly practiced, is unsound due to inconsistent test repair and test addition. This paper presents a new technique, differential testing, that alleviates the test repair problem and detects more changes than regression testing alone. Differential testing works by creating test suites for both the original system and the modified system and contrasting both versions of the system with these two suites. Differential testing is made possible by recent advances in automated unit test generation. Furthermore, it makes automated test generators more useful because it abstracts away the interpretation and management of large volumes of tests by focusing on the changes between test suites. In our preliminary empirical study of 3 subjects, differential testing discovered 21%, 34%, and 21% more behavior changes than regression testing alone.
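The core contrast operation can be sketched as follows. The helper names and the toy program versions are invented; a real implementation would plug an automated unit test generator into `gen_inputs` rather than a fixed range.

```python
def behavior(f, inputs):
    """Record observable behavior: return value or raised exception type."""
    out = {}
    for x in inputs:
        try:
            out[x] = ("ok", f(x))
        except Exception as e:
            out[x] = ("raises", type(e).__name__)
    return out

def differential_test(old_version, new_version, gen_inputs):
    """Generate inputs against BOTH versions and contrast their behavior
    on the union; disagreements are the detected behavior changes."""
    inputs = set(gen_inputs(old_version)) | set(gen_inputs(new_version))
    b_old = behavior(old_version, inputs)
    b_new = behavior(new_version, inputs)
    return {x for x in inputs if b_old[x] != b_new[x]}
```

Because inputs are derived from both versions, changes that a suite built against only one version would miss can still surface.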

61 citations


Journal ArticleDOI
TL;DR: Knowing the prevalence of reproductive toxicity among industrial chemicals is essential for anticipating the expected rate of false classifications during toxicological testing and for implementing appropriate measures to avoid them; such knowledge will lead to more science-based testing strategies that integrate alternative methods without compromising consumer protection.
Abstract: Large-scale toxicological testing programmes which are currently ongoing, such as the new European chemical legislation REACH, require the development of new integrated testing strategies rather than the application of traditional testing schemes to thousands of chemicals. The current practice of requiring in vivo testing for every possible adverse effect endangers the success of these programmes due to (i) limited testing facilities and insufficient scientific/technical capacity for reproductive toxicity; (ii) an unacceptable number of laboratory animals involved; and (iii) an intolerable number of chemicals classified as false positives. A key aspect of the implementation of new testing strategies is the determination of the prevalence of reproductive toxicity in the universe of industrial chemicals. Prevalences are relevant in order to anticipate the expected rate of false classifications during toxicological testing and to implement appropriate measures for their avoidance. Furthermore, a detailed understanding of the subendpoints affected by reproductive toxicants and the underlying mechanisms will lead to more science-based testing strategies integrating alternative methods without compromising the protection of consumers.

Patent
08 May 2007
TL;DR: In this article, a dynamic vehicle tester provides integrated testing and simulation for determining characteristics of a unit under test; changes occurring on the unit under test are dynamically obtained, considered, and incorporated when generating test conditions to be applied to the unit.
Abstract: A dynamic vehicle tester providing integrated testing and simulation for determining characteristics of a unit under test. Changes occurring on the unit under test are dynamically obtained, considered, and incorporated in generating test conditions to be applied to the unit under test. Additionally, durability testing is conducted using techniques that compare a physical specimen under test to a real-time model of that specimen.

Proceedings ArticleDOI
11 Oct 2007
TL;DR: This paper discusses how a testing method called Metamorphic Testing can be used to construct statistical hypothesis tests without knowing exact theoretical characteristics or having a reference implementation.
Abstract: Testing software with random output is a challenging task because the output corresponding to a given input differs from execution to execution. Therefore, the usual approaches to software testing are not applicable to randomized software; instead, statistical hypothesis tests have been proposed for testing such applications. To apply these statistical hypothesis tests, either knowledge about the theoretical values of statistical characteristics of the program output (e.g. the mean) or a reference implementation (e.g. a legacy system) is required. But often, neither is available. This paper discusses how a testing method called Metamorphic Testing can be used to construct statistical hypothesis tests without knowing exact theoretical characteristics or having a reference implementation. For that purpose, two or more independent output sequences are generated by the implementation under test (IUT). These sequences are then compared according to the metamorphic relation using statistical hypothesis tests.
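Under the simplest metamorphic relation for randomized software (two independent runs should be identically distributed), the check can be sketched as a two-sample comparison of summary statistics. Everything below is an invented illustration: a real application would use a proper hypothesis test (e.g. Kolmogorov-Smirnov or a t-test) instead of this crude z-style threshold.

```python
import random
import statistics

def iut(seed, n=2000):
    """Implementation under test (invented): returns one random output
    sequence; independent runs should be identically distributed."""
    rng = random.Random(seed)
    return [rng.gauss(5.0, 1.0) for _ in range(n)]

def metamorphic_check(sample_a, sample_b, z_crit=4.0):
    """Compare two independent output sequences with a crude two-sample
    z-style statistic on their means; True means no violation detected."""
    na, nb = len(sample_a), len(sample_b)
    se = (statistics.variance(sample_a) / na
          + statistics.variance(sample_b) / nb) ** 0.5
    z = (statistics.mean(sample_a) - statistics.mean(sample_b)) / se
    return abs(z) < z_crit
```

No theoretical mean and no reference implementation are needed: the implementation is compared against another run of itself, which is the point of the metamorphic construction.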

Journal ArticleDOI
TL;DR: A new strategy of adaptive software testing in the context of software cybernetics is proposed, intended to circumvent the drawbacks of the assumption that all remaining defects are equally detectable at constant rate and to reduce the underlying computational complexity of on-line parameter estimations.

Proceedings ArticleDOI
01 Oct 2007
TL;DR: This paper will present a generic end-to-end solution that mitigates the challenges of a software-in-the-loop configuration to bring it to its full potential and will be exemplified by its use in two government funded projects.
Abstract: The low fidelity and speed of traditional simulations have become unacceptable for the complex large-scale networks of today. In this paper we propose alternative techniques and focus on a software-in-the-loop implementation. Software-in-the-loop provides us with the following two-fold advantages: (a) it helps solve traditional simulation problems of model validity and (b) it can be used in the design phase as well as in the testing phase of a project. However, software-in-the-loop brings its own set of challenges, as we discuss in this paper. We will present a generic end-to-end solution that mitigates the challenges of a software-in-the-loop configuration to bring it to its full potential. The success of our solution will be exemplified by its use in two government-funded projects where it was successfully used to analyze scalability performance in one case and to perform unit and integration testing in a second case. The focus of this paper will be on the use of software-in-the-loop versus traditional simulations, discussing the challenges, issues and decision processes involved with the use of software-in-the-loop.

Journal ArticleDOI
TL;DR: This paper presents an approach that addresses this problem by making the system verification process more component-oriented and partially automates the testing process, thereby reducing the level of effort needed to establish the acceptability of the system.
Abstract: Today component- and service-based technologies play a central role in many aspects of enterprise computing. However, although the technologies used to define, implement, and assemble components have improved significantly over recent years, techniques for verifying systems created from them have changed very little. The correctness and reliability of component-based systems are still usually checked using the traditional testing techniques that were in use before components and services became widespread, and the associated costs and overheads still remain high. This paper presents an approach that addresses this problem by making the system verification process more component-oriented. Based on the notion of built-in tests (BIT)--tests that are packaged and distributed with prefabricated, off-the-shelf components--the approach partially automates the testing process, thereby reducing the level of effort needed to establish the acceptability of the system. The approach consists of a method that defines how components should be written to support and make use of run-time tests, and a resource-aware infrastructure that arranges for tests to be executed when they have a minimal impact on the delivery of system services. After providing an introduction to the principles behind component-based verification and explaining the main features of the approach and its supporting infrastructure, we show by means of a case study how it can reduce system verification effort.

Journal ArticleDOI
TL;DR: The in-house web-based application that is designed and implemented to support customized version of OATS, a systematic, statistical way of testing pair-wise interactions, and the experience in piloting and getting this method used in projects are reported.
Abstract: Combinatorial testing methods address the generation of test cases for problems involving multiple parameters and combinations. The Orthogonal Array Based Testing Strategy (OATS) is one such combinatorial testing method: a systematic, statistical way of testing pair-wise interactions. It provides representative (uniformly distributed) coverage of all variable pair combinations. This makes the technique particularly useful for testing software wherever there is combinatorial explosion: (a) in system testing for handling feature interactions; (b) in integration testing of components; and (c) in testing products with a large number of configuration possibilities. One of the fundamental assumptions behind the OATS approach is that a subset covering all pair-wise combinations will be more effective than a randomly selected subset. OATS provides a means to select a minimal test set that guarantees testing the pair-wise combinations of all the selected variables. Covering pair-wise combinations has been reported to be very effective in the literature; successful use of this technique, with 50% effort saving and testing improved by a factor of 2.6, has been reported. In this paper, we report on the in-house web-based application that we designed and implemented to support a customized version of OATS, and our experience in piloting and getting this method used in projects. In the in-house tool we have introduced a number of additional features that help in the generation and post-processing of test cases. We have also designed a supporting process for using this method, and we discuss the steps in this process in the paper. We share details of its application in feature testing of a mobile phone application. The method has also been successfully used in designing feature-interaction test cases and in augmenting the regression suite to increase coverage.
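The pair-wise coverage goal behind OATS can be illustrated with a greedy generator, a common stand-in for true orthogonal arrays (the function name and example parameters below are invented). For three parameters with two values each, it produces a suite smaller than the exhaustive eight combinations while still covering every value pair.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy pair-wise generation: repeatedly pick the full combination
    that covers the most not-yet-covered parameter-value pairs."""
    names = list(params)
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((i, va), (j, vb)))
    suite = []
    while uncovered:
        best, best_pairs = None, set()
        # Exhaustive scan is fine for small examples; real tools are smarter.
        for combo in product(*(params[n] for n in names)):
            pairs = {((i, combo[i]), (j, combo[j]))
                     for i, j in combinations(range(len(names)), 2)}
            new = pairs & uncovered
            if len(new) > len(best_pairs):
                best, best_pairs = combo, new
        suite.append(dict(zip(names, best)))
        uncovered -= best_pairs
    return suite
```

The greedy loop terminates once every pair is covered, so coverage is guaranteed by construction; only the suite size varies with the selection order.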

Book ChapterDOI
24 Mar 2007
TL;DR: This work proposes a model-based, automated integration test technique that can be applied during domain engineering and reduces the effort for creating placeholders by minimizing the number of placeholders needed to execute the integration test case scenarios.
Abstract: The development process in software product line engineering is divided into domain engineering and application engineering. As a consequence of this division, tests should be performed in both processes. However, existing testing techniques for single systems cannot be applied during domain engineering, because of the variability in the domain artifacts. Existing software product line test techniques only cover unit and system tests. Our contribution is a model-based, automated integration test technique that can be applied during domain engineering. For generating integration test case scenarios, the technique abstracts from variability and assumes that placeholders are created for variability. The generated scenarios cover all interactions between the integrated components, which are specified in a test model. Additionally, the technique reduces the effort for creating placeholders by minimizing the number of placeholders needed to execute the integration test case scenarios. We have experimentally measured the performance of the technique and the potential reduction of placeholders.

Book
01 Jan 2007
TL;DR: By the end of this book, you'll know more about the nuts and bolts of testing than most testers learn in an entire career, and you'll be ready to put those ideas into action on your next test project.
Abstract: A hands-on guide to testing techniques that deliver reliable software and systems Testing even a simple system can quickly turn into a potentially infinite task. Faced with tight costs and schedules, testers need to have a toolkit of practical techniques combined with hands-on experience and the right strategies in order to complete a successful project. World-renowned testing expert Rex Black provides you with the proven methods and concepts that test professionals must know. He presents you with the fundamental techniques for testing and clearly shows you how to select and apply successful strategies to test a system with budget and time constraints. Black begins by discussing the goals and tactics of effective and efficient testing. Next, he lays the foundation of his technique for risk-based testing, explaining how to analyze, prioritize, and document risks to the quality of the system using both informal and formal techniques. He then clearly describes how to design, develop, and, ultimately, document various kinds of tests. Because this is a hands-on activity, Black includes realistic, life-sized exercises that illustrate all of the major test techniques with detailed solutions. By the end of this book, you'll know more about the nuts and bolts of testing than most testers learn in an entire career, and you'll be ready to put those ideas into action on your next test project. 
With the help of real-world examples integrated throughout the chapters, you'll discover how to:
* Analyze the risks to system quality
* Allocate your testing effort appropriately based on the level of risk
* Choose the right testing strategies every time
* Design tests based on a system's expected behavior (black box) or internal structure (white box)
* Plan and perform integration testing
* Explore and attack the system
* Focus your hard work to serve the needs of the project
The author's companion Web site provides exercises, tips, and techniques that can be used to gain valuable experience and effectively test software and systems. Visit the author's Web site at http://www.rexblackconsulting.com/

Patent
11 Oct 2007
TL;DR: The size of a software application testing project is determined, and the person-hours required for the testing project are estimated by counting the number of different parameter types that occur within testing activities associated with the application.
Abstract: The size of a software application testing project is determined, and the person-hours required for the testing project are estimated. The software application is sized by counting the number of different parameter types that occur within testing activities associated with the application. The parameter-type numbers are then divided by a scaling weight to arrive at a Testing Unit number, which is then divided by a Testing Unit rate, e.g., the person-hours associated with each testing unit, to arrive at an estimated testing project effort. Some embodiments include an uncertainty calculation that potentially increases testing time based on the clarity of the project requirements, the tester's familiarity with the application area, and the tester's familiarity with the domain. Some embodiments calculate separate testing project times for different phases of the testing project.
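The sizing arithmetic can be sketched as follows. All parameter names, weights, and rates here are invented placeholders, and the exact way the patent combines the scaling weight, Testing Unit rate, and uncertainty factor may differ from this reading.

```python
def estimate_testing_effort(param_counts, scaling_weight=10.0,
                            hours_per_testing_unit=4.0,
                            uncertainty_factor=1.0):
    """Parameter-type counts -> Testing Units -> person-hours.
    All numeric values are invented placeholders, not from the patent."""
    testing_units = sum(param_counts.values()) / scaling_weight
    return testing_units * hours_per_testing_unit * uncertainty_factor
```

The uncertainty factor models the patent's adjustment for unclear requirements or unfamiliar domains: values above 1.0 inflate the estimate.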

Book ChapterDOI
26 Jun 2007
TL;DR: This work investigates the use of parameterized state machine models to drive integration testing, in the case where the models of components are not available beforehand, and proposes a new strategy where integration tests can be derived from the data collected during the learning process.
Abstract: We investigate the use of parameterized state machine models to drive integration testing, in the case where the models of components are not available beforehand. Therefore, observations from tests are used to learn partial models of components, from which further tests can be derived for integration. We have extended previous algorithms to the case of finite state models with predicates on input parameters and observable non-determinism. We also propose a new strategy where integration tests can be derived from the data collected during the learning process. Our work typically addresses the problem of assembling telecommunication services from black box COTS.

Proceedings ArticleDOI
09 Jul 2007
TL;DR: A novel technique to automatically generate test cases for a software system, combining black-box model-based testing with white-box parameterized unit testing, and provides tool support, integrated into the model- based testing tool.
Abstract: We have devised a novel technique to automatically generate test cases for a software system, combining black-box model-based testing with white-box parameterized unit testing. The former provides general guidance for the structure of the tests in the form of test sequences, as well as the oracle to check for conformance of an application under test with respect to a behavioral model. The latter finds a set of concrete parameter values that maximize code coverage using symbolic analysis. By applying these techniques together, we can produce test definitions (expressed as code to be run in a test management framework) that exercise all selected paths in the model, while also covering code branches specific to the implementation. These results cannot be obtained from any of the individual approaches alone, as the model cannot predict what values are significant to a particular implementation, while parameterized unit testing requires manually written test sequences and correctness validations. We provide tool support, integrated into our model-based testing tool.

Proceedings ArticleDOI
24 Jul 2007
TL;DR: An automated testing tool called CASCAT for Java components is presented, and a case study of the tool shows its high fault-detection ability.
Abstract: Algebraic testing is an automated software testing method based on algebraic formal specifications. It has the advantages of a highly automated testing process and independence from the software's implementation details. This paper applies the method to software components. An automated testing tool called CASCAT for Java components is presented. A case study of the tool shows its high fault-detection ability.

Proceedings ArticleDOI
20 May 2007
TL;DR: The Integrated Black-box Approach for Component Change Identification (I-BACCI) process, which selects regression tests for applications based upon static binary code analysis, is evolved to Version 4 to support DLL components.
Abstract: Software products are often configured with commercial-off-the-shelf (COTS) components. When new releases of these components are made available for integration and testing, source code is usually not provided. Various regression test selection processes have been developed and have been shown to be cost effective. However, the majority of these test selection techniques rely on access to source code for change identification. Based on our prior work, we are studying a solution for regression testing COTS-based applications that incorporate components shipped as dynamic link library (DLL) files. We evolved the Integrated Black-box Approach for Component Change Identification (I-BACCI) process, which selects regression tests for applications based upon static binary code analysis, to Version 4 to support DLL components. A feasibility case study was conducted at ABB on products written in C/C++ to determine the effectiveness of the I-BACCI process. The results of the case study indicate this process can reduce the required number of regression tests by as much as 100% if the analysis indicates the changes to the component are not called by the glue code of the application using the COTS component. Similar to other regression test selection techniques, when there are many changes in the new component I-BACCI suggests a retest-all regression test strategy.
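The selection rule can be sketched as reachability over a call graph: a test is re-selected only if the glue code it exercises can reach a changed component function. The names and toy graph below are invented, and I-BACCI itself derives change information from static binary analysis rather than an explicit graph like this.

```python
def select_regression_tests(changed_funcs, call_graph, test_entry_points):
    """Select a test iff the glue code it drives can reach a changed
    function in the component (simplified stand-in for I-BACCI's
    static binary code analysis)."""
    def reachable(entry):
        seen, stack = set(), [entry]
        while stack:
            f = stack.pop()
            if f in seen:
                continue
            seen.add(f)
            stack.extend(call_graph.get(f, ()))
        return seen
    return [t for t, entry in test_entry_points.items()
            if reachable(entry) & changed_funcs]
```

When none of the changed functions are reachable from any glue code, the selection is empty, which corresponds to the 100% reduction case reported in the study.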

Proceedings ArticleDOI
09 Jul 2007
TL;DR: The approach presented in this paper is to use task models to describe the interaction between environment and system, which restricts the possible state space to a feasible size and enables the generation of task sequences, which cover the critical interaction scenarios.
Abstract: When integrating different system components, the interaction between different features is often error-prone. Errors typically occur on interruption, concurrency, or the disabling/enabling of different features. Test case generation for integration testing struggles with two problems: the large state space, and the fact that these critical relationships are often not explicitly modeled. The approach presented in this paper is to use task models to describe the interaction between environment and system. This restricts the possible state space to a feasible size and enables the generation of task sequences that cover the critical interaction scenarios. These task sequences are too abstract to test the System-Under-Test (SUT) directly, because they lack input and expected-output behavior. To bridge the different abstraction levels, the tasks are mapped to component behavior models. Based on this mapping, task sequences can be enriched with additional information from the component models and thereby executed to test the SUT.

Journal ArticleDOI
TL;DR: A testing requirement reduction method is presented that generates a reduced testing requirement set, which is the basis of test suite generation, reduction, and optimization and contributes to systematic, rational, and effective testing.
Abstract: Test suite optimization aims at satisfying all testing objectives with the fewest test cases. Given the testing objectives, a reduced testing requirement set can improve the effectiveness and efficiency of test suite optimization. This paper proposes a testing requirement reduction model that describes the interrelations among testing requirements in detail. Based on the model, it presents a testing requirement reduction method that generates the reduced testing requirement set, which is the basis of test suite generation, reduction, and optimization. Experimental results show that the method helps generate a smaller test suite and contributes to systematic, rational, and effective testing.
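The paper's reduction model is not reproduced here, but one common interrelation used for requirement reduction is subsumption: if every test that satisfies requirement A also satisfies requirement B, then B is redundant and can be dropped. A minimal sketch, with the data shapes assumed:

```python
def reduce_requirements(tests_for_req):
    """Drop requirements subsumed by another requirement.

    tests_for_req maps each requirement to the set of tests satisfying it.
    If tests(a) is a strict subset of tests(b), then any test chosen to
    satisfy a also satisfies b, so b can be removed from the set.
    """
    reduced = set(tests_for_req)
    for a in tests_for_req:
        for b in tests_for_req:
            if a != b and a in reduced and b in reduced \
                    and tests_for_req[a] < tests_for_req[b]:
                reduced.discard(b)
    return reduced
```

A test suite generated to cover the reduced set still covers the full requirement set, which is what makes the reduction useful as a preprocessing step for suite optimization.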

Proceedings ArticleDOI
Tim Trew1
09 Jul 2007
TL;DR: The presentation will give an overview of how the software in embedded consumer products has evolved over the last decade, with the shift of signal processing from analogue hardware to software and from monolithic development organizations to ones that integrate and test components developed by others.
Abstract: When releasing a new consumer product, the anticipated profits can be slashed by being late to market or having poor quality. Companies are keen to improve the efficiency and effectiveness of testing, to reduce their lead time, and to be confident of the quality. However, it might appear that progress in the deployment of new approaches has been agonizingly slow. In practice, with the rapid evolution in the technology and business of consumer electronics, a major challenge is to anticipate the testing needs of the future while addressing the detailed issues that hamper the adoption of new approaches. The presentation will give an overview of how the software in embedded consumer products, together with its development approach, has evolved over the last decade, with the shift of signal processing from analogue hardware to software and from monolithic development organizations to ones that integrate and test components developed by others. Integration testing, one of the least well-understood areas, becomes crucial, and there will be illustrations of the insight that testability can give into how architectures must be constrained to ensure that this testing can be effective. The presentation will also give examples of the successful introduction of new test technology, of current trials of model-checking techniques, and of the new testing challenges posed by the increasing complexity of embedded systems built on systems-on-chip and their network-on-chip interconnects.

Patent
25 May 2007
TL;DR: In this article, the authors present a methodology for efficient development of distributed web services that simplifies integration testing by allowing a developer to initially test a web service on a single server or a small number of servers and then easily scale up the test environment.
Abstract: Development tools and a methodology for efficient development of distributed web services. A tool tracks changes in packages used to create images deployed for testing. Rather than build a complete image for each change, a current image may be created by substituting changed packages into a previously created image. Another tool allocates components of an image to a number of servers specified by a user of the tool. Such a tool simplifies integration testing of the web service by allowing a developer to initially test a web service on a single server or a small number of servers and then easily scale up the test environment. The servers may be physical or virtual. Interface rules for the packages that constitute the software for the web service are defined to reduce the likelihood of integration problems as the environment is scaled up.
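The two tool ideas in the abstract, substituting changed packages into a previously built image and spreading an image's components over a configurable number of servers, can be sketched as follows. The data shapes and the round-robin policy are assumptions; the patent does not specify an allocation algorithm.

```python
def update_image(previous_image, changed_packages):
    """Build the current image by substituting only the changed packages
    into a previously created image instead of rebuilding from scratch."""
    image = dict(previous_image)      # package name -> version
    image.update(changed_packages)    # overwrite just the changed entries
    return image

def allocate_components(components, n_servers):
    """Round-robin the image's components over the requested servers;
    n_servers=1 reproduces the initial single-server test setup."""
    servers = [[] for _ in range(n_servers)]
    for i, component in enumerate(components):
        servers[i % n_servers].append(component)
    return servers
```

Scaling up the test environment then amounts to re-running `allocate_components` with a larger `n_servers`, whether the servers are physical or virtual.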

Proceedings ArticleDOI
11 Mar 2007
TL;DR: Preliminary experiments on some medium-scale systems show that the regression testing method based on built-in test design is feasible and cost-effective in practice.
Abstract: Component-based software technology is expected to be an effective and widely used method of constructing software systems. However, certain characteristics of components pose a great challenge for testing systems built from externally provided components, especially for regression testing. Built-in test design is a fairly effective way to improve a component's testability. In this paper, we present an improved regression testing method based on built-in test design for component-based systems. It requires collaboration between component developers and users: component developers are responsible for analyzing the affected methods and constructing the corresponding testing interfaces in the new component version, and component users can then conveniently pick out the subset of test cases for regression testing through these testing interfaces. Preliminary experiments on some medium-scale systems show that the method is feasible and cost-effective in practice.
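A hypothetical sketch of the division of labor described above: the component developer ships a built-in testing interface reporting which methods the new release affected, and the component user intersects that report with the methods each test case exercises. All names are invented for illustration.

```python
class ComponentRelease:
    """A component release whose developer ships a built-in testing
    interface reporting which methods the new version affected."""

    def __init__(self, affected_methods):
        self._affected = set(affected_methods)

    def affected_methods(self):
        # Built-in testing interface exposed to component users.
        return set(self._affected)


def select_regression_suite(release, methods_used_by_test):
    """Component-user side: keep only the test cases that exercise at
    least one method the release reports as affected."""
    affected = release.affected_methods()
    return {name for name, used in methods_used_by_test.items()
            if used & affected}
```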

Patent
22 Jan 2007
TL;DR: A method and system are presented for functionally testing units under test, such as electronic controller boards for a spa system, based on a testbed approach.
Abstract: A method and system for functionally testing units under test, such as electronic controller boards for a spa system.