
Showing papers on "Integration testing published in 2004"


Journal ArticleDOI
TL;DR: An effort called Integration and Testing of Advanced Guidance and Control Technologies has recently completed a rigorous testing phase in which candidate entry guidance algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests, as discussed by the authors.
Abstract: There are a number of approaches to advanced guidance and control that have the potential for achieving the goals of significantly increasing reusable launch vehicle (or any space vehicle that enters an atmosphere) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future vehicle concepts.

54 citations


Proceedings ArticleDOI
11 Sep 2004
TL;DR: Two alternative models for the white-box testing of Web applications, the navigation model and the control flow model, are presented, white-box testing criteria are defined on them, and their use on a real-world Web application is described.
Abstract: White-box testing exercises a software system by ensuring that a model of the internal structure is covered by the test cases. Extending this approach to Web applications is far from obvious, because at least two abstraction levels can be considered to represent the internal structure of a Web application: the navigation model and the control flow model. To further complicate the matter, dynamic code generation must be taken into account in both models. In this paper, the two alternative models are presented and white-box testing criteria are defined on them. Their usage for the white-box testing of a real-world Web application is described, highlighting the associated costs and benefits.

53 citations


Proceedings ArticleDOI
20 Sep 2004
TL;DR: T-UPPAAL is a new tool for model-based testing of embedded real-time systems that automatically generates and executes tests "online" from a state machine model of the implementation under test (IUT) and its assumed environment, which together specify the required and allowed observable (real-time) behavior of the IUT.

Abstract: The goal of testing is to gain confidence in a physical, computer-based system by means of executing it. More than one third of typical project resources are spent on testing embedded and real-time systems, but the activity remains ad hoc, based on heuristics, and error-prone. Therefore systematic, theoretically well-founded and effective automated real-time testing techniques are of great practical value. Testing conceptually consists of three activities: test case generation, test case execution and verdict assignment. We present T-UPPAAL, a new tool for model-based testing of embedded real-time systems that automatically generates and executes tests "online" from a state machine model of the implementation under test (IUT) and of its assumed environment, which together specify the required and allowed observable (real-time) behavior of the IUT. T-UPPAAL implements a sound and complete randomized testing algorithm, and uses a formally defined notion of correctness (relativized timed input/output conformance) to assign verdicts. Using online testing, events are generated and executed simultaneously.
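For illustration only, the loop below sketches the general shape of such an online model-based testing algorithm, much simplified: the model is an untimed state machine rather than a timed automaton, T-UPPAAL's data structures and conformance relation are not reproduced, and the IUT interface (send/poll_output) is hypothetical.

# Minimal sketch of an online (on-the-fly) model-based testing loop, in the
# spirit of the randomized algorithm described above. This is NOT T-UPPAAL:
# the model is an untimed Mealy-style state machine, and the IUT is assumed
# to expose send(input) and poll_output(). All names are illustrative.
import random

class ModelState:
    def __init__(self, inputs, outputs):
        # inputs:  {input symbol: next state}
        # outputs: {output symbol: next state}  (outputs the model allows here)
        self.inputs = inputs
        self.outputs = outputs

def online_test(model, state, iut, steps=100):
    """Return 'pass' or a failure verdict after at most `steps` test events."""
    for _ in range(steps):
        out = iut.poll_output()                 # non-blocking: None if IUT is quiet
        if out is not None:
            if out not in model[state].outputs:
                return f"fail: unexpected output {out!r} in state {state}"
            state = model[state].outputs[out]   # output conforms, follow the model
        elif model[state].inputs:
            stim = random.choice(list(model[state].inputs))   # random stimulus
            iut.send(stim)
            state = model[state].inputs[stim]
        # else: nothing to send and nothing observed -> keep polling
    return "pass"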

53 citations


Journal ArticleDOI
19 Feb 2004
TL;DR: Experimental results show that the CDS dynamic control is better than other common control rules with respect to the number of tardy jobs.
Abstract: This paper presents a data-mining-based production control approach for the testing and rework cell in a dynamic computer-integrated manufacturing system. The proposed competitive decision selector (CDS) observes the status of the system and jobs at every decision point, and makes its decision on job preemption and dispatching rules in real time. The CDS equipped with two algorithms combines two different knowledge sources, the long-run performance and the short-term performance of each rule on the various status of the system. The short-term performance information is mined by a data-mining approach from large-scale training data generated by simulation with data partition. A decision tree-based module generates classification rules on each partitioned data that are suitable for interpretation and verification by users and stores the rules in the CDS knowledge bases. Experimental results show that the CDS dynamic control is better than other common control rules with respect to the number of tardy jobs.
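As a rough illustration of the data-mining idea (not the authors' CDS implementation), the sketch below trains a decision-tree classifier on simulated decision-point snapshots and queries it to pick a dispatching rule; the feature names, rule labels, and training values are invented.

# Sketch of learning a dispatching-rule selector from simulation data.
# Each row is a snapshot of system status at a decision point (e.g. queue
# length, mean slack, machine utilisation); the label is the rule that
# performed best in simulation for that snapshot.
from sklearn.tree import DecisionTreeClassifier
import numpy as np

X_train = np.array([[12, -3.0, 0.91],
                    [ 4,  6.5, 0.55],
                    [ 9,  0.2, 0.80]])
y_train = np.array(["EDD", "FIFO", "SPT"])

cds = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

def pick_rule(queue_len, mean_slack, utilisation):
    # Called at every decision point of the testing/rework cell.
    return cds.predict([[queue_len, mean_slack, utilisation]])[0]

print(pick_rule(10, -1.0, 0.88))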

49 citations


Journal ArticleDOI
TL;DR: In the case study, simulation-based (stochastic) experiments combined with optimized design-of-experiment plans showed a productivity increase of at least 100 times in comparison to current practice without DoD STEP deployment.

Abstract: This paper presents some original solutions with regard to the deployment of the U.S. Department of Defense Simulation, Test and Evaluation Process (DoD STEP), using an automated target tracking radar system as a case study. Besides integrating modelling and simulation to form a model-based approach to the software testing process, the number of experiments, i.e. test cases, has been dramatically reduced by applying an optimized design-of-experiment plan and an orthogonal array-based robust testing methodology. Also, computer-based simulation at various abstraction levels of the system/software under test can serve as a test oracle. In the case study, simulation-based (stochastic) experiments combined with optimized design-of-experiment plans showed a productivity increase of at least 100 times in comparison to current practice without DoD STEP deployment. Copyright © 2004 John Wiley & Sons, Ltd.
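The test-case reduction can be illustrated with a generic greedy pairwise-covering sketch; this is not the orthogonal-array plan used in the paper, and the radar-flavoured parameters below are made up.

# Illustrative combinatorial reduction: instead of the full cross product,
# greedily build a small suite that covers every pairwise combination of
# parameter values. Generic pairwise generation, not the DoD STEP tooling.
from itertools import combinations, product

params = {
    "target_speed": ["slow", "fast"],
    "clutter":      ["low", "high"],
    "maneuver":     ["none", "weave", "dive"],
}
names = list(params)
candidates = [dict(zip(names, vals)) for vals in product(*params.values())]

def pairs_of(case):
    return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

uncovered = set().union(*(pairs_of(c) for c in candidates))
suite = []
while uncovered:
    best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(len(suite), "test cases instead of", len(candidates))   # 6 instead of 12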

45 citations


Proceedings ArticleDOI
11 Sep 2004
TL;DR: The authors' experiments indicate that this partial order of unit tests, corresponding to a coverage hierarchy over their sets of covered method signatures, is semantically meaningful: faults that cause a unit test to break generally cause less specific unit tests to break as well.
Abstract: Current unit test frameworks present broken unit tests in an arbitrary order, but developers want to focus on the most specific ones first. We have therefore inferred a partial order of unit tests corresponding to a coverage hierarchy of their sets of covered method signatures: When several unit tests in this coverage hierarchy break, we can guide the developer to the test calling the smallest number of methods. Our experiments with four case studies indicate that this partial order is semantically meaningful, since faults that cause a unit test to break generally cause less specific unit tests to break as well.
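A minimal sketch of the reporting idea, assuming per-test coverage sets are already available (the tool integration and the full partial-order computation are omitted); as the abstract states, the most specific broken test is the one calling the fewest methods.

# Order broken unit tests so that the most specific one (smallest set of
# covered method signatures) is reported first. Illustrative only.
def most_specific_first(failing_tests, coverage):
    """
    failing_tests: iterable of test names
    coverage:      dict test name -> set of covered method signatures
    """
    return sorted(failing_tests, key=lambda t: (len(coverage[t]), t))

coverage = {
    "test_push":       {"Stack.push"},
    "test_push_pop":   {"Stack.push", "Stack.pop"},
    "test_full_cycle": {"Stack.push", "Stack.pop", "Stack.peek", "Stack.clear"},
}
print(most_specific_first(["test_full_cycle", "test_push", "test_push_pop"], coverage))
# -> ['test_push', 'test_push_pop', 'test_full_cycle']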

41 citations


Patent
25 May 2004
TL;DR: In this article, a functional testing tool can include a configuration for producing multiple methods defining interactions with individual elements in a user interface to an application under test, and a test case can be generated which implements the task with at least one verification point.
Abstract: A method, system and apparatus for the object-oriented automated user interface testing of an application under test in a functional testing tool. The functional testing tool can include a configuration for producing multiple methods defining interactions with individual elements in a user interface to an application under test. The functional testing tool further can include one or more defined actions grouping selected ones of the methods which are used repeatedly within screens of the application under test. Preferably, at least one task defining a group of related activities in the user interface can be produced in the functional testing tool. Subsequently, a test case can be generated which implements the task with at least one verification point.

40 citations


Book
01 Jan 2004
TL;DR: This book covers software-based self-testing of embedded processor cores, from self-test routine development (ATPG-based, pseudorandom, and pre-computed tests) to case studies such as the Plasma/MIPS processor core.
Abstract: List of Figures. List of Tables. Preface. Acknowledgments. 1. Introduction. 1.1 Book Motivation and Objectives. 1.2 Book Organization. 2. Design Of Processor-Based SOC. 2.1 Integrated Circuits Technology. 2.2 Embedded Core-Based System-on-Chip Design. 2.3 Embedded Processors in SoC Architectures. 3. Testing Of Processor-Based SOC. 3.1 Testing and Design for Testability. 3.2 Hardware-Based Self-Testing. 3.3 Software-Based Self-Testing. 3.4 Software-Based Self-Test and Test Resource Partitioning. 3.5 Why is Embedded Processor Testing Important? 3.6 Why is Embedded Processor Testing Challenging? 4. Processor Testing Techniques. 4.1 Processor Testing Techniques Objectives. 4.1.1 External Testing versus Self-Testing. 4.1.2 DfT-based Testing versus Non-Intrusive Testing. 4.1.3 Functional Testing versus Structural Testing. 4.1.4 Combinational Faults versus Sequential Faults Testing. 4.1.5 Pseudorandom versus Deterministic Testing. 4.1.6 Testing versus Diagnosis. 4.1.7 Manufacturing Testing versus On-line/Field Testing. 4.1.8 Microprocessor versus DSP Testing. 4.2 Processor Testing Literature. 4.2.1 Chronological List of Processor Testing Research. 4.2.2 Industrial Microprocessors Testing. 4.3 Classification of the Processor Testing Methodologies. 5. Software-Based Processor Self-Testing. 5.1 Software-based self-testing concept and flow. 5.2 Software-based self-testing requirements. 5.2.1 Fault coverage and test quality. 5.2.2 Test engineering effort for self-test generation. 5.2.3 Test application time. 5.2.4 A new self-testing efficiency measure. 5.2.5 Embedded memory size for self-test execution. 5.2.6 Knowledge of processor architecture. 5.2.7 Component based self-test code development. 5.3 Software-based self-test methodology overview. 5.4 Processor components classification. 5.4.1 Functional components. 5.4.2 Control components. 5.4.3 Hidden components. 5.5 Processor components test prioritization. 5.5.1 Component size and contribution to fault coverage. 5.5.2 Component accessibility and ease of test. 5.5.3 Components' testability correlation. 5.6 Component operations identification and selection. 5.7 Operand selection. 5.7.1 Self-test routine development: ATPG. 5.7.2 Self-test routine development: pseudorandom. 5.7.3 Self-test routine development: pre-computed tests. 5.7.4 Self-test routine development: style selection. 5.8 Test development for processor components. 5.8.1 Test development for functional components. 5.8.2 Test development for control components. 5.8.3 Test development for hidden components. 5.9 Test responses compaction in software-based self-testing. 5.10 Optimization of self-test routines. 5.10.1 'Chained' component testing. 5.10.2 'Parallel' component testing. 5.11 Software-based self-testing automation. 6. Case Studies - Experimental Results. 6.1 Parwan processor core. 6.1.1 Software-based self-testing of Parwan. 6.2 Plasma/MIPS processor core. 6.2.1 Software-based self-testing of Plasma/MIPS. 6.3 Meister/MIPS reconfigurable processor core. 6.3.1 Software-based self-testing of Meister/MIPS. 6.4 Jam processor core. 6.4.1 Software-based self-testing of Jam. 6.5 oc8051 microcontroller core. 6.5.1 Software-based self-testing of oc8051. 6.6 RISC-MCU microcontroller core. 6.6.1 Software-based self-testing of RISC-MCU. 6.7 oc54x DSP Core. 6.7.1 Software-based self-testing of oc54x. 6.8 Compaction of test responses. 6.9 Summary of Benchmarks. 7. Processor-Based Testing Of SOC. 7.1 The concept. 7.1.1 Methodology advantages and objectives. 7.2 Literature review. 
7.3 Research focus in processor-based SOC testing. 8. Conclusions. References. Index. About the Authors.

36 citations


Book ChapterDOI
TL;DR: A Meta language in XML is introduced that allows test cases for services to be defined, and a tool is presented that can test and monitor whether certain workflows between multiple service endpoints really behave as described in the XML Meta language.
Abstract: Service-Oriented Architectures (SOAs) have recently emerged as a new promising paradigm for supporting distributed computing. Testing SOAs is very challenging and automated test tools can help to reduce the development costs enormously. In this paper we will propose an approach as to how automatic testing for SOAs can be done. We will introduce a Meta language in XML, which allows defining test cases for services. This paper focuses on a real life prototype implementation called SITT (Service Integration Test Tool). It has the possibility to test and monitor if certain workflows between multiple service endpoints really behave as described with the XML Meta language. This paper shows how SITT is designed and we will present its features by introducing a real-world application scenario from the domain of Telecommunications providers, namely “Mobile Number Portability”.

36 citations


Book ChapterDOI
TL;DR: This paper proposes progressive group testing techniques to test the large number of Web services (WS) available on the Internet, using a progressively increasing number of test cases.
Abstract: This paper proposes progressive group testing techniques to test the large number of Web services (WS) available on the Internet. At the unit testing level, WS with the same functionality are tested as a group using a progressively increasing number of test cases. A small number of WS that scored best are then integrated into the real environment for operational testing. At the integration testing level, many composite services are constructed and tested by group integration testing. The results of group testing at both the unit and integration levels are verified by weighted majority voting mechanisms; the weights are based on the reliability history of the WS under test. A case study is designed and implemented in which the dependency among the test cases in WS is analyzed and used to generate progressive layers of test cases.
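The weighted majority voting step can be sketched as follows; this is an illustrative reconstruction from the abstract, with invented service names and weights, not the authors' implementation.

# Weighted-majority-voting oracle for a group of functionally equivalent
# Web services: each service is invoked with the same test case, the answer
# backed by the highest total reliability weight wins, and dissenting
# services are flagged.
from collections import defaultdict

def weighted_majority(responses, reliability):
    """
    responses:   dict service id -> returned value for one test case
    reliability: dict service id -> weight from the service's reliability history
    Returns (winning value, set of services that disagreed with it).
    """
    votes = defaultdict(float)
    for svc, value in responses.items():
        votes[value] += reliability.get(svc, 1.0)
    winner = max(votes, key=votes.get)
    losers = {svc for svc, value in responses.items() if value != winner}
    return winner, losers

responses   = {"ws_a": 42, "ws_b": 42, "ws_c": 41}
reliability = {"ws_a": 0.9, "ws_b": 0.8, "ws_c": 0.4}
print(weighted_majority(responses, reliability))   # (42, {'ws_c'})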

34 citations


Book ChapterDOI
27 Sep 2004
TL;DR: This paper proposes an approach for implementing self-testing components, which allow integration test specifications and suites to be developed by observing both the behavior of the component and of the entire system.
Abstract: Internet software tightly integrates classic computation with communication software. Heterogeneity and complexity can be tackled with a component-based approach, where components are developed by application experts and integrated by domain experts. Component-based systems cannot be tested with classic approaches but present new problems. Current techniques for integration testing are based upon the component developer providing test specifications or suites with their components. However, components are often used in ways not envisioned by their developers, so the packaged test specifications and suites cannot be relied upon. Often this results in conditions being placed upon a component's use; what is required instead is a method for allowing test suites to be adapted to new situations. In this paper, we propose an approach for implementing self-testing components, which allow integration test specifications and suites to be developed by observing both the behavior of the component and of the entire system.

Proceedings ArticleDOI
09 Sep 2004
TL;DR: A possible solution to the above issues exploiting Infrastructure IPs is proposed, and the results gathered on two case studies are reported.
Abstract: SoCs normally include microprocessor/microcontroller cores. Testing them following the software-based self-test approach is attractive, mainly because this allows at speed testing, and does not require internally modifying the core. However, this raises some issues, such as how to upload and launch the test, how to monitor the results, how to embed the adopted solutions into a suitable wrapper to enhance core modularity and test reusability. The paper proposes a possible solution to the above issues exploiting Infrastructure IPs, and reports the results gathered on two case studies.

Proceedings ArticleDOI
02 Nov 2004
TL;DR: This paper provides an argument that the perceived costs of unit testing may be exaggerated and that the likely benefits in terms of defect detection are actually quite high in relation to those costs.
Abstract: Unit testing is a technique that receives a lot of criticism in terms of the amount of time it is perceived to take and how much it costs to perform. However, it is also the most effective means to test individual software components for boundary value behavior and to ensure that all code has been exercised adequately (e.g. statement, branch or MC/DC coverage). In this paper we examine the available data from three safety-related software projects undertaken by Pi Technology that have made use of unit testing. Additionally, we discuss the different issues that have been found in applying the technique at different phases of development and using different methods to generate those tests. In particular, we provide an argument that the perceived costs of unit testing may be exaggerated and that the likely benefits in terms of defect detection are actually quite high in relation to those costs.

Proceedings ArticleDOI
01 Jan 2004
TL;DR: In this paper, a negative binomial regression model was developed to predict which files in a large software system are most likely to contain the largest numbers of faults that manifest as failures in the next release, using information from all previous releases.
Abstract: We perform static analysis and develop a negative binomial regression model to predict which files in a large software system are most likely to contain the largest numbers of faults that manifest as failures in the next release, using information from all previous releases. This is then used to guide the dynamic testing process for software systems by suggesting that files identified as being likely to contain the largest numbers of faults be subjected to particular scrutiny during dynamic testing. In previous studies of a large inventory tracking system, we identified characteristics of the files containing the largest numbers of faults and those with the highest fault densities. In those studies, we observed that faults were highly concentrated in a relatively small percentage of the files, and that for every release, new files and old files that had been changed during the previous release generally had substantially higher average fault densities than old files that had not been changed. Other characteristics were observed to play a less central role. We now investigate additional potentially-important characteristics and use them, along with the previously-identified characteristics as the basis for the regression model of the current study. We found that the top 20% of files predicted by the statistical model contain between 71% and 85% of the observed faults found during dynamic testing of the twelve releases of the system that were available.
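A sketch of the prediction-and-ranking step is shown below, using statsmodels' negative binomial GLM as a stand-in for the authors' exact model; the file characteristics and fault counts are invented.

# Fit a negative binomial regression of per-file fault counts on file
# characteristics from earlier releases, then rank files of the next release
# by predicted fault count and flag the top 20% for extra scrutiny.
import numpy as np
import statsmodels.api as sm

# Historical data: one row per file, e.g. [KLOC, changed last release?, new file?]
X_hist = np.array([[12.0, 1, 0], [3.0, 0, 0], [25.0, 1, 1],
                   [8.0, 0, 1], [15.0, 1, 0], [2.0, 0, 0]])
faults = np.array([7, 0, 14, 3, 6, 1])        # faults observed in each file

model = sm.GLM(faults, sm.add_constant(X_hist),
               family=sm.families.NegativeBinomial()).fit()

# Files of the upcoming release, same characteristics:
X_next = np.array([[18.0, 1, 0], [2.5, 0, 0], [9.0, 1, 1],
                   [30.0, 0, 0], [5.0, 1, 1]])
predicted = model.predict(sm.add_constant(X_next))

top = np.argsort(predicted)[::-1][:max(1, len(predicted) // 5)]   # top 20% of files
print("scrutinise first:", top, predicted.round(1))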

Journal ArticleDOI
TL;DR: A new method for generating test sets from a deterministic stream X-machine specification that generalises the existing integration testing method and no longer requires the implementations of the processing functions to be proved correct prior to the actual testing.
Abstract: One of the strengths of using stream X-machines to specify a system is that, under certain well defined conditions, it is possible to produce a test set that is guaranteed to determine the correctness of an implementation. However, the existing method assumes that the implementation of each processing function is proved to be correct before the actual testing can take place, so it only tests the system integration. This paper presents a new method for generating test sets from a deterministic stream X-machine specification that generalises the existing integration testing method. This method no longer requires the implementations of the processing functions to be proved correct prior to the actual testing. Instead, the testing of the processing functions is performed along with the integration testing.

Journal Article
TL;DR: New techniques for analyzing and testing the polymorphic relationships that are found in O-O software can result in an increased ability to find faults and overall higher quality software.
Abstract: As we move from developing procedure-oriented to O-O programs, the complexity traditionally found in functions and procedures is moving to the connections among components. More faults occur as components are integrated to form higher level aggregates. Consequently, we need to place more effort on testing the connections among components. Although O-O technology provides abstraction mechanisms to build components to integrate, it also adds new compositional relations that can contain faults, which must be found during integration testing. This paper describes new techniques for analyzing and testing the polymorphic relationships that occur in O-O software. The application of these techniques can result in an increased ability to find faults and overall higher quality software.
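One simple instance of such a criterion is sketched below under the assumption of a Python-style class hierarchy: a call site that receives a polymorphic parameter is exercised once with every concrete receiver class. This only illustrates the flavour of the analysis, not the paper's technique.

# Exercise a polymorphic call site once per concrete receiver class.
class Shape:
    def area(self): raise NotImplementedError

class Square(Shape):
    def __init__(self, s): self.s = s
    def area(self): return self.s * self.s

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r * self.r

def total_area(shapes):                      # integration-level unit under test
    return sum(s.area() for s in shapes)

def all_receiver_classes(base):
    """Concrete subclasses that can be bound at the polymorphic call site."""
    return list(base.__subclasses__())

def test_total_area_per_receiver():
    samples = {Square: Square(2), Circle: Circle(1)}
    for cls in all_receiver_classes(Shape):
        shape = samples[cls]                             # one instance per subtype
        assert total_area([shape]) == shape.area()       # call site covered for cls

test_total_area_per_receiver()
print("call site exercised for:", [c.__name__ for c in all_receiver_classes(Shape)])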

Proceedings ArticleDOI
28 Jan 2004
TL;DR: The basic structure of the Open Architecture Test System is described in this paper and it is shown that each modular unit can be replaced with another modular unit from a different vendor.
Abstract: An open architecture test system has been envisioned to address the re-usability of the test solutions. The open architecture provides a method and framework under which software and test instruments of different vendors can be developed and integrated into an automatic test equipment (ATE). The framework uses standard interfaces so that each modular unit (software or hardware) can be replaced with another modular unit from a different vendor. The basic structure of the Open Architecture Test System is described in this paper.

Proceedings ArticleDOI
16 Feb 2004
TL;DR: An innovative method of scan pattern timing creation based on the results of static timing analysis is presented, and the use of a clock control module on the J750 tester, which creates a fast clock by combining two tester channels with high edge placement accuracy, is described.
Abstract: This paper discusses the aspects and associated requirements of the design and implementation of at-speed scan testing. It also demonstrates some important vector generation and implementation procedures based on a real design. An innovative method of scan pattern timing creation based on the results of static timing analysis is presented. The paper also describes the use of a clock control module on the J750 tester, which creates a fast clock by combining two tester channels with high edge placement accuracy. These methods allow a short test pattern preparation time and the use of low-cost test equipment, while providing high-quality at-speed testing.

Book ChapterDOI
27 Sep 2004
TL;DR: This paper focuses on software testing, which is based on a clever selection of “relevant” test cases, which may be manually or automatically run over the system.
Abstract: Software Model-Checking and Testing are some of the most used techniques to analyze software systems and identify hidden faults. While software model-checking allows for an exhaustive and automatic analysis of the system expressed through a model, software testing is based on a clever selection of “relevant” test cases, which may be manually or automatically run over the system.

Proceedings ArticleDOI
E.J. Marinissen1, T. Waayers1
22 Nov 2004
TL;DR: The paper describes the IEEE standard 1500 test wrapper for embedded modules and the basics of SOC-level test architecture design in relation to test time optimization and two application examples are given to illustrate current industrial practices.
Abstract: Large single-die system chips are designed in a modular fashion, including and reusing pre-designed and pre-verified design blocks. Modular testing is required for embedded non-logic modules and black-boxed IP cores. Also, modular testing is attractive for other blocks, as it supports 'divide-n-conquer' test generation and test reuse. Modular testing requires an on-chip infrastructure. This tutorial paper gives insight into the principles behind modular testing and its need for a dedicated on-chip test infrastructure. The paper describes the IEEE standard 1500 test wrapper for embedded modules and the basics of SOC-level test architecture design in relation to test time optimization. In addition, two application examples are given to illustrate current industrial practices.

Book ChapterDOI
27 Oct 2004
TL;DR: This paper presents an approach that generates test cases from the specification and transfers the specification-oriented testing process to model checking, and combines the advantages of testing and model checking.
Abstract: Testing is a necessary, but costly process for user-centric quality control. Moreover, testing is not comprehensive enough to completely detect faults. Many formal methods have been proposed to avoid the drawbacks of testing, e.g., model checking that can be automatically carried out. This paper presents an approach that (i) generates test cases from the specification and (ii) transfers the specification-oriented testing process to model checking. Thus, the approach combines the advantages of testing and model checking assuming the availability of (i) a model that specifies the expected, desirable system behavior as required by the user and (ii) a second model that describes the system behavior as observed. The first model is complemented in also specifying the undesirable system properties. The approach analyzes both these specification models to generate test cases that are then converted into temporal logic formulae to be model checked on the second model.

Proceedings ArticleDOI
30 Nov 2004
TL;DR: An aspect-oriented test description language (AOTDL), techniques for building top-level testing aspects on generic aspects, and a double-phase testing approach for filtering out meaningless test cases are presented.
Abstract: Unit testing is a methodology for testing small parts of an application independently of whatever application uses them. It is time consuming and tedious to write unit tests, and it is especially difficult to write unit tests that model the pattern of usage of the application. Aspect-oriented programming (AOP) addresses the problem of separation of concerns in programs, which is well suited to unit test problems. What's more, unit tests should be made from different concerns in the application instead of just from functional assertions of correctness or error. In this paper, we first present a new concept, application-specific aspects, which are top-level aspects picked up from generic low-level aspects in AOP for specific use; this can be viewed as the separation of concerns on applications of generic low-level aspects. Second, we describe an aspect-oriented test description language (AOTDL) and techniques to build top-level aspects for testing on top of generic aspects. Third, we generate a JUnit unit testing framework and test oracles from AspectJ programs by integrating our tool with AspectJ and JUnit; runtime exceptions thrown by testing aspects decide whether methods work correctly. Finally, we present a double-phase testing approach to filter out meaningless test cases in our framework.

Book ChapterDOI
05 Jul 2004
TL;DR: This paper suggests a test development process model that takes software reuse techniques and activities into account and further shows that, in order to produce reusable test material, the software entities must be expressed in terms of features to which the test materials are attached.
Abstract: Testing is the most time consuming activity in the software development process. The effectiveness of software testing is primarily determined by the quality of the testing process. Software reuse, when effectively applied, has been shown to increase the productivity of a software process and enhance the quality of software through the use of components already tested on a large scale. While reusability of testing material and tests has strong potential, few if any approaches have been proposed that combine these two aspects. Reusability of testing materials is desired when test development is complex and time-consuming. This is the case, for example, in testing with test-specific languages such as TTCN-3. To meet these needs, this paper suggests a test development process model that takes software reuse techniques and activities into account. The paper further shows that, in order to produce reusable test material, the software entities must be expressed in terms of features to which the test materials are attached. Also, the software components must be designed with reuse in mind when reusable test material is desired. The scope of the proposed test development approach is unit and integration testing, because the outcome of higher levels of testing typically depends on the tester's subjective judgment.

Journal Article
TL;DR: A deterministic white-box system-level control-flow testing method for the integration testing of real-time system software is presented, which allows test methods for sequential programs to be applied.
Abstract: In this paper we address the problem of testing real-time software in the functional domain. In order to achieve reproducible and deterministic test results of an entire multitasking real-time system it is essential not to only consider inputs and outputs, but also the order in which tasks communicate and synchronize with each other. We present a deterministic white-box system-level control-flow testing method for deterministic integration testing of real-time system software. We specifically address fixed priority scheduled real-time systems where synchronization is resolved using the Priority Ceiling Emulation Protocol or offsets in time. The method includes a testing strategy where the coverage criterion is defined by the number of paths in the system control flow. The method also includes a reachability algorithm for deriving all possible paths in terms of orderings of task starts, preemptions and completions of tasks executing in a real-time system. The deterministic testing strategy allows test methods for sequential programs to be applied, since each identified ordering can be regarded as a sequential program.
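The coverage bookkeeping implied by this strategy can be sketched as follows; the event orderings below are invented, and the reachability analysis that would derive them is not shown.

# Each system-level test run is reduced to the ordering of task start,
# preemption/resume and completion events it produced; coverage is the
# fraction of the statically derived orderings that have been exercised.
derived_orderings = {                          # e.g. output of a reachability analysis
    ("start A", "start B", "end B", "resume A", "end A"),
    ("start A", "end A", "start B", "end B"),
    ("start B", "end B", "start A", "end A"),
}

executed_runs = [
    ("start A", "end A", "start B", "end B"),
    ("start B", "end B", "start A", "end A"),
]

exercised = {run for run in executed_runs if run in derived_orderings}
coverage = len(exercised) / len(derived_orderings)
print(f"execution-ordering coverage: {coverage:.0%}")   # 67%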

Proceedings ArticleDOI
24 Jun 2004
TL;DR: It is the tester who needs the broadest knowledge about a software system; a set of tools that help satisfy the tester's information requirements is presented and their practical application discussed.
Abstract: In this paper, program comprehension techniques are examined within the context of testing. First, the tasks of a tester are identified, then the information requirements of a tester to fulfill these tasks. Comprehension is viewed as a knowledge acquisition process. The knowledge needed depends on the level at which one is testing. For system testing, other knowledge is required than for unit and integration testing. In light of the scope of testing, the paper concludes that it is the tester who needs the broadest knowledge about a software system. Having established the information requirements of testing, a set of tools are presented which help to satisfy these requirements and their practical application discussed.

Proceedings ArticleDOI
26 Oct 2004
TL;DR: This paper stresses the practical SOC test integration issues, including real problems found in test scheduling, test IO reduction, timing of functional test, scan IO sharing, etc., and proposes a test scheduling method based on the test architecture and test access mechanism, considering IO resource constraints.
Abstract: One of the major costs in system-on-chip (SOC) development is test cost, especially the cost related to test integration. Although there have been plenty of research works on individual topics about SOC testing, few of them took into account the practical integration issues. In this paper, we stress the practical SOC test integration issues, including real problems found in test scheduling, test IO reduction, timing of functional test, scan IO sharing, etc. A test scheduling method is proposed based on our test architecture and test access mechanism (TAM), considering IO resource constraints. Detailed scheduling further reduces the overall test time of the system chip. We also present a test wrapper architecture that supports the coexistence of scan test and functional test. The test integration platform has been applied to an industrial SOC case. The chip has been designed and fabricated, and the measurement results justify the approach: it is simple and efficient, i.e., low test integration cost, short test time, and small area overhead.
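A toy sketch of scheduling under an IO-pin constraint is shown below; it is a much-simplified greedy heuristic, not the paper's method, and the core names, test times, and pin counts are invented.

# Greedily start the longest pending core test whose pin demand fits in the
# currently free TAM pins; otherwise advance time to the next completion.
# Assumes every core needs at most `total_pins` pins.
import heapq

def schedule(cores, total_pins):
    """cores: list of (name, test_time, pins_needed). Returns overall test time."""
    pending = sorted(cores, key=lambda c: -c[1])      # longest tests first
    running = []                                      # heap of (finish_time, pins)
    now, free = 0, total_pins
    while pending or running:
        started = True
        while started:                                # start everything that fits
            started = False
            for i, (_name, t, pins) in enumerate(pending):
                if pins <= free:
                    heapq.heappush(running, (now + t, pins))
                    free -= pins
                    del pending[i]
                    started = True
                    break
        finish, pins = heapq.heappop(running)         # advance to next completion
        now, free = finish, free + pins
    return now

cores = [("cpu", 40, 8), ("dsp", 25, 8), ("sram", 30, 4), ("usb", 10, 4)]
print("overall test time:", schedule(cores, 16))      # 55 for this toy input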

Patent
Rakesh K. Parimi1
11 Mar 2004
TL;DR: In this paper, a method, system and program product are disclosed for performing automatic testing of a system including a plurality of modules in which at least two modules lack a predetermined communication mechanism.
Abstract: A method, system and program product are disclosed for performing automatic testing of a system including a plurality of modules in which at least two modules lack a predetermined communication mechanism. In particular, the invention performs automated testing of such systems by finding and applying a logical correlation on test results for automated generation of a test map. Each test map includes a sequence of test scripts to be run by the modules and/or interface points. The test map generation can be based on test results from previous tests such that the invention learns, improves and obtains more functionality.

Junbeom Yoo, Su-Hyun Park, Hojung Bang, Tai-Hyo Kim, Sungdeok Cha1 
01 Jan 2004
TL;DR: A testing technique that can directly test FBD programs without generating intermediate code for testing purposes is proposed and applied to the DPPS (Digital Plant Protection System) RPS (Reactor Protection System) currently being developed at KNICS in Korea.
Abstract: In this paper, we propose a testing technique that can directly test FBD programs without generating intermediate code for testing purposes. Previous approaches to PLC-based software testing generate intermediate code, such as C, that is equivalent to the original FBD and target that intermediate code. In order to apply unit and integration testing techniques to FBDs, we transform the FBD program into a control flow graph and apply existing control-flow testing coverage criteria to the graph. With our approach, PLC-based software designed in the FBD language can be tested cost-efficiently because no intermediate code needs to be generated. To demonstrate the usefulness of the proposed method, we use a trip logic of the BP (Bistable Process) in the DPPS (Digital Plant Protection System) RPS (Reactor Protection System), which is currently being developed at KNICS in Korea.
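Once an FBD program has been flattened into a control flow graph, ordinary control-flow coverage can be measured on it; the sketch below, with an invented trip-logic-style graph, illustrates branch (edge) coverage bookkeeping and is not the authors' tool.

# Branch (edge) coverage on a control flow graph derived from an FBD program.
cfg = {
    "read_inputs":   ["compare"],
    "compare":       ["trip", "no_trip"],     # decision node: two branches
    "trip":          ["write_outputs"],
    "no_trip":       ["write_outputs"],
    "write_outputs": [],
}

all_edges = {(src, dst) for src, dsts in cfg.items() for dst in dsts}

executed_paths = [
    ["read_inputs", "compare", "trip", "write_outputs"],
    ["read_inputs", "compare", "no_trip", "write_outputs"],
]
covered = {(p[i], p[i + 1]) for p in executed_paths for i in range(len(p) - 1)}

print(f"branch (edge) coverage: {len(covered & all_edges) / len(all_edges):.0%}")
# -> 100% once both outcomes of the decision node have been exercised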