
Showing papers on "Acceptance testing published in 2006"


Journal ArticleDOI
TL;DR: The main characteristics of a good quality process are discussed, the key testing phases are surveyed and modern functional and model-based testing approaches are presented.

658 citations


Proceedings ArticleDOI
23 Jul 2006
TL;DR: The standard environment for testing with Selenium is described, as well as modifications the authors performed to incorporate their script pages into a wiki, and whether additional automated functional testing below the GUI layer was still necessary and/or appropriate.
Abstract: Ever in search of a silver bullet for automated functional testing for Web applications, many folks have turned to Selenium. Selenium is an open-source project for in-browser testing, originally developed by ThoughtWorks and now boasting an active community of developers and users. One of Selenium's stated goals is to become the de facto open-source replacement for proprietary tools such as WinRunner. Of particular interest to the agile community is that it offers the possibility of test-first design of Web applications, red-green signals for customer acceptance tests, and an automated regression test bed for the Web tier. This experience report describes the standard environment for testing with Selenium, as well as modifications we performed to incorporate our script pages into a wiki. It includes lessons we learned about continuous integration, script writing, and using the Selenium Recorder (renamed IDE). We also discuss how long it took to write and maintain the scripts in the iterative development environment, how close we came to covering all of the functional requirements with tests, how often the tests should be (and were) run, and whether additional automated functional testing below the GUI layer was still necessary and/or appropriate. While no silver bullet, Selenium has become a valuable addition to our agile testing toolkit, and is used on the majority of our Web application projects. It promises to become even more valuable as it gains widespread adoption and continues to be actively developed.
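
For readers unfamiliar with Selenium-style acceptance tests, a minimal sketch of what such a browser-driven customer test might look like is shown below, using the Selenium RC Java client and JUnit; the application URL, element locators, user story, and expected text are hypothetical, and the wiki-hosted script pages described in the report are not reproduced here.

```java
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class LoginAcceptanceTest {

    private Selenium selenium;

    @Before
    public void startBrowser() {
        // Assumes a Selenium RC server on localhost:4444 and the web app under test on port 8080.
        selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost:8080/");
        selenium.start();
    }

    @Test
    public void registeredUserCanLogIn() {
        // Hypothetical story: a registered user logs in and sees a welcome message.
        selenium.open("/login");
        selenium.type("id=username", "alice");
        selenium.type("id=password", "secret");
        selenium.click("id=submit");
        selenium.waitForPageToLoad("30000");
        assertTrue(selenium.isTextPresent("Welcome, alice"));
    }

    @After
    public void stopBrowser() {
        selenium.stop();
    }
}
```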

121 citations


ReportDOI
01 Aug 2006
TL;DR: In this paper, the authors define a complete body of abuse tests intended to simulate actual use and abuse conditions that may be beyond the normal safe operating limits experienced by electrical energy storage systems used in electric and hybrid electric vehicles.
Abstract: This manual defines a complete body of abuse tests intended to simulate actual use and abuse conditions that may be beyond the normal safe operating limits experienced by electrical energy storage systems used in electric and hybrid electric vehicles. The tests are designed to provide a common framework for abuse testing various electrical energy storage systems used in both electric and hybrid electric vehicle applications. The manual incorporates improvements and refinements to test descriptions presented in the Society of Automotive Engineers Recommended Practice SAE J2464 "Electric Vehicle Battery Abuse Testing", including adaptations to abuse tests to address hybrid electric vehicle applications and other energy storage technologies (i.e., capacitors). These (possibly destructive) tests may be used as needed to determine the response of a given electrical energy storage system design under specifically defined abuse conditions. This manual does not provide acceptance criteria as a result of the testing, but rather provides results that are accurate and fair and, consequently, comparable to results from abuse tests on other similar systems. The tests described are intended for abuse testing any electrical energy storage system designed for use in electric or hybrid electric vehicle applications, whether it is composed of batteries, capacitors, or a combination of the two.

106 citations


Journal ArticleDOI
Feng Li1, Yasunori Hashimura1, Robert Pendleton1, Jean Harms1, Erin Collins1, Brian Lee1 
TL;DR: A systematic approach is described to develop a bioreactor scale‐down model and to characterize a cell culture process for recombinant protein production in CHO cells to demonstrate robustness of manufacturing processes.
Abstract: The objective of process characterization is to demonstrate robustness of manufacturing processes by understanding the relationship between key operating parameters and final performance. Technical information from the characterization study is important for subsequent process validation, and this has become a regulatory expectation in recent years. Since performing the study at the manufacturing scale is not practically feasible, development of scale-down models that represent the performance of the commercial process is essential to achieve reliable process characterization. In this study, we describe a systematic approach to develop a bioreactor scale-down model and to characterize a cell culture process for recombinant protein production in CHO cells. First, a scale-down model using 2-L bioreactors was developed on the basis of the 2000-L commercial scale process. Profiles of cell growth, productivity, product quality, culture environments (pH, DO, pCO2), and level of metabolites (glucose, glutamine, lactate, ammonia) were compared between the two scales to qualify the scale-down model. The key operating parameters were then characterized in single-parameter ranging studies and an interaction study using this scale-down model. Appropriate operation ranges and acceptance criteria for certain key parameters were determined to ensure the success of process validation and the process performance consistency. The process worst-case condition was also identified through the interaction study.

103 citations


Patent
Paul M. Seckendorf1, Kenny Fok1
13 Mar 2006
TL;DR: In this paper, a product acceptance test application disposed on the wireless device and including simulated communications representative of actual communications with a wireless network is presented, and a communications processing engine is operable to process the simulated communications.
Abstract: Apparatus, methods, and programs for testing the communications processing ability, and determining product acceptance, of a wireless device. Embodiments include a product acceptance test application disposed on the wireless device and including simulated communications representative of actual communications with a wireless network. A communications processing engine disposed on the wireless device is operable to process the simulated communications, and thereby generates product acceptance data associated with a product acceptance decision.
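
As an illustration only (the patent publishes no code), the arrangement described above might be sketched as follows; every type and method name here is hypothetical.

```java
import java.util.List;

/** Hypothetical sketch of an on-device product acceptance test driven by simulated communications. */
public class ProductAcceptanceTest {

    /** Stand-in for the device's communications processing engine. */
    interface CommunicationsProcessingEngine {
        boolean process(byte[] message);   // true if the simulated message was handled correctly
    }

    /** Feeds a set of simulated network messages to the engine and derives the acceptance decision. */
    public static boolean runAcceptance(CommunicationsProcessingEngine engine,
                                        List<byte[]> simulatedMessages) {
        int passed = 0;
        for (byte[] message : simulatedMessages) {
            if (engine.process(message)) {
                passed++;
            }
        }
        // Product acceptance decision: here, simply require every simulated exchange to succeed.
        return passed == simulatedMessages.size();
    }
}
```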

58 citations


Proceedings ArticleDOI
28 May 2006
TL;DR: GridUnit is introduced, an extension of the widely adopted JUnit testing framework, able to automatically distribute the execution of software tests on a computational grid with minimum user intervention, and provides a cost-effectiveness improvement to the software testing experience.
Abstract: Software testing is a fundamental part of system development. As software grows, its test suite becomes larger and its execution time may become a problem for software developers. This is especially the case for agile methodologies, which preach a short develop/test cycle. Moreover, due to the increasing complexity of systems, there is the need to test software in a variety of environments. In this paper, we introduce GridUnit, an extension of the widely adopted JUnit testing framework, able to automatically distribute the execution of software tests on a computational grid with minimum user intervention. Experiments conducted with this solution have shown a speed-up of almost 70x, reducing the duration of the test phase of a synthetic application from 24 hours to less than 30 minutes. The solution does not require any source-code modification, hides the grid complexity from the user and provides a cost-effectiveness improvement to the software testing experience.
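
Because GridUnit extends JUnit without requiring source changes, the tests it distributes are ordinary JUnit tests; a trivial sketch of the kind of test class that could be farmed out to grid nodes follows (the class under test is hypothetical).

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test; GridUnit's contribution is distributing the execution of
// ordinary JUnit tests like those below across grid machines, not changing how they are written.
class PriceCalculator {
    double finalPrice(double total) {
        return total >= 100.0 ? total * 0.9 : total;   // 10% discount from 100 upwards
    }
}

public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscountAtOneHundred() {
        assertEquals(90.0, new PriceCalculator().finalPrice(100.0), 0.001);
    }

    @Test
    public void leavesSmallOrdersUndiscounted() {
        assertEquals(50.0, new PriceCalculator().finalPrice(50.0), 0.001);
    }
}
```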

57 citations



Proceedings ArticleDOI
29 Aug 2006
TL;DR: Interviews with test managers focused on the amount of resources spent on testing, how the testing is conducted, and the knowledge of the personnel in the test organizations; the data indicate that the overall test maturity is low.
Abstract: This paper presents data from a study of the current state of practice of software testing. Test managers from twelve different software organizations were interviewed. The interviews focused on the amount of resources spent on testing, how the testing is conducted, and the knowledge of the personnel in the test organizations. The data indicate that the overall test maturity is low. Test managers are aware of this but have trouble improving. One problem is that the organizations are commercially successful, suggesting that products must already be "good enough." Also, the current lack of structured testing in practice makes it difficult to quantify the current level of maturity and thereby articulate the potential gain from increasing testing maturity to upper management and developers.

55 citations


Proceedings ArticleDOI
17 Sep 2006
TL;DR: The MORABIT project realizes such an infrastructure for built-in-test (BIT) and extends the BIT concepts to allow for a smooth integration of the testing process and the original business functionality execution.
Abstract: Runtime testing is important for improving the quality of software systems. This holds true especially for systems that cannot be completely assembled at development time, such as mobile or ad-hoc systems. The concepts of built-in-test (BIT) can be used to cope with runtime testing, but to our knowledge there does not exist an implemented infrastructure for BIT. The MORABIT project realizes such an infrastructure and extends the BIT concepts to allow for a smooth integration of the testing process and the original business functionality execution. In this paper, the requirements on the infrastructure and our solution are presented.
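
The MORABIT infrastructure itself is not specified in this abstract, but the built-in-test idea can be sketched roughly as follows: a component carries its own test cases, and the runtime executes them before (or interleaved with) normal business calls. All names below are illustrative, not the MORABIT API.

```java
// Illustrative built-in-test (BIT) sketch; type and method names are hypothetical.
interface BitTestable {
    /** Runs the test cases shipped inside the component; true if all of them pass. */
    boolean runBuiltInTests();
}

class FeeComponent implements BitTestable {

    int computeFee(int amount) {
        return amount / 50;              // 2% fee expressed in integer arithmetic
    }

    @Override
    public boolean runBuiltInTests() {
        // Self-check executed at runtime, e.g. when the component joins an ad-hoc assembly.
        return computeFee(200) == 4;     // 2% of 200 units is 4
    }
}

class RuntimeContainer {
    /** Runtime acceptance: bind the component only if its built-in tests pass. */
    boolean admit(BitTestable component) {
        return component.runBuiltInTests();
    }
}
```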

54 citations


Proceedings ArticleDOI
21 Sep 2006
TL;DR: The objective of this qualitative study was to understand the complex practice of software testing and based on this knowledge, to develop process improvement propositions that could concurrently reduce development and testing costs and improve software quality.
Abstract: The objective of this qualitative study was to understand the complex practice of software testing, and based on this knowledge, to develop process improvement propositions that could concurrently reduce development and testing costs and improve software quality. First, a survey of testing practices was conducted and 26 organizational units (OUs) were interviewed. From this sample, five OUs were further selected for an in-depth case study. The study used grounded theory as its research method and the data was collected from 41 theme-based interviews. The analysis yielded improvement propositions that included enhanced testability of software components, efficient communication and interaction between development and testing, early involvement of testing, and risk-based testing. The connective and central improvement proposition was that testing ought to adapt to the business orientation of the OU. Other propositions were integrated around this central proposition. The results of this study can be used in improving development and testing processes.

49 citations


Proceedings ArticleDOI
23 Jul 2006
TL;DR: Using an experimental method, it is found that customers, partnered with an IT professional, are able to use executable acceptance test (storytest)-based specifications to communicate and validate functional business requirements.
Abstract: Using an experimental method, we found that customers, partnered with an IT professional, are able to use executable acceptance test (storytest)-based specifications to communicate and validate functional business requirements. However, learnability and ease of use analysis indicates that an average customer may experience difficulties learning the technique. Several additional propositions are evaluated and usage observations made.

Proceedings ArticleDOI
22 Oct 2006
TL;DR: This paper presents an approach that has been used to address security when running projects according to agile principles, and misuse stories have been added to user stories to capture malicious use of the application.
Abstract: In this paper, we present an approach that we have used to address security when running projects according to agile principles. Misuse stories have been added to user stories to capture malicious use of the application. Furthermore, misuse stories have been implemented as automated tests (unit tests, acceptance tests) in order to perform security regression testing. Penetration testing, system hardening and securing deployment have been started in early iterations of the project.
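
A misuse story turned into an automated test might look like the JUnit sketch below; the application facade, its methods, and the injection payload are purely illustrative, and the in-memory service is a stand-in for the real application under test.

```java
import java.util.Map;
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Misuse story (illustrative): "As an attacker, I try to log in by injecting SQL through the
// user-name field." The test pins the expected defensive behaviour so it can run in every
// build as a security regression test.
public class LoginMisuseStoryTest {

    static class AuthenticationService {
        private final Map<String, String> users = Map.of("alice", "correct-password");

        boolean login(String userName, String password) {
            // Real code would use parameterized queries; a plain lookup is immune to the payload.
            return password.equals(users.get(userName));
        }
    }

    @Test
    public void sqlInjectionInUserNameDoesNotGrantAccess() {
        AuthenticationService auth = new AuthenticationService();
        assertFalse("injection payload must not authenticate",
                auth.login("' OR '1'='1", "anything"));
    }

    @Test
    public void legitimateUserStillLogsIn() {
        assertTrue(new AuthenticationService().login("alice", "correct-password"));
    }
}
```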

Proceedings Article
01 Jan 2006
TL;DR: OntoTest is presented – an ontology of software testing, which has been developed to support acquisition, organization, reuse and sharing of testing knowledge, and is interested in defining a common well-established vocabulary for testing.
Abstract: A growing interest on the establishment of ontologies has been observed for the most different knowledge domains. This work presents OntoTest – an ontology of software testing, which has been developed to support acquisition, organization, reuse and sharing of testing knowledge. OntoTest has been built with basis on ISO/IEC 12207 standard and intends to explore the different aspects involved in the testing activity. Specifically, we are interested in defining a common well-established vocabulary for testing, which can be useful to develop supporting tools as well as to increase the inter-operability among them.

Book
01 May 2006
TL;DR: This comprehensive resource provides step-by-step guidelines, checklists, and templates for each testing activity, as well as a self-assessment that helps readers identify the sections of the book that respond to their individual needs
Abstract: Written by the founder and executive director of the Quality Assurance Institute, which sponsors the most widely accepted certification program for software testing. Software testing is a weak spot for most developers, and many have no system in place to find and correct defects quickly and efficiently. This comprehensive resource provides step-by-step guidelines, checklists, and templates for each testing activity, as well as a self-assessment that helps readers identify the sections of the book that respond to their individual needs. Covers the latest regulatory developments affecting software testing, including Sarbanes-Oxley Section 404, and provides guidelines for agile testing and testing for security, internal controls, and data warehouses. CD-ROM with all checklists and templates saves testers countless hours of developing their own test documentation. Note: CD-ROM/DVD and other supplementary materials are not included as part of the eBook file.

Proceedings ArticleDOI
17 Sep 2006
TL;DR: This paper presents an approach to self-testability which encompasses test case generation and test evaluation and is an integration of the self-testing COTS components (STECC) method and the metamorphic testing approach.
Abstract: An approach to improve the testability of a program is to augment that program with mechanisms specific to testing tools, i.e. to enhance the program with self-testability. In particular, this can decrease the amount of information which a human tester needs to thoroughly test and can simplify the testing process. Most approaches to self-testability, however, assume a given test suite and implement means for test execution. Few approaches address test case generation and test evaluation. This paper presents an approach to self-testability which encompasses test case generation and test evaluation. The approach presented is an integration of the self-testing COTS components (STECC) method and the metamorphic testing approach.
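
The metamorphic half of this combination is easy to illustrate: a metamorphic test checks a relation between the outputs of two related executions rather than comparing one output against a known expected value, so no oracle is needed. A minimal sketch (illustrating the metamorphic idea only, not the STECC machinery) follows.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Metamorphic testing sketch: instead of asserting sin(x) against a precomputed expected value,
// assert the metamorphic relation sin(x) == sin(PI - x) for many inputs.
public class SineMetamorphicTest {

    @Test
    public void sineSatisfiesSupplementaryAngleRelation() {
        for (double x = 0.0; x < Math.PI; x += 0.01) {
            assertEquals(Math.sin(x), Math.sin(Math.PI - x), 1e-9);
        }
    }
}
```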

01 Jan 2006
TL;DR: Various assessment methods are discussed, factors that may influence the interpretation of their results are pointed out, and recent studies that have explored the relationships between them are reviewed.
Abstract: There are many tasks in radiology departments which involve assessment of image quality. Equipment purchasing is partly based on performance specifications, acceptance testing verifies that the system fulfils the specified performance criteria, constancy testing attempts to notice any changes in the imaging system, clinical testing concentrates on the fulfilment of clinical needs, and optimisation processes attempt to find the best ways to use the imaging system for clinical purposes. These different tasks are best performed by different assessment methods, and the outcome is often referred to as technical (or physical) image quality or clinical image quality, according to the method used. Although establishing the link between physical image quality measures and clinical utility has been pursued for decades, the relationship between the results of physical measurements, phantom evaluations and clinical performance is not fully understood. This report briefly discusses various assessment methods, points out factors that may influence the interpretation of their results, and reviews recent studies that have explored the relationships between them.

Proceedings ArticleDOI
23 May 2006
TL;DR: Results of experiments with undergraduate students demonstrate the benefits of the ATDD approach using EasyAccept and show that this tool can also help to teach and train good testing and development practices.
Abstract: This paper introduces EasyAccept, a tool to create and run client-readable acceptance tests easily, and describes how it can be used to allow a simple but powerful acceptance-test driven software development (ATDD) approach. EasyAccept takes acceptance tests enclosing business rules and a Facade to access the software under development, and checks if the outputs of the software's execution match expected results from the tests. Driven by EasyAccept runs, software can be constructed with focus, control and correctness, since the acceptance tests also serve as automated regression tests. Results of experiments with undergraduate students demonstrate the benefits of the ATDD approach using EasyAccept and show that this tool can also help to teach and train good testing and development practices.
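
As a rough illustration of this workflow, an acceptance script and the corresponding Facade might look like the sketch below; the script syntax is approximate and the banking domain is invented, so treat both as assumptions rather than EasyAccept's exact interface.

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

/**
 * Facade exposing the business operations that a client-readable acceptance script drives.
 * An EasyAccept-style script (syntax approximate) might read:
 *
 *   createAccount id=1
 *   expect 0.00 getBalance id=1
 *   deposit id=1 amount=150.00
 *   expect 150.00 getBalance id=1
 *
 * The tool maps each script command to the Facade method of the same name and compares the
 * returned value with the expected one, so the same script doubles as a regression test.
 */
public class BankFacade {

    private final Map<Integer, Double> balances = new HashMap<>();

    public void createAccount(int id) {
        balances.put(id, 0.0);
    }

    public void deposit(int id, double amount) {
        balances.put(id, balances.get(id) + amount);
    }

    public String getBalance(int id) {
        return String.format(Locale.US, "%.2f", balances.get(id));
    }
}
```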

Journal ArticleDOI
TL;DR: The portable seismic property analyzer (PSPA) as discussed by the authors is an example of such a device, which can be used to measure the modulus of each pavement layer shortly after placement.
Abstract: Existing practices for acceptance of hot-mix asphalt are based on parameters such as adequate density, adequate thickness, and adequate air voids of the placed and compacted materials. Current mechanistic routines for structural design of flexible pavements consider mainly the modulus of each layer. Therefore, a procedure to measure the modulus of each pavement layer shortly after placement is highly desired. Laboratory tests on specimens prepared from material retrieved during construction, such as the simple-performance tests, can be used to determine these moduli. However, these methods are time-consuming, and the equipment costs are high. In addition, prepared specimens may not be representative of as-placed materials. Seismic tests are more practical because they are rapid to perform and are nondestructive, immediate results are obtained, and the material is tested in its natural state. The portable seismic property analyzer (PSPA) is an example of such a device. The PSPA is presented as a real-time ...

Journal Article
TL;DR: An experimental study of two groups of undergraduate students (seniors), one performing unit testing after development in the conventional way and the other extracting test cases before implementation as in agile programming, showed that the software developed using agile programming had fewer faults.
Abstract: In this paper, we conduct an experimental study of two groups of undergraduate students (seniors): one group develops software in the conventional way, performing unit testing after development, while the other extracts test cases before implementation, as in agile programming. Both groups developed the same software using an incremental and iterative approach. The results showed that the software had fewer faults when developed using agile programming; the quality of the software was also better and productivity increased. Keywords: Test-driven development, agile programming, case study

1. Introduction. Test-Driven Development (TDD) is a technique that involves writing test cases first and then implementing the code necessary to pass the tests. The goal is to obtain immediate feedback and thereby construct the program. This technique is heavily emphasized in agile or Extreme Programming [1, 2, 3]. The process of designing test cases prior to implementation is termed the "Test-First" approach. We consider unit testing only (by the programmers) and do not address integration or acceptance testing; however, we do take into account the number of faults found while SQA performs formal unit testing, integration, and acceptance testing in order to measure the quality of the software produced. The first step in this approach is to quickly add a test, basically just enough code to fail. We then run the tests, generally all of them, although to keep the cycle short only a subset may be run, to make sure that the new test does fail. Next, we update the code to pass the new test and run the tests again. If they fail, we update and retest; otherwise, we add the next piece of functionality. There are no particular rules for forming the test cases, but more tests are added throughout the implementation. Refactoring should also be performed in agile programming; that is, programmers alternate between adding new tests and functionality and improving the code's consistency. Refactoring is done to improve the readability of the code, change the design, or remove unwanted code. There are various advantages to employing TDD. Programmers know immediately whether a new feature has been added in accordance with the specifications. The process is performed in small steps and is therefore easier to manage. Fewer faults tend to be found during acceptance testing, and maintenance can be viewed as another increment or addition of a feature, which makes it easier. There is no separate design phase, and the software is built through the process of refactoring. In short, TDD improves programmer productivity and software quality. A number of studies [4, 5, 6, 7] have been performed to test the effectiveness of TDD, and the results give mixed opinions. We perform an experiment with two groups of students, one developing software in the conventional way of testing it after implementation and the other through TDD. In both groups, test cases were developed by the programmers and regression testing was performed; the only difference is that in TDD the test cases are written prior to implementation and run throughout production, whereas in the conventional way they are written and run after implementation. Each group consisted of 9 undergraduate students, and the time period for the whole study was 3 months.
We investigate in this paper, through experimental studies, the promise of the "Test-First" strategy emphasized in agile programming.
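
To make the test-first cycle described above concrete, here is a minimal JUnit sketch: the test is written first and fails (red), then just enough production code is added to make it pass (green), after which the code is refactored. The class and business rule are hypothetical.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1 (red): write the test before any implementation exists, so it fails at first.
public class ShippingCostTest {

    @Test
    public void ordersOverFiftyShipFree() {
        assertEquals(0, ShippingCost.forOrderTotal(60));
    }

    @Test
    public void smallOrdersPayFlatRate() {
        assertEquals(5, ShippingCost.forOrderTotal(20));
    }
}

// Step 2 (green): add just enough production code to satisfy the tests, then refactor.
class ShippingCost {
    static int forOrderTotal(int total) {
        return total > 50 ? 0 : 5;
    }
}
```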

Proceedings ArticleDOI
15 Nov 2006
TL;DR: An approach adapting formal compositional analysis techniques to realize self-awareness and self-adaptation in embedded systems with respect to real-time properties such as latency constraints, buffer sizes, etc is presented.
Abstract: Integrating new functionality into complex embedded hard real-time systems requires considerable engineering effort. Emerging formal analysis methodologies and tools from real-time research assist system engineers solving this integration problem. For future organic computer systems, however, it is desirable to integrate these approaches into running systems, enabling them to autonomously perform e.g. online acceptance tests and self-optimization in case of system or environmental changes. This results in high system robustness and extensibility without explicit engineering effort. In this paper, we present an approach adapting formal compositional analysis techniques to realize self-awareness and self-adaptation in embedded systems with respect to real-time properties such as latency constraints, buffer sizes, etc. We introduce a framework for distributed online performance analysis running on embedded real-time systems. Based on this framework we implement an acceptance test for the integration of new functionality into an existing embedded real-time system. Furthermore, we present an online optimization algorithm based on the same framework. In a case study, we demonstrate the applicability of the approach and show that online optimization can increase the acceptance rate with reasonable computational effort.
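
The compositional analysis used in the paper cannot be reconstructed from the abstract, but the flavour of an online acceptance test can be conveyed with a much simpler classical check: admit a new periodic task only if the rate-monotonic utilization bound still holds. The sketch below is a deliberate simplification and stand-in, not the framework described above.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified online acceptance test: admit a new periodic task only if total processor
 * utilization stays under the classical rate-monotonic bound n(2^(1/n) - 1).
 */
public class AdmissionControl {

    static class Task {
        final double wcet;     // worst-case execution time
        final double period;   // activation period
        Task(double wcet, double period) { this.wcet = wcet; this.period = period; }
    }

    private final List<Task> admitted = new ArrayList<>();

    /** Returns true and integrates the task if the system remains schedulable. */
    public boolean tryAdmit(Task candidate) {
        double utilization = candidate.wcet / candidate.period;
        for (Task t : admitted) {
            utilization += t.wcet / t.period;
        }
        int n = admitted.size() + 1;
        double bound = n * (Math.pow(2.0, 1.0 / n) - 1.0);
        if (utilization <= bound) {
            admitted.add(candidate);   // acceptance test passed: integrate the new functionality
            return true;
        }
        return false;                  // reject: real-time guarantees could be violated
    }
}
```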

Journal ArticleDOI
TL;DR: The author presents 10 basic, easy steps that project teams can take to improve requirements to the point where they are good enough, noting that every project is different and some steps may not be right in other situations.
Abstract: Project teams can take several small, easy steps to improve requirements to the point where they're good enough. But every project is different. Your team might need to take steps that wouldn't be right in other situations. The author lists 10 basic steps to improve requirements.

01 Jun 2006
TL;DR: In this paper, the authors describe the search for a replacement for the nuclear density gauge and summarize new techniques that could potentially be used for acceptance of flexible pavement construction, including the Vertek and AquaPro moisture probes, the portable falling weight deflectometer, the dynamic cone penetrometer, instrumented vibratory rollers, automated proof rollers, ground-penetrating radar, and infrared imaging.
Abstract: Historically in Texas, acceptance of flexible pavement layers has been based upon density control, where the density of the layer must meet or exceed some minimum value. The Texas Department of Transportation (TxDOT) utilizes the nuclear density gauge to perform this acceptance testing for subgrade and base layers. However, TxDOT desires to find a non-nuclear method for measuring density; additionally, TxDOT desires to investigate the feasibility of a fundamental shift from density to mechanistic properties for acceptance. This report describes the search for a replacement for the nuclear gauge and also summarizes new techniques that could potentially be used for acceptance of flexible pavement construction. Currently no direct replacement exists for the nuclear density gauge; the most similar devices are the Pavement Quality Indicator and the Pavetracker Plus. However, these two devices are only applicable to hot-mix asphalt layers and are best suited for measuring differential density. Several systems exist that are not density-based that potentially could control flexible pavement construction. These systems include the Vertek and AquaPro moisture probes, the portable falling weight deflectometer, the dynamic cone penetrometer, instrumented vibratory rollers, automated proof rollers, ground-penetrating radar, and infrared imaging. The latter four systems provide near 100% coverage and could serve as screening tools to identify where to perform spot tests. This report describes preliminary testing of the most promising systems and outlines a framework for continued testing with the most promising devices for use during the remainder of this project.

Proceedings ArticleDOI
16 Oct 2006
TL;DR: The components of the toolset and model-based testing together with its usage are introduced and the applicability of the presented toolset is evaluated in a very large test project: interoperability testing of the CRM system at one of the largest companies in Europe.
Abstract: Interoperability testing of heterogeneous business software systems is one of the most important tasks in the process of quality improvement and reduction of operational costs. To guarantee competitiveness and reduce costs, we benefit from relocating parts of the testing process to India. To organize the offshore process and increase efficiency we developed an offshore framework which contains a toolset for standardized test specification methods. Model-based testing provides a systematic, transparent method for deriving, describing and executing tests. This paper introduces the components of the toolset and model-based testing together with its usage. The toolset represents tests on model level and generates executable tests from these. It is based on standardized testing methods such as U2TP and TTCN-3 and operates in combination with UML models. The applicability of the presented toolset is evaluated in case studies in a very large test project: interoperability testing of the CRM system at one of the largest companies in Europe.

Proceedings ArticleDOI
21 Jul 2006
TL;DR: This paper presents a statistically based approach for testing stochastic algorithms based on hypothesis testing, and describes the earlier experience with using automated testing for this system, in which it took a conventional approach, and the resulting difficulties.
Abstract: Automated tests can play a key role in ensuring system quality in software development. However, significant problems arise in automating tests of stochastic algorithms. Normally, developers write tests that simply check whether the actual result is equal to the expected result (perhaps within some tolerance). But for stochastic algorithms, restricting ourselves in this way severely limits the kinds of tests we can write: either to trivial tests, or to fragile and hard-to-understand tests that rely on a particular seed for a random number generator. A richer and more powerful set of tests is possible if we accommodate tests of statistical properties of the results of running an algorithm many times. The work described in this paper has been done in the context of a real-world application, a large-scale simulation of urban development designed to inform major decisions about land use and transportation. We describe our earlier experience with using automated testing for this system, in which we took a conventional approach, and the resulting difficulties. We then present a statistically based approach for testing stochastic algorithms based on hypothesis testing. Three different ways of constructing such tests are given, which cover the most commonly used distributions. We evaluate these tests in terms of frequency of failing when they should and when they should not, and conclude with guidelines and practical suggestions for implementing such unit tests for other stochastic applications.
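
One of the simplest instances of such a statistically based test is a z-test on a sample proportion: run the stochastic code many times, compare the observed frequency against the expected probability, and fail only if the standardized deviation is far outside what chance would produce. The sketch below is illustrative, not taken from the paper; the coin-flip generator stands in for the real stochastic algorithm, and the threshold is chosen for a very low false-failure rate.

```java
import java.util.Random;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class StochasticAlgorithmTest {

    // Stand-in for the stochastic algorithm under test: returns true with probability p.
    private boolean simulate(Random rng, double p) {
        return rng.nextDouble() < p;
    }

    @Test
    public void eventFrequencyMatchesExpectedProbability() {
        final double p = 0.3;
        final int n = 100_000;
        Random rng = new Random();           // deliberately unseeded: no reliance on a magic seed
        int successes = 0;
        for (int i = 0; i < n; i++) {
            if (simulate(rng, p)) {
                successes++;
            }
        }
        // z-statistic of the observed proportion under the null hypothesis "probability is p".
        double observed = successes / (double) n;
        double standardError = Math.sqrt(p * (1 - p) / n);
        double z = (observed - p) / standardError;
        // |z| > 4.5 occurs by chance only a few times in a million runs, so spurious failures
        // are rare while genuine bias in the algorithm is still caught.
        assertTrue("observed proportion deviates too much: z = " + z, Math.abs(z) < 4.5);
    }
}
```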

Proceedings ArticleDOI
21 Apr 2006
TL;DR: In this article, the authors present an approach for the verification of fiber optic components for a space flight mission in an ever changing commercial photonics industry, with some basic generic information about many space flight requirements.
Abstract: "Qualification" of fiber optic components holds a very different meaning than it did ten years ago In the past, qualification meant extensive prolonged testing and screening that led to a programmatic method of reliability assurance For space flight programs today, the combination of using higher performance commercial technology, with shorter development schedules and tighter mission budgets makes long term testing and reliability characterization unfeasible In many cases space flight missions will be using technology within years of its development and an example of this is fiber laser technology Although the technology itself is not a new product the components that comprise a fiber laser system change frequently as processes and packaging changes occur Once a process or the materials for manufacturing a component change, even the data that existed on its predecessor can no longer provide assurance on the newer version In order to assure reliability during a space flight mission, the component engineer must understand the requirements of the space flight environment as well as the physics of failure of the components themselves This can be incorporated into an efficient and effective testing plan that "qualifies" a component to specific criteria defined by the program given the mission requirements and the component limitations This requires interaction at the very initial stages of design between the system design engineer, mechanical engineer, subsystem engineer and the component hardware engineer Although this is the desired interaction what typically occurs is that the subsystem engineer asks the components or development engineers to meet difficult requirements without knowledge of the current industry situation or the lack of qualification data This is then passed on to the vendor who can provide little help with such a harsh set of requirements due to high cost of testing for space flight environments This presentation is designed to guide the engineers of design, development and components, and vendors of commercial components with how to make an efficient and effective qualification test plan with some basic generic information about many space flight requirements Issues related to the ~ physics of failure, acceptance criteria and lessons learned will also be discussed to assist with understanding how to approach a space flight mission in an ever changing commercial photonics industry

Journal ArticleDOI
TL;DR: The results of a study to evaluate the existing South Carolina Department of Transportation (SCDOT) hot-mix asphalt quality assurance specification are presented in this article, where acceptance test results from 39 different projects were analyzed to determine standard deviation values that were being obtained for asphalt content, air voids, voids in mineral aggregate, and density.
Abstract: The results of a study to evaluate the existing South Carolina Department of Transportation (SCDOT) hot-mix asphalt quality assurance specification are presented. When the existing specification was developed, assumed values were used for the standard deviations needed to establish the specification limits. Acceptance test results from 39 different projects were analyzed to determine standard deviation values that were being obtained for asphalt content, air voids, voids in mineral aggregate, and density. For each project, standard deviation values were calculated for each lot, and these were then pooled to obtain a typical value for the project. The target miss variability, that is, the ability of contractors to center their processes on the target, was also determined for each acceptance characteristic. The project and target miss values were then used to establish a typical process standard deviation value for each of the four acceptance characteristics. All these standard deviation values proved to be...
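
For reference, when the k lots on a project have equal sample sizes, pooling the lot standard deviations amounts to taking their root mean square; the formula below is the generic equal-weight version shown only to make the pooling step concrete, and it is an assumption rather than a quotation from the study.

```latex
s_{\mathrm{project}} = \sqrt{\frac{1}{k}\sum_{i=1}^{k} s_i^{2}}
```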

Journal ArticleDOI
TL;DR: In this article, the results of contractor-performed tests, originally performed for quality control purposes, are increasingly used in the acceptance decision in many states, including the Georgia Department of Transportation (GDOT).
Abstract: Quality assurance is the process by which highway construction elements are sampled and tested to ensure compliance with specifications and other project requirements. The results of contractor-performed tests, originally performed for quality control purposes, are increasingly used in the acceptance decision in many states. The Georgia Department of Transportation (GDOT) uses contractor-performed tests in the acceptance decision on acceptable corroboration of GDOT-performed tests. Statistical analyses were performed to assess differences between tests conducted on hot-mix asphalt concrete by GDOT and its contractors during the 2003 construction season. Measurements of gradation and asphalt content taken by both parties were compared both across all projects and on a project-by-project basis for projects large enough to meet sample size requirements for this type of analysis. Both tabular and graphic representations of data are used to interpret the results. Statistically significant differences occur in some cases; these differences are much more common when comparing variability of these measurements than with the means. At the project level, on most projects in which statistically significant differences occur, the GDOT value typically is larger.

Proceedings ArticleDOI
01 Jan 2006
TL;DR: In this paper, process data reconciliation based on the VDI 2048 guideline is presented, which allows the true process parameters to be determined with a statistical probability of 95% by considering closed material, mass, and energy balances following the Gaussian correction principle.
Abstract: The determination of the thermal reactor power is traditionally done by establishing the heat balance:
• for a boiling water reactor (BWR), at the interface of the reactor control volume and heat cycle;
• for a pressurized water reactor (PWR), at the interface of the steam generator control volume and turbine island on the secondary side.
The uncertainty of these traditional methods is not easy to determine and can be in the range of several percent. Technical and legal regulations (e.g. 10CFR50) cover an estimated instrumentation error of up to 2% by increasing the design thermal reactor power for emergency analysis to 102% of the licensed thermal reactor power. Basically, the licensee has the duty to warrant at any time operation inside the analysed region for thermal reactor power. This is normally done by keeping the indicated reactor power at the licensed 100% value. A better way is to use a method which allows a continuous warranty evaluation. The quantification of the level of fulfilment of this warranty is only achievable by a method which:
• is independent of single measurement accuracies;
• results in a certified quality of single process values and of the total heat cycle analysis;
• leads to complete results, including the 2-sigma deviation, especially for thermal reactor power.
This method, called 'process data reconciliation based on the VDI 2048 guideline', is presented here [1, 2]. The method allows the true process parameters to be determined with a statistical probability of 95% by considering closed material, mass, and energy balances following the Gaussian correction principle. The amount of redundant process information and the complexity of the process improve the final results, which represent the most probable state of the process with minimized uncertainty according to VDI 2048. Hence, calibration and control of the thermal reactor power are possible with low effort but high accuracy, independent of single measurement accuracies. Furthermore, VDI 2048 describes the quality control of important process parameters. Applied to the thermal reactor power, the statistical certainty of warranting the allowable value can be quantified. This quantification allows keeping a safety margin in agreement with the authority. This paper presents the operational application of this method at an operating plant and describes the additional use of process data reconciliation for acceptance tests, power recapture, and system and component diagnosis.
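
For readers unfamiliar with the Gaussian correction principle, the generic weighted least-squares formulation behind such reconciliation (a textbook form, not quoted from VDI 2048) seeks corrections v to the measurement vector x, with covariance matrix S, that satisfy the balance equations f(x + v) = 0:

```latex
\min_{v}\; v^{\mathsf{T}} S^{-1} v
\quad \text{s.t.} \quad f(x+v)=0,
\qquad
v \approx -S F^{\mathsf{T}}\!\left(F S F^{\mathsf{T}}\right)^{-1} f(x),
\qquad F = \frac{\partial f}{\partial x}\bigg|_{x}
```

The reconciled estimate is then x + v, and its reduced covariance yields the 2-sigma (95%) uncertainty referred to in the abstract.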

Journal ArticleDOI
TL;DR: Based on acceptance criteria developed for the on-surface testing of the ATLAS Barrel Toroid coils and the limited time available for the tests, a compressed test program was proposed and realized; in only a few cases were additional tests required to justify coil performance and acceptance.
Abstract: Each superconducting coil of the ATLAS Barrel Toroid has to pass the commissioning tests on surface before the installation in the underground cavern for the ATLAS Experiment at CERN. Particular acceptance criteria have been developed to characterize the individual coils during the on-surface testing. Based on these criteria and the limited time of the test, a compressed test program was proposed and realized. In only a few cases some additional tests were required to justify the coil performance and acceptance. In this paper, the analysis of the test results is presented and discussed with respect to the acceptance criteria. Some differences in the parameters found between the identical coils are analyzed in relation to coil production features.

Journal ArticleDOI
TL;DR: In this article, the authors present the experience gained at the Integration and Tests Laboratory (LIT), National Institute for Space Research (INPE), in Sao Jose dos Campos, SP, from the thermal vacuum tests campaign of the SATEC protoflight spacecraft, held from February 6th to 12th, 2003.