
Showing papers on "Verification and validation of computer simulation models" published in 2003


Journal ArticleDOI
TL;DR: Verification and validation of computational simulations are the primary methods for building and quantifying confidence in modeling and simulation.
Abstract: Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.

735 citations


Journal ArticleDOI
TL;DR: A procedure was proposed for microscopic simulation model calibration and validation, and an example case study is presented with real-world traffic data from Route 50 on Lee Jackson Highway in Fairfax, Virginia; the results indicate that the proposed procedure appears to properly calibrate and validate the VISSIM simulation model for the test-bed network.
Abstract: Microscopic simulation models have been widely used in both transportation operations and management analyses because simulation is safer, less expensive, and faster than field implementation and testing. While these simulation models can be advantageous to engineers, the models must be calibrated and validated before they can be used to provide meaningful results. However, the transportation profession has not established any formal or consistent guidelines for the development and application of these models. In practice, simulation model-based analyses have often been conducted under default parameter values or best-guessed values. This is mainly due to either difficulties in field data collection or lack of a readily available procedure for simulation model calibration and validation. A procedure was proposed for microscopic simulation model calibration and validation, and an example case study is presented with real-world traffic data from Route 50 on Lee Jackson Highway in Fairfax, Virginia. The proposed procedure appears to properly calibrate and validate the VISSIM simulation model for the test-bed network.
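As a rough illustration of the calibrate-then-validate loop such a procedure implies, the sketch below (not taken from the paper; the run_simulation stub, parameter names, and travel-time figures are invented) searches candidate parameter sets against one day of field data and then checks the chosen set against a held-out day.

```python
import itertools
import statistics


def run_simulation(params, seed):
    """Hypothetical stand-in for one microscopic simulation run; returns an
    average corridor travel time in seconds."""
    base = 210.0 + 40.0 * params["headway"] - 15.0 * params["accel"]
    return base + 5.0 * ((seed * 37) % 7 - 3)  # crude run-to-run variability


def mean_abs_error(params, observations, seeds=range(5)):
    """Average several seeded runs and compare against field measurements."""
    sims = [run_simulation(params, s) for s in seeds]
    return abs(statistics.mean(sims) - statistics.mean(observations))


calibration_data = [228.0, 233.0, 225.0, 240.0]   # field travel times, day 1
validation_data = [231.0, 236.0, 229.0]           # held-out field data, day 2

# Step 1: calibration -- search candidate parameter sets against day-1 data.
grid = {"headway": [0.5, 0.9, 1.3], "accel": [2.5, 3.5]}
candidates = [dict(zip(grid, combo)) for combo in itertools.product(*grid.values())]
best = min(candidates, key=lambda p: mean_abs_error(p, calibration_data))

# Step 2: validation -- check the calibrated model against untouched day-2 data.
print("calibrated parameters:", best)
print("validation error (s): %.1f" % mean_abs_error(best, validation_data))
```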

233 citations


Journal ArticleDOI
TL;DR: This work supports research in the areas of simulation frameworks, benchmarking methodologies, analytic methods, and validation techniques, in order to help make quantitative evaluations of computer systems manageable.
Abstract: We focus on problems suited to the current evaluation infrastructure. The current limitations and trends in evaluation techniques are troublesome and could noticeably slow the rate of computer system innovation. New research has been recommended to help make quantitative evaluations of computer systems manageable. We support research in the areas of simulation frameworks, benchmarking methodologies, analytic methods, and validation techniques.

114 citations


Journal ArticleDOI
TL;DR: An application of formal methods for validation of flexible manufacturing systems controlled by distributed controllers using model-based verification of the execution control of function blocks following the new international standard IEC61499 is presented.
Abstract: This paper presents an application of formal methods for validation of flexible manufacturing systems controlled by distributed controllers. A software tool, Verification Environment for Distributed Applications (VEDA), has been developed for modeling and verification of distributed control systems. The tool provides an integrated environment for formal, model-based verification of the execution control of function blocks following the new international standard IEC 61499. The modeling is performed in a closed-loop way using manually developed models of plants and automatically generated models of controllers.

108 citations


Book ChapterDOI
TL;DR: For each of the metrics used in simulation-based verification, this paper presents a corresponding metric that is suitable for the setting of formal verification, and describes an algorithmic way to check it.
Abstract: In formal verification, we verify that a system is correct with respect to a specification. Even when the system is proven to be correct, there is still a question of how complete the specification is, and whether it really covers all the behaviors of the system. The challenge of making the verification process as exhaustive as possible is even more crucial in simulation-based verification, where the infeasible task of checking all input sequences is replaced by checking a test suite consisting of a finite subset of them. It is very important to measure the exhaustiveness of the test suite, and indeed, there has been extensive research in the simulation-based verification community on coverage metrics, which provide such a measure. It turns out that no single measure can be absolute, leading to the development of numerous coverage metrics whose usage is determined by industrial verification methodologies. On the other hand, prior research on coverage in formal verification has focused solely on state-based coverage. In this paper we adapt the work done on coverage in simulation-based verification to the formal-verification setting in order to obtain new coverage metrics. Thus, for each of the metrics used in simulation-based verification, we present a corresponding metric that is suitable for the setting of formal verification, and describe an algorithmic way to check it.
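To give a rough feel for what such a coverage metric measures, here is a toy sketch (the Kripke-style model, the invariant, and all names are invented; this is not the paper's algorithm): flip the labeling of one state at a time and see whether the verdict of the check changes; states whose mutation goes unnoticed are not covered by the specification.

```python
model = {
    "s0": {"succ": ["s1"], "p": True},
    "s1": {"succ": ["s2"], "p": True},
    "s2": {"succ": ["s2"], "p": True},
    "s3": {"succ": ["s3"], "p": False},   # unreachable from the initial state
}


def invariant_holds(m, init="s0"):
    """Check the property 'p holds in every state reachable from init'."""
    seen, stack = set(), [init]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if not m[s]["p"]:
            return False
        stack.extend(m[s]["succ"])
    return True


assert invariant_holds(model)  # the specification passes on the original model

uncovered = []
for name in model:
    mutant = {k: dict(v) for k, v in model.items()}
    mutant[name]["p"] = not mutant[name]["p"]    # perturb the labeling of one state
    if invariant_holds(mutant):                  # verdict unchanged -> state not covered
        uncovered.append(name)

print("states not covered by the property:", uncovered)   # here: ['s3']
```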

68 citations


Journal ArticleDOI
TL;DR: This paper presents a case study of a simulation model of an earthmoving operation with fairly complex control logic that was verified and validated by visualizing the operation in 3D.
Abstract: One of the primary impediments in the use of discrete-event simulation to plan and design construction operations is that decision-makers often do not have the means, the knowledge, and/or the time to check the veracity and the validity of simulation models and thus have little confidence in the results. Visualizing simulated operations in 3D can be of substantial help in the verification, validation, and accreditation of simulation models. In addition, visualization can provide valuable insight into subtleties of modeled operations that otherwise cannot be quantified or presented. This paper investigates the efficacy of 3D visualization in verifying and validating discrete-event construction simulation models. The paper presents a case study of a simulation model of an earthmoving operation with fairly complex control logic that was verified and validated by visualizing the operation in 3D. The simulation model for the example was developed using Stroboscope and was visualized using the Dynamic Construction Visualizer.

65 citations


Book
05 Nov 2003
TL;DR: Part One: Why Functional Verification is Necessary (Definition and Goals; Architecture; A Look at What is Being Verified). Part Two: How Functional Verification Works (Determining the Validity of the Model; Verification Methods; Random Testing; Co-Simulation; Measuring Verification Quality; Verification Languages). Part Three: Application of Functional Verification (The Verification Plan; Projecting Costs).
Abstract: Part One: Why Functional Verification is Necessary (Definition and Goals; Architecture; A Look at What is Being Verified). Part Two: How Functional Verification Works (Determining the Validity of the Model; Verification Methods; Random Testing; Co-Simulation; Measuring Verification Quality; Verification Languages). Part Three: Application of Functional Verification (The Verification Plan; Projecting Costs; Summary: The Project; Verification Languages: Testbuilder, Vera, E; Other Project Verification Tools: Bug-tracking Systems; Other Project Verification Tools: Revision & Release Control Systems).

51 citations


ReportDOI
01 Aug 2003
TL;DR: The use of code comparisons for validation is improper and dangerous, and while code comparisons may be argued to provide a beneficial component in code verification activities, there are higher quality code verification tasks that should take precedence.
Abstract: This report presents a perspective on the role of code comparison activities in verification and validation. We formally define the act of code comparison as the Code Comparison Principle (CCP) and investigate its application in both verification and validation. One of our primary conclusions is that the use of code comparisons for validation is improper and dangerous. We also conclude that while code comparisons may be argued to provide a beneficial component in code verification activities, there are higher quality code verification tasks that should take precedence. Finally, we provide a process for application of the CCP that we believe is minimal for achieving benefit in verification processes.

43 citations


Journal ArticleDOI
TL;DR: The authors provide a road map to navigate approaches to external validation by reconciling foundational philosophical approaches and incorporating a hermeneutical perspective to increase the usability, validity, and generalizability of simulation models.
Abstract: Validation is the most important tool in evaluating the effectiveness of a simulation model. An integral subset of validation is external validation. Through a historical philosophical discussion and examples, the authors provide a road map to navigate approaches to this topic. The authors also provide methodology suggestions by reconciling foundational philosophical approaches and incorporating a hermeneutical perspective to increase the usability, validity, and generalizability of simulation models.

39 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a verification and validation approach intended to complete the classical tool box that the industrial user may utilize in the enterprise modeling and integration domain.

38 citations


Proceedings ArticleDOI
06 Jan 2003
TL;DR: This paper offers yet another critique of modeling that uses the results and observations of nonlinear mathematics and bottom-up simulation themselves to develop a modeling paradigm that is significantly broader than the traditional model-focused paradigm.
Abstract: In the complexity and simulation communities there is growing support for the use of bottom-up computer-based simulation in the analysis of complex systems. The presumption is that because these models are more complex than their linear predecessors they must be more suited to the modeling of systems that appear, superficially at least, to be (compositionally and dynamically) complex. Indeed the apparent ability of such models to allow the emergence of collective phenomena from quite simple underlying rules is very compelling. But does this 'evidence' alone 'prove' that nonlinear bottom-up models are superior to simpler linear models when considering complex systems behavior? Philosophical explorations concerning the efficacy of models, whether they be formal scientific models or our personal worldviews, have been a popular pastime for many philosophers, particularly philosophers of science. This paper offers yet another critique of modeling that uses the results and observations of nonlinear mathematics and bottom-up simulation themselves to develop a modeling paradigm that is significantly broader than the traditional model-focused paradigm. In this broader view of modeling we are encouraged to concern ourselves more with the modeling process rather than the (computer) model itself and embrace a nonlinear modeling culture. This emerging view of modeling also counteracts the growing preoccupation with nonlinear models over linear models.

Journal ArticleDOI
01 Jan 2003
TL;DR: A template that was built in the simulation language Arena is described, which decreases the gap between the conceptualization activities and the translation into a simulation model, and a case example to test the applicability of the template.
Abstract: Redesigning organizational processes and their coordination often involves the construction of conceptual models, empirical models, and change alternatives. Modeling of organizational processes can be a very time-consuming activity, and during the redesign phase of an organization, time is a scarce resource. During the phase of translating process models of an organization into simulation models for analysis, diagnosis, and design, a lot of time is spent on translating conceptual models into simulation models. Furthermore, the resulting simulation models often lack any resemblance to the original conceptual models. This makes the communication, verification, and change activities within the simulation study quite difficult. This paper describes a template that was built in the simulation language Arena, which decreases the gap between the conceptualization activities and the translation into a simulation model. This paper describes the requirements for the template, a prototype implementation, and a case example to test the applicability of the template.

01 Jan 2003
TL;DR: A layered approach for ANN V&V is described, which includes a V&V software process for pre-trained neural networks, a detailed discussion of numerical issues, and techniques for dynamically measuring and monitoring the confidence of the ANN output.
Abstract: Artificial neural networks (ANNs) are used as an alternative to traditional models in the realm of control. Unfortunately, ANN models rarely provide any indication of accuracy or reliability of their predictions. Before ANNs can be used in safety critical applications (aircraft, nuclear plants, etc.), a certification process must be established for ANN based controllers. Traditional approaches to validation of neural networks are mostly based on empirical evaluation through simulation and/or experimental testing. For on-line trained ANNs used in safety critical applications, traditional methods of verification and validation cannot be applied, leaving a wide technological gap, which we attempt to address in this paper. We will describe a layered approach for ANN V&V which includes a V&V software process for pre-trained neural networks, a detailed discussion of numerical issues, and techniques for dynamically measuring and monitoring the confidence of the ANN output.
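One simple way to obtain such a run-time confidence signal is to monitor the disagreement within an ensemble of models; the sketch below is only illustrative (bootstrap-trained polynomial fits stand in for neural networks, and the data and spread threshold are invented), not the layered V&V process of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data; small bootstrap-trained fits stand in for an ANN ensemble.
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)

ensemble = []
for _ in range(10):
    idx = rng.integers(0, x.size, x.size)        # bootstrap resample
    ensemble.append(np.polyfit(x[idx], y[idx], deg=5))


def predict_with_confidence(x_new, spread_limit=0.15):
    """Return (estimate, spread, trusted?) based on ensemble disagreement."""
    preds = np.array([np.polyval(c, x_new) for c in ensemble])
    spread = preds.std()
    return preds.mean(), spread, bool(spread <= spread_limit)


for query in (0.5, 1.4):                          # 1.4 lies outside the training range
    est, spread, trusted = predict_with_confidence(query)
    print(f"x={query:.2f} estimate={est:+.2f} spread={spread:.2f} trusted={trusted}")
```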

Journal ArticleDOI
TL;DR: Within this approach, graph transformation techniques are applied for automated translation of UML models into a language understood by a verification tool or directly into an implementation, allowing for a seamless integration of verification and validation into a UML-based development process.

01 Jan 2003
TL;DR: This paper focuses on the validation of the business process models on an intermediate abstraction level of the ARIS model of the eCommerce system development of Intershop.
Abstract: The eCommerce system development of Intershop is based on different models on various levels of abstraction. The software engineering tool ARIS represents most of these models. In this paper we focus on the validation of the business process models on an intermediate abstraction level of the ARIS model. The business processes may be derived from process patterns and have to follow specific rules (best practices). The validation of the compliance with these rules and the consistency with the original business process pattern is the focus of this paper.

Proceedings Article
07 Dec 2003
TL;DR: Four different approaches to deciding model validity are described, and two different paradigms that relate verification and validation to the model development process are presented.
Abstract: In this paper we discuss verification and validation of simulation models. Four different approaches to deciding model validity are described; two different paradigms that relate verification and validation to the model development process are presented; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are discussed; a way to document results is given; a recommended procedure for model validation is presented; and accreditation is briefly discussed.

ReportDOI
01 Jan 2003
TL;DR: In this article, the authors present a review of model validation studies that pertain to groundwater flow and transport modeling, focusing on site-specific, predictive groundwater models that are used for making decisions regarding remediation activities and site closure.
Abstract: Many sites of groundwater contamination rely heavily on complex numerical models of flow and transport to develop closure plans. This has created a need for tools and approaches that can be used to build confidence in model predictions and make it apparent to regulators, policy makers, and the public that these models are sufficient for decision making. This confidence building is a long-term iterative process, and it is this process that should be termed "model validation." Model validation is a process, not an end result. That is, the process of model validation cannot always assure acceptable prediction or quality of the model. Rather, it provides a safeguard against faulty models or inadequately developed and tested models. Therefore, development of a systematic approach for evaluating and validating subsurface predictive models and guiding field activities for data collection and long-term monitoring is strongly needed. This report presents a review of model validation studies that pertain to groundwater flow and transport modeling. Definitions, literature debates, previously proposed validation strategies, and conferences and symposia that focused on subsurface model validation are reviewed and discussed. The review is general in nature, but the focus of the discussion is on site-specific, predictive groundwater models that are used for making decisions regarding remediation activities and site closure. An attempt is made to compile most of the published studies on groundwater model validation and assemble what has been proposed or used for validating subsurface models. The aim is to provide a reasonable starting point to aid the development of the validation plan for the groundwater flow and transport model of the Faultless nuclear test conducted at the Central Nevada Test Area (CNTA). The review of previous studies on model validation shows that there does not exist a set of specific procedures and tests that can be easily adapted and applied to determine the validity of site-specific groundwater models. This is true for both deterministic and stochastic models, with the latter posing a more difficult and challenging problem when it comes to validation. This report then proposes a general validation approach for the CNTA model, which addresses some of the important issues recognized in previous validation studies, conferences, and symposia as crucial to the process. The proposed approach links model building, model calibration, model predictions, data collection, model evaluations, and model validation in an iterative loop. The approach focuses on use of collected validation data to reduce model uncertainty and narrow the range of possible outcomes of stochastic numerical models. It accounts for the stochastic nature of the numerical CNTA model, which used a Monte Carlo simulation approach. The proposed methodology relies on the premise that absolute validity is not even a theoretical possibility and is not a regulatory requirement. Rather, it highlights the importance of testing as many aspects of the model as possible and using as many diverse statistical tools as possible for rigorous checking and confidence building in the model and its predictions. It is this confidence that will eventually allow for regulator and public acceptance of decisions based on the model predictions.
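The idea of using newly collected validation data to narrow the outcome range of a stochastic model can be pictured with a small Monte Carlo sketch; the toy transport surrogate, the monitoring-well value, and the acceptance tolerance below are invented for illustration and are not the CNTA methodology.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior Monte Carlo ensemble: an uncertain log-conductivity drives a toy predictor.
log_k = rng.normal(-5.0, 0.8, 2000)


def predicted_concentration(log_k_value, distance_m):
    """Toy surrogate: higher conductivity and shorter distance give higher values."""
    return 100.0 * np.exp(log_k_value + 5.0) / (1.0 + 0.01 * distance_m)


# Hypothetical new validation datum from a monitoring well at 200 m.
observed, tolerance = 60.0, 10.0
accepted = log_k[np.abs(predicted_concentration(log_k, 200.0) - observed) < tolerance]

# Prediction of interest (say, a compliance point at 500 m) before and after.
before = predicted_concentration(log_k, 500.0)
after = predicted_concentration(accepted, 500.0)
print("realizations kept:", accepted.size, "of", log_k.size)
print("95%% range before: %.1f to %.1f" % tuple(np.percentile(before, [2.5, 97.5])))
print("95%% range after:  %.1f to %.1f" % tuple(np.percentile(after, [2.5, 97.5])))
```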

Proceedings ArticleDOI
06 Oct 2003
TL;DR: It is shown how program slicing and dicing performed at the intermediate code level combined with assertion checking techniques can automate, to a large extent, the error finding process and behavior verification for physical system simulation models.
Abstract: Mathematical modeling and simulation of complex physical systems are emerging as key technologies in engineering. Modern approaches to physical system simulation allow users to specify simulation models with the help of equation-based languages. Due to the high-level declarative abstraction of these languages, program errors are extremely hard to find. This paper presents an algorithmic semi-automated debugging framework for equation-based modeling languages. We show how program slicing and dicing performed at the intermediate code level, combined with assertion checking techniques, can automate, to a large extent, the error finding process and behavior verification for physical system simulation models.
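The assertion-checking side of such a framework can be illustrated with a minimal sketch (a toy tank model with invented invariants, not the paper's equation-based debugger): declared behavioural assertions are evaluated at every step and the first violation is reported.

```python
def simulate_tank(inflow, outflow_coeff, dt=0.1, t_end=20.0):
    """Toy model: dV/dt = inflow - outflow_coeff * V, integrated with Euler steps."""
    volume, t, trace = 5.0, 0.0, []
    while t < t_end:
        volume += dt * (inflow - outflow_coeff * volume)
        t += dt
        trace.append((t, volume))
    return trace


def check_assertions(trace, max_volume=50.0):
    """Behavioural assertions a modeller might declare for this model."""
    for t, volume in trace:
        assert volume >= 0.0, f"negative volume at t={t:.1f}"
        assert volume <= max_volume, f"tank overflow at t={t:.1f} (V={volume:.1f})"


check_assertions(simulate_tank(inflow=2.0, outflow_coeff=0.1))       # passes
try:
    check_assertions(simulate_tank(inflow=20.0, outflow_coeff=0.1))  # violates the bound
except AssertionError as err:
    print("assertion violated:", err)
```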

Journal ArticleDOI
TL;DR: In this paper, a user-friendly validation module (VLD) is proposed for building energy simulation, which is based on a general solution of the dynamic heat flow through opaque multi-layered building construction.

Proceedings Article
01 Jan 2003
TL;DR: Several network models and simulations of the evolution of SourceForge, and the verification and validation processes of the framework, are described; one such process, dynamic fitness based on project life cycle, was discovered.
Abstract: We describe a research framework for studying social systems. The framework uses agent-based modeling and simulation as key components in the process of discovery and understanding. A collaborative social network composed of open source software (OSS) developers and projects is studied and used to demonstrate the research framework. By continuously collecting developer and project data for over two years from SourceForge, we are able to infer and model the structural and the dynamic mechanisms that govern the topology and evolution of this social network. We describe the use of these empirically derived agent-based models of the SourceForge OSS developer network to specify simulations implemented using Java/Swarm. Several network models and simulations of the evolution of SourceForge, and the verification and validation processes of the framework, are described. The nature of social network processes hidden from view that could plausibly generate the observed system properties can be discovered through an iterative modeling, simulation, and validation and verification process. One such process, dynamic fitness based on project life cycle, was discovered.
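A stripped-down sketch of the modeling-and-validation loop described here might look as follows; the preferential-attachment growth rule, the new-project probability, and the "empirical" target share are invented stand-ins, not the paper's SourceForge model.

```python
import random

random.seed(42)


def grow_network(n_developers, links_per_developer=2, n_seed_projects=3):
    """Each arriving developer joins projects with probability proportional to size."""
    project_sizes = [1] * n_seed_projects
    for _ in range(n_developers):
        for _ in range(links_per_developer):
            i = random.choices(range(len(project_sizes)), weights=project_sizes)[0]
            project_sizes[i] += 1
        if random.random() < 0.05:           # occasionally a brand-new project appears
            project_sizes.append(1)
    return project_sizes


sizes = grow_network(5000)
largest_share = max(sizes) / sum(sizes)

# Face validation against an invented empirical target, e.g. "the largest project
# attracts roughly 10% of all participation links".
print(f"simulated share of the largest project: {largest_share:.2%} (target ~10%)")
```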

Journal ArticleDOI
01 Feb 2003
TL;DR: Xspin/Project is presented, an extension to Xspin, which automatically controls and manages the validation trajectory when using the model checker Spin, and discusses the use of software configuration management techniques and tools to manage and control the verification trajectory.
Abstract: In this paper we take a closer look at the automated analysis of designs, in particular of verification by model checking. Model checking tools are increasingly being used for the verification of real-life systems in an industrial context. In addition to ongoing research aimed at curbing the complexity of dealing with the inherent state space explosion problem - which allows us to apply these techniques to ever larger systems - attention must now also be paid to the methodology of model checking, to decide how to use these techniques to their best advantage. Model checking "in the large" causes a substantial proliferation of interrelated models and model checking sessions that must be carefully managed in order to control the overall verification process. We show that in order to do this well both notational and tool support are required. We discuss the use of software configuration management techniques and tools to manage and control the verification trajectory. We present Xspin/Project, an extension to Xspin, which automatically controls and manages the validation trajectory when using the model checker Spin.

Proceedings ArticleDOI
06 Jan 2003
TL;DR: The critical nature of quantified Verification and Validation with Uncertainty Quantification at specified Confidence levels in evaluating system certification status is described, along with how this can lead to a Value Engineering methodology for investment strategy.
Abstract: This paper represents a summary of our methodology for Verification and Validation and Uncertainty Quantification. A graded scale methodology is presented and related to other concepts in the literature. We describe the critical nature of quantified Verification and Validation with Uncertainty Quantification at specified Confidence levels in evaluating system certification status. Only after Verification and Validation has contributed to Uncertainty Quantification at specified confidence can rational tradeoffs of various scenarios be made. Verification and Validation methods for various scenarios and issues are applied in assessments of Quantified Reliability at Confidence and we summarize briefly how this can lead to a Value Engineering methodology for investment strategy.

Dissertation
Carina Andersson1
01 Jan 2003
TL;DR: In the future work, empirical research, including experiment results, will be used for calibration and validation of simulation models, with focus on using simulation as a method for decision support in the verification and validation process.
Abstract: Quality is an aspect of high importance in software development projects. The software organizations have to ensure that the quality of their developed products is what the customers expect. Thus, the organizations have to verify that the product is functioning as expected and validate that the product is what the customers expect. Empirical studies have shown that in many software development projects as much as half of the projected schedule is spent on the verification and validation activities. The research in this thesis focuses on exploring the state of practice of the verification and validation process and investigating methods for achieving efficient fault detection during the software development. The thesis aims at increasing the understanding of the activities conducted to verify and validate the software products, by means of empirical research in the software engineering domain. A survey of eleven Swedish software development organizations investigates the current state of practice of the verification and validation activities, and how these activities are managed today. The need for communicating and visualising the verification and validation process was expressed during the survey. Therefore the usefulness of process simulations was evaluated in the thesis. The simulations increased the understanding of the relationships between different activities among the involved participants. In addition, an experiment was conducted to compare the performance of the two verification and validation activities, inspection and testing. In the future work, empirical research, including experiment results, will be used for calibration and validation of simulation models, with a focus on using simulation as a method for decision support in the verification and validation process.

Book ChapterDOI
01 Jan 2003
TL;DR: The main aim of this work is to present their own rule base verification method, based on the decision units conception, that can perform different verification and validation actions during knowledge base development and realization.
Abstract: Expert systems are problem solvers for specialized domains of competence in which effective problem solving normally requires human expertise. The transition of expert systems technology from research laboratories to software development centers highlighted the fact that quality assurance for expert systems is a very important issue for most real-world problems. Although the basic verification concepts are shared by software engineering and knowledge engineering, verification methods of conventional software are not directly applicable to expert systems, and new, specific methods of verification are required. The main aim of this work is to present our own rule base verification method. In our opinion, the decision units conception allows us to consider different verification and validation issues together. Thanks to the properties of the decision units, we can perform different verification and validation actions during knowledge base development and realization.
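The kinds of anomalies such rule-base verification looks for can be shown with a toy check for redundant and contradictory rules; the rules and literals below are invented, and this is only a generic illustration, not the decision-unit method itself.

```python
from itertools import combinations

# Each rule: (set of condition literals, conclusion literal). All invented.
rules = [
    (frozenset({"fever", "cough"}), "flu"),
    (frozenset({"fever", "cough"}), "flu"),        # redundant duplicate
    (frozenset({"fever", "cough"}), "not flu"),    # contradicts the rules above
    (frozenset({"rash"}), "allergy"),
]


def negate(literal):
    return literal[4:] if literal.startswith("not ") else "not " + literal


for (cond1, concl1), (cond2, concl2) in combinations(rules, 2):
    if cond1 == cond2 and concl1 == concl2:
        print("redundant rules:", sorted(cond1), "->", concl1)
    if cond1 == cond2 and concl2 == negate(concl1):
        print("conflicting rules:", sorted(cond1), "->", concl1, "vs", concl2)
```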

Journal ArticleDOI
TL;DR: The χ language as mentioned in this paper is a hybrid language for modeling, simulation and verification, which consists of a number of orthogonal operators that operate on all process terms, including differential algebraic equations.

Journal ArticleDOI
TL;DR: A compositional verification method for task models and problem-solving methods for knowledge-based systems is introduced, based on properties that are formalized in terms of temporal semantics; both static and dynamic properties are covered.
Abstract: In this paper a compositional verification method for task models and problem-solving methods for knowledge-based systems is introduced. Required properties of a system are formally verified by deriving them from assumptions that themselves are properties of sub-components, which in their turn may be derived from assumptions on sub-sub-components, and so on. The method is based on properties that are formalized in terms of temporal semantics; both static and dynamic properties are covered. The compositional verification method imposes structure on the verification process. Because of the possibility of focusing at one level of abstraction (information and process hiding), compositional verification provides transparency and limits the complexity per level. Since verification proofs are structured in a compositional manner, they can be reused in the event of reuse of models or modification of an existing system. The method is illustrated for a generic model for diagnostic reasoning.

Proceedings ArticleDOI
Bruce Archambeault1
14 Oct 2003
TL;DR: There are a number of different ways to validate EMI/EMC simulation as discussed by the authors, but extreme care must be used to ensure the model correctly simulates the measured situation, since the limitations of the measurement could affect the results significantly.
Abstract: There are a number of different ways to validate EMI/EMC simulation. Measurements can be useful, but care is needed when using measurements, since the limitations of the measurement could affect the results significantly. Other options are available to help engineers increase the confidence in their model results. Engineers should validate their models to ensure the model's correctness, and to help understand the basic physics behind the model. Measurements can be used to validate modeling results, but extreme care must be used to ensure the model correctly simulates the measured situation. Omitting feed cables, shielding or ground reflections, or different measurement scan areas can dramatically change the results. An incorrect model result might be indicated when, in fact, the measured and modeled results are obtained for different situations, and should not be directly compared. Intermediate results can also be used to help increase the confidence in a model. Using the RF current distribution in a model, or the animation in a time domain model, can help ensure the overall results are correct by confirming that the intermediate results are correct. These intermediate results have the added benefit of increasing the engineer's understanding of the underlying causes and effects of the overall problem.

Proceedings ArticleDOI
23 Oct 2003
TL;DR: The unique element here is that the workload involved in running the simulation study is too time consuming to execute on a single workstation, so that the simulation analyst must utilize computer resources available through the Web.
Abstract: This paper describes a procedure for assigning simulation trials to a set of parallel processors for the purpose of conducting a simulation study involving a complex simulation model in near real time. Unlike distributed simulation, where a complex simulation model is decomposed and its parts run in a parallel environment, the parallel replications approach discussed here involves running simulation replications to completion for the entire model. The unique element here is that the workload involved in running the simulation study is too time consuming to execute on a single workstation, so that the simulation analyst must utilize computer resources available through the Web. New statistical methodology is needed for running a complex simulation study in a Web-based, parallel replications environment.
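A minimal local sketch of the parallel-replications idea (a toy single-server queue stands in for the complex model, and a local process pool stands in for Web-distributed workers) runs each replication to completion independently and summarises the results with a confidence interval.

```python
import random
import statistics
from multiprocessing import Pool


def one_replication(seed):
    """One complete run of a toy single-server queue; returns the mean waiting time."""
    rng = random.Random(seed)
    clock, server_free_at, waits = 0.0, 0.0, []
    for _ in range(1000):
        clock += rng.expovariate(1.0)                   # next arrival
        start = max(clock, server_free_at)
        waits.append(start - clock)
        server_free_at = start + rng.expovariate(1.2)   # service completion
    return statistics.mean(waits)


if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(one_replication, range(20))  # 20 independent replications
    mean = statistics.mean(results)
    half_width = 1.96 * statistics.stdev(results) / len(results) ** 0.5
    print(f"mean wait: {mean:.2f} +/- {half_width:.2f} (approx. 95% CI, 20 replications)")
```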

01 Jun 2003
TL;DR: The main goals are to evaluate whether the verification system has the capability to express the properties of software systems and to evaluate whether the verification system can provide inter-level mapping, a feature required for understanding how a system meets security objectives.
Abstract: Computer systems that earn a high degree of trust must be backed by rigorous verification methods. A verification system is an interactive environment for writing formal specifications and checking formal proofs. Verification systems allow large complicated proofs to be managed and checked interactively. We desire evaluation criteria that provide a means of finding which verification system is suitable for a specific research environment and what needs of a particular project the tool satisfies. Therefore, the purpose of this thesis is to develop a methodology and set of evaluation criteria to evaluate verification systems for their suitability to improve the assurance that systems meet security objectives. A specific verification system is evaluated with respect to the defined methodology. The main goals are to evaluate whether the verification system has the capability to express the properties of software systems and to evaluate whether the verification system can provide inter-level mapping, a feature required for understanding how a system meets security objectives.

Dissertation
01 Jan 2003
TL;DR: In this thesis, a measure of goodness is introduced, which measures whether faults could have been found earlier in the development process; two different review methods are compared; and validation methods based on factorial and fractional factorial designs are evaluated for system performance evaluation.
Abstract: Large and complex software systems are developed as a tremendous engineering effort. The aim of the development is to satisfy the customer by delivering the right product, with the right quality, and on time. Errors made by engineers will always occur when a system is developed, but their number can be decreased by process improvement and their effect can be reduced by removing them as early as possible. The research is performed at Ericsson Microwave Systems AB. The thesis consists of six papers and the main contributions are:
- It is presented how measurements can be performed in the various testing activities in an organization. A measure of goodness is introduced, which measures whether faults could have been found earlier in the process.
- It is shown how a template simulation model can be adapted and extended to fit an organization in order to estimate how a change will affect the development process.
- The important characteristics of the verification and validation activities and the dependencies to other project activities in an organization are investigated in order to introduce process improvements. It is shown that interviews are a feasible means for the characterization.
- Two different review methods are compared. The comparison shows that an active review method is more efficient and effective than a passive one.
- The factorial design methodology is introduced as a method in system performance evaluation. The results show that a validation method, based on factorial design, is efficient when few factors are involved, and that prototyping and validation methods, based on fractional factorial designs, are efficient when many factors are involved.
Altogether, this thesis concentrates on how to increase product quality and reduce lead-time, by improving the verification and validation processes and methods.