
Verification and validation of computer simulation models

About: Verification and validation of computer simulation models is a research topic. Over its lifetime, 1556 publications have been published within this topic, receiving 43203 citations.


Papers
Journal ArticleDOI
TL;DR: Recommendations for achieving transparency and validation, developed by a task force appointed by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society for Medical Decision Making (SMDM), are described.
Abstract: Trust and confidence are critical to the success of health care models. There are two main methods for achieving this: transparency (people can see how the model is built) and validation (how well the model reproduces reality). This report describes recommendations for achieving transparency and validation developed by a task force appointed by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making. Recommendations were developed iteratively by the authors. A nontechnical description (including model type, intended applications, funding sources, structure, intended uses, inputs, outputs, other components that determine function and their relationships, data sources, validation methods, results, and limitations) should be made available to anyone. Technical documentation, written in sufficient detail to enable a reader with necessary expertise to evaluate the model and potentially reproduce it, should be made available openly or under agreements that protect intellectual property, at the discretion of the modelers. Validation involves face validity (wherein experts evaluate model structure, data sources, assumptions, and results), verification or internal validity (checking the accuracy of coding), cross validity (comparison of results with other models analyzing the same problem), external validity (comparing model results with real-world results), and predictive validity (comparing model results with prospectively observed events). The last two are the strongest forms of validation. Each section of this ...

602 citations
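
The "verification or internal validity" step above (checking the accuracy of coding) is the one most directly expressible in code. The following sketch unit-tests a hypothetical two-state Markov cohort model; the model, transition probabilities, and checks are invented for illustration and are not taken from the report.

```python
import numpy as np

def run_cohort_model(p_well_to_sick: float, n_cycles: int) -> np.ndarray:
    """Trace a hypothetical two-state (well/sick) Markov cohort over n_cycles."""
    trace = np.zeros((n_cycles + 1, 2))
    trace[0] = [1.0, 0.0]                      # everyone starts well
    transition = np.array([[1 - p_well_to_sick, p_well_to_sick],
                           [0.0, 1.0]])        # sick is an absorbing state
    for t in range(n_cycles):
        trace[t + 1] = trace[t] @ transition
    return trace

# Verification checks: these test coding accuracy, not real-world accuracy.
trace = run_cohort_model(p_well_to_sick=0.1, n_cycles=10)
assert np.allclose(trace.sum(axis=1), 1.0), "cohort must be conserved each cycle"
assert np.isclose(trace[1, 0], 0.9), "cycle 1 must match the hand-computed value"
assert np.all(np.diff(trace[:, 0]) <= 0), "well fraction can never increase"
print("verification checks passed")
```
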

Proceedings ArticleDOI
01 Dec 1999
TL;DR: Four different approaches to deciding model validity are described, various validation techniques are defined, and a recommended procedure for model validation is presented.
Abstract: In this paper we discuss validation and verification of simulation models. Four different approaches to deciding model validity are described; two different paradigms that relate validation and verification to the model development process are presented; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are discussed; a way to document results is given; a recommended procedure for model validation is presented; and accreditation is briefly discussed.

518 citations
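
A common operational-validity technique in this tradition is comparing model output with data collected from the real system. The sketch below does this with a two-sample t-test on waiting-time samples; the data, significance level, and variable names are assumptions made for illustration, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical data: observed waiting times vs. times from replicated simulation runs.
observed = rng.normal(loc=12.0, scale=3.0, size=40)    # stand-in for real-system data
simulated = rng.normal(loc=12.5, scale=3.0, size=200)  # stand-in for model output

# Welch's two-sample t-test: is the mean simulated wait consistent with reality?
t_stat, p_value = stats.ttest_ind(observed, simulated, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject: model and system means differ; operational validity is in doubt.")
else:
    print("No significant difference detected at the 5% level.")
```
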

Journal ArticleDOI
TL;DR: This paper surveys current verification, validation, and testing approaches and discusses their strengths, weaknesses, and life-cycle usage, and describes automated tools used to implement validation, verification, and testing.
Abstract: Software quality is achieved through the application of development techniques and the use of verification procedures throughout the development process. Careful consideration of specific quality attributes and validation requirements leads to the selection of a balanced collection of review, analysis, and testing techniques for use throughout the life cycle. This paper surveys current verification, validation, and testing approaches and discusses their strengths, weaknesses, and life-cycle usage. In conjunction with these, the paper describes automated tools used to implement validation, verification, and testing. In the discussion of new research thrusts, emphasis is given to the continued need to develop a stronger theoretical basis for testing and the need to employ combinations of tools and techniques that may vary over each application.

485 citations
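
Among the testing techniques such surveys cover, boundary-value analysis is one of the simplest to demonstrate. The sketch below applies it to a hypothetical grade function; the function, its cutoffs, and the chosen boundary cases are invented for illustration.

```python
def grade(score: int) -> str:
    """Hypothetical function under test: map a 0-100 score to pass/fail."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

# Boundary-value analysis: exercise values at and adjacent to each boundary,
# where off-by-one defects most often hide.
cases = {0: "fail", 59: "fail", 60: "pass", 100: "pass"}
for score, expected in cases.items():
    assert grade(score) == expected, f"grade({score}) != {expected}"
for bad in (-1, 101):  # just outside the valid domain
    try:
        grade(bad)
        raise AssertionError(f"grade({bad}) should have raised")
    except ValueError:
        pass
print("boundary-value tests passed")
```
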

Journal ArticleDOI
TL;DR: The problem of validating computer simulation models of industrial systems has received only limited attention in the management science literature; this paper therefore draws on the writings of economists who have been concerned with testing the validity of economic models.
Abstract: The problem of validating computer simulation models of industrial systems has received only limited attention in the management science literature. The purpose of this paper is to consider the problem of validating computer models in the light of contemporary thought in the fields of philosophy of science, economic theory, and statistics. In order to achieve this goal we have attempted to gather together and present some of the ideas of scientific philosophers, economists, statisticians, and practitioners in the field of simulation which are relevant to the problem of verifying simulation models. We have paid particular attention to the writings of economists who have been concerned with testing the validity of economic models. Among the questions which we shall consider are included: What does it mean to verify a computer model of an industrial system? Are there any differences between the verification of computer models and the verification of other types of models? If so, what are some of these differ...

433 citations
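
Validation in this tradition ends with statistical tests of model-generated data against data from the actual system. The sketch below uses a two-sample Kolmogorov-Smirnov test for that final comparison; the samples, distributions, and threshold are invented for illustration, not drawn from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical data: monthly output of a real industrial system vs. the simulation.
system_output = rng.gamma(shape=4.0, scale=25.0, size=36)
model_output = rng.gamma(shape=4.0, scale=26.0, size=500)

# Two-sample Kolmogorov-Smirnov test: could both outputs share one distribution?
ks_stat, p_value = stats.ks_2samp(system_output, model_output)
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
print("distributions differ" if p_value < 0.05 else "no significant difference")
```
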

Journal ArticleDOI
TL;DR: It is found that single-repetition holdout validation tends to produce estimates with 46-229 percent more bias and 53-863 percent more variance than the top-ranked model validation techniques, while out-of-sample bootstrap validation yields the best balance between bias and variance.
Abstract: Defect prediction models help software quality assurance teams to allocate their limited resources to the most defect-prone modules. Model validation techniques, such as $k$-fold cross-validation, use historical data to estimate how well a model will perform in the future. However, little is known about how accurate the estimates of model validation techniques tend to be. In this paper, we investigate the bias and variance of model validation techniques in the domain of defect prediction. Analysis of 101 public defect datasets suggests that 77 percent of them are highly susceptible to producing unstable results; selecting an appropriate model validation technique is a critical experimental design choice. Based on an analysis of 256 studies in the defect prediction literature, we select the 12 most commonly adopted model validation techniques for evaluation. Through a case study of 18 systems, we find that single-repetition holdout validation tends to produce estimates with 46-229 percent more bias and 53-863 percent more variance than the top-ranked model validation techniques. On the other hand, out-of-sample bootstrap validation yields the best balance between the bias and variance of estimates in the context of our study. Therefore, we recommend that future defect prediction studies avoid single-repetition holdout validation, and instead, use out-of-sample bootstrap validation.

414 citations
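
The out-of-sample bootstrap recommended above trains on a bootstrap resample and evaluates on the rows left out of it, repeating many times. The sketch below implements that idea with scikit-learn on synthetic data; the dataset, classifier, repetition count, and AUC metric are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a defect dataset: 300 modules, 5 metrics, binary label.
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300) > 0).astype(int)

n = len(y)
aucs = []
for _ in range(100):  # number of repetitions is an assumption, not the paper's
    boot = rng.integers(0, n, size=n)         # sample row indices with replacement
    oob = np.setdiff1d(np.arange(n), boot)    # out-of-bag rows form the test set
    model = LogisticRegression(max_iter=1000).fit(X[boot], y[boot])
    aucs.append(roc_auc_score(y[oob], model.predict_proba(X[oob])[:, 1]))

print(f"out-of-sample bootstrap AUC: {np.mean(aucs):.3f} ± {np.std(aucs):.3f}")
```

Averaging over the out-of-bag evaluations is what keeps the estimate stable: each repetition tests on roughly a third of the rows, so no single unlucky split dominates the result.
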


Network Information
Related Topics (5)
Scheduling (computing): 78.6K papers, 1.3M citations, 78% related
Software development: 73.8K papers, 1.4M citations, 78% related
Software: 130.5K papers, 2M citations, 77% related
Control system: 129K papers, 1.5M citations, 75% related
Robustness (computer science): 94.7K papers, 1.6M citations, 74% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    6
2022    8
2021    15
2020    8
2019    23
2018    21