
Showing papers on "Verification and validation of computer simulation models" published in 2008


Journal ArticleDOI
TL;DR: Some of the current issues in forecast verification are addressed, some of the most recently developed verification techniques are reviewed, and some recommendations for future research are provided.
Abstract: Research and development of new verification strategies and reassessment of traditional forecast verification methods have received a great deal of attention from the scientific community in the last decade. This scientific effort has arisen from the need to respond to changes encompassing several aspects of the verification process, such as the evolution of forecasting systems, or the desire for more meaningful verification approaches that address specific forecast user requirements. Verification techniques that account for the spatial structure and the presence of features in forecast fields, and which are designed specifically for high-resolution forecasts, have been developed. The advent of ensemble forecasts has motivated the re-evaluation of some of the traditional scores and the development of new verification methods for probability forecasts. The expected climatological increase in extreme events and their potential socio-economic impacts have revitalized research studies addressing the challenges of extreme event verification. Verification issues encountered in the operational forecasting environment have been widely discussed, verification needs for different user communities have been identified, and models to assess the forecast value for specific users have been proposed. Proper verification practice and correct interpretation of verification statistics have been extensively promoted through recent publications and books, tutorials and workshops, and the development of open-source software and verification tools. This paper addresses some of the current issues in forecast verification, reviews some of the most recently developed verification techniques, and provides recommendations for future research. Copyright © 2008 Royal Meteorological Society and Crown in the right of Canada.
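As a concrete example of the traditional scores for probability forecasts whose re-evaluation the abstract mentions, here is a minimal Python sketch of the Brier score; the forecast probabilities and outcomes are hypothetical, and the sketch is an illustration rather than anything taken from the paper.

```python
import numpy as np

def brier_score(prob_forecasts, outcomes):
    """Mean squared difference between forecast probabilities (0..1)
    and binary event outcomes (0 or 1); lower is better."""
    p = np.asarray(prob_forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return np.mean((p - o) ** 2)

# Hypothetical probability-of-precipitation forecasts vs. observed events.
print(brier_score([0.9, 0.7, 0.2, 0.1, 0.6], [1, 1, 0, 0, 1]))  # 0.062
```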

266 citations


Journal ArticleDOI
TL;DR: Different types and features of predictive models and strategies for model building, as well as measures appropriate for assessing their performance in the context of validation, are described.
Abstract: The increasing availability and use of predictive models to facilitate informed decision making highlights the need for careful assessment of the validity of these models. In particular, models involving biomarkers require careful validation for two reasons: issues with overfitting when complex models involve a large number of biomarkers, and interlaboratory variation in assays used to measure biomarkers. In this article, we distinguish between internal and external statistical validation. Internal validation, involving training-testing splits of the available data or cross-validation, is a necessary component of the model building process and can provide valid assessments of model performance. External validation consists of assessing model performance on one or more data sets collected by different investigators from different institutions. External validation is a more rigorous procedure necessary for evaluating whether the predictive model will generalize to populations other than the one on which it was developed. We stress the need for an external data set to be truly external, that is, to play no role in model development and ideally be completely unavailable to the researchers building the model. In addition to reviewing different types of validation, we describe different types and features of predictive models and strategies for model building, as well as measures appropriate for assessing their performance in the context of validation. No single measure can characterize the different components of the prediction, and the use of multiple summary measures is recommended.
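To make the internal/external distinction concrete, a hedged Python sketch using scikit-learn; both data sets here are synthetic stand-ins, whereas a genuinely external set would come from different investigators and institutions and play no role in model development.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

# Internal validation: cross-validation on the development data.
X_dev, y_dev = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000)
cv_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc")
print("internal (cross-validated) AUC:", cv_auc.mean())

# External validation: score the frozen model on data the builders never saw
# (simulated here with a different random seed as a stand-in).
X_ext, y_ext = make_classification(n_samples=300, n_features=20, random_state=1)
model.fit(X_dev, y_dev)
print("external AUC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
```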

171 citations


Proceedings ArticleDOI
16 Mar 2008
TL;DR: A process for validating agent-based simulation models that combines face validation, sensitivity analysis, calibration and statistical validation is proposed.
Abstract: Validity is the basic prerequisite for every simulation model, and therefore also for meaningful use of the agent-based simulation paradigm. However, models based on the multi-agent system metaphor tend to require particular validation approaches. In this paper, I propose a process for validating agent-based simulation models that combines face validation, sensitivity analysis, calibration and statistical validation.
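One possible shape of the statistical-validation step in such a process: compare a distribution produced by the simulation against the corresponding real-world observations with a two-sample test. A minimal sketch with SciPy; the gamma-distributed "waiting times" are invented for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical output: waiting times from the agent-based model vs. the
# real-world measurements the model is meant to reproduce.
simulated = rng.gamma(shape=2.0, scale=3.0, size=400)
observed = rng.gamma(shape=2.0, scale=3.1, size=150)

stat, p_value = ks_2samp(simulated, observed)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# A small p-value would reject the hypothesis that both samples come from
# the same distribution, flagging a validity problem for this output.
```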

140 citations


Journal ArticleDOI
Jin Ma, D. Han, W.-J. Sheng, R.-M. He, C.-Y. Yue, J. Zhang 
TL;DR: In this article, a new approach for wide area measurements-based simulation validation work is proposed, which is simple and easy to implement in available power system simulation software, and the trajectory sensitivity method is applied to the parameter calibration work.
Abstract: Simulation veracity is very important to the planning and operation of a power system. Too optimistic a simulation will put the system at risk, whereas too pessimistic a simulation will waste investments. Wide area measurements (WAMs), one of the fast developing technologies of recent years, can record the synchronized phasors in the whole power grid, which makes simulation validation work possible. Recent contingency reproduction work worldwide has shown the inaccuracy of current digital simulation. Therefore simulation validation and model calibration are necessary to enhance simulation veracity. However, because of the complexity of the power system, it is difficult to find the erroneous ones among a large number of components, which makes simulation validation a very challenging problem. A new approach for WAMs-based simulation validation work is proposed. The proposed method is simple and easy to implement in available power system simulation software. Then, the trajectory sensitivity method is applied to the parameter calibration work. Real applications of the proposed methods in the large Northeast China power grid are studied, which demonstrate their efficiency.
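The trajectory sensitivity idea can be sketched with finite differences on a toy dynamic model; the damped oscillator below is a hypothetical stand-in for a power-system transient, not the authors' implementation.

```python
import numpy as np

def simulate(damping, t_end=10.0, dt=0.01):
    """Euler simulation of a damped oscillator x'' + c x' + x = 0,
    standing in for a simulated post-disturbance trajectory."""
    n = int(t_end / dt)
    x, v = 1.0, 0.0
    traj = np.empty(n)
    for i in range(n):
        v += (-damping * v - x) * dt
        x += v * dt
        traj[i] = x
    return traj

# Central finite-difference trajectory sensitivity w.r.t. the damping parameter.
c, eps = 0.2, 1e-5
sensitivity = (simulate(c + eps) - simulate(c - eps)) / (2 * eps)
print("peak |d(trajectory)/d(damping)|:", np.abs(sensitivity).max())
# Parameters with large trajectory sensitivity are natural candidates to
# adjust when calibrating the model against wide area measurements.
```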

46 citations


Proceedings ArticleDOI
10 Mar 2008
TL;DR: This contribution presents a modelling technique that allows covering several layers in a single model and switching between the layers at any time, in particular dynamically during simulation, leading to an improved trade-off between simulation performance and accuracy.
Abstract: Simulation of transaction level models (TLMs) is an established embedded systems design technique. Its use cases include virtual prototyping for early software development, platform simulation for design space exploration, and reference modelling for verification. The different use cases mandate different trade-offs between simulation performance and accuracy. Therefore, multiple TLM abstraction layers have been defined of which one has to be chosen and integrated into the system model prior to simulation. In this contribution we present a modelling technique that allows covering several layers in a single model and switching between the layers at any time, in particular dynamically during simulation. This feature is employed to automatically adapt simulation accuracy to an appropriate level depending on the model's state, leading to an improved trade-off between simulation performance and accuracy.

40 citations


MonographDOI
03 Nov 2008
TL;DR: The ASCE/EWRI Task Committee developed a new rigorous and systematic verification and validation process, documented in Verification and Validation of 3D Free-Surface Flow Models, which discusses this procedure in detail and will be indispensable for students and professionals working on computational models and environmental engineering.
Abstract: In the past several years, computational models for free surface flow simulations have been increasingly in demand for engineering, construction and design, legislation, land planning, and management decisions. Many computational models have been hastily developed and delivered to clients without the proper scientific confirmation and certification. In order to correct this, the ASCE/EWRI Task Committee developed a new rigorous and systematic verification and validation process, Verification and Validation of 3D Free-Surface Flow Models, which discusses this procedure in detail. The topics include terminology and basic methodology; analytical solutions for mathematical verification; mathematical verification using prescribed or manufactured solutions; physical process validation; application site validation; the systematic model verification and validation procedure; systems analysis considerations, findings, and conclusions; and recommendations. The appendixes contain input data for test cases and formulations and codes for manufactured solutions. This publication will be indispensable for students and professionals working on computational models and environmental engineering.
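The manufactured-solutions idea mentioned above can be shown on a one-dimensional example: pick an exact solution, derive the forcing it implies, and confirm the solver converges at its theoretical order. A brief Python sketch, illustrative rather than taken from the book's test cases.

```python
import numpy as np

def solve_poisson_1d(f, n):
    """Second-order central-difference solve of u'' = f on (0, 1)
    with u(0) = u(1) = 0 on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))

# Manufactured solution u = sin(pi x)  =>  forcing f = -pi^2 sin(pi x).
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: -np.pi**2 * np.sin(np.pi * x)

errors = {}
for n in (31, 63):  # grid spacing halves between the two runs
    x, u = solve_poisson_1d(f, n)
    errors[n] = np.max(np.abs(u - u_exact(x)))

order = np.log(errors[31] / errors[63]) / np.log(2.0)
print(f"observed order of accuracy: {order:.2f}")  # ~2 for a correct code
```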

38 citations


Proceedings ArticleDOI
14 Apr 2008
TL;DR: In this paper, the authors present best practices in verification, validation, and test that are applicable to any program, but are critical when applying Model-Based Design in production programs.
Abstract: Model-Based Design is no longer limited to R&D; it is frequently used for production programs at automotive companies around the world. The demands of production programs drive an even greater need for tools and practices that enable automation and rigor in the area of verification, validation, and test. Without these tools and practices, achieving the quality demanded by the automotive market is not possible. This paper presents best practices in verification, validation, and test that are applicable to any program, but are critical when applying Model-Based Design in production programs.

36 citations


Book ChapterDOI
07 Jul 2008
TL;DR: By proposing a framework and a methodology for assessing deformable models, this work aims at providing tools for testing the behavior of new modeling algorithms in the context of medical simulation.
Abstract: Computational techniques for the analysis of mechanical problems have recently moved from traditional engineering disciplines to biomedical simulations. Thus, the number of complex models describing the mechanical behavior of medical environments has increased in recent years. While the development of advanced computational tools has led to interesting modeling algorithms, the relevance of these models is often criticized due to incomplete model verification and validation. The objective of this paper is to propose a framework and a methodology for assessing deformable models. This proposal aims at providing tools for testing the behavior of new modeling algorithms proposed in the context of medical simulation. Initial validation results comparing different modeling methods are reported as a first step towards a more complete validation framework and methodology.

35 citations


Journal ArticleDOI
TL;DR: A three-dimensional tradeoff space encompassing both cost and coverage is introduced to aid software engineers in selecting the appropriate technique for the formal verification or validation task at hand.
Abstract: Numerous techniques exist for conducting computer-assisted formal verification and validation. The cost associated with these techniques varies, depending on factors such as ease of use, the effort required to construct correct requirement specifications for complex real-world properties, and the effort associated with instrumentation of the software under test. Likewise, existing techniques differ in their ability to effectively cover the system under test and its associated requirements. To aid software engineers in selecting the appropriate technique for the formal verification or validation task at hand, we introduce a three-dimensional tradeoff space encompassing both cost and coverage.

30 citations


Journal ArticleDOI
TL;DR: In this article, a pre-test design and test technique for validating wind turbine blade structural models is discussed, and the importance of proper pre-testing and test techniques for wind turbine model validation is demonstrated.
Abstract: The focus of this paper is a test program designed for wind turbine blades. Model validation is a comprehensive undertaking which requires carefully designing and executing experiments, proposing appropriate physics-based models, and applying correlation techniques to improve these models based on the test data. Structural models are useful for making decisions when designing a new blade or assessing blade performance, and the process of model validation is needed to ensure the quality of these models. Blade modal testing is essential for validation of blade structural models, and this report discusses modal test techniques required to achieve validation. Choices made in the design of a modal test can significantly affect the final test result. This study aims to demonstrate the importance of the proper pre-test design and test technique for validating blade structural models.
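A standard correlation technique for comparing measured and analytical mode shapes in such tests is the Modal Assurance Criterion (MAC); the sketch below uses hypothetical mode-shape values at five blade stations and is illustrative rather than taken from the paper.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors:
    1.0 means perfectly correlated shapes, 0.0 means orthogonal."""
    return np.abs(phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

# Hypothetical first bending mode: test measurement vs. model prediction.
measured = np.array([0.00, 0.12, 0.38, 0.71, 1.00])
predicted = np.array([0.00, 0.10, 0.35, 0.70, 1.00])
print(f"MAC = {mac(measured, predicted):.3f}")  # close to 1 => good match
```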

27 citations


Journal ArticleDOI
TL;DR: A rigorous approach to the development of complex modelling software is presented here together with techniques for the automated analysis of such models and a process for the automatic discovery of biological phenomena from large simulation data sets.
Abstract: Simulation software is often a fundamental component in systems biology projects and provides a key aspect of the integration of experimental and analytical techniques in the search for greater understanding and prediction of biology at the systems level. It is important that the modelling and analysis software is reliable and that techniques exist for automating the analysis of the vast amounts of data which such simulation environments generate. A rigorous approach to the development of complex modelling software is needed. Such a framework is presented here together with techniques for the automated analysis of such models and a process for the automatic discovery of biological phenomena from large simulation data sets. Illustrations are taken from a major systems biology research project involving the in vitro investigation, modelling and simulation of epithelial tissue.

Journal ArticleDOI
TL;DR: A new taxonomy is provided that can help researchers and developers to frame future verification and validation efforts and the four dimensions of this taxonomy are Objectivity, Sample Size, Frequency, and Purpose.

Proceedings ArticleDOI
07 Dec 2008
TL;DR: An expert group for the verification and validation of simulation models and results for production and logistics systems has been established; it has analysed the existing material and then developed proposals for definitions, overviews of existing V&V techniques, practical hints for the documentation of the procedural steps within a simulation study, and a specific procedure model for V&V in the context of simulation for production and logistics.
Abstract: Verification & validation of simulation models and results has been investigated intensively in the context of defence applications. Significantly less substantial work can be found for applications in production and logistics, which is surprising when taking into account the massive impact that wrong or inadequate simulation results can have on strategic and investment-related decisions for large production and logistics systems. The authors have, therefore, founded an expert group for this specific topic in 2003, which has analysed the existing material and then developed proposals for definitions, overviews of existing V&V techniques, practical hints for the documentation of the procedural steps within a simulation study, and a specific procedure model for V&V in the context of simulation for production and logistics. The results of this working group are available as a textbook, in German. This paper summarises major results.

Proceedings ArticleDOI
07 Jan 2008
TL;DR: The focus of this paper is on the development of validated models for wind turbine blades, a comprehensive undertaking which requires carefully designing and executing experiments, proposing appropriate physics-based models, and applying correlation techniques to improve these models based on the test data.
Abstract: The focus of this paper is on the development of validated models for wind turbine blades. Validation of these models is a comprehensive undertaking which requires carefully designing and executing experiments, proposing appropriate physics-based models, and applying correlation techniques to improve these models based on the test data. This paper will cover each of these three aspects of model validation, although the focus is on the third – model calibration. The result of the validation process is an understanding of the credibility of the model when used to make analytical predictions. These general ideas will be applied to a wind turbine blade designed, tested, and modeled at Sandia National Laboratories. The key points of the paper include discussions of the tests which are needed, the required level of detail in these tests to validate models of varying detail, and mathematical techniques for improving blade models. Results from investigations into calibrating simplified blade models are presented.

I. Introduction

There are a number of reasons why one desires to develop models of wind turbine blades, and in each case one wants to ensure that these models are useful for the intended purpose. For example, correctly predicting failure in blades using a model can reduce the need for numerous costly tests including the fabrication of additional blades for a test-based failure prediction approach. An additional benefit of modeling and simulation is that the time required to complete the design and fabrication cycle can be reduced significantly when validated models are used to evaluate key aspects of the design that would otherwise require testing. Additionally, modern blades are large and costly – a validated predictive tool would be useful for assessing larger blades of the future. An important step in ensuring that a model is useful for the purpose of the analysis, that is ensuring that a model accurately predicts the behavior of interest, is a process called model validation. A validated model is one in which an analyst or designer can place a great deal of confidence – one can use this model to accurately predict performance. The validation process incorporates both testing and analysis. A set of calibration experiments are designed which provide enough data to improve the model so that the observations from the test and the corresponding predictions from the analysis are suitably correlated. In the next step, additional “validation experiments” are conducted in order to ensure that the model is predictive for the conditions of the validation experiments. If the validation experiments can be predicted, then the model is considered validated, otherwise additional experiments must be performed to provide data for further improvement of the model. It is important to note that a model which has been calibrated to match the test data is not necessarily a valid model. The process of calibrating a model is called model updating, while model validation includes the additional step of performing validation experiments. The main objective of the paper is to detail a general model validation process applied to wind turbine blades designed and tested at Sandia National Laboratories. The key points to be covered include those related to 1) testing (experiment design), 2) analysis (model development), and 3) comparison of test-analysis data for use in model calibration. Key points are covered in each of these areas; however, the focus of this paper is on model calibration.
Optimization of the model parameters incorporating various types of blade test observations, including modal testing and static testing, is a novel development. As an example encompassing the key points, a program aimed at improving current modeling capabilities for a research-sized wind turbine blade is discussed in detail.
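In the spirit of the calibration step described above, a heavily simplified Python sketch: fit a single global stiffness correction so that predicted natural frequencies match measured ones. The frequencies and the one-parameter model are hypothetical stand-ins for a real finite element model and test data.

```python
import numpy as np
from scipy.optimize import least_squares

f_measured = np.array([4.1, 12.9, 26.4])   # hypothetical test frequencies (Hz)
f_nominal = np.array([4.5, 13.8, 27.9])    # uncalibrated model predictions (Hz)

def residuals(params):
    """One-parameter stand-in for an FE model: frequencies scale with the
    square root of a global stiffness correction factor k."""
    (k,) = params
    return f_nominal * np.sqrt(k) - f_measured

result = least_squares(residuals, x0=[1.0], bounds=([0.5], [1.5]))
print("calibrated stiffness factor:", result.x[0])
print("remaining frequency errors (Hz):", residuals(result.x))
# Validation would then require predicting *new* experiments, not just
# reproducing the data used for this calibration.
```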

BookDOI
04 Apr 2008
TL;DR: This title is devoted to presenting some of the most important concepts and techniques for describing real-time systems and analyzing their behavior in order to enable the designer to achieve guarantees of temporal correctness.
Abstract: This title is devoted to presenting some of the most important concepts and techniques for describing real-time systems and analyzing their behavior in order to enable the designer to achieve guarantees of temporal correctness. Topics addressed include mathematical models of real-time systems and associated formal verification techniques such as model checking, probabilistic modeling and verification, programming and description languages, and validation approaches based on testing. With contributions from authors who are experts in their respective fields, this will provide the reader with the state of the art in formal verification of real-time systems and an overview of available software tools.

Journal ArticleDOI
TL;DR: The generality of the simplified model ensures that the B-BAC methodology can be applied to control a wide variety of electric flow heaters, and its application results in a more effective control system, which is also verified by simulation in comparison to a conventional PI controller.

Proceedings ArticleDOI
01 Mar 2008
TL;DR: This paper covers a project that is using advanced functional verification methods to verify an RTCA DO-254/EUROCAE ED80 Level A/B design; these methods include constrained random simulation, design intent specification, the total coverage model, and formal verification.
Abstract: This paper covers a project that is using advanced functional verification methods to verify an RTCA DO-254/EUROCAE ED80 Level A/B design. These methods include constrained random simulation, design intent specification (designer-added assertions), the total coverage model (unified coverage database), and formal verification (formal model checking). The project is a real design currently being developed at Rockwell Collins. This paper will include a brief description of the project, the methodologies used, why they were chosen, a description of these methods, why they work, and how they are similar to or differ from other verification methods. This paper will also include a discussion of verification methodology issues that needed attention, and implications for achieving DO-254 certification using advanced verification methods. This paper should be of general interest in the mil-aero community, especially for those with DO-254 compliance requirements, as "advanced verification techniques" such as constrained random and formal verification have not been the traditional verification methodology.

Journal ArticleDOI
TL;DR: This paper presents a framework for augmenting independent validation and verification of software systems with computer-based IV&V techniques that uses execution-based model checking to validate the correctness of the assertions and to verify the correctness and adequacy of the system under test.
Abstract: This paper presents a framework for augmenting independent validation and verification (IV&V) of software systems with computer-based IV&V techniques. The framework allows an IV&V team to capture its own understanding of the application as well as the expected behavior of any proposed system for solving the underlying problem by using an executable system reference model, which uses formal assertions to specify mission- and safety-critical behaviors. The framework uses execution-based model checking to validate the correctness of the assertions and to verify the correctness and adequacy of the system under test.

ReportDOI
01 Sep 2008
TL;DR: This report provides an assessment of the thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high consequence decision-making.
Abstract: Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high consequence decision-making.

Journal ArticleDOI
TL;DR: It is shown that for reliable results of the validation test a vector-valued test is required and that accurate noise modelling is indispensable for reliable model structure validation.

01 Jan 2008
TL;DR: This research develops a highly efficient evolutionary simulation-based decision making procedure which can be applied in real-time management situations and investigates a unique approach to validating low-probability, high-impact simulation systems based on a concrete example problem.
Abstract: Computer simulations are routines programmed to imitate detailed system operations. They are utilized to evaluate system performance and/or predict future behaviors under certain settings. In complex cases where system operations cannot be formulated explicitly by analytical models, simulations become the dominant mode of analysis as they can model systems without relying on unrealistic or limiting assumptions and represent actual systems more faithfully. Two main streams exist in current simulation research and practice: discrete event simulation and agent-based simulation. This dissertation facilitates the marriage of the two. By integrating the agent-based modeling concepts into the discrete event simulation framework, we can combine the advantages of both methods while eliminating their disadvantages. Although simulation can represent complex systems realistically, it is a descriptive tool without the capability of making decisions. However, it can be complemented by incorporating optimization routines. The most challenging problem is that large-scale simulation models normally take a considerable amount of computer time to execute so that the number of solution evaluations needed by most optimization algorithms is not feasible within a reasonable time frame. This research develops a highly efficient evolutionary simulation-based decision making procedure which can be applied in real-time management situations. It basically divides the entire process time horizon into a series of small time intervals and operates simulation optimization algorithms for those small intervals separately and iteratively. This method improves computational tractability by decomposing long simulation runs; it also enhances system dynamics by incorporating changing information/data as the event unfolds. With respect to simulation optimization, this procedure solves efficient analytical models which can approximate the simulation and guide the search procedure to approach near optimality quickly. The methods of agent-based discrete event simulation modeling and evolutionary simulation-based decision making developed in this dissertation are implemented to solve a set of disaster response planning problems. This research also investigates a unique approach to validating low-probability, high-impact simulation systems based on a concrete example problem. The experimental results demonstrate the feasibility and effectiveness of our model compared to other existing systems. Keywords. Agent-based Simulation, Discrete Event Simulation, Simulation Validation, Geographic Information Systems, Evolutionary Systems, Real-time Decision Making, Simulation Optimization, Heuristics, Disaster Response, Emergency Medical Services, Situation Awareness.
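The interval-by-interval decomposition can be sketched in a few lines; the one-dimensional state, candidate decisions, and cost model below are invented stand-ins for the dissertation's large-scale discrete event simulations.

```python
import random

def simulate_interval(state, decision, rng):
    """Cheap stand-in for one short simulation run over a single time
    interval; returns (next_state, cost)."""
    demand = rng.randint(5, 15)
    backlog = max(0, state + demand - decision * 4)
    return backlog, backlog + decision  # unmet demand plus resource cost

def rolling_horizon(n_intervals=10, candidates=range(6), seed=1):
    state, total = 0, 0.0
    for step in range(n_intervals):
        # Optimize each small interval separately instead of one long run,
        # using common random numbers so candidates are compared fairly.
        best = min(candidates, key=lambda d: simulate_interval(
            state, d, random.Random(seed * 1000 + step))[1])
        state, cost = simulate_interval(state, best,
                                        random.Random(seed * 1000 + step))
        total += cost
    return total

print("total cost, optimized interval by interval:", rolling_horizon())
```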

Proceedings ArticleDOI
17 Nov 2008
TL;DR: This paper presents a fuzzy Petri nets-based method for verification and validation of fuzzy rules-based human behavior models, which consists of verification of fuzzy rule bases, static validation of human behavior models, and dynamic validation of human behavior models.
Abstract: To improve the fidelity and automation of simulation exercises, human behavior models have become key components in most military simulations. In this paper, we present a fuzzy Petri nets-based method for verification and validation of fuzzy rules-based human behavior models. This method consists of three parts: verification of fuzzy rule bases, static validation of human behavior models, and dynamic validation of human behavior models. We first use a formal description method to specify human behavior models, then automatically map them to fuzzy Petri nets, and then generate reachability graphs to check the rule bases in human behavior models for incompleteness, inconsistency, circularity, and redundancy. We then construct a validation referent and search the fuzzy Petri nets against it to statically validate human behavior models. We finally reason over the fuzzy Petri nets to dynamically validate human behavior models according to the validation referent.
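Two of the rule-base checks named above, redundancy and circularity, can be illustrated with a much simpler (non-fuzzy) sketch over a hypothetical rule base; the paper's reachability-graph analysis of fuzzy Petri nets is far richer than this.

```python
# Hypothetical rule base: each rule maps a set of antecedent facts to a consequent.
rules = [({"threat_near"}, "raise_alert"),
         ({"raise_alert"}, "take_cover"),
         ({"take_cover"}, "threat_near"),   # closes a circular chain
         ({"threat_near"}, "raise_alert")]  # exact duplicate => redundancy

def find_redundancy(rules):
    seen, redundant = set(), []
    for ante, cons in rules:
        key = (frozenset(ante), cons)
        if key in seen:
            redundant.append(key)
        seen.add(key)
    return redundant

def find_circularity(rules):
    """Depth-first search for a cycle in the fact-derivation graph, a
    simplified analogue of the reachability-graph circularity check."""
    graph = {}
    for ante, cons in rules:
        for fact in ante:
            graph.setdefault(fact, set()).add(cons)

    def dfs(node, path):
        if node in path:
            return path[path.index(node):] + [node]
        for nxt in graph.get(node, ()):
            cycle = dfs(nxt, path + [node])
            if cycle:
                return cycle
        return None

    for start in graph:
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

print("redundant rules:", find_redundancy(rules))
print("circular chain:", find_circularity(rules))
```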

Proceedings ArticleDOI
12 May 2008
TL;DR: This paper presents an Independent Software Verification and Validation process that applies reviews for verification and a systematic testing methodology to guide validation; the process was applied to a pilot project named Quality Software Embedded in Space Missions at INPE and produced very good results.
Abstract: This paper presents an Independent Software Verification and Validation process that applies reviews for verification and a systematic testing methodology to guide validation. This process was applied to a pilot project named Quality Software Embedded in Space Missions (QSEE) at INPE and produced very good results. The main feature of the process is that it uses a particular testing methodology named CoFI and an automatic test case generation tool based on state models. These features allowed systematizing the validation activities, which were carried out by a team not involved with the software development. The main activities of the process and the results, in terms of errors found both through the reviews and through the tests, are presented. Lessons learned, including drawbacks and benefits, are discussed as well.
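To illustrate the general idea of generating tests from state models (the CoFI methodology and its tool are not reproduced here), a hedged sketch that derives an input sequence covering every transition of a small invented state machine:

```python
# Hypothetical state model of a small on-board software component.
transitions = {
    ("OFF", "power_on"): "STANDBY",
    ("STANDBY", "arm"): "ARMED",
    ("ARMED", "disarm"): "STANDBY",
    ("ARMED", "fire"): "FIRING",
    ("FIRING", "done"): "STANDBY",
    ("STANDBY", "power_off"): "OFF",
}

def transition_tour(start="OFF"):
    """Greedy walk that exercises every transition at least once, yielding
    one test sequence; adequate for this small, strongly connected model."""
    uncovered = set(transitions)
    state, sequence = start, []
    while uncovered:
        options = [k for k in transitions if k[0] == state]
        # Prefer an uncovered transition out of the current state.
        pick = next((k for k in options if k in uncovered), options[0])
        sequence.append(pick[1])
        uncovered.discard(pick)
        state = transitions[pick]
    return sequence

print(transition_tour())
```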

Proceedings ArticleDOI
08 Jun 2008
TL;DR: This paper describes how an assertion based approach successfully addressed challenges for the verification of an enterprise class chip-multi-threaded (CMT) SPARC microprocessor.
Abstract: Exhaustive property checking, design defect isolation and functional coverage measurement are some of the key challenges of design verification. This paper describes how an assertion based approach successfully addressed these challenges for the verification of an enterprise class chip-multi-threaded (CMT) SPARC™ microprocessor. Methodology and experiences are discussed and recommendations are made on how to incorporate this into the design verification process. Experience with using assertion checks for formal verification as well as simulation based verification is presented, drawn from a design verification effort of over 100 person-years.

12 Dec 2008
TL;DR: To conclude which model performs best in the two cases, a weighted multiplier proposed by Sornette et al. (2007) is calculated for each metric; the multipliers are then combined into one score per model and experiment.
Abstract: Gaussian and Lagrangian model runs are evaluated against field data from the Odour Release and Odour Dispersion project and against wind tunnel data from the Mock Urban Setting Test (MUST). Different statistical metrics are discussed. To conclude which model performs best in the two cases, a weighted multiplier proposed by Sornette et al. (2007) is calculated for each metric; the multipliers are then combined into one score per model and experiment. The results illustrate once again that good model performance depends strongly on the model input (e.g. terrain data, roughness length). Promising results are obtained from a combination of the Lagrangian dispersion model LASAT with wind field simulations calculated with the CFD model MISKAM.
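The pattern of collapsing several statistical metrics into one score per model by multiplication can be sketched as follows; the two metrics, the mapping of fractional bias onto a 0..1 score, and all numbers are illustrative assumptions, not the weighting of Sornette et al. (2007).

```python
import numpy as np

def fac2(pred, obs):
    """Fraction of predictions within a factor of two of the observations."""
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

def fb_score(pred, obs):
    """Map fractional bias (0 is perfect) onto a 0..1 score."""
    fb = 2.0 * (np.mean(obs) - np.mean(pred)) / (np.mean(obs) + np.mean(pred))
    return max(0.0, 1.0 - abs(fb))

# Hypothetical concentrations from two dispersion models vs. observations.
obs = np.array([1.0, 2.0, 4.0, 8.0, 3.0])
models = {"gaussian": np.array([1.4, 0.9, 5.2, 6.0, 2.0]),
          "lagrangian": np.array([1.1, 1.8, 4.4, 7.1, 2.8])}

for name, pred in models.items():
    score = fac2(pred, obs) * fb_score(pred, obs)  # one score per model
    print(f"{name}: combined score = {score:.3f}")
```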

Proceedings ArticleDOI
07 Dec 2008
TL;DR: This paper focuses on the "Milestones Approach" to simulation development, based on the popular "agile software" philosophy and the authors' own experiences in real-world simulation consulting practice.
Abstract: For simulation practitioners, the common steps in a simulation modeling engagement are likely familiar: problem assessment, requirements specification, model building, verification, validation, and delivery of results. And for industrial engineers, it's a well-known adage that paying careful attention to process can help achieve better results. In this paper, we'll apply this philosophy to the process of model building as well. We'll consider model building within the framework of a software development exercise, and discuss how best practices from the broader software community can be applied for process improvement. In particular, we'll focus on the "Milestones Approach" to simulation development -- based on the popular "agile software" philosophy and our own experiences in real-world simulation consulting practice. We'll discuss how thinking agile can help minimize risk within the model-building process, and help create a better simulation for your customers.

Proceedings ArticleDOI
10 Nov 2008
TL;DR: This work transfers existing compositional reasoning techniques for foundational models used in verification tools to design-level models and develops new compositional strategies exploiting the special features of adaptive models to reduce model complexity at the design-model level.
Abstract: Formal verification of adaptive systems allows rigorously proving critical requirements. However, design-level models are in general too complex to be handled by verification tools directly. To counter this problem, we propose to reduce model complexity at the design-model level in order to facilitate model-based verification. First, we transfer existing compositional reasoning techniques for foundational models used in verification tools to design-level models. Second, we develop new compositional strategies exploiting the special features of adaptive models. Based on these results, we establish a framework for modular model-based verification of adaptive systems by model checking.

Proceedings ArticleDOI
14 Oct 2008
TL;DR: The need to validate simulations is discussed, along with various means for quantifying the agreement between different simulations used for validation.
Abstract: The need to perform validation of simulation results is often ignored because commercial (and non-commercial) software tools are 'trusted' due to previous results. However, previous results from different models do not indicate the current model was created properly. This paper discusses the need to validate simulations and discusses various means for the quantification of the agreement between different simulations used for validation.
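A minimal sketch of such quantification; the "reference" and "candidate" signals below are synthetic stand-ins for the same probed quantity from two simulation runs.

```python
import numpy as np

def agreement_metrics(candidate, reference):
    """Quantify agreement between two simulation result vectors, e.g. the
    same output from two tools, two meshes, or two model revisions."""
    rmse = np.sqrt(np.mean((candidate - reference) ** 2))
    nrmse = rmse / (reference.max() - reference.min())  # scale-free error
    corr = np.corrcoef(candidate, reference)[0, 1]      # shape agreement
    return {"rmse": rmse, "nrmse": nrmse, "corr": corr}

t = np.linspace(0.0, 1.0, 200)
reference = np.sin(2 * np.pi * t)
candidate = reference + 0.05 * np.random.default_rng(0).normal(size=t.size)

for name, value in agreement_metrics(candidate, reference).items():
    print(f"{name}: {value:.4f}")
```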

01 Jan 2008
TL;DR: It is shown that a vast number of scenarios can be analyzed with moderate manual effort, and that results corresponding to high-coverage FMEA / FTA analysis can be produced to verify system safety and robust performance.
Abstract: This paper presents and exemplifies a novel methodology to perform simulation-based automated testing for verification of complex chassis-control systems. The methods are derived from computer-game principles and regard the system under test as an opponent that is defeated when the specification is violated. The methodology is demonstrated on an auto-coding implementation of a brake-blending function for a heavy vehicle, in combination with a simulation model implemented in Modelica. It is shown that a vast number of scenarios can be analyzed with moderate manual effort, and that results corresponding to high-coverage FMEA / FTA analysis can be produced to verify system safety and robust performance.
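A toy sketch of the game-inspired idea: regard the system under test as an opponent and search for scenarios that defeat it. The brake-blending stand-in, the 95% specification threshold, and the injected saturation defect are all invented for illustration.

```python
import random

def brake_blend_model(request, speed):
    """Toy stand-in for a brake-blending function: splits a braking
    request (0..1) between regenerative and friction brakes."""
    regen = min(request, 0.3) if speed > 5.0 else 0.0
    friction = min(request - regen, 0.6)  # deliberately injected saturation defect
    return regen + friction

def violates_spec(request, speed):
    # Specification: at least 95% of the requested braking is delivered.
    return brake_blend_model(request, speed) < 0.95 * request

def adversarial_search(n_scenarios=100_000, seed=7):
    """Sample scenarios and collect those that 'defeat' the system,
    i.e. that violate the specification."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_scenarios):
        request, speed = rng.uniform(0.0, 1.0), rng.uniform(0.0, 30.0)
        if violates_spec(request, speed):
            failures.append((round(request, 3), round(speed, 3)))
    return failures[:5]

print("sample violating scenarios:", adversarial_search())
```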