
Showing papers on "Dependability" published in 2008


Book
23 Feb 2008
TL;DR: This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control-theory tools at a level suitable for graduate students and researchers as well as for engineers.
Abstract: A critical and important issue surrounding the design of automatic control systems of steadily increasing complexity is guaranteeing high system performance over a wide operating range while meeting requirements on system reliability and dependability. As one of the key technologies for solving this problem, advanced fault detection and identification (FDI) technology is receiving considerable attention. The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control-theory tools at a level suitable for graduate students and researchers as well as for engineers.
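As a hedged illustration of the kind of basic model-based FDI scheme such a book covers, the classical observer-based residual generator can be written as follows; the state-space matrices A, B, C and the observer gain L are generic symbols, not taken from the book.

```latex
\dot{\hat{x}} = A\hat{x} + Bu + L\,(y - C\hat{x}),
\qquad
r = y - C\hat{x}
```

In the fault-free case the residual r decays to zero provided (A - LC) is stable; a fault acting on the plant drives r away from zero, so comparing |r| against a threshold yields a simple detection test.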

2,088 citations


Proceedings Article
16 Apr 2008
TL;DR: Remus is a high-availability service that allows existing, unmodified software to be protected from the failure of the physical machine on which it runs by encapsulating the protected software in a virtual machine and asynchronously propagating changed state to a backup host at frequencies as high as forty times a second.
Abstract: Allowing applications to survive hardware failure is an expensive undertaking, which generally involves reengineering software to include complicated recovery logic as well as deploying special-purpose hardware; this represents a severe barrier to improving the dependability of large or legacy applications. We describe the construction of a general and transparent high availability service that allows existing, unmodified software to be protected from the failure of the physical machine on which it runs. Remus provides an extremely high degree of fault tolerance, to the point that a running system can transparently continue execution on an alternate physical host in the face of failure with only seconds of downtime, while completely preserving host state such as active network connections. Our approach encapsulates protected software in a virtual machine, asynchronously propagates changed state to a backup host at frequencies as high as forty times a second, and uses speculative execution to concurrently run the active VM slightly ahead of the replicated system state.
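A rough, hypothetical sketch of the checkpoint/replication loop the abstract describes (checkpoints up to forty times per second, speculative execution ahead of the replicated state, buffered output released on acknowledgement); the VM and backup objects below are invented stubs, not the Remus/Xen API.

```python
# Illustrative sketch of a Remus-style replication loop; StubVM/StubBackup are
# hypothetical placeholders, not the actual Remus implementation.
import time

class StubVM:
    def __init__(self):
        self.dirty_pages = {0: b"state"}
    def pause(self): pass
    def resume(self): pass
    def copy_dirty_state(self):
        state, self.dirty_pages = dict(self.dirty_pages), {}
        return state

class StubBackup:
    def apply(self, checkpoint):   # pretend to ship state to the backup host
        return True                # acknowledgement

def release_buffered_network_output():
    pass  # real systems hold outbound packets until the checkpoint is acknowledged

def replication_loop(vm, backup, epochs=5, frequency_hz=40):
    """Checkpoint the VM `frequency_hz` times per second."""
    interval = 1.0 / frequency_hz
    for _ in range(epochs):
        vm.pause()                          # 1. briefly stop the VM
        checkpoint = vm.copy_dirty_state()  # 2. capture state changed this epoch
        vm.resume()                         # 3. run speculatively ahead
        acked = backup.apply(checkpoint)    # 4. asynchronously transfer state
        if acked:
            release_buffered_network_output()  # 5. output commit
        time.sleep(interval)

replication_loop(StubVM(), StubBackup())
```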

715 citations


Journal ArticleDOI
TL;DR: The present atlas is a result of the EURON prospective research project “Physical Human–Robot Interaction in anthropic DOMains (PHRIDOM)”, aimed at charting the new territory of pHRI, and constitutes the scientific basis for the ongoing STReP project “Physical Human–Robot Interaction: depENDability and Safety (PHRIENDS)”.

699 citations


Journal ArticleDOI
TL;DR: In this article, the use of formalised and software-based procedures for the analysis and interpretation of qualitative interview data is advocated for International Business research, with a focus on international datasets, equivalence issues, multiple research environments and multiple researchers.
Abstract: Reliability, validity, generalisability and objectivity are fundamental concerns for quantitative researchers. For qualitative research, however, the role of these dimensions is blurred. Some researchers argue that these dimensions are not applicable to qualitative research and a qualitative researcher’s tool chest should be geared towards trustworthiness and encompass issues such as credibility, dependability, transferability and confirmability. This paper advocates the use of formalised and software-based procedures for the analysis and interpretation of qualitative interview data. It is argued that International Business research, with a focus on international datasets, equivalence issues, multiple research environments and multiple researchers, will benefit from formalisation. The use of software programmes is deemed to help to substantiate the analysis and interpretation of textual interview data.

534 citations


Journal ArticleDOI
TL;DR: The authors' classification of failures reveals the nature and extent of failures in the Sprint IP backbone and provides a probabilistic failure model, which can be used to generate realistic failure scenarios, as input to various network design and traffic engineering problems.
Abstract: As the Internet evolves into a ubiquitous communication infrastructure and supports increasingly important services, its dependability in the presence of various failures becomes critical. In this paper, we analyze IS-IS routing updates from the Sprint IP backbone network to characterize failures that affect IP connectivity. Failures are first classified based on patterns observed at the IP layer; in some cases, it is possible to further infer their probable causes, such as maintenance activities, router-related problems and optical-layer problems. Key temporal and spatial characteristics of each class are analyzed and, when appropriate, parameterized using well-known distributions. Our results indicate that 20% of all failures happen during a period of scheduled maintenance activities. Of the unplanned failures, almost 30% are shared by multiple links and are most likely due to router-related and optical equipment-related problems, while 70% affect a single link at a time. Our classification of failures reveals the nature and extent of failures in the Sprint IP backbone. Furthermore, our characterization of the different classes provides a probabilistic failure model, which can be used to generate realistic failure scenarios, as input to various network design and traffic engineering problems.
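A minimal sketch of how such a probabilistic failure model might be used to generate synthetic failure scenarios; the class proportions come from the abstract above, while the distribution choices and their parameters are placeholders rather than the paper's fitted values.

```python
# Hedged illustration: draw synthetic failure events (time, class, duration).
# 20% maintenance; of the remainder, ~30% shared, ~70% single-link (from the
# abstract). Weibull/log-normal parameters are assumptions, not fitted values.
import random

def generate_failures(n=10, seed=1):
    rng = random.Random(seed)
    events, t = [], 0.0
    for _ in range(n):
        t += rng.weibullvariate(3600.0, 0.8)     # inter-failure time (s), assumed
        if rng.random() < 0.20:
            cls = "maintenance"
        elif rng.random() < 0.30:
            cls = "shared (router/optical)"
        else:
            cls = "single-link"
        duration = rng.lognormvariate(4.0, 1.0)  # failure duration (s), assumed
        events.append((round(t, 1), cls, round(duration, 1)))
    return events

for event in generate_failures():
    print(event)
```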

383 citations


BookDOI
01 Aug 2008
TL;DR: The Handbook of Performability Engineering as discussed by the authors provides a holistic view of the entire life cycle of activities of the product, along with the associated cost of environmental preservation at each stage, while maximizing the performance.
Abstract: The performance of a product, a system or a service is usually judged in terms of dependability (which can be defined as an aggregate of quality, reliability, maintainability, etc.) and safety, not overlooking the cost of achieving these attributes. As of now, dependability and cost effectiveness are primarily seen as instruments for conducting international trade in the free-market environment and thereby deciding the economic prosperity of a nation. However, the internalization of the hidden costs of environmental preservation will have to be accounted for, sooner or later, in order to produce sustainable products in the long run. These factors cannot be considered in isolation of each other. The Handbook of Performability Engineering considers all aspects of performability engineering, providing a holistic view of the entire life cycle of activities of the product, along with the associated cost of environmental preservation at each stage, while maximizing performance.

306 citations


Proceedings Article
01 Jan 2008
TL;DR: The adjective resilient has been in use for decades in the field of dependable computing systems, however essentially as a synonym of fault-tolerant, thus generally ignoring the unexpected aspect of the phenomena the systems may have to face.
Abstract: Definition of resilience. Resilience (from the Latin etymology resilire, to rebound) is literally the act or action of springing back. As a property, two strands can historically be identified: a) in social psychology [Claudel 1936], where it is about elasticity, spirit, resource and good mood, and b) in materials science, where it is about robustness and elasticity. The notion of resilience has then been elaborated: • in child psychology and psychiatry [Engle et al. 1996], referring to living and developing successfully when facing adversity; • in ecology [Holling 1973], referring to moving from one stability domain to another under the influence of disturbances; • in business [Hamel & Välikangas 2003], referring to the capacity to reinvent a business model before circumstances force it to; • in industrial safety [Hollnagel et al. 2006], referring to anticipating risk changes before damage occurrence. A common point to the above senses of the notion of resilience is the ability to successfully accommodate unforeseen environmental perturbations or disturbances. A careful examination of [Holling 1973] leads one to draw interesting parallels between ecological systems and computing systems, due to: a) the emphasis on the notion of persistence of a property: resilience is said to "determine the persistence of relationships within a system and is a measure of the ability of these systems to absorb changes of state variables, driving variables, and parameters, and still persist"; b) the dissociation between resilience and stability: it is noted that "a system can be very resilient and still fluctuate greatly, i.e., have low stability" and that "low stability seems to introduce high resilience"; c) the mention that diversity is of significant influence on both stability (decreasing it) and resilience (increasing it). The adjective resilient has been in use for decades in the field of dependable computing systems, e.g. [Alsberg & Day 1976], and is more and more in use, however essentially as a synonym of fault-tolerant, thus generally ignoring the unexpected aspect of the phenomena the systems may have to face. A noteworthy exception is the preface of [Anderson 1985], which says "The two key attributes here are dependability and robustness. […] A computing system can be said to be robust if it retains its ability to deliver service in conditions which are beyond its normal domain of operation". Fault-tolerant computing systems are known for exhibiting some robustness with respect …

233 citations


Journal ArticleDOI
TL;DR: It is shown that the proposed method correctly detects and diagnoses the most commonly occurring track circuit failures in a laboratory test rig of one type of audio frequency jointless track circuit.

123 citations


Book
07 Jan 2008
TL;DR: Dependability Benchmarking for Computer Systems provides a comprehensive collection of benchmarks for measuring dependability in hardware-software systems, and explains the concepts behind them.
Abstract: As computer systems become more complex and mission-critical, it becomes imperative for systems engineers and researchers to have metrics for a system's "illities": dependability, reliability, availability, and serviceability. Written by leading experts, Dependability Benchmarking for Computer Systems provides a comprehensive collection of benchmarks for measuring dependability in hardware-software systems, and explains the concepts behind them. It collects the expert knowledge from DBench, a special research project funded by the European Union, to provide an inclusive text for engineers, researchers, system vendors, system purchasers, dependability researchers, computer industry consultants, and system integrators.

109 citations


BookDOI
20 Dec 2008
TL;DR: Providing domain-specific solutions to various technical challenges, this handbook serves as a reliable, complete, and well-documented source of information on automotive embedded systems.
Abstract: Highlighting requirements, technologies, and business models, the Automotive Embedded Systems Handbook provides a comprehensive overview of existing and future automotive electronic systems. It presents state-of-the-art methodological and technical solutions in the areas of in-vehicle architectures, multipartner development processes, software engineering methods, embedded communications, and safety and dependability assessment. Divided into four parts, the book begins with an introduction to the design constraints of automotive-embedded systems. It also examines AUTOSAR as the emerging de facto standard and looks at how key technologies, such as sensors and wireless networks, will facilitate the conception of partially and fully autonomous vehicles. The next section focuses on networks and protocols, including CAN, LIN, FlexRay, and TTCAN. The third part explores the design processes of electronic embedded systems, along with new design methodologies, such as the virtual platform. The final section presents validation and verification techniques relating to safety issues. Providing domain-specific solutions to various technical challenges, this handbook serves as a reliable, complete, and well-documented source of information on automotive embedded systems.

103 citations


Proceedings ArticleDOI
01 Apr 2008
TL;DR: The content-addressable confidentiality scheme developed for DepSpace bridges the gap between Byzantine fault-tolerant replication and confidentiality of replicated data and can be used in other systems that store critical data.
Abstract: The tuple space coordination model is one of the most interesting coordination models for open distributed systems due to its space and time decoupling and its synchronization power. Several works have tried to improve the dependability of tuple spaces through the use of replication for fault tolerance and access control for security. However, many practical applications in the Internet require both fault tolerance and security. This paper describes the design and implementation of DepSpace, a Byzantine fault-tolerant coordination service that provides a tuple space abstraction. The service offered by DepSpace is secure, reliable and available as long as less than a third of service replicas are faulty. Moreover, the content-addressable confidentiality scheme developed for DepSpace bridges the gap between Byzantine fault-tolerant replication and confidentiality of replicated data and can be used in other systems that store critical data.
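For readers unfamiliar with the coordination model, here is a minimal sketch of the tuple space abstraction (out/rdp/inp) that a service like DepSpace exposes; it deliberately omits replication, Byzantine fault tolerance and the confidentiality scheme, and the names and matching rules are simplified for illustration.

```python
# Toy tuple space: insert tuples, read or remove tuples matching a template.
WILDCARD = None  # a None field in a template matches any value

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):                 # insert a tuple
        self._tuples.append(tuple(tup))

    def _matches(self, template, tup):
        return len(template) == len(tup) and all(
            f is WILDCARD or f == v for f, v in zip(template, tup))

    def rdp(self, template):            # non-blocking read
        return next((t for t in self._tuples if self._matches(template, t)), None)

    def inp(self, template):            # non-blocking read-and-remove
        t = self.rdp(template)
        if t is not None:
            self._tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("sensor", 42, "ok"))
print(ts.rdp(("sensor", WILDCARD, WILDCARD)))   # ('sensor', 42, 'ok')
print(ts.inp(("sensor", 42, WILDCARD)))         # removes and returns the tuple
```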

Journal ArticleDOI
TL;DR: A quantitative solution that minimizes the life cycle cost of a product by developing an optimal product validation plan that utilizes the inverse relationship between the cost of product validation activities and the expected cost of repair and warranty returns.
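The trade-off this TL;DR describes can be hedged into a generic life-cycle cost formulation (the notation is ours, not the paper's): validation cost grows with the validation effort v while the expected repair and warranty cost shrinks with it, so the optimal plan balances the two terms.

```latex
\min_{v \ge 0}\; C_{\text{lifecycle}}(v)
  \;=\; C_{\text{validation}}(v) \;+\; \mathbb{E}\!\left[\,C_{\text{repair/warranty}}(v)\,\right]
```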

Journal ArticleDOI
J.C. Baraza, J. Gracia, Sara Blanc, D. Gil, Pedro Gil 
TL;DR: New proposals to implement saboteurs and mutants for models in VHDL which are easy-to-automate, and whose philosophy can be generalized to other hardware description languages are presented.
Abstract: Deep submicrometer devices are expected to be increasingly sensitive to physical faults. For this reason, fault-tolerance mechanisms are more and more required in VLSI circuits, so validating their dependability is a primary concern in the design process. Fault injection techniques based on the use of hardware description languages offer important advantages compared with other techniques. First, as these techniques can be applied during the design phase of the system, they help reduce time-to-market. Second, they present high controllability and reachability. Among the different techniques, those based on the use of saboteurs and mutants are especially attractive due to their high fault modeling capability. However, automatically implementing these techniques in a fault injection tool is difficult; especially complex are the insertion of saboteurs and the generation of mutants. In this paper, we present new proposals to implement saboteurs and mutants for VHDL models which are easy to automate, and whose philosophy can be generalized to other hardware description languages.
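The paper's saboteurs are VHDL components inserted on signals; the following fragment is only a language-neutral illustration of the idea, with invented names: a saboteur sits between a driver and a reader and corrupts the value while injection is active.

```python
# Illustrative saboteur: pass the signal through unless injection is active,
# then force or flip the value. Fault names and the trace are hypothetical.
def saboteur(value, active, fault="bit_flip", mask=0x1):
    if not active:
        return value
    if fault == "stuck_at_0":
        return 0
    if fault == "stuck_at_1":
        return 1
    if fault == "bit_flip":
        return value ^ mask
    return value

trace = [1, 1, 0, 1, 0, 1]
golden = list(trace)
faulty = [saboteur(v, active=(3 <= t <= 4)) for t, v in enumerate(trace)]
print(golden)   # [1, 1, 0, 1, 0, 1]
print(faulty)   # values corrupted during the injection window (cycles 3-4)
```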

Proceedings ArticleDOI
07 May 2008
TL;DR: ADAPT is a tool that aims at easing the task of evaluating dependability measures in the context of modern model driven engineering processes based on AADL, and provides a dependability evaluation model in the form of a Generalized Stochastic Petri Net (GSPN).
Abstract: ADAPT is a tool that aims at easing the task of evaluating dependability measures in the context of modern model-driven engineering processes based on AADL (Architecture Analysis and Design Language). Hence, its input is an AADL architectural model annotated with dependability-related information. Its output is a dependability evaluation model in the form of a Generalized Stochastic Petri Net (GSPN). The latter can be processed by existing dependability evaluation tools to compute quantitative measures such as reliability, availability, etc. ADAPT interfaces OSATE (the Open Source AADL Tool Environment) on the AADL side and SURF-2 on the dependability evaluation side. In addition, ADAPT provides the GSPN in XML/XMI format, which represents a gateway to other dependability evaluation tools, as the processing techniques for XML files allow it to be easily converted to a tool-specific GSPN.

Proceedings ArticleDOI
26 Mar 2008
TL;DR: The problem of achieving good performance in accuracy and promptness with a robot manipulator, under the condition that safety is guaranteed throughout the whole task execution, is discussed.
Abstract: Robots designed to share an environment with humans, such as in domestic or entertainment applications or in cooperative material-handling tasks, must fulfill different requirements from those typically met in industry. It is often the case, for instance, that accuracy requirements are less demanding. On the other hand, concerns of paramount importance are safety and dependability of the robot system. According to such difference in requirements, it can be expected that usage of conventional industrial arms for anthropic environments will be far from optimal. An approach to increase the safety level of robot arms interacting with humans consists in the introduction of compliance at the mechanical design level. In this paper we discuss the problem of achieving good performance in accuracy and promptness with a robot manipulator under the condition that safety is guaranteed throughout the whole task execution. Intuitively, while a rigid and powerful structure of the arm would favor its performance, lightweight compliant structures are more suitable for safe operation. The quantitative analysis of the resulting design trade-off between safety and performance has a strong impact on how robot mechanisms and controllers should be designed for human-interactive applications. We discuss a few different possible concepts for safely actuating joints, and focus on aspects related to the implementation of the mechanics and control of this new class of robots.

Proceedings ArticleDOI
03 Dec 2008
TL;DR: This paper reviews the key concepts introduced by the error annex of the Architecture Analysis and Description Language, and compares it to existing safety evaluation techniques regarding its ability to provide modeling, process, and tool support.
Abstract: Early quality evaluation and support for decisions that affect quality characteristics are among the key incentives to formally specify the architecture of a software-intensive system. The Architecture Analysis and Description Language (AADL) with its error annex is a new and promising architecture modeling language that supports analysis of safety and other dependability properties. This paper reviews the key concepts introduced by the error annex, and compares it to existing safety evaluation techniques regarding its ability to provide modeling, process, and tool support. Based on this review and the comparison, its strengths and weaknesses are identified and possible improvements for the model-driven safety evaluation methodology based on AADL's error annex are highlighted.

Journal ArticleDOI
TL;DR: This paper presents a risk-based approach that creates modular attack trees for each component in the system, specified as parametric constraints, which allow quantifying the probability of security breaches that occur due to internal component vulnerabilities as well as vulnerabilities in the component's deployment environment.
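As a hedged illustration of how leaf probabilities might roll up an attack tree, the sketch below applies the common independent-event combination rules for AND/OR nodes; the paper's modular attack trees and parametric constraints are richer than this toy example, whose names and numbers are invented.

```python
# Combine leaf attack probabilities through AND/OR gates (independence assumed).
from functools import reduce

def and_gate(ps):                 # all sub-attacks must succeed
    return reduce(lambda a, b: a * b, ps, 1.0)

def or_gate(ps):                  # at least one sub-attack succeeds
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), ps, 1.0)

# toy tree: breach = (exploit_component AND weak_deployment) OR stolen_credentials
p_breach = or_gate([and_gate([0.3, 0.5]), 0.05])
print(round(p_breach, 4))  # 0.1925
```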

Proceedings ArticleDOI
07 May 2008
TL;DR: This paper describes a framework similar in spirit to so-called honeyfarms, but built in a way that makes its large-scale deployment easily feasible and offers a very rich level of interaction with attackers without suffering from the drawbacks of expensive high-interaction systems.
Abstract: The dependability community has expressed a growing interest in recent years in the effects of malicious, external, operational faults in computing systems, i.e., intrusions. The term intrusion tolerance has been introduced to emphasize the need to go beyond what classical fault-tolerant systems were able to offer. Unfortunately, as opposed to well-understood accidental faults, the domain still lacks sound data sets and models to offer rationales for the design of intrusion-tolerant solutions. In this paper, we describe a framework similar in spirit to so-called honeyfarms, but built in a way that makes its large-scale deployment easily feasible. Furthermore, it offers a very rich level of interaction with attackers without suffering from the drawbacks of expensive high-interaction systems. The system is described, and a prototype is presented along with some preliminary results that highlight the feasibility as well as the usefulness of the approach.

Proceedings ArticleDOI
24 Jun 2008
TL;DR: A key feature is its formal semantics in terms of input/output-interactive Markov chains, which enables both compositional modeling and compositional state space generation and reduction.
Abstract: This paper proposes a formally well-rooted and extensible framework for dependability evaluation: Arcade (architectural dependability evaluation). It has been designed to combine the strengths of previous approaches to the evaluation of dependability. A key feature is its formal semantics in terms of input/output-interactive Markov chains, which enables both compositional modeling and compositional state space generation and reduction. The latter enables great computational reductions for many models. The Arcade approach is extensible, hence adaptable to new circumstances or application areas. The paper introduces the new modeling approach, discusses its formal semantics and illustrates its use with two case studies.
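To illustrate the kind of measure computed from such Markov-chain models (this is the textbook two-state availability model, not Arcade's actual semantics), a component with failure rate λ and repair rate μ has steady-state availability:

```latex
A_{\infty} \;=\; \frac{\mu}{\lambda + \mu} \;=\; \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}},
\qquad \lambda = \frac{1}{\mathrm{MTTF}},\quad \mu = \frac{1}{\mathrm{MTTR}}
```

Compositional approaches such as Arcade build much larger state spaces out of such per-component behaviours and then reduce them before computing reliability and availability measures.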

Journal ArticleDOI
TL;DR: The structure and functionality of CoSMIC (Component Synthesis using Model Integrated Computing), which is an MDM toolsuite that addresses key DRE application and middleware lifecycle challenges, including partitioning the components to use distributed resources effectively, validating software configurations, assuring multiple simultaneous QoS properties in real-time, and safeguarding against rapidly changing technology.

Book ChapterDOI
22 Sep 2008
TL;DR: This paper describes and demonstrates an approach that promises to bridge the gap between model-based systems engineering and the safety process of automotive embedded systems by integrating safety analysis techniques, a method for developing and managing Safety Cases, and the EAST-ADL2 architecture description language.
Abstract: This paper describes and demonstrates an approach that promises to bridge the gap between model-based systems engineering and the safety process of automotive embedded systems. The basis for this is the integration of safety analysis techniques, a method for developing and managing Safety Cases, and a systematic approach to model-based engineering --- the EAST-ADL2 architecture description language. Three areas are highlighted: (1) System model development on different levels of abstraction. This enables fulfilling many requirements on software development as specified by ISO-CD-26262; (2) Safety Case development in close connection to the system model; (3) Analysis of mal-functional behaviour that may cause hazards, by modelling of errors and error propagation in a (complex and hierarchical) system model.

Journal ArticleDOI
TL;DR: A self-management infrastructure requires a self-representation to model system functionality concerns, and the model-view-controller design pattern can improve concern separation in such a self-representation.
Abstract: A self-management infrastructure requires a self-representation to model system functionality concerns. The model-view-controller design pattern can improve concern separation in a self-representation. Future computing initiatives such as ubiquitous and pervasive computing, large-scale distribution, and on-demand computing will foster unpredictable and complex environments with challenging demands. Next-generation systems will require flexible system infrastructures that can adapt to both dynamic changes in operational requirements and environmental conditions, while providing predictable behavior in areas such as throughput, scalability, dependability, and security. Successful projects, once deployed, will require skilled administration personnel to install, configure, maintain, and provide 24/7 support. Message-oriented middleware is one of the foundations of distributed systems.

Journal ArticleDOI
TL;DR: An analysis of the dependability of an elementary yet critical robot component, i.e., the joint-level actuation subsystem, considers robot actuators that implement the VSA paradigm to explore and further promote dependability studies in robotics, as a means of addressing concerns in safety-critical robotic systems for physical interactions with humans.
Abstract: In this article, we performed an analysis of the dependability of an elementary yet critical robot component, i.e., the joint-level actuation subsystem. We consider robot actuators that implement the VSA paradigm, i.e., the ability to change the effective transmission stiffness during motion to achieve high performance while constantly keeping injury risks from accidental impacts with humans below a given threshold. Without attempting a comprehensive review of different existing design approaches to VSA, we focused on the analysis of three different arrangements of agonistic/antagonistic actuation mechanisms for pHRI applications. Several aspects of their performance, safety, and dependability have been considered to get an indicative, though certainly not exhaustive, comparison of these alternatives. According to our results, the simple AA arrangement is more reliable (due to the simplicity of its mechanical implementation) if FM is not used. Proper FM actions can make the other designs perform as well as the simple AA in terms of reliability, and perform better in terms of steerability. Simulations of impacts in failed states (where FM is not used, by a worst-case assumption) also show that the different designs have comparable safety properties. Although overall results for the bidirectional arrangements are somewhat superior, especially in terms of steerability (if FM is applied), we do not extrapolate any general claim in this regard. Indeed, many factors influence the results of similar studies, and each case should be considered in detail and very carefully. The scope of the study can become quite broad, and many of the theoretical and technical issues presented here (e.g., fault detection, supervisory control, and safety-related systems) will require further separate investigations. One of the purposes of this work was to explore and further promote dependability studies in robotics, as a means of addressing concerns in safety-critical robotic systems for physical interaction with humans. In this sense, a robot for pHRI applications is a unique benchmark for improving the state of the art of fault-tolerant design as well as for developing tools to master the performance, dependability, and safety issues of a robotic structure.

Journal ArticleDOI
TL;DR: In the 21st century, software engineers face the often formidable challenges of simultaneously dealing with rapid change, uncertainty and emergence, dependability, diversity, and interdependence, but they also have opportunities to make significant contributions that will make a difference for the better.
Abstract: In the 21st century, software engineers face the often formidable challenges of simultaneously dealing with rapid change, uncertainty and emergence, dependability, diversity, and interdependence, but they also have opportunities to make significant contributions that will make a difference for the better.

Proceedings ArticleDOI
03 Dec 2008
TL;DR: Empirical evidence demonstrates that G2Way, in some cases, outperforms existing strategies, including AETG and its variations, in terms of the number of generated test data within reasonable execution time.
Abstract: Our continuous dependence on software (to assist with as well as facilitate our daily chores) often raises dependability issues, particularly when software is employed in harsh, life-threatening, or (safety-)critical applications. Here, rigorous software testing becomes immensely important. Many combinations of possible input parameters, hardware/software environments, and system conditions need to be tested and verified for conformance. Due to resource constraints as well as time and costing factors, considering all exhaustive test possibilities would be impossible (i.e., due to the combinatorial explosion problem). Earlier work suggests that a pairwise sampling strategy (i.e., based on two-way parameter interaction) can be effective. Building on and complementing earlier work, this paper discusses an efficient pairwise test data generation strategy, called G2Way. In doing so, this paper demonstrates the correctness of G2Way as well as compares its effectiveness against existing strategies including AETG and its variations, IPO, SA, GA, ACA, and All Pairs. Empirical evidence demonstrates that G2Way, in some cases, outperformed other strategies in terms of the number of generated test data within reasonable execution time.
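For readers new to pairwise testing, the sketch below is a generic greedy 2-way covering strategy, not the G2Way algorithm itself; it only illustrates how all parameter-value pairs can be covered with far fewer cases than the exhaustive product. The parameter values are invented.

```python
# Greedy pairwise (2-way) test suite generation: repeatedly pick the full
# combination that covers the most still-uncovered parameter-value pairs.
from itertools import combinations, product

def all_pairs(params):
    pairs = set()
    for (i, vi), (j, vj) in combinations(enumerate(params), 2):
        for a, b in product(vi, vj):
            pairs.add(((i, a), (j, b)))
    return pairs

def greedy_pairwise(params):
    uncovered = all_pairs(params)
    suite = []
    while uncovered:
        best = max(product(*params),
                   key=lambda t: sum(((i, t[i]), (j, t[j])) in uncovered
                                     for i, j in combinations(range(len(t)), 2)))
        suite.append(best)
        for i, j in combinations(range(len(best)), 2):
            uncovered.discard(((i, best[i]), (j, best[j])))
    return suite

params = [["chrome", "firefox"], ["win", "linux", "mac"], ["ipv4", "ipv6"]]
tests = greedy_pairwise(params)
print(len(tests), "tests instead of", 2 * 3 * 2)  # typically 6-7 tests instead of 12
```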

01 Jan 2008
TL;DR: This paper provides solutions to verify the freshness constraints of the signals exchanged in the static segment of a FlexRay network, in the form of both simple non-schedulability tests and an exact analysis.
Abstract: This paper deals with the configuration of the static segment of a FlexRay network, in the case where the tasks producing the signals are not synchronized with the FlexRay communication cycle, as can be the case, for instance, if legacy software is to be reused. First, we provide solutions to verify the freshness constraints of the signals exchanged in the static segment, in the form of both simple non-schedulability tests and an exact analysis. Then we propose a heuristic to construct the communication schedule, which proved to be efficient in our experiments. Finally, we highlight some future work that should help us further optimize the configuration of FlexRay networks, be it with regard to hardware resource usage or dependability objectives.

Proceedings ArticleDOI
16 Mar 2008
TL;DR: The utility of low-cost, generic invariants in their capacity of error detectors within a spectrum-based fault localization (SFL) approach aimed to diagnose program defects in the operational phase is studied.
Abstract: Despite extensive testing in the development phase, residual defects can be a great threat to dependability in the operational phase. This paper studies the utility of low-cost, generic invariants ("screeners") in their capacity of error detectors within a spectrum-based fault localization (SFL) approach aimed at diagnosing program defects in the operational phase. The screeners considered are simple bit-mask and range invariants that screen every load/store and function argument/return program point. Their generic nature allows them to be automatically instrumented without any programmer effort, while training is straightforward given the test cases available in the development phase. Experiments based on the Siemens program set demonstrate diagnostic performance that is similar to the traditional, development-time application of SFL based on the program pass/fail information known beforehand. This diagnostic performance is currently attained at an average 14% screener execution time overhead, but this overhead can be reduced at limited performance penalty.
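A simplified sketch of the spectrum-based fault localization step: components are ranked by the Ochiai similarity between their involvement in runs and the error signal (which the paper obtains from screener violations rather than from test oracles); the spectra and error flags below are invented.

```python
# Rank components by Ochiai similarity over program spectra.
from math import sqrt

def ochiai_ranking(spectra, errors):
    """spectra[r][c] = 1 if run r touched component c; errors[r] = 1 on failure."""
    n_comp = len(spectra[0])
    total_fail = sum(errors)
    scores = []
    for c in range(n_comp):
        a_ef = sum(1 for r, e in enumerate(errors) if e and spectra[r][c])      # failed & touched
        a_ep = sum(1 for r, e in enumerate(errors) if not e and spectra[r][c])  # passed & touched
        denom = sqrt(total_fail * (a_ef + a_ep)) or 1.0
        scores.append((a_ef / denom, c))
    return sorted(scores, reverse=True)

spectra = [[1, 1, 0],   # run 0 touched components 0 and 1
           [0, 1, 1],   # run 1 touched components 1 and 2
           [1, 1, 1]]   # run 2 touched all components
errors = [0, 1, 1]      # runs 1 and 2 flagged as erroneous (e.g., by screeners)
print(ochiai_ranking(spectra, errors))  # component 2, then 1, rank above 0
```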

Journal ArticleDOI
TL;DR: A novel, simple coding scheme called Crosstalk Avoiding Double Error Correction Code (CADEC) is proposed that provides significant energy savings compared to previously proposed crosstalk avoiding single error correcting codes and error-detection/retransmission schemes.
Abstract: Network on Chip (NoC) is an enabling methodology of integrating a very high number of intellectual property (IP) blocks in a single System on Chip (SoC). A major challenge that NoC design is expected to face is the intrinsic unreliability of the interconnect infrastructure under technology limitations. Research must address the combination of new device-level defects or error-prone technologies within systems that must deliver high levels of reliability and dependability while satisfying other hard constraints such as low energy consumption. By incorporating novel error correcting codes it is possible to protect the NoC communication fabric against transient errors and at the same time lower the energy dissipation. We propose a novel, simple coding scheme called Crosstalk Avoiding Double Error Correction Code (CADEC). Detailed analysis followed by simulations with three commonly used NoC architectures show that CADEC provides significant energy savings compared to previously proposed crosstalk avoiding single error correcting codes and error-detection/retransmission schemes.
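As background for the coding idea, the sketch below implements a standard Hamming(7,4) single-error-correcting code; CADEC itself additionally avoids crosstalk patterns and corrects double errors, which this illustration does not attempt.

```python
# Standard Hamming(7,4) SEC code (1-indexed bit positions, parity at 1, 2, 4).
def hamming74_encode(d):                     # d: list of 4 data bits
    code = [0] * 8                           # positions 1..7 used
    code[3], code[5], code[6], code[7] = d
    code[1] = code[3] ^ code[5] ^ code[7]    # parity over positions 1,3,5,7
    code[2] = code[3] ^ code[6] ^ code[7]    # parity over positions 2,3,6,7
    code[4] = code[5] ^ code[6] ^ code[7]    # parity over positions 4,5,6,7
    return code[1:]

def hamming74_decode(bits):                  # bits: 7 received bits
    code = [0] + list(bits)
    s = ((code[4] ^ code[5] ^ code[6] ^ code[7]) << 2 |
         (code[2] ^ code[3] ^ code[6] ^ code[7]) << 1 |
         (code[1] ^ code[3] ^ code[5] ^ code[7]))
    if s:                                    # syndrome = position of the flipped bit
        code[s] ^= 1
    return [code[3], code[5], code[6], code[7]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                                 # inject a single-bit error (position 5)
print(hamming74_decode(word))                # [1, 0, 1, 1] recovered
```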

Book ChapterDOI
01 Jan 2008
TL;DR: This article attempts to summarize fifteen years of active research on binary decision diagrams in risk and dependability studies, from the introduction of BDDs in that field to the questioning of several central mathematical definitions.
Abstract: Bryant’s binary decision diagrams are state-of-the-art data structures used to encode and to manipulate Boolean functions. Risk and dependability studies are heavy consumers of Boolean functions, for the most widely used modeling methods, namely fault trees and event trees, rely on them. The introduction of BDDs in that field renewed its algorithmic framework. Moreover, several central mathematical definitions, like the notions of minimal cutsets and importance factors, were questioned. This article attempts to summarize fifteen years of active research on those topics.
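To make the connection concrete, the sketch below computes the top-event probability of a small fault tree by brute-force Shannon decomposition over the basic events; a BDD performs essentially this computation efficiently by sharing and caching sub-results. The tree and probabilities are invented.

```python
# Exact top-event probability of a fault tree via Shannon decomposition.
def evaluate(tree, assignment):
    if isinstance(tree, str):                # basic event
        return assignment[tree]
    op, *children = tree
    vals = [evaluate(c, assignment) for c in children]
    return all(vals) if op == "and" else any(vals)

def top_event_probability(tree, probs, variables=None, assignment=None):
    variables = list(probs) if variables is None else variables
    assignment = {} if assignment is None else assignment
    if not variables:
        return 1.0 if evaluate(tree, assignment) else 0.0
    v, rest = variables[0], variables[1:]
    p_hi = top_event_probability(tree, probs, rest, {**assignment, v: True})
    p_lo = top_event_probability(tree, probs, rest, {**assignment, v: False})
    return probs[v] * p_hi + (1.0 - probs[v]) * p_lo

# toy fault tree: TOP = (A AND B) OR (A AND C), with a repeated basic event A
tree = ("or", ("and", "A", "B"), ("and", "A", "C"))
probs = {"A": 0.01, "B": 0.2, "C": 0.1}
print(round(top_event_probability(tree, probs), 6))  # 0.0028 = 0.01 * (1 - 0.8*0.9)
```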

Journal ArticleDOI
TL;DR: In this article, the use of Generalizability (G) theory in examining the dependability of concept map assessment scores and designing a concept map assessor for a particular practical application is discussed.
Abstract: In the first part of this article, the use of Generalizability (G) theory in examining the dependability of concept map assessment scores and designing a concept map assessment for a particular practical application is discussed. In the second part, the application of G theory is demonstrated by comparing the technical qualities of two frequently used mapping techniques: construct-a-map with created linking phrases (C) and construct-a-map with selected linking phrases (S). Some measurement facets that influence concept-map scores are explored, and it is shown how to optimize different concept mapping techniques by varying the conditions for different facets. It is found that C and S are not technically equivalent. The G coefficients for S are larger than those for C under the same condition. Furthermore, a decision (D) study shows that fewer items (propositions) would be needed for S than for C to reach the desired level of G coefficients if only one occasion could be afforded. On the other hand, C seems to reveal stu...
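For reference, the generalizability coefficient discussed here has the standard G-theory form below for a persons × items (propositions) × occasions design; the notation is assumed rather than taken from the article. Increasing the number of propositions n_i or occasions n_o shrinks the relative error variance, which is the lever the D study uses when trading off fewer items against the desired coefficient.

```latex
E\rho^{2} \;=\; \frac{\sigma^{2}_{p}}{\sigma^{2}_{p} + \sigma^{2}_{\delta}},
\qquad
\sigma^{2}_{\delta} \;=\; \frac{\sigma^{2}_{pi}}{n_{i}} + \frac{\sigma^{2}_{po}}{n_{o}} + \frac{\sigma^{2}_{pio,e}}{n_{i}\,n_{o}}
```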