
Showing papers on "Dependability published in 2013"


Journal ArticleDOI
TL;DR: This paper proposes a Stackelberg game between utility companies and end-users to maximize the revenue of each utility company and the payoff of each user, derives analytical results for the Stackelberg equilibrium of the game, and proves that a unique solution exists.
Abstract: Demand Response Management (DRM) is a key component in the smart grid to effectively reduce power generation costs and user bills. However, it has been an open issue to address the DRM problem in a network of multiple utility companies and consumers where every entity is concerned about maximizing its own benefit. In this paper, we propose a Stackelberg game between utility companies and end-users to maximize the revenue of each utility company and the payoff of each user. We derive analytical results for the Stackelberg equilibrium of the game and prove that a unique solution exists. We develop a distributed algorithm which converges to the equilibrium with only local information available for both utility companies and end-users. Though DRM helps to facilitate the reliability of power supply, the smart grid can be susceptible to privacy and security issues because of communication links between the utility companies and the consumers. We study the impact of an attacker who can manipulate the price information from the utility companies. We also propose a scheme based on the concept of shared reserve power to improve the grid reliability and ensure its dependability.
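
For orientation, the leader-follower structure behind such games can be written generically as below. This is an illustrative single-leader sketch with placeholder symbols (price p, demands d_n, user utility functions U_n), not the paper's exact multi-utility model.

```latex
% Generic one-leader, N-follower Stackelberg structure (illustrative only;
% the paper's model has multiple competing utility companies).
\begin{align}
  \text{leader:}\quad & \max_{p \ge 0} \; p \sum_{n=1}^{N} d_n^{*}(p) \\
  \text{followers:}\quad & d_n^{*}(p) = \operatorname*{arg\,max}_{d_n \ge 0} \; \big( U_n(d_n) - p\, d_n \big)
\end{align}
```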

705 citations


Proceedings ArticleDOI
16 Aug 2013
TL;DR: This paper describes several threat vectors that may enable the exploit of SDN vulnerabilities and sketches the design of a secure and dependable SDN control platform as a materialization of the concept here advocated.
Abstract: Software-defined networking empowers network operators with more flexibility to program their networks. With SDN, network management moves from codifying functionality in terms of low-level device configurations to building software that facilitates network management and debugging. By separating the complexity of state distribution from network specification, SDN provides new ways to solve long-standing problems in networking --- routing, for instance --- while simultaneously allowing the use of security and dependability techniques, such as access control or multi-path. However, the security and dependability of the SDN itself is still an open issue. In this position paper we argue for the need to build secure and dependable SDNs by design. As a first step in this direction we describe several threat vectors that may enable the exploit of SDN vulnerabilities. We then sketch the design of a secure and dependable SDN control platform as a materialization of the concept here advocated. We hope that this paper will trigger discussions in the SDN community around these issues and serve as a catalyser to join efforts from the networking and security & dependability communities in the ultimate goal of building resilient control planes.

667 citations


Journal ArticleDOI
TL;DR: A lightweight and dependable trust system (LDTS) is proposed for WSNs that employ clustering algorithms, together with a self-adaptive weighted method for trust aggregation at the CH level that overcomes the limitations of traditional weighting methods for trust factors.
Abstract: The resource efficiency and dependability of a trust system are the most fundamental requirements for any wireless sensor network (WSN). However, existing trust systems developed for WSNs are incapable of satisfying these requirements because of their high overhead and low dependability. In this work, we propose a lightweight and dependable trust system (LDTS) for WSNs that employ clustering algorithms. First, a lightweight trust decision-making scheme is proposed based on the nodes' identities (roles) in the clustered WSNs, which is suitable for such WSNs because it facilitates energy saving. By cancelling feedback between cluster members (CMs) or between cluster heads (CHs), this approach can significantly improve system efficiency while reducing the effect of malicious nodes. More importantly, considering that CHs take on large amounts of data forwarding and communication tasks, a dependability-enhanced trust evaluating approach is defined for cooperation between CHs. This approach can effectively reduce networking consumption while withstanding malicious, selfish, and faulty CHs. Moreover, a self-adaptive weighted method is defined for trust aggregation at the CH level. This approach surpasses the limitations of traditional weighting methods for trust factors, in which weights are assigned subjectively. Theoretical analysis and simulation results show that LDTS demands less memory and communication overhead than current typical trust systems for WSNs.
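
As a rough illustration of what a self-adaptive weighting scheme can look like, the sketch below derives weights from each trust factor's deviation from the consensus rather than assigning them subjectively. It is a generic stand-in under assumed semantics, not the actual LDTS equations; the factor values and function name are hypothetical.

```python
# Illustrative self-adaptive weighted trust aggregation (not the actual
# LDTS equations): factors that deviate less from the consensus get
# larger weights, avoiding subjectively fixed weights.

def aggregate_trust(factors, eps=1e-6):
    """factors: trust values in [0, 1], e.g. communication, data, and
    energy trust reported for a cluster head."""
    mean = sum(factors) / len(factors)
    # Weight each factor inversely to its deviation from the mean.
    raw = [1.0 / (abs(f - mean) + eps) for f in factors]
    total = sum(raw)
    weights = [r / total for r in raw]
    return sum(w * f for w, f in zip(weights, factors))

print(aggregate_trust([0.9, 0.85, 0.3]))  # the outlier 0.3 is down-weighted
```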

188 citations


Journal ArticleDOI
TL;DR: The main goal of this paper is to present a survey of the latest trends in the communication domain of industrial distributed systems and to emphasize important questions such as dependability and standardization.
Abstract: Nowadays, computer systems are present in almost all types of human activity, and they support any kind of industry as well. Most of these systems are distributed, where the communication between nodes is based on computer networks of some kind. Connectivity between system components is the key issue when designing distributed systems, especially systems of industrial informatics. The industrial area requires a wide range of computer communication means, particularly time-constrained and safety-enhancing ones. From fieldbus and industrial Ethernet technologies through wireless and internet-working solutions to standardization issues, there are many aspects of computer network use and many interesting research domains. Lots of them are quite sophisticated or even unique. The main goal of this paper is to present a survey of the latest trends in the communication domain of industrial distributed systems and to emphasize important questions such as dependability and standardization. Finally, a general assessment and estimation of future development is provided. The presentation is based on an abstract description of dataflow within a system.

163 citations


Proceedings ArticleDOI
30 Sep 2013
TL;DR: An adaptive anomaly identification mechanism that explores the most relevant principal components of different failure types in cloud computing infrastructures and integrates the cloud performance metric analysis with filtering techniques to achieve automated, efficient, and accurate anomaly identification.
Abstract: Cloud computing has become increasingly popular by obviating the need for users to own and maintain complex computing infrastructures. However, due to their inherent complexity and large scale, production cloud computing systems are prone to various runtime problems caused by hardware and software faults and environmental factors. Autonomic anomaly detection is a crucial technique for understanding emergent, cloud-wide phenomena and self-managing cloud resources for system-level dependability assurance. To detect anomalous cloud behaviors, we need to monitor the cloud execution and collect runtime cloud performance data. These data consist of values of performance metrics for different types of failures, which display different correlations with the performance metrics. In this paper, we present an adaptive anomaly identification mechanism that explores the most relevant principal components of different failure types in cloud computing infrastructures. It integrates the cloud performance metric analysis with filtering techniques to achieve automated, efficient, and accurate anomaly identification. The proposed mechanism adapts itself by recursively learning from the newly verified detection results to refine future detections. We have implemented a prototype of the anomaly identification system and conducted experiments in an on-campus cloud computing environment and by using the Google data center traces. Our experimental results show that our mechanism can achieve more efficient and accurate anomaly detection than other existing schemes.
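
The underlying recipe, projecting runtime metrics onto principal components and flagging observations that reconstruct poorly, can be sketched in a few lines of numpy. This is a generic PCA baseline, not the paper's adaptive, per-failure-type mechanism; the matrix sizes and random data below are placeholders.

```python
# Generic PCA-based anomaly scoring over cloud performance metrics
# (illustrative only; the paper's mechanism additionally selects the
# most relevant components per failure type and adapts over time).
import numpy as np

def fit_pca(X, k):
    """X: samples x metrics matrix of healthy runtime data."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]             # mean and top-k principal directions

def anomaly_score(x, mu, components):
    """Reconstruction error of one observation in the PCA subspace."""
    z = (x - mu) @ components.T   # project onto retained components
    x_hat = mu + z @ components   # reconstruct from the subspace
    return np.linalg.norm(x - x_hat)

X = np.random.rand(500, 12)       # stand-in for collected metric history
mu, comps = fit_pca(X, k=3)
print(anomaly_score(np.random.rand(12), mu, comps))
```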

105 citations


Journal ArticleDOI
TL;DR: A framework to evaluate network dependability and performability in the face of challenges is presented, and it is shown that the impact of network challenges depends on the duration, the number of network elements in a challenged area, and the importance of the nodes in that area.
Abstract: Communication networks play a vital role in our daily lives and they have become a critical infrastructure. However, networks in general, and the Internet in particular, face a number of challenges to normal operation, including attacks and large-scale disasters, as well as challenges arising from mobility and the characteristics of wireless communication channels. Understanding network challenges and their impact can help us to optimise existing networks and improve the design of future networks; therefore it is imperative to have a framework and methodology to study them. In this paper, we present a framework to evaluate network dependability and performability in the face of challenges. We use a simulation-based approach to analyse the effects of perturbations to normal operation of networks. We analyse Sprint logical and physical topologies, synthetically generated topologies, and present a wireless example to demonstrate a wide spectrum of challenges. This framework can simulate challenges on logical or physical topologies with realistic node coordinates using the ns-3 discrete event simulator. The framework models failures, which can be static or dynamic, and which can evolve temporally and spatially. We show that the impact of network challenges depends on the duration, the number of network elements in a challenged area, and the importance of the nodes in the challenged area. We also show the differences between modelling the logical router-level and physical topologies. Finally, we discuss mitigation strategies to alleviate the impact of challenges.
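
The paper's framework runs on ns-3, but the essence of an area-based challenge is easy to picture: remove every node inside a geographic challenge area and observe how connectivity degrades. The sketch below uses networkx with made-up coordinates and radius, purely to illustrate the idea.

```python
# Simplified area-based challenge on a geographic topology (illustrative;
# the paper's framework uses ns-3 with realistic Sprint topologies).
import math
import networkx as nx

def apply_challenge(g, center, radius):
    """Remove every node whose coordinates fall inside the challenge area."""
    hit = [n for n, d in g.nodes(data=True)
           if math.dist((d["x"], d["y"]), center) <= radius]
    survived = g.copy()
    survived.remove_nodes_from(hit)
    return survived

g = nx.Graph()
coords = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (3, 3), 4: (4, 3)}
for n, (x, y) in coords.items():
    g.add_node(n, x=x, y=y)
g.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4)])

after = apply_challenge(g, center=(1, 0.5), radius=1.0)
print(max(map(len, nx.connected_components(after))))  # largest surviving component
```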

101 citations


Proceedings ArticleDOI
13 May 2013
TL;DR: A new cloud model, the SLAaaS (SLA aware Service) model, enables a systematic integration of QoS levels and SLA into the cloud, and introduces CSLA, a novel language to describe QoS-oriented SLA associated with cloud services.
Abstract: Cloud Computing provides a convenient means of remote on-demand and pay-per-use access to computing resources. However, its ad hoc management of quality-of-service and SLA poses significant challenges to the performance, dependability and costs of online cloud services. The paper precisely addresses this issue and makes a threefold contribution. First, it introduces a new cloud model, the SLAaaS (SLA aware Service) model. SLAaaS enables a systematic integration of QoS levels and SLA into the cloud. It is orthogonal to other cloud models such as SaaS or PaaS, and may apply to any of them. Second, the paper introduces CSLA, a novel language to describe QoS-oriented SLA associated with cloud services. Third, the paper presents a control theoretic approach to provide performance, dependability and cost guarantees for online cloud services, with time-varying workloads. The proposed approach is validated through case studies and extensive experiments with online services hosted in clouds such as Amazon EC2. The case studies illustrate SLA guarantees for various services such as a MapReduce service, a cluster-based multi-tier e-commerce service, and a low-level locking service.
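
The control-theoretic part can be pictured as a feedback loop around a measured service metric. Below is a toy proportional-integral sketch in which measured latency above the SLO drives a scale-out decision; the gains, units, and function name are invented for illustration and are unrelated to the paper's actual controllers or the CSLA language.

```python
# Toy PI controller tracking a latency SLO by adding capacity
# (illustrative of the control-theoretic idea only; all values invented).
def pi_step(slo, measured, state, kp=0.5, ki=0.1):
    error = measured - slo                        # positive when violating the SLO
    state["integral"] += error                    # accumulate persistent violations
    return kp * error + ki * state["integral"]    # capacity units to add

state = {"integral": 0.0}
extra = pi_step(slo=200.0, measured=260.0, state=state)  # latency in ms
print(f"scale out by ~{extra:.0f} capacity units")
```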

87 citations


Book
04 Mar 2013
TL;DR: Extensions of fault trees, including noncoherent fault trees, fault trees with delay, and multiperformance fault trees, are discussed along with classic and recent fault-tree algorithms.
Abstract: Construction, logical analysis, probability evaluation, and dependability are covered in this introduction to fault-tree analysis. Extensions of fault trees, including noncoherent fault trees, fault trees with delay, and multiperformance fault trees, are discussed along with classic and recent fault-tree algorithms.

81 citations


Proceedings ArticleDOI
18 Nov 2013
TL;DR: This work applies Message Authentication Codes (MACs) to protect against masquerade and replay attacks on CAN networks, and proposes an optimal Mixed Integer Linear Programming (MILP) formulation for solving the mapping problem from a functional model to the CAN-based platform while meeting both the security and the safety requirements.
Abstract: Cyber-security is a rising issue for automotive electronic systems, and it is critical to system safety and dependability. Current in-vehicle architectures, such as those based on the Controller Area Network (CAN), do not provide direct support for secure communications. When retrofitting these architectures with security mechanisms, a major challenge is to ensure that system safety will not be hindered, given the limited computation and communication resources. We apply Message Authentication Codes (MACs) to protect against masquerade and replay attacks on CAN networks, and propose an optimal Mixed Integer Linear Programming (MILP) formulation for solving the mapping problem from a functional model to the CAN-based platform while meeting both the security and the safety requirements. We also develop an efficient heuristic for the mapping problem under security and safety constraints. To the best of our knowledge, this is the first work to address security and safety in an integrated formulation in the design automation of automotive electronic systems. Experimental results of an industrial case study show the effectiveness of our approach.
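
The security primitive itself is standard: a truncated MAC over the message plus a monotonic counter for replay protection. A minimal sketch with Python's hmac module follows; the key, counter width, and 4-byte truncation are assumptions made to fit CAN's small payloads, and key management and counter resynchronisation are deliberately out of scope.

```python
# Truncated MAC plus monotonic counter for CAN messages (illustrative;
# the key, counter width, and 4-byte truncation are assumptions, and key
# management / counter resynchronisation are out of scope).
import hmac, hashlib, struct

KEY = b"\x01" * 16            # hypothetical pre-shared key
MAC_LEN = 4                   # truncated tag, sized to fit CAN payload limits

def mac(can_id, payload, counter):
    msg = struct.pack(">IQ", can_id, counter) + payload
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:MAC_LEN]

def verify(can_id, payload, counter, tag, last_seen):
    if counter <= last_seen:  # replay protection: counters must increase
        return False
    return hmac.compare_digest(tag, mac(can_id, payload, counter))

tag = mac(0x123, b"\x10\x20", counter=7)
print(verify(0x123, b"\x10\x20", counter=7, tag=tag, last_seen=6))  # True
print(verify(0x123, b"\x10\x20", counter=7, tag=tag, last_seen=7))  # False (replay)
```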

80 citations


Journal ArticleDOI
TL;DR: The aim of this publication is to deal with the graphical aspects of Petri nets and it proposes first some very simple tricks and guidelines to structure and improve the drawing of standard PNs.

69 citations


Journal ArticleDOI
TL;DR: This survey summarizes, organizes, and integrates a decade of research on power-aware enterprise storage systems, and intends to stimulate integration of different power-reduction techniques in new energy-efficient file and storage systems.
Abstract: As data-intensive, network-based applications proliferate, the power consumed by the data-center storage subsystem surges. This survey summarizes, organizes, and integrates a decade of research on power-aware enterprise storage systems. All of the existing power-reduction techniques are classified according to the disk-power factor and storage-stack layer addressed. A majority of power-reduction techniques is based on dynamic power management. We also consider alternative methods that reduce disk access time, conserve space, or exploit energy-efficient storage hardware. For every energy-conservation technique, the fundamental trade-offs between power, capacity, performance, and dependability are uncovered. With this survey, we intend to stimulate integration of different power-reduction techniques in new energy-efficient file and storage systems.
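
The canonical baseline behind dynamic power management is the fixed-timeout spin-down policy: after an idle period longer than a threshold, the disk transitions to a low-power state. The toy model below, with invented power figures, shows how the threshold trades idle power against time spent in standby; the adaptive policies the survey covers refine exactly this decision.

```python
# Fixed-timeout disk spin-down, the canonical dynamic power management
# baseline (power figures in watts are invented for illustration).
def idle_energy(idle_gaps, timeout, p_idle=5.0, p_standby=1.0):
    """Energy (joules) consumed over a list of idle gaps (seconds)."""
    energy = 0.0
    for gap in idle_gaps:
        if gap <= timeout:
            energy += gap * p_idle              # disk never spun down
        else:                                   # idle until timeout, then standby
            energy += timeout * p_idle + (gap - timeout) * p_standby
    return energy

# A long timeout avoids spin-ups but wastes idle power; a short one risks
# latency and extra spin-up energy (not modelled here).
print(idle_energy([2, 30, 120], timeout=10.0))  # -> 240.0 J
```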

Journal ArticleDOI
TL;DR: This paper aims to provide a better understanding of the techniques used for fault tolerance in cloud environments, along with some existing models, and further compares them on various parameters.
Abstract: Cloud computing is the result of the evolution of on-demand service in computing paradigms of large-scale distributed computing. It is an adoptable technology as it provides integration of software and resources which are dynamically scalable. These systems are more or less prone to failure. Fault tolerance assesses the ability of a system to respond gracefully to an unexpected hardware or software failure. In order to achieve robustness and dependability in cloud computing, failures should be assessed and handled effectively. This paper aims to provide a better understanding of the techniques used for fault tolerance in cloud environments, along with some existing models, and further compares them on various parameters.

Book
26 Sep 2013
TL;DR: This SpringerBrief presents a survey of data center network designs and topologies and compares several properties in order to highlight their advantages and disadvantages.
Abstract: This SpringerBrief presents a survey of data center network designs and topologies and compares several properties in order to highlight their advantages and disadvantages. The brief also explores several routing protocols designed for these topologies and compares the basic algorithms to establish connections, the techniques used to gain better performance, and the mechanisms for fault-tolerance. Readers will be equipped to understand how current research on data center networks enables the design of future architectures that can improve performance and dependability of data centers. This concise brief is designed for researchers and practitioners working on data center networks, comparative topologies, fault tolerance routing, and data center management systems. The context provided and information on future directions will also prove valuable for students interested in these topics.

Journal ArticleDOI
TL;DR: This paper proposes a dependability evaluation tool for IoT applications when hardware faults and permanent link faults are considered.

Journal ArticleDOI
TL;DR: This paper presents an integrated environment, namely, ASTRO, which contemplates: (i) Reliability Block Diagrams, Stochastic Petri Nets, and continuous-time Markov chains for dependability evaluation; and (ii) a method based on life-cycle assessment (LCA) for quantification of sustainability impact.

Proceedings Article
24 Sep 2013
TL;DR: In this article, the authors present a comprehensive optimisation approach that incorporates a flexible number of objectives together with the corresponding external analyses for evaluating them and uses only a single system model as information repository for all objectives and analyses.
Abstract: With today's highly complex embedded systems, it is becoming increasingly difficult to find design solutions that meet all functional and non-functional requirements, such as performance, dependability and cost. In addition, there is often not a single optimal solution; instead, a conflict of goals between requirements leads to a set of so-called Pareto optima. Such multi-objective optimisation has received increasing attention in research and practice over the past few years. This paper presents current research in progress on developing a comprehensive optimisation approach that incorporates a flexible number of objectives together with the corresponding external analyses for evaluating them and uses only a single system model as information repository for all objectives and analyses. This central model is defined using the modelling language EAST-ADL, an architecture description language for the automotive domain.
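
The Pareto-optimality notion the abstract relies on is compact enough to state in code. The sketch below is a generic dominance filter over minimised objectives (for example cost and latency); nothing in it is specific to EAST-ADL or the paper's tooling.

```python
# Generic Pareto filtering for minimised objectives (illustrative;
# unrelated to EAST-ADL specifics).
def dominates(a, b):
    """a dominates b if it is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (cost, latency) candidates: (1, 9) and (3, 3) survive, the rest are dominated
print(pareto_front([(1, 9), (3, 3), (4, 4), (4, 9)]))
```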

Book ChapterDOI
01 Jan 2013
TL;DR: The engineering challenge for Cyber-Physical Systems is the combination of characteristic properties of embedded systems, such as real time, functional safety, dependability, and closedness, with characteristic properties of the internet, such as openness, partial availability, restricted quality of service, and reduced dependability.
Abstract: Cyber-Physical Systems are the next step towards globally integrated software systems. They are the result of the combination of embedded systems with cyberspace. Cyber-Physical Systems support real-world awareness in the Internet and access to global data and services by embedded systems. The engineering challenge for Cyber-Physical Systems is the combination of characteristic properties of embedded systems, such as real time, functional safety, dependability, and closedness, with characteristic properties of the internet, such as openness, partial availability, restricted quality of service, and reduced dependability.

Journal ArticleDOI
TL;DR: The modeling concept is presented which supports application development and which is supplemented by an implementation approach for standard automation devices, e.g., programmable logic controllers.
Abstract: This paper presents the elaboration of a concept to develop and implement real-time capable industrial automation software that increases the dependability of production automation systems by means of soft sensors. An application example with continuous behavior, as is a typical character trait of process automation, is used to illustrate the initial requirements. Accordingly, the modeling concept is presented, which supports application development and which is supplemented by an implementation approach for standard automation devices, e.g., programmable logic controllers. The paper further comprises an evaluation which adapts the concept for two use cases with discrete behavior (a typical character trait of manufacturing automation) and validates the initially imposed requirements.

Proceedings Article
26 Jun 2013
TL;DR: This work shows that the classical durability enforcing mechanisms - logging, checkpointing, state transfer - can have a high impact on the performance of SMR-based services even if SSDs are used instead of disks, and proposes three techniques that can be used in a transparent manner without modifying the SMR programming model or requiring extra resources.
Abstract: State Machine Replication (SMR) is a fundamental technique for ensuring the dependability of critical services in modern internet-scale infrastructures. SMR alone does not protect from full crashes, and thus in practice it is employed together with secondary storage to ensure the durability of the data managed by these services. In this work we show that the classical durability enforcing mechanisms - logging, checkpointing, state transfer - can have a high impact on the performance of SMR-based services even if SSDs are used instead of disks. To alleviate this impact, we propose three techniques that can be used in a transparent manner, i.e., without modifying the SMR programming model or requiring extra resources: parallel logging, sequential checkpointing, and collaborative state transfer. We show the benefits of these techniques experimentally by implementing them in an open-source replication library, and evaluating them in the context of a consistent key-value store and a coordination service.
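
Of the three techniques, parallel logging is the simplest to sketch: request execution proceeds while batches of operations are written and fsynced by a background thread, and replies are released only once their batch is durable. The toy code below illustrates that overlap under assumed structure; it is not the replication library's actual implementation.

```python
# Toy parallel logging: execution and stable logging overlap; replies are
# released only after the batch holding them is durable (illustrative of
# the idea, not the open-source replication library's code).
import os, queue, threading

log_q = queue.Queue()

def log_writer(logfile):
    while True:
        batch, done = log_q.get()
        logfile.write(b"\n".join(batch) + b"\n")  # one sequential write per batch
        logfile.flush()
        os.fsync(logfile.fileno())                # force batch to stable storage
        done.set()                                # batch durable: release replies

logfile = open("smr.log", "wb")
threading.Thread(target=log_writer, args=(logfile,), daemon=True).start()

durable = threading.Event()
log_q.put(([b"op1", b"op2"], durable))   # the service keeps executing meanwhile
durable.wait()                           # block replies until the batch is durable
```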

BookDOI
23 Jul 2013
TL;DR: This book summarizes the results of the DEPLOY research project on engineering methods for dependable systems through the industrial deployment of formal methods in software development; the project introduced a formal method, Event-B, into several industrial organisations and built on the lessons learned to provide an ecosystem of better tools, documentation and support to help others to select and introduce rigorous systems engineering methods.
Abstract: A formal method is not the main engine of a development process; its contribution is to improve system dependability by motivating formalisation where useful. This book summarizes the results of the DEPLOY research project on engineering methods for dependable systems through the industrial deployment of formal methods in software development. The applications considered were in automotive, aerospace, railway, and enterprise information systems, and microprocessor design. The project introduced a formal method, Event-B, into several industrial organisations and built on the lessons learned to provide an ecosystem of better tools, documentation and support to help others to select and introduce rigorous systems engineering methods. The contributing authors report on these projects and the lessons learned. For the academic and research partners and the tool vendors, the project identified improvements required in the methods and supporting tools, while the industrial partners learned about the value of formal methods in general. A particular feature of the book is the frank assessment of the managerial and organisational challenges, the weaknesses in some current methods and supporting tools, and the ways in which they can be successfully overcome. The book will be of value to academic researchers, systems and software engineers developing critical systems, industrial managers, policymakers, and regulators.

Book
22 Oct 2013
TL;DR: This book presents cutting-edge model-driven techniques for modeling and analysis of software dependability, based on the use of UML as software specification language, and describes two prominent model-to-model transformation techniques for deriving dependability analysis models from UML specifications.
Abstract: Over the last two decades, a major challenge for researchers working on modeling and evaluation of computer-based systems has been the assessment of system Non Functional Properties (NFP) such as performance, scalability, dependability and security. In this book, the authors present cutting-edge model-driven techniques for modeling and analysis of software dependability. Most of them are based on the use of UML as software specification language. From the software system specification point of view, such techniques exploit the standard extension mechanisms of UML (i.e., UML profiling). UML profiles enable software engineers to add non-functional properties to the software model, in addition to the functional ones. The authors detail the state of the art on UML profile proposals for dependability specification and rigorously describe the trade-off they accomplish. The focus is mainly on RAMS (reliability, availability, maintainability and safety) properties. Among the existing profiles, they emphasize the DAM (Dependability Analysis and Modeling) profile, which attempts to unify, under a common umbrella, the previous UML profiles from literature, providing capabilities for dependability specification and analysis. In addition, they describe two prominent model-to-model transformation techniques, which support the generation of the analysis model and allow for further assessment of different RAMS properties. Case studies from different domains are also presented, in order to provide practitioners with examples of how to apply the aforementioned techniques. Researchers and students will learn basic dependability concepts and how to model them using UML and its extensions. They will also gain insights into dependability analysis techniques through the use of appropriate modeling formalisms as well as of model-to-model transformation techniques for deriving dependability analysis models from UML specifications. Moreover, software practitioners will find a unified framework for the specification of dependability requirements and properties of UML, and will benefit from the detailed case studies.

Proceedings ArticleDOI
04 Sep 2013
TL;DR: The MultiPARTES FP7 project aims at supporting mixed-criticality integration for embedded systems based on virtualization techniques for heterogeneous multicore processors by incorporating mechanisms that establish multiple partitions with strict temporal and spatial separation between the individual partitions.
Abstract: Modern embedded applications typically integrate a multitude of functionalities with potentially different criticality levels into a single system. Without appropriate preconditions, the integration of mixed-criticality subsystems can lead to a significant and potentially unacceptable increase of engineering and certification costs. A promising solution is to incorporate mechanisms that establish multiple partitions with strict temporal and spatial separation between the individual partitions. In this approach, subsystems with different levels of criticality can be placed in different partitions and can be verified and validated in isolation. The MultiPARTES FP7 project aims at supporting mixed-criticality integration for embedded systems based on virtualization techniques for heterogeneous multicore processors. A major outcome of the project is the MultiPARTES XtratuM, an open-source hypervisor designed as a generic virtualization layer for heterogeneous multicores. MultiPARTES evaluates the developed technology through selected use cases from the offshore wind power, space, visual surveillance, and automotive domains. The impact of MultiPARTES on the targeted domains will also be discussed. In a number of ongoing research initiatives (e.g., RECOMP, ARAMIS, MultiPARTES, CERTAINTY) mixed-criticality integration is considered in multicore processors. Key challenges are the combination of software virtualization and hardware segregation and the extension of partitioning mechanisms to jointly address significant non-functional requirements (e.g., time, energy and power budgets, adaptivity, reliability, safety, security, volume, weight, etc.) along with development and certification methodology.

Journal Article
TL;DR: DFTCalc is presented, a powerful tool for FTA, providing efficient fault tree modelling via compact representations; effective analysis, allowing a wide range of dependability properties to be analysed; and a flexible and extensible framework, where gates can easily be changed or added.
Abstract: Effective risk management is a key to ensure that our nuclear power plants, medical equipment, and power grids are dependable; and is often required by law. Fault Tree Analysis (FTA) is a widely used methodology here, computing important dependability measures like system reliability. This paper presents DFTCalc, a powerful tool for FTA, providing (1) efficient fault tree modelling via compact representations; (2) effective analysis, allowing a wide range of dependability properties to be analysed; (3) efficient analysis, via state-of-the-art stochastic techniques; and (4) a flexible and extensible framework, where gates can easily be changed or added. Technically, DFTCalc is realised via stochastic model checking, an innovative technique offering a wide plethora of powerful analysis techniques, including aggressive compression techniques to keep the underlying state space small.

01 Jan 2013
TL;DR: This paper is a tutorial on this relatively new protection topic and offers answers to the outlined challenges.
Abstract: Line current differential (87L) protection schemes face extra challenges compared with other forms of differential protection, in addition to the traditional requirements of sensitivity, speed, and immunity to current transformer saturation. Some of these challenges include data communication, alignment, and security; line charging current; and limited communications bandwidth. To address these challenges, microprocessor-based 87L relays apply elaborate operating characteristics, which are often different from the traditional percentage differential characteristic used for bus or transformer protection. These sophisticated elements may include adaptive restraining terms, an Alpha Plane, external fault detection logic for extra security, and so on. While these operating characteristics provide for better performance, they create the following challenges for users:

• Understanding how the 87L elements make the trip decision.
• Understanding the impact of 87L settings on sensitivity and security, as well as grasping the relationship between the traditional percentage differential characteristic and the various 87L operating characteristics.
• Having the ability to transfer settings between different 87L operating characteristics while keeping a similar balance between security and dependability.
• Testing the 87L operating characteristics.

These issues become particularly significant in applications involving more than two currents in the line protection zone (multiterminal lines) and lines terminated on dual-breaker buses. This paper is a tutorial on this relatively new protection topic and offers answers to the outlined challenges.
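
For the Alpha Plane mentioned above, the relay evaluates the complex ratio of remote to local current: through-load and external-fault conditions cluster near 1∠180°, while internal faults pull the ratio away from that point. The sketch below encodes a restraint region of that shape as commonly described in the 87L literature; the radius and angular-extent values are placeholders, not vendor settings.

```python
# Alpha Plane restraint check (a sketch of the concept; the radius and
# angular extent below are placeholders, not vendor settings).
import cmath

def restrains(i_local, i_remote, radius=6.0, angle_deg=195.0):
    """True if the remote/local current ratio lies in an annular-sector
    restraint region centred on 1∠180° (the through-current point)."""
    k = i_remote / i_local
    in_radius = (1.0 / radius) <= abs(k) <= radius
    delta = abs(cmath.pi - abs(cmath.phase(k)))   # angular distance from 180°
    return in_radius and delta <= cmath.pi * (angle_deg / 2.0) / 180.0

print(restrains(complex(100, 0), complex(-98, 5)))  # through current: restrain (True)
print(restrains(complex(100, 0), complex(80, 10)))  # internal fault: operate (False)
```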

Journal ArticleDOI
TL;DR: This paper presents a multi-objective optimisation approach based on EAST-ADL, an ADL in the automotive domain, with the goal of combining the advantages of ADLs and architectural optimisation; the approach is designed to be extensible.

Book ChapterDOI
24 Sep 2013
TL;DR: In this paper, the authors present DFTCalc, a powerful tool for FTA, providing efficient fault tree modelling via compact representations; effective analysis, allowing a wide range of dependability properties to be analysed, via state-of-the-art stochastic techniques; and a flexible and extensible framework, where gates can easily be changed or added.
Abstract: Effective risk management is a key to ensure that our nuclear power plants, medical equipment, and power grids are dependable; and it is often required by law. Fault Tree Analysis (FTA) is a widely used methodology here, computing important dependability measures like system reliability. This paper presents DFTCalc, a powerful tool for FTA, providing (1) efficient fault tree modelling via compact representations; (2) effective analysis, allowing a wide range of dependability properties to be analysed; (3) efficient analysis, via state-of-the-art stochastic techniques; and (4) a flexible and extensible framework, where gates can easily be changed or added. Technically, DFTCalc is realised via stochastic model checking, an innovative technique offering a wide plethora of powerful analysis techniques, including aggressive compression techniques to keep the underlying state space small.
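
As a baseline for what such tools compute, the bottom-up probability evaluation of a static fault tree with independent basic events fits in a few lines; dynamic gates and the stochastic model checking DFTCalc actually performs are well beyond this sketch, and the event names and probabilities below are invented.

```python
# Static fault tree evaluation with independent basic events
# (illustrative baseline; DFTCalc handles dynamic gates via stochastic
# model checking, which this sketch does not attempt).
from math import prod

def failure_prob(node, basic):
    kind = node[0]
    if kind == "basic":
        return basic[node[1]]
    probs = [failure_prob(child, basic) for child in node[1]]
    if kind == "and":                  # all children must fail
        return prod(probs)
    if kind == "or":                   # at least one child fails
        return 1 - prod(1 - p for p in probs)
    raise ValueError(kind)

tree = ("or", [("basic", "pump"),
               ("and", [("basic", "valve_a"), ("basic", "valve_b")])])
print(failure_prob(tree, {"pump": 0.01, "valve_a": 0.05, "valve_b": 0.05}))
```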

Posted Content
TL;DR: In this article, the authors propose Bayesian Networks (BN) as a suitable tool for dependability analysis, by challenging the formalism with basic issues arising in dependability tasks and discuss how both modeling and analysis issues can be naturally dealt with by BN.
Abstract: Bayesian Networks (BN) provide robust probabilistic methods of reasoning under uncertainty, but although their formal grounds are strictly based on the notion of conditional dependence, not much attention has been paid so far to their use in dependability analysis. The aim of this paper is to propose BN as a suitable tool for dependability analysis, by challenging the formalism with basic issues arising in dependability tasks. We will discuss how both modeling and analysis issues can be naturally dealt with by BN. Moreover, we will show how some limitations intrinsic to combinatorial dependability methods such as Fault Trees can be overcome using BN. This will be pursued through the study of a real-world example concerning the reliability analysis of a redundant digital Programmable Logic Controller (PLC) with 2:3 majority voting.
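
The closing example is a classic: with three identical, independent channels of reliability R_c and a perfect voter, 2-out-of-3 majority voting yields the combinatorial result below. Part of the paper's point is that a BN can relax exactly the independence assumptions such formulas bake in.

```latex
% 2-out-of-3 majority voting, independent channels of reliability R_c,
% voter assumed perfect: at least two channels must work.
R_{2oo3} = \binom{3}{2} R_c^{2}(1 - R_c) + R_c^{3} = 3R_c^{2} - 2R_c^{3}
```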

Journal ArticleDOI
TL;DR: The current state of risk-related activities in networks is reviewed, deficiencies and challenges are identified, and techniques, procedures, and metrics towards higher risk-awareness are suggested.

Journal ArticleDOI
TL;DR: The purpose of this paper is to provide an up-to-date treatment of advanced analytic, state-space based techniques to study dependability models with non-exponential distributions, including phase-type expansion and a general framework which allows us to deal with renewal, semi-Markov, and Markov-regenerative processes.
Abstract: The purpose of this paper is to provide an up-to-date treatment of advanced analytic, state-space based techniques to study dependability models with non-exponential distributions. We first provide an overview of different techniques for the solution of non-Markovian state-space based models, including phase-type expansion and a general framework which allows us to deal with renewal, semi-Markov, and Markov-regenerative processes, trying to characterize them on dependability contexts. In the last part of the paper, we illustrate these techniques by means of some examples dealing with common non-exponential reliability behaviors. Our aim is to provide a reference for practising engineers, researchers, and students in state-space dependability modeling and evaluation. Copyright © 2012 John Wiley & Sons, Ltd.
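
As background, a phase-type expansion replaces a non-exponential delay with the absorption time of a continuous-time Markov chain. With row vector \alpha of initial probabilities over the transient states and subgenerator matrix S, the distribution and density are:

```latex
% Continuous phase-type distribution: time to absorption of a CTMC with
% initial (row) vector \alpha over transient states and subgenerator S.
F(t) = 1 - \alpha\, e^{S t}\, \mathbf{1}, \qquad
f(t) = \alpha\, e^{S t}\, s^{0}, \qquad s^{0} = -S\, \mathbf{1}
```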

Proceedings ArticleDOI
24 Jun 2013
TL;DR: A tool-supported model-based approach suitable for the development of interactive systems featuring multi-touch interactions techniques and demonstrates the possibility to describe touch interaction techniques in a complete and unambiguous way and that the formal description technique is amenable to verification.
Abstract: The widespread use of multi-touch devices and the large amount of research that has been carried out around them has made this technology mature in a very short amount of time. This makes it possible to consider multi-touch interactions in the context of safety critical systems. Indeed, beyond this technical aspect, multi-touch interactions present significant benefits such as input-output integration, reduction of physical space, sophisticated multi-modal interaction? However, interactive cockpits belonging to the class of safety critical systems, development processes and methods used in the mass market industry are not suitable as they usually focus on usability and user experience factors upstaging dependability. This paper presents a tool-supported model-based approach suitable for the development of interactive systems featuring multi-touch interactions techniques. We demonstrate the possibility to describe touch interaction techniques in a complete and unambiguous way and that the formal description technique is amenable to verification. The capabilities of the notation is demonstrated over two different interaction techniques (namely Pitch and Tap and Hold) together with a software architecture explaining how these interaction techniques can be embedded in an interactive application.