
Showing papers on "Dependability published in 2017"


Journal ArticleDOI
TL;DR: The biggest challenge may be in creating an end-to-end design and deployment process that integrates the safety concerns of a myriad of technical specialties into a unified approach.
Abstract: Ensuring the safety of fully autonomous vehicles requires a multi-disciplinary approach across all the levels of functional hierarchy, from hardware fault tolerance, to resilient machine learning, to cooperating with humans driving conventional vehicles, to validating systems for operation in highly unstructured environments, to appropriate regulatory approaches. Significant open technical challenges include validating inductive learning in the face of novel environmental inputs and achieving the very high levels of dependability required for full-scale fleet deployment. However, the biggest challenge may be in creating an end-to-end design and deployment process that integrates the safety concerns of a myriad of technical specialties into a unified approach.

418 citations


Journal ArticleDOI
Sohag Kabir1
TL;DR: Reviews the standard fault tree and its limitations, surveys prominent MBDA techniques that use fault trees as a means for system dependability analysis, and provides insight into their working mechanisms, applicability, strengths and challenges.
Abstract: I provide an overview of the Fault Tree Analysis method. I review different extensions of fault trees. A number of model-based dependability analysis approaches are reviewed. I outline the future outlook for model-based dependability analysis. Fault Tree Analysis (FTA) is a well-established and well-understood technique, widely used for dependability evaluation of a wide range of systems. Although many extensions of fault trees have been proposed, they suffer from a variety of shortcomings. In particular, even where software tool support exists, these analyses require a lot of manual effort. Over the past two decades, research has focused on simplifying dependability analysis by looking at how we can synthesise dependability information from system models automatically. This has led to the field of model-based dependability analysis (MBDA). Different tools and techniques have been developed as part of MBDA to automate the generation of dependability analysis artefacts such as fault trees. Firstly, this paper reviews the standard fault tree with its limitations. Secondly, different extensions of standard fault trees are reviewed. Thirdly, this paper reviews a number of prominent MBDA techniques where fault trees are used as a means for system dependability analysis and provides an insight into their working mechanism, applicability, strengths and challenges. Finally, the future outlook for MBDA is outlined, which includes the prospect of developing expert and intelligent systems for dependability analysis of complex open systems under the conditions of uncertainty.
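As a minimal illustration of the standard fault tree the paper reviews, the top-event probability can be computed by propagating basic-event probabilities through AND/OR gates. This is a sketch assuming statistically independent basic events; the gate structure and numbers are invented, not taken from the paper.

```python
# Top-event probability for a small fault tree, assuming statistically
# independent basic events. Gate names and probabilities are invented.

def and_gate(probs):
    """AND gate: the output fails only if every input fails."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """OR gate: the output fails if at least one input fails."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# TOP = OR(AND(pump_a, pump_b), valve): redundant pumps, a single valve.
pump_a, pump_b, valve = 0.01, 0.02, 0.001
top = or_gate([and_gate([pump_a, pump_b]), valve])
print(f"top-event probability: {top:.7f}")  # 0.0011998
```

Note how the redundant pump pair contributes far less to the top event than the single valve, which is exactly the kind of insight FTA is used for.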

258 citations


Journal ArticleDOI
TL;DR: This work designs a dependable distributed WSN framework for SHM (called DependSHM) and examines its ability to cope with sensor faults and constraints, and presents a distributed automated algorithm to detect such types of faults.
Abstract: As an alternative to current wired networks, wireless sensor networks (WSNs) are becoming an increasingly compelling platform for engineering structural health monitoring (SHM) due to their relatively low cost, easy installation, and so forth. However, there is still an unaddressed challenge: application-specific dependability in terms of sensor fault detection and tolerance. Dependability is also affected by a reduction in the quality of monitoring when mitigating WSN constraints (e.g., limited energy, narrow bandwidth). We address these issues by designing a dependable distributed WSN framework for SHM (called DependSHM) and then examining its ability to cope with sensor faults and constraints. We find evidence that faulty sensors can corrupt the results of a health event (e.g., damage) in a structural system without being detected. More specifically, we bring attention to an undiscovered yet interesting fact: the real measured signals introduced by one or more faulty sensors may cause an undamaged location to be identified as damaged (a false positive) or a damaged location to be identified as undamaged (a false negative). This can be caused by faults in sensor bonding, precision degradation, amplification gain, bias, drift, noise, and so forth. In DependSHM, we present a distributed automated algorithm to detect such types of faults, and we offer an online signal reconstruction algorithm to recover from the wrong diagnosis. Through comprehensive simulations and a WSN prototype system implementation, we evaluate the effectiveness of DependSHM.
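The general idea of detecting a faulty sensor from its disagreement with nearby sensors can be sketched as follows. This is a simplified neighbour-consensus check, not DependSHM's actual algorithm; the sensor IDs, readings and tolerance are invented.

```python
# Simplified neighbour-consensus fault check (a sketch, not DependSHM's
# algorithm): flag a sensor when its reading deviates from the median of
# its neighbours' readings by more than a tolerance.

import statistics

def flag_faulty(readings, neighbours, tol=3.0):
    """readings: {sensor_id: value}; neighbours: {sensor_id: [ids]}."""
    faulty = set()
    for sid, value in readings.items():
        peers = [readings[n] for n in neighbours[sid]]
        if peers and abs(value - statistics.median(peers)) > tol:
            faulty.add(sid)
    return faulty

# Four sensors on one structural element; s3 drifts far from its peers.
readings = {"s1": 10.1, "s2": 9.8, "s3": 25.0, "s4": 10.3}
neighbours = {"s1": ["s2", "s3", "s4"], "s2": ["s1", "s3", "s4"],
              "s3": ["s1", "s2", "s4"], "s4": ["s1", "s2", "s3"]}
print(flag_faulty(readings, neighbours))  # {'s3'}
```

Using the median of the peers keeps the check robust when one of the neighbours is itself the faulty sensor.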

192 citations


Journal ArticleDOI
TL;DR: Reviews the main issues, research work and challenges in the field of safety-critical robots, linking dependability and robotics concepts, and finds that many dependability techniques need to be adapted or revised for robotics.

156 citations


Proceedings ArticleDOI
14 Oct 2017
TL;DR: Tock isolates software faults, provides memory protection, and efficiently manages memory for dynamic application workloads written in any language while retaining the dependability requirements of long-running applications.
Abstract: Low-power microcontrollers lack some of the hardware features and memory resources that enable multiprogrammable systems. Accordingly, microcontroller-based operating systems have not provided important features like fault isolation, dynamic memory allocation, and flexible concurrency. However, an emerging class of embedded applications are software platforms, rather than single purpose devices, and need these multiprogramming features. Tock, a new operating system for low-power platforms, takes advantage of limited hardware-protection mechanisms as well as the type-safety features of the Rust programming language to provide a multiprogramming environment for microcontrollers. Tock isolates software faults, provides memory protection, and efficiently manages memory for dynamic application workloads written in any language. It achieves this while retaining the dependability requirements of long-running applications.

108 citations


Journal ArticleDOI
TL;DR: The present review considers the contextual factors that influence ERP score reliability and the downstream effects that reliability has on statistical analyses, and advocates the use of generalizability theory for estimating score dependability as an improvement on classical test theory reliability estimates, which are less well suited to ERP research.

85 citations


Journal ArticleDOI
02 Nov 2017
TL;DR: Shows that, with the right threat measure, near-collision-based estimation can be used to validate the safety of a vehicle while keeping validity high and requiring less data than state-of-the-art statistical methods.
Abstract: Much effort is currently put into making autonomous vehicles as capable as possible in order to replace humans as drivers. Less focus is put on how to ensure that this transition happens in a safe way that we can put trust in. The verification of the extreme dependability requirements connected to safety is expected to be one of the largest challenges to overcome in the commercialization of autonomous vehicles. Using traditional statistical methods to validate complete vehicle safety would require the vehicle to cover extreme distances to show that collisions occur rarely enough. However, recent research has shown the possibility of using near-collisions to estimate the frequency of actual collisions using Extreme Value Theory. To use this method, a measure related to the closeness of a collision is needed. This paper shows that the choice of this threat measure has a significant impact on the inferences drawn from the data. With the right measure, this method can be used to validate the safety of a vehicle while keeping validity high and requiring less data than state-of-the-art statistical methods.
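The peaks-over-threshold idea behind this can be sketched in a few lines: fit the tail of a "threat measure" above a near-collision threshold and extrapolate to the collision level. The data here are synthetic, and an exponential tail (a special case of the generalized Pareto family used in Extreme Value Theory) is chosen purely to keep the example in the standard library; the thresholds are invented.

```python
# Peaks-over-threshold sketch of near-collision extrapolation (synthetic
# data, exponential tail model; not the paper's exact method).

import math
import random

random.seed(1)
severity = [random.expovariate(1.0) for _ in range(100_000)]  # threat scores

u = 3.0                                   # near-collision threshold
collision_level = 12.0                    # severity regarded as a collision
exceed = [s - u for s in severity if s > u]
p_u = len(exceed) / len(severity)         # empirical P(severity > u)
scale = sum(exceed) / len(exceed)         # exponential MLE for the tail
# Extrapolate far beyond the observed data using the fitted tail:
p_collision = p_u * math.exp(-(collision_level - u) / scale)
print(f"estimated P(collision per event) ~ {p_collision:.2e}")
```

The point of the method is visible here: collisions at level 12 are essentially never observed directly, yet the fitted tail of the near-collision data yields a frequency estimate from a feasible amount of driving.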

80 citations


Proceedings ArticleDOI
20 Feb 2017
TL;DR: Presents D-Cube, a low-cost tool that accurately measures key dependability metrics such as end-to-end delay, reliability, and power consumption, and graphically visualizes their evolution in real time.
Abstract: A large number of low-power wireless communication protocols have been proposed in the last decade by both academia and industry in an attempt to deliver information in an increasingly reliable, timely, and energy-efficient manner. However, their level of dependability has rarely been benchmarked under the same settings and environmental conditions. In this paper we present the execution and results of a competition aimed at evaluating the dependability of state-of-the-art low-power wireless protocols under the same settings, and at pushing their performance to the limit. We define a scenario emulating the operation of a wireless sensor network in industrial environments rich with radio interference and compare the end-to-end dependability of systems based on protocol strategies ranging from adaptive and time-slotted frequency-hopping to multi-modal routing and flooding. To increase fairness and realism, we allow the developers of the competing protocols to interact with the benchmarking infrastructure and optimize the protocol parameters for the scenario at hand. We achieve this by designing, implementing, and employing D-Cube, a low-cost tool that allows us to accurately measure key dependability metrics such as end-to-end delay, reliability, and power consumption, as well as to graphically visualize their evolution in real time. This interaction with the benchmarking infrastructure and the competitiveness of the event have incited the developers to push the performance of their protocols to the limit and reach impressive results.

78 citations


Proceedings ArticleDOI
11 Dec 2017
TL;DR: Sieve is a platform to derive actionable insights from monitored metrics in distributed systems; it reduces the dimensionality of metrics by automatically filtering out unimportant ones based on their signal over time, and infers dependencies between distributed components using a predictive-causality model.
Abstract: Major cloud computing operators provide powerful monitoring tools to understand the current (and prior) state of the distributed systems deployed in their infrastructure. While such tools provide a detailed monitoring mechanism at scale, they also pose a significant challenge for the application developers/operators to transform the huge space of monitored metrics into useful insights. These insights are essential to build effective management tools for improving the efficiency, resiliency, and dependability of distributed systems. This paper reports on our experience with building and deploying Sieve---a platform to derive actionable insights from monitored metrics in distributed systems. Sieve builds on two core components: a metrics reduction framework, and a metrics dependency extractor. More specifically, Sieve first reduces the dimensionality of metrics by automatically filtering out unimportant metrics by observing their signal over time. Afterwards, Sieve infers metrics dependencies between distributed components of the system using a predictive-causality model by testing for Granger Causality. We implemented Sieve as a generic platform and deployed it for two microservices-based distributed systems: OpenStack and ShareLaTeX. Our experience shows that (1) Sieve can reduce the number of metrics by at least an order of magnitude (10 -- 100×), while preserving the statistical equivalence to the total number of monitored metrics; (2) Sieve can dramatically improve existing monitoring infrastructures by reducing the associated overheads over the entire system stack (CPU---80%, storage---90%, and network---50%); (3) Lastly, Sieve is effective in supporting a wide range of workflows in distributed systems---we showcase two such workflows: orchestration of autoscaling, and Root Cause Analysis (RCA).
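A toy version of Sieve's two stages can make the pipeline concrete. This is illustrative only, not Sieve's actual implementation: a variance filter stands in for the signal-based reduction, and a lagged correlation is a crude stand-in for the Granger-causality test; the metric names and data are invented.

```python
# Toy metric pipeline (illustrative, not Sieve's implementation):
# (1) drop metrics whose signal barely varies over time,
# (2) score a lagged correlation between survivors as a crude stand-in
#     for the predictive-causality (Granger) test.

import statistics

def reduce_metrics(metrics, min_stdev=0.1):
    """Keep only metrics that actually vary over time."""
    return {name: xs for name, xs in metrics.items()
            if statistics.pstdev(xs) >= min_stdev}

def lagged_corr(x, y, lag=1):
    """Pearson correlation between x[t] and y[t + lag]."""
    x, y = x[:-lag], y[lag:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

metrics = {
    "cpu":      [1, 2, 3, 4, 5, 6, 7, 8],
    "latency":  [0, 1, 2, 3, 4, 5, 6, 7],  # trails cpu by one step
    "constant": [5, 5, 5, 5, 5, 5, 5, 5],  # carries no signal
}
kept = reduce_metrics(metrics)
print(sorted(kept))                                       # ['cpu', 'latency']
print(round(lagged_corr(kept["cpu"], kept["latency"]), 6))  # 1.0
```

A perfect lagged correlation here suggests "cpu predicts latency one step ahead", which is the kind of dependency edge Sieve's Granger test would extract far more rigorously.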

76 citations


Journal ArticleDOI
TL;DR: A novel result is the study of a simulator-based performance monitoring and control selection method that monitors and predicts MPC performance and switches to a highly reliable backup controller in cases when the MPC experiences performance issues.
Abstract: While model predictive control (MPC) is the industrially preferred method for advanced control in the process industries, it has not found much use in consumer products and safety-critical embedded systems applications in industries such as automotive, aerospace, medical, and robotics. The main barriers are implementability and dependability, where important factors are implementation of advanced numerical optimization algorithms on resource-limited embedded computing platforms and the associated complexity of verification. This challenge comes from a requirement of the use of ultrareliable hardware and software architectures in safety-critical applications, low-cost hardware in consumer products, or both. This paper surveys the state-of-the-art in the emerging field of dependable embedded MPC, and discusses some key challenges related to its design, implementation, and verification. A novel result is the study of a simulator-based performance monitoring and control selection method that monitors and predicts MPC performance and switches to a highly reliable backup controller in cases when the MPC experiences performance issues.

67 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the role of delivery dependability and time to market on the relation between the infrastructure framework and supply chain agility and examined the impacts of supply-chain agility on firm performance.
Abstract: Purpose: The main purpose of this study is to examine the role of delivery dependability and time to market in the relation between the infrastructure framework and supply chain agility. Furthermore, the impacts of supply chain agility on firm performance are examined. Design/methodology/approach: Data were collected from 113 respondents (senior executives and managers in purchasing, operations, supply, planning and other supply chain functions) in large manufacturing firms in the MENA region, which includes 12 countries (Jordan, Lebanon, Egypt, Saudi Arabia, United Arab Emirates, Kuwait, Oman, Bahrain, Qatar, Morocco, Tunisia and Algeria). A large-scale survey questionnaire was used for the data collection process. The research framework was tested using a hypothesis-testing deductive approach. The results are based on covariance-based analysis and structural equation modelling using analysis of moment structures software. Findings: The results show that infrastructure framework elements do not contribute significantly to supply chain agility. It is also found that delivery dependability and time to market partially mediate the relationship between infrastructure framework elements and supply chain agility. Additionally, supply chain agility is associated with enhanced firm performance. Originality/value: This paper provides an overview and empirically shows that delivery dependability and time to market are appropriate logistics practices for mediating the impact of the infrastructure framework on supply chain agility. These relationships indicate a contribution to theory that explains how infrastructure framework elements can foster supply chain agility through the synchronising of appropriately matched logistics practices.

Journal ArticleDOI
TL;DR: Argues that engineering cyber-physical systems to ensure their efficiency, security, and dependability is a research area of tremendous importance.
Abstract: Our society increasingly relies on the successful marriage of digital and physical systems to perform advanced automation and control tasks. Engineering these cyber-physical systems to ensure their efficiency, security, and dependability is a research area of tremendous importance.

Journal ArticleDOI
TL;DR: This article explains in detail how the proposed semantic approach for multimodal interaction between humans and industrial robots has been implemented in two real industrial cases in which a robot and a worker collaborate in assembly and deburring operations.
Abstract: This article presents a semantic approach for multimodal interaction between humans and industrial robots to enhance the dependability and naturalness of the collaboration between them in real indu...

Journal ArticleDOI
TL;DR: A systematic approach based on STPA is proposed to consider the operational safety of the fully automated driving architecture regarding different aspects at an early stage, in order to evaluate the complex architecture design of fully automated driving vehicles.

Journal ArticleDOI
TL;DR: Describes assessment of overall reliability, percentages of score variation accounted for by individual sources of measurement error, dependability of cut-scores for decision making, estimation of reliability and dependability for changes made to measurement procedures, and linkages of G-theory with classical test theory and structural equation modeling.
Abstract: Although widely recognized as a comprehensive framework for representing score reliability, generalizability theory (G-theory), despite its potential benefits, has been used sparingly in reporting of results for measures of individual differences. In this article, we highlight many valuable ways that G-theory can be used to quantify, evaluate, and improve psychometric properties of scores. Our illustrations encompass assessment of overall reliability, percentages of score variation accounted for by individual sources of measurement error, dependability of cut-scores for decision making, estimation of reliability and dependability for changes made to measurement procedures, disattenuation of validity coefficients for measurement error, and linkages of G-theory with classical test theory and structural equation modeling. We also identify computer packages for performing G-theory analyses, most of which can be obtained free of charge, and describe how they compare with regard to data input requirements, ease of use, complexity of designs supported, and output produced.
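The core G-theory computation can be shown for the simplest case, a one-facet persons-by-items design: estimate variance components from mean squares, then form the relative G coefficient and the absolute dependability (phi) coefficient. The score matrix below is made up for illustration; real analyses would use one of the dedicated packages the article surveys.

```python
# One-facet (persons x items) G-study sketch with invented data:
# variance components from mean squares, then G and phi coefficients.

scores = [  # rows = persons, columns = items
    [7, 6, 7],
    [5, 5, 4],
    [9, 8, 8],
    [4, 3, 4],
]
n_p, n_i = len(scores), len(scores[0])
grand = sum(map(sum, scores)) / (n_p * n_i)
p_means = [sum(row) / n_i for row in scores]
i_means = [sum(col) / n_p for col in zip(*scores)]

ms_p = n_i * sum((m - grand) ** 2 for m in p_means) / (n_p - 1)
ms_i = n_p * sum((m - grand) ** 2 for m in i_means) / (n_i - 1)
ss_res = sum((scores[p][i] - p_means[p] - i_means[i] + grand) ** 2
             for p in range(n_p) for i in range(n_i))
ms_res = ss_res / ((n_p - 1) * (n_i - 1))

var_p = (ms_p - ms_res) / n_i      # person (universe-score) variance
var_i = (ms_i - ms_res) / n_p      # item variance
g_coef = var_p / (var_p + ms_res / n_i)         # relative decisions
phi = var_p / (var_p + (var_i + ms_res) / n_i)  # absolute decisions
print(f"G = {g_coef:.3f}, phi = {phi:.3f}")
```

Phi is never larger than G here because absolute decisions must also absorb the item-difficulty variance, which is exactly the distinction the article draws between relative and absolute score interpretations.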

Journal ArticleDOI
TL;DR: Proposes a dependability benchmark to support NFV providers in making informed decisions about which virtualization, management, and application-level solutions can achieve the best dependability, and identifies which faults NFV designers should address to improve dependability.
Abstract: Network function virtualization (NFV) envisions the use of cloud computing and virtualization technology to reduce costs and innovate network services. However, this paradigm shift poses the question whether NFV will be able to fulfill the strict performance and dependability objectives required by regulations and customers. Thus, we propose a dependability benchmark to support NFV providers at making informed decisions about which virtualization, management, and application-level solutions can achieve the best dependability. We define in detail the use cases, measures, and faults to be injected. Moreover, we present a benchmarking case study on two alternative, production-grade virtualization solutions, namely VMware ESXi/vSphere (hypervisor-based) and Linux/Docker (container-based), on which we deploy an NFV-oriented IMS system. Despite the promise of higher performance and manageability, our experiments suggest that the container-based configuration can be less dependable than the hypervisor-based one, and point out which faults NFV designers should address to improve dependability.

Proceedings ArticleDOI
01 May 2017
TL;DR: This work presents Automated Stress Testing for Autonomy Architectures (ASTAA), a system that effectively, automatically robustness tests autonomy systems by building on classic principles, with important innovations to support this new domain.
Abstract: As robotic and autonomy systems become progressively more present in industrial and human-interactive applications, it is increasingly critical for them to behave safely in the presence of unexpected inputs. While robustness testing for traditional software systems is long-studied, robustness testing for autonomy systems is relatively uncharted territory. In our role as engineers, testers, and researchers we have observed that autonomy systems are importantly different from traditional systems, requiring novel approaches to effectively test them. We present Automated Stress Testing for Autonomy Architectures (ASTAA), a system that effectively, automatically robustness tests autonomy systems by building on classic principles, with important innovations to support this new domain. Over five years, we have used ASTAA to test 17 real-world autonomy systems, robots, and robotics-oriented libraries, across commercial and academic applications, discovering hundreds of bugs. We outline the ASTAA approach and analyze more than 150 bugs we found in real systems. We discuss what we discovered about testing autonomy systems, specifically focusing on how doing so differs from and is similar to traditional software robustness testing and other high-level lessons.

Journal ArticleDOI
TL;DR: This paper proposes an approach for tolerating faults in multisensor data fusion that is based on the more traditional method of duplication–comparison, and that offers detection and recovery services.

Journal ArticleDOI
TL;DR: This paper presents an online system maintenance method that takes into account the system dynamics and employs an online predictive diagnosis algorithm to distinguish between critical and non-critical assets.

Journal ArticleDOI
30 Jun 2017
TL;DR: Provides a comprehensive and detailed study and survey of state-of-the-art trust evaluation techniques and mechanisms in cloud computing, and defines trust characteristics such as integrity, security, availability, reliability, dependability, safety, dynamicity, confidentiality and scalability.
Abstract: Cloud computing is a new model enabling convenient, on-demand access to a pool of configurable computing resources. On this kind of computing platform, customers often lack sufficient confidence to identify trustworthy cloud service providers. Therefore, one of the most important challenges in this environment is to evaluate trust values so that users can select trustworthy resources; however, to the best of our knowledge, comprehensive and detailed reviews of the most important techniques in this field are very rare. Hence, in this paper, a comprehensive and detailed study and survey of the state-of-the-art techniques and mechanisms in this field is provided. We also discuss trust evaluation mechanisms in cloud computing and categorize them into two main groups: centralized and distributed mechanisms. Furthermore, we define trust characteristics such as integrity, security, availability, reliability, dependability, safety, dynamicity, confidentiality and scalability, and we discuss trust applications including monitoring and tracking. This survey also provides a discussion of the differences between the considered techniques in terms of these characteristics, as well as directions for future research.

Journal ArticleDOI
TL;DR: In this paper, the authors advocate extending the concept of ultra reliable communication from the dependability perspective also in the space domain, and they initiate definitions on cell availability and system availability.
Abstract: As part of the 5G communication paradigm, ultra reliable communication (URC) is envisaged as an important technology pillar for providing anywhere and anytime services to end users. While most existing studies on reliable communication do not investigate this problem from a dependability theory perspective, those dependability-based studies tend to define reliability merely in the time domain. In this letter, we advocate extending the concept of URC from the dependability perspective also into the space domain. We initiate definitions of cell availability and system availability. The availability, and the probability of providing a guaranteed level of availability in a network, are analyzed cell-wise, system-wise, or both. A Poisson point process and Voronoi tessellation are adopted to model the spatial characteristics of cell deployment in both homogeneous and heterogeneous networks.

Journal ArticleDOI
TL;DR: This paper proposes and presents a comprehensive WSN fault taxonomy that can be used not only in general purpose WSNs but also in Industrial WSNs, and is validated by applying it to an extensive set of faults described in the literature.
Abstract: Over the last decade, Wireless Sensor Networks (WSN) went from being a promising technology to the main enabler of countless Internet of Things applications in all types of areas. In industry, WSNs are now used for monitoring and controlling industrial processes, with the benefits of low installation costs, self-organization, self-configuration, and added functionality. Nevertheless, despite the fact that base WSN technologies are quite stable and subject to standardization, they have kept one of their main characteristics: fault-proneness. As a result, in recent years considerable effort has been made in order to provide mechanisms that increase the availability, reliability and maintainability of this type of networks. In this context, a whole range of techniques such as fault detection, fault identification and fault diagnosis used in other research fields are now being applied to WSNs. Unfortunately, this has not led to a consistent, comprehensive WSN fault taxonomy that can be used to characterize and/or classify faults. Neglecting the importance of WSN fault characterization (e.g., when using supervised algorithms for anomaly detection) may lead to bad classifiers and, consequently, bad fault handling procedures and/or tools. In this paper, we start by reviewing base fault management concepts and techniques that can be applied to WSNs. We then proceed to propose and present a comprehensive WSN fault taxonomy that can be used not only in general purpose WSNs but also in Industrial WSNs. Finally, the proposed taxonomy is validated by applying it to an extensive set of faults described in the literature.

Journal ArticleDOI
TL;DR: Specific applications of EA for embedded programs that go beyond the widely known stack-allocation and synchronization optimizations such as extended remote-procedure-call support for software-isolated applications, automated inference of immutable data, or improved upper space and time bounds for worst-case estimations are presented.
Abstract: The use of a managed, type-safe language such as Java in real-time and embedded systems offers productivity and, in particular, safety and dependability benefits at a reasonable cost. It has been shown for commodity systems that Escape Analysis (EA) enables a set of useful optimizations, and benefits from the properties of a type-safe language. In this article, we explore the application of escape analysis in KESO [Stilkerich et al. 2012], a Java ahead-of-time compiler targeting embedded real-time systems. We present specific applications of EA for embedded programs that go beyond the widely known stack-allocation and synchronization optimizations such as extended remote-procedure-call (RPC) support for software-isolated applications, automated inference of immutable data, or improved upper space and time bounds for worst-case estimations.

Journal ArticleDOI
02 Sep 2017-Sensors
TL;DR: This survey looks into the problem of ensuring the desired quality of data for dependable monitoring using WSNs from a dependability-oriented perspective, reviewing the possible impairments to dependability and the prominent existing solutions to solve or mitigate these impairments.
Abstract: Wireless sensor networks are being increasingly used in several application areas, particularly to collect data and monitor physical processes. Non-functional requirements, like reliability, security or availability, are often important and must be accounted for in the application development. For that purpose, there is a large body of knowledge on dependability techniques for distributed systems, which provide a good basis to understand how to satisfy these non-functional requirements of WSN-based monitoring applications. Given the data-centric nature of monitoring applications, it is of particular importance to ensure that data are reliable or, more generically, that they have the necessary quality. In this survey, we look into the problem of ensuring the desired quality of data for dependable monitoring using WSNs. We take a dependability-oriented perspective, reviewing the possible impairments to dependability and the prominent existing solutions to solve or mitigate these impairments. Despite the variety of components that may form a WSN-based monitoring system, we give particular attention to understanding which faults can affect sensors, how they can affect the quality of the information and how this quality can be improved and quantified.

Proceedings ArticleDOI
25 Jul 2017
TL;DR: This work evaluates the general properties of a microservice architecture and its dependability with reference to the legacy system; requirements for an equivalent microservice-based system and the migration process; services and data, containerization, communication via message queues; and achieving similar fault tolerance and high availability with the help of replication inside the resulting architecture.
Abstract: In contrast to applications relying on specialized and expensive highly-available infrastructure, the basic approach of microservice architectures to achieve fault tolerance – and finally high availability – is to modularize the software system into small, self-contained services that are connected via implementation-independent interfaces. Microservices and all dependencies are deployed into self-contained environments called containers that are executed as multiple redundant instances. If a service fails, other instances will often still work and take over. Due to the possibility of failing infrastructure, these services have to be deployed on several physical systems. This horizontal scaling of redundant service instances can also be used for load-balancing. Decoupling the service communication using asynchronous message queues can increase fault tolerance, too. The Deutsche Bahn AG (German railway company) uses a system called EPA for seat reservations for inter-urban rail services. Despite its high availability, the EPA system in its current state has several disadvantages such as high operational cost, need for special hardware, technological dependencies, and expensive and time-consuming updates. With the help of a prototype, we evaluate the general properties of a microservice architecture and its dependability with reference to the legacy system. We focus on requirements for an equivalent microservice-based system and the migration process; services and data, containerization, communication via message queues; and achieving similar fault tolerance and high availability with the help of replication inside the resulting architecture.

Proceedings ArticleDOI
27 Mar 2017
TL;DR: This paper summarizes current approaches for establishing self-awareness on the different system layers and elaborates why self-awareness needs to be addressed as a cross-layer problem, illustrated by practical examples.
Abstract: Self-awareness has been used in many research fields in order to add autonomy to computing systems. In automotive systems, we face several system layers that must be enriched with self-awareness to build truly autonomous vehicles. This includes functional aspects like autonomous driving itself, its integration on the hardware/software platform, and among others dependability, real-time, and security aspects. However, self-awareness mechanisms of all layers must be considered in combination in order to build a coherent vehicle self-awareness that does not cause conflicting decisions or even catastrophic effects. In this paper, we summarize current approaches for establishing self-awareness on those layers and elaborate why self-awareness needs to be addressed as a cross-layer problem, which we illustrate by practical examples.

Journal ArticleDOI
TL;DR: In this paper, the authors presented an attempt to describe the quality of warehouse operation in terms of dependability, and proposed an approach to determine dependability of warehouse facility based on technological and organizational redundancy related to material flow pile-ups.
Abstract: The quality of supply chain operation depends on the quality of its particular elements, including warehouses. The paper presents an attempt to describe the quality of warehouse operation in terms of dependability. The authors discuss issues related to assessing warehouse operation, quality problems, and solutions to increase the quality of work. Technical and organisational redundancy is proposed as the primary factor increasing the dependability of warehouse operation in the supply chain and thereby improving the quality of services. The authors discuss the dependability of supply chains and warehouses and propose an approach to determining the dependability of a warehouse facility based on technological and organisational redundancy related to material flow pile-ups. The approach is founded on the OTIFEF index as a basis for dependability estimation. Construction of that index based on the probabilities of correct realization of different aspects of the logistics service is proposed. An important element of the approach presented in the paper is a proposal of technical and organisational indicators defining different aspects of redundancy with respect to dependability. An example of redundancy assessment as a function of technical and organisational methods of increasing warehouse efficiency is provided.
Streszczenie (translated from Polish): The quality of supply chain operation results from the quality of its elements, including warehouses. The article presents an attempt to frame the quality issues of warehouse operation in dependability terms. It discusses the assessment of warehouse operation, the sources of quality problems, and the solutions applied to increase the quality of work. Technical and organisational redundancy is identified as the primary means of increasing the dependability of task realization by warehouses in the supply chain and thereby improving the quality of the services provided.
The authors discuss the dependability of the supply chain and of warehouses as its basic elements, and propose an approach to determining warehouse dependability based on technological and organisational redundancy established from predicted material flow pile-ups. For this purpose, the OTIFEF index is used as the basis for dependability estimation.
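The abstract constructs the OTIFEF (On Time, In Full, Error Free) index from the probabilities of correctly realizing each aspect of the logistics service. A minimal sketch of that construction follows; the specific probability values and the independence assumption are illustrative, not taken from the paper:

```python
# Hypothetical sketch: OTIFEF as the product of component probabilities,
# assuming the service aspects are independent (illustrative values).
p_on_time = 0.97      # delivery within the agreed time window
p_in_full = 0.98      # complete quantity, no shortages
p_error_free = 0.99   # correct items, documents, and condition

otifef = p_on_time * p_in_full * p_error_free
print(f"OTIFEF = {otifef:.4f}")   # probability of a fully correct delivery
```

A redundancy measure (spare capacity, backup equipment, extra staffing) would then enter the model by raising one or more of these component probabilities during predicted material flow pile-ups.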

Journal ArticleDOI
TL;DR: A heterogeneous discrete-time Susceptible-Infected-Susceptible (SIS) model is proposed to disclose the diffusion process by combining the SNs' heterogeneity with the malware's spread probability, which is predicted by a non-cooperative non-zero-sum game.
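To make the discrete-time SIS dynamics concrete, here is a homogeneous-mix simplification: each node is either susceptible or infected, infected nodes recover with some probability per step, and susceptible nodes are infected with a probability that grows with the infected fraction. All parameters are illustrative assumptions; the paper's model is heterogeneous and couples the spread probability to a game-theoretic prediction, neither of which is modeled here:

```python
import random

random.seed(42)

# Hypothetical sketch of discrete-time SIS malware dynamics on N nodes
# with homogeneous, illustrative parameters.
N = 1000       # number of sensor nodes
beta = 0.3     # per-contact, per-step infection probability
gamma = 0.1    # per-step recovery probability
k = 4          # average contacts per node per step

infected = [False] * N
for i in range(10):                # seed a few initial infections
    infected[i] = True

for step in range(200):
    frac_inf = sum(infected) / N
    # chance a susceptible node is infected by at least one of k contacts
    p_infect = 1 - (1 - beta * frac_inf) ** k
    new_state = list(infected)
    for i in range(N):
        if infected[i]:
            if random.random() < gamma:
                new_state[i] = False   # recover, susceptible again (SIS)
        else:
            if random.random() < p_infect:
                new_state[i] = True    # contract the malware
    infected = new_state

print(sum(infected) / N)               # endemic infected fraction
```

Because recovery returns nodes to the susceptible pool, the infection settles into an endemic equilibrium rather than dying out, which is the qualitative behavior an SIS model (as opposed to SIR) is chosen to capture.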

Journal ArticleDOI
TL;DR: A general model for multi-state deteriorating systems with condition-based preventive maintenance is introduced, and multi-objective optimization problems are formulated and solved in order to identify preventive maintenance policies that simultaneously optimize both the dependability and performance measures.
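The trade-off behind such condition-based policies can be illustrated with a toy multi-state deterioration chain: the system degrades through discrete states, and a threshold policy triggers cheap preventive maintenance before expensive failure repair. The states, probabilities, and costs below are illustrative assumptions, not the paper's model:

```python
import random

random.seed(1)

# Hypothetical sketch: states 0 (as new) .. 4 (failed). A condition-based
# policy resets the system preventively once the observed state reaches
# a threshold; reaching state 4 forces costly corrective repair.
P_DEGRADE = 0.2        # per-step chance of degrading one state
C_PREVENTIVE = 1.0     # cost of a preventive maintenance action
C_CORRECTIVE = 10.0    # cost of repair after failure

def average_cost(threshold, steps=100_000):
    state, cost = 0, 0.0
    for _ in range(steps):
        if state == 4:                 # failed: corrective repair
            cost += C_CORRECTIVE
            state = 0
        elif state >= threshold:       # condition-based preventive action
            cost += C_PREVENTIVE
            state = 0
        elif random.random() < P_DEGRADE:
            state += 1
    return cost / steps

for th in (1, 2, 3, 4):
    print(th, round(average_cost(th), 4))
```

A very low threshold wastes money on needless maintenance, while threshold 4 (maintain only at failure) pays the full corrective cost; an intermediate threshold minimizes the long-run cost rate. The paper's multi-objective formulation additionally trades such cost measures against dependability and performance measures rather than optimizing cost alone.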

01 Jan 2017
TL;DR: This book brings together work across different disciplines in software engineering, including work expanded from conference tracks and workshops led by the editors, to address the challenges of building big data software systems.
Abstract: Software Architecture for Big Data and the Cloud is designed to be a single resource that brings together research on how software architectures can solve the challenges imposed by building big data software systems. The challenges of big data for the software architecture can relate to scale, security, integrity, performance, concurrency, parallelism, and dependability, amongst others. Big data handling requires rethinking architectural solutions to meet functional and non-functional requirements related to volume, variety and velocity. The book's editors have varied and complementary backgrounds in requirements and architecture, specifically in software architectures for cloud and big data, as well as expertise in software engineering for cloud and big data. This book brings together work across different disciplines in software engineering, including work expanded from conference tracks and workshops led by the editors. Key Features:
- Discusses systematic and disciplined approaches to building software architectures for cloud and big data with state-of-the-art methods and techniques
- Presents case studies involving enterprise, business, and government service deployment of big data applications
- Shares guidance on theory, frameworks, methodologies, and architecture for cloud and big data