
Showing papers by "Kees Goossens" published in 2021


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a three-mode recovery scheme for Time-Triggered (TT) flows, comprising full-functionality, reduced-functionality, and emergency-halt modes.
Abstract: Reliability is one of the major concerns of Time Sensitive Networking (TSN). Current systems mostly rely on static redundancy to protect functionality from permanent component failures, which greatly increases the cost of Time-Triggered (TT) flows. Instead, Software Defined Networking (SDN) enables dynamic redundancy: disrupted traffic can be rerouted by a centralized controller to reduce the cost while maintaining reliability. This paper presents an approach to compute alternative paths at run-time and analyze their impact on reliability. We define a novel three-mode recovery scheme, which includes full-functionality, reduced-functionality, and emergency-halt modes. Run-time recovery for TT flows is explored using Integer Linear Programming (ILP) and a heuristic algorithm. Then, a Markov chain-based design-time reliability analysis is developed to evaluate the Mean Time to Reduced Functionality Mode (MTTRF) and Mean Time to Failure (MTTF) of run-time recoverable systems. Our experiments show that run-time recovery provides better protection against multi-point failures than static redundancy. Compared with the state of the art, our proposed ILP has better routing efficiency. The proposed heuristic algorithm can perform routing and scheduling in polynomial time, but it tends to route multicast flows over longer paths than the ILP. Furthermore, when applied to realistic recovery scenarios, our proposed ILP improves the MTTF by up to 2× and the average execution time by up to 20× compared with the raw ILP of the state of the art. Although less efficient with multicast flows, the heuristic algorithm achieves similar reliability to the ILP, and its worst-case recovery time is below 100 ms on an embedded ARM processor.
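
The abstract does not detail the Markov model used for the reliability analysis. Purely as a hedged illustration of how a Markov-chain-based MTTF/MTTRF evaluation can be set up, the Python sketch below uses a hypothetical three-state continuous-time chain (full functionality, reduced functionality, failed) with made-up transition rates; it is not the paper's model.

```python
# Illustrative sketch only: the actual Markov model and rates are not given in
# the abstract; the states and transition rates below are hypothetical.
import numpy as np

# States: 0 = full functionality, 1 = reduced functionality, 2 = failed (absorbing).
lam_fr = 1e-4   # full -> reduced: a component fails, traffic is rerouted at run-time
lam_fd = 1e-6   # full -> failed: unrecoverable multi-point failure
mu_rf  = 1e-2   # reduced -> full: the failed component is repaired
lam_rd = 1e-3   # reduced -> failed: a second failure hits before repair

# Generator matrix restricted to the transient states {full, reduced}.
Q = np.array([[-(lam_fr + lam_fd), lam_fr],
              [mu_rf,              -(mu_rf + lam_rd)]])

# Mean time to absorption (reaching the failed state) solves Q @ t = -1.
t = np.linalg.solve(Q, -np.ones(2))
mttf = t[0]                      # starting from full functionality

# Simple proxy for the MTTRF: mean sojourn time in the full-functionality mode.
mttrf = 1.0 / (lam_fr + lam_fd)

print(f"MTTF  ~ {mttf:.3e} hours")
print(f"MTTRF ~ {mttrf:.3e} hours")
```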

11 citations


Journal ArticleDOI
TL;DR: In this paper, the authors evaluate different variants of two common topologies, domain-based and zone-based architectures, in terms of total cost, failure probability, total communication cable length, communication load distribution, and functional load distribution.
Abstract: Safety-critical systems such as Advanced Driving Assistance Systems and Autonomous Vehicles require redundancy to satisfy their safety requirements and to be classified as fail-operational. Introducing redundancy in a system with high data rates and processing requirements also has a great impact on architectural design decisions. Current self-driving vehicle prototypes do not use a standardized system architecture but base their design on existing vehicles and the available components. In this work, we provide a novel analysis framework that allows us to qualitatively and quantitatively evaluate an in-vehicle architecture topology and compare it with others. With this framework, we evaluate different variants of two common topologies: domain-based and zone-based architectures. Each topology is evaluated in terms of total cost, failure probability, total communication cable length, communication load distribution, and functional load distribution. We introduce redundancy in selected parts of the system using the automated, safety-oriented design process provided in the framework, which enables the ISO 26262 Automotive Safety Integrity Level (ASIL) decomposition technique. After every design step, the architecture is re-evaluated. The advantages and disadvantages of the different architecture variants are assessed to guide the designer towards the choice of the correct architecture, with a focus on the introduction of redundancy.
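
The framework's failure-probability model is not given in the abstract. As a minimal, hypothetical sketch of the kind of metric it computes, the snippet below contrasts the failure probability of a single processing chain with that of a duplicated (fail-operational) chain, assuming independent failures and made-up per-component probabilities.

```python
# Hypothetical sketch: failure-probability metric for a redundant vs. a
# non-redundant allocation of a safety-critical function.

def series_failure(p_parts):
    """System fails if ANY part fails (no redundancy)."""
    ok = 1.0
    for p in p_parts:
        ok *= (1.0 - p)
    return 1.0 - ok

def parallel_failure(p_replicas):
    """System fails only if ALL redundant replicas fail."""
    fail = 1.0
    for p in p_replicas:
        fail *= p
    return fail

# Hypothetical per-mission failure probabilities of an ECU and its cabling.
p_ecu, p_cable = 1e-5, 1e-6

single = series_failure([p_ecu, p_cable])
# The same function duplicated over two independent ECU + cable chains.
duplex = parallel_failure([series_failure([p_ecu, p_cable])] * 2)

print(f"single channel: {single:.3e}")
print(f"duplex channel: {duplex:.3e}")
```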

9 citations


Proceedings ArticleDOI
01 Feb 2021
TL;DR: In this article, the authors present a bare-metal implementation of the XRCE-DDS standard on the CompSOC platform, an instance of a Multi-Processor System-on-Chip (MPSoC).
Abstract: The Publish-Subscribe paradigm is a design pattern for transparent communication in many recent distributed applications. Data Distribution Service (DDS) is a machine-to-machine communication standard that aims to provide reliable, high-performance, interoperable, and real-time data exchange based on the publish-subscribe paradigm. However, the high resource requirements of DDS limit its usage in low-cost embedded systems. XRCE-DDS is a Client-Agent-based standard that enables resource-constrained small embedded systems to connect to the DDS global data space. Current XRCE-DDS implementations suffer from dependencies on host operating systems, target only single processing units, and lack performance analysis methods. In this paper, we present a bare-metal implementation of the XRCE-DDS standard on the CompSOC platform as an instance of a Multi-Processor System-on-Chip (MPSoC). The proposed framework includes a hard real-time side hosting the XRCE-DDS Client and a soft real-time side hosting the XRCE-DDS Agent. A Scenario-Aware Data Flow (SADF) model is proposed to capture the dynamism of the system behavior in terms of different execution scenarios. We analyze the long-term expected throughput by capturing the probabilistic scenario switching with a proposed Markov model, which is experimentally validated.
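
The abstract describes capturing probabilistic scenario switching with a Markov model and deriving a long-term expected throughput. The toy sketch below (scenario names, switching probabilities, and per-scenario throughputs are all made up) shows one common way to do this: weight per-scenario throughput by the stationary distribution of the scenario-switching Markov chain.

```python
# Hypothetical sketch: long-term expected throughput of a scenario-aware
# dataflow (SADF) system, obtained from the stationary distribution of a
# Markov chain over execution scenarios. All numbers below are illustrative.

scenarios = ["idle", "publish_small", "publish_large"]

# Row-stochastic scenario-switching matrix: P[i][j] = P(next = j | current = i).
P = [[0.90, 0.08, 0.02],
     [0.30, 0.60, 0.10],
     [0.20, 0.30, 0.50]]

# Per-scenario throughput (e.g., messages per second), from per-scenario analysis.
throughput = [0.0, 1200.0, 300.0]

# Stationary distribution via power iteration: pi <- pi @ P until convergence.
pi = [1.0 / len(scenarios)] * len(scenarios)
for _ in range(1_000):
    pi = [sum(pi[i] * P[i][j] for i in range(len(pi))) for j in range(len(pi))]

expected = sum(p * t for p, t in zip(pi, throughput))
print("stationary distribution:", [round(p, 4) for p in pi])
print("long-term expected throughput:", round(expected, 1), "msg/s")
```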

7 citations


Proceedings ArticleDOI
06 Jun 2021
TL;DR: In this paper, the authors presented a simplification and a corresponding hardware architecture for hard-decision recursive projection-aggregation (RPA) decoding of Reed-Muller (RM) codes.
Abstract: In this work, we present a simplification and a corresponding hardware architecture for hard-decision recursive projection-aggregation (RPA) decoding of Reed-Muller (RM) codes. In particular, we transform the recursive structure of RPA decoding into a simpler and iterative structure with minimal error-correction degradation. Our simulation results for RM(7,3) show that the proposed simplification has a small error-correcting performance degradation (0.005 in terms of channel crossover probability) while reducing the average number of computations by up to 40%. In addition, we describe the first fully parallel hardware architecture for simplified RPA decoding. We present FPGA implementation results for an RM(6,3) code on a Xilinx Virtex-7 FPGA showing that our proposed architecture achieves a throughput of 171 Mbps at a frequency of 80 MHz.
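
The RPA algorithm itself is not spelled out in the abstract. Purely as background (and not the paper's specific simplification), the sketch below shows the projection step that gives RPA its name: the received word of an RM(r, m) code, indexed by the points of F_2^m, is projected onto the cosets of a one-dimensional subspace {0, b}, yielding a shorter word that is decoded recursively (or, in the simplified scheme, iteratively) and later aggregated.

```python
# Background sketch of the hard-decision RPA projection step; this is textbook
# RPA machinery, not the specific simplification proposed in the paper.

def project(y, b):
    """Project a received word y of length 2**m (y[z] is the bit at point z of
    F_2^m) onto the cosets of the 1-D subspace {0, b}. Each coset {z, z ^ b}
    contributes one bit: the XOR of the two received bits in that coset."""
    assert b != 0
    seen, proj = set(), []
    for z in range(len(y)):
        if z not in seen:
            seen.update({z, z ^ b})
            proj.append(y[z] ^ y[z ^ b])
    return proj  # a word of length 2**(m-1), a noisy RM(r-1, m-1) codeword

# Example: a received word for m = 3 (length 8), projected along b = 0b001.
y = [1, 0, 0, 1, 1, 1, 0, 0]
print(project(y, 0b001))   # 4 projected bits
```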

4 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a probabilistic analysis of collision-free communication between coexisting BLE and TSCH networks and develop a fast coexistence simulation model that computes the ratio of collision-free transmissions.
Abstract: Bluetooth Low Energy (BLE) and IEEE 802.15.4 Time-Slotted Channel Hopping (TSCH) are widely used low-power standard technologies developed for short-range communication in the Internet of Things. In many applications, BLE and TSCH networks may be deployed in the vicinity of one another, which may lead to Cross-Technology Interference (CTI) that degrades each network's performance. Both technologies utilize channel hopping to alleviate the impact of external interference. However, a model to quantitatively estimate the performance of coexisting TSCH and BLE networks and to analyze the role of the networks' configuration settings is still an open problem. To address this, we provide a probabilistic analysis of collision-free communication of coexisting BLE and TSCH networks. Moreover, a fast coexistence simulation model is developed that computes the ratio of collision-free transmissions. This model is used to investigate how the performance of the coexisting networks may deviate from the results of the probabilistic analysis, giving designers a sound estimate of the worst-case performance degradation due to such coexistence. It is shown that the severity of the impact of coexisting BLE-TSCH networks on one another depends on the configuration settings and the relative timing of the two networks. The results show that these two technologies can coexist well, with collision-free ratios of more than 92.58% and 96.29% in the tested configurations for TSCH and BLE, respectively.
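
The paper's probabilistic analysis is not reproduced in the abstract. As a strongly simplified illustration of the kind of estimate involved, the sketch below assumes both networks pick channels uniformly at random in aligned slots and that a slot is lost only when the chosen frequencies overlap; the overlap map and all parameters are assumptions, not values from the paper.

```python
# Simplified, hypothetical estimate of the collision-free ratio of coexisting
# TSCH and BLE networks under uniform random channel hopping.
import random

TSCH_CHANNELS = 16              # IEEE 802.15.4 channels 11..26 in the 2.4 GHz band
BLE_CHANNELS = 37               # BLE data channels used by adaptive hopping
OVERLAPS_PER_TSCH_CHANNEL = 2   # assumed number of BLE channels overlapping one
                                # TSCH channel (the real value follows the spectrum map)

def analytic_collision_free():
    # P(no overlap) = 1 - (#overlapping BLE channels) / (#BLE channels),
    # assuming independent, uniform channel choices in the same slot.
    return 1.0 - OVERLAPS_PER_TSCH_CHANNEL / BLE_CHANNELS

def simulated_collision_free(slots=100_000, ble_duty_cycle=1.0):
    ok = 0
    for _ in range(slots):
        tsch = random.randrange(TSCH_CHANNELS)
        if random.random() > ble_duty_cycle:   # BLE silent in this slot
            ok += 1
            continue
        ble = random.randrange(BLE_CHANNELS)
        # Assumed static overlap map: TSCH channel k overlaps BLE channels
        # {2k, 2k + 1} (purely illustrative).
        if ble not in (2 * tsch, 2 * tsch + 1):
            ok += 1
    return ok / slots

print(analytic_collision_free(), simulated_collision_free())
```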

3 citations


Proceedings ArticleDOI
15 Jun 2021
TL;DR: In this paper, the authors describe a model to characterize a mixed-criticality automotive system and the analysis steps to obtain quantified metrics such as cost, failure probability, total functional and communication loads, and total cable length.
Abstract: Future automotive systems, with Advanced Driving Assistance Systems and Autonomous Driving functionalities, will require fail-operational electronic systems. To achieve this, redundancy is a necessary technique, as in many other fields such as aviation. Moreover, the applications have different safety requirements, ranging from safety-critical applications, for example in the driver-replacement domain, to QoS-oriented applications, for example in the infotainment domain. Redundancy in mixed-criticality systems can be provided by physically separating system resources or by using isolated virtualized environments, e.g. with hypervisors. There are costs associated with both solutions. In this work, we describe a novel model to characterize a mixed-criticality automotive system and the analysis steps to obtain quantified metrics. The quantified metrics include cost, failure probability, total functional and communication loads, and total cable length, and are used to compare the different solutions from a system-level perspective. We analyse the same set of mixed-criticality applications, representing a simplified automotive system, in four scenarios. The architecture topology is either domain-based or zone-based, and we use either physical separation or virtualization to provide isolation. The obtained results show how the model and the analysis allow us to understand the trade-offs between the different solutions in specific application scenarios, and how to vary the metrics used in the analysis to adapt to a different application scenario.
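
The abstract lists the quantified metrics but not the underlying system model. The hypothetical sketch below shows one simple way such a model could be represented and the totals aggregated; the node, link, and mapping data are entirely made up and the paper's actual model is richer than this.

```python
# Hypothetical sketch of a candidate topology and its quantified metrics.
from dataclasses import dataclass

@dataclass
class Node:                  # an ECU, domain controller, or zonal controller
    cost: float              # unit cost (arbitrary currency)
    capacity: float          # available functional load (e.g., normalized CPU)

@dataclass
class Link:                  # a communication cable between two nodes
    length_m: float
    bandwidth_mbps: float

@dataclass
class Mapping:               # a function mapped onto a node, with its load
    node: str
    load: float

def evaluate(nodes, links, mappings):
    total_cost = sum(n.cost for n in nodes.values())
    total_cable = sum(l.length_m for l in links)
    used = {}
    for m in mappings:
        used[m.node] = used.get(m.node, 0.0) + m.load
    utilization = {name: used.get(name, 0.0) / n.capacity
                   for name, n in nodes.items()}
    return total_cost, total_cable, utilization

nodes = {"zone_fl": Node(cost=80, capacity=1.0), "central": Node(cost=400, capacity=4.0)}
links = [Link(length_m=3.5, bandwidth_mbps=1000)]
mappings = [Mapping("central", 1.2), Mapping("zone_fl", 0.4)]
print(evaluate(nodes, links, mappings))
```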

1 citation


Proceedings ArticleDOI
11 Oct 2021
TL;DR: In this article, the authors propose a run-time deployment framework that is more flexible in defining constraints and optimization goals and works with more heterogeneous resources and resource models than existing solutions.
Abstract: Traditional embedded systems and recent platforms used in emerging computing paradigms (e.g., fog computing) have resource limits and require their applications and services to be dynamically added (i.e., deployed) and removed at run-time. These applications often have non-functional (quality) requirements (e.g., end-to-end latency) which are only satisfied when sufficient resources are allocated to them. Hence, a run-time decision-maker is needed to optimize the deployments, in terms of resource budgets that are allocated to applications. Additionally, computing platforms have become heterogeneous in terms of their resources and the applications they execute. However, the existing deployment solutions are limited to specific resources and services. In this paper, we propose a run-time deployment framework that is more flexible in defining constraints and optimization goals and works with more heterogeneous resources and resource models than existing solutions. The framework is implemented on an embedded platform as a proof of concept.
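
The abstract does not define the decision-maker's algorithm. As a hedged illustration of the kind of run-time decision involved, the sketch below admits a new application only if some node can supply its requested resource budgets, and reserves them on the chosen node; the resource names, the scoring rule, and the numbers are illustrative assumptions, not the framework's actual policy.

```python
# Hypothetical sketch of a run-time deployment decision over heterogeneous
# resource budgets.

def deploy(app_demand, nodes):
    """app_demand: dict resource -> amount needed.
    nodes: dict node name -> dict resource -> free budget."""
    best, best_score = None, None
    for name, free in nodes.items():
        if all(free.get(r, 0.0) >= need for r, need in app_demand.items()):
            # Score: smallest absolute slack that would remain (larger is better;
            # crude, since the resource units differ).
            score = min(free[r] - need for r, need in app_demand.items())
            if best_score is None or score > best_score:
                best, best_score = name, score
    if best is None:
        return None                      # reject: requirements cannot be met anywhere
    for r, need in app_demand.items():   # reserve the budgets on the chosen node
        nodes[best][r] -= need
    return best

nodes = {"edge0": {"cpu": 0.6, "mem_mb": 512, "net_mbps": 40},
         "edge1": {"cpu": 0.3, "mem_mb": 1024, "net_mbps": 80}}
print(deploy({"cpu": 0.25, "mem_mb": 300, "net_mbps": 10}, nodes))
```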

1 citation


Journal ArticleDOI
TL;DR: This paper focuses on Stage 1, library characterization, as both test quality and cost are determined by the set of cell-internal defects identified and simulated in the CAT tool flow, and proposes an approach to identify a comprehensive set, referred to as the full set, of potential open- and short-defect locations based on cell layout.
Abstract: Cell-aware test (CAT) explicitly targets faults caused by defects inside library cells to improve test quality, compared with conventional automatic test pattern generation (ATPG) approaches, which target faults only at the boundaries of library cells. The CAT methodology consists of two stages. In Stage 1, library characterization, dedicated analog simulation per cell identifies which cell-level test pattern detects which cell-internal defect; this detection information is encoded in a defect detection matrix (DDM). In Stage 2, with the DDMs as inputs, cell-aware ATPG generates chip-level test patterns per circuit design that is built up of interconnected instances of library cells. This paper focuses on Stage 1, library characterization, as both test quality and cost are determined by the set of cell-internal defects identified and simulated in the CAT tool flow. With the aim of achieving the best test quality, we first propose an approach to identify a comprehensive set, referred to as the full set, of potential open- and short-defect locations based on cell layout. However, the full set of defects can be large even for a single cell, making the time cost of the defect simulation in Stage 1 unaffordable. To reduce the simulation time, we therefore collapse the full set to a compact set of defects, which serves as input to the defect simulation; the full set is stored for diagnosis and failure analysis. By inspecting the simulation results, we propose a method to verify the test quality based on the compact set of defects and, if necessary, to restore it to the same level as that achieved with the full set of defects. For 351 combinational library cells in Cadence's GPDK045 45 nm library, we simulate only 5.4% of the defects from the full set to achieve the same test quality as with the full set. In total, estimated via linear extrapolation per cell, the simulation time would be reduced by 96.4% compared with simulating the full set of defects.
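
As a small illustration of the defect detection matrix (DDM) concept described above (not the paper's tool flow, which relies on dedicated analog simulation per cell), the sketch below shows how defect coverage, and hence a test-quality figure, follows directly from a DDM; the matrix entries are made up.

```python
# Illustrative sketch: a DDM records which cell-level test pattern detects
# which cell-internal defect; coverage of a pattern set follows directly.

# ddm[p][d] = 1 if pattern p detects defect d, else 0 (hypothetical data).
ddm = [
    [1, 0, 0, 1, 0],   # pattern 0
    [0, 1, 0, 1, 0],   # pattern 1
    [0, 0, 1, 0, 0],   # pattern 2
]

def coverage(ddm, selected_patterns):
    n_defects = len(ddm[0])
    detected = [any(ddm[p][d] for p in selected_patterns) for d in range(n_defects)]
    return sum(detected) / n_defects

print(coverage(ddm, [0, 1, 2]))   # defect 4 is never detected -> 0.8
```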