
Showing papers presented at the "International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing" in 2010


Proceedings ArticleDOI
05 May 2010
TL;DR: The semantics of Descriptor-based lock-free data structures are studied and a classification of their operations is proposed that helps to better understand the ABA problem and derive an effective ABA prevention scheme.
Abstract: An increasing number of modern real-time systems and the now-ubiquitous multicore architectures demand the application of programming techniques for reliable and efficient concurrent synchronization. Some recently developed Compare-And-Swap (CAS) based nonblocking techniques hold the promise of delivering practical and safer concurrency. The ABA problem is fundamental to many CAS-based designs. Its significance has increased with the suggested use of CAS as a core atomic primitive for the implementation of portable lock-free algorithms. The ABA problem arises from the intricate interactions of the application's concurrent operations and, if not remedied, can significantly corrupt the semantics of a nonblocking algorithm. The current state of the art leaves the elimination of ABA hazards to the ingenuity of the software designer. In this work we provide the first systematic and detailed analysis of the ABA problem in lock-free Descriptor-based designs. We study the semantics of Descriptor-based lock-free data structures and propose a classification of their operations that helps us better understand the ABA problem and subsequently derive an effective ABA prevention scheme. Our ABA prevention approach outperforms the alternative CAS-based ABA prevention schemes by a large factor. It offers speeds comparable to the use of the architecture-specific CAS2 instruction used for version counting. We demonstrate our ABA prevention scheme by integrating it into an advanced nonblocking data structure, a lock-free dynamically resizable array.
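To make the hazard concrete, a minimal CAS-based stack pop in C11 shows how ABA can corrupt a lock-free structure; this is a generic illustration, not the paper's Descriptor-based scheme:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Minimal lock-free stack pop illustrating the ABA hazard. */
typedef struct node { struct node *next; } node_t;

static _Atomic(node_t *) top;

node_t *pop(void) {
    node_t *old, *next;
    do {
        old = atomic_load(&top);
        if (old == NULL)
            return NULL;
        next = old->next;  /* read before the CAS below */
        /* ABA window: another thread may pop 'old', pop and free further
         * nodes, then push a node that reuses 'old's address. The CAS
         * still succeeds, but 'next' is stale and the stack is corrupted.
         * Descriptor- or version-based schemes close this window. */
    } while (!atomic_compare_exchange_weak(&top, &old, next));
    return old;
}
```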

59 citations


Proceedings ArticleDOI
05 May 2010
TL;DR: The MERASA system software, an RTOS developed on top of the MERASA multi-core processor, fulfils the requirements for time-bounded execution of parallel hard real-time tasks and is evaluated by a worst-case execution time (WCET) analysis.
Abstract: Multi-cores are the contemporary solution to satisfy high performance and low energy demands in general and embedded computing domains. However, currently available multi-cores are not suitable for use in safety-critical environments with hard real-time constraints. Hard real-time tasks running on different cores must be executed in isolation or their interferences must be time-bounded. Thus, new requirements also arise for a real-time operating system (RTOS), in particular if the parallel execution of hard real-time applications is to be supported. In this paper we focus on the MERASA system software, an RTOS developed on top of the MERASA multi-core processor. The MERASA system software fulfils the requirements for time-bounded execution of parallel hard real-time tasks. In particular we focus on thread control with synchronisation mechanisms, memory management and resource management requirements. Our evaluations show that all system software functions are time-bounded by a worst-case execution time (WCET) analysis.

34 citations


Proceedings ArticleDOI
05 May 2010
TL;DR: This paper analyzes the Time-Triggered System-on-Chip (TTSoC) architecture and shows that the TTSoC architecture implements the core requirements of a MILS Separation Kernel and thus realizes its elementary security policies by design.
Abstract: High-integrity systems are deployed in order to realize safety-critical applications. To meet the rigorous requirements in this domain, these systems require a sophisticated approach to design, verification, and certification. Not only safety considerations have an impact on a product's overall dependability; security has to be taken into account as well. In this paper we analyze the Time-Triggered System-on-Chip (TTSoC) architecture, a novel architecture for Multi-Processor System-on-Chip (MPSoC) devices, regarding its security properties. We discuss essential compliance criteria for the Multiple Independent Layers of Security (MILS) architecture, an industry-ready architecture for embedded high-integrity systems. We found that both architectures share intrinsic properties, and we are able to show that the TTSoC architecture implements the core requirements of a MILS Separation Kernel and thus realizes its elementary security policies by design.

28 citations


Proceedings ArticleDOI
05 May 2010
TL;DR: This paper presents two secure packing methods that use AES encryption and the UPX packer to protect the intellectual property of software from reverse engineering attacks on Linux-based embedded systems, and analyzes the performance of the two packing methods from the perspective of code size, execution time, and power consumption.
Abstract: Packing (or executable compression) is considered one of the most effective anti-reverse-engineering methods in the Microsoft Windows environment. Even though reversing attacks are widely conducted against Linux-based embedded systems, there are no widely used secure binary code packing tools for Linux. This paper presents two secure packing methods that use AES encryption and the UPX packer to protect the intellectual property (IP) of software from reverse engineering attacks on Linux-based embedded systems. We call these methods secure UPX and AES-encryption packing. Since the original UPX system is designed not for software protection but for code compression, we present two anti-debugging methods in the unpacking module of the secure UPX to detect or abort reverse engineering attacks. Furthermore, since embedded systems are highly resource constrained, minimizing unpacking overhead is important. Therefore, we analyze the performance of the two packing methods from the perspective of: (i) code size, (ii) execution time, and (iii) power consumption. Our analysis results show that secure UPX performs better than AES-encryption packing in terms of code size, execution time, and power consumption.
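The abstract does not detail the two anti-debugging methods; a common Linux technique in this family is a ptrace self-check, shown here as an illustrative assumption rather than the paper's actual mechanism:

```c
#include <stdlib.h>
#include <sys/ptrace.h>

/* A Linux process can be traced by at most one tracer, so if
 * PTRACE_TRACEME fails, a debugger is already attached. */
static void abort_if_debugged(void) {
    if (ptrace(PTRACE_TRACEME, 0, NULL, NULL) == -1)
        exit(1);  /* suspected reverse-engineering attempt: stop unpacking */
}
```

An unpacking stub would run such a check before decrypting or decompressing the protected code.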

27 citations


Proceedings ArticleDOI
04 May 2010
TL;DR: The requirements and architectural considerations that provide a SORT system with processes for observing, modeling, simulating, predicting, deciding, and acting in an external environment are described and how these concepts apply to SORT agent knowledge and coordination are shown.
Abstract: This paper is about the requirements and architectural considerations that provide a SORT system with processes for observing, modeling, simulating, predicting, deciding, and acting in an external environment. For our purposes, "real time" (RT) means coordinated with an external source of time or with sequences of events over which the system has no direct control. It is this unpredictability in the timing of responses that is the hardest constraint on a real-time system design, especially when it is known a priori that the system cannot keep up with all important events, and that "as fast as possible" is not appropriate for some external interactions. We will describe a testbed that we are developing as a student team project at California State Polytechnic University, Pomona (Cal Poly Pomona) to experiment with SORT strategies, and a set of games that we will use to benchmark performance. Then we will describe some useful technical background from several areas: reasoning and representation processes, situation theory, levels of meaningfulness in knowledge, and activity loops. Finally, we show how these concepts apply to SORT agent knowledge and coordination. Our contribution here is to outline a set of problems (in the form of cooperative games) that we hope others in the community will adopt as one method for benchmarking models, methods, strategies, and other processes used in SORT systems.

26 citations


Proceedings ArticleDOI
05 May 2010
TL;DR: This paper proposes the first adaptive WCET-aware compiler framework for an automatic search of compiler optimization sequences which yield highly optimized code and considers the worst-case execution time (WCET) which is a crucial parameter for real-time systems.
Abstract: With the growing complexity of embedded systems software, high code quality can only be achieved using a compiler. Sophisticated compilers provide a vast spectrum of various optimizations to improve code aggressively w.r.t. different objective functions, e.g., average-case execution time (ACET) or code size. Due to the complex interactions between the optimizations, the choice of a promising sequence of code transformations is not trivial. Compiler developers address this problem by proposing standard optimization levels, e.g., O3 or Os. However, previous studies have shown that these standard levels often miss optimization potential or might even result in performance degradation. In this paper, we propose the first adaptive WCET-aware compiler framework for an automatic search of compiler optimization sequences which yield highly optimized code. Besides the objective functions ACET and code size, we consider the worst-case execution time (WCET), which is a crucial parameter for real-time systems. To find suitable trade-offs between these objectives, stochastic evolutionary multi-objective algorithms identifying Pareto optimal solutions are exploited. A comparison based on statistical performance assessments is performed which helps to determine the most suitable multi-objective optimizer. The effectiveness of our approach is demonstrated on real-life benchmarks showing that standard optimization levels can be significantly outperformed.
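The Pareto-optimality notion driving the multi-objective search can be made concrete with the dominance test such algorithms apply when comparing candidate optimization sequences; a small sketch in which the metric values are assumed to come from WCET/ACET analysis and code-size measurement (not modeled here):

```c
/* Objective vector for one compiler optimization sequence. */
typedef struct {
    double wcet;       /* estimated worst-case execution time */
    double acet;       /* estimated average-case execution time */
    double code_size;  /* size of the generated code */
} objectives_t;

/* 'a' dominates 'b' iff it is no worse in every objective and strictly
 * better in at least one (all three objectives are minimized). */
static int dominates(const objectives_t *a, const objectives_t *b) {
    if (a->wcet > b->wcet || a->acet > b->acet ||
        a->code_size > b->code_size)
        return 0;
    return a->wcet < b->wcet || a->acet < b->acet ||
           a->code_size < b->code_size;
}
```

A sequence is Pareto optimal when no other evaluated sequence dominates it; the evolutionary optimizers compared in the paper retain exactly this non-dominated set.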

22 citations


Proceedings ArticleDOI
05 May 2010
TL;DR: The paper discusses how the CORBA Component Model (CCM) could be combined with the ARINC-653 platform services and the lessons learned from this experiment; the results point towards both extending the CCM and revising ARINC-653.
Abstract: The complexity of software in systems like aerospace vehicles has reached the point where new techniques are needed to ensure system dependability while improving the productivity of developers. One possible approach is to use precisely defined software execution platforms that (1) enable the system to be composed from separate components, (2) restrict component interactions and prevent fault propagation, and (3) whose compositional properties are well-known. In this paper we describe the initial steps towards building a platform that combines component-based software construction with hard real-time operating system services. Specifically, the paper discusses how the CORBA Component Model (CCM) could be combined with the ARINC-653 platform services and the lessons learned from this experiment. The results point towards both extending the CCM and revising ARINC-653.

21 citations


Proceedings ArticleDOI
05 May 2010
TL;DR: This paper presents a prototype of the RTSC, the Real-Time System Compiler, a compiler-based tool that leverages the migration from event-triggered to time-triggered real-time systems, and demonstrates the applicability of the approach and the operation of the prototype.
Abstract: In this paper we present a prototype of the RTSC, the Real-Time System Compiler. The RTSC is a compiler-based tool that leverages the migration from event-triggered to time-triggered real-time systems. For this purpose, it uses an abstraction called Atomic Basic Blocks (ABBs) to capture all relevant dependencies of the event-triggered system in a so-called ABB-graph. This ABB-graph is transformed by the RTSC and finally mapped to a statically computed schedule that can be executed by standard time-triggered real-time operating systems. Moreover, we demonstrate the applicability of our approach and the operation of our prototype by transforming the event-triggered implementation of a real-world embedded system into a time-triggered equivalent.
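The statically computed schedule the RTSC emits is the kind of artifact a plain table-driven cyclic executive can run; a minimal sketch with hypothetical task functions and a hypothetical wait_until_us() timer primitive:

```c
#include <stddef.h>

/* One dispatch-table entry of a statically computed schedule. */
typedef struct {
    unsigned long release_us;  /* release offset inside the hyperperiod */
    void (*task)(void);
} slot_t;

extern void task_sense(void), task_control(void), task_actuate(void);
extern void wait_until_us(unsigned long t);  /* hypothetical: block until absolute time t */

static const slot_t schedule[] = {
    {    0, task_sense   },
    { 2000, task_control },
    { 4000, task_actuate },
};
#define HYPERPERIOD_US 10000UL

void dispatcher(void) {
    unsigned long cycle_start = 0;
    for (;;) {  /* repeat the table every hyperperiod */
        for (size_t i = 0; i < sizeof schedule / sizeof schedule[0]; i++) {
            wait_until_us(cycle_start + schedule[i].release_us);
            schedule[i].task();
        }
        cycle_start += HYPERPERIOD_US;
    }
}
```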

18 citations


Proceedings ArticleDOI
05 May 2010
TL;DR: This paper presents a policy-driven approach for managing QoS in SOA systems and discusses the design of several key QoS services and empirically evaluate their ability to provide QoS under CPU overload and bandwidth-constrained situations.
Abstract: Service-oriented architecture (SOA) middleware has emerged as a powerful and popular distributed computing paradigm due to its high-level abstractions for composing systems and hiding platform-level details. Control of some details hidden by SOA middleware is necessary, however, to provide managed quality of service (QoS) for SOA systems that need predictable performance and behavior. This paper presents a policy-driven approach for managing QoS in SOA systems. We discuss the design of several key QoS services and empirically evaluate their ability to provide QoS under CPU overload and bandwidth-constrained situations.

17 citations


Proceedings ArticleDOI
05 May 2010
TL;DR: This work presents a programming framework for writing control applications for modular robots that provides a complex programming model that is based on standard finite state machines extended in syntax and semantics to support communication, variables, and actions.
Abstract: Modular robots are a powerful concept for robotics. A modular robot consists of many individual modules, so it can adjust its configuration to the problem. However, the fact that a modular robot consists of many individual modules makes it a highly distributed, highly concurrent real-time system, which is notoriously hard to program. In this work, we present our programming framework for writing control applications for modular robots. The framework includes a toolset that allows a model-based programming approach for control applications of modular robots, with code generation and verification. The framework is characterized by the following three features. First, it provides a programming model that is based on standard finite state machines, extended in syntax and semantics to support communication, variables, and actions. Second, the framework provides compositionality at the hardware and at the software level and allows building the modular robot and its control application from small building blocks. And third, the framework supports formal verification of the control application to aid the gait and task developer in identifying problems and bugs before deployment and testing on the physical robot.
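The flavor of the extended state-machine model can be illustrated with a hand-written miniature in C; the states, events, and the speed variable below are invented for illustration (the real framework generates such code from models and verifies it):

```c
/* A finite state machine extended with a variable and actions. */
typedef enum { S_IDLE, S_MOVING } state_t;
typedef enum { EV_START, EV_STOP } event_t;

typedef struct {
    state_t state;
    int speed;  /* an FSM "variable" updated by transition actions */
} module_fsm_t;

static void fsm_step(module_fsm_t *m, event_t ev) {
    switch (m->state) {
    case S_IDLE:
        if (ev == EV_START) { m->speed = 10; m->state = S_MOVING; }
        break;
    case S_MOVING:
        if (ev == EV_STOP)  { m->speed = 0;  m->state = S_IDLE; }
        break;
    }
}
```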

16 citations


Proceedings ArticleDOI
05 May 2010
TL;DR: This paper proposes a solution to the unbounded dynamism problem by providing an admission control protocol for real-time OSGi and provides a priority assignment approach to support temporal isolation.
Abstract: In previous work we motivated the need for using the OSGi Framework with the RTSJ to develop real-time systems. We found a number of issues with using these technologies together. One of the issues we discovered was unbounded dynamism caused by the absence of admission control. Components can be uninstalled, installed and updated without regulation. This means that it is impossible to guarantee resources to components. In this paper, we propose a solution to the unbounded dynamism problem by providing an admission control protocol for real-time OSGi. We also provide a priority assignment approach to support temporal isolation. The combination of admission control and temporal isolation ensures that it is safe to update components or install components into the system while still guaranteeing resources to components. We show the practicality of our admission control protocol by implementing a prototype and measuring the execution time overhead incurred when performing a component install with admission control.
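The abstract does not spell out the admission test itself; one standard building block such a protocol could rest on is a utilization-based schedulability check, sketched here under the assumption of rate-monotonic scheduling (not necessarily the paper's exact test):

```c
#include <math.h>
#include <stddef.h>

/* Liu & Layland bound: n periodic tasks are schedulable under
 * rate-monotonic priorities if total utilization <= n(2^(1/n) - 1). */
typedef struct { double wcet, period; } rt_task_t;

int admit(const rt_task_t *installed, size_t n, const rt_task_t *candidate) {
    double u = candidate->wcet / candidate->period;
    for (size_t i = 0; i < n; i++)
        u += installed[i].wcet / installed[i].period;
    double bound = (n + 1) * (pow(2.0, 1.0 / (double)(n + 1)) - 1.0);
    return u <= bound;  /* nonzero: installing the component is safe */
}
```

Rejecting installs and updates that fail such a test is what bounds the dynamism: every admitted component keeps its guaranteed share of the processor.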

Proceedings ArticleDOI
04 May 2010
TL;DR: The overall system requirements are described, the approach for combining real-time data streaming interaction with classical object-oriented remote calls in one distributed middleware stack is shown, and the resulting system will form the foundation for transmission of real- time data from devices at patients' home to the remote tele-medicine monitoring center.
Abstract: Density of medical clinics, ease of access to doctors, the age pyramid of the population - these are differentiating factors between patients living in urban and rural areas which may dramatically affect the probability of surviving certain diseases and attacks. The German Fontane project, a collaboration of medical experts, IT researchers and companies, focuses on remote monitoring and aftercare facilities for stroke patients and patients with heart diseases in rural Brandenburg. In this paper, we discuss central requirements on the object-oriented self-adaptive middleware being developed for Fontane. We describe the overall system requirements and show our approach for combining real-time data streaming interaction with classical object-oriented remote calls in one distributed middleware stack. The resulting system will form the foundation for transmitting real-time data from devices in patients' homes to the remote tele-medicine monitoring center.

Proceedings ArticleDOI
05 May 2010
TL;DR: A trace functionality, a new facility for confirming the state of components or the calling of components without modifying C source code, is introduced into a component system on a real-time operating system (RTOS).
Abstract: Software component techniques have been widely used to enhance software development and reduce its cost. We herein introduce a component system with a real-time operating system (RTOS). A case study of a two-wheeled inverted pendulum balancing robot with the component system is presented. The component system can deal with RTOS resources, such as tasks and semaphores, as components. Moreover, a trace functionality, a new facility for confirming the state of components or the calling of components without modification of the C source code, is introduced.

Proceedings ArticleDOI
05 May 2010
TL;DR: Different tradeoffs that emerge from the definition of different propagation models for distributed real-time Java are analyzed.
Abstract: Many real-time systems use preemptive priorities in their internals to guarantee certain real-time performance. This includes technologies that range from the RTSJ (Real-Time Specification for Java) to middleware like Real-Time CORBA (Common Object Request Broker Architecture), which offers additional models and policies that blend client and server information. This decision makes it easier to integrate real-time acceptance tests and dispatching policies in these kinds of infrastructures. In this paper, we analyze different tradeoffs that emerge from the definition of different propagation models for distributed real-time Java. The paper covers technological integration aspects, impact on interfaces, and other practical issues related to the performance that this model offers to a real-time application.

Proceedings ArticleDOI
05 May 2010
TL;DR: This paper presents a Virtual Machine Monitor (VMM)-based monitoring service for embedded systems that checks actual kernel data against a safe data specification, and addresses the inconsistent states that can arise because, due to the VMM and the multi-core nature of the system, the guest OS can be preempted at any time.
Abstract: The recent increase in complexity and functionality of embedded systems makes them more vulnerable to rootkit-type attacks, raising the need for integrity management systems. However, as of today there is no such system that can guarantee the system’s safety while matching the low-resource, real-time and multi-core requirements of embedded systems. In this paper, we present a Virtual Machine Monitor (VMM)-based monitoring service for embedded systems that checks the actual kernel data against a safe data specification. However, due to the VMM and the multi-core nature of the system, the guest OS can be preempted at any time, leading to the checking of potentially inconsistent states. We evaluated two approaches to solve this problem: detecting such invalid states by checking specific kernel data, and detecting system calls using the VMM.

Proceedings ArticleDOI
05 May 2010
TL;DR: A new embedded system modeling solution considering dual RTOS/GPOS systems is proposed, capable of modeling and evaluating all HW and SW system components, providing the designer with valuable information for early system optimization and design space exploration.
Abstract: The increase of computational power in embedded systems has allowed hard real-time tasks and rich applications to be integrated together. Complex SW infrastructures containing both an RTOS and a GPOS are required to handle this complexity. To optimally map system functionality to the hard-RT SW domain, to the general purpose SW domain or to HW peripherals, early performance evaluations at the first steps of the design process are required. Approximate timed co-simulation has been proposed as a fast solution for system modeling at early design steps. This co-simulation technique allows simulating systems at speeds close to functional execution, while considering timing effects. As a consequence, system performance estimations can be obtained early, allowing efficient design space exploration and system refinement. To achieve fast simulation speed, the SW code is pre-annotated with time information. The annotated code is then natively executed, performing what is called native-based co-simulation. Previous native-based simulation environments are not prepared to model multi-OS systems, so the performance evaluation of the different SW domains is not possible. This paper proposes a new embedded system modeling solution considering dual RTOS/GPOS systems. A real Linux-based infrastructure has been modeled and integrated into a state-of-the-art co-simulation environment. The resulting solution is capable of modeling and evaluating all HW and SW system components, providing the designer with valuable information for early system optimization and design space exploration.
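The annotation mechanism at the heart of native-based co-simulation fits in a few lines of C; a minimal sketch with invented cycle costs (a real flow derives them from target-specific estimation):

```c
/* Simulated target time advances as natively executed, pre-annotated
 * code passes the annotation points. */
static unsigned long long sim_cycles;
#define ANNOTATE(cycles) (sim_cycles += (cycles))

int filter_sample(int x) {
    ANNOTATE(12);          /* estimated target cost of the block below */
    int y = (x * 3) >> 2;
    ANNOTATE(8);
    return y + 1;
}
```

The host runs the code at native speed while sim_cycles tracks the target's timing, which is what keeps such co-simulation close to functional-execution speed.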

Proceedings ArticleDOI
05 May 2010
TL;DR: An implementation of the AHS written in pure ANSI C is presented and first real-world scenarios and results of test series are shown.
Abstract: Organic Computing is a new and promising research area. Inspired by nature, organic computing research seeks to learn from and adopt techniques and properties of nature. The goal is to acquire the so-called Self-X properties like self-organization and self-healing. Taking the hormone system of mammals as a role model, the artificial hormone system (AHS) was designed to map tasks onto processing elements using artificial hormones. In previous publications we presented the idea of an organic middleware and first theoretical results. As we want to run this middleware in an embedded scenario, we now present an implementation of the AHS written in pure ANSI C and show first real-world scenarios and results of test series.
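The allocation decision at the core of an artificial hormone system condenses to a few lines; the sketch below paraphrases the published AHS idea (names and the decision rule are simplified, not taken from the project's ANSI C code):

```c
/* Hormone levels a processing element (PE) tracks for one task. */
typedef struct {
    double eager;        /* local suitability for executing the task */
    double suppressor;   /* sum of suppressors received from other PEs */
    double accelerator;  /* sum of accelerators received (e.g., locality) */
} hormone_state_t;

static double modified_eager(const hormone_state_t *h) {
    return h->eager - h->suppressor + h->accelerator;
}

/* A PE takes the task when its modified eager value exceeds the
 * strongest value it has received from any other PE. */
static int should_take_task(const hormone_state_t *mine,
                            double strongest_received) {
    return modified_eager(mine) > strongest_received;
}
```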

Proceedings ArticleDOI
05 May 2010
TL;DR: The Data Distribution Service specification provides a totally decentralized data-centric approach with real-time quality of service support, and is a perfect base upon which to develop a framework for the integration of real- time distributed architectures.
Abstract: In recent times real-time distributed systems have definitively become peer-to-peer organized. The common interactions are those of different real-time components dealing with sensors or actuators, implementing controllers, performing monitoring and surveillance tasks, and interacting with each other in a dynamic, decentralized way. There is a need for mechanisms that allow the integration of these independent components, saving development time while keeping their real-time capability. Services and events, thanks to their decoupled nature, are perfect candidates for supporting these architectures. The data-centric approach goes even further, introducing a global data space that allows a flexible, decoupled and scalable coordination environment, over which services and events can be added as specific interaction mechanisms inside this global data space, in order to support all the architectural possibilities. The Data Distribution Service specification provides a totally decentralized data-centric approach with real-time quality of service support. It is a perfect base upon which to develop a framework for the integration of real-time distributed architectures.

Proceedings ArticleDOI
04 May 2010
TL;DR: A new approach is presented for distributing tasks connected by causal dependencies within a heterogeneous environment (e.g., several resources communicating with each other, or a processor grid) while meeting real-time constraints.
Abstract: In this paper we present a new approach to distributing tasks connected by causal dependencies within a heterogeneous environment, e.g., several resources communicating with each other or a processor grid. Our approach uses an artificial hormone system for task distribution which is able to meet real-time constraints. Several enhancements of the artificial hormone system are made, such as partial suppression of tasks and distributed task termination.

Proceedings ArticleDOI
04 May 2010
TL;DR: This work addresses the problem of dependable communications in the context of Controller Area Network (CAN), which is widely used in automotive and automation domains, and describes a methodology which enables the provision of appropriate scheduling guarantees.
Abstract: Dependable communication is becoming a critical factor due to the pervasive usage of networked embedded systems that increasingly interact with human lives in one way or the other in many real-time applications. Though many smaller systems provide dependable services by employing uniprocessor solutions, stringent fault containment strategies, etc., these practices are fast becoming inadequate due to the prominence of COTS in hardware and component-based development (CBD) in software, as well as the increased focus on building 'systems of systems'. Hence the repertoire of design paradigms, methods and tools available to the developers of distributed real-time systems needs to be enhanced in multiple directions and dimensions. In future scenarios, a network potentially needs to cater to messages of multiple criticality levels (and hence varied redundancy requirements), and scheduling them in a fault-tolerant manner becomes an important research issue. We address this problem in the context of the Controller Area Network (CAN), which is widely used in the automotive and automation domains, and describe a methodology which enables the provision of appropriate scheduling guarantees. The proposed approach involves the definition of fault-tolerant windows of execution for critical messages and the derivation of message priorities based on earliest deadline first (EDF).
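The EDF-based priority derivation maps naturally onto CAN arbitration, where the lowest identifier wins the bus; a generic sketch of that derivation step (field names are hypothetical, and the paper's fault-tolerant windows are not modeled):

```c
#include <stdlib.h>

typedef struct {
    unsigned deadline_us;  /* deadline of the message */
    unsigned can_id;       /* identifier to be assigned */
} can_msg_t;

static int by_deadline(const void *a, const void *b) {
    const can_msg_t *ma = a, *mb = b;
    return (ma->deadline_us > mb->deadline_us) -
           (ma->deadline_us < mb->deadline_us);
}

/* Earliest deadline gets the lowest CAN identifier, i.e., the highest
 * arbitration priority. */
void assign_edf_ids(can_msg_t *msgs, size_t n) {
    qsort(msgs, n, sizeof *msgs, by_deadline);
    for (size_t i = 0; i < n; i++)
        msgs[i].can_id = (unsigned)i;
}
```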

Proceedings ArticleDOI
05 May 2010
TL;DR: Intermediate results of work on strategies for eliminating timing anomalies originating from the processor’s out-of-order instruction pipeline are presented, and it is explained how the proposed strategies remove the timing anomalies caused by the pipeline.
Abstract: Divide-and-conquer approaches to worst-case execution-time analysis (WCET analysis) pose a safety risk when applied to code for complex modern processors: Interferences between the hardware acceleration mechanisms of these processors lead to timing anomalies, i.e., a local timing change causes either a larger or an inverse change of the global timing. This phenomenon may result in dangerous WCET underestimation. This paper presents intermediate results of our work on strategies for eliminating timing anomalies. These strategies are purely based on the modification of software, i.e., they do not require any changes to hardware. In an effort to eliminate the timing anomalies originating from the processor’s out-of-order instruction pipeline, we explored different methods of inserting instructions in the program code that render the dynamic instruction scheduler inoperative. We explain how the proposed strategies remove the timing anomalies caused by the pipeline. In the absence of working solutions for timing analysis for these complex processors, we chose portable metrics from compiler construction to assess the properties of our algorithms.

Proceedings ArticleDOI
04 May 2010
TL;DR: This paper describes a toolchain prototype implementation which is used to summarize lessons learned from practical insights and formalizes domain knowledge using OWL (Web Ontology Language) ontologies.
Abstract: When developing real-time embedded systems, various professional disciplines are involved. Concerning AAS (assistance and automotive systems) in the automotive domain, the project DeSCAS (Design of Safety-Critical Automotive Systems) has identified the design streams of functional development and architecture, safety measures, and human factors. What has been proposed is an interwoven development process and related methodologies to cope with these different design streams and their domain-specific terminology, models, methods and tools. A key aspect of the proposed methodology is formalizing domain knowledge using OWL (Web Ontology Language) [4] ontologies. Reasoning is applied to support analysis steps (impact analysis as well as hazard and risk analysis) and to infer consequences of design decisions for a single stream or for the entire development process. This paper describes a toolchain prototype implementation which is used to summarize lessons learned from practical insights. The toolchain currently interweaves two development streams: functional development and architecture activities with the management of safety measures. A simple emergency braking system is modeled as an example application of an assistance and automation system to illustrate the proposed approach.

Proceedings ArticleDOI
05 May 2010
TL;DR: The VIS Analyzer is proposed, a visual assistant for VIS verification and analysis, which can help nuclear engineers take full benefits of VIS without being overwhelmed by incomplete and low-level details.
Abstract: Formal verification plays an important role in demonstrating the quality of safety-critical systems such as nuclear power plants. We have used the VIS verification system to determine behavioral equivalence between two successive revisions in developing the KNICS RPS (Reactor Protection System) in Korea. VIS accepts the high-level programming language Verilog as input, and its verification results contain valuable information about one cause of a failure. However, VIS offers no graphical interface and only partially displays the information necessary to understand the full verification scenario accurately. Many nuclear engineers and verification experts found the information insufficient, which hinders the wide use of the VIS verification system in industry. This paper proposes the VIS Analyzer, a visual assistant for VIS verification and analysis, which can help nuclear engineers take full advantage of VIS without being overwhelmed by incomplete and low-level details. The VIS Analyzer automates VIS verification processes such as equivalence checking and model checking, and displays the verification results in visual formats. We used a recent case study to demonstrate its effectiveness and usefulness.

Proceedings ArticleDOI
04 May 2010
TL;DR: This paper defines ACOL, a model annotation language whose core combines analysis, constraint and optimization expressions, resulting in a powerful framework for efficient architectural design space exploration, usable from the earliest phases of embedded system design.
Abstract: Architecture-Driven Development of embedded systems involves finding the right trade-off between multiple non-functional properties at the model level. In this paper we define ACOL: a model annotation language whose core combines analysis, constraint and optimization expressions. This combination results in a powerful framework for efficient architectural design space exploration, usable from the earliest phases of embedded system design. We use AADL as an example to demonstrate how ACOL can be embedded in component-based Architecture Description Languages. The functionality of ACOL is illustrated with several use cases.

Proceedings ArticleDOI
05 May 2010
TL;DR: OASIS is presented, which is service-oriented middleware for instrumenting enterprise DRE systems to collect and extract metrics without design time knowledge of which metrics are collected, and its flexibility enables DRE system testers to precisely control the overhead incurred via instrumentation.
Abstract: Performance analysis tools for enterprise distributed real-time and embedded (DRE) systems require instrumenting heterogeneous sources (such as application- and system-level hardware and software resources). Traditional techniques for software instrumentation of such systems, however, are tightly coupled to system design and metrics of interest. It is therefore hard for system testers to increase their knowledge base and analytical capabilities for enterprise DRE system performance using existing instrumentation techniques when metrics of interest are not known during initial system design. This paper provides two contributions to research on software instrumentation for enterprise DRE systems. First, it presents OASIS, which is service-oriented middleware for instrumenting enterprise DRE systems to collect and extract metrics without design time knowledge of which metrics are collected. Second, this paper empirically evaluates OASIS in the context of a representative enterprise DRE system from the domain of shipboard computing. Results from applying OASIS to a representative enterprise DRE system show that its flexibility enables DRE system testers to precisely control the overhead incurred via instrumentation.

Proceedings ArticleDOI
05 May 2010
TL;DR: An FEC approach is proposed, in which encoding functionality is placed at the root and on a subset of interior nodes in the multicast tree and combined with a gossiping algorithm, to guarantee resilient and timely event dissemination despite message losses.
Abstract: Publish/subscribe middleware is being increasingly used to devise large-scale critical systems. Although several reliable publish/subscribe solutions have been proposed, none of them properly addresses the problem of assuring message dissemination in the presence of network omissions without breaking any temporal constraints. In order to fill this gap, we have investigated how to guarantee resilient and timely event dissemination despite message losses. The contribution of this paper is an FEC approach, in which encoding functionality is placed at the root and on a subset of interior nodes in the multicast tree, combined with a gossiping algorithm. Simulation-based experiments demonstrate that the proposed approach allows all the interested subscribers to receive all the published messages, and the adopted resiliency mechanism does not affect the timeliness of the multicast protocol.
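The simplest instance of the encoding functionality such an approach places at tree nodes is an XOR parity packet, which lets a subscriber reconstruct any single lost packet of a group; a generic sketch, not the paper's actual encoder:

```c
#include <stddef.h>
#include <stdint.h>

#define PKT_LEN 64  /* assumed fixed payload size */

/* Compute one parity packet over k data packets. */
void xor_parity(const uint8_t data[][PKT_LEN], size_t k,
                uint8_t parity[PKT_LEN]) {
    for (size_t b = 0; b < PKT_LEN; b++) {
        parity[b] = 0;
        for (size_t i = 0; i < k; i++)
            parity[b] ^= data[i][b];
    }
}
/* Recovery: XOR the parity with the k-1 received packets; the result
 * is the missing packet. More losses require stronger codes or, as in
 * the paper, a complementary gossiping round. */
```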

Proceedings ArticleDOI
04 May 2010
TL;DR: This work proposes to enhance future real-time systems with an in-system model-based timing analysis engine capable of deciding whether a configuration is feasible to execute, and provides a synchronization protocol solving the model consistency issues.
Abstract: Allowing real-time systems to autonomously evolve or self-organize during their lifetime poses challenges for guiding such a process. Hard real-time systems must never break their timing constraints, even when undergoing a change in configuration. We propose to enhance future real-time systems with an in-system model-based timing analysis engine capable of deciding whether a configuration is feasible to execute. This engine is complemented by a formal procedure guiding system evolution. The distributed implementation of a runtime environment (RTE) implementing this procedure raises two key questions of consistency: How do we ensure model consistency across the distributed system, and how do we ensure consistency of the actual system behavior with the model? We present a synchronization protocol solving the model consistency issues and provide a discussion of the implications of different mode-change protocols on the consistency of the system with its model.

Proceedings ArticleDOI
05 May 2010
TL;DR: It is shown how adding a new level of optimization at the model level results in more compact code.
Abstract: This paper addresses the problem of code optimization for Real-Time and Embedded Systems (RTES). Such systems are designed using a Model-Based Development (MBD) approach that consists of three major steps: building models, generating code from them, and compiling the generated code. During code generation, an important part of the modeling language semantics which could be useful for optimization is lost, making some optimizations impossible to achieve. This paper shows how adding a new level of optimization at the model level results in more compact code. It also discusses the impact of code generation on optimization: whether this step promotes or prevents optimizations. We conclude with a proposal for a new MBD approach containing only the steps that advance optimization: modeling and compiling.

Proceedings ArticleDOI
04 May 2010
TL;DR: This paper proposes to take advantage of the Polychrony clock calculus, named hierarchization, to analyze timed systems specified in CCSL and to generate code for simulation considering determinism; the work is being integrated into the TimeSquare environment dedicated to the simulation of MARTE timed systems.
Abstract: The UML Profile for Modeling and Analysis of Real-Time and Embedded systems (MARTE) defines a broadly expressive Time Model to provide a generic timed interpretation for UML models. As a part of MARTE, the Clock Constraint Specification Language (CCSL) allows the specification of systems with multiple clock domains as well as nondeterminism. In this paper, we propose to take advantage of the Polychrony clock calculus, named hierarchization, to analyze timed systems specified in CCSL, and to generate code for simulation considering determinism. Hierarchization makes it possible to identify the endochrony property in a system, which allows code generation that ensures determinism. The presented work is being integrated into the TimeSquare environment dedicated to the simulation of MARTE timed systems.

Proceedings ArticleDOI
04 May 2010
TL;DR: This paper proposes auto correlation clustering (ACC) as a technique to predict the workload of single iterations of a periodic soft real-time application and adjusts the processor performance such that deadlines are exactly met.
Abstract: Embedded real-time systems often operate under energy constraints due to a limited battery lifetime. Modern processors provide techniques for dynamic voltage and frequency scaling to reduce energy consumption. However, while the processor possibly operates at a lower clock frequency, the running applications should still meet their deadlines and thus set limits on the use of scaling techniques. In this paper, we propose auto correlation clustering (ACC) as a technique to predict the workload of single iterations of a periodic soft real-time application. Based on this prediction we adjust the processor performance such that deadlines are exactly met. We compare our technique to the broadly implemented race-to-idle (RTI) and identify situations where ACC can achieve higher energy savings than RTI. Additionally, ACC can help save energy in multithreaded processors, where RTI can be applied only with high overhead, if at all.
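The core arithmetic of deadline-exact scaling, as opposed to race-to-idle, is choosing the lowest frequency that still completes the predicted work on time; a minimal sketch in which predicted_cycles stands in for the output of the ACC predictor:

```c
/* Lowest clock frequency (Hz) that finishes the predicted workload
 * exactly at the deadline. */
static double required_frequency_hz(double predicted_cycles,
                                    double time_to_deadline_s) {
    return predicted_cycles / time_to_deadline_s;
}
/* A governor would round this up to the nearest available frequency
 * step; overprediction wastes some energy, underprediction risks a
 * (soft) deadline miss. */
```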