
Showing papers presented at the "International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing" in 2011


Proceedings ArticleDOI
28 Mar 2011
TL;DR: The concept of "schedule porosity" is introduced and the impact of time-triggered traffic on unsynchronized traffic as a function of schedule porosity is shown.
Abstract: Throughout many application areas of embedded and cyber-physical systems there is a demand to integrate more and more applications such that they share common resources. These applications may have different levels of criticality with respect to temporal or fault-tolerance properties, and we call the result of their integration a mixed-criticality system. The communication network is a resource of particular importance, and nowadays the system architecture is largely determined by the network's capabilities. A network for mixed-criticality systems has to establish partitioning such that the influence of messages from different applications on each other is bounded and the impact of low-criticality messages on high-criticality ones is minimized or removed altogether. A straightforward way to establish network-wide partitioning is the time-triggered communication paradigm, in which the communication schedule on the network is defined at design time and executed with respect to a globally synchronized time base. In this paper we discuss static scheduling methods for time-triggered traffic such that it can co-exist with non-time-triggered traffic. We introduce the concept of "schedule porosity" and show the impact of time-triggered traffic on unsynchronized traffic as a function of schedule porosity.
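
As a rough illustration of the idea (not taken from the paper), the sketch below builds a time-triggered schedule that deliberately leaves unallocated gaps, the "porosity", between slots so that unsynchronized traffic can be served in between; all names and numbers are assumptions.

```python
# Illustrative sketch (not from the paper): build a time-triggered schedule that
# reserves a fraction of the communication cycle as unallocated gaps ("porosity")
# in which unsynchronized, non-time-triggered frames can be forwarded.

def build_porous_schedule(tt_frames, cycle_len_ms, porosity):
    """tt_frames: list of (frame_id, duration_ms); porosity: fraction of the
    cycle deliberately left free and spread evenly between the TT slots."""
    gap = (cycle_len_ms * porosity) / max(len(tt_frames), 1)
    schedule, t = [], 0.0
    for frame_id, duration in tt_frames:
        schedule.append((round(t, 3), frame_id, duration))  # (slot start, frame, length)
        t += duration + gap                                 # leave a gap for other traffic
    if t - gap > cycle_len_ms:
        raise ValueError("time-triggered load plus porosity exceeds the cycle")
    return schedule

# Three TT frames in a 10 ms cycle with 30% porosity (all values hypothetical)
print(build_porous_schedule([("A", 1.0), ("B", 2.0), ("C", 1.5)], 10.0, 0.3))
```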

93 citations


Proceedings ArticleDOI
28 Mar 2011
TL;DR: The goal is henceforth to design a body sensor network for unobtrusive and highly accurate profiling of body parameters over weeks in realistic environments and build a prototype of such a non-invasive wearable wireless monitoring system for accurate body temperature measurements and real-time feedback to the medic.
Abstract: Medical measurements and clinical trials are often carried out in controlled lab settings -- severely limiting the realism and duration of such studies. Our goal is henceforth to design a body sensor network for unobtrusive and highly accurate profiling of body parameters over weeks in realistic environments. One example application is monitoring the impact of sleep deprivation on periodic processes in the human body known as circadian rhythms, which requires highly accurate profiling of skin temperature across the human body over weeks with real-time feedback to a remote medic. We analyze the requirements on a body sensor network for such applications and highlight the need for self-organizing behavior such as adaptive sampling to ensure energy efficiency and thus longevity, adaptive communication strategies, self-testing, automatic compensation for environmental conditions, or automatic recording of a diary of activities. As a first step towards this goal, we design and build a prototype of such a non-invasive wearable wireless monitoring system for accurate body temperature measurements and real-time feedback to the medic. Through the design, parameterization, and calibration of an active measurement subsystem, we obtain an accuracy of 0.02°C over the typical body temperature range of 16-42°C. We report results from two preliminary trials regarding the impact of circadian rhythms and mental activity on skin temperature, indicating that our tool could indeed become a valuable asset for medical research.

50 citations


Proceedings ArticleDOI
28 Mar 2011
TL;DR: This paper studies to what extent it is possible to apply some of the scheduling analysis techniques implemented in one open source tool to real-time systems deployed on AUTOSAR-compliant architectures.
Abstract: AUTOSAR (Automotive Open System Architecture) is enjoying increasing interest and broad acceptance in the automotive domain. AUTOSAR aims at defining an open standardized software architecture to face future challenges in automotive development, including the development of time-critical systems (e.g. brake-by-wire or steer-by-wire). Mastering the development of such systems requires being able to analyze their real-time behavior. Scheduling analysis is the theory that studies to what extent a real-time system can satisfy its real-time requirements given its real-time properties. In this paper, we study to what extent it is possible to apply some of those scheduling analysis techniques to real-time systems deployed on AUTOSAR-compliant architectures. The paper focuses on scheduling analysis techniques implemented in one open source tool. A concrete case study demonstrates the feasibility of the approach and presents scheduling analysis results.
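
As one example of the kind of scheduling analysis such tools implement, the sketch below runs a classical fixed-priority response-time iteration on a hypothetical task set; it is a generic textbook technique, not the specific analyses of the tool used in the paper.

```python
import math

# Classical fixed-priority response-time iteration on a hypothetical task set;
# a generic textbook analysis, not the specific techniques of the tool in the paper.

def response_time(task, higher_prio):
    """task and higher_prio entries are dicts with 'C' (WCET) and 'T' (period)."""
    r = task["C"]
    while True:
        interference = sum(math.ceil(r / hp["T"]) * hp["C"] for hp in higher_prio)
        r_next = task["C"] + interference
        if r_next == r:
            return r                  # fixed point: worst-case response time
        if r_next > task["T"]:
            return None               # exceeds the period (deadline assumed = period)
        r = r_next

tasks = [{"C": 1, "T": 5}, {"C": 2, "T": 10}, {"C": 3, "T": 20}]  # highest priority first
for i, t in enumerate(tasks):
    print(t, "->", response_time(t, tasks[:i]))
```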

43 citations


Proceedings ArticleDOI
28 Mar 2011
TL;DR: This work proposes an architectural blueprint for managing server system dependability in a pro-active fashion, in order to keep service-level promises for response times and availability even with increasing hardware failure rates.
Abstract: Next generation processor and memory technologies will provide tremendously increasing computing and memory capacities for application scaling. However, this comes at a price: Due to the growing number of transistors and shrinking structural sizes, overall system reliability of future server systems is about to suffer significantly. This makes reactive fault tolerance schemes less appropriate for server applications under reliability and timeliness constraints. We propose an architectural blueprint for managing server system dependability in a pro-active fashion, in order to keep service-level promises for response times and availability even with increasing hardware failure rates. We introduce the concept of anticipatory virtual machine migration that proactively moves computation away from faulty or suspicious machines. The migration decision is based on health indicators at various system levels that are combined into a global probabilistic reliability measure. Based on this measure, live migration techniques can be triggered in order to move computation to healthy machines even before a failure brings the system down.
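
A minimal sketch of the decision step only: per-level health indicators are combined into a global failure probability and a migration is triggered above a threshold. The indicator names, the independence assumption, and the threshold are illustrative, not from the paper.

```python
# Minimal sketch of the decision step: combine per-level health indicators into a
# global failure probability and trigger a live migration above a threshold. The
# indicator names, independence assumption, and threshold are illustrative.

def host_failure_probability(indicators):
    """indicators: dict mapping an indicator to the estimated probability that it
    alone leads to a host failure in the next interval (assumed independent)."""
    p_ok = 1.0
    for p_fail in indicators.values():
        p_ok *= (1.0 - p_fail)
    return 1.0 - p_ok

def should_migrate(indicators, threshold=0.05):
    return host_failure_probability(indicators) >= threshold

host = {"ecc_corrected_errors": 0.02, "disk_reallocated_sectors": 0.01,
        "fan_speed_anomaly": 0.005}
print(round(host_failure_probability(host), 4), should_migrate(host))
```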

41 citations


Proceedings ArticleDOI
28 Mar 2011
TL;DR: It is demonstrated how a proof of a non-functional system requirement can be conducted based on results from formal verification on the lowest possible level of human-written artefacts, that is, the source code level.
Abstract: Often, an integrated mixed-criticality system is built in an environment which provides separation functionality for available on-board resources. In this paper we treat such an environment: the PikeOS separation kernel -- a commercial real-time embedded operating system. PikeOS allows applications with different safety and security levels to run on the same hardware. Obviously, a mixed-criticality system built on PikeOS relies on the correct implementation of the separation mechanisms. In the context of the Verisoft XT and TECOM projects we apply deductive formal software verification to the PikeOS separation mechanisms in order to validate this security requirement. In this work we consider formal verification of a kernel memory manager, which is one of the crucial components of the separation functionality. The verification of the memory manager is carried out on the level of the source code using the VCC tool developed by Microsoft Research. Furthermore, we present the overall correctness arguments needed to prove the intended separation property, describe the necessary functional correctness properties of PikeOS, and explain how to formulate these properties in a modular way to be used by VCC. In doing so we demonstrate how a proof of a non-functional system requirement can be conducted based on results from formal verification on the lowest possible level of human-written artefacts, that is, the source code level.

39 citations


Proceedings ArticleDOI
28 Mar 2011
TL;DR: This paper proposes a Simulated Annealing-based approach to determine the sequence and size of the time slots within the Major Frame on each processor such that both the safety-critical and non-critical applications are schedulable.
Abstract: In this paper we are interested in mixed-criticality embedded real-time applications mapped onto distributed heterogeneous architectures. The architecture provides both spatial and temporal partitioning, thus enforcing enough separation for the critical applications. With temporal partitioning, each application is allowed to run only within predefined time slots allocated on each processor. The sequence of time slots for all the applications on a processor is grouped into a Major Frame, which is repeated periodically. We assume that the safety-critical applications (on all criticality levels) are scheduled using static-cyclic scheduling and the non-critical applications are scheduled using fixed-priority preemptive scheduling. We consider that each application runs in a separate partition, and each partition is allocated several time slots on the processors where the application is mapped. We are interested in determining the sequence and size of the time slots within the Major Frame on each processor such that both the safety-critical and the non-critical applications are schedulable. We propose a Simulated Annealing-based approach to solve this optimization problem. The proposed algorithm has been evaluated using several synthetic and real-life benchmarks.
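
As a toy illustration of the optimization step only (not the paper's algorithm), the skeleton below anneals slot sizes against a stand-in cost function; a real implementation would replace that cost function with the schedulability tests for the static-cyclic and fixed-priority partitions, and all numbers are invented.

```python
import math, random

# Toy simulated-annealing skeleton for sizing partition time slots in a Major Frame.
# The cost function is a stand-in penalty; a real implementation would replace it
# with the schedulability tests for the static-cyclic and fixed-priority partitions.

def cost(slots, major_frame, demands):
    unmet = sum(max(d - s, 0.0) for s, d in zip(slots, demands))   # demand not covered
    overflow = max(sum(slots) - major_frame, 0.0)                  # frame over-allocated
    return unmet + 10.0 * overflow

def anneal(demands, major_frame, iters=5000, temp=1.0, cooling=0.999):
    current = [major_frame / len(demands)] * len(demands)          # start with equal slots
    best, best_cost = current[:], cost(current, major_frame, demands)
    for _ in range(iters):
        cand = current[:]
        i = random.randrange(len(cand))
        cand[i] = max(0.0, cand[i] + random.uniform(-0.5, 0.5))    # perturb one slot size
        delta = cost(cand, major_frame, demands) - cost(current, major_frame, demands)
        if delta < 0 or random.random() < math.exp(-delta / temp): # accept worse moves early
            current = cand
        if cost(current, major_frame, demands) < best_cost:
            best, best_cost = current[:], cost(current, major_frame, demands)
        temp *= cooling
    return [round(s, 2) for s in best], round(best_cost, 3)

print(anneal(demands=[3.0, 2.0, 4.0], major_frame=10.0))
```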

31 citations


Proceedings ArticleDOI
28 Mar 2011
TL;DR: The approach to a high-level model of structured knowledge and a formal model of awareness in such autonomic service-component ensembles of special autonomic components are presented.
Abstract: Knowledge is the source of intelligence, and both knowledge representation and knowledge management are crucial for intelligent systems. Well-employed knowledge helps such systems become aware of situations, recognize states and eventually respond to changes. This paper presents our vision of knowledge representation and awareness in mobile swarm systems formed as open-ended ensembles of special autonomic components. Such components encapsulate rules, constraints and mechanisms for self-management, and acquire and process knowledge about themselves, other service components and their environment. In this paper, we present our approach to a high-level model of structured knowledge and a formal model of awareness in such autonomic service-component ensembles.

25 citations


Proceedings ArticleDOI
28 Mar 2011
TL;DR: Some principles and mechanisms to securely operate mixed-criticality real-time systems on embedded platforms are provided, along with an analysis of their impact on the global system safety, in particular on the determinism property of the PharOS model.
Abstract: This paper provides an overview of some principles and mechanisms to securely operate mixed-criticality real-time systems on embedded platforms. These principles are illustrated with PharOS, a complete set of tools to design, implement, and execute real-time systems on automotive embedded platforms. The keystone of this approach is a dynamic time-triggered methodology that supports full temporal isolation without wasting CPU time. In addition, memory isolation is handled through automatic off-line generation of fine-grained memory protection tables used at runtime. These isolation mechanisms are building blocks for the support of mixed-criticality applications. Several extensions have been brought to this model to expand the support for mixed-criticality within the system. These extensions feature fault recovery, support for the cohabitation of event-triggered and time-triggered tasks, and paravirtualization of other operating systems. The contribution of this paper is to provide a high-level description of these extensions, along with an analysis of their impact on the global system safety, in particular on the determinism property of the PharOS model.

25 citations


Proceedings ArticleDOI
28 Mar 2011
TL;DR: A relative safety metric is introduced that compares test suites with respect to how well the observed worst-case behavior of program parts is exercised and empirically shows that common code coverage criteria from the domain of functional testing can produce unsafe WCET estimates in the context of MBTA for systems with a processor like the TriCore 1796.
Abstract: Measurement-based timing analysis (MBTA) is a hybrid approach that combines execution-time measurements with static program analysis techniques to obtain an estimate of the worst-case execution time (WCET) of a program. The most challenging part of MBTA is test data generation. Choosing an adequate set of test vectors determines safety and efficiency of the overall analysis. So far, there are no feasible criteria that determine how well the worst-case temporal behavior of program parts is covered by a given test-suite. In this paper we introduce a relative safety metric that compares test suites with respect to how well the observed worst-case behavior of program parts is exercised. Using this metric, we empirically show that common code coverage criteria from the domain of functional testing can produce unsafe WCET estimates in the context of MBTA for systems with a processor like the TriCore 1796. Further, we use the relative safety metric to examine coverage criteria that require all feasible pairs of, e.g., basic blocks to be exercised in combination. These are shown to be superior to code coverage criteria from the domain of functional testing, but there is still a chance that an unsafe WCET estimate is derived by MBTA in our experimental setup. Based on the outcomes of our evaluation we introduce and examine Balanced Path Generation, an input data generation technique that combines the advantages of all evaluated coverage criteria and random input data generation.
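
As a rough illustration of what a relative comparison between test suites could look like (the paper's exact metric is not reproduced here), the sketch below records, per program part, the worst execution time observed under each suite and reports the fraction of parts where one suite matches or exceeds the other; all measurement data is fabricated.

```python
# Rough illustration of a relative comparison between test suites (the paper's exact
# metric is not reproduced): per program part, compare the worst execution time
# observed under suite A against suite B. Measurement data here is fabricated.

def worst_per_part(measurements):
    """measurements: list of (part_id, observed_exec_time) samples from one suite."""
    worst = {}
    for part, t in measurements:
        worst[part] = max(worst.get(part, 0), t)
    return worst

def relative_coverage(suite_a, suite_b):
    """Fraction of program parts where suite_a observes an execution time at least
    as large as suite_b, i.e. where A exercises the worst case no worse than B."""
    wa, wb = worst_per_part(suite_a), worst_per_part(suite_b)
    parts = set(wa) | set(wb)
    return sum(1 for p in parts if wa.get(p, 0) >= wb.get(p, 0)) / len(parts)

suite_a = [("bb1", 12), ("bb2", 7), ("bb3", 4)]   # (basic block, cycles observed)
suite_b = [("bb1", 10), ("bb2", 9), ("bb3", 4)]
print(relative_coverage(suite_a, suite_b))         # 2 of 3 parts -> ~0.67
```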

23 citations


Proceedings ArticleDOI
28 Mar 2011
TL;DR: The solution shows that authentication can be transparently applied to a time-triggered system exploiting the available global time base and without violating its timeliness properties.
Abstract: This paper investigates the security of time-triggered transmission channels, which are used to establish a predictable and timely message transfer in a distributed embedded system with potential safety constraints. Within such a system, safety and security are closely related, because malicious attacks can have an impact on a system's safety and thereby cause severe damage. An attacker could masquerade as an original sender and try to alter some system parameters by injecting malicious messages into the system. In the embedded real-time systems domain, the authenticity of data items is of particular interest, because a lack of integrity can lead to incorrect or erroneous system behavior. In addition, we address the open research question of how a common notion of time can contribute to a system's security. Our solution encompasses an authentication protocol to secure time-triggered transmission channels. We illustrate two attack scenarios (insertion and substitution) that aim at injecting fake messages into such a channel, thereby corrupting the internal system state of a receiver. We discuss the feasibility of several key management strategies for embedded systems and describe an authentication protocol using time-delayed release of symmetric keys for time-triggered systems. In a case study we implement the protocol for a prototype Time-Triggered Ethernet (TTE) system. The insight gained from the evaluation is that the computation of the cryptographic algorithms consumes most resources. Our solution shows that authentication can be transparently applied to a time-triggered system, exploiting the available global time base and without violating its timeliness properties.
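
The sketch below illustrates the general shape of time-delayed key release (in the spirit of TESLA-style schemes): a frame sent in slot i carries a MAC under a key that is disclosed only in a later slot, and receivers verify the disclosed key against a pre-shared commitment before checking the MAC. It is an assumption-laden illustration, not the paper's protocol.

```python
import hashlib, hmac

# Hedged sketch of authentication with time-delayed release of symmetric keys
# (TESLA-style): a frame in slot i carries a MAC under key K_i, and K_i itself is
# disclosed only in a later slot; receivers buffer the frame and verify once the
# key arrives. All details are illustrative, not the paper's protocol.

def key_chain(seed: bytes, length: int):
    """Derive keys by repeated hashing; keys are used/disclosed in reverse order of
    derivation, so an already-disclosed key cannot forge future MACs."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return list(reversed(chain))                  # chain[i] is the key for slot i

keys = key_chain(b"secret-seed", 4)
anchor = keys[0]                                  # pre-shared, authenticated commitment

def authentic(disclosed_key, slot):
    """Hashing the key for slot i exactly i times must yield the commitment."""
    k = disclosed_key
    for _ in range(slot):
        k = hashlib.sha256(k).digest()
    return k == anchor

def send(slot, payload):
    return payload, hmac.new(keys[slot], payload, hashlib.sha256).digest()

def verify(slot, payload, mac, disclosed_key):
    return authentic(disclosed_key, slot) and hmac.compare_digest(
        hmac.new(disclosed_key, payload, hashlib.sha256).digest(), mac)

msg, tag = send(1, b"actuator-setpoint=42")
print(verify(1, msg, tag, keys[1]))               # True once K_1 is released
```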

21 citations


Proceedings ArticleDOI
28 Mar 2011
TL;DR: A low-cost and lightweight approach to measure end-to-end latency of time-triggered Ethernet traffic with off-the-shelf components is presented, together with a validation against an Ethernet performance analyzer and a mathematical framework to check the given results.
Abstract: The performance analysis and validation of distributed real-time systems poses significant challenges due to the high accuracy requirements on the measurement tools. A fully synchronized time scale at ultrafine granularity is not easy to generate. Even though there are several analyzer tools for standard switched Ethernet, these tools cannot be applied in time-triggered networks, since they do not meet the requirements of synchronized packet generation. This paper introduces a low-cost and lightweight approach to measure the end-to-end latency of time-triggered Ethernet traffic with off-the-shelf components. By using standard computer hardware and a real-time Linux kernel, it is shown that measurements can be achieved at microsecond resolution. Furthermore, a validation with an Ethernet performance analyzer and a mathematical framework is presented to check the given results.
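
A minimal sketch of the measurement bookkeeping only: a sender timestamps each probe and a receiver takes the difference on arrival (here over loopback, so both clocks trivially coincide). Achieving microsecond accuracy across machines, as in the paper, additionally requires a synchronized time base and a real-time kernel.

```python
import socket, struct, time

# Minimal one-way latency bookkeeping over a loopback UDP socket: the sender
# timestamps each probe, the receiver timestamps on arrival and takes the difference.
# Microsecond accuracy across machines additionally requires synchronized clocks and
# a real-time kernel; this only illustrates the bookkeeping.

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                                   # receiver on an ephemeral port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

for seq in range(5):
    tx.sendto(struct.pack("!Id", seq, time.time()), rx.getsockname())
    data, _ = rx.recvfrom(64)
    n, t_sent = struct.unpack("!Id", data)                  # sequence number + TX timestamp
    print(f"probe {n}: one-way latency ~ {(time.time() - t_sent) * 1e6:.1f} us")

tx.close(); rx.close()
```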

Proceedings ArticleDOI
28 Mar 2011
TL;DR: A new intermediate dependability model is introduced that acts as a bridge between the high-level engineering language and the low-level dependability analysis formalism, and its features and expressive power are discussed, showing its application to the modelling of a simple but representative case study.
Abstract: Model-Driven Engineering (MDE) aims to elevate models in the engineering process to a central role in the specification, design, integration, validation, and operation of a system. MDE is becoming a widely used approach within the dependability domain: the system, together with its main dependability-related characteristics, is represented by engineering language models, while automatic transformations are used to generate the analysis models for the dependability analyses. This paper discusses the dependability concerns that should be captured by engineering languages for dependability analysis. It motivates and defines a conceptual model where the specific dependability aspects related to specific dependability analyses can be consistently and unambiguously merged, also detailing the part of the conceptual model supporting state-based dependability analysis methods. Then, it introduces a new intermediate dependability model that acts as a bridge between the high-level engineering language and the low-level dependability analysis formalism, and discusses its features and expressive power, showing its application to the modelling of a simple but representative case study.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: This paper presents an approach for modeling, simulating, and analyzing multi-clock real-time systems during the different steps of a design, which range from the first requirements to a model allocated on a specific execution platform.
Abstract: This paper presents an approach for modeling, simulating, and analyzing multi-clock real-time systems during the different steps of a design. These steps range from the first requirements to a model allocated on a specific execution platform. The UML MARTE profile and the CCSL language are used together to specify the causal and temporal characteristics of the software as well as the hardware parts of the system. The TimeSquare environment allows a simulation of such a specification and the detection of potential errors and deadlocks. When the specification refinement is finished, the CCSL specification is used to generate a synchronous model and some observers in Esterel in order to prove the specification's correctness. We illustrate the approach through a spark ignition control system.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: This work reports on a transformation from Sequential Function Charts and Function Block Diagrams of the IEC 61131-3 standard to BIP, and establishes a notion of invariant preservation between the two languages.
Abstract: Many applications in the industrial control domain are safety-critical. A large number of analysis techniques to guarantee safety may be applied at different levels in the development process of a Programmable Logic Controller. The development process is typically associated with a tool chain comprising model transformations. The preservation of safety properties in model transformations is necessary to achieve a safe system. Preservation can be guaranteed by showing that invariants are preserved by transformations. Adequate transformation rules and invariant specification mechanisms are needed for this. We report on a transformation from Sequential Function Charts and Function Block Diagrams of the IEC 61131-3 standard to BIP. Our presentation features a description of formal syntax and semantics of the involved languages. We present transformation rules for generating BIP code out of IEC 61131-3 specifications. Based on this, we establish a notion of invariant preservation between the two languages.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: This paper describes how architectural design pattern templates can be used to build common features of FSW architectures and can be validated for functional and performance correctness.
Abstract: Software design patterns are best practice solutions to common software design problems. When they are properly applied, software design patterns can greatly improve the quality of software architectures. However, applying design patterns in practice can be difficult since design pattern descriptions are domain and platform independent. Leveraging the benefits of design patterns is particularly important in the space flight software (FSW) domain because better designs are needed to help reduce the number of in-flight software-related anomalies. In order to address the aforementioned problems, this paper presents software architectural design patterns for space flight software. This paper describes how architectural design pattern templates can be used to build common features of FSW architectures. The FSW architectures produced can be validated for functional and performance correctness.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: This paper concentrates on the model-based hardware generation and programming approach proposed within the MADES project and on the way in which software is refactored by Compile-Time Virtualisation.
Abstract: This paper gives an overview of the model-based hardware generation and programming approach proposed within the MADES project. MADES aims to develop a model-driven development process for safety-critical, real-time embedded systems. MADES defines a systems modelling language based on subsets of MARTE and SysML that allows iterative refinement from high-level specification down to final implementation. The MADES project specifically focuses on three unique features which differentiate it from existing model-driven development frameworks. First, model transformations in the Epsilon modelling framework are used to move between system models and provide traceability. Second, the Zot verification tool is employed to allow early and frequent verification of the system being developed. Third, Compile-Time Virtualisation is used to automatically retarget architecturally-neutral software for execution on complex embedded architectures. This paper concentrates on MADES's approach to the specification of hardware and the way in which software is refactored by Compile-Time Virtualisation.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: This paper presents the design and implementation of the Virtual CPU (VCPU) resource scheduling scheme in RT-Llama to achieve predictable process executions, and shows that requests with end-to-end deadlines can be admitted and completed before their deadlines with the VCPU scheme.
Abstract: Service-oriented architectures (SOA) provide application systems the flexibility and cost savings of dynamically composing workflows from reusable services. However, current SOA frameworks do not provide support for real-time workflow planning and execution. The goal of the RT-Llama SOA middleware framework is to address these new requirements. It works both at the service level, by enhancing existing SOA middleware with service execution reservation capabilities, and at the end-to-end workflow level, by creating a distributed component infrastructure for deadline-based workflow composition. This paper focuses on the design and implementation of the Virtual CPU (VCPU) resource scheduling scheme in RT-Llama to achieve predictable process executions. We have created a prototype implementation of RT-Llama using the Sun Real-time JVM running on Solaris OS. Experiments consisting of real-world service applications show that requests with end-to-end deadlines can be admitted and completed before their deadlines with the VCPU scheme. We also show that service class differentiation can be achieved.
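
A highly simplified admission-style sketch of the reservation idea: a budgeted virtual CPU accepts a workflow only if the utilization implied by its steps and end-to-end deadline fits the remaining capacity. The actual RT-Llama VCPU scheduler is more involved; the proportional deadline split and all numbers below are assumptions.

```python
# Highly simplified admission sketch of the reservation idea: a budgeted virtual CPU
# accepts a workflow only if the utilization implied by its steps and its end-to-end
# deadline fits the remaining capacity. Not the actual RT-Llama VCPU scheduler.

class VCPU:
    def __init__(self, capacity=0.8):
        self.capacity = capacity          # fraction of a physical CPU given to this VCPU
        self.reserved = 0.0               # utilization already promised to admitted requests

    def admit(self, exec_time, relative_deadline):
        demand = exec_time / relative_deadline
        if self.reserved + demand <= self.capacity:
            self.reserved += demand       # reserve capacity for this service invocation
            return True
        return False

def admit_workflow(vcpu, step_times, end_to_end_deadline):
    """Split the end-to-end deadline across steps in proportion to their execution
    times; a real admission test would also roll back reservations on failure."""
    total = sum(step_times)
    return all(vcpu.admit(c, end_to_end_deadline * c / total) for c in step_times)

vcpu = VCPU(capacity=0.8)
print(admit_workflow(vcpu, step_times=[5, 10, 5], end_to_end_deadline=100))  # admitted
print(admit_workflow(vcpu, step_times=[5, 10, 5], end_to_end_deadline=40))   # rejected
```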

Proceedings ArticleDOI
28 Mar 2011
TL;DR: The paper proposes an efficient mapping technique that aims to minimize the amount of explicit feature annotations in the UML design model of the SPL, and is illustrated with an e-commerce case study that models structural and behavioural SPL views.
Abstract: Product derivation is an essential part of the Software Product Line (SPL) development process. The paper proposes a model transformation for automatically deriving a UML model of a specific product from the UML model of a product line. This work is part of a larger project aiming to integrate performance analysis into SPL model-driven development. The SPL source model is expressed in UML extended with two separate profiles: a "product line" profile from the literature for specifying the commonality and variability between products, and the MARTE profile recently standardized by OMG for performance annotations. The automatic derivation of a concrete product model based on a given feature configuration is enabled through the mapping between features from the feature model and their realizations in the design model. The paper proposes an efficient mapping technique that aims to minimize the amount of explicit feature annotations in the UML design model of the SPL. Implicit feature mapping is inferred during product derivation from the relationships between annotated and non-annotated model elements, as defined in the UML metamodel and well-formedness rules. The transformation is realized in the Atlas Transformation Language (ATL) and illustrated with an e-commerce case study that models structural and behavioural SPL views.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: A virtualization architecture for multi-core embedded systems is proposed that provides more system reliability and security while maintaining the same performance, without introducing additional special hardware support or implementing a complex protection mechanism in the virtualization layer.
Abstract: In this paper, we propose a virtualization architecture for multi-core embedded systems that provides more system reliability and security while maintaining the same performance, without introducing additional special hardware support or implementing a complex protection mechanism in the virtualization layer. Virtualization has been widely used in embedded systems, especially in consumer electronics, although it is not itself a new technique, because there are various needs for both GPOS (General Purpose Operating System) and RTOS (Real Time Operating System). The surge of multi-core platforms in embedded systems also helps the consolidation of virtualization systems thanks to their better performance and lower power consumption. Embedded virtualization design usually follows one of two approaches. The first is to use a traditional VMM, but this is too complicated for the embedded environment if there is no additional special hardware support. The other is to use a microkernel, which imposes a modular design. The guest systems, however, suffer a considerable amount of modification because the microkernel makes them run in user space. For some RTOSes and their applications that originally run in kernel space, this approach is even harder to apply because those code bases use many privileged instructions. To achieve better reliability and keep the virtualization layer lightweight, this work uses a common hardware component of multi-core embedded processors: on most embedded platforms, vendors provide additional on-chip local memory for each physical core, and these local memory areas are private to their cores. By taking advantage of this memory architecture, we can mitigate the above-mentioned problems at once. We choose to re-map the virtualization layer, called SPUMONE, which runs all its guest systems in kernel space, onto the local memory. Doing so provides additional reliability and security for the entire system because, unlike a traditional virtualization layer design, each SPUMONE instance on a multi-core platform is installed on a separate processor core and its contents are inaccessible to the others. We achieve this goal without adding any overhead to the overall performance.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: This paper proposes a distinct object cache for heap-allocated data that is highly associative in order to track symbolic object addresses in the static analysis, is statically analyzable, and improves the performance of the Java processor JOP.
Abstract: Static cache analysis for data allocated on the heap is practically impossible for standard data caches. We propose a distinct object cache for heap-allocated data. The cache is highly associative to track symbolic object addresses in the static analysis. Cache lines are organized to hold single objects, and individual fields are loaded on a miss. This cache organization is statically analyzable and improves performance. In this paper we present the design and implementation of the object cache in a uniprocessor and a chip-multiprocessor version of the Java processor JOP.
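
The sketch below is a software model of the organization described: a fully associative cache indexed by object handle, one line per object, with individual fields filled in on a miss. It illustrates the concept only; the JOP object cache itself is a hardware design, and the sizes and trace here are arbitrary.

```python
from collections import OrderedDict

# Conceptual software model of an object cache for heap data: fully associative,
# one cache line per object, individual fields filled in on a miss. Not the JOP
# hardware design; way count and the access trace are arbitrary.

class ObjectCache:
    def __init__(self, ways=16):
        self.ways = ways
        self.lines = OrderedDict()        # object handle -> {field_index: value}
        self.hits = self.misses = 0

    def read(self, heap, handle, field):
        line = self.lines.get(handle)
        if line is not None and field in line:
            self.hits += 1
        else:
            self.misses += 1
            if line is None:
                if len(self.lines) >= self.ways:       # evict least recently used line
                    self.lines.popitem(last=False)
                line = self.lines[handle] = {}
            line[field] = heap[handle][field]          # load only the missed field
        self.lines.move_to_end(handle)                 # mark line as recently used
        return line[field]

heap = {"obj1": [10, 20, 30], "obj2": [1, 2, 3]}
cache = ObjectCache(ways=2)
for handle, field in [("obj1", 0), ("obj1", 0), ("obj2", 1), ("obj1", 2)]:
    cache.read(heap, handle, field)
print("hits:", cache.hits, "misses:", cache.misses)    # 1 hit, 3 misses
```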

Proceedings ArticleDOI
28 Mar 2011
TL;DR: What some of those trade-off mechanisms are at both "design time" and during operations are discussed, some examples of how biological systems create mechanisms to support the fast resolution of trade-offs are presented, and a feasibility demonstration is presented by considering a very simple example of how such trade- off mechanisms can be implemented in the authors' SORT testbed.
Abstract: In order for a self-organizing real time (SORT) system to produce real time behavior that is "good enough," it must have the ability to trade off among competing performance metrics, of which time is only one. In this paper we discuss what some of those trade-offs are at both "design time" and during operations, present some examples of how biological systems create mechanisms to support the fast resolution of trade-offs, and then present a feasibility demonstration by considering a very simple example of how such trade-off mechanisms can be implemented in our SORT testbed.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: A Model Driven Engineering (MDE) approach is proposed that derives parametric WCET for composite components from the parametric WCET of their subcomponents, giving more accurate WCET estimates than naive additive compositional analysis by taking into account the usage context of components.
Abstract: Worst Case Execution Time (WCET) computation is crucial to the overall timing analysis of real-time embedded systems. Facing the ever increasing complexity of such systems, techniques dedicated to WCET analysis can take advantage of Component Based Software Engineering (CBSE) by decomposing a difficult problem into smaller pieces that are easier to analyse. To achieve this objective, the corresponding analysis results have to be composed to provide timing guarantees on the whole system. In this paper, we express the WCET of a component as a formula, allowing us to represent its different computational modes. We then propose a Model Driven Engineering (MDE) approach that derives parametric WCET for composite components from the parametric WCET of their subcomponents. This approach gives more accurate WCET estimates than naive additive compositional analysis by taking into account the usage context of components. However, analysis scalability concerns lead us to consider a trade-off between precision and scalability. This trade-off can be specified in the model. The composition of WCET estimations is automated and produces the parametric WCET expression of the composite component under analysis. This approach has been integrated in PRIDE.
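
A toy illustration of the underlying idea (the paper's MDE tooling and PRIDE integration are not reproduced): each component exposes its WCET as a formula over usage parameters, and a composite's WCET is assembled from the formulas of the subcomponents it calls, so a context-aware bound can be compared against a naive bound built from an assumed worst usage of each part. All formulas and constants are invented.

```python
# Toy illustration of composing parametric WCETs: each component exposes its WCET as
# a formula over usage parameters, and the composite's formula is assembled from the
# subcomponents it calls. All formulas and constants are invented for illustration.

def wcet_filter(n):          # subcomponent whose cost grows with buffer length n
    return 40 + 12 * n

def wcet_logger(mode):       # subcomponent with two computational modes
    return {"verbose": 300, "quiet": 25}[mode]

def wcet_controller(n, mode):
    """Composite WCET: sequential composition of the subcomponents it calls,
    parameterised by the composite's own usage context."""
    return 80 + wcet_filter(n) + wcet_logger(mode)

# Context-aware bound vs. a naive additive bound assuming each part's worst usage
print(wcet_controller(n=16, mode="quiet"))               # 337 for this usage context
print(80 + wcet_filter(64) + wcet_logger("verbose"))     # 1188 with assumed worst usage
```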

Proceedings ArticleDOI
28 Mar 2011
TL;DR: This work proposes a new scheme for self-organizing real-time service dissemination and collection using mobile agents in the mobile ad hoc network environment and conducts simulation experiments to show the performance of the proposed scheme with respect to the service dissemination time.
Abstract: A mobile ad hoc network consists only of mobile nodes, without access points or a central server. In the mobile ad hoc network environment, the set of nodes in the network and the network topology change frequently due to node appearance, node disappearance, and node movement, so it is not easy for users in the network to obtain the services provided in it. Without a service discovery scheme, it is therefore impossible to inform the users in the network about the service information provided in the network. However, since there are no central servers in the mobile ad hoc network environment, a self-organized mechanism must be devised to collect the service information in the network and disseminate it to the users. Besides, since the users and services in the network change frequently over time, these changes should be reflected as quickly as possible. Therefore, we propose a new scheme for self-organizing real-time service dissemination and collection using mobile agents in the mobile ad hoc network environment. Mobile agents migrate among nodes to collect the service information in the network and disseminate it to the users in the network. Finally, we conducted simulation experiments to show the performance of the proposed scheme with respect to the service dissemination time.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: The software infrastructure for CARS is described, based on the Wrapping approach to knowledge-based integration, including the autonomic agent infrastructure, i.e., the "enabling software" processes: health and status, local activity maintenance, and fault management.
Abstract: CARS (Computational Architecture for Reflective Systems) is a low-cost test bed for studying self-organization and real-time distributed behavior, using cars with on-board computers as autonomous agents, in an uncontrolled and largely unpredictable environment. This paper describes the software infrastructure for CARS, based on our Wrapping approach to knowledge-based integration. It allows us to share code between simulations for algorithm development and instrumented experiments with the real cars in a real environment. It also allows us to use many computational resources during algorithm development, and then to "compile-out" all resources that will not be needed, and all decision processes that have only one choice, in a given real environment. The instrumented experiment is run in parallel with the simulation, and the differences can be used to adjust the models. We describe the autonomic agent infrastructure, i.e., the "enabling software" processes: health and status, local activity maintenance, and fault management. These processes can be very resource-hungry in any agent, and our use of simulations allows us to study trade-offs directly between safety and capability in the agents, to tune the trade-off at deployment time, based on what we know or expect of the environment, and to monitor and change those assumptions when necessary.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: This work presents strategies for QoS management for IP multicast in tactical environments that provide information-level and user-level QoS and address the specific challenges of tactical radios (such as the lack of reliable capacity information).
Abstract: Wireless networking is moving toward the adoption of IP protocols and away from the multitude of special-purpose tactical radios traditionally in the hands of emergency personnel, military personnel, and law enforcement. The adoption of standards such as IP multicast has facilitated this. IP multicast also enables recovering some of the advantages of the broadcast medium when using IP in tactical environments. However, the traditional Quality of Service (QoS) approaches for IP multicast fall short of satisfying the stringent QoS requirements in tactical environments, which typically have single-hop, line-of-sight connections. The reasons for this are (1) QoS in IP networks, frequently based on Differentiated Services, relies on routers, which typically do not exist in tactical networks, to enforce the priorities, and (2) QoS for tactical users needs to be enforced at the information level, not the packet level, where the loss or delay of a single packet can invalidate an entire object of information. We present strategies for QoS management for IP multicast in tactical environments that provide information-level and user-level QoS and address the specific challenges of tactical radios (such as the lack of reliable capacity information). We present our solutions in the context of a tactical information broker that provides beyond-line-of-sight information management in a theater of operations.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: A new proposal for time synchronization in swarm robots is introduced which exploits the mobility of the robots to handle possible disconnections in the network and synchronizes them at the beginning of tracking time slots.
Abstract: Cooperation is a key concept used in multi-robot systems for performing complex tasks. In swarm robotics, self-organized cooperation is applied, where robots with limited intelligence cooperate and interact locally to build up the desired global behavior. In this paper, we study a mobile object tracking scenario performed by a swarm of robots. The robustness, scalability, and flexibility of swarm robotics make it an attractive approach for missions like object tracking in complex and dynamic environments. As the individual robot capabilities are limited in swarm systems, the robots may not be able to track the mobile object continuously. This limitation is overcome using the robots' communication capability. In order to increase the probability of object detection, we propose a greedy self-deployment strategy, where the robots are spread uniformly in the environment to be monitored. When a moving target is detected, the robots use a biologically inspired algorithm to recruit robots currently located in other regions to track the target. In such cooperative tasks the robots normally need to be time-synchronized for simultaneous activation. A new proposal for time synchronization in swarm robots is introduced which exploits the mobility of the robots to handle possible disconnections in the network and synchronizes them at the beginning of tracking time slots.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: This work presents a component-based hazard analysis that considers the physical properties of different types of flow in mechatronic systems and identifies reusable patterns for the failure behavior, which can be generated automatically, reducing the effort for the developer.
Abstract: One cannot imagine today's life without mechatronic systems, which have to be developed in a joint effort by teams of mechanical engineers, electrical engineers, control engineers, and software engineers. Often these systems are applied in safety-critical environments such as cars or aircraft. This requires systems that function correctly and do not cause hazardous situations. However, random errors due to wear or external influences cannot be completely excluded. Consequently, we have to perform a hazard analysis for the system. Further, the union of four disciplines in one system requires the development and analysis of the system as a whole. We present a component-based hazard analysis that considers the entire mechatronic system, including hardware, i.e. mechanical and electrical components, and software components. Our approach considers the physical properties of different types of flow in mechatronic systems. We have identified reusable patterns for the failure behavior, which can be generated automatically. This reduces the effort for the developer. As cycles, e.g. control cycles, are an integral part of every mechatronic system, our approach is able to handle cycles. The presented approach has been applied to a real-life case study.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: A self-organized distributed coordinator concept is proposed which performs the self-reconfiguration in case of a node failure, using redundant slots in the FlexRay schedule and the combination of messages in existing frames and slots to avoid a complete bus restart.
Abstract: In this paper we present an approach for the self-reconfiguration of FlexRay networks to increase their fault tolerance. We propose a self-organized distributed coordinator concept which performs the self-reconfiguration in case of a node failure, using redundant slots in the FlexRay schedule and the combination of messages in existing frames and slots to avoid a complete bus restart. The self-reconfiguration is realized by means of predetermined information about the resulting changes in the communication dependencies and (re-)assignments from an introduced heuristic, which determines initial configurations and, based on these, calculates valid reconfigurations for the remaining nodes of the FlexRay network. The distributed coordinator concept, implemented by lightweight tasks that do not consume any significant resources, uses this information and performs the reconfiguration of the FlexRay network at run time to increase the fault tolerance of the system. An evaluation by means of realistic safety-critical automotive real-time systems revealed that this reconfiguration approach determines valid reconfigurations for up to 80% of possible individual node failures and thereby offers applicable information for the self-reconfiguration approach. Furthermore, in an iterative design process these results can be improved to optimize the reconfigurations. The evaluation of our self-organized distributed coordinator concept and the comparison to a centrally organized solution with a dedicated coordinator prove its benefits regarding the additional hardware and communication overhead and the resulting reconfiguration time, which has a great impact on the fault tolerance of the FlexRay network.
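
A conceptual sketch of the run-time lookup step only: reconfigurations are precomputed per possible node failure, and each surviving node applies the entries assigned to it, either taking over a redundant slot or embedding the lost messages into one of its existing frames. The node, message, and slot names below are invented for illustration and the heuristic that produces such tables is not shown.

```python
# Conceptual sketch of the run-time lookup step of such a self-reconfiguration:
# reassignments are precomputed offline per possible node failure, and each surviving
# node applies only the entries assigned to it. All names and numbers are invented.

RECONFIG_TABLE = {
    "node_brake": [                          # if node_brake fails ...
        ("msg_wheel_speed", "node_engine", "take over redundant slot 12"),
        ("msg_brake_status", "node_gateway", "embed into existing frame 3"),
    ],
}

def reconfigure(failed_node, local_node):
    """Return the precomputed reconfiguration duties of local_node."""
    return [(msg, action)
            for msg, backup, action in RECONFIG_TABLE.get(failed_node, [])
            if backup == local_node]

print(reconfigure("node_brake", "node_engine"))   # duties of node_engine
```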

Proceedings ArticleDOI
28 Mar 2011
TL;DR: This paper presents another technique, based on a game board, which is simple to implement and uses native data structures; however, this simplicity comes at a performance cost which has been analyzed in this paper.
Abstract: A new, purely functional model of computation, called Priority-based Functional Reactive Programming (P-FRP), has been introduced as a new paradigm for building real-time software. P-FRP allows the assignment of static priorities to tasks and guarantees that, when a higher-priority task is released, the system will immediately preempt any lower-priority tasks that may be executing at the time. This execution model is different from the classical preemptive model of real-time systems due to the abort nature of preempted tasks. Methods developed for determining actual response time in the preemptive model are not guaranteed to work in P-FRP. In previous work, the gap-enumeration technique has been presented as a viable alternative to simulations for computing actual response time in P-FRP. Unfortunately, this method is difficult to implement due to its use of a red-black tree, which is not available as a built-in data structure in most programming languages. This method also requires a complex logic loop for finding idle periods. In this paper, we present another technique, based on a game board, which is simple to implement and uses native data structures. However, this simplicity comes at a performance cost, which has also been analyzed in this paper.
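
To make the abort semantics concrete, the tick-based simulation below (on a hypothetical task set, and not the paper's gap-enumeration or game-board method) discards the partial progress of a lower-priority job whenever a higher-priority job is released, and records the worst observed response times.

```python
# Tick-based simulation of the P-FRP abort-and-restart execution model: when a
# higher-priority task is released, an executing lower-priority job loses its partial
# progress and later re-executes from scratch. Task set is hypothetical; this is not
# the paper's gap-enumeration or game-board algorithm, only the execution model.

def simulate(tasks, horizon):
    """tasks: list of (period, wcet); index order is priority (0 = highest).
    Returns the worst observed response time per task over the horizon."""
    remaining = [0] * len(tasks)          # work left in the current job of each task
    released_at = [0] * len(tasks)        # release time of the current job
    worst = [0] * len(tasks)
    for t in range(horizon):
        for i, (period, wcet) in enumerate(tasks):
            if t % period == 0:           # new job of task i is released
                released_at[i], remaining[i] = t, wcet
                for j in range(i + 1, len(tasks)):
                    if 0 < remaining[j] < tasks[j][1]:
                        remaining[j] = tasks[j][1]   # abort: partial work is discarded
        for i in range(len(tasks)):       # run the highest-priority pending job this tick
            if remaining[i] > 0:
                remaining[i] -= 1
                if remaining[i] == 0:
                    worst[i] = max(worst[i], t + 1 - released_at[i])
                break
    return worst

print(simulate([(7, 2), (20, 4)], horizon=60))   # e.g. [2, 8] for this hypothetical set
```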

Proceedings ArticleDOI
28 Mar 2011
TL;DR: A refactoring of the Real-Time Specification for Java (RTSJ) and the Safety Critical Java (SCJ) profile (JSR-302) is presented, which results in cleaner class hierarchies with no superfluous methods and well-defined SCJ levels, thus making the profiles easier to comprehend and use for application developers and students.
Abstract: Just like other software, Java profiles benefit from refactoring when they have been used and have evolved for some time. This paper presents a refactoring of the Real-Time Specification for Java (RTSJ) and the Safety Critical Java (SCJ) profile (JSR-302). It highlights core concepts and makes the specification a suitable foundation for the proposed levels of SCJ. The ongoing work of specifying the SCJ profile builds on subclassing of RTSJ, which spurred our interest in a refactoring approach. The refactoring starts by extracting the common kernel of the specifications into a core package, which defines interfaces only. It is then possible to refactor SCJ with its three levels and RTSJ in such a way that each profile is in a separate package. This refactoring results in cleaner class hierarchies with no superfluous methods, well-defined SCJ levels, and the elimination of SCJ annotations like @SCJAllowed, thus making the profiles easier to comprehend and use for application developers and students.