
Showing papers presented at "International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing in 2017"


Proceedings ArticleDOI
01 May 2017
TL;DR: A component-based decentralized software platform called Resilient Information Architecture Platform for Smart Systems (RIAPS) which provides an infrastructure for such systems and focuses on the design and integration choices for a resilient Discovery Manager service that is a critical component of this infrastructure.
Abstract: The emerging Fog Computing paradigm provides an additional computational layer that enables new capabilities in real-time data-driven applications. This is especially interesting in the domain of Smart Grid as the boundaries between traditional generation, distribution, and consumer roles are blurring. This is a reflection of the ongoing trend of intelligence distribution in Smart Systems. In this paper, we briefly describe a component-based decentralized software platform called Resilient Information Architecture Platform for Smart Systems (RIAPS) which provides an infrastructure for such systems. We briefly describe some initial applications built using this platform. Then, we focus on the design and integration choices for a resilient Discovery Manager service that is a critical component of this infrastructure. The service allows applications to discover each other, work collaboratively, and ensure the stability of the Smart System.

61 citations


Proceedings ArticleDOI
16 May 2017
TL;DR: This paper presents an open-source DPR controller specially developed for hard real-time systems and prototyped in connection with the open-source multi-core platform for real-time applications T-CREST.
Abstract: In real-time systems, the use of hardware accelerators can lead to a worst-case execution-time speed-up, to a simplification of its analysis, and to a reduction of its pessimism. When using FPGA technology, dynamic partial reconfiguration (DPR) can be used to minimize the area, by only loading those accelerators that are needed at any given point in time. The DPR controllers provided by the FPGA vendors satisfy a wide range of requirements and rely on software to manage the reconfiguration. This approach may lead to slow reconfiguration and unpredictable timing. This paper presents an open-source DPR controller specially developed for hard real-time systems and prototyped in connection with the open-source multi-core platform for real-time applications T-CREST. The controller enables a processor to perform reconfiguration in a time-predictable manner and supports different operating modes. The paper also presents a software tool for bitstream conversion, compression, and reconfiguration time analysis. The DPR controller is evaluated in terms of hardware cost, operating frequency, speed, and bitstream compression ratio vs. reconfiguration time trade-off. A simple application example is also presented with the aim of showing the reconfiguration features of the controller.

21 citations


Proceedings ArticleDOI
16 May 2017
TL;DR: A new response time analysis is proposed that computes an upper bound on the lower-priority blocking that each task may incur with eager and lazy preemptions, and demonstrates that, although the eager approach generates a higher number of priority inversions, its blocking impact is generally smaller than in the lazy approach, leading to better schedulability performance.
Abstract: DAG-based scheduling models have been shown to effectively express the parallel execution of current many-core heterogeneous architectures. However, their applicability to real-time settings is limited by the difficulty of finding tight estimations of the worst-case timing parameters of tasks that may be arbitrarily preempted/migrated at any instruction. An efficient approach to increase the system predictability is to limit task preemptions to a set of pre-defined points. This limited preemption model supports two different preemption approaches, eager and lazy, which have been analyzed only for sequential task-sets. This paper proposes a new response time analysis that computes an upper bound on the lower-priority blocking that each task may incur with eager and lazy preemptions. We evaluate our analysis with both synthetic DAG-based task-sets and a real case study from the automotive domain. Results from the analysis demonstrate that, although the eager approach generates a higher number of priority inversions, its blocking impact is generally smaller than in the lazy approach, leading to better schedulability performance.
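The fixed-point recurrence at the heart of such response-time analyses can be sketched as follows. The blocking bound here is a deliberate simplification (the largest non-preemptive region of any lower-priority task), not the paper's refined eager/lazy bounds, and all names are illustrative:

```python
import math

def response_time(tasks, i):
    """Fixed-point response-time recurrence for task i under fixed priorities
    with limited preemption.  tasks: list of (C, T, Q) tuples sorted by
    decreasing priority, where C = WCET, T = period (= deadline), and
    Q = longest non-preemptive region.  Lower-priority blocking B_i is
    bounded by the largest non-preemptive region of any lower-priority
    task -- a simplification of the paper's eager/lazy bounds."""
    C, T, _ = tasks[i]
    B = max((q for (_, _, q) in tasks[i + 1:]), default=0)
    R = C + B
    while True:
        # interference from all higher-priority tasks released in [0, R)
        interference = sum(math.ceil(R / Tj) * Cj for (Cj, Tj, _) in tasks[:i])
        R_next = C + B + interference
        if R_next > T:        # deadline miss: task i is unschedulable
            return None
        if R_next == R:       # fixed point reached
            return R
        R = R_next
```

The iteration is monotone, so it either converges to the response time or exceeds the deadline.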

18 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: A platform that facilitates structural adaptation (SHSA) in CPS is proposed and its capabilities are demonstrated on an example from the automotive domain: a fault-tolerant system that estimates the state-of-charge (SoC) of the battery.
Abstract: Self-healing is an increasingly popular approach to ensure resiliency, that is, a proper adaptation to failures and attacks, in cyber-physical systems (CPS). A very promising way of achieving self-healing is through structural adaptation (SHSA), by adding and removing components, or even by changing their interaction, at runtime. SHSA has to be enabled and supported by the underlying platform, in order to minimize undesired interference during component exchange and to reduce the complexity of the application components. In this paper, we discuss architectural requirements and design decisions which enable SHSA in CPS. We propose a platform that facilitates structural adaptation and demonstrate its capabilities on an example from the automotive domain: a fault-tolerant system that estimates the state-of-charge (SoC) of the battery. The SHSA support of the SoC estimator is enhanced through an ontology that captures the interrelations among the components; this information is used at runtime for reconfiguration. Finally, we demonstrate the efficiency of our SHSA framework by deploying it in a real-world CPS prototype of a rover under sensor failure.

18 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: This work proposes a WCET timing model and analyzer based on a predictable GPU warp scheduling policy to enable the WCET estimation on GPUs.
Abstract: The capability of GPUs to accelerate general-purpose applications that can be parallelized into massive number of threads makes it promising to apply GPUs to real-time applications as well, where high throughput and intensive computation are also needed. However, due to the different architecture and programming model of GPUs, the worst-case execution time (WCET) analysis methods and techniques designed for CPUs cannot be used directly to estimate the WCET of GPUs. In this work, based on the analysis of the architecture and dynamic behavior of GPUs, we propose a WCET timing model and analyzer based on a predictable GPU warp scheduling policy to enable the WCET estimation on GPUs.

9 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: A low-cost, indoor localization and navigation system, which performs continuous and real-time processing of Bluetooth Low Energy (BLE) and IEEE 802.15.4a compliant Ultra-wideband (UWB) sensor data to localize and navigate the concerned entity to its desired location.
Abstract: Emerging smart services, such as indoor smart parking or patient monitoring and tracking in hospitals, face a significant technical roadblock: the lack of a cost-effective and easily deployable localization framework impedes their widespread deployment. To address this concern, in this paper we present a low-cost indoor localization and navigation system, which performs continuous and real-time processing of Bluetooth Low Energy (BLE) and IEEE 802.15.4a compliant Ultra-wideband (UWB) sensor data to localize and navigate the concerned entity to its desired location. Our approach depends upon fusing the two feature sets, using the UWB to calibrate the BLE localization mechanism.
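The calibration idea — using precise UWB ranges as ground truth to fit a BLE RSSI ranging model — can be sketched under the standard log-distance path-loss model. The function names and the least-squares formulation are assumptions for illustration, not the paper's actual fusion algorithm:

```python
import math

def calibrate_path_loss(samples, rssi_at_1m):
    """Fit the path-loss exponent n of the log-distance model
    RSSI = RSSI(1m) - 10 * n * log10(d), using UWB ranges as ground truth.
    samples: list of (rssi_dbm, uwb_distance_m) pairs.
    Returns the least-squares estimate of n (hypothetical calibration step)."""
    num = sum((rssi_at_1m - r) * 10 * math.log10(d)
              for r, d in samples if d > 1e-9)
    den = sum((10 * math.log10(d)) ** 2
              for _, d in samples if d > 1e-9)
    return num / den

def ble_distance(rssi_dbm, rssi_at_1m, n):
    """Invert the calibrated model to estimate distance from a BLE RSSI."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * n))
```

Once calibrated against UWB, the cheap, widely deployed BLE beacons alone can provide usable range estimates.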

8 citations


Proceedings ArticleDOI
16 May 2017
TL;DR: NEO is an end-to-end toolchain to automate cost-model generation for both WCET and WCEC analyses; the generated cost models are integrated into the state-of-the-art WCET analyzer PLATIN to statically determine upper bounds of benchmarks.
Abstract: Reliable and fine-grained cost-models are fundamental for real-time systems to statically predict worst-case execution time (WCET) estimates of program code in order to guarantee timeliness. Analogous considerations hold for energy-constrained systems where worst-case energy consumption (WCEC) values are mandatory to ensure meeting predefined energy budgets. These cost models are generally unavailable for commercial off-the-shelf (COTS) hardware platforms, although static worst-case analysis tools require those models in order to predict the WCET as well as the WCEC of program code. To solve this problem, we present NEO, an end-to-end toolchain to automate cost-model generation for both WCET and WCEC analyses. NEO exploits automatically generated benchmarks, which are input for 1) an instruction-level emulation and 2) automatically conducted execution-time and energy-consumption measurements on the target platform. The gathered values (i.e., occurrences per instruction, execution-time and energy-consumption per benchmark) are combined as mathematical optimization problems. The solutions to the formulated problems, which are designed to reveal the worst-case behavior, yield the respective cost models. To statically determine upper bounds of benchmarks, we integrated the cost models into the state-of-the-art WCET analyzer PLATIN. Our evaluations on COTS hardware reveal that our open-source, end-to-end toolchain NEO yields accurate worst-case bounds.
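The core idea of recovering per-instruction costs from per-benchmark measurements can be illustrated as a linear system (instruction counts × per-instruction costs = measured totals). NEO actually formulates worst-case-revealing optimization problems, so this exact-solve sketch (with invented names, assuming as many benchmarks as instruction types) only approximates the approach:

```python
def solve_cost_model(counts, measurements):
    """Recover per-instruction costs x from benchmark measurements:
    counts[k][i] = occurrences of instruction i in benchmark k,
    measurements[k] = measured time (or energy) of benchmark k.
    Solves counts @ x = measurements by Gauss-Jordan elimination,
    assuming a square, non-singular system."""
    n = len(counts)
    # build the augmented matrix [counts | measurements]
    A = [row[:] + [m] for row, m in zip(counts, measurements)]
    for col in range(n):
        # partial pivoting for numerical stability
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col and A[col][col]:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]
```

With more benchmarks than instruction types, the same idea becomes an over-determined system solved by least squares or, as in NEO, an optimization problem biased toward the worst case.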

7 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: An expert supervisory framework for HMIs (EYE-on-HMI) utilizing ViDAQ is envisioned that is scalable and extensible to various industrial applications, with prospective safety improvements for next-generation commercial automation, especially systems with "driverless" operational modes.
Abstract: A novel case for visual data acquisition (ViDAQ) as a non-intrusive, scalable and reliable means of monitoring Human Machine Interfaces (HMIs) is envisioned. ViDAQ is a step towards achieving real-time cross-validation of human operator commands with respect to HMI states in large-scale industrial control room environments. HMIs are integral in allowing human operators to safely command and monitor various critical processes, such as in nuclear power plants, commercial aviation, public transit vehicles, etc. However, HMI-related perceptual dynamics present a challenge to the designed safeguards against human-in-the-loop errors, which ultimately depend on operator situational awareness. We envision an expert supervisory framework for HMIs (EYE-on-HMI) utilizing ViDAQ that is scalable and extensible to various industrial applications, with prospective safety improvements for next-generation commercial automation, especially systems with "driverless" operational modes. To this end, we present the design, implementation and evaluation of ViDAQ for visually reading rotary multi-dial meters.

6 citations


Proceedings ArticleDOI
16 May 2017
TL;DR: A new method and hardware architecture to collect Execution Time Profiles (ETPs), which give much more insight into the execution-time behaviour on modern system-on-chip architectures than was previously available.
Abstract: Timing analysis in embedded systems has focused mainly on the Worst-Case Execution Time (WCET) in the past. This was (and still is) important to make guarantees for the application of the system in safety-critical environments. Today, two reasons call for a slightly changed perspective. Firstly, the complex and often unpredictable internal structure of modern system-on-chip architectures prohibits the calculation of realistic upper bounds for the WCET. Secondly, even if we can compute a realistic value for the WCET, the developer still does not know how the code under scrutiny behaves in general and whether it is useful or necessary to spend time on optimising this code. In this contribution, we present a new method and hardware architecture to collect Execution Time Profiles (ETPs), which give us much more insight into the execution-time behaviour on modern system-on-chip architectures than was previously available.
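A software analogue of an ETP is simply a histogram of observed execution times. This minimal sketch (names invented) shows the kind of distributional data the proposed hardware collects, as opposed to a single WCET bound:

```python
def execution_time_profile(samples, bucket_ns):
    """Build an Execution Time Profile: a histogram of observed execution
    times bucketed to bucket_ns, revealing the whole distribution rather
    than only the maximum.  Hypothetical software stand-in for the paper's
    hardware collector."""
    profile = {}
    for t in samples:
        b = (t // bucket_ns) * bucket_ns   # floor to bucket start
        profile[b] = profile.get(b, 0) + 1
    return profile
```

A developer can then see, for example, that 99% of runs fall in a low bucket and only optimise if the tail actually matters.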

5 citations


Proceedings ArticleDOI
16 May 2017
TL;DR: This paper proposes a time-predictable memory hierarchy with a prefetcher that exploits the predictability of execution traces in single-path code to speed up code execution.
Abstract: Deriving the Worst-Case Execution Time (WCET) of a task is a challenging process, especially for processor architectures that use caches, out-of-order pipelines, and speculative execution. Despite existing contributions to WCET analysis for these complex architectures, there are open problems. The single-path code generation overcomes these problems by generating time-predictable code that has a single execution trace. However, the simplicity of this approach comes at the cost of longer execution times. This paper addresses performance improvements for single-path code. We propose a time-predictable memory hierarchy with a prefetcher that exploits the predictability of execution traces in single-path code to speed up code execution. The new memory hierarchy reduces both the cache-miss penalty time and the cache-miss rate on the instruction cache. The benefit of the approach is demonstrated through benchmarks that are executed on an FPGA implementation.

4 citations


Proceedings ArticleDOI
16 May 2017
TL;DR: This paper proposes a model-driven approach for pacemaker design by combining the strengths of two well-known philosophies for safety-critical systems: the SCCharts synchronous language and a PRET architecture for the underlying processor, which has been modified to include reactive semantics.
Abstract: Implantable medical devices such as cardiac pacemakers have frequently been recalled due to safety-related issues. This paper proposes a model-driven approach for pacemaker design by combining the strengths of two well-known philosophies for safety-critical systems. First, we adopt the SCCharts synchronous language for pacemaker specification. Second, we adopt a PRET architecture for the underlying processor, which has been modified to include reactive semantics. PRET processors offer an ideal platform for providing timing guarantees. We use automatic code generation combined with static timing analysis during the design phase. Also, we use an existing emulation model of the human heart with a 33-node conduction network for closed-loop validation of the designed pacemaker.

Proceedings ArticleDOI
16 May 2017
TL;DR: A novel solution is proposed whereby a separate reference counting unit replaces the garbage collector, together with an implementation that includes a specialized memory arrangement allowing object-management transactions to occur concurrently with program execution, with no resultant impact upon program performance.
Abstract: Time predictability is a first-class requirement in safety-critical system design. Techniques exist for the timing analysis of programs designed in memory-managed languages, but these require detailed knowledge of memory allocation. Moreover, enforcing hard real-time guarantees for systems designed in such garbage-collected languages is difficult because of the so-called collection pause, however incremental. This paper proposes a novel solution whereby a separate reference counting unit replaces the garbage collector. We present the underlying concepts of the Reference Counting Memory Management Unit (RCMMU) and its use for garbage collection. In addition, we present an implementation that includes a specialized memory arrangement that allows object-management transactions to occur concurrently with program execution, with no resultant impact upon program performance. The RCMMU has been targeted for use with Java but can be adapted for use with other memory-managed languages. The hardware-implemented RCMMU removes all overhead associated with garbage collection operations when compared against a software-only collector. Additionally, it simplifies the static worst-case execution time analysis of programs.
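The reclamation model — counts updated on pointer writes, objects freed the instant their count reaches zero, with no collection pause — can be sketched in software. The class and method names are hypothetical, and reference cycles are deliberately not handled:

```python
class RefCountingUnit:
    """Minimal sketch of RCMMU-style reclamation: reference counts are
    maintained on pointer writes and an object is freed immediately when
    its count drops to zero, so there is never a collection pause.
    Cyclic structures are not handled in this sketch."""

    def __init__(self):
        self.counts = {}   # live object -> reference count
        self.freed = []    # objects reclaimed, in order

    def new(self, obj_id):
        """Allocate an object with one initial reference."""
        self.counts[obj_id] = 1

    def write_ref(self, old, new):
        """A pointer field is overwritten: the old referent loses a
        reference, the new referent gains one."""
        if new is not None:
            self.counts[new] += 1
        if old is not None:
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]
                self.freed.append(old)
```

In the paper this bookkeeping happens in hardware, concurrently with program execution; the sketch only illustrates the semantics.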

Proceedings ArticleDOI
16 May 2017
TL;DR: From this experience, it is concluded that the re-use of pre-validated code is a cost-effective approach to build realistic-behaviour reactive test components, with the main saving found in the verification of the test component itself.
Abstract: In this paper we present an approach to reduce the verification cost of distributed elevator control systems through embedded code re-use to co-simulate networked devices in a hardware-in-the-loop simulator: the CAN Restbus simulator. The approach is applied to a case study in the field of distributed control systems for elevator movement. We discuss the use cases for the CAN Restbus simulator, the functionality of the devices to simulate, the rationale for the development and integration approach, and the validation procedures of the implemented Restbus on the test bench. We describe the adaptations needed to cross-compile the legacy source code for the test platform processor, the shortcomings related to the programming style and tools, and the performance and integration issues arising when integrating in the test system. We also discuss the operational issues due to the limits of feasible synchronization. From this experience we conclude that the re-use of pre-validated code is a cost-effective approach to build realistic-behaviour reactive test components, with the main saving found in the verification of the test component itself. Finally, we offer an outlook on forthcoming developments of the overall test system.

Proceedings ArticleDOI
01 May 2017
TL;DR: A multi-mode P-FRP task framework and two particular scenarios that reflect such effects are presented, improving the performance of a developing commercial software platform; the multi-mode P-FRP task system shows significant schedulability improvements over the original P-FRP model.
Abstract: Functional Reactive Programming (FRP) provides an elegant way to express computation in domains such as interactive animations, robotics, computer vision, user interfaces, and simulation. Priority-based (preemptive) FRP (P-FRP), a variant of FRP with more real-time characteristics, demands research in its scheduling and timing analysis. Different from the classic preemptive model, in a P-FRP system, when a task is preempted, all changes made by the task are discarded, and after higher-priority tasks complete their execution the preempted task restarts from the beginning (abort-and-restart). P-FRP is thus able to capture changes of the task in time and provides an option other than the classic preemptive model in certain scenarios. In the P-FRP model, previous studies use the largest execution time of a task for all its restarted jobs. In practice, however, when considering the changing/unchanging inputs/outputs of the task or memory effects such as cache hits in loading code and data, the restarted jobs likely consume less time than the task's largest execution time. In this paper, for the first time we present a multi-mode P-FRP task framework and two particular scenarios for the framework that are able to reflect such effects and thereby improve the performance of a developing commercial software platform. We show that the multi-mode P-FRP task system has significant schedulability improvements over the original P-FRP model.
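The effect of abort-and-restart with mode-dependent re-run costs can be illustrated with a small completion-time simulation for one low-priority task. This is a sketch under simplifying assumptions (a single higher-priority cost, no overlapping higher-priority jobs), not the paper's schedulability analysis:

```python
def pfrp_response(first_cost, rerun_cost, hp_releases, hp_cost):
    """Completion time of a low-priority P-FRP task released at t = 0 under
    abort-and-restart: each higher-priority release before the task finishes
    discards all its work, and it restarts at rerun_cost (multi-mode: re-runs
    may be cheaper thanks to unchanged inputs or warm caches)."""
    t, cost = 0, first_cost
    for r in sorted(hp_releases):
        if r < t + cost:          # HP job arrives before completion: abort
            t = r + hp_cost       # restart only after the HP job finishes
            cost = rerun_cost     # subsequent runs use the re-run mode cost
        else:
            break
    return t + cost
```

Comparing the single-mode case (rerun_cost equal to first_cost) against a cheaper re-run mode shows why the multi-mode model admits tighter response times.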

Proceedings ArticleDOI
16 May 2017
TL;DR: A new computing method is proposed which breaks with tradition and builds equivalence between data sets of different sizes, and greatly improves the efficiency of data acquisition, compression and visualization, especially in scalable applications.
Abstract: Traditionally, equivalent data matrices or ordered data sets must be of the same size. We propose a new computing method which breaks with this tradition and builds equivalence between data sets of different sizes. By the new method, essential data elements can be extracted from a data set or data source without loss of information. And the full data set may be exactly recovered from its essential elements. Both extraction and recovery may be accomplished in real time. As such, the equivalence between data sets and their essential subsets can be validated by the invertible operations. This equivalence extends traditional concepts and methodologies in mathematics, data science and data processing engineering. It may greatly improve both the veracity and velocity of data analytics. The new method may be extensively applied to image and video technologies, medical imaging, remote sensing, wireless communication and so on. It greatly improves the efficiency of data acquisition, compression and visualization, especially in scalable applications, and offers a substantial improvement over existing methods. Demo software and testing data are downloadable at http://qualvisual.net.

Proceedings ArticleDOI
01 May 2017
TL;DR: This paper presents an approach for non-intrusively instrumenting standards-based event-based middleware, and applies CPM to applications implemented in the Common Object Request Broker Architecture (CORBA).
Abstract: This paper presents an approach for non-intrusively instrumenting standards-based event-based middleware. The approach has been realized in an open-source tool called Component Port Monitor (CPM). CPM uses dynamic binary instrumentation as a means to monitor events published between software components. This allows CPM to operate in contexts without any a priori knowledge of the concrete events in the system, or how the system is composed. We have applied CPM to applications implemented in the Common Object Request Broker Architecture (CORBA). Our results show that once the application is completely instrumented, the performance impact of actually monitoring events is minimal, requiring an additional 2.5 milliseconds.

Proceedings ArticleDOI
16 May 2017
TL;DR: This paper implements and evaluates five distinct synchronisation methods and analyses their run-time characteristics in-depth, and concludes that it is mandatory to consider the effects of process synchronisation for energy analysis and energy-efficiency optimisations.
Abstract: Advances in semiconductor technology greatly extend the scope of special-purpose applications as multi-core processors find their way into embedded systems. The increasing number of processor cores makes it more important than ever for real-time operating systems to process parallel threads in the most efficient way. In doing so, they have to pursue multiple (often conflicting) goals: namely, predictability with respect to both time and energy demand. In shared-memory multi-core systems, contention at critical sections makes it inevitable for the operating system to execute competing threads with highly efficient synchronisation methods. Related research has primarily focussed on timing aspects of synchronisation methods, while their energy efficiency has so far remained unexplored. In this paper, we implement and evaluate five distinct synchronisation methods and analyse their run-time characteristics (i.e. time and energy) in depth. We evaluate the overall demand at application level, and empirically prove that contention increases the energy demand significantly even when competing processes are temporarily suspended. Furthermore, the evaluation reveals that choosing the right synchronisation method can decrease the energy demand by more than a factor of 5. We come to the conclusion that it is mandatory to consider the effects of process synchronisation for energy analysis and energy-efficiency optimisations.

Proceedings ArticleDOI
01 May 2017
TL;DR: A new framework for testing components of an MPS for faults that involve the violation of any implicit user-intended receiving order of messages, which shows a 4x speedup over random testing with minimal run-time overhead and only a few kilobytes of memory overhead.
Abstract: In a message-passing system (MPS), components communicate through messages. However, both the time and order in which messages are delivered depend on the execution environment. The resulting nondeterminism may lead to concurrency defects such as message races, making it difficult to thoroughly test and debug an MPS. This paper presents a new framework for testing components of an MPS for faults that involve the violation of any implicit user-intended receiving order of messages. The framework purposefully assesses possible interleavings by reordering incoming messages before delivering them to the tested target component. We evaluate three methods to support the reordering process: the blocking method intercepts and blocks each message until all its dependencies have occurred, the buffering method buffers a message until either its dependencies are observed or a predefined timeout expires, and the adaptive buffering method dynamically adjusts its flushing period. All three methods are implemented inside QNX Neutrino, a popular embedded real-time operating system. The evaluation shows a 4x speedup over random testing with minimal run-time overhead and only a few kilobytes of memory overhead. These results confirm the effectiveness and capability of the framework to uncover faults in real-world applications.
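The buffering method can be sketched as follows: each message is held back until its user-intended predecessors have been delivered, forcing a chosen interleaving onto the component under test. The timeout flush and the QNX Neutrino integration are omitted, and all names are invented:

```python
def deliver_with_buffering(incoming, dependencies):
    """Reorder messages for a component under test.
    incoming: messages in arrival order.
    dependencies: msg -> set of msgs that must be delivered before it.
    A message is buffered until its dependencies have all been delivered
    (sketch of the buffering method without its timeout fallback)."""
    delivered, buffer = [], []
    for msg in incoming:
        buffer.append(msg)
        progress = True
        while progress:               # flush everything now deliverable
            progress = False
            for m in list(buffer):
                if dependencies.get(m, set()) <= set(delivered):
                    delivered.append(m)
                    buffer.remove(m)
                    progress = True
    return delivered
```

In the real framework a timeout bounds how long a message may sit in the buffer, so a wrong dependency guess cannot deadlock the test.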

Proceedings ArticleDOI
01 May 2017
TL;DR: A model-based compositional energy planning technique that computes a minimal ratio of processor frequency preserving schedulability of independent and preemptive tasks, and that outperforms the classical real-time calculus (RTC) method.
Abstract: Cyber-physical systems (CPS) demand energy-efficient design not only of hardware (HW), but also of software (SW). Dynamic Voltage and Frequency Scaling (DVFS) and Dynamic Power Management (DPM) are the most popular techniques to improve energy efficiency. However, contemporary complicated HW and SW designs require more elaborate and sophisticated energy management and efficiency evaluation techniques. This paper is concerned with energy supply planning for real-time scheduling systems (units) whose tasks need to meet deadlines. This paper presents a model-based compositional energy planning technique that computes a minimal ratio of processor frequency that preserves schedulability of independent and preemptive tasks. The minimal ratio of processor frequency can be used to plan the energy supply of real-time components. Our model-based technique is extensible by refining our model with additional features, so that energy management techniques and their energy efficiency can be evaluated by model checking techniques. We exploit the compositional framework for hierarchical scheduling systems and provide a new resource model for the frequency computation. As a result, our use-case for avionics software components shows that our new method outperforms the classical real-time calculus (RTC) method, requiring a 36.21% smaller frequency ratio on average for scheduling units under RM than the RTC method.
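The quantity being computed — the smallest frequency ratio that keeps a task set schedulable once WCETs are scaled by 1/f — can be approximated by binary search over an exact rate-monotonic response-time test. This sketch stands in for the paper's compositional, model-checking-based method; names and the numeric tolerances are illustrative:

```python
import math

def schedulable(tasks, freq_ratio):
    """Exact RM response-time test with WCETs scaled by 1/freq_ratio.
    tasks: (C, T) pairs sorted by increasing period, implicit deadlines."""
    for i, (C, T) in enumerate(tasks):
        R = C / freq_ratio
        while True:
            R_next = C / freq_ratio + sum(
                math.ceil(R / Tj) * (Cj / freq_ratio) for Cj, Tj in tasks[:i])
            if R_next > T:
                return False
            if abs(R_next - R) < 1e-9:   # fixed point reached
                break
            R = R_next
    return True

def minimal_frequency_ratio(tasks, eps=1e-4):
    """Binary-search the smallest frequency ratio in (0, 1] preserving
    schedulability, assuming the set is schedulable at full frequency."""
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if schedulable(tasks, mid):
            hi = mid
        else:
            lo = mid
    return hi
```

The resulting ratio directly bounds the energy supply a component needs, which is the planning use the abstract describes.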

Proceedings ArticleDOI
16 May 2017
TL;DR: This paper shows how a workload distributed among several server blades can be scheduled at a finer time scale than what a normal software implementation would allow, in order to minimize the makespan required to complete execution of sets of tasks.
Abstract: This paper proposes a generic hardware architecture for runtime acceleration of heterogeneous high-performance computing (HPC) clusters. This runtime accelerator performs real-time resource allocation and management of HPC systems with low latency on multiple time scales. One of the target applications is to perform the signal processing in wireless communication systems such as LTE and 5G over the cloud. A core part of this work is to develop and characterize algorithms that can distribute workloads to server blades in a balanced manner with the aim of maximizing processor utilization in computing clusters. Resources are also managed to guarantee bandwidth for data transfer between computing nodes and reserved cache memories to enable deterministic task execution. This paper shows how a workload distributed among several server blades can be scheduled at a finer time scale than what a normal software implementation would allow, in order to minimize the makespan required to complete execution of sets of tasks. A case study is conducted on the implementation of a resource allocator for the proposed platform. An acceleration factor of 760× for the resource allocation process has been achieved compared to a pure software implementation, while enabling data transfers at the nanosecond scale. This stands as a proof of concept that confirms the viability of CPU-FPGA platforms for wireless standards virtualization.
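A classic software baseline for this balancing problem is the Longest-Processing-Time-first heuristic: assign each task, largest first, to the currently least-loaded blade. The sketch below illustrates the goal (minimize makespan), not the paper's hardware allocator:

```python
import heapq

def distribute(tasks, n_blades):
    """LPT heuristic: sort task costs descending and greedily assign each
    to the least-loaded blade (a min-heap keyed on current load).
    Returns the per-blade assignment and the resulting makespan."""
    heap = [(0, b) for b in range(n_blades)]   # (load, blade id)
    heapq.heapify(heap)
    assignment = {b: [] for b in range(n_blades)}
    for t in sorted(tasks, reverse=True):
        load, b = heapq.heappop(heap)          # least-loaded blade
        assignment[b].append(t)
        heapq.heappush(heap, (load + t, b))
    makespan = max(load for load, _ in heap)
    return assignment, makespan
```

LPT is within 4/3 of the optimal makespan in the worst case; the paper's contribution is doing this kind of allocation in hardware, fast enough for per-subframe wireless deadlines.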

Proceedings ArticleDOI
01 May 2017
TL;DR: This paper presents a new Web Server platform that has embedded specific features for the freemium business model, assisting Web startups to reduce costs in infrastructure, operations and software customization.
Abstract: The evolution of the Internet has brought new ways of doing business, with more flexibility and collaboration between customers and service providers. Internet technology providers have also evolved, providing infrastructure, platforms and software as services. However, current cloud platforms do not offer the necessary features for each of these business models, as is the case with the freemium model and its profile-differentiation feature (free and paid tiers), which forces many Web startups to spend money on similar software customization. This paper presents a new Web Server platform that embeds specific features for the freemium business model, assisting Web startups in reducing costs in infrastructure, operations and software customization.

Proceedings ArticleDOI
01 May 2017
TL;DR: The communication load of the self-organization techniques of a bio-inspired system is determined, and an evaluation is conducted using a flexible DNA-controlled robot vehicle.
Abstract: Embedded systems are growing more and more complex because of the increasing chip integration density, larger number of chips in distributed applications and demanding application fields (e.g. in cars and in households). Bio-inspired techniques like self-organization are a key feature to handle this complexity. The self-organization process needs a guideline for setting up and managing the system. In biology the structure and organization of a system is coded in its DNA. This concept can be adapted to embedded systems. Since many embedded systems can be composed from a limited number of basic elements, the structure and parameters of such systems can be stored in a compact way representing an artificial DNA deposited in each processor core. Based on the DNA, the self-organization mechanisms can build the system autonomously providing a self-building system. System repair and optimization at runtime are also possible, leading to higher robustness, dependability and flexibility. However, these properties introduce a certain amount of communication load. In this paper we determine the communication load and conduct an evaluation using a flexible DNA-controlled robot vehicle.

Proceedings ArticleDOI
01 May 2017
TL;DR: A linked-list-based method is proposed for solving the problem of finding the starting time point of the minimal schedulability interval for fixed-priority independent periodic real-time preemptive tasks with arbitrary given release offsets (phasing).
Abstract: The minimal schedulability interval is an important consideration in both research and practice. In this paper, we investigate the problem of finding the starting time point of the minimal schedulability interval for fixed-priority independent periodic real-time preemptive tasks with arbitrary given release offsets (phasing). A linked-list-based method is proposed for solving the problem. Each node in the linked list represents a pending-less busy period. Analysis and experimental results show that the linked-list-based method outperforms the current best acyclic-idle-slot-based one.

Proceedings ArticleDOI
Zhou Jiqin, Weigong Zhang, Keni Qiu, Ruiying Bai, Wang Ying, Zhu Xiaoyan
01 May 2017
TL;DR: This paper takes advantage of the characteristics of UM-BUS, a novel serial bus with the capability of multi-lane concurrent transmissions, and investigates the scheduling problem of reducing the deviation from the expected completion time of messages.
Abstract: In real-time embedded systems, periodic messages need to be transmitted at the expected time because of timing-sensitive requirements. In this paper, we take advantage of the characteristics of UM-BUS, a novel serial bus with the capability of multi-lane concurrent transmissions, and investigate the scheduling problem of reducing the deviation from the expected completion time of messages. By configuring different lanes to change the bus utilization, two sets of experiments were implemented to evaluate the effectiveness of the proposed algorithm. The results show that the heuristic algorithm works effectively and can achieve a deviation within 1.52%, which is significantly smaller than that of existing scheduling algorithms.

Proceedings ArticleDOI
16 May 2017
TL;DR: D-RMTP I is developed, which has one Responsive Multithreaded Processing Unit with an 8-way prioritized Simultaneous Multithreading architecture; experimental results show that Responsive Tasks outperform real-time tasks with respect to wake-up overhead, response time, and jitter.
Abstract: Humanoid robots are a typical application of real-time systems and require timing constraints, low latency, and parallel/distributed processing to achieve fine-grained real-time execution. Therefore, we have developed the Dependable Responsive Multithreaded Processor I (D-RMTP I), which has one Responsive Multithreaded Processing Unit with an 8-way prioritized Simultaneous Multithreading architecture. In addition, D-RMTP I has Responsive Link, a communication standard specified in ISO/IEC 24740:2008 for distributed real-time systems. Our previous work presented the Responsive Task, a low-latency real-time task with an interrupt wake-up structure on D-RMTP I. Unfortunately, the Responsive Task does not support real-time communication. We present a Responsive Task for real-time communication using Responsive Link, which can be woken up with low latency by an external interrupt when Responsive Link packets are received. Experimental results using Responsive Link show that Responsive Tasks outperform real-time tasks with respect to wake-up overhead, response time, and jitter.