
Showing papers in "Real-time Systems in 1998"


Journal ArticleDOI
TL;DR: The worst case achievable utilization for homogeneous multiprocessor systems is between n(2^{1/2}-1) and (n+1)/(1+2^{1/(n+1)}), where n stands for the number of processors; practicality of the lower bound is demonstrated by proving it can be achieved using a First Fit scheduling algorithm.
Abstract: We consider the schedulability of a set of independent periodic tasks under fixed priority preemptive scheduling on homogeneous multiprocessor systems. Assuming there is no task migration between processors and each processor schedules tasks preemptively according to fixed priorities assigned by the Rate Monotonic policy, the scheduling problem reduces to assigning the set of tasks to disjoint processors in such a way that the schedulability of the tasks on each processor can be guaranteed. In this paper we show that the worst case achievable utilization for such systems is between n(2^{1/2}-1) and (n+1)/(1+2^{1/(n+1)}), where n stands for the number of processors. The lower bound represents 41 percent of the total system capacity and the upper bound represents 50 to 66 percent depending on n. Practicality of the lower bound is demonstrated by proving it can be achieved using a First Fit scheduling algorithm.
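
The First Fit idea can be sketched numerically. The following is an illustrative partitioner that admits a task to the first processor on which the single-processor Liu-Layland bound still holds, together with the paper's stated utilization bounds; all function names are invented here, and this is a sketch of the approach, not the authors' exact algorithm or proof.

```python
def ll_bound(k):
    """Liu-Layland bound for k tasks on one processor: k(2^(1/k) - 1)."""
    return k * (2 ** (1 / k) - 1)

def first_fit_rm(tasks, n_procs):
    """Assign tasks = [(C, T), ...] to the first processor whose
    Liu-Layland bound still holds after adding the task.
    Returns the per-processor bins, or None if some task fits nowhere."""
    bins = [[] for _ in range(n_procs)]
    for c, t in tasks:
        for b in bins:
            if sum(ci / ti for ci, ti in b) + c / t <= ll_bound(len(b) + 1):
                b.append((c, t))
                break
        else:
            return None
    return bins

def lower_bound(n):
    """Worst-case achievable utilization, lower end: n(2^(1/2) - 1)."""
    return n * (2 ** 0.5 - 1)

def upper_bound(n):
    """Worst-case achievable utilization, upper end: (n+1)/(1 + 2^(1/(n+1)))."""
    return (n + 1) / (1 + 2 ** (1 / (n + 1)))
```

Per processor, lower_bound(n)/n is always about 41 percent, while upper_bound(n)/n is about 66 percent at n = 2 and falls towards 50 percent as n grows, matching the figures in the abstract.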

172 citations


Journal ArticleDOI
TL;DR: A real-time behavioral model for control applications is presented; it illuminates key execution strategies to ensure the required timing behavior, and implications for design and implementation and directions for further work are discussed.
Abstract: Automatic control applications are real-time systems which pose stringent requirements on precisely time-triggered synchronized actions and constant end-to-end delays in feedback loops which often constitute multi-rate systems. Motivated by the apparent gap between computer science and automatic control theory, a set of requirements for real-time implementation of control applications is given. A real-time behavioral model for control applications is then presented and exemplified. Important sources and characteristics of time-variations in distributed computer systems are investigated. This illuminates key execution strategies to ensure the required timing behavior. Implications on design and implementation and directions for further work are discussed.

164 citations


Journal ArticleDOI
TL;DR: Experiments using evolutionary testing on a number of programs with up to 1511 LOC and 5000 input parameters have successfully identified new longer and shorter execution times than had been found using other testing techniques, and evolutionary testing seems to be a promising approach for the verification of timing constraints.
Abstract: Many industrial products are based on the use of embedded computer systems. Usually, these systems have to fulfil real-time requirements, and correct system functionality depends on their logical correctness as well as on their temporal correctness. In order to verify the temporal behavior of real-time systems, previous scientific work has, to a large extent, concentrated on static analysis techniques. Although these techniques offer the possibility of providing safe estimates of temporal behavior for certain cases, there are a number of cases in practice for which static analysis cannot be easily applied. Furthermore, no commercial tools for timing analysis of real-world programs are available. Therefore, the developed systems have to be thoroughly tested in order to detect existing deficiencies in temporal behavior, as well as to strengthen the confidence in temporal correctness. An investigation of existing test methods shows that they mostly concentrate on testing for logical correctness. They are not specialised in the examination of temporal correctness which is also essential to real-time systems. For this reason, existing test procedures must be supplemented by new methods which concentrate on determining whether the system violates its specified timing constraints. Normally, a violation means that outputs are produced too early, or their computation takes too long. The task of the tester therefore is to find the input situations with the longest or shortest execution times, in order to check whether they produce a temporal error. If the search for such inputs is interpreted as a problem of optimization, evolutionary computation can be used to automatically find the inputs with the longest or shortest execution times. This automatic search for accurate test data by means of evolutionary computation is called evolutionary testing.
Experiments using evolutionary testing on a number of programs with up to 1511 LOC and 5000 input parameters have successfully identified new longer and shorter execution times than had been found using other testing techniques. Evolutionary testing, therefore, seems to be a promising approach for the verification of timing constraints. A combination of evolutionary testing and systematic testing offers further opportunities to improve the test quality, and could lead to an effective test strategy for real-time systems.
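
The search loop behind evolutionary testing might look like the sketch below, where the fitness callable stands in for the measured execution time of the system under test. Every name and parameter here is illustrative, not taken from the paper; the toy fitness at the end merely demonstrates that the loop converges.

```python
import random

def evolutionary_search(fitness, dim, lo, hi, pop_size=20, generations=40):
    """Evolve input vectors that maximize a fitness score.  In evolutionary
    testing, fitness would be the measured execution time of the system
    under test; here it is any callable on a list of floats."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):    # mutate random parents
            child = [g + random.gauss(0, 0.1 * (hi - lo))
                     for g in random.choice(parents)]
            children.append([min(hi, max(lo, g)) for g in child])
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in for "longest execution time": fitness grows with each input.
random.seed(1)
best = evolutionary_search(lambda x: sum(x), dim=5, lo=0.0, hi=10.0)
```

For timing tests, the fitness call would wrap the program under test with a high-resolution clock; minimizing instead of maximizing finds the shortest execution times.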

139 citations


Journal ArticleDOI
TL;DR: This paper describes a recovery scheme which can be used to re-execute tasks in the event of single and multiple transient faults, and derives schedulability bounds for sets of real-time tasks given the desired level of fault tolerance for each task or subset of tasks.
Abstract: Due to the critical nature of the tasks in hard real-time systems, it is essential that faults be tolerated. In this paper, we present a scheme which can be used to tolerate faults during the execution of preemptive real-time tasks. We describe a recovery scheme which can be used to re-execute tasks in the event of single and multiple transient faults and discuss conditions that must be met by any such recovery scheme. We then extend the original Rate Monotonic Scheduling (RMS) scheme and the exact characterization of RMS to provide tolerance for single and multiple transient faults. We derive schedulability bounds for sets of real-time tasks given the desired level of fault tolerance for each task or subset of tasks. Finally, we analyze and compare those bounds with existing bounds for non-fault-tolerant and other variations of RMS.
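
As a rough illustration of this kind of analysis, the recurrence below reserves time for a given number of re-executions using the largest computation time among tasks of equal or higher priority. This is a Burns/Punnekkat-style fault-tolerant response-time formulation, not this paper's exact bounds, and the function name is invented for the sketch.

```python
import math

def ft_response_time(tasks, faults=1):
    """Fixed-priority response-time test with slack reserved to re-execute
    after transient faults: each response time adds `faults` re-executions
    of the largest computation time at equal or higher priority.
    tasks = [(C, T), ...] in descending priority order, deadline = period."""
    for i, (c_i, t_i) in enumerate(tasks):
        recovery = faults * max(c for c, _ in tasks[: i + 1])
        r = c_i + recovery
        while r <= t_i:
            r_next = c_i + recovery + sum(
                math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            if r_next == r:
                break
            r = r_next
        if r > t_i:
            return False
    return True
```

With tasks (C, T) = (1, 4), (1, 5), (2, 10) and one tolerated fault, the lowest-priority response time grows from 4 to 8 but still meets its deadline, so the set remains schedulable under recovery.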

110 citations


Journal ArticleDOI
TL;DR: It is found that the schedulability offered by parallelizable task scheduling is always higher than that of the EDF algorithm for a wide variety of task parameters and the storage overhead incurred by it is less than 3.6% of the static table-driven approach even under heavy task loads.
Abstract: In a parallelizable task model, a task can be parallelized and the component tasks can be executed concurrently on multiple processors. We use this parallelism in tasks to meet their deadlines and also obtain better processor utilisation compared to non-parallelized tasks. Non-preemptive parallelizable task scheduling combines the advantages of higher schedulability and lower scheduling overhead offered by the preemptive and non-preemptive task scheduling models, respectively. We propose a new approach to maximize the benefits from task parallelization. It involves checking the schedulability of periodic tasks (if necessary, by parallelizing them) off-line and run-time scheduling of the schedulable periodic tasks together with dynamically arriving aperiodic tasks. To avoid the run-time anomaly that may occur when the actual computation time of a task is less than its worst case computation time, we propose efficient run-time mechanisms. We have carried out extensive simulation to study the effectiveness of the proposed approach by comparing the schedulability offered by it with that of dynamic scheduling using Earliest Deadline First (EDF), and by comparing its storage efficiency with that of the static table-driven approach. We found that the schedulability offered by parallelizable task scheduling is always higher than that of the EDF algorithm for a wide variety of task parameters and the storage overhead incurred by it is less than 3.6% of the static table-driven approach even under heavy task loads.

76 citations


Journal ArticleDOI
TL;DR: This tutorial acts as a guide to the major tests available for preemptive multitasking applications, using schedulability tests to formally prove that a given task set will meet its deadlines.
Abstract: When developing multitasking real-time systems, schedulability tests are used to formally prove that a given task set will meet its deadlines. A wide range of such tests have appeared in the literature. This tutorial acts as a guide to the major tests available for preemptive multitasking applications.
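
Two of the most widely taught such tests, given here as textbook background rather than as this tutorial's specific content, are the Liu-Layland utilization bound (sufficient) and response-time analysis (exact for fixed priorities with deadlines equal to periods):

```python
import math

def rm_utilization_test(tasks):
    """Liu-Layland sufficient test for Rate Monotonic: tasks = [(C, T), ...]."""
    n = len(tasks)
    return sum(c / t for c, t in tasks) <= n * (2 ** (1 / n) - 1)

def response_time_test(tasks):
    """Exact test for fixed-priority preemptive scheduling with deadlines
    equal to periods; tasks listed in descending priority order.  Iterates
    R_i = C_i + sum_j ceil(R_i / T_j) * C_j over higher-priority tasks j."""
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while r <= t_i:
            r_next = c_i + sum(math.ceil(r / t_j) * c_j
                               for c_j, t_j in tasks[:i])
            if r_next == r:
                break
            r = r_next
        if r > t_i:
            return False
    return True
```

The harmonic set (1, 2), (1, 3), (1, 6) has utilization 1.0, so it fails the sufficient bound yet passes the exact test, which is why surveys of schedulability tests present both kinds.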

65 citations


Journal ArticleDOI
TL;DR: This paper describes an approach to the specification and schedulability analysis of real-time systems based on the timed process algebra ACSR-VP, which is an extension of ACSR with value-passing communication and dynamic priorities that is capable of specifying a variety of real time systems with different scheduling disciplines in a modular fashion.
Abstract: To engineer reliable real-time systems, it is desirable to detect timing anomalies early in the development process. However, there is little work addressing the problem of accurately predicting timing properties of real-time systems before implementations are developed. This paper describes an approach to the specification and schedulability analysis of real-time systems based on the timed process algebra ACSR-VP, which is an extension of ACSR with value-passing communication and dynamic priorities. Combined with the existing features of ACSR for representing time, synchronization and resource requirements, ACSR-VP is capable of specifying a variety of real-time systems with different scheduling disciplines in a modular fashion. Moreover, we can use VERSA, a toolkit we have developed for ACSR, to perform schedulability analysis on real-time systems specified in ACSR-VP automatically by checking for a certain bisimulation relation.

44 citations


Journal ArticleDOI
TL;DR: A new necessary and sufficient condition for a given task system to be feasible is presented, and a new feasibility decision algorithm based on that condition is proposed whose time complexity depends solely on the number of tasks.
Abstract: Rate monotonic and deadline monotonic scheduling are commonly used for periodic real-time task systems. This paper discusses a feasibility decision for a given real-time task system when the system is scheduled by rate monotonic and deadline monotonic scheduling. The time complexity of existing feasibility decision algorithms depends on both the number of tasks and maximum periods or deadlines when the periods and deadlines are integers. This paper presents a new necessary and sufficient condition for a given task system to be feasible and proposes a new feasibility decision algorithm based on that condition. The time complexity of this algorithm depends solely on the number of tasks. This condition can also be applied as a sufficient condition for a task system using priority inheritance protocols to be feasible with rate monotonic and deadline monotonic scheduling.
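
For contrast, the classical exact test whose cost grows with the task periods (the dependence this paper removes) is the scheduling-points test; the sketch below is standard background, not the paper's new condition.

```python
import math

def exact_rm_test(tasks):
    """Lehoczky/Sha/Ding scheduling-points test (tasks in descending priority
    order, deadlines equal to periods): task i is feasible iff its demand
    fits at some multiple of a higher-priority period no later than T_i.
    The candidate-point set grows with the magnitude of the periods."""
    for i, (c_i, t_i) in enumerate(tasks):
        points = {k * t_j for _, t_j in tasks[: i + 1]
                  for k in range(1, math.floor(t_i / t_j) + 1)}
        if not any(sum(math.ceil(t / t_j) * c_j
                       for c_j, t_j in tasks[: i + 1]) <= t for t in points):
            return False
    return True
```

Because the inner set enumerates every multiple of every period up to T_i, integer task sets with large periods make this test expensive, which motivates a condition whose cost depends only on the number of tasks.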

43 citations


Journal ArticleDOI
TL;DR: The presented evaluation shows that the Slack Method is superior to list-processing-based approaches with regard to both finding more feasible solutions and finding solutions with better objective function values.
Abstract: This article presents and evaluates the Slack Method, a new constructive heuristic for the allocation (mapping) of periodic hard real-time tasks to multiprocessor or distributed systems. The Slack Method is based on task deadlines, in contrast with other constructive heuristics, such as List Processing. The presented evaluation shows that the Slack Method is superior to list-processing-based approaches with regard to both finding more feasible solutions and finding solutions with better objective function values. In a comparative survey we evaluate the Slack Method against several alternative allocation techniques. This includes comparisons with optimal algorithms, non-guided search heuristics (e.g. Simulated Annealing), and other constructive heuristics. The main practical result of the comparison is that a combination of non-guided search and constructive approaches is shown to perform better than either of them alone, especially when using the Slack Method.

27 citations


Journal ArticleDOI
TL;DR: A dynamic scheduling approach using on-line QoS adjustment and multiresource preemption and a primal-dual-algorithm-based approximation solution is shown to be comparable to the linear-programming-based solution, which is near optimal, and to outperform a criticality-cognitive baseline algorithm.
Abstract: This paper presents design, analysis, and implementation of a multiresource management system that enables criticality- and QoS-based resource negotiation and adaptation for mission-critical multimedia applications. With the goal of maximizing the number of high-criticality multimedia streams and the degree of their QoS, it introduces a dynamic scheduling approach using on-line QoS adjustment and multiresource preemption. An integrated multiresource management infrastructure and a set of scheduling algorithms for multiresource preemption and on-line QoS adjustment are presented. The optimality and execution efficiency of two preemption algorithms are analyzed. A primal-dual-algorithm-based approximation solution is shown (1) to be comparable to the linear-programming-based solution, which is near optimal; (2) to outperform a criticality-cognitive baseline algorithm; and (3) to be feasible for on-line scheduling. In addition, the dynamic QoS adjustment scheme is shown to greatly improve the quality of service for video streams. The multiresource management system is part of the Presto multimedia system environment prototyped at Honeywell for mission-critical applications.

26 citations


Journal ArticleDOI
TL;DR: This paper proposes a framework in which both static and dynamic costs of transactions can be taken into account, and presents a method for pre-analyzing transactions based on the notion of branch-points for data accessed up to a branch point and predicting expected data access for completing the transaction.
Abstract: Real-time databases are poised to be an important component of complex embedded real-time systems. In real-time databases (as opposed to real-time systems), transactions must satisfy the ACID properties in addition to satisfying the timing constraints specified for each transaction (or task). Although several approaches have been proposed to combine real-time scheduling and database concurrency control methods, to the best of our knowledge, none of them provide a framework for taking into account the dynamic cost associated with aborts, rollbacks, and restarts of transactions. In this paper, we propose a framework in which both static and dynamic costs of transactions can be taken into account. Specifically, we present: i) a method for pre-analyzing transactions based on the notion of branch-points for data accessed up to a branch point and predicting expected data access to be incurred for completing the transaction, ii) a formulation of cost that includes static and dynamic factors for prioritizing transactions, iii) a scheduling algorithm which uses the above two, and iv) simulation of the algorithm for several operating conditions and workloads. Our dynamic priority assignment policy (termed the cost conscious approach or CCA) adapts well to fluctuations in the system load without causing excessive numbers of transaction restarts. Our simulations indicate that i) CCA performs better than the EDF-HP algorithm for both soft and firm deadlines, ii) CCA is more fair than EDF-HP, iii) CCA is better than EDF-CR for soft deadlines, even though CCA requires and uses less information, and iv) CCA is especially good for disk-resident data.

Journal ArticleDOI
TL;DR: This paper presents an automated aggregate approach to software meta-control, appropriate for large-scale distributed real-time systems, and introduces two very effective approximation algorithms, QDP and GG, with very reasonable polynomial time complexity.
Abstract: The software meta-controller is an online agent responsible for dynamically adapting an application's software configuration, e.g. altering operational modes and migrating tasks, to best accommodate varying runtime circumstances. In distributed real-time applications such adaptations must be carried out in a manner which maintains the schedulability of all critical tasks while maximizing some notion of system value for all other tasks. For large-scale real-time applications, considering all possible adaptations at the task-level is computationally intractable. This paper presents an automated aggregate approach to software meta-control, appropriate for large-scale distributed real-time systems. The aggregate automated meta-control problem is still NP-hard, but it has very practical approximate solutions. Introduced here are two very effective approximation algorithms, QDP and GG, with very reasonable polynomial time complexity. Both algorithms also provide us with upper bounds for optimum system values, useful for deriving absolute, albeit somewhat pessimistic, measures of actual performance. Extensive Monte Carlo analysis is used to illustrate that expected performance for both algorithms is generally suboptimal by no more than a few percent. Our flexible software meta-control model is also shown to be readily applied to a wide range of time-sensitive applications.

Journal ArticleDOI
TL;DR: It is shown how estimates of process run-times necessary for schedulability analysis can be acquired on the basis of deterministic behavior of the hardware platform.
Abstract: Although the domain of hard real-time systems has been thoroughly elaborated in the academic sphere, embedded computer control systems, an important component in mechatronic designs, are seldom dealt with consistently. Often, off-the-shelf computer systems are used, with no guarantee that they will be able to meet the requirements specified. In this paper, a design for embedded control systems is presented. Particularly, the paper deals with the hardware architecture and design details, the operating system, and high-level real-time language support. It is shown how estimates of process run-times necessary for schedulability analysis can be acquired on the basis of deterministic behavior of the hardware platform.

Journal ArticleDOI
TL;DR: Results show that for representative real-time workloads, simple low-overhead dispatchers perform nearly as well as a complex “minimally stabilized” dispatcher, which may be unnecessary, or even detrimental, due to their high computational overhead.
Abstract: Non-preemptive static priority list scheduling is a simple, low-overhead approach to scheduling precedence-constrained tasks in real-time multiprocessor systems. However, it is vulnerable to anomalous timing behavior caused by variations in task durations. Specifically, reducing the duration of one task can delay the starting time of another task. This phenomenon, called Scheduling Instability, can make it difficult or impossible to guarantee real-time deadlines. Several heuristic solutions to handle scheduling instability have been reported. This paper addresses three main limitations in the state of the art in schedule stabilization. First, each stabilization technique only applies to a narrowly defined class of systems. To alleviate this constraint, we present an Extended Scheduling Model encompassing a wide range of assumptions about the scheduling environment, and address the stability problem under this model. Second, existing stabilization methods are heuristics based on a partial understanding of the causes of instability. We therefore derive a set of General Instability Conditions which are both necessary and sufficient for instability to occur. Third, solutions to scheduling instability range from trivial constraints on the run-time dispatcher through complex transformations of the precedence graph. We present scheduling simulation results comparing the average performance of several inherently stable run-time dispatchers of widely varying levels of complexity. Results show that for representative real-time workloads, simple low-overhead dispatchers perform nearly as well as a complex "minimally stabilized" dispatcher. Thus, complex schedule stabilization methods may be unnecessary, or even detrimental, due to their high computational overhead.
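
The anomaly itself is easy to reproduce with a small list scheduler. The instance below is Graham's classic example (standard background, not taken from this paper): on three processors, shortening every task by one unit increases the makespan from 12 to 13.

```python
def list_schedule(durations, preds, order, m):
    """Non-preemptive static priority list scheduling on m processors:
    whenever a processor is free, it runs the first unstarted task in
    `order` whose predecessors have all finished."""
    t, running = 0, []                 # running: list of (end_time, task)
    done, start, finish = set(), {}, {}
    while len(done) < len(order):
        for task in order:             # dispatch ready tasks in list order
            if len(running) >= m:
                break
            if task not in start and all(p in done for p in preds.get(task, ())):
                start[task] = t
                running.append((t + durations[task], task))
        t = min(end for end, _ in running)
        for end, task in [r for r in running if r[0] == t]:
            running.remove((end, task))   # retire all completions at time t
            done.add(task)
            finish[task] = end
    return start, finish

# Graham's anomaly instance: T1 precedes T9; T4 precedes T5..T8.
order = ["T1", "T2", "T3", "T4", "T5", "T6", "T7", "T8", "T9"]
preds = {"T9": ["T1"], "T5": ["T4"], "T6": ["T4"], "T7": ["T4"], "T8": ["T4"]}
durations = {"T1": 3, "T2": 2, "T3": 2, "T4": 2,
             "T5": 4, "T6": 4, "T7": 4, "T8": 4, "T9": 9}
shorter = {task: d - 1 for task, d in durations.items()}

makespan = max(list_schedule(durations, preds, order, 3)[1].values())  # 12
anomaly = max(list_schedule(shorter, preds, order, 3)[1].values())     # 13
```

Shortening T2 and T3 releases T4 earlier, which lets T5..T7 seize all three processors before T9 becomes urgent, so the long task T9 starts later and the schedule lengthens despite every task being faster.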

Journal ArticleDOI
TL;DR: Encouraged by results from fully implemented test cases, the authors believe that extensive use of the key idea of using pieces of compiled executable code as functional operators will admit more open, still efficient, embedded systems.
Abstract: Embedded control devices today usually allow parameter changes, and possibly activation of different pre-implemented algorithms. Full reprogramming using the complete source code is not allowed for safety, efficiency, and proprietary reasons. For these reasons, embedded regulators are quite rigid and closed concerning the control structure. In several applications, like industrial robots, there is a need to tailor the low level control to meet specific application demands. In order to meet the efficiency and safety demands, a way of building more generic and open regulators has been developed. The key idea is to use pieces of compiled executable code as functional operators, which in the simplest case may appear as ordinary control parameters. In an object oriented framework, this means that new methods can be added to controller objects after implementation of the basic control, even at run-time. The implementation was carried out in industrially well accepted languages such as C and C++. The dynamic binding at run-time differs from ordinary dynamic linking in that only a subset of the symbols can be used. This subset is defined by the fixed part of the system. The safety demands can therefore still be fulfilled. Encouraged by results from fully implemented test cases, we believe that extensive use of this concept will admit more open, still efficient, embedded systems.

Journal ArticleDOI
TL;DR: The benefits of this process are: it provides a traceable path to the original language implementation, it achieves data encapsulation and data flow understanding, it separates out concurrent processes, and MetaH provides a robust mechanism for multiprocessor distribution.
Abstract: This paper describes a software/hardware architectural transformation of a single threaded, cyclic executive based missile application to a multitasking, distributed application using MetaH (Binns and Vestal 1993a, 1993b, 1995). The benefits of this process are: it provides a traceable path to the original language implementation, it achieves data encapsulation and data flow understanding, it separates out concurrent processes, it results in an object based design, and MetaH provides a robust mechanism for multiprocessor distribution.

Journal ArticleDOI
TL;DR: This paper focuses on evaluating the communication resources for use aboard a Navy ship using key performance metrics and an assessment of existing methods for determining these metrics.
Abstract: The Navy is a large user of real-time systems. A modern surface ship is deployed with hundreds of computers which are required for the ship to perform its mission. The Navy is moving away from using "niche" market components to meet its real-time computing needs and towards distributed processing using Commercial-Off-The-Shelf (COTS) computing components. The use of commercial real-time computing components requires careful evaluation to determine if they can meet the real-time requirements. This paper focuses on evaluating the communication resources for use aboard a Navy ship. Presented are: (1) Key performance metrics for assessing the communication capabilities of computers for use aboard Navy ships; (2) An assessment of existing methods for determining these metrics; (3) A methodology for collecting data to evaluate a particular component's performance related to these metrics; (4) Examples of applying the methodology; and (5) Examples of quantitative analysis using the metrics.

Journal ArticleDOI
TL;DR: A design and programming environment to assist the development of hard real-time applications based on an iterative process in which the real-time scheduling support is considered from the beginning of the design phases.
Abstract: The development of time critical applications needs specific tools able to cope with both functional and non-functional requirements. In this paper we describe a design and programming environment to assist the development of hard real-time applications. An interactive graphic interface is provided to facilitate the design of the application according to three hierarchical levels. The development model we propose is based on an iterative process in which the real-time scheduling support is considered from the beginning of the design phases. Our graphic environment integrates several tools to analyze, test, and simulate the real-time application under development. In particular, the tools we have implemented are: a Design Tool, to describe the structure of the application, a Schedulability Analyser Tool (SAT), to verify off-line the feasibility of the schedule of a critical task set, a Scheduling Simulator, to test the average behavior of the application, and a Maximum Execution Time (MET) estimator to bound the worst case duration of each task.

Journal ArticleDOI
TL;DR: A formal development method in which specifications may be decomposed into unexceptional programs whilst preserving the functional and timing requirements of the specification is described.
Abstract: Existing formal techniques for the development of software for use in safety-critical systems do not adequately address non-functional system requirements such as those involving timing. In this paper we describe a formal development method in which specifications may be decomposed into unexceptional programs whilst preserving the functional and timing requirements of the specification. We illustrate the method with a speed monitoring example.

Journal ArticleDOI
TL;DR: This work offers a cache-based architecture that is capable of both storing knowledge in different formats, and invoking an appropriate reasoning scheme to fit the available computing time, and illustrates the design of such a cache for solving resource allocation problems in the domain of shortwave radio transmission and evaluates its performance in observing imposed temporal bounds.
Abstract: Knowledge-based computing, in general, suffers from an inherent open-endedness that precludes its application in time-bounded domains where an answer must be computed within a stipulated time limit. We examine a two-way improvement of the shortcomings: a knowledge representation scheme that provides easy access to relevant knowledge and thereby reduces search time, and a reasoning scheme that is algorithmic in nature and thus makes computational requirements meaningfully estimable. In this work, we offer a cache-based architecture that is capable of both storing knowledge in different formats (e.g. rules, cases), and invoking an appropriate reasoning scheme to fit the available computing time. The cache helps in retrieving the most relevant pieces of knowledge (not only exact matches) required for solving a given problem. This cache relies on a reasoning tactic, knowledge interpolation, that can generate a solution from two near-matches in an algorithmic way, to generate time-bounded solutions. We illustrate the design of such a cache for solving resource allocation problems in the domain of shortwave radio transmission and evaluate its performance in observing imposed temporal bounds.

Journal ArticleDOI
TL;DR: This paper presents a rapid prototyping environment that supports the designer of application specific embedded controllers during the requirements validation phase.
Abstract: Mechatronics is a rapidly growing field that requires application specific hardware/software solutions for complex information processing at very low power, area, and cost. Rapid prototyping is a proven method to check a design against its requirements during early design phases and thus shorten the overall design cycle. Rapid prototyping of real-time information processing units in mechatronics applications requires code generation and hardware synthesis tools for a fast and efficient search in the design space. In this paper we present a rapid prototyping environment that supports the designer of application specific embedded controllers during the requirements validation phase.

Journal ArticleDOI
TL;DR: This paper presents the design and implementation of a user-level real-time network system in Real-Time Mach, and focuses on the aspects to avoid the priority inversion problem in order to make network systems more preemptable and predictable.
Abstract: This paper presents the design and implementation of a user-level real-time network system in Real-Time Mach. Traditional network systems for microkernel based operating systems, which tend to focus on high performance and flexibility, are not suitable for real-time communication. Our network system provides a framework for implementing real-time network protocols which require bounded protocol processing time, and it is suitable for implementation on microkernel based operating systems. In this paper, we focus especially on avoiding the priority inversion problem in order to make network systems more preemptable and predictable. We also describe the feasibility of our network system for building distributed multimedia systems.

Journal ArticleDOI
TL;DR: The OBSERV implementation of the Production Cell is described, design decisions are explained, with special emphasis on reusability and safety issues, and how to take care of safety and liveness properties required for this example are demonstrated.
Abstract: The Production Cell example was chosen by FZI (the Computer Science Research Center) in Karlsruhe to examine the benefits of formal methods for industrial applications. This example was implemented in more than 30 formalisms. This paper describes the implementation of the Production Cell in OBSERV. The OBSERV methodology for software development is based on rapid construction of an executable specification, or prototype, of a system, which may be examined and modified repeatedly to achieve the desired functionality. The objectives of OBSERV also include facilitating a smooth transition to a target system, and providing means for reusing specification, design, and code of systems, particularly real-time reactive systems. In this paper we show how the methods used in the OBSERV implementation address the requirements imposed by reactive systems. We describe the OBSERV implementation of the Production Cell, explain design decisions, with special emphasis on reusability and safety issues. We demonstrate how to take care of safety and liveness properties required for this example. These properties are checked by means of simulation and formally proved with a model checker.

Journal ArticleDOI
TL;DR: An open and flexible programming system, based on an existing system called VIRTUOSO, is described; it is an evolution of Virtuoso towards a heterogeneous distributed real-time architecture for robot and machine control.
Abstract: HEDRA, an ESPRIT project (nr. 6768), aims to develop a heterogeneous distributed real-time architecture for robot and machine control. This paper describes an open and flexible programming system, as a part of this architecture, that is based on an existing system called VIRTUOSO. The programming system is an evolution of Virtuoso towards the targeted architecture. The work concentrates on the achievement of guaranteed real-time behavior, minimum interrupt latency, and transparency of interprocessor data communication.
