
Showing papers in "Real-time Systems in 1999"


Journal ArticleDOI
TL;DR: A framework for determining feasibility for a wide variety of task systems is established and applied to this task model to obtain a feasibility-testing algorithm that runs in time pseudo-polynomial in the size of the input for all systems of such tasks whose densities are bounded by a constant less than one.
Abstract: A new model for sporadic task systems is introduced. This model, the generalized multiframe task model, further generalizes both the conventional sporadic-task model and the more recent multiframe model of Mok and Chen. A framework for determining feasibility for a wide variety of task systems is established; this framework is applied to the new task model to obtain a feasibility-testing algorithm that runs in time pseudo-polynomial in the size of the input for all systems of such tasks whose densities are bounded by a constant less than one.
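A minimal sketch of a demand-bound-function feasibility test for generalized multiframe (GMF) tasks, assuming the usual GMF notation: each task is a cyclic list of frames (e_i, d_i, p_i), where e_i is the execution requirement, d_i the relative deadline, and p_i the minimum separation to the next frame (assumed positive). The finite check horizon is illustrative, not the exact pseudo-polynomial bound derived in the paper.

def dbf(frames, t):
    """Max execution demand of one GMF task in any interval of length t."""
    n = len(frames)
    best = 0
    for start in range(n):              # try every starting frame
        demand, offset, k = 0, 0, start
        while True:
            e, d, p = frames[k]
            if offset + d > t:          # this frame's deadline falls outside the interval
                break
            demand += e
            offset += p                 # release the next frame as early as possible
            k = (k + 1) % n
            if offset > t:
                break
        best = max(best, demand)
    return best

def feasible(tasks, horizon):
    """Demand-bound test up to a finite horizon (sketch)."""
    for t in range(1, horizon + 1):
        if sum(dbf(frames, t) for frames in tasks) > t:
            return False
    return True

# Example: one GMF task alternating a heavy and a light frame, plus a plain sporadic task.
tasks = [[(2, 5, 10), (1, 4, 10)], [(1, 3, 6)]]
print(feasible(tasks, horizon=200))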

260 citations


Journal ArticleDOI
TL;DR: For interprocedural analysis, existing methods are examined and a new approach especially tailored to cache analysis is presented, which allows for a static classification of the cache behavior of the memory references of programs.
Abstract: Abstract interpretation is a technique for the static detection of dynamic properties of programs. It is semantics based, that is, it computes approximate properties of the semantics of programs. On this basis, it supports correctness proofs of analyses. It replaces commonly used ad hoc techniques by systematic, provable ones, and it allows for the automatic generation of analyzers from specifications by existing tools. In this work, abstract interpretation is applied to the problem of predicting the cache behavior of programs. Abstract semantics of machine programs are defined which determine the contents of caches. For interprocedural analysis, existing methods are examined and a new approach that is especially tailored to cache analysis is presented. This allows for a static classification of the cache behavior of the memory references of programs. The calculated information can be used to improve worst-case execution time estimations. It is possible to analyze instruction, data, and combined instruction/data caches for common (re)placement and write strategies. Experimental results are presented that demonstrate the applicability of the analyses.
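A minimal sketch of one ingredient commonly used in such analyses: a "must" analysis for a fully associative LRU cache. The abstract state maps a memory block to an upper bound on its LRU age; a block whose bound is below the associativity is guaranteed to be cached, so accesses to it can be classified as always-hit. The block names and the 4-way associativity are assumptions for illustration, not details taken from the paper.

ASSOC = 4

def update(state, block):
    """Abstract effect of accessing `block` on a must-cache state."""
    new = {}
    old_age = state.get(block, ASSOC)        # ASSOC == "possibly not cached"
    for b, age in state.items():
        if b == block:
            continue
        # Blocks younger than the accessed block age by one; others keep their bound.
        new_age = age + 1 if age < old_age else age
        if new_age < ASSOC:
            new[b] = new_age
    new[block] = 0                            # accessed block is now youngest
    return new

def join(s1, s2):
    """Join at control-flow merges: keep only blocks known to be cached on both paths."""
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

def classify(state, block):
    return "always hit" if block in state else "not classified"

# Example: a straight-line access sequence followed by a merge of two paths.
s = {}
for b in ["a", "b", "a", "c"]:
    s = update(s, b)
print(classify(s, "a"), classify(s, "z"))
print(join(s, update(s, "d")))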

230 citations


Journal ArticleDOI
TL;DR: A method that integrates path and timing analysis to accurately predict the worst-case execution time for real-time programs on high-performance processors, which can exclude many infeasible program paths and calculate path information, such as bounds on the number of loop iterations, without the need for manual annotation of programs.
Abstract: Previously published methods for estimating the worst-case execution time on high-performance processors with complex pipelines and multi-level memory hierarchies result in overestimations owing to insufficient path and/or timing analysis. This not only gives rise to poor utilization of processing resources but also reduces schedulability in real-time systems. This paper presents a method that integrates path and timing analysis to accurately predict the worst-case execution time for real-time programs on high-performance processors. The unique feature of the method is that it extends cycle-level architectural simulation techniques to enable symbolic execution with unknown input data values; it uses alternative instruction semantics to handle unknown operands. We show that the method can exclude many infeasible (or non-executable) program paths and can calculate path information, such as bounds on the number of loop iterations, without the need for manual annotation of programs. Moreover, the method is shown to accurately analyze timing properties of complex features in high-performance processors using multiple-issue pipelines and instruction and data caches. The combined path and timing analysis capability is shown to derive exact estimates of the worst-case execution time for six out of seven programs in our benchmark suite.
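A minimal sketch of the idea of "alternative instruction semantics" for simulation with unknown input values: any operation on an unknown operand yields unknown, while operations on known values execute normally, and a branch whose condition is unknown cannot be decided (forcing both paths to be explored). The instruction set and register names are invented for illustration; the paper's simulator works at the cycle/pipeline level, which this sketch does not model.

UNKNOWN = object()   # sentinel for a statically unknown value

def alu(op, a, b):
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN                       # unknown operands propagate
    return {"add": a + b, "sub": a - b, "mul": a * b}[op]

def branch_taken(cond):
    """Return True, False, or None when the outcome cannot be decided."""
    return None if cond is UNKNOWN else bool(cond)

# Example: r1 is program input (unknown), r2 is a constant.
regs = {"r1": UNKNOWN, "r2": 10}
regs["r3"] = alu("add", regs["r1"], regs["r2"])   # unknown + 10 -> unknown
regs["r4"] = alu("mul", regs["r2"], regs["r2"])   # 10 * 10 -> 100
print(regs["r3"] is UNKNOWN, regs["r4"])
print(branch_taken(alu("sub", regs["r2"], 10)))   # 10 - 10 == 0 -> not taken
# A branch on r3 would return None, and the analysis would explore both paths.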

153 citations


Journal ArticleDOI
TL;DR: An automatic tool-based approach is described to bound worst-case data cache performance and a method to deal with realistic cache filling approaches, namely wrap-around-filling for cache misses, is presented as an extension to pipeline analysis.
Abstract: The contributions of this paper are twofold. First, an automatic tool-based approach is described to bound worst-case data cache performance. The approach works on fully optimized code, performs the analysis over the entire control flow of a program, detects and exploits both spatial and temporal locality within data references, and produces results typically within a few seconds. Results obtained by running the system on representative programs are presented and indicate that timing analysis of data cache behavior usually results in significantly tighter worst-case performance predictions. Second, a method to deal with realistic cache filling approaches, namely wrap-around-filling for cache misses, is presented as an extension to pipeline analysis. Results indicate that worst-case timing predictions become significantly tighter when wrap-around-fill analysis is performed. Overall, the contribution of this paper is a comprehensive report on methods and results of worst-case timing analysis for data caches and wrap-around caches. The approach taken is unique and provides a considerable step toward realistic worst-case execution time prediction of contemporary architectures and its use in schedulability analysis for hard real-time systems.
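A minimal sketch of wrap-around-fill timing: on a miss, the cache line is fetched starting at the requested word and wrapping around, so a later access to the same line may only have to wait until "its" word arrives. The latency numbers and line geometry below are assumptions for illustration, not values from the paper's pipeline analysis.

WORDS_PER_LINE = 4
MISS_LATENCY = 10        # cycles until the first (requested) word arrives
WORD_INTERVAL = 2        # cycles between subsequent words of the fill

def fill_arrival_times(requested_word, miss_time):
    """Cycle at which each word of the line becomes available."""
    arrival = {}
    for i in range(WORDS_PER_LINE):
        word = (requested_word + i) % WORDS_PER_LINE
        arrival[word] = miss_time + MISS_LATENCY + i * WORD_INTERVAL
    return arrival

def access_delay(word, access_time, arrival):
    """Stall cycles for an access to `word` while the fill is in progress."""
    return max(0, arrival[word] - access_time)

# Example: a miss on word 2 at cycle 0, followed by accesses at cycle 14.
arrival = fill_arrival_times(requested_word=2, miss_time=0)
print(arrival)                       # {2: 10, 3: 12, 0: 14, 1: 16}
print(access_delay(3, 14, arrival))  # word 3 arrived at cycle 12 -> no stall
print(access_delay(1, 14, arrival))  # word 1 arrives at cycle 16 -> 2 stall cycles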

66 citations


Journal ArticleDOI
Jiandong Huang, Y. Wang, F. Cao
TL;DR: This paper presents GRMS's design, prototyping, and performance evaluation, and presents a decentralized end-to-end two-phase negotiation and adaptation protocol with the functionality of distributed, dynamic QoS adjustment and stream preemption.
Abstract: The Global Resource Management System (GRMS) provides middleware services for QoS- and criticality-based resource negotiation and adaptation across heterogeneous computing nodes and communication networks. This paper presents GRMS's design, prototyping, and performance evaluation. We introduce GRMS design principles and two key concepts, the unified resource model and ripple scheduling, and describe our architectural design based on these concepts. Further, we present a decentralized end-to-end two-phase negotiation and adaptation protocol with the functionality of distributed, dynamic QoS adjustment and stream preemption. We discuss GRMS's system prototyping and lessons learned and report our experimentation and simulation results, providing insights into the design and implementation of a middleware-based distributed resource management system.

43 citations


Journal ArticleDOI
TL;DR: This paper shows how Spring's specification language, programming language, software generation system, and operating system kernel are applied to build a flexible manufacturing testbed, demonstrating the use of reflective information and the value of function and time composition.
Abstract: The Spring system is a highly integrated collection of software and hardware that synergistically operates to provide end-to-end support in building complex real-time applications. In this paper, we show how Spring's specification language, programming language, software generation system, and operating system kernel are applied to build a flexible manufacturing testbed. The same ingredients have also been used to realize a predictable version of a robot pick-and-place application used in industry. These applications are good examples of complex real-time systems that require flexibility. The goal of this paper is to demonstrate the integrated nature of the system and the benefits of integration; in particular, the use of reflective information and the value of function and time composition. The lessons learned from these applications and the project as a whole are also presented.

41 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe an open system architecture that allows independently developed hard real-time applications to run together and supports their reconfiguration at run-time, and describe the two-level CPU scheduling scheme used by the open system and the design and implementation of a uniprocessor open system within the framework of the Windows NT operating system.
Abstract: This paper describes an open system architecture that allows independently developed hard real-time applications to run together and supports their reconfiguration at run-time. In the open system, each real-time application is executed by a server. At the lower level, the OS scheduler schedules all the servers on an EDF basis. At the upper level, the server scheduler of each server schedules the ready jobs of the application executed by the server according to the algorithm chosen for the application. The paper describes the two-level CPU scheduling scheme used by the open system and the design and implementation of a uniprocessor open system within the framework of the Windows NT operating system. The implementation consists of three key components: the two-level hierarchical kernel scheduler, common system service providers, and a real-time application programming interface.
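A minimal sketch of the two-level scheme described above: the OS scheduler picks among servers on an EDF basis, and each server applies its own policy to the ready jobs of its application. Budget replenishment, server deadline assignment, and the unit time quantum are simplifying assumptions; the paper's open system defines these precisely.

import heapq

class Server:
    def __init__(self, name):
        self.name, self.jobs = name, []          # jobs: (key, label, remaining)

    def add_job(self, label, remaining, key):
        heapq.heappush(self.jobs, (key, label, remaining))   # key = priority per the server's own policy

    def run_one_tick(self):
        key, label, remaining = heapq.heappop(self.jobs)
        remaining -= 1
        if remaining > 0:
            heapq.heappush(self.jobs, (key, label, remaining))
        return label

def os_schedule(servers, deadlines, ticks):
    """Lower level: each tick, run the backlogged server with the earliest deadline."""
    trace = []
    for _ in range(ticks):
        ready = [s for s in servers if s.jobs]
        if not ready:
            trace.append("idle")
            continue
        server = min(ready, key=lambda s: deadlines[s.name])
        trace.append(f"{server.name}:{server.run_one_tick()}")
    return trace

# Example: server A orders its jobs by fixed priority, server B by absolute deadline.
s1, s2 = Server("A"), Server("B")
s1.add_job("J1", remaining=2, key=1)        # key = priority level
s2.add_job("J2", remaining=3, key=20)       # key = absolute deadline
print(os_schedule([s1, s2], deadlines={"A": 8, "B": 5}, ticks=6))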

40 citations


Journal ArticleDOI
TL;DR: This analysis determines the maximum processing time which may be stolen from periodic tasks without jeopardizing either their timing constraints or resource consistency, and provides the basis for an on-line scheduling algorithm, the EDL Server, to minimize response times for soft aperiodic tasks.
Abstract: In this paper, we are concerned with the problem of serving soft aperiodic tasks on a uniprocessor system where periodic tasks are scheduled on a dynamic-priority, preemptive basis and access critical sections exclusively. Scheduling of tasks is handled by the Dynamic Priority Ceiling Protocol working with an Earliest Deadline scheduler. Our analysis determines the maximum processing time which may be stolen from periodic tasks without jeopardizing either their timing constraints or resource consistency. It provides the basis for an on-line scheduling algorithm, the EDL Server, to deal with the minimization of response times for soft aperiodic tasks.
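A minimal sketch of the idea behind the EDL server: schedule the periodic jobs as late as possible over the hyperperiod; the slots left idle by that schedule are the processing time that can be "stolen" for soft aperiodic requests without endangering periodic deadlines. Integer parameters, deadlines equal to periods, and the backward slot-by-slot construction are simplifying assumptions; critical-section constraints handled by the Dynamic Priority Ceiling Protocol are not modelled here.

from math import lcm

def edl_idle_slots(tasks):
    """tasks: list of (wcet, period) with deadline == period. Returns idle slots."""
    H = lcm(*(p for _, p in tasks))
    # Expand every job in the hyperperiod: [release, deadline, remaining wcet].
    jobs = [[k * p, k * p + p, c] for c, p in tasks for k in range(H // p)]
    idle = []
    for t in range(H - 1, -1, -1):                     # fill slots from the end
        eligible = [j for j in jobs if j[0] <= t < j[1] and j[2] > 0]
        if not eligible:
            idle.append(t)
            continue
        job = max(eligible, key=lambda j: j[0])        # latest release first == as late as possible
        job[2] -= 1
    return sorted(idle)

def slack(tasks, deadline):
    """Processing time available to an aperiodic request arriving at 0 with this deadline."""
    return sum(1 for t in edl_idle_slots(tasks) if t < deadline)

# Example: two periodic tasks using 5/10 + 3/15 = 70% of the processor.
tasks = [(5, 10), (3, 15)]
print(slack(tasks, deadline=10))   # idle time available in [0, 10)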

39 citations


Journal ArticleDOI
TL;DR: The proposed policy, the Mixed Method (MM), which considers both transaction timeliness and data contention, outperforms other policies over a wide range of system parameter settings.
Abstract: One of the most important issues in the design of a distributed real-time database system (DRTDBS) is transaction scheduling, which consists of two parts: priority scheduling and real-time concurrency control. In past studies these issues have mostly been studied separately, although they interact closely with each other. In this paper, we propose new priority assignment policies for DRTDBS and study their impact on two typical real-time concurrency control protocols (RT-CCPs), High Priority Two Phase Locking (HP-2PL) and Optimistic Concurrency Control with Broadcast Commit (OCC-BC). Our performance results show that many factors, such as data conflict resolution, degree of data contention and transaction restarts, that are unique to database systems have a significant impact on the performance of the policies, which in turn affects the performance of the real-time concurrency control protocols. OCC-BC is more affected by the priority assignment policies than HP-2PL owing to its late detection of conflicts. In the design of priority assignment policies, we have found that neither purely deadline-driven policies nor data-contention-driven policies are suitable for DRTDBS. Our proposed policy, the Mixed Method (MM), which considers both transaction timeliness and data contention, outperforms other policies over a wide range of system parameter settings.
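An illustrative sketch only: the abstract does not give the exact Mixed Method (MM) formula, so the weighting below is an assumption. What it demonstrates is the design idea of combining transaction timeliness (closeness of the deadline) with data contention (how much conflict a transaction is involved in) in a single priority value, rather than using either criterion alone.

def mm_priority(now, deadline, locks_held, conflicts, w_time=1.0, w_data=0.5):
    """Smaller value == higher priority (served first). Weights are hypothetical."""
    urgency = deadline - now                     # pure deadline-driven component
    contention = locks_held + conflicts          # pure data-contention component
    return w_time * urgency - w_data * contention

# Example: T2 is less urgent but blocks more data, so it may overtake T1.
t1 = mm_priority(now=0, deadline=50, locks_held=0, conflicts=0)
t2 = mm_priority(now=0, deadline=60, locks_held=10, conflicts=12)
print(sorted([("T1", t1), ("T2", t2)], key=lambda x: x[1]))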

36 citations


Journal ArticleDOI
TL;DR: An overview of the ARMADA project is given, describing the architecture and presenting its implementation status, and a collection of modular, composable middleware for fault-tolerant group communication and replication under timing constraints is developed.
Abstract: Real-time embedded systems have evolved during the past several decades from small custom-designed digital hardware to large distributed processing systems. As these systems become more complex, their interoperability, evolvability and cost-effectiveness requirements motivate the use of commercial off-the-shelf components. This raises the challenge of constructing dependable and predictable real-time services for application developers on top of inexpensive hardware and software components which have minimal support for timeliness and dependability guarantees. We are addressing this challenge in the ARMADA project. ARMADA is a set of communication and middleware services that provide support for fault tolerance and end-to-end guarantees for embedded real-time distributed applications. Since the real-time performance of such applications depends heavily on the communication subsystem, the first thrust of the project is to develop a predictable communication service and architecture to ensure QoS-sensitive message delivery. Fault tolerance is of paramount importance to embedded safety-critical systems. In its second thrust, ARMADA aims to offload the complexity of developing fault-tolerant applications from the application programmer by focusing on a collection of modular, composable middleware for fault-tolerant group communication and replication under timing constraints. Finally, we develop tools for testing and validating the behavior of our services. We give an overview of the ARMADA project, describing the architecture and presenting its implementation status.

33 citations


Journal ArticleDOI
TL;DR: A Dynamic Real-Time CORBA system is described, which supports the expression and enforcement of end-to-end timing constraints as an extension to a commercial CORBA system.
Abstract: Distributed real-time applications have presented the need to extend the Object Management Group's Common Object Request Broker Architecture (CORBA) standard to support real-time. This paper describes a Dynamic Real-Time CORBA system, which supports the expression and enforcement of end-to-end timing constraints as an extension to a commercial CORBA system. The paper also describes performance tests that demonstrate the system's ability to enforce expressed timing constraints.

Journal ArticleDOI
TL;DR: It is shown that the performance of a scheduling algorithm is improved dramatically when the release time of the tasks is O(Cmax) prior to their deadline; achieving a competitive ratio that is close to one.
Abstract: Motivated by the special characteristics of multimedia tasks, we consider non-preemptive scheduling of tasks where there exists no (or very limited) information concerning the tasks before they are released. We present impossibility results and analyze algorithms for non-preemptive scheduling in single processor and multiprocessor systems. To evaluate our algorithms we assume that the system obtains a value that is proportional to the processing time of a task whenever the task is completed by its deadline. Competitive analysis is used, where the goal is to keep the total value obtained by an on-line algorithm bounded by a function of the total value obtained by an off-line algorithm. In particular, one set of our results considers the competitive ratio of scheduling algorithms when the length of the tasks is not greater than C_max (and not smaller than C_min). We show that the performance of a scheduling algorithm improves dramatically when the release time of the tasks is O(C_max) prior to their deadline, achieving a competitive ratio that is close to one.
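A minimal sketch of the setting analysed above: tasks arrive online, must be run non-preemptively, and a completed task contributes a value equal to its processing time. The greedy rule below (when idle, start the pending task with the earliest deadline that can still finish on time) is an illustrative baseline, not the specific algorithm whose competitive ratio is proved in the paper.

def online_value(tasks):
    """tasks: list of (release, length, deadline)."""
    time, value, pending, i = 0, 0, [], 0
    tasks = sorted(tasks)
    while i < len(tasks) or pending:
        if not pending and i < len(tasks) and tasks[i][0] > time:
            time = tasks[i][0]                     # jump to the next arrival
        while i < len(tasks) and tasks[i][0] <= time:
            pending.append(tasks[i]); i += 1
        # Keep only tasks that could still complete by their deadline.
        pending = [t for t in pending if time + t[1] <= t[2]]
        if pending:
            pending.sort(key=lambda t: t[2])       # earliest deadline first
            r, length, d = pending.pop(0)
            time += length                         # run to completion (no preemption)
            value += length                        # value == processing time
    return value

# Example: generous laxity (release well before the deadline) lets every task count.
print(online_value([(0, 4, 12), (1, 3, 11)]))   # both complete -> value 7
print(online_value([(0, 4, 4), (1, 3, 4)]))     # tight deadlines -> value 4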

Journal ArticleDOI
TL;DR: An experimental evaluation of the period calibration method (PCM), which was developed in Gerber et al. (1994, 1995) as a systematic design methodology for real-time systems, unveils several weaknesses and proposes a new communication scheme and a transient overload handling technique.
Abstract: In this paper we present an experimental evaluation of the period calibration method (PCM), which was developed in Gerber et al. (1994, 1995) as a systematic design methodology for real-time systems. The objective of this experimental study is to assess the design alternatives integrated into the method and their performance implications for systems built via the PCM. Such design alternatives include scheduling jitter, sensor-to-output latency, intertask communication schemes, and system utilization. For this study, we have chosen a computerized numerical control (CNC) machine as our target real-time system, and built a realistic controller and a plant simulator. We show the detailed development process of the CNC controller and report its performance. The performance results were extracted from a controlled series of more than a hundred test controllers obtained by varying four test variables. This study unveils several weaknesses of the PCM: (1) the communication scheme built into the PCM incurs a large latency, even though average sensor-to-output latency is one of the most dominant factors in determining control quality; (2) scheduling jitter is taken seriously in the PCM, even though its effect appears only when average sensor-to-output latency is sufficiently small; (3) loop processing periods are not properly optimized for control quality, even though they are another dominant factor of performance; and (4) transient overloads are not considered at all in the PCM, even though they can seriously damage the performance of a system. Based on these results, we propose a new communication scheme and a transient overload handling technique for an improved period calibration method.

Journal ArticleDOI
Farn Wang, Chia-Tien Lo
TL;DR: This work identifies two common engineering guidelines respected in the development of real-world software projects, structured programming and local autonomy in concurrent systems, and experiments with a special verification algorithm based on this engineering wisdom.
Abstract: We want to develop verification techniques for real-time concurrent system specifications with high-level behavior structures. This work identifies two common engineering guidelines respected in the development of real-world software projects, structured programming and local autonomy in concurrent systems, and experiments with a special verification algorithm based on this engineering wisdom. The algorithm we have adopted respects the integrity of program structures, treats each procedure as an entity instead of as a group of statements, allows local state-space search to exploit the local autonomy in concurrent systems without calculating the Cartesian products of local state spaces, and derives from each procedure declaration characteristic information which can be utilized in the verification process wherever the procedure is invoked. We have implemented our idea, tested it against an abstract extension of a real-world protocol in a mobile communication environment, and report the resulting data.

Journal ArticleDOI
TL;DR: It is shown that taking into account the scheduling time is crucial for honoring the deadlines of scheduled tasks, and the algorithms proposed in this paper increase scheduling complexity to optimize for longer and obtain high-quality schedules.
Abstract: This paper addresses a fundamental trade-off in dynamic scheduling between the cost of scheduling and the quality of the resulting schedules. The time allocated to scheduling must be controlled explicitly in order to obtain good-quality schedules in reasonable times. As task constraints are relaxed, the algorithms proposed in this paper increase scheduling complexity to optimize for longer and obtain high-quality schedules. When task constraints are tightened, the algorithms adjust scheduling complexity to reduce the adverse effect of long scheduling times on schedule quality. We show that taking into account the scheduling time is crucial for honoring the deadlines of scheduled tasks. We investigate the performance of our algorithms in two scheduling models: one that allows idle-time intervals to exist in the schedule and another that does not. The model with idle-time intervals has important implications for dynamic scheduling, which are discussed in the paper. Experimental evaluation shows that the proposed algorithms outperform other candidate algorithms in several parameter configurations.

Journal ArticleDOI
TL;DR: A novel scheduling scheme, called limited preemptive scheduling (LPS), is proposed that limits preemptions to execution points with small cache-related preemption costs to maximize the schedulability of a given task set while minimizing cache- related preemption delay of tasks.
Abstract: In multi-tasking real-time systems, inter-task cache interference due to preemptions degrades schedulability as well as performance. To address this problem, we propose a novel scheduling scheme, called limited preemptive scheduling (LPS), that limits preemptions to execution points with small cache-related preemption costs. Limiting preemptions decreases the cache-related preemption costs of tasks but increases blocking delay of higher priority tasks. The proposed scheme makes an optimal trade-off between these two factors to maximize the schedulability of a given task set while minimizing cache-related preemption delay of tasks. Experimental results show that the LPS scheme improves the maximum schedulable utilization by up to 40% compared with the traditional fully preemptive scheduling (FPS) scheme. The results also show that up to 20% of processor time is saved by the LPS scheme due to reduction in the cache-related preemption costs. Finally, the results show that both the improvement of schedulability and the saving of processor time by the LPS scheme increase as the speed gap between the processor and main memory widens.
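A minimal sketch of the trade-off behind limited preemptive scheduling (LPS): allowing preemption only at execution points with a small cache-related preemption cost lowers the preemption delay charged to the preempted task, but the distance between consecutive allowed points becomes blocking for higher priority tasks. The cost numbers and the simple threshold rule are assumptions for illustration; the paper derives the optimal trade-off from schedulability analysis.

def select_preemption_points(points, cost_threshold):
    """points: list of (offset_in_cycles, cache_reload_cost)."""
    return [p for p, cost in points if cost <= cost_threshold]

def worst_case_blocking(task_length, allowed_points):
    """Longest non-preemptable region if preemption is allowed only at these points."""
    boundaries = [0] + sorted(allowed_points) + [task_length]
    return max(b - a for a, b in zip(boundaries, boundaries[1:]))

# Example: candidate points every ~100 cycles with differing reload costs.
points = [(100, 35), (200, 5), (300, 40), (400, 8), (500, 30)]
for threshold in (50, 10):
    allowed = select_preemption_points(points, threshold)
    print(threshold, allowed, worst_case_blocking(600, allowed))
# Lowering the threshold cuts preemption cost but doubles the blocking term.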

Journal ArticleDOI
TL;DR: A novel pre-runtime scheduling method for uniprocessors which precisely takes the effects of task switching on the processor cache into consideration and uses a heuristically guided search strategy.
Abstract: We present a novel pre-runtime scheduling method for uniprocessors which precisely takes the effects of task switching on the processor cache into consideration. Tasks are modelled as a sequence of non-preemptable segments with precedence constraints. The cache behavior of each task segment is statically determined by abstract interpretation. For the sake of efficiency, the scheduling algorithm uses a heuristically guided search strategy. Each time a new task segment is added to a partial schedule, its worst-case execution time is calculated based on the cache state at the end of the preceding partial schedule.
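A minimal sketch of that scheduling loop: segments are appended to a partial schedule one at a time, and the WCET charged to a segment depends on which segment ran immediately before it (a stand-in for the cache state left behind). The greedy earliest-deadline heuristic and the WCET table are assumptions for illustration; the paper derives cache states by abstract interpretation and uses a guided search rather than plain greedy selection.

def build_schedule(segments, wcet, preds, deadlines):
    """segments: names; wcet[(prev, seg)]: WCET of seg when run right after prev."""
    schedule, time, prev, done = [], 0, None, set()
    while len(done) < len(segments):
        ready = [s for s in segments if s not in done and preds[s] <= done]
        seg = min(ready, key=lambda s: deadlines[s])     # heuristic: earliest deadline first
        time += wcet[(prev, seg)]
        if time > deadlines[seg]:
            return None                                  # partial schedule infeasible
        schedule.append((seg, time))
        done.add(seg); prev = seg
    return schedule

segments = ["A", "B", "C"]
preds = {"A": set(), "B": {"A"}, "C": set()}
deadlines = {"A": 10, "B": 30, "C": 25}
# Running B right after A is cheaper because A leaves useful lines in the cache.
wcet = {(None, "A"): 8, ("A", "B"): 6, ("C", "B"): 12, ("A", "C"): 9, ("B", "C"): 9,
        (None, "C"): 10, ("C", "A"): 8, (None, "B"): 12, ("B", "A"): 8}
print(build_schedule(segments, wcet, preds, deadlines))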

Journal ArticleDOI
TL;DR: The results provide a guideline for selecting the right sorting algorithm for a given application and show in which way the adequacy of an algorithm depends on the demanded performance criterion (hard, soft, or non-real-time).
Abstract: In hard real-time systems, tasks must be guaranteed to meet their deadlines. Soft real-time tasks may miss deadlines occasionally, as long as the entire system can provide the specified quality of service. In this paper we investigate the hard and soft real-time performance of sorting algorithms and compare it to their average performance. We show in which way the adequacy of an algorithm depends on the demanded performance criterion (hard, soft, or non-real-time). The results provide a guideline for selecting the right sorting algorithm for a given application.
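A minimal sketch of the point made above: for hard real-time use the worst-case behaviour matters, not the average. Counting comparisons on a worst-case (reverse-sorted) input shows insertion sort's quadratic blow-up against merge sort's bounded n log n cost, even though insertion sort is competitive on average for small or nearly sorted inputs. Comparison counts serve as a proxy for execution time in this illustration; the paper measures the algorithms themselves.

def insertion_sort(a):
    a, comps = list(a), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comps += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, comps

def merge_sort(a):
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort(a[:mid])
    right, cr = merge_sort(a[mid:])
    merged, comps, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comps

worst = list(range(1000, 0, -1))          # reverse-sorted: worst case for insertion sort
print(insertion_sort(worst)[1])           # ~ n^2 / 2 comparisons
print(merge_sort(worst)[1])               # ~ n log2 n comparisons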


Journal ArticleDOI
TL;DR: This paper proposes an admission algorithm to select the part of the offered load to be executed, should overload occur, and shows that the algorithm provides the best solution to the optimisation problem by resorting to linear programming theory.
Abstract: This paper deals with planning system activities to support applications that have different, contrasting requirements, including timing constraints on task execution and correctness requirements. Our approach is based on a simple yet effective formulation of a value structure associated with the application tasks. Values are associated with each relevant outcome, thus accounting for successful executions as well as for those which violate the application requirements. Moreover, we assume degradable real-time systems equipped with several execution techniques characterised by different execution costs and different levels of fulfilment of requirements (and associated reward). We propose an admission algorithm to select the part of the offered load to be executed, should overload occur. For all admitted tasks, the algorithm also selects the most suitable execution technique (among those available) to optimise the expected cumulated reward. We show that the algorithm provides the best solution to the optimisation problem by resorting to linear programming theory. We then discuss the applicability of this result to systems operating in dynamic environments. A planner component is defined, responsible for collecting information on the current status of the system and of its environment. The planner decides when a new 'plan' is required, and dynamically executes the admission algorithm to properly tune the usage of system resources.
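A minimal sketch of the optimisation step as a linear program: for each task, choose (possibly fractionally) one of its execution techniques so that the cumulated reward is maximised subject to a processor-capacity bound. The task data, reward values, and the use of scipy.optimize.linprog are illustrative assumptions; the paper defines the exact value structure and admission algorithm.

from scipy.optimize import linprog

# Two tasks, each with two techniques (cost, reward): technique 0 is a cheap,
# degraded version; technique 1 is the full-quality, expensive one.
tasks = [[(2, 3), (5, 10)],
         [(1, 2), (4, 7)]]
capacity = 7

costs = [c for task in tasks for c, _ in task]
rewards = [r for task in tasks for _, r in task]

# Maximise reward == minimise -reward.
c = [-r for r in rewards]
# Capacity constraint: total cost of the selected techniques <= capacity.
A_ub = [costs]
b_ub = [capacity]
# Each task picks at most one technique (sum of its variables <= 1).
offset = 0
for task in tasks:
    row = [0.0] * len(costs)
    for k in range(len(task)):
        row[offset + k] = 1.0
    A_ub.append(row)
    b_ub.append(1.0)
    offset += len(task)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(costs))
print(res.x.round(2), -res.fun)   # chosen techniques and the cumulated reward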

Journal ArticleDOI
TL;DR: This paper discusses the stability of a feasible pre-run-time schedule under a transient overload introduced by process re-execution during an error recovery action, and shows that fault-tolerant hard real-time systems do not have to be extremely expensive and complex.
Abstract: This paper discusses the stability of a feasible pre-run-time schedule under a transient overload introduced by process re-execution during an error recovery action. It shows that the stability of a schedule strictly tuned to meet hard deadlines is very small, thus invalidating backward error recovery. However, the stability of the schedule always increases when a real-time process is considered as having a nominal and a hard deadline separated by a non-zero grace time. This is true for sets of processes having arbitrary precedence and exclusion constraints and executed on a single-processor or multiprocessor architecture. Grace time is not just the key element for a realistic estimation of the timing constraints of real-time error processing techniques. It also allows backward error recovery to be included in very efficient pre-run-time scheduled systems when the conditions stated in this paper are satisfied. This is a very important conclusion, as it shows that fault-tolerant hard real-time systems do not have to be extremely expensive and complex.
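A minimal sketch of the grace-time idea in its simplest form: a process has a nominal deadline and a later hard deadline, and backward error recovery (re-executing the process after a detected error) is acceptable only if the re-execution still completes before the hard deadline. The single-error assumption and the isolated-process check are illustrative simplifications; the paper analyses whole schedules with precedence and exclusion constraints on single- and multiprocessor architectures.

def recovery_fits(finish_time, wcet, nominal_deadline, hard_deadline):
    """Can the process be re-executed once after completing (and failing) by its nominal deadline?"""
    assert finish_time <= nominal_deadline      # the first execution met its nominal deadline
    return finish_time + wcet <= hard_deadline  # re-execution must fit into the grace time

# Example: a process scheduled to finish at 8 with WCET 5.
print(recovery_fits(finish_time=8, wcet=5, nominal_deadline=10, hard_deadline=15))  # True
print(recovery_fits(finish_time=8, wcet=5, nominal_deadline=10, hard_deadline=12))  # False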