Showing papers by "Sanjoy Baruah" published in 2016


Proceedings ArticleDOI
05 Jul 2016
TL;DR: A generalization of the Vestal model is considered here, in which a degraded (but non-zero) level of service is required for the less critical functionalities even in the event of only the more conservative assumptions holding.
Abstract: Many reactive systems must be designed and analyzed prior to deployment in the presence of considerable epistemic uncertainty: the precise nature of the external environment the system will encounter, as well as the run-time behavior of the platform upon which it is implemented, cannot be predicted with complete certainty prior to deployment. The widely-studied Vestal model for mixed-criticality workloads addresses uncertainties in estimating the worst-case execution time (WCET) of real-time code. Different estimates, at different levels of assurance, are made of these WCET values: all functionalities are required to execute correctly if the less conservative assumptions hold, while only the more critical functionalities are required to execute correctly in the (presumably less likely) event that the less conservative assumptions fail to hold but the more conservative assumptions do. A generalization of the Vestal model is considered here, in which a degraded (but non-zero) level of service is required for the less critical functionalities even in the event of only the more conservative assumptions holding. An algorithm is derived for scheduling dual-criticality implicit-deadline sporadic task systems specified in this more general model upon preemptive uniprocessor platforms, and is proved to be speedup-optimal.
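For intuition, the dual-criticality task model with degraded service can be captured in a few lines; the sketch below (all names, and the per-mode utilization helper, are illustrative assumptions, not the paper's algorithm) records both WCET estimates plus the degraded budget still guaranteed to LO-criticality tasks under the more conservative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DualCritTask:
    """Implicit-deadline sporadic task with degraded LO-criticality service.

    All names are illustrative. wcet_lo/wcet_hi are the less/more
    conservative WCET estimates; for LO-criticality tasks, wcet_degraded
    is the (non-zero) budget still guaranteed if only the more
    conservative assumptions hold.
    """
    period: float
    wcet_lo: float          # less conservative WCET estimate
    wcet_hi: float          # more conservative WCET estimate
    wcet_degraded: float    # degraded budget for LO tasks in HI mode
    is_hi_criticality: bool

def utilizations(tasks):
    """Per-mode utilizations, the quantities EDF-VD-style tests start from."""
    u_lo = sum(t.wcet_lo / t.period for t in tasks)
    u_hi = sum((t.wcet_hi if t.is_hi_criticality else t.wcet_degraded)
               / t.period for t in tasks)
    return u_lo, u_hi
```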

50 citations


Proceedings ArticleDOI
01 Nov 2016
TL;DR: A model is proposed for mixed-criticality recurrent tasks that extends the (previously-proposed) implicit-deadline sporadic DAG tasks model to account for mixed criticalities and a federated scheduling algorithm for systems of such tasks is presented and proved correct.
Abstract: Under the federated approach to multiprocessor scheduling, each individual task is either restricted to execute upon a single processor (as in partitioned scheduling), or has exclusive access to all the processors upon which it may execute. The federated scheduling of a mixed-criticality collection of independent recurrent tasks is studied here. A model is proposed for mixed-criticality recurrent tasks that extends the (previously-proposed) implicit-deadline sporadic DAG tasks model to account for mixed criticalities. A federated scheduling algorithm for systems of such tasks is presented and proved correct, and a quantitative evaluation of its efficacy derived via the widely-used speedup factor metric.
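For context, the classic (non-mixed-criticality) federated rule assigns each "heavy" DAG task a dedicated block of cores sized from its total work and critical-path length; a minimal sketch of that rule appears below. It is offered only to illustrate the federated approach; the paper's mixed-criticality variant differs.

```python
import math

def federated_core_count(work, span, deadline):
    """Dedicated cores for a DAG task under the standard federated rule
    m_i = ceil((C_i - L_i) / (D_i - L_i)), where work C_i is the total
    WCET of all DAG nodes, span L_i is the longest path, and the deadline
    is implicit (deadline == period)."""
    if span > deadline:
        raise ValueError("infeasible: critical path exceeds deadline")
    if work <= deadline:
        return 1  # a "light" task; shares processors with other light tasks
    return math.ceil((work - span) / (deadline - span))
```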

46 citations


Proceedings ArticleDOI
05 Jul 2016
TL;DR: The problem of partitioning systems of independent constrained-deadline sporadic tasks upon heterogeneous multiprocessor platforms is considered, and several different integer linear program formulations offering different tradeoffs between effectiveness and running time efficiency are presented.
Abstract: The problem of partitioning systems of independent constrained-deadline sporadic tasks upon heterogeneous multiprocessor platforms is considered. Several different integer linear program (ILP) formulations of this problem, offering different tradeoffs between effectiveness (as quantified by speedup bound) and running time efficiency, are presented.
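A minimal sketch of one such partitioning ILP is given below, using binary assignment variables, per-processor WCETs to model heterogeneity, and a simple density-based capacity constraint as a stand-in for the paper's formulations. The task data and the use of the PuLP solver are assumptions for illustration.

```python
import pulp

# wcet[i][j]: WCET of task i on (heterogeneous) processor j;
# deadline[i] <= period[i] (constrained deadlines). Toy data.
wcet     = [[2.0, 3.0], [4.0, 2.5], [1.0, 1.5]]
deadline = [5.0, 8.0, 4.0]

n_tasks, n_procs = len(wcet), len(wcet[0])
prob = pulp.LpProblem("partition", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(n_tasks), range(n_procs)),
                          cat=pulp.LpBinary)
prob += 0  # feasibility problem: constant objective

for i in range(n_tasks):
    prob += pulp.lpSum(x[i][j] for j in range(n_procs)) == 1  # assign once

for j in range(n_procs):
    # crude sufficient condition: total density on each processor <= 1
    prob += pulp.lpSum(wcet[i][j] / deadline[i] * x[i][j]
                       for i in range(n_tasks)) <= 1

prob.solve()
print(pulp.LpStatus[prob.status])
```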

21 citations


Proceedings ArticleDOI
01 Oct 2016
TL;DR: A model is considered in which multiple estimates are instead provided for the rate at which event-triggered processes are executed; an algorithm is derived for scheduling such systems upon a preemptive uniprocessor, and its effectiveness is quantified via the speedup factor metric.
Abstract: In mixed-criticality systems functionalities of different criticalities, that need to have their correctness validated to different levels of assurance, co-exist upon a shared platform. Multiple specifications at differing levels of assurance may be provided for such systems; the specifications that are trusted at very high levels of assurance tend to be more conservative than those at lower levels of assurance. Prior research on the scheduling of such mixed-criticality systems has primarily focused upon the case where multiple estimates of the worst-case execution time (WCET) of pieces of code are provided; in this paper, a model is considered in which multiple estimates are instead provided for the rate at which event-triggered processes are executed. An algorithm is derived for scheduling such systems upon a preemptive uniprocessor; the effectiveness of this algorithm is demonstrated quantitatively via the speedup factor metric.
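The task model can be pictured as a sporadic task carrying two minimum inter-arrival estimates, with the more conservative estimate assuming faster arrivals. The sketch below (illustrative names; the paper's scheduling algorithm is not reproduced) records the model and the per-mode utilizations that any such analysis would start from.

```python
from dataclasses import dataclass

@dataclass
class DualRateTask:
    """Sporadic task with two rate estimates. period_hi <= period_lo:
    the more conservative (higher-assurance) estimate assumes events
    may arrive faster, i.e. with a smaller minimum inter-arrival time.
    Names are illustrative."""
    wcet: float
    period_lo: float   # less conservative inter-arrival estimate
    period_hi: float   # more conservative (smaller) estimate
    is_hi_criticality: bool

def mode_utilizations(tasks):
    """Utilizations under each set of assumptions (necessary conditions)."""
    u_lo = sum(t.wcet / t.period_lo for t in tasks)
    u_hi = sum(t.wcet / t.period_hi for t in tasks if t.is_hi_criticality)
    return u_lo, u_hi
```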

17 citations


Journal ArticleDOI
TL;DR: Experimental studies on a large number of randomly generated sets suggest that the proposed neural network-based optimization method is optimal when the set is nonoverloaded, and outperforms existing typical scheduling strategies when there is overload.
Abstract: In this paper, we study a set of real-time scheduling problems whose objectives can be expressed as piecewise linear utility functions. This model has very wide applications in scheduling-related problems, such as mixed criticality, response time minimization, and tardiness analysis. Approximation schemes and matrix vectorization techniques are applied to transform scheduling problems into linear constraint optimization with a piecewise linear and concave objective; thus, a neural network-based optimization method can be adopted to solve such scheduling problems efficiently. This neural network model has a parallel structure, and can also be implemented on circuits, on which the convergence time can be significantly limited to meet real-time requirements. Examples are provided to illustrate how to solve the optimization problem and to form a schedule. An approximation ratio bound of 0.5 is further provided. Experimental studies on a large number of randomly generated sets suggest that our algorithm is optimal when the set is nonoverloaded, and outperforms existing typical scheduling strategies when there is overload. Moreover, the number of steps for finding an approximate solution remains at the same level when the size of the problem (number of jobs within a set) increases.
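The central transformation, maximizing a concave piecewise linear objective as a linear program via its epigraph, can be sketched in a few lines; the toy single-variable example below uses scipy's LP solver in place of the paper's neural-network solver, and the segment data is invented for illustration.

```python
from scipy.optimize import linprog

# Concave piecewise linear utility: U(t) = min_k (a[k]*t + b[k]).
# Toy segments: utility rises, then falls (e.g. value of finishing at t).
a = [1.0, -0.5]
b = [0.0, 6.0]

# Variables: [t, u]. Maximize u  <=>  minimize -u,
# subject to u - a[k]*t <= b[k] for every segment k, and 0 <= t <= 10.
c = [0.0, -1.0]
A_ub = [[-ak, 1.0] for ak in a]
b_ub = b
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10), (None, None)])
print(res.x)  # optimal (t, U(t)); here the segments cross at t = 4, u = 4
```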

17 citations


Proceedings ArticleDOI
01 Nov 2016
TL;DR: A scheduling algorithm for dual-criticality systems of such tasks is presented, proved correct, and quantitatively characterized via the speedup factor metric; this is the first work to conduct any form of analysis of task systems represented using this general model.
Abstract: In their widely-cited survey on mixed-criticality systems, Burns and Davis describe a very general model for representing mixed-criticality sporadic tasks. In this general model multiple estimates, at differing levels of assurance, are specified for each of the three parameters -- worst-case execution time (WCET), relative deadline, and period -- characterizing a 3-parameter sporadic task. The preemptive uniprocessor scheduling of systems of such tasks is considered. A scheduling algorithm is presented, proved correct, and quantitatively characterized via the speedup factor metric for dual-criticality systems of such tasks. To our knowledge, this is the first work to conduct any form of analysis of task systems that are represented using this general model.
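In this general model each task carries a vector of estimates per parameter, one entry per assurance level; a minimal sketch of such a task record follows (the names and the dual-criticality instantiation are illustrative).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GeneralMCTask:
    """Sporadic task in the general mixed-criticality model: one estimate
    of each parameter per assurance level, ordered from least to most
    conservative. For dual criticality each list has two entries; being
    more conservative means a larger WCET and a smaller deadline/period.
    Names are illustrative."""
    criticality: int         # the task's own assurance level
    wcets: List[float]       # non-decreasing across levels
    deadlines: List[float]   # non-increasing across levels
    periods: List[float]     # non-increasing across levels

    def params_at(self, level: int):
        """Parameters under the assumptions trusted at `level`."""
        return self.wcets[level], self.deadlines[level], self.periods[level]
```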

16 citations


Proceedings ArticleDOI
01 Dec 2016
TL;DR: In the mixed-criticality job model, each job is characterized by two execution time parameters, representing a smaller (less conservative) estimate and a larger (more conservative) estimate on its actual, unknown, execution time.
Abstract: In the mixed-criticality job model, each job is characterized by two execution time parameters, representing a smaller (less conservative) estimate and a larger (more conservative) estimate on its actual, unknown, execution time. Each job is further classified as being either less critical or more critical. The desired execution semantics are that all jobs should execute correctly provided all jobs complete upon being allowed to execute for up to the smaller of their execution time estimates, whereas if some jobs need to execute beyond their smaller execution time estimates (but not beyond their larger execution time estimates), then only the jobs classified as being more critical are required to execute correctly. The scheduling of collections of such mixed-criticality jobs upon identical multiprocessor platforms in order to minimize the makespan is considered here.
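For intuition, the classic lower bound on makespan (average load versus largest job) can be evaluated under each set of assumptions; the sketch below uses invented job data and is a generic bound, not the paper's algorithm.

```python
def makespan_lower_bound(jobs, m):
    """Classic lower bound on makespan for m identical processors:
    max(average load, largest job). `jobs` is a list of execution times."""
    return max(sum(jobs) / m, max(jobs))

# Two scenarios for a dual-criticality collection (illustrative data):
# LO scenario: every job completes within its smaller estimate.
# HI scenario: only the more critical jobs must finish, but they may run
# for up to their larger estimates.
lo_times      = [2, 3, 1, 4]   # smaller estimates, all jobs
hi_crit_times = [5, 6]         # larger estimates, critical jobs only
m = 2
print(makespan_lower_bound(lo_times, m),
      makespan_lower_bound(hi_crit_times, m))
```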

8 citations


Proceedings ArticleDOI
19 Oct 2016
TL;DR: It is shown that even minimal task splitting can release substantial slack that was previously unusable due to isolation requirements, which in turn provides a significant increase in schedulability.
Abstract: Mixed Criticality workloads present a challenging paradigm that requires equal consideration of functional separation and efficient platform usage. As more powerful platforms become available, the consolidation of previously federated functionality becomes highly desirable. Such platforms are becoming increasingly multi-core in nature, bringing challenges in addition to those of isolation and utilisation. Cyclic Executives (CE) are used extensively in industry to schedule highly critical functionality in a manner which aids certification. The CE paradigm may be applied to the mixed criticality case, making use of a number of features to ensure the sufficient separation of different levels of criticality. While previous work has considered the separation of criticality levels, this work focuses on providing high system utilisation. One of the significant challenges of such an implementation is the allocation of work (tasks) to minor cycles and cores. This work considers such an allocation problem and presents a means of testing schedulability using Linear Programming (LP) tools. Toward the aim of high system utilisation, we consider how tasks of different criticality levels might be split, in some limited way, in order to increase the overall schedulability. We show that even minimal task splitting can release substantial slack that was previously unusable due to isolation requirements, which in turn provides a significant increase in schedulability.
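The effect of limited task splitting can be illustrated by relaxing the minor-cycle allocation from binary packing to fractional packing: the toy single-core LP below (invented budgets, PuLP solver; the paper's formulation with criticality separation is considerably richer) is feasible with splitting but admits no all-or-nothing allocation.

```python
import pulp

# Toy single-core cyclic executive: 2 minor cycles, each 10 time units long.
budgets = [7.0, 7.0, 6.0]   # per-major-cycle execution budgets of three tasks
n_cycles, cycle_len = 2, 10.0

prob = pulp.LpProblem("ce_alloc", pulp.LpMinimize)
# f[i][k]: fraction of task i's budget placed in minor cycle k.
f = pulp.LpVariable.dicts("f", (range(len(budgets)), range(n_cycles)),
                          lowBound=0, upBound=1)
prob += 0  # feasibility check only

for i in range(len(budgets)):
    prob += pulp.lpSum(f[i][k] for k in range(n_cycles)) == 1  # place all of i
for k in range(n_cycles):
    prob += pulp.lpSum(budgets[i] * f[i][k]
                       for i in range(len(budgets))) <= cycle_len

prob.solve()
print(pulp.LpStatus[prob.status])  # feasible with splitting (e.g. 7+3 | 7+3)
# Forcing each f[i][k] to be binary (no splitting) makes this instance
# infeasible: no two whole budgets fit within one minor cycle.
```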

7 citations


Proceedings ArticleDOI
01 Nov 2016
TL;DR: This work explores preemption-cost-cognizant schedulability and sustainability issues in the uniprocessor EDF scheduling of sporadic task systems.
Abstract: We are exploring preemption-cost-cognizant schedulability and sustainability issues in the uniprocessor EDF scheduling of sporadic task systems.