Proceedings ArticleDOI

ParaRMS algorithm: A parallel implementation of rate monotonic scheduling algorithm using OpenMP

TLDR
An algorithm called ParaRMS is presented: a parallel implementation of the Rate Monotonic Scheduling (RMS) algorithm on multi-core architecture using OpenMP, which improves the scalability and responsiveness of RMS.
Abstract
With the evolution of multi-core systems, computing performance has improved, and processors with more than one processing core have found application in fields where high performance and complex computation are required, such as supercomputing, remote monitoring systems, data mining, and data communication and processing systems. Multi-core processors can also be utilized to improve the performance of embedded and real-time systems. Since the scheduling capability of a Real-Time Operating System (RTOS) determines its efficiency and performance when dealing with real-time critical tasks, using more than one processing unit speeds up task processing. This paper presents an algorithm called ParaRMS, a parallel implementation of the Rate Monotonic Scheduling (RMS) algorithm on multi-core architecture. This improves the scalability and responsiveness of RMS, helps schedule dynamic tasks effectively, and offers good CPU utilization for a given task set. In support of this work, ParaRMS has been analyzed with Intel VTune Amplifier XE 2013, with positive results.


Citations
Journal ArticleDOI

The Migration of Engine ECU Software From Single-Core to Multi-Core

TL;DR: In this article, a multi-core migration methodology is proposed for a real-world Automotive Open System Architecture (AUTOSAR)-based engine ECU from Hyundai.
Journal Article

Shared-memory programming with OpenMP

Journal ArticleDOI

Performance Evaluation and Analysis of Parallel Computers Workload

TL;DR: This paper investigates how to tune the performance of threaded applications by balancing the load on each core or processor, and presents a comparative analysis of single-core and multicore systems running an application program, aiming at faster execution time and an optimized scheduler for better performance.
References
Journal ArticleDOI

Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment

TL;DR: The problem of multiprogram scheduling on a single processor is studied from the viewpoint of the characteristics peculiar to the program functions that need guaranteed service, and it is shown that an optimum fixed priority scheduler possesses an upper bound to processor utilization which may be as low as 70 percent for large task sets.
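The "upper bound to processor utilization" referenced in this TL;DR is the Liu–Layland schedulability bound: for n periodic tasks with computation times C_i and periods T_i, RMS guarantees schedulability whenever

\sum_{i=1}^{n} \frac{C_i}{T_i} \le n\left(2^{1/n} - 1\right),

and this bound decreases monotonically toward \ln 2 \approx 0.693 as n \to \infty, which is the "as low as 70 percent" figure for large task sets.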
Book

Parallel Programming in OpenMP

TL;DR: Aimed at the working researcher or scientific C/C++ or Fortran programmer, this text introduces the competent research programmer to a new vocabulary of idioms and techniques for parallelizing software using OpenMP.
Book

Computers as Components: Principles of Embedded Computing System Design

TL;DR: This research presents a meta-modelling architecture for embedded systems that automates the labor-intensive, and therefore time-consuming and expensive, process of designing and programming embedded systems.
Book ChapterDOI

Introduction to Parallel Programming

TL;DR: This chapter introduces parallel programming on current system architectures, using OpenCL as the target language, and includes examples for CPUs, GPUs, and their integration in the accelerated processing unit (APU).