
Showing papers by "Yaser Jararweh" published in 2013


Proceedings ArticleDOI
11 Dec 2013
TL;DR: A Cloudlet based MCC system aiming to reduce the power consumption and the network delay while using MCC is introduced and a new framework for the MCC model is proposed.
Abstract: Mobile Cloud Computing (MCC) has been introduced as a viable solution to the inherent limitations of mobile computing. These limitations include battery lifetime, processing power, and storage capacity. With MCC, the processing and storage of intensive mobile device jobs take place in the cloud system and the results are returned to the mobile device, reducing the power and time required to complete such intensive jobs. However, connecting mobile devices to the cloud suffers from high network latency and substantial transmission power consumption, especially when using 3G/LTE connections. In this paper, we introduce a Cloudlet-based MCC system aiming to reduce the power consumption and the network delay of MCC. We merge the MCC concepts with the proposed Cloudlet framework and propose a new framework for the MCC model. Our experimental results show that the proposed model reduces the mobile device's power consumption as well as the communication latency when the mobile device offloads a job for remote execution, while maintaining a high quality-of-service standard.
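To make the offloading trade-off concrete, here is a minimal Python sketch (not the authors' implementation) of how a mobile device might weigh energy and latency when choosing between local execution, a nearby cloudlet, and the remote cloud; the class names, cost formula, and numbers are all illustrative assumptions.

```python
# Illustrative sketch of a cloudlet-vs-cloud offloading decision.
# All names, formulas, and constants are assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    rtt_s: float            # round-trip network latency in seconds (0 for local)
    bandwidth_mbps: float   # uplink bandwidth to the target (unused for local)
    speedup: float          # execution speedup relative to the mobile CPU
    tx_power_w: float       # radio transmit power while uploading

def cost(job_mb, local_exec_s, cpu_power_w, t: Target, w_delay=0.5, w_energy=0.5):
    """Weighted delay/energy cost of running the job on a given target."""
    if t.name == "local":
        delay = local_exec_s
        energy = cpu_power_w * local_exec_s          # device burns CPU power
    else:
        upload_s = job_mb * 8 / t.bandwidth_mbps     # MB -> Mbit / Mbit/s
        delay = t.rtt_s + upload_s + local_exec_s / t.speedup
        energy = t.tx_power_w * upload_s             # device energy: radio only
    return w_delay * delay + w_energy * energy

targets = [
    Target("local",    0.0,   0.0,  1.0, 0.0),
    Target("cloudlet", 0.01, 50.0, 10.0, 0.7),   # one Wi-Fi hop away
    Target("cloud",    0.12, 10.0, 50.0, 1.3),   # 3G/LTE to a distant data centre
]
best = min(targets, key=lambda t: cost(job_mb=20, local_exec_s=30, cpu_power_w=2.0, t=t))
print("offload to:", best.name)   # -> "offload to: cloudlet" with these illustrative numbers
```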

120 citations


Journal ArticleDOI
25 Jul 2013
TL;DR: TeachCloud is a modelling and simulation environment for cloud computing that can be used to experiment with different cloud components such as: processing elements, data centres, storage, networking, service level agreement (SLA) constraints, web-based applications, service oriented architecture (SOA), virtualisation, management and automation, and business process management (BPM).
Abstract: Cloud computing is an evolving and fast-growing computing paradigm that has gained great interest from both industry and academia. Consequently, universities are actively integrating cloud computing into their IT curricula. One major challenge facing cloud computing instructors is the lack of a teaching tool to experiment with. This paper introduces TeachCloud, a modelling and simulation environment for cloud computing. TeachCloud can be used to experiment with different cloud components such as: processing elements, data centres, storage, networking, service level agreement (SLA) constraints, web-based applications, service oriented architecture (SOA), virtualisation, management and automation, and business process management (BPM). TeachCloud also introduces the MapReduce processing model in order to handle embarrassingly parallel data processing problems. TeachCloud is an extension of CloudSim, a research-oriented simulator used for the development and validation of cloud computing systems.
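TeachCloud itself extends the Java-based CloudSim, so the sketch below is not its API; it is only a minimal Python illustration of the MapReduce processing model the abstract refers to, using the classic word-count example.

```python
# Minimal illustration of the MapReduce processing model (word count).
# This is not TeachCloud's API; TeachCloud extends the Java-based CloudSim.
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit (word, 1) pairs for each word in a document."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: sum the counts for a single word."""
    return key, sum(values)

documents = ["cloud computing in the cloud", "teaching cloud computing"]
mapped = chain.from_iterable(map_phase(d) for d in documents)   # mappers run independently
grouped = shuffle(mapped)
counts = dict(reduce_phase(k, v) for k, v in grouped.items())
print(counts)   # {'cloud': 3, 'computing': 2, 'in': 1, 'the': 1, 'teaching': 1}
```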

76 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper presents a large-scale BANs system with cloudlet-based data collection and minimizes the end-to-end packet delay by dynamically choosing a neighbor cloudlet.
Abstract: This paper presents a large-scale BANs system with cloudlet-based data collection. The objective is to minimize the end-to-end packet cost by dynamically routing collected data to the cloud through a cloudlet-based system. The goal is to make the monitored BAN data available to the end user or the service provider in a reliable manner. While reducing the packet-to-cloud energy, the proposed work also attempts to minimize the end-to-end packet delay by dynamically choosing a neighbor cloudlet so that the overall delay is minimized. This, in turn, makes the monitored data available in the cloud in real time. Note that, in the absence of network congestion in low data-rate BANs, the storage delays introduced by the data collection scheme are usually much larger than the congestion delay.
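A hedged sketch of the kind of dynamic neighbor-cloudlet selection described above: for each batch of BAN packets, pick the cloudlet whose combined access, storage (buffering), and uplink delay is smallest. The delay model and attribute names are assumptions, not the paper's formulation.

```python
# Illustrative neighbor-cloudlet selection for BAN data collection.
# Delay model and attribute names are assumptions, not the paper's formulation.

class Cloudlet:
    def __init__(self, name, access_delay_s, backlog_pkts, service_rate_pps, uplink_delay_s):
        self.name = name
        self.access_delay_s = access_delay_s       # BAN gateway -> cloudlet
        self.backlog_pkts = backlog_pkts           # packets already buffered at the cloudlet
        self.service_rate_pps = service_rate_pps   # packets forwarded to the cloud per second
        self.uplink_delay_s = uplink_delay_s       # cloudlet -> cloud

    def end_to_end_delay(self):
        storage_delay = self.backlog_pkts / self.service_rate_pps
        return self.access_delay_s + storage_delay + self.uplink_delay_s

def choose_cloudlet(neighbors):
    """Dynamically pick the neighbor cloudlet with the smallest end-to-end delay."""
    return min(neighbors, key=lambda c: c.end_to_end_delay())

neighbors = [
    Cloudlet("cl-1", 0.005, backlog_pkts=400, service_rate_pps=200, uplink_delay_s=0.05),
    Cloudlet("cl-2", 0.012, backlog_pkts=50,  service_rate_pps=150, uplink_delay_s=0.05),
]
print(choose_cloudlet(neighbors).name)   # cl-2: larger access delay but far smaller backlog
```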

44 citations


Proceedings ArticleDOI
01 Nov 2013
TL;DR: This paper investigates the self-coexistence problem between secondary users in overlapped WRAN cells with the objective of improving network performance by employing an adaptive traffic-aware channel allocation strategy, and reveals that the proposed algorithm provides a significant enhancement in system performance in terms of the number of served requests.
Abstract: Co-existence of different wireless networks and interference management are challenging problems in a Cognitive Radio (CR) environment. There are two different types of co-existence: incumbent co-existence (between licensed and unlicensed users) and self-coexistence (between secondary users in multiple overlapped Wireless Regional Area Network (WRAN) cells). To overcome the self-coexistence problem in WRANs, many Fixed Channel Assignment (FCA) techniques have been proposed, but without accounting for the cooperation overhead and the randomly time-varying traffic loads in different cells. In this paper, we investigate the self-coexistence problem between secondary users in overlapped WRAN cells with the objective of improving network performance by employing an adaptive traffic-aware channel allocation strategy. The proposed method provides an interference-free environment with minimum cooperation overhead and attempts to guarantee pre-specified blocking probability requirements. Simulation results reveal that the proposed algorithm provides a significant enhancement in system performance in terms of the number of served requests.
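One concrete way a pre-specified blocking-probability requirement can be checked when sizing a cell's channel allocation for its measured traffic load is the classic Erlang B recursion; the sketch below applies it with illustrative numbers and is not the paper's algorithm.

```python
# Erlang B blocking probability: a common way to relate offered traffic load,
# number of channels, and call blocking. Values below are illustrative only.

def erlang_b(offered_load_erlangs, channels):
    """Blocking probability for `channels` servers under `offered_load_erlangs`."""
    b = 1.0
    for k in range(1, channels + 1):
        b = (offered_load_erlangs * b) / (k + offered_load_erlangs * b)
    return b

def channels_needed(offered_load_erlangs, max_blocking):
    """Smallest channel count meeting the blocking-probability requirement."""
    n = 1
    while erlang_b(offered_load_erlangs, n) > max_blocking:
        n += 1
    return n

# A cell whose measured traffic load is 12 Erlangs and whose requirement is <= 2% blocking:
print(channels_needed(12.0, 0.02))   # -> 19 channels under this model
```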

16 citations


Proceedings ArticleDOI
24 Oct 2013
TL;DR: It is shown that using the PEP-side caching approach may open an insider threat port that can be used to bypass access control models in cloud and distributed relational databases.
Abstract: PEP-side caching is used in request-response access control mechanisms to increase availability and reduce the processing overhead on the PDP. Nonetheless, this paper shows that using this approach may open an insider threat port that can be used to bypass access control models in cloud and distributed relational databases. Moreover, the paper proposes a lightweight model that detects and prevents the threat without affecting the performance of the PEP and PDP, while keeping the advantages of the PEP-side caching model.
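A hedged sketch of the PEP-side caching pattern being analysed: the PEP caches PDP decisions keyed by (subject, resource, action), so a permission revoked at the PDP can still be honoured from the cache until the entry expires, which is the window an insider can exploit. The class names and TTL policy are assumptions, not the paper's detection model.

```python
# Illustrative PEP-side decision cache in a request-response access control flow.
# Class names and the TTL policy are assumptions, not the paper's proposed model.
import time

class PDP:
    """Policy Decision Point: holds the authoritative policy."""
    def __init__(self):
        self.allowed = {("alice", "patients_table", "SELECT")}

    def evaluate(self, subject, resource, action):
        return "PERMIT" if (subject, resource, action) in self.allowed else "DENY"

class PEP:
    """Policy Enforcement Point with a local decision cache (reduces PDP load)."""
    def __init__(self, pdp, ttl_s=300):
        self.pdp, self.ttl_s, self.cache = pdp, ttl_s, {}

    def enforce(self, subject, resource, action):
        key = (subject, resource, action)
        hit = self.cache.get(key)
        if hit and time.time() - hit[1] < self.ttl_s:
            return hit[0]                        # served from cache, PDP never consulted
        decision = self.pdp.evaluate(*key)
        self.cache[key] = (decision, time.time())
        return decision

pdp, pep = PDP(), PEP(pdp)
print(pep.enforce("alice", "patients_table", "SELECT"))   # PERMIT (now cached)
pdp.allowed.clear()                                       # access revoked at the PDP
print(pep.enforce("alice", "patients_table", "SELECT"))   # still PERMIT until the TTL expires
```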

15 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper presents a simple, low-hardware-overhead, yet effective cache bypassing algorithm that dynamically chooses which blocks to insert into the LLC and which should bypass it following a miss, based on past access/bypass patterns.
Abstract: The design of an effective last-level cache (LLC) is crucial to overall processor performance and, consequently, continues to be the center of substantial research. Unfortunately, LLCs in modern high-performance processors are not used efficiently. One major problem suffered by LLCs is their low hit rates, caused by the large fraction of cache blocks that do not get re-accessed after being brought into the LLC following a cache miss. These blocks do not contribute any cache hits and usually induce cache pollution and thrashing. Cache bypassing presents an effective solution to this problem. Cache blocks that are predicted not to be accessed while residing in the cache are not inserted into the LLC following a miss; instead, they bypass the LLC and are only inserted in the higher cache levels. This paper presents a simple, low-hardware-overhead, yet effective cache bypassing algorithm that dynamically chooses which blocks to insert into the LLC and which should bypass it following a miss, based on past access/bypass patterns. Our proposed algorithm is thoroughly evaluated using a detailed simulation environment where its effectiveness, performance-improvement capabilities, and robustness are demonstrated. Moreover, it is shown to outperform the state-of-the-art cache bypassing algorithm in both uniprocessor and multi-core processor settings.
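As a rough illustration of the history-based decision the abstract describes, the sketch below uses a small table of saturating counters, indexed by a block-address signature, to predict on each LLC miss whether the block should be inserted or bypass the cache, and trains the counters on whether inserted blocks were actually re-referenced. The table size, indexing, and update rule are assumptions, not the paper's exact algorithm.

```python
# Simplified history-based LLC bypass predictor (illustrative only).
# Table size, indexing, and update policy are assumptions, not the paper's design.

TABLE_SIZE = 1024
COUNTER_MAX = 3          # 2-bit saturating counters
BYPASS_THRESHOLD = 2     # counter >= threshold -> predict "dead on insert", so bypass

counters = [COUNTER_MAX // 2] * TABLE_SIZE   # start weakly biased toward insertion

def signature(block_addr):
    """Hash a block address into a predictor-table index."""
    return (block_addr >> 6) % TABLE_SIZE    # drop 64-byte block-offset bits

def should_bypass(block_addr):
    """On an LLC miss, decide whether the block should bypass insertion into the LLC."""
    return counters[signature(block_addr)] >= BYPASS_THRESHOLD

def train(block_addr, was_reused):
    """Update the predictor when a block leaves the LLC (or a bypassed block misses again)."""
    idx = signature(block_addr)
    if was_reused:
        counters[idx] = max(counters[idx] - 1, 0)            # reuse seen -> favour insertion
    else:
        counters[idx] = min(counters[idx] + 1, COUNTER_MAX)  # dead block -> favour bypassing

# Example: a streaming block that is never re-referenced trains its entry toward bypassing.
addr = 0x7f3a_2400
train(addr, was_reused=False)
train(addr, was_reused=False)
print(should_bypass(addr))   # True once the counter saturates past the threshold
```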

12 citations


Journal ArticleDOI
TL;DR: This paper evaluates the new 28 nm FPGA technology and its impact on eight of the major cryptographic algorithms available today, such as SHA2, SHA3, and AES, and reveals that using the 28 nm FPGAs reduced power consumption by more than 50% and increased throughput by up to 100% compared to older FPGA technologies.
Abstract: The current unprecedented advancements in communication systems and high-performance computing have created demand for high-throughput applications whose power consumption stays within a predefined budget. These advancements were accompanied by a crucial need to secure such systems and users' critical data. Current cryptographic applications suffer from low throughput and extensive power consumption that severely impact the available power budget. Creating new algorithms to handle these issues would be a time-consuming process. One viable solution is to use the new 28 nanometer (nm) FPGA devices, which promise lower power consumption with very competitive throughput and throughput-to-area ratios compared to older technologies. In this paper, we evaluate the 28 nm FPGA technology and its impact on eight of the major cryptographic algorithms available today, such as SHA2, SHA3, and AES. Our results revealed that using the 28 nm FPGAs reduced power consumption by more than 50% and increased throughput by up to 100% compared to the older FPGA technologies. Moreover, throughput-to-area ratio results show about a 71% improvement over the other technologies.
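The comparison metrics used here (throughput, relative improvement, throughput-to-area ratio) boil down to simple arithmetic; the sketch below shows that arithmetic with purely hypothetical figures, not the paper's measurements.

```python
# Throughput-to-area and relative-improvement arithmetic for an FPGA crypto core.
# All figures below are hypothetical placeholders, not the paper's measurements.

def throughput_mbps(block_bits, clock_mhz, cycles_per_block):
    """Throughput = bits processed per unit time (Mbit/s)."""
    return block_bits * clock_mhz / cycles_per_block

def improvement(new, old):
    """Relative improvement of `new` over `old`, as a percentage."""
    return 100.0 * (new - old) / old

old_tp = throughput_mbps(block_bits=128, clock_mhz=250, cycles_per_block=11)  # hypothetical older-node AES core
new_tp = throughput_mbps(block_bits=128, clock_mhz=480, cycles_per_block=11)  # hypothetical 28 nm AES core

old_tpa = old_tp / 2200.0   # throughput / occupied slices (hypothetical area)
new_tpa = new_tp / 2450.0

print(f"throughput gain: {improvement(new_tp, old_tp):.0f}%")        # ~92% with these numbers
print(f"throughput/area gain: {improvement(new_tpa, old_tpa):.0f}%") # ~72% with these numbers
```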

5 citations


Proceedings ArticleDOI
01 Jan 2013
TL;DR: This paper investigates Personal SuperComputing (PSC) as an emerging concept in high performance computing that provides an opportunity to overcome most of the aforementioned problems, evaluates the GPU-based personal supercomputing system, and compares it with a conventional high performance cluster-based computing system.
Abstract: The complexity of today's applications has increased exponentially with the growing demand for unlimited computational resources. Cluster-based supercomputing systems and high performance data centers have traditionally been the ideal assets to fulfill the ever-increasing computing demands. Such computing resources require a multi-million dollar investment to build, and the cost grows with the power needed for operating and cooling the system, along with maintenance. In this paper we investigate Personal SuperComputing (PSC) as an emerging concept in high performance computing that provides an opportunity to overcome most of the aforementioned problems. We explore and evaluate the GPU-based personal supercomputing system and compare it with a conventional high performance cluster-based computing system. Our evaluations show promising opportunities in using GPU-based clusters for high performance computing applications.

1 citation