Author

Mohammad Shojafar

Bio: Mohammad Shojafar is an academic researcher from the University of Surrey. The author has contributed to research in topics including Cloud computing and Scheduling (computing), has an h-index of 9, and has co-authored 32 publications receiving 323 citations.

Papers
Journal ArticleDOI
TL;DR: Studies the impact of an intelligent reflecting surface (IRS) on computational performance in a mobile edge computing (MEC) system in which an access point provides MEC services to multiple Internet of Things (IoT) devices that offload a portion of their computational tasks and compute the remainder locally.
Abstract: This letter studies the impact of an intelligent reflecting surface (IRS) on computational performance in a mobile edge computing (MEC) system. Specifically, an access point (AP) equipped with an edge server provides MEC services to multiple Internet of Things (IoT) devices that choose to offload a portion of their own computational tasks to the AP, with the remaining portion being locally computed. We deploy an IRS to enhance the computational performance of the MEC system by intelligently adjusting the phase shift of each reflecting element. A joint design problem is formulated for the considered IRS-assisted MEC system, aiming to optimize its sum computational bits while taking into account the CPU frequency, the offloading time allocation, the transmit power of each device, as well as the phase shifts of the IRS. To deal with the non-convexity of the formulated problem, we first find the optimized phase shifts and then obtain the jointly optimal CPU frequencies, transmit powers, and offloading time allocation via the Lagrange dual method and the Karush-Kuhn-Tucker (KKT) conditions. Numerical evaluations highlight the advantage of the IRS-assisted MEC system in comparison with the benchmark schemes.
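The phase-shift step of such designs admits a well-known closed form that is easy to illustrate. The NumPy sketch below shows only that first step, rotating each reflecting element's phase so the reflected paths add coherently with the direct link; the paper's full joint optimization of CPU frequency, transmit power, and time allocation via the Lagrange dual and KKT conditions is not reproduced here, and all channel values are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                   # number of IRS reflecting elements (assumed)

# Rayleigh-faded channels: device->IRS (h_r), IRS->AP (g), device->AP direct (h_d)
h_r = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
g   = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)

# Closed-form optimal phase shifts: rotate every reflected path so it
# adds coherently with the direct path.
theta = np.angle(h_d) - np.angle(h_r * g)
h_eff = h_d + np.sum(h_r * g * np.exp(1j * theta))

# Effective gain under random phases, for comparison
theta_rand = rng.uniform(0, 2 * np.pi, N)
h_rand = h_d + np.sum(h_r * g * np.exp(1j * theta_rand))

print(f"aligned |h|^2 = {abs(h_eff)**2:.2f}, random |h|^2 = {abs(h_rand)**2:.2f}")
```

The aligned channel gain grows roughly with the square of the number of elements, which is what drives the computational-bit improvements the letter reports.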

117 citations

Journal ArticleDOI
TL;DR: Presents a robust federated learning-based architecture, Fed-IIoT, for detecting Android malware applications in IIoT; the A3GAN defensive approach preserves the robustness of data privacy for Android mobile users and achieves about 8% higher accuracy than existing state-of-the-art solutions.
Abstract: The sheer volume of industrial Internet of Things (IIoT) malware is one of the most serious security threats in today's interconnected world, with new types of advanced persistent threats and advanced forms of obfuscation. This article presents a robust federated learning-based architecture called Fed-IIoT for detecting Android malware applications in IIoT. Fed-IIoT consists of two parts: first, the participant side, where the data are subjected to two dynamic poisoning attacks based on a generative adversarial network (GAN) and a federated GAN; and second, the server side, which monitors the global model and shapes a robust collaborative training model by avoiding anomalies in aggregation through a GAN network (A3GAN) and by adjusting two GAN-based countermeasure algorithms. One of the main advantages of Fed-IIoT is that devices can safely participate in the IIoT and efficiently communicate with each other with no privacy issues. We evaluate our solutions through experiments on various features using three IoT datasets. The results confirm the high accuracy rates of our attack and defense algorithms and show that the A3GAN defensive approach preserves the robustness of data privacy for Android mobile users and achieves about 8% higher accuracy than existing state-of-the-art solutions.
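A robust-aggregation step of this kind can be sketched compactly. The snippet below is a minimal stand-in for the server-side defense: instead of the paper's A3GAN, it uses a simple distance-based outlier filter over client weight updates before federated averaging. The threshold, the synthetic updates, and the function name are all illustrative assumptions.

```python
import numpy as np

def fedavg_with_filter(updates, z_thresh=2.0):
    """Aggregate client weight updates, dropping outliers.

    A crude z-score filter stands in for the paper's GAN-based anomaly
    detection (A3GAN); z_thresh is an assumed hyperparameter.
    """
    updates = np.asarray(updates)                # shape: (clients, params)
    dists = np.linalg.norm(updates - updates.mean(axis=0), axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    kept = updates[z < z_thresh]                 # discard suspected poisoned updates
    return kept.mean(axis=0), int(len(updates) - len(kept))

# Nine honest clients plus one simulated poisoning attack
rng = np.random.default_rng(1)
honest = rng.normal(0.0, 0.1, size=(9, 4))
poisoned = np.full((1, 4), 5.0)
agg, dropped = fedavg_with_filter(np.vstack([honest, poisoned]))
print("aggregated:", np.round(agg, 3), "| dropped:", dropped)
```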

102 citations

08 Mar 2018
TL;DR: This paper explores the state of the art of computationally intelligent (i.e., machine learning) methods applied to load forecasting, in terms of their classification and evaluation, for sustainable operation of the overall energy management system.
Abstract: Energy management systems are designed to monitor, optimize, and control the smart grid energy market. Demand-side management, considered an essential part of the energy management system, can enable utility market operators to make better management decisions for energy trading between consumers and the operator. In this system, a priori knowledge about the energy load pattern can help reshape the load and cut the energy demand curve, thus allowing better management and distribution of energy in smart grid energy systems. Designing a computationally intelligent load forecasting (ILF) system is often a primary goal of energy demand management. This study explores the state of the art of computationally intelligent (i.e., machine learning) methods applied to load forecasting, in terms of their classification and evaluation, for sustainable operation of the overall energy management system. More than 50 research papers on the subject identified in the existing literature are classified into two categories: single and hybrid computational intelligence (CI)-based load forecasting techniques. The advantages and disadvantages of each individual technique are also discussed to place them in perspective within energy management research. The identified methods are further investigated through a qualitative analysis based on prediction accuracy, which confirms the dominance of hybrid forecasting methods, often built as metaheuristic algorithms combining different optimization techniques, over single-model approaches. Based on this extensive survey, the review predicts a continuous future expansion of the literature on different CI approaches and their optimization with both heuristic and metaheuristic methods for energy load forecasting, and their potential utilization in real-time smart energy management grids to address future challenges in energy demand management.
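To make the forecasting task concrete, the sketch below implements one of the simplest single-model baselines the survey would classify: an ordinary-least-squares regression on lagged load values. The synthetic daily-cycle data and the 24-hour lag window are assumptions for illustration, not taken from the survey.

```python
import numpy as np

# Synthetic hourly load with a daily cycle plus noise (stand-in data;
# real systems would use metered demand).
rng = np.random.default_rng(2)
t = np.arange(24 * 60)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

# Lag features: the previous 24 hours predict the next hour.
L = 24
X = np.stack([load[i:i + L] for i in range(load.size - L)])
y = load[L:]

# Single-model baseline: ordinary least squares with an intercept term.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ w
mape = np.mean(np.abs((y - pred) / y)) * 100
print(f"one-step-ahead MAPE: {mape:.2f}%")
```

The hybrid methods the survey favours would wrap a model like this in a heuristic or metaheuristic optimizer that tunes its structure or parameters.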

97 citations

Posted Content
TL;DR: This paper demonstrates the advantage of MAGA over traditional GA, and exploits multi-agent genetic algorithms to solve the load balancing problem in cloud computing, by designing a load balancing model on the basis of virtualization resource management.
Abstract: In this paper, with the aid of a genetic algorithm and fuzzy theory, we present a hybrid job scheduling approach that considers the load balancing of the system and reduces total execution time and execution cost. We modify the standard genetic algorithm, using fuzzy theory to reduce the number of iterations needed to create the population. The main goal of this research is to assign jobs to resources while considering the VM MIPS and the lengths of the jobs. The new algorithm assigns jobs to resources according to job length and resource capacities. We evaluate the performance of our approach against several well-known cloud scheduling models. The experimental results show the efficiency of the proposed approach in terms of execution time, execution cost, and average Degree of Imbalance (DI).
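A minimal version of the GA component (without the fuzzy modification) can be sketched as follows; the job lengths, VM MIPS values, and GA hyperparameters are assumed for illustration. The Degree of Imbalance (DI) mentioned above is computed here as (T_max - T_min) / T_avg over per-VM completion times.

```python
import numpy as np

rng = np.random.default_rng(3)
job_len = rng.integers(1000, 10000, size=40)     # job lengths in MI (assumed)
vm_mips = np.array([500, 1000, 1500, 2000])      # VM capacities in MIPS (assumed)

def vm_times(assign):
    loads = np.zeros(len(vm_mips))
    np.add.at(loads, assign, job_len)            # total MI placed on each VM
    return loads / vm_mips                       # per-VM completion time

def makespan(assign):
    return vm_times(assign).max()

def degree_of_imbalance(assign):
    t = vm_times(assign)
    return (t.max() - t.min()) / t.mean()

# Minimal GA: tournament selection, one-point crossover, random-reset mutation
pop = rng.integers(0, len(vm_mips), size=(50, len(job_len)))
for _ in range(200):
    fit = np.array([makespan(ind) for ind in pop])
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = pop[np.where(fit[i] < fit[j], i, j)]      # tournament selection
    cut = rng.integers(1, len(job_len), len(pop))
    kids = parents.copy()
    for k in range(0, len(pop) - 1, 2):                 # one-point crossover
        kids[k, cut[k]:], kids[k + 1, cut[k]:] = (
            parents[k + 1, cut[k]:].copy(), parents[k, cut[k]:].copy())
    m = rng.random(kids.shape) < 0.01                   # mutation
    kids[m] = rng.integers(0, len(vm_mips), m.sum())
    pop = kids

best = min(pop, key=makespan)
print(f"makespan: {makespan(best):.1f}s, DI: {degree_of_imbalance(best):.3f}")
```

The paper's fuzzy component would replace the blind random initialization and mutation with fuzzy-guided choices, cutting the number of generations needed.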

78 citations

Journal ArticleDOI
TL;DR: This study proposes an efficient policy, called MinRE, for SPP in fog–cloud systems that provides both QoS for IoT services and energy efficiency for fog service providers, classifying services into two categories: critical services and normal ones.
Abstract: Fog computing is a decentralised model that can help cloud computing provide high quality of service (QoS) for Internet of Things (IoT) application services. The service placement problem (SPP) is the mapping of services onto fog and cloud resources. It plays a vital role in the response time and energy consumption of fog–cloud environments. However, providing an efficient solution to this problem is challenging due to the different requirements of services, limited computing resources, and the differing delay and power consumption profiles of devices in the fog domain. Motivated by this, in this study we propose an efficient policy, called MinRE, for SPP in fog–cloud systems. To provide both QoS for IoT services and energy efficiency for fog service providers, we classify services into two categories: critical services and normal ones. For critical services, we propose MinRes, which aims to minimise response time, and for normal ones, we propose MinEng, whose goal is to reduce the energy consumption of the fog environment. Our extensive simulation experiments show that our policy reduces energy consumption by up to 18%, improves the percentage of deadline-satisfied services by up to 14%, and improves the average response time by up to 10% in comparison with the second-best results.
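The two-class dispatch rule at the core of MinRE can be illustrated with a toy placement function: critical services go to the node minimising estimated response time (network delay plus processing time), normal services to the node minimising estimated energy (power draw times processing time). The node parameters below are hypothetical, and the rule is a simplification of the paper's MinRes/MinEng algorithms.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    mips: float        # processing capacity (assumed)
    delay: float       # network delay to the IoT device in seconds (assumed)
    power: float       # active power draw in watts (assumed)

# Hypothetical fog and cloud resources
nodes = [Node("fog-1", 2000, 0.01, 40),
         Node("fog-2", 1500, 0.02, 20),
         Node("cloud", 10000, 0.10, 200)]

def place(service_mi, critical):
    """Pick a node in the spirit of MinRE: response time for critical
    services, energy for normal ones (a simplified dispatch rule)."""
    if critical:   # MinRes-style: minimise delay + processing time
        return min(nodes, key=lambda n: n.delay + service_mi / n.mips)
    else:          # MinEng-style: minimise energy = power * processing time
        return min(nodes, key=lambda n: n.power * service_mi / n.mips)

print(place(100, critical=True).name)    # latency-sensitive -> nearby fast fog node
print(place(100, critical=False).name)   # normal -> most energy-efficient node
```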

52 citations


Cited by
01 Jan 2003
TL;DR: In this article, the authors propose a web of trust, in which each user maintains trust in a small number of other users and then composes these trust values into trust values for all other users.
Abstract: Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user maintains trust in a small number of other users. We then compose these trust values into trust values for all other users. The result of our computation is not an agglomerate "trustworthiness" of each user. Instead, each user receives a personalized set of trusts, which may vary widely from person to person. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.
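One concrete instance of such a combination function, multiplying trust along a path and taking the maximum over paths, can be computed locally in the manner the paper requires. The sketch below uses a hypothetical four-user trust graph; the paper itself defines and evaluates a broader class of combination functions, so this is only one admissible choice.

```python
# Direct trust ratings in [0, 1]; each user rates only a few neighbours.
trust = {
    "alice": {"bob": 0.9, "carol": 0.6},
    "bob":   {"dave": 0.8},
    "carol": {"dave": 0.5},
    "dave":  {},
}

def personalized_trust(source, sink, seen=None):
    """Merge trust along paths: multiply ratings along a path, take the
    max over paths. Cycles are cut by tracking visited users."""
    if source == sink:
        return 1.0
    seen = (seen or set()) | {source}
    scores = [w * personalized_trust(n, sink, seen)
              for n, w in trust[source].items() if n not in seen]
    return max(scores, default=0.0)

print(personalized_trust("alice", "dave"))   # max(0.9*0.8, 0.6*0.5) = 0.72
```

Note the personalization: the value is computed from alice's own outgoing ratings, so another user asking about dave may get a very different answer.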

567 citations

Journal ArticleDOI
TL;DR: Through analyzing the cloud computing architecture, this survey first presents a taxonomy at two levels of scheduling cloud resources, then paints a landscape of the scheduling problem and its solutions, and, following the taxonomy, systematically surveys state-of-the-art approaches.
Abstract: A disruptive technology fundamentally transforming the way that computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience: resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, optimally scheduling them has become an essential and rewarding topic, where a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents a taxonomy at two levels of scheduling cloud resources. It then paints a landscape of the scheduling problem and its solutions. According to the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.

416 citations

Journal ArticleDOI
TL;DR: An extensive survey and comparative analysis of various scheduling algorithms for cloud and grid environments, based on three popular metaheuristic techniques (Ant Colony Optimization, Genetic Algorithm, and Particle Swarm Optimization) and two novel techniques (the League Championship Algorithm (LCA) and the BAT algorithm).

334 citations

Journal ArticleDOI
TL;DR: Explores the role of artificial intelligence (AI), machine learning (ML), and deep reinforcement learning (DRL) in the evolution of smart cities, and presents various research challenges and future research directions where the aforementioned techniques can play an outstanding role in realizing the concept of a smart city.

305 citations

Journal Article
TL;DR: In this article, a cloud task scheduling policy based on the ant colony optimization algorithm is presented and compared with the FCFS and round-robin scheduling algorithms; the main goal of these algorithms is to minimize the makespan of a given task set.
Abstract: Cloud computing is the development of distributed computing, parallel computing, and grid computing, or can be defined as the commercial implementation of these computer science concepts. One of the fundamental issues in this environment is task scheduling. Cloud task scheduling is an NP-hard optimization problem, and many meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling strategy to the changing environment and the types of tasks. In this paper, a cloud task scheduling policy based on the ant colony optimization algorithm is presented and compared with the FCFS and round-robin scheduling algorithms. The main goal of these algorithms is to minimize the makespan of a given task set. Ant colony optimization is a randomized optimization search approach used to allocate incoming jobs to virtual machines. The algorithms were simulated using the CloudSim toolkit. Experimental results showed that ant colony optimization outperformed the FCFS and round-robin algorithms.
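The comparison described can be reproduced in miniature: the sketch below pits round-robin and one common FCFS interpretation (each task, in arrival order, goes to the VM that frees first) against a compact ACO over task-to-VM pheromones. Task lengths, VM speeds, and all ACO parameters are assumed; this is a toy stand-in for the paper's CloudSim experiments, not its implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_tasks, n_vms = 30, 4
length = rng.integers(1000, 9000, size=n_tasks)   # task lengths in MI (assumed)
mips = np.array([500, 1000, 1500, 2000])          # VM speeds in MIPS (assumed)

def makespan(assign):
    loads = np.zeros(n_vms)
    np.add.at(loads, assign, length)
    return (loads / mips).max()

# Round-robin: cycle tasks across VMs in order.
rr = np.arange(n_tasks) % n_vms

# FCFS: each task, in arrival order, goes to the VM that becomes free first.
finish = np.zeros(n_vms)
fcfs = np.empty(n_tasks, dtype=int)
for t in range(n_tasks):
    v = int(finish.argmin())
    fcfs[t] = v
    finish[v] += length[t] / mips[v]

# Compact ACO: pheromone tau over (task, VM) pairs; the heuristic eta
# favours faster VMs; the best-so-far ant reinforces its assignment.
tau = np.ones((n_tasks, n_vms))
eta = mips[None, :] / length[:, None]             # 1 / execution time
best, best_ms = fcfs, makespan(fcfs)
for _ in range(100):
    p = tau * eta
    p /= p.sum(axis=1, keepdims=True)
    for _ant in range(10):
        assign = np.array([rng.choice(n_vms, p=p[t]) for t in range(n_tasks)])
        ms = makespan(assign)
        if ms < best_ms:
            best, best_ms = assign, ms
    tau *= 0.9                                    # pheromone evaporation
    tau[np.arange(n_tasks), best] += 1.0 / best_ms

print(f"RR: {makespan(rr):.1f}s  FCFS: {makespan(fcfs):.1f}s  ACO: {best_ms:.1f}s")
```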

229 citations