Author

Mohsen Guizani

Bio: Mohsen Guizani is an academic researcher from Qatar University. The author has contributed to research in topics including Computer science & Cloud computing. The author has an h-index of 79 and has co-authored 1,110 publications receiving 31,282 citations. Previous affiliations of Mohsen Guizani include Jaypee Institute of Information Technology & University College for Women.


Papers
Proceedings ArticleDOI
25 Jun 2018
TL;DR: A new Scratch program analysis tool based on ANTLR is designed and implemented, and the grading standard for assessing Computational Thinking skills in Dr. Scratch is expanded.
Abstract: With the introduction of computer programming in schools around the world, Scratch has risen in prominence for being thinkable, meaningful and social. Aiming to assess the Computational Thinking skills exhibited by a Scratch program, we design and implement a new Scratch program analysis tool (SAT) based on ANTLR. To address some flaws (e.g., high failure rate and low efficiency) in Dr. Scratch, which is the most relevant existing tool for assessing the Computational Thinking skills of Scratch programs, we choose the recognition tool ANTLR to design the system modules and the assessment flow. We then customize more than 200 lexical and syntax parser rules in ANTLR. Furthermore, we expand the grading standard for assessing Computational Thinking skills used in Dr. Scratch: fundamental concepts in Computer Science, such as the stack, the queue and recursion, are incorporated into our grading standard. Experimental results show that the performance (e.g., success rate, execution time) of SAT is superior to that of Dr. Scratch.
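
As a rough illustration of grammar-driven Computational Thinking grading (not the paper's actual SAT implementation), the Python sketch below walks an already-parsed, Scratch-like AST and awards points for loops, conditionals, and recursion. In the paper the parsing itself is done with customized ANTLR rules; the block kinds, score values, and demo project here are invented for illustration.

```python
# Minimal sketch, assuming a toy AST of nested dicts rather than real ANTLR output.
def score_blocks(block, scores, call_stack=()):
    """Recursively walk blocks and award points per CT dimension."""
    kind = block.get("kind")
    if kind in ("repeat", "forever", "repeat_until"):
        scores["loops"] = max(scores["loops"], 2)
    elif kind in ("if", "if_else"):
        scores["conditionals"] = max(scores["conditionals"], 2)
    elif kind == "procedure_call":
        # Extended standard: detect recursion (a procedure calling itself).
        if block["name"] in call_stack:
            scores["recursion"] = 3
    for child in block.get("children", []):
        score_blocks(child, scores, call_stack)

def assess(project):
    """Return a dict of CT scores for a parsed project."""
    scores = {"loops": 0, "conditionals": 0, "recursion": 0}
    for proc in project["procedures"]:
        for blk in proc["body"]:
            score_blocks(blk, scores, call_stack=(proc["name"],))
    return scores

# Example: a procedure that calls itself inside a conditional.
demo = {"procedures": [{"name": "draw", "body": [
    {"kind": "if", "children": [{"kind": "procedure_call", "name": "draw"}]}
]}]}
print(assess(demo))  # {'loops': 0, 'conditionals': 2, 'recursion': 3}
```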

29 citations

Proceedings ArticleDOI
01 May 2017
TL;DR: In this article, a global framework for a dual-hop RF/FSO system with multiple relays operating in amplify-and-forward (AF) mode with fixed gain is presented.
Abstract: In this work, we present a global framework for a dual-hop RF/FSO system with multiple relays operating in amplify-and-forward (AF) mode with fixed gain. A partial relay selection (PRS) protocol with outdated channel state information (CSI) is assumed, since the channels of the first hop are time-varying. The optical irradiances of the second hop are subject to the Double-Weibull model, while the RF channels of the first hop experience Rayleigh fading. Signal reception is achieved either by heterodyne detection or by intensity modulation and direct detection (IM/DD). In addition, we introduce an aggregate model of hardware impairments at the source (S) and the relays, since they are not perfect nodes. In order to quantify the impact of the impairments on the system, we derive closed-form, approximate, upper-bound, and high signal-to-noise ratio (SNR) asymptotic expressions for the outage probability (OP) and the ergodic capacity (EC). Finally, the analytical results are shown to agree with numerical results obtained via Monte Carlo simulation.
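
To make the outage-probability notion concrete, here is a hedged Monte Carlo sketch of a dual-hop fixed-gain AF link with a Rayleigh first hop, a Double-Weibull (product-of-two-Weibulls) second hop, and a simplified aggregate-impairment model. It is not the paper's closed-form analysis; the end-to-end SNDR formula used is one common simplified form, and every parameter value is an arbitrary assumption.

```python
# Illustrative Monte Carlo sketch, not the paper's derivation.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
avg_snr1, avg_snr2 = 10.0, 10.0      # average per-hop SNRs, linear (assumed)
k1, k2 = 2.0, 2.0                    # Weibull shape parameters (assumed)
C = 1.5                              # fixed-gain relay constant (assumed)
kappa = 0.1                          # aggregate hardware-impairment level (assumed)
gamma_th = 1.0                       # outage SNR threshold, linear (assumed)

# Hop 1: Rayleigh fading -> exponentially distributed instantaneous SNR.
g1 = rng.exponential(avg_snr1, N)

# Hop 2: Double-Weibull irradiance I = I_a * I_b; with IM/DD the SNR is taken
# proportional to I**2, normalized so that its mean equals avg_snr2.
I = rng.weibull(k1, N) * rng.weibull(k2, N)
g2 = avg_snr2 * I**2 / np.mean(I**2)

# End-to-end SNDR of the fixed-gain AF relay with aggregate impairments.
gamma = g1 * g2 / (g1 * g2 * kappa**2 + g2 + C)

print("estimated outage probability:", np.mean(gamma < gamma_th))
```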

29 citations

Journal ArticleDOI
TL;DR: The objective of this article is to highlight the benefits of adopting edge intelligence technology, along with the use of AI, in smart healthcare systems; a novel smart healthcare model is also proposed to boost the utilization of AI and edge technology in smart healthcare systems.
Abstract: The demand for real-time, affordable, and efficient smart healthcare services is increasing exponentially due to the technological revolution and rapid population growth. To meet the increasing demands on this critical infrastructure, intelligent methods are needed to cope with the existing related challenges. In this regard, edge computing technology can reduce latency and energy consumption by moving processing closer to the data sources, in comparison to traditional centralized cloud and IoT-based healthcare systems. In addition, by bringing automated insights into smart healthcare systems, artificial intelligence (AI) offers the possibility of detecting and predicting high-risk diseases in advance, decreasing medical costs for patients, and providing efficient treatments. The objective of this article is to highlight the benefits of adopting edge intelligence technology, along with the use of AI, in smart healthcare systems. Moreover, a novel smart healthcare model is proposed to boost the utilization of AI and edge technology in smart healthcare systems. Additionally, we discuss potential challenges and future research directions that arise when integrating these different technologies.

29 citations

Journal ArticleDOI
TL;DR: Comparative studies conducted using Google data traces show the effectiveness of the proposed framework in terms of improving resource utilization, reducing energy expenses, and increasing cloud profits.
Abstract: This paper exploits cloud task elasticity and price heterogeneity to propose an online resource management framework that maximizes cloud profits while minimizing energy expenses. This is done by reducing the duration during which servers need to be left on and by maximizing monetary revenue when the price charged for some of the elastic tasks depends on how quickly they complete, all while meeting the resource requirements. Comparative studies conducted using Google data traces show the effectiveness of the proposed framework in terms of improving resource utilization, reducing energy expenses, and increasing cloud profits.
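
As a loose illustration of the trade-off described above (keep few servers powered on, but give elastic tasks extra resources only when the speed-dependent revenue beats the energy cost), the Python sketch below combines a first-fit packing step with a greedy elastic-scaling step. The task model, capacities, and cost figures are invented and are not the paper's formulation.

```python
# Minimal greedy sketch under assumed task and cost models.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    cores: int                    # minimum cores required
    bonus_per_extra_core: float   # revenue gained per additional core (assumed model)

@dataclass
class Server:
    capacity: int
    tasks: list = field(default_factory=list)
    used: int = 0

def schedule(tasks, capacity=16, energy_cost_per_core=0.5):
    servers = []
    # 1) First-fit packing keeps the number of powered-on servers small.
    for t in sorted(tasks, key=lambda t: -t.cores):
        for s in servers:
            if s.used + t.cores <= capacity:
                s.tasks.append(t); s.used += t.cores; break
        else:
            s = Server(capacity); s.tasks.append(t); s.used = t.cores
            servers.append(s)
    # 2) Elastic scaling: hand out spare cores only where the completion-time
    #    bonus exceeds the marginal energy cost of running those cores.
    for s in servers:
        for t in sorted(s.tasks, key=lambda t: -t.bonus_per_extra_core):
            while s.used < capacity and t.bonus_per_extra_core > energy_cost_per_core:
                t.cores += 1; s.used += 1
    return servers

demo = [Task("a", 6, 1.2), Task("b", 5, 0.3), Task("c", 4, 0.8)]
for i, s in enumerate(schedule(demo)):
    print(f"server {i}: " + ", ".join(f"{t.name}:{t.cores}" for t in s.tasks))
```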

29 citations

Journal ArticleDOI
TL;DR: HT3O is a scalable scheduling approach built with neural networks via deep RL to obtain real-time scheduling policies for MEC in dynamic environments and can achieve promising performance improvements over state-of-the-art approaches.
Abstract: Due to their high maneuverability and flexibility, unmanned aerial vehicles (UAVs) have been considered a promising paradigm to assist mobile edge computing (MEC) in many scenarios, including disaster rescue and field operations. Most existing research focuses on trajectory and computation-offloading scheduling for UAV-assisted MEC in stationary environments, and can face challenges in dynamic environments where the locations of UAVs and mobile devices (MDs) vary significantly. Some recent research attempts to develop scheduling policies for dynamic environments by means of reinforcement learning. However, because such methods need to explore a high-dimensional state and action space, they may fail to cover large-scale networks where multiple UAVs serve numerous MDs. To address this challenge, we leverage the idea of 'divide-and-conquer' and propose HT3O, a scalable scheduling approach for large-scale UAV-assisted MEC. First, HT3O is built with neural networks via deep reinforcement learning to obtain real-time scheduling policies for MEC in dynamic environments. More importantly, to make HT3O more scalable, we decompose the scheduling problem into two layered sub-problems and optimize them alternately via hierarchical reinforcement learning. This not only substantially reduces the complexity of each sub-problem, but also improves the convergence efficiency. Experimental results show that HT3O can achieve promising performance improvements over state-of-the-art approaches.
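
The two-layer decomposition can be pictured with the structural skeleton below: a coarse-timescale policy places UAV waypoints, a fine-timescale policy makes per-device offloading decisions, and the two layers are updated alternately. This is only a sketch with random placeholder policies and an invented environment; it is not the HT3O implementation, which uses deep neural policies.

```python
# Structural sketch of a two-layer ("divide-and-conquer") scheduler; all
# environment details, rewards, and policy classes are placeholder assumptions.
import random

class HighLevelPolicy:
    """Coarse timescale: choose a waypoint for each UAV."""
    def act(self, state):
        return [(random.uniform(0, 100), random.uniform(0, 100))
                for _ in range(state["num_uavs"])]

class LowLevelPolicy:
    """Fine timescale: pick a serving UAV (or -1 for local compute) per MD."""
    def act(self, state, waypoints):
        return [random.randrange(-1, state["num_uavs"])
                for _ in range(state["num_mds"])]

def rollout(state, high, low, steps=10):
    """One episode: high-level waypoints stay fixed while the low level runs."""
    waypoints = high.act(state)
    total = 0.0
    for _ in range(steps):
        decisions = low.act(state, waypoints)
        total -= random.random() * len(decisions)  # placeholder latency/energy cost
    return total

def train(alternations=6, episodes=20):
    high, low = HighLevelPolicy(), LowLevelPolicy()
    state = {"num_uavs": 3, "num_mds": 50}
    for k in range(alternations):
        # Alternating optimization: update one layer while the other is frozen.
        layer = "high" if k % 2 == 0 else "low"
        returns = [rollout(state, high, low) for _ in range(episodes)]
        print(f"round {k}: updating {layer} layer, "
              f"mean return {sum(returns) / len(returns):.1f}")

train()
```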

29 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
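
The mail-filtering example in the fourth category can be made concrete with a small word-count ("naive Bayes"-style) sketch that learns a per-user reject/keep score from messages the user has already handled. The training data, smoothing constant, and threshold below are invented for illustration.

```python
# Toy per-user mail filter learned from past keep/reject decisions (assumed data).
from collections import Counter
import math

def train_filter(messages):
    """messages: list of (text, rejected: bool) pairs supplied by one user."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, rejected in messages:
        words = text.lower().split()
        counts[rejected].update(words)
        totals[rejected] += len(words)
    return counts, totals

def reject_score(text, counts, totals, alpha=1.0):
    """Log-likelihood ratio of 'reject' vs 'keep'; > 0 suggests filtering."""
    vocab = len(set(counts[True]) | set(counts[False]))
    score = 0.0
    for w in text.lower().split():
        p_rej = (counts[True][w] + alpha) / (totals[True] + alpha * vocab)
        p_keep = (counts[False][w] + alpha) / (totals[False] + alpha * vocab)
        score += math.log(p_rej / p_keep)
    return score

history = [("cheap pills buy now", True),
           ("meeting agenda for monday", False),
           ("buy cheap watches now", True),
           ("monday lunch with the team", False)]
counts, totals = train_filter(history)
print(reject_score("buy now cheap offer", counts, totals) > 0)  # True
```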

13,246 citations

Christopher M. Bishop1
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered, along with neural networks, kernel methods, graphical models, approximate inference, sampling methods, and combining models, in the context of machine learning.
Abstract (contents): Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

01 Jan 2002

9,314 citations