Author

Mohsen Guizani

Bio: Mohsen Guizani is an academic researcher from Qatar University. The author has contributed to research in topics: Computer science & Cloud computing. The author has an h-index of 79 and has co-authored 1110 publications receiving 31282 citations. Previous affiliations of Mohsen Guizani include Jaypee Institute of Information Technology & University College for Women.


Papers
Journal ArticleDOI
TL;DR: A federated learning-based DT framework is proposed and a Secure and lAtency-aware dIgital twin assisted resource scheduliNg algoriThm (SAINT) is presented, which enables intelligent resource scheduling by using DT to improve the learning performance of deep Q-learning.
Abstract: Digital twin (DT) provides accurate guidance for multidimensional resource scheduling in 5G edge computing-empowered distribution grids by establishing a digital representation of the physical entities. In this article, we address the critical challenges of DT construction and DT-assisted resource scheduling, such as low accuracy, large iteration delay, and security threats. We propose a federated learning-based DT framework and present a Secure and lAtency-aware dIgital twin assisted resource scheduliNg algoriThm (SAINT). SAINT achieves a low-latency, accurate, and secure DT by jointly optimizing its total iteration delay and loss function and by leveraging abnormal model recognition (AMR). SAINT enables intelligent resource scheduling by using the DT to improve the learning performance of deep Q-learning. SAINT supports access-priority and energy-consumption awareness by accounting for long-term constraints. Compared with state-of-the-art algorithms, SAINT achieves superior performance in cumulative iteration delay, DT loss function, energy consumption, and access-priority deficit.
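The abstract names deep Q-learning as the scheduling core but does not spell it out, so the following is only a minimal sketch of the kind of deep Q-learning update a DT-assisted scheduler could build on; the state and action dimensions, network size, and reward here are hypothetical placeholders, not the authors' implementation.

```python
# Minimal deep Q-learning update step (illustrative sketch only; state,
# action, and reward definitions for the scheduling problem are assumed).
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 8, 4, 0.95          # hypothetical sizes

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(state, action, reward, next_state, done):
    """One TD update: Q(s,a) <- r + gamma * max_a' Q(s',a')."""
    q_sa = q_net(state)[action]                    # current estimate
    with torch.no_grad():
        target = reward + (1.0 - done) * GAMMA * q_net(next_state).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for a scheduling state.
s, s_next = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
print(dqn_update(s, action=1, reward=0.3, next_state=s_next, done=0.0))
```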

41 citations

Journal ArticleDOI
TL;DR: The performance evaluation shows that the dynamic contract design satisfies the IIC constraints and derives greater profits than the uniform pricing scheme, thus validating its effectiveness in mitigating the adverse impacts of information asymmetry.
Abstract: Currently, the data collected by the Internet of Healthcare Things, i.e., healthcare-oriented Internet of Things (IoT), still rely on cloud-based centralized data aggregation and processing. To reduce the need to transmit data to the cloud, an edge computing architecture may be adopted to facilitate machine learning at the edge of the network by leveraging the amassed computation resources of pervasive IoT devices. In this article, federated learning (FL) is proposed to enable privacy-preserving collaborative model training at the edge of the network across distributed IoT users. However, the users in the FL network may have different willingness to participate (WTP), hidden information unknown to the model owner. Furthermore, the development of healthcare applications typically requires sustainable user participation, e.g., for the continuous collection of data, during which a user’s WTP may change over time. As such, we leverage dynamic contract design and consider a two-period incentive mechanism that satisfies intertemporal incentive compatibility (IIC), such that the self-revealing mechanism of the contract holds across both periods. The performance evaluation shows that our contract design satisfies the IIC constraints and derives greater profits than the uniform pricing scheme, thus validating its effectiveness in mitigating the adverse impacts of the information asymmetry.
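The FL component described above amounts to aggregating locally trained models from distributed users; the contract design itself is analytical and not reproduced here. Below is only a rough sketch of a federated-averaging round under assumed settings (a linear model, toy simulated clients, invented names), not the paper's algorithm.

```python
# Illustrative federated-averaging aggregation round; a sketch only, not the
# paper's algorithm. Each simulated IoT client returns its locally updated
# weight vector and the number of local samples it trained on.
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """Hypothetical local step: a few epochs of least-squares gradient descent."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w, len(y)

def fed_avg(updates):
    """Weight each client's model by its sample count and average."""
    weights, counts = zip(*updates)
    return np.average(np.stack(weights), axis=0, weights=np.array(counts, float))

# Toy round with three simulated clients drawn from the same linear model.
rng = np.random.default_rng(0)
true_w, global_w = np.array([1.0, -2.0, 0.5]), np.zeros(3)
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 3))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=20)))

for _ in range(10):                                 # ten communication rounds
    global_w = fed_avg([local_update(global_w, X, y) for X, y in clients])
print(global_w)                                     # approaches true_w
```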

41 citations

Journal ArticleDOI
TL;DR: A testing framework for learning-based Android malware detection systems (TLAMD) for IoT devices is proposed that can generate adversarial samples for IoT Android applications with a success rate of nearly 100% and can perform black-box testing on the system.
Abstract: Many IoT (Internet of Things) systems run Android or Android-like systems. With the continuous development of machine learning algorithms, learning-based Android malware detection systems for IoT devices have steadily increased in number. However, these learning-based detection models are often vulnerable to adversarial samples, so an automated testing framework is needed to help such detection systems undergo security analysis. Current methods of generating adversarial samples mostly require the models' training parameters, and most are aimed at image data. To address this, we propose a testing framework for learning-based Android malware detection systems (TLAMD) for IoT devices. The key challenge is how to construct a suitable fitness function that generates an effective adversarial sample without affecting the features of the application. By introducing genetic algorithms and some technical improvements, our testing framework can generate adversarial samples for IoT Android applications with a success rate of nearly 100% and can perform black-box testing on the system.
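The abstract names the approach (a genetic algorithm steered by a fitness function) but not its feature encoding or fitness terms, so the block below is only a generic sketch of a black-box evasion loop under assumed settings: a binary feature vector (e.g., requested permissions), a detector exposing just a malware-probability score, and add-only perturbations so the app's original features are preserved. The toy detector and all names are hypothetical, not the TLAMD implementation.

```python
# Illustrative black-box genetic-algorithm evasion loop; a sketch only, not
# the TLAMD implementation. Perturbations may only ADD features, so the
# application's original features are never removed.
import random

N_FEATURES, POP, GENERATIONS, MUT_RATE = 50, 30, 40, 0.05   # assumed settings

def detector_score(features):
    """Stand-in for the black-box classifier: returns a malware probability."""
    suspicious, benign = sum(features[:10]), sum(features[10:])   # toy rule
    return suspicious / (suspicious + benign + 1e-9)

def fitness(original, candidate):
    """Lower is better: evade the detector while changing as little as possible."""
    added = sum(c and not o for o, c in zip(original, candidate))
    return detector_score(candidate) + 0.01 * added

def mutate(original, candidate):
    child = [1 if random.random() < MUT_RATE else c for c in candidate]
    return [max(o, c) for o, c in zip(original, child)]   # never drop a feature

def attack(original):
    population = [mutate(original, original) for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=lambda c: fitness(original, c))
        parents = population[: POP // 2]
        children = [mutate(original, random.choice(parents))
                    for _ in range(POP - len(parents))]
        population = parents + children
    return min(population, key=lambda c: fitness(original, c))

sample = [1] * 10 + [0] * 40               # toy "malicious" feature vector
print(detector_score(sample), detector_score(attack(sample)))
```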

40 citations

Journal ArticleDOI
TL;DR: The proposed methodology was able to detect different types of emulated attack patterns efficiently, thereby notifying the patient about a possible attack on the DBS, and it helps in diagnosing fake versus genuine stimulations.
Abstract: Deep brain stimulators (DBSs), a widely used and comprehensively acknowledged restorative methodology, are a type of implantable medical device that uses electrical stimulation to treat neurological disorders. These devices are widely used to treat conditions such as Parkinson’s disease, movement disorders, epilepsy, and psychiatric disorders. Security in such devices plays a vital role since it can directly affect the mental, emotional, and physical state of the human body; in worst-case situations, it can even lead to the patient’s death. An adversary, for instance, can inhibit the normal functionality of the brain by introducing fake stimulation inside the human brain. Moreover, the adversary can impair motor functions, alter impulse control, induce pain, or even modify the emotional pattern of the patient by giving fake stimulations through the DBS. This paper presents a deep learning methodology to predict different attack stimulations in DBSs. The proposed work uses long short-term memory (LSTM), a type of recurrent network, to forecast rest tremor velocity (a characteristic observed to evaluate the severity of neurological diseases). The prediction helps in diagnosing fake versus genuine stimulations. The effect of deep brain stimulation was tested on Parkinson tremor patients. The proposed methodology was able to detect different types of emulated attack patterns efficiently, thereby notifying the patient about a possible attack.
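The abstract describes an LSTM that forecasts rest tremor velocity so that a large gap between forecast and observation can flag a possibly fake stimulation; since the paper's architecture and data are not given here, the sketch below uses a hypothetical window size, layer sizes, and anomaly threshold and is not the authors' model.

```python
# Illustrative LSTM forecaster for rest-tremor velocity; a sketch under
# assumed settings, not the paper's model. Idea: predict the next velocity
# sample from a sliding window and flag a stimulation as possibly fake when
# the observed value deviates strongly from the forecast.
import torch
import torch.nn as nn

WINDOW, HIDDEN, THRESHOLD = 32, 16, 3.0    # hypothetical hyperparameters

class TremorLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, 1)

    def forward(self, x):                   # x: (batch, WINDOW, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # one-step-ahead forecast

model = TremorLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(window, target):
    """window: (batch, WINDOW, 1); target: (batch, 1) next velocity sample."""
    pred = model(window)
    loss = loss_fn(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def is_suspicious(window, observed):
    """Flag a possible fake stimulation if the forecast residual is too large."""
    with torch.no_grad():
        residual = (model(window) - observed).abs().item()
    return residual > THRESHOLD
```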

40 citations

Journal ArticleDOI
TL;DR: Results show that the proposed model can support a larger number of active nodes with less energy than conventional first-come-first-served methods, and that the coverage utility of sensor nodes is much higher with this method than with the on-demand recharging request schemes found in existing studies.

40 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
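The mail-filtering example at the end of the abstract is easy to make concrete; as a toy sketch (not from the cited article, with invented messages and labels), the snippet below learns "reject vs. keep" rules from examples using a bag-of-words Naive Bayes classifier instead of hand-coded rules.

```python
# Toy illustration of the mail-filtering example: learn "reject vs. keep"
# rules from a user's past decisions instead of hand-coding them. A sketch
# only; the messages and labels below are invented for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",            # rejected by this user
    "limited offer click here",        # rejected
    "meeting moved to 3pm",            # kept
    "quarterly report attached",       # kept
]
labels = ["reject", "reject", "keep", "keep"]

# Bag-of-words features + Naive Bayes: a classical text-classification baseline.
filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["free prize offer", "report for the meeting"]))
# expected on this toy data: ['reject' 'keep']
```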

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented in this book, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations