Author

Mohsen Guizani

Bio: Mohsen Guizani is an academic researcher at Qatar University. The author has contributed to research on topics including computer science and cloud computing. The author has an h-index of 79 and has co-authored 1,110 publications that have received 31,282 citations. Previous affiliations of Mohsen Guizani include the Jaypee Institute of Information Technology and University College for Women.


Papers
Journal ArticleDOI
TL;DR: A new scheme creates symmetric encryption keys to encrypt the wireless communication of Implantable Medical Devices; it relies on chaotic systems to obtain a synchronized pseudo-random key without a wireless key exchange, thus protecting patients from key theft.
Abstract: Healthcare remote devices are recognized as a promising technology for treating health-related issues. Among them are the wireless Implantable Medical Devices (IMDs): these electronic devices are manufactured to treat, monitor, support, or replace defective vital organs while implanted in the human body. Thus, they play a critical role in healing and even saving lives. Current IMD research trends concentrate on medical reliability. However, deploying wireless technology in such applications without considering security measures may offer adversaries an easy way to compromise them. With the aim of securing these devices, we explore a new scheme that creates symmetric encryption keys to encrypt the wireless communication portion. We rely on chaotic systems to obtain a synchronized pseudo-random key, which is generated separately at each end of the system so as to avoid a wireless key exchange, thus protecting patients from key theft. Once the key is defined, a simple encryption system that we propose in this paper is used. We analyze the performance of this system from a cryptographic point of view to ensure that it offers better safety and protection for patients.
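As a rough sketch of the kind of scheme described, the toy cipher below derives a keystream from a logistic map, a common chaotic system: two devices seeded with the same parameters generate the same key material independently, so no key ever crosses the wireless link. The map, the seed value, and the XOR cipher here are illustrative assumptions, not the paper's actual construction.

```python
# Toy chaos-based symmetric cipher: both ends iterate the same chaotic map
# from a shared seed, so the keystream stays synchronized without ever being
# transmitted. Illustrative only; not the paper's actual scheme.

def logistic_keystream(x0, r, n):
    """Generate n pseudo-random bytes from the logistic map x -> r*x*(1-x)."""
    x = x0
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_crypt(data, x0, r=3.99):
    """Encrypt or decrypt by XOR with the synchronized keystream (symmetric)."""
    keystream = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, keystream))

msg = b"glucose=5.4 mmol/L"
ct = xor_crypt(msg, x0=0.6123)          # implant side
assert xor_crypt(ct, x0=0.6123) == msg  # monitor side recovers the reading
```

Applying the same function twice with the same seed recovers the plaintext, which is the defining property of an XOR stream cipher and the reason a single shared seed can replace an over-the-air key exchange.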

12 citations

Journal ArticleDOI
13 Sep 2020, Sensors
TL;DR: This paper proposes a joint channel selection and power adaptation scheme for the underlay cognitive radio network (CRN), maximizing the data rate of all secondary users (SUs) while guaranteeing the quality of service (QoS) of primary users (PUs).
Abstract: Cognitive radio (CR) is a critical technique for resolving the conflict between the explosive growth of traffic and severe spectrum scarcity. Reasonable radio resource allocation with CR can effectively achieve spectrum sharing and co-channel interference (CCI) mitigation. In this paper, we propose a joint channel selection and power adaptation scheme for the underlay cognitive radio network (CRN), maximizing the data rate of all secondary users (SUs) while guaranteeing the quality of service (QoS) of primary users (PUs). To exploit the underlying topology of CRNs, we model the communication network as a dynamic graph, with random walks used to imitate users' movements. In the absence of accurate channel state information (CSI), we use the user distance distribution contained in the graph to estimate CSI. Moreover, a graph convolutional network (GCN) is employed to extract the crucial interference features. Further, an end-to-end learning model is designed to carry out the resource allocation task directly, avoiding a split pipeline in which extracted features are mismatched with the task. Finally, the deep reinforcement learning (DRL) framework is adopted for model learning to explore the optimal resource allocation strategy. The simulation results verify the feasibility and convergence of the proposed scheme and show that its performance is significantly improved.
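As a much simplified illustration of the underlay constraint described above, the sketch below brute-forces a joint channel and power choice for a single secondary user, estimating channel gains from distances in place of exact CSI. All distances, the noise floor, the interference cap, and the path-loss exponent are invented for the example; the paper learns this decision with a GCN and DRL rather than enumerating it.

```python
import itertools
import math

# Toy joint channel selection and power adaptation for an underlay CRN.
# Channel gains are estimated from user distances via a path-loss model,
# standing in for exact CSI. All numbers below are invented.

def gain(distance, exponent=3.0):
    """Distance-based path-loss estimate of the channel gain."""
    return distance ** (-exponent)

def best_action(channels, powers, d_su=50.0, d_pu=120.0,
                noise=1e-9, i_max=1e-7):
    """Pick (channel, power) maximizing the SU rate under the PU QoS cap."""
    best = None
    for ch, p in itertools.product(channels, powers):
        if p * gain(d_pu) > i_max:        # would violate the PU's QoS guarantee
            continue
        rate = math.log2(1 + p * gain(d_su) / noise)  # Shannon rate, bits/s/Hz
        if best is None or rate > best[2]:
            best = (ch, p, rate)
    return best

# The 1.0 W setting is rejected by the interference cap, so 0.1 W wins.
choice = best_action(channels=[0, 1], powers=[0.01, 0.1, 1.0])
```

The interference cap is what makes this an underlay scheme: the secondary user may transmit on an occupied channel only while the estimated interference it causes at the primary user stays below `i_max`.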

12 citations

Journal ArticleDOI
TL;DR: This work proposes a cross-domain recommender system, including three approaches, based on multi-source social big data, and shows that the accuracies of the three proposed approaches are significantly improved compared with the conventional recommender approaches, such as collaborative filtering and matrix factorization.
Abstract: With the explosion of social data comes a great challenge called information overload. To overcome this challenge, recommender systems are expected to support users in quickly accessing appropriate content. However, cold-start users are a formidable challenge in the design of recommender systems because conventional recommendation services are based on a single data source, namely, a single field. Considering the advantages of social-based and cross-domain approaches involving additional data, we propose a cross-domain recommender system, comprising three approaches, based on multi-source social big data. The proposed approach is expected to effectively alleviate the issues of cold-start users by transferring user preferences from a related auxiliary domain to a target domain. Moreover, the transferred preferences are able to improve the diversity of recommendations. Through evaluations on a real-world dataset in the book and music domains, it is shown that the accuracies of the three proposed approaches are significantly improved compared with conventional recommender approaches, such as collaborative filtering and matrix factorization. In particular, the proposed approaches are able to provide cold-start users with highly effective recommendations.
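A minimal sketch of the transfer idea: similarity between users is computed in the auxiliary domain (music), where the cold-start user does have history, and then used to weight other users' ratings in the target domain (books). The rating matrices and the nearest-neighbour aggregation are invented for illustration; the paper's three approaches are more sophisticated than this.

```python
import math

# Toy cross-domain recommendation for a cold-start user. The user has music
# history but no book history, so book ratings are predicted from users who
# are similar in the music domain. All ratings are invented.

music = [[5, 1, 4], [4, 2, 5], [1, 5, 2]]   # users x music items (auxiliary)
books = [[5, 2], [4, 1], [2, 5]]            # users x book items (target)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def predict_books(new_user_music):
    """Weight each known user's book ratings by music-domain similarity."""
    sims = [max(cosine(new_user_music, row), 0.0) for row in music]
    total = sum(sims)
    return [sum(s * books[i][j] for i, s in enumerate(sims)) / total
            for j in range(len(books[0]))]

# A new user whose music taste matches users 0 and 1 inherits their book taste.
pred = predict_books([5.0, 1.0, 5.0])
```

Because the similarity weights come entirely from the auxiliary domain, the prediction works even though the new user has rated no books at all, which is exactly the cold-start situation the abstract targets.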

12 citations

Journal ArticleDOI
TL;DR: This paper proposes a caching strategy that caches only the chunks of videos likely to be watched; instead of one edge node offloading an entire video, helpers collaborate to store and share different chunks, optimizing storage and transmission resource usage.
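The idea can be sketched as follows: only chunks whose predicted watch probability clears a threshold are cached at all, and those chunks are spread across helper nodes rather than stored on a single edge node. The watch probabilities, helper names, and capacities below are made-up illustration values, not anything from the paper.

```python
# Toy collaborative chunk caching: only the chunks likely to be watched are
# cached, and they are spread across helper nodes instead of one edge node
# holding the whole video. All probabilities and capacities are invented.

def place_chunks(watch_prob, helpers, threshold=0.5):
    """Assign each sufficiently popular chunk to the helper with most free slots."""
    placement = {}                      # chunk index -> helper name
    free = dict(helpers)                # helper -> remaining cache slots
    for chunk, p in enumerate(watch_prob):
        if p < threshold:               # skip chunks unlikely to be watched
            continue
        helper = max(free, key=free.get)
        if free[helper] == 0:
            break                       # all helper storage exhausted
        placement[chunk] = helper
        free[helper] -= 1
    return placement

# Viewers rarely finish a video, so the late chunks fall below the threshold
# and are never cached; the early chunks are split between the two helpers.
plan = place_chunks([0.9, 0.8, 0.6, 0.3, 0.1], {"h1": 2, "h2": 1})
```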

12 citations

Journal ArticleDOI
TL;DR: An integer linear programming (ILP) model and a dynamic programming algorithm are proposed to maximize the number of successfully served IoT data tasks with satisfactory security requirements while minimizing the end-to-end transmission delay.
Abstract: Numerous new applications have proliferated with the maturing of 5G, generating a large number of latency-sensitive and computationally intensive mobile data requests. The real-time requirements of these requests have been accommodated well by fog computing in the past few years, mainly by offloading tasks to fog nodes in the vicinity. On the other hand, the privacy of user data in the Internet of Things (IoT) has not been sufficiently considered in the presence of insecure fog nodes. It is risky to offload an entire mission-critical task to just one fog node or to several fog nodes owned by the same service provider (SP), especially when the SP has a low security credit and tends to collect user data for malicious use. To address this issue, we classify IoT user tasks based on their security requirements, divide them into different numbers of smaller fragments, and, finally, offload the fragments of a task to multiple fog nodes owned by the same or various SPs according to their security requirements. The selected fog nodes collaboratively serve the divided fragments to avoid possible damage caused by leaks of sensitive data through compromised fog nodes of malicious SPs. For this, we propose an integer linear programming (ILP) model and a dynamic programming algorithm to maximize the number of successfully served IoT data tasks with satisfactory security requirements while minimizing the end-to-end transmission delay. The numerical results show that the proposed ILP model and algorithm significantly increase the successful provisioning ratio for tasks with high security requirements.
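As a simplified illustration of the placement constraint, the greedy sketch below splits a task into fragments and assigns them to the lowest-delay fog nodes subject to one rule: no two fragments of the same task may land on nodes of the same service provider. The node names, providers, and delays are invented, and the paper solves the full problem with an ILP model and a dynamic programming algorithm rather than this greedy heuristic.

```python
# Toy security-aware fragment placement: fragments of a high-security task
# must land on fog nodes owned by distinct service providers, so that no
# single compromised SP ever sees the whole task. All node data is invented.

def place_fragments(n_fragments, nodes):
    """nodes: list of (name, provider, delay_ms). Returns chosen nodes or None."""
    used_sps = set()
    chosen = []
    for name, sp, delay in sorted(nodes, key=lambda n: n[2]):  # fastest first
        if sp in used_sps:              # never give one SP two fragments
            continue
        chosen.append((name, sp, delay))
        used_sps.add(sp)
        if len(chosen) == n_fragments:
            return chosen
    return None                          # not enough independent providers

fogs = [("f1", "spA", 5), ("f2", "spA", 6), ("f3", "spB", 7), ("f4", "spC", 9)]
plan = place_fragments(3, fogs)          # f2 is skipped: spA already holds one
```

Note that asking for four fragments fails here, since only three distinct providers exist; in the paper's formulation, the number of fragments itself is chosen according to the task's security class.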

12 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
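The mail-filter example in the fourth category can be sketched as a tiny Bayesian word-count classifier: the program learns per-word spam odds from messages the user has already labelled, rather than the user writing filtering rules by hand. The training messages and the Laplace-smoothed scoring below are illustrative, not taken from any real system.

```python
import math
from collections import Counter

# Tiny learned mail filter: per-word spam odds are estimated from messages
# the user has labelled, with Laplace smoothing for unseen words.

def train(labelled):
    """labelled: list of (text, is_spam). Returns word counts per class."""
    spam, ham = Counter(), Counter()
    n_spam = n_ham = 0
    for text, spam_flag in labelled:
        words = text.lower().split()
        if spam_flag:
            spam.update(words); n_spam += 1
        else:
            ham.update(words); n_ham += 1
    return spam, ham, n_spam, n_ham

def is_spam(text, model):
    """Classify by comparing smoothed log-likelihoods under each class."""
    spam, ham, n_spam, n_ham = model
    vocab = len(set(spam) | set(ham))
    t_spam, t_ham = sum(spam.values()), sum(ham.values())
    score = math.log(n_spam / n_ham)                      # class prior ratio
    for w in text.lower().split():
        score += math.log((spam[w] + 1) / (t_spam + vocab))
        score -= math.log((ham[w] + 1) / (t_ham + vocab))
    return score > 0

model = train([("win free money now", True),
               ("free prize claim now", True),
               ("meeting notes attached", False),
               ("lunch at noon tomorrow", False)])
```

Retraining on each newly labelled message is what "maintains the filtering rules automatically": the word statistics shift as the user keeps rejecting mail, with no rule editing required.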

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented in this work, along with a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations