Author

Mohsen Guizani

Bio: Mohsen Guizani is an academic researcher from Qatar University. The author has contributed to research in the areas of computer science and cloud computing, has an h-index of 79, and has co-authored 1,110 publications receiving 31,282 citations. Previous affiliations of Mohsen Guizani include the Jaypee Institute of Information Technology and the University College for Women.


Papers
Journal ArticleDOI
TL;DR: The three articles in this special section are devoted to wireless broadband communications, with special emphasis on WiMAX and other new applications.

5 citations

Journal ArticleDOI
TL;DR: STAC prolongs the network lifetime while preserving error-bounded data precision through its spatio-temporal approximation and correlation-variation verification mechanisms.

5 citations

Journal ArticleDOI
TL;DR: Inspired by the recent success of multi-agent learning in online control, this article adopts a centralized-training, distributed-execution learning paradigm, designs a hierarchical social-based DTN architecture, and proposes a collaborative multi-agent reinforcement learning (QMIX) aided routing algorithm.
Abstract: A delay-tolerant network (DTN) is designed to operate effectively in heterogeneous networks that may lack continuous connectivity. Such networks are characterized by the absence of instantaneous end-to-end paths, which makes designing effective DTN routing protocols difficult. Traditional routing algorithms largely rely on greedy schemes, which cannot guarantee that packets eventually reach their destinations and therefore suffer poor transmission efficiency. Recently, social-based methods have attracted considerable attention in wireless network routing: they use community and centrality information to increase the delivery rate of the whole network. Therefore, in this article, we introduce a social-based mechanism into our DTN routing design. A further challenge is how the distributed nodes can learn collaboration strategies. Inspired by the recent success of multi-agent learning in online control, we adopt a centralized-training, distributed-execution learning paradigm and design a hierarchical social-based DTN architecture. Based on this, we propose a collaborative multi-agent reinforcement learning (termed QMIX) aided routing algorithm.
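The monotonic value-mixing idea at the heart of QMIX can be sketched numerically. The following is a minimal illustrative sketch in NumPy, not the authors' implementation: per-agent Q-values are combined into a joint Q_tot through non-negative mixing weights produced by state-conditioned hypernetworks, so improving any agent's local Q never decreases the joint value. All shapes, parameters, and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def qmix_total(agent_qs, state, w1, b1, w2, b2):
    # Hypernetworks conditioned on the global state emit the mixing weights;
    # abs() keeps them non-negative, preserving QMIX's monotonicity property.
    W1 = np.abs(state @ w1).reshape(len(agent_qs), -1)    # (n_agents, hidden)
    W2 = np.abs(state @ w2).reshape(-1, 1)                # (hidden, 1)
    hidden = np.maximum(agent_qs @ W1 + state @ b1, 0.0)  # ReLU mixing layer
    return (hidden @ W2 + state @ b2).item()

n_agents, state_dim, hidden_dim = 3, 4, 8
state = rng.normal(size=state_dim)
agent_qs = rng.normal(size=n_agents)  # one local Q-value per DTN node (agent)
# Hypernetwork parameters: fit centrally during training, while at execution
# time each node only needs its own Q-network to pick an action.
w1 = rng.normal(size=(state_dim, n_agents * hidden_dim))
b1 = rng.normal(size=(state_dim, hidden_dim))
w2 = rng.normal(size=(state_dim, hidden_dim))
b2 = rng.normal(size=(state_dim, 1))

q_tot = qmix_total(agent_qs, state, w1, b1, w2, b2)
```

Because the mixing weights are non-negative, the argmax of Q_tot decomposes into per-agent argmaxes, which is what allows distributed execution after centralized training.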

5 citations

Proceedings ArticleDOI
10 Apr 2022
TL;DR: This paper proposes a dynamic network slicing and resource allocation framework that aims at maintaining high-level network operational performance, while fulfilling diverse services’ requirements and KPIs, e.g., availability, reliability, and data quality.
Abstract: 5G networks are designed not only to transport data but also to process them, while supporting a vast number of services with different Key Performance Indicators (KPIs). Network virtualization has emerged to enable this vision; however, it calls for efficient computing and network resource allocation schemes that support diverse services while jointly considering all KPIs associated with them. Thus, this paper proposes a dynamic network slicing and resource allocation framework that aims at maintaining high-level network operational performance while fulfilling diverse services' requirements and KPIs, e.g., availability, reliability, and data quality. Unlike existing works, which are designed around traditional metrics such as throughput and latency, we present a novel methodology and resource allocation schemes that enable high-quality selection of radio points of access, resource allocation, and data routing from end users to the cloud. Our results show that the proposed solutions obtain the best trade-off among diverse services' requirements when compared to baseline approaches that consider a partial network view or fair resource allocation.
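The kind of multi-KPI selection such a framework performs can be illustrated with a toy weighted-scoring sketch. The candidates, KPI values, and weights below are invented for illustration; the paper's actual allocation scheme is considerably more involved.

```python
# Hypothetical KPI-weighted selection of a radio point of access (RPA).
# All names and numbers are illustrative placeholders.
candidates = {
    "rpa_A": {"availability": 0.99, "reliability": 0.95, "data_quality": 0.80},
    "rpa_B": {"availability": 0.97, "reliability": 0.99, "data_quality": 0.90},
}
weights = {"availability": 0.5, "reliability": 0.3, "data_quality": 0.2}

def score(kpis):
    # Scalarize the per-service KPIs into a single comparable utility.
    return sum(weights[k] * v for k, v in kpis.items())

best_rpa = max(candidates, key=lambda name: score(candidates[name]))
```

Even this toy version shows why jointly weighing KPIs matters: rpa_A wins on availability alone, but rpa_B offers the better overall trade-off once reliability and data quality are factored in.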

5 citations

Proceedings ArticleDOI
20 May 2019
TL;DR: A multi-objective optimization framework for secure wireless health monitoring applications is proposed: a legitimate link transmits a vital EEG signal threatened by a passive eavesdropping attack that aims at wiretapping these measurements, and the framework incorporates the practical secrecy metric of secrecy outage probability.
Abstract: In this paper, we investigate a multi-objective optimization framework for secure wireless health monitoring applications. In particular, we consider a legitimate link for the transmission of a vital EEG signal, threatened by a passive eavesdropping attack that aims at wiretapping these measurements. We incorporate into our framework the practical secrecy metric, namely the secrecy outage probability (SOP), which requires only side information about the eavesdropper (Ev) rather than its complete instantaneous channel state information (CSI). To that end, we formulate an optimization problem that maximizes the energy efficiency of the transmitter while minimizing the distortion introduced into the signal by the compression process prior to transmission, under realistic quality of service (QoS) constraints. The problem is shown to be nonconvex and NP-complete. To solve it, a branch and bound (BnB)-based algorithm is presented that obtains a δ-suboptimal solution relative to the global optimum. Numerical results verify the system performance and show that our proposed approach outperforms similar systems deploying fixed compression policies (FCPs): we meet the QoS requirements while optimizing the system objectives under all channel conditions, which cannot be attained by the FCP approaches. Interestingly, we also show that a target secrecy rate can be practically achieved with nonzero probability even when the Ev has, on average, a better channel condition than the legitimate receiver.
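A branch-and-bound scheme with a δ-suboptimality certificate, of the general kind the abstract describes, can be sketched for a one-dimensional Lipschitz objective. The objective, Lipschitz constant, and interval bounding rule below are illustrative assumptions, not the paper's formulation.

```python
import heapq

def branch_and_bound(f, lipschitz, lo, hi, delta=1e-3):
    # delta-suboptimal maximization of a Lipschitz function on [lo, hi].
    # Interval upper bound: f(midpoint) + L * half-width.
    def upper(a, b):
        return f((a + b) / 2) + lipschitz * (b - a) / 2

    heap = [(-upper(lo, hi), lo, hi)]  # max-heap via negated bounds
    best_x = (lo + hi) / 2
    best_val = f(best_x)
    while heap:
        neg_ub, a, b = heapq.heappop(heap)
        if -neg_ub <= best_val + delta:  # delta-optimality certificate
            break
        m = (a + b) / 2
        if f(m) > best_val:
            best_x, best_val = m, f(m)
        for sub in ((a, m), (m, b)):
            if upper(*sub) > best_val + delta:  # prune dominated branches
                heapq.heappush(heap, (-upper(*sub), *sub))
    return best_x, best_val

# Toy objective: concave, maximized at x = 0.3 with value 1.0.
x_opt, val = branch_and_bound(lambda x: 1 - (x - 0.3) ** 2, 1.4, 0.0, 1.0,
                              delta=1e-4)
```

The loop stops as soon as the best remaining upper bound is within δ of the incumbent, so the returned value is guaranteed to be within δ of the global optimum, which is exactly the sense in which a BnB solution is "δ-suboptimal".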

5 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
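The mail-filtering example can be made concrete with a toy learned classifier. The sketch below uses Laplace-smoothed word likelihoods (a simple naive-Bayes-style rule); the messages and vocabulary are invented for illustration, and a real filter would learn from each user's own accept/reject decisions.

```python
from collections import Counter
import math

# Toy labeled mail: the filter's "rules" are inferred from these examples
# rather than hand-programmed by an expert.
spam = ["win money now", "free money offer", "win a free prize"]
ham = ["meeting agenda attached", "project status update", "lunch meeting today"]

spam_counts = Counter(w for d in spam for w in d.split())
ham_counts = Counter(w for d in ham for w in d.split())
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts):
    # Laplace-smoothed per-word log-probabilities under one class.
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def classify(msg):
    s = log_likelihood(msg, spam_counts)
    h = log_likelihood(msg, ham_counts)
    return "spam" if s > h else "ham"
```

When the user rejects or accepts new messages, retraining is just recounting words, which is how such a system can "maintain the filtering rules automatically".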

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and methods for combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations