Author

Mohsen Guizani

Bio: Mohsen Guizani is an academic researcher from Qatar University. He has contributed to research in the areas of computer science and cloud computing, has an h-index of 79, and has co-authored 1,110 publications receiving 31,282 citations. His previous affiliations include the Jaypee Institute of Information Technology and University College for Women.


Papers
Journal ArticleDOI
TL;DR: The authors investigate the joint transmission time and power allocation problem for NOMA communication, aiming to improve the sum-throughput while guaranteeing each wireless device's (WD's) throughput in a multi-cell WPCN.
Abstract: The emerging non-orthogonal multiple access (NOMA) technology can effectively improve the throughput performance of Internet of Things (IoT) devices. Besides throughput maximization, ensuring throughput fairness is a practical design issue when implementing NOMA in wireless powered communication networks (WPCN). To this end, we investigate the joint transmission time and power allocation problem for NOMA communication, aiming to improve the sum-throughput while guaranteeing different wireless devices’ (WDs’) throughput in multi-cell WPCN. In particular, we first analyze the feasibility of the problem by deriving the necessary and sufficient conditions for the existence of feasible solutions and propose an efficient algorithm to obtain the set of feasible values of transmission time allocation. We then propose an efficient algorithm for the transmission time allocation to improve the sum-throughput. During each search iteration, we adopt the successive convex approximation (SCA) approach to transform the non-convex power allocation problem into a sequence of convex problems and obtain the locally optimal transmit power under a fixed transmission time. Numerical simulations show that the proposed algorithm can improve the sum-throughput while guaranteeing each WD’s throughput.
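As a rough illustration of the alternating structure described in the abstract, the sketch below applies the standard successive convex approximation (difference-of-concave linearization) step to a toy multi-cell uplink power-allocation problem. This is only a minimal sketch under assumed parameters: the channel gains, power budgets, and noise level are invented, and the paper's harvested-energy limits, per-WD throughput constraints, and outer search over the transmission time are omitted.

```python
# Hypothetical sketch of the SCA step for a multi-cell uplink power-allocation
# problem (toy version; the paper additionally handles harvested-energy budgets,
# per-WD throughput constraints, and an outer search over the time allocation).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
K = 3                                     # number of cells / transmitting WDs (assumed)
G = rng.uniform(0.05, 1.0, size=(K, K))   # G[j, i]: channel gain from WD j to BS i (assumed)
np.fill_diagonal(G, rng.uniform(0.8, 1.2, size=K))
N0 = 1e-2                                 # noise power (assumed)
p_max = np.ones(K)                        # per-WD power budgets (assumed)

def sum_rate(p):
    """True (non-convex) sum-throughput in bit/s/Hz."""
    rates = []
    for i in range(K):
        interf = sum(p[j] * G[j, i] for j in range(K) if j != i)
        rates.append(np.log2(1.0 + p[i] * G[i, i] / (interf + N0)))
    return sum(rates)

def sca_surrogate(p, p_ref):
    """Concave minorant: keep the concave log term, linearize the subtracted one at p_ref."""
    val = 0.0
    for i in range(K):
        total = sum(p[j] * G[j, i] for j in range(K)) + N0
        interf_ref = sum(p_ref[j] * G[j, i] for j in range(K) if j != i) + N0
        # first-order Taylor expansion of log2(interference + N0) around p_ref
        lin = np.log2(interf_ref) + sum(
            G[j, i] * (p[j] - p_ref[j]) for j in range(K) if j != i
        ) / (interf_ref * np.log(2.0))
        val += np.log2(total) - lin
    return val

p = 0.5 * p_max                           # feasible starting point
for it in range(20):                      # SCA iterations
    res = minimize(lambda q: -sca_surrogate(q, p), p,
                   bounds=[(1e-6, pm) for pm in p_max])
    p_new = res.x
    if np.linalg.norm(p_new - p) < 1e-5:
        break
    p = p_new

print("locally optimal powers:", np.round(p, 3))
print("sum-throughput:", round(sum_rate(p), 3), "bit/s/Hz")
```

Each surrogate problem is concave and tight at the current point, so the iterates monotonically improve the true sum-throughput and converge to a locally optimal power allocation, mirroring the inner step the abstract describes for a fixed transmission time.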

8 citations

Journal ArticleDOI
TL;DR: A word-distributed sensitive topic representation model (WDS-LDA) based on hybrid human-AI (H-AI) that gives representative words greater weight, sharpens the distinction among the words of different topics, and improves the precision of downstream algorithms that use the topic model, such as topic detection and topic-evolution analysis.
Abstract: With the widespread use of online social networks, billions of pieces of information are generated every day. Detecting new topics quickly and accurately at such a data scale plays a vital role in information recommendation and public opinion control. One of the basic research tasks of topic detection is how to represent a topic. Existing topic representation models do not focus on selecting well-differentiated words to represent topics; they remain computer-centered and do not effectively combine human intelligence with artificial intelligence (AI). To address these problems, this article proposes a word-distributed sensitive topic representation model (WDS-LDA) based on hybrid human-AI (H-AI). The basic idea is that the distribution of words within a topic, and among different topics, strongly influences the selection of topic expression words. If a word is evenly distributed among all documents of a certain topic, it is common to all documents in that topic and is therefore well suited to represent it. If a word is evenly distributed among the various topics, it is a common word across all topics, cannot be used to distinguish among them, and is therefore less suitable to represent any topic. At the same time, human cognitive ability and cognitive models are introduced into topic representation based on H-AI: users' modifications of topic expression words are fed back into the topic model so that it can learn from human judgment and become increasingly accurate. Therefore, three different weights are introduced: an inside weight, an outside weight, and a manual adjustment weight. The inside weight measures how uniformly a word is distributed within the given topic, the outside weight measures how uniformly it is distributed across all topics, and the manual adjustment weight reflects whether the word has been judged suitable as a representative word in past manual adjustments. Tests on real data sets from Sina microblog show that WDS-LDA makes the representative words more prominent, increases the distinction among the words of different topics, and effectively improves the precision of subsequent algorithms that use the topic model, such as topic detection and topic-evolution analysis.
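The abstract does not give the exact formulas for the three weights, so the minimal sketch below uses normalized entropy as an assumed uniformity measure; the function names, the multiplicative combination rule, and the toy counts are all illustrative assumptions, not the WDS-LDA definitions.

```python
# Illustrative sketch of the three word weights described for WDS-LDA.
# Normalized entropy stands in for the unspecified uniformity measure.
import numpy as np

def normalized_entropy(counts):
    """Entropy of a count vector, scaled to [0, 1] (1 = perfectly uniform)."""
    counts = np.asarray(counts, dtype=float)
    if counts.sum() == 0 or len(counts) < 2:
        return 0.0
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(counts)))

def word_score(word, topic_doc_counts, topic_totals, manual_weight=1.0):
    """
    topic_doc_counts: occurrences of `word` in each document of the target topic
    topic_totals:     occurrences of `word` aggregated per topic (all topics)
    manual_weight:    assumed multiplier learned from past manual adjustments
    """
    inside = normalized_entropy(topic_doc_counts)     # uniform within the topic -> suitable
    outside = 1.0 - normalized_entropy(topic_totals)  # uniform across topics -> unsuitable
    return inside * outside * manual_weight

# Toy example: "vaccine" appears evenly in the health topic's documents but
# rarely elsewhere, so it scores higher than the generic word "today".
print(word_score("vaccine", topic_doc_counts=[5, 4, 6, 5], topic_totals=[20, 1, 0, 1]))
print(word_score("today",   topic_doc_counts=[9, 0, 1, 0], topic_totals=[10, 9, 11, 10]))
```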

8 citations

Posted Content
TL;DR: A novel optimization-driven ML framework for IRS-assisted wireless networks that improves both convergence and reward performance compared to conventional model-free learning approaches.
Abstract: Intelligent reflecting surfaces (IRS) have recently been employed to reshape wireless channels by controlling the phase shifts of individual scattering elements, namely, passive beamforming. Due to the large number of scattering elements, passive beamforming is typically challenged by high computational complexity and inexact channel information. In this article, we focus on machine learning (ML) approaches for performance maximization in IRS-assisted wireless networks. In general, ML approaches provide enhanced flexibility and robustness against uncertain information and imprecise modeling. Practical challenges remain, however, mainly due to the demand for a large dataset in offline training and slow convergence in online learning. These observations motivate us to design a novel optimization-driven ML framework for IRS-assisted wireless networks, which combines the efficiency of model-based optimization with the robustness of model-free ML approaches. The decision variables are split into two parts: one part is obtained by the outer-loop ML approach, while the other part is optimized efficiently by solving an approximate problem. Numerical results verify that the optimization-driven ML approach can improve both the convergence and the reward performance compared to conventional model-free learning approaches.
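A minimal sketch of the described split, under assumed models, is given below: the outer loop is a simple perturb-and-keep random search standing in for the model-free learner and proposes the IRS phase shifts, while the inner model-based step allocates transmit power across users in closed form by water-filling over the resulting effective channels. The channel model, dimensions, and the random-search learner are illustrative and are not the paper's algorithm.

```python
# Toy "optimization-driven" split: outer loop searches over IRS phases,
# inner step solves the remaining power allocation in closed form.
import numpy as np

rng = np.random.default_rng(1)
N, K, P_tot, N0 = 16, 4, 1.0, 1e-3        # IRS elements, users, power budget, noise (assumed)

h_d = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2) * 0.1    # weak direct links
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)            # AP -> IRS
f = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)  # IRS -> users

def effective_gains(theta):
    """|direct + reflected|^2 per user for a given IRS phase-shift vector."""
    reflected = (np.exp(1j * theta) * g) @ f          # shape (K,)
    return np.abs(h_d + reflected) ** 2

def water_filling(gains):
    """Inner model-based step: closed-form power allocation over users."""
    order = np.argsort(gains)[::-1]
    for m in range(K, 0, -1):
        active = order[:m]
        mu = (P_tot + np.sum(N0 / gains[active])) / m
        p = mu - N0 / gains[active]
        if np.all(p > 0):
            powers = np.zeros(K)
            powers[active] = p
            return powers
    return np.zeros(K)

def sum_rate(theta):
    gains = effective_gains(theta)
    p = water_filling(gains)
    return np.sum(np.log2(1.0 + p * gains / N0))

# Outer loop: perturb-and-keep search standing in for the model-free learner.
theta = rng.uniform(0, 2 * np.pi, size=N)
best = sum_rate(theta)
for step in range(500):
    cand = (theta + rng.normal(scale=0.3, size=N)) % (2 * np.pi)
    r = sum_rate(cand)
    if r > best:
        theta, best = cand, r

print(f"sum-rate after outer search: {best:.2f} bit/s/Hz")
```

The point of the split is visible even in this toy: the inner closed-form step removes the power variables from the learner's search space, so the outer loop only has to explore the phase shifts.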

8 citations

Posted Content
TL;DR: Discusses the challenges of supporting IoT through 5G systems and shows how sparsity can be exploited to address them, particularly for enabling wideband spectrum management and for handling massive connectivity by exploiting device-to-device communications and the edge cloud.
Abstract: Besides enabling enhanced mobile broadband, the next generation of mobile networks (5G) is envisioned to support massive connectivity of heterogeneous Internet of Things (IoT) devices. These devices target a large number of use cases, including smart cities, environment monitoring, and smart vehicles. Unfortunately, most IoT devices have very limited computing and storage capabilities and need cloud services. Hence, connecting these devices through 5G systems requires huge spectrum resources in addition to handling the massive connectivity and improved security. This article discusses the challenges facing the support of IoT through 5G systems. The focus is on physical-layer limitations in terms of spectrum resources and radio access channel connectivity. We show how sparsity can be exploited to address these challenges, especially in terms of enabling wideband spectrum management and handling connectivity by exploiting device-to-device communications and the edge cloud. Moreover, we identify major open problems and research directions that need to be explored to enable the support of massive heterogeneous IoT through 5G systems.
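To make the "exploit sparsity" idea concrete, the toy compressive-sensing sketch below recovers a wideband spectrum with only a few occupied channels from far fewer random measurements than channels, using orthogonal matching pursuit. The dimensions and the Gaussian measurement model are assumptions for illustration and are not taken from the article.

```python
# Toy sparse spectrum recovery: few occupied channels, sub-Nyquist measurements.
import numpy as np

rng = np.random.default_rng(2)
N, M, s = 256, 64, 5                      # channels, measurements, occupied channels (assumed)

x = np.zeros(N)                           # sparse spectrum occupancy (power per channel)
x[rng.choice(N, size=s, replace=False)] = rng.uniform(1.0, 3.0, size=s)

Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # random sub-Nyquist measurement matrix
y = Phi @ x                                  # compressed measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick k columns that best explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, s)
print("true occupied channels:     ", sorted(np.flatnonzero(x)))
print("recovered occupied channels:", sorted(np.flatnonzero(np.round(x_hat, 6))))
```

Because only s of the N channels are active, roughly M on the order of s·log(N) measurements suffice, which is the kind of saving in spectrum-sensing effort the article argues for.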

8 citations

Proceedings ArticleDOI
01 Dec 2020
TL;DR: The authors introduce a methodology, DistPrivacy, that secures sensitive data by rethinking the distribution strategy without adding any computation overhead, formulating it as an optimization problem that establishes a trade-off between the latency of co-inference, the privacy level of the data, and the limited resources of the IoT participants.
Abstract: With the emergence of smart cities, Internet of Things (IoT) devices as well as deep learning technologies have witnessed increasing adoption. To support the memory and computation requirements of this paradigm, joint, real-time deep co-inference frameworks built on IoT synergy have been introduced. However, distributing Deep Neural Networks (DNNs) has drawn attention to the privacy protection of sensitive data. In this context, various threats have been presented, including black-box attacks, where a malicious participant can accurately recover an arbitrary input fed into their device. In this paper, we introduce a methodology aiming to secure the sensitive data by rethinking the distribution strategy, without adding any computation overhead. First, we examine the characteristics of the model structure that make it susceptible to privacy threats. We find that the more devices the model's feature maps are divided across, the better the properties of the original image are hidden. We formulate this methodology, namely DistPrivacy, as an optimization problem in which we establish a trade-off between the latency of co-inference, the privacy level of the data, and the limited resources of the IoT participants. Due to the NP-hardness of the problem, we introduce an online heuristic that supports heterogeneous IoT devices as well as multiple DNNs and datasets, making the pervasive system a general-purpose platform for privacy-aware, low-decision-latency applications.
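A minimal sketch of such a distribution strategy is shown below; it is not the paper's DistPrivacy heuristic, only a greedy illustration under assumed parameters. A layer's feature maps are spread over IoT devices so that no single device holds more than an assumed privacy budget, subject to per-device memory, while using as few devices as possible as a rough latency proxy.

```python
# Greedy feature-map allocation illustrating the privacy/latency/resource trade-off.
# All device names, capacities, and the privacy budget are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    memory: int          # number of feature maps the device can hold
    assigned: int = 0

def allocate(num_feature_maps, devices, max_share=0.3):
    """Spread a layer's feature maps, capping each device at max_share of the layer."""
    per_device_cap = max(1, int(max_share * num_feature_maps))
    remaining = num_feature_maps
    plan = []
    for dev in sorted(devices, key=lambda d: d.memory, reverse=True):
        if remaining == 0:
            break
        take = min(remaining, dev.memory, per_device_cap)
        if take > 0:
            dev.assigned = take
            plan.append((dev.name, take))
            remaining -= take
    if remaining > 0:
        raise RuntimeError("not enough devices to satisfy the privacy/memory limits")
    return plan

devices = [Device("cam-1", 40), Device("gw-1", 64), Device("sensor-7", 16),
           Device("phone-2", 48), Device("hub-3", 32)]
print(allocate(num_feature_maps=128, devices=devices, max_share=0.3))
# Lowering max_share forces the layer onto more devices: better hiding of the
# input's properties, but more transmissions per inference (higher latency).
```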

8 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Presents probability distributions and linear models for regression and classification, along with neural networks, kernel methods, graphical models, approximate inference, and a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations