scispace - formally typeset
Author

Mohsen Guizani

Bio: Mohsen Guizani is an academic researcher from Qatar University. The author has contributed to research in topics: Computer science & Cloud computing. The author has an h-index of 79 and has co-authored 1110 publications receiving 31,282 citations. Previous affiliations of Mohsen Guizani include Jaypee Institute of Information Technology & University College for Women.


Papers
Journal ArticleDOI
TL;DR: This paper constructs a jitter graph-based network model and a Poisson process-based traffic model for 5G mobile networks, and designs a QoS-oriented adaptive routing scheme based on a Deep Reinforcement Learning (DRL) architecture with a parameterized action space in order to find an optimal path from the source to the destination.
Abstract: The increasing complexity and dynamics of 5G mobile networks have brought revolutionary changes to their modeling and control, where efficient routing and resource allocation strategies become essential. Software-Defined Networking (SDN) makes it possible to automate the management of network resources. Relying on the powerful decision-making capability of SDN, network association can be flexibly implemented to adapt to the real-time network status. In this paper, we first construct a jitter graph-based network model and a Poisson process-based traffic model in the context of 5G mobile networks. Second, we solve the problem of QoS routing with resource allocation based on queueing theory using a greedy algorithm of low computational complexity, which takes finding a feasible path set as the main task and resource allocation as the auxiliary task. Finally, we design a QoS-oriented adaptive routing scheme based on a Deep Reinforcement Learning (DRL) architecture with a parameterized action space, in order to find an optimal path from the source to the destination. To validate the feasibility of the greedy QoS routing strategy with resource allocation, we carry out a numerical packet-level simulation of an M/M/C/N queueing system. Moreover, extensive simulation results demonstrate that the proposed routing strategy improves the traffic's QoS metrics, such as the packet loss ratio and queueing delay.
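
The packet loss ratio and queueing delay reported above follow from standard M/M/c/N queueing formulas. A minimal sketch (the parameter values are hypothetical, not the paper's simulation settings) that computes the steady-state blocking probability and mean queue length:

```python
from math import factorial

def mmcn_metrics(lam, mu, c, N):
    """Steady-state metrics of an M/M/c/N queue.

    lam: arrival rate, mu: per-server service rate,
    c: number of servers, N: system capacity (in service + waiting).
    Returns (loss_probability, mean_queue_length).
    """
    a = lam / mu  # offered load in Erlangs
    # Unnormalized state probabilities pi_n for n = 0..N.
    pi = []
    for n in range(N + 1):
        if n <= c:
            pi.append(a**n / factorial(n))
        else:
            pi.append(a**n / (factorial(c) * c**(n - c)))
    norm = sum(pi)
    p = [x / norm for x in pi]
    loss = p[N]  # an arriving packet is dropped when the system is full
    lq = sum((n - c) * p[n] for n in range(c + 1, N + 1))  # mean queue length
    return loss, lq

# Illustrative parameters only.
loss, lq = mmcn_metrics(8.0, 1.0, 10, 20)
```

Queueing delay then follows from Little's law, W_q = L_q / (lambda * (1 - loss)), on the admitted traffic.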

13 citations

Proceedings ArticleDOI
05 Oct 2015
TL;DR: This paper proposes three energy-efficient and interference-aware routing protocols: Inverse Energy-Efficient Depth-Based Routing (IEEDBR), Interference-Aware Energy-Efficient Depth-Based Routing (IA-EEDBR), and Interference-Aware Inverse Energy-Efficient Depth-Based Routing (IA-IEEDBR).
Abstract: The unique characteristics of Underwater Wireless Sensor Networks (UWSNs) have attracted the research community to explore different aspects of these networks. Routing is one of the most important and challenging functions in UWSNs for efficient data communication and long sensor-node battery life. Sensor nodes are energy-constrained because replacing their batteries is an expensive and difficult task in a harsh aqueous environment. Interference is also a major performance-influencing factor, so solutions for interference-free communication are essential. In this paper, we propose three energy-efficient and interference-aware routing protocols: Inverse Energy-Efficient Depth-Based Routing (IEEDBR), Interference-Aware Energy-Efficient Depth-Based Routing (IA-EEDBR), and Interference-Aware Inverse Energy-Efficient Depth-Based Routing (IA-IEEDBR). Unlike EEDBR, the IEEDBR protocol uses depth and minimum residual energy information to select the data forwarder, while IA-EEDBR takes the minimum number of neighbors into account for forwarder selection. IA-IEEDBR considers depth, minimum residual energy, and the minimum number of neighbors when selecting a forwarder. Our proposed schemes are validated through simulation, and the results demonstrate better performance in terms of improved network lifetime, maximized throughput, and reduced path loss.
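
The IA-IEEDBR selection criteria above (smaller depth, smaller residual energy, fewer neighbors) can be sketched as a weighted score over candidate next hops. The weights and min-max normalization here are illustrative assumptions, not the protocols' actual rules:

```python
def select_forwarder(neighbors, w_depth=0.5, w_energy=0.3, w_nbrs=0.2):
    """Pick a next-hop forwarder among neighbor nodes, preferring
    smaller depth, smaller residual energy, and fewer neighbors
    (the IA-IEEDBR criteria; weights here are hypothetical).

    neighbors: list of dicts with keys 'id', 'depth',
    'residual_energy', 'neighbor_count'.
    """
    def norm(values):
        # Min-max normalize each criterion to [0, 1].
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    depths = norm([n["depth"] for n in neighbors])
    energies = norm([n["residual_energy"] for n in neighbors])
    counts = norm([n["neighbor_count"] for n in neighbors])
    scores = [w_depth * d + w_energy * e + w_nbrs * c
              for d, e, c in zip(depths, energies, counts)]
    return neighbors[scores.index(min(scores))]  # lowest score wins

candidates = [
    {"id": "a", "depth": 40, "residual_energy": 80, "neighbor_count": 5},
    {"id": "b", "depth": 10, "residual_energy": 20, "neighbor_count": 2},
    {"id": "c", "depth": 60, "residual_energy": 90, "neighbor_count": 8},
]
next_hop = select_forwarder(candidates)
```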

13 citations

Journal ArticleDOI
TL;DR: Zhang et al. proposed an image super-resolution method based on overcomplete dictionaries and the inherent self-similarity of an image to recover the high-resolution (HR) image from a single low-resolution (LR) image.
Abstract: Magnetic resonance imaging (MRI) images often have limited and unsatisfactory resolution due to physical, technological, and economic constraints. Super-resolution techniques can obtain high-resolution MRI images. Traditional methods enhance the resolution of brain MRI by interpolation, which affects the accuracy of the subsequent diagnosis, while the requirement for brain image quality is increasing rapidly. In this paper, we propose an image super-resolution method based on overcomplete dictionaries and the inherent self-similarity of an image to recover the high-resolution (HR) image from a single low-resolution (LR) image. We use the linear relationship among images in the measurement and frequency domains to classify image blocks into smooth, texture, and edge feature blocks in the measurement domain. The dictionaries for the different block types are trained separately, so an LR image block of interest can be reconstructed using the most appropriate dictionary. In addition, we exploit the nonlocal similarity of the image to search for similar blocks in the whole image and present a joint reconstruction method based on compressed sensing (CS), taking the sparsity and self-similarity of the image blocks as constraints. The proposed method proceeds in the following steps. First, image blocks are classified into smooth, texture, and edge parts by analyzing their features in the measurement domain. Then, the corresponding dictionaries are trained using the classified image blocks. Finally, in the reconstruction stage, the CS reconstruction method recovers the HR brain MRI image with both the nonlocal similarity and the sparsity of the image as constraints.
This method performs better both visually and quantitatively than several existing methods.
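
Reconstructing a block against a trained dictionary typically reduces to a sparse-coding problem. A minimal Orthogonal Matching Pursuit (OMP) sketch in NumPy, standing in for whatever sparse solver the paper actually uses (the dictionary here is random, purely for illustration):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: find a k-sparse code a with y ~ D @ a.

    D: dictionary (m x n, unit-norm columns), y: signal (m,), k: sparsity.
    """
    residual = y.astype(float).copy()
    support = []
    a = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit coefficients on the selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    a[support] = coef
    return a

# Illustrative demo: recover a known 2-sparse code.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 40))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
x_true = np.zeros(40)
x_true[3], x_true[17] = 3.0, -2.0     # a 2-sparse code
y = D @ x_true
x_hat = omp(D, y, k=4)
```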

13 citations

Posted Content
TL;DR: 5G security is explored by combining the physical layer and the logical layer from the perspective of automated attack and defense, with the aim of providing an automated solution framework for 5G security.
Abstract: The 5th generation (5G) network adopts a great number of revolutionary technologies to fulfill the continuously increasing requirements of a variety of applications, including ultra-high bandwidth, ultra-low latency, ultra-massive device access, and ultra-reliability. Correspondingly, traditional security, which focuses on the core network and the logical (non-physical) layer, is no longer suitable for the 5G network. 5G security tends to extend from the network center to the network edge and from the logical layer to the physical layer, making physical layer security an essential part of 5G security. However, the security of each layer in 5G is mostly studied separately, which leads to a lack of comprehensive analysis of cross-layer security issues; meanwhile, potential security threats lack automated solutions. This article explores 5G security by combining the physical layer and the logical layer from the perspective of automated attack and defense, and aims to provide an automated solution framework for 5G security.

13 citations

Journal ArticleDOI
TL;DR: This work formalizes the cooperative offloading of a reusable task as a coalitional game to maximize cost savings, and shows that CGCO matches the optimal exhaustive search (ES) method while CGCO-M comes close to ES in terms of cost ratios.
Abstract: Mobile-edge computing (MEC) has been a promising solution for Internet-of-Things (IoT) applications to obtain latency reduction and energy savings. In loosely coupled applications, multiple devices can use the same task code with different input parameters to obtain diverse results. This motivates us to study cooperation between devices to eliminate repeated data transmission. Leveraging coalitional game theory, we formalize the cooperative offloading process of a reusable task as a coalitional game to maximize the cost savings. In particular, we first propose an efficient coalitional game-based cooperative offloading (CGCO) algorithm for the single-task model, and then extend it into a CGCO-M algorithm for the multiple-task model by jointly applying a two-stage flow-shop scheduling approach, which helps to obtain an optimal task schedule. It is proved that CGCO and CGCO-M achieve a Nash-stable solution with a convergence guarantee, and that CGCO obtains an optimal solution. The simulations show that CGCO matches the optimal exhaustive search (ES) method and that CGCO-M comes close to ES in terms of cost ratios. The cost ratios of CGCO and CGCO-M are reduced by 41.08% and 83.70%, respectively, compared to local execution. Meanwhile, CGCO-M obtains 41.46% and 89.74% reductions when the reuse factor is 0.1 and 1, respectively, which means CGCO-M saves more cost at higher reuse density.
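
Switch-based coalition formation toward a Nash-stable partition, as invoked above, can be sketched as follows. The cost model (a one-time code-upload cost shared across the coalition, plus an assumed coordination penalty that grows with size) is a toy stand-in, not the paper's CGCO algorithm:

```python
def member_cost(size, code_cost, penalty):
    # Each member's share of the one-time code upload, plus an assumed
    # coordination penalty that grows with coalition size.
    return code_cost / size + penalty * (size - 1)

def form_coalitions(n, code_cost, penalty):
    """Switch operations: each device moves to the coalition that strictly
    lowers its own cost, until no device wants to move (Nash-stable)."""
    coalitions = [{i} for i in range(n)]  # start from singletons
    changed = True
    while changed:
        changed = False
        for i in range(n):
            cur = next(c for c in coalitions if i in c)
            cur_cost = member_cost(len(cur), code_cost, penalty)
            best, best_cost = None, cur_cost
            for c in coalitions:
                if c is cur:
                    continue
                join_cost = member_cost(len(c) + 1, code_cost, penalty)
                if join_cost < best_cost - 1e-12:  # strict improvement only
                    best, best_cost = c, join_cost
            if best is not None:
                cur.discard(i)
                best.add(i)
                if not cur:
                    coalitions.remove(cur)
                changed = True
    return coalitions
```

Each switch strictly lowers a potential (the mover's cost change), so the loop terminates. With no penalty, everyone pools into the grand coalition; a positive penalty yields an interior stable coalition size.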

13 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
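
The mail-filtering example above can be made concrete with a tiny naive Bayes classifier that learns from messages the user has already labeled. The training data, tokenization, and add-one smoothing here are illustrative choices, not a specific system:

```python
from collections import Counter
from math import log

def train(messages):
    """Learn per-class word counts from (text, label) pairs,
    with label in {'spam', 'ham'}."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one smoothing over the training vocabulary."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        n = sum(counts[label].values())
        score = log(totals[label] / sum(totals.values()))  # class prior
        for w in text.lower().split():
            score += log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical labeled mail, standing in for a user's reject decisions.
data = [("win money now", "spam"), ("free money prize", "spam"),
        ("meeting at noon", "ham"), ("lunch at noon today", "ham")]
counts, totals = train(data)
```

As the user labels more mail, retraining on the growing set updates the filter automatically, which is exactly the maintenance burden the passage says learning removes.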

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented in this book, along with a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations