Author

Mohsen Guizani

Bio: Mohsen Guizani is an academic researcher from Qatar University. The author has contributed to research on topics including Computer science and Cloud computing. The author has an h-index of 79 and has co-authored 1110 publications receiving 31282 citations. Previous affiliations of Mohsen Guizani include Jaypee Institute of Information Technology and University College for Women.


Papers
Journal ArticleDOI
TL;DR: It is shown through analysis that the normalized mean squared auto-correlation function (ACF) of the pulse waveforms can be used as an effective figure of merit to judge their suitability for use in a DS-CDMA UWB radio.
Abstract: This paper proposes an approach to analyze the pulse-waveform-dependent bit error rate (BER) performance of a DS-CDMA ultra wideband (UWB) radio operating in a frequency selective fading channel. The analysis takes into account almost all real operational conditions, such as asynchronous transmissions, RAKE receivers, multiple access interference (MAI), multipath interference (MI), log-normal shadowing, and noise. The main objective of the paper is to reveal the relationship between the time domain characteristics of pulse waveforms and the BER of a UWB radio. It is shown through analysis (and validated by simulation) that the normalized mean squared auto-correlation function (ACF) of the pulse waveforms can be used as an effective figure of merit to judge their suitability for use in a DS-CDMA UWB radio. In fact, the normalized mean squared ACF governs the average inter-chip interference caused by the imperfect auto-correlation of the pulse waveforms. The paper concludes that, as long as the power spectral density (PSD) functions of the pulse waveforms fit the FCC spectral mask, the pulse waveforms' normalized mean squared ACF should be minimized to ensure an acceptable BER.

18 citations
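The merit figure described in the abstract can be sketched numerically. The sketch below is an illustration under assumptions of my own: a Gaussian-monocycle test pulse and a simple sidelobe-averaging normalization, not the paper's exact definition.

```python
import numpy as np

def normalized_mean_squared_acf(pulse):
    # Full autocorrelation of the pulse, normalized so the zero-lag
    # value (the pulse energy) equals 1.
    acf = np.correlate(pulse, pulse, mode="full")
    center = len(pulse) - 1
    acf = acf / acf[center]
    # Nonzero-lag sidelobes are what cause inter-chip interference;
    # average their squared magnitude as the merit figure.
    sidelobes = np.delete(acf, center)
    return float(np.mean(sidelobes ** 2))

# Gaussian monocycle (second derivative of a Gaussian), a common UWB test pulse
t = np.linspace(-1, 1, 201)
sigma = 0.15
pulse = (1 - (t / sigma) ** 2) * np.exp(-t ** 2 / (2 * sigma ** 2))
merit = normalized_mean_squared_acf(pulse)  # smaller means less inter-chip interference
```

Under the paper's conclusion, pulses whose PSD fits the FCC mask would be compared on this quantity, preferring the smallest value.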

Journal ArticleDOI
TL;DR: The experimental results on real-world datasets demonstrate that the proposed algorithms achieve a blocking effect that matches the greedy algorithm as the number of positive seeds increases, often outperform other heuristic algorithms, and run four orders of magnitude faster than the greedy algorithm.
Abstract: Influence blocking maximization (IBM) is a key problem for viral marketing in competitive social networks. Although the IBM problem has been extensively studied, existing works neglect the fact that location information can play an important role in influence propagation. In this paper, we study location-based seed selection for the IBM problem, which aims to find a positive seed set in a given query region that blocks the negative influence propagation in a given block region as much as possible. To overcome the low efficiency of the simulation-based greedy algorithm, we propose a heuristic algorithm, IS-LSS, and its improved version, IS-LSS+, both of which are based on the maximum influence arborescence structure and a Quadtree index; IS-LSS+ further improves the efficiency of IS-LSS by using an upper-bound method and Quadtree cell lists. The experimental results on real-world datasets demonstrate that our proposed algorithms achieve a blocking effect that matches the greedy algorithm as the number of positive seeds increases, often outperform other heuristic algorithms, and run four orders of magnitude faster than the greedy algorithm.

18 citations
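The simulation-based greedy baseline that the paper's heuristics are compared against can be sketched as follows. This is a hedged illustration assuming a competitive independent-cascade diffusion model with a uniform propagation probability; the function and variable names are my own, and the paper's exact diffusion model and query/block region constraints are not reproduced here.

```python
import random

def simulate_negative_spread(graph, neg_seeds, pos_seeds, prob=0.1, runs=200):
    # Monte Carlo estimate of how many nodes the negative campaign reaches
    # under a simple competitive independent-cascade model: the first
    # campaign to activate a node claims it.
    total = 0
    for _ in range(runs):
        state = {v: "pos" for v in pos_seeds}
        state.update({v: "neg" for v in neg_seeds if v not in state})
        frontier = list(state)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in state and random.random() < prob:
                        state[v] = state[u]  # node adopts the reaching campaign
                        nxt.append(v)
            frontier = nxt
        total += sum(1 for s in state.values() if s == "neg")
    return total / runs

def greedy_block(graph, neg_seeds, candidates, k, **kw):
    # Greedy baseline: repeatedly add the candidate positive seed that
    # most reduces the expected negative spread.
    chosen = []
    for _ in range(k):
        best = min((c for c in candidates if c not in chosen),
                   key=lambda c: simulate_negative_spread(
                       graph, neg_seeds, chosen + [c], **kw))
        chosen.append(best)
    return chosen
```

The cost that motivates IS-LSS is visible here: each greedy step reruns many Monte Carlo simulations per candidate, which is what the arborescence-plus-Quadtree heuristics avoid.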

Proceedings ArticleDOI
01 May 2017
TL;DR: This paper proposes the use of process state synchronization (PSS) as a mechanism to mitigate the impact of network disconnections on the service continuity of cloud-based interactive mobile applications, and develops a mathematical model that incorporates the disconnection and synchronization intervals and the capabilities of both the mobile device and the cloud.
Abstract: Mobile Cloud Computing (MCC) extends cloud services to resource-constrained mobile devices. Compute-intensive mobile applications can be augmented using the cloud either in a client/server model or through cyber foraging. However, long or permanent network disconnections due to user mobility increase the execution time and, in certain cases, prevent the mobile devices from receiving a response for the remotely performed execution. In this paper, we propose the use of process state synchronization (PSS) as a mechanism to mitigate the impact of network disconnections on the service continuity of cloud-based interactive mobile applications. To validate PSS-based execution, we develop a mathematical model that incorporates the disconnection and synchronization intervals, as well as the capabilities of the mobile device and the cloud. The comparison with existing mechanisms shows that PSS reduces the execution time by up to 47% for intermittent network connectivity compared to COMET and by up to 35% compared to optimized VM-based offloading.

18 citations

Journal ArticleDOI
TL;DR: This paper introduces a cloud radio access network (C-RAN)-based vehicular network architecture, named C-VRAN, which facilitates efficient management and centralized processing of vehicular networks, and proposes a discrete cosine transform (DCT)-based data compression scheme for C-VRAN to enhance the effective data rate of the fronthaul network.
Abstract: Next-generation (5G) vehicular networks will support various network applications, leading to specific requirements and challenges for wireless access technologies. This trend has motivated the development of the long-term evolution-vehicle (LTE-V) network, a 5G cellular-based vehicular technology. Due to the limited bandwidth for vehicular communications, it is important to efficiently utilize scarce spectrum resources in vehicular networks. In this paper, we introduce a cloud radio access network (C-RAN)-based vehicular network architecture, named C-VRAN, which facilitates efficient management and centralized processing of vehicular networks. Furthermore, we propose a discrete cosine transform (DCT)-based data compression scheme for C-VRAN to enhance the effective data rate of the fronthaul network. This scheme first uses the DCT to perform time-frequency conversion of LTE-V I/Q data, then utilizes the Lloyd-Max algorithm to quantize the data in the frequency domain, and finally selects an appropriate coding scheme to achieve better performance. Simulation results show that the proposed scheme can achieve a 3x compression ratio within 1% error vector magnitude distortion, and it also has strong independence and versatility, allowing it to be used as a standalone module for the current LTE-V system.

18 citations
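The DCT-plus-Lloyd-Max stage of the pipeline can be sketched in a few lines. This is an illustrative sketch on a synthetic signal: the explicit-matrix DCT, the level count, and the iteration count are my own assumptions, and the final entropy-coding stage of the scheme is omitted.

```python
import numpy as np

def dct_ii(x):
    # Orthonormal DCT-II built from an explicit transform matrix.
    n = len(x)
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= 1 / np.sqrt(2)
    return np.sqrt(2 / n) * basis @ x

def lloyd_max_levels(samples, n_levels=16, iters=50):
    # Lloyd's algorithm for a scalar (Lloyd-Max) quantizer: alternate
    # between assigning samples to the nearest level and recomputing
    # each level as the centroid of its assigned samples.
    levels = np.linspace(samples.min(), samples.max(), n_levels)
    for _ in range(iters):
        idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
        for j in range(n_levels):
            if np.any(idx == j):
                levels[j] = samples[idx == j].mean()
    return levels

def quantize(samples, levels):
    idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
    return levels[idx]

# Sketch of the pipeline on a synthetic stand-in for LTE-V I/Q data
rng = np.random.default_rng(0)
signal = rng.standard_normal(256)
coeffs = dct_ii(signal)           # time-frequency conversion
q = quantize(coeffs, lloyd_max_levels(coeffs))  # frequency-domain quantization
```

Because the DCT here is orthonormal, quantization error in the frequency domain maps directly to reconstruction error, which is why tuning the Lloyd-Max levels controls the error-vector distortion.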

Journal ArticleDOI
TL;DR: A novel electrocardiogram authentication scheme is proposed which uses Legendre approximation coupled with a multi-layer perceptron model to provide security at three levels: data, network, and application.
Abstract: The Internet of Medical Things (IoMT) is fast emerging, fostering rapid advances in sensing, actuation, and connectivity to significantly improve the quality and accessibility of health care for everyone. An implantable medical device (IMD) is an example of such an IoMT-enabled device. IMDs treat the patient's condition and provide a mechanism for regular remote monitoring by healthcare providers. However, current wireless communication channels can compromise the security and privacy of these devices by allowing an attacker to interfere with both the data and the communication. Privacy and security breaches in IMDs have thereby alarmed both health providers and government agencies. Ensuring the security of these small devices is a vital task to prevent severe health consequences for the bearer. The attacks can range from the system to the infrastructure level, where both the software and hardware of the IMD are compromised. In recent years, biometric and cryptographic approaches to authentication, machine learning approaches to anomaly detection, and external wearable devices for wireless communication protection have been proposed. However, the existing solutions for wireless medical devices are either too heavy for memory-constrained devices or require additional devices to be worn. To address this situation, there is a need to facilitate effective and secure data communication by introducing policies that incentivize the development of security techniques. This paper proposes a novel electrocardiogram authentication scheme which uses Legendre approximation coupled with a multi-layer perceptron model to provide security at three levels: data, network, and application. The proposed model can reach up to 99.99% testing accuracy in identifying authorized personnel even with 5 coefficients.

18 citations
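The Legendre-approximation step can be illustrated with NumPy's Legendre utilities. The beat waveform below is synthetic, and the 5-coefficient fit (degree 4) simply mirrors the coefficient count mentioned in the abstract; in the proposed scheme, coefficients like these would form the feature vector fed to the multi-layer perceptron.

```python
import numpy as np
from numpy.polynomial import legendre

# Synthetic ECG-like beat (a real system would use recorded heartbeats).
# The sample axis must be scaled to [-1, 1] for a Legendre fit.
t = np.linspace(-1, 1, 200)
beat = np.exp(-200 * t ** 2) - 0.2 * np.exp(-50 * (t - 0.35) ** 2)

# Least-squares fit with 5 Legendre coefficients (degree 4); this gives a
# compact, fixed-length representation of the beat shape.
coeffs = legendre.legfit(t, beat, deg=4)
reconstruction = legendre.legval(t, coeffs)
```

Representing each beat by a handful of coefficients, rather than raw samples, is what keeps the scheme light enough for memory-constrained implantable devices.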


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations