scispace - formally typeset
Author

Mohsen Guizani

Bio: Mohsen Guizani is an academic researcher from Qatar University. The author has contributed to research in topics: Computer science & Cloud computing. The author has an h-index of 79 and has co-authored 1,110 publications, which have received 31,282 citations. Previous affiliations of Mohsen Guizani include Jaypee Institute of Information Technology & University College for Women.


Papers
Proceedings ArticleDOI
10 Jun 2014
TL;DR: An algorithm that uses a common CPU-intensive calculation, transpose matrix multiplication, to randomly probe for cheating by a CSP is developed and shown to detect CPU cheating quite effectively even when the extent of the cheating is fairly small.
Abstract: In this paper, we present a novel scheme for auditing Service Level Agreement (SLA) compliance in a semi-trusted or untrusted cloud. An SLA is a contract formed between a cloud service provider (CSP) and a user which specifies, in measurable terms, what resources the CSP will provide the user. CSPs, being profit-based companies, have an incentive to cheat on the SLA. By providing a user with fewer resources than specified in the SLA, the CSP can support more users on the same hardware and increase its profits. As the monitoring and verification of the SLA are typically performed on the cloud system itself, it is straightforward for the CSP to falsify reports and hide its intentional breach of the SLA. To prevent such cheating, we introduce a framework which makes use of a third-party auditor (TPA). In this paper we are interested in CPU cheating only. To detect CPU cheating, we develop an algorithm which uses a commonly used CPU-intensive calculation, transpose matrix multiplication, to randomly probe for cheating by a CSP. Using real experiments, we show that our algorithm can detect CPU cheating quite effectively even if the extent of the cheating is fairly small.
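The timing-based probe described in the abstract can be illustrated with a small sketch: the auditor times a transpose matrix multiplication of known size and flags the host when it runs much slower than a calibrated baseline. The function names, matrix size, and slowdown threshold below are illustrative assumptions, not the paper's actual algorithm.

```python
import time

def transpose_matmul(a):
    """Compute A^T * A for a square matrix given as a list of rows."""
    n = len(a)
    at = [[a[r][c] for r in range(n)] for c in range(n)]   # transpose of a
    return [[sum(at[i][k] * a[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def detect_cpu_cheating(baseline_seconds, n=120, slowdown_threshold=1.5):
    """Time one probe; flag the host if it is far slower than the calibrated baseline."""
    a = [[(i * n + j) % 7 for j in range(n)] for i in range(n)]  # deterministic test matrix
    start = time.perf_counter()
    transpose_matmul(a)
    elapsed = time.perf_counter() - start
    return elapsed > slowdown_threshold * baseline_seconds, elapsed
```

In practice the baseline would be calibrated on trusted hardware, and probes would be issued at random times so the CSP cannot anticipate them.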

6 citations

Proceedings ArticleDOI
25 Jun 2018
TL;DR: Analytical and simulation results are presented, showing that by taking advantage of mobile users’ behaviors, and of varying demands of data allowance selling and buying, the cognitive auction and data allocation mechanism can significantly improve the overall performance of the mobile data allowance transaction system.
Abstract: The unprecedented growth of the volume of mobile data calls for novel approaches that improve the sharing of data allowances among mobile users with diverse needs. Specifically, the Wi-Fi hotspot function of current smartphones allows mobile-to-mobile offloading, but requires fast and efficient transactions between mobile users. We therefore propose an auction-based approach to allow the transfer of data allowances between mobile users with excesses and deficits of data allowances, together with a cognitive approach to access the needed information about the system. The objective is to optimize the income of “sellers” and satisfy the needs of the other mobile users. Analytical and simulation results are presented, showing that by taking advantage of mobile users’ behaviors, and of varying demands of data allowance selling and buying, the cognitive auction and data allocation mechanism can significantly improve the overall performance of the mobile data allowance transaction system.
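The auction-based matching of users with excess and deficit allowances could be sketched as a simple greedy double auction: sort asks ascending and bids descending, then trade while a mutually profitable pair remains. This is a minimal illustration under assumed data structures, not the paper's actual mechanism (which also includes the cognitive information-access component).

```python
def match_allowances(sellers, buyers):
    """Greedy double-auction match.

    sellers: list of (ask_price_per_mb, surplus_mb)
    buyers:  list of (bid_price_per_mb, deficit_mb)
    Returns trades as (seller_index, buyer_index, mb, price_per_mb).
    """
    s_order = sorted(range(len(sellers)), key=lambda i: sellers[i][0])   # cheapest asks first
    b_order = sorted(range(len(buyers)), key=lambda i: -buyers[i][0])    # highest bids first
    s_left = [qty for _, qty in sellers]
    b_left = [qty for _, qty in buyers]
    trades, si, bi = [], 0, 0
    while si < len(s_order) and bi < len(b_order):
        s, b = s_order[si], b_order[bi]
        ask, bid = sellers[s][0], buyers[b][0]
        if bid < ask:                     # no remaining mutually profitable trade
            break
        mb = min(s_left[s], b_left[b])
        trades.append((s, b, mb, (ask + bid) / 2))  # split the price surplus evenly
        s_left[s] -= mb
        b_left[b] -= mb
        if s_left[s] == 0:
            si += 1
        if b_left[b] == 0:
            bi += 1
    return trades
```

Splitting the surplus at the bid–ask midpoint is one common clearing rule; other rules trade off seller income against buyer satisfaction.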

6 citations

Proceedings ArticleDOI
14 Jun 2009
TL;DR: A new architecture for a scalable service monitoring and support system called Call Home Analysis and Response System (CHARS) that utilizes data de-noising and filtering techniques to meet the service management requirements of large-scale service deployments.
Abstract: Intelligent service management techniques play an important role in the continuously and rapidly evolving area of technologically advanced services. High-tech companies are looking for better ways to deliver and preserve services to their customers in a competitive way. This paper introduces a new architecture for a scalable service monitoring and support system called Call Home Analysis and Response System (CHARS). The proposed system utilizes data de-noising and filtering techniques to meet the service management requirements of large-scale service deployments. The system utilizes intra- and inter-element correlation of events to enhance the services delivered to end-users. Our results demonstrate that the proposed system is effective in covering the performance and fault management aspects for large-scale deployments of advanced services.
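One ingredient of such event de-noising can be sketched as suppressing repeats of the same alarm from the same element within a time window, so downstream correlation sees one event per incident. The event schema and window length here are illustrative assumptions, not CHARS's actual design.

```python
def denoise_events(events, window=60.0):
    """Suppress repeats of the same (element, code) pair within `window` seconds.

    events: list of (timestamp, element, code), assumed sorted by timestamp.
    Returns the de-noised event list.
    """
    last_seen = {}
    kept = []
    for ts, element, code in events:
        key = (element, code)
        if key not in last_seen or ts - last_seen[key] > window:
            kept.append((ts, element, code))
        # update even when suppressed, so a steady stream keeps extending the quiet period
        last_seen[key] = ts
    return kept
```

A fuller system would then correlate the surviving events across elements (e.g. a link-down on one router explaining unreachable alarms on its neighbors).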

6 citations

Proceedings ArticleDOI
05 Mar 2018
TL;DR: A module called Security Monitor, deployed below the operating system (OS) level and above the hardware level, is developed to monitor and verify the configuration parameters in a timely manner, and its overhead is evaluated.
Abstract: Software-defined radio (SDR) enables the flexible and efficient use of spectrum, which is a key technology for maintaining high-quality wireless services in the future. However, the flexibility of SDR brings serious security concerns, as malicious attacks can target the radio equipment. In this paper, we present a novel defense scheme for SDR to prevent the configuration parameters from being tampered with by malicious software, even if the operating system is compromised. Three types of manipulation attacks on radio parameters are considered. We developed a module called Security Monitor, which is deployed below the operating system (OS) level and above the hardware level, to monitor and verify the configuration parameters in a timely manner. To demonstrate the effectiveness and efficiency of the proposed security mechanism, we implemented the Security Monitor on Ettus Universal Software Radio Peripheral (USRP) software-defined radio nodes and evaluated its overhead. The experimental results show that the time overheads to detect the three types of manipulation attacks are 2.9, 2.86, and 3.4 microseconds, respectively.
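The core verification step can be illustrated as comparing a digest of the live radio parameters against a trusted reference digest recorded when the configuration was last known-good. The parameter names and hashing scheme below are illustrative assumptions; the paper's monitor operates below the OS, which a user-space sketch cannot capture.

```python
import hashlib

def fingerprint(params):
    """Canonicalize a parameter dict (sorted keys) and hash it into a stable digest."""
    blob = "|".join(f"{key}={params[key]}" for key in sorted(params))
    return hashlib.sha256(blob.encode()).hexdigest()

def verify(current_params, trusted_digest):
    """True iff the live parameters still match the trusted reference digest."""
    return fingerprint(current_params) == trusted_digest
```

Any tampering with a monitored parameter (e.g. transmit gain or center frequency) changes the digest and is detected on the next periodic check.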

6 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
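The mail-filtering example in the fourth category can be made concrete with a toy sketch: learn per-word log-odds from the messages a user has kept versus rejected, then score new messages. This is an illustrative naive-Bayes-style assumption on our part, not a method from the abstract.

```python
import math
from collections import Counter

def train(kept_msgs, rejected_msgs):
    """Learn per-word log-odds that a word marks a rejected (unwanted) message."""
    kept = Counter(w for m in kept_msgs for w in m.lower().split())
    rej = Counter(w for m in rejected_msgs for w in m.lower().split())
    vocab = set(kept) | set(rej)
    nk = sum(kept.values()) + len(vocab)   # add-one smoothing totals
    nr = sum(rej.values()) + len(vocab)
    return {w: math.log((rej[w] + 1) / nr) - math.log((kept[w] + 1) / nk)
            for w in vocab}

def is_unwanted(model, msg, threshold=0.0):
    """Positive total log-odds means the message resembles past rejections."""
    return sum(model.get(w, 0.0) for w in msg.lower().split()) > threshold
```

Because the model is rebuilt from each user's own kept/rejected history, the filter is customized per user automatically, which is exactly the point of the fourth category.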

13,246 citations

Christopher M. Bishop1
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered, along with neural networks, kernel methods, graphical models, and a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations