Author

Mohsen Guizani

Bio: Mohsen Guizani is an academic researcher from Qatar University. The author has contributed to research in topics including Computer science and Cloud computing. The author has an h-index of 79 and has co-authored 1110 publications receiving 31282 citations. Previous affiliations of Mohsen Guizani include Jaypee Institute of Information Technology and University College for Women.


Papers
Journal ArticleDOI
TL;DR: An automatic refactoring tool is implemented that helps developers convert built-in monitors into fine-grained ReentrantReadWriteLocks, saving them time and effort.
Abstract: Internet of Things (IoT) software should provide good support for IoT devices, which are growing in both quantity and complexity. Communication between IoT devices is largely realized concurrently, so ensuring the correctness of concurrent access is a major challenge in IoT software development. This paper proposes a general refactoring framework for fine-grained read–write locking and implements an automatic refactoring tool that helps developers convert built-in monitors into fine-grained ReentrantReadWriteLocks. Several program analysis techniques, such as visitor pattern analysis, alias analysis, and side-effect analysis, are used to assist with refactoring. Our tool was tested on several real-world applications, including HSQLDB, Cassandra, JGroups, Freedomotic, and MINA, in which a total of 1072 built-in monitors were refactored into ReentrantReadWriteLocks. The experiments show that our tool can help developers refactor toward ReentrantReadWriteLocks, saving them time and effort.
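The refactoring target above is Java's ReentrantReadWriteLock, which the tool substitutes for coarse built-in monitors. As a language-neutral illustration of the read–write locking pattern itself (not the tool's output), here is a minimal sketch in Python; the class and its internals are assumptions for illustration only:

```python
import threading

class ReadWriteLock:
    """Minimal read-write lock: many concurrent readers, exclusive writers.
    Illustrative sketch of the pattern the paper's tool converts monitors
    into; the tool itself emits Java ReentrantReadWriteLocks."""

    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()   # guards the reader count
        self._writer = threading.Lock()  # held while readers or a writer are active

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._writer.acquire()   # first reader blocks writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._writer.release()   # last reader readmits writers

    def acquire_write(self):
        self._writer.acquire()

    def release_write(self):
        self._writer.release()
```

The payoff of the conversion is that read-mostly critical sections admit many concurrent readers, whereas a built-in monitor serializes all of them.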

4 citations

Proceedings ArticleDOI
01 Sep 2014
TL;DR: An energy-efficient relay deployment algorithm is presented that determines the optimal location and number of relays for future wireless networks, including Long Term Evolution-Advanced heterogeneous networks, by formulating energy minimization as a Mixed Integer Linear Programming (MILP) problem.
Abstract: This paper presents an energy-efficient relay deployment algorithm that determines the optimal location and number of relays for future wireless networks, including Long Term Evolution (LTE)-Advanced heterogeneous networks. We formulate an energy minimization problem for macro-relay heterogeneous networks as a Mixed Integer Linear Programming (MILP) problem. The proposed algorithm not only optimally connects users to either relays or eNodeBs (eNBs), but also allows eNBs to switch into inactive mode. This is possible by enabling relay-to-relay communication, which allows relays to act as donors for neighboring relays instead of eNBs. Moreover, it relaxes the traffic load of some eNBs so that they can enter the inactive mode. We characterize the optimal solution and also provide an approximate solution that performs very close to the optimum. Our performance evaluation shows that an optimal relay deployment with relays acting as donors can significantly improve system energy efficiency.
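The abstract's core decision, which nodes to keep active so that every user is covered, every active relay has a donor (an eNB or, crucially, another relay), and total power is minimized, can be sketched with a brute-force stand-in for the MILP. All positions, ranges, and power costs below are illustrative assumptions, not values from the paper:

```python
from itertools import product

NODES = [  # (name, kind, position, power_cost) - illustrative 1-D toy data
    ("eNB1",   "enb",    0.0, 10.0),
    ("eNB2",   "enb",   12.0, 10.0),
    ("relay1", "relay",  4.0,  2.0),
    ("relay2", "relay",  8.0,  2.0),
]
USERS = [1.0, 4.0, 8.0, 9.0]   # user positions on a line
COVER, DONOR = 2.0, 4.5        # coverage radius / backhaul (donor) range

def best_activation(nodes=NODES, users=USERS):
    """Enumerate on/off patterns; return the cheapest feasible one."""
    best_cost, best_set = float("inf"), None
    for mask in product([0, 1], repeat=len(nodes)):
        active = [n for n, on in zip(nodes, mask) if on]
        # every user must lie within COVER of some active node
        if not all(any(abs(u - n[2]) <= COVER for n in active) for u in users):
            continue
        # every active relay needs an active donor (eNB *or* relay) in range
        if not all(any(m is not n and abs(m[2] - n[2]) <= DONOR for m in active)
                   for n in active if n[1] == "relay"):
            continue
        cost = sum(n[3] for n in active)
        if cost < best_cost:
            best_cost, best_set = cost, [n[0] for n in active]
    return best_cost, best_set
```

In this toy instance the relay-to-relay donor option lets relay2 hang off relay1, so eNB2 can sleep, which is exactly the energy-saving mechanism the paper formalizes as a MILP.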

4 citations

Proceedings ArticleDOI
01 Dec 2014
TL;DR: A handover management scheme is proposed that efficiently initiates the handover process, selects an optimal network, and outperforms existing schemes.
Abstract: Machine-to-Machine (M2M) communication has the potential to connect millions of devices in the near future. Recognizing this potential, several standards organizations are working toward an improved general architecture for M2M communications, but there is currently a lack of consensus on how to improve the general feasibility of M2M communication. Heterogeneous Mobile Ad hoc Networks (HetMANETs) can normally be considered appropriate for M2M challenges. When a mobile node (MN) moves inside a HetMANET, various challenges arise, including selection of the target network and energy-efficient scanning, which need to be addressed for efficient handover. To cope with these issues, we propose a handover management scheme that efficiently initiates a handover process and selects an optimal network. Our proposed scheme is composed of two phases: i) the MN performs handover triggering based on the optimization of the Received Signal Strength (RSS) from an access point/base station (AP/BS), and ii) the network selection process is carried out by considering parameters such as delay, jitter, velocity, network load, and energy consumption by the network interface. Moreover, if more networks are available, the MN selects the one that can provide the highest quality-of-service (QoS) using the Elimination and Choice Expressing Reality (ELECTRE) decision model. The performance of the proposed scheme is compared against periodic and adaptive scanning in terms of the number of handovers, the average stay-time of an MN in the network, and energy consumption. Similarly, a two-state Markov model is defined that efficiently distributes the nodes among the available access points and base stations. The proposed scheme efficiently optimizes the handover-related parameters and outperforms existing schemes.
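The network-selection phase above ranks candidate networks over criteria like delay, jitter, load, and energy. A heavily simplified, concordance-only sketch in the spirit of ELECTRE (the full method also uses discordance and thresholds, omitted here) might look as follows; the criteria weights and candidate values are illustrative assumptions, with lower taken as better for every criterion:

```python
WEIGHTS = {"delay": 0.3, "jitter": 0.2, "load": 0.25, "energy": 0.25}

CANDIDATES = {  # per-network criterion values (lower = better), illustrative
    "WLAN": {"delay": 20, "jitter": 5, "load": 0.6, "energy": 1.0},
    "LTE":  {"delay": 40, "jitter": 3, "load": 0.4, "energy": 2.5},
    "3G":   {"delay": 80, "jitter": 9, "load": 0.3, "energy": 2.0},
}

def concordance(a, b, cands=CANDIDATES, w=WEIGHTS):
    """Total weight of criteria on which network a is at least as good as b."""
    return sum(w[c] for c in w if cands[a][c] <= cands[b][c])

def select_network(cands=CANDIDATES):
    # pick the network with the highest total concordance against all rivals
    def score(a):
        return sum(concordance(a, b, cands) for b in cands if b != a)
    return max(cands, key=score)
```

With these toy numbers the WLAN wins on delay and energy against both rivals, which outweighs its higher load; changing the weights shifts the choice, which is how the scheme can adapt selection to an application's QoS priorities.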

4 citations

Proceedings ArticleDOI
01 Dec 2014
TL;DR: An energy-aware resource allocation framework is proposed that places submitted tasks (elastic/inelastic) in an energy-efficient way, decides initially how many resources should be assigned to elastic tasks, and periodically tunes the resources allocated to the currently hosted elastic tasks.
Abstract: Managing cloud resources in a way that reduces the consumed energy while also meeting clients' demands is a challenging task. In this paper, we propose an energy-aware resource allocation framework that: i) places the submitted tasks (elastic/inelastic) in an energy-efficient way, ii) decides initially how many resources should be assigned to the elastic tasks, and iii) periodically tunes the resources allocated to the currently hosted elastic tasks. This is all done with the aim of reducing the number of ON servers and the time for which servers need to be kept ON, allowing them to be put to sleep early to save energy while meeting all clients' demands. Comparative studies conducted on Google traces show the effectiveness of our framework in terms of energy savings and utilization gains.
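The placement idea, packing tasks onto as few ON servers as possible so idle ones can sleep, can be sketched as a first-fit-decreasing bin packing. This is a generic stand-in rather than the paper's algorithm, and it omits the elastic-task sizing and periodic retuning steps; capacities and demands are illustrative:

```python
def place_tasks(demands, capacity=1.0):
    """Pack task demands onto servers with first-fit decreasing, so the
    number of ON servers stays small and the rest can be put to sleep."""
    servers = []  # each entry: [remaining_capacity, [task demands...]]
    for d in sorted(demands, reverse=True):
        for srv in servers:
            if srv[0] >= d:            # fits on an already-ON server
                srv[0] -= d
                srv[1].append(d)
                break
        else:                          # no fit anywhere: power on a new server
            servers.append([capacity - d, [d]])
    return [srv[1] for srv in servers]
```

For example, demands of 0.5, 0.5, 0.4, 0.3, and 0.3 of a server's capacity pack onto two servers instead of the five a naive one-task-per-server placement would keep ON.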

4 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
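The mail-filtering example in the fourth category, learning which messages a user rejects from labeled examples, can be made concrete with a bare-bones Naive Bayes sketch. The training messages, function names, and smoothing choice are all illustrative assumptions:

```python
from collections import Counter
import math

def train(labeled):
    """labeled: list of (text, is_spam) pairs the user has already judged.
    Returns per-class word counts learned from those examples."""
    counts = {True: Counter(), False: Counter()}
    for text, is_spam_label in labeled:
        counts[is_spam_label].update(text.lower().split())
    return counts

def is_spam(text, counts):
    """Classify by the log-likelihood ratio with add-one smoothing."""
    spam_total = sum(counts[True].values()) or 1
    ham_total = sum(counts[False].values()) or 1
    score = 0.0
    for w in text.lower().split():
        p_spam = (counts[True][w] + 1) / (spam_total + 1)
        p_ham = (counts[False][w] + 1) / (ham_total + 1)
        score += math.log(p_spam / p_ham)
    return score > 0

MAIL = [  # toy labeled history: (message, user rejected it as spam?)
    ("win a free prize now", True),
    ("claim your free money", True),
    ("meeting agenda for monday", False),
    ("lunch on monday?", False),
]
```

As the user keeps labeling mail, `train` simply reruns over the growing history, which is exactly the "maintain the filtering rules automatically" behavior the passage describes.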

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented in this book, along with a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations