Author

Mohsen Guizani

Bio: Mohsen Guizani is an academic researcher from Qatar University. The author has contributed to research in topics: Computer science & Cloud computing. The author has an h-index of 79, has co-authored 1110 publications, and has received 31282 citations. Previous affiliations of Mohsen Guizani include Jaypee Institute of Information Technology & University College for Women.


Papers
Journal ArticleDOI
TL;DR: The articles in this special section focus on several promising approaches to sensors, actuators, new consumer devices, and new communication capabilities, applicable to a broad set of IoT application domains.
Abstract: The Internet of Things (IoT) is seen as a set of vertical application domains that share a limited number of common basic functionalities. In this view, consumer-centric solutions, platforms, data management, and business models have to be developed and consolidated in order to deploy effective solutions in specific fields. The availability of low-cost general-purpose processing and storage systems with sensing/actuation capabilities, coupled with communication capabilities, is broadening the possibilities of IoT, leading to open systems that will be highly programmable and virtualized and will support large numbers of application programming interfaces (APIs). IoT emerges as a set of integrated technologies: exciting new solutions and services that are set to change the way people live and produce goods. IoT is viewed by many as a fruitful technological sector for generating revenue. IoT covers a wealth of consumer-centric technologies, and it is applicable to an even larger set of application domains. Innovation will be nurtured and driven by the possibilities offered by the combination of increased technological capabilities, new business models, and the rise of new ecosystems. The articles in this special section focus on several promising approaches to sensors, actuators, and new consumer devices, as well as new communication capabilities (from short-range to LPWAN to 4G and 5G networks, including NB-IoT).

8 citations

Proceedings ArticleDOI
01 Dec 2018
TL;DR: This work proposes AirMAP, a framework for enabling scalable database-driven dynamic spectrum access and sharing that provides an accurate radio occupancy map while reducing network overhead and overcoming the scalability issues of conventional approaches.
Abstract: We propose AirMAP, a framework for enabling scalable database-driven dynamic spectrum access and sharing. We bring together the merits of compressive sensing and collaborative filtering to provide an accurate radio occupancy map while reducing the network overhead cost and overcoming the scalability issues of conventional approaches. We start from the observation that close-by users have highly correlated spectrum observations, and we propose to recover the spectrum occupancy matrix in the neighborhood of each sensing node by minimizing the rank of local sub-matrices. Then, we combine the recovered matrix entries using a similarity criterion to obtain the global spectrum occupancy map. Through simulations, we show that the proposed framework minimizes the recovery error while reducing the network overhead. We also show that the proposed framework remains scalable when considering high frequencies.
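The local recovery step can be illustrated with a minimal sketch. The abstract does not give AirMAP's actual rank-minimization formulation, so this hypothetical example uses simple iterative truncated-SVD completion to fill unreported measurements in a near-rank-1 local occupancy sub-matrix (correlated close-by sensors are what make the low-rank assumption plausible):

```python
import numpy as np

def complete_lowrank(M, mask, rank=1, iters=200):
    """Fill the unobserved entries of M (where mask is False) by repeatedly
    projecting onto the best rank-`rank` approximation (hard thresholding).
    This is a generic stand-in for the paper's rank-minimization step."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-r approximation
        X = np.where(mask, M, low)                  # keep observed entries fixed
    return X

# Toy local occupancy sub-matrix: rows = close-by sensors, cols = channels.
# Correlated observations make it rank-1, matching the paper's premise.
truth = np.outer([1.0, 0.9, 1.1, 0.95], [0.0, 1.0, 0.2, 0.8, 0.0])
mask = np.ones(truth.shape, dtype=bool)
mask[0, 1] = mask[1, 3] = False                     # two unreported measurements
est = complete_lowrank(truth, mask)
print(round(float(np.abs(est - truth).max()), 3))   # residual recovery error
```

Because the remaining rows pin down the single row-space direction, the two missing entries are recovered almost exactly; the real system would then merge such local estimates across nodes with the similarity criterion.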

8 citations

Journal ArticleDOI
TL;DR: A trusted third party is introduced and a partially blind signature is employed to reduce the correlation between participants and their data, as well as the number of interactions between users and the task platform, thereby achieving a high level of participant privacy.

8 citations

Journal ArticleDOI
TL;DR: This paper proposes a scheme to provide VoD over P2P-based mesh overlay networks suitable for the future Internet, selecting the most appropriate peers through domain-based localization and congestion-awareness strategies.
Abstract: The concept of the "future Internet" has evolved amongst researchers recently to relieve the tremendous pressure on the current Internet infrastructure to support heterogeneous networking technologies, mobile devices, a growing population of users, and high user requirements for real-time services and applications. Peer-to-Peer (P2P) Video on Demand (VoD) streaming technologies are expected to be a key technology in the future Internet. Because existing P2P streaming techniques suffer from a number of shortcomings, P2P VoD schemes need to be adequately redesigned for the future Internet. In this paper, we propose a scheme to effectively provide VoD by using P2P-based mesh overlay networks that may be suitable for the future Internet. Our scheme selects the most appropriate peers by exploiting domain-based localization and congestion-awareness strategies. Through simulations, our proposed scheme is demonstrated to be scalable and capable of reducing the startup delay and total link cost, while maintaining a high playback rate. The results are encouraging and show the importance of redesigning P2P VoD services for the future Internet.
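The peer-selection idea (domain-based localization combined with congestion awareness) can be sketched as a simple two-key ranking. The scoring below is hypothetical, since the abstract does not specify the actual selection metric; it merely illustrates preferring same-domain peers first and less-congested peers second:

```python
from dataclasses import dataclass

@dataclass
class Peer:
    peer_id: str
    domain: str             # e.g. the peer's ISP/AS domain
    queue_delay_ms: float   # observed congestion indicator (illustrative)
    has_chunk: bool         # whether the peer holds the requested video chunk

def select_peers(peers, my_domain, k=2):
    """Among peers holding the chunk, prefer those in the requester's own
    domain (localization), breaking ties by lowest congestion."""
    candidates = [p for p in peers if p.has_chunk]
    # sort key: (not local, congestion) -> local peers first, then by delay
    candidates.sort(key=lambda p: (p.domain != my_domain, p.queue_delay_ms))
    return candidates[:k]

peers = [
    Peer("a", "isp1", 40.0, True),
    Peer("b", "isp2", 5.0, True),
    Peer("c", "isp1", 10.0, True),
    Peer("d", "isp1", 2.0, False),   # lightly loaded but lacks the chunk
]
chosen = select_peers(peers, my_domain="isp1")
print([p.peer_id for p in chosen])   # → ['c', 'a']
```

Note how the remote but fast peer "b" loses to the local peers, reflecting the localization-first design that reduces total link cost.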

8 citations

Journal ArticleDOI
TL;DR: Theoretical and simulation analyses show that the QL-UACW algorithm improves the fairness of channel access among nodes, reduces the data-frame collision rate, and increases network throughput.
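The TL;DR gives no details of QL-UACW's state, action, or reward design, so the following is only a hypothetical, bandit-style simplification of Q-learning for contention-window (CW) adaptation: a node scores candidate CW sizes by the rewards of past transmission attempts, trading off collisions (small CW) against wasted airtime (large CW):

```python
import random

CW_SIZES = [16, 32, 64, 128]     # candidate contention windows (actions)
ALPHA, EPSILON = 0.1, 0.1        # learning rate, exploration rate (assumed)
q = [0.0] * len(CW_SIZES)        # action-value estimates

def choose_cw():
    """Epsilon-greedy selection over CW sizes."""
    if random.random() < EPSILON:
        return random.randrange(len(CW_SIZES))
    return max(range(len(CW_SIZES)), key=q.__getitem__)

def update(action, success):
    """Reward a successful transmission, penalize a collision."""
    reward = 1.0 if success else -1.0
    q[action] += ALPHA * (reward - q[action])

random.seed(1)
for _ in range(5000):
    a = choose_cw()
    collision_prob = 8.0 / CW_SIZES[a]   # toy channel: larger CW, fewer collisions
    update(a, random.random() > collision_prob)
print(CW_SIZES[max(range(len(CW_SIZES)), key=q.__getitem__)])  # learned CW choice
```

The actual algorithm presumably uses a richer state (e.g. observed channel conditions) rather than this stateless simplification.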

8 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
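The mail-filtering example above (the fourth category) can be made concrete with a minimal sketch: a naive Bayes classifier learns one user's rejection rules from labeled messages. The messages and the word-level model here are illustrative assumptions, not taken from the text:

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, rejected) pairs for one user."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, rejected in messages:
        for word in text.lower().split():
            counts[rejected][word] += 1
            totals[rejected] += 1
    return counts, totals

def predict_reject(model, text):
    counts, totals = model
    vocab = set(counts[True]) | set(counts[False])
    score = {}
    for label in (True, False):
        # per-class log-likelihood with add-one (Laplace) smoothing
        score[label] = sum(
            math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
            for w in text.lower().split())
    return score[True] > score[False]

# one user's labeled mail: True = rejected as unwanted
user_mail = [
    ("win a free prize now", True),
    ("free money claim prize", True),
    ("meeting agenda for monday", False),
    ("project status and agenda", False),
]
model = train(user_mail)
print(predict_reject(model, "claim your free prize"))   # → True
```

As new messages are labeled, retraining updates the filter automatically, which is exactly the maintenance burden the passage says machine learning removes from the user.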

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented, along with neural networks, kernel methods, graphical models, sampling methods, and a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations