Author

Mohsen Guizani

Bio: Mohsen Guizani is an academic researcher from Qatar University. The author has contributed to research in topics including Computer science and Cloud computing. The author has an h-index of 79, has co-authored 1110 publications, and has received 31282 citations. Previous affiliations of Mohsen Guizani include Jaypee Institute of Information Technology and University College for Women.


Papers
Journal ArticleDOI
TL;DR: This paper investigates how IFTTT, one of the most popular smart home platforms, can monitor the daily life of a user in a variety of hardly noticeable ways, and proposes multiple ideas for mitigating privacy leakage that together form a “Filter-and-Fuzz” process.
Abstract: The combination of smart home platforms and automation apps introduces many conveniences to smart home users. However, it also brings the potential for privacy leakage. If a smart home platform is permitted to collect all the events of a user day and night, then the platform will soon learn the behavior patterns of this user. In this paper, we investigate how IFTTT, one of the most popular smart home platforms, has the capability of monitoring the daily life of a user in a variety of ways that are hardly noticeable. Moreover, we propose multiple ideas for mitigating privacy leakage, which together form a “Filter-and-Fuzz” (F&F) process: first, it filters out events not needed by the IFTTT platform; then, it fuzzes the values and frequencies of the remaining events. We evaluate the F&F process, and the results show that the proposed solution makes IFTTT unable to recognize any of the user's behavior patterns.
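A minimal Python sketch of the filter-then-fuzz idea, using hypothetical event names and noise parameters; it illustrates the general approach described above, not the paper's actual implementation:

    import random

    # Hypothetical whitelist of event types the automation rules actually need.
    REQUIRED_EVENTS = {"door_lock", "smoke_alarm"}

    def filter_events(events):
        """Drop events that the trigger-action platform does not need."""
        return [e for e in events if e["type"] in REQUIRED_EVENTS]

    def fuzz_events(events, value_noise=0.1, dup_prob=0.3):
        """Obscure the values and frequencies of the remaining events."""
        fuzzed = []
        for e in events:
            e = dict(e)
            if isinstance(e.get("value"), (int, float)):
                # Perturb numeric readings so exact values are not revealed.
                e["value"] *= 1 + random.uniform(-value_noise, value_noise)
            fuzzed.append(e)
            if random.random() < dup_prob:
                # Inject an occasional duplicate to mask true event frequency.
                fuzzed.append(dict(e))
        return fuzzed

    events = [
        {"type": "door_lock", "value": 1},
        {"type": "motion_sensor", "value": 3},  # filtered out: not required
        {"type": "smoke_alarm", "value": 0},
    ]
    print(fuzz_events(filter_events(events)))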

26 citations

Journal ArticleDOI
TL;DR: This paper proposes LPP-MSA, a lightweight and privacy-preserving medical-service access scheme based on multi-authority ABS for the healthcare cloud; it is more efficient than existing schemes in both the signing and verification phases and is well suited to scenarios where users access the healthcare cloud for large-scale remote medical services via resource-constrained mobile devices.
Abstract: With the popularity of cloud computing technology, the healthcare cloud system is becoming increasingly mature, reducing the time needed for disease diagnosis and bringing great convenience to people's lives. Meanwhile, however, the healthcare cloud system usually involves users' private information, and ensuring that users' sensitive information is not disclosed remains a challenge. Attribute-based signature (ABS) is a very useful technique for protecting user privacy and is well suited to anonymous authentication and privacy-preserving access control. However, general ABS schemes usually incur heavy computation overhead in the signing and verification phases, which makes them ill-suited to resource-limited devices accessing the healthcare cloud system. To address these issues, we propose a lightweight and privacy-preserving medical-service access scheme based on multi-authority ABS for the healthcare cloud, named LPP-MSA. By using online/offline signing and server-aided verification mechanisms, the proposed scheme greatly reduces computation overhead. In addition, LPP-MSA achieves unforgeability and anonymity and can resist collusion attacks. Comparisons of computational cost and storage overhead between LPP-MSA and other existing schemes show that LPP-MSA is more efficient in both the signing and verification phases. Therefore, it is well suited to scenarios where users access the healthcare cloud system for large-scale remote medical services via resource-constrained mobile devices.
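As an illustration of the online/offline signing idea (not of the LPP-MSA construction itself, which is attribute-based), the following Python sketch splits a Schnorr-style signature so that the expensive exponentiation is precomputed offline and only cheap modular arithmetic remains at signing time; the group parameters are toy values for readability only:

    import hashlib
    import secrets

    # Toy Schnorr group parameters for illustration only (far too small to be secure).
    p, q, g = 23, 11, 2              # g has prime order q in Z_p*

    def H(*parts):
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    x = secrets.randbelow(q - 1) + 1     # signer's secret key
    y = pow(g, x, p)                     # signer's public key

    # Offline phase: precompute the expensive exponentiation before any message arrives.
    # Each precomputed pair (k, r) must be used for exactly one signature.
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)

    # Online phase: only a hash and cheap modular arithmetic per message.
    def sign_online(msg):
        e = H(r, msg)
        s = (k + x * e) % q
        return (e, s)

    def verify(msg, sig):
        e, s = sig
        r_prime = (pow(g, s, p) * pow(y, -e, p)) % p   # recovers g^k when the signature is valid
        return H(r_prime, msg) == e

    sig = sign_online("blood-pressure record #42")
    print(verify("blood-pressure record #42", sig))    # True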

26 citations

Journal ArticleDOI
TL;DR: This paper explores the security of deep neural network models for image segmentation tasks and shows that the proposed adversary degrades the performance of the segmentation models more than FGSM does.
Abstract: Due to their powerful data-fitting ability, deep neural networks have been applied in a wide range of applications in many key areas. In recent years, however, it has been found that adversarial samples can easily fool deep neural networks. These inputs are generated by adding a few small perturbations to an original sample, and they significantly influence the decision of the target model while remaining imperceptible. Image segmentation is one of the most important technologies in the medical imaging and autonomous driving fields. This paper explores the security of deep neural network models for image segmentation tasks. Two lightweight image segmentation models running on an embedded device are subjected to white-box attacks using local perturbations and universal perturbations. The perturbations are generated indirectly through a noise function and an intermediate variable, so that pixel gradients can be propagated without restriction. Through experiments, we find that different models have different blind spots and that adversarial samples trained for a single model do not transfer. Finally, multiple models are attacked jointly; under a low-perturbation constraint, most of the pixels in the attacked area are misclassified by both lightweight models. The experimental results show that the proposed adversary affects the performance of the segmentation models more than FGSM does.
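For reference, the FGSM baseline that the abstract compares against can be sketched in a few lines of PyTorch; this is the standard single-step attack, not the paper's proposed perturbation method, and the model and tensor shapes are assumptions for a typical segmentation setup:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, target_mask, epsilon=0.03):
        """Standard FGSM: one signed-gradient step on the input image.

        Assumes `model` maps an image batch (N, 3, H, W) to per-pixel class
        logits (N, C, H, W) and `target_mask` holds labels of shape (N, H, W).
        Epsilon bounds the L-infinity size of the perturbation.
        """
        image = image.clone().detach().requires_grad_(True)
        logits = model(image)
        loss = F.cross_entropy(logits, target_mask)     # per-pixel segmentation loss
        loss.backward()
        # Step each pixel in the direction that increases the loss.
        adv_image = image + epsilon * image.grad.sign()
        return adv_image.clamp(0, 1).detach()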

26 citations

Journal ArticleDOI
TL;DR: This paper proposes the first secure and distributed data discovery and dissemination protocol named DiDrip, which allows the network owners to authorize multiple network users with different privileges to simultaneously and directly disseminate data items to the sensor nodes.
Abstract: A data discovery and dissemination protocol for wireless sensor networks (WSNs) is responsible for updating configuration parameters of, and distributing management commands to, the sensor nodes. All existing data discovery and dissemination protocols suffer from two drawbacks. First, they are based on a centralized approach: only the base station can distribute data items. Such an approach is not suitable for emerging multi-owner, multi-user WSNs. Second, those protocols were not designed with security in mind, so adversaries can easily launch attacks to harm the network. This paper proposes the first secure and distributed data discovery and dissemination protocol, named DiDrip. It allows network owners to authorize multiple network users with different privileges to simultaneously and directly disseminate data items to the sensor nodes. Moreover, as demonstrated by our theoretical analysis, it addresses a number of possible security vulnerabilities that we have identified. Extensive security analysis shows that DiDrip is provably secure. We also implement DiDrip in an experimental network of resource-limited sensor nodes to show its high efficiency in practice.
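A minimal Python sketch of the owner-authorizes-user, user-signs-data pattern that such a distributed dissemination protocol relies on, assuming the third-party cryptography package and a made-up credential format; it illustrates the general idea rather than DiDrip's actual construction:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    owner_key = Ed25519PrivateKey.generate()
    user_key = Ed25519PrivateKey.generate()

    # The owner authorizes a user by signing the user's public key and privilege level.
    user_pub = user_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    credential = b"privilege=params-only|" + user_pub
    authorization = owner_key.sign(credential)

    # The authorized user signs a data item before disseminating it to the nodes.
    data_item = b"version=7|param=sampling_rate|value=10s"
    item_sig = user_key.sign(data_item)

    # A sensor node holds only the owner's public key and verifies the chain;
    # verify() raises InvalidSignature if either check fails.
    def node_verify(owner_pub, credential, authorization, data_item, item_sig):
        owner_pub.verify(authorization, credential)        # owner vouched for this user
        user_pub_key = Ed25519PublicKey.from_public_bytes(credential.split(b"|", 1)[1])
        user_pub_key.verify(item_sig, data_item)           # user signed this data item
        return True

    print(node_verify(owner_key.public_key(), credential, authorization, data_item, item_sig))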

26 citations

Proceedings ArticleDOI
11 Dec 2006
TL;DR: This paper proposes a fuzzy-based hierarchical energy-efficient routing scheme (FEER) for large-scale mobile ad hoc networks that aims to maximize network lifetime, using a fuzzy logic controller that combines each node's residual energy, traffic, and mobility while keeping in mind the synergy between them.
Abstract: A mobile ad hoc network (MANET) is a collection of autonomous, arbitrarily located wireless mobile hosts with no fixed infrastructure. In this paper, we propose a fuzzy-based hierarchical energy-efficient routing scheme (FEER) for large-scale mobile ad hoc networks that aims to maximize the network's lifetime. Each node in the network is characterized by its residual energy, traffic, and mobility. We develop a fuzzy logic controller that combines these parameters, keeping in mind the synergy between them. The value obtained indicates the importance of a node and is used in network formation and maintenance. We compare our approach to another energy-efficient hierarchical protocol based on the dominating set (DS) idea. Our simulations show that our design outperforms the DS approach in prolonging the network lifetime.
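A minimal sketch of how a fuzzy controller might combine the three node metrics into one importance score, using simple low/high membership functions and an illustrative rule base; the paper's actual membership functions and rules may differ:

    def high(x):
        # Degree to which a metric normalized to [0, 1] is "high".
        return max(0.0, min(1.0, x))

    def low(x):
        return 1.0 - high(x)

    def node_importance(energy, traffic, mobility):
        """Combine normalized residual energy, traffic load, and mobility into a
        single score in [0, 1]; higher means a better candidate for cluster duties."""
        rules = [
            # (rule firing strength, output level)
            (min(high(energy), low(mobility)), 1.0),   # high energy, stable node: very important
            (min(high(energy), high(traffic)), 0.7),   # high energy but already loaded
            (min(low(energy), low(traffic)), 0.3),     # low energy, lightly loaded
            (min(low(energy), high(mobility)), 0.0),   # low energy, highly mobile: avoid
        ]
        total = sum(w for w, _ in rules)
        return sum(w * out for w, out in rules) / total if total else 0.0

    print(node_importance(energy=0.9, traffic=0.2, mobility=0.1))  # strong candidate
    print(node_importance(energy=0.2, traffic=0.1, mobility=0.8))  # weak candidate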

26 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
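To make the mail-filtering example above concrete, a toy classifier along these lines can be trained on a single user's accept/reject feedback; this sketch assumes scikit-learn and invented example messages:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    messages = [
        "Limited time offer, claim your prize now",
        "Meeting moved to 3pm, agenda attached",
        "You have won a free cruise, click here",
        "Quarterly report draft for your review",
    ]
    labels = ["reject", "keep", "reject", "keep"]   # feedback from this particular user

    # Bag-of-words features plus a Naive Bayes classifier learned from the feedback.
    filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
    filter_model.fit(messages, labels)

    print(filter_model.predict(["Claim your free prize today"]))   # likely 'reject'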

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered, along with neural networks, kernel methods, graphical models, approximate inference, and a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations