Bio: Abdulhameed Alelaiwi is an academic researcher at King Saud University. His research focuses on cloud computing and wireless sensor networks. He has an h-index of 27 and has co-authored 103 publications receiving 2,770 citations.
TL;DR: This paper presents a comprehensive study of representative works on Sensor-Cloud infrastructure, which will provide general readers with an overview of the Sensor-Cloud platform, including its definition, architecture, and applications.
Abstract: Nowadays, wireless sensor network (WSN) applications are used in several important areas, such as healthcare, the military, critical infrastructure monitoring, environment monitoring, and manufacturing. However, because of WSN limitations in memory, energy, computation, communication, and scalability, efficiently managing the large volumes of WSN data generated in these areas is an important challenge. A powerful, scalable, high-performance computing and massive-storage infrastructure is needed for real-time processing and storage of WSN data, as well as for online and offline analysis of the processed information in context, using inherently complex models to extract events of interest. In this scenario, cloud computing is a promising technology for providing a flexible stack of massive computing, storage, and software services in a scalable, virtualized manner at low cost. Therefore, in recent years the Sensor-Cloud infrastructure has become popular, as it can provide an open, flexible, and reconfigurable platform for many monitoring and control applications. In this paper, we present a comprehensive study of representative works on Sensor-Cloud infrastructure, which will provide general readers with an overview of the Sensor-Cloud platform, including its definition, architecture, and applications. The research challenges, existing solutions and approaches, and future research directions are also discussed in this paper.
TL;DR: This paper measures the depth accuracy of the newly released Kinect v2 depth sensor, obtains a cone model to illustrate its accuracy distribution, and proposes a trilateration method to improve the depth accuracy with multiple Kinects simultaneously.
Abstract: The Microsoft Kinect sensor has been widely used in many applications since the launch of its first version. Recently, Microsoft released a new version of the Kinect sensor with improved hardware; however, the accuracy of the new sensor has yet to be assessed. In this paper, we measure the depth accuracy of the newly released Kinect v2 depth sensor and obtain a cone model to illustrate its accuracy distribution. We then evaluate the variance of the captured depth values by depth entropy. In addition, we propose a trilateration method to improve the depth accuracy using multiple Kinects simultaneously. Experimental results are provided to validate the proposed model and method.
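The core idea of trilateration, combining range measurements from several sensors at known positions to pin down a point, can be sketched in its planar form. This is a minimal illustration, not the paper's exact multi-Kinect formulation; the sensor layout and the linearized solve below are assumptions:

```python
def trilaterate_2d(sensors, distances):
    """Estimate a point's (x, y) position from three known sensor
    positions and the measured distances to the point.

    Subtracting the first circle equation from the other two yields
    a 2x2 linear system, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = sensors
    r1, r2, r3 = distances
    # Linearized system A @ [x, y] = b after cancelling quadratic terms.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1 ** 2 - r2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = r1 ** 2 - r3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("sensors are collinear; position is ambiguous")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With sensors at (0, 0), (4, 0), and (0, 4) and exact ranges to the point (1, 2), the solver recovers (1.0, 2.0); with noisy depth readings from multiple Kinects, the same system would be solved in a least-squares sense.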
TL;DR: A hybrid feature extraction method with a regularized extreme learning machine (RELM) is proposed for developing an accurate brain tumor classification approach, and experimental results show it to be more effective than existing state-of-the-art approaches.
Abstract: Brain cancer classification is an important step that depends on the physician's knowledge and experience. An automated tumor classification system is essential to support radiologists and physicians in identifying brain tumors; however, the accuracy of current systems needs to be improved to guide suitable treatments. In this paper, we propose a hybrid feature extraction method with a regularized extreme learning machine (RELM) for developing an accurate brain tumor classification approach. The approach starts by preprocessing the brain images using a min–max normalization rule to enhance the contrast of brain edges and regions. Then, the brain tumor features are extracted using a hybrid feature extraction method. Finally, a RELM is used to classify the type of brain tumor. To evaluate and compare the proposed approach, a set of experiments is conducted on a new public dataset of brain images. The experimental results show that the approach is more effective than existing state-of-the-art approaches, with classification accuracy improving from 91.51% to 94.233% in the random-holdout experiment.
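The preprocessing step, min–max normalization to stretch image contrast, follows the standard rule: each intensity is rescaled linearly so the darkest value maps to the bottom of the target range and the brightest to the top. A minimal sketch (the default target range is an assumption, typical for 8-bit grayscale images):

```python
def min_max_normalize(pixels, new_min=0.0, new_max=255.0):
    """Linearly rescale intensities so the minimum maps to new_min
    and the maximum to new_max (the standard min-max rule)."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:  # flat image: no contrast to stretch
        return [new_min for _ in pixels]
    scale = (new_max - new_min) / (hi - lo)
    return [new_min + (p - lo) * scale for p in pixels]
```

For example, `min_max_normalize([10, 20, 30], 0.0, 1.0)` returns `[0.0, 0.5, 1.0]`, spreading the original intensities across the full target range.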
TL;DR: A voice pathology detection system is proposed inside the monitoring framework using a local binary pattern on a Mel-spectrum representation of the voice signal, and an extreme learning machine classifier to detect the pathology.
Abstract: The integration of the IoT and cloud technology is very important for building an uninterrupted, secure, seamless, and ubiquitous framework. The complementary nature of the IoT and the cloud in terms of storage, processing, accessibility, security, service sharing, and components makes the convergence suitable for many applications. The advancement of mobile technologies adds a degree of flexibility to this solution. The health industry is one of the venues that can benefit from IoT-cloud technology, because of the scarcity of specialized doctors and the physical movement restrictions of patients, among other factors. In this article, as a case study, we discuss the feasibility of and propose a solution for voice pathology monitoring of people using the IoT-cloud. More specifically, a voice pathology detection system is proposed inside the monitoring framework using a local binary pattern on a Mel-spectrum representation of the voice signal, and an extreme learning machine classifier to detect the pathology. The proposed monitoring framework can achieve high detection accuracy, and it is easy to use.
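The local binary pattern step can be sketched for a small 2-D grid such as a Mel-spectrogram treated as an image: each cell is compared with its eight neighbours, and the comparison bits form an 8-bit code. The neighbour ordering and bit weights below are one common convention, not necessarily the paper's exact variant:

```python
def lbp_codes(img):
    """Compute the 8-neighbour local binary pattern code for each
    interior cell of a 2-D list of numbers: every neighbour that is
    >= the centre contributes one bit, ordered clockwise from the
    top-left neighbour."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    out = []
    for i in range(1, h - 1):
        row = []
        for j in range(1, w - 1):
            centre = img[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di][j + dj] >= centre:
                    code |= 1 << bit
            row.append(code)
        out.append(row)
    return out
```

In the full pipeline, a histogram of these codes over the spectrogram would serve as the feature vector fed to the extreme learning machine classifier.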
TL;DR: The experimental results show that, compared with the existing energy-saving techniques, the proposed approaches can effectively decrease the energy consumption in Cloud datacenters while maintaining low SLA violation.
Abstract: In this paper, we address the problem of reducing the high energy consumption of Cloud datacenters with minimal Service Level Agreement (SLA) violation. Although there are many energy-aware resource management solutions for Cloud datacenters, existing approaches focus on minimizing energy consumption while ignoring SLA violation at the time of virtual machine (VM) deployment. They also do not consider the types of applications running in the VMs, and thus may not truly reduce energy consumption with minimal SLA violation under a variety of workloads. In this paper, we propose two novel adaptive energy-aware algorithms for maximizing energy efficiency and minimizing the SLA violation rate in Cloud datacenters. Unlike existing approaches, the proposed energy-aware algorithms take into account the application types as well as the CPU and memory resources during the deployment of VMs. To study the efficacy of the proposed approaches, we performed extensive experimental analysis using a real-world workload drawn from more than a thousand PlanetLab VMs. The experimental results show that, compared with existing energy-saving techniques, the proposed approaches can effectively decrease the energy consumption of Cloud datacenters while maintaining low SLA violation.
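The idea of placing a VM by jointly considering resource fit and energy cost can be sketched as a greedy placement rule. This is an illustrative sketch only, not the paper's algorithm: the linear power model and the host/VM attribute names are assumptions:

```python
def place_vm(hosts, vm):
    """Pick the feasible host (enough free CPU and memory) whose
    estimated power increase for this VM is smallest; return None
    if no host fits.

    hosts: list of dicts with 'cpu_free', 'mem_free', 'watts_per_cpu'.
    vm: dict with 'cpu' and 'mem' demands.
    """
    feasible = [h for h in hosts
                if h["cpu_free"] >= vm["cpu"] and h["mem_free"] >= vm["mem"]]
    if not feasible:
        return None  # caller might wake a sleeping host instead
    # Assumed linear power model: extra watts proportional to extra CPU load.
    return min(feasible, key=lambda h: vm["cpu"] * h["watts_per_cpu"])
```

A fuller version in the spirit of the paper would also weight the decision by application type (e.g., CPU-bound vs. memory-bound) and by the expected SLA penalty of consolidating onto a nearly full host.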
TL;DR: This paper provides an up-to-date picture of CloudIoT applications in literature, with a focus on their specific research challenges, and identifies open issues and future directions in this field, which it expects to play a leading role in the landscape of the Future Internet.
Abstract: Cloud computing and the Internet of Things (IoT) are two very different technologies that are both already part of our lives. Their adoption and use are expected to become increasingly pervasive, making them important components of the Future Internet. A novel paradigm where Cloud and IoT are merged together is foreseen as disruptive and as an enabler of a large number of application scenarios. In this paper, we focus our attention on the integration of Cloud and IoT, which we call the CloudIoT paradigm. Many works in the literature have surveyed Cloud and IoT separately: their main properties, features, underlying technologies, and open issues. However, to the best of our knowledge, these works lack a detailed analysis of the new CloudIoT paradigm, which involves completely new applications, challenges, and research issues. To bridge this gap, in this paper we provide a literature survey on the integration of Cloud and IoT. Starting from the basics of both IoT and Cloud computing, we discuss their complementarity, detailing what is currently driving their integration. Thanks to the adoption of the CloudIoT paradigm, a number of applications are gaining momentum: we provide an up-to-date picture of CloudIoT applications in the literature, with a focus on their specific research challenges. These challenges are then analyzed in detail to show where the main body of research is currently heading. We also discuss what is already available in terms of platforms (both proprietary and open source) and projects implementing the CloudIoT paradigm. Finally, we identify open issues and future directions in this field, which we expect to play a leading role in the landscape of the Future Internet.
Vision and motivations for the integration of Cloud computing and the Internet of Things (IoT). Applications stemming from the integration of Cloud computing and IoT. Hot research topics and challenges in the integrated scenario of Cloud computing and IoT. Open issues and future directions for research in this scenario.
TL;DR: This paper defines and explores proofs of retrievability (PORs): schemes that enable an archive or back-up service to produce a concise proof that a user can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.
Abstract: In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
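One classic way to realise a POR is sentinel-based: before upload, the verifier hides keyed pseudorandom check blocks at secret positions in the file, and later spot-checks that they are still intact, without downloading the whole file. The sketch below is a heavily simplified illustration of that idea, not the paper's exact construction; the block size, sentinel count, and use of HMAC are all assumptions:

```python
import hashlib
import hmac
import random

BLOCK = 16  # bytes per block (illustrative size)


def _sentinel(key, s):
    """Pseudorandom sentinel block s, derivable only with the secret key."""
    return hmac.new(key, str(s).encode(), hashlib.sha256).digest()[:BLOCK]


def sentinel_positions(key, n_data_blocks, n_sentinels):
    """Replay the keyed insertion sequence to recover the final index
    of each sentinel in the encoded file."""
    rng = random.Random(key)
    positions, length = [], n_data_blocks
    for _ in range(n_sentinels):
        pos = rng.randrange(length + 1)
        # Earlier sentinels at or past this index shift right by one.
        positions = [p + 1 if p >= pos else p for p in positions]
        positions.append(pos)
        length += 1
    return positions


def encode_with_sentinels(data, key, n_sentinels=4):
    """Split data into blocks and hide keyed sentinel blocks at
    pseudorandom positions before handing the file to the archive."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    rng = random.Random(key)
    for s in range(n_sentinels):
        blocks.insert(rng.randrange(len(blocks) + 1), _sentinel(key, s))
    return blocks


def verify(stored_blocks, key, n_sentinels=4):
    """Spot-check the archive: every sentinel must still be intact.
    Detection is probabilistic; corrupting only data blocks between
    sentinels can escape a single check."""
    n_data = len(stored_blocks) - n_sentinels
    for s, pos in enumerate(sentinel_positions(key, n_data, n_sentinels)):
        if stored_blocks[pos] != _sentinel(key, s):
            return False
    return True
```

Because the archive cannot distinguish sentinels from data, deleting or altering a noticeable fraction of the file destroys some sentinel with high probability; real POR constructions add error-correcting codes so that passing the check also implies the full file is recoverable.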