
Showing papers in "IEEE Transactions on Cloud Computing in 2019"


Journal ArticleDOI
TL;DR: An energy-efficient adaptive resource scheduler for Networked Fog Centers (NetFCs) is proposed and tested; it is capable of providing hard QoS guarantees in terms of minimum/maximum instantaneous rates of the traffic delivered to the vehicular clients, instantaneous rate-jitters and total processing delays.
Abstract: Providing real-time cloud services to Vehicular Clients (VCs) must cope with delay and delay-jitter issues. Fog computing is an emerging paradigm that aims at distributing small-size self-powered data centers (e.g., Fog nodes) between remote Clouds and VCs, in order to deliver data-dissemination real-time services to the connected VCs. Motivated by these considerations, in this paper, we propose and test an energy-efficient adaptive resource scheduler for Networked Fog Centers (NetFCs). They operate at the edge of the vehicular network and are connected to the served VCs through Infrastructure-to-Vehicular (I2V) TCP/IP-based single-hop mobile links. The goal is to exploit the locally measured states of the TCP/IP connections, in order to maximize the overall communication-plus-computing energy efficiency, while meeting the application-induced hard QoS requirements on the minimum transmission rates, maximum delays and delay-jitters. The resulting energy-efficient scheduler jointly performs: (i) admission control of the input traffic to be processed by the NetFCs; (ii) minimum-energy dispatching of the admitted traffic; (iii) adaptive reconfiguration and consolidation of the Virtual Machines (VMs) hosted by the NetFCs; and, (iv) adaptive control of the traffic injected into the TCP/IP mobile connections. The salient features of the proposed scheduler are that: (i) it is adaptive and admits distributed and scalable implementation; and, (ii) it is capable of providing hard QoS guarantees, in terms of minimum/maximum instantaneous rates of the traffic delivered to the vehicular clients, instantaneous rate-jitters and total processing delays. The actual performance of the proposed scheduler in the presence of: (i) client mobility; (ii) wireless fading; and, (iii) reconfiguration and consolidation costs of the underlying NetFCs, is numerically tested and compared against that of some state-of-the-art schedulers, under both synthetically generated and measured real-world workload traces.

299 citations


Journal ArticleDOI
TL;DR: Agreed and emerging concerns in the container orchestration space are discussed, positioning the topic within the cloud context but also moving it closer to current concerns in cloud platforms, microservices and continuous development.
Abstract: Containers as a lightweight technology to virtualise applications have recently been successful, particularly to manage applications in the cloud. Often, the management of clusters of containers becomes essential and the orchestration of the construction and deployment becomes a central problem. This emerging topic has been taken up by researchers, but there is currently no secondary study to consolidate this research. We aim to identify, taxonomically classify and systematically compare the existing research body on containers and their orchestration and specifically the application of this technology in the cloud. We have conducted a systematic mapping study of 46 selected studies. We classified and compared the selected studies based on a characterisation framework. This results in a discussion of agreed and emerging concerns in the container orchestration space, positioning it within the cloud context, but also moving it closer to current concerns in cloud platforms, microservices and continuous development.

267 citations


Journal ArticleDOI
TL;DR: This paper introduces the concept of wireless aware joint scheduling and computation offloading (JSCO) for multi-component applications, where an optimal decision is made on which components need to be offloaded as well as the scheduling order of these components.
Abstract: Cloud offloading is an indispensable solution to supporting computationally demanding applications on resource constrained mobile devices. In this paper, we introduce the concept of wireless aware joint scheduling and computation offloading (JSCO) for multi-component applications, where an optimal decision is made on which components need to be offloaded as well as the scheduling order of these components. The JSCO approach allows for more degrees of freedom in the solution by moving away from a compiler pre-determined scheduling order for the components towards a more wireless aware scheduling order. For some component dependency graph structures, the proposed algorithm can shorten execution times by parallel processing appropriate components in the mobile and cloud. We define a net utility that trades off the energy saved by the mobile, subject to constraints on the communication delay, overall application execution time, and component precedence ordering. The linear optimization problem is solved using real data measurements obtained from running multi-component applications on an HTC smartphone and Amazon EC2, using WiFi for cloud offloading. The performance is further analyzed using various component dependency graph topologies and sizes. Results show that the energy saved increases with longer application runtime deadline, higher wireless rates, and smaller offload data sizes.
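As a hedged illustration of the kind of net-utility formulation described above (the JSCO paper's exact objective and constraints are not reproduced; the symbols below are illustrative):

```latex
\max_{x \in \{0,1\}^{N},\ \sigma} \quad U(x,\sigma) = E_{\mathrm{saved}}(x)
\quad \text{s.t.} \quad
T_{\mathrm{comm}}(x) \le D_{\mathrm{comm}}, \quad
T_{\mathrm{exec}}(x,\sigma) \le D_{\mathrm{app}}, \quad
\sigma \ \text{respects component precedence},
```

where x_i = 1 if component i is offloaded to the cloud, σ is the wireless-aware execution order, and D_comm, D_app are the communication-delay and application-deadline bounds.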

219 citations


Journal ArticleDOI
TL;DR: Following-Me Cloud applies a Markov-decision-process-based algorithm for cost-effective performance-optimized service migration decisions, while two alternative schemes to ensure service continuity and disruption-free operation are proposed, based on either software defined networking technologies or the locator/identifier separation protocol.
Abstract: The trend towards the cloudification of the 3GPP LTE mobile network architecture and the emergence of federated cloud infrastructures call for alternative service delivery strategies for improved user experience and efficient resource utilization. We propose Follow-Me Cloud (FMC), a design tailored to this environment, but with a broader applicability, which allows mobile users to always be connected via the optimal data anchor and mobility gateways, while cloud-based services follow them and are delivered via the optimal service point inside the cloud infrastructure. Follow-Me Cloud applies a Markov-decision-process-based algorithm for cost-effective performance-optimized service migration decisions, while two alternative schemes to ensure service continuity and disruption-free operation are proposed, based on either software defined networking technologies or the locator/identifier separation protocol. Numerical results from our analytic model for follow-me cloud, as well as testbed experiments with the two alternative follow-me cloud implementations we have developed, demonstrate quantitatively and qualitatively the advantages it can bring about.

185 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed VM consolidation approach, which uses a regression-based model to approximate the future CPU and memory utilization of VMs and PMs, provides substantial improvement over other heuristic and meta-heuristic algorithms in reducing the energy consumption, the number of VM migrations and the number of SLA violations.
Abstract: Virtual Machine (VM) consolidation provides a promising approach to save energy and improve resource utilization in data centers. Many heuristic algorithms have been proposed to tackle VM consolidation as a vector bin-packing problem. However, the existing algorithms have focused mostly on minimizing the number of active Physical Machines (PMs) according to their current resource requirements and have neglected future resource demands. Therefore, they generate unnecessary VM migrations and increase the rate of Service Level Agreement (SLA) violations in data centers. To address this problem, we propose a VM consolidation approach that takes into account both the current and future utilization of resources. Our approach uses a regression-based model to approximate the future CPU and memory utilization of VMs and PMs. We investigate the effectiveness of virtual and physical resource utilization prediction in VM consolidation performance using Google cluster and PlanetLab real workload traces. The experimental results show that our approach provides substantial improvement over other heuristic and meta-heuristic algorithms in reducing the energy consumption, the number of VM migrations and the number of SLA violations.
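As a minimal sketch of the regression-based prediction idea (not the authors' exact model; the window length and linear-trend choice are assumptions), one can fit a least-squares line over a sliding window of recent utilization samples and extrapolate one step ahead:

```python
import numpy as np

def predict_next_utilization(history, window=10):
    """Fit a linear trend to the last `window` utilization samples
    (CPU or memory, in percent) and extrapolate one step ahead."""
    recent = np.asarray(history[-window:], dtype=float)
    t = np.arange(len(recent))
    slope, intercept = np.polyfit(t, recent, deg=1)   # least-squares line
    predicted = slope * len(recent) + intercept
    return float(np.clip(predicted, 0.0, 100.0))      # utilization stays in [0, 100]%

# Example: a VM whose CPU utilization is trending upwards
print(predict_next_utilization([20, 22, 25, 27, 30, 33, 35, 38, 40, 43]))
```

A consolidation heuristic could then treat a host as overloaded if the predicted, rather than only the current, utilization exceeds a threshold, avoiding migrations that the near-term trend would make unnecessary.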

140 citations


Journal ArticleDOI
TL;DR: A power and latency aware optimum cloudlet selection strategy for a multi-cloudlet environment, with the introduction of a proxy server, is proposed; results demonstrate that the proposed approach reduces the power consumption and the system response time.
Abstract: Fast interactive response in mobile cloud computing is an emerging area of interest. Execution of applications inside the remote cloud increases the delay and affects the service quality. To avoid this difficulty, the cloudlet is introduced. A cloudlet provides the same service to the device as the cloud, at low latency and high bandwidth. However, selecting a cloudlet for low-power computation offloading is a major challenge if more than one cloudlet is available nearby. In this paper we propose a power and latency aware optimum cloudlet selection strategy for a multi-cloudlet environment with the introduction of a proxy server. Theoretical analysis shows that, using the proposed approach, power consumption and latency are reduced by approximately 29-32 and 33-36 percent respectively, compared with offloading to the remote cloud. An experimental analysis of the proposed cloudlet selection scheme is performed using cloudlets and cloud servers located at our university laboratory. Theoretical and experimental results demonstrate that, using the proposed strategy, power and latency aware cloudlet selection can be performed. The proposed approach is compared with existing methods in a multi-cloudlet scenario to demonstrate that it reduces the power consumption and the system response time.
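As an illustrative sketch only (the paper's actual selection logic and measured parameters are not reproduced; the weights and fields below are assumptions), a proxy server could rank reachable cloudlets by a weighted score of estimated offload power and latency:

```python
def select_cloudlet(cloudlets, w_power=0.5, w_latency=0.5):
    """Pick the cloudlet minimizing a weighted power/latency score.
    Values should be normalized to comparable scales in practice."""
    def score(c):
        return w_power * c["power_mw"] + w_latency * c["latency_ms"]
    return min(cloudlets, key=score)

candidates = [
    {"name": "cloudlet-A", "power_mw": 320, "latency_ms": 12},
    {"name": "cloudlet-B", "power_mw": 280, "latency_ms": 20},
    {"name": "remote-cloud", "power_mw": 450, "latency_ms": 80},
]
print(select_cloudlet(candidates)["name"])
```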

109 citations


Journal ArticleDOI
TL;DR: This paper presents a novel data protection method combining Selective Encryption (SE) concept with fragmentation and dispersion on storage based on the invertible Discrete Wavelet Transform (DWT) to divide agnostic data into three fragments with three different levels of protection.
Abstract: Protection of end users' data stored in Cloud servers has become an important issue in today's Cloud environments. In this paper, we present a novel data protection method combining the Selective Encryption (SE) concept with fragmentation and dispersion on storage. Our method is based on the invertible Discrete Wavelet Transform (DWT) to divide agnostic data into three fragments with three different levels of protection. These three fragments can then be dispersed over different storage areas with different levels of trustworthiness to protect end users' data by resisting possible leaks in Clouds. Thus, our method optimizes the storage cost by saving expensive, private, and secure storage spaces and utilizing cheap but less trustworthy storage space. An intensive security analysis is performed to verify the high protection level of our method. Additionally, the efficiency is demonstrated by an implementation that deploys tasks between the CPU and a General Purpose Graphics Processing Unit (GPGPU) in an optimized manner.
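A minimal sketch of the general fragmentation idea, assuming PyWavelets and a Haar wavelet (the paper's exact transform depth, fragment boundaries and encryption steps are not reproduced): a two-level DWT splits the data into one approximation band and two detail bands, which can be dispersed with different protection levels and later recombined losslessly:

```python
import numpy as np
import pywt

def fragment_with_dwt(data):
    """Two-level DWT: returns three fragments with decreasing sensitivity."""
    approx, detail2, detail1 = pywt.wavedec(data, "haar", level=2)
    return {
        "fragment_private": approx,   # most informative: trusted storage / encrypted
        "fragment_semi": detail2,     # medium protection level
        "fragment_public": detail1,   # least informative: cheap, low-trust storage
    }

def reassemble(fragments):
    coeffs = [fragments["fragment_private"],
              fragments["fragment_semi"],
              fragments["fragment_public"]]
    return pywt.waverec(coeffs, "haar")

data = np.arange(16, dtype=float)
frags = fragment_with_dwt(data)
print(np.allclose(reassemble(frags), data))   # True: the DWT is invertible
```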

107 citations


Journal ArticleDOI
TL;DR: This paper jointly investigates the task allocation and CPU-cycle frequency of MEC servers as a stochastic optimization problem, and an online Task Offloading and Frequency Scaling for Energy Efficiency (TOFFEE) algorithm is proposed to obtain the optimal solutions of these subproblems concurrently.
Abstract: As an emerging computing paradigm, mobile edge computing (MEC) can improve users' service experience by provisioning the cloud resources close to the mobile devices. With MEC, computation-intensive tasks can be processed on the MEC servers, which can greatly decrease the mobile devices' energy consumption and prolong their battery lifetime. However, the highly dynamic task arrival and wireless channel states pose great challenges on the computation task allocation in MEC. This article jointly investigates the task allocation and CPU-cycle frequency, to achieve the minimum energy consumption while guaranteeing that the queue length is upper bounded. We formulate it as a stochastic optimization problem, and with the aid of stochastic optimization methods, we decouple the original problem into two deterministic optimization subproblems. An online Task Offloading and Frequency Scaling for Energy Efficiency (TOFFEE) algorithm is proposed to obtain the optimal solutions of these subproblems concurrently. TOFFEE can obtain the close-to-optimal energy consumption while bounding the applications' queue length. Performance evaluation is conducted which verifies TOFFEE's effectiveness. Experiment results indicate that TOFFEE can decrease the energy consumption by about 15% compared with the RLE algorithm, and by about 38% compared with the RME algorithm.

105 citations


Journal ArticleDOI
TL;DR: This paper presents a work-sharing model, called Honeybee, using an adaptation of the well-known work stealing method to load balance independent jobs among heterogeneous mobile nodes, able to accommodate nodes randomly leaving and joining the system.
Abstract: As mobile devices evolve to be powerful and pervasive computing tools, their usage also continues to increase rapidly. However, mobile device users frequently experience problems when running intensive applications on the device itself, or offloading to remote clouds, due to resource shortage and connectivity issues. Ironically, most users’ environments are saturated with devices with significant computational resources. This paper argues that nearby mobile devices can efficiently be utilised as a crowd-powered resource cloud to complement the remote clouds. Node heterogeneity, unknown worker capability, and dynamism are identified as essential challenges to be addressed when scheduling work among nearby mobile devices. We present a work-sharing model, called Honeybee, using an adaptation of the well-known work stealing method to load balance independent jobs among heterogeneous mobile nodes, able to accommodate nodes randomly leaving and joining the system. The overall strategy of Honeybee is to focus on short-term goals, taking advantage of opportunities as they arise, based on the concepts of proactive workers and an opportunistic delegator. We evaluate our model using a prototype framework built using Android and implement two applications. We report speedups of up to four with seven devices and energy savings of up to 71 percent with eight devices.
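A minimal sketch of the work-stealing idea that Honeybee adapts (names and structure are illustrative, not the framework's API): each worker serves jobs from its own queue and, when idle, steals from the tail of a busy peer's queue:

```python
import random
from collections import deque

class Worker:
    def __init__(self, name, jobs=()):
        self.name = name
        self.jobs = deque(jobs)

    def next_job(self, peers):
        if self.jobs:                       # work on own queue first
            return self.jobs.popleft()
        victims = [p for p in peers if p.jobs]
        if victims:                         # steal from the tail of a busy peer
            return random.choice(victims).jobs.pop()
        return None                         # nothing left anywhere

workers = [Worker("phone-1", range(6)), Worker("phone-2"), Worker("tablet-1")]
while any(w.jobs for w in workers):
    for w in workers:
        job = w.next_job([p for p in workers if p is not w])
        if job is not None:
            print(w.name, "processed job", job)
```

Nodes joining or leaving would simply add or remove entries from the worker list between iterations.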

100 citations


Journal ArticleDOI
TL;DR: The proposed scheme is the first identity-based PDP scheme for multi-copy and multi-cloud servers; it is efficient and practical, and its security is based on the computational Diffie-Hellman (CDH) hard problem.
Abstract: To increase the availability and durability of outsourced data, many customers store multiple copies on multiple cloud servers. To guarantee the integrity of multiple copies, some provable data possession (PDP) protocols for multi-copy have been presented. However, most previous PDP protocols consider all copies to be stored on only one cloud storage server; to some degree, multi-copy makes little sense in such a circumstance. Furthermore, many PDP protocols depend on the technique of public key infrastructure (PKI), which suffers from many types of security vulnerabilities and also brings heavy communication and computation costs. To increase the security and efficiency, we provide a novel identity-based PDP scheme for multiple copies on multiple cloud storage servers. In our scheme, all copies are delivered to different cloud storage servers, which work cooperatively to store the customer's data. By means of homomorphic verifiable tags, the integrity of all copies can be checked simultaneously. The system model and security model of our scheme are provided in the paper. The security of our scheme is proved based on the computational Diffie-Hellman (CDH) hard problem. Analysis and experimental evaluation show that our scheme is efficient and practical. The proposed scheme is the first identity-based PDP scheme for multi-copy and multi-cloud servers.

94 citations


Journal ArticleDOI
TL;DR: A queue model is built to formulate the mobile users’ workload offloading problem, and a Lyapunov optimization framework is used to trade off system offloading utility against queue backlog; results show the effectiveness of the Lagrangian optimization offloading method for deterministic WiFi connections and the multi-stage stochastic programming method for random WiFi connections.
Abstract: In order to improve mobile users' service experience, mobile cloud computing (MCC) has been promoted, which can offload compute-intensive applications to the cloud. Although MCC can alleviate the burdens of smart mobile devices (SMDs), it also aggravates the computing and storage overheads in the cloud center and the bandwidth overhead on wireless links for application offloading. Therefore, we should carefully design the offloading policy to decrease these overheads while easing the burdens of SMDs. To this end, we investigate the offloading policy in heterogeneous wireless networks. In this paper, a queue model is built to formulate the mobile users' workload offloading problem, and Lyapunov optimization is used to make a trade-off between the system offloading utility and the queue backlog. First, based on a deterministic WiFi connection, a Lagrangian optimization method is proposed to decide the optimal offloading workloads. Furthermore, considering random WiFi connection durations, a multi-stage stochastic programming approach is adopted. The experimental results show the effectiveness of the Lagrangian optimization offloading method for deterministic WiFi connections and the multi-stage stochastic programming method for random WiFi connections.
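A generic illustration of the drift-plus-penalty form commonly used in Lyapunov optimization (not necessarily the paper's exact derivation; a(t), b(t), U(·) and V are illustrative symbols):

```latex
Q(t+1) = \max\!\big[\,Q(t) - b(t),\; 0\,\big] + a(t), \qquad
\min_{b(t)} \;\; -\,V \cdot U\!\big(b(t)\big) \;+\; Q(t)\,\big[a(t) - b(t)\big],
```

where a(t) is the workload arriving in slot t, b(t) the workload offloaded or served, U(·) the offloading utility, and V > 0 the knob that trades utility against queue backlog.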

Journal ArticleDOI
TL;DR: This paper proposes a secure framework for outsourced privacy-preserving storage and retrieval in large shared image repositories based on IES-CBIR, a novel Image Encryption Scheme that exhibits Content-Based Image Retrieval properties.
Abstract: Storage requirements for visual data have been increasing in recent years, following the emergence of many highly interactive multimedia services and applications for mobile devices in both personal and corporate scenarios. This has been a key driving factor for the adoption of cloud-based data outsourcing solutions. However, outsourcing data storage to the Cloud also leads to new security challenges that must be carefully addressed, especially regarding privacy. In this paper we propose a secure framework for outsourced privacy-preserving storage and retrieval in large shared image repositories. Our proposal is based on IES-CBIR, a novel Image Encryption Scheme that exhibits Content-Based Image Retrieval properties. The framework enables both encrypted storage and searching using Content-Based Image Retrieval queries while preserving privacy against honest-but-curious cloud administrators. We have built a prototype of the proposed framework, formally analyzed and proven its security properties, and experimentally evaluated its performance and retrieval precision. Our results show that IES-CBIR is provably secure, allows more efficient operations than existing proposals, both in terms of time and space complexity, and paves the way for new practical application scenarios.

Journal ArticleDOI
TL;DR: An optimal offline algorithm that leverages dynamic and linear programming techniques, under the assumption of exact knowledge of the workload on objects, is proposed, along with two online algorithms that make a trade-off between residential and migration costs and dynamically select storage classes across CSPs.
Abstract: Cloud Storage Providers (CSPs) offer geographically distributed data stores providing several storage classes with different prices. An important problem faced by cloud users is how to exploit these storage classes to serve an application with a time-varying workload on its objects at minimum cost. This cost consists of residential cost (i.e., storage, Put and Get costs) and potential migration cost (i.e., network cost). To address this problem, we first propose an optimal offline algorithm that leverages dynamic and linear programming techniques under the assumption of exact knowledge of the workload on objects. Due to the high time complexity of this algorithm and its requirement for a priori knowledge, we propose two online algorithms that make a trade-off between residential and migration costs and dynamically select storage classes across CSPs. The first online algorithm is deterministic, requires no knowledge of the workload, and incurs no more than 2γ − 1 times the minimum cost obtained by the optimal offline algorithm, where γ is the ratio of the residential cost in the most expensive data store to that in the cheapest one, in either network or storage cost. The second online algorithm is randomized and leverages the “Receding Horizon Control” (RHC) technique, exploiting available future workload information for w time slots. This algorithm incurs at most 1 + γ/w times the optimal cost. The effectiveness of the proposed algorithms is demonstrated through simulations using a workload synthesized based on characteristics of the Facebook workload.
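As a rough sketch of the deterministic online flavor described above (a classic rent-or-buy style rule, not the paper's exact 2γ − 1-competitive algorithm): stay in the current storage class and migrate only once the accumulated extra residential cost since the last migration exceeds the migration cost:

```python
def online_placement(costs_per_slot, migration_cost):
    """costs_per_slot: list of dicts {storage_class: residential cost in that slot}.
    Migrate only when the regret of staying put exceeds the migration cost."""
    current = min(costs_per_slot[0], key=costs_per_slot[0].get)
    placements, regret = [current], 0.0
    for slot in costs_per_slot[1:]:
        best = min(slot, key=slot.get)
        regret += slot[current] - slot[best]    # extra cost of not migrating earlier
        if regret > migration_cost:             # amortized: migration now pays off
            current, regret = best, 0.0
        placements.append(current)
    return placements

slots = [{"hot": 1.0, "cold": 3.0}, {"hot": 4.0, "cold": 1.0},
         {"hot": 4.0, "cold": 1.0}, {"hot": 4.0, "cold": 1.0}]
print(online_placement(slots, migration_cost=5.0))   # ['hot', 'hot', 'cold', 'cold']
```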

Journal ArticleDOI
TL;DR: The framework reliably performed object detection and classification on the data, comprising 21,600 video streams and 175 GB in size, in 6.52 hours; the GPU-enabled deployment analysed the same streams in 3 hours, making it at least twice as fast as the cloud deployment without GPUs.
Abstract: Object detection and classification are the basic tasks in video analytics and become the starting point for other complex applications. Traditional video analytics approaches are manual and time consuming, and they are subjective due to the very involvement of the human factor. We present a cloud based video analytics framework for scalable and robust analysis of video streams. The framework empowers an operator by automating the object detection and classification process from recorded video streams. An operator only specifies an analysis criterion and the duration of the video streams to analyse. The streams are then fetched from cloud storage, decoded and analysed on the cloud. The framework offloads compute intensive parts of the analysis to GPU powered servers in the cloud. Vehicle and face detection are presented as two case studies for evaluating the framework, with one month of data and a 15 node cloud. The framework reliably performed object detection and classification on the data, comprising 21,600 video streams and 175 GB in size, in 6.52 hours. The GPU enabled deployment of the framework took 3 hours to perform the analysis on the same number of video streams, thus making it at least twice as fast as the cloud deployment without GPUs.

Journal ArticleDOI
TL;DR: MoDEMO is proposed, a new elasticity management system supporting both vertical and horizontal elasticity, both VM and container virtualization technologies, multiple cloud providers simultaneously, and various elasticity policies; it also allows dynamic configuration at runtime during the execution of the application.
Abstract: Elasticity is considered a fundamental feature of cloud computing whereby the system capacity can adjust to the current application workload by provisioning or de-provisioning computing resources automatically and in a timely manner. Many studies of elasticity management systems have already been conducted; however, almost all fail to offer a complete modular solution. In this article, we propose MoDEMO, a new elasticity management system supporting both vertical and horizontal elasticity, both VM and container virtualization technologies, multiple cloud providers simultaneously, and various elasticity policies. MoDEMO is characterized by the following features: it is (i) the first system that manages elasticity using the Open Cloud Computing Interface (OCCI) model with respect to the OCCI standard specifications, and (ii) the first unified system which combines the functionalities of the worldwide cloud providers Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP); and (iii) it allows dynamic configuration at runtime during the execution of the application. MoDEMO makes it possible to adapt resource capacity in a timely manner according to the workload intensity and to increase application performance without introducing significant overhead.

Journal ArticleDOI
TL;DR: This paper proposes agent mining and cloud mining approaches to address resource limitations in blockchain-enabled IoT, and a dueling deep reinforcement learning approach to jointly optimize access selection, computing resource allocation, and networking resource allocation.
Abstract: Recently, the term ‘Internet of Things’ (IoT) has garnered great attention. As a trusted, dependable, and decentralized approach, blockchain has already been used in IoT. However, the existing blockchain has a number of drawbacks that prevent it from being used as a generic platform for IoT. The nodes in IoT are heavily resource-limited, especially computing and networking resources. Unfortunately, they are necessary for the blockchain to solve complicated puzzles and propagate blocks. In this paper, we propose agent mining and cloud mining approaches to solve the above problem in the blockchain-enabled IoT. To be specific, miners act as mining agents for nodes in IoT, offload mining tasks to cloud computing servers, and use networking resources dynamically. Furthermore, in order to enhance the performance, the access selection of users, computing resources allocation, and networking resources allocation are formulated as a joint optimization problem. We then propose a dueling deep reinforcement learning approach to address this problem. Numerical results justify the effectiveness of our proposed scheme.

Journal ArticleDOI
TL;DR: This paper studies the service chaining towards the hybrid SFC clouds, where both physical appliances and VNF appliances provide services collaboratively and devise a Markov Approximation (MA) based algorithm that can yield near-optimal solutions and outperform other benchmark algorithms significantly.
Abstract: In Service-Function-Chaining (SFC) enabled networks, various sophisticated policy-aware network functions, such as intrusion detection, access control and unified threat management, can be realized in either physical middleboxes or virtualized network function (VNF) appliances. In this paper, we study service chaining in hybrid SFC clouds, where both physical appliances and VNF appliances provide services collaboratively. In such hybrid SFC networks, the challenge is how to efficiently steer the service chains for traffic demands while matching their individual policy chains concurrently, such that a utility associated with the total admitted traffic rate and the induced overheads is maximized. We find that this problem has not been well solved so far. To this end, we devise a Markov Approximation (MA) based algorithm. The approximation property of the proposed algorithm is also proved. Extensive evaluation results show that the proposed MA algorithm can yield near-optimal solutions and outperform other benchmark algorithms significantly.

Journal ArticleDOI
TL;DR: A hierarchical access control method using modified hierarchical attribute-based encryption (M-HABE) and a modified three-layer structure is proposed in this paper, designed to ensure that users with legal authority obtain the corresponding classified data and to prevent illegal users and unauthorized legal users from accessing the data.
Abstract: Cloud computing is an Internet-based computing pattern through which shared resources are provided to devices on demand. Integrating mobile devices into cloud computing is an emerging but promising paradigm, and the integration takes place in a cloud-based hierarchical multi-user data-sharing environment. With this integration, security issues such as data confidentiality and user authority may arise in the mobile cloud computing system, and these are considered the main constraints on the development of mobile cloud computing. In order to provide safe and secure operation, a hierarchical access control method using modified hierarchical attribute-based encryption (M-HABE) and a modified three-layer structure is proposed in this paper. In a specific mobile cloud computing model, enormous amounts of data, which may come from all kinds of mobile devices such as smartphones, feature phones and PDAs, can be controlled and monitored by the system; these data can be sensitive to unauthorized third parties and restricted to legal users as well. The novel scheme mainly focuses on data processing, storing and accessing, and is designed to ensure that users with legal authority obtain the corresponding classified data and to prevent illegal users and unauthorized legal users from accessing the data, which makes it extremely suitable for mobile cloud computing paradigms.

Journal ArticleDOI
TL;DR: An energy-aware resource allocation scheme based on a Stackelberg game is proposed for energy management in cloud-based DCs; the scheme is evaluated on various performance metrics using Google workload traces, and the results show its effectiveness.
Abstract: The Smart Grid (SG) has emerged as one of the most powerful technologies of the modern era for efficient energy management, integrating information and communication technologies (ICT) into the existing infrastructure. Among various ICT, cloud computing (CC) has emerged as one of the leading service providers, using geo-distributed data centers (DCs) to serve the requests of users in the SG. In recent times, with an increase in service requests by end users for various resources, there has been an exponential increase in the number of servers deployed at various DCs. With this increase in size, the energy consumption of DCs has increased many fold, which leads to an increase in the overall operational cost of DCs. However, efficient resource allocation among these geo-distributed DCs may play a vital role in reducing their energy consumption. Moreover, with an increase in harmful emissions, the use of renewable energy sources (RES) can benefit DCs, the SG, and society at large. Keeping focus on these points, in this paper, an energy-aware resource allocation scheme is proposed using a Stackelberg game for energy management in cloud-based DCs. For this purpose, a cloud controller is used to receive the requests of users and then distribute them among geo-distributed DCs in such a way that the energy consumption of the DCs is sustained by RES. However, if the energy consumption of a DC is not sustained by RES, then the energy is drawn from the grid. The requests of users are routed to the DC which is offered the lowest energy tariff by the grid. For this purpose, a Stackelberg game for energy trading is also proposed to select the grid offering the lowest energy tariff to DCs. The proposed scheme is evaluated on various performance metrics using Google workload traces. The results obtained show the effectiveness of the proposed scheme.

Journal ArticleDOI
TL;DR: Analytical models based on Stochastic Reward Nets (SRNs) are proposed to model and evaluate an IaaS cloud system at different levels and an improvement of several orders of magnitude in the state space reduction of the monolithic model is obtained.
Abstract: Infrastructure as a Service (IaaS) is one of the most significant and fastest growing fields in cloud computing. To efficiently use the resources of an IaaS cloud, several important factors such as performance, availability, and power consumption need to be considered and evaluated carefully. Evaluation of these metrics is essential for cost-benefit prediction and quantification of different strategies which can be applied to cloud management. In this paper, analytical models based on Stochastic Reward Nets (SRNs) are proposed to model and evaluate an IaaS cloud system at different levels. To achieve this, an SRN is initially presented to model a group of physical machines which are controlled by a management layer. Afterwards, the SRN models presented for the groups of physical machines in the first stage are combined to capture a monolithic model representing an entire IaaS cloud. Since the monolithic model does not scale well for large cloud systems, two approximate SRN models using folding and fixed-point iteration techniques are proposed to evaluate the performance, availability, and power consumption of the IaaS cloud. The existence of a solution for the fixed-point approximate model is proved using Brouwer's fixed-point theorem. A validation of the proposed monolithic and approximate models against both an ad-hoc discrete-event simulator developed in Java and the CloudSim framework is presented. The analytic-numeric results obtained from applying the proposed models to sample cloud systems show that the errors introduced by approximate models are insignificant while an improvement of several orders of magnitude in the state space reduction of the monolithic model is obtained.
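As a generic sketch of the fixed-point iteration used for the approximate model (the actual SRN sub-models and their coupling are not reproduced; the update function below is a placeholder standing in for solving one sub-model given the other's outputs):

```python
import math

def fixed_point(update, x0, tol=1e-9, max_iter=1000):
    """Iterate x <- update(x) until successive values differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = update(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy stand-in: x = cos(x) has a unique fixed point near 0.739
print(fixed_point(math.cos, x0=0.0))
```

Brouwer's theorem, as invoked in the paper, guarantees that such a fixed point exists for a continuous update on a suitable compact convex set; convergence of the iteration itself is checked numerically.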

Journal ArticleDOI
TL;DR: CIPPPA is the first public auditing mechanism achieving conditional identity privacy of patients in WBANs: the real identity of a patient is unknown to anyone in cloud-based WBANs other than the private key generator (PKG).
Abstract: Wireless body area networks (WBANs) rely on powerful cloud storage services to manage massive medical data. As precise medical diagnosis and analysis are heavily based on these medical data, and any altered medical data may cause severe consequences, the integrity of outsourced medical data has become the most pressing security issue. To date, many public auditing mechanisms have been proposed to check data integrity, but they cannot achieve conditional identity privacy: no patient would like others to know his/her real identity in connection with a serious disease, and some malicious patients should be revoked in a timely manner due to misbehavior. Additionally, these mechanisms are vulnerable to malicious auditors who collude with the cloud server to cheat patients. In this paper, we propose a conditional identity privacy-preserving public auditing (CIPPPA) mechanism for cloud-based WBANs. CIPPPA is the first public auditing mechanism achieving conditional identity privacy of patients in WBANs: the real identity of a patient is unknown to anyone in cloud-based WBANs other than the private key generator (PKG). We attempt to integrate the Ethereum blockchain into CIPPPA, which assists patients in validating malicious auditing behaviors. Formal security analysis and performance evaluation demonstrate that CIPPPA is practical for cloud-based WBANs.

Journal ArticleDOI
TL;DR: A new paradigm called data integrity auditing without private key storage is proposed and a new signature scheme is designed which not only supports blockless verifiability, but also is compatible with the linear sketch.
Abstract: Using cloud storage services, users can store their data in the cloud to avoid the expenditure of local data storage and maintenance. To ensure the integrity of the data stored in the cloud, many data integrity auditing schemes have been proposed. In most, if not all, of the existing schemes, a user needs to employ his private key to generate the data authenticators for realizing the data integrity auditing. Thus, the user has to possess a hardware token (e.g. USB token, smart card) to store his private key and memorize a password to activate this private key. If this hardware token is lost or this password is forgotten, most of the current data integrity auditing schemes would be unable to work. In order to overcome this problem, we propose a new paradigm called data integrity auditing without private key storage and design such a scheme. In this scheme, we use biometric data (e.g. iris scan, fingerprint) as the user's fuzzy private key to avoid using the hardware token. Meanwhile, the scheme can still effectively complete the data integrity auditing. We utilize a linear sketch with coding and error correction processes to confirm the identity of the user. In addition, we design a new signature scheme which not only supports blockless verifiability, but is also compatible with the linear sketch. The security proof and the performance analysis show that our proposed scheme achieves desirable security and efficiency.

Journal ArticleDOI
TL;DR: This article is the first to take advantage of Vector Ordinal Optimization techniques to search for Pareto optimal composition solutions with QoS dependency involved, and it is hoped that this work will help find truly desirable solutions for QoS-aware service composition.
Abstract: Service composition is popular for composing a set of existing services to provide complex services. With the increasing number of services deployed in cloud computing environments, many service providers have started to offer candidate services with equivalent functionality but different Quality of Service (QoS) levels. Therefore, QoS-aware service composition has drawn extensive attention. Most existing approaches for QoS-aware service composition assume a service's QoS values are not correlated to those of other services. However, QoS dependency exists in real life, and impacts the overall QoS values of the composite services. In this article, we study QoS dependency-aware service composition considering multiple QoS attributes. Based on the Pareto set model, we focus on searching for a set of Pareto optimal solutions. A candidate pruning algorithm for removing the unpromising candidates is proposed, and a service composition algorithm using Vector Ordinal Optimization techniques is designed. Simulation experiments are conducted to validate the efficiency and effectiveness of our algorithms. We are the first to take advantage of Vector Ordinal Optimization techniques to search for Pareto optimal composition solutions with QoS dependency involved. The capturing of QoS dependency enables us to find truly desirable solutions.
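A minimal illustration of the Pareto-dominance test underlying such pruning (assuming all QoS attributes are oriented so that smaller is better; this is not the paper's full candidate-pruning or Vector Ordinal Optimization algorithm):

```python
def dominates(a, b):
    """a Pareto-dominates b if it is no worse in every attribute
    and strictly better in at least one (smaller is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# (latency ms, cost) of functionally equivalent candidate services
services = [(120, 0.9), (100, 1.2), (150, 0.7), (130, 1.5)]
print(pareto_front(services))   # (130, 1.5) is dominated by (120, 0.9)
```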

Journal ArticleDOI
TL;DR: A cloud-based UAV system which incorporates the computing capability of the terrestrial cloud into UAV systems is proposed, and the relationship between the acquisition rate of sensor data and the stability of the cloud-based UAV system is derived.
Abstract: Unmanned Aerial Vehicle (UAV) technology has been widely applied in both military and civilian applications. Recent research on UAV systems features a dramatic increase in the variety and number of equipped sensors, which leads to the issue that multiple UAVs cannot afford to handle the big data generated by this range of sensors in the air. Considering this practical problem, in this paper we propose a cloud-based UAV system which incorporates the computing capability of the terrestrial cloud into UAV systems. Relying on the proposed cloud-based UAV system, one critical theoretical issue is how to acquire the big data generated by the sensors while guaranteeing a stable operating state of the system. First, we analyze the cloud-based system's on-demand service ability as well as its impact on the UAVs’ control procedure. Second, the UAV cloud control system is modeled as a networked control system. Moreover, the stability condition of the UAV cloud control system is derived, which reveals the relationship between the acquisition rate of sensor data and the stability of the cloud-based UAV system. Finally, simulations are conducted to verify the effectiveness of our theoretical analysis.

Journal ArticleDOI
TL;DR: This paper proposes a practical privacy-preserving K-means clustering scheme that can be efficiently outsourced to cloud servers, and allows cloud servers to perform clustering directly over encrypted datasets, while achieving comparable computational complexity and accuracy compared with clusterings over unencrypted ones.
Abstract: Clustering techniques have been widely adopted in many real world data analysis applications, such as customer behavior analysis, targeted marketing, and digital forensics. With the explosion of data in today's big data era, a major trend for handling clustering over large-scale datasets is outsourcing it to public cloud platforms. This is because cloud computing offers not only reliable services with performance guarantees, but also savings on in-house IT infrastructures. However, as datasets used for clustering may contain sensitive information, e.g., patient health information, commercial data, and behavioral data, directly outsourcing them to public cloud servers inevitably raises privacy concerns. In this paper, we propose a practical privacy-preserving K-means clustering scheme that can be efficiently outsourced to cloud servers. Our scheme allows cloud servers to perform clustering directly over encrypted datasets, while achieving comparable computational complexity and accuracy compared with clustering over unencrypted ones. We also investigate secure integration of MapReduce into our scheme, which makes it extremely suitable for the cloud computing environment. Thorough security analysis and numerical analysis evaluate the performance of our scheme in terms of security and efficiency. Experimental evaluation over a 5 million object dataset further validates the practical performance of our scheme.

Journal ArticleDOI
TL;DR: The proposed mechanism applies a deflating factor on the available bandwidth value furnished as input to the scheduler and is able to increase the number of solutions with makespans that are shorter than the defined deadline and reduce the underestimations of the makespan and cost provided by workflow schedulers.
Abstract: In hybrid clouds, inter-cloud links play a key role in the execution of jobs with data dependencies. Insufficient available bandwidth in inter-cloud links can increase the makespan and the monetary cost to execute the application on public clouds. Imprecise information about the available bandwidth can lead to inefficient scheduling decisions. This paper attempts to evaluate the impact of imprecise information about the available bandwidth in inter-cloud links on workflow schedules, and it proposes a mechanism to cope with imprecise information about the available bandwidth and its impact on the makespan and cost estimates. The proposed mechanism applies a deflating factor on the available bandwidth value furnished as input to the scheduler. Simulation results showed that the mechanism is able to increase the number of solutions with makespans that are shorter than the defined deadline and reduce the underestimations of the makespan and cost provided by workflow schedulers.
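A minimal sketch of the deflating-factor idea (the factor value below is illustrative, not the paper's calibrated setting): the scheduler plans data transfers against a conservatively reduced bandwidth, so makespan and cost estimates are less likely to be underestimated:

```python
def deflated_bandwidth(measured_mbps, deflating_factor=0.8):
    """Feed the scheduler a conservative estimate of inter-cloud bandwidth."""
    return measured_mbps * deflating_factor

def planned_transfer_time_s(data_mb, measured_mbps):
    bw = deflated_bandwidth(measured_mbps)
    return data_mb * 8 / bw            # MB -> Mb, then divide by Mbps

print(planned_transfer_time_s(data_mb=500, measured_mbps=100))  # planned at 80 Mbps
```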

Journal ArticleDOI
TL;DR: This paper examines the problem of how to schedule the migrations and how to allocate network resources for migration when multiple VMs need to be migrated at the same time in the Software-defined Network context and proposes a method that computes the optimal migration sequence and network bandwidth used for each migration.
Abstract: Live migration is a key technique for virtual machine (VM) management in data center networks, which enables flexibility in resource optimization, fault tolerance, and load balancing. Despite its usefulness, live migration still introduces performance degradations during the migration process. Thus, there have been continuous efforts to reduce the migration time in order to minimize the impact. From the network's perspective, the migration time is determined by the amount of data to be migrated and the available bandwidth used for such transfer. In this paper, we examine the problem of how to schedule the migrations and how to allocate network resources for migration when multiple VMs need to be migrated at the same time. We consider the problem in the Software-Defined Network (SDN) context since it provides flexible control over routing. More specifically, we propose a method that computes the optimal migration sequence and the network bandwidth used for each migration. We formulate this problem as a mixed integer programming problem, which is NP-hard. To make it computationally feasible for large scale data centers, we propose an approximation scheme via linear approximation plus fully polynomial time approximation, and obtain its theoretical performance bound and computational complexity. Through extensive simulations, we demonstrate that our fully polynomial time approximation (FPTA) algorithm has a good performance compared with the optimal solution of the primary programming problem and two state-of-the-art algorithms. That is, our proposed FPTA algorithm approaches the optimal solution of the primary programming problem with less than 10 percent variation and much less computation time. Meanwhile, it reduces the total migration time and the service downtime by up to 40 and 20 percent, respectively, compared with the state-of-the-art algorithms.
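As an illustrative back-of-the-envelope model only (not the paper's MIP or FPTA formulation; the pre-copy approximation and numbers are assumptions), the time to migrate one VM can be approximated from its memory footprint, page-dirtying rate and allocated bandwidth, and a sequential schedule sums these over the chosen order:

```python
def single_migration_time_s(mem_gb, dirty_rate_gbps, bandwidth_gbps):
    """Rough pre-copy estimate: effective transfer rate is bandwidth minus dirty rate."""
    effective = bandwidth_gbps - dirty_rate_gbps
    if effective <= 0:
        raise ValueError("bandwidth must exceed the page-dirtying rate")
    return mem_gb * 8 / effective          # GB -> Gb, then divide by Gbps

def total_sequential_time_s(vms, bandwidth_gbps):
    """vms: list of (mem_gb, dirty_rate_gbps), migrated one after another."""
    return sum(single_migration_time_s(m, d, bandwidth_gbps) for m, d in vms)

vms = [(4, 0.5), (8, 1.0), (2, 0.2)]
print(total_sequential_time_s(vms, bandwidth_gbps=10))
```

Choosing the migration order and the bandwidth share per migration so as to minimize such totals (and the induced downtime) is what the MIP and its approximation address.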

Journal ArticleDOI
TL;DR: This paper proposes a novel healthcare IoT system fusing advantages of attribute-based encryption, cloud and edge computing, which provides an efficient, flexible, secure fine-grained access control mechanism with data verification in healthcare IoT network without any secure channel and enables data users to enjoy the lightweight decryption.
Abstract: Healthcare Internet-of-Things (IoT) is an emerging paradigm that enables embedded devices to monitor patients' vital signals and allows these data to be aggregated and outsourced to the cloud. The cloud enables authorized users to store and share data to enjoy on-demand services. Nevertheless, it also causes many security concerns because of the untrusted network environment, dishonest cloud service providers and resource-limited devices. To preserve patients' privacy, existing solutions usually apply cryptographic tools to offer access controls. However, fine-grained access control among authorized users is still a challenge, especially for lightweight and resource-limited end-devices. In this paper, we propose a novel healthcare IoT system fusing advantages of attribute-based encryption, cloud and edge computing, which provides an efficient, flexible, secure fine-grained access control mechanism with data verification in healthcare IoT network without any secure channel and enables data users to enjoy the lightweight decryption. We also define the formal security models and present security proofs for our proposed scheme. The extensive comparison and experimental simulation demonstrate that our scheme has better performance than existing solutions.

Journal ArticleDOI
TL;DR: An adaptive and fuzzy resource management framework (AFRM) is proposed in which the last resource values of each virtual machine are gathered through environment sensors and sent to a fuzzy controller; AFRM then analyzes the received information to decide how to reallocate the resources in each iteration of a self-adaptive control cycle.
Abstract: Resource management plays a key role in the cloud-computing environment, in which applications face dynamically changing workloads. Such dynamic and unpredictable workloads can lead to performance degradation of applications, especially when demands for resources increase. To meet Quality of Service (QoS) requirements based on Service Level Agreements (SLAs), resource management strategies must be taken into account. The question addressed in this research is how to reduce the number of SLA violations by optimizing the resources allocated to users, applying an autonomous control cycle and a fuzzy knowledge management system. In this paper, an adaptive and fuzzy resource management framework (AFRM) is proposed in which the last resource values of each virtual machine are gathered through environment sensors and sent to a fuzzy controller. AFRM then analyzes the received information to decide how to reallocate the resources in each iteration of a self-adaptive control cycle. All the membership functions and rules are dynamically updated based on workload changes to satisfy QoS requirements. Two sets of experiments were conducted on the storage resource to examine AFRM in comparison to rule-based and static-fuzzy approaches in terms of RAE, utility, number of SLA violations, and cost, applying HIGH, MEDIUM, MEDIUM-HIGH, and LOW workloads. The results reveal that AFRM outperforms the rule-based and static-fuzzy approaches in several respects.

Journal ArticleDOI
TL;DR: This article proposes a theoretical price-performance model based on a study of the actual Cloud instances offered by one of the major Cloud IaaS actors, Amazon Elastic Compute Cloud (EC2), and presents an hourly price comparison between an in-house cluster and the equivalent EC2 instances.
Abstract: While High Performance Computing (HPC) centers continuously evolve to provide more computing power to their users, we observe a wish for convergence between Cloud Computing (CC) and HPC platforms, with the commercial hope that CC infrastructures will eventually replace in-house facilities. If we exclude the performance point of view, where many previous studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when running an HPC workload, the question of real cost-effectiveness is often left aside, with the intuition that, most probably, the instances offered by Cloud providers are competitive from a cost point of view. In this article, we set out to confirm (or refute) this intuition by analyzing what composes the Total Cost of Ownership (TCO) of an in-house HPC facility operated internally since 2007. This TCO model is then used to compare against the cost that would have been incurred to run the same platform (and the same workload) on a competitive Cloud IaaS offer. Our approach to this price comparison is three-fold. First, we propose a theoretical price-performance model based on a study of the actual Cloud instances offered by one of the major Cloud IaaS actors: Amazon Elastic Compute Cloud (EC2). Then, based on the TCO analysis of the HPC facility, we propose an hourly price comparison between our in-house cluster and the equivalent EC2 instances. Finally, based on experimental benchmarking on the local cluster and on the Cloud instances, we propose an update of the former theoretical price model to reflect the real system performance. The results obtained generally advocate the acquisition of an in-house HPC facility, which counterbalances the common intuition in favor of Cloud Computing platforms, even when provided by the leading worldwide Cloud provider.
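A minimal sketch of the kind of hourly comparison described above (all figures are placeholders, not the paper's measured TCO or EC2 prices): amortize the cluster's total cost of ownership over its lifetime of usable core-hours and compare with an on-demand per-core-hour price:

```python
def in_house_cost_per_core_hour(tco_total, cores, years, utilization=0.8):
    """Amortize total cost of ownership over usable core-hours."""
    usable_hours = years * 365 * 24 * utilization
    return tco_total / (cores * usable_hours)

# Placeholder numbers for illustration only
in_house = in_house_cost_per_core_hour(tco_total=2_000_000, cores=2000, years=6)
ec2_on_demand = 0.05   # hypothetical $/core-hour for a comparable instance

print(f"in-house: ${in_house:.4f}/core-hour vs cloud: ${ec2_on_demand:.4f}/core-hour")
```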