
Showing papers on "Cloud computing published in 2012"


Proceedings ArticleDOI
17 Aug 2012
TL;DR: This paper argues that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Abstract: Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Widespread geographical distribution; c) Mobility; d) Very large number of nodes; e) Predominant role of wireless access; f) Strong presence of streaming and real-time applications; g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).

4,440 citations


Journal ArticleDOI
TL;DR: An architectural framework and principles for energy-efficient Cloud computing are defined and the proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves energy efficiency of the data center, while delivering the negotiated Quality of Service (QoS).

2,511 citations


Journal ArticleDOI
TL;DR: This paper proposes introducing a Trusted Third Party, tasked with assuring specific security characteristics within a cloud environment, and presents a horizontal level of service, available to all implicated entities, that realizes a security mesh, within which essential trust is maintained.

1,728 citations


Journal ArticleDOI
TL;DR: The goal is the development of a cloud and cloud shadow detection algorithm suitable for routine use with Landsat images, with accuracy as high as 96.4%.

1,620 citations


Journal ArticleDOI
TL;DR: A competitive analysis is conducted and competitive ratios of optimal online deterministic algorithms for the single VM migration and dynamic VM consolidation problems are proved, and novel adaptive heuristics for dynamic consolidation of VMs are proposed based on an analysis of historical data from the resource usage by VMs.
Abstract: The rapid growth in demand for computational power driven by modern service applications, combined with the shift to the Cloud computing model, has led to the establishment of large-scale virtualized data centers. Such data centers consume enormous amounts of electrical energy, resulting in high operating costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) using live migration and switching idle nodes to the sleep mode allows Cloud providers to optimize resource usage and reduce energy consumption. However, the obligation of providing high quality of service to customers necessitates dealing with the energy-performance trade-off, as aggressive consolidation may lead to performance degradation. Because of the variability of workloads experienced by modern applications, the VM placement should be optimized continuously in an online manner. To understand the implications of the online nature of the problem, we conduct a competitive analysis and prove competitive ratios of optimal online deterministic algorithms for the single VM migration and dynamic VM consolidation problems. Furthermore, we propose novel adaptive heuristics for dynamic consolidation of VMs based on an analysis of historical data from the resource usage by VMs. The proposed algorithms significantly reduce energy consumption, while ensuring a high level of adherence to the service level agreement. We validate the high efficiency of the proposed algorithms by extensive simulations using real-world workload traces from more than a thousand PlanetLab VMs.
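As a concrete illustration of the history-driven heuristics the abstract describes, here is a minimal sketch of a median-absolute-deviation (MAD) overload-detection rule, one of the adaptive methods proposed in this line of work. The safety parameter and CPU numbers are illustrative, not the paper's values.

```python
import statistics

def mad(values):
    """Median absolute deviation: a robust spread estimate for noisy CPU histories."""
    med = statistics.median(values)
    return statistics.median(abs(v - med) for v in values)

def is_host_overloaded(cpu_history, safety=2.5):
    """Adaptive upper utilization threshold derived from historical data.

    The threshold tightens when utilization is volatile, triggering earlier
    VM migrations; `safety` trades energy savings against SLA violations.
    """
    threshold = 1.0 - safety * mad(cpu_history)
    return cpu_history[-1] > threshold

# A volatile host is flagged sooner than a steady one at similar load.
print(is_host_overloaded([0.55, 0.90, 0.40, 0.85, 0.88]))  # True
print(is_host_overloaded([0.70, 0.71, 0.69, 0.70, 0.72]))  # False
```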

1,616 citations


Journal ArticleDOI
Xun Xu1
TL;DR: Some of the essential features of cloud computing are briefly discussed with regard to the end-users, enterprises that use the cloud as a platform, and cloud providers themselves.
Abstract: Cloud computing is changing the way industries and enterprises do their businesses in that dynamically scalable and virtualized resources are provided as a service over the Internet. This model creates a brand new opportunity for enterprises. In this paper, some of the essential features of cloud computing are briefly discussed with regard to the end-users, enterprises that use the cloud as a platform, and cloud providers themselves. Cloud computing is emerging as one of the major enablers for the manufacturing industry; it can transform the traditional manufacturing business model, help it to align product innovation with business strategy, and create intelligent factory networks that encourage effective collaboration. Two types of cloud computing adoption in the manufacturing sector have been suggested: manufacturing with direct adoption of cloud computing technologies, and cloud manufacturing, the manufacturing version of cloud computing. Cloud computing has been adopted in some key areas of manufacturing such as IT, pay-as-you-go business models, production scaling up and down per demand, and flexibility in deploying and customizing solutions. In cloud manufacturing, distributed resources are encapsulated into cloud services and managed in a centralized way. Clients can use cloud services according to their requirements. Cloud users can request services covering product design, manufacturing, testing, management, and all other stages of a product life cycle.

1,588 citations


Posted Content
TL;DR: This paper presents a Cloud-centric vision for the worldwide implementation of the Internet of Things, and expands on the need for convergence of WSN, the Internet, and distributed computing directed at the technological research community.
Abstract: Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across many areas of modern day living. This offers the ability to measure, infer and understand environmental indicators, from delicate ecologies and natural resources to urban environments. The proliferation of these devices in a communicating-actuating network creates the Internet of Things (IoT), wherein sensors and actuators blend seamlessly with the environment around us, and the information is shared across platforms in order to develop a common operating picture (COP). Fuelled by the recent adaptation of a variety of enabling device technologies such as RFID tags and readers, near field communication (NFC) devices and embedded sensor and actuator nodes, the IoT has stepped out of its infancy and is the next revolutionary technology in transforming the Internet into a fully integrated Future Internet. As we move from www (static pages web) to web2 (social networking web) to web3 (ubiquitous computing web), the need for data-on-demand using sophisticated intuitive queries increases significantly. This paper presents a Cloud-centric vision for the worldwide implementation of the Internet of Things. The key enabling technologies and application domains that are likely to drive IoT research in the near future are discussed. A cloud implementation using Aneka, which is based on the interaction of private and public clouds, is presented. We conclude our IoT vision by expanding on the need for convergence of WSN, the Internet and distributed computing directed at the technological research community.

1,372 citations


Proceedings ArticleDOI
25 Mar 2012
TL;DR: This paper proposes ThinkAir, a framework that makes it simple for developers to migrate their smartphone applications to the cloud and enhances the power of mobile cloud computing by parallelizing method execution using multiple virtual machine (VM) images.
Abstract: Smartphones have exploded in popularity in recent years, becoming ever more sophisticated and capable. As a result, developers worldwide are building increasingly complex applications that require ever increasing amounts of computational power and energy. In this paper we propose ThinkAir, a framework that makes it simple for developers to migrate their smartphone applications to the cloud. ThinkAir exploits the concept of smartphone virtualization in the cloud and provides method-level computation offloading. Advancing on previous work, it focuses on the elasticity and scalability of the cloud and enhances the power of mobile cloud computing by parallelizing method execution using multiple virtual machine (VM) images. We implement ThinkAir and evaluate it with a range of benchmarks, from simple micro-benchmarks to more complex applications. First, we show that the execution time and energy consumption decrease by two orders of magnitude for an N-queens puzzle application and by one order of magnitude for a face detection and a virus scan application. We then show that a parallelizable application can invoke multiple VMs to execute in the cloud in a seamless and on-demand manner so as to achieve a greater reduction in execution time and energy consumption. We finally use a memory-hungry image combiner tool to demonstrate that applications can dynamically request VMs with more computational power in order to meet their computational requirements.
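The core offloading decision can be pictured with a small energy model: offload a method when shipping its state and waiting for the cloud costs less than running it locally. This is only a sketch of the idea behind profiler-driven frameworks like ThinkAir; the function, parameters, and numbers are illustrative assumptions, not ThinkAir's API.

```python
def should_offload(local_time_s, local_power_w,
                   remote_exec_time_s, data_bytes,
                   bandwidth_bps, tx_power_w, idle_power_w):
    """Offload when cloud execution plus transfer costs less energy than
    running locally; real systems also weigh execution-time constraints."""
    transfer_time = data_bytes * 8 / bandwidth_bps
    local_energy = local_time_s * local_power_w
    # While offloaded: radio active during transfer, device near-idle while waiting.
    remote_energy = transfer_time * tx_power_w + remote_exec_time_s * idle_power_w
    return remote_energy < local_energy

# A 10 s local job vs. the cloud: 1 s remote execution, 2 MB of state to ship.
print(should_offload(local_time_s=10, local_power_w=0.9,
                     remote_exec_time_s=1, data_bytes=2_000_000,
                     bandwidth_bps=5_000_000, tx_power_w=1.3,
                     idle_power_w=0.3))  # True: 4.46 J offloaded vs. 9 J locally
```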

1,215 citations


Proceedings ArticleDOI
16 Oct 2012
TL;DR: In this article, the authors proposed a searchable symmetric encryption (SSE) scheme to achieve sublinear search time, security against adaptive chosen-keyword attacks, compact indexes and the ability to add and delete files efficiently.
Abstract: Searchable symmetric encryption (SSE) allows a client to encrypt its data in such a way that this data can still be searched. The most immediate application of SSE is to cloud storage, where it enables a client to securely outsource its data to an untrusted cloud provider without sacrificing the ability to search over it. SSE has been the focus of active research and a multitude of schemes that achieve various levels of security and efficiency have been proposed. Any practical SSE scheme, however, should (at a minimum) satisfy the following properties: sublinear search time, security against adaptive chosen-keyword attacks, compact indexes and the ability to add and delete files efficiently. Unfortunately, none of the previously-known SSE constructions achieve all these properties at the same time. This severely limits the practical value of SSE and decreases its chance of deployment in real-world cloud storage systems. To address this, we propose the first SSE scheme to satisfy all the properties outlined above. Our construction extends the inverted index approach (Curtmola et al., CCS 2006) in several non-trivial ways and introduces new techniques for the design of SSE. In addition, we implement our scheme and conduct a performance evaluation, showing that our approach is highly efficient and ready for deployment.
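To make the inverted-index idea concrete, here is a toy encrypted index: keywords are replaced by HMAC-derived tokens, and the postings list is masked so the server stores only opaque bytes. This is a didactic sketch, not the paper's construction; in particular, a real scheme separates the masking key from the search token and supports deletions, both of which this toy omits.

```python
import hmac, hashlib, os

KEY = os.urandom(32)   # client's secret key (toy setup)
index = {}             # server-side state: token -> list of masked file ids

def token(keyword: str) -> bytes:
    """Deterministic search trapdoor; the server never learns the keyword."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

def add_file(keyword: str, file_id: bytes) -> None:
    """Client-side update: append a masked file id under the keyword's token.
    Efficient adds are exactly what dynamic SSE schemes are designed for."""
    t = token(keyword)
    i = len(index.get(t, []))
    # NOTE: toy only -- real schemes derive the mask from a key distinct from
    # the search token, otherwise the server can unmask after one search.
    pad = hashlib.sha256(t + i.to_bytes(4, "big")).digest()[:len(file_id)]
    index.setdefault(t, []).append(bytes(a ^ b for a, b in zip(file_id, pad)))

def search(keyword: str) -> list[bytes]:
    """Server matches the token in O(1); client unmasks the postings list."""
    t = token(keyword)
    out = []
    for i, masked in enumerate(index.get(t, [])):
        pad = hashlib.sha256(t + i.to_bytes(4, "big")).digest()[:len(masked)]
        out.append(bytes(a ^ b for a, b in zip(masked, pad)))
    return out

add_file("cloud", b"doc-017")
add_file("cloud", b"doc-042")
print(search("cloud"))  # [b'doc-017', b'doc-042']
```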

887 citations


Proceedings ArticleDOI
03 Mar 2012
TL;DR: This work identifies the key micro-architectural needs of scale-out workloads, calling for a change in the trajectory of server processors that would lead to improved computational density and power efficiency in data centers.
Abstract: Emerging scale-out workloads require extensive amounts of computational resources. However, data centers using modern server hardware face physical constraints in space and power, limiting further expansion and calling for improvements in the computational density per server and in the per-operation energy. Continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing server efficiency to ensure that server hardware closely matches the needs of scale-out workloads. In this work, we introduce CloudSuite, a benchmark suite of emerging scale-out workloads. We use performance counters on modern servers to study scale-out workloads, finding that today's predominant processor micro-architecture is inefficient for running these workloads. We find that inefficiency comes from the mismatch between the workload needs and modern processors, particularly in the organization of instruction and data memory systems and the processor core micro-architecture. Moreover, while today's predominant micro-architecture is inefficient when executing scale-out workloads, we find that continuing the current trends will further exacerbate the inefficiency in the future. In this work, we identify the key micro-architectural needs of scale-out workloads, calling for a change in the trajectory of server processors that would lead to improved computational density and power efficiency in data centers.
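The counter-based methodology reduces to reading hardware events and forming ratios such as instructions per cycle (IPC). A minimal sketch using Linux `perf stat` in CSV mode follows; it assumes a Linux machine with perf installed and permission to read the counters, and the event list is deliberately tiny compared to the study's.

```python
import subprocess

def ipc(cmd):
    """Run a command under `perf stat` and compute instructions per cycle.
    `-x,` switches perf to CSV output (value,unit,event,...) on stderr."""
    res = subprocess.run(
        ["perf", "stat", "-x,", "-e", "cycles,instructions", "--"] + cmd,
        capture_output=True, text=True, check=True)
    counts = {}
    for line in res.stderr.strip().splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and fields[0].isdigit():
            # Strip modifiers like ':u' that perf appends when unprivileged.
            counts[fields[2].split(":")[0]] = int(fields[0])
    return counts["instructions"] / counts["cycles"]

# Scale-out workloads in the paper exhibit low IPC on aggressive OoO cores.
print(round(ipc(["python3", "-c", "sum(range(10**6))"]), 2))
```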

860 citations


Proceedings ArticleDOI
13 Aug 2012
TL;DR: APLOMB solves real problems faced by network administrators, can outsource over 90% of middlebox hardware in a typical large enterprise network, and, in a case study of a real enterprise, imposes an average latency penalty of 1.1ms and median bandwidth inflation of 3.8%.
Abstract: Modern enterprises almost ubiquitously deploy middlebox processing services to improve security and performance in their networks. Despite this, we find that today's middlebox infrastructure is expensive, complex to manage, and creates new failure modes for the networks that use them. Given the promise of cloud computing to decrease costs, ease management, and provide elasticity and fault-tolerance, we argue that middlebox processing can benefit from outsourcing to the cloud. Arriving at a feasible implementation, however, is challenging due to the need to achieve functional equivalence with traditional middlebox deployments without sacrificing performance or increasing network complexity. In this paper, we motivate, design, and implement APLOMB, a practical service for outsourcing enterprise middlebox processing to the cloud. Our discussion of APLOMB is data-driven, guided by a survey of 57 enterprise networks, the first large-scale academic study of middlebox deployment. We show that APLOMB solves real problems faced by network administrators, can outsource over 90% of middlebox hardware in a typical large enterprise network, and, in a case study of a real enterprise, imposes an average latency penalty of 1.1ms and median bandwidth inflation of 3.8%.

Journal ArticleDOI
TL;DR: The authors outline several critical security challenges and motivate further investigation of security solutions for a trustworthy public cloud environment.
Abstract: Cloud computing represents today's most exciting computing paradigm shift in information technology. However, security and privacy are perceived as primary obstacles to its wide adoption. Here, the authors outline several critical security challenges and motivate further investigation of security solutions for a trustworthy public cloud environment.

Journal ArticleDOI
Cong Wang1, Qian Wang1, Kui Ren1, Ning Cao, Wenjing Lou 
TL;DR: This paper proposes a flexible distributed storage integrity auditing mechanism, utilizing the homomorphic token and distributed erasure-coded data, which is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
Abstract: Cloud storage enables users to remotely store their data and enjoy on-demand high quality cloud applications without the burden of local hardware and software management. Though the benefits are clear, such a service also relinquishes users' physical possession of their outsourced data, which inevitably poses new security risks toward the correctness of the data in the cloud. In order to address this new problem and further achieve a secure and dependable cloud storage service, we propose in this paper a flexible distributed storage integrity auditing mechanism, utilizing the homomorphic token and distributed erasure-coded data. The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost. The auditing result not only ensures strong cloud storage correctness guarantees, but also simultaneously achieves fast data error localization, i.e., the identification of the misbehaving server. Considering that cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Analysis shows the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
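The spot-checking idea behind homomorphic tokens fits in a few lines: the client precomputes a pseudorandom linear combination of its blocks, and later asks the server to recompute it; any tampering changes the sum. A single-server toy with illustrative parameters follows; the paper's scheme additionally spreads erasure-coded blocks across servers so that a failed check localizes the misbehaving one.

```python
import random

P = 2**31 - 1  # a prime modulus for the arithmetic (illustrative)

def make_token(blocks, seed):
    """Client precomputes a homomorphic token: a pseudorandom linear
    combination of its data blocks. Keeping only the token and the seed
    makes client-side state tiny."""
    rng = random.Random(seed)
    return sum(rng.randrange(P) * b for b in blocks) % P

def server_response(stored_blocks, seed):
    """An honest server recomputes the same linear combination on challenge."""
    rng = random.Random(seed)
    return sum(rng.randrange(P) * b for b in stored_blocks) % P

blocks, seed = [101, 202, 303, 404], 7
tok = make_token(blocks, seed)

print(server_response(blocks, seed) == tok)              # True: data intact
print(server_response([101, 202, 999, 404], seed) == tok) # False: corruption caught
```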

Proceedings ArticleDOI
23 Mar 2012
TL;DR: This paper provides a concise but all-round analysis of data security and privacy protection issues associated with cloud computing across all stages of the data life cycle, and describes future research work on data security and privacy protection issues in the cloud.
Abstract: It is well known that cloud computing has many potential advantages, and many enterprise applications and data are migrating to public or hybrid clouds. But for some business-critical applications, organizations, especially large enterprises, still will not move them to the cloud. The market share of cloud computing is still far behind the one expected. From the consumers' perspective, cloud computing security concerns, especially data security and privacy protection issues, remain the primary inhibitor for adoption of cloud computing services. This paper provides a concise but all-round analysis of data security and privacy protection issues associated with cloud computing across all stages of the data life cycle. Then this paper discusses some current solutions. Finally, this paper describes future research work about data security and privacy protection issues in the cloud.

Journal ArticleDOI
TL;DR: Numerical studies are extensively performed in which the results clearly show that with the OCRP algorithm, cloud consumer can successfully minimize total cost of resource provisioning in cloud computing environments.
Abstract: In cloud computing, cloud providers can offer cloud consumers two provisioning plans for computing resources, namely reservation and on-demand plans. In general, the cost of utilizing computing resources provisioned by the reservation plan is cheaper than that provisioned by the on-demand plan, since the cloud consumer has to pay the provider in advance. With the reservation plan, the consumer can reduce the total resource provisioning cost. However, the best advance reservation of resources is difficult to achieve due to the uncertainty of the consumer's future demand and of the providers' resource prices. To address this problem, an optimal cloud resource provisioning (OCRP) algorithm is proposed by formulating a stochastic programming model. The OCRP algorithm can provision computing resources for use in multiple provisioning stages as well as a long-term plan, e.g., four stages in a quarterly plan and twelve stages in a yearly plan. Demand and price uncertainty is considered in OCRP. In this paper, different approaches to obtain the solution of the OCRP algorithm are considered, including deterministic equivalent formulation, sample-average approximation, and Benders decomposition. Numerical studies are extensively performed and the results clearly show that, with the OCRP algorithm, the cloud consumer can successfully minimize the total cost of resource provisioning in cloud computing environments.
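The reservation-versus-on-demand trade-off can be demonstrated with a brute-force expected-cost minimization over a toy scenario set; the paper solves the real problem with stochastic programming (deterministic equivalent formulation, sample-average approximation, Benders decomposition). Prices, probabilities, and demands below are made up for illustration.

```python
# Second-stage scenarios: (probability, demand in instance-hours).
scenarios = [(0.3, 40), (0.5, 100), (0.2, 180)]
RESERVED_RATE, ON_DEMAND_RATE = 0.06, 0.10   # $/instance-hour (illustrative)

def expected_cost(reserved):
    """First stage: pay for `reserved` hours up front at the cheaper rate.
    Second stage: cover any shortfall with on-demand hours."""
    cost = 0.0
    for prob, demand in scenarios:
        shortfall = max(0, demand - reserved)
        cost += prob * (reserved * RESERVED_RATE + shortfall * ON_DEMAND_RATE)
    return cost

best = min(range(0, 201), key=expected_cost)
print(best, round(expected_cost(best), 2))  # 100 reserved hours, expected $7.60
```

Reserving exactly up to the 100-hour scenario is optimal here because an extra reserved hour costs $0.06 but saves $0.10 only with probability 0.7 above 40 hours and 0.2 above 100 hours.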

Book
07 Jun 2012
TL;DR: This publication provides an overview of the security and privacy challenges pertinent to public cloud computing and points out the considerations organizations should take into account when outsourcing data, applications, and infrastructure to a public cloud environment.
Abstract: NIST Special Publication 800-144 - Cloud computing can and does mean different things to different people. The common characteristics most interpretations share are on-demand scalability of highly available and reliable pooled computing resources, secure access to metered services from nearly anywhere, and displacement of data and services from inside to outside the organization. While aspects of these characteristics have been realized to a certain extent, cloud computing remains a work in progress. This publication provides an overview of the security and privacy challenges pertinent to public cloud computing and points out the considerations organizations should take into account when outsourcing data, applications, and infrastructure to a public cloud environment.

Journal ArticleDOI
TL;DR: A simulation environment for energy-aware cloud computing data centers is presented, and the effectiveness of the simulator in utilizing power management schemes, such as voltage scaling, frequency scaling, and dynamic shutdown, applied to the computing and networking components is demonstrated.
Abstract: Cloud computing data centers are becoming increasingly popular for the provisioning of computing resources. The cost and operating expenses of data centers have skyrocketed with the increase in computing capacity. Several governmental, industrial, and academic surveys indicate that the energy utilized by computing and communication units within a data center contributes a considerable slice of the data center operational costs. In this paper, we present a simulation environment for energy-aware cloud computing data centers. Along with the workload distribution, the simulator is designed to capture details of the energy consumed by data center components (servers, switches, and links) as well as packet-level communication patterns in realistic setups. The simulation results obtained for two-tier, three-tier, and three-tier high-speed data center architectures demonstrate the effectiveness of the simulator in utilizing power management schemes, such as voltage scaling, frequency scaling, and dynamic shutdown, that are applied to the computing and networking components.
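The power models such simulators rely on are simple enough to state inline: an idle floor plus a load-proportional term, with DVFS shrinking the dynamic part roughly as the cube of the clock scale. The sketch below uses illustrative wattages, not values from the paper, and ignores that a down-clocked task runs longer.

```python
def server_power(utilization, p_idle=100.0, p_peak=250.0):
    """Linear server power model: idle floor plus a load-proportional part."""
    return p_idle + (p_peak - p_idle) * utilization

def dvfs_power(utilization, f_scale, p_idle=100.0, p_peak=250.0):
    """Voltage/frequency scaling: dynamic power ~ C*V^2*f with V ~ f, so the
    load-dependent term shrinks roughly as f_scale**3. The idle floor is
    untouched, which is why dynamic shutdown is needed as well."""
    return p_idle + (p_peak - p_idle) * utilization * f_scale ** 3

print(server_power(0.5))                # 175.0 W at full clock, half load
print(round(dvfs_power(0.5, 0.7), 1))   # 125.7 W at 70% clock
```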

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed prediction-based resource measurement and provisioning strategies using Neural Network and Linear Regression offer more adaptive resource management for applications hosted in the cloud environment, an important mechanism for achieving on-demand resource allocation in the cloud.
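A minimal version of the linear-regression predictor is just ordinary least squares over the recent load history, extrapolated one interval ahead; the neural-network variant needs more machinery and is omitted. Everything below, including the traffic numbers, is an illustrative sketch rather than the paper's implementation.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x over a short load history."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Observed requests/sec over the last six intervals; predict the next one
# so capacity can be provisioned before the demand actually arrives.
history = [120, 135, 150, 170, 185, 200]
a, b = fit_line(range(len(history)), history)
print(round(a + b * len(history)))  # 217 -> scale out ahead of the load
```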

Journal ArticleDOI
TL;DR: A thorough survey on optical interconnects for next generation data center networks is presented and a qualitative categorization and comparison of the proposed schemes based on their main features such as connectivity and scalability are provided.
Abstract: Data centers are experiencing an exponential increase in the amount of network traffic that they have to sustain due to cloud computing and several emerging web applications. To handle this network load, large data centers are required, with thousands of servers interconnected by high-bandwidth switches. Current data center networks, based on electronic packet switches, consume excessive power to handle the increased communication bandwidth of emerging applications. Optical interconnects have gained attention recently as a promising solution offering high throughput, low latency and reduced energy consumption compared to current networks based on commodity switches. This paper presents a thorough survey on optical interconnects for next generation data center networks. Furthermore, the paper provides a qualitative categorization and comparison of the proposed schemes based on their main features such as connectivity and scalability. Finally, the paper discusses the cost and the power consumption of these schemes, which are of primary importance in future data center networks.

Proceedings ArticleDOI
21 Mar 2012
TL;DR: This paper discusses what makes all this possible, the architectural design of cloud computing and its applications, and how customers avoid paying for infrastructure, its installation, the manpower required to handle it, and its maintenance.
Abstract: From the advent of the internet in the 1990s to the present-day facilities of ubiquitous computing, the internet has changed the computing world in a drastic way. It has traveled from the concept of parallel computing to distributed computing to grid computing and recently to cloud computing. Although the idea of cloud computing has been around for quite some time, it is an emerging field of computer science. Cloud computing can be defined as a computing environment where the computing needs of one party can be outsourced to another party, and when the need arises to use computing power or resources like databases or email, they can be accessed via the internet. Cloud computing is a recent trend in IT that moves computing and data away from desktop and portable PCs into large data centers. The main advantage of cloud computing is that customers do not have to pay for infrastructure, its installation, the manpower required to handle such infrastructure, or maintenance. In this paper we discuss what makes all this possible, the architectural design of cloud computing, and its applications.

Patent
13 Mar 2012
Abstract: System, method, and tangible computer-readable storage media are disclosed for providing a brokering service for compute resources. The method includes, at a brokering service, polling a group of separately administered compute environments to identify resource capabilities and information, each compute resource environment including a group of managed nodes for processing workload; receiving a request for compute resources at the brokering service system, the request being associated with a service level agreement (SLA); and, based on the resource capabilities across the group of compute resource environments, selecting compute resources in one or more of the compute resource environments. The brokering service system receives the workload associated with the request and communicates it to the selected resources for processing. The brokering service system can aggregate resources from multiple cloud service providers and act as an advocate for, or a guarantor of, the SLA associated with the workload.

Journal ArticleDOI
TL;DR: Two energy-conscious task consolidation heuristics are presented, which aim to maximize resource utilization and explicitly take into account both active and idle energy consumption and demonstrate their promising energy-saving capability.
Abstract: The energy consumption of under-utilized resources, particularly in a cloud environment, accounts for a substantial amount of the actual energy use. Inherently, a resource allocation strategy that takes into account resource utilization would lead to better energy efficiency; this, in clouds, extends further with virtualization technologies in that tasks can be easily consolidated. Task consolidation is an effective method to increase resource utilization and in turn reduce energy consumption. Recent studies have identified that server energy consumption scales linearly with (processor) resource utilization. This encouraging fact further highlights the significant contribution of task consolidation to the reduction in energy consumption. However, task consolidation can also lead to the freeing up of resources that sit idle yet still draw power. There have been some notable efforts to reduce idle power draw, typically by putting computer resources into some form of sleep/power-saving mode. In this paper, we present two energy-conscious task consolidation heuristics, which aim to maximize resource utilization and explicitly take into account both active and idle energy consumption. Our heuristics assign each task to the resource on which the energy consumption for executing the task is explicitly or implicitly minimized without the performance degradation of that task. Based on our experimental results, our heuristics demonstrate promising energy-saving capability.
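The assignment rule the abstract describes, pick the resource where the task's marginal energy is smallest while counting both active and idle draw, can be sketched in a few lines. This is a simplified stand-in for the paper's heuristics; the power constants, capacity model, and function names are illustrative.

```python
def task_energy(resource_util, task_util, task_time,
                p_idle=10.0, p_peak=30.0):
    """Marginal energy of running a task on a resource. On an already-active
    resource the idle share is 'free'; an empty resource must also be woken,
    so the task pays its idle power too (constants illustrative)."""
    dynamic = (p_peak - p_idle) * task_util * task_time
    wakeup = p_idle * task_time if resource_util == 0 else 0.0
    return dynamic + wakeup

def place(task_util, task_time, utils):
    """Greedy: among resources with spare capacity, pick the one with the
    lowest marginal energy, which favors consolidation onto active nodes."""
    candidates = [i for i, u in enumerate(utils) if u + task_util <= 1.0]
    return min(candidates, key=lambda i: task_energy(utils[i], task_util, task_time))

utils = [0.0, 0.6, 0.3]        # current utilization of three resources
print(place(0.3, 5.0, utils))  # 1: consolidate rather than wake resource 0
```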

Patent
19 Jan 2012
TL;DR: In this paper, the authors describe improved capabilities for a virtualization environment adapted for development and deployment of at least one software workload, the virtualisation environment having a metamodel framework that allows the association of a policy to the software workload upon development of the workload that is applied upon deployment of software workload.
Abstract: In embodiments of the present invention, improved capabilities are described for a virtualization environment adapted for the development and deployment of at least one software workload, the virtualization environment having a metamodel framework that allows a policy to be associated with the software workload during development and applied upon deployment. This allows a developer to define a security zone and to apply at least one type of security policy with respect to the security zone, including the type of security zone policy in the metamodel framework, such that the type of security zone policy can be associated with the software workload upon development of the software workload; and, if the type of security zone policy is associated with the software workload, the security policy is automatically applied to the software workload when the workload is deployed within the security zone.

Proceedings ArticleDOI
24 Jun 2012
TL;DR: This paper studies the startup time of cloud VMs across three real-world cloud providers -- Amazon EC2, Windows Azure and Rackspace and analyzes the relationship between the VM startup time and different factors, such as time of the day, OS image size, instance type, data center location and the number of instances acquired at the same time.
Abstract: One of the many advantages of the cloud is elasticity, the ability to dynamically acquire or release computing resources in response to demand. However, this elasticity is only meaningful to cloud users when the acquired Virtual Machines (VMs) can be provisioned in time and be ready to use within the user's expectation. An unexpectedly long VM startup time could result in resource under-provisioning, which will inevitably hurt application performance. A better understanding of the VM startup time is therefore needed to help cloud users plan ahead and make in-time resource provisioning decisions. In this paper, we study the startup time of cloud VMs across three real-world cloud providers -- Amazon EC2, Windows Azure and Rackspace. We analyze the relationship between the VM startup time and different factors, such as time of the day, OS image size, instance type, data center location and the number of instances acquired at the same time. We also study the VM startup time of spot instances in EC2, which show a longer waiting time and greater variance compared to on-demand instances.
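Measuring startup time in such a study reduces to stamping the provisioning request and polling until the instance responds. The harness below is a generic sketch: `provision` and `is_ready` stand in for provider-SDK calls (launching an instance, probing SSH) and are hypothetical placeholders, not any provider's actual API.

```python
import time

def measure_startup(provision, is_ready, timeout_s=600, poll_s=2.0):
    """Stamp the request, poll until the instance answers, report the gap.
    `provision` launches a VM and returns a handle; `is_ready` probes it."""
    t0 = time.monotonic()
    handle = provision()
    while time.monotonic() - t0 < timeout_s:
        if is_ready(handle):
            return time.monotonic() - t0
        time.sleep(poll_s)
    raise TimeoutError("VM not ready within timeout")
```

Sweeping such a harness across times of day, image sizes, instance types, locations, and batch sizes reproduces the kind of factor analysis the paper reports.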

Journal ArticleDOI
TL;DR: This paper defines and solves the problem of secure ranked keyword search over encrypted cloud data, explores the statistical measure approach from information retrieval to build a secure searchable index, and develops a one-to-many order-preserving mapping technique to properly protect the sensitive score information.
Abstract: Cloud computing economically enables the paradigm of data service outsourcing. However, to protect data privacy, sensitive cloud data have to be encrypted before being outsourced to the commercial public cloud, which makes effective data utilization a very challenging task. Although traditional searchable encryption techniques allow users to securely search over encrypted data through keywords, they support only Boolean search and are not yet sufficient to meet the effective data utilization need that is inherently demanded by the large number of users and the huge amount of data files in the cloud. In this paper, we define and solve the problem of secure ranked keyword search over encrypted cloud data. Ranked search greatly enhances system usability by enabling search result relevance ranking instead of sending undifferentiated results, and further ensures file retrieval accuracy. Specifically, we explore the statistical measure approach, i.e., relevance score, from information retrieval to build a secure searchable index, and develop a one-to-many order-preserving mapping technique to properly protect the sensitive score information. The resulting design is able to facilitate efficient server-side ranking without losing keyword privacy. Thorough analysis shows that our proposed solution enjoys “as-strong-as-possible” security guarantees compared to previous searchable encryption schemes, while correctly realizing the goal of ranked keyword search. Extensive experimental results demonstrate the efficiency of the proposed solution.
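The "statistical measure" the paper builds on is the classic TF-IDF relevance score, which determines the ranking before any protection is applied. A plain-text sketch follows; in the actual scheme these scores never appear in the clear but are hidden behind the one-to-many order-preserving mapping. The documents and weighting variant here are illustrative.

```python
import math
from collections import Counter

def relevance_scores(term, docs):
    """TF-IDF style relevance score per document for one query term,
    the information-retrieval measure behind ranked keyword search."""
    n_docs = len(docs)
    containing = sum(1 for words in docs.values() if term in words)
    if containing == 0:
        return {}
    idf = math.log(1 + n_docs / containing)
    scores = {}
    for name, words in docs.items():
        tf = Counter(words)[term]
        if tf:
            scores[name] = (1 + math.log(tf)) * idf
    return scores

docs = {
    "f1": "cloud storage cloud security".split(),
    "f2": "cloud".split(),
    "f3": "local disk backup".split(),
}
# The server returns only the top-ranked matches instead of everything:
print(sorted(relevance_scores("cloud", docs).items(), key=lambda kv: -kv[1]))
```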

Journal ArticleDOI
TL;DR: A comparative study of some commonly used IaaS (Infrastructure as a Service) solutions is presented, to select the one best suited for deployment and research development in the field of cloud computing.
Abstract: Cloud computing is a fairly new concept in which resources are virtualized, dynamically extended and provided as a service on the Internet. In this paper, we present a comparative study of some commonly used IaaS (Infrastructure as a Service) solutions to select the one best suited for deployment and research development in the field of cloud computing. The aim is to provide the computer industry with the opportunity to build a massively scalable hosting architecture that is completely open source, while overcoming the constraints and the use of proprietary technologies. Then, we present OpenStack, the solution retained by the comparative study. We discuss in detail its functional and architectural system. We finish with a discussion of the motivation for our choice of IaaS solution.

Journal ArticleDOI
TL;DR: A dynamic offloading algorithm based on Lyapunov optimization is presented; it has low complexity in solving the offloading problem, and evaluation shows that it saves more energy than the existing algorithm while meeting the requirement on application execution time.
Abstract: Offloading is an effective method for extending the lifetime of handheld mobile devices by executing some components of applications remotely (e.g., on a server in a data center or in a cloud). In this article, to achieve energy saving while satisfying a given application execution time requirement, we present a dynamic offloading algorithm based on Lyapunov optimization. The algorithm has low complexity in solving the offloading problem (i.e., determining which software components to execute remotely given the available wireless network connectivity). Performance evaluation shows that the proposed algorithm saves more energy than the existing algorithm while meeting the requirement on application execution time.
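The decision structure of Lyapunov-based offloading is compact: each time slot, pick the action minimizing a weighted sum of energy cost minus queue service, where the backlog Q grows whenever work is deferred. The sketch below shows only that drift-plus-penalty structure; the paper's model, with channel states and formal delay guarantees, is richer, and every number here is illustrative.

```python
def lyapunov_offload_decision(q, options, v):
    """Drift-plus-penalty rule: minimize V*energy - Q*service each slot.
    A large V favors energy saving; a large backlog Q forces faster
    (possibly costlier) service, which is what bounds execution delay."""
    return min(options, key=lambda opt: v * opt["energy"] - q * opt["served"])

options = [
    {"name": "local",   "energy": 5.0, "served": 4.0},  # fast but power-hungry
    {"name": "offload", "energy": 1.5, "served": 2.5},  # cheap, slow radio link
    {"name": "idle",    "energy": 0.2, "served": 0.0},
]

for q in (0.0, 3.0, 10.0):  # decisions shift as the backlog grows
    print(q, lyapunov_offload_decision(q, options, v=2.0)["name"])
# 0.0 idle / 3.0 offload / 10.0 local
```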

Journal ArticleDOI
TL;DR: The security of HASBE is formally proved based on the security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt et al., and its performance and computational complexity are formally analyzed.
Abstract: Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns over outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes from ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on the security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt et al. and analyze its performance and computational complexity. We implement our scheme and show that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing, with comprehensive experiments.

Proceedings Article
28 May 2012
TL;DR: In this paper, the authors evaluate the performance of the Amazon EC2 platform using micro-benchmarks and kernels and conclude that the current cloud services need an order of magnitude in performance improvement to be useful to the scientific community.
Abstract: Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to serve a large user base with different needs using the same shared set of physical resources. Thus, clouds promise to be for scientists an alternative to clusters, grids, and supercomputers. However, virtualization may induce significant performance penalties for demanding scientific computing workloads. In this work we present an evaluation of the usefulness of current cloud computing services for scientific computing. We analyze the performance of the Amazon EC2 platform using micro-benchmarks and kernels. While clouds are still changing, our results indicate that current cloud services need an order of magnitude performance improvement to be useful to the scientific community.

Journal ArticleDOI
TL;DR: An investigation of how cloud computing is understood by IT professionals, and of the concerns they have regarding the adoption of cloud services, suggests that most IT companies in Taiwan will not adopt cloud computing until the uncertainties associated with cloud computing are reduced and successful business models have emerged.