Journal ArticleDOI

User Authentication Framework with Improved Performance in Multi-cloud Environment

TL;DR: Analytical results support the claim that the proposed scheme enhances security services as well as performance factors such as resource utilization and cost.
Abstract: Cloud computing is an internet-based computing model in which shared resources, software, and information are provided to end users on demand. Its defining features (on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service) make it efficient and attractive to end users. Security, however, is one of the major issues hampering the growth of the cloud. Confidentiality, integrity, and availability are the major security goals to be ensured by security mechanisms, and authentication can contribute to all three. Various authentication-based methodologies have been proposed by experts in the field, each with its own strengths and weaknesses. This paper proposes an authentication scheme coupled with performance enhancement. The initial phase of the proposal classifies each request as coming from a wired or wireless network; based on this, the appropriate authentication protocol is applied: wired requests adopt keystroke behaviour, while wireless requests follow SSID. In the second phase, user behaviour is analysed and credits are assigned, based on which resource accessibility is restricted. In the third phase, performance characteristics are taken into account. Analytical results support the claim that the scheme enhances security services as well as performance factors such as resource utilization and cost.
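The three-phase flow described in the abstract can be sketched as follows. This is an illustrative assumption of how the phases might fit together; the function names, credit weights, and resource tiers are hypothetical and not taken from the paper.

```python
def select_protocol(request):
    """Phase 1: route wired requests to keystroke-behaviour checks,
    wireless requests to SSID-based checks."""
    return "keystroke" if request["network"] == "wired" else "ssid"

def credit_score(history):
    """Phase 2: assign credits from past behaviour. The weighting
    (failed logins count double against the user) is an assumption
    made for illustration."""
    return max(0, history["successful_logins"] - 2 * history["failed_logins"])

def allowed_resources(credits):
    """Phase 2 (continued): restrict resource accessibility based on
    credits; the tiers below are hypothetical."""
    if credits >= 10:
        return {"compute", "storage", "admin"}
    if credits >= 5:
        return {"compute", "storage"}
    return {"storage"}
```

For example, a wired user with 12 successful and 1 failed login would be authenticated via keystroke behaviour and, with 10 credits, granted the full resource tier.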
References
Journal ArticleDOI
TL;DR: By sacrificing modest computation resources to save communication bandwidth and reduce transmission latency, fog computing can significantly improve the performance of cloud computing.
Abstract: Mobile users typically have a high demand for localized and location-based information services. Always retrieving localized data from the remote cloud, however, tends to be inefficient, which motivates fog computing. Fog computing, also known as edge computing, extends cloud computing by deploying localized computing facilities at the users' premises, which prestore cloud data and distribute it to mobile users over fast local connections. As such, fog computing introduces an intermediate fog layer between mobile users and the cloud, and complements cloud computing toward low-latency, high-rate services for mobile users. In this framework, it is important to study the interplay and cooperation between the edge (fog) and the core (cloud). In this paper, the tradeoff between power consumption and transmission delay in the fog-cloud computing system is investigated. We formulate a workload allocation problem that finds the optimal workload allocation between fog and cloud, minimizing power consumption under a constrained service delay. The problem is then tackled with an approximate approach that decomposes the primal problem into three subproblems of the corresponding subsystems, each of which can be solved separately. Finally, based on simulations and numerical results, we show that by sacrificing modest computation resources to save communication bandwidth and reduce transmission latency, fog computing can significantly improve the performance of cloud computing.
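The power-delay tradeoff can be illustrated with a toy linear model, solved by brute-force grid search rather than the paper's three-subproblem decomposition; all parameter names and values are assumptions for illustration only.

```python
def best_split(total_load, p_fog, p_cloud, d_fog, d_cloud, max_delay, steps=100):
    """Grid-search the fog/cloud workload split that minimizes total
    power subject to an average-delay constraint.

    p_* is power per unit of load, d_* is delay per unit of load;
    fog is assumed faster (d_fog < d_cloud) but costlier (p_fog > p_cloud).
    """
    best = None
    for i in range(steps + 1):
        fog = total_load * i / steps
        cloud = total_load - fog
        delay = (fog * d_fog + cloud * d_cloud) / total_load
        if delay > max_delay:
            continue  # violates the service-delay constraint
        power = fog * p_fog + cloud * p_cloud
        if best is None or power < best[0]:
            best = (power, fog, cloud)
    return best
```

With a tight delay bound, the search is forced to push load toward the faster but more power-hungry fog tier, mirroring the tradeoff the paper quantifies: modest extra computation at the edge buys lower latency overall.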

681 citations

Journal ArticleDOI
TL;DR: A novel cloud-based workflow scheduling (CWSA) policy for compute-intensive workflow applications in multi-tenant cloud computing environments is proposed, which helps minimize overall workflow completion time, tardiness, and cost of execution while utilizing idle cloud resources effectively.
Abstract: Multi-tenancy is one of the key features of cloud computing; it provides scalability and economic benefits to end users and service providers by sharing the same cloud platform and its underlying infrastructure, with isolation of the shared network and compute resources. However, resource management in multi-tenant cloud computing is becoming one of the most complex tasks due to inherent heterogeneity and resource isolation. This paper proposes a novel cloud-based workflow scheduling (CWSA) policy for compute-intensive workflow applications in multi-tenant cloud computing environments, which helps minimize overall workflow completion time, tardiness, and cost of execution while utilizing idle cloud resources effectively. The proposed algorithm is compared with state-of-the-art scheduling policies, i.e., First Come First Served (FCFS), EASY Backfilling, and Minimum Completion Time (MCT), to evaluate its performance. Further, a proof-of-concept experiment with real-world scientific workflow applications demonstrates the scalability of CWSA and verifies the effectiveness of the proposed solution. The simulation results show that the proposed scheduling policy improves workflow performance and outperforms the alternative policies under typical deployment scenarios.
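One of the baselines named above, Minimum Completion Time (MCT), is simple enough to sketch; CWSA itself is not reproduced here, and the task durations and machine count are illustrative.

```python
def mct_schedule(tasks, n_machines):
    """Minimum Completion Time heuristic: assign each task (given as a
    duration) to the machine that would finish it earliest. Returns the
    per-task machine assignment and the resulting makespan."""
    finish = [0.0] * n_machines  # current finish time of each machine
    assignment = []
    for t in tasks:
        m = min(range(n_machines), key=lambda i: finish[i] + t)
        finish[m] += t
        assignment.append(m)
    return assignment, max(finish)
```

For tasks of length 4, 3, 2, and 1 on two machines, MCT balances the load into two chains of length 5, giving a makespan of 5.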

156 citations

Proceedings ArticleDOI
04 Jul 2011
TL;DR: A scheduling model for optimizing virtual cluster placements across available cloud offers is proposed and the results show that user's investment decreases when part of the virtual infrastructure is dynamically distributed among clouds instead of maintaining it in a fixed one.
Abstract: The number of providers in the cloud computing market is increasing at a rapid pace. At the same time, we are observing a fragmentation of the market in terms of pricing schemes, virtual machine offers, and value-added features. In the early phase of cloud adoption, fixed prices dominated; however, market trends show growing use of dynamic pricing schemes, in which prices change according to demand at each cloud provider. In general, it is difficult for users to compare cloud prices and decide where to place their resources. In this paper, we propose a scheduling model for optimizing virtual cluster placement across available cloud offers. The scheduler uses variables such as average prices and price trends to suggest an optimal deployment, and is part of a cloud broker that automates these actions and makes them transparent to users. The performance of our model is evaluated in a real-world cloud environment, and the results show that users' investment decreases when part of the virtual infrastructure is dynamically distributed among clouds instead of being kept in a fixed one.
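A minimal broker rule in the spirit of the scheduler above might rank providers by average recent price. The paper's model also weighs price trends and per-offer details, which this sketch omits; the provider names and prices are invented.

```python
def place_cluster(n_vms, price_history):
    """Toy broker placement: compute each provider's average recent
    price and place the whole virtual cluster with the cheapest one.

    price_history: {provider_name: [recent hourly prices]}.
    Returns (placement dict, estimated hourly cost).
    """
    avg = {p: sum(h) / len(h) for p, h in price_history.items()}
    cheapest = min(avg, key=avg.get)
    return {cheapest: n_vms}, avg[cheapest] * n_vms
```

A real broker would split the cluster across providers and re-evaluate as prices move; this all-or-nothing rule just shows why tracking prices at all lowers the user's investment versus a fixed deployment.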

126 citations

Journal ArticleDOI
TL;DR: Evaluating the scalability, performance, and cost of different configurations of a Sun Grid Engine cluster deployed on a multicloud infrastructure spanning a local data center and three different cloud sites shows that performance and cost results can be extrapolated to large-scale problems and cluster infrastructures.
Abstract: Cloud computing is gaining acceptance in many IT organizations as an elastic, flexible, and variable-cost way to deploy their service platforms using outsourced resources. Unlike traditional utilities, where a single-provider scheme is common practice, ubiquitous access to cloud resources easily enables the simultaneous use of different clouds. In this paper, we explore this scenario by deploying a computing cluster on top of a multicloud infrastructure for solving loosely coupled Many-Task Computing (MTC) applications. In this way, the cluster nodes can be provisioned with resources from different clouds to improve the cost-effectiveness of the deployment or to implement high-availability strategies. We prove the viability of this kind of solution by evaluating the scalability, performance, and cost of different configurations of a Sun Grid Engine cluster deployed on a multicloud infrastructure spanning a local data center and three different cloud sites: Amazon EC2 Europe, Amazon EC2 US, and ElasticHosts. Although the testbed deployed in this work is limited to a reduced number of computing resources (due to hardware and budget limitations), we have complemented our analysis with a simulated infrastructure model, which includes a larger number of resources and runs larger problem sizes. Data obtained by simulation show that the performance and cost results can be extrapolated to large-scale problems and cluster infrastructures.
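A cost-performance comparison like the one in this evaluation can be reduced to ranking candidate cluster configurations by cost per unit of throughput; the configuration names and numbers below are invented for illustration.

```python
def best_configuration(configs):
    """configs: {name: (throughput, hourly_cost)}. Return the name of
    the configuration with the lowest cost per unit of throughput."""
    return min(configs, key=lambda c: configs[c][1] / configs[c][0])
```

Under this metric a cheaper, slower deployment can beat a faster, pricier multicloud one, which is exactly the kind of tradeoff the scalability/cost analysis in the paper explores across its cluster configurations.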

109 citations

Journal ArticleDOI
TL;DR: Based on the proposed algorithms, energy consumption can be reduced by up to 28% and SLA compliance improved by up to 87% compared with the benchmark algorithms.
Abstract: Cloud computing has become a significant research area in large-scale computing because it can share globally distributed resources. Cloud computing has evolved with the development of large-scale data centers comprising thousands of servers around the world. However, cloud data centers consume vast amounts of electrical energy, contributing to high operational costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) using live migration, together with putting idle nodes into sleep mode, allows cloud providers to optimize resource utilization and reduce energy consumption. However, aggressive VM consolidation may degrade performance. Therefore, an energy-performance tradeoff between providing high-quality service to customers and reducing power consumption is desired. In this paper, several novel algorithms are proposed for the dynamic consolidation of VMs in cloud data centers. The aim is to improve the utilization of computing resources and reduce energy consumption under SLA constraints regarding CPU, RAM, and bandwidth. The efficiency of the proposed algorithms is validated through extensive simulations. The results of the evaluation clearly show that the proposed algorithms significantly reduce energy consumption while providing a high level of commitment to the SLA. Based on the proposed algorithms, energy consumption can be reduced by up to 28% and SLA compliance improved by up to 87% compared with the benchmark algorithms.
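The threshold idea behind dynamic consolidation can be sketched with a toy static-threshold pass; the paper's algorithms use adaptive thresholds and SLA-aware VM selection, which this sketch does not capture, and the cutoff values are assumptions.

```python
def consolidation_plan(hosts, low=0.2, high=0.8):
    """hosts: {host_name: cpu_utilization in [0, 1]}.

    Returns (hosts to empty and put to sleep, hosts that are overloaded
    and need some VMs migrated away). Thresholds are illustrative.
    """
    sleep = sorted(h for h, u in hosts.items() if u < low)
    migrate_from = sorted(h for h, u in hosts.items() if u > high)
    return sleep, migrate_from
```

Emptying underloaded hosts saves energy, while offloading overloaded hosts protects the SLA; tuning the two thresholds is precisely the energy-performance tradeoff the paper optimizes.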

89 citations