Author

G. Justy Mirobi

Bio: G. Justy Mirobi is an academic researcher from Bharathiar University. The author has contributed to research in the topics of cloud computing and virtual machines. The author has an h-index of 2 and has co-authored 7 publications receiving 25 citations.

Papers
Proceedings ArticleDOI
01 Dec 2015
TL;DR: An overview of SLA, the benefits of cloud computing, the necessity of cloud computing, the importance of SLA, the classification of Service Level Agreements, and an SLA-based framework for cloud computing is presented.
Abstract: Cloud computing is an internet-based computing model that provides on-demand services over the internet, such as servers, storage, platforms, and applications, to companies and organizations of any size. Cloud services are billed on a pay-per-use basis according to the agreement between the cloud service provider and the customer. A Service Level Agreement (SLA) is a contract between the service provider and a third party, such as the purchaser of the service, a dealer (agent), or a monitor (agent), in which the service is formally defined. In practice, the term SLA is used to specify the delivery period of the contracted service and to evaluate the performance of the service. Cloud computing is a recent technology that provides numerous services for critical commercial applications, along with trustworthy and flexible mechanisms for managing contracts. The SLA is therefore essential for conducting cloud business smoothly. This paper presents an overview of SLA, the benefits of cloud computing, the necessity of cloud computing, the importance of SLA, the classification of Service Level Agreements, and an SLA-based framework for cloud computing.
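
The contract structure described above lends itself to a simple data model. Below is a minimal illustrative sketch in Python of how an SLA and its measurable objectives might be represented; the class names, fields, and the single response-time objective are assumptions made for illustration, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceLevelObjective:
    """One measurable guarantee inside an SLA (illustrative fields)."""
    metric: str        # e.g. "response_time_ms"
    threshold: float   # agreed upper limit for the metric
    penalty: float     # credit owed to the customer per violation

@dataclass
class ServiceLevelAgreement:
    """A contract between a cloud provider and a purchaser, dealer, or monitor."""
    provider: str
    consumer: str
    objectives: list[ServiceLevelObjective] = field(default_factory=list)

    def is_violated(self, metric: str, observed: float) -> bool:
        """Check an observed measurement against the agreed threshold."""
        return any(slo.metric == metric and observed > slo.threshold
                   for slo in self.objectives)

# Hypothetical pay-per-use SLA with a 200 ms response-time objective.
sla = ServiceLevelAgreement("CloudCo", "Acme Ltd",
                            [ServiceLevelObjective("response_time_ms", 200.0, 5.0)])
print(sla.is_violated("response_time_ms", 250.0))  # True
```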

22 citations

Proceedings ArticleDOI
01 Dec 2015
TL;DR: Service Level Management is the process of controlling service levels and negotiating the SLA; service entities depend heavily on the IT service management system, its processes, procedures, and tasks to ensure that the defined quality of service is delivered.
Abstract: Service Level Management (SLM) is the process of managing cloud resources and services: deploying resources, providing services on demand, and controlling, monitoring, and reporting on those services. SLM defines the process of allocating and managing resources, negotiating the SLA, controlling the service, and reporting and monitoring service levels against predefined standard service parameters. In cloud computing, an effective cloud service is one that provides the adequate service specified in the SLA and quickly redresses issues to the satisfaction of the customers. The capacity of an entity to provide and maintain the appropriate service level depends heavily on building the committed service and managing the service levels. Operationally, SLM can be a difficult process because IT service management focuses on technology-centric measurements specific to each domain. In cloud computing, customers typically do not buy the physical components, logical infrastructure, or applications outright; instead, they avoid capital expenditure by leasing usage through a third-party provider. Leased usage can be priced by block time, remote access, or time sharing, and service consumption plans are paid according to resource utilization or the subscription rates specified in the provider's business model. Hence, SLM is the process of controlling service levels and negotiating the SLA. Service entities depend heavily on the IT service management system, its processes, procedures, and tasks to ensure that the defined quality of service is provided and that the financial expectations for the selected configuration are fulfilled.
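
The monitor/control/report cycle the abstract describes can be sketched concretely. The following is an illustration under stated assumptions: the metric names and agreed levels are hypothetical, and this is not the paper's implementation.

```python
# Illustrative monitor/report step of Service Level Management: compare
# measured service parameters against the SLA-defined levels and report.
# Metric names and thresholds are hypothetical, not taken from the paper.

def monitor_service_levels(measured: dict[str, float],
                           agreed: dict[str, float]) -> dict[str, bool]:
    """Return, per metric, whether the measured value meets the agreed level."""
    return {metric: measured[metric] <= agreed[metric] for metric in agreed}

def report(compliance: dict[str, bool]) -> None:
    """Report each service level as met or violated (the redress trigger)."""
    for metric, ok in compliance.items():
        print(f"{metric}: {'OK' if ok else 'VIOLATED -- redress required'}")

agreed = {"response_time_ms": 200.0, "provisioning_delay_s": 60.0}
measured = {"response_time_ms": 250.0, "provisioning_delay_s": 42.0}
report(monitor_service_levels(measured, agreed))
```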

7 citations

Journal ArticleDOI
TL;DR: In this paper, a distance-aware virtual machine scheduling algorithm is introduced that reduces propagation time and response time by delivering an enhanced virtual machine provisioning policy for physical hosts.
Abstract: Cloud services can be received at any time, from anywhere, according to the needs of the customers, and virtualization technologies are applied to deliver those services accurately. In a cloud environment, large amounts of data are transferred between users and hosts, and pinning a virtual machine to an appropriate host while transferring that data is a challenging task. This paper presents DAVmS, a Distance Aware Virtual Machine Scheduling algorithm, which arranges virtual machines according to their capabilities and pins the VMs to the physical host nearest the customer, so that cloud services are accessed from the adjacent data center. The algorithm reduces propagation time and enhances the execution process to reduce response time. The main purpose of DAVmS is to deliver an enhanced virtual machine provisioning policy for physical hosts and to serve requests from the data center adjacent to the customer.
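
The abstract gives no pseudocode, but the core idea admits a short sketch: among hosts with enough free capacity, pin the VM to the one nearest the customer. The host fields, coordinates, and Euclidean distance below are illustrative assumptions; a real scheduler would use geographic or network-latency distance.

```python
import math

def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Euclidean distance between two (x, y) points (stand-in metric)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def schedule_vm(vm_demand: float, customer_loc: tuple[float, float],
                hosts: list[dict]):
    """Pin the VM to the nearest host that can accommodate its demand."""
    candidates = [h for h in hosts if h["free_capacity"] >= vm_demand]
    if not candidates:
        return None  # no host fits; the caller must queue or reject
    best = min(candidates, key=lambda h: distance(customer_loc, h["location"]))
    best["free_capacity"] -= vm_demand  # pin the VM to the chosen host
    return best

hosts = [{"name": "dc-east", "location": (0.0, 0.0), "free_capacity": 8.0},
         {"name": "dc-west", "location": (9.0, 2.0), "free_capacity": 16.0}]
print(schedule_vm(4.0, (1.0, 1.0), hosts)["name"])  # dc-east: nearest with room
```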

3 citations

Proceedings ArticleDOI
01 Nov 2019
TL;DR: A workflow scheduler is proposed that provides cloud services in a very short time, with the minimum processing time and response time defined in the Service Level Agreement (SLA).
Abstract: Cloud computing allows the customer to access data and programs outside of the user's own computing environment: rather than being stored on the user's personal computer or server, data and software are stored in the cloud. These cloud services comprise applications, email, databases, and file services. To access them, requests are submitted as workflows. A workflow is a series of steps for achieving a well-defined objective in a cloud environment, ordered so as to enhance the execution process and ensure efficiency. The main issue in executing workflows is the uncertainty of request processing and response periods, which existing algorithms are not suited to handle. An efficient workflow scheduler is therefore needed to schedule and select requests and to map the selected tasks to the appropriate VMs while satisfying all dependencies, constraints, and objective functions. The goal of the proposed workflow scheduler is to offer cloud services in a very short time, with the minimum processing time and response time defined in the Service Level Agreement (SLA).
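
As a concrete illustration of the scheduling step the abstract describes, the sketch below orders workflow tasks so every dependency runs first (Kahn's topological sort) and then greedily maps each task to the VM whose queue frees up earliest. This is a deliberately simplified sketch under assumed data layouts: it ignores dependency finish times and data-transfer costs, which a real scheduler must honor.

```python
from collections import deque

def topological_order(deps: dict[str, list[str]]) -> list[str]:
    """Kahn's algorithm: each task appears after all of its prerequisites."""
    indegree = {t: len(d) for t, d in deps.items()}
    queue = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while queue:
        task = queue.popleft()
        order.append(task)
        for t, d in deps.items():
            if task in d:
                indegree[t] -= 1
                if indegree[t] == 0:
                    queue.append(t)
    return order

def map_to_vms(order, runtimes, vm_free_at):
    """Greedily assign each task to the VM whose queue frees up earliest."""
    plan = {}
    for task in order:
        vm = min(vm_free_at, key=vm_free_at.get)
        plan[task] = vm
        vm_free_at[vm] += runtimes[task]
    return plan

deps = {"fetch": [], "transform": ["fetch"], "store": ["transform"]}
runtimes = {"fetch": 2.0, "transform": 5.0, "store": 1.0}
print(map_to_vms(topological_order(deps), runtimes, {"vm1": 0.0, "vm2": 0.0}))
```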

2 citations

Proceedings ArticleDOI
01 Nov 2019
TL;DR: The proposed Distance Aware VM Scheduler provides resources from the nearest data center, thereby decreasing propagation time and response time and delivering cloud services quickly with improved performance.
Abstract: A virtual machine is a virtual model of a physical computer. Virtualization technology makes it possible to create multiple Virtual Machines (VMs) on one physical server, each with its own OS and applications. The VM is the fundamental unit of cloud computing, used to execute various types of applications in a cloud environment. Propagation time and response time grow with the distance between the customer's region and the cloud server. To address this issue and provide efficient cloud services, a VM scheduling approach is needed that manages distance and resources so as to reduce propagation time and response time. In this paper, the Distance Aware VM Scheduler is proposed to map VMs to physical core processors and to provide resources dynamically from the nearest data center, thereby decreasing propagation time and response time and delivering cloud services quickly with improved performance. The motivation of the proposed scheduler is to generate an efficient VM allocation policy for physical servers while serving resources from the nearest data center.
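
The claim that distance drives propagation time can be made concrete with a back-of-the-envelope calculation: signals in optical fiber travel at roughly two-thirds the speed of light, so one-way distance translates directly into a minimum round-trip delay. The figures below are illustrative, not measurements from the paper.

```python
SPEED_OF_LIGHT_KM_S = 300_000                     # ~3e5 km/s in vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3    # ~2e5 km/s in fiber

def min_round_trip_ms(distance_km: float) -> float:
    """One-way distance -> minimum round-trip propagation delay in ms."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

for km in (100, 1_000, 10_000):  # nearby DC vs. cross-continent vs. overseas
    print(f"{km:>6} km -> {min_round_trip_ms(km):6.1f} ms minimum RTT")
# Serving from a data center 100 km away instead of 10,000 km removes
# about 99 ms of unavoidable propagation delay from every response.
```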

2 citations


Cited by
Journal ArticleDOI
TL;DR: A new power-aware VM selection policy is proposed that helps select VMs for migration; the policy is evaluated in a trace-based simulation environment.
Abstract: With the rapid demand for service-oriented computing and the growth of cloud computing technologies, large-scale virtualized data centers have been established throughout the globe. These huge data centers consume power at a large scale, resulting in high operational costs, and the massive carbon footprint of their energy generators is a major contributor to global warming. It is essential to lower the rates of carbon emission and energy consumption as much as possible. Live-migration-enabled dynamic virtual machine consolidation yields high energy savings, but it also incurs violations of the service level agreement (SLA): excessive migration can lead to performance degradation and SLA violations. The process of selecting VMs for migration therefore plays a vital role in energy-aware cloud computing. A new power-aware VM selection policy is proposed in this research that helps in selecting VMs for migration; the proposed policy is evaluated in a trace-based simulation environment.
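
The paper's exact selection formula is not reproduced in this abstract; as an illustrative sketch of the general shape of a power-aware policy, the code below selects, from an overloaded host, the VM whose migration saves the most power per megabyte of memory that must be copied. The fields and the ratio heuristic are assumptions, not the paper's cost model.

```python
def select_vm_for_migration(vms: list[dict]) -> dict:
    """Pick the VM with the best power-saved-per-migration-cost ratio.
    Migration cost is approximated by RAM size, since live migration
    copies the VM's memory over the network (an assumption, not the
    paper's exact cost model)."""
    return max(vms, key=lambda vm: vm["power_watts"] / vm["ram_mb"])

vms = [{"id": "vm-a", "power_watts": 40.0, "ram_mb": 4096},
       {"id": "vm-b", "power_watts": 35.0, "ram_mb": 1024},
       {"id": "vm-c", "power_watts": 10.0, "ram_mb": 2048}]
print(select_vm_for_migration(vms)["id"])  # vm-b: most watts saved per MB moved
```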

39 citations

Proceedings ArticleDOI
25 May 2020
TL;DR: A review of the functionality and architecture of typical chatbot services reveals the potential risks associated with chatbots and informs a checklist that security managers can use to assess risks prior to chatbot implementation.
Abstract: Intelligent chatbot services have become one of the mainstream applications in user help and many other areas. Apart from bringing numerous benefits to users, these services may bring additional risks to the companies that employ them. The study starts with a review of the scale of the chatbot industry and common use cases, focusing on their applications and industry tendencies. A review of the functionality and architecture of typical chatbot services shows the potential risks associated with chatbots. The analysis of these risks in the paper helped to build a checklist that security managers can use to assess risks prior to chatbot implementation. The proposed checklist was tested by reviewing a number of Service Level Agreements (SLAs) of real chatbot providers.

13 citations

Proceedings ArticleDOI
01 Dec 2017
TL;DR: A consortium-blockchain-based cleanroom security service protocol (CSSP) is proposed to track the deployment and usage of the user's software in a secure, tamper-resistant manner and to prevent erroneous or illegal software from running in the user's computing environment.
Abstract: Untrusted computing in the cloud is the main obstacle to promoting cloud computing services: even when users obtain an initially trusted execution environment, dynamic software deployment events can easily compromise the system. TTP and ACS are the common strategies for reaching agreement on a valid application list and valid operations; however, they add an extra entity to the original network, and the reliability of the system then rests on the trustworthy operation of that third party. In this network computing model, terminal users lack a directly remote-controlled security strategy, while the security of user computing applications is controlled by the resource owner, such as the cloud manager, which may not be trusted at all times. In this paper, we address this problem and propose a consortium-blockchain-based cleanroom security service protocol (CSSP) to track the deployment and usage of the user's software in a secure, tamper-resistant manner and to prevent erroneous or illegal software from running in the user's computing environment. Unlike traditional methods, CSSP is a two-sided protocol between the service provider and the user computing node, which removes redundant, hazard-prone nodes and their failure problems. A consortium blockchain is an effective way to reduce energy consumption and keep the software service protocol between the manager and the user in a safe state. The security analysis and evaluation show that the approach has major potential in trusted network computing systems and provides a more secure environment for users.
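
CSSP itself is a two-sided consortium blockchain protocol; as a much-reduced illustration of the tamper-resistance idea alone, the sketch below hash-chains software deployment records so that altering any past record invalidates every hash that follows. The record fields are hypothetical, and a real consortium blockchain additionally replicates and validates the chain across member nodes.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with its predecessor's hash (the chaining step)."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list[dict], record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify(chain: list[dict]) -> bool:
    """Recompute every link; any tampered record breaks all later hashes."""
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

chain: list[dict] = []
append(chain, {"event": "deploy", "software": "app-1.2", "node": "user-7"})
append(chain, {"event": "run", "software": "app-1.2", "node": "user-7"})
print(verify(chain))                     # True
chain[0]["record"]["software"] = "evil"  # tamper with deployment history
print(verify(chain))                     # False: tampering is detected
```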

12 citations

Book ChapterDOI
02 May 2018
TL;DR: Five machine learning algorithms, namely Support Vector Machine, Random Forest, Naive Bayes, Neural Network, and k-Nearest Neighbors, are evaluated for the detection and prediction of cloud quality-of-service violations in terms of response time and throughput.
Abstract: Cloud services connect users with a cloud computing platform, with offerings ranging from Infrastructure as a Service and Platform as a Service to Software as a Service. It is important for a Cloud Service Provider to deliver reliable, fast cloud services and to predict possible service violations before any issue emerges, so that remedial action can be taken. In this paper, we therefore experiment with five machine learning algorithms, namely Support Vector Machine, Random Forest, Naive Bayes, Neural Network, and k-Nearest Neighbors, for the detection and prediction of cloud quality-of-service violations in terms of response time and throughput. Experimental results show that the model created using SVM, incorporating 16 derived quality-of-service violation rules, has a consistent accuracy greater than 99%. With this machine learning model coupled with the 16 decision rules, the Cloud Service Provider can know beforehand whether a violation of service based on response time or throughput is occurring. When transactions tend to go beyond the threshold limits, the system administrator is alerted to take the preventive measures necessary to bring the system back to normal conditions, reducing the chance of violations and mitigating losses or costly penalties.
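
To make the SVM pipeline concrete, here is a self-contained sketch with synthetic data: the features are response time and throughput, and the label follows two hypothetical threshold rules standing in for the paper's 16 derived rules. None of the numbers below come from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic QoS measurements: (response_time_ms, throughput_rps).
rng = np.random.default_rng(0)
response = rng.uniform(50, 500, 1000)
throughput = rng.uniform(10, 200, 1000)
X = np.column_stack([response, throughput])
# Hypothetical violation rules: slow responses or starved throughput.
y = ((response > 300) | (throughput < 40)).astype(int)  # 1 = violation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")

# Alert the administrator when a transaction is predicted to violate QoS.
if clf.predict([[350.0, 120.0]])[0]:
    print("alert: predicted SLA violation -- take preventive action")
```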

10 citations

Book ChapterDOI
01 Jan 2021
TL;DR: In this article, the authors explore some of the solutions for reducing energy consumption, such as green cloud computing and the use of renewable energy sources, in order to improve the environmental sustainability of datacenters.
Abstract: A cloud can be considered a distributed computing environment consisting of a set of virtualized and interconnected computers or nodes. Clouds are dynamically provisioned and presented as consolidated computing resources on the basis of Service Level Agreements (SLAs) established between consumers and the service provider. The cloud consists of virtualized, distributed datacenters in which applications are offered on demand, as a service. Every datacenter needs a highly reliable and inexpensive power supply. Usually this is achieved by combining grid electricity, which ensures affordability, with backup diesel generators for emergencies, which ensure consistency. Unfortunately, this arrangement has flaws: it relies on increasingly unstable electricity prices, and the very high rate of carbon emissions from the diesel generators can lead to problems of environmental sustainability. We look into the energy generation and consumption of the different components of datacenters, whose power sources affect environmental sustainability. Later in this chapter, we explore some of the solutions for reducing energy consumption, such as green cloud computing, the use of renewable energy sources, and so on.

8 citations