Author

Kyong Hoon Kim

Bio: Kyong Hoon Kim is an academic researcher from Gyeongsang National University. The author has contributed to research in topics: Scheduling (computing) & Dynamic priority scheduling. The author has an h-index of 17 and has co-authored 104 publications receiving 1,479 citations. Previous affiliations of Kyong Hoon Kim include Pohang University of Science and Technology & Kyungpook National University.


Papers
Proceedings ArticleDOI
14 May 2007
TL;DR: This paper provides power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints on DVS-enabled cluster systems in order to minimize power consumption as well as to meet the deadlines specified by application users.
Abstract: The power-aware scheduling problem has become an important issue in cluster systems, not only because of the operational cost of electricity but also for system reliability. As recent commodity processors support multiple operating points under various supply voltage levels, Dynamic Voltage Scaling (DVS) scheduling algorithms can reduce power consumption by selecting appropriate voltage levels. In this paper, we provide power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints on DVS-enabled cluster systems, in order to minimize power consumption while meeting the deadlines specified by application users. A bag-of-tasks application must finish all of its sub-tasks before the deadline, so the DVS scheduling scheme must take the deadline into account as well. We provide DVS scheduling algorithms for both time-shared and space-shared resource sharing policies. The simulation results show that the proposed algorithms reduce power consumption considerably compared to static voltage schemes.
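A minimal sketch of the core idea in Python, assuming a simple discrete DVS model (the frequency levels, voltages, workload figures, and the quadratic-in-voltage energy term below are illustrative, not taken from the paper): pick the lowest operating point at which the bag of tasks still finishes by its deadline.

```python
# Illustrative DVS operating points (frequency in GHz, supply voltage in V),
# listed from fastest to slowest. Values are made up for this sketch.
LEVELS = [(2.0, 1.3), (1.6, 1.1), (1.2, 0.9), (0.8, 0.8)]

def pick_level(total_cycles, deadline_s):
    """Return the slowest operating point that still meets the deadline,
    or None if even the fastest level cannot finish in time."""
    feasible = None
    for freq_ghz, volt in LEVELS:                 # scan fastest -> slowest
        exec_time = total_cycles / (freq_ghz * 1e9)
        if exec_time <= deadline_s:
            feasible = (freq_ghz, volt)           # keep the slowest feasible level
        else:
            break                                 # slower levels only get worse
    return feasible

def dynamic_energy(total_cycles, volt, capacitance=1e-9):
    """Classic CMOS approximation E ~ C * V^2 * cycles (capacitance is illustrative)."""
    return capacitance * volt ** 2 * total_cycles

# Example: a bag of tasks totalling 3e9 cycles with a 3-second deadline
level = pick_level(3e9, 3.0)
if level:
    freq, volt = level
    print(freq, volt, dynamic_energy(3e9, volt))
```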

336 citations

Proceedings ArticleDOI
30 Nov 2009
TL;DR: This work investigates power-aware provisioning of virtual machines for real-time services, proposes several schemes to reduce power consumption, and shows their performance through simulation results.
Abstract: Reducing energy consumption has become essential for Cloud resources and data centers, not only to lower operational cost but also to improve system reliability. As Cloud computing emerges as the Anything as a Service (XaaS) paradigm, modern real-time services are also offered through Cloud computing. In this work, we investigate power-aware provisioning of virtual machines for real-time services. Our approach is (i) to model a real-time service as a real-time virtual machine request; and (ii) to provision virtual machines of data centers using DVFS (Dynamic Voltage and Frequency Scaling) schemes. We propose several schemes to reduce power consumption and show their performance through simulation results.
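A rough sketch of this provisioning idea, assuming a hypothetical host/request representation (the class names, capacities, frequency levels, and cubic power model are illustrative): each real-time service is reduced to a required MIPS capacity, and a new request is placed on the host that can absorb it with the smallest increase in DVFS-level power.

```python
from dataclasses import dataclass, field

# Illustrative DVFS model: relative frequency levels a host may run at.
FREQ_LEVELS = [0.6, 0.8, 1.0]

@dataclass
class Host:
    max_mips: float                        # capacity at the highest frequency
    vms: list = field(default_factory=list)

    def used_mips(self):
        return sum(self.vms)

    def min_level(self, extra=0.0):
        """Lowest frequency level whose capacity covers current + extra load."""
        need = self.used_mips() + extra
        for f in FREQ_LEVELS:
            if f * self.max_mips >= need:
                return f
        return None                        # host cannot accommodate the request

def place(hosts, req_mips):
    """Place the request on the host with the smallest power increase (~ f^3)."""
    best, best_delta = None, float("inf")
    for h in hosts:
        before, after = h.min_level(), h.min_level(req_mips)
        if after is None:
            continue
        delta = after ** 3 - before ** 3   # cubic frequency-power model, illustrative
        if delta < best_delta:
            best, best_delta = h, delta
    if best:
        best.vms.append(req_mips)
    return best

hosts = [Host(2000.0), Host(2000.0)]
place(hosts, 900.0)   # a real-time VM request sized from its workload and deadline
place(hosts, 900.0)
print([h.min_level() for h in hosts])
```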

201 citations

Journal ArticleDOI
TL;DR: This work investigates power-aware provisioning of virtual machines for real-time services, and proposes several schemes to reduce the power consumed by hard real-time services and to provision soft real-time services profitably in a power-aware manner.
Abstract: Reducing power consumption has been an essential requirement for Cloud resource providers, not only to decrease operating costs but also to improve system reliability. As Cloud computing emerges as the Anything as a Service (XaaS) paradigm, modern real-time services also become available through Cloud computing. In this work, we investigate power-aware provisioning of virtual machines for real-time services. Our approach is (i) to model a real-time service as a real-time virtual machine request; and (ii) to provision virtual machines in Cloud data centers using dynamic voltage and frequency scaling schemes. We propose several schemes to reduce the power consumed by hard real-time services and to provision soft real-time services profitably in a power-aware manner. Copyright © 2011 John Wiley & Sons, Ltd.
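As a companion sketch for the soft real-time case (all prices, energy costs, and the penalty model below are hypothetical placeholders, not figures from the paper): a request is admitted only when its expected revenue exceeds the estimated energy cost plus the expected deadline-miss penalty.

```python
def expected_profit(price, energy_kwh, price_per_kwh,
                    miss_probability, penalty_per_miss):
    """Expected profit of admitting one soft real-time request.
    All inputs are estimates supplied by the provider's own models."""
    energy_cost = energy_kwh * price_per_kwh
    expected_penalty = miss_probability * penalty_per_miss
    return price - energy_cost - expected_penalty

def admit(price, energy_kwh, price_per_kwh, miss_probability, penalty_per_miss):
    """Admit the request only when its expected profit is positive."""
    return expected_profit(price, energy_kwh, price_per_kwh,
                           miss_probability, penalty_per_miss) > 0.0

# Hypothetical request: $0.50 revenue, 1.2 kWh at $0.15/kWh,
# 10% chance of missing its soft deadline with a $1.00 penalty.
print(admit(0.50, 1.2, 0.15, 0.10, 1.00))   # 0.50 - 0.18 - 0.10 = 0.22 > 0 -> True
```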

164 citations

Proceedings ArticleDOI
19 May 2008
TL;DR: Overbooking models from Revenue Management are used to manage cancellations and no-shows of reservations in a Grid system, and it is shown that by overbooking reservations, a resource gains an extra 6-9% in the total net revenue.
Abstract: Advance reservation allows users to request available nodes in the future, whereas economy provides an incentive for resource owners to be part of the Grid and encourages users to utilize resources optimally and effectively. In this paper, we use overbooking models from Revenue Management to manage cancellations and no-shows of reservations in a Grid system. Without overbooking, the resource owners face a prospect of lost income and lower system utilization. Thus, the models aim to find an ideal limit that exceeds the maximum capacity without incurring greater compensation cost. Moreover, we introduce several novel strategies for selecting which bookings to deny, based on compensation cost and user class level, namely Lottery, Denied Cost First (DCF), and Lower Class DCF. The results show that, by overbooking reservations, a resource gains an extra 6–9% in total net revenue.
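A small simulation sketch of the overbooking idea (the show-up probability, price, and compensation cost are invented for illustration): choose the booking limit above physical capacity that maximizes expected net revenue, given that denied reservations must be compensated.

```python
import random

def expected_net_revenue(limit, capacity, p_show, price, compensation,
                         trials=5000, seed=1):
    """Monte Carlo estimate of net revenue when `limit` reservations are accepted
    on a resource with `capacity` nodes; shown-up bookings beyond capacity
    are denied and compensated."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        shows = sum(rng.random() < p_show for _ in range(limit))
        served = min(shows, capacity)
        denied = shows - served
        total += served * price - denied * compensation
    return total / trials

# Hypothetical resource: 100 nodes, 85% show-up rate, unit price 1.0,
# compensation 2.5 per denied booking. Search limits up to 30% overbooking.
capacity, p_show, price, compensation = 100, 0.85, 1.0, 2.5
best = max(range(capacity, capacity + 31),
           key=lambda b: expected_net_revenue(b, capacity, p_show,
                                              price, compensation))
print("best booking limit:", best)
```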

66 citations

Proceedings ArticleDOI
20 Sep 2012
TL;DR: This work proposes two resource provisioning approaches, one based on listed pricing policies and the other based on deadline-aware task packing, to minimize the cost of virtual machines for executing MapReduce applications without violating their deadlines.
Abstract: As Cloud computing provides Anything as a Service (XaaS), many applications can be developed and run on the Cloud without platform concerns. Data-intensive applications are also easily developed on virtual machines provided by the Cloud. In this work, we investigate cost-effective resource provisioning for MapReduce applications with deadline constraints, as the MapReduce programming model is useful and powerful for developing data-intensive applications. When users want to run MapReduce applications, they submit jobs to a Cloud resource broker, which allocates appropriate virtual machines with consideration of SLAs (Service-Level Agreements). The goal of resource provisioning in this paper is to minimize the cost of the virtual machines used to execute MapReduce applications without violating their deadlines. We propose two resource provisioning approaches: one based on listed pricing policies and the other based on deadline-aware task packing. Through simulations, we evaluate and analyze them from various perspectives.
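A back-of-the-envelope sketch of the listed-pricing approach (the VM types, prices, slot counts, and per-hour billing assumption are hypothetical, not the paper's catalogue): estimate how many VMs of each listed type are needed to push all map and reduce tasks through before the deadline, then take the cheapest feasible option.

```python
import math

# Hypothetical on-demand catalogue: (name, map/reduce task slots, price per hour)
VM_TYPES = [("small", 2, 0.10), ("medium", 4, 0.20), ("large", 8, 0.42)]

def vms_needed(num_tasks, avg_task_hours, deadline_hours, slots_per_vm):
    """VMs required so all tasks fit in the waves that complete before the deadline."""
    waves = math.floor(deadline_hours / avg_task_hours)   # sequential waves per slot
    if waves == 0:
        return None                                       # one task cannot finish in time
    return math.ceil(num_tasks / (slots_per_vm * waves))

def cheapest_plan(num_tasks, avg_task_hours, deadline_hours):
    """Cheapest (vm type, count, cost) that meets the deadline under this simple model."""
    plans = []
    for name, slots, price in VM_TYPES:
        n = vms_needed(num_tasks, avg_task_hours, deadline_hours, slots)
        if n is not None:
            hours = math.ceil(deadline_hours)             # per-hour billing, an assumption
            plans.append((name, n, n * price * hours))
    return min(plans, key=lambda p: p[2]) if plans else None

# Example: 400 tasks of roughly 0.25 h each, to finish within 2 hours
print(cheapest_plan(400, 0.25, 2.0))
```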

65 citations


Cited by
Journal ArticleDOI
TL;DR: This paper defines Cloud computing and provides an architecture for creating Clouds with market-oriented resource allocation by leveraging technologies such as Virtual Machines (VMs). It also provides insights on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain Service Level Agreement (SLA)-oriented resource allocation.

5,850 citations

Book ChapterDOI
21 May 2010
TL;DR: The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in response time and cost savings under dynamic workload scenarios.
Abstract: Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, so load coordination must happen automatically, and the distribution of services must change in response to changes in load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in response time and cost savings under dynamic workload scenarios.

1,045 citations

Book ChapterDOI
Eric V. Denardo
01 Jan 2011
TL;DR: This chapter shows how the simplex method simplifies when applied to the class of optimization problems known as “network flow models,” and that, when such a model has integer-valued data, it finds an optimal solution that is integer-valued.
Abstract: In this chapter, you will see how the simplex method simplifies when it is applied to a class of optimization problems that are known as “network flow models.” You will also see that if a network flow model has “integer-valued data,” the simplex method finds an optimal solution that is integer-valued.
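A small numerical illustration of the integrality claim (the three-node network below is made up, and `scipy` is assumed to be available): a min-cost flow problem with integer costs, capacities, and supplies is solved as a plain linear program, and the optimum comes out integer-valued without any integrality constraint.

```python
from scipy.optimize import linprog

# Tiny min-cost flow: send 2 units from s to t through nodes {s, a, t}.
# Variables are the arc flows x_sa, x_st, x_at, with integer costs and capacities.
c = [1, 3, 1]                       # per-unit arc costs
A_eq = [[1, 1, 0],                  # flow leaving s equals the supply of 2
        [1, 0, -1]]                 # flow conservation at intermediate node a
b_eq = [2, 0]
bounds = [(0, 2), (0, 1), (0, 2)]   # integer arc capacities

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)               # expect an integer flow: [2. 0. 2.] with cost 4.0
```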

828 citations

Book ChapterDOI
TL;DR: This study discusses causes and problems of high power/energy consumption, and presents a taxonomy of energy-efficient design of computing systems covering the hardware, operating system, virtualization, and data center levels.
Abstract: Traditionally, the development of computing systems has been focused on performance improvements driven by the demand of applications from consumer, scientific, and business domains. However, the ever-increasing energy consumption of computing systems has started to limit further performance growth due to overwhelming electricity bills and carbon dioxide footprints. Therefore, the goal of the computer system design has been shifted to power and energy efficiency. To identify open challenges in the area and facilitate future advancements, it is essential to synthesize and classify the research on power- and energy-efficient design conducted to date. In this study, we discuss causes and problems of high power/energy consumption, and present a taxonomy of energy-efficient design of computing systems covering the hardware, operating system, virtualization, and data center levels. We survey various key works in the area and map them onto our taxonomy to guide future design and development efforts. This chapter concludes with a discussion on advancements identified in energy-efficient computing and our vision for future research directions.

745 citations