Author
Kang G. Shin
Other affiliations: IBM, Sungkyunkwan University, Hitachi
Bio: Kang G. Shin is an academic researcher from the University of Michigan. The author has contributed to research in topics: Scheduling (computing) & Network packet. The author has an h-index of 98 and has co-authored 885 publications receiving 38,572 citations. Previous affiliations of Kang G. Shin include IBM & Sungkyunkwan University.
Papers published on a yearly basis
Papers
21 Oct 2001
TL;DR: This paper presents a class of novel algorithms that modify the OS's real-time scheduler and task management service to provide significant energy savings while maintaining real-time deadline guarantees, and shows that these RT-DVS algorithms closely approach the theoretical lower bound on energy consumption.
Abstract: In recent years, there has been a rapid and wide spread of non-traditional computing platforms, especially mobile and portable computing devices. As applications become increasingly sophisticated and processing power increases, the most serious limitation on these devices is the available battery life. Dynamic Voltage Scaling (DVS) has been a key technique in exploiting the hardware characteristics of processors to reduce energy dissipation by lowering the supply voltage and operating frequency. DVS algorithms have been shown to achieve dramatic energy savings while providing the necessary peak computation power in general-purpose systems. However, for a large class of applications in embedded real-time systems like cellular phones and camcorders, the variable operating frequency interferes with their deadline guarantee mechanisms, and DVS in this context, despite its growing importance, remains largely overlooked and under-developed. To provide real-time guarantees, DVS must consider deadlines and periodicity of real-time tasks, requiring integration with the real-time scheduler. In this paper, we present a class of novel algorithms called real-time DVS (RT-DVS) that modify the OS's real-time scheduler and task management service to provide significant energy savings while maintaining real-time deadline guarantees. We show through simulations and a working prototype implementation that these RT-DVS algorithms closely approach the theoretical lower bound on energy consumption, and can easily reduce energy consumption by 20% to 40% in an embedded real-time system.
1,265 citations
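To make the RT-DVS idea above concrete, here is a minimal sketch of static frequency selection for EDF-scheduled periodic tasks: run at the lowest speed at which the task set remains schedulable. The task parameters and frequency levels are illustrative and not taken from the paper.

```python
# Minimal sketch: pick the lowest CPU frequency (as a fraction of f_max) at which
# a set of periodic tasks remains EDF-schedulable, so deadlines are still met
# while energy is reduced. Task set and frequency levels are illustrative.

def lowest_feasible_frequency(tasks, freq_levels):
    """tasks: list of (wcet_at_fmax, period); freq_levels: fractions of f_max."""
    for f in sorted(freq_levels):
        # At speed f, each job takes wcet / f time units.
        utilization = sum((wcet / f) / period for wcet, period in tasks)
        if utilization <= 1.0:        # EDF schedulability bound for periodic tasks
            return f
    return max(freq_levels)          # fall back to full speed

if __name__ == "__main__":
    tasks = [(3.0, 20.0), (2.0, 10.0), (1.0, 5.0)]              # (WCET at f_max, period)
    print(lowest_feasible_frequency(tasks, [0.5, 0.75, 1.0]))   # -> 0.75
```

The paper also describes dynamic variants that adapt at run time; the static test above only illustrates the deadline-preserving slowdown idea.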
TL;DR: In this paper, the problem of moving a manipulator in minimum time along a specified geometric path subject to input torque/force constraints is considered, and the minimum-time solution is deduced in an algorithm form using phase-plane techniques.
Abstract: Conventionally, robot control algorithms are divided into two stages, namely, path or trajectory planning and path tracking (or path control). This division has been adopted mainly as a means of alleviating difficulties in dealing with complex, coupled manipulator dynamics. Trajectory planning usually determines the timing of manipulator position and velocity without considering its dynamics. Consequently, the simplicity obtained from the division comes at the expense of efficiency in utilizing the robot's capabilities. To remove at least part of this inefficiency, this paper considers a solution to the problem of moving a manipulator in minimum time along a specified geometric path subject to input torque/force constraints. We first describe the manipulator dynamics using parametric functions which represent geometric path constraints to be honored for collision avoidance as well as task requirements. Second, constraints on input torques/forces are converted to those on the parameters. Third, the minimum-time solution is deduced in an algorithm form using phase-plane techniques. Finally, numerical examples are presented to demonstrate the utility of the trajectory planning method developed.
1,016 citations
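As a small illustration of the phase-plane technique mentioned above, the sketch below computes the admissible range of path acceleration at a single phase-plane point for a one-joint toy model (inertia only, no gravity or friction). All symbols and numbers are illustrative assumptions, not the paper's formulation.

```python
import math

# Toy phase-plane bound: a single joint theta = f(s) follows a parametrized path,
# with torque model tau = I * (f'(s)*s_ddot + f''(s)*s_dot**2) and |tau| <= tau_max.
# The bounds on s_ddot at a point (s, s_dot) follow directly from the torque limit.

def sdd_bounds(sd, f1, f2, inertia, tau_max):
    """Admissible (lo, hi) range of path acceleration s_ddot; assumes f1 > 0."""
    lo = (-tau_max / inertia - f2 * sd ** 2) / f1
    hi = ( tau_max / inertia - f2 * sd ** 2) / f1
    return lo, hi

if __name__ == "__main__":
    s, sd = 0.3, 1.5                                    # phase-plane point (s, s_dot)
    f1 = (math.pi / 2) * math.cos(math.pi * s / 2)      # f(s) = sin(pi*s/2)
    f2 = -(math.pi / 2) ** 2 * math.sin(math.pi * s / 2)
    print(sdd_bounds(sd, f1, f2, inertia=0.5, tau_max=2.0))
```

The minimum-time profile is then obtained by integrating forward with the upper bound and backward with the lower bound and switching where the trajectories meet; with several joints, the per-joint intervals are intersected and can become empty, which defines the velocity limit curve.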
TL;DR: This work develops a sensing-period optimization mechanism and an optimal channel-sequencing algorithm, as well as an environment-adaptive channel-usage pattern estimation method that is shown to track time-varying channel-parameters accurately.
Abstract: Sensing/monitoring of spectrum-availability has been identified as a key requirement for dynamic spectrum allocation in cognitive radio networks (CRNs). An important issue associated with MAC-layer sensing in CRNs is how often to sense the availability of licensed channels and in which order to sense those channels. To resolve this issue, we address (1) how to maximize the discovery of spectrum opportunities by sensing-period adaptation and (2) how to minimize the delay in finding an available channel. Specifically, we develop a sensing-period optimization mechanism and an optimal channel-sequencing algorithm, as well as an environment-adaptive channel-usage pattern estimation method. Our simulation results demonstrate the efficacy of the proposed schemes and their significant performance improvement over nonoptimal schemes. The sensing-period optimization discovers more than 98 percent of the analytical maximum of discoverable spectrum-opportunities, regardless of the number of channels sensed. For the scenarios tested, the proposed scheme is shown to discover up to 22 percent more opportunities than nonoptimal schemes, which may become even greater with a proper choice of initial sensing periods. The idle-channel discovery delay with the optimal channel-sequencing technique ranges from 0.08 to 0.35 seconds under the tested scenarios, which is much faster than nonoptimal schemes. Moreover, our estimation method is shown to track time-varying channel-parameters accurately.
856 citations
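The following sketch shows, in much-simplified form, the kind of bookkeeping behind MAC-layer sensing: estimate each licensed channel's idle probability from past sensing samples and sense channels in descending order of that estimate to shorten idle-channel discovery delay. It is illustrative only and does not implement the paper's sensing-period optimization.

```python
from collections import defaultdict

# Track per-channel idle observations and produce a sensing order that tries the
# channels most likely to be idle first. Counts start from a Laplace prior (1/2)
# so unseen channels get probability 0.5. Channel IDs and samples are made up.

class ChannelStats:
    def __init__(self):
        self.counts = defaultdict(lambda: [1, 2])   # channel -> [idle_count, total]

    def record(self, channel, idle):
        idle_count, total = self.counts[channel]
        self.counts[channel] = [idle_count + (1 if idle else 0), total + 1]

    def idle_prob(self, channel):
        idle_count, total = self.counts[channel]
        return idle_count / total

    def sensing_order(self, channels):
        """Channels sorted by estimated idle probability, highest first."""
        return sorted(channels, key=self.idle_prob, reverse=True)

if __name__ == "__main__":
    stats = ChannelStats()
    for ch, idle in [(1, True), (1, True), (2, False), (2, True), (3, False)]:
        stats.record(ch, idle)
    print(stats.sensing_order([1, 2, 3]))   # -> [1, 2, 3]
```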
23 Jun 2002
TL;DR: A simple and robust mechanism that not only sets alarms upon detection of ongoing SYN flooding attacks, but also reveals the location of the flooding sources without resorting to expensive IP traceback.
Abstract: We propose a simple and robust mechanism for detecting SYN flooding attacks. Instead of monitoring the ongoing traffic at the front end (like a firewall or proxy) or a victim server itself, we detect SYN flooding attacks at leaf routers that connect end hosts to the Internet. The simplicity of our detection mechanism lies in its statelessness and low computation overhead, which make the detection mechanism itself immune to flooding attacks. Our detection mechanism is based on the protocol behavior of TCP SYN-FIN (RST) pairs, and is an instance of Sequential Change Point Detection [1]. To make the detection mechanism insensitive to site and access patterns, a non-parametric Cumulative Sum (CUSUM) method [4] is applied, thus making the detection mechanism much more generally applicable and its deployment much easier. The efficacy of this detection mechanism is validated by trace-driven simulations. The evaluation results show that the detection mechanism has short detection latency and high detection accuracy. Moreover, due to its proximity to the flooding sources, our mechanism not only sets alarms upon detection of ongoing SYN flooding attacks, but also reveals the location of the flooding sources without resorting to expensive IP traceback.
647 citations
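The core of the detection mechanism described above is a non-parametric CUSUM test on SYN-FIN pair counts, which can be sketched in a few lines. The shift term `a` and alarm threshold `h` below are illustrative tuning parameters, not the values used in the paper, and the normalization is simplified.

```python
# Non-parametric CUSUM over per-interval SYN and FIN counts: under normal traffic
# SYNs and FINs roughly balance, so the normalized difference stays near zero;
# during a SYN flood it jumps and the accumulated statistic crosses the threshold.

def cusum_syn_flood(intervals, a=0.3, h=2.0):
    """intervals: iterable of (syn_count, fin_count) per observation period.
    Returns the index of the first interval that raises an alarm, or None."""
    y = 0.0
    for n, (syn, fin) in enumerate(intervals):
        x = (syn - fin) / max(fin, 1)     # normalized SYN-FIN mismatch
        y = max(0.0, y + x - a)           # CUSUM recursion with negative drift a
        if y > h:
            return n
    return None

if __name__ == "__main__":
    normal = [(100, 98), (120, 118), (95, 97)]
    attack = [(500, 90), (800, 85), (900, 80)]
    print(cusum_syn_flood(normal + attack))   # alarm at the first flooded interval (index 3)
```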
21 Mar 2007
TL;DR: An adaptive resource control system that dynamically adjusts the resource shares to individual tiers in order to meet application-level quality of service (QoS) goals while achieving high resource utilization in the data center is developed.
Abstract: Data centers are often under-utilized due to over-provisioning as well as time-varying resource demands of typical enterprise applications. One approach to increase resource utilization is to consolidate applications in a shared infrastructure using virtualization. Meeting application-level quality of service (QoS) goals becomes a challenge in a consolidated environment as application resource needs differ. Furthermore, for multi-tier applications, the amount of resources needed to achieve their QoS goals might be different at each tier and may also depend on availability of resources in other tiers. In this paper, we develop an adaptive resource control system that dynamically adjusts the resource shares to individual tiers in order to meet application-level QoS goals while achieving high resource utilization in the data center. Our control system is developed using classical control theory, and we used a black-box system modeling approach to overcome the absence of first-principle models for complex enterprise applications and systems. To evaluate our controllers, we built a testbed simulating a virtual data center using Xen virtual machines. We experimented with two multi-tier applications in this virtual data center: a two-tier implementation of RUBiS, an online auction site, and a two-tier Java implementation of TPC-W. Our results indicate that the proposed control system is able to maintain high resource utilization and meet QoS goals in spite of varying resource demands from the applications.
645 citations
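As a toy illustration of the feedback idea in the resource-control paper above, the sketch below uses a hand-tuned integral controller to nudge one tier's CPU share toward a response-time target. The gain, bounds, and measurements are assumptions for illustration; the paper instead derives its controllers from a black-box model of the system.

```python
# One control interval of a per-tier integral controller: if the measured response
# time is above target, grant the tier a larger CPU share, and vice versa, clamped
# to an allowed range. Gain and limits are illustrative.

def adjust_share(share, measured_rt, target_rt, ki=0.05, lo=0.05, hi=1.0):
    error = (measured_rt - target_rt) / target_rt   # normalized QoS error
    share += ki * error                             # slower than target -> more CPU
    return min(hi, max(lo, share))

if __name__ == "__main__":
    share = 0.30
    for rt in [1.8, 1.5, 1.2, 1.0, 0.9]:            # measured response times (s)
        share = adjust_share(share, rt, target_rt=1.0)
        print(round(share, 3))
```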
Cited by
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, and covers planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.
6,340 citations
TL;DR: This paper presents a detailed study of recent advances and open research issues in WMNs, followed by a discussion of the critical factors influencing protocol design and an exploration of the state-of-the-art protocols for WMNs.
Abstract: Wireless mesh networks (WMNs) consist of mesh routers and mesh clients, where mesh routers have minimal mobility and form the backbone of WMNs. They provide network access for both mesh and conventional clients. The integration of WMNs with other networks such as the Internet, cellular, IEEE 802.11, IEEE 802.15, IEEE 802.16, sensor networks, etc., can be accomplished through the gateway and bridging functions in the mesh routers. Mesh clients can be either stationary or mobile, and can form a client mesh network among themselves and with mesh routers. WMNs are anticipated to resolve the limitations and to significantly improve the performance of ad hoc networks, wireless local area networks (WLANs), wireless personal area networks (WPANs), and wireless metropolitan area networks (WMANs). They are undergoing rapid progress and inspiring numerous deployments. WMNs will deliver wireless services for a large variety of applications in personal, local, campus, and metropolitan areas. Despite recent advances in wireless mesh networking, many research challenges remain in all protocol layers. This paper presents a detailed study on recent advances and open research issues in WMNs. System architectures and applications of WMNs are described, followed by discussing the critical factors influencing protocol design. Theoretical network capacity and the state-of-the-art protocols for WMNs are explored with an objective to point out a number of open research issues. Finally, testbeds, industrial practice, and current standard activities related to WMNs are highlighted.
4,205 citations
TL;DR: This note investigates a simple event-triggered scheduler based on the paradigm that a real-time scheduler could be regarded as a feedback controller that decides which task is executed at any given instant, and shows how it leads to guaranteed performance, thus relaxing the more traditional periodic execution requirements.
Abstract: In this note, we revisit the problem of scheduling stabilizing control tasks on embedded processors. We start from the paradigm that a real-time scheduler could be regarded as a feedback controller that decides which task is executed at any given instant. The objective of this controller is to guarantee that (control-unrelated) software tasks meet their deadlines and that stabilizing control tasks asymptotically stabilize the plant. We investigate a simple event-triggered scheduler based on this feedback paradigm and show how it leads to guaranteed performance, thus relaxing the more traditional periodic execution requirements.
3,695 citations
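The event-triggered idea summarized above can be illustrated with a scalar plant: the control task is executed only when the measurement error since the last sample grows past a fraction sigma of the current state. The plant parameters, gain, and sigma below are illustrative; the note itself establishes the conditions under which such triggering preserves stability.

```python
# Simulate x' = a*x + b*u with u = K*x(t_k), updating the control only when
# |x(t_k) - x(t)| >= sigma * |x(t)| (the triggering condition). Compare the number
# of control updates against the number of steps a purely periodic design would use.

def simulate(a=1.0, b=1.0, K=-2.0, sigma=0.1, x0=1.0, dt=1e-3, T=5.0):
    x, x_held, events = x0, x0, 0
    steps = int(T / dt)
    for _ in range(steps):
        if abs(x_held - x) >= sigma * abs(x):   # error too large: run the control task
            x_held = x
            events += 1
        u = K * x_held
        x += dt * (a * x + b * u)               # forward-Euler plant update
    return x, events, steps

if __name__ == "__main__":
    x_final, events, steps = simulate()
    print(f"final state {x_final:.4f}, {events} control updates vs {steps} periodic steps")
```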