Author

Yong Meng Teo

Bio: Yong Meng Teo is an academic researcher from the National University of Singapore. The author has contributed to research in topics including Grid and Discrete Event Simulation, has an h-index of 21, and has co-authored 134 publications receiving 1,713 citations. Previous affiliations of Yong Meng Teo include the Massachusetts Institute of Technology and the Singapore–MIT Alliance.


Papers
Proceedings ArticleDOI
17 May 2010
TL;DR: This paper presents a dynamic pricing scheme suitable for rational users' requests containing multiple resource types, and shows that user welfare and the percentage of successful requests are increased by using dynamic pricing.
Abstract: Current large distributed systems allow users to share and trade resources. In cloud computing, users purchase different types of resources from one or more resource providers using a fixed pricing scheme. Federated clouds, a topic of recent interest, allow different cloud providers to share resources for increased scalability and reliability. However, users and providers of cloud resources are rational and maximize their own interest when consuming and contributing shared resources. In this paper, we present a dynamic pricing scheme suitable for rational users' requests containing multiple resource types. Using simulations, we compare the efficiency of our proposed strategy-proof dynamic scheme with fixed pricing, and show that user welfare and the percentage of successful requests are increased by using dynamic pricing.

199 citations
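
The abstract above does not spell out the pricing rule itself, so the following is only a minimal, hypothetical sketch of how a demand-driven price for multiple resource types might be compared with a fixed price. The function names, resource types, and the linear demand factor are illustrative assumptions, not the strategy-proof mechanism from the paper.

# Hypothetical illustration of dynamic vs. fixed pricing for a request
# spanning multiple resource types. The linear load factor is an assumption
# for illustration only; it is not the paper's pricing scheme.

FIXED_PRICE = {"cpu": 1.0, "memory": 0.5, "storage": 0.1}   # price per unit
BASE_PRICE  = {"cpu": 0.6, "memory": 0.3, "storage": 0.05}  # price at zero load

def dynamic_price(resource: str, utilization: float) -> float:
    """Price per unit rises linearly with provider utilization in [0, 1]."""
    return BASE_PRICE[resource] * (1.0 + utilization)

def request_cost(request: dict, utilization: dict, dynamic: bool) -> float:
    """Total cost of a multi-resource request under either scheme."""
    if dynamic:
        return sum(q * dynamic_price(r, utilization[r]) for r, q in request.items())
    return sum(q * FIXED_PRICE[r] for r, q in request.items())

if __name__ == "__main__":
    request = {"cpu": 4, "memory": 8, "storage": 100}
    load = {"cpu": 0.2, "memory": 0.5, "storage": 0.1}
    print("fixed  :", request_cost(request, load, dynamic=False))
    print("dynamic:", request_cost(request, load, dynamic=True))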

Proceedings ArticleDOI
15 Nov 2005
TL;DR: A sensor grid architecture, called the scalable proxy-based architecture for sensor grids (SPRING), is proposed to address these design issues, and a sensor grid testbed is developed to study the design issues of sensor grids and to improve the architecture design.
Abstract: Wireless sensor networks have emerged as an exciting technology for a wide range of important applications that acquire and process information from the physical world. Grid computing has evolved as a standards-based approach for coordinated resource sharing. Sensor grids combine these two promising technologies by extending the grid computing paradigm to the sharing of sensor resources in wireless sensor networks. There are several issues and challenges in the design of sensor grids. In this paper, we propose a sensor grid architecture, called the scalable proxy-based architecture for sensor grids (SPRING), to address these design issues. We also developed a sensor grid testbed to study the design issues of sensor grids and to improve our sensor grid architecture design.

100 citations

Journal ArticleDOI
01 Nov 2001
TL;DR: This paper focuses on an experimental analysis of the performance and scalability of cluster-based web servers and observes that the round robin algorithm performs much worse than the other two algorithms for low to medium workload, but as the request arrival rate increases, the performance of the three algorithms converges.
Abstract: This paper focuses on an experimental analysis of the performance and scalability of cluster-based web servers. We carry out the comparative studies using two experimental platforms, namely, a hardware testbed consisting of 16 PCs, and a trace-driven discrete-event simulator. Dispatcher and web server service times used in the simulator are determined by carrying out a set of experiments on the testbed. The simulator is validated against stochastic queuing models and the testbed. Experiments on the testbed are limited by the hardware configuration, but our complementary approach allows us to carry out a scalability study on the validated simulator. The three dispatcher-based scheduling algorithms analyzed are: round robin scheduling, least connected based scheduling, and least loaded based scheduling. The least loaded algorithm is used as the baseline (upper performance bound) in our analysis, and the performance metrics include average waiting time, average response time, and average web server utilization. A synthetic trace generated by the workload generator called SURGE and a public-domain France Football World Cup 1998 trace are used. We observe that the round robin algorithm performs much worse than the other two algorithms for low to medium workload. However, as the request arrival rate increases, the performance of the three algorithms converges, with the least connected algorithm approaching the baseline algorithm at a much faster rate than round robin. The least connected algorithm performs well for medium to high workload. At very low load, its average waiting time is two to six times higher than that of the baseline algorithm, but the absolute difference between these two waiting times is very small.

96 citations
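
The three dispatcher policies compared in the abstract above can be stated compactly. The sketch below assumes each back-end server exposes a current connection count and a load estimate, which is a simplification of the paper's testbed and simulator, not their actual implementation.

from dataclasses import dataclass
from itertools import count

@dataclass
class Server:
    name: str
    connections: int = 0   # currently open connections
    load: float = 0.0      # estimated load (e.g., queued work); an assumption

class Dispatcher:
    """Sketch of the three dispatcher-based scheduling policies."""
    def __init__(self, servers):
        self.servers = servers
        self._rr = count()

    def round_robin(self) -> Server:
        return self.servers[next(self._rr) % len(self.servers)]

    def least_connected(self) -> Server:
        return min(self.servers, key=lambda s: s.connections)

    def least_loaded(self) -> Server:
        # Baseline (upper performance bound) in the paper's analysis.
        return min(self.servers, key=lambda s: s.load)

if __name__ == "__main__":
    cluster = [Server("web1", 3, 0.7), Server("web2", 1, 0.2), Server("web3", 2, 0.9)]
    d = Dispatcher(cluster)
    print(d.round_robin().name, d.least_connected().name, d.least_loaded().name)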

Journal ArticleDOI
01 Jul 2018
TL;DR: GAM, an efficient distributed in-memory platform that provides a directory-based cache coherence protocol over remote direct memory access (RDMA), manages the free memory distributed among multiple nodes to provide a unified memory model, and supports a set of user-friendly APIs for memory operations.
Abstract: Recent advancements in high-performance networking interconnect significantly narrow the performance gap between intra-node and inter-node communications, and open up opportunities for distributed memory platforms to enforce cache coherency among distributed nodes. To this end, we propose GAM, an efficient distributed in-memory platform that provides a directory-based cache coherence protocol over remote direct memory access (RDMA). GAM manages the free memory distributed among multiple nodes to provide a unified memory model, and supports a set of user-friendly APIs for memory operations. To remove writes from critical execution paths, GAM allows a write to be reordered with the following reads and writes, and hence enforces partial store order (PSO) memory consistency. A light-weight logging scheme is designed to provide fault tolerance in GAM. We further build a transaction engine and a distributed hash table (DHT) atop GAM to show the ease-of-use and applicability of the provided APIs. Finally, we conduct an extensive micro benchmark to evaluate the read/write/lock performance of GAM under various workloads, and a macro benchmark against the transaction engine and DHT. The results show the superior performance of GAM over existing distributed memory platforms.

64 citations
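
The partial store order (PSO) behaviour described above can be illustrated with a toy per-node write buffer: a write is queued rather than applied immediately, later reads from the same node see the buffered value (store forwarding), and the buffer is only drained at an explicit synchronization point. This is a didactic model under those assumptions, not GAM's RDMA implementation or API.

class PSONode:
    """Toy model of PSO: writes are buffered and may be reordered with the
    reads and writes that follow; a fence drains the buffer."""

    def __init__(self, memory: dict):
        self.memory = memory            # shared "global" memory (a plain dict)
        self.write_buffer = {}          # pending writes: addr -> value

    def write(self, addr, value):
        # Not visible to other nodes until the buffer is flushed.
        self.write_buffer[addr] = value

    def read(self, addr):
        # Store forwarding: see our own pending write first, else memory.
        if addr in self.write_buffer:
            return self.write_buffer[addr]
        return self.memory.get(addr)

    def fence(self):
        # Synchronization point: make all pending writes globally visible.
        self.memory.update(self.write_buffer)
        self.write_buffer.clear()

if __name__ == "__main__":
    mem = {"x": 0}
    a, b = PSONode(mem), PSONode(mem)
    a.write("x", 1)
    print(b.read("x"))   # 0 -- a's write is still buffered
    a.fence()
    print(b.read("x"))   # 1 -- visible after the fence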

Proceedings ArticleDOI
01 Jun 2005
TL;DR: This paper focuses on extending the High Level Architecture (HLA) to the grid and proposes a distributed simulation framework, called HLAGrid, that uses a Federate-Proxy-RTI architecture, allowing resources on the grid to be utilized on demand through grid services.
Abstract: Modeling and simulation permeate all areas of business, science and engineering. With the increase in the scale and complexity of simulations, large amounts of computational resources are required, and provisioning of the services is becoming increasingly difficult. The grid provides a platform for coordinated resource sharing and application development and execution. In this paper, we focus on extending the High Level Architecture (HLA) to the grid. More specifically, we propose a distributed simulation framework, called HLAGrid. The framework uses a Federate-Proxy-RTI architecture, which allows resources on the grid to be utilized on demand by using grid services. The architecture also supports federation discovery, security of the simulator logic, and flexible federation construction, such as hierarchical federations. The architecture hides the heterogeneity of the simulators, the simulators' execution platforms, and how the simulators communicate with the RTI. All interfaces used in the framework comply with the standard HLA interface specification, which provides reusability to simulators. A prototype of the framework is implemented using DMSO's RTI 1.3NG version 6, and the grid system runs the Globus Toolkit.

61 citations
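
The Federate-Proxy-RTI arrangement described above can be pictured as a thin forwarding layer: the federate talks to a local proxy object, and the proxy relays the calls to a remote RTI reached through a grid service. The interface below is hypothetical and far simpler than the HLA interface specification; it only shows the forwarding pattern that hides where the RTI actually runs.

class RemoteRTI:
    """Stand-in for the RTI reached through a grid service (hypothetical)."""
    def __init__(self):
        self.federates = []

    def join(self, name):
        self.federates.append(name)

    def publish(self, federate, attribute, value):
        print(f"RTI: {federate} updates {attribute} = {value}")

class FederateProxy:
    """Local proxy the federate calls; it forwards everything to the remote
    RTI, hiding the RTI's location and communication mechanism."""
    def __init__(self, name, rti: RemoteRTI):
        self.name, self.rti = name, rti
        self.rti.join(name)

    def update_attribute(self, attribute, value):
        self.rti.publish(self.name, attribute, value)

if __name__ == "__main__":
    rti = RemoteRTI()
    tank = FederateProxy("tank-sim", rti)
    tank.update_attribute("position", (10.0, 42.5))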


Cited by
Journal ArticleDOI
TL;DR: Three approaches to deciding model validity are described, two paradigms that relate verification and validation to the model development process are presented, and various validation techniques are defined.
Abstract: Verification and validation of simulation models are discussed in this paper. Three approaches to deciding model validity are described, two paradigms that relate verification and validation to the model development process are presented, and various validation techniques are defined.

1,425 citations

Book
01 Jan 2000
TL;DR: The article gives an overview of technologies to distribute the execution of simulation programs over multiple computer systems, with particular emphasis on synchronization (also called time management) algorithms as well as data distribution techniques.
Abstract: Originating from basic research conducted in the 1970s and 1980s, the parallel and distributed simulation field has matured over the last few decades. Today, operational systems have been fielded for applications such as military training, analysis of communication networks, and air traffic control systems, to mention a few. The article gives an overview of technologies to distribute the execution of simulation programs over multiple computer systems. Particular emphasis is placed on synchronization (also called time management) algorithms as well as data distribution techniques.

1,217 citations
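
A minimal sketch of the conservative time-management idea surveyed above: a logical process may only process events whose timestamps do not exceed the minimum time guaranteed on its input channels, with null messages carrying those guarantees. This is a common textbook formulation under those assumptions, not code from the article.

import heapq

class LogicalProcess:
    """Toy conservative (null-message style) synchronization: the process
    only advances to the minimum time promised by all input channels."""

    def __init__(self, name, input_channels):
        self.name = name
        self.event_queue = []                        # (timestamp, event) pairs
        self.channel_clock = {c: 0.0 for c in input_channels}

    def schedule(self, timestamp, event):
        heapq.heappush(self.event_queue, (timestamp, event))

    def receive_null(self, channel, timestamp):
        # A null message promises no event earlier than `timestamp` on `channel`.
        self.channel_clock[channel] = max(self.channel_clock[channel], timestamp)

    def safe_time(self):
        return min(self.channel_clock.values())

    def process_safe_events(self):
        while self.event_queue and self.event_queue[0][0] <= self.safe_time():
            ts, event = heapq.heappop(self.event_queue)
            print(f"{self.name}: processing '{event}' at t={ts}")

if __name__ == "__main__":
    lp = LogicalProcess("radar", input_channels=["aircraft", "weather"])
    lp.schedule(5.0, "detect")
    lp.schedule(12.0, "sweep")
    lp.receive_null("aircraft", 10.0)
    lp.receive_null("weather", 8.0)
    lp.process_safe_events()   # only the t=5.0 event is safe (safe time is 8.0)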

Journal Article
TL;DR: In this modern era, use of the internet should be maximized; one of its benefits is online access to the analysis of human genetic linkage book, as a window to the world, as many people suggest.
Abstract: In this modern era, the use of the internet should be maximized. The internet helps us greatly, not only with important matters but also with daily activities. Many people, at every level, can now use the internet, and internet connections can be enjoyed in many places. One of the benefits is online access to the analysis of human genetic linkage book, as a window to the world, as many people suggest.

1,000 citations

01 Jan 2003
TL;DR: A super-peer is a node in a peer-to-peer network that operates both as a server to a set of clients, and as an equal in a network of super-peers.
Abstract: A super-peer is a node in a peer-to-peer network that operates both as a server to a set of clients, and as an equal in a network of super-peers. Super-peer networks strike a balance between the efficiency of centralized search, and the autonomy, load balancing and robustness to attacks provided by distributed search. Furthermore, they take advantage of the heterogeneity of capabilities (e.g., bandwidth, processing power) across peers, which recent studies have shown to be enormous. Hence, new and old P2P systems like KaZaA and Gnutella are adopting super-peers in their design. Despite their growing popularity, the behavior of super-peer networks is not well understood. For example, what are the potential drawbacks of super-peer networks? How can super-peers be made more reliable? How many clients should a super-peer take on to maximize efficiency? We examine super-peer networks in detail, gaining an understanding of their fundamental characteristics and performance tradeoffs. We also present practical guidelines and a general procedure for the design of an efficient super-peer network.

916 citations
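
The division of roles described above, where super-peers index their clients' content and search only among themselves, can be sketched as follows. The flooding search with a hop limit and the index layout are generic illustrations, not the specific designs evaluated in the paper.

class SuperPeer:
    """Toy super-peer: indexes its clients' items and forwards queries to
    neighbouring super-peers (simple flooding with a hop limit)."""

    def __init__(self, name):
        self.name = name
        self.index = {}        # item -> list of client names holding it
        self.neighbours = []   # other SuperPeer objects

    def register(self, client, items):
        for item in items:
            self.index.setdefault(item, []).append(client)

    def query(self, item, ttl=2, visited=None):
        visited = visited or set()
        visited.add(self.name)
        hits = [(c, self.name) for c in self.index.get(item, [])]
        if ttl > 0:
            for n in self.neighbours:
                if n.name not in visited:
                    hits += n.query(item, ttl - 1, visited)
        return hits

if __name__ == "__main__":
    sp1, sp2 = SuperPeer("sp1"), SuperPeer("sp2")
    sp1.neighbours.append(sp2)
    sp2.neighbours.append(sp1)
    sp1.register("alice", ["song.mp3"])
    sp2.register("bob", ["song.mp3", "talk.pdf"])
    print(sp1.query("song.mp3"))   # hits from both super-peers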

Journal ArticleDOI
TL;DR: An in-depth study of the existing literature on data center power modeling, covering more than 200 models, organized in a hierarchical structure with two main branches focusing on hardware-centric and software-centric power models.
Abstract: Data centers are critical, energy-hungry infrastructures that run large-scale Internet-based services. Energy consumption models are pivotal in designing and optimizing energy-efficient operations to curb excessive energy consumption in data centers. In this paper, we survey the state-of-the-art techniques used for energy consumption modeling and prediction for data centers and their components. We conduct an in-depth study of the existing literature on data center power modeling, covering more than 200 models. We organize these models in a hierarchical structure with two main branches focusing on hardware-centric and software-centric power models. Under the hardware-centric approaches, we start from the digital circuit level and move on to describe higher-level energy consumption models at the hardware component level, server level, data center level, and finally the systems-of-systems level. Under the software-centric approaches, we investigate power models developed for operating systems, virtual machines and software applications. This systematic approach allows us to identify multiple issues prevalent in power modeling of different levels of data center systems, including: i) few modeling efforts are targeted at the power consumption of the entire data center, ii) many state-of-the-art power models are based on only a few CPU or server metrics, and iii) the effectiveness and accuracy of these power models remain open questions. Based on these observations, we conclude the survey by describing key challenges for future research on constructing effective and accurate data center power models.

741 citations
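
The survey's observation above, that many published power models rest on a few CPU or server metrics, is easiest to see in the widely used linear utilization model, where server power is interpolated between idle and peak power as a function of CPU utilization. The sketch below uses placeholder wattage figures, not measurements from the survey.

def linear_power_model(utilization: float, p_idle: float, p_peak: float) -> float:
    """Classic CPU-utilization-based server power model:
    P(u) = P_idle + (P_peak - P_idle) * u, with u clamped to [0, 1]."""
    u = min(max(utilization, 0.0), 1.0)
    return p_idle + (p_peak - p_idle) * u

if __name__ == "__main__":
    # Placeholder figures for a single server (watts); not survey data.
    for u in (0.0, 0.25, 0.5, 1.0):
        print(f"u={u:.2f}: {linear_power_model(u, p_idle=120.0, p_peak=300.0):.1f} W")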