
Showing papers by "Jason Nieh published in 2002"


Journal ArticleDOI
09 Dec 2002
TL;DR: The paper demonstrates that the Linux Zap prototype provides general-purpose process migration functionality with low overhead; experiments on pods running a standard user's X Windows desktop environment and an Apache web server show subsecond checkpoint and restart latencies.
Abstract: We have created Zap, a novel system for transparent migration of legacy and networked applications. Zap provides a thin virtualization layer on top of the operating system that introduces pods, which are groups of processes that are provided a consistent, virtualized view of the system. This decouples processes in pods from dependencies to the host operating system and other processes on the system. By integrating Zap virtualization with a checkpoint-restart mechanism, Zap can migrate a pod of processes as a unit among machines running independent operating systems without leaving behind any residual state after migration. We have implemented a Zap prototype in Linux that supports transparent migration of unmodified applications without any kernel modifications. We demonstrate that our Linux Zap prototype can provide general-purpose process migration functionality with low overhead. Our experimental results for migrating pods used for running a standard user's X windows desktop computing environment and for running an Apache web server show that these kinds of pods can be migrated with subsecond checkpoint and restart latencies.
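The pod abstraction above can be illustrated with a toy sketch (an assumption-laden simplification, not Zap's implementation): processes see stable virtual identifiers, and a per-pod table maps them to whatever the host OS currently assigned, so the mapping can be rebuilt after migration without the processes noticing. The class and method names here are hypothetical.

```python
# Toy sketch of the pod idea: a private virtual namespace decouples
# processes from host-OS identifiers such as PIDs.

class Pod:
    def __init__(self):
        self._vpid_to_pid = {}  # virtual pid -> current host pid
        self._next_vpid = 1

    def register(self, host_pid):
        """Assign a stable virtual pid to a process in this pod."""
        vpid = self._next_vpid
        self._next_vpid += 1
        self._vpid_to_pid[vpid] = host_pid
        return vpid

    def remap_after_migration(self, new_pids):
        """new_pids: vpid -> host pid assigned on the destination machine.
        Virtual pids are unchanged, so no residual state ties the pod
        to the source host."""
        self._vpid_to_pid.update(new_pids)

pod = Pod()
v = pod.register(4242)             # host pid on the source machine
pod.remap_after_migration({v: 7})  # different host pid after restart
# Processes keep using virtual pid `v`; only the private mapping changed.
```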

549 citations


Proceedings Article
10 Jun 2002
TL;DR: It is found that thin-client systems can perform well on web and multimedia applications in LAN environments, but the efficiency of the thin-client protocols varies widely.
Abstract: The growing popularity of thin-client systems makes it important to determine the factors that govern the performance of these thin-client architectures. To assess the viability of the thin-client computing model, we measured the performance of six popular thin-client platforms—Citrix MetaFrame, Microsoft Terminal Services, Sun Ray, Tarantella, VNC, and X—running over a wide range of network access bandwidths. We find that thin-client systems can perform well on web and multimedia applications in LAN environments, but the efficiency of the thin-client protocols varies widely. We analyze the differences in the various approaches and explain the impact of the underlying remote display protocols on overall performance. Our results quantify the impact of different approaches in display encoding primitives, display update policies, and display caching and compression techniques across a broad range of thin-client systems.

109 citations


Proceedings ArticleDOI
01 Jun 2002
TL;DR: It is shown that using thin-client computing in a wide-area network environment can deliver acceptable performance over Internet2, even when client and server are located thousands of miles apart on opposite ends of the country.
Abstract: While many application service providers have proposed using thin-client computing to deliver computational services over the Internet, little work has been done to evaluate the effectiveness of thin-client computing in a wide-area network. To assess the potential of thin-client computing in the context of future commodity high-bandwidth Internet access, we have used a novel, non-invasive slow-motion benchmarking technique to evaluate the performance of several popular thin-client computing platforms in delivering computational services cross-country over Internet2. Our results show that using thin-client computing in a wide-area network environment can deliver acceptable performance over Internet2, even when client and server are located thousands of miles apart on opposite ends of the country. However, performance varies widely among thin-client platforms and not all platforms are suitable for this environment. While many thin-client systems are touted as being bandwidth efficient, we show that network latency is often the key factor in limiting wide-area thin-client performance. Furthermore, we show that the same techniques used to improve bandwidth efficiency often result in worse overall performance in wide-area networks. We characterize and analyze the different design choices in the various thin-client platforms and explain which of these choices should be selected for supporting wide-area computing services.
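The latency-versus-bandwidth point above can be made concrete with a back-of-the-envelope model (the numbers are illustrative assumptions, not the paper's measurements): once a screen update requires several protocol round trips, cross-country latency, not bandwidth, dominates response time.

```python
# Illustrative model: time to deliver one screen update is transmission
# time plus one RTT per protocol round trip required.

def update_time(bytes_sent, bandwidth_bps, rtt_s, round_trips):
    """Seconds to deliver a screen update of `bytes_sent` bytes."""
    return bytes_sent * 8 / bandwidth_bps + round_trips * rtt_s

# Assumed scenario: a 100 KB update over 100 Mbit/s with a 70 ms
# cross-country RTT. Transmission time is only 8 ms, so a chatty
# protocol needing 4 round trips is latency-bound, not bandwidth-bound.
one_trip = update_time(100_000, 100e6, 0.070, 1)
chatty = update_time(100_000, 100e6, 0.070, 4)
```

This is why techniques that trade extra round trips for bandwidth savings can perform worse in wide-area networks, as the abstract notes.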

82 citations


DOI
01 Jan 2002
TL;DR: The performance results show that VNAT has essentially no network performance overhead except when connections are migrated, in which case the overhead of the Linux prototype is less than 7 percent over a stock RedHat Linux system.
Abstract: Virtual Network Address Translation (VNAT) is a novel architecture that allows transparent migration of end-to-end live network connections associated with various computation units. Such computation units can be a single process, a group of processes, or an entire host. VNAT virtualizes network connections perceived by transport protocols so that identification of network connections is decoupled from stationary hosts. Such virtual connections are then remapped into physical connections to be carried on the physical network using network address translation. VNAT requires no modification to existing applications, operating systems, or protocol stacks. Furthermore, it is fully compatible with the existing communication infrastructure; virtual and normal connections can coexist without interfering with each other. VNAT functions entirely within end systems and requires no third-party services. We have implemented a VNAT prototype with the Linux 2.4 kernel and demonstrated its functionality on a wide range of popular real-world network applications. Our performance results show that VNAT has essentially no network performance overhead except when connections are migrated, in which case the overhead of our Linux prototype is less than 7 percent over a stock RedHat Linux system.
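The virtual-to-physical remapping idea can be sketched as follows (a toy illustration of the concept, not the VNAT implementation; the class and endpoint values are hypothetical): the transport layer sees stable virtual endpoints, and a translation table remaps them to whatever physical address currently hosts the computation unit.

```python
# Toy sketch of VNAT's core idea: connection identity is bound to
# virtual endpoints, so migration only updates a translation table.

class VirtualConnectionTable:
    def __init__(self):
        self._phys = {}  # virtual (addr, port) -> current physical (addr, port)

    def bind(self, virtual, physical):
        self._phys[virtual] = physical

    def migrate(self, virtual, new_physical):
        # The virtual endpoint is unchanged, so the transport-level
        # connection identity survives; only the mapping to the
        # physical network changes.
        self._phys[virtual] = new_physical

    def translate(self, virtual):
        """Rewrite a virtual endpoint to its physical one (the NAT step)."""
        return self._phys[virtual]

table = VirtualConnectionTable()
table.bind(("10.0.0.1", 5000), ("192.168.1.7", 5000))
table.migrate(("10.0.0.1", 5000), ("192.168.2.9", 5000))
# The connection keeps its virtual identity across hosts.
```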

62 citations


Proceedings ArticleDOI
01 Jun 2002
TL;DR: This work implements Certes (CliEnt Response Time Estimated by the Server), an online server-based mechanism for web servers to measure client perceived response time, as if measured at the client, based on a model of TCP.
Abstract: As businesses continue to grow their World Wide Web presence, it is becoming increasingly vital for them to have quantitative measures of the client perceived response times of their web services. We present Certes (CliEnt Response Time Estimated by the Server), an online server-based mechanism for web servers to measure client perceived response time, as if measured at the client. Certes is based on a model of TCP that quantifies the effect that connection drops have on perceived client response time, by using three simple server-side measurements: connection drop rate, connection accept rate and connection completion rate. The mechanism does not require modifications to http servers or web pages, does not rely on probing or third party sampling, and does not require client-side modifications or scripting. Certes can be used to measure response times for any web content, not just HTML. We have implemented Certes and compared its response time measurements with those obtained with detailed client instrumentation. Our results demonstrate that Certes provides accurate server-based measurements of client response times in HTTP 1.0/1.1 [14] environments, even with rapidly changing workloads. Certes runs online in constant time with very low overhead. It can be used at web sites and server farms to verify compliance with service level objectives.
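One intuition behind Certes can be illustrated with a heavily simplified sketch (this is not the paper's actual TCP model): dropped SYNs force clients to wait through retransmission timeouts that the server never observes directly, but the extra delay can be estimated from server-side drop and accept counts. The function name, the single-timeout assumption, and the uniform-cost assumption are all mine.

```python
# Hedged sketch: estimating extra client-perceived connection-setup delay
# from server-side counters only.

SYN_TIMEOUT = 3.0  # classic initial TCP SYN retransmission timeout, seconds

def extra_setup_delay(drops, accepts):
    """Average extra client-perceived setup delay per accepted connection,
    assuming each dropped SYN costs the client exactly one initial
    retransmission timeout (a deliberate simplification)."""
    if accepts == 0:
        return 0.0
    return drops * SYN_TIMEOUT / accepts

# Assumed example: 50 dropped SYNs against 1000 accepted connections
# adds an average of 0.15 s that server-side timing alone would miss.
delay = extra_setup_delay(50, 1000)
```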

52 citations


Proceedings ArticleDOI
Fei Li1, Jason Nieh1
07 Aug 2002
TL;DR: 2DLI is proposed and evaluated, which is based on OLI but additionally provides lower encoding complexity for lossless compression, and results show that when compared with other compression methods, 2DLI provides good data compression ratio with modest computational overhead, for both servers and clients.
Abstract: Due to its reduced administrative costs and better resource utilization, server-based computing (SBC) is becoming a popular approach for delivering computational services across a network. In SBC, all application processing is done on servers while only screen updates are sent to clients. While many SBC encoding techniques have been explored for transmitting screen updates efficiently, existing approaches do not effectively support multimedia applications. To address this problem, we propose optimal linear interpolation (OLI), a new pixel-based SBC screen update coding algorithm. With OLI, the server selects and transmits only a small sample of pixels to represent a screen update. The client recovers the complete screen update from these samples using piecewise linear interpolation to achieve the best visual quality. OLI can be used to provide lossless or lossy compression for an adaptive trade-off between network bandwidth and processing time requirements. We further propose and evaluate 2D lossless linear interpolation (2DLI), which is based on OLI but additionally provides lower encoding complexity for lossless compression. Our experimental results show that when compared with other compression methods, 2DLI provides good data compression ratio with modest computational overhead, for both servers and clients.
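The client-side reconstruction step described above can be sketched as follows (an illustration of piecewise linear interpolation over sampled pixels, not the paper's code; the sample-selection step that makes OLI "optimal" is not shown).

```python
# Sketch of the decoding idea: given a small sample of
# (pixel index, color value) pairs chosen by the server, recover the
# full scanline by piecewise linear interpolation between samples.

def reconstruct(samples, length):
    """samples: sorted list of (index, value) pairs covering [0, length-1].
    Returns the interpolated scanline as a list of `length` values."""
    out = [0.0] * length
    for (i0, v0), (i1, v1) in zip(samples, samples[1:]):
        span = i1 - i0
        for i in range(i0, i1 + 1):
            # Linear blend between the two surrounding samples.
            out[i] = v0 + (v1 - v0) * (i - i0) / span
    return out

# A linear ramp is recovered exactly from just its two endpoints,
# which is why sparse samples suffice for smooth image regions.
line = reconstruct([(0, 0), (9, 90)], 10)
```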

46 citations


DOI
01 Jan 2002
TL;DR: An elastic quota system is implemented in Solaris that allows all users to use unlimited amounts of available disk space while still providing system administrators the ability to control how the disk space is allocated among users.
Abstract: We introduce elastic quotas, a disk space management technique that makes disk space an elastic resource like CPU and memory. Elastic quotas allow all users to use unlimited amounts of available disk space while still providing system administrators the ability to control how the disk space is allocated among users. Elastic quotas maintain existing persistent file semantics while supporting user-controlled policies for removing files when the file system becomes too full. We have implemented an elastic quota system in Solaris and measured its performance. The system is simple to implement, requires no kernel modifications, and is compatible with existing disk space management methods. Our results show that elastic quotas are an effective, low-overhead solution for flexible file system management.
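A possible reclamation policy of the kind the abstract describes can be sketched as follows (a hypothetical illustration: the function names, the high-water mark, and the oldest-first policy are assumptions, not the paper's specifics).

```python
# Hypothetical elastic-quota reclamation: when the file system fills past
# a high-water mark, remove users' elastic files (here: oldest first)
# until usage drops back below the mark.

def reclaim(files, capacity, high_water=0.9):
    """files: list of (name, size, mtime) tuples for elastic files.
    Returns the names of files removed, oldest first."""
    used = sum(size for _, size, _ in files)
    removed = []
    # Oldest-first is just one possible user-selectable policy;
    # elastic quotas let users choose how their files are reclaimed.
    for name, size, _ in sorted(files, key=lambda f: f[2]):
        if used <= high_water * capacity:
            break
        used -= size
        removed.append(name)
    return removed

# 120 units used on a 100-unit disk: only the oldest file ("a") must go
# to get back under the 90-unit high-water mark.
victims = reclaim([("a", 40, 1), ("b", 30, 3), ("c", 50, 2)], capacity=100)
```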

14 citations


Patent
01 May 2002
TL;DR: A proportional share scheduling apparatus and technique for scheduling CPU resources among a plurality of clients, each of which has a proportional allocation of the total resources.
Abstract: A proportional share scheduling apparatus and technique for scheduling resources among a plurality of clients, each of which has a proportional resource allocation of the total resources for the CPU. The clients are sorted in a run queue from the client having the largest proportional share allocation to the client having the smallest proportional share allocation (112). Starting from the beginning of the run queue, each client is run for a constant time quantum (130). If a client in the run queue has received more than its proportional resource allocation, the remaining clients in the run queue are skipped, and the clients are run from the beginning of the run queue (120). This process repeats until all clients have received service. Since the clients with the largest proportional share allocation are placed at the beginning of the run queue, they are allowed to receive more service than the clients having a smaller proportional resource allocation positioned at the end of the run queue.
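The queue-walking mechanism described above can be sketched as a small simulation (a simplified reading of the patent text, not the patented implementation; to guarantee progress, this sketch always runs the front client each pass, which is an assumption on my part).

```python
# Sketch of the proportional share idea: a run queue sorted by share,
# constant quanta, and a skip-and-restart rule when a client gets ahead
# of its proportional allocation.

QUANTUM = 1  # constant time quantum

def schedule(shares, rounds=100):
    """shares: dict mapping client -> proportional share.
    Returns total quanta received per client after `rounds` passes."""
    total = sum(shares.values())
    # Run queue sorted from largest to smallest share allocation.
    queue = sorted(shares, key=shares.get, reverse=True)
    received = {c: 0 for c in queue}
    elapsed = 0
    for _ in range(rounds):
        ran = False
        for client in queue:
            # If this client has received more than its proportional
            # allocation, skip the remaining clients and restart from
            # the front of the queue. (The front client always runs,
            # so each pass makes progress.)
            if ran and received[client] * total > shares[client] * elapsed:
                break
            received[client] += QUANTUM
            elapsed += QUANTUM
            ran = True
    return received

alloc = schedule({"A": 3, "B": 2, "C": 1})
# Service received converges toward the 3:2:1 share ratio.
```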

8 citations


Proceedings ArticleDOI
Fei Li1, Jason Nieh1
02 Apr 2002
TL;DR: Two linear interpolation algorithms with linear encoding and decoding computational complexity are developed for encoding SBC screen updates and their performance is compared with other approaches on discrete-toned and smoothed-toned images.
Abstract: Summary form only given. The growing total cost of ownership has resulted in a shift away from the distributed model of desktop computing toward a more centralized server-based computing (SBC) model. In SBC, all application logic is executed on the server while clients simply process the resulting screen updates sent from the server. To provide good performance, SBC systems employ various techniques to encode the screen updates to minimize the bandwidth and processing requirements of sending the screen updates. However, existing SBC encoding techniques are not able to effectively support multimedia applications. To address this problem, we have developed a family of linear interpolation algorithms for encoding SBC screen updates. We first present an overview of an optimal linear interpolation (OLI) algorithm. Given a rectangular region of pixels to be encoded, OLI represents the region as a one-dimensional function, mapping from the cardinal number of each pixel to the color value of the pixel. To reduce encoding complexity, we developed two linear interpolation algorithms with linear encoding and decoding computational complexity. The algorithms are near optimal linear interpolation (NOLI) and 2-D lossless linear interpolation (2DLI). We have implemented our linear interpolation algorithms and compared their performance with other approaches on discrete-toned and smoothed-toned images.

4 citations