Author

Gideon Glass

Bio: Gideon Glass is an academic researcher from AT&T Labs. The author has contributed to research in the topics Cache and Reverse proxy. The author has an h-index of 2 and has co-authored 2 publications receiving 192 citations.

Papers
Journal ArticleDOI
Ramón Cáceres, Fred Douglis, Anja Feldmann, Gideon Glass, Michael Rabinovich
01 Dec 1998
TL;DR: In this paper, trace-driven simulation of the modem pool of a large ISP suggests that "cookies" dramatically affect the cachability of resources; that wasted bandwidth due to aborted connections can more than offset the savings from cached documents; and that using a proxy to avoid repeatedly opening new TCP connections can reduce latency more than simply caching data.
Abstract: Much work in the analysis of proxy caching has focused on high-level metrics such as hit rates, and has approximated actual reference patterns by ignoring exceptional cases such as connection aborts. Several of these low-level details have a strong impact on performance, particularly in heterogeneous bandwidth environments such as modem pools connected to faster networks. Trace-driven simulation of the modem pool of a large ISP suggests that "cookies" dramatically affect the cachability of resources; wasted bandwidth due to aborted connections can more than offset the savings from cached documents; and using a proxy to keep from repeatedly opening new TCP connections can reduce latency more than simply caching data.
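The bandwidth trade-off the paper identifies can be made concrete with a minimal sketch. This is not the authors' simulator: the trace format, the LRU policy, and the choice to count an aborted fetch's full size as waste are all simplifying assumptions.

```python
# Toy trace-driven accounting for a proxy cache: hits save upstream bytes,
# but a proxy that completes fetches its client has aborted wastes them.
from collections import OrderedDict

def simulate(trace, capacity):
    cache = OrderedDict()            # LRU cache: url -> object size
    used = saved = wasted = 0
    for url, size, aborted in trace:
        if url in cache:
            cache.move_to_end(url)   # hit: served locally, upstream bytes saved
            saved += size
            continue
        if aborted:
            wasted += size           # proxy finishes a fetch the client dropped
        while cache and used + size > capacity:
            _, old_size = cache.popitem(last=False)   # evict LRU entry
            used -= old_size
        if size <= capacity:
            cache[url] = size
            used += size
    return saved, wasted

# Tiny illustrative trace: (url, object size in bytes, client aborted?)
trace = [("/a", 100, False), ("/b", 500, True), ("/a", 100, False)]
print(simulate(trace, capacity=1000))   # -> (100, 500)
```

On a real trace, comparing `saved` against `wasted` shows whether aborted transfers offset the caching benefit, which is the paper's central observation.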

185 citations


Cited by
Journal ArticleDOI
Jia Wang
05 Oct 1999
TL;DR: This paper first describes the elements of a Web caching system and its desirable properties, then surveys the state-of-the-art techniques used in Web caching systems, and finally discusses the research frontier in Web caching.
Abstract: The World Wide Web can be considered as a large distributed information system that provides access to shared data objects. As one of the most popular applications currently running on the Internet, the World Wide Web has grown exponentially in size, which results in network congestion and server overloading. Web caching has been recognized as one of the effective schemes to alleviate the service bottleneck and reduce network traffic, thereby minimizing user access latency. In this paper, we first describe the elements of a Web caching system and its desirable properties. Then, we survey the state-of-the-art techniques which have been used in Web caching systems. Finally, we discuss the research frontier in Web caching.

759 citations

Proceedings ArticleDOI
12 Dec 1999
TL;DR: It is demonstrated that cooperative caching has performance benefits only within limited population bounds; the model is extended beyond the traced populations to project cooperative-caching behavior in regions with millions of clients.
Abstract: While algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative-caching performance in the large-scale World Wide Web environment. This paper uses both trace-based analysis and analytic modelling to show the potential advantages and drawbacks of inter-proxy cooperation. With our traces, we evaluate quantitatively the performance-improvement potential of cooperation between 200 small-organization proxies within a university environment, and between two large-organization proxies handling 23,000 and 60,000 clients, respectively. With our model, we extend beyond these populations to project cooperative caching behavior in regions with millions of clients. Overall, we demonstrate that cooperative caching has performance benefits only within limited population bounds. We also use our model to examine the implications of future trends in Web-access behavior and traffic.
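The saturation effect behind the limited-population-bounds conclusion can be illustrated with a toy model. The sketch below is an assumption-laden stand-in for the paper's analytic model, not a reconstruction of it: it assumes Zipf-distributed object popularity, an infinite shared cache, and a fixed request budget per client, with all parameter values chosen purely for illustration.

```python
# Toy model: steady-state hit rate of an idealized cooperative cache
# as the client population grows (all parameters are illustrative).
def hit_rate(clients, objects=100_000, alpha=0.8, reqs_per_client=100):
    # Zipf popularities: p_i proportional to 1 / i^alpha (assumed)
    weights = [1.0 / (i ** alpha) for i in range(1, objects + 1)]
    total = sum(weights)
    n = clients * reqs_per_client          # total requests from the population
    hits = 0.0
    for w in weights:
        p = w / total
        # With an infinite shared cache, only the first request to each
        # object misses: expected hits = n*p - P(object requested at all).
        hits += n * p - (1 - (1 - p) ** n)
    return hits / n

for c in (10, 100, 1_000, 10_000):
    print(c, round(hit_rate(c), 3))        # hit rate climbs, then flattens
```

Because only the first request to each object misses, the curve climbs quickly with population and then flattens: past a point, additional clients mostly re-request objects the cooperative cache already holds, consistent with the paper's finding.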

664 citations

Journal ArticleDOI
09 Dec 2002
TL;DR: This paper examines content delivery from the point of view of four content delivery systems: HTTP web traffic, the Akamai content delivery network, and Kazaa and Gnutella peer-to-peer file sharing traffic.
Abstract: In the span of only a few years, the Internet has experienced an astronomical increase in the use of specialized content delivery systems, such as content delivery networks and peer-to-peer file sharing systems. Therefore, an understanding of content delivery on the Internet now requires a detailed understanding of how these systems are used in practice. This paper examines content delivery from the point of view of four content delivery systems: HTTP web traffic, the Akamai content delivery network, and Kazaa and Gnutella peer-to-peer file sharing traffic. We collected a trace of all incoming and outgoing network traffic at the University of Washington, a large university with over 60,000 students, faculty, and staff. From this trace, we isolated and characterized traffic belonging to each of these four delivery classes. Our results (1) quantify the rapidly increasing importance of new content delivery systems, particularly peer-to-peer networks, (2) characterize the behavior of these systems from the perspectives of clients, objects, and servers, and (3) derive implications for caching in these systems.

644 citations

Patent
28 Oct 2003
TL;DR: In this paper, a client directs a request to a client-side transaction handler that forwards the request to a server-side transaction handler, which in turn provides the request, or a representation thereof, to a server for responding to the request.
Abstract: In a network having transaction acceleration, for an accelerated transaction, a client directs a request to a client-side transaction handler that forwards the request to a server-side transaction handler, which in turn provides the request, or a representation thereof, to a server for responding to the request. The server sends the response to the server-side transaction handler, which forwards the response to the client-side transaction handler, which in turn provides the response to the client. Transactions are accelerated by the transaction handlers by storing segments of data used in the transactions in persistent segment storage accessible to the server-side transaction handler and in persistent segment storage accessible to the client-side transaction handler. When data is to be sent between the transaction handlers, the sending transaction handler compares the segments of the data to be sent with segments stored in its persistent segment storage and replaces segments of data with references to entries in its persistent segment storage that match or closely match the segments of data to be replaced. The receiving transaction handler reconstructs the data sent by replacing segment references with corresponding segment data from its persistent segment storage, requesting missing segments from the sender as needed. The transaction accelerators could handle multiple clients and/or multiple servers, and the segments stored in the persistent segment stores can relate to different transactions, different clients, and/or different servers. Persistent segment stores can be prepopulated with segment data from other transaction accelerators.
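The core segment-replacement mechanism can be sketched in a few lines. The fixed segment size and SHA-256 references below are illustrative choices, not the patented scheme (which also covers near-matches, missing-segment requests, and store prepopulation): the sender substitutes references for segments the shared store already holds, and the receiver reconstructs the data from its own copy of the store.

```python
# Minimal sketch of segment-based transfer: segments already known to both
# sides travel as short references instead of raw bytes.
import hashlib

SEG = 64  # illustrative fixed segment size; real systems may segment by content

def encode(data: bytes, store: dict) -> list:
    out = []
    for i in range(0, len(data), SEG):
        seg = data[i:i + SEG]
        ref = hashlib.sha256(seg).digest()
        if ref in store:
            out.append(("ref", ref))      # send a short reference
        else:
            store[ref] = seg              # new segment: send data, remember it
            out.append(("data", seg))
    return out

def decode(msgs: list, store: dict) -> bytes:
    parts = []
    for kind, payload in msgs:
        if kind == "ref":
            parts.append(store[payload])  # look up segment locally
        else:
            store[hashlib.sha256(payload).digest()] = payload
            parts.append(payload)
    return b"".join(parts)

sender_store, receiver_store = {}, {}
msg = b"hello world " * 20
wire1 = encode(msg, sender_store)
assert decode(wire1, receiver_store) == msg
wire2 = encode(msg, sender_store)          # second transfer of the same data
assert all(kind == "ref" for kind, _ in wire2)
```

A repeated transfer of the same payload crosses the wire entirely as references, which is the source of the acceleration.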

460 citations

Proceedings ArticleDOI
01 Oct 1998
TL;DR: This paper proposes a new protocol called "Summary Cache"; each proxy keeps a summary of the URLs of cached documents of each participating proxy and checks these summaries for potential hits before sending any queries, which enables cache sharing among a large number of proxies.
Abstract: The sharing of caches among Web proxies is an important technique to reduce Web traffic and alleviate network bottlenecks. Nevertheless, it is not widely deployed due to the overhead of existing protocols. In this paper we propose a new protocol called "Summary Cache": each proxy keeps a summary of the URLs of cached documents of each participating proxy and checks these summaries for potential hits before sending any queries. Two factors contribute to the low overhead: the summaries are updated only periodically, and the summary representations are economical, as low as 8 bits per entry. Using trace-driven simulations and a prototype implementation, we show that, compared to the existing Internet Cache Protocol (ICP), Summary Cache reduces the number of inter-cache messages by a factor of 25 to 60, reduces bandwidth consumption by over 50%, and eliminates between 30% and 95% of the CPU overhead, while at the same time maintaining almost the same hit ratio as ICP. Hence Summary Cache enables cache sharing among a large number of proxies.
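The compact summary the paper describes is a Bloom filter; a minimal sketch follows, with the bit-vector size and hash count chosen for illustration rather than taken from the paper's tuned parameters. Each proxy publishes a summary of its cached URLs, and a peer tests the summary before sending an inter-cache query, tolerating occasional false positives in exchange for the few-bits-per-entry footprint.

```python
# Minimal Bloom-filter summary: peers probe the bit vector locally instead
# of sending an ICP query for every request.
import hashlib

class Summary:
    def __init__(self, bits=8 * 1024, hashes=4):   # illustrative sizes
        self.bits, self.hashes = bits, hashes
        self.vector = bytearray(bits // 8)

    def _positions(self, url: str):
        for k in range(self.hashes):
            h = hashlib.sha256(f"{k}:{url}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.bits

    def add(self, url: str):
        for pos in self._positions(url):
            self.vector[pos // 8] |= 1 << (pos % 8)

    def may_contain(self, url: str) -> bool:
        return all(self.vector[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(url))

peer = Summary()
peer.add("http://example.com/a")
if peer.may_contain("http://example.com/a"):      # likely hit: query the peer
    print("query peer for /a")
print(peer.may_contain("http://example.com/z"))   # almost surely False
```

A false positive costs one wasted query, and a stale summary can hide a real hit, which is why the protocol updates summaries only periodically and still maintains a hit ratio close to ICP's.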

446 citations