Author

K. G. Coffman

Bio: K. G. Coffman is an academic researcher from AT&T Labs. The author has contributed to research in topics including Moore's law and Internet traffic. The author has an h-index of 2, having co-authored 2 publications that have received 165 citations.

Papers
Book ChapterDOI
K. G. Coffman, Andrew Odlyzko
01 Jan 2002
TL;DR: Internet traffic is approximately doubling each year, a growth pattern similar to "Moore's Law" in semiconductors but slower than the frequently heard claims of a doubling of traffic every three or four months.
Abstract: Internet traffic is approximately doubling each year. This growth rate applies not only to the entire Internet, but to a large range of individual institutions. For a few places we have records going back several years that exhibit this regular rate of growth. Even when there are no obvious bottlenecks, traffic tends not to grow much faster. This reflects complicated interactions of technology, economics, and sociology, similar to, but more delicate than, those that have produced "Moore's Law" in semiconductors. A doubling of traffic each year represents extremely fast growth, much faster than the increases in other communication services. If it continues, data traffic will surpass voice traffic around the year 2002. However, this rate of growth is slower than the frequently heard claims of a doubling of traffic every three or four months. Such spectacular growth rates apparently did prevail over a two-year period 1995-6. Ever since, though, growth appears to have reverted to the Internet's historical pattern of a single doubling each year. Progress in transmission technology appears sufficient to double network capacity each year for about the next decade. However, traffic growth faster than a tripling each year could probably not be sustained for more than a few years. Since computing and storage capacities will also be growing, as predicted by the versions of "Moore's Law" appropriate for those technologies, we can expect demand for data transmission to continue increasing. A doubling in Internet traffic each year appears a likely outcome. If Internet traffic continues to double each year, we will have yet another form of "Moore's Law." Such a growth rate would have several important implications. In the intermediate run, there would be neither a clear "bandwidth glut" nor a "bandwidth scarcity," but a more balanced situation, with supply and demand growing at comparable rates. Also, computer and network architectures would be strongly affected, since most data would stay local. Programs such as Napster would play an increasingly important role. Transmission would likely continue to be dominated by file transfers, not by real time streaming media.
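
The gap between yearly doubling and the "doubling every three or four months" claim is easy to make concrete. The short Python sketch below (illustrative arithmetic only, not from the paper) converts each doubling period into the growth factor it implies over a year:

```python
# Illustrative arithmetic, not from the paper: annual growth factors implied
# by different doubling periods.

def annual_factor(doubling_period_months: float) -> float:
    """Growth factor over 12 months for a fixed doubling period."""
    return 2 ** (12.0 / doubling_period_months)

for months in (12, 4, 3):
    print(f"doubling every {months:>2} months -> x{annual_factor(months):.0f} per year")
    # 12 months -> x2, 4 months -> x8, 3 months -> x16
```

A three-month doubling period would mean roughly sixteenfold growth per year, which is why the abstract treats such rates as spectacular and unsustainable rather than typical.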

141 citations

Journal ArticleDOI
TL;DR: Internet traffic is approximately doubling each year, a pattern similar to "Moore's Law" in semiconductors; a doubling of traffic each year represents extremely fast growth, much faster than the increases in other communication services.
Abstract: Internet traffic is approximately doubling each year. This growth rate applies not only to the entire Internet, but to a large range of individual institutions. For a few places we have records going back several years that exhibit this regular rate of growth. Even when there are no obvious bottlenecks, traffic tends not to grow much faster. This reflects complicated interactions of technology, economics, and sociology, similar to those that have produced "Moore's Law" in semiconductors. A doubling of traffic each year represents extremely fast growth, much faster than the increases in other communication services. If it continues, data traffic will surpass voice traffic around the year 2002. However, this rate of growth is slower than the frequently heard claims of a doubling of traffic every three or four months. Such spectacular growth rates apparently did prevail over a two-year period 1995-6. Ever since, though, growth appears to have reverted to the Internet's historical pattern of a single doubling each year. Progress in transmission technology appears sufficient to double network capacity each year for about the next decade. However, traffic growth faster than a tripling each year could probably not be sustained for more than a few years. Since computing and storage capacities will also be growing, as predicted by the versions of "Moore's Law" appropriate for those technologies, we can expect demand for data transmission to continue increasing. A doubling in Internet traffic each year appears a likely outcome. If Internet traffic continues to double each year, we will have yet another form of "Moore's Law." Such a growth rate would have several important implications. In the intermediate run, there would be neither a clear "bandwidth glut" nor a "bandwidth scarcity," but a more balanced situation, with supply and demand growing at comparable rates. Also, computer and network architectures would be strongly affected, since most data would stay local. Programs such as Napster would play an increasingly important role. Transmission would likely continue to be dominated by file transfers, not by real time streaming media.

27 citations


Cited by
01 Jan 2002
TL;DR: In this article, the authors analyze the topology graph and evaluate the generated network traffic of Gnutella, finding that the current configuration has the benefits and drawbacks of a power-law structure, and that the Gnutella virtual network topology does not match well with the underlying Internet topology, leading to ineffective use of the physical network infrastructure.
Abstract: Despite recent excitement generated by the peer-to-peer (P2P) paradigm and the surprisingly rapid deployment of some P2P applications, there are few quantitative evaluations of P2P systems behavior. The open architecture, achieved scale, and self-organizing structure of the Gnutella network make it an interesting P2P architecture to study. Like most other P2P applications, Gnutella builds, at the application level, a virtual network with its own routing mechanisms. The topology of this virtual network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We have built a “crawler” to extract the topology of Gnutella’s application level network. In this paper we analyze the topology graph and evaluate generated network traffic. Our two major findings are that: (1) although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure, and (2) the Gnutella virtual network topology does not match well the underlying Internet topology, hence leading to ineffective use of the physical networking infrastructure. These findings guide us to propose changes to the Gnutella protocol and implementations that may bring significant performance and scalability improvements. We believe that our findings as well as our measurement and analysis techniques have broad applicability to P2P systems and provide unique insights into P2P system design tradeoffs.
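
As a companion to this abstract, the hedged sketch below illustrates what a "power-law structure" means for a topology graph: the fraction of nodes with degree k falls off roughly as k^(-alpha), so a few hubs carry most connections while most peers have few. A Barabási-Albert random graph from networkx stands in for a crawled Gnutella snapshot, which is purely an assumption for illustration:

```python
# Illustrative sketch: the paper crawls the real Gnutella overlay; here a
# Barabasi-Albert graph stands in for a crawled snapshot (an assumption) so the
# heavy-tailed degree distribution of a power-law-like network is visible.
from collections import Counter

import networkx as nx

G = nx.barabasi_albert_graph(n=5000, m=2, seed=42)  # stand-in topology graph
degree_counts = Counter(d for _, d in G.degree())   # degree -> number of nodes

# In a power-law network the fraction of nodes with degree k falls off roughly
# as k**(-alpha): many low-degree peers, a few highly connected hubs.
for k in sorted(degree_counts)[:8]:
    share = degree_counts[k] / G.number_of_nodes()
    print(f"degree {k:>3}: {share:.2%} of nodes")
```

The same degree count over a real crawl would show whether the distribution is a pure power law or, as the authors find for Gnutella, only approximately one.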

844 citations

Proceedings ArticleDOI
27 Aug 2001
TL;DR: A 'crawler' is built to extract the topology of Gnutella's application-level network; analysis of the topology graph shows that, although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure.
Abstract: Despite recent excitement generated by the P2P paradigm and despite surprisingly fast deployment of some P2P applications, there are few quantitative evaluations of P2P system behavior. Due to its open architecture and achieved scale, Gnutella is an interesting P2P architecture case study. Gnutella, like most other P2P applications, builds at the application level a virtual network with its own routing mechanisms. The topology of this overlay network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We built a 'crawler' to extract the topology of Gnutella's application level network, we analyze the topology graph and evaluate generated network traffic. We find that although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure. These findings lead us to propose changes to the Gnutella protocol and implementations that bring significant performance and scalability improvements.

824 citations

Journal ArticleDOI
Sally Floyd, Vern Paxson
TL;DR: Two key strategies for developing meaningful simulations in the face of the global Internet's great heterogeneity are discussed: searching for invariants and judiciously exploring the simulation parameter space.
Abstract: Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, the "mix" of different applications used at a site, and the levels of congestion seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants and judiciously exploring the simulation parameter space. We finish with a brief look at a collaborative effort within the research community to develop a common network simulator.
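
To make "judiciously exploring the simulation parameter space" concrete, here is a minimal, hypothetical sketch of a parameter sweep; the run_simulation function and the chosen bandwidth, delay, and loss values are placeholders, not anything taken from the paper:

```python
# Hypothetical sketch of a parameter-space sweep; run_simulation() and the
# swept values are placeholders, not anything taken from the paper.
from itertools import product

bandwidths_mbps = [1.5, 10, 100]   # access-link capacities to sweep
rtts_ms = [20, 80, 200]            # round-trip times
loss_rates = [0.0, 0.01, 0.05]     # random packet-loss probabilities

def run_simulation(bw_mbps: float, rtt_ms: float, loss: float) -> float:
    """Placeholder for a real simulator run; returns a toy throughput."""
    return bw_mbps * (1.0 - loss)

results = {
    (bw, rtt, loss): run_simulation(bw, rtt, loss)
    for bw, rtt, loss in product(bandwidths_mbps, rtts_ms, loss_rates)
}

# Behavior that stays qualitatively the same across the swept region is a
# candidate invariant; a sharp change marks a region worth sampling more finely.
print(f"{len(results)} parameter combinations simulated")
```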

796 citations

Journal ArticleDOI
TL;DR: This work studied the topology and protocols of the public Gnutella network to evaluate the costs and benefits of the peer-to-peer (P2P) approach and to investigate possible improvements that would allow better scaling and increased reliability in Gnutella and similar networks.
Abstract: We studied the topology and protocols of the public Gnutella network. Its substantial user base and open architecture make it a good large-scale, if uncontrolled, testbed. We captured the network's topology, generated traffic, and dynamic behavior to determine its connectivity structure and how well (if at all) Gnutella's overlay network topology maps to the physical Internet infrastructure. Our analysis of the network allowed us to evaluate costs and benefits of the peer-to-peer (P2P) approach and to investigate possible improvements that would allow better scaling and increased reliability in Gnutella and similar networks. A mismatch between Gnutella's overlay network topology and the Internet infrastructure has critical performance implications.

790 citations

Journal ArticleDOI
TL;DR: This article describes Ethernet passive optical networks (EPONs), an emerging local subscriber access architecture that combines low-cost point-to-multipoint fiber infrastructure with Ethernet and stands as a potential optimized architecture for fiber to the building and fiber to the home.
Abstract: This article describes Ethernet passive optical networks, an emerging local subscriber access architecture that combines low-cost point-to-multipoint fiber infrastructure with Ethernet. EPONs are designed to carry Ethernet frames at standard Ethernet rates. An EPON uses a single trunk fiber that extends from a central office to a passive optical splitter, which then fans out to multiple optical drop fibers connected to subscriber nodes. Other than the end terminating equipment, no component in the network requires electrical power, hence the term passive. Local carriers have long been interested in passive optical networks for the benefits they offer: minimal fiber infrastructure and no powering requirement in the outside plant. With Ethernet now emerging as the protocol of choice for carrying IP traffic in metro and access networks, EPON has emerged as a potential optimized architecture for fiber to the building and fiber to the home.
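
The point-to-multipoint sharing described above comes with simple bandwidth arithmetic: every subscriber behind the passive splitter shares the single trunk. The sketch below assumes a 1 Gb/s trunk and typical splitter fan-outs, which are illustrative values rather than figures from the article:

```python
# Back-of-envelope arithmetic with assumed values (not figures from the
# article): one trunk at a standard Ethernet rate is shared by every
# subscriber behind the passive splitter.
trunk_rate_mbps = 1000               # assumed 1 Gb/s Ethernet trunk
for split_ratio in (16, 32, 64):     # assumed typical splitter fan-outs
    per_subscriber = trunk_rate_mbps / split_ratio
    print(f"1:{split_ratio} split -> ~{per_subscriber:.1f} Mb/s average per subscriber")
```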

716 citations