Book Chapter

Internet growth: is there a Moore's law for data traffic?

K. G. Coffman1, Andrew Odlyzko1
01 Jan 2002, pp. 47-93
TL;DR: The authors find that Internet traffic is approximately doubling each year, a growth pattern similar to "Moore's Law" in semiconductors but slower than the frequently heard claims of traffic doubling every three or four months.
Abstract: Internet traffic is approximately doubling each year. This growth rate applies not only to the entire Internet, but to a large range of individual institutions. For a few places we have records going back several years that exhibit this regular rate of growth. Even when there are no obvious bottlenecks, traffic tends not to grow much faster. This reflects complicated interactions of technology, economics, and sociology, similar to, but more delicate than, those that have produced "Moore's Law" in semiconductors. A doubling of traffic each year represents extremely fast growth, much faster than the increases in other communication services. If it continues, data traffic will surpass voice traffic around the year 2002. However, this rate of growth is slower than the frequently heard claims of a doubling of traffic every three or four months. Such spectacular growth rates apparently did prevail over a two-year period, 1995-6. Ever since, though, growth appears to have reverted to the Internet's historical pattern of a single doubling each year. Progress in transmission technology appears sufficient to double network capacity each year for about the next decade. However, traffic growth faster than a tripling each year could probably not be sustained for more than a few years. Since computing and storage capacities will also be growing, as predicted by the versions of "Moore's Law" appropriate for those technologies, we can expect demand for data transmission to continue increasing. A doubling in Internet traffic each year appears a likely outcome. If Internet traffic continues to double each year, we will have yet another form of "Moore's Law." Such a growth rate would have several important implications. In the intermediate run, there would be neither a clear "bandwidth glut" nor a "bandwidth scarcity," but a more balanced situation, with supply and demand growing at comparable rates. Also, computer and network architectures would be strongly affected, since most data would stay local. Programs such as Napster would play an increasingly important role. Transmission would likely continue to be dominated by file transfers, not by real time streaming media.
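As a rough illustration of why the two growth claims differ so sharply, the sketch below (illustrative arithmetic only, not a calculation from the chapter) compounds each doubling period over a year: a three-month doubling implies roughly sixteen-fold annual growth, versus two-fold for an annual doubling.

# Illustrative arithmetic: annual growth factor implied by a given doubling period.
def annual_growth_factor(doubling_period_months: float) -> float:
    """Traffic multiplier over 12 months if volume doubles every doubling_period_months."""
    return 2 ** (12.0 / doubling_period_months)

for period in (12, 6, 4, 3):
    print(f"doubling every {period:>2} months -> x{annual_growth_factor(period):.0f} per year")

# doubling every 12 months -> x2 per year
# doubling every  3 months -> x16 per year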
Citations
01 Jan 2002
TL;DR: In this article, the authors analyze the topology graph and generated network traffic of Gnutella and find that the current configuration has the benefits and drawbacks of a power-law structure, and that the Gnutella virtual network topology does not match well with the underlying Internet topology, leading to ineffective use of the physical network infrastructure.
Abstract: Despite recent excitement generated by the peer-to-peer (P2P) paradigm and the surprisingly rapid deployment of some P2P applications, there are few quantitative evaluations of P2P systems behavior. The open architecture, achieved scale, and self-organizing structure of the Gnutella network make it an interesting P2P architecture to study. Like most other P2P applications, Gnutella builds, at the application level, a virtual network with its own routing mechanisms. The topology of this virtual network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We have built a “crawler” to extract the topology of Gnutella’s application level network. In this paper we analyze the topology graph and evaluate generated network traffic. Our two major findings are that: (1) although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure, and (2) the Gnutella virtual network topology does not match well the underlying Internet topology, hence leading to ineffective use of the physical networking infrastructure. These findings guide us to propose changes to the Gnutella protocol and implementations that may bring significant performance and scalability improvements. We believe that our findings as well as our measurement and analysis techniques have broad applicability to P2P systems and provide unique insights into P2P system design tradeoffs.
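The power-law claim concerns the degree distribution of the crawled overlay graph. Below is a minimal sketch of that kind of check, assuming a hypothetical edge list from a crawl; this is not the authors' measurement code.

# Hypothetical sketch: estimate a power-law exponent from the log-log degree histogram.
import math
from collections import Counter

def degree_exponent(edges):
    """edges: iterable of (u, v) peer pairs from a topology crawl."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    freq = sorted(Counter(deg.values()).items())      # (degree, number of nodes) pairs
    xs = [math.log(d) for d, _ in freq]
    ys = [math.log(c) for _, c in freq]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope   # straight-line fit in log-log space; the negated slope is the exponent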

844 citations

Proceedings Article
27 Aug 2001
TL;DR: A 'crawler' is built to extract the topology of Gnutella's application-level network; analysis of the resulting topology graph shows that the current configuration has the benefits and drawbacks of a power-law structure.
Abstract: Despite recent excitement generated by the P2P paradigm and despite surprisingly fast deployment of some P2P applications, there are few quantitative evaluations of P2P system behavior. Due to its open architecture and achieved scale, Gnutella is an interesting P2P architecture case study. Gnutella, like most other P2P applications, builds at the application level a virtual network with its own routing mechanisms. The topology of this overlay network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We built a 'crawler' to extract the topology of Gnutella's application level network, we analyze the topology graph and evaluate generated network traffic. We find that although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure. These findings lead us to propose changes to the Gnutella protocol and implementations that bring significant performance and scalability improvements.

824 citations

Journal Article
Sally Floyd1, Vern Paxson1
TL;DR: Two key strategies for developing meaningful simulations in the face of the global Internet's great heterogeneity are discussed: searching for invariants and judiciously exploring the simulation parameter space.
Abstract: Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, the "mix" of different applications used at a site, and the levels of congestion seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants and judiciously exploring the simulation parameter space. We finish with a brief look at a collaborative effort within the research community to develop a common network simulator.
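One way to read "judiciously exploring the simulation parameter space" is to sweep a simulation over a grid of parameters and see which conclusions stay invariant. The toy model below is purely a stand-in; the simulate() function and its parameters are assumptions for illustration, not the network simulator the article refers to.

# Hypothetical parameter-space sweep over a toy simulation model.
import itertools, random

def simulate(loss_rate, rtt_ms, seed=0):
    """Toy stand-in: mean 'throughput' of a loss- and RTT-sensitive flow."""
    rng = random.Random(seed)
    samples = [1.0 / (rtt_ms * (1 + 10 * loss_rate)) * rng.uniform(0.9, 1.1)
               for _ in range(100)]
    return sum(samples) / len(samples)

# Sweep the grid and look for behavior that stays qualitatively invariant.
for loss, rtt in itertools.product([0.001, 0.01, 0.05], [20, 100, 400]):
    print(f"loss={loss:<5} rtt={rtt:>3} ms -> mean throughput {simulate(loss, rtt):.4f}")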

796 citations

Journal Article
TL;DR: This work studied the topology and protocols of the public Gnutella network to evaluate costs and benefits of the peer-to-peer (P2P) approach and to investigate possible improvements that would allow better scaling and increased reliability in Gnutella and similar networks.
Abstract: We studied the topology and protocols of the public Gnutella network. Its substantial user base and open architecture make it a good large-scale, if uncontrolled, testbed. We captured the network's topology, generated traffic, and dynamic behavior to determine its connectivity structure and how well (if at all) Gnutella's overlay network topology maps to the physical Internet infrastructure. Our analysis of the network allowed us to evaluate costs and benefits of the peer-to-peer (P2P) approach and to investigate possible improvements that would allow better scaling and increased reliability in Gnutella and similar networks. A mismatch between Gnutella's overlay network topology and the Internet infrastructure has critical performance implications.
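One crude way to quantify the overlay/underlay mismatch described here is to measure how often overlay neighbors are also "network-near". The sketch below is a hypothetical illustration; the edge list and the peer-to-AS mapping are assumed inputs, not data or code from the paper.

# Hypothetical sketch: fraction of overlay edges whose endpoints share an autonomous system.
def overlay_locality(edges, node_to_as):
    """edges: overlay (u, v) peer pairs; node_to_as: peer -> AS number (assumed known).
    A low fraction means most overlay edges cross long physical paths."""
    if not edges:
        return 0.0
    same = sum(1 for u, v in edges
               if node_to_as.get(u) is not None and node_to_as.get(u) == node_to_as.get(v))
    return same / len(edges)

# Tiny made-up example: only one of three overlay edges stays inside a single AS.
print(overlay_locality([("a", "b"), ("a", "c"), ("b", "d")],
                       {"a": 7018, "b": 7018, "c": 3320, "d": 1239}))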

790 citations

Journal Article
TL;DR: Ethernet passive optical networks (EPONs), an emerging local subscriber access architecture that combines low-cost point-to-multipoint fiber infrastructure with Ethernet, are described; EPON has emerged as a potential optimized architecture for fiber to the building and fiber to the home.
Abstract: This article describes Ethernet passive optical networks, an emerging local subscriber access architecture that combines low-cost point-to-multipoint fiber infrastructure with Ethernet. EPONs are designed to carry Ethernet frames at standard Ethernet rates. An EPON uses a single trunk fiber that extends from a central office to a passive optical splitter, which then fans out to multiple optical drop fibers connected to subscriber nodes. Other than the end terminating equipment, no component in the network requires electrical power, hence the term passive. Local carriers have long been interested in passive optical networks for the benefits they offer: minimal fiber infrastructure and no powering requirement in the outside plant. With Ethernet now emerging as the protocol of choice for carrying IP traffic in metro and access networks, EPON has emerged as a potential optimized architecture for fiber to the building and fiber to the home.
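Because the trunk fiber and splitter are shared by all subscriber nodes, per-subscriber bandwidth in an EPON scales with the split ratio. The arithmetic below is purely illustrative; the trunk rate and split ratios are assumptions, not figures from the article.

# Illustrative arithmetic: shared trunk capacity divided across the passive split.
TRUNK_RATE_MBPS = 1000   # assumed: a standard gigabit Ethernet trunk rate

for split in (16, 32, 64):
    per_subscriber = TRUNK_RATE_MBPS / split
    print(f"1:{split} splitter -> about {per_subscriber:.1f} Mb/s per subscriber when all are active")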

716 citations

References
Book
01 Dec 1989
TL;DR: This best-selling title, considered for over a decade to be essential reading for every serious student and practitioner of computer design, has been updated throughout to address the most important trends facing computer designers today.
Abstract: This best-selling title, considered for over a decade to be essential reading for every serious student and practitioner of computer design, has been updated throughout to address the most important trends facing computer designers today. In this edition, the authors bring their trademark method of quantitative analysis not only to high-performance desktop machine design, but also to the design of embedded and server systems. They have illustrated their principles with designs from all three of these domains, including examples from consumer electronics, multimedia and Web technologies, and high-performance computing.

11,671 citations

Journal Article
TL;DR: Moore's Law has become the central driving force of one of the world's most dynamic industries; because of the accuracy with which it has predicted past growth, it is viewed as a reliable method of calculating future trends as well, setting the pace of innovation and defining the rules and the very nature of competition.
Abstract: A simple observation, made over 30 years ago, on the growth in the number of devices per silicon die has become the central driving force of one of the most dynamic of the world's industries. Because of the accuracy with which Moore's Law has predicted past growth in IC complexity, it is viewed as a reliable method of calculating future trends as well, setting the pace of innovation, and defining the rules and the very nature of competition. And since the semiconductor portion of electronic consumer products keeps growing by leaps and bounds, the Law has aroused in users and consumers an expectation of a continuous stream of faster, better, and cheaper high-technology products. Even the policy implications of Moore's Law are significant: it is used as the baseline assumption in the industry's strategic road map for the next decade and a half.
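The "simple observation" is usually written as an exponential with a fixed doubling period. The snippet below is an illustrative rendering of that form; the doubling period and starting device count are assumptions, not numbers taken from the article.

# Illustrative form of Moore's Law: N(t) = N0 * 2**(t / T), T = doubling period.
def devices_per_die(n0, years, doubling_period_years):
    """Device count after `years`, starting from n0 and doubling every doubling_period_years."""
    return n0 * 2 ** (years / doubling_period_years)

# Assumed example: ~2,300 devices on an early-1970s die, 30 years later, 2-year doubling period.
print(f"{devices_per_die(2300, 30, 2):,.0f}")   # on the order of 75 million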

1,649 citations

Book
01 Jan 1965

271 citations

Proceedings Article
01 Feb 2000
TL;DR: The rules of thumb for the design of data storage systems are reexamined, with a particular focus on performance and price/performance; the 5-minute rule for disk caching becomes a cache-everything rule for Web caching.
Abstract: This paper reexamines the rules of thumb for the design of data storage systems. Briefly, it looks at storage, processing, and networking costs, ratios, and trends, with a particular focus on performance and price/performance. Amdahl's ratio laws for system design need only slight revision after 35 years, the major change being the increased use of RAM. An analysis also indicates storage should be used to cache both database and Web data to save disk bandwidth, network bandwidth, and people's time. Surprisingly, the 5-minute rule for disk caching becomes a cache-everything rule for Web caching.
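The 5-minute rule mentioned here comes from a break-even argument: keep a page cached in RAM if it is re-referenced more often than the interval at which the RAM to hold it costs about the same as the disk accesses to re-read it. The sketch below uses the commonly cited form of that break-even formula with placeholder prices; the specific numbers are assumptions, not values from this paper.

# Hedged sketch of the break-even re-reference interval behind the "5-minute rule".
def break_even_seconds(pages_per_mb_ram, ios_per_sec_per_disk, price_per_disk, price_per_mb_ram):
    """Cache pages that are re-referenced more often than this many seconds."""
    return (pages_per_mb_ram / ios_per_sec_per_disk) * (price_per_disk / price_per_mb_ram)

# Assumed circa-2000 numbers: 8 KB pages (128 per MB), 80 random I/Os per second per disk,
# $1000 per disk drive, $2 per MB of RAM.
print(break_even_seconds(128, 80, 1000, 2))   # 800.0 seconds, i.e. minutes rather than hours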

232 citations

Proceedings Article
Tobias Oetiker1
06 Dec 1998
TL;DR: The history and operation of the current version of MRTG as well as the Round Robin Database Tool, a key component of the next major release of the Multi Router Traffic Grapher (MRTG), are described.
Abstract: This paper describes the history and operation of the current version of MRTG as well as the Round Robin Database Tool. The Round Robin Database Tool is a program which logs and visualizes numerical data in an efficient manner. The RRD Tool is a key component of the next major release of the Multi Router Traffic Grapher (MRTG). It is already fully implemented and working. Because of the massive performance gain possible with RRD Tool, some sites have already started to use RRD Tool in production.
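The "Round Robin" in the tool's name refers to a fixed-size, circular data store that overwrites the oldest samples instead of growing. The class below is a minimal sketch of that idea only; it is not RRDtool's actual file format or API.

# Minimal sketch of a round-robin (fixed-size, overwrite-oldest) log.
class RoundRobinLog:
    def __init__(self, slots):
        self.values = [None] * slots   # storage never grows beyond `slots` entries
        self.next = 0

    def update(self, value):
        self.values[self.next] = value
        self.next = (self.next + 1) % len(self.values)   # wrap around, overwriting the oldest

    def series(self):
        """Samples from oldest to newest, skipping unfilled slots."""
        ordered = self.values[self.next:] + self.values[:self.next]
        return [v for v in ordered if v is not None]

log = RoundRobinLog(slots=4)
for sample in (10, 12, 11, 13, 15):
    log.update(sample)
print(log.series())   # [12, 11, 13, 15] - the oldest sample (10) has been overwritten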

170 citations