Proceedings ArticleDOI

Cache memory design for network processors

TL;DR: Simulation results demonstrate that the incorporation of hardware caches into network processors, when combined with efficient caching algorithms, can significantly improve the overall packet forwarding performance due to a sufficiently high degree of temporal locality in the network packet streams.
Abstract
The exponential growth in Internet traffic has motivated a new breed of microprocessors called network processors, designed to address the resulting packet-processing performance problems. Development efforts for these network processors concentrate almost exclusively on streamlining their data paths to speed up network packet processing, which consists mainly of routing and data movement. Rather than blindly pushing the performance of packet-processing hardware, an alternative approach is to avoid repeated computation by applying the time-tested architectural idea of caching to network packet processing. Because the data streams presented to network processors and to general-purpose CPUs exhibit different characteristics, detailed cache design tradeoffs for the two also differ considerably. This research focuses on cache memory design specifically for network processors. Using a trace-driven simulation methodology, we evaluate a series of three progressively more aggressive routing-table cache designs. Our simulation results demonstrate that incorporating hardware caches into network processors, combined with efficient caching algorithms, can significantly improve overall packet forwarding performance, owing to a sufficiently high degree of temporal locality in the network packet streams. Moreover, different cache designs can result in up to a factor-of-5 difference in the average routing-table lookup time, and thus in the packet forwarding rate.
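The routing-table caches the abstract describes exploit temporal locality: a destination address forwarded once is likely to recur soon, so its next hop can be served from a small fast cache instead of a full table lookup. A minimal software sketch of this idea, assuming a plain LRU map from destination IP to next hop (the class and method names here are illustrative, not the paper's actual hardware designs):

```python
from collections import OrderedDict

class RouteCache:
    """Sketch of an LRU route cache: destination IP -> next hop.

    Hypothetical illustration of caching route lookups, not the
    paper's hardware cache organization.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # insertion/recency order = LRU order
        self.hits = 0
        self.misses = 0

    def lookup(self, dest_ip, full_table_lookup):
        if dest_ip in self.entries:
            self.entries.move_to_end(dest_ip)  # mark most recently used
            self.hits += 1
            return self.entries[dest_ip]
        self.misses += 1
        next_hop = full_table_lookup(dest_ip)  # slow full routing-table lookup
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
        self.entries[dest_ip] = next_hop
        return next_hop
```

With sufficient temporal locality in the packet stream, most lookups hit the cache and the average lookup cost approaches the cache access time, which is the effect the paper's simulations quantify.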


Citations
Proceedings ArticleDOI

Approximate caches for packet classification

TL;DR: This paper provides a model for optimizing Bloom filters for this purpose, as well as extensions to the data structure to support graceful aging, bounded misclassification rates, and multiple binary predicates.
Journal ArticleDOI

Analysis of a Least Recently Used Cache Management Policy for Web Browsers

TL;DR: The analysis suggests that finding a good caching policy that is conscious of document size and delay may be difficult, and the paper presents an approximate, easy-to-compute method for evaluating performance.
Proceedings ArticleDOI

Routing prefix caching in network processor design

TL;DR: This paper is the first to evaluate the effectiveness of caching routing prefixes, and it proposes an on-chip routing prefix cache design for the network processor that performs much better than an IP address cache, even after factoring in the extra complexity involved.
Patent

Methods and systems for fast binary network address lookups using parent node information stored in routing table entries

TL;DR: This patent describes a binary search for variable-length network address prefix lookups in which routing table entries store parent-node path information from a binary tree; bits of that path information determine the longest prefix matching the address being searched.
Proceedings ArticleDOI

Overcoming the memory wall in packet processing: hammers or ladders?

TL;DR: This paper addresses the fundamental question: what minimal set of hardware mechanisms must a network processor support to achieve the twin goals of simplified programmability and high packet throughput?
References
Proceedings ArticleDOI

Scalable high speed IP routing lookups

TL;DR: This paper describes a new algorithm for best matching prefix using binary search on hash tables organized by prefix lengths that scales very well as address and routing table sizes increase and introduces Mutating Binary Search and other optimizations that considerably reduce the average number of hashes to less than 2.
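The scheme summarized above stores prefixes in hash tables organized by prefix length and binary-searches over the lengths, planting marker entries so that a hit or miss at one length steers the search toward longer or shorter prefixes. A simplified sketch under those assumptions (prefixes as bit strings; each entry stores the best match known for its string, a simplification of the paper's precomputed markers, and `bmp` is a helper name invented here):

```python
def build_tables(prefixes):
    """Build per-length hash tables with markers for binary search.

    prefixes: dict mapping bit-string prefix (e.g. "1011") -> next hop.
    Each entry stores the next hop of the longest real prefix matching
    that string, so a hit on a marker never loses the best match so far.
    """
    def bmp(s):
        # best matching (longest real) prefix of s; linear scan for the sketch
        best = None
        for p, nh in prefixes.items():
            if s.startswith(p) and (best is None or len(p) > len(best[0])):
                best = (p, nh)
        return best[1] if best else None

    lengths = sorted({len(p) for p in prefixes})
    tables = {l: {} for l in lengths}
    for p in prefixes:
        tables[len(p)][p] = bmp(p)          # real entry
        lo, hi = 0, len(lengths) - 1        # plant markers on the search path
        while lo <= hi:
            mid = (lo + hi) // 2
            l = lengths[mid]
            if l < len(p):
                tables[l].setdefault(p[:l], bmp(p[:l]))  # marker
                lo = mid + 1
            elif l > len(p):
                hi = mid - 1
            else:
                break
    return lengths, tables

def lookup(addr, lengths, tables):
    """Binary search over prefix lengths for the longest-matching prefix."""
    best, lo, hi = None, 0, len(lengths) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        l = lengths[mid]
        entry = tables[l].get(addr[:l], "MISS")
        if entry != "MISS":
            if entry is not None:
                best = entry                # longest match seen so far
            lo = mid + 1                    # a hit steers toward longer prefixes
        else:
            hi = mid - 1                    # a miss steers toward shorter ones
    return best
```

Because each probe halves the set of candidate lengths, the number of hash lookups grows only logarithmically with the number of distinct prefix lengths, which is the source of the scalability the paper reports.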
Proceedings ArticleDOI

Small forwarding tables for fast routing lookups

TL;DR: A forwarding table data structure designed for quick routing lookups, small enough to fit in the cache of a conventional general purpose processor and feasible to do a full routing lookup for each IP packet at gigabit speeds without special hardware.
Journal ArticleDOI

A 50-Gb/s IP router

TL;DR: A router, nearly completed, which is more than fast enough to keep up with the latest transmission technologies and can forward tens of millions of packets per second.
Journal ArticleDOI

Routing on longest-matching prefixes

TL;DR: These tries extend the concepts of compact digital (Patricia) tries to support the storage of prefixes and to guarantee retrieval times at most linear in the length of the input key irrespective of the trie size, even when searching for longest-matching prefixes.
Proceedings ArticleDOI

High-performance IP routing table lookup using CPU caching

TL;DR: The overall performance of the proposed algorithm can reach 87.87 million lookups per second, which is one to two orders of magnitude faster than previously reported results on software-based routing table lookup implementations.