Proceedings ArticleDOI

Router Buffer Caching for Managing Shared Cache Blocks in Tiled Multi-Core Processors

TLDR
Wang et al. propose a congestion management technique for the LLC that equips the NoC router with small storage to keep copies of heavily shared cache blocks, together with a prediction classifier in the LLC controller that identifies such blocks.
Abstract
Multiple cores in a tiled multi-core processor are connected using a network-on-chip (NoC). All these cores share the last-level cache (LLC). For large LLCs, a non-uniform cache architecture design is generally adopted, in which the LLC is split into multiple slices. When several cores simultaneously access highly shared cache blocks in an LLC slice, congestion arises at the LLC, which in turn increases the access latency. To deal with this issue, we propose a congestion management technique for the LLC that equips the NoC router with small storage to keep a copy of heavily shared cache blocks. To identify highly shared cache blocks, we also propose a prediction classifier in the LLC controller. We implement our technique in Sniper, an architectural simulator for multi-core systems, and evaluate its effectiveness by running a set of parallel benchmarks. Our experimental results show that the proposed technique is effective in reducing the LLC access time.
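
The sketch below shows one way the two pieces described in the abstract could fit together. It is a minimal illustration, assuming a hypothetical sharer-count threshold, a SharingClassifier at the LLC controller, and a small RouterBufferCache at the router; the paper's actual classifier and router storage organization are not given here and may differ.

```cpp
#include <cstdint>
#include <iostream>
#include <optional>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Hypothetical classifier kept at the LLC controller: tracks how many
// distinct cores have requested each block and flags a block as "heavily
// shared" once the sharer count reaches a threshold.
class SharingClassifier {
public:
    explicit SharingClassifier(std::size_t threshold) : threshold_(threshold) {}

    // Record an access by `core` to `blockAddr`; return true if the block
    // should now be treated as heavily shared.
    bool recordAccess(uint64_t blockAddr, int core) {
        auto &sharers = sharers_[blockAddr];
        sharers.insert(core);
        return sharers.size() >= threshold_;
    }

private:
    std::size_t threshold_;
    std::unordered_map<uint64_t, std::unordered_set<int>> sharers_;
};

// Hypothetical small buffer attached to a NoC router that holds copies of
// heavily shared blocks so later requests can be answered without reaching
// the LLC slice. Evicts the oldest entry when full (FIFO for simplicity).
class RouterBufferCache {
public:
    explicit RouterBufferCache(std::size_t capacity) : capacity_(capacity) {}

    void install(uint64_t blockAddr, uint64_t data) {
        if (store_.count(blockAddr) == 0) {
            if (store_.size() >= capacity_) {
                store_.erase(fifo_.front());   // evict the oldest block
                fifo_.erase(fifo_.begin());
            }
            fifo_.push_back(blockAddr);
        }
        store_[blockAddr] = data;
    }

    std::optional<uint64_t> lookup(uint64_t blockAddr) const {
        auto it = store_.find(blockAddr);
        if (it == store_.end()) return std::nullopt;
        return it->second;
    }

private:
    std::size_t capacity_;
    std::unordered_map<uint64_t, uint64_t> store_;
    std::vector<uint64_t> fifo_;
};

int main() {
    SharingClassifier classifier(/*threshold=*/3);
    RouterBufferCache routerCache(/*capacity=*/4);

    // Three different cores touch block 0x1000; on the third access the
    // classifier marks it heavily shared and the router keeps a copy.
    for (int core = 0; core < 3; ++core) {
        if (classifier.recordAccess(0x1000, core)) {
            routerCache.install(0x1000, /*data=*/42);
        }
    }

    // A later request can be satisfied at the router instead of the LLC slice.
    if (auto hit = routerCache.lookup(0x1000)) {
        std::cout << "router hit, data = " << *hit << "\n";
    }
    return 0;
}
```

The distinct-sharer count and FIFO replacement are used purely for illustration; the actual design could rely on saturating counters or different placement and replacement policies in the router storage.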


Citations
Journal ArticleDOI

NCDE: In-Network Caching for Directory Entries to Expedite Data Access in Tiled-Chip Multiprocessors

- 01 Jan 2023 - 
TL;DR: In this paper, the authors explore the opportunity of mitigating problems associated with shared data access via in-network caching for directory entries (NCDE), which can utilize every input port's virtual channels to hold directory entries.
References
Journal ArticleDOI

Spider: a high-speed network interconnect

M. Galles
- 01 Jan 1997 - 
TL;DR: SGI's Spider chip (Scalable, Pipelined Interconnect for Distributed Endpoint Routing) creates a scalable, short-range network delivering hundreds of gigabytes per second of bandwidth to large configurations.
Book ChapterDOI

SPEComp: A New Benchmark Suite for Measuring Parallel Computer Performance

TL;DR: An overview of a new benchmark suite for parallel computers, SPEComp, which targets mid-size parallel servers and includes a number of science/engineering and data processing applications, is presented.
Proceedings ArticleDOI

Cache system design in the tightly coupled multiprocessor system

C. K. Tang
TL;DR: System requirements in the multiprocessor environment as well as the cost-performance trade-offs of the cache system design are given in detail, and the possibility of sharing the cache system hardware with other multiprocessing facilities (such as dynamic address translation, storage protection, locks, serialization, and the system clocks) is discussed.
Proceedings ArticleDOI

In-Network Cache Coherence

TL;DR: This paper proposes an implementation of the cache coherence protocol within the network, embedding directories within each router node that manage and steer requests towards nearby data copies, enabling in-transit optimization of memory access delay.
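
As a loose illustration of the steering idea summarized above, the following sketch (a hypothetical RouterDirectory structure with a steer helper, not the cited paper's actual protocol) shows a router redirecting a request toward a known nearby copy instead of the block's home LLC slice.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Hypothetical per-router directory fragment: maps a block address to the
// node believed to hold the nearest copy. Names and layout are illustrative.
struct RouterDirectory {
    std::unordered_map<uint64_t, int> nearestCopy;

    // Decide where to forward a request: a nearby copy if the embedded
    // directory knows of one, otherwise the block's home LLC slice.
    int steer(uint64_t blockAddr, int homeNode) const {
        auto it = nearestCopy.find(blockAddr);
        return it != nearestCopy.end() ? it->second : homeNode;
    }
};

int main() {
    RouterDirectory dir;
    dir.nearestCopy[0x2000] = 5;  // node 5 holds a copy of block 0x2000

    std::cout << dir.steer(0x2000, /*homeNode=*/12) << "\n";  // steered to node 5
    std::cout << dir.steer(0x3000, /*homeNode=*/12) << "\n";  // falls back to home node 12
    return 0;
}
```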