Topic

Smart Cache

About: Smart Cache is a research topic. Over its lifetime, 7,680 publications have been published within this topic, receiving 180,618 citations.


Papers
Book Chapter (DOI)
21 May 2012
TL;DR: Proposes a centrality-based caching algorithm that exploits (ego-network) betweenness centrality to improve caching gain and eliminate the performance uncertainty of a simplistic random caching strategy.
Abstract: Ubiquitous in-network caching is one of the key aspects of information-centric networking (ICN), which has recently received widespread research interest. In one of the key relevant proposals, known as Networking Named Content (NNC), the premise is that leveraging in-network caching to store content in every node it traverses along the delivery path can enhance content delivery. We question such an indiscriminate, universal caching strategy and investigate whether caching less can actually achieve more. Specifically, we investigate whether caching only in a subset of nodes along the content delivery path can achieve better performance in terms of cache and server hit rates. In this paper, we first study the behavior of NNC's ubiquitous caching and observe that even naive random caching at one intermediate node within the delivery path can achieve similar and, under certain conditions, even better caching gain. We propose a centrality-based caching algorithm that exploits the concept of (ego-network) betweenness centrality to improve the caching gain and eliminate the uncertainty in the performance of the simplistic random caching strategy. Our results suggest that our solution consistently achieves better gain across both synthetic and real network topologies with different structural properties.

360 citations
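The selection rule the abstract describes is straightforward to sketch. The snippet below is a toy illustration, not the paper's implementation: it picks the single on-path node with the highest ego-network betweenness centrality as the caching point. The topology, node names, and the use of networkx with unnormalized centrality are all assumptions made for the example.

```python
# Toy sketch of centrality-driven cache placement: instead of caching
# at every node on the delivery path (NNC-style ubiquitous caching),
# cache only at the on-path node with the highest ego-network
# betweenness centrality. Topology and names are made up for the demo.
import networkx as nx

def ego_betweenness(G, node):
    # Betweenness of `node` computed inside its own ego network only;
    # unnormalized, so better-connected nodes score strictly higher.
    ego = nx.ego_graph(G, node)
    return nx.betweenness_centrality(ego, normalized=False)[node]

def choose_cache_node(G, delivery_path):
    # Single caching point for a content object traversing the path.
    return max(delivery_path, key=lambda n: ego_betweenness(G, n))

# Server s -> client c via r1..r3; r2 also bridges two side branches,
# so it covers more shortest paths and is the best place to cache.
G = nx.Graph([("s", "r1"), ("r1", "r2"), ("r2", "r3"), ("r3", "c"),
              ("r2", "x1"), ("r2", "x2")])
print(choose_cache_node(G, ["s", "r1", "r2", "r3", "c"]))  # -> r2
```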

Proceedings Article (DOI)
12 Mar 2016
TL;DR: Presents CATalyst, a pseudo-locking mechanism that uses Intel CAT to partition the LLC into a hybrid hardware-software managed cache, and shows that LLC side-channel attacks can be defeated.
Abstract: Cache side-channel attacks are serious threats to multi-tenant public cloud platforms. Past work showed how secret information in one virtual machine (VM) can be extracted by another co-resident VM using such attacks. Recent research demonstrated the feasibility of high-bandwidth, low-noise side-channel attacks on the last-level cache (LLC), which is shared by all the cores in the processor package, enabling attacks even when VMs are scheduled on different cores. This paper shows how such LLC side-channel attacks can be defeated using a performance-optimization feature recently introduced in commodity processors. Since most cloud servers use Intel processors, we show how Intel Cache Allocation Technology (CAT) can be used to provide a system-level protection mechanism that defends against side-channel attacks on the shared LLC. CAT is a way-based hardware cache-partitioning mechanism for enforcing quality of service with respect to LLC occupancy. However, it cannot be used directly to defeat cache side-channel attacks because of the very limited number of partitions it provides. We present CATalyst, a pseudo-locking mechanism that uses CAT to partition the LLC into a hybrid hardware-software managed cache. We implement a proof-of-concept system using Xen and Linux running on a server with Intel processors, and show that LLC side-channel attacks can be defeated. Furthermore, CATalyst incurs only very small performance overhead when used for security and has negligible impact on legacy applications.

360 citations
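The way-based partitioning idea is easy to model in software. The sketch below is a toy simulation of CAT-style way masks, not CATalyst itself: each class of service (CLOS) may only fill the cache ways named in its bitmask, so lines brought into a reserved partition cannot be evicted by code running under another CLOS. The sizes, mask values, and replacement rule are illustrative assumptions.

```python
# Toy model of CAT-style way masks (illustration, not CATalyst itself).
# A class of service (CLOS) may hit anywhere but may only FILL the
# ways named in its bitmask, so lines in a reserved partition cannot
# be evicted by other domains. All sizes/masks are made-up examples.
NUM_SETS, NUM_WAYS = 4, 8
CLOS_MASKS = {
    0: 0b11111100,  # general-purpose CLOS: may fill ways 2..7
    1: 0b00000011,  # "secure" CLOS: owns ways 0..1 (pseudo-locked region)
}
cache = [[None] * NUM_WAYS for _ in range(NUM_SETS)]  # cache[set][way] = tag

def access(clos, addr):
    s, tag = addr % NUM_SETS, addr // NUM_SETS
    if tag in cache[s]:
        return "hit"  # hits are permitted in any way, as with real CAT
    allowed = [w for w in range(NUM_WAYS) if (CLOS_MASKS[clos] >> w) & 1]
    victim = next((w for w in allowed if cache[s][w] is None), allowed[0])
    cache[s][victim] = tag  # fills are restricted to the CLOS's own ways
    return "miss"

access(1, 0x40)  # secure CLOS installs a line into way 0 of set 0
for addr in range(0x80, 0x480, 0x10):
    access(0, addr)  # heavy traffic from the other CLOS...
print(access(1, 0x40))  # -> "hit": the locked line was never evicted
```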

Proceedings Article (DOI)
25 Mar 2012
TL;DR: Proposes a content caching scheme, WAVE, which adjusts the number of chunks to be cached based on content popularity and achieves a higher cache hit ratio and fewer cache replacements than other on-demand caching strategies.
Abstract: In content-oriented networking, content files are typically cached in network nodes, so how content files are cached is crucial for efficient content delivery and cache storage utilization. In this paper, we propose a content caching scheme, WAVE, in which the number of chunks to be cached is adjusted based on the popularity of the content. In WAVE, an upstream node recommends the number of chunks to be cached at its downstream node, and this number increases exponentially as the request count grows. Simulation results reveal that the average hop count of content delivery under WAVE is lower than under other schemes, and its inter-ISP traffic volume is the second lowest (CDN is the lowest). WAVE also achieves a higher cache hit ratio and fewer cache replacements than other on-demand caching strategies.

349 citations
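The core rule, an exponentially growing chunk-caching window, fits in a few lines. In the minimal sketch below, the expansion factor of 2, the class name, and the bookkeeping are hypothetical placeholders, not the paper's exact policy.

```python
# Sketch of a WAVE-style recommendation: the upstream node counts
# requests per content and tells its downstream neighbor to cache
# exponentially more chunks as the content proves popular. The
# factor of 2 and all names are illustrative, not from the paper.
from collections import defaultdict

class UpstreamNode:
    def __init__(self, catalog):
        self.catalog = catalog            # content_id -> total chunk count
        self.requests = defaultdict(int)  # content_id -> requests seen

    def chunks_to_cache(self, content_id):
        # Window doubles with every request, capped at the full object.
        self.requests[content_id] += 1
        window = 2 ** (self.requests[content_id] - 1)
        return min(window, self.catalog[content_id])

node = UpstreamNode({"video/42": 100})
print([node.chunks_to_cache("video/42") for _ in range(8)])
# -> [1, 2, 4, 8, 16, 32, 64, 100]  (more popular => cache more chunks)
```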

Proceedings Article (DOI)
19 Sep 2012
TL;DR: There is a need for a simple yet efficient compression technique that can effectively compress common in-cache data patterns, and has minimal effect on cache access latency.
Abstract: Cache compression is a promising technique to increase on-chip cache capacity and to decrease on-chip and off-chip bandwidth usage. Unfortunately, directly applying well-known compression algorithms (usually implemented in software) leads to high hardware complexity and unacceptable decompression/compression latencies, which in turn can negatively affect performance. Hence, there is a need for a simple yet efficient compression technique that can effectively compress common in-cache data patterns, and has minimal effect on cache access latency.

348 citations
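As one concrete instance of the kind of simple, low-latency scheme the abstract calls for, the sketch below compresses a cache line as a base value plus narrow per-word deltas, a pattern that covers pointer- and counter-heavy lines. It illustrates the general idea only and is not necessarily the paper's proposed design.

```python
# Sketch of simple pattern-based cache-line compression: store one
# full-width base plus narrow deltas when all words in the line are
# close together (common for pointers, counters, array indices).
# Parameters are illustrative; this is not the paper's exact scheme.

def compress_line(words, delta_bits=8):
    """Return (base, deltas) if the line compresses, else None."""
    base = words[0]
    deltas = [w - base for w in words]
    limit = 1 << (delta_bits - 1)
    if all(-limit <= d < limit for d in deltas):
        # e.g. 8 words x 8 bytes = 64 B shrink to 8 B base + 8 x 1 B deltas
        return base, deltas
    return None  # incompressible under this simple pattern

def decompress_line(base, deltas):
    return [base + d for d in deltas]

line = [0x7f3a20, 0x7f3a28, 0x7f3a30, 0x7f3a38,
        0x7f3a40, 0x7f3a48, 0x7f3a50, 0x7f3a58]  # pointer-like values
packed = compress_line(line)
assert packed is not None and decompress_line(*packed) == line
```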

Journal Article (DOI)
TL;DR: This paper presents a comprehensive survey of state-of-the-art techniques aiming to address caching issues, with a particular focus on reducing cache redundancy and improving the availability of cached content.

343 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (88% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Network packet: 159.7K papers, 2.2M citations (85% related)
Quality of service: 77.1K papers, 996.6K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    50
2022    114
2021    5
2020    1
2019    8
2018    18