scispace - formally typeset
Topic

Cache

About: Cache is a research topic. Over its lifetime, 59,167 publications have been published within this topic, receiving 976,633 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, a proactive caching mechanism is proposed to reduce peak traffic demands by proactively serving predictable user demands via caching at base stations and users' devices. The results show that important gains can be obtained in each case study, with backhaul savings and a higher ratio of satisfied users.
Abstract: This article explores one of the key enablers of beyond 4G wireless networks leveraging small cell network deployments, proactive caching. Endowed with predictive capabilities and harnessing recent developments in storage, context awareness, and social networks, peak traffic demands can be substantially reduced by proactively serving predictable user demands via caching at base stations and users' devices. In order to show the effectiveness of proactive caching, we examine two case studies that exploit the spatial and social structure of the network, where proactive caching plays a crucial role. First, in order to alleviate backhaul congestion, we propose a mechanism whereby files are proactively cached during off-peak periods based on file popularity and correlations among user and file patterns. Second, leveraging social networks and D2D communications, we propose a procedure that exploits the social structure of the network by predicting the set of influential users to (proactively) cache strategic contents and disseminate them to their social ties via D2D communications. Exploiting this proactive caching paradigm, numerical results show that important gains can be obtained for each case study, with backhaul savings and a higher ratio of satisfied users of up to 22 and 26 percent, respectively. Higher gains can be further obtained by increasing the storage capability at the network edge.

1,157 citations
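The core mechanism in the first case study above — estimating file popularity from past demand and filling caches during off-peak hours — can be sketched in a few lines. This is a toy model, not the paper's method: the function and variable names are illustrative, and real popularity estimation would also exploit the user/file correlations the abstract mentions.

```python
from collections import Counter

def proactive_cache(history, capacity):
    """Pick the `capacity` most-requested files from past demand
    (a stand-in for the paper's popularity estimation)."""
    popularity = Counter(history)
    return {f for f, _ in popularity.most_common(capacity)}

def backhaul_savings(requests, cached):
    """Fraction of requests served locally, i.e. without a backhaul transfer."""
    hits = sum(1 for r in requests if r in cached)
    return hits / len(requests)

# Toy demand trace: file 'a' is popular, with a long tail of others.
history = ["a"] * 6 + ["b"] * 3 + ["c", "d", "e"]
cached = proactive_cache(history, capacity=2)  # filled during off-peak hours
print(cached, backhaul_savings(history, cached))
```

With this trace, caching just the two most popular files serves 75% of requests locally, which is the kind of backhaul saving the abstract quantifies.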

Posted Content
TL;DR: In this article, the authors describe side-channel attacks based on inter-process leakage through the state of the CPU's memory cache, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups.
Abstract: We describe several software side-channel attacks based on inter-process leakage through the state of the CPU’s memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts, and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several such attacks on AES, and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux’s dm-crypt encrypted partitions (in the latter case, the full key can be recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we describe several countermeasures for mitigating such attacks.

1,109 citations
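The leakage mechanism described above — a data-dependent table lookup pulls a cache line whose index depends on the secret — can be illustrated with a toy simulation. This is not the paper's attack: names are invented, and a real attack infers the loaded line from access *timing* after priming and probing, whereas this model reads the cache state directly.

```python
LINE_SIZE = 16  # table entries per cache line (toy parameter)

def victim_lookup(secret, cache_lines):
    """Data-dependent table lookup: touching entry `secret` loads the
    cache line that holds it, leaving a footprint in the cache."""
    cache_lines.add(secret // LINE_SIZE)

def attacker_probe(cache_lines):
    """After flushing the cache, observe which line the victim brought in.
    (A real attacker measures per-line access latency instead.)"""
    return cache_lines

cache = set()               # empty set models a freshly flushed cache
victim_lookup(200, cache)   # victim encrypts with a secret-dependent index
leaked_lines = attacker_probe(cache)
print(leaked_lines)         # the attacker learns secret // 16
```

Even this toy model shows why the attack works across process boundaries: the cache footprint, not the data itself, reveals the high-order bits of the secret index.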

Journal ArticleDOI
TL;DR: A novel edge caching scheme based on the concept of content-centric networking or information-centric networking is proposed and evaluated via trace-driven simulations, which validate the advantages of caching content in 5G mobile networks.
Abstract: The demand for rich multimedia services over mobile networks has been soaring at a tremendous pace over recent years. However, due to the centralized architecture of current cellular networks, the wireless link capacity as well as the bandwidth of the radio access networks and the backhaul network cannot practically cope with the explosive growth in mobile traffic. Recently, we have observed the emergence of promising mobile content caching and delivery techniques, by which popular contents are cached in the intermediate servers (or middleboxes, gateways, or routers) so that demands from users for the same content can be accommodated easily without duplicate transmissions from remote servers; hence, redundant traffic can be significantly eliminated. In this article, we first study techniques related to caching in current mobile networks, and discuss potential techniques for caching in 5G mobile networks, including evolved packet core network caching and radio access network caching. A novel edge caching scheme based on the concept of content-centric networking or information-centric networking is proposed. Using trace-driven simulations, we evaluate the performance of the proposed scheme and validate the various advantages of the utilization of caching content in 5G mobile networks. Furthermore, we conclude the article by exploring new relevant opportunities and challenges.

1,098 citations
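The abstract's central claim — caching popular content at intermediate nodes eliminates duplicate transmissions from remote servers — can be sketched with a minimal edge cache. This is an assumption-laden illustration, not the paper's scheme: it uses plain LRU replacement at a single node, while the paper proposes an information-centric design spanning the packet core and radio access network.

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU cache standing in for an edge caching node."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, content):
        """Return True on a hit (served locally), False on a miss
        (content fetched from the remote server, then cached)."""
        if content in self.store:
            self.store.move_to_end(content)  # mark as recently used
            return True
        self.store[content] = True
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        return False

edge = EdgeCache(capacity=2)
trace = ["a", "b", "a", "c", "a", "b"]
hits = sum(edge.request(c) for c in trace)
print(hits, "of", len(trace), "requests served without a remote fetch")
```

Every hit is a transmission the backhaul never carries, which is the redundancy-elimination effect the abstract describes.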

Proceedings ArticleDOI
09 Dec 2006
TL;DR: In this article, the authors propose a low-overhead, runtime mechanism that partitions a shared cache between multiple applications depending on the reduction in cache misses that each application is likely to obtain for a given amount of cache resources.
Abstract: This paper investigates the problem of partitioning a shared cache between multiple concurrently executing applications. The commonly used LRU policy implicitly partitions a shared cache on a demand basis, giving more cache resources to the application that has a high demand and fewer cache resources to the application that has a low demand. However, a higher demand for cache resources does not always correlate with a higher performance from additional cache resources. It is beneficial for performance to invest cache resources in the application that benefits more from the cache resources rather than in the application that has more demand for the cache resources. This paper proposes utility-based cache partitioning (UCP), a low-overhead, runtime mechanism that partitions a shared cache between multiple applications depending on the reduction in cache misses that each application is likely to obtain for a given amount of cache resources. The proposed mechanism monitors each application at runtime using a novel, cost-effective, hardware circuit that requires less than 2kB of storage. The information collected by the monitoring circuits is used by a partitioning algorithm to decide the amount of cache resources allocated to each application. Our evaluation, with 20 multiprogrammed workloads, shows that UCP improves performance of a dual-core system by up to 23% and on average 11% over LRU-based cache partitioning.

1,083 citations
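The allocation idea above — give each extra cache way to the application whose miss count drops the most — can be sketched as a greedy loop over per-application miss curves. This is a simplification under stated assumptions: the miss curves are invented numbers, and the actual UCP paper uses hardware monitors to estimate the curves and a lookahead algorithm because real miss curves are not convex, so plain greedy can be suboptimal.

```python
def partition_ways(miss_curves, total_ways):
    """Greedy utility-based partitioning: repeatedly give the next cache
    way to the application whose misses drop the most from one more way.
    `miss_curves[a][w]` = misses of application `a` when given `w` ways."""
    alloc = [0] * len(miss_curves)
    for _ in range(total_ways):
        # Marginal utility of one additional way for each application.
        gains = [curve[alloc[a]] - curve[alloc[a] + 1]
                 for a, curve in enumerate(miss_curves)]
        best = max(range(len(gains)), key=gains.__getitem__)
        alloc[best] += 1
    return alloc

# App 0 benefits strongly from its first few ways; app 1 is cache-insensitive.
curves = [
    [100, 40, 20, 15, 14, 14, 14, 14, 14],  # app 0: misses vs. ways 0..8
    [50, 48, 46, 45, 44, 44, 44, 44, 44],   # app 1: misses vs. ways 0..8
]
print(partition_ways(curves, total_ways=8))
```

Note how this differs from LRU's demand-based sharing: allocation follows the *benefit* from extra ways, so the insensitive application stops receiving ways once its marginal gain flattens out.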

Patent
13 Oct 1995
TL;DR: In this article, a method and apparatus for reconstructing data in a computer system employing a modified RAID 5 data protection scheme is described; the computer system includes a write-back cache composed of non-volatile memory that stores outstanding writes and the associated metadata.
Abstract: Disclosed is a method and apparatus for reconstructing data in a computer system employing a modified RAID 5 data protection scheme. The computer system includes a write-back cache composed of non-volatile memory for storing (1) writes outstanding to a device and the associated data read, and (2) metadata information. The metadata includes a first field containing the logical block number or address (LBN or LBA) of the data, a second field containing the device ID, and a third field containing the block status. From the metadata information it is determined where the write was intended when the crash occurred. An examination is made to determine whether parity is consistent across the slice, and if not, the data in the non-volatile write-back cache is used to reconstruct the write that was occurring when the crash occurred to ensure consistent parity, so that only those blocks affected by the crash have to be reconstructed.

1,069 citations
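The recovery flow in the patent abstract — check the RAID 5 parity invariant across a slice, and if it is broken, replay the interrupted write recorded in the non-volatile cache — can be sketched as follows. This is a hypothetical single-slice model: the metadata fields mirror the three fields named in the abstract, but the data values, dictionary layout, and function names are invented for illustration.

```python
from functools import reduce

def parity_consistent(slice_blocks, parity):
    """RAID 5 invariant: parity equals the XOR of all data blocks in the slice."""
    return reduce(lambda a, b: a ^ b, slice_blocks) == parity

def recover(slice_blocks, parity, metadata, nv_cache):
    """After a crash, if parity is inconsistent, replay the write recorded
    in the non-volatile cache metadata, then recompute parity."""
    if not parity_consistent(slice_blocks, parity):
        lba = metadata["lba"]              # first field: logical block address
        slice_blocks[lba] = nv_cache[lba]  # redo the interrupted write
        parity = reduce(lambda a, b: a ^ b, slice_blocks)
    return slice_blocks, parity

# A crash interrupted a write of 0x5A to block 1: data updated, parity stale.
blocks = [0x11, 0x5A, 0x33]
stale_parity = 0x11 ^ 0x22 ^ 0x33  # parity from before the write
meta = {"lba": 1, "device": 0, "status": "dirty"}  # the three metadata fields
nv = {1: 0x5A}                     # contents of the non-volatile write-back cache
blocks, parity = recover(blocks, stale_parity, meta, nv)
print(parity_consistent(blocks, parity))
```

The key point the abstract makes survives the simplification: because the metadata pinpoints the affected block, only that slice's parity is rebuilt, not the whole array.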


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations, 90% related
Network packet: 159.7K papers, 2.2M citations, 87% related
Mobile computing: 51.3K papers, 1M citations, 87% related
Wireless ad hoc network: 49K papers, 1.1M citations, 86% related
Scheduling (computing): 78.6K papers, 1.3M citations, 85% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    681
2022    1,609
2021    1,412
2020    2,759
2019    3,644
2018    3,704