Topic

Smart Cache

About: Smart Cache is a research topic. Over its lifetime, 7,680 publications have been published within this topic, receiving 180,618 citations.


Papers
Patent
20 Dec 2010
TL;DR: Cache management techniques for a content distribution network (CDN), for example, a video on demand (VOD) system supporting user requests and delivery of video content, are described in this paper.
Abstract: Cache management techniques are described for a content distribution network (CDN), for example, a video on demand (VOD) system supporting user requests and delivery of video content. A preferred cache size may be calculated for one or more cache devices in the CDN, for example, based on a maximum cache memory size, a bandwidth availability associated with the CDN, and a title dispersion calculation determined by the user requests within the CDN. After establishing the cache with a set of assets (e.g., video content), an asset replacement algorithm may be executed at one or more cache devices in the CDN. When a determination is made that a new asset should be added to a full cache, a multi-factor comparative analysis may be performed on the assets currently residing in the cache, comparing the popularity and size of assets and combinations of assets, along with other factors, to determine which assets should be replaced in the cache device.

106 citations
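The multi-factor replacement step described above lends itself to a short sketch. Below is a minimal illustration, assuming a simple popularity-per-megabyte value metric, of scoring resident assets and small combinations of assets to decide what to evict; the names, metric, and group-size limit are illustrative assumptions, not the patented algorithm.

```python
# Hedged sketch of a multi-factor comparative eviction analysis:
# when a new asset must enter a full cache, find the cheapest-to-lose
# set of resident assets that frees enough space. The "value" metric
# (popularity per MB) is an assumption for illustration.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Asset:
    title: str
    size_mb: int       # space the asset occupies in the cache
    popularity: float  # e.g., normalized request rate

def eviction_candidates(cache, needed_mb, max_group=2):
    """Return the group of assets whose removal frees >= needed_mb
    at the lowest combined value, or None if no group suffices."""
    best, best_value = None, float("inf")
    for k in range(1, max_group + 1):
        for group in combinations(cache, k):
            freed = sum(a.size_mb for a in group)
            if freed < needed_mb:
                continue
            value = sum(a.popularity / a.size_mb for a in group)
            if value < best_value:
                best, best_value = group, value
    return best
```

Limiting max_group keeps the combination search tractable; the "other factors" the abstract mentions (e.g., bandwidth availability) could enter the value metric the same way.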

Proceedings ArticleDOI
Jun Rao, Hao Feng, Chenchen Yang, Zhiyong Chen, Bin Xia
22 May 2016
TL;DR: The placement problem for the helper-tier caching network is reduced to a convex problem that can be effectively solved by the classical water-filling method; the authors observe that users and helpers prefer to cache popular contents under low node density and prefer to cache different contents evenly under high node density.
Abstract: In this paper, we devise the optimal caching placement to maximize the offloading probability for a two-tier wireless caching system, where the helpers and some of the users have caching ability. The offloading comes from local caching, D2D sharing and helper transmission. In particular, to maximize the offloading probability we reformulate the caching placement problem for users and helpers into a difference of convex (DC) problem which can be effectively solved by DC programming. Moreover, we analyze the two extreme cases where there is only a helper-tier caching network or only a user-tier one. Specifically, the placement problem for the helper-tier caching network is reduced to a convex problem and can be effectively solved by the classical water-filling method. We notice that users and helpers prefer to cache popular contents under low node density and prefer to cache different contents evenly under high node density. Simulation results indicate a great performance gain of the proposed caching placement over existing approaches.

106 citations
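The helper-tier special case reduces to a convex problem solved by classical water-filling. The sketch below solves a generic log-utility stand-in for that problem, since the abstract does not give the exact objective; the function name and parameters are illustrative.

```python
# Classical water-filling by bisection on the water level mu:
# maximize sum_i log(a[i] + x[i]) subject to sum(x) = budget, x >= 0.
# The optimal allocation is x[i] = max(0, mu - a[i]).
def water_filling(a, budget, tol=1e-9):
    lo, hi = min(a), max(a) + budget  # mu is bracketed in [lo, hi]
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - ai) for ai in a)
        if used > budget:
            hi = mu   # water level too high: allocating more than the budget
        else:
            lo = mu   # water level too low: budget left over
    mu = (lo + hi) / 2
    return [max(0.0, mu - ai) for ai in a]

# Example: contents with lower base utility a[i] receive more cache.
print(water_filling([0.2, 0.5, 1.0, 2.0], budget=2.0))
```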

Patent
Josh Sacks
11 Aug 2005
TL;DR: In this paper, the authors propose caching pre-computed map images (e.g., map tiles) on the mobile device to minimize the latency of a mapping application on high-latency and low-throughput networks.
Abstract: Techniques are disclosed that enable users to access and use digital mapping systems with constrained-resource services and/or mobile devices (e.g., cell phones and PDAs). In particular, latency of a mapping application on high-latency and low-throughput networks is minimized. One embodiment utilizes volatile and non-volatile storage of the mobile device to cache pre-computed map images (e.g., map tiles). An asynchronous cache can be used to prevent delays caused by potentially slow non-volatile storage. Meta-data about each map image and usage patterns can be stored and used by the cache to optimize hit rates.

105 citations
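A minimal sketch of the asynchronous two-tier tile cache idea, assuming a dictionary as the volatile tier, a directory of files as the non-volatile tier, and a background writer thread so that inserts never block on slow storage; the class and file layout are illustrative, not the patented design.

```python
# Two-tier map-tile cache with asynchronous writes to non-volatile
# storage, so slow flash never stalls the mapping application.
import os, queue, threading

class AsyncTileCache:
    def __init__(self, disk_dir):
        self.mem = {}                      # volatile tier (RAM)
        self.disk_dir = disk_dir           # non-volatile tier (flash/disk)
        os.makedirs(disk_dir, exist_ok=True)
        self._writes = queue.Queue()
        threading.Thread(target=self._writer, daemon=True).start()

    def _path(self, key):
        return os.path.join(self.disk_dir, key.replace("/", "_"))

    def _writer(self):
        while True:                        # drain queued writes in background
            key, tile = self._writes.get()
            with open(self._path(key), "wb") as f:
                f.write(tile)

    def get(self, key):
        if key in self.mem:                # fast path: memory hit
            return self.mem[key]
        try:                               # slower path: non-volatile hit
            with open(self._path(key), "rb") as f:
                tile = f.read()
            self.mem[key] = tile           # promote to the volatile tier
            return tile
        except FileNotFoundError:
            return None                    # miss: caller fetches over network

    def put(self, key, tile):
        self.mem[key] = tile               # immediate, never blocks
        self._writes.put((key, tile))      # persisted asynchronously
```

Per-tile metadata and usage statistics, which the abstract says the cache keeps, could hang off the same keys to drive eviction and hit-rate tuning.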

Proceedings ArticleDOI
24 Aug 2014
TL;DR: It is established that maintaining the tags in SRAM, because of its smaller access latency, leads to overall better performance; a simple technique is proposed to throttle the number of sets prefetched, and the resulting tag cache can satisfy over 60% of DRAM cache tag accesses on average.
Abstract: 3D-stacking technology has enabled the option of embedding a large DRAM onto the processor. Prior works have proposed to use this as a DRAM cache. Because of its large size (a DRAM cache can be in the order of hundreds of megabytes), the total size of the tags associated with it can also be quite large (in the order of tens of megabytes). The large size of the tags has created a problem. Should we maintain the tags in the DRAM and pay the cost of a costly tag access in the critical path? Or should we maintain the tags in the faster SRAM by paying the area cost of a large SRAM for this purpose? Prior works have primarily chosen the former and proposed a variety of techniques for reducing the cost of a DRAM tag access. In this paper, we first establish (with the help of a study) that maintaining the tags in SRAM, because of its smaller access latency, leads to overall better performance. Motivated by this study, we ask if it is possible to maintain tags in SRAM without incurring high area overhead. Our key idea is simple. We propose to cache the tags in a small SRAM tag cache - we show that there is enough spatial and temporal locality amongst tag accesses to merit this idea. We propose the ATCache which is a small SRAM tag cache. Similar to a conventional cache, the ATCache caches recently accessed tags to exploit temporal locality; it exploits spatial locality by prefetching tags from nearby cache sets. In order to avoid the high miss latency and cache pollution caused by excessive prefetching, we use a simple technique to throttle the number of sets prefetched. Our proposed ATCache (which consumes 0.4% of overall tag size) can satisfy over 60% of DRAM cache tag accesses on average.

105 citations
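The ATCache mechanism can be modeled compactly: a small LRU structure over per-set tag arrays, with a bounded prefetch of neighboring sets on a miss. The fixed prefetch degree below stands in for the paper's throttling technique, and all sizes are illustrative assumptions.

```python
# Toy model of an SRAM tag cache in front of DRAM-resident tags:
# hits avoid the DRAM tag access; misses fetch the set's tags and
# prefetch a throttled number of neighboring sets (spatial locality).
from collections import OrderedDict

class TagCache:
    def __init__(self, capacity_sets, prefetch_degree=2):
        self.sets = OrderedDict()          # set index -> cached tag array
        self.capacity = capacity_sets
        self.prefetch_degree = prefetch_degree

    def _install(self, idx, dram_tags):
        if idx in self.sets:
            self.sets.move_to_end(idx)     # refresh LRU position
            return
        if len(self.sets) >= self.capacity:
            self.sets.popitem(last=False)  # evict least recently used set
        self.sets[idx] = dram_tags[idx]

    def lookup(self, idx, dram_tags):
        """True if the tag check is served from SRAM (tag-cache hit)."""
        hit = idx in self.sets
        self._install(idx, dram_tags)
        if not hit:                        # throttled neighbor prefetch
            for i in range(1, self.prefetch_degree + 1):
                if idx + i < len(dram_tags):
                    self._install(idx + i, dram_tags)
        return hit
```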

Patent
Robert Drew Major
22 Jun 1999
TL;DR: A cache object store is organized to provide fast and efficient storage of data as cache objects organized into cache object groups, using a multi-level hierarchical storage architecture that comprises a primary memory-level cache store and, optionally, a secondary disk-level cache store, each of which is configured to optimize access to the cache object groups.
Abstract: A cache object store is organized to provide fast and efficient storage of data as cache objects organized into cache object groups. The cache object store preferably embodies a multi-level hierarchical storage architecture comprising a primary memory-level cache store and, optionally, a secondary disk-level cache store, each of which is configured to optimize access to the cache object groups. These levels of the cache object store further exploit persistent and non-persistent storage characteristics of the inventive architecture.

105 citations
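A rough sketch, under assumed interfaces, of the two-level organization the patent describes: lookups go to a memory-level primary store first and fall back to an optional disk-level secondary store, with objects grouped and optionally persistent; the pickling and naming scheme are illustrative, not the patented design.

```python
# Two-level cache object store: memory-level primary, optional
# disk-level secondary, objects addressed by (group, name).
import os, pickle

class CacheObjectStore:
    def __init__(self, disk_dir=None):
        self.groups = {}                   # primary level: in memory
        self.disk_dir = disk_dir           # secondary level: on disk
        if disk_dir:
            os.makedirs(disk_dir, exist_ok=True)

    def _path(self, group, name):
        return os.path.join(self.disk_dir, f"{group}.{name}")

    def put(self, group, name, obj, persistent=False):
        self.groups.setdefault(group, {})[name] = obj
        if persistent and self.disk_dir:   # persistent objects also hit disk
            with open(self._path(group, name), "wb") as f:
                pickle.dump(obj, f)

    def get(self, group, name):
        obj = self.groups.get(group, {}).get(name)
        if obj is not None or not self.disk_dir:
            return obj
        try:                               # fall back to the disk level
            with open(self._path(group, name), "rb") as f:
                obj = pickle.load(f)
            self.groups.setdefault(group, {})[name] = obj  # promote
            return obj
        except FileNotFoundError:
            return None
```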


Network Information
Related Topics (5)

Cache: 59.1K papers, 976.6K citations (92% related)
Server: 79.5K papers, 1.4M citations (88% related)
Scalability: 50.9K papers, 931.6K citations (88% related)
Network packet: 159.7K papers, 2.2M citations (85% related)
Quality of service: 77.1K papers, 996.6K citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    50
2022    114
2021    5
2020    1
2019    8
2018    18