scispace - formally typeset

Showing papers on "Cache pollution published in 2021"


Journal ArticleDOI
TL;DR: In this article, a service-oriented and location-based efficient key distribution protocol (SOLEK) is proposed to enable efficient and secure content delivery in mobile edge caching.
Abstract: Mobile edge caching is a promising technology for next-generation mobile networks to effectively offer service environments and cloud-storage capabilities at the edge of networks. By exploiting the storage and computing resources at the network edge, mobile edge caching can significantly reduce service latency, decrease network load, and improve the user experience. On the other hand, edge caching is subject to a number of threats regarding privacy violations and security breaches. In this article, we first introduce the architecture of mobile edge caching, and address the key problems regarding why, where, what, and how to cache. Then we examine the potential cyber threats, including cache poisoning attacks, cache pollution attacks, cache side-channel attacks, and cache deception attacks, which raise serious concerns about privacy, security, and trust in content placement, content delivery, and content usage for mobile users. After that, we propose a service-oriented and location-based efficient key distribution protocol (SOLEK) as an example response to efficient and secure content delivery in mobile edge caching. Finally, we discuss the potential techniques for privacy-preserving content placement, efficient and secure content delivery, and trustful content usage, which are expected to draw more attention and effort toward secure edge caching.

36 citations


Journal ArticleDOI
TL;DR: This article proposes a detection and defense scheme with the help of grey forecast, which can effectively exploit the regularity of past Interests and popularity by comprehensively considering three major factors to predict the future popularity of each cached content.
Abstract: Named Data Networking (NDN) is one of the most promising information-centric networking architectures, improving network performance by supporting large-scale content distribution. However, the in-network caching mechanism increases the opportunity for cache pollution attacks, in which attackers reduce the cache hit ratio of legitimate users by issuing fake requests that fill the precious cache with non-popular contents. To prevent the resulting degradation of network performance, it is particularly important to detect the attack and then throttle it. In this article, we propose a detection and defense scheme based on grey forecasting, which exploits the regularity of past Interests and popularity by comprehensively considering three major factors to predict the future popularity of each cached content. If the predicted popularity of any content differs too much from the actually measured one in several consecutive time slices, a pollution attack is declared. Once the attack is detected, the defense suppresses the popularity increase of the suspicious content to mitigate the damage. We also consider a special case in which a sudden burst of traffic from legitimate users cannot simply be dropped. Simulations in ndnSIM indicate that our method detects and defends against the pollution attack with a higher cache hit ratio, a higher detection ratio, and a lower hop count than other state-of-the-art schemes.
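The grey-forecast step can be illustrated with a minimal GM(1,1) model that predicts the next per-slice request count of one content from its recent history. This is a simplified single-series sketch, not the paper's three-factor predictor; the function name and the list-of-counts representation are assumptions.

```python
import math

def gm11_next(series):
    """GM(1,1) grey forecast: predict the next value of a short, positive
    series (e.g., per-slice request counts of one cached content)."""
    n = len(series)
    ago = [sum(series[:i + 1]) for i in range(n)]            # 1-AGO sequence
    z = [0.5 * (ago[i] + ago[i + 1]) for i in range(n - 1)]  # background values
    y = series[1:]
    m = n - 1
    # Least-squares solution of x0(k) + a*z(k) = b (2x2 normal equations)
    s_zz, s_z = sum(v * v for v in z), sum(z)
    s_zy, s_y = sum(zi * yi for zi, yi in zip(z, y)), sum(y)
    det = s_zz * m - s_z * s_z
    a = (s_z * s_y - m * s_zy) / det
    b = (s_zz * s_y - s_z * s_zy) / det

    # Whitened response: x1_hat(k) = (x0(1) - b/a) * exp(-a*k) + b/a
    def x1(k):
        return (series[0] - b / a) * math.exp(-a * k) + b / a

    return x1(n) - x1(n - 1)  # predicted popularity in the next slice
```

In the scheme described above, a pollution attack would be flagged when this prediction diverges from the measured count over several consecutive slices.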

13 citations


Journal ArticleDOI
Zhenke Chen, Dingding Li, Wang Zhiwen, Hai Liu, Yong Tang
04 Mar 2021
TL;DR: RAMCI, an asynchronous (async) memory-copying mechanism based on the Intel I/OAT engine, not only reduces the sync overheads but also overcomes the three issues above, through a lock mechanism built on the low-level CAS instruction and a lightweight interrupt mechanism that signals the completion of memory copying.
Abstract: Memory copying is one of the most common operations in modern software. Usually it is a synchronous (sync) CPU procedure, incurring overheads such as cache pollution and CPU stalling, especially when bulk-copying large data. To address this, several works based on I/OAT, a dedicated and popular hardware copying engine on Intel platforms, have been proposed, but they still suffer from three problems: (1) no atomic allocation/revocation at the granularity of an I/OAT channel; (2) no interrupt support; and (3) complicated programming interfaces. We propose RAMCI, an asynchronous (async) memory-copying mechanism based on the Intel I/OAT engine, which not only reduces the sync overheads but also overcomes the above three issues through (1) a lock mechanism built on the low-level CAS instruction; (2) a lightweight interrupt mechanism that signals the completion of memory copying, instead of a polling pattern that consumes substantial CPU resources; and (3) a group of well-defined abstract interfaces that let programmers use the free underlying I/OAT channels transparently. To support these interfaces, a novel scheduler of the I/OAT channels is introduced: it splits the source data into several pieces, each of which can be assigned a dedicated I/OAT channel to transfer the data in parallel. We evaluate RAMCI against other memory-copying mechanisms in four NUMA scenarios. The experimental results show that RAMCI improves memory-copying performance by up to 4.68× while achieving almost fully parallel operation.
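The split-and-copy scheduling idea can be sketched in pure Python, with a thread pool standing in for the I/OAT channels. This illustrates only the async interface shape (submit pieces, wait on a handle), not the hardware mechanism or its performance; all names here are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor, wait

def _copy_chunk(src, dst, lo, hi):
    dst[lo:hi] = src[lo:hi]          # one "channel" copies one piece

def async_copy(src, dst, channels=4):
    """Split the source into per-channel pieces and copy them concurrently,
    returning futures the caller can wait on (the async completion handle)."""
    n = len(src)
    step = max(1, -(-n // channels))  # ceil(n / channels)
    pool = ThreadPoolExecutor(max_workers=channels)
    return [pool.submit(_copy_chunk, src, dst, lo, min(lo + step, n))
            for lo in range(0, n, step)]
```

The caller overlaps other work with the copy and calls `wait()` on the returned futures instead of polling, mirroring RAMCI's interrupt-driven completion.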

8 citations


Journal ArticleDOI
Dapeng Man, Mu Yongjia, Jiafei Guo, Wu Yang, Jiguang Lv, Wei Wang
TL;DR: This paper proposes a detection algorithm based on gradient boosting decision trees (GBDT) that learns to detect cache pollution from data; two features based on node status and path information serve as model input, improving detection accuracy.
Abstract: There is a new cache pollution attack in information-centric networks (ICN) that fills the router cache by sending a large number of requests for non-popular content. This attack severely reduces the router cache hit rate, so detecting cache pollution attacks is an urgent problem in ICN. Most existing research on cache pollution detection relies on manually set thresholds; the accuracy of the detection then depends on the threshold setting, and adaptability to different network environments is weak. To improve both the accuracy of cache pollution detection and the adaptability to different network environments, this paper proposes a detection algorithm based on gradient boosting decision trees (GBDT), which learns a detection model from data. For feature selection, the algorithm uses two features based on node status and path information as model input, which improves accuracy. Comparative experiments demonstrate the improvement in detection accuracy of this method.
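The per-slice feature vector that would feed such a classifier can be sketched as follows. The two concrete features chosen here (hit ratio for node status, mean hop count for path information) are simplified stand-ins for the paper's inputs, and the record format is an assumption.

```python
def slice_features(requests):
    """Build the feature vector for one time slice at a node.
    `requests` is a list of (hit, hop_count) records:
      - hit ratio  -> node-status feature (falls under pollution)
      - mean hops  -> path-information feature
    The resulting vector is what a GBDT classifier would consume."""
    n = len(requests)
    hit_ratio = sum(1 for hit, _ in requests if hit) / n
    mean_hops = sum(hops for _, hops in requests) / n
    return [hit_ratio, mean_hops]
```

Vectors from labeled attack/normal slices would then train a gradient-boosting classifier (e.g., scikit-learn's `GradientBoostingClassifier`), replacing a hand-tuned threshold with a learned decision boundary.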

6 citations


Proceedings ArticleDOI
23 Sep 2021
TL;DR: In this paper, the authors identify the different attack models that can disrupt NDN operation and assess the impact of the cache pollution attack on the performance of a Named Data Network; more precisely, they implement different attack scenarios and analyze their impact in terms of cache hit ratio, data retrieval delay, and hit damage ratio.
Abstract: Named Data Networking (NDN), one of the most suitable candidates for the future Internet architecture, allows all network nodes to have a local cache that serves incoming content requests. Content caching is an essential component of NDN: content is cached in routers and reused for future requests in order to reduce bandwidth consumption and improve data delivery speed. Moreover, NDN introduces self-certifying content features that improve data security and make NDN a secure-by-design architecture able to support efficient and secure content distribution at a global scale. However, basic NDN security mechanisms such as signatures and encryption are not sufficient to ensure security in these networks: the availability of data in several caches across the network allows malicious nodes to perform attacks that are relatively easy to implement and very effective, including cache pollution attacks (CPA), cache privacy attacks, content poisoning attacks, and Interest flooding attacks. In this paper, we identify the different attack models that can disrupt NDN operation. We conducted several simulations in ndnSIM to assess the impact of the cache pollution attack on the performance of a Named Data Network; more precisely, we implemented different attack scenarios and analyzed their impact in terms of cache hit ratio, data retrieval delay, and hit damage ratio.
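Of the metrics listed, the hit damage ratio is the least standard; one common definition is the relative loss in cache hit ratio caused by the attack, sketched below. The exact formula is an assumption and the paper's definition may differ.

```python
def cache_hit_ratio(hits, requests):
    """Fraction of content requests served from the local cache."""
    return hits / requests

def hit_damage_ratio(h_baseline, h_attack):
    """Relative loss in hit ratio attributable to the attack:
    0.0 means no damage, 1.0 means the cache became useless."""
    return (h_baseline - h_attack) / h_baseline
```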

4 citations


Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, the authors focus on the forwarding strategy used for data delivery in NDN-based networking, survey current measures that apply a clustering approach to detect and defend against cache pollution attacks, and propose an improved decision-tree defense.
Abstract: Information-centric networking (ICN) is a candidate future Internet architecture that addresses the content spoofing attacks of the current Internet and is well suited to IoT-based applications. In this paper, we concentrate on the forwarding strategy used for data delivery in NDN-based networking: it prioritizes content by name prefix and content parameters and forwards packets through the named data network according to demand. Based on Interest content priority, Named Data Networking accelerates forwarding and reduces traffic, serving requests with low latency. However, a future Internet router's cache can be overflowed with non-popular content by a cache pollution attack (CPA), in which the router keeps receiving requests for unpopular content. Detecting and defending against such attacks is especially difficult because CPA requests look like any other consumer request. We survey existing countermeasures that apply a clustering approach to detect and defend against CPAs. Finally, we propose an improved decision-tree method: once an attack is detected, an attack table is updated to record the abnormal requests; such requests are still forwarded, but the corresponding content chunks are not cached. We evaluate the technique through simulations in ndnSIM.
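The attack-table defense described above (forward flagged requests, but never cache their chunks) can be sketched as a thin wrapper around any cache store. The class and method names are illustrative assumptions.

```python
class AttackAwareCache:
    """Sketch of the 'attack table' defense: flagged (abnormal) content
    names are still served/forwarded, but their chunks are never cached."""

    def __init__(self, cache):
        self.cache = cache          # any dict-like content store
        self.attack_table = set()   # content names flagged as abnormal

    def flag(self, name):
        """Record an abnormal request name once an attack is detected."""
        self.attack_table.add(name)

    def on_data(self, name, chunk):
        """Handle a returning Data packet: forward it either way, but only
        admit it to the cache when the name is not under suspicion."""
        if name not in self.attack_table:
            self.cache[name] = chunk
        return chunk  # data is forwarded to the requester regardless
```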

3 citations


Journal ArticleDOI
TL;DR: This research investigates the L1 (primary) cache and the L2 (secondary) cache of a proxy server to estimate the average usage period of the LRU, LRU_AVL, and LRU_BST cache algorithms.
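The baseline LRU policy being compared can be sketched with an ordered map; the AVL and BST variants named in the study change the lookup structure, not the eviction rule. This minimal implementation is an illustration, not the study's code.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: on overflow, evict the entry that has
    gone longest without being read or written."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order tracks recency

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```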

2 citations


Posted Content
TL;DR: In this paper, a cache-level prediction method is proposed to complement data prefetching, which predicts which memory hierarchy level a load will access allowing the memory loads to start earlier and thereby save many cycles.
Abstract: High load latency resulting from deep cache hierarchies and relatively slow main memory is an important limiter of single-thread performance. Data prefetching helps reduce this latency by fetching data up the hierarchy before it is requested by load instructions; however, it has been shown to be imperfect in many situations. We propose cache-level prediction to complement prefetchers: our method predicts which memory-hierarchy level a load will access, allowing memory loads to start earlier and thereby saving many cycles. The predictor provides high prediction accuracy at the cost of just one cycle of added latency on L1 misses. Experimental results show a speedup of 7.8% on generic, graph, and HPC applications over a baseline with aggressive prefetchers.
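The prediction idea can be illustrated with a deliberately simple last-outcome table: guess that a load (keyed by its program counter) will hit the same hierarchy level it hit last time. This is a generic stand-in, not the paper's predictor design.

```python
class CacheLevelPredictor:
    """Last-outcome cache-level predictor: per-PC table of the most
    recently observed hit level, consulted before the load issues so the
    access can start at the predicted level."""

    def __init__(self, default="L1"):
        self.table = {}         # PC -> last observed level
        self.default = default  # first-time loads assume an L1 hit

    def predict(self, pc):
        return self.table.get(pc, self.default)

    def update(self, pc, observed_level):
        """Train on the actual outcome once the load resolves."""
        self.table[pc] = observed_level
```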

1 citation


Patent
05 Jan 2021
TL;DR: In this paper, the authors determine a cache pollution ratio of executed loads to completed loads, provide it to a branch prediction unit, and alter load instructions for the branch based on that ratio.
Abstract: Aspects of the present disclosure relate to control of speculative demand loads. In some embodiments, the method includes receiving instructions for a branch in a program, detecting that the branch load is in the cache, monitoring the number of completed loads for the program, determining a cache pollution ratio of executed loads to completed loads, providing the cache pollution ratio to a branch prediction unit, and altering load instructions for the branch based on the cache pollution ratio.
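The ratio itself is simple arithmetic; a sketch with an assumed threshold shows how the branch prediction unit could act on it. The threshold value and function names are illustrative assumptions, not taken from the patent.

```python
RATIO_THRESHOLD = 1.25  # assumed tuning knob, not specified by the patent

def cache_pollution_ratio(executed_loads, completed_loads):
    """Ratio of speculatively executed loads to completed (retired) loads.
    A ratio well above 1 means many mis-speculated loads touched the
    cache without ever retiring, i.e., likely cache pollution."""
    return executed_loads / completed_loads

def should_suppress_speculative_loads(executed_loads, completed_loads):
    """Decision the branch prediction unit could act on: throttle
    speculative demand loads for the branch when pollution is high."""
    return cache_pollution_ratio(executed_loads, completed_loads) > RATIO_THRESHOLD
```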

1 citation


Proceedings ArticleDOI
12 Jul 2021
TL;DR: In this article, the authors proposed a method of detecting cache pollution attack hosts using a limited amount of memory resources based on a Bloom filter using the combination of identifiers of host and content as keys.
Abstract: To provide web browsing and video streaming services at desirable quality, cache servers are widely used to deliver digital data to users from nearby locations. For example, in mobile edge computing (MEC), cache memories are provided at the base stations of 5G cellular networks to reduce the traffic load on the backhaul networks; cache servers are also connected to many edge routers in content delivery networks (CDN) and provided at routers in information-centric networking (ICN). However, the cache pollution attack (CPA), which degrades the cache hit ratio by intentionally sending many requests for non-popular contents, is a serious threat in such cache networks. Quickly detecting CPA hosts and protecting the cache servers is important for utilizing cache resources effectively. In this paper, we therefore propose a method of accurately detecting CPA hosts using a limited amount of memory. The method is based on a Bloom filter keyed by the combination of host and content identifiers, and we use two Bloom filters in parallel to detect CPA hosts continuously. Numerical evaluations show that the proposed method suppresses the degradation of the cache hit ratio caused by the CPA while avoiding false identification of legitimate hosts.
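The (host, content)-keyed Bloom filter can be sketched as follows: a host issuing many distinct unpopular requests produces a stream of first-time keys, which is the CPA signature a detector would count per host. The filter parameters and helper names are assumptions; the two-filter rotation from the paper is omitted for brevity.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m bits, k hash positions per key."""

    def __init__(self, m=8192, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

def seen_before(bf, host, content):
    """Membership test keyed on the (host, content) pair; records the
    pair on a miss so repeats of the same request are recognized."""
    key = f"{host}|{content}"
    if key in bf:
        return True
    bf.add(key)
    return False
```

A per-host counter of first-time keys within a window would then separate CPA hosts (many distinct misses) from legitimate hosts (mostly repeats).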

Patent
04 Feb 2021
TL;DR: In this paper, a cache usage measure calculation device (1) is provided with: a memory from which data is read and to which data are written; a cache which can be accessed at a higher speed than the memory; a central processing unit which performs processing by reading from and writing to the memory and the cache; a usage status measurement unit which measures the status of the usage of the cache by applications (11a, 11b).
Abstract: A cache usage measure calculation device (1) is provided with: a memory from which data is read and to which data is written; a cache which can be accessed at a higher speed than the memory; a central processing unit which performs processing by reading from and writing to the memory and the cache; a usage status measurement unit which measures the status of the usage of the cache by applications (11a, 11b) executed by the central processing unit; a performance measurement unit which measures the cache sensitivity and/or the cache pollution level with respect to the applications (11a, 11b); and a measure calculation unit which calculates a measure of the cache sensitivity and/or the cache pollution level for each of a plurality of pre-selected applications from performance degradation of the pre-selected applications and the usage status of the cache.

Journal ArticleDOI
TL;DR: This letter proposes a novel prefetching technique that is free from cache pollution and thus achieves high performance on multicore processors; it consumes 3.2% more DRAM power but achieves 12% higher performance on average.
Abstract: In this letter, we propose a novel prefetching technique that is free from cache pollution and thus achieves high performance on multicore processors. Unlike conventional prefetchers, which make incorrect predictions, the proposed prefetcher reads instructions in advance only along determined paths, and charges the dynamic random access memory (DRAM) cells storing instructions on undetermined paths by refreshing them; highly charged DRAM cells can then be accessed quickly. Since caches served by our prefetcher always store useful instructions, they are free from the cache pollution that lowers the cache hit rate. When the SPEC CPU2006 benchmarks run on an 8-core processor, the proposed prefetcher consumes 3.2% more DRAM power but achieves 12% higher performance on average.