Federated Deep Reinforcement Learning for Recommendation-Enabled Edge Caching in Mobile Edge-Cloud Computing Networks
01 Mar 2023 · IEEE Journal on Selected Areas in Communications, Vol. 41, Iss. 3, pp. 690-705
TL;DR: In this paper, a decentralized recommendation-enabled edge caching framework is proposed for mobile two-tier (edge-cloud) computing networks, which leverages a discrete multi-agent variant of soft actor-critic and federated learning.
Abstract: To support rapidly increasing services and applications from users, multi-tier computing has emerged as a promising system-level computing architecture that distributes computing/caching/communication/networking capabilities from cloud servers down to users, in particular by deploying edge servers at network edges (e.g., base stations). However, due to users' heterogeneous content requests and the high cost of serving direct hits alone, edge caching remains one of the most pressing issues to address. In this paper, we investigate recommendation-enabled edge caching in mobile two-tier (edge-cloud) computing networks. In particular, we integrate recommender systems with edge caching to support both direct hits and soft hits and thus improve the resource utilization of edge servers. We model the factors affecting user quality of experience as a comprehensive system cost and formulate the problem as a multi-agent Markov decision process with the goal of minimizing the long-term average system cost. To address the formulated problem, we propose a decentralized recommendation-enabled edge caching framework that leverages a discrete multi-agent variant of soft actor-critic and federated learning. The proposed framework enables each edge server to learn its best policy locally and generate judicious decisions independently. Finally, trace-driven simulation results demonstrate that the proposed framework converges to a better caching policy and outperforms several existing algorithms in average system cost reduction.
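The direct-hit vs. soft-hit distinction above can be sketched as a toy serving decision. This is a minimal illustration, not the paper's actual cost model: the function name, the fixed costs (`c_edge`, `c_soft`, `c_cloud`), and the similarity threshold `tau` are all made-up assumptions.

```python
def serve_request(request, cache, similarity,
                  c_edge=1.0, c_soft=2.0, c_cloud=10.0, tau=0.8):
    """Toy cost model: a direct hit (requested item cached) is cheapest;
    a 'soft hit' recommends a cached related item whose similarity to the
    request exceeds tau; otherwise the cloud serves the request.
    All costs and the threshold are illustrative."""
    if request in cache:
        return "direct", c_edge
    # Most similar cached item to the requested one (0.0 if unknown pair).
    related = max(cache, key=lambda item: similarity.get((request, item), 0.0),
                  default=None)
    if related is not None and similarity.get((request, related), 0.0) >= tau:
        return "soft", c_soft
    return "cloud", c_cloud

cache = {"a", "b"}
sim = {("c", "a"): 0.9, ("d", "a"): 0.3}
print(serve_request("a", cache, sim))  # ('direct', 1.0)
print(serve_request("c", cache, sim))  # ('soft', 2.0)  -- recommend cached "a"
print(serve_request("d", cache, sim))  # ('cloud', 10.0)
```

A recommendation-enabled policy in this spirit trades a small quality cost (serving a related item) for a large transport saving, which is why soft hits improve edge resource utilization.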
TL;DR: The history of MovieLens and the MovieLens datasets is documented, including a discussion of lessons learned from running a long-standing, live research platform from the perspective of a research organization, along with best practices and limitations of using the MovieLens datasets in new research.
Abstract: The MovieLens datasets are widely used in education, research, and industry. They are downloaded hundreds of thousands of times each year, reflecting their use in popular press programming books, traditional and online courses, and software. These datasets are a product of member activity in the MovieLens movie recommendation system, an active research platform that has hosted many experiments since its launch in 1997. This article documents the history of MovieLens and the MovieLens datasets. We include a discussion of lessons learned from running a long-standing, live research platform from the perspective of a research organization. We document best practices and limitations of using the MovieLens datasets in new research.
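For readers unfamiliar with the datasets' shape, here is a minimal sketch of working with rows in the `ratings.csv` schema used by the MovieLens "latest" releases (`userId,movieId,rating,timestamp`). The sample rows below are fabricated for illustration; a real analysis would read the downloaded file instead of an inline string.

```python
import csv
import io
from collections import defaultdict

# A few rows in the ml-latest ratings.csv schema; the values are made up.
sample = """userId,movieId,rating,timestamp
1,1,4.0,964982703
1,3,4.0,964981247
2,1,5.0,964982224
2,6,3.5,964982931
"""

# Group ratings by movie, then compute per-movie mean rating.
by_movie = defaultdict(list)
for row in csv.DictReader(io.StringIO(sample)):
    by_movie[row["movieId"]].append(float(row["rating"]))

mean_rating = {movie: sum(r) / len(r) for movie, r in by_movie.items()}
print(mean_rating["1"])  # 4.5 -- mean of the two ratings for movieId 1
```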
28 Dec 2015
TL;DR: The article explains the motivations behind, and the approach Netflix uses for, improving its recommendation algorithms: A/B testing focused on member retention and medium-term engagement, combined with offline experimentation using historical member engagement data.
Abstract: This article discusses the various algorithms that make up the Netflix recommender system, and describes its business purpose. We also describe the role of search and related algorithms, which for us turns into a recommendations problem as well. We explain the motivations behind and review the approach that we use to improve the recommendation algorithms, combining A/B testing focused on improving member retention and medium term engagement, as well as offline experimentation using historical member engagement data. We discuss some of the issues in designing and interpreting A/B tests. Finally, we describe some current areas of focused innovation, which include making our recommender system global and language aware.
19 Jun 2010
TL;DR: This paper proposes Static RRIP (SRRIP), which is scan-resistant, and Dynamic RRIP (DRRIP), which is both scan-resistant and thrash-resistant; both policies require only 2 bits per cache block and integrate easily into the LRU approximations found in modern processors.
Abstract: Practical cache replacement policies attempt to emulate optimal replacement by predicting the re-reference interval of a cache block. The commonly used LRU replacement policy always predicts a near-immediate re-reference interval on cache hits and misses. Applications that exhibit a distant re-reference interval perform badly under LRU. Such applications usually have a working set larger than the cache or have frequent bursts of references to non-temporal data (called scans). To improve the performance of such workloads, this paper proposes cache replacement using Re-reference Interval Prediction (RRIP). We propose Static RRIP (SRRIP), which is scan-resistant, and Dynamic RRIP (DRRIP), which is both scan-resistant and thrash-resistant. Both RRIP policies require only 2 bits per cache block and easily integrate into existing LRU approximations found in modern processors. Our evaluations using PC games, multimedia, server, and SPEC CPU2006 workloads on a single-core processor with a 2MB last-level cache (LLC) show that SRRIP and DRRIP outperform LRU replacement on the throughput metric by an average of 4% and 10%, respectively. Our evaluations with over 1000 multi-programmed workloads on a 4-core CMP with an 8MB shared LLC show that SRRIP and DRRIP outperform LRU replacement on the throughput metric by an average of 7% and 9%, respectively. We also show that RRIP outperforms LFU, the state-of-the-art scan-resistant replacement algorithm to date. For the cache configurations under study, RRIP requires 2X less hardware than LRU and 2.5X less hardware than LFU.
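The core SRRIP mechanism described above can be sketched in a few lines. This is a simplified software model, not the hardware design: it models a single fully-associative cache (real LLCs apply RRIP per set), uses the hit-priority variant (hits reset the prediction to near-immediate), and the class name is my own.

```python
class SRRIPCache:
    """Minimal sketch of SRRIP with M=2, i.e. a 2-bit re-reference
    prediction value (RRPV) per block. Illustrative only."""
    MAX_RRPV = 3  # 2**M - 1: the "distant re-reference" prediction

    def __init__(self, capacity):
        self.capacity = capacity
        self.rrpv = {}  # block -> current RRPV

    def access(self, block):
        """Returns True on a hit, False on a miss."""
        if block in self.rrpv:
            self.rrpv[block] = 0          # hit: predict near-immediate reuse
            return True
        if len(self.rrpv) >= self.capacity:
            # Age all blocks until some block reaches the distant prediction,
            # then evict one such block.
            while all(v < self.MAX_RRPV for v in self.rrpv.values()):
                for b in self.rrpv:
                    self.rrpv[b] += 1
            victim = next(b for b, v in self.rrpv.items()
                          if v == self.MAX_RRPV)
            del self.rrpv[victim]
        # Insert with a "long" (not distant) prediction: scanned-once blocks
        # are evicted quickly, while re-referenced blocks earn protection.
        self.rrpv[block] = self.MAX_RRPV - 1
        return False
```

This shows why SRRIP is scan-resistant: a burst of never-reused blocks enters near the distant prediction and is evicted before it can displace blocks that have demonstrated reuse.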
01 Nov 2010
TL;DR: A measurement study on datasets crawled from YouTube finds that related-video recommendation, which suggests videos related to the one a user is watching, is one of the most important sources of video views.
Abstract: Hosting a collection of millions of videos, YouTube offers several features to help users discover videos of interest, such as video search, related-video recommendation, and front-page highlights. Understanding how these features drive video views is useful for creating a strategy to drive video popularity. In this paper, we perform a measurement study on datasets crawled from YouTube and find that related-video recommendation, which suggests videos related to the one a user is watching, is one of the most important sources of video views. Although YouTube video search is the number-one source of views in aggregate, related-video recommendation is the main source of views for the majority of videos on YouTube. Furthermore, our results reveal a strong correlation between the view count of a video and the average view count of its top referrer videos. This implies that a video has a higher chance of becoming popular when it is placed on the related-video recommendation lists of popular videos. We also find that the click-through rate from a video to its related videos is high and that a video's position in a related-video list plays a critical role in its click-through rate. Finally, our evaluation of the impact of the related-video recommendation system on the diversity of video views indicates that the current recommendation system helps increase the diversity of video views in aggregate.
TL;DR: This work proposes a federated deep-reinforcement-learning-based cooperative edge caching (FADE) framework that enables base stations to cooperatively learn a shared predictive model, and proves the convergence of FADE in expectation.
Abstract: Edge caching is an emerging technology for addressing massive content access in mobile networks to support rapidly growing Internet-of-Things (IoT) services and applications. However, most current optimization-based methods lack self-adaptability in dynamic environments, and current learning-based approaches are generally centralized, so network resources may be overconsumed during training and data transmission. To address these complex and dynamic control issues, we propose a federated deep-reinforcement-learning-based cooperative edge caching (FADE) framework. FADE enables base stations (BSs) to cooperatively learn a shared predictive model: the first-round training parameters of the BSs serve as the initial input for local training, after which each BS uploads its near-optimal local parameters to participate in the next round of global training. Furthermore, we prove the convergence of FADE in expectation. Trace-driven simulation results demonstrate the effectiveness of the proposed FADE framework in reducing performance loss and average delay, offloading backhaul traffic, and improving the hit rate.
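The round structure described above — local training at each BS followed by aggregation into a shared model that seeds the next round — is the federated-averaging pattern. The sketch below shows only that pattern, under loud assumptions: the "local training" is a synthetic gradient step standing in for each BS's DRL update, and the function names and shapes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_training(params, steps=1, lr=0.05):
    """Stand-in for one round of a BS's local DRL training;
    the gradient here is synthetic noise, purely for illustration."""
    for _ in range(steps):
        params = params - lr * rng.normal(size=params.shape)
    return params

def federated_round(bs_params):
    """One federated round: every base station trains locally, then the
    averaged ('global') parameters are broadcast back to seed the next
    round of local training at every BS."""
    local = [local_training(p) for p in bs_params]
    global_params = np.mean(local, axis=0)
    return [global_params.copy() for _ in local]

bs_params = [rng.normal(size=6) for _ in range(5)]  # 5 base stations
for _ in range(3):                                  # 3 communication rounds
    bs_params = federated_round(bs_params)

# After aggregation, every BS starts the next round from the same model.
assert all(np.allclose(p, bs_params[0]) for p in bs_params)
```

Only parameters cross the network in this pattern, never raw request traces, which is the resource-consumption advantage the abstract claims over centralized training.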