
Showing papers on "Cache published in 2022"


Journal ArticleDOI
TL;DR: In this article, a joint content caching and user association optimization problem is formulated to minimize the content download latency, and a joint CC and UA optimization algorithm (JCC-UA) is proposed.
Abstract: Deploying small cell base stations (SBSs) under the coverage area of a macro base station (MBS) and caching popular contents at the SBSs in advance are effective means to provide high-speed and low-latency services in next-generation mobile communication networks. In this paper, we investigate the problem of content caching (CC) and user association (UA) for edge computing. A joint CC and UA optimization problem is formulated to minimize the content download latency. We prove that the joint CC and UA optimization problem is NP-hard. Then, we propose a joint CC and UA algorithm (JCC-UA) to reduce the content download latency. JCC-UA includes a smart content caching policy (SCCP) and dynamic user association (DUA). SCCP utilizes the exponential smoothing method to predict content popularity and caches contents according to the prediction results. DUA includes a rapid association (RA) method and a delayed association (DA) method. Simulation results demonstrate that the proposed JCC-UA algorithm can effectively reduce the latency of user content downloading and improve the hit rates of contents cached at the BSs as compared to several baseline schemes.
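The exponential-smoothing step of SCCP can be sketched as follows; the smoothing factor and the toy request counts are illustrative assumptions, not values from the paper.

```python
def smooth_popularity(history, alpha=0.7):
    """Exponentially smoothed popularity estimate for one content item.

    history: per-period request counts, oldest first.
    alpha:   smoothing factor; 0.7 is an assumed value, the paper does
             not fix one here. Higher alpha weights recent demand more.
    """
    estimate = float(history[0])
    for count in history[1:]:
        estimate = alpha * count + (1 - alpha) * estimate
    return estimate

def select_cache(histories, capacity):
    """Cache the `capacity` contents with the highest predicted popularity."""
    ranked = sorted(histories, key=lambda c: smooth_popularity(histories[c]),
                    reverse=True)
    return set(ranked[:capacity])

# Content "a" is trending up, "b" trending down, "c" flat.
demo = {"a": [1, 5, 9], "b": [9, 2, 1], "c": [3, 3, 3]}
print(select_cache(demo, capacity=2))  # {'a', 'c'}: recent demand dominates
```

With a high smoothing factor the declining content "b" is ranked below the flat content "c", which is the point of using smoothing rather than raw cumulative counts.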

85 citations


Journal ArticleDOI
Tilahun Muluneh
TL;DR: In this paper, an improved caching scheme named robot helper aided caching (RHAC) is proposed to optimize the system performance by moving the robot helpers to the optimal positions, which can bring significant performance improvements in terms of hitting probability, cost, delay and energy consumption.

54 citations



Journal ArticleDOI
TL;DR: A caching approach with balanced content distribution among network devices is proposed and shows better performance than the other three protocols used in the comparison.
Abstract: Information-centric networking (ICN) emphasizes content retrieval without much concern for the location of the content's actual producer. This novel networking paradigm makes content retrieval faster and less expensive by shifting data provisioning to the content holder rather than the content owner. Caching is the feature of ICN that makes content serving possible from any intermediate device, and efficient caching is one of the primary requirements for effective deployment of ICN. In this paper, a caching approach with balanced content distribution among network devices is proposed. The selection of contents to be cached is driven by content popularity, computed using Zipf's law; the dynamic change in popularity of contents is also considered when making caching decisions. For balancing the cached content across the network, every router keeps track of its neighbors' cache status. Three parameters, the proportionate distance of the router from the client (pd), the router congestion (rc), and the cache status (cs), are contemplated to select a router for caching contents. The new caching approach is evaluated in a simulated environment using ndnSIM-2.0. Three state-of-the-art approaches, Leave Copy Everywhere (LCE), a centrality-measures-based algorithm (CMBA), and probability-based caching (ProbCache), are considered for comparison. The proposed caching method shows better performance than the other three protocols used in the comparison.
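A Zipf-based popularity model of the kind the selection step relies on can be sketched as follows; the exponent `s` is an assumed parameter, not a value from the paper.

```python
def zipf_popularity(num_contents, s=1.0):
    """Request probability of each content rank under Zipf's law.

    The content ranked k (1-based) gets weight 1/k**s; weights are
    normalised so the probabilities sum to 1.
    """
    weights = [1.0 / k ** s for k in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = zipf_popularity(5)
print([round(p, 3) for p in probs])  # top-ranked content takes the largest share
```

With `s = 1` the rank-1 content is exactly twice as popular as the rank-2 content, which is why caching only the head of the ranking already captures most requests.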

33 citations


Journal ArticleDOI
TL;DR: In this article, the authors design a strategy that uses a reinforcement learning algorithm to optimize cache schemes on different devices to maximize the efficiency of content caching, which can enhance the cache hit ratio by 10%-20% compared with well-known counterparts.
Abstract: The rapid development of 6G can help bring autonomous driving closer to reality. Drivers and passengers will have more time for work and leisure in their vehicles, generating substantial data requirements. However, edge resources from small base stations are insufficient to match the wide variety of services of future vehicular networks. Besides, due to the high speed of the vehicles, users have to switch connections among different base stations, which introduces extra latency during data requests. Therefore, it is vital to exploit the local cache of vehicle users to realize reliable autonomous driving. In this paper, we consider caching contents in the local cache, the small base station, and the edge server. In practice, the request preference of a single user may differ from that of the whole region. To maximize the efficiency of content caching, we design a strategy that uses a reinforcement learning algorithm to optimize cache schemes on different devices. The experimental results demonstrate that our strategy can enhance the cache hit ratio by 10%-20% compared with well-known counterparts.
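The paper does not specify its reinforcement-learning formulation, so the following is only a toy sketch of the idea: a Q-value per content for the "cache it" action, updated toward the observed hit reward. The state, action, and reward design are all assumptions.

```python
import random

class RLCache:
    """Toy Q-learning caching policy (a stand-in for the paper's
    unspecified RL formulation; the reward design is an assumption).

    For each content, action 1 = cache it, action 0 = skip it.
    Reward is 1 when a cached content is re-requested (a hit), else 0.
    """
    def __init__(self, capacity, lr=0.5, eps=0.1):
        self.capacity, self.lr, self.eps = capacity, lr, eps
        self.q = {}          # (content, action) -> estimated value
        self.cache = []      # FIFO eviction keeps the sketch short

    def request(self, content):
        hit = content in self.cache
        # Move the value of caching this content toward the observed
        # reward (1 on hit, 0 on miss).
        old = self.q.get((content, 1), 0.0)
        self.q[(content, 1)] = old + self.lr * (float(hit) - old)
        if not hit:
            explore = random.random() < self.eps
            if explore or self.q[(content, 1)] >= self.q.get((content, 0), 0.0):
                if len(self.cache) >= self.capacity:
                    self.cache.pop(0)
                self.cache.append(content)
        return hit

cache = RLCache(capacity=2, eps=0.0)   # greedy, for a deterministic demo
print(cache.request("a"))  # False: first request misses
print(cache.request("a"))  # True: "a" was cached after the miss
```

Frequently re-requested contents accumulate higher Q-values and therefore keep earning cache slots, which is the behaviour the learned per-device cache schemes aim for.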

27 citations


Proceedings ArticleDOI
28 Mar 2022
TL;DR: GNNLab adopts a factored design for multiple GPUs, where each GPU is dedicated to the task of graph sampling or model training, and proposes a new pre-sampling based caching policy that takes both sampling algorithms and GNN datasets into account and shows efficient and robust caching performance.
Abstract: We propose GNNLab, a sample-based GNN training system in a single machine multi-GPU setup. GNNLab adopts a factored design for multiple GPUs, where each GPU is dedicated to the task of graph sampling or model training. It accelerates both tasks by eliminating GPU memory contention. To balance GPU workloads, GNNLab applies a global queue to bridge GPUs asynchronously and adopts a simple yet effective method to adaptively allocate GPUs for different tasks. GNNLab further leverages temporary switching to avoid idle waiting on GPUs. Furthermore, GNNLab proposes a new pre-sampling based caching policy that takes both sampling algorithms and GNN datasets into account and shows efficient and robust caching performance. Evaluations on three representative GNN models and four real-life graphs show that GNNLab outperforms the state-of-the-art GNN systems DGL and PyG by up to 9.1× (from 2.4×) and 74.3× (from 10.2×), respectively. In addition, our pre-sampling based caching policy achieves 90% -- 99% of the optimal cache hit rate in all experiments.
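The pre-sampling idea, namely running the sampler for a few trial epochs and then caching the most frequently touched vertices' features, can be sketched as follows. The toy sampler below is purely illustrative; in GNNLab the real sampling algorithm would play that role.

```python
import random
from collections import Counter

def presample_cache(sample_fn, num_epochs, cache_slots):
    """Pick which vertices' features to cache by counting accesses
    during a few trial sampling epochs.

    sample_fn() stands in for one epoch of graph sampling and returns
    the vertex ids it touched.
    """
    counts = Counter()
    for _ in range(num_epochs):
        counts.update(sample_fn())
    return {v for v, _ in counts.most_common(cache_slots)}

# Toy sampler: vertex 0 is a hub touched in every epoch.
random.seed(7)
def toy_sampler():
    return [0] + random.sample(range(1, 100), 5)

print(presample_cache(toy_sampler, num_epochs=3, cache_slots=1))  # {0}
```

Because the cached set reflects the sampling algorithm's own access distribution rather than a generic popularity heuristic, the policy adapts to whatever sampler and dataset are in use.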

24 citations


Journal ArticleDOI
TL;DR: In this paper, a blockchain-based secure cost-aware data caching scheme is proposed to optimize cache placement and prevent tampering with cached data; under the constraints of transmission cost and edge cache size, a quantum particle swarm optimization (QPSO) algorithm is used to solve the data cache placement problem with the greatest content caching gain.
Abstract: With the continuous growth in the amount of data generated in the edge-cloud environment, security risks in traditional centralized data management platforms have raised concerns. Blockchain technology can be applied to guarantee safety and information transparency in data caching and trading processes. Therefore, a blockchain-based secure cost-aware data caching scheme is proposed to optimize the placement and prevent the tampering of cached data. In this scheme, under the constraints of transmission cost and edge cache size, a quantum particle swarm optimization (QPSO) algorithm is used to solve the data cache placement problem with the greatest content caching gain. A blockchain-based secure decentralized data trading model is proposed to solve the trust problem among buyers, sellers, and agent nodes and to increase incentives for users to trade data. A double auction mechanism is used to maximize social welfare. The experimental results reveal that the proposed data caching and trading scheme can reduce the data transmission cost, improve the cache hit ratio, and maximize social welfare.

24 citations


Journal ArticleDOI
TL;DR: In this paper, a low-latency edge caching method is proposed to reduce user access latency and more effectively cache diverse content in the edge network; a cache model based on base station cooperation is established, and the delay in different transmission modes is considered.
Abstract: With the increase of mobile terminal equipment and massive network data, users have higher requirements for delay and service quality. To reduce user access latency and more effectively cache diverse content in the edge network, a low-latency edge caching method is proposed. A cache model based on base station cooperation is established, and the delay in different transmission modes is considered. Finally, the problem of minimizing latency is transformed into a problem of maximizing cache reward, and a greedy algorithm based on a primal-dual interior-point method is used to obtain a strategy for the original problem. Meanwhile, in order to improve service quality, a migration method that balances communication overhead against migration overhead is proposed: a model capturing this trade-off is established, and a reinforcement learning method is used to obtain a migration scheme that maximizes accumulated revenue. Comparison results show that our caching method can enhance the cache reward and reduce delay, while the migration algorithm can increase service migration revenue and reduce communication overhead.

24 citations



Journal ArticleDOI
TL;DR: Wang et al. propose CREAT, an algorithm in which a blockchain-assisted compressed federated learning scheme is applied to content caching to predict cached files, where each edge node uses local data to train a model and then uses the model to predict popular files to improve the cache hit rate.
Abstract: Edge computing architectures can help us quickly process the data collected by the Internet of Things (IoT), and caching files at edge nodes can speed up responses to IoT devices' file requests. Blockchain architectures can help us ensure the security of data transmitted by IoT. Therefore, we propose a system that combines IoT devices, edge nodes, remote cloud, and blockchain. In this system, we design a new algorithm, called CREAT, in which a blockchain-assisted compressed federated learning algorithm is applied to content caching to predict cached files. In the CREAT algorithm, each edge node uses local data to train a model and then uses the model to learn the features of users and files, so as to predict popular files and improve the cache hit rate. In order to ensure the security of edge nodes' data, we use federated learning (FL) to enable multiple edge nodes to cooperate in training without sharing data. In addition, to reduce the communication load in FL, we compress the gradients uploaded by edge nodes, reducing the time required for communication. Moreover, to ensure the security of the data transmitted in the CREAT algorithm, we incorporate blockchain technology, designing four smart contracts for decentralized entities to record and verify transactions. Experiments on the MovieLens data set show that CREAT greatly improves the cache hit rate and reduces the time required to upload data.

23 citations


Journal ArticleDOI
TL;DR: In this paper, a cooperative edge caching strategy based on an energy-latency balance is proposed to address the high power consumption and latency caused by processing computationally intensive applications; the optimal solution of the model is obtained by the deep Q-network (DQN) algorithm.

Journal ArticleDOI
TL;DR: In this article, a social-aware vehicular edge caching mechanism that dynamically orchestrates the cache capability of roadside units (RSUs) and smart vehicles according to user preference similarity and service availability is proposed.
Abstract: The rapid proliferation of smart vehicles along with the advent of powerful applications bring stringent requirements on massive content delivery. Although vehicular edge caching can facilitate delay-bounded content transmission, constrained storage capacity and limited serving range of an individual cache server as well as highly dynamic topology of vehicular networks may degrade the efficiency of content delivery. To address the problem, in this article, we propose a social-aware vehicular edge caching mechanism that dynamically orchestrates the cache capability of roadside units (RSUs) and smart vehicles according to user preference similarity and service availability. Furthermore, catering to the complexity and variability of vehicular social characteristics, we leverage the digital twin technology to map the edge caching system into virtual space, which facilitates constructing the social relation model. Based on the social model, a new concept of vehicular cache cloud is developed to incorporate the correlation of content storing between multiple cache-enabled vehicles in diverse traffic environments. Then, we propose deep learning empowered optimal caching schemes, jointly considering the social model construction, cache cloud formation, and cache resource allocation. We evaluate the proposed schemes based on real traffic data. Numerical results demonstrate that our edge caching schemes have great advantages in optimizing caching utility.

Journal ArticleDOI
TL;DR: In this paper, the challenges of NDN-IoT caching are identified with the aim of developing a new hybrid strategy for efficient data delivery, which is compared extensively with existing NDN-IoT caching strategies through simulation in terms of average latency, cache hit ratio, and average stretch ratio.
Abstract: The Internet of Things (IoT) and named data networking (NDN) are innovative technologies for meeting future Internet requirements. NDN is considered an enabling approach to improving data dissemination in IoT scenarios. NDN delivers in-network caching, its most prominent feature, which provides faster data dissemination than Internet Protocol (IP)-based communication. The proper integration of caching placement strategies and replacement policies is the most suitable approach to support IoT networks: it can improve multicast communication, minimizing response delay in IoT-based environments, and plays a significant role in increasing the overall performance of NDN-based IoT networks. To this end, in this article, the challenges of NDN-IoT caching are identified with the aim of developing a new hybrid strategy for efficient data delivery. The proposed strategy is compared extensively with existing NDN-IoT caching strategies through simulation in terms of average latency, cache hit ratio, and average stretch ratio. The simulation findings show that the proposed hybrid strategy outperforms its counterparts, achieving higher caching performance in NDN-based IoT scenarios.

Journal ArticleDOI
TL;DR: In this article, a data caching scheme is proposed by taking the spatial-temporal characteristics of data into account to improve the service performance, e.g., reducing the delay of data acquisition.
Abstract: In the Internet of Vehicles (IoV), the classic TCP/IP stack still plays an important role in data transmission, traffic control, and address assignment. However, with increasing requirements on content retrieval efficiency in IoV, the drawbacks of traditional TCP/IP stacks, such as weak scalability in large networks, low efficiency in dense environments, and unreliable addressing under high mobility, have incurred significant performance degradations in vehicular environments. Fortunately, the emerging Named Data Networking (NDN) technology provides a good way to address the above issues in vehicular environments by providing content caching capability through its content store module, and it has boosted research on Vehicular Named Data Networking (VNDN) in the last few years. In this paper, to improve service performance, e.g., reducing the delay of data acquisition, a data caching scheme is proposed that takes the spatial-temporal characteristics of data into account. First, we divide the data in a VNDN into emergency safety messages, traffic efficiency messages, and service messages, according to the application requirements. Then, we analyze the spatial-temporal characteristics of these three message categories and design the caching strategy accordingly. Experimental results from the ndnSIM platform show that our scheme achieves approximately 50% performance enhancement compared with the Leave Copy Everywhere (LCE), Pro(0.7), and Pro(0.2) data caching protocols in terms of average hit rate, average hop count, and average cache replacement times, which verifies the reliability and effectiveness of the proposed data caching scheme.

Journal ArticleDOI
TL;DR: In this paper , a data placement strategy based on an improved reservoir sampling algorithm is proposed to solve the problem of intermediate data tilt in the shuffle stage of Spark, where the data skew measurement model is used to classify skewed data into skewed data, and non-skewed and coarse-grained, and fine grained placement algorithms are designed.

Journal ArticleDOI
TL;DR: This paper exploits a behavior-shaping proactive mechanism, namely, recommendation, in cache-assisted non-orthogonal multiple access (NOMA) networks, aiming at minimizing the average system’s latency and shows the superiority of the proposed joint optimization method in terms of both system latency and cache hit ratio when compared to extensive benchmark strategies.
Abstract: In this paper, we exploit a behavior-shaping proactive mechanism, namely recommendation, in cache-assisted non-orthogonal multiple access (NOMA) networks, aiming at minimizing the average system latency. Therein, the considered latency consists of two parts, i.e., the backhaul link transmission delay and the content delivery latency. Towards this end, we first examine the expression of system latency, demonstrating how it is critically determined by content cache placement, personalized recommendation, and the delivery-associated NOMA user pairing and power control strategies. Thereafter, we formulate the minimization problem mathematically, taking into account the cache capacity budget, the recommendation-oriented requirements, and the total transmit power constraint, which yields a non-convex, multi-timescale, mixed-integer programming problem. To facilitate the process, we put forth a divide-and-rule paradigm: we solve the short-term optimization problem regarding user pairing and power allocation and the long-term decision-making problem with respect to recommendation and caching, respectively. On this basis, an iterative algorithm is developed to optimize all the optimization variables alternately. Particularly, for solving the short-timescale problem, graph-theory-enabled NOMA user grouping and efficient inter-group power control methods are invoked. Meanwhile, a dynamic programming approach and a complexity-controllable swap-then-compare method with convergence assurance are designed to derive the caching and recommendation policies, respectively. Monte Carlo simulations show the superiority of the proposed joint optimization method in terms of both system latency and cache hit ratio when compared to extensive benchmark strategies.

Journal ArticleDOI
TL;DR: An efficient popularity-based cache consistency management scheme, which aims to guarantee freshness of IoT data returned by on-path routers and avoid heavy signalling costs introduced at the same time, is proposed.
Abstract: Since Internet of Things (IoT) communications can naturally enjoy many advantages brought by content-centric networking (CCN), there is increasing interest in their integration for better information retrieval and distribution. Nevertheless, unlike conventional multimedia traffic whose contents rarely change, IoT data are often transient and updated by their producers according to the actual situation. As a result, without effective countermeasures, outdated copies are inevitably stored by CCN routers and then distributed to the associated consumers, degrading both caching efficiency and user experience. In fact, most related policies take little account of information freshness for cached contents, and how to handle transient IoT data in CCN is still an ignored but crucial issue requiring further exploration. Therefore, in this article, we propose an efficient popularity-based cache consistency management scheme, which aims to guarantee the freshness of IoT data returned by on-path routers while avoiding the heavy signalling costs otherwise introduced. Extensive simulations were performed under both real-world scale-free and binary-tree topologies, and the results prove the efficiency of the proposed scheme in timely eviction of outdated IoT data stored by CCN in-network caching.
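The freshness guarantee the scheme enforces, namely that an on-path router never serves an IoT item past its producer-assigned lifetime, can be sketched as follows; the class and field names are illustrative, not from the paper.

```python
import time

class FreshnessCache:
    """Tiny content store that never serves expired IoT data.

    Each entry carries an expiry timestamp; stale copies are evicted on
    lookup, so consumers only ever receive fresh data or a miss.
    """
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.store = {}  # name -> (payload, expires_at)

    def put(self, name, payload, lifetime):
        self.store[name] = (payload, self.clock() + lifetime)

    def get(self, name):
        entry = self.store.get(name)
        if entry is None:
            return None
        payload, expires_at = entry
        if self.clock() >= expires_at:
            del self.store[name]   # timely eviction of outdated data
            return None
        return payload

# A fake clock makes expiry deterministic for the example.
now = [0.0]
cache = FreshnessCache(clock=lambda: now[0])
cache.put("/sensor/temp", 21.5, lifetime=5.0)
print(cache.get("/sensor/temp"))  # 21.5 (still fresh)
now[0] = 6.0
print(cache.get("/sensor/temp"))  # None (expired and evicted)
```

Evicting on lookup keeps the consistency check local to each router, which is how the scheme avoids the heavy signalling of producer-driven invalidation.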

Journal ArticleDOI
TL;DR: In this article, an energy-aware coded caching strategy is proposed to provide more multicast opportunities and reduce the backhaul transmission volume, considering the effects of file popularity, cache size, request frequency, and mobility in different road sections (RSs).
Abstract: The Internet of Vehicles (IoV) can offer a safe and comfortable driving experience through the enhanced advantages of space–air–ground-integrated networks (SAGINs), i.e., global seamless access, wide-area coverage, and flexible traffic scheduling. However, due to the huge volume of popular traffic, limited cache/power resources, and heterogeneous network infrastructures, the backhaul link burden is seriously enlarged, degrading the energy efficiency of IoV in SAGINs. In this article, to serve popular content to multiple vehicle users (VUs), we consider a cache-enabled satellite-UAV-vehicle-integrated network (CSUVIN), where the geosynchronous Earth orbit (GEO) satellite is regarded as a cloud server and unmanned aerial vehicles are deployed as edge caching servers. We then propose an energy-aware coded caching strategy for our system model to provide more multicast opportunities and reduce the backhaul transmission volume, considering the effects of file popularity, cache size, request frequency, and mobility in different road sections (RSs). Furthermore, we derive closed-form expressions of the total energy consumption in both single-RS and multi-RS scenarios with asynchronous and synchronous service schemes, respectively. An optimization problem is formulated to minimize the total energy consumption, and the optimal content placement matrix, power allocation vector, and coverage deployment vector are obtained by well-designed algorithms. We finally show, numerically, that our coded caching strategy can greatly improve energy efficiency in CSUVINs compared with other benchmark caching schemes under heterogeneous network conditions.

Proceedings ArticleDOI
01 May 2022
TL;DR: Fluid is a cloud-native platform that provides DL training jobs with a data abstraction called Fluid Dataset to access training data from heterogeneous sources in a unified manner with transparent and elastic data acceleration powered by auto-tuned cache runtimes.
Abstract: Nowadays, it is prevalent to train deep learning (DL) models in cloud-native platforms that actively leverage containerization and orchestration technologies for high elasticity, low and flexible operation cost, and many other benefits. However, this also faces new challenges, and our work focuses on those related to I/O throughput for training, including complex data access with complicated performance tuning, lack of cache capacity with specialized hardware to match its high and dynamic I/O requirements, and inefficient I/O resource sharing across different training jobs. We propose Fluid, a cloud-native platform that provides DL training jobs with a data abstraction called Fluid Dataset to access training data from heterogeneous sources in a unified manner, with transparent and elastic data acceleration powered by auto-tuned cache runtimes. In addition, it comes with an on-the-fly cache system autoscaler that can intelligently scale the cache capacity up and down to match the online training speed of each individual DL job. To improve the overall performance of multiple DL jobs, Fluid can co-orchestrate the data cache and DL jobs by arranging job scheduling in an appropriate order. Our experimental results show significant performance improvement for each individual DL job using dynamic computing resources with Fluid. In addition, for scheduling multiple DL jobs with the same datasets, Fluid gives around 2x performance speedup when integrated with existing widely-used and cutting-edge scheduling solutions. Fluid is now an open source project hosted by the Cloud Native Computing Foundation (CNCF) with adopters in production including Alibaba Cloud, Tencent Cloud, Weibo.com, China Telecom, etc.

Journal ArticleDOI
TL;DR: In this paper, a constrained edge data caching (CEDC) problem was formulated as a constrained optimization problem from the service provider's perspective and its hardness was proved, and an optimal approach named CEDC-IP was proposed to solve this problem with the Integer Programming technique.
Abstract: In recent years, edge computing, as an extension of cloud computing, has emerged as a promising paradigm for powering a variety of applications demanding low latency, e.g., virtual or augmented reality, interactive gaming, real-time navigation, etc. In the edge computing environment, edge servers are deployed at base stations to offer highly-accessible computing capacities to nearby end-users, e.g., CPU, RAM, storage, etc. From a service provider’s perspective, caching app data on edge servers can ensure low latency in its users’ data retrieval. Given constrained cache spaces on edge servers due to their physical sizes, the optimal data caching strategy must minimize overall user latency. In this article, we formulate this Constrained Edge Data Caching (CEDC) problem as a constrained optimization problem from the service provider’s perspective and prove its NP-hardness. We propose an optimal approach named CEDC-IP to solve this CEDC problem with the Integer Programming technique. We also provide an approximation algorithm named CEDC-A for finding approximate solutions to large-scale CEDC problems efficiently and prove its approximation ratio. CEDC-IP and CEDC-A are evaluated on a real-world data set. The results demonstrate that they significantly outperform four representative approaches.
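A greedy heuristic in the spirit of an approximation algorithm for this kind of cache-space-constrained placement can be sketched as follows. This is an illustrative knapsack-style sketch, not the paper's CEDC-A algorithm, and the app names and numbers are made up.

```python
def greedy_cache(apps, capacity):
    """Greedy sketch of edge data caching under a cache-space constraint.

    apps: {name: (size, latency_saving)}. Items are taken in order of
    latency saving per unit of cache space, skipping any that no longer fit.
    """
    chosen, used = set(), 0
    ranked = sorted(apps, key=lambda n: apps[n][1] / apps[n][0], reverse=True)
    for name in ranked:
        size, _ = apps[name]
        if used + size <= capacity:
            chosen.add(name)
            used += size
    return chosen

demo = {"maps": (2, 90), "game": (3, 60), "ar": (4, 50)}
print(greedy_cache(demo, capacity=5))  # {'maps', 'game'}
```

Ranking by saving-per-unit-size is what makes a greedy pass competitive with the exact integer-programming solution on large instances, at a fraction of the cost.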

Journal ArticleDOI
TL;DR: In this paper, a privacy-preserving distributed deep deterministic policy gradient (P2D3PG) algorithm is proposed to maximize the cache hit rates of devices in MEC networks.
Abstract: Mobile edge computing (MEC) is a prominent computing paradigm which expands the application fields of wireless communication. Due to the limitation of the capacities of user equipments and MEC servers, edge caching (EC) optimization is crucial to the effective utilization of the caching resources in MEC-enabled wireless networks. However, the dynamics and complexities of content popularities over space and time as well as the privacy preservation of users pose significant challenges to EC optimization. In this paper, a privacy-preserving distributed deep deterministic policy gradient (P2D3PG) algorithm is proposed to maximize the cache hit rates of devices in the MEC networks. Specifically, we consider the fact that content popularities are dynamic, complicated and unobservable, and formulate the maximization of cache hit rates on devices as distributed problems under the constraints of privacy preservation. In particular, we convert the distributed optimizations into distributed model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction. Subsequently, a P2D3PG algorithm is developed based on distributed reinforcement learning to solve the distributed problems. Simulation results demonstrate the superiority of the proposed approach in improving EC hit rate over the baseline methods while preserving user privacy.

Journal ArticleDOI
TL;DR: A novel dual locality-based FTL (DL-FTL) is proposed in this paper, which uses the sequential cache mapping state table (S-CMST) and sequential physical address cache mapping table (SPA-CMT) to process the sequential requests.
Abstract: NAND flash memory offers excellent performance, so it has been used as the storage device of consumer electronics such as smart phones and tablet personal computers. As the storage management software of NAND flash memory, the page-level flash translation layer (PLFTL) provides very high I/O access performance for consumer electronics. As an improved version of the PLFTL, the demand-based PLFTL selectively keeps active mapping entries in DRAM (dynamic random access memory) and mainly considers the temporal locality of workloads. However, spatial locality also appears in many workloads. To exploit both the temporal and spatial locality of workloads, a novel dual locality-based FTL (DL-FTL) is proposed in this paper. DL-FTL uses a sequential cache mapping state table (S-CMST) and a sequential physical address cache mapping table (SPA-CMT) to process sequential requests. To decrease the update counts of translation pages, the mapping entries evicted from the S-CMST are written back to NAND flash memory using a batch update strategy. The experimental results show that our proposed DL-FTL raises the cache hit ratio by up to 66.39% and reduces the system response time by up to 21.64% on average, compared with the demand-based PLFTL.
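The batch write-back of evicted mapping entries can be sketched with an LRU table as follows. This simplifies DL-FTL's S-CMST/SPA-CMT structures down to a single cache; the names and sizes are illustrative.

```python
from collections import OrderedDict

class MappingCache:
    """LRU cache of logical-to-physical mapping entries.

    Evicted entries are buffered and flushed to flash in one batch,
    cutting translation-page update counts in the spirit of DL-FTL's
    batch update strategy.
    """
    def __init__(self, capacity, batch_size, flush_fn):
        self.entries = OrderedDict()   # lpn -> ppn
        self.capacity = capacity
        self.batch = []
        self.batch_size = batch_size
        self.flush_fn = flush_fn       # writes a batch of entries to flash

    def update(self, lpn, ppn):
        if lpn in self.entries:
            self.entries.move_to_end(lpn)
        self.entries[lpn] = ppn
        if len(self.entries) > self.capacity:
            victim = self.entries.popitem(last=False)  # least recently used
            self.batch.append(victim)
            if len(self.batch) >= self.batch_size:
                self.flush_fn(self.batch)   # one translation-page update
                self.batch = []

flushed = []
cache = MappingCache(capacity=2, batch_size=2, flush_fn=flushed.extend)
for lpn in (0, 1, 2, 3):        # evicts entries 0 and 1
    cache.update(lpn, ppn=100 + lpn)
print(flushed)  # [(0, 100), (1, 101)] written back in a single batch
```

Grouping evictions means one translation-page write covers several mapping updates instead of one write per eviction, which is where the reduced response time comes from.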

Journal ArticleDOI
TL;DR: This paper provides a contemporary survey of cutting-edge live video streaming studies from a computation-driven perspective, including cloud-, edge-, and peer-to-peer-based solutions.
Abstract: Live video streaming services have experienced significant growth since the emergence of social networking paradigms in recent years. In this scenario, adaptive bitrate streaming communications transmitted on web protocols provide a convenient and cost-efficient facility to serve various multimedia platforms over the Internet. In these communication models, video content is delivered optimally, possibly transcoded, edited automatically, and cached temporarily by network elements along the path. To this end, the computational capabilities of various network elements are considered as major resources to be optimized for service quality improvements. This article provides a contemporary survey of cutting-edge live video streaming studies from a computation-driven perspective. First, an overview of the global standards, system architectures, and streaming protocols is presented. Next, hierarchical computation-driven models of live video streaming are anatomized, including cloud-, edge-, and peer-to-peer-based solutions. Cutting-edge studies are then reviewed to discover the advances they have made in improving system performance in multiple areas. Finally, open challenges are presented to direct future research in this field.
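To give a concrete flavor of the adaptive bitrate logic the survey covers, a simple throughput-based rate selection rule can be sketched (purely illustrative; real ABR schemes, buffer-based or hybrid, are considerably more elaborate):

```python
def select_bitrate(ladder_kbps, throughput_kbps, safety=0.8):
    """Pick the highest rung of the bitrate ladder that fits within a
    safety fraction of the measured network throughput (illustrative)."""
    budget = throughput_kbps * safety
    feasible = [b for b in sorted(ladder_kbps) if b <= budget]
    # Fall back to the lowest rung when even that exceeds the budget.
    return feasible[-1] if feasible else min(ladder_kbps)

# With 2500 kbps measured, an 80% safety margin allows up to 2000 kbps.
print(select_bitrate([400, 800, 1600, 3200], 2500))  # → 1600
```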

Journal ArticleDOI
01 Feb 2022-Sensors
TL;DR: This paper designs an Information-Centric Network with mobility-aware proactive caching scheme to provide delay-sensitive services on IoV networks and results show that the proposed scheme outperforms related caching schemes in terms of latency and cache hits.
Abstract: Edge caching is a promising approach to alleviating the burden on backhaul network links. It plays a significant role in the performance of Internet of Vehicles (IoV) networks by providing cached data at the edge, reducing the load that the number of participating vehicles and the data volume place on the core network. However, due to the limited computing and storage capabilities of edge devices, it is hard to guarantee that all contents are cached and that every user's requirements are satisfied. In this paper, we design an Information-Centric Network (ICN) with a mobility-aware proactive caching scheme to provide delay-sensitive services in IoV networks. The real-time status and interactions of vehicles with other vehicles and Roadside Units (RSUs) are modeled using a Markov process. A mobility-aware proactive edge caching decision that maximizes network performance while minimizing transmission delay is then applied. Our numerical simulation results show that the proposed scheme outperforms related caching schemes by 20–25% in terms of latency and by 15–23% in terms of cache hits.
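A Markov mobility model of the kind used above can drive proactive caching by prefetching content at the RSUs a vehicle is most likely to reach next. A minimal sketch under that assumption (transition counts, names, and the 0.3 threshold are all illustrative):

```python
def next_rsu_distribution(transitions, current_rsu):
    """One-step Markov prediction of the next roadside unit (RSU).
    transitions[(a, b)] counts observed handovers from RSU a to RSU b."""
    totals = {}
    for (a, b), count in transitions.items():
        if a == current_rsu:
            totals[b] = totals.get(b, 0) + count
    s = sum(totals.values())
    return {b: c / s for b, c in totals.items()}

def prefetch_targets(transitions, current_rsu, threshold=0.3):
    """Proactively cache at RSUs whose visit probability crosses the
    threshold, trading cache space for lower expected delivery delay."""
    dist = next_rsu_distribution(transitions, current_rsu)
    return sorted(b for b, p in dist.items() if p >= threshold)

trans = {("A", "B"): 6, ("A", "C"): 3, ("A", "D"): 1}
print(prefetch_targets(trans, "A"))  # → ['B', 'C']
```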

Journal ArticleDOI
TL;DR: In this paper , the authors investigated the joint problem of computation offloading, cache decision, transmission power allocation, and CPU frequency allocation for cloud-edge heterogeneous network system with multiple independent tasks.
Abstract: The cloud–edge heterogeneous network is an emerging architecture that combines the core of cloud computing technology with edge computing capabilities built on edge infrastructure. The joint problem of computation offloading, cache decision, and resource allocation in such a system is a challenging issue. In this article, we investigate the joint problem of computation offloading, cache decision, transmission power allocation, and CPU frequency allocation for a cloud–edge heterogeneous network system with multiple independent tasks. The goal is to minimize the weighted sum cost of execution delay and energy consumption while guaranteeing the transmission power and CPU frequency constraints of the tasks. The constraints on the computing resources and cache capacity of each access point (AP) are considered as well. The formulated problem is a mixed-integer nonlinear optimization problem. To solve it, we propose a two-level alternation framework based on reinforcement learning (RL) and sequential quadratic programming (SQP). In the upper level, given the allocated transmission power and CPU frequency, the task offloading and cache decision problem is solved using the deep $Q$-network method. In the lower level, the optimal transmission power and CPU frequency allocation for the given offloading and cache decisions is obtained using the SQP technique. Simulation results demonstrate that the proposed scheme achieves a significant reduction in the sum cost compared to other baselines.
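The two-level structure decomposes a mixed-integer problem into a discrete outer search and a continuous inner solve. The skeleton below sketches that decomposition with plain enumeration standing in for both learned components (the paper uses a DQN for the upper level and SQP for the lower level; the toy cost functions are invented for illustration):

```python
def two_level_alternation(decisions, solve_lower, evaluate_cost):
    """For each discrete offloading/caching decision (upper level), solve
    the continuous power/CPU-frequency allocation (lower level) and keep
    the decision with the lowest weighted delay-plus-energy cost."""
    best_cost, best_decision, best_alloc = float("inf"), None, None
    for d in decisions:
        alloc = solve_lower(d)          # continuous sub-problem for fixed d
        cost = evaluate_cost(d, alloc)  # weighted sum of delay and energy
        if cost < best_cost:
            best_cost, best_decision, best_alloc = cost, d, alloc
    return best_decision, best_alloc, best_cost

# Toy instance (decision 0 = local execution, 1 = offload to the edge);
# here local execution happens to yield the lower total cost.
solve = lambda d: 0.5 if d == 1 else 1.0
cost = lambda d, f: 2.0 / f if d == 0 else 1.0 / f + 0.4
print(two_level_alternation([0, 1], solve, cost))  # → (0, 1.0, 2.0)
```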

Proceedings ArticleDOI
28 Mar 2022
TL;DR: Interestingly, in addition to write coalescing, the write buffer delivers write latency that is lower than read latency and consistent regardless of the working set size, the type of write, the access pattern, or the persistency model.
Abstract: We present a comprehensive and in-depth study of Intel Optane DC persistent memory (DCPMM). Our focus is on exploring the internal design of Optane's on-DIMM read-write buffering and its impacts on application-perceived performance, read and write amplification, the overhead of different types of persists, and the tradeoffs between persistency models. While our measurements confirm the results of the existing profiling studies, we have new discoveries and offer new insights. Notably, we find that reads and writes are managed differently in separate on-DIMM read and write buffers. Comparable in size, the two buffers serve distinct purposes. The read buffer offers higher concurrency and effective on-DIMM prefetching, leading to high read bandwidth and superior sequential performance. However, it does not help hide media access latency. In contrast, the write buffer offers limited concurrency but is a critical stage in a pipeline that supports asynchronous write in the DDR-T protocol. Surprisingly, in addition to write coalescing, the write buffer delivers write latency that is lower than read latency and consistent regardless of the working set size, the type of write, the access pattern, or the persistency model. Furthermore, we discover that the mismatch between the cacheline access granularity and the 3D-XPoint media access granularity negatively impacts the effectiveness of CPU cache prefetching and leads to wasted persistent memory bandwidth. Our proposition is to decouple read and write in the performance analysis and optimization of persistent programs. We present three case studies based on this insight and demonstrate considerable performance improvements. We verify the results on two generations of Optane DCPMM.

Journal ArticleDOI
TL;DR: In this paper , the authors proposed a threshold based content offloading mechanism where endpoints can adapt to increase in content request rate in real time, conserve energy and collect metadata required to maintain the freshness of transient data.
Abstract: The technology and applications of the Internet of Things (IoT) are expanding. Large numbers of smart devices with networking capabilities, often called endpoints, make up IoT networks. These endpoints are typically resource-constrained in terms of battery power, which limits the endurance of IoT applications in the field; the energy efficiency of IoT networks is therefore important. This paper leverages the content caching mechanism of Named Data Networking (NDN) to propose a novel Endpoint Linked Green Content Caching scheme that reduces energy consumption in NDN-based IoT network architectures. We implement a novel threshold-based content offloading mechanism in which endpoints can adapt to increases in content request rate in real time, conserve energy, and collect the metadata required to maintain the freshness of transient data. The proposed scheme achieved a reduction in network energy consumption in the range of 30%–57% and a reduction in response time for content requests in the range of 35%–75% when compared with different existing schemes. The strategy performed consistently as the number of nodes in the network was scaled.
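A threshold-based offloading rule of this kind can be sketched as follows (a minimal illustration of the idea, with hypothetical parameter names; the paper's actual mechanism and freshness metadata are richer):

```python
class EnergyAwareEndpoint:
    """Toy endpoint that counts requests per content name and, once the
    observed request rate crosses `threshold_rps`, marks the item for
    offloading to an upstream NDN router cache together with freshness
    metadata, letting the battery-powered endpoint stop serving it."""

    def __init__(self, threshold_rps=2.0, freshness_s=30):
        self.threshold_rps = threshold_rps
        self.freshness_s = freshness_s
        self.counts = {}

    def record_request(self, name):
        self.counts[name] = self.counts.get(name, 0) + 1

    def offload_candidates(self, window_s):
        out = []
        for name, count in self.counts.items():
            if count / window_s > self.threshold_rps:
                # Freshness metadata lets the router expire stale readings.
                out.append({"name": name, "freshness_s": self.freshness_s})
        return sorted(out, key=lambda e: e["name"])

ep = EnergyAwareEndpoint(threshold_rps=2.0)
for _ in range(5):
    ep.record_request("/sensor/temp")   # 2.5 req/s over a 2 s window
ep.record_request("/sensor/humidity")   # 0.5 req/s: stays on the endpoint
print(ep.offload_candidates(window_s=2))
```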

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a learning-based edge caching scheme to enable mutual cooperation among different edge servers with limited caching resources, thus effectively reducing the content delivery latency.
Abstract: With the rapid growth of networked multimedia services on the Internet, wireless network traffic has increased dramatically. However, current mainstream content caching schemes do not take into account cooperation among different edge servers, resulting in deteriorated system performance. In this paper, we propose a learning-based edge caching scheme that enables mutual cooperation among different edge servers with limited caching resources, thus effectively reducing the content delivery latency. Specifically, we formulate the cooperative content caching problem as an optimization problem, which is proven to be NP-hard. To solve this problem, we design a new learning-based cooperative caching strategy (LECS) that encompasses three key components. First, a temporal convolutional network-driven content popularity prediction model is developed to estimate content popularity with high accuracy. Second, with the predicted content popularity, the concept of content caching value (CCV) is introduced to weigh the value of a content item cached on a given edge server. Third, a novel dynamic programming algorithm is developed to maximize the overall CCV. Extensive simulation results demonstrate the superiority of our approach. Compared with state-of-the-art caching schemes, LECS improves the cache hit rate by 8.3%–10.1% and reduces the average content delivery delay by 9.1%–15.1%.
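Maximizing a total caching value under a cache-capacity constraint has the shape of a 0/1 knapsack problem, which dynamic programming solves exactly. The sketch below illustrates that reading of the CCV maximization (it is a plausible simplification, not the paper's exact algorithm, and the item values are invented):

```python
def max_cache_value(contents, capacity):
    """0/1 knapsack over contents: each item is a (size, value) pair,
    where value stands in for the predicted content caching value (CCV).
    Returns the best total value that fits within the cache capacity."""
    # dp[c] = best total value using at most c units of cache space
    dp = [0] * (capacity + 1)
    for size, value in contents:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, size - 1, -1):
            dp[c] = max(dp[c], dp[c - size] + value)
    return dp[capacity]

items = [(3, 10), (2, 7), (4, 12)]  # (size, predicted caching value)
print(max_cache_value(items, 6))  # → 19  (items of size 2 and 4)
```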

Journal ArticleDOI
TL;DR: In this article , the performance of cache-enabled hybrid satellite-aerial-terrestrial networks (HSATNs) with NOMA was investigated, where the user retrieves the required content files from the cacheenabled aerial node (AN) or the satellite with the non-orthogonal multiple access (NOMA) scheme.
Abstract: Due to the emergence of non-terrestrial platforms with extensive coverage, flexible deployment, and reconfigurable characteristics, hybrid satellite-aerial-terrestrial networks (HSATNs) can accommodate a great variety of wireless access services in different applications. To effectively reduce the transmission latency and facilitate the frequent update of files with improved spectrum efficiency, we investigate the performance of a cache-enabled HSATN, where the user retrieves the required content files from the cache-enabled aerial node (AN) or the satellite with the non-orthogonal multiple access (NOMA) scheme. If the required content files of the user are cached in the AN, the cache-enabled node serves them directly. Otherwise, the user retrieves the content file from the satellite system, which seeks opportunities for proactive content pushing to ANs during the user content delivery phase. Specifically, taking into account the uncertainty in the number and location of ANs, along with the channel fading of terrestrial users, the outage probability and hit probability of the considered network are derived based on stochastic geometry. Numerical results unveil the effectiveness of the cache-enabled HSATN with the NOMA scheme and reveal the influence of key factors on the system performance. The realistic, tractable, and extensible framework, together with the associated methodology, provides both useful guidance and a solid foundation for evaluating the performance of evolved cache-enabled HSATNs with advanced configurations.
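As a simpler companion to the stochastic-geometry analysis, the cache hit probability at an aerial node can be illustrated under the standard modeling assumption that requests follow a Zipf popularity law and the node caches the most popular files (this is a common textbook model, not the paper's derivation):

```python
def zipf_hit_probability(num_contents, cache_size, alpha=0.8):
    """Hit probability when the aerial node caches the `cache_size` most
    popular of `num_contents` files and requests follow a Zipf law with
    exponent `alpha`: the popularity mass of the cached set."""
    weights = [1.0 / (r ** alpha) for r in range(1, num_contents + 1)]
    total = sum(weights)
    return sum(weights[:cache_size]) / total

# Caching 10% of a 100-file library captures well over 10% of requests,
# because Zipf popularity is heavily skewed toward the top-ranked files.
print(round(zipf_hit_probability(100, 10, alpha=0.8), 3))
```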

Journal ArticleDOI
TL;DR: The Critical Assessment of Computational Hit-finding Experiments (CACHE) as discussed by the authors is a public benchmarking project to compare and improve computational small-molecule hit-finding approaches through cycles of prediction, compound synthesis and experimental testing.
Abstract: One aspirational goal of computational chemistry is to predict potent and drug-like binders for any protein, such that only those that bind are synthesized. In this Roadmap, we describe the launch of Critical Assessment of Computational Hit-finding Experiments (CACHE), a public benchmarking project to compare and improve small-molecule hit-finding algorithms through cycles of prediction and experimental testing. Participants will predict small-molecule binders for new and biologically relevant protein targets representing different prediction scenarios. Predicted compounds will be tested rigorously in an experimental hub, and all predicted binders as well as all experimental screening data, including the chemical structures of experimentally tested compounds, will be made publicly available and not subject to any intellectual property restrictions. The ability of a range of computational approaches to find novel binders will be evaluated, compared, and openly published. CACHE will launch three new benchmarking exercises every year. The outcomes will be better prediction methods, new small-molecule binders for target proteins of importance to fundamental biology or drug discovery, and a major technological step towards achieving the goal of Target 2035, a global initiative to identify pharmacological probes for all human proteins. Critical Assessment of Computational Hit-finding Experiments (CACHE) is a public benchmarking project to compare and improve computational small-molecule hit-finding approaches through cycles of prediction, compound synthesis, and experimental testing. In this way, CACHE will enable a more efficient and effective approach to hit identification and drug discovery.