
Showing papers in "IEEE Journal on Selected Areas in Communications in 2023"


Journal ArticleDOI
TL;DR: In this paper, a multi-tier cache-aided relaying network is studied, where the destination is randomly located in the network and requests files from the source with the help of a cache-aided base station (BS) and relays.
Abstract: This paper studies a multi-tier cache-aided relaying network, where the destination D is randomly located in the network and requests files from the source S with the help of a cache-aided base station (BS) and N relays. In this system, the multi-tier architecture has a significant impact on collaborative caching and file delivery, which makes the system performance evaluation and optimization challenging. To address this problem, we first evaluate the system performance by deriving an analytical outage probability expression that fully accounts for the random location of the destination and the different file delivery modes associated with the file caching status. We then perform an asymptotic analysis of the system outage probability in the high signal-to-noise ratio (SNR) regime to reveal some important and meaningful insights into the network. We further optimize the caching strategies among the relays and the BS to improve the network outage performance. Simulations demonstrate the accuracy of the derived analytical and asymptotic outage probabilities for the proposed caching strategy. In particular, the proposed caching is superior to conventional caching strategies such as the most popular content (MPC) and equal probability caching (EPC) strategies.
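As a concrete companion to the outage analysis described above, the following Monte Carlo sketch estimates the outage probability of a single Rayleigh-fading link and checks it against the known closed form. The fading model, target rate, and SNR values are illustrative assumptions, not the paper's multi-tier cache-aided setup.

```python
import numpy as np

# Minimal Monte Carlo outage sketch for one Rayleigh-fading link (illustrative;
# not the paper's multi-tier model). Outage occurs when the instantaneous
# capacity log2(1 + SNR * |h|^2) falls below the target rate R.
rng = np.random.default_rng(0)

def outage_probability(avg_snr_db: float, rate: float, trials: int = 10**6) -> float:
    avg_snr = 10 ** (avg_snr_db / 10)
    channel_gain = rng.exponential(1.0, trials)   # |h|^2 ~ Exp(1) under Rayleigh
    capacity = np.log2(1 + avg_snr * channel_gain)
    return float(np.mean(capacity < rate))

# Closed form for this toy link: P_out = 1 - exp(-(2^R - 1) / avg_snr).
R = 1.0
for snr_db in (0, 10, 20, 30):
    avg_snr = 10 ** (snr_db / 10)
    analytical = 1 - np.exp(-(2**R - 1) / avg_snr)
    print(f"{snr_db:2d} dB: simulated {outage_probability(snr_db, R):.5f}, "
          f"analytical {analytical:.5f}")
```

At high SNR this closed form behaves like (2^R - 1)/avg_snr, a diversity order of one, which is the flavor of asymptotic insight the paper extracts for its multi-tier network.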

20 citations



Journal ArticleDOI
TL;DR: In this paper, a multi-tier perception task offloading framework with a collaborative computing approach is proposed, where an AV is able to achieve a comprehensive perception of the concerned region-of-interest (RoI) by leveraging collaborative computation with nearby AVs and roadside units (RSUs).
Abstract: Autonomous driving has so far received considerable attention from academia and industry. However, the inevitable occlusion is a great menace to safe and reliable driving. Existing works have primarily focused on improving the perception ability of a single autonomous vehicle (AV), but the safety problem brought by occlusions remains unaddressed. In this paper, we propose a multi-tier perception task offloading framework with a collaborative computing approach, where an AV is able to achieve a comprehensive perception of the concerned region-of-interest (RoI) by leveraging collaborative computation with nearby AVs and roadside units (RSUs). Besides, the collaborative computation provides an offloading service for computationally intensive tasks so as to reduce processing delay. Specifically, we formulate a joint problem of perception task assignment, offloading, and resource allocation that fully considers the AV’s mobility, task dependency, and delay requirements. The collaborative offloading is modeled as a mixed-integer nonlinear programming (MINLP) problem. We design a two-layer binary intelligent firefly algorithm (TL-BIFA) to solve the MINLP with the goal of minimizing execution delay. The proposed TL-BIFA synthesizes the advantages of heuristic and deterministic methods. Through extensive simulations, the proposed collaborative offloading approach and the TL-BIFA show superiority in enhancing the autonomous driving system’s safety, efficiency, and resource utilization.
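Binary firefly variants such as TL-BIFA typically move candidate solutions toward brighter (better-fitness) ones and binarize the updated position through a sigmoid transfer function. The sketch below shows that generic mechanism on a toy objective; it is not the authors' two-layer algorithm, and the objective, parameters, and update details are illustrative.

```python
import numpy as np

# Generic binary firefly sketch (illustrative; not the paper's TL-BIFA).
rng = np.random.default_rng(1)

def binary_firefly(objective, n_bits, n_fireflies=20, iters=100,
                   beta0=1.0, gamma=1.0, alpha=0.2):
    pos = rng.integers(0, 2, (n_fireflies, n_bits)).astype(float)
    for _ in range(iters):
        fitness = np.array([objective(p) for p in pos])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fitness[j] > fitness[i]:   # j is brighter: i moves toward j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    step = beta * (pos[j] - pos[i]) + alpha * (rng.random(n_bits) - 0.5)
                    prob = 1.0 / (1.0 + np.exp(-(pos[i] + step)))  # sigmoid transfer
                    pos[i] = (rng.random(n_bits) < prob).astype(float)
    fitness = np.array([objective(p) for p in pos])
    best = int(np.argmax(fitness))
    return pos[best], fitness[best]

# Toy objective: maximize the number of ones, standing in for a
# delay-minimizing assignment encoded as a bit string.
solution, value = binary_firefly(lambda bits: bits.sum(), n_bits=16)
print(solution, value)
```

In a setting like the paper's, such a bit string could encode the discrete task assignment and offloading decisions of the MINLP.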

8 citations


Journal ArticleDOI
TL;DR: In this paper, an efficient information sharing scheme based on full-duplex device-to-device (D2D) semantic communications was proposed to enable users to avoid heavy and repetitive computational tasks such as artificial intelligence-generated content (AIGC) in the view images of all MR users.
Abstract: The next generation of Internet services, such as the Metaverse, rely on mixed reality (MR) technology to provide immersive user experiences. However, the limited computation power of MR head-mounted devices (HMDs) hinders the deployment of such services. Therefore, we propose an efficient information sharing scheme based on full-duplex device-to-device (D2D) semantic communications to address this issue. Our approach enables users to avoid heavy and repetitive computational tasks, such as artificial intelligence-generated content (AIGC) in the view images of all MR users. Specifically, a user can transmit the generated content and semantic information extracted from their view image to nearby users, who can then use this information to obtain the spatial matching of computation results under their own view images. We analyze the performance of full-duplex D2D communications, including the achievable rate and bit error probability, by using generalized small-scale fading models. To facilitate semantic information sharing among users, we design a contract-theoretic AI-generated incentive mechanism. The proposed diffusion model generates the optimal contract design, outperforming two deep reinforcement learning algorithms, i.e., the proximal policy optimization and soft actor-critic algorithms. Our numerical analysis and experiments prove the effectiveness of our proposed methods. The code for this paper is available at https://github.com/HongyangDu/SemSharing
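To make the full-duplex analysis tangible, the sketch below estimates the average sum rate of a two-user full-duplex D2D link where residual self-interference acts as extra noise. Rayleigh fading and the fixed cancellation factor beta are my simplifications, not the paper's generalized small-scale fading models.

```python
import numpy as np

# Toy full-duplex D2D sum-rate estimate with residual self-interference (RSI).
# Illustrative assumptions: Rayleigh fading, equal transmit powers, and RSI
# power proportional to transmit power through a cancellation factor beta.
rng = np.random.default_rng(2)

def fd_avg_sum_rate(p_tx=1.0, noise=0.1, beta=0.01, trials=100_000):
    g12 = rng.exponential(1.0, trials)   # channel gain, user 1 -> user 2
    g21 = rng.exponential(1.0, trials)   # channel gain, user 2 -> user 1
    sinr1 = p_tx * g21 / (noise + beta * p_tx)   # RSI raises the noise floor
    sinr2 = p_tx * g12 / (noise + beta * p_tx)
    return float(np.mean(np.log2(1 + sinr1) + np.log2(1 + sinr2)))

for beta in (0.0, 0.01, 0.1):
    print(f"beta={beta}: average sum rate {fd_avg_sum_rate(beta=beta):.2f} bit/s/Hz")
```

The printout shows how imperfect self-interference cancellation erodes the full-duplex gain, which is the trade-off any such scheme must manage.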

8 citations


Journal ArticleDOI
TL;DR: In this paper, a layer-based semantic communication system for images (LSCI) is built on a simulation platform to demonstrate the feasibility of the proposed system, and a novel semantic metric called semantic service quality (SS) is proposed to evaluate semantic communication systems.
Abstract: Traditional communication systems treat messages’ semantic aspects and meaning as irrelevant to communication, revealing their limitations in the era of artificial intelligence (AI), such as in communication efficiency and intent-sharing among different entities. By broadening the scope of the traditional communication system and leveraging AI-based encoding techniques, in this manuscript we present a novel semantic communication system, which involves the essential semantic information exploration, transmission, and recovery for more efficient communications. Compared to other state-of-the-art semantic communication-related works, our proposed semantic communication system is characterized by the “flow of intelligence” via the propagation of the model. Besides, the concept of semantic slice-models (SeSM) is proposed to enable flexible model-resembling under different requirements on model performance, channel conditions, and transmission goals. Specifically, a layer-based semantic communication system for images (LSCI) is built on a simulation platform to demonstrate the feasibility of the proposed system, and a novel semantic metric called semantic service quality (SS) is proposed to evaluate semantic communication systems. We evaluate the proposed system on the Cityscapes and Open Images datasets, achieving average bit rate reductions of 10% and 2% over JPEG and JPEG2000, respectively. In comparison to LDPC, the proposed channel coding scheme saves 2 dB and 5 dB on average in the AWGN channel and the Rayleigh fading channel, respectively.

8 citations


Journal ArticleDOI
TL;DR: This tutorial reviews semantic-aware and task-oriented communications, from early adaptations to current approaches, covering the foundations, algorithms, and potential implementations.
Abstract: Communication systems to date primarily aim at reliably communicating bit sequences. Such an approach provides efficient engineering designs that are agnostic to the meanings of the messages or to the goal that the message exchange aims to achieve. Next generation systems, however, can be potentially enriched by folding message semantics and goals of communication into their design. Further, these systems can be made cognizant of the context in which communication exchange takes place, thereby providing avenues for novel design insights. This tutorial summarizes the efforts to date, starting from its early adaptations, semantic-aware and task-oriented communications, covering the foundations, algorithms and potential implementations. The focus is on approaches that utilize information theory to provide the foundations, as well as the significant role of learning in semantics and task-aware communications.

8 citations


Journal ArticleDOI
TL;DR: In this article, an energy-efficient task-oriented semantic communication framework with a triple-based scene graph for image information was designed to enhance the personalization and anti-interference performance of semantic communication, and an optimal multi-user resource allocation scheme was derived using game theory.
Abstract: Semantic communication, as a promising technology, has emerged to break through the Shannon limit, and is envisioned as the key enabler and fundamental paradigm for future 6G networks and applications, e.g., smart healthcare. In this paper, we focus on UAV image-sensing-driven task-oriented semantic communication scenarios. The majority of existing work has focused on designing advanced algorithms for high-performance semantic communication. However, challenges such as the energy-hungry and efficiency-limited manner of image retrieval, and semantic encoding that ignores user personality, have not been explored yet. These challenges have hindered the widespread adoption of semantic communication. To address the above challenges, at the semantic level, we first design an energy-efficient task-oriented semantic communication framework with a triple-based scene graph for image information. We then design a new personalized semantic encoder based on user interests to meet the requirements of personalized saliency. Moreover, at the communication level, we mathematically study the effects of the dynamic wireless fading channel on semantic transmission and accordingly design an optimal multi-user resource allocation scheme using game theory. Numerical results based on real-world datasets clearly indicate that the proposed framework and schemes significantly enhance the personalization and anti-interference performance of semantic communication, and also efficiently improve the communication quality of semantic communication services.

6 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the dual Uu/PC5 interface offloading and resource allocation strategy in a cellular vehicle-to-everything (C-V2X) enabled multi-tier VEC system, where the successful transmission probability was characterized to obtain the normalized transmission rate of the PC5 interface.
Abstract: Emerging intelligent transportation services are latency-sensitive with heavy demand for computing resources, which can be supported by a multi-tier computing system composed of vehicular edge computing (VEC) servers along the roads and micro servers on vehicles. In this work, we investigate the dual Uu/PC5 interface offloading and resource allocation strategy in a Cellular Vehicle-to-Everything (C-V2X) enabled multi-tier VEC system. The successful transmission probability is characterized to obtain the normalized transmission rate of the PC5 interface. We aim to minimize the system latency of task processing while satisfying the resource requirements of the Uu and PC5 interfaces. Due to the non-convexity and coupling of variables, we decompose the original problem into two subproblems, i.e., the resource allocation and offloading strategy subproblems. Specifically, we derive closed-form expressions for the packet transmit frequency of the PC5 interface, the transmission power of the Uu interface, and the CPU computation frequency in the resource allocation subproblem. Moreover, for the offloading strategy subproblem, the offloading ratio matrix is obtained by the proposed PC5-interface-based greedy offloading (PC5-GO) algorithm, which determines the offloading decision and ratio. Simulation results show that the proposed PC5-GO algorithm improves system performance over the baseline schemes by at least 13.7%.
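To give a feel for this kind of offloading decision, the sketch below greedily scans candidate offloading ratios for one splittable task, balancing local computation against PC5 transmission plus edge computation. The delay model and every number are hypothetical placeholders, not the paper's PC5-GO formulation.

```python
# Toy splittable-task offloading sketch (illustrative; not the paper's PC5-GO).
# A ratio x in [0, 1] of the task is sent over the PC5 link and computed at the
# edge; the rest runs locally. Both branches proceed in parallel, so the task
# delay is the maximum of the two.

def split_delay(x, cycles, local_cps, edge_cps, bits, pc5_rate):
    local = (1 - x) * cycles / local_cps                  # local computing time
    edge = x * bits / pc5_rate + x * cycles / edge_cps    # transmit, then compute
    return max(local, edge)

def greedy_ratio(cycles, local_cps, edge_cps, bits, pc5_rate, steps=100):
    """Greedily scan candidate ratios and keep the delay-minimizing one."""
    best_x, best_d = 0.0, split_delay(0.0, cycles, local_cps, edge_cps, bits, pc5_rate)
    for i in range(1, steps + 1):
        x = i / steps
        d = split_delay(x, cycles, local_cps, edge_cps, bits, pc5_rate)
        if d < best_d:
            best_x, best_d = x, d
    return best_x, best_d

# Hypothetical task: 2 Gcycles, 4 Mb payload, 1/8 GHz-equivalent CPUs, 20 Mb/s PC5.
x, d = greedy_ratio(cycles=2e9, local_cps=1e9, edge_cps=8e9, bits=4e6, pc5_rate=2e7)
print(f"offloading ratio {x:.2f}, task delay {d:.3f} s")
```

The optimum sits where the two branch delays balance, which is also the intuition behind the closed-form expressions the paper derives.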

5 citations


Journal ArticleDOI
TL;DR: In this tutorial, rate-splitting multiple access (RSMA) is presented as a powerful multiple access, interference management, and multi-user strategy for next-generation communication systems, and its design is tied to the fundamental problem of interference management.
Abstract: Rate-Splitting Multiple Access (RSMA) has emerged as a powerful multiple access, interference management, and multi-user strategy for next generation communication systems. In this tutorial, we depart from the orthogonal multiple access (OMA) versus non-orthogonal multiple access (NOMA) discussion held in 5G, and the conventional multi-user linear precoding approach used in space-division multiple access (SDMA), multi-user and massive MIMO in 4G and 5G, and show how multi-user communications and multiple access design for 6G and beyond should be intimately related to the fundamental problem of interference management. We start from foundational principles of interference management and rate-splitting, and progressively delineate RSMA frameworks for downlink, uplink, and multi-cell networks. We show that, in contrast to past generations of multiple access techniques (OMA, NOMA, SDMA), RSMA offers numerous benefits: 1) enhanced spectral, energy and computation efficiency; 2) universality by unifying and generalizing OMA, SDMA, NOMA, physical-layer multicasting, multi-user MIMO under a single framework that holds for any number of antennas at each node (SISO, SIMO, MISO, and MIMO settings); 3) flexibility by coping with any interference levels (from very weak to very strong), network loads (underloaded, overloaded), services (unicast, multicast), traffic, user deployments (channel directions and strengths); 4) robustness to inaccurate channel state information (CSI) and resilience to mixed-critical quality of service; 5) reliability under short channel codes and low latency. We then discuss how those benefits translate into numerous opportunities for RSMA in over forty different applications and scenarios of 6G, e.g., multi-user MIMO with statistical/quantized CSI, FDD/TDD/cell-free massive MIMO, millimeter wave and terahertz, cooperative relaying, physical layer security, reconfigurable intelligent surfaces, cloud-radio access network, internet-of-things, massive access, joint communication and jamming, non-orthogonal unicast and multicast, multigroup multicast, multibeam satellite, space-air-ground integrated networks, unmanned aerial vehicles, integrated sensing and communications, grant-free access, network slicing, cognitive radio, optical/visible light communications, mobile edge computing, machine/federated learning, etc. We finally address common myths and answer frequently asked questions, opening the discussions to interesting future research avenues. Supported by the numerous benefits and applications, the tutorial concludes on the underpinning role played by RSMA in next generation networks, which should inspire future research, development, and standardization of RSMA-aided communication for 6G.
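The core rate mechanics of one-layer RSMA are easy to sketch: each user first decodes a common stream (treating all private streams as noise), removes it via SIC, and then decodes its private stream. The snippet below computes these rates for a two-user MISO downlink with heuristic precoders and an assumed power split; it is a toy instance of the principle, not any specific design from the tutorial.

```python
import numpy as np

# One-layer RSMA rate sketch for a 2-user MISO downlink (toy example).
rng = np.random.default_rng(3)
Nt = 4
h = [rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt) for _ in range(2)]

p_c = (h[0] + h[1]) / np.linalg.norm(h[0] + h[1])   # common precoder (heuristic)
p = [hk / np.linalg.norm(hk) for hk in h]           # private precoders: matched filters
P_c, P_k, noise = 0.5, 0.25, 0.1                    # assumed power split and noise

def rate(signal, interference):
    return np.log2(1 + signal / (interference + noise))

# The common stream must be decodable by both users, so its rate is the
# minimum over users; it sees every private stream as interference.
R_c = min(
    rate(P_c * abs(h[k] @ p_c.conj()) ** 2,
         sum(P_k * abs(h[k] @ p[j].conj()) ** 2 for j in range(2)))
    for k in range(2)
)
# Private stream k: common part removed by SIC, other private stream interferes.
R_p = [rate(P_k * abs(h[k] @ p[k].conj()) ** 2,
            P_k * abs(h[k] @ p[1 - k].conj()) ** 2) for k in range(2)]
print(f"common {R_c:.2f}, private {R_p[0]:.2f} / {R_p[1]:.2f}, "
      f"sum {R_c + sum(R_p):.2f} bit/s/Hz")
```

Adjusting the common/private power split is exactly the knob that lets RSMA unify SDMA-like operation (no common stream) and NOMA-like operation (mostly common stream), as the abstract's universality claim describes.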

Journal ArticleDOI
TL;DR: In this article, the resilience and trustworthiness of highly unstable transcoders in decision making are characterized with mean-variance-based measures to avoid making highly risky decisions, and two risk-aware contextual learning schemes are developed to efficiently estimate the transcoding capabilities of the edge devices.
Abstract: This paper proposes an edge-assisted crowdsourced live video transcoding approach where the transcoding capabilities of the edge transcoders are unknown and dynamic. The resilience and trustworthiness of highly unstable transcoders in decision making are characterized with mean-variance-based measures to avoid making highly risky decisions. The risk level of each device’s situation is assessed and two upper confidence bounds of the variance of transcoding performance are presented. Based on the derived bounds and by leveraging the contextual information of devices, two risk-aware contextual learning schemes are developed to efficiently estimate the transcoding capabilities of the edge devices. Combining context awareness and risk sensitivity, a novel transcoding task assignment and viewer association algorithm is proposed. Simulation results demonstrate that the proposed algorithm achieves robust task offloading with superior network utility performance as compared to the linear upper confidence bound and the risk-aware mean-variance upper confidence bound-based algorithms. In particular, an epoch-based task assignment strategy is designed to reduce the task switching costs incurred in assigning the same transcoding task to different transcoders over time. This strategy also reduces the computational time needed. Numerical results confirm that this strategy achieves up to 86.8% switching costs reduction and 92.3% computational time reduction.
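The risk-aware selection idea can be illustrated with a generic mean-variance upper-confidence-bound bandit: each edge transcoder (arm) is scored by its empirical mean minus a risk-weighted empirical variance, plus an exploration bonus. This is a textbook-style sketch of the mechanism, not the paper's derived bounds, and all arm statistics are invented.

```python
import numpy as np

# Generic mean-variance UCB sketch for choosing among edge transcoders.
# Arm 0 has the best mean but is highly unstable; a risk-aware learner
# should prefer the stable arm 1 once variances are estimated.
rng = np.random.default_rng(4)

true_mean = [0.9, 0.8, 0.7]      # hypothetical transcoding utilities
true_std = [0.40, 0.05, 0.05]
rho, T = 1.0, 5000               # risk-aversion weight, horizon
counts, sums, sq_sums = np.zeros(3), np.zeros(3), np.zeros(3)

for t in range(1, T + 1):
    if t <= 3:
        arm = t - 1                                   # initialize every arm once
    else:
        mean = sums / counts
        var = np.maximum(sq_sums / counts - mean ** 2, 0.0)
        bonus = np.sqrt(2 * np.log(t) / counts)       # exploration term
        arm = int(np.argmax(mean - rho * var + bonus))
    reward = rng.normal(true_mean[arm], true_std[arm])
    counts[arm] += 1
    sums[arm] += reward
    sq_sums[arm] += reward ** 2

print("pull counts per arm:", counts)   # the stable arm should dominate
```

With rho = 1 the index of the unstable arm (0.9 − 0.16) falls below the stable arm's (0.8 − 0.0025), so the learner converges to the low-risk transcoder, mirroring the paper's goal of avoiding highly risky assignments.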

Journal ArticleDOI
TL;DR: In this paper, the authors investigated a novel integration of rate-splitting multiple access (RSMA) and reconfigurable intelligent surface (RIS) into cellular systems to achieve a desirable tradeoff between spectral efficiency and energy efficiency.
Abstract: With the increasing demand of high data rate and massive access in both ultra-dense and industrial Internet-of-things networks, spectral efficiency (SE) and energy efficiency (EE) are regarded as two important and inter-related performance metrics for future networks. In this paper, we investigate a novel integration of rate-splitting multiple access (RSMA) and reconfigurable intelligent surface (RIS) into cellular systems to achieve a desirable tradeoff between SE and EE. Different from the commonly used passive RIS, we adopt reflection elements with active load to improve a newly defined metric, called resource efficiency (RE), which is capable of striking a balance between SE and EE. This paper focuses on the RE optimization by jointly designing the base station (BS) transmit precoding and RIS beamforming (BF) while guaranteeing the transmit and forward power budgets of the BS and RIS, respectively. To efficiently tackle the challenges for solving the RE maximization problem due to its fractional objective function, coupled optimization variables, and discrete coefficient constraint, the formulated nonconvex problem is solved by proposing a two-stage optimization framework. For the outer stage problem, a quadratic transformation is used to recast the fractional objective into a linear form, and a closed-form solution is obtained by using auxiliary variables. For the inner stage problem, the system sum rate is approximated into a linear function. Then, an alternating optimization (AO) algorithm is proposed to optimize the BS precoding and RIS BF iteratively, by utilizing the penalty dual decomposition (PDD) method. Simulation results demonstrate the superiority of the proposed design compared to other benchmarks.
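The outer-stage trick, the quadratic transform for fractional objectives, can be demonstrated in one dimension: to maximize A(x)/B(x), alternate between the closed-form auxiliary variable y* = sqrt(A(x))/B(x) and maximizing the concave surrogate 2y·sqrt(A(x)) − y²·B(x). The energy-efficiency-style toy ratio below is my example, not the paper's resource-efficiency objective.

```python
import numpy as np

# Worked 1-D quadratic transform (Shen-Yu style) for a fractional objective.
# Toy ratio: f(x) = log2(1 + x) / (Pc + x), an energy-efficiency-like metric
# with circuit power Pc; the inner maximization is done by grid search.
Pc, P_max = 1.0, 10.0
x_grid = np.linspace(0.0, P_max, 10_001)
A = np.log2(1 + x_grid)          # numerator over the grid
B = Pc + x_grid                  # denominator over the grid

x = P_max                        # start from full power
for it in range(8):
    y = np.sqrt(np.log2(1 + x)) / (Pc + x)     # closed-form auxiliary update
    surrogate = 2 * y * np.sqrt(A) - y**2 * B  # concave in x for fixed y
    x = float(x_grid[np.argmax(surrogate)])    # inner maximization
    print(f"iter {it}: x = {x:.4f}, ratio = {np.log2(1 + x) / (Pc + x):.5f}")
```

The ratio increases monotonically across iterations and converges to the maximizer of f (about x ≈ 1.72 here), which is why the transform is a convenient outer loop for the paper's coupled precoding and beamforming design.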

Journal ArticleDOI
TL;DR: In this paper, a deep reinforcement learning (DRL) technique was proposed to jointly make optimal computation offloading decisions and flying orientation choices for multi-UAV cooperative target search; extensive simulations validate the effectiveness of the proposed techniques, and comprehensive discussions on how different parameters affect the search performance are given.
Abstract: Unmanned aerial vehicles (UAVs) are widely used for surveillance and monitoring to complete target search tasks. However, their short battery life and moderate computational capability hinder UAVs from processing computation-intensive tasks. Emerging edge computing technologies can alleviate this problem by offloading tasks to ground edge servers. How to evaluate the search process so as to make optimal offloading decisions and plan optimal flying trajectories is a fundamental research challenge. In this paper, we propose to utilize the concept of uncertainty to evaluate the search process, which reflects the reliability of the target search results. Thereafter, we propose a deep reinforcement learning (DRL) technique to jointly make optimal computation offloading decisions and flying orientation choices for multi-UAV cooperative target search. Specifically, we first formulate an uncertainty minimization problem based on the established system model. By introducing a reward function, we prove that the uncertainty minimization problem is equivalent to a reward maximization problem, which is further analyzed as a Markov decision process (MDP). To obtain the optimal task offloading decisions and flying orientation choices, a deep Q-network (DQN) based DRL architecture with a separated Q-network is then proposed. Finally, extensive simulations validate the effectiveness of the proposed techniques, and comprehensive discussions on how different parameters affect the search performance are given.
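A minimal DQN update makes the training loop concrete. The sketch below is a generic temporal-difference step in PyTorch, not the paper's separated-Q architecture; the state dimension, action count, and network sizes are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Generic DQN update sketch (illustrative; not the paper's separated Q-network).
# Assumed toy sizes: 8-D state (e.g., UAV position plus an uncertainty summary)
# and 6 discrete actions (e.g., flying orientation x offloading choice).
STATE_DIM, N_ACTIONS, GAMMA = 8, 6, 0.99

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())   # frozen copy, refreshed periodically
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(state, action, reward, next_state, done):
    """One TD step on a batch of transitions sampled from a replay buffer."""
    q = q_net(state).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = reward + GAMMA * (1 - done) * target_net(next_state).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch, just to show the call signature.
B = 32
loss = dqn_update(torch.randn(B, STATE_DIM),
                  torch.randint(0, N_ACTIONS, (B,)),
                  torch.randn(B),
                  torch.randn(B, STATE_DIM),
                  torch.zeros(B))
print(f"TD loss: {loss:.4f}")
```

In the paper's framing the reward would be the uncertainty reduction, so maximizing the return implements the uncertainty-minimizing search the abstract describes.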

Journal ArticleDOI
TL;DR: In this paper, an incremental redundancy hybrid automatic repeat-request framework for varying channels (SVC-HARQ) incorporating a novel semantic error detector is developed, which dramatically reduces transmission resources while only losing detailed expressions.
Abstract: Video conferencing has become a popular mode of meeting despite consuming considerable communication resources. Conventional video compression causes resolution reduction under a limited bandwidth. Semantic video conferencing (SVC) maintains a high resolution by transmitting some keypoints to represent the motions because the background is almost static, and the speakers do not change often. However, the study on the influence of transmission errors on keypoints is limited. In this paper, an SVC network based on keypoint transmission is established, which dramatically reduces transmission resources while only losing detailed expressions. Transmission errors in SVC only lead to a changed expression, whereas those in the conventional methods directly destroy pixels. However, the conventional error detector, such as cyclic redundancy check, cannot reflect the degree of expression changes. To overcome this issue, an incremental redundancy hybrid automatic repeat-request framework for varying channels (SVC-HARQ) incorporating a novel semantic error detector is developed. SVC-HARQ has flexibility in bit consumption and achieves a good performance. In addition, SVC-channel state information (CSI) is designed for CSI feedback to allocate the keypoint transmission and enhance the performance dramatically. Simulation shows that the proposed wireless semantic communication system can remarkably improve transmission efficiency.
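The control flow of an incremental-redundancy HARQ loop with a semantic acceptance test is compact enough to sketch. Here `semantic_ok` is a random placeholder for the paper's learned semantic error detector, and the round limit is an assumption.

```python
import random

# Skeleton of an incremental-redundancy HARQ loop in the spirit of SVC-HARQ.
# Round 1 carries the keypoint payload; each further round adds only parity.
# `semantic_ok` stands in for the learned semantic error detector.
random.seed(5)
MAX_ROUNDS = 4

def semantic_ok(redundancy_rounds: int) -> bool:
    # Hypothetical stand-in: more accumulated redundancy, better decoding odds.
    return random.random() < 1 - 0.5 ** redundancy_rounds

def send_keypoints(frame_id: int) -> int:
    """Returns the number of transmission rounds used for one keypoint packet."""
    for rounds in range(1, MAX_ROUNDS + 1):
        if semantic_ok(rounds):
            return rounds        # detector accepts the reconstructed expression
    return MAX_ROUNDS            # give up; fall back to the previous frame

rounds_used = [send_keypoints(f) for f in range(10)]
print("rounds per frame:", rounds_used)
```

The key departure from CRC-based HARQ is the acceptance test: retransmissions stop when the reconstructed expression is semantically close enough, not when every bit is correct.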

Journal ArticleDOI
TL;DR: In this paper, a sampling, communication, and prediction co-design framework is proposed to minimize the communication load subject to a constraint on the tracking error, achieving better convergence time, stability, communication load, and average tracking error compared with a communication system without sampling and prediction.
Abstract: The metaverse has the potential to revolutionize the next generation of the Internet by supporting highly interactive services with satisfactory user experience. The synchronization between devices in the physical world and their digital models in the metaverse is crucial. This work proposes a sampling, communication and prediction co-design framework to minimize the communication load subject to a constraint on the tracking error. To optimize the sampling rate and the prediction horizon, we exploit expert knowledge and develop a constrained deep reinforcement learning algorithm. We validate our framework on a prototype composed of a real-world robotic arm and its digital model. The results show that our framework achieves a better trade-off between the average tracking error and the average communication load compared with a communication system without sampling and prediction. For example, the average communication load can be reduced by up to 87% when the average tracking error constraint is 0.007°. In addition, our policy outperforms the benchmark with the static sampling rate and prediction horizon optimized by exhaustive search, in terms of the tail probability of the tracking error. Furthermore, with the assistance of expert knowledge, the proposed algorithm achieves better convergence time, stability, communication load, and average tracking error.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate key techniques and directions for wireless communications and resource allocation approaches to enable task offloading in multi-tier computing systems, and present a multi-tier computing model with its main functionality and optimization methods.
Abstract: With the development of next-generation wireless networks, the Internet of Things (IoT) is evolving towards the intelligent IoT (iIoT), where intelligent applications usually have stringent delay and jitter requirements. In order to provide low-latency services to heterogeneous users in the emerging iIoT, multi-tier computing was proposed by effectively combining edge computing and fog computing. More specifically, multi-tier computing systems compensate for cloud computing through task offloading and dispersing computing tasks to multi-tier nodes along the continuum from the cloud to things. In this paper, we investigate key techniques and directions for wireless communications and resource allocation approaches to enable task offloading in multi-tier computing systems. A multi-tier computing model, with its main functionality and optimization methods, is presented in detail. We hope that this paper will serve as a valuable reference and guide to the theoretical, algorithmic, and systematic opportunities of multi-tier computing towards next-generation wireless networks.

Journal ArticleDOI
TL;DR: In this paper, the authors introduced the concept of Mobile Edge Computing (MEC) into the control of the SDN-AUVNS and proposed upgrading its control plane with multi-tier edge computing ability.
Abstract: The forthcoming 6G networks are expected to provide a vision of overlapping aerial-ground-underwater wireless networks. Meanwhile, the rapid development of the Internet of Underwater Things (IoUTs) brings forth many categories of Autonomous Underwater Vehicle (AUV)-assisted Underwater Wireless Networks (UWNs). In this paper, we argue that AUV-assisted UWNs can be intelligently utilized to track underwater pollution. To perform smart underwater pollution tracking, we propose the paradigm of an AUV flock-based networking system, the Software-Defined Networking (SDN)-enabled AUV flock Networking System (SDN-AUVNS). We introduce the concept of Mobile Edge Computing (MEC) into the control of the SDN-AUVNS and propose upgrading its control plane with multi-tier edge computing ability. Based on the proposed system architecture, we adopt artificial potential field theory to construct the network controlling model, and we present the underwater tracking model for the SDN-AUVNS, especially for the underwater pollution equipotential line of a particular concentration. Furthermore, to provide accurate path planning for equipotential line tracking, we utilize a linearizability mechanism to optimize and revise the control input for the SDN-AUVNS. Lastly, we give a fast united control algorithm that can intelligently schedule the SDN-AUVNS to track underwater pollution equipotential lines. In particular, we propose a smart approach named ’Inverse Distance Weighting’ to optimize the detection samples of the SDN-AUVNS. Evaluation results indicate that our proposal is able to track/survey the equipotential lines within a satisfactory error.
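Inverse Distance Weighting itself is a standard interpolator and easy to show: an unknown concentration is estimated as a distance-weighted average of nearby samples. The sample positions and values below are invented; only the technique matches what the paper names.

```python
import numpy as np

# Standard Inverse Distance Weighting (IDW) interpolation of pollutant
# concentration from scattered AUV samples (positions/values are made up).
def idw(query, points, values, power=2.0, eps=1e-12):
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):                 # query coincides with a sample point
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power                # closer samples weigh more
    return float(np.sum(w * values) / np.sum(w))

# Hypothetical (x, y) sample locations and measured concentrations.
points = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
values = np.array([1.0, 3.0, 2.0, 8.0])

print(idw(np.array([5.0, 5.0]), points, values))   # interior estimate
print(idw(np.array([9.0, 9.0]), points, values))   # pulled toward the 8.0 sample
```

Interpolated estimates like these let the flock reconstruct a concentration field, and hence an equipotential line, from far fewer physical detection samples.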

Journal ArticleDOI
TL;DR: In this paper , an RL-based adaptive semantic coding (RL-ASC) approach was proposed to encode images beyond pixel level, where the task-related information is preserved and reconstructed properly while less important data is discarded.
Abstract: Semantic communication has received growing interest since it can remarkably reduce the amount of data to be transmitted without missing critical information. Most existing works explore semantic encoding and transmission for text and apply techniques from Natural Language Processing (NLP) to interpret the meaning of the text. In this paper, we conceive semantic communications for image data, which is much richer in semantics and more bandwidth-sensitive. We propose a reinforcement learning-based adaptive semantic coding (RL-ASC) approach that encodes images beyond the pixel level. Firstly, we define the semantic concept of image data, which includes the category, spatial arrangement, and visual feature as the representation unit, and propose a convolutional semantic encoder to extract semantic concepts. Secondly, we propose an image reconstruction criterion that evolves from traditional pixel similarity to semantic similarity and perceptual performance. Thirdly, we design a novel RL-based semantic bit allocation model, whose reward is the increase in rate-semantic-perceptual performance after encoding a certain semantic concept with an adaptive quantization level. Thus, task-related information is preserved and reconstructed properly while less important data is discarded. Finally, we propose a Generative Adversarial Nets (GANs)-based semantic decoder that fuses both local and global features via an attention module. Experimental results demonstrate that the proposed RL-ASC is noise robust and can reconstruct visually pleasing and semantically consistent images under low-bit-rate conditions.

Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive survey of prevailing methodologies for communication-efficient distributed learning, including reduction of the number of communications, compression and quantization of the exchanged information, radio resource management for efficient learning, and game-theoretic mechanisms incentivizing user participation.
Abstract: Distributed learning is envisioned as the bedrock of next-generation intelligent networks, where intelligent agents, such as mobile devices, robots, and sensors, exchange information with each other or a parameter server to train machine learning models collaboratively without uploading raw data to a central entity for centralized processing. By utilizing the computation/communication capability of individual agents, the distributed learning paradigm can mitigate the burden at central processors and help preserve data privacy of users. Despite its promising applications, a downside of distributed learning is its need for iterative information exchange over wireless channels, which may lead to high communication overhead unaffordable in many practical systems with limited radio resources such as energy and bandwidth. To overcome this communication bottleneck, there is an urgent need for the development of communication-efficient distributed learning algorithms capable of reducing the communication cost and achieving satisfactory learning/optimization performance simultaneously. In this paper, we present a comprehensive survey of prevailing methodologies for communication-efficient distributed learning, including reduction of the number of communications, compression and quantization of the exchanged information, radio resource management for efficient learning, and game-theoretic mechanisms incentivizing user participation. We also point out potential directions for future research to further enhance the communication efficiency of distributed learning in various scenarios.
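One representative technique from the compression family such surveys cover is top-k gradient sparsification with error feedback: each round a device uploads only the largest-magnitude gradient entries and locally accumulates what it dropped. The sketch below is a generic illustration, not tied to any one surveyed scheme.

```python
import numpy as np

# Top-k gradient sparsification with error feedback (generic sketch).
def topk_compress(grad: np.ndarray, k: int):
    """Keep the k largest-magnitude entries; return sparse gradient and residual."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse, grad - sparse        # residual is re-added next round

rng = np.random.default_rng(6)
dim, k = 1000, 50                       # upload only 5% of the coordinates
residual = np.zeros(dim)

for step in range(5):
    grad = rng.standard_normal(dim)     # stand-in for a locally computed gradient
    compensated = grad + residual       # error feedback: include past leftovers
    sparse, residual = topk_compress(compensated, k)
    # `sparse` is what the agent would transmit to the parameter server.
    print(f"step {step}: sent {np.count_nonzero(sparse)}/{dim} entries, "
          f"residual norm {np.linalg.norm(residual):.2f}")
```

Error feedback is what keeps such aggressive compression from biasing the optimization: dropped coordinates are eventually transmitted once they accumulate.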

Journal ArticleDOI
TL;DR: In this paper , a residual-enhanced mask-based progressive generative coding (RMPGC) framework is proposed for image compression in wireless communications, multi-user broadcasting, and multi-tasking applications.
Abstract: Progressive deep image compression (DIC) with hybrid contexts is an under-investigated problem that aims to jointly maximize the utility of a compressed image for multiple contexts or tasks under variable rates. In this paper, we consider the contexts of image reconstruction and classification. We propose a DIC framework, called residual-enhanced mask-based progressive generative coding (RMPGC), designed for explicit control of the performance within the rate-distortion-classification-perception (RDCP) trade-off. Three independent mechanisms are introduced to yield a semantically structured latent representation that can support parameterized control of rate and context adaptation. Experimental results show that the proposed RMPGC outperforms a benchmark DIC scheme using the same generative adversarial nets (GANs) backbone in all six metrics related to classification, distortion, and perception. Moreover, RMPGC is a flexible framework that can be applied to different neural network backbones. Some typical implementations are given and shown to outperform the classic BPG codec and four state-of-the-art DIC schemes in classification and perception metrics, with a slight degradation in distortion metrics. Our proposal of a nonlinear-neural-coded and richly structured latent space makes the proposed DIC scheme well suited for image compression in wireless communications, multi-user broadcasting, and multi-tasking applications.

Journal ArticleDOI
TL;DR: In this paper, the beamforming vectors, transmission data rates, and RS-user selection were jointly optimized to maximize the effective throughput of RSMA systems with finite blocklength, and a semi-closed-form solution of the optimal data rates was derived.
Abstract: Rate-splitting multiple access (RSMA) is a promising multiple access (MA) technique. It employs rate-splitting (RS) at the transmitter and successive interference cancellation (SIC) at the receiver. Most of the existing works on RSMA assume that all users use SIC to decode the common stream and the blocklength is infinite. The first assumption causes the data rate of the common stream to be limited by the user with the worst channel quality. The second assumption may lead to suboptimal performance in practical systems with finite blocklength. In this paper, we propose a flexible RSMA scheme, which allows the system to decide whether a user should use SIC to decode the common stream or not. We consider the effective throughput as the performance metric, which incorporates the data rate as well as the error performance of RSMA with finite blocklength. We first derive the effective throughput expression and then formulate an effective throughput maximization problem by jointly optimizing the beamforming vectors, transmission data rates, and RS-user selection. We develop an optimal algorithm as well as a low-complexity algorithm for beamforming design. We derive a semi-closed-form solution of the optimal data rates and propose an efficient algorithm for the RS-user selection. Numerical results demonstrate that the proposed algorithm obtains a higher effective throughput than space division multiple access (SDMA), non-orthogonal multiple access (NOMA), and two other RSMA schemes.
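The finite-blocklength effect the paper builds on is commonly captured by the normal approximation R(n, ε) ≈ C − sqrt(V/n)·Q⁻¹(ε). The sketch below evaluates it for a complex AWGN channel and weighs the rate by the success probability, an effective-throughput-style figure; the SNR and error target are arbitrary choices, and the paper's exact throughput definition may differ.

```python
import numpy as np
from scipy.stats import norm

# Normal approximation for the maximal rate at blocklength n and error eps:
#   R(n, eps) ~= C - sqrt(V / n) * Qinv(eps)   [bits per channel use]
# with capacity C and channel dispersion V of the complex AWGN channel.
def fbl_rate(snr: float, n: int, eps: float) -> float:
    C = np.log2(1 + snr)
    V = (1 - 1 / (1 + snr) ** 2) * np.log2(np.e) ** 2
    return max(C - np.sqrt(V / n) * norm.isf(eps), 0.0)   # norm.isf = Qinv

snr, eps = 10 ** (10 / 10), 1e-5        # 10 dB SNR, target error 1e-5
for n in (100, 200, 500, 1000, 10**6):
    r = fbl_rate(snr, n, eps)
    print(f"n = {n:>7}: rate {r:.3f} b/cu, effective throughput {(1 - eps) * r:.3f}")
```

The gap to capacity shrinks as 1/sqrt(n), which is exactly why rate choices that are optimal at infinite blocklength can be suboptimal for the short packets considered here.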

Journal ArticleDOI
TL;DR: In this article, the authors investigated a LEOS edge-assisted multi-layer multi-access edge computing (MEC) system and formulated a joint optimization problem for the associated communication and computation resource allocation to minimize the overall energy dissipation of the MEC system while maintaining a low computing latency.
Abstract: Sixth-Generation (6G) technologies will revolutionize the wireless ecosystem by enabling the delivery of futuristic services through satellite-terrestrial integrated networks (STINs). As the number of subscribers connected to STINs increases, it becomes necessary to investigate whether the edge computing paradigm may be applied to low Earth orbit satellite (LEOS) networks for supporting computation-intensive and delay-sensitive services for anyone, anywhere, and at any time. Inspired by this research dilemma, we investigate a LEOS edge-assisted multi-layer multi-access edge computing (MEC) system. In this system, the MEC philosophy is extended to LEOS, defining the LEOS edge, in order to enhance the coverage of the multi-layer MEC system and address the users’ computing problems both in congested and isolated areas. We then design its operating offloading framework and explore feasible implementation methodologies. In this context, we formulate a joint optimization problem for the associated communication and computation resource allocation, minimizing the overall energy dissipation of our LEOS edge-assisted multi-layer MEC system while maintaining a low computing latency. To solve the optimization problem effectively, we adopt the classic alternating optimization (AO) method to decompose the original problem and then solve each sub-problem using low-complexity iterative algorithms. Finally, our numerical results show that the offloading scheme conceived achieves lower computing latency and energy dissipation than the state-of-the-art solutions, i.e., a single-layer MEC supported by LEOS or base stations (BSs).

Journal ArticleDOI
TL;DR: In this article, the authors provide guidelines for the design of communication-efficient distributed learning in complex networked systems and derive a sufficient condition for the boundedness of the distributed inference error of all the agent nodes over time.
Abstract: Distributed learning is an important task in emerging applications such as localization and navigation, Internet-of-Things, and autonomous vehicles. This paper establishes a theoretical framework for learning states that evolve in real time over networks. Specifically, each agent node in the network aims to infer a time-varying state in a decentralized manner by using the node’s local observations and the messages received from other nodes within its communication range. As a result, the inference accuracy of a node is significantly affected by the quality of its received messages. This calls for carefully designed strategies for generating messages that are able to provide sufficient information for the receiver and are robust to channel impairments. This paper presents communication-efficient encoding strategies for generating transmitted messages and derives a sufficient condition for the boundedness of the distributed inference error of all the agent nodes over time. The findings of this paper provide guidelines for the design of communication-efficient distributed learning in complex networked systems.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new neural network-based semantic communication system for image transmission, where the task is unknown at the transmitter and the data environment is dynamic; the system consists of two main parts, namely the semantic coding (SC) network and the data adaptation (DA) network.
Abstract: Existing deep learning-enabled semantic communication systems often rely on shared background knowledge between the transmitter and receiver that includes empirical data and their associated semantic information. In practice, the semantic information is defined by the pragmatic task of the receiver and cannot be known to the transmitter. The actual observable data at the transmitter can also have a non-identical distribution with the empirical data in the shared background knowledge library. To address these practical issues, this paper proposes a new neural network-based semantic communication system for image transmission, where the task is unknown at the transmitter and the data environment is dynamic. The system consists of two main parts, namely the semantic coding (SC) network and the data adaptation (DA) network. The SC network learns how to extract and transmit the semantic information using a receiver-leading training process. By using the domain adaptation technique from transfer learning, the DA network learns how to convert the observed data into a similar form of the empirical data that the SC network can process without re-training. Numerical experiments show that the proposed method can adapt to observable datasets while keeping high performance in terms of both data recovery and task execution.

Journal ArticleDOI
TL;DR: In this paper, a pair of SIMs at the transmitter (TX) and receiver (RX), respectively, are used to perform transmit precoding and receiver combining, and the phase shifts associated with all the metasurface layers of both the TX-SIM and RX-SIM are jointly optimized.
Abstract: The revolutionary technology of Stacked Intelligent Metasurfaces (SIM) has recently been shown to be capable of carrying out advanced signal processing directly in the native electromagnetic (EM) wave domain. An SIM is fabricated as a sophisticated amalgam of multiple stacked metasurface layers, which may outperform its single-layer metasurface counterparts, such as reconfigurable intelligent surfaces (RISs) and metasurface lenses. We harness this new SIM concept for implementing efficient holographic multiple-input multiple-output (HMIMO) communications that do not require excessive radio-frequency (RF) chains, which constitutes a substantial benefit compared to existing implementations. We first present an HMIMO communication system based on a pair of SIMs at the transmitter (TX) and receiver (RX), respectively. In sharp contrast to conventional MIMO designs, the considered SIMs are capable of automatically accomplishing transmit precoding and receiver combining as the EM waves propagate through them. As such, each information data stream can be directly radiated and recovered from the corresponding transmit and receive ports. Secondly, we formulate the problem of minimizing the error between the actual end-to-end SIM-parametrized channel matrix and the target diagonal one, with the latter representing a flawless interference-free system of parallel subchannels. This is achieved by jointly optimizing the phase shifts associated with all the metasurface layers of both the TX-SIM and RX-SIM. We then design a gradient descent algorithm to solve the resultant non-convex problem. Furthermore, we theoretically analyze the HMIMO channel capacity bound and provide some useful fundamental insights. Extensive simulation results are provided for characterizing our SIM-based HMIMO system, quantifying its substantial performance benefits.
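A toy version of the channel-fitting idea: with a single phase-shift layer per side (the paper stacks several), plain gradient descent over the transmit and receive phases can drive the end-to-end channel toward a diagonal, interference-free structure. The random matrices, the off-diagonal-power loss, and the finite-difference gradient are my illustrative choices, not the paper's physical model or algorithm details.

```python
import numpy as np

# Toy SIM-style fitting: minimize the fraction of end-to-end power that leaks
# off the diagonal, via finite-difference gradient descent on phase shifts.
rng = np.random.default_rng(7)
S, M = 2, 16        # data streams, meta-atoms per (single) layer on each side

W1 = rng.standard_normal((M, S)) + 1j * rng.standard_normal((M, S))  # TX ports -> TX layer
G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))   # TX layer -> RX layer
W2 = rng.standard_normal((S, M)) + 1j * rng.standard_normal((S, M))  # RX layer -> RX ports

def off_diag_fraction(theta):
    H = W2 @ np.diag(np.exp(1j * theta[M:])) @ G @ np.diag(np.exp(1j * theta[:M])) @ W1
    P = np.abs(H) ** 2
    return (P.sum() - np.trace(P)) / P.sum()    # 0 would mean perfectly diagonal

theta = rng.uniform(0, 2 * np.pi, 2 * M)
lr, h = 0.3, 1e-6
for it in range(301):
    base = off_diag_fraction(theta)
    if it % 100 == 0:
        print(f"iter {it}: off-diagonal power fraction {base:.4f}")
    grad = np.empty_like(theta)
    for i in range(theta.size):                 # forward finite differences
        theta[i] += h
        grad[i] = (off_diag_fraction(theta) - base) / h
        theta[i] -= h
    theta -= lr * grad
```

Driving this fraction down is the single-layer analogue of the paper's objective of matching the SIM-parametrized channel to a diagonal target, so that each stream flows through its own clean subchannel.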
