
Showing papers in "IEEE Journal on Selected Areas in Communications in 2022"


Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive review on the background, range of key applications and state-of-the-art approaches of Integrated Sensing and Communications (ISAC).
Abstract: As the standardization of 5G solidifies, researchers are speculating what 6G will be. The integration of sensing functionality is emerging as a key feature of the 6G Radio Access Network (RAN), allowing for the exploitation of dense cell infrastructures to construct a perceptive network. In this IEEE Journal on Selected Areas in Communications (JSAC) Special Issue overview, we provide a comprehensive review on the background, range of key applications and state-of-the-art approaches of Integrated Sensing and Communications (ISAC). We commence by discussing the interplay between sensing and communications (S&C) from a historical point of view, and then consider the multiple facets of ISAC and the resulting performance gains. By introducing both ongoing and potential use cases, we shed light on the industrial progress and standardization activities related to ISAC. We analyze a number of performance tradeoffs between S&C, spanning from information theoretical limits to physical layer performance tradeoffs, and the cross-layer design tradeoffs. Next, we discuss the signal processing aspects of ISAC, namely ISAC waveform design and receive signal processing. As a step further, we provide our vision on the deeper integration between S&C within the framework of perceptive networks, where the two functionalities are expected to mutually assist each other, i.e., via communication-assisted sensing and sensing-assisted communications. Finally, we identify the potential integration of ISAC with other emerging communication technologies, and their positive impacts on the future of wireless networks.

177 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a vision for scalable and trustworthy edge AI systems with integrated design of wireless communication strategies and decentralized machine learning models, as well as a holistic end-to-end system architecture to support edge AI.
Abstract: The thriving of artificial intelligence (AI) applications is driving the further evolution of wireless networks. It has been envisioned that 6G will be transformative and will revolutionize the evolution of wireless from “connected things” to “connected intelligence”. However, state-of-the-art deep learning and big data analytics based AI systems require tremendous computation and communication resources, causing significant latency, energy consumption, network congestion, and privacy leakage in both of the training and inference processes. By embedding model training and inference capabilities into the network edge, edge AI stands out as a disruptive technology for 6G to seamlessly integrate sensing, communication, computation, and intelligence, thereby improving the efficiency, effectiveness, privacy, and security of 6G networks. In this paper, we shall provide our vision for scalable and trustworthy edge AI systems with integrated design of wireless communication strategies and decentralized machine learning models. New design principles of wireless networks, service-driven resource allocation optimization methods, as well as a holistic end-to-end system architecture to support edge AI will be described. Standardization, software and hardware platforms, and application scenarios are also discussed to facilitate the industrialization and commercialization of edge AI systems.

92 citations


Journal ArticleDOI
TL;DR: In this article, the authors explore the evolution toward next generation multiple access (NGMA) with a particular focus on non-orthogonal multiple access (NOMA), and discuss several possible candidate techniques.
Abstract: Due to the explosive growth in the number of wireless devices and diverse wireless services, such as virtual/augmented reality and Internet-of-Everything, next generation wireless networks face unprecedented challenges caused by heterogeneous data traffic, massive connectivity, and ultra-high bandwidth efficiency and ultra-low latency requirements. To address these challenges, advanced multiple access schemes are expected to be developed, namely next generation multiple access (NGMA), which are capable of supporting massive numbers of users in a more resource- and complexity-efficient manner than existing multiple access schemes. As the research on NGMA is in a very early stage, in this paper, we explore the evolution of NGMA with a particular focus on non-orthogonal multiple access (NOMA), i.e., the transition from NOMA to NGMA. In particular, we first review the fundamental capacity limits of NOMA, elaborate on the new requirements for NGMA, and discuss several possible candidate techniques. Moreover, given the high compatibility and flexibility of NOMA, we provide an overview of current research efforts on multi-antenna techniques for NOMA, promising future application scenarios of NOMA, and the interplay between NOMA and other emerging physical layer techniques. Furthermore, we discuss advanced mathematical tools for facilitating the design of NOMA communication systems, including conventional optimization approaches and new machine learning techniques. Next, we propose a unified framework for NGMA based on multiple antennas and NOMA, where both downlink and uplink transmissions are considered, thus setting the foundation for this emerging research area. Finally, several practical implementation challenges for NGMA are highlighted as motivation for future work.

77 citations


Journal ArticleDOI
TL;DR: This tutorial summarizes the efforts to date on semantic-aware and task-oriented communications, starting from their early adaptations and covering the foundations, algorithms and potential implementations, with a focus on approaches that utilize information theory to provide the foundations and on the significant role of learning.
Abstract: Communication systems to date primarily aim at reliably communicating bit sequences. Such an approach provides efficient engineering designs that are agnostic to the meanings of the messages or to the goal that the message exchange aims to achieve. Next generation systems, however, can be potentially enriched by folding message semantics and goals of communication into their design. Further, these systems can be made cognizant of the context in which communication exchange takes place, thereby providing avenues for novel design insights. This tutorial summarizes the efforts to date, starting from its early adaptations, semantic-aware and task-oriented communications, covering the foundations, algorithms and potential implementations. The focus is on approaches that utilize information theory to provide the foundations, as well as the significant role of learning in semantics and task-aware communications.

67 citations


Journal ArticleDOI
TL;DR: This paper proposes a new self-sensing IRS architecture where the IRS controller is capable of transmitting probing signals that are not only directly reflected by the target, but also consecutively reflected by the IRS and then the target (referred to as the IRS-reflected echo link).
Abstract: Intelligent reflecting surface (IRS) has emerged as a promising technology to reconfigure the radio propagation environment by dynamically controlling wireless signal’s amplitude and/or phase via a large number of reflecting elements. In contrast to the vast literature on studying IRS’s performance gains in wireless communications, we study in this paper a new application of IRS for sensing/localizing targets in wireless networks. Specifically, we propose a new self-sensing IRS architecture where the IRS controller is capable of transmitting probing signals that are not only directly reflected by the target (referred to as the direct echo link), but also consecutively reflected by the IRS and then the target (referred to as the IRS-reflected echo link). Moreover, dedicated sensors are installed at the IRS for receiving both the direct and IRS-reflected echo signals from the target, such that the IRS can sense the direction of its nearby target by applying a customized multiple signal classification (MUSIC) algorithm. However, since the angle estimation mean square error (MSE) by the MUSIC algorithm is intractable, we propose to optimize the IRS passive reflection for maximizing the average echo signals’ total power at the IRS sensors and derive the resultant Cramer-Rao bound (CRB) of the angle estimation MSE. Last, numerical results are presented to show the effectiveness of the proposed new IRS sensing architecture and algorithm, as compared to other benchmark sensing systems/algorithms.
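The MUSIC step mentioned above can be illustrated with a generic sketch: eigendecompose the sample covariance of the sensor snapshots, take the noise subspace, and scan a steering vector over candidate angles. This is textbook MUSIC for a uniform linear array, not the paper's customized variant; the array size, half-wavelength spacing, snapshot count, and noise level below are illustrative assumptions.

```python
import numpy as np

def music_spectrum(Y, num_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Generic MUSIC pseudo-spectrum for a uniform linear array.

    Y: (num_sensors, num_snapshots) complex received snapshots
    num_sources: assumed number of targets/sources
    d: element spacing in wavelengths (0.5 = half-wavelength)
    """
    M = Y.shape[0]
    R = Y @ Y.conj().T / Y.shape[1]            # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    En = eigvecs[:, :M - num_sources]          # noise subspace
    m = np.arange(M)
    spectrum = []
    for theta in angles:
        a = np.exp(1j * 2 * np.pi * d * m * np.sin(np.deg2rad(theta)))  # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.array(spectrum)

# Toy usage: one target at 20 degrees observed by an 8-element array.
rng = np.random.default_rng(0)
M, N, theta_true = 8, 200, 20.0
a_true = np.exp(1j * 2 * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(theta_true)))
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
Y = np.outer(a_true, s) + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
angles, P = music_spectrum(Y, num_sources=1)
print("estimated angle:", angles[np.argmax(P)])
```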

67 citations


Journal ArticleDOI
TL;DR: This work proposes a double-RIS-assisted coexistence system where two RISs are deployed for enhancing communication signals and suppressing mutual interference, and aims to jointly optimize the beamforming of RISs and radar to maximize communication performance while maintaining radar detection performance.
Abstract: Integrated sensing and communication (ISAC) has been regarded as one of the most promising technologies for future wireless communications. However, the mutual interference in the communication radar coexistence system cannot be ignored. Inspired by the studies of reconfigurable intelligent surface (RIS), we propose a double-RIS-assisted coexistence system where two RISs are deployed for enhancing communication signals and suppressing mutual interference. We aim to jointly optimize the beamforming of RISs and radar to maximize communication performance while maintaining radar detection performance. The investigated problem is challenging, and thus we transform it into an equivalent but more tractable form by introducing auxiliary variables. Then, we propose a penalty dual decomposition (PDD)-based algorithm to solve the resultant problem. Moreover, we consider two special cases: the large radar transmit power scenario and the low radar transmit power scenario. For the former, we prove that the beamforming design is only determined by the communication channel and the corresponding optimal joint beamforming strategy can be obtained in closed-form. For the latter, we minimize the mutual interference via the block coordinate descent (BCD) method. By combining the solutions of these two cases, a low-complexity algorithm is also developed. Finally, simulation results show that both the PDD-based and low-complexity algorithms outperform benchmark algorithms.
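The penalty dual decomposition procedure referred to above alternates an inner block-coordinate minimization of an augmented Lagrangian with outer dual/penalty updates. The sketch below is a simplified double-loop skeleton of that idea applied to a toy equality-constrained quadratic, not the paper's joint beamforming algorithm; the objective, constraint, and penalty schedule are illustrative.

```python
import numpy as np

def pdd_style_solver(outer_iters=60, inner_iters=50, tol=1e-8):
    """Simplified double-loop skeleton in the spirit of penalty dual decomposition.

    Toy problem: minimize (x - 2)^2 + (y + 1)^2 subject to x + y = 3,
    whose optimum is (x, y) = (3, 0). The inner loop runs block coordinate
    descent on the augmented Lagrangian; the outer loop updates the dual
    variable when the constraint violation is small enough, and otherwise
    increases the penalty weight.
    """
    x = y = lam = 0.0
    rho, eta = 1.0, 1.0          # penalty weight and feasibility tolerance
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # Closed-form block updates of the augmented Lagrangian
            # L = (x-2)^2 + (y+1)^2 + lam*(x+y-3) + (rho/2)*(x+y-3)^2.
            x = (4.0 - lam + rho * (3.0 - y)) / (2.0 + rho)
            y = (-2.0 - lam + rho * (3.0 - x)) / (2.0 + rho)
        violation = x + y - 3.0
        if abs(violation) < tol:
            break
        if abs(violation) <= eta:
            lam += rho * violation   # dual update
            eta *= 0.7               # tighten the feasibility tolerance
        else:
            rho *= 2.0               # increase the penalty weight
    return x, y

print(pdd_style_solver())  # approximately (3.0, 0.0)
```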

55 citations


Journal ArticleDOI
TL;DR: In this article, an exact linear search based algorithm is proposed for finding the optimal solution by fixing the status update rate, and a low complexity concave-convex procedure is also formulated for finding a near-optimal solution by transforming the original problem into a form represented by the difference of two convex problems.
Abstract: Given the proliferation of the massive machine type communication devices (MTCDs) in beyond 5G (B5G) wireless networks, energy harvesting (EH) aided next generation multiple access (NGMA) systems have drawn substantial attention in the context of energy-efficient data sensing and transmission. However, without adaptive time slot (TS) and power allocation schemes, NGMA systems relying on stochastic sampling instants might lead to tardy actions associated both with high age of information (AoI) as well as high power consumption. For mitigating the energy consumption, we exploit a pair of sleep-scheduling policies, namely the multiple vacation (MV) policy and start-up threshold (ST) policy, which are characterized in the context of three typical multiple access protocols, including time-division multiple access (TDMA), frequency-division multiple access (FDMA) and non-orthogonal multiple access (NOMA). Furthermore, we derive closed-form expressions for the MTCD system’s peak AoI, which are formulated as the optimization objective under the constraints of EH power, status update rate and stability conditions. An exact linear search based algorithm is proposed for finding the optimal solution by fixing the status update rate. As a design alternative, a low complexity concave-convex procedure (CCP) is also formulated for finding a near-optimal solution relying on the original problem’s transformation into a form represented by the difference of two convex problems. Our simulation results show that the proposed algorithms are beneficial in terms of yielding a lower peak AoI at a low power consumption in the context of the multiple access protocols considered.
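The concave-convex procedure used for the near-optimal solution iteratively linearizes the concave part of a difference-of-convex objective and solves the resulting convex surrogate. The sketch below applies that iteration to a toy one-dimensional DC function with a closed-form surrogate minimizer; it only illustrates the CCP mechanics, not the paper's peak-AoI problem.

```python
import numpy as np

def ccp_toy(x0=2.0, iters=30):
    """Concave-convex procedure (CCP) sketch on a toy DC objective
    f(x) = x^4 - x^2, written as g(x) - h(x) with g(x) = x^4 and h(x) = x^2
    (both convex). Each iteration linearizes h at the current point and
    minimizes the resulting convex surrogate in closed form."""
    x = x0
    for _ in range(iters):
        # Surrogate: x^4 - (h(xk) + h'(xk)*(x - xk)) = x^4 - 2*xk*x + const.
        # Its minimizer satisfies 4*x^3 = 2*xk, i.e. x = (xk / 2)^(1/3).
        x = np.cbrt(x / 2.0)
    return x

print(ccp_toy())  # converges to a stationary point near 1/sqrt(2) ~ 0.707
```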

53 citations


Journal ArticleDOI
TL;DR: This paper studies the DFRC design for a general scenario, where the dual-functional base station simultaneously detects the target as a multiple-input-multiple-output (MIMO) radar while communicating with multiple multi-antenna communication users (CUs).
Abstract: Spatial beamforming is an efficient way to realize dual-functional radar-communication (DFRC). In this paper, we study the DFRC design for a general scenario, where the dual-functional base station (BS) simultaneously detects the target as a multiple-input-multiple-output (MIMO) radar while communicating with multiple multi-antenna communication users (CUs). This necessitates a joint transceiver beamforming design for both MIMO radar and multi-user MIMO (MU-MIMO) communication. In order to characterize the performance tradeoff between MIMO radar and MU-MIMO communication, we first define the achievable performance region of the DFRC system. Then, both radar-centric and communication-centric optimizations are formulated to achieve the boundary of the performance region. For the radar-centric optimization, successive convex approximation (SCA) method is adopted to solve the non-convex constraint. For the communication-centric optimization, a solution based on weighted mean square error (MSE) criterion is obtained to solve the non-convex objective function. Furthermore, two low-complexity beamforming designs based on CU-selection and zero-forcing are proposed to avoid iteration, and the closed-form expressions of the low-complexity beamforming designs are derived. Simulation results are provided to verify the effectiveness of all proposed designs.
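The zero-forcing design mentioned above nulls inter-user interference by inverting the multi-user channel. The sketch below shows plain ZF precoding for a downlink with single-antenna users (a simplification of the paper's multi-antenna-user setting); the antenna counts and power budget are illustrative.

```python
import numpy as np

def zero_forcing_precoder(H, total_power=1.0):
    """Zero-forcing precoder for a MU-MISO downlink.

    H: (num_users, num_tx_antennas) channel matrix (one row per user).
    Returns W: (num_tx_antennas, num_users) such that H @ W is diagonal.
    """
    # The right pseudo-inverse nulls inter-user interference: H @ pinv(H) = I.
    W = np.linalg.pinv(H)
    # Scale to satisfy a total transmit power constraint trace(W W^H) = total_power.
    W *= np.sqrt(total_power / np.trace(W @ W.conj().T).real)
    return W

rng = np.random.default_rng(1)
H = (rng.standard_normal((3, 8)) + 1j * rng.standard_normal((3, 8))) / np.sqrt(2)
W = zero_forcing_precoder(H)
print(np.round(np.abs(H @ W), 3))  # close to a scaled identity: interference nulled
```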

43 citations


Journal ArticleDOI
TL;DR: The concept of continuous intelligent surface (CIS) is introduced and the fundamental limits of RIS-aided ISAC systems, specifically, an RIS-aided localization and communication system, are established.
Abstract: Achieving accurate location-awareness in wireless networks requires integrated sensing and communication (ISAC), where optimization, signal processing, and data fusion are performed under a common framework. The efficiency of ISAC in complex wireless environments can be improved via the use of reconfigurable intelligent surfaces (RISs). This paper introduces the concept of continuous intelligent surface (CIS) and establishes the fundamental limits of RIS-aided ISAC systems, specifically, an RIS-aided localization and communication system. In particular, this paper considers two types of RISs, namely CISs and discrete intelligent surfaces (DISs). First, this paper proposes a general signal model for RIS-aided localization and communication valid for both near-field and far-field scenarios, and then theoretical limits on the localization and communication performance are derived. Based on the proposed model, Fisher information analyses of the localization performance in networks with RISs are performed. Numerical results show that RISs with optimized phase responses can improve the received signal-to-noise ratio (SNR) and spectral efficiency of communication, and the localization accuracy significantly.
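The Fisher information analysis referred to above can be reproduced numerically for a generic Gaussian observation model: the FIM follows from the Jacobian of the noise-free mean, and the CRB is its inverse. The far-field steering-vector mean function, array size, and noise variance below are illustrative assumptions rather than the paper's RIS-aided model.

```python
import numpy as np

def fim_crb(mu, theta, sigma2, eps=1e-6):
    """Numerical Fisher information matrix and CRB for y = mu(theta) + n,
    with n ~ CN(0, sigma2 * I). mu maps a real parameter vector to a complex signal."""
    theta = np.asarray(theta, dtype=float)
    p = len(theta)
    J = np.zeros((len(mu(theta)), p), dtype=complex)
    for i in range(p):
        dt = np.zeros(p)
        dt[i] = eps
        # Central finite-difference column of the Jacobian of the mean.
        J[:, i] = (mu(theta + dt) - mu(theta - dt)) / (2 * eps)
    fim = (2.0 / sigma2) * np.real(J.conj().T @ J)
    crb = np.linalg.inv(fim)
    return fim, crb

# Toy model: 16-element array, one unknown angle (radians), half-wavelength spacing.
def mu(theta):
    return np.exp(1j * np.pi * np.arange(16) * np.sin(theta[0]))

fim, crb = fim_crb(mu, theta=[0.3], sigma2=0.01)
print("CRB on the angle estimate:", crb[0, 0])
```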

42 citations


Journal ArticleDOI
TL;DR: The numerical results reveal that the network sum-rate of the proposed RIS-assisted CoMP NOMA network exceeds that of the conventional CoMP NOMA scheme without the assistance of the RIS, the RIS-assisted CoMP orthogonal multiple access (OMA) scheme, and the RIS-assisted NOMA scheme, especially for low transmit power from the BSs.
Abstract: This paper investigates the integration of reconfigurable intelligent surface (RIS) with downlink non-orthogonal multiple access (NOMA) in a multi-user two-cell network assisted by the joint-transmission coordinated multipoint (JT-CoMP). Specifically, the RIS is deployed at the edge of two adjacent cells to assist the JT-CoMP from these two cells to multiple far NOMA users located at their edges. Under this setup, we jointly optimize the power allocation (PA) coefficients at the base stations (BSs), the user clustering (UC) policy, and the phase-shift (PS) matrix of the RIS with the objective of maximizing the network sum-rate subject to a target quality-of-service, defined in terms of the minimum required data rate at each cellular user, and the successive interference cancellation (SIC) constraints. The formulated problem turns out to be a non-convex mixed-integer non-linear program that is difficult to solve in a straightforward manner. To alleviate this issue, and with the aid of alternating optimization (AO), the original optimization problem is decomposed into two sub-problems, a joint PA and UC sub-problem and a PS sub-problem, that are solved in an alternating way. For the first sub-problem, we invoke the bi-level optimization approach to decouple the PA sub-problem from the UC sub-problem. For the PA sub-problem, closed-form expressions for the optimal PA coefficients are derived. On the other hand, the UC problem is projected to multiple 2-dimensional assignment problems, each of which is solved using the Hungarian method. Finally, the PS sub-problem is formulated as a difference-of-convex problem and an efficient solution is obtained using the successive convex approximation technique. The numerical results reveal that the network sum-rate of the proposed RIS-assisted CoMP NOMA network exceeds that of the conventional CoMP NOMA scheme without the assistance of the RIS, the RIS-assisted CoMP orthogonal multiple access (OMA) scheme, and the RIS-assisted NOMA scheme, especially for low transmit power from the BSs.
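The user-clustering sub-problem above is cast as 2-dimensional assignment problems solved with the Hungarian method; the sketch below solves such an assignment with scipy.optimize.linear_sum_assignment on a made-up pairing-rate matrix (the rates and cluster sizes are illustrative, not taken from the paper).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative sum-rate achieved when pairing near user i with far user j
# (rows: near users, columns: far users); the values are made up for the example.
rate = np.array([
    [3.2, 1.1, 2.4],
    [2.0, 2.9, 1.3],
    [1.5, 2.2, 3.0],
])

# The Hungarian method minimizes cost, so negate the rates to maximize sum-rate.
rows, cols = linear_sum_assignment(-rate)
for i, j in zip(rows, cols):
    print(f"pair near-user {i} with far-user {j} (rate {rate[i, j]:.1f})")
print("total sum-rate:", rate[rows, cols].sum())
```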

38 citations


Journal ArticleDOI
TL;DR: A basic semantic video conferencing (SVC) network is established, which dramatically reduces transmission resources while only losing detailed expressions, and an incremental redundancy hybrid automatic repeat-request (IR-HARQ) framework for varying channels incorporating a novel semantic error detector is developed.
Abstract: Video conferencing has become a popular mode of meeting despite consuming considerable communication resources. Conventional video compression causes resolution reduction under a limited bandwidth. Semantic video conferencing (SVC) maintains a high resolution by transmitting some keypoints to represent the motions because the background is almost static, and the speakers do not change often. However, the study on the influence of transmission errors on keypoints is limited. In this paper, an SVC network based on keypoint transmission is established, which dramatically reduces transmission resources while only losing detailed expressions. Transmission errors in SVC only lead to a changed expression, whereas those in the conventional methods directly destroy pixels. However, the conventional error detector, such as cyclic redundancy check, cannot reflect the degree of expression changes. To overcome this issue, an incremental redundancy hybrid automatic repeat-request framework for varying channels (SVC-HARQ) incorporating a novel semantic error detector is developed. SVC-HARQ has flexibility in bit consumption and achieves a good performance. In addition, SVC-channel state information (CSI) is designed for CSI feedback to allocate the keypoint transmission and enhance the performance dramatically. Simulation shows that the proposed wireless semantic communication system can remarkably improve transmission efficiency.

Journal ArticleDOI
Bo Chang, Wei Tang, Xiaoyu Yan, Xin Tong, Zhi Chen 
TL;DR: A new integrated scheduling method of sensing, communication, and control is proposed for mmWave/THz communications in UAV networks to enable backhaul data transmission from the UAV to the ground base station (BS).
Abstract: By providing ultra-high transmission data rates, millimeter wave (mmWave) and terahertz (THz) communications are promising to enable backhaul data transmission in cellular connected unmanned aerial vehicle (UAV) networks. In such networks, with little or no human assistance, connected autonomous UAVs (CA-UAV) can build air-ground networks and achieve seamless wide-area coverage. With the use of high frequencies (i.e., mmWave/THz), a radio/radar sensing function is expected to be realized in wireless networks, which can be used to track UAVs for beam tracking in mmWave/THz communications and for motion control of the UAV. However, it is extremely difficult to jointly design sensing, communication, and motion control since they have been developed largely in parallel with limited intersection. In this paper, we propose a new integrated scheduling method of sensing, communication, and control for mmWave/THz communications in UAV networks to enable backhaul data transmission from the UAV to the ground base station (BS). In the proposed method, we first analyze the interactions among sensing, communication, and motion control, where sensing and motion control are strongly coupled to form the sensing-control pattern. Then, we provide a new definition from the motion control perspective, i.e., the state-to-noise ratio, which captures the relationship between sensing-control pattern activation and the data rate determined by beam alignment in mmWave/THz communications. Finally, a closed-form expression is obtained for data rate triggered sensing-control pattern activation design, where both the data rate requirement in mmWave/THz communications and the motion control performance of the UAV are guaranteed. Simulation results show the remarkable performance of the proposed method.

Journal ArticleDOI
TL;DR: In this paper, a multibeam system for joint sensing and communication (JSC) based on multiple-input multiple-output (MIMO) 5G new radio (NR) waveforms is investigated.
Abstract: This work investigates a multibeam system for joint sensing and communication (JSC) based on multiple-input multiple-output (MIMO) 5G new radio (NR) waveforms. In particular, we consider a base station (BS) acting as a monostatic sensor that estimates the range, speed, and direction of arrival (DoA) of multiple targets via beam scanning using a fraction of the transmitted power. The target position is then obtained via range and DoA estimation. We derive the sensing performance in terms of probability of detection and root mean squared error (RMSE) of position and velocity estimation of a target under line-of-sight (LOS) conditions. Furthermore, we evaluate the system performance when multiple targets are present, using the optimal sub-pattern assignment (OSPA) metric. Finally, we provide an in-depth investigation of the dominant factors that affect performance, including the fraction of power reserved for sensing.
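The OSPA metric used for the multi-target evaluation penalizes both localization error and cardinality mismatch between the estimated and true target sets. The sketch below implements the standard OSPA definition; the cut-off c, order p, and example target positions are illustrative choices, not the paper's configuration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=10.0, p=2):
    """Optimal Sub-Pattern Assignment distance between two point sets.

    X, Y: arrays of shape (m, d) and (n, d) of target positions.
    c: cut-off distance, p: metric order.
    """
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return c
    if m > n:                          # convention: |X| <= |Y|
        X, Y, m, n = Y, X, n, m
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    d = np.minimum(d, c) ** p          # cut-off distances
    rows, cols = linear_sum_assignment(d)             # optimal sub-pattern assignment
    cost = d[rows, cols].sum() + (c ** p) * (n - m)   # plus cardinality penalty
    return (cost / n) ** (1.0 / p)

truth = np.array([[0.0, 0.0], [50.0, 20.0]])
estimate = np.array([[1.0, -0.5], [48.0, 22.0], [90.0, 90.0]])  # one false target
print("OSPA:", ospa(truth, estimate))
```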

Journal ArticleDOI
TL;DR: Simulation results indicate that the proposed schemes can achieve superior energy efficiency with fast convergence for various network loads and user deployments for both single-cell and multi-cell RSMA-based VLC broadcast systems.
Abstract: With the explosive proliferation of connected devices and mobile users in the Internet-of-things, multiple access techniques urgently need to be developed for next generation wireless communications. Recently, rate-splitting multiple access (RSMA) has emerged as a promising communication technology that holds advantages of strong robustness, low complexity, and high spectral efficiency, which can be integrated with the indoor visible light communication (VLC) broadcast system to compensate for the limited modulation bandwidth of LEDs. However, the research on RSMA-based VLC systems is still in its infancy and there exist various problems to be explored. To benefit from the RSMA technique, this paper investigates the energy efficiency optimizations for both single-cell and multi-cell RSMA-based VLC broadcast systems. Specifically, these two systems are modeled, where the VLC broadcast channel follows the Lambertian radiation model, and the splitting design and successive interference cancellation of RSMA are employed to mitigate the multi-user interference. Especially for multi-cell networks, the zero-forcing approach is adopted to eliminate the inter-cell interference. To maximize the energy efficiency, the precoding and power allocation problems are formulated for single-cell and multi-cell networks while accommodating multiple constraints including dynamic operation ranges of LEDs, QoS requirements, and interference elimination. For solving these non-convex fractional problems, two successive convex approximation (SCA)-based algorithms are proposed, in which variable transformation and linear approximation are adopted. Simulation results indicate that the proposed schemes can achieve superior energy efficiency with fast convergence for various network loads and user deployments.

Journal ArticleDOI
TL;DR: A novel transcoding-enabled VR video caching and delivery framework is proposed for edge-enhanced next-generation wireless networks, combining multi-agent reinforcement learning based cooperative caching with a two-tier NOMA-based base station-multicast group matching mechanism designed to solve the collaboration challenge during the edge delivery process.
Abstract: Virtual reality (VR) content, including 360° panoramic video, provides users with an immersive multimedia experience and therefore attracts increasing research and development attention. However, the high bandwidth and low latency requirements of virtual reality services pose greater challenges to the current infrastructure, especially mobile networks. Inspired by the sharable nature of virtual reality content tiles, we further consider the potential opportunities for computing, caching, and multicast to address the challenges of transmission of panoramic content. This paper proposes a novel transcoding-enabled VR video caching and delivery framework for edge-enhanced next-generation wireless networks. First, an edge cooperative caching scheme based on multi-agent reinforcement learning is introduced to improve the utilization efficiency of computing and storage resources, and thereby reduce service delay. Second, a two-tier NOMA-based base station-multicast group matching mechanism is designed to solve the collaboration challenge during the edge delivery process. A series of experiments have demonstrated the advantages of the proposed scheme in terms of cache hit rate, latency and other aspects in comparison with alternative approaches.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed framework can reduce the data that the BS needs to transmit by 41.3% and double the total MSS compared to a standard communication network that does not use semantic communication techniques.
Abstract: In this paper, a semantic communication framework is proposed for textual data transmission. In the studied model, a base station (BS) extracts the semantic information from textual data, and transmits it to each user. The semantic information is modeled by a knowledge graph (KG) that consists of a set of semantic triples. After receiving the semantic information, each user recovers the original text using a graph-to-text generation model. To measure the performance of the considered semantic communication framework, a metric of semantic similarity (MSS) that jointly captures the semantic accuracy and completeness of the recovered text is proposed. Due to wireless resource limitations, the BS may not be able to transmit the entire semantic information to each user and satisfy the transmission delay constraint. Hence, the BS must select an appropriate resource block for each user as well as determine and transmit part of the semantic information to the users. As such, we formulate an optimization problem whose goal is to maximize the total MSS by jointly optimizing the resource allocation policy and determining the partial semantic information to be transmitted. To solve this problem, a proximal-policy-optimization-based reinforcement learning (RL) algorithm integrated with an attention network is proposed. The proposed algorithm can evaluate the importance of each triple in the semantic information using an attention network and then, build a relationship between the importance distribution of the triples in the semantic information and the total MSS. Compared to traditional RL algorithms, the proposed algorithm can dynamically adjust its learning rate thus ensuring convergence to a locally optimal solution. Simulation results show that the proposed framework can reduce by 41.3% data that the BS needs to transmit and improve by two-fold the total MSS compared to a standard communication network without using semantic communication techniques.

Journal ArticleDOI
TL;DR: A multiple access scheme for NGMA in wireless communication systems assisted by multiple reconfigurable intelligent surfaces (RISs) is presented and a medium access control (MAC) protocol incorporating RISs is designed, which outperforms the benchmarks in terms of system throughput and access fairness.
Abstract: The envisioned wireless networks of the future entail the provisioning of massive numbers of connections, heterogeneous data traffic, ultra-high spectral efficiency, and low latency services. This vision is spurring research activities focused on defining a next generation multiple access (NGMA) protocol that can accommodate massive numbers of users in different resource blocks, thereby, achieving higher spectral efficiency and increased connectivity compared to conventional multiple access schemes. In this article, we present a multiple access scheme for NGMA in wireless communication systems assisted by multiple reconfigurable intelligent surfaces (RISs). In this regard, considering the practical scenario of static users operating together with mobile ones, we first study the interplay of the design of NGMA schemes and RIS phase configuration in terms of efficiency and complexity. Based on this, we then propose a multiple access framework for RIS-assisted communication systems, and we also design a medium access control (MAC) protocol incorporating RISs. In addition, we give a detailed performance analysis of the designed RIS-assisted MAC protocol. Our extensive simulation results demonstrate that the proposed MAC design outperforms the benchmarks in terms of system throughput and access fairness, and also reveal a trade-off relationship between the system throughput and fairness.

Journal ArticleDOI
TL;DR: The results demonstrate that the WiGesID system outperforms the state-of-the-art method for cross-domain sensing and accurately recognizes new categories, which promotes the application of Wi-Fi sensing in HCI.
Abstract: Gesture recognition is the central enabler of human-computer interaction (HCI). In addition to the semantic information contained in gestures, gesture-based user identification can effortlessly enhance HCI system security. Recently, the Wi-Fi-integrated sensing and communication (ISAC) technology has shown great potential in a field hitherto occupied by computer vision and radar sensing. In this work, leveraging Wi-Fi sensing, we propose a system called WiGesID that achieves joint gesture recognition and human identification (JGRHI). The basic idea behind WiGesID is to identify personalized spatiotemporal dynamic patterns from the gestures of different users. Moreover, we develop an effective approach to recognize new categories of gestures and users by computing relation scores between the features of the new category samples and the support samples. To evaluate the performance, we implemented WiGesID and conducted extensive experiments. The results demonstrate that our system outperforms the state-of-the-art method for cross-domain sensing and accurately recognizes new categories, which promotes the use of this application of Wi-Fi sensing in HCI.

Journal ArticleDOI
TL;DR: A sample-efficient deep reinforcement learning (DRL) algorithm is proposed to optimize the UAV trajectory, RIS configuration, and power control simultaneously and a distributionally robust DRL algorithm is further proposed to provide the worst-case performance guarantee.
Abstract: A reconfigurable intelligent surface (RIS) aided air-to-ground uplink non-orthogonal transmission framework is investigated for next generation multiple access. Occupying the same spectrum resource, unmanned aerial vehicle (UAV) users and ground users (GUs) are connected to terrestrial cellular networks via the uplink non-orthogonal multiple access (NOMA) protocol. As the flight safety is important for employing UAVs in civil airspace, the collision avoidance mechanism has to be considered during the flight. Therefore, a joint optimization problem of the UAV trajectory design, RIS configuration, and uploading power control is formulated for maximizing the network sum rate, while ensuring the UAV’s flight safety and satisfying the minimum data rate requirements of both the UAV and GU. The resultant problem is a sequential decision making one across multiple coherent time slots. Besides, the unknown locations of obstacles bring uncertainties into the decision making process. To tackle this challenging problem, a sample-efficient deep reinforcement learning (DRL) algorithm is proposed to optimize the UAV trajectory, RIS configuration, and power control simultaneously. Moreover, considering the ambiguous uncertainties in the environment, a distributionally robust DRL algorithm is further proposed to provide the worst-case performance guarantee. Numerical results demonstrate that the two proposed DRL algorithms outperform the conventional ones in terms of learning efficiency and robustness. It is also shown that the network sum rate is significantly improved by the proposed RIS-NOMA scheme compared to the conventional RIS-orthogonal multiple access (OMA) scheme and the case where no RIS is deployed.

Journal ArticleDOI
TL;DR: A dataset generation method, the Multilevel Feature Synthesis Method (Multilevel-FSM), is proposed to obtain positioning features, and a specially designed deep learning positioning method, Multipath Res-Inception (MPRI), is trained on the proposed dataset to enhance positioning accuracy.
Abstract: Location-based services (LBSs) provide necessary infrastructure for daily life, from bicycle sharing to nursing care. In contrast to traditional positioning methods such as Wi-Fi, Bluetooth, and ultra-wideband (UWB), fifth-generation (5G) networking is defined as a paradigm of integrated sensing and communication (ISAC). With its advantages of wide-range coverage and indoor-outdoor integration, 5G is promising for high-precision positioning in indoor and urban canyon environments. However, 5G location studies face great obstacles due to the lack of commercialized 5G ISAC base stations that support positioning functions as well as publicly available datasets. In this paper, we first propose a dataset generation method, the Multilevel Feature Synthesis Method (Multilevel-FSM), to obtain positioning features. In particular, the features of a multiple-input multiple-output (MIMO) channel are flattened into a single image to increase the information density and improve feature expression, and data augmentation is performed to provide stronger robustness to noise. Subsequently, we devise a specially designed deep learning positioning method, Multipath Res-Inception (MPRI), trained on the proposed dataset to enhance positioning accuracy. Finally, the results of extensive experiments conducted in two typical 5G scenarios (indoors and urban canyon) show that Multilevel-FSM and MPRI outperform state-of-the-art works in accuracy, time overhead and robustness to noise.

Journal ArticleDOI
TL;DR: A mmWave-enabled CAV cooperation algorithm is designed based on the proposed time-division integrated sensing and communication (TD-ISAC) system for raw sensing data sharing among CAVs, and an age-of-information-based system status update algorithm is proposed for the V2V/V2I collaborative communication mode.
Abstract: To overcome the bottleneck of unreliable environment sensing caused by sensor failure and obstacle blockage, the cooperation among connected automated vehicles (CAVs) is crucial for the reliable and efficient raw sensing data sharing in order to guarantee the driving safety. Empowered by the narrow beamwidth and high data rate abilities, the millimeter wave (mmWave) communication technology can substantially improve the environment sensing ability among multiple CAVs. In this paper, a mmWave-enabled CAV cooperation algorithm is designed based on the proposed time-division integrated sensing and communication (TD-ISAC) system for raw sensing data sharing among CAVs. Considering the various computing abilities at the vehicle and the infrastructure, a closed-form solution to the V2V or V2V/V2I cooperative communication mode selection is theoretically achieved based on response delay analysis to guarantee the timeliness of raw sensing data sharing. An age-of-information-based system status update algorithm is also proposed for the V2V/V2I collaborative communication mode. The feasibility of the proposed TD-ISAC system is verified by simulation and hardware testbed results. Based on simulation results, the proposed communication mode selection algorithm can effectively minimize the response time delay in different conditions. The mmWave-enabled TD-ISAC hardware testbed is developed and the position error of target detection can be reduced by 18.5% using the sensing data fusion from two vehicles, while the communication throughput remains over 2.2 Gbps.

Journal ArticleDOI
TL;DR: In this article, the authors propose an integrated sensing and communication (ISAC) transmission framework based on the spatially spread orthogonal time frequency space (SS-OTFS) modulation by considering the fact that communication channel strengths cannot be directly obtained from radar sensing.
Abstract: In this paper, we propose a novel integrated sensing and communication (ISAC) transmission framework based on the spatially spread orthogonal time frequency space (SS-OTFS) modulation by considering the fact that communication channel strengths cannot be directly obtained from radar sensing. We first propose the concept of SS-OTFS modulation, where the key novelty is the angular domain discretization enabled by the spatial spreading/de-spreading. This discretization gives rise to simple and insightful effective models for both radar sensing and communication, which results in simplified designs for the related estimation and detection problems. In particular, we design simple beam tracking, angle estimation, and power allocation schemes for radar sensing, by utilizing the special structure of the effective radar sensing matrix. Meanwhile, we provide a detailed analysis on the pair-wise error probability (PEP) for communication, which unveils the key conditions for both precoding and power allocation designs for communication. Based on those conditions, we design a symbol-wise precoding scheme for communication based only on the delay, Doppler, and angle estimates from radar sensing, without the a priori knowledge of the communication channel fading coefficients, and also propose a suitable power allocation. Furthermore, we notice that radar sensing and communication require different power allocations. Therefore, we discuss the performances of both the radar sensing and communication with different power allocations and show that the power allocation should be designed leaning towards radar sensing in practical scenarios. The effectiveness of the proposed ISAC transmission framework is verified by our numerical results, which also agree with our analysis and discussions.

Journal ArticleDOI
TL;DR: A new neural network-based semantic communication system is proposed for image transmission, where the task is unknown at the transmitter and the data environment is dynamic; the system can adapt to observable datasets while keeping high performance in terms of both data recovery and task execution.
Abstract: Existing deep learning-enabled semantic communication systems often rely on shared background knowledge between the transmitter and receiver that includes empirical data and their associated semantic information. In practice, the semantic information is defined by the pragmatic task of the receiver and cannot be known to the transmitter. The actual observable data at the transmitter can also have a distribution that is non-identical to that of the empirical data in the shared background knowledge library. To address these practical issues, this paper proposes a new neural network-based semantic communication system for image transmission, where the task is unknown at the transmitter and the data environment is dynamic. The system consists of two main parts, namely the semantic coding (SC) network and the data adaptation (DA) network. The SC network learns how to extract and transmit the semantic information using a receiver-leading training process. By using the domain adaptation technique from transfer learning, the DA network learns how to convert the observed data into a form similar to the empirical data that the SC network can process without re-training. Numerical experiments show that the proposed method can adapt to observable datasets while keeping high performance in terms of both data recovery and task execution.

Journal ArticleDOI
TL;DR: In this article, the authors propose a reinforcement learning based traffic offloading scheme for SAGIN that considers the high mobility of nodes as well as frequently changing network traffic and link states, where a double Q-learning algorithm with an improved delay-sensitive replay memory algorithm (DSRPM) is proposed to train each node to decide its offloading strategy based on local and neighboring historical information.
Abstract: Space-Air-Ground Integrated Networks (SAGIN) are considered the key structure of the next generation network. Space satellites and air nodes are potential candidates to assist and offload terrestrial transmissions. However, due to the high mobility of space and air nodes as well as the high dynamics of network traffic, conventional traffic offloading strategies are not applicable to the highly dynamic SAGIN. In this paper, we propose a reinforcement learning based traffic offloading scheme for SAGIN that considers the high mobility of nodes as well as frequently changing network traffic and link states. In the proposal, a double Q-learning algorithm with an improved delay-sensitive replay memory algorithm (DSRPM) is proposed to train each node to decide its offloading strategy based on local and neighboring historical information. Furthermore, a joint information collection mechanism using hello packets and offline training is proposed to assist the proposed offloading algorithm. The simulation shows that the proposal outperforms conventional offloading algorithms in terms of signaling overhead, dynamic adaptivity, packet drop rate and transmission delay.
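The double Q-learning rule at the core of the proposed offloading scheme keeps two value tables, selecting the greedy next action with one and evaluating it with the other, which reduces the over-estimation bias of plain Q-learning. The sketch below shows only the generic tabular update and an epsilon-greedy policy; the state/action definitions, reward design, and DSRPM replay memory are paper-specific and not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
num_states, num_actions = 5, 3          # illustrative sizes only
QA = np.zeros((num_states, num_actions))
QB = np.zeros((num_states, num_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def double_q_update(s, a, r, s_next):
    """One tabular double Q-learning step: randomly update QA or QB,
    selecting the greedy action with one table and evaluating it with the other."""
    if rng.random() < 0.5:
        a_star = np.argmax(QA[s_next])
        QA[s, a] += alpha * (r + gamma * QB[s_next, a_star] - QA[s, a])
    else:
        b_star = np.argmax(QB[s_next])
        QB[s, a] += alpha * (r + gamma * QA[s_next, b_star] - QB[s, a])

def choose_action(s):
    """Epsilon-greedy action selection on the averaged tables."""
    if rng.random() < epsilon:
        return int(rng.integers(num_actions))
    return int(np.argmax(QA[s] + QB[s]))

# Toy usage for a single transition (state 0 -> state 1 with reward 1.0).
s = 0
a = choose_action(s)
double_q_update(s, a, r=1.0, s_next=1)
```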

Journal ArticleDOI
TL;DR: This paper proposes an IRS–user association strategy considering user mobility for IRS-aided multibeam transmission systems, and aims to optimize the long-term performance of systems in terms of capacity and reliability.
Abstract: In recent years, intelligent reflecting surfaces (IRSs) for large-capacity and highly reliable wireless communication have attracted widespread attention. However, a multiuser access system with multiple IRSs poses limitations in reducing the large signaling overhead of channel estimation for numerous links between the IRSs and users. One approach to reduce the exhaustive channel estimation involves associating the IRS with a user and performing beam tracking for a certain period. However, as the IRS–user association is fixed during the tracking period, the dynamic variations in their link status caused by user mobility degrade the system performance if the association is decided without prior planning. Therefore, this paper proposes an IRS–user association strategy considering user mobility for IRS-aided multibeam transmission systems. Contrary to prior works, our association strategy aims to optimize the long-term performance of systems in terms of capacity and reliability. The proposed strategy minimized performance degradation even under drastic fluctuations of link conditions, thereby reducing channel estimation overhead because both the IRS and user can be associated for long periods with low performance degradation.

Journal ArticleDOI
TL;DR: An uplink non-orthogonal multiple access (NOMA) terrestrial-satellite network is investigated, where the terrestrial base stations communicate with the satellite via a backhaul link and user equipments (UEs) share the spectrum resource of the access link, and a three-stage iterative resource allocation algorithm is proposed.
Abstract: In this paper, an uplink non-orthogonal multiple access (NOMA) terrestrial-satellite network is investigated, where the terrestrial base stations (BSs) communicate with the satellite via a backhaul link, and user equipments (UEs) share the spectrum resource of the access link. Firstly, a utility function is designed which consists of the achieved terrestrial user rate and the cross-tier interference caused by the terrestrial BSs to the satellite. Thus, the optimization problem can be modeled by maximizing the system utility function while satisfying the varying backhaul rate and UEs’ quality of service (QoS) constraints. The optimization problem is highly non-convex and cannot be solved directly. Therefore, we decouple the original problem into a user association sub-problem, a bandwidth assignment sub-problem, and a power allocation sub-problem. In the user association sub-problem, an enhanced-caching, preference relation, and swapping based algorithm is proposed, where the satellite UEs are selected by the channel coefficient ratio. The terrestrial UE association considers both the caching state and the backhaul link. Then we derive the closed-form expression of the bandwidth assignment. In the power allocation sub-problem, we convert the non-convex term of the target function into a convex one by the Taylor expansion, and solve the transformed convex problem by an iterative power allocation algorithm. Finally, a three-stage iterative resource allocation algorithm that jointly considers the three sub-problems is proposed. Simulation results are discussed to show the effectiveness of the proposed algorithms.

Journal ArticleDOI
TL;DR: In this article, the authors investigate transmission power control to combat aggregation errors in Air-FEEL, and obtain structured optimal solutions to the resulting power control problems, both appearing in the form of regularized channel inversion, by using the Lagrangian duality method.
Abstract: Over-the-air federated edge learning (Air-FEEL) has emerged as a communication-efficient solution to enable distributed machine learning over edge devices by using their data locally to preserve the privacy. By exploiting the waveform superposition property of wireless channels, Air-FEEL allows the “one-shot” over-the-air aggregation of gradient-updates to enhance the communication efficiency, but at the cost of a compromised learning performance due to the aggregation errors caused by channel fading and noise. This paper investigates the transmission power control to combat such aggregation errors in Air-FEEL. Different from conventional power control designs (e.g., to minimize the individual mean squared error (MSE) of the over-the-air aggregation at each round), we consider a new power control design aiming at directly maximizing the convergence speed. Towards this end, we first analyze the convergence behavior of Air-FEEL (in terms of the optimality gap) subject to aggregation errors at different communication rounds. It is revealed that if the aggregation estimates are unbiased, then the training algorithm would converge exactly to the optimal point with mild conditions; while if they are biased, then the algorithm would converge with an error floor determined by the accumulated estimate bias over communication rounds. Next, building upon the convergence results, we optimize the power control to directly minimize the derived optimality gaps under the cases without and with unbiased aggregation constraints, subject to a set of average and maximum power constraints at individual edge devices. We transform both problems into convex forms, and obtain their structured optimal solutions, both appearing in the form of regularized channel inversion, by using the Lagrangian duality method. Finally, numerical results show that the proposed power control policies achieve significantly faster convergence for Air-FEEL, as compared with benchmark policies with fixed power transmission or conventional MSE minimization.
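The structured power-control solution described above takes the form of regularized channel inversion: each device inverts its channel gain up to a regularization term so that deep-fade devices do not exhaust their power budget. The sketch below is a hedged illustration of that general form with a fixed regularization constant and a simple clip at the power budget; the paper's actual regularizer and constraints are round-dependent and are not reproduced here.

```python
import numpy as np

def regularized_channel_inversion(h, eta, p_max):
    """Illustrative per-device transmit power in the spirit of regularized
    channel inversion: a precoding coefficient b_k = conj(h_k) / (|h_k|^2 + eta)
    gives power p_k = |h_k|^2 / (|h_k|^2 + eta)^2, clipped at p_max.

    h: complex channel coefficients of the devices
    eta: regularization constant (round-dependent in the paper; fixed here)
    p_max: per-device maximum transmit power
    """
    g = np.abs(h) ** 2
    p = g / (g + eta) ** 2
    return np.minimum(p, p_max)

rng = np.random.default_rng(2)
h = (rng.standard_normal(8) + 1j * rng.standard_normal(8)) / np.sqrt(2)
print(np.round(regularized_channel_inversion(h, eta=0.5, p_max=1.0), 3))
```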

Journal ArticleDOI
TL;DR: In this paper, an energy-aware dynamic device scheduling algorithm is proposed to optimize the training performance within the energy constraints of devices, where both communication energy for gradient aggregation and computation energy for local training are considered.
Abstract: Machine learning and wireless communication technologies are jointly facilitating an intelligent edge, where federated edge learning (FEEL) is emerging as a promising training framework. As wireless devices involved in FEEL are resource limited in terms of communication bandwidth, computing power and battery capacity, it is important to carefully schedule them to optimize the training performance. In this work, we consider an over-the-air FEEL system with analog gradient aggregation, and propose an energy-aware dynamic device scheduling algorithm to optimize the training performance within the energy constraints of devices, where both communication energy for gradient aggregation and computation energy for local training are considered. The consideration of computation energy makes dynamic scheduling challenging, as devices are scheduled before local training, but the communication energy for over-the-air aggregation depends on the $l_{2}$-norm of the local gradient, which is known only after local training. We thus incorporate estimation methods into scheduling to predict the gradient norm. Taking the estimation error into account, we characterize the performance gap between the proposed algorithm and its offline counterpart. Experimental results show that, under a highly unbalanced local data distribution, the proposed algorithm can increase the accuracy by 4.9% on the CIFAR-10 dataset compared with the myopic benchmark, while satisfying the energy constraints.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a novel full-duplex (FD) integrated sensing and communication (ISAC) scheme for the important and challenging monostatic setup, where a single ISAC node wishes to simultaneously sense a radar target while communicating with a communication receiver; the scheme can drastically increase the communication rate and also mitigate the sensing eclipsing and near-target blind range issues, as long as the self-interference (SI) is effectively suppressed.
Abstract: Integrated sensing and communication (ISAC) is a promising technology to fully utilize the precious spectrum and hardware in wireless systems, which has attracted significant attention recently. This paper studies ISAC for the important and challenging monostatic setup, where a single ISAC node wishes to simultaneously sense a radar target while communicating with a communication receiver. Different from most existing schemes that rely on either radar-centric half-duplex (HD) pulsed transmission with information embedding that suffers from extremely low communication rate, or communication-centric waveform that suffers from degraded sensing performance, we propose a novel full-duplex (FD) ISAC scheme that utilizes the waiting time of conventional pulsed radars to transmit communication signals. Compared to radar-centric pulsed waveform with information embedding, the proposed design can drastically increase the communication rate, and also mitigate the sensing eclipsing and near-target blind range issues, as long as the self-interference (SI) is effectively suppressed. On the other hand, compared to communication-centric ISAC waveform, the proposed design has better auto-correlation property as it preserves the classic radar waveform for sensing. Performance analysis is developed by taking into account the residual SI, in terms of the probability of detection and ambiguity function for sensing, as well as the spectrum efficiency for communication. Numerical results are provided to show the significant performance gain of our proposed design over benchmark schemes.
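The ambiguity-function analysis mentioned above characterizes how a waveform trades off delay and Doppler resolution. The sketch below evaluates a discrete ambiguity surface for an arbitrary baseband waveform and applies it to a linear-FM pulse; the sample rate, pulse parameters, and delay/Doppler grids are illustrative, and residual self-interference is not modeled.

```python
import numpy as np

def ambiguity_surface(s, fs, max_delay, doppler_bins):
    """|chi(tau, fd)| on a delay/Doppler grid for a discrete baseband waveform s.

    s: complex baseband samples, fs: sample rate (Hz)
    max_delay: maximum delay in samples, doppler_bins: array of Doppler shifts (Hz)
    """
    t = np.arange(len(s)) / fs
    delays = np.arange(-max_delay, max_delay + 1)
    A = np.zeros((len(delays), len(doppler_bins)))
    for i, d in enumerate(delays):
        # Zero-padded delayed copy of the waveform.
        s_d = np.zeros_like(s)
        if d >= 0:
            s_d[d:] = s[:len(s) - d]
        else:
            s_d[:d] = s[-d:]
        for j, fd in enumerate(doppler_bins):
            A[i, j] = np.abs(np.sum(s * np.conj(s_d) * np.exp(1j * 2 * np.pi * fd * t)))
    return delays, A

# Toy usage: a linear FM (chirp) pulse, whose ambiguity surface shows the familiar ridge.
fs, T, B = 1e6, 1e-3, 100e3
t = np.arange(int(fs * T)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)
delays, A = ambiguity_surface(chirp, fs, max_delay=50, doppler_bins=np.linspace(-5e3, 5e3, 21))
print(A.shape, A.max())
```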

Journal ArticleDOI
TL;DR: The whole DVST design is formulated as an optimization problem whose goal is to minimize the end-to-end transmission rate-distortion performance under perceptual quality metrics or machine vision task performance metrics.
Abstract: In this paper, we design a new class of high-efficiency deep joint source-channel coding methods to achieve end-to-end video transmission over wireless channels. The proposed methods exploit nonlinear transform and conditional coding architecture to adaptively extract semantic features across video frames, and transmit semantic feature domain representations over wireless channels via deep joint source-channel coding. Our framework is collected under the name deep video semantic transmission (DVST). In particular, benefiting from the strong temporal prior provided by the feature domain context, the learned nonlinear transform function becomes temporally adaptive, resulting in a richer and more accurate entropy model guiding the transmission of current frame. Accordingly, a novel rate adaptive transmission mechanism is developed to customize deep joint source-channel coding for video sources. It learns to allocate the limited channel bandwidth within and among video frames to maximize the overall transmission performance. The whole DVST design is formulated as an optimization problem whose goal is to minimize the end-to-end transmission rate-distortion performance under perceptual quality metrics or machine vision task performance metrics. Across standard video source test sequences and various communication scenarios, experiments show that our DVST can generally surpass traditional wireless video coded transmission schemes. The proposed DVST framework can well support future semantic communications due to its video content-aware and machine vision task integration abilities.