
Showing papers by "Bell Labs" published in 2021


Journal ArticleDOI
TL;DR: In this paper, the major design aspects of such a cellular joint communication and sensing (JCAS) system are discussed, and an analysis of the waveform choice is presented, which points towards using the waveform best suited for communication also for radar sensing.
Abstract: The 6G vision of creating authentic digital twin representations of the physical world calls for new sensing solutions to compose multi-layered maps of our environments. Radio sensing using the mobile communication network as a sensor has the potential to become an essential component of the solution. With the evolution of cellular systems to mmWave bands in 5G and potentially sub-THz bands in 6G, small cell deployments will begin to dominate. Large bandwidth systems deployed in small cell configurations provide an unprecedented opportunity to employ the mobile network for sensing. In this paper, we focus on the major design aspects of such a cellular joint communication and sensing (JCAS) system. We present an analysis of the choice of the waveform that points towards choosing the one that is best suited for communication also for radar sensing. We discuss several techniques for efficiently integrating the sensing capability into the JCAS system, some of which are applicable with NR air-interface for evolved 5G systems. Specifically, methods for reducing sensing overhead by appropriate sensing signal design or by configuring separate numerologies for communications and sensing are presented. Sophisticated use of the sensing signals is shown to reduce the signaling overhead by a factor of 2.67 for an exemplary road traffic monitoring use case. We then present a vision for future advanced JCAS systems building upon distributed massive MIMO and discuss various other research challenges for JCAS that need to be addressed in order to pave the way towards natively integrated JCAS in 6G.
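
As an illustration of the radar-sensing side of such an OFDM-based JCAS design, the sketch below shows the classic range-Doppler processing applied to per-subcarrier, per-symbol channel estimates of the communication waveform. This is my own generic illustration, not code from the paper; the numerology, target parameters, and noise level are assumptions.

```python
import numpy as np

# Illustrative OFDM radar processing: channel estimates over the
# subcarrier/symbol grid are turned into a range-Doppler map with two FFTs.
c = 3e8
n_sc, n_sym = 1024, 256            # subcarriers and OFDM symbols (assumed)
delta_f = 120e3                    # subcarrier spacing (assumed, 5G NR mu=3)
t_sym = 1 / delta_f                # symbol duration (cyclic prefix ignored)

tau, f_d = 200e-9, 5e3             # one point target: ~30 m delay, ~5 kHz Doppler (assumed)
n = np.arange(n_sc)[:, None]
m = np.arange(n_sym)[None, :]
h = np.exp(-2j * np.pi * n * delta_f * tau) * np.exp(2j * np.pi * m * t_sym * f_d)
h += 0.1 * (np.random.randn(n_sc, n_sym) + 1j * np.random.randn(n_sc, n_sym))

# IFFT across subcarriers resolves range; FFT across symbols resolves Doppler.
range_doppler = np.fft.fft(np.fft.ifft(h, axis=0), axis=1)
r_idx, d_idx = np.unravel_index(np.argmax(np.abs(range_doppler)), range_doppler.shape)
print("estimated range  [m] :", r_idx * c / (2 * n_sc * delta_f))
print("estimated Doppler [Hz]:", d_idx / (n_sym * t_sym))
```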

223 citations


Journal ArticleDOI
TL;DR: A review of recent advances in snapshot compressive imaging (SCI) hardware, theory, and algorithms, including both optimization-based and deep learning-based algorithms, is presented.
Abstract: Capturing high-dimensional (HD) data is a long-term challenge in signal processing and related fields. Snapshot compressive imaging (SCI) uses a two-dimensional (2D) detector to capture HD (≥3D) data in a snapshot measurement. Via novel optical designs, the 2D detector samples the HD data in a compressive manner; following this, algorithms are employed to reconstruct the desired HD data-cube. SCI has been used in hyperspectral imaging, video, holography, tomography, focal depth imaging, polarization imaging, microscopy, etc. Though the hardware has been investigated for more than a decade, the theoretical guarantees have only recently been derived. Inspired by deep learning, various deep neural networks have also been developed to reconstruct the HD data-cube in spectral SCI and video SCI. This article reviews recent advances in SCI hardware, theory and algorithms, including both optimization-based and deep-learning-based algorithms. Diverse applications and the outlook of SCI are also discussed.
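
For a concrete picture of the measurement process the review analyzes, here is a minimal sketch of the (video) SCI forward model. It is my illustration, not code from the review; the sizes, mask statistics, and the normalization-based initialization are assumptions.

```python
import numpy as np

# B high-speed frames are modulated by B coding masks and summed into a single
# 2D snapshot recorded by a low-speed detector.
H, W, B = 64, 64, 8
rng = np.random.default_rng(0)
frames = rng.random((H, W, B))                          # unknown HD data-cube x
masks = (rng.random((H, W, B)) > 0.5).astype(float)     # coding masks M_b

snapshot = (masks * frames).sum(axis=2)                 # y = sum_b M_b * x_b (2D)

# Crude baseline often used to initialize iterative reconstruction algorithms:
# normalize the snapshot by the mask sum and replicate it along the time axis.
x0 = np.repeat((snapshot / np.maximum(masks.sum(axis=2), 1e-6))[..., None], B, axis=2)
print(snapshot.shape, x0.shape)                         # (64, 64) (64, 64, 8)
```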

122 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review recent advances in Snapshot Compressive Imaging (SCI) hardware, theory, and algorithms, including both optimization-based and deep learning-based algorithms.
Abstract: Capturing high-dimensional (HD) data is a long-term challenge in signal processing and related fields. Snapshot compressive imaging (SCI) uses a 2D detector to capture HD (≥3D) data in a snapshot measurement. Via novel optical designs, the 2D detector samples the HD data in a compressive manner; following this, algorithms are employed to reconstruct the desired HD data cube. SCI has been used in hyperspectral imaging, video, holography, tomography, focal depth imaging, polarization imaging, microscopy, and so on. Although the hardware has been investigated for more than a decade, the theoretical guarantees have only recently been derived. Inspired by deep learning, various deep neural networks have also been developed to reconstruct the HD data cube in spectral SCI and video SCI. This article reviews recent advances in SCI hardware, theory, and algorithms, including both optimization-based and deep learning-based algorithms. Diverse applications and the outlook for SCI are also discussed.

104 citations


Journal ArticleDOI
01 Jan 2021
TL;DR: Simulation results reveal that unlike conventional MIMO architectures, IRS/ITS-aided antennas are both highly energy efficient and fully scalable in terms of the number of transmitting antennas.
Abstract: In this article, we study two novel massive multiple-input multiple-output (MIMO) transmitter architectures for millimeter wave (mmWave) communications which comprise a few active antennas, each equipped with a dedicated radio frequency (RF) chain, that illuminate a nearby large intelligent reflecting/transmitting surface (IRS/ITS). The IRS (ITS) consists of a large number of low-cost and energy-efficient passive antenna elements which are able to reflect (transmit) a phase-shifted version of the incident electromagnetic field. Similar to lens array (LA) antennas, IRS/ITS-aided antenna architectures are energy efficient due to the almost lossless over-the-air connection between the active antennas and the intelligent surface. However, unlike for LA antennas, for which the number of active antennas has to grow linearly with the number of passive elements (i.e., the lens aperture) due to the non-reconfigurability (i.e., non-intelligence) of the lens, for IRS/ITS-aided antennas, the reconfigurability of the IRS/ITS facilitates scaling up the number of radiating passive elements without increasing the number of costly and bulky active antennas. We show that the constraints that the precoders for IRS/ITS-aided antennas have to meet differ from those of conventional MIMO architectures. Taking these constraints into account and exploiting the sparsity of mmWave channels, we design two efficient precoders; one based on maximizing the mutual information and one based on approximating the optimal unconstrained fully digital (FD) precoder via the orthogonal matching pursuit algorithm. Furthermore, we develop a power consumption model for IRS/ITS-aided antennas that takes into account the impacts of the IRS/ITS imperfections, namely the spillover loss, taper loss, aperture loss, and phase shifter loss. Moreover, we study the effect that the various system parameters have on the achievable rate and show that a proper positioning of the active antennas with respect to the IRS/ITS leads to a considerable performance improvement. Our simulation results reveal that unlike conventional MIMO architectures, IRS/ITS-aided antennas are both highly energy efficient and fully scalable in terms of the number of transmitting (passive) antennas. Therefore, IRS/ITS-aided antennas are promising candidates for realizing the potential of mmWave ultra massive MIMO communications in practice.
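
The second precoder mentioned above approximates the optimal fully digital precoder via orthogonal matching pursuit. Below is a hedged sketch of a generic OMP-based precoder approximation in that spirit, not the authors' IRS/ITS-specific formulation; the dictionary of candidate responses, the dimensions, and the power normalization are assumptions.

```python
import numpy as np

# Greedily pick columns of a candidate-response dictionary to approximate the
# optimal fully digital precoder F_opt, then solve least squares for the rest.
def omp_precoder(f_opt, dictionary, n_rf):
    residual, selected = f_opt, []
    for _ in range(n_rf):
        corr = (np.abs(dictionary.conj().T @ residual) ** 2).sum(axis=1)
        selected.append(int(np.argmax(corr)))                 # best-matching column
        f_rf = dictionary[:, selected]                        # constrained ("analog") part
        f_bb, *_ = np.linalg.lstsq(f_rf, f_opt, rcond=None)   # baseband least squares
        residual = f_opt - f_rf @ f_bb
    f_bb *= np.linalg.norm(f_opt, "fro") / np.linalg.norm(f_rf @ f_bb, "fro")
    return f_rf, f_bb

# Toy usage with random complex matrices (dimensions assumed for illustration)
rng = np.random.default_rng(0)
f_opt = rng.standard_normal((64, 2)) + 1j * rng.standard_normal((64, 2))
dic = np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 32))) / np.sqrt(64)
f_rf, f_bb = omp_precoder(f_opt, dic, n_rf=4)
print(f_rf.shape, f_bb.shape)                                 # (64, 4) (4, 2)
```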

82 citations


Proceedings ArticleDOI
Tao Huang1, Weisheng Dong1, Xin Yuan2, Jinjian Wu1, Guangming Shi1 
01 Jun 2021
TL;DR: Huang et al. as discussed by the authors proposed a novel hyperspectral image reconstruction method based on the maximum a posteriori (MAP) estimation framework using a learned Gaussian Scale Mixture (GSM) prior.
Abstract: In a coded aperture snapshot spectral imaging (CASSI) system, the real-world hyperspectral image (HSI) can be reconstructed from the captured compressive image in a snapshot. Model-based HSI reconstruction methods employed hand-crafted priors to solve the reconstruction problem, but most of them achieved only limited success due to the poor representation capability of these hand-crafted priors. Deep learning based methods that directly learn the mappings between the compressive images and the HSIs have achieved much better results. Yet, it is nontrivial to design a powerful deep network heuristically for achieving satisfactory results. In this paper, we propose a novel HSI reconstruction method based on the maximum a posteriori (MAP) estimation framework using a learned Gaussian Scale Mixture (GSM) prior. Different from existing GSM models using hand-crafted scale priors (e.g., the Jeffreys prior), we propose to learn the scale prior through a deep convolutional neural network (DCNN). Furthermore, we also propose to estimate the local means of the GSM models by the DCNN. All the parameters of the MAP estimation algorithm and the DCNN parameters are jointly optimized through end-to-end training. Extensive experimental results on both synthetic and real datasets demonstrate that the proposed method outperforms existing state-of-the-art methods. The code is available at https://see.xidian.edu.cn/faculty/wsdong/Projects/DGSM-SCI.htm.

77 citations


Journal ArticleDOI
TL;DR: In this paper, a comparison of indoor radio propagation measurements and corresponding channel statistics at 28, 73, and 140 GHz, based on extensive measurements from 2014-2020 in an indoor office environment, is provided.
Abstract: This letter provides a comparison of indoor radio propagation measurements and corresponding channel statistics at 28, 73, and 140 GHz, based on extensive measurements from 2014–2020 in an indoor office environment. Side-by-side comparisons of propagation characteristics (e.g., large-scale path loss and multipath time dispersion) across a wide range of frequencies from the low millimeter wave band of 28 GHz to the sub-THz band of 140 GHz illustrate the key similarities and differences in indoor wireless channels. The measurements and models show remarkably similar path loss exponents over frequencies in both line-of-sight (LOS) and non-LOS (NLOS) scenarios, when using a one-meter free space reference distance, while the multipath time dispersion becomes smaller at higher frequencies. The 3GPP indoor channel model overestimates the large-scale path loss and has unrealistically large numbers of clusters and multipath components per cluster compared to the measured channel statistics in this letter.
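
The one-meter free-space-reference (close-in, CI) path loss model referred to above can be sketched as follows. This is my illustration of the standard CI model form, PL(d) = FSPL(f, 1 m) + 10·n·log10(d) + X_sigma; the exponent and shadowing values are assumptions, not results of the letter.

```python
import numpy as np

c = 3e8

def ci_path_loss_db(freq_hz, d_m, n_exp, shadow_sigma_db=0.0, rng=None):
    fspl_1m = 20 * np.log10(4 * np.pi * 1.0 * freq_hz / c)   # free-space loss at 1 m
    shadow = rng.normal(0.0, shadow_sigma_db) if rng is not None else 0.0
    return fspl_1m + 10 * n_exp * np.log10(d_m) + shadow

# Example: LOS link at 20 m with an assumed exponent n = 1.8 at two frequencies
for f in (28e9, 140e9):
    print(f"{f/1e9:.0f} GHz:", round(ci_path_loss_db(f, 20.0, 1.8), 1), "dB")
```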

74 citations


Journal ArticleDOI
TL;DR: A plug-and-play (PnP) method that uses deep-learning-based denoisers as regularization priors for spectral snapshot compressive imaging (SCI) is proposed and shown to achieve state-of-the-art results in simulations and on five different spectral SCI systems.
Abstract: We propose a plug-and-play (PnP) method that uses deep-learning-based denoisers as regularization priors for spectral snapshot compressive imaging (SCI). Our method is efficient in terms of reconstruction quality and speed trade-off, and flexible enough to be ready to use for different compressive coding mechanisms. We demonstrate the efficiency and flexibility in both simulations and five different spectral SCI systems and show that the proposed deep PnP prior could achieve state-of-the-art results with a simple plug-in based on the optimization framework. This paves the way for capturing and recovering multi- or hyperspectral information in one snapshot, which might inspire intriguing applications in remote sensing, biomedical science, and material science. Our code is available at: https://github.com/zsm1211/PnP-CASSI.

69 citations


Journal ArticleDOI
TL;DR: In this paper, the authors report wide bandwidths of 65-75 GHz for three directly modulated laser design implementations, by exploiting three bandwidth enhancement effects: detuned loading, photon-photon resonance and in-cavity frequency modulation-amplitude modulation conversion.
Abstract: Today, in the face of ever increasing communication traffic, minimizing power consumption in data communication systems has become a challenge. Direct modulation of lasers, a technique as old as lasers themselves, is known for its high energy efficiency and low cost. However, the modulation bandwidth of directly modulated lasers has fallen behind those of external modulators. In this Article, we report wide bandwidths of 65–75 GHz for three directly modulated laser design implementations, by exploiting three bandwidth enhancement effects: detuned loading, photon–photon resonance and in-cavity frequency modulation–amplitude modulation conversion. Substantial reduction of chirp (α < 1.0) as well as isolator-free operation under a reflection of up to 40% are also realized. A fast data transmission of 294.7 Gb s−1 over 15 km of a standard single-mode fibre in the O-band is demonstrated. This was achieved without an optical fibre amplifier due to a high laser output power of 13.6 dBm. Directly modulated semiconductor lasers are shown to be able to operate with bandwidths exceeding 65 GHz thanks to a cavity design that harnesses photon–photon resonances.

69 citations


Journal ArticleDOI
TL;DR: A deep fully convolutional neural network, DeepRx is proposed, which executes the whole receiver pipeline from frequency domain signal stream to uncoded bits in a 5G-compliant fashion and outperforms traditional methods.
Abstract: Deep learning has solved many problems that are out of reach of heuristic algorithms. It has also been successfully applied in wireless communications, even though the current radio systems are well-understood and optimal algorithms exist for many tasks. While some gains have been obtained by learning individual parts of a receiver, a better approach is to jointly learn the whole receiver. This, however, often results in a challenging nonlinear problem, for which the optimal solution is infeasible to implement. To this end, we propose a deep fully convolutional neural network, DeepRx, which executes the whole receiver pipeline from frequency domain signal stream to uncoded bits in a 5G-compliant fashion. We facilitate accurate channel estimation by constructing the input of the convolutional neural network in a very specific manner using both the data and pilot symbols. Also, DeepRx outputs soft bits that are compatible with the channel coding used in 5G systems. Using 3GPP-defined channel models, we demonstrate that DeepRx outperforms traditional methods. We also show that the high performance can likely be attributed to DeepRx learning to utilize the known constellation points of the unknown data symbols, together with the local symbol distribution, for improved detection accuracy.

63 citations


Journal ArticleDOI
TL;DR: The authors propose deep learning models, both centralized and federated, that can perform horizontal and vertical autoscaling in multi-domain networks; they evaluate the performance of various deep learning models trained on a commercial network operator dataset and investigate the pros and cons of federated learning over centralized learning approaches.
Abstract: Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC) are two technologies expected to play a vital role in 5G and beyond networks. However, adequate mechanisms are required to meet the dynamically changing network service demands to utilize the network resources optimally and also to satisfy the demanding QoS requirements. Particularly in multi-domain scenarios, the additional challenge of isolation and data privacy among domains needs to be tackled. To this end, centralized and distributed Artificial Intelligence (AI)-driven resource orchestration techniques (e.g., virtual network function (VNF) autoscaling) are foreseen as the main enabler. In this work, we propose deep learning models, both centralized and federated approaches, that can perform horizontal and vertical autoscaling in multi-domain networks. The problem of autoscaling is modelled as a time series forecasting problem that predicts the future number of VNF instances based on the expected traffic demand. We evaluate the performance of various deep learning models trained over a commercial network operator dataset and investigate the pros and cons of federated learning over centralized learning approaches. Furthermore, we introduce the AI-driven Kubernetes orchestration prototype that we implemented by leveraging our MEC platform and assess the performance of the proposed deep learning models in a practical setup.
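
The core idea, autoscaling as time-series forecasting of demand mapped to a number of VNF instances, can be sketched as below. A naive seasonal forecaster stands in for the paper's deep and federated models; the traffic trace, per-instance capacity, and headroom factor are assumptions for illustration only.

```python
import numpy as np

def forecast_next(traffic, season=24):
    # predict the next hour as the mean of the same hour over recent days
    history = traffic[-season * 3::season] if len(traffic) >= season else traffic
    return float(np.mean(history))

def horizontal_scale(traffic, capacity_per_instance=100.0, headroom=1.2, min_inst=1):
    # map the forecast demand to an instance count with some headroom
    predicted = forecast_next(traffic)
    return max(min_inst, int(np.ceil(headroom * predicted / capacity_per_instance)))

rng = np.random.default_rng(1)
hours = np.arange(24 * 7)
trace = 300 + 200 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 20, hours.size)
print("VNF instances for next hour:", horizontal_scale(trace))
```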

55 citations


Journal ArticleDOI
TL;DR: The authors presented a 23.5–29.5 GHz 8×8 phased array based on 2×2 TRX quad-beamformer chips with 6-bit phase control and 8-bit gain control for wideband multistandard applications, which achieves an effective isotropic radiated power (EIRP) of 54.8 dBm at P1dB with a 3-dB bandwidth of 23.5–30.5 GHz and can scan to ±60° in the azimuth plane and ±40° in the elevation plane with excellent patterns with a single-point calibration at 27 GHz.
Abstract: This article presents a 23.5–29.5-GHz 8×8 phased array for wideband multistandard applications. The array is based on wideband high-performance 2×2 transmit/receive (TRX) quad-beamformer chips with 6 bits of phase control and 8 bits of gain control. The antenna is designed using a stacked-patch structure combined with a two-stage impedance matching network to enhance its bandwidth. The 8×8 phased array achieves an effective isotropic radiated power (EIRP) of 54.8 dBm at P1dB with a 3-dB bandwidth of 23.5–30.5 GHz and can scan to ±60° in the azimuth plane and ±40° in the elevation plane with excellent patterns with a single-point calibration at 27 GHz. Measured error vector magnitude (EVM) for 64-QAM 200- and 800-Mbaud waveforms results in a system EVM of 5% (−26 dB) in the TX mode at an average EIRP of 46–47 dBm at 24.5–29.5 GHz. Also, the wideband array is capable of 16-QAM 24-Gb/s links with an EVM <16% over all scan angles. An interband carrier aggregation (CA) system is also demonstrated with the wideband array using 200-Mbaud 64-QAM waveforms with 25- and 29-GHz carriers. The phased-array phase and amplitude settings are chosen such that the 25- and 29-GHz waveforms are radiated simultaneously at the same angle with low scan loss, resulting in an efficient system. Also, the out-of-band third-order intermodulation products generated by the power amplifier on each element are filtered out by the antenna. CA measurements with up to 50° scan angles are demonstrated with low EVM. To the best of our knowledge, this is the first demonstration of CA in millimeter-wave fifth-generation (5G) systems.
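
As a back-of-the-envelope aside on how array-level EIRP relates to element count, the sketch below uses the standard relation EIRP ≈ P_element + G_element + 20·log10(N) dBm for an N-element coherently driven array. The per-element power and gain values are assumptions for illustration, not numbers taken from the article.

```python
import numpy as np

# Radiated power and array gain each contribute 10*log10(N), hence 20*log10(N) total.
def array_eirp_dbm(p_element_dbm, g_element_dbi, n_elements):
    return p_element_dbm + g_element_dbi + 20 * np.log10(n_elements)

# Example for an 8x8 (64-element) array with assumed element power and gain
print(round(array_eirp_dbm(p_element_dbm=13.0, g_element_dbi=5.0, n_elements=64), 1), "dBm")
```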

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed a low-rank regularized group sparse coding (LR-GSC) model to bridge the gap between the popular GSC and joint sparsity.
Abstract: Image nonlocal self-similarity (NSS) property has been widely exploited via various sparsity models such as joint sparsity (JS) and group sparse coding (GSC). However, the existing NSS-based sparsity models are either too restrictive, e.g., JS enforces the sparse codes to share the same support, or too general, e.g., GSC imposes only plain sparsity on the group coefficients, which limits their effectiveness for modeling real images. In this paper, we propose a novel NSS-based sparsity model, namely, low-rank regularized group sparse coding (LR-GSC), to bridge the gap between the popular GSC and JS. The proposed LR-GSC model simultaneously exploits the sparsity and low-rankness of the dictionary-domain coefficients for each group of similar patches. An alternating minimization with an adaptively adjusted parameter strategy is developed to solve the proposed optimization problem for different image restoration tasks, including image denoising, image deblocking, image inpainting, and image compressive sensing. Extensive experimental results demonstrate that the proposed LR-GSC algorithm outperforms many popular or state-of-the-art methods in terms of objective and perceptual metrics.

Journal ArticleDOI
TL;DR: The authors present results from the Tactile Internet 4.0 (TACNET 4.0) project and introduce a tailored architecture focused on the communication needs of representative Industry 4.0 use cases while ensuring parallel compliance with the latest developments in relevant standardization.
Abstract: The increasing demand for highly customized products, as well as flexible production lines, can be seen as a trigger for the "fourth industrial revolution", referred to as "Industry 4.0". Current systems usually rely on wire-line technologies to connect sensors and actuators, but new use cases such as moving robots or drones demand higher flexibility in communication services. Wireless technologies, especially 5th generation wireless communication systems (5G), are best suited to address these new requirements. Furthermore, this facilitates the renewal of brownfield deployments to enable a smooth migration to Industry 4.0. This paper presents results from the Tactile Internet 4.0 (TACNET 4.0) project and introduces a tailored architecture that is focused on the communication needs of representative Industry 4.0 use cases while ensuring parallel compliance with the latest developments in relevant standardization.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper developed fast and flexible algorithms for video snapshot compressive imaging based on the plug-and-play (PnP) framework and further proposed the PnP-GAP algorithm with a lower computational workload.
Abstract: We consider the reconstruction problem of video snapshot compressive imaging (SCI), which captures high-speed videos using a low-speed 2D sensor. The underlying principle of SCI is to modulate sequential high-speed frames with different masks; these encoded frames are then integrated into a snapshot on the sensor, and thus the sensor can be low-speed. On one hand, video SCI enjoys the advantages of low bandwidth, low power and low cost. On the other hand, applying SCI to large-scale problems (HD or UHD videos) in our daily life is still challenging and one of the bottlenecks lies in the reconstruction algorithm. Existing algorithms are either too slow (iterative optimization algorithms) or not flexible to the encoding process (deep learning based end-to-end networks). In this paper, we develop fast and flexible algorithms for SCI based on the plug-and-play (PnP) framework. In addition to the PnP-ADMM, we further propose the PnP-GAP algorithm with a lower computational workload. Furthermore, we extend the proposed PnP algorithms to the color SCI system using mosaic sensors. A joint reconstruction and demosaicing paradigm is developed for flexible and high quality reconstruction of color video SCI systems. Extensive results on both simulation and real datasets verify the superiority of our proposed algorithm.
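
The structure of the PnP-GAP iteration described above can be sketched compactly. This is my simplified illustration of the framework, not the authors' released implementation; a Gaussian filter stands in for the learned deep denoiser that the paper plugs in, and the sizes are assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_gap(snapshot, masks, n_iter=30, denoise_sigma=1.0):
    # snapshot: (H, W) measurement; masks: (H, W, B) coding masks
    phi_sum = np.maximum((masks ** 2).sum(axis=2), 1e-6)      # diag of Phi Phi^T
    x = np.repeat((snapshot / phi_sum)[..., None], masks.shape[2], axis=2)
    for _ in range(n_iter):
        # Euclidean projection toward the set {x : Phi x = y}
        residual = (snapshot - (masks * x).sum(axis=2)) / phi_sum
        v = x + masks * residual[..., None]
        # Plug-and-play denoising step (replace with a learned video denoiser)
        x = gaussian_filter(v, sigma=(denoise_sigma, denoise_sigma, 0))
    return x

# Toy usage with random masks and a synthetic measurement (shape check only)
H, W, B = 64, 64, 8
rng = np.random.default_rng(0)
masks = (rng.random((H, W, B)) > 0.5).astype(float)
y = (masks * rng.random((H, W, B))).sum(axis=2)
print(pnp_gap(y, masks).shape)                                # (64, 64, 8)
```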

Journal ArticleDOI
TL;DR: A chemical vapour deposition graphene photodetector based on the photo-thermoelectric effect, integrated on a silicon waveguide, providing frequency response >65 GHz and optimized to be interfaced to a 50 Ω voltage amplifier for direct voltage amplification.
Abstract: One of the main challenges of next generation optical communication is to increase the available bandwidth while reducing the size, cost and power consumption of photonic integrated circuits. Graphene has been recently proposed to be integrated with silicon photonics to meet these goals because of its high mobility, fast carrier dynamics and ultra-broadband optical properties. We focus on graphene photodetectors for high speed datacom and telecom applications based on the photo-thermo-electric effect, allowing for direct optical power to voltage conversion, zero dark current, and ultra-fast operation. We report on a chemical vapour deposition graphene photodetector based on the photo-thermoelectric effect, integrated on a silicon waveguide, providing frequency response >65 GHz and optimized to be interfaced to a 50 Ω voltage amplifier for direct voltage amplification. We demonstrate a system test leading to direct detection of 105 Gbit s−1 non-return to zero and 120 Gbit s−1 4-level pulse amplitude modulation optical signals. The fast carrier dynamics and ultra-broadband optical properties of graphene make it suitable for optical communications. Here, the authors demonstrate a photo-thermo-electric graphene photodetector integrated on a Si waveguide featuring 105 Gbit s−1 non-return to zero and 120 Gbit s−1 4-level pulse amplitude modulation direct detection.

Journal ArticleDOI
TL;DR: The authors report a peta-bit-per-second class transmission demonstration in multi-mode fibers, enabled by combining three key technologies: a wideband optical comb-based transmitter to generate highly spectrally efficient 64-quadrature-amplitude-modulated signals between 1528 nm and 1610 nm wavelength, a broadband mode-multiplexer based on multi-plane light conversion, and a 15-mode multi-mode fiber with optimized transmission characteristics for wideband operation.
Abstract: Data rates in optical fiber networks have increased exponentially over the past decades and core-networks are expected to operate in the peta-bit-per-second regime by 2030. As current single-mode fiber-based transmission systems are reaching their capacity limits, space-division multiplexing has been investigated as a means to increase the per-fiber capacity. Of all space-division multiplexing fibers proposed to date, multi-mode fibers have the highest spatial channel density, as signals traveling in orthogonal fiber modes share the same fiber-core. By combining a high mode-count multi-mode fiber with wideband wavelength-division multiplexing, we report a peta-bit-per-second class transmission demonstration in multi-mode fibers. This was enabled by combining three key technologies: a wideband optical comb-based transmitter to generate highly spectral efficient 64-quadrature-amplitude modulated signals between 1528 nm and 1610 nm wavelength, a broadband mode-multiplexer, based on multi-plane light conversion, and a 15-mode multi-mode fiber with optimized transmission characteristics for wideband operation. Space division multiplexing solutions are one way to increase future fiber information capacity. Here, the authors show peta-bit/s transmission in a standard-diameter, multimode fiber enabled by combining several practical multiplexing technologies.

Proceedings ArticleDOI
Zhengjue Wang1, Hao Zhang1, Ziheng Cheng1, Bo Chen1, Xin Yuan2 
20 Jun 2021
TL;DR: MetaSCI as discussed by the authors is composed of a shared backbone for different masks, and light-weight meta-modulation parameters to evolve to different modulation parameters for each mask, thus having the properties of fast adaptation to new masks or systems and ready to scale to large data.
Abstract: To capture high-speed videos using a two-dimensional detector, video snapshot compressive imaging (SCI) is a promising system, where the video frames are coded by different masks and then compressed to a snapshot measurement. Following this, efficient algorithms are desired to reconstruct the high-speed frames, where the state-of-the-art results are achieved by deep learning networks. However, these networks are usually trained for specific small-scale masks and often have high demands of training time and GPU memory, which are hence not flexible to i) a new mask with the same size and ii) a larger-scale mask. We address these challenges by developing a Meta Modulated Convolutional Network for SCI reconstruction, dubbed MetaSCI. MetaSCI is composed of a shared backbone for different masks, and light-weight meta-modulation parameters to evolve to different modulation parameters for each mask, thus having the properties of fast adaptation to new masks (or systems) and ready to scale to large data. Extensive simulation and real data results demonstrate the superior performance of our proposed approach. Our code is available at https://github.com/xyvirtualgroup/MetaSCI-CVPR2021.

Journal ArticleDOI
TL;DR: Li et al. as discussed by the authors proposed a hybrid plug-and-play (H-PnP) framework based on the low-rank and deep (LRD) image model for image restoration.
Abstract: Recent works that utilized deep models have achieved superior results in various image restoration (IR) applications. Such an approach is typically supervised, which requires a corpus of training images with distributions similar to the images to be recovered. On the other hand, the shallow methods, which are usually unsupervised, still deliver promising performance in many inverse problems, e.g., image deblurring and image compressive sensing (CS), as they can effectively leverage nonlocal self-similarity priors of natural images. However, most such methods are patch-based, leading to restored images with various artifacts due to naive patch aggregation, in addition to slow speed. Using either approach alone usually limits performance and generalizability in IR tasks. In this paper, we propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors, namely, internal and external, shallow and deep, and non-local and local priors. We then propose a novel hybrid plug-and-play (H-PnP) framework based on the LRD model for IR. Following this, a simple yet effective algorithm is developed to solve the proposed H-PnP based IR problems. Extensive experimental results on several representative IR tasks, including image deblurring, image CS and image deblocking, demonstrate that the proposed H-PnP algorithm achieves favorable performance compared to many popular or state-of-the-art IR methods in terms of both objective and visual perception.

Proceedings ArticleDOI
01 Jun 2021
TL;DR: Chen et al. as mentioned in this paper developed a memory-efficient network for large-scale video snapshot compressive imaging (SCI) based on multi-group reversible 3D convolutional neural networks.
Abstract: Video snapshot compressive imaging (SCI) captures a sequence of video frames in a single shot using a 2D detector. The underlying principle is that during one exposure time, different masks are imposed on the high-speed scene to form a compressed measurement. With the knowledge of masks, optimization algorithms or deep learning methods are employed to reconstruct the desired high-speed video frames from this snapshot measurement. Unfortunately, though these methods can achieve decent results, the long running time of optimization algorithms or the huge training memory occupation of deep networks still precludes their use in practical applications. In this paper, we develop a memory-efficient network for large-scale video SCI based on multi-group reversible 3D convolutional neural networks. In addition to the basic model for the grayscale SCI system, we take one step further to combine demosaicing and SCI reconstruction to directly recover color video from Bayer measurements. Extensive results on both simulation and real data captured by SCI cameras demonstrate that our proposed model outperforms previous state-of-the-art with less memory and thus can be used in large-scale problems. The code is at https://github.com/BoChenGroup/RevSCI-net.

Journal ArticleDOI
TL;DR: This work combines an on-chip, telecom-wavelength, broadband entangled photon source with industry-grade flexible-grid wavelength division multiplexing techniques, to demonstrate reconfigurable entanglement distribution between up to 8 users in a resource-optimized quantum network topology.
Abstract: Quantum communication networks enable applications ranging from highly secure communication to clock synchronization and distributed quantum computing. Miniaturized, flexible, and cost-efficient resources will be key elements for ensuring the scalability of such networks as they progress towards large-scale deployed infrastructures. Here, we bring these elements together by combining an on-chip, telecom-wavelength, broadband entangled photon source with industry-grade flexible-grid wavelength division multiplexing techniques, to demonstrate reconfigurable entanglement distribution between up to 8 users in a resource-optimized quantum network topology. As a benchmark application we use quantum key distribution, and show low error and high secret key generation rates across several frequency channels, over both symmetric and asymmetric metropolitan-distance optical fibered links and including finite-size effects. By adapting the bandwidth allocation to specific network constraints, we also illustrate the flexible networking capability of our configuration. Together with the potential of our semiconductor source for distributing secret keys over a 60 nm bandwidth with commercial multiplexing technology, these results offer a promising route to the deployment of scalable quantum network architectures.

Journal ArticleDOI
TL;DR: The authors provide a comprehensive survey of the current state-of-the-art (SoA) on UAV-enabled ultra-reliable low-latency communication (URLLC) networks.
Abstract: Ultra-reliable low-latency communications (URLLCs) and the adoption of unmanned aerial vehicles (UAVs) for network coverage improvement have emerged as key enabling communication paradigms for the successful deployment of mobile communication services envisioned in both the fifth-generation (5G) and the sixth-generation (6G) networks. This paper provides a comprehensive survey of the current state-of-the-art (SoA) on UAV-enabled URLLC networks. The core idea is to highlight the main characteristics of this new network concept as well as its critical aspects. We first perform an overview of URLLC by illustrating the main features and related implementation challenges. Subsequently, an in-depth discussion on UAV-enabled networks is provided, with a special emphasis on how URLLC and UAV communication can be classified as complementary paradigms. Finally, a comprehensive analysis and classification of the current research advancements on UAV-enabled URLLC networks is carried out. This paper is concluded by pointing out some of the open challenges and our visions related to future directions which should be undertaken in order to pave the way towards the practical implementation of this promising network architecture.

Journal ArticleDOI
TL;DR: In this article, the authors present a vision of a new air interface that is partially designed by AI to enable optimized communication schemes for any hardware, radio environment, and application, while it is clear that 6G must cater to the needs of large distributed learning systems.
Abstract: Each generation of cellular communication systems is marked by a defining disruptive technology of its time, such as OFDM for 4G or Massive MIMO for 5G. Since AI is the defining technology of our time, it is natural to ask what role it could play for 6G. While it is clear that 6G must cater to the needs of large distributed learning systems, it is less certain if AI will play a defining role in the design of 6G itself. The goal of this article is to paint a vision of a new air interface that is partially designed by AI to enable optimized communication schemes for any hardware, radio environment, and application.

Journal ArticleDOI
TL;DR: To recover the network within a disaster area, a fast K-means-based user clustering model and a jointly optimal power and time-transfer allocation are proposed, which can be applied in real systems by using UAVs as flying base stations for real-time recovery and maintenance of network connectivity during and after disasters.
Abstract: In this work, we consider a joint optimisation of real-time deployment and resource allocation scheme for UAV-aided relay systems in emergency scenarios such as disaster relief and public safety missions. In particular, to recover the network within a disaster area, we propose a fast K-means-based user clustering model together with a jointly optimal power and time-transfer allocation, which can be applied in real systems by using UAVs as flying base stations to recover and maintain network connectivity in real time during and after disasters. Under stringent QoS constraints, we then provide centralised and distributed models to maximise the energy efficiency of the considered network. Numerical results are provided to illustrate the effectiveness of the proposed computational approaches in terms of network energy efficiency and execution time for solving the resource allocation problem in real-time scenarios. We demonstrate that our proposed algorithm outperforms other benchmark schemes.
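
The K-means clustering step described above can be sketched as follows: group ground users by position and place one UAV relay at each cluster centroid. The user layout and number of UAVs are assumptions for illustration; the paper's joint power and time allocation optimization is omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
users = rng.uniform(0, 1000, size=(200, 2))     # user (x, y) positions in metres (assumed)

n_uavs = 4
km = KMeans(n_clusters=n_uavs, n_init=10, random_state=0).fit(users)
uav_positions = km.cluster_centers_             # candidate UAV hovering points
assignment = km.labels_                         # which UAV serves which user
print(uav_positions.round(1))
```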

Journal ArticleDOI
TL;DR: A neural network aided power control algorithm is developed that leads to scalable Cell-Free Massive MIMO networks in which the amount of computations conducted by each AP does not depend on the number of network APs.
Abstract: An Internet-of-Things (IoT) system supports a massive number of IoT devices wirelessly. We show how to use cell-free (CF) massive multiple input and multiple output (MIMO) to provide a scalable and energy-efficient IoT system. We employ optimal linear estimation with random pilots to acquire channel state information (CSI) for MIMO precoding and decoding. In the uplink (UL), we employ an optimal linear decoder and utilize random matrix (RM) theory to obtain two accurate signal-to-interference plus noise ratio (SINR) approximations involving only large-scale fading coefficients. We derive several max–min type power control algorithms based on both the exact SINR expression and the RM approximations. Next we consider the power control problem for downlink (DL) transmission. To avoid solving a time-consuming quasiconcave problem that requires repeated tests for the feasibility of a second-order cone programming (SOCP) problem, we develop a neural network (NN) aided power control algorithm that results in a 30 times reduction in computation time. This power control algorithm leads to scalable CF Massive MIMO networks in which the amount of computations conducted by each access point (AP) does not depend on the number of network APs. Both UL and DL power control algorithms visibly improve the system spectral efficiency (SE) and, more importantly, lead to multifold improvements in energy efficiency (EE), which is crucial for IoT networks.

Journal ArticleDOI
11 Jan 2021
TL;DR: In this article, the European high performance BiCMOS technology platforms are presented, which have special advantages for addressing applications in the sub-millimeter-wave and THz range, and the status of the technology process is reviewed and the integration challenges are examined.
Abstract: This paper gives an overall picture from BiCMOS technologies up to THz systems integration, which were developed in the European Research project TARANTO. The European high performance BiCMOS technology platforms are presented, which have special advantages for addressing applications in the submillimeter-wave and THz range. The status of the technology process is reviewed and the integration challenges are examined. A detailed discussion on millimeter-wave characterization and modeling is given with emphasis on harmonic distortion analysis, power and noise figure measurements up to 190 GHz and 325 GHz respectively and S-parameter measurements up to 500 GHz. The results of electrical compact models of active (HBTs) and passive components are presented together with benchmark circuit blocks for model verification. BiCMOS-enabled systems and applications with focus on future wireless communication systems and high-speed optical transmission systems up to resulting net data rates of 1.55 Tbit/s are presented.

Journal ArticleDOI
21 Jan 2021
TL;DR: In this article, the authors proposed a deep learning design for location and person-independent activity recognition with WiFi, which consists of three deep neural networks (DNNs): a 2D Convolutional Neural Network (CNN) as the recognition algorithm, a 1D CNN as the state machine, and a reinforcement learning agent for neural architecture search.
Abstract: In recent years, Channel State Information (CSI) measured by WiFi has been widely used for human activity recognition. In this article, we propose a deep learning design for location- and person-independent activity recognition with WiFi. The proposed design consists of three Deep Neural Networks (DNNs): a 2D Convolutional Neural Network (CNN) as the recognition algorithm, a 1D CNN as the state machine, and a reinforcement learning agent for neural architecture search. The recognition algorithm learns location- and person-independent features from different perspectives of CSI data. The state machine learns temporal dependency information from history classification results. The reinforcement learning agent optimizes the neural architecture of the recognition algorithm using a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM). The proposed design is evaluated in a lab environment with different WiFi device locations, antenna orientations, sitting/standing/walking locations/orientations, and multiple persons. The proposed design has 97% average accuracy when testing devices and persons are not seen during training. The proposed design is also evaluated on two public datasets with accuracies of 80% and 83%. The proposed design needs very little human effort for ground-truth labeling, feature engineering, signal processing, and tuning of learning parameters and hyperparameters.
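
To make the 2D-CNN recognition branch concrete, here is a hedged toy sketch of a CNN operating on WiFi CSI "images" (time x subcarrier x link). All shapes, layer sizes, and the class count are assumptions; the paper's actual architecture is found via neural architecture search and differs from this illustration.

```python
import tensorflow as tf

n_time, n_subc, n_links, n_classes = 128, 64, 3, 6   # assumed CSI tensor shape and classes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_time, n_subc, n_links)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```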

Journal ArticleDOI
01 Jun 2021
TL;DR: A millimetre-wave modulator and antenna array for backscatter communications at gigabit data rates, created with inkjet printing, providing a bit rate of two gigabits per second and with a front-end energy consumption of only 0.17 pJ per bit.
Abstract: Future devices for the Internet of Things will require communication systems that can deliver higher data rates at low power. Backscatter radio—in which wireless communication is achieved via reflection rather than radiation—is a low-complexity approach that requires a minimal number of active elements. However, it is typically limited to data rates of hundreds of megabits per second because of the low frequency bands used and the modulation techniques involved. Here we report a millimetre-wave modulator and antenna array for backscatter communications at gigabit data rates. This radiofrequency front-end consists of a microstrip patch antenna array and a single pseudomorphic high-electron-mobility transistor that supports a range of modulation formats including binary phase shift keying, quadrature phase shift keying and quadrature amplitude modulation. The circuit is additively manufactured with inkjet printing using silver nanoparticle inks on a flexible liquid-crystal polymer substrate. A millimetre-wave transceiver is also designed to capture and downconvert the backscattered signals and route them for digital signal processing. With the system, we demonstrate a bit rate of two gigabits per second of backscatter transmission at millimetre-wave frequencies of 24–28 GHz, and with a front-end energy consumption of 0.17 pJ per bit. A microstrip patch antenna array and a single high-electron-mobility transistor, which are created with inkjet printing, can be used for backscatter communication at millimetre-wave frequencies, providing a bit rate of two gigabits per second and with a front-end energy consumption of only 0.17 pJ per bit.

Journal ArticleDOI
TL;DR: In this paper, the authors provide an overview of the latest Wi-Fi-related news, with emphasis on the recently launched 802.11be certification program, vouching for multi-AP coordination as a must-have for critical and latency-sensitive applications.
Abstract: As hordes of data-hungry devices challenge its current capabilities, Wi-Fi strikes back with 802.11be, alias Wi-Fi 7. This brand new amendment promises a (r)evolution of unlicensed wireless connectivity as we know it. To appreciate its foreseen impact, we start by overviewing the latest Wi-Fi-related news, with emphasis on the recently launched Wi-Fi 6E certification program. We then provide an updated digest of 802.11be essential features, vouching for multi-AP coordination as a must-have for critical and latency-sensitive applications. We finally get down to the nitty-gritty of one of its most enticing implementations – coordinated beamforming – for which our standard-compliant simulations confirm near-tenfold reductions in worst case delays.

Posted ContentDOI
21 Jan 2021-bioRxiv
TL;DR: Connectome as discussed by the authors is a software package for R which facilitates rapid calculation, and interactive exploration, of cell-cell signaling network topologies contained in single-cell RNA-sequencing data.
Abstract: Single-cell RNA-sequencing data can revolutionize our understanding of the patterns of cell-cell and ligand-receptor connectivity that influence the function of tissues and organs. However, the quantification and visualization of these patterns are major computational and epistemological challenges. Here, we present Connectome, a software package for R which facilitates rapid calculation, and interactive exploration, of cell-cell signaling network topologies contained in single-cell RNA-sequencing data. Connectome can be used with any reference set of known ligand-receptor mechanisms. It has built-in functionality to facilitate differential and comparative connectomics, in which complete mechanistic networks are quantitatively compared between systems. Connectome includes computational and graphical tools designed to analyze and explore cell-cell connectivity patterns across disparate single-cell datasets. We present approaches to quantify these topologies and discuss some of the biologic theory leading to their design.

Journal ArticleDOI
TL;DR: In this paper, the current status of mode-division multiplexing (MDM) techniques in fibers and on chips is reviewed, and three system applications are introduced, including quasi-single mode transmission, multicore few-mode amplifier, and fiber sensing.
Abstract: We review the current status of mode-division multiplexing (MDM) techniques in fibers and on chips. Three system applications are introduced, including quasi-single mode transmission, multicore few-mode amplifier, and fiber sensing. We also discuss the technology development trend in terms of multiple-input-multiple-output-free MDM, economics of MDM, and quantum information processing. Finally, we provide perspectives on emerging applications beyond communications by leveraging the optical properties of high order modes, e.g., nonlinear optics in the visible regime, broadband frequency comb generation, and super resolution endoscopy.