scispace - formally typeset

Showing papers by "Bell Labs" published in 2020


Journal ArticleDOI
TL;DR: A comprehensive review of, and updated solutions for, 5G network slicing using SDN and NFV, together with a discussion of various open-source orchestrators and proofs of concept representing industrial contributions.

458 citations


Journal ArticleDOI
TL;DR: In this article, the authors survey three new multiple antenna technologies that can play key roles in beyond 5G networks: cell-free massive MIMO, beamspace massive MIMO, and intelligent reflecting surfaces.
Abstract: Multiple antenna technologies have attracted much research interest for several decades and have gradually made their way into mainstream communication systems. Two main benefits are adaptive beamforming gains and spatial multiplexing, leading to high data rates per user and per cell, especially when large antenna arrays are adopted. Since multiple antenna technology has become a key component of the fifth-generation (5G) networks, it is time for the research community to look for new multiple antenna technologies to meet the immensely higher data rate, reliability, and traffic demands in the beyond 5G era. Radically new approaches are required to achieve orders-of-magnitude improvements in these metrics. There will be large technical challenges, many of which are yet to be identified. In this paper, we survey three new multiple antenna technologies that can play key roles in beyond 5G networks: cell-free massive MIMO, beamspace massive MIMO, and intelligent reflecting surfaces. For each of these technologies, we present the fundamental motivation, key characteristics, and recent technical progress, and provide our perspectives on future research directions. The paper is not meant to be a survey/tutorial of a mature subject, but rather to serve as a catalyst to encourage more research and experiments in these multiple antenna technologies.

430 citations


Journal ArticleDOI
TL;DR: The other major technology transformations that are likely to define 6G are discussed: cognitive spectrum sharing methods and new spectrum bands; the integration of localization and sensing capabilities into the system definition; the achievement of extreme performance requirements on latency and reliability; new network architecture paradigms involving sub-networks and RAN-Core convergence; and new security and privacy schemes.
Abstract: The focus of wireless research is increasingly shifting toward 6G as 5G deployments get underway. At this juncture, it is essential to establish a vision of future communications to provide guidance for that research. In this paper, we attempt to paint a broad picture of communication needs and technologies in the timeframe of 6G. The future of connectivity is in the creation of digital twin worlds that are a true representation of the physical and biological worlds at every spatial and time instant, unifying our experience across these physical, biological and digital worlds. New themes are likely to emerge that will shape 6G system requirements and technologies, such as: (i) new man-machine interfaces created by a collection of multiple local devices acting in unison; (ii) ubiquitous universal computing distributed among multiple local devices and the cloud; (iii) multi-sensory data fusion to create multi-verse maps and new mixed-reality experiences; and (iv) precision sensing and actuation to control the physical world. Artificial intelligence, with its rapid advances, has the potential to become the foundation for the 6G air interface and network, making data, compute and energy the new resources to be exploited for achieving superior performance. In addition, in this paper we discuss the other major technology transformations that are likely to define 6G: (i) cognitive spectrum sharing methods and new spectrum bands; (ii) the integration of localization and sensing capabilities into the system definition; (iii) the achievement of extreme performance requirements on latency and reliability; (iv) new network architecture paradigms involving sub-networks and RAN-Core convergence; and (v) new security and privacy schemes.

420 citations


Journal ArticleDOI
TL;DR: This tutorial article explains the importance of considering spatial channel correlation and using signal processing schemes designed for multicell networks, and presents recent results on the fundamental limits of Massive MIMO, which are determined not by pilot contamination but by the ability to acquire channel statistics.
Abstract: Since the seminal paper by Marzetta from 2010, Massive MIMO has changed from being a theoretical concept with an infinite number of antennas to a practical technology. The key concepts are adopted into the 5G New Radio standard, and base stations (BSs) with $M=64$ fully digital transceivers have been commercially deployed in sub-6 GHz bands. The fast progress was enabled by many solid research contributions, of which the vast majority assume spatially uncorrelated channels and signal processing schemes developed for single-cell operation. These assumptions make the performance analysis and optimization of Massive MIMO tractable but have three major caveats: 1) practical channels are spatially correlated; 2) large performance gains can be obtained by multicell processing, without BS cooperation; 3) the interference caused by pilot contamination creates a finite capacity limit as $M\to\infty$. There is a thin line of papers that avoided these caveats, but their results are easily missed. Hence, this tutorial article explains the importance of considering spatial channel correlation and using signal processing schemes designed for multicell networks. We present recent results on the fundamental limits of Massive MIMO, which are determined not by pilot contamination but by the ability to acquire channel statistics. These results will guide the journey towards the next level of Massive MIMO, which we call “Massive MIMO 2.0”.

260 citations
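The spatial correlation the tutorial emphasizes is easy to make concrete with the common exponential correlation model for a uniform linear array (a textbook model with illustrative parameters, not taken from the paper):

```python
import numpy as np

# Exponential correlation model for a ULA: R[i, j] = r^{|i-j|}.
# A common textbook model; M and r are illustrative values.
M = 64          # number of BS antennas
r = 0.7         # correlation between adjacent antennas
R = np.array([[r ** abs(i - j) for j in range(M)] for i in range(M)])

# Draw a spatially correlated Rayleigh-fading channel h = R^{1/2} g, g ~ CN(0, I).
rng = np.random.default_rng(0)
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
L_chol = np.linalg.cholesky(R)   # R is positive definite for |r| < 1
h = L_chol @ g
```

Setting r = 0 recovers the spatially uncorrelated (i.i.d.) assumption that the article argues is unrealistic.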


Journal ArticleDOI
TL;DR: MMNet is a deep learning MIMO detection scheme that significantly outperforms existing approaches on realistic channels with the same or lower computational complexity, and is 4–8dB better overall than a classic linear scheme like the minimum mean square error (MMSE) detector.
Abstract: Traditional symbol detection algorithms either perform poorly or are impractical to implement for Massive Multiple-Input Multiple-Output (MIMO) systems. Recently, several learning-based approaches have achieved promising results on simple channel models (e.g., i.i.d. Gaussian channel coefficients), but as we show, their performance degrades on real-world channels with spatial correlation. We propose MMNet, a deep learning MIMO detection scheme that significantly outperforms existing approaches on realistic channels with the same or lower computational complexity. MMNet's design builds on the theory of iterative soft-thresholding algorithms, and uses a novel training algorithm that leverages temporal and spectral correlation in real channels to accelerate training. These innovations make it practical to train MMNet online for every realization of the channel. On i.i.d. Gaussian channels, MMNet requires two orders of magnitude fewer operations than existing deep learning schemes but achieves near-optimal performance. On spatially-correlated channels, it achieves the same error rate as the next-best learning scheme (OAMPNet) at 2.5dB lower signal-to-noise ratio (SNR), and with at least $10\times$ less computational complexity. MMNet is also 4–8dB better overall than a classic linear scheme like the minimum mean square error (MMSE) detector.

181 citations
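For reference, the classic linear MMSE detector that MMNet is benchmarked against can be sketched in a few NumPy lines (toy dimensions and QPSK symbols; the systems in the paper are far larger):

```python
import numpy as np

# Linear MMSE MIMO detection: x_hat = (H^H H + sigma^2 I)^{-1} H^H y.
# Toy 4x8 system with QPSK symbols; all sizes are illustrative.
rng = np.random.default_rng(1)
n_tx, n_rx, snr_db = 4, 8, 20
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
x = rng.choice([-1, 1], n_tx) + 1j * rng.choice([-1, 1], n_tx)   # QPSK symbols
sigma2 = np.mean(np.abs(x) ** 2) * 10 ** (-snr_db / 10)          # noise variance
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = H @ x + noise

x_hat = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(n_tx), H.conj().T @ y)
x_dec = np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)           # hard QPSK decision
```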


Journal ArticleDOI
12 Jun 2020
TL;DR: Analysis of a cellular network deployment where UAV-to-UAV (U2U) transmit–receive pairs share spectrum with the uplink of cellular ground users (GUEs), concluding that, in urban scenarios with many UAV pairs, overlay spectrum sharing is the most suitable approach for maintaining a minimum guaranteed rate for UAVs and high GUE UL performance.
Abstract: We consider a cellular network deployment where UAV-to-UAV (U2U) transmit-receive pairs share the same spectrum with the uplink (UL) of cellular ground users (GUEs). For this setup, we focus on analyzing and comparing the performance of two spectrum sharing mechanisms: (i) underlay, where the same time-frequency resources may be accessed by both UAVs and GUEs, resulting in mutual interference, and (ii) overlay, where the available resources are divided into orthogonal portions for U2U and GUE communications. We evaluate the coverage probability and rate of both link types and their interplay to identify the best spectrum sharing strategy. We do so through an analytical framework that embraces realistic height-dependent channel models, antenna patterns, and practical power control mechanisms. For the underlay, we find that although the presence of U2U direct communications may worsen the uplink performance of GUEs, such effect is limited as base stations receive the power-constrained UAV signals through their antenna sidelobes. In spite of this, our results lead us to conclude that in urban scenarios with a large number of UAV pairs, adopting an overlay spectrum sharing seems the most suitable approach for maintaining a minimum guaranteed rate for UAVs and a high GUE UL performance.

130 citations
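One concrete instance of the "practical power control mechanisms" mentioned in the abstract is 3GPP-style open-loop fractional power control; the sketch below uses illustrative parameter values, not the paper's:

```python
# 3GPP-style open-loop fractional uplink power control (illustrative values).
def ul_tx_power_dbm(pathloss_db, p0_dbm=-80.0, alpha=0.8, p_max_dbm=23.0):
    """Fully compensates pathloss for alpha = 1, partially otherwise; capped at P_max."""
    return min(p_max_dbm, p0_dbm + alpha * pathloss_db)

# A nearby GUE transmits quietly; a distant UAV is power-constrained at P_max,
# which is why its signal reaches other BSs mainly through antenna sidelobes.
near = ul_tx_power_dbm(90.0)    # dBm
far = ul_tx_power_dbm(140.0)    # dBm, hits the cap
```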


Journal ArticleDOI
TL;DR: In this paper, the authors proposed an end-to-end cooling control algorithm via an off-policy offline version of the deep deterministic policy gradient (DDPG) algorithm, in which an evaluation network is trained to predict the DC energy cost along with resulting cooling effects, and a policy network was trained to gauge optimized control settings.
Abstract: Data centers (DCs) play an important role in supporting services such as e-commerce and cloud computing. The resulting energy consumption from this growing market has drawn significant attention, and notably almost half of the energy cost is used to cool the DC to a particular temperature. It is thus a critical operational challenge to curb the cooling energy cost without sacrificing the thermal safety of a DC. Existing solutions typically follow a two-step approach, in which the system is first modeled based on expert knowledge and the operational actions are then determined with heuristics and/or best practices. These approaches are often hard to generalize and might result in suboptimal performance due to intrinsic model errors in large-scale systems. In this paper, we propose optimizing DC cooling control via the emerging deep reinforcement learning (DRL) framework. Compared to existing approaches, our solution is an end-to-end cooling control algorithm (CCA) based on an off-policy, offline version of the deep deterministic policy gradient (DDPG) algorithm, in which an evaluation network is trained to predict the DC energy cost along with the resulting cooling effects, and a policy network is trained to gauge optimized control settings. Moreover, we introduce a de-underestimation (DUE) validation mechanism for the critic network to reduce the potential underestimation of risk caused by neural approximation. Our proposed algorithm is evaluated on an EnergyPlus simulation platform and on a real data trace collected from the National Super Computing Centre (NSCC) of Singapore. The numerical results show that the proposed CCA can achieve up to 11% cooling cost reduction on the simulation platform compared with a manually configured baseline control algorithm. In the trace-based study, which is conservative in nature, the proposed algorithm can achieve about 15% cooling energy savings on the NSCC data trace. Our pioneering approach can shed new light on the application of DRL to optimize and automate DC operations and management, potentially revolutionizing digital infrastructure management with intelligence.

128 citations
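The evaluation-network/policy-network split can be caricatured in a few lines: a "critic" fitted to logged (setpoint, cost) data and an "actor" descending the critic's gradient. This is a toy illustration of the idea only, not the paper's CCA/DDPG implementation, and all numbers are invented:

```python
import numpy as np

# Toy two-network loop: critic fits the cost of a cooling setpoint from logged
# data; policy follows the critic's gradient toward the cheapest setpoint.
rng = np.random.default_rng(0)
true_cost = lambda a: (a - 22.0) ** 2 + 5.0      # unknown to the agent; optimum at 22 °C

# "Critic": least-squares fit of a quadratic cost model c0 + c1*a + c2*a^2.
actions = rng.uniform(15.0, 30.0, 200)
costs = true_cost(actions) + 0.1 * rng.standard_normal(200)
A = np.column_stack([np.ones_like(actions), actions, actions ** 2])
c0, c1, c2 = np.linalg.lstsq(A, costs, rcond=None)[0]

# "Policy": gradient descent on the critic's *predicted* cost, as in actor-critic updates.
a = 28.0
for _ in range(500):
    a -= 0.05 * (c1 + 2.0 * c2 * a)              # d/da of the critic's prediction
```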


Proceedings ArticleDOI
14 Jun 2020
TL;DR: Fast and flexible algorithms for SCI based on the plug-and-play (PnP) framework are developed, and it is shown for the first time that PnP can recover a UHD color video from a snapshot 2D measurement.
Abstract: Snapshot compressive imaging (SCI) aims to capture high-dimensional (usually 3D) images using a 2D sensor (detector) in a single snapshot. Though enjoying the advantages of low bandwidth, low power and low cost, applying SCI to large-scale problems (HD or UHD videos) in our daily life is still challenging. The bottleneck lies in the reconstruction algorithms; they are either too slow (iterative optimization algorithms) or not flexible to the encoding process (deep learning based end-to-end networks). In this paper, we develop fast and flexible algorithms for SCI based on the plug-and-play (PnP) framework. In addition to the widely used PnP-ADMM method, we further propose the PnP-GAP (generalized alternating projection) algorithm with a lower computational workload and prove the global convergence of PnP-GAP under the SCI hardware constraints. By employing deep denoising priors, we show for the first time that PnP can recover a UHD color video (3840x1644x48 with PSNR above 30dB) from a snapshot 2D measurement. Extensive results on both simulation and real datasets verify the superiority of our proposed algorithm.

121 citations
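The alternating structure of GAP with a plug-in denoiser can be sketched on a generic sparse-recovery toy: project onto the measurement-consistent set, then apply the denoiser. The soft-threshold "denoiser" and random measurement matrix below are stand-ins, not the paper's SCI forward model or deep denoising priors:

```python
import numpy as np

# PnP-GAP sketch on a toy problem: alternate (1) Euclidean projection onto
# {x : Phi x = y} with (2) a plug-in denoiser (here: soft thresholding).
rng = np.random.default_rng(0)
n, m = 64, 48
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 4, replace=False)] = [3.0, -2.0, 1.5, 2.5]   # sparse scene
y = Phi @ x_true                                                   # snapshot measurement

proj = Phi.T @ np.linalg.inv(Phi @ Phi.T)   # projection helper (Phi has full row rank)
x = np.zeros(n)
for _ in range(200):
    x = x + proj @ (y - Phi @ x)                          # projection step
    x = np.sign(x) * np.maximum(np.abs(x) - 0.02, 0.0)    # "denoiser": soft threshold
```

In real PnP-SCI the projection exploits the special structure of the SCI sensing matrix and the denoiser is a deep network.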


Journal ArticleDOI
01 Mar 2020
TL;DR: This paper builds both an end-to-end convolutional neural network (E2E-CNN) and a Plug-and-Play (PnP) framework with deep denoising priors and compares them with the iterative baseline algorithm GAP-TV and the state-of-the-art DeSCI on real data.
Abstract: We investigate deep learning for video compressive sensing within the scope of snapshot compressive imaging (SCI). In video SCI, multiple high-speed frames are modulated by different coding patterns and then a low-speed detector captures the integration of these modulated frames. In this manner, each captured measurement frame incorporates the information of all the coded frames, and reconstruction algorithms are then employed to recover the high-speed video. In this paper, we build a video SCI system using a digital micromirror device and develop both an end-to-end convolutional neural network (E2E-CNN) and a Plug-and-Play (PnP) framework with deep denoising priors to solve the inverse problem. We compare them with the iterative baseline algorithm GAP-TV and the state-of-the-art DeSCI on real data. Given a determined setup, a well-trained E2E-CNN can provide video-rate high-quality reconstruction. The PnP deep denoising method can generate decent results without task-specific pre-training and is faster than conventional iterative algorithms. Considering speed, accuracy, and flexibility, the PnP deep denoising method may serve as a baseline in video SCI reconstruction. To conduct quantitative analysis on these reconstruction algorithms, we further perform a simulation comparison on synthetic data. We hope that this study contributes to the applications of SCI cameras in our daily life.

118 citations
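The video SCI measurement process described in the abstract can be written in one line: each high-speed frame is multiplied by its coding pattern and the modulated frames are summed (toy sizes):

```python
import numpy as np

# Forward model of video SCI: B coded frames collapse into one 2-D measurement.
rng = np.random.default_rng(0)
B, H, W = 8, 32, 32                          # 8 high-speed frames per snapshot (toy)
frames = rng.random((B, H, W))               # high-speed video block x_t
masks = rng.integers(0, 2, (B, H, W))        # binary DMD coding patterns C_t
measurement = np.sum(masks * frames, axis=0) # y = sum_t C_t * x_t
```

Reconstruction algorithms such as E2E-CNN, GAP-TV, DeSCI, and the PnP method then invert this many-to-one mapping.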


Journal ArticleDOI
TL;DR: This paper provides an insightful analysis for mobile networks Beyond 5G (B5G) considering the advancements and implications introduced by the evolution of softwarization, agile control and deterministic services.
Abstract: As 5G enters a stable phase in terms of system architecture, 3GPP Release 17 starts to investigate advanced features that would shape the evolution toward 6G. This paper provides an insightful analysis of mobile networks Beyond 5G (B5G), considering the advancements and implications introduced by the evolution of softwarization, agile control and deterministic services. It elaborates on the 5G landscape, also investigating new business prospects and the emerging use cases, which will open new horizons for accelerating the market penetration of vertical services. It then overviews the key technologies that constitute the pillars for the evolution beyond 5G, considering new radio paradigms, a micro-service-oriented core network, a native IP-based user plane, network analytics and the support of a low-latency, high-reliability transport layer. The open challenges, considering both technical and business aspects, are then overviewed, elaborating on the footprint of softwarization, security and trust, as well as distributed architectures and services toward 6G.

116 citations


Book ChapterDOI
23 Aug 2020
TL;DR: This work reproduces a stable single-disperser CASSI system and proposes a novel deep convolutional network to carry out real-time reconstruction using self-attention, employing Spatial-Spectral Self-Attention (TSA) to process each dimension sequentially, yet in an order-independent manner.
Abstract: Coded aperture snapshot spectral imaging (CASSI) is an effective tool to capture real-world 3D hyperspectral images. While a considerable body of work exists on hardware and algorithm design, we take a step towards a low-cost solution that enjoys video-rate high-quality reconstruction. To make solid progress on this challenging yet under-investigated task, we reproduce a stable single disperser (SD) CASSI system to gather large-scale real-world CASSI data and propose a novel deep convolutional network to carry out real-time reconstruction using self-attention. In order to jointly capture the self-attention across different dimensions in hyperspectral images (i.e., channel-wise spectral correlation and non-local spatial regions), we propose Spatial-Spectral Self-Attention (TSA) to process each dimension sequentially, yet in an order-independent manner. We employ TSA in an encoder-decoder network, dubbed TSA-Net, to reconstruct the desired 3D cube. Furthermore, we investigate how noise affects the results and propose to add shot noise in model training, which improves the real data results significantly. We hope our large-scale CASSI data serve as a benchmark in future research and our TSA model as a baseline for deep learning based reconstruction algorithms. Our code and data are available at https://github.com/mengziyi64/TSA-Net.
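The attention mechanism underlying modules like TSA reduces, in its simplest form, to scaled dot-product self-attention applied across the spectral dimension. The sketch below uses untrained identity projections and is not the TSA-Net code:

```python
import numpy as np

# Scaled dot-product self-attention over spectral channels (toy, untrained).
def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """X: (channels, features); attends across channels (spectral correlation)."""
    d = X.shape[1]
    Q, K, V = X, X, X                       # identity projections, for illustration
    W = softmax(Q @ K.T / np.sqrt(d))       # (channels, channels) attention map
    return W @ V                            # each output row: convex mix of inputs

spectral = np.random.default_rng(0).random((28, 16))   # e.g., 28 spectral bands
out = self_attention(spectral)
```

Trained attention would learn Q, K, V projections; TSA additionally applies attention along the two spatial dimensions.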

Journal ArticleDOI
TL;DR: This work considers a trainable point-to-point communication system, where both transmitter and receiver are implemented as neural networks (NNs), and demonstrates that training on the bit-wise mutual information (BMI) allows seamless integration with practical bit-metric decoding (BMD) receivers.
Abstract: We consider a trainable point-to-point communication system, where both transmitter and receiver are implemented as neural networks (NNs), and demonstrate that training on the bit-wise mutual information (BMI) allows seamless integration with practical bit-metric decoding (BMD) receivers, as well as joint optimization of constellation shaping and labeling. Moreover, we present a fully differentiable neural iterative demapping and decoding (IDD) structure which achieves significant gains on additive white Gaussian noise (AWGN) channels using a standard 802.11n low-density parity-check (LDPC) code. The strength of this approach is that it can be applied to arbitrary channels without any modifications. Going one step further, we show that careful code design can lead to further performance improvements. Lastly, we show the viability of the proposed system through implementation on software-defined radios (SDRs) and training of the end-to-end system on the actual wireless channel. Experimental results reveal that the proposed method enables significant gains compared to conventional techniques.
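The BMI training metric can be estimated directly from demapper LLRs: per bit level it is one minus the binary cross-entropy (in bits), so it approaches 1 as the LLRs become reliable and 0 when they are uninformative. The helper name and LLR sign convention below are our own:

```python
import numpy as np

# Bit-wise mutual information estimate from LLRs (convention: llr = log p(b=1)/p(b=0)).
def bmi(bits, llrs):
    p1 = 1.0 / (1.0 + np.exp(-llrs))             # P(b = 1 | y)
    eps = 1e-12
    bce = -(bits * np.log2(p1 + eps) + (1 - bits) * np.log2(1 - p1 + eps))
    return 1.0 - bce.mean()                      # bits of information per bit level

b = np.array([0, 1, 1, 0])
confident = bmi(b, np.where(b == 1, 8.0, -8.0))  # strong, correct LLRs -> BMI near 1
clueless = bmi(b, np.zeros(4))                   # zero LLRs -> BMI near 0
```

Summing this quantity over the m bit levels of a constellation gives the achievable BMD rate used as the training objective.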

Journal ArticleDOI
TL;DR: The latest results on hybrid III-V-on-Si transmitters, receivers and packaged optical modules for high-speed optical communications are reported, together with a review of recent advances in the field for benchmarking purposes.
Abstract: Heterogeneous integration of III-V materials onto silicon photonics has experienced enormous progress in the last few years, setting the groundwork for the implementation of complex on-chip optical systems that go beyond single device performance. Recent advances on the field are expected to impact the next generation of optical communications to attain low power, high efficiency and portable solutions. To accomplish this aim, intense research on hybrid lasers, modulators and photodetectors is being done to implement optical modules and photonic integrated networks with specifications that match the market demands. Similarly, important advances on packaging and thermal management of hybrid photonic integrated circuits (PICs) are currently in progress. In this paper, we report our latest results on hybrid III-V on Si transmitters, receivers and packaged optical modules for high-speed optical communications. In addition, a review of recent advances in this field will be provided for benchmarking purposes.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed RRC method outperforms many state-of-the-art schemes in both the objective and perceptual quality.
Abstract: In this paper, we propose a novel approach to the rank minimization problem, termed the rank residual constraint (RRC) model. Different from existing low-rank based approaches, such as the well-known nuclear norm minimization (NNM) and the weighted nuclear norm minimization (WNNM), which estimate the underlying low-rank matrix directly from the corrupted observations, we progressively approximate the underlying low-rank matrix via minimizing the rank residual. Through integrating the image nonlocal self-similarity (NSS) prior with the proposed RRC model, we apply it to image restoration tasks, including image denoising and image compression artifact reduction. Towards this end, we first obtain a good reference of the original image groups by using the image NSS prior, and then the rank residual of the image groups between this reference and the degraded image is minimized to achieve a better estimate of the desired image. In this manner, both the reference and the estimated image are updated gradually and jointly in each iteration. Based on the group-based sparse representation model, we further provide an analytical investigation of the feasibility of the proposed RRC model. Experimental results demonstrate that the proposed RRC method outperforms many state-of-the-art schemes in both objective and perceptual quality.
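The NNM-style estimators that RRC departs from reduce to singular-value soft-thresholding; the sketch below shows that operator on a synthetic low-rank-plus-noise matrix (illustrative baseline, not the paper's solver):

```python
import numpy as np

# Singular-value soft-thresholding: the proximal operator of the nuclear norm,
# the workhorse inside NNM-style low-rank estimators.
def svt(Y, tau):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
L_true = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 20))  # rank-5 matrix
noisy = L_true + 0.1 * rng.standard_normal((20, 20))
denoised = svt(noisy, tau=1.0)   # small (noise-level) singular values are zeroed out
```

WNNM applies a different threshold per singular value; RRC instead penalizes the residual between the estimate's rank structure and a reference built from similar patches.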

Journal ArticleDOI
TL;DR: This work explores several novel architecture concepts for the 6G era driven by a decomposition of the architecture into platform, functions, orchestration and specialization aspects, and associates an open, scalable, elastic, and platform agnostic het-cloud with converged applications and services.
Abstract: The post-pandemic future will offer tremendous opportunity and challenge from transformation of the human experience linking physical, digital and biological worlds: 6G should be based on a new architecture to fully realize the vision to connect the worlds. We explore several novel architecture concepts for the 6G era driven by a decomposition of the architecture into platform, functions, orchestration and specialization aspects. With 6G, we associate an open, scalable, elastic, and platform agnostic het-cloud, with converged applications and services decomposed into micro-services and serverless functions, specialized architecture for extreme attributes, as well as open service orchestration architecture. Key attributes and characteristics of the associated architectural scenarios are described. At the air-interface level, 6G is expected to encompass use of sub-Terahertz spectrum and new spectrum sharing technologies, air-interface design optimized by AI/ML techniques, integration of radio sensing with communication, and meeting extreme requirements on latency, reliability and synchronization. Fully realizing the benefits of these advances in radio technology will also call for innovations in 6G network architecture as described.

Journal ArticleDOI
TL;DR: Experimental results on three image restoration applications demonstrate that the proposed SNSS produces superior results compared to many popular or state-of-the-art methods in both objective and perceptual quality measurements.
Abstract: Through exploiting the image nonlocal self-similarity (NSS) prior by clustering similar patches to construct patch groups, recent studies have revealed that structural sparse representation (SSR) models can achieve promising performance in various image restoration tasks. However, most existing SSR methods only exploit the NSS prior from the input degraded (internal) image, and few methods utilize the NSS prior from external clean image corpus; how to jointly exploit the NSS priors of internal image and external clean image corpus is still an open problem. In this article, we propose a novel approach for image restoration by simultaneously considering internal and external nonlocal self-similarity (SNSS) priors that offer mutually complementary information. Specifically, we first group nonlocal similar patches from images of a training corpus. Then a group-based Gaussian mixture model (GMM) learning algorithm is applied to learn an external NSS prior. We exploit the SSR model by integrating the NSS priors of both internal and external image data. An alternating minimization with an adaptive parameter adjusting strategy is developed to solve the proposed SNSS-based image restoration problems, which makes the entire algorithm more stable and practical. Experimental results on three image restoration applications, namely image denoising, deblocking and deblurring, demonstrate that the proposed SNSS produces superior results compared to many popular or state-of-the-art methods in both objective and perceptual quality measurements.
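The patch-grouping step behind any NSS prior can be sketched as plain block matching: collect the K patches most similar to a reference patch under a sum-of-squared-differences distance (a sketch of the grouping idea only, not the paper's GMM learning pipeline):

```python
import numpy as np

# Nonlocal self-similarity in its simplest form: SSD block matching.
rng = np.random.default_rng(0)
img = rng.random((40, 40))
P, K = 6, 8                                   # patch size, group size
ref_y, ref_x = 10, 10
ref = img[ref_y:ref_y + P, ref_x:ref_x + P]   # reference patch

candidates = []
for y in range(40 - P + 1):
    for x in range(40 - P + 1):
        patch = img[y:y + P, x:x + P]
        candidates.append((np.sum((patch - ref) ** 2), y, x))
candidates.sort(key=lambda t: t[0])           # most similar first
group = [img[y:y + P, x:x + P] for _, y, x in candidates[:K]]
```

In SNSS such groups are formed both within the degraded image (internal prior) and over a clean training corpus (external prior, modeled with a group-based GMM).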

Journal ArticleDOI
TL;DR: An ensemble strategy that employs deep reinforcement learning schemes to learn a stock trading strategy by maximizing investment return is proposed and shown to outperform the three individual algorithms and two baselines in terms of the risk-adjusted return measured by the Sharpe ratio.
Abstract: Stock trading strategies play a critical role in investment. However, it is challenging to design a profitable strategy in a complex and dynamic stock market. In this paper, we propose an ensemble strategy that employs deep reinforcement learning schemes to learn a stock trading strategy by maximizing investment return. We train a deep reinforcement learning agent and obtain an ensemble trading strategy using three actor-critic based algorithms: Proximal Policy Optimization (PPO), Advantage Actor Critic (A2C), and Deep Deterministic Policy Gradient (DDPG). The ensemble strategy inherits and integrates the best features of the three algorithms, thereby robustly adjusting to different market situations. In order to avoid the large memory consumption in training networks with continuous action space, we employ a load-on-demand technique for processing very large data. We test our algorithms on the 30 Dow Jones stocks that have adequate liquidity. The performance of the trading agent with different reinforcement learning algorithms is evaluated and compared with both the Dow Jones Industrial Average index and the traditional min-variance portfolio allocation strategy. The proposed deep ensemble strategy is shown to outperform the three individual algorithms and two baselines in terms of the risk-adjusted return measured by the Sharpe ratio.
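The risk-adjusted metric used for the comparison, the annualized Sharpe ratio, is simple to compute (risk-free rate taken as zero and 252 trading days assumed; made-up return series for illustration):

```python
import numpy as np

# Annualized Sharpe ratio: mean excess return per unit of return volatility.
def sharpe_ratio(daily_returns, periods_per_year=252):
    r = np.asarray(daily_returns, dtype=float)
    return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

steady = sharpe_ratio([0.001] * 20 + [0.0009] * 20)   # low-variance strategy
choppy = sharpe_ratio([0.02, -0.019] * 20)            # similar mean, high variance
```

The two toy series illustrate why the metric is "risk-adjusted": comparable average returns, very different Sharpe ratios.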

Journal ArticleDOI
TL;DR: This work studies the joint optimization of service placement and request routing in dense MEC networks with multidimensional constraints and proposes an algorithm that achieves close-to-optimal performance using a randomized rounding technique.
Abstract: The proliferation of innovative mobile services such as augmented reality, networked gaming, and autonomous driving has spurred a growing need for low-latency access to computing resources that cannot be met solely by existing centralized cloud systems. Mobile Edge Computing (MEC) is expected to be an effective solution to meet the demand for low-latency services by enabling the execution of computing tasks at the network edge, in proximity to the end-users. While a number of recent studies have addressed the problem of determining the execution of service tasks and the routing of user requests to corresponding edge servers, the focus has primarily been on the efficient utilization of computing resources, neglecting the fact that non-trivial amounts of data need to be pre-stored to enable service execution, and that many emerging services exhibit asymmetric bandwidth requirements. To fill this gap, we study the joint optimization of service placement and request routing in dense MEC networks with multidimensional constraints. We show that this problem generalizes several well-known placement and routing problems and propose an algorithm that achieves close-to-optimal performance using a randomized rounding technique. Evaluation results demonstrate that our approach can effectively utilize available storage, computation, and communication resources to maximize the number of requests served by low-latency edge cloud servers.
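The randomized rounding technique named in the TL;DR is, at its core, one line: treat the fractional LP solution as independent placement probabilities. The sketch below omits the paper's handling of multidimensional capacity constraints, and the fractional values are invented:

```python
import numpy as np

# Randomized rounding of a fractional LP solution for service placement:
# round x_i to 1 with probability x_i, independently.
rng = np.random.default_rng(0)
x_frac = np.array([0.9, 0.6, 0.3, 0.15, 0.05])   # fractional "place service i" variables
x_int = (rng.random(x_frac.shape) < x_frac).astype(int)

# E[x_int] = x_frac, so the expected objective equals the LP optimum; concentration
# arguments then bound how far the rounded solution can fall from optimal.
```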

Journal ArticleDOI
TL;DR: In this paper, a mode-locked laser with an intracavity spectral pulse shaper is used to generate pure-quartic soliton pulses, which arise from the interaction of fourth-order dispersion and the Kerr nonlinearity.
Abstract: Ultrashort pulse generation hinges on the careful management of dispersion. Traditionally, this has exclusively involved second-order dispersion, with higher-order dispersion treated as a nuisance to be minimized. Here, we show that this higher-order dispersion can be strategically leveraged to access an uncharted regime of ultrafast laser operation. In particular, our mode-locked laser—with an intracavity spectral pulse shaper—emits pure-quartic soliton pulses that arise from the interaction of fourth-order dispersion and the Kerr nonlinearity. Phase-resolved measurements demonstrate that their pulse energy scales with the third power of the inverse pulse duration. This implies a strong increase in the energy of short pure-quartic solitons compared with conventional solitons, for which the energy scales as the inverse of the pulse duration. These results not only demonstrate a novel approach to ultrafast lasers, but more fundamentally, they clarify the use of higher-order dispersion for optical pulse control, enabling innovations in nonlinear optics and its applications. By suppressing the second- and third-order intracavity dispersion using an intracavity spectral pulse shaper, a mode-locked laser that emits pure-quartic soliton pulses that arise from the interaction of the fourth-order dispersion and the Kerr nonlinearity is demonstrated.
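The energy scaling reported above can be stated compactly. Up to dimensionless prefactors (which depend on the pulse shape and are not given here), with $\gamma$ the Kerr nonlinear coefficient and $\tau$ the pulse duration:

```latex
E_{\mathrm{soliton}} \;\propto\; \frac{|\beta_2|}{\gamma\,\tau},
\qquad
E_{\mathrm{pure\text{-}quartic}} \;\propto\; \frac{|\beta_4|}{\gamma\,\tau^{3}}
```

Hence shortening the pulse tenfold raises the pure-quartic soliton energy by a factor of 1000, versus only 10 for a conventional second-order-dispersion soliton.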

Journal ArticleDOI
TL;DR: The use of infrastructure-mounted sensors (which will be part of future smart cities) is motivated to aid in establishing and maintaining mmWave vehicular communication links, and it is demonstrated that information from these infrastructure sensors reduces the mmWave array configuration overhead.
Abstract: V2X communication in the mmWave band is one way to achieve high data rates for applications like infotainment, cooperative perception, augmented reality assisted driving, and so on. MmWave communication relies on large antenna arrays, and configuring these arrays poses high training overhead. In this article, we motivate the use of infrastructure mounted sensors (which will be part of future smart cities) to aid establishing and maintaining mmWave vehicular communication links. We provide numerical and measurement results to demonstrate that information from these infrastructure sensors reduces the mmWave array configuration overhead. Finally, we outline future research directions to help materialize the use of infrastructure sensors for mmWave communication.

Journal ArticleDOI
TL;DR: Experimental results on image denoising, image inpainting and image compressive sensing recovery demonstrate that the proposed GSRC-NLP based image restoration algorithm is comparable to state-of-the-art denoising methods and outperforms several tested image inpainting and image CS recovery methods in terms of both objective and perceptual quality metrics.
Abstract: Group sparse representation (GSR) has made great strides in image restoration, producing superior performance through a powerful mechanism that integrates the local sparsity and nonlocal self-similarity of images. However, due to some form of degradation (e.g., noise, down-sampling or missing pixels), traditional GSR models may fail to faithfully estimate the sparsity of each group in an image, resulting in a distorted reconstruction of the original image. This motivates us to design a simple yet effective model that addresses the above problem. Specifically, we propose a group sparsity residual constraint with nonlocal priors (GSRC-NLP) for image restoration. By introducing the group sparsity residual constraint, the image restoration problem is reformulated as one of reducing the group sparsity residual. Towards this end, we first obtain a good estimate of the group sparse coefficient of each original image group by exploiting the image nonlocal self-similarity (NSS) prior along with a self-supervised learning scheme, and then enforce the group sparse coefficient of the corresponding degraded image group to approximate this estimate. To make the proposed scheme tractable and robust, two algorithms, i.e., iterative shrinkage/thresholding (IST) and the alternating direction method of multipliers (ADMM), are employed to solve the proposed optimization problems for different image restoration tasks. Experimental results on image denoising, image inpainting and image compressive sensing (CS) recovery demonstrate that the proposed GSRC-NLP based image restoration algorithm is comparable to state-of-the-art denoising methods and outperforms several tested image inpainting and image CS recovery methods in terms of both objective and perceptual quality metrics.
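The IST solver mentioned above can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (the function names and the centred-l1 formulation are ours, not the paper's): the group sparse code is pulled toward the NSS-based estimate `alpha_est` by soft-thresholding the residual at each iteration.

```python
import numpy as np

def soft_threshold(x, tau):
    # Elementwise soft-thresholding operator.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ist_group_residual(y, D, alpha_est, lam=0.1, step=1.0, n_iter=50):
    """Toy IST for min_a ||y - D a||_2^2 + lam * ||a - alpha_est||_1,
    i.e. a sparsity penalty on the residual a - alpha_est (hypothetical
    simplification; step must satisfy step <= 1 / ||D||^2)."""
    a = alpha_est.copy()
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)            # gradient of the data-fidelity term
        r = (a - step * grad) - alpha_est   # shift so the l1 term is centred at 0
        a = alpha_est + soft_threshold(r, step * lam)
    return a
```

With an orthonormal `D` this reduces to a single soft-thresholding of the residual, which is the intuition behind the group sparsity residual constraint.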

Journal ArticleDOI
TL;DR: Analytical expressions for the downlink coverage probability and average data rate of generic LEO networks, regardless of the actual satellites' locations and their service-area geometry, are derived using stochastic geometry, which abstracts the generic networks into uniform binomial point processes.
Abstract: As low Earth orbit (LEO) satellite communication systems gain increasing popularity, new theoretical methodologies are required to investigate such networks' performance at scale. This is because the deterministic, location-based models previously applied to analyze satellite systems are typically restricted to supporting simulations only. In this paper, we derive analytical expressions for the downlink coverage probability and average data rate of generic LEO networks, regardless of the actual satellites' locations and their service-area geometry. Our solution stems from stochastic geometry, which abstracts the generic networks into uniform binomial point processes. Applying the proposed model, we then study the performance of the networks as a function of key constellation design parameters. Finally, to fit the theoretical modeling more precisely to real deterministic constellations, we introduce the effective number of satellites as a parameter to compensate for the practical uneven distribution of satellites at different latitudes. In addition to deriving exact network performance metrics, the study reveals several guidelines for selecting the design parameters for future massive LEO constellations, e.g., the number of frequency channels and the altitude.
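The binomial-point-process abstraction is easy to sanity-check numerically. Below is a hedged toy sketch (all names are ours; it ignores beams, fading and channel allocation) that estimates the probability a ground user sees at least one satellite above the horizon, and compares it with the closed-form spherical-cap expression:

```python
import numpy as np

def p_visible_mc(n_sats, altitude_km, r_earth_km=6371.0, trials=20000, seed=0):
    """Monte-Carlo estimate: drop n_sats satellites uniformly on a sphere
    of radius R_E + h (a binomial point process) and check whether any is
    above the horizon of a user placed at the north pole."""
    rng = np.random.default_rng(seed)
    cos_max = r_earth_km / (r_earth_km + altitude_km)  # visibility threshold
    # For uniform points on a sphere, cos(polar angle) is uniform on [-1, 1].
    cos_phi = rng.uniform(-1.0, 1.0, size=(trials, n_sats))
    return np.mean((cos_phi >= cos_max).any(axis=1))

def p_visible_exact(n_sats, altitude_km, r_earth_km=6371.0):
    """Closed form: each satellite independently lands in the visible
    spherical cap with probability h / (2 (R_E + h))."""
    p_one = altitude_km / (2.0 * (r_earth_km + altitude_km))
    return 1.0 - (1.0 - p_one) ** n_sats
```

Coverage probability in the paper additionally accounts for the SINR; this sketch only verifies the geometric part of the model.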

Journal ArticleDOI
TL;DR: Compared with existing sparse representation models, the proposed JPG-SR provides an effective mechanism to integrate the local sparsity and nonlocal self-similarity of images and outperforms many state-of-the-art methods in both objective and perceptual quality.
Abstract: Sparse representation has achieved great success in various image processing and computer vision tasks. For image processing, typical patch-based sparse representation (PSR) models tend to generate undesirable visual artifacts, while group-based sparse representation (GSR) models tend to produce over-smoothing effects. In this paper, we propose a new sparse representation model, termed joint patch-group based sparse representation (JPG-SR). Compared with existing sparse representation models, the proposed JPG-SR provides an effective mechanism to integrate the local sparsity and nonlocal self-similarity of images. We then apply the proposed JPG-SR to image restoration tasks, including image inpainting and image deblocking. An iterative algorithm based on the alternating direction method of multipliers (ADMM) framework is developed to solve the proposed JPG-SR based image restoration problems. Experimental results demonstrate that the proposed JPG-SR is effective and outperforms many state-of-the-art methods in both objective and perceptual quality.
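The nonlocal self-similarity that both PSR and GSR models exploit is usually captured by block matching. The toy sketch below (our own illustration, not the paper's code) builds the group matrix whose columns a group-based model codes jointly:

```python
import numpy as np

def group_similar_patches(img, patch=4, ref=(0, 0), k=8):
    """Toy block matching: stack the k patches most similar (in squared
    Euclidean distance) to the reference patch into one group matrix whose
    columns are vectorised patches -- the structure that group-sparse
    models code jointly."""
    H, W = img.shape
    ry, rx = ref
    ref_p = img[ry:ry + patch, rx:rx + patch].ravel()
    cands = []
    for y in range(H - patch + 1):
        for x in range(W - patch + 1):
            p = img[y:y + patch, x:x + patch].ravel()
            cands.append((np.sum((p - ref_p) ** 2), p))
    cands.sort(key=lambda t: t[0])                  # most similar first
    return np.stack([p for _, p in cands[:k]], axis=1)  # (patch*patch, k)
```

A patch-based model would code each column independently; a joint patch-group model like JPG-SR couples the two views of this same matrix.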

Journal ArticleDOI
TL;DR: An adaptive dictionary is designed to bridge the gap between group-based sparse coding (GSC) and rank minimization; among four rank minimization methods, weighted Schatten p-norm minimization (WSNM) is found to be the closest to the real singular values of each patch group, and it translates into a non-convex weighted $\ell_{p}$-norm minimization problem in GSC.
Abstract: Sparse coding has achieved great success in various image processing tasks. However, a benchmark to measure the sparsity of an image patch/group is missing, since sparse coding is essentially an NP-hard problem. This work attempts to fill that gap from the perspective of rank minimization. We first design an adaptive dictionary to bridge the gap between group-based sparse coding (GSC) and rank minimization. Then, we show that under the designed dictionary, GSC and the rank minimization problem are equivalent, and therefore the sparse coefficients of each patch group can be measured by estimating the singular values of that patch group. We thus obtain a benchmark to measure the sparsity of each patch group, because the singular values of the original image patch groups can be easily computed by the singular value decomposition (SVD). This benchmark can be used to evaluate the performance of any norm minimization method in sparse coding by analyzing its corresponding rank minimization counterpart. Towards this end, we exploit four well-known rank minimization methods to study the sparsity of each patch group, and weighted Schatten $p$-norm minimization (WSNM) is found to be the closest to the real singular values of each patch group. Inspired by the aforementioned equivalence between rank minimization and GSC, WSNM can be translated into a non-convex weighted $\ell_{p}$-norm minimization problem in GSC. By using the obtained benchmark, weighted $\ell_{p}$-norm minimization is expected to obtain better performance than the three other norm minimization methods, i.e., $\ell_{1}$-norm, $\ell_{p}$-norm and weighted $\ell_{1}$-norm. To verify the feasibility of the proposed benchmark, we compare weighted $\ell_{p}$-norm minimization against the three aforementioned norm minimization methods in sparse coding.
Experimental results on image restoration applications, namely image inpainting and image compressive sensing recovery, demonstrate that the proposed scheme is feasible and outperforms many state-of-the-art methods.
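The rank-minimization view can be illustrated directly: the singular values of a patch-group matrix play the role of its sparse coefficients. The sketch below is a hypothetical illustration (the 1/(σ+ε) weighting is one common choice, not necessarily the paper's) that computes the singular values and a weighted Schatten-p style score:

```python
import numpy as np

def weighted_lp_surrogate(group, p=0.7, eps=1e-8):
    """Measure the 'sparsity' of a patch group via its singular values:
    under the adaptive-dictionary equivalence, the singular values of the
    group matrix act as the group's sparse coefficients.  Returns the
    singular values and a weighted Schatten-p style score (smaller means
    closer to low-rank, i.e. sparser)."""
    s = np.linalg.svd(group, compute_uv=False)
    w = 1.0 / (s + eps)          # hypothetical weights: large (informative)
                                 # singular values are penalised less
    return s, np.sum(w * s ** p)
```

A rank-1 group (perfectly self-similar patches) has exactly one non-zero singular value, which is the sense in which the singular spectrum benchmarks sparsity.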

Book ChapterDOI
23 Aug 2020
TL;DR: This work considers the problem of video snapshot compressive imaging (SCI), where multiple high-speed frames are coded by different masks and then summed into a single measurement, and proposes a recurrent-network solution, the first time recurrent networks have been applied to the SCI problem.
Abstract: We consider the problem of video snapshot compressive imaging (SCI), where multiple high-speed frames are coded by different masks and then summed into a single measurement. This measurement and the modulation masks are fed into our Recurrent Neural Network (RNN) to reconstruct the desired high-speed frames. Our end-to-end sampling and reconstruction system is dubbed BIdirectional Recurrent Neural networks with Adversarial Training (BIRNAT). To the best of our knowledge, this is the first time that recurrent networks have been applied to the SCI problem. Our proposed BIRNAT outperforms other deep-learning-based algorithms and the state-of-the-art optimization-based algorithm, DeSCI, by exploiting the underlying correlation of sequential video frames. BIRNAT employs a deep convolutional neural network with Resblock and feature-map self-attention to reconstruct the first frame, based on which a bidirectional RNN reconstructs the following frames in a sequential manner. To improve the quality of the reconstructed video, BIRNAT is further equipped with adversarial training in addition to the mean squared error loss. Extensive results on both simulation and real data (from two SCI cameras) demonstrate the superior performance of our BIRNAT system. The code is available at https://github.com/BoChenGroup/BIRNAT.
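The SCI forward model that BIRNAT inverts is compact enough to state in code. The sketch below is our own illustration (`naive_init` is a common mask-normalised baseline, not BIRNAT's learned reconstruction):

```python
import numpy as np

def sci_measurement(frames, masks):
    """Video SCI forward model: each high-speed frame x_t is modulated by
    a mask C_t, and the modulated frames are summed into one 2-D snapshot
    y = sum_t C_t * x_t.  frames and masks have shape (T, H, W)."""
    assert frames.shape == masks.shape
    return np.sum(masks * frames, axis=0)

def naive_init(y, masks, eps=1e-6):
    """Rough per-frame initialisation: redistribute the snapshot to each
    frame in proportion to its mask (a baseline, not a reconstruction)."""
    return masks * y / (np.sum(masks, axis=0) + eps)
```

A learned reconstruction network such as BIRNAT takes `y` and `masks` (or an initialisation like the one above) and regresses the `T` frames.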

Journal ArticleDOI
TL;DR: This paper proposes and evaluates a MILP optimization model to solve the complexities that arise from this new environment, and designs a greedy-based heuristic to investigate the possible trade-offs between execution runtime and network slice deployment.
Abstract: Network Slicing (NS) is a key enabler of upcoming 5G and beyond systems. Leveraging both Network Function Virtualization (NFV) and Software Defined Networking (SDN), NS enables a flexible deployment of Network Functions (NFs) belonging to multiple Service Function Chains (SFCs) over various administrative and technological domains. Our novel architecture addresses the complexities and heterogeneities of the verticals targeted by 5G systems, whereby each slice consists of a set of SFCs and each SFC handles specific traffic within the slice. In this paper, we propose and evaluate a MILP optimization model to solve the complexities that arise from this new environment. Our proposed model enables a cost-optimal deployment of network slices, allowing a mobile network operator to efficiently allocate the underlying layer resources according to its users' requirements. We also design a greedy-based heuristic to investigate the possible trade-offs between execution runtime and network slice deployment. For each network slice, the proposed solution guarantees the required delay and bandwidth, while efficiently handling the use of both the VNF nodes and the physical nodes, reducing the service provider's Operating Expenditure (OPEX).
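The flavour of a greedy placement heuristic can be sketched as follows. This is a deliberately simplified toy (our own names; the paper's MILP additionally enforces delay and bandwidth constraints across domains): each VNF of a chain is placed, in order, on the cheapest node that still has enough capacity.

```python
def greedy_sfc_placement(chain_demands, node_caps, node_costs):
    """Toy greedy SFC embedding: for each VNF demand in the chain, pick
    the cheapest node with sufficient remaining capacity.  Returns the
    chosen node index per VNF, or None if the chain cannot be embedded."""
    caps = list(node_caps)
    placement = []
    for d in chain_demands:
        feasible = [i for i, c in enumerate(caps) if c >= d]
        if not feasible:
            return None                       # embedding fails
        i = min(feasible, key=lambda i: node_costs[i])
        caps[i] -= d                          # consume node capacity
        placement.append(i)
    return placement
```

A MILP solves the same assignment jointly and optimally; a greedy pass like this trades optimality for runtime, which is exactly the trade-off the paper's heuristic evaluation investigates.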

Journal ArticleDOI
TL;DR: Simulation results confirm that the UAV-aided NOMA with the proposed joint RA scheme yields better performances in terms of the SE, the EE, and the access ratio of the CUs.
Abstract: This article aims to improve spectrum efficiency (SE) for unmanned aerial vehicle (UAV)-relayed cellular uplinks by distinguishing between line-of-sight (LoS) and non-LoS (NLoS) links. Meanwhile, to accommodate the air-to-ground (A2G) cooperative nonorthogonal multiple access (NOMA)-based cellular users (CUs) with high energy efficiency (EE), a joint resource allocation (RA) problem is further considered for the UAV and the CUs. To solve the problem, an access-priority-based receiver determination (RD) method is first derived. According to the RD result, heuristic user association (UA) strategies are given. Then, based on the UA result, the transmission powers of the CUs and the UAV are initialized according to their quality-of-service (QoS) demands. Furthermore, the subchannels are assigned to the associated CUs and the UAV with the reweighted message-passing algorithm. Finally, the transmission powers of the CUs and the UAV are jointly fine-tuned with the proposed access control schemes. Compared with the traditional orthogonal frequency-division multiple access (OFDMA) scheme and the traditional ground-to-ground (G2G) NOMA scheme, simulation results confirm that UAV-aided NOMA with the proposed joint RA scheme yields better performance in terms of the SE, the EE, and the access ratio of the CUs.
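The power-domain NOMA uplink at the heart of the scheme can be illustrated with a small rate computation. This is a textbook successive-interference-cancellation (SIC) sketch in normalised units (our own function, not the paper's RA algorithm):

```python
import numpy as np

def noma_uplink_rates(powers, gains, noise):
    """Achievable uplink rates (bit/s/Hz) under power-domain NOMA with SIC:
    the receiver decodes users in descending received-power order, treats
    not-yet-decoded users as interference, and subtracts each decoded
    signal before moving on."""
    rx = np.asarray(powers) * np.asarray(gains)   # received powers
    order = np.argsort(-rx)                       # strongest decoded first
    rates = np.zeros(len(rx))
    remaining = rx.sum()
    for i in order:
        remaining -= rx[i]                        # undecoded users remain as interference
        rates[i] = np.log2(1.0 + rx[i] / (remaining + noise))
    return rates
```

Distinguishing LoS from NLoS links, as the paper does, changes the gains fed into a computation like this and hence which users should share a subchannel.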

Journal ArticleDOI
18 Mar 2020
TL;DR: Three adaptation techniques are developed and evaluated on four HAR datasets to compare their relative performance in addressing wearing diversity, and a careful analysis reveals the downsides of each UDA algorithm and uncovers several implicit data-related assumptions without which these algorithms suffer a major degradation in accuracy.
Abstract: Wearable sensors are increasingly becoming the primary interface for monitoring human activities. However, in order to scale human activity recognition (HAR) using wearable sensors to millions of users and devices, it is imperative that HAR computational models are robust against real-world heterogeneity in inertial sensor data. In this paper, we study the problem of wearing diversity, which pertains to the placement of the wearable sensor on the human body, and demonstrate that even state-of-the-art deep learning models are not robust against these factors. The core contribution of the paper lies in presenting a first-of-its-kind in-depth study of unsupervised domain adaptation (UDA) algorithms in the context of wearing diversity -- we develop and evaluate three adaptation techniques on four HAR datasets to evaluate their relative performance towards addressing the issue of wearing diversity. More importantly, we also do a careful analysis to learn the downsides of each UDA algorithm and uncover several implicit data-related assumptions without which these algorithms suffer a major degradation in accuracy. Taken together, our experimental findings caution against using UDA as a silver bullet for adapting HAR models to new domains, and serve as practical guidelines for HAR practitioners as well as pave the way for future research on domain adaptation in HAR.

Proceedings ArticleDOI
01 Aug 2020
TL;DR: D-Band Radio-on-Glass modules combining two highly integrated SiGe BiCMOS transceivers (TRX) with a record low-loss glass interposer technology are presented, representing the first low-cost and highly integrated solution for spectrally efficient backhaul systems in D-Band.
Abstract: D-Band Radio-on-Glass (RoG) modules combining two highly integrated SiGe BiCMOS transceivers (TRX) with a record low-loss glass interposer technology are presented. The ICs operate at 115-155 GHz (Low-Band) and 135-170 GHz (High-Band). In this frequency range, a transmitter Psat of up to 13 dBm and an average receiver NF of 8.5 dB are achieved. The integrated module supports TX constellations up to 512-QAM (2.2% EVM at 2 dBm output, 145 GHz) and data rates up to 42 Gb/s (128-QAM). Measurements mimicking a 250-meter wireless link demonstrate a maximum data rate of 36 Gb/s using 64-QAM. The RoG modules represent the first low-cost and highly integrated solution for spectrally efficient backhaul systems in D-Band.

Proceedings ArticleDOI
01 Jul 2020
TL;DR: Automatic and human evaluations show that the proposed novel Transformer-based generation framework can outperform the state of the art by a large margin.
Abstract: Text generation from a knowledge base aims to translate knowledge triples into natural language descriptions. Most existing methods ignore the faithfulness between a generated text description and the original table, leading to generated information that goes beyond the content of the table. In this paper, for the first time, we propose a novel Transformer-based generation framework to achieve this goal. The core techniques in our method to enforce faithfulness include a new table-text optimal-transport matching loss and a table-text embedding similarity loss based on the Transformer model. Furthermore, to evaluate faithfulness, we propose a new automatic metric specialized to the table-to-text generation problem. We also provide a detailed analysis of each component of our model in our experiments. Automatic and human evaluations show that our framework can outperform the state of the art by a large margin.
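An optimal-transport matching loss of the kind described can be realised with the standard Sinkhorn iteration. The sketch below is our own illustration (the paper's loss operates on Transformer embeddings of table cells and generated tokens); it computes a soft matching cost between two uniform distributions given a pairwise cost matrix:

```python
import numpy as np

def sinkhorn_ot(cost, reg=0.1, n_iter=200):
    """Entropy-regularised optimal transport (Sinkhorn) between uniform
    marginals.  cost[i, j] could be, e.g., one minus the cosine similarity
    between table-cell embedding i and text-token embedding j; the returned
    scalar is the soft matching cost usable as a training loss."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)        # uniform source marginal
    b = np.full(m, 1.0 / m)        # uniform target marginal
    K = np.exp(-cost / reg)        # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)          # alternate scaling updates
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]
    return np.sum(plan * cost), plan
```

When the cost matrix already favours a one-to-one alignment, the transport plan concentrates on it and the loss is near zero, which is the sense in which the loss rewards faithful table-text matching.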