
Showing papers in "IEEE Transactions on Signal Processing in 2018"


Journal ArticleDOI
TL;DR: In this article, a quadratic transform technique is proposed for solving the multiple-ratio concave-convex FP problem, where the original nonconvex problem is recast as a sequence of convex problems.
Abstract: Fractional programming (FP) refers to a family of optimization problems that involve ratio term(s). This two-part paper explores the use of FP in the design and optimization of communication systems. Part I of this paper focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave–convex FP problem—in contrast to conventional FP techniques that mostly can only deal with the single-ratio or the max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly for power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate the optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as the fixed-point iteration and the weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II of this paper.
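The single-ratio case conveys the core of the quadratic transform: a ratio A(x)/B(x) is replaced by the surrogate 2y*sqrt(A(x)) - y^2*B(x), and x and the auxiliary variable y are updated alternately. The sketch below is a minimal illustration with assumed toy choices of A and B; the grid search stands in for the convex solve of the paper's x-update, and this is not the paper's multiple-ratio algorithm.

```python
import numpy as np

# Single-ratio quadratic transform: maximize A(x)/B(x) by alternating
#   y <- sqrt(A(x)) / B(x)                        (closed form)
#   x <- argmax_x 2*y*sqrt(A(x)) - y**2 * B(x)    (convex in the paper;
#                                                  grid search stands in here)
def quadratic_transform_max(A, B, x_grid, iters=50):
    x = x_grid[0]
    for _ in range(iters):
        y = np.sqrt(A(x)) / B(x)
        surrogate = 2.0 * y * np.sqrt(A(x_grid)) - y**2 * B(x_grid)
        x = x_grid[np.argmax(surrogate)]
    return x

# Toy concave numerator / convex positive denominator (illustrative choices)
A = lambda x: np.log1p(x)
B = lambda x: 1.0 + 0.5 * x**2
grid = np.linspace(0.01, 5.0, 2000)

x_star = quadratic_transform_max(A, B, grid)
x_direct = grid[np.argmax(A(grid) / B(grid))]   # brute-force reference
```

At the fixed point the surrogate and the original ratio share the same maximizer, which is why the alternating updates converge to a stationary point of the ratio.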

840 citations


Journal ArticleDOI
TL;DR: The potential for data transmission in a system with a massive number of radiating and sensing elements, thought of as a contiguous surface of electromagnetically active material, is considered. This system is termed a large intelligent surface (LIS), a newly proposed concept that goes beyond contemporary massive MIMO technology.
Abstract: In this paper, we consider the potential of data transmission in a system with a massive number of radiating and sensing elements, thought of as a contiguous surface of electromagnetically active material. We refer to this as a large intelligent surface (LIS), which is a newly proposed concept and conceptually goes beyond contemporary massive MIMO technology. First, we consider capacities of single-antenna autonomous terminals communicating to the LIS where the entire surface is used as a receiving antenna array in a perfect line-of-sight propagation environment. Under the condition that the surface area is sufficiently large, the received signal after a matched-filtering operation can be closely approximated by a sinc-function-like intersymbol interference channel. Second, we analyze a normalized capacity measured per unit surface, for a fixed transmit power per volume unit with different terminal deployments. As terminal density increases, the limit of the normalized capacity [nats/s/Hz/volume-unit] achieved when wavelength $\lambda$ approaches zero is equal to half of the transmit power per volume unit divided by the noise spatial power spectral density. Third, we show that the number of independent signal dimensions that can be harvested per meter deployed surface is $2/\lambda$ for one-dimensional terminal deployment, and $\pi /\lambda ^2$ per square meter for two- and three-dimensional terminal deployments. Finally, we consider implementations of the LIS in the form of a grid of conventional antenna elements, and show that the sampling lattice that minimizes the surface area and simultaneously obtains one independent signal dimension for every spent antenna is the hexagonal lattice.

712 citations


Journal ArticleDOI
TL;DR: This work first characterizes a class of 'learnable algorithms' and then designs DNNs to approximate some algorithms of interest in wireless communications, demonstrating the superior ability of DNNs to approximate two considerably complex algorithms designed for power allocation in wireless transmit signal design, while giving orders-of-magnitude speedup in computational time.
Abstract: Numerical optimization has played a central role in addressing key signal processing (SP) problems. Highly effective methods have been developed for a large variety of SP applications such as communications, radar, filter design, and speech and image analytics, just to name a few. However, optimization algorithms often entail considerable complexity, which creates a serious gap between theoretical design/analysis and real-time processing. In this paper, we aim at providing a new learning-based perspective to address this challenging issue. The key idea is to treat the input and output of an SP algorithm as an unknown nonlinear mapping and use a deep neural network (DNN) to approximate it. If the nonlinear mapping can be learned accurately by a DNN of moderate size, then SP tasks can be performed effectively, since passing the input through a DNN only requires a small number of simple operations. In our paper, we first identify a class of optimization algorithms that can be accurately approximated by a fully connected DNN. Second, to demonstrate the effectiveness of the proposed approach, we apply it to approximate a popular interference management algorithm, namely, the WMMSE algorithm. Extensive experiments using both synthetically generated wireless channel data and real DSL channel data have been conducted. It is shown that, in practice, only a small network is sufficient to obtain high approximation accuracy, and DNNs can achieve orders of magnitude speedup in computational time compared to the state-of-the-art interference management algorithm.
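The learning-based idea (generate input-output pairs by running the algorithm offline, then fit a fully connected network to the mapping) can be sketched in a few lines. The mapping f below is a hypothetical scalar stand-in, since the paper approximates the far more involved WMMSE algorithm; the network size and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "algorithm" to approximate: a smooth scalar input->output map.
# (The paper approximates the far more involved WMMSE algorithm.)
f = lambda x: np.log2(1.0 + x)

# Step 1: run the algorithm offline to generate training pairs.
X = rng.uniform(0.0, 5.0, size=(2000, 1))
Y = f(X)

# Step 2: fit a small fully connected network by plain gradient descent.
H, lr = 32, 0.05                                  # arbitrary size / step choices
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)
for _ in range(3000):
    Z = np.tanh(X @ W1 + b1)                      # hidden layer
    E = Z @ W2 + b2 - Y                           # prediction error
    gW2 = Z.T @ E / len(X); gb2 = E.mean(axis=0)  # backprop, output layer
    dZ = (E @ W2.T) * (1.0 - Z**2)                # backprop through tanh
    gW1 = X.T @ dZ / len(X); gb1 = dZ.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
```

Once trained, evaluating the network is a handful of matrix products, which is the source of the speedup the paper reports over running the optimization algorithm itself.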

607 citations


Journal ArticleDOI
Liang Liu1, Wei Yu1
TL;DR: It is shown that in the asymptotic massive multiple-input multiple-output regime, both the missed device detection and the false alarm probabilities for activity detection can always be made to go to zero by utilizing compressed sensing techniques that exploit sparsity in the user activity pattern.
Abstract: This two-part paper considers an uplink massive device communication scenario in which a large number of devices are connected to a base station (BS), but user traffic is sporadic so that in any given coherence interval, only a subset of users is active. The objective is to quantify the cost of active user detection and channel estimation and to characterize the overall achievable rate of a grant-free two-phase access scheme in which device activity detection and channel estimation are performed jointly using pilot sequences in the first phase and data is transmitted in the second phase. In order to accommodate a large number of simultaneously transmitting devices, this paper studies an asymptotic regime where the BS is equipped with a massive number of antennas. The main contributions of Part I of this paper are as follows. First, we note that as a consequence of having a large pool of potentially active devices but limited coherence time, the pilot sequences cannot all be orthogonal. However, despite the nonorthogonality, this paper shows that in the asymptotic massive multiple-input multiple-output regime, both the missed device detection and the false alarm probabilities for activity detection can always be made to go to zero by utilizing compressed sensing techniques that exploit sparsity in the user activity pattern. Part II of this paper further characterizes the achievable rates using the proposed scheme and quantifies the cost of using nonorthogonal pilot sequences for channel estimation in achievable rates.

594 citations


Journal ArticleDOI
TL;DR: This work focuses on a dual-functional multi-input-multi-output (MIMO) radar-communication system, where a single transmitter with multiple antennas communicates with downlink cellular users and detects radar targets simultaneously and proposes a branch-and-bound algorithm that obtains a globally optimal solution.
Abstract: We focus on a dual-functional multi-input-multi-output (MIMO) radar-communication (RadCom) system, where a single transmitter with multiple antennas communicates with downlink cellular users and detects radar targets simultaneously. Several design criteria are considered for minimizing the downlink multiuser interference. First, we consider both omnidirectional and directional beampattern design problems, where the closed-form globally optimal solutions are obtained. Based on the derived waveforms, we further consider weighted optimizations targeting a flexible tradeoff between radar and communications performance and introduce low-complexity algorithms. Moreover, to address the more practical constant modulus waveform design problem, we propose a branch-and-bound algorithm that obtains a globally optimal solution, and derive its worst-case complexity as a function of the maximum iteration number. Finally, we assess the effectiveness of the proposed waveform design approaches via numerical results.

487 citations


Journal ArticleDOI
TL;DR: A novel virtual array interpolation-based algorithm for coprime array direction-of-arrival (DOA) estimation is proposed; under the Hermitian positive semi-definite Toeplitz condition, an atomic norm minimization problem with respect to the equivalent virtual measurement vector is formulated.
Abstract: Coprime arrays can achieve an increased number of degrees of freedom by deriving the equivalent signals of a virtual array. However, most existing methods fail to utilize all information received by the coprime array due to the non-uniformity of the derived virtual array, resulting in an inevitable estimation performance loss. To address this issue, we propose a novel virtual array interpolation-based algorithm for coprime array direction-of-arrival (DOA) estimation in this paper. The idea of array interpolation is employed to construct a virtual uniform linear array such that all virtual sensors in the non-uniform virtual array can be utilized, based on which the atomic norm of the second-order virtual array signals is defined. By investigating the properties of virtual domain atomic norm, it is proved that the covariance matrix of the interpolated virtual array is related to the virtual measurements under the Hermitian positive semi-definite Toeplitz condition. Accordingly, an atomic norm minimization problem with respect to the equivalent virtual measurement vector is formulated to reconstruct the interpolated virtual array covariance matrix in a gridless manner, where the reconstructed covariance matrix enables off-grid DOA estimation. Simulation results demonstrate the performance advantages of the proposed DOA estimation algorithm for coprime arrays.

394 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the massive connectivity application in which a large number of devices communicate with a base station (BS) in a sporadic fashion, and proposed an approximate message passing (AMP) algorithm design that exploits the statistics of the wireless channel and provided an analytical characterization of the probabilities of false alarm and missed detection via state evolution.
Abstract: This paper considers the massive connectivity application in which a large number of devices communicate with a base-station (BS) in a sporadic fashion. Device activity detection and channel estimation are central problems in such a scenario. Due to the large number of potential devices, the devices need to be assigned non-orthogonal signature sequences. The main objective of this paper is to show that by using random signature sequences and by exploiting sparsity in the user activity pattern, the joint user detection and channel estimation problem can be formulated as a compressed sensing single measurement vector (SMV) or multiple measurement vector (MMV) problem depending on whether the BS has a single antenna or multiple antennas and efficiently solved using an approximate message passing (AMP) algorithm. This paper proposes an AMP algorithm design that exploits the statistics of the wireless channel and provides an analytical characterization of the probabilities of false alarm and missed detection via state evolution. We consider two cases depending on whether or not the large-scale component of the channel fading is known at the BS and design the minimum mean squared error denoiser for AMP according to the channel statistics. Simulation results demonstrate the substantial advantage of exploiting the channel statistics in AMP design; however, knowing the large-scale fading component does not appear to offer tangible benefits. For the multiple-antenna case, we employ two different AMP algorithms, namely the AMP with vector denoiser and the parallel AMP-MMV, and quantify the benefit of deploying multiple antennas.
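As a simpler stand-in for the paper's AMP machinery, the sketch below recovers a sparse activity pattern from non-orthogonal signature sequences with ISTA, i.e., proximal gradient descent on the LASSO objective. All dimensions, amplitudes, and thresholds are illustrative assumptions; the paper's AMP algorithm with MMSE denoisers and its state-evolution analysis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, K = 200, 60, 5            # devices, sequence length, active devices (illustrative)
A = rng.normal(0.0, 1.0 / np.sqrt(L), size=(L, N))  # non-orthogonal signatures
x_true = np.zeros(N)
active = rng.choice(N, size=K, replace=False)
x_true[active] = rng.normal(1.5, 0.2, size=K)       # effective gains of active devices
y = A @ x_true + 0.01 * rng.normal(size=L)          # received pilot observations

# ISTA: proximal gradient descent on 0.5*||y - A x||^2 + lam*||x||_1
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(500):
    r = x - step * (A.T @ (A @ x - y))              # gradient step
    x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft threshold

detected = set(np.flatnonzero(np.abs(x) > 0.5).tolist())
```

Even though only 60 pilot symbols observe 200 potential devices, the 5-sparse activity pattern is recovered exactly, which is the sparsity effect the paper exploits at scale.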

326 citations


Journal ArticleDOI
TL;DR: In this article, a broadband channel estimation algorithm for mmWave multiple input multiple output (MIMO) systems with few-bit analog-to-digital converters (ADCs) is proposed.
Abstract: We develop a broadband channel estimation algorithm for millimeter wave (mmWave) multiple input multiple output (MIMO) systems with few-bit analog-to-digital converters (ADCs). Our methodology exploits the joint sparsity of the mmWave MIMO channel in the angle and delay domains. We formulate the estimation problem as a noisy quantized compressed-sensing problem and solve it using efficient approximate message passing (AMP) algorithms. In particular, we model the angle-delay coefficients using a Bernoulli–Gaussian-mixture distribution with unknown parameters and use the expectation-maximization forms of the generalized AMP and vector AMP algorithms to simultaneously learn the distributional parameters and compute approximately minimum mean-squared error (MSE) estimates of the channel coefficients. We design a training sequence that allows fast, fast Fourier transform based implementation of these algorithms while minimizing peak-to-average power ratio at the transmitter, making our methods scale efficiently to large numbers of antenna elements and delays. We present the results of a detailed simulation study that compares our algorithms to several benchmarks. Our study investigates the effect of SNR, training length, training type, ADC resolution, and runtime on channel estimation MSE, mutual information, and achievable rate. It shows that, in a mmWave MIMO system, the methods we propose to exploit joint angle-delay sparsity allow 1-bit ADCs to perform comparably to infinite-bit ADCs at low SNR, and 4-bit ADCs to perform comparably to infinite-bit ADCs at medium SNR.

319 citations


Journal ArticleDOI
TL;DR: The impact of centralized and distributed deployments of LIS is extensively discussed, and it is shown that a distributed deployment of LIS can enlarge the coverage and improve the overall positioning performance.
Abstract: We consider the potential for positioning with a system where antenna arrays are deployed as a large intelligent surface (LIS), which is a newly proposed concept beyond massive multi-input multi-output (MIMO). In a first step, we derive the Fisher-information matrix (FIM) and Cramer–Rao lower bound (CRLB) in closed form for positioning a terminal located perpendicular to the center of the LIS, whose location we refer to as being on the central perpendicular line (CPL) of the LIS. For a terminal that is not on the CPL, closed-form expressions of the CRLB seem out of reach, and we alternatively find approximations that are shown to be accurate. Under mild conditions, we show that the CRLB for all three Cartesian dimensions ($x$, $y$, and $z$) decreases quadratically in the surface area of the LIS. In a second step, we analyze the CRLB for positioning when there is an unknown phase $\varphi$ present in the analog circuits of the LIS. We then show that the CRLBs are dramatically degraded for all three dimensions and decrease in the third order of the surface area. Moreover, with an infinitely large LIS, the CRLB for the $z$-dimension with an unknown $\varphi$ is 6 dB higher than in the case without phase uncertainty, and the CRLB for estimating $\varphi$ converges to a constant that is independent of the wavelength $\lambda$. Finally, we extensively discuss the impact of centralized and distributed deployments of the LIS, and show that a distributed deployment can enlarge the coverage and improve the overall positioning performance.

308 citations


Journal ArticleDOI
TL;DR: This work considers detection based on deep learning, and shows it is possible to train detectors that perform well without any knowledge of the underlying channel models, and demonstrates that the bit error rate performance of the proposed SBRNN detector is better than that of a Viterbi detector with imperfect CSI.
Abstract: We consider detection based on deep learning, and show it is possible to train detectors that perform well without any knowledge of the underlying channel models. Moreover, when the channel model is known, we demonstrate that it is possible to train detectors that do not require channel state information (CSI). In particular, a technique we call a sliding bidirectional recurrent neural network (SBRNN) is proposed for detection where, after training, the detector estimates the data in real time as the signal stream arrives at the receiver. We evaluate this algorithm, as well as other neural network (NN) architectures, using the Poisson channel model, which is applicable to both optical and molecular communication systems. In addition, we also evaluate the performance of this detection method applied to data sent over a molecular communication platform, where the channel is difficult to model analytically. We show that the SBRNN is computationally efficient, and can perform detection under various channel conditions without knowing the underlying channel model. We also demonstrate that the bit error rate performance of the proposed SBRNN detector is better than that of a Viterbi detector with imperfect CSI, as well as that of other NN detectors that have been previously proposed. Finally, we show that the SBRNN can perform well in rapidly changing channels, where the coherence time is on the order of a single symbol duration.

305 citations


Journal ArticleDOI
TL;DR: This paper investigates spatial- and frequency-wideband effects in massive MIMO systems from the array signal processing point of view, and develops efficient uplink and downlink channel estimation strategies that require much less training overhead and cause no pilot contamination.
Abstract: When there are a large number of antennas in massive MIMO systems, the transmitted wideband signal will be sensitive to the physical propagation delay of electromagnetic waves across the large array aperture, which is called the spatial-wideband effect. In this scenario, the transceiver design is different from most of the existing works, which presume that the bandwidth of the transmitted signals is not that wide, ignore the spatial-wideband effect, and only address the frequency selectivity. In this paper, we investigate spatial- and frequency-wideband effects, called dual-wideband effects, in massive MIMO systems from the array signal processing point of view. Taking millimeter-wave-band communications as an example, we describe the transmission process to address the dual-wideband effects. By exploiting the channel sparsity in the angle domain and the delay domain, we develop efficient uplink and downlink channel estimation strategies that require much less training overhead and cause no pilot contamination. Thanks to the array signal processing techniques, the proposed channel estimation is suitable for both TDD and FDD massive MIMO systems. Numerical examples demonstrate that the proposed transmission design for massive MIMO systems can effectively deal with the dual-wideband effects.

Journal ArticleDOI
TL;DR: A method for estimating conditionally Gaussian random vectors with random covariance matrices is presented, which uses techniques from the field of machine learning to obtain a similarly efficient (but suboptimal) estimator.
Abstract: We present a method for estimating conditionally Gaussian random vectors with random covariance matrices, which uses techniques from the field of machine learning. Such models are typical in communication systems, where the covariance matrix of the channel vector depends on random parameters, e.g., angles of propagation paths. If the covariance matrices exhibit certain Toeplitz and shift-invariance structures, the complexity of the minimum mean squared error (MMSE) channel estimator can be reduced to ${\mathcal O}(M\log M)$ floating point operations, where $M$ is the channel dimension. While in the absence of structure the complexity is much higher, we obtain a similarly efficient (but suboptimal) estimator by using the MMSE estimator of the structured model as a blueprint for the architecture of a neural network. This network learns the MMSE estimator for the unstructured model, but only within the given class of estimators that contains the MMSE estimator for the structured model. Numerical simulations with typical spatial channel models demonstrate the generalization properties of the chosen class of estimators to realistic channel models.
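The structured case is easiest to see with a circulant covariance, the simplest instance of the Toeplitz/shift-invariance structure mentioned above: the MMSE filter C(C + sigma^2 I)^(-1) diagonalizes in the DFT basis, so the estimate costs O(M log M) via the FFT. A sketch under an assumed eigenvalue spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
M, sigma2 = 64, 0.1

# Circulant covariance: its eigenvalues (in the DFT basis) are the FFT of its
# first column. The eigenvalue spectrum below is an illustrative assumption.
spec = 1.0 / (1.0 + np.arange(M, dtype=float))
c = np.fft.ifft(spec)                                    # first column of C
C = np.stack([np.roll(c, k) for k in range(M)], axis=1)  # full circulant matrix

# A complex observation standing in for y = channel + noise
y = rng.normal(size=M) + 1j * rng.normal(size=M)

# O(M^3) direct MMSE filter:  h_hat = C (C + sigma2*I)^{-1} y
h_direct = C @ np.linalg.solve(C + sigma2 * np.eye(M), y)

# O(M log M) equivalent: elementwise Wiener weights in the Fourier domain
h_fft = np.fft.ifft(spec / (spec + sigma2) * np.fft.fft(y))
```

The two estimates agree to machine precision; the FFT version is the "blueprint" computation whose structure the paper's neural network architecture mimics for the unstructured case.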

Journal ArticleDOI
TL;DR: The main idea of the proposed FP approach is to decouple the interaction among the interfering links, thereby permitting a distributed and joint optimization of the discrete and continuous variables with provable convergence.
Abstract: This two-part paper develops novel methodologies for using fractional programming (FP) techniques to design and optimize communication systems. Part I of this paper proposes a new quadratic transform for FP and treats its application for continuous optimization problems. In this Part II of the paper, we study discrete problems, such as those involving user scheduling, which are considerably more difficult to solve. Unlike the continuous problems, discrete or mixed discrete-continuous problems normally cannot be recast as convex problems. In contrast to the common heuristic of relaxing the discrete variables, this work reformulates the original problem in an FP form amenable to distributed combinatorial optimization. The paper illustrates this methodology by tackling the important and challenging problem of uplink coordinated multicell user scheduling in wireless cellular systems. Uplink scheduling is more challenging than downlink scheduling, because uplink user scheduling decisions significantly affect the interference pattern in nearby cells. Furthermore, the discrete scheduling variable needs to be optimized jointly with continuous variables such as transmit power levels and beamformers. The main idea of the proposed FP approach is to decouple the interaction among the interfering links, thereby permitting a distributed and joint optimization of the discrete and continuous variables with provable convergence. The paper shows that the well-known weighted minimum mean-square-error (WMMSE) algorithm can also be derived from a particular use of FP; but our proposed FP-based method significantly outperforms WMMSE when discrete user scheduling variables are involved, both in terms of run-time efficiency and optimization results.

Journal ArticleDOI
TL;DR: Numerical simulations demonstrate that the radar transmitted power can be efficiently reduced by exploiting the communication signals scattered off the target at the radar receiver, and it is shown that the robust waveforms bound the worst-case power-saving performance of radar system for any target spectra in the uncertainty sets.
Abstract: This paper considers the problem of power minimization-based robust orthogonal frequency division multiplexing (OFDM) radar waveform design, in which the radar coexists with a communication system in the same frequency band. Recognizing that the precise characteristics of target spectra are impossible to capture in practice, it is assumed that the target spectra lie in uncertainty sets bounded by known upper and lower bounds. Based on this uncertainty model, three different power minimization-based robust radar waveform design criteria are proposed to minimize the worst-case radar transmitted power by optimizing the OFDM radar waveform, which are constrained by a specified mutual information requirement for target characterization and a minimum capacity threshold for communication system. These criteria differ in the way the communication signals scattered off the target are considered: 1) as useful energy, 2) as interference, or 3) ignored altogether at the radar receiver. Numerical simulations demonstrate that the radar transmitted power can be efficiently reduced by exploiting the communication signals scattered off the target at the radar receiver. It is also shown that the robust waveforms bound the worst-case power-saving performance of radar system for any target spectra in the uncertainty sets.

Journal ArticleDOI
TL;DR: In this article, a tractable model of the rectifier nonlinearity was developed to cope with general multicarrier modulated input waveforms, and a novel WIPT architecture was proposed based on the superposition of multicarrier unmodulated and modulated waveforms at the transmitter, optimized as a function of channel state information so as to characterize the rate-energy region of the whole system.
Abstract: The design of wireless information and power transfer (WIPT) has so far relied on an oversimplified and inaccurate linear model of the energy harvester. In this paper, we depart from this linear model and design WIPT considering the rectifier nonlinearity. We develop a tractable model of the rectifier nonlinearity that is flexible enough to cope with general multicarrier modulated input waveforms. Leveraging that model, we motivate and introduce a novel WIPT architecture relying on the superposition of multicarrier unmodulated and modulated waveforms at the transmitter. The superposed WIPT waveforms are optimized as a function of the channel state information so as to characterize the rate-energy region of the whole system. Analysis and numerical results illustrate the performance of the derived waveforms and WIPT architecture and highlight that nonlinearity radically changes the design of WIPT. We make key and refreshing observations. First, analysis (confirmed by circuit simulations) shows that modulated and unmodulated waveforms are not equally suitable for wireless power delivery, namely, modulation being beneficial in single-carrier transmissions but detrimental in multicarrier transmissions. Second, a multicarrier unmodulated waveform (superposed to a multicarrier modulated waveform) is useful to enlarge the rate-energy region of WIPT. Third, a combination of power splitting and time sharing is in general the best strategy. Fourth, a nonzero mean Gaussian input distribution outperforms the conventional capacity-achieving zero-mean Gaussian input distribution in multicarrier transmissions. Fifth, the rectifier nonlinearity is beneficial to system performance and is essential to efficient WIPT design.
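The effect of the fourth-order term in a truncated Taylor model of the rectifier can be checked numerically: at equal average power, an in-phase multisine has a larger fourth moment than a single tone, and hence a larger model output. The tone count and the diode coefficients k2, k4 below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Truncated Taylor model of the rectifier: z_dc ~ k2*E[y^2] + k4*E[y^4].
# k2, k4 and the tone count are illustrative assumptions.
k2, k4 = 0.0034, 0.3829
P = 1.0                                               # average RF input power
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)    # one fundamental period

# Single tone vs. an in-phase N-tone multisine, both with average power P
y_single = np.sqrt(2.0 * P) * np.cos(2 * np.pi * t)
N = 8
a = np.sqrt(2.0 * P / N)
y_multi = sum(a * np.cos(2 * np.pi * (k + 1) * t) for k in range(N))

def z_dc(y):
    # Model output: second- and fourth-moment terms of the waveform
    return k2 * np.mean(y**2) + k4 * np.mean(y**4)

z_single, z_multi = z_dc(y_single), z_dc(y_multi)
# Equal second moments, but the peaky multisine has a larger fourth moment,
# so the nonlinear model predicts more harvested DC power.
```

This is the mechanism behind the paper's observation that the rectifier nonlinearity, far from being a nuisance, changes which waveforms are best for power delivery.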

Journal ArticleDOI
TL;DR: This paper considers the joint design of a multiple-input multiple-output (MIMO) radar with co-located antennas and a MIMO communication system, and a reduced-complexity iterative algorithm, based on iterative alternating maximization of three suitably designed subproblems, is proposed and analyzed.
Abstract: This paper considers the joint design of a multiple-input multiple-output (MIMO) radar with co-located antennas and a MIMO communication system. The degrees of freedom under the designer's control are the waveforms transmitted by the radar transmit array, the filter at the radar array and the code-book employed by the communication system to form its space-time code matrix. Two formulations of the spectrum sharing problem are proposed. First, the design problem is stated as the constrained maximization of the signal-to-interference-plus-noise ratio at the radar receiver, where interference is due to both clutter and the coexistence structure, and the constraints concern both the similarity with a standard radar waveform and the rate achievable by the communication system, on top of those on the transmit energy. The resulting problem is nonconvex, but a reduced-complexity iterative algorithm, based on iterative alternating maximization of three suitably designed subproblems, is proposed and analyzed. In addition, the constrained maximization of the communication rate is also investigated. The convergence of all the devised algorithms is guaranteed. Finally, a thorough performance assessment is presented, aimed at showing the merits of the proposed approach.

Journal ArticleDOI
TL;DR: A general and flexible algorithm is proposed based on the majorization-minimization method, with guaranteed monotonicity, lower computational complexity per iteration, and/or convergence to a B-stationary point; many waveform constraints can be flexibly incorporated into the algorithm with only a few modifications.
Abstract: In this paper, we consider the joint design of both transmit waveforms and receive filters for a colocated multiple-input-multiple-output (MIMO) radar in the presence of signal-dependent interference and white noise. The design problem is formulated as a maximization of the signal-to-interference-plus-noise ratio (SINR), including various constraints on the transmit waveforms. Compared with the traditional alternating semidefinite relaxation approach, a general and flexible algorithm is proposed based on the majorization-minimization method with guaranteed monotonicity, lower computational complexity per iteration, and/or convergence to a B-stationary point. Many waveform constraints can be flexibly incorporated into the algorithm with only a few modifications. Furthermore, the connection between the proposed algorithm and the alternating optimization approach is revealed. Finally, the proposed algorithm is evaluated via numerical experiments in terms of SINR performance, ambiguity function, computational time, and properties of the designed waveforms. The experimental results show that the proposed algorithms are faster in terms of running time and meanwhile achieve as good SINR performance as the existing methods.

Journal ArticleDOI
TL;DR: This paper aims to elevate the notion of joint harmonic analysis to a full-fledged framework denoted as time-vertex signal processing, that links together the time-domain signal processing techniques with the new tools of graph signal processing.
Abstract: An emerging way to deal with high-dimensional non-Euclidean data is to assume that the underlying structure can be captured by a graph. Recently, ideas have begun to emerge related to the analysis of time-varying graph signals. This paper aims to elevate the notion of joint harmonic analysis to a full-fledged framework denoted as time-vertex signal processing, that links together the time-domain signal processing techniques with the new tools of graph signal processing. This entails three main contributions: a) We provide a formal motivation for harmonic time-vertex analysis as an analysis tool for the state evolution of simple partial differential equations on graphs; b) we improve the accuracy of joint filtering operators by up to two orders of magnitude; c) using our joint filters, we construct time-vertex dictionaries analyzing the different scales and the local time-frequency content of a signal. The utility of our tools is illustrated in numerous applications and datasets, such as dynamic mesh denoising and classification, still-video inpainting, and source localization in seismic events. Our results suggest that joint analysis of time-vertex signals can bring benefits to regression and learning.
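The basic object behind this framework, the joint time-vertex Fourier transform, applies a graph Fourier transform across vertices and a DFT across time. A minimal sketch on an assumed ring graph (the graph, sizes, and signal are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 16                       # vertices, time samples (arbitrary sizes)

# Ring graph adjacency and combinatorial Laplacian
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
_, U = np.linalg.eigh(L)           # graph Fourier basis (Laplacian eigenvectors)

X = rng.normal(size=(N, T))        # time-varying graph signal, one row per vertex

# Joint time-vertex Fourier transform: GFT across vertices, DFT across time
X_hat = np.fft.fft(U.T @ X, axis=1)

# The transform is invertible: undo the DFT, then the GFT
X_rec = U @ np.fft.ifft(X_hat, axis=1).real
```

Joint filters in this framework are simply functions applied to the (graph frequency, temporal frequency) pairs indexing `X_hat`.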

Journal ArticleDOI
TL;DR: Numerical results show that the proposed method achieves a significant power saving compared to conventional approaches, while obtaining a favorable performance-complexity tradeoff.
Abstract: We propose a novel approach to enable coexistence between a Multi-Input-Multi-Output (MIMO) radar and a downlink multiuser multi-input single-output communication system. By exploiting constructive multiuser interference (MUI), the proposed approach trades useful MUI power for a reduction in transmit power, obtaining a power-efficient transmission. This paper focuses on two optimization problems: a) transmit power minimization at the base station (BS), while guaranteeing the receive signal-to-interference-plus-noise ratio (SINR) level of downlink users and the interference-to-noise ratio level at the radar; b) minimization of the interference from the BS to the radar for a given downlink SINR requirement and transmit power budget. To reduce the computational overhead of the proposed scheme in practice, an algorithm based on gradient projection is designed to solve the power minimization problem. In addition, we investigate the tradeoff between the performance of radar and communication, and analytically derive the key metrics for the MIMO radar in the presence of interference from the BS. Finally, a robust power minimization problem is formulated to ensure the effectiveness of the proposed method in the case of imperfect channel state information. Numerical results show that the proposed method achieves a significant power saving compared to conventional approaches, while obtaining a favorable performance-complexity tradeoff.
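Gradient projection, the low-complexity solver proposed here, alternates a gradient step with a projection onto the feasible set. The sketch below uses a simple power-budget ball and a toy quadratic objective; both are illustrative stand-ins, not the paper's SINR-constrained formulation.

```python
import numpy as np

def project_ball(x, P):
    """Project onto the power-budget set {x : ||x||^2 <= P}."""
    n = np.linalg.norm(x)
    return x if n**2 <= P else x * (np.sqrt(P) / n)

def gradient_projection(grad, x0, P, step=0.1, iters=500):
    """Projected gradient descent: take a gradient step, then project."""
    x = project_ball(x0, P)
    for _ in range(iters):
        x = project_ball(x - step * grad(x), P)
    return x

# toy problem: minimize ||x - c||^2 subject to ||x||^2 <= 1; since c lies
# outside the ball, the optimum is c scaled onto the unit sphere
c = np.array([2.0, 1.0])
x = gradient_projection(lambda x: 2.0 * (x - c), np.zeros(2), P=1.0)
```

The appeal of the method, here as in the paper, is that each iteration costs only a gradient evaluation plus a cheap projection, avoiding a general-purpose convex solver.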

Journal ArticleDOI
TL;DR: In this article, a coupled tensor factorization framework was proposed to solve the hyperspectral super-resolution problem from a tensor perspective, where the multidimensional structure of the HSI and the MSI was utilized to improve the performance.
Abstract: Hyperspectral super-resolution refers to the problem of fusing a hyperspectral image (HSI) and a multispectral image (MSI) to produce a super-resolution image (SRI) that admits fine spatial and spectral resolutions. State-of-the-art methods approach the problem via low-rank matrix approximations to the matricized HSI and MSI. These methods are effective to some extent, but a number of challenges remain. First, HSIs and MSIs are naturally third-order tensors (data “cubes”) and thus matricization is prone to a loss of structural information, which could degrade performance. Second, it is unclear whether these low-rank matrix-based fusion strategies can guarantee the identifiability of the SRI under realistic assumptions. Yet identifiability plays a pivotal role in estimation problems and usually has a significant impact on practical performance. Third, a majority of the existing methods assume known (or easily estimated) degradation operators from the SRI to the corresponding HSI and MSI, which is hardly the case in practice. In this paper, we propose to tackle the super-resolution problem from a tensor perspective. Specifically, we utilize the multidimensional structure of the HSI and MSI to propose a coupled tensor factorization framework that can effectively overcome the aforementioned issues. The proposed approach guarantees the identifiability of the SRI under mild and realistic conditions. Furthermore, it works with little knowledge about the degradation operators, which is clearly a favorable feature in practice. Semi-real scenarios are simulated to showcase the effectiveness of the proposed approach.
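Coupled tensor factorization builds on the canonical polyadic (CP) model and its matricizations. The snippet below verifies the basic identity such a framework rests on: with a row-major mode-1 unfolding (last mode varying fastest, a convention that differs across textbooks), the unfolded tensor equals A times the transposed Khatri-Rao product of the other two factors.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: row (j*K + k) of the result is B[j]*C[k]."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def cp_tensor(A, B, C):
    """Rank-R CP model: T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(1)
I, J, K, R = 5, 4, 3, 2
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
T = cp_tensor(A, B, C)

# mode-1 unfolding: rows indexed by i, columns by (j, k) with k varying fastest
T1 = T.reshape(I, J * K)
```

In the fusion setting, the HSI and MSI correspond to the same CP factors observed through spatial and spectral degradation matrices, and coupling the two factorizations is what yields the identifiability guarantee.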

Journal ArticleDOI
TL;DR: This paper provides closed-form solutions for the optimum transmit policies for both systems under two basic models for the scattering produced by the radar onto the communication receiver, and account for possible correlation of the signal-independent fraction of the interference impinging on the radar.
Abstract: The focus of this paper is on coexistence between a communication system and a pulsed radar sharing the same bandwidth. Based on the fact that the interference generated by the radar onto the communication receiver is intermittent and depends on the density of scattering objects (such as, e.g., targets), we first show that the communication system is equivalent to a set of independent parallel channels, whereby precoding on each channel can be introduced as a new degree of freedom. We introduce a new figure of merit, named the compound rate , which is a convex combination of rates with and without interference, to be optimized under constraints concerning the signal-to-interference-plus-noise ratio (including signal-dependent interference due to clutter) experienced by the radar and obviously the powers emitted by the two systems: the degrees of freedom are the radar waveform and the aforementioned encoding matrix for the communication symbols. We provide closed-form solutions for the optimum transmit policies for both systems under two basic models for the scattering produced by the radar onto the communication receiver, and account for possible correlation of the signal-independent fraction of the interference impinging on the radar. We also discuss the region of the achievable communication rates with and without interference. A thorough performance assessment shows the potentials and the limitations of the proposed co-existing architecture.

Journal ArticleDOI
Liang Liu1, Wei Yu1
TL;DR: This paper characterizes each active user's achievable rate using random matrix theory under either maximal-ratio combining (MRC) or minimum mean-squared error (MMSE) receive beamforming at the base station (BS), assuming the statistics of their estimated channels as derived in Part I.
Abstract: This two-part paper aims to quantify the cost of device activity detection in an uplink massive connectivity scenario with a large number of devices whose activities are sporadic. Part I of this paper shows that in an asymptotic massive multiple-input multiple-output (MIMO) regime, device activity detection can always be made perfect. Part II of this paper subsequently shows that despite the perfect device activity detection, there is nevertheless a significant cost of device detection in terms of overall achievable rate, because nonorthogonal pilot sequences have to be used in order to accommodate the large number of potential devices, resulting in significantly larger channel estimation error as compared to conventional massive MIMO systems with orthogonal pilots. Specifically, this paper characterizes each active user's achievable rate using random matrix theory under either maximal-ratio combining (MRC) or minimum mean-squared error (MMSE) receive beamforming at the base station (BS), assuming the statistics of the estimated channels as derived in Part I. The characterization of user rate also allows the optimization of the pilot sequence length. Moreover, in contrast to the conventional massive MIMO system, MMSE beamforming is shown to achieve a much higher rate than MRC beamforming in the massive connectivity scenario under consideration. Finally, this paper illustrates the necessity of user scheduling for rate maximization when the number of active users is larger than the number of antennas at the BS.
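The MRC-versus-MMSE comparison can be sanity-checked numerically: for any interference-plus-noise covariance R, the beamformer w = R^{-1}h maximizes the receive SINR, so it can never do worse than MRC (w = h). A toy single-interferer setup follows; the dimensions and noise level are arbitrary choices, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 16                                # BS antennas
cn = lambda n: (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
h, g = cn(M), cn(M)                   # desired-user and interferer channels
sigma2 = 0.1
R = np.outer(g, g.conj()) + sigma2 * np.eye(M)   # interference-plus-noise covariance

def sinr(w):
    """Receive SINR |w^H h|^2 / (w^H R w) of a linear beamformer w."""
    return float(np.abs(w.conj() @ h)**2 / np.real(w.conj() @ R @ w))

w_mrc = h                             # maximal-ratio combining
w_mmse = np.linalg.solve(R, h)        # MMSE / max-SINR receive beamformer
```

The gap between the two widens as the interference grows relative to noise, which is exactly the regime created by nonorthogonal pilots in the massive connectivity scenario.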

Journal ArticleDOI
TL;DR: An off-grid model for downlink channel sparse representation with arbitrary two-dimensional-array antenna geometry is introduced, and an efficient sparse Bayesian learning approach for the sparse channel recovery and off- grid refinement is proposed.
Abstract: This paper addresses the problem of downlink channel estimation in frequency-division duplexing massive multiple-input multiple-output systems. The existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, there are at least two shortcomings of these DFT-based methods: first, they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires a special structure of ULAs; and second, they always suffer from a performance loss due to the leakage of energy over some DFT bins. To deal with the above-mentioned shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary two-dimensional-array antenna geometry, and propose an efficient sparse Bayesian learning approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to consider the sampled grid points as adjustable parameters. Utilizing an inexact block majorization–minimization algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.
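The energy-leakage shortcoming of DFT-based representations is easy to demonstrate: a ULA steering vector whose angle falls exactly on a DFT bin is 1-sparse in the DFT basis, while an angle half a bin away spreads its energy across many bins. This is the gap that off-grid refinement closes. Array size and angles below are arbitrary.

```python
import numpy as np

def steering(N, s):
    """ULA steering vector at half-wavelength spacing; s = sin(angle)."""
    return np.exp(1j * np.pi * np.arange(N) * s) / np.sqrt(N)

def concentration(a):
    """Fraction of the vector's energy captured by its single largest DFT bin."""
    c = np.abs(np.fft.fft(a, norm="ortho"))**2
    return c.max() / c.sum()

N = 64
on_grid = concentration(steering(N, 2 * 4 / N))          # angle exactly on a DFT bin
off_grid = concentration(steering(N, (2 * 4 + 1) / N))   # half a bin away: leakage
```

For the half-bin offset the dominant bin captures only about (2/pi)^2 of the energy (the peak of the Dirichlet kernel), so a nominally 1-sparse channel looks many-sparse under the fixed DFT grid.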

Journal ArticleDOI
TL;DR: This paper provides near-optimal guarantees for greedy sampling by introducing the concept of approximate supermodularity and updating the classical greedy bound, and provides explicit bounds on the approximate supermodularity of the interpolation mean-square error.
Abstract: Sampling is a fundamental topic in graph signal processing, having found applications in estimation, clustering, and video compression. In contrast to traditional signal processing, the irregularity of the signal domain makes selecting a sampling set nontrivial and hard to analyze. Indeed, though conditions for graph signal interpolation from noiseless samples exist, they do not lead to a unique sampling set. The presence of noise makes choosing among these sampling sets a hard combinatorial problem. Although greedy sampling schemes are commonly used in practice, they have no performance guarantee. This work takes a twofold approach to address this issue. First, universal performance bounds are derived for the Bayesian estimation of graph signals from noisy samples. In contrast to currently available bounds, they are not restricted to specific sampling schemes and hold for any sampling sets. Second, this paper provides near-optimal guarantees for greedy sampling by introducing the concept of approximate supermodularity and updating the classical greedy bound. It then provides explicit bounds on the approximate supermodularity of the interpolation mean-square error, showing that it can be optimized with worst case guarantees using greedy search even though it is not supermodular. Simulations illustrate the derived bound for different graph models and show an application of graph signal sampling to reduce the complexity of kernel principal component analysis.
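A minimal version of the greedy sampling scheme analyzed here: repeatedly add the sample index that most reduces the interpolation MSE, i.e., the trace of the posterior error covariance. The prior covariance below is a random SPD matrix standing in for a graph-derived one.

```python
import numpy as np

def greedy_sample(Sigma, sigma2, k):
    """Greedily grow a sampling set: at each step add the index that most
    reduces the Bayesian interpolation MSE trace((Sigma^{-1} + S/sigma2)^{-1}),
    where S has ones on the diagonal at sampled indices (noisy point samples)."""
    n = Sigma.shape[0]
    P = np.linalg.inv(Sigma)          # prior precision
    S, mses = [], []
    for _ in range(k):
        best_i, best_mse = None, np.inf
        for i in range(n):
            if i in S:
                continue
            Pi = P.copy()
            Pi[i, i] += 1.0 / sigma2  # rank-1 precision update from sampling i
            mse = np.trace(np.linalg.inv(Pi))
            if mse < best_mse:
                best_i, best_mse = i, mse
        S.append(best_i); mses.append(best_mse)
        P[best_i, best_i] += 1.0 / sigma2
    return S, mses

rng = np.random.default_rng(7)
n = 8
M = rng.standard_normal((n, n))
Sigma = M @ M.T + np.eye(n)           # random SPD prior covariance
S, mses = greedy_sample(Sigma, sigma2=0.5, k=4)
```

The paper's contribution is to show how far this greedy trajectory can be from the optimal sampling set, via the approximate supermodularity of the MSE objective.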

Journal ArticleDOI
TL;DR: This work uses a general feature-extraction operator to represent application-dependent features and proposes a general reconstruction error to evaluate the quality of resampling; by minimizing the error, it obtains a general form of optimal resamplings distribution.
Abstract: To reduce the cost of storing, processing, and visualizing a large-scale point cloud, we propose a randomized resampling strategy that selects a representative subset of points while preserving application-dependent features. The strategy is based on graphs, which can represent underlying surfaces and lend themselves well to efficient computation. We use a general feature-extraction operator to represent application-dependent features and propose a general reconstruction error to evaluate the quality of resampling; by minimizing the error, we obtain a general form of optimal resampling distribution. The proposed resampling distribution is guaranteed to be shift-, rotation- and scale-invariant in the three-dimensional space. We then specify the feature-extraction operator to be a graph filter and study specific resampling strategies based on all-pass, low-pass, high-pass graph filtering and graph filter banks. We validate the proposed methods on three applications: large-scale visualization, accurate registration, and robust shape modeling, demonstrating the effectiveness and efficiency of the proposed resampling methods.
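The high-pass-filtering variant of this strategy can be sketched in a few lines: score each point by the response of a high-pass graph filter, then sample with probability proportional to the score. Here h(L) = L and a chain graph stand in for the paper's filters and k-nearest-neighbor graphs.

```python
import numpy as np

rng = np.random.default_rng(3)
# toy "point cloud": N points in 3-D; a chain graph stands in for a k-NN graph
N = 20
pts = np.cumsum(rng.standard_normal((N, 3)), axis=0)
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# the high-pass graph filter h(L) = L responds strongly where the point set
# varies; resample with probability proportional to the per-point response
response = np.linalg.norm(L @ pts, axis=1)
pi = response / response.sum()
subset = rng.choice(N, size=8, replace=False, p=pi)
```

High-pass scoring concentrates samples where the underlying surface varies (edges, corners), which is why it suits registration; low-pass or all-pass choices trade that for uniform coverage.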

Journal ArticleDOI
TL;DR: This work represents a bridge between matrix factorization, sparse dictionary learning, and sparse autoencoders, and it is shown that the training of the filters is essential to allow for nontrivial signals in the model, and an online algorithm to learn the dictionaries from real data, effectively resulting in cascaded sparse convolutional layers.
Abstract: The recently proposed multilayer convolutional sparse coding (ML-CSC) model, consisting of a cascade of convolutional sparse layers, provides a new interpretation of convolutional neural networks (CNNs). Under this framework, the forward pass in a CNN is equivalent to a pursuit algorithm aiming to estimate the nested sparse representation vectors from a given input signal. Despite having served as a pivotal connection between CNNs and sparse modeling, a deeper understanding of the ML-CSC is still lacking. In this paper, we propose a sound pursuit algorithm for the ML-CSC model by adopting a projection approach. We provide new and improved bounds on the stability of the solution of this pursuit and analyze different practical alternatives for implementing it. We show that the training of the filters is essential to allow for nontrivial signals in the model, and we derive an online algorithm to learn the dictionaries from real data, effectively resulting in cascaded sparse convolutional layers. Last, but not least, we demonstrate the applicability of the ML-CSC model for several applications in an unsupervised setting, providing competitive results. Our work represents a bridge between matrix factorization, sparse dictionary learning, and sparse autoencoders, and we analyze these connections in detail.
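The forward-pass interpretation in the abstract corresponds to a layered soft-thresholding pursuit; with nonnegative codes and a bias, each layer reduces to a ReLU. A sketch with random dense dictionaries (sizes arbitrary, and dense matrices standing in for convolutional ones):

```python
import numpy as np

def soft(x, b):
    """Soft thresholding: the proximal operator of b * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - b, 0.0)

def layered_pursuit(x, dicts, thresholds):
    """Layered soft-thresholding pursuit for an ML-CSC-style model:
    gamma_l = soft(D_l^T gamma_{l-1}, b_l).  With nonnegative codes and a
    bias term, each layer coincides with a ReLU layer of a CNN forward pass."""
    g = x
    for D, b in zip(dicts, thresholds):
        g = soft(D.T @ g, b)
    return g

rng = np.random.default_rng(4)
D1 = rng.standard_normal((20, 30))
D2 = rng.standard_normal((30, 40))
x = rng.standard_normal(20)
g2 = layered_pursuit(x, [D1, D2], [0.5, 0.5])
```

The paper's projection-based pursuit improves on this simple scheme, whose stability degrades as errors propagate through the layers.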

Journal ArticleDOI
TL;DR: A primal-dual algorithm based on the block successive upper-bound minimization method of multipliers (BSUM-M) is developed to deal with the joint design of transmit waveform and receive filter for multiple-input multiple-output radar in the presence of signal-dependent interference.
Abstract: The paper investigates the joint design of transmit waveform and receive filter for multiple-input multiple-output radar in the presence of signal-dependent interference, subject to a peak-to-average-power ratio constraint as well as a waveform similarity constraint. Owing to this signal dependence and these constraints, the formulated optimization problem of output signal-to-interference-plus-noise ratio (SINR) maximization is NP-hard. To this end, an auxiliary variable is first introduced to modify the original problem, and then a primal-dual algorithm based on the block successive upper-bound minimization method of multipliers (BSUM-M) is developed to deal with the resulting problem. Moreover, an active set method is exploited to solve the quadratic programming problem involved in each update procedure of the proposed BSUM-M algorithm. Finally, numerical simulations are performed to demonstrate the superiority of the proposed algorithm over state-of-the-art methods in terms of the output SINR, beampattern, computational complexity, pulse compression, and ambiguity properties.

Journal ArticleDOI
TL;DR: The results show that the derived asymptotic bounds are effective and also apply to the finite-dimensional MIMO, and showed that the ergodic capacity of sub-array antenna selection system scales no faster than double logarithmic rate.
Abstract: Antenna selection is a multiple-input multiple-output (MIMO) technology, which uses radio frequency (RF) switches to select a good subset of antennas. Antenna selection can alleviate the requirement on the number of RF transceivers, thus being attractive for massive MIMO systems. In massive MIMO antenna selection systems, RF switching architectures need to be carefully considered. In this paper, we examine two switching architectures, i.e., full-array and sub-array. By assuming independent and identically distributed Rayleigh flat fading channels, we use asymptotic theory on order statistics to derive the asymptotic upper capacity bounds of massive MIMO channels with antenna selection for both switching architectures in the large-scale limit. We also use the derived bounds to further derive the upper bounds of the ergodic achievable spectral efficiency considering the channel state information (CSI) acquisition. It is also shown that the ergodic capacity of the sub-array antenna selection system scales no faster than a double logarithmic rate. In addition, optimal antenna selection algorithms based on branch-and-bound are proposed for both switching architectures. Our results show that the derived asymptotic bounds are effective and also apply to finite-dimensional MIMO. The CSI acquisition is one of the main limits for the massive MIMO antenna selection systems in time-variant channels. The proposed optimal antenna selection algorithms are much faster than exhaustive-search-based antenna selection, e.g., a 1000× speedup observed in the large-scale system. Interestingly, the full-array and sub-array systems have very close performance, which is validated by their exact capacities and their close upper bounds on capacity.
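Exhaustive full-array antenna selection, the baseline the branch-and-bound algorithms accelerate, is a search over row subsets of the channel matrix for the subset maximizing capacity. A small sketch (dimensions and SNR are arbitrary):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
M, Nt, k, snr = 8, 2, 3, 10.0     # rx antennas, tx antennas, selected rx antennas
# i.i.d. Rayleigh flat-fading channel, unit average power per entry
H = (rng.standard_normal((M, Nt)) + 1j * rng.standard_normal((M, Nt))) / np.sqrt(2)

def cap(rows):
    """Capacity log2 det(I + snr * Hs Hs^H) of the selected sub-channel."""
    Hs = H[list(rows)]
    return np.linalg.slogdet(np.eye(len(rows)) + snr * Hs @ Hs.conj().T)[1] / np.log(2)

# exhaustive search over all C(M, k) antenna subsets
best = max(combinations(range(M), k), key=cap)
```

The number of subsets grows combinatorially with M, which is why exhaustive search becomes untenable at massive-MIMO scale and branch-and-bound (reported 1000× faster at large scale) is needed.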

Journal ArticleDOI
TL;DR: This paper shows that the decentralized algorithms DGD and Prox-DGD, when run with diminishing (or constant) step sizes, converge to a consensus stationary solution (or a neighborhood of one) under standard regularity assumptions, even in the nonconvex setting.
Abstract: Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been proposed for convex consensus optimization. However, our understanding of the behavior of decentralized algorithms for nonconvex consensus optimization is more limited. When we lose convexity, we cannot hope that our algorithms always return global solutions, though they sometimes still do. Somewhat surprisingly, the decentralized consensus algorithms DGD and Prox-DGD retain most other properties that are known in the convex setting. In particular, when diminishing (or constant) step sizes are used, we can prove convergence to a (or a neighborhood of a) consensus stationary solution under standard regularity assumptions. It is worth noting that Prox-DGD can handle nonconvex nonsmooth functions if their proximal operators can be computed. Such functions include SCAD, MCP, and $\ell _q$ quasi-norms, $q\in [0,1)$ . Similarly, Prox-DGD can handle constraint sets that are nonconvex but admit an easy projection. To establish these properties, we have to introduce a completely different line of analysis, as well as modify existing proofs that were used in the convex setting.
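For a smooth objective, the DGD iteration discussed here is x_{k+1} = W x_k − α_k ∇f(x_k) with a doubly stochastic mixing matrix W. On a convex toy problem (averaging scalars over three agents; the mixing matrix and step schedule are illustrative choices), a diminishing step drives the agents to consensus at the global minimizer:

```python
import numpy as np

def dgd(a, W, alpha, iters):
    """Decentralized gradient descent for min_x sum_i (x - a_i)^2 / 2.
    Each agent mixes its value with its neighbors' (W @ x) and takes a local
    gradient step; with a diminishing step size the iterates reach consensus
    at the network-wide average of a."""
    x = np.zeros_like(a)
    for k in range(iters):
        step = alpha / np.sqrt(k + 1.0)   # diminishing step size
        x = W @ x - step * (x - a)        # local gradient of f_i is (x_i - a_i)
    return x

a = np.array([1.0, 2.0, 3.0])
W = 0.25 * np.ones((3, 3)) + 0.25 * np.eye(3)   # doubly stochastic mixing matrix
x = dgd(a, W, alpha=0.5, iters=3000)
```

With a constant step size the same iteration only reaches a neighborhood of consensus whose radius scales with the step, matching the dichotomy stated in the abstract.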

Journal ArticleDOI
TL;DR: A novel algorithm to reconstruct a sparse signal from a small number of magnitude-only measurements, SPARTA is a simple yet effective, scalable, and fast sparse PR solver that is robust against additive noise of bounded support.
Abstract: This paper develops a novel algorithm, termed SPARse Truncated Amplitude flow (SPARTA), to reconstruct a sparse signal from a small number of magnitude-only measurements. It deals with what is also known as sparse phase retrieval (PR), which is NP-hard in general and emerges in many science and engineering applications. Upon formulating sparse PR as an amplitude-based nonconvex optimization task, SPARTA works iteratively in two stages: In stage one, the support of the underlying sparse signal is recovered using an analytically well-justified rule, and subsequently a sparse orthogonality-promoting initialization is obtained via power iterations restricted on the support; and in the second stage, the initialization is successively refined by means of hard thresholding based gradient-type iterations. SPARTA is a simple yet effective, scalable, and fast sparse PR solver. On the theoretical side, for any $n$ -dimensional $k$ -sparse ( $k\ll n$ ) signal $\boldsymbol {x}$ with minimum (in modulus) nonzero entries on the order of $(1/\sqrt{k})\Vert \boldsymbol {x}\Vert _2$ , SPARTA recovers the signal exactly (up to a global unimodular constant) from about $k^2\log n$ random Gaussian measurements with high probability. Furthermore, SPARTA incurs computational complexity on the order of $k^2n\log n$ with total runtime proportional to the time required to read the data, which improves upon the state of the art by at least a factor of $k$ . Finally, SPARTA is robust against additive noise of bounded support. Extensive numerical tests corroborate markedly improved recovery performance and speedups of SPARTA relative to existing alternatives.
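Two ingredients of SPARTA are simple to isolate: the hard-thresholding operator used in the refinement stage, and a stage-one support-scoring rule. The scoring below is a simplified version of the paper's rule (score coordinate i by the empirical correlation between y_m^2 and A_{m,i}^2), and the problem sizes are arbitrary.

```python
import numpy as np

def hard_threshold(z, k):
    """T_k: keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-k:]
    out[keep] = z[keep]
    return out

def support_rule(A, y, k):
    """Simplified stage-one rule: score coordinate i by sum_m y_m^2 A_{m,i}^2;
    for Gaussian A the expected score is inflated by 2*x_i^2 on the true
    support, so the top-k scores reveal it given enough measurements."""
    scores = (y**2) @ (A**2)
    return set(np.argsort(scores)[-k:].tolist())

rng = np.random.default_rng(6)
n, m, k = 20, 2000, 2
x = np.zeros(n); x[3], x[7] = 10.0, -10.0        # planted 2-sparse signal
A = rng.standard_normal((m, n))
y = np.abs(A @ x)                                # magnitude-only measurements
est_support = support_rule(A, y, k)
```

In the full algorithm the estimated support seeds a restricted power-iteration initialization, and hard-thresholded gradient steps on the amplitude-based loss then refine the estimate.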