
Showing papers by "Giuseppe Caire published in 2017"


Journal ArticleDOI
TL;DR: In this article, the authors considered the canonical shared link caching network and provided a comprehensive characterization of the order-optimal rate for all regimes of the system parameters, as well as an explicit placement and delivery scheme achieving order-optimal rates.
Abstract: We consider the canonical shared link caching network formed by a source node, hosting a library of $m$ information messages (files), connected via a noiseless multicast link to $n$ user nodes, each equipped with a cache of size $M$ files. Users request files independently at random according to an a priori known demand distribution q. A coding scheme for this network consists of two phases: cache placement and delivery. The cache placement is a mapping of the library files onto the user caches that can be optimized as a function of the demand statistics, but is agnostic of the actual demand realization. After the user demands are revealed, during the delivery phase the source sends a codeword (a function of the library files, cache placement, and demands) to the users, such that each user retrieves its requested file with arbitrarily high probability. The goal is to minimize the average transmission length of the delivery phase, referred to as rate (expressed in channel symbols per file). In the case of deterministic demands, the optimal min-max rate has been characterized within a constant multiplicative factor, independent of the network parameters. The case of random demands was previously addressed by applying the order-optimal min-max scheme separately within groups of files requested with similar probability. However, no complete characterization of order-optimality was previously provided for random demands under the average rate performance criterion. In this paper, we consider the random demand setting and, for the special yet relevant case of a Zipf demand distribution, we provide a comprehensive characterization of the order-optimal rate for all regimes of the system parameters, as well as an explicit placement and delivery scheme achieving order-optimal rates. We also present numerical results that confirm the superiority of our scheme with respect to previously proposed schemes for the same setting.

203 citations
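The two quantities that frame this result can be sketched numerically: the Zipf demand distribution over the library, and the classical (deterministic-demand) min-max coded caching rate that order-optimal schemes are benchmarked against. A minimal illustrative sketch, with my own function names rather than anything from the paper:

```python
def zipf_pmf(m, alpha):
    """Zipf popularity over a library of m files: q_f proportional to f^(-alpha)."""
    weights = [f ** (-alpha) for f in range(1, m + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def min_max_rate(n, m, M):
    """Worst-case delivery rate n(1 - M/m) / (1 + nM/m) of the classical
    coded caching scheme: n users, library of m files, cache size M files."""
    if M >= m:
        return 0.0
    return n * (1 - M / m) / (1 + n * M / m)
```

For instance, with n = 20 users, m = 100 files, and M = 20, the coded rate is 3.2 file transmissions versus n(1 - M/m) = 16 for naive unicast of the uncached parts.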


Journal ArticleDOI
TL;DR: This work indicates that OAM multiplexing and conventional spatial multiplexing can be simultaneously utilized to provide design flexibility and performance enhancement in line-of-sight wireless communications.
Abstract: Line-of-sight wireless communications can benefit from the simultaneous transmission of multiple independent data streams through the same medium in order to increase system capacity. A common approach is to use conventional spatial multiplexing with spatially separated transmitter/receiver antennas, for which inter-channel crosstalk is reduced by employing multiple-input-multiple-output (MIMO) signal processing at the receivers. Another fairly recent approach to transmitting multiple data streams is to use orbital-angular-momentum (OAM) multiplexing, which employs the orthogonality among OAM beams to minimize inter-channel crosstalk and enable efficient (de)multiplexing. In this paper, we explore the potential of utilizing both of these multiplexing techniques to provide system design flexibility and performance enhancement. We demonstrate a 16 Gbit/s millimeter-wave link using OAM multiplexing combined with conventional spatial multiplexing over a short link distance of 1.8 meters (shorter than the Rayleigh distance). Specifically, we implement a spatial multiplexing system with a $2\times 2$ antenna aperture architecture, in which each transmitter aperture contains two multiplexed 4 Gbit/s data-carrying OAM beams. MIMO-based signal processing is used at the receiver to mitigate channel interference. Our experimental results show performance improvements for all channels after MIMO processing, with the bit-error rate of each channel below the forward error correction limit of $3.8\times 10^{-3}$. We also simulate the capacity for both the $4\times 4$ MIMO system and the $2\times 2$ MIMO with OAM multiplexing. Our work indicates that OAM multiplexing and conventional spatial multiplexing can be simultaneously utilized to provide design flexibility. The combination of these two approaches can potentially enhance system capacity given a fixed aperture area of the transmitter/receiver (when the link distance is within a few Rayleigh distances).

144 citations


Journal ArticleDOI
TL;DR: Efficient subspace estimation algorithms are developed that require sampling only m = O(2√M) antennas and, for a given p ≪ M, return a p-dim beamformer (subspace) whose performance is comparable with that of the best p-dim beamformer designed from full knowledge of the exact channel covariance matrix.
Abstract: Massive MIMO is a variant of a multiuser MIMO (Multi-Input Multi-Output) system, where the number of base-station antennas $M$ is very large and generally much larger than the number of spatially multiplexed data streams. Unfortunately, the front-end A/D conversion necessary to drive hundreds of antennas, with a signal bandwidth of 10 to 100 MHz, requires very large sampling bit-rates and power consumption. To reduce complexity, Hybrid Digital-Analog architectures have been proposed. Our work in this paper is motivated by one such scheme, named Joint Spatial Division and Multiplexing (JSDM), where the downlink precoder (resp., uplink linear receiver) is split into the product of a baseband linear projection (digital) and an RF reconfigurable beamforming network (analog), such that only $m \ll M$ A/D converters and RF chains are needed. In JSDM, users are grouped according to the similarity of their signal subspaces, and these groups are separated by the analog beamforming stage. Further multiplexing gain in each group is achieved using the digital precoder. Therefore, it is apparent that extracting the signal subspace of the $M$-dim channel vectors from snapshots of $m$-dim projections, with $m \ll M$, plays a fundamental role in JSDM implementation. In this paper, we develop efficient subspace estimation algorithms that require sampling only $m=O(2\sqrt{M})$ antennas and, for a given $p \ll M$, return a $p$-dim beamformer (subspace) whose performance is comparable with that of the best $p$-dim beamformer designed from full knowledge of the exact channel covariance matrix. We assess the performance of our proposed estimators both analytically and empirically via numerical simulations.

113 citations
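The m = O(2√M) sampling budget rests on a counting argument: for a uniform linear array the channel covariance is Toeplitz, hence determined by the autocovariance at antenna-index lags 0..M-1, and a "ruler" of roughly 2√M antenna positions already realizes every lag as a pairwise difference. The sketch below shows one such (non-minimal) covering ruler; this is my own construction for illustration, assuming a ULA, not the paper's estimator:

```python
import math

def covering_ruler(M):
    """Antenna positions whose pairwise differences cover all lags 0..M-1.
    Construction: blocks {0..a} and {2a, 3a, ..., qa} with a ~ sqrt(M).
    Any lag d <= qa can be written as k*a - r with r in 0..a, so the set of
    differences covers [0, a] and [a, qa] >= [0, M-1]. Size is about 2*sqrt(M)."""
    a = max(1, math.isqrt(M - 1)) if M > 1 else 1
    q = -(-(M - 1) // a)                       # ceil((M-1)/a)
    positions = set(range(a + 1)) | {k * a for k in range(2, q + 1)}
    return sorted(positions)

def differences(positions):
    """All antenna-index lags realized by a set of positions."""
    return {abs(p - r) for p in positions for r in positions}
```

For M = 100 antennas, the ruler has only 20 positions yet realizes every lag from 0 to 99, so the full Toeplitz covariance can in principle be filled in from those antennas alone.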


Journal ArticleDOI
TL;DR: The main design concepts when integrating mmWave RANs into 5G systems are described, considering aspects such as spectrum, architecture, and backhauling/fronthauling.
Abstract: Millimeter-wave frequencies between 6 and 100 GHz provide orders of magnitude larger spectrum than current cellular allocations and allow usage of large numbers of antennas for exploiting beamforming and spatial multiplexing gains. In this article, we describe the main design concepts when integrating mmWave RANs into 5G systems, considering aspects such as spectrum, architecture, and backhauling/fronthauling. The corresponding RRM challenges, extended RRM functionalities for 5G mmWave RAN, and RRM splits are addressed. Finally, based on these discussions, a framework is proposed that allows joint backhaul and access operation for 5G mmWave RAN, which we envisage as one of the key innovative technologies in 5G. The proposed framework consists of a joint scheduling and resource allocation algorithm to improve resource utilization efficiency with low computational complexity and to fully exploit spatial multiplexing gain for fulfilling user demands.

77 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a low-complexity algorithm that uses the received UL wideband pilot snapshots in an observation window comprising several coherence blocks (CBs) to obtain an estimate of the angle-delay power spread function (PSF) of the received signal.
Abstract: We consider a massive MIMO system based on time division duplexing (TDD) and channel reciprocity, where the base stations (BSs) learn the channel vectors of their users via the pilots transmitted by the users in the uplink (UL). It is well known that, in the limit of a very large number of BS antennas, the system performance is limited by pilot contamination, due to the fact that the same set of orthogonal pilots is reused in multiple cells. In the regime of a moderately large number of antennas, another source of degradation is channel interpolation, because the pilot signal of each user probes only a limited number of orthogonal frequency division multiplexing (OFDM) subcarriers, and the channel must be interpolated over the other subcarriers, where no pilot symbol is transmitted. In this paper, we propose a low-complexity algorithm that uses the received UL wideband pilot snapshots in an observation window comprising several coherence blocks (CBs) to obtain an estimate of the angle-delay power spread function (PSF) of the received signal. This is generally given by the sum of the angle-delay PSF of the desired user and the angle-delay PSFs of the copilot users, i.e., the users reusing the same pilot dimensions in other cells/sectors. We propose supervised and unsupervised clustering algorithms to decompose the estimated PSF and isolate the part corresponding to the desired user only. We use this decomposition to obtain an estimate of the covariance matrix of the user wideband channel vector, which we exploit to decontaminate the desired user channel estimate by applying a minimum mean squared error (MMSE) smoothing filter, i.e., the optimal channel interpolator in the MMSE sense. We also propose an effective low-complexity approximation/implementation of this smoothing filter. We use numerical simulations to assess the performance of our proposed method, and compare it with other recently proposed schemes that use the same idea of separability of users in the angle-delay domain.

74 citations
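The final interpolation step mentioned above is the standard Wiener/MMSE filter: with channel covariance C over the N subcarriers and noisy observations on the pilot subcarriers P, the estimate is C[:,P] (C[P,P] + sigma^2 I)^{-1} y_P. A generic sketch of that filter, assuming the covariance is already known (i.e., after the paper's PSF-based decontamination, which is not reproduced here):

```python
import numpy as np

def mmse_interpolate(C, pilot_idx, y_pilots, noise_var):
    """MMSE (Wiener) estimate of a zero-mean channel on all N subcarriers
    from noisy pilot observations y_P = h[P] + noise, given the N x N
    channel covariance matrix C."""
    P = list(pilot_idx)
    Cpp = C[np.ix_(P, P)] + noise_var * np.eye(len(P))   # pilot-pilot covariance
    return C[:, P] @ np.linalg.solve(Cpp, y_pilots)      # interpolate to all N
```

With an exponential correlation model C[i, j] = rho^|i-j|, the filter reproduces the observations at the pilot positions as the noise variance goes to zero and smoothly interpolates in between.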


Journal ArticleDOI
TL;DR: It is shown that a decentralized random caching strategy with uniform probability over the library yields the optimal per-node capacity scaling of $\Theta (\sqrt {M/m})$ for heavy-tailed popularity distributions, thus yielding throughput scalability with the network size.
Abstract: We consider a wireless device-to-device network, where $n$ nodes are uniformly distributed at random over the network area. We let each node cache $M$ files from a library of size $m \geq M$. Each node in the network requests a file from the library independently at random, according to a popularity distribution, and is served by other nodes having the requested file in their local cache via (possibly) multihop transmissions. Under the classical “protocol model” of wireless networks, we characterize the optimal per-node capacity scaling law for a broad class of heavy-tailed popularity distributions, including Zipf distributions with exponent less than one. In the parameter regime of interest, i.e., $m=o(nM)$, we show that a decentralized random caching strategy with uniform probability over the library yields the optimal per-node capacity scaling of $\Theta (\sqrt {M/m})$ for heavy-tailed popularity distributions. This scaling is constant with $n$, thus yielding throughput scalability with the network size. Furthermore, the multihop capacity scaling can be significantly better than for the case of single-hop caching networks, for which the per-node capacity is $\Theta (M/m)$. The multihop capacity scaling law can be further improved for a Zipf distribution with exponent larger than some threshold greater than one, by using decentralized random caching uniformly across a subset of the most popular files in the library. Namely, ignoring a subset of less popular files (i.e., effectively reducing the size of the library) can significantly improve the throughput scaling while guaranteeing that all nodes will be served with high probability as $n$ increases.

72 citations
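The gap between the two scaling laws is easy to see numerically: sqrt(M/m) dominates M/m whenever the cache-to-library ratio M/m is below one. A one-function sketch of the order-of-growth expressions (constants dropped, purely illustrative; the function name is mine):

```python
import math

def per_node_throughput(M, m, multihop=True):
    """Order-of-growth per-node throughput from the paper's scaling laws:
    Theta(sqrt(M/m)) with multihop relaying, Theta(M/m) with single-hop
    delivery. Constants are dropped, so only relative behavior is meaningful."""
    ratio = M / m
    return math.sqrt(ratio) if multihop else ratio
```

For M = 10 cached files out of m = 1000, multihop gives order 0.1 versus 0.01 for single-hop, a 10x order-of-growth advantage that widens as the library grows.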


Proceedings ArticleDOI
01 Jun 2017
TL;DR: The main message of the paper is that the schemes performing well in terms of DoF may not be directly appropriate for intermediate SNR regimes, and modified schemes should be employed.
Abstract: In this paper we consider a single-cell downlink scenario where a multiple-antenna base station delivers contents to multiple cache-enabled user terminals. Based on the multicasting opportunities provided by the so-called Coded Caching technique, we investigate three delivery approaches. Our baseline scheme employs the coded caching technique on top of max-min fair multicasting. The second one consists of a joint design of Zero-Forcing (ZF) and coded caching, where the coded chunks are formed in the signal domain (complex field). The third scheme is similar to the second one, with the difference that the coded chunks are formed in the data domain (finite field). We derive closed-form rate expressions; our results suggest that the latter two schemes surpass the first one in terms of Degrees of Freedom (DoF). However, in the intermediate SNR regime, forming coded chunks in the signal domain results in a power loss that deteriorates the throughput of the second scheme. The main message of our paper is that schemes performing well in terms of DoF may not be directly appropriate for intermediate SNR regimes, and modified schemes should be employed.

51 citations


Posted Content
TL;DR: The results convey the important message that although directly translating schemes from network coding ideas to wireless networks may work well at high SNR values, careful modifications need to be considered for acceptable finite SNR performance.
Abstract: We investigate the potential of applying the coded caching paradigm in wireless networks. In order to do this, we investigate physical layer schemes for downlink transmission from a multi-antenna transmitter to several cache-enabled users. As the baseline scheme we consider employing coded caching on top of max-min fair multicasting, which is shown to be far from optimal at high SNR values. Our first proposed scheme, which is near-optimal in terms of DoF, is the natural extension of multiserver coded caching to Gaussian channels. As we demonstrate, its finite SNR performance is not satisfactory, and thus we propose a new scheme in which the linear combination of messages is implemented in the finite field domain, and the one-shot precoding for the MISO downlink is implemented in the complex field. While this modification results in the same near-optimal DoF performance, we show that it leads to significant performance improvement at finite SNR. Finally, we extend our scheme to the previously considered cache-enabled interference channels, and moreover, we provide an ergodic rate analysis of our scheme. Our results convey the important message that although directly translating schemes from network coding ideas to wireless networks may work well at high SNR values, careful modifications need to be considered for acceptable finite SNR performance.

49 citations


Journal ArticleDOI
TL;DR: A new type of diversity is introduced, referred to as transmit correlation diversity, which captures the fact that the channel vectors of different users may have different channel covariance matrices spanning often nearly mutually orthogonal subspaces, and can yield significant capacity gains in all regimes of interest.
Abstract: Correlation across transmit antennas in multiple-input multiple-output (MIMO) systems has been studied in various scenarios and has been shown to be detrimental or to provide benefits depending on the particular system and underlying assumptions. In this paper, we investigate the effect of transmit correlation on the capacity of the Gaussian MIMO broadcast channel, with a particular interest in the large-scale array (or massive MIMO) regime. To this end, we introduce a new type of diversity, referred to as transmit correlation diversity, which captures the fact that the channel vectors of different users may have different channel covariance matrices, often spanning nearly mutually orthogonal subspaces. In particular, when taking the cost of downlink training properly into account, transmit correlation diversity can yield significant capacity gains in all regimes of interest. Our analysis shows that the system multiplexing gain can be increased by a factor up to $\lfloor M/r \rfloor$, where $M$ is the number of antennas and $r \le M$ is the common rank of the users' channel covariance matrices, with respect to standard schemes that are agnostic of the transmit correlation diversity and treat the channels as if they were isotropically distributed. Thus, this new form of diversity reveals itself as a valuable “new resource” in multiuser communications.

44 citations


Posted Content
TL;DR: An information-theoretic converse bound on the average load under an arbitrary file popularity distribution is presented and an equivalent linear optimization problem with K+1 variables under the uniform file popularity is obtained.
Abstract: A centralized coded caching scheme has been proposed by Maddah-Ali and Niesen to reduce the worst-case load of a network consisting of a server with access to N files and connected through a shared link to K users, each equipped with a cache of size M. However, this centralized coded caching scheme is not able to take advantage of a non-uniform, possibly very skewed, file popularity distribution. In this work, we consider the same network setting but aim to reduce the average load under an arbitrary (known) file popularity distribution. First, we consider a class of centralized coded caching schemes utilizing general uncoded placement and a specific coded delivery strategy, which are specified by a general file partition parameter. Then, we formulate the coded caching design optimization problem over the considered class of schemes, with N2^K variables, to minimize the average load by optimizing the file partition parameter under an arbitrary file popularity. Furthermore, we show that the optimization problem is convex, and the resulting optimal solution generally improves upon known schemes. Next, we analyze structural properties of the optimization problem to obtain design insights and reduce the complexity. Specifically, we obtain an equivalent linear optimization problem with (K+1)N variables under an arbitrary file popularity and an equivalent linear optimization problem with K+1 variables under the uniform file popularity. Under the uniform file popularity, we also obtain the closed-form optimal solution, which corresponds to Maddah-Ali and Niesen's centralized coded caching scheme. Finally, we present an information-theoretic converse bound on the average load under an arbitrary file popularity.

44 citations


Posted Content
TL;DR: In this paper, a single-cell downlink scenario where a multiple-antenna base station delivers contents to multiple cache-enabled user terminals is considered, and three delivery approaches are investigated.
Abstract: In this paper we consider a single-cell downlink scenario where a multiple-antenna base station delivers contents to multiple cache-enabled user terminals. Based on the multicasting opportunities provided by the so-called Coded Caching technique, we investigate three delivery approaches. Our baseline scheme employs the coded caching technique on top of max-min fair multicasting. The second one consists of a joint design of Zero-Forcing (ZF) and coded caching, where the coded chunks are formed in the signal domain (complex field). The third scheme is similar to the second one, with the difference that the coded chunks are formed in the data domain (finite field). We derive closed-form rate expressions; our results suggest that the latter two schemes surpass the first one in terms of Degrees of Freedom (DoF). However, in the intermediate SNR regime, forming coded chunks in the signal domain results in a power loss that deteriorates the throughput of the second scheme. The main message of our paper is that schemes performing well in terms of DoF may not be directly appropriate for intermediate SNR regimes, and modified schemes should be employed.

Proceedings ArticleDOI
25 Jun 2017
TL;DR: It is shown that both relaying à la Cover–El Gamal, i.e., compress-and-forward with joint decompression and decoding, and “noisy network coding” are optimal, and a single-letter characterization of the capacity region of this model is established for a class of discrete memoryless channels.
Abstract: We study transmission over a network in which users send information to a remote destination through relay nodes that are connected to the destination via finite-capacity error-free links, i.e., a cloud radio access network. The relays are constrained to operate without knowledge of the users' codebooks, i.e., they perform oblivious processing. The destination, or central processor, however, is informed about the users' codebooks. We establish a single-letter characterization of the capacity region of this model for a class of discrete memoryless channels in which the outputs at the relay nodes are independent given the users' inputs. We show that both relaying à la Cover–El Gamal, i.e., compress-and-forward with joint decompression and decoding, and “noisy network coding” are optimal. The proof of the converse part establishes, and utilizes, connections with the Chief Executive Officer (CEO) source coding problem under the logarithmic loss distortion measure. Extensions to general discrete memoryless channels are also investigated. In this case, we establish inner and outer bounds on the capacity region. For memoryless Gaussian channels within the studied class of channels, we characterize the capacity under Gaussian channel inputs.

Proceedings ArticleDOI
19 Mar 2017
TL;DR: The scheme proposed in this paper can significantly enhance network throughput by exploiting space-division multiple access, i.e., allowing non-conflicting flows to be transmitted simultaneously and achieves significant gain over benchmark schemes in terms of user throughput.
Abstract: Millimeter wave (mm-wave) frequencies provide orders of magnitude larger spectrum than current cellular allocations and allow usage of high-dimensional antenna arrays for exploiting beamforming and spatial multiplexing. This paper addresses the problem of joint scheduling and radio resource allocation optimization in mm-wave heterogeneous networks where mm-wave small cells are densely deployed underlaying the conventional homogeneous macro cells. Furthermore, mm-wave small cells operate in time division duplexing mode and share the same spectrum and air interface for backhaul and access links. The scheme proposed in this paper can significantly enhance network throughput by exploiting space-division multiple access, i.e., allowing non-conflicting flows to be transmitted simultaneously. The optimization problem of maximizing network throughput is formulated as a mixed integer nonlinear programming problem. To find a practical solution, this is decomposed into three steps: concurrent transmission scheduling, time resource allocation, and power allocation. A maximum independent set based algorithm is developed for concurrent transmission scheduling to improve resource utilization efficiency with low computational complexity. Through extensive simulations, we demonstrate that the proposed algorithm achieves significant gains over benchmark schemes in terms of user throughput.
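Concurrent transmission scheduling of this kind operates on a conflict graph whose vertices are flows and whose edges join flows that cannot transmit simultaneously; the scheduler then seeks a large independent set. The paper's algorithm is not reproduced here; the following is only the generic greedy independent-set subroutine that such schedulers build on, with my own data-structure choices:

```python
def greedy_independent_set(conflicts):
    """conflicts: dict mapping each flow to the set of flows it conflicts with.
    Greedily admit flows with the fewest conflicts first; a flow is admitted
    only if none of its conflicting neighbors has already been admitted, so
    the returned set is an independent set of the conflict graph."""
    admitted = set()
    for flow in sorted(conflicts, key=lambda f: len(conflicts[f])):
        if conflicts[flow].isdisjoint(admitted):
            admitted.add(flow)
    return admitted
```

Admitted flows can then share the same time slot; the lowest-degree-first order is a common heuristic and is not claimed to be the paper's specific rule.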

Journal ArticleDOI
TL;DR: This paper presents multipath components (MPCs) tracking results from a channel sounder measurement with 1-GHz bandwidth at a carrier frequency of 5.7 GHz and describes in detail the applied algorithms and a tracking performance evaluation based on artificial channels and on measurement data from a tunnel scenario.
Abstract: A detailed understanding of the dynamic processes of vehicular radio channels is crucial for their realistic modeling. In this paper, we present multipath component (MPC) tracking results from a channel sounder measurement with 1-GHz bandwidth at a carrier frequency of 5.7 GHz. We describe the applied algorithms in detail and perform a tracking performance evaluation based on artificial channels and on measurement data from a tunnel scenario. The tracking performance of the proposed algorithm is comparable to that of the state-of-the-art Gaussian mixture probability hypothesis density filter, but with significantly lower complexity. The fluctuation of the measured channel gain is followed very well by the proposed tracking algorithm, with a power loss of only 2.5 dB. We present statistical distributions for the number of MPCs and the birth/death rate. The applied algorithms and tracking results can be used to enhance the development of geometry-based channel models.

Proceedings Article
15 Mar 2017
TL;DR: The proposed calibration methods can be realized via an internal calibration interconnect network which reduces transceiver interconnection effort and clutter to a minimum and is thus very well suited for implementation in highly compact large scale SDR systems with hundreds of transceivers.
Abstract: This paper presents internal calibration methods that are especially suited for implementation in software-defined radio hardware platforms integrating a large number of transceivers. The procedures provide reciprocity calibration of the system for, e.g., large-scale MU-MIMO TDD communications, and full absolute phase and amplitude coherence between transmitters and receivers for, e.g., MIMO channel sounding, direction-of-arrival estimation, or other smart antenna algorithms. The advantages of completely internal calibration include increased robustness, because no interference or reflections are picked up during calibration; steady, repeatable calibration performance independent of the connected antennas; and much more reliable and numerically stable estimated calibration coefficients than other techniques based on over-the-air pilot exchange. The proposed calibration methods can be realized via an internal calibration interconnect network which reduces transceiver interconnection effort and clutter to a minimum and is thus very well suited for implementation in highly compact large-scale SDR systems with hundreds of transceivers.

Proceedings ArticleDOI
21 May 2017
TL;DR: A class of multiuser MIMO schemes is presented that relies on uplink training from the user terminals and on uplink/downlink channel reciprocity, and can yield substantial spatial multiplexing and ergodic user-rate improvements with respect to their orthogonal-training counterparts.
Abstract: We consider a single-cell scenario involving a single base station (BS) with a massive array serving multi-antenna terminals in the downlink of a mmWave channel. We present a class of multiuser MIMO schemes, which rely on uplink training from the user terminals and on uplink/downlink channel reciprocity. The BS employs virtual sector-based processing, according to which user-channel estimation and data transmission are performed in parallel over non-overlapping angular sectors. The uplink training schemes we consider are non-orthogonal, that is, we allow multiple users to transmit pilots on the same pilot dimension (thereby potentially interfering with one another). Elementary processing allows each sector to determine the subset of user channels that can be resolved on the sector (effectively pilot contamination free) and, thus, the subset of users that can be served by the sector. This allows resolving multiple users on the same pilot dimension at different sectors, thereby increasing the overall multiplexing gains of the system. Our analysis and simulations reveal that, by using appropriately designed directional training beams at the user terminals, the sector-based transmission schemes we present can yield substantial spatial multiplexing and ergodic user-rate improvements with respect to their orthogonal-training counterparts.

Posted Content
TL;DR: A class of centralized coded caching schemes consisting of a general content placement strategy specified by a file partition parameter, enabling efficient and flexible content placement, and a specific content delivery strategy, enabling load reduction by exploiting common requests of different users are presented.
Abstract: We consider the classical coded caching problem as defined by Maddah-Ali and Niesen, where a server with a library of $N$ files of equal size is connected to $K$ users via a shared error-free link. Each user is equipped with a cache with a capacity of $M$ files. The goal is to design a static content placement and delivery scheme such that the average load over the shared link is minimized. We first present a class of centralized coded caching schemes consisting of a general content placement strategy specified by a file partition parameter, enabling efficient and flexible content placement, and a specific content delivery strategy, enabling load reduction by exploiting common requests of different users. For the proposed class of schemes, we consider two cases for the optimization of the file partition parameter, depending on whether a large subpacketization level is allowed or not. In the case of an unrestricted subpacketization level, we formulate the coded caching optimization in order to minimize the average load under an arbitrary file popularity. A direct formulation of the problem involves $N2^K$ variables. By imposing some additional conditions, the problem is reduced to a linear program with $N(K+1)$ variables under an arbitrary file popularity and with $K+1$ variables under the uniform file popularity. We can recover Yu et al.'s optimal scheme for the uniform file popularity as an optimal solution of our problem. When a low subpacketization level is desired, we introduce a subpacketization level constraint involving the $\ell_0$ norm for each file. Again, by imposing the same additional conditions, we can simplify the problem to a difference of two convex functions (DC) problem with $N(K+1)$ variables that can be efficiently solved.

Proceedings ArticleDOI
01 Jun 2017
TL;DR: This work proposes a caching strategy based on deterministic assignment of MDS-coded packets of the library files, and a coded multicast delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands.
Abstract: We consider a wireless Device-to-Device (D2D) caching network, where users make arbitrary requests from a library of files and have pre-fetched (cached) information on their devices, subject to a per-node storage capacity constraint. The network is assumed to obey the “protocol model”, widely considered in the wireless network literature. Unlike other related works, which either restrict the communication to single-hop, or assume entire file caching, here we consider both multi-hop transmission and fully general caching strategies, including file subpacketization. We propose a caching strategy based on deterministic assignment of MDS-coded packets of the library files, and a coded multicast delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands. We show that our approach can achieve the information theoretic outer bound within a multiplicative constant factor in practical parameter regimes.

Proceedings ArticleDOI
30 Jan 2017
TL;DR: A low-complexity alternating minimization algorithm is developed to recover the target signal from the set of its unlabeled samples, and its behavior for different signal dimensions and numbers of measurements is studied empirically via numerical simulations.
Abstract: In this paper, we study the recovery of a signal from a collection of unlabeled and possibly noisy measurements via a measurement matrix with random i.i.d. Gaussian components. We call the measurements unlabeled since their order is missing, namely, it is not known a priori which elements of the resulting measurements correspond to which row of the measurement matrix. We focus on the special case of ordered measurements, where only a subset of the measurements is kept and the order of the taken measurements is preserved. We identify a duality between this problem and traditional Compressed Sensing, where we show that the unknown support (location of the nonzero elements) of a sparse signal in Compressed Sensing corresponds in a natural way to the unknown location of the measurements kept in unlabeled sensing. While in Compressed Sensing it is possible to recover a sparse signal from an under-determined set of linear equations (fewer equations than the dimension of the signal), successful recovery in unlabeled sensing requires taking more samples than the dimension of the signal. We develop a low-complexity alternating minimization algorithm to recover the target signal from the set of its unlabeled samples. We also study the behavior of the proposed algorithm for different signal dimensions and numbers of measurements empirically via numerical simulations. The results are reminiscent of the phase transition occurring in Compressed Sensing.
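A minimal sketch of such an alternating minimization is given below, under simplifying assumptions of ours: noiseless measurements, a dynamic-programming step for the order-preserving matching, and a single random initialization (so convergence to the true signal is not guaranteed).

```python
import numpy as np

def match_ordered(y, z):
    """DP: pick an order-preserving subset of z (length n) matching y (length k),
    minimizing the squared error; returns (cost, selected row indices)."""
    k, n = len(y), len(z)
    INF = float("inf")
    dp = np.full((k + 1, n + 1), INF)
    dp[0, :] = 0.0
    take = np.zeros((k + 1, n + 1), dtype=bool)
    for i in range(1, k + 1):
        for j in range(i, n + 1):
            c_take = dp[i - 1, j - 1] + (y[i - 1] - z[j - 1]) ** 2
            if c_take < dp[i, j - 1]:
                dp[i, j] = c_take
                take[i, j] = True
            else:
                dp[i, j] = dp[i, j - 1]
    idx, i, j = [], k, n                 # backtrack the chosen rows
    while i > 0:
        if take[i, j]:
            idx.append(j - 1)
            i, j = i - 1, j - 1
        else:
            j -= 1
    return dp[k, n], idx[::-1]

def unlabeled_recovery(y, A, iters=50, seed=0):
    """Alternate: match measurements to rows of A@x, then re-solve least squares."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for _ in range(iters):
        _, idx = match_ordered(y, A @ x)
        x, *_ = np.linalg.lstsq(A[idx], y, rcond=None)
    return x

rng = np.random.default_rng(1)
n, d, k = 30, 3, 12                      # note k > d: more samples than dimension
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
rows = np.sort(rng.choice(n, size=k, replace=False))
y = (A @ x_true)[rows]                   # unlabeled, order-preserved samples
x_hat = unlabeled_recovery(y, A)
print(np.linalg.norm(x_hat - x_true))
```

The matching step mirrors the support-recovery step of Compressed Sensing under the duality described in the abstract.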

Journal ArticleDOI
TL;DR: In this article, the authors present statistical models for the number, birth rate, lifetime, excess delay, and relative Doppler frequency of individual multipath components.
Abstract: Realistic propagation modeling requires a detailed understanding and characterization of the radio channel properties. This paper is based on channel sounder measurements with 1-GHz bandwidth at a carrier frequency of 5.7 GHz and particular tracking methods. We present statistical models for the number, birth rate, lifetime, excess delay, and relative Doppler frequency of individual multipath components. Our findings are drawn from 72 measurement runs in eight relevant vehicular communication scenarios and provide broad insight into the dynamics of the vehicular propagation process.

Posted Content
TL;DR: In this article, the tradeoff between the overhead of message passing and the achievable symmetric DoF region in the information-theoretic sense was investigated in uplink cellular networks with decoded message passing.
Abstract: The topological interference management (TIM) problem studies partially-connected interference networks with no channel state information except for the network topology (i.e., connectivity graph) at the transmitters. In this paper, we consider a similar problem in the uplink cellular networks, while message passing is enabled at the receivers (e.g., base stations), so that the decoded messages can be routed to other receivers via backhaul links to help further improve network performance. For this TIM problem with decoded message passing (TIM-MP), we model the interference pattern by conflict digraphs, connect orthogonal access to the acyclic set coloring on conflict digraphs, and show that one-to-one interference alignment boils down to orthogonal access because of message passing. With the aid of polyhedral combinatorics, we identify the structural properties of certain classes of network topologies where orthogonal access achieves the optimal degrees-of-freedom (DoF) region in the information-theoretic sense. The relation to the conventional index coding with simultaneous decoding is also investigated by formulating a generalized index coding problem with successive decoding as a result of decoded message passing. The properties of reducibility and criticality are also studied, by which we are able to prove the linear optimality of orthogonal access in terms of symmetric DoF for the networks up to four users with all possible network topologies (218 instances). Practical issues of the tradeoff between the overhead of message passing and the achievable symmetric DoF are also discussed, in the hope of facilitating efficient backhaul utilization.
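The connection between orthogonal access and acyclic set coloring can be illustrated with a simple greedy heuristic (our illustrative stand-in, not the polyhedral machinery of the paper): each vertex of the conflict digraph joins the first color class whose induced subdigraph stays acyclic, and the number of classes upper-bounds the number of orthogonal transmission slots.

```python
def has_cycle(adj, verts):
    """DFS check: does the subdigraph induced by `verts` contain a directed cycle?"""
    verts = set(verts)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in verts}

    def dfs(u):
        color[u] = GRAY
        for w in adj.get(u, ()):
            if w in verts:
                if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                    return True
        color[u] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in verts)

def acyclic_set_coloring(adj, verts):
    """Greedy acyclic set coloring of a conflict digraph: each vertex joins the
    first color class that remains acyclic after adding it."""
    classes = []
    for v in verts:
        for cls in classes:
            if not has_cycle(adj, cls | {v}):
                cls.add(v)
                break
        else:
            classes.append({v})
    return classes

# mutual interference (a 2-cycle) forces two slots; a one-way chain needs one,
# because decoded messages can be passed forward along the chain
print(len(acyclic_set_coloring({1: [2], 2: [1]}, [1, 2])))      # 2
print(len(acyclic_set_coloring({1: [2], 2: [3]}, [1, 2, 3])))   # 1
```

The one-way chain needing a single class is exactly the effect of message passing: interference from an already-decoded user can be cancelled rather than avoided.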

Journal ArticleDOI
TL;DR: An overview of the fundamental aspects and of some recent advances in space-time coding (STC) is provided, with the description of families of codes that are optimal with respect to the DMT criterion and have error performance that is very close to the information theoretic limits.
Abstract: This work provides an overview of the fundamental aspects and of some recent advances in space-time coding (STC). Basic information theoretic results on Multiple-Input Multiple-Output (MIMO) fading channels, pertaining to capacity, diversity, and to the optimal Diversity Multiplexing Tradeoff (DMT), are reviewed. The code design for the quasi-static, outage limited, fading channel is recognized as the most challenging and innovative with respect to traditional “Gaussian” coding. Then, a survey of STC constructions is presented. This culminates with the description of families of codes that are optimal with respect to the DMT criterion and have error performance that is very close to the information theoretic limits. The paper concludes with some important recent topics, including open problems in STC design.
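The Alamouti code, the archetypal space-time code reviewed in such surveys, fits in a few lines. This is the standard textbook construction, shown here for a single receive antenna in the noiseless case.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """2x2 Alamouti block: rows are time slots, columns are transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """ML-equivalent linear combining for one receive antenna (known channel)."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1 = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2 = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1, s2

rng = np.random.default_rng(0)
h1, h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)   # flat fading
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)           # QPSK symbols
X = alamouti_encode(s1, s2)
r = X @ np.array([h1, h2])           # received samples at time slots 1 and 2
print(alamouti_decode(r[0], r[1], h1, h2))
```

The combining step turns the 2x1 MISO channel into two decoupled scalar channels with gain |h1|^2 + |h2|^2, which is the full-diversity property the survey discusses.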

Posted Content
TL;DR: In this paper, the authors consider wireless networks of remote radio heads (RRH) with large antenna-arrays, operated in TDD, with uplink (UL) training and channel-reciprocity based downlink (DL) transmission.
Abstract: We consider wireless networks of remote radio heads (RRH) with large antenna-arrays, operated in TDD, with uplink (UL) training and channel-reciprocity based downlink (DL) transmission. To achieve large area spectral efficiencies, we advocate the use of methods that rely on rudimentary scheduling, decentralized operation at each RRH and user-centric DL transmission. A slotted system is assumed, whereby users are randomly scheduled (e.g., via shuffled round robin) in slots and across the limited pilot dimensions per slot. As a result, multiple users in the vicinity of an RRH can simultaneously transmit pilots on the same pilot dimension (and thus interfere with one another). Each RRH performs rudimentary processing of the pilot observations in "sectors". In a sector, the RRH is able to resolve a scheduled user's channel when that user is determined to be the only one among the scheduled users (on the same pilot dimension) with significant received power in the sector. Subsequently, only the subset of scheduled users whose channels are resolved in at least one sector can be served by the system. We consider a spatially consistent evaluation of the area multiplexing gains by means of a Poisson Point Process (PPP) problem formulation where RRHs, blockers, scatterers and scheduled user terminals are all PPPs with individual intensities. Also, we study directional training at the user terminals. Our simulations suggest that, by controlling the intensity of the scheduled user PPP and the user-pilot beam-width, many-fold improvements can be expected in area multiplexing gains with respect to conventional spatial pilot reuse systems.
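The PPP-based evaluation can be reproduced in miniature: a homogeneous PPP is sampled by drawing a Poisson-distributed number of points and placing them uniformly. This is a generic sketch; the intensities and the nearest-RRH statistic below are illustrative, not the paper's exact model.

```python
import numpy as np

def sample_ppp(intensity, area_side, rng):
    """Homogeneous PPP on a square: Poisson count, then uniform locations."""
    n = rng.poisson(intensity * area_side ** 2)
    return rng.uniform(0.0, area_side, size=(n, 2))

rng = np.random.default_rng(0)
rrhs = sample_ppp(intensity=20.0, area_side=1.0, rng=rng)    # RRH density
users = sample_ppp(intensity=50.0, area_side=1.0, rng=rng)   # scheduled users

# distance from each scheduled user to its nearest RRH
d = np.linalg.norm(users[:, None, :] - rrhs[None, :, :], axis=-1).min(axis=1)
print(len(rrhs), len(users), d.mean())
```

Sweeping the scheduled-user intensity in such a simulation is how one would explore the trade-off between pilot collisions and area multiplexing gain described in the abstract.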

Posted Content
TL;DR: The sparse spatial scattering properties of the environment are used to estimate the support of the continuous, frequency-invariant scattering function from UL channel observations; the resulting support estimate is then used to design an efficient DL probing and UL feedback scheme in which the feedback dimension scales proportionally with the sparsity order of the DL channel vectors.
Abstract: Massive Multiple-Input Multiple-Output (massive MIMO) is a variant of multi-user MIMO in which the number of antennas at each Base Station (BS) is very large and typically much larger than the number of users simultaneously served. Massive MIMO can be implemented with Time Division Duplexing (TDD) or Frequency Division Duplexing (FDD) operation. FDD massive MIMO systems are particularly desirable due to their implementation in current wireless networks and their efficiency in situations with symmetric traffic and delay-sensitive applications. However, implementing FDD massive MIMO systems is known to be challenging since it imposes a large feedback overhead in the Uplink (UL) to obtain channel state information for the Downlink (DL). In recent years, a considerable amount of research has been dedicated to developing methods to reduce the feedback overhead in such systems. In this paper, we use the sparse spatial scattering properties of the environment to achieve this goal. The idea is to estimate the support of the continuous, frequency-invariant scattering function from UL channel observations and use this estimate to obtain the support of the DL channel vector via appropriate interpolation. We use the resulting support estimate to design an efficient DL probing and UL feedback scheme in which the feedback dimension scales proportionally with the sparsity order of DL channel vectors. Since the sparsity order is much less than the number of BS antennas in almost all practically relevant scenarios, our method incurs much less feedback overhead compared with the currently proposed methods in the literature, such as those based on compressed-sensing. We use numerical simulations to assess the performance of our probing-feedback algorithm and compare it with these methods.
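The support-estimation idea can be sketched for an on-grid toy case. This is our simplification: a uniform linear array with paths falling exactly on the DFT beam grid, rather than the continuous scattering function treated in the paper.

```python
import numpy as np

M, T = 64, 200                            # BS antennas, UL snapshots
rng = np.random.default_rng(0)
F = np.fft.fft(np.eye(M)) / np.sqrt(M)    # unitary DFT: a grid of angular beams
true_support = [3, 17, 40]                # sparse angular support (on-grid paths)

# UL snapshots: random path gains over a fixed angular support, plus noise
gains = rng.standard_normal((3, T)) + 1j * rng.standard_normal((3, T))
noise = 0.05 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = F[:, true_support] @ gains + noise

R = Y @ Y.conj().T / T                    # UL sample covariance
spectrum = np.real(np.diag(F.conj().T @ R @ F))   # power per angular beam
est = np.flatnonzero(spectrum > 0.1 * spectrum.max())
print(est.tolist())                       # should recover the true support

# feedback dimension scales with the sparsity order, not with M
print(len(est), "coefficients fed back instead of", M)
```

With the support known, DL probing only needs to excite those few beams, which is why the feedback overhead scales with the sparsity order instead of the array size.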

Posted Content
TL;DR: A novel TPE method that outperforms all previously proposed methods in the general non-symmetric case of users with arbitrary antenna correlation, and is significantly simpler and more flexible than previously proposed methods based on deterministic equivalents and free probability in large random matrix theory.
Abstract: In TDD reciprocity-based massive MIMO it is essential to be able to compute the downlink precoding matrix over all OFDM resource blocks within a small fraction of the uplink-downlink slot duration. Early implementations of massive MIMO are limited to the simple Conjugate Beamforming (ConjBF) precoding method, because of this computation latency constraint. However, it has been widely demonstrated by theoretical analysis and system simulation that Regularized Zero-Forcing (RZF) precoding is generally much more effective than ConjBF for a large but practical number of transmit antennas. In order to recover a significant fraction of the gap between ConjBF and RZF while still meeting the very strict computation latency constraints, truncated polynomial expansion (TPE) methods have been proposed. In this paper we present a novel TPE method that outperforms all previously proposed methods in the general non-symmetric case of users with arbitrary antenna correlation. In addition, the proposed method is significantly simpler and more flexible than previously proposed methods based on deterministic equivalents and free probability in large random matrix theory. We consider power allocation with our TPE approach, and show that classical system optimization problems such as min-sum power and max-min rate can be easily solved. Furthermore, we provide a detailed computation latency analysis specifically targeted to a highly parallel FPGA hardware architecture.
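A bare-bones TPE precoder can be built from a scaled Neumann series. This is a generic construction of ours with fixed coefficients, shown only to illustrate why matrix-product-only precoding can approach RZF; it is not the optimized-coefficient method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, alpha, L = 8, 64, 1.0, 16          # users, BS antennas, regularization, TPE order
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

A = H @ H.conj().T + alpha * np.eye(K)   # K x K Gram matrix (RZF inverts this)
V_rzf = H.conj().T @ np.linalg.inv(A)    # exact RZF precoder (M x K)

# TPE via a scaled Neumann series: A^{-1} ~ (1/beta) * sum_l (I - A/beta)^l
lam = np.linalg.eigvalsh(A)
beta = (lam[0] + lam[-1]) / 2            # scaling that speeds up convergence
term = np.eye(K) / beta
S = term.copy()
for _ in range(L):
    term = term - (A @ term) / beta      # next series term: only matrix products
    S = S + term
V_tpe = H.conj().T @ S                   # polynomial precoder

rel_err = np.linalg.norm(V_tpe - V_rzf) / np.linalg.norm(V_rzf)
print(rel_err)                           # small for moderate L
```

Because each TPE term is a single matrix product, the whole computation maps naturally onto the parallel FPGA pipeline analyzed in the paper, unlike an explicit matrix inversion.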

Proceedings ArticleDOI
01 Sep 2017
TL;DR: An analysis of broadcast signaling design for cell discovery, based on an information-theoretic approach, reveals that for low latency, single-beam exhaustive scanning provides the best performance but results in high signaling overhead, while simultaneous multi-beam scanning can significantly reduce the overhead and provides the flexibility to trade off latency against overhead.
Abstract: Millimeter wave (mm-wave) communication is essential for next-generation cellular networks. To exploit mm-wave frequencies, directional transmissions have to be applied to compensate for the high propagation loss. Because of these directional transmissions, the initial access procedure of mm-wave communication systems needs a specific design compared to conventional networks operating at sub-6 GHz. This paper focuses on an important step in the initial access procedure, namely broadcast signaling design for cell discovery. An analysis of such a design is conducted based on an information-theoretic approach, where four fundamental beam patterns, which cover most of the design options, are compared. Their performance in terms of cell discovery latency and signaling overhead is analyzed. The analysis reveals three key findings: (i) the average cell discovery latency depends only on beam duration and frame length, if the entire beacon interval can be accommodated in one frame; (ii) for low latency, single-beam exhaustive scanning provides the best performance, but results in high signaling overhead; (iii) simultaneous multi-beam scanning can significantly reduce the overhead, and provides the flexibility to trade off latency against overhead. The analytical results are verified by extensive simulations.
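Finding (i) and the latency/overhead trade-off of finding (iii) can be mimicked with back-of-envelope formulas. This is our simplified model, assuming the user wakes at a uniformly random instant and its beam recurs once per frame; it is not the paper's information-theoretic derivation.

```python
import math

def avg_latency(beam_duration, frame_length):
    """Mean discovery latency when the user's beam recurs once per frame and the
    user wakes at a uniformly random instant: about half a frame plus one beam."""
    return frame_length / 2 + beam_duration

def signaling_overhead(n_beams, n_simultaneous, beam_duration, frame_length):
    """Fraction of each frame spent on beacons when n_simultaneous beams are
    swept at once (n_simultaneous = 1 is single-beam exhaustive scanning)."""
    slots = math.ceil(n_beams / n_simultaneous)
    return slots * beam_duration / frame_length

T_b, T_f, N = 0.1, 10.0, 16                 # beam duration, frame length (ms), beams
print(avg_latency(T_b, T_f))                # ~5.1 ms, independent of N (finding i)
print(signaling_overhead(N, 1, T_b, T_f))   # exhaustive scanning overhead (~0.16)
print(signaling_overhead(N, 4, T_b, T_f))   # 4-beam scanning overhead (~0.04)
```

Note the latency depends only on the beam duration and the frame length, while sweeping four beams at once cuts the beacon overhead by a factor of four.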

Posted Content
TL;DR: In this article, the capacity region of a cloud radio access network with Gaussian codebooks is characterized for compress-and-forward with joint decoding and decoding and noisy network coding.
Abstract: We study the transmission over a network in which users send information to a remote destination through relay nodes that are connected to the destination via finite-capacity error-free links, i.e., a cloud radio access network. The relays are constrained to operate without knowledge of the users' codebooks, i.e., they perform oblivious processing. The destination, or central processor, however, is informed about the users' codebooks. We establish a single-letter characterization of the capacity region of this model for a class of discrete memoryless channels in which the outputs at the relay nodes are independent given the users' inputs. We show that both relaying a-la Cover-El Gamal, i.e., compress-and-forward with joint decompression and decoding, and "noisy network coding", are optimal. The proof of the converse part establishes, and utilizes, connections with the Chief Executive Officer (CEO) source coding problem under logarithmic loss distortion measure. Extensions to general discrete memoryless channels are also investigated. In this case, we establish inner and outer bounds on the capacity region. For memoryless Gaussian channels within the studied class of channels, we characterize the capacity region when the users are constrained to time-share among Gaussian codebooks. Furthermore, we also discuss the suboptimality of separate decompression-decoding and the role of time-sharing.

Proceedings ArticleDOI
01 May 2017
TL;DR: This paper aims to design a subspace estimation/tracking algorithm that requires sampling only a small number of antennas in each training period, has a very low computational complexity, and is able to track the sharp transitions in the channel statistics very quickly.
Abstract: Massive MIMO is a variant of multiuser MIMO, in which the number of antennas M at the base-station is very large and generally much larger than the number of spatially multiplexed data streams to the users. It turns out that by increasing the number of antennas M at the base-station and as a result increasing the spatial resolution of the array, although the received signal from each user tends to be very high-dim, it lies on a low-dim subspace due to the limited angular spread of the user. This low-dim subspace structure can be exploited to improve estimation of the channel state during the training period. For example, channel vectors of the users can be estimated by sampling only a small subset rather than the whole number of antenna elements, which reduces the number of required RF chains and A/D converters at receiver front end. Moreover, the subspace information can be used to group the users based on the similarity of their subspaces in order to serve them more efficiently. Thus, it is apparent that estimating the signal subspace of the users from low-dim noisy sketches of their channel vectors plays a crucial role in massive MIMO. In this paper, we aim to design such a subspace estimation/tracking algorithm. Our proposed algorithm requires sampling only a small number of antennas in each training period, has a very low computational complexity, and is able to track the sharp transitions in the channel statistics very quickly.
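A toy version of subspace estimation from noisy snapshots is sketched below. For simplicity it observes all antennas and uses a plain eigendecomposition of an exponentially-weighted covariance, whereas the algorithm in the paper works from low-dimensional sketches of the array and tracks statistics changes; all names and parameters here are illustrative.

```python
import numpy as np

def dominant_subspace(snapshots, p, forget=0.95):
    """Exponentially-weighted sample covariance followed by an eigendecomposition;
    returns an orthonormal basis of the dominant p-dimensional subspace."""
    M = snapshots.shape[0]
    R = np.zeros((M, M), dtype=complex)
    for t in range(snapshots.shape[1]):
        y = snapshots[:, t:t + 1]
        R = forget * R + (1 - forget) * (y @ y.conj().T)
    _, eigvecs = np.linalg.eigh(R)        # eigenvalues in ascending order
    return eigvecs[:, -p:]                # top-p eigenvectors

rng = np.random.default_rng(0)
M, p, T = 32, 3, 500                      # antennas, subspace dim, snapshots
U, _ = np.linalg.qr(rng.standard_normal((M, p)) + 1j * rng.standard_normal((M, p)))
G = rng.standard_normal((p, T)) + 1j * rng.standard_normal((p, T))
Y = U @ G + 0.05 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

U_hat = dominant_subspace(Y, p)
s = np.linalg.svd(U_hat.conj().T @ U, compute_uv=False)
print(s.min())                            # cosines of principal angles; ~1 when aligned
```

The singular values of `U_hat.conj().T @ U` are the cosines of the principal angles between the estimated and true subspaces, a standard way to score such trackers.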

Posted Content
TL;DR: In this article, two new bounds on the achievable ergodic rate of a massive MIMO system were proposed for the case of channel hardening and/or when the system is interference-limited.
Abstract: A well-known lower bound widely used in the massive MIMO literature hinges on channel hardening, i.e., the phenomenon for which, thanks to the large number of antennas, the effective channel coefficients resulting from beamforming tend to deterministic quantities. If the channel hardening effect does not hold sufficiently well, this bound may be quite far from the actual achievable rate. In recent developments of massive MIMO, several scenarios where channel hardening is not sufficiently pronounced have emerged. These settings include, for example, the case of small scattering angular spread, yielding highly correlated channel vectors, and the case of cell-free massive MIMO. In this short contribution, we present two new bounds on the achievable ergodic rate that offer a complementary behavior with respect to the classical bound: while the former performs well in the case of channel hardening and/or when the system is interference-limited (notably, in the case of finite number of antennas and conjugate beamforming transmission), the new bounds perform well when the useful signal coefficient does not harden but the channel coherence block length is large with respect to the number of users, and in the case where interference is nearly entirely eliminated by zero-forcing beamforming. Overall, using the most appropriate bound depending on the system operating conditions yields a better understanding of the actual performance of systems where channel hardening may not occur, even in the presence of a very large number of antennas.
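The gap between the classical hardening-based bound and the coherent ergodic rate is easy to reproduce by Monte Carlo for conjugate beamforming with a small number of antennas. The parameters are illustrative (single user, no interference), and the hardening bound below is the standard "use-and-then-forget" style construction, not the new bounds of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, sigma2, trials = 4, 1.0, 200_000       # few antennas: channel hardens weakly

H = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
v = H / np.linalg.norm(H, axis=1, keepdims=True)   # conjugate beamforming, unit norm
g = np.sum(H.conj() * v, axis=1)                   # effective coefficient h^H v = ||h||

r_coherent = np.mean(np.log2(1 + np.abs(g) ** 2 / sigma2))   # receiver knows g
mu = g.mean()
var = np.mean(np.abs(g) ** 2) - np.abs(mu) ** 2
r_hardening = np.log2(1 + np.abs(mu) ** 2 / (var + sigma2))  # hardening-based bound
print(r_hardening, r_coherent)            # the gap widens as hardening weakens
```

Treating the fluctuation of the effective coefficient as noise is exactly what costs the hardening bound its tightness when the coefficient does not harden, which is the regime the paper's new bounds target.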

Posted Content
TL;DR: In this article, the authors proposed a time-domain BA scheme for wideband mmWave systems, where the channel is characterized by multi-path components, different delays, Angle-of-Arrivals/Angle-ofDepartures (AoAs/AoDs), and Doppler shifts.
Abstract: Millimeter wave (mmWave) communication with large array gains is a key ingredient of next generation (5G) wireless networks. Effective communication in mmWaves usually depends on the knowledge of the channel. We refer to the problem of finding a narrow beam pair at the transmitter and at the receiver, yielding high Signal to Noise Ratio (SNR) as Beam Alignment (BA). Prior BA schemes typically considered deterministic channels, where the instantaneous channel coefficients are assumed to stay constant for a long time. In this paper, in contrast, we propose a time-domain BA scheme for wideband mmWave systems, where the channel is characterized by multi-path components, different delays, Angle-of-Arrivals/Angle-of-Departures (AoAs/AoDs), and Doppler shifts. In our proposed scheme, the Base Station (BS) probes the channel in the downlink by some sequences with good autocorrelation property (e.g., Pseudo-Noise (PN) sequences), letting each user estimate its best AoA-AoD that connects the user to the BS with two-sided high beamforming gain. We leverage the sparse nature of mmWaves in the AoA-AoD-time domain, and formulate the BA problem as a Compressed Sensing (CS) of a non-negative sparse vector. We use the recently developed Non-Negative Least Squares (NNLS) technique to efficiently find the strongest path connecting the BS and each user. Simulation results show that the proposed scheme outperforms its counterpart in terms of the training overhead and robustness to fast channel variations.
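The CS formulation can be emulated on a toy on-grid problem. Here a plain projected-gradient iteration stands in for a dedicated NNLS solver, and the grid size, probing matrix, and path gains are illustrative assumptions, not the paper's PN-sequence design.

```python
import numpy as np

def nnls_pg(A, y, iters=3000):
    """Projected-gradient NNLS: x <- max(0, x - eta * A^T (A x - y)).
    A simple stand-in for a dedicated non-negative least squares solver."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2    # step size below 1/L for convergence
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(0.0, x - eta * A.T @ (A @ x - y))
    return x

rng = np.random.default_rng(0)
G, n_meas = 64, 24                      # AoA-AoD grid size, training measurements
x_true = np.zeros(G)
x_true[[7, 30]] = [1.0, 0.4]            # two paths; index 7 is the strongest

A = rng.standard_normal((n_meas, G))    # random (PN-like) probing matrix
y = A @ x_true                          # noiseless training observations

x_hat = nnls_pg(A, y)
print(int(np.argmax(x_hat)))            # index of the strongest estimated path
```

Recovering the strongest non-negative entry from far fewer measurements than grid points is the compressed-sensing effect the beam alignment scheme exploits.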