Author

Abdellatif Zaidi

Other affiliations: Institut Gaspard Monge
Bio: Abdellatif Zaidi is an academic researcher from Huawei. The author has contributed to research in topics: Decoding methods & Telecommunications link. The author has an h-index of 5, and has co-authored 17 publications receiving 78 citations. Previous affiliations of Abdellatif Zaidi include Institut Gaspard Monge.

Papers
Journal ArticleDOI
27 Jan 2020-Entropy
TL;DR: This tutorial paper focuses on the variants of the bottleneck problem taking an information theoretic perspective and discusses practical methods to solve it, as well as its connection to coding and learning aspects.
Abstract: This tutorial paper focuses on the variants of the bottleneck problem taking an information theoretic perspective and discusses practical methods to solve it, as well as its connection to coding and learning aspects. The intimate connections of this setting to remote source-coding under logarithmic loss distortion measure, information combining, common reconstruction, the Wyner–Ahlswede–Korner problem, the efficiency of investment information, as well as generalization, variational inference, representation learning, autoencoders, and others are highlighted. We discuss its extension to the distributed information bottleneck problem with emphasis on the Gaussian model and highlight the basic connections to uplink Cloud Radio Access Networks (CRAN) with oblivious processing. For this model, the optimal trade-offs between relevance (i.e., information) and complexity (i.e., rates) in the discrete and vector Gaussian frameworks are determined. In the concluding outlook, some interesting open problems are mentioned, such as the characterization of the optimal input ("feature") distributions that, under power limitations, maximize the "relevance" for the Gaussian information bottleneck under "complexity" constraints.
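The relevance-complexity trade-off described above can be made concrete with the classical iterative information bottleneck algorithm of Tishby et al., a Blahut-Arimoto-style alternation. This is a generic illustration, not the paper's own method; the discrete source distribution, trade-off parameter `beta`, and representation size `n_t` used below are hypothetical.

```python
import math
import random

def mutual_info(p_joint):
    """I(A;B) in bits for a joint pmf given as nested lists p[a][b]."""
    pa = [sum(row) for row in p_joint]
    pb = [sum(col) for col in zip(*p_joint)]
    mi = 0.0
    for a, row in enumerate(p_joint):
        for b, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

def iterative_ib(p_xy, beta, n_t, iters=300, seed=1):
    """Iterative IB: returns (complexity I(T;X), relevance I(T;Y)) for parameter beta."""
    rng = random.Random(seed)
    nx, ny = len(p_xy), len(p_xy[0])
    p_x = [sum(row) for row in p_xy]
    p_y_given_x = [[p_xy[x][y] / p_x[x] for y in range(ny)] for x in range(nx)]
    # random soft initialisation of the encoder q(t|x)
    q_t_given_x = []
    for x in range(nx):
        w = [rng.random() + 0.1 for _ in range(n_t)]
        s = sum(w)
        q_t_given_x.append([v / s for v in w])
    for _ in range(iters):
        q_t = [sum(p_x[x] * q_t_given_x[x][t] for x in range(nx)) for t in range(n_t)]
        # decoder q(y|t) induced by the Markov chain T - X - Y
        q_y_given_t = [[sum(p_xy[x][y] * q_t_given_x[x][t] for x in range(nx)) / q_t[t]
                        for y in range(ny)] for t in range(n_t)]
        # encoder update: q(t|x) proportional to q(t) * exp(-beta * KL(p(y|x) || q(y|t)))
        for x in range(nx):
            logits = []
            for t in range(n_t):
                kl = sum(p_y_given_x[x][y] * math.log(p_y_given_x[x][y] / q_y_given_t[t][y])
                         for y in range(ny) if p_y_given_x[x][y] > 0)
                logits.append(math.log(q_t[t]) - beta * kl)
            m = max(logits)
            w = [math.exp(l - m) for l in logits]
            s = sum(w)
            q_t_given_x[x] = [v / s for v in w]
    p_tx = [[p_x[x] * q_t_given_x[x][t] for x in range(nx)] for t in range(n_t)]
    p_ty = [[sum(p_xy[x][y] * q_t_given_x[x][t] for x in range(nx)) for y in range(ny)]
            for t in range(n_t)]
    return mutual_info(p_tx), mutual_info(p_ty)
```

For a symmetric binary source pair, increasing beta drives the relevance I(T;Y) toward I(X;Y), while I(T;X) measures the complexity (rate) of the representation T; by data processing, I(T;Y) can never exceed either I(T;X) or I(X;Y).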

81 citations

Proceedings ArticleDOI
Abdellatif Zaidi
01 Jun 2020
TL;DR: An upper bound on the exponent-rates function is derived for a variant of the many-help-one hypothesis testing against independence problem in which the source has finite differential entropy and the observation noises under the null hypothesis are Gaussian.
Abstract: We study a variant of the many-help-one hypothesis testing against independence problem in which the source, not necessarily Gaussian, has finite differential entropy and the observation noises under the null hypothesis are Gaussian. Under the criterion that stipulates minimization of the Type II error exponent subject to a (constant) bound on the Type I error rate, we derive an upper bound on the exponent-rates function. The bound is shown to mirror a corresponding explicit lower bound, except that the lower bound involves the source power (variance) whereas the upper bound has the source entropy power. Part of the utility of the established bound is for investigating asymptotic exponents/rates and the losses incurred by distributed detection as a function of the number of observations.
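For intuition, in the simplest single-observer version of testing against independence, Stein's lemma gives the best Type II error exponent under a fixed Type I error level as the mutual information I(X;Y) = D(P_XY || P_X P_Y). A minimal discrete sketch follows; the toy binary distribution is hypothetical and unrelated to the paper's Gaussian-noise setting.

```python
import math

def kl(p, q):
    """D(p || q) in nats for two flat pmfs of equal length."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def stein_exponent_against_independence(p_xy):
    """Type II exponent for H0: (X, Y) ~ p_xy vs H1: X, Y independent,
    which by Stein's lemma equals I(X;Y) = D(P_XY || P_X P_Y)."""
    p_x = [sum(row) for row in p_xy]
    p_y = [sum(col) for col in zip(*p_xy)]
    joint = [p for row in p_xy for p in row]
    product = [px * py for px in p_x for py in p_y]
    return kl(joint, product)
```

An independent pair yields a zero exponent (the two hypotheses coincide), while correlation makes the exponent strictly positive.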

11 citations

Proceedings ArticleDOI
01 Sep 2015
TL;DR: A lattice-based coding scheme is developed that generalizes both compute-and-forward and successive Wyner-Ziv coding for transmission over a cloud radio access network, and is shown to strictly outperform the best of these two popular schemes.
Abstract: We study the transmission over a cloud radio access network, in which multiple base stations (BS) are connected to a central processor (CP) via finite-capacity backhaul links. Focusing on maximizing the allowed sum-rate, we develop a lattice-based coding scheme that generalizes both compute-and-forward and successive Wyner-Ziv coding for this model. The scheme builds on Cover and El Gamal's partial-decode-compress-and-forward and is shown to strictly outperform the best of the aforementioned two popular schemes. The results are illustrated through some numerical examples.
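For reference, the baseline compute-and-forward computation rate of Nazer and Gastpar, which lattice schemes of this kind generalize, is R(h, a) = max(0, (1/2) log2( (||a||^2 - P (h^T a)^2 / (1 + P ||h||^2))^(-1) )) for channel vector h, integer coefficient vector a, and power P. The sketch below evaluates it numerically; the helper `best_equation` and the channel/power values are illustrative assumptions, not from the paper.

```python
import itertools
import math

def cof_rate(h, a, P):
    """Nazer-Gastpar compute-and-forward rate (bits/channel use) for decoding
    the integer combination a of messages over channel h at power P."""
    norm_a2 = sum(ai * ai for ai in a)
    inner = sum(hi * ai for hi, ai in zip(h, a))
    norm_h2 = sum(hi * hi for hi in h)
    # effective noise after MMSE scaling; strictly positive by Cauchy-Schwarz
    eff = norm_a2 - P * inner * inner / (1.0 + P * norm_h2)
    return max(0.0, 0.5 * math.log2(1.0 / eff))

def best_equation(h, P, amax=3):
    """Brute-force search for the best nonzero integer coefficient vector."""
    best = (0.0, None)
    for a in itertools.product(range(-amax, amax + 1), repeat=len(h)):
        if any(a):
            r = cof_rate(h, a, P)
            if r > best[0]:
                best = (r, a)
    return best
```

At high power, the best integer equation aligns with the channel direction; for h = (1, 1) the search returns a = ±(1, 1).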

8 citations

Proceedings ArticleDOI
22 May 2016
TL;DR: A lattice-based coding scheme is proposed in which the BSs decode linear combinations of the transmitted messages, in the spirit of Compute-and-Forward (CoF); it differs essentially in that the decoded equations are remapped to linear combinations of the channel input symbols.
Abstract: We study the transmission over a cloud radio access network in which multiple base stations (BS) are connected to a central processor (CP) via finite-capacity backhaul links. We propose a lattice-based coding scheme in which the BSs decode linear combinations of the transmitted messages, in the spirit of Compute-and-Forward (CoF). The scheme differs from CoF essentially in that the decoded equations are remapped to linear combinations of the channel input symbols, sent compressed in a lossy manner to the CP, and are not required to be linearly independent. Also, unlike standard CoF, an appropriate multi-user decoder is utilized to recover the sent messages. This novel scheme differs from both classical Compute-and-Forward and Successive Wyner-Ziv (SWZ) coding and is shown to outperform both schemes in certain regimes, as illustrated through some numerical examples.

8 citations

Proceedings ArticleDOI
10 Jul 2016
TL;DR: This work establishes single-letter characterizations of the optimal secure rate-distortion regions of both models and sheds important light on the role of the private link(s), for the transmission of the source S0 or for sharing a secret key that is then used to encrypt the source S0 over the public link.
Abstract: In this work, we investigate two secure source coding models, a Helper problem and a Gray-Wyner problem. In both settings, the encoder is connected to each of the legitimate receivers through a public link as well as a private link; and an external eavesdropper intercepts all information sent on the public link. Specifically, in the Helper problem, a memoryless source pair (S0, S1) is to be compressed and sent on both links such that the component S0 can be recovered losslessly at the legitimate receiver while being kept completely secret from an eavesdropper that overhears on the public link, and the component S1 is recovered lossily, to within some prescribed distortion level, at the legitimate receiver. In the Gray-Wyner model, a memoryless source triple (S0, S1, S2) is to be compressed and sent to two legitimate receivers, such that the component S0 is recovered at both receivers losslessly and kept secret from an external eavesdropper that listens on the public link; and the component Sj is to be recovered lossily at Receiver j, j = 1, 2. We establish single-letter characterizations of the optimal secure rate-distortion regions of both models. The analysis sheds important light on the role of the private link(s), i.e., for the transmission of the source S0 or for sharing a secret key that is then used to encrypt the source S0 over the public link.
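The key-sharing role of the private link can be illustrated by Shannon's one-time pad: a uniform key K delivered over the private link encrypts S0 bit-by-bit, so an eavesdropper observing only S0 XOR K on the public link learns nothing about S0, while the legitimate receiver, who also holds K, recovers S0 exactly. A minimal binary sketch (the function name and parameters are illustrative, not from the paper):

```python
import random

def one_time_pad_demo(s0_bits, seed=0):
    """Shannon one-time pad: key K on the private link, ciphertext S0 XOR K
    on the public link; the eavesdropper sees only the ciphertext."""
    rng = random.Random(seed)
    key = [rng.randrange(2) for _ in s0_bits]           # private link
    ciphertext = [s ^ k for s, k in zip(s0_bits, key)]  # public link
    decoded = [c ^ k for c, k in zip(ciphertext, key)]  # legitimate receiver
    return key, ciphertext, decoded
```

Because a uniform key makes the ciphertext uniform and independent of S0, this achieves perfect secrecy at a key rate of one bit per source bit.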

5 citations


Cited by
Book Chapter
01 Jan 2017
TL;DR: Considering the trend toward 5G, achieving significant gains in capacity and system throughput performance is a high-priority requirement in view of the recent exponential increase in the volume of mobile traffic, and the proposed system should be able to support enhanced delay-sensitive high-volume services.
Abstract: Radio access technologies for cellular mobile communications are typically characterized by multiple access schemes, e.g., frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), and OFDMA. In the 4th generation (4G) mobile communication systems such as Long-Term Evolution (LTE) (Au et al., Uplink contention based SCMA for 5G radio access. Globecom Workshops (GC Wkshps), 2014. doi:10.1109/GLOCOMW.2014.7063547) and LTE-Advanced (Baracca et al., IEEE Trans. Commun., 2011. doi:10.1109/TCOMM.2011.121410.090252; Barry et al., Digital Communication, Kluwer, Dordrecht, 2004), standardized by the 3rd Generation Partnership Project (3GPP), orthogonal multiple access based on OFDMA or single carrier (SC)-FDMA is adopted. Orthogonal multiple access was a reasonable choice for achieving good system-level throughput performance with simple single-user detection. However, considering the trend toward 5G, achieving significant gains in capacity and system throughput performance is a high-priority requirement in view of the recent exponential increase in the volume of mobile traffic. In addition, the proposed system should be able to support enhanced delay-sensitive high-volume services such as video streaming and cloud computing. Other high-level targets of 5G are reduced cost, higher energy efficiency, and robustness against emergencies.

635 citations

Journal ArticleDOI
30 Apr 2020
TL;DR: The information bottleneck (IB) theory recently emerged as a bold information-theoretic paradigm for analyzing DL systems, and its recent impact on DL is surveyed.
Abstract: Inference capabilities of machine learning (ML) systems skyrocketed in recent years, now playing a pivotal role in various aspects of society. The goal in statistical learning is to use data to obtain simple algorithms for predicting a random variable $Y$ from a correlated observation $X$. Since the dimension of $X$ is typically huge, computationally feasible solutions should summarize it into a lower-dimensional feature vector $T$, from which $Y$ is predicted. The algorithm will successfully make the prediction if $T$ is a good proxy of $Y$, despite this dimensionality reduction. A myriad of ML algorithms (mostly employing deep learning (DL)) for finding such representations $T$ based on real-world data are now available. While these methods are effective in practice, their success is hindered by the lack of a comprehensive theory to explain it. The information bottleneck (IB) theory recently emerged as a bold information-theoretic paradigm for analyzing DL systems. Adopting mutual information as the figure of merit, it suggests that the best representation $T$ should be maximally informative about $Y$ while minimizing the mutual information with $X$. In this tutorial we survey the information-theoretic origins of this abstract principle, and its recent impact on DL. For the latter, we cover implications of the IB problem on DL theory, as well as practical algorithms inspired by it. Our goal is to provide a unified and cohesive description. A clear view of current knowledge is important for further leveraging IB and other information-theoretic ideas to study DL models.

95 citations

Journal ArticleDOI
TL;DR: This tutorial summarizes the efforts to date on semantic-aware and task-oriented communications, starting from their early adaptations and covering the foundations, algorithms, and potential implementations, with a focus on approaches that utilize information theory to provide the foundations.
Abstract: Communication systems to date primarily aim at reliably communicating bit sequences. Such an approach provides efficient engineering designs that are agnostic to the meanings of the messages or to the goal that the message exchange aims to achieve. Next generation systems, however, can be potentially enriched by folding message semantics and goals of communication into their design. Further, these systems can be made cognizant of the context in which communication exchange takes place, thereby providing avenues for novel design insights. This tutorial summarizes the efforts to date on semantic-aware and task-oriented communications, starting from their early adaptations and covering the foundations, algorithms, and potential implementations. The focus is on approaches that utilize information theory to provide the foundations, as well as on the significant role of learning in semantic and task-aware communications.

67 citations

Posted Content
TL;DR: It is shown that for the Gaussian source, the failure of being successively refinable with multiple side informations is due only to the inherent uncertainty about which side information will occur at the decoder, not to the progressive encoding requirement.
Abstract: We provide a complete characterization of the rate-distortion region for the multistage successive refinement of the Wyner-Ziv source coding problem with degraded side informations at the decoder. Necessary and sufficient conditions for a source to be successively refinable along a distortion vector are subsequently derived. A source-channel separation theorem is provided when the descriptions are sent over independent channels for the multistage case. Furthermore, we introduce the notion of generalized successive refinability with multiple degraded side informations. This notion captures whether progressive encoding to satisfy multiple distortion constraints for different side informations is as good as encoding without the progressive requirement. Necessary and sufficient conditions for generalized successive refinability are given. It is shown that the following two sources are generalized successively refinable: (1) the Gaussian source with degraded Gaussian side informations, (2) the doubly symmetric binary source when the worse side information is a constant. Thus for both cases, the failure of being successively refinable is due only to the inherent uncertainty about which side information will occur at the decoder, not to the progressive encoding requirement.
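For reference, the single-stage Gaussian Wyner-Ziv rate-distortion function underlying these results is R_WZ(D) = max(0, (1/2) log2( var(X|Y) / D )), where var(X|Y) = var(X)(1 - rho^2) for side information Y with correlation rho; better (less degraded) side information lowers the required rate. A small numerical sketch of this standard formula (the parameter values are illustrative, not from the paper):

```python
import math

def gaussian_wz_rate(var_x, rho, D):
    """Gaussian Wyner-Ziv rate in bits/sample: R(D) = 1/2 log+ (var(X|Y) / D),
    with var(X|Y) = var_x * (1 - rho^2) for correlation rho with side info Y."""
    var_cond = var_x * (1.0 - rho * rho)
    return max(0.0, 0.5 * math.log2(var_cond / D))
```

The rate drops to zero once the target distortion D exceeds the conditional variance var(X|Y), since the decoder's side information alone already meets the distortion constraint.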

47 citations