
Showing papers presented at "Information Theory Workshop in 2018"


Journal ArticleDOI
01 Nov 2018
TL;DR: The total variation distance is introduced as a measure of privacy leakage by showing that it satisfies the post-processing and linkage inequalities and provides a bound on the privacy leakage measured by mutual information, maximal leakage, or the improvement in an inference attack with a bounded cost function.
Abstract: The total variation distance is proposed as a privacy measure in an information disclosure scenario when the goal is to reveal some information about available data in order to receive utility, while preserving the privacy of sensitive data from the legitimate receiver. The total variation distance is motivated as a measure of privacy leakage by showing that: i) it satisfies the post-processing and linkage inequalities, which makes it consistent with an intuitive notion of a privacy measure; ii) the optimal utility-privacy trade-off can be solved through a standard linear program when total variation distance is employed as the privacy measure; iii) it provides a bound on the privacy leakage measured by mutual information, maximal leakage, or the improvement in an inference attack with an arbitrary bounded cost function. [Footnote: This work was carried out when the first author was with the Information Processing and Communications Laboratory at Imperial College London.]
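As a toy illustration of property i), the post-processing inequality says that applying any stochastic channel to two distributions cannot increase their total variation distance. A minimal numerical check (the distributions and channel below are made-up values, not from the paper):

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

# Two made-up distributions over a 3-symbol alphabet
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])

# A row-stochastic post-processing channel W
W = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

d_before = tv_distance(p, q)         # 0.3
d_after = tv_distance(p @ W, q @ W)  # 0.27
assert d_after <= d_before           # post-processing can only shrink TV
```

The linearity of total variation in the disclosure distribution is also what makes the utility-privacy trade-off in ii) expressible as a linear program.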

69 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: A system where status updates are generated by a source and are forwarded in a First-Come-First-Served (FCFS) manner to the monitor is examined, providing a closed-form expression for the average age of the stream, which enables optimizing its packet generation rate to achieve the minimum possible average age.
Abstract: In this paper, we examine a system where status updates are generated by a source and are forwarded in a First-Come-First-Served (FCFS) manner to the monitor. We consider the case where the server has other tasks to fulfill referred to as vacations, a simple example being relaying the packets of another non age-sensitive stream. Due to the server’s necessity to go on vacations, the age process of the stream of interest becomes complicated to evaluate. By leveraging specific queuing theory tools, we provide a closed form of the average age of the stream which enables us to optimize its packet generation rate and achieve the minimum possible average age. Numerical results are provided to corroborate the theoretical findings and highlight the interaction between the stream and the vacations in question.
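The vacation analysis itself requires the paper's queueing machinery, but the "closed-form age, then optimize the generation rate" workflow can be sketched with the classical vacation-free M/M/1 FCFS baseline (the formula below is that standard known result, not the paper's):

```python
import numpy as np

def avg_age_mm1_fcfs(lam, mu):
    """Classical closed-form average AoI for a vacation-free M/M/1 FCFS queue."""
    rho = lam / mu
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho**2 / (1.0 - rho))

mu = 1.0  # service rate (assumed value)
rhos = np.linspace(0.01, 0.99, 9801)
ages = [avg_age_mm1_fcfs(r * mu, mu) for r in rhos]
best = rhos[int(np.argmin(ages))]   # optimal utilization, roughly 0.53
```

Server vacations change the age expression, and deriving that modified closed form is exactly the paper's contribution.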

60 citations


Proceedings ArticleDOI
23 May 2018
TL;DR: A specific class of such caching line graphs is defined, for which the subpacketization, rate, and uncached fraction of the coded caching problem can be captured via its graph theoretic parameters.
Abstract: We present a coded caching framework using line graphs of bipartite graphs. A clique cover of the line graph describes the uncached subfiles at the users. A clique cover of the complement of the square of the line graph gives a transmission scheme that satisfies user demands. We then define a specific class of such caching line graphs, for which the subpacketization, rate, and uncached fraction of the coded caching problem can be captured via its graph-theoretic parameters. We present a construction of such caching line graphs using projective geometry. The presented scheme has a rate bounded from above by a constant, with subpacketization level $q ^ { O \left( \left( \log _ { q } K \right) ^ { 2 } \right) }$ and uncached fraction $\Theta \left( \frac { 1 } { \sqrt { K } } \right)$, where $K$ is the number of users. We also present a $K$-dependent lower bound on the rate of coded caching schemes for a given broadcast setup.

51 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: Upper bounds on the generalization error are derived in terms of a certain Wasserstein distance involving the distributions of the input and output of an algorithm under the assumption of a Lipschitz continuous loss function.
Abstract: Generalization error of a learning algorithm characterizes the gap between an algorithm's performance on test data versus performance on training data. In recent work, Xu & Raginsky [1] showed that generalization error may be upper-bounded using the mutual information $I(S;W)$ between the input $S$ and the output $W$ of an algorithm. In this paper, we derive upper bounds on the generalization error in terms of a certain Wasserstein distance involving the distributions of $S$ and $W$ under the assumption of a Lipschitz continuous loss function. Unlike mutual information-based bounds, these new bounds are useful even for deterministic learning algorithms, or for algorithms such as stochastic gradient descent. Moreover, we show that in some natural cases these bounds are tighter than mutual information-based bounds.
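For intuition about why a Wasserstein bound pairs naturally with Lipschitz losses: by Kantorovich-Rubinstein duality, $|\mathbb{E}f(X)-\mathbb{E}f(Y)| \le L \cdot W_1(P_X, P_Y)$ for any $L$-Lipschitz $f$. On the real line, $W_1$ between two equal-size empirical samples reduces to the mean absolute difference of the sorted samples; a toy check on synthetic data (not from the paper):

```python
import numpy as np

def wasserstein1_1d(x, y):
    """W1 between two equal-size empirical samples on the real line:
    mean absolute difference of the sorted samples."""
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    return float(np.mean(np.abs(x - y)))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10000)
b = rng.normal(0.5, 1.0, 10000)
w = wasserstein1_1d(a, b)  # close to the true mean shift of 0.5
```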

47 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this article, a fine-grained model that quantifies the level of non-trivial coding needed to obtain the benefits of coding in matrix-vector computation is proposed.
Abstract: In distributed computing systems, it is well recognized that worker nodes that are slow (called stragglers) tend to dominate the overall job execution time. Coded computation utilizes concepts from erasure coding to mitigate the effect of stragglers by running “coded” copies of tasks comprising a job. Stragglers are typically treated as erasures in this process. While this is useful, there are issues with applying, e.g., MDS codes in a straightforward manner. Specifically, several applications such as matrix-vector products deal with sparse matrices. MDS codes typically require dense linear combinations of submatrices of the original matrix which destroy their inherent sparsity. This is problematic as it results in significantly higher processing times for computing the submatrix-vector products in coded computation. Furthermore, it also ignores partial computations at stragglers. In this work, we propose a fine-grained model that quantifies the level of non-trivial coding needed to obtain the benefits of coding in matrix-vector computation. Simultaneously, it allows us to leverage partial computations performed by the straggler nodes. For this model, we propose and evaluate several code designs and discuss their properties.
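A minimal sketch of the plain MDS-style coded computation the paper starts from (toy sizes; not the authors' fine-grained scheme): split $A$ into two row blocks, add one coded copy, and tolerate any single straggler. Note that the coded block $A_1 + A_2$ is dense even when $A$ is sparse, which is precisely the issue the paper addresses:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(0, 5, (4, 3)).astype(float)
x = rng.integers(0, 5, 3).astype(float)

# Split A row-wise and add one MDS-coded copy: any 2 of 3 results suffice
A1, A2 = A[:2], A[2:]
tasks = [A1, A2, A1 + A2]          # three workers; A1 + A2 destroys sparsity
results = [T @ x for T in tasks]   # computed in parallel by the workers

# Suppose the worker holding A2 straggles: recover A2 @ x from the others
recovered_A2x = results[2] - results[0]
y = np.concatenate([results[0], recovered_A2x])
assert np.allclose(y, A @ x)
```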

44 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: A scheme for private data sharing, called entangled polynomial sharing, is proposed and it is shown that it admits basic operations such as addition, multiplication, and transposing, respecting the constraint of the problem.
Abstract: In a secure multiparty computation (MPC) system, there are several sources, each with access to a private input. The sources want to offload the computation of a polynomial function of the inputs to some processing nodes, or workers. The workers are unreliable, i.e., a limited number of them may collude to gain information about the inputs. The objective is to minimize the number of workers required to calculate the polynomial, while the colluding workers gain no information about the inputs. In this paper, we assume that the inputs are massive matrices, while each worker has limited computation and storage capabilities. As a proxy for that, we assume the link between each source and each worker admits a limited communication load. We propose a scheme for private data sharing, called entangled polynomial sharing, and show that it admits basic operations such as addition, multiplication, and transposition, respecting the constraints of the problem. Thus, it allows computing an arbitrary polynomial of the input matrices, while it reduces the number of servers needed significantly compared to the conventional scheme. It also generalizes the recently proposed polynomial sharing scheme.
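Entangled polynomial sharing operates on blocks of matrices; the scalar primitive it builds on is Shamir-style polynomial sharing over a prime field. The sketch below (toy parameters, my own illustration rather than the authors' construction) shows the additive homomorphism underlying the "addition" operation:

```python
import random

P = 2**61 - 1  # a large prime modulus

def share(secret, t, n, rng):
    """Degree-t polynomial with constant term = secret, evaluated at 1..n.
    Any t shares reveal nothing; any t+1 shares reconstruct the secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

rng = random.Random(42)
a, b = 1234, 5678
sa = share(a, t=1, n=3, rng=rng)
sb = share(b, t=1, n=3, rng=rng)
# Workers add their shares locally; the sum of secrets is then reconstructed
ssum = [(x1, (y1 + y2) % P) for (x1, y1), (x2, y2) in zip(sa, sb)]
assert reconstruct(ssum[:2]) == (a + b) % P
```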

39 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: This work considers two different settings for the PIR-CSI problem, proves an upper bound on the maximum download rate as a function of the size of the database and the size of the side information, and proposes a protocol that achieves the rate upper bound.
Abstract: This paper considers the problem of single-server single-message private information retrieval with coded side information (PIR-CSI). In this problem, there is a server storing a database, and a user who knows a linear combination of a subset of messages in the database as side information. The number of messages contributing to the side information is known to the server, but the indices and the coefficients of these messages are unknown to the server. The user wishes to download a message from the server privately, i.e., without revealing which message it is requesting, while minimizing the download cost. In this work, we consider two different settings for the PIR-CSI problem, depending on whether or not the demanded message is one of the messages contributing to the side information. For each setting, we prove an upper bound on the maximum download rate as a function of the size of the database and the size of the side information, and propose a protocol that achieves the rate upper bound.

37 citations


Proceedings ArticleDOI
30 May 2018
TL;DR: In this article, the authors consider the problem of multi-message PIR with side information and show that the capacity is the same as the capacity of a multi-message PIR problem without private side information, but with a library of reduced size.
Abstract: We consider the problem of private information retrieval (PIR) where a single user with private side information aims to retrieve multiple files from a library stored (uncoded) at a number of servers. We assume the side information at the user includes a subset of files stored privately (i.e., the server does not know the indices of these files). In addition, we require that the identity of the requests and the side information at the user are not revealed to any of the servers. The problem involves finding the minimum load to be transmitted from the servers to the user such that the requested files can be decoded with the help of the received information and the side information. By providing matching lower and upper bounds for certain regimes, we characterize the minimum load imposed on all the servers (i.e., the capacity of this PIR problem). Our result shows that the capacity is the same as the capacity of a multi-message PIR problem without private side information, but with a library of reduced size. The effective size of the library is equal to the original library size minus the size of the side information.

37 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this article, a lower bound on the communication cost for any given storage and computation costs was derived for a MapReduce-like distributed computing system, where the tradeoff between the storage, the computation, and the communication was characterized.
Abstract: We consider a MapReduce-like distributed computing system. We derive a lower bound on the communication cost for any given storage and computation costs. This lower bound matches the achievable bound we proposed recently. As a result, we completely characterize the optimal tradeoff between the storage, the computation, and the communication. Our result generalizes the previous one by Li et al. to also account for the number of computed intermediate values.

34 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: Considering equivocation and average distortion as the metrics of privacy at the detector, a tight single-letter characterization of the rate-error exponent-equivocation and rate-error exponent-distortion tradeoffs is obtained.
Abstract: A distributed binary hypothesis testing problem involving two parties, a remote observer and a detector, is studied. The remote observer has access to a discrete memoryless source, and communicates its observations to the detector via a rate-limited noiseless channel. The detector tests for the independence of its own observations with that of the observer, conditioned on some additional side information. While the goal is to maximize the type 2 error exponent of the test for a given type 1 error probability constraint, it is also desired to keep a private part, which is correlated with the observer’s observations, as oblivious to the detector as possible. Considering equivocation and average distortion as the metrics of privacy at the detector, a tight single-letter characterization of the rate-error exponent-equivocation and rate-error exponent-distortion tradeoff is obtained.

28 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: Keyless authentication is considered in an adversarial point-to-point channel where the adversary is assumed to know the code but not the message, and the authentication capacity is shown to be either zero or equal to the no-adversary capacity, depending on whether the channel satisfies a condition termed overwritability.
Abstract: Keyless authentication is considered in an adversarial point-to-point channel. Namely, a legitimate transmitter and receiver aim to communicate over a noisy channel that may or may not also contain an active adversary, capable of transmitting an arbitrary signal into the channel. If the adversary is not present, then the receiver must successfully decode the message with high probability; if it is present, then the receiver must either decode the message or detect the adversary’s presence. Thus, whenever the receiver decodes, it can be certain that the decoded message is authentic. The exact authentication capacity is characterized for discrete-memoryless adversary channels, where the adversary is assumed to know the code but not the message. The authentication capacity is shown to be either zero or equal to the no-adversary capacity, depending on whether the channel satisfies a condition termed overwritability.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: This paper discusses the application of machine-learning techniques to two communications problems and focuses on what can be learned from the resulting systems.
Abstract: Rapid improvements in machine learning over the past decade are beginning to have far-reaching effects. For communications, engineers with limited domain expertise can now use off-the-shelf learning packages to design high-performance systems based on simulations. Prior to the current revolution in machine learning, the majority of communication engineers were quite aware that system parameters (such as filter coefficients) could be learned using stochastic gradient descent. It was not at all clear, however, that more complicated parts of the system architecture could be learned as well. In this paper, we discuss the application of machine-learning techniques to two communications problems and focus on what can be learned from the resulting systems. We were pleasantly surprised that the observed gains in one example have a simple explanation that only became clear in hindsight. In essence, deep learning discovered a simple and effective strategy that had not been considered earlier.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: It is shown that, unlike the setting with passive adversaries, reliable covert communication against active adversaries requires Alice and Bob to have a shared key (of length at least $\Omega(\log n)$) even when Bob has a better channel than James.
Abstract: Suppose that a transmitter Alice potentially wishes to communicate with a receiver Bob over an adversarially jammed binary channel. An active adversary James eavesdrops on their communication over a binary symmetric channel BSC$(q)$, and may maliciously flip (up to) a certain fraction $p$ of their transmitted bits based on his observation. We consider a setting where the communication must be simultaneously covert as well as reliable, i.e., James should be unable to accurately distinguish whether or not Alice is communicating, while Bob should be able to correctly recover Alice's message with high probability regardless of the adversarial jamming strategy. We show that, unlike the setting with passive adversaries, reliable covert communication against active adversaries requires Alice and Bob to have a shared key (of length at least $\Omega(\log n)$) even when Bob has a better channel than James. We present inner and outer bounds on the information-theoretically optimal throughputs as a function of the channel parameters, the desired level of covertness, and the amount of shared key available. Further, these bounds match for a wide range of parameters of interest. Full version [1]: https://arxiv.org/pdf/1805.02426.pdf

Proceedings ArticleDOI
01 Nov 2018
TL;DR: Staircase-PIR schemes are introduced and it is proved that they are universally robust, establishing an equivalence between robust PIR and communication efficient secret sharing.
Abstract: We consider the problem of designing private information retrieval (PIR) schemes on data of m files replicated on n servers that can possibly collude. We focus on devising robust PIR schemes that can tolerate stragglers, i.e., slow or unresponsive servers. In many settings, the number of stragglers is not known a priori or may change with time. We define universally robust PIR as schemes that achieve PIR capacity asymptotically in m and simultaneously for any number of stragglers up to a given threshold. We introduce Staircase-PIR schemes and prove that they are universally robust. Towards that end, we establish an equivalence between robust PIR and communication efficient secret sharing.

Proceedings ArticleDOI
02 Oct 2018
TL;DR: This work considers a real-time streaming source coding system in which an encoder observes a sequence of randomly arriving symbols from an i.i.d. source, and feeds binary codewords to a FIFO buffer that outputs one bit per time unit to a decoder.
Abstract: We consider a real-time streaming source coding system in which an encoder observes a sequence of randomly arriving symbols from an i.i.d. source, and feeds binary codewords to a FIFO buffer that outputs one bit per time unit to a decoder. Each source symbol represents a status update by the source, and the timeliness of the system is quantified by the age of information (AoI), defined as the time difference between the present time and the generation time of the most up-to-date symbol at the output of the decoder. When the FIFO buffer is allowed to be empty, we propose an optimal prefix-free lossless coding scheme that minimizes the average peak age based on the analysis of a discrete-time Geo/G/1 queue. For more practical scenarios in which a special codeword is reserved for indicating an empty buffer, we propose an encoding scheme that assigns a codeword to the empty-buffer state based on an estimate of the buffer idle time.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this work, it is shown that by weakening the adversary slightly, and allowing vanishing probability of error, the capacity of symmetric private information retrieval increases to $1- \frac{T+B}{N}$.
Abstract: The capacity of symmetric private information retrieval with $K$ messages, $N$ servers (out of which any $T$ may collude), and an omniscient Byzantine adversary (who can corrupt any $B$ answers) is shown to be $1-\frac{T+2B}{N}$ [1], under the requirement of zero probability of error. In this work, we show that by weakening the adversary slightly (either providing secret low-rate channels between the servers and the user, or limiting the observation of the adversary), and allowing a vanishing probability of error, the capacity increases to $1- \frac{T+B}{N}$.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: For the binary erasure channel, the polar-coding paradigm gives rise to codes that not only approach the Shannon limit but also do so under the best possible scaling of their block length as a function of the gap to capacity.
Abstract: We prove that, at least for the binary erasure channel, the polar-coding paradigm gives rise to codes that not only approach the Shannon limit but, in fact, do so under the best possible scaling of their block length as a function of the gap to capacity. This result exhibits the first known family of binary codes that attain both optimal scaling and quasi-linear complexity of encoding and decoding. Specifically, for any fixed $\delta > 0$, we exhibit binary linear codes that ensure reliable communication at rates within $\epsilon > 0$ of capacity with block length $n = O(1/\epsilon^{2+\delta})$, construction complexity $\Theta(n)$, and encoding/decoding complexity $\Theta(n\log n)$.
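The polarization effect behind this result is especially transparent for the BEC, where each synthetic channel is again a BEC and the erasure probabilities follow the standard textbook recursion $z \mapsto 2z - z^2$ and $z \mapsto z^2$. A minimal sketch (the recursion only; the paper's contribution is the scaling analysis, not this computation):

```python
def polarize_bec(eps, levels):
    """Erasure probabilities of the 2**levels synthetic channels of a BEC(eps)."""
    zs = [eps]
    for _ in range(levels):
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    return zs

zs = polarize_bec(0.5, 10)                 # 1024 synthetic channels
good = sum(1 for z in zs if z < 1e-3)      # nearly perfect channels
bad = sum(1 for z in zs if z > 1 - 1e-3)   # nearly useless channels
# The average erasure probability stays at eps; as the block length grows,
# the fraction of good channels approaches the capacity 1 - eps
```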

Proceedings ArticleDOI
01 Nov 2018
TL;DR: This paper first constructs two classes of optimal cyclic $(r, \delta)$-LRCs with unbounded lengths and minimum distances $\delta+1$ or $\delta+2$, and then presents a construction of optimal cyclic $(r, \delta)$-LRCs with unbounded length and larger minimum distance $2\delta$.
Abstract: Prakash et al. [2] introduced the concept of $(r, \delta)$ locally repairable codes ($(r, \delta)$-LRCs for short) for tolerating multiple failed nodes. An $(r, \delta)$-LRC is called optimal if it achieves the Singleton-type bound. In this paper, inspired by the work of [3], we first construct two classes of optimal cyclic $(r, \delta)$-LRCs with unbounded lengths (i.e., lengths of these codes are independent of the alphabet size) and minimum distances $\delta+1$ or $\delta+2$, which generalize the results about the $\delta=2$ case given in [3]. Second, with a slightly stronger condition, we present a construction of optimal cyclic $(r, \delta)$-LRCs with unbounded length and larger minimum distance $2\delta$. Furthermore, when $\delta = 3$, we provide another class of optimal cyclic $(r, 3)$-LRCs with unbounded length and larger minimum distance 6.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this article, it is shown that for a particular class of non-MDS linear codes, the PIR capacity is equal to the MDS-PIR capacity, i.e., the maximum possible PIR rate for MDS-coded DSSs.
Abstract: We consider private information retrieval (PIR) for distributed storage systems (DSSs) with noncolluding nodes where data is stored using a non-maximum distance separable (MDS) linear code. It was recently shown that if data is stored using a particular class of non-MDS linear codes, the MDS-PIR capacity, i.e., the maximum possible PIR rate for MDS-coded DSSs, can be achieved. For this class of codes, we prove that the PIR capacity is indeed equal to the MDS-PIR capacity, giving the first family of non-MDS codes for which the PIR capacity is known. For other codes, we provide asymmetric PIR protocols that achieve a strictly larger PIR rate compared to existing symmetric PIR protocols.

Proceedings ArticleDOI
10 Sep 2018
TL;DR: In this article, the authors studied the secure index coding problem in the presence of an eavesdropper, and established an outer bound on the underlying secure capacity region of the problem, which includes polymatroidal and security constraints.
Abstract: We study the index coding problem in the presence of an eavesdropper, where the aim is to communicate without allowing the eavesdropper to learn any single message aside from the messages it may already know as side information. We establish an outer bound on the underlying secure capacity region of the index coding problem, which includes polymatroidal and security constraints, as well as the set of additional decoding constraints for legitimate receivers. We then propose a secure variant of the composite coding scheme, which yields an inner bound on the secure capacity region of the index coding problem. For the achievability of secure composite coding, a secret key with vanishingly small rate may be needed to ensure that each legitimate receiver who wants the same message as the eavesdropper, knows at least two more messages than the eavesdropper. For all securely feasible index coding problems with four or fewer messages, our numerical results establish the secure index coding capacity region.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this article, the authors identify the performance limits of the multiple-input single-output (MISO) broadcast channel where cache-aided users coexist with users that do not have caches.
Abstract: The work identifies the performance limits of the multiple-input single-output broadcast channel where cache-aided users coexist with users that do not have caches. The main contribution is a new algorithm that employs perfect matchings on a bipartite graph to offer full multiplexing as well as full coded-caching gains to both cache-aided and cache-less users. This performance is shown to be within a factor of at most 3 from the optimal, under the assumption of linear one-shot schemes. An interesting outcome is the following: starting from a single-stream centralized coded caching setting with normalized cache size $\gamma$, every addition of an extra transmit antenna allows for the addition of approximately $1/\gamma$ extra cache-less users, at no added delay cost. For example, starting from a single-stream coded caching setting with $\gamma = 1/100$, every addition of a transmit antenna allows for serving approximately 100 additional cache-less users, without any increase in the overall delay. Finally, the work reveals the role of multiple antennas in removing the penalties typically associated with cache-size unevenness, as it shows that the performance in the presence of both cache-less and cache-aided users matches the optimal (under uncoded cache placement) performance of the corresponding symmetric case where the same cumulative cache volume is split evenly across all users.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this paper, the authors consider a covert communication scenario where a transmitter wishes to communicate simultaneously to two legitimate receivers while ensuring that the communication is not detected by an adversary, the warden.
Abstract: We consider a covert communication scenario where a transmitter wishes to communicate simultaneously to two legitimate receivers while ensuring that the communication is not detected by an adversary, the warden. The legitimate receivers and the adversary observe the transmission from the transmitter via a three-user discrete or Gaussian memoryless broadcast channel. We focus on the case where the “no-input” symbol is not redundant, i.e., the output distribution at the warden induced by the no-input symbol is not a mixture of the output distributions induced by other input symbols, so that the covert communication is governed by the square root law, i.e., at most $\Theta(\sqrt{n})$ bits can be transmitted over n channel uses. We show that for such a setting, a simple time-division strategy achieves the optimal throughputs for a class of broadcast channels. Our result implies that a code that uses two separate optimal point-to-point codes each designed for the constituent channels and each used for a fraction of the time is optimal in the sense that it achieves the best constants of the $\sqrt{n}$-scaling for the throughputs. Our proof strategy combines several elements in the network information theory literature, including concave envelope representations of the capacity regions of broadcast channels and El Gamal’s outer bound for more capable broadcast channels.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: PDA characterizations are explored for two other models, the device-to-device (D2D) network and the distributed computing system, which allows the PDA-based schemes originally designed for the shared link to be transferred to those networks.
Abstract: Recently, the placement delivery array (PDA) was formulated to describe the placement and delivery phases with a single array for centralized coded caching schemes over an error-free shared link. In this paper, we explore PDA characterizations for two other models: the device-to-device (D2D) network and the distributed computing system. The inherent connections between these systems and the shared-link caching system are displayed through PDAs, which allows us to transfer PDA-based schemes originally designed for the shared link to those networks. As a result, combined with existing constructions, we can obtain schemes requiring a low subpacketization level for the D2D network or a smaller number of files for the distributed computing system.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: A novel deep learning detector based on the BP algorithm (DLBP detector) is proposed, which combines the BP algorithm with deep learning methods and has lower complexity and better bit error rate (BER) performance.
Abstract: The belief propagation (BP) algorithm exhibits outstanding detection performance for multiple-input multiple-output (MIMO) transmission. However, this algorithm may fail to converge because of the fully connected factor graph under the MIMO settings. To address this issue, a novel deep learning detector based on the BP algorithm (DLBP detector) is proposed, which combines the BP algorithm with deep learning methods. The log-likelihood ratio (LLR) messages are passed on the MIMO factor graph to detect the signals from the transmitters. In addition, the weights are trained via deep learning methods and then assigned to the messages updated in the DLBP detector. Finally, simulations show that, compared with the BP detector and the damped BP detector, the DLBP detector has lower complexity and better bit error rate (BER) performance.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: Simulation results show that polar (sub)codes with 16×16 kernels can outperform polar codes with Arikan kernel, while having lower decoding complexity.
Abstract: A decoding algorithm for polar codes with binary 16×16 kernels with polarization rate 0.51828 and scaling exponents 3.346 and 3.450 is presented. The proposed approach exploits the relationship of the considered kernels and the Arikan matrix to significantly reduce the decoding complexity without any performance loss. Simulation results show that polar (sub)codes with 16×16 kernels can outperform polar codes with Arikan kernel, while having lower decoding complexity.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: An outer bound is obtained on the region of the vector Gaussian CEO problem by means of a technique that relies on the de Bruijn identity and the properties of Fisher information, for which the optimal rate-distortion region is characterized.
Abstract: In this paper, we study the vector Gaussian Chief Executive Officer (CEO) problem under the logarithmic loss distortion measure. Specifically, $K \geq 2$ agents observe independently corrupted Gaussian noisy versions of a remote vector Gaussian source, and communicate independently with a decoder, or CEO, over rate-constrained noise-free links. The CEO wants to reconstruct the remote source to within some prescribed distortion level, where the incurred distortion is measured under the logarithmic loss penalty criterion. We find an explicit characterization of the rate-distortion region of this model. For the proof of this result, we obtain an outer bound on the region of the vector Gaussian CEO problem by means of a technique that relies on the de Bruijn identity and the properties of Fisher information. The approach is similar to the Ekrem-Ulukus outer bounding technique for the vector Gaussian CEO problem under the quadratic distortion measure, which was found there to be generally non-tight, but is shown here to yield a complete characterization of the region for the case of the logarithmic loss measure. Also, we show that Gaussian test channels with time-sharing exhaust the Berger-Tung inner bound, which is optimal. Furthermore, we show that the established result under logarithmic loss provides an outer bound for a quadratic vector Gaussian CEO problem with a determinant constraint, for which we characterize the optimal rate-distortion region.

Proceedings ArticleDOI
25 Nov 2018
TL;DR: Using convex optimization methods, an input distribution that achieves the secrecy capacity of a general degraded additive noise wiretap channel is presented, characterized by conditions expressed in terms of integral equations.
Abstract: In this paper, an analysis of an input distribution that achieves the secrecy capacity of a general degraded additive noise wiretap channel is presented. In particular, using convex optimization methods, an input distribution that achieves the secrecy capacity is characterized by conditions expressed in terms of integral equations. The new conditions are used to study the structure of the optimal input distribution for three different additive noise cases: vector Gaussian; scalar Cauchy; and scalar exponential.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: The classical joint source-channel coding (JSCC) setup is modified to include systematic FEC, the mismatched FEC decoder, and the dematcher, and error exponents and achievable rates are derived.
Abstract: Probabilistic Amplitude Shaping (PAS) is a coded modulation scheme in which the encoder is a concatenation of a distribution matcher with a systematic Forward Error Correction (FEC) code. For reduced computational complexity, the decoder can be chosen as a concatenation of a mismatched FEC decoder and a dematcher. This work studies the theoretical limits of PAS. The classical joint source-channel coding (JSCC) setup is modified to include systematic FEC and the mismatched FEC decoder. At each step, error exponents and achievable rates for the corresponding setup are derived.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: A simple upper bound on the maximum minimum distance can be obtained from a sequence of Singleton bounds and can be achieved by randomly choosing the nonzero elements of the generator matrix from a field of a large enough size.
Abstract: The problem of designing a linear code with the largest possible minimum distance, subject to support constraints on the generator matrix, has recently found several applications. These include multiple access networks [3], [5] as well as weakly secure data exchange [4], [8]. A simple upper bound on the maximum minimum distance can be obtained from a sequence of Singleton bounds (see (3) below) and can further be achieved by randomly choosing the nonzero elements of the generator matrix from a field of a large enough size.
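A toy instance of this setup (hypothetical support sets and a hand-picked generator matrix; the result cited above says random nonzero entries over a large enough field achieve the bound with high probability): with $k = 2$, $n = 5$, and each row restricted to a support of size 4, the Singleton-type bound gives $d \le 4$, and brute force confirms the matrix below attains it:

```python
import itertools

p, k, n = 101, 2, 5
# Hypothetical support constraints: row i may be nonzero only on allowed[i]
allowed = [{0, 1, 2, 3}, {1, 2, 3, 4}]
G = [[1, 1, 1, 1, 0],
     [0, 1, 2, 3, 1]]

def min_distance(G):
    """Minimum Hamming weight over all nonzero codewords (brute force)."""
    best = n
    for msg in itertools.product(range(p), repeat=k):
        if any(msg):
            cw = [sum(m * g for m, g in zip(msg, col)) % p for col in zip(*G)]
            best = min(best, sum(c != 0 for c in cw))
    return best

d = min_distance(G)  # d == 4, matching the Singleton-type bound
```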

Proceedings ArticleDOI
01 Nov 2018
TL;DR: A graph-based scheme is proposed that leverages user side information and coding to efficiently exploit and explore over wireless, and its performance is evaluated.
Abstract: We consider wireless recommender systems that need to learn the user preferences (explore) and use them to accordingly decide what are the most profitable recommendations to make (exploit), under bandwidth constraints. We propose a graph-based scheme that leverages user side information and coding to efficiently exploit and explore over wireless, and evaluate its performance.