
Showing papers presented at the "Information Theory Workshop" in 2012


Proceedings ArticleDOI
01 Sep 2012
TL;DR: In this paper, point-to-point communication over a fading channel with an energy-harvesting (EH) transmitter is studied, jointly considering the energy costs of transmission and processing; under the assumption of known energy arrival and fading profiles, an optimal transmission policy for throughput maximization is investigated.
Abstract: In wireless networks, the energy consumed for communication includes both transmission and processing energy. In this paper, point-to-point communication over a fading channel with an energy-harvesting transmitter is studied, jointly considering the energy costs of transmission and processing. Under the assumption of known energy arrival and fading profiles, an optimal transmission policy for throughput maximization is investigated. Assuming that the transmitter has a sufficient amount of data in its buffer at the beginning of the transmission period, the average throughput by a given deadline is maximized. Furthermore, a “directional glue pouring algorithm” that computes the optimal transmission policy is described.
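For intuition, a minimal sketch of the directional idea in its simplest special case (static channel, zero processing cost, known energy arrivals), where the optimal schedule is the classical forward-only water-filling solution; the paper's directional glue pouring algorithm additionally handles fading and processing energy, and all names and numbers below are illustrative only.

def forward_waterfilling(arrival_times, energies, deadline):
    """Toy throughput-maximizing power schedule for an energy-harvesting
    transmitter over a static channel. Energy is usable only after it
    arrives, so "water" can be poured forward in time but never backward."""
    t, used = 0.0, 0.0
    schedule = []  # (start, end, power) segments
    boundaries = sorted(set(b for b in arrival_times if b > 0) | {deadline})
    while t < deadline:
        # Tightest feasible constant power until some future boundary.
        power, end = min(
            ((sum(e for a, e in zip(arrival_times, energies) if a < b) - used)
             / (b - t), b)
            for b in boundaries if b > t)
        schedule.append((t, end, power))
        used += power * (end - t)
        t = end
    return schedule

# Energy: 4 J at t=0 and 12 J at t=2, deadline t=4. The optimal policy is
# 2 W on [0, 2) and 6 W on [2, 4): power can only increase over time.
print(forward_waterfilling([0.0, 2.0], [4.0, 12.0], 4.0))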

105 citations


Book ChapterDOI
01 Sep 2012
TL;DR: An elementary inequality is shown which essentially upper bounds such a ‘weak expectation’ by two terms, the first of which is independent of f, while the second depends only on the ‘variance’ of f under the uniform distribution.
Abstract: Recently, there has been renewed interest in basing cryptographic primitives on weak secrets, where the only information about the secret is some non-trivial amount of (min-) entropy. From a formal point of view, such results require upper bounding the expectation of some function f(X), where X is the weak source in question. We show an elementary inequality which essentially upper bounds such a ‘weak expectation’ by two terms, the first of which is independent of f, while the second depends only on the ‘variance’ of f under the uniform distribution. Quite remarkably, as relatively simple corollaries of this elementary inequality, we obtain some ‘unexpected’ results, in several cases noticeably simplifying or improving prior techniques for the same problem. Examples include non-malleable extractors, leakage-resilient symmetric encryption, seed-dependent condensers, and improved entropy loss for the leftover hash lemma.
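The core of such a bound can be recovered with a single Cauchy-Schwarz step; a sketch, assuming collision entropy H₂ as the entropy measure (the paper's exact statement and side conditions may differ):

\[
\mathbb{E}[f(X)] - \mathbb{E}[f(U_m)]
= \sum_{x}\Pr[X=x]\,\bigl(f(x)-\mathbb{E}[f(U_m)]\bigr)
\;\le\; \sqrt{\sum_{x}\Pr[X=x]^{2}}\cdot\sqrt{\sum_{x}\bigl(f(x)-\mathbb{E}[f(U_m)]\bigr)^{2}}
\;=\; \sqrt{2^{\,m-H_{2}(X)}\,\operatorname{Var}[f(U_m)]},
\]

where X takes values in {0,1}^m and U_m is uniform on {0,1}^m; if H₂(X) ≥ m − d, the correction term is at most √(2^d · Var[f(U_m)]), matching the two-term structure described above.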

86 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: This work studies distributed compression for the uplink of a cloud radio access network, where multiple multi-antenna base stations communicate with a central unit via capacity-constrained back-haul links.
Abstract: This work studies distributed compression for the uplink of a cloud radio access network, where multiple multi-antenna base stations (BSs) communicate with a central unit, also referred to as cloud decoder, via capacity-constrained back-haul links. Distributed source coding strategies are potentially beneficial since the signals received at different BSs are correlated. However, they require each BS to have information about the joint statistics of the received signals across the BSs, and are generally sensitive to uncertainties regarding such information. Motivated by this observation, a robust compression method is proposed to cope with uncertainties on the correlation of the received signals. The problem is formulated using a deterministic worst-case approach, and an algorithm is proposed that achieves a stationary point for the problem. From numerical results, it is observed that the proposed robust compression scheme compensates for a large fraction of the performance loss induced by the imperfect statistical information.

74 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: Besides their excellent performance near the capacity limit, LDA lattices have a construction that is conceptually simpler than previously proposed lattices based on multiple nested binary codes, and LDA decoding is less complex than real-valued message passing.
Abstract: We describe a new family of integer lattices built from Construction A and non-binary LDPC codes. An iterative message-passing algorithm suitable for decoding in high dimensions is proposed. This family of lattices, referred to as LDA lattices, follows the recent transition of Euclidean codes from their classical theory to their modern approach, as announced by the pioneering work of Loeliger (1997) and of Erez, Litsyn, and Zamir (2004–2005). Besides their excellent performance near the capacity limit, the construction of LDA lattices is conceptually simpler than that of previously proposed lattices based on multiple nested binary codes, and LDA decoding is less complex than real-valued message passing.
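For reference, Construction A itself is one line: a vector is a lattice point exactly when it reduces, modulo q, to a codeword. A minimal sketch with a placeholder codeword (the paper's non-binary LDPC code and its message-passing decoder are not reproduced here):

import numpy as np

def construction_a_point(codeword, integer_shift, q):
    """Construction A: Lambda = {x in Z^n : x mod q is in C}, so every
    lattice point is a codeword plus q times an integer vector."""
    return np.asarray(codeword) + q * np.asarray(integer_shift)

codeword = [1, 4, 0, 2]          # toy stand-in for a codeword over Z_5
point = construction_a_point(codeword, [3, -1, 0, 2], q=5)
print(point)                     # [16 -1  0 12]
print(point % 5)                 # recovers the codeword: [1 4 0 2]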

70 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: This paper addresses capacity comparisons when the total amount of analog radio hardware is bounded, and finds that repurposing radios for cancellation can be beneficial: the resulting full-duplex system performs better in some practical SNR regimes and almost always outperforms half-duplex in symmetric degrees of freedom (the large-SNR regime).
Abstract: Full-duplex communication requires nodes to cancel their own signal, which appears as interference at their receive antennas. Recent work has experimentally demonstrated the feasibility of full-duplex communication using software radios. In this paper, we address capacity comparisons when the total amount of analog radio hardware is bounded. Under this constraint, it is not immediately clear whether one should use these radios to perform full-duplex self-interference cancellation or to gain an additional MIMO multiplexing advantage. We find that repurposing radios for cancellation, instead of using all of them for half-duplex over-the-air transmission, can be beneficial: the resulting full-duplex system performs better in some practical SNR regimes and almost always outperforms half-duplex in symmetric degrees of freedom (the large-SNR regime).

58 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: In this article, the maximal achievable rate for a given block length n and block error probability o over Rayleigh block-fading channels in the noncoherent setting and in the finite block-length regime was studied.
Abstract: We study the maximal achievable rate R*(n, ∈) for a given block-length n and block error probability o over Rayleigh block-fading channels in the noncoherent setting and in the finite block-length regime. Our results show that for a given block-length and error probability, R*(n, ∈) is not monotonic in the channel's coherence time, but there exists a rate maximizing coherence time that optimally trades between diversity and cost of estimating the channel.

53 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: Through simulations, the proposed scheme is shown to outperform the naive feedback scheme of independently quantizing the channel matrices, in the sense that it yields better sum-rate performance for the same number of feedback bits.
Abstract: A simple limited feedback scheme is proposed for interference alignment on the K-user Multiple-Input Multiple-Output Interference Channel (MIMO-IC). The scaling of the number of feedback bits with the transmit power required to preserve the multiplexing gain achievable under perfect channel state information (CSI) is derived. This result is obtained through a reformulation of the interference alignment problem that exploits the benefits of quantization on the Grassmann manifold, which has been well investigated for the single-user MIMO channel. Furthermore, we show through simulations that the proposed scheme outperforms the naive feedback scheme of independently quantizing the channel matrices, in the sense that it yields better sum-rate performance for the same number of feedback bits.
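A sketch of the quantization step on the Grassmann manifold, assuming a random codebook and the chordal-distance metric (both standard choices; the paper's codebook design and alignment reformulation are not reproduced here):

import numpy as np

rng = np.random.default_rng(0)

def random_semi_unitary(n, d):
    """A random n x d matrix with orthonormal columns, i.e. a point on
    the Grassmann manifold G(n, d), drawn via QR of a Gaussian matrix."""
    g = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))
    q, _ = np.linalg.qr(g)
    return q

def grassmann_quantize(U, codebook):
    """Feed back only the index of the codebook subspace closest to U in
    chordal distance, i.e. the one maximizing ||C^H U||_F."""
    return max(range(len(codebook)),
               key=lambda i: np.linalg.norm(codebook[i].conj().T @ U, 'fro'))

n, d, bits = 4, 2, 6
codebook = [random_semi_unitary(n, d) for _ in range(2 ** bits)]
U = random_semi_unitary(n, d)            # subspace to be fed back
print(grassmann_quantize(U, codebook))   # index sent over the feedback link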

49 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: This paper introduces a 1-bit compressive sensing reconstruction algorithm that is not only robust against bit flips in the binary measurement vector, but also does not require a priori knowledge of the sparsity level of the signal to be reconstructed.
Abstract: In this paper, we introduce a 1-bit compressive sensing reconstruction algorithm that is not only robust against bit flips in the binary measurement vector, but also does not require a priori knowledge of the sparsity level of the signal to be reconstructed. Through numerical experiments, we show that our algorithm outperforms state-of-the-art reconstruction algorithms for the 1-bit compressive sensing problem in the presence of random bit flips and when the sparsity level of the signal deviates from its estimated value.
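For context, a minimal sketch of binary iterative hard thresholding (BIHT), a standard 1-bit reconstruction baseline: it needs the sparsity level K, which is precisely the a-priori knowledge the proposed algorithm dispenses with.

import numpy as np

def biht(y, A, K, iters=100):
    """Binary iterative hard thresholding for y = sign(A x), x K-sparse.
    Only the direction of x is recoverable from 1-bit measurements."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        g = x + (1.0 / m) * (A.T @ (y - np.sign(A @ x)))  # consistency step
        x = np.zeros(n)
        keep = np.argsort(np.abs(g))[-K:]                 # keep K largest
        x[keep] = g[keep]
    return x / max(np.linalg.norm(x), 1e-12)

rng = np.random.default_rng(1)
n, m, K = 128, 512, 4
x0 = np.zeros(n); x0[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
x0 /= np.linalg.norm(x0)
A = rng.standard_normal((m, n))
x_hat = biht(np.sign(A @ x0), A, K)
print(np.linalg.norm(x_hat - x0))   # reconstruction error should be small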

48 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: This paper presents two complementary constructions of finite-length non-binary protograph-based codes, with a focus on the short block-length regime: the first class is based on the existing approach of applying copy-and-permute operations to the constituent protograph with unweighted edges, followed by assigning non-binary scales to the edges of the derived graph.
Abstract: This paper presents two complementary constructions of finite-length non-binary protograph-based codes, with a focus on the short block-length regime. The first class is based on the existing approach of applying copy-and-permute operations to the constituent protograph with unweighted edges, followed by assigning non-binary scales to the edges of the derived graph. The second class is novel and is based on the so-called graph cover of a non-binary protograph: the original protograph has fixed edge scalings, and the copy-and-permute operations are applied to the edge-weighted protograph. The second class is arguably more restrictive, but in turn it offers simpler design and implementation. We provide designs and constructions of these non-binary codes for short block-lengths. Performance, cycle distribution, and the minimum distance of the binary images of selected codes over the AWGN channel are provided for information block-lengths as low as 64 bits.
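A sketch of the copy-and-permute (lifting) operation behind the first class, assuming circulant permutations, with non-binary edge scales assigned afterwards in the order described above (all parameters illustrative):

import numpy as np

def lift_protograph(B, Z, shifts, seed=0):
    """Copy-and-permute: replace each edge of the 0/1 base matrix B by a
    Z x Z circulant permutation, then scale it by a non-zero label in
    {1, ..., 7}, standing in for an element of GF(8)."""
    rng = np.random.default_rng(seed)
    m, n = B.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            if B[i, j]:
                P = np.roll(np.eye(Z, dtype=int), shifts[i][j], axis=1)
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = int(rng.integers(1, 8)) * P
    return H

B = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
H = lift_protograph(B, Z=4, shifts=[[1, 0, 2, 0], [0, 3, 1, 2]])
print(H.shape)   # (8, 16): a small non-binary parity-check matrix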

40 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: In this paper, the authors considered the N-relay Gaussian diamond network and showed that several strategies can achieve the capacity of this network within O(log N) bits independent of the channel configurations and the operating SNR.
Abstract: We consider the N-relay Gaussian diamond network, where a source node communicates to a destination node via N parallel relays. We show that several strategies can achieve the capacity of this network within O(log N) bits, independent of the channel configurations and the operating SNR. The first of these strategies is partial decode-and-forward: the source node broadcasts independent messages to the relays at appropriately chosen rates, which the relays in turn decode and forward to the destination over a multiple-access channel. The same performance can also be achieved by compress-and-forward, quantize-map-and-forward, or noisy network coding if the relays quantize their observations at a resolution that decreases with N, instead of quantizing at the noise level. The best capacity approximations currently available for this network are within O(N) bits, which follow from the corresponding capacity approximations for general Gaussian relay networks.

39 citations


Proceedings ArticleDOI
01 Sep 2012
TL;DR: This paper extends the problem of cooperative data exchange to the case of multiple unicasts to a set of n clients, where each client c_i is interested in a specific message x_i and the clients cooperate with each other to compensate for the errors that occur over the downlink.
Abstract: The advantages of coded cooperative data exchange have been studied in the literature. In this problem, a group of wireless clients are interested in the same set of packets (a multicast scenario). Each client initially holds a subset of the packets and wishes to obtain its missing packets in a cooperative setting by exchanging packets with its peers. Cooperation via short-range transmission links among the clients (which are faster, cheaper, and more reliable) is an alternative to retransmissions by the base station. In this paper, we extend the problem of cooperative data exchange to the case of multiple unicasts to a set of n clients, where each client c_i is interested in a specific message x_i and the clients cooperate with each other to compensate for the errors that occur over the downlink. Moreover, our proposed method maintains the secrecy of individuals' messages at the price of a small overhead.

Proceedings ArticleDOI
01 Sep 2012
TL;DR: In this paper, a channel coding scheme is proposed that achieves the capacity of any discrete memoryless channel based solely on the techniques of polar coding: source polarization and randomness extraction via polarization are employed to shape uniformly distributed i.i.d. random variables into approximately i.i.d. random variables following the capacity-achieving distribution, and this shaper is then combined with a variant of polar channel coding, constructed by duality with source coding, to achieve the channel capacity.
Abstract: We construct a channel coding scheme to achieve the capacity of any discrete memoryless channel based solely on the techniques of polar coding. In particular, we show how source polarization and randomness extraction via polarization can be employed to “shape” uniformly-distributed i.i.d. random variables into approximate i.i.d. random variables distributed according to the capacity-achieving distribution. We then combine this shaper with a variant of polar channel coding, constructed by the duality with source coding, to achieve the channel capacity. Our scheme inherits the low complexity encoder and decoder of polar coding. It differs conceptually from Gallager's method for achieving capacity, and we discuss the advantages and disadvantages of the two schemes. An application to the AWGN channel is discussed.

Proceedings ArticleDOI
27 Apr 2012
TL;DR: The paper presents two classes of codes that allow node repair to be performed by contacting 2 and 3 surviving nodes respectively, and shows that both classes are good in terms of their rate and minimum distance, and allow their rate to be bartered for greater flexibility in the repair process.
Abstract: This paper studies the design of codes for distributed storage systems (DSS) that enable local repair in the event of node failure. This paper presents locally repairable codes based on low degree multivariate polynomials. Its code construction mechanism extends work on Noisy Interpolating Set by Dvir et al. [1]. The paper presents two classes of codes that allow node repair to be performed by contacting 2 and 3 surviving nodes respectively. It further shows that both classes are good in terms of their rate and minimum distance, and allow their rate to be bartered for greater flexibility in the repair process.
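A toy illustration of the local-repair property only (not the paper's polynomial construction): if every stored symbol sits in a group of three whose XOR is zero, any single failure is repaired by contacting the 2 surviving nodes of its group.

def encode_lrc(data):
    """Toy locally repairable layout: split the data symbols into pairs and
    append a local XOR parity, giving repair groups of size 3."""
    nodes, groups = [], []
    for i in range(0, len(data), 2):
        a, b = data[i], data[i + 1]
        groups.append([len(nodes), len(nodes) + 1, len(nodes) + 2])
        nodes += [a, b, a ^ b]
    return nodes, groups

def repair(nodes, groups, failed):
    """Repair a failed node by XORing the other 2 nodes in its group."""
    group = next(g for g in groups if failed in g)
    survivors = [i for i in group if i != failed]
    return nodes[survivors[0]] ^ nodes[survivors[1]]

nodes, groups = encode_lrc([5, 9, 3, 12])
assert repair(nodes, groups, failed=2) == nodes[2]   # 5 ^ 9 == 12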

Proceedings ArticleDOI
01 Sep 2012
TL;DR: In this article, a simple proof of threshold saturation for a wide class of coupled vector recursions is presented, and the conditions of the theorem are verified for the density-evolution equations of: (i) joint decoding of irregular low-density parity-check (LDPC) codes for a Slepian-Wolf problem with erasures, (ii) joint decoding of irregular LDPC codes on an erasure multiple-access channel, and (iii) admissible protograph codes on the BEC.
Abstract: Convolutional low-density parity-check (LDPC) codes (or spatially-coupled codes) have now been shown to achieve capacity on binary-input memoryless symmetric channels. The principle behind this surprising result is the threshold-saturation phenomenon, which is defined by the belief-propagation threshold of the spatially-coupled ensemble saturating to a fundamental threshold defined by the uncoupled system. Previously, the authors demonstrated that potential functions can be used to provide a simple proof of threshold saturation for coupled scalar recursions. In this paper, we present a simple proof of threshold saturation that applies to a wide class of coupled vector recursions. The conditions of the theorem are verified for the density-evolution equations of: (i) joint decoding of irregular LDPC codes for a Slepian-Wolf problem with erasures, (ii) joint decoding of irregular LDPC codes on an erasure multiple-access channel, and (iii) admissible protograph codes on the BEC. This proves threshold saturation for these systems.

Proceedings ArticleDOI
01 Sep 2012
TL;DR: A new class of decoders for low-density parity-check (LDPC) codes is presented, based on the alternating direction method of multipliers (ADMM) decomposition technique for LP decoding, which achieves much better error performance than the LP decoder at low SNRs.
Abstract: In this paper, we present a new class of decoders for low-density parity-check (LDPC) codes. We are motivated by the observation that the linear programming (LP) decoder has worse error performance than belief propagation (BP) decoders at low SNRs. We base our new decoders on the alternating direction method of multipliers (ADMM) decomposition technique for LP decoding. The ADMM not only efficiently solves the LP decoding problem, but also makes it possible to explore other decoding algorithms. In particular, we add various penalty terms to the linear objective of LP decoding with the goal of suppressing pseudocodewords. Simulation results show that the new decoders achieve much better error performance than the LP decoder at low SNRs. Moreover, as with the LP decoder, no error floor is observed at high SNRs.
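The modification can be summarized in one line: the decoder minimizes a penalized objective over the same relaxed polytope used in LP decoding. A hedged reconstruction with one example penalty (the paper explores several):

\[
\min_{x \in \mathcal{P}} \;\; \gamma^{\top} x \;+\; \alpha \sum_{i} g(x_i),
\qquad \text{e.g.}\;\; g(x_i) = -\left|x_i - \tfrac{1}{2}\right|,
\]

where γ is the vector of log-likelihood ratios, 𝒫 is the LP-decoding relaxation of the codeword polytope, and the concave penalty g pushes the solution away from fractional points, i.e. away from pseudocodewords.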

Proceedings ArticleDOI
01 Sep 2012
TL;DR: It is shown that if the set of defective items is uniformly distributed, then an l-stage pooling strategy can identify the defective set in O(l·|D|·|N|^(1/l)) tests, on average.
Abstract: In conventional group testing, the goal is to detect a small subset of defective items D in a large population N by grouping arbitrary subsets of N into different pools. The result of each group test T is a binary output depending on whether the group contains a defective item or not. The main challenge is to minimize the number of pools required to identify the set D. Motivated by applications in network monitoring and infection propagation, we consider the problem of group testing with graph constraints. As opposed to conventional group testing, where any subset of items can be pooled, here a test is admissible only if it induces a connected subgraph H ⊂ G. In contrast to the non-adaptive pooling process used in previous work, we first show that by exploiting an adaptive strategy, one can dramatically reduce the number of tests. More specifically, for any graph G, we devise a 2-approximation (and hence order-optimal) algorithm that locates the set of defective items D. To obtain a good compromise between adaptive and non-adaptive strategies, we then devise a multi-stage algorithm. In particular, we show that if the set of defective items is uniformly distributed, then an l-stage pooling strategy can identify the defective set in O(l·|D|·|N|^(1/l)) tests, on average. In particular, for l = log(|N|) stages, the number of tests reduces to 4|D| log(|N|), which in turn is order-optimal.
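A sketch of the adaptive idea in its unconstrained form (ignoring the connectivity requirement, which is the paper's main technical contribution): binary splitting locates the defectives in on the order of |D| log |N| tests. The has_defective oracle below is hypothetical and stands in for one pooled test.

def find_defectives(items, has_defective):
    """Adaptive group testing by binary splitting: has_defective(pool) is
    one pooled test, True iff the pool contains a defective item."""
    if not items or not has_defective(items):
        return []
    if len(items) == 1:
        return list(items)
    mid = len(items) // 2
    return (find_defectives(items[:mid], has_defective)
            + find_defectives(items[mid:], has_defective))

defective = {13, 42}
tests = []
def pooled_test(pool):
    tests.append(len(pool))               # record that one test was run
    return any(i in defective for i in pool)

print(find_defectives(list(range(64)), pooled_test), len(tests))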

Proceedings ArticleDOI
01 Sep 2012
TL;DR: A novel method for computing the erasure probability in the bit subchannels induced by the polarization kernel is proposed and the codes obtained using the proposed method outperform those based on the Arikan kernel.
Abstract: The problem of construction of binary polar codes with high-dimensional kernels is considered. A novel method for computing the erasure probability in the bit subchannels induced by the polarization kernel is proposed. The codes obtained using the proposed method outperform those based on the Arikan kernel.
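For comparison, the quantity in question is exactly computable for the 2x2 Arikan kernel on a BEC, where one polarization step maps an erasure probability z to 2z − z² (the degraded subchannel) and z² (the upgraded one); it is larger kernels that make this computation non-trivial. A sketch:

def bec_bit_channel_erasures(eps, levels):
    """Erasure probabilities of the 2**levels bit subchannels induced by
    the 2x2 Arikan kernel on a BEC(eps)."""
    zs = [eps]
    for _ in range(levels):
        zs = [f(z) for z in zs for f in (lambda z: 2*z - z*z, lambda z: z*z)]
    return zs

zs = bec_bit_channel_erasures(0.5, levels=3)
info = sorted(range(len(zs)), key=lambda i: zs[i])[:4]  # most reliable
print([round(z, 3) for z in zs], info)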

Proceedings ArticleDOI
01 Sep 2012
TL;DR: Simulation results show that the lattices constructed from polar codes outperform the benchmark Barnes-Wall lattices, which are constructed from Reed-Muller codes.
Abstract: We employ polar codes as the building blocks of Construction D to construct lattices for the additive white Gaussian noise (AWGN) channel. The construction of these component polar codes is based on the idea of Pedarsani et al. for binary-input memoryless symmetric (BMS) channels. Our lattice construction takes advantage of the performance gain of polar codes over Reed-Muller codes. Simulation results show that the lattices constructed from polar codes outperform the benchmark Barnes-Wall lattices, which are constructed from Reed-Muller codes.

Proceedings ArticleDOI
01 Sep 2012
TL;DR: It turns out that finding the optimal distortion levels depending on the channel gains is a non-trivial problem in the general N-relay setup.
Abstract: We evaluate the information-theoretic achievable rates of Quantize-Map-and-Forward (QMF) relaying schemes over Gaussian N-relay diamond networks. Focusing on vector Gaussian quantization at the relays, our goal is to understand how close to the cutset upper bound these schemes can get in the context of diamond networks, and how much benefit is obtained by optimizing the quantizer distortions at the relays. First, with noise-level quantization, we point out that the worst-case gap from the cutset upper bound is (N + log₂ N) bits/s/Hz. A better universal quantization level, found without using channel state information (CSI), leads to a sharpened gap of log₂ N + log₂(1 + N) + N log₂(1 + 1/N) bits/s/Hz. On the other hand, it turns out that finding the optimal distortion levels as a function of the channel gains is a non-trivial problem in the general N-relay setup. We solve the two-relay and the symmetric N-relay problems analytically, and show the improvement via numerical evaluations in both static and slow-fading channels.
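The two gap expressions are easy to compare numerically (values in bits/s/Hz; a quick sanity check, not taken from the paper):

from math import log2

for N in (2, 4, 8, 16, 64):
    noise_level = N + log2(N)
    universal = log2(N) + log2(1 + N) + N * log2(1 + 1 / N)
    print(N, round(noise_level, 2), round(universal, 2))
# For large N the sharpened gap grows like 2*log2(N) + log2(e),
# versus N + log2(N) for noise-level quantization.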

Proceedings ArticleDOI
01 Sep 2012
TL;DR: This work addresses the capacity region of multi-source multi-terminal network communication problems, studies the change in capacity when one moves from independent to dependent source information, and asks whether the trade-off between capacity and source independence is of a continuous nature.
Abstract: In this work, we address the capacity region of multi-source multi-terminal network communication problems, and study the change in capacity when one moves from independent to dependent source information. Specifically, we ask whether the trade-off between capacity and source independence is of a continuous nature. We tie the question at hand to that of edge removal, which has seen recent interest.

Proceedings ArticleDOI
01 Sep 2012
TL;DR: A sum-rate outer bound is derived, which is shown to unify a number of previously derived outer bounds for special cases of cooperation or feedback in the interference channel.
Abstract: The interference channel models a wireless network where several source-destination pairs compete for the same resources. This paper considers a 4-node network, where two nodes are sources and the other two are destinations. All nodes are full-duplex and cooperate to mitigate interference. A sum-rate outer bound is derived, which is shown to unify a number of previously derived outer bounds for special cases of cooperation or feedback. The approach is shown to extend to cooperative interference networks with more than two source-destination pairs and for any partial sum-rate. How the derived bound relates to other channel models including cognitive nodes, i.e., nodes that have non-causal knowledge of the messages of some other node, is also discussed. The bound is evaluated in Gaussian noise.

Proceedings ArticleDOI
01 Sep 2012
TL;DR: A new extremal inequality is proved by exploiting the connection between differential entropy and Fisher information, as well as some fundamental estimation-theoretic inequalities, and is used to establish an outer bound on the rate region of the vector Gaussian L-terminal CEO problem.
Abstract: We derive a lower bound on each supporting hyperplane of the rate region of the vector Gaussian multiterminal source coding problem by coupling it with the CEO problem through a limiting argument. The tightness of this lower bound in the high-resolution regime and the weak-dependence regime is proved.

Proceedings ArticleDOI
01 Sep 2012
TL;DR: A deterministic sequential coding scheme is proposed and shown to attain the optimal error exponent for any binary-input channel whose capacity is achieved by the uniform input distribution.
Abstract: This paper considers the problem of variable-length coding over a binary-input channel with noiseless feedback. A deterministic sequential coding scheme is proposed and shown to attain the optimal error exponent for any binary-input channel whose capacity is achieved by the uniform input distribution. The proposed scheme is deterministic and has only one phase of operation, in contrast to all previous coding schemes that achieve the optimal error exponent.

Proceedings ArticleDOI
01 Sep 2012
TL;DR: It is shown that the decoding complexity can be further reduced if suitable message passing schedules are applied within the decoding window, and an improvement-based schedule is presented that easily adapts to different ensemble structures, window sizes, and channel parameters.
Abstract: Window decoding schedules are very attractive for message passing decoding of spatially coupled LDPC codes. They take advantage of the inherent convolutional code structure and allow continuous transmission with low decoding latency and complexity. In this paper, we show that the decoding complexity can be further reduced if suitable message passing schedules are applied within the decoding window. An improvement-based schedule is presented that easily adapts to different ensemble structures, window sizes, and channel parameters. Its combination with a serial (on-demand) schedule is also considered. Results from a computer-search-based schedule are shown for comparison.

Proceedings ArticleDOI
01 Sep 2012
TL;DR: An algorithm is proposed to compress a target genome given a known reference genome: it first generates a mapping from the reference to the target genome, and then compresses this mapping with an entropy coder.
Abstract: DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target genome, and then compresses this mapping with an entropy coder. As an illustration of the performance: applying our algorithm to James Watson's genome with hg18 as a reference, we are able to reduce the 2991 megabyte (MB) genome down to 6.99 MB, while Gzip compresses it to 834.8 MB.
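A sketch of the pipeline's shape, using difflib to produce a reference-to-target mapping and zlib as a stand-in for the entropy coder (the paper's mapping and coder are more specialized):

import difflib, json, zlib

def compress_against_reference(reference, target):
    """Encode the target as copy/insert operations against the reference,
    then entropy-code the serialized mapping (zlib as a stand-in)."""
    ops = []
    matcher = difflib.SequenceMatcher(a=reference, b=target, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == 'equal':
            ops.append(('copy', i1, i2 - i1))    # region shared with reference
        else:
            ops.append(('ins', target[j1:j2]))   # novel or substituted bases
    return zlib.compress(json.dumps(ops).encode())

def decompress(reference, blob):
    out = []
    for op in json.loads(zlib.decompress(blob)):
        out.append(reference[op[1]:op[1] + op[2]] if op[0] == 'copy' else op[1])
    return ''.join(out)

ref = "ACGTACGTTTGACCA" * 100
tgt = ref[:500] + "GGG" + ref[503:]              # a small variant of ref
blob = compress_against_reference(ref, tgt)
assert decompress(ref, blob) == tgt
print(len(tgt), '->', len(blob), 'bytes')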

Proceedings ArticleDOI
01 Sep 2012
TL;DR: This paper builds on Cover's line of thought to consider priority encoding of communication over unknown channels without feedback, using fixed-length codes and from a single-shot, individual-channel perspective.
Abstract: The idea of modeling an unknown channel using a broadcast channel was first introduced by Cover in 1972. This paper builds on his line of thought to consider priority encoding of communication over unknown channels without feedback, using fixed-length codes and from a single-shot, individual-channel perspective. A ratio-regret metric is used to understand how well we can perform with respect to the actual channel realization.

Proceedings ArticleDOI
01 Sep 2012
TL;DR: The minimum distortion is characterized for the counterexample and it is shown that a combination of linear coding and dirty-paper coding (DPC) proposed in [1] achieves the minimum distortion.
Abstract: Motivated by the presence of an implicit communication channel in the asymptotic version of Witsenhausen's counterexample, implicit discrete memoryless channels (IDMCs) with discrete memoryless (DM) states are considered. Information-theoretic lower and upper bounds (based respectively on ideas from rate-distortion theory and hybrid coding) are derived on the optimal distortion in estimating the input of the implicit channel. The intuition gained from the IDMC with DM state model is then used to evaluate the optimal distortion for the asymptotic version of the Witsenhausen counterexample. The minimum distortion is characterized for the counterexample, and it is shown that a combination of linear coding and dirty-paper coding (DPC) proposed in [1] achieves the minimum distortion.

Proceedings ArticleDOI
01 Sep 2012
TL;DR: In this article, the authors investigate certain optimization problems for Shannon information measures, namely minimization of the joint and conditional entropies H(X, Y), H(X|Y), and H(Y|X), and maximization of the mutual information I(X; Y), over convex regions.
Abstract: We investigate certain optimization problems for Shannon information measures, namely, minimization of joint and conditional entropies H(X, Y), H(X|Y), H(Y|X), and maximization of mutual information I(X; Y), over convex regions. When restricted to the so-called transportation polytopes (sets of distributions with fixed marginals), very simple proofs of NP-hardness are obtained for these problems because in that case they are all equivalent, and their connection to the well-known SUBSET SUM and PARTITION problems is revealed. The computational intractability of the more general problems over arbitrary polytopes is then a simple consequence. Further, a simple class of polytopes is shown over which the above problems are not equivalent and their complexity differs sharply, namely, minimization of H(X, Y) and H(Y|X) is trivial, while minimization of H(X|Y) and maximization of I(X; Y) are strongly NP-hard problems. Finally, two new (pseudo)metrics on the space of discrete probability distributions are introduced, based on the so-called variation of information quantity, and NP-hardness of their computation is shown.
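The objective functions themselves are easy to evaluate at any point of such a polytope; the hardness lies entirely in optimizing them over the polytope. A reference implementation of the four quantities:

import numpy as np

def information_measures(P):
    """Given a joint distribution P[i, j] = Pr(X=i, Y=j), return
    H(X,Y), H(X|Y), H(Y|X), and I(X;Y) in bits."""
    P = np.asarray(P, dtype=float)
    def H(p):
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))
    Hxy = H(P.ravel())
    Hx, Hy = H(P.sum(axis=1)), H(P.sum(axis=0))
    return Hxy, Hxy - Hy, Hxy - Hx, Hx + Hy - Hxy

# A point of a transportation polytope (marginals fixed at [0.5, 0.5]
# for X and [0.25, 0.75] for Y):
P = [[0.25, 0.25],
     [0.00, 0.50]]
print(information_measures(P))   # (1.5, ~0.689, 0.5, ~0.311)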

Proceedings ArticleDOI
01 Sep 2012
TL;DR: In this article, it is shown that the relative information loss induced by reducing the dimensionality of the data after performing PCA is the same as in dimensionality reduction without PCA, and that the relative loss decreases with increasing sample size.
Abstract: In this work, we analyze principal component analysis (PCA) as a deterministic input-output system. We show that the relative information loss induced by reducing the dimensionality of the data after performing the PCA is the same as in dimensionality reduction without PCA. Furthermore, we analyze the case where the PCA uses the sample covariance matrix to compute the rotation. If the rotation matrix is not available at the output, we show that an infinite amount of information is lost. The relative information loss is shown to decrease with increasing sample size.
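A sketch of the system being analyzed, with the sample-covariance-based rotation made explicit (illustrative, not the paper's code): the rotation alone is invertible, hence lossless, while dropping components discards dimensions just as direct truncation would.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 5))  # samples x dims

# PCA rotation computed from the sample covariance matrix.
Xc = X - X.mean(axis=0)
cov = (Xc.T @ Xc) / (len(Xc) - 1)
eigvals, W = np.linalg.eigh(cov)     # W is an orthogonal rotation matrix
W = W[:, ::-1]                       # sort components by decreasing variance

Y_full = Xc @ W                      # rotation only: no information lost
Y_reduced = Y_full[:, :2]            # dimensionality-reduction step

print(np.allclose(Xc, Y_full @ W.T)) # True: the rotation is exactly undone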

Proceedings ArticleDOI
01 Sep 2012
TL;DR: This work investigates secure multi-party sampling problems involving more than two parties, and reduces the problem of characterizing distributions that can be securely sampled using pairwise setups to that of characterizing distributions that can be sampled without any setups.
Abstract: We investigate secure multi-party sampling problems involving more than two parties. In the public discussion model, we give a simple characterization of the distributions that can be sampled without any setup. In a model which allows private point-to-point communication, we reduce the problem of characterizing distributions that can be securely sampled using pairwise setups to the problem of characterizing distributions that can be sampled without any setups.