Author

Jack K. Wolf

Bio: Jack K. Wolf is an academic researcher from the University of California, San Diego. The author has contributed to research in topics: Decoding methods & Block code. The author has an h-index of 56 and has co-authored 260 publications receiving 15,233 citations. Previous affiliations of Jack K. Wolf include University of Massachusetts Amherst & Bell Labs.


Papers
Journal ArticleDOI
David Slepian, Jack K. Wolf
TL;DR: The minimum number of bits per character $R_X$ and $R_Y$ needed to encode these sequences so that they can be faithfully reproduced under a variety of assumptions regarding the encoders and decoders is determined.
Abstract: Correlated information sequences $\cdots, X_{-1}, X_0, X_1, \cdots$ and $\cdots, Y_{-1}, Y_0, Y_1, \cdots$ are generated by repeated independent drawings of a pair of discrete random variables $X, Y$ from a given bivariate distribution $P_{XY}(x,y)$. We determine the minimum number of bits per character $R_X$ and $R_Y$ needed to encode these sequences so that they can be faithfully reproduced under a variety of assumptions regarding the encoders and decoders. The results, some of which are not at all obvious, are presented as an admissible rate region $\mathcal{R}$ in the $R_X$-$R_Y$ plane. They generalize a similar and well-known result for a single information sequence, namely $R_X \geq H(X)$ for faithful reproduction.
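To make the rate region concrete, here is a small hedged Python sketch (not from the paper) that evaluates the well-known form of the admissible region, $R_X \geq H(X|Y)$, $R_Y \geq H(Y|X)$, $R_X + R_Y \geq H(X,Y)$, for a made-up joint distribution; the distribution and the function names are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): checking membership in the
# Slepian-Wolf admissible rate region for a hypothetical bivariate pmf.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

P_XY = np.array([[0.45, 0.05],
                 [0.05, 0.45]])            # hypothetical joint distribution P_XY

H_XY = entropy(P_XY.ravel())
H_X = entropy(P_XY.sum(axis=1))
H_Y = entropy(P_XY.sum(axis=0))
H_X_given_Y = H_XY - H_Y
H_Y_given_X = H_XY - H_X

def admissible(Rx, Ry):
    """True if (Rx, Ry) lies in the Slepian-Wolf region."""
    return Rx >= H_X_given_Y and Ry >= H_Y_given_X and Rx + Ry >= H_XY

print(admissible(H_X, H_Y_given_X))        # a corner point of the region: True
print(admissible(0.3, 0.3))                # violates the sum-rate bound: False
```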

4,165 citations

Journal ArticleDOI
TL;DR: It is shown that soft decision maximum likelihood decoding of any $(n,k)$ linear block code over $GF(q)$ can be accomplished using the Viterbi algorithm applied to a trellis with no more than $q^{n-k}$ states.
Abstract: It is shown that soft decision maximum likelihood decoding of any $(n,k)$ linear block code over $GF(q)$ can be accomplished using the Viterbi algorithm applied to a trellis with no more than $q^{n-k}$ states. For cyclic codes, the trellis is periodic. When this technique is applied to the decoding of product codes, the number of states in the trellis can be much fewer than $q^{n-k}$. For a binary $(n, n-1)$ single parity check code, the Viterbi algorithm is equivalent to the Wagner decoding algorithm.
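As a rough illustration of the construction (my sketch, not code from the paper): the trellis states are partial syndromes, a path of input bits survives to the all-zero final state exactly when it is a codeword, and soft-decision Viterbi search over this trellis returns the maximum-likelihood codeword. The parity-check matrix and LLR values below are assumed examples.

```python
# Hedged sketch of Viterbi decoding of a binary (n, k) linear block code on its
# syndrome trellis. States are partial syndromes; the edge for bit b at step i
# moves from s to s XOR b*H[:, i].
from itertools import product

def viterbi_block_decode(H, llr):
    """H: rows of a binary (n-k) x n parity-check matrix; llr[i] = log P(r_i|0)/P(r_i|1)."""
    m, n = len(H), len(H[0])
    states = list(product([0, 1], repeat=m))
    idx = {s: j for j, s in enumerate(states)}
    INF = float("inf")
    cost = [INF] * len(states)
    cost[idx[(0,) * m]] = 0.0                     # start in the all-zero syndrome
    back = [[None] * len(states) for _ in range(n)]
    for i in range(n):
        new = [INF] * len(states)
        for j, s in enumerate(states):
            if cost[j] == INF:
                continue
            for b in (0, 1):
                t = idx[tuple((s[r] + b * H[r][i]) % 2 for r in range(m))]
                c = cost[j] + (llr[i] if b else 0.0)   # cost of choosing bit 1, constants dropped
                if c < new[t]:
                    new[t], back[i][t] = c, (j, b)
        cost = new
    j, bits = idx[(0,) * m], []                   # valid codewords end at the zero syndrome
    for i in range(n - 1, -1, -1):
        j, b = back[i][j]
        bits.append(b)
    return bits[::-1]

# Example: (3, 2) single parity-check code, H = [[1, 1, 1]]; the LLRs are made up.
print(viterbi_block_decode([[1, 1, 1]], [-2.1, 0.4, -1.3]))   # -> [1, 0, 1]
```

For this single parity-check example the search reduces to the Wagner-style rule mentioned in the abstract: the best even-weight word is chosen.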

612 citations

Proceedings ArticleDOI
12 Dec 2009
TL;DR: This work empirically characterizes flash memory technology from five manufacturers by directly measuring performance, power, and reliability, and demonstrates that performance varies significantly between vendors and devices and deviates from publicly available datasheets.
Abstract: Despite flash memory's promise, it suffers from many idiosyncrasies such as limited durability, data integrity problems, and asymmetry in operation granularity. As architects, we aim to find ways to overcome these idiosyncrasies while exploiting flash memory's useful characteristics. To be successful, we must understand the trade-offs between the performance, cost (in both power and dollars), and reliability of flash memory. In addition, we must understand how different usage patterns affect these characteristics. Flash manufacturers provide conservative guidelines about these metrics, and this lack of detail makes it difficult to design systems that fully exploit flash memory's capabilities. We have empirically characterized flash memory technology from five manufacturers by directly measuring performance, power, and reliability. We demonstrate that performance varies significantly between vendors and devices and deviates from publicly available datasheets. We also demonstrate and quantify some unexpected device characteristics and show how we can use them to improve responsiveness and energy consumption of solid state disks by 44% and 13%, respectively, as well as increase flash device lifetime by 5.2x.
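A toy measurement harness in the spirit of such characterization (purely illustrative, not the authors' methodology, which drives raw flash chips directly): it times page-sized synchronous writes to an assumed target path and reports latency statistics.

```python
# Hedged sketch: timing direct page-sized writes to estimate write latency.
# PAGE and PATH are assumptions; point PATH at the device or file under test.
import os, time, statistics

PAGE = 4096                         # assumed page size in bytes
PATH = "/tmp/flash_probe.bin"       # hypothetical target

def probe(n_pages=256):
    latencies = []
    buf = os.urandom(PAGE)
    with open(PATH, "wb", buffering=0) as f:
        for _ in range(n_pages):
            t0 = time.perf_counter()
            f.write(buf)
            os.fsync(f.fileno())            # force the write to the medium
            latencies.append(time.perf_counter() - t0)
    return latencies

lat = probe()
print(f"mean {statistics.mean(lat) * 1e6:.1f} us, "
      f"p99 {sorted(lat)[int(0.99 * len(lat))] * 1e6:.1f} us")
```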

483 citations

Journal ArticleDOI
TL;DR: The class of codes discussed in this paper has the property that its error-correction capability is described in terms of correcting errors in specific digits of a code word even though other digits in the code may be decoded incorrectly.
Abstract: The class of codes discussed in this paper has the property that its error-correction capability is described in terms of correcting errors in specific digits of a code word even though other digits in the code may be decoded incorrectly. To each digit of the code words is assigned an error protection level $f_i$. Then, if $f$ errors occur in the reception of a code word, all digits which have protection $f_i$ greater than or equal to $f$ will be decoded correctly even though the entire code word may not be decoded correctly. Methods for synthesizing these codes are described and illustrated by examples. One method of synthesis involves combining the parity check matrices of two or more ordinary random error-correcting codes to form the parity check matrix of the new code. A decoding algorithm based upon the decoding algorithms of the component codes is presented. A second method of code generation is described which follows from the observation that for a linear code, the columns of the parity check matrix corresponding to the check positions must span the column space of the matrix. Upper and lower bounds are derived for the number of check digits required for such codes. The lower bound is based upon counting the number of unique syndromes required for a specified error-correction capability. The upper bound is the result of a constructive procedure for forming the parity check matrices of these codes. Tables of numerical values for the upper and lower bounds are presented.
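The protection-level notion can be checked empirically for a tiny code. The hedged sketch below (mine, not a construction from the paper) brute-forces every error pattern against nearest-codeword decoding and reports, for each digit, the largest number of channel errors under which that digit is always decoded correctly; the generator matrix is a made-up example and need not itself be an unequal-error-protection code.

```python
# Hedged sketch: measuring per-digit protection levels f_i in the sense of the
# abstract, by exhaustive error patterns and nearest-codeword decoding.
import numpy as np
from itertools import combinations, product

G = np.array([[1, 0, 1, 1, 1],      # hypothetical (5, 2) generator matrix
              [0, 1, 0, 1, 1]])
messages = [np.array(m) for m in product([0, 1], repeat=G.shape[0])]
codewords = np.array([(m @ G) % 2 for m in messages])
n = G.shape[1]

def decode(r):
    d = ((codewords + r) % 2).sum(axis=1)     # Hamming distance to every codeword
    return codewords[np.argmin(d)]            # nearest codeword (ties broken by index)

def protection_levels():
    f = [n] * n
    for c in codewords:
        for t in range(1, n + 1):
            for err_pos in combinations(range(n), t):
                r = c.copy()
                r[list(err_pos)] ^= 1
                wrong = decode(r) != c
                for i in np.flatnonzero(wrong):
                    f[i] = min(f[i], t - 1)   # digit i is not guaranteed under t errors
    return f

print(protection_levels())                    # per-digit protection levels f_i
```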

359 citations

Journal ArticleDOI
TL;DR: This technique provides an important link between quasi-cyclic block and convolutional codes. Optimum and suboptimum decoding algorithms for these codes are described and their performance determined by analytical and simulation techniques.
Abstract: In this paper, we introduce generalized tail biting encoding as a means to ameliorate the rate deficiency caused by zero-tail convolutional encoding. This technique provides an important link between quasi-cyclic block and convolutional codes. Optimum and suboptimum decoding algorithms for these codes are described and their performance determined by analytical and simulation techniques.
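For orientation, the sketch below (mine) shows plain tail biting for a feedforward rate-1/2 convolutional encoder, rather than the generalized scheme of the paper: the shift register is preloaded with the last m message bits so the encoder starts and ends in the same state, and no rate-reducing zero tail is transmitted. The memory-2 generator polynomials are an assumed example.

```python
# Hedged sketch: tail-biting encoding for a feedforward rate-1/2 convolutional code.
def tail_bite_encode(msg, g1=(1, 1, 1), g2=(1, 0, 1)):
    m = len(g1) - 1
    state = list(reversed(msg[-m:]))     # preload register with the final m bits
    out = []
    for bit in msg:
        window = [bit] + state           # current bit plus register contents
        out.append(sum(b & g for b, g in zip(window, g1)) % 2)
        out.append(sum(b & g for b, g in zip(window, g2)) % 2)
        state = window[:-1]              # shift: most recent input first
    return out                           # 2*len(msg) coded bits, no zero tail

print(tail_bite_encode([1, 0, 1, 1, 0, 0, 1, 0]))
```

Because the starting and ending states coincide, the codeword is invariant under cyclic shifts of the message in the sense that underlies the quasi-cyclic connection mentioned in the abstract.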

356 citations


Cited by
Journal ArticleDOI
TL;DR: Using distributed antennas, this work develops and analyzes low-complexity cooperative diversity protocols that combat fading induced by multipath propagation in wireless networks, with performance characterized in terms of outage events and associated outage probabilities, which measure the robustness of the transmissions to fading.
Abstract: We develop and analyze low-complexity cooperative diversity protocols that combat fading induced by multipath propagation in wireless networks. The underlying techniques exploit space diversity available through cooperating terminals' relaying signals for one another. We outline several strategies employed by the cooperating radios, including fixed relaying schemes such as amplify-and-forward and decode-and-forward, selection relaying schemes that adapt based upon channel measurements between the cooperating terminals, and incremental relaying schemes that adapt based upon limited feedback from the destination terminal. We develop performance characterizations in terms of outage events and associated outage probabilities, which measure robustness of the transmissions to fading, focusing on the high signal-to-noise ratio (SNR) regime. Except for fixed decode-and-forward, all of our cooperative diversity protocols are efficient in the sense that they achieve full diversity (i.e., second-order diversity in the case of two terminals), and, moreover, are close to optimum (within 1.5 dB) in certain regimes. Thus, using distributed antennas, we can provide the powerful benefits of space diversity without need for physical arrays, though at a loss of spectral efficiency due to half-duplex operation and possibly at the cost of additional receive hardware. Applicable to any wireless setting, including cellular or ad hoc networks-wherever space constraints preclude the use of physical arrays-the performance characterizations reveal that large power or energy savings result from the use of these protocols.
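A toy Monte Carlo (mine, deliberately stylized and not the paper's exact protocols) illustrating the headline effect: under independent Rayleigh fading, combining an independently faded relay path makes the outage probability fall off roughly with the square of SNR instead of linearly, at the price of the half-duplex factor 1/2.

```python
# Hedged toy comparison of direct transmission vs. a stylized two-path
# cooperative scheme under independent Rayleigh fading.
import numpy as np

rng = np.random.default_rng(0)
R = 1.0                      # target spectral efficiency, bits/s/Hz
trials = 200_000

for snr_db in (5, 10, 15, 20, 25):
    snr = 10 ** (snr_db / 10)
    h_sd = rng.exponential(size=trials)       # |h|^2, source -> destination
    h_rd = rng.exponential(size=trials)       # |h|^2, relay -> destination
    direct = np.log2(1 + snr * h_sd) < R
    coop = 0.5 * np.log2(1 + snr * (h_sd + h_rd)) < R    # 1/2 for half-duplex relaying
    print(f"{snr_db:2d} dB  direct {direct.mean():.4f}  cooperative {coop.mean():.4f}")
```

On a log-log plot the cooperative curve's slope is roughly twice the direct one's, i.e., second-order diversity with two terminals.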

12,761 citations

Journal ArticleDOI
TL;DR: This work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated, and by employing coding at the nodes, which the work refers to as network coding, bandwidth can in general be saved.
Abstract: We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.
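The standard way to see the gain is the butterfly network, which is commonly used to illustrate this result (the code below is my sketch, not from the paper): two bits are multicast to two sinks over unit-capacity edges, and XORing at the bottleneck node delivers both bits to both sinks, which routing alone cannot do.

```python
# Hedged sketch: network coding on the classic butterfly network.
def butterfly(b1, b2):
    x = b1 ^ b2                  # coding at the bottleneck node
    sink1 = (b1, b1 ^ x)         # sink 1 hears b1 directly and x via the bottleneck
    sink2 = (b2 ^ x, b2)         # sink 2 hears b2 directly and x via the bottleneck
    return sink1, sink2

for b1 in (0, 1):
    for b2 in (0, 1):
        assert butterfly(b1, b2) == ((b1, b2), (b1, b2))
print("both sinks recover (b1, b2) for every input pair")
```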

8,533 citations

Journal ArticleDOI
TL;DR: In this article, the capacity of the Gaussian relay channel is investigated and an achievable lower bound to the capacity of the general relay channel is established, where the dependence of the received symbols upon the inputs is given by $p(y, y_1 | x_1, x_2)$; in particular, if $y$ is a degraded form of $y_1$, then $C = \max_{p(x_1,x_2)} \min\{I(X_1,X_2;Y),\, I(X_1;Y_1|X_2)\}$.
Abstract: A relay channel consists of an input $x_1$, a relay output $y_1$, a channel output $y$, and a relay sender $x_2$ (whose transmission is allowed to depend on the past symbols $y_1$). The dependence of the received symbols upon the inputs is given by $p(y, y_1 | x_1, x_2)$. The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1) If $y$ is a degraded form of $y_1$, then $C = \max_{p(x_1,x_2)} \min\{I(X_1,X_2;Y),\, I(X_1;Y_1|X_2)\}$. 2) If $y_1$ is a degraded form of $y$, then $C = \max_{p(x_1)} \max_{x_2} I(X_1;Y|x_2)$. 3) If $p(y, y_1 | x_1, x_2)$ is an arbitrary relay channel with feedback from $(y, y_1)$ to both $x_1$ and $x_2$, then $C = \max_{p(x_1,x_2)} \min\{I(X_1,X_2;Y),\, I(X_1;Y,Y_1|X_2)\}$. 4) For a general relay channel, $C \leq \max_{p(x_1,x_2)} \min\{I(X_1,X_2;Y),\, I(X_1;Y,Y_1|X_2)\}$. Superposition block Markov encoding is used to show achievability of $C$, and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.
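As a numerical illustration (my sketch, with a made-up binary channel): the max-min expression of theorem 1 can be evaluated by brute-force search over joint input distributions. For a channel where $y$ is a degraded form of $y_1$ this value is the capacity; for the toy channel below, which is not necessarily degraded, it should be read as an achievable decode-and-forward rate rather than the capacity.

```python
# Hedged sketch: grid search of max over p(x1,x2) of min{ I(X1,X2;Y), I(X1;Y1|X2) }
# for a made-up binary relay channel p(y, y1 | x1, x2).
import numpy as np
from itertools import product

def mutual_info(pxy):
    """Mutual information (bits) of a joint pmf given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

# p_ch[x1, x2, y, y1]: the relay's view y1 of x1 is cleaner than the
# destination's view y of x1 XOR x2 (purely illustrative channel).
eps1, eps = 0.05, 0.2
p_ch = np.zeros((2, 2, 2, 2))
for x1, x2, y, y1 in product(range(2), repeat=4):
    p_y1 = 1 - eps1 if y1 == x1 else eps1
    p_y = 1 - eps if y == (x1 ^ x2) else eps
    p_ch[x1, x2, y, y1] = p_y * p_y1

best = 0.0
grid = np.linspace(0.05, 0.95, 7)
for p in product(grid, repeat=4):               # candidate joint input weights
    q = np.array(p).reshape(2, 2)
    q /= q.sum()                                # p(x1, x2)
    joint = q[:, :, None, None] * p_ch          # p(x1, x2, y, y1)
    i1 = mutual_info(joint.sum(axis=3).reshape(4, 2))      # I(X1, X2; Y)
    i2 = 0.0                                    # I(X1; Y1 | X2)
    for x2 in range(2):
        px2 = q[:, x2].sum()
        cond = joint[:, x2, :, :].sum(axis=1) / px2         # p(x1, y1 | X2 = x2)
        i2 += px2 * mutual_info(cond)
    best = max(best, min(i1, i2))
print(f"max-min rate for the toy channel: {best:.3f} bits per use")
```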

4,311 citations

01 Sep 1979
TL;DR: An achievable lower bound to the capacity of the general relay channel is established; superposition block Markov encoding is used to show achievability of $C$, and converses are established.

3,918 citations

Journal ArticleDOI
TL;DR: The quantity $R^*(d)$ is determined, defined as the infimum of rates $R$ such that communication is possible in the above setting at an average distortion level not exceeding $d + \varepsilon$.
Abstract: Let $\{(X_k, Y_k)\}_{k=1}^{\infty}$ be a sequence of independent drawings of a pair of dependent random variables $X, Y$. Let us say that $X$ takes values in the finite set $\mathcal{X}$. It is desired to encode the sequence $\{X_k\}$ in blocks of length $n$ into a binary stream of rate $R$, which can in turn be decoded as a sequence $\{\hat{X}_k\}$, where $\hat{X}_k \in \hat{\mathcal{X}}$, the reproduction alphabet. The average distortion level is $(1/n) \sum_{k=1}^{n} E[D(X_k, \hat{X}_k)]$, where $D(x, \hat{x}) \geq 0$, $x \in \mathcal{X}$, $\hat{x} \in \hat{\mathcal{X}}$, is a preassigned distortion measure. The special assumption made here is that the decoder has access to the side information $\{Y_k\}$. In this paper we determine the quantity $R^*(d)$, defined as the infimum of rates $R$ such that (with $\varepsilon > 0$ arbitrarily small and with suitably large $n$) communication is possible in the above setting at an average distortion level (as defined above) not exceeding $d + \varepsilon$. The main result is that $R^*(d) = \inf [I(X;Z) - I(Y;Z)]$, where the infimum is with respect to all auxiliary random variables $Z$ (which take values in a finite set $\mathcal{Z}$) that satisfy: i) $Y, Z$ conditionally independent given $X$; ii) there exists a function $f: \mathcal{Y} \times \mathcal{Z} \rightarrow \hat{\mathcal{X}}$, such that $E[D(X, f(Y,Z))] \leq d$. Let $R_{X|Y}(d)$ be the rate-distortion function which results when the encoder as well as the decoder has access to the side information $\{Y_k\}$. In nearly all cases it is shown that when $d > 0$ then $R^*(d) > R_{X|Y}(d)$, so that knowledge of the side information at the encoder permits transmission of the $\{X_k\}$ at a given distortion level using a smaller transmission rate. This is in contrast to the situation treated by Slepian and Wolf [5] where, for arbitrarily accurate reproduction of $\{X_k\}$, i.e., $d = \varepsilon$ for any $\varepsilon > 0$, knowledge of the side information at the encoder does not allow a reduction of the transmission rate.
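A hedged numeric companion (mine, not from the paper): the quadratic-Gaussian case is the standard worked example, where $X$ is Gaussian, the decoder observes $Y = X + N$, and the distortion is squared error; it is known to be an exception to the rate-loss statement above, with $R^*(d) = \tfrac{1}{2}\log_2(\sigma^2_{X|Y}/d)$ coinciding with $R_{X|Y}(d)$.

```python
# Hedged sketch: Wyner-Ziv rate for the quadratic-Gaussian example, assuming
# X ~ N(0, var_x), decoder side information Y = X + N with N ~ N(0, var_n).
import numpy as np

def wyner_ziv_gaussian(var_x, var_n, d):
    var_x_given_y = var_x * var_n / (var_x + var_n)   # conditional (MMSE) variance of X given Y
    return 0.0 if d >= var_x_given_y else 0.5 * np.log2(var_x_given_y / d)

for d in (0.05, 0.1, 0.2, 0.4):
    print(f"d = {d:.2f}  R*(d) = {wyner_ziv_gaussian(1.0, 1.0, d):.3f} bits")
```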

3,288 citations