
Showing papers presented at "Information Theory Workshop in 1989"


Journal Article•DOI•
Razmik Karabed, Paul H. Siegel•
25 Jun 1989
TL;DR: A new family of codes that improve the reliability of digital communication over noisy, partial-response channels is described, and it is shown that matched-spectral-null sequences provide a distance gain on the order of 3 dB and higher for a broad class of partial-response channels.
Abstract: A new family of codes that improve the reliability of digital communication over noisy, partial-response channels is described. The codes are intended for use on channels where the input alphabet size is limited. These channels arise in the context of digital data recording and certain data transmission applications. The codes, called matched-spectral-null codes, satisfy the property that the frequencies at which the code power spectral density vanishes correspond precisely to the frequencies at which the channel transfer function is zero. It is shown that matched-spectral-null sequences provide a distance gain on the order of 3 dB and higher for a broad class of partial-response channels. The embodiment of the system incorporates a sliding-block code and a Viterbi detector based upon a reduced-complexity trellis structure. The detectors are shown to achieve the same asymptotic average performance as maximum-likelihood sequence detectors, and the sliding-block codes exclude quasi-catastrophic trellis sequences in order to reduce the required path memory length and improve worst-case detector performance. Several examples are described in detail.
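As background on the matched-spectral-null condition (a standard illustration, not one of the specific constructions in the paper): the dicode channel 1-D has a spectral null at DC, while the channel 1+D has a null at the Nyquist frequency,

\[
\left|1 - e^{-j2\pi fT}\right|_{f=0} = 0, \qquad \left|1 + e^{-j2\pi fT}\right|_{f=\frac{1}{2T}} = 0 ,
\]

so a matched-spectral-null code for 1-D is one whose power spectral density vanishes at DC (for example, a code with bounded running digital sum), while a code matched to 1+D places its spectral null at the Nyquist frequency instead.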

199 citations


Proceedings Article•DOI•
25 Jun 1989
TL;DR: A number of on-chip coding techniques for the protection of Random Access Memories which use multi-level as opposed to binary storage cells are investigated, including row-column codes, burst codes, hexadecimal codes, Reed-Solomon codes, concatenated codes, and some new majority-logic decodable codes.
Abstract: In this talk we investigate a number of on-chip coding techniques for the protection of Random Access Memories which use multi-level as opposed to binary storage cells. The motivation for such RAM cells is of course the storage of several bits per cell as opposed to one bit per cell [1]. Since the typical number of levels which a multi-level RAM can handle is 16 (the cell being based on a standard DRAM cell which has varying amounts of voltage stored on it), there are four bits recorded into each cell [2]. The disadvantage of multi-level RAMs is that they are much more prone to errors, and so on-chip ECC is essential for reliable operation. There are essentially three reasons for error control coding in multi-level RAMs: to correct soft errors, to correct hard errors, and to correct read errors. The sources of these errors are, respectively, alpha-particle radiation, hardware faults, and data-level ambiguities. On-chip error correction can be used to increase the mean life before failure for all three types of errors. Coding schemes can be both bitwise and cellwise. Bitwise schemes include simple parity checks and SEC-DED codes, either by themselves or as product codes [3]. Data organization should allow for burst error correction, since an alpha particle can wipe out all four bits in a single cell and, for dense memory chips, data in surrounding cells as well. This latter effect becomes more serious as feature sizes are scaled, and a single alpha-particle hit affects many adjacent cells. Burst codes such as those in [4] can be used to correct these errors. Bitwise coding schemes are more efficient in correcting read errors, since they can correct single-bit errors and allow the remaining error-correction power to be used elsewhere. Read errors essentially affect one bit only, since the use of Gray codes for encoding the bits into the memory cells ensures that at most one bit is flipped with each successive change in level. Cellwise schemes include Reed-Solomon codes, hexadecimal codes, and product codes. However, simple encoding and decoding algorithms are necessary, since the chip area taken by powerful but complex encoding/decoding circuits can instead be spent on more parity cells used with simpler codes. These coding techniques are more useful for correcting hard and soft errors which affect the entire cell. They tend to be more complex, and they are not as efficient in correcting read errors as the bitwise codes. In the talk we will investigate the suitability and performance of various multi-level RAM coding schemes, such as row-column codes, burst codes, hexadecimal codes, Reed-Solomon codes, concatenated codes, and some new majority-logic decodable codes. In particular we investigate their tolerance to soft errors and to feature-size scaling.
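The Gray-code property mentioned above (adjacent stored levels differ in exactly one bit, so a one-level read error corrupts at most one data bit) is easy to verify. A minimal sketch for the 16-level (4-bit) cell case, assuming the standard binary-reflected Gray code rather than any mapping specific to the talk:

```python
# Binary-reflected Gray code for a 16-level (4-bit) multi-level RAM cell.
# Codewords of adjacent voltage levels differ in exactly one bit.

def gray(level: int) -> int:
    """Return the binary-reflected Gray codeword for a cell level."""
    return level ^ (level >> 1)

codewords = [gray(level) for level in range(16)]

for a, b in zip(codewords, codewords[1:]):
    # Hamming distance between codewords of adjacent levels must be 1.
    assert bin(a ^ b).count("1") == 1

print([format(c, "04b") for c in codewords])
```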

71 citations


Journal Article•DOI•
25 Jun 1989
TL;DR: The solution to this problem is obtained for the case where successive data triples produced by the users are independent and identically distributed, while the three symbols within each triple may be dependent.
Abstract: Three dependent users are physically separated but communicate with each other via a satellite. Each user generates data which it stores locally. In addition, each user sends a message to the satellite. The satellite processes the messages received from the users and broadcasts one common message to all three users. Each user must be capable of reconstructing the data of the other two users based upon the broadcast message and its own stored data. Our problem is to determine the minimum amount of information which must be transmitted to and from the satellite. The solution to this problem is obtained for the case where successive data triples produced by the users are independent and identically distributed; the three symbols within each triple may be dependent. Crucial for the solution is an achievability proof that involves cascaded Slepian-Wolf (1973) source coding.
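For reference, the cascaded achievability argument builds on standard Slepian-Wolf source coding. The classical rate region for two correlated sources X and Y decoded jointly is (textbook background, not the three-user satellite region derived in the paper):

\[
R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y).
\]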

68 citations


Journal Article•DOI•
25 Jun 1989
TL;DR: Results are obtained about the parameters of a class of subfield subcodes of geometric Goppa codes; in particular, the covering radii are estimated and the number of information symbols is found whenever the minimum distance is small relative to the length of the code.
Abstract: For pt. I, see Proc. Amer. Math. Soc., vol. 111, p. 523-31 (1991). The minimum distance of a Goppa code is found when the length of the code satisfies a certain inequality involving the degree of the Goppa polynomial. In order to do this, the conditions in a theorem of E. Bombieri (1966) are improved. This improvement is also used to generalize a previous result on the minimum distance of the dual of a Goppa code. The approach is then generalized to obtain results about the parameters of a class of subfield subcodes of geometric Goppa codes; in particular, the covering radii are estimated and, further, the number of information symbols is found whenever the minimum distance is small in relation to the length of the code. Finally, a bound on the minimum distance of the dual code is discussed.

40 citations


Journal Article•DOI•
Amir Dembo•
01 Sep 1989
TL;DR: The author proves that in the limit as signal power approaches either zero (very low SNR) or infinity (very high SNR), feedback does not increase the finite block-length capacity (which for nonstationary Gaussian channels replaces the standard notion of capacity that may not exist).
Abstract: M. Pinsker and P. Ebert (Bell Syst. Tech. J., p.1705-1712, Oct. 1970) proved that in channels with additive Gaussian noise, feedback at most doubles the capacity. Recently, T. Cover and S. Pombra (IEEE Trans. Inf. Theory, vol.35, no.1, p.37-43, Jan. 1989) proved that feedback at most adds half a bit per transmission. Following their approach, the author proves that in the limit as signal power approaches either zero (very low SNR) or infinity (very high SNR), feedback does not increase the finite block-length capacity (which for nonstationary Gaussian channels replaces the standard notion of capacity, which may not exist). Tighter upper bounds on the capacity are obtained in the process. Specializing these results to stationary channels, the author recovers some of the bounds recently obtained by L.H. Ozarow (to appear in IEEE Trans. Inf. Theory) using a different bounding technique.
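The two prior bounds the abstract refers to can be written compactly. With C_n the non-feedback and C_n,FB the feedback finite block-length capacity of the Gaussian channel (notation assumed here, not taken from the paper):

\[
C_{n,\mathrm{FB}} \le 2\, C_n \quad \text{(Pinsker; Ebert)}, \qquad C_{n,\mathrm{FB}} \le C_n + \tfrac{1}{2} \ \text{bit per transmission} \quad \text{(Cover and Pombra)}.
\]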

32 citations


Proceedings Article•
G.D. Forney•
01 Jan 1989
TL;DR: In this article, the author proposes trellis shaping, a method of selecting a minimum-weight sequence from an equivalence class of possible transmitted sequences by a search through the trellis diagram of a shaping convolutional code C_s.
Abstract: The author discusses trellis shaping, a method of selecting a minimum-weight sequence from an equivalence class of possible transmitted sequences by a search through the trellis diagram of a shaping convolutional code C_s. Shaping gains on the order of 1 dB may be obtained with simple four-state shaping codes and with moderate constellation expansion. The shaping gains obtained with more complicated codes approach the ultimate shaping gain of 1.53 dB. With a feedback-free syndrome-former for C_s, transmitted data can be recovered without catastrophic error propagation. Constellation expansion and peak-to-average energy ratio may be effectively limited by peak constraints. With lattice-theoretic constellations, the shaping operation may be characterized as a decoding of an initial sequence in a channel trellis code by a minimum-distance decoder for a shaping trellis code based on the shaping convolutional code, and the set of possible transmitted sequences is then the set of code sequences in the channel trellis code that lie in the Voronoi region of the trellis shaping code.
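The 1.53 dB figure quoted above is the ultimate shaping gain of a sphere over a cube as the dimension grows, a standard result stated here only for context:

\[
\gamma_{s,\max} = \frac{\pi e}{6} \approx 1.423, \qquad 10 \log_{10}\frac{\pi e}{6} \approx 1.53\ \text{dB}.
\]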

13 citations


Journal Article•DOI•
25 Jun 1989
TL;DR: A simple upper bound to the performance of any encoder when used with the optimal modem and detector is presented, and these results provide a benchmark with which the performances of spread-spectrum modems and robust detection rules can be compared.
Abstract: Coherent communication over a waveform channel corrupted by thermal noise and by an unknown and arbitrary interfering signal of bounded power is considered. For a fixed encoder, a random modulator/demodulator (modem) and detector are derived. They asymptotically minimize the worst-case error probability as the blocklength of the encoder becomes large. This optimal modem is independent of the encoder, and the optimal detector is the standard correlation receiver. A simple upper bound to the performance of any encoder when used with the optimal modem and detector is presented. These results provide a benchmark with which the performance of spread-spectrum modems and robust detection rules can be compared.

11 citations


Proceedings Article•DOI•
C.A. French•
25 Jun 1989
TL;DR: In this article, distance-preserving RLL codes are introduced; in addition to satisfying a run-length constraint, these codes guarantee that the Hamming distance between any two encoder output sequences is at least as large as the distance between the corresponding encoder input sequences.
Abstract: A subset of the RLL (run-length-limited) codes called distance-preserving RLL codes is introduced. In addition to satisfying a run-length constraint, these codes have the property that the Hamming distance between any two encoder output sequences is at least as large as the Hamming distance between the corresponding encoder input sequences. Thus, when used in combination with a binary ECC, a distance-preserving RLL code does not reduce the overall Hamming distance of the ECC/RLL combination to something below the Hamming distance of the ECC alone. It is shown how distance-preserving RLL codes can be used with binary convolutional codes to create combined ECC/RLL codes with the distance properties of the original convolutional code.
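A minimal sketch of how the distance-preserving property can be checked for a candidate block mapping; the 2-bit-to-4-bit table below is a made-up illustration, not a code from the paper:

```python
from itertools import product

def hamming(a: str, b: str) -> int:
    """Hamming distance between two equal-length binary strings."""
    return sum(x != y for x, y in zip(a, b))

def is_distance_preserving(table: dict) -> bool:
    """True if d_H(f(u), f(v)) >= d_H(u, v) for all input pairs u, v."""
    return all(
        hamming(table[u], table[v]) >= hamming(u, v)
        for u, v in product(table, repeat=2)
    )

# Hypothetical mapping from 2 input bits to 4 output bits (purely
# illustrative; a real distance-preserving RLL code would also have to
# enforce the run-length constraint on concatenated outputs).
example = {
    "00": "0101",
    "01": "0110",
    "10": "1001",
    "11": "1010",
}

print(is_distance_preserving(example))  # True for this table
```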

10 citations


Proceedings Article•DOI•
25 Jun 1989
TL;DR: The yield (the probability that a chip is defect-free) obtained by implementing an error-correcting code (ECC) on the memory array to control hard defects is analyzed, and it is shown that the problem of finding an optimal algorithm to switch columns is inherently intractable: it is NP-complete.
Abstract: A hard defect in a semiconductor random access memory (RAM) is a cell which is "stuck-at" a certain value or is otherwise consistently unreliable. The most commonly used technique to correct hard defects during manufacturing is row/column replacement, wherein redundant rows and columns are added to each memory array and are used to replace rows and columns which contain defective cells. This method has been applied to memory chips of modest sizes (in the 64 Kbit to 4 Mbit range). However, the strategy of replacing an entire row or column because of a single defective cell seems likely to be inefficient as the size of the memory array grows. Our research effort was motivated by a recent paper [1] in which this technique is shown to be asymptotically ineffective: as the size of the memory array grows, regardless of the rate (the amount of redundancy), the probability of obtaining an error-free array approaches zero. We consider implementing an error-correcting code (ECC) on the memory array in order to control hard defects. A simple single-error-correcting code is used over the rows, each row containing an integral number of codewords. Since each codeword can tolerate up to one defect, this technique allows the array to suffer some defective cells and still exhibit no loss of fidelity. We analyze the yield (the probability that a chip is defect-free) due to this method, using a shortened Hamming code to illustrate our results. The presence of multiple defective cells in some codewords would cause these codewords to become undecodable, causing the chip to be rejected. In order to further improve the yield, we also consider using redundant rows and columns in conjunction with an ECC to correct undecodable codewords in the memory array. It is to be noted that we are interested in improving the yield of memory chips which suffer from single-cell defects. In the case of an entire row or column failing (due to the failure of a row driver or a column decoder) we cannot, of course, do any better than to replace the entire row or column. Algorithms to switch rows and columns are examined, and three separate cases are considered: (1) redundant rows, (2) redundant columns, and (3) redundant rows and columns. Case (1) is easily analyzed. Cases (2) and (3) are much more difficult: it is shown that the problem of finding an optimal algorithm to switch columns (case (2)) is inherently intractable, and we prove that this problem is NP-complete. As a corollary, the problem of simultaneously switching rows and columns is also shown to be NP-complete. Heuristics for cases (2) and (3) are presented, and bounds on the yield due to these techniques are derived.
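A minimal sketch of the kind of yield computation described above, assuming independent, identically distributed cell defects with probability p and a single-error-correcting codeword structure over the rows; the code parameters are illustrative assumptions, not values from the paper:

```python
def codeword_ok(n: int, p: float) -> float:
    """Probability that a length-n codeword has at most one defective cell;
    a single-error-correcting code tolerates one defect per codeword."""
    return (1 - p) ** n + n * p * (1 - p) ** (n - 1)

def chip_yield(num_codewords: int, n: int, p: float) -> float:
    """Yield = probability every codeword on the chip is decodable,
    assuming independent cell defects."""
    return codeword_ok(n, p) ** num_codewords

# Illustrative numbers (not from the paper): a 1-Mbit data array protected
# by a (71, 64) shortened Hamming code, i.e. 64 data + 7 check bits per word.
n, k = 71, 64
num_codewords = (1 << 20) // k          # codewords needed for 2^20 data bits
for p in (1e-6, 1e-5, 1e-4):
    print(f"p = {p:.0e}  yield = {chip_yield(num_codewords, n, p):.4f}")
```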

10 citations


Proceedings Article•
Shun-ichi Amari•
01 Jan 1989

4 citations


Journal Article•DOI•
25 Jun 1989
TL;DR: An upper bound on the minimum squared distance of trellis codes is derived by packing Voronoi cells and compared with previously known bounds; the bound is tight to search results for coset codes with a small number of states.
Abstract: An upper bound on the minimum squared distance of trellis codes by packing Voronoi cells is derived and compared with previously known bounds. The authors focus on codes with small memory for modulation formats such as pulse amplitude modulation (PAM), m-ary quadrature amplitude modulation (QAM), and m-ary phase shift keying (PSK). The bound is tight to search results for coset codes with a small number of states.

Proceedings Article•DOI•
D. Brady, Sergio Verdu•
25 Jun 1989
TL;DR: The exact error expression for the noncoherent, optical matched-filter receiver is derived based on the electron count in a symbol period; the analysis is valid for arbitrary quantum efficiencies and dark currents and employs the semi-classical model of light.
Abstract: Recent work in the analysis of noncoherent, optical Code Division Multiple Access (CDMA) receivers has relied on the approximations of Gaussian-distributed Multiple Access Interference (MAI), cooperation among the users for chip synchronization, or direct observation of the optical intensity (also known as perfect optical-to-electrical conversion). Until now, the accuracy of these approximations has not been addressed. In this work we derive the exact error expression for the noncoherent, optical matched-filter receiver based on the electron count in a symbol period. The analysis is valid for arbitrary quantum efficiencies and dark currents, and employs the semi-classical model of light. We do not assume perfect optical-to-electrical conversion, Gaussian-distributed MAI, or synchronism among the users. We then compare the exact error rate to those obtained from popular approximations. Using the prime codes as an example, we show that the assumption of perfect optical-to-electrical conversion, which leads to the "error-free" hypothesis test for a suitably small user group size, is a poor model for the photodetection process at moderate incident optical intensities and dark currents. We show that the combined assumptions of perfect optical-to-electrical conversion and Gaussian-distributed MAI yield an overestimate of the optimal threshold and an underestimate of the error rate for small but reasonable optical powers. The error rate expression that we derive is valid for arbitrary, i.i.d. relative delays among the users. The error rate expression is considerably simplified when the delay distribution corresponding to chip synchronism is used. We take advantage of this fact to derive upper and lower bounds on the asynchronous error rate by using the chip-synchronous expression. The tightness of these bounds for various optical energies and signature sequence sets is discussed.
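A simplified single-user sketch of a hypothesis test on the photoelectron count under the semi-classical (Poisson) model; it ignores multiple-access interference and is not the paper's exact multi-user expression, and all numbers are illustrative:

```python
from math import exp, factorial

def poisson_pmf(k: int, mean: float) -> float:
    """Poisson probability of observing k photoelectrons given the mean."""
    return exp(-mean) * mean ** k / factorial(k)

def error_prob(threshold: int, mean0: float, mean1: float) -> float:
    """Average error probability of a threshold test on the electron count:
    decide '1' when the count is >= threshold. mean0/mean1 are the expected
    counts under bits 0 and 1 (dark current included in both)."""
    p_false_alarm = 1 - sum(poisson_pmf(k, mean0) for k in range(threshold))
    p_miss = sum(poisson_pmf(k, mean1) for k in range(threshold))
    return 0.5 * (p_false_alarm + p_miss)

# Illustrative numbers: mean dark count 2, mean signal count 20 per symbol.
dark, signal = 2.0, 20.0
best = min(range(1, 40), key=lambda t: error_prob(t, dark, dark + signal))
print(best, error_prob(best, dark, dark + signal))
```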


Journal Article•DOI•
01 Jul 1989
TL;DR: The application of a combined test-error-correcting procedure is studied to improve the mean time to failure (MTTF) for degrading memory systems with defects.
Abstract: The application of a combined test-error-correcting procedure is studied to improve the mean time to failure (MTTF) for degrading memory systems with defects. The degradation is characterized by the probability p that within a unit of time a memory cell changes from the operational state to the permanent defect state. Bounds are given on the MTTF and it is shown that, for memories with N words of k information bits, coding gives an improvement in MTTF proportional to (k/n) N^((d_min - 2)/(d_min - 1)), where d_min and k/n are the minimum distance and the efficiency of the code used, respectively. Thus the time gain for a simple minimum-distance-3 code is proportional to N^(1/2). A memory word test is combined with a simple defect-matching code. This yields reliable operation with one defect in a word of length k+2 at a code efficiency k/(k+2).
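A minimal numeric sketch of the MTTF improvement factor quoted above, using the stated proportionality (k/n)·N^((d_min-2)/(d_min-1)); the code parameters below are illustrative assumptions, not values from the paper:

```python
def mttf_gain_factor(N: int, k: int, n: int, d_min: int) -> float:
    """Improvement in MTTF (up to a constant) from coding a memory of N
    words with an (n, k) code of minimum distance d_min, following the
    proportionality (k/n) * N**((d_min - 2) / (d_min - 1))."""
    return (k / n) * N ** ((d_min - 2) / (d_min - 1))

# Example: 64K words protected by a minimum-distance-3 (Hamming-like) code
# with k = 64 data bits and n = 71 total bits -> gain grows like sqrt(N).
print(mttf_gain_factor(N=65536, k=64, n=71, d_min=3))   # ~ (64/71) * 256
```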

Proceedings Article•DOI•
25 Jun 1989

Proceedings Article•DOI•
T.M. Cover, W. Equitz•
25 Jun 1989

Proceedings Article•DOI•
M. Belongie, C. Heegard•
25 Jun 1989


Proceedings Article•DOI•
25 Jun 1989
TL;DR: The proposed estimators can be regarded as generalized maximum-likelihood estimators that are invariant under parameter transformations, and they extend naturally to the multi-parameter case, in contrast with previously known estimators.
Abstract: The basic questions here are how to construct effective encoders φ1, φ2 and the related estimators θ̂ for the parameter θ; what is the minimum variance of these estimators; and what is the maximum Fisher information attainable under the rate constraints R1, R2 on the Shannon information. We shall give several substantial answers to these problems, which include as special cases the previous results by Zhang and Berger, and by Ahlswede and Burnashev. The present results have been established technically on the basis of universal coding for relevant auxiliary random variables, a projection operation for multivariate Gaussian statistics, the introduction of a dual pair of orthogonal coordinate systems, and the geometry of Kullback-Leibler divergences. This approach essentially follows the differential-geometric standpoint provided by Amari in studying zero-rate estimators. Our estimators can be regarded as generalized maximum-likelihood estimators that are invariant under parameter transformations, and they extend very naturally to the multi-parameter case, in contrast with previously known estimators. As a by-product, we show a simple new proof of the result by Zhang and Berger, which gives a clearer understanding of the additivity condition assumed by them.
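For context, the link between the two quantities asked about above (estimator variance and Fisher information) is the Cramér-Rao bound; with J(θ) denoting the Fisher information available to the decoder under the rate constraints (notation assumed here, standard background rather than a bound derived in the abstract), an unbiased estimator θ̂ satisfies

\[
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{J(\theta)},
\]

so maximizing the attainable Fisher information under the rate constraints R1, R2 minimizes this lower bound on the estimator variance.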

Proceedings Article•DOI•
25 Jun 1989
TL;DR: This paper proposes several novel techniques for mapping rule bases, such as those used in rule-based expert systems, onto neural network architectures, and shows a clear pathway to implementing an expert system starting from raw data to a neural network that performs distributed inference.
Abstract: In this paper we propose several novel techniques for mapping rule bases, such as those used in rule-based expert systems, onto neural network architectures. Our objective in doing this is to achieve a system capable of incremental learning and distributed probabilistic inference. Such a system would be capable of performing inference many orders of magnitude faster than current serial rule-based expert systems, and hence be capable of true real-time operation. In addition, the rule-based formalism gives the system an explicit knowledge representation, unlike current neural models. We propose an information-theoretic approach to this problem, which has two aspects: first, learning the model and, second, performing inference using this model. We will show a clear pathway to implementing an expert system starting from raw data, via a learned rule-based model, to a neural network that performs distributed inference.

Proceedings Article•DOI•
A. Sadrolhefazi, T. Fine•
25 Jun 1989