
Showing papers by "Jack K. Wolf" published in 1971


Journal ArticleDOI
TL;DR: All the discrete, memoryless, symmetric channels matched to the Lee metric are derived, and general properties of Lee metric block codes are investigated.
Abstract: Almost all nonbinary codes were designed for the Hamming metric. The Lee metric was defined by Lee in 1958. Golomb and Welch (1968) and Berlekamp (1968a, 1968b) have designed codes for the Lee metric. In this paper, we derive all the discrete, memoryless, symmetric channels matched to the Lee metric and investigate general properties of Lee metric block codes. Finally, a class of cyclic Lee metric codes is defined and the number of information symbols is discussed.
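
For concreteness, the sketch below computes the Lee distance over $\mathbb{Z}_q$ to which the paper's channels are matched. It is an illustrative implementation only, not code from the paper; the function names are assumptions.

```python
# Minimal sketch of the Lee metric over Z_q (illustrative, not from the paper):
# the Lee distance between symbols a, b in {0, ..., q-1} is
# min(|a - b|, q - |a - b|), and the distance between vectors is the
# coordinate-wise sum of symbol distances.

def lee_distance_symbol(a: int, b: int, q: int) -> int:
    """Lee distance between two symbols of Z_q."""
    d = abs(a - b) % q
    return min(d, q - d)

def lee_distance(x, y, q: int) -> int:
    """Lee distance between two equal-length vectors over Z_q."""
    return sum(lee_distance_symbol(a, b, q) for a, b in zip(x, y))

if __name__ == "__main__":
    # Over Z_5 the symbols 1 and 4 are at Lee distance 2 (wrap-around),
    # whereas their Hamming distance would be 1.
    print(lee_distance([1, 0, 3], [4, 0, 1], q=5))  # 2 + 0 + 2 = 4
```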

42 citations


Journal ArticleDOI
TL;DR: An expression for the minimum mean-square error achievable in encoding $t$ samples of a stationary correlated Gaussian source is derived in terms of the covariance matrices of the source and noise sequences.
Abstract: In this paper we derive an expression for the minimum mean-square error achievable in encoding $t$ samples of a stationary correlated Gaussian source. It is assumed that the source output is not known exactly but is corrupted by correlated Gaussian noise. The expression is obtained in terms of the covariance matrices of the source and noise sequences. It is shown that as $t \rightarrow \infty$, the result agrees with a known asymptotic result, which is expressed in terms of the power spectra of the source and noise. The rate of convergence to the asymptotic result as a function of coding delay is investigated for the case where the source is first-order Markov and the noise is uncorrelated. With $D$ the asymptotic minimum mean-square error and $D_t$ the minimum mean-square error achievable in transmitting $t$ samples, we find $|D_t - D| \leq O((t^{-1} \log t)^{1/2})$ when we transmit the noisy source vectors over a noiseless channel and $|D_t - D| \leq O((t^{-1} \log t)^{1/3})$ when the channel is noisy.
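
As a hedged numerical sketch, the code below evaluates the minimum mean-square error of a Gaussian vector source at a given rate by classical reverse water-filling on the eigenvalues of its covariance matrix. This covers only the noiseless-observation special case and is not the paper's noisy-source expression; the function names, the rate value, and the AR(1) example covariance are assumptions.

```python
# Reverse water-filling sketch for a Gaussian vector source (noiseless
# observations).  Distortion at water level theta is sum_i min(lambda_i, theta)
# and the corresponding rate is sum_i (1/2) log(max(lambda_i / theta, 1)).
import numpy as np

def mmse_at_rate(K: np.ndarray, rate_nats: float) -> float:
    """Per-sample MMSE of a Gaussian vector with covariance K at a total rate (nats)."""
    lams = np.clip(np.linalg.eigvalsh(K), 0.0, None)

    def rate_for_level(theta: float) -> float:
        return 0.5 * np.sum(np.log(np.maximum(lams / theta, 1.0)))

    # Bisection on the water level theta: rate is decreasing in theta.
    lo, hi = 1e-12, float(lams.max() + 1e-12)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rate_for_level(mid) > rate_nats:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    return float(np.mean(np.minimum(lams, theta)))

if __name__ == "__main__":
    # First-order Markov (AR(1)) covariance, echoing the paper's example source.
    t, rho = 8, 0.9
    K = rho ** np.abs(np.subtract.outer(np.arange(t), np.arange(t)))
    print(mmse_at_rate(K, rate_nats=t * 0.5))
```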

6 citations


Journal ArticleDOI
TL;DR: The tap gain vector for the equalizer is developed and an upper bound on the probability of error is derived; results show that under certain conditions the error rate can be reduced by orders of magnitude.
Abstract: The problem considered is that of reducing intersymbol interference caused by time-dispersive channels. Similarity between the response of a time-dispersive channel and that of a cyclic algebraic encoder has led to a method of treating the dispersive channel as a linear encoder. A tapped delay line equalizer is used to complete the encoding process so that the result can be decoded and errors corrected. The tap gain vector for the equalizer is developed, and an upper bound on the probability of error is derived. The analysis and computer simulation results show that under certain conditions the error rate can be reduced by orders of magnitude.
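
The sketch below shows a generic tapped-delay-line equalizer whose taps are chosen by least squares so that the channel-plus-equalizer cascade approximates a pure delay. This is a stand-in illustration only: the paper instead selects the taps so that the cascade behaves as a cyclic algebraic encoder, and the channel response, tap count, and delay used here are assumptions.

```python
# Generic least-squares tapped-delay-line equalizer (illustrative only).
# The taps g are chosen so that the convolution of the channel h with g
# approximates a unit pulse at the chosen delay.
import numpy as np

def ls_equalizer_taps(h: np.ndarray, num_taps: int, delay: int) -> np.ndarray:
    """Least-squares tap gains g such that (h * g)[n] ~ delta[n - delay]."""
    n_out = len(h) + num_taps - 1
    # Convolution matrix: column k holds h shifted down by k samples.
    H = np.zeros((n_out, num_taps))
    for k in range(num_taps):
        H[k:k + len(h), k] = h
    d = np.zeros(n_out)
    d[delay] = 1.0
    g, *_ = np.linalg.lstsq(H, d, rcond=None)
    return g

if __name__ == "__main__":
    h = np.array([1.0, 0.5, 0.2])           # example dispersive channel response
    g = ls_equalizer_taps(h, num_taps=11, delay=5)
    print(np.round(np.convolve(h, g), 3))   # close to a unit pulse at n = 5
```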

4 citations