
Showing papers in "IEEE Transactions on Communications in 1971"


Journal Article • DOI
Stephen B. Weinstein, P. Ebert
TL;DR: The Fourier transform data communication system is described and the effects of linear channel distortion are investigated and a differential phase modulation scheme is presented that obviates any equalization.
Abstract: The Fourier transform data communication system is a realization of frequency-division multiplexing (FDM) in which discrete Fourier transforms are computed as part of the modulation and demodulation processes. In addition to eliminating the banks of subcarrier oscillators and coherent demodulators usually required in FDM systems, a completely digital implementation can be built around a special-purpose computer performing the fast Fourier transform. In this paper, the system is described and the effects of linear channel distortion are investigated. Signal design criteria and equalization algorithms are derived and explained. A differential phase modulation scheme is presented that obviates any equalization.

2,507 citations
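The DFT modulation/demodulation pair the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's system: a naive O(N²) DFT stands in for the FFT, and the 8-subcarrier QPSK example is hypothetical.

```python
import cmath

def idft(symbols):
    # Inverse DFT: maps N subcarrier symbols to N time-domain samples
    # (the transmitter side of the Fourier transform data system).
    n = len(symbols)
    return [sum(symbols[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def dft(samples):
    # Forward DFT: recovers the subcarrier symbols at the receiver.
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# Hypothetical example: 8 QPSK symbols, one per subcarrier.
tx = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]
rx = dft(idft(tx))   # modulate, then demodulate over an ideal channel
```

Over an ideal channel the round trip is exact up to floating-point error; the paper's contribution concerns what happens when linear distortion sits between the two transforms.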


Journal Article • DOI
TL;DR: This tutorial paper begins with an elementary presentation of the fundamental properties and structure of convolutional codes and proceeds with the development of the maximum likelihood decoder, which yields for arbitrary codes both the distance properties and upper bounds on the bit error probability.
Abstract: This tutorial paper begins with an elementary presentation of the fundamental properties and structure of convolutional codes and proceeds with the development of the maximum likelihood decoder. The powerful tool of generating function analysis is demonstrated to yield for arbitrary codes both the distance properties and upper bounds on the bit error probability for communication over any memoryless channel. Previous results on code ensemble average error probabilities are also derived and extended by these techniques. Finally, practical considerations concerning finite decoding memory, metric representation, and synchronization are discussed.

1,040 citations
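The maximum likelihood decoder developed in this tutorial is the Viterbi algorithm. A minimal hard-decision sketch for the standard rate-1/2, constraint-length-3 code with generators 7 and 5 (octal) is below; the code choice and message are illustrative, not taken from the paper.

```python
def conv_encode(bits):
    # Rate-1/2, constraint-length-3 convolutional encoder,
    # generators 7 and 5 (octal); two zero tail bits flush the registers.
    s1 = s0 = 0
    out = []
    for b in bits + [0, 0]:
        out.append(b ^ s1 ^ s0)   # generator 111
        out.append(b ^ s0)        # generator 101
        s1, s0 = b, s1
    return out

def viterbi_decode(received, nbits):
    # Hard-decision Viterbi decoding over the 4-state trellis:
    # keep the minimum-Hamming-metric survivor into each state.
    INF = float('inf')
    metrics = [0, INF, INF, INF]          # state index = (s1 << 1) | s0
    paths = [[], [], [], []]
    for i in range(0, len(received), 2):
        r0, r1 = received[i], received[i + 1]
        new_metrics = [INF] * 4
        new_paths = [None] * 4
        for state in range(4):
            if metrics[state] == INF:
                continue
            s1, s0 = state >> 1, state & 1
            for b in (0, 1):
                o0, o1 = b ^ s1 ^ s0, b ^ s0
                nxt = (b << 1) | s1
                m = metrics[state] + (o0 != r0) + (o1 != r1)
                if m < new_metrics[nxt]:
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(4), key=lambda s: metrics[s])
    return paths[best][:nbits]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = conv_encode(msg)
noisy[5] ^= 1                         # one hard-decision channel error
decoded = viterbi_decode(noisy, len(msg))
```

This code has free distance 5, so the decoder corrects the single flipped bit; the paper's generating-function analysis is what turns such distance properties into error-probability bounds.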


Journal Article • DOI
TL;DR: Convolutional coding and Viterbi decoding, along with binary phase-shift keyed modulation, is presented as an efficient system for reliable communication on power limited satellite and space channels.
Abstract: Convolutional coding and Viterbi decoding, along with binary phase-shift keyed modulation, is presented as an efficient system for reliable communication on power limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short constraint length codes for a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and 4 and 8 level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. Quantitative performance degradation due to imperfect carrier phase coherence is evaluated and compared to that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/s constraint length 7 Viterbi decoder is discussed. Finally a comparison is made between Viterbi and sequential decoding in terms of suitability to various system requirements.

442 citations


Journal Article • DOI
G. D. Forney, Jr.
TL;DR: The method is to define an idealized model, called the classic bursty channel, toward which most burst-correcting schemes are explicitly or implicitly aimed, and to bound the best possible performance on this channel to exhibit classes of schemes which are asymptotically optimum.
Abstract: The purpose of this paper is to organize and clarify the work of the past decade on burst-correcting codes. Our method is, first, to define an idealized model, called the classic bursty channel, toward which most burst-correcting schemes are explicitly or implicitly aimed; next, to bound the best possible performance on this channel; and, finally, to exhibit classes of schemes which are asymptotically optimum and serve as archetypes of the burst-correcting codes actually in use. In this light we survey and categorize previous work on burst-correcting codes. Finally, we discuss qualitatively the ways in which real channels fail to satisfy the assumptions of the classic bursty channel, and the effects of such failures on the various types of burst-correcting schemes. We conclude by comparing forward-error-correction to the popular alternative of automatic repeat-request (ARQ).

295 citations
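A standard building block for matching a burst channel to a random-error code is the block interleaver: a burst on the channel is spread into isolated symbol errors after deinterleaving. A small sketch (my own illustrative dimensions, not a scheme from the paper):

```python
def interleave(data, rows, cols):
    # Write row-wise, read column-wise: a channel burst of length <= rows
    # lands on symbols that are cols apart after deinterleaving.
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    # Inverse permutation: write column-wise, read row-wise.
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = data[i]
            i += 1
    return out

rows, cols = 4, 5
clean = list(range(rows * cols))
tx = interleave(clean, rows, cols)
tx[0:4] = ['X'] * 4                    # a burst hitting 4 channel symbols
rx = deinterleave(tx, rows, cols)
hit = [i for i, v in enumerate(rx) if v == 'X']
```

After deinterleaving, the four burst errors land at positions 0, 5, 10, 15: one per row, exactly `cols` apart, which a modest random-error-correcting code can handle.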


Journal Article • DOI
M. Gans
TL;DR: In a fading channel, maximal ratio diversity combining improves the average signal-to-noise ratio over that of a single branch in proportion to the number of diversity branches combined; however, its main advantage is the reduction of the probability of deep fades.
Abstract: In a fading channel, maximal ratio diversity combining improves the average signal-to-noise ratio over that of a single branch in proportion to the number of diversity branches combined. However, its main advantage is the reduction of the probability of deep fades. The effect of Gaussian errors in the combiner weighting factors on the probability distribution of the output signal-to-noise ratio is computed. The limits on allowable error for a specified probability of fades below any given level are indicated. The results are applied to a mobile radio example in which the weighting factor is determined from a pilot transmitted along with the signal. To keep the pilot from overlapping the signal, they are separated either in frequency or in time. In this case the Gaussian error is due to decorrelation of the pilot from the signal either because their frequency separation or their time separation is too large.

252 citations
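The deep-fade advantage the abstract emphasizes can be quantified with the standard error-free-weight baseline (the ideal case the paper's Gaussian-error analysis perturbs): for i.i.d. Rayleigh branches, the outage probability of ideal maximal ratio combining is a chi-square tail.

```python
import math

def mrc_outage(x, branches):
    # P[combined SNR < threshold] for ideal maximal-ratio combining of
    # i.i.d. Rayleigh branches; x = threshold / mean branch SNR.
    # This is the chi-square(2L) CDF, the textbook error-free-weight result.
    return 1.0 - math.exp(-x) * sum(x ** k / math.factorial(k)
                                    for k in range(branches))

# Probability of a fade 10 dB below the mean branch SNR (x = 0.1):
p1, p2, p4 = (mrc_outage(0.1, L) for L in (1, 2, 4))
p1_exact = 1.0 - math.exp(-0.1)       # single branch: plain Rayleigh fading
```

The fade probability drops sharply with each added branch, which is the "reduction of the probability of deep fades" the paper builds on.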


Journal Article • DOI
TL;DR: An adaptive variable length coding system is presented, developed primarily for the proposed Grand Tour missions, but many features of this system clearly indicate a much wider applicability.
Abstract: An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample-to-sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.

185 citations
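The per-block code selection idea can be sketched in miniature. This is not the Basic Compressor itself (which selects among three concatenated codes over 21-pixel blocks); it is a hypothetical two-code version that picks, per block of prediction differences, whichever candidate code is shorter.

```python
def zigzag(d):
    # Map signed prediction differences to non-negative integers.
    return 2 * d if d >= 0 else -2 * d - 1

def unzigzag(z):
    return z // 2 if z % 2 == 0 else -(z + 1) // 2

def encode_block(diffs):
    # Adaptive selection in the spirit of the Basic Compressor: a comma
    # (unary-style) code wins on low-activity blocks, a fixed 8-bit code
    # wins when the differences are large.  'U'/'F' is the mode flag.
    comma = ''.join('1' * zigzag(d) + '0' for d in diffs)
    fixed = ''.join(format(d & 0xFF, '08b') for d in diffs)
    return 'U' + comma if len(comma) < len(fixed) else 'F' + fixed

def decode_block(code, n):
    if code[0] == 'U':
        out, run = [], 0
        for ch in code[1:]:
            if ch == '1':
                run += 1
            else:
                out.append(unzigzag(run))
                run = 0
        return out
    vals = [int(code[1 + 8 * i:9 + 8 * i], 2) for i in range(n)]
    return [v - 256 if v >= 128 else v for v in vals]

quiet = [0, 1, -1, 0, 2]             # low-activity block -> comma code
busy = [100, -100, 90, -75, 60]      # high-activity block -> fixed code
```

No code words are stored: each code is generated on the fly from the data, mirroring the property claimed in the abstract.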


Journal Article • DOI
A. Habibi, P. Wintz
TL;DR: The feasibility of coding two-dimensional data arrays by first performing a two- dimensional linear transformation on the data and then block quantizing the transformed data is investigated.
Abstract: The feasibility of coding two-dimensional data arrays by first performing a two-dimensional linear transformation on the data and then block quantizing the transformed data is investigated. The Fourier, Hadamard, and Karhunen-Loeve transformations are considered. Theoretical results for Markov data and experimental results for four pictures comparing these transform methods to the standard method of raster scanning, sampling, and pulse-code modulation are presented.

184 citations
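Of the three transforms considered, the Hadamard is the simplest to sketch: it needs only additions and sign changes, and since H·H = nI the same operation inverts itself. A minimal 4x4 two-dimensional version (illustrative block values, no quantizer):

```python
def hadamard(n):
    # Sylvester construction: H_{2m} = [[H_m, H_m], [H_m, -H_m]].
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def had2d(block, H):
    # Two-dimensional transform Y = H X H / n; since H H = n I,
    # applying it twice returns the original block.
    n = len(H)
    Y = matmul(matmul(H, block), H)
    return [[v / n for v in row] for row in Y]

H = hadamard(4)
X = [[8, 8, 7, 6], [8, 7, 6, 5], [7, 6, 5, 4], [6, 5, 4, 4]]
Y = had2d(X, H)           # coefficients that would be block quantized
Z = had2d(Y, H)           # inverse transform
```

For correlated data like this smooth block, the energy concentrates in the low-order coefficient Y[0][0] (here 24.0, the block sum over 4), which is what makes block quantization of the transform domain pay off.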


Journal Article • DOI
TL;DR: An adaptive decision feedback equalizer to detect digital information transmitted by pulse-amplitude modulation through a noisy dispersive linear channel is described, and its performance through several channels is evaluated by means of analysis, computer simulation, and hardware simulation.
Abstract: An adaptive decision feedback equalizer to detect digital information transmitted by pulse-amplitude modulation (PAM) through a noisy dispersive linear channel is described, and its performance through several channels is evaluated by means of analysis, computer simulation, and hardware simulation. For the channels considered, the performance of both the fixed and the adaptive decision feedback equalizers are found to be notably better than that obtained with a similar linear equalizer. The fixed equalizer, which may be used when the channel characteristics are known, exhibits performance which is close to that of the optimum, but impractical, Bayesian receiver and is considerably superior to that of the linear equalizer. The adaptive decision feedback equalizer, which is used when the channel impulse response is unknown or time varying, has a better transient and steady-state performance than the adaptive linear equalizer. The sensitivity of the receiver structure to adjustment and quantization errors is not pronounced.

162 citations
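The feedback half of the equalizer can be sketched for the fixed (known-channel) case: past decisions reconstruct the trailing intersymbol interference, which is subtracted before slicing. The channel coefficients below are hypothetical, and the adaptive tap-adjustment part of the paper is omitted.

```python
def dfe_detect(received, postcursors):
    # Decision-feedback detection for a known channel [1] + postcursors:
    # rebuild trailing ISI from past decisions, subtract, then slice.
    decisions = []
    for n, y in enumerate(received):
        isi = sum(h * decisions[n - 1 - k]
                  for k, h in enumerate(postcursors) if n - 1 - k >= 0)
        decisions.append(1 if y - isi >= 0 else -1)
    return decisions

# Hypothetical dispersive channel y[n] = x[n] + 0.6 x[n-1] - 0.3 x[n-2]:
x = [1, -1, -1, 1, 1, 1, -1, 1, -1, -1]
h = [1.0, 0.6, -0.3]
y = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
detected = dfe_detect(y, h[1:])
```

In the noiseless case the subtraction cancels the ISI exactly, so every symbol is recovered; the paper's analysis concerns performance with noise and with decision errors propagating through the feedback.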


Journal Article • DOI
TL;DR: The problems of sequence decision, sample timing, and carrier phase recovery in a class of linear modulation data transmission systems are treated from the viewpoint of multiparameter estimation theory.
Abstract: The problems of sequence decision, sample timing, and carrier phase recovery in a class of linear modulation data transmission systems are treated from the viewpoint of multiparameter estimation theory. The structure of the maximum-likelihood estimator is first obtained, and a decision-directed receiver is then derived. These receivers are different from the conventional one in that the carrier phase is extracted from the signal components themselves in an adaptive fashion. The structure of this adaptive demodulator and detector is then extended to the case in which the channel characteristic is unknown, and the algorithm for adjusting the carrier phase and sample instant is discussed in combination with that of adaptive equalization.

155 citations


Journal Article • DOI
TL;DR: A Gram-Charlier expansion is used to compute the error rate in the presence of intersymbol interference and additive Gaussian noise and the method presented is very useful for numerical computations.
Abstract: The error rate or the probability of error is an important parameter in the design of digital communication systems. In this paper a Gram-Charlier expansion is used to compute the error rate in the presence of intersymbol interference and additive Gaussian noise. The method presented is very useful for numerical computations. We also present expressions for the truncation errors. Rigorous proofs are presented in the Appendix.

140 citations
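The quantity such series approximate can be written down directly for short ISI spans: the exact error rate is a Gaussian tail averaged over every equally likely pattern of interfering symbols. This brute-force baseline (my own illustrative tap values) is what a Gram-Charlier expansion avoids when the number of interferers makes 2^N enumeration impractical.

```python
import itertools
import math

def q_func(x):
    # Gaussian tail probability Q(x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_exact(isi_taps, sigma):
    # Exact BER for binary +/-1 signalling with main tap 1: average the
    # Gaussian tail over all 2^N interferer patterns.
    total = 0.0
    for pattern in itertools.product((-1, 1), repeat=len(isi_taps)):
        isi = sum(a * h for a, h in zip(pattern, isi_taps))
        total += q_func((1 + isi) / sigma)
    return total / 2 ** len(isi_taps)

ber_no_isi = ber_exact([], 0.5)          # reduces to Q(2)
ber_isi = ber_exact([0.3, 0.1], 0.5)     # hypothetical residual ISI taps
q2 = q_func(2.0)
```

With no ISI the expression collapses to Q(1/sigma); adding residual taps raises the average error rate, since the tail function penalizes the weakened eye openings more than it credits the strengthened ones.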


Journal Article • DOI
TL;DR: The transform-coding concept has been applied to the coding of color images represented by three primary color planes of data and it is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation.
Abstract: During the past few years several monochromeimage transform-coding systems have been developed. In these systems, a quantized and coded version of a spatial unitary transform of an image is transmitted over a channel, rather than an image itself. In this paper the transform-coding concept has been applied to the coding of color images represented by three primary color planes of data. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are discussed. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.

Journal Article • DOI
A. Habibi
TL;DR: The performance of an nth-order DPCM system is studied and it is compared to the performance of the unitary-transform techniques (Hadamard, Fourier, and Karhunen-Loeve) in coding two monochrome still pictures.
Abstract: Two important classes of coding schemes that use the spatial correlation of picture elements in reducing data redundancy are the differential pulse-code modulation (DPCM) and the unitary transform coding techniques. We will study the performance of an nth-order DPCM system for n ranging from 1 to 22 and compare it to the performance of the unitary-transform techniques (Hadamard, Fourier, and Karhunen-Loeve) in coding two monochrome still pictures. We will also consider the sensitivities of the coding systems to picture-to-picture variations.
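A first-order member of the DPCM family compared here can be sketched in a few lines. The key structural point is the closed loop: the encoder predicts from the previously *reconstructed* sample, not the previous input, so encoder and decoder predictions stay in lockstep. Step size and samples below are illustrative.

```python
def dpcm_encode(samples, step):
    # First-order closed-loop DPCM with a uniform quantizer on the
    # prediction error; reconstruction error is bounded by step/2.
    pred, codes, recon = 0.0, [], []
    for s in samples:
        q = round((s - pred) / step)
        pred += q * step
        codes.append(q)
        recon.append(pred)
    return codes, recon

def dpcm_decode(codes, step):
    # The decoder replays the same accumulation from the code stream.
    pred, out = 0.0, []
    for q in codes:
        pred += q * step
        out.append(pred)
    return out

samples = [0.2, 0.9, 1.4, 1.3, 0.7, 0.1]
codes, recon = dpcm_encode(samples, 0.25)
decoded = dpcm_decode(codes, 0.25)
max_err = max(abs(s - d) for s, d in zip(samples, decoded))
```

Higher-order systems of the kind studied in the paper replace the single previous sample with a weighted combination of up to 22 neighbors.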

Journal Article • DOI
TL;DR: In this paper, the use of the discrete Kalman filter as an equalizer for digital binary transmission in the presence of noise and intersymbol interference has been considered, and it has been shown that using the 6-tap Kalman filter yields a considerably smaller error probability than when a conventional transversal equalizer with 15 taps is used.
Abstract: Consideration is given to the use of the discrete Kalman filter as an equalizer for digital binary transmission in the presence of noise and intersymbol interference. When the channel is modeled as an n-tap transversal filter, the Kalman filter assumes a similar form with "feed forward and feedback." It is shown how the Kalman filter can be used to estimate both the tap weights and the binary signal. Computer results on a fixed 6-tap channel show that use of the 6-tap Kalman filter yields a considerably smaller error probability than when a conventional transversal equalizer with 15 taps is used. Limited computer studies on the same channel, assumed to be initially unknown, suggest that the Kalman filter is capable of converging rapidly in the adaptive mode. Though these results are very encouraging, much work remains in the study and optimization of performance in the adaptive mode.

Journal Article • DOI
Hisashi Kobayashi
TL;DR: In this survey, coding techniques and results which pertain to such problems as reduction of dc wandering, suppression of intersymbol interference, and inclusion of self-clocking capability are reviewed.
Abstract: In this survey we shall review coding techniques and results which pertain to such problems as reduction of dc wandering, suppression of intersymbol interference, and inclusion of self-clocking capability. These problems are of engineering interest in the transmission or recording of digital data. The topics to be discussed include: 1) dc free codes such as bipolar signals and feedback balanced codes, 2) correlative level codes and optimal decoding methods, 3) Fibonacci codes and run-length constraint codes, and 4) state-oriented codes.
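The bipolar signal mentioned under dc-free codes is easy to sketch: it is the alternate-mark-inversion (AMI) line code, in which each 1 takes the opposite polarity of the previous 1. The bounded running digital sum is exactly the property that removes the dc component.

```python
def ami_encode(bits):
    # Bipolar / alternate-mark-inversion line code: 0 -> level 0, and
    # successive 1s alternate between +1 and -1, so the running digital
    # sum stays bounded and the spectrum has a null at dc.
    level, out = 1, []
    for b in bits:
        if b:
            out.append(level)
            level = -level
        else:
            out.append(0)
    return out

def ami_decode(levels):
    # Any nonzero level is a 1; polarity carries no data (it can instead
    # be used for error monitoring, since valid marks must alternate).
    return [0 if v == 0 else 1 for v in levels]

bits = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0]
line = ami_encode(bits)
running = [sum(line[:i + 1]) for i in range(len(line))]
```

With this polarity convention the running sum never leaves {0, 1}, whatever the data.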

Journal Article • DOI
M. Tasto, P. Wintz
TL;DR: The adaptive block quantizer is proposed for coding data sources that emit a sequence of correlated real numbers with known first- and second-order statistics, and the system is optimized relative to both the mean square error and the subjective quality of the reconstructed data.
Abstract: A new source encoder called the adaptive block quantizer is proposed for coding data sources that emit a sequence of correlated real numbers with known first- and second-order statistics. Blocks of source output symbols are first classified and then block quantized in a manner that depends on their classification. The system is optimized relative to both the mean square error and the subjective quality of the reconstructed data for a certain class of pictorial data, and the resulting system performance is demonstrated. Some interesting relationships between mean square error and subjective picture quality are presented.

Journal Article • DOI
Lawrence R. Rabiner
TL;DR: The motivation behind three design techniques that have been proposed is reviewed here, and the resulting designs are compared with respect to filter characteristics, ease of design, and methods of realization.
Abstract: Several new techniques for designing finite-duration impulse-response digital filters have become available in the past few years. The motivation behind three design techniques that have been proposed is reviewed here, and the resulting designs are compared with respect to filter characteristics, ease of design, and methods of realization. The design techniques to be discussed include window, frequency-sampling, and equiripple designs.
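The first of the three techniques, the window method, can be sketched directly: truncate the ideal lowpass impulse response (a sinc) and taper it with a window. The Hamming window, tap count, and cutoff below are illustrative choices, not parameters from the paper.

```python
import math

def fir_lowpass(num_taps, cutoff):
    # Window design method: truncate the ideal sinc impulse response to
    # num_taps samples and taper it with a Hamming window.  cutoff is
    # normalized to the sampling rate (0 < cutoff < 0.5).
    M = num_taps - 1
    h = []
    for n in range(num_taps):
        k = n - M / 2.0
        ideal = (2 * cutoff if k == 0
                 else math.sin(2 * math.pi * cutoff * k) / (math.pi * k))
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / M)
        h.append(ideal * window)
    gain = sum(h)                     # normalize dc gain to exactly 1
    return [v / gain for v in h]

h = fir_lowpass(21, 0.2)
dc_gain = sum(h)
nyquist_gain = abs(sum(v * (-1) ** n for n, v in enumerate(h)))
symmetric = all(abs(h[i] - h[-1 - i]) < 1e-12 for i in range(len(h)))
```

The symmetric impulse response gives the exactly linear phase that makes FIR designs attractive, and the Hamming taper buys roughly 53 dB of stopband attenuation at the cost of a wider transition band.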

Journal Article • DOI
TL;DR: A new approach to asynchronous multiple access communications, employing orthogonal convolutional coding and Viterbi decoding, is presented and results indicate that the technique is quite efficient in terms of the number of users supportable at a specified bit error rate in a given system bandwidth.
Abstract: A new approach to asynchronous multiple access communications is presented. The technique, employing orthogonal convolutional coding and Viterbi decoding, is described and its performance characteristics are derived for the case in which other-user interference is the only source of noise. Results indicate that the technique is quite efficient in terms of the number of users supportable at a specified bit error rate in a given system bandwidth. Furthermore, the results of a design study are described, showing that the technique is a practical one to implement.

Journal Article • DOI
TL;DR: This work analyzes the operation of a dithered quantizer of picture luminance and shows that the addition of dither before quantization restores some of the pictorial information which a coarse quantizer would otherwise discard.
Abstract: This work analyzes the operation of a dithered quantizer of picture luminance. It is shown that the addition of dither before quantization restores some of the pictorial information which a coarse quantizer would otherwise discard. Two-dimensional ordered dither patterns are described which are considerably more effective for this purpose than random distributions of the same dither samples. The patterns of dither can also be designed so that noise artifacts on the output picture are less visible than with equivalent random dither and so that (contrary to the random case) it is not necessary to subtract the same dither pattern at the receiver for substantially best results.
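An ordered dither of the kind the paper describes can be sketched with the classic 4x4 Bayer threshold pattern (a standard pattern used here for illustration; the paper derives its own dither designs). Adding the position-dependent threshold before 1-bit quantization makes the local density of white pixels track the input gray level.

```python
BAYER_4 = [                      # 4x4 ordered-dither threshold pattern
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_to_1bit(image):
    # Compare each pixel (0..255) against the tiled threshold pattern;
    # a flat input level comes out as a texture whose average matches it.
    out = []
    for y, row in enumerate(image):
        out.append([1 if px / 255.0 > (BAYER_4[y % 4][x % 4] + 0.5) / 16.0
                    else 0
                    for x, px in enumerate(row)])
    return out

flat_gray = [[128] * 4 for _ in range(4)]     # mid-gray test patch
halftone = dither_to_1bit(flat_gray)
white_fraction = sum(map(sum, halftone)) / 16.0
```

A plain coarse quantizer would map the whole mid-gray patch to a single level; with the ordered pattern, exactly half the pixels come out white, preserving the average luminance that the quantizer alone would discard.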

Journal Article • DOI
TL;DR: A class of rate 1/2 nonsystematic convolutional codes with an undetected decoding error probability verified by simulation to be much smaller than for the best systematic codes of the same constraint length, and a "quick-look-in" feature that permits recovery of the information sequence from the hard-decisioned received data without decoding simply by modulo-two addition of the received sequences.
Abstract: Previous space applications of sequential decoding have all employed convolutional codes of the systematic type where the information sequence itself is used as one of the encoded sequences. This paper describes a class of rate 1/2 nonsystematic convolutional codes with the following desirable properties: 1) an undetected decoding error probability verified by simulation to be much smaller than for the best systematic codes of the same constraint length; 2) computation behavior with sequential decoding verified by simulation to be virtually identical to that of the best systematic codes; 3) a "quick-look-in" feature that permits recovery of the information sequence from the hard-decisioned received data without decoding simply by modulo-two addition of the received sequences; and 4) suitability for encoding by simple circuitry requiring less hardware than encoders for the best systematic codes of the same constraint length. Theoretical analyses are given to show 1) that with these codes the information sequence is extracted as reliably as possible without decoding for nonsystematic codes and 2) that the constraints imposed to achieve the quicklook-in feature do not significantly limit the error-correcting ability of the codes in the sense that the Gilbert bound on minimum distance can still be attained under these constraints. These codes have been adopted for use in several forthcoming space missions.
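The quick-look-in mechanism can be demonstrated on a toy scale. The paper's codes have much longer constraint lengths, but the small (7, 5) generator pair happens to satisfy the quick-look-in condition g1(D) + g2(D) = D, so modulo-two addition of the two encoded streams returns the information sequence delayed by one symbol, with no decoding at all.

```python
def encode_qli(bits):
    # Rate-1/2 encoder with generators 111 and 101 (octal 7, 5);
    # 111 XOR 101 = 010 = D, the quick-look-in condition.
    s1 = s0 = 0
    pairs = []
    for b in bits:
        pairs.append((b ^ s1 ^ s0, b ^ s0))
        s1, s0 = b, s1
    return pairs

msg = [1, 0, 1, 1, 0, 0, 1, 0]
pairs = encode_qli(msg)
# Modulo-two addition of the two received streams recovers the
# information sequence delayed by one symbol -- no decoder needed.
quick_look = [a ^ b for a, b in pairs]
```

On a noisy channel each recovered bit is wrong whenever exactly one of the two received symbols is in error, which is why the paper analyzes how reliably the information can be extracted this way.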

Journal Article • DOI
TL;DR: An optimum adaptive delta modulator-demodulator configuration is derived and it is shown that the output signal-to-noise ratio (SNR) is approximately independent of the input signal power and is subject only to the limitations of the hardware employed.
Abstract: An optimum adaptive delta modulator-demodulator configuration is derived. This device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov Gaussian source. The optimum system is compared using computer simulations with the linear delta modulator and an enhanced Abate delta modulator. In addition the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented. The experimental "optimum" system, an enhanced version of the Abate delta modulator and a linear delta modulator were tested and compared using sinusoidal, square-wave, and pseudorandom binary sequence inputs. The results show that the output signal-to-noise ratio (SNR) is approximately independent of the input signal power and is subject only to the limitations of the hardware employed. In addition, voice was recorded using these systems. The demodulated voice indicates negligible degradation is caused by the optimum system and by the enhanced Abate system while the linear delta modulator suffers significant degradation at a sampling rate of 56 kbit/s. The systems were also tested at 19.2 kbit/s. At this bit rate, speech recognition, using the experimental "optimum" system, remained completely intelligible.
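A generic adaptive delta modulator (not the paper's optimum rule, which uses two past samples and a Markov-Gaussian criterion) illustrates the common structure: one bit per sample, with a step size that grows when output bits repeat and shrinks when they alternate. The step constants are illustrative.

```python
def adm_encode(samples, step0=0.1, k=1.5):
    # 1-bit adaptive delta modulator: the step grows by k while output
    # bits repeat (slope overload) and shrinks by 1/k when they
    # alternate (granular region).
    est, step, prev = 0.0, step0, None
    bits, recon = [], []
    for s in samples:
        bit = 1 if s >= est else 0
        if prev is not None:
            step = step * k if bit == prev else step / k
        est += step if bit else -step
        bits.append(bit)
        recon.append(est)
        prev = bit
    return bits, recon

def adm_decode(bits, step0=0.1, k=1.5):
    # The demodulator replays the same step-size recursion from the bit
    # stream alone, so it tracks the encoder's estimate exactly.
    est, step, prev = 0.0, step0, None
    out = []
    for bit in bits:
        if prev is not None:
            step = step * k if bit == prev else step / k
        est += step if bit else -step
        out.append(est)
        prev = bit
    return out

ramp = [0.3 * i for i in range(20)]
bits, recon = adm_encode(ramp)
decoded = adm_decode(bits)
```

Because the adaptation depends only on the transmitted bits, modulator and demodulator stay synchronized without side information, which is the property that makes the modulator-demodulator pairing in the paper workable.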

Journal Article • DOI
TL;DR: In this paper, the L Fourier coefficients of largest absolute value are determined for each subsection of a picture, where L is proportional to the standard deviation of the picture samples in the subsection.
Abstract: Recent advances in digital computer and optical technology have made image spectra determination practical. Pratt and Andrews [1] studied bandwidth compression using the Fourier transform of complete pictures. By treating pictures adaptively on a piecewise basis, picture detail is better represented. Also, subjective preferences of human vision can be used, which result in further improvements in picture quality. The original picture is sampled and then divided into small subsections. Each subsection is expanded in a two-dimensional Fourier series. The L Fourier coefficients of largest absolute value are determined for each subsection, where L is proportional to the standard deviation of the picture samples in the subsection. The frequencies and complex amplitudes of those L Fourier coefficients are transmitted. The number of quantization levels used for the Fourier coefficients in each subsection is made dependent on the standard deviation of the picture samples in the subsection, and the size of the quantum steps is made dependent on the magnitude of the largest Fourier coefficient of the subsection, aside from the average value. The frequencies of the coefficients correspond to positions in a two-dimensional spatial frequency plane. These positions, or two-dimensional frequencies, are transmitted by run-length coding. The process is adaptive in the sense that its parameters vary from subsection to subsection of the picture in an effort to match the properties of the individual subsections. Subsection size and other important system constants are chosen with knowledge of the properties of human vision. We are able to obtain high-quality reconstructed pictures, using on the average 1.25 bits per picture point.

Journal Article • DOI
TL;DR: The overall system design of the device is described with particular emphasis on a noise analysis, and it is concluded that the A/D conversion points are the most important noise sources and the most costly to deal with.
Abstract: In keeping with the trend to greater use of digital circuits for signal processing, a project was undertaken to realize in an exploratory way an important telecommunication function using as great a proportion of digital hardware as possible. The function chosen is that of the A-channel bank; viz., the frequency division multiplexing (FDM) of 12 voiceband signals onto a single wire. Because of the nature of its operation the device to be described can also perform a translation between FDM analog signals and time division multiplexed (TDM) digital signals. This paper describes the overall system design of the device with particular emphasis on a noise analysis. The principal sources of noise are the A/D conversion points and the roundoff points that occur at the outputs of multipliers. Each noise source is examined in turn and its contribution to the total noise assessed. It is concluded that the A/D conversion points are the most important noise sources and the most costly to deal with.

Journal Article • DOI
G. Forney, E. Bower
TL;DR: The design of a rate-1/2 hard-decision sequential decoder capable of operation at data rates up to 5 Mbit/s is described; test results are substantially in agreement with predictions of a coding gain of the order of 5 dB at a 10^-5 error rate.
Abstract: We describe the design of a rate-1/2 hard-decision sequential decoder capable of operation at data rates up to 5 Mbit/s. Test results are given for digitally generated errors, white noise, and real channels. The results are substantially in agreement with predictions of a coding gain of the order of 5 dB at a 10^-5 error rate.

Journal Article • DOI
TL;DR: In this article, it was shown that the relationship between the output and input signal-to-noise ratio may be significantly different than that obtained by Davenport for incoherent limiters.
Abstract: Many applications of the bandpass limiter (BPL) involve coherent demodulation following the limiter. It is shown that as a result of demodulation, the signal mean and the noise variance are direct functions of the phase angle between the signal component passed by the BPL and the coherent reference. As a result, the relationship between the output and input signal-to-noise ratio may be significantly different than that obtained by Davenport for incoherent limiters. A study is also made of the output noise spectral density, and an approximate expression is derived as a function of the input signal-to-noise ratio, reference phase angle, and the characteristics of the input bandpass filter to the limiter. Also discussed is the first-order signal-plus-noise probability density following coherent demodulation.

Journal Article • DOI
TL;DR: Discrete forms of the Fourier, Hadamard, and Karhunen-Loeve transforms are examined for their capacity to reduce the bit rate necessary to transmit speech signals and these bit-rate reductions are shown to be somewhat independent of the transmission bit rate.
Abstract: Discrete forms of the Fourier, Hadamard, and Karhunen-Loeve transforms are examined for their capacity to reduce the bit rate necessary to transmit speech signals. To rate their effectiveness in accomplishing this goal the quantizing error (or noise) resulting for each transformation method at various bit rates is computed and compared with that for conventional companded PCM processing. Based on this comparison, it is found that Karhunen-Loeve provides a reduction in bit rate of 13.5 kbits/s, Fourier 10 kbits/s, and Hadamard 7.5 kbits/s as compared with the bit rate required for companded PCM. These bit-rate reductions are shown to be somewhat independent of the transmission bit rate.

Journal Article • DOI
TL;DR: The most important recommendation is that the high-frequency band be given more attention for telecommunication between terminals buried just beneath the air-ground interface.
Abstract: We present a review of the telecommunication possibilities for terminals that are buried in the earth or submerged in the sea. The important role of the operating frequency is stressed. Without being comprehensive, we discuss the relevance of the large number of theoretical papers on electromagnetic waves in conducting media. Our most important recommendation is that the high-frequency band be given more attention for telecommunication between terminals buried just beneath the air-ground interface. Also, we suggest that future research be directed towards the investigation of the frequency dependence of the conductivity and permittivity of geological materials.

Journal Article • DOI
B. Meister, H. Muller, Harry Rudin
TL;DR: In this paper, a new class of criteria for the optimum capacity assignment in store-and-forward communication networks under a total fixed-cost constraint is presented, which are more sensitive to the needs of the individual user.
Abstract: This paper presents a new class of criteria for the optimum capacity assignment in store-and-forward communication networks under a total fixed-cost constraint. Compared with conventional average-delay optimization these criteria are more sensitive to the needs of the individual user. Closed-form results are attained.
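The conventional average-delay optimization the paper compares against is Kleinrock's square-root capacity assignment. A sketch of that baseline rule (not the paper's new criteria; flows are taken to be already expressed in capacity units):

```python
import math

def sqrt_assignment(flows, budget):
    # Kleinrock's square-root rule, the conventional average-delay
    # optimum: each link gets its own carried traffic plus a share of
    # the excess capacity proportional to the square root of its flow.
    excess = budget - sum(flows)
    assert excess > 0, "budget must exceed total carried traffic"
    norm = sum(math.sqrt(f) for f in flows)
    return [f + excess * math.sqrt(f) / norm for f in flows]

caps = sqrt_assignment([1.0, 4.0, 9.0], 20.0)   # illustrative flows/budget
total = sum(caps)
```

The whole budget is always allocated, but lightly loaded links receive proportionally thin margins; criteria weighted toward the individual user, as proposed in the paper, shift that balance.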

Journal Article • DOI
H. Seidel
TL;DR: In this paper, a feedforward error-control system was applied to an L-4 coaxial system flat-gain amplifier operating in the frequency range of 0.5-20 MHz.
Abstract: As part of a test to determine its applicability to coaxial repeaters, a feedforward error-control system was applied to an L-4 coaxial system flat-gain amplifier operating in the frequency range of 0.5-20 MHz. The error amplifier was a duplicate of the main amplifier. With three test tones at 7.5 dBm output each applied within the last L-4 master group, modulation products in the uncompensated system were observed about 65 dB down from a single-tone level. With the error loop applied, modulation products dropped to greater than 100 dB over the range, often exceeding 110 dB. In particular, the third-order intermodulation product corresponding to A + B - C was reduced by 42 dB to -108 dB.

Journal Article • DOI
Joel Goldman
TL;DR: The results indicate that one cannot approximate well the effect of interference on the performance of a phase-shift-keyed PSK system by treating it as additional Gaussian noise.
Abstract: The multiple error performance of a phase-shift-keyed (PSK) communications system, when both cochannel interference (due possibly to other cochannel angle-modulated systems) and Gaussian noise additively perturb the transmitted signals, is considered. The results are fairly general: the main requirement is that the interference be circularly symmetric. All of our findings are also applicable to the case when only noise is present. The results indicate that one cannot approximate well the effect of interference on the performance of a PSK system by treating it as additional Gaussian noise. First, we derive the probability density function f_A of the phase angle of a cosinusoid plus interference and Gaussian noise. We then obtain readily computable expressions (in terms of f_A) for the probability of any number of consecutive errors in an m-phase system when either coherent or differential detection is utilized. For numerical results, the interference is assumed to be due to other cochannel angle-modulated communications systems, and the double error probability and conditional probability of error are given for 2- and 4-phase systems.

Journal Article • DOI
C. Cutler
TL;DR: Delayed encoding is not just an improvement for existing differential coders; it promises a revolution in coder design, increasing the signal-to-quantizing-noise ratio (S/N) markedly.
Abstract: The decision process in source encoders can be influenced favorably by anticipating future quantizing errors and modifying the quantizer appropriately. This requires that the signal code be delayed slightly from the corresponding input signal sample. As an adaptation of an existing coder, little advantage is obtained [8]. However, the process has a stabilizing influence so that much stronger adaptation algorithms can be used to advantage, increasing the signal-to-quantizing-noise ratio (S/N) markedly. It is believed that this fact is of general applicability, but it is shown herein only for 1-bit coders. A family of 1-bit coders (delta modulators) using exponentially adaptive step size, with two steps of integration in the feedback path has been studied using a special purpose computer facility. Such coders are ordinarily unstable and useless, but with error anticipation a measured S/N advantage of several dB over optimized adaptive coders of previous design is obtained. The study has concentrated on picture signals and an encoding which does not require a separate channel or code to signal changes in the coder. Fig. 8 compares the optimized delayed encoder operation with an optimized adaptive coder without delay. "Optimization" in the former case requires a modification of the feedback network: the use of two steps of integration instead of one. Delayed encoding is not just an improvement for existing differential coders; it promises a revolution in coder design.