Showing papers in "IEEE Transactions on Circuits and Systems for Video Technology in 1991"


Journal ArticleDOI
TL;DR: A parallel structured VLC decoder which decodes each codeword in one clock cycle regardless of its length is introduced; the required clock rate of the decoder is thus lower, and parallel processing architectures become easy to adopt in the entropy coding system.
Abstract: Run-length coding (RLC) and variable-length coding (VLC) are widely used techniques for lossless data compression. A high-speed entropy coding system using these two techniques is considered for digital high definition television (HDTV) applications. Traditionally, VLC decoding is implemented through a tree-searching algorithm as the input bits are received serially. For HDTV applications, it is very difficult to implement a real-time VLC decoder of this kind due to the very high data rate required. A parallel structured VLC decoder which decodes each codeword in one clock cycle regardless of its length is introduced. The required clock rate of the decoder is thus lower, and parallel processing architectures become easy to adopt in the entropy coding system. The parallel entropy coder and decoder are designed for implementation in two experimental prototype chips intended to encode and decode more than 52 million samples/s. Some related system issues, such as the synchronization of variable-length codewords and error concealment, are also discussed.
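
To make the one-codeword-per-cycle idea concrete, here is a minimal software sketch of table-based VLC decoding, assuming a small hypothetical prefix-free codebook (not the paper's HDTV tables): a single lookup on the next few bits yields both the symbol and the codeword length, so each step consumes a whole codeword regardless of its length.

```python
# Minimal sketch of table-based VLC decoding: one lookup per codeword.
# The codebook is a hypothetical 4-symbol example, not the paper's table.
CODEBOOK = {"0": "A", "10": "B", "110": "C", "111": "D"}  # prefix-free
MAXLEN = max(len(c) for c in CODEBOOK)

# Table with 2**MAXLEN entries: index = next MAXLEN bits -> (symbol, length).
TABLE = [None] * (1 << MAXLEN)
for code, sym in CODEBOOK.items():
    pad = MAXLEN - len(code)
    base = int(code, 2) << pad
    for tail in range(1 << pad):            # all completions of a short code
        TABLE[base + tail] = (sym, len(code))

def decode(bits: str) -> list:
    out, pos = [], 0
    while pos < len(bits):
        window = bits[pos:pos + MAXLEN].ljust(MAXLEN, "0")  # pad final window
        sym, length = TABLE[int(window, 2)]
        out.append(sym)                      # one codeword per iteration
        pos += length                        # advance by the decoded length
    return out

print(decode("0" "10" "111" "110"))          # -> ['A', 'B', 'D', 'C']
```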

219 citations


Journal ArticleDOI
TL;DR: A multiresolution representation for video signals is introduced and Interpolation in an FIR (finite impulse response) scheme solves uncovered area problems, considerably improving the temporal prediction.
Abstract: A multiresolution representation for video signals is introduced. A three-dimensional spatiotemporal pyramid algorithm for high-quality compression of advanced television sequences is presented. The scheme utilizes a finite memory structure and is robust to channel errors, provides compatible subchannels, and can handle different scan formats, making it well suited for the broadcast environment. Additional features such as fast random access and reverse playback make it suitable for digital storage as well. Model-based processing is applied both over space and time, where motion-based interpolation is used. Interpolation in an FIR (finite impulse response) scheme solves uncovered area problems, considerably improving the temporal prediction. The complexity is comparable to that of previous recursive schemes. Computer simulations indicate that high compression factors (about an order of magnitude) are easily achieved with no apparent loss of quality. The scheme also has a number of commonalities with the emerging MPEG standard.

204 citations


Journal ArticleDOI
Atul Puri1, Rangarajan Aravind1
TL;DR: The authors address the problem of adapting the Motion Picture Experts Group (MPEG) quantizer for scenes of different complexity (at bit rates around 1 Mb/s), such that the perceptual quality of the reconstructed video is optimized.
Abstract: The authors address the problem of adapting the Motion Picture Experts Group (MPEG) quantizer for scenes of different complexity (at bit rates around 1 Mb/s), such that the perceptual quality of the reconstructed video is optimized. Adaptive quantization techniques conforming to the MPEG syntax can significantly improve the performance of the encoder. The authors concentrate on a one-pass causal scheme to limit the complexity of the encoder. The system employs prestored models for perceptual quality and bit rate that have been experimentally derived. A framework is provided for determining these models as well as adapting them to locally varying scene characteristics. The variance of an 8*8 (luminance) block is basic to the techniques developed. Following standard practice, it is defined as the average of the square of the deviations of the pixels in the block from the mean pixel value.
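
Since the block variance drives the adaptation, a short worked example of the statistic may help; the 8*8 block below is randomly generated for illustration.

```python
import numpy as np

# Variance of an 8*8 luminance block: mean squared deviation from the block mean.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # stand-in luminance block

mean = block.mean()
variance = ((block - mean) ** 2).mean()      # average of squared deviations
assert np.isclose(variance, block.var())     # matches the population variance
print(mean, variance)
```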

201 citations


Journal ArticleDOI
TL;DR: A family of multichannel filters based on multivariate data ordering, such as the marginal median, the vector median, the marginal α-trimmed mean, and the multichannel modified trimmed mean filter, is described in detail.
Abstract: Multivariate data ordering and its use in color image filtering are presented. Several of the filters presented are extensions of the single-channel filters based on order statistics. The statistical analysis of the marginal order statistics is presented for the p-dimensional case. A family of multichannel filters based on multivariate data ordering, such as the marginal median, the vector median, the marginal α-trimmed mean, and the multichannel modified trimmed mean filter, is described in detail. The performance of the marginal median and the vector median filters in impulsive noise filtering is investigated. Simulation examples of the filters under study are described.
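
A small sketch may make two of these orderings concrete: the marginal median orders each channel independently, while the vector median returns the input sample with the minimum aggregate distance to the others, so its output is always one of the inputs. The RGB window values are illustrative.

```python
import numpy as np

# One filter window of p = 3 dimensional (RGB) samples, with one impulse.
window = np.array([[10, 200, 30], [12, 198, 33], [11, 201, 29],
                   [250, 10, 240], [13, 199, 31]], dtype=float)

marginal_median = np.median(window, axis=0)   # channel-by-channel ordering

dists = np.linalg.norm(window[:, None] - window[None, :], axis=2).sum(axis=1)
vector_median = window[np.argmin(dists)]      # an actual input sample

print(marginal_median)   # componentwise median; may mix samples
print(vector_median)     # the impulsive sample is rejected
```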

178 citations


Journal ArticleDOI
TL;DR: A class of vector filters is developed; these filters are efficient smoothers in additive noise, can be designed to have detail-preserving characteristics, and are used to develop ranked-order type estimators for multivariate image fields.
Abstract: The extension of ranking a set of elements in R to ranking a set of vectors in a p-dimensional space R^p is considered. In the approach presented here, vector ranking reduces to ordering vectors according to a sorted list of vector distances. A statistical analysis of this vector ranking is presented, and these vector ranking concepts are then used to develop ranked-order type estimators for multivariate image fields. A class of vector filters is developed which are efficient smoothers in additive noise and can be designed to have detail-preserving characteristics. A statistical analysis is developed for the class of filters, and a number of simulations were performed in order to quantitatively evaluate their performance. These simulations involve the estimation of both stationary multivariate random signals and color images in additive noise.
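
The distance-based ranking lends itself to a compact sketch: rank the window's vectors by their summed distance to the others, then average the lowest-ranked ones, which discards outliers while smoothing. The data and the `keep` parameter are illustrative.

```python
import numpy as np

# Ranked-order smoother: order vectors by aggregate distance, keep the best.
def ranked_order_mean(vectors: np.ndarray, keep: int) -> np.ndarray:
    d = np.linalg.norm(vectors[:, None] - vectors[None, :], axis=2).sum(axis=1)
    order = np.argsort(d)                     # the vector "order statistics"
    return vectors[order[:keep]].mean(axis=0)

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=(9, 3))
data[0] = [40.0, -40.0, 40.0]                 # one impulsive outlier
print(ranked_order_mean(data, keep=6))        # outlier ranks last, is excluded
```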

103 citations


Journal ArticleDOI
TL;DR: The authors present an efficient block-matching algorithm called the parallel hierarchical one-dimensional search (PHODS) for motion estimation that is more suitable for hardware realization of a VLSI motion estimator.
Abstract: The authors present an efficient block-matching algorithm called the parallel hierarchical one-dimensional search (PHODS) for motion estimation. Instead of finding the two-dimensional motion vector directly, the PHODS finds two one-dimensional displacements in parallel on the two axes (say x and y) independently within the search area. The major feature of this algorithm is that its search speed for the motion vector is faster than that of the other search algorithms on account of its simpler computations and parallelism. Compared with previous algorithms in terms of four measurements, the PHODS rivals them in performance. The hardware-oriented features of the PHODS, i.e., regularity, simplicity, and parallelism, make the PHODS well suited for hardware realization of a VLSI motion estimator.
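
A rough software rendering of the PHODS idea follows: at each hierarchy level, the x and y displacements are refined by two one-dimensional searches that read each other's previous values (so they could run in parallel), and the step size is halved. SAD matching, the block size, and the smooth test frame are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, B=8):
    """Sum of absolute differences between a block and a displaced patch."""
    patch = ref[by + dy: by + dy + B, bx + dx: bx + dx + B]
    return np.abs(cur[by:by + B, bx:bx + B] - patch).sum()

def phods(cur, ref, bx, by, search=7, B=8):
    dx = dy = 0
    step = 1 << (search.bit_length() - 1)       # e.g. 4 for a +/-7 search area
    while step >= 1:
        # both 1-D searches use the *previous* (dx, dy): parallelizable
        best_x = min((dx - step, dx, dx + step),
                     key=lambda d: sad(cur, ref, bx, by, d, dy, B))
        best_y = min((dy - step, dy, dy + step),
                     key=lambda d: sad(cur, ref, bx, by, dx, d, B))
        dx, dy = best_x, best_y
        step //= 2
    return dx, dy

y, x = np.mgrid[0:64, 0:64]
ref = np.sin(x / 6.0) + np.cos(y / 5.0)         # smooth synthetic frame
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))  # true (dx, dy) = (2, -3)
print(phods(cur, ref, bx=24, by=24))            # typically recovers (2, -3)
```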

95 citations


Journal ArticleDOI
TL;DR: A CCITT compatible video coding scheme for HDTV conferencing is developed using a combination of interframe differential pulse code modulation (DPCM) and direct PCM.
Abstract: Various subband coding schemes are developed for high-compression video coding applications. These schemes are based on two distinct interframe subband models. The two models are compared and shown to perform equally under certain conditions. The first model, which is relatively less complex but operates at full speed, was considered for video conferencing applications at lower rates. The second model, due to its generic structure operating at a reduced speed, was found to be suitable for high-quality video applications. As a result, a CCITT-compatible video coding scheme for HDTV conferencing is developed. For the best performance, the higher frequency bands were coded with a combination of interframe differential pulse code modulation (DPCM) and direct PCM.

79 citations


Journal ArticleDOI
TL;DR: A system for the compression of HDTV for DVTRs based on this rate-constrained optimal block-adaptive technique is designed using the DCT and vector quantization with a multistage structure.
Abstract: An image coding algorithm for digital video tape recorders (DVTRs) must satisfy several requirements which do not arise in other applications of video compression. A key constraint on the data format is satisfied if every frame (or field) of video is partitioned into a small number of subimages, each independently coded with a fixed number of bits. This requirement excludes the use of interframe coding and most variable-rate image coding algorithms. We propose a new algorithm that codes a subimage efficiently under this data format constraint and allows virtually lossless reproduction at reasonably low bit rates. Each subimage is partitioned into non-overlapping blocks, and each block is coded by one of a finite set of predesigned block quantizers covering a range of bit rates. For the ith block in the subimage, a rate function R_i(L_i) and a distortion function D_i(L_i) are tabulated for each block quantizer L_i. A near-optimal quantizer allocation algorithm based on the Lagrange multiplier method is used to select a particular quantizer for each block. The objective is to minimize the distortion of the entire subimage under the constraint on the total number of bits for the subimage. A system for the compression of HDTV for DVTRs based on this rate-constrained optimal block-adaptive technique is designed using the DCT and vector quantization with a multistage structure. Simulation results demonstrate that this algorithm has the potential of achieving virtually lossless compression for digital tape recording of HDTV.
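
The allocation step admits a compact sketch: for a trial multiplier, each block independently picks the quantizer minimizing D + λR, and a bisection on λ drives the total rate to the subimage budget. The rate/distortion tables below are synthetic stand-ins, not measured data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_blocks, n_quant = 16, 6
R = np.tile(np.arange(1.0, 7.0), (n_blocks, 1))               # bits per block
D = rng.uniform(1, 10, (n_blocks, 1)) * 2.0 ** (-2 * np.arange(n_quant))

def allocate(lam):
    choice = np.argmin(D + lam * R, axis=1)   # independent per-block choice
    return choice, R[np.arange(n_blocks), choice].sum()

budget, lo, hi = 40.0, 0.0, 1000.0
for _ in range(60):                           # bisect on the multiplier
    lam = 0.5 * (lo + hi)
    _, rate = allocate(lam)
    if rate > budget:
        lo = lam                              # over budget: penalize rate more
    else:
        hi = lam                              # feasible: try a smaller multiplier
choice, rate = allocate(hi)                   # hi stays feasible throughout
print(rate, D[np.arange(n_blocks), choice].sum())
```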

76 citations


Journal ArticleDOI
Weiping Li1
TL;DR: A vector transform, originally developed for digital filtering, is used and it is shown that this vector transform does have the decorrelation and energy-packing properties.
Abstract: A vector transform is introduced. The application of the vector transform to image coding is discussed. The development of a vector transform coding technique consists of two parts. One part is to find a vector transform that has the decorrelation and energy-packing properties. The other part is to find a coding algorithm in the vector transform domain. A vector transform, originally developed for digital filtering, is used. It is shown that this vector transform does have the decorrelation and energy-packing properties. Codebook design algorithms for the transform domain vectors are discussed. An implementation scheme for the vector transform is presented.

69 citations


Journal ArticleDOI
TL;DR: A VLSI implementation of a Reed-Solomon codec circuit that has complete decoder and encoder functions and uses a single data/system clock is reported; it is suitable for use in advanced television systems.
Abstract: A VLSI implementation of a Reed-Solomon codec circuit is reported. The 1.6-µm double-metal CMOS chip is 8.2 mm by 8.4 mm, contains 200,000 transistors, operates at a sustained data rate of 80 Mbits/s, and executes up to 1000 MOPS while consuming less than 500 mW of power. The 10-MHz sustained byte rate for the data is independent of the error pattern. The circuit has complete decoder and encoder functions and uses a single data/system clock. Block lengths of 255 bytes as well as shortened codes are supported with no external buffering. Erasure corrections as well as random error corrections are supported, with selectable correction of up to ten symbol errors. Corrected data is output at a fixed latency. These features make this Reed-Solomon processor suitable for use in advanced television systems.

49 citations


Journal ArticleDOI
TL;DR: New architectures for short-kernel filters are developed which can reduce the entropy of subband signals better than conventional two-tap filters.
Abstract: The authors present a subband coding scheme which has the possibility of distortion-free encoding. The coding scheme, which divides input signals into frequency bands, lends itself to parallel implementation. Computer simulation is conducted using high-quality HDTV component signals. Quadrature mirror filters (QMFs) and short-kernel subband filters are compared in terms of entropy (bits per pel) and signal-to-noise ratio. Simulation results show that the short-kernel filters can reduce the entropy while maintaining the original picture quality. The number of subband signal levels was found to increase. Reduction of the number of signal levels by transformation during the filtering process is studied. From this study, new architectures for short-kernel filters are developed which can reduce the entropy of subband signals better than conventional two-tap filters.
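
For reference, the conventional two-tap (Haar-type) sum/difference pair that these short-kernel architectures are measured against can be written in a few lines; note the exact, distortion-free reconstruction in integer arithmetic.

```python
import numpy as np

def analyze(x):
    lo = x[0::2] + x[1::2]        # two-tap low band (sum)
    hi = x[0::2] - x[1::2]        # two-tap high band (difference)
    return lo, hi

def synthesize(lo, hi):
    x = np.empty(2 * lo.size, dtype=lo.dtype)
    x[0::2] = (lo + hi) // 2      # exact: lo + hi and lo - hi are both even
    x[1::2] = (lo - hi) // 2
    return x

x = np.array([3, 7, 2, 2, 9, 4, 0, 5])
lo, hi = analyze(x)
assert np.array_equal(synthesize(lo, hi), x)  # distortion-free encoding path
```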

Journal ArticleDOI
TL;DR: The design of 2-D finite impulse response (FIR) digital filters is presented for two-dimensional sampling structure conversion, and optimal infinite and finite precision filter design methods are proposed.
Abstract: The design of 2-D finite impulse response (FIR) digital filters is presented for two-dimensional sampling structure conversion. Both the sampling structure model and the filter characteristics have been chosen to cover a large class of problems related to multidimensional linear processing for video signals. Optimal infinite and finite precision filter design methods are proposed. Experimental analysis concerning the effects of coefficient rounding on the filter frequency response was carried out, which was found to be consistent with previous theoretical results. A simulation with one test picture is presented to illustrate the problem of sampling structure conversion.

Journal ArticleDOI
C.A. Gonzales1, E. Viscito1
TL;DR: The authors developed a minimax adaptive quantization algorithm that operates in the discrete cosine transform domain conforming to the Moving Picture Experts Group (MPEG) standard to optimize image quality by adapting a quantizer scaling factor to the local characteristics of the video pictures while preserving a constraint on the average output bit rate.
Abstract: The authors developed a minimax adaptive quantization algorithm that operates in the discrete cosine transform domain conforming to the Moving Picture Experts Group (MPEG) standard. The algorithm is designed to optimize image quality by adapting a quantizer scaling factor to the local characteristics of the video pictures while preserving a constraint on the average output bit rate. The algorithm is well suited for real-time encoder implementations of current video compression standards, such as MPEG and H.261.
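
The paper's rule is minimax; as a much-simplified illustration of the general mechanism only, the sketch below scales the quantizer step with a local activity measure and nudges it with buffer fullness so the average rate tracks the channel rate. All constants and the activity/bit numbers are invented for the example.

```python
# Simplified feedback sketch (NOT the paper's minimax algorithm).
def mquant(activity, buffer_fullness, base=8, k_act=0.5, k_buf=24):
    """Quantizer scale for one macroblock; all constants are illustrative."""
    q = base * (1.0 + k_act * activity) + k_buf * (buffer_fullness - 0.5)
    return int(min(max(round(q), 1), 31))     # MPEG quantizer scale is 1..31

buf = 0.5                                     # virtual buffer fullness, 0..1
for act, bits_out in [(0.2, 900), (1.8, 2600), (0.5, 1200)]:
    q = mquant(act, buf)
    buf = min(max(buf + (bits_out - 1500) / 20000.0, 0.0), 1.0)  # CBR drain
    print(q, round(buf, 3))
```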

Journal ArticleDOI
TL;DR: A study of block-oriented motion estimation algorithms is presented, and their application to motion-compensated temporal interpolation is described and the use of multiresolution techniques, essential for satisfactory performance, is discussed.
Abstract: A study of block-oriented motion estimation algorithms is presented, and their application to motion-compensated temporal interpolation is described. In the proposed approach, the motion field within each block is described by a function of a few parameters that can represent typical local motion vector fields. A probabilistic formulation is then used to develop maximum-likelihood (ML) and maximum a posteriori probability (MAP) estimation criteria. The MAP criterion takes into account the dependence of the motion fields in adjacent blocks. A procedure for minimizing the resulting objective function based on the Gauss-Newton algorithm is presented. The use of multiresolution techniques, essential for satisfactory performance, is discussed. Experimental results evaluating the algorithms for the task of motion-compensated temporal interpolation are presented. The relative complexity of the algorithms is also discussed.
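
For the simplest parametric model (pure translation on one block), the inner Gauss-Newton update reduces to a small least-squares solve; the sketch below shows that single step on a synthetic sub-pixel shift. The paper's richer block motion models and ML/MAP criteria are not reproduced here.

```python
import numpy as np

def gn_motion_step(cur, prev):
    """One Gauss-Newton step for a translational motion model, from d = 0."""
    gy, gx = np.gradient(prev)                # spatial gradients of reference
    gt = cur - prev                           # temporal difference
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
    return np.linalg.solve(A, b)              # least-squares update (dx, dy)

y, x = np.mgrid[0:16, 0:16].astype(float)
prev = np.sin(0.3 * x + 0.2 * y)
cur = np.sin(0.3 * (x - 0.7) + 0.2 * y)       # sub-pixel shift dx = 0.7
print(gn_motion_step(cur, prev))              # roughly (0.7, 0)
```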

Journal ArticleDOI
TL;DR: A hybrid predictive/transform system has been devised and implemented for HDTV transmission and the main parameters of this system are in accordance with those being recommended by CMTT for the transmission of conventional component TV.
Abstract: The transmission of HDTV (high-definition television) signals on available digital networks and satellites requires the adoption of sophisticated compression techniques to limit the bit-rate requirements and to provide a high-quality and reliable service. A hybrid predictive/transform system has been devised and implemented. The main parameters of this system are in accordance with those being recommended by CMTT for the transmission of conventional component TV. The basic algorithms, the codec architecture, the developed VLSI, and the equipment are described. The codec is designed to operate with the currently envisaged TV and HDTV formats and with a wide range of transmission rates. The optimization of the system and the evaluation of its performance have been carried out on the basis of a large number of subjective tests in accordance with the user requirements specified by the standardization bodies. The codecs have been extensively field-tested during experimental point-to-multipoint satellite transmission of HDTV signals.

Journal ArticleDOI
TL;DR: An adaptive image coding technique, two-channel conjugate classified discrete cosine transform/vector quantization (TCCCDCT/VQ), is proposed to efficiently exploit correlation in large image blocks by taking advantage of the discrete cosine transform and vector quantization.
Abstract: An adaptive image coding technique, two-channel conjugate classified discrete cosine transform/vector quantization (TCCCDCT/VQ), is proposed to efficiently exploit correlation in large image blocks by taking advantage of the discrete cosine transform and vector quantization. In the transform domain, a classified discrete cosine transform/vector quantization (CDCT/VQ) scheme is proposed, and TCCCDCT/VQ is developed based on the CDCT/VQ scheme. These two techniques were applied to encode test images at about 0.51 b/pixel and 0.73 b/pixel. The performances of both techniques over a noisy channel have also been tested in the transform domain. The performances of both adaptive VQ techniques are perceptually very similar for the noise-free channel case. However, when channel error is injected for the same bit rate, the TCCCDCT/VQ results in less visible distortion than the CDCT/VQ, which is based on ordinary VQ.

Journal ArticleDOI
TL;DR: An optimization technique is presented for the design of multiplierless two-channel linear-phase finite-duration impulse-response (FIR) filter banks and is shown to yield filter banks with good filtering performance and nearly perfect signal reconstruction.
Abstract: An optimization technique is presented for the design of multiplierless two-channel linear-phase finite-duration impulse-response (FIR) filter banks. It is shown to yield filter banks with good filtering performance and nearly perfect signal reconstruction. The design employs filters whose coefficients are represented by a canonic signed-digit (CSD) code. When applied to subband image coding, this technique provides an easy way to design low-complexity analysis/synthesis filter banks for high-performance codecs. Examples concerning filter design and the application of such filters to subband image coding are given.
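
Canonic signed-digit recoding is easy to demonstrate: each coefficient is rewritten with digits in {-1, 0, +1} and no two adjacent nonzeros, so a multiplication becomes a handful of shifts and adds/subtracts. The coefficient value below is arbitrary.

```python
def to_csd(n: int) -> list:
    """Recode a positive integer into CSD digits, least significant first."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)          # +1 or -1, so that n - d is divisible by 4
        else:
            d = 0
        digits.append(d)
        n = (n - d) // 2
    return digits

coeff = 0b1110111                    # 119: five adders in plain binary
csd = to_csd(coeff)                  # 119 = 128 - 8 - 1: two adders suffice
assert sum(d << i for i, d in enumerate(csd)) == coeff
print(csd)                           # [-1, 0, 0, -1, 0, 0, 0, 1]
```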

Journal ArticleDOI
TL;DR: Bit-level systolic arrays for real-time 2-D FIR and IIR (finite and infinite impulse response) filters are presented and are more cost effective; more regular structurally; composed of bit-level cells and latches; and fully pipelined at the bit level.
Abstract: Bit-level systolic arrays for real-time 2-D FIR and IIR (finite and infinite impulse response) filters are presented. Two-dimensional iteration and retiming techniques are used to derive block-pipelined 2-D IIR filters, which guarantee high-throughput operation for real-time applications. The block (parallel) systolic architectures are refined down to the bit level. This increases the filter's throughput rate and decreases the filter's development and manufacturing costs. The AP figure is improved from O(N^2 W^3) for the previous design to O(N^2 W^2), i.e., by a factor of O(W), where W is the word length. Pipelining at the bit level is the major reason for this improvement. Another advantage of the proposed design is that it has simpler wire routing and control circuitry. In summary, these systolic-array realizations are more cost effective; more regular structurally; composed of bit-level cells and latches; and fully pipelined at the bit level.

Journal ArticleDOI
TL;DR: The source coding scheme for the experimental research prototype that is currently being designed to demonstrate the digital coding of high-definition television (HDTV) for transport within the proposed broadband integrated services digital network (ISDN) fiber optic network is described.
Abstract: The source coding scheme for the experimental research prototype that is currently being designed to demonstrate the digital coding of high-definition television (HDTV) for transport within the proposed broadband integrated services digital network (ISDN) fiber optic network is described. The network interface will be based on packet-based asynchronous transfer mode (ATM) technology. To maximize coding efficiency, run-length coding and variable word-length coding are also incorporated in the system. The scheme uses multiple block-size Hadamard transform coding that can be implemented using a set of two-tap filters in a simple subband structure. In spite of the low hardware complexity, which is of dominant importance in the overall system performance, high coding efficiency is obtained. The major features and advantages of this scheme are outlined.
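
The two-tap structure underlying the Hadamard transform can be sketched directly: a 2^k-point Hadamard transform factors into k stages of sum/difference butterflies, and the transform is its own inverse up to a scale of 1/N.

```python
import numpy as np

def hadamard(x):
    """Fast Walsh-Hadamard transform via stages of two-tap sum/difference."""
    x = np.asarray(x, dtype=float).copy()     # length must be a power of two
    h = 1
    while h < x.size:
        for i in range(0, x.size, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b                # two-tap sum branch
            x[i + h:i + 2 * h] = a - b        # two-tap difference branch
        h *= 2
    return x

x = np.array([4.0, 2, 1, 3, 0, 5, 7, 6])
X = hadamard(x)
assert np.allclose(hadamard(X) / x.size, x)   # involution up to scale 1/N
```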

Journal ArticleDOI
TL;DR: By taking advantage of the special VLSI architecture and optimization of the circuit technique, a one-chip realization each for the analysis and the synthesis filter bank of the luminance component can be achieved in a 1-µm CMOS technology.
Abstract: The design of a subband filter bank for encoding high-definition television (HDTV) signals at 140 Mb/s is presented. Considering filter reconstruction errors, quantization errors, signal statistics, and VLSI implementation constraints in the filter design, quadrature mirror filters (QMFs) are found to be most appropriate. As a result, two-dimensional QMFs with 10 coefficients for vertical and 14 coefficients for horizontal filtering are proposed. A compact VLSI implementation is made possible by filter architectures utilizing the special features of QMFs. The timing constraints can be relaxed by the use of polyphase filter structures for decimation and interpolation. Utilization of the symmetry property of QMFs reduces the arithmetic hardware expense by approximately a factor of two. By taking advantage of the special VLSI architecture and optimization of the circuit technique, a one-chip realization each for the analysis and the synthesis filter bank of the luminance component can be achieved in a 1-µm CMOS technology. A second pair of filter-bank chips is required for the two chrominance components.
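
The timing relaxation from polyphase structures rests on a standard identity: filtering and then downsampling by two is equivalent to running the filter's even- and odd-indexed subfilters on the two input phases at half rate. A quick numerical check with stand-in data and a stand-in 10-tap filter:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.random(64)
h = rng.random(10)                            # stand-in for a 10-tap QMF

direct = np.convolve(x, h)[::2]               # filter at full rate, decimate

h_even, h_odd = h[0::2], h[1::2]              # polyphase components of h
x_even = x[0::2]                              # even input phase
x_odd = np.concatenate(([0.0], x[1::2]))      # odd phase, delayed one sample
pe = np.convolve(x_even, h_even)              # each branch runs at half rate
po = np.convolve(x_odd, h_odd)
n = min(pe.size, po.size)
assert np.allclose(pe[:n] + po[:n], direct[:n])
```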

Journal ArticleDOI
TL;DR: A parallel processing structure using the proposed international standard for visual telephony (CCITT p*64 kb/s standard) as processing elements to compress digital high definition television (HDTV) pictures appears to be a cost-effective solution for HDTV hardware.
Abstract: The authors suggest a parallel processing structure using the proposed international standard for visual telephony (CCITT p*64 kb/s standard) as processing elements to compress digital high definition television (HDTV) pictures. The basic idea is to partition an HDTV picture, in space or in frequency, into smaller sub-pictures and then compress each sub-picture using a CCITT p*64 kb/s coder. This appears to be a cost-effective solution for HDTV hardware. Since each sub-picture is processed by an independent coder, without coordination these coded sub-pictures may have unequal picture quality. To maintain uniform quality across the HDTV picture, the following two issues are studied: sub-channel control strategy (bits allocated to each sub-picture), and quantization and buffer control strategy for individual sub-picture coders. Algorithms to resolve these problems and their computer simulations are presented.
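
The sub-channel control problem can be caricatured in a few lines: split the aggregate channel rate among the sub-picture coders in proportion to a simple activity measure, so busier sub-pictures receive more bits and the stitched picture stays uniform in quality. The activity values and total rate are illustrative only, not the paper's algorithm.

```python
# Proportional sub-channel rate allocation (illustrative numbers).
total_kbps = 16 * 384                          # aggregate channel rate, kb/s
activity = [4.1, 1.3, 2.2, 0.9]                # e.g. variance sums per sub-picture
rates = [a / sum(activity) * total_kbps for a in activity]
print([round(r) for r in rates])               # per-coder sub-channel rates
```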

Journal ArticleDOI
TL;DR: The concept of multidimensional mixed domain transform/spatiotemporal (MixeD) filtering is extended beyond the discrete Fourier transform (DFT) to include other types of discrete sinusoidal transforms, including the discrete Hartley transform (DHT) and the discrete cosine transform (DCT).
Abstract: The concept of multidimensional mixed domain transform/spatiotemporal (MixeD) filtering is extended beyond the discrete Fourier transform (DFT) to include other types of discrete sinusoidal transforms, including the discrete Hartley transform (DHT) and the discrete cosine transform (DCT). Two MixeD filter examples are given, one using the two-dimensional (2-D) DHT and the other using the 2-D DCT, to selectively enhance a 3-D spatially planar (SP) pulse signal. The authors define the notation and provide a review of the MixeD filter method. Multidimensional (MD) and partial P-dimensional discrete transform operators are defined, and the design of MixeD filters is discussed. MixeD filters based on the 2-D DHT and the 2-D DCT are designed to selectively enhance a 3-D SP pulse. Experimental verification of these 3-D SP MixeD filters is described.

Journal ArticleDOI
TL;DR: Several coder programs, including a discrete cosine transform coder and an intraframe differential pulse code modulation (DPCM) coder, are developed to evaluate HDTV coding efficiency.
Abstract: A programmable real-time high-definition television (HDTV) signal processor (HD-VSP) has been developed. For conventional TV signals, a previously reported video signal processor (VSP) introduced flexible software control capability based on subregional processing. In order to extend such flexibility to real-time HDTV signal processing, the HD-VSP employs eight VSP clusters and programmable time-expansion/compression units. An input HDTV signal is converted to eight time-expanded subregional signals to reduce their sampling rate to that of conventional TV signals. The converted signals are then processed by the eight clusters in the same manner as in the VSP. Therefore, programs developed for conventional TV signals can be applied to HDTV with little modification. Processed signals obtained from the eight clusters are time-compressed and multiplexed to reconstruct an output HDTV signal. The HD-VSP has 16 component processors per cluster and is capable of 2.5 giga-operations/s. Several coder programs, including a discrete cosine transform coder and an intraframe differential pulse code modulation (DPCM) coder, have been developed to evaluate HDTV coding efficiency.

Journal ArticleDOI
TL;DR: A discrete cosine transform (DCT)-based coding system for 130 Mbps high definition television (HDTV) transmission in asynchronous transfer mode (ATM) networks is described and results show that the two-layer DCT system is robust against cell loss.
Abstract: A discrete cosine transform (DCT)-based coding system for 130 Mbps high definition television (HDTV) transmission in asynchronous transfer mode (ATM) networks is described. The system employs low-complexity 8*4 intrafield DCT coding. Major processing stages used by this system include simplified frequency weighting, effective block-to-linear scans, interblock differential pulse code modulation (DPCM) for DC terms, nonuniform quantization, and two-dimensional entropy coding. Simulations show that the system can achieve high-quality coding at 130 Mbps. A simple rate statistics model is established based on the rates obtained for a group of test pictures. To avoid error propagation due to the use of variable length codes, a packet assembly format is proposed. A two-layer system that segments the coefficients into a main signal and an enhancement signal is employed. The two signals are separately assembled into cells for transmission over the ATM networks with different priorities. Simulation results show that the two-layer DCT system is robust against cell loss.
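
The two-layer segmentation can be sketched compactly: zigzag-scan each DCT block and send the first (low-frequency) coefficients as the high-priority main signal, the remainder as the droppable enhancement signal. The 8*4 block contents and the split point of six coefficients are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def zigzag_indices(rows, cols):
    """Zigzag scan order: walk anti-diagonals, alternating direction."""
    return sorted(((r, c) for r in range(rows) for c in range(cols)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))

coeffs = np.arange(32).reshape(8, 4)           # stand-in 8*4 DCT block
scan = [coeffs[r, c] for r, c in zigzag_indices(8, 4)]
main, enhancement = scan[:6], scan[6:]         # high- vs low-priority ATM cells
print(main)
```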

Journal ArticleDOI
TL;DR: The author presents a new method of switched capacitor (SC) network design for two-dimensional real-time filtering that uses a novel compact matrix description of a 2-D network that is fast and efficient.
Abstract: The author presents a new method of switched capacitor (SC) network design for two-dimensional real-time filtering. A model of such a network is presented with a computer design system for 2-D SC filters. This system uses a novel compact matrix description of a 2-D network that is fast and efficient. The problem of 2-D filter design is reduced to the problem of 1-D multiport SC network synthesis. This SC network is obtained with the use of a lossless nonreciprocal prototype circuit. A 2-D high-emphasis filter illustrates the operation of this system. Such a filter can be realized as a single CMOS chip and, in comparison with digital realization, a considerable reduction in cost and power consumption can be achieved.

Journal ArticleDOI
TL;DR: A nonseparable pyramid method based on a wavelet expansion is developed for image coding that achieves high compression rates and at the same time allows a very efficient algorithmic implementation.
Abstract: A nonseparable pyramid method based on a wavelet expansion is developed for image coding. It achieves high compression rates and at the same time allows a very efficient algorithmic implementation. In particular, it uses only two additions and a shift (division by 2) for each image pixel during coding or decoding. Because the operations needed are mostly independent of each other and have a high degree of regularity, it is also possible to design very-large-scale integration (VLSI) hardware to perform this operation using an array of simple basic cells. For a typical 513*513 image one can achieve a peak signal-to-noise ratio of 30 dB with an entropy of 0.133 b/pixel. Transmitted coefficient values can be encoded with pulse code modulation to allow for simpler hardware while still requiring only 0.385 b/pixel, giving a very simple overall coding system.
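
In that add-and-shift spirit, here is a minimal integer Haar-type (S-transform) pyramid step, which needs only additions, subtractions, and a shift per sample pair and reconstructs exactly; it illustrates the style of computation, not the paper's exact nonseparable wavelet.

```python
import numpy as np

def s_transform(x):
    a, b = x[0::2].astype(int), x[1::2].astype(int)
    low = (a + b) >> 1                 # shift = floor division by 2
    high = a - b
    return low, high

def inverse_s(low, high):
    a = low + ((high + 1) >> 1)        # undoes the floored average exactly
    b = a - high
    out = np.empty(2 * low.size, dtype=int)
    out[0::2], out[1::2] = a, b
    return out

x = np.array([7, 3, 200, 201, 0, 255, 12, 13])
low, high = s_transform(x)
assert np.array_equal(inverse_s(low, high), x)   # lossless integer step
```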

Journal ArticleDOI
TL;DR: Adaptive multistage image transform coding is discussed, and an optimal bit allocation method is introduced; the reconstructed images are subjectively preferable to those from one-stage coding.
Abstract: Adaptive multistage image transform coding is discussed, and an optimal method is introduced for bit allocation. The optimality is in the sense of minimizing the mean square reconstruction error with a given total number of bits and a given number of stages. The statistics of the coefficients in different stages and marginal analysis are used to optimize the division of the total number of bits among the stages. Experimental results indicate that, with two stages, more than 14% improvement for one class and more than 11% improvement for multiple classes are achieved in mean square reconstruction error over one-stage image transform coding. Higher improvements are achieved with three stages. The reconstructed images with multistage coding are subjectively preferable to those with one-stage coding.
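
Marginal analysis reduces to a greedy loop: repeatedly grant one more bit to whichever stage (or class) currently buys the largest drop in mean square error. The sketch below uses the classic D = σ²·2^(−2b) high-rate model and synthetic variances as stand-ins for the per-stage coefficient statistics.

```python
variances = [9.0, 4.0, 1.0, 0.25]      # synthetic per-class coefficient variances
bits = [0, 0, 0, 0]

def gain(i):                           # MSE drop from granting one more bit
    return variances[i] * (2.0 ** (-2 * bits[i]) - 2.0 ** (-2 * (bits[i] + 1)))

for _ in range(8):                     # total bit budget
    i = max(range(len(variances)), key=gain)
    bits[i] += 1
print(bits)                            # high-variance classes get more bits
```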

Journal ArticleDOI
TL;DR: A systolic array architecture for image coding using adaptive vector quantization is presented, which results in a speedup proportional to NL and has the following advantages: there is no need for separate hardware to compute the new centroids; there is no need for a high-speed interface to transfer the new centroids into the systolic array; and there are no delays involved in the computation and transfer of new centroids.
Abstract: A systolic array architecture for image coding using adaptive vector quantization is presented. A basic systolic cell was designed with two modes of operation. In the forward mode, the cell executes the basic distortion operation, while in the reverse mode, the cell executes the centroid computation operation. Both modes coexist in perfect synchronism. The systolic array module essentially consists of an array of L*N basic systolic cells connected in parallel and in pipeline along the vector dimension, L, and the codeword dimension, N, respectively. This architecture results in a speedup proportional to NL and has the following advantages: there is no need for separate hardware to compute the new centroids; there is no need for a high-speed interface to transfer the new centroids into the systolic array; and there are no delays involved in the computation and transfer of new centroids.

Journal ArticleDOI
TL;DR: The Matsushita 6-MHz NTSC-compatible widescreen television system using quadrature amplitude modulation (QAM) of the video carrier and inverse Nyquist filtering is an advanced television system using the side panel method.
Abstract: The Matsushita 6-MHz NTSC-compatible widescreen television system using quadrature amplitude modulation (QAM) of the video carrier and inverse Nyquist filtering is an advanced television system using the side panel method. The principle of QAM is reviewed, and the encoding and decoding processes are discussed. Methods for time expansion and time compression for each panel and very-low-frequency splitting (e.g., 0.1 MHz) at a sampling frequency of 14.3 MHz are detailed. Transmitting and regenerating the color subcarrier for a multiplex signal and a method of preventing noticeable differences between panels are also described.

Journal ArticleDOI
TL;DR: By using time inversion in combination with line-by-line processing, the stability problem of the conventional IIR equalizer can be eliminated, and it is shown that it may be possible to implement this IIR equalizer on a single digital integrated circuit.
Abstract: Techniques that can cancel ghosts in received analog TV (for improved-definition TV, extended-definition TV, and high-definition TV) signals are presented. The fact that there are short periods of time without the analog signal (the horizontal flyback interval between the lines) is utilized to periodically cleanse a finite impulse response (FIR) or an infinite impulse response (IIR) equalizer. This line-by-line processing (cleansing) overcomes the limitation of standard equalizers to allow for 40-50 dB of suppression of ghosts, even with nulls in the spectrum, as long as the ghost delay is less than the period of time without the analog signal. Furthermore, by using time inversion in combination with line-by-line processing, the stability problem of the conventional IIR equalizer can be eliminated. It is shown that it may be possible to implement this IIR equalizer on a single digital integrated circuit. Alternatively, an FIR equalizer can be used; although it requires multiple chips (i.e., more taps), it can acquire and adapt to the ghosted channel more rapidly than an IIR equalizer.
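
The stabilizing role of time inversion is easy to demonstrate for a single-echo channel y[n] = x[n] + g·x[n−d]: for |g| < 1 the IIR inverse recurses forward, while for |g| > 1 the forward recursion diverges but the same equation solved in time-inverted order is stable. The line length, delay, and ghost amplitude below are illustrative; the blanking gap supplies the known (zero) state that "cleanses" each line.

```python
import numpy as np

def deghost_line(y, g, d, n_line):
    """Recover one line of length n_line from y of length n_line + d."""
    x = np.zeros(n_line + d)
    if abs(g) < 1:                            # weak echo: forward IIR is stable
        for n in range(n_line):
            x[n] = y[n] - g * (x[n - d] if n >= d else 0.0)
    else:                                     # strong echo: run time-inverted
        for n in range(n_line + d - 1, d - 1, -1):
            x[n - d] = (y[n] - x[n]) / g      # |1/g| < 1, so this is stable
    return x[:n_line]

rng = np.random.default_rng(5)
line = rng.standard_normal(256)
d, g = 37, 1.6                                # ghost stronger than direct path
y = np.concatenate([line, np.zeros(d)])
y[d:] += g * line                             # y[n] = x[n] + g * x[n - d]
print(np.max(np.abs(deghost_line(y, g, d, 256) - line)))   # ~ 0
```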