
Showing papers in "Signal Processing: Image Communication" in 1993


Journal ArticleDOI
TL;DR: A digital modulation system using orthogonal frequency division multiplexing (OFDM) is addressed, which copes with echoes more easily than classical single-carrier modems thanks to the insertion of a guard interval between two symbols.
Abstract: A digital modulation system using orthogonal frequency division multiplexing (OFDM) is addressed in this paper. Such a system presents the advantage of coping with echoes more easily than classical single-carrier modems, thanks to the insertion of a guard interval between two symbols. Signal equalization is then achieved in the frequency domain. This OFDM modem is improved by using dual polarizations. In this configuration, it can convey a 70 Mbit/s (HDTV) bit stream in an 8 MHz UHF channel. Experimental results from field trials carried out in several countries with such equipment are reported.

75 citations
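
A minimal numpy sketch of the guard-interval mechanism described in the abstract above; the cyclic-prefix form of the guard interval and the block and guard lengths are assumptions for illustration, not the modem's actual parameters.

    import numpy as np

    def ofdm_modulate(symbols, guard=16):
        # Multiplex one block of (e.g. QAM) symbols onto orthogonal
        # carriers via the inverse FFT, then prepend a guard interval
        # implemented as a cyclic prefix.
        time_block = np.fft.ifft(symbols)
        return np.concatenate([time_block[-guard:], time_block])

    def ofdm_demodulate(rx, n_carriers, guard=16):
        # Discard the guard interval (echoes shorter than it cannot smear
        # one symbol into the next), then return to the frequency domain,
        # where equalization reduces to one complex gain per carrier.
        return np.fft.fft(rx[guard:guard + n_carriers])

With a channel echo shorter than the guard interval, each carrier sees only a complex gain, which is why the equalization can be done per carrier in the frequency domain.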


Journal ArticleDOI
TL;DR: This work presents the utilization of multiple cameras with different pixel apertures, and develops a new, alternately iterative signal processing algorithm applicable to the different-aperture case, which performs satisfactorily in experimental simulations.
Abstract: Towards the development of a very high definition (VHD) image acquisition system, we previously developed a signal-processing-based approach using multiple cameras. The approach produces an improved-resolution image with sufficiently high signal-to-noise ratio by processing and integrating multiple images taken simultaneously with multiple cameras. Originally, this approach used multiple cameras with the same pixel aperture, but in that case there are severe limitations both in the arrangement of the cameras and in the configuration of the scene if the spatial uniformity of the resultant resolution is to be guaranteed. To overcome this difficulty completely, this work presents the utilization of multiple cameras with different pixel apertures, and develops a new, alternately iterative signal processing algorithm applicable to the different-aperture case. Experimental simulations clearly show that the use of multiple different-aperture cameras is promising and that the alternately iterative algorithm performs satisfactorily.

72 citations
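
The alternately iterative algorithm itself is not spelled out in the abstract; the following generic iterative back-projection loop, with each camera modelled by a caller-supplied aperture/sampling operator D and an approximate inverse U, sketches the kind of multi-camera update involved. This is an assumed stand-in, not the authors' exact method.

    import numpy as np

    def iterative_fusion(images, D_ops, U_ops, n_iter=20):
        # images: low-resolution frames from the different-aperture cameras
        # D_ops[i]: simulates camera i (blur by its pixel aperture + sampling)
        # U_ops[i]: approximate inverse (up-sampling / back-projection)
        estimate = np.mean([U(y) for y, U in zip(images, U_ops)], axis=0)
        for _ in range(n_iter):
            for y, D, U in zip(images, D_ops, U_ops):
                # Push each camera's observation residual back into the estimate.
                estimate = estimate + U(y - D(estimate)) / len(images)
        return estimate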


Journal ArticleDOI
TL;DR: Fractal image compression techniques exploit the fact that fractals can describe natural scenes better than shapes of traditional geometry and may offer better compression performance; this paper surveys their mathematical foundations and three main techniques, including yardstick coding, which was inspired by the fractal-geometry notion of measuring the length of a curve with a yardstick.
Abstract: Image compression techniques based on fractals have been developed in the last few years and promise better compression performance. Fractal image compression techniques are being developed owing to the recognition that fractals can describe natural scenes better than shapes of traditional geometry. This paper describes the principles and common techniques of fractal image compression. Mathematical foundations for fractal image compression techniques are presented first. Then three main fractal image compression techniques are discussed. The first and most important technique is based on iterated function systems (IFS): images are compressed into compact IFS codes at the encoding stage, and fractal images are generated to approximate the original image at the decoding stage. The second technique is segment-based coding: images are segmented according to fractal dimension, and these segments are coded efficiently using properties of the human visual system. The third technique is yardstick coding, which is similar to DPCM and to subsampling with subsequent interpolation, but was inspired by the fractal-geometry notion of measuring the length of a curve with a yardstick.

67 citations
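
The IFS decoding step can be illustrated with a toy example: iterating a set of contractive affine maps converges to the fractal attractor they encode, regardless of the starting point. The coefficients below are the classic Barnsley fern, used purely for illustration; they are not an image code from the paper.

    import numpy as np

    # Each map: (2x2 linear part, offset, selection probability).
    MAPS = [
        (np.array([[0.00,  0.00], [0.00, 0.16]]),  np.array([0.0, 0.00]), 0.01),
        (np.array([[0.85,  0.04], [-0.04, 0.85]]), np.array([0.0, 1.60]), 0.85),
        (np.array([[0.20, -0.26], [0.23, 0.22]]),  np.array([0.0, 1.60]), 0.07),
        (np.array([[-0.15, 0.28], [0.26, 0.24]]),  np.array([0.0, 0.44]), 0.07),
    ]

    def decode_ifs(n_points=50000, seed=0):
        # "Chaos game" decoding: repeatedly apply a randomly chosen map;
        # the visited points trace out the attractor.
        rng = np.random.default_rng(seed)
        probs = [p for _, _, p in MAPS]
        point, points = np.zeros(2), []
        for _ in range(n_points):
            A, b, _ = MAPS[rng.choice(len(MAPS), p=probs)]
            point = A @ point + b
            points.append(point.copy())
        return np.array(points)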


Journal ArticleDOI
TL;DR: Statistics and subjective tests confirm that various adaptations of an adaptive frame/field motion-compensated video coding scheme provide significant improvement compared to purely MPEG-1 based coding.
Abstract: The second phase of the Motion Pictures Experts Group (MPEG-2) activity is in progress and is primarily aimed at coding of high resolution video with high quality at bit-rates of 4 to 9 Mbit/s. In addition, this phase is also required to address many issues including forward and backward compatibility with the first phase (MPEG-1) standard. For MPEG-2, an adaptive frame/field motion-compensated video coding scheme is proposed. This scheme builds on the proven framework of DCT and motion-compensation based techniques already optimized in MPEG-1 for coding of lower resolution video at low bit-rates. Various adaptations include techniques to improve efficiency of coding for interlaced video source as well as improving quality by better exploitation of characteristics of the video scenes. Statistics and subjective tests confirm that these adaptations provide significant improvement as compared to purely MPEG-1 based coding. We then discuss issues of compatibility with the MPEG-1 standard and of implementation complexity of the proposed scheme.

42 citations
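
A toy version of the per-macroblock frame/field decision such schemes make; the activity measure below is an illustrative heuristic, not the exact MPEG-2 criterion.

    import numpy as np

    def frame_or_field(mb):
        # If adjacent lines in frame order are more alike than adjacent
        # lines within each field, there is little interlace motion and
        # frame coding is preferable; otherwise code the fields separately.
        frame_act = np.abs(np.diff(mb, axis=0)).sum()
        top, bot = mb[0::2], mb[1::2]          # the two interlaced fields
        field_act = (np.abs(np.diff(top, axis=0)).sum()
                     + np.abs(np.diff(bot, axis=0)).sum())
        return 'frame' if frame_act <= field_act else 'field'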


Journal ArticleDOI
Cesar A. Gonzales, Eric Viscito
TL;DR: A video coding algorithm which combines the high visual quality of hybrid motion-compensated transform-based video coding techniques with the functional advantages of scalable, multi-resolution video is described.
Abstract: In this paper, we describe a video coding algorithm which combines the high visual quality of hybrid motion-compensated transform-based video coding techniques with the functional advantages of scalable, multi-resolution video. The technique produces a hierarchical video data representation by incorporating a simple frequency domain pyramid in a hybrid motion-compensated prediction/discrete cosine transform video coding algorithm. Compared to a single-layer hybrid scheme, this method has a very low penalty in coding efficiency and code complexity.

35 citations


Journal ArticleDOI
TL;DR: Both the algorithms for the stereo and motion estimation are presented here, together with some experimental results on images obtained from natural scenes containing motion.
Abstract: In this paper, an approach combining stereo and motion analysis to establish correspondences in a sequence of stereo images is outlined. The advantages of the presented approach are: (1) in both the motion and stereo estimation, no restriction to rigid and/or planar objects is assumed; (2) by introducing an image pyramid into the matching process (pyramid-guided edge-point matching for motion estimation and multi-resolution dynamic programming for disparity estimation), large motion and disparity vectors can be computed easily; (3) to exclude ambiguities in the dynamic programming, the cost function takes interline, interframe and multi-resolution spatial information into account. Both the stereo and motion estimation algorithms are presented here, together with some experimental results on images obtained from natural scenes containing motion.

32 citations


Journal ArticleDOI
TL;DR: This report describes the subjective assessment procedures, the statistical processing methods used, the results of processing the data obtained, the reliability of the data, and the results of the analysis.
Abstract: Following the MPEG-1 subjective assessment, the MPEG-2 subjective assessment was performed at the JVC Kurihama Technical Center in November 1991 in relation to compression of 5 to 10 Mbit/s high-quality moving pictures. This report describes the subjective assessment procedures, results of statistical processing of the data obtained, statistical processing methods used, reliability of the data evaluated, and results of analysis of the data obtained. It also discusses future problems, centering on subjective assessment tests of high-quality pictures closely resembling their originals and the processing of bulk data.

30 citations


Journal ArticleDOI
TL;DR: A hierarchical video coding scheme for MPEG II is proposed; experiments show that the quality of decoded pictures depends strongly on the quality of the non-interlaced pictures provided by the pre-processing stage, while other functions remain almost the same as in MPEG I.
Abstract: A video coding scheme that enables hierarchical video signal processing in MPEG II is proposed in this paper. Each input picture is divided into hierarchical layers, and a proposed prediction is applied to each layer. The proposed coding scheme is characterized by the following points: (1) non-interlaced pictures for each layer, even the highest-resolution layer; (2) hierarchical motion estimation providing precise motion vectors for each layer in less processing time; (3) hierarchical prediction and layer pictures providing various sizes of video signal for display, compatible with MPEG I; (4) up-sampling and down-sampling processes without phase distortion between neighbouring layers. Other functions are almost the same as in MPEG I. Experimental results show that the quality of decoded pictures depends strongly on the quality of the non-interlaced pictures provided by the pre-processing stage.

28 citations


Journal ArticleDOI
TL;DR: A new colour coding algorithm which encodes the colour parameters of objects in an object-oriented analysis-synthesis coder with a hybrid scheme, where either a DPCM (Differential Pulse Code Modulation) technique or a DCT (Discrete Cosine Transform) is used, whichever allows more efficient coding.
Abstract: In this paper, a new colour coding algorithm called Hybrid Adaptive DCT/DPCM Colour Coding is presented, which encodes the colour parameters of objects in an object-oriented analysis-synthesis coder with a hybrid scheme: either a DPCM (Differential Pulse Code Modulation) technique or a DCT (Discrete Cosine Transform) is used, whichever allows more efficient coding. In experimental results, the coding efficiency of Hybrid Adaptive DCT/DPCM Colour Coding is compared to purely block-oriented DCT coding and region-oriented transform coding for typical videophone sequences at data rates of about 64 kbit/s. The relative gain in average bit-rate at a fixed image quality is about 5% compared to region-oriented transform coding and 41% compared to block-oriented DCT coding. Besides its coding efficiency, Hybrid Adaptive DCT/DPCM Coding can easily be realized by fast algorithms of low computational complexity.

25 citations
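
The per-object mode choice can be sketched as follows; the cost proxy (count of non-zero quantized values) and the simple previous-pixel DPCM predictor are assumptions for illustration, not the authors' actual rate measure.

    import numpy as np

    def dct_matrix(n):
        # Orthonormal DCT-II basis matrix.
        k = np.arange(n)
        C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0] /= np.sqrt(2.0)
        return C

    def choose_mode(block, q=8):
        C = dct_matrix(block.shape[0])
        dct_cost = np.count_nonzero(np.round(C @ block @ C.T / q))
        # DPCM residual with a previous-pixel (horizontal) predictor.
        residual = np.diff(block, axis=1, prepend=block[:, :1])
        dpcm_cost = np.count_nonzero(np.round(residual / q))
        return 'DCT' if dct_cost <= dpcm_cost else 'DPCM'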


Journal ArticleDOI
TL;DR: An analysis of a video coding algorithm designed and optimized for video compression at bit-rates from 3 up to 10 Mbit/s, concluding that the algorithm achieves quality better than NTSC at 4 Mbit/s and close to component quality at 9 Mbit/s.
Abstract: This paper is an analysis of a video coding algorithm designed and optimized for video compression at bit-rates from 3 up to 10 Mbit/s. The algorithm is suitable for different applications ranging from communication services to video broadcasting. It is a hybrid DCT/DPCM coding scheme originally based on an MPEG1 (Moving Picture Experts Group phase 1) algorithm, modified for interlaced CCIR 601 resolution pictures coded at higher rates. Some important features of the algorithm include field-based motion compensated prediction and interpolation, frame-based DCT coding and quantization, optimized frame bit allocation and quantizer assignment, and adaptive Huffman code tables for transform coefficients. This coding scheme allows easy implementation of common VCR functions, and can also operate in a low-delay mode as required for interactive video. The paper presents a brief analysis of the coding/decoding delay. A subjective test conducted at Bellcore concluded that this algorithm achieves quality better than NTSC at 4 Mbit/s and close to component quality at 9 Mbit/s. The algorithm was submitted as an MPEG2 (MPEG phase 2) proposal and showed high performance among the 21 proposals tested in the 525-line format.

21 citations


Journal ArticleDOI
TL;DR: The proposal is a hybrid DCT scheme with field processing and a multi-layer bitstream for coding CCIR 601 video signals at bit-rates of about 2–10 Mbit/s, giving high flexibility: several features can be included or omitted, whichever is desired for a specific service.
Abstract: The PTT Research/Philips LEP proposal to MPEG for coding of CCIR 601 video signals at bit-rates of about 2–10 Mbit/s is presented. It is a hybrid DCT scheme with field processing and a multi-layer approach in the bitstream. This yields high flexibility: several features can be included or omitted, whichever is desired for a specific service. The most distinctive feature of this proposal is that compatibility with the MPEG1 or H.261 standards can be included via an embedded bitstream conforming to these standards, while the decoders do not need an additional prediction loop for this. Further provided features are random access, smooth fast forward, smooth fast backward, low end-to-end delay and ATM cell loss resilience, utilizing the multi-layer structure.

Journal ArticleDOI
TL;DR: The video coding technique proposed by Columbia University for the second phase of ISO's MPEG standardization effort (MPEG-2), for coding at bit-rates up to 10 Mbit/s, is a direct extension of the coding algorithm used for MPEG-1, made suitable for interlaced video by allowing macroblock-by-macroblock adaptive field-based or frame-based coding.
Abstract: This paper presents the video coding technique proposed by Columbia University for the second phase of ISO's MPEG standardization effort (MPEG-2) for coding at bit-rates up to 10 Mbit/s. The technique is a direct extension of the coding algorithm used for MPEG-1, made suitable for interlaced video by allowing macroblock-by-macroblock adaptive field-based or frame-based coding. Separate coding of the odd and even fields is allowed as an option, so that fields of one parity use information from the already coded fields of the opposite parity. This option has near-optimum performance, and also conveniently provides scalability (in the sense of multiresolution representation), useful for achieving graceful degradation in the presence of transmission errors. It also permits easy modification for compatibility with MPEG-1. Another feature is an escape to a non-DCT coding technique for blocks containing sharp edges. Based on both subjective and objective evaluations, extensive computer simulations have been conducted to optimize the criteria for using the various modes.

Journal ArticleDOI
TL;DR: The main features of a prototype image retrieval system, nicknamed Imagine, are described, with the focus on response time and scalability, that is, the ability to maintain the service over a wide range of workstation performances and network digital rates.
Abstract: This paper describes the main features of a prototype image retrieval system, nicknamed Imagine. Response time and scalability, that is, the ability to maintain the service over a wide range of workstation performances and network digital rates, constituted the focus of the investigation. The assumption that the database is located at a site remote from the user workstation, and that the network connecting them is relatively poor in bandwidth, forced the design of rather sophisticated navigation procedures beyond the adoption of the JPEG image coding scheme. To achieve an acceptable response time, the user must be able to easily identify the desired images without having to transfer a large number of potentially useless ones. The reported objective performance evaluations were carried out on a functionally complete laboratory system.

Journal ArticleDOI
TL;DR: This work proposes an approach in which the layering is performed in the pel domain using a modified version of the conventional spatial pyramid technique, and comparisons are made with the alternative methods of subband coding and layering of DCT coefficients.
Abstract: Layered techniques offer attractive features for video coding in several applications including high definition TV (HDTV). After describing these in general, we propose an approach in which the layering is performed in the pel domain using a modified version of the conventional spatial pyramid technique. Comparisons are made with the alternative methods of subband coding and layering of DCT coefficients. Results with HDTV pictures were first demonstrated on 3 November 1992.
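
A minimal pel-domain pyramid of the kind referred to above; nearest-neighbour decimation and up-sampling stand in for the paper's filters, so this shows only the layering structure, not its modified pyramid.

    import numpy as np

    def split_layers(image, levels=3):
        layers, current = [], image.astype(float)
        for _ in range(levels - 1):
            coarse = current[::2, ::2]                      # 2:1 decimation
            pred = np.kron(coarse, np.ones((2, 2)))         # crude up-sampling
            layers.append(current - pred[:current.shape[0], :current.shape[1]])
            current = coarse
        layers.append(current)                              # lowest-resolution base layer
        return layers

    def merge_layers(layers):
        current = layers[-1]
        for detail in reversed(layers[:-1]):
            pred = np.kron(current, np.ones((2, 2)))[:detail.shape[0], :detail.shape[1]]
            current = pred + detail                         # exact reconstruction
        return current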

Journal ArticleDOI
TL;DR: This contribution deals with the digital broadcasting of HDTV channels over the cable television (CATV) distribution system, using either single-carrier QAM or an orthogonal frequency division multiplex of many QAM carriers to represent an HDTV channel.
Abstract: This contribution deals with the digital broadcasting of HDTV channels over the cable television (CATV) distribution system, using either single-carrier QAM or an orthogonal frequency division multiplex (OFDM) of many QAM carriers to represent an HDTV channel. Assuming that no error-correcting codes are used, we investigate two distinct cases: in the first case, a few HDTV channels are transmitted among many analog TV channels, whereas in the second case all transmitted channels are HDTV channels. We show that in the first case the transmit power of an HDTV channel can be substantially reduced (by about 10 dB or more) as compared to the transmit power of an analog TV channel, while still maintaining a satisfactory bit error rate (BER). In the second case, not only a considerable reduction of the total transmit power but also a reduction of amplifier cost and an increase in the number of TV channels can be achieved. Single-carrier QAM is found to perform slightly better (at most about 1 or 2 dB) than multi-carrier QAM.
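
For readers wanting to reproduce the BER-versus-power trade-off, the standard textbook approximation for Gray-coded square M-QAM over an AWGN channel (uncoded, matching the paper's no-FEC assumption) is easy to evaluate:

    import math

    def qam_ber(M, ebn0_db):
        # Approximate bit error rate of square M-QAM with Gray mapping.
        k = math.log2(M)                          # bits per symbol
        ebn0 = 10 ** (ebn0_db / 10)
        arg = math.sqrt(3 * k / (M - 1) * ebn0)
        q = 0.5 * math.erfc(arg / math.sqrt(2))   # Gaussian tail Q(arg)
        return (4 / k) * (1 - 1 / math.sqrt(M)) * q

    # Example: 64-QAM needs several dB more Eb/N0 than 16-QAM for the same
    # BER, which is the kind of margin traded against transmit power here.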

Journal ArticleDOI
TL;DR: The proposed approach, called the truncated overlap-add with compensation (TOAC) technique, provides a cost-effective solution to resampling problems and can be implemented on a single VLSI chip together with inverse block transform operations.
Abstract: A new efficient method for interpolation and decimation of images by an arbitrary ratio using block transform coefficients, such as the discrete cosine transform (DCT), is presented. Owing to the multiple standards in image/video coding, decoders as well as display and recording devices are expected to need to convert the received signal from one format to another. It is essential that high-quality resampling be done with low hardware complexity, since such processing normally requires a large amount of computation. In this paper, a block-based non-integer-ratio resampling algorithm is developed which can be implemented very efficiently without a significant increase in system complexity. In the proposed approach, the inverse block transform (for example, the inverse DCT) and the resampling process are combined into one process, so that no additional processing stage is required. The proposed approach, called the truncated overlap-add with compensation (TOAC) technique, provides a cost-effective solution to resampling problems. It can be implemented on a single VLSI chip together with inverse block transform operations.
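
The transform-domain resampling idea can be shown in its simplest (integer-ratio, single-block) form: zero-pad the DCT coefficients and inverse-transform at the larger size. The TOAC truncation, overlap-add and compensation steps for arbitrary ratios across block boundaries are not reproduced here.

    import numpy as np

    def dct_matrix(n):
        # Orthonormal DCT-II basis matrix.
        k = np.arange(n)
        C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0] /= np.sqrt(2.0)
        return C

    def upscale_block(coeffs, out_size):
        # Zero-pad an n x n coefficient block to out_size x out_size and
        # inverse-transform; the scale factor keeps pixel amplitudes right
        # for the orthonormal DCT.
        n = coeffs.shape[0]
        padded = np.zeros((out_size, out_size))
        padded[:n, :n] = coeffs * (out_size / n)
        C = dct_matrix(out_size)
        return C.T @ padded @ C       # inverse DCT (orthonormal: inverse = transpose)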

Journal ArticleDOI
TL;DR: A video coding algorithm submitted for consideration by ISO/MPEG in the phase of its work targeted at bit-rates up to about 10 Mbit/s; it uses a hierarchical structure with splitting in the pel domain, achieving compatibility with the earlier MPEG draft standard.
Abstract: This paper describes a video coding algorithm submitted for consideration by ISO/MPEG in the phase of its work targeted at bit-rates up to about 10 Mbit/s. Distinguishing features of the submission were the attempt to meet a wide range of requirements, rather than concentrating solely on picture quality, and compatibility with the earlier MPEG draft standard. This was achieved with a hierarchical structure with splitting in the pel domain. The reasons for this approach are explained and the impact on implementation complexity is also covered.

Journal ArticleDOI
TL;DR: Comparisons and conclusions from a comprehensive report comparing five proposed high definition television systems are presented and the recent formation of a ‘Grand Alliance’ by the individual proponents of digital systems to propose a single system is noted.
Abstract: In the United States, the Federal Communications Commission (FCC) began a process six years ago to develop a terrestrial high definition television (HDTV) broadcasting standard. Early in 1993 a comprehensive report was released by the FCC's Advisory Committee on Advanced Television Service comparing five proposed systems that had undergone extensive testing. Although the report did not pick a ‘winning system’, it did recommend that only digital systems receive further consideration as the United States standard. This paper presents comparisons and conclusions from that report and notes the recent formation of a ‘Grand Alliance’ by the individual proponents of digital systems to propose a single system.

Journal ArticleDOI
TL;DR: This paper shows how the Block Truncation Coding-Vector Quantization algorithm may optimally adapt itself to the local nature of the picture while keeping the bit-rate fixed at a given value.
Abstract: Block Truncation Coding-Vector Quantization (BTC-VQ) is a simple, fast, non-adaptive block-based image compression technique with a moderate bit-rate of 1.0 bit/pixel. By making the algorithm adaptive it is possible to further lower the bit-rate. In this paper we show how the algorithm may optimally adapt itself to the local nature of the picture while keeping the bit-rate fixed at a given value. Examples are given of image compressions down to 0.65 bit/pixel. We further show how the adaptation process may be carried out in a fast and efficient manner.
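
The non-adaptive BTC core that the paper builds on is only a few lines; this is the common absolute-moment variant, and the vector quantization of the bit map (the VQ part of BTC-VQ) is omitted.

    import numpy as np

    def btc_block(block):
        # Two-level quantization of one block: a 1-bit map plus two
        # reconstruction levels (the means above and below the block mean).
        # On 4x4 blocks this costs about 2 bit/pixel; BTC-VQ reaches
        # 1.0 bit/pixel by vector-quantizing the bit map as well.
        mean = block.mean()
        bitmap = block >= mean
        if bitmap.all() or not bitmap.any():
            return np.full(block.shape, mean)
        low = block[~bitmap].mean()
        high = block[bitmap].mean()
        return np.where(bitmap, high, low)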

Journal ArticleDOI
TL;DR: The paper describes the picture encoding algorithm that was implemented, gives a global evaluation of the encoding system and presents a detailed proposal for a VLSI implementation of the decoding system.
Abstract: The paper presents a study of a VLSI implementation of a video decoder targeted at systems with bit-rates up to 10 Mbit/s. Among the features of the decoder are CCIR 601 4:2:2 full-resolution output, the regeneration of a motion-compensated prediction using both spatial and temporal interpolation techniques, and an inverse DCT of the coefficients. The paper describes the picture encoding algorithm that was implemented, gives a global evaluation of the encoding system and presents a detailed proposal for a VLSI implementation of the decoding system.

Journal ArticleDOI
TL;DR: A new method designed to remove all ringing distortion within the smooth image areas, using blockwise paraboloid fitting in those areas.
Abstract: This paper considers the well-known problem of ringing distortion in subband coded images. Exploiting the fact that ringing artifacts are masked by edges and textures, we propose a new method designed to remove all ringing distortion within the smooth image areas. The original image is used for classification purposes, and the subband coded image is postprocessed using blockwise paraboloid fitting in the smooth image areas. Experimental results comparing subband coded images with and without the proposed method are presented. The results indicate that significant visual improvements can be made at the expense of a very small increase in bit-rate.
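
A least-squares paraboloid (2-D quadratic surface) fit to one block looks like this; replacing a smooth block by its fit removes oscillating ringing while preserving the slow shading. The classification of smooth areas from the original image is not reproduced.

    import numpy as np

    def fit_paraboloid(block):
        # Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f by least squares
        # and return the fitted (ringing-free) surface.
        h, w = block.shape
        yy, xx = np.mgrid[0:h, 0:w]
        x, y = xx.ravel(), yy.ravel()
        A = np.column_stack([x * x, y * y, x * y, x, y,
                             np.ones_like(x)]).astype(float)
        coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
        return (A @ coeffs).reshape(h, w)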

Journal ArticleDOI
TL;DR: This paper presents the results obtained with a simulated coder in which DCT is used in all bands where expected spatial correlation is high enough and the use of a pyramid vector quantizer is proposed.
Abstract: The advent of broadband digital networks will make it possible to transmit high-quality digital television. Subband coding is a very interesting coding scheme, as it accommodates a transmission quality that can flexibly adapt to the available channel capacity. Subband coding (SBC) and block cosine-transform techniques have already been used together, but only to code the low-pass image; residual spatial correlation present in the higher-frequency bands has only partially been exploited. This paper presents the results obtained with a simulated coder in which the DCT is used in all bands where the expected spatial correlation is high enough. The use of a pyramid vector quantizer is proposed. Results are good and compare favourably with other results published in the literature.

Journal ArticleDOI
TL;DR: This proposed algorithm processes the interlaced sequence as a sequence of even and odd fields by using the last decoded field, adaptively deinterlaced, for the motion compensated prediction of the current field.
Abstract: Video coding algorithms using block motion compensation were first developed for progressively scanned sequences and, as such, are not entirely suitable for interlaced sequences. In this paper we present a new approach for block-based coding of interlaced sequences. The proposed algorithm processes the interlaced sequence as a sequence of even and odd fields, using the last decoded field, adaptively deinterlaced, for the motion-compensated prediction of the current field. The deinterlacing is performed at the decoder and no extra information has to be sent to guide the adaptation. The algorithm is a simple and efficient alternative to algorithms using the last two decoded fields for the motion-compensated prediction of the current field. The new approach can easily incorporate fast search algorithms and allows true half-pixel accuracy in the estimates of the vertical component of the motion vectors. On the HDTV sequences tested, this algorithm achieves superior performance due to this half-pixel accuracy.
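
The deinterlacing that builds the prediction can be as simple as line averaging; the sketch below fills the missing lines of a decoded field this way. The adaptive part (switching interpolators at the decoder without side information) is not reproduced.

    import numpy as np

    def deinterlace(field):
        # Place the decoded field on alternate lines and interpolate the
        # missing lines as the average of the lines above and below.
        h, w = field.shape
        frame = np.empty((2 * h, w), dtype=float)
        frame[0::2] = field
        below = np.vstack([field[1:], field[-1:]])   # repeat last line at the edge
        frame[1::2] = 0.5 * (field + below)
        return frame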

Journal ArticleDOI
TL;DR: Simulation results show that the performance of the suggested scheme, Projection-VQ, is superior to normal VQ in terms of PSNR, and substantial improvement in subjective image quality is attained because staircase noise is not visible in the reconstructed image.
Abstract: Vector quantization (VQ), an optimal image compression technique, reduces bit-rate using the interpixel correlation within a block of an image. As the block size increases, the performance of VQ improves, but serious problems arise in computational complexity and required memory. Another serious problem of VQ, intrinsic to block coding schemes, is blocking effects, which degrade image quality severely. This paper presents an image compression scheme, Projection-VQ, based on VQ of projections instead of the image itself. The vector dimension of a projection is much smaller than that of the image, so a large block can easily be processed in the projection domain. In Projection-VQ, a block image is projected at several angles. If the projections which carry edge information are transmitted, an image can be reconstructed without destroying edge sharpness even with only a few projections. Simulation results show that the performance of the suggested scheme is superior to the normal VQ scheme in terms of PSNR. Also, substantial improvement in subjective image quality is attained because staircase noise, a kind of blocking effect, is not visible in the reconstructed image.
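
The dimensionality argument is easy to see in code: an N x N block has N*N samples, but each of its projections has only N, so the VQ operates on far shorter vectors. Horizontal and vertical projections are shown; the paper also uses other angles.

    import numpy as np

    def block_projections(block):
        # 0-degree and 90-degree projections of one N x N block: each is a
        # length-N vector, versus the N*N samples a direct VQ would face.
        return block.sum(axis=1), block.sum(axis=0)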

Journal ArticleDOI
TL;DR: This paper compares different methods of key signal generation with respect to the resulting picture quality in the HDTV studio.
Abstract: The quality requirements on HDTV are much higher than for standard TV. For this reason, sophisticated signal processing methods are necessary within the HDTV studio. This paper compares different methods of key signal generation with respect to the resulting picture quality. Furthermore, an automatic set-up procedure for a digital chromakey mixer is described.
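
A basic soft chroma key generator, one of the kinds of key signal generation being compared; the key colour and softness constants here are invented for illustration and are not from the paper.

    import numpy as np

    def key_signal(cb, cr, key_cb=0.35, key_cr=-0.35, softness=0.08):
        # Soft key from the chroma-plane distance to the backing colour:
        # 0 inside the backing colour, ramping up to 1 in the foreground.
        dist = np.hypot(cb - key_cb, cr - key_cr)
        return np.clip(dist / softness - 1.0, 0.0, 1.0)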

Journal ArticleDOI
TL;DR: By means of this technique, service continuity can be extended without the need to increase the satellite transmission power, the service quality under severe atmospheric attenuation being reduced from high definition to normal definition.
Abstract: A new coding/modulation approach is considered to improve the service availability of digital high-definition television (HDTV) satellite broadcasting services at 22 GHz. This approach uses a concept of layered modulation in conjunction with layered picture coding and channel coding. By means of this technique, service continuity can be extended without the need to increase the satellite transmission power, the service quality under severe atmospheric attenuation being reduced from high definition to normal definition.

Journal ArticleDOI
TL;DR: A hierarchical image coding algorithm based on sub-band coding and adaptive block-size multistage vector quantization (VQ) is proposed, and its coding performance is examined for super high definition (SHD) images.
Abstract: A hierarchical image coding algorithm based on sub-band coding and adaptive block-size multistage vector quantization (VQ) is proposed, and its coding performance is examined for super high definition (SHD) images. First, the concept of SHD images is briefly described. Next, the signal power spectrum is evaluated, and the sub-band analysis pattern is determined from its characteristics. Several quadrature mirror filters are examined from the viewpoints of reconstruction accuracy, coding gain and low-pass signal quality, and an optimum filter is selected for the sub-band analysis. Two-stage VQ with adaptive bit allocation is also introduced to control quantization accuracy and to achieve high-quality image reproduction. Coding performance and hierarchical image reconstruction are demonstrated using SNR and some photographs.

Journal ArticleDOI
TL;DR: A hybrid DCT coding method that was proposed at the Kurihama tests organised by ISO/IEC JTC1/SC2/WG8 in November 1991 is described, and the mechanism of the U-VLC is explained in detail.
Abstract: A hybrid DCT coding method is described that was proposed at the Kurihama tests organised by ISO/IEC JTC1/SC2/WG8 in November 1991. The scheme includes the composition of pseudo-progressive frames as a first step of the coder. Motion-compensated prediction is then used to exploit temporal redundancy. The quantization parameter is given by a linear relation with the buffer fullness. Finally, the mechanism of the U-VLC is explained in detail.
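
The quantizer control described (a linear relation with the buffer fullness) amounts to a one-liner; the parameter range 1–31 is the usual MPEG-style range, assumed here for illustration.

    def quant_param(fullness, buffer_size, q_min=1, q_max=31):
        # Quantization parameter grows linearly with buffer occupancy:
        # a fuller buffer forces coarser quantization to cut the bit-rate.
        q = q_min + (q_max - q_min) * fullness / buffer_size
        return int(round(min(max(q, q_min), q_max)))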

Journal ArticleDOI
TL;DR: A reference scheme for the optimum use of motion compensation in future image communication networks is presented; the problems of representing motion relative to a given source image signal and of adjusting it to new frame-rate environments are especially addressed.
Abstract: There is no doubt that in the near future a large number of image processing techniques will be based on motion compensation, making the cascading of several ‘motion compensated’ devices in the same image chain very common. A reference scheme for the optimum use of motion compensation in future image communication networks is presented. Motion estimation is performed only once, at a very early stage of the process chain; the motion information is then encoded, transmitted in a separate data channel and distributed to the cascaded motion-compensated processes. The distribution scenario must take into consideration the various transformations performed on the image signal since its origination, so that the motion information distributed is always consistent with the pictures to be processed. The problems of representing motion relative to a given source image signal and of adjusting it to new frame-rate environments are especially addressed.
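
One concrete piece of the frame-rate adjustment problem is rescaling a transmitted motion vector to a different temporal distance; under a constant-velocity assumption (an assumption made here for illustration, not the paper's full solution) this is linear scaling.

    def rescale_mv(mv, src_interval, dst_interval):
        # mv spans src_interval (e.g. 1/25 s); stretch or shrink it to the
        # new frame interval, assuming roughly constant motion in between.
        s = dst_interval / src_interval
        return (mv[0] * s, mv[1] * s)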

Journal ArticleDOI
TL;DR: An algorithm to reduce the bit-rate for transmission of the quantized DCT coefficient data in digital HDTV coders, using a new scanning method based on segmentation and interleaving of DCT coefficients.
Abstract: This paper proposes an algorithm to reduce the bit-rate for transmission of the quantized DCT coefficient data in digital HDTV coders. Variable length coding compresses the quantized DCT coefficient data by removing statistical redundancy, and the zigzag scan is an effective way to improve its performance. To reduce the bit-rate further, we propose a new scanning method. Each DCT block is classified into an interleaving group or a non-interleaving group according to its number of non-zero DCT coefficients; the DCT blocks in the interleaving group are encoded using segmentation and interleaving of DCT coefficients, while the DCT blocks in the non-interleaving group are encoded using only the zigzag scan. Simulation results show that the proposed method improves the bit-rate reduction performance by 6.8% compared with the conventional method.
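
For reference, the zigzag scan order the method starts from, plus the group classification by non-zero count (the threshold value is illustrative; the paper does not state one):

    import numpy as np

    def zigzag_order(n=8):
        # Classic zigzag: walk the anti-diagonals, alternating direction.
        return sorted(((i, j) for i in range(n) for j in range(n)),
                      key=lambda ij: (ij[0] + ij[1],
                                      ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))

    def classify(coeffs, threshold=6):
        # Blocks with many non-zero quantized coefficients go to the
        # interleaving group; sparse blocks keep the plain zigzag scan.
        if np.count_nonzero(coeffs) >= threshold:
            return 'interleaving'
        return 'non-interleaving'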