
Showing papers on "Discrete cosine transform published in 1986"


Journal ArticleDOI
TL;DR: A simple yet efficient extension of subband coding to the source coding of images, specifying the constraints for a set of two-dimensional quadrature mirror filters for a particular frequency-domain partition and showing that these constraints are satisfied by a separable combination of one-dimensional QMF's.
Abstract: Subband coding has become quite popular for the source encoding of speech. This paper presents a simple yet efficient extension of this concept to the source coding of images. We specify the constraints for a set of two-dimensional quadrature mirror filters (QMF's) for a particular frequency-domain partition, and show that these constraints are satisfied by a separable combination of one-dimensional QMF's. Bits are then optimally allocated among the subbands to minimize the mean-squared error for DPCM coding of the subbands. Also, an adaptive technique is developed to allocate the bits within each subband by means of a local variance mask. Optimum quantization is employed with quantizers matched to the Laplacian distribution. Subband coded images are presented along with their signal-to-noise ratios (SNR's). The SNR performance of the subband coder is compared to that of the adaptive discrete cosine transform (DCT), vector quantization, and differential vector quantization for bit rates of 0.67, 1.0, and 2.0 bits per pixel for 256 × 256 monochrome images. The adaptive subband coder has the best SNR performance.

1,181 citations
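To make the separable construction concrete, here is a minimal Python sketch of a one-level 2-D subband analysis built from a 1-D filter pair, using the 2-tap Haar pair as a stand-in for the longer QMFs designed in the paper; the function names, filter choice and the variance-based bit-allocation remark are illustrative assumptions, not the authors' implementation.

import numpy as np

# Separable two-band split per dimension (four subbands total). The 2-tap Haar
# pair below is only a stand-in for the paper's one-dimensional QMFs.
h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)    # low-pass analysis filter
h1 = np.array([1.0, -1.0]) / np.sqrt(2.0)   # its quadrature mirror (high-pass)

def analyze_rows(x):
    # Filter every row with h0 and h1, then decimate by 2.
    lo = np.apply_along_axis(lambda r: np.convolve(r, h0)[1::2], 1, x)
    hi = np.apply_along_axis(lambda r: np.convolve(r, h1)[1::2], 1, x)
    return lo, hi

def subband_decompose(image):
    # Separable 2-D analysis: rows first, then columns, giving LL, LH, HL, HH.
    lo, hi = analyze_rows(image)
    ll, lh = (b.T for b in analyze_rows(lo.T))
    hl, hh = (b.T for b in analyze_rows(hi.T))
    return ll, lh, hl, hh

image = np.random.rand(256, 256)
bands = subband_decompose(image)
# Bits would then be allocated among the subbands according to their variances.
print([b.shape for b in bands], [float(np.var(b)) for b in bands])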


Proceedings ArticleDOI
07 Apr 1986
TL;DR: An extension of sub-band coding to two dimensions with particular application to images, employing a 16-band decomposition using a tree structure of separable quadrature mirror filters and decimators.
Abstract: This paper presents an extension of sub-band coding to two dimensions with particular application to images. We employ a 16-band decomposition using a tree structure of separable quadrature mirror filters and decimators. The sub-bands are encoded using DPCM with bits allocated to approximately minimize the mean-square error. A block-adaptive variant is also presented. A limited SNR comparison shows adaptive sub-band coding to outperform the adaptive discrete cosine transform and two types of vector quantizers.

188 citations


Journal ArticleDOI
TL;DR: An 8-point Fourier-cosine transform chip designed for a data rate of 100 Mbits/s is described, including algorithm modification for VLSI suitability, architectural choices, testing overhead, internal precision assignments, mask generation, and finally, verification of the layout.
Abstract: An 8-point Fourier-cosine transform chip designed for a data rate of 100 Mbits/s is described. The top-down design is presented step by step, including algorithm modification for VLSI suitability, architectural choices, testing overhead, internal precision assignments, mask generation, and finally, verification of the layout. A high-level language (C) design tool was developed concurrently with the layout. This tool allows mimicking exactly the different representations of the algorithm: software, mask, and chip. This provides automatic cross-checking at all design stages. The VLSI environment created by this tool, as well as existing powerful CAD tools, made a short design time possible.

121 citations


Patent
27 Jan 1986
TL;DR: In this article, a signal analysing and synthesizing filter bank system is proposed, where the analysing bank receives a signal sampled at the rate fe and produces N contiguous subband signals sampled at the rate fe/N.
Abstract: In a signal analysing and synthesizing filter bank system, the analysing bank receives a signal sampled at the rate fe and produces N contiguous subband signals sampled at the rate fe/N. From the subband signals the synthesizing bank must recover the incoming signal. These filter banks are formed by modulation of a prototype filter by sinusoidal signals which, for subband k (0≦k≦N-1), have a frequency (2k+1)fe/(4N) and respective phases +(2k+1)π/4 and -(2k+1)π/4 for the analysing and synthesizing banks. These signals are furthermore delayed by a time delay (Nc-1)/(2fe), where Nc is the number of coefficients of the prototype filter. Preferably, the analysing bank is realized by the cascade arrangement of an N-branch polyphase network (12) and a double-odd discrete cosine transform calculating arrangement (14), and the synthesizing bank is realized by the cascade arrangement of a double-odd discrete cosine transform calculating arrangement (15) and an N-branch polyphase network (17).

81 citations
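As an illustration of the modulation rule in the abstract (prototype filter modulated to the frequencies (2k+1)fe/(4N), with phase +(2k+1)π/4 and a delay of (Nc-1)/2 samples on the analysis side), here is a hedged Python sketch; the firwin prototype, N=8 and Nc=64 are placeholder choices rather than the patent's optimized design, and the polyphase/DCT realization is not reproduced.

import numpy as np
from scipy.signal import firwin

def analysis_bank(N=8, Nc=64):
    # Prototype low-pass with cutoff around fe/(4N) (normalized so Nyquist = 1).
    p = firwin(Nc, 1.0 / (2 * N))
    n = np.arange(Nc)
    filters = []
    for k in range(N):
        # Cosine modulation at (2k+1)*fe/(4N) with phase +(2k+1)*pi/4 and
        # a delay of (Nc-1)/2 samples, as stated in the abstract.
        arg = np.pi * (2 * k + 1) * (n - (Nc - 1) / 2) / (2 * N) + (2 * k + 1) * np.pi / 4
        filters.append(2 * p * np.cos(arg))
    return np.array(filters)

def analyze(signal, filters, N):
    # Filter with each subband filter and decimate by N (direct, non-polyphase form).
    return np.array([np.convolve(signal, h)[::N] for h in filters])

H = analysis_bank()
x = np.random.randn(1024)
subbands = analyze(x, H, N=8)
print(subbands.shape)   # (8, ...) subband signals at rate fe/N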


Patent
19 Jun 1986
TL;DR: In this paper, a discrete cosine transform circuit utilizing symmetries of the cosine matrix of coefficients was proposed to allow all multiplications to be done by "constant multipliers" comprising combinations of look-up tables and adders.
Abstract: A discrete cosine transform circuit utilizing symmetries of the cosine matrix of coefficients to allow all multiplications to be done by "constant multipliers" comprising combinations of look-up tables and adders. Transform coefficients are developed by dividing the input into a sequence of blocks of preselected size; the information in the blocks is sorted into a specific order, and the reordered blocks are applied seriatim to a first one-dimensional cosine transform circuit employing the constant multipliers. The output of the first cosine transform circuit is applied to a transposing memory and then to a second cosine transform circuit that also employs "constant multipliers".

76 citations


Journal ArticleDOI
TL;DR: A relationship between the discrete cosine transform (DCT) and the discrete Hartley transform (DHT) is derived, and it leads to a new fast and numerically stable algorithm for the DCT.
Abstract: A relationship between the discrete cosine transform (DCT) and the discrete Hartley transform (DHT) is derived. It leads to a new fast and numerically stable algorithm for the DCT.

76 citations


Journal ArticleDOI
TL;DR: Improved performance of recursive block coding algorithms results in suppression of the block-boundary effect commonly observed in traditional transform coding techniques, illustrated by comparing RBC with cosine transform coding using both one- and two-dimensional algorithms.
Abstract: The concept of fast KL transform coding introduced earlier [7], [8] for first-order Markov processes and certain random fields has been extended to higher order autoregressive (AR) sequences and practical images, yielding what we call recursive block coding (RBC) algorithms. In general, the rate-distortion performance for these algorithms is significantly superior to that of the conventional block KL transform algorithm. Moreover, these algorithms permit the use of small size transforms, thereby removing the need for fast transforms and making the hardware implementation of such coders more appealing. This improved performance has been verified for practical image data and results in suppression of the block-boundary effect commonly observed in traditional transform coding techniques. This is illustrated by comparing RBC with cosine transform coding using both one- and two-dimensional algorithms. Examples of RBC encoded images at various rates are given.

62 citations


Proceedings ArticleDOI
20 Nov 1986
TL;DR: An adaptive cosine transform coding scheme capable of real-time operation is described; it incorporates human visual system properties into the coding, and results showed that the subjective quality of the processed images is significantly improved even at a low bit rate.
Abstract: An adaptive cosine transform coding scheme capable of real-time operation is described. It incorporates the human visual system properties into the coding scheme. Results showed that the subjective quality of the processed images is significantly improved even at a low bit rate of 0.2 bit/pixel. With the adaptive scheme, an average of 0.26 bit/pixel can be achieved with very little perceivable degradation.

40 citations


Proceedings ArticleDOI
01 May 1986
TL;DR: This paper explains a coding technique for transmitting television scenes at rates ranging from 64 to 320 kbit/s, using a hybrid coding algorithm with predictive coding in the time domain and transform coding in the spatial domain to improve coding efficiency.
Abstract: This paper explains a coding technique for transmitting television scenes at rates ranging from 64 to 320 kbit/s. The algorithms to be described are applied to TV input signals with reduced spatial and temporal resolution. Use is made of a hybrid coding algorithm, with predictive coding in the time domain and transform coding in the spatial domain. To improve coding efficiency an object matching technique is used for movement compensation. The residual prediction errors are coded by adaptive block quantization with a 16*16 discrete cosine transform (DCT). On the receiver side skipped fields are reconstructed by motion adaptive interpolation (MAI).

36 citations
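A bare-bones sketch of the hybrid principle (temporal prediction followed by a 16x16 DCT of the residual) may help; object-matching motion compensation, adaptive block quantization and motion adaptive interpolation are omitted, and the uniform quantizer step size is an arbitrary illustrative choice.

import numpy as np
from scipy.fft import dctn, idctn

def hybrid_code_frame(frame, prediction, block=16, step=16.0):
    # Temporal prediction error, transformed block-by-block and uniformly quantized.
    residual = frame - prediction
    coded = np.zeros_like(residual)
    H, W = residual.shape
    for r in range(0, H, block):
        for c in range(0, W, block):
            C = dctn(residual[r:r+block, c:c+block], norm='ortho')
            q = np.round(C / step)                        # symbols that would be entropy-coded
            coded[r:r+block, c:c+block] = idctn(q * step, norm='ortho')
    # Decoder-side reconstruction; it becomes the predictor for the next frame.
    return prediction + coded

prev = np.zeros((64, 64))
cur = np.random.rand(64, 64) * 255
rec = hybrid_code_frame(cur, prev)
print(float(np.mean((cur - rec) ** 2)))   # residual coding error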


Journal ArticleDOI
TL;DR: Coding schemes suitable for progressive transmission of still pictures in NTSC composite format are studied and results are presented for coding the NTSC signal sampled at four times the color subcarrier frequency with an orthogonal structure.
Abstract: Coding schemes suitable for progressive transmission of still pictures in NTSC composite format are studied. Transform coding methods based on the discrete cosine transform (DCT) and the Walsh-Hadamard transform (WHT) are used. The transform coefficients are segmented into groups having similar properties. Variable block-length to variable-length codes are used to encode the quantized coefficients for each of the groups. Progressive transmission is achieved by sending the coefficients belonging to particular groups in successive passes over the image. Results are presented for coding the NTSC signal sampled at four times the color subcarrier frequency with an orthogonal structure, and at twice the subcarrier frequency with a hexagonal structure.

35 citations


Journal ArticleDOI
TL;DR: A novel progressive quantization scheme is developed for optimal progressive transmission of transformed diagnostic images that delivers intermediately reconstructed images of comparable quality twice as fast as the more usual zig-zag sampled approach.
Abstract: In radiology, as a result of the increased utilization of digital imaging modalities, such as computed tomography (CT) and magnetic resonance imaging (MRI), over a third of the images produced in a typical radiology department are currently in digital form, and this percentage is steadily increasing. Image compression provides a means for the economical storage and efficient transmission of these diagnostic pictures. The level of coding distortion that can be accepted for clinical diagnosis purposes is not yet well-defined. In this paper we introduce some constraints on the design of existing transform codes in order to achieve progressive image transmission efficiently. The design constraints allow the image quality to be asymptotically improved such that the proper clinical diagnoses are always possible. The modified transform code outperforms simple spatial-domain codes by providing higher quality of the intermediately reconstructed images. The improvement is 10 dB for a compression factor of 256:1, and it is as high as 17.5 dB for a factor of 8:1. A novel progressive quantization scheme is developed for optimal progressive transmission of transformed diagnostic images. Combined with a discrete cosine transform, the new approach delivers intermediately reconstructed images of comparable quality twice as fast as the more usual zig-zag sampled approach. The quantization procedure is suitable for hardware implementation.
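For context, the following small sketch shows the "zig-zag sampled" baseline that the proposed progressive quantization is compared against: each pass sends a few more DCT coefficients of a block in zig-zag order and the receiver refines its reconstruction. The 8x8 block size and the coefficients-per-pass value are illustrative assumptions.

import numpy as np
from scipy.fft import dctn, idctn

def zigzag_indices(n=8):
    # (row, col) pairs of an n x n block in zig-zag order, low to high frequency.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def progressive_passes(block, coeffs_per_pass=8):
    # Yield successively refined reconstructions as more coefficients arrive.
    C = dctn(block, norm='ortho')
    received = np.zeros_like(C)
    order = zigzag_indices(block.shape[0])
    for start in range(0, len(order), coeffs_per_pass):
        for r, c in order[start:start + coeffs_per_pass]:
            received[r, c] = C[r, c]
        yield idctn(received, norm='ortho')

block = np.random.rand(8, 8) * 255
passes = list(progressive_passes(block))
print(len(passes), float(np.max(np.abs(passes[-1] - block))))   # final pass is exact up to rounding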

Proceedings ArticleDOI
01 Apr 1986
TL;DR: A new vector quantization scheme in the discrete cosine transform domain, named DCT-VQ, and its application to color image coding are described; the decomposition of the DCT block into vectors results in a much less complex coder than a vector quantizer in the original spatial domain.
Abstract: A new vector quantization scheme in the discrete cosine transform domain, named DCT-VQ, and its application to color image coding are described. In this scheme, the DCT domain is partitioned into vectors which are normalized and vector-quantized using universal vector quantizers designed with a multidimensional Laplacian distribution. An adaptive coding scheme is also introduced to obtain better reconstruction of images. The color image coder employs the above scheme and encodes separately three components converted from R,G,B signals. The simulations have shown that adaptive DCT-VQ exhibits better performance than conventional adaptive cosine transform coding with scalar quantization. The decomposition of the DCT block into vectors results in a much less complex coder than a vector quantizer in the original spatial domain.
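A rough sketch of the DCT-domain VQ idea follows: blocks are transformed, the AC coefficients are normalized by their energy, and the normalized vectors are matched to a codebook. For simplicity it treats all AC coefficients of an 8x8 block as a single vector and trains the codebook with k-means on the image itself, whereas the paper partitions the DCT domain into several vectors and uses universal codebooks designed from a multidimensional Laplacian model; block size and codebook size are assumptions.

import numpy as np
from scipy.fft import dctn
from scipy.cluster.vq import kmeans2, vq

def blocks_to_ac_vectors(image, b=8):
    # Transform non-overlapping b x b blocks and keep each block's AC coefficients.
    H, W = image.shape
    vecs = []
    for r in range(0, H, b):
        for c in range(0, W, b):
            C = dctn(image[r:r+b, c:c+b], norm='ortho').ravel()
            vecs.append(C[1:])                # drop the DC term
    return np.array(vecs)

image = np.random.rand(128, 128)
ac = blocks_to_ac_vectors(image)
gains = np.linalg.norm(ac, axis=1, keepdims=True) + 1e-12   # per-block gain
shapes = ac / gains                                          # unit-energy shape vectors
codebook, _ = kmeans2(shapes, 64, minit='++')                # stand-in for a designed universal codebook
indices, dists = vq(shapes, codebook)
print(indices[:10], float(dists.mean()))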

Patent
20 May 1986
TL;DR: In this article, a first adder stage splits the input sequence into two half-length sequences; a group of upper half-stages computes the even components of the cosine transform as a cosine transform for a group of (N/2) points, and a group of lower half-stages receives the sequence (y_i) and supplies the sequence (X_2q+1) of the odd components of the cosine transform.
Abstract: A circuit for the fast calculation of the discrete cosine transform (X_i), 0≦i≦N-1, in which N=2^n and n is an integer, of a signal defined by a sequence (x_i), 0≦i≦N-1, includes a first adder stage receiving the sequence (x_i), 0≦i≦N-1, and supplying two sequences (x_i^o) and (y_i), 0≦i≦(N/2)-1; a group of upper half-stages receiving the sequence (x_i^o) and supplying the sequence (X_2q) of the even components of the cosine transform, that group constituting a circuit for the fast calculation of the cosine transform for a group of (N/2) points; and a group of lower half-stages receiving the sequence (y_i) and supplying the sequence (X_2q+1) of the odd components of the cosine transform.
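The even/odd split in the claim can be checked numerically. The sketch below is a Lee-style floating-point decomposition that builds an N-point DCT from two N/2-point DCTs plus an input adder stage and an output adder for the odd part; it only illustrates the structure and is not the patented circuit, and the unscaled DCT-II definition used here is an assumption.

import numpy as np

def dct2_ref(x):
    # Reference (unscaled) DCT-II: X[k] = sum_n x[n] * cos(pi*(2n+1)*k/(2N)).
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2 * N))) for k in range(N)])

def dct2_recursive(x):
    # N must be a power of two. Even coefficients come from the DCT of the
    # sum sequence, odd coefficients from the DCT of a weighted difference
    # sequence followed by an adder stage (X[2q+1] = V[q] + V[q+1]).
    x = np.asarray(x, dtype=float)
    N = len(x)
    if N == 1:
        return x.copy()
    half = N // 2
    head, tail = x[:half], x[half:][::-1]
    even = dct2_recursive(head + tail)                       # upper half-stages
    n = np.arange(half)
    v = (head - tail) / (2.0 * np.cos(np.pi * (2 * n + 1) / (2 * N)))
    V = dct2_recursive(v)                                    # lower half-stages
    odd = V + np.concatenate([V[1:], [0.0]])
    out = np.empty(N)
    out[0::2] = even
    out[1::2] = odd
    return out

x = np.random.rand(16)
assert np.allclose(dct2_recursive(x), dct2_ref(x))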

Proceedings ArticleDOI
01 Apr 1986
TL;DR: This paper presents a one-chip operator achieving a full 16 × 16 DCT computation at video rate, and suggests that a low-cost implementation of a high-speed DCT operator would lower the price of CODECs and could open new fields of application for the DCT in real-time image processing.
Abstract: The Discrete Cosine Transform [1] is a good but computation-consuming first step of many image coding and compression algorithms for good quality, low rate transmissions. A low-cost implementation of a high-speed DCT operator would lower the price of CODECs and could open new fields of application for the DCT in real-time image processing. This paper presents a one-chip operator achieving a full 16 × 16 DCT computation at video rate. Algorithmic, architectural and implementation choices, combined with a careful optimization of the layout, have made it possible to design a reasonable size chip exhibiting such high performance.

DOI
01 Jun 1986
TL;DR: Two new transforms which can be used as substitutes for the Walsh transform are generated using the theory of dyadic symmetry, and have an efficiency, defined in terms of their ability to decorrelate signal data, which lies between that of the Walsh transform and that of the discrete cosine transform.
Abstract: Two new transforms which can be used as substitutes for the Walsh transform are generated using the theory of dyadic symmetry. The new transforms have virtually the same complexity and computational requirements as the Walsh transform, employing additions, subtractions and binary shifts only, but have an efficiency, defined in terms of their ability to decorrelate signal data, which lies between that of the Walsh transform and that of the discrete cosine transform.
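To make the efficiency notion concrete, one common way to compare transforms by decorrelating ability is to apply each orthonormal transform to the covariance of a first-order Markov (AR(1)) source and see how much of the absolute covariance mass leaves the off-diagonal entries. The measure below and the correlation coefficient 0.95 are illustrative choices, not necessarily the paper's exact definition, and the Walsh-Hadamard matrix stands in for the Walsh transform (row ordering does not affect the measure).

import numpy as np
from scipy.linalg import hadamard
from scipy.fft import dct

def ar1_covariance(N, rho=0.95):
    i = np.arange(N)
    return rho ** np.abs(i[:, None] - i[None, :])

def decorrelation_efficiency(T, R):
    # Fraction of off-diagonal absolute covariance removed by transform T.
    S = T @ R @ T.T
    off = lambda M: np.sum(np.abs(M)) - np.trace(np.abs(M))
    return 1.0 - off(S) / off(R)

N = 8
R = ar1_covariance(N)
W = hadamard(N) / np.sqrt(N)                     # orthonormal Walsh-Hadamard matrix
C = dct(np.eye(N), norm='ortho', axis=0)         # orthonormal DCT matrix
print('WHT:', decorrelation_efficiency(W, R))
print('DCT:', decorrelation_efficiency(C, R))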

Patent
05 May 1986
TL;DR: In this article, the authors present a device for computing monodimensional cosine transforms by blocks of 16 values, which consists of three elementary computing devices for carrying out operations of the addition or subtraction type, two elementary devices for carrying out operations of the multiplication and accumulation type, a coupling device, and control means.
Abstract: The invention provides cosine transform computing devices and image decoding devices and coding devices comprising such computing devices. One embodiment of a device for calculating monodimensional cosine transforms by blocks of 16 values comprises: three elementary computing devices for carrying out operations of the addition or subtraction type; two elementary computing devices for carrying out operations of the multiplication and accumulation type; a coupling device; and control means. The number of elementary computing devices is equal to the minimum number required for carrying out the transform calculations at the timing imposed by the arrival of the values to be transformed. Each elementary computing device is reused several times for the calculation of each transform. The coupling device is controlled by the control means for connecting the elementary computing devices in series in an order which varies at each step of the succession of calculations. The coupling device comprises three fixed delay devices for delaying the transmission of certain digital values.

01 Jan 1986
TL;DR: Fast algorithms for computation of the discrete cosine transform (DCT) are evaluated, with implementation via the fast Fourier transform and by the direct method both considered.
Abstract: Fast algorithms for computation of the discrete cosine transform (DCT) are evaluated. Implementation via the fast Fourier transform and also by the direct method are considered. DCT algorithms for arbitrary sequence lengths are also included.
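One standard FFT route, computing a length-N DCT from a single length-N FFT of a reordered sequence, can be sketched as follows; it is a well-known construction offered to illustrate the "via FFT" approach, not necessarily the specific algorithms evaluated in the report, and the unnormalized DCT-II definition is an assumption.

import numpy as np

def dct_via_fft(x):
    # Unnormalized DCT-II, X[k] = sum_n x[n] * cos(pi*(2n+1)*k/(2N)), from one
    # length-N FFT: take even-index samples followed by reversed odd-index
    # samples, FFT, then rotate by exp(-j*pi*k/(2N)) and keep the real part.
    x = np.asarray(x, dtype=float)
    N = len(x)
    v = np.concatenate([x[0::2], x[1::2][::-1]])
    V = np.fft.fft(v)
    k = np.arange(N)
    return np.real(np.exp(-1j * np.pi * k / (2 * N)) * V)

# Direct-method check against the defining sum.
x = np.random.rand(32)
n = np.arange(32)
direct = np.array([np.sum(x * np.cos(np.pi * (2 * n + 1) * k / 64)) for k in range(32)])
assert np.allclose(dct_via_fft(x), direct)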

Proceedings ArticleDOI
01 Apr 1986
TL;DR: An efficient discrete cosine transform image coding system using gain/shape vector quantizers (DCT-G/S VQ) is presented, and its performance is compared to that of previously reported discrete cosine transform coding systems using Max-type scalar quantizers.
Abstract: An efficient discrete cosine transform image coding system using the gain/shape vector quantizers (DCT-G/S VQ) is presented. In the coding system, AC transform coefficients in a subblock are partitioned into several bands according to Schaming's method, and the normalized AC transform coefficients of each band are quantized with the gain/shape vector quantizer designed on a spherically symmetric probability model. In addition, an adaptive DCT-G/S VQ (A-DCT-G/S VQ) is presented by incorporating a modification of the recursive quantization technique in the DCT-G/S VQ. The coding systems are simulated on color images, and their performance is compared to that of previously reported discrete cosine transform coding systems using the Max-type scalar quantizers.

Proceedings ArticleDOI
10 Dec 1986
TL;DR: A recursive algorithm for DCT whose structure allows the generation of the next higher-order DCT from two identical lower order DCT's, which requires fewer multipliers and adders than other DCT algorithms.
Abstract: The Discrete Cosine Transform (DCT) has found wide applications in various fields, including image data compression, because it operates like the Karhunen-Loeve Transform for stationary random data. This paper presents a recursive algorithm for DCT whose structure allows the generation of the next higher-order DCT from two identical lower order DCT's. As a result, the method for implementing this recursive DCT requires fewer multipliers and adders than other DCT algorithms.

Journal ArticleDOI
M. Guglielmo
TL;DR: The paper analyzes the effect of finite-length arithmetic in the calculation of 2-D linear transformations employed in some picture coding algorithms and determines the representation accuracy of the one- and two-dimensional coefficients required to satisfy a preassigned reconstruction error on the image.
Abstract: The paper analyzes the effect of finite-length arithmetic in the calculation of 2-D linear transformations employed in some picture coding algorithms. Since the condition of zero error in general direct and reverse transformations leads to results of little practical importance, an analysis is carried out on the statistical properties of the error in 2-D linear transformations with a given arithmetic length. Then the important case of the discrete cosine transform (DCT) applied to real images is considered in detail. The results of the paper allow a circuit designer to determine the representation accuracy of the one- and two-dimensional coefficients required to satisfy a preassigned reconstruction error on the image.
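A small numerical experiment in the spirit of the analysis: round the DCT basis coefficients to a given number of fractional bits and measure the forward-plus-inverse reconstruction error on an 8x8 block. The rounding model, block size and RMS error measure are simplifying assumptions, not the paper's statistical analysis.

import numpy as np

def dct_matrix(N=8):
    # Orthonormal DCT-II matrix: C[k, n] = sqrt(2/N) * cos(pi*(2n+1)*k/(2N)), first row rescaled.
    n, k = np.meshgrid(np.arange(N), np.arange(N))
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C

def roundtrip_rms_error(block, bits):
    C = dct_matrix(block.shape[0])
    Cq = np.round(C * (1 << bits)) / (1 << bits)   # basis coefficients kept to 'bits' fractional bits
    coeffs = Cq @ block @ Cq.T                     # 2-D forward transform with the quantized basis
    recon = Cq.T @ coeffs @ Cq                     # inverse with the same quantized basis
    return float(np.sqrt(np.mean((block - recon) ** 2)))

block = np.random.rand(8, 8) * 255
for bits in (6, 8, 10, 12):
    print(bits, roundtrip_rms_error(block, bits))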

Journal ArticleDOI
TL;DR: The method is similar in structure to the radix-2 FFT, which makes the programming simple compared with the previous methods.
Abstract: The discrete cosine transform (DCT) is one of the discrete orthogonal transformations for which fast computation algorithms are known. It has the property that the transformed sequence has most of the energy in the lower-order components, and it is utilized widely in feature extraction and high-efficiency coding. The following methods have been known as the fast cosine transform (FCT): (1) computation is performed through the fast Fourier transform (FFT) with a real sequence; (2) computation is performed by decomposing the matrix representing the DCT into sparse factors. In those methods, approximately N log2 N real multiplications and (3/2)N log2 N real additions are required for computation of the DCT for N = 2^v points. The FCT proposed in this paper differs from those methods in the following points. The DCT is represented by a finite series of Chebyshev polynomials. The order of the series is successively halved utilizing the successive factorization of the Chebyshev polynomials, finally arriving at the DCT values. By this method, (1/2)N log2 N real multiplications and (3/2)N log2 N real additions suffice to calculate the DCT for N points: the number of multiplications is halved compared with the previous methods. The method is similar in structure to the radix-2 FFT, which makes the programming simple.

Proceedings ArticleDOI
01 Apr 1986
TL;DR: The Warp implementations of the 2-dimensional Discrete Cosine Transform and singular value decomposition, which are crucial to many real-time signal processing tasks, are outlined.
Abstract: Warp is a programmable systolic array machine designed by CMU and built together with its industrial partners, GE and Honeywell. The first large-scale version of the machine, with an array of 10 linearly connected cells, will become operational in January 1986. Each cell in the array is capable of performing 10 million 32-bit floating-point operations per second (10 MFLOPS). The 10-cell array can achieve a performance of 50 to 100 MFLOPS for a large variety of signal processing operations such as digital filtering, image compression, and spectral decomposition. The machine, augmented by a Boundary Processor, is particularly effective for computationally expensive matrix algorithms such as solution of linear systems, QR-decomposition and singular value decomposition, that are crucial to many real-time signal processing tasks. This paper outlines the Warp implementation of the 2-dimensional Discrete Cosine Transform and singular value decomposition.

Journal ArticleDOI
TL;DR: The chain coding technique, originally developed for digital representation and processing of line drawing data, has been implemented in a transform image coding algorithm with significant performance improvement.
Abstract: The chain coding technique, originally developed for digital representation and processing of line drawing data, has been implemented in a transform image coding algorithm with significant performance improvement. The algorithm is based on the observation that the boundary of the regions of zero coefficients within a transform block can be efficiently represented by sequences of fixed line segments (chains). Preliminary results indicate significant improvements over the basic coder algorithm, in which the consecutive zeros in the transform block were run-length coded. The additional implementation complexity is modest.
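For reference, here is a toy version of the run-length baseline mentioned at the end of the abstract, where consecutive zero coefficients in a scanned transform block are replaced by run lengths; the chain-coding approach instead represents the boundary of the zero region with fixed line segments. The token format is purely illustrative.

def runlength_zeros(coeffs):
    # Replace runs of zeros with ('run', length) tokens; keep nonzero values as ('val', v).
    out, run = [], 0
    for v in coeffs:
        if v == 0:
            run += 1
        else:
            if run:
                out.append(('run', run))
                run = 0
            out.append(('val', v))
    if run:
        out.append(('run', run))
    return out

print(runlength_zeros([35, 0, 0, 0, -2, 1, 0, 0, 0, 0, 0]))
# [('val', 35), ('run', 3), ('val', -2), ('val', 1), ('run', 5)]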

Proceedings ArticleDOI
01 May 1986
TL;DR: A new hybrid coding technique is introduced which is based on the Discrete Cosine Transform, interframe DPCM, conditional replenishment, detection of significant subareas in the transform domain, adaptive quantization, adaptive Huffman coding and postbuffer control.
Abstract: A New Hybrid Coding Technique for Videoconference Applications at 2 Mbit/s. Herbert Holzlwimmer, Walter Tengler, Achim v. Brandt; Zentralbereich Forschung und Technik, Siemens AG, Otto-Hahn-Ring 6, 8000 Munchen 83, Federal Republic of Germany. A new hybrid coding technique is introduced which is based on the Discrete Cosine Transform, interframe DPCM, conditional replenishment, detection of significant subareas in the transform domain, adaptive quantization, adaptive Huffman coding and postbuffer control. This coder concept is the result of a comparison of several coding methods including uniform/nonuniform quantization, constant/variable word length coding and prebuffer/postbuffer control schemes. An important feature of the presented coder is the selection of transform coefficients within each block which are grouped in subareas for adaptive entropy coding. The components of the coder are described and its excellent performance is demonstrated by means of SNR measurements and a typical videoconference sequence. Introduction: Hybrid coding is the combination of an orthogonal transform and differential pulse code modulation (DPCM) to decorrelate signals having more than one dimension. A two-dimensional transform within a frame and a one-dimensional DPCM in the temporal direction is an evident choice for image sequences. For transformation the Discrete Cosine Transform (DCT) is used since it is close to the statistically optimal Karhunen-Loeve transform (KLT). The quantized prediction errors are encoded (fixed or variable length) and fed into an output buffer. Adaptivity in usual hybrid coding systems is achieved by proper feedforward and/or feedback control of the DPCM, quantization and encoding /2/. Nonuniform Max quantizers /6/ along with constant word-length coding were used first /4,5/. These algorithms provide only a few reconstruction levels for the coefficients which should be encoded with few bits, and such quantizers are very sensitive to mismatch. Huffman coding /7/ and uniform quantization, used in transform and hybrid coders /1/, are a way to overcome this sensitivity; here mismatch affects the rate and not the distortion. This technique can be made adaptive by a fixed assignment of several Huffman code tables to the individual transform coefficients and the application of different assignment matrices for different classes of activity /3/.

Proceedings Article
01 Jan 1986
TL;DR: An algorithm is presented for image data compression based upon vector quantization of the two-dimensional discrete cosine transform coefficients, with the ac energies of the transformed blocks used to classify them into eight different ac classes.
Abstract: In this paper, an algorithm is presented for image data compression based upon vector quantization of the two-dimensional discrete cosine transformed coefficients. The ac energies of the transformed blocks are used to classify them into eight different ac classes. The ac coefficients of the transformed blocks of class one are set to zero, while those of classes two through eight are transmitted by seven different code books. The dc coefficients of all eight classes are scalar quantized by an adaptive uniform quantizer. As a result, only 4.5 bits instead of eight bits are required to transmit the dc coefficient with negligible additional degradation. Overall, this algorithm requires approximately 0.75 bits per pixel and gives an average reconstruction error of 7.1.
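A minimal sketch of the classification front end: compute each block's AC energy and split the blocks into eight classes. The 16x16 block size and the octile (equal-population) thresholds are assumptions for illustration; the paper's class boundaries, codebooks and bit assignments are not reproduced.

import numpy as np
from scipy.fft import dctn

def classify_blocks(image, b=16, n_classes=8):
    # AC energy of each transformed block = total energy minus the DC term's energy.
    H, W = image.shape
    blocks, energies = [], []
    for r in range(0, H, b):
        for c in range(0, W, b):
            C = dctn(image[r:r+b, c:c+b], norm='ortho')
            blocks.append(C)
            energies.append(np.sum(C ** 2) - C[0, 0] ** 2)
    energies = np.array(energies)
    edges = np.quantile(energies, np.linspace(0, 1, n_classes + 1)[1:-1])
    classes = np.searchsorted(edges, energies)     # class labels 0 .. n_classes-1
    return blocks, classes

img = np.random.rand(256, 256)
_, classes = classify_blocks(img)
print(np.bincount(classes, minlength=8))           # blocks per ac class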

Proceedings ArticleDOI
12 Jun 1986
TL;DR: Vector Quantization has only recently been applied to image data compression, but shows promise of outperforming more traditional transform coding methods, especially at high compression.
Abstract: This paper addresses the problem of data compression of medical imagery such as X-rays, Computer Tomography, Magnetic Resonance, Nuclear Medicine and Ultrasound. The Discrete Cosine Transform (DCT) has been extensively studied for image data compression, and good compression has been obtained without unduly sacrificing image quality. Vector Quantization has only recently been applied to image data compression, but shows promise of outperforming more traditional transform coding methods, especially at high compression. Vector Quantization is quite well suited for those applications where the images to be processed are very much alike, or can be grouped into a small number of classifications. These and similar studies continue to suffer from the lack of a uniformly agreed upon measure of image quality. This is also exacerbated by the large variety of electronic displays and viewing conditions.


Proceedings ArticleDOI
01 Apr 1986
TL;DR: The algorithm is much less complex than adaptive transform coding (ATC) algorithms, yet at 16-kb/s it produces comparable speech quality, and outperforms traditional RELP coders for both speech and non-speech signals.
Abstract: This paper describes a medium-bit-rate speech compression algorithm for telephone applications. The basic configuration of the adaptive subbands excited transform (ASET) coder can be described as frequency domain residual subbands coding, a hybrid of transform coding and residual excited coding. The algorithm outperforms traditional RELP (residual excited linear predictive) coders for both speech and non-speech signals. The algorithm is much less complex than adaptive transform coding (ATC) algorithms, yet at 16-kb/s it produces comparable speech quality. A multichannel hardware implementation of the algorithm has been developed and is reported in [1].

Patent
10 Apr 1986
TL;DR: In this article, an approach for developing fast sine and cosine functions via a division of the angle whose trigonometric function is to be developed into a plurality of component angles is presented.
Abstract: Apparatus for developing fast sine and cosine functions via a division of the angle whose trigonometric function is to be developed into a plurality of component angles. Sine and cosine values of the component angles are obtained from look-up tables (10, 11, 12, 13, 14, 15) and appropriately combined (30). High computational speed is obtained by creating the plurality of component angles based on the radix used in the digital representation of the angle and, furthermore, by creating the component angles so that the look-up tables are approximately equal in size.
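The angle-splitting idea can be illustrated in a few lines of Python: a 16-bit angle index is split into an 8-bit coarse part and an 8-bit fine part, each part addresses a small table, and the results are combined with the sum-angle identities. The bit widths, table layout and binary angle representation are assumptions for illustration, not the patent's apparatus.

import numpy as np

BITS, HI_BITS = 16, 8
LO_BITS = BITS - HI_BITS
SCALE = 2 * np.pi / (1 << BITS)              # the angle is an integer fraction of a full turn

hi = np.arange(1 << HI_BITS)
lo = np.arange(1 << LO_BITS)
SIN_HI, COS_HI = np.sin(hi * (1 << LO_BITS) * SCALE), np.cos(hi * (1 << LO_BITS) * SCALE)
SIN_LO, COS_LO = np.sin(lo * SCALE), np.cos(lo * SCALE)

def fast_sin_cos(angle_index):
    # sin/cos of angle_index * 2*pi / 2**BITS from two roughly equal-sized tables
    # and the angle-addition formulas; no runtime trig, no single large table.
    h, l = angle_index >> LO_BITS, angle_index & ((1 << LO_BITS) - 1)
    s = SIN_HI[h] * COS_LO[l] + COS_HI[h] * SIN_LO[l]
    c = COS_HI[h] * COS_LO[l] - SIN_HI[h] * SIN_LO[l]
    return s, c

idx = 12345
assert np.allclose(fast_sin_cos(idx), (np.sin(idx * SCALE), np.cos(idx * SCALE)))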

Proceedings ArticleDOI
01 Apr 1986
TL;DR: A new algorithm for the inverse Cosine transform (IDCT) is developed which falls within the unified architecture if one allows the luxury of double-length processing.
Abstract: A common architecture, based on the Cooley-Tukey [1] algorithm, is developed for the Fourier, Hadamard, and forward and inverse Cosine transforms. The theory to implement the first three transforms in this architecture is well known. However, the existing algorithms for the inverse Cosine transform (IDCT) would disqualify it from the unified architecture, and this led us to develop a new IDCT algorithm which falls within the unified architecture if we allow ourselves the luxury of double-length processing. Details of the unified architecture and the new algorithm for the IDCT will be given in this paper.