
Showing papers in "Signal Processing-image Communication in 1990"


Journal ArticleDOI
TL;DR: An object-oriented analysis-synthesis coder is presented which encodes arbitrarily shaped objects instead of rectangular blocks; experimental results on the efficient coding of motion and shape parameters are given.
Abstract: An object-oriented analysis-synthesis coder is presented which encodes arbitrarily shaped objects instead of rectangular blocks. The objects are described by three parameter sets defining their motion, shape and colour. Throughout this contribution, the colour parameters denote the luminance and chrominance values of the object surface. The parameter sets of each object are obtained by image analysis based on source models of moving 2D-objects and coded by an object-dependent parameter coding. Using the coded parameter sets an image can be reconstructed by model-based image synthesis. In order to cut down the generated bit-rate of the parameter coding, the colour updating of an object is suppressed if the modelling of the object by the source model is sufficiently exact, i.e., if only a relatively small colour update information would be needed for an errorless image synthesis. Omitting colour update information, small position errors of objects denoted as geometrical distortions are allowed for image synthesis instead of quantization error distortions. Tolerating geometrical distortions, the image area to be updated by colour coding can be decreased to 4% of the image size without introducing annoying distortions. As motion and shape parameters can efficiently be coded, about 1 bit per pel remains for colour updating in a 64 kbit/s coder compared to about 0.1 bit per pel in the standard reference coder (RM8) of the CCITT. Experimental results concerning the efficient coding of motion and shape parameters are given and discussed. The coding of the colour information will be dealt with in further research.
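The bit-budget claim above (about 1 bit per pel for colour updating at 64 kbit/s when only 4% of the image is updated) can be checked with a back-of-the-envelope sketch. The function name and the specific frame size and frame rate below are illustrative assumptions, not values from the paper:

```python
def bits_per_updated_pel(channel_bps, frame_rate, pels_per_frame,
                         update_fraction, side_info_bps=0):
    """Rough budget: divide the per-frame video bits over the fraction
    of pels that actually receive a colour update.  Frame size and
    frame rate are assumed for illustration."""
    video_bits_per_frame = (channel_bps - side_info_bps) / frame_rate
    return video_bits_per_frame / (pels_per_frame * update_fraction)

# Assuming a CIF-sized frame (352 x 288 pels) at 10 Hz, 64 kbit/s,
# and 4% of the image updated per frame:
rate = bits_per_updated_pel(64000, 10, 352 * 288, 0.04)
```

Under these assumptions the result is on the order of 1.5 bit per updated pel, consistent with the "about 1 bit per pel" figure quoted above.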

161 citations


Journal ArticleDOI
TL;DR: A four-parameter motion model, capable of describing rotation, change of scale and translation simultaneously, is proposed, which is a differential one applied in conjunction with a multi-resolution iteration scheme to enhance its measuring range and efficiency.
Abstract: The problem of 2-D motion estimation with application to image sequence coding is considered in this paper. A four-parameter motion model, capable of describing rotation, change of scale and translation simultaneously, is proposed. The four motion parameters are estimated directly from an image sequence without the need for establishing the correspondence between pixels or features. The method is a differential one applied in conjunction with a multi-resolution iteration scheme to enhance its measuring range and efficiency. Experimental results demonstrate that the motion parameters can be estimated accurately by the method with minimal computational effort.
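A four-parameter motion model of the kind described, covering rotation, change of scale and translation simultaneously, can be sketched as a similarity transform. The exact parameterization used in the paper may differ; this is one common form covering the same motions:

```python
import math

def map_point(x, y, s, theta, tx, ty):
    """Map an image coordinate under a four-parameter motion model:
    scaling by s, rotation by theta, translation by (tx, ty).
    The four parameters are (s, theta, tx, ty); equivalently
    (a, b, tx, ty) with a = s*cos(theta), b = s*sin(theta)."""
    a = s * math.cos(theta)
    b = s * math.sin(theta)
    return (a * x - b * y + tx, b * x + a * y + ty)
```

For example, scaling by 2 and rotating by 90 degrees maps the point (1, 0) to (0, 2); a differential estimator recovers (s, theta, tx, ty) directly from image gradients rather than from point correspondences.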

93 citations


Journal ArticleDOI
TL;DR: This work reviews perfect reconstruction filter banks for arbitrary sampling patterns, concentrates on the special case of quincunx subsampling, derives filter banks to go from progressive to interlaced scanning as well as from interlaced to progressive, and applies the decomposition to a sequence, indicating bitrates.

Abstract: Subband decomposition of HDTV signals is important both for representation purposes (to create compatible subchannels) and for coding (several proposed compression schemes include some subband division). We first review perfect reconstruction filter banks in multiple dimensions in the context of arbitrary sampling patterns. Then we concentrate on the special case of quincunx subsampling and derive filter banks to go from progressive to interlaced scanning (with a highpass which contains deinterlacing information) as well as from interlaced to progressive. We apply this decomposition to a sequence and indicate bitrates.

84 citations


Journal ArticleDOI
TL;DR: The algorithm proposed in the paper is based on the well-known technique of motion-compensated prediction and DCT-coding of the prediction error, considerably enhanced with motion-Compensated interpolation of skipped video frames and selective coding of the interpolation errors.
Abstract: This paper discusses the compression of digital video signals at bit-rates around 1 Mbps for interactive playback applications. The coding algorithm is required not only to provide good-quality reconstruction of complex material but also to facilitate interactivity with the bit-stream at the decoder. The algorithm proposed in the paper is based on the well-known technique of motion-compensated prediction and DCT-coding of the prediction error. This basic approach is considerably enhanced with motion-compensated interpolation of skipped video frames and selective coding of the interpolation errors. The interactivity requirements are met by partitioning the video sequence into segments each comprised of a small number of frames. Different ways of encoding a segment are examined. An arrangement is selected that has one intra-coded frame in the center of the segment and a symmetrical pattern of predicted and interpolated frames on the two sides of the intra-coded frame. Segments of length 9 and 15 frames are evaluated. While the shorter segment leads to faster interactivity and simpler decoder implementation, the associated picture quality is much inferior to that obtained with the longer segment. Finally, a rough design of the decoder, suitable for VLSI implementation, is outlined.
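The segment arrangement above (one intra-coded frame in the center, predicted and interpolated frames placed symmetrically around it) can be illustrated with a small pattern generator. The prediction stride below is a hypothetical choice; the paper does not state the exact spacing:

```python
def segment_pattern(length, p_stride=3):
    """Hypothetical illustration of the segment layout: one intra-coded
    frame ('I') at the center, predicted frames ('P') every p_stride
    positions away from it, and motion-compensated interpolated frames
    ('B') in between.  The stride value is assumed, not from the paper."""
    assert length % 2 == 1, "a center frame requires odd segment length"
    center = length // 2
    return "".join(
        "I" if i == center
        else "P" if abs(i - center) % p_stride == 0
        else "B"
        for i in range(length)
    )
```

With a segment of 9 frames this yields the symmetric string "BPBBIBBPB"; the symmetry around the central I-frame is what enables both forward and reverse interactive playback.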

48 citations


Journal ArticleDOI
TL;DR: A motion compensated hybrid DCT/DPCM video compression scheme, proposed for the storage of compressed digital video, in which adaptive binary arithmetic coding gives better compression efficiency than Huffman coding and thus improves image quality when a fixed bandwidth channel is available.
Abstract: We describe a motion compensated hybrid DCT/DPCM video compression scheme, which has been proposed for the storage of compressed digital video. The scheme, an extension of a current CCITT/ISO proposal for compressing still-frame images, differs from other hybrid DCT proposals in several ways. (1) Reduction of DCT block-artifacts by implementing inverse AC-quantizers which are of finer granularity than the corresponding encoder quantizers. This process, which we label ‘AC-Correction’, is applied to intraframe coded frames. (2) Entropy coding by means of an adaptive binary arithmetic coder. Arithmetic coding results in better compression efficiency than Huffman coding, thus improving image quality when a fixed bandwidth channel is available. (3) Non-linear loop filtering that preserves edge definition. (4) Tight tolerances in the rate control of groups of frames.
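One way to read the 'AC-Correction' idea in point (1), sketched under the assumption of a uniform encoder quantizer and a decoder reconstruction grid finer than the encoder step. The abstract does not give the actual reconstruction rule, so the bias used here is purely illustrative:

```python
def quantize(c, q):
    """Encoder-side uniform AC quantizer with step q; returns an index.
    Note: Python's round() uses banker's rounding at exact halves."""
    return round(c / q)

def dequantize_ac_corrected(index, q, bias=0.25):
    """Inverse quantizer of finer granularity than the encoder step q:
    the reconstruction level is pulled a fraction of a step toward zero,
    where AC coefficients cluster.  The bias value is an assumption for
    illustration; the paper's rule is not specified in the abstract."""
    if index == 0:
        return 0.0
    sign = 1 if index > 0 else -1
    return (abs(index) - bias) * q * sign
```

With step 4, a coefficient of 9.0 quantizes to index 2 and reconstructs at 7.0 rather than the bin center 8.0, which tends to reduce block artifacts in intraframe coded frames.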

47 citations


Journal ArticleDOI
TL;DR: It is shown that changes in object surface orientation give rise to combined amplitude and frequency modulation of the surface texture when projected onto the image plane.
Abstract: An analysis is given of texture storage and reprojection as used in model-based image coding. It is shown that changes in object surface orientation give rise to combined amplitude and frequency modulation of the surface texture when projected onto the image plane. When stored texture is reprojected onto a model, inaccuracies in the model shape cause texture displacements under rotational motion. The use of these results to improve model-based prediction in practical systems is discussed.

28 citations


Journal ArticleDOI
TL;DR: The starting point for the proposed algorithm has been the CCITT H.261 Reference Model, the future standard for synchronous transmissions at p × 64 kbit/s (p = 1, …, 30).
Abstract: This paper describes the work carried out in CSELT with the aim of providing a sensible solution to the problem of recording moving images on digital storage media. The starting point for the proposed algorithm has been the CCITT H.261 Reference Model, the future standard for synchronous transmissions at p × 64 kbit/s (p = 1, …, 30). The algorithm provides all the facilities required for recording purposes.

26 citations


Journal ArticleDOI
TL;DR: The coding system described in Part 1 allows compatible coding between various image formats ranging from VT to HDP and including interlaced formats, but the hybrid scheme must be designed carefully in order to keep compatibility capabilities and avoid any drift at the compatible decoder side.
Abstract: The coding system described in Part 1 allows compatible coding between various image formats ranging from VT to HDP and including interlaced formats. Interlaced input signals are converted into their ‘progressive equivalent’ before splitting. Compatible coding is based on the so-called Format Independent Splitting (FIS) approach: the same splitting is used whatever the input format except for interlaced inputs, which do not undergo the first horizontal splitting. Temporal correlation must be taken into account to obtain good quality images at bit rates below 1 bit per pel. Nevertheless, the hybrid scheme must be designed carefully in order to keep compatibility capabilities and avoid any drift at the compatible decoder side.

23 citations


Journal ArticleDOI
TL;DR: A video codec operating at a bit rate supported by digital storage media, focus being on 1.15 Mbits/s for net video, based on the hybrid DPCM/DCT scheme of CCITT Reference Model 8 which had been optimized within CCITT Study Group XV for telecommunications applications.
Abstract: Digital storage media like CD, CD-ROM, MOD, DAT, etc. open new fields of application, e.g. for storage and retrieval of moving video information for multimedia services. This paper describes a video codec operating at a bit rate supported by these media, focus being on 1.15 Mbits/s for net video. It is based on the hybrid DPCM/DCT scheme of CCITT Reference Model 8 which had been optimized within CCITT Study Group XV for telecommunications applications at bit rates ranging from 64 kbits/s to 2 Mbits/s. New building blocks have been added to implement trick modes and to improve subjective picture quality. Periodic coding of frames in an intraframe mode provides random access and, thus, fast search and reverse playback features. Adapted coder control, zoom and pan compensation, half pel accuracy of local motion vectors, weighting of transform coefficients, and noise reduction in the coding loop are the main components improving subjective picture quality for the wider range of TV applications.

22 citations


Journal ArticleDOI
TL;DR: A directional decomposition based sequence coding technique is presented, in which spatial lowpass and highpass components are analyzed and coded separately, and a simple law for sharing the available bits between these components is stated and analytically proved.
Abstract: Second generation image coding techniques, which use information about the human visual system to reach high compression ratios, have proven very successful when applied to single images. These methods can also be applied to image sequences. A directional decomposition based sequence coding technique is presented, in which spatial lowpass and highpass components are analyzed and coded separately. A simple law for sharing the available bits between these components is stated and analytically proved by using a minimum cost/resolution optimality criterion. The detection of directional elements is carried out by using both linear and nonlinear (median) filtering. The coding is based on near optimal estimators which retain only the innovation part of information, and is well suited for differential pulse code modulation. The results of applying this method to a typical sequence are shown. The estimated compression ratio is approximately 320 : 1 (0.025 bits per pixel), allowing a transmission rate of about 41 kbit/second. The resulting image quality is reasonably good.

19 citations


Journal ArticleDOI
TL;DR: A multiresolution coding system based on a ‘full subband’ approach, in which each resulting band is independently PCM encoded; to obtain a near-optimal coding scheme, the subband filters must belong to a particular class of ‘admissible’ filters.
Abstract: This paper describes a multiresolution coding system based on a ‘full subband’ approach. The input image is first split according to a separable hierarchical structure and each resulting band is then independently PCM encoded. In order to obtain a near-optimal coding scheme, subband filters must belong to a particular class of ‘admissible’ filters and the decomposition tree must be chosen carefully.

Journal ArticleDOI
TL;DR: An adaptive sub-band DCT coding for high-quality HDTV signal transmission and a coder configuration that reduces noise accumulation to less than 1 dB after three tandem links is proposed.
Abstract: This paper presents an adaptive sub-band DCT coding for high-quality HDTV signal transmission. In the coder, the first stage quadrature mirror filters (QMFs) decompose the input signal into two bands in the horizontal direction, while the second stage filters decompose the two bands into four bands in the vertical direction. An adaptive DCT is adopted for horizontal-low and vertical-low (LL) signal coding. To maximize bit-rate reduction efficiency, this paper proposes adaptive selection of the input signal mode to the DCT coding from the intra-field signal as well as the inter-field and motion compensated inter-frame signals. One-dimensional DPCM coding is applied to the signals on the same horizontal line of the LH band. The other-band signals are coded by PCM having a dead zone. To further reduce information bit-rates, variable length coding and run-length coding are applied to the quantized signal in each band. Computer simulation results are presented in terms of bit-per-pel and SNR of the reconstructed picture. This paper also presents codec characteristics when it is connected in tandem. It focuses mainly on causes of quantization-noise accumulation in digital tandem connections, which are essential to coding algorithms. A coder configuration that reduces noise accumulation to less than 1 dB after three tandem links is proposed.
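The two-band QMF split described for the first coder stage can be made concrete with the simplest QMF pair (Haar). The codec's actual filters are longer; this stand-in only demonstrates the band-splitting and perfect-reconstruction mechanics:

```python
import math

def haar_analysis(x):
    """Two-band split with the Haar QMF pair, downsampled by 2:
    low band carries sums of sample pairs, high band their differences."""
    s = 1.0 / math.sqrt(2.0)
    low = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    high = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return low, high

def haar_synthesis(low, high):
    """Inverse of haar_analysis; reconstructs the input exactly."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for l, h in zip(low, high):
        x.append(s * (l + h))
        x.append(s * (l - h))
    return x
```

Applying the split once horizontally and then once vertically to each band yields the two-stage, four-band decomposition the coder uses before the adaptive DCT of the LL band.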

Journal ArticleDOI
TL;DR: A new image coding system is proposed in which sampled images are decomposed into subbands in a one-step procedure, Ideal filters are used and an error-free reconstruction is achieved.
Abstract: It is shown first that quadrature filters (QFs) and wavelet-generated filters designed for an exact reconstruction of an infinite signal, in a subband coding system, induce a reconstruction error at the picture boundaries. This error is evaluated. Then, a new image coding system is proposed in which sampled images are decomposed into subbands in a one-step procedure. Ideal filters are used and an error-free reconstruction is achieved. Filters are implemented with FFT. A few overhead bits are necessary. The computer load is evaluated and compared with the pyramidal subband coding with FIR QMFs. Illustrative examples allow a comparison of the proposed method with subband coding using QMFs and show clearly the PPSNR gain in the reconstructed images.

Journal ArticleDOI
TL;DR: Coding formats were evaluated carefully in a specific assessment environment; performance was judged not only for normal playback but also for fast forward/reverse and still-frame playback, and the overall evaluation was scored by the picture quality of each playback mode and by VLSI implementability.

Abstract: The activity to establish an international standard for moving picture coding is under way in JTC1/SC2/WG8 of ISO/IEC. The eighth MPEG meeting was held in Kurihama, Japan. 15 NTSC and 3 PAL formats were submitted to this meeting and evaluated by subjective assessment test procedures. The coded signals were recorded to digital component D-1 VTR and edited according to randomized test procedures. The coding formats were evaluated carefully in a specific assessment environment; performance was judged not only for normal playback but also for fast forward/reverse and still-frame playback. The overall evaluation was scored by the picture quality of each playback mode and by VLSI implementability. This article describes these test procedures.

Journal ArticleDOI
TL;DR: A coding system using a hybrid coding method with motion compensated interframe DPCM, the Discrete Cosine Transform and frame interpolation was examined as a moving picture coding system for digital storage media such as CD-ROM; local distortion is reduced by detecting the distortion and coding it with the encoder.
Abstract: A coding system using a hybrid coding method with motion compensated interframe DPCM, the Discrete Cosine Transform and frame interpolation was examined as a moving picture coding system for digital storage media such as CD-ROM. The encoder halves the frame frequency of the input picture and performs hybrid coding, and the decoder performs frame interpolation to give a playback picture. It was verified that this technique yields an S/N about 1.5 dB better than direct coding in some frames. In frame interpolation, overwriting with a changed motion compensated block size solves the problem of unoverwritten areas. Local distortion, which is a problem in frame interpolation, is reduced by detecting the distortion and coding it with the encoder; the information needed for this is about 10% of the overall quantity. In intraframe coding, control of the quantization step size by activity has improved the picture quality in parts with small luminance amplitude.

Journal ArticleDOI
TL;DR: A solution is given for the ‘dirty window’ effect by setting blocks to zero that were assigned to be replenished but received no bits, while a bit allocation algorithm divides the bits among the blocks assigned for replenishment.
Abstract: In this paper a low bit rate subband coding scheme for image sequences is described. Typically, the scheme is based on temporal DPCM in combination with an intraframe subband coder. In contrast to previous work, however, the subbands are divided into blocks onto which conditional replenishment is applied, while a bit allocation algorithm divides the bits among the blocks assigned for replenishment. A solution is given for the ‘dirty window’ effect by setting blocks to zero that were assigned to be replenished but received no bits. The effect of motion compensation and the extension to color images are discussed as well. Finally, several image sequence coding results are given for a bit rate of 300 kbit/s.
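The replenishment policy and the 'dirty window' fix described above can be sketched as a small selection routine. The threshold and per-block bit cost below are illustrative placeholders, not the paper's values:

```python
def replenish(block_errors, budget, bits_per_block=8, threshold=1.0):
    """Sketch of conditional replenishment with a bit allocator:
    blocks whose error exceeds a threshold are marked for replenishment;
    a greedy allocator funds the worst blocks first until the budget is
    spent; any marked block that receives no bits is zeroed rather than
    left stale (the 'dirty window' fix).  Returns (funded, zeroed)
    block index lists.  All parameter values are assumptions."""
    marked = [i for i, e in enumerate(block_errors) if e > threshold]
    # Spend bits on the worst blocks first.
    funded = sorted(marked, key=lambda i: -block_errors[i])[: budget // bits_per_block]
    zeroed = [i for i in marked if i not in funded]
    return sorted(funded), sorted(zeroed)
```

Zeroing the unfunded blocks removes stale subband content instead of letting it linger across frames, which is what produces the 'dirty window' appearance.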

Journal ArticleDOI
TL;DR: A low bit-rate video codec based on motion vector replenishment that has a comparable compression efficiency with that of the frame dropping method, but does not introduce any picture ‘jerkiness’.
Abstract: A low bit-rate video codec based on motion vector replenishment is described. Motion vectors are used to update pictures at full frame rate. In addition, part of each frame is conditionally updated with a strip of interframe video data. The video data fill the remaining channel capacity not used for motion vectors. Thus under most conditions, each frame is fully updated by motion vectors and partially with interframe video data. This method has a comparable compression efficiency with that of the frame dropping method, but does not introduce any picture ‘jerkiness’. Finally the application of the proposed method to packet video networks is examined.

Journal ArticleDOI
TL;DR: An efficient coding algorithm developed for the bit rate reduction of full motion video signals that combines a motion adaptive subsampling process with a Discrete Cosine Transform based digital coding method in order to achieve a compression factor greater than twenty.
Abstract: This paper describes an efficient coding algorithm developed for the bit rate reduction of full motion video signals. The algorithm combines a motion adaptive subsampling process with a Discrete Cosine Transform (DCT) based digital coding method in order to achieve a compression factor greater than twenty. This algorithm is then adapted to the Digital Storage Media (DSM) application. Thus images with Common Intermediate Format (CIF) can be coded at a bit rate lower than 2 Mbit/s with a good quality. This algorithm has been submitted to the Moving Picture Expert Group (MPEG) of ISO in October 1989 for assessment by Philips Consumer Electronics, Laboratoires d'Electronique Philips (LEP) and Philips Research Laboratories (PRL, Redhill, United Kingdom), partners in the European project COMIS.

Journal ArticleDOI
TL;DR: The Bellcore-proposed ISO-MPEG decoder for 1–1.5 Mbps rate applications is designed to provide forward normal/fast playback, reverse normal/fast playback and random access, among other features.
Abstract: In this paper, we designed and evaluated the hardware complexity of a motion video decoder, the Bellcore proposed ISO-MPEG decoder for 1–1.5 Mbps rate applications. It is designed to perform the following features: (1) Forward normal/fast playback; (2) Reverse normal/fast playback; (3) Transcoding of CCITT RM8; (4) Transcoding of JPEG Baseline System; (5) Still image build up with high resolution; (6) Random access. The decoder is partitioned into functional modules such that the hardware components can be shared for performing different features. Besides using a commercially available IDCT chip, we designed a JPEG VLD module, a loop filter module, an inverse quantizer module, an address generator module and a format converter module. All the modules are RM8 and/or JPEG compatible. We also described the organization of frame memories and procedure for integrating all the modules to perform different tasks.

Journal ArticleDOI
TL;DR: The techniques of motion detection, interframe linear block prediction and vector quantization have been incorporated in this paper, in a scheme for encoding monochrome image sequences for videoconferencing application.
Abstract: The techniques of motion detection, interframe linear block prediction and vector quantization have been incorporated in this paper, in a scheme for encoding monochrome image sequences for videoconferencing application. Data transmission rate reduction is accomplished by identifying and processing only those regions that exhibit noticeable changes between successive frames, by estimating the magnitude of the change through linear block or vector prediction and quantizing the residual vectors through a vector quantizer. The motion detector uses a modified block matching algorithm to detect the moving blocks. Perceptually-based edge detectors are used to design vector quantizer (VQ) codebooks for different classes of image blocks to achieve better visual quality. Encoding rates under 60 kbps are achieved with acceptable visual quality at nominal computational complexity.

Journal ArticleDOI
TL;DR: Two-stage coding system for reducing the usually high data rate of an HDTV signal to less than 140 Mbit/s is described, using three-dimensional subsampling to reduce the number of samples of the digitized interlaced source signal.
Abstract: A two-stage coding system for reducing the usually high data rate of an HDTV signal to less than 140 Mbit/s is described. Three-dimensional subsampling is used to reduce the number of samples of the digitized interlaced source signal. A motion-adaptive filter structure adjusts the three-dimensional spectrum of the television signal to a reduced region which is supported by the quincunx sampling pattern. The visibility of the transition between different spatial resolutions is decreased by 3D-filtering of slowly moving areas. As a result of subsampling, the sampling frequency is halved. Transform coding of the remaining samples is then performed. The quincunx sampling structure is rotated within blocks of 8 by 8 to result in a rectangular block structure. Different possibilities for transforming the quincunx sampled field have been investigated and compared in terms of energy concentration, entropy calculation and coding efficiency. A data reduction factor between four and five is sought for the transform coefficients. A modified threshold coding algorithm is used to code the coefficients. Sampling, normalization and quantization are adapted and controlled by the buffer status. The buffer equalizes the variable bit-rate at the output of the variable length coder. All steps of signal processing have to be adapted to preserve the high quality of the original signal.
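The quincunx sampling pattern referred to above is simply a checkerboard selection of samples, which is why it halves the sampling frequency. A minimal sketch:

```python
def quincunx_mask(h, w, parity=0):
    """Boolean mask of a quincunx (checkerboard) sampling pattern:
    a sample at (x, y) is kept when (x + y) has the given parity.
    Exactly half the samples of each field are retained."""
    return [[(x + y) % 2 == parity for x in range(w)] for y in range(h)]
```

Rotating this diagonal lattice by 45 degrees inside 8 by 8 blocks is what turns it back into the rectangular block structure needed by a conventional transform coder.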

Journal ArticleDOI
TL;DR: This work investigates the question how spatio-temporal properties of human vision can be exploited for data reduction by means of pyramid coding and introduces motion-compensated interpolation in moving areas with correlated motion.
Abstract: In order to transmit high definition television signals economically, some kind of data compression has to be utilized. We investigate the question of how spatio-temporal properties of human vision can be exploited for data reduction. By means of pyramid coding we implement coding schemes which transmit spatial detail information with reduced temporal resolution. In moving areas with correlated motion we introduce motion-compensated interpolation. In the case of uncorrelated motion, where motion estimation often fails and the eye cannot follow the movement, high spatial frequencies are suppressed. The different systems are rated by means of subjective tests. Furthermore, motion-compensated interpolation of the highpass signals of a subband codec using quadrature mirror filters is analysed.

Journal ArticleDOI
TL;DR: The VTR can record the full bandwidth of 1125/60 HDTV signals, namely 30 MHz luminance, and 15 MHz for each of the two color difference signals, and has a word error rate of less than 10 −4 .
Abstract: Some of the essential features of the record/playback system of a new 1.2 Gbit/s digital VTR are described. The VTR can record the full bandwidth of 1125/60 HDTV signals, namely 30 MHz luminance, and 15 MHz for each of the two color difference signals. As for low error rate recording techniques, a cross-shape multilayered amorphous head, a 5 stage transversal filter, a playback head with a narrow trackwidth and a special PLL to cope with stunt motion are employed. At a high data rate of 148.5 Mbit/s and a high linear density of 0.345 μm per bit, the recorder confirms reliable operation with a word error rate of less than 10⁻⁴.

Journal ArticleDOI
TL;DR: A motion compensated predictor is used to obtain, from the present image and the previously coded one, a new image which represents the motion compensated luminance differences, and this new image is vector quantized.
Abstract: In this paper the scheme of a hybrid coder is presented. It uses a motion compensated predictor to obtain, from the present image and the previously coded one, a new image which represents the motion compensated luminance differences. Then, this new image is vector quantized. Each vector contains 16 elements (obtained from a square block of 4 × 4 pels). Four different codebooks are used for the quantization, taking into account the local image detail. To reconstruct the images at the receiver side, the estimated motion field must be transmitted; this is done using suitable compression techniques. At about 0.5 bpp, the coded images, obtained by proper simulations, have good quality, close to that obtained with a hybrid DCT coder.
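The core vector quantization step above, mapping a 16-element block to the index of its nearest codeword, can be sketched as follows. The coder's selection among four codebooks by local detail is omitted here; one codebook is used for illustration:

```python
def vq_encode(vector, codebook):
    """Nearest-neighbour vector quantization: return the index of the
    codeword closest to the input vector in squared Euclidean distance.
    For the coder above, vectors are 16 elements (a 4x4 pel block of
    motion compensated luminance differences)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(vector, codebook[i]))
```

Only the index is transmitted; the decoder looks the codeword up in the same codebook, so the per-block rate is log2(codebook size) bits plus motion side information.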

Journal ArticleDOI
Joachim Speidel1
TL;DR: The crucial point is that the two input signals of the motion estimator are roughly quantized before searching for the optimum displacement vector resulting in a very simple arithmetic-logic-unit and smaller memories.
Abstract: Motion estimation plays an important part particularly for source encoding with motion compensating prediction of moving pictures. Due to the large amount of hardware, application is limited to systems requiring high coding efficiency. In this paper a method is proposed which can reduce complexity of the motion estimator by about a factor six compared to conventional solutions. The crucial point is that the two input signals of the motion estimator are roughly quantized before searching for the optimum displacement vector resulting in a very simple arithmetic-logic-unit and smaller memories. The method is analyzed mathematically and its good performance is demonstrated by computer simulations at bit rates from 64 to about 300 kbit/s using moving pictures.
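The key step described above, rough quantization of the motion estimator's inputs before the displacement search, can be sketched with a uniform few-level quantizer. The level count below is an assumption for illustration; the paper's exact quantizer is not given in the abstract:

```python
def coarse_quantize(img, levels=4):
    """Roughly quantize an image to `levels` uniform levels (here 4,
    i.e. 2-bit samples), so that the subsequent displacement search
    compares small integers: a very simple arithmetic-logic unit and
    smaller memories suffice.  The level count is an assumed value."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    step = (hi - lo) / levels or 1.0  # guard against a flat image
    return [[min(int((v - lo) / step), levels - 1) for v in row] for row in img]
```

Both the current and the reference frame would be passed through this quantizer, and the block matcher then searches for the optimum displacement vector on the quantized data.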

Journal ArticleDOI
TL;DR: Owing to the time dependence, the line-shuffling is a three-dimensional operation; the paper develops a bidimensional approximation where the operation is confined to a fixed constellation.
Abstract: This paper deals with a theoretical formulation of line-shuffling, which is a standard technique proposed for compatible HDTV systems to halve the number of lines, e.g. from a 1250/2:1/50 format to a 625/2:1/50 format. An HD-MAC system is taken as reference where, starting from the 1250/2:1/50 format, the subsampling reduces the number of pixels by four. The resulting pixel constellation has a quincunx structure for each field with a periodicity of four fields. Finally, pixels of odd columns are vertically shifted to halve the number of lines. Owing to the time dependence, the line-shuffling is a three-dimensional operation; the paper develops a bidimensional approximation where the operation is confined to a fixed constellation. An equivalent scheme for the shuffling operation is derived, consisting of two vertical filters and one horizontal modulator, which is useful for shuffling frequency domain analysis. A similar model is derived for the deshuffling operation at the receiver. As an example of application of the theory, the spectral analysis of the line-shuffling error in MAC compatible systems and its application to real test images are carried out.

Journal ArticleDOI
TL;DR: The results of these tests show that the Weber-Fechner law is valid for the perception of noise over nearly the whole brightness range under consideration.
Abstract: The optimum compressor function for analogue HDTV component signals was determined by ascertaining the threshold of noise perception depending on the picture brightness in the range between 2 and 330 cd/m². The result turned out to be very similar to a logarithmic function. A processor-controlled, digital predistortion unit and a complementary correction unit facilitated the realization of a logarithmic compressor function and a gamma law function having an exponent of 0.45. The measurement of the threshold of noise perception for distorted video signals gave further evidence for uniform visual distribution of transmission noise over the brightness range by the logarithmic compressor function. The gamma law is less suitable for this purpose. This result was also supported by a demonstration with a natural test picture. The results of these tests show that the Weber-Fechner law is valid for the perception of noise over nearly the whole brightness range under consideration.
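The two compressor candidates compared above can be sketched for a normalized signal. The logarithmic form and its constant below are illustrative assumptions; the paper only states that the measured optimum was very similar to a logarithmic function:

```python
import math

def log_compress(v, c=100.0):
    """Illustrative logarithmic compressor for v in [0, 1]:
    out = ln(1 + c*v) / ln(1 + c).  The constant c is an assumed
    value chosen so the curve spans [0, 1]."""
    return math.log(1.0 + c * v) / math.log(1.0 + c)

def gamma_compress(v, gamma=0.45):
    """The gamma-law alternative with exponent 0.45, as in the paper."""
    return v ** gamma
```

Both map 0 to 0 and 1 to 1; the logarithmic curve rises faster at low brightness, which is why it distributes transmission noise more uniformly over the brightness range than the gamma law.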

Journal ArticleDOI
TL;DR: This contribution describes how a combination of HDPCM and vector quantization can be more resistant to transmission errors than DPCM, and how it can be implemented with one ROM and one adder in the critical path by pipelining the recursive loop of theHDPCM-coder.
Abstract: Hybrid Differential Pulse Code Modulation (HDPCM) of coloured pictures can be combined with vector quantization of the coloured prediction errors into an efficient coding scheme for pictures whose chrominance is subsampled by a factor of 2. First, this contribution describes the reasons for combining DPCM and vector quantization, i.e., the joint probability density of the prediction errors and the intercomponent masking. Then, the text treats how a combination of HDPCM and vector quantization can be more resistant to transmission errors than DPCM, and how it can be implemented with one ROM and one adder in the critical path by pipelining the recursive loop of the HDPCM coder. Simulation results illustrate the performance of the coding scheme with a vector quantizer with 32 output vectors for the compression of the bit rate of HDTV signals to 280 Mbit/s.

Keywords: bit rate reduction, picture coding, Differential Pulse Code Modulation, vector quantization

Journal ArticleDOI
Fernando Pereira, Mauro Quaglia1
TL;DR: An adaptation of the Reference Model algorithm, studied in the CCITT Specialists Group on Coding for Visual Telephony as a future standard for synchronous transmissions, to Asynchronous Transfer Mode (ATM) networks; the adaptation makes it possible to improve the overall image quality by means of a powerful resolution control scheme.
Abstract: This paper describes an adaptation of the Reference Model algorithm, studied in the CCITT Specialists Group on Coding for Visual Telephony as a future standard for synchronous transmissions, to Asynchronous Transfer Mode (ATM) networks. The adaptation leads to advantages due to the characteristics of an asynchronous environment, in particular its capability to manage variable bitrate (VBR) flows, which makes it possible, for example, to maintain an almost constant image quality over the whole transmission. The new algorithm uses two control parameters, average bitrate and peak bitrate, and improves the overall image quality by means of a powerful resolution control scheme. Experiments have been carried out to find the best trade-off of spatio-temporal resolutions and quantization step that reaches a uniform image quality, avoiding critical situations. Some results are presented for conversational and broadcasting image sequences.

Journal ArticleDOI
TL;DR: One of the problems encountered in the definition of video coding algorithms for moving and still pictures due to the finite length arithmetic in the computation of orthonormal transforms is considered: mismatch and reversibility.
Abstract: This paper considers one of the problems encountered in the definition of video coding algorithms for moving and still pictures due to the finite length arithmetic in the computation of orthonormal transforms. In particular two aspects are taken into account: mismatch and reversibility. The different causes which influence the final representation of the reconstructed video samples are examined and a formula is given, expressing said final error as a function of the errors introduced at the different stages of the computation. From an upper bound of said final error the minimum number of bits required for the representation of the quantities appearing at the different stages of the computation are derived for two cases of particular interest. Alternatively, assigning the length of the arithmetic registers, it is possible to know the worst case error. With some care the results are valid for any type of fast algorithms and not only for the matrix multiplication case which is used here to attain the widest validity of the results.