
Showing papers in "Signal Processing-image Communication in 1992"


Journal ArticleDOI
TL;DR: The prediction error signal is further compressed with spatial redundancy reduction (DCT), and the quality of video compressed with the MPEG algorithm at about 1.5 Mbit/s has been compared to that of consumer-grade VCRs.
Abstract: The video compression technique developed by MPEG covers many applications, from interactive systems on CD-ROM to delivery of video information over telecommunications networks. The MPEG video compression algorithm relies on two basic techniques: block-based motion compensation for the reduction of temporal redundancy and transform-domain compression for the reduction of spatial redundancy. Motion compensation is applied with both predictive and interpolative techniques. The prediction error signal is further compressed with spatial redundancy reduction (DCT). The quality of video compressed with the MPEG algorithm at about 1.5 Mbit/s has been compared to that of consumer-grade VCRs.
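The block-based motion compensation that this and several later abstracts rely on can be illustrated with a minimal full-search sketch (an illustrative Python routine, not the MPEG encoder's actual search, which adds half-pel refinement and rate-constrained mode decisions):

```python
import numpy as np

def full_search(cur_block, ref, top, left, radius=4):
    """Exhaustive block matching: scan a (2*radius+1)^2 window of candidate
    displacements in the reference frame and keep the one minimising the
    sum of absolute differences (SAD) against the current block."""
    h, w = cur_block.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(cur_block - ref[y:y + h, x:x + w]).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```

The residual left after subtracting the best-matching reference block is what the abstract's DCT stage then compresses.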

155 citations


Journal ArticleDOI
TL;DR: The results of this study indicate that the statistics of the 2D-DCT coefficients of motion-compensated blocks are best approximated by a Laplacian pdf (probability density function), and the Laplacian remains a good approximation for the normalized coefficients.
Abstract: Block-matching motion compensation techniques are widely used in image coding algorithms. A differential signal with different characteristics from the original signal is then generated. It is important to know the statistical properties of the signal source in order to correctly characterize some parameters of the digital image coding scheme. In this paper a statistical study of the 2D-DCT coefficients of the motion-compensated blocks is performed. The results of this study indicate that the statistics of the coefficients are best approximated by a Laplacian pdf (probability density function). The influence of some types of normalization is investigated and the corresponding pdf's are estimated. An analysis indicates that the Laplacian pdf may be used as a good approximation for the statistical properties of the normalized coefficients.
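Fitting the Laplacian model the abstract arrives at is straightforward: for a zero-mean Laplacian pdf f(x) = (1/2b)·exp(−|x|/b), the maximum-likelihood estimate of the scale b is simply the mean absolute coefficient value (a sketch under that zero-mean assumption, which is reasonable for AC coefficients of prediction errors):

```python
import numpy as np

def fit_laplacian(coeffs):
    """ML fit of a zero-mean Laplacian pdf f(x) = 1/(2b) * exp(-|x|/b)
    to a set of transform coefficients: the scale estimate is the
    mean absolute value of the samples."""
    return np.abs(np.asarray(coeffs, dtype=float)).mean()
```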

74 citations


Journal ArticleDOI
Alexander Garland Macinnis1
TL;DR: The MPEG Systems Committee has produced a specification for the syntax and semantics of system layer coding of combined MPEG compressed digital video and audio, which allows for the later inclusion by ISO of other data streams.
Abstract: The MPEG Systems Committee has produced a specification for the syntax and semantics of system layer coding of combined MPEG compressed digital video and audio. The systems layer provides a framework and information required for these functions: a multiplex of various numbers of audio, video and private streams; synchronization of audio and video; management of buffers for coded information; random access and start-up conditions; and absolute time identification. In addition the specification allows for the later inclusion by ISO of other data streams.

54 citations


Journal ArticleDOI
TL;DR: This paper will focus on the NOVI-II experimental system which was created to develop signal processing architectures for all-digital SHD image processing, as well as to evaluate compression schemes for SHD images.
Abstract: In this paper, we will present a survey of the all-digital super high definition (SHD) image specifications currently under development by the authors to support this type of media integration. We will discuss specification requirements, encoding and support technologies, and present a survey of signal processing systems. In particular, we will focus on the NOVI-II experimental system, which was created to develop signal processing architectures for all-digital SHD image processing, as well as to evaluate compression schemes for SHD images. Of course, any discussion of media integration cannot fail to touch on hypermedia, so we will also examine the relationship of hypermedia to super high definition images.

46 citations


Journal ArticleDOI
TL;DR: The simple strategy of treating bit-planes as independent bi-level images for JBIG coding yields compressions at least comparable to and sometimes better than the JPEG standard in its lossless mode, making it attractive in a wide variety of environments.
Abstract: The JBIG coding standard like the G3 and G4 facsimile standards defines a method for the lossless (bit-preserving) compression of bi-level (two-tone or black/white) images. One advantage it has over G3/G4 is superior compression, especially on bi-level images rendering greyscale via halftoning. On such images compression improvements as large as a factor of ten are common. A second advantage of the JBIG standard is that it can be parameterized for progressive coding. Progressive coding has application in image databases that must serve displays of differing resolution, image databases delivering images to CRT displays over medium rate (say, 9.6 to 64 kbit/s) channels, and image transmission services using packet networks having packet priority classes. It is also possible to parameterize for sequential coding in applications not benefiting from progressive buildup. It is possible to effectively use the JBIG coding standard for coding greyscale and color images as well as bi-level images. The simple strategy of treating bit-planes as independent bi-level images for JBIG coding yields compressions at least comparable to and sometimes better than the JPEG standard in its lossless mode. The excellent compression and great flexibility of JBIG coding make it attractive in a wide variety of environments.
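The bit-plane strategy in the last paragraph can be sketched as follows. Gray-coding before splitting is a commonly used refinement that limits how many planes a small intensity step disturbs; the helper names are illustrative, not from the standard:

```python
import numpy as np

def to_bitplanes(img, gray_code=True):
    """Split an 8-bit greyscale image into eight bi-level planes, each of
    which can then be compressed independently by a bi-level coder."""
    img = img.astype(np.uint8)
    if gray_code:
        img = img ^ (img >> 1)            # binary -> Gray code
    return [(img >> k) & 1 for k in range(8)]

def from_bitplanes(planes, gray_code=True):
    """Reassemble the image from its bit planes (inverse of to_bitplanes)."""
    img = np.zeros_like(planes[0], dtype=np.uint8)
    for k, p in enumerate(planes):
        img |= p.astype(np.uint8) << k
    if gray_code:                          # Gray -> binary (8-bit prefix XOR)
        for shift in (1, 2, 4):
            img ^= img >> shift
    return img
```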

45 citations


Journal ArticleDOI
TL;DR: A new 'reduced-mean' pyramid data structure is proposed, which gives more accurate motion vectors than conventional techniques without the transmission of extra data, and is also more efficient in the presence of large amounts of motion.
Abstract: The current two mainstream motion compensation techniques, pel-recursive and block matching algorithms are first reviewed and experimental results and comments are given. Estimation and motion compensation techniques using image pyramids are then introduced, and a new ‘reduced-mean’ pyramid data structure is proposed, which gives more accurate motion vectors than conventional techniques without the transmission of extra data. Results obtained by the use of our pyramid algorithms show smaller motion compensated frame differences than those of pel-recursive and block matching algorithms, and the pyramid algorithm is also more efficient in the presence of large amounts of motion.
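The pyramid structure underlying the proposal can be illustrated with a plain mean pyramid (the paper's 'reduced-mean' variant stores modified values so that no extra data need be transmitted; this sketch shows only the base structure it builds on):

```python
import numpy as np

def mean_pyramid(img, levels=3):
    """Build a pyramid whose level l+1 holds the 2x2 block means of level l.
    Coarse-to-fine motion search over such a pyramid cheaply narrows the
    search range at full resolution."""
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2  # crop to even size
        a = a[:h, :w]
        pyr.append((a[0::2, 0::2] + a[0::2, 1::2] +
                    a[1::2, 0::2] + a[1::2, 1::2]) / 4.0)
    return pyr
```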

44 citations


Journal ArticleDOI
TL;DR: This paper deals with the introduction of digital HDTV in such a way that broadcast programs can be picked up by HDTV and TV sets at the same time, with low hardware complexity.
Abstract: This paper deals with the introduction of digital HDTV in such a way that broadcast programs can be picked up by HDTV and TV sets at the same time, with low hardware complexity. This challenging problem, referred to as the compatibility problem in the text, only makes sense as long as it compares well with stand-alone schemes in terms of picture quality and hardware complexity. Compatibility requirements mainly have implications for the motion compensation loop of the coding schemes. The proposed schemes are based on subband techniques.

36 citations


Journal ArticleDOI
TL;DR: The method proposed in this paper codes each subband using, for prediction, only the information from the same subband or from lower-order bands in the previously decoded picture.
Abstract: The new coding scheme allows the coding of video signals of different resolutions. The coding is performed in a way that makes it possible to decode either the whole or only some parts of the transmitted format. Transform and subband coding methods are considered. Transform coding is described in the light of multirate filter bank theory. Coding systems of increasing efficiency are studied. The case of pure intra coding is quite easy to handle. More efficient systems include a temporal predictor. Some emphasis is put on those systems and especially on motion compensated schemes. The method proposed in this paper codes each subband using, for prediction, only the information from the same subband or from lower-order bands in the previously decoded picture.

28 citations


Journal ArticleDOI
TL;DR: If both EQTV and HDTV have to be encoded optimally, the error feedback coding strategy is the most suitable one because it is able to cancel the propagation of coding errors of the EQTV signal into the reconstructed HDTV signal.
Abstract: The Broadband Integrated Services Digital Network (BISDN) based on lightwave technology is supposed to become the all-purpose exchange area communication network of the future. All digital services are integrated, with applications ranging from facsimile, videophone and teleconferencing to digital standard-resolution TV — sometimes referred to as Extended Quality TV (EQTV) — and High Definition TV (HDTV). In order to make efficient use of the available network bandwidth, hierarchical coding schemes combine the necessary data compression of the HDTV and EQTV signals such that the HDTV signal can be transmitted at 135 Mbit/s with the embedded EQTV signal coded in a sub-channel of approximately 35 Mbit/s. This paper investigates in detail three possible coding strategies for fixed bit-rate hierarchical coding schemes, namely distributed coding, error feedback coding and selective coding. With the aid of rate-distortion theory it is determined under which conditions both EQTV and HDTV are encoded optimally for each of the three strategies. These conditions are verified with three introduced hierarchical subband coding schemes, namely the Distributed system, the Refinement system and the Selection system, which are direct implementations of the three coding strategies. It is concluded that if both EQTV and HDTV have to be encoded optimally, the error feedback coding strategy is the most suitable one because it is able to cancel the propagation of coding errors of the EQTV signal into the reconstructed HDTV signal.

26 citations


Journal ArticleDOI
TL;DR: A technical overview of MPEG++, a robust video compression and transport system for digital HDTV, which forms the basis of ‘Advanced Digital Television’ (ADTV), the all-digital terrestrial simulcast system currently under development by the Advanced Television Research Consortium (ATRC).
Abstract: This paper provides a technical overview of MPEG++, a robust video compression and transport system for digital HDTV. MPEG++ forms the basis of ‘Advanced Digital Television’ (ADTV), the all-digital terrestrial simulcast system currently under development by the Advanced Television Research Consortium (ATRC). ADTV incorporates an efficient MPEG-compatible compression algorithm at its core, with application-specific data prioritization and transport features added as separable layers. The compression process is based on a 1440×960 (1050-line 2:1 interlaced) HDTV format, producing a selectable bit-rate in the region of 15–20 Mbit/s. The data prioritization layer of MPEG++ achieves robust delivery over an appropriate two-tier modem by separating compressed video data into high- and standard-priority bitstreams with appropriate bit-rates. This prioritized data is then formatted into fixed-length ‘cells’ (packets) with appropriate data-link level and service-specific adaptation level headers, designed to provide capabilities such as flexible service multiplexing, priority handling, efficient cell packing, error detection and graceful recovery from errors. An outline of each of the above MPEG++ elements (i.e., compression, prioritization and transport) is given, followed by a description of a software model for the system. Simulation model based performance results which illustrate MPEG++ image quality and robustness are briefly reported.

24 citations


Journal ArticleDOI
TL;DR: The authors are developing an HDTV codec based on DCT plus motion-compensated prediction at a bit-rate of 135 Mbit/s and an original sampling frequency of the studio standard; this paper outlines the technical description of the prototype codec presently being built.
Abstract: The source bit-rate of HDTV is extremely high compared with that of conventional standard TV, and a bit-rate reduction will become an important matter in the digital transmission of HDTV signals using a communication satellite or the Broadband-ISDN. The authors are developing an HDTV codec based on DCT plus motion-compensated prediction at a bit-rate of 135 Mbit/s and an original sampling frequency of the studio standard. This paper outlines the technical description of this HDTV prototype codec presently being built.

Journal ArticleDOI
TL;DR: A fast algorithm is described which reduces the number of computations by more than a factor of 20 for commonly used window sizes and performs extremely well for both additive and multiplicative noise.
Abstract: Noise smoothing is a basic operation in image processing. Numerous filters, each to be used for different noise conditions and picture types, have been proposed in the literature. A comparison study showed that the K-Nearest Neighbour filter performs extremely well for both additive noise and multiplicative noise, especially when applied in an iterative manner. However, a major drawback to its widespread use is its very heavy computational load. We describe a fast algorithm which reduces the number of computations by more than a factor of 20 for commonly used window sizes.
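For reference, the filter being accelerated looks like this in its direct, computationally heavy form (a sketch only; whether the centre pel joins its own k nearest neighbours in the average varies between formulations, and here it does):

```python
import numpy as np

def knn_filter(img, k=6, win=3):
    """K-Nearest Neighbour smoothing: replace each pixel by the mean of the
    k values in its win x win window whose grey levels are closest to its
    own.  Direct form with a sort per pixel; the paper's fast algorithm
    reaches the same result with far fewer operations."""
    pad = win // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode='edge')
    out = np.empty(np.shape(img), dtype=float)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = p[i:i + win, j:j + win].ravel()
            centre = p[i + pad, j + pad]
            order = np.argsort(np.abs(window - centre), kind='stable')
            out[i, j] = window[order[:k]].mean()
    return out
```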

Journal ArticleDOI
TL;DR: Computer simulation results at 2:1 subsampling rate of the proposed local motion-adaptive interpolation techniques combined with the conventional three-step search algorithm and the fast BMA using integral projections are given.
Abstract: Interpolation techniques based on block-by-block motion compensation algorithms are studied for the video conference/video telephone signals. In this paper, we propose the local motion-adaptive interpolation technique, which can be used in the codec using a motion compensated coding-block matching algorithm (MCC-BMA). Computer simulation results at 2:1 subsampling rate of the proposed local motion-adaptive interpolation techniques combined with the conventional three-step search algorithm and the fast BMA using integral projections are given.

Journal ArticleDOI
TL;DR: A new type of block-based motion estimation algorithm is presented, which is based on the block-recursive (gradient) method and makes use of some of the merits of theBlock-matching method.
Abstract: The conventional motion estimation algorithms used in digital television coding can roughly be classified into two categories, namely the block-matching method and the recursive method. Each of them has its own strong points. In this paper, a new type of block-based motion estimation algorithm is presented, which is based on the block-recursive (gradient) method and makes use of some of the merits of the block-matching method. For a moderate translational motion, motion estimation with sub-pel precision can conveniently be obtained with only a couple of recursive searches, and for a violent or complicated motion which cannot be estimated by any block-based algorithm, the local minimum of the prediction errors can always be found. Our experiments show that the proposed algorithm is efficient and reliable, and clearly superior to the conventional block-recursive algorithms and the fast block-matching algorithms. The performance of the proposed algorithm approaches the optimum of the full-search algorithm with the same estimation precision, but the computational effort is much less than that of the full-search algorithm.
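One block-recursive (gradient) update of the kind the abstract builds on can be sketched as a per-block, Gauss-Newton-style step (illustrative only; the paper's algorithm adds recursion control, sub-pel interpolation and the block-matching refinements it borrows):

```python
import numpy as np

def block_gradient_step(cur, ref, top, left, bsize, d):
    """One gradient update of the motion estimate d = (dy, dx) for a block:
    compute the displaced block difference, then solve for the extra shift
    that best explains it from the spatial gradients of the reference block."""
    dy, dx = d
    cb = cur[top:top + bsize, left:left + bsize].astype(float)
    rb = ref[top + dy:top + dy + bsize, left + dx:left + dx + bsize].astype(float)
    err = cb - rb                      # displaced block difference
    gy, gx = np.gradient(rb)           # per-axis gradients of the shifted ref
    ddy = (err * gy).sum() / max((gy ** 2).sum(), 1e-9)
    ddx = (err * gx).sum() / max((gx ** 2).sum(), 1e-9)
    return (dy + ddy, dx + ddx)        # fractional result -> sub-pel estimate
```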

Journal ArticleDOI
TL;DR: An overview of the different object classes is provided, followed by the description of methodology and coding principles which are used in MHEG, and a detailed presentation of synchronized multimedia objects shows how the concept of space and time relations between objects lead to composite objects definition.
Abstract: This paper aims to present the current work achieved by a joint CCITT (SGVIII/Q9)-ISO-IEC/JTC1/SC29/WG12 group (in short: MHEG, for Multimedia and Hypermedia information coding Expert Group). This expert group has the mandate to define a generic standard for multimedia and hypermedia information objects especially intended for real time interchange. The scope of this standard is first explained, along with some examples of the field of application. Then an overview of the different object classes is provided, followed by the description of the methodology and coding principles which are used in MHEG. Lastly, a detailed presentation of synchronized multimedia objects shows how the concept of space and time relations between objects leads to the definition of composite objects, considered as units of information to be used, exchanged or manipulated by applications. This latter idea introduces some considerations about AVIs (Audiovisual Interactive Scriptware), which are also dealt with by MHEG.

Journal ArticleDOI
TL;DR: The HRV Workstation project will continue its research into the effects of integrating digital video on workstation hardware and software, in an effort to determine the costs and benefits of incorporating such features in future (mainstream, as opposed to special-function) workstations.
Abstract: Sun Microsystems Laboratories (in conjunction with David Sarnoff Research Center and Texas Instruments) is working on a DARPA-supported research project to integrate High Resolution Video (HRV) into the distributed workstation programming environment. The HRV Workstation project involves the creation of new workstation hardware and software that permits the integration of digital video as a first-class data type within the system. The hardware developed for the HRV Workstation provides the basic capabilities needed to acquire, store, process, transport and display raw (i.e., uncompressed) high resolution digital video. The HRV Workstation software provides system resource management and programming interfaces in support of applications which make use of time-critical information such as HRV. The HRV Workstation hardware has been fabricated and is nearing the completion of its testing, while an initial version of the HRV Workstation's system software is being completed. Once these initial milestones are reached, the HRV Workstation project will continue its research into the effects of integrating digital video on workstation hardware and software, in an effort to determine the costs and benefits of incorporating such features in future (mainstream, as opposed to special-function) workstations.

Journal ArticleDOI
TL;DR: This paper highlights the benefits of using an embedded multiresolution modulation constellation over a modulation scheme that resorts to time or frequency multiplexing of the broadcast resolutions.
Abstract: A practical end-to-end all-digital multiresolution system is demonstrated that employs joint source-channel coding and modulation in order to achieve efficient broadcast of digital HDTV. The threshold effect plaguing single resolution systems is softened by a stepwise graceful degradation. This can be used to increase the coverage and robustness of the digital broadcast system. This approach is seen as an alternative to traditional single resolution digital transmission systems which are not designed for broadcast situations, and which suffer from the threshold effect. This paper highlights the benefits of using an embedded multiresolution modulation constellation over a modulation scheme that resorts to time or frequency multiplexing of the broadcast resolutions. Besides showing coding results and simulations of transmission effects, the paper discusses the trade-offs between low and high resolution coverage.

Journal ArticleDOI
TL;DR: A system concept based on the existing 8 mm tape format is described, which opens the possibility for designing a small portable recorder suited for digital HDTV and SDTV recordings for home use, using small magnetic recording mechanics and advanced bit-rate reduction techniques.
Abstract: In this paper we consider consumer digital HDTV recording, using small magnetic recording mechanics and advanced bit-rate reduction techniques. A system concept based on the existing 8 mm tape format is described, which opens the possibility for designing a small portable recorder suited for digital HDTV and SDTV recordings for home use. It appears that, when a system is worked out which can cope with 25 Hz and 30 Hz environments, only a very few recording bit-rates are applicable. At the same time, an elegant possibility for recording SDTV and HDTV on the same machine should be maintained. Since small recording mechanics are proposed, HDTV signals must be substantially compressed to obtain sufficient playing time. For this reason, motion-compensated DCT coding of HDTV signals has been studied for digital home-use HDTV recording, because high-quality images can be obtained with a compression factor of 7–10. The total recording system concept would result in 2 hours playing time for HDTV and 4 hours for SDTV, using an 8 mm-type cassette.

Journal ArticleDOI
TL;DR: The one year study carried out in ISO/IEC with collaboration of CCITT and CMTT, whose objective is to identify requirements for a generic video coding standard suitable for various applications such as storage and retrieval, communication and distribution, is summarized.
Abstract: In response to the potential availability of high speed digital storage and transmission media with transfer rates of several Mbit/s or above, a new generation video coding standard is needed for high quality services. This paper summarizes the one year study carried out in ISO/IEC in collaboration with CCITT and CMTT, whose objective is to identify requirements for a generic video coding standard suitable for various applications such as storage and retrieval, communication and distribution. The necessary functionalities for each category of applications are first identified, then storage and transmission media characteristics that may have an impact on the video source coding structure are extracted. Since a particularly important requirement for the new generation standard is compatibility with the existing standards, implementation methods are presented in detail. Finally, areas awaiting intensive technical study are discussed.

Journal ArticleDOI
TL;DR: The use of DCT to implement a similar filtering process avoids the use of this additional data in an image decomposition/reconstruction subband coding scheme free of aliasing and boundary errors.
Abstract: In a recent paper an image decomposition/reconstruction subband coding scheme free of aliasing and boundary errors was proposed. Ideal filters were used and implemented with the DFT. A few additional data are necessary to perform the exact reconstruction. We show here that the use of the DCT to implement a similar filtering process avoids the need for these additional data. In practice, the computational load does not change.

Journal ArticleDOI
TL;DR: The experimental results show the simplicity and speed of operations of the temporal co-occurrence matrices for a wide range of applications in interframe video coding.
Abstract: Applications of the temporal co-occurrence matrices to interframe video coding are discussed. In the area of low bit-rate coding an adaptive version of a simple predictive/transform coding technique is proposed which increases the subjective quality of the coded images. The adaptive process is based on the homogeneity criterion calculated from the temporal co-occurrence matrices of the image sequence. In multi-layer video coding, an algorithm for the classification of the frame differences information is introduced. It divides the interframe picture element changes into two groups, exploiting the psychovisual characteristics of the human visual system (HVS). The experimental results show the simplicity and speed of operations of the temporal co-occurrence matrices for a wide range of applications in interframe video coding.
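The central data structure here is easy to state concretely (a minimal sketch; the paper derives its homogeneity criterion from matrices like this one):

```python
import numpy as np

def temporal_cooccurrence(frame_a, frame_b, levels=256):
    """Temporal co-occurrence matrix: entry (i, j) counts how often a pel
    has grey level i in one frame and j at the same position in the next.
    Mass concentrated on the diagonal indicates little interframe change."""
    a = np.asarray(frame_a).ravel()
    b = np.asarray(frame_b).ravel()
    m = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(m, (a, b), 1)   # unbuffered accumulation over repeated indices
    return m
```

A simple homogeneity measure is then the diagonal mass, `np.trace(m) / m.sum()`, which is 1 for a static scene.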

Journal ArticleDOI
TL;DR: It is concluded that an end-to-end international digital transmission of HDTV programs, wherein HD TV programs can be directly transmitted from the place of an event to a theater, is commercially viable using the codec in conjunction with the small earth station.
Abstract: This paper describes a portable HDTV digital codec and its transmission performance obtained in a transpacific field trial via an INTELSAT satellite with a transportable small earth station. This portable codec is designed for transmitting HDTV programs including 4 channels of high quality sound via a 72 MHz bandwidth transponder in an INTELSAT satellite, or via an ISDN H4 channel in an optical fiber submarine cable. In order to realize the portable hardware for practical applications, the codec employs bit-reduction techniques whose algorithms are extremely simple from the viewpoint of hardware construction but are effective in achieving a high coding picture quality. From the results obtained in the field trial, it is concluded that an end-to-end international digital transmission of HDTV programs, wherein HDTV programs can be directly transmitted from the place of an event to a theater, is commercially viable using the codec in conjunction with the small earth station.

Journal ArticleDOI
TL;DR: The hierarchical multirate vector quantization (HMVQ) technique provides high encoded image quality with very low bit-rates and demonstrates flexibility of accurate reproduction in different detail regions.
Abstract: The hierarchical multirate vector quantization (HMVQ) introduced in this paper is an improved form of vector quantization for digital image coding. The HMVQ uses block segmentation and a structure tree to divide an original image into several layers and sub-layers according to their grey scale contrast within blocks of a certain size. Different bit-rates are used for block coding of the different layers with the same codebook. The HMVQ technique provides high encoded image quality at very low bit-rates. The processing time for codebook generation is considerably reduced by using layer-by-layer optimization and subsampling in low detail regions. This technique also demonstrates flexibility of accurate reproduction in different detail regions.

Journal ArticleDOI
TL;DR: These two models are able to reproduce the very ‘bursty’ characteristics of typical VBR traffic by switching between multiple fractal and Markov-based cell generation modes for the low and high bit-rate sources, respectively.
Abstract: Recent increases in the number and variety of emerging video and image-based telecommunication services have created a clear need for accurate source models for the traffic produced by these new facilities. In this paper, two suitable sources are proposed, which are able to accurately model the low (less than 5 Mbit/s) and high (5–20 Mbit/s) bit-rate traffic produced by a typical variable bit-rate (VBR) video codec. These two models are able to reproduce the very ‘bursty’ characteristics of typical VBR traffic by switching between multiple fractal and Markov-based cell generation modes for the low and high bit-rate sources, respectively. The validity of these models has been verified by comparing the traffic produced by both sources with ‘real’ packet-video traffic, produced through the simulated compression of two test images.
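A stripped-down version of the Markov-based cell generation mode mentioned above can be sketched as follows (the fractal mode and the paper's fitted parameters are omitted; the rates and the `p_stay` parameter here are illustrative, not from the paper):

```python
import numpy as np

def markov_cells(n_slots, rate_low, rate_high, p_stay=0.95, seed=0):
    """Toy two-state Markov-modulated source: in each time slot the model
    sits in a low- or high-activity state (staying put with probability
    p_stay) and emits a Poisson number of cells at that state's rate,
    reproducing the bursty on/off character of VBR video traffic."""
    rng = np.random.default_rng(seed)
    state, out = 0, []
    for _ in range(n_slots):
        rate = rate_high if state else rate_low
        out.append(rng.poisson(rate))
        if rng.random() > p_stay:      # occasionally switch activity state
            state ^= 1
    return np.array(out)
```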

Journal ArticleDOI
TL;DR: The system describes a digital high-definition television (HDTV) system designed for US terrestrial broadcasting but friendly to alternate delivery means, to be simulcast with NTSC during a multi-year transition period, using all existing television bands.
Abstract: This paper describes a digital high-definition television (HDTV) system designed for US terrestrial broadcasting but friendly to alternate delivery means. The new system, Digital Spectrum Compatible (DSC-)HDTV, is to be simulcast with NTSC during a multi-year transition period, using all existing television bands. DSC-HDTV uses progressively scanned video at three times the current horizontal line rate, compressed to a data rate that fits in a 6 MHz channel using a mix of two- and four-level symbols.

Journal ArticleDOI
TL;DR: Further reduction of the number of multiplications in the fast computation of several types of DCT is achieved by extracting a common factor from Wang's algorithm.
Abstract: Further reduction of the number of multiplications in the fast computation of several types of DCT is achieved by extracting a common factor from Wang's algorithm.
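For orientation, the transform whose multiplication count such fast algorithms reduce is the DCT; a direct O(N²) reference implementation of the (unnormalized) DCT-II, which Wang's factorization and the common-factor variant compute with far fewer multiplications, is:

```python
import numpy as np

def dct_ii(x):
    """Direct O(N^2) DCT-II: X[k] = sum_n x[n] * cos(pi*(2n+1)*k / (2N)).
    Serves only as a correctness reference for fast factorizations."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    return C @ x
```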

Journal ArticleDOI
TL;DR: This paper introduces the general architecture of a conditional access system and points out most of the specific requirements of conditional access that each designer should be aware of when specifying a digital TV system.
Abstract: Technical progress in the compression and transmission of digital signals may soon make an all-digital HDTV system realistic. To become economically viable, such a system will most certainly have to deal with conditional access mechanisms. The sooner these mechanisms are taken into account in the definition and specification of the system, the easier their implementation will be. Access control mechanisms will be all the more efficient if appropriate synchronization and signalling exist in the digital signal. This paper introduces the general architecture of a conditional access system and points out most of the specific requirements of conditional access that each designer should be aware of when specifying a digital TV system.

Journal ArticleDOI
TL;DR: It was confirmed that this codec can transmit HDTV at the STM-1 rate with a picture quality satisfactory for distribution use, and the technical and operational feasibility of HDTV digital transmission through SDH transmission systems was demonstrated.
Abstract: An HDTV bit-rate reduction codec was developed, aimed at the transmission of an HDTV signal for distribution use. This codec can perform the coding of a 1125-line/60 Hz HDTV video signal accompanied by 4-channel sound signals at about 133 Mbit/s, and transmit it at the STM-1 rate of 155.52 Mbit/s in the synchronous digital hierarchy (SDH). The sampling frequencies are selected considering the simple relation with the studio standard as well as the required bandwidth for HDTV distribution. A hybrid DPCM/DCT coding scheme is employed as the bit-rate reduction algorithm, where intrafield 8 × 8 DCT is first performed and then interframe DPCM is carried out in the DCT coefficient domain. An adaptive intrafield/interframe mode selection is performed only for low-frequency DCT coefficients, and the intrafield mode is always used for high-frequency coefficients because the interframe correlation of the high-frequency coefficients is fairly weak. Computer simulation experiments were carried out to examine the performance of this coding scheme. The codec hardware was implemented and transmission experiments, as well as laboratory experiments, were carried out using actual SDH transmission systems. From these experiments, it was confirmed that this codec can transmit HDTV at the STM-1 rate with a picture quality satisfactory for distribution use. The technical and operational feasibility of HDTV digital transmission through SDH transmission systems was also demonstrated.

Journal ArticleDOI
TL;DR: An image predictive coding method using both intra- and inter-frame predictors, and a method ensuring the self-adjustment of the decoder in the presence of transmission errors, which do not affect the pixel synchronization, is proposed.
Abstract: In this article we present an image predictive coding method using both intra- and inter-frame predictors. The intra-frame predictor is an adaptive FIR filter using the well-known LMS algorithm to track continuously spatial local characteristics of the intensity. The inter-frame predictor is motion-adaptive using a pel-recursive method estimating the displacement vector. Weight coefficients are continuously adapted in order to favor the prediction mode which performs better between intra-frame and motion compensation mode. It is a backwards adaptation which does not necessitate the transmission of an overhead information. Neither the weight coefficients nor displacement vectors are transmitted. Apart from the quantized prediction error, it may be necessary to transmit the detection of a discontinuity of the displacement vector. For the examined image sequence a significant improvement is obtained in comparison with only adaptive intra-frame or only motion compensation mode. We give and discuss the extension of a known adaptive quantizer for 2D signals. A crucial problem in predictive coding, particularly with adaptive techniques, is the sensitivity to transmission errors. A method ensuring the self-adjustment of the decoder in the presence of transmission errors, which do not affect the pixel synchronization, is proposed for the intra-frame mode. Neither overhead information nor error-correcting codes are needed.
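The backward-adaptation idea (no weights or displacement vectors transmitted, because the decoder can repeat the same updates from the decoded samples) can be sketched with a 1-D LMS predictor; the paper's intra-frame predictor is the 2-D analogue of this:

```python
import numpy as np

def lms_predict(signal, order=3, mu=0.01):
    """1-D LMS adaptive linear predictor: the weight vector is updated from
    the prediction error alone, so encoder and decoder stay in lock-step
    without any transmitted side information."""
    signal = np.asarray(signal, dtype=float)
    w = np.zeros(order)
    pred = np.zeros_like(signal)
    for n in range(order, len(signal)):
        x = signal[n - order:n][::-1]     # most recent samples first
        pred[n] = w @ x
        e = signal[n] - pred[n]           # prediction error
        w += mu * e * x                   # LMS weight update
    return pred, w
```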

Journal ArticleDOI
TL;DR: A DCT coding concept is presented that is versatile with respect to scanning mode - interlaced and progressive - and data rate, and an advanced, computationally efficient motion estimation method is described that improves the coding performance and opens the possibility for coding at much lower bit-rates.
Abstract: HDTV implies various new technologies with strong implications for information and imaging technology, and for telecommunications as well. Various standards, image formats and scanning modes are under discussion for various applications. Until now, a video data rate of 125 Mbit/s was considered necessary for the distribution of HDTV signals via satellite and B-ISDN. But the fast development of efficient coding methods indicates that for HDTV, too, the trend is moving toward lower bit-rates. That could open new areas of application in the range of 50 Mbit/s. We present a DCT coding concept that is versatile with respect to scanning mode - interlaced and progressive - and data rate. It shows very good image quality even at the low data rate of 50 Mbit/s. It contains efficient mechanisms for local adaptivity, which are applied for switching between different coding modes and for the control of quality parameters. For this purpose, extensive local image analysis is performed. Criteria and activity measures are computed in the spatial domain in a regular way as sums of absolute spatial differences. To some extent, analysis is also performed in the DCT domain for coefficient thresholding and an adaptive assignment of Huffman coding classes. An advanced, computationally efficient motion estimation method is described. It improves the coding performance and opens the possibility for coding at much lower bit-rates. We have performed a comparison experiment at 125 Mbit/s between interlaced and progressively scanned scenes. The most noticeable gain with the progressive scene is the absence of aliasing defects. This positive effect influences the quality more than the slightly increased coding noise and advocates the use of the progressive scan format for high-quality studio applications. In a second experiment, an interlaced scene is coded at 50 Mbit/s using the advanced motion-estimated prediction scheme. The result is very promising with respect to a possible application for distribution.