
Showing papers on "Inter frame published in 1993"


Patent
21 Jan 1993
TL;DR: In this article, the authors propose a method of removing frame redundancy in a computer system for a sequence of moving images, which consists of detecting a first scene change in the sequence and generating a first keyframe containing complete scene information for a first image.
Abstract: A method of removing frame redundancy in a computer system for a sequence of moving images. The method comprises detecting a first scene change in the sequence of moving images and generating a first keyframe containing complete scene information for a first image. The first keyframe is known, in a preferred embodiment, as a "forward-facing" keyframe or intra frame, and it is normally present in CCITT compressed video data. The process then comprises generating at least one intermediate compressed frame, the at least one intermediate compressed frame containing difference information from the first image for at least one image following the first image in time in the sequence of moving images. In a preferred embodiment, this at least one frame is known as an inter frame. Finally, the process comprises detecting a second scene change in the sequence of moving images and generating a second keyframe containing complete scene information for an image displayed at the time just prior to the second scene change. This is known, in the preferred embodiment, as a "backward-facing" keyframe. The first keyframe and the at least one intermediate compressed frame are linked for forward play, and the second keyframe and the intermediate compressed frames are linked in reverse for reverse play. In a preferred embodiment, the intra frame is used for generation of complete scene information when the images are played in the forward direction. When this sequence is played in reverse, the backward-facing keyframe is used for the generation of complete scene information.

316 citations
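The dual-keyframe idea above can be sketched in a few lines. This is an illustrative toy model (additive frame differences on arrays), not the patent's CCITT-compatible format; all function names are hypothetical:

```python
import numpy as np

def encode_sequence(frames):
    """Encode one scene as a forward-facing keyframe, a chain of
    difference (inter) frames, and a backward-facing keyframe."""
    forward_key = frames[0].copy()                 # intra frame: full scene
    deltas = [frames[i] - frames[i - 1] for i in range(1, len(frames))]
    backward_key = frames[-1].copy()               # backward-facing keyframe
    return forward_key, deltas, backward_key

def play_forward(forward_key, deltas):
    """Rebuild the scene front-to-back from the intra frame."""
    out = [forward_key.copy()]
    for d in deltas:
        out.append(out[-1] + d)
    return out

def play_reverse(backward_key, deltas):
    """Rebuild the scene back-to-front from the backward-facing
    keyframe by undoing each difference in reverse order."""
    out = [backward_key.copy()]
    for d in reversed(deltas):
        out.append(out[-1] - d)
    return out
```

Linking the same difference frames to both keyframes is what lets one stored chain serve both play directions.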


Patent
21 Dec 1993
TL;DR: In this article, a method for compressing video movie data to a specified target size using intra-frame and inter-frame compression schemes is proposed; lossless run-length encoding is applied first, and a color tolerance is raised only when further compression is needed to reach the target.
Abstract: A method for compressing video movie data to a specified target size using intraframe and interframe compression schemes. In intraframe compression, a frame of the movie is compressed by comparing adjacent pixels within the same frame. In contrast, interframe compression compresses by comparing similarly situated pixels of adjacent frames. The method begins by compressing the first frame of the video movie using intraframe compression. The first stage of the intraframe compression process does not degrade the quality of the original data, e.g., the method uses run length encoding based on the pixels' color values to compress the video data. However, in circumstances where lossless compression is not sufficient, the method utilizes a threshold value, or tolerance, to achieve further compression. In these cases, if the color variance between pixels is less than or equal to the tolerance, the method will encode the two pixels using a single color value--otherwise, the method will encode the two pixels using different color values. The method increases or decreases the tolerance to achieve compression within the target range. In cases where compression within the target range results in an image of unacceptable quality, the method will split the raw data in half and compress each portion of data separately. Frames after the first frame are generally compressed using a combination of intraframe and interframe compression. Additionally, the method periodically encodes frames using intraframe compression only in order to enhance random frame access.

140 citations
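The tolerance-driven run-length scheme described above can be sketched as follows. This is a one-dimensional toy version under assumed scalar color values; the function names and the run-count size measure are illustrative, not the patent's encoding:

```python
def rle_with_tolerance(pixels, tol):
    """Run-length encode; a pixel joins the current run when its color
    differs from the run's value by at most tol (tol=0 is lossless)."""
    runs = []
    run_val, run_len = pixels[0], 1
    for p in pixels[1:]:
        if abs(p - run_val) <= tol:
            run_len += 1
        else:
            runs.append((run_val, run_len))
            run_val, run_len = p, 1
    runs.append((run_val, run_len))
    return runs

def compress_to_target(pixels, target_runs, max_tol=255):
    """Raise the tolerance until the encoding fits the target size,
    mirroring the patent's adjust-tolerance loop."""
    for tol in range(max_tol + 1):
        runs = rle_with_tolerance(pixels, tol)
        if len(runs) <= target_runs:
            return tol, runs
    return max_tol, rle_with_tolerance(pixels, max_tol)
```

A real implementation would measure encoded bytes rather than run count, and fall back to splitting the raw data when no acceptable tolerance fits the target.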


Patent
03 Sep 1993
TL;DR: In this article, a variable-size multi-resolution motion compensation (MRMC) prediction scheme is used to produce displaced residual wavelets (DRWs), which are then adaptively quantized with their respective bit maps.
Abstract: A video coding scheme based on wavelet representation performs motion compensation in the wavelet domain rather than spatial domain. This inter-frame wavelet transform coding scheme preferably uses a variable-size multi-resolution motion compensation (MRMC) prediction scheme. The MRMC scheme produces displaced residual wavelets (DRWs). An optimal bit allocation algorithm produces a bit map for each DRW, and each DRW is then adaptively quantized with its respective bit map. Each quantized DRW is then coded into a bit stream.

126 citations


Journal ArticleDOI
TL;DR: A multiscale video representation using wavelet decomposition and variable-block-size multiresolution motion estimation (MRME) is presented and appears suitable for the broadcast environment where various standards may coexist simultaneously.
Abstract: A multiscale video representation using wavelet decomposition and variable-block-size multiresolution motion estimation (MRME) is presented. The multiresolution/multifrequency nature of the discrete wavelet transform makes it an ideal tool for representing video sources with different resolutions and scan formats. The proposed variable-block-size MRME scheme utilizes motion correlation among different scaled subbands and adapts to their importance at different layers. The algorithm is well suited for interframe HDTV coding applications and facilitates conversions and interactions between different video coding standards. Four scenarios for the proposed motion-compensated coding schemes are compared. A pel-recursive motion estimation scheme is implemented in a multiresolution form. The proposed approach appears suitable for the broadcast environment where various standards may coexist simultaneously.

115 citations


Patent
29 Apr 1993
TL;DR: In this article, a video signal compression system includes motion compensated predictive compression apparatus for compressing respective frames of a video signal according to either intraframe processing or interframe processing on a block-by-block basis.
Abstract: A video signal compression system includes motion compensated predictive compression apparatus for compressing respective frames of a video signal according to either intraframe processing or interframe processing on a block-by-block basis to generate blocks of compressed data and associated motion vectors. A compressed signal formatter arranges the blocks of compressed data and the associated motion vectors according to a desired signal protocol wherein motion vectors of interframe processed frames are associated with corresponding blocks of compressed data and motion vectors of intraframe processed frames are associated with blocks substantially adjacent to corresponding blocks of compressed data. The motion vectors are included with intraframe compressed data to facilitate error concealment at respective receiver apparatus.

103 citations


Journal ArticleDOI
TL;DR: A motion-adaptive variable-bit-rate (VBR) video codec is considered, and a motion-classified model is developed to represent the characteristics of various classes of motion activities, including scene changes, which captures the motion of various video scenes through a first-order autoregressive process with time-varying parameters.
Abstract: A motion-adaptive variable-bit-rate (VBR) video codec is considered, and a motion-classified model is developed to represent the characteristics of various classes of motion activities, including scene changes. The codec switches between interframe, motion-compensated, and intraframe coding corresponding to low, medium, and high amounts of motion and scene changes, respectively. The model captures the motion of various video scenes by providing the statistics of VBR-coded video traffic through a first-order autoregressive process with time-varying parameters. The parameters of this model are obtained from a VBR-coded sample video sequence with the objective of matching the bit-rate distribution and the autocorrelation among the bit rates. The validity and accuracy of the model are evaluated, and the characteristics of aggregated traffic sources obtained with the model are discussed.

99 citations
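The core of such a traffic model, a first-order autoregressive process, is easy to sketch. The parameters below are illustrative placeholders, not values fitted to the paper's video data, and the paper additionally varies them over time by motion class:

```python
import random

def ar1_trace(n, mean, phi, sigma, seed=0):
    """Synthetic VBR bit-rate trace from an AR(1) process:
        x[t] = mean + phi * (x[t-1] - mean) + gaussian noise.
    phi controls how strongly successive frame rates correlate."""
    rng = random.Random(seed)
    x = [mean]
    for _ in range(n - 1):
        x.append(mean + phi * (x[-1] - mean) + rng.gauss(0, sigma))
    return x
```

Matching both the marginal distribution and the autocorrelation of measured traffic, as the paper does, then amounts to choosing `mean`, `phi`, and `sigma` per motion class from coded sample sequences.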


Journal ArticleDOI
TL;DR: A self-governing rate buffer control strategy that can automatically steer the coder to a pseudoconstant bit rate is considered and constrains quantizer adjustments so that a smoother quality transition can be attained.
Abstract: Video coding is a key to successful visual communications. An interframe video coding algorithm using hybrid motion-compensated prediction and interpolation is considered for coding studio quality video at a bit rate of over 5 Mb/s. Interframe coding without a buffer control strategy usually results in variable bit rates. Although packet networks may be capable of handling variable bit rates, in some applications, a constant bit rate is more desirable either for a simpler network configuration or for channels with fixed bandwidth. A self-governing rate buffer control strategy that can automatically steer the coder to a pseudoconstant bit rate is considered. This self-governing rate buffer control strategy employs more progressive quantization parameters, and constrains quantizer adjustments so that a smoother quality transition can be attained. Simulation results illustrate the performance of the pseudoconstant bit rate coder with this buffer control strategy.

63 citations
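The feedback loop behind such a buffer control strategy can be sketched with a toy rate model. Everything here is an assumption for illustration: the `raw / qp` bit-production model, the step size cap, and the quantizer range are stand-ins, not the paper's actual coder:

```python
def simulate_rate_control(frame_bits, target_bits, qp0=16,
                          qp_min=1, qp_max=31, step=1):
    """Steer toward a pseudoconstant rate: nudge the quantizer coarser
    when the buffer fills and finer when it drains, with a capped step
    so quality transitions stay smooth."""
    qp, buffer = qp0, 0.0
    history = []
    for raw in frame_bits:
        coded = raw / qp                      # toy model: coarser qp -> fewer bits
        buffer += coded - target_bits         # channel drains target_bits/frame
        if buffer > 0:
            qp = min(qp_max, qp + step)       # over budget: quantize coarser
        elif buffer < 0:
            qp = max(qp_min, qp - step)       # under budget: quantize finer
        history.append((coded, qp))
    return history, buffer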


Journal ArticleDOI
TL;DR: In this paper, various statistical characteristics of motion-compensated frame difference (MCFD) images were derived using the statistical model proposed by the authors (1992), and the authors provided insights into the working of the MPEG and H.261 coders, including the justification for using the same DCT for both intra and interframe modes.
Abstract: Using the statistical model proposed by the authors (1992), various statistical characteristics of motion-compensated frame difference (MCFD) images are derived. It is shown that the optimum Karhunen-Loeve transform (KLT) of the MCFD images is in fact identical to that of the original image sequence in intraframe mode, and that the discrete cosine transform (DCT) remains near optimal. The study thus provides insights into the working of the MPEG and H.261 coders, including the justification for using the same DCT for both intra and interframe modes. Experiments on standard image sequences confirm the accuracy of the statistical model.

57 citations


Patent
Ichiro Tamitani1
25 Feb 1993
TL;DR: In this paper, a motion picture coding apparatus for storage medium of a video rate is produced at a low cost, which includes an input picture re-arranging unit for changing the order of frames of input motion pictures, a storage circuit for storing therein decoded pictures of intra-coded and predictive coded pictures, an address generating unit, a motion detector for performing multi-stage motion vector search, a predictive signal generating unit for outputting an inter-frame predictive signal and a predictive difference signal, a quantizing unit, and a local decoding unit.
Abstract: A motion picture coding apparatus for storage medium of a video rate is produced at a low cost. The coding apparatus includes an input picture re-arranging unit for changing the order of frames of input motion pictures, a storage circuit for storing therein decoded pictures of intra-coded and predictive coded pictures, an address generating unit, a motion detector for performing multi-stage motion vector search, a predictive signal generating unit for outputting an inter-frame predictive signal and a predictive difference signal, a quantizing unit, a variable length coding unit, and a local decoding unit. The predictive signal generating unit simultaneously fetches data read out from the storage circuit to the motion detector for final-stage vector search to reduce access to the storage circuit. The locally decoded pictures are placed back in the order of the reproduced frames, so that the decoded picture signal can be monitored, by a storage circuit that stores the decoded intra-coded and predictive coded pictures output from the local decoding unit and an address generating unit.

49 citations


Patent
Li Yan1
10 Dec 1993
TL;DR: In this paper, the signal for the current frame is weightedly averaged with signals for a future and prior frame, where the future frames are given less weight as they differ more from the current frames.
Abstract: Motion video is represented by digital signals. The digital signals can be compressed by coding to reduce bitspace. Noise in the signal, however, reduces the efficiency of coding. The present invention is a system and method for reducing noise in video signals by filtering. The signal for the current frame is weightedly averaged with signals for a future and prior frame. The future and prior frames are given less weight as they differ more from the current frame. When motion compensation information is available, the motion compensated future and prior frames can be used for averaging, further improving filtering.

48 citations
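The weighted temporal averaging described above can be sketched directly. The exponential weighting function and the `strength` parameter are illustrative choices, not the patent's exact weighting, and the motion-compensated variant would simply pass displaced neighbour frames:

```python
import numpy as np

def temporal_filter(prev, cur, nxt, strength=10.0):
    """Average the current frame with its neighbours; a neighbour pixel
    that differs more from the current pixel gets less weight, so real
    motion is preserved while noise is averaged out."""
    prev, cur, nxt = (np.asarray(f, dtype=float) for f in (prev, cur, nxt))
    w_prev = np.exp(-np.abs(prev - cur) / strength)   # weight falls with difference
    w_next = np.exp(-np.abs(nxt - cur) / strength)
    total = w_prev + 1.0 + w_next                     # current frame has weight 1
    return (w_prev * prev + cur + w_next * nxt) / total
```

Because the weights collapse toward zero for dissimilar pixels, a scene cut in the next frame barely influences the filtered current frame.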


Patent
14 Jun 1993
TL;DR: In this paper, a set of image frame-based 'predictors' and an associated set of thresholds are used in the iterative search and sort process to reduce computational complexity.
Abstract: Locations of respective image frames contained on an image recording, such as a continuous color photographic film strip scanned by a digitizing opto-electronic scanner, are identified by storing scanline data produced by the scanner in a digital database, and processing the stored scanline data in accordance with a set of image frame identification operators, which iteratively identify locations of nominally valid frames, beginning with the identification of all well formed frames. Each well formed frame has prescribed image frame attributes including at least a spatial region of image modulance bounded by leading and trailing edges adjacent to Dmin interframe gaps. The iterative identification procedure includes `chopping` less than well formed frames, sorting frame regions based upon geometry considerations and identifying and adjusting the size of oversized and undersized frames. To reduce computational complexity a set of image frame-based `predictors` and an associated set of thresholds are used in the iterative search and sort process.

Patent
Hyun-Soo Shin1
06 Jul 1993
TL;DR: A motion vector detection method for a digital video signal includes an edge image detecting process that detects edges in the video signal within a current frame and a previous frame, an image converting process that converts the detected edge image into a binary-coded image, and a motion vector detecting process that operates only on the edge part of the binary-coded image.
Abstract: A motion vector detecting method of a digital video signal includes an edge image detecting process for detecting an edge for the video signal within a current frame and a previous frame, an image converting process for converting the edge detected image from the edge image detecting process into a binary-coded image, and a motion vector detecting process for detecting an interframe motion vector with respect to only an edge part from the binary-coded image.
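A sketch of the edge-then-match pipeline follows. The gradient-threshold edge detector, the mismatch-count cost, and the search range are illustrative stand-ins for the patent's (unspecified) choices:

```python
import numpy as np

def edge_map(frame, thresh=20):
    """Binary edge image from horizontal and vertical gradients."""
    gx = np.abs(np.diff(frame.astype(float), axis=1, prepend=frame[:, :1]))
    gy = np.abs(np.diff(frame.astype(float), axis=0, prepend=frame[:1, :]))
    return ((gx + gy) > thresh).astype(np.uint8)

def motion_vector(prev, cur, search=3):
    """Full search over +/-search pixels, matching only the binary edge
    images rather than full grey-level frames."""
    e_prev, e_cur = edge_map(prev), edge_map(cur)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(e_prev, dy, axis=0), dx, axis=1)
            cost = np.count_nonzero(e_cur != shifted)   # edge mismatches
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv
```

Matching one-bit edge images instead of grey-level blocks is what buys the method its reduced computation.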

Patent
30 Jun 1993
TL;DR: In this paper, the authors proposed an integrated frame relay protocol that allows both voice and data frames to be handled in an integrated relay system, where a Voice Frame Identifier bit is defined in the address field of a frame.
Abstract: Modifications to existing frame relay communication protocols are described which permit both voice and data frames to be handled in an integrated frame relay system. A Voice Frame Identifier bit is defined in the address field of a frame. When an intermediate node detects a voice frame, a CRC operation is performed using only the frame header; that is, voice information is excluded from the computation. The frame is flagged for priority processing in the node. When the intermediate node detects a data frame the CRC operation uses both the header and the data fields.
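The header-only check for voice frames can be sketched in a few lines. CRC-32 from `zlib` and the flag's bit position are stand-ins for the actual frame relay FCS and address-field layout:

```python
import zlib

VOICE_FLAG = 0x80  # hypothetical position of the Voice Frame Identifier bit

def frame_crc(header: bytes, payload: bytes) -> int:
    """Voice frames: CRC over the header only, so bit errors in the
    speech payload never force a frame discard. Data frames: CRC over
    header plus payload, as in ordinary frame relay."""
    if header[0] & VOICE_FLAG:           # Voice Frame Identifier set
        return zlib.crc32(header)
    return zlib.crc32(header + payload)
```

Excluding voice samples from the check trades bit-exactness (which speech tolerates) for fewer retransmissions and lower delay.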

Patent
03 Mar 1993
TL;DR: In this paper, a method of processing a digital video signal to derive motion vectors representing motion between successive fields or frames compares the contents of blocks of pixels in a first field or frame with the contents of a plurality of blocks of pixels in a following field or frame, and produces for each block in the first field or frame a correlation surface representing the difference between the contents so compared in the two fields or frames.
Abstract: A method of processing a digital video signal to derive motion vectors representing motion between successive fields or frames of the video signal comprises comparing the contents of blocks of pixels in a first field or frame of the video signal with the contents of a plurality of blocks of pixels in a following field or frame, and producing for each block in the first field or frame a correlation surface representing the difference between the contents so compared in the two fields or frames. A grown correlation surface is produced for each block in the first field or frame by weighting the correlation surfaces for that block and a plurality of other blocks in an area around that block so as to accentuate features of the correlation surface for that block relative to those for the other blocks, and summing the weighted correlation surfaces. From each grown correlation surface, a motion vector is derived representing the motion of the content of the corresponding block between the two frames in dependence upon a minimum difference value represented by the grown correlation surface.

Patent
Ronald A. Frederick1
02 Nov 1993
TL;DR: In this article, a video signal is converted to digital form and the data of sequential frames of the signal are arranged in a plurality of blocks of pixel data that numerically represent visual characteristics of the respective pixels of the frame image.
Abstract: In a method and apparatus for producing a signal for transmission to a receiver, a video signal is converted to digital form and the data of sequential frames of the signal are arranged in a plurality of blocks of pixel data that numerically represent visual characteristics of the respective pixels of the frame image. Each block is further organized as a matrix of pixel data. The pixel data of the blocks of a "previous" video frame, and a current video frame, are stored in a memory. A row of each block of the current video signal is compared with the corresponding row of the previous video frame, and a list is made of blocks in which the average difference of the pixel data exceeds a predetermined threshold. The pixel data of the listed blocks is compressed with lossy compression and is encoded for transmission along with high definition data of a predetermined number of blocks of unchanged data. The data of the "previous" frame stored in memory is updated in memory, to continually store a replica of an image that corresponds to the image that should be currently stored in the receiving station.
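The row-based change test can be sketched as follows. Comparing one chosen row per block (rather than the whole block) keeps the per-frame test cheap; the block size, row index, and threshold here are illustrative:

```python
import numpy as np

def changed_blocks(prev, cur, block=8, row=0, thresh=5.0):
    """Return (block_row, block_col) indices of blocks whose chosen
    row's mean absolute pixel difference from the previous frame
    exceeds the threshold."""
    h, w = cur.shape
    out = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            r_cur = cur[by + row, bx:bx + block].astype(float)
            r_prev = prev[by + row, bx:bx + block].astype(float)
            if np.mean(np.abs(r_cur - r_prev)) > thresh:
                out.append((by // block, bx // block))
    return out
```

Only the listed blocks would then be compressed and transmitted, with the sender's stored "previous" frame updated to mirror the receiver's state.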

Patent
08 Mar 1993
TL;DR: In this article, motion vectors from one video frame to another are detected by segmenting a present frame of video data into plural blocks and then comparing a block in the present frame to a corresponding block in a preceding frame to detect rotational and zoom movement of the present block relative to the preceding block, in addition to rectilinear movement.
Abstract: Motion vectors from one video frame to another are detected by segmenting a present frame of video data into plural blocks and then comparing a block in the present frame to a corresponding block in a preceding frame to detect rotational and zoom movement of the present block relative to the preceding block, in addition to rectilinear movement.

Journal ArticleDOI
TL;DR: Real-time video transmission using the Hybrid Extended MPEG (Bellcore's proposal to ISO/MPEG) is considered and several schemes to reduce the delay are considered and compared with regular coding schemes in terms of image quality, end-to-end delay and performance of statistical multiplexing.
Abstract: The Motion Picture Experts Group (MPEG) video-coding algorithm is regarded as a promising coding algorithm for coding full-motion video. However, since MPEG was originally designed for storage applications, some problems must be solved before the algorithm can be applied to interactive services. Due to the use of periodic intraframe coding and bidirectional interframe prediction, the end-to-end delay of the MPEG algorithm is much larger than that of the H.261 algorithm. In packet video transmission, the large peak in bit rate caused by periodic intraframe coding may lower performance of statistical multiplexing. In this paper, real-time video transmission using the Hybrid Extended MPEG (Bellcore's proposal to ISO/MPEG) is considered. First, the end-to-end delay of the Hybrid Extended MPEG algorithm is analyzed. Then several schemes to reduce the delay are considered and compared with regular coding schemes in terms of image quality, end-to-end delay and performance of statistical multiplexing. Error resilience of the presented schemes is also tested by simulations assuming cell loss. It is shown that the presented schemes improved the end-to-end delay and performance of statistical multiplexing significantly.

Journal ArticleDOI
TL;DR: A generic video codec, able to compress image sequences efficiently regardless of their input formats, is presented, which can handle both interlaced and progressive input sequences, the temporal redundancies being exploited by interframe/interfield coding.
Abstract: We present a generic video codec, which is able to compress image sequences efficiently regardless of their input formats. In addition, this codec supports a wide range of bit rates, without significant changes in its main architecture. A multiresolution representation of the data is generated by a Gabor-like wavelet transform. The motion estimation is performed by a locally adaptive multigrid block-matching technique. This codec can handle both interlaced and progressive input sequences, the temporal redundancies being exploited by interframe/interfield coding. A perceptual quantization of the resulting coefficients is then performed, followed by adaptive entropy coding. Simulations using different test sequences demonstrate a reconstructed signal of good quality for a wide range of bit rates, thereby demonstrating that this codec can perform generic coding with reduced complexity and high efficiency.

Patent
Mitsuru Maeda1
16 Sep 1993
TL;DR: In this paper, a method and apparatus for displaying images is presented in which intra-frame encoding is performed on image data at N frame intervals, where the thinned-out frames are encoded by prediction from the preceding frame codes.
Abstract: A method and apparatus for displaying images is presented in which intra-frame encoding is performed on image data at N frame intervals, where the thinned-out frames are encoded by prediction from the preceding frame codes. The frame being decoded is identified when an instruction for temporarily stopping a moving image and displaying it as a still image is given by an operation panel. If the frame being decoded is not an interpolative frame, the decoded image data is displayed when the decoding operation is complete. If, on the other hand, the frame being decoded is an interpolative frame, the intra-frame encoded frame nearest to the frame being decoded, or a predictive frame, is displayed.

Patent
Shrikant N. Parikh1, Hari N. Reddy1
02 Dec 1993
TL;DR: In this paper, a method for lossless compression of full motion video is provided that exploits the redundancy between a greater number of frames than in the prior art.
Abstract: The present invention provides a method for lossless compression of full motion video by utilizing the redundancy between a greater number of frames than in the prior art. Each frame is analyzed for a characteristic of change and only the change information is stored throughout the sequence of frames. Knowledge of future frames is also utilized to store base information in a library for use with the future frames.

Journal Article
TL;DR: In this paper, a new line spectrum pair (LSP) vector quantization method was proposed which uses moving average (MA) interframe prediction of the parameters of the LSP vector.
Abstract: This paper proposes a new efficient line spectrum pair (LSP) vector quantization method which uses moving average (MA) interframe prediction of the parameters. This method has two advantages over auto-regressive prediction: the degradation of decoded parameters caused by bit errors affects only a few of the following frames, and codeword decoding can start at any frame. Using MA prediction, the final reconstructed LSP vector (quantized vector) of the current frame is represented as a linear combination of current and previous frame code vectors. In this paper we describe ways to achieve a more efficient LSP coder when each frame has four 10-ms subframes. The spectral distance obtained by this method at 30 bits per 40 ms is better than that obtained using conventional multi-stage VQ without interframe correlation at 40 bits per 40 ms.
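The MA-prediction quantizer can be sketched as below: the reconstruction is a fixed linear combination of the current and previous K code vectors, so a corrupted codeword only pollutes K subsequent frames. The codebook and MA coefficients here are illustrative, not the paper's trained values:

```python
import numpy as np

def ma_quantize(vectors, codebook, coeffs):
    """MA interframe predictive VQ sketch. coeffs[0] scales the current
    code vector; coeffs[1:] scale the K most recent past code vectors.
    Reconstruction: coeffs[0]*code + sum(coeffs[k]*past_code[k])."""
    K = len(coeffs) - 1
    past = [np.zeros_like(vectors[0]) for _ in range(K)]
    recon = []
    for v in vectors:
        pred = sum(c * p for c, p in zip(coeffs[1:], past))
        target = (v - pred) / coeffs[0]        # ideal current code vector
        idx = min(range(len(codebook)),
                  key=lambda i: np.sum((codebook[i] - target) ** 2))
        code = codebook[idx]
        recon.append(coeffs[0] * code + pred)
        past = [code] + past[:-1]              # slide the MA memory
    return recon
```

Because `past` holds code vectors (not reconstructions), the decoder's state after K clean frames is identical regardless of earlier errors, which is the error-resilience advantage over autoregressive prediction.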

Proceedings ArticleDOI
29 Nov 1993
TL;DR: In this paper, the impact of autocorrelation of variable bit rate (VBR) video sources on real-time scheduling algorithms is investigated: the impact of long-term, or interframe, correlation is shown to be negligible, while that of short-term, or intraframe, correlation can be significant.
Abstract: What is the impact of autocorrelation of variable bit rate (VBR) video sources on real-time scheduling algorithms? Our results show that the impact of long term, or interframe, autocorrelation is negligible, while the impact of short term, or intraframe, autocorrelation can be significant. Such results are essentially independent of the video coding scheme employed. To derive these results, we introduce a model that is based on statistical analysis performed on actual video data. Our model accurately captures the distribution and the autocorrelation function of the source bit stream on both the frame and the slice level. We show that the original video data sequence can be modeled as a collection of stationary subsequences called scenes. Within a scene, a model is derived for both the sequence of frames and the sequence of slices. In previous work at the slice level, the pseudo-periodicity of the autocorrelation function made it difficult to develop a simple yet accurate model. One of the new elements introduced in this work is that we present a generalization of previous methods, that can easily capture this pseudo-periodicity and is suited for modeling a greater variety of autocorrelation functions. The generality of our model lies in that, by simply tuning a few parameters, it is able to reproduce the statistical behavior of sources with different types and levels of correlation.

Proceedings ArticleDOI
22 Oct 1993
TL;DR: A novel technique to dynamically adapt motion interpolation structures by temporal segmentation is presented, and the results compare favorably with those for conventional fixed GOP structures.
Abstract: In this paper we present a novel technique to dynamically adapt motion interpolation structures by temporal segmentation. The interval between two reference frames is adjusted according to the temporal variation of the input video. The difficulty of bit rate control for this dynamic group of pictures (GOP) structure is resolved by taking advantage of temporal masking in human vision. Six different frame types are used for efficient bit rate control, and telescopic search is used for fast motion estimation because frame distances between reference frames are dynamically varying. Constant picture quality can be obtained by variable bit rate coding using this approach and the statistical bit rate behavior of the coder is discussed. Advantages for low bit rate coding and storage media applications and implications for HDTV coding are discussed. Simulations on test video including HDTV sequences are presented for various GOP structures and different bit rates, and the results compare favorably with those for conventional fixed GOP structures.

Patent
Brian Astle1
21 May 1993
TL;DR: In this article, motion compensation is used to estimate a portion of a block of the image frame being decoded using a region from the previous image frame that abuts a subimage boundary and is smaller than the block.
Abstract: Methods and apparatuses for encoding video images using motion estimation and decoding encoded video images using motion compensation, where the motion estimation and/or the motion compensation is limited to subimages of the video images as defined by boundaries. Although motion compensation may not be used to estimate pels using data from the previous image frame that lies across the subimage boundaries, motion compensation may be used to estimate a portion of a block of the image frame being decoded using a region from the previous image frame that abuts a subimage boundary and is smaller than the block. The rest of the block may be estimated either by retaining the corresponding pels from the previous image frame or by replicating the pels from the previous image frame that lie along the subimage boundary.

Proceedings ArticleDOI
27 Apr 1993
TL;DR: An interframe differential coding scheme is presented for LSFs that is computationally efficient and easy to implement, and can be used in low-bit-rate vocoders.
Abstract: In many vocoders LSFs (line spectrum frequencies) are used to encode the linear predictive coding (LPC) parameters. An interframe differential coding scheme is presented for LSFs. The LSFs of the current speech frame are predicted by using both the LSFs of the previous frame and some of the LSFs of the current frame. Then the difference vector resulting from prediction is vector quantized. The proposed scheme is computationally efficient and easy to implement, and can be used in low-bit-rate vocoders.
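The prediction step, using both the previous frame's LSFs and already-coded LSFs within the current frame, can be sketched as below. The prediction weights are illustrative, the vector quantization of the residual is omitted, and a real coder would predict from quantized rather than true values:

```python
import numpy as np

def encode_lsf(frames, a=0.5, b=0.3):
    """Predict each LSF from the same LSF in the previous frame
    (weight a) and the preceding LSF of the current frame (weight b);
    return the residuals that would be vector quantized."""
    prev = np.zeros_like(frames[0])
    residuals = []
    for f in frames:
        pred = np.empty_like(f)
        for i in range(len(f)):
            intra = f[i - 1] if i > 0 else 0.0   # already-coded current-frame LSF
            pred[i] = a * prev[i] + b * intra
        residuals.append(f - pred)
        prev = f
    return residuals

def decode_lsf(residuals, a=0.5, b=0.3):
    """Invert the prediction, rebuilding each frame LSF by LSF."""
    prev = np.zeros_like(residuals[0])
    frames = []
    for r in residuals:
        f = np.empty_like(r)
        for i in range(len(r)):
            intra = f[i - 1] if i > 0 else 0.0
            f[i] = r[i] + a * prev[i] + b * intra
        frames.append(f)
        prev = f
    return frames
```

The point of the scheme is that these residuals have much smaller variance than the raw LSFs, so the subsequent vector quantizer spends fewer bits for the same spectral distortion.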

Journal ArticleDOI
TL;DR: A new two-view motion algorithm is presented and then extended to long sequence motion analysis, which automatically finds the proper model that applies to an image sequence and gives the globally optimal solution for the motion and structure parameters under the chosen model.

Journal ArticleDOI
TL;DR: A new algorithm for interframe interpolation of cinematic sequences that suppresses artifacts caused by all three major problems associated with motion compensated cinematic interpolation: interframe occlusion, interframe zooming, and figure-ground ambiguity is presented.

Journal ArticleDOI
01 Dec 1993
TL;DR: Both within-training and out-of-training tests have demonstrated the robustness of the proposed two-dimensional differential LSP coding method to data variation.
Abstract: A new line spectrum pair (LSP) encoding method, two-dimensional differential LSP (2DdLSP) coding, is proposed. This 2DdLSP approach simultaneously takes advantage of the strong interframe and intraframe correlation of LSP parameters. The parameters to be quantised are the prediction residuals, which have smaller variance than the LSP parameters themselves. This is the key point for the success of the 2DdLSP method. Three quantisation schemes, a scalar quantisation and two vector quantisations (VQs), are presented. The best result in simulation is achieved by using the product codebook VQ. A spectral distortion of 1 dB at 19 bits per frame can be obtained when the frame period is 10 ms. Both within-training and out-of-training tests have demonstrated the robustness of the proposed method to data variation.

Patent
23 Aug 1993
TL;DR: In this article, the difference information of a signal from a frame memory and an input signal is used to improve the coding efficiency by changing a filter intensity cutting off a high frequency component thereby eliminating the high-frequency component from the signal to be coded.
Abstract: PURPOSE: To improve coding efficiency by using difference information between a signal from a frame memory and the input signal to change the intensity of a filter that cuts off high-frequency components, thereby eliminating the high-frequency components from the signal to be coded. CONSTITUTION: A filter control section 21 uses an input picture signal 12 and a motion compensation prediction signal 14 to generate a filter control signal 23. An adaptive filter section 22 performs filtering for high-frequency component elimination in response to the signal 23. The filtered prediction signal 24 is subtracted from the signal 12 by a subtractor 3 to generate a prediction error signal 15. A coding section 4 quantizes the signal 15 to generate coded error information 16. A local decoding section 5 decodes the coded information 16 to output a local decoding signal 17. An adder 6 adds the signals 24 and 17 to generate the local decoding signal 18, which is written into a frame memory 1.

Journal ArticleDOI
TL;DR: The NUCLEI (Nagasaki University codec using large interframe prediction errors as important parts) method is proposed in this paper as a new interframe coding method for the asynchronous transfer mode (ATM).
Abstract: The NUCLEI (Nagasaki University codec using large interframe prediction errors as important parts) method is proposed in this paper as a new interframe coding method for the asynchronous transfer mode (ATM). Characteristics of this method are as follows: 1) interframe predictions are done using a full band, considering the high coding efficiency during the normal state; 2) interframe predictions are done even after cell loss, considering the high coding efficiency after cell loss; and 3) utilizing the fact that the small moving areas are less important than the large moving areas in dynamic images (motion pictures), large moving areas or blocks with large interframe differences are allocated to priority (higher priority) cells as priority blocks and comparatively small blocks are allocated to nonpriority (lower priority) cells as nonpriority blocks. These characteristics are realized by assigning frame memories for storing priority block signals in addition to frame memories storing all images. It is shown by computer simulations that good interframe coding characteristics can be realized both during the normal state and after a cell loss.