
Inter frame

About: Inter frame is a research topic. Over the lifetime, 4154 publications have been published within this topic receiving 63549 citations.


Papers
Proceedings ArticleDOI
24 Oct 1999
TL;DR: A balanced twin-description interframe video coder is designed and performance results are presented for video transmission over packet networks with packet losses.
Abstract: A balanced twin-description interframe video coder is designed and performance results are presented for video transmission over packet networks with packet losses. The coder is based on a predictive multiple description quantizer structure called mutually-refining DPCM (MR-DPCM). The novel feature of this predictive quantizer is that the decoder and encoder filter states track in either of the two single-channel modes as well as in the two-channel mode. The performance, and indeed the suitability, of the multiple description approach for a network with packet losses depends on the packetization method. Two packetization methods are considered: a "correct" one and a low-latency but "incorrect" one. Performance results are presented for synthetic sources as well as for a video sequence under a variety of conditions.

132 citations
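The mutual-refinement idea can be illustrated with a pair of DPCM loops whose uniform quantizers are staggered by half a step. This is only a minimal sketch of the general technique, not the paper's MR-DPCM: the step size, predictor coefficient, and simple averaging rule for the two-channel mode are all assumptions.

```python
import numpy as np

def mr_dpcm_encode(x, step=8.0, a=0.9):
    """Two-description DPCM sketch: each channel runs its own predictor loop,
    with the second channel's quantizer offset by half a step."""
    s1 = s2 = 0.0
    d1, d2 = [], []
    for v in x:
        e1, e2 = v - a * s1, v - a * s2
        q1 = step * np.round(e1 / step)                          # mid-tread quantizer
        q2 = step * np.round((e2 - step / 2) / step) + step / 2  # staggered by step/2
        d1.append(q1)
        d2.append(q2)
        s1 = a * s1 + q1  # encoder tracks each decoder's filter state
        s2 = a * s2 + q2
    return np.array(d1), np.array(d2)

def dpcm_decode(desc, a=0.9):
    """Single-channel decoder: replays the same predictor recursion."""
    s, out = 0.0, []
    for q in desc:
        s = a * s + q
        out.append(s)
    return np.array(out)

# Single-channel modes reconstruct from one description; the two-channel
# mode combines the staggered reconstructions.
x = np.cumsum(np.random.randn(256))
d1, d2 = mr_dpcm_encode(x)
r1, r2 = dpcm_decode(d1), dpcm_decode(d2)
central = (r1 + r2) / 2.0
```

Because each loop quantizes its own prediction residual, every single-channel reconstruction stays within half a quantizer step of the source sample, regardless of which packets of the other description are lost.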

Patent
Rohit Agarwal
29 Jun 1994
TL;DR: In this article, reference frames are generated by selectively filtering blocks of decoded video frames based on a comparison of an energy measure value generated for the block and a threshold value corresponding to the quantization level used to encode the block.
Abstract: Reference frames are generated by selectively filtering blocks of decoded video frames. The decision whether to filter a block is based on a comparison of an energy measure value generated for the block and an energy measure threshold value corresponding to the quantization level used to encode the block. The energy measure threshold value for a given quantization level is selected by analyzing the results of encoding and decoding training video frames using that quantization level. The reference frames are used in encoding and decoding video frames using interframe processing.

132 citations
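The per-block decision rule from the abstract can be sketched as follows. The 8x8 block size, AC-energy measure, box filter, and threshold values are all assumptions for illustration; the patent derives its thresholds by analyzing encoded and decoded training frames at each quantization level.

```python
import numpy as np

# Hypothetical per-quantization-level thresholds (learned from training data
# in the patent; hard-coded here for the sketch).
ENERGY_THRESHOLD = {8: 150.0, 16: 400.0, 31: 1200.0}

def box3x3(block):
    """3x3 box filter with edge replication; a stand-in for the actual filter."""
    p = np.pad(block, 1, mode='edge')
    out = np.zeros(block.shape)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += p[dy:dy + block.shape[0], dx:dx + block.shape[1]]
    return out / 9.0

def build_reference(decoded, qlevel, bs=8):
    """Selectively filter blocks of a decoded frame: a block is smoothed only
    when its AC energy exceeds the threshold for its quantization level."""
    ref = decoded.astype(float).copy()
    for y in range(0, ref.shape[0], bs):
        for x in range(0, ref.shape[1], bs):
            blk = ref[y:y + bs, x:x + bs]
            energy = np.sum((blk - blk.mean()) ** 2)  # energy measure for the block
            if energy > ENERGY_THRESHOLD[qlevel]:
                ref[y:y + bs, x:x + bs] = box3x3(blk)
    return ref
```

Flat blocks pass through untouched, so the filtering suppresses quantization noise in busy blocks without blurring the whole reference frame.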

Proceedings ArticleDOI
01 Oct 1994
TL;DR: This paper designs and specifies an algorithm for lossless smoothing of compressed video and presents a theorem which guarantees that, if K ≥ 1, the algorithm finds a solution that satisfies the delay bound.
Abstract: Interframe compression techniques, such as those used in MPEG video, give rise to a coded bit stream where picture sizes differ by a factor of 10 or more. As a result, buffering is needed to reduce (smooth) rate fluctuations of encoder output from one picture to the next; without smoothing, the performance of networks that carry such video traffic would be adversely affected. Various techniques have been suggested for controlling the output rate of a VBR encoder to alleviate network congestion or prevent buffer overflow. Most of these techniques, however, are lossy, and should be used only as a last resort. In this paper, we design and specify an algorithm for lossless smoothing. The algorithm is characterized by three parameters: D (delay bound), K (number of pictures with known sizes), and H (lookahead interval). We present a theorem which guarantees that, if K ≥ 1, the algorithm finds a solution that satisfies the delay bound. (Although the algorithm and theorem were motivated by MPEG video, they are applicable to the smoothing of compressed video in general). To study performance characteristics of the algorithm, we conducted a large number of experiments using statistics from four MPEG video sequences.

129 citations
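A toy version of delay-bounded lossless smoothing can make the idea concrete. This is not the paper's algorithm (its K and H parameters and the feasibility theorem are omitted): at each picture interval, the sketch transmits the minimum amount that still gets every arrived picture out within D intervals of its arrival.

```python
def lossless_smooth(sizes, D):
    """sizes[i] = bits of picture i, arriving at interval i; each picture must
    be fully transmitted by the end of interval i + D. Returns per-interval
    transmit amounts. Lossless: every bit is eventually sent, none dropped."""
    n = len(sizes)
    cum = [0.0]
    for s in sizes:
        cum.append(cum[-1] + s)
    sent, schedule = 0.0, []
    for t in range(n + D):
        need = 0.0
        for i in range(min(n, t + 1)):      # pictures that have arrived
            if i + D >= t:                  # deadline not yet passed
                remaining = i + D - t + 1   # intervals left, including this one
                need = max(need, (cum[i + 1] - sent) / remaining)
        amt = min(need, cum[min(t + 1, n)] - sent)  # cannot send unarrived bits
        schedule.append(amt)
        sent += amt
    return schedule
```

Spreading each picture over up to D + 1 intervals is what reduces the 10x picture-to-picture rate fluctuation the abstract describes, at the cost of up to D intervals of added delay.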

Patent
Taro Imagawa, Takeo Azuma
17 Apr 2007
TL;DR: In this article, an image generation apparatus includes an image receiving unit which receives a first video sequence including frames having a first resolution and a second video sequence having a second resolution which is higher than the first resolution.
Abstract: An image generation apparatus includes an image receiving unit which receives a first video sequence including frames having a first resolution and a second video sequence including frames having a second resolution which is higher than the first resolution. Each frame of the first video sequence is obtained with a first exposure time, and each frame of the second video sequence is obtained with a second exposure time which is longer than the first exposure time. The image generation apparatus also includes an image integration unit which generates, from the first video sequence and the second video sequence, a new video sequence including frames having a resolution which is equal to or higher than the second resolution, at a frame rate which is equal to or higher than a frame rate of the first video sequence. The new video sequence is generated by reducing a difference between a value of each frame of the second video sequence and a sum of values of frames of the new video sequence which are included within an exposure period of the frame of the second video sequence.

128 citations
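The exposure-sum constraint in the abstract suggests an alternating-projection reconstruction. The sketch below is an assumed simplification, not the patent's method: one long-exposure high-resolution frame per group of short-exposure low-resolution frames, block-mean downsampling, and no motion model.

```python
import numpy as np

def integrate(low_seq, high_frame):
    """Estimate T sharp high-res frames that (a) block-average down to the
    short-exposure low-res frames and (b) sum to the long-exposure high-res
    frame, by alternating exact projections onto the two constraint sets."""
    T = len(low_seq)
    H, W = high_frame.shape
    h, w = low_seq[0].shape
    sy, sx = H // h, W // w
    est = np.repeat(high_frame[None] / T, T, axis=0)  # equal-split initial guess
    for _ in range(5):
        for t in range(T):  # (a) force block means to match each low-res frame
            down = est[t].reshape(h, sy, w, sx).mean(axis=(1, 3))
            est[t] += np.kron(low_seq[t] - down, np.ones((sy, sx)))
        err = est.sum(axis=0) - high_frame  # (b) force the exposure-period sum
        est -= err / T
    return est
```

When the inputs are consistent, the sum correction in step (b) has zero mean within every block, so it does not disturb the constraint enforced in step (a) and both constraints are met exactly.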

Journal ArticleDOI
TL;DR: The paper provides an explanation of MCTF methods and the resulting 3D wavelet representation, and shows results obtained in the context of encoding digital cinema (DC) materials.
Abstract: Scalability at the bitstream level is an important feature for encoded video that is to be transmitted and stored with a variety of target rates or to be replayed on devices with different capabilities and resolutions. This is attractive for digital cinema applications, where the same encoded source representation could seamlessly be used for purposes of archival and various distribution channels. Conventional high-performance video compression schemes are based on the method of motion-compensated prediction, using a recursive loop in the prediction process. Due to this recursion and the inherent drift in cases of deviation between encoder and decoder states, scalability is difficult to realize and typically incurs a penalty in compression performance for prediction-based coders. The method of interframe wavelet coding overcomes this limitation by replacing the prediction along the time axis by a wavelet filter, which can nevertheless be operated in combination with motion compensation. Recent advances in motion-compensated temporal filtering (MCTF) have proven that combination with arbitrary motion compensation methods is possible. Compression performance is achieved that is comparable with state-of-the-art single-layer coders targeting only one rate. The paper provides an explanation of MCTF methods and the resulting 3D wavelet representation, and shows results obtained in the context of encoding digital cinema (DC) materials.

127 citations
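The core of MCTF is a lifting implementation of a temporal wavelet transform. Below is a minimal sketch using the Haar filter with no motion compensation; real MCTF applies the predict and update steps along motion trajectories, which this omits.

```python
import numpy as np

def mctf_analysis(frames):
    """One level of temporal Haar lifting over an even-length frame stack."""
    even, odd = frames[0::2], frames[1::2]
    high = odd - even         # predict step: temporal detail frames
    low = even + high / 2.0   # update step: temporal average frames
    return low, high

def mctf_synthesis(low, high):
    """Inverse lifting. Exact inversion is what removes the recursive
    prediction loop, so there is no encoder/decoder drift."""
    even = low - high / 2.0
    odd = even + high
    frames = np.empty((2 * len(low),) + low.shape[1:])
    frames[0::2], frames[1::2] = even, odd
    return frames
```

Temporal scalability falls out directly: decoding only the `low` frames yields a half-frame-rate version, and applying the analysis recursively to `low` builds the multi-level 3D wavelet representation the paper describes.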


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Feature extraction: 111.8K papers, 2.1M citations, 86% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Convolutional neural network: 74.7K papers, 2M citations, 83% related
Image processing: 229.9K papers, 3.5M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  24
2022  72
2021  62
2020  84
2019  110
2018  97