
Inter frame

About: Inter frame is a research topic. Over the lifetime, 4154 publications have been published within this topic receiving 63549 citations.


Papers
Patent
Shijun Sun1
08 Mar 2002
TL;DR: In this paper, different combinations of global motion parameters are estimated for a current frame and interpolated to derive local motion vectors for individual image blocks; reference-frame blocks identified by these vectors are compared to the blocks in the current frame.
Abstract: Different combinations of global motion parameters are estimated for a current frame and interpolated to derive local motion vectors for individual image blocks. Image blocks in a reference frame identified by the local motion vectors are compared to the image blocks in the current frame. The estimated global motion parameters that provide the best match between the image blocks in the current frame and the reference frame are selected for encoding the current frame. Selected sub-regions of temporally consecutive image frames can be used to reduce the computational burden of global motion estimation and provide more robust global motion estimation results. A data truncation method can also be used to remove bias caused by foreground moving objects.
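The selection step described above can be sketched roughly as follows. The toy below reduces the global motion model to pure translations for brevity (the patent covers richer parameter combinations), scores each candidate by the total SAD of the block matches it induces, and keeps the best; all function names and parameters are illustrative, not from the patent.

```python
import numpy as np

def block_sad(ref, cur, bx, by, dx, dy, bs=8):
    """SAD between a current-frame block and its motion-shifted reference block."""
    rh, rw = ref.shape
    rx, ry = bx + dx, by + dy
    if rx < 0 or ry < 0 or rx + bs > rw or ry + bs > rh:
        return np.inf  # this candidate shifts the block outside the reference frame
    return int(np.abs(cur[by:by+bs, bx:bx+bs].astype(int)
                      - ref[ry:ry+bs, rx:rx+bs].astype(int)).sum())

def select_global_motion(ref, cur, candidates, bs=8):
    """Pick the candidate global motion (dx, dy) whose induced block matches
    give the lowest total SAD over the current frame."""
    h, w = cur.shape
    best, best_cost = None, np.inf
    for dx, dy in candidates:
        cost = sum(block_sad(ref, cur, bx, by, dx, dy, bs)
                   for by in range(0, h - bs + 1, bs)
                   for bx in range(0, w - bs + 1, bs))
        if cost < best_cost:
            best, best_cost = (dx, dy), cost
    return best
```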

52 citations

Proceedings ArticleDOI
01 Nov 2012
TL;DR: WaveCast is proposed, a new video multicast approach that uses motion-compensated temporal filtering (MCTF) to exploit inter-frame redundancy and a conventional framework to transmit motion information, so that the motion vectors (MVs) can be reconstructed losslessly.
Abstract: Wireless video broadcasting is a popular mobile-network application. However, traditional approaches offer limited support for users with diverse channel conditions. The newly emerged Softcast approach provides smooth multicast performance but is not very efficient at inter-frame compression. In this work, we propose a new video multicast approach: WaveCast. Unlike Softcast, WaveCast utilizes motion-compensated temporal filtering (MCTF) to exploit inter-frame redundancy, and utilizes a conventional framework to transmit motion information such that the MVs can be reconstructed losslessly. Meanwhile, WaveCast transmits the transform coefficients in lossy mode and performs gracefully in multicast. In experiments, WaveCast outperforms Softcast by 2 dB in video PSNR at low channel SNR, and outperforms an H.264-based framework by up to 8 dB in broadcast.
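The temporal filtering WaveCast builds on can be illustrated with the Haar lifting steps that underlie MCTF. This is an assumed, minimal sketch: it omits motion compensation entirely (real MCTF warps frames along the MVs before the predict/update steps).

```python
import numpy as np

def haar_mctf(f0, f1):
    """One level of temporal Haar lifting on a frame pair. Motion compensation
    is omitted for brevity; MCTF would align f0 with f1 using the MVs first."""
    f0 = f0.astype(np.float64)
    f1 = f1.astype(np.float64)
    high = f1 - f0          # predict step: temporal detail subband
    low = f0 + high / 2.0   # update step: temporal average subband
    return low, high

def inverse_haar_mctf(low, high):
    """Invert the lifting steps exactly (the transform is lossless)."""
    f0 = low - high / 2.0
    f1 = high + f0
    return f0, f1
```

Because lifting steps are inverted exactly, only quantization of the subbands (not the transform itself) introduces loss.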

52 citations

Proceedings ArticleDOI
10 Jan 1997
TL;DR: This work derives a procedure to determine the optimal block size that minimizes the encoding rate for a typical block-based video coder, analytically modeling the effect of block size and deriving expressions for the encoding rates of both motion vectors and difference frames as functions of block size.
Abstract: In block-based video coding, the current frame to be encoded is decomposed into blocks of the same size, and a motion vector is used to improve the prediction for each block. The motion vectors and the difference frame, which contains the blocks' prediction errors, must be encoded with bits. Typically, choosing a smaller block size will improve the prediction and hence decrease the number of difference-frame bits, but it will increase the number of motion bits since more motion vectors need to be encoded. Not surprisingly, there must be some value of the block size that optimizes the tradeoff between motion and difference-frame bits, and thus minimizes their sum. Despite the widespread experience with block-based video coders, there is little analysis or theory that quantitatively explains the effect of block size on encoding bit rate, and ordinarily the block size for a coder is chosen based on empirical experiments on video sequences of interest. In this work, we derive a procedure to determine the optimal block size that minimizes the encoding rate for a typical block-based video coder. To do this, we analytically model the effect of block size and derive expressions for the encoding rates for both motion vectors and difference frames, as functions of block size. Minimizing these expressions leads to a simple formula that indicates how to choose the block size in these types of coders. This formula also shows that the best block size is a function of the accuracy with which the motion vectors are encoded and several parameters related to key characteristics of the video scene, such as image texture, motion activity, interframe noise, and coding distortion. We implement the video coder and use our analysis to optimize and explain its performance on real video frames. © (1997) COPYRIGHT SPIE--The International Society for Optical Engineering.
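The motion-bits vs. difference-frame-bits tradeoff can be made concrete with a toy rate model. This is not the paper's derived formula, just an assumed illustration with made-up constants: motion cost grows with the number of blocks, residual cost grows with block size, and the best block size minimizes their sum.

```python
def total_rate(b, width, height, mv_bits=20.0, resid_slope=0.05):
    """Toy rate model (illustrative constants, not the paper's expressions):
    a fixed motion-vector cost per block, plus a per-pixel residual cost
    that grows linearly with block size b."""
    n_blocks = (width / b) * (height / b)
    motion = n_blocks * mv_bits                   # smaller blocks -> more MV bits
    residual = width * height * resid_slope * b   # bigger blocks -> worse prediction
    return motion + residual

def best_block_size(width, height, sizes=(4, 8, 16, 32)):
    """Pick the candidate block size minimizing the modeled total rate."""
    return min(sizes, key=lambda b: total_rate(b, width, height))
```

For a CIF frame (352x288) with these constants, the motion term A/b^2 and residual term C*b balance near b = (2A/C)^(1/3), so the sum is minimized at an intermediate block size rather than at either extreme.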

52 citations

Proceedings ArticleDOI
23 Jun 1991
TL;DR: The impact of bit error and cell loss in a hybrid DPCM/DCT (differential pulse code modulation/discrete cosine transform) coding algorithm is described through three different video sequence simulations.
Abstract: Experimental results on error concealment techniques to hide the video quality degradation due to bit errors and cell losses from the viewer's perception are presented. The impact of bit error and cell loss in a hybrid DPCM/DCT (differential pulse code modulation/discrete cosine transform) coding algorithm is described through three different video sequence simulations. Two effective schemes that conceal the information loss caused by bit errors and cell losses based on the information in the previous frame are presented. These error concealment techniques work very well in real-time playback as long as the location of bit error and cell loss is properly detected by the decoder. Impairments caused by a cell loss can be compensated for by simple replenishment in real-time playback. Although motion replenishment performs better in the case of uniformly slow motion, it requires that the network have a priority capability. Impairments due to undetected bit errors cannot be compensated for by any concealment algorithm.
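The simple replenishment scheme mentioned above can be sketched in a few lines: each block flagged as lost is replaced by the co-located block of the previous decoded frame. The function name, block size, and loss-flagging convention are illustrative.

```python
import numpy as np

def conceal_lost_blocks(prev_frame, cur_frame, lost, bs=16):
    """Simple replenishment: replace each lost block of the current frame
    with the co-located block from the previous decoded frame.
    `lost` is a list of (bx, by) top-left pixel coordinates."""
    out = cur_frame.copy()
    for bx, by in lost:
        out[by:by+bs, bx:bx+bs] = prev_frame[by:by+bs, bx:bx+bs]
    return out
```

Motion replenishment would instead copy the motion-shifted block, which tracks uniformly slow motion better but requires that the motion information arrive intact (hence the priority-capability requirement in the abstract).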

52 citations

Patent
Jian Zhang1, Reji Mathew1
08 Jun 2000
TL;DR: In this article, pixel errors are calculated for all pixels included in the sub-sampled block-matching metric, and the pixel errors belonging to the same field pattern are added together to obtain field error values (e.g. field SAD values).
Abstract: A method of estimating motion in interlaced video involves, first, a frame search (61), in which a search is conducted for the frame structure using a sub-sampled block-matching metric (e.g. sub-sampled SAD). The locations to be searched are either fixed or dynamically determined based on the minimum frame SAD (i.e. the best frame block match). Next, pixel errors are calculated (62) for all pixels included in the sub-sampled block-matching metric. Each pixel error is first identified as belonging to one of four field patterns (e.g. even-even, even-odd, odd-even and odd-odd). For each location, pixel errors can be classified into two field patterns, either even-even and odd-odd or even-odd and odd-even (63). The pixel errors belonging to the same field pattern are added together (64) to obtain field error values (e.g. field SAD values). The individual field error (or field SAD) values are used to determine (77-80) the field motion vectors. The location of the lowest field SAD value is taken as the position of best match. All pixel errors are then summed together (65) to obtain the frame error, and the location of the lowest frame error (or frame SAD) is taken as the position of best match for the frame. The frame and field motion vectors can be refined by using a full block-matching metric within a small search window (66, 73-76).
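The field-pattern bookkeeping can be sketched as follows, for one block and one candidate vertical displacement: per-pixel errors are classified by the parity of the current and reference lines (a given displacement yields exactly two of the four patterns), field SADs accumulate per pattern, and the frame SAD is their sum. This is an assumed simplification of the patent's procedure; names and shapes are illustrative.

```python
import numpy as np

def frame_and_field_sads(cur_blk, ref_blk, dy):
    """Classify per-pixel errors by field pattern (current-line parity vs
    reference-line parity) for vertical displacement dy, and accumulate
    field SADs; the frame SAD is simply the sum of the two field SADs."""
    err = np.abs(cur_blk.astype(int) - ref_blk.astype(int))
    cur_parity = np.arange(err.shape[0]) % 2        # 0 = even field, 1 = odd
    ref_parity = (np.arange(err.shape[0]) + dy) % 2
    # Even dy pairs even-even / odd-odd; odd dy pairs even-odd / odd-even.
    patterns = {(0, 0), (1, 1)} if dy % 2 == 0 else {(0, 1), (1, 0)}
    sads = {}
    for cp, rp in patterns:
        rows = (cur_parity == cp) & (ref_parity == rp)
        sads[(cp, rp)] = int(err[rows].sum())
    frame_sad = int(err.sum())
    return frame_sad, sads
```

Computing the field SADs as partial sums of the already-available pixel errors is what lets the method obtain frame and field matches from a single error pass.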

51 citations


Network Information
Related Topics (5)
- Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
- Feature extraction: 111.8K papers, 2.1M citations, 86% related
- Image segmentation: 79.6K papers, 1.8M citations, 86% related
- Convolutional neural network: 74.7K papers, 2M citations, 83% related
- Image processing: 229.9K papers, 3.5M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  24
2022  72
2021  62
2020  84
2019  110
2018  97