scispace - formally typeset
Topic

Inter frame

About: Inter frame is a research topic. Over the lifetime, 4154 publications have been published within this topic receiving 63549 citations.


Papers
Patent
Michael Horowitz
12 Aug 2002
TL;DR: In this paper, a lower value of the quantization parameter is applied near a central region of a video frame, referred to as a prime video region, which has the effect of increasing the video data bit density within that area.
Abstract: The present invention allows higher quality video images to be transmitted without a concomitant increase in a total number of video data bits transmitted per frame. Quantization parameters are applied to coefficients of macroblocks within a given video frame. A lower value of the quantization parameter is applied near a central region of a video frame. This central region is referred to as a prime video region. Applying the lower quantization parameter to the prime video region has the effect of increasing the video data bit density within that area. Outside of the prime video region, the video data bit density per macroblock is decreased so as to have a zero net-gain in bit density over the entire video frame. Furthermore, there may be a plurality of prime video regions where quantization parameters are dynamically coded. In this case, the value of the quantization parameter will increase or decrease within a given prime video region based on a relative importance of a particular prime video region. Consequently, a quantization parameter matrix may vary depending on the video scene.

29 citations
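The region-weighted quantization idea above can be sketched as a per-macroblock QP map: lower the QP inside a central "prime" region and raise it outside so the frame-average QP is unchanged. The region size, base QP, and offset below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def qp_map(h_mb, w_mb, base_qp=28, delta=4, prime_frac=0.5):
    """Per-macroblock QP map: lower QP in a central prime region,
    compensated outside so the frame-average QP stays at base_qp."""
    qp = np.full((h_mb, w_mb), float(base_qp))
    ph, pw = int(h_mb * prime_frac), int(w_mb * prime_frac)
    top, left = (h_mb - ph) // 2, (w_mb - pw) // 2
    # more bits (lower QP) in the prime region
    qp[top:top + ph, left:left + pw] -= delta
    # raise QP outside to give zero net gain over the frame
    outside = np.ones_like(qp, dtype=bool)
    outside[top:top + ph, left:left + pw] = False
    qp[outside] += delta * (ph * pw) / outside.sum()
    return qp
```

A real encoder would clip the resulting values to the codec's legal QP range and could maintain several such regions with dynamically coded offsets, as the patent describes.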

Journal ArticleDOI
TL;DR: This algorithm achieves a reduction in the number of operations performed at the encoder for motion estimation by over two orders of magnitude while introducing minimal degradation to the decoded video compared with full search encoder-based motion estimation.
Abstract: This paper describes a new compression algorithm, termed network-driven motion estimation (NDME), which reduces the power dissipation of wireless video devices in a networked environment by exploiting the predictability of object motion. Since the location of an object in the current frame can often be predicted accurately from its location in previous frames, it is possible to optimally partition the motion estimation computation between the portable devices and high powered compute servers on the wired network. In network-driven motion estimation, a remote high-powered resource at the base-station (or on the wired network), predicts the motion vectors of the current frame from the motion vectors of the previous frames. The base-station sends these predicted motion vectors to a portable video encoder, where motion compensation proceeds as usual. Network-driven motion estimation adaptively adjusts the coding algorithm based on the amount of motion in the sequence, using motion prediction to code portions of the video sequence which contain a large amount of motion and conditional replenishment to code portions of the sequence which contain little scene motion. This algorithm achieves a reduction in the number of operations performed at the encoder for motion estimation by over two orders of magnitude while introducing minimal degradation to the decoded video compared with full search encoder-based motion estimation.

29 citations
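One simple way to realise the motion-vector prediction described above is linear extrapolation from the two previous frames' vectors, with small predicted motion routed to conditional replenishment. The extrapolation rule and the threshold are illustrative assumptions, not the paper's exact predictor:

```python
import numpy as np

def predict_motion_vectors(mv_prev, mv_prev2):
    """Linearly extrapolate per-block motion vectors: assume each block
    keeps moving as it did between the two previous frames."""
    return 2 * mv_prev - mv_prev2

def conditional_replenish_mask(pred_mv, threshold=0.5):
    """Blocks whose predicted motion magnitude is small can be coded by
    conditional replenishment (skip/copy) instead of motion compensation."""
    return np.linalg.norm(pred_mv, axis=-1) < threshold
```

In the paper's setting, the extrapolation would run on a wired compute server and only the predicted vectors (plus the replenishment decision) would be sent to the low-power portable encoder.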

Journal ArticleDOI
TL;DR: A generalized statistical tool is introduced to estimate key frames in a video sequence based on the inter-relationship between different features of image frames and the combiner model is designed to uniquely decide the status of a key frame.
Abstract: In this paper, a generalized statistical tool is introduced to estimate key frames in a video sequence. The tool works based on the inter-relationship between different features of image frames in a video. The image feature vectors are plotted in feature space as points, and a randomness measure is determined from the distribution of these points. The randomness measure of the feature vectors is defined with respect to simulated random point patterns and expressed as the probability of a frame being a key frame. Since, depending on the video content, more than one inter-relationship of features can be used to determine a single key frame, different probability values are derived to support a frame as a key frame. To integrate these probability values, a combiner model is designed to uniquely decide the status of a key frame. The combiner model is based on the Dempster-Shafer theory of evidence. To demonstrate the idea, randomness measures, and consequently the probabilities of a frame being a key frame, are obtained separately from spatial-domain and frequency-domain features. The combined probability value enhances the confidence in selecting a frame as a key frame. The result is tested on a number of standard video sequences and outperforms a related approach.

28 citations
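For two Bayesian belief masses over the frame {key, not-key} (e.g., one probability from spatial-domain features and one from frequency-domain features, as in the abstract), Dempster's rule of combination reduces to a closed form. The function name is hypothetical:

```python
def combine_key_frame_evidence(p1, p2):
    """Dempster's rule for two Bayesian masses over {key, not_key}.
    p1, p2: independent probabilities that the frame is a key frame."""
    agree_key = p1 * p2              # both sources support "key"
    agree_not = (1 - p1) * (1 - p2)  # both support "not key"
    conflict = p1 * (1 - p2) + (1 - p1) * p2
    if conflict == 1.0:
        raise ValueError("total conflict; evidence cannot be combined")
    # renormalise over the non-conflicting mass
    return agree_key / (agree_key + agree_not)
```

Note how two agreeing sources reinforce each other (the combined value exceeds either input), while a non-informative source (p = 0.5) leaves the other unchanged; this is the confidence-enhancing behaviour the abstract refers to.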

Journal ArticleDOI
TL;DR: A new video coding method using a reference frame [i.e., the most common frame in scene (McFIS)] generated by dynamic background modeling is proposed, which is more effective in terms of rate-distortion and computational time performance compared to the MRFs techniques.
Abstract: Motion estimation (ME) and motion compensation (MC) using variable block size, sub-pixel search, and multiple reference frames (MRFs) are the major reasons for the improved coding performance of the H.264 video coding standard over other contemporary coding standards. The concept of MRFs is suitable for repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. The requirement of index codes for the reference frames, the computational time in ME & MC, and the memory buffer for coded frames limit the number of reference frames used in practical applications. In typical video sequences, the previous frame is used as a reference frame in 68–92% of cases. In this article, we propose a new video coding method using a reference frame [i.e., the most common frame in scene (McFIS)] generated by dynamic background modeling. McFIS is more effective in terms of rate-distortion and computational time performance compared to the MRFs techniques. It also has an inherent capability of scene change detection (SCD) for adaptive group of pictures (GOP) size determination. As a result, we integrate SCD (for GOP determination) with reference frame generation. The experimental results show that the proposed coding scheme outperforms H.264 video coding with five reference frames and two relevant state-of-the-art algorithms by 0.5–2.0 dB with less computational time.

28 citations
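The dynamic background modeling behind McFIS can be approximated, for illustration only, by a running-average background model: static background pixels dominate the average, yielding a "most common frame in scene"-like reference. The paper's actual model is more elaborate, and `alpha` here is an assumed learning rate:

```python
import numpy as np

def build_mcfis(frames, alpha=0.05):
    """Sketch of a McFIS-like reference frame via running-average
    background modeling over a list of same-shaped frames."""
    bg = frames[0].astype(float)
    for f in frames[1:]:
        # blend each new frame in slowly; transient foreground fades out
        bg = (1 - alpha) * bg + alpha * f.astype(float)
    return bg
```

An encoder would then use this single synthesized frame as the reference for uncovered-background and repetitive-motion blocks, instead of searching several past frames.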

Journal ArticleDOI
04 Apr 2005
TL;DR: A comparative performance study demonstrates that the proposed gradient correlation method outperforms the state-of-the-art frequency-domain motion estimation method, phase correlation, in terms of sub-pixel accuracy for a range of test material and motion scenarios.
Abstract: The authors present a performance study of gradient correlation in the context of the estimation of interframe motion in video sequences. The method is based on the maximisation of the spatial gradient cross-correlation function, which is computed in the frequency domain and can therefore be implemented with fast transform algorithms. Enhancements to the baseline gradient-correlation algorithm are presented which further improve performance, especially in the presence of noise. A comparative performance study is also presented, demonstrating that the proposed method outperforms the state-of-the-art frequency-domain motion estimation method, phase correlation, in terms of sub-pixel accuracy for a range of test material and motion scenarios.

28 citations
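The phase-correlation baseline that the paper compares against can be sketched with NumPy FFTs: for a pure integer translation, the normalised cross-power spectrum inverts to a delta at the displacement. This shows the baseline, not the paper's gradient-correlation method:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation taking frame b to frame a via
    phase correlation: whiten the cross-power spectrum, inverse-FFT,
    and locate the peak."""
    R = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    R /= np.abs(R) + 1e-12          # keep phase only
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the midpoint wrap around to negative shifts
    return tuple(int(p) - n if p > n // 2 else int(p)
                 for p, n in zip(peak, a.shape))
```

Gradient correlation replaces the raw frames with their spatial gradients before correlating, which sharpens the peak and, per the abstract, improves sub-pixel accuracy and noise robustness over this baseline.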


Network Information
Related Topics (5)
- Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
- Feature extraction: 111.8K papers, 2.1M citations, 86% related
- Image segmentation: 79.6K papers, 1.8M citations, 86% related
- Convolutional neural network: 74.7K papers, 2M citations, 83% related
- Image processing: 229.9K papers, 3.5M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    24
2022    72
2021    62
2020    84
2019    110
2018    97