scispace - formally typeset
Topic

Inter frame

About: Inter frame is a research topic. Over the lifetime, 4154 publications have been published within this topic receiving 63549 citations.


Papers
Patent
08 Oct 1992
TL;DR: In this patent, frame-to-frame differences of a video segment are encoded as a compressed M×N exclusive-OR plane of pixel change values, together with location displacement control values for an output pointer into the decompressed video frame; coding the differences as exclusive-OR values makes replay bidirectional.
Abstract: A process for coding a plurality of compressed video data streams in a time-ordered sequence. Each compressed data stream includes coding of frame-to-frame differences of a video segment, which are represented as a compressed M×N exclusive-OR plane of pixel change values and location displacement control values for an output pointer into a decompressed video frame. Because the frame-to-frame differences are coded as exclusive-OR values, the replay process is bidirectional, allowing both forward and reverse playback of the video segment.
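The bidirectional property of an XOR difference plane is easy to see in code: XOR is its own inverse, so the same plane steps playback either forward or backward. The sketch below uses toy 2×2 frames and hypothetical function names; it only illustrates the XOR property, not the patented coding process.

```python
import numpy as np

def xor_plane(prev_frame, curr_frame):
    """Frame-to-frame difference expressed as a bitwise exclusive-OR plane."""
    return np.bitwise_xor(prev_frame, curr_frame)

def apply_xor(frame, plane):
    """XOR is its own inverse: applying the plane to the previous frame
    yields the current frame, and applying it to the current frame
    recovers the previous frame."""
    return np.bitwise_xor(frame, plane)

# toy 2x2 8-bit "frames" (illustrative values only)
prev = np.array([[10, 20], [30, 40]], dtype=np.uint8)
curr = np.array([[10, 25], [30, 41]], dtype=np.uint8)

plane = xor_plane(prev, curr)
assert np.array_equal(apply_xor(prev, plane), curr)  # forward playback
assert np.array_equal(apply_xor(curr, plane), prev)  # reverse playback
```

The same stored plane serves both playback directions, which is exactly why the abstract's scheme needs no separate reverse-direction data.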

55 citations

Journal ArticleDOI
TL;DR: A new coding scheme is proposed, which uses the most common frame in scene (McFIS), generated by background modeling, as a long-term reference (LTR) frame for the third unipredictive reference frame, so that foreground and background areas are expected to be referenced from the two frames in the HBP structure and the McFIS, respectively.
Abstract: Generally, the H.264/AVC video coding standard with hierarchical bipredictive picture (HBP) structure outperforms classical prediction structures such as “IPPP...” and “IBBP...” through better exploitation of data correlation using reference frames and unequal quantization settings among frames. However, multiple reference frames (MRFs) techniques are not fully exploited in the HBP scheme because of the computational requirement for B-frames, the unavailability of adjacent reference frames, and the absence of explicit sorting of the reference frames into foreground and background. To exploit MRFs fully and explicitly in background referencing, we observe that no single frame of a video is appropriate as the reference frame, since none covers adequate background of the video. To overcome these problems, we propose a new coding scheme with the HBP, which uses the most common frame in scene (McFIS), generated by background modeling, as a long-term reference (LTR) frame for the third unipredictive reference frame, so that foreground and background areas are expected to be referenced from the two frames in the HBP structure and the McFIS, respectively. There are two approaches to generating the McFIS under the proposed methodology. In the first approach, we generate a McFIS using a number of original frames of a scene in a video and then encode it as an I-frame with higher quality. For the rest of the scene, this generated I-frame is used as an LTR frame. In the second approach, we generate a McFIS from the decoded frames and then use it as an LTR frame, without the need to encode the McFIS. The first and second approaches are suitable for videos with static and dynamic backgrounds, respectively. In general, the second approach requires more computational time than the first.
The experiments confirm that the proposed scheme outperforms three state-of-the-art algorithms by improving the image quality significantly with reduced computational time.
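The core idea of a McFIS-style long-term reference is a background model built over several frames. The paper's actual background-modeling method is more elaborate; the sketch below stands in for it with a simple per-pixel median over a window of frames, which already suppresses a moving foreground and recovers the static background.

```python
import numpy as np

def make_background_reference(frames):
    """Crude background model: per-pixel median over a window of frames.
    A stand-in for the paper's McFIS generation, for illustration only."""
    stack = np.stack(frames, axis=0)
    return np.median(stack, axis=0).astype(stack.dtype)

# hypothetical scene: static background value 100, a small moving foreground
frames = [np.full((4, 4), 100, dtype=np.uint8) for _ in range(5)]
for i, f in enumerate(frames):
    f[i % 4, 0] = 200  # foreground pixel moves down the first column

mcfis = make_background_reference(frames)
# each pixel is foreground in at most 2 of 5 frames, so the median
# recovers the background value everywhere
assert np.all(mcfis == 100)
```

A frame like this, used as a long-term reference, lets background regions be predicted cleanly even when every individual frame is partially occluded by foreground.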

55 citations

Patent
Atul Puri1, Daniel Socek1, Chang-Kee Choi1
21 Dec 2011
TL;DR: In this patent, a system and method for adaptive motion filtering to improve subpel motion prediction efficiency of interframe motion compensated video coding is described, which uses a codebook approach with low search complexity to look up the best motion filter set from a pre-calculated codebook of motion filter coefficient sets.
Abstract: A system and method for adaptive motion filtering to improve subpel motion prediction efficiency of interframe motion compensated video coding is described. The technique uses a codebook approach that is efficient in search complexity to look up the best motion filter set from a pre-calculated codebook of motion filter coefficient sets. In some embodiments, the search complexity is further reduced by partitioning the complete codebook into a small base codebook and a larger virtual codebook, such that the main search calculations only need to be performed on the base codebook.
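The base/virtual codebook partition can be sketched as a two-stage nearest-neighbor search: a full search over the small base codebook, then refinement only among the virtual entries linked to the winning base entry. All codebook contents, the error metric, and the linkage structure below are illustrative assumptions, not the patented design.

```python
import numpy as np

def best_filter(target, base_codebook, virtual_map):
    """Two-stage codebook lookup (illustrative sketch):
    1) exhaustive search over the small base codebook,
    2) refine only among virtual entries derived from the best base entry,
    so the expensive full search never touches the large virtual codebook."""
    errs = [np.sum((target - c) ** 2) for c in base_codebook]
    b = int(np.argmin(errs))
    candidates = virtual_map[b]          # virtual entries linked to base entry b
    errs2 = [np.sum((target - c) ** 2) for c in candidates]
    return candidates[int(np.argmin(errs2))]

# hypothetical 3-tap filter coefficient sets
base = [np.array([0.0, 1.0, 0.0]), np.array([0.25, 0.5, 0.25])]
virtual = {
    0: [np.array([0.0, 1.0, 0.0]), np.array([0.1, 0.8, 0.1])],
    1: [np.array([0.25, 0.5, 0.25]), np.array([0.2, 0.6, 0.2])],
}
target = np.array([0.22, 0.55, 0.22])   # ideal filter estimated from the frame
chosen = best_filter(target, base, virtual)
```

Only two distances are computed in the base stage and two more in the refinement stage, instead of searching all four entries of the combined codebook; the savings grow with codebook size.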

54 citations

Patent
30 Dec 1994
TL;DR: In this patent, an apparatus for detecting motion vectors between a current frame and a previous frame of video signals, based on a block matching motion estimation approach, comprises a variable block formation section that defines a variable search block extended from a selected search block (a plain picture block without an edge of an object within the current frame), and a motion estimation section that estimates the variable search block against each candidate block in the previous frame to provide a motion vector and a corresponding error function.
Abstract: An apparatus for detecting motion vectors between a current frame and a previous frame of video signals, based on a block matching motion estimation approach, comprises: a variable block formation section for defining a variable search block from the current frame, where the variable search block is extended from a selected search block, i.e., a plain picture block without an edge of an object within the current frame; and a motion estimation section for estimating the variable search block with respect to each of the candidate blocks included in the previous frame to provide a motion vector and an error function corresponding thereto, the motion vector representing the displacement of pixels between the search block and the candidate block that yields a minimum error function.
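The underlying block matching step — minimizing an error function over candidate displacements in the previous frame — is standard and can be sketched directly. This is plain full-search block matching with a sum-of-absolute-differences (SAD) error, not the patent's variable-block extension; block size and search range are arbitrary.

```python
import numpy as np

def block_match(curr, prev, y, x, bsize, srange):
    """Full-search block matching: find the displacement (dy, dx) into the
    previous frame that minimizes SAD against the current-frame block at
    (y, x). Returns the motion vector and its minimum error."""
    block = curr[y:y + bsize, x:x + bsize].astype(np.int32)
    best_err, best_mv = None, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            py, px = y + dy, x + dx
            if py < 0 or px < 0 or py + bsize > prev.shape[0] or px + bsize > prev.shape[1]:
                continue  # candidate block falls outside the previous frame
            cand = prev[py:py + bsize, px:px + bsize].astype(np.int32)
            sad = int(np.abs(block - cand).sum())
            if best_err is None or sad < best_err:
                best_err, best_mv = sad, (dy, dx)
    return best_mv, best_err

# toy frames: a 2x2 bright object moves down 1 pixel and right 2 pixels
prev = np.zeros((8, 8), dtype=np.uint8)
prev[1:3, 1:3] = 255
curr = np.zeros((8, 8), dtype=np.uint8)
curr[2:4, 3:5] = 255

mv, err = block_match(curr, prev, 2, 3, bsize=2, srange=3)
assert mv == (-1, -2) and err == 0  # vector points back to the object's old position
```

The patent's contribution sits on top of this loop: growing the search block when it is a plain (edge-free) region, so that featureless blocks match less ambiguously.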

54 citations

Patent
Mark Walker1
07 Nov 1994
TL;DR: In this patent, a transform is applied to a region of a current video frame to generate transform signals corresponding to the region, and an activity measure is generated using the transform signals.
Abstract: A transform is applied to a region of a current video frame to generate transform signals corresponding to the region. An activity measure is generated using the transform signals. The activity measure is then used to determine whether to encode the region as a skipped region. The region is encoded in accordance with that determination to generate an encoded bit stream for the region. In a preferred embodiment, the transform signals are DCT coefficients and the activity measure is a weighted sum of the DCT coefficients, where the weighting of the low-frequency DCT coefficients is greater than the weighting of the high-frequency DCT coefficients. The region is encoded as a skipped region if the activity measure is less than a threshold value; otherwise, the region is encoded as either an inter encoded region or an intra encoded region.
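The skip decision above — a low-frequency-weighted sum of DCT coefficient magnitudes compared against a threshold — can be sketched as follows. The DCT here is a standard orthonormal 2-D DCT-II; the specific weighting function and threshold are illustrative assumptions, not values from the patent, and the input is treated as a residual region (current minus prediction).

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II built from the 1-D DCT basis matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    C *= np.sqrt(2 / n)
    return C @ block @ C.T

def activity(block, threshold):
    """Weighted sum of DCT coefficient magnitudes, with low-frequency
    coefficients weighted more heavily than high-frequency ones.
    Returns (activity, skip?): below threshold -> encode as skipped."""
    coeffs = dct2(block.astype(np.float64))
    n = block.shape[0]
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    weights = 1.0 / (1.0 + u + v)        # illustrative low-frequency-heavy weights
    act = float(np.sum(weights * np.abs(coeffs)))
    return act, act < threshold

# a zero residual (region unchanged) versus a residual with real change
zero_resid = np.zeros((4, 4))
busy_resid = np.zeros((4, 4))
busy_resid[0, 0] = 50.0
busy_resid[2, 1] = -30.0

a0, skip0 = activity(zero_resid, threshold=5.0)
a1, skip1 = activity(busy_resid, threshold=5.0)
assert skip0 and not skip1  # unchanged region skipped, changed region encoded
```

Because the transform concentrates perceptually relevant change into a few low-frequency coefficients, thresholding this weighted sum is cheaper than a full encode-and-compare skip test.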

54 citations


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Feature extraction: 111.8K papers, 2.1M citations, 86% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Convolutional neural network: 74.7K papers, 2M citations, 83% related
Image processing: 229.9K papers, 3.5M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  24
2022  72
2021  62
2020  84
2019  110
2018  97