Topic

Inter frame

About: Inter frame is a research topic. Over its lifetime, 4,154 publications have been published on this topic, receiving 63,549 citations.


Papers
Patent
Shijun Sun
18 Feb 2005
TL;DR: In this article, a method of coding a quality scalable video sequence is provided, in which an N-bit input frame is converted to an M-bit frame, where M is an integer between 1 and N. To be backwards compatible with existing 8-bit video systems, M would be selected to be 8.
Abstract: A method of coding a quality scalable video sequence is provided. An N-bit input frame is converted to an M-bit input frame, where M is an integer between 1 and N. To be backwards compatible with existing 8-bit video systems, M would be selected to be 8. The M-bit input frame would be encoded to produce a base-layer output bitstream. An M-bit output frame would be reconstructed from the base-layer output bitstream and converted to an N-bit output frame. The N-bit output frame would be compared to the N-bit input frame to derive an N-bit image residual that could be encoded to produce an enhancement layer bitstream.

150 citations
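
To make the layering concrete, here is a minimal Python sketch of the bit-depth split described above, assuming N = 10, M = 8, a plain right shift as the down-conversion, and a stubbed-out (lossless) base-layer codec; none of these choices come from the patent itself.

import numpy as np

def encode_bit_depth_scalable(frame_n_bit, n_bits=10, m_bits=8):
    """Illustrative sketch: split an N-bit frame into an M-bit base layer
    and an N-bit enhancement residual (codec stages are stubbed out)."""
    shift = n_bits - m_bits

    # Convert the N-bit input frame to an M-bit frame (a simple right shift
    # stands in for whatever tone mapping a real system would use).
    base_input = (frame_n_bit >> shift).astype(np.uint16)

    # A real base-layer encoder/decoder pair would go here; for the sketch
    # we pretend the base layer is reconstructed losslessly.
    base_reconstructed = base_input

    # Convert the M-bit reconstruction back to N bits.
    upconverted = base_reconstructed.astype(np.int32) << shift

    # The residual between the original N-bit frame and the up-converted
    # frame is what the enhancement-layer bitstream would carry.
    residual = frame_n_bit.astype(np.int32) - upconverted
    return base_input, residual

# Example: a random 10-bit frame.
frame = np.random.randint(0, 1 << 10, size=(4, 4), dtype=np.int32)
base, res = encode_bit_depth_scalable(frame)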

Patent
Boon-Lock Yeo
18 Jun 1999
TL;DR: In this article, a frame dependency for a particular frame is determined, and then the frame dependency is used to decode the frame to create a decoded version of the particular frame.
Abstract: In some embodiments, the invention includes a method of processing a video stream. The method involves detecting a request to play back a particular frame. It is determined whether a decoded version of the particular frame is in a decoded frame cache. If it is not, the method includes: (i) determining a frame dependency for the particular frame; (ii) determining which of the frames in the frame dependency are in the decoded frame cache; (iii) decoding any frame in the frame dependency that is not in the decoded frame cache and placing it in the decoded frame cache; and (iv) using at least some of the decoded frames in the frame dependency to decode the particular frame to create a decoded version of the particular frame. In some embodiments, the request to play back a particular frame is part of a request to perform frame-by-frame backward playback, and the method is performed for successively earlier frames with respect to the particular frame as part of the frame-by-frame backward playback. In some embodiments, part (i) is performed whether or not a decoded version of the particular frame is determined to be in the decoded frame cache, without part (iv) being performed. Other embodiments are described and claimed.

149 citations
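
The cache-driven decode path above can be sketched in a few lines of Python; get_dependencies() and decode() below are hypothetical placeholders for the codec-specific parts, and the GOP structure is invented for illustration.

# Hypothetical sketch of cache-backed random-access decoding.
decoded_cache = {}          # frame index -> decoded frame

def get_dependencies(frame_idx):
    """Placeholder: return the reference frames frame_idx depends on.
    Here we pretend every non-key frame depends on the previous frame
    and that key frames occur every 8 frames."""
    return [] if frame_idx % 8 == 0 else [frame_idx - 1]

def decode(frame_idx, references):
    """Placeholder for the actual decoder."""
    return f"decoded({frame_idx})"

def get_decoded_frame(frame_idx):
    # If a decoded version is already cached, return it immediately.
    if frame_idx in decoded_cache:
        return decoded_cache[frame_idx]

    # (i) determine the frame dependency, then (ii)/(iii) decode any
    # missing references and place them in the cache.
    refs = [get_decoded_frame(dep) for dep in get_dependencies(frame_idx)]

    # (iv) use the decoded references to decode the requested frame.
    decoded_cache[frame_idx] = decode(frame_idx, refs)
    return decoded_cache[frame_idx]

# Frame-by-frame backward playback reuses the cache for earlier frames.
for idx in range(20, 10, -1):
    get_decoded_frame(idx)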

Proceedings ArticleDOI
05 Sep 2000
TL;DR: In this article, the authors proposed a technique to automatically extract a single key frame from a video sequence, which is designed for a system to search video on the World Wide Web.
Abstract: This paper describes a technique to automatically extract a single key frame from a video sequence. The technique is designed for a system that searches video on the World Wide Web. For each video returned by a query, a thumbnail image that illustrates its content is displayed to summarize the results. The proposed technique is composed of three steps: shot boundary detection, shot selection, and key frame extraction within the selected shot. The shot and key frame are selected based on measures of motion and spatial activity and the likelihood of including people. The latter is determined by skin-color detection and face detection. Simulation results on a large set of videos from the Internet, including movie trailers, sports, news, and animation, show the efficiency of the method. Furthermore, this is achieved at very low computational cost.

149 citations
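
As a rough illustration of the first step only (shot boundary detection), the following Python sketch flags boundaries from gray-level histogram differences; the paper's actual selection criteria (motion, spatial activity, skin color, faces) are not reproduced here.

import numpy as np

def shot_boundaries(frames, threshold=0.4):
    """Toy shot-boundary detector: flag a boundary wherever the normalized
    gray-level histogram changes sharply between consecutive frames."""
    boundaries = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=32, range=(0, 256))
        hist = hist / hist.sum()
        if prev_hist is not None:
            # L1 distance between consecutive histograms.
            if np.abs(hist - prev_hist).sum() > threshold:
                boundaries.append(i)
        prev_hist = hist
    return boundaries

# Example: 20 synthetic gray frames with an abrupt change at frame 10.
frames = [np.full((64, 64), 40 if i < 10 else 200, dtype=np.uint8)
          for i in range(20)]
print(shot_boundaries(frames))   # -> [10]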

Patent
10 May 1990
TL;DR: In this article, a frame interpolating circuit for obtaining an interpolated frame between the extracted frames and a circuit for obtaining the error formed by frame interpolation are presented.
Abstract: A moving image signal encoding apparatus includes: a frame decimating circuit for extracting frames from an input moving image signal at specified intervals; a frame interpolating circuit for obtaining an interpolated frame between the extracted frames; and a circuit for obtaining an error formed by frame interpolation. A moving image signal decoding apparatus includes: a receiving circuit for extracting a frame code from an input signal; a frame decoding circuit for decoding the frame code to obtain a reproduced frame; and a frame interpolating circuit for obtaining an interpolated frame between the reproduced frames. By transmitting the error of the interpolated frame from the encoding apparatus to the decoding apparatus and correcting the error of the interpolated frame at the decoding apparatus, the error of the interpolated frame is eliminated. Alternatively, depending on the value of the interpolation error obtained at the encoding apparatus, a circuit determines the operation mode, i.e., whether the frame interpolating circuit of the decoder carries out frame interpolation or holds the preceding value, and sends a flag indicating the operation mode to the decoder, so that quality is improved when the error of the interpolated frame is large.

146 citations
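
The interpolate-or-hold decision described above can be sketched as follows, assuming a simple two-frame average as the interpolator and a mean-absolute-error threshold for the mode flag; both choices are illustrative rather than taken from the patent.

import numpy as np

def encode_skipped_frame(prev_frame, skipped_frame, next_frame, mode_threshold=10.0):
    """Sketch of the encoder-side decision for a decimated (skipped) frame:
    interpolate it from its neighbours, measure the error, and either signal
    interpolation (with the residual) or fall back to holding the previous
    value."""
    # Simple temporal interpolation: average of the surrounding frames.
    interpolated = (prev_frame.astype(np.float32) + next_frame.astype(np.float32)) / 2

    # Error formed by frame interpolation.
    error = skipped_frame.astype(np.float32) - interpolated
    mean_abs_error = np.abs(error).mean()

    # Mode decision: if the interpolation error is large, tell the decoder
    # to hold the preceding frame instead of interpolating.
    mode = "interpolate" if mean_abs_error < mode_threshold else "hold_previous"
    return mode, error

# Example with small synthetic frames.
prev_f = np.full((8, 8), 100, dtype=np.uint8)
next_f = np.full((8, 8), 120, dtype=np.uint8)
skip_f = np.full((8, 8), 110, dtype=np.uint8)
print(encode_skipped_frame(prev_f, skip_f, next_f)[0])   # -> "interpolate"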

Journal ArticleDOI
TL;DR: A cosegmentation framework is presented to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion, and a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor is introduced to integrate across-video correspondence from conventional SIFT flow with interframe motion flow from optical flow.
Abstract: With ever-increasing volumes of video data, automatic extraction of salient object regions has become even more significant for visual analytics solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling drastic appearance, motion-pattern, and pose variations of foreground objects, as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity, into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from conventional SIFT flow with interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state of the art on a new, extensive data set (ViCoSeg).

144 citations
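
As a very loose sketch of how the three cues might be fused per pixel, the snippet below combines three cue maps with fixed weights and a threshold; the paper's actual energy optimization and spatio-temporal SIFT flow are far more elaborate, so every term and weight here is illustrative.

import numpy as np

def foreground_mask(intra_saliency, inter_consistency, cross_video_similarity,
                    weights=(0.5, 0.3, 0.2), threshold=0.5):
    """Illustrative fusion of three per-pixel cue maps (each in [0, 1]) into
    a common-foreground mask; stands in for the paper's energy optimization."""
    w1, w2, w3 = weights
    score = (w1 * intra_saliency
             + w2 * inter_consistency
             + w3 * cross_video_similarity)
    return score > threshold

# Example with random cue maps for one frame.
h, w = 32, 32
mask = foreground_mask(np.random.rand(h, w),
                       np.random.rand(h, w),
                       np.random.rand(h, w))
print(mask.mean())   # fraction of pixels labelled foreground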


Network Information

Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Feature extraction: 111.8K papers, 2.1M citations, 86% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Convolutional neural network: 74.7K papers, 2M citations, 83% related
Image processing: 229.9K papers, 3.5M citations, 82% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    24
2022    72
2021    62
2020    84
2019    110
2018    97