Topic

Inter frame

About: Inter frame is a research topic. Over the lifetime, 4154 publications have been published within this topic receiving 63549 citations.


Papers
Patent
02 Apr 2008
TL;DR: In this article, the authors propose a method to replace I-frame error recovery with long term reference frames, even in the case where the reference frame management messages are lost to at least one decoder.
Abstract: An apparatus, software encoded in tangible media, and a method at an encoder. The method includes sending compressed video data including a reference frame message to create a long term reference frame to a plurality of decoders at one or more destination points, receiving feedback from the decoders indicative of whether or not the decoders successfully received the reference frame message, and in the case that the received feedback is such that at least one of the decoders did not successfully receive the reference frame message or does not have the indicated recent frame, repeating sending a reference frame message to create the long term reference frame. Using the method can replace I-frame error recovery with long term reference frames, even in the case where the reference frame management messages are lost to at least one decoder.
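The retry loop described above can be sketched as follows. This is an illustrative simplification, not the patent's actual implementation: the Decoder class, its message-loss model, and the establish_ltr helper are all hypothetical names invented for this example.

```python
class Decoder:
    """Toy decoder that drops its first `loss_count` reference-frame messages."""

    def __init__(self, loss_count=0):
        self.loss_count = loss_count
        self.frames = set()  # frame ids held as long-term references

    def receive_message(self, frame_id):
        if self.loss_count > 0:
            self.loss_count -= 1  # simulate a lost reference frame message
        else:
            self.frames.add(frame_id)

    def has_frame(self, frame_id):
        return frame_id in self.frames


def establish_ltr(decoders, frame_id, max_attempts=5):
    """Repeat the long-term-reference message until every decoder ACKs it,
    instead of falling back to an I-frame for error recovery."""
    for _ in range(max_attempts):
        missing = [d for d in decoders if not d.has_frame(frame_id)]
        if not missing:
            return True  # all decoders confirmed the long-term reference
        for d in missing:
            d.receive_message(frame_id)  # resend only to decoders that lack it
    return all(d.has_frame(frame_id) for d in decoders)
```

Here the feedback channel is modeled simply as querying each decoder; the key point from the abstract survives: the reference frame message is repeated as long as at least one decoder has not confirmed receipt.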

94 citations

Patent
29 Jan 1992
TL;DR: In this paper, a video image frame area is divided into a set of subframes, and each subframe is systematically shifted such that the individual subframes progressively cycle across and wrap around the frame area.
Abstract: Digital video signals are processed by a plurality of independently operating processors to provide data for transmission in a compressed, motion compensated form. A video image frame area is divided into a set of subframes. The set of subframes is systematically shifted such that the individual subframes progressively cycle across and wrap around the video image frame area. For each successive video frame, video image data bounded by each of the different subframes is independently compressed using motion estimation to reduce data redundancy among the successive frames. The motion estimation is limited for each subframe of a current video frame to areas of a previous video frame that were bounded by the same subframe in the previous frame. In an illustrated embodiment, the set of subframes is shifted once for each successive video frame, and each subframe includes a refresh region whereby the video image frame area is progressively refreshed as the subframes are shifted thereacross. Receiver apparatus for use in decoding the independently processed subframe data is also disclosed.
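The cyclic shift-and-wrap of subframes can be illustrated with a one-dimensional sketch. This is a toy model under assumed simplifications (vertical subframes over columns only, a shift of one column per frame), not the patent's actual partitioning; the function name is hypothetical.

```python
def subframe_columns(width, n_subframes, frame_index):
    """Return, for each subframe, the column indices it bounds in a given frame.

    The partition is shifted by one column per successive video frame and
    wraps around the frame area, so every column is progressively covered
    (refreshed) by every subframe over time.
    """
    size = width // n_subframes
    cols = [[] for _ in range(n_subframes)]
    for x in range(width):
        shifted = (x - frame_index) % width  # shift once per frame, with wrap-around
        cols[min(shifted // size, n_subframes - 1)].append(x)
    return cols
```

In this model, motion estimation for a subframe of the current frame would be restricted to the columns that same subframe bounded in the previous frame, matching the independence constraint stated in the abstract.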

94 citations

Patent
20 Jul 2005
TL;DR: In this paper, an Encoder Assisted Frame Rate Up Conversion (EA-FRUC) system that utilizes video coding and pre-processing operations at the video encoder to exploit the FRUC processing that will occur in the decoder in order to improve compression efficiency and reconstructed video quality is disclosed.
Abstract: An Encoder Assisted Frame Rate Up Conversion (EA-FRUC) system that utilizes video coding and pre-processing operations at the video encoder to exploit the FRUC processing that will occur in the decoder in order to improve compression efficiency and reconstructed video quality is disclosed. One operation of the EA-FRUC system involves determining whether to encode a frame in a sequence of frames of a video content by determining a spatial activity in a frame of the sequence of frames; determining a temporal activity in the frame; determining a spatio-temporal activity in the frame based on the determined spatial activity and the determined temporal activity; determining a level of a redundancy in the source frame based on at least one of the determined spatial activity, the determined temporal activity, and the determined spatio-temporal activity; and, encoding the non-redundant information in the frame if the determined redundancy is within predetermined thresholds.
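The encode-or-skip decision can be sketched as below. The activity measures used here (mean absolute differences) and the thresholds are stand-ins chosen for illustration, not the metrics defined in the patent.

```python
def should_encode(prev, curr, spatial_thresh=10.0, temporal_thresh=5.0):
    """Decide whether `curr` is non-redundant enough to encode.

    prev, curr: 2-D lists of luma samples with the same dimensions.
    Low spatio-temporal activity marks the frame as redundant, so the
    encoder can skip it and let decoder-side FRUC interpolate it instead.
    """
    h, w = len(curr), len(curr[0])
    # Spatial activity: mean absolute difference between horizontal neighbours.
    spatial = sum(abs(curr[y][x] - curr[y][x - 1])
                  for y in range(h) for x in range(1, w)) / (h * (w - 1))
    # Temporal activity: mean absolute difference against the previous frame.
    temporal = sum(abs(curr[y][x] - prev[y][x])
                   for y in range(h) for x in range(w)) / (h * w)
    return spatial > spatial_thresh or temporal > temporal_thresh
```

A static, flat frame scores low on both measures and is skipped; a textured or changing frame exceeds a threshold and is encoded.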

94 citations

Patent
10 Feb 2000
TL;DR: In this article, a method for forming an output stream of data includes determining an output resolution, an output frame rate, and an output color depth for the output stream, then subsampling, dropping, and depth-reducing frames from the input stream accordingly.
Abstract: A method for forming an output stream of data includes determining an output resolution for the output stream of data, determining an output frame rate for the output stream of data, determining an output color depth for the output stream of data, retrieving a first frame of data, a second frame of data, and a third frame of data from an input stream of data, the input stream of data having an input resolution, an input frame rate, and an input color depth, subsampling the first frame of data, the second frame of data, and the third frame of data to respectively form a first subsampled frame of data, a second subsampled frame of data, and a third subsampled frame of data, when the output resolution is lower than the input resolution, dropping the second subsampled frame of data, when the output frame rate is lower than the input frame rate, reducing color depth for the first subsampled frame of data and the second subsampled frame of data to respectively form a first reduced frame of data and a second reduced frame of data, when the output color depth is smaller than the input color depth, and converting the first reduced frame of data and the second reduced frame of data into the output stream of data.
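The claimed pipeline can be sketched as follows, under assumed simplifications: frames are 2-D lists of integer samples, spatial subsampling keeps every other row and column, frame-rate reduction drops the middle frame, and color depth is reduced by discarding low-order bits. The function name and these concrete choices are illustrative, not the patent's.

```python
def form_output_stream(frames, in_res, out_res, in_fps, out_fps,
                       in_depth, out_depth):
    """Form an output stream from three input frames per the claimed steps."""
    f1, f2, f3 = frames
    if out_res < in_res:
        # Spatial subsampling: keep every other row and every other column.
        f1, f2, f3 = ([row[::2] for row in f[::2]] for f in (f1, f2, f3))
    frames = [f1, f2, f3]
    if out_fps < in_fps:
        # Temporal decimation: drop the second (middle) subsampled frame.
        frames = [f1, f3]
    if out_depth < in_depth:
        # Color-depth reduction: discard the low-order bits of each sample.
        shift = in_depth - out_depth
        frames = [[[v >> shift for v in row] for row in f] for f in frames]
    return frames
```

Each stage is conditional on the output parameter being lower than the corresponding input parameter, mirroring the "when ... is lower/smaller than ..." clauses of the claim.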

94 citations

Journal ArticleDOI
TL;DR: The proposed video analysis system can provide objective diagnostic support to physicians by locating polyps during colon cancer screening exams and can be used as a cost-effective video annotation solution for the large backlog of existing colonoscopy videos.
Abstract: This paper presents an automated video analysis framework for the detection of colonic polyps in optical colonoscopy. Our proposed framework departs from previous methods in that we include spatial frame-based analysis and temporal video analysis using time-course image sequences. We also provide a video quality assessment scheme including two measures of frame quality. We extract colon-specific anatomical features from different image regions using a windowing approach for intraframe spatial analysis. Anatomical features are described using an eigentissue model. We apply a conditional random field to model interframe dependences in tissue types and handle variations in imaging conditions and modalities. We validate our method by comparing our polyp detection results to colonoscopy reports from physicians. Our method displays promising preliminary results and shows strong invariance when applied to both white light and narrow-band video. Our proposed video analysis system can provide objective diagnostic support to physicians by locating polyps during colon cancer screening exams. Furthermore, our system can be used as a cost-effective video annotation solution for the large backlog of existing colonoscopy videos.
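The windowing approach to intraframe spatial analysis can be illustrated minimally. This sketch slides a fixed-size window over a frame and extracts one feature per window; the feature here (mean intensity) is a placeholder for the paper's colon-specific anatomical features and eigentissue model, and the function name is hypothetical.

```python
def window_features(frame, win=2, step=2):
    """Slide a win x win window over `frame` and return one feature per window.

    frame: 2-D list of intensity samples. The per-window feature here is the
    mean intensity, standing in for a richer anatomical feature vector.
    """
    feats = []
    for y in range(0, len(frame) - win + 1, step):
        for x in range(0, len(frame[0]) - win + 1, step):
            block = [frame[y + dy][x + dx]
                     for dy in range(win) for dx in range(win)]
            feats.append(sum(block) / len(block))
    return feats
```

In the paper's framework, such per-window features would then feed a conditional random field that models interframe dependences across the video.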

94 citations


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Feature extraction: 111.8K papers, 2.1M citations, 86% related
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Convolutional neural network: 74.7K papers, 2M citations, 83% related
Image processing: 229.9K papers, 3.5M citations, 82% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    24
2022    72
2021    62
2020    84
2019    110
2018    97