Topic

Residual frame

About: Residual frame is a research topic. Over the lifetime, 4,443 publications have been published within this topic, receiving 68,784 citations.


Papers
Journal ArticleDOI
TL;DR: A unified model for detecting different types of video shot transitions is presented; a frame estimation scheme using the previous and the next frames is formulated, and frames are classified as no-change, abrupt, or gradual-change using a multilayer perceptron network.
Abstract: We have presented a unified model for detecting different types of video shot transitions. Based on the proposed model, we formulate a frame estimation scheme using the previous and the next frames. Unlike other shot boundary detection algorithms, instead of properties of frames, frame transition parameters and frame estimation errors based on global and local features are used for boundary detection and classification. Local features include the scatter matrix of edge strength and the motion matrix. Finally, the frames are classified as no change (within-shot frame), abrupt change, or gradual change frames using a multilayer perceptron network. The proposed method is relatively less dependent on user-defined thresholds and does not rely on a sliding window size, as is widely used by various schemes found in the literature. Moreover, handling both abrupt and gradual transitions along with non-transition frames under a single framework using model-guided visual features is another unique aspect of the work.
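As a rough illustration of the classification step, the sketch below computes simple frame-estimation-error features from the previous and next frames and feeds them to a small multilayer perceptron. The averaging-based frame estimate, the two toy features, and the untrained network weights are all stand-ins chosen for illustration; they are not the paper's actual transition parameters, scatter-matrix features, or trained classifier.

```python
# Illustrative sketch only: classify frames as no-change / abrupt / gradual
# from frame-estimation-error features using a tiny MLP (NumPy).
import numpy as np

def estimation_error_features(prev, cur, nxt):
    """Global estimation errors when predicting the current frame
    from the previous and the next frame (simple averaging model)."""
    est = 0.5 * (prev.astype(float) + nxt.astype(float))
    err = cur.astype(float) - est
    return np.array([np.mean(np.abs(err)), np.std(err)])

class TinyMLP:
    """One-hidden-layer perceptron with fixed random weights,
    standing in for a trained classifier."""
    def __init__(self, n_in, n_hidden=8, n_out=3, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(size=(n_in, n_hidden))
        self.w2 = rng.normal(size=(n_hidden, n_out))

    def predict(self, x):
        h = np.tanh(x @ self.w1)
        scores = h @ self.w2
        return int(np.argmax(scores))  # 0: no change, 1: abrupt, 2: gradual

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = rng.integers(0, 256, size=(5, 64, 64), dtype=np.uint8)
    mlp = TinyMLP(n_in=2)
    for t in range(1, len(frames) - 1):
        feats = estimation_error_features(frames[t - 1], frames[t], frames[t + 1])
        print(t, mlp.predict(feats))
```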

88 citations

Patent
20 Dec 2007
TL;DR: In this patent, a row motion vector for each row of the current frame of a pixel array is used to map the locations of pixels in the current frame to a mapped frame representing a single acquisition time.
Abstract: An imaging device and methods of stabilizing video captured using a rolling shutter operation. Motion estimation is used to determine a row motion vector for individual rows of a current frame of a pixel array. The row motion vectors are used to map the location of pixels in the current frame to a mapped frame representing a single acquisition time for the current frame.
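A toy sketch of the mapping step appears below: each row of a frame is shifted by its row motion vector into a frame representing a single acquisition time. The synthetic shear-like motion vectors, nearest-pixel rounding, and circular row shift are illustrative assumptions; the patent's motion-estimation stage is not reproduced.

```python
# Illustrative sketch only: apply per-row motion vectors to remap a
# rolling-shutter frame to a single acquisition time.
import numpy as np

def remap_rows(frame, row_vectors):
    """Shift each row by its (dx, dy) motion vector into a mapped frame.
    Rows that would land outside the frame are dropped."""
    h, w = frame.shape
    mapped = np.zeros_like(frame)
    for y in range(h):
        dx, dy = row_vectors[y]
        ty = int(round(y + dy))
        if 0 <= ty < h:
            mapped[ty] = np.roll(frame[y], int(round(dx)))
    return mapped

if __name__ == "__main__":
    frame = np.tile(np.arange(8, dtype=np.uint8), (8, 1))
    # Linearly increasing horizontal shift, as a rolling shutter might produce.
    vectors = [(row * 0.5, 0.0) for row in range(8)]
    print(remap_rows(frame, vectors))
```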

87 citations

Patent
30 May 1995
TL;DR: In this patent, an upsampler connected to the output of the frame buffer provides interpolated and filtered values between the subsamples, and a new motion estimation technique is described that directly detects the number of bits required to convey the difference between the predicted video data and the current video data.
Abstract: A video compression system in accordance with the present invention may use a frame buffer which is only a fraction of the size of a full frame buffer. A subsampler connected to an input of the frame buffer performs 4 to 1 subsampling on the video data to be stored in the frame buffer. This allows the frame buffer to be one-fourth the size of a full frame buffer. The subsampling may even be 9 to 1, 16 to 1, or another ratio, for a concomitant decrease in frame buffer size. An upsampler is connected to the output of the frame buffer for providing interpolated and filtered values between the subsamples. Novel methods of filtering and interpolating performed by the upsampler are described. A new motion estimation technique is also described herein which directly detects the number of bits required to be transmitted to convey the difference between the predicted video data and the current video data, where fewer bits needed to convey the difference correspond to better motion estimation. The search criterion for the best estimate of movement of a block is the minimum number of bits for conveying this difference.
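To illustrate the bit-count search criterion, the sketch below runs a small block motion search that picks the displacement whose residual would cost the fewest bits, using the residual's empirical entropy as a stand-in for an actual bit count. The block size, search range, and entropy-based rate estimate are assumptions for illustration; the patent's subsampled frame buffer and filtering steps are not shown.

```python
# Illustrative sketch only: block motion search whose criterion is the
# (approximate) number of bits needed to code the residual.
import numpy as np

def residual_bits(residual):
    """Rough bit cost: empirical entropy of the residual values."""
    vals, counts = np.unique(residual, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() * residual.size)

def best_motion(block, ref, top, left, search=2):
    """Return the (dy, dx) shift whose residual costs the fewest bits."""
    h, w = block.shape
    best, best_bits = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            bits = residual_bits(block - ref[y:y + h, x:x + w])
            if bits < best_bits:
                best_bits, best = bits, (dy, dx)
    return best, best_bits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(32, 32)).astype(int)
    block = ref[10:18, 12:20]          # a block copied from the reference
    print(best_motion(block, ref, top=8, left=12))
```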

87 citations

Patent
06 May 1996
TL;DR: In this patent, a method is presented for determining quantization-level versus bit-rate characteristics of raw video signals in video frames during a pre-encoding phase for video technologies such as MPEG and MPEG-2.
Abstract: A system and method for determining quantization level versus bit-rate characteristics of raw video signals in video frames during a pre-encoding phase for video technologies such as MPEG and MPEG-2. During a pre-encoding phase, various quantization levels are assigned to various parts of a frame, and the frame is then pre-encoded to determine a bit-rate for each quantization level used in the pre-encoding phase. Depending on the embodiment, quantization levels are assigned in one of many ways: checkerboard style, block style or any other distribution that avoids statistical anomalies. The method and system repeat the pre-encoding for plural frames, recording all quantization level versus bit-rate statistics on a frame by frame basis. These statistics are then used during encoding or re-encoding of a digital video to control the number of bits allocated to one segment of the digital video as compared to another segment, based on a target quality and target storage size for each segment. The resulting encoded digital video is stored on a digital storage medium, such as a compact disc.
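The sketch below illustrates the checkerboard-style assignment of quantization levels over a block grid and a crude per-level bit tally. The toy rate estimate (nonzero quantized coefficients times a fixed bit cost) is purely an assumption standing in for an actual MPEG pre-encoding pass.

```python
# Illustrative sketch only: checkerboard assignment of two quantization
# levels over 8x8 blocks, with a toy bit-rate tally per level.
import numpy as np

def checkerboard_levels(blocks_y, blocks_x, q_a, q_b):
    """Alternate quantization levels q_a / q_b across the block grid."""
    levels = np.empty((blocks_y, blocks_x), dtype=int)
    for by in range(blocks_y):
        for bx in range(blocks_x):
            levels[by, bx] = q_a if (by + bx) % 2 == 0 else q_b
    return levels

def bits_for_block(block, q):
    """Toy rate estimate: nonzero quantized coefficients * 4 bits each."""
    quantized = np.round(block / q)
    return int(np.count_nonzero(quantized)) * 4

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.normal(scale=30, size=(64, 64))
    levels = checkerboard_levels(8, 8, q_a=4, q_b=16)
    totals = {4: 0, 16: 0}
    for by in range(8):
        for bx in range(8):
            block = frame[by*8:(by+1)*8, bx*8:(bx+1)*8]
            q = int(levels[by, bx])
            totals[q] += bits_for_block(block, q)
    print(totals)  # bits accumulated per quantization level in this pre-pass
```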

87 citations

Proceedings ArticleDOI
01 Jun 2014
TL;DR: The paper demonstrates that these sensors inherently perform high-speed video compression in each pixel by describing the first decompression algorithm for this data, which performs an online optimization of the event decoding in real time.
Abstract: Dynamic and active pixel vision sensors (DAVISs) are a new type of sensor that combine a frame-based intensity readout with an event-based temporal contrast readout. This paper demonstrates that these sensors inherently perform high-speed video compression in each pixel by describing the first decompression algorithm for this data. The algorithm performs an online optimization of the event decoding in real time. Example scenes were recorded by the 240×180-pixel sensor at sub-Hz frame rates and successfully decompressed, yielding an equivalent frame rate of 2 kHz. A quantitative analysis of the compression quality resulted in an average pixel error of 0.5 DN intensity resolution for non-saturating stimuli. The system exhibits an adaptive compression ratio which depends on the activity in a scene; for stationary scenes it can go up to 1862. The low data rate and power consumption of the proposed video compression system make it suitable for distributed sensor networks.
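The sketch below gives a simplified flavor of such decompression: a sparse base frame is updated in the log-intensity domain by ON/OFF temporal-contrast events to produce intermediate intensity estimates. The fixed contrast step, the event format, and the integration loop are illustrative assumptions and not the decoding or online optimization described in the paper.

```python
# Illustrative sketch only: reconstruct high-rate intensity estimates from a
# sparse base frame plus temporal-contrast (ON/OFF) events.
import numpy as np

def decompress(base_frame, events, contrast_step=0.1, n_steps=4):
    """Integrate ON/OFF events onto a log-intensity base frame.
    events: list of (t, y, x, polarity) with t in [0, n_steps)."""
    log_img = np.log(base_frame.astype(float) + 1.0)
    frames = []
    for t in range(n_steps):
        for (et, y, x, pol) in events:
            if et == t:
                log_img[y, x] += contrast_step * pol
        frames.append(np.exp(log_img) - 1.0)  # intensity estimate at step t
    return frames

if __name__ == "__main__":
    base = np.full((4, 4), 100.0)
    events = [(0, 1, 1, +1), (1, 1, 1, +1), (2, 2, 3, -1)]
    for i, f in enumerate(decompress(base, events)):
        print(i, round(f[1, 1], 2), round(f[2, 3], 2))
```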

87 citations


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 81% related
Feature extraction: 111.8K papers, 2.1M citations, 80% related
Image segmentation: 79.6K papers, 1.8M citations, 80% related
Image processing: 229.9K papers, 3.5M citations, 78% related
Pixel: 136.5K papers, 1.5M citations, 78% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    13
2022    23
2021    7
2020    4
2019    6
2018    11