About: Residual frame is a research topic. Over its lifetime, 4443 publications have been published within this topic, receiving 68784 citations.
TL;DR: A novel observation model based on motion compensated subsampling is proposed for a video sequence and Bayesian restoration with a discontinuity-preserving prior image model is used to extract a high-resolution video still given a short low-resolution sequence.
Abstract: The human visual system appears to be capable of temporally integrating information in a video sequence in such a way that the perceived spatial resolution of a sequence appears much higher than the spatial resolution of an individual frame. While the mechanisms in the human visual system that do this are unknown, the effect is not too surprising given that temporally adjacent frames in a video sequence contain slightly different, but unique, information. This paper addresses the use of both the spatial and temporal information present in a short image sequence to create a single high-resolution video frame. A novel observation model based on motion compensated subsampling is proposed for a video sequence. Since the reconstruction problem is ill-posed, Bayesian restoration with a discontinuity-preserving prior image model is used to extract a high-resolution video still given a short low-resolution sequence. Estimates computed from a low-resolution image sequence containing a subpixel camera pan show dramatic visual and quantitative improvements over bilinear, cubic B-spline, and Bayesian single frame interpolations. Visual and quantitative improvements are also shown for an image sequence containing objects moving with independent trajectories. Finally, the video frame extraction algorithm is used for the motion-compensated scan conversion of interlaced video data, with a visual comparison to the resolution enhancement obtained from progressively scanned frames.
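The motion-compensated subsampling observation model described above can be illustrated with a minimal NumPy sketch: each low-resolution frame is modeled as a shifted copy of the high-resolution scene averaged over pixel blocks. The function name, integer-pixel shifts, and block-averaging kernel are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def observe(hr, dy, dx, factor):
    """Motion-compensated subsampling: shift the high-resolution frame
    by (dy, dx) pixels, then average factor x factor blocks to form
    one low-resolution observation."""
    shifted = np.roll(np.roll(hr, dy, axis=0), dx, axis=1)
    h, w = shifted.shape
    h, w = h - h % factor, w - w % factor
    blocks = shifted[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Toy high-resolution scene and three observations from a sub-pixel-style pan
rng = np.random.default_rng(0)
hr = rng.random((16, 16))
frames = [observe(hr, dy, dx, 2) for dy, dx in [(0, 0), (0, 1), (1, 0)]]
```

Because each shifted observation samples the scene differently, a short sequence of such frames carries more information than any single frame, which is what makes the reconstruction of a single high-resolution still possible (if ill-posed).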
TL;DR: This paper places frames in a new setting, where some of the elements are deleted, and shows that a normalized frame minimizes mean-squared error if and only if it is tight.
Abstract: Frames have been used to capture significant signal characteristics, provide numerical stability of reconstruction, and enhance resilience to additive noise. This paper places frames in a new setting, where some of the elements are deleted. Since proper subsets of frames are sometimes themselves frames, a quantized frame expansion can be a useful representation even when some transform coefficients are lost in transmission. This yields robustness to losses in packet networks such as the Internet. With a simple model for quantization error, it is shown that a normalized frame minimizes mean-squared error if and only if it is tight. With one coefficient erased, a tight frame is again optimal among normalized frames, both in average and worst-case scenarios. For more erasures, a general analysis indicates some optimal designs. Being left with a tight frame after erasures minimizes distortion, but considering also the transmission rate and possible erasure events complicates optimizations greatly.
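The tightness condition at the heart of this result can be checked numerically: a frame is tight when its frame operator is a multiple of the identity. A minimal sketch using the classic Mercedes-Benz frame in R^2 (three unit-norm vectors at 120-degree spacing, a standard example not taken from the paper):

```python
import numpy as np

# Mercedes-Benz frame: three unit-norm vectors in R^2 at 120-degree
# spacing -- a classic normalized tight frame.
angles = np.array([np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3])
F = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # rows are frame vectors

S = F.T @ F  # frame operator; tightness means S = A * I for some constant A
print(np.allclose(S, 1.5 * np.eye(2)))  # -> True

# Erasing one vector leaves two linearly independent vectors:
# still a frame for R^2, though no longer tight.
F_erased = F[:2]
print(np.linalg.matrix_rank(F_erased))  # -> 2
```

This illustrates the paper's setting concretely: deleting one element of this frame still spans the space, so reconstruction remains possible after an erasure, but the erased subset loses the tightness that minimizes mean-squared error.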
28 Jan 2011
TL;DR: In this paper, the authors describe a device comprising an imaging subsystem, one or more memory components, and a processor for capturing a frame of image data containing a representation of a feature; the processor executes instructions that carry out a series of processing steps on the captured frame.
Abstract: Devices, methods, and software are disclosed for capturing a frame of image data having a representation of a feature. In an illustrative embodiment, a device includes an imaging subsystem, one or more memory components, and a processor. The imaging subsystem is capable of providing image data representative of light incident on said imaging subsystem. The one or more memory components include at least a first memory component operatively capable of storing an input frame of the image data. The processor is in communicative connection with executable instructions for enabling the processor for various steps. One step includes receiving the input frame from the first memory component. Another step includes generating a reduced resolution frame based on the input frame, the reduced resolution frame comprising fewer pixels than the input frame, in which a pixel in the reduced resolution frame combines information from two or more pixels in the input frame. Another step includes attempting to identify transition pairs comprising pairs of adjacent pixels in the reduced resolution frame having differences between the pixels that exceed a pixel transition threshold. Another step includes attempting to identify one or more linear features between two or more identified transition pairs in the reduced resolution frame. Another step includes providing an indication of one or more identified linear features in the reduced resolution frame.
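The processing steps in the abstract can be sketched in a few lines of NumPy: reduce resolution by combining pixel blocks, then find adjacent pixel pairs whose difference exceeds a transition threshold. Function names, the 2x averaging kernel, and the horizontal-only pair search are illustrative assumptions.

```python
import numpy as np

def reduce_resolution(frame, factor=2):
    """Combine factor x factor pixel blocks into single pixels by averaging,
    so each reduced pixel merges information from several input pixels."""
    h, w = frame.shape
    h, w = h - h % factor, w - w % factor
    blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def transition_pairs(reduced, threshold):
    """Horizontally adjacent pixel pairs whose difference exceeds the threshold."""
    diffs = np.abs(np.diff(reduced.astype(float), axis=1))
    return np.argwhere(diffs > threshold)  # (row, col) of the left pixel in each pair

# Toy frame with a vertical edge: dark left half, bright right half
frame = np.zeros((8, 8), dtype=np.uint8)
frame[:, 4:] = 255
pairs = transition_pairs(reduce_resolution(frame), threshold=50)
```

On this toy frame the transition pairs all fall on the same column of the reduced image, which is exactly the kind of vertically aligned run that the linear-feature identification step would group together.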
22 Jun 1989
TL;DR: In this article, unique digital codes are encoded on a video signal and retrieved at receivers, generating precise frame-by-frame information about the time of occurrence, length, nature, and quality of a monitored broadcast.
Abstract: Unique digital codes are encoded on a video signal, the codes are retrieved at receivers and precise information concerning the time of occurrence, length, nature and quality of a monitored broadcast at a frame by frame level, is generated. The codes are inserted on scan lines of the video, and vary either on a field-to-field or frame-to-frame basis. The code has a repeating first part having a unique program material identifier indicating the time, date and place of encoding, and has a second portion that varies in a predetermined non-repeating sequence which varies along the entire length of the tape, thereby uniquely identifying each frame of the video program material. Also encoded upon successive frames is a cyclic counter code with a count corresponding to the sequence of the identifier data on successive frames. When the video signal is processed by a receiver, the first portion identifier data from the various frames is mapped into selected memory locations in accordance with the count of the frame as determined by the second portion. Odd and even fields are encoded with complementary bit sequences to assist in processing the encoded data. Whenever the frame sequence is interrupted a data packet is generated representative of the condition encountered. The data packets are accumulated in log files in a memory in the receiver. The log files are transmitted to a data center, as is a copy of the encoded tape. Reports concerning the broadcast are generated.
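The two-part code structure described above, a repeating program identifier plus a non-repeating per-frame counter, can be sketched as follows. All field and variable names here are illustrative, not taken from the patent.

```python
PROGRAM_ID = "STN01-2024-06-01"  # hypothetical time/date/place identifier

def encode_frame(n):
    """Attach the repeating identifier and the non-repeating counter to frame n."""
    return {"id": PROGRAM_ID, "counter": n}

def map_to_memory(codes):
    """Receiver side: map identifier data into slots keyed by the counter,
    and flag gaps where the frame sequence was interrupted."""
    slots = {c["counter"]: c["id"] for c in codes}
    gaps = [n for n in range(min(slots), max(slots) + 1) if n not in slots]
    return slots, gaps

codes = [encode_frame(n) for n in (0, 1, 2, 4)]  # frame 3 lost in transit
slots, gaps = map_to_memory(codes)
print(gaps)  # -> [3]
```

The counter is what lets the receiver detect interruptions: any missing count marks a break in the broadcast, which in the described system would trigger a data packet logged for later reporting.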
TL;DR: A method is described for encoding a signal that includes a speech component; each frame is classified into one of at least two modes based, for example, on pitch stationarity, short-term level gradient, or zero-crossing rate.
Abstract: A method for encoding a signal that includes a speech component is described. First and second linear prediction windows of a frame are analyzed to generate sets of filter coefficients. First and second pitch analysis windows of the frame are analyzed to generate pitch estimates. The frame is classified in one of at least two modes, e.g. voiced, unvoiced and noise modes, based, for example, on pitch stationarity, short-term level gradient or zero crossing rate. Then the frame is encoded using the filter coefficients and pitch estimates in a particular manner depending upon the mode determination for the frame, preferably employing CELP based encoding algorithms.
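Of the classification criteria mentioned, the zero-crossing rate is the simplest to illustrate: unvoiced, noise-like frames cross zero far more often than voiced, low-frequency frames. A minimal sketch (the 8 kHz sample rate, frame length, and threshold-free comparison are illustrative assumptions, not the method's actual classifier):

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

rng = np.random.default_rng(1)
noise = rng.standard_normal(160)  # noise-like 20 ms frame at 8 kHz
voiced = np.sin(2 * np.pi * 100 * np.arange(160) / 8000)  # voiced-like 100 Hz tone

# A real classifier would combine this with pitch stationarity and
# level gradient; here the ZCR alone separates the two frames.
mode = "voiced" if zero_crossing_rate(voiced) < zero_crossing_rate(noise) else "unvoiced"
```

A mode decision like this then steers the encoder: the filter coefficients and pitch estimates are quantized and used differently per mode, e.g. in the CELP-based algorithms the abstract mentions.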
Related Topics (5)
Feature (computer vision)
128.2K papers, 1.7M citations