
Showing papers on "Residual frame published in 2015"


Journal ArticleDOI
TL;DR: A gradient based R-lambda (GRL) model is proposed for intra frame rate control, where the gradient can effectively measure the frame-content complexity and enhance the performance of the traditional R-lambda method.
Abstract: Rate control plays an important role in the rapid development of high-fidelity video services. As the High Efficiency Video Coding (HEVC) standard has been finalized, many rate control algorithms are being developed to promote its commercial use. The HEVC encoder adopts a new R-lambda based rate control model to reduce the bit estimation error. However, the R-lambda model fails to consider the frame-content complexity that ultimately degrades the performance of the bit rate control. In this letter, a gradient based R-lambda (GRL) model is proposed for the intra frame rate control, where the gradient can effectively measure the frame-content complexity and enhance the performance of the traditional R-lambda method. In addition, a new coding tree unit (CTU) level bit allocation method is developed. The simulation results show that the proposed GRL method can reduce the bit estimation error and improve the video quality in HEVC all intra frame coding.
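The gradient-as-complexity idea can be sketched in a few lines; the per-CTU allocation rule below is a hypothetical illustration, not the paper's exact GRL formula:

```python
def gradient_complexity(frame):
    """Frame-content complexity as the mean absolute gradient
    (horizontal + vertical first differences)."""
    h, w = len(frame), len(frame[0])
    g = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                g += abs(frame[y][x + 1] - frame[y][x])
            if y + 1 < h:
                g += abs(frame[y + 1][x] - frame[y][x])
    return g / (h * w)

def allocate_bits(complexities, frame_budget):
    """Hypothetical CTU-level allocation: split the frame's bit
    budget in proportion to each CTU's gradient complexity."""
    total = sum(complexities) or 1.0
    return [frame_budget * c / total for c in complexities]

flat  = [[128] * 8 for _ in range(8)]      # low complexity
edges = [[0, 255] * 4 for _ in range(8)]   # high complexity
assert gradient_complexity(flat) == 0.0
assert gradient_complexity(edges) > 100
assert allocate_bits([1.0, 3.0], 4000) == [1000.0, 3000.0]
```

A flat frame gets a zero complexity score, so under this toy rule more bits flow toward the CTUs with strong edges.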

89 citations


Patent
07 Aug 2015
TL;DR: One variation of a method for recording accidents includes: in a first mode, capturing a second video frame at a second time, storing the second video frame with a first sequence of video frames captured over a buffer duration in local memory in the helmet, removing a first video frame captured outside the buffer duration from local memory, and rendering a subregion of the second video frame on a display arranged within the helmet.
Abstract: One variation of a method for recording accidents includes: in a first mode, capturing a second video frame at a second time, storing the second video frame with a first sequence of video frames, captured over a buffer duration, in local memory in the helmet, removing a first video frame captured outside of the buffer duration from local memory, and rendering a subregion of the second video frame on a display arranged within the helmet; in response to detection of an accident involving the helmet, transitioning from the first mode into a second mode; in the second mode, capturing a second sequence of video frames, and storing the second sequence of video frames exceeding the buffer duration in local memory; and generating a video file from the first sequence of video frames and the second sequence of video frames stored in local memory.

62 citations


Journal ArticleDOI
TL;DR: It is shown that for semi-normalized symbols, the inverse of any invertible frame multiplier can always be represented as a frame multiplier with the reciprocal symbol and dual frames of the given ones, and that the set of dual frames determines a frame uniquely.

52 citations


Journal ArticleDOI
TL;DR: The proposed MC models are harnessed to present a comprehensive analysis of the system and qualitatively predict the experimental results, and it is shown that the theory qualitatively explains the empirical behavior.
Abstract: Block-based motion estimation (ME) and motion compensation (MC) techniques are widely used in modern video processing algorithms and compression systems. The great variety of video applications and devices results in diverse compression specifications, such as frame rates and bit rates. In this paper, we study the effect of frame rate and compression bit rate on block-based ME and MC as commonly utilized in inter-frame coding and frame rate up-conversion (FRUC). This joint examination yields a theoretical foundation for comparing MC procedures in coding and FRUC. First, the video signal is locally modeled as a noisy translational motion of an image. Then, we theoretically model the motion-compensated prediction of available and absent frames as in coding and FRUC applications, respectively. The theoretic MC-prediction error is studied further and its autocorrelation function is calculated, yielding useful separable-simplifications for the coding application. We argue that a linear relation exists between the variance of the MC-prediction error and temporal distance. While the relevant distance in MC coding is between the predicted and reference frames, MC-FRUC is affected by the distance between the frames available for interpolation. We compare our estimates with experimental results and show that the theory explains qualitatively the empirical behavior. Then, we use the models proposed to analyze a system for improving of video coding at low bit rates, using a spatio-temporal scaling. Although this concept is practically employed in various forms, so far it lacked a theoretical justification. We here harness the proposed MC models and present a comprehensive analysis of the system, to qualitatively predict the experimental results.
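The linear relation argued above can be written as a toy model; the coefficients below are hypothetical placeholders, not values from the paper:

```python
def mc_error_variance(distance, base_var, slope):
    """Linear model: MC-prediction-error variance grows with the
    temporal distance between the frames involved (base_var and
    slope are hypothetical coefficients)."""
    return base_var + slope * distance

# In MC coding the relevant distance is predicted-to-reference;
# in MC-FRUC it is the gap between the frames used for interpolation.
coding = [mc_error_variance(d, 2.0, 0.5) for d in (1, 2, 4)]
assert coding == [2.5, 3.0, 4.0]
assert coding == sorted(coding)   # variance increases with distance
```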

31 citations


Patent
17 Sep 2015
TL;DR: In this article, a method for determining a pixel value of a texture pixel associated with a three-dimensional scan of an object is presented, which includes prioritizing a sequence of image frames in a queue based on one or more prioritization parameters.
Abstract: A method for determining a pixel value of a texture pixel associated with a three-dimensional scan of an object includes prioritizing a sequence of image frames in a queue based on one or more prioritization parameters. The method also includes selecting a first image frame from the queue. The method also includes determining a pixel value of the particular texture pixel in the first image frame. The method further includes selecting a second image frame from the queue. The second image frame has a higher priority than the first image frame based on the one or more prioritization parameters. The method also includes modifying the pixel value of the particular texture pixel based on a pixel value of the particular texture pixel in the second image frame to generate a modified pixel value of the particular texture pixel.

27 citations


Patent
02 Jun 2015
TL;DR: In this paper, a plurality of image frames corresponding to a video stream are received at a device, including a first image frame having a first resolution and a second image frame having a second resolution that is lower than the first resolution.
Abstract: A method includes receiving, at a device, a plurality of image frames corresponding to a video stream. The plurality of image frames include a first image frame having a first resolution and a second image frame having a second resolution that is lower than the first resolution. The method also includes detecting, at the device, a trigger by analyzing the second image frame. The method further includes designating, at the device, the first image frame as an action frame based on the trigger.

25 citations


Journal ArticleDOI
TL;DR: A novel algorithm, which is called mixed lossy and lossless (MLL) reference frame recompression, is proposed in this paper, which differs from its previous designs and achieves a much higher compression ratio.
Abstract: Frame recompression is an efficient way to reduce the huge external-memory bandwidth of a video encoder, especially for P/B frame compression. A novel algorithm, called mixed lossy and lossless (MLL) reference frame recompression, is proposed in this paper. The bandwidth reduction comes from two sources in our scheme, which differs from previous designs and achieves a much higher compression ratio. First, it comes from pixel truncation. We use truncated pixels (PR) for integer motion estimation (IME) and acquire truncated residuals for fractional motion estimation (FME) and motion compensation (MC). Because the pixel access of IME is much larger than that of FME and MC, this saves about 37.5% bandwidth under 3-b truncation. Second, embedded compression of PR helps to further reduce data. The truncated pixels in the first stage greatly help to achieve a higher compression ratio than current designs. From our experiments, 3-b truncated PR can be compressed to 15.4% of the original data size, while most current embedded compressions only achieve around 50%. For PR compression, two methods are proposed: in-block prediction and small-value-optimized variable-length coding. With these experiments, the total bandwidth can be reduced to 25.5%. Our proposed MLL is a hardware/software-friendly, fast-IME-friendly frame recompression scheme. It is more suitable for working together with the data-reuse strategy than previous schemes, and the video quality degradation is controllable and negligible.
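The pixel-truncation arithmetic above can be sketched directly, assuming 8-bit pixels and 3-bit truncation (the function names are ours):

```python
def truncate_pixel(p, bits=3):
    """Split an 8-bit pixel into a truncated part (used for integer
    ME) and the dropped low bits (the residual for fractional ME
    and motion compensation)."""
    truncated = p >> bits            # 5 most-significant bits
    residual = p & ((1 << bits) - 1)  # 3 least-significant bits
    return truncated, residual

def reconstruct(truncated, residual, bits=3):
    """Lossless recombination of the two parts."""
    return (truncated << bits) | residual

t, r = truncate_pixel(181)
assert (t, r) == (22, 5)
assert reconstruct(t, r) == 181
# Dropping 3 of 8 bits during IME saves 3/8 = 37.5% of pixel bandwidth.
print(3 / 8)  # 0.375
```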

24 citations


Patent
29 Jan 2015
TL;DR: In this article, a method, an apparatus, and a computer program product for processing touchscreen information are provided, which may include receiving touchscreen data that includes node values representative of signals generated by a touchscreen panel, generating a first data frame including difference values, and transmitting the first data frame over a control data bus.
Abstract: A method, an apparatus, and a computer program product for processing touchscreen information are provided. The method may include receiving touchscreen data that includes node values representative of signals generated by a touchscreen panel, generating a first data frame including difference values, and transmitting the first data frame over a control data bus. Each of the difference values may be calculated as a difference between one of the node values and a different node-related value wherein the first data frame has a predefined size. The first data frame may be configured to permit a receiver of the first data frame to reconstruct the touchscreen data without information loss.
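A minimal sketch of the difference-frame idea, assuming the reference for each node is the previous frame's value for that node (one plausible choice; the patent leaves the "node-related value" open):

```python
def build_difference_frame(node_values, reference):
    """First data frame: each entry is the difference between a
    node value and a reference node-related value (here assumed to
    be the previous frame's value for the same node)."""
    return [n - r for n, r in zip(node_values, reference)]

def reconstruct_frame(diffs, reference):
    """Receiver side: reconstruct the touchscreen data without
    information loss."""
    return [d + r for d, r in zip(diffs, reference)]

prev = [100, 102, 98, 101]
curr = [103, 101, 99, 105]
frame = build_difference_frame(curr, prev)
assert reconstruct_frame(frame, prev) == curr
```

Because each difference is exact, the receiver recovers the original node values bit-for-bit, matching the lossless claim.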

24 citations


Patent
30 Jan 2015
TL;DR: In this article, techniques are described for coding an ambient higher order ambisonic coefficient; an audio decoding device comprising a memory and a processor may perform the techniques, where the memory may store a first frame of a bitstream and a second frame of the bitstream.
Abstract: In general, techniques are described for coding an ambient higher order ambisonic coefficient. An audio decoding device comprising a memory and a processor may perform the techniques. The memory may store a first frame of a bitstream and a second frame of the bitstream. The processor may obtain, from the first frame, one or more bits indicative of whether the first frame is an independent frame that includes additional reference information to enable the first frame to be decoded without reference to the second frame. The processor may further obtain, in response to the one or more bits indicating that the first frame is not an independent frame, prediction information for first channel side information data of a transport channel. The prediction information may be used to decode the first channel side information data of the transport channel with reference to second channel side information data of the transport channel.

23 citations


Patent
06 Apr 2015
TL;DR: In this article, video is encoded using temporal scalability involving the creation of a base layer at a first frame rate and an enhancement layer including additional frames enabling playback at a second higher frame rate.
Abstract: Systems and methods in accordance with embodiments of this invention provide for encoding and playing back video at different frame rates using enhancement layers. In a number of embodiments, video is encoded using temporal scalability involving the creation of a base layer at a first frame rate and an enhancement layer including additional frames enabling playback at a second higher frame rate. The second higher frame rate can also be referred to as an enhanced frame rate. In a number of embodiments, the base and enhancement layers are stored in one or more container files that contain metadata describing the enhancement layer. Based on the capabilities of a playback device, it can select the particular frame rate at which to playback encoded video.
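The base/enhancement split can be illustrated with a toy sketch, assuming the base layer keeps every other frame (a common temporal-scalability pattern, not necessarily this patent's exact scheme):

```python
def split_temporal_layers(frames):
    """Base layer at half the frame rate (even-indexed frames);
    the enhancement layer holds the remaining frames needed to
    restore the full rate."""
    return frames[0::2], frames[1::2]

def merge_for_playback(base, enhancement):
    """Interleave both layers to play back at the enhanced rate."""
    out = []
    for i, b in enumerate(base):
        out.append(b)
        if i < len(enhancement):
            out.append(enhancement[i])
    return out

frames = list(range(8))            # frame indices at the full rate
base, enh = split_temporal_layers(frames)
assert base == [0, 2, 4, 6]        # half-rate playback uses base only
assert merge_for_playback(base, enh) == frames
```

A device that cannot sustain the enhanced rate simply decodes the base layer; a capable device merges both, as the container metadata allows it to choose.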

19 citations


Patent
Ximin Zhang, Sang-Hee Lee
25 Mar 2015
TL;DR: In this paper, techniques are discussed that include determining a quantization parameter for a frame of a video sequence and modifying the quantization parameter based on a spatial complexity or a temporal complexity associated with the video frame.
Abstract: Techniques related to constant quality video coding are discussed. Such techniques may include determining a quantization parameter for a frame of a video sequence, modifying the quantization parameter based on a spatial complexity or a temporal complexity associated with the video frame, and generating a block level quantization parameter for a block of the video frame based on the modified frame level quantization parameter, a complexity of the block, and a complexity of the video frame.

Patent
Neelesh N. Gokhale
03 Aug 2015
TL;DR: In this paper, a decoder adapted to generate an intermediate decoded version of a video frame from an encoded version of the video frame, determine either an amount of high-frequency basis functions or coefficients below a quantization threshold for at least one block of the frame, and generate a final decoded version of the video frame based at least in part on the intermediate decoded version and the determined amount(s) for the one or more blocks.
Abstract: A decoder adapted to generate an intermediate decoded version of a video frame from an encoded version of the video frame, determine either an amount of high frequency basis functions or coefficients below a quantization threshold for at least one block of the video frame, and generate a final decoded version of the video frame based at least in part on the intermediate decoded version of the video frame and the determined amount(s) for the one or more blocks of the video frame, is disclosed. In various embodiments, the decoder may be incorporated as a part of a video system.

Patent
18 Mar 2015
TL;DR: In this article, the quality of static image frames having a relatively long residence time in a frame buffer on a sink device is improved by encoding additional information to improve the representation of the now static frame.
Abstract: One or more system, apparatus, method, and computer readable media is described for improving the quality of static image frames having a relatively long residence time in a frame buffer on a sink device. Where a compressed data channel links a source and sink, the source may encode additional frame data to improve the quality of a static frame presented by a sink display. A display source may encode frame data at a nominal quality and transmit a packetized stream of the compressed frame data. In the absence of a timely frame buffer update, the display source encodes additional information to improve the image quality of the representation of the now static frame. A display sink device presents a first representation of the frame at the nominal image quality, and presents a second representation of the frame at the improved image quality upon subsequently receiving the frame quality improvement data.

Patent
Balamanohar Paluri
28 Dec 2015
TL;DR: In this paper, the first and second video frames are each processed using a convolutional neural network to output a set of feature maps, and the two sets of feature maps are processed by a spatial matching layer to determine an optical flow for at least one pixel.
Abstract: Systems, methods, and non-transitory computer-readable media can obtain a first video frame and a second video frame. The first video frame can be processed using a convolutional neural network to output a first set of feature maps. The second video frame can be processed using the convolutional neural network to output a second set of feature maps. The first set of feature maps and the second set of feature maps can be processed using a spatial matching layer of the convolutional neural network to determine an optical flow for at least one pixel.

Patent
30 Sep 2015
TL;DR: In this paper, a self-correcting tracking method was proposed to track an object from a first image frame to a second image frame using optical flow object tracking, which can correct tracking errors by reacquiring the object when the tracking confidence metric is below a threshold value.
Abstract: In some implementations, a computing device can track an object from a first image frame to a second image frame using a self-correcting tracking method. The computing device can select points of interest in the first image frame. The computing device can track the selected points of interest from the first image frame to the second image frame using optical flow object tracking. The computing device can prune the matching pairs of points and generate a transform based on the remaining matching pairs to detect the selected object in the second image frame. The computing device can generate a tracking confidence metric based on a projection error for each point of interest tracked from the first frame to the second frame. The computing device can correct tracking errors by reacquiring the object when the tracking confidence metric is below a threshold value.
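The confidence metric can be sketched as follows, assuming the projection error is the distance between each transformed point and its tracked match; the mapping of mean error to (0, 1] is our choice, not the patent's:

```python
import math

def tracking_confidence(points_a, points_b, transform):
    """Mean projection error of tracked points under the estimated
    transform, mapped to a confidence score in (0, 1]."""
    errs = []
    for (x, y), (u, v) in zip(points_a, points_b):
        px, py = transform(x, y)
        errs.append(math.hypot(px - u, py - v))
    mean_err = sum(errs) / len(errs)
    return 1.0 / (1.0 + mean_err)

# A pure translation and perfectly tracked points give zero error.
shift = lambda x, y: (x + 2.0, y - 1.0)
a = [(0, 0), (5, 5), (10, 0)]
b = [(2, -1), (7, 4), (12, -1)]
assert tracking_confidence(a, b, shift) == 1.0
# Self-correction: reacquire the object when confidence drops below
# a threshold, e.g. 0.5.
```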

Patent
10 Apr 2015
TL;DR: In this paper, a method of transmitting a frame is provided by a device in a WLAN, where the device sets as additional data subcarriers some of the subcarriers that are not set as data subcarriers in at least some of the fields included in a frame of a legacy frame format.
Abstract: A method of transmitting a frame is provided by a device in a WLAN. The device sets as additional data subcarriers some of the subcarriers that are not set as data subcarriers in at least some of the fields included in a frame of a legacy frame format, and allocates information to the additional data subcarriers.

Patent
Ximin Zhang, Sang-Hee Lee
09 Nov 2015
TL;DR: In this paper, the quantization parameter (QP) for a given encoding process may be determined for the frame based on the target bitrate to maintain a suitable average bitrate.
Abstract: Systems and methods for determining a target number of bits (target bitrate) for encoding a frame of video that will satisfy a buffer constraint in a parallel video encoder. The quantization parameter (QP) for a given encoding process may be determined for the frame based on the target bitrate to maintain a suitable average bitrate. In some embodiments, the bitrate used for one or more prior frames is estimated. In some embodiments, a buffer fullness update is made based on an estimated bitrate. In some embodiments, a bitrate to target for each frame is determined based on the frame type, the estimated bitrate of one or more prior frames, and the updated buffer fullness.

Patent
22 Dec 2015
TL;DR: In this paper, a depth frame comprising a plurality of grid elements that each have a respective depth value is received from a depth sensor oriented towards an open end of a shipping container.
Abstract: A method and apparatus for receiving a depth frame from a depth sensor oriented towards an open end of a shipping container, the depth frame comprising a plurality of grid elements that each have a respective depth value, identifying one or more occlusions in the depth frame, correcting the one or more occlusions in the depth frame using one or more temporally proximate depth frames, and outputting the corrected depth frame for fullness estimation.
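One plausible reading of the occlusion correction, assuming occluded grid elements are marked with a sentinel depth of 0 and filled from the first temporally proximate frame that has a valid value (the sentinel and fill order are our assumptions):

```python
def correct_occlusions(depth, neighbors, occluded=0):
    """Fill grid elements flagged as occluded (depth == occluded)
    with the first valid value from temporally proximate frames."""
    out = list(depth)
    for i, d in enumerate(out):
        if d == occluded:
            for frame in neighbors:
                if frame[i] != occluded:
                    out[i] = frame[i]
                    break
    return out

current = [120, 0, 118, 0]     # two occluded grid elements
prev    = [121, 117, 118, 0]
nxt     = [119, 116, 117, 115]
assert correct_occlusions(current, [prev, nxt]) == [120, 117, 118, 115]
```

The corrected frame can then feed the fullness estimation without holes from, e.g., a worker standing in front of the sensor.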

Patent
Ganmei You, Yuan Liu, Zhongchao Shi, Yaojie Lu, Gang Wang
13 Jul 2015
TL;DR: In this paper, an object detection method is used to detect an object in an image pair corresponding to a current frame, where the image pair includes an original image of the current frame and a disparity map of the same current frame.
Abstract: Disclosed is an object detection method used to detect an object in an image pair corresponding to a current frame. The image pair includes an original image of the current frame and a disparity map of the same current frame. The original image of the current frame includes at least one of a grayscale image and a color image of the current frame. The object detection method comprises steps of obtaining a first detection object detected in the disparity map of the current frame; acquiring an original detection object detected in the original image of the current frame; correcting, based on the original detection object, the first detection object so as to obtain a second detection object; and outputting the second detection object.

Patent
12 Jun 2015
TL;DR: In this article, the optical flow determination logic is used to quantify relative motion of a feature present in a first frame and a second frame of video with respect to the two frames of video.
Abstract: An image processing system includes a processor and optical flow determination logic. The optical flow determination logic is to quantify relative motion of a feature present in a first frame of video and a second frame of video with respect to the two frames of video. The optical flow determination logic configures the processor to convert each of the frames of video into a hierarchical image pyramid. The image pyramid comprises a plurality of image levels. Image resolution is reduced at each higher one of the image levels. For each image level and for each pixel in the first frame, the processor is configured to establish an initial estimate of a location of the pixel in the second frame and to apply a plurality of sequential searches, starting from the initial estimate, that establish refined estimates of the location of the pixel in the second frame.
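The hierarchical pyramid can be sketched as repeated 2x2 averaging (one common downsampling choice; the patent does not fix the filter):

```python
def build_pyramid(frame, levels=3):
    """Hierarchical image pyramid: resolution halves at each higher
    level, here via simple 2x2 box averaging."""
    pyramid = [frame]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = len(prev) // 2, len(prev[0]) // 2
        down = [[(prev[2 * y][2 * x] + prev[2 * y][2 * x + 1] +
                  prev[2 * y + 1][2 * x] + prev[2 * y + 1][2 * x + 1]) / 4
                 for x in range(w)] for y in range(h)]
        pyramid.append(down)
    return pyramid

frame = [[float(x + y) for x in range(8)] for y in range(8)]
pyr = build_pyramid(frame)
assert [len(level) for level in pyr] == [8, 4, 2]
```

The coarse levels give the sequential searches a cheap initial estimate of each pixel's location, which the finer levels then refine.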

Patent
12 Nov 2015
TL;DR: In this paper, the first frame identification value corresponds to an average color value of a frame of the video content, and the second frame identification value corresponds to a global motion vector (GMV) value of the frame.
Abstract: Various aspects of a method and system to process video content are disclosed herein. The method includes determination of a first frame identification value associated with a video content. The first frame identification value corresponds to an average color value of a frame of the video content. The method further includes determination of a second frame identification value associated with the video content. The second frame identification value corresponds to a global motion vector (GMV) value of the frame of the video content. The method further includes determination of a first intermediate frame based on one or both of the first frame identification value and the second frame identification value.
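A toy sketch of the two identification values, assuming the GMV is the component-wise mean of block motion vectors (one common definition; the patent leaves the exact computation open):

```python
def average_color(frame):
    """First frame-identification value: the frame's average color."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def global_motion_vector(block_mvs):
    """Second identification value: a simple GMV, taken here as the
    component-wise mean of the block motion vectors."""
    n = len(block_mvs)
    return (sum(v[0] for v in block_mvs) / n,
            sum(v[1] for v in block_mvs) / n)

frame = [[10, 20], [30, 40]]
assert average_color(frame) == 25.0
assert global_motion_vector([(2, 0), (4, 2)]) == (3.0, 1.0)
```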

Patent
Zoran Fejzo
26 Feb 2015
TL;DR: In this paper, a post-encoding bitrate reduction system and method is presented for generating one or more scaled compressed bitstreams from a single encoded plenary file, which is scaled down by truncating bits in the data frames to conform to the bit allocation.
Abstract: A post-encoding bitrate reduction system and method for generating one or more scaled compressed bitstreams from a single encoded plenary file. The plenary file contains multiple audio object files that were encoded separately using a scalable encoding process having fine-grained scalability. Activity in the data frames of the encoded audio object files at a time period is compared across files to obtain a data frame activity comparison. Bits from an available bitpool are assigned to all of the data frames based on the data frame activity comparison and corresponding hierarchical metadata. The plenary file is scaled down by truncating bits in the data frames to conform to the bit allocation. In some embodiments, frame activity is compared to a silence threshold; a data frame is deemed silent if its activity is less than or equal to the threshold, and minimal bits are used to represent the silent frame.

Posted Content
TL;DR: In this paper, a group-frame construction is proposed to construct a low-coherence frame with a small number of distinct inner product values between the frame elements, in a sense approximating a Grassmannian frame.
Abstract: Many problems in areas such as compressive sensing and coding theory seek to design a set of equal-norm vectors with large angular separation. This idea is essentially equivalent to constructing a frame with low coherence. The elements of such frames can in turn be used to build high-performance spherical codes, quantum measurement operators, and compressive sensing measurement matrices, to name a few applications. In this work, we allude to the group-frame construction first described by Slepian and further explored in the works of Vale and Waldron. We present a method for selecting representations of a finite group to construct a group frame that achieves low coherence. Our technique produces a tight frame with a small number of distinct inner product values between the frame elements, in a sense approximating a Grassmannian frame. We identify special cases in which our construction yields some previously known frames with optimal coherence meeting the Welch lower bound, and other cases in which the entries of our frame vectors come from small alphabets. In particular, we apply our technique to the problem of choosing a subset of rows of a Hadamard matrix so that the resulting columns form a low-coherence frame. Finally, we give an explicit calculation of the average coherence of our frames, and find regimes in which they satisfy the Strong Coherence Property described by Mixon, Bajwa, and Calderbank.
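Coherence and the Welch bound can be illustrated directly: the Mercedes-Benz frame (three unit vectors in R^2 spaced 120 degrees apart) is a standard example that attains the bound:

```python
import math

def coherence(vectors):
    """Maximum absolute inner product between distinct unit-norm vectors."""
    worst = 0.0
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            ip = abs(sum(a * b for a, b in zip(vectors[i], vectors[j])))
            worst = max(worst, ip)
    return worst

def welch_bound(m, n):
    """Welch lower bound on the coherence of m unit vectors in dimension n."""
    return math.sqrt((m - n) / (n * (m - 1)))

mb = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
      for k in range(3)]
assert abs(coherence(mb) - 0.5) < 1e-9
assert abs(welch_bound(3, 2) - 0.5) < 1e-9   # bound attained
```

Frames meeting the bound, like this one, are exactly the equiangular tight frames the abstract refers to as having optimal coherence.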

Patent
15 Dec 2015
TL;DR: In this paper, a receiver detects the first cell of the first forward error correction encoded frame of a new service frame which, as a result of the convolutional interleaving, does not have any data cells in one or more previous service frames.
Abstract: A transmitter transmits data using Orthogonal Frequency Division Multiplexing, OFDM, symbols. The transmitter comprises a forward error correction encoder configured to encode the data to form forward error correction encoded frames of encoded data cells, a service frame builder configured to form a service frame for transmission comprising a plurality of forward error correction encoded frames, a convolutional interleaver comprising a plurality of delay portions and configured to convolutionally interleave the data cells of the service frames, a modulation symbol mapper configured to map the interleaved and encoded data cells of the service frames onto modulation cells, and a modulator configured to modulate the sub-carriers of one or more OFDM symbols with the modulation cells. A controller is configured to form signalling data to be transmitted, including an indication of an identified first cell of a first forward error correction encoded frame of a new service frame which can be decoded from cells received from the new service frame alone, or from the new service frame and one or more service frames following it. By detecting the first cell of the first forward error correction encoded frame of a new service frame which, as a result of the convolutional interleaving, does not have any data cells in one or more previous service frames, a receiver which has acquired the new service frame but none of the previous service frames can decode this first forward error correction encoded frame and ignore the other forward error correction encoded frames earlier in the service frame. Therefore, for example, a receiver may power on or change channels during a previous service frame and be directed to decode only a forward error correction encoded frame that it can decode.

Journal ArticleDOI
TL;DR: The rate-distortion performance of the proposed codec is superior to the state-of-the-art CS-based video codec, although there is still a considerable gap between it and traditional video codec.
Abstract: This paper presents a compressive-sensing- (CS-) based video codec suitable for wireless video systems that require simple encoders but can tolerate more complex decoders. At the encoder side, each video frame is independently measured by a block-based random matrix, and the resulting measurements are encoded into a compressed bitstream by entropy coding. Specifically, to reduce the quantization errors of the measurements, a nonuniform quantization is integrated into the DPCM-based quantizer. At the decoder side, a novel joint reconstruction algorithm is proposed to improve the quality of the reconstructed video frames. First, the proposed algorithm uses a temporal autoregressive (AR) model to generate the Side Information (SI) of a video frame, and then it recovers the residual between the original frame and the corresponding SI. To exploit the sparsity of the residual, whose statistics vary locally, Principal Component Analysis (PCA) is used to learn online a transform matrix adapted to the residual structures. Extensive experiments validate that the joint reconstruction algorithm in the proposed codec achieves much better results than many existing methods in terms of reconstructed quality and computational complexity. The rate-distortion performance of the proposed codec is superior to the state-of-the-art CS-based video codec, although a considerable gap remains between it and traditional video codecs.
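A minimal DPCM quantizer sketch for the measurements (uniform step for brevity, whereas the paper integrates a nonuniform quantizer into the DPCM loop):

```python
def dpcm_encode(measurements, step):
    """DPCM: quantize the difference between each measurement and
    the reconstructed previous one, so encoder and decoder stay in
    lockstep and quantization errors do not accumulate."""
    indices, prev = [], 0.0
    for m in measurements:
        q = round((m - prev) / step)
        indices.append(q)
        prev += q * step          # encoder tracks decoder state
    return indices

def dpcm_decode(indices, step):
    out, prev = [], 0.0
    for q in indices:
        prev += q * step
        out.append(prev)
    return out

x = [10.0, 10.4, 11.1, 9.8]
rec = dpcm_decode(dpcm_encode(x, step=0.5), step=0.5)
assert all(abs(a - b) <= 0.25 for a, b in zip(x, rec))  # error <= step/2
```

A nonuniform quantizer would replace the fixed `step` with finer cells near zero, where measurement differences cluster.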

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed multiple description coding scheme with stagger frame order for stereoscopic 3-D videos outperforms state-of-the-art schemes.
Abstract: Due to the prediction structures employed in video coding, the loss of one packet will affect many following frames. In this paper, a multiple description coding scheme with stagger frame order is proposed for stereoscopic 3-D videos. First, the reference and auxiliary views in stereoscopic sequences are asymmetrically encoded into one description, whereas the other description is formed in the same way with a one-frame delay. Because of the stagger frame order, the coarsely encoded B frames will be inserted into different positions of the two descriptions. If a certain frame encoded with I/P mode is lost, then its corresponding B-frame version will be employed to compensate for the loss. In each description, the quantization steps of B frames are tuned based on a closed-form solution that considers the video contents, network status, frame positions in the group of pictures, and the layer of the views. For further improvement, a fusing scheme is provided. The experimental results demonstrate that the proposed scheme outperforms state-of-the-art schemes. Specifically, up to 1.3-dB gain is achieved in the case of packet loss, and 2-dB gain is obtained for the side/central performance.

Patent
03 Dec 2015
TL;DR: In this paper, a method for generating and employing a camera noise model, performed by a processing unit, is introduced to at least contain the following steps: a first frame is obtained by controlling a camera module via a camera-module controller.
Abstract: A method for generating and employing a camera noise model, performed by a processing unit, is introduced, containing at least the following steps. A camera noise model is provided. A first frame is obtained by controlling a camera module via a camera module controller. A blending ratio corresponding to each pixel value of the first frame is generated according to the camera noise model, the pixel value of the first frame, and the corresponding pixel value of a second frame. A third frame is generated by fusing each pixel value of the first frame with the corresponding pixel value of the second frame according to the blending ratio. A de-noising strength for each pixel value of the third frame is adjusted according to the blending ratio. Each pixel value of the third frame is adjusted using the corresponding de-noising strength.

Journal ArticleDOI
TL;DR: A comprehensive set of experiments are done to justify the superiority of the proposed scheme over existing literature with respect to spatial and quality adaptation attacks as well as visual quality.
Abstract: Due to the increasing heterogeneity among end-user devices for playing multimedia content, scalable video communication has attracted significant attention in recent years. As a consequence, content authentication or ownership authentication using watermarking for scalable video streams is becoming an emerging research topic. In this paper, a watermarking scheme for scalable video is proposed which is robust against spatial and quality scalability. In the proposed scheme, a DC frame is generated by accumulating the DC values of non-overlapping blocks for every frame in the input video sequence. The DC frame sequence is up-sampled and subtracted from the original video sequence to generate a residual frame sequence. Then Discrete Cosine Transform (DCT) based temporal filtering is applied to both the DC and residual frame sequences. The watermark is embedded in the low-pass DC frames, and an up-sampled watermark is embedded in the low-pass residual frames to achieve a graceful improvement of the watermark signal in successive enhancement layers. A comprehensive set of experiments justifies the superiority of the proposed scheme over the existing literature with respect to spatial and quality adaptation attacks as well as visual quality.
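The DC-frame and residual-frame construction can be sketched directly; 2x2 blocks and nearest-neighbour up-sampling are our simplifying choices:

```python
def dc_frame(frame, block=2):
    """DC frame: one DC (mean) value per non-overlapping block."""
    h, w = len(frame), len(frame[0])
    return [[sum(frame[y + dy][x + dx]
                 for dy in range(block) for dx in range(block)) / block ** 2
             for x in range(0, w, block)]
            for y in range(0, h, block)]

def upsample(small, block=2):
    """Nearest-neighbour up-sampling back to full resolution."""
    return [[small[y // block][x // block]
             for x in range(len(small[0]) * block)]
            for y in range(len(small) * block)]

def residual_frame(frame, block=2):
    """Residual = original minus the up-sampled DC frame."""
    up = upsample(dc_frame(frame, block), block)
    return [[frame[y][x] - up[y][x] for x in range(len(frame[0]))]
            for y in range(len(frame))]

f = [[10, 12], [14, 16]]
assert dc_frame(f) == [[13.0]]
assert residual_frame(f) == [[-3.0, -1.0], [1.0, 3.0]]
```

In the scheme above, the watermark for the base layer would be embedded in the (temporally filtered) DC frames, and its up-sampled version in the residual frames for the enhancement layer.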

Patent
30 Apr 2015
TL;DR: In a sort-middle architecture, pixel values that were computed in a previous frame may be reused for the current frame; the contents of the color buffer or other buffers of the previous frame of a tile may be moved to the same buffer of the tile for the current frame.
Abstract: Pixel values that were computed in a previous frame may be reused for the current frame, operating in a sort-middle architecture. A hash or some other compact representation of all the data used in a tile, including all triangles, uniforms, textures, shaders, etc. is computed and stored for each tile. When rendering the next frame, that compact representation is once again computed for each tile. In a sort-middle architecture, there is a natural break point just before rasterization. At this break point, the compact representation may be compared to the compact representation computed in the previous frame for the same tile. If those compact representations are the same, then there is no need to render anything for this tile. Instead, the contents of the color buffer or other buffers of the previous frame of the tile may be moved to the same buffer of the tile for the current frame.
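The tile-reuse test can be sketched as follows, assuming the compact representation is a SHA-256 digest over the tile's inputs (the patent says "a hash or some other compact representation"; the helper names are ours):

```python
import hashlib

def tile_key(triangles, uniforms, texture_ids, shader_id):
    """Compact representation of everything that affects a tile's
    pixels; any change to the inputs produces a different digest."""
    h = hashlib.sha256()
    h.update(repr((triangles, uniforms, texture_ids, shader_id)).encode())
    return h.hexdigest()

prev_keys = {}  # digest stored per tile for the previous frame

def render_tile_if_changed(tile_id, key, render):
    """At the break point before rasterization: skip rendering when
    the tile's inputs match last frame, and reuse its buffers."""
    if prev_keys.get(tile_id) == key:
        return "reused"           # copy previous frame's tile buffers
    prev_keys[tile_id] = key
    render()
    return "rendered"

k = tile_key([(0, 0, 1, 1)], {"mvp": 1}, [7], "flat")
assert render_tile_if_changed(0, k, lambda: None) == "rendered"
assert render_tile_if_changed(0, k, lambda: None) == "reused"
```

Static tiles (UI panels, unchanged backgrounds) then cost only a hash comparison and a buffer move per frame.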

Patent
Ximin Zhang, Sang-Hee Lee
13 May 2015
TL;DR: In this paper, techniques related to designating golden frames and to determining frame sizes and/or quantization parameters for video coding are discussed; such techniques may include designating a frame as a golden frame or a non-golden frame based on whether the frame is a scene change frame, the distance of the frame to a previous golden frame, and an average temporal distortion.
Abstract: Techniques related to designating golden frames and to determining frame sizes and/or quantization parameters for golden and non-golden frames in video coding are discussed. Such techniques may include designating a frame as a golden frame or a non-golden frame based on whether the frame is a scene change frame, a distance of the frame to a previous golden frame, and an average temporal distortion of the frame, and determining a frame size and/or quantization parameter for the frame based on the designation and a temporal distortion of the frame.