
Showing papers on "Residual frame" published in 2001


Journal ArticleDOI
TL;DR: This paper places frames in a new setting, where some of the elements are deleted, and shows that a normalized frame minimizes mean-squared error if and only if it is tight.

549 citations


Journal ArticleDOI
TL;DR: The new rate control algorithm attempts to achieve a good balance between spatial quality and temporal quality to enhance the overall human perceptual quality at low-bit-rates.
Abstract: A novel rate control algorithm with a variable-encoding frame rate method is proposed for low-bit-rate video coding. Most existing rate control algorithms for low-bit-rate video focus on bit allocation at the macroblock level under a constant frame-rate assumption. The proposed rate control algorithm is able to adjust the encoding frame rate at the expense of tolerable time-delay. The new rate control algorithm attempts to achieve a good balance between spatial quality and temporal quality to enhance the overall human perceptual quality at low-bit-rates. It is demonstrated that the rate control algorithm achieves higher coding efficiency at low-bit-rates, with a low additional computational cost. The proposed variable-encoding frame rate method is compatible with the bit-stream structure of H.263+.
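As a rough illustration of the frame-rate adaptation idea (not the algorithm proposed in the paper), the sketch below picks an encoding frame rate from the encoder buffer occupancy; the candidate rates and occupancy thresholds are hypothetical.

```python
# Hypothetical sketch: trade temporal resolution for spatial quality by
# lowering the encoding frame rate as the encoder buffer fills up.

def choose_frame_rate(buffer_fullness, buffer_size,
                      candidate_rates=(30.0, 15.0, 10.0, 7.5)):
    """Pick an encoding frame rate (fps) based on buffer occupancy."""
    occupancy = buffer_fullness / buffer_size
    if occupancy < 0.25:
        return candidate_rates[0]   # plenty of headroom: keep the full frame rate
    if occupancy < 0.50:
        return candidate_rates[1]
    if occupancy < 0.75:
        return candidate_rates[2]
    return candidate_rates[3]       # buffer nearly full: drop frames, protect spatial quality

if __name__ == "__main__":
    for fullness in (1_000, 5_000, 9_000):
        print(fullness, choose_frame_rate(fullness, buffer_size=10_000))
```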

140 citations


Patent
04 Apr 2001
TL;DR: In this paper, an end-of-frame format for the transmitted frame is provided having an end of frame plurality of symbols, and the squared magnitude of the correlation sequence is computed.
Abstract: A method of determining an end of a transmitted frame at a receiver on a frame-based communications network. An end of frame format for the transmitted frame is provided having an end of frame plurality of symbols. A received transmitted frame is filtered using filter coefficients matched to the end of frame plurality of symbols to provide a correlation sequence. A squared magnitude of the correlation sequence is computed. The squared magnitude of the correlation sequence is low-pass filtered to provide a low-pass filtered correlation signal. The low-pass filtered correlation signal is delayed to provide a delayed low-pass filtered correlation signal. The delayed low-pass filtered correlation signal is multiplied by a fixed predetermined threshold to provide a multiplied correlation signal. The multiplied correlation signal is compared with the low-pass filtered correlation signal to provide a match/no match comparison indicative of the possible end of a transmitted frame.
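A minimal numpy sketch of the detection pipeline described above, assuming a real-valued symbol alphabet; the example end-of-frame sequence, filter length, delay, and threshold are all illustrative values, not taken from the patent.

```python
import numpy as np

def detect_end_of_frame(rx, eof_symbols, lpf_len=8, delay=16, threshold=8.0):
    """Return sample indices where an end-of-frame marker is likely present."""
    # 1. Matched filter: correlate the received samples with the EOF symbols.
    corr = np.convolve(rx, np.conj(eof_symbols[::-1]), mode="same")
    # 2. Squared magnitude of the correlation sequence.
    power = np.abs(corr) ** 2
    # 3. Low-pass filter (simple moving average).
    lpf = np.convolve(power, np.ones(lpf_len) / lpf_len, mode="same")
    # 4. Delayed copy of the low-pass filtered signal (inf padding suppresses
    #    spurious detections during start-up).
    delayed = np.concatenate([np.full(delay, np.inf), lpf[:-delay]])
    # 5./6. Multiply the delayed signal by a fixed threshold and compare.
    return np.flatnonzero(lpf > threshold * delayed)

if __name__ == "__main__":
    eof = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
    rng = np.random.default_rng(0)
    rx = 0.1 * rng.standard_normal(256)
    rx[200:208] += eof                      # embed an end-of-frame marker at sample 200
    print(detect_end_of_frame(rx, eof))     # indices should cluster around sample 200
```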

139 citations


Patent
Sung-Jea Ko1, Sung-Hee Lee1
28 Feb 2001
TL;DR: In this paper, a format converter which performs frame rate conversion and de-interlacing using a bi-directional motion vector and a method thereof is presented, which includes the steps of estimating a motion vector between the current frame and the previous frame from a frame to be interpolated.
Abstract: A format converter which performs frame rate conversion and de-interlacing using a bi-directional motion vector, and a method thereof, are provided. The method includes the steps of (a) estimating a bi-directional motion vector between the current frame and the previous frame from a frame to be interpolated; (b) setting the motion vector of a neighboring block that has the minimum error distortion, among the motion vectors estimated in step (a), as the motion vector of the current block; and (c) forming the frame to be interpolated with the motion vector set in step (b).

136 citations


Patent
30 Jul 2001
TL;DR: In this paper, cross correlation between a reference image frame and a comparison image frame determines the direction of motion relative to x and y orthogonal axes for a pointing device that uses optical imaging to monitor movement relative to a surface.
Abstract: Cross correlation between a reference image frame and a comparison image frame determines the direction of motion relative to x and y orthogonal axes for a pointing device that uses optical imaging to monitor movement relative to a surface. Pixel data for a portion of the surface are loaded into a buffer memory that shifts the data between successive positions in the buffer memory as each pixel of a comparison frame is processed to compute cross correlation. Auto correlation is determined for positions in the reference frame and used with the cross correlation results to determine a sub-pixel interpolation for the movement of the pointing device. A new reference frame is loaded using data for the comparison frame currently being processed if the pointing device is moved sufficiently so that the next comparison frame will not overlap the existing reference frame.
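The following is a small numpy sketch of the core idea, estimating an integer-pixel shift by maximizing cross correlation over a search window; the frame size and search range are illustrative, and the auto-correlation-based sub-pixel interpolation step is omitted.

```python
import numpy as np

def estimate_shift(reference, comparison, search=2):
    """Return the (dy, dx) shift of the comparison frame that best matches the reference."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(comparison, (dy, dx), axis=(0, 1))
            score = np.sum(reference * shifted)          # cross correlation at this offset
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((16, 16))
    comp = np.roll(ref, (1, -2), axis=(0, 1))            # surface moved 1 px down, 2 px left
    print(estimate_shift(ref, comp))                     # (-1, 2): the shift that re-aligns them
```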

111 citations


Patent
Wilfried M. Osberger1
17 Jan 2001
TL;DR: In this article, an improved visual attention model uses a robust adaptive segmentation algorithm to divide a current frame of a video sequence into a plurality of regions based upon both color and luminance.
Abstract: An improved visual attention model uses a robust adaptive segmentation algorithm to divide a current frame of a video sequence into a plurality of regions based upon both color and luminance, with each region being processed in parallel by a plurality of spatial feature algorithms including color and skin to produce respective spatial importance maps. The current frame and a previous frame are also processed to produce motion vectors for each block of the current frame, the motion vectors being compensated for camera motion, and the compensated motion vectors being converted to produce a temporal importance map. The spatial and temporal importance maps are combined using weighting based upon eye movement studies.

104 citations


Patent
Tatsunori Saito1
28 Jun 2001
TL;DR: In this paper, a PES generation section of a multiplexer detects the number of skipped frames by analyzing elementary video streams output from a video encoder to determine a PTS on the basis of the time difference between frames calculated from the number of skipped frames.
Abstract: In order to allow generation of a time stamp that takes a frame skip into account even when frame skips occur, a PES generation section of a multiplexer detects the number of skipped frames by analyzing the elementary video streams output from a video encoder, and determines a PTS on the basis of the time difference between frames calculated from the number of skipped frames. The frame into which the PTS is to be inserted is then identified through the same stream analysis, and the PTS is placed in the PES header of that frame before the frame is transmitted over the transmission channel.
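A small sketch of the time-stamp arithmetic implied above: when n frames were skipped, the PTS advances by (n + 1) frame periods. The 90 kHz time-stamp clock and ~29.97 fps frame rate are common MPEG values used here purely for illustration.

```python
PTS_CLOCK_HZ = 90_000                 # MPEG system clock for time stamps
FRAME_RATE = 30_000 / 1001            # ~29.97 fps

def next_pts(previous_pts, skipped_frames):
    """Advance the PTS by the time difference implied by the number of skipped frames."""
    frame_duration = PTS_CLOCK_HZ / FRAME_RATE        # clock ticks per frame interval
    return int(round(previous_pts + (skipped_frames + 1) * frame_duration))

if __name__ == "__main__":
    pts = 0
    for skipped in (0, 0, 2, 0):      # two frames skipped before the third coded frame
        pts = next_pts(pts, skipped)
        print(pts)
```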

97 citations


Patent
09 Jul 2001
TL;DR: In this article, a speech communication system and method that has an improved way of handling information lost during transmission from the encoder to the decoder is presented, which matches the energy of the synthesized speech to the energy in the previously received frame.
Abstract: A speech communication system and method that has an improved way of handling information lost during transmission from the encoder to the decoder. More specifically, the improved speech communication system more accurately recovers from losing information about a frame of speech such as line spectral frequencies (LSF's), pitch lag (or adaptive codebook excitation), fixed codebook excitation and/or gain information. After estimating lost parameters in a lost frame and synthesizing the speech, the improved system matches the energy of the synthesized speech to the energy of the previously received frame.
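A minimal numpy sketch of the energy-matching step, assuming the frame energies are simple sums of squared samples; the function name and frame length are illustrative.

```python
import numpy as np

def match_energy(synth_frame, last_good_frame, eps=1e-12):
    """Scale the synthesized frame so its energy matches the previously received frame."""
    e_synth = np.sum(synth_frame ** 2)
    e_good = np.sum(last_good_frame ** 2)
    gain = np.sqrt(e_good / (e_synth + eps))
    return gain * synth_frame

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    good = 0.5 * rng.standard_normal(160)     # last correctly received 20 ms frame (8 kHz)
    synth = 0.1 * rng.standard_normal(160)    # concealment output for the lost frame, too quiet
    matched = match_energy(synth, good)
    print(round(np.sum(matched ** 2), 3), round(np.sum(good ** 2), 3))   # energies now match
```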

96 citations


Patent
18 Apr 2001
TL;DR: In this paper, a frame erasure compensation method in a variable-rate speech coder is presented in which a first encoder quantizes a pitch lag value and a first delta pitch lag value for the current frame, while a second, predictive encoder quantizes only a second delta pitch lag value for the previous frame.
Abstract: A frame erasure compensation method in a variable-rate speech coder includes quantizing, with a first encoder, a pitch lag value for a current frame and a first delta pitch lag value equal to the difference between the pitch lag value for the current frame and the pitch lag value for the previous frame. A second, predictive encoder quantizes only a second delta pitch lag value for the previous frame (equal to the difference between the pitch lag value for the previous frame and the pitch lag value for the frame prior to that frame). If the frame prior to the previous frame is processed as a frame erasure, the pitch lag value for the previous frame is obtained by subtracting the first delta pitch lag value from the pitch lag value for the current frame. The pitch lag value for the erasure frame is then obtained by subtracting the second delta pitch lag value from the pitch lag value for the previous frame. Additionally, a waveform interpolation method may be used to smooth discontinuities caused by changes in the coder pitch memory.
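The recovery arithmetic can be made concrete with a short worked example; the lag values below are made-up numbers chosen only to show how the two delta values are used when the frame prior to the previous frame is erased.

```python
# Hypothetical pitch lag values (in samples) for illustration only.
pitch_lag_current = 58    # quantized pitch lag of the current frame
delta1 = 3                # current lag minus previous lag (sent with the current frame)
delta2 = 2                # previous lag minus the lag of the frame before it (predictive encoder)

# The frame before the previous frame was received as an erasure, so its lag was lost.
pitch_lag_previous = pitch_lag_current - delta1    # 58 - 3 = 55
pitch_lag_erased = pitch_lag_previous - delta2     # 55 - 2 = 53

print(pitch_lag_previous, pitch_lag_erased)
```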

86 citations


Patent
Rohit Agarwal1
18 May 2001
TL;DR: In block-based video compression, a frame is divided into blocks which define a tiling pattern as discussed by the authors, which is varied from frame-to-frame to prevent an accumulation of errors which tend to appear at tile edges and can increase over time.
Abstract: In block-based video compression, a frame is divided into blocks which define a tiling pattern. The tiling pattern is varied from frame to frame to prevent an accumulation of errors which tend to appear at tile edges and can increase over time when using block-based compression. In a preferred embodiment, a normal frame is padded by a border all around the normal frame size. The padding is operable to extend any blocks around the periphery of the image frame which might be smaller in dimension than the standard blocks, such as those within the interior of the frame, such that they can be treated by the block-based compression systems as full size blocks.

73 citations


Patent
31 Jan 2001
TL;DR: In this article, a method for determining the frame rate and exposure time for each frame of a video collection is presented, where the two images are compared to determine if objects in the scene are in motion.
Abstract: In a method for determining the frame rate and exposure time for each frame of a video collection, an image capture system acquires at least two successive frames of a scene, separated in time. The two images are compared to determine if objects in the scene are in motion. If motion is detected, then the speed and displacement of the moving objects are determined. If the speed of the fastest moving object creates an unacceptable amount of image displacement, then the frame rate for the next frame is changed to one that produces an acceptable amount of image displacement. Also, if the speed of the fastest moving object creates an unacceptable amount of motion blur, then the exposure time for the next frame is changed to one that produces an acceptable amount of motion blur.
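As a rough sketch of the frame-rate half of this decision (exposure time would follow the same pattern), the snippet below picks the lowest candidate frame rate that keeps the per-frame displacement of the fastest object under a limit; the limit and candidate rates are hypothetical.

```python
def select_frame_rate(fastest_speed_px_per_s, max_displacement_px=8.0,
                      candidate_rates=(7.5, 15.0, 30.0, 60.0)):
    """Return the lowest frame rate that keeps per-frame image displacement acceptable."""
    for rate in candidate_rates:
        if fastest_speed_px_per_s / rate <= max_displacement_px:
            return rate
    return candidate_rates[-1]       # object too fast for any candidate: use the highest rate

if __name__ == "__main__":
    print(select_frame_rate(60.0))    # slow scene  -> 7.5 fps is enough
    print(select_frame_rate(400.0))   # fast object -> 60 fps needed
```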

Patent
21 Dec 2001
TL;DR: In this article, a system and method for receiving information over a power line in accordance with the HomePlug specification is described, where the receiver side of the method involves separation of the data within a payload of an incoming frame into a plurality of blocks.
Abstract: In one embodiment, a system and method for receiving information over a power line in accordance with the HomePlug specification is described. The receiver side of the method involves separation of the data within a payload of an incoming frame into a plurality of blocks. Thereafter, both frame control symbols and data within the blocks are processed by Frame Control Forward Error Correction (FEC) decoding logic.

Patent
29 Mar 2001
TL;DR: In this paper, a moving object is detected by subtraction between successive frames produced by a video camera and a position and a size of the moving object are determined, and the panning, tilting and zooming functions of the video camera are controlled according to the determined position and size.
Abstract: A moving object is detected by subtraction between successive frames produced by a video camera and a position and a size of the moving object are determined. Preceding and succeeding frames produced by the video camera are stored into respective memories and the panning, tilting and zooming functions of the video camera are controlled according to the determined position and size of the moving object. A motion compensation is performed on the preceding frame to compensate for a motion of background image caused by a tracking movement of the camera so that coordinates of the motion-compensated frame are transformed to coordinates of the succeeding frame. An image difference between the motion-compensated frame and the succeeding frame is extracted as a moving object and a position and a size of the extracted image are determined, with which the video camera is adaptively controlled.

Patent
30 Mar 2001
TL;DR: In this paper, a video processing system with a video input mechanism, a motion detection mechanism, and a web cam mechanism is described, in which the web cam mechanism transmits the second video frame if it receives a motion detection signal from the motion detection mechanism.
Abstract: The disclosure includes a video (including audio) processing system for transmission of a video frame across a network. The system includes a video input mechanism, a motion detection mechanism, and a web cam mechanism. The video input mechanism is configured to receive a first video frame and a second video frame. The motion detection mechanism is configured to compare the first video frame with the second video frame. It is also configured to generate a motion detection signal if the comparison of the frames deviates from a threshold value. The web cam mechanism is configured to transmit the second video frame if it receives the motion detection signal from the motion detection mechanism. The disclosure also includes a method for processing a selected video frame for transmission across a network. The method includes receiving a video frame, comparing it with a reference frame to determine if it deviates from a threshold value, and transmitting the video frame if it does deviate from the threshold value or discarding it if it does not.
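A minimal numpy sketch of the compare/threshold/transmit decision; the mean-absolute-difference metric and the threshold value are illustrative choices, not the patented comparison.

```python
import numpy as np

def should_transmit(reference_frame, new_frame, threshold=10.0):
    """Transmit the new frame only if it deviates enough from the reference frame."""
    deviation = np.mean(np.abs(new_frame.astype(float) - reference_frame.astype(float)))
    return deviation > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
    still = ref.copy()                       # no motion: frame is discarded
    moved = ref.copy()
    moved[30:90, 50:110] = 255               # a bright region appeared: frame is transmitted
    print(should_transmit(ref, still), should_transmit(ref, moved))
```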

Patent
Jungwood Lee1
19 Apr 2001
TL;DR: In this paper, a method and apparatus for temporally allocating bits between frames in a coding system such that temporal fluctuations are smoothed out is presented, where an average distortion measure is derived from previous picture frames and that average is compared to the distortion measure of a current frame.
Abstract: A method and apparatus for temporally allocating bits between frames in a coding system such that temporal fluctuations are smoothed out. Namely, a picture quality is monitored on a frame by frame basis. An average distortion measure is derived from previous picture frames and that average is compared to the distortion measure of a current frame, where the result is used to effect bit budget allocation for each frame in an input image sequence.
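A hedged sketch of the idea: compare the current frame's distortion with a running average over previous frames and scale that frame's bit budget accordingly; the window contents, adjustment gain, and budget numbers are hypothetical.

```python
def allocate_bits(base_budget, current_distortion, past_distortions, gain=0.5):
    """Scale the per-frame bit budget by how far the current distortion is from the average."""
    if not past_distortions:
        return base_budget
    average = sum(past_distortions) / len(past_distortions)
    ratio = current_distortion / average
    # Give more bits to frames that are harder (higher distortion) than the recent average,
    # and fewer bits to easier frames, smoothing temporal quality fluctuations.
    return int(base_budget * (1.0 + gain * (ratio - 1.0)))

if __name__ == "__main__":
    history = [4.0, 4.2, 3.8, 4.1]               # distortion measures of previous frames
    print(allocate_bits(20_000, 4.0, history))   # typical frame: roughly the base budget
    print(allocate_bits(20_000, 6.0, history))   # hard frame: noticeably larger budget
```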

Patent
Limin Wang1
11 Apr 2001
TL;DR: In this article, a system for rate control and buffer management of multiple variable bit rate digital video programs over a constant bit rate channel is proposed, suitable for use in a hierarchical bit allocation scheme that includes a super group of pictures (GOP) level (200, 202, 204), a super frame level (300), and a frame level.
Abstract: A system for rate control and buffer management during coding of multiple variable bit rate digital video programs over a constant bit rate channel. The system is suitable for use in a hierarchical bit allocation scheme that includes a super group of pictures (GOP) level (200, 202, 204), a super frame level (300), and a frame level. For each super GOP of length N frames and for each video program (210, 220...290), the transmission rate for the current frame is set according to an average number of compressed bits for at least N previous frames, including a frame starting at N'+N-1 frames before the current frame, a frame ending at N' frames before the current frame, and intermediate frames therebetween. N' is a decoding delay of a modeled decoder that receives a respective video program. Moreover, the transmission rates of future frames are pre-set so that the average input rate of each individual video stream to the respective decoder buffer (190) is equal to the average output rate, and the total transmission rate of the programs is equal to the channel rate.

Proceedings ArticleDOI
07 Oct 2001
TL;DR: The paper describes a method for obtaining high accuracy optical flow at a standard frame rate using high frame rate sequences and demonstrates significant improvements in optical flow estimation accuracy with moderate memory and computational power requirements.
Abstract: Gradient-based optical flow estimation methods such as the Lucas-Kanade (1981) method work well for scenes with small displacements but fail when objects move with large displacements. Hierarchical matching-based methods do not suffer from large displacements but are less accurate. By utilizing the high speed imaging capability of CMOS image sensors, the frame rate can be increased to obtain more accurate optical flow over a wide range of scene velocities in real time. Further, by integrating the memory and processing with the sensor on the same chip, optical flow estimation using high frame rate sequences can be performed without unduly increasing the off-chip data rate. The paper describes a method for obtaining high accuracy optical flow at a standard frame rate using high frame rate sequences. The Lucas-Kanade method is used to obtain optical flow estimates at a high frame rate, which are then accumulated and refined to obtain optical flow estimates at a standard frame rate. The method is tested on video sequences synthetically generated by perspective warping. The results demonstrate significant improvements in optical flow estimation accuracy with moderate memory and computational power requirements.
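A simplified numpy sketch of the accumulation step: dense Lucas-Kanade flow estimated at the high frame rate is summed over each group of frames to produce flow at the standard rate. The Lucas-Kanade step itself and the trajectory-following refinement are omitted; array shapes and the rate ratio are illustrative.

```python
import numpy as np

def accumulate_flow(high_rate_flows, ratio):
    """Sum dense flow fields over consecutive groups of `ratio` high-frame-rate estimates."""
    # high_rate_flows: shape (T, H, W, 2), holding per-pixel (dx, dy) between adjacent frames.
    usable = (high_rate_flows.shape[0] // ratio) * ratio
    grouped = high_rate_flows[:usable].reshape(-1, ratio, *high_rate_flows.shape[1:])
    return grouped.sum(axis=1)               # shape (usable // ratio, H, W, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flows_300fps = 0.2 * rng.standard_normal((30, 8, 8, 2))   # 30 high-rate flow fields
    flows_30fps = accumulate_flow(flows_300fps, ratio=10)     # 3 standard-rate flow fields
    print(flows_30fps.shape)                                  # (3, 8, 8, 2)
```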

Patent
28 Nov 2001
TL;DR: In this paper, a method for multiplexing compressed video data streams in which the times for sending portions of a video frame are adjusted to reduce latency is described, where a compressed frame is broken into parts and a part is sent in an earlier frame time.
Abstract: A method for multiplexing compressed video data streams in which the times for sending portions of a video frame are adjusted to reduce latency. If a compressed frame cannot be delivered in the appropriate frame time, due to bandwidth limitations, the frame is broken into parts and a part is sent in an earlier frame time. This method allows complete frames to be available at a receiver at the correct time. Accurate methods of deriving clock signals from the data stream are also described.

Patent
04 Apr 2001
TL;DR: In this paper, the information is sent in transmit frames having a frame format comprising a fixed rate header, followed by a variable rate payload, and finally, a fixed-rate trailer.
Abstract: A method and signal therefor, embodied in a carrier wave, for sending information from transmit stations to receive stations over a transmission medium of a frame-based communications network. The information is sent in transmit frames having a frame format comprising a fixed rate header, followed by a variable rate payload, followed by a fixed rate trailer. The fixed rate header includes a preamble. The preamble has a repetition of four symbol sequences for facilitating power estimation, gain control, baud frequency offset estimation, equalizer training, carrier sensing and collision detection. The preamble also includes a frame control field. The frame control field has scrambler control information for frame scrambling initialization, a priority field to determine the absolute priority a transmit frame will have when determining access to the transmission medium, a payload encoding field which determines constellation encoding of payload bits in the variable rate payload, and a header check sequence for providing a cyclic redundancy check. The variable rate payload is transmitted pursuant to dynamically adjustable frame encoding parameters for improving transmission performance for a transmit frame being transmitted from a transmitting station to a receiving station. The header also includes a destination address field, a source address field and an ethertype field.
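To make the layout easier to follow, here is a hypothetical rendering of the frame structure as Python dataclasses; the field types, widths, and example values are illustrative and not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class FrameControl:
    scrambler_init: int          # scrambler control information for frame scrambling initialization
    priority: int                # absolute priority when contending for the transmission medium
    payload_encoding: int        # constellation encoding of the variable rate payload bits
    header_check_sequence: int   # cyclic redundancy check over the header

@dataclass
class TransmitFrame:
    # Fixed rate header
    preamble: bytes              # four repeated symbol sequences (gain control, baud offset, etc.)
    frame_control: FrameControl
    destination_address: bytes
    source_address: bytes
    ethertype: int
    # Variable rate payload, sent with dynamically adjustable encoding parameters
    payload: bytes
    # Fixed rate trailer
    trailer: bytes

if __name__ == "__main__":
    fc = FrameControl(scrambler_init=0x5A, priority=3, payload_encoding=2,
                      header_check_sequence=0xBEEF)
    frame = TransmitFrame(preamble=b"\xaa" * 16, frame_control=fc,
                          destination_address=b"\x00" * 6, source_address=b"\x01" * 6,
                          ethertype=0x0800, payload=b"example payload", trailer=b"\x00" * 4)
    print(frame.frame_control.priority, len(frame.payload))
```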

Patent
26 Oct 2001
TL;DR: In this article, the authors proposed a method to achieve frame synchronization from a received data sequence before the carrier phase and frequency offset recovery for any MPSK modulated signals on the basis of maximum likelihood theory.
Abstract: The present invention provides a method to achieve frame synchronization from a received data sequence, before carrier phase and frequency offset recovery, for any MPSK modulated signal, on the basis of maximum likelihood theory. Two overhead configurations are considered in developing the frame synchronization algorithms (S25). One overhead configuration consists of a unique word (3a) followed by a preamble (3b), and the other consists of a unique word (7) only. After frame synchronization is achieved, the time position is known. The data-aided algorithms for carrier phase and frequency-offset estimation (S29) are then derived.

Patent
28 Jun 2001
TL;DR: In this paper, the authors proposed a method that reduces degradation in the perceived quality of images in a video sequence due to data loss by delaying the insertion of an INTRA coded frame (50) after a periodic INTRA frame refresh, update request, or scene cut.
Abstract: The invention provides a method that reduces degradation in the perceived quality of images in a video sequence due to data loss. This effect is achieved by effectively delaying the insertion of an INTRA coded frame (50) after a periodic INTRA frame refresh, INTRA update request, or scene cut. Frames associated with INTRA frame requests are not themselves coded in INTRA format, but instead a frame (50) occurring later in the video sequence is chosen for coding in INTRA format. Preferably, the actual INTRA frame is selected such that it lies approximately mid-way between periodic INTRA requests. Frames (P2, P3) occurring prior to the actual INTRA coded frame (50) are encoded using temporal prediction, in reverse order, starting from the actual INTRA frame, while those frames (P4, P5) occurring after the INTRA coded frame (50) are encoded using temporal prediction in the forward direction.

Journal ArticleDOI
TL;DR: The more accurate and precise motion description allows us to predict where the DFD energy will be significant, thus leading to a more efficient DFD encoder compared to applying traditional still-image coding techniques.
Abstract: We propose a motion-compensated video coding system employing dense motion fields. The dense motion field is calculated at the transmitter, and the motion information is efficiently encoded and transmitted along with the residual frame. The motion estimation is performed by existing techniques in the literature, while we focus on the coding of the motion field and the displaced frame difference (DFD) frame. The dense motion field formulation leads to several novel and distinct advantages. The motion field is encoded in a lossy manner to make the motion rate manageable. The more accurate and precise motion description allows us to predict where the DFD energy will be significant, thus leading to a more efficient DFD encoder compared to applying traditional still-image coding techniques. Furthermore, the dense motion field framework allows us to refine and tailor the motion estimation process such that the resulting DFD frame is easier to encode. Simulations demonstrate superior performance against standard block-based coders, with greater advantages for sequences with more complex motion.
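A minimal numpy sketch of forming the residual (DFD) frame from a dense, per-pixel motion field, using nearest-neighbour motion compensation for brevity; the frame size and test flow field are illustrative.

```python
import numpy as np

def displaced_frame_difference(current, reference, flow):
    """DFD = current frame minus the reference frame compensated by the dense motion field."""
    h, w = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # flow[..., 0] is the horizontal displacement, flow[..., 1] the vertical displacement.
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    predicted = reference[src_y, src_x]
    return current - predicted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((32, 32))
    cur = np.roll(ref, shift=2, axis=1)          # whole frame moved 2 pixels to the right
    flow = np.zeros((32, 32, 2))
    flow[..., 0] = 2.0                           # dense field: 2 px horizontal everywhere
    dfd = displaced_frame_difference(cur, ref, flow)
    print(round(float(np.abs(dfd).mean()), 4))   # near zero (non-zero only at the wrap border)
```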

Patent
23 Jan 2001
TL;DR: A visual lossless encoder for processing a video frame prior to compression by a video encoder includes a threshold unit (54), a filter unit, an association unit and an altering unit as mentioned in this paper.
Abstract: A visual lossless encoder (20) for processing a video frame prior to compression by a video encoder includes a threshold unit (54), a filter unit, an association unit and an altering unit. The threshold unit (54) identifies a plurality of visual perception threshold levels to be associated with the pixels of the video frame, wherein the threshold levels define contrast levels above which a human eye can distinguish a pixel from among its neighboring pixels of the frame. The filter unit divides the video frame into portions having different detail dimensions. The association unit utilizes the threshold levels and the detail dimensions to associate the pixels of the video frame into subclasses. Each subclass includes pixels related to the same detail and which generally cannot be distinguished from each other. The altering unit alters the intensity of each pixel of the video frame according to its subclass.

Patent
23 Nov 2001
TL;DR: In this article, a security monitoring system (100) including an alarm system (102) having detectors for detection of an alarm in a structure (104); at least one camera (114, 116) for capturing image data inside and/or outside the structure; a processor (130) for selecting a subset of the image data upon the occurrence of the alarm based on a predetermined criteria; and a modem (102a) for transmitting the subset of image data to a remote location.
Abstract: A security monitoring system (100) including: an alarm system (102) having detectors for detection of an alarm in a structure (104); at least one camera (114, 116) for capturing image data inside and/or outside the structure; a processor (130) for selecting a subset of the image data upon the occurrence of the alarm based on a predetermined criteria; and a modem (102a) for transmitting the subset of image data to a remote location. Preferably, the at least one camera is a video camera, the image data is video image data, and the subset of the image data is at least one video frame of the video image data. Preferably, the processor ranks each video frame from the image data according to how well each video frame meets the predetermined criteria and the modem transmits a predetermined number of video frames having the best rank to the remote location. Preferably, the predetermined criteria is selected from a group consisting of: how centered a difference region is in the video frame; how large the difference region is; whether the difference region consists of a large difference region or a group of smaller difference regions; the contrast of the difference region; the lighting condition on the difference region; whether a face is detected in the difference; whether the video frame is blurred; how much skin color is contained in the video frame; if a person is recognized in the video frame; and the lighting condition on a region of motion in the video frame.

Patent
26 Oct 2001
TL;DR: In this paper, a method for operating a wireless communications system such as a DS-CDMA communications system, by transmitting a waveform that includes a plurality of repeating frames each having x header training base symbols in a header training symbol field (TH) and y tail training base symbol in a tail training symbol fields (TT), is described.
Abstract: A method is disclosed for operating a wireless communications system, such as a DS-CDMA communications system, by transmitting a waveform that includes a plurality of repeating frames each having x header training base symbols in a header training symbol field (TH) and y tail training base symbols in a tail training symbol field (TT). The frame is received and functions as one of a plurality of different types of frames depending on the content of at least TT. In the preferred embodiment the frame functions as one of a normal traffic frame, a termination frame, or a legacy frame providing backwards compatibility with another waveform. A given one of the frames includes four equal-size data fields separated by three equal-sized control fields, the header training symbol field (TH) and the tail training symbol field (TT).

Patent
Michael Kahn1
07 Mar 2001
TL;DR: In this paper, a method of synchronizing audio frame presentation time with a system time in an MPEG audio bit-stream decoder is provided, which includes identifying a time disparity between an audio-frame presentation time and a system-time.
Abstract: A method of synchronizing audio frame presentation time with a system time in an MPEG audio bit-stream decoder is provided. The method includes identifying a time disparity between an audio frame presentation time and a system time. A group of successive audio frames in the MPEG audio bit stream are then examined for an audio frame having an amplitude value less than a threshold value. An audio frame is selected from the examined bit stream for skipping or repeating. If an audio frame below the threshold value is identified, it is selected, otherwise the last frame in the group of frames is selected. The selected frame is then skipped or repeated to synchronize the audio presentation time to the system time in order to minimize the effect of audio artifacts on the listener.
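A small numpy sketch of the selection rule: within the group of upcoming audio frames, pick the first one whose amplitude falls below a threshold, otherwise fall back to the last frame of the group; the amplitude metric, threshold, and frame length are illustrative.

```python
import numpy as np

def select_frame_to_adjust(frames, threshold=0.05):
    """Return the index of the frame to skip or repeat for A/V resynchronization."""
    for i, frame in enumerate(frames):
        if np.max(np.abs(frame)) < threshold:    # quiet frame: adjusting it is least audible
            return i
    return len(frames) - 1                        # no quiet frame found: use the last one

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    loud = 0.5 * rng.standard_normal(1152)        # typical MPEG audio frame length
    quiet = 0.01 * rng.standard_normal(1152)
    print(select_frame_to_adjust([loud, loud, quiet, loud]))   # -> 2
```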

Patent
10 Jul 2001
TL;DR: In this article, the authors proposed a method to detect candidate errors within the picture start code, picture header, and picture timestamp by adapting the number of bits and the macro-block location of the next slice or GOB.
Abstract: The disclosed invention is a method to detect candidate errors within the picture start code, picture header, and picture timestamp. Upon detection, these errors may be confirmed and the impacts mitigated. An error within the timestamp bit field is adaptively detected by the use of a threshold comparison, and a mechanism for concealing the timestamp information leaves only a small timestamp anomaly. An error within the picture start code (PSC) is determined by adaptively analyzing the number of bits and the macro-block location of the next slice or GOB. If a PSC is suspected to have been overrun due to an error, this method allows for data beyond the first GOB or slice to be recovered in the frame. Depending on the extent of use of slices and GOBs, the method can recover a majority of the frame that otherwise would have been completely lost. A candidate error within the picture header is evaluated by utilizing the first slice or GOB of the frame and analyzing information from a previous frame to check whether the current frame's header information should be replaced with the previous frame's header information. Replacing the erred frame header can result in a majority of the data within the frame being recovered that would otherwise be discarded if the header was not replaced.

Patent
Ramamurthy Mani1, Don Orofino1
18 Jul 2001
TL;DR: In this paper, a run-time frame-based processing mechanism is proposed to execute a block diagram model by propagating frame attributes from blocks on which a user specified the frame attributes information to all other blocks in the block diagram, including an indicator that specifies whether or not the data flowing from one block to another is sample-based or framebased.
Abstract: A run-time, frame-based processing mechanism executes a block diagram model by propagating frame attributes information from blocks on which a user specified the frame attributes information to all other blocks in the block diagram model. The frame attributes information includes an indicator that specifies whether or not the data flowing from one block to another is sample-based or frame-based, as well as the size of the frame in terms of number of samples and number of channels.

Proceedings ArticleDOI
TL;DR: This paper reports the development of an algorithm to identify and reverse frame removal and insertion acts; at the heart of the algorithm lies the concept of a frame-pair [f,f*].
Abstract: A plausible motivation for video tampering is to alter the sequence of events. This goal can be achieved by re-indexing attacks such as frame removal, insertion, or shuffling. In this work, we report on the development of an algorithm to identify and, subject to certain limitations, reverse such acts. At the heart of the algorithm lies the concept of a frame-pair [f,f*]. Frame-pairs are unique in two ways. The first frame is the basis for watermarking of the second frame sometime in the future. A key that is unique to the location of frame f governs frame-pair temporal separation. Watermarking is done by producing a low resolution version of the 24-bit frame, spreading it, and then embedding it in the color space of f*. As such, watermarking f* is tantamount to embedding a copy of frame f in a future frame. Having tied one frame, in content and timing, to another frame downstream, frame removal and insertion can be identified and, subject to certain limitations, reversed.

Patent
Christophe De Vleeschouwer1
31 Oct 2001
TL;DR: In this article, methods and apparatus for adaptive encoding of at least a part of a current frame of a sequence of frames of framed data are described which operate on a block-by-block coding basis.
Abstract: Methods and apparatus for adaptive encoding of at least a part of a current frame of a sequence of frames of framed data are described which operate on a block-by-block coding basis. The methods and apparatus divide at least a part of the current frame into blocks and then perform a first sub-encoding step on a block. Thereafter a second sub-encoding step is performed on the first sub-encoded block whereby the second sub-encoding step is optimized by adapting its encoding parameters based on a quantity of the first sub-encoded part of the current frame. The quantity is determined by prediction from a reference frame. Then the same steps are performed on another block of the part of the current frame. Typically, the framed data will be video frames for transmission over a transmission channel. The adaptation of the parameters for the second sub-encoding step may be made dependent upon the characteristics or limitations, e.g. bandwidth limitation, of the channel. In addition, the current frame may be discarded based on the predicted quantity and/or based on fullness of a buffer.