
Showing papers on "Inter frame published in 1999"


Journal ArticleDOI
TL;DR: An original approach to partitioning a video document into shots is described that exploits image motion information, which is generally more intrinsic to the video structure itself; possible extensions such as mosaicing and mobile zone detection are also described.
Abstract: This paper describes an original approach to partitioning of a video document into shots. Instead of an interframe similarity measure which is directly intensity based, we exploit image motion information, which is generally more intrinsic to the video structure itself. The proposed scheme aims at detecting all types of transitions between shots using a single technique and the same parameter set, rather than a set of dedicated methods. The proposed shot change detection method is related to the computation, at each time instant, of the dominant image motion represented by a two-dimensional affine model. More precisely, we analyze the temporal evolution of the size of the support associated with the estimated dominant motion. In addition, the computation of the global motion model supplies by-products, such as qualitative camera motion description, which we describe in this paper, and other possible extensions, such as mosaicing and mobile zone detection. Results on videos of various content types are reported and validate the proposed approach.

278 citations
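The support-size test described in the abstract can be sketched in a few lines: flag a shot change whenever the fraction of pixels conforming to the estimated dominant affine motion collapses. The 0.5 threshold and all names below are illustrative, not the paper's:

```python
def detect_shot_changes(support_ratios, threshold=0.5):
    """Flag a shot change at each frame where the fraction of pixels
    conforming to the estimated dominant (affine) motion drops below
    the threshold. `support_ratios[t]` is the support size at time t,
    normalised to [0, 1]. Threshold value is an illustrative choice."""
    return [t for t, r in enumerate(support_ratios) if r < threshold]

# An abrupt cut shows up as an isolated low-support frame,
# a gradual transition as a run of them.
ratios = [0.92, 0.90, 0.88, 0.31, 0.89, 0.91]
print(detect_shot_changes(ratios))  # frame 3 is a cut candidate
```

Using a single criterion on the dominant-motion support is what lets one technique cover cuts and gradual transitions alike.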


Patent
Boon-Lock Yeo1
18 Jun 1999
TL;DR: In this article, a frame dependency for a particular frame is determined, and then the frame dependency is used to decode the frame to create a decoded version of the particular frame.
Abstract: In some embodiments, the invention includes a method of processing a video stream. The method involves detecting a request to playback a particular frame. It is determined whether a decoded version of the particular frame is in a decoded frame cache. If it is not, the method includes (i) determining a frame dependency for the particular frame; (ii) determining which of the frames in the frame dependency are in the decoded frame cache; (iii) decoding any frame in the frame dependency that is not in the decoded frame cache and placing it in the decoded frame cache; and (iv) using at least some of the decoded frames in the frame dependency to decode the particular frame to create a decoded version of the particular frame. In some embodiments, the request to playback a particular frame is part of a request to perform frame-by-frame backward playback, and the method is performed for successively earlier frames with respect to the particular frame as part of the frame-by-frame backward playback. In some embodiments, part (i) is performed whether or not a decoded version of a particular frame is determined to be in the decoded frame cache, without part (iv) being performed. Other embodiments are described and claimed.

149 citations
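A minimal sketch of the cached, dependency-driven decode the abstract describes; the `dependencies`/`decode_fn` interface is hypothetical, not the patent's:

```python
def decode_with_cache(frame_id, dependencies, cache, decode_fn):
    """Return a decoded version of `frame_id`, reusing the cache.
    `dependencies[f]` lists the frames f is predicted from (empty for
    an I-frame); `decode_fn(f, decoded_refs)` performs the actual decode.
    Hypothetical interface, illustrating steps (i)-(iv) of the abstract."""
    if frame_id in cache:
        return cache[frame_id]
    # Decode (recursively, cache-first) every frame this one depends on.
    refs = [decode_with_cache(d, dependencies, cache, decode_fn)
            for d in dependencies.get(frame_id, [])]
    cache[frame_id] = decode_fn(frame_id, refs)
    return cache[frame_id]
```

For frame-by-frame backward playback, the same routine is simply invoked for successively earlier frame ids; the cache keeps shared reference frames from being decoded twice.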


Proceedings ArticleDOI
24 Oct 1999
TL;DR: A balanced twin-description interframe video coder is designed and performance results are presented for video transmission over packet networks with packet losses.
Abstract: A balanced twin-description interframe video coder is designed and performance results are presented for video transmission over packet networks with packet losses. The coder is based on a predictive multiple description quantizer structure called mutually-refining DPCM (MR-DPCM). The novel feature of this predictive quantizer is that the decoder and encoder filter states track in either of the two single-channel modes as well as in the two-channel mode. The performance, and indeed the suitability, of the multiple description approach for a network with packet losses depends on the packetization method. Two packetization methods are considered: a "correct" one and a low-latency but "incorrect" one. Performance results are presented for synthetic sources as well as for a video sequence under a variety of conditions.

132 citations


Patent
30 Jun 1999
TL;DR: In this article, the pixel difference for a pixel exceeds an applicable pixel difference threshold, the pixel is considered to be "different", and if the number of "different" pixels for a frame exceeds a certain threshold, motion has occurred, and a motion detection signal is emitted.
Abstract: The present invention comprises a method and apparatus for detection of motion in video in which frames from an incoming video stream are digitized. The pixels of each incoming digitized frame are compared to the corresponding pixels of a reference frame, and differences between incoming pixels and reference pixels are determined. If the pixel difference for a pixel exceeds an applicable pixel difference threshold, the pixel is considered to be 'different'. If the number of 'different' pixels for a frame exceeds an applicable frame difference threshold, motion is considered to have occurred, and a motion detection signal is emitted. In one or more other embodiments, the applicable frame difference threshold is adjusted depending upon the current average motion being exhibited by the most recent frames, thereby taking into account 'ambient' motion and minimizing the effects of phase lag. In one or more embodiments, different pixel difference thresholds may be assigned to different pixels or groups of pixels, thereby making certain regions of a camera's field of view more or less sensitive to motion. In one or more embodiments of the invention, a new reference frame is selected when the first frame that exhibits no motion occurs after one or more frames that exhibit motion.

93 citations
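The two-threshold test in the abstract reduces to a short routine; per-pixel thresholds are what allow region-dependent sensitivity. The flat-list frame layout and names are assumptions for illustration:

```python
def detect_motion(frame, reference, pixel_thresholds, frame_threshold):
    """Count pixels whose absolute difference from the reference frame
    exceeds that pixel's own threshold; signal motion when the count of
    'different' pixels exceeds the frame difference threshold.
    Frames are flat lists of intensities (an illustrative layout)."""
    different = sum(
        1 for p, r, t in zip(frame, reference, pixel_thresholds)
        if abs(p - r) > t
    )
    return different > frame_threshold
```

The adaptive variant in the abstract would simply recompute `frame_threshold` from the average motion of recent frames before each call.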


Patent
15 Jun 1999
TL;DR: In this article, the optical flow of an array of pixels in an image field is determined using adaptive spatial and temporal gradients, and artifacts are avoided for image objects which are moving smoothly relative to the image field background.
Abstract: The optical flow of an array of pixels in an image field is determined using adaptive spatial and temporal gradients. Artifacts are avoided for image objects which are moving smoothly relative to the image field background. Data from three image frames are used to determine optical flow. A parameter is defined and determined frame by frame which is used to determine whether to consider the data looking forward from frame k to k+1 or the data looking backward from frame k−1 to frame k in initializing spatial and/or temporal gradients for frame k. The parameter signifies the areas of occlusion, so that the gradients looking backward from frame k−1 to frame k can be used for the occluded pixel regions. The gradients looking forward are used in the other areas.

83 citations


Patent
02 Jun 1999
TL;DR: In this paper, methods and systems for obtaining a motion vector between two frames of video image data are disclosed, which may be used to estimate a motion vector for each macroblock of a current frame with respect to a reference frame in a multi-stage operation.
Abstract: Methods and systems for obtaining a motion vector between two frames of video image data are disclosed. Specifically, methods and systems of the present invention may be used to estimate a motion vector for each macroblock of a current frame with respect to a reference frame in a multi-stage operation. In a first stage, an application implementing the process of the present invention coarsely searches a first search area of the reference frame to obtain a candidate supermacroblock that best approximates a supermacroblock in the current frame (312). In a second stage, the supermacroblock is divided into a plurality of macroblocks (314). Each of the macroblocks is used to construct search areas which are then searched to obtain a candidate macroblock that best approximates a macroblock in the current frame (322). Additional stages may be used to further fine-tune the approximation to the macroblock. The methods and systems of the present invention may be used in optimizing digital video encoders, decoders, and video format converters.

76 citations
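The coarse-then-fine idea can be sketched as a large-step SAD search over a wide area followed by a full-resolution search around the winner. Block sizes, radii, and steps below are illustrative, and the patent's supermacroblock subdivision is simplified to a single refinement around one block:

```python
def sad(frame, ref, fy, fx, ry, rx, size):
    """Sum of absolute differences between the size x size block of
    `frame` at (fy, fx) and the block of `ref` at (ry, rx)."""
    return sum(abs(frame[fy + i][fx + j] - ref[ry + i][rx + j])
               for i in range(size) for j in range(size))

def best_match(frame, ref, fy, fx, size, cy, cx, radius, step=1):
    """Search `ref` around (cy, cx) within `radius`, returning the
    (y, x) position with minimal SAD against the block at (fy, fx)."""
    h, w = len(ref), len(ref[0])
    best = None
    for ry in range(max(0, cy - radius), min(h - size, cy + radius) + 1, step):
        for rx in range(max(0, cx - radius), min(w - size, cx + radius) + 1, step):
            cost = sad(frame, ref, fy, fx, ry, rx, size)
            if best is None or cost < best[0]:
                best = (cost, ry, rx)
    return best[1], best[2]

def two_stage_mv(frame, ref, fy, fx, size):
    """Stage 1: coarse search (large step); stage 2: fine search
    (step 1) around the coarse winner. Returns the motion vector."""
    cy, cx = best_match(frame, ref, fy, fx, size, fy, fx, radius=8, step=4)
    ry, rx = best_match(frame, ref, fy, fx, size, cy, cx, radius=4, step=1)
    return ry - fy, rx - fx  # (dy, dx)
```

The coarse stage prunes the search space cheaply; the fine stage only has to examine a small neighbourhood at full resolution.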


Proceedings ArticleDOI
24 Oct 1999
TL;DR: A theoretical analysis of the overall mean squared error in hybrid video coding is presented and the optimal trade-off between INTRA and INTER coding can be determined for a given packet loss probability by minimizing the expected MSE at the decoder.
Abstract: A theoretical analysis of the overall mean squared error (MSE) in hybrid video coding is presented for the case of error-prone transmission. The derived model for interframe error propagation includes the effects of INTRA coding and spatial loop filtering and corresponds to simulation results very accurately. For a given target bit rate, only four parameters are necessary to describe the overall distortion behavior of the decoder. Using the model, the optimal trade-off between INTRA and INTER coding can be determined for a given packet loss probability by minimizing the expected MSE at the decoder.

76 citations
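The trade-off the paper derives can be illustrated with a toy objective: pick the INTRA fraction minimizing encoder distortion plus loss-weighted error propagation. The functional forms passed in below are invented for illustration; the paper's actual model uses four fitted parameters:

```python
def optimal_intra_rate(p_loss, candidates, d_enc, d_prop):
    """Pick the INTRA-refresh fraction minimizing expected decoder MSE,
    modeled (toy assumption, not the paper's model) as encoder
    distortion plus loss-probability-weighted error propagation,
    where more INTRA coding reduces propagation but costs rate."""
    def expected_mse(beta):
        return d_enc(beta) + p_loss * d_prop(beta)
    return min(candidates, key=expected_mse)
```

With a lossier channel the minimizer shifts toward more INTRA coding, which is the qualitative behaviour the analytical model makes precise.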


Journal ArticleDOI
TL;DR: By combining an interframe quantizer and a memoryless "safety-net" quantizer, it is demonstrated that the advantages of both quantization strategies can be utilized, and the performance for both noiseless and noisy channels improves.
Abstract: In linear predictive speech coding algorithms, transmission of linear predictive coding (LPC) parameters, often transformed to the line spectrum frequencies (LSF) representation, consumes a large part of the total bit rate of the coder. Typically, the LSF parameters are highly correlated from one frame to the next, and a considerable reduction in bit rate can be achieved by exploiting this interframe correlation. However, interframe coding leads to error propagation if the channel is noisy, which possibly cancels the achievable gain. In this paper, several algorithms for exploiting interframe correlation of LSF parameters are compared. In particular, performance for transmission over noisy channels is examined, and methods to improve noisy-channel performance are proposed. By combining an interframe quantizer and a memoryless "safety-net" quantizer, we demonstrate that the advantages of both quantization strategies can be utilized, and the performance for both noiseless and noisy channels improves. The results indicate that the best interframe method performs as well as a memoryless quantizing scheme with 4 bits fewer per frame. Subjective listening tests have been employed that verify the results from the objective measurements.

65 citations
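The safety-net idea amounts to coding each LSF vector both ways and signalling the better one. A minimal sketch with plain list codebooks, which is an assumption; practical coders use trained split or multi-stage VQ:

```python
def encode_lsf(current, predicted, inter_codebook, memoryless_codebook):
    """Quantize the LSF vector two ways: the interframe quantizer codes
    the residual against the prediction from the previous frame, the
    memoryless 'safety net' codes the vector directly. Send whichever
    reconstruction is closer (one extra mode bit). Codebooks are plain
    lists of candidate vectors, an illustrative simplification."""
    def nearest(vec, codebook):
        return min(codebook,
                   key=lambda c: sum((v - x) ** 2 for v, x in zip(vec, c)))
    residual = [c - p for c, p in zip(current, predicted)]
    q_res = nearest(residual, inter_codebook)
    inter_rec = [p + r for p, r in zip(predicted, q_res)]
    memless_rec = nearest(current, memoryless_codebook)
    err = lambda rec: sum((c - r) ** 2 for c, r in zip(current, rec))
    if err(inter_rec) <= err(memless_rec):
        return 'inter', inter_rec
    return 'memoryless', memless_rec
```

The memoryless branch wins exactly when the prediction is poor, e.g. after a channel error has corrupted the decoder's frame memory, which is why it limits error propagation.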


Patent
29 Jun 1999
TL;DR: In this article, an algorithm based on motion compensation uses a temporal support of five fields of video to produce a progressive frame, where the moving average of the motion compensated field lines temporally adjacent to the field to be de-interlaced are used, after a non-linear filtering, as the missing lines to complete the progressive video frame.
Abstract: An algorithm based on motion compensation uses a temporal support of five fields of video to produce a progressive frame. The moving average of the motion-compensated field lines temporally adjacent to the field to be de-interlaced is used, after a non-linear filtering, as the missing lines to complete the progressive video frame.

47 citations
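The "moving average plus non-linear filtering" step can be sketched per missing line. Here the non-linear part is a simple median clamp against the vertical neighbours, which is an assumption for illustration, not the patent's exact filter:

```python
def interpolate_missing_line(prev_line, next_line, above, below):
    """Fill one missing line of the de-interlaced frame: average the
    (assumed already motion-compensated) lines from the previous and
    next fields, then clamp each sample with a median against its
    vertical neighbours as a simple non-linear safeguard against
    motion-compensation failures. Illustrative sketch only."""
    out = []
    for p, n, a, b in zip(prev_line, next_line, above, below):
        avg = (p + n) / 2.0
        out.append(sorted((avg, a, b))[1])  # median of average and neighbours
    return out
```

When motion compensation succeeds the average passes through unchanged; when it fails badly, the median falls back to a spatially plausible value.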


Patent
21 Apr 1999
TL;DR: In this paper, the interpolation of a new frame between a previous frame and a current frame of a video stream by motion compensated frame rate upsampling is performed by identifying nodes and edges of objects such as triangles present in the previous frame.
Abstract: Interpolation of a new frame between a previous frame and a current frame of a video stream by motion compensated frame rate upsampling. The interpolation method includes identifying nodes and edges of objects such as triangles present in the previous frame, constructing a superimposed triangular mesh based on the identified nodes and edges, estimating the displacement of such nodes in the superimposed triangular mesh from the previous frame with respect to the current frame, and rendering the new frame based on the estimated displacement of nodes. Additionally, pixels of the previous frame and the current frame may be classified according to whether a pixel's value has changed from the previous frame to the current frame. This classification may be used during rendering to reduce overall processing time. Pixel-based forward motion estimation may be used to estimate motion of pixels between the previous frame and the current frame, and the estimated motion may be used in estimating node displacement.

46 citations


Patent
07 Jun 1999
TL;DR: In this article, a method of generating a motion vector associated with a current block of a current frame of a video sequence is presented. But the method is performed in both a video encoder and video decoder so that motion vector data does not need to be transmitted from the encoder to the decoder.
Abstract: A method of generating a motion vector associated with a current block of a current frame of a video sequence includes searching at least a section of a reference frame to identify previously reconstructed samples from the reference frame that best estimate motion associated with previously reconstructed samples from the current frame associated with the current block. The previously reconstructed samples may form sets of samples. Such sets may be in the form of respective templates. The method then includes computing a motion vector identifying the location of the set of previously reconstructed samples from the reference frame that best estimates the set of previously reconstructed samples from the current frame associated with the current block. It is to be appreciated that such a motion estimation technique is performed in both a video encoder and video decoder so that motion vector data does not need to be transmitted from the encoder to the decoder.
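The decoder-side derivation works because the search uses only samples both sides already have. A sketch with an L-shaped template of reconstructed samples above and left of the current block; the template shape and SAD cost are illustrative choices:

```python
def template_match_mv(recon, ref, by, bx, size, radius):
    """Derive a motion vector without transmitting it: match the
    L-shaped template of already-reconstructed samples above and to the
    left of the current block at (by, bx) against the reference frame.
    Encoder and decoder run the identical search, so no MV bits are sent."""
    def template(img, y, x):
        top = [img[y - 1][x + j] for j in range(size)]   # row above block
        left = [img[y + i][x - 1] for i in range(size)]  # column to the left
        return top + left

    target = template(recon, by, bx)
    best = None
    for ry in range(max(1, by - radius), min(len(ref) - size, by + radius) + 1):
        for rx in range(max(1, bx - radius), min(len(ref[0]) - size, bx + radius) + 1):
            cand = template(ref, ry, rx)
            cost = sum(abs(t - c) for t, c in zip(target, cand))
            if best is None or cost < best[0]:
                best = (cost, ry - by, rx - bx)
    return best[1], best[2]
```

The trade-off is extra search work at the decoder in exchange for zero motion-vector rate.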

Proceedings ArticleDOI
24 Oct 1999
TL;DR: It is shown that multiple global motion models are of benefit in terms of coding efficiency; they are incorporated into an H.263-based video codec and embedded into a rate-constrained motion estimation and macroblock mode decision framework.
Abstract: A novel motion representation and estimation scheme is presented that leads to improved coding efficiency of block-based video coders such as H.263. The proposed scheme is based on motion-compensated prediction from more than one reference frame. The reference pictures are warped or globally motion-compensated versions of the previous frame. Affine motion models are used as warping parameters approximating the motion vector field. Motion compensation is performed using standard block matching in the multiple reference frames buffer. The frame reference and the affine motion parameters are transmitted as side information. The approach is incorporated into an H.263-based video codec at minor syntax changes and embedded into a rate-constrained motion estimation and macroblock mode decision framework. In contrast to conventional global motion compensation, where one motion model is transmitted, we show that multiple global motion models are of benefit in terms of coding efficiency. Significant coding gains in comparison to TMN-10, the test model of H.263+, are achieved that provide bit-rate savings between 20% and 35% for the various sequences tested.

Patent
02 Sep 1999
TL;DR: In this article, a video controller provides information needed by the video frame encoder to encode the current frame in the video sequence, including the type of frame to be encoded (e.g., an I or P frame), the currently available bandwidth for encoding the current frames, the time since the previous encoded frame, and a quality measure that may be used to trade off spatial and temporal qualities.
Abstract: A variety of different types of video frame encoders can be configured with, e.g., a multimedia processing subsystem, as long as the video frame encoder conforms to the interface protocol of the subsystem. A video controller in the subsystem performs the higher-level functions of coordinating the encoding of the video stream, thereby allowing the video frame encoder to limit its processing to the lower, frame level. In particular, the video controller provides information needed by the video frame encoder to encode the current frame in the video sequence. In addition to the raw image data, this information includes the type of frame to be encoded (e.g., an I or P frame), the currently available bandwidth for encoding the current frame, the time since the previous encoded frame, the desired frame rate, and a quality measure that may be used to trade off spatial and temporal qualities. The video frame encoder either encodes the frame as requested or indicates to the video controller that the frame should be skipped or otherwise not encoded as requested. The video controller can then respond appropriately, e.g., by requesting the video frame encoder to encode the next frame in the video sequence.

Patent
15 Dec 1999
TL;DR: In this paper, a method, an apparatus, and a computer program product for extracting motion information (110) from a video sequence (130, 600) containing interframe motion vectors (120) are disclosed.
Abstract: A method, an apparatus, and a computer program product for extracting motion information (110) from a video sequence (130, 600) containing interframe motion vectors (120) are disclosed. In particular, motion information (110) is automatically extracted (610) from an encoded traffic video stream (600) to detect speed, density and flow. The motion information (110) is extracted under fixed camera settings and in a well-defined environment. The motion vectors (120) are first separated (610) from the compressed streams (130) during decoding and filtered (620) to eliminate incorrect and noisy motion vectors based on the well-defined environmental knowledge. By applying a projective transform (630) to the filtered motion vectors, speed, density, and flow can be detected (640, 650, 660). In this manner, a traffic monitoring system is implemented.

Journal ArticleDOI
TL;DR: Based on the empirical covariance sequence, an adequate compound covariance model is developed and the prediction gain for motion-compensated frame differences is evaluated, and the performance of the discrete cosine transform for interframe transform coding is discussed.
Abstract: The second-order statistics of motion-compensated frame differences in a low-bit-rate hybrid video coding scheme with overlapped block motion compensation are investigated. Based on the empirical covariance sequence, an adequate compound covariance model is developed. The prediction gain for motion-compensated frame differences is evaluated, and the performance of the discrete cosine transform for interframe transform coding is discussed.

Patent
04 Mar 1999
TL;DR: In this article, a discrete cosine transform (DCT) is applied to blocks of image data and the resulting transform coefficients for each block are quantized at a specified quantization level.
Abstract: During video coding, a transform such as a discrete cosine transform (DCT) is applied to blocks of image data (e.g., motion-compensated interframe pixel differences) and the resulting transform coefficients for each block are quantized at a specified quantization level. Notwithstanding the fact that some coefficients are quantized to non-zero values, at least one non-zero quantized coefficient is treated as if it had a value of zero for purposes of further processing (e.g., run-length encoding (RLE) the quantized data). When segmentation analysis is performed to identify two or more different regions of interest in each frame, the number of coefficients that are treated as having a value of zero for RLE is different for different regions of interest (e.g., more coefficients for less-important regions). In this way, the number of bits used to encode image data are reduced to satisfy bit rate requirements without (1) having to drop frames adaptively, while (2) conforming to constraints that may be imposed on the magnitude of change in quantization level from frame to frame.
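The "treat non-zero coefficients as zero" step can be sketched as a cutoff applied before run-length encoding, with a smaller `keep` for less important regions. The (run, level)/EOB syntax below is a simplification, not any standard's exact entropy coding:

```python
def rle_with_cutoff(quantized, keep):
    """Run-length encode zigzag-ordered quantized coefficients, but
    treat everything at index `keep` and beyond as zero even if it
    quantized to a non-zero value. Segmentation analysis would assign
    a smaller `keep` to less important regions, saving bits without
    changing the quantization level. Simplified illustrative syntax."""
    pairs, run = [], 0
    for i, level in enumerate(quantized):
        if i >= keep:
            break
        if level == 0:
            run += 1
        else:
            pairs.append((run, level))
            run = 0
    pairs.append('EOB')  # end of block: all remaining coefficients are zero
    return pairs
```

Because only the RLE input changes, the quantization level itself stays fixed, which is how the scheme respects constraints on frame-to-frame quantizer changes.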

Patent
Greg Conklin1
30 Jun 1999
TL;DR: In this paper, a frame generator performs a number of steps such as: (i) determining whether frame generation is appropriate, (ii) examining the first and second base frames to check for the presence of textual characters, (iii) selecting a frame generation method based upon information in the first and second frames, and (iv) filtering the generated frames.
Abstract: System and method for generating video frames. The system includes a frame generator which generates one or more intermediate frames based upon one or more base frames. Each of the base frames is comprised of a plurality of macroblocks. Furthermore, one or more of the macroblocks have a motion vector. The macroblocks are comprised of a plurality of pixels. In the frame generation process, the frame generator performs a number of steps such as: (i) determining whether frame generation is appropriate, (ii) examining the first and second base frames to check for the presence of textual characters, (iii) selecting a frame generation method based upon information in the first and second frames, and (iv) filtering the generated frames. In one embodiment, the system includes a server computer having an encoder, a client computer having a decoder, and a network connecting the server computer to the client computer. In this embodiment, the frame generator resides and executes within the client computer and receives the base frames from the decoder.

Patent
23 Aug 1999
TL;DR: In this paper, it has been recognized that in a wireless communication system certain frames of encoded speech data transmitted between a base station and a mobile unit, or between a Mobile Unit and a Base Station, are more critical than others.
Abstract: It has been recognized that in a wireless communication system certain frames of encoded speech data transmitted between a base station and a mobile unit, or between a mobile unit and a base station, are more critical than others. A frame may be determined to be erased by the receiving base station or mobile unit due to noise or interference over the wireless transmission medium. If an erased frame cannot be recreated from one or more preceding frames, then it is more critical than a frame that can be recreated by an extrapolation of data from one or more preceding frames. Accordingly, on a frame-by-frame basis, each frame in a sequence of frames is identified as being critical or non-critical. Each frame that is identified as being critical is then transmitted in a manner that is more robust than the manner in which non-critical frames are transmitted to decrease the likelihood that a receiver will determine that the frame is erased. In a CDMA system, a current frame is identified as being critical by forming a weighted sum of the differences between corresponding frame parameters that represent the current frame and frame parameters that represent a previous frame. That weighted sum is compared with a threshold. If the weighted sum exceeds the threshold, then the current frame is classified as being a critical frame and is transmitted at a higher output level than non-critical frames are transmitted.
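The CDMA classification step in the abstract is, in essence, a one-liner: a weighted sum of parameter differences compared against a threshold. Weights and threshold would be tuned per codec; the names here are illustrative:

```python
def is_critical(current_params, previous_params, weights, threshold):
    """Form the weighted sum of the differences between corresponding
    frame parameters of the current and previous speech frames; above
    the threshold the frame cannot be well extrapolated from its
    predecessor, so it is classified critical and transmitted at a
    higher output level. Weights/threshold are illustrative values."""
    score = sum(w * abs(c - p)
                for w, c, p in zip(weights, current_params, previous_params))
    return score > threshold
```

A frame similar to its predecessor scores low and can be sent at normal power, since an erasure could be concealed by extrapolation anyway.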

Patent
20 Apr 1999
TL;DR: In this paper, a method and system for capturing and compressing an original uncompressed video signal which enables decoding and reversible reconstruction of a decompressed version of the original video signal is provided.
Abstract: A method and system are provided for capturing and compressing an original uncompressed video signal which enables decoding and reversible reconstruction of a decompressed version of the original video signal. The system includes an input for receiving a signal indicating a special effect operation by which a first frame of a video signal is irreversibly transformed to a special effect frame. This is achieved by combining decompressed frame pixel data of the first frame with information comprising either pixel data of a second frame or a single scaling value to be used to scale plural pixels of a frame. The information indicates a special effect operation which can be performed on decompressed pixel data of the first frame to produce a special effect frame (e.g., the information can include a separate indicator specifying a specific operation or the information by its mere presence can indicate the special effect operation). The system also includes a processor for compressing pixel data of the first frame. The processor is also for constructing a bitstream containing the compressed pixel data of the first frame and the information. The bitstream is decompressible in a first manner to reproduce a decompressed video signal including the special effect frame, produced by performing the indicated special effect operation, in place of the first frame. The bitstream is also decompressible in a second manner to produce a decompressed version of the video signal with the first frame and without the special effect frame. The system and method according to this embodiment form a novel bitstream containing the compressed video frames and the information which can be stored on a storage medium. A system is also provided with a demultiplexer, decompressor and special effects generator for presenting the video signal with or without the special effect.

Proceedings ArticleDOI
30 Oct 1999
TL;DR: Dynamic Frame Rate Control, used in conjunction with dynamic bit-rate control, allows clients to resolve the rate mismatch between the bandwidth available to them and the bit-rate of the pre-encoded bitstream.
Abstract: A mechanism for dynamically varying the frame rate of pre-encoded video clips is described. An off-line encoder creates a high quality bitstream encoded at 30 fps, as well as separate files containing motion vectors for the same clip at lower frame rates. An on-line encoder decodes the bitstream (if necessary) and re-encodes it at lower frame rates in real time using the pre-computed, stored motion information. Dynamic Frame Rate Control, used in conjunction with dynamic bit-rate control, allows clients to resolve the rate mismatch between the bandwidth available to them and the bit-rate of the pre-encoded bitstream. It also provides a means for implementing Fast Forward control for video streaming without increasing bandwidth consumption.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: It is demonstrated that combining long-term memory prediction with affine motion compensation leads to further coding gains and bit-rate savings correspond to gains in PSNR between 0.8 and 3 dB.
Abstract: Long-term memory prediction extends motion compensation from the previous frame to several past frames with the result of increased coding efficiency. We demonstrate that combining long-term memory prediction with affine motion compensation leads to further coding gains. For that, various affine motion parameter sets are estimated between frames in the long-term memory buffer and the current frame. Motion compensation is conducted using standard block matching in the multiple reference frame buffer. The picture reference and the affine motion parameters are transmitted as side information. The technique is embedded into a hybrid video coder mainly following the H.263 standard. The coder control employs Lagrangian optimization for the motion estimation and macroblock mode decision. Significant bit-rate savings between 20% and 50% are achieved for the sequences tested over TMN-10, the test model of H.263+. These bit-rate savings correspond to gains in PSNR between 0.8 and 3 dB.

Journal ArticleDOI
Jungwoo Lee1
TL;DR: An adaptive frame type selection algorithm for motion compensation, which is applied to a low bit rate video coding using MPEG-1, shows that the adaptive reference frame positioning scheme compares favorably with the fixed positioning scheme.
Abstract: In this paper we present an adaptive frame type selection algorithm for motion compensation, which is applied to low bit rate video coding using MPEG-1. In the adaptive scheme, the number of reference frames for motion compensation is determined by a scene change detection algorithm using temporal segmentation. To choose the distance measure for the temporal segmentation, three histogram-based measures and one variance-based measure were tested and compared. The reference frame positions may be determined by an exhaustive search algorithm which is computationally complex. The complexity can be reduced by using a binary search algorithm which exploits the monotonicity of the distance measure with respect to the reference frame interval. The target bit allocation for each picture type in a group of pictures is adjusted to allow a variable number of reference frames with the constraint of constant channel bit rate. Simulation results show that the adaptive reference frame positioning scheme compares favorably with the fixed positioning scheme at bit rates of 64 kb/s and 14.4 kb/s.
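The binary search that replaces the exhaustive positioning follows directly from the monotonicity assumption: if the distance measure grows with the reference frame interval, the largest acceptable interval can be bracketed in O(log n) evaluations. Function and parameter names are illustrative:

```python
def max_reference_interval(distance, start, max_gap, threshold):
    """Largest frame gap g such that distance(start, start + g) stays
    below the scene-change threshold, found by binary search. Valid
    only under the paper's assumption that the distance measure is
    monotone in the reference frame interval."""
    lo, hi = 1, max_gap
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if distance(start, start + mid) < threshold:
            best = mid      # mid is acceptable; try a larger gap
            lo = mid + 1
        else:
            hi = mid - 1    # too far; shrink the gap
    return best
```

Each evaluation of `distance` is one temporal-segmentation comparison, so the saving over exhaustive search grows with the allowed reference frame interval.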

Journal ArticleDOI
TL;DR: The problems associated with concatenating MPEG-2 transport streams (TS) and a technique to perform frame accurate seamless splicing from one MPEG-2 TS to another on compressed stream video servers are described.
Abstract: Current uncompressed video servers are capable of streaming multiple video clips back-to-back in such a way that they appear to be a single uninterrupted stream. This is a relatively simple process made possible by frame boundaries equally spaced with no interframe dependencies. With the adoption of MPEG-2 and DV digital television standards, the distribution of video in compressed format will become more common. This change is fueling the development of video servers capable of distributing compressed video in broadcast-ready format. The seamless concatenation and splicing of streams that has been taken for granted in the uncompressed domain becomes complex in the compressed domain due to the mechanics of video encoding. This paper describes the problems associated with concatenating MPEG-2 transport streams (TS) and a technique to perform frame accurate seamless splicing from one MPEG-2 TS to another on compressed stream video servers.

Patent
27 Aug 1999
TL;DR: In this paper, a frame error detection method includes the steps of determining a plurality of comparison values which include a given comparison value depending on a frame energy of a given speech frame or a change in frame energy between the given frame and a preceding speech frame.
Abstract: A frame error detection method includes the steps of determining a plurality of comparison values which include a given comparison value depending on a frame energy of a given speech frame or a change in frame energy between the given speech frame and a preceding speech frame. The given speech frame is identified as a bad speech frame if a logical combination of a plurality of criteria is met. One of the criteria is based on a comparison of a threshold value with the given comparison value depending on the frame energy or the change in frame energy. A device for frame error detection and a receiver including the device for frame error detection are also provided.
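The energy-based criterion combines with the other checks; a logical AND is used in this sketch as an assumed combination, since the abstract leaves the exact logical combination open:

```python
def is_bad_frame(frame_energy, prev_energy, other_checks, energy_jump_threshold):
    """Flag a received speech frame as bad when the change in frame
    energy from the preceding frame exceeds a threshold AND the other
    channel-quality criteria agree. The AND combination and parameter
    names are assumptions for illustration."""
    energy_criterion = abs(frame_energy - prev_energy) > energy_jump_threshold
    return energy_criterion and all(other_checks)
```

Speech energy rarely jumps discontinuously between adjacent frames, so a large jump is evidence of an undetected channel error rather than of the talker.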

Journal ArticleDOI
01 Aug 1999
TL;DR: This paper reduces the computational complexity to get the backward motion vectors by using the dynamic search range varied for the forward motion vectors received from the transmitter and proposes the algorithm to obtain the motion vectors which are closer to the true object motion by utilizing the local characteristics of the vector field.
Abstract: We propose a new fast motion compensated interframe interpolation algorithm. First, we propose a fast algorithm that obtains backward motion vectors by using forward motion vectors received from the transmitter. It is important to predict well both the covered and uncovered regions in order to interpolate skipped frames at the receiver; therefore, backward motion vectors are needed as well as forward motion vectors. But, in the case of conventional video communication systems, we usually receive only the forward estimated motion vectors from the transmitter. To estimate the backward motion vectors used to predict the uncovered region, the conventional interframe interpolation technique requires a large amount of calculation at the receiver because it uses an exhaustive motion search such as the full search block matching algorithm (FSBMA). In this paper, we reduce the computational complexity of obtaining the backward motion vectors by using a dynamic search range varied according to the forward motion vectors received from the transmitter. Second, we also propose an algorithm to obtain motion vectors that are closer to the true object motion by utilizing the local characteristics of the vector field. The resulting motion vectors are more adaptive to the real object motion than those of the conventional algorithm. According to the simulation results, the proposed algorithm is better than the conventional one in subjective as well as objective quality.


Proceedings ArticleDOI
06 Sep 1999
TL;DR: The matching method of the improved algorithm is different from the traditional block matching algorithm, and experimental results prove that a better compensation effect is obtained.
Abstract: This paper presents an improved motion compensation algorithm for MPEG video compression: vector matching motion compensation (VMMC). The matching method of the improved algorithm differs from the traditional block matching algorithm, and experimental results show that a better compensation effect is obtained.

Patent
24 Aug 1999
TL;DR: In this article, an apparatus consisting of a decode unit which receives an encoded interlaced video signal including encoded interframe motion compensation data, and a de-interlace unit which converts the inter-link video signal to a progressive video signal, and selects a region of the video signal for a different type of conversion, the selection based on the change in position of the region between successive video frames.
Abstract: An apparatus is described comprising: a decode unit which receives an encoded interlaced video signal including encoded interframe motion compensation data, and responsively transmits a decoded interlaced video signal and associated interframe motion compensation data; and a de-interlace unit which converts the interlaced video signal to a progressive video signal, and which, responsive to the interframe motion compensation data, selects a region of the interlaced video signal for a different type of conversion, the selection based on the change in position of the region between successive video frames.
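The selection step of this de-interlacing apparatus can be sketched with a per-region decision driven by the decoded motion vectors. The 'bob'/'weave' labels and the threshold are illustrative assumptions; the patent only says that regions are selected for a different type of conversion based on their change in position between successive frames.

```python
def choose_conversion(mv_field, motion_thresh=0):
    """Map each region's interframe motion vector (dx, dy) to a
    conversion type: moving regions get a motion-adapted conversion
    (here called 'bob'), static regions are woven from both fields
    ('weave'). Names and threshold are invented for illustration."""
    return [['bob' if abs(dx) + abs(dy) > motion_thresh else 'weave'
             for (dx, dy) in row]
            for row in mv_field]
```

Reusing the motion compensation data already present in the bitstream, as the apparatus does, avoids running a separate motion detector in the de-interlace unit.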

Patent
29 Jul 1999
TL;DR: In this article, a bitstream of moving pictures which has been processed by motion-compensated interframe prediction is converted into another bitstream, where the input bitstream is separated into first codes of a predetermined number of intra-coded frames, second codes of frames to be used as reference frames for inter-frame prediction and third codes other than the frames of the first and second codes, respectively.
Abstract: A bitstream of moving pictures that has been processed by motion-compensated interframe prediction is converted into another bitstream. The input bitstream is separated into first codes of a predetermined number of intra-coded frames, second codes of frames to be used as reference frames for interframe prediction, and third codes of frames other than those of the first and second codes. The first and second codes are decoded to reproduce a first and a second video signal, respectively. The first video signal is encoded by interframe predictive coding using the second video signal as a reference video signal to obtain re-encoded codes. The re-encoded, second, and third codes are multiplexed to obtain a bitstream whose video encoding method has been converted from that of the input bitstream. Alternatively, the input bitstream may be separated into first codes of frames not to be used as reference frames for interframe prediction and second codes of frames to be used as the reference frames. The first codes are then separated into codes for interframe prediction and codes of interframe predictive error signals. The codes of interframe predictive error signals are decoded by using variable-length codes, and the decoded codes are inversely quantized to reproduce values of the interframe predictive error signals.
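The first separation step described above can be sketched as a partition of the coded frames into the three code groups. Treating 'I' pictures as the intra-coded group and 'P' pictures as the reference group is an assumption for illustration; the patent speaks only of intra-coded frames, reference frames, and the remainder.

```python
def separate_codes(frame_types):
    """Partition a coded sequence into the three groups of the
    abstract, returning the frame indices of each group:
      first  - intra-coded frames (assumed 'I'),
      second - frames used as references for interframe prediction
               (assumed 'P'),
      third  - all remaining frames (e.g. non-reference 'B' frames).
    """
    first = [i for i, t in enumerate(frame_types) if t == 'I']
    second = [i for i, t in enumerate(frame_types) if t == 'P']
    third = [i for i, t in enumerate(frame_types) if t not in ('I', 'P')]
    return first, second, third
```

Only the first group is decoded and re-encoded against the second; the second and third groups pass through and are re-multiplexed, which is what keeps the conversion cheap.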

Patent
Soon-Seok Oh1
21 Oct 1999
TL;DR: In this article, the inter-frame gap (IFG) between frame units is used to determine whether a host is able to make transmissions in the next time window from the currently transmitting host.
Abstract: Ethernet collision control systems and methods for an Ethernet network are provided which utilize a media access controller (MAC) that inserts a counter code in the inter-frame gap (IFG) between frame units when transmitting data whose size exceeds the capacity of a single frame of the Ethernet protocol. The counter code provides a clock signal which may be received by other hosts on the network when they monitor the network to determine whether they are able to make transmissions. First, by recognizing the counter code between frames as distinct from an idle-period signal, hosts seeking to transmit are notified that additional frames are expected in the next time window from the currently transmitting host. In addition, each host is provided with a counter which generates a count value from the counter code received during inter-frame gaps and generates a retransmission criterion for controlling transmissions based on the received counter code. Accordingly, the counter code may be used to prioritize access to the network according to the sequence in which additional hosts, responding to internally generated transmission requests, access the network.
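The arbitration idea can be sketched as a toy model: the transmitting host increments a counter code in each inter-frame gap, a host that starts waiting later observes a larger count, and the observed counts order subsequent access to the medium. The class and method names are invented, and real hosts would derive backoff timing from the count rather than a simple sorted queue.

```python
class IfgCounterModel:
    """Toy model of counter-code arbitration in the inter-frame gap.
    Illustrative only; not the patent's actual MAC logic."""

    def __init__(self):
        self.counter = 0
        self.waiting = []  # (count observed at request time, host id)

    def emit_gap(self):
        # Transmitter places the next counter code in the IFG.
        self.counter += 1

    def request(self, host):
        # A host wishing to transmit records the counter code it sees.
        self.waiting.append((self.counter, host))

    def grant_order(self):
        # Hosts gain access in the order in which they began waiting.
        return [host for _, host in sorted(self.waiting)]
```

Because each waiting host keys its retransmission criterion to the count it observed, earlier requesters win access deterministically instead of re-colliding under random backoff.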