
Showing papers by "Woo-Jin Han" published in 2007


Patent
27 Mar 2007
TL;DR: In this article, a method of assigning a priority for controlling a bit rate of a bitstream having a plurality of quality layers is provided, where a low priority is assigned to a quality layer having a small influence on a video quality reduction of the current picture when the quality layer is truncated.
Abstract: A method of assigning a priority for controlling a bit rate of a bitstream having a plurality of quality layers is provided. The method includes composing first quality layers for a reference picture, composing second quality layers for a current picture that is encoded with reference to the reference picture, and assigning a priority to each of the first and second quality layers, wherein a low priority is assigned to a quality layer having a small influence on a video quality reduction of the current picture when the quality layer is truncated.
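
A minimal Python sketch of this kind of priority rule, under stated assumptions: each layer carries an estimated "distortion if truncated" value and layers are ranked by it. The field names and the ranking rule are illustrative, not taken from the patent.

```python
# Illustrative sketch (not the patented algorithm itself): rank quality layers by
# an assumed "distortion-if-truncated" estimate and give lower priority to layers
# whose truncation barely affects the current picture's quality.

def assign_priorities(layers):
    """layers: list of dicts with an estimated 'distortion_if_truncated' value.
    Adds a 'priority' field (0 = highest priority) and returns the ranked list."""
    ranked = sorted(layers, key=lambda l: l["distortion_if_truncated"], reverse=True)
    for priority, layer in enumerate(ranked):
        layer["priority"] = priority  # larger number = lower priority
    return ranked

if __name__ == "__main__":
    reference_layers = [{"id": "ref-Q0", "distortion_if_truncated": 9.1},
                        {"id": "ref-Q1", "distortion_if_truncated": 2.3}]
    current_layers = [{"id": "cur-Q0", "distortion_if_truncated": 7.5},
                      {"id": "cur-Q1", "distortion_if_truncated": 0.8}]
    for layer in assign_priorities(reference_layers + current_layers):
        print(layer["id"], "->", layer["priority"])
```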

39 citations


Patent
Tammy Lee1, Woo-Jin Han1
02 Apr 2007
TL;DR: In this article, a method of encoding FGS layers by using weighted average sums was proposed, which includes calculating a first weighted average sum by using a restored block of the n-th enhanced layer of a previous frame and a restored block of a base layer of a current frame.
Abstract: Provided is a method of encoding FGS layers by using weighted average sums. The method includes calculating a first weighted average sum by using a restored block of the n-th enhanced layer of a previous frame and a restored block of a base layer of a current frame; calculating a second weighted average sum by using a restored block of the n-th enhanced layer of a next frame and a restored block of a base layer of the current frame; generating a prediction signal of the n-th enhanced layer of the current frame by adding residual data of the (n-1)-th enhanced layer of the current frame to a sum of the first weighted average sum and the second weighted average sum; and encoding residual data of the n-th enhanced layer, which is obtained by subtracting the generated prediction signal of the n-th enhanced layer from the restored block of the n-th enhanced layer of the current frame.
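
A minimal numeric sketch of the prediction described above, assuming 50/50 weights for each weighted average sum and a simple averaging of the two sums; the actual weights and the way the two sums are combined in the patent may differ.

```python
import numpy as np

def fgs_prediction(rec_prev_enh, rec_next_enh, rec_cur_base, res_cur_lower_enh,
                   w_prev=0.5, w_next=0.5):
    """Inputs are same-shaped arrays of reconstructed samples / residual data."""
    first_sum = w_prev * rec_prev_enh + (1.0 - w_prev) * rec_cur_base
    second_sum = w_next * rec_next_enh + (1.0 - w_next) * rec_cur_base
    combined = 0.5 * (first_sum + second_sum)   # assumed normalization of the combined sum
    return res_cur_lower_enh + combined         # prediction of the n-th enhanced layer

def fgs_residual(rec_cur_enh, prediction):
    # Residual of the n-th enhanced layer that would actually be encoded.
    return rec_cur_enh - prediction

if __name__ == "__main__":
    shape = (4, 4)
    prev_enh, next_enh = np.full(shape, 120.0), np.full(shape, 124.0)
    cur_base, lower_res = np.full(shape, 118.0), np.full(shape, 2.0)
    cur_enh = np.full(shape, 123.0)
    pred = fgs_prediction(prev_enh, next_enh, cur_base, lower_res)
    print(fgs_residual(cur_enh, pred))
```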

37 citations


Patent
Tammy Lee1, Woo-Jin Han1
18 Dec 2007
TL;DR: In this paper, a method and apparatus for predicting a motion vector using a global motion vector, an encoder, a decoder, and a decoding method are presented; the method predicts a global motion vector of the current block and compares the motion vector differences obtained with respect to an adjacent partition and with respect to the predicted global motion vector.
Abstract: Provided are a method and apparatus for predicting a motion vector using a global motion vector, an encoder, a decoder, and a decoding method. The motion vector prediction method includes: predicting a global motion vector of a current block; calculating a first motion vector difference between a motion vector of the current block and a motion vector of an adjacent partition, and a second motion vector difference between the motion vector of the current block and the predicted global motion vector of the current block; and predicting, as the motion vector of the current block, a motion vector having a minimum Rate-Distortion (RD) cost, based on the first motion vector difference and the second motion vector difference.
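
A hedged sketch of the selection step: choose between the adjacent-partition motion vector and the predicted global motion vector as the predictor, using a simple bit-cost proxy in place of a real RD cost. The cost model here is illustrative, not the patent's.

```python
def mvd_bit_cost(mvd):
    # Rough proxy: larger differences cost more bits (not a real entropy model).
    return abs(mvd[0]) + abs(mvd[1])

def select_mv_predictor(current_mv, neighbor_mv, global_mv):
    mvd_neighbor = (current_mv[0] - neighbor_mv[0], current_mv[1] - neighbor_mv[1])
    mvd_global = (current_mv[0] - global_mv[0], current_mv[1] - global_mv[1])
    if mvd_bit_cost(mvd_neighbor) <= mvd_bit_cost(mvd_global):
        return "neighbor", mvd_neighbor
    return "global", mvd_global

if __name__ == "__main__":
    print(select_mv_predictor(current_mv=(8, -3), neighbor_mv=(2, 1), global_mv=(7, -2)))
```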

28 citations


Patent
20 Jul 2007
TL;DR: In this paper, a method and apparatus for performing entropy encoding on a fine granular scalability layer are presented, in which, for a plurality of current coefficients of one quality layer among the plurality of quality layers of an image block, a context model is selected for each current coefficient using at least one lower coefficient corresponding to it.
Abstract: A method and apparatus are provided for performing entropy encoding on a fine granular scalability layer. A method of entropy encoding a plurality of current coefficients of a quality layer among a plurality of quality layers of an image block divided into the plurality of quality layers includes determining a coding pass with respect to each of the current coefficients, selecting a context model with respect to each of the current coefficients using at least one lower coefficient corresponding to each of the current coefficients if the coding pass is a refinement pass, and performing arithmetic encoding on a group of coefficients having the same selected context model among the current coefficients by using the selected context model.
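
An illustrative sketch of the context-grouping idea only: pick a context for each refinement-pass coefficient from the co-located lower-layer coefficient, then group coefficients that share a context. The specific context rule and the arithmetic coder itself are assumptions and are not reproduced here.

```python
from collections import defaultdict

def select_context(lower_coeff):
    # Assumed rule: distinguish zero / small / large lower-layer coefficients.
    magnitude = abs(lower_coeff)
    if magnitude == 0:
        return 0
    return 1 if magnitude == 1 else 2

def group_refinement_coefficients(current_coeffs, lower_coeffs):
    """current_coeffs, lower_coeffs: equally long lists of co-located coefficients."""
    groups = defaultdict(list)
    for cur, low in zip(current_coeffs, lower_coeffs):
        groups[select_context(low)].append(cur)
    # Each group would then be arithmetic-coded with its own context model.
    return dict(groups)

if __name__ == "__main__":
    print(group_refinement_coefficients([1, 0, -1, 2], [0, 0, 1, 3]))
```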

28 citations


Patent
Woo-Jin Han1, Kyo-Hyuk Lee1, Tammy Lee1
07 Nov 2007
TL;DR: In this paper, a method of and apparatus for video encoding and decoding based on motion estimation are proposed, in which a prediction motion vector is derived by searching the reference picture with previously encoded pixels adjacent to the current block, reducing the number of bits needed to encode the motion vector.
Abstract: Provided is a method of and apparatus for video encoding and decoding based on motion estimation. The method includes generating a motion vector by searching a reference picture using pixels of a current block, generating a prediction motion vector that is a prediction value of the motion vector by searching the reference picture using previously encoded pixels located adjacent to the current block, and encoding the current block based on the motion vector and the prediction motion vector. By accurately predicting the motion vector of the current block, the number of bits required for encoding the motion vector can be reduced, thereby improving the compression rate of video data.
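
A compact sketch of the two searches described above: a block-matching search for the actual motion vector, and a template-matching search over previously encoded neighboring pixels (the row above and the column to the left) for the predicted motion vector. The SAD metric, search range, and template shape are illustrative assumptions.

```python
import numpy as np

def sad(a, b):
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def block_cost(reference, block, y, x):
    h, w = block.shape
    return sad(reference[y:y + h, x:x + w], block)

def template_cost(reference, current, y0, x0, h, w, dy, dx):
    # Template = the row above and the column to the left of the block; these
    # pixels are already reconstructed, so the decoder can repeat this search.
    top_cur = current[y0 - 1, x0:x0 + w]
    left_cur = current[y0:y0 + h, x0 - 1]
    top_ref = reference[y0 - 1 + dy, x0 + dx:x0 + dx + w]
    left_ref = reference[y0 + dy:y0 + dy + h, x0 - 1 + dx]
    return sad(top_ref, top_cur) + sad(left_ref, left_cur)

def search(cost_fn, search_range=4):
    candidates = [(dy, dx)
                  for dy in range(-search_range, search_range + 1)
                  for dx in range(-search_range, search_range + 1)]
    return min(candidates, key=cost_fn)

def estimate(reference, current, y0, x0, h=8, w=8, search_range=4):
    # Assumes the block lies at least search_range + 1 samples from every frame edge.
    block = current[y0:y0 + h, x0:x0 + w]
    mv = search(lambda d: block_cost(reference, block, y0 + d[0], x0 + d[1]), search_range)
    pred_mv = search(lambda d: template_cost(reference, current, y0, x0, h, w, *d), search_range)
    mvd = (mv[0] - pred_mv[0], mv[1] - pred_mv[1])   # only this difference is coded
    return mv, pred_mv, mvd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 255, (32, 32), dtype=np.uint8)
    cur = np.roll(ref, (1, 2), axis=(0, 1))          # current frame = shifted reference
    print(estimate(ref, cur, 12, 12))
```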

19 citations


Patent
So-Young Kim1, Kyo-Hyuk Lee1, Woo-Jin Han1, Tammy Lee1, Manu Mathew1 
21 Feb 2007
TL;DR: In this paper, a method and apparatus for encoding/decoding a multi-layer interlaced video signal having macroblocks coded in an interlaced manner are provided.
Abstract: A method and apparatus for encoding/decoding a multi-layer interlaced video signal having macroblocks coded in an interlaced manner is provided. The method includes determining whether a pair of macroblocks of a current layer are of a frame type and a corresponding pair of macroblocks of a lower layer are of a field type; and predicting and encoding a macroblock of the current layer by interpolating information of the top or bottom field of a corresponding macroblock of the lower layer, if the pair of the macroblocks of the current layer are of the frame type and the corresponding pair of the macroblocks of the lower layer are of the field type, and the top and bottom fields of the corresponding pair of the macroblocks of the lower layer have been coded in different prediction modes.

18 citations


Patent
Woo-Jin Han1, So-Young Kim1
30 Mar 2007
TL;DR: In this article, a method and apparatus are provided for reducing the inter-layer redundancy of a difference signal obtained from an intra-prediction when coding a video using a multi-layer structure supporting intra-prediction.
Abstract: A method and apparatus are provided for reducing the inter-layer redundancy of a difference signal obtained from an intra-prediction when coding a video using multi-layer structure supporting intra-prediction. The method includes obtaining a first difference block between a block of a first layer and a first prediction block which is used to perform an intra-prediction on the block, obtaining a second difference block between a block of a second layer corresponding to the block of the first layer and a second prediction block which is used to perform an intra-prediction on the block, and obtaining a final difference block between the first difference block and the second difference block.
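
A tiny numeric sketch of the final-difference idea: subtract the second layer's intra-prediction residual from the first layer's, so only the remaining inter-layer difference would need to be coded. Array names and the demo values are illustrative.

```python
import numpy as np

def final_difference(block_l1, pred_l1, block_l2, pred_l2):
    first_diff = block_l1 - pred_l1    # intra-prediction residual in the first layer
    second_diff = block_l2 - pred_l2   # co-located residual in the second layer
    return first_diff - second_diff    # inter-layer redundancy removed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b1, p1 = rng.integers(0, 255, (4, 4)), rng.integers(0, 255, (4, 4))
    b2, p2 = b1 + 2, p1 + 1            # strongly correlated layers for demonstration
    print(final_difference(b1, p1, b2, p2))
```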

17 citations


Patent
15 Jun 2007
TL;DR: In this article, an apparatus is presented for encoding a flag used to code a video frame composed of a plurality of blocks; it includes a flag-assembling unit which collects flag values allotted for each block and produces a flag bit string based on spatial correlation of the blocks, a maximum-run-determining unit which determines a maximum run of the flag bit string, and a converting unit which converts the bits included in the flag bit string into a codeword having a size no more than the maximum run by using a predetermined codeword table.
Abstract: The present invention relates to a video compression technology, and more particularly, to an effective flag-coding method and apparatus thereof by using a spatial correlation among various flags used to code a video frame. In order to accomplish the object, there is provided an apparatus for encoding a flag used to code a video frame composed of a plurality of blocks, the apparatus including a flag-assembling unit which collects flag values allotted for each block and produces a flag bit string, based on spatial correlation of the blocks, a maximum-run-determining unit which determines a maximum run of the flag bit string, and a converting unit which converts the bits included in the flag bit string into a codeword having a size no more than the maximum run by using a predetermined codeword table.
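
A hedged sketch of the flag-coding pipeline described above: assemble per-block flags into a bit string, determine the maximum run, and represent each run as a (bit value, run length) pair whose length never exceeds that maximum run. The run-length scheme below stands in for the patent's predetermined codeword table.

```python
from itertools import groupby

def assemble_flags(block_flags):
    # Flags are collected in a spatial scan order (here simply the list order),
    # so spatially correlated blocks produce long runs of identical bits.
    return "".join("1" if flag else "0" for flag in block_flags)

def encode_flag_string(bit_string):
    runs = [(bit, len(list(group))) for bit, group in groupby(bit_string)]
    max_run = max(length for _, length in runs)
    return max_run, runs  # each run maps to a codeword bounded by max_run

if __name__ == "__main__":
    flags = [1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0]
    max_run, codewords = encode_flag_string(assemble_flags(flags))
    print("max run:", max_run, "codewords:", codewords)
```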

15 citations


Patent
Tammy Lee1, Kyo-Hyuk Lee1, Woo-Jin Han1
08 Jan 2007
TL;DR: In this article, a method and apparatus for performing a motion prediction using an inverse motion transformation are provided, which includes generating a second motion vector by inverse-transforming a first motion vector of a second block in a lower layer.
Abstract: A method and apparatus for performing a motion prediction using an inverse motion transformation are provided. The method includes generating a second motion vector by inverse-transforming a first motion vector of a second block in a lower layer, the second block corresponding to a first block in a current layer; predicting a motion vector of the first block using the second motion vector; and encoding the first block using the predicted motion vector. The apparatus includes a motion vector inverse-transforming unit that generates a second motion vector by inverse-transforming a first motion vector of a second block in a lower layer corresponding to a first block in a current layer; a predicting unit that predicts a motion vector of the first block using the second motion vector; and an inter-prediction encoding unit that encodes the first block using the predicted motion vector.

15 citations


Patent
Bae-Keun Lee1, Woo-Jin Han1
30 Mar 2007
TL;DR: In this paper, an apparatus and method for independently parsing fine granular scalability (FGS) layers are provided for video encoding; the apparatus includes a frame-encoding unit which generates at least one quality layer from an input video frame, a coding-pass-selecting unit which selects a coding pass according to a coefficient of a reference block spatially neighboring a current block, and a pass-coding unit which losslessly codes the coefficient of the current block according to the selected coding pass.
Abstract: An apparatus and method are provided for independently parsing fine granular scalability (FGS) layers. A video-encoding apparatus according to an exemplary embodiment of the present invention includes a frame-encoding unit which generates at least one quality layer from an input video frame, a coding-pass-selecting unit which selects a coding pass according to a coefficient of a reference block spatially neighboring a current block in order to code a coefficient of the current block included in the quality layer, and a pass-coding unit which losslessly codes the coefficient of the current block according to the selected coding pass.
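
A hedged sketch of the pass-selection idea only: choose the coding pass for each current coefficient from the co-located coefficient of a spatially neighboring block, so the quality layer can be parsed without depending on other layers. The exact rule (here: a nonzero neighbor implies the refinement pass) is an assumption.

```python
def select_coding_pass(neighbor_coeff):
    # Assumed decision rule based on the spatially neighboring reference block.
    return "refinement" if neighbor_coeff != 0 else "significance"

def code_block(current_coeffs, neighbor_coeffs):
    passes = [select_coding_pass(n) for n in neighbor_coeffs]
    # Each (coefficient, pass) pair would then be handed to the lossless coder.
    return list(zip(current_coeffs, passes))

if __name__ == "__main__":
    print(code_block([2, 0, 1, -1], [1, 0, 0, 3]))
```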

15 citations


Patent
So-Young Kim1, Woo-Jin Han1
24 Aug 2007
TL;DR: In this article, the authors proposed a method and apparatus for transforming an image into a frequency domain by selectively using a plurality of frequency transform algorithms according to a frequency characteristic of the input image.
Abstract: Provided are a method and apparatus for transforming an image, in which an input image is transformed into a frequency domain by selectively using a plurality of frequency transform algorithms according to a frequency characteristic of the input image. The method includes: selecting a frequency transform algorithm to be used for a current block from a plurality of frequency transform algorithms according to a result obtained by transforming frequencies of peripheral blocks adjacent to the current block; and transforming the current block into a frequency domain by using the selected frequency transform algorithm.
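
An illustrative sketch only: pick between two candidate transforms for the current block based on how much high-frequency energy the already-transformed neighboring blocks contained. The DCT/Hadamard pair, the energy measure, and the threshold are assumptions, not the transforms or the decision rule of the patent.

```python
import numpy as np

def dct2(block):
    n = block.shape[0]
    k = np.arange(n)
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0, :] = np.sqrt(1.0 / n)
    return basis @ block @ basis.T          # 2-D DCT-II

def hadamard2(block):
    h = np.array([[1.0]])
    while h.shape[0] < block.shape[0]:
        h = np.block([[h, h], [h, -h]])
    return h @ block @ h.T / block.shape[0]  # 2-D Hadamard transform

def high_freq_ratio(coeffs):
    n = coeffs.shape[0]
    total = np.abs(coeffs).sum() + 1e-9
    low = np.abs(coeffs[: n // 2, : n // 2]).sum()
    return 1.0 - low / total

def select_transform(neighbor_coeff_blocks, threshold=0.3):
    avg_ratio = np.mean([high_freq_ratio(c) for c in neighbor_coeff_blocks])
    # Assumed rule: busy (high-frequency) neighborhoods use the Hadamard transform.
    return hadamard2 if avg_ratio > threshold else dct2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    neighbors = [dct2(rng.standard_normal((4, 4))) for _ in range(2)]
    current = rng.standard_normal((4, 4))
    transform = select_transform(neighbors)
    print(transform.__name__)
    print(np.round(transform(current), 2))
```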

Patent
14 Dec 2007
TL;DR: In this paper, a portion of a texture region included in a current picture is selected as a sample texture for synthesizing the texture region, and only the sample texture is encoded in place of the texture region.
Abstract: Provided is an image encoding/decoding method and apparatus. In the image encoding method, a portion of a texture region included in a current picture is selected as a sample texture for synthesizing the texture region and only the sample texture is encoded in place of the texture region, thereby improving the compression efficiency of encoding with respect to the texture region and thus improving the compression efficiency of encoding with respect to the entire image.

Patent
Tammy Lee1, Woo-Jin Han1
17 Dec 2007
TL;DR: In this paper, a method and apparatus for determining coding for coefficients of a residual block, an encoder, and a decoder are presented, in which coding of each coefficient of the frequency-converted residual block is determined from the coefficients of the frequency-converted prediction block, so that some residual coefficients need not be transferred to the decoder.
Abstract: Provided are a method and apparatus for determining coding for coefficients of a residual block, an encoder and a decoder. The method includes generating a residual block by subtracting a motion-compensated prediction block from the current block; frequency-converting coefficients of the residual block into frequency coefficients; frequency-converting coefficients of the motion-compensated prediction block into frequency coefficients; and determining coding for each of the coefficients of the frequency-converted residual block based on the coefficients of the frequency-converted prediction block. Accordingly, the amount of information transferred to a decoder can be reduced by not coding some coefficients of the residual block.
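
A hedged sketch of the decision step only: with both the residual and the motion-compensated prediction block in the frequency domain, skip (zero out) residual coefficients at positions where the prediction block carries almost no energy. The threshold rule is an illustrative assumption, not the patent's rule; the point is that the decoder also has the prediction block and can derive the same mask without extra side information.

```python
import numpy as np

def decide_coefficient_coding(residual_coeffs, prediction_coeffs, threshold=1.0):
    """Both inputs are same-shaped frequency-domain coefficient arrays."""
    code_mask = np.abs(prediction_coeffs) >= threshold   # which positions to code
    coded_residual = np.where(code_mask, residual_coeffs, 0)
    return coded_residual, code_mask

if __name__ == "__main__":
    residual = np.array([[10.0, 4.0], [0.5, 2.0]])
    prediction = np.array([[50.0, 0.2], [3.0, 0.1]])
    coded, mask = decide_coefficient_coding(residual, prediction)
    print(coded)
    print(mask)
```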

Patent
Tammy Lee1, Woo-Jin Han1, Manu Mathew1, Kyo-Hyuk Lee1, Sang-Rae Lee1 
15 May 2007
TL;DR: In this paper, a motion compensation method and apparatus that sequentially use global motion compensation and local motion compensation, a video decoding method, an encoder and a video decoder are provided.
Abstract: A motion compensation method and apparatus that sequentially use global motion compensation and local motion compensation, a video decoding method, a video encoder, and a video decoder are provided. The motion compensation method includes extracting global motion information of a reference block, performing global motion compensation by applying the extracted global motion information to the reference block, extracting local motion information of the global motion-compensated reference block, and performing local motion compensation by applying the local motion information to the global motion-compensated reference block.
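
A minimal sketch of the two-stage compensation above, assuming the global motion model is a pure translation (the patent's global model may be more general, e.g. affine); a per-block local translation is then applied on top of it.

```python
import numpy as np

def translate(frame, dy, dx):
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

def motion_compensate(reference, global_mv, local_mv, top_left, size=(8, 8)):
    globally_compensated = translate(reference, *global_mv)          # stage 1: global MC
    fully_compensated = translate(globally_compensated, *local_mv)   # stage 2: local MC
    y, x = top_left
    h, w = size
    return fully_compensated[y:y + h, x:x + w]

if __name__ == "__main__":
    ref = np.arange(256).reshape(16, 16)
    print(motion_compensate(ref, global_mv=(2, 1), local_mv=(0, -1), top_left=(4, 4)))
```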

Patent
21 Jun 2007
TL;DR: In this article, a method and an apparatus for efficient coding using a spatial correlation among various flags used to code a video frame are presented; the apparatus comprises a flag-assembling unit which collects flag values allotted for each block and produces a flag bit string based on spatial correlation of blocks constituting the video frame, a maximum-run-determining unit which determines a maximum run of the flag bit string, and a converting unit which converts the bits included in the flag bit string into a codeword having a size no more than the maximum run by using a predetermined codeword table.
Abstract: PROBLEM TO BE SOLVED: To provide a method and an apparatus for efficient coding using a spatial correlation among various flags used to code a video frame. SOLUTION: The apparatus comprises: a flag-assembling unit which collects flag values allotted for each block and produces a flag bit string, based on spatial correlation of blocks constituting the video frame; a maximum-run-determining unit which determines a maximum run of the flag bit string; and a converting unit which converts the bits included in the flag bit string into a codeword having a size no more than the maximum run by using a predetermined codeword table.

Proceedings Article
Kyo-Hyuk Lee1, Woo-Jin Han1, Tammy Lee1
01 Jan 2007
TL;DR: In this paper, the authors improve the SKIP mode motion field by splitting the 16x16 macroblock into four 8x8 sub-partitions and deriving each sub-partition's SKIP mode motion field separately.
Abstract: H.264 (MPEG-4 AVC) is the state-of-the-art international video coding standard, showing better coding efficiency than previous standards. This contribution improves the motion derivation process of the H.264 SKIP mode. H.264 exploits temporal or spatial motion field correlation to derive the current motion field: temporal or spatial direct mode macroblocks for B slices and skip mode macroblocks for P slices are adopted to exploit this correlation. In general, the H.264 SKIP mode macroblock has a great impact on coding efficiency because about 30~70% of macroblocks are set as skip mode. A SKIP mode macroblock derives one motion vector for the whole 16x16 macroblock region from spatial correlation. In this contribution, we improve the SKIP mode motion field further instead of setting one motion vector for the 16x16 macroblock region: we split the 16x16 macroblock into four 8x8 sub-partitions and set each sub-partition's SKIP mode motion field separately. Experimental results showed an average 2.05% and up to 18.63% bit rate reduction, with especially higher coding efficiency at low bit rates.
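
A sketch of the 8x8 SKIP-mode idea under stated assumptions: instead of one spatially derived motion vector for the whole 16x16 macroblock, derive a median-predicted vector per 8x8 sub-partition from that sub-partition's own spatial neighbours. The neighbour layout and median rule follow the usual H.264-style spatial prediction but are not taken verbatim from the contribution.

```python
def median_mv(a, b, c):
    # Component-wise median of three motion vectors.
    return (sorted((a[0], b[0], c[0]))[1], sorted((a[1], b[1], c[1]))[1])

def skip_mode_mvs_16x16(left_mv, top_mv, topright_mv):
    # Baseline SKIP mode: a single vector reused for all four 8x8 sub-partitions.
    return [median_mv(left_mv, top_mv, topright_mv)] * 4

def skip_mode_mvs_8x8(neighbour_mvs_per_subblock):
    # Proposed refinement: one vector per 8x8 sub-partition, each derived from
    # the neighbours adjacent to that particular sub-partition.
    return [median_mv(*mvs) for mvs in neighbour_mvs_per_subblock]

if __name__ == "__main__":
    left, top, topright = (4, 0), (3, 1), (6, -1)
    print(skip_mode_mvs_16x16(left, top, topright))
    print(skip_mode_mvs_8x8([(left, top, (5, 0)), ((5, 0), top, topright),
                             (left, (2, 2), (3, 0)), ((3, 0), (2, 2), topright)]))
```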

Patent
Woo-Jin Han1, Park Jeong Hoon1
07 Nov 2007
TL;DR: In this paper, the authors proposed a method for video interprediction encoding/decoding, which can be performed using both an intra (I) picture and correlation with adjacent pictures.
Abstract: Provided are a method and apparatus for video interprediction encoding/decoding. The method of video interprediction encoding/decoding includes extracting intraprediction-encoded/decoded blocks included in previously encoded/decoded pictures and predicting a current block from the extracted blocks. Thus, video encoding/decoding can be performed using both an intra (I) picture and correlation with adjacent pictures, thereby increasing the speed of video encoding/decoding.

Patent
20 Jun 2007
TL;DR: In this paper, the authors present an apparatus for encoding a flag used to code a video frame composed of a plurality of blocks, the apparatus including a flag-assembling unit (121) which collects flag values allotted for each block and produces a flag bit string based on spatial correlation of the blocks, a maximum-run-determining unit (122) which determines a maximum run of the flag bit string, and a converting unit (125) which converts the bits included in the flag bit string into a codeword having a size no more than the maximum run by using a predetermined codeword table.
Abstract: The present invention relates to a video compression technology, and more particularly, to an effective flag-coding method and apparatus thereof by using a spatial correlation among various flags used to code a video frame. In order to accomplish the object, there is provided an apparatus for encoding a flag used to code a video frame composed of a plurality of blocks, the apparatus including a flag-assembling unit (121) which collects flag values allotted for each block and produces a flag bit string, based on spatial correlation of the blocks, a maximum-run-determining unit (122) which determines a maximum run of the flag bit string, and a converting unit (125) which converts the bits included in the flag bit string into a codeword having a size no more than the maximum run by using a predetermined codeword table (221).