
Woo-Jin Han

Researcher at Gachon University

Publications - 153
Citations - 12343

Woo-Jin Han is an academic researcher from Gachon University. The author has contributed to research in topics including motion compensation and motion estimation, has an h-index of 35, and has co-authored 153 publications receiving 10,953 citations. Previous affiliations of Woo-Jin Han include Samsung.

Papers
Patent

Method and apparatus for motion prediction using inverse motion transform

TL;DR: A method and apparatus for performing motion prediction using an inverse motion transformation are provided; the method includes generating a second motion vector by inverse-transforming a first motion vector of a second block in a lower layer.
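
As a rough illustration of deriving a current-layer motion vector from a lower-layer one, the sketch below inverts an assumed 2x2 linear motion model and applies it to the lower-layer vector. The function name and the matrix are hypothetical; the patent does not specify the transform model here.

```python
# Illustrative sketch: predict a motion vector by inverse-transforming a
# lower-layer motion vector through an assumed 2x2 linear motion model.
import numpy as np

def inverse_transform_mv(mv_lower, motion_model):
    """Apply the inverse of a 2x2 motion model (hypothetical) to a
    lower-layer motion vector to get a predictor for the current block."""
    inv = np.linalg.inv(motion_model)              # invert the motion model
    return inv @ np.asarray(mv_lower, dtype=float)

# Example: lower-layer MV (4, -2) under a 2:1 down-scaling motion model.
print(inverse_transform_mv((4.0, -2.0), np.array([[0.5, 0.0],
                                                  [0.0, 0.5]])))
# -> [ 8. -4.], a candidate motion-vector predictor for the current layer
```
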
Patent

Video encoding method and video encoding apparatus and video decoding method and video decoding apparatus, which perform deblocking filtering based on tree-structure encoding units

TL;DR: A method for performing deblocking filtering on the filtering boundaries of a video is proposed, based on at least one data unit from among a plurality of coding units that are hierarchically configured according to depths.
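
To make the idea of depth-based filtering boundaries concrete, here is a small sketch, not the patented procedure: it walks a quadtree of coding units through a hypothetical split_flags callback and records the left and top edges of each leaf unit on an 8-sample grid as candidate deblocking boundaries (the 8-sample restriction mirrors HEVC deblocking).

```python
# Illustrative sketch: collect deblocking boundaries from coding units
# organized as a quadtree by depth.
def collect_boundaries(x, y, size, split_flags, depth=0, edges=None):
    """Recursively walk a quadtree of coding units and record the left/top
    edges of each leaf unit as candidate vertical/horizontal filtering
    boundaries. `split_flags(x, y, size, depth)` is a hypothetical callback
    that says whether the unit at this position is split further."""
    if edges is None:
        edges = {"vertical": set(), "horizontal": set()}
    if split_flags(x, y, size, depth):
        half = size // 2
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            collect_boundaries(x + dx, y + dy, half, split_flags, depth + 1, edges)
    else:
        # Leaf coding unit: its left and top edges are filtering boundaries,
        # restricted here to an 8-sample grid as in HEVC deblocking.
        if x % 8 == 0:
            edges["vertical"].add(x)
        if y % 8 == 0:
            edges["horizontal"].add(y)
    return edges

# Example: a 64x64 coding tree unit that splits once at depth 0 only.
edges = collect_boundaries(0, 0, 64, lambda x, y, s, d: d == 0)
print(sorted(edges["vertical"]), sorted(edges["horizontal"]))  # [0, 32] [0, 32]
```
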
Patent

Image encoding/decoding method and apparatus

TL;DR: An image encoding/decoding method and apparatus are provided, which assign a virtual motion vector to a block encoded in an intra prediction mode and generate a new prediction block that is a combination of a prediction block generated by motion compensation using the virtual motion vector and another prediction block created by intra prediction.
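
The combination step can be pictured as a simple blend of the two prediction blocks. The sketch below is illustrative only; the equal weighting and the function name are assumptions, not the weights defined in the patent.

```python
# Illustrative sketch: blend a motion-compensated block (built with an
# assumed "virtual" motion vector) with an intra-predicted block.
import numpy as np

def combine_predictions(mc_block, intra_block, weight_mc=0.5):
    """Blend a motion-compensated block and an intra-predicted block into a
    single prediction; a per-sample weight map could be used instead."""
    mc = np.asarray(mc_block, dtype=np.float64)
    intra = np.asarray(intra_block, dtype=np.float64)
    blended = weight_mc * mc + (1.0 - weight_mc) * intra
    return np.clip(np.rint(blended), 0, 255).astype(np.uint8)

# Example with flat 4x4 blocks.
mc = np.full((4, 4), 120)
intra = np.full((4, 4), 80)
print(combine_predictions(mc, intra)[0, 0])   # -> 100
```
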
Proceedings Article

Interpolation filter design in HEVC and its coding efficiency - complexity analysis

TL;DR: Improvements include discrete-cosine-transform-based filter coefficient design, longer filter taps for luma and chroma interpolation, and higher-precision operations in the intermediate computations of the HEVC interpolation filter.
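
For reference, HEVC's luma half-sample filter is the 8-tap DCT-based filter {-1, 4, -11, 40, 40, -11, 4, -1} with a gain of 64. The sketch below applies it in one dimension only; the separable two-dimensional pass with higher-precision intermediates, edge padding, and bit-depth clipping are omitted for brevity.

```python
# Sketch of 1-D luma half-sample interpolation with the HEVC 8-tap
# DCT-based filter (tap sum 64, so the result is shifted down by 6).
HALF_PEL_TAPS = (-1, 4, -11, 40, 40, -11, 4, -1)

def interpolate_half_pel(samples, pos):
    """Half-sample value between samples[pos] and samples[pos + 1], assuming
    the list already contains the 3 left and 4 right neighbors needed."""
    window = samples[pos - 3:pos + 5]                      # 8 integer samples
    acc = sum(c * s for c, s in zip(HALF_PEL_TAPS, window))
    return (acc + 32) >> 6                                 # normalize by 64 with rounding

# Example: a linear ramp; the half-pel value lands between the two center samples.
row = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
print(interpolate_half_pel(row, 4))   # -> 55, midway between 50 and 60
```
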
Patent

Video coding method and apparatus supporting independent parsing

TL;DR: An apparatus and method for independently parsing fine granular scalability (FGS) layers in video encoding are provided; the apparatus includes a frame encoding unit that generates at least one quality layer from an input video frame, a coding-pass-selecting unit that selects a coding pass according to a coefficient of a reference block spatially neighboring a current block, and a pass-coding unit that losslessly codes the coefficient of the current block according to the selected coding pass.
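
As a toy illustration of the pass-selection idea (not the patented syntax or entropy coder), the sketch below chooses a coding pass for the current coefficient from an already-parsed neighboring coefficient, so the decision does not depend on other FGS layers; the pass names and functions are hypothetical.

```python
# Illustrative sketch: pick a coding pass from a spatially neighboring
# reference coefficient so each quality layer can be parsed independently.
def select_coding_pass(neighbor_coeff):
    """Choose between a hypothetical 'significance' pass and 'refinement'
    pass using only the already-parsed neighboring coefficient."""
    return "refinement" if neighbor_coeff != 0 else "significance"

def code_coefficient(coeff, coding_pass):
    """Losslessly code the coefficient according to the selected pass; the
    tagged tuple here stands in for the actual entropy coding."""
    return (coding_pass, coeff)

# Example: the neighbor is already significant -> refinement pass.
print(code_coefficient(3, select_coding_pass(neighbor_coeff=7)))
# -> ('refinement', 3)
```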