Author

Elena Alshina

Bio: Elena Alshina is an academic researcher from Samsung. The author has contributed to research in topics including Pixel and Motion compensation. The author has an h-index of 20 and has co-authored 161 publications receiving 1,695 citations. Previous affiliations of Elena Alshina include the Russian Academy of Sciences and Moscow State University.


Papers
Journal ArticleDOI
TL;DR: This paper provides a technical overview of sample adaptive offset (SAO), a newly added in-loop filtering technique in High Efficiency Video Coding (HEVC) that reduces sample distortion by first classifying reconstructed samples into different categories, obtaining an offset for each category, and then adding the offset to each sample of the category.
Abstract: This paper provides a technical overview of a newly added in-loop filtering technique, sample adaptive offset (SAO), in High Efficiency Video Coding (HEVC). The key idea of SAO is to reduce sample distortion by first classifying reconstructed samples into different categories, obtaining an offset for each category, and then adding the offset to each sample of the category. The offset of each category is properly calculated at the encoder and explicitly signaled to the decoder for reducing sample distortion effectively, while the classification of each sample is performed at both the encoder and the decoder for saving side information significantly. To achieve low latency of only one coding tree unit (CTU), a CTU-based syntax design is specified to adapt SAO parameters for each CTU. A CTU-based optimization algorithm can be used to derive SAO parameters of each CTU, and the SAO parameters of the CTU are interleaved into the slice data. It is reported that SAO achieves on average 3.5% BD-rate reduction and up to 23.5% BD-rate reduction with less than 1% encoding time increase and about 2.5% decoding time increase under common test conditions of HEVC reference software version 8.0.
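As a rough illustration of the edge-offset part of SAO described above, the sketch below classifies each sample of one row against its two horizontal neighbors and adds the signaled offset for its category. The function name, the 8-bit clipping, and the toy offset values are assumptions made for illustration; this is not the HEVC reference implementation.

```python
def sign(x):
    return (x > 0) - (x < 0)

def sao_edge_offset_row(rec, offsets):
    """Apply SAO edge offsets (horizontal class) along a row of reconstructed samples."""
    out = list(rec)
    for i in range(1, len(rec) - 1):
        c, left, right = rec[i], rec[i - 1], rec[i + 1]
        edge_idx = sign(c - left) + sign(c - right)
        # -2: local valley (cat 1), -1: concave edge (cat 2),
        # +1: convex edge (cat 3), +2: local peak (cat 4), 0: no offset.
        category = {-2: 1, -1: 2, 1: 3, 2: 4}.get(edge_idx, 0)
        if category:
            out[i] = min(255, max(0, c + offsets[category]))
    return out

# A small valley at index 2 is pulled up by the category-1 offset.
print(sao_edge_offset_row([100, 100, 90, 100, 100], {1: 4, 2: 2, 3: -2, 4: -4}))
# -> [100, 100, 94, 100, 100]
```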

405 citations

Journal ArticleDOI
TL;DR: A novel video compression scheme based on a highly flexible hierarchy of unit representation comprising three block concepts: coding unit (CU), prediction unit (PU), and transform unit (TU); the scheme was a candidate in the competitive phase of the high-efficiency video coding (HEVC) standardization work.
Abstract: This paper proposes a novel video compression scheme based on a highly flexible hierarchy of unit representation which includes three block concepts: coding unit (CU), prediction unit (PU), and transform unit (TU). This separation of the block structure into three different concepts allows each to be optimized according to its role; the CU is a macroblock-like unit which supports region splitting in a manner similar to a conventional quadtree, the PU supports nonsquare motion partition shapes for motion compensation, while the TU allows the transform size to be defined independently from the PU. Several other coding tools are extended to arbitrary unit size to maintain consistency with the proposed design, e.g., transform size is extended up to 64 × 64 and intraprediction is designed to support an arbitrary number of angles for variable block sizes. Other novel techniques such as a new noncascading interpolation filter design allowing arbitrary motion accuracy and a leaky prediction technique using both open-loop and closed-loop predictors are also introduced. The video codec described in this paper was a candidate in the competitive phase of the high-efficiency video coding (HEVC) standardization work. Compared to H.264/AVC, it demonstrated bit rate reductions of around 40% based on objective measures and around 60% based on subjective testing with 1080p sequences. It has been partially adopted into the first standardization model of the collaborative phase of the HEVC effort.
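The quadtree-like CU splitting described above can be pictured with a short sketch: a region is either coded as a single CU or split into four equal sub-regions, down to a minimum size. The function name and the stand-in split predicate are illustrative assumptions; a real encoder makes this decision with a rate-distortion test.

```python
def split_into_cus(x, y, size, should_split, min_size=8):
    """Return the list of leaf CUs (x, y, size) covering a square region."""
    if size > min_size and should_split(x, y, size):
        half = size // 2
        cus = []
        for dy in (0, half):
            for dx in (0, half):
                cus += split_into_cus(x + dx, y + dy, half, should_split, min_size)
        return cus
    return [(x, y, size)]

# Toy decision: split the 64x64 CTU once, then split only its top-left 32x32 quadrant.
leaves = split_into_cus(0, 0, 64,
                        lambda x, y, s: s == 64 or (s == 32 and x < 32 and y < 32))
print(leaves)  # four 16x16 CUs followed by three 32x32 CUs
```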

193 citations

Patent
Elena Alshina, Alexander Alshin, Seregin Vadim, Nikolay Shlyakhov, Koroteev Maxim
02 Jul 2009
TL;DR: A video encoding method and apparatus and a video decoding method and apparatus in which a second predicted coding unit is produced by changing each pixel of a first predicted coding unit using its neighboring pixels, and the difference between the current coding unit and the second predicted coding unit is encoded.
Abstract: A video encoding method and apparatus and a video decoding method and apparatus. In the video encoding method, a first predicted coding unit of a current coding unit that is to be encoded is produced, a second predicted coding unit is produced by changing a value of each pixel of the first predicted coding unit by using each pixel of the first predicted coding unit and at least one neighboring pixel of each pixel, and the difference between the current coding unit and the second predicted coding unit is encoded, thereby improving video prediction efficiency.
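To make the refinement step above concrete, the sketch below forms a second prediction block by combining each pixel of the first prediction with a neighboring pixel. The simple two-tap average with the upper neighbor is an assumption chosen for illustration; the patent covers various pixel/neighbor combinations, and the function name is hypothetical.

```python
import numpy as np

def refine_prediction(pred):
    """Average each pixel with its upper neighbor (first row is copied unchanged)."""
    refined = pred.astype(np.int32).copy()
    refined[1:, :] = (pred[1:, :].astype(np.int32) + pred[:-1, :] + 1) // 2
    return refined.astype(pred.dtype)

first_pred = np.array([[100, 100], [120, 140]], dtype=np.uint8)
second_pred = refine_prediction(first_pred)
# The encoder would code the difference between the current block and second_pred.
current = np.array([[102, 101], [118, 121]], dtype=np.int16)
residual = current - second_pred
print(second_pred, residual, sep="\n")
```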

143 citations

Journal ArticleDOI
TL;DR: This paper presents the details of the interpolation filter design of H.265/HEVC and its improvements over H.264/AVC; the coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
Abstract: Coding efficiency gains in the new High Efficiency Video Coding (H.265/HEVC) video coding standard are achieved by improving many aspects of the traditional hybrid coding framework. Motion compensated prediction, and in particular the interpolation filter, is one area that was improved significantly over H.264/AVC. This paper presents the details of the interpolation filter design of the H.265/HEVC standard. First, the improvements of H.265/HEVC interpolation filtering over H.264/AVC are presented. These improvements include novel filter coefficient design with an increased number of taps and utilizing higher precision operations in interpolation filter computations. Then, the computational complexity is analyzed, both from theoretical and practical perspectives. Theoretical complexity analysis is done by studying the worst-case complexity analytically, whereas practical analysis is done by profiling an optimized decoder implementation. Coding efficiency improvements over the H.264/AVC interpolation filter are studied and experimental results are presented. They show a 4.0% average bitrate reduction for the luma component and 11.3% average bitrate reduction for the chroma components. The coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
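For a concrete picture of the longer-tap filtering, the sketch below applies the 8-tap HEVC half-sample luma filter (coefficients -1, 4, -11, 40, 40, -11, 4, -1, normalized by 64) along one dimension. The edge replication at the borders and the single-stage rounding/clipping are simplifications of the standard's actual interpolation pipeline, and the function name is illustrative.

```python
HEVC_HALF_PEL = (-1, 4, -11, 40, 40, -11, 4, -1)  # 8-tap half-sample luma filter

def interpolate_half_pel(samples):
    """Return the half-sample positions between consecutive integer samples."""
    padded = [samples[0]] * 3 + list(samples) + [samples[-1]] * 4  # edge replication
    half = []
    for i in range(len(samples) - 1):
        acc = sum(c * padded[i + k] for k, c in enumerate(HEVC_HALF_PEL))
        half.append(min(255, max(0, (acc + 32) >> 6)))  # round, normalize by 64, clip to 8 bits
    return half

# On a linear ramp the half-pel samples land close to the midpoints.
print(interpolate_half_pel([10, 20, 30, 40, 50, 60, 70, 80]))
```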

76 citations

Patent
05 Apr 2011
TL;DR: A method and apparatus for determining the intra prediction mode of a coding unit: candidate intra prediction modes of a chrominance component coding unit, which include the intra prediction mode of the corresponding luminance component coding unit, are determined, and their costs are compared to select the minimum-cost mode as the intra prediction mode of the chrominance component coding unit.
Abstract: A method and apparatus for determining an intra prediction mode of a coding unit. Candidate intra prediction modes of a chrominance component coding unit, which includes an intra prediction mode of a luminance component coding unit, are determined, and costs of the chrominance component coding unit according to the determined candidate intra prediction modes are compared to determine a minimum cost intra prediction mode to be the intra prediction mode of the chrominance component coding unit.
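The selection logic described above reduces to a minimum-cost search over a small candidate list that includes the luma block's mode. The candidate names and the toy costs below are assumptions for illustration; a real encoder would compute a rate-distortion or SAD-based cost per mode.

```python
def choose_chroma_mode(candidates, cost_fn):
    """Return the candidate intra prediction mode with minimum cost."""
    return min(candidates, key=cost_fn)

luma_mode = "angular_26"                                 # mode inherited from the luma CU
candidates = ["planar", "dc", "horizontal", "vertical", luma_mode]
toy_costs = {"planar": 410, "dc": 395, "horizontal": 500, "vertical": 470, "angular_26": 360}

best = choose_chroma_mode(candidates, lambda m: toy_costs[m])
print(best)  # the luma-derived mode wins in this toy example
```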

62 citations


Cited by
01 Mar 1987
TL;DR: The variable-order Adams method (SIVA/DIVA) package is a collection of subroutines for the solution of non-stiff ordinary differential equations.
Abstract: The initial-value ordinary differential equation solution via variable-order Adams method (SIVA/DIVA) package is a collection of subroutines for the solution of nonstiff ordinary differential equations. There are versions for single-precision and double-precision arithmetic. It requires fewer evaluations of derivatives than other variable-order Adams predictor/corrector methods. An option for direct integration of second-order equations makes integration of trajectory problems significantly more efficient. Written in FORTRAN 77.
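To show the predictor/corrector idea behind such a package, the sketch below takes fixed-size steps with a low-order Adams-Bashforth predictor followed by an Adams-Moulton corrector (a PECE scheme). The fixed order and step size, and the Heun bootstrap, are simplifications for illustration; SIVA/DIVA itself is variable-order, variable-step FORTRAN 77 code.

```python
def adams_pece(f, t0, y0, h, n_steps):
    """Integrate y' = f(t, y) with a 2-step Adams-Bashforth/Adams-Moulton PECE scheme."""
    ts, ys = [t0], [y0]
    # Bootstrap one step with Heun's method to build the required history.
    f0 = f(t0, y0)
    ts.append(t0 + h)
    ys.append(y0 + h * 0.5 * (f0 + f(t0 + h, y0 + h * f0)))
    for _ in range(n_steps - 1):
        t, y = ts[-1], ys[-1]
        fn, fn_prev = f(t, y), f(ts[-2], ys[-2])
        y_pred = y + h * (1.5 * fn - 0.5 * fn_prev)                       # Adams-Bashforth predictor
        y_corr = y + h * (5 * f(t + h, y_pred) + 8 * fn - fn_prev) / 12.0 # Adams-Moulton corrector
        ts.append(t + h)
        ys.append(y_corr)
    return ts, ys

# y' = -y, y(0) = 1; the value at t = 1 should be close to exp(-1) ≈ 0.3679.
ts, ys = adams_pece(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
print(ys[-1])
```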

1,955 citations

Journal ArticleDOI
Il-Koo Kim, Min Jung-Hye, Tammy Lee, Woo-Jin Han, Jeong-Hoon Park
TL;DR: Technical details of the block partitioning structure of HEVC are introduced with an emphasis on the method of designing a consistent framework by combining the three different units, and experimental results are provided to justify the role of each component.
Abstract: High Efficiency Video Coding (HEVC) is the latest joint standardization effort of ITU-T WP 3/16 and ISO/IEC JTC 1/SC 29/WG 11. The resultant standard will be published as twin text by ITU-T and ISO/IEC; in the latter case, it will also be known as MPEG-H Part 2. This paper describes the block partitioning structure of the draft HEVC standard and presents the results of an analysis of coding efficiency and complexity. Of the many new technical aspects of HEVC, the block partitioning structure has been identified as representing one of the most significant changes relative to previous video coding standards. In contrast to the fixed size 16 × 16 macroblock structure of H.264/AVC, HEVC defines three different units according to their functionalities. The coding unit defines a region sharing the same prediction mode, e.g., intra and inter, and it is represented by the leaf node of a quadtree structure. The prediction unit defines a region sharing the same prediction information. The transform unit, specified by another quadtree, defines a region sharing the same transformation. This paper introduces technical details of the block partitioning structure of HEVC with an emphasis on the method of designing a consistent framework by combining the three different units together. Experimental results are provided to justify the role of each component of the block partitioning structure and a comparison with the H.264/AVC design is performed.
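Since the CU is described above as the leaf node of a quadtree, a decoder-side view of the same structure is a depth-first walk over split flags. The flat flag list and the function below are illustrative assumptions; the real bitstream syntax (contexts, ordering, constraints) is considerably more involved.

```python
def parse_cu_tree(flags, x=0, y=0, size=64, min_size=8):
    """Consume split flags from an iterator and yield leaf CUs as (x, y, size)."""
    if size > min_size and next(flags):
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                yield from parse_cu_tree(flags, x + dx, y + dy, half, min_size)
    else:
        yield (x, y, size)

# Split the 64x64 CTU once, then split only its first 32x32 quadrant again.
flags = iter([1, 1, 0, 0, 0, 0, 0, 0, 0])
print(list(parse_cu_tree(flags)))  # four 16x16 leaves followed by three 32x32 leaves
```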

433 citations

Journal ArticleDOI
Liquan Shen, Zhi Liu, Xinpeng Zhang, Wenqiang Zhao, Zhaoyang Zhang
TL;DR: A fast CU size decision algorithm for HM that can significantly reduce computational complexity while maintaining almost the same RD performance as the original HEVC encoder is proposed.
Abstract: The emerging high efficiency video coding standard (HEVC) adopts the quadtree-structured coding unit (CU). Each CU allows recursive splitting into four equal sub-CUs. At each depth level (CU size), the test model of HEVC (HM) performs motion estimation (ME) with different sizes including 2N × 2N, 2N × N, N × 2N and N × N. ME process in HM is performed using all the possible depth levels and prediction modes to find the one with the least rate distortion (RD) cost using Lagrange multiplier. This achieves the highest coding efficiency but requires a very high computational complexity. In this paper, we propose a fast CU size decision algorithm for HM. Since the optimal depth level is highly content-dependent, it is not efficient to use all levels. We can determine CU depth range (including the minimum depth level and the maximum depth level) and skip some specific depth levels rarely used in the previous frame and neighboring CUs. Besides, the proposed algorithm also introduces early termination methods based on motion homogeneity checking, RD cost checking and SKIP mode checking to skip ME on unnecessary CU sizes. Experimental results demonstrate that the proposed algorithm can significantly reduce computational complexity while maintaining almost the same RD performance as the original HEVC encoder.
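The early-termination idea described above can be sketched as a sequence of cheap checks that decide whether further splitting and motion estimation can be skipped. The predicate names and threshold values below are assumptions for illustration, not the paper's exact criteria.

```python
def should_terminate_early(best_mode, rd_cost, motion_variance,
                           rd_threshold=1000.0, motion_threshold=0.5):
    """Return True if splitting the current CU further can be skipped."""
    if best_mode == "SKIP":                 # SKIP-mode check
        return True
    if rd_cost < rd_threshold:              # RD-cost check
        return True
    if motion_variance < motion_threshold:  # motion-homogeneity check
        return True
    return False

print(should_terminate_early("INTER_2Nx2N", rd_cost=420.0, motion_variance=2.3))  # True: RD cost is low
print(should_terminate_early("INTER_NxN", rd_cost=5200.0, motion_variance=1.8))   # False: keep splitting
```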

406 citations

Journal ArticleDOI
TL;DR: This paper provides a technical overview of sample adaptive offset (SAO), a newly added in-loop filtering technique in High Efficiency Video Coding (HEVC) that reduces sample distortion by first classifying reconstructed samples into different categories, obtaining an offset for each category, and then adding the offset to each sample of the category.
Abstract: This paper provides a technical overview of a newly added in-loop filtering technique, sample adaptive offset (SAO), in High Efficiency Video Coding (HEVC). The key idea of SAO is to reduce sample distortion by first classifying reconstructed samples into different categories, obtaining an offset for each category, and then adding the offset to each sample of the category. The offset of each category is properly calculated at the encoder and explicitly signaled to the decoder for reducing sample distortion effectively, while the classification of each sample is performed at both the encoder and the decoder for saving side information significantly. To achieve low latency of only one coding tree unit (CTU), a CTU-based syntax design is specified to adapt SAO parameters for each CTU. A CTU-based optimization algorithm can be used to derive SAO parameters of each CTU, and the SAO parameters of the CTU are interleaved into the slice data. It is reported that SAO achieves on average 3.5% BD-rate reduction and up to 23.5% BD-rate reduction with less than 1% encoding time increase and about 2.5% decoding time increase under common test conditions of HEVC reference software version 8.0.

405 citations

Patent
25 Sep 2013
TL;DR: A method for pixel-wise joint filtering of depth maps from a plurality of viewing angles is described, which makes it possible to suppress the noise in the depth map data and provides improved performance for view synthesis.
Abstract: There is disclosed a method, an apparatus, a server, a client and a non-transitory computer readable medium comprising a computer program stored therein for video coding and decoding. Depth pictures from a plurality of viewing angles are projected into a single viewing angle, making it possible to apply pixel-wise joint filtering to all projected depth values. This approach makes it possible to suppress the noise in the depth map data and provides improved performance for view synthesis.
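The per-pixel joint filtering step described above can be sketched as follows: once several depth maps have been warped into one reference view (the warping itself is omitted here), each pixel's depth is replaced by the median of the values the different views project onto it. The median filter and the function name are illustrative choices for suppressing depth noise, not the patent's exact procedure.

```python
import numpy as np

def joint_filter_depth(projected_depths):
    """projected_depths: (num_views, H, W) array of depths warped to one reference view."""
    return np.median(projected_depths, axis=0)

views = np.stack([
    np.array([[1.0, 2.0], [3.0, 9.0]]),   # view A (noisy bottom-right pixel)
    np.array([[1.1, 2.0], [3.1, 4.0]]),   # view B
    np.array([[0.9, 2.1], [2.9, 4.1]]),   # view C
])
print(joint_filter_depth(views))  # the noisy 9.0 is suppressed to ~4.1
```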

354 citations