Author

Jingning Han

Bio: Jingning Han is an academic researcher from Google. The author has contributed to research on codecs and data compression, has an h-index of 15, and has co-authored 106 publications receiving 1,271 citations. Previous affiliations of Jingning Han include the University of California, Santa Barbara.


Papers
Proceedings ArticleDOI
24 Jun 2018
TL;DR: A brief technical overview of key coding techniques in AV1 is provided along with preliminary compression performance comparison against VP9 and HEVC.
Abstract: AV1 is an emerging open-source and royalty-free video compression format, which was jointly developed and finalized in early 2018 by the Alliance for Open Media (AOMedia) industry consortium. The main goal of AV1 development is to achieve substantial compression gain over state-of-the-art codecs while maintaining practical decoding complexity and hardware feasibility. This paper provides a brief technical overview of key coding techniques in AV1 along with preliminary compression performance comparison against VP9 and HEVC.

260 citations
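Compression comparisons of the kind mentioned above are conventionally reported as Bjøntegaard-delta (BD) rate: the average bitrate change at equal quality, obtained by fitting and integrating rate-distortion curves. A minimal sketch of that convention, assuming the usual cubic fit of log-bitrate against PSNR; the function name and test points are illustrative, not data from the paper:

```python
import numpy as np

def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):
    """Average percent bitrate change of the test codec vs. the reference
    at equal PSNR, via the usual cubic fit of log(rate) as a function of PSNR."""
    p_ref = np.polyfit(psnr_ref, np.log(rate_ref), 3)
    p_test = np.polyfit(psnr_test, np.log(rate_test), 3)
    # Integrate both fitted curves over the overlapping PSNR interval.
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0  # negative means bitrate savings
```

For instance, a test codec that needs 20% less bitrate at every quality point yields a BD rate of about -20%.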

Proceedings ArticleDOI
01 Dec 2013
TL;DR: A brief technical overview of VP9 is provided along with comparisons with other state-of-the-art video codecs H.264/AVC and HEVC on standard test sets, and results show VP9 to be quite competitive with mainstream state-of-the-art codecs.
Abstract: Google has recently finalized a next generation open-source video codec called VP9, as part of the libvpx repository of the WebM project (http://www.webmproject.org/). Starting from the VP8 video codec released by Google in 2010 as the baseline, various enhancements and new tools were added, resulting in the next-generation VP9 bit-stream. This paper provides a brief technical overview of VP9 along with comparisons with other state-of-the-art video codecs H.264/AVC and HEVC on standard test sets. Results show VP9 to be quite competitive with mainstream state-of-the-art codecs.

215 citations

Journal ArticleDOI
TL;DR: The proposed adaptive prediction and transform scheme is implemented within the H.264/AVC intra-mode framework and is experimentally shown to significantly outperform the standard intra coding mode and achieve substantial reduction in blocking artifacts.
Abstract: This paper proposes a novel approach to jointly optimize spatial prediction and the choice of the subsequent transform in video and image compression. Under the assumption of a separable first-order Gauss-Markov model for the image signal, it is shown that the optimal Karhunen-Loève transform, given available partial boundary information, is well approximated by a close relative of the discrete sine transform (DST), with basis vectors that tend to vanish at the known boundary and maximize energy at the unknown boundary. The overall intraframe coding scheme thus switches between this variant of the DST, named asymmetric DST (ADST), and the traditional discrete cosine transform (DCT), depending on prediction direction and boundary information. The ADST is first compared with the DCT in terms of coding gain under ideal model conditions and is demonstrated to provide significantly improved compression efficiency. The proposed adaptive prediction and transform scheme is then implemented within the H.264/AVC intra-mode framework and is experimentally shown to significantly outperform the standard intra coding mode. As an added benefit, it achieves a substantial reduction in blocking artifacts because the transform now adapts to the statistics of block edges. An integer version of this ADST is also proposed.

140 citations
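The ADST described in the entry above is a sine transform whose basis vectors vanish at the known (predicted) boundary and grow toward the unknown one. The following numerical sketch builds one standard DST-VII-style form of such a basis, sin(π(2k-1)j/(2N+1)), and checks those two properties; the block size N=8 is illustrative and not taken from the paper:

```python
import numpy as np

N = 8
j = np.arange(1, N + 1)                  # sample index; j = 1 is next to the known boundary
k = np.arange(1, N + 1).reshape(-1, 1)   # frequency index, one row per basis vector
# Sine basis whose vectors tend to vanish at the known boundary (j -> 0)
# and reach their largest first-basis values near the unknown boundary (j = N).
adst = 2.0 / np.sqrt(2 * N + 1) * np.sin(np.pi * (2 * k - 1) * j / (2 * N + 1))

assert np.allclose(adst @ adst.T, np.eye(N))  # the basis is orthonormal
assert adst[0, 0] < adst[0, -1]               # small at known, large at unknown boundary
```

A DCT-II basis, by contrast, has a constant first basis vector, which is why it suits residuals with no directional boundary information.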

Journal ArticleDOI
TL;DR: A brief technical overview of the coding tools included in VP9, along with coding performance comparisons with other state-of-the-art video codecs—namely, H.264/AVC and HEVC—on standard test sets are provided.
Abstract: Google has recently finalized a next-generation open-source video codec called VP9, as part of the libvpx repository of the WebM project (http://www.webmproject.org/). Starting from the VP8 video codec released by Google in 2010 as the baseline, various enhancements and new tools were added, resulting in the next-generation bit stream VP9. The bit stream was finalized with the exception of essential bug fixes in June 2013. Prior to the release, however, all technical developments were being conducted openly in the public experimental branch of the repository for many months. This paper provides a brief technical overview of the coding tools included in VP9, along with coding performance comparisons with other state-of-the-art video codecs—namely, H.264/AVC and HEVC—on standard test sets. While a completely fair comparison is impossible to conduct because of the limitations of the respective encoder implementations, the tests show VP9 to be quite competitive with mainstream state-of-the-art codecs.

120 citations

Proceedings ArticleDOI
14 Mar 2010
TL;DR: This paper proposes a new approach to combined spatial (intra) prediction and adaptive transform coding in block-based video and image compression; implemented within the H.264/AVC intra mode, it is shown in experiments to significantly outperform the standard intra modes and to achieve a significant reduction of the blocking effect.
Abstract: This paper proposes a new approach to combined spatial (intra) prediction and adaptive transform coding in block-based video and image compression. Context-adaptive spatial prediction from available, previously decoded boundaries of the block is followed by optimal transform coding of the prediction residual. The derivation of both the prediction and the adaptive transform for the prediction error assumes a separable first-order Gauss-Markov model for the image signal. The resulting optimal transform is shown to be a close relative of the sine transform, with phase and frequencies such that basis vectors tend to vanish at known boundaries and maximize energy at unknown boundaries. The overall scheme switches between the above sine-like transform and the discrete cosine transform (per direction, horizontal or vertical) depending on the prediction and boundary information. It is implemented within the H.264/AVC intra mode and is shown in experiments to significantly outperform the standard intra mode and achieve a significant reduction of the blocking effect.

117 citations
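The per-direction switching rule described in the entry above can be sketched as a small selector: use the sine-like transform along a direction whose boundary is known from prediction, and the DCT otherwise. The mode names and the exact mapping below are illustrative assumptions, not the paper's H.264/AVC integration:

```python
# Illustrative selector for a 2-D separable transform, following the rule
# described in the abstract: a sine-like transform (ADST) along a direction
# with a known prediction boundary, and the DCT otherwise. Mode names and
# the mapping are assumptions for illustration only.

def choose_transforms(pred_mode):
    """Return (vertical_tx, horizontal_tx) for the residual block."""
    if pred_mode == "vertical":      # top boundary known -> ADST vertically
        return ("ADST", "DCT")
    if pred_mode == "horizontal":    # left boundary known -> ADST horizontally
        return ("DCT", "ADST")
    if pred_mode == "diagonal":      # both boundaries used in prediction
        return ("ADST", "ADST")
    return ("DCT", "DCT")            # no directional boundary information
```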


Cited by
Journal ArticleDOI

1,008 citations

Journal ArticleDOI
TL;DR: This paper reviews the technical development of HAS, covering both open standardized solutions and proprietary solutions, as the basis for deriving the QoE influence factors that emerge as a result of adaptation.
Abstract: Changing network conditions pose severe problems to video streaming in the Internet. HTTP adaptive streaming (HAS) is a technology employed by numerous video services that relieves these issues by adapting the video to the current network conditions. It enables service providers to improve resource utilization and Quality of Experience (QoE) by incorporating information from different layers in order to deliver and adapt a video in its best possible quality. Thereby, it allows taking into account end-user device capabilities, available video quality levels, current network conditions, and current server load. For end users, the major benefits of HAS compared to classical HTTP video streaming are reduced interruptions of the video playback and higher bandwidth utilization, which both generally result in a higher QoE. Adaptation is possible by changing the frame rate, resolution, or quantization of the video, which can be done with various adaptation strategies and related client- and server-side actions. This paper reviews the technical development of HAS, covering both open standardized solutions and proprietary solutions, as the basis for deriving the QoE influence factors that emerge as a result of adaptation. The main contribution is a comprehensive survey of QoE-related works from the human-computer interaction and networking domains, structured according to the QoE impact of video adaptation. To be more precise, subjective studies that cover QoE aspects of adaptation dimensions and strategies are revisited. As a result, QoE influence factors of HAS and corresponding QoE models are identified, but open issues and conflicting results are also discussed. Furthermore, technical influence factors, which are often ignored in the context of HAS, affect perceptual QoE influence factors and are consequently analyzed. This survey gives the reader an overview of the current state of the art and recent developments. At the same time, it targets networking researchers who develop new solutions for HTTP video streaming or assess video streaming from a user-centric point of view. Therefore, this paper is a major step toward truly improving HAS.

746 citations
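A client-side adaptation strategy of the kind surveyed above can be as simple as a throughput-based rule: pick the highest available representation whose bitrate fits a safety-margined throughput estimate. The function name and the 0.8 margin below are illustrative assumptions, not a standardized HAS algorithm:

```python
# A minimal rate-based HAS adaptation heuristic (illustrative sketch).
# Real clients combine throughput estimates with buffer occupancy and
# other signals; the 0.8 safety margin here is an assumed parameter.

def select_representation(bitrates_kbps, throughput_kbps, margin=0.8):
    """Choose the highest bitrate not exceeding margin * estimated throughput;
    fall back to the lowest representation if none fits."""
    budget = margin * throughput_kbps
    feasible = [b for b in sorted(bitrates_kbps) if b <= budget]
    return feasible[-1] if feasible else min(bitrates_kbps)
```

With representations of 300, 750, 1500, and 3000 kbps and an estimated throughput of 2000 kbps, the budget is 1600 kbps, so the client selects the 1500 kbps stream.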

01 Jan 2012
TL;DR: The limitations of current technologies prompted the International Standards Organization/International Electrotechnical Commission Moving Picture Experts Group (MPEG) and the International Telecommunication Union-Telecommunication Standardization Sector Video Coding Experts Group (VCEG) to establish the JCT-VC, with the objective of developing a new high-performance video coding standard.
Abstract: Digital video has become ubiquitous in our everyday lives; everywhere we look, there are devices that can display, capture, and transmit video. Recent advances in technology have made it possible to capture and display video material with ultrahigh-definition (UHD) resolution. Yet the current Internet and broadcasting networks do not have sufficient capacity to transmit large amounts of HD content, let alone UHD. The need for an improved transmission system is even more pronounced in the mobile sector because of the introduction of lightweight HD resolutions (such as 720p) for mobile applications. The limitations of current technologies prompted the International Standards Organization/International Electrotechnical Commission Moving Picture Experts Group (MPEG) and the International Telecommunication Union-Telecommunication Standardization Sector Video Coding Experts Group (VCEG) to establish the Joint Collaborative Team on Video Coding (JCT-VC), with the objective of developing a new high-performance video coding standard.

281 citations

Proceedings ArticleDOI
24 Jun 2018
TL;DR: A brief technical overview of key coding techniques in AV1 is provided along with preliminary compression performance comparison against VP9 and HEVC.
Abstract: AV1 is an emerging open-source and royalty-free video compression format, which was jointly developed and finalized in early 2018 by the Alliance for Open Media (AOMedia) industry consortium. The main goal of AV1 development is to achieve substantial compression gain over state-of-the-art codecs while maintaining practical decoding complexity and hardware feasibility. This paper provides a brief technical overview of key coding techniques in AV1 along with preliminary compression performance comparison against VP9 and HEVC.

260 citations

Journal ArticleDOI
TL;DR: The limitations of current technologies prompted MPEG and VCEG to establish the Joint Collaborative Team on Video Coding (JCT-VC), with the objective of developing a new high-performance video coding standard.
Abstract: Digital video has become ubiquitous in our everyday lives; everywhere we look, there are devices that can display, capture, and transmit video. Recent advances in technology have made it possible to capture and display video material with ultrahigh-definition (UHD) resolution. Yet the current Internet and broadcasting networks do not have sufficient capacity to transmit large amounts of HD content, let alone UHD. The need for an improved transmission system is even more pronounced in the mobile sector because of the introduction of lightweight HD resolutions (such as 720p) for mobile applications. The limitations of current technologies prompted the International Standards Organization/International Electrotechnical Commission Moving Picture Experts Group (MPEG) and the International Telecommunication Union-Telecommunication Standardization Sector Video Coding Experts Group (VCEG) to establish the Joint Collaborative Team on Video Coding (JCT-VC), with the objective of developing a new high-performance video coding standard.

245 citations