Author

P. List

Bio: P. List is an academic researcher from Deutsche Telekom. The author has contributed to research in the topics Adaptive filter and Filter (signal processing), has an h-index of 1, and has co-authored 1 publication receiving 848 citations.

Papers
Journal ArticleDOI
P. List, A. Joch, Jani Lainema, G. Bjontegaard, Marta Karczewicz
TL;DR: The adaptive deblocking filter used in the H.264/MPEG-4 AVC video coding standard performs simple operations to detect and analyze artifacts on coded block boundaries and attenuates them by applying a selected filter.
Abstract: This paper describes the adaptive deblocking filter used in the H.264/MPEG-4 AVC video coding standard. The filter performs simple operations to detect and analyze artifacts on coded block boundaries and attenuates them by applying a selected filter.

884 citations
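
The filtering step summarized above is essentially a per-edge decision: compare sample differences across a block boundary against activity thresholds and smooth the boundary only when the discontinuity looks like a coding artifact rather than real image detail. The Python sketch below illustrates that idea in simplified form; the thresholds alpha and beta and the offset formula are illustrative stand-ins, not the exact QP-dependent tables and clipping rules of H.264/AVC.

def filter_block_edge(p1, p0, q0, q1, alpha, beta):
    # p1, p0 are the two samples to the left of the block boundary,
    # q0, q1 the two samples to the right (integer pixel values).
    # Filter only if the step across the edge is small enough to be a
    # quantization artifact; larger steps are assumed to be true content edges.
    if abs(p0 - q0) < alpha and abs(p1 - p0) < beta and abs(q1 - q0) < beta:
        # Illustrative smoothing offset (the standard additionally clips this
        # against a QP-dependent limit before applying it).
        delta = ((q0 - p0) * 4 + (p1 - q1) + 4) >> 3
        return p0 + delta, q0 - delta
    return p0, q0

# Example: filter_block_edge(100, 102, 88, 86, alpha=20, beta=10) returns
# (97, 93), pulling the 102/88 step at the boundary closer together.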


Cited by
Journal ArticleDOI
TL;DR: An overview of the technical features of H.264/AVC is provided, profiles and applications for the standard are described, and the history of the standardization process is outlined.
Abstract: H.264/AVC is the newest video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goals of the H.264/AVC standardization effort have been enhanced compression performance and provision of a "network-friendly" video representation addressing "conversational" (video telephony) and "nonconversational" (storage, broadcast, or streaming) applications. H.264/AVC has achieved a significant improvement in rate-distortion efficiency relative to existing standards. This article provides an overview of the technical features of H.264/AVC, describes profiles and applications for the standard, and outlines the history of the standardization process.

8,646 citations

Journal ArticleDOI
TL;DR: This paper provides an overview of the new tools, features and complexity of H.264/AVC.
Abstract: H.264/AVC, the result of the collaboration between the ISO/IEC Moving Picture Experts Group and the ITU-T Video Coding Experts Group, is the latest standard for video coding. The goals of this standardization effort were enhanced compression efficiency and a network-friendly video representation for interactive (video telephony) and non-interactive applications (broadcast, streaming, storage, video on demand). H.264/AVC provides gains in compression efficiency of up to 50% over a wide range of bit rates and video resolutions compared to previous standards, while its decoder complexity is about four times that of MPEG-2 and two times that of MPEG-4 Visual Simple Profile. This paper provides an overview of the new tools, features and complexity of H.264/AVC.

1,013 citations

Proceedings ArticleDOI
07 Dec 2015
TL;DR: A compact and efficient network for seamless attenuation of different compression artifacts is formulated and it is demonstrated that a deeper model can be effectively trained with the features learned in a shallow network.
Abstract: Lossy compression introduces complex compression artifacts, particularly blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restore sharpened images that are accompanied by ringing effects. Inspired by deep convolutional networks (DCN) for super-resolution, we formulate a compact and efficient network for seamless attenuation of different compression artifacts. We also demonstrate that a deeper model can be effectively trained with the features learned in a shallow network. Following a similar "easy to hard" idea, we systematically investigate several practical transfer settings and show the effectiveness of transfer learning in low-level vision problems. Our method shows superior performance over the state of the art on both benchmark datasets and real-world use cases (i.e. Twitter).

787 citations
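
The network described in this abstract is deliberately small: a few stacked convolutions that map a compressed, artifact-laden image patch to a restored one. Below is a minimal PyTorch sketch of that kind of compact artifact-reduction model; the stage labels, layer widths and kernel sizes are illustrative assumptions, not the exact configuration reported in the paper.

import torch
import torch.nn as nn

class ArtifactReductionCNN(nn.Module):
    # Four convolutional stages; the comments name the role each stage
    # plays in this sketch (feature extraction through reconstruction).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=7, padding=3),  # feature enhancement
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, kernel_size=1),             # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):
        # x: batch of single-channel (luminance) patches, shape (N, 1, H, W).
        return self.net(x)

# Example: restore a batch of 64x64 compressed patches.
# restored = ArtifactReductionCNN()(torch.randn(8, 1, 64, 64))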

Journal ArticleDOI
27 Jun 2005
TL;DR: This paper starts with an explanation of the basic concepts of video codec design and then explains how various features have been integrated into international standards, up to and including the most recent such standard, known as H.264/AVC.
Abstract: Over the last one and a half decades, digital video compression technologies have become an integral part of the way we create, communicate, and consume visual information. In this paper, techniques for video compression are reviewed, starting from basic concepts. The rate-distortion performance of modern video compression schemes is the result of an interaction between motion representation techniques, intra-picture prediction techniques, waveform coding of differences, and waveform coding of various refreshed regions. The paper starts with an explanation of the basic concepts of video codec design and then explains how these various features have been integrated into international standards, up to and including the most recent such standard, known as H.264/AVC.

681 citations
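
The interaction between motion representation, intra-picture prediction and waveform coding of differences that this abstract highlights is the classic hybrid coding loop: predict each block from already-coded data, then code only the prediction residual. The NumPy sketch below shows that loop schematically under heavy simplification (a plain scalar quantizer stands in for the transform, quantization and entropy-coding stages of a real codec, and the prediction is supplied by the caller).

import numpy as np

def encode_block(block, prediction, qstep):
    # Hybrid coding in miniature: subtract the prediction (motion-compensated
    # or intra), then quantize the residual. Real codecs transform the
    # residual and entropy-code the quantized coefficients.
    residual = block.astype(np.int32) - prediction.astype(np.int32)
    return np.round(residual / qstep).astype(np.int32)

def decode_block(levels, prediction, qstep):
    # Reconstruct by adding the dequantized residual back onto the same
    # prediction; the encoder keeps this reconstruction so encoder and
    # decoder predict from identical reference data.
    residual = levels * qstep
    return np.clip(prediction.astype(np.int32) + residual, 0, 255).astype(np.uint8)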

Posted Content
TL;DR: Inspired by deep convolutional networks (DCN) for super-resolution, the authors formulate a compact and efficient network for seamless attenuation of different compression artifacts, particularly blocking artifacts, ringing effects and blurring.
Abstract: Lossy compression introduces complex compression artifacts, particularly blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restore sharpened images that are accompanied by ringing effects. Inspired by deep convolutional networks (DCN) for super-resolution, we formulate a compact and efficient network for seamless attenuation of different compression artifacts. We also demonstrate that a deeper model can be effectively trained with the features learned in a shallow network. Following a similar "easy to hard" idea, we systematically investigate several practical transfer settings and show the effectiveness of transfer learning in low-level vision problems. Our method shows superior performance over the state of the art on both benchmark datasets and a real-world use case (i.e. Twitter). In addition, we show that our method can be applied as pre-processing to facilitate other low-level vision routines when they take compressed images as input.

415 citations