Author

Steffen Wittmann

Bio: Steffen Wittmann is an academic researcher from Panasonic. The author has contributed to research in topics: Adaptive filter & Filter (signal processing). The author has an h-index of 19 and has co-authored 63 publications receiving 1,141 citations.

Papers published on a yearly basis

Papers
Patent
Steffen Wittmann, Thomas Wedi
16 Feb 2012
TL;DR: In this paper, a video coding apparatus was presented consisting of a first orthogonal transformation unit performing discrete cosine transform on an input picture signal, a low-pass filter performing low-pass filtering on the input picture signal, and a downsampling unit downsampling the resolution of a low-frequency image signal.
Abstract: Provided is a video coding method and a video decoding method increasing the resolution and quality of images while suppressing an amount of data required for increasing the resolution. A video coding apparatus includes a first orthogonal transformation unit performing discrete cosine transform on an input picture signal, a low-pass filter performing low-pass filtering on the input picture signal, a downsampling unit downsampling the resolution of a low-frequency image signal, a coding unit compressing and coding a reduced image signal, a local decoding unit decoding a coded bit stream, a second orthogonal transformation unit performing discrete cosine transform on a decoded image signal, and a modification information generation unit generating, based on input image DCT coefficients and decoded image DCT coefficients, coefficient modification information used for modifying orthogonal transformation coefficients obtained by performing orthogonal transformation on a decoded video signal obtained from a coded bit stream.
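As a rough illustration of the pipeline described above, the sketch below strings together a DCT, a Gaussian low-pass filter, downsampling, and a per-coefficient modification map. The Gaussian kernel, the pixel-repetition stand-in for coding and local decoding, and the ratio-based modification rule are assumptions for illustration, not the patented scheme.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def encode_sketch(frame, factor=2):
    """Illustrative encoder path: DCT of the input, low-pass + downsample,
    then a per-coefficient modification map relating the two DCT domains."""
    frame = np.asarray(frame, dtype=float)

    # First orthogonal transformation: DCT of the input picture.
    input_dct = dctn(frame, norm="ortho")

    # Low-pass filter the input picture, then downsample its resolution.
    low = gaussian_filter(frame, sigma=1.0)
    reduced = low[::factor, ::factor]

    # Stand-in for compression coding + local decoding: pixel-repetition upsampling.
    decoded = np.kron(reduced, np.ones((factor, factor)))[:frame.shape[0], :frame.shape[1]]

    # Second orthogonal transformation: DCT of the locally decoded picture.
    decoded_dct = dctn(decoded, norm="ortho")

    # Coefficient modification information: gains moving the decoded-image
    # DCT coefficients toward the input-image DCT coefficients.
    modification = np.divide(input_dct, decoded_dct,
                             out=np.ones_like(input_dct),
                             where=np.abs(decoded_dct) > 1e-6)
    return reduced, modification

def decode_sketch(reduced, modification, factor=2):
    """Decoder side: modify the transform coefficients of the decoded picture."""
    h, w = modification.shape
    decoded = np.kron(reduced, np.ones((factor, factor)))[:h, :w]
    coeffs = dctn(decoded, norm="ortho") * modification
    return idctn(coeffs, norm="ortho")
```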

105 citations

Patent
Steffen Wittmann, Thomas Wedi
28 Dec 2007
TL;DR: In this article, a texture representation method is presented that avoids an unnatural appearance while achieving data compression equivalent to or better than conventional compression; the high-frequency component is not coded faithfully but is instead represented by synthesized texture.
Abstract: The present invention has been conceived to solve the previously described problems, and provides a texture representation method that avoids an unnatural appearance while achieving data compression equivalent to or better than conventional data compression. An input signal is separated into two frequency bands. The low-frequency component is faithfully coded by a conventional image/video coding apparatus. The high-frequency component is analyzed to compute representative texture parameters. Instead of faithfully coding the high-frequency component, only the computed texture parameters are stored or transmitted to a decoding apparatus. Then, the low-frequency component is reconstructed, whereas the high-frequency component is replaced by a natural texture that has been synthesized according to the texture parameters. The reconstructed low-frequency component and the synthesized high-frequency component are merged to generate an output signal.
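The band-splitting idea can be sketched in a few lines. The Gaussian split, the single standard-deviation texture parameter, and the noise-based synthesis below are placeholder choices, not the parametric texture model of the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bands(frame, sigma=2.0):
    """Separate the input signal into a low- and a high-frequency component."""
    low = gaussian_filter(np.asarray(frame, dtype=float), sigma=sigma)
    return low, frame - low

def analyze_texture(high):
    """Reduce the high band to a few representative texture parameters
    (placeholder model: just the standard deviation)."""
    return {"std": float(np.std(high))}

def synthesize_texture(shape, params, seed=0):
    """Synthesize a texture that matches the transmitted parameters."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, params["std"], size=shape)

def reconstruct(decoded_low, params):
    """Merge the decoded low band with the synthesized high-frequency texture."""
    return decoded_low + synthesize_texture(decoded_low.shape, params)
```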

66 citations

Patent
06 Aug 2010
TL;DR: In this article, an encoding method is described that includes a transform step (S110) for generating a transform output signal by way of transforming an input signal, a quantization step (S120) for quantizing the transform output signal, and an entropy encoding step (S130) for encoding the quantized coefficients.
Abstract: Disclosed is an encoding method including a transform step (S110) for generating a transform output signal by way of transforming an input signal; a quantization step (S120) for generating quantization coefficients by way of quantizing the transform output signal; and an entropy encoding step (S130) for generating an encoded signal by way of entropy encoding the quantized coefficients. The transform step (S110) includes a first transform step (S112) for generating a first transform output signal by way of performing a first transform on the input signal using a first transform coefficient; and a second transform step (S116) for generating a second transform output signal by way of performing a second transform using a second transform coefficient on a first partial signal which is a portion of the first transform output signal. In the entropy encoding step (S130), a difference between a predetermined value, and an element included in the second transform coefficient or a second inverse transform coefficient, is calculated, and by way of compression coding the calculated difference, the second transform coefficient or the second inverse transform coefficient is compression coded.
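A minimal sketch of the two-stage transform and the differential signalling of the second transform matrix follows, assuming a DCT as the first transform, a 4x4 low-frequency partial signal, and simple fixed-point residuals against a default matrix; none of these specifics come from the patent.

```python
import numpy as np
from scipy.fft import dctn

def two_stage_transform(block, second_T, k=4):
    """First transform the whole block, then apply a second transform to its
    low-frequency (top-left) portion, the 'first partial signal'."""
    first = dctn(np.asarray(block, dtype=float), norm="ortho")   # first transform
    partial = first[:k, :k]
    first[:k, :k] = second_T @ partial @ second_T.T              # second transform
    return first

def code_second_transform(second_T, default_T):
    """Signal the second transform matrix as differences from predetermined values
    (a default matrix); the small residuals would then be entropy coded."""
    return np.round((second_T - default_T) * 256).astype(int)

def decode_second_transform(diff, default_T):
    """Rebuild the second (inverse) transform matrix from the coded differences."""
    return default_T + diff.astype(float) / 256.0
```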

64 citations

Proceedings ArticleDOI
12 Nov 2007
TL;DR: To support high quality video applications, the Joint Video Team (JVT) has recently added five new profiles, two new supplemental enhancement information (SEI) messages, and two new extended gamut color space indicators to the MPEG4-AVC/H.264 video coding standard.
Abstract: To support high quality video applications, the Joint Video Team (JVT) has recently added five new profiles, two new supplemental enhancement information (SEI) messages, and two new extended gamut color space indicators to the MPEG4-AVC/H.264 video coding standard. The new profiles include substantial feature enhancements for high-quality video applications, including improved-efficiency 4:4:4 video format coding, improved-efficiency lossless macroblock coding, coding 4:4:4 video pictures using three separately-coded color planes, and support of bit depths up to 14 bits per sample. The new features were developed to support a wide range of applications where high quality video compression is demanded, including professional and semi-professional scenarios in particular. They also anticipate the introduction of higher fidelity displays. In this paper, the new extensions are presented along with quantitative estimates of the benefits of the new features and a discussion of the target application environments.

56 citations

Patent
Steffen Wittmann, Thomas Wedi
27 Mar 2006
TL;DR: In this article, an adapted set of filter parameters based on the disturbed image data and the corresponding original uncorrupted image data is determined in an encoder-decoder setup.
Abstract: The present invention determines an adapted set of filter parameters based on the disturbed image data and the corresponding original uncorrupted image data. Since the original image data is not available in a decoder, a conventional encoder can only employ adaptive filtering based on the corrupted image signal. The present invention determines an enhanced set of filter parameters based on an uncorrupted image signal in the encoder. In order to enable filtering of image data in the decoder, the determined filter parameters are transmitted from the encoder to the decoder in combination with the encoded image data. The decoder extracts the filter parameters from the encoded image data. In this way, the decoder applies an improved filtering even though the uncorrupted image data is not available at the decoder.
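The encoder-side estimation lends itself to a small sketch: a least-squares (Wiener-style) fit of a small FIR kernel mapping the decoded picture onto the original, with the resulting coefficients transmitted and applied at the decoder. The 3x3 tap size and the plain lstsq formulation are illustrative assumptions, not the claimed implementation.

```python
import numpy as np
from scipy.ndimage import correlate

def estimate_filter(decoded, original, taps=3):
    """Least-squares (Wiener-style) estimate of a small FIR kernel that maps the
    decoded (disturbed) picture as closely as possible onto the original."""
    decoded = np.asarray(decoded, dtype=float)
    original = np.asarray(original, dtype=float)
    r = taps // 2
    rows, targets = [], []
    for y in range(r, decoded.shape[0] - r):
        for x in range(r, decoded.shape[1] - r):
            rows.append(decoded[y - r:y + r + 1, x - r:x + r + 1].ravel())
            targets.append(original[y, x])
    coeffs, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return coeffs.reshape(taps, taps)   # filter parameters sent alongside the coded data

def apply_filter(decoded, coeffs):
    """Decoder side: apply the transmitted filter to the decoded picture."""
    return correlate(np.asarray(decoded, dtype=float), coeffs, mode="nearest")
```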

54 citations


Cited by
Journal ArticleDOI
TL;DR: An overview of the basic concepts for extending H.264/AVC towards SVC is provided, and the basic tools for providing temporal, spatial, and quality scalability are described in detail and experimentally analyzed regarding their efficiency and complexity.
Abstract: With the introduction of the H.264/AVC video coding standard, significant improvements have recently been demonstrated in video compression capability. The Joint Video Team of the ITU-T VCEG and the ISO/IEC MPEG has now also standardized a Scalable Video Coding (SVC) extension of the H.264/AVC standard. SVC enables the transmission and decoding of partial bit streams to provide video services with lower temporal or spatial resolutions or reduced fidelity while retaining a reconstruction quality that is high relative to the rate of the partial bit streams. Hence, SVC provides functionalities such as graceful degradation in lossy transmission environments as well as bit rate, format, and power adaptation. These functionalities provide enhancements to transmission and storage applications. SVC has achieved significant improvements in coding efficiency with an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. This paper provides an overview of the basic concepts for extending H.264/AVC towards SVC. Moreover, the basic tools for providing temporal, spatial, and quality scalability are described in detail and experimentally analyzed regarding their efficiency and complexity.
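Bit-stream extraction, the mechanism behind the partial bit streams mentioned above, can be sketched in a few lines, assuming each NAL unit carries its temporal and quality layer identifiers; the field names below are simplified stand-ins for the actual SVC NAL unit header syntax.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NalUnit:
    temporal_id: int    # temporal layer (0 = base layer)
    quality_id: int     # quality (fidelity) layer
    payload: bytes

def extract_substream(nal_units: List[NalUnit], max_tid: int, max_qid: int) -> List[NalUnit]:
    """Keep only NAL units at or below the requested temporal and quality layers;
    the remaining partial bit stream is still decodable at reduced frame rate / fidelity."""
    return [n for n in nal_units if n.temporal_id <= max_tid and n.quality_id <= max_qid]
```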

3,592 citations

Patent
30 Sep 2013
TL;DR: In this paper, an image processing device and a method that make it possible to suppress block noise is presented, which can be applied to an image processor and can be used to suppress noise in a filter process determination unit.
Abstract: The present disclosure pertains to an image-processing device and method that make it possible to suppress block noise. A βLUT_input calculator and a clip processor determine βLUT_input, which is a value inputted to an existing-β generator and an expanded-β generator. When the βLUT_input value from the clip processor is 51 or less, the existing-β generator determines β using a LUT conforming to the HEVC standard, and supplies it to a filter process determination unit. When the βLUT_input value from the clip processor is larger than 51, the expanded-β generator determines the expanded β and supplies it to the filter process determination unit. This disclosure, for example, can be applied to an image processing device.
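A sketch of the β derivation described above: clip the input, look up β in an existing table for values up to 51, and switch to an expanded β beyond that. The table values and the linear extrapolation below are placeholders for illustration, not the normative HEVC table or the disclosed expansion rule.

```python
# Placeholder values only -- not the normative HEVC beta table.
BETA_TABLE = [0] * 16 + list(range(6, 78, 2))   # indices 0..51

def derive_beta(beta_lut_input: int) -> int:
    """Clip the input, use the existing table up to 51, otherwise an expanded beta."""
    qp = max(0, beta_lut_input)                  # clip processor (lower bound only here)
    if qp <= 51:
        return BETA_TABLE[qp]                    # existing-beta generator
    return BETA_TABLE[51] + 2 * (qp - 51)        # expanded-beta generator (assumed rule)
```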

340 citations

Patent
25 May 2010
TL;DR: In this paper, an encoding device is presented that includes a color component separating unit for separating an input bit stream into the respective color components, a block dividing unit for dividing an input color component signal into blocks to generate a signal of an encoding unit area, a predicted image generating unit for generating a predicted image for the signal, a determining unit for determining the prediction mode used for encoding according to the prediction efficiency of the predicted image, and a prediction error encoding unit for encoding the difference between the predicted image corresponding to the determined prediction mode and the input color component signal.
Abstract: An encoding device includes a color component separating unit for separating an input bit stream for the respective color components, a block dividing unit for dividing an input color component signal into blocks to generate a signal of an encoding unit area, a predicted image generating unit for generating a predicted image for the signal, a determining unit for determining a prediction mode used for encoding according to a prediction efficiency of the predicted image, a prediction error encoding unit for encoding a difference between the predicted image corresponding to the prediction mode determined by the determining unit and the input color component signal, and an encoding unit for variable length-coding the prediction mode, an output from the prediction error encoding unit, and a color component identification flag indicating the color component to which the input bit stream belongs as a result of the color component separation.
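A toy sketch of the per-color-plane structure, assuming a simple list-of-chunks "bit stream" and an arbitrary single-plane encoder passed in as a callback; the flag name and chunk format are invented for illustration.

```python
import numpy as np

def encode_color_planes(picture, encode_plane):
    """Encode each color component as an independent plane, tagging every chunk
    with a color component identification flag."""
    stream = []
    for component_id in range(picture.shape[2]):          # e.g. Y, Cb, Cr or R, G, B
        payload = encode_plane(picture[:, :, component_id])
        stream.append({"color_component_flag": component_id, "payload": payload})
    return stream

def separate_components(stream):
    """Decoder side: separate the incoming chunks by their color component flag."""
    per_component = {}
    for chunk in stream:
        per_component.setdefault(chunk["color_component_flag"], []).append(chunk["payload"])
    return per_component
```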

335 citations

Patent
Min Jung-Hye, Woo-Jin Han, Il-Koo Kim
13 Aug 2010
Abstract: Disclosed are a method and an apparatus for encoding a video, and a method and an apparatus for decoding a video, in which neighboring pixels used to perform intra prediction on a current block to be encoded are filtered and intra prediction is performed by using the filtered neighboring pixels.
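A minimal sketch of the idea, assuming a [1 2 1]/4 smoothing of the neighboring reference row and column and a DC prediction mode; the actual filters and prediction modes are defined by the codec and are not reproduced here.

```python
import numpy as np

def filter_reference_pixels(ref):
    """Smooth the neighboring reference pixels with a [1 2 1]/4 kernel."""
    ref = np.asarray(ref, dtype=float)
    out = ref.copy()
    out[1:-1] = (ref[:-2] + 2.0 * ref[1:-1] + ref[2:]) / 4.0
    return out

def dc_intra_prediction(top, left, size):
    """Predict the current block from the (filtered) neighbors; DC mode for simplicity."""
    dc = (np.mean(top[:size]) + np.mean(left[:size])) / 2.0
    return np.full((size, size), dc)

# Usage: filter the neighboring pixels first, then predict from them.
# pred = dc_intra_prediction(filter_reference_pixels(top_row),
#                            filter_reference_pixels(left_col), size=8)
```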

327 citations