
K. Stuhlmuller

Researcher at University of Erlangen-Nuremberg

Publications: 12
Citations: 1434

K. Stuhlmuller is an academic researcher at the University of Erlangen-Nuremberg. The author has contributed to research on topics including motion compensation and video encoding, has an h-index of 8, and has co-authored 12 publications receiving 1423 citations.

Papers
Journal Article

Analysis of video transmission over lossy channels

TL;DR: The main focus of this paper is to show the accuracy of the derived analytical model and its applicability to the analysis and optimization of an entire video transmission system.
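The model referred to here splits received-picture distortion into an encoding term and a loss-induced term. The sketch below illustrates that kind of decomposition; the functional forms and parameter names (d0, theta, r0, kappa) are illustrative assumptions, not the paper's derived equations.

```python
import numpy as np

def expected_distortion(rate_kbps, loss_prob, d0=1.0, theta=900.0, r0=20.0, kappa=60.0):
    """Illustrative distortion model (assumed forms, not the paper's equations).

    Encoder distortion falls with bitrate; loss-induced distortion grows with
    the packet loss probability. Total expected MSE is their sum.
    """
    d_enc = d0 + theta / (rate_kbps - r0)      # quantization distortion
    d_loss = kappa * loss_prob                 # error-propagation distortion
    return d_enc + d_loss

# Sweep the operating point: quality vs. bitrate for a few loss rates.
for p in (0.0, 0.05, 0.10):
    mse = expected_distortion(np.array([64.0, 128.0, 256.0]), p)
    psnr = 10 * np.log10(255**2 / mse)
    print(f"loss={p:.2f}  PSNR at 64/128/256 kbps:", np.round(psnr, 1))
```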
Journal Article

Robust Internet video transmission based on scalable coding and unequal error protection

TL;DR: A theoretical framework is derived by which Internet packet loss behavior can be directly related to the picture quality perceived at the receiver, and it is demonstrated how this framework can be used to select appropriate parameter values for the overall system design.
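As a rough illustration of unequal error protection for layered video, the sketch below brute-forces a parity-packet allocation across three hypothetical scalable layers so that expected distortion under independent packet loss is minimized; all layer sizes, gains, and the loss model are assumptions, not values from the paper.

```python
from itertools import product
from math import comb

# Hypothetical scalable layers: (source packets per layer, distortion reduction).
LAYERS = [(4, 60.0), (4, 25.0), (4, 10.0)]   # base, enhancement 1, enhancement 2
D_MAX = 100.0                                 # distortion if nothing is decoded
PARITY_BUDGET = 6                             # total parity packets available
P_LOSS = 0.1                                  # independent packet loss probability

def p_fail(k, parity, p):
    """P(layer undecodable): fewer than k of n = k + parity packets arrive."""
    n = k + parity
    return sum(comb(n, r) * (1 - p)**r * p**(n - r) for r in range(k))

def expected_distortion(alloc):
    """A layer only helps if it and all lower layers are decoded."""
    d, p_all_lower_ok = D_MAX, 1.0
    for (k, gain), parity in zip(LAYERS, alloc):
        p_all_lower_ok *= 1.0 - p_fail(k, parity, P_LOSS)
        d -= gain * p_all_lower_ok
    return d

# Brute-force search over parity allocations within the budget (UEP).
best = min((a for a in product(range(PARITY_BUDGET + 1), repeat=len(LAYERS))
            if sum(a) <= PARITY_BUDGET), key=expected_distortion)
print("best parity allocation (base, enh1, enh2):", best,
      " expected distortion:", round(expected_distortion(best), 2))
```

With these assumed numbers the search gives the base layer the strongest protection, which is the qualitative point of unequal error protection.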
Journal Article

Error-resilient video transmission using long-term memory motion-compensated prediction

TL;DR: This paper presents a framework that incorporates an estimate of the expected transmission error into rate-constrained motion estimation and mode decision, and shows that long-term memory prediction significantly outperforms an H.263-based single-frame prediction anchor.
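The idea can be sketched as a reference-selection rule whose Lagrangian cost includes an estimate of the error already present in each candidate reference at the decoder. The snippet below is a toy illustration under assumed names and weights, not the paper's actual cost function.

```python
import numpy as np

def select_reference(block, references, ref_error_var, lam=0.85, bits_per_ref=None):
    """Pick the reference for one block by minimizing an illustrative cost
    J = SSD + expected propagated error + lambda * rate.

    `ref_error_var` is an assumed estimate of the decoder-side error energy
    already present in each reference; an older long-term reference can be
    'safer' than the most recent frame after a loss.
    """
    if bits_per_ref is None:
        bits_per_ref = np.ones(len(references))
    costs = []
    for ref, err_var, bits in zip(references, ref_error_var, bits_per_ref):
        ssd = float(np.sum((block - ref) ** 2))       # prediction error energy
        j = ssd + block.size * err_var + lam * bits   # add expected drift + rate
        costs.append(j)
    return int(np.argmin(costs))

# Toy usage: the newest reference predicts best but is likely corrupted.
rng = np.random.default_rng(0)
block = rng.normal(size=(8, 8))
refs = [block + rng.normal(scale=s, size=(8, 8)) for s in (0.1, 0.3, 0.5)]
err = [4.0, 0.0, 0.0]   # assumed decoder error variance per reference
print("chosen reference index:", select_reference(block, refs, err))
```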
Proceedings Article

Analysis of error propagation in hybrid video coding with application to error resilience

TL;DR: A theoretical analysis of the overall mean squared error (MSE) in hybrid video coding is presented; for a given packet loss probability, the optimal trade-off between INTRA and INTER coding can be determined by minimizing the expected MSE at the decoder.
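A hedged sketch of that trade-off: assume encoding distortion grows with the INTRA rate (at a fixed bitrate) while propagated loss distortion shrinks with it, then search for the INTRA rate that minimizes the expected decoder MSE. The functional forms and constants below are illustrative assumptions only.

```python
import numpy as np

def expected_decoder_mse(beta, p_loss, d_intra=40.0, d_inter=15.0, leak=0.9):
    """Illustrative model (assumed forms): more INTRA coding (beta in (0, 1])
    raises encoding distortion at a fixed bitrate but stops error propagation
    faster; `leak` stands in for the attenuation of propagated errors.
    """
    d_enc = beta * d_intra + (1.0 - beta) * d_inter
    # Propagated loss error decays with INTRA refresh and leakage.
    d_loss = 100.0 * p_loss / (1.0 - leak * (1.0 - beta))
    return d_enc + d_loss

# For each channel condition, pick the INTRA rate that minimizes expected MSE.
betas = np.linspace(0.01, 1.0, 100)
for p in (0.0, 0.03, 0.10, 0.20):
    best = betas[np.argmin(expected_decoder_mse(betas, p))]
    print(f"packet loss {p:.2f} -> optimal INTRA rate beta ~ {best:.2f}")
```

As the assumed loss probability rises, the minimizing INTRA rate rises with it, which is the qualitative behavior the paper's analysis is used to quantify.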
Proceedings Article

A content-dependent fast DCT for low bit-rate video coding

TL;DR: A new content-dependent fast discrete cosine transform (DCT) algorithm is introduced that requires less than half of the samples of an 8×8 block as input and produces only the first three DCT coefficients, making it far less complex than conventional full fast DCT algorithms.
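For orientation, the sketch below computes just those first three coefficients of an orthonormal 8×8 DCT-II from row and column sums, which is already much cheaper than a full transform; it does not reproduce the paper's content-dependent subsampling of the input block.

```python
import numpy as np

def first_three_dct_coeffs(block):
    """Compute only the DC and the two lowest-frequency AC coefficients of an
    8x8 DCT-II (orthonormal/JPEG-style normalization). Because the u=0 / v=0
    basis functions are constant along one axis, these coefficients need only
    the 8 row sums and 8 column sums rather than a full 2-D transform.
    """
    block = np.asarray(block, dtype=float)
    row_sums = block.sum(axis=1)              # one value per row
    col_sums = block.sum(axis=0)              # one value per column
    n = np.arange(8)
    cos1 = np.cos((2 * n + 1) * np.pi / 16)   # first AC basis vector
    inv_sqrt2 = 1.0 / np.sqrt(2.0)

    dc   = 0.25 * inv_sqrt2 * inv_sqrt2 * block.sum()     # X[0, 0]
    ac01 = 0.25 * inv_sqrt2 * np.dot(col_sums, cos1)      # X[0, 1]
    ac10 = 0.25 * inv_sqrt2 * np.dot(row_sums, cos1)      # X[1, 0]
    return dc, ac01, ac10

# Example: first three coefficients of a simple ramp block.
block = np.arange(64, dtype=float).reshape(8, 8)
print(np.round(first_three_dct_coeffs(block), 3))
```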