Topic

Upsampling

About: Upsampling is a research topic. Over its lifetime, 2426 publications have been published within this topic, receiving 57613 citations.


Papers
Patent
Lars Risbo
04 Mar 2009
TL;DR: In this patent, different algorithms are applied for the upsampling and downsampling cases, and the FIR coefficients of the fractional delay FIR filter are calculated by evaluating polynomial expressions over intervals of the filter impulse response, at times corresponding to the input sample points.
Abstract: Asynchronous sample rate conversion for use in a digital audio receiver is disclosed. Different algorithms are applied for the upsampling and downsampling cases. In the upsampling case, the input signal is upsampled and filtered, before the application of a finite impulse response (FIR) filter. In the downsampling case, the input signal is filtered by an FIR filter, and then filtered and downsampled. The FIR coefficients of the fractional delay FIR filter are calculated by evaluation of polynomial expressions over intervals of the filter impulse response, at times corresponding to the input sample points.
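As a rough illustration of the fractional-delay filtering idea described in the abstract, the sketch below evaluates per-tap polynomials at the fractional sample position, Farrow-style. The cubic Lagrange polynomial table is a generic stand-in chosen purely for illustration; the patent's actual impulse-response polynomials, filter length, and interval layout are not reproduced here.

```python
import numpy as np

# Hypothetical piecewise-polynomial model of a fractional-delay FIR filter:
# each tap k has its own polynomial in the fractional position mu,
# h_k(mu) = sum_j POLY[k, j] * mu**j.  This table is a cubic Lagrange
# interpolator used as an illustrative placeholder, not the patent's design.
POLY = np.array([                 # columns: [1, mu, mu^2, mu^3]
    [0.0, -1/3,  1/2, -1/6],      # tap applied to x[n-1]
    [1.0, -1/2, -1.0,  1/2],      # tap applied to x[n]
    [0.0,  1.0,  1/2, -1/2],      # tap applied to x[n+1]
    [0.0, -1/6,  0.0,  1/6],      # tap applied to x[n+2]
])

def fractional_delay_taps(mu: float) -> np.ndarray:
    """Evaluate each tap polynomial at fractional position mu in [0, 1)."""
    powers = mu ** np.arange(POLY.shape[1])   # [1, mu, mu^2, mu^3]
    return POLY @ powers

def resample_at(x: np.ndarray, t: float) -> float:
    """Interpolate signal x at continuous time t (in input-sample units)."""
    n = int(np.floor(t))
    mu = t - n
    taps = fractional_delay_taps(mu)
    window = x[n - 1 : n + 3]                 # x[n-1], x[n], x[n+1], x[n+2]
    return float(taps @ window)

# Example: read a sine wave at an asynchronous output/input rate ratio.
x = np.sin(2 * np.pi * 0.01 * np.arange(200))
ratio = 44100 / 48000
y = [resample_at(x, 10 + k * ratio) for k in range(50)]
```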

106 citations

Proceedings ArticleDOI
25 Jul 2011
TL;DR: A method for learning linear upsampling operators for physically-based cloth simulation is proposed, enriching coarse meshes with mid-scale details within the minimal time and memory budgets required in computer games.
Abstract: We propose a method for learning linear upsampling operators for physically-based cloth simulation, allowing us to enrich coarse meshes with mid-scale details in minimal time and memory budgets, as required in computer games. In contrast to classical subdivision schemes, our operators adapt to a specific context (e.g. a flag flapping in the wind or a skirt worn by a character), which allows them to achieve higher detail. Our method starts by pre-computing a pair of coarse and fine training simulations aligned with tracking constraints using harmonic test functions. Next, we train the upsampling operators with a new regularization method that enables us to learn mid-scale details without overfitting. We demonstrate generalizability to unseen conditions such as different wind velocities or novel character motions. Finally, we discuss how to re-introduce high frequency details not explainable by the coarse mesh alone using oscillatory modes.
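As a rough illustration of a learned linear upsampling operator, the sketch below fits a single dense operator from paired coarse/fine training frames with plain Tikhonov (ridge) regularization. The shapes, synthetic data, and regularizer are assumptions made for illustration; the paper's actual pipeline (tracking constraints, harmonic test functions, localized operators) is not reproduced here.

```python
import numpy as np

# Fit a linear upsampler U that maps coarse vertex data to fine vertex data,
# regularized to discourage overfitting to the training frames.
rng = np.random.default_rng(0)

n_coarse, n_fine, n_frames = 50, 200, 300
X_coarse = rng.standard_normal((n_coarse, n_frames))   # coarse DOFs per frame
U_true = rng.standard_normal((n_fine, n_coarse))       # synthetic "ground truth"
X_fine = U_true @ X_coarse + 0.01 * rng.standard_normal((n_fine, n_frames))

def fit_upsampler(Xc, Xf, lam=1e-2):
    """Solve min_U ||U Xc - Xf||_F^2 + lam * ||U||_F^2 in closed form."""
    A = Xc @ Xc.T + lam * np.eye(Xc.shape[0])
    return (Xf @ Xc.T) @ np.linalg.inv(A)

U = fit_upsampler(X_coarse, X_fine)

# At runtime a new coarse frame is enriched by a single matrix multiply.
new_coarse = rng.standard_normal(n_coarse)
detailed = U @ new_coarse
```

In practice one would learn sparse, localized per-vertex operators rather than one dense matrix; the dense form is used here only to keep the sketch short.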

105 citations

Patent
Steffen Wittmann, Thomas Wedi
16 Feb 2012
TL;DR: In this patent, a video coding apparatus is presented consisting of a first orthogonal transformation unit performing discrete cosine transform on an input picture signal, a low-pass filter performing low-pass filtering on the input picture signal, and a downsampling unit downsampling the resolution of a low-frequency image signal.
Abstract: Provided are a video coding method and a video decoding method that increase the resolution and quality of images while suppressing the amount of data required for increasing the resolution. A video coding apparatus includes a first orthogonal transformation unit performing discrete cosine transform on an input picture signal, a low-pass filter performing low-pass filtering on the input picture signal, a downsampling unit downsampling the resolution of a low-frequency image signal, a coding unit compressing and coding a reduced image signal, a local decoding unit decoding a coded bit stream, a second orthogonal transformation unit performing discrete cosine transform on a decoded image signal, and a modification information generation unit generating, based on input image DCT coefficients and decoded image DCT coefficients, coefficient modification information used for modifying orthogonal transformation coefficients obtained by performing orthogonal transformation on a decoded video signal obtained from a coded bit stream.
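The pre-processing path described in the abstract (low-pass filtering followed by resolution reduction) can be sketched as below; the separable binomial kernel and the 2x decimation factor are illustrative choices, not taken from the patent.

```python
import numpy as np

# Low-pass filter a picture, then downsample it to a reduced resolution.
def lowpass_and_downsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    # 1-D binomial (approximately Gaussian) low-pass kernel.
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    k /= k.sum()

    def filt_1d(a, axis):
        pad = [(0, 0)] * a.ndim
        pad[axis] = (2, 2)
        a = np.pad(a, pad, mode="edge")
        return np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="valid"), axis, a
        )

    smooth = filt_1d(filt_1d(img.astype(float), 0), 1)   # separable filtering
    return smooth[::factor, ::factor]                     # keep every factor-th sample

# Example: a synthetic 64x64 gradient image reduced to 32x32.
img = np.add.outer(np.arange(64), np.arange(64)).astype(float)
small = lowpass_and_downsample(img)
print(small.shape)   # (32, 32)
```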

105 citations

Journal ArticleDOI
TL;DR: A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views, in which the reconstruction is efficiently modeled as angular restoration on an epipolar plane image (EPI).
Abstract: In this paper, a novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views. We indicate that the reconstruction can be efficiently modeled as angular restoration on an epipolar plane image (EPI). The main problem in direct reconstruction on the EPI involves an information asymmetry between the spatial and angular dimensions, where the detailed portion in the angular dimensions is damaged by undersampling. Directly upsampling or super-resolving the light field in the angular dimensions causes ghosting effects. To suppress these ghosting effects, we contribute a novel “blur-restoration-deblur” framework. First, the “blur” step is applied to extract the low-frequency components of the light field in the spatial dimensions by convolving each EPI slice with a selected blur kernel. Then, the “restoration” step is implemented by a CNN, which is trained to restore the angular details of the EPI. Finally, we use a non-blind “deblur” operation to recover the spatial high frequencies suppressed by the EPI blur. We evaluate our approach on several datasets, including synthetic scenes, real-world scenes and challenging microscope light field data. We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms. We further show extended applications, including depth enhancement and interpolation for unstructured input. More importantly, a novel rendering approach is presented by combining the proposed framework and depth information to handle large disparities.
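The "blur" and non-blind "deblur" steps can be sketched on a toy EPI as below, with rows indexing angular samples and columns indexing a spatial dimension. The Gaussian kernel, the Wiener-style inverse, and the placeholder standing in for the CNN restoration step are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def make_otf(kernel, n):
    """Zero-pad a centred 1-D kernel to length n and shift its centre to index 0."""
    r = kernel.size // 2
    kpad = np.zeros(n)
    kpad[: kernel.size] = kernel
    return np.fft.fft(np.roll(kpad, -r))

def blur_epi(epi, kernel):
    """Circularly convolve each row along the spatial axis (the 'blur' step)."""
    otf = make_otf(kernel, epi.shape[1])
    return np.real(np.fft.ifft(np.fft.fft(epi, axis=1) * otf, axis=1))

def deblur_epi(epi_blurred, kernel, eps=1e-3):
    """Non-blind Wiener-style deblur matching the circular blur above."""
    otf = make_otf(kernel, epi_blurred.shape[1])
    inv = np.conj(otf) / (np.abs(otf) ** 2 + eps)
    return np.real(np.fft.ifft(np.fft.fft(epi_blurred, axis=1) * inv, axis=1))

# Example: blur a toy 9-view EPI, pretend a CNN has restored angular detail on
# the blurred EPI, then undo the spatial blur.
epi = np.sin(np.linspace(0, 8 * np.pi, 256))[None, :] * np.ones((9, 1))
kernel = gaussian_kernel(sigma=2.0, radius=6)
epi_lo = blur_epi(epi, kernel)
epi_restored = epi_lo                      # placeholder for the CNN restoration
epi_out = deblur_epi(epi_restored, kernel)
```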

104 citations

Journal ArticleDOI
TL;DR: The requirements of image CR are translated into operable optimization targets for training CNN-CR: the visual quality of the compact-resolved image is ensured by constraining its difference from a naively downsampled version, and the information loss of image CR is measured by upsampling/super-resolving the compact-resolved image and comparing the result to the original image.
Abstract: We study the dual problem of image super-resolution (SR), which we term image compact-resolution (CR). Opposite to image SR that hallucinates a visually plausible high-resolution image given a low-resolution input, image CR provides a low-resolution version of a high-resolution image, such that the low-resolution version is both visually pleasing and as informative as possible compared to the high-resolution image. We propose a convolutional neural network (CNN) for image CR, namely, CNN-CR, inspired by the great success of CNN for image SR. Specifically, we translate the requirements of image CR into operable optimization targets for training CNN-CR: the visual quality of the compact-resolved image is ensured by constraining its difference from a naively downsampled version, and the information loss of image CR is measured by upsampling/super-resolving the compact-resolved image and comparing that to the original image. Accordingly, CNN-CR can be trained either separately or jointly with a CNN for image SR. We explore different training strategies as well as different network structures for CNN-CR. Our experimental results show that the proposed CNN-CR clearly outperforms simple bicubic downsampling and achieves an average 2.25 dB improvement in terms of the reconstruction quality on a large collection of natural images. We further investigate two applications of image CR, i.e., low-bit-rate image compression and image retargeting. Experimental results show that the proposed CNN-CR helps achieve significant bit savings over High Efficiency Video Coding when applied to image compression and produces visually pleasing results when applied to image retargeting.
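The two training targets described above can be combined into a single loss. In the sketch below, average pooling stands in for the naive downsampling, nearest-neighbour repetition stands in for the SR network, and the weight lam is an assumed hyper-parameter; none of these are taken from the paper.

```python
import numpy as np

# Combined loss: keep the compact-resolved image close to a naive downsample
# (visual quality) and keep its super-resolved reconstruction close to the
# original (information preservation).
def naive_downsample(x, f=2):
    h, w = x.shape
    return x[: h - h % f, : w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def toy_sr(y, f=2):
    # Nearest-neighbour upsampling as a placeholder for the SR network.
    return np.repeat(np.repeat(y, f, axis=0), f, axis=1)

def cnn_cr_loss(x, compact, lam=1.0, f=2):
    quality_term = np.mean((compact - naive_downsample(x, f)) ** 2)
    info_term = np.mean((toy_sr(compact, f) - x) ** 2)
    return quality_term + lam * info_term

# Example on a random image with a trivial candidate compact-resolved output.
x = np.random.default_rng(0).random((64, 64))
compact = naive_downsample(x)
print(cnn_cr_loss(x, compact))
```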

104 citations


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 90% related
Image segmentation: 79.6K papers, 1.8M citations, 90% related
Feature extraction: 111.8K papers, 2.1M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 88% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    469
2022    859
2021    330
2020    322
2019    298
2018    236