Topic

Upsampling

About: Upsampling is a research topic. Over the lifetime of the topic, 2426 publications have been published, receiving 57613 citations.


Papers
Proceedings ArticleDOI
14 Jun 2020
TL;DR: Experimental results show that the method outperforms the conventional perceptual loss, and the method achieved second and first place in the LPIPS and PI measures, respectively, in the NTIRE 2020 perceptual extreme SR challenge.
Abstract: The performance of image super-resolution (SR) has been greatly improved by using convolutional neural networks. Most previous SR methods have been studied up to ×4 upsampling, and few have addressed ×16 upsampling. The general approach for perceptual ×4 SR is using a GAN with a VGG-based perceptual loss; however, we found that it creates inconsistent details for perceptual ×16 SR. To this end, we investigated loss functions and propose to use a GAN with LPIPS [23] loss for perceptual extreme SR. In addition, we use a U-Net structure discriminator [14] to consider both the global and local context of an input image. Experimental results show that our method outperforms the conventional perceptual loss, and we achieved second and first place in the LPIPS and PI measures, respectively, in the NTIRE 2020 perceptual extreme SR challenge.

48 citations
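
A minimal sketch of the loss described above, assuming PyTorch and the open-source lpips package (richzhang/PerceptualSimilarity): the generator objective combines the LPIPS distance with an adversarial term from a discriminator. The loss weight and input conventions are illustrative assumptions, not the authors' exact settings.

import torch
import lpips  # pip install lpips

lpips_fn = lpips.LPIPS(net='vgg')  # LPIPS distance computed on VGG features

def generator_loss(sr, hr, disc_fake_logits, adv_weight=0.005):
    """sr, hr: (N, 3, H, W) tensors scaled to [-1, 1]; disc_fake_logits: discriminator output on sr."""
    perceptual = lpips_fn(sr, hr).mean()  # LPIPS replaces the usual VGG perceptual loss
    adversarial = torch.nn.functional.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))  # non-saturating GAN term
    return perceptual + adv_weight * adversarial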

Posted Content
18 Jul 2017
TL;DR: In this article, the authors present an extensive comparison of a variety of decoders for pixel-wise prediction tasks and identify two decoder types that give consistently high performance.
Abstract: Many machine vision applications require predictions for every pixel of the input image (for example, semantic segmentation and boundary detection). Models for such problems usually consist of encoders, which decrease spatial resolution while learning a high-dimensional representation, followed by decoders, which recover the original input resolution and produce low-dimensional predictions. While encoders have been studied rigorously, relatively few studies address the decoder side. This paper therefore presents an extensive comparison of a variety of decoders for a variety of pixel-wise prediction tasks. Our contributions are: (1) Decoders matter: we observe significant variance in results between different types of decoders on various problems. (2) We introduce a novel decoder: bilinear additive upsampling. (3) We introduce new residual-like connections for decoders. (4) We identify two decoder types which give consistently high performance.

48 citations
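
A minimal sketch of the bilinear additive upsampling operation named above, assuming PyTorch: the feature map is upsampled bilinearly and every group of consecutive channels is averaged, so the channel count drops without adding parameters. The group size of 4 and the example shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def bilinear_additive_upsample(x, scale=2, group=4):
    """x: (N, C, H, W) with C divisible by group; returns (N, C // group, H * scale, W * scale)."""
    n, c, h, w = x.shape
    x = F.interpolate(x, scale_factor=scale, mode='bilinear', align_corners=False)
    # Average each block of `group` consecutive channels (parameter-free channel reduction).
    return x.view(n, c // group, group, h * scale, w * scale).mean(dim=2)

# Example: a 64-channel 8x8 map becomes a 16-channel 16x16 map.
print(bilinear_additive_upsample(torch.randn(1, 64, 8, 8)).shape)  # torch.Size([1, 16, 16, 16])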

Patent
19 Feb 1999
TL;DR: In this article, a combination of a finite impulse response (FIR) filter having non-linear phase characteristics and a cyclic convolution filter is used to limit the effects of spurious frequency domain components caused by transitions between successive OFDM bursts.
Abstract: Systems and methods for converting a baseband OFDM signal to an IF signal while minimizing lengthening of the impulse response duration experienced by the OFDM signal. A conversion technique according to the present invention provides sufficient filtering to limit the effects of spurious frequency domain components caused by transitions between successive OFDM bursts. In one embodiment, the filtering is provided by a combination of a finite impulse response (FIR) filter having non-linear phase characteristics and a cyclic convolution filter. Conversion from the frequency domain into the time domain, upsampling, and cyclic filtering may be combined into one operation.

48 citations
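
A minimal sketch of the combined operation mentioned at the end of the abstract, assuming NumPy: upsampling is performed by zero-padding the OFDM symbol in the frequency domain, the cyclic convolution filter is applied as a per-bin multiplication, and a single inverse FFT returns the time-domain samples. Sizes, the example filter, and the amplitude scaling are illustrative assumptions.

import numpy as np

def ofdm_upsample_and_cyclic_filter(freq_symbols, upsample=4, cyclic_filter_td=None):
    """freq_symbols: length-N array of subcarrier values; returns N * upsample time-domain samples."""
    n = len(freq_symbols)
    m = n * upsample
    # Frequency-domain zero-padding: keep the N subcarriers around DC of an M-point spectrum.
    padded = np.zeros(m, dtype=complex)
    padded[:n // 2] = freq_symbols[:n // 2]
    padded[-(n // 2):] = freq_symbols[n // 2:]
    # Circular (cyclic) convolution in time equals multiplication in frequency.
    if cyclic_filter_td is not None:
        padded *= np.fft.fft(cyclic_filter_td, m)
    # One inverse FFT performs domain conversion, interpolation, and cyclic filtering at once.
    return np.fft.ifft(padded) * upsample  # scale compensates for the longer IFFT

# Example: 64 QPSK subcarriers, upsampled 4x, smoothed with a short window filter.
time_signal = ofdm_upsample_and_cyclic_filter(
    np.random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 64),
    cyclic_filter_td=np.hanning(8))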

Posted Content
TL;DR: In this article, a method for accurate and efficient upsampling of sparse depth data, guided by high-resolution imagery, is presented. The approach goes beyond the use of intensity cues only and additionally exploits object boundary cues through structured edge detection and semantic scene labeling for guidance.
Abstract: We present a novel method for accurate and efficient upsampling of sparse depth data, guided by high-resolution imagery. Our approach goes beyond the use of intensity cues only and additionally exploits object boundary cues through structured edge detection and semantic scene labeling for guidance. Both cues are combined within a geodesic distance measure that allows for boundary-preserving depth interpolation while utilizing local context. We model the observed scene structure by locally planar elements and formulate the upsampling task as a global energy minimization problem. Our method determines globally consistent solutions and preserves fine details and sharp depth boundaries. In our experiments on several public datasets at different levels of application, we demonstrate superior performance of our approach over the state-of-the-art, even for very sparse measurements.

48 citations
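
The paper above combines geodesic distances, locally planar models, and a global energy minimization; as a much simpler stand-in that only illustrates image-guided densification of sparse depth, the sketch below weights nearby depth measurements by spatial distance and guide-image similarity (a joint-bilateral-style scheme, not the authors' method). All parameter values are assumptions.

import numpy as np

def guided_sparse_depth_upsample(sparse_depth, guide, radius=7, sigma_s=3.0, sigma_c=0.1):
    """sparse_depth: (H, W) array with 0 where no measurement; guide: (H, W) grayscale image in [0, 1]."""
    h, w = sparse_depth.shape
    dense = np.zeros((h, w), dtype=float)
    ys, xs = np.nonzero(sparse_depth)  # locations of valid depth measurements
    for i in range(h):
        for j in range(w):
            # Keep measurements inside the window around pixel (i, j).
            mask = (np.abs(ys - i) <= radius) & (np.abs(xs - j) <= radius)
            if not mask.any():
                continue
            yy, xx = ys[mask], xs[mask]
            w_spatial = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            w_guide = np.exp(-((guide[yy, xx] - guide[i, j]) ** 2) / (2 * sigma_c ** 2))
            weights = w_spatial * w_guide
            dense[i, j] = np.sum(weights * sparse_depth[yy, xx]) / np.sum(weights)
    return dense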

Patent
14 Jul 2014
TL;DR: In this article, a method of processing a depth image includes receiving a high-resolution color image and a low-resolution depth image corresponding to the high-resolution color image, generating a feature vector based on a depth distribution of the low-resolution depth image, and selecting a filter to upsample the low-resolution depth image by classifying the generated feature vector according to a previously learnt classifier.
Abstract: A method of processing a depth image includes receiving a high-resolution color image and a low-resolution depth image corresponding to the high-resolution color image, generating a feature vector based on a depth distribution of the low-resolution depth image, selecting a filter to upsample the low-resolution depth image by classifying a generated feature vector according to a previously learnt classifier, upsampling the low-resolution depth image by using a selected filter, and outputting an upsampled high-resolution depth image.

48 citations
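
A minimal sketch of the selection step described in the patent above, assuming NumPy and SciPy: a feature vector is built from the depth distribution of the low-resolution depth image, a previously trained classifier (anything exposing a scikit-learn-style predict()) picks one of several candidate filters, and the chosen filter performs the upsampling. The feature design and candidate filters are illustrative assumptions.

import numpy as np
from scipy.ndimage import zoom

def depth_features(depth, bins=16):
    """Feature vector from the depth distribution: normalized histogram plus spread statistics."""
    hist, _ = np.histogram(depth, bins=bins, range=(depth.min(), depth.max() + 1e-6))
    hist = hist / max(hist.sum(), 1)
    return np.concatenate([hist, [depth.std(), depth.max() - depth.min()]])

CANDIDATE_FILTERS = {
    0: lambda d, s: zoom(d, s, order=1),  # linear interpolation for smooth regions
    1: lambda d, s: zoom(d, s, order=0),  # nearest neighbor to preserve depth discontinuities
}

def upsample_depth(low_res_depth, classifier, scale=4):
    """Classify the feature vector, then apply the selected filter to get a high-resolution depth image."""
    label = int(classifier.predict([depth_features(low_res_depth)])[0])
    return CANDIDATE_FILTERS[label](low_res_depth, scale)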


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 90% related
Image segmentation: 79.6K papers, 1.8M citations, 90% related
Feature extraction: 111.8K papers, 2.1M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 88% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Performance Metrics
No. of papers in the topic in previous years

Year  Papers
2023  469
2022  859
2021  330
2020  322
2019  298
2018  236