Topic

Upsampling

About: Upsampling is a research topic. Over its lifetime, 2,426 publications have been published on this topic, receiving 57,613 citations.


Papers
Proceedings ArticleDOI
01 Jun 2018
TL;DR: Proposes a novel end-to-end deep neural network that generates dynamic upsampling filters and a residual image, both computed from the local spatio-temporal neighborhood of each pixel, thereby avoiding explicit motion compensation.
Abstract: Video super-resolution (VSR) has become even more important recently as a way to provide high-resolution (HR) content for ultra-high-definition displays. While many deep-learning-based VSR methods have been proposed, most rely heavily on the accuracy of motion estimation and compensation. We introduce a fundamentally different framework for VSR in this paper. We propose a novel end-to-end deep neural network that generates dynamic upsampling filters and a residual image, which are computed from the local spatio-temporal neighborhood of each pixel, avoiding explicit motion compensation. With our approach, an HR image is reconstructed directly from the input image using the dynamic upsampling filters, and the fine details are added through the computed residual. With the help of a new data augmentation technique, our network can generate much sharper HR videos with temporal consistency than previous methods. We also provide an analysis of our network through extensive experiments to show how it deals with motion implicitly.

503 citations
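The dynamic-filter step described above is concrete enough to sketch. Below is a minimal PyTorch illustration of applying predicted per-pixel filters to a low-resolution frame; the tensor layout, filter size k, and upscaling factor r are assumptions for illustration, and the network that predicts the filters from a temporal window of frames (as well as the residual branch) is omitted.

```python
import torch
import torch.nn.functional as F

def dynamic_upsampling(lr, filters, r=4, k=5):
    """Apply predicted per-pixel upsampling filters to a low-res frame.

    lr:      (B, 1, H, W) low-resolution luminance frame
    filters: (B, r*r*k*k, H, W) dynamic filters predicted by the network
             (softmax-normalized over each k*k window in the paper)
    Returns a (B, 1, r*H, r*W) high-resolution estimate, before the
    separately predicted residual image is added.
    """
    b, _, h, w = lr.shape
    # Gather the k x k neighborhood of every LR pixel: (B, k*k, H*W)
    patches = F.unfold(lr, kernel_size=k, padding=k // 2)
    patches = patches.view(b, 1, k * k, h, w)
    # One filter per pixel and per HR sub-position: (B, r*r, k*k, H, W)
    f = filters.view(b, r * r, k * k, h, w)
    # Filter each neighborhood, giving r*r sub-pixel outputs per LR pixel
    out = (f * patches).sum(dim=2)              # (B, r*r, H, W)
    # Rearrange the sub-pixel outputs into the r*H x r*W grid
    return F.pixel_shuffle(out, r)
```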

Proceedings ArticleDOI
29 Jul 2007
TL;DR: A new method for upsampling images that generates sharp edges with reduced input-resolution grid-related artifacts, based on a statistical dependency between edge features at two different resolutions that is generically exhibited by real-world images.
Abstract: In this paper we propose a new method for upsampling images which is capable of generating sharp edges with reduced input-resolution grid-related artifacts. The method is based on a statistical edge dependency relating certain edge features of two different resolutions, which is generically exhibited by real-world images. While other solutions assume some form of smoothness, we rely on this distinctive edge dependency as our prior knowledge in order to increase image resolution. In addition to this relation we require that intensities are conserved; the output image must be identical to the input image when downsampled to the original resolution. Altogether the method consists of solving a constrained optimization problem, attempting to impose the correct edge relation and conserve local intensities with respect to the low-resolution input image. Results demonstrate the visual importance of having such edge features properly matched, and the method's capability to produce images in which sharp edges are successfully reconstructed.

480 citations
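The intensity-conservation requirement above has a simple computational form: the output, downsampled back to the input resolution, must equal the input. The NumPy sketch below shows one projection step enforcing that constraint under an assumed box-downsampling operator; the paper couples it with the edge-statistics prior in a single constrained optimization rather than applying it in isolation.

```python
import numpy as np

def enforce_intensity_conservation(hr, lr, s):
    """Project an HR estimate onto the constraint that box-downsampling
    it by factor s reproduces the LR input exactly. Hypothetical helper:
    the paper imposes this constraint inside a joint optimization with
    its edge-statistics prior, not as a standalone step.
    """
    h, w = lr.shape
    # Box-downsample the current HR estimate: average each s x s block
    down = hr.reshape(h, s, w, s).mean(axis=(1, 3))
    # Spread the residual uniformly over each block; for box downsampling
    # this is the exact orthogonal projection onto the constraint set
    err = lr - down
    return hr + np.repeat(np.repeat(err, s, axis=0), s, axis=1)
```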

Journal ArticleDOI
TL;DR: A theoretical framework for the analysis, synthesis, and computational complexity of multirate filter banks is derived, showing how to obtain aliasing/crosstalk-free reconstruction and when perfect reconstruction is possible.
Abstract: Multirate filter banks produce multiple output signals by filtering and subsampling a single input signal, or conversely, generate a single output by upsampling and interpolating multiple inputs. Two of their main applications are subband coders for speech processing and transmultiplexers for telecommunications. We derive a theoretical framework for the analysis, synthesis, and computational complexity of multirate filter banks. The use of matrix notation leads to basic results derived from properties of linear algebra. Using the rank and determinant of filter matrices, it is shown how to obtain aliasing/crosstalk-free reconstruction, and when perfect reconstruction is possible. The synthesis of filters for filter banks is also explored: three design methods are presented, and finally the computational complexity is considered.

463 citations
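The upsample-and-interpolate operation the abstract refers to is the standard synthesis step of a multirate system and is easy to sketch. The Python example below (NumPy/SciPy are a modern convenience, not the paper's tooling) upsamples a 1-D signal by zero insertion and then suppresses the resulting spectral images with a low-pass interpolation filter.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def upsample_interpolate(x, L, num_taps=63):
    """Upsample a 1-D signal by an integer factor L, as on the synthesis
    side of a multirate filter bank: insert L-1 zeros between samples
    (the expander), then remove the spectral images with a low-pass
    interpolation filter cut off at pi/L.
    """
    # Expander: zero insertion compresses the spectrum, creating images
    y = np.zeros(len(x) * L)
    y[::L] = x
    # Low-pass at pi/L with gain L to restore the passband amplitude
    h = L * firwin(num_taps, 1.0 / L)
    return lfilter(h, [1.0], y)
```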

Posted Content
TL;DR: SegNet as mentioned in this paper uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling, which eliminates the need for learning to upsample.
Abstract: We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation, termed SegNet. This core trainable segmentation engine consists of an encoder network and a corresponding decoder network, followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network. The role of the decoder network is to map the low-resolution encoder feature maps to full-input-resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN and also with the well-known DeepLab-LargeFOV and DeconvNet architectures. This comparison reveals the memory-versus-accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications; hence, it is designed to be efficient in terms of both memory and computational time during inference. It also has significantly fewer trainable parameters than other competing architectures. We also performed a controlled benchmark of SegNet and other architectures on both road-scene and SUN RGB-D indoor scene segmentation tasks. We show that SegNet provides good performance with competitive inference time and more memory-efficient inference compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at this http URL.

433 citations
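The pooling-indices mechanism maps directly onto PyTorch's MaxPool2d/MaxUnpool2d pair, which implements exactly this kind of index-guided, non-learned upsampling; the shapes below are illustrative.

```python
import torch
import torch.nn as nn

# Encoder-side pooling that records where each maximum came from
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 64, 32, 32)      # an encoder feature map
pooled, indices = pool(x)           # (1, 64, 16, 16) plus max locations

# Decoder side: each value returns to its recorded location and all
# other positions stay zero, so no upsampling weights are learned
sparse = unpool(pooled, indices)    # (1, 64, 32, 32), mostly zeros

# SegNet then densifies the sparse maps with ordinary trainable convs
dense = nn.Conv2d(64, 64, kernel_size=3, padding=1)(sparse)
```

Storing only the pooling indices, rather than full encoder feature maps, is what makes this decoder memory-efficient relative to architectures that copy encoder features across to the decoder.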

Proceedings ArticleDOI
01 Oct 2017
TL;DR: Proposes a novel deep fully convolutional network model for accurate salient object detection, along with an effective hybrid upsampling method that reduces the checkerboard artifacts of deconvolution operators in the authors' decoder network.
Abstract: Deep convolutional neural networks (CNNs) have delivered superior performance in many computer vision tasks. In this paper, we propose a novel deep fully convolutional network model for accurate salient object detection. The key contribution of this work is to learn deep uncertain convolutional features (UCF), which encourage the robustness and accuracy of saliency detection. We achieve this by introducing a reformulated dropout (R-dropout) after specific convolutional layers to construct an uncertain ensemble of internal feature units. In addition, we propose an effective hybrid upsampling method to reduce the checkerboard artifacts of deconvolution operators in our decoder network. The proposed methods can also be applied to other deep convolutional networks. Compared with existing saliency detection methods, the proposed UCF model is able to incorporate uncertainties for more accurate object boundary inference. Extensive experiments demonstrate that our proposed saliency model performs favorably against state-of-the-art approaches. The uncertain feature learning mechanism as well as the upsampling method can significantly improve performance on other pixel-wise vision tasks.

433 citations
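The abstract does not spell out the hybrid upsampling scheme, so the sketch below shows a representative checkerboard-avoiding alternative rather than the paper's method: nearest-neighbor resize followed by convolution, a widely used replacement for strided deconvolution in decoder networks.

```python
import torch
import torch.nn as nn

class ResizeConvUpsample(nn.Module):
    """Checkerboard-free upsampling via nearest-neighbor resize followed
    by a convolution. Illustrative stand-in only: the paper's exact
    hybrid scheme is not described in the abstract.
    """

    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode='nearest')
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        # Every output pixel is covered by a uniformly overlapping conv
        # window, avoiding the uneven kernel overlap of strided
        # deconvolution that produces checkerboard artifacts
        return self.conv(self.up(x))
```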


Network Information
Related Topics (5)

Topic                          Papers    Citations   Related
Convolutional neural network   74.7K     2M          90%
Image segmentation             79.6K     1.8M        90%
Feature extraction             111.8K    2.1M        89%
Deep learning                  79.8K     2.1M        88%
Feature (computer vision)      128.2K    1.7M        87%
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    469
2022    859
2021    330
2020    322
2019    298
2018    236