Topic

Upsampling

About: Upsampling is a research topic. Over the lifetime, 2426 publications have been published within this topic receiving 57613 citations.


Papers
Proceedings ArticleDOI
07 Dec 2015
TL;DR: A novel method for depth image superresolution is proposed that combines recent advances in example-based upsampling with variational superresolution based on a known blur kernel; it clearly outperforms existing approaches on multiple real and synthetic datasets.
Abstract: In this paper we propose a novel method for depth image superresolution which combines recent advances in example-based upsampling with variational superresolution based on a known blur kernel. Most traditional depth superresolution approaches use an additional high-resolution intensity image as guidance. In our method we instead learn a dictionary of edge priors from an external database of high- and low-resolution examples. In a novel variational sparse coding approach, this dictionary is used to infer strong edge priors. In addition to the traditional sparse coding constraints, the difference in the overlap of neighboring edge patches is minimized in our optimization. These edge priors then serve as anisotropic guidance of the higher-order regularization in a novel variational superresolution. Both the sparse coding and the variational superresolution of the depth are solved via a primal-dual formulation. In an exhaustive numerical and visual evaluation we show that our method clearly outperforms existing approaches on multiple real and synthetic datasets.

102 citations
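The sparse-coding step described above (inferring codes for patches against a learned dictionary under an L1 penalty) can be sketched with the generic ISTA algorithm. This is a standard stand-in, not the paper's primal-dual solver, and the dictionary here is random rather than learned from edge examples:

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.01, n_iter=200):
    """Solve min_z 0.5*||x - D z||^2 + lam*||z||_1 by ISTA.

    D: dictionary (n_features, n_atoms) with unit-norm columns.
    x: signal (n_features,). Returns the sparse code z (n_atoms,).
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)           # gradient of the smooth data term
        w = z - grad / L                   # gradient descent step
        z = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
z_true = np.zeros(32)
z_true[3], z_true[17] = 1.0, -0.5          # a sparse ground-truth code
x = D @ z_true
z = ista_sparse_code(D, x)
```

With a small L1 weight, ISTA recovers a code close to the sparse ground truth; the paper additionally couples neighboring patch codes through an overlap-consistency term.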

Journal ArticleDOI
TL;DR: This paper presents a data-driven filter method to approximate the ideal filter for depth image super-resolution instead of hand-designed filters, and introduces a coarse-to-fine convolutional neural network framework to learn different sizes of filter kernels.
Abstract: Depth image super-resolution is a significant yet challenging task. In this paper, we introduce a novel deep color-guided coarse-to-fine convolutional neural network (CNN) framework to address this problem. First, we present a data-driven filter method that approximates the ideal filter for depth image super-resolution instead of relying on hand-designed filters. Learned from large data samples, the filter is more accurate and stable for upsampling depth images. Second, we introduce a coarse-to-fine CNN to learn filter kernels of different sizes. In the coarse stage, larger filter kernels are learned by the CNN to produce a crude high-resolution depth image. In the fine stage, the crude high-resolution depth image is used as the input, and smaller filter kernels are learned to obtain more accurate results. Benefiting from this network, we can progressively recover the high-frequency details. Third, we construct a color guidance strategy that fuses color difference and spatial distance for depth image upsampling. We revise the interpolated high-resolution depth image according to the corresponding pixels in high-resolution color maps. Guided by color information, the obtained high-resolution depth image alleviates texture-copying artifacts and preserves edge details effectively. Quantitative and qualitative experimental results demonstrate our state-of-the-art performance for depth map super-resolution.

101 citations
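The color guidance strategy above, fusing color difference and spatial distance, is in the same family as classical joint bilateral upsampling. A minimal sketch of that idea (a hand-designed filter, i.e. exactly what the paper's learned CNN filters replace; parameter names are illustrative):

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, factor, sigma_s=1.0, sigma_c=0.1):
    """Upsample a low-res depth map guided by a high-res color image.

    Each high-res pixel averages nearby low-res depth samples, weighted by
    spatial distance and color difference, so depth edges snap to color edges.
    """
    H, W = color_hr.shape[:2]
    h, w = depth_lr.shape
    color_lr = color_hr[::factor, ::factor]   # color at the low-res grid sites
    out = np.zeros((H, W))
    r = 2                                     # spatial support radius (low-res px)
    for y in range(H):
        for x in range(W):
            yl, xl = y / factor, x / factor   # position in low-res coordinates
            num = den = 0.0
            for j in range(max(0, int(yl) - r), min(h, int(yl) + r + 1)):
                for i in range(max(0, int(xl) - r), min(w, int(xl) + r + 1)):
                    ds = (yl - j) ** 2 + (xl - i) ** 2
                    dc = np.sum((color_hr[y, x] - color_lr[j, i]) ** 2)
                    wgt = np.exp(-ds / (2 * sigma_s**2) - dc / (2 * sigma_c**2))
                    num += wgt * depth_lr[j, i]
                    den += wgt
            out[y, x] = num / den
    return out

# toy scene: a depth step aligned with a color edge
depth_lr = np.array([[0.0, 1.0], [0.0, 1.0]])
color_hr = np.zeros((4, 4, 3))
color_hr[:, 2:] = 1.0                         # right half is a different color
depth_hr = joint_bilateral_upsample(depth_lr, color_hr, factor=2)
```

Because the color term suppresses contributions from across the color edge, the upsampled depth step stays sharp instead of being blurred by interpolation.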

01 Jan 2002
TL;DR: In this paper, the authors compare two general and formal solutions to the problem of fusion of multispectral images with high-resolution panchromatic observations, and propose a generalized Laplacian pyramid and an undecimated discrete wavelet transform.
Abstract: This paper compares two general and formal solutions to the problem of fusing multispectral images with high-resolution panchromatic observations. The former exploits the undecimated discrete wavelet transform, an octave bandpass representation achieved from a conventional discrete wavelet transform by omitting all decimators and upsampling the wavelet filter bank. The latter relies on the generalized Laplacian pyramid, another oversampled structure obtained by recursively subtracting from an image an expanded decimated lowpass version of itself. Both methods selectively perform spatial-frequency spectrum substitution from one image to the other. In both schemes, context dependency is exploited by thresholding the local correlation coefficient between the images to be merged, to avoid injecting spatial details that are not likely to occur in the target image. Unlike other multiscale fusion schemes, neither of the present decompositions is critically subsampled, thus avoiding possible impairments in the fused images due to missing cancellation of aliasing terms. Results are presented and discussed on SPOT data.

100 citations
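The Laplacian pyramid construction named above, subtracting from an image an expanded decimated lowpass version, can be sketched in a few lines. This is a minimal toy with a box filter and nearest-neighbour expansion, not the paper's generalized pyramid filters:

```python
import numpy as np

def downsample(img):
    """Lowpass (2x2 box filter) then decimate by 2."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def expand(img):
    """Upsample by 2 (nearest-neighbour replication as a stand-in
    for a proper interpolation filter)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_level(img):
    """One pyramid level: subtract from the image an expanded,
    decimated lowpass version of itself."""
    low = downsample(img)
    detail = img - expand(low)     # bandpass residual: the spatial detail
    return low, detail

img = np.arange(16, dtype=float).reshape(4, 4)
low, detail = laplacian_level(img)
recon = expand(low) + detail       # perfect reconstruction by construction
```

Fusion schemes of the kind the paper studies inject the panchromatic image's `detail` band into the multispectral image's pyramid before reconstruction.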

Proceedings Article
12 Feb 2017
TL;DR: An end-to-end transformative discriminative neural network devised for super-resolving unaligned and very small face images with an extreme upscaling factor of 8, which significantly outperforms the state-of-the-art.
Abstract: Conventional face hallucination methods rely heavily on accurate alignment of low-resolution (LR) faces before upsampling them. Misalignment often leads to deficient results and unnatural artifacts for large upscaling factors. However, due to the diverse range of poses and facial expressions, aligning an LR input image, in particular when it is tiny, is severely difficult. To overcome this challenge, here we present an end-to-end transformative discriminative neural network (TDN) devised for super-resolving unaligned and very small face images with an extreme upscaling factor of 8. Our method employs an upsampling network in which we embed spatial transformation layers to allow local receptive fields to line up with similar spatial supports. Furthermore, we incorporate a class-specific loss in our objective through a successive discriminative network to improve the alignment and upsampling performance with semantic information. Extensive experiments on large face datasets show that the proposed method significantly outperforms the state-of-the-art.

100 citations
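The spatial transformation layers mentioned above come down to differentiable affine warping with bilinear resampling. A minimal forward-pass sketch in NumPy (the paper's TDN learns `theta` from the input; here it is supplied by hand):

```python
import numpy as np

def affine_warp(img, theta):
    """Warp a grayscale image by a 2x3 affine matrix with bilinear sampling,
    the resampling step inside a spatial transformer layer.

    theta maps normalized output coords in [-1, 1] to normalized input coords.
    """
    H, W = img.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    grid = np.stack([xs, ys, np.ones_like(xs)], axis=-1)   # (H, W, 3) homogeneous
    src = grid @ theta.T                                   # (H, W, 2) source coords
    sx = (src[..., 0] + 1) * (W - 1) / 2                   # back to pixel coords
    sy = (src[..., 1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(sx).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, H - 2)
    wx, wy = sx - x0, sy - y0
    # bilinear blend of the four neighbouring input pixels
    return (img[y0, x0] * (1 - wx) * (1 - wy) + img[y0, x0 + 1] * wx * (1 - wy)
            + img[y0 + 1, x0] * (1 - wx) * wy + img[y0 + 1, x0 + 1] * wx * wy)

img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
warped = affine_warp(img, identity)    # identity theta leaves the image unchanged
```

Because the sampling weights are smooth in `theta`, gradients flow through the warp, which is what lets the network learn to align faces before upsampling.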

Journal ArticleDOI
TL;DR: This work proposes a novel approach to resize images with L/M resizing ratio in the discrete cosine transform (DCT) domain, which exploits the multiplication-convolution property of DCT (multiplication in the spatial domain corresponds to symmetric convolution in the DCT domain).
Abstract: Image resizing changes the size of an image by upsampling or downsampling. Most still images and video frames on digital media are stored in a compressed format. Resizing a compressed image can be performed in the spatial domain via decompression and recompression, but resizing directly in the compressed domain is in general much faster. We propose a novel approach to resize images with an L/M resizing ratio in the discrete cosine transform (DCT) domain, exploiting the multiplication-convolution property of the DCT (multiplication in the spatial domain corresponds to symmetric convolution in the DCT domain). When an image is given in terms of its 8×8 block-DCT coefficients, its resized image is also obtained in 8×8 block-DCT coefficients. The proposed approach is computationally fast and produces visually fine images with high PSNR.

98 citations
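The simplest instance of DCT-domain resizing is halving an 8×8 block by keeping only its low-frequency coefficients, a special case of the rational L/M resizing the paper generalizes. A minimal sketch with an orthonormal DCT-II built by hand:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]   # frequency index
    i = np.arange(n)[None, :]   # sample index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] *= 1 / np.sqrt(2)      # DC row scaling for orthonormality
    return C

def dct_downsample_block(block):
    """Halve an 8x8 block in the DCT domain: forward 8-point DCT, keep the
    low-frequency 4x4 corner (rescaled by 4/8), inverse 4-point DCT."""
    C8, C4 = dct_matrix(8), dct_matrix(4)
    coef = C8 @ block @ C8.T          # 8x8 block-DCT coefficients
    low = coef[:4, :4] / 2.0          # rescale for the 8 -> 4 size change
    return C4.T @ low @ C4            # inverse DCT at the smaller size

block = np.full((8, 8), 5.0)          # a flat block: only a DC coefficient
small = dct_downsample_block(block)   # 4x4 block with the same mean level
```

Truncating coefficients acts as an ideal lowpass filter within the block; the paper's symmetric-convolution machinery extends this to arbitrary L/M ratios while staying entirely in 8×8 block-DCT form.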


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations, 90% related
Image segmentation: 79.6K papers, 1.8M citations, 90% related
Feature extraction: 111.8K papers, 2.1M citations, 89% related
Deep learning: 79.8K papers, 2.1M citations, 88% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  469
2022  859
2021  330
2020  322
2019  298
2018  236