
Upsampling

About: Upsampling is a research topic. Over its lifetime, 2,426 publications have been published within this topic, receiving 57,613 citations.


Papers
Journal ArticleDOI
TL;DR: A feature representation of local edges by means of a multilevel filtering network, namely, the multilevel modified finite Radon transform network (MMFRTN), is presented; experimental results demonstrate the effectiveness of the proposed method over some state-of-the-art methods.
Abstract: A local line-like feature is the most important discriminative information in the image upsampling scenario. In recent example-based upsampling methods, grayscale and gradient features are often adopted to describe the local patches, but these simple features cannot accurately characterize complex patches. In this paper, we present a feature representation of local edges by means of a multilevel filtering network, namely, the multilevel modified finite Radon transform network (MMFRTN). In the proposed MMFRTN, the MFRT is utilized in the filtering layer to extract the local line-like feature; the nonlinear layer is set to be a simple local binary process; for the feature-pooling layer, we concatenate the mapped patches as the feature of the local patch. Then, we propose a new example-based upsampling method by means of the MMFRTN feature. Experimental results demonstrate the effectiveness of the proposed method over some state-of-the-art methods.
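
For intuition, the sketch below mimics the kind of pipeline the abstract describes: directional (line-like) filtering, a simple local binary nonlinearity, and concatenation of the responses into a patch feature. The kernels, level count, patch size, and binarization rule are illustrative assumptions, not the authors' exact MMFRTN design.

```python
# Hedged sketch of an MMFRTN-style patch descriptor (illustrative only).
# The directional "finite Radon" filtering is approximated with crude
# line-summation kernels; all parameters here are assumptions.
import numpy as np
from scipy.ndimage import convolve

def line_kernels(size=5):
    """Crude directional line filters: horizontal, vertical, two diagonals."""
    h = np.zeros((size, size)); h[size // 2, :] = 1.0 / size
    v = h.T.copy()
    d1 = np.eye(size) / size
    d2 = np.fliplr(d1)
    return [h, v, d1, d2]

def mmfrtn_like_feature(patch, levels=2):
    """Filter -> local binary nonlinearity -> concatenate, repeated per level."""
    maps, features = [patch.astype(float)], []
    for _ in range(levels):
        next_maps = []
        for m in maps:
            for k in line_kernels():
                r = convolve(m, k, mode="reflect")   # filtering layer
                b = (r > r.mean()).astype(float)     # simple local binary (nonlinear) layer
                next_maps.append(r)
                features.append(b.ravel())           # feature-pooling by concatenation
        maps = next_maps
    return np.concatenate(features)

# Example: describe a 9x9 low-resolution patch before example-based matching.
feature = mmfrtn_like_feature(np.random.rand(9, 9))
print(feature.shape)
```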

14 citations

Proceedings ArticleDOI
30 Jul 2017
TL;DR: This work presents Pictory, a mobile app that empowers users to transform photos into artistic renditions by using a combination of neural style transfer with user-controlled state-of-the-art nonlinear image filtering.
Abstract: This work presents Pictory, a mobile app that empowers users to transform photos into artistic renditions by combining neural style transfer with user-controlled, state-of-the-art nonlinear image filtering. The combined approach features merits of both artistic rendering paradigms: deep convolutional neural networks can be used to transfer style characteristics at a global scale, while image filtering is able to simulate phenomena of artistic media at a local scale. Thereby, the proposed app implements an interactive two-stage process: first, style presets based on pre-trained feed-forward neural networks are applied using GPU-accelerated compute shaders to obtain initial results. Second, the intermediate output is stylized via oil paint, watercolor, or toon filtering to inject characteristics of traditional painting media such as pigment dispersion (watercolor) and soft color blending (oil paint), and to filter out artifacts such as fine-scale noise. Finally, on-screen painting facilitates pixel-precise creative control over the filtering stage, e.g., to vary the brush and color transfer, while joint bilateral upsampling enables outputs at full image resolution suited for printing on real canvas.
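
The final step mentioned above, joint bilateral upsampling, is a known guided-upsampling technique: a low-resolution result is upsampled while its edges are kept consistent with a full-resolution guide image. Below is a minimal grayscale sketch; the kernel radius and sigma values are assumptions for illustration, not the app's actual parameters.

```python
# Hedged sketch of joint bilateral upsampling on grayscale images.
# Spatial weights are computed in low-resolution coordinates, range weights
# come from the full-resolution guide image; parameters are illustrative.
import numpy as np

def joint_bilateral_upsample(low, guide, sigma_s=2.0, sigma_r=0.1, radius=2):
    """Upsample `low` (h x w) to the size of `guide` (H x W), guided by `guide`."""
    H, W = guide.shape
    h, w = low.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            ly, lx = y * h / H, x * w / W            # position in low-res coordinates
            wsum, vsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = int(round(ly)) + dy, int(round(lx)) + dx
                    if not (0 <= qy < h and 0 <= qx < w):
                        continue
                    # spatial weight (low-res domain)
                    ws = np.exp(-((qy - ly) ** 2 + (qx - lx) ** 2) / (2 * sigma_s ** 2))
                    # range weight from the full-resolution guide
                    gy, gx = min(int(qy * H / h), H - 1), min(int(qx * W / w), W - 1)
                    wr = np.exp(-((guide[y, x] - guide[gy, gx]) ** 2) / (2 * sigma_r ** 2))
                    wsum += ws * wr
                    vsum += ws * wr * low[qy, qx]
            out[y, x] = vsum / max(wsum, 1e-8)
    return out

# Example: upsample a quarter-resolution result guided by the original image.
guide = np.random.rand(64, 64)
low = guide[::4, ::4] * 0.5
full = joint_bilateral_upsample(low, guide)
print(full.shape)  # (64, 64)
```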

14 citations

Journal ArticleDOI
TL;DR: In this paper, Li et al. propose a variational image inpainting method to estimate the complete depth and reflectance images while concurrently excluding points that are hidden from the image viewpoint.
Abstract: This paper presents a novel strategy to generate, from 3-D lidar measurements, dense depth and reflectance images coherent with given color images. It also estimates, for each pixel of the input images, a visibility attribute. 3-D lidar measurements carry multiple pieces of information, e.g. relative distances to the sensor (from which we can compute depths) and reflectances. When projecting a lidar point cloud onto a reference image plane, we generally obtain sparse images due to undersampling. Moreover, lidar and image sensor positions typically differ during acquisition; therefore, points belonging to objects that are hidden from the image viewpoint might appear in the lidar images. The proposed algorithm estimates the complete depth and reflectance images while concurrently excluding those hidden points. It consists of solving a joint (depth and reflectance) variational image inpainting problem, with an extra, concurrently estimated variable handling the selection of visible points. As regularizers, two coupled total variation terms are included to match, two by two, the depth, reflectance, and color image gradients. We compare our algorithm with other image-guided depth upsampling methods and show that, when dealing with real data, it produces better inpainted images by solving the visibility issue.
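
As a rough illustration of the variational idea, the sketch below fills a sparse depth map by gradient descent on a smoothed total-variation regularizer plus a data term on the known (projected lidar) pixels. It omits the reflectance channel, the coupled TV terms, and the visibility variable described in the abstract; the step size, weight, and iteration count are assumptions.

```python
# Hedged sketch: TV-regularized inpainting of a sparse depth map.
# This is a simplified stand-in for the joint variational problem, not the
# authors' algorithm; all hyperparameters are illustrative assumptions.
import numpy as np

def tv_inpaint_depth(depth, known_mask, iters=500, step=0.1, lam=1.0, eps=1e-3):
    """Fill unknown depth pixels by gradient descent on smoothed TV + data term."""
    u = np.where(known_mask, depth, depth[known_mask].mean())
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])        # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient field (TV subgradient)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        grad = -div + lam * known_mask * (u - depth)     # TV term + data fidelity
        u -= step * grad
    return u

# Example: inpaint a depth map where only ~10% of pixels carry a lidar measurement.
rng = np.random.default_rng(0)
gt = np.fromfunction(lambda i, j: 1.0 + 0.01 * i + 0.02 * j, (64, 64))
mask = rng.random((64, 64)) < 0.1
dense = tv_inpaint_depth(np.where(mask, gt, 0.0), mask)
```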

14 citations

Journal ArticleDOI
TL;DR: This work exploits a deconvolution network as a learnable upsampling layer that takes low-resolution, high-level feature maps as input and outputs enlarged feature maps to better represent target appearance in visual tracking.
Abstract: Object tracking can be tackled by sequentially learning a model of the target's appearance. Therefore, robust appearance representation is a critical step in visual tracking. Recently, deep convolutional networks have demonstrated remarkable ability in visual tracking by leveraging robust high-level features. To obtain these high-level features, convolution and pooling operations are executed alternately in a deep convolutional network. However, these operations lead to low-spatial-resolution feature maps, which degrade localization precision in tracking. While low-level features have sufficient spatial resolution, their representation ability is insufficient. To mitigate this issue, we exploit a deconvolution network in visual tracking. This deconvolution network works as a learnable upsampling layer that takes low-resolution, high-level feature maps as input and outputs enlarged feature maps. Meanwhile, the low-level feature maps are fused with these high-level feature maps via a summation operation to better represent target appearance. We formulate network training as a regression problem and train the network end to end. Extensive experiments on two tracking benchmarks demonstrate the effectiveness of our method.
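
A minimal sketch of the core idea, assuming a PyTorch-style implementation: a transposed convolution acts as the learnable upsampling layer that enlarges the high-level feature maps, which are then fused with higher-resolution low-level maps by element-wise summation. The channel counts, stride, and backbone are illustrative assumptions, not the paper's exact network.

```python
# Hedged sketch: learnable upsampling (deconvolution) plus skip fusion by summation.
# Channel counts and kernel/stride settings are assumptions for illustration.
import torch
import torch.nn as nn

class DeconvFusion(nn.Module):
    def __init__(self, high_ch=512, low_ch=256):
        super().__init__()
        # learnable 2x upsampling of the low-resolution, high-level feature maps
        self.deconv = nn.ConvTranspose2d(high_ch, low_ch, kernel_size=4, stride=2, padding=1)

    def forward(self, high_feat, low_feat):
        up = self.deconv(high_feat)   # (N, low_ch, 2H, 2W)
        return up + low_feat          # fuse with low-level maps by element-wise summation

# Example: fuse a conv5-like map (7x7) with a conv4-like map (14x14).
high = torch.randn(1, 512, 7, 7)
low = torch.randn(1, 256, 14, 14)
fused = DeconvFusion()(high, low)
print(fused.shape)  # torch.Size([1, 256, 14, 14])
```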

14 citations

Patent
Anil Ubale1, Partha Sriram1
14 Sep 2006
TL;DR: In this article, a combined synthesis and analysis filterbank is used to generate transformed frequency-band coefficients indicative of at least one sample of the input audio data by transforming frequency-band coefficients in a manner equivalent to upsampling the frequency-band coefficients and filtering the resulting up-sampled values.
Abstract: Methods and systems for transcoding input audio data in a first encoding format to generate audio data in a second encoding format, and filterbanks for use in such systems. Some such systems include a combined synthesis and analysis filterbank (configured to generate transformed frequency-band coefficients indicative of at least one sample of the input audio data by transforming frequency-band coefficients in a manner equivalent to upsampling the frequency-band coefficients and filtering the resulting up-sampled values to generate the transformed frequency-band coefficients, where the frequency-band coefficients are partially decoded versions of input audio data that are indicative of the at least one sample) and a processing subsystem configured to generate transcoded audio data in the second encoding format in response to the transformed frequency-band coefficients. Some such methods include the steps of: generating frequency-band coefficients indicative of at least one sample of input audio data by partially decoding frequency coefficients of the input audio data; generating transformed frequency-band coefficients indicative of the at least one sample of the input audio data by transforming the frequency-band coefficients in a manner equivalent to upsampling the frequency-band coefficients to generate up-sampled values and filtering the up-sampled values; and in response to the transformed frequency-band coefficients, generating the transcoded audio data so that the transcoded audio data are indicative of each sample of the input audio data.
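
The "upsampling then filtering" operation that the filterbank is described as being equivalent to is the classic interpolation step: insert zeros between samples, then apply a lowpass interpolation filter. A minimal sketch follows; the upsampling factor and filter design are assumptions for illustration and do not reproduce the patented combined synthesis/analysis filterbank.

```python
# Hedged sketch of upsample-then-filter interpolation (the operation the
# filterbank is said to be equivalent to); factor and filter are assumptions.
import numpy as np
from scipy.signal import firwin, lfilter

def upsample_and_filter(x, L=4, taps=64):
    """Zero-stuff by factor L, then lowpass-filter to interpolate."""
    up = np.zeros(len(x) * L)
    up[::L] = x                             # upsampling: insert L-1 zeros per sample
    h = firwin(taps, cutoff=1.0 / L) * L    # interpolation (lowpass) filter, gain L
    return lfilter(h, 1.0, up)              # filtering of the up-sampled values

# Example: interpolate a block of coefficients by a factor of 4.
coeffs = np.cos(2 * np.pi * 0.05 * np.arange(32))
y = upsample_and_filter(coeffs)
print(len(y))  # 128
```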

14 citations


Network Information
Related Topics (5)
Convolutional neural network: 74.7K papers, 2M citations (90% related)
Image segmentation: 79.6K papers, 1.8M citations (90% related)
Feature extraction: 111.8K papers, 2.1M citations (89% related)
Deep learning: 79.8K papers, 2.1M citations (88% related)
Feature (computer vision): 128.2K papers, 1.7M citations (87% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    469
2022    859
2021    330
2020    322
2019    298
2018    236