Upsampling

About: Upsampling is a research topic. Over its lifetime, 2,426 publications on this topic have been published, receiving 57,613 citations.


Papers
Posted Content
TL;DR: This paper presents a novel neural network that uses multi-scale feature fusion for accurate and efficient semantic image segmentation and outperforms previous state-of-the-art methods.
Abstract: In this paper, we present a novel neural network that uses multi-scale feature fusion for accurate and efficient semantic image segmentation. We use a ResNet-based feature extractor, dilated convolutional layers in the downsampling path, and atrous convolutional layers in the upsampling path, and merge them with a concatenation operation. A new attention module is proposed to encode more contextual information and enlarge the receptive field of the network. We present an in-depth theoretical analysis of our network with training and optimization details. Our network was trained and tested on the CamVid and Cityscapes datasets using mean accuracy per class and Intersection over Union (IoU) as the evaluation metrics. Our model outperforms previous state-of-the-art methods on semantic segmentation, achieving a mean IoU of 74.12 while running at >100 FPS.

21 citations
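
The fusion scheme described in this abstract can be illustrated with a minimal PyTorch sketch: parallel dilated (atrous) 3x3 convolutions at several rates, merged by channel concatenation. The channel sizes, dilation rates, and 1x1 projection below are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Parallel atrous (dilated) convolutions merged by concatenation,
    in the spirit of the multi-scale fusion the abstract describes.
    All sizes here are illustrative assumptions."""

    def __init__(self, in_ch=256, branch_ch=64, rates=(1, 2, 4, 8)):
        super().__init__()
        # padding=r keeps the spatial size constant for each dilation rate r.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, 3, padding=r, dilation=r)
            for r in rates
        )
        # A 1x1 conv fuses the concatenated branches back to in_ch channels.
        self.project = nn.Conv2d(branch_ch * len(rates), in_ch, 1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.project(torch.cat(feats, dim=1))  # the "concat" merge


# Example: fuse = MultiScaleFusion(); y = fuse(torch.randn(1, 256, 32, 32))
```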

Patent
02 Dec 2008
TL;DR: This patent describes an upsampling mechanism that interpolates a discrete-time input sample stream with time alignment, adding randomized high-frequency noise (i.e., dithering) to eliminate the spectral regrowth spurs that would otherwise appear in the output after rounding.
Abstract: A novel and useful apparatus for, and method of, upsampling/interpolating a discrete-time input sample stream with time alignment utilizing the addition of randomized high-frequency noise. The upsampling mechanism is an effective implementation of a second-order interpolator that eliminates the need for a conventional filter, as the filtering action is built into the mechanism. The mechanism takes the derivative of the discrete-time input sample stream, thereby providing another order of interpolation over a conventional interpolator. Before outputting the interpolated signal, an integrator takes the integral of the interpolated samples; any processing performed between the derivative and integrator blocks effectively provides an additional order of interpolation. High-frequency noise (i.e., dithering) is added to the differentiated samples to eliminate the spectral regrowth spurs that would otherwise appear in the output after rounding. Delay alignment is performed on the differentiated samples to time-align the phase/frequency and amplitude samples that are processed on different paths.

20 citations
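
The differentiate/dither/round/integrate chain in this abstract can be sketched in a few lines of NumPy. A sample-and-hold placed between the derivative and the integrator turns zero-order interpolation into first-order (linear) interpolation, matching the abstract's claim of gaining one interpolation order; the 1/L scaling and the uniform ±0.5 LSB dither are assumptions for illustration.

```python
import numpy as np

def dithered_upsample(x: np.ndarray, L: int, rng=None) -> np.ndarray:
    """Upsample x by an integer factor L using a differentiate -> hold ->
    dither -> round -> integrate chain, loosely following the patent
    abstract. Dither amplitude and scaling are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    d = np.diff(x, prepend=0.0)        # derivative of the input stream
    held = np.repeat(d / L, L)         # sample-and-hold between derivative
                                       # and integrator adds one order
    # Randomized high-frequency noise (dither) breaks up the deterministic
    # rounding patterns that would otherwise create spectral regrowth spurs.
    dithered = held + rng.uniform(-0.5, 0.5, held.size)
    return np.cumsum(np.round(dithered))  # integrator rebuilds the signal
```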

Journal Article
TL;DR: A fast and efficient image upsampling method based on high-resolution local structure constraints that recovers finer pixel-level texture details and achieves top-level objective performance at a low time cost compared with state-of-the-art methods.
Abstract: With the development of ultra-high-resolution display devices, the visual perception of fine texture details is becoming more and more important. A method of high-quality image upsampling with a low cost is greatly needed. In this paper, we propose a fast and efficient image upsampling method that makes use of high-resolution local structure constraints. The average local difference is used to divide a bicubic-interpolated image into a sharp edge area and a texture area, and these two areas are reconstructed separately with specific constraints. For reconstruction of the sharp edge area, a high-resolution gradient map is estimated as an extra constraint for the recovery of sharp and natural edges; for the reconstruction of the texture area, a high-resolution local texture structure map is estimated as an extra constraint to recover fine texture details. These two reconstructed areas are then combined to obtain the final high-resolution image. The experimental results demonstrated that the proposed method recovered finer pixel-level texture details and obtained top-level objective performance with a low time cost compared with state-of-the-art methods.

20 citations
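
The area-division step in this abstract, splitting a bicubic-interpolated image into a sharp-edge area and a texture area via the average local difference, can be sketched with SciPy. The 5x5 window, the threshold, and the use of cubic spline zoom as a stand-in for bicubic interpolation are assumptions for illustration; the paper's exact statistic may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def split_edge_and_texture(img: np.ndarray, scale: int = 2, thresh: float = 0.1):
    """Upsample img, then divide it into sharp-edge and texture areas with
    an average-local-difference statistic, mirroring the abstract's area
    division. Window size and threshold are illustrative assumptions."""
    up = zoom(img.astype(np.float64), scale, order=3)  # cubic interpolation
    local_mean = uniform_filter(up, size=5)
    # Mean absolute deviation from the local mean within a 5x5 window.
    local_diff = uniform_filter(np.abs(up - local_mean), size=5)
    edge_mask = local_diff > thresh    # strong local variation: sharp edges
    return up, edge_mask, ~edge_mask   # the complement is treated as texture
```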

Posted Content
TL;DR: Affinity-Aware Upsampling (A2U) is introduced, where upsampling kernels are generated using a lightweight low-rank bilinear model and are conditioned on second-order features, offering the potential for building compact models.
Abstract: We show that learning affinity in upsampling provides an effective and efficient approach to exploiting pairwise interactions in deep networks. Second-order features are commonly used in dense prediction to build adjacent relations with a learnable module after upsampling, such as non-local blocks. Since upsampling is essential, learning affinity in upsampling can avoid additional propagation layers, offering the potential for building compact models. By looking at existing upsampling operators from a unified mathematical perspective, we generalize them into a second-order form and introduce Affinity-Aware Upsampling (A2U), where upsampling kernels are generated using a lightweight low-rank bilinear model and are conditioned on second-order features. Our upsampling operator can also be extended to downsampling. We discuss alternative implementations of A2U and verify their effectiveness on two detail-sensitive tasks: image reconstruction on a toy dataset, and a large-scale image matting task where affinity-based ideas constitute mainstream matting approaches. In particular, results on the Composition-1k matting dataset show that A2U achieves a 14% relative improvement in the SAD metric over a strong baseline with a negligible increase in parameters (<0.5%). Compared with the state-of-the-art matting network, we achieve 8% higher performance with only 40% of the model complexity.

20 citations
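
As a rough illustration of the abstract's central idea, the sketch below generates per-location upsampling kernels from a low-rank bilinear (second-order) code, the elementwise product of two linear projections of the features, and applies them with a CARAFE-style kernel reassembly. The rank, kernel size, and reassembly scheme are assumptions; this is not the paper's exact A2U operator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearKernelUpsample(nn.Module):
    """Upsampling kernels produced by a lightweight low-rank bilinear model
    over the input features (second-order), loosely in the spirit of A2U.
    The reassembly scheme and all sizes are illustrative assumptions."""

    def __init__(self, ch: int, scale: int = 2, k: int = 3, rank: int = 16):
        super().__init__()
        self.scale, self.k = scale, k
        self.u = nn.Conv2d(ch, rank, 1)   # first low-rank factor
        self.v = nn.Conv2d(ch, rank, 1)   # second low-rank factor
        # Map the bilinear code to scale^2 kernels of size k*k per location.
        self.to_kernel = nn.Conv2d(rank, scale * scale * k * k, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        s, k = self.scale, self.k
        code = self.u(x) * self.v(x)             # second-order bilinear code
        kern = F.pixel_shuffle(self.to_kernel(code), s)  # (b, k*k, sh, sw)
        kern = F.softmax(kern, dim=1)            # each kernel sums to one
        # k x k neighborhoods of x, replicated onto the high-res grid.
        patches = F.unfold(x, k, padding=k // 2).view(b, c * k * k, h, w)
        patches = F.interpolate(patches, scale_factor=s, mode="nearest")
        patches = patches.view(b, c, k * k, s * h, s * w)
        return (patches * kern.unsqueeze(1)).sum(dim=2)
```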

Journal Article
TL;DR: The proposed model offers a simplified CNN with less overhead and higher performance, delivering outstanding results compared with similar earlier approaches such as FCN and VGG16 in terms of performance versus trainable parameters.
Abstract: This research presents a novel fully convolutional neural network (CNN)-based model for probabilistic pixel-wise segmentation, titled Encoder-decoder-based CNN for Road-Scene Understanding (ECRU). Scene understanding has lately become an evolving research area, and semantic segmentation is among the most recent methods for visual recognition. Among vision-based smart systems, the driving assistance system has become a much-preferred research topic. The proposed model is an encoder-decoder that performs pixel-wise class predictions. The encoder network is composed of a VGG-19 layer model, while the decoder network uses 16 upsampling and deconvolution units. The encoder has a flexible architecture that can be altered and trained for any image size and resolution. The decoder network upsamples and maps the low-resolution encoder features. Consequently, there is a substantial reduction in trainable parameters, as the network recycles the encoder's pooling indices for pixel-wise classification and segmentation. The proposed model is intended to offer a simplified CNN with less overhead and higher performance. The network is trained and tested on the well-known road-scene dataset CamVid and delivers outstanding results compared with similar earlier approaches such as FCN and VGG16 in terms of performance versus trainable parameters.

20 citations
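
The parameter saving described in this abstract comes from reusing the encoder's max-pooling indices in the decoder, a mechanism PyTorch supports directly. A minimal sketch of that unpooling step follows, with layer sizes chosen arbitrarily for illustration:

```python
import torch
import torch.nn as nn

# Encoder pooling that records which location held each maximum.
pool = nn.MaxPool2d(2, stride=2, return_indices=True)
# Decoder unpooling routes values back to those recorded locations,
# so the upsampling step itself carries no trainable weights.
unpool = nn.MaxUnpool2d(2, stride=2)

x = torch.randn(1, 64, 32, 32)               # an encoder feature map
down, idx = pool(x)                          # (1, 64, 16, 16) plus indices
up = unpool(down, idx, output_size=x.shape)  # sparse (1, 64, 32, 32) map

# A plain convolution then densifies the sparse unpooled map; the
# learning happens here rather than in the upsampling operator itself.
densify = nn.Conv2d(64, 64, 3, padding=1)
y = densify(up)
```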


Network Information
Related Topics (5)
- Convolutional neural network: 74.7K papers, 2M citations (90% related)
- Image segmentation: 79.6K papers, 1.8M citations (90% related)
- Feature extraction: 111.8K papers, 2.1M citations (89% related)
- Deep learning: 79.8K papers, 2.1M citations (88% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (87% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    469
2022    859
2021    330
2020    322
2019    298
2018    236