scispace - formally typeset
Topic

Upsampling

About: Upsampling is a research topic. Over the lifetime, 2,426 publications have been published within this topic, receiving 57,613 citations.


Papers
Journal ArticleDOI
TL;DR: An optimization model combining a nonconvex regularizer with a nonlocal affinity defined in a high-dimensional feature space yields precisely localized depth boundaries, significantly reducing texture-copying and depth-bleeding artifacts on a variety of range data sets.
Abstract: This paper describes a method for high-quality depth superresolution. The standard formulations of image-guided depth upsampling, using simple joint filtering or quadratic optimization, lead to texture copying and depth bleeding artifacts. These artifacts are caused by inherent discrepancy of structures in data from different sensors. Although there exists some correlation between depth and intensity discontinuities, they are different in distribution and formation. To tackle this problem, we formulate an optimization model using a nonconvex regularizer. A nonlocal affinity established in a high-dimensional feature space is used to offer precisely localized depth boundaries. We show that the proposed method iteratively handles differences in structure between depth and intensity images. This property enables reducing texture copying and depth bleeding artifacts significantly on a variety of range data sets. We also propose a fast alternating direction method of multipliers algorithm to solve our optimization problem. Our solver shows a noticeable speed up compared with the conventional majorize-minimize algorithm. Extensive experiments with synthetic and real-world data sets demonstrate that the proposed method is superior to the existing methods.
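The paper's nonconvex model and tailored ADMM solver are not public in enough detail to reproduce here; as a minimal sketch of the alternating direction method of multipliers itself, the block below solves the much simpler convex problem min_x ½‖x−b‖² + λ‖x‖₁, whose closed-form solution (soft-thresholding) lets the iterates be checked. All names and parameters are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the prox operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(b, lam, rho=1.0, iters=200):
    """Solve min_x 0.5*||x - b||^2 + lam*||x||_1 via ADMM.

    Splitting: min 0.5*||x - b||^2 + lam*||z||_1  s.t.  x = z.
    """
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    u = np.zeros_like(b)                          # scaled dual variable
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)     # quadratic x-update
        z = soft_threshold(x + u, lam / rho)      # prox of the l1 term
        u = u + x - z                             # dual ascent
    return z

b = np.array([3.0, -0.5, 1.2, -2.0, 0.1])
x_admm = admm_l1_denoise(b, lam=1.0)
x_closed = soft_threshold(b, 1.0)  # known closed-form solution
```

The same x/z/u alternation carries over to the paper's setting, where the z-update becomes the prox of a nonconvex regularizer and the x-update a linear solve against the nonlocal affinity weights.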

43 citations

Journal ArticleDOI
19 Jan 2022-Symmetry
TL;DR: A joint model of a fast guided filter and a matched filter, the latter an established technique for vessel extraction in diabetic retinopathy, is proposed for enhancing abnormal retinal images with low vessel contrast.
Abstract: Fundus images have been established as an important factor in analyzing and recognizing many cardiovascular and ophthalmological diseases. Consequently, precise segmentation of blood vessels using computer vision is vital in the recognition of ailments. Although clinicians have adopted computer-aided diagnostics (CAD) in day-to-day diagnosis, it is still quite difficult to conduct a fully automated analysis based exclusively on the information contained in fundus images. In fundus image applications, one method for conducting an automatic analysis is to ascertain symmetry/asymmetry details from corresponding areas of the retina and investigate their association with positive clinical findings. In the field of diabetic retinopathy, matched filters have been shown to be an established technique for vessel extraction. However, matched filters are less efficient on noisy images. In this work, a joint model of a fast guided filter and a matched filter is proposed for enhancing abnormal retinal images with low vessel contrast. Correctly extracting all of the information in an image is an important factor in image enhancement. A guided filter has excellent edge-preserving properties but still tends to suffer from halo artifacts near edges. Fast guided filtering is a technique that subsamples the filtering input image and the guidance image, calculates the local linear coefficients at the reduced resolution, and upsamples them. In short, the proposed technique applies a fast guided filter and a matched filter to attain improved performance measures for vessel extraction. The recommended technique was assessed on the DRIVE and CHASE_DB1 datasets and achieved accuracies of 0.9613 and 0.960, respectively, both higher than the accuracy of the original matched filter and other suggested vessel segmentation algorithms.
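The fast guided filter step the abstract describes — compute box-filter statistics and local linear coefficients on subsampled images, then upsample the coefficients — can be sketched as below. This is a generic He–Sun-style fast guided filter, not the paper's exact pipeline; all parameter names and defaults are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def fast_guided_filter(guide, src, radius=4, eps=1e-4, scale=4):
    """Fast guided filter sketch: statistics are computed on
    'scale'-times subsampled images; the local linear coefficients
    (a, b) are then upsampled and applied to the full-res guidance."""
    # Subsample guidance and filtering input by simple striding
    g_lo = guide[::scale, ::scale]
    s_lo = src[::scale, ::scale]
    r_lo = max(radius // scale, 1)
    box = lambda im: uniform_filter(im, size=2 * r_lo + 1, mode="reflect")

    mean_g = box(g_lo)
    mean_s = box(s_lo)
    cov_gs = box(g_lo * s_lo) - mean_g * mean_s
    var_g = box(g_lo * g_lo) - mean_g * mean_g

    a = cov_gs / (var_g + eps)       # local linear coefficients
    b = mean_s - a * mean_g
    mean_a, mean_b = box(a), box(b)

    # Upsample the coefficients, not the image, then apply at full res
    zf = (guide.shape[0] / mean_a.shape[0], guide.shape[1] / mean_a.shape[1])
    a_hi = zoom(mean_a, zf, order=1)
    b_hi = zoom(mean_b, zf, order=1)
    return a_hi * guide + b_hi

# On a self-guided linear ramp the local linear model is near-exact
ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
out = fast_guided_filter(ramp, ramp)
```

Because only the coarse coefficient maps are box-filtered and interpolated, the cost per output pixel drops by roughly scale², which is what makes the joint model practical for full fundus images.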

42 citations

Proceedings ArticleDOI
04 May 2014
TL;DR: Experimental results show that the proposed depth enhancement and up-sampling techniques produce slightly more accurate depth at the full resolution with improved rendering quality of intermediate views.
Abstract: Depth images are often presented at a lower spatial resolution, either due to limitations in the acquisition of the depth or to increase compression efficiency. As a result, upsampling low-resolution depth images to a higher spatial resolution is typically required prior to depth image based rendering. In this paper, depth enhancement and up-sampling techniques are proposed using a graph-based formulation. In one scheme, the depth is first upsampled using a conventional method, then followed by a graph-based joint bilateral filtering to enhance edges and reduce noise. A second scheme avoids the two-step processing and upsamples the depth directly using the proposed graph-based joint bilateral upsampling. Both filtering and interpolation problems are formulated as regularization problems and the solutions are different from conventional approaches. Further, we also studied operations on different graph structures such as star graph and 8-connected graph. Experimental results show that the proposed methods produce slightly more accurate depth at the full resolution with improved rendering quality of intermediate views.
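The conventional joint bilateral upsampling that the paper's first scheme starts from (and its graph-based formulation replaces) can be sketched as a plain double loop: each high-res depth value is a weighted average of nearby low-res depth samples, with a spatial Gaussian in low-res coordinates and a range Gaussian on the high-res guidance image. Parameter names and the nearest-pixel guidance lookup are illustrative assumptions, not the paper's method.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, scale, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    """Kopf-style joint bilateral upsampling, plain-loop sketch."""
    H, W = guide_hi.shape
    h, w = depth_lo.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale          # position in low-res grid
            y0, x0 = int(round(yl)), int(round(xl))
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = y0 + dy, x0 + dx
                    if not (0 <= qy < h and 0 <= qx < w):
                        continue
                    # high-res guidance pixel corresponding to (qy, qx)
                    gy = min(qy * scale, H - 1)
                    gx = min(qx * scale, W - 1)
                    ws = np.exp(-((qy - yl) ** 2 + (qx - xl) ** 2)
                                / (2 * sigma_s ** 2))
                    wr = np.exp(-(guide_hi[y, x] - guide_hi[gy, gx]) ** 2
                                / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lo[qy, qx]
                    den += ws * wr
            out[y, x] = num / den
    return out

# Sanity check: a constant depth map must upsample to the same constant
depth_lo = np.full((8, 8), 3.0)
guide_hi = np.linspace(0.0, 1.0, 256).reshape(16, 16)
up = joint_bilateral_upsample(depth_lo, guide_hi, scale=2)
```

The paper's contribution is to recast both this weighting and the follow-up edge-enhancing filter as graph regularization problems (on star and 8-connected graphs), which changes the solution away from this direct weighted average.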

42 citations

Journal ArticleDOI
TL;DR: A novel signal transform, called a moving band chirp Z-transform, is introduced in order to allow the entire azimuth aperture to be focused simultaneously without any need for temporary unaliasing, which requires upsampling, or subaperture processing.
Abstract: The main operational mode of the European Space Agency's upcoming Sentinel-1 operational satellite will be the Terrain Observation by Progressive Scans (TOPS) imaging mode. This paper presents a very efficient wavenumber domain processor for the processing of TOPS mode data. In particular, a novel signal transform, called a moving band chirp Z-transform, is introduced in order to allow the entire azimuth aperture to be focused simultaneously without any need for temporary unaliasing, which requires upsampling, or subaperture processing.
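The moving-band variant is specific to the paper, but the underlying chirp Z-transform is standard: it evaluates X[k] = Σₙ x[n] A⁻ⁿ Wⁿᵏ along an arbitrary spiral or band in the z-plane, which is what lets a TOPS processor focus a shifted spectral band without upsampling first. The direct O(N·M) sketch below reduces to the DFT for A = 1, W = e^(−2πi/N), M = N, which gives a checkable case.

```python
import numpy as np

def czt(x, M=None, W=None, A=1.0):
    """Direct O(N*M) chirp Z-transform: X[k] = sum_n x[n] A^(-n) W^(n k).

    Fast implementations use Bluestein's algorithm; this brute-force
    version only illustrates the evaluated contour z_k = A * W^(-k)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if M is None:
        M = N
    if W is None:
        W = np.exp(-2j * np.pi / M)   # full unit circle -> plain DFT
    n = np.arange(N)
    k = np.arange(M)
    E = (A ** (-n))[None, :] * W ** (k[:, None] * n[None, :])
    return E @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
X = czt(x)                 # CZT on the full unit circle
X_fft = np.fft.fft(x)      # must agree with the DFT
```

Choosing W to step through only a narrow frequency band (and A to set the band's start) evaluates just that band at full resolution; recent SciPy versions ship a fast Bluestein-based implementation as `scipy.signal.czt`.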

42 citations

Journal ArticleDOI
TL;DR: A deep learning based method integrates several customized modules, replacing standard convolutions with deformable and atrous convolutions in specific layers to adapt to the non-rigid shapes of cancerous regions and enlarge the receptive field.
Abstract: Automatic gastric cancer segmentation is a challenging problem in digital pathology image analysis. Accurate segmentation of gastric cancer regions can efficiently facilitate clinical diagnosis and pathological research. Technically, the problem is complicated by the varying sizes, vague boundaries, and non-rigid shapes of cancerous regions. To address these challenges, we use a deep learning based method and integrate several customized modules. Structurally, we replace the basic form of convolution with deformable and atrous convolutions in specific layers, adapting to the non-rigid shapes and enlarging the receptive field. We take advantage of the Atrous Spatial Pyramid Pooling module and encoder-decoder based semantic-level embedding networks for multi-scale segmentation. In addition, we propose a lightweight decoder to fuse the contextual information, and utilize dense upsampling convolution for boundary refinement at the end of the decoder. Experimentally, extensive comparative experiments were conducted on our own gastric cancer segmentation dataset, which was carefully annotated at pixel level by medical specialists. The quantitative comparisons against several prior methods demonstrate the superiority of our approach: we achieve 91.60% pixel-level accuracy and 82.65% mean Intersection over Union.
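The receptive-field property that atrous (dilated) convolution contributes here is easy to see in one dimension: spacing the kernel taps by a dilation factor widens the input span to (K−1)·dilation + 1 samples without adding weights. The sketch below is a generic illustration with made-up data, not the network's layers.

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """1-D atrous (dilated) cross-correlation with 'valid' padding:
    out[i] = sum_k w[k] * x[i + k*dilation].  The receptive field is
    (len(w)-1)*dilation + 1 samples, while the weight count is fixed."""
    K = len(w)
    span = (K - 1) * dilation + 1
    out_len = len(x) - span + 1
    out = np.zeros(out_len)
    for i in range(out_len):
        out[i] = sum(w[k] * x[i + k * dilation] for k in range(K))
    return out

x = np.arange(10.0)
w = np.array([1.0, 0.0, -1.0])       # finite-difference kernel
y1 = dilated_conv1d(x, w, dilation=1)  # spans 3 samples: x[i] - x[i+2]
y2 = dilated_conv1d(x, w, dilation=2)  # spans 5 samples: x[i] - x[i+4]
```

Stacking such layers with growing dilation rates, as Atrous Spatial Pyramid Pooling does across parallel branches, aggregates context at multiple scales at constant parameter cost.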

42 citations


Network Information
Related Topics (5)
Convolutional neural network
74.7K papers, 2M citations
90% related
Image segmentation
79.6K papers, 1.8M citations
90% related
Feature extraction
111.8K papers, 2.1M citations
89% related
Deep learning
79.8K papers, 2.1M citations
88% related
Feature (computer vision)
128.2K papers, 1.7M citations
87% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    469
2022    859
2021    330
2020    322
2019    298
2018    236