scispace - formally typeset
Topic

Upsampling

About: Upsampling is a research topic. Over the lifetime, 2426 publications have been published within this topic receiving 57613 citations.


Papers
Journal ArticleDOI
TL;DR: In this paper, a meta-subnetwork is learned to dynamically adjust the weights of residual graph convolution (RGC) blocks, and a farthest-sampling block is adopted to sample different numbers of points.
Abstract: Point cloud upsampling is vital to the quality of the mesh in three-dimensional reconstruction. Recent research on point cloud upsampling has achieved great success thanks to the development of deep learning. However, existing methods treat point cloud upsampling at different scale factors as independent tasks: a specific model must be trained for each scale factor, which is inefficient and impractical in terms of storage and computation in real applications. To address this limitation, we propose a novel method called "Meta-PU", the first to support point cloud upsampling at arbitrary scale factors with a single model. In Meta-PU, besides the backbone network consisting of residual graph convolution (RGC) blocks, a meta-subnetwork is learned to dynamically adjust the weights of the RGC blocks, and a farthest-sampling block is adopted to sample different numbers of points. Together, these two components enable Meta-PU to upsample a point cloud continuously at arbitrary scale factors using only a single model. In addition, the experiments reveal that training on multiple scales simultaneously benefits each individual scale, so Meta-PU even outperforms existing methods trained for a single specific scale factor.
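The farthest-sampling block mentioned above is a standard building block in point cloud networks. As a minimal illustration (not the paper's code), the classic greedy farthest point sampling algorithm can be sketched in NumPy: it repeatedly picks the point farthest from everything selected so far, so any requested number of output points stays well spread over the cloud.

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Greedily select m indices so that each newly chosen point is as far
    as possible from all previously chosen points."""
    n = points.shape[0]
    chosen = np.zeros(m, dtype=int)   # chosen[0] = 0: arbitrary seed point
    nearest = np.full(n, np.inf)      # distance to the nearest chosen point
    for i in range(1, m):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        nearest = np.minimum(nearest, d)
        chosen[i] = int(np.argmax(nearest))
    return chosen

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))
idx = farthest_point_sampling(pts, 256)  # indices of 256 well-spread points
```

Because the number of selected points `m` is a runtime argument rather than a fixed layer size, the same mechanism can emit a different output count per scale factor, which is what lets a single model serve arbitrary scales.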

34 citations

Journal ArticleDOI
TL;DR: A novel model for medical image segmentation based on a deep multiscale convolutional neural network (CNN) that is more robust than other segmentation methods and boosts segmentation accuracy.
Abstract: Medical image segmentation is one of the hot issues in image processing, and precise segmentation of medical images is a vital guarantee for follow-up treatment. At present, however, low gray-level contrast and blurred tissue boundaries are common in medical images, so segmentation accuracy is hard to improve. In particular, deep learning methods need many training samples, which makes training time-consuming. We therefore propose a novel model for medical image segmentation based on a deep multiscale convolutional neural network (CNN). First, we extract the region of interest from the raw medical images. Then, data augmentation is applied to obtain more training data. The proposed method contains three components: an encoder, a U-net, and a decoder. The encoder is mainly responsible for feature extraction from 2D image slices. The U-net cascades the features of each encoder block with those obtained by deconvolution in the decoder at different scales. The decoder is mainly responsible for upsampling the feature maps after feature extraction of each group. Simulation results show that the new method boosts segmentation accuracy and is more robust than other segmentation methods.
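The decoder step described above (upsample the deep features, then cascade them with the matching encoder features) is the generic U-Net skip-connection pattern. A minimal NumPy sketch, with nearest-neighbour interpolation standing in for the learned deconvolution, illustrates the shapes involved (illustrative only, not the paper's implementation):

```python
import numpy as np

def upsample_nearest(feat, scale=2):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return feat.repeat(scale, axis=1).repeat(scale, axis=2)

def decoder_step(deep, skip):
    """One U-Net style decoder step: upsample the deeper features and
    concatenate the encoder's skip features along the channel axis."""
    up = upsample_nearest(deep)
    assert up.shape[1:] == skip.shape[1:], "spatial sizes must match"
    return np.concatenate([up, skip], axis=0)

deep = np.zeros((64, 16, 16))   # low-resolution, many channels
skip = np.zeros((32, 32, 32))   # encoder features at the target resolution
out = decoder_step(deep, skip)  # (64 + 32) channels at 32 x 32
```

In a trained network the upsampling would be a learned transposed convolution and the concatenated features would pass through further convolutions, but the channel/resolution bookkeeping is exactly this.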

34 citations

Patent
24 Apr 2013
TL;DR: Multiple phased-array coils are used to acquire k-space data, with the central k-space region sampled at high density and the periphery randomly downsampled according to a Gaussian distribution; each coil's k-space data is Fourier-transformed into image space and linearly fitted using the coil sensitivity information to form a reconstructed spin-density image, and the frequency-domain signal after parallel-MRI (PMRI) downsampling is used to obtain a reference image.
Abstract: The invention provides an image processing method based on sparse-sampling magnetic resonance imaging. Multiple phased-array coils are used: data in the central k-space region is acquired at high density, while the peripheral k-space data is randomly downsampled according to a Gaussian distribution. The k-space data collected by each coil is Fourier-transformed into image space, and the image-space signals are linearly fitted according to each coil's sensitivity information to form a reconstructed spin-density image ρ; a reference image is obtained from the frequency-domain signal after parallel-MRI (PMRI) downsampling. Local noise variance is then used to weight the regularization term, achieving accurate image reconstruction.
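The sampling pattern the patent describes (dense centre, Gaussian-decaying random periphery) is a common variable-density undersampling scheme. A small NumPy sketch of a 1D phase-encode line mask of this kind, with illustrative parameter values not taken from the patent:

```python
import numpy as np

def sampling_mask(n=256, center=32, sigma=0.25, frac=0.3, seed=None):
    """1D k-space line mask: fully sample the central `center` lines,
    keep each peripheral line with a Gaussian-decaying probability."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n, dtype=bool)
    lo, hi = n // 2 - center // 2, n // 2 + center // 2
    mask[lo:hi] = True                           # dense central region
    k = (np.arange(n) - n / 2) / (n / 2)         # normalized frequency
    prob = frac * np.exp(-k**2 / (2 * sigma**2)) # Gaussian density profile
    mask |= rng.random(n) < prob                 # random peripheral lines
    return mask

m = sampling_mask(seed=0)  # True = acquired line, False = skipped
```

Keeping the low-frequency centre fully sampled preserves overall image contrast (and provides data for coil-sensitivity estimation), while the random periphery turns aliasing into noise-like artifacts that regularized reconstruction can suppress.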

34 citations

Journal ArticleDOI
TL;DR: Two undecimated forms of the Dual Tree Complex Wavelet Transform (DT-CWT) are introduced together with their application to image denoising and robust feature extraction; together with the DT-CWT, they offer a trade-off between denoising performance, computational efficiency, and memory requirements.
Abstract: Two undecimated forms of the Dual Tree Complex Wavelet Transform (DT-CWT) are introduced together with their application to image denoising and robust feature extraction. These undecimated transforms extend the DT-CWT by removing the downsampling of filter outputs and upsampling the complex filter pairs, in a structure similar to the Undecimated Discrete Wavelet Transform (UDWT). Both developed transforms offer exact translational invariance, improved scale-to-scale coefficient correlation, and the directional selectivity of the DT-CWT. Additionally, within each developed transform the subbands are of a consistent size; they therefore benefit from a direct one-to-one relationship between co-located coefficients at all scales, which gives consistent phase relationships across scales. These advantages can be exploited in applications such as denoising, image fusion, segmentation, and robust feature extraction. The results of two example applications (bivariate shrinkage denoising and robust feature extraction) demonstrate objective and subjective improvements over the DT-CWT. The two novel transforms together with the DT-CWT offer a trade-off between denoising performance, computational efficiency, and memory requirements.

Highlights:
- Proposed transforms have exact translational invariance.
- Coefficients have one-to-one cross-scale relationships.
- Improved results for two example applications.
- Matlab code available at: www.bristol.ac.uk/vi-lab/projects/udtcwt.
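The "remove downsampling, upsample the filters instead" construction is the same à trous idea used by the UDWT: at each level the filters are dilated by inserting zeros between taps, so every subband keeps the input length. A minimal 1D NumPy sketch with Haar filters (illustrative of the undecimated structure only, not the complex dual-tree filters of the paper):

```python
import numpy as np

def atrous(h, level):
    """Dilate a filter by inserting 2**level - 1 zeros between taps
    (the 'algorithme a trous')."""
    step = 2 ** level
    out = np.zeros((len(h) - 1) * step + 1)
    out[::step] = h
    return out

def undecimated_level(x, h, g, level):
    """One undecimated analysis level: filter with the dilated low-pass
    and high-pass filters and keep the full signal length (no
    downsampling), so every subband matches the input size."""
    approx = np.convolve(x, atrous(h, level), mode="same")
    detail = np.convolve(x, atrous(g, level), mode="same")
    return approx, detail

h = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar low-pass
g = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar high-pass
x = np.random.default_rng(0).random(64)
a, d = undecimated_level(x, h, g, level=1)
```

Because no samples are discarded, shifting `x` by one sample shifts `a` and `d` by exactly one sample as well, which is the translational invariance the abstract refers to.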

34 citations

Posted Content
TL;DR: A Feature-fusion Encoder-Decoder Network (FED-Net) based 2D segmentation model is proposed to tackle the challenging problem of liver lesion segmentation from CT images and achieves competitive results compared with other state-of-the-art methods.
Abstract: Liver lesion segmentation is a difficult yet critical task for medical image analysis. Recently, deep learning based image segmentation methods have achieved promising performance, which can be divided into three categories: 2D, 2.5D and 3D, based on the dimensionality of the models. However, 2.5D and 3D methods can have very high complexity and 2D methods may not perform satisfactorily. To obtain competitive performance with low complexity, in this paper, we propose a Feature-fusion Encoder-Decoder Network (FED-Net) based 2D segmentation model to tackle the challenging problem of liver lesion segmentation from CT images. Our feature fusion method is based on the attention mechanism, which fuses high-level features carrying semantic information with low-level features having image details. Additionally, to compensate for the information loss during the upsampling process, a dense upsampling convolution and a residual convolutional structure are proposed. We tested our method on the dataset of MICCAI 2017 Liver Tumor Segmentation (LiTS) Challenge and achieved competitive results compared with other state-of-the-art methods.
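Dense upsampling convolution (as used in FED-Net's decoder) produces the high-resolution map by letting a convolution predict r*r values per output pixel in the channel dimension and then rearranging them into space, rather than by interpolating. The rearrangement step, often called pixel shuffle, can be sketched in NumPy (a generic sketch of the operation, not the FED-Net code):

```python
import numpy as np

def pixel_shuffle(feat, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r):
    each group of r*r channels becomes an r x r spatial block, so the
    learned values fill the upsampled grid with no interpolation."""
    c_r2, h, w = feat.shape
    c = c_r2 // (r * r)
    x = feat.reshape(c, r, r, h, w)   # split channels into (c, i, j)
    x = x.transpose(0, 3, 1, 4, 2)    # reorder to (c, h, i, w, j)
    return x.reshape(c, h * r, w * r)

feat = np.arange(8 * 3 * 3, dtype=float).reshape(8, 3, 3)
out = pixel_shuffle(feat, 2)          # (2, 6, 6) high-resolution map
```

A preceding convolution supplies the `C*r*r` channels; since every output pixel is directly predicted, the information loss of fixed interpolation kernels is avoided, which is the compensation the abstract describes.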

34 citations


Network Information
Related Topics (5)
Convolutional neural network
74.7K papers, 2M citations
90% related
Image segmentation
79.6K papers, 1.8M citations
90% related
Feature extraction
111.8K papers, 2.1M citations
89% related
Deep learning
79.8K papers, 2.1M citations
88% related
Feature (computer vision)
128.2K papers, 1.7M citations
87% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    469
2022    859
2021    330
2020    322
2019    298
2018    236