
Showing papers on "Bicubic interpolation published in 2022"


Posted Content
TL;DR: The proposed DnSRGAN method addresses the high noise and artifacts that cause cardiac images to be reconstructed incorrectly during super-resolution, and is capable of high-quality reconstruction of noisy cardiac images.

22 citations


Proceedings Article
01 Jun 2022
TL;DR: This paper summarizes the 1st NTIRE challenge on stereo image super-resolution (restoration of rich details in a pair of low-resolution stereo images), with a focus on new solutions and results.
Abstract: In this paper, we summarize the 1st NTIRE challenge on stereo image super-resolution (restoration of rich details in a pair of low-resolution stereo images) with a focus on new solutions and results. This challenge has 1 track aiming at the stereo image super-resolution problem under a standard bicubic degradation. In total, 238 participants were successfully registered, and 21 teams competed in the final testing phase. Among those participants, 20 teams successfully submitted results with PSNR (RGB) scores better than the baseline. This challenge establishes a new benchmark for stereo image SR.
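The ranking criterion above — PSNR (RGB) against the bicubic baseline — can be computed as follows. This is a generic sketch, not the challenge's official evaluation code; the `peak` default assumes 8-bit pixel values, and images are given as flat lists for simplicity.

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two equal-sized images,
    given as flat lists of pixel values (peak assumes 8-bit data)."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

A submission beats the baseline when its PSNR over the test set exceeds that of bicubic upscaling.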

15 citations


Journal Article
Lin Wang, Kuk-Jin Yoon
TL;DR: A novel semi-supervised student-teacher super-resolution approach called S2TSR that super-resolves both labeled and unlabeled LR images via adversarial learning, together with a new SR network based on non-local and attention mechanisms that learns better features from the limited labeled LR images.

12 citations


Journal Article
TL;DR: STNet is an end-to-end generative framework that synthesizes spatiotemporal super-resolution volumes with high fidelity for time-varying data.
Abstract: We present STNet, an end-to-end generative framework that synthesizes spatiotemporal super-resolution volumes with high fidelity for time-varying data. STNet includes two modules: a generator and a spatiotemporal discriminator. The input to the generator is two low-resolution volumes at both ends, and the output is the intermediate and the two ending spatiotemporal super-resolution volumes. The spatiotemporal discriminator, leveraging convolutional long short-term memory, accepts a spatiotemporal super-resolution sequence as input and predicts a conditional score for each volume based on its spatial (the volume itself) and temporal (the previous volumes) information. We propose an unsupervised pre-training stage using cycle loss to improve the generalization of STNet. Once trained, STNet can generate spatiotemporal super-resolution volumes from low-resolution ones, offering scientists an option to save data storage (i.e., sparsely sampling the simulation output in both spatial and temporal dimensions). We compare STNet with the baseline bicubic+linear interpolation, two deep learning solutions (SSR+TSF, STD), and a state-of-the-art tensor compression solution (TTHRESH) to show the effectiveness of STNet.
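The bicubic+linear baseline mentioned above upsamples each volume spatially with bicubic interpolation and fills the missing timesteps by linearly blending the two end volumes. A minimal sketch of the temporal half, with volumes flattened to value lists (names are illustrative, not from the paper):

```python
def linear_temporal(v0, v1, t):
    """Linearly blend two volumes (flat value lists) at fraction t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
```

The midpoint volume, for example, is `linear_temporal(v0, v1, 0.5)`; STNet replaces exactly this kind of content-blind blending with a learned generator.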

10 citations


Journal Article
TL;DR: Zhang et al. propose a terrain feature-aware superresolution model (TfaSR) to guide digital elevation model (DEM) super-resolution towards the extraction and optimization of terrain features.
Abstract: Neural networks (NNs) have demonstrated the potential to recover finer textural details from lower-resolution images by superresolution (SR). Given similar grid-based data structures, some researchers have transferred image SR methods to digital elevation models (DEMs). These efforts have yielded better results than traditional spatial interpolation methods. However, terrain data present inherently different characteristics and practical meanings compared with natural images. This makes existing SR methods, which operate on the perceptual visual features of images, unsuitable for direct adoption in extracting terrain features. In this paper, we argue that the problem lies in the lack of explicit terrain feature modeling and thus propose a terrain feature-aware superresolution model (TfaSR) to guide DEM SR towards the extraction and optimization of terrain features. Specifically, a deep residual module and a deformable convolution module are integrated to extract deep and adaptive terrain features, respectively. In addition, explicit terrain feature-aware optimization is proposed to focus on local terrain feature refinement during training. Extensive experiments show that TfaSR achieves state-of-the-art performance in terrain feature preservation during DEM SR. Specifically, compared with the traditional bicubic interpolation method and existing neural network methods (SRGAN, SRResNet, and SRCNN), the RMSE of our results is improved by 1.1% to 23.8% when recovering the DEM from 120 m to 30 m, by 4.9% to 22.7% when recovering the DEM from 60 m to 30 m, and by 7.8% to 53.7% when recovering the DEM from 30 m to 10 m. The source code is shared on Figshare (https://doi.org/10.6084/m9.figshare.19597201).
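The RMSE figures quoted above compare predicted and reference elevations cell by cell. A minimal sketch of the metric, with DEMs flattened to elevation lists (illustrative only, not the paper's evaluation code):

```python
def rmse(ref, pred):
    """Root-mean-square error between reference and predicted elevations,
    both given as flat lists over the same DEM grid."""
    n = len(ref)
    return (sum((r - p) ** 2 for r, p in zip(ref, pred)) / n) ** 0.5
```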

8 citations


Journal Article
19 Jan 2022 - Sensors
TL;DR: Zhang et al. investigate whether a traditional interpolation method and three strong neural-network-based image super-resolution methods are appropriate for DEM SR, and find that SRGAN (Super-Resolution with Generative Adversarial Network) achieves the best performance in accuracy evaluation over a series of DEM SR experiments.
Abstract: High-resolution digital elevation models (DEMs) play a critical role in geospatial databases and can be applied to many terrain-related studies such as facility siting, hydrological analysis, and urban design. However, due to limitations in equipment precision, high-resolution DEM data are difficult to collect. A practical idea is to recover high-resolution DEMs from easily obtained low-resolution DEMs, a process termed DEM super-resolution (SR). However, traditional DEM SR methods (e.g., bicubic interpolation) tend to over-smooth high-frequency regions on account of averaging local variations. With the recent development of machine learning, image SR methods have made great progress. Nevertheless, due to the complexity of terrain characteristics (e.g., peaks and valleys) and the large difference between the elevation field and the image RGB (Red, Green, and Blue) value field, few works apply image SR methods to the task of DEM SR. Therefore, this paper investigates whether state-of-the-art image SR methods are appropriate for DEM SR. More specifically, the traditional interpolation method and three strong SR methods based on neural networks are chosen for comparison. Experimental results suggest that SRGAN (Super-Resolution with Generative Adversarial Network) achieves the best performance in accuracy evaluation over a series of DEM SR experiments.
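The over-smoothing attributed to bicubic interpolation comes from its fixed 4-tap cubic convolution kernel, which averages local variations regardless of terrain content. A one-dimensional sketch of that kernel is below; applied separably along rows and columns it yields bicubic interpolation. The parameter a = -0.5 is the common Keys/Catmull-Rom-style choice, an assumption here rather than anything specified in the paper.

```python
def cubic_kernel(s, a=-0.5):
    """Keys cubic convolution kernel with free parameter a."""
    s = abs(s)
    if s <= 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def cubic_interp_1d(samples, x):
    """Interpolate a 1-D signal at fractional position x from its four
    nearest samples, clamping indices at the borders."""
    i = int(x)
    total = 0.0
    for k in range(-1, 3):
        idx = min(max(i + k, 0), len(samples) - 1)
        total += samples[idx] * cubic_kernel(x - (i + k))
    return total
```

The kernel reproduces sample values exactly at integer positions, but between samples it is a weighted average of four neighbors, which is why sharp peaks and valleys get rounded off.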

8 citations



Journal Article
TL;DR: A semi-supervised student-teacher super-resolution (S2TSR) approach that super-resolves both labeled and unlabeled LR images via adversarial learning.

7 citations


Journal Article
TL;DR: In this paper, the Lagrange–Chebyshev interpolation polynomial of the first kind is used for image resizing, in both downscaling and upscaling, to any desired size.

7 citations


Journal Article
01 Dec 2022 - Energy
TL;DR: In this paper, the influence of the convergent curve of a supersonic nozzle on the carbon separation process was investigated; a CFD model based on nucleation and growth theory, a second-order upwind scheme, and a density-based solution method was employed to predict the CO2 supersonic separation process in the nozzle.

7 citations


Journal Article
TL;DR: Zhang et al. propose a Lightening Super-Resolution (LSR) deep network, which uses back-projection to iteratively learn the enhanced and dark features in low-resolution space and up-samples the enhanced features at the last stage of the network to produce the final enhanced, high-resolution image.

Journal Article
TL;DR: In this paper, a super-resolution (SR) model based on a convolutional neural network is proposed and applied to the near-surface temperature in urban areas; it incorporates a skip connection, a channel attention mechanism, and separate feature extractors for the inputs of temperature, building height, downward shortwave radiation, and horizontal velocity.

Journal Article
TL;DR: In this paper, a 2D-to-3D technique is developed to realize SR 3D shape reconstruction from 2D fringe images in fringe projection profilometry (FPP); an FPP system is applied to obtain a real-world dataset in which paired LR-HR images of the same scene are captured.
Abstract: Deep learning techniques have exhibited promising performance in achieving high-resolution (HR) images from their low-resolution (LR) counterparts in the super-resolution (SR) field. However, most existing SR methods have two underlying problems. First, degraded datasets (i.e., bicubic downsampling) are usually used to train and evaluate the network model, which may be less effective in practical scenarios. Second, a 2-D-to-3-D SR technique is lacking. In this article, a real-world 2-D-to-3-D technique is developed to realize SR 3-D shape reconstruction from 2-D fringe images in fringe projection profilometry (FPP). An FPP system consisting of one projector and dual cameras is applied to obtain the real-world dataset, where paired LR-HR images of the same scene are captured. The 3-D geometrical constraints solved from the FPP system are employed to align the image pairs by pixel-to-pixel mapping so that a more accurate dataset can be obtained. In addition, a flexible multiple-to-two network structure is introduced to achieve an SR 3-D point cloud from multiple phase-shifting patterns. Experiments demonstrate the comparison between traditional degraded training and our training.

Journal Article
21 Apr 2022 - Sensors
TL;DR: The proposed method, which learns a dictionary to capture topographical features and reconstructs them using that dictionary, produces super-resolution results with high interpretability.
Abstract: The comprehensive production of detailed bathymetric maps is important for disaster prevention, resource exploration, safe navigation, marine salvage, and monitoring of marine organisms. However, owing to observation difficulties, the amount of data on the world’s seabed topography is scarce. Therefore, it is essential to develop methods that effectively use the limited data. In this study, based on dictionary learning and sparse coding, we modified the super-resolution technique and applied it to seafloor topographical maps. Improving on the conventional method, before dictionary learning, we performed pre-processing to separate the teacher image into a low-frequency component that has a general structure and a high-frequency component that captures the detailed topographical features. We learn the topographical features by training the dictionary. As a result, the root-mean-square error (RMSE) was reduced by 30% compared with bicubic interpolation and accuracy was improved, especially in the rugged part of the terrain. The proposed method, which learns a dictionary to capture topographical features and reconstructs them using a dictionary, produces super-resolution with high interpretability.
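The pre-processing described above — splitting the teacher image into a smooth low-frequency part and a detail-carrying high-frequency residual — can be sketched in one dimension, with a moving average standing in for the smoothing filter (the paper's actual filter is not specified here):

```python
def split_frequency(signal, radius=1):
    """Split a 1-D signal into a low-frequency component (moving average)
    and a high-frequency residual; low + high reconstructs the input."""
    n = len(signal)
    low = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        low.append(sum(signal[lo:hi]) / (hi - lo))
    high = [s - l for s, l in zip(signal, low)]
    return low, high
```

Only the high-frequency residual carries the rugged-terrain detail that the dictionary is trained to capture, which is why the split helps in the steep parts of the seabed.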

Journal Article
TL;DR: Zhang et al. propose a U-shaped attention connection network (US-ACN) for remote-sensing image super-resolution (SR) that requires no synthetic external data for training.
Abstract: In recent years, deep learning-based remote-sensing image super-resolution (SR) methods have made significant progress, and these methods require a large amount of synthetic data for training. To obtain sufficient training data, researchers often generate synthetic data via fixed bicubic downsampling. However, the synthesized data cannot reflect the complex degradation process of real remote-sensing images. Thus, performance dramatically degrades when these methods are applied to real low-resolution (LR) remote-sensing images. This letter proposes a U-shaped attention connection network (US-ACN) for remote-sensing image SR to solve this issue. Our US-ACN does not rely on any synthetic external dataset for training and merely requires one LR image to complete the training. The US-ACN exploits the strong internal feature repetitiveness of remote-sensing images and fully learns this internal repetitive feature through a well-designed network to achieve remote-sensing image SR. In addition, we design a 3-D attention module to generate effective 3-D weights by modeling channel and spatial attention weights, which is more helpful for the learning of internal features. Through the U-shaped connection among attention modules, context information propagation and attention weight learning are fully utilized. Extensive experiments show that our US-ACN adapts well to remote-sensing image SR in various situations and achieves advanced performance.

Journal Article
TL;DR: In this article, an unsupervised deep learning semantic interpolation approach is proposed that synthesizes new intermediate slices from encoded low-resolution examples by exploiting the latent space generated by autoencoders.



Journal Article
TL;DR: In this paper, a fast and novel method for single-image reconstruction using the super-resolution (SR) technique is proposed, whose working principle is divided into three components.
Abstract: A fast and novel method for single-image reconstruction using the super-resolution (SR) technique is proposed in this paper. The working principle of the proposed scheme is divided into three components. In the first component, a low-resolution image is divided into several homogeneous or non-homogeneous regions, based on the analysis of texture patterns within each region. In the second component, only the non-homogeneous regions undergo sparse representation for SR image reconstruction. In the third component, the reconstructed region from the second component undergoes a statistical prediction model to generate a more enhanced version. The remaining homogeneous regions are bicubic-interpolated and complete the required high-resolution image. The proposed technique is applied to large-scale electrical, machine, and civil architectural design images. These images are chosen because they are huge in size, and processing such large images for any application is time-consuming. The proposed SR technique reconstructs a better SR image from its lower version with low time complexity. The performance of the proposed system on the electrical, machine, and civil architectural design images is compared with state-of-the-art methods, and it is shown that the proposed scheme outperforms the competing methods.
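The first component's partition into homogeneous and non-homogeneous regions can be approximated by thresholding local intensity variance. This is a hypothetical sketch — the paper's actual texture analysis may differ, and the threshold value is purely illustrative:

```python
def is_homogeneous(region, threshold=1.0):
    """Flag a region (flat list of intensities) as homogeneous when its
    variance falls below a threshold; homogeneous regions can then be
    bicubic-interpolated while textured ones take the sparse SR path."""
    mean = sum(region) / len(region)
    var = sum((v - mean) ** 2 for v in region) / len(region)
    return var < threshold
```

Routing only the textured regions through the expensive sparse-representation step is what gives the scheme its low time complexity on large design drawings, which are mostly flat background.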

Proceedings Article
01 Jan 2022
TL;DR: In this paper, the transition from classic to deep-learning upscaling is explored through edge-SR (eSR), a set of one-layer architectures that use interpretable mechanisms to upscale images; the authors find that under high-speed requirements, eSR offers a better trade-off between image quality and runtime performance.
Abstract: Classic image scaling (e.g. bicubic) can be seen as one convolutional layer and a single upscaling filter. Its implementation is ubiquitous in all display devices and image processing software. In the last decade deep learning systems have been introduced for the task of image super-resolution (SR), using several convolutional layers and numerous filters. These methods have taken over the benchmarks of image quality for upscaling tasks. Would it be possible to replace classic upscalers with deep learning architectures on edge devices such as display panels, tablets, laptop computers, etc.? On one hand, the current trend in Edge-AI chips shows a promising future in this direction, with rapid development of hardware that can run deep-learning tasks efficiently. On the other hand, in image SR only a few architectures have pushed the limit to the extremely small sizes that can actually run on edge devices in real time. We explore possible solutions to this problem with the aim of filling the gap between classic upscalers and small deep-learning configurations. As a transition from classic to deep-learning upscaling, we propose edge-SR (eSR), a set of one-layer architectures that use interpretable mechanisms to upscale images. Certainly, a one-layer architecture cannot reach the quality of deep learning systems. Nevertheless, we find that for high-speed requirements, eSR becomes better at trading off image quality and runtime performance. Filling the gap between classic and deep-learning architectures for image upscaling is critical for massive adoption of this technology. It is equally important to have an interpretable system that can reveal the inner strategies to solve this problem and guide us to future improvements and better understanding of larger networks.
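The framing of classic upscaling as "one convolutional layer and a single upscaling filter" can be illustrated with the simplest possible instance: each of the scale×scale output phases gets its own one-tap filter, and the phase outputs are interleaved (pixel shuffle). With identity taps this reproduces nearest-neighbor upscaling — a toy sketch of the idea, not the eSR architecture itself:

```python
def one_layer_upscale(img, scale=2):
    """Upscale a 2-D image (list of rows) by emitting scale*scale output
    phases per input pixel and interleaving them (pixel shuffle).
    Identity taps per phase give nearest-neighbor upscaling."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w * scale) for _ in range(h * scale)]
    for y in range(h):
        for x in range(w):
            for dy in range(scale):
                for dx in range(scale):
                    out[y * scale + dy][x * scale + dx] = img[y][x]
    return out
```

Bicubic fits the same template with 4x4-tap filters per phase; eSR replaces the fixed taps with learned, interpretable ones.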


Journal Article
TL;DR: In this article, a generalizable low-frequency loss (LFL) is proposed to imitate the distribution of target LR images without using any paired examples, together with an adaptive data loss (ADL) for the downsampler, which can be adaptively learned and updated from the data during the training loops.
Abstract: Most image super-resolution (SR) methods are developed on synthetic low-resolution (LR) and high-resolution (HR) image pairs that are constructed by a predetermined operation, e.g., bicubic downsampling. As existing methods typically learn an inverse mapping of the specific function, they produce blurry results when applied to real-world images whose exact formulation is different and unknown. Therefore, several methods attempt to synthesize much more diverse LR samples or learn a realistic downsampling model. However, due to restrictive assumptions on the downsampling process, they are still biased and less generalizable. This study proposes a novel method to simulate an unknown downsampling process without imposing restrictive prior knowledge. We propose a generalizable low-frequency loss (LFL) in the adversarial training framework to imitate the distribution of target LR images without using any paired examples. Furthermore, we design an adaptive data loss (ADL) for the downsampler, which can be adaptively learned and updated from the data during the training loops. Extensive experiments validate that our downsampling model can facilitate existing SR methods to perform more accurate reconstructions on various synthetic and real-world examples than the conventional approaches.

Journal Article
TL;DR: Zhang et al. propose a novel data acquisition process to shoot a large set of LR-HR image pairs using real cameras, which can be aligned at very high sub-pixel precision by a novel spatial-frequency dual-domain registration method.
Abstract: The performance of deep learning based image super-resolution (SR) methods depends on how accurately the paired low- and high-resolution images used for training characterize the sampling process of real cameras. Low- and high-resolution (LR-HR) image pairs synthesized by degradation models (e.g., bicubic downsampling) deviate from those in reality; thus the synthetically trained DCNN SR models perform disappointingly when applied to real-world images. To address this issue, we propose a novel data acquisition process to shoot a large set of LR-HR image pairs using real cameras. The images are displayed on an ultra-high quality screen and captured at different resolutions. The resulting LR-HR image pairs can be aligned at very high sub-pixel precision by a novel spatial-frequency dual-domain registration method, and hence they provide more appropriate training data for the learning task of super-resolution. Moreover, the captured HR image and the original digital image offer dual references to strengthen supervised learning. Experimental results show that training a super-resolution DCNN on our LR-HR dataset achieves higher image quality than training it on other datasets in the literature. Moreover, the proposed screen-capturing data collection process can be automated; it can be carried out for any target camera with ease and low cost, offering a practical way of tailoring the training of a DCNN SR model separately to each of the given cameras.


Journal Article
TL;DR: In this article, the authors propose a super-resolution and transfer learning model, composed of pre-training and fine-tuning models, to improve the spatial resolution of the VIS images of the FY4 satellite.
Abstract: Remote sensing images acquired by the FY4 satellite are crucial for regional cloud monitoring and meteorological services. Inspired by the success of deep learning networks in image super-resolution, we applied image super-resolution to FY4 visible spectrum (VIS) images. However, training a robust network directly for FY4 VIS image super-resolution remains challenging due to the limited availability of high-resolution FY4 sample data. Here, we propose a super-resolution and transfer learning model, FY4-SR-Net, composed of pre-training and fine-tuning models. The pre-training model was developed using a deep residual network with a large number of FY4A 4 km and 1 km resolution VIS images as training data. The knowledge derived from the 4 km to 1 km resolution images was transferred to FY4B 1 km to 0.25 km resolution VIS images. FY4-SR-Net is fine-tuned by incorporating a limited number of 1 km and 0.25 km resolution panchromatic (PAN) images, and then produces 1 km super-resolution VIS images of the FY4 satellite. Using a one-day FY4 test dataset for qualitative and quantitative evaluations, FY4-SR-Net outperformed the classic bicubic interpolation approach with a 16.12% reduction in average root mean square error (RMSE) and a 2.97% rise in average peak signal-to-noise ratio (PSNR). The average structural similarity (SSIM) value increased by 0.0026. This work provides a new precedent for improving the spatial resolution of FY4-series meteorological satellites, which has important scientific significance and application value.

Journal Article
TL;DR: In this paper, a multi-degradation, unsupervised image super-resolution method based on deep learning is proposed to address two problems: high-resolution (HR) images are often insufficient or unavailable, and a single degradation model such as bicubic cannot super-resolve favorable images in the real world.
Abstract: In remote sensing, it is desirable to improve image resolution by using the image super-resolution (SR) technique. However, there are two challenges: the first is that high-resolution (HR) images are insufficient or unavailable; the second is that a single degradation model such as bicubic (BIC) cannot super-resolve favorable images in the real world. To address these two problems, this article presents a multi-degradation, unsupervised SR method based on deep learning. The framework consists of a degrader D to fit the image degradation model and a generator G to generate the SR image. By introducing D, calculating the loss between the SR image and an HR image, as supervised SR methods do, can be converted into calculating the loss between the low-resolution (LR) image and the image degraded from the SR image, thereby realizing unsupervised learning. Experiments on several degradation models show that our method renders state-of-the-art results compared with existing unsupervised SR methods, and achieves competitive results in contrast with supervised SR methods. Moreover, for real remote sensing images obtained by the Jilin-1 satellite, our method obtained more plausible results visually, which demonstrates its potential in real-world applications.

Proceedings Article
05 Sep 2022
TL;DR: In this paper, a sound field estimation method based on a physics-informed convolutional neural network (PICNN) using spline interpolation is proposed, in which the loss function penalizes deviation from the Helmholtz equation.
Abstract: A sound field estimation method based on a physics-informed convolutional neural network (PICNN) using spline interpolation is proposed. Most sound field estimation methods are based on wavefunction expansion, making the estimated function satisfy the Helmholtz equation. However, these methods rely only on physical properties; thus, they suffer from a significant deterioration of accuracy when the number of measurements is small. Recent learning-based methods based on neural networks have advantages in estimating from sparse measurements when training data are available. However, since physical properties are not taken into consideration, the estimated function can be a physically infeasible solution. We propose the application of PICNN to the sound field estimation problem by using a loss function that penalizes deviation from the Helmholtz equation. Since the output of the CNN is a spatially discretized pressure distribution, it is difficult to directly evaluate the Helmholtz-equation loss function. Therefore, we incorporate bicubic spline interpolation in the PICNN framework. Experimental results indicated that accurate and physically feasible estimation from sparse measurements can be achieved with the proposed method.
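The Helmholtz-equation loss penalizes deviations from ∇²p + k²p = 0. On a discretized pressure grid the residual can be sketched with a 5-point finite-difference Laplacian; note the paper instead evaluates it through bicubic spline interpolation of the CNN output, so this coarser stand-in is for illustration only:

```python
def helmholtz_residual(p, k, h):
    """Residual of the Helmholtz equation, lap(p) + k^2 * p, at each
    interior point of a 2-D pressure grid p with spacing h,
    using the 5-point finite-difference Laplacian."""
    res = []
    for y in range(1, len(p) - 1):
        for x in range(1, len(p[0]) - 1):
            lap = (p[y-1][x] + p[y+1][x] + p[y][x-1]
                   + p[y][x+1] - 4 * p[y][x]) / h**2
            res.append(lap + k**2 * p[y][x])
    return res
```

A physics-informed loss would then minimize, e.g., the mean square of these residuals alongside the data-fitting term.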

Proceedings Article
01 Aug 2022
TL;DR: In this article, an approach for real-time super-resolution on mobile devices is presented, which is able to deal with a wide range of degradations in real-world scenarios.
Abstract: Image Super-Resolution (ISR) aims at recovering High-Resolution (HR) images from their corresponding Low-Resolution (LR) counterparts. Although recent progress in ISR has been remarkable, most recent approaches are too computationally intensive to be deployed on edge devices, since they are deep learning based. Besides, these methods often fail in real-world scenes, since most of them adopt a simple, fixed "ideal" bicubic downsampling kernel applied to high-quality images to construct LR/HR training pairs, which may lose track of frequency-related details. In this work, an approach for real-time ISR on mobile devices is presented, which is able to deal with a wide range of degradations in real-world scenarios. Extensive experiments on traditional super-resolution datasets (Set5, Set14, BSD100, Urban100, Manga109, DIV2K) and real-world images with a variety of degradations demonstrate that our method outperforms the state-of-the-art methods, resulting in higher PSNR and SSIM, lower noise, and better visual quality. Most importantly, our method achieves real-time performance on mobile or edge devices.

Journal Article
Yue Wang, Haoran Meng, Xinyue Liu, Jiahao Liu, Xu Cui
TL;DR: In this paper, the authors propose an oversampled super-pixel image reconstruction method, which can be expressed as the implementation of nearest-neighbor interpolation to replace blank pixels in sparse sub-phase-shift holograms.
Abstract: Parallel phase-shifting digital holography (PPSDH) employing a polarization image sensor can suppress zero-order and twin-image noise through a single exposure and achieve instantaneous measurement of complex-valued dynamic objects; it has broad applications in areas such as biomedicine. To improve the imaging resolution of PPSDH, we propose an oversampled super-pixel image reconstruction method, which can be expressed as the implementation of nearest-neighbor interpolation to replace blank pixels in sparse sub-phase-shift holograms. We found experimentally that the maximum spatial lateral resolution of the reconstructed image based on the existing super-pixel method, B-spline, bicubic, bilinear, and the proposed nearest-neighbor interpolation was 12.4 µm, 11.4 µm, 9.8 µm, 8.8 µm, and 7.8 µm, respectively. The main reason for not reaching the ideal value of 6.9 µm was the inherent residual zero-order and twin-image noise, which needs to be removed in the future.
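The nearest-neighbor replacement of blank pixels can be sketched as copying, for every output pixel, the single valid sample within its block×block super-pixel. The function name and the valid-sample `offset` convention below are illustrative assumptions, not taken from the paper:

```python
def nn_fill(sparse, block=2, offset=(0, 0)):
    """Fill blank pixels of a sparse sub-hologram: each block-by-block
    super-pixel holds one valid sample at `offset`, which is copied
    into every pixel of that super-pixel (nearest-neighbor fill)."""
    oy, ox = offset
    h, w = len(sparse), len(sparse[0])
    return [[sparse[(y // block) * block + oy][(x // block) * block + ox]
             for x in range(w)] for y in range(h)]
```

For a polarization sensor with a 2x2 polarizer mosaic, each of the four sub-phase-shift holograms would be filled this way (with its own offset) before numerical reconstruction.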

Journal Article
Tibor Valuch
TL;DR: An online super-resolution (ONSR) method is proposed, which allows the model weights to be updated according to the degradation of the test image.