Journal ArticleDOI

Image super-resolution

Linwei Yue1, Huanfeng Shen1, Jie Li1, Qiangqiang Yuan1, Hongyan Zhang1, Liangpei Zhang1 
01 Nov 2016-Signal Processing (Elsevier)-Vol. 128, pp 389-408
TL;DR: This paper provides a review of SR from the perspective of techniques and applications, highlighting the main contributions of recent years and discussing the current obstacles for future research.
About: This article was published in Signal Processing on 2016-11-01 and has received 378 citations to date.
Citations
Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, gradient residual minimization (GRM) is integrated with a nonlocal prior to improve the quality of underwater image enhancement, and the experimental results demonstrate that NLP-GRM is superior to existing underwater enhancement methods.
Abstract: Underwater image enhancement is necessary to study aquatic flora and fauna. However, due to light absorption and scattering, acquired subaqueous images are severely hazed and degraded, which results in low contrast. In the literature, many algorithms aim to dehaze an image and enhance its quality. The NLP-GRM method aims to provide an efficient algorithm that delivers superior results under different environmental conditions in terms of both visual analysis and objective evaluation. Our approach integrates gradient residual minimization (GRM) with a nonlocal prior (NLP) into a new method called NLP-GRM. Initially, an underwater image is processed with the nonlocal prior, which dehazes the image based on a color assumption. The NLP output is then passed to the GRM stage, which reinforces edge strength and detail in the image after the underwater haze has been removed. The NLP-GRM algorithm has been evaluated both with quantitative metrics and subjectively. The experimental results demonstrate that NLP-GRM is superior to existing underwater enhancement methods.

1 citation

Journal Article
TL;DR: This paper compares different SR models that specialize in single-image processing and examines how they have evolved to take on many different objectives and forms over the years.
Abstract: Super-resolution (SR), the process of obtaining high-resolution images from one or more low-resolution observations of the same scene, has been a very popular topic of research in the last few decades in both the signal processing and image processing areas. Due to recent developments in Convolutional Neural Networks [1], the popularity of SR algorithms has skyrocketed, as the barrier to entry has been lowered significantly. Recently, this popularity has spread into video processing, to the point that SR models now run in real time. In this paper, we compare different SR models that specialize in single-image processing and take a glance at how they have evolved to take on many different objectives and shapes over the years.

1 citation

Journal ArticleDOI
TL;DR: In this article, a fully automated workflow based on soft computing is proposed to characterize the heterogeneous flow properties of cores for predictive continuum-scale models and to better understand the impacts of heterogeneity on flow.
Abstract: The influence of core-scale heterogeneity on continuum-scale flow and laboratory measurements is not well understood. To address this issue, we propose a fully automated workflow based on soft computing to characterize the heterogeneous flow properties of cores for predictive continuum-scale models. While the proposed AI-based workflow inherently has no trained knowledge of rock petrophysical properties, our results demonstrate that image features and morphological properties provide sufficient measures for petrophysical classification. Micro X-ray computed tomography (μxCT) image features were extracted from full core plug images by using a Convolutional Neural Network and Minkowski functional measurements. The features were then classified into specific classes using Principal Component Analysis followed by K-means clustering. Next, the petrophysical properties of each class were evaluated using pore-scale simulations to substantiate that unique classes were identified. The μxCT image was then up-scaled to a continuum-scale grid based on the defined classes. Last, simulation results were evaluated against real-time flooding data monitored by Positron Emission Tomography. Both homogeneous sandstone and heterogeneous carbonate were tested. Simulation and experimental saturation profiles compared well, demonstrating that the workflow provided high-fidelity characterization. Overall, we provided a novel workflow to build digital rock models in a fully automated way to better understand the impacts of heterogeneity on flow.
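A minimal sketch of the classification stage described above (dimensionality reduction with PCA followed by K-means clustering of per-subvolume feature vectors). The feature matrix is random placeholder data and the cluster count is an assumption, not the paper's setting:

```python
# Sketch of the PCA + K-means classification step; the features stand in for
# CNN activations and Minkowski functional measurements per subvolume.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 256))   # placeholder: one 256-D feature vector per subvolume

# Reduce dimensionality before clustering.
pca = PCA(n_components=10)
reduced = pca.fit_transform(features)

# Group subvolumes into a small number of petrophysical classes (4 is an assumption).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(reduced)      # class index per subvolume

print(np.bincount(labels))                # subvolume count per class
```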

1 citation

Proceedings ArticleDOI
01 Nov 2019
TL;DR: A method is proposed to increase the similarity between pixels by applying the Resnet module operation, which has an effect similar to an ensemble operation, yielding a better high-resolution image.
Abstract: The Resnet model behaves similarly to an ensemble, and its performance and parameter count can be adjusted through its modular design. Currently, Resnet is widely used as a backbone network. In particular, the Resnet module that compensates the weights can take the similarity of pixels into account. Therefore, in this paper, we propose a method to increase the similarity between pixels by performing the operation of the Resnet module, which has an effect similar to an ensemble operation. This gives us a better high-resolution image.
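A minimal sketch of a standard residual (ResNet-style) block of the kind the abstract refers to; the channel count and layer sizes are illustrative, not the authors' exact network:

```python
# Minimal residual block sketch with a skip connection; layer sizes are illustrative.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection adds the input back to the learned residual,
        # which is what gives stacked blocks their ensemble-like behaviour.
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock()(x).shape)  # torch.Size([1, 64, 32, 32])
```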

1 citation


Cites background from "Image super-resolution"

  • ...As the result, demand for high quality has been increasing [1]....

    [...]

  • ...Because of those constraints, the number of pixels in a device is limited, the quality[1] of the acquired image is degraded as shown in Fig....

    [...]

References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and its promise is demonstrated by comparison with both subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
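For readers who want to compute the index without the authors' MATLAB code, a minimal sketch using scikit-image's independent implementation follows; the images are random placeholders:

```python
# Sketch: computing SSIM with scikit-image (an independent implementation,
# not the authors' MATLAB code referenced above).
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                     # placeholder "original" image
distorted = np.clip(reference + 0.05 * rng.normal(size=reference.shape), 0, 1)

score = structural_similarity(reference, distorted, data_range=1.0)
print(f"SSIM = {score:.3f}")                           # 1.0 means identical images
```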

40,609 citations

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
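As a concrete illustration of one problem the review covers, here is a minimal sketch of scaled-form ADMM applied to the lasso; the data, penalty, step size, and iteration count are placeholders, not values from the book:

```python
# Sketch: scaled-form ADMM for the lasso, min 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[:5] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=m)

lam, rho = 0.1, 1.0
z = np.zeros(n)
u = np.zeros(n)
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse every iteration

def soft(v, k):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for _ in range(200):
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # x-update (ridge solve)
    z = soft(x + u, lam / rho)                                         # z-update (shrinkage)
    u = u + x - z                                                      # scaled dual update

print(np.count_nonzero(np.abs(z) > 1e-6), "nonzero coefficients recovered")
```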

17,433 citations

Journal ArticleDOI
TL;DR: It is shown that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much like the LARS algorithm does for the lasso.
Abstract: Summary. We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together. The elastic net is particularly useful when the number of predictors (p) is much bigger than the number of observations (n). By contrast, the lasso is not a very satisfactory variable selection method in the p ≫ n case.
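A minimal usage sketch of the elastic net, using scikit-learn's coordinate-descent implementation rather than the LARS-EN path algorithm proposed in the paper; the data and penalty weights are placeholders:

```python
# Sketch: fitting an elastic net with scikit-learn (coordinate descent,
# not the paper's LARS-EN path algorithm).
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))             # p >> n, the regime the abstract highlights
coef = np.zeros(500)
coef[:10] = 3.0                             # sparse ground-truth coefficients
y = X @ coef + 0.1 * rng.normal(size=100)

# alpha scales the total penalty; l1_ratio mixes the l1 (lasso) and l2 (ridge) terms.
model = ElasticNet(alpha=0.5, l1_ratio=0.5, max_iter=10000).fit(X, y)
print(np.count_nonzero(model.coef_), "predictors selected")
```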

16,538 citations


"Image super-resolution" refers background in this paper

  • ...As the l2 norm represents a smoothing prior and the l1 norm tends to preserve the edges, the lp (1 ≤ p ≤ 2) norm achieves a balance between them, thereby avoiding the staircase effect [110]....

    [...]
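For context, the lp-norm regularizer in the excerpt above typically appears in a reconstruction objective of the following form; the notation is an illustrative sketch and is not copied from the review:

```latex
% Illustrative l_p-regularized reconstruction objective.
% y: observed low-resolution image, x: latent high-resolution image,
% D: downsampling operator, H: blurring operator, \lambda: regularization weight.
\hat{x} = \arg\min_{x} \; \lVert y - DHx \rVert_2^2 + \lambda \lVert \nabla x \rVert_p^p,
\qquad 1 \le p \le 2
```

With p = 2 this reduces to a smoothing prior, with p = 1 it behaves like total variation and preserves edges, and intermediate values trade the two off, as the excerpt notes.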

Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.
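A minimal sketch of total-variation denoising in the spirit of the ROF model described above, using scikit-image's Chambolle solver (a different numerical scheme than the original constrained formulation); the image and weight are placeholders:

```python
# Sketch: total-variation denoising in the spirit of the ROF model,
# via scikit-image's Chambolle solver.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = img_as_float(data.camera())
noisy = np.clip(clean + 0.1 * rng.normal(size=clean.shape), 0, 1)

# weight controls the denoising strength (larger -> smoother, more piecewise-constant result).
denoised = denoise_tv_chambolle(noisy, weight=0.1)
print(denoised.shape, denoised.dtype)
```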

15,225 citations


"Image super-resolution" refers background in this paper

  • ...[93,103], based on the fact that an image is naturally “blocky” and discontinuous....

    [...]

Book
01 Jan 1977

8,009 citations


"Image super-resolution" refers background in this paper

  • ...In the early years, the smoothness of natural images was mainly considered, which leads to the quadratic property of the regularizations [99,100]....

    [...]
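For context, the quadratic regularization the excerpt above refers to typically takes a Tikhonov-type form such as the following; the notation is illustrative rather than taken from the references:

```latex
% Illustrative quadratic (Tikhonov-type) regularized reconstruction.
% y: observed low-resolution image, x: latent high-resolution image,
% D: downsampling operator, H: blurring operator,
% \Gamma: a smoothing operator such as the Laplacian, \lambda: regularization weight.
\hat{x} = \arg\min_{x} \; \lVert y - DHx \rVert_2^2 + \lambda \lVert \Gamma x \rVert_2^2
```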