Topic

High-dynamic-range imaging

About: High-dynamic-range imaging is a research topic. Over the lifetime, 766 publications have been published within this topic receiving 22577 citations.


Papers
Journal ArticleDOI
TL;DR: In this paper, an entropy-maximized adaptive histogram equalization (EMAHE) algorithm is proposed to improve the ability of images to express the details of dark or low-contrast targets.
Abstract: The digital time delay integration (digital TDI) technology of the complementary metal-oxide-semiconductor (CMOS) image sensor has been widely adopted and developed in the optical remote sensing field. However, the details of targets with low illumination or low contrast in high-contrast scenes are often drowned out, because the superposition of multi-stage images in the digital domain multiplies the read noise and the dark noise, thus limiting the imaging dynamic range. Through an in-depth analysis of the information transfer model of digital TDI, this paper explores effective ways to overcome this issue. Based on the evaluation and analysis of multi-stage images, the entropy-maximized adaptive histogram equalization (EMAHE) algorithm is proposed to improve the ability of images to express the details of dark or low-contrast targets. Furthermore, an image fusion method based on gradient pyramid decomposition and entropy weighting of images from different TDI stages is employed, which improves the detection ability of digital TDI CMOS sensors in complex, high-contrast scenes and yields images that are suitable for recognition by the human eye. The experimental results show that the proposed methods can effectively improve the high-dynamic-range imaging (HDRI) capability of the digital TDI CMOS sensor, producing images with greater entropy and average gradients.
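A minimal sketch of the two ideas the abstract describes, entropy-maximized equalization and entropy-weighted fusion, is given below. It is not the authors' EMAHE implementation: the gradient-pyramid decomposition is omitted, and the function names, clip-limit candidates, and fusion rule are illustrative assumptions.

```python
# Simplified sketch (not the paper's EMAHE code): choose the histogram
# clip limit that maximizes output entropy, then fuse multi-stage TDI
# images with entropy-derived weights.
import numpy as np

def entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(0, 1))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def clipped_equalize(img, clip):
    # Global histogram equalization with a clipped histogram; the clipped
    # mass is redistributed uniformly across all bins.
    hist, edges = np.histogram(img, bins=256, range=(0, 1))
    excess = np.maximum(hist - clip, 0).sum()
    hist = np.minimum(hist, clip) + excess / hist.size
    cdf = np.cumsum(hist) / hist.sum()
    return np.interp(img, edges[:-1], cdf)

def entropy_maximized_equalize(img, clips=(64, 128, 256, 512, 1024)):
    # Keep the candidate whose equalized result carries the most entropy.
    candidates = [clipped_equalize(img, c) for c in clips]
    return max(candidates, key=entropy)

def entropy_weighted_fusion(stage_images):
    # Hypothetical fusion rule: weight each TDI-stage image by its entropy.
    w = np.array([entropy(im) for im in stage_images])
    w = w / w.sum()
    return np.tensordot(w, np.stack(stage_images), axes=1)
```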

7 citations

Proceedings ArticleDOI
04 May 2020
TL;DR: Two different approaches to high dynamic range (HDR) imaging are considered – gamma encoding and modulo encoding – and a combination of deep image prior and total variation (TV) regularization for reconstructing low-light images is proposed.
Abstract: Traditionally, dynamic range enhancement for images has involved a combination of contrast improvement (via gamma correction or histogram equalization) and a denoising operation to reduce the effects of photon noise. More recently, modulo-imaging methods have been introduced for high dynamic range photography to significantly expand dynamic range at the sensing stage itself. The transformation function for both of these problems is highly non-linear, and the image reconstruction procedure is typically non-convex and ill-posed. A popular recent approach is to regularize the above inverse problem via a neural network prior (such as a trained autoencoder), but this requires extensive training over a dataset with thousands of paired regular/HDR image samples. In this paper, we introduce a new approach for HDR image reconstruction using neural priors that require no training data. Specifically, we employ deep image priors, which have been successfully used for imaging problems such as denoising, super-resolution, inpainting, and compressive sensing, with promising performance gains over conventional regularization techniques. We consider two different approaches to high dynamic range (HDR) imaging, gamma encoding and modulo encoding, and propose a combination of a deep image prior and total variation (TV) regularization for reconstructing low-light images. We demonstrate the significant improvement achieved by both of these approaches compared to traditional dynamic range enhancement techniques.
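The sketch below illustrates the general recipe described in the abstract (an untrained CNN prior plus TV regularization under a gamma or modulo forward model). It is not the authors' architecture or code; the network, optimizer settings, and loss weights are illustrative assumptions.

```python
# Minimal sketch of untrained-prior HDR recovery: a small CNN maps fixed
# noise to the latent HDR image, and we minimize a data-fit term under a
# gamma or modulo forward model plus a TV penalty.
import torch
import torch.nn as nn

def total_variation(x):
    # Anisotropic TV of an image tensor shaped (1, C, H, W).
    dh = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
    dw = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
    return dh + dw

def gamma_forward(x, gamma=2.2):
    return x.clamp(0, 1) ** (1.0 / gamma)

def modulo_forward(x, period=1.0):
    return torch.remainder(x, period)           # wrapped (modulo) sensing

prior = nn.Sequential(                           # tiny stand-in for a DIP network
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Softplus())

def reconstruct(measurement, forward, steps=2000, tv_weight=1e-4):
    # measurement: observed image tensor of shape (1, 3, H, W).
    z = torch.randn(1, 32, *measurement.shape[-2:])   # fixed random input
    opt = torch.optim.Adam(prior.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        x = prior(z)                                  # candidate HDR image
        loss = (forward(x) - measurement).pow(2).mean() + tv_weight * total_variation(x)
        loss.backward()
        opt.step()
    return prior(z).detach()
```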

7 citations

Journal ArticleDOI
TL;DR: This paper introduces the first approach (to the best of the authors' knowledge) to the reconstruction of high-resolution, high-dynamic-range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
Abstract: Photographs captured by smartphones and mid-range cameras have limited spatial resolution and dynamic range, with noisy response in underexposed regions and color artefacts in saturated areas. This paper introduces the first approach (to the best of our knowledge) to the reconstruction of high-resolution, high-dynamic-range color images from raw photographic bursts captured by a handheld camera with exposure bracketing. The method uses a physically accurate model of image formation to combine an iterative optimization algorithm for solving the corresponding inverse problem with a learned image representation for robust alignment and a learned natural image prior. The proposed algorithm is fast, has low memory requirements compared to state-of-the-art learning-based approaches to image restoration, and its features are learned end to end from synthetic yet realistic data. Extensive experiments demonstrate its excellent performance with super-resolution factors of up to ×4 on real photographs taken in the wild with hand-held cameras, and high robustness to low-light conditions, noise, camera shake, and moderate object motion.
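For context on the inverse problem this paper solves with learned alignment and priors, the sketch below shows a classical exposure-bracketed merge: recovering scene radiance from differently exposed frames. It is not the paper's method; the weighting scheme and variable names are assumptions for illustration.

```python
# Classical baseline (not the paper's learned approach): merge a bracketed
# burst of linear raw frames into a radiance estimate by dividing each
# frame by its exposure time and averaging with confidence weights.
import numpy as np

def merge_bracketed(frames, exposure_times, eps=1e-6):
    """frames: list of linear raw images scaled to [0, 1]; exposure_times: seconds."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for y, t in zip(frames, exposure_times):
        # Trust mid-tones; down-weight under- and over-exposed pixels.
        w = np.clip(1.0 - np.abs(2.0 * y - 1.0), 0.0, 1.0) + eps
        num += w * (y / t)          # each frame's estimate of radiance
        den += w
    return num / den                # per-pixel weighted average
```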

7 citations

Proceedings ArticleDOI
17 Oct 2021
TL;DR: In this paper, a multi-step feature fusion method is proposed in which a stack of identically structured blocks gradually fuses the features of the input images, together with a component block designed to effectively perform two operations essential to the problem, namely comparing and selecting appropriate images/regions.
Abstract: This paper considers the problem of generating an HDR image of a scene from its LDR images. Recent studies employ deep learning and solve the problem in an end-to-end fashion, leading to significant performance improvements. However, it is still hard to generate a good-quality image from LDR images of a dynamic scene captured by a hand-held camera; for example, occlusion due to large motion of foreground objects causes ghosting artifacts. The key to success lies in how well we can fuse the input images in their feature space, where we wish to remove the factors leading to low-quality image generation while performing the fundamental computations for HDR image generation, e.g., selecting the best-exposed image/region. We propose a novel method that can better fuse the features based on two ideas. One is multi-step feature fusion: our network gradually fuses the features in a stack of blocks having the same structure. The other is the design of the component block, which effectively performs two operations essential to the problem, i.e., comparing and selecting appropriate images/regions. Experimental results show that the proposed method outperforms the previous state-of-the-art methods on the standard benchmark tests.
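The schematic sketch below conveys the multi-step fusion idea from the abstract: identical blocks repeatedly compare per-exposure features and softly select between them, refining a fused feature map step by step. It is not the authors' network; the channel sizes, gating design, and class names are assumptions.

```python
# Schematic multi-step feature fusion (illustrative, not the paper's model).
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, ch=64, n_inputs=3):
        super().__init__()
        self.compare = nn.Conv2d(ch * (n_inputs + 1), n_inputs, 3, padding=1)
        self.refine = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, fused, feats):
        # feats: list of per-exposure feature maps, each shaped (B, ch, H, W).
        scores = self.compare(torch.cat([fused] + feats, dim=1))   # compare
        weights = torch.softmax(scores, dim=1)                     # select
        selected = sum(weights[:, i:i + 1] * f for i, f in enumerate(feats))
        return torch.relu(self.refine(fused + selected))           # refine

class MultiStepFusion(nn.Module):
    def __init__(self, ch=64, n_inputs=3, n_steps=4):
        super().__init__()
        self.blocks = nn.ModuleList(FusionBlock(ch, n_inputs) for _ in range(n_steps))

    def forward(self, feats):
        fused = feats[len(feats) // 2]      # start from the reference exposure
        for block in self.blocks:           # gradually fuse, step by step
            fused = block(fused, feats)
        return fused
```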

7 citations

Proceedings ArticleDOI
TL;DR: The HDR Visual Difference Predictor (HDR-VDP-2) is primarily a visibility prediction metric, i.e., it predicts whether a signal distortion is visible to the eye and to what extent, but it also employs a pooling function to compute an overall quality score.
Abstract: High Dynamic Range (HDR) signals capture much higher contrasts than traditional 8-bit low dynamic range (LDR) signals. This is achieved by representing the visual signal with values that are related to real-world luminance, instead of the gamma-encoded pixel values used in LDR. Therefore, HDR signals cover a larger luminance range and tend to have more visual appeal. However, due to the higher luminance conditions, existing methods cannot be directly employed for objective quality assessment of HDR signals. For that reason, the HDR Visual Difference Predictor (HDR-VDP-2) has been proposed. HDR-VDP-2 is primarily a visibility prediction metric, i.e., it predicts whether a signal distortion is visible to the eye and to what extent. Nevertheless, it also employs a pooling function to compute an overall quality score. This paper focuses on the pooling aspect of HDR-VDP-2 and employs a comprehensive database of HDR images (with their corresponding subjective ratings) to improve the prediction accuracy of HDR-VDP-2. We also discuss and evaluate the existing objective methods and provide a perspective towards better HDR quality assessment.
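To illustrate what "pooling" means here, the sketch below maps a per-pixel distortion/visibility map to a single quality score and tunes a pooling parameter against subjective ratings. It is not the actual HDR-VDP-2 pooling function; the Minkowski form, exponent candidates, and scaling are hypothetical stand-ins for the kind of parameters one would fit to a subjective database.

```python
# Illustrative pooling only (not HDR-VDP-2's formula): Minkowski pooling of a
# per-pixel distortion map, with the exponent chosen to best match mean
# opinion scores (MOS) from a subjective study.
import numpy as np

def pooled_quality(distortion_map, p=2.0, scale=100.0):
    pooled = np.mean(np.abs(distortion_map) ** p) ** (1.0 / p)
    return scale - scale * pooled        # higher score = better quality

def fit_pooling_exponent(maps, mos, candidates=np.linspace(0.5, 6.0, 12)):
    # Pick the exponent whose pooled scores correlate best with the MOS.
    def corr(p):
        scores = np.array([pooled_quality(m, p) for m in maps])
        return np.corrcoef(scores, mos)[0, 1]
    return max(candidates, key=corr)
```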

7 citations


Network Information
Related Topics (5)
Pixel: 136.5K papers, 1.5M citations (84% related)
Image processing: 229.9K papers, 3.5M citations (83% related)
Object detection: 46.1K papers, 1.3M citations (81% related)
Convolutional neural network: 74.7K papers, 2M citations (80% related)
Image segmentation: 79.6K papers, 1.8M citations (80% related)
Performance Metrics
No. of papers in the topic in previous years:
2023: 33
2022: 60
2021: 29
2020: 34
2019: 37
2018: 37