Topic

High dynamic range

About: High dynamic range is a research topic. Over the lifetime, 4280 publications have been published within this topic receiving 76293 citations. The topic is also known as: HDR.


Papers
Proceedings ArticleDOI
20 Feb 2007
TL;DR: In this paper, a folded multiple capture (FMC) scheme is proposed that extends dynamic range while achieving high linearity and high SNR at low power consumption, targeted at 3D-IC IR focal plane arrays.
Abstract: The stringent performance requirements of many infrared imaging applications warrant the development of precision high dynamic range, high speed focal plane arrays. In addition to achieving high dynamic range, the readout circuits for these image sensors must achieve high linearity and SNR at low power consumption. Two high dynamic range image sensor schemes developed for visible-range imaging are first reviewed, along with a discussion of why they cannot meet the stringent performance demands of infrared imaging. A new dynamic range extension scheme, folded multiple capture, is then described that can meet these requirements. Dynamic range is extended using synchronous self-reset, while high SNR is maintained using a few non-uniformly spaced captures and a least-squares fit to estimate pixel photocurrent. The paper concludes with a description of a prototype of this architecture targeted for 3D-IC IR focal plane arrays.
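The core estimation step in the abstract, fitting pixel photocurrent from a few non-uniformly spaced captures, can be sketched as a plain least-squares slope fit. This is an illustrative sketch only (function and variable names are ours, not the paper's), assuming each capture samples a linear voltage ramp whose slope is the photocurrent:

```python
import numpy as np

def estimate_photocurrent(capture_times, pixel_values):
    """Fit v = offset + i_ph * t by least squares; return the slope i_ph.

    A least-squares fit over several captures averages out read noise,
    which is how multiple-capture schemes maintain SNR.
    """
    t = np.asarray(capture_times, dtype=float)
    v = np.asarray(pixel_values, dtype=float)
    # Solve [t, 1] @ [i_ph, offset] = v in the least-squares sense
    A = np.stack([t, np.ones_like(t)], axis=1)
    (i_ph, offset), *_ = np.linalg.lstsq(A, v, rcond=None)
    return i_ph

# Non-uniformly spaced capture times on a noiseless ramp of slope 2.0
times = [0.1, 0.3, 0.8, 1.0]
values = [0.2 + 2.0 * t for t in times]
print(estimate_photocurrent(times, values))  # prints a value very close to 2.0
```

With noisy captures the same fit simply returns the best slope in the least-squares sense; the non-uniform spacing in the paper concentrates captures where they most improve the estimate.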

15 citations

Patent
27 Jul 2001
TL;DR: This patent addresses the problem that conventional techniques halve the frame rate when synthesizing one high dynamic range image from two frames with different exposure times; by alternating two synthesis algorithms, it produces high dynamic range images at the same frame rate as the original sequence.
Abstract: PROBLEM TO BE SOLVED: To provide a method and an apparatus capable of generating a high dynamic range image at the same frame rate as the original image sequence, whose frames alternate between longer and shorter exposure times. In the conventional technique, shown in FIG. (a) 11, one output frame is synthesized from two input frames by a single synthesis algorithm, so the frame rate is halved. SOLUTION: As shown in FIGs. (b) 12 and (c) 13, one high dynamic range frame is formed for essentially every original frame by applying two different synthesis algorithms alternately, yielding high dynamic range output at the same frame rate as the input. In each algorithm, the two frames with different exposure times are not merged as they are; instead, an image corresponding to a common capture time is first predicted, and the prediction is then merged. This prevents unnatural artifacts when synthesizing moving images.
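The alternating-algorithm idea above can be sketched with a sliding window over the long/short exposure sequence: every input frame (except the last) yields one output frame, instead of one output per pair. This is a heavily simplified illustration under our own assumptions, not the patented method; in particular it omits the motion-prediction step and uses a naive saturation-based merge:

```python
import numpy as np

def merge(long_exp, short_exp, gain=4.0):
    """Naive HDR merge: use the scaled short exposure where the long one saturates."""
    saturated = long_exp >= 1.0
    return np.where(saturated, short_exp * gain, long_exp)

def hdr_sequence(frames):
    """frames alternate long/short exposure; returns len(frames) - 1 HDR frames.

    Even-indexed windows use one merge ordering (algorithm "b"), odd-indexed
    windows the other (algorithm "c"), so the output keeps nearly the full
    input frame rate rather than half of it.
    """
    out = []
    for k in range(len(frames) - 1):
        a, b = frames[k], frames[k + 1]
        if k % 2 == 0:          # a is the long exposure, b the short one
            out.append(merge(a, b))
        else:                   # a is the short exposure, b the long one
            out.append(merge(b, a))
    return out
```

A pairwise scheme over the same four input frames would emit only two outputs; the sliding window emits three, which is the frame-rate gain the patent describes.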

15 citations

Journal ArticleDOI
TL;DR: In this study, a novel image contrast enhancement method, called low dynamic range histogram equalization (LDR-HE), is proposed based on the quantized discrete Haar wavelet transform (HWT); it provides a scalable and controlled dynamic range reduction in the histogram, with the inverse transform in the reconstruction phase regulating excessive contrast enhancement.
Abstract: Conventional contrast enhancement methods stretch histogram bins to provide a uniform distribution. However, they also stretch existing natural noise, which causes abnormal distributions and annoying artifacts. Histogram equalization should therefore be performed in the low dynamic range (LDR) part of the signal, since noise is generally distributed in the high dynamic range (HDR) part. In this study, a novel image contrast enhancement method, called low dynamic range histogram equalization (LDR-HE), is proposed based on the quantized discrete Haar wavelet transform (HWT). In the frequency domain, LDR-HE performs a de-boosting operation on the high-pass channel by compressing the high-frequency coefficients of the probability mass function toward zero. For this purpose, amplitudes greater than the absolute mean in the high-pass band are divided by a damping parameter alpha. This damping parameter, which regulates the global contrast of the processed image, is the coefficient of variation of the high frequencies, i.e., the standard deviation divided by the mean. This procedure provides a scalable and controlled dynamic range reduction in the histogram when the inverse operation is performed in the reconstruction phase, regulating the excessive contrast enhancement rate. In the experimental studies, LDR-HE is compared with the 14 most popular local, global, adaptive, and brightness-preserving histogram equalization methods. Experiments qualitatively and quantitatively show promising and encouraging results in terms of different quality metrics such as mean squared error (MSE), peak signal-to-noise ratio (PSNR), Contrast Improvement Index (CII), Universal Image Quality Index (UIQ), Quality-aware Relative Contrast Measure (QRCM), and Absolute Mean Brightness Error (AMBE). These results are not only assessed through qualitative visual observation but are also benchmarked with state-of-the-art quantitative performance metrics.
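The de-boosting step described above can be sketched with a single-level 1-D Haar transform: high-pass coefficients larger than their absolute mean are divided by the band's coefficient of variation, then the signal is reconstructed. This is an illustrative sketch under our own simplifications (1-D, single level, no quantization), not the paper's implementation:

```python
import numpy as np

def haar_damp(signal):
    """Single-level Haar analysis, damping of large details, then synthesis.

    Details whose magnitude exceeds the band's absolute mean are divided by
    alpha, the coefficient of variation (std / mean) of the detail magnitudes,
    which suppresses noise spikes while leaving small details untouched.
    """
    x = np.asarray(signal, dtype=float)        # length must be even
    lo = (x[0::2] + x[1::2]) / 2.0             # low-pass band (averages)
    hi = (x[0::2] - x[1::2]) / 2.0             # high-pass band (details)
    mag = np.abs(hi)
    mean = mag.mean()
    alpha = mag.std() / mean if mean > 0 else 1.0   # coefficient of variation
    hi = np.where(mag > mean, hi / max(alpha, 1.0), hi)
    # Inverse Haar reconstruction
    out = np.empty_like(x)
    out[0::2] = lo + hi
    out[1::2] = lo - hi
    return out
```

A smooth signal passes through unchanged (all details are small), while an isolated spike is pulled toward its neighborhood average, which is the controlled dynamic range reduction the abstract describes.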

15 citations

Proceedings ArticleDOI
Raymond Lo1, Steve Mann1, Jason Huang1, Valmiki Rampersad1, Tao Ai1 
29 Oct 2012
TL;DR: This work presents highly parallelizable and computationally efficient High Dynamic Range (HDR) image compositing, reconstruction, and spatiotonal mapping algorithms for processing HDR video, implemented in the EyeTap Digital Glass electric seeing aid for use in everyday life.
Abstract: We present highly parallelizable and computationally efficient High Dynamic Range (HDR) image compositing, reconstruction, and spatiotonal mapping algorithms for processing HDR video. We implemented our algorithms in the EyeTap Digital Glass electric seeing aid, for use in everyday life. We also tested the algorithms in extreme dynamic range situations, such as electric arc welding. Our system runs in real time, requires no user intervention, and needs no fine-tuning of parameters after a one-time calibration, even under a wide variety of very difficult lighting conditions (e.g., electric arc welding, including detailed inspection of the arc, weld puddle, and shielding gas in TIG welding). Our approach can render video at 1920x1080 pixel resolution at interactive frame rates from 24 to 60 frames per second with GPU acceleration. We also implemented our system on FPGAs (field-programmable gate arrays) so it can be miniaturized and built into eyeglass frames.
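The compositing-plus-tonemapping pipeline the abstract describes can be illustrated with a generic two-exposure fusion followed by a simple global tone map. This is a minimal sketch under our own assumptions (hat-shaped weights, Reinhard-style x / (1 + x) compression), not the authors' EyeTap algorithm:

```python
import numpy as np

def fuse_and_tonemap(short_exp, long_exp, short_gain=8.0):
    """Fuse two exposures of the same scene and compress for display.

    short_exp / long_exp: arrays in [0, 1]; short_gain brings the short
    exposure onto the long exposure's radiance scale.
    """
    s = np.asarray(short_exp, dtype=float) * short_gain
    l = np.asarray(long_exp, dtype=float)
    # Hat-shaped weight: trust the long exposure where it is mid-range,
    # fall back to the scaled short exposure near saturation or black.
    w = np.clip(1.0 - np.abs(l - 0.5) * 2.0, 0.05, 1.0)
    radiance = w * l + (1.0 - w) * s
    # Simple Reinhard-style tone map: maps [0, inf) into [0, 1) for display
    return radiance / (1.0 + radiance)
```

Both stages are purely per-pixel, which is what makes this style of pipeline highly parallelizable on GPUs and FPGAs as the paper emphasizes.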

15 citations

Journal ArticleDOI
TL;DR: A stacked convolutional neural network (SCNN) is proposed that predicts high dynamic range (HDR) 360° RMs with varying roughness from a limited field of view, low dynamic range photograph, enabling high-fidelity rendering of virtual objects that match the background photograph.
Abstract: Corresponding lighting and reflectance between real and virtual objects are important for spatial presence in augmented and mixed reality (AR and MR) applications. We present a method to reconstruct real-world environmental lighting, encoded as a reflection map (RM), from a conventional photograph. To achieve this, we propose a stacked convolutional neural network (SCNN) that predicts high dynamic range (HDR) 360° RMs with varying roughness from a limited field of view, low dynamic range photograph. The SCNN is progressively trained from high to low roughness to predict RMs at varying roughness levels, where each roughness level corresponds to a virtual object's roughness (from diffuse to glossy) for rendering. The predicted RM provides high-fidelity rendering of virtual objects to match the background photograph. We illustrate the use of our method with indoor and outdoor scenes, trained on separate indoor/outdoor SCNNs, showing plausible rendering and composition of virtual objects in AR/MR. We show that our method improves on previous methods in a comparative user study and on error metrics.

15 citations


Network Information
Related Topics (5)
Pixel: 136.5K papers, 1.5M citations (88% related)
Image processing: 229.9K papers, 3.5M citations (86% related)
Convolutional neural network: 74.7K papers, 2M citations (83% related)
Feature extraction: 111.8K papers, 2.1M citations (83% related)
Image segmentation: 79.6K papers, 1.8M citations (82% related)
Performance Metrics
No. of papers in the topic in previous years:
2023: 122
2022: 263
2021: 164
2020: 243
2019: 238
2018: 262