Topic
High dynamic range
About: High dynamic range is a research topic. Over its lifetime, 4280 publications on this topic have received 76,293 citations. The topic is also known as: HDR.
Papers published on a yearly basis
Papers
08 Sep 2018 · TL;DR: The proposed method is the first framework to create high dynamic range images from an estimated multi-exposure stack using a conditional generative adversarial network structure, and its results are significantly more similar to the ground truth than those of other state-of-the-art algorithms.
Abstract: High dynamic range images contain luminance information of the physical world and provide a more realistic experience than conventional low dynamic range images. Because most images have a low dynamic range, recovering the lost dynamic range from a single low dynamic range image remains an important problem. We propose a novel method for restoring the lost dynamic range from a single low dynamic range image through a deep neural network. The proposed method is the first framework to create high dynamic range images from an estimated multi-exposure stack using the conditional generative adversarial network structure. In this architecture, we train the network with an objective function that combines an L1 loss and a generative adversarial network loss. In addition, this architecture has a simpler structure than existing networks. In experiments on public benchmarks, the proposed network generated multi-exposure stacks consisting of realistic images with varying exposure values while avoiding the artifacts produced by existing methods. In addition, both the multi-exposure stacks and the high dynamic range images estimated by the proposed method are significantly more similar to the ground truth than those of other state-of-the-art algorithms.
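As background, the standard way to form an HDR radiance map from a multi-exposure stack (the representation this paper's network estimates) is a weighted average of the exposures scaled by exposure time. A minimal numpy sketch, assuming linearized images in [0, 1]; the function name and the hat-shaped weighting are illustrative, not the paper's exact merge:

```python
import numpy as np

def merge_exposure_stack(images, exposure_times):
    """Merge a multi-exposure stack into a single HDR radiance map.

    images: list of float arrays in [0, 1], assumed linear for this sketch
    (real pipelines first invert the camera response curve).
    exposure_times: one exposure time per image.
    """
    eps = 1e-8
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat weight: trust mid-tones, down-weight under/over-exposed pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t          # radiance estimate from this exposure
        den += w
    return num / (den + eps)
```

Dividing each image by its exposure time puts all exposures on a common radiance scale, so well-exposed pixels from different brackets agree and can be averaged.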
104 citations
01 Jan 2010 · TL;DR: The results presented here show that in fact MaxRGB works surprisingly well when tested on a new dataset of 105 high dynamic range images, and also better than previously reported when some simple pre-processing is applied to the images of the standard 321-image set.
Abstract: The poor performance of the MaxRGB illumination-estimation method is often used in the literature as a foil when promoting some new illumination-estimation method. However, the results presented here show that in fact MaxRGB works surprisingly well when tested on a new dataset of 105 high dynamic range images, and also better than previously reported when some simple pre-processing is applied to the images of the standard 321-image set [1]. The HDR images in the dataset for color constancy research were constructed in the standard way from multiple exposures of the same scene. The color of the scene illumination was determined by photographing an extra HDR image of the scene with 4 Gretag Macbeth mini Colorcheckers at 45 degrees relative to one another placed in it. With pre-processing, MaxRGB’s performance is statistically equivalent to that of Color by Correlation [2] and statistically superior to that of the Greyedge [3] algorithm on the 321 set (null hypothesis rejected at the 5% significance level). It also performs as well as Greyedge on the HDR set. These results demonstrate that MaxRGB is far more effective than it has been reputed to be so long as it is applied to image data that encodes the full dynamic range of the original scene. Introduction: MaxRGB is an extremely simple method of estimating the chromaticity of the scene illumination for color constancy and automatic white balancing, based on the assumption that the triple of maxima obtained independently from each of the three color channels represents the color of the illumination. It is often used as a foil to demonstrate how much better some newly proposed algorithm performs in comparison. However, is its performance really as bad as it has been reported [1,3-5] to be?
Is it really any worse than the algorithms to which it is compared?¹ The prevailing belief in the field about the inadequacy of MaxRGB is reflected in the following two quotations from two different anonymous reviewers criticizing a manuscript describing a different illumination-estimation proposal: “Almost no-one uses Max RGB in the field (or in commercial cameras). That this, rejected method, gives better performance than the (proposed) method is grounds alone for rejection.” “The first and foremost thing that attracts attention is the remarkable performance of the Scale-by-Max (i.e. White-Patch) algorithm. This algorithm has the highest performance on two of the three data sets, which is quite remarkable by itself.”

¹ Paper’s title inspired by Charles Poynton, “The Rehabilitation of Gamma,” Proc. of Human Vision and Electronic Imaging III, SPIE 3299, 232–249, 1998.

We hypothesize that there are two reasons why the effectiveness of MaxRGB may have been underestimated. One is that it is important not to apply MaxRGB naively as the simple maximum of each channel; rather, it is necessary to pre-process the image data somewhat before calculating the maximum, otherwise a single bad pixel or spurious noise will lead to the maximum being incorrect. The second is that MaxRGB has generally been applied to 8-bit-per-channel, non-linear images, for which there is both significant tone-curve compression and clipping of high intensity values. To test the pre-processing hypothesis, the effects of pre-processing by median filtering, and of resizing by bilinear filtering, are compared to that of the common pre-processing, which simply discards pixels for which at least one channel is maximal (i.e., for n-bit images, when R = 2^n − 1 or G = 2^n − 1 or B = 2^n − 1). To test the dynamic-range hypothesis, a new HDR dataset for color constancy research has been constructed, which consists of images of 105 scenes.
For each scene there are HDR² (high dynamic range) images with and without Macbeth mini Colorchecker charts, from which the chromaticity of the scene illumination is measured. This data set is now available on-line.³ MaxRGB is a special and extremely limited case of Retinex [6]. In particular, it corresponds to McCann99 Retinex [7] when the number of iterations is infinite, or to path-based Retinex [8] without thresholding but with infinite paths. Retinex and MaxRGB both depend on the assumption that either there is a white surface in the scene, or there are three separate surfaces reflecting maximally in the R, G and B sensitivity ranges. In practice, most digital still cameras are incapable of capturing the full dynamic range of a scene and use exposures and tone-reproduction curves that clip or compress high digital counts. As a result, the maximum R, G and B digital counts from an image generally do not faithfully represent the corresponding maximum scene radiances. Barnard et al. [9] present some tests using artificial clipping of images that show the effect that lack of dynamic range can have on various illumination-estimation algorithms. To determine whether or not MaxRGB is really as poor as it is reported to be in comparison to other illumination-estimation algorithms, we compare the performance of several algorithms on the new image database. We also find that two simple pre-processing strategies lead to significant performance improvement in the case of MaxRGB. Tests described below show that MaxRGB performs as well on this new HDR data set as other representative and recently published algorithms. The results reported here extend those of an earlier study [10] in a number of ways: the size of the dataset

² Note that the scenes were not necessarily of high dynamic range. The term HDR is used here to mean simply that the full dynamic range of the scene is captured within the image.
³ www.cs.sfu.ca/~colour/data
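MaxRGB itself, with the "common pre-processing" the paper describes (discarding pixels with any clipped channel before taking per-channel maxima), fits in a few lines of numpy. A sketch under those assumptions; the function name is illustrative, and the median-filtering and resizing variants are omitted:

```python
import numpy as np

def max_rgb(image, bits=8):
    """Estimate illumination chromaticity with MaxRGB plus simple pre-processing.

    image: H x W x 3 integer array. Pixels with any clipped channel
    (value == 2**bits - 1) are discarded before taking the per-channel
    maximum, so a single saturated pixel cannot corrupt the estimate.
    """
    img = image.astype(np.float64)
    clipped = (image >= 2**bits - 1).any(axis=-1)  # H x W mask of clipped pixels
    valid = img[~clipped]                          # (N, 3) unclipped pixels
    m = valid.max(axis=0)                          # per-channel maxima
    return m / m.sum()                             # normalize to chromaticity
```

The returned triple sums to 1, which is the usual chromaticity form used when comparing illumination estimates against a measured ground truth.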
103 citations
01 Oct 2011
TL;DR: The DSSC instrument is based on a silicon pixel sensor with a DEPFET as the central amplifier structure and has a detection efficiency close to 100% for X-rays from 0.5 keV up to 10 keV.
Abstract: We present the development of the DSSC instrument: an ultra-high-speed detector system for the new European XFEL in Hamburg. The DSSC will be able to record X-ray images with a maximum frame rate of 4.5 MHz. The system is based on a silicon pixel sensor with a DEPFET as a central amplifier structure and has detection efficiency close to 100% for X-rays from 0.5 keV up to 10 keV. The sensor will have a size of approximately 210 × 210 mm, composed of 1024 × 1024 pixels with hexagonal shape. 256 mixed-signal readout ASICs are bump-bonded to the detector. They are designed in 130 nm CMOS technology and provide fully parallel readout. The signals coming from the sensor are processed by an analog filter, immediately digitized by 8-bit ADCs, and locally stored in an SRAM, which is able to record at least 640 frames. In order to fit the dynamic range of about 10⁴ photons of 1 keV per pixel into a reasonable output signal range, while at the same time achieving single 1 keV photon resolution, a non-linear characteristic is required. The proposed DEPFET provides the needed dynamic-range compression at the sensor level. The most exciting and challenging property is that the single 1 keV photon resolution and the high dynamic range are accomplished within the 220 ns frame time of the system. The key properties and the main design concepts of the different building blocks of the system are discussed. Measurements with the analog front-end of the readout ASIC and a standard DEPFET have already shown very low noise, which makes it possible to achieve the targeted single-photon resolution for 1 keV photons at 4.5 MHz, and also for 0.5 keV photons at half of that speed. In the paper, new experimental results obtained by coupling a single pixel to an 8 × 8 ASIC prototype are shown. This 8 × 8 ASIC comprises the complete readout chain from the analog front-end to the ADC and the memory.
The characterization of a newly fabricated non-linear DEPFET is presented for the first time.
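The requirement above (roughly 10⁴ photons squeezed into 8-bit codes while keeping single-photon resolution) can be illustrated with a toy compressive characteristic. This is a hypothetical piecewise model, not the actual DEPFET response: unit gain below a knee preserves single-photon steps, and a logarithmic segment spreads the remaining codes up to full scale; the knee value and curve shape are assumptions for illustration:

```python
import math

def compressed_adc_code(photons, knee=100, full_scale=10_000, bits=8):
    """Map a photon count onto an ADC code with a compressive characteristic.

    Below `knee`: 1 code per photon (single-photon resolution).
    Above `knee`: the remaining codes cover [knee, full_scale] logarithmically.
    """
    max_code = 2**bits - 1
    if photons <= knee:
        return min(photons, max_code)
    frac = math.log(photons / knee) / math.log(full_scale / knee)
    return min(max_code, round(knee + frac * (max_code - knee)))
```

With the defaults, one photon maps to one code at the low end, while the full 10,000-photon range still lands inside the 255 available codes, which is the essence of in-sensor dynamic-range compression.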
103 citations
TL;DR: In this paper, a high dynamic range CMOS image sensor with in-pixel light-to-frequency conversion has been designed, which can achieve a linear dynamic range of over 115 dB and an overall dynamic range of over 130 dB.
Abstract: A high dynamic range CMOS image sensor with in-pixel light-to-frequency conversion has been designed. The prototype chip was fabricated in a standard 0.18-µm single-poly six-metal CMOS technology. The experimental results show that, operating at 1.2 V, the sensor can achieve a linear dynamic range of over 115 dB and an overall dynamic range of over 130 dB.
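For reference, dynamic-range figures like those quoted above follow the usual image-sensor convention of 20·log₁₀ of the maximum-to-minimum detectable signal ratio, so a 115 dB linear range corresponds to a signal ratio of roughly 5.6 × 10⁵. A small sketch of that conversion:

```python
import math

def dynamic_range_db(max_signal, min_signal):
    """Dynamic range in dB, as conventionally quoted for image sensors:
    20 * log10(max detectable signal / min detectable signal)."""
    return 20.0 * math.log10(max_signal / min_signal)
```

In a light-to-frequency pixel, the "signal" here is the output pulse frequency, which tracks photocurrent over a very wide range; that is what lets such designs reach ratios far beyond what a single linear 8- to 12-bit readout could represent.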
102 citations
TL;DR: In this article, a convolutional neural network (CNN) was used to infer LDR images with various exposures from a single low dynamic range (LDR) image, and the final HDR image is formed by merging these inference results.
Abstract: Recently, high dynamic range (HDR) imaging has attracted much attention as a technology that reflects human visual characteristics, owing to developments in display and camera technology. This paper proposes a novel deep neural network model that reconstructs an HDR image from a single low dynamic range (LDR) image. The proposed model is based on a convolutional neural network composed of dilated convolutional layers and infers LDR images of the same scene with various exposures and illumination from a single LDR image. The final HDR image is then formed by merging these inferred images. The chained structure, which infers LDR images with successively brighter (or darker) exposures from the given LDR image, makes it relatively simple for the proposed method to find the mapping between an LDR image and an HDR image of a different bit depth. The method not only extends the range but also has the advantage of restoring the light information of the actual physical world. The proposed method is an end-to-end reconstruction process, and an additional network can easily be attached to extend the range further. In the experimental results, the proposed method shows quantitative and qualitative improvements in performance compared with conventional algorithms.
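The exposure-chaining idea described in the abstract can be sketched with placeholder models standing in for the paper's CNNs: one model maps an image to the next brighter exposure, another to the next darker one, and repeated application builds the stack that is later merged into an HDR image. The interfaces `up_model` and `down_model` are hypothetical stand-ins, not the paper's actual networks:

```python
import numpy as np

def build_stack(ldr, up_model, down_model, steps=2):
    """Build a multi-exposure stack by chaining relative-exposure models.

    ldr: the input image. up_model / down_model: callables that map an
    image to the next brighter / darker exposure (placeholder interfaces).
    Returns a stack ordered from darkest to brightest, with `ldr` in the
    middle.
    """
    stack = [ldr]
    cur = ldr
    for _ in range(steps):            # chain upward: each step brightens
        cur = up_model(cur)
        stack.append(cur)
    cur = ldr
    for _ in range(steps):            # chain downward: each step darkens
        cur = down_model(cur)
        stack.insert(0, cur)
    return stack
```

Because each model only has to learn a one-step relative exposure change, the mapping it must represent is much simpler than a direct LDR-to-HDR regression, which is the advantage the abstract attributes to the chaining structure.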
101 citations