Topic
High-dynamic-range imaging
About: High-dynamic-range imaging is a research topic. Over the lifetime, 766 publications have been published within this topic receiving 22577 citations.
Papers published on a yearly basis
Papers
TL;DR: A learning-based stereo HDR imaging (SHDRI) method is proposed that combines three task-specific convolutional neural network (CNN) modules, namely an exposure calibration CNN (EC-CNN) module, a hole-filling CNN (HF-CNN) module, and an HDR fusion CNN (HDRF-CNN) module, with traditional image processing methods to model the SHDRI pipeline.
Abstract: It is possible to generate stereo high dynamic range (HDR) images/videos by using a pair of cameras with different exposure parameters. In this article, a learning-based stereo HDR imaging (SHDRI) method with three modules is proposed. In the proposed method, we construct three convolutional neural network (CNN) modules that perform specific tasks, including an exposure calibration CNN (EC-CNN) module, a hole-filling CNN (HF-CNN) module and an HDR fusion CNN (HDRF-CNN) module, and combine them with traditional image processing methods to model the SHDRI pipeline. To avoid ambiguity, we assume that the left-view image is under-exposed and the right-view image is over-exposed. Specifically, the EC-CNN module is first constructed to convert the stereo multi-exposure images to the same exposure to facilitate subsequent stereo matching. Then, based on the estimated disparity map, the right-view image is forward-warped to generate an initial left-view over-exposure image. After that, extra exposure information is utilized to guide hole-filling. Finally, the HDRF-CNN module is constructed and employed to extract fusion features and fuse the hole-filled left-view over-exposure image with the original left-view under-exposure image into the left-view HDR image. The right-view HDR image can be generated in the same way. In addition, we propose an effective two-phase training strategy to overcome the lack of a sufficiently large stereo multi-exposure dataset. The experimental results demonstrate that the proposed method can generate stereo HDR images with high visual quality. Furthermore, the proposed method achieves better performance in comparison with the latest SHDRI method.
14 citations
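The SHDRI pipeline above (exposure calibration, disparity-based forward warping, hole-filling, and HDR fusion) can be sketched with classical stand-ins for the learned modules. The code below is a minimal, illustrative approximation only: a global gain replaces the EC-CNN, copying from a guide image replaces the HF-CNN's learned inpainting, and a hat-weighted radiance merge stands in for the HDRF-CNN; all function names and the simple weighting are assumptions, not the authors' implementation.

```python
import numpy as np

def calibrate_exposure(under, gain):
    # Stand-in for the EC-CNN module: a global gain with clipping
    return np.clip(under * gain, 0.0, 1.0)

def warp_right_to_left(right, disparity):
    # Forward-warp each right-view pixel by its integer disparity (toy version)
    h, w = right.shape
    warped = np.zeros_like(right)
    covered = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xl = x + int(disparity[y, x])
            if 0 <= xl < w:
                warped[y, xl] = right[y, x]
                covered[y, xl] = True
    return warped, covered

def fill_holes(warped, covered, guide):
    # Stand-in for the HF-CNN module: copy the guide image into uncovered holes
    out = warped.copy()
    out[~covered] = guide[~covered]
    return out

def fuse_hdr(under, over, t_under, t_over):
    # Stand-in for the HDRF-CNN module: hat-weighted merge in linear radiance
    w_u = 1.0 - np.abs(2.0 * under - 1.0)
    w_o = 1.0 - np.abs(2.0 * over - 1.0)
    num = w_u * under / t_under + w_o * over / t_over
    return num / np.maximum(w_u + w_o, 1e-6)
```

The per-pixel loop in the warp is deliberately naive; a real implementation would vectorize it and resolve occlusion conflicts by depth ordering.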
TL;DR: A shift-invariant wavelet-based exposure fusion method (SIDWTBEF) is presented, along with a novel way to obtain the chrominance information of the scene in which the saturation of the fused image can be adjusted with a single user-controlled parameter; SIDWTBEF gives results comparable to those of shift-dependent exposure fusion methods.
Abstract: Until now, most exposure fusion methods have been easily affected by the position of objects in the image: when the source images are captured, a slight shift in the camera's position yields blurry or doubled results. To solve this problem, a method called SIDWTBEF is proposed, based on the shift-invariant discrete wavelet transform (SIDWT); it is more robust to images with slight shifts. In addition, this paper presents a novel way to obtain the chrominance information of the scene, in which the saturation of the fused image can be adjusted with a single user-controlled parameter. The luminance image sequence of the source images is decomposed by SIDWT into sub-images at a given number of scale levels. In the transform domain, different fusion rules are applied to combine the high-pass and low-pass sub-images, respectively. Finally, to reduce the inconsistencies induced by the fusion rule after applying the inverse SIDWT, an enhancement operator is proposed. Experiments show that SIDWTBEF gives results comparable to those of other shift-dependent exposure fusion methods.
14 citations
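A one-level undecimated (à trous) decomposition is enough to illustrate the shift-invariant fusion rule described above: detail coefficients are selected by maximum absolute value across the exposure sequence, while the approximation band is averaged. This numpy sketch is a heavily simplified assumption (single level, a 3-tap binomial blur in place of a true SIDWT, and no chrominance handling or enhancement operator):

```python
import numpy as np

def blur(img):
    # Separable 3-tap binomial [1, 2, 1]/4 blur with edge replication;
    # no downsampling, so the decomposition stays shift-invariant
    p = np.pad(img, 1, mode="edge")
    h = (p[:, :-2] + 2.0 * p[:, 1:-1] + p[:, 2:]) / 4.0
    return (h[:-2] + 2.0 * h[1:-1] + h[2:]) / 4.0

def fuse_shift_invariant(luma_stack):
    # luma_stack: list of luminance images from the exposure sequence
    lows = np.stack([blur(img) for img in luma_stack])
    highs = np.stack(luma_stack) - lows
    # High-pass rule: keep the coefficient with the largest magnitude
    idx = np.argmax(np.abs(highs), axis=0)
    fused_hi = np.take_along_axis(highs, idx[None], axis=0)[0]
    # Low-pass rule: average the approximation bands
    fused_lo = lows.mean(axis=0)
    return fused_lo + fused_hi
```

Because no subsampling occurs, a small translation of the source images moves the coefficients rather than aliasing them, which is what makes this family of fusion methods robust to slight camera shifts.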
TL;DR: The experimental results show that the proposed method is superior to existing HDR imaging methods in both quantitative and qualitative analysis and can quickly generate high-quality HDR images.
14 citations
13 Oct 2011
TL;DR: In this paper, the authors propose an FPGA-based architecture that produces real-time high dynamic range video from successive image acquisitions, consisting of a pipeline of algorithmic stages: automatic exposure control during image capture, alignment of successive images to minimize camera and object motion, construction of an HDR image by combining the multiple frames, and final tone mapping for viewing on an LCD display.
Abstract: Many camera sensors suffer from limited dynamic range, so displayed images and videos lack clear detail. This paper describes our approach to generating high dynamic range (HDR) video from an image sequence while modifying the exposure time for each new frame. For this purpose, we propose an FPGA-based architecture that can produce real-time high dynamic range video from successive image acquisitions. Our hardware platform is built around a standard low dynamic range CMOS sensor and a Virtex 5 FPGA board. The CMOS sensor is an EV76C560 provided by e2v. This 1.3-megapixel device offers novel pixel integration/readout modes and embedded image pre-processing capabilities, including multiframe acquisition with various exposure times. Our approach consists of a pipeline of algorithmic stages: automatic exposure control during image capture, alignment of successive images to minimize camera and object motion, construction of an HDR image by combining the multiple frames, and final tone mapping for viewing on an LCD display. Our aim is to achieve a real-time video rate of 25 frames per second at the full sensor resolution of 1,280 × 1,024 pixels.
13 citations
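The "construction of an HDR image by combining the multiple frames" stage is, in software terms, a weighted average of the differently exposed frames in linear radiance. Assuming a linear sensor response and a simple hat weighting that distrusts near-black and near-saturated pixels (both are assumptions of this sketch, not details from the paper), it can be written as:

```python
import numpy as np

def merge_exposures(frames, exposure_times, eps=1e-6):
    """Merge linear-response LDR frames (values in [0, 1]) captured at
    different exposure times into a single HDR radiance map."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for z, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * z - 1.0)  # hat weight: low near 0 and 1
        num += w * (z / t)               # each frame votes for radiance z / t
        den += w
    return num / np.maximum(den, eps)
```

This per-pixel weighted sum maps naturally onto a streaming hardware pipeline, since each output pixel depends only on the co-located pixels of the aligned input frames.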
TL;DR: A novel solution is proposed for recovering lost details in clipped, over-exposed areas by taking advantage of cross-channel correlation in RGB images; several potential applications are explored, including extension to video and use as a preprocessing step prior to reverse tone mapping.
13 citations
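One way to exploit channel cross-correlation for clipped highlights, sketched here as a hypothetical scheme and not necessarily the paper's algorithm, is to re-estimate a saturated channel from the locally observed ratio between it and the unclipped channels. The clip threshold, window size, and box-filtered ratio below are all illustrative assumptions:

```python
import numpy as np

def box_mean(a, win):
    # Mean filter via 2-D cumulative sums, with edge replication
    r = win // 2
    p = np.pad(a, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # prepend a zero row and column
    H, W = a.shape
    s = (c[win:win + H, win:win + W] - c[:H, win:win + W]
         - c[win:win + H, :W] + c[:H, :W])
    return s / (win * win)

def recover_clipped(rgb, clip=0.99, win=9):
    # Where exactly one channel is clipped, extrapolate it from the
    # unclipped channels using their local linear ratio
    out = rgb.astype(np.float64).copy()
    clipped = out >= clip
    for ch in range(3):
        o1, o2 = [o for o in range(3) if o != ch]
        mask = clipped[..., ch] & ~clipped[..., o1] & ~clipped[..., o2]
        if not mask.any():
            continue
        ref = (out[..., o1] + out[..., o2]) / 2.0
        valid = ~clipped[..., ch] & (ref > 1e-6)
        ratio = np.where(valid, out[..., ch] / np.maximum(ref, 1e-6), 0.0)
        local_ratio = box_mean(ratio, win) / np.maximum(
            box_mean(valid.astype(np.float64), win), 1e-6)
        # Never lower a clipped value; only push it above the clip point
        out[..., ch] = np.where(
            mask, np.maximum(out[..., ch], local_ratio * ref), out[..., ch])
    return out
```

The recovered values can exceed 1.0, which is exactly what a reverse tone mapping operator needs as input.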