
Showing papers on "High-dynamic-range imaging published in 2016"


Journal ArticleDOI
TL;DR: In this article, a convolutional sparse coding (CSC) based method is proposed to recover high-quality HDR images from a single coded exposure, achieving higher-quality reconstructions than alternative methods.
Abstract: Current HDR acquisition techniques are based on either (i) fusing multibracketed, low dynamic range (LDR) images, (ii) modifying existing hardware and capturing different exposures simultaneously with multiple sensors, or (iii) reconstructing a single image with spatially-varying pixel exposures. In this paper, we propose a novel algorithm to recover high-quality HDR images from a single, coded exposure. The proposed reconstruction method builds on recently-introduced ideas of convolutional sparse coding (CSC); this paper demonstrates how to make CSC practical for HDR imaging. We demonstrate that the proposed algorithm achieves higher-quality reconstructions than alternative methods, evaluate optical coding schemes, analyze algorithmic parameters, and build a prototype coded HDR camera that demonstrates the utility of convolutional sparse HDR coding on a custom hardware platform.

95 citations


Journal ArticleDOI
TL;DR: This paper describes a complete FPGA-based smart camera architecture named HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera) which produces a real-time high dynamic range (HDR) live video stream from multiple captures.
Abstract: This paper describes a complete FPGA-based smart camera architecture named HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera) which produces a real-time high dynamic range (HDR) live video stream from multiple captures. A specific memory management unit has been defined to adjust the number of acquisitions to improve HDR quality. This smart camera is built around a standard B&W CMOS image sensor and a Xilinx FPGA. It embeds multiple captures, HDR processing, data display and transfer, which is an original contribution compared to the state-of-the-art. The proposed architecture enables a real-time HDR video flow at full sensor resolution (1.3 megapixels) and 60 frames per second.

54 citations


Journal ArticleDOI
TL;DR: Industry is showing increasing traction toward supporting HDR and wide color gamut (WCG), which calls for a widely accepted standard for higher bit depth support that can be seamlessly integrated into existing products and applications.
Abstract: High bit depth data acquisition and manipulation have been largely studied at the academic level over the last 15 years and are rapidly attracting interest at the industrial level. An example of the increasing interest in high-dynamic-range (HDR) imaging is the use of 32-bit floating point data for video and image acquisition and manipulation, which allows a variety of visual effects that closely mimic the real-world visual experience of the end user [1] (see Figure 1). At the industrial level, we are witnessing increasing traction toward supporting HDR and wide color gamut (WCG). WCG leverages HDR for each color channel to display a wider range of colors. Consumer cameras are currently available with a 14- or 16-bit analog-to-digital converter. Rendering devices are also appearing with the capability to display HDR images and video with a peak brightness of up to 4,000 nits and to support WCG (ITU-R Rec. BT.2020 [2]) rather than the historical ITU-R Rec. BT.709 [3]. This trend calls for a widely accepted standard for higher bit depth support that can be seamlessly integrated into existing products and applications.

42 citations


Journal ArticleDOI
TL;DR: How real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation is demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging.
Abstract: In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging.

38 citations


Journal ArticleDOI
TL;DR: The problem of limited dynamic range in the standard imaging pipeline is explained and a survey of state-of-the-art research in HDR imaging is presented, including the technology's history, specialized cameras that capture HDR images directly, and algorithms for capturing HDR images using sequential stacks of differently exposed images.
Abstract: High dynamic range (HDR) imaging enables the capture of an extremely wide range of the illumination present in a scene and so produces images that more closely resemble what we see with our own eyes. In this article, we explain the problem of limited dynamic range in the standard imaging pipeline and then present a survey of state-of-the-art research in HDR imaging, including the technology's history, specialized cameras that capture HDR images directly, and algorithms for capturing HDR images using sequential stacks of differently exposed images. Because this last approach is among the most common methods for capturing HDR images using conventional digital cameras, we also discuss algorithms to address artifacts that occur when using this method for dynamic scenes. Finally, we consider systems for the capture of HDR video and conclude by reviewing open problems and challenges in HDR imaging.
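The stack-based capture path surveyed above, merging differently exposed LDR frames into one radiance estimate, can be sketched for a single pixel. This is a minimal sketch assuming a linear sensor response and a hat-shaped weighting function; both are illustrative choices, not any particular paper's calibration:

```python
def hat_weight(z, z_min=0.05, z_max=0.95):
    """Triangle weight favoring mid-tone samples; near-black or
    near-saturated samples contribute little."""
    if z <= z_min or z >= z_max:
        return 0.0
    return 1.0 - abs(2.0 * z - 1.0)

def merge_exposures(pixels, times):
    """Estimate relative scene radiance at one pixel from a bracketed stack.

    pixels: normalized intensities in [0, 1], one per exposure
    times:  exposure times in seconds (linear sensor response assumed)
    """
    num = den = 0.0
    for z, t in zip(pixels, times):
        w = hat_weight(z)
        num += w * (z / t)  # each exposure votes for radiance z / t
        den += w
    # fall back to the shortest exposure if every sample was rejected
    return num / den if den > 0 else pixels[-1] / times[-1]

# A pixel observed at 1/4 s (clipped), 1/16 s, and 1/64 s:
radiance = merge_exposures([1.0, 0.5, 0.125], [0.25, 0.0625, 0.015625])
```

The clipped long exposure gets zero weight, so the merged radiance comes only from the two valid samples, each of which votes for the same value.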

28 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrated that the proposed tone mapping method with contrast preservation and lightness correction is more suitable for dynamic range compression than existing tone mapping methods, while it also preserves the color of a scene in an effective way.
Abstract: In real-world environments, the human visual system perceives a wide range of luminance in a scene. However, the full range of tones in a high dynamic range (HDR) scene cannot be displayed on standard display devices due to their low dynamic range (LDR). Therefore, tone mapping is necessary to faithfully display an HDR scene on an LDR display device. Existing tone mapping methods have problems because details and contrast in a scene are not preserved faithfully, and they also distort the colors in a scene. Thus, we propose a tone mapping method that preserves detail in an HDR scene using a weighted least squares filter, preserves global contrast using a competitive learning neural network, and then applies a tone reproduction operator to preserve color without shifting the lightness. According to the Helmholtz–Kohlrausch effect, the perception of brightness depends on the lightness, chroma, and hue of a color. For example, the luminance of pixels with specific colors such as red and blue is low in an HDR scene. The proposed method corrects the lightness of pixels according to the color (i.e., lightness, chroma, and hue) of a tone-mapped image. Experimental results with several test sets of images demonstrated that the proposed tone mapping method with contrast preservation and lightness correction is more suitable for dynamic range compression than existing tone mapping methods, while it also preserves the color of a scene in an effective way.
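As a concrete point of reference for what a tone reproduction operator does, here is the classic global Reinhard operator (a standard baseline, not this paper's method), which compresses HDR luminance into [0, 1):

```python
import math

def reinhard_tonemap(luminances, key=0.18):
    """Global Reinhard operator: scale to a key value, then L / (1 + L)."""
    eps = 1e-6  # guard against log(0) for pure-black pixels
    log_avg = math.exp(sum(math.log(eps + L) for L in luminances) / len(luminances))
    out = []
    for L in luminances:
        Ls = key * L / log_avg       # map the scene to the chosen key (middle gray)
        out.append(Ls / (1.0 + Ls))  # compress highlights asymptotically toward 1
    return out

# Six orders of magnitude of luminance fit into [0, 1):
ldr = reinhard_tonemap([0.01, 1.0, 100.0])
```

Such a purely global curve is exactly what the paper improves on: it preserves neither local detail nor color appearance, which motivates the filter-based detail layer and lightness correction described above.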

17 citations


Patent
03 May 2016
TL;DR: In this paper, a method with two reads and two analog-to-digital conversions is used to generate a high dynamic range (HDR) image signal for each pixel, based on signals read from the pixel and on light conditions.
Abstract: An imaging system may include an image sensor having an array of dual gain pixels. Each pixel may be operated using a two read method such that all signals are read in a high gain configuration in order to improve the speed or to reduce the power consumption of imaging operations. Each pixel may be operated using a two read, two analog-to-digital conversion method in which two sets of calibration data are stored. A high dynamic range (HDR) image signal may be produced for each pixel based on signals read from the pixel and on light conditions. The HDR image may be produced based on a combination of high and low gain signals and one or both of the two sets of calibration data. A system of equations may be used for generating the HDR image. The system of equations may include functions of light intensity.

17 citations


Journal ArticleDOI
TL;DR: A novel CS-based image sensor design is presented, allowing a compressive acquisition without changing the classical pixel design, as well as the overall sensor architecture, and HDR CS is enabled thanks to specific time diagrams of the control signals.
Abstract: Standard image sensors feature a dynamic range of about 60 to 70 dB, while the light flux of natural scenes may span over 120 dB. Most imagers designed to address such dynamic ranges need specific, large pixels. However, canonical imagers can be used for high dynamic range (HDR) imaging by performing multicapture acquisitions to compensate for saturation. This technique comes at the expense of large memory requirements and an increase in overall acquisition time. On the other hand, the implementation of compressive sensing (CS) raises the same issues regarding modifications of both the pixel and the readout circuitry. Assuming HDR images are sufficiently sparse, CS claims they can be reconstructed from a few random linear measurements. A novel CS-based image sensor design is presented in this paper, allowing a compressive acquisition without changing the classical pixel design or the overall sensor architecture. In addition to regular CS, HDR CS is enabled thanks to specific time diagrams of the control signals. An alternative nondestructive column-based readout mode constitutes the main change compared to traditional operation. The HDR reconstruction, which is also presented in this paper, is based on merging the information of multicapture compressed measurements while taking into account noise sources and nonlinearities introduced by both the proposed acquisition scheme and its practical implementation.
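The dB figures quoted above follow directly from the standard definition of optical dynamic range as a ratio of brightest to darkest resolvable level; a minimal check:

```python
import math

def dynamic_range_db(l_max, l_min):
    """Optical dynamic range in decibels: 20 * log10(brightest / darkest)."""
    return 20.0 * math.log10(l_max / l_min)

# A sensor resolving about 3162:1 covers roughly 70 dB, while a natural
# scene with a 10^6:1 contrast ratio spans 120 dB.
sensor_db = dynamic_range_db(3162.0, 1.0)
scene_db = dynamic_range_db(1e6, 1.0)
```

The 50 dB gap between the two is the shortfall that multicapture or CS-based HDR acquisition has to close.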

14 citations


Proceedings ArticleDOI
01 Nov 2016
TL;DR: This paper evaluates the performance of the full feature extraction pipeline, including detection and description, on ten different image representations: low dynamic range (LDR), seven different tone mapped (TM) HDR and two HDR imaging (linear and log encoded) representations.
Abstract: High dynamic range (HDR) imaging has potential to facilitate computer vision tasks such as image matching where lighting transformations hinder the matching performance. However, little has been done to quantify the gains with different possible HDR representations for vision algorithms like feature extraction. In this paper, we evaluate the performance of the full feature extraction pipeline, including detection and description, on ten different image representations: low dynamic range (LDR), seven different tone mapped (TM) HDR and two HDR imaging (linear and log encoded) representations. We measure the impact of using these different representations for feature matching using mean average precision (mAP) scores on four illumination change datasets. We perform feature extraction using four popular schemes in the literature: SIFT, SURF, BRISK, FREAK. With respect to previous studies, our observations confirm the advantages of HDR over conventional LDR imagery, and the fact that HDR linear values are not appropriate for vision tasks. However, HDR representations that work best for keypoint detection are not necessarily optimal when the full feature extraction is taken into account.

13 citations


Journal ArticleDOI
TL;DR: A rigorous performance analysis is done and a performance model for the multigrid algorithm is derived that guides an improved implementation, achieving an overall performance of more than 25 frames per second on 16.8-megapixel images for full high dynamic range compression, including data transfers between CPU and GPU.
Abstract: Image-processing applications like high dynamic range imaging can be done efficiently in the gradient space. To do so, the image has to be transformed to gradient space and back. While the forward transformation to gradient space is fast using simple finite differences, the backward transformation requires the solution of a partial differential equation. Although one can use an efficient multigrid solver for the backward transformation, it turns out that a straightforward implementation of the standard algorithm does not lead to satisfactory runtime results for real-time high dynamic range compression of larger 2D X-ray images, even on GPUs. Therefore, we perform a rigorous performance analysis and derive a performance model for our multigrid algorithm that guides us to an improved implementation, where we achieve an overall performance of more than 25 frames per second for 16.8-megapixel images doing full high dynamic range compression, including data transfers between CPU and GPU. Together with a simple OpenGL visualization, it becomes possible to perform real-time parameter studies on medical data sets.
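The forward/backward transform pair the abstract refers to is easiest to see in 1-D, where the backward step is just a running sum. In 2-D the edited gradient field is generally no longer integrable, which is why the backward step becomes a Poisson equation and needs a multigrid (or similar) solver. A 1-D sketch:

```python
def to_gradient(signal):
    """Forward transform: simple finite differences."""
    return [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]

def from_gradient(grads, first):
    """Backward transform in 1-D: integrate the differences back up,
    given the first sample as a boundary condition."""
    out = [first]
    for g in grads:
        out.append(out[-1] + g)
    return out
```

Gradient-domain HDR compression attenuates large entries of `to_gradient` before inverting, so strong edges are compressed while fine detail survives.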

11 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: A baseband quantization scheme that exploits content characteristics to reduce the needed tonal resolution per image is presented that is of low computational complexity and provides robust performance on a wide range of content types in different viewing environments and applications.
Abstract: High dynamic range imaging is currently being introduced to television, cinema and computer games. While it has been found that a fixed encoding for high dynamic range imagery needs at least 11 to 12 bits of tonal resolution, current mainstream image transmission interfaces, codecs and file formats are limited to 10 bits. To be able to use current generation imaging pipelines, this paper presents a baseband quantization scheme that exploits content characteristics to reduce the needed tonal resolution per image. The method is of low computational complexity and provides robust performance on a wide range of content types in different viewing environments and applications.
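The core idea of exploiting per-image content statistics to fit HDR data into fewer bits can be illustrated with a much simpler scheme than the paper's: remap only the occupied code range of each image into 10-bit codes and carry the inverse mapping as metadata. This is a hypothetical sketch of the principle, not the proposed method:

```python
def requantize(codes, out_bits=10):
    """Remap the occupied code range of one image into `out_bits` bits.

    Returns the requantized codes plus the (offset, scale) metadata
    needed to invert the mapping at the receiver.
    """
    lo, hi = min(codes), max(codes)
    out_max = (1 << out_bits) - 1
    scale = (hi - lo) / out_max if hi > lo else 1.0
    q = [round((c - lo) / scale) for c in codes]
    return q, (lo, scale)

def dequantize(q, meta):
    lo, scale = meta
    return [lo + v * scale for v in q]
```

An image whose content spans less than the full fixed-encoding range loses correspondingly less precision, which is the effect the baseband quantization scheme above exploits.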

Journal ArticleDOI
TL;DR: Both quantitative and qualitative results show the superiority of Penrose pixel layout over square pixel layout in HDR imaging, using a Gaussian mixture model for regularization.
Abstract: High-dynamic-range (HDR) imaging is becoming increasingly popular and widespread. The most common multishot HDR approach, based on multiple low-dynamic-range images captured with different exposures, has difficulties in handling camera and object movements. The spatially varying exposures (SVE) technology provides a solution to overcome this limitation by obtaining multiple exposures of the scene in only one shot but suffers from a loss in spatial resolution of the captured image. While aperiodic assignment of exposures has been shown to be advantageous during reconstruction in alleviating resolution loss, almost all the existing imaging sensors use the square pixel layout, which is a periodic tiling of square pixels. We propose the Penrose pixel layout, using pixels in aperiodic rhombus Penrose tiling, for HDR imaging. With the SVE technology, Penrose pixel layout has both exposure and pixel aperiodicities. To investigate its performance, we have to reconstruct HDR images in square pixel layout from Penrose raw images with SVE. Since the two pixel layouts are different, the traditional HDR reconstruction methods are not applicable. We develop a reconstruction method for Penrose pixel layout using a Gaussian mixture model for regularization. Both quantitative and qualitative results show the superiority of Penrose pixel layout over square pixel layout. © 2016 SPIE and IS&T (DOI: 10.1117/1.JEI.25.3.033024)

Patent
26 Oct 2016
TL;DR: In this paper, a high dynamic range imaging method for removing ghosts through moving object detection and extension is presented: a Markov random field based method detects the moving area, an image segmentation based method extends it, and the weight of each pixel is adjusted according to a mask image of the extended moving area before being applied to the final exposure fusion.
Abstract: The invention provides a high dynamic range imaging method that removes ghosts through moving object detection and extension. The key idea of the method is to extend a moving area under edge constraints to strengthen the moving area detection result. A Markov random field based method is used to detect the moving area, an image segmentation based method is used to extend it, the weight of each pixel is adjusted according to a mask image of the extended moving area, and the adjusted weights are applied to the final exposure fusion, effectively solving the problem of ghosts appearing in high dynamic range imaging.


Proceedings ArticleDOI
01 Jul 2016
TL;DR: This paper proposes a method for the generation of high dynamic range image from a single input image using power law transformation and shows the effectiveness of the proposed approach which is verified qualitatively and quantitatively.
Abstract: High Dynamic Range Imaging (HDRI) is an emerging technique for generating high-quality images. The most commonly adopted method for generating HDR images is the fusion of multiple-exposure Low Dynamic Range (LDR) images, but in such cases the output image can be affected by artifacts such as image misalignment and ghosting. Single-shot HDR imaging is an efficient approach to overcome these artifacts. In this paper, we propose a method for generating a high dynamic range image from a single input image using a power law transformation. The power law transformation generates differently exposed images by varying the gamma value of the input image. Once the multiple-exposure images are generated, they are fused based on quality measures such as contrast, saturation, and well-exposedness. The results show the effectiveness of the proposed approach, verified both qualitatively and quantitatively.
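The single-shot pipeline above can be sketched per pixel: synthesize a bracket by varying gamma, then blend by a well-exposedness weight. The gamma values and the Mertens-style Gaussian weight are hypothetical illustrative choices, not the paper's exact quality measures:

```python
import math

def pseudo_exposures(pixel, gammas=(0.5, 1.0, 2.0)):
    """Simulate a bracket from one normalized pixel via a power law:
    gamma < 1 brightens (longer exposure), gamma > 1 darkens (shorter)."""
    return [pixel ** g for g in gammas]

def well_exposedness(v, sigma=0.2):
    # Gaussian weight peaking at middle gray, as in Mertens-style fusion
    return math.exp(-((v - 0.5) ** 2) / (2.0 * sigma ** 2))

def fuse_single_shot(pixel):
    """Generate pseudo-exposures, then blend them by well-exposedness."""
    exposures = pseudo_exposures(pixel)
    weights = [well_exposedness(v) for v in exposures]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, exposures)) / total
```

Because all pseudo-exposures come from one shot, the misalignment and ghosting artifacts of true multi-exposure capture cannot occur, which is the point of the approach.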

Journal ArticleDOI
TL;DR: This paper presents an algorithm which combines focus stacking and HDR imaging in order to produce an image with both extended DOF and dynamic range from a set of differently focused and exposed images.
Abstract: Focus stacking and high dynamic range (HDR) imaging are two paradigms of computational photography. Focus stacking aims to produce an image with greater depth of field (DOF) from a set of images taken with different focus distances; HDR imaging aims to produce an image with higher dynamic range from a set of images taken with different exposure values. In this paper, we present an algorithm which combines focus stacking and HDR imaging in order to produce an image with both extended DOF and dynamic range from a set of differently focused and exposed images. The key step in our algorithm is focus stacking regardless of the differences in exposure values of input images. This step includes photometric and spatial registration of images, and image fusion to produce all-in-focus images. This is followed by HDR radiance estimation and tonemapping. We provide experimental results with real data to illustrate the algorithm.

Proceedings ArticleDOI
20 Mar 2016
TL;DR: Experiments on real image sets show that the proposed algorithm provides comparable or even better image qualities than state-of-the-art approaches, while demanding lower computational resources.
Abstract: We propose a ghost-free high dynamic range (HDR) image synthesis algorithm using a rank minimization framework. Based on the linear dependency among irradiance maps from low dynamic range (LDR) images, we formulate ghost-free HDR imaging as a low-rank matrix completion problem. The main contribution is to solve it efficiently via the augmented Lagrange multiplier (ALM) method, where the optimization variables are updated by closed-form solutions. Experiments on real image sets show that the proposed algorithm provides comparable or even better image qualities than state-of-the-art approaches, while demanding lower computational resources.

Book ChapterDOI
01 Jan 2016
TL;DR: A new fuzzy information measure is introduced by which the quality of HDR images can be improved and both the amount of visible details and the quality of sensing can be increased.
Abstract: Digital image processing can often improve the quality of visual sensing of images and real-world scenes. Recently, high dynamic range (HDR) imaging techniques have become more and more popular in the field. Both classical and soft-computing-based methods have proved advantageous in revealing the non-visible parts of images and realistic scenes. However, extracting as much detail as possible is not always enough, because the sensing capability of the human eye depends on many other factors, and the visual quality is not always proportional to the rate of accurate reproduction of the scene. In this paper, a new fuzzy information measure is introduced by which the quality of HDR images can be improved and both the amount of visible details and the quality of sensing can be increased.

Proceedings ArticleDOI
12 Sep 2016
TL;DR: This work has developed several techniques for the on-chip implementation of single-exposure extension of the dynamic range, working on the upper extreme of the range, i.e., administering the available full-well capacity.
Abstract: High dynamic range imaging is central in application fields like surveillance, intelligent transportation, and advanced driving assistance systems. In some scenarios, methods for dynamic range extension based on multiple captures have shown limitations in apprehending the dynamics of the scene: artifacts appear that can put at risk the correct segmentation of objects in the image. We have developed several techniques for the on-chip implementation of single-exposure extension of the dynamic range. We work on the upper extreme of the range, i.e., administering the available full-well capacity. Parameters are adapted pixel-wise in order to accommodate a high intra-scene range of illuminations.

Proceedings ArticleDOI
TL;DR: This paper describes a real-time ghost-removal hardware implementation added to the high dynamic range video flow of the authors' HDR FPGA-based smart camera, which provides a full-resolution HDR video stream at 60 fps, and presents experimental results showing the efficiency of the implemented method.
Abstract: High dynamic range (HDR) image generation from a set of low dynamic range images taken with different exposure times is a low-cost and easy technique. This technique provides good results for static scenes. Temporal exposure bracketing cannot be applied directly to dynamic scenes, since camera or object motion between bracketed exposures creates ghosts in the resulting HDR image. In this paper, we describe a real-time ghost-removal hardware implementation added to the high dynamic range video flow of our HDR FPGA-based smart camera, which provides a full-resolution (1280 x 1024) HDR video stream at 60 fps. We present experimental results to show the efficiency of our implemented method in removing ghosts.

Proceedings ArticleDOI
26 Jun 2016
TL;DR: Experimental results show that the proposed approach can produce informative HDR composition images balancing the influence caused by the low or high luminance and preserving the contrast, colors, and salience.
Abstract: Linear interpolation is a simple yet effective technique for image composition which works well for low dynamic range (LDR) images with a fixed range of pixel values. However, it cannot provide good performance for high dynamic range (HDR) images because a high-luminance image usually dominates the composition result. This paper proposes a novel algorithm for HDR composition using linear interpolation. Our scheme decomposes the HDR images to be composed into three layers, so that a linear interpolation can be applied individually to each normalized layer. The algorithm contains three steps: image decomposition, image feature composition, and finally HDR map estimation and image re-rendering. Experimental results show that the proposed approach can produce informative HDR composition images, balancing the influence of low and high luminance while preserving contrast, colors, and salience. The comparison demonstrates that our scheme outperforms the current state-of-the-art methods.
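The key step, interpolating between layers only after each has been normalized so that a high-luminance image cannot dominate, can be sketched as follows (a hypothetical one-layer illustration of the principle, not the paper's three-layer decomposition):

```python
def normalize(layer):
    """Rescale a layer's values into [0, 1] so inputs with very different
    absolute luminance contribute on equal footing."""
    lo, hi = min(layer), max(layer)
    rng = hi - lo if hi > lo else 1.0
    return [(v - lo) / rng for v in layer]

def compose_layers(layer_a, layer_b, alpha=0.5):
    """Linear interpolation applied per normalized layer."""
    a, b = normalize(layer_a), normalize(layer_b)
    return [alpha * x + (1.0 - alpha) * y for x, y in zip(a, b)]
```

Without the normalization step, blending `[0, 50, 100]` with `[0, 500, 1000]` would be dominated by the second input's large values; after normalization both contribute equally.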

Patent
Hiroyuki Kobayashi1
02 Feb 2016
TL;DR: In this article, a device is presented that combines a first image photographed with a first amount of exposure and a second image photographed with a second, lower amount of exposure, generating a combined image with a wider dynamic range than either input.
Abstract: A device that combines a first image photographed with a first amount of exposure and a second image photographed with a second amount of exposure lower than the first amount of exposure, thereby generating a combined image having a wider dynamic range than a dynamic range for the amounts of exposure of the first and second images, the device includes a motion region extraction unit configured to extract at least one motion region in which an object moving between the first image and the second image is shown; a combining ratio determining unit configured to increase a combining ratio of the second image to the first image for a pixel in a background region outside of the at least one motion region such that the higher a luminance value of a pixel of the first image, the higher the combining ratio.
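The luminance-dependent combining ratio described in the claim can be sketched as a simple ramp: the brighter the long-exposure pixel, the more the short exposure is trusted. The threshold and the linear ramp are hypothetical choices for illustration, not the patent's specified mapping:

```python
def combining_ratio(lum_first, threshold=0.7):
    """Ratio of the short (second) exposure blended into the long (first)
    exposure, rising from 0 at `threshold` to 1 at saturation."""
    if lum_first <= threshold:
        return 0.0
    return min(1.0, (lum_first - threshold) / (1.0 - threshold))

def combine(p_first, p_second, lum_first):
    """Blend one background-region pixel of the two exposures."""
    r = combining_ratio(lum_first)
    return (1.0 - r) * p_first + r * p_second
```

Inside detected motion regions a scheme like this would instead pick a single source image, which is how the device avoids ghosting.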

DOI
01 Jan 2016
TL;DR: Two new real-time computational imaging systems that provide new functionalities and better performance compared to conventional cameras are presented in this thesis, including GigaEye II, a modular high-resolution imaging system that introduces the concept of distributed image processing in real-time camera systems.
Abstract: Standard cameras are designed to truthfully mimic the human eye and the visual system. In recent years, commercially available cameras have become more complex and offer higher image resolutions than ever before. However, the quality of conventional imaging methods is limited by several parameters, such as the pixel size, the lens system, the diffraction limit, etc. Rapid technological advancements, the increase in available computing power, and the introduction of Graphics Processing Units (GPU) and Field-Programmable Gate Arrays (FPGA) open new possibilities in the computer vision and computer graphics communities. Researchers are now focusing on utilizing the immense computational power offered by modern processing platforms to create imaging systems with novel or significantly enhanced capabilities compared to standard ones. One popular type of computational imaging system offering new possibilities is the multi-camera system. This thesis focuses on FPGA-based multi-camera systems that operate in real-time. The aim of the multi-camera systems presented in this thesis is to offer wide field-of-view (FOV) video coverage at high frame rates. The wide FOV is achieved by constructing a panoramic image from the images acquired by the multi-camera system. Two new real-time computational imaging systems that provide new functionalities and better performance compared to conventional cameras are presented in this thesis. Each camera system design and implementation is analyzed in detail, built, and tested in real-time conditions. Panoptic is a miniaturized low-cost multi-camera system that reconstructs a 360-degree view in real-time. Since it is an easily portable system, it provides means to capture the complete surrounding light field in dynamic environments, such as when mounted on a vehicle or a flying drone.
The second presented system, GigaEye II, is a modular high-resolution imaging system that introduces the concept of distributed image processing in real-time camera systems. This thesis explains in detail how such a concept can be efficiently used in real-time computational imaging systems. The purpose of computational imaging systems in the form of multi-camera systems does not end with real-time panoramas. The application scope of these cameras is vast. They can be used in 3D cinematography, for broadcasting live events, or for immersive telepresence experiences. The final chapter of this thesis presents three potential applications of these systems: object detection and tracking, high dynamic range (HDR) imaging, and observation of multiple regions of interest. Object detection and tracking, and observation of multiple regions of interest, are extremely useful and desired capabilities of surveillance systems, in the security and defense industries, and in the fast-growing industry of autonomous vehicles. On the other hand, high dynamic range imaging is becoming a common option in consumer-market cameras, and the presented method allows instantaneous capture of HDR videos. Finally, this thesis concludes with a discussion of real-time multi-camera systems, their advantages, their limitations, and future predictions.

Journal ArticleDOI
TL;DR: A multidimensional method is proposed for analysing light from sky vault high dynamic range images, using a classification system to check the relevance of each image attribute and to choose the most suitable sky model from the ISO 15469:2004 (CIE S 011/E:2003) standard.
Abstract: Although the development of high dynamic range images allowed broadening sky vault analysis for lighting research purposes, sky classification is still performed using subjective methods. There is no established metric for comparison, only isolated approaches that do not address the characteristics of the images. This paper presents a multidimensional method for analysing light from sky vault high dynamic range images. A Matlab routine was applied that uses a classification system to check the relevance of each image attribute, choosing the most suitable sky model from the ISO 15469:2004 (CIE S 011/E:2003) standard. The results, which can be plotted or exported, indicate that the routine is able to choose the most relevant model for the photographed sky, thus allowing the creation of a sky classification database.

Proceedings ArticleDOI
18 Dec 2016
TL;DR: This paper achieves robustness to motion and saturation via an energy minimization formulation with spatio-temporal constraints and demonstrates the superiority of the method over state-of-the-art techniques for a variety of challenging dynamic scenes.
Abstract: Given a set of sequential exposures, High Dynamic Range imaging is a popular method for obtaining high-quality images for fairly static scenes. However, this typically suffers from ghosting artifacts for scenes with significant motion. Also, existing techniques cannot handle heavily saturated regions in the sequence. In this paper, we propose an approach that handles both the issues mentioned above. We achieve robustness to motion (both object and camera) and saturation via an energy minimization formulation with spatio-temporal constraints. The proposed approach leverages information from the neighborhood of heavily saturated regions to correct such regions. The experimental results demonstrate the superiority of our method over state-of-the-art techniques for a variety of challenging dynamic scenes.

Proceedings ArticleDOI
01 Dec 2016
TL;DR: This paper empirically shows that local, pixel-wise color mapping yields sub-optimal results and proposes a non-local mapping based on features learned directly from the image texture using a Convolutional Neural Network, demonstrated on various applications in the HDR domain.
Abstract: Color mapping is a fundamental task for many important computer vision applications such as High Dynamic Range Imaging (HDRI), stereo matching, and camera calibration. Typically, the task of color mapping is to transfer the colors of an image to a reference distribution. In this way it is possible, for example, to simulate different camera exposures using a single image, e.g., by transforming a dark image into a brighter image showing the same scene. Most approaches to color mapping are local in the sense that they just apply a pixel-wise (local) mapping to generate the color-mapped image. In this paper, we empirically show that this approach yields sub-optimal results, and we propose a non-local mapping based on features learned directly from the image texture using a Convolutional Neural Network. In this way, we learn to generate the image that would have been captured at a certain multiple of the actual exposure time. We demonstrate our method on various applications in the HDR domain and compare our results against other state-of-the-art methods, obtaining excellent results both visually and numerically.
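The pixel-wise (local) baseline that the paper argues against can be sketched directly. The gamma value and function name below are assumptions for illustration; a real pipeline would use the camera's estimated response curve rather than a fixed gamma, and, being purely per-pixel, this baseline cannot recover texture lost to clipping, which is exactly the motivation for a learned, non-local mapping.

```python
# Local exposure-simulation baseline (assumed gamma, not the paper's CNN):
# undo an assumed gamma, scale in linear light, re-apply the gamma.

GAMMA = 2.2  # assumed display gamma, standing in for the true camera response

def simulate_exposure(pixel, k):
    """pixel in [0,1]; k = exposure-time factor (k=2 -> one stop brighter)."""
    linear = pixel ** GAMMA                  # approx. linear radiance
    return min(1.0, k * linear) ** (1.0 / GAMMA)

bright = simulate_exposure(0.5, 2.0)         # one stop up, still in [0,1]
```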

Journal ArticleDOI
TL;DR: A hybrid three-stage tone mapping approach is presented that outperforms a traditional individual TMO, along with a GPU version that is perceptually equal to the standard version but with much improved computational performance.
Abstract: In this paper, we present a new technique for displaying High Dynamic Range (HDR) images on Low Dynamic Range (LDR) displays in an efficient way on the GPU. The described process has three stages. First, the input image is segmented into luminance zones. Second, the tone mapping operator (TMO) that performs best in each zone is automatically selected. Finally, the resulting tone mapping (TM) outputs for each zone are merged, generating the final LDR output image. To establish the TMO that performs best in each luminance zone, we conducted a preliminary psychophysical experiment using a set of HDR images and six different TMOs. We validated our composite technique on several (new) HDR images and conducted a further psychophysical experiment, using an HDR display as the reference, which establishes the advantages of our hybrid three-stage approach over a traditional individual TMO. Finally, we present a GPU version, which is perceptually equal to the standard version but with much improved computational performance.
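The zonal idea can be illustrated with a toy sketch. The two operators and the threshold below are assumptions for illustration, not the TMOs selected by the paper's psychophysical experiment, and the paper additionally merges the per-zone outputs rather than switching hard at the boundary as done here.

```python
# Toy zonal tone mapping (assumed operators and threshold): apply a different
# TMO to dark and bright luminance zones, mapping HDR luminance into [0, 1].
import math

def tmo_log(l, l_max):
    """Logarithmic TMO: compresses highlights strongly."""
    return math.log1p(l) / math.log1p(l_max)

def tmo_reinhard(l):
    """Reinhard-style global operator: l / (1 + l)."""
    return l / (1.0 + l)

def zonal_tone_map(luminances, threshold=1.0):
    """Reinhard for the dark zone, logarithmic for the bright zone."""
    l_max = max(luminances)
    out = []
    for l in luminances:
        out.append(tmo_reinhard(l) if l < threshold else tmo_log(l, l_max))
    return out

ldr = zonal_tone_map([0.1, 0.5, 10.0, 100.0])
```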

Proceedings ArticleDOI
01 Dec 2016
TL;DR: This work introduces an objective function consisting of data-fidelity and gradient-based constraint terms, and produces HDR images by minimizing it; the terms are defined from the estimated CRF and the fused gradients, respectively, to estimate the irradiance values and gradients of the HDR image.
Abstract: We propose a fusion method for high dynamic range (HDR) imaging based on the estimated camera response function (CRF) and gradients fused from the input multi-exposure images. We introduce an objective function consisting of data-fidelity and gradient-based constraint terms, and HDR images are produced by minimizing it. These terms are defined from the estimated CRF and the fused gradients, respectively, to estimate the irradiance values and gradients of the HDR image. Consequently, the proposed method produces natural HDR images, inferred from their multi-exposure inputs, with fine details. Through simulations, we show that the proposed method outperforms previous ones both objectively and perceptually.
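The data-fidelity half of such a formulation reduces, per pixel, to CRF-inverted, exposure-normalised irradiance estimates combined with confidence weights. The sketch below assumes a simple gamma curve standing in for the estimated CRF and omits the gradient-constraint term entirely; all names are illustrative.

```python
# CRF-based irradiance estimation sketch (assumed gamma CRF, classic hat
# weighting): invert the response, normalise by exposure time, average with
# weights that trust mid-range pixel values and distrust clipped ones.

def inverse_crf(z, gamma=2.2):
    """Map a pixel value in [0,1] back to (relative) sensor irradiance."""
    return z ** gamma

def hat_weight(z):
    """1 at mid-gray, 0 at black/white: clipped values carry no information."""
    return max(0.0, 1.0 - abs(2.0 * z - 1.0))

def estimate_irradiance(samples):
    """samples: (pixel_value, exposure_time) pairs for one pixel.
    Weighted average of per-exposure irradiance estimates."""
    num = den = 0.0
    for z, t in samples:
        w = hat_weight(z)
        num += w * inverse_crf(z) / t
        den += w
    return num / den if den else 0.0
```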

Proceedings ArticleDOI
11 Jul 2016
TL;DR: This work presents a robust HDR imaging system which can deal with blurry LDR images, overcoming the limitations of most existing HDR methods.
Abstract: High dynamic range (HDR) images can show more detail and luminance information on typical display devices than low dynamic range (LDR) images. We present a robust HDR imaging system which can deal with blurry LDR images, overcoming the limitations of most existing HDR methods. Experiments on real images show the effectiveness and competitiveness of the proposed method.

Proceedings ArticleDOI
01 Jan 2016
TL;DR: A fuzzy information measure is introduced which evaluates the level of information in pictures; this amount is used to scale the qualification and transformation of images.
Abstract: Digital image processing can often improve the quality of visual sensing of images and real-world scenes; however, the optimization of the algorithms used is not always an easy task. The suitable parameter settings of the methods often depend on the features of the scenes and may also depend on the aim of the (further) processing. In this paper, a fuzzy information measure is introduced which evaluates the level of information in pictures. The idea behind the technique is that the amount of information in an image is strongly related to the number and complexity of the objects in the image. The primary information about the objects is usually related to their boundaries, i.e. characteristic pixels such as corners and edges carry the most relevant information about the image content. The measure presented in this paper sums up the fuzzy level of detail, and this amount is used to scale the qualification and transformation of images. The presented technique can advantageously be built into different image processing algorithms, notably those used in Image Quality Improvement and High Dynamic Range Imaging. The qualification measure may also be applied to increase the color differentiation capability of the human eye and thus to improve visual sensing.
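An edge-based information score in this spirit can be sketched as follows. The ramp membership function and its thresholds are assumptions for illustration, not the authors' exact measure: gradient magnitude at each interior pixel is mapped through a fuzzy membership into [0, 1] and summed, so images with more and stronger edges score higher.

```python
# Fuzzy edge-information sketch (assumed membership, not the paper's measure):
# sum a ramp membership of the central-difference gradient magnitude.

def fuzzy_edge_information(image, low=0.05, high=0.4):
    """image: 2-D list of floats in [0,1]; returns the summed fuzzy edge level."""
    h, w = len(image), len(image[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]
            gy = image[y + 1][x] - image[y - 1][x]
            g = (gx * gx + gy * gy) ** 0.5
            # ramp membership: 0 below `low`, 1 above `high`, linear between
            total += min(1.0, max(0.0, (g - low) / (high - low)))
    return total

flat  = [[0.5] * 5 for _ in range(5)]          # no objects, no edges
edged = [[0.0] * 2 + [1.0] * 3 for _ in range(5)]  # one vertical boundary
```

As the abstract suggests, such a score could then gate how aggressively an enhancement or HDR pipeline transforms a given image.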