
Showing papers on "Tone mapping published in 2012"


Journal ArticleDOI
TL;DR: A new scheme is proposed for handling HDR scenes by integrating locally adaptive scene detail capture with suppression of the gradient reversals introduced by the local adaptation; the scheme also functions as a tone mapping from an HDR image to an SDR image and is superior to both global and local tone mapping operators.
Abstract: The luminance of a natural scene is often of high dynamic range (HDR). In this paper, we propose a new scheme to handle HDR scenes by integrating locally adaptive scene detail capture and suppressing gradient reversals introduced by the local adaptation. The proposed scheme is novel in capturing an HDR scene with a standard dynamic range (SDR) device and synthesizing an image suitable for SDR displays. In particular, we use an SDR capture device to record scene details (i.e., the visible contrasts and the scene gradients) in a series of SDR images with different exposure levels. Each SDR image responds to a fraction of the HDR and partially records scene details. With the captured SDR image series, we first calculate the image luminance levels that maximize the visible contrasts, and then the scene gradients embedded in these images. Next, we synthesize an SDR image by using a probabilistic model that preserves the calculated image luminance levels and suppresses reversals in the image luminance gradients. The synthesized SDR image contains many more scene details than any of the captured SDR images. Moreover, the proposed scheme also functions as a tone mapping of an HDR image to an SDR image, and it is superior to both global and local tone mapping operators: global operators fail to preserve visual details when the contrast ratio of a scene is large, whereas local operators often produce halos in the synthesized SDR image. The proposed scheme does not require any human interaction or parameter tuning for different scenes. Subjective evaluations have shown that it is preferred over a number of existing approaches.
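The core idea of weighting each exposure by how much local detail it records can be sketched as a simple contrast-weighted fusion. This is a simplification for illustration only; the paper's probabilistic model and its gradient-reversal suppression are not reproduced here.

```python
def local_contrast(img, i):
    """Absolute central difference as a crude visibility measure."""
    left = img[max(i - 1, 0)]
    right = img[min(i + 1, len(img) - 1)]
    return abs(right - left)

def fuse_exposures(exposures):
    """Blend SDR exposures, weighting each pixel by its local contrast.

    `exposures` is a list of equally sized 1-D luminance rows in [0, 1];
    pixels coming from the exposure that recorded the most local detail
    dominate the fused result.
    """
    n = len(exposures[0])
    fused = []
    for i in range(n):
        weights = [local_contrast(e, i) + 1e-6 for e in exposures]
        total = sum(weights)
        fused.append(sum(w * e[i] for w, e in zip(weights, exposures)) / total)
    return fused
```

Because the output is a convex combination of the inputs, every fused pixel stays between the darkest and brightest exposure at that location.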

155 citations


Book ChapterDOI
07 Oct 2012
TL;DR: The proposed implementation demonstrates that the bilateral filter can be as efficient as the recent edge-preserving filtering methods, especially for high-dimensional images, and derives a new filter named gradient domain bilateral filter from the proposed recursive implementation.
Abstract: This paper proposes a recursive implementation of the bilateral filter. Unlike previous methods, this implementation yields a bilateral filter whose computational complexity is linear in both input size and dimensionality. The proposed implementation demonstrates that the bilateral filter can be as efficient as recent edge-preserving filtering methods, especially for high-dimensional images. Let N be the number of pixels in the image and D the number of channels; the computational complexity of the proposed implementation is then O(ND). It is more efficient than the state-of-the-art bilateral filtering methods that have a computational complexity of O(ND²) [1] (linear in the image size but polynomial in dimensionality) or O(Nlog(N)D) [2] (linear in the dimensionality and thus faster than [1] for high-dimensional filtering). Specifically, the proposed implementation takes about 43 ms to process a one-megapixel color image (and about 14 ms to process a one-megapixel grayscale image), which is about 18× faster than [1] and 86× faster than [2]. The experiments were conducted on a MacBook Air laptop with a 1.8 GHz Intel Core i7 CPU and 4 GB of memory. The memory complexity of the proposed implementation is also low: only as much memory as the image itself is required (memory for the images before and after filtering is excluded). This paper also derives a new filter, named the gradient domain bilateral filter, from the proposed recursive implementation. Unlike the bilateral filter, it performs bilateral filtering on the gradient domain. It can be used for edge-preserving filtering but avoids the sharp edges that are observed to cause visible artifacts in some computer graphics tasks. The proposed implementations were shown to be effective for a number of computer vision and computer graphics applications, including stylization, tone mapping, detail enhancement and stereo matching.
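The O(N) character of such a filter can be illustrated in one dimension: a single causal recursive pass whose feedback coefficient is attenuated by a range kernel, so smoothing halts at strong edges regardless of the spatial scale. This is a minimal sketch of the principle, not the paper's full multi-pass, multi-dimensional implementation.

```python
import math

def recursive_bilateral_1d(signal, sigma_s=2.0, sigma_r=0.1):
    """One causal pass of a recursive edge-aware filter.

    The feedback coefficient `a` combines a fixed spatial decay with a
    Gaussian range kernel on the intensity jump between neighbors, so
    large jumps shrink `a` toward zero and the edge is preserved.
    Cost is O(N) per pass, independent of sigma_s.
    """
    base = math.exp(-1.0 / sigma_s)  # spatial decay per pixel step
    out = [signal[0]]
    for i in range(1, len(signal)):
        diff = signal[i] - signal[i - 1]
        a = base * math.exp(-(diff * diff) / (2 * sigma_r ** 2))
        out.append((1 - a) * signal[i] + a * out[-1])
    return out
```

A full implementation would add an anti-causal pass and operate per channel; the key point is that the per-pixel work does not grow with the filter's spatial support.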

154 citations


Book ChapterDOI
Lu Yuan1, Jian Sun1
07 Oct 2012
TL;DR: This paper automates the interactive correction technique by estimating the image-specific S-shaped nonlinear tone curve that best fits the input image, using a new Zone-based, region-level optimal exposure evaluation that considers both the visibility of individual regions and the relative contrast between regions.
Abstract: We study the problem of automatically correcting the exposure of an input image. Generic auto-exposure correction methods usually fail in individual over-/under-exposed regions. Interactive corrections may fix this issue, but adjusting every photograph requires skill and time. This paper automates the interactive correction technique by estimating the image-specific S-shaped nonlinear tone curve that best fits the input image. Our first contribution is a new Zone-based, region-level optimal exposure evaluation, which considers both the visibility of individual regions and the relative contrast between regions. A detail-preserving S-curve adjustment is then applied, based on the optimal exposure, to obtain the final output. We show that our approach enables better corrections compared with popular image editing tools and other automatic methods.
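An S-shaped tone adjustment of the kind the paper estimates can be sketched with separate shadow and highlight strengths. This is an illustrative curve, not Yuan and Sun's fitted, detail-preserving formulation; the parameter names are my own.

```python
def s_curve(x, shadow=0.2, highlight=0.2):
    """Tone curve on x in [0, 1] with shadow and highlight strengths.

    Positive `shadow` lifts dark pixels (its term peaks near x = 1/3),
    positive `highlight` pulls bright pixels down (peaking near x = 2/3);
    the endpoints 0 and 1 are fixed, and the curve stays monotone for
    small strengths.
    """
    lift = shadow * x * (1.0 - x) ** 2      # acts mainly on shadows
    cut = highlight * (1.0 - x) * x ** 2    # acts mainly on highlights
    return min(1.0, max(0.0, x + lift - cut))
```

In the paper's setting, the two strengths would be chosen per image from the Zone-based exposure evaluation rather than fixed constants.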

152 citations


Proceedings ArticleDOI
28 Feb 2012
TL;DR: This paper presents and discusses the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscope data.
Abstract: 2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data.

98 citations


Patent
31 May 2012
TL;DR: In this paper, the authors present a system for local tone mapping using a spatially varying local tone curve applied to a pixel of the image data to preserve local contrast when displayed on the display.
Abstract: Systems and methods for local tone mapping are provided. In one example, an electronic device includes an electronic display, an imaging device, and an image signal processor. The electronic display may display images of a first bit depth, and the imaging device may include an image sensor that obtains image data of a higher bit depth than the first bit depth. The image signal processor may process the image data, and may include local tone mapping logic that applies a spatially varying local tone curve to a pixel of the image data to preserve local contrast when displayed on the display. The local tone mapping logic may smooth the local tone curve applied to a pixel when the intensity difference between that pixel and another nearby pixel exceeds a threshold.

93 citations


Journal ArticleDOI
TL;DR: This study presents two real-time architectures using resource constrained FPGA and GPU devices for the computation of a new algorithm which performs tone mapping, contrast enhancement, and glare mitigation.
Abstract: Low-level computer vision algorithms have high computational requirements. In this study, we present two real-time architectures using resource constrained FPGA and GPU devices for the computation of a new algorithm which performs tone mapping, contrast enhancement, and glare mitigation. Our goal is to implement this operator in a portable and battery-operated device, in order to obtain a low vision aid specially aimed at visually impaired people who struggle to manage themselves in environments where illumination is not uniform or changes rapidly. This aid device processes in real-time, with minimum latency, the input of a camera and shows the enhanced image on a head mounted display (HMD). Therefore, the proposed operator has been implemented on battery-operated platforms, one based on the GPU NVIDIA ION2 and another on the FPGA Spartan III, which perform at rates of 30 and 60 frames per second, respectively, when working with VGA resolution images (640 × 480).

85 citations


Journal ArticleDOI
01 Nov 2012
TL;DR: This work reduces computational complexity with respect to the state-of-the-art, and adds a spatially varying model of lightness perception to scene reproduction.
Abstract: Managing the appearance of images across different display environments is a difficult problem, exacerbated by the proliferation of high dynamic range imaging technologies. Tone reproduction is often limited to luminance adjustment and is rarely calibrated against psychophysical data, while color appearance modeling addresses color reproduction in a calibrated manner, albeit over a limited luminance range. Only a few image appearance models bridge the gap, borrowing ideas from both areas. Our take on scene reproduction reduces computational complexity with respect to the state-of-the-art, and adds a spatially varying model of lightness perception. The predictive capabilities of the model are validated against all psychophysical data known to us, and visual comparisons show accurate and robust reproduction for challenging high dynamic range scenes.

75 citations


Proceedings ArticleDOI
TL;DR: This paper proposes a temporal coherency algorithm that is designed to analyze a video as a whole, and from its characteristics adapts each tone mapped frame of a sequence in order to preserve the temporal coherency.
Abstract: One of the main goals of digital imagery is to improve the capture and the reproduction of real or synthetic scenes on display devices with restricted capabilities. Standard imagery techniques are limited with respect to the dynamic range that they can capture and reproduce. High Dynamic Range (HDR) imagery aims at overcoming these limitations by capturing, representing and displaying the physical value of light measured in a scene. However, current commercial displays will not vanish instantly, hence backward compatibility between HDR content and those displays is required. This compatibility is ensured through an operation called tone mapping that retargets the dynamic range of HDR content to the restricted dynamic range of a display device. Although many tone mapping operators exist, they focus mostly on still images. The challenges of tone mapping HDR videos are more complex than those of still images since the temporal dimension is added. In this work, the focus was on the preservation of temporal coherency when performing video tone mapping. Two main research avenues are investigated: the subjective quality of tone mapped video content and its compression efficiency. Indeed, tone mapping each frame of a video sequence independently leads to temporal artifacts. Those artifacts impair the visual quality of the tone mapped video sequence and need to be reduced. Through experiments with HDR videos and Tone Mapping Operators (TMOs), we categorized temporal artifacts into six categories. We tested video tone mapping operators (techniques that take into account more than a single frame) on the different types of temporal artifact and observed that they could handle only three out of the six types. Consequently, we designed a post-processing technique that adapts to any tone mapping operator and reduces the three types of artifact not dealt with.
A subjective evaluation reported that our technique always preserves or increases the subjective quality of tone mapped content for the sequences and TMOs tested. The second topic investigated was the compression of tone mapped video content. So far, work on tone mapping and video compression has focused on optimizing a tone map curve to achieve a high compression ratio. These techniques change the rendering of the video to reduce its entropy, thereby removing any artistic intent or constraint on the final result. That is why we propose a technique that reduces the entropy of a tone mapped video without altering its rendering.
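One common way to suppress a typical temporal artifact, global flicker, is to temporally filter the tone mapping operator's per-frame parameters rather than the pixels themselves. The leaky integrator below is a generic sketch of that idea, not the paper's post-processing technique.

```python
def smooth_keys(frame_keys, alpha=0.9):
    """Exponentially smooth per-frame exposure keys to suppress flicker.

    `alpha` is the memory of the filter: each output key is a blend of
    the previous smoothed key and the current frame's estimate, so a
    sudden scene change is adapted to gradually instead of abruptly.
    """
    smoothed = [frame_keys[0]]
    for k in frame_keys[1:]:
        smoothed.append(alpha * smoothed[-1] + (1 - alpha) * k)
    return smoothed
```

With a constant input the output is unchanged; after a step change the smoothed key converges toward the new value over roughly 1/(1 - alpha) frames.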

73 citations


Proceedings ArticleDOI
12 Jun 2012
TL;DR: A comparative study of the most famous tone mapping algorithms is presented; it concludes that Reinhard's tone mapping operators are the best in terms of visual pleasantness and maintaining image integrity.
Abstract: The real world contains a wide range of intensities that cannot be captured with traditional imaging devices. Moreover, even if these images are captured with special procedures, existing display devices cannot display them. This paper presents a comparative study of the most famous tone mapping algorithms. Tone mapping is the process of compressing high dynamic range images into a low dynamic range so they can be displayed by traditional display devices. The study implements six tone mapping algorithms and performs a comparison between them by visual rating. Independent participants were asked to rate these images based on a given rating scheme. The study concluded that Reinhard's tone mapping operators are the best in terms of visual pleasantness and maintaining image integrity. In addition, exponential tone mapping operators achieved better ratings compared to the logarithmic operators.
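Reinhard's global operator, one of the operators typically included in such comparisons, scales scene luminance so that its log-average maps to a mid-gray key and then compresses with L/(1+L); a minimal sketch:

```python
import math

def reinhard_global(lum, key=0.18):
    """Reinhard et al.'s global operator: Ld = L' / (1 + L').

    `lum` is a flat list of scene luminances; L' scales them so that the
    log-average (geometric mean) luminance maps to `key`. Output values
    lie in [0, 1) and preserve the input ordering.
    """
    eps = 1e-6  # guard against log(0)
    log_avg = math.exp(sum(math.log(eps + v) for v in lum) / len(lum))
    out = []
    for v in lum:
        scaled = key * v / log_avg
        out.append(scaled / (1 + scaled))
    return out
```

The full operator also offers a white-point variant that burns out the very brightest values; the plain form above asymptotes to 1 without ever reaching it.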

59 citations


Proceedings Article
01 Jan 2012
TL;DR: The purpose of this research is to present a complete TMO solution that can be implemented in real-time hardware to yield a high-quality automated LDR video stream suitable for direct use by today’s recording, broadcast, and display equipment.
Abstract: Tone mapping of HDR images has been studied extensively since the introduction of digital HDR capture methods. However, until recently HDR video had not been realized in a viable form such as that given by Tocci et al. [1], which will lead to readily available HDR video cameras. Because they capture video with a much broader dynamic range than can currently be displayed, these cameras present a unique challenge. In order to maintain backward compatibility with legacy broadcast, recording, and display equipment, HDR video cameras need to provide a real-time tone-mapped LDR video stream without the benefit of post-processing steps. The purpose of this research is to present a complete TMO solution that can be implemented in real-time hardware to yield a high-quality automated LDR video stream suitable for direct use by today’s recording, broadcast, and display equipment.

56 citations


Journal ArticleDOI
TL;DR: A new quality assessment system is developed, where both temporal consistency and spatial consistency are introduced to account for ghosting artifacts and results of various dynamic scenes are shown to prove the effectiveness of the proposed method.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: This paper provides a "histogram-based" method for inverse tone mapping that contains a content-adaptive inverse tone mapping operator with different responses for different scene characteristics; scene classification is included in the algorithm to select the environment parameters.
Abstract: Tone mapping is an important technique used for displaying high dynamic range (HDR) content on low dynamic range (LDR) devices. On the other hand, inverse tone mapping enables LDR content to appear with an HDR effect on HDR displays. Existing inverse tone mapping algorithms usually focus on enhancing the luminance in over-exposed regions, with little (or even no) effort on processing the well-exposed regions. In this paper, we propose an algorithm that enhances not only the over-exposed regions but also the remaining well-exposed regions. This paper provides a "histogram-based" method for inverse tone mapping. The proposed algorithm contains a content-adaptive inverse tone mapping operator, which has different responses for different scene characteristics. Scene classification is included in the algorithm to select the environment parameters. Lastly, enhancement of the over-exposed regions, which reconstructs the truncated information, is performed.

Patent
01 Mar 2012
TL;DR: In this article, an output tone-mapped image is generated based on the high-resolution gray scale image and the local multiscale gray scale ratio image, each being of a different spatial resolution level.
Abstract: In a method to generate a tone-mapped image from a high-dynamic range image (HDR), an input HDR image is converted into a logarithmic domain and a global tone-mapping operator generates a high-resolution gray scale ratio image from the input HDR image. Based at least in part on the high-resolution gray scale ratio image, at least two different gray scale ratio images are generated and are merged together to generate a local multiscale gray scale ratio image that represents a weighted combination of the at least two different gray scale ratio images, each being of a different spatial resolution level. An output tone-mapped image is generated based on the high-resolution gray scale image and the local multiscale gray scale ratio image.

Patent
25 Sep 2012
TL;DR: In this paper, a vision system for a vehicle includes an imaging sensor disposed at the vehicle and having an exterior field of view, and an image processor, which is operable to process successive frames of captured image data.
Abstract: A vision system for a vehicle includes an imaging sensor disposed at the vehicle and having an exterior field of view, and an image processor. The imaging sensor captures image data and the image processor is operable to process successive frames of captured image data. Responsive to a determination of a low visibility driving condition, the image processor increases the contrast of the captured images to enhance discrimination of objects in the field of view of the imaging sensor. The image processor may execute a first brightness transfer function on frames of captured image data to enhance the contrast of captured image data, and may execute a second brightness transfer function over successive frames of earlier captured image data and may execute a tone mapping of the earlier captured image data and the currently captured image data to generate a contrast enhanced output image.

Proceedings ArticleDOI
16 Jun 2012
TL;DR: This work uses a non-parametric Bayesian regression technique - local Gaussian process regression - to learn for each pixel's narrow-gamut color a probability distribution over the scene colors that could have created it, and shows that these distributions are effective in simple probabilistic adaptations of two popular applications: multi-exposure imaging and photometric stereo.
Abstract: Consumer digital cameras use tone-mapping to produce compact, narrow-gamut images that are nonetheless visually pleasing. In doing so, they discard or distort substantial radiometric signal that could otherwise be used for computer vision. Existing methods attempt to undo these effects through deterministic maps that de-render the reported narrow-gamut colors back to their original wide-gamut sensor measurements. Deterministic approaches are unreliable, however, because the reverse narrow-to-wide mapping is one-to-many and has inherent uncertainty. Our solution is to use probabilistic maps, providing uncertainty estimates useful to many applications. We use a non-parametric Bayesian regression technique — local Gaussian process regression — to learn for each pixel's narrow-gamut color a probability distribution over the scene colors that could have created it. Using a variety of consumer cameras we show that these distributions, once learned from training data, are effective in simple probabilistic adaptations of two popular applications: multi-exposure imaging and photometric stereo. Our results on these applications are better than those of corresponding deterministic approaches, especially for saturated and out-of-gamut colors.

Journal ArticleDOI
01 Jul 2012
TL;DR: The proposed binocular tone mapping framework generates a binocular low-dynamic range (LDR) image pair that preserves more human-perceivable visual content than a single LDR image using the additional image domain.
Abstract: By extending from monocular displays to binocular displays, one additional image domain is introduced. Existing binocular display systems only utilize this additional image domain for stereopsis. Our human vision is not only able to fuse two displaced images, but also two images with difference in detail, contrast and luminance, up to a certain limit. This phenomenon is known as binocular single vision. Humans can perceive more visual content via binocular fusion than just a linear blending of two views. In this paper, we make a first attempt in computer graphics to utilize this human vision phenomenon, and propose a binocular tone mapping framework. The proposed framework generates a binocular low-dynamic range (LDR) image pair that preserves more human-perceivable visual content than a single LDR image using the additional image domain. Given a tone-mapped LDR image (left, without loss of generality), our framework optimally synthesizes its counterpart (right) in the image pair from the same source HDR image. The two LDR images are different, so that they can aggregately present more human-perceivable visual richness than a single arbitrary LDR image, without triggering visual discomfort. To achieve this goal, a novel binocular viewing comfort predictor (BVCP) is also proposed to prevent such visual discomfort. The design of BVCP is based on the findings in vision science. Through our user studies, we demonstrate the increase of human-perceivable visual richness and the effectiveness of the proposed BVCP in conservatively predicting the visual discomfort threshold of human observers.

Journal ArticleDOI
TL;DR: A noise reduction method and an adaptive contrast enhancement for local tone mapping (TM) that compresses the luminance of a high dynamic range (HDR) image and decomposes the compressed luminance into multi-scale subbands using the discrete wavelet transform.
Abstract: In this paper, we propose a noise reduction method and an adaptive contrast enhancement for local tone mapping (TM). The proposed local TM algorithm compresses the luminance of a high dynamic range (HDR) image and decomposes the compressed luminance into multi-scale subbands using the discrete wavelet transform. For noise reduction, the decomposed images are filtered using a bilateral filter and soft-thresholding. Then, the dynamic ranges of the filtered subbands are enhanced by considering local contrast using the modified luminance compression function. Finally, the color of the tone-mapped image is reproduced using an adaptive saturation control parameter. We generate the tone-mapped image using the proposed local TM. Computer simulation with noisy HDR images shows the effectiveness of the proposed local TM algorithm in terms of visual quality as well as local contrast. It can be used in various displays with noise reduction and contrast enhancement.
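The soft-thresholding applied to the wavelet detail subbands is a standard shrinkage rule and can be written directly; the bilateral filtering and the paper's modified luminance compression function are omitted here.

```python
def soft_threshold(coeffs, t):
    """Soft-threshold wavelet detail coefficients.

    Each coefficient is shrunk toward zero by `t`; coefficients whose
    magnitude is at most `t` (mostly noise) are zeroed, while larger
    ones (edges and texture) survive with reduced magnitude.
    """
    out = []
    for c in coeffs:
        mag = abs(c) - t
        out.append(0.0 if mag <= 0 else mag * (1.0 if c > 0 else -1.0))
    return out
```

In a wavelet denoising pipeline this is applied per detail subband, typically with `t` chosen from an estimate of the noise level, before the inverse transform reassembles the image.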

Proceedings Article
01 Dec 2012
TL;DR: It was confirmed that the proposed methods significantly reduce the bit depth of the enhancement layer, even though the compensation slightly increases coding noise.
Abstract: This report proposes two layered bit-depth scalable coding methods for high dynamic range (HDR) images expressed in floating-point data format. From the base layer bit stream, low dynamic range (LDR) images are decoded. They are tone mapped appropriately for human eye sensitivity and shortened to a standard bit depth, e.g., 8 bits. From the enhancement layer bit stream, HDR images are decoded. However, the bit depth of this layer is large in the existing method. To reduce it, we divide the tone mapping into a reversible logarithmic mapping and its compensation. It was confirmed that the proposed methods significantly reduce the bit depth of the enhancement layer, even though the compensation slightly increases coding noise.
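A reversible logarithmic mapping of the kind used to split the tone mapping can be sketched as a forward/inverse pair; this is an illustrative construction, not the paper's exact mapping or its compensation term.

```python
import math

def log_tone_map(h, h_max, bits=8):
    """Map HDR luminance h in [0, h_max] to an integer code of `bits` bits."""
    levels = (1 << bits) - 1
    return round(levels * math.log1p(h) / math.log1p(h_max))

def inverse_log_tone_map(code, h_max, bits=8):
    """Approximate inverse of log_tone_map.

    The residual between this reconstruction and the true HDR value is
    what an enhancement layer would carry; keeping the forward mapping
    logarithmic keeps that residual (and hence the layer's bit depth) small.
    """
    levels = (1 << bits) - 1
    return math.expm1(code / levels * math.log1p(h_max))
```

`log1p`/`expm1` are used instead of `log(1 + x)`/`exp(x) - 1` for better numerical accuracy near zero.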

Proceedings ArticleDOI
25 Mar 2012
TL;DR: A two-layer High Dynamic Range coding scheme using a new tone mapping method that transforms an HDR image onto a Low Dynamic Range (LDR) image by using a base map that is a smoothed version of the HDR luminance.
Abstract: This paper proposes a two-layer High Dynamic Range (HDR) coding scheme using a new tone mapping. Our tone mapping method transforms an HDR image into a Low Dynamic Range (LDR) image by using a base map that is a smoothed version of the HDR luminance. In our scheme, the HDR image can be reconstructed from the tone mapped LDR image. Our method makes use of this property to realize two-layer HDR coding by encoding both the tone mapped LDR image and the base map. This paper validates the effectiveness of our approach through experiments.

Journal ArticleDOI
TL;DR: This paper investigates the perception of countershading in the context of a novel mask‐based contrast enhancement algorithm and analyzes the circumstances under which the resulting profiles turn from image enhancement to artifact for a range of parameters and viewing conditions.
Abstract: Countershading is a common technique for local image contrast manipulations, and is widely used both in automatic settings, such as image sharpening and tone mapping, as well as under artistic control, such as in paintings and interactive image processing software. Unfortunately, countershading is a double-edged sword: while correctly chosen parameters for a given viewing condition can significantly improve the image sharpness or trick the human visual system into perceiving a higher contrast than is physically present in an image, wrong parameters or different viewing conditions can result in objectionable halo artifacts. In this paper we investigate the perception of countershading in the context of a novel mask-based contrast enhancement algorithm and analyze the circumstances under which the resulting profiles turn from image enhancement to artifact for a range of parameters and viewing conditions. Our experimental results can be modeled as a function of the width of the countershading profile. We employ this empirical function in a range of applications such as image resizing, view-dependent tone mapping, and countershading analysis in photographs and works of fine art. © 2012 Wiley Periodicals, Inc.

Proceedings ArticleDOI
01 Dec 2012
TL;DR: This work presents a method that compresses the dynamic range of an image while preserving local features, and the result is an image that retains the fidelity of its features within a greatly reduced dynamic range.
Abstract: Non-photographic images having a high dynamic range, such as aeromagnetic images, are difficult to present in a manner that facilitates interpretation. Standard photographic high dynamic range (HDR) algorithms may be unsuitable, or inapplicable to such data. We present a method that compresses the dynamic range of an image while preserving local features. It makes no assumptions about the formation of the image, the feature types it contains, or its range of values. Thus, unlike algorithms designed for photographic images, this algorithm can be applied to a wide range of scientific images. The method is based on extracting local phase and amplitude values across the image using monogenic filters. The dynamic range of the image can then be reduced by applying a range-reducing function to the amplitude values, for example taking the logarithm, and then reconstructing the image using the original phase values. An important attribute of this approach is that the local phase information is preserved, which is important for the human visual system in interpreting the image. The result is an image that retains the fidelity of its features within a greatly reduced dynamic range. An additional advantage of the method is that the range of spatial frequencies that are used to reconstruct the image can be chosen via high-pass filtering to control the scale of analysis.
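The range-reducing step can be illustrated in one dimension: compress the magnitude with a logarithm while keeping the sign, where the sign stands in for the preserved local phase. This is a drastic simplification; the paper extracts true local amplitude and phase with monogenic filters.

```python
import math

def compress_range(signal):
    """Compress dynamic range: log of magnitude, original sign kept.

    Order is preserved (the map is monotone), so local structure such
    as zero crossings and relative polarity survives, while extreme
    values are pulled in sharply.
    """
    out = []
    for v in signal:
        sign = -1.0 if v < 0 else 1.0
        out.append(sign * math.log1p(abs(v)))
    return out
```

A value of ±1000 maps to roughly ±6.9, so features spanning several orders of magnitude land in a narrow, displayable range without flattening their polarity.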

Proceedings ArticleDOI
02 May 2012
TL;DR: The results of the experiments show that HDR imaging techniques improve the repeatability rate of feature point detectors significantly, compared to standard low dynamic range imagery techniques.
Abstract: This paper evaluates the suitability of High Dynamic Range (HDR) imaging techniques for feature point detection under extreme lighting conditions. The conditions are extreme in respect to the dynamic range of the lighting within the test scenes used. This dynamic range cannot be captured using standard low dynamic range imagery techniques without loss of detail. Four widely used feature point detectors are used in the experiments: Harris corner detector, Shi-Tomasi, FAST and Fast Hessian. Their repeatability rate is studied under changes of camera viewpoint, camera distance and scene lighting with respect to the image formats used. The results of the experiments show that HDR imaging techniques improve the repeatability rate of feature point detectors significantly.

Proceedings ArticleDOI
03 Aug 2012
TL;DR: This paper validated the composite technique on several (new) HDR images and conducted a further psychophysical experiment, using an HDR display as reference, that establishes the advantages of the hybrid three-stage approach over a traditional individual TMO.
Abstract: In this paper we present a new technique for the display of High Dynamic Range (HDR) images on Low Dynamic Range (LDR) displays. The described process has three stages. First, the input image is segmented into luminance zones. Second, the tone mapping operator (TMO) that performs better in each zone is automatically selected. Finally, the resulting tone mapping (TM) outputs for each zone are merged, generating the final LDR output image. To establish the TMO that performs better in each luminance zone we conducted a preliminary psychophysical experiment using a set of HDR images and six different TMOs. We validated our composite technique on several (new) HDR images and conducted a further psychophysical experiment, using an HDR display as reference, that establishes the advantages of our hybrid three-stage approach over a traditional individual TMO.

Proceedings ArticleDOI
28 Nov 2012
TL;DR: The image decomposition method can be regarded as the fundamental tool to generate multiple image editing applications, such as image denoising, edge detection, detail enhancement, cartoon JPEG artifact removal, local tone mapping, and contrast enhancement under low backlight condition.
Abstract: We present an image decomposition method using L1 fidelity term with L0 norm of gradient to decompose an image into base layer and detail layer. Generally, the L1 fidelity should be preferable to the L2 norm when the erroneous measurements exist. It is also reported that the L0 norm of gradient is a better prior term than total variation and the L2 norm of gradient. Therefore, we combine these two benefits to obtain our base layer by adopting our method using L1 fidelity and L0 gradient. Our image decomposition method can be regarded as the fundamental tool to generate multiple image editing applications, such as image denoising, edge detection, detail enhancement, cartoon JPEG artifact removal, local tone mapping, and contrast enhancement under low backlight condition. Experimental results show that our proposed method is promising as compared to the existing methods.

Patent
Noam Levy1, Guy Rapaport1
02 Jul 2012
TL;DR: In this article, a pipelined architecture is proposed to transform a Low Dynamic Range image sequence into a High Dynamic Range (HDR) image sequence by performing image alignment, image mixing, and tone mapping on adjacent image frames.
Abstract: Embodiments are directed towards enabling digital cameras to digitally process a captured Low Dynamic Range image sequence at a real-time video rate and to convert it into a High Dynamic Range (HDR) image sequence using a pipelined architecture. Two or more image frames are captured using different exposure settings and then combined to form a single HDR output frame in a video sequence. The pipelined architecture operates on adjacent image frames, performing image alignment, image mixing, and tone mapping to generate the HDR image sequence.
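The mixing and tone mapping stages of such a pipeline can be sketched for a single scanline. This is a generic exposure-fusion sketch under stated assumptions, not the patented design: frames are assumed already aligned (the alignment stage is omitted), the hat-shaped weight and the Reinhard-style curve are illustrative choices.

```python
def weight(p):
    """Hat weight: trust mid-tone pixels, distrust near-black/near-white."""
    return 1.0 - abs(2.0 * p - 1.0)

def mix_exposures(frames, exposures):
    """Mixing stage: recover relative radiance from aligned LDR frames
    taken with different exposure times (alignment stage omitted)."""
    merged = []
    for pixels in zip(*frames):
        num = sum(weight(p) * (p / t) for p, t in zip(pixels, exposures))
        den = sum(weight(p) for p in pixels) or 1e-6
        merged.append(num / den)
    return merged

def tone_map(radiance):
    """Tone mapping stage: compress radiance back into [0, 1) for display."""
    return [r / (1.0 + r) for r in radiance]
```

In a pipelined realization, each stage would process frame N while the previous stage already works on frame N+1, which is what makes real-time video rates feasible.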

Journal ArticleDOI
Guang Deng1
TL;DR: A generalized LIP (GLIP) model is developed that not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS.
Abstract: The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model as its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. Comparisons with two state-of-the-art algorithms show that the new scalar multiplication operation is an effective tool for tone mapping.
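For context, the classical LIP operations that the GLIP model generalizes can be written in a few lines. This sketches only the standard LIP model (not the paper's GLIP extension or its energy-preserving choice of the scalar c, which is the paper's contribution); the fixed c below is an illustrative assumption.

```python
M = 256.0  # gray-tone range of the classical LIP model

def lip_add(a, b):
    """LIP addition: a (+) b = a + b - a*b/M; stays inside [0, M)."""
    return a + b - a * b / M

def lip_mult(c, a):
    """LIP scalar multiplication: c (*) a = M - M * (1 - a/M)**c."""
    return M - M * (1.0 - a / M) ** c

def lip_tone_map(image, c=0.5):
    """Tone mapping via LIP scalar multiplication with a fixed scalar c
    (the paper instead chooses c so that image energy is preserved)."""
    return [[lip_mult(c, p) for p in row] for row in image]
```

Because lip_mult maps [0, M) onto itself for any c > 0, the curve compresses or expands tones without ever clipping, which is what makes it attractive as a tone mapping primitive.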

Proceedings ArticleDOI
TL;DR: It is suggested that VA needs consideration for evaluating the overall perceptual impact of TMOs on HDR content, since the existing studies so far have only considered the quality or aesthetic appeal angle.
Abstract: High Dynamic Range (HDR) images and videos require the use of a tone mapping operator (TMO) when visualized on Low Dynamic Range (LDR) displays. From an artistic-intention point of view, TMOs are not necessarily transparent and might induce different viewing behavior. In this paper, we investigate and quantify how TMOs modify visual attention (VA). To that end, both objective and subjective tests in the form of eye-tracking experiments were conducted on several still images processed by 11 different TMOs. Our studies confirm that TMOs can indeed modify human attention and fixation behavior significantly, and therefore suggest that VA needs consideration when evaluating the overall perceptual impact of TMOs on HDR content. Since existing studies have so far only considered quality or aesthetic appeal, this study brings a new perspective on the importance of VA in HDR content processing for visualization on LDR displays.

Patent
13 Jul 2012
TL;DR: In this article, a device and method for removing fog from a single image based on edge information and tone mapping are provided, estimating the fog value needed to remove the fog component from an input image using a dark channel prior value.
Abstract: PURPOSE: A device and method for removing fog from a single image based on edge information and tone mapping are provided, estimating the fog value needed to remove the fog component from an input image using a dark channel prior value. CONSTITUTION: An image analyzing unit (110) generates a dark channel prior that represents depth information of the input image. A fog value estimating unit (120) estimates the fog value of the input image based on the dark channel prior. A transfer map generating unit (130) generates a transfer map for removing the fog component of the input image. An image restoring unit (140) removes the fog component from the input image and generates a restored image.
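The dark-channel-prior step the patent builds on follows the well-known haze model I = J*t + A*(1 - t). A single-pixel sketch (not the patented units; the patch minimum, edge-information refinement, and tone mapping stages are omitted, and omega/t_min are conventional illustrative values):

```python
def dark_channel(rgb):
    """Dark channel of one pixel: the minimum over color channels
    (a real implementation also takes the minimum over a local patch)."""
    return min(rgb)

def estimate_transmission(rgb, atmos, omega=0.95):
    """Transmission t from the dark channel of the atmosphere-normalized
    pixel; omega keeps a trace of haze for natural-looking results."""
    return 1.0 - omega * dark_channel([c / a for c, a in zip(rgb, atmos)])

def dehaze_pixel(rgb, atmos, t_min=0.1):
    """Invert I = J*t + A*(1-t); t is floored to avoid amplifying noise."""
    t = max(estimate_transmission(rgb, atmos), t_min)
    return [(c - a) / t + a for c, a in zip(rgb, atmos)]
```

The prior rests on the observation that haze-free outdoor patches almost always contain some channel near zero, so a large dark channel signals thick fog (and hence greater scene depth).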

Proceedings ArticleDOI
TL;DR: An architecture to achieve backward compatible HDR technology with JPEG is provided and efficiency of a simple implementation of this framework when compared to the state of the art HDR image compression is demonstrated.
Abstract: High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content are, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes extending the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture for achieving such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework compared to state-of-the-art HDR image compression.
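The usual pattern for such backward compatibility is a two-layer encoding: a tone-mapped base image that any legacy JPEG decoder can show, plus a residual layer (carried, e.g., in application markers) from which an HDR-aware decoder reconstructs the original. A numeric sketch of that layering, with an assumed invertible global TMO and a log-ratio residual (the paper's actual architecture and codec details are not specified here):

```python
import math

def tmo(h):
    """Illustrative global TMO; any invertible curve works for this sketch."""
    return h / (1.0 + h)

def encode(hdr):
    """Two-layer encoding: legacy decoders read only the tone-mapped base;
    HDR-aware decoders also read the per-pixel log-ratio residual."""
    base = [tmo(h) for h in hdr]
    residual = [math.log((h + 1e-6) / (b + 1e-6)) for h, b in zip(hdr, base)]
    return base, residual

def decode_hdr(base, residual):
    """HDR-aware path: undo the ratio to recover the original radiance."""
    return [b * math.exp(r) for b, r in zip(base, residual)]
```

In practice both layers are JPEG-compressed, so the reconstruction is only approximate; the rate split between base and residual is the main tuning knob of such schemes.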

Journal ArticleDOI
TL;DR: This paper investigates obtaining complete and accurate depth maps for robust and accurate 3D monoplotting support in vision-based mobile mapping web services, and presents first results obtained by interpolating incomplete areas in the disparity maps and by fusing the disparity maps with additional LiDAR data.
Abstract: In this paper we introduce a state-of-the-art stereovision mobile mapping system with different stereo imaging sensors and present a series of performance tests carried out with this system. The aim of these empirical tests was to investigate different performance aspects under real-world conditions. The investigations were carried out with different cameras and camera configurations in order to determine their potential and limitations for selected application scenarios. In brief, the test set consists of investigations of geometric accuracy, of radiometric image quality, and of the extraction of 3D point clouds using dense matching algorithms. The first tests resulted in an absolute overall 3D accuracy of 4-5 cm, depending mainly on the quality of the navigation system used for direct georeferencing. The relative point measurement accuracy was approx. 2 cm and 1 cm for the 2-megapixel and 11-megapixel sensors, respectively. In the second series of investigations, we present results obtained by applying tone mapping algorithms often used for high dynamic range (HDR) images. In our third series of tests, refinements of the radiometric calibration of the stereo system and corrections of the vignetting resulted in improved completeness of the depth map of the roadside environment. We conclude the paper with investigations into obtaining complete and accurate depth maps for robust and accurate 3D monoplotting support in vision-based mobile mapping web services. For this, we present first results obtained by interpolating incomplete areas in the disparity maps and by fusing the disparity maps with additional LiDAR data.