Showing papers on "Tone mapping" published in 2011


Proceedings ArticleDOI
25 Jul 2011
TL;DR: The use of 1D operations leads to considerable speedups over existing techniques and potential memory savings; its computational cost is not affected by the choice of the filter parameters; and it is the first edge-preserving filter to work on color images at arbitrary scales in real time, without resorting to subsampling or quantization.
Abstract: We present a new approach for performing high-quality edge-preserving filtering of images and videos in real time. Our solution is based on a transform that defines an isometry between curves on the 2D image manifold in 5D and the real line. This transform preserves the geodesic distance between points on these curves, adaptively warping the input signal so that 1D edge-preserving filtering can be efficiently performed in linear time. We demonstrate three realizations of 1D edge-preserving filters, show how to produce high-quality 2D edge-preserving filters by iterating 1D-filtering operations, and empirically analyze the convergence of this process. Our approach has several desirable features: the use of 1D operations leads to considerable speedups over existing techniques and potential memory savings; its computational cost is not affected by the choice of the filter parameters; and it is the first edge-preserving filter to work on color images at arbitrary scales in real time, without resorting to subsampling or quantization. We demonstrate the versatility of our domain transform and edge-preserving filters on several real-time image and video processing tasks including edge-preserving filtering, depth-of-field effects, stylization, recoloring, colorization, detail enhancement, and tone mapping.
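The 1D recursive realization described above is compact enough to sketch. Below is a minimal, unoptimized Python sketch of one forward/backward recursive-filter pass over a single scanline with values in [0, 1]; the parameter defaults and the single-pass treatment are simplifications, not the authors' exact settings.

```python
import numpy as np

def domain_transform_rf_1d(row, sigma_s=60.0, sigma_r=0.4):
    """Minimal sketch of the 1D recursive-filter realization of the domain
    transform; `row` is one scanline, shape (N,) or (N, C)."""
    row = np.asarray(row, dtype=np.float64)
    out = row.copy()
    # l1 difference between neighboring samples, summed over channels
    if row.ndim == 1:
        diff = np.abs(np.diff(row))
    else:
        diff = np.abs(np.diff(row, axis=0)).sum(axis=-1)
    # derivative of the domain transform: ct'(x) = 1 + (sigma_s / sigma_r) * |I'(x)|
    dct = 1.0 + (sigma_s / sigma_r) * diff
    a = np.exp(-np.sqrt(2.0) / sigma_s)    # feedback coefficient of the recursive filter
    w = a ** dct                           # edge-aware weight between samples i and i+1
    for i in range(1, len(out)):           # causal (left-to-right) pass
        out[i] += w[i - 1] * (out[i - 1] - out[i])
    for i in range(len(out) - 2, -1, -1):  # anti-causal (right-to-left) pass
        out[i] += w[i] * (out[i + 1] - out[i])
    return out
```

Iterating such 1D passes alternately over rows and columns is what the paper uses to build the 2D edge-preserving filter.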

738 citations


Journal ArticleDOI
TL;DR: In this article, an approach is proposed that computes the ratio of the gradients of the visible edges in the image before and after contrast restoration, providing an indicator of visibility enhancement.
Abstract: The contrast of outdoor images acquired under adverse weather conditions, especially foggy weather, is altered by the scattering of daylight by atmospheric particles. As a consequence, different methods have been designed to restore the contrast of these images. However, there is a lack of methodology to assess the performance of these methods or to rate them. Unlike in image quality assessment or image restoration, there is no easy way to obtain a reference image, which makes the problem not straightforward to solve. In this paper, an approach is proposed which consists in computing the ratio of the gradients of the visible edges in the image before and after contrast restoration. In this way, an indicator of visibility enhancement is provided based on the concept of visibility level, commonly used in lighting engineering. Finally, the methodology is applied to contrast enhancement assessment and to the comparison of tone-mapping operators.
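To make the gradient-ratio indicator concrete, the following rough sketch computes, for a grayscale before/after pair, the geometric mean of the gradient-magnitude ratio at edges deemed visible in the original image; the visibility threshold and the use of a geometric mean are assumptions in the spirit of the description, not the paper's exact definition.

```python
import numpy as np

def visible_edge_gradient_ratio(before, after, thresh=0.05):
    """Rough indicator of visibility gain for a grayscale before/after pair:
    geometric mean of the gradient-magnitude ratio at edges that are already
    visible in the original image (threshold is illustrative)."""
    gb = np.hypot(*np.gradient(before.astype(np.float64)))
    ga = np.hypot(*np.gradient(after.astype(np.float64)))
    visible = gb > thresh                             # crude 'visible edge' mask
    ratio = ga[visible] / np.maximum(gb[visible], 1e-6)
    return float(np.exp(np.mean(np.log(np.maximum(ratio, 1e-6)))))
```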

555 citations


Proceedings ArticleDOI
25 Jul 2011
TL;DR: This paper shows state-of-the-art edge-aware processing using standard Laplacian pyramids, and proposes a set of image filters to achieve edge-preserving smoothing, detail enhancement, tone mapping, and inverse tone mapping.
Abstract: The Laplacian pyramid is ubiquitous for decomposing images into multiple scales and is widely used for image analysis. However, because it is constructed with spatially invariant Gaussian kernels, the Laplacian pyramid is widely believed to be unable to represent edges well and to be ill-suited for edge-aware operations such as edge-preserving smoothing and tone mapping. To tackle these tasks, a wealth of alternative techniques and representations have been proposed, e.g., anisotropic diffusion, neighborhood filtering, and specialized wavelet bases. While these methods have demonstrated successful results, they come at the price of additional complexity, often accompanied by higher computational cost or the need to post-process the generated results. In this paper, we show state-of-the-art edge-aware processing using standard Laplacian pyramids. We characterize edges with a simple threshold on pixel values that allows us to differentiate large-scale edges from small-scale details. Building upon this result, we propose a set of image filters to achieve edge-preserving smoothing, detail enhancement, tone mapping, and inverse tone mapping. The advantage of our approach is its simplicity and flexibility, relying only on simple point-wise nonlinearities and small Gaussian convolutions; no optimization or post-processing is required. As we demonstrate, our method produces consistently high-quality results, without degrading edges or introducing halos.
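The point-wise nonlinearity at the core of this approach is easy to illustrate. The sketch below shows one common form of the remapping function applied around each Gaussian-pyramid coefficient g, with a detail branch (exponent alpha) and an edge branch (scale beta) separated by a threshold sigma_r; the parameterization is a typical one and should not be read as the authors' exact choice.

```python
import numpy as np

def remap(i, g, sigma_r=0.3, alpha=0.5, beta=0.5):
    """Point-wise remapping around a Gaussian-pyramid coefficient g:
    differences below sigma_r are treated as detail and shaped with
    exponent alpha; larger differences are treated as edges and scaled
    by beta (beta < 1 compresses large edges, i.e. tone mapping)."""
    d = i - g
    detail = g + np.sign(d) * sigma_r * (np.abs(d) / sigma_r) ** alpha
    edge = g + np.sign(d) * (beta * (np.abs(d) - sigma_r) + sigma_r)
    return np.where(np.abs(d) <= sigma_r, detail, edge)
```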

445 citations


Journal ArticleDOI
TL;DR: It is shown that the appropriate choice of a tone-mapping operator (TMO) can significantly improve the reconstructed HDR quality, and a statistical model is developed that approximates the distortion resulting from the combined processes of tone-mapping and compression.
Abstract: For backward compatible high dynamic range (HDR) video compression, the HDR sequence is reconstructed by inverse tone-mapping a compressed low dynamic range (LDR) version of the original HDR content. In this paper, we show that the appropriate choice of a tone-mapping operator (TMO) can significantly improve the reconstructed HDR quality. We develop a statistical model that approximates the distortion resulting from the combined processes of tone-mapping and compression. Using this model, we formulate a numerical optimization problem to find the tone-curve that minimizes the expected mean square error (MSE) in the reconstructed HDR sequence. We also develop a simplified model that reduces the computational complexity of the optimization problem to a closed-form solution. Performance evaluations show that the proposed methods provide superior performance in terms of HDR MSE and SSIM compared to existing tone-mapping schemes. It is also shown that the LDR image quality resulting from the proposed methods matches that produced by perceptually-based TMOs.
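The closed-form flavor of the simplified model can be sketched as a histogram-driven, piecewise-linear tone curve whose segment slopes grow with bin probability; the cube-root allocation and constants below are assumptions in the spirit of the description, not a verified reproduction of the paper's solution.

```python
import numpy as np

def tone_curve_from_histogram(log_lum, n_bins=256, v_max=255.0):
    """Piecewise-linear tone curve whose segment slopes are proportional to
    the cube root of the log-luminance bin probabilities (constants assumed)."""
    counts, edges = np.histogram(log_lum, bins=n_bins)
    p = counts / max(counts.sum(), 1)
    delta = edges[1] - edges[0]                        # log-luminance bin width
    slopes = v_max * np.cbrt(p) / (delta * np.cbrt(p).sum() + 1e-12)
    curve = np.concatenate([[0.0], np.cumsum(slopes * delta)])
    return edges, np.clip(curve, 0.0, v_max)           # curve[k] = LDR value at edges[k]
```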

196 citations


Journal ArticleDOI
Xiaolin Wu
TL;DR: This paper proposes a novel algorithmic approach to image enhancement via optimal contrast-tone mapping that maximizes expected contrast gain subject to an upper limit on tone distortion and, optionally, to other constraints that suppress artifacts.
Abstract: This paper proposes a novel algorithmic approach to image enhancement via optimal contrast-tone mapping. In a fundamental departure from the current practice of histogram equalization for contrast enhancement, the proposed approach maximizes expected contrast gain subject to an upper limit on tone distortion and optionally to other constraints that suppress artifacts. The underlying contrast-tone optimization problem can be solved efficiently by linear programming. This new constrained optimization approach for image enhancement is general, and the user can add and fine-tune the constraints to achieve desired visual effects. Experimental results demonstrate clearly superior performance of the new approach over histogram equalization and its variants.
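A toy version of the linear program can be written with SciPy: the objective is the histogram-weighted sum of per-level increments (expected contrast gain), the increments must fit within the output range, and an illustrative anti-merging constraint stands in for the paper's tone-distortion bound.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_contrast_tone_map(hist, out_levels=256, max_merge=4):
    """Toy LP: maximize the histogram-weighted sum of per-level increments
    subject to the output-range budget and an illustrative anti-merging
    constraint (at least one output step within any max_merge input levels)."""
    p = np.asarray(hist, dtype=np.float64)
    p /= p.sum()
    L = len(p)
    c = -p                                   # linprog minimizes, so negate the gain
    A_ub = [np.ones(L)]                      # sum of increments <= output range
    b_ub = [out_levels - 1.0]
    for j in range(L - max_merge + 1):       # tone-distortion style constraint
        row = np.zeros(L)
        row[j:j + max_merge] = -1.0          # -(s_j + ... + s_{j+m-1}) <= -1
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, None)] * L, method="highs")
    return np.cumsum(res.x)                  # the resulting transfer function
```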

123 citations


Journal ArticleDOI
TL;DR: This work proposes a two-stage tone mapping operator that implements visual adaptation and local contrast enhancement, the latter based on a variational model inspired by color vision phenomenology, and that compares very well with the state of the art.
Abstract: Tone Mapping is the problem of compressing the range of a High-Dynamic Range image so that it can be displayed on a Low-Dynamic Range screen, without losing or introducing novel details: The final image should produce in the observer a sensation as close as possible to the perception produced by the real-world scene. We propose a tone mapping operator with two stages. The first stage is a global method that implements visual adaptation, based on experiments on human perception; in particular, we point out the importance of cone saturation. The second stage performs local contrast enhancement, based on a variational model inspired by color vision phenomenology. We evaluate this method with a metric validated by psychophysical experiments and, in terms of this metric, our method compares very well with the state of the art.
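The global first stage can be illustrated with a Naka-Rushton style saturating response anchored at the log-average luminance; the exponent and semi-saturation choice below are common defaults and only a stand-in for the paper's psychophysically derived parameters.

```python
import numpy as np

def global_adaptation(lum, n=0.74):
    """Naka-Rushton style saturating response anchored at the log-average
    (key) luminance; exponent and semi-saturation are common defaults."""
    key = np.exp(np.mean(np.log(lum + 1e-6)))   # log-average luminance of the scene
    sigma = key                                 # semi-saturation constant (assumption)
    return lum ** n / (lum ** n + sigma ** n)   # compressed luminance in (0, 1)
```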

111 citations


Proceedings ArticleDOI
28 Jun 2011
TL;DR: This paper shows how tone mapping techniques can be used to dynamically increase the image brightness, thus allowing the LCD backlight levels to be reduced, and describes how the Gamma function's non-linear nature was overcome by using adaptive thresholds to apply different Gamma values to images with differing brightness levels.
Abstract: In this paper, we show how tone mapping techniques can be used to dynamically increase the image brightness, thus allowing the LCD backlight levels to be reduced. This saves significant power as the majority of the LCD's display power is consumed by its backlight. The Gamma function (or equivalent) can be efficiently implemented in smartphones with minimal resource cost. We describe how we overcame the Gamma function's non-linear nature by using adaptive thresholds to apply different Gamma values to images with differing brightness levels. These adaptive thresholds allow us to save significant amounts of power while preserving the image quality. We implemented our solution on a laptop and two Android smartphones. Finally, we present measured analytical results for two different games (Quake III and Planeshift), and user study results (using Quake III and 60 participants) that show we can save up to 68% of the display power without significantly affecting the perceived gameplay quality.
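The adaptive-threshold idea can be sketched in a few lines: pick a compensation gamma from the frame's mean brightness, brighten the frame, and dim the backlight accordingly. The thresholds and gamma values below are placeholders, not the values tuned in the paper's user study.

```python
import numpy as np

def adaptive_gamma_compensation(frame, thresholds=(0.3, 0.6), gammas=(0.9, 0.75, 0.6)):
    """Pick a compensation gamma from the frame's mean brightness (adaptive
    thresholds), brighten the frame with it, and report the gamma so the
    backlight can be dimmed correspondingly. All constants are placeholders."""
    img = np.clip(frame.astype(np.float64) / 255.0, 0.0, 1.0)
    mean_brightness = img.mean()
    if mean_brightness < thresholds[0]:
        gamma = gammas[0]          # dark frame: mild compensation
    elif mean_brightness < thresholds[1]:
        gamma = gammas[1]          # mid-brightness frame
    else:
        gamma = gammas[2]          # bright frame: strongest compensation / dimming
    compensated = np.clip(img ** gamma, 0.0, 1.0)   # gamma < 1 brightens the image
    return (compensated * 255.0).astype(np.uint8), gamma
```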

95 citations


Patent
Gabriel G. Marcu, Steve Swen
08 Dec 2011
TL;DR: In this article, the authors proposed a method to generate a low dynamic range image from a high-dynamic range image by determining one or more regions of the image containing pixels having values that are outside a first range and inside a second range.
Abstract: Methods and apparatuses for generating a low dynamic range image for a high dynamic range scene. In one aspect, a method to generate a low dynamic range image from a high dynamic range image, includes: determining one or more regions of the high dynamic range image containing pixels having values that are outside a first range and inside a second range; computing a weight distribution from the one or more regions; and generating the low dynamic range image from the high dynamic range image using the weight distribution. In another aspect, a method of image processing, includes: detecting one or more regions in a first image of a high dynamic range scene according to a threshold to generate a mask; and blending the first image and a second image of the scene to generate a third image using the mask.

80 citations


Journal ArticleDOI
TL;DR: A novel bottom-up segmentation algorithm is developed through superpixel grouping which enables us to detect scene changes and directly generate the ghost-free LDR image of the dynamic scene.
Abstract: High Dynamic Range (HDR) imaging requires one to composite multiple, differently exposed images of a scene in the irradiance domain and perform tone mapping of the generated HDR image for displaying on Low Dynamic Range (LDR) devices. In the case of dynamic scenes, standard techniques may introduce artifacts called ghosts if the scene changes are not accounted for. In this paper, we consider the blind HDR problem for dynamic scenes. We develop a novel bottom-up segmentation algorithm through superpixel grouping which enables us to detect scene changes. We then employ a piecewise patch-based compositing methodology in the gradient domain to directly generate the ghost-free LDR image of the dynamic scene. Being a blind method, the primary advantage of our approach is that we do not assume any knowledge of camera response function and exposure settings while preserving the contrast even in the non-stationary regions of the scene. We compare the results of our approach for both static and dynamic scenes with that of the state-of-the-art techniques.

70 citations


Journal ArticleDOI
TL;DR: A new k factor decision method and a highlight compression operator are proposed to enhance the appearance and naturalness of rendered High Dynamic Range (HDR) images; the proposed method shows better rendering in terms of naturalness and dark-area details than the previous tone-mapping algorithm.
Abstract: In this paper, a new k factor decision method and a highlight compression operator are proposed to enhance the appearance and naturalness of rendered High Dynamic Range (HDR) images. The retinex algorithm is one of the outstanding local operators, as it preserves local contrast in highlights well. However, in some cases the retinex algorithm gives a worse overall appearance and less distinguishable dark-area contrast than global operators or other local operators. The most prominent improvement of the proposed method is a decision method for the k factor, one of the parameters of the retinex algorithm, based on the dynamic range of the image. The proposed parameter decision method enhances the overall quality and preference of the image and avoids manual parameter setting problems. Dark-area details also become more distinguishable through the highlight compression operator. According to the results of many HDR image experiments, the proposed method shows better rendering in terms of naturalness and dark-area details than the previous tone-mapping algorithm.
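A single-scale retinex whose surround constant is chosen from the image's dynamic range gives the flavor of an automatic k factor decision; the mapping from dynamic range to k below is an illustrative stand-in, not the paper's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_adaptive_k(lum, k_min=15.0, k_max=120.0):
    """Single-scale retinex with a surround constant k chosen from the image's
    dynamic range; the range-to-k mapping is an illustrative stand-in."""
    log_l = np.log1p(lum.astype(np.float64))
    positive = lum[lum > 0]
    dyn_range = np.log10(lum.max() / max(positive.min(), 1e-6)) if positive.size else 1.0
    t = np.clip(dyn_range / 6.0, 0.0, 1.0)   # normalize an assumed 0..6 log10-unit range
    k = k_min + t * (k_max - k_min)          # wider dynamic range -> wider surround
    surround = gaussian_filter(log_l, sigma=k)
    return log_l - surround                  # log-reflectance (retinex output)
```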

63 citations


Patent
31 Aug 2011
TL;DR: In this article, a back-end pixel processing unit 120 is described that receives pixel data after it has been processed by at least one of a front-end pixel processing unit 80 and a pixel processing pipeline 82.
Abstract: Disclosed embodiments provide for an image signal processing system 32 that includes a back-end pixel processing unit 120 that receives pixel data after being processed by at least one of a front-end pixel processing unit 80 and a pixel processing pipeline 82. In certain embodiments, the back-end processing unit 120 receives luma/chroma image data and may be configured to apply face detection operations, local tone mapping, brightness, contrast, and color adjustments, as well as scaling. Further, the back-end processing unit 120 may also include a back-end statistics unit 2208 that may collect frequency statistics. The frequency statistics may be provided to an encoder 118 and may be used to determine quantization parameters that are to be applied to an image frame.

Journal ArticleDOI
TL;DR: A mechanistic model connects the data to theories of adaptation and provides insight into how the underlying visual response varies with context, as well as characterizing how the luminance-to-lightness mapping changes with stimulus context.

Journal ArticleDOI
TL;DR: The proposed watermarking system belongs to the blind, detectable category; it is based on the quantization index modulation (QIM) paradigm and employs higher-order statistics as a feature, and experiments show positive results that demonstrate the system's effectiveness with current state-of-the-art TM algorithms.
Abstract: High dynamic range (HDR) images represent the future format for digital images since they allow accurate rendering of a wider range of luminance values. However, today special types of preprocessing, collectively known as tone-mapping (TM) operators, are needed to adapt HDR images to currently existing displays. Tone-mapped images, although of reduced dynamic range, have nonetheless high quality and hence retain some commercial value. In this paper, we propose a solution to the problem of HDR image watermarking, e.g., for copyright embedding, that should survive TM. Therefore, the requirements imposed on the watermark encompass imperceptibility, a certain degree of security, and robustness to TM operators. The proposed watermarking system belongs to the blind, detectable category; it is based on the quantization index modulation (QIM) paradigm and employs higher-order statistics as a feature. Experimental analysis shows positive results and demonstrates the system's effectiveness with current state-of-the-art TM algorithms.
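The QIM embedding step itself is simple to sketch; the choice of feature (the paper uses higher-order statistics of the host image) and the quantization step below are placeholders.

```python
import numpy as np

def qim_embed(feature, bit, delta=0.1):
    """Quantize the feature on one of two interleaved lattices depending on
    the bit; the feature (e.g. a higher-order statistic of a block) and the
    step size are placeholders."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return delta * np.round((feature - offset) / delta) + offset

def qim_detect(feature, delta=0.1):
    """Blind detection: report the bit whose lattice is closer to the feature."""
    d0 = abs(feature - qim_embed(feature, 0, delta))
    d1 = abs(feature - qim_embed(feature, 1, delta))
    return int(d1 < d0)
```

Robustness to tone mapping then rests on embedding in a feature that TM operators leave approximately intact, which is the role of the higher-order statistics in the paper.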

Patent
05 Jul 2011
TL;DR: In this paper, an encoding apparatus is proposed for encoding a first view high dynamic range image and a second view high dynamic range image, comprising first and second HDR image receivers (203, 1201).
Abstract: Several approaches are disclosed for combining HDR and 3D image structure analysis and coding, in particular an encoding apparatus for encoding a first view high dynamic range image and a second view high dynamic range image comprising: first and second HDR image receivers (203, 1201) arranged to receive the first view high dynamic range image and a second view high dynamic range image; a predictor (209) arranged to predict the first view high dynamic range image from a low dynamic range representation of the first view high dynamic range image; and a view predictor (1203) to predict the second view high dynamic range image from at least one of the first view high dynamic range image, a low dynamic range representation of the second view high dynamic range image, or a low dynamic range representation of the first view high dynamic range image.

Journal ArticleDOI
01 Dec 2011 - Displays
TL;DR: A distortion-free data hiding algorithm is presented which can embed secret messages into high dynamic range (HDR) images and performs adaptive message embedding, where pixels conceal different amounts of secret messages based on their homogeneous representations.

Proceedings ArticleDOI
06 Nov 2011
TL;DR: A fast pattern matching scheme termed Matching by Tone Mapping (MTM) is proposed which allows matching under non-linear tone mappings and is empirically shown to be highly discriminative and robust to noise.
Abstract: We propose a fast pattern matching scheme termed Matching by Tone Mapping (MTM) which allows matching under non-linear tone mappings. We show that, when tone mapping is approximated by a piecewise constant function, a fast computational scheme is possible requiring computational time similar to the fast implementation of Normalized Cross Correlation (NCC). In fact, the MTM measure can be viewed as a generalization of the NCC for non-linear mappings and actually reduces to NCC when mappings are restricted to be linear. The MTM is shown to be invariant to non-linear tone mappings, and is empirically shown to be highly discriminative and robust to noise.
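A naive per-window version of the MTM measure conveys the idea: approximate the tone mapping from pattern to window by the best piecewise-constant function over a few gray-level slices and normalize the residual by the window variance. The fast NCC-like computation that makes the scheme practical is not reproduced here, and the bin count is arbitrary.

```python
import numpy as np

def mtm_distance(pattern, window, n_bins=8):
    """Naive Matching-by-Tone-Mapping distance for one candidate window:
    fit the best piecewise-constant mapping from pattern gray levels to the
    window over n_bins slices and return the residual normalized by the
    window variance (0 means a perfect match under some tone mapping)."""
    p = pattern.ravel().astype(np.float64)
    w = window.ravel().astype(np.float64)
    edges = np.linspace(p.min(), p.max() + 1e-9, n_bins + 1)[1:-1]
    labels = np.digitize(p, edges)            # slice index of each pattern pixel
    fitted = np.zeros_like(w)
    for j in range(n_bins):                   # best constant per slice = window mean there
        mask = labels == j
        if mask.any():
            fitted[mask] = w[mask].mean()
    residual = np.sum((w - fitted) ** 2)
    return residual / (w.size * w.var() + 1e-12)
```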

Journal ArticleDOI
01 Feb 2011
TL;DR: A local TM algorithm is proposed in which the input HDR image is segmented using the K-means algorithm and a display gamma parameter is set automatically for each segmented region, generating the tone-mapped image by the proposed local TM.
Abstract: Tone mapping (TM) algorithms reproduce high dynamic range (HDR) images on low dynamic range (LDR) display devices such as monitors or printers. In this paper, we propose a local TM algorithm in which the HDR input is segmented using the K-means algorithm and a display gamma parameter is set automatically for each segmented region. The proposed TM algorithm computes the luminance of an input that is the radiance map generated from a set of LDR images acquired with varying exposure settings. Then, according to the bilateral-filtered luminance, the image is divided into a number of regions using the K-means algorithm. The display gamma value is set automatically according to the mean value of each region, and the tone of the HDR image is reproduced by a local TM method with the adaptive gamma value. Computer simulation with real LDR images shows the effectiveness of the proposed local TM algorithm in terms of the visual quality as well as the local contrast. It can be used for contrast and color enhancement in various display and acquisition devices.
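The segment-then-gamma pipeline can be sketched with a tiny 1-D k-means on smoothed luminance; a Gaussian filter stands in for the bilateral filter, and the mapping from region mean to display gamma is illustrative rather than the paper's rule.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def kmeans_1d(values, k=4, iters=20):
    """Tiny 1-D k-means on luminance values (illustrative helper)."""
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    labels = np.zeros(values.shape, dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def region_adaptive_gamma(lum, k=4):
    """Smooth the log luminance (Gaussian stands in for the bilateral filter),
    cluster it with k-means, and apply a per-region display gamma derived from
    the region mean; the mean-to-gamma rule is illustrative."""
    base = gaussian_filter(np.log1p(lum.astype(np.float64)), sigma=3.0)
    labels, centers = kmeans_1d(base.ravel(), k=k)
    labels = labels.reshape(lum.shape)
    norm = (lum - lum.min()) / (lum.max() - lum.min() + 1e-12)
    out = np.zeros_like(norm)
    c_max = centers.max() + 1e-12
    for j, c in enumerate(centers):
        gamma = 0.5 + 0.5 * (c / c_max)   # darker regions get a smaller gamma (stronger lift)
        out[labels == j] = norm[labels == j] ** gamma
    return out
```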

Proceedings ArticleDOI
20 Jun 2011
TL;DR: A novel view of HDR capture is taken, based on a computational photography approach that proposes to first optically encode both the low dynamic range portion of the scene and the highlight information into a low dynamic range image that can be captured with a conventional image sensor.
Abstract: Without specialized sensor technology or custom, multi-chip cameras, high dynamic range imaging typically involves time-sequential capture of multiple photographs. The obvious downside to this approach is that it cannot easily be applied to images with moving objects, especially if the motions are complex. In this paper, we take a novel view of HDR capture, which is based on a computational photography approach. We propose to first optically encode both the low dynamic range portion of the scene and highlight information into a low dynamic range image that can be captured with a conventional image sensor. This step is achieved using a cross-screen, or star filter. Second, we decode, in software, both the low dynamic range image and the highlight information. Lastly, these two portions can be combined to form an image of a higher dynamic range than the regular sensor dynamic range.

Proceedings ArticleDOI
TL;DR: This work proposes a criterion for the automatic detection of image flicker by analyzing the log average pixel brightness of the tone-mapped frame, as well as a generic method to reduce flicker as a post-processing step.
Abstract: In order to display a high dynamic range (HDR) video on a regular low dynamic range (LDR) screen, it needs to be tone mapped. A great number of tone mapping (TM) operators exist - most of them designed to tone map one image at a time. Using them on each frame of an HDR video individually leads to flicker in the resulting sequence. In our work, we analyze three tone mapping operators with respect to flicker. We propose a criterion for the automatic detection of image flicker by analyzing the log average pixel brightness of the tone mapped frame. Flicker is detected if the difference between the averages of two consecutive frames is larger than a threshold derived from Stevens' power law. Fine-tuning of the threshold is done in a subjective study. Additionally, we propose a generic method to reduce flicker as a post processing step. It is applicable to all tone mapping operators. We begin by tone mapping a frame with the chosen operator. If the flicker detection reports a visible variation in the frame's brightness, its brightness is adjusted. As a result, the brightness variation is smoothed over several frames, becoming less disturbing.
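The flicker criterion is straightforward to sketch: compare the log-average brightness of consecutive tone-mapped frames against a threshold. The relative-difference threshold below is a placeholder for the value the authors derive from Stevens' power law and fine-tune in their subjective study.

```python
import numpy as np

def detect_flicker(frames, rel_threshold=0.05):
    """Flag frame indices whose log-average brightness differs from the
    previous frame's by more than a relative threshold (placeholder value)."""
    log_avg = [np.exp(np.mean(np.log(f.astype(np.float64) + 1e-6))) for f in frames]
    flicker = []
    for t in range(1, len(log_avg)):
        if abs(log_avg[t] - log_avg[t - 1]) / max(log_avg[t - 1], 1e-6) > rel_threshold:
            flicker.append(t)     # visible brightness jump between frames t-1 and t
    return flicker
```

The paper's post-processing step then smooths the offending frame's brightness over several frames rather than leaving the jump in place.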

Patent
25 Jan 2011
TL;DR: In this paper, an interactive graphical user interface is provided for the end user to specify the source image for each separate part of the final high dynamic range image, either by creating an image mask or by scribbling on the image.
Abstract: A new high dynamic range image synthesis method which can handle local object motion, wherein an interactive graphical user interface is provided for the end user, through which one can specify the source image for each separate part of the final high dynamic range image, either by creating an image mask or by scribbling on the image. The high dynamic range image synthesis includes the following steps: capturing low dynamic range images with different exposures; registering the low dynamic range images; estimating the camera response function; converting the low dynamic range images to temporary radiance images using the estimated camera response function; and fusing the temporary radiance images into a single high dynamic range (HDR) image by employing a method of layered masking.

Patent
16 Mar 2011
TL;DR: In this article, a method for brightness and contrast enhancement includes computing a luminance histogram of a digital image, computing first distances from the luminance histogram to a plurality of predetermined luminance histograms, and estimating first control point values for a global tone mapping curve from predetermined control point values corresponding to a subset of the predetermined histograms selected based on the computed first distances.
Abstract: A method for brightness and contrast enhancement includes computing a luminance histogram of a digital image, computing first distances from the luminance histogram to a plurality of predetermined luminance histograms, estimating first control point values for a global tone mapping curve from predetermined control point values corresponding to a subset of the predetermined luminance histograms selected based on the computed first distances, and interpolating the estimated control point values to determine the global tone mapping curve. The method may also include dividing the digital image into a plurality of image blocks, and enhancing each pixel in the digital image by computing second distances from a pixel in an image block to the centers of neighboring image blocks, and computing an enhanced pixel value based on the computed second distances, predetermined control point values corresponding to the neighboring image blocks, and the global tone mapping curve.

Proceedings ArticleDOI
25 Jul 2011
TL;DR: This paper presents a perceptually based algorithm for modeling the color shift, known as the Purkinje effect, that occurs for human viewers in low-light scenes, and leverages current HDR techniques to control the image's dynamic range.
Abstract: In this paper we present a perceptually based algorithm for modeling the color shift that occurs for human viewers in low-light scenes. Known as the Purkinje effect, this color shift occurs as the eye transitions from photopic, cone-mediated vision in well-lit scenes to scotopic, rod-mediated vision in dark scenes. At intermediate light levels vision is mesopic with both the rods and cones active. Although the rods have a spectral response distinct from the cones, they still share the same neural pathways. As light levels decrease and the rods become increasingly active they cause a perceived shift in color. We model this process so that we can compute perceived colors for mesopic and scotopic scenes from spectral image data. We also describe how the effect can be approximated from standard high dynamic range RGB images. Once we have determined rod and cone responses, we map them to RGB values that can be displayed on a standard monitor to elicit the intended color perception when viewed photopically. Our method focuses on computing the color shift associated with low-light conditions and leverages current HDR techniques to control the image's dynamic range. We include results generated from both spectral and RGB input images.
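A crude mesopic blend conveys the idea of mixing cone-mediated color with a rod-driven term as light levels drop; the photopic/scotopic bounds and the assumed bluish scotopic hue below are illustrative and much simpler than the paper's calibrated model.

```python
import numpy as np

def mesopic_blend(cone_rgb, rod_response, log10_lum, photopic=1.0, scotopic=-2.0):
    """Blend photopic (cone) color with a rod-driven term using a per-pixel
    mesopic factor derived from log10 luminance; bounds and the bluish
    scotopic hue are illustrative."""
    m = np.clip((photopic - log10_lum) / (photopic - scotopic), 0.0, 1.0)[..., None]
    rod_hue = np.array([0.25, 0.25, 0.45])          # assumed bluish scotopic color
    rod_term = rod_response[..., None] * rod_hue
    return (1.0 - m) * cone_rgb + m * rod_term      # m=0 photopic, m=1 fully scotopic
```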

Patent
Ahmed El-Mahdy, Hisham El-Shishiny
06 Dec 2011
TL;DR: In this article, a method for tone-mapping a High Dynamic Range (HDR) video data stream encoded in MPEG format is proposed. However, the method is limited to MPEG-4 and requires decoding the HDR video data stream to generate decoded I-frames, auxiliary decoded data related to P-frames, and auxiliary decoded data related to B-frames.
Abstract: A method for tone-mapping a High Dynamic Range (HDR) video data stream encoded in MPEG format. The method comprises decoding the HDR video data stream to generate decoded I-frames, auxiliary decoded data related to P-frames, and auxiliary decoded data related to B-frames. The method further comprises applying a tone mapping function to each decoded I-frame to provide a tone-mapped I-frame according to a tone mapping operator; for each P-frame depending on a reference I-frame, computing the tone-mapped P-frame from the tone-mapped I-frame previously determined for the reference I-frame, the reference I-frame, and the auxiliary decoded data related to the P-frame; and, for each B-frame, computing the tone-mapped B-frame from the tone-mapped I-frame previously determined for the reference I-frame, the tone-mapped P-frame previously determined for the reference P-frame, and the auxiliary decoded data related to the B-frame.

Patent
05 Jan 2011
TL;DR: In this article, a bilateral filter is used to extract a base frame with luminance information, and a tone mapping operation is applied to the base frame to generate a relatively low dynamic range base frame, which is then compressed; the detail frame is compressed separately.
Abstract: A method of producing a compressed video data stream (12) by compressing a stream of relatively high dynamic range video frames (2). A bilateral filter (3) extracts a base frame (4) with luminance information. The base frame (4) and the original frame (2) are used to provide a detail frame (5) with chroma information. A tone mapping operation (6) is selected (7, 11) and applied to the base frame to generate a relatively low dynamic range base frame (8), which is then compressed (9). The detail frame (5) is compressed separately. Final frame data (12) is then created, consisting of the compressed relatively low dynamic range base frame, the compressed detail frame, and stored information in respect of the tone mapping operation that had been applied to the base frame.
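The base/detail split behind this scheme can be sketched as follows; a Gaussian filter stands in for the bilateral filter, the range-compression constant is illustrative, and the returned scale plays the role of the stored tone-mapping side information.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_base_detail(hdr_lum, sigma=8.0, target_range=2.0):
    """Split an HDR luminance frame into a range-compressed base layer (to be
    coded as LDR) and a detail layer (coded separately); a Gaussian stands in
    for the bilateral filter and the compression constant is illustrative."""
    log_l = np.log10(hdr_lum.astype(np.float64) + 1e-6)
    base = gaussian_filter(log_l, sigma=sigma)        # edge-preserving base layer (approximated)
    detail = log_l - base                             # detail layer, kept for separate coding
    scale = target_range / max(base.max() - base.min(), 1e-6)
    ldr_base = 10.0 ** ((base - base.max()) * scale)  # base compressed into ~target_range decades
    return ldr_base, detail, scale                    # scale doubles as tone-mapping side info
```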

Journal ArticleDOI
TL;DR: A simple algorithm is presented that selectively adjusts the local gradients in affected regions of the filtered image so that they are consistent with those in the original image.
Abstract: We present a method for restoring antialiased edges that are damaged by certain types of nonlinear image filters. This problem arises with many common operations such as intensity thresholding, tone mapping, gamma correction, histogram equalization, bilateral filters, unsharp masking, and certain nonphotorealistic filters. We present a simple algorithm that selectively adjusts the local gradients in affected regions of the filtered image so that they are consistent with those in the original image. Our algorithm is highly parallel and is therefore easily implemented on a GPU. Our prototype system can process up to 500 megapixels per second and we present results for a number of different image filters.

Journal ArticleDOI
01 Jul 2011
TL;DR: An integrated photographic and gradient tone-mapping processor that can be configured for different applications is presented, resulting in a 50% improvement in speed and area as compared with previously-described processors.
Abstract: Due to recent advances in high dynamic range (HDR) technologies, the ability to display HDR images or videos on conventional LCD devices has become more and more important. Many tone-mapping algorithms have been proposed to this end, the choice of which depends on display characteristics such as luminance range, contrast ratio and gamma correction. An ideal HDR tone-mapping processor should have a robust core functionality, high flexibility, and low area consumption, and therefore an ARM-core-based system-on-chip (SOC) platform with a HDR tone-mapping application-specific integrated circuit (ASIC) is suitable for such applications. In this paper, we present a systematic methodology for the development of a tone-mapping processor of optimized architecture using an ARM SOC platform, and illustrate the use of this novel HDR tone-mapping processor for both photographic and gradient compression. Optimization is achieved through four major steps: common module extraction, computation power enhancement, hardware/software partitioning, and cost function analysis. Based on the proposed scheme, we present an integrated photographic and gradient tone-mapping processor that can be configured for different applications. This newly-developed processor can process 1,024 × 768 images at 60 fps, runs at a 100 MHz clock, and consumes a core area of 8.1 mm² under TSMC 0.13 μm technology, resulting in a 50% improvement in speed and area as compared with previously-described processors.

Patent
30 Jun 2011
TL;DR: In this article, an approach for generating high dynamic range images from a low dynamic range image is presented; the generation uses a mapping that relates input data, in the form of sets of image spatial positions and combinations of color coordinates of the associated low dynamic range pixel values, to output data in the form of high dynamic range pixel values.
Abstract: An approach is provided for generating a high dynamic range image from a low dynamic range image. The generation is performed using a mapping relating input data in the form of input sets of image spatial positions and a combination of color coordinates of low dynamic range pixel values associated with the image spatial positions to output data in the form of high dynamic range pixel values. The mapping is generated from a reference low dynamic range image and a corresponding reference high dynamic range image. Thus, a mapping from the low dynamic range image to a high dynamic range image is generated on the basis of corresponding reference images. The approach may be used for prediction of high dynamic range images from low dynamic range images in an encoder and decoder. A residual image may be generated and used to provide improved high dynamic range image quality.

Proceedings Article
05 Jul 2011
TL;DR: In this article, the problem of compositing a high dynamic range (HDR) image for display on a standard low dynamic range device is addressed through matte-based fusion of multiple images captured with different camera exposures.
Abstract: The problem of compositing a high dynamic range (HDR) image for display on a standard low dynamic range device involves matte-based fusion of multiple images captured with different camera exposures, followed by a suitable tone mapping of the fused HDR image. The fused image should represent the entire scene in a clear, well-exposed manner by bringing the under- and over-exposed regions from the input images into the display range of the device while preserving the local contrast. We define matting as a multi-objective optimization problem based on these desired characteristics of the output, and provide the solution using an Euler-Lagrange technique. The proposed technique yields visually appealing fused images with a high value of contrast. Our technique produces the fused image of a low dynamic range, and thus it eliminates the need for generation of an intermediate HDR image and associated tone mapping. Additionally, our technique does not require any knowledge of the camera response functions or exposure settings.
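As a simple stand-in for the Euler-Lagrange matting, a plain well-exposedness-weighted fusion shows the kind of matte-based blend being optimized; the Gaussian weight around mid-gray is an assumption, not the paper's multi-objective formulation.

```python
import numpy as np

def well_exposed_fusion(exposures, sigma=0.2):
    """Blend an exposure stack with per-pixel 'well-exposedness' weights
    (Gaussian around mid-gray); a simplified matte-based fusion, not the
    paper's Euler-Lagrange solution."""
    stack = np.stack([e.astype(np.float64) / 255.0 for e in exposures])
    weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    fused = (weights * stack).sum(axis=0)
    return np.clip(fused * 255.0, 0.0, 255.0).astype(np.uint8)
```

Like the paper's method, this fuses directly to a display-range result, so no intermediate HDR image or subsequent tone mapping is needed.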

Proceedings ArticleDOI
26 Oct 2011
TL;DR: A novel adaptive color mapping method for virtual objects in mixed-reality environments is presented that takes the camera into account and can thus handle changes of its parameters during runtime; results show that virtual objects look visually more plausible than when just applying tone-mapping operators.
Abstract: We present a novel adaptive color mapping method for virtual objects in mixed-reality environments. In several mixed-reality applications, added virtual objects should be visually indistinguishable from real objects. Recent mixed-reality methods use global-illumination algorithms to approach this goal. However, simulating the light distribution is not enough for visually plausible images. Since the observing camera has its very own transfer function from real-world radiance values to RGB colors, virtual objects look artificial just because their rendered colors do not match with those of the camera. Our approach combines an on-line camera characterization method with a heuristic to map colors of virtual objects to colors as they would be seen by the observing camera. Previous tone-mapping functions were not designed for use in mixed-reality systems and thus did not take the camera-specific behavior into account. In contrast, our method takes the camera into account and thus can also handle changes of its parameters during runtime. The results show that virtual objects look visually more plausible than by just applying tone-mapping operators.

Patent
Peng Lin
05 Jan 2011
TL;DR: Adaptive local tone mapping may be used to convert a high dynamic range image to a low dynamic range (LDR) image, as discussed by the authors.
Abstract: Adaptive local tone mapping may be used to convert a high dynamic range image to a low dynamic range image. Tone mapping may be performed on a Bayer domain image. A high dynamic range image may be filtered to produce a luminance signal. An illumination component of the luminance signal may be compressed, and a reflectance component of the luminance signal may be sharpened. After the luminance signal has been processed, it may be used in producing an output image in the Bayer domain that has a lower dynamic range than the input image. The output Bayer domain image may be demosaiced to produce an RGB image. Tone mapping may be performed with a tone-mapping processor.