
Showing papers on "Tone mapping published in 2009"


Proceedings ArticleDOI
01 Sep 2009
TL;DR: A novel algorithm and variants for visibility restoration from a single image which allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking and obstacle detection from an in-vehicle camera.
Abstract: One source of difficulties when processing outdoor images is the presence of haze, fog or smoke, which fades the colors and reduces the contrast of the observed objects. We introduce a novel algorithm and variants for visibility restoration from a single image. The main advantage of the proposed algorithm compared with others is its speed: its complexity is a linear function of the number of image pixels only. This speed allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking and obstacle detection from an in-vehicle camera. Another advantage is the ability to handle both color and gray-level images, since the ambiguity between the presence of fog and objects with low color saturation is resolved by assuming that only small objects can have colors with low saturation. The algorithm is controlled by only a few parameters and consists of three steps: atmospheric veil inference, image restoration and smoothing, and tone mapping. A comparative study and quantitative evaluation against several other state-of-the-art algorithms demonstrates that similar or better quality results are obtained. Finally, an application to lane-marking extraction in gray-level images is presented, illustrating the usefulness of the approach.
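The restoration step can be sketched as follows, assuming the atmospheric veil has already been inferred in an earlier step; the function name and the white-atmosphere normalization below are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def restore_visibility(image, veil, sky_intensity=1.0):
    """Remove an inferred atmospheric veil from a hazy image.

    image: H x W (gray) or H x W x 3 (color) array in [0, 1].
    veil:  H x W atmospheric veil estimate in [0, sky_intensity).
    Follows the standard haze model I = R * (1 - V / Is) + V,
    solved for the restored radiance R.
    """
    if image.ndim == 3:
        veil = veil[..., None]  # broadcast the veil over color channels
    restored = (image - veil) / np.clip(1.0 - veil / sky_intensity, 1e-6, None)
    return np.clip(restored, 0.0, 1.0)
```

With a zero veil the image passes through unchanged; a stronger veil lifts more haze but amplifies noise, which is why the paper follows restoration with smoothing and tone mapping.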

1,219 citations


Journal ArticleDOI
TL;DR: This work proposes a technique for fusing a bracketed exposure sequence into a high quality image without first converting to high dynamic range (HDR), which avoids camera response curve calibration and is computationally efficient.
Abstract: We propose a technique for fusing a bracketed exposure sequence into a high quality image, without first converting to high dynamic range (HDR). Skipping the physically based HDR assembly step simplifies the acquisition pipeline. This avoids camera response curve calibration and is computationally efficient. It also allows for including flash images in the sequence. Our technique blends multiple exposures, guided by simple quality measures like saturation and contrast. This is done in a multiresolution fashion to account for the brightness variation in the sequence. The resulting image quality is comparable to existing tone mapping operators.
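As a rough illustration of the idea, the following single-scale sketch blends exposures by per-pixel quality weights; the paper itself performs the blend in a multiresolution fashion, and the contrast and saturation measures here are simplified stand-ins:

```python
import numpy as np

def fuse_exposures(images, eps=1e-12):
    """Single-scale sketch of exposure fusion.

    images: list of H x W x 3 float arrays in [0, 1].
    Each pixel gets a quality weight from local contrast (a crude
    Laplacian magnitude) and color saturation; the fused image is the
    weight-normalized blend of the inputs.
    """
    weights = []
    for img in images:
        gray = img.mean(axis=2)
        # crude Laplacian magnitude as a contrast measure
        lap = np.abs(
            -4 * gray
            + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
            + np.roll(gray, 1, 1) + np.roll(gray, -1, 1)
        )
        saturation = img.std(axis=2)       # channel spread as saturation
        weights.append(lap * saturation + eps)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return sum(w[..., None] * img for w, img in zip(weights, images))
```

Blending single-scale like this tends to produce seams at weight transitions, which is exactly why the paper moves the blend into a Laplacian pyramid.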

578 citations


Journal ArticleDOI
01 Apr 2009
TL;DR: The results indicate that the relation between contrast compression and the color saturation correction that matches color appearance is non-linear, and that a smaller color correction is required for a small change of contrast.
Abstract: Tone mapping algorithms offer sophisticated methods for mapping a real-world luminance range to the luminance range of the output medium but they often cause changes in color appearance. In this work we conduct a series of subjective appearance matching experiments to measure the change in image colorfulness after contrast compression and enhancement. The results indicate that the relation between contrast compression and the color saturation correction that matches color appearance is non-linear, and that a smaller color correction is required for a small change of contrast. We demonstrate that the relation cannot be fully explained by color appearance models. We propose color correction formulas that can be used with existing tone mapping algorithms. We extend existing global and local tone mapping operators and show that the proposed color correction formulas can preserve original image colors after tone scale manipulation.

184 citations


Book
17 Aug 2009
TL;DR: A graphical, intuitive introduction to bilateral filtering, a practical guide for efficient implementation, an overview of its numerous applications, as well as mathematical analysis.
Abstract: Bilateral filtering is one of the most popular image processing techniques. The bilateral filter is a nonlinear process that can blur an image while respecting strong edges. Its ability to decompose an image into different scales without causing haloes after modification has made it ubiquitous in computational photography applications such as tone mapping, style transfer, relighting, and denoising. Bilateral Filtering: Theory and Applications provides a graphical, intuitive introduction to bilateral filtering, a practical guide for efficient implementation, an overview of its numerous applications, as well as mathematical analysis. This broad and detailed overview covers theoretical and practical issues that will be useful to researchers and software developers.
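A brute-force version of the bilateral filter can be written in a few lines; this sketch is for 2-D grayscale images and makes no claim about the book's efficient implementations:

```python
import numpy as np

def bilateral_filter(image, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter on a 2-D grayscale image in [0, 1].

    Each output pixel is a weighted mean of its neighbors, where the
    weight combines spatial closeness (sigma_s) and intensity
    similarity (sigma_r) -- so strong edges are preserved while
    smooth regions are blurred.
    """
    h, w = image.shape
    padded = np.pad(image, radius, mode='edge')
    out = np.zeros_like(image, dtype=float)
    norm = np.zeros_like(image, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            w_spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            w_range = np.exp(-((shifted - image) ** 2) / (2 * sigma_r ** 2))
            out += w_spatial * w_range * shifted
            norm += w_spatial * w_range
    return out / norm
```

The O(radius²) loop per pixel is what makes the naive filter slow; the fast approximations surveyed in the book (bilateral grid, separable and signal-processing formulations) exist to avoid it.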

136 citations


Journal ArticleDOI
01 Dec 2009
TL;DR: It is shown that current rTMO approaches fall short when the input image is not exposed properly; a method is proposed to automatically set a suitable gamma value for each image, based on the image key and empirical data, which enhances visible details without causing artifacts in incorrectly-exposed regions.
Abstract: Most existing image content has low dynamic range (LDR), which necessitates effective methods to display such legacy content on high dynamic range (HDR) devices. Reverse tone mapping operators (rTMOs) aim to take LDR content as input and adjust the contrast intelligently to yield output that recreates the HDR experience. In this paper we show that current rTMO approaches fall short when the input image is not exposed properly. More specifically, we report a series of perceptual experiments using a Brightside HDR display and show that, while existing rTMOs perform well for under-exposed input data, the perceived quality degrades substantially with over-exposure, to the extent that in some cases subjects prefer the LDR originals to images that have been treated with rTMOs. We show that, in these cases, a simple rTMO based on gamma expansion avoids the errors introduced by other methods, and propose a method to automatically set a suitable gamma value for each image, based on the image key and empirical data. We validate the results both by means of perceptual experiments and using a recent image quality metric, and show that this approach enhances visible details without causing artifacts in incorrectly-exposed regions. Additionally, we perform another set of experiments which suggest that spatial artifacts introduced by rTMOs are more disturbing than inaccuracies in the expanded intensities. Together, these findings suggest that when the quality of the input data is unknown, reverse tone mapping should be handled with simple, non-aggressive methods to achieve the desired effect.
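A minimal sketch of the two ingredients named above, the image key and gamma expansion; note that the paper derives its key-to-gamma mapping from empirical data, which is not reproduced here:

```python
import numpy as np

def image_key(luminance, eps=1e-6):
    """Log-average luminance, a standard measure of how bright an
    image is overall (its 'key')."""
    return np.exp(np.mean(np.log(luminance + eps)))

def gamma_expand(ldr_luminance, gamma):
    """Simple reverse tone mapping: expand an LDR luminance channel
    in [0, 1] with a power curve (gamma > 1 darkens mid-tones and
    stretches the highlights)."""
    return np.clip(ldr_luminance, 0.0, 1.0) ** gamma
```

In the paper's scheme, the key computed from the input would drive the choice of gamma; here the caller supplies gamma directly.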

122 citations


Journal ArticleDOI
27 Jul 2009
TL;DR: A psychophysical study is conducted in order to acquire appearance data for many different luminance levels covering most of the dynamic range of the human visual system, yielding a generalized color appearance model that can be used to adapt the tone and color of images to different dynamic ranges for cross-media reproduction while maintaining appearance that is close to human perception.
Abstract: Display technology is advancing quickly with peak luminance increasing significantly, enabling high-dynamic-range displays. However, perceptual color appearance under extended luminance levels has not been studied, mainly due to the unavailability of psychophysical data. Therefore, we conduct a psychophysical study in order to acquire appearance data for many different luminance levels (up to 16,860 cd/m2) covering most of the dynamic range of the human visual system. These experimental data allow us to quantify human color perception under extended luminance levels, yielding a generalized color appearance model. Our proposed appearance model is efficient, accurate and invertible. It can be used to adapt the tone and color of images to different dynamic ranges for cross-media reproduction while maintaining appearance that is close to human perception.

71 citations


Journal ArticleDOI
TL;DR: Experiments indicate that the results produced by the method are less prone to visible artifacts than the ones obtained with the state-of-the-art technique for real-time automatic computation of brightness enhancement functions.
Abstract: This paper presents an automatic technique for producing high-quality brightness-enhancement functions for real-time reverse tone mapping of images and videos. Our approach uses a bilateral filter to obtain smooth results while preserving sharp luminance discontinuities, and can be efficiently implemented on GPUs. We demonstrate the effectiveness of our approach by reverse tone mapping several images and videos. Experiments based on the HDR visible difference predictor and on an image distortion metric indicate that the results produced by our method are less prone to visible artifacts than the ones obtained with the state-of-the-art technique for real-time automatic computation of brightness enhancement functions.

68 citations


Journal ArticleDOI
01 Apr 2009
TL;DR: It is argued that the temporal properties of glare are a strong means to increase perceived brightness and to produce realistic and attractive renderings of bright light sources, and that dynamic glare renderings are often perceived as more attractive, depending on the chosen scene.
Abstract: Glare is a consequence of light scattered within the human eye when looking at bright light sources. This effect can be exploited for tone mapping since adding glare to the depiction of high-dynamic range (HDR) imagery on a low-dynamic range (LDR) medium can dramatically increase perceived contrast. Even though most, if not all, subjects report perceiving glare as a bright pattern that fluctuates in time, up to now it has only been modeled as a static phenomenon. We argue that the temporal properties of glare are a strong means to increase perceived brightness and to produce realistic and attractive renderings of bright light sources. Based on the anatomy of the human eye, we propose a model that enables real-time simulation of dynamic glare on a GPU. This allows an improved depiction of HDR images on LDR media for interactive applications like games and feature films, or even for adding movement to initially static HDR images. By conducting psychophysical studies, we validate that our method improves perceived brightness and that dynamic glare renderings are often perceived as more attractive, depending on the chosen scene.

62 citations


Book ChapterDOI
29 Jul 2009
TL;DR: The presented biological model allows reliable dynamic range compression with natural color constancy properties and its non-separable spatio-temporal filter enhances HDR video content processing with an added temporal constancy.
Abstract: From moonlight to bright sunshine, real-world visual scenes contain a very wide range of luminance; they are said to be High Dynamic Range (HDR). Our visual system is well adapted to explore and analyze such variable visual content. It is now possible to acquire such HDR content with digital cameras; however, it is not possible to render it all on standard displays, which have only Low Dynamic Range (LDR) capabilities. This rendering usually results in poor exposure or loss of information. It is necessary to develop locally adaptive Tone Mapping Operators (TMO) to compress HDR content to LDR while keeping as much information as possible. The human retina is known to perform such a task to overcome the limited range of values that can be coded by neurons. The purpose of this paper is to present a TMO inspired by the properties of the retina. The presented biological model allows reliable dynamic range compression with natural color constancy properties. Moreover, its non-separable spatio-temporal filter enhances HDR video content processing with an added temporal constancy.

47 citations


Journal ArticleDOI
TL;DR: A psychophysical study is presented to evaluate the performance of inverse (reverse) tone mapping algorithms and to investigate if a high level of complexity is needed and if a correlation exists between image content and quality.
Abstract: In recent years, inverse tone mapping techniques have been proposed for enhancing low-dynamic range (LDR) content for a high-dynamic range (HDR) experience on HDR displays, and for image-based lighting. In this paper, we present a psychophysical study to evaluate the performance of inverse (reverse) tone mapping algorithms. Some of these techniques are computationally expensive because they need to resolve quantization problems that can occur when expanding an LDR image. Even if they can be implemented efficiently in hardware, the computational cost can still be high. An alternative is to utilize less complex operators, although these may suffer in terms of accuracy. Our study investigates, first, whether a high level of complexity is needed for inverse tone mapping and, second, whether a correlation exists between image content and quality. Two main applications have been considered: visualization on an HDR monitor and image-based lighting.

38 citations


Patent
Sheng Lin1
15 Jan 2009
TL;DR: The local contrast values and the tone-mapped values are combined, respectively, for the corresponding pixels in the image to produce the enhanced image.
Abstract: Methods and systems for enhancing an image. Respective local contrast values are determined for selected pixels of the image by, for each selected pixel, adjusting a respective luminance value of the pixel by an average luminance value of neighboring pixels to obtain the local contrast value. Respective tone-mapped values are determined for further selected pixels in the image based on a global luminance value representing the image. The local contrast values and the tone-mapped values are combined, respectively, for the corresponding pixels in the image to produce the enhanced image.

Proceedings ArticleDOI
07 Nov 2009
TL;DR: Inspired by Retinex theory and histogram rescaling techniques, the proposed method aims to realize natural rendering of the image with respect to the constraints listed above.
Abstract: A new method for Natural Rendering of Color Image based on Retinex (NRCIR) is proposed. Here, the word "natural" means that the ambience of the image (a warm or cold color impression) should not be changed after enhancement. Furthermore, the treatment should not introduce any additional light sources and should not produce halo effects or amplify blocking effects. Inspired by Retinex theory and histogram rescaling techniques, the proposed method aims to realize natural rendering of the image with respect to the constraints listed above. Extensive tests with different types of natural images have been performed. The obtained results clearly demonstrate the efficiency of the proposed method.

Patent
05 Jun 2009
TL;DR: In this article, a tone mapping curve can automatically be generated within the sensor and adjusted appropriately for the scene based on predetermined parameters, such as the light-product value of a given image.
Abstract: A device, method, computer useable medium, and processor programmed to automatically generate tone mapping curves in a digital camera based on image metadata are described. By examining image metadata from a digital camera's sensor, such as the light-product, one can detect sun-lit, high-light, and low-light scenes. Once the light-product value has been calculated for a given image, a tone mapping curve can automatically be generated within the sensor and adjusted appropriately for the scene based on predetermined parameters. Further, it has been determined that independently varying the slopes of the tone mapping curve at the low end (S0) and high end (S1) of the curve results in more visually appealing images. By dynamically and independently selecting S0 and S1 values based on image metadata, more visually pleasing images can be generated.
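One common way to realize a smooth tone curve with independently chosen end slopes is a cubic Hermite segment; this is an illustrative construction, not the patent's specified curve, and the slope values are placeholders:

```python
import numpy as np

def tone_curve(x, s0=2.0, s1=0.5):
    """Cubic Hermite tone curve on [0, 1] through (0, 0) and (1, 1),
    with independently chosen slopes s0 at the low end and s1 at the
    high end (the patent picks S0/S1 from image metadata; the defaults
    here are placeholders).

    Note: for extreme slope choices the cubic need not stay monotonic.
    """
    t = np.clip(x, 0.0, 1.0)
    # standard Hermite basis with endpoint values 0 and 1
    return (s0 * (t**3 - 2 * t**2 + t)
            + (-2 * t**3 + 3 * t**2)
            + s1 * (t**3 - t**2))
```

A steep s0 lifts shadow detail while a shallow s1 rolls off highlights, which matches the patent's motivation for varying the two ends independently.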

Proceedings ArticleDOI
28 May 2009
TL;DR: Contrast brushes, an interactive method for directly brushing contrast adjustments onto an image, is implemented using a histogram warping approach that implements tone mapping using piecewise-defined, continuously differentiable, monotonic splines.
Abstract: We implement contrast brushes, an interactive method for directly brushing contrast adjustments onto an image. The adjustments are performed by a histogram warping approach that implements tone mapping using piecewise-defined, continuously differentiable, monotonic splines. This allows the independent specification of tone changes and contrast adjustments without causing halo or contouring artifacts, while still endowing contrast brushes with intelligible parameters that render their effects predictable for the user. A user study demonstrates that contrast brushes can prove more effective than Adobe Photoshop's interactive contrast enhancement tools.

Patent
Yi-Chen Chiu1, Lidong Xu1, Hong Jiang.1
15 Apr 2009
TL;DR: In this article, a scalable video codec may convert lower bit depth video to higher bit depth video, using decoded lower bit depth video for tone mapping and tone mapping derivation.
Abstract: A scalable video codec may convert lower bit depth video to higher bit depth video using decoded lower bit depth video for tone mapping and tone mapping derivation. The conversion can also use the filtered lower bit depth video for tone mapping and tone mapping derivation.

Proceedings ArticleDOI
01 Dec 2009
TL;DR: This paper proposes a new way to combine high-dynamic-range image fusion and superresolution methods in a two-stage scheme and shows that only two input images can sufficiently capture the dynamic range of the scene.
Abstract: This paper discusses a new framework to enhance image and video quality. Recent advances in high-dynamic-range image fusion and superresolution make it possible to extend the intensity range or to increase the resolution of the image beyond the limitations of the sensor. In this paper, we propose a new way to combine both of these fusion methods in a two-stage scheme. To achieve robust image enhancement in practical application scenarios, we adapt state-of-the-art methods for automatic photometric camera calibration, controlled image acquisition, image fusion and tone mapping. With respect to high-dynamic-range reconstruction, we show that only two input images can sufficiently capture the dynamic range of the scene. The usefulness and performance of this system are demonstrated on images taken with various types of cameras.

Journal ArticleDOI
TL;DR: A novel fast approximation of the trilateral filter for high dynamic range (HDR) image tone mapping is presented using a signal processing approach; the experimental results show satisfactory performance.

Journal ArticleDOI
TL;DR: This paper implements volumetric unsharp masking at interactive frame rates based on current GPU features, and performs experiments on various volume data sets to validate this local contrast enhancement.
Abstract: Feature enhancement is important for the interpretation of complex structures and the detection of local details in volume visualization. We present a simple and effective method, volumetric unsharp masking, to enhance local contrast of features. In general, unsharp masking is an operation that adds the scaled high-frequency part of the signal to itself. The signal in this paper is the radiance at each sample point in the ray-casting based volume rendering, and the radiance depends on both transfer functions and lighting. Our volumetric unsharp masking modulates the radiance by adding back the scaled difference between the radiance and the smoothed radiance. This local color modulation does not change the shape of features due to the same opacity, but it does enhance local contrast of structures in a unified manner. We implemented volumetric unsharp masking at interactive frame rates based on current GPU features, and performed experiments on various volume data sets to validate this local contrast enhancement. The results showed that volumetric unsharp masking reveals more local details and improves depth perception.
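The core unsharp masking operation, adding back the scaled difference between the signal and its smoothed version, can be sketched in 2-D; the volumetric, radiance-domain GPU version in the paper is more involved:

```python
import numpy as np

def box_blur(image, radius=1):
    """Simple separable box blur (a stand-in for the smoothing step)."""
    out = image.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for d in range(-radius, radius + 1):
            acc += np.roll(out, d, axis=axis)
        out = acc / (2 * radius + 1)
    return out

def unsharp_mask(image, amount=0.5, radius=1):
    """Unsharp masking: add the scaled high-frequency part of the
    signal (image minus its smoothed version) back to the signal,
    boosting local contrast."""
    return image + amount * (image - box_blur(image, radius))
```

In the paper the "signal" is the per-sample radiance during ray casting rather than pixel values, so the enhancement modulates color without changing opacity or shape.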

Proceedings ArticleDOI
07 Nov 2009
TL;DR: An automatic algorithm for high dynamic range compression based on properties of the human visual system is proposed; it avoids the halo artifact and automates the parameter adjustment process while preserving image details.
Abstract: It is often required to map the radiances of a real scene to a smaller dynamic range so that the image can be properly displayed. However, most such algorithms suffer from the halo artifact or require manual parameter tweaking that is often a tedious process for the user. We propose an automatic algorithm for high dynamic range compression based on the properties of human visual system. The algorithm is performed in the gradient domain to avoid the halo artifact. It automates the parameter adjustment process while preserving the image details. Performance comparison is provided to illustrate the advantages of the proposed algorithm.

Patent
27 Dec 2009
TL;DR: A real-time image generator may include a first block extracting only a luminance component having a saturation, hue, and value domain from red, green and blue values of an image as discussed by the authors.
Abstract: A real-time image generator is disclosed. A real-time image generator may include a first block extracting only a luminance component having a saturation, hue, and value domain from red, green and blue values of an image. A second block outputs a log summation value and pixel count value with respect to a luminance component of an overall image by using the extracted luminance component and a natural log value. A third block calculates a luminance average value of the image by using the natural log summation value and the pixel count value output in the second block, the third block generating a tone mapping look-up table including a tone mapping operator (Ld) for each luminance range to obtain a final output image using the calculated luminance average value. The third block outputs a tone-mapped red, green and blue value by multiplying a corresponding tone mapping operator (Ld) of the tone mapping look-up table by a red, green and blue value of the input image.
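The log-average luminance computation described above underlies many global operators; the sketch below pairs it with the classic photographic L/(1+L) compression as an illustrative stand-in, since the patent's exact Ld formula is not given here:

```python
import numpy as np

def tone_map_global(luminance, key=0.18, eps=1e-6):
    """Global tone mapping driven by the log-average luminance.

    Scene luminance is scaled by key / log_average, then compressed
    with L / (1 + L). The L / (1 + L) step is an illustrative choice
    (the classic photographic operator), not the patent's Ld.
    """
    # log-average (geometric mean) luminance of the whole image
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    scaled = key * luminance / log_avg
    return scaled / (1.0 + scaled)
```

Because the mapping depends on luminance alone, it can be tabulated once per frame into a lookup table, which is the role of the patent's third block.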

Proceedings ArticleDOI
07 Nov 2009
TL;DR: A high-speed method of correcting and compressing the dynamic range of images that can be operated intuitively and naturally; it attains processing of two million pixels in 0.3 s and real-time VGA movie processing without the need for SIMD instructions, multi-thread operation, or additional hardware.
Abstract: We describe a high-speed method of correcting and compressing the dynamic range of images that can be operated intuitively and naturally. Adaptive operations are conducted for shadow, middle, and highlight tones in the local areas of images. Although natural image processing can be achieved with default parameters, we can set the parameters for each of these three tones individually and intuitively. We attained processing of two million pixels in 0.3 s and real-time VGA movie processing without the need for SIMD instructions, multi-thread operation, or additional hardware, using an Athlon64 X2 3800+ CPU.

Proceedings ArticleDOI
TL;DR: Two studies examine the visual perception of similarity. Global tone mapping functions are argued to be a useful descriptor of an artist's perceptual goals with respect to global illumination, and evidence is presented that mapping the scene to a painting with different implied lighting properties produces a less efficient mapping.
Abstract: An emerging body of research suggests that artists consistently seek modes of representation that are efficiently processed by the human visual system, and that these shared properties could leave statistical signatures. In earlier work, we showed evidence that perceived similarity of representational art could be predicted using intensity statistics to which the early visual system is attuned, though semantic content was also found to be an important factor. Here we report two studies that examine the visual perception of similarity. We test a collection of non-representational art, which we argue possesses useful statistical and semantic properties, in terms of the relationship between image statistics and basic perceptual responses. We find two simple statistics, both expressed as single values, that predict nearly a third of the overall variance in similarity judgments of abstract art. An efficient visual system could make a quick and reasonable guess as to the relationship of a given image to others (i.e., its context) by extracting these basic statistics early in the visual stream, and this may hold for natural scenes as well as art. But a major component of many types of art is representational content. In a second study, we present findings related to efficient representation of natural scene luminances in landscapes by a well-known painter. We show empirically that elements of contemporary approaches to high-dynamic-range tone mapping, which are themselves deeply rooted in an understanding of early visual system coding, are present in the way Vincent Van Gogh transforms scene luminances into painting luminances. We argue that global tone mapping functions are a useful descriptor of an artist's perceptual goals with respect to global illumination, and we present evidence that mapping the scene to a painting with different implied lighting properties produces a less efficient mapping.
Together, these studies suggest that statistical regularities in art can shed light on visual processing.

Proceedings ArticleDOI
07 Nov 2009
TL;DR: In the proposed scheme, the luminance of an HDR image is decomposed into a base layer with large gradients and a detail layer with small gradients by using an adaptive half quadratic regularization method.
Abstract: This paper presents a new adaptive tone mapping for high dynamic range (HDR) images. In the proposed scheme, the luminance of an HDR image is decomposed into a base layer with large gradients and a detail layer with small gradients by using an adaptive half quadratic regularization method. The base layer is compressed by a novel global mapping to reduce the dynamic range while the detail layer can be amplified to enhance the local contrasts. With the proposed scheme, the generated low dynamic range (LDR) images look more natural and local details are also preserved very well.
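The base/detail recombination can be sketched in the log-luminance domain; the edge-preserving smoothing (adaptive half-quadratic regularization in the paper) is taken as a given input here, and the compression and gain values are placeholders:

```python
import numpy as np

def base_detail_tonemap(log_lum, smooth, compression=0.5, detail_gain=1.2):
    """Two-layer tone mapping in the log-luminance domain.

    log_lum: log luminance of the HDR image.
    smooth:  an edge-preserving smoothing of log_lum (the paper uses
             adaptive half-quadratic regularization; any edge-aware
             filter can stand in here).
    The base layer (large gradients) is compressed, the detail layer
    (small gradients) is mildly amplified, and the two are recombined.
    """
    base = smooth
    detail = log_lum - base
    return np.exp(compression * base + detail_gain * detail)
```

Compressing only the base layer is what shrinks the overall dynamic range while leaving, or even enhancing, the local contrasts carried by the detail layer.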

Patent
Nils Kokemohr1
25 Sep 2009
TL;DR: A method for filtering a digital image is presented, comprising segmenting the digital image into a plurality of tiles and computing tile histograms corresponding to each of the tiles.
Abstract: A method for filtering a digital image, comprising segmenting the digital image into a plurality of tiles; computing tile histograms corresponding to each of the plurality of tiles; deriving a plurality of tile transfer functions from the tile histograms preferably using 1D convolutions; interpolating a tile transfer function from the plurality of tile transfer functions; and filtering the digital image with the interpolated tile transfer function. Many filters otherwise difficult to conceive or to implement are possible with this method, including an edge-preserving smoothing filter, HDR tone mapping, edge invariant gradient or entropy detection, image upsampling, and mapping coarse data to fine data.
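A minimal sketch of the first two steps, tiling and per-tile transfer functions, using plain histogram equalization as the transfer; the patent's 1D-convolution-derived transfers and the interpolation step are omitted:

```python
import numpy as np

def tile_transfer_functions(image, tile=8, bins=64):
    """Derive a histogram-equalization transfer function per tile.

    image: 2-D array in [0, 1], with sides divisible by `tile`.
    Returns an array of shape (ty, tx, bins) mapping bin index to an
    output level in [0, 1]. (The patent derives transfers from tile
    histograms preferably via 1D convolutions; plain equalization via
    the tile CDF is used here purely for illustration.)
    """
    h, w = image.shape
    ty, tx = h // tile, w // tile
    transfers = np.zeros((ty, tx, bins))
    idx = np.minimum((image * bins).astype(int), bins - 1)  # quantize
    for i in range(ty):
        for j in range(tx):
            block = idx[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            hist = np.bincount(block.ravel(), minlength=bins)
            transfers[i, j] = np.cumsum(hist) / block.size  # tile CDF
    return transfers
```

Applying each tile's transfer directly produces blocking at tile borders; interpolating between neighboring tile transfers per pixel, as the patent describes, is what removes it.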

Proceedings Article
01 Jan 2009
TL;DR: The findings are 1) that limiting image dynamic range does change the apparent gloss of surfaces depicted in the images, and that objects shown in SDR images are perceived to have lower gloss than objects shown in HDR images; 2) that gloss differences are less discriminable in SDR images than in HDR images; and 3) that surface geometry and environmental illumination modulate these effects.
Abstract: In this paper we present results from an experiment designed to investigate the effects of image dynamic range on apparent surface gloss. Using a high dynamic range display, we present high dynamic range (HDR) and standard dynamic range (tone mapped, SDR) renderings of glossy objects in pairs and ask subjects to choose the glossier object. We analyze the results of the experiments using Thurstonian scaling, and derive common scales of perceived gloss for the objects depicted in both the HDR and SDR images. To investigate the effects of geometric complexity, we use both simple and complex objects. To investigate the effects of environmental illumination, we use both a simple area light source and a captured, real-world illumination map. Our findings are 1) that limiting image dynamic range does change the apparent gloss of surfaces depicted in the images, and that objects shown in SDR images are perceived to have lower gloss than objects shown in HDR images; 2) that gloss differences are less discriminable in SDR images than in HDR images; and 3) that surface geometry and environmental illumination modulate these effects.

Introduction: One of the defining characteristics of glossy surfaces is that they reflect images of their surroundings. High gloss surfaces produce sharp, detailed reflection images that clearly show all the features of the surround, while low gloss surfaces produce blurry images that only show bright "highlight" features. Due to the presence of light sources and shadows, the illumination field incident on glossy surfaces can have high luminance dynamic range. This means that the reflections from glossy surfaces can also be high dynamic range. However, in conventional images of glossy objects, these high dynamic range reflections must be clipped or compressed through tone mapping so the images fit within the output range of the display medium (see Figure 1). While the utility of conventional display systems demonstrates that the general characteristics of glossy surfaces are still conveyed by these tone-mapped images, an open question is whether the tone mapping process distorts the apparent gloss of the imaged surfaces. In this paper we present results from an experiment designed to investigate the effects of image dynamic range on apparent surface gloss using a high dynamic range display. In the experiments we present high dynamic range (HDR) and standard dynamic range (tone mapped, SDR) renderings of glossy objects in pairs and ask subjects to choose the glossier object. We analyze the results of the experiments using Thurstonian scaling, and derive common scales of perceived gloss for the objects depicted in both the HDR and SDR images. To investigate the effects of geometric complexity, we use both simple and complex objects. To investigate the effects of environmental illumination, we use both a simple area light source and a captured, real-world illumination map. Our findings are 1) that limiting image dynamic range does change the apparent gloss of surfaces depicted in the images, and that objects shown in SDR images are perceived to have lower gloss than objects shown in HDR images; 2) that objects differing slightly in gloss are less discriminable in SDR images than in HDR images; and 3) that surface geometry and environmental illumination modulate these effects. The following sections describe our methods and results.

Figure 1: High dynamic range (HDR) and standard dynamic range (SDR) images of a bunny object. The image pair on the top looks similar in limited dynamic range prints, but would appear different on a high dynamic range display that could reproduce the full luminance range in the HDR image (see the false color image pair on the bottom).

Related Work: The earliest modern studies of gloss perception have been attributed to Ingersoll [1], who examined the appearance of glossy papers. In 1937, Hunter [2] observed at least six different visual attributes related to apparent gloss. He defined these as: specular gloss, the perceived brightness associated with the specular reflection from a surface; contrast gloss, the perceived relative brightness of specularly and diffusely reflecting areas; distinctness-of-image (DOI) gloss, the perceived sharpness of images reflected in a surface; haze, the perceived cloudiness in reflections near the specular direction; sheen, the perceived shininess at grazing angles in otherwise matte surfaces; and absence-of-texture gloss, the perceived surface smoothness and uniformity. In 1937, Judd [3] formalized Hunter's observations by writing expressions that related them to the physical features of surface bidirectional reflectance distribution functions (BRDFs). Hunter and Judd's research established a conceptual framework that has dominated work in gloss perception to the present day. In 1987, Billmeyer and O'Donnell [4] published an important paper that investigated the multidimensional nature of gloss perception. They collected ratings of the differences in apparent gloss between pairs of acrylic-painted panels with varying gloss levels, viewed under a fluorescent desk lamp outfitted with a chicken-wire screen, then used multidimensional scaling techniques to discover the dimensionality of perceived gloss. For their experimental conditions, they found that gloss could be described by a single dimension. However, this work was significant because it was the first to study the multidimensional nature of gloss perception without preconceptions about how many or what the dimensions might be. In a 1986 report to the CIE, Christie [5] summarized the research findings on gloss perception up to that date. Since that time, McCamy [6,7] has published a pair of review papers on the gloss attributes of metallic surfaces, and Seve [8] and Lozano [9] have outlined frameworks for describing gloss that seek to improve on Hunter's classifications. In the Imaging Science literature, there has been considerable interest in the effects of

[17th Color Imaging Conference Final Program and Proceedings, p. 193]
gloss on printed image quality with efforts to characterize artifacts like differential gloss, bronzing, and gloss mottle [10,11,12,13,14,15] One of the challenges in conducting gloss perception research is producing and controlling the stimuli used in the experiments Generating consistent physical samples is very difficult Therefore, the development of physically-based computer graphics techniques that can produce and present radiometrically accurate images of complex scenes has been a boon to the psychophysical study of gloss perception One of the earliest computer graphics studies was done by Nishida and Shinya [16] who rendered bumpy glossy surfaces using direct point lighting They found that observers made consistent errors in matching gloss properties across different surface geometries and suggested that the results of their experiments could be explained with a simple image histogram matching strategy Pellacini et al [17] conducted a set of experiments inspired by Billmeyer and O’Donnell’s multidimensional scaling studies, but with images of a glossy ball inside a checkerboard box with a ceiling-mounted area light source For this stimulus set, they found that observers used two dimensions to judge gloss, “c” a measure related to the contrast of the image reflected by the surface, and “d” a measure related to the sharpness of the reflected image Ferwerda et al [18] extended this work to characterize multidimensional gloss differences More recent work has examined the role of natural illumination patterns [19] and complex object geometry [20] on surface gloss perception Although computer graphics has greatly facilitated the study of gloss perception, one of the caveats of all of these studies is that they use images of glossy surfaces as stimuli rather than the physical surfaces themselves Because the potentially high dynamic range reflections from glossy surfaces are compressed for display, there is the potential that the gloss properties of the displayed 
surfaces are distorted In our experiment, we employ an HDR display to enable more accurate presentation of physicallybased glossy stimuli Experiments We conducted a scaling experiment to investigate the effects of image dynamic range on apparent surface gloss The stimuli and procedure are described in the following sections
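The Thurstonian scaling analysis used in the experiment can be sketched in a few lines. This is a minimal Case V implementation (the paper does not state which Thurstone case it applies), and the paired-comparison proportions below are invented purely for illustration:

```python
from statistics import NormalDist

def thurstone_case_v(P):
    """Thurstone Case V scale values from a paired-comparison matrix.

    P[i][j] is the proportion of trials on which stimulus i was judged
    glossier than stimulus j; the scale value of stimulus i is the mean
    z-score of its win proportions (diagonal entries are skipped)."""
    inv = NormalDist().inv_cdf
    n = len(P)
    return [sum(inv(P[i][j]) for j in range(n) if j != i) / (n - 1)
            for i in range(n)]

# Invented proportions for three stimuli, glossiest first.
P = [[0.5, 0.8, 0.9],
     [0.2, 0.5, 0.7],
     [0.1, 0.3, 0.5]]
scales = thurstone_case_v(P)
```

Each stimulus's scale value is simply the mean z-score of its win proportions, so stimuli chosen as glossier more often land higher on the common gloss scale, which is what lets HDR and SDR renderings be placed on a shared axis.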

Proceedings ArticleDOI
19 Apr 2009
TL;DR: Not only the quality of the HDRI but also that of the LDRI is improved, compared with the state of the art in conventional HDRI compression.
Abstract: In this paper, we propose a coding algorithm for High Dynamic Range Images (HDRI). Our encoder applies a tone mapping model based on scaled μ-law encoding, followed by a conventional Low Dynamic Range Image (LDRI) encoder. The tone mapping model is designed to minimize the difference between the tone-mapped HDRI and its LDR version. By virtue of the nature of the model, not only the quality of the HDRI but also that of the LDRI is improved, compared with the state of the art in conventional HDRI compression. Furthermore, the error caused by our tone mapping model is theoretically analyzed.
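The abstract does not give the exact form of the scaled μ-law model, but standard μ-law companding, on which it builds, maps a normalised luminance x in [0, 1] to log(1 + μx)/log(1 + μ). A minimal sketch of the forward and inverse mappings (the constant μ = 255 and the [0, 1] normalisation are assumptions, not the paper's tuned values):

```python
import math

MU = 255.0  # classic mu-law constant; the paper's "scaled" variant tunes this

def mu_law_tonemap(x, mu=MU):
    """Compress a normalised HDR luminance x in [0, 1] to [0, 1]."""
    return math.log1p(mu * x) / math.log1p(mu)

def mu_law_expand(y, mu=MU):
    """Invert the compression when reconstructing the HDR image."""
    return math.expm1(y * math.log1p(mu)) / mu
```

The logarithmic curve allocates more of the LDR code range to dark values, which is why a μ-law style tone map can keep both the LDR preview and the reconstructed HDRI close to their references.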

Journal ArticleDOI
TL;DR: A piecewise tone reproduction operator with chromatic adaptation that achieves good subjective results while preserving details of the image; the proposed algorithm also has a fast, simple and practical structure for implementation.
Abstract: To display high dynamic range (HDR) images on conventional display devices that have a low dynamic range (LDR), such as monitors and printers, we propose a piecewise tone reproduction operator with chromatic adaptation. The strong point of our operator is that it reproduces displayable LDR images while maintaining a perceptual match between the real world and the displayed image. The algorithm for dynamic range reduction relies on piecewise constructs and suitable tone reproduction functions that depend on estimates of global luminance modification and local luminance adaptation. Combined with dynamic range reduction, the proposed algorithm also applies the chromatic adaptation technique of the color appearance model in order to preserve the chromatic appearance and color consistency across scene and display environments. The experimental results show that the proposed algorithm achieves good subjective results while preserving details of the image. Furthermore, the proposed algorithm has a fast, simple and practical structure for implementation.
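The paper's piecewise constructs are not specified in the abstract; as a hedged illustration of the two ingredients it combines, the sketch below pairs a Reinhard-style global operator (a stand-in for the piecewise tone reproduction functions, with L_avg playing the role of the global adaptation luminance) with the classic von Kries chromatic adaptation transform:

```python
def reinhard_global(L, L_avg, L_white):
    """Global photographic operator: a stand-in for the paper's piecewise
    tone reproduction functions.  L_avg models global luminance adaptation;
    a scene luminance of L_avg * L_white maps exactly to display white (1.0)."""
    Ls = L / L_avg
    return Ls * (1.0 + Ls / (L_white * L_white)) / (1.0 + Ls)

def von_kries_adapt(lms, white_src, white_dst):
    """Classic von Kries chromatic adaptation: scale each cone response
    by the ratio of destination to source white-point responses."""
    return [c * wd / ws for c, ws, wd in zip(lms, white_src, white_dst)]
```

Applying the luminance compression and the chromatic adaptation separately, as here, is what lets an operator reduce dynamic range while keeping color appearance consistent across scene and display environments.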

Journal Article
TL;DR: An approach to detect traffic guidance signs and recognise the structure of junction information on them using a tone mapping technique and graph theory, which allows more effective detection in different lighting and environmental conditions than conventional approaches.
Abstract: In this paper we present an approach to detect traffic guidance signs and recognise the structure of junction information on them. The detection algorithm is based on using differently exposed images. These images are combined into one using a tone mapping technique in order to minimize the effects of bad environmental conditions and the low dynamic range of CCD cameras. This technique allows robust sign detection in various lighting conditions. To localize sign candidates, color segmentation is used. To minimize the number of false detections, filtering operations based on geometrical and color properties are applied. The recognition process is based on graph theory. Each sign candidate is decomposed into principal components, and the region which represents the junction structure is mapped into a graph. This graph is checked for possible mapping mistakes. Finally, the graph is analyzed in order to extract all possible paths of junction crossing. These paths must represent the real structure of the junction and correspond to the road law. The proposed method allows more effective detection in difficult lighting and environmental conditions, such as insufficient or excessive lighting, rain, or fog, than conventional approaches.
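The abstract does not detail how the differently exposed images are combined into one; a common per-pixel scheme in this spirit weights each exposure by how well exposed the pixel is in it (the Gaussian weighting below is borrowed from exposure-fusion practice and is an assumption, not the authors' exact method):

```python
import math

def well_exposedness(v, sigma=0.2):
    """Gaussian weight favouring mid-range pixel values in [0, 1]."""
    return math.exp(-((v - 0.5) ** 2) / (2.0 * sigma ** 2))

def fuse_pixel(values):
    """Blend the same pixel taken from differently exposed frames,
    weighting each frame by how well exposed the pixel is in it."""
    weights = [well_exposedness(v) for v in values]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

Under- and over-exposed frames contribute little at each pixel, which is what suppresses the effects of insufficient or excessive lighting before the color segmentation stage runs.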

Proceedings ArticleDOI
08 Dec 2009
TL;DR: A tone-mapping operator built on an existing histogram adjustment technique that incorporates certain characteristics of the human visual system to restrain the extreme contrast enhancement of certain segments and the intensive compression of others associated with histogram equalization based techniques.
Abstract: Tone-mapping operators are used to produce low dynamic range versions of high dynamic range images while preserving as much detail as possible. We propose a tone-mapping operator built on an existing histogram adjustment technique. It incorporates certain characteristics of the human visual system to restrain the extreme contrast enhancement of certain segments, and the intensive compression of others, associated with histogram equalization based techniques. Test results show significant improvement over traditional histogram adjustment. The proposed method also does quite well compared with other state-of-the-art tone-mapping operators.
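The core of a histogram adjustment operator maps scene luminance to display luminance through the cumulative distribution of log luminance. A minimal sketch (using the empirical distribution directly rather than binned counts, and deliberately omitting the human-visual-system ceiling that both the underlying technique and the proposed operator rely on to restrain over-enhancement):

```python
import bisect
import math

def histogram_adjust(lums, Ld_min=1.0, Ld_max=100.0):
    """Map scene luminances to display luminances in [Ld_min, Ld_max]
    through the cumulative distribution of log luminance."""
    logs = sorted(math.log(l) for l in lums)
    ld_lo, ld_hi = math.log(Ld_min), math.log(Ld_max)
    out = []
    for l in lums:
        frac = bisect.bisect_left(logs, math.log(l)) / len(logs)  # CDF value
        out.append(math.exp(ld_lo + frac * (ld_hi - ld_lo)))
    return out
```

Because display log luminance is allocated in proportion to how many pixels fall below each level, densely populated luminance ranges get stretched; the perceptual ceiling the paper adds is precisely what restrains that stretching.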

Proceedings ArticleDOI
08 Jul 2009
TL;DR: Interactive evolution is proposed as a computational tool for tone mapping, and an evolution strategy that blends the results from several tone mapping operators while at the same time adapting their parameters is found to yield promising results with little effort required of the user.
Abstract: Tone mapping is a computational task of significance in the context of displaying high dynamic range images on low dynamic range devices. While a number of tone mapping algorithms have been proposed and are in common use, there is no single operator that yields optimal results under all conditions. Moreover, obtaining satisfactory mappings often requires the manual tweaking of parameters. This paper proposes interactive evolution as a computational tool for tone mapping. An evolution strategy that blends the results from several tone mapping operators while at the same time adapting their parameters is found to yield promising results with little effort required of the user.
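A single generation of such an interactive loop can be sketched as follows (the blend-and-mutate scheme below is a generic evolution strategy over operator blend weights, not the authors' exact genome, and the user's selection is stubbed):

```python
import random

def blend(images, weights):
    """Per-pixel weighted blend of the outputs of several tone mapping
    operators (images are flat pixel lists of equal length, one per operator)."""
    total = sum(weights)
    return [sum(w * img[i] for w, img in zip(weights, images)) / total
            for i in range(len(images[0]))]

def mutate(weights, sigma=0.1):
    """ES-style Gaussian mutation of the blend weights, clamped positive."""
    return [max(1e-6, w + random.gauss(0.0, sigma)) for w in weights]

# One interactive generation: the user would pick the offspring whose
# blended image they prefer; here the choice is a stub.
random.seed(0)
parent = [1.0, 1.0, 1.0]              # equal blend of three operators
offspring = [mutate(parent) for _ in range(4)]
chosen = offspring[0]                  # stand-in for the user's pick
```

Repeating this pick-and-mutate cycle steers both the blend and, in the full method, each operator's own parameters toward the user's preference without any manual parameter tweaking.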