Journal ArticleDOI

Enhanced local tone mapping for detail preserving reproduction of high dynamic range images

TL;DR: Robustness and flexibility to achieve the desired appearance make ELTM suitable for applications where user experience is the primary concern, as is the case with consumer electronics products.
About: This article was published in the Journal of Visual Communication and Image Representation on 2018-05-01 and has received 15 citations to date. The article focuses on the topics: Tone mapping & Robustness (computer science).
Citations
Journal ArticleDOI
TL;DR: A novel TM method based on macro-micro modeling is proposed, which addresses common problems in existing TM methods, such as exposure imbalance and halo artifacts, and is superior to current state-of-the-art TM methods in both subjective and objective evaluations.
Abstract: Tone mapping (TM) aims to adapt high dynamic range (HDR) images to conventional displays with visual information preserved. In this paper, a novel TM method based on macro-micro modeling is proposed, which addresses common problems in existing TM methods, such as exposure imbalance and halo artifacts. From a microscopic perspective, multi-layer decomposition and reconstruction are applied to model the brightness, structure, and detail properties of HDR images, and a different strategy, informed by the human visual system (HVS), is then adopted for each layer to reduce the overall brightness contrast while retaining as much scene information as possible. From a macroscopic perspective, a scene-content-based global operator is designed to adaptively adjust scene brightness so that it is consistent with the subjective perception of human eyes. The micro and macro models are processed in parallel, which ensures the integrity and subjective consistency of scene information. Experiments with numerous HDR images and TM methods show that the proposed method achieves visually compelling results with little exposure imbalance and few halo artifacts, and is superior to current state-of-the-art TM methods in both subjective and objective evaluations.
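
To make the micro-level decomposition concrete, here is a minimal sketch of a generic multi-layer split and recombination, assuming Gaussian smoothing as the layer separator and hypothetical `base_compress` and `detail_gain` parameters; the paper's actual HVS-driven per-layer strategies and its macro-level global operator are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multilayer_decompose(log_lum, sigmas=(2.0, 8.0)):
    """Split log-luminance into brightness, structure, and detail layers
    via progressively coarser smoothing (a generic stand-in for the
    paper's multi-layer decomposition)."""
    fine = gaussian_filter(log_lum, sigmas[0])    # micro detail removed
    coarse = gaussian_filter(log_lum, sigmas[1])  # overall brightness layer
    detail = log_lum - fine                       # micro layer
    structure = fine - coarse                     # mid-scale structure layer
    return coarse, structure, detail

def micro_macro_tonemap(hdr_lum, base_compress=0.4, detail_gain=1.3):
    """Compress the brightness layer, keep structure, boost detail, then
    recombine; parameter values are illustrative, not the authors'."""
    log_l = np.log1p(hdr_lum)
    base, structure, detail = multilayer_decompose(log_l)
    out = np.expm1(base_compress * base + structure + detail_gain * detail)
    return (out - out.min()) / (out.max() - out.min() + 1e-12)
```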

6 citations

Journal ArticleDOI
16 Jun 2021 - Sensors
TL;DR: In this article, a cascaded-architecture-type reproduction method is proposed to simultaneously enhance local details and retain the naturalness of the original global contrast; individual histogram bin widths are first adjusted according to the overall image content.
Abstract: Photographic reproduction and enhancement is challenging because it requires preserving all visual information while compressing the dynamic range of the input image. This paper presents a cascaded-architecture-type reproduction method that can simultaneously enhance local details and retain the naturalness of the original global contrast. In the pre-processing stage, in addition to using a multiscale detail-injection scheme to enhance local details, the Stevens effect is considered to adapt to different luminance levels while compressing the global feature. In the reproduction stage, we propose a modified histogram equalization method in which individual histogram bin widths are first adjusted according to the overall image content. In addition, the human visual system (HVS) is considered so that a luminance-aware threshold can control the maximum permissible width of each bin. The global tone is then adjusted by performing histogram equalization with the modified histogram. Experimental results indicate that the proposed method outperforms five state-of-the-art methods in visual comparisons and several objective image quality evaluations.
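
The bin-limiting idea can be sketched as histogram equalization with a per-bin ceiling and redistribution of the overflow. The `cap_ratio` parameter and the uniform redistribution below are assumptions for illustration; the paper derives its per-bin limit from an HVS-based, luminance-aware threshold.

```python
import numpy as np

def capped_histogram_equalization(lum, bins=256, cap_ratio=0.02):
    """Histogram equalization with a ceiling on each bin's mass, a
    simplified stand-in for the paper's luminance-aware limit on bin
    widths. Expects lum normalized to [0, 1]."""
    hist, _ = np.histogram(lum, bins=bins, range=(0.0, 1.0))
    cap = cap_ratio * lum.size
    clipped = np.minimum(hist, cap)             # limit each bin's influence
    clipped = clipped + (hist.sum() - clipped.sum()) / bins  # redistribute
    cdf = np.cumsum(clipped)
    cdf /= cdf[-1]                              # normalized mapping curve
    idx = np.clip((lum * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx]
```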

5 citations

Journal ArticleDOI
TL;DR: In this paper, a hybrid deep emperor penguin classifier is proposed to accurately classify tone-mapped images for different visualisation applications; a selective deep neural network is trained to predict the quality of a tone-mapped image.
Abstract: One of the main open challenges in visualisation applications such as cathode ray tube (CRT) monitors, liquid-crystal displays (LCD), and organic light-emitting diode (OLED) displays is robustness in high dynamic range (HDR) environments. This is due to imperfections in the sensor and the inability to track interest points successfully because of the brightness constancy assumption in visualisation applications. To address this problem, different tone mapping operators are required for visualising HDR images on standard displays. However, these standard displays have different dynamic ranges, so a new model is needed to find the best-quality tone-mapped image for a specific kind of visualisation application. The authors propose a hybrid deep emperor penguin classifier to accurately classify tone-mapped images for different visualisation applications. Here, a selective deep neural network is trained to predict the quality of a tone-mapped image. Based on this quality, a decision is made as to whether the image is suited to a CRT monitor, an LCD, or an OLED display. The authors evaluate the proposed model on the TMIQD database, and the simulation results show that it outperforms the state-of-the-art image quality assessment methods.
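
The routing step the abstract describes can be sketched as scoring the image once per display type and choosing the best, with the trained network hidden behind a placeholder callable; this interface is purely hypothetical and stands in for the hybrid deep emperor penguin classifier.

```python
import numpy as np

DISPLAYS = ("CRT", "LCD", "OLED")

def route_to_display(image, quality_model):
    """Predict a quality score for each display type and pick the best.
    quality_model(image, display) -> float is a placeholder for the
    trained network described in the abstract."""
    scores = {d: quality_model(image, d) for d in DISPLAYS}
    return max(scores, key=scores.get), scores
```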

4 citations

Journal ArticleDOI
01 Dec 2020
TL;DR: A statistical clustering-based tone mapping technique is proposed that adapts to the local content of an image as well as its color and local structures by exploiting repetition across the whole image.
Abstract: Tone mapping algorithms reproduce high dynamic range (HDR) images on low dynamic range displays such as LCDs, CRTs, projectors, and printers. In this paper, we propose a statistical clustering-based tone mapping technique that adapts to the local content of an image as well as its color. First, the HDR image is partitioned into many overlapping color patches, and each color patch is decomposed into three components: patch mean, color variation, and color structure. Based on the color structure component, the extracted color patches are then grouped into a number of clusters by the k-means clustering technique. For each cluster, a statistical signal processing technique, Hessian multiset canonical correlations (HesMCC), is used to determine the transform matrix. The HesMCC is essentially used to perform dimensionality reduction on the patches and to form effective tone-mapped images. In contrast with current strategies, the proposed clustering-based method better adapts to image color and local structures by exploiting repetition across the whole image. Experimental results show that the running time of the proposed method is about 88.32%, 92%, 68.9%, and 29.4% lower than that of other existing tone mapping methods.
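
A minimal sketch of the patch decomposition and clustering stages, assuming flattened fixed-size patches and scikit-learn's k-means; the per-cluster HesMCC transform that performs the actual dimensionality reduction and tone mapping is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def decompose_patch(patch):
    """Split a flattened color patch into the three components named in
    the abstract: mean, variation (energy of the mean-removed part),
    and unit-norm structure."""
    mean = patch.mean()
    centered = patch - mean
    variation = np.linalg.norm(centered) + 1e-12
    return mean, variation, centered / variation

def cluster_patches(patches, k=8):
    """Group patches by their structure component with k-means."""
    structures = np.stack([decompose_patch(p.ravel())[2] for p in patches])
    return KMeans(n_clusters=k, n_init=10).fit_predict(structures)
```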

3 citations

Journal ArticleDOI
TL;DR: In this article, a logarithm transformation is first performed to increase global brightness. Then, to increase local contrast, a novel gradient domain tone mapping operator is proposed, and a fuzzy enhancement operator is applied to the result obtained by minimizing the new cost function, further increasing contrast and preventing the halo artifacts created by manipulating the gradient domain.
Abstract: X-ray images of complex workpieces often suffer from low brightness and contrast in industrial defect inspection. In this paper, we propose an enhancement framework to improve their visual quality. In detail, a logarithm transformation is first applied to increase global brightness. Then, to increase local contrast, we propose a novel gradient domain tone mapping operator in which we design a new gradient attenuation function based on fuzzy entropy and construct a new cost function constrained by a hybrid regularization. Finally, a fuzzy enhancement operator is applied to the result obtained by minimizing the new cost function, to further increase contrast and prevent the halo artifacts created by manipulating the gradient domain. Experiments show that the proposed method produces good visual results for defect inspection of X-ray images of complex workpieces.
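
For intuition, here is a sketch of generic gradient-domain attenuation in the style of Fattal et al. (2002), with hypothetical `alpha_scale` and `beta` values; the paper's fuzzy-entropy attenuation function and hybrid-regularized cost are not reproduced.

```python
import numpy as np

def attenuate_gradients(lum, alpha_scale=0.1, beta=0.85):
    """Shrink large log-luminance gradients more than small ones, the
    basic gradient-domain compression idea the abstract builds on."""
    log_l = np.log(lum + 1e-6)
    gy, gx = np.gradient(log_l)
    mag = np.hypot(gx, gy) + 1e-12
    alpha = alpha_scale * mag.mean()
    scale = (mag / alpha) ** (beta - 1.0)   # < 1 wherever mag > alpha
    return gx * scale, gy * scale           # attenuated gradient field
```

The attenuated field is then integrated back into an image by minimizing a cost function (classically a Poisson problem), which is the optimization step the abstract refers to.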

2 citations

References
Journal ArticleDOI
TL;DR: The guided filter is a novel explicit image filter derived from a local linear model that can be used as an edge-preserving smoothing operator like the popular bilateral filter, but it has better behaviors near edges.
Abstract: In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc.
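
The algorithm itself is compact: the output is a local linear transform of the guidance image, fitted to the input with box-filtered statistics. A gray-scale version following the paper's Algorithm 1:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """Gray-scale guided filter: q = mean(a) * I + mean(b), where a and
    b are per-window linear coefficients fitted to the input p."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)    # eps sets the edge-preservation scale
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```

With self-guidance (p = I) this acts as an edge-preserving smoother; for HDR compression the output serves as the base layer and p minus the output as the detail layer.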

4,730 citations

Journal ArticleDOI
TL;DR: Despite its simplicity, BRISQUE is shown to be statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and highly competitive with all present-day distortion-generic NR IQA algorithms.
Abstract: We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed blind/referenceless image spatial quality evaluator (BRISQUE) does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of “naturalness” in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features used derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real time applications. BRISQUE features may be used for distortion-identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http://live.ece.utexas.edu/research/quality/BRISQUE_release.zip for public use and evaluation.
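
The locally normalized luminances at BRISQUE's core are the mean-subtracted, contrast-normalized (MSCN) coefficients. A minimal sketch, using a Gaussian window as a stand-in for the paper's 7x7 circularly symmetric weighting; the GGD/AGGD feature fitting and the regression stage are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(gray, sigma=7 / 6, c=1.0):
    """Mean-subtracted contrast-normalized coefficients: subtract the
    local mean and divide by the local standard deviation. c stabilizes
    the division (the paper uses C = 1 on a 0-255 intensity scale)."""
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray * gray, sigma) - mu * mu
    sigma_local = np.sqrt(np.maximum(var, 0.0))
    return (gray - mu) / (sigma_local + c)
```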

3,780 citations

Journal ArticleDOI
TL;DR: The mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects, is described.
Abstract: Sensations of color show a strong correlation with reflectance, even though the amount of visible light reaching the eye depends on the product of reflectance and illumination. The visual system must achieve this remarkable result by a scheme that does not measure flux. Such a scheme is described as the basis of retinex theory. This theory assumes that there are three independent cone systems, each starting with a set of receptors peaking, respectively, in the long-, middle-, and short-wavelength regions of the visible spectrum. Each system forms a separate image of the world in terms of lightness that shows a strong correlation with reflectance within its particular band of wavelengths. These images are not mixed, but rather are compared to generate color sensations. The problem then becomes how the lightness of areas in these separate images can be independent of flux. This article describes the mathematics of a lightness scheme that generates lightness numbers, the biologic correlate of reflectance, independent of the flux from objects.
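
A 1-D sketch of the ratio-threshold-reset rule behind such a lightness scheme: accumulate log ratios along a path, discard sub-threshold ratios attributable to gradual illumination, and reset whenever the running product exceeds unity so the brightest area encountered serves as the reference white. The threshold value is illustrative, and the full scheme averages the result over many paths.

```python
import numpy as np

def path_lightness(intensities, threshold=0.02):
    """Lightness along one path as cumulative, thresholded log ratios,
    reset at the running maximum (values stay <= 0, i.e. relative to
    the brightest area seen so far)."""
    log_ratio = np.diff(np.log(np.asarray(intensities, dtype=float)))
    log_ratio[np.abs(log_ratio) < threshold] = 0.0     # threshold step
    lightness = np.zeros(len(intensities))
    for i, r in enumerate(log_ratio, start=1):
        lightness[i] = min(lightness[i - 1] + r, 0.0)  # reset step
    return lightness
```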

3,480 citations

Proceedings ArticleDOI
01 Jul 2002
TL;DR: The work presented in this paper leverages the time-tested techniques of photographic practice to develop a new tone reproduction operator and uses and extends the techniques developed by Ansel Adams to deal with digital images.
Abstract: A classic photographic task is the mapping of the potentially high dynamic range of real world luminances to the low dynamic range of the photographic print. This tone reproduction problem is also faced by computer graphics practitioners who map digital images to a low dynamic range print or screen. The work presented in this paper leverages the time-tested techniques of photographic practice to develop a new tone reproduction operator. In particular, we use and extend the techniques developed by Ansel Adams to deal with digital images. The resulting algorithm is simple and produces good results for a wide variety of images.
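
The paper's global operator is compact enough to state directly: luminances are scaled by a key value relative to the scene's log-average luminance, then compressed by a sigmoid that burns out luminances above a chosen white point. A minimal version (the local dodging-and-burning variant is omitted):

```python
import numpy as np

def reinhard_global(lum, key=0.18, l_white=None):
    """Global photographic operator: map the scene's log-average
    luminance to middle gray (the key), then compress."""
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))
    scaled = (key / log_avg) * lum
    if l_white is None:
        l_white = scaled.max()   # smallest luminance mapped to pure white
    return scaled * (1.0 + scaled / l_white**2) / (1.0 + scaled)
```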

1,708 citations

Journal ArticleDOI
TL;DR: A practical implementation of the retinex is defined without particular concern for its validity as a model for human lightness and color perception, and the trade-off between rendition and dynamic range compression that is governed by the surround space constant is described.
Abstract: The last version of Land's (1986) retinex model for human vision's lightness and color constancy has been implemented and tested in image processing experiments. Previous research has established the mathematical foundations of Land's retinex but has not subjected his lightness theory to extensive image processing experiments. We have sought to define a practical implementation of the retinex without particular concern for its validity as a model for human lightness and color perception. We describe the trade-off between rendition and dynamic range compression that is governed by the surround space constant. Further, unlike previous results, we find that the placement of the logarithmic function is important and produces best results when placed after the surround formation. Also unlike previous results, we find the best rendition for a "canonical" gain/offset applied after the retinex operation. Various functional forms for the retinex surround are evaluated, and a Gaussian form is found to perform better than the inverse square suggested by Land. Images that violate the gray world assumptions (implicit to this retinex) are investigated to provide insight into cases where this retinex fails to produce a good rendition.
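
The implementation the abstract settles on, a Gaussian surround with the logarithm taken after surround formation and a canonical gain/offset applied afterwards, reduces to a few lines; the parameter values here are illustrative, and the surround space constant `sigma` governs the rendition-versus-compression trade-off the paper describes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(lum, sigma=80.0, gain=1.0, offset=0.0):
    """Center/surround retinex: Gaussian surround, log placed after
    surround formation, then a canonical gain/offset for display."""
    lum = lum + 1e-6
    surround = gaussian_filter(lum, sigma)          # Gaussian surround
    return gain * (np.log(lum) - np.log(surround)) + offset
```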

1,674 citations