
Showing papers on "Tone mapping published in 2019"


Proceedings ArticleDOI
15 Jun 2019
TL;DR: In this paper, the authors propose a technique to "unprocess" images by inverting each step of an image processing pipeline, thereby allowing them to synthesize realistic raw sensor measurements from commonly available Internet photos.
Abstract: Machine learning techniques work best when the data used for training resembles the data used for evaluation. This holds true for learned single-image denoising algorithms, which are applied to real raw camera sensor readings but, due to practical constraints, are often trained on synthetic image data. Though it is understood that generalizing from synthetic to real images requires careful consideration of the noise properties of camera sensors, the other aspects of an image processing pipeline (such as gain, color correction, and tone mapping) are often overlooked, despite their significant effect on how raw measurements are transformed into finished images. To address this, we present a technique to “unprocess” images by inverting each step of an image processing pipeline, thereby allowing us to synthesize realistic raw sensor measurements from commonly available Internet photos. We additionally model the relevant components of an image processing pipeline when evaluating our loss function, which allows training to be aware of all relevant photometric processing that will occur after denoising. By unprocessing and processing training data and model outputs in this way, we are able to train a simple convolutional neural network that has 14%-38% lower error rates and is 9×-18× faster than the previous state of the art on the Darmstadt Noise Dataset, and generalizes to sensors outside of that dataset as well.
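
For intuition, "unprocessing" amounts to running a simplified ISP backwards. A toy sketch in Python/NumPy follows; the tone curve, color correction matrix, and gains below are invented placeholders, not the paper's learned or calibrated values:

```python
import numpy as np

def unprocess(srgb):
    """Run a simplified ISP backwards, in the spirit of the paper.
    The tone curve, CCM, and gains are invented placeholders."""
    x = np.clip(srgb, 0.0, 1.0)
    # 1. Undo display gamma (approximated here as a pure 2.2 power law).
    x = x ** 2.2
    # 2. Undo a global smoothstep-style tone curve by numerical inversion.
    t = np.linspace(0.0, 1.0, 1024)
    tone = 3.0 * t**2 - 2.0 * t**3            # forward curve (monotone on [0, 1])
    x = np.interp(x, tone, t)
    # 3. Undo a color-correction matrix (rows sum to 1, as real CCMs do).
    ccm = np.array([[ 1.7, -0.5, -0.2],
                    [-0.3,  1.6, -0.3],
                    [-0.1, -0.6,  1.7]])
    x = x @ np.linalg.inv(ccm).T
    # 4. Undo per-channel white-balance/digital gains.
    gains = np.array([2.0, 1.0, 1.7])          # R, G, B
    return np.clip(x / gains, 0.0, 1.0)        # synthetic "raw" RGB
```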

369 citations


Journal ArticleDOI
TL;DR: This paper describes a system for capturing clean, sharp, colorful photographs in light as low as 0.3 lux, where human vision becomes monochromatic and indistinct, and employs "motion metering", which uses an estimate of motion magnitudes to identify the number of frames and the per-frame exposure times that together minimize both noise and motion blur in a captured burst.
Abstract: Taking photographs in low light using a mobile phone is challenging and rarely produces pleasing results. Aside from the physical limits imposed by read noise and photon shot noise, these cameras are typically handheld, have small apertures and sensors, use mass-produced analog electronics that cannot easily be cooled, and are commonly used to photograph subjects that move, like children and pets. In this paper we describe a system for capturing clean, sharp, colorful photographs in light as low as 0.3 lux, where human vision becomes monochromatic and indistinct. To permit handheld photography without flash illumination, we capture, align, and combine multiple frames. Our system employs "motion metering", which uses an estimate of motion magnitudes (whether due to handshake or moving objects) to identify the number of frames and the per-frame exposure times that together minimize both noise and motion blur in a captured burst. We combine these frames using robust alignment and merging techniques that are specialized for high-noise imagery. To ensure accurate colors in such low light, we employ a learning-based auto white balancing algorithm. To prevent the photographs from looking like they were shot in daylight, we use tone mapping techniques inspired by illusionistic painting: increasing contrast, crushing shadows to black, and surrounding the scene with darkness. All of these processes are performed using the limited computational resources of a mobile device. Our system can be used by novice photographers to produce shareable pictures in a few seconds based on a single shutter press, even in environments so dim that humans cannot see clearly.
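
The painterly tone mapping step can be caricatured in a few lines. A toy sketch on a normalized luminance channel (parameter values are guesses; the real system operates on merged burst data with local tone mapping):

```python
import numpy as np

def night_tone_map(lum, gamma=0.7, contrast=1.4, crush=0.02):
    """Painterly low-light look: lift midtones, boost contrast,
    and crush near-black shadows fully to black."""
    x = np.clip(lum, 0.0, 1.0) ** gamma       # brighten midtones
    x = 0.5 + contrast * (x - 0.5)            # global contrast increase
    x = (x - crush) / (1.0 - crush)           # crush shadows to black
    return np.clip(x, 0.0, 1.0)
```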

78 citations


Journal ArticleDOI
TL;DR: A BLInd QUality Evaluator for Tone-Mapped Images (BLIQUE-TMI) is proposed to blindly predict the quality of tone-mapped images without accessing the corresponding HDR versions, enabling a fair comparison of different TMOs.
Abstract: High dynamic range (HDR) images, which have a powerful capacity to represent the wide dynamic range of real-world scenes, have been receiving attention from both academic and industrial communities. Although HDR imaging devices have become prevalent, the display devices for HDR images are still limited. To facilitate the visualization of HDR images in standard low dynamic range displays, many different tone mapping operators (TMOs) have been developed. To create a fair comparison of different TMOs, this paper proposes a BLInd QUality Evaluator to blindly predict the quality of Tone-Mapped Images (BLIQUE-TMI) without accessing the corresponding HDR versions. BLIQUE-TMI measures the quality of TMIs by considering the following aspects: 1) visual information; 2) local structure; and 3) naturalness. To be specific, quality-aware features related to the former two aspects are extracted in a local manner based on sparse representation, while quality-aware features related to the third aspect are derived based on global statistics modeling in both intensity and color domains. All the extracted local and global quality-aware features constitute a final feature vector. An emerging machine learning technique, i.e., the extreme learning machine, is adopted to learn a quality predictor from feature space to quality space. The superiority of BLIQUE-TMI to several leading blind IQA metrics is well demonstrated on two benchmark databases.
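
The final regression stage, an extreme learning machine, is in its basic form just a random hidden layer followed by closed-form least-squares output weights. A minimal sketch (hidden-layer size and activation are assumptions, not the paper's settings):

```python
import numpy as np

def elm_fit(X, y, n_hidden=500, seed=0):
    """Extreme learning machine: a random hidden layer followed by
    closed-form least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                         # random feature expansion
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights, no iteration
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```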

56 citations


Journal ArticleDOI
TL;DR: DeepTMO, as discussed by the authors, is a conditional generative adversarial network (cGAN) that not only learns to adapt to vast scenic content (e.g., outdoor, indoor, human, structures, etc.) but also tackles HDR-related scene-specific challenges such as contrast and brightness, while preserving fine-grained details.
Abstract: A computationally fast tone mapping operator (TMO) that can quickly adapt to a wide spectrum of high dynamic range (HDR) content is quintessential for visualization on varied low dynamic range (LDR) output devices such as movie screens or standard displays. Existing TMOs can successfully tone-map only a limited range of HDR content and require extensive parameter tuning to yield the best subjective-quality tone-mapped output. In this paper, we address this problem by proposing a fast, parameter-free and scene-adaptable deep tone mapping operator (DeepTMO) that yields a high-resolution and high subjective-quality tone-mapped output. Based on a conditional generative adversarial network (cGAN), DeepTMO not only learns to adapt to vast scenic content (e.g., outdoor, indoor, human, structures, etc.) but also tackles the HDR-related scene-specific challenges such as contrast and brightness, while preserving the fine-grained details. We explore 4 possible combinations of Generator-Discriminator architectural designs to specifically address some prominent issues in HDR-related deep-learning frameworks like blurring, tiling patterns and saturation artifacts. By exploring different influences of scales, loss-functions and normalization layers under a cGAN setting, we conclude with adopting a multi-scale model for our task. To further leverage the large-scale availability of unlabeled HDR data, we train our network by generating targets using an objective HDR quality metric, namely the Tone Mapping Image Quality Index (TMQI). We demonstrate results both quantitatively and qualitatively, and showcase that our DeepTMO generates high-resolution, high-quality output images over a large spectrum of real-world scenes. Finally, we evaluate the perceived quality of our results by conducting a pair-wise subjective study which confirms the versatility of our method.

39 citations


Patent
16 Jul 2019
TL;DR: In this paper, a display method is presented for displaying, on a display device, video of video data in which the luminance of the video is defined by a first EOTF indicating a correlation between HDR luminance and code values.
Abstract: Provided is a display method of displaying, on a display device, video of video data where the luminance of the video is defined by a first EOTF indicating a correlation of HDR luminance and code values. The method includes: acquiring the video data; performing, regarding each of multiple pixels making up the video in the acquired video data, first determining of whether the luminance of that pixel exceeds a first predetermined luminance; performing, regarding each of the multiple pixels, dual tone mapping in which the luminance of that pixel is reduced by a different scheme depending on whether the first determining found the luminance of the pixel to exceed the first predetermined luminance or to be equal to or lower than it; and displaying the video on the display device using the results of the dual tone mapping.
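
Read as an algorithm, the claim describes a threshold test followed by two luminance-compression branches. A toy sketch of that two-branch idea (the nit values, knee shape, and passthrough branch below are all invented for illustration):

```python
import numpy as np

def dual_tone_map(lum, threshold=800.0, scene_peak=4000.0, display_peak=1000.0):
    """Two-branch mapping: highlights above `threshold` get a linear knee
    into the display range; everything else passes through unchanged."""
    out = np.asarray(lum, dtype=np.float64).copy()
    over = out > threshold                        # the "first determining" step
    out[over] = threshold + (display_peak - threshold) * \
        (out[over] - threshold) / (scene_peak - threshold)
    return out                                    # continuous at the threshold
```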

39 citations


Journal ArticleDOI
TL;DR: A two-step framework, consisting of a luminance-invariant guidance model based on a support vector regressor (SVR) to optimally adapt the tone mapping function for image matching; and an energy maximization model to generate appropriate training samples for learning the SVR, is proposed.
Abstract: In this paper, we propose a new framework to optimally tone map the high dynamic range (HDR) content for image matching under drastic illumination variations. Since tone mapping operators (TMO) have traditionally been used for displaying HDR scenes, their design is suboptimal when used for computer vision tasks, such as image matching. We address this suboptimality by proposing a two-step framework, consisting of: first, a luminance-invariant guidance model based on a support vector regressor (SVR) to optimally adapt the tone mapping function for image matching; and second, an energy maximization model to generate appropriate training samples for learning the SVR. At each step, we collectively address both stages of keypoint detection and descriptor extraction in the feature matching framework. By locally altering the intrinsic characteristics of the tone mapping function, the learned guidance model facilitates the extraction of local invariant features in the presence of illumination variations. We demonstrate that the proposed TMO significantly outperforms perceptually driven state-of-the-art TMOs on a dataset of HDR scenes characterized by challenging lighting variations, such as day/night transitions.

37 citations


Journal ArticleDOI
TL;DR: Experiments and comparisons to other methods show that geometry details are better preserved in the scenario with high compression ratios, and structures are clearly preserved without shape distortion and interference from details.
Abstract: Bas-relief is characterized by its unique presentation of intrinsic shape properties and/or detailed appearance using materials raised up in different degrees above a background. However, many bas-relief modeling methods cannot manipulate scene details well. We propose a simple and effective solution for two kinds of bas-relief modeling (i.e., structure-preserving and detail-preserving), which differs from prior tone-mapping-like methods. Our idea originates from an observation on typical 3D models, which are decomposed into a piecewise smooth base layer and a detail layer in the normal field. Proper manipulation of the two layers contributes to both structure-preserving and detail-preserving bas-relief modeling. We solve the modeling problem in a discrete geometry processing setup that uses normal-based mesh processing as a theoretical foundation. Specifically, using the two-step mesh smoothing mechanism as a bridge, we transfer the bas-relief modeling problem into a discrete space, and solve it in a least-squares manner. Experiments and comparisons to other methods show that (i) geometry details are better preserved in scenarios with high compression ratios, and (ii) structures are clearly preserved without shape distortion and interference from details.

36 citations


Journal ArticleDOI
Ting Luo, Gangyi Jiang, Mei Yu, Haiyong Xu, Wei Gao
TL;DR: The experimental results show that the proposed method can efficiently resist different TMOs and common image attacks, outperforming other existing HDR image watermarking methods.

25 citations


Journal ArticleDOI
TL;DR: The experimental results show that the HDR images predicted by the proposed iTM-Net have higher quality than those predicted by conventional inverse tone mapping methods, including the state of the art, in terms of both HDR-VDP-2.2 and PU encoding + MS-SSIM.
Abstract: In this paper, we propose a novel inverse tone mapping network, called “iTM-Net.” For training iTM-Net, we also propose a novel loss function that considers the non-linear relation between low dynamic range (LDR) and high dynamic range (HDR) images. For inverse tone mapping with convolutional neural networks (CNNs), we first point out that training CNNs with a standard loss function causes a problem due to the non-linear relation between the LDR and HDR images. To overcome the problem, the novel loss function non-linearly tone-maps target HDR images into LDR ones on the basis of a tone mapping operator, and the distance between the tone-mapped images and the predicted ones is then calculated. The proposed loss function enables us not only to normalize the HDR images but also to reduce the non-linear relation between LDR and HDR ones. The experimental results show that the HDR images predicted by the proposed iTM-Net have higher quality than those predicted by conventional inverse tone mapping methods, including the state of the art, in terms of both HDR-VDP-2.2 and PU encoding + MS-SSIM. In addition, compared with loss functions that do not consider the non-linear relation, the proposed loss function is shown to improve the performance of CNNs.
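
The key trick, comparing tone-mapped values rather than raw HDR values, can be sketched in a few lines. The mu-law curve below is a common choice in HDR learning work and stands in for the paper's tone mapping operator (an assumption, not their exact formula); HDR values are assumed pre-normalized to [0, 1]:

```python
import numpy as np

MU = 5000.0

def tone_mapped_mse(pred_hdr, target_hdr):
    """Compare tone-mapped values instead of raw HDR values, so bright
    outliers do not dominate the loss."""
    def mu_law(h):
        h = np.clip(h, 0.0, 1.0)                 # HDR assumed pre-normalized
        return np.log1p(MU * h) / np.log1p(MU)
    return np.mean((mu_law(pred_hdr) - mu_law(target_hdr)) ** 2)
```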

22 citations


Journal ArticleDOI
TL;DR: A blind quality index based on luminance partition for tone-mapped images is presented and shown to outperform several state-of-the-art metrics in extensive experiments conducted on two publicly available databases.

22 citations


Proceedings ArticleDOI
01 Dec 2019
TL;DR: A tone mapping network (TMNet) operating in Hue-Saturation-Value (HSV) color space is proposed to obtain better luminance and color mapping, and achieves better performance in both subjective and objective evaluations.
Abstract: Tone mapping operators can convert high dynamic range (HDR) images to low dynamic range (LDR) images so that we can enjoy the informative contents of HDR images with LDR devices. However, current state-of-the-art tone mapping algorithms mainly focus on the luminance mapping while neglecting the color component. Meanwhile, they often suffer from halo artifacts and over-enhancement. In this paper, we propose a tone mapping network (TMNet) in Hue-Saturation-Value (HSV) color space to obtain better luminance and color mapping. We adopt the improved Wasserstein generative adversarial network (WGAN-GP) as the basic architecture and further introduce several improvements. A meticulously designed loss function is adopted to push the tone-mapped image toward the natural image manifold. Moreover, we create a tone-mapped image dataset in which the label images are manually adjusted by photographers. Compared with some state-of-the-art tone mapping methods, the proposed method achieves better performance in both subjective and objective evaluations.

Proceedings Article
01 Jan 2019
TL;DR: A new end-to-end tone mapping approach based on Deep Convolutional Adversarial Networks (DCGANs) is introduced along with a data augmentation technique, and shown to improve upon the latest state-of-the-art on benchmarking datasets.
Abstract: Low-light images require localised processing to enhance details, contrast and lighten dark regions without affecting the appearance of the entire image. A range of tone mapping techniques have been developed to achieve this, with the latest state-of-the-art methods leveraging deep learning. In this work, a new end-to-end tone mapping approach based on Deep Convolutional Adversarial Networks (DCGANs) is introduced along with a data augmentation technique, and shown to improve upon the latest state-of-the-art on benchmarking datasets. We carry out comparisons using the MIT-Adobe FiveK (MIT5K) and the LOL datasets, as they provide benchmark training and testing data, which is further enriched with data augmentation techniques to increase diversity and robustness. A U-net is used in the generator and a patch-GAN in the discriminator, while a perceptually-relevant loss function based on VGG is used in the generator. The results are visually pleasing, and shown to improve upon the state-of-the-art Deep Retinex, Deep Photo Enhancer and GLADNet on the most widely used benchmark datasets MIT5K and LOL, without additional computational requirements.

Journal ArticleDOI
TL;DR: A 3D color homography model is proposed which approximates a photo-realistic color transfer algorithm as a combination of a 3D perspective transform and a mean intensity mapping.
Abstract: Color transfer is an image editing process that naturally transfers the color theme of a source image to a target image. In this paper, we propose a 3D color homography model which approximates a photo-realistic color transfer algorithm as a combination of a 3D perspective transform and a mean intensity mapping. A key advantage of our approach is that the re-coded color transfer algorithm is simple and accurate. Our evaluation demonstrates that our 3D color homography model delivers leading color transfer re-coding performance. In addition, we also show that our 3D color homography model can be applied to color transfer artifact fixing, complex color transfer acceleration, and color-robust image stitching.
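
As a rough illustration of the decomposition, the sketch below fits a plain 3x3 linear color map by least squares (simplifying away the perspective/homogeneous part of the paper's model) plus a sorted lookup table for the mean intensity mapping; all names are placeholders:

```python
import numpy as np

def fit_color_transfer(src, dst):
    """src, dst: (N, 3) corresponding RGB colors in [0, 1].
    Fits a 3x3 linear color map by least squares, then a 1D lookup
    for the mean intensity mapping on what the linear map misses."""
    A, *_ = np.linalg.lstsq(src, dst, rcond=None)     # src @ A ~= dst
    mapped = src @ A
    order = np.argsort(mapped.mean(axis=1))
    xp = mapped.mean(axis=1)[order]                   # mapped intensities
    fp = dst.mean(axis=1)[order]                      # target intensities
    return A, xp, fp

# Apply: out = img @ A, then rescale each pixel's intensity
# using np.interp(out.mean(axis=-1), xp, fp).
```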

Journal ArticleDOI
TL;DR: A real-time hardware implementation of the exponent-based tone mapping algorithm of Horé et al. is presented, which uses both local and global image information to improve the contrast and increase the brightness of tone-mapped images.
Abstract: In this paper, we present a real-time hardware implementation of an exponent-based tone mapping algorithm of Horé et al., which uses both local and global image information for improving the contrast and increasing the brightness of tone-mapped images. Although there are several tone mapping algorithms available in the literature, most of them require manual tuning of their rendering parameters. However, in our implementation, the algorithm has an embedded automatic key parameter estimation block that controls the brightness of the tone-mapped images. We also present the implementation of a Gaussian-based halo-reducing filter. The hardware implementation is described in Verilog and synthesized for a field programmable gate array device. Experimental results performed on different wide dynamic range images show that we are able to get images which are of good visual quality and have good brightness and contrast. The good performance of our hardware architecture is also confirmed quantitatively with the high peak signal-to-noise ratio and structural similarity index.
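
For orientation, a global exponent-based operator with an automatically estimated key can be as simple as the following sketch. This is the generic exponential TMO family, not necessarily Horé et al.'s exact formulation, and it omits the local term and the halo-reducing filter:

```python
import numpy as np

def exponent_tone_map(wdr_lum, eps=1e-6):
    """Global exponential operator with an automatically estimated key:
    the log-average luminance sets the curve's scale."""
    L = np.asarray(wdr_lum, dtype=np.float64)
    key = np.exp(np.mean(np.log(L + eps)))    # automatic "key" of the scene
    return 1.0 - np.exp(-L / key)             # output in [0, 1)
```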

Patent
19 Jul 2019
TL;DR: In this paper, an image processing method and device, a storage medium and electronic equipment are presented. The method comprises the steps of: obtaining multiple frames of raw images and a first composite image; identifying a face region and a target overexposure region in the first composite image; and obtaining brightness relations between the target overexposure area and the face area in the multiple frames of raw images.
Abstract: The embodiment of the invention discloses an image processing method and device, a storage medium and electronic equipment. The method comprises the steps of: obtaining multiple frames of raw images and a first composite image; identifying a face region and a target overexposure region in the first composite image; obtaining brightness relations between the target overexposure area and the face area in the multiple frames of raw images; determining the expected brightness of the target overexposure area according to the brightness relations, wherein the expected brightness comprises the expected brightness of each pixel point in the target overexposure area; generating a first tone mapping operator corresponding to the target overexposure area according to the current brightness of the target overexposure area in the first composite image and the expected brightness; and, according to the preset tone mapping operator and the first tone mapping operator, carrying out tone mapping processing on the first composite image to generate a second composite image. This eliminates the visible brightness difference between the target overexposure area and the face in the HDR image.

Journal ArticleDOI
TL;DR: Simulations give good results, in terms of both visual quality and the TMQI metric, compared with existing competitive TM approaches.

Journal ArticleDOI
Haonan Su, Cheolkon Jung, Lu Wang, Shuyao Wang, Yuanjia Du
01 Jan 2019-Displays
TL;DR: Experimental results demonstrate that the proposed adaptive tone mapping for display enhancement under ambient light significantly enhances the brightness, contrast, details, and colors of displayed images and outperforms other state-of-the-art methods under ambient light.

Proceedings ArticleDOI
01 Sep 2019
TL;DR: A novel tone mapping operator is introduced that fuses multiple versions of a single HDR input obtained by clipping and normalizing its intensity based on a complete set of disjoint intervals, and is designed to offer a good rendering of local structures.
Abstract: This paper introduces a novel tone mapping operator, designed to offer a good rendering of the local structures. The new operator fuses multiple versions of a single HDR input obtained by clipping and normalizing its intensity based on a complete set of disjoint intervals. Defining the weight map associated with each version to be its clipping-interval indicator function promotes contrast enhancement, but induces artifacts when neighboring pixels belong to distinct intervals. We thus propose to smooth out the indicators across neighboring pixels with similar intensity, using a standard cross-bilateral filter. With such weight maps, the fusion operator becomes equivalent to applying histogram equalization on the image regions over which the cross-bilateral filter diffuses the indicators, and is therefore referred to as the Bilateral Histogram Equalization (BHE) operator. It compares favorably to previous tone mapping algorithms.
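
A minimal sketch of the fusion scheme follows, with quantile-based intervals and a plain Gaussian standing in for the cross-bilateral smoothing of the indicators (both simplifications, so this shows the flavor of BHE rather than the operator itself):

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # stand-in for the cross-bilateral filter

def bhe_like_tone_map(hdr_lum, n_intervals=8, sigma=5.0):
    """Fuse per-interval clipped-and-normalized versions of the input,
    weighting each by a smoothed interval-indicator map."""
    L = np.asarray(hdr_lum, dtype=np.float64)
    edges = np.quantile(L, np.linspace(0.0, 1.0, n_intervals + 1))
    edges[-1] += 1e-9                          # make the top interval inclusive
    out = np.zeros_like(L)
    for lo, hi in zip(edges[:-1], edges[1:]):
        version = np.clip((L - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
        weight = gaussian_filter(((L >= lo) & (L < hi)).astype(float), sigma)
        out += weight * version                # weights sum to ~1 everywhere
    return out
```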

Journal ArticleDOI
TL;DR: This paper proposes the first binocular tone mapping operator to more effectively distribute visual content to an LDR pair, leveraging the great representability and interpretability of deep convolutional neural networks.
Abstract: Binocular tone mapping has been studied in previous works to generate a fusible pair of LDR images in order to convey more visual content than a single LDR image. However, the existing methods are all based on monocular tone mapping operators. This greatly restricts the preservation of local details and global contrast in a binocular LDR pair. In this paper, we propose the first binocular tone mapping operator to more effectively distribute visual content to an LDR pair, leveraging the great representability and interpretability of deep convolutional neural networks. Based on the existing binocular perception models, novel loss functions are also proposed to optimize the output pairs in terms of local details, global contrast, content distribution, and binocular fusibility. Our method is validated with a qualitative and quantitative evaluation, as well as a user study. Statistics show that our method outperforms the state-of-the-art binocular tone mapping frameworks in terms of both visual quality and time performance.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: The proposed hue compensation method is demonstrated not only to produce images with small hue degradation but also to maintain well-mapped luminance, in terms of three criteria: TMQI, hue value in CIEDE2000, and the maximally saturated color on the constant-hue plane.
Abstract: We propose a novel JPEG XT image compression with hue compensation for two-layer HDR coding. LDR images produced from JPEG XT bitstreams have some distortion in hue due to tone mapping operations. In order to suppress the color distortion, we apply a novel hue compensation method based on the maximally saturated colors. Moreover, the bitstreams generated by using the proposed method are fully compatible with the JPEG XT standard. In an experiment, the proposed method is demonstrated not only to produce images with small hue degradation but also to maintain well-mapped luminance, in terms of three criteria: TMQI, hue value in CIEDE2000, and the maximally saturated color on the constant-hue plane.

Journal ArticleDOI
TL;DR: A new tone mapping function for contrast enhancement is proposed using an optimization approach subject to constraints that the output image be enhanced naturally and noticeably; it outperforms other contrast enhancement methods in terms of both objective and subjective criteria.
Abstract: Conventional contrast enhancement methods, including global and local enhancements, produce enhanced images with some limitations. Global contrast enhancement does not take the local characteristics into consideration, and therefore, the enhancement performance could be limited. On the other hand, a local contrast enhancement method achieves a noticeable improvement, but it generates unnatural improvement results compared with the input image. Due to the complementary characteristics of these two methods, it is hard to achieve remarkable contrast enhancement without visual artifacts. To overcome the limitations, we propose a new tone mapping function for contrast enhancement using an optimization approach that is subject to constraints such as the output image needing to be enhanced naturally and noticeably. Since contrast enhancement without artificiality is possible when the enhancement process mimics the human eye, we model the human visual perception system, and then, the model is incorporated into the proposed tone mapping function. Consequently, the contrast of the image is adaptively enhanced according to a region that is more attractive to a person. The experimental results demonstrate that the proposed algorithm outperforms other contrast enhancement methods in terms of both objective and subjective criteria.

Patent
16 Apr 2019
TL;DR: A multi-exposure image fusion method is proposed which comprises: inputting an original image; performing feature analysis to obtain its exposure type; adjusting the exposure value of a camera simulation curve function according to that type; generating k exposure images with different exposure degrees; computing a brightness-mean weight, a saturation weight, and a contrast weight for each exposure image; combining these into a fusion weight per image; and weighting and fusing the k exposure images accordingly.
Abstract: The invention provides a multi-exposure image fusion method. The multi-exposure image fusion method comprises the following steps: inputting an original image; performing feature analysis on the original image; obtaining an exposure type of the original image; adjusting the exposure value of a camera simulation curve function according to the exposure type of the original image; generating k exposure images with different exposure degrees; respectively calculating a brightness mean value weight, a saturation weight and a contrast weight of each exposure image; calculating a fusion weight of each exposure image from its brightness mean value weight, saturation weight and contrast weight; and weighting and fusing the k exposure images with different exposure degrees according to the fusion weight of each exposure image. The fused image is thus obtained, and tone mapping is carried out on it to obtain the target image. This solves the problem that traditional multi-exposure image fusion methods leave the target image grayish overall and low in contrast. The local contrast of the target image is improved, the color of the target image is enhanced, and the target image presents more details.
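
The weight construction reads like classic exposure fusion. A naive per-pixel sketch (the exact weight definitions and their combination are guesses, since the claim does not specify them, and a production version would fuse with pyramids):

```python
import numpy as np

def fusion_weights(img):
    """Per-pixel weight from brightness, saturation, and contrast cues."""
    gray = img.mean(axis=2)
    well_exposed = np.exp(-((gray - 0.5) ** 2) / (2 * 0.2 ** 2))
    saturation = img.std(axis=2)              # spread across R, G, B
    gy, gx = np.gradient(gray)
    contrast = np.hypot(gx, gy)               # local edge strength
    return well_exposed * (saturation + 1e-6) * (contrast + 1e-6)

def fuse(images):
    """Naive weighted average of k differently exposed images (no pyramid)."""
    ws = np.stack([fusion_weights(im) for im in images])
    ws /= ws.sum(axis=0, keepdims=True) + 1e-9
    return (ws[..., None] * np.stack(images)).sum(axis=0)
```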

Journal ArticleDOI
TL;DR: This paper suggests clustering-based outlier detection and cluster-center initialization schemes for tone mapping, using a modified K-object clustering algorithm to cluster the data sets together with a density-based multi-level data suppression algorithm.
Abstract: High dynamic range (HDR) imaging captures and displays a wide range of luminance levels, yet most devices in use have a limited dynamic range. Clustering generally plays an important role in tone mapping; it groups data according to similar properties. Outlier detection and cluster-center initialization are major problems in clustering: outlier detection is used to identify and remove abnormal data from a database, where an outlier is a value lying outside the boundary of the sample data. In this paper, we suggest clustering-based outlier detection and cluster-center initialization schemes for tone mapping, which use a modified K-object clustering algorithm to cluster the data sets. A density-based multi-level data suppression (DBMSDC) algorithm is employed, and the initial cluster centers calculated using DBMSDC are found to be very close to the desired cluster centers. Outliers are detected using a weight-based center approach and removed before the modified K-object clustering runs. Test results show that the proposed methods reach more advanced and efficient solutions than state-of-the-art tone mapping protocols.

Journal ArticleDOI
TL;DR: A novel TM method based on macro-micro modeling is proposed, which addresses common problems in existing TM methods, such as exposure imbalance and halo artifacts, and is superior to the current state-of-the-art TM methods in both subjective and objective evaluations.
Abstract: Tone mapping (TM) aims to adapt high dynamic range (HDR) images to conventional displays with visual information preserved. In this paper, a novel TM method based on macro-micro modeling is proposed, which can address common problems in existing TM methods, such as exposure imbalance and halo artifacts. From a microscopic perspective, multi-layer decomposition and reconstruction are applied to model the properties of brightness, structure, and detail for HDR images, and then different strategies informed by the human visual system (HVS) are adopted for each layer to reduce the overall brightness contrast and retain as much scene information as possible. From a macroscopic perspective, a scene content-based global operator is designed to adaptively adjust the scene brightness so that it is consistent with the subjective perception of human eyes. Both the micro and macro models are processed in parallel, which ensures the integrity and subjective consistency of scene information. Experiments with numerous HDR images and TM methods are conducted, and the results show that the proposed method achieves visually compelling results with little exposure imbalance and few halo artifacts, and is superior to the current state-of-the-art TM methods in both subjective and objective evaluations.
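
As a two-layer caricature of the micro side (the paper uses a multi-layer, HVS-driven decomposition; the sigma and compression factor here are invented), base/detail separation in the log-luminance domain looks like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def base_detail_tone_map(log_lum, sigma=16.0, base_compress=0.4):
    """Compress the smooth base (macro brightness), keep the detail layer
    (micro structure), and reassemble in the log domain."""
    base = gaussian_filter(log_lum, sigma)
    detail = log_lum - base
    return base_compress * base + detail      # exponentiate afterwards
```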

Journal ArticleDOI
TL;DR: This paper contributes a hybrid vision enhancement (HVE) system for night vision sources that does not compromise hardware implementation performance, and shows that the proposed HVE system is more efficient than existing systems in terms of hardware utilization, maximum clock frequency, and power.
Abstract: Night vision systems have become a critical component of modern warfare; the ability to see in nighttime conditions enables military maneuvers and gives a potential advantage to the forces equipped with this technology. These night vision systems rely on the very low light levels of night sky illumination to help image the targeted scene and its surroundings. Many research works have been undertaken to overcome issues in hardware implementation. In this paper, we contribute a hybrid vision enhancement (HVE) system for night vision sources that does not affect the performance of the hardware implementation. The proposed hybrid system consists of two algorithms: optimized tone mapping (OTM) and adaptive gamma correction (AGC). Hybrid systems are normally not area-efficient; here we modify the tone mapping algorithm with an optimized filter design on an exponential basis. The differential evolution optimization algorithm is used to enhance the filter design. The proposed HVE system is implemented in Verilog and synthesized with different FPGA device families in the Xilinx tool. Simulation results show that our proposed HVE system is able to enhance wide dynamic range (WDR) images to good visual quality. The synthesis results show that our proposed HVE system performs more efficiently than the existing system in terms of hardware utilization, maximum clock frequency, and power.
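
Of the two algorithms, adaptive gamma correction is the simpler to illustrate. A software sketch (the mean-to-gamma rule below is invented; a hardware version would use fixed-point arithmetic and LUTs):

```python
import numpy as np

def adaptive_gamma(lum):
    """Pick gamma from the frame's mean brightness so darker frames
    are lifted more; brighter frames are darkened slightly."""
    x = np.clip(lum, 0.0, 1.0)
    gamma = 1.0 / np.clip(-np.log2(max(float(x.mean()), 1e-6)), 0.5, 3.0)
    return x ** gamma                          # mean 0.5 -> gamma 1 (identity)
```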

Posted Content
TL;DR: In this paper, a JPEG XT image compression with hue compensation for two-layer HDR coding is proposed, which is based on the maximally saturated color on the constant-hue plane.
Abstract: We propose a novel JPEG XT image compression with hue compensation for two-layer HDR coding. LDR images produced from JPEG XT bitstreams have some distortion in hue due to tone mapping operations. In order to suppress the color distortion, we apply a novel hue compensation method based on the maximally saturated colors. Moreover, the bitstreams generated by using the proposed method are fully compatible with the JPEG XT standard. In an experiment, the proposed method is demonstrated not only to produce images with small hue degradation but also to maintain well-mapped luminance, in terms of three criteria: TMQI, hue value in CIEDE2000, and the maximally saturated color on the constant-hue plane.

Proceedings ArticleDOI
28 Jul 2019
TL;DR: A clustering-based contrast enhancement technique is presented for computed tomography (CT) images, which shows promising results in terms of sharpness and uniformity on CT images.
Abstract: Modern medical science strongly depends on imaging technologies for accurate diagnosis and treatment planning. Raw medical images generally require post-processing - like edge and contrast enhancement, and noise removal - for visualization. In this paper, a clustering-based contrast enhancement technique is presented for computed tomography (CT) images.

Proceedings ArticleDOI
Min Zhao, Liquan Shen, Mingxing Jiang, Linru Zheng, Ping An
01 Sep 2019
TL;DR: A novel no-reference image quality assessment (IQA) model to evaluate the perceptual quality of tone-mapped images (TMIs) is proposed, which shows admirable performance when tested on ESPL-LIVE HDR image database.
Abstract: Research on tone mapping operators (TMOs) has attracted more attention recently, as they can transform high dynamic range (HDR) images to low dynamic range (LDR) images for visualization on common displays. In this paper, we propose a novel no-reference image quality assessment (IQA) model to evaluate the perceptual quality of tone-mapped images (TMIs). Specifically, local phase congruency (LPC) is first computed to evaluate the image sharpness, and some statistical characteristics are extracted from the edge maps to measure the halo effect. Meanwhile, TMIs are transformed to opponent color (OC) space to capture the global image chromaticity and local image contrast in the chromatic field. Finally, a regression module based on support vector regression (SVR) is trained to map all the features to subjective quality scores. The model shows admirable performance when tested on the ESPL-LIVE HDR image database.
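
The last stage is standard: once the sharpness, halo, and chromatic features are stacked into one vector per image, an off-the-shelf SVR can learn the mapping to subjective scores. A minimal sketch with scikit-learn (hyperparameters are defaults, not the paper's; the array names are placeholders):

```python
from sklearn.svm import SVR

def train_quality_regressor(features, mos):
    """features: (n_images, n_features) stacked LPC/halo/chromatic features;
    mos: subjective quality scores for the training images."""
    return SVR(kernel="rbf").fit(features, mos)

# predicted_scores = train_quality_regressor(X_train, y_train).predict(X_test)
```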

Journal ArticleDOI
Mei Yu, Wang Yang, Gangyi Jiang, Yongqiang Bai, Ting Luo
TL;DR: A highly robust HDR image watermarking algorithm based on Tucker decomposition is proposed, which resists existing TM attacks while maintaining good image imperceptibility and embedding capacity.
Abstract: High dynamic range (HDR) imaging has received much attention in recent years for its abundant details and wide dynamic range of luminance. In order to display HDR images on low dynamic range display devices, a tone mapping (TM) process is required. However, TM can be deemed an inevitable attack in the field of HDR image copyright protection. In this paper, a highly robust HDR image watermarking algorithm based on Tucker decomposition is proposed. Firstly, Tucker decomposition is carried out on the HDR color image to obtain the first feature map of the core tensor, which contains most of the energy in the HDR host image. Then, an Auto-Regressive prediction method is used to establish a local correlation model of the first feature map, so that the watermark can be embedded in the first feature map according to the prediction result and its true value. Furthermore, a low-complexity luminance masking approach is designed based on a modified specular-free map to exclude the areas not suitable for watermark embedding, and an embedding intensity selection strategy is used to balance the imperceptibility and robustness of the watermark. The experimental results show that the proposed algorithm is robust against existing TM attacks while keeping better image imperceptibility and embedding capacity.
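
To make the embedding site concrete, here is a sketch of extracting the first feature map using the tensorly library (an assumed dependency; the rank choice of full spatial resolution with three channel components is also an assumption, and real use would be followed by the AR prediction and masking steps):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def first_feature_map(hdr_rgb):
    """Tucker-decompose an HxWx3 image and return the first frontal slice
    of the core tensor, which carries most of the image energy."""
    h, w, _ = hdr_rgb.shape
    core, factors = tucker(tl.tensor(hdr_rgb), rank=[h, w, 3])
    return np.asarray(core)[:, :, 0], factors
```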

Patent
06 Jun 2019
TL;DR: In this paper, a local pixel neighborhood metric is determined for at least one raw pixel in a region-of-interest, which identifies on one or more raw pixels near the at least 1 raw pixel.
Abstract: Embodiments related to local tone mapping for symbol reading. A local pixel neighborhood metric is determined for at least one raw pixel in a region-of-interest, which identifies on one or more raw pixels near the at least one raw pixel. A local mapping function is determined for the at least one raw pixel that maps the value of the raw pixel to a mapped pixel value with a mapped bit depth that is smaller than the bit depth associated with the raw image. The local mapping function is based on a value of at least one other raw pixel near the at least one raw pixel within the local pixel neighborhood metric, and at least one parameter determined based on the raw image. A mapped image is computed for the region-of-interest by applying the local mapping function to the raw image.
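
A toy reading of the claim, building a per-pixel mapping function from local neighborhood statistics and emitting a smaller bit depth, might look like this (the min-max stretch and window size are invented; the patent leaves the mapping function and parameters open):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def local_tone_map(raw, neighborhood=31, out_bits=8):
    """Map each raw pixel with a function built from its local neighborhood:
    here, a per-pixel min-max stretch down to a smaller bit depth."""
    raw = np.asarray(raw, dtype=np.float64)
    lo = minimum_filter(raw, size=neighborhood)
    hi = maximum_filter(raw, size=neighborhood)
    stretched = (raw - lo) / np.maximum(hi - lo, 1.0)
    return np.round(stretched * (2 ** out_bits - 1)).astype(np.uint8)
```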