SciSpace (formerly Typeset)
Author

Hojatollah Yeganeh

Bio: Hojatollah Yeganeh is an academic researcher from the University of Waterloo. The author has contributed to research in the topics of image quality and tone mapping. The author has an h-index of 14 and has co-authored 30 publications receiving 1,203 citations. Previous affiliations of Hojatollah Yeganeh include Amirkabir University of Technology and Sharif University of Technology.

Papers
Journal ArticleDOI
TL;DR: An objective quality assessment algorithm for tone-mapped images is proposed by combining: 1) a multiscale signal fidelity measure on the basis of a modified structural similarity index and 2) a naturalness measure on the basis of intensity statistics of natural images.
Abstract: Tone-mapping operators (TMOs) that convert high dynamic range (HDR) to low dynamic range (LDR) images provide practically useful tools for the visualization of HDR images on standard LDR displays. Different TMOs create different tone-mapped images, and a natural question is which one has the best quality. Without an appropriate quality measure, different TMOs cannot be compared, and further improvement is directionless. Subjective rating may be a reliable evaluation method, but it is expensive and time consuming, and more importantly, is difficult to be embedded into optimization frameworks. Here we propose an objective quality assessment algorithm for tone-mapped images by combining: 1) a multiscale signal fidelity measure on the basis of a modified structural similarity index and 2) a naturalness measure on the basis of intensity statistics of natural images. Validations using independent subject-rated image databases show good correlations between subjective ranking score and the proposed tone-mapped image quality index (TMQI). Furthermore, we demonstrate the extended applications of TMQI using two examples - parameter tuning for TMOs and adaptive fusion of multiple tone-mapped images.
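Sketched below, for illustration only, is how a two-component index of this kind can be pooled in Python. The weighted-power form follows the description above, but the weight a and the exponents alpha and beta are placeholder values rather than the published TMQI parameters, and the fidelity and naturalness scores are assumed to be computed elsewhere.

```python
import numpy as np

def tmqi_style_score(structural_fidelity: float, naturalness: float,
                     a: float = 0.8, alpha: float = 0.3, beta: float = 0.7) -> float:
    """Combine a structural fidelity score S and a naturalness score N
    (both assumed to lie in [0, 1]) into a single quality index.

    The weighted power form mirrors the description in the abstract;
    the constants here are illustrative, not the values from the paper.
    """
    S = np.clip(structural_fidelity, 0.0, 1.0)
    N = np.clip(naturalness, 0.0, 1.0)
    return a * S**alpha + (1.0 - a) * N**beta

# Example: a tone-mapped image with high fidelity but modest naturalness.
print(tmqi_style_score(0.92, 0.55))
```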

525 citations

Journal ArticleDOI
TL;DR: Validations based on four publicly available databases show that the proposed patch-based contrast quality index (PCQI) method provides accurate predictions on the human perception of contrast variations.
Abstract: Contrast is a fundamental attribute of images that plays an important role in human visual perception of image quality. With numerous approaches proposed to enhance image contrast, much less work has been dedicated to automatic quality assessment of contrast-changed images. Existing approaches rely on global statistics to estimate contrast quality. Here we propose a novel local patch-based objective quality assessment method using an adaptive representation of local patch structure, which allows us to decompose any image patch into its mean intensity, signal strength, and signal structure components and then evaluate their perceptual distortions in different ways. A unique feature that differentiates the proposed method from previous contrast quality models is the capability to produce a local contrast quality map, which predicts local quality variations over space and may be employed to guide contrast enhancement algorithms. Validations based on four publicly available databases show that the proposed patch-based contrast quality index (PCQI) method provides accurate predictions on the human perception of contrast variations.
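The decomposition at the heart of this method can be sketched in a few lines. The snippet below shows only the patch decomposition step (not the full PCQI comparison); the patch size and the epsilon guard are arbitrary choices.

```python
import numpy as np

def decompose_patch(patch: np.ndarray, eps: float = 1e-8):
    """Split an image patch into mean intensity, signal strength,
    and a unit-energy structure component, so that
    patch ~= mean + strength * structure."""
    mean_intensity = patch.mean()
    residual = patch - mean_intensity
    strength = np.linalg.norm(residual)          # signal strength
    structure = residual / (strength + eps)      # unit-norm structure
    return mean_intensity, strength, structure

# Round-trip check on a random 11x11 patch.
p = np.random.rand(11, 11)
m, s, u = decompose_patch(p)
print(np.allclose(p, m + s * u))  # True
```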

270 citations

Journal ArticleDOI
TL;DR: A gradient ascent-based algorithm, which starts from any initial point in the space of all possible images and iteratively moves towards the direction that improves MEF-SSIM until convergence, and the final high quality fused image appears to have little dependence on the initial image.
Abstract: We propose a multi-exposure image fusion (MEF) algorithm by optimizing a novel objective quality measure, namely the color MEF structural similarity (MEF-SSIM_c) index. The design philosophy we introduce here is substantially different from existing ones. Instead of pre-defining a systematic computational structure for MEF (e.g., multiresolution transformation and transform domain fusion followed by image reconstruction), we directly operate in the space of all images, searching for the image that optimizes MEF-SSIM_c. Specifically, we first construct the MEF-SSIM_c index by improving upon and expanding the application scope of the existing MEF-SSIM algorithm. We then describe a gradient ascent-based algorithm, which starts from any initial point in the space of all possible images and iteratively moves towards the direction that improves MEF-SSIM_c until convergence. Numerical and subjective experiments demonstrate that the proposed algorithm consistently produces better quality fused images both visually and in terms of MEF-SSIM_c. The final high-quality fused image appears to have little dependence on the initial image. The proposed optimization framework is readily extensible to construct better MEF algorithms when better objective quality models for MEF are available.
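A minimal sketch of this kind of search in image space is shown below, assuming a differentiable stand-in for MEF-SSIM_c implemented in PyTorch; the step size, iteration budget, and clamping are illustrative choices, not the paper's settings.

```python
import torch

def fuse_by_quality_ascent(score_fn, init_image: torch.Tensor,
                           step: float = 1e-2, iters: int = 200) -> torch.Tensor:
    """Search the space of images directly: start from an initial fused
    image and take gradient-ascent steps on a quality score until the
    iteration budget is exhausted. `score_fn` is assumed to map an
    image tensor to a scalar quality value (a stand-in for MEF-SSIM_c)."""
    x = init_image.clone().detach().requires_grad_(True)
    for _ in range(iters):
        q = score_fn(x)
        q.backward()
        with torch.no_grad():
            x += step * x.grad   # move towards higher quality
            x.clamp_(0.0, 1.0)   # keep a valid image
            x.grad.zero_()
    return x.detach()
```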

151 citations

Journal ArticleDOI
TL;DR: Numerical and subjective experiments demonstrate that the proposed algorithm consistently produces better quality tone mapped images even when the initial images of the iteration are created by the most competitive TMOs.
Abstract: Tone mapping operators (TMOs) aim to compress high dynamic range (HDR) images to low dynamic range (LDR) ones so as to visualize HDR images on standard displays. Most existing TMOs were demonstrated on specific examples without being thoroughly evaluated using well-designed and subject-validated image quality assessment models. A recently proposed tone mapped image quality index (TMQI) made one of the first attempts at objective quality assessment of tone mapped images. Here, we propose a substantially different approach to TMO design. Instead of using any predefined systematic computational structure for tone mapping (such as analytic image transformations and/or explicit contrast/edge enhancement), we directly navigate in the space of all images, searching for the image that optimizes an improved TMQI. In particular, we first improve the two building blocks in TMQI, the structural fidelity and statistical naturalness components, leading to a TMQI-II metric. We then propose an iterative algorithm that alternately improves the structural fidelity and statistical naturalness of the resulting image. Numerical and subjective experiments demonstrate that the proposed algorithm consistently produces better quality tone mapped images even when the initial images of the iteration are created by the most competitive TMOs. Meanwhile, these results also validate the superiority of TMQI-II over TMQI. (Partial preliminary results of this work were presented at ICASSP 2013 and ICME 2014.)
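The alternating structure described above can be summarized as a short control-flow skeleton; the two update callables below are placeholders, not the paper's actual structural fidelity and statistical naturalness updates.

```python
def optimize_tone_mapping(ldr_init, hdr, improve_fidelity, improve_naturalness,
                          rounds: int = 10):
    """Alternately apply a structural-fidelity update (towards the HDR source)
    and a statistical-naturalness update (intensity redistribution), as the
    abstract describes. Both update callables are placeholders."""
    ldr = ldr_init
    for _ in range(rounds):
        ldr = improve_fidelity(ldr, hdr)      # e.g. a gradient step on the fidelity term
        ldr = improve_naturalness(ldr)        # e.g. a point-wise intensity remapping
    return ldr
```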

133 citations

Proceedings ArticleDOI
13 May 2008
TL;DR: This paper presents a novel algorithm for contrast enhancement based on histogram equalization (HE), which gives better results than the bi-histogram equalization (BHE) algorithm according to both a visual criterion and a mathematical criterion.
Abstract: Histogram-based techniques are among the important digital image processing techniques that can be used for image enhancement. One of their advantages is the simplicity of implementation; they are also much less expensive than other methods. Histogram-based image enhancement mostly relies on equalizing the histogram of the image and increasing its dynamic range. The histogram equalization (HE) method has two main disadvantages that affect its efficiency, and techniques such as bi-histogram equalization (BHE) have been proposed to address them; BHE is one of the best algorithms proposed so far. This paper presents a novel algorithm for contrast enhancement based on histogram equalization (HE). Our proposed algorithm applies some preprocessing steps to the histogram of the image and then applies histogram equalization. We have applied the proposed algorithm to a database of 220 normal images and the results are promising. Our method gives better results than the bi-histogram equalization (BHE) algorithm according to both a visual criterion and a mathematical criterion.
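Because the abstract does not specify the preprocessing applied to the histogram, the sketch below uses simple histogram clipping as a stand-in step before standard equalization; the 8-bit assumption and the clip limit are illustrative.

```python
import numpy as np

def clipped_histogram_equalization(img: np.ndarray, clip_frac: float = 0.01) -> np.ndarray:
    """Equalize an 8-bit grayscale image after clipping histogram peaks.
    The clipping step is a stand-in for the (unspecified) preprocessing
    in the abstract; excess counts are redistributed uniformly."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    clip = clip_frac * img.size
    excess = np.maximum(hist - clip, 0).sum()
    hist = np.minimum(hist, clip) + excess / 256.0     # redistribute clipped mass
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)       # mapping from old to new intensity
    return lut[img]
```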

108 citations


Cited by
01 Jan 2016
Remote Sensing and Image Interpretation

1,802 citations

Journal ArticleDOI
TL;DR: This paper proposes to use a convolutional neural network (CNN) to train a SICE enhancer, and builds a large-scale multi-exposure image data set, which contains 589 elaborately selected high-resolution multi-exposure sequences with 4,413 images.
Abstract: Due to poor lighting conditions and the limited dynamic range of digital imaging devices, recorded images are often under-/over-exposed and have low contrast. Most previous single image contrast enhancement (SICE) methods adjust the tone curve to correct the contrast of an input image. Those methods, however, often fail to reveal image details because of the limited information in a single image. On the other hand, the SICE task can be better accomplished if we can learn extra information from appropriately collected training data. In this paper, we propose to use a convolutional neural network (CNN) to train a SICE enhancer. One key issue is how to construct a training data set of low-contrast and high-contrast image pairs for end-to-end CNN learning. To this end, we build a large-scale multi-exposure image data set, which contains 589 elaborately selected high-resolution multi-exposure sequences with 4,413 images. Thirteen representative multi-exposure image fusion and stack-based high dynamic range imaging algorithms are employed to generate the contrast-enhanced images for each sequence, and subjective experiments are conducted to screen the best quality one as the reference image of each scene. With the constructed data set, a CNN can be easily trained as the SICE enhancer to improve the contrast of an under-/over-exposed image. Experimental results demonstrate the advantages of our method over existing SICE methods by a significant margin.
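A deliberately small PyTorch sketch of such an image-to-image enhancer and one supervised training step is shown below; the architecture, loss, and training details are assumptions for illustration and do not reproduce the network described in the paper.

```python
import torch
import torch.nn as nn

class TinySICEEnhancer(nn.Module):
    """A toy image-to-image CNN standing in for the SICE enhancer:
    three conv layers mapping an RGB image to an enhanced RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, low, ref, optimizer, loss_fn=nn.L1Loss()):
    """One supervised step on a (low-contrast, reference) pair,
    mirroring the end-to-end training described in the abstract."""
    optimizer.zero_grad()
    loss = loss_fn(model(low), ref)
    loss.backward()
    optimizer.step()
    return loss.item()
```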

632 citations

Journal ArticleDOI
TL;DR: This work introduces an effective technique to enhance the images captured underwater and degraded due to the medium scattering and absorption by building on the blending of two images that are directly derived from a color-compensated and white-balanced version of the original degraded image.
Abstract: We introduce an effective technique to enhance images captured underwater and degraded due to the medium scattering and absorption. Our method is a single image approach that does not require specialized hardware or knowledge about the underwater conditions or scene structure. It builds on the blending of two images that are directly derived from a color-compensated and white-balanced version of the original degraded image. The two images to be fused, as well as their associated weight maps, are defined to promote the transfer of edges and color contrast to the output image. To prevent sharp weight map transitions from creating artifacts in the low frequency components of the reconstructed image, we also adopt a multiscale fusion strategy. Our extensive qualitative and quantitative evaluation reveals that our enhanced images and videos are characterized by better exposedness of the dark regions, improved global contrast, and edge sharpness. Our validation also proves that our algorithm is reasonably independent of the camera settings, and improves the accuracy of several image processing applications, such as image segmentation and keypoint matching.
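A naive, single-scale version of the blending step can be sketched as follows; the white balancing, the derivation of the two inputs, and the multiscale (pyramid) fusion that the paper uses to avoid artifacts are omitted, and the Gaussian smoothing of the weight maps is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_two_inputs(input1, input2, weight1, weight2, sigma: float = 5.0):
    """Naive single-scale stand-in for the fusion described in the abstract:
    smooth and normalize the two weight maps (H x W), then take the per-pixel
    weighted average of the two derived inputs (H x W x 3 float arrays)."""
    w1 = gaussian_filter(weight1, sigma)   # soften weight transitions
    w2 = gaussian_filter(weight2, sigma)
    total = w1 + w2 + 1e-8                 # avoid division by zero
    w1, w2 = w1 / total, w2 / total
    return w1[..., None] * input1 + w2[..., None] * input2
```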

601 citations

Journal ArticleDOI
TL;DR: This paper proposes a novel objective image quality assessment (IQA) algorithm for MEF images based on the principle of the structural similarity approach and a novel measure of patch structural consistency and shows that the proposed model well correlates with subjective judgments and significantly outperforms the existing IQA models for general image fusion.
Abstract: Multi-exposure image fusion (MEF) is considered an effective quality enhancement technique widely adopted in consumer electronics, but little work has been dedicated to the perceptual quality assessment of multi-exposure fused images. In this paper, we first build an MEF database and carry out a subjective user study to evaluate the quality of images generated by different MEF algorithms. There are several useful findings. First, considerable agreement has been observed among human subjects on the quality of MEF images. Second, no single state-of-the-art MEF algorithm produces the best quality for all test images. Third, the existing objective quality models for general image fusion are very limited in predicting perceived quality of MEF images. Motivated by the lack of appropriate objective models, we propose a novel objective image quality assessment (IQA) algorithm for MEF images based on the principle of the structural similarity approach and a novel measure of patch structural consistency. Our experimental results on the subjective database show that the proposed model well correlates with subjective judgments and significantly outperforms the existing IQA models for general image fusion. Finally, we demonstrate the potential application of the proposed model by automatically tuning the parameters of MEF algorithms. (The subjective database and the MATLAB code of the proposed model will be made available online. Preliminary results of Section III were presented at the 6th International Workshop on Quality of Multimedia Experience, Singapore, 2014.)
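One patch-level comparison of the kind described here might be sketched as follows; choosing the strongest source patch as the "desired" structure is an assumed simplification, not the paper's consistency measure.

```python
import numpy as np

def patch_structure_score(fused_patch, source_patches, C: float = 1e-4):
    """Compare the structure of a fused patch with a 'desired' structure
    taken from the exposure stack. Here the source patch with the largest
    signal strength is used as the desired structure (a simplification);
    the score is an SSIM-style normalized correlation of the residuals."""
    desired = max(source_patches,
                  key=lambda p: np.linalg.norm(p - p.mean()))
    f = fused_patch - fused_patch.mean()
    d = desired - desired.mean()
    return (2 * np.sum(f * d) + C) / (np.sum(f * f) + np.sum(d * d) + C)
```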

530 citations
